linux-2.6-microblaze.git
2 years ago  xdp: Move the rxq_info.mem clearing to unreg_mem_model()
Jakub Kicinski [Fri, 25 Jun 2021 22:16:12 +0000 (15:16 -0700)]
xdp: Move the rxq_info.mem clearing to unreg_mem_model()

xdp_rxq_info_unreg() implicitly calls xdp_rxq_info_unreg_mem_model().
This may well be confusing to driver authors and can lead to a double free
if they call xdp_rxq_info_unreg_mem_model() before xdp_rxq_info_unreg()
(when mem model type == MEM_TYPE_PAGE_POOL).

In fact, the error path of mvpp2_rxq_init() currently appears to do exactly that.

The double free results in a refcount underflow in page_pool_destroy().
Make the interface a little more programmer-friendly by clearing the type
and id so that xdp_rxq_info_unreg_mem_model() can be called multiple times.
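
As a rough illustration (the struct and the teardown function below are
hypothetical, not taken from a real driver), a teardown path like the
following used to double-free the page pool and is now safe:

  #include <net/xdp.h>

  struct example_rxq {                    /* hypothetical driver RX queue */
          struct xdp_rxq_info xdp_rxq;
          /* ... driver-private fields ... */
  };

  static void example_rxq_teardown(struct example_rxq *rxq)
  {
          /* Explicit unreg of the memory model, as e.g. the mvpp2 error path
           * does. After this patch it also clears mem.type and mem.id.
           */
          xdp_rxq_info_unreg_mem_model(&rxq->xdp_rxq);

          /* ... release driver-private RX resources ... */

          /* Implicitly calls xdp_rxq_info_unreg_mem_model() again; with the
           * cleared type/id this is now a no-op instead of a second
           * page_pool_destroy() and the resulting refcount underflow.
           */
          xdp_rxq_info_unreg(&rxq->xdp_rxq);
  }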

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210625221612.2637086-1-kuba@kernel.org
2 years ago  bpf: Fix false positive kmemleak report in bpf_ringbuf_area_alloc()
Rustam Kovhaev [Sat, 26 Jun 2021 18:11:56 +0000 (11:11 -0700)]
bpf: Fix false positive kmemleak report in bpf_ringbuf_area_alloc()

kmemleak scans struct page, but it does not scan the page content. If we
allocate some memory with kmalloc(), then allocate a page with alloc_page(),
and put the kmalloc pointer somewhere inside that page, kmemleak will
report the kmalloc'ed object as a leak (a false positive).

We could instruct kmemleak to scan that memory area by calling kmemleak_alloc()
and kmemleak_free(), but part of struct bpf_ringbuf is mmapped to user space,
and if struct bpf_ringbuf changes we would have to revisit and review the size
argument to kmemleak_alloc(), because we do not want kmemleak to scan the
user-space memory. Let's simplify things and use kmemleak_not_leak() here.
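
A minimal sketch of the pattern (the function is illustrative, not the
actual bpf_ringbuf code): the kmalloc'ed pages array ends up referenced
only from memory that kmemleak does not scan, so kmemleak_not_leak()
suppresses the report without asking kmemleak to scan the partially
user-mapped area.

  #include <linux/gfp.h>
  #include <linux/kmemleak.h>
  #include <linux/slab.h>

  static struct page **alloc_tracked_pages(int nr_pages)
  {
          struct page **pages;
          int i;

          pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
          if (!pages)
                  return NULL;

          /* The only long-lived reference to 'pages' will be stored in memory
           * that kmemleak does not scan, so tell kmemleak it is not a leak.
           */
          kmemleak_not_leak(pages);

          for (i = 0; i < nr_pages; i++) {
                  pages[i] = alloc_page(GFP_KERNEL);
                  if (!pages[i])
                          goto err;
          }
          return pages;

  err:
          while (i--)
                  __free_page(pages[i]);
          kfree(pages);
          return NULL;
  }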

For posterity, also adding additional prior analysis from Andrii:

  I think either kmemleak or syzbot are misreporting this. I've added a
  bunch of printks around all allocations performed by BPF ringbuf. [...]
  On repro side I get these two warnings:

  [vmuser@archvm bpf]$ sudo ./repro
  BUG: memory leak
  unreferenced object 0xffff88810d538c00 (size 64):
    comm "repro", pid 2140, jiffies 4294692933 (age 14.540s)
    hex dump (first 32 bytes):
      00 af 19 04 00 ea ff ff c0 ae 19 04 00 ea ff ff  ................
      80 ae 19 04 00 ea ff ff c0 29 2e 04 00 ea ff ff  .........)......
    backtrace:
      [<0000000077bfbfbd>] __bpf_map_area_alloc+0x31/0xc0
      [<00000000587fa522>] ringbuf_map_alloc.cold.4+0x48/0x218
      [<0000000044d49e96>] __do_sys_bpf+0x359/0x1d90
      [<00000000f601d565>] do_syscall_64+0x2d/0x40
      [<0000000043d3112a>] entry_SYSCALL_64_after_hwframe+0x44/0xae

  BUG: memory leak
  unreferenced object 0xffff88810d538c80 (size 64):
    comm "repro", pid 2143, jiffies 4294699025 (age 8.448s)
    hex dump (first 32 bytes):
      80 aa 19 04 00 ea ff ff 00 ab 19 04 00 ea ff ff  ................
      c0 ab 19 04 00 ea ff ff 80 44 28 04 00 ea ff ff  .........D(.....
    backtrace:
      [<0000000077bfbfbd>] __bpf_map_area_alloc+0x31/0xc0
      [<00000000587fa522>] ringbuf_map_alloc.cold.4+0x48/0x218
      [<0000000044d49e96>] __do_sys_bpf+0x359/0x1d90
      [<00000000f601d565>] do_syscall_64+0x2d/0x40
      [<0000000043d3112a>] entry_SYSCALL_64_after_hwframe+0x44/0xae

  Note that both reported leaks (ffff88810d538c80 and ffff88810d538c00)
  correspond to pages array bpf_ringbuf is allocating and tracking properly
  internally. Note also that syzbot repro doesn't close FD of created BPF
  ringbufs, and even when ./repro itself exits with error, there are still
  two forked processes hanging around in my system. So clearly ringbuf maps
  are alive at that point. So reporting any memory leak looks weird at that
  point, because that memory is being used by active referenced BPF ringbuf.

  It's also a question why repro doesn't clean up its forks. But if I do a
  `pkill repro`, I do see that all the allocated memory is /properly/ cleaned
  up [and the] "leaks" are deallocated properly.

  BTW, if I add close() right after bpf() syscall in syzbot repro, I see that
  everything is immediately deallocated, like designed. And no memory leak
  is reported. So I don't think the problem is anywhere in bpf_ringbuf code,
  rather in the leak detection and/or repro itself.

Reported-by: syzbot+5d895828587f49e7fe9b@syzkaller.appspotmail.com
Signed-off-by: Rustam Kovhaev <rkovhaev@gmail.com>
[ Daniel: also included analysis from Andrii to the commit log ]
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: syzbot+5d895828587f49e7fe9b@syzkaller.appspotmail.com
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/CAEf4BzYk+dqs+jwu6VKXP-RttcTEGFe+ySTGWT9CRNkagDiJVA@mail.gmail.com
Link: https://lore.kernel.org/lkml/YNTAqiE7CWJhOK2M@nuc10
Link: https://lore.kernel.org/lkml/20210615101515.GC26027@arm.com
Link: https://syzkaller.appspot.com/bug?extid=5d895828587f49e7fe9b
Link: https://lore.kernel.org/bpf/20210626181156.1873604-1-rkovhaev@gmail.com
2 years ago  bpf: Allow bpf_get_current_ancestor_cgroup_id for tracing
Namhyung Kim [Sun, 27 Jun 2021 15:36:27 +0000 (08:36 -0700)]
bpf: Allow bpf_get_current_ancestor_cgroup_id for tracing

Allow the helper to be called from tracing programs. This is needed to
handle cgroup hierarchies in the program.
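
For illustration, a small tracing program using the helper could look like
the sketch below; the tracepoint attach point and the ancestor level 2 are
arbitrary choices, and a cgroup v2 hierarchy is assumed.

  // SPDX-License-Identifier: GPL-2.0
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("tracepoint/syscalls/sys_enter_openat")
  int trace_openat(void *ctx)
  {
          /* id of the level-2 ancestor of the current task's cgroup */
          __u64 cg = bpf_get_current_ancestor_cgroup_id(2);

          bpf_printk("openat from ancestor cgroup %llu", cg);
          return 0;
  }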

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210627153627.824198-1-namhyung@kernel.org
2 years ago  bpf, x86: Fix extable offset calculation
Ravi Bangoria [Tue, 22 Jun 2021 11:00:26 +0000 (16:30 +0530)]
bpf, x86: Fix extable offset calculation

Commit 4c5de127598e1 ("bpf: Emit explicit NULL pointer checks for PROBE_LDX
instructions.") emits a couple of instructions before the actual load.
Take those additional instructions into account when calculating the
extable offset.

Fixes: 4c5de127598e1 ("bpf: Emit explicit NULL pointer checks for PROBE_LDX instructions.")
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210622110026.1157847-1-ravi.bangoria@linux.ibm.com
2 years ago  bpfilter: Specify the log level for the kmsg message
Gary Lin [Wed, 23 Jun 2021 04:09:18 +0000 (12:09 +0800)]
bpfilter: Specify the log level for the kmsg message

Per the kmsg documentation [0], if we don't specify the log level with a
"<N>" prefix in the message string, the default log level is applied to
the message. Since the default level could be warning (4), this would make
log utilities such as journalctl treat the message "Started bpfilter" as a
warning. To avoid confusion, this commit adds the "<5>" prefix so that the
message is always a notice.

  [0] https://www.kernel.org/doc/Documentation/ABI/testing/dev-kmsg
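
An illustrative userspace sketch of the convention (not the bpfilter UMH
code itself): writing the line to /dev/kmsg with a "<5>" prefix marks it
as a notice (KERN_NOTICE).

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  static void kmsg_notice(const char *msg)
  {
          int fd = open("/dev/kmsg", O_WRONLY);

          if (fd < 0)
                  return;
          /* "<5>" == notice; without it the default level (often warning) applies */
          dprintf(fd, "<5>bpfilter: %s\n", msg);
          close(fd);
  }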

Fixes: 36c4357c63f3 ("net: bpfilter: print umh messages to /dev/kmsg")
Reported-by: Martin Loviska <mloviska@suse.com>
Signed-off-by: Gary Lin <glin@suse.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Dmitrii Banshchikov <me@ubique.spb.ru>
Link: https://lore.kernel.org/bpf/20210623040918.8683-1-glin@suse.com
2 years ago  ti: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:09 +0000 (18:06 +0200)]
ti: Remove rcu_read_lock() around XDP program invocation

The cpsw driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small. This
turns out to be harmless because it all happens in a single NAPI poll
cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock()
misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.
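
Schematically, the pattern this patch (and the following driver patches)
changes looks roughly like the sketch below; the function is hypothetical,
not the actual cpsw code. The whole sequence runs inside one NAPI poll,
i.e. under local_bh_disable(), which is what really keeps the redirect
state alive until xdp_do_flush().

  #include <linux/filter.h>
  #include <linux/netdevice.h>
  #include <net/xdp.h>

  static u32 example_run_xdp(struct bpf_prog *prog, struct xdp_buff *xdp,
                             struct net_device *dev)
  {
          u32 act;

          /* before: rcu_read_lock() here, rcu_read_unlock() after the switch */
          act = bpf_prog_run_xdp(prog, xdp);
          switch (act) {
          case XDP_REDIRECT:
                  /* the map/device references must stay valid until
                   * xdp_do_flush(), i.e. well past the old rcu_read_unlock()
                   */
                  xdp_do_redirect(dev, xdp, prog);
                  break;
          default:
                  break;
          }
          return act;
  }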

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: linux-omap@vger.kernel.org
Link: https://lore.kernel.org/bpf/20210624160609.292325-20-toke@redhat.com
2 years ago  stmmac: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:08 +0000 (18:06 +0200)]
stmmac: Remove rcu_read_lock() around XDP program invocation

The stmmac driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small. This
turns out to be harmless because it all happens in a single NAPI poll
cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock()
misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Jose Abreu <joabreu@synopsys.com>
Link: https://lore.kernel.org/bpf/20210624160609.292325-19-toke@redhat.com
2 years ago  netsec: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:07 +0000 (18:06 +0200)]
netsec: Remove rcu_read_lock() around XDP program invocation

The netsec driver has a rcu_read_lock()/rcu_read_unlock() pair around the
full RX loop, covering everything up to and including xdp_do_flush(). This
is actually the correct behaviour, but because it all happens in a single
NAPI poll cycle (and thus under local_bh_disable()), it is also technically
redundant.

With the addition of RCU annotations to the XDP_REDIRECT map types that
take bh execution into account, lockdep even understands this to be safe,
so there's really no reason to keep the rcu_read_lock() around anymore;
let's just remove it.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Jassi Brar <jaswinder.singh@linaro.org>
Link: https://lore.kernel.org/bpf/20210624160609.292325-18-toke@redhat.com
2 years ago  sfc: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:06 +0000 (18:06 +0200)]
sfc: Remove rcu_read_lock() around XDP program invocation

The sfc driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small. This
turns out to be harmless because it all happens in a single NAPI poll
cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock()
misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Edward Cree <ecree.xilinx@gmail.com>
Cc: Martin Habets <habetsm.xilinx@gmail.com>
Link: https://lore.kernel.org/bpf/20210624160609.292325-17-toke@redhat.com
2 years ago  qede: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:05 +0000 (18:06 +0200)]
qede: Remove rcu_read_lock() around XDP program invocation

The qede driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small. This
turns out to be harmless because it all happens in a single NAPI poll
cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock()
misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Ariel Elior <aelior@marvell.com>
Cc: gr-everest-linux-l2@marvell.com
Link: https://lore.kernel.org/bpf/20210624160609.292325-16-toke@redhat.com
2 years ago  nfp: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:04 +0000 (18:06 +0200)]
nfp: Remove rcu_read_lock() around XDP program invocation

The nfp driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small.

While this is not actually an issue for the nfp driver because it doesn't
support XDP_REDIRECT (and thus doesn't call xdp_do_flush()), the
rcu_read_lock() is still unneeded. And with the addition of RCU annotations
to the XDP_REDIRECT map types that take bh execution into account, lockdep
even understands this to be safe, so there's really no reason to keep it
around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Cc: oss-drivers@netronome.com
Link: https://lore.kernel.org/bpf/20210624160609.292325-15-toke@redhat.com
2 years ago  mlx4: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:03 +0000 (18:06 +0200)]
mlx4: Remove rcu_read_lock() around XDP program invocation

The mlx4 driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small. This
turns out to be harmless because it all happens in a single NAPI poll
cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock()
misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around. Also switch the RCU
dereferences in the driver loop itself to the _bh variants.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/bpf/20210624160609.292325-14-toke@redhat.com
2 years ago  marvell: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:02 +0000 (18:06 +0200)]
marvell: Remove rcu_read_lock() around XDP program invocation

The mvneta and mvpp2 drivers have rcu_read_lock()/rcu_read_unlock() pairs
around XDP program invocations. However, the actual lifetime of the objects
referred to by the XDP program invocation is longer, all the way through to
the call to xdp_do_flush(), making the scope of the rcu_read_lock() too
small. This turns out to be harmless because it all happens in a single
NAPI poll cycle (and thus under local_bh_disable()), but it makes the
rcu_read_lock() misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Marcin Wojtas <mw@semihalf.com>
Link: https://lore.kernel.org/bpf/20210624160609.292325-13-toke@redhat.com
2 years ago  intel: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:01 +0000 (18:06 +0200)]
intel: Remove rcu_read_lock() around XDP program invocation

The Intel drivers all have rcu_read_lock()/rcu_read_unlock() pairs around
XDP program invocations. However, the actual lifetime of the objects
referred to by the XDP program invocation is longer, all the way through to
the call to xdp_do_flush(), making the scope of the rcu_read_lock() too
small. This turns out to be harmless because it all happens in a single
NAPI poll cycle (and thus under local_bh_disable()), but it makes the
rcu_read_lock() misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> # i40e
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: intel-wired-lan@lists.osuosl.org
Link: https://lore.kernel.org/bpf/20210624160609.292325-12-toke@redhat.com
2 years ago  freescale: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:06:00 +0000 (18:06 +0200)]
freescale: Remove rcu_read_lock() around XDP program invocation

The dpaa and dpaa2 drivers have rcu_read_lock()/rcu_read_unlock() pairs
around XDP program invocations. However, the actual lifetime of the objects
referred to by the XDP program invocation is longer, all the way through to
the call to xdp_do_flush(), making the scope of the rcu_read_lock() too
small. This turns out to be harmless because it all happens in a single
NAPI poll cycle (and thus under local_bh_disable()), but it makes the
rcu_read_lock() misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Camelia Groza <camelia.groza@nxp.com>
Cc: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Cc: Madalin Bucur <madalin.bucur@nxp.com>
Cc: Ioana Ciornei <ioana.ciornei@nxp.com>
Link: https://lore.kernel.org/bpf/20210624160609.292325-11-toke@redhat.com
2 years ago  thunderx: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:05:59 +0000 (18:05 +0200)]
thunderx: Remove rcu_read_lock() around XDP program invocation

The thunderx driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small. This
turns out to be harmless because it all happens in a single NAPI poll
cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock()
misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Sunil Goutham <sgoutham@marvell.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/bpf/20210624160609.292325-10-toke@redhat.com
2 years ago  bnxt: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:05:58 +0000 (18:05 +0200)]
bnxt: Remove rcu_read_lock() around XDP program invocation

The bnxt driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small. This
turns out to be harmless because it all happens in a single NAPI poll
cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock()
misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/bpf/20210624160609.292325-9-toke@redhat.com
2 years ago  ena: Remove rcu_read_lock() around XDP program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:05:57 +0000 (18:05 +0200)]
ena: Remove rcu_read_lock() around XDP program invocation

The ena driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer, all the way through to the call to
xdp_do_flush(), making the scope of the rcu_read_lock() too small. This
turns out to be harmless because it all happens in a single NAPI poll
cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock()
misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Saeed Bishara <saeedb@amazon.com>
Cc: Guy Tzalik <gtzalik@amazon.com>
Link: https://lore.kernel.org/bpf/20210624160609.292325-8-toke@redhat.com
2 years ago  bpf, sched: Remove unneeded rcu_read_lock() around BPF program invocation
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:05:56 +0000 (18:05 +0200)]
bpf, sched: Remove unneeded rcu_read_lock() around BPF program invocation

The rcu_read_lock() calls in cls_bpf and act_bpf are redundant: on the TX
side, there's already a call to rcu_read_lock_bh() in __dev_queue_xmit(),
and on RX there's a covering rcu_read_lock() in
netif_receive_skb{,_list}_internal().

With the previous patches we also amended the lockdep checks in the map
code to not require any particular RCU flavour, so we can just get rid of
the rcu_read_lock()s.

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210624160609.292325-7-toke@redhat.com
2 years ago  xdp: Add proper __rcu annotations to redirect map entries
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:05:55 +0000 (18:05 +0200)]
xdp: Add proper __rcu annotations to redirect map entries

XDP_REDIRECT works by a three-step process: the bpf_redirect() and
bpf_redirect_map() helpers will lookup the target of the redirect and store
it (along with some other metadata) in a per-CPU struct bpf_redirect_info.
Next, when the program returns the XDP_REDIRECT return code, the driver
will call xdp_do_redirect() which will use the information thus stored to
actually enqueue the frame into a bulk queue structure (that differs
slightly by map type, but shares the same principle). Finally, before
exiting its NAPI poll loop, the driver will call xdp_do_flush(), which will
flush all the different bulk queues, thus completing the redirect.

Pointers to the map entries will be kept around for this whole sequence of
steps, protected by RCU. However, there is no top-level rcu_read_lock() in
the core code; instead drivers add their own rcu_read_lock() around the XDP
portions of the code, but somewhat inconsistently as Martin discovered[0].
However, things still work because everything happens inside a single NAPI
poll sequence, which means it's between a pair of calls to
local_bh_disable()/local_bh_enable(). So Paul suggested[1] that we could
document this intention by using rcu_dereference_check() with
rcu_read_lock_bh_held() as a second parameter, thus allowing sparse and
lockdep to verify that everything is done correctly.

This patch does just that: we add an __rcu annotation to the map entry
pointers and remove the various comments explaining the NAPI poll assurance
strewn through devmap.c in favour of a longer explanation in filter.c. The
goal is to have one coherent documentation of the entire flow, and rely on
the RCU annotations as a "standard" way of communicating the flow in the
map code (which can additionally be understood by sparse and lockdep).

The RCU annotation replacements are fairly straightforward:
READ_ONCE() becomes rcu_dereference_check(), WRITE_ONCE()
becomes rcu_assign_pointer(), and xchg() and cmpxchg() get wrapped in the
proper constructs to cast the pointer back and forth between __rcu and
__kernel address space (for the benefit of sparse). The one complication is
that xskmap has a few constructions where double-pointers are passed back
and forth; these simply all gain __rcu annotations, and only the final
reference/dereference to the inner-most pointer gets changed.

With this, everything can be run through sparse without eliciting
complaints, and lockdep can verify correctness even without the use of
rcu_read_lock() in the drivers. Subsequent patches will clean these up from
the drivers.

[0] https://lore.kernel.org/bpf/20210415173551.7ma4slcbqeyiba2r@kafai-mbp.dhcp.thefacebook.com/
[1] https://lore.kernel.org/bpf/20210419165837.GA975577@paulmck-ThinkPad-P17-Gen-1/
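
The replacement pattern, sketched on a hypothetical map entry array (not
the actual devmap/xskmap code):

  #include <linux/atomic.h>
  #include <linux/rcupdate.h>
  #include <linux/types.h>

  struct example_entry;

  struct example_map {
          struct example_entry __rcu **entries;
  };

  static struct example_entry *example_lookup(struct example_map *map, u32 idx)
  {
          /* was: READ_ONCE(map->entries[idx]) */
          return rcu_dereference_check(map->entries[idx],
                                       rcu_read_lock_bh_held());
  }

  static void example_set(struct example_map *map, u32 idx,
                          struct example_entry *new)
  {
          /* was: WRITE_ONCE(map->entries[idx], new) */
          rcu_assign_pointer(map->entries[idx], new);
  }

  static struct example_entry *example_replace(struct example_map *map, u32 idx,
                                               struct example_entry *new)
  {
          /* was: xchg(&map->entries[idx], new) */
          return unrcu_pointer(xchg(&map->entries[idx], RCU_INITIALIZER(new)));
  }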

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210624160609.292325-6-toke@redhat.com
2 years ago  bpf: Allow RCU-protected lookups to happen from bh context
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:05:54 +0000 (18:05 +0200)]
bpf: Allow RCU-protected lookups to happen from bh context

XDP programs are called from a NAPI poll context, which means the RCU
reference liveness is ensured by local_bh_disable(). Add
rcu_read_lock_bh_held() as a condition to the RCU checks for map lookups so
lockdep understands that the dereferences are safe from inside *either* an
rcu_read_lock() section *or* a local_bh_disable() section. While both
bh_disabled and rcu_read_lock() provide RCU protection, they are
semantically distinct, so we need both conditions to prevent lockdep
complaints.

This change is done in preparation for removing the redundant
rcu_read_lock()s from drivers.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210624160609.292325-5-toke@redhat.com
2 years ago  doc: Give XDP as example of non-obvious RCU reader/updater pairing
Toke Høiland-Jørgensen [Thu, 24 Jun 2021 16:05:53 +0000 (18:05 +0200)]
doc: Give XDP as example of non-obvious RCU reader/updater pairing

This commit gives an example of non-obvious RCU reader/updater pairing
in the guise of the XDP feature in networking, which calls BPF programs
from network-driver NAPI (softirq) context.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210624160609.292325-4-toke@redhat.com
2 years ago  doc: Clarify and expand RCU updaters and corresponding readers
Paul E. McKenney [Thu, 24 Jun 2021 16:05:52 +0000 (18:05 +0200)]
doc: Clarify and expand RCU updaters and corresponding readers

This commit clarifies which primitives readers can use given that the
corresponding updaters have made a specific choice.  This commit also adds
this information for the various RCU Tasks flavors.  While in the area, it
removes a paragraph that no longer applies in any straightforward manner.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210624160609.292325-3-toke@redhat.com
2 years ago  rcu: Create an unrcu_pointer() to remove __rcu from a pointer
Paul E. McKenney [Thu, 24 Jun 2021 16:05:51 +0000 (18:05 +0200)]
rcu: Create an unrcu_pointer() to remove __rcu from a pointer

The xchg() and cmpxchg() functions are sometimes used to carry out RCU
updates.  Unfortunately, this can result in sparse warnings for both
the old-value and new-value arguments, as well as for the return value.
The arguments can be dealt with using RCU_INITIALIZER():

        old_p = xchg(&p, RCU_INITIALIZER(new_p));

But a sparse warning still remains due to assigning the __rcu pointer
returned from xchg to the (most likely) non-__rcu pointer old_p.

This commit therefore provides an unrcu_pointer() macro that strips
the __rcu.  This macro can be used as follows:

        old_p = unrcu_pointer(xchg(&p, RCU_INITIALIZER(new_p)));
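
For reference, a simplified sketch of what such a macro boils down to (the
real definition in include/linux/rcupdate.h additionally runs the sparse
__rcu check):

        /* strip the __rcu address-space attribute, keep the pointee type */
        #define unrcu_pointer(p)                                        \
        ({                                                              \
                typeof(*p) *___ptr = (typeof(*p) *__force)(p);          \
                ___ptr;                                                 \
        })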

Reported-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210624160609.292325-2-toke@redhat.com
2 years ago  bpf: Support all gso types in bpf_skb_change_proto()
Maciej Żenczykowski [Thu, 17 Jun 2021 00:09:52 +0000 (17:09 -0700)]
bpf: Support all gso types in bpf_skb_change_proto()

Since we no longer modify gso_size, it is now theoretically
safe to not set SKB_GSO_DODGY and reset gso_segs to zero.

This also means the skb_is_gso_tcp() check should no longer
be necessary.

Unfortunately we cannot remove the skb_{decrease,increase}_gso_size()
helpers, as they are still used elsewhere:

  bpf_skb_net_grow() without BPF_F_ADJ_ROOM_FIXED_GSO
  bpf_skb_net_shrink() without BPF_F_ADJ_ROOM_FIXED_GSO
  net/core/lwt_bpf.c's handle_gso_type()

Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dongseok Yi <dseok.yi@samsung.com>
Cc: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/bpf/20210617000953.2787453-3-zenczykowski@gmail.com
2 years ago  bpf: Do not change gso_size during bpf_skb_change_proto()
Maciej Żenczykowski [Thu, 17 Jun 2021 00:09:51 +0000 (17:09 -0700)]
bpf: Do not change gso_size during bpf_skb_change_proto()

This is technically a backwards incompatible change in behaviour, but I'm
going to argue that it is very unlikely to break things, and likely to fix
*far* more than it breaks.

In no particular order, various reasons follow:

(a) I've long had a bug assigned to myself to debug a super rare kernel crash
on Android Pixel phones which can (per stacktrace) be traced back to BPF clat
IPv6 to IPv4 protocol conversion causing some sort of ugly failure much later
on during transmit deep in the GSO engine, AFAICT precisely because of this
change to gso_size, though I've never been able to manually reproduce it. I
believe it may be related to the particular network offload support of the
attached USB ethernet dongle being used for tethering off of an IPv6-only
cellular connection. The reason might be that we end up with more segments
than the maximum permitted,
or with a GSO packet with only one segment... (either way we break some
assumption and hit a BUG_ON)

(b) There is no check that the gso_size is > 20 when reducing it by 20, so we
might end up with a negative (or underflowing) gso_size or a gso_size of 0.
This can't possibly be good. Indeed this is probably somehow exploitable (or
at least can result in a kernel crash) by delivering crafted packets and perhaps
triggering an infinite loop or a divide by zero... As a reminder: gso_size (MSS)
is related to MTU, but not directly derived from it: gso_size/MSS may be
significantly smaller than one would get by deriving it from the local MTU. And on
some NICs (which do loose MTU checking on receive) it may even potentially be
larger; for example, my work PC with a 1500 MTU can receive 1520 byte frames (and
sometimes does, due to bugs in a vendor plat46 implementation). Indeed even just
going from 21 to 1 is potentially problematic because it increases the number
of segments by a factor of 21 (think DoS, or some other crash due to too many
segments).

(c) It's always safe to not increase the gso_size, because it doesn't result in
the max packet size increasing.  So the skb_increase_gso_size() call was always
unnecessary for correctness (and outright undesirable, see later). As such the
only part which is potentially dangerous (ie. could cause backwards compatibility
issues) is the removal of the skb_decrease_gso_size() call.

(d) If the packets are ultimately destined to the local device, then there is
absolutely no benefit to playing around with gso_size. It only matters if the
packets will egress the device. ie. we're either forwarding, or transmitting
from the device.

(e) This logic only triggers for packets which are GSO. It does not trigger for
skbs which are not GSO. It will not convert a non-GSO MTU sized packet into a
GSO packet (and you don't even know what the MTU is, so you can't even fix it).
As such your transmit path must *already* be able to handle an MTU 20 bytes
larger than your receive path (for IPv4 to IPv6 translation), and indeed 28
bytes larger due to IPv4 fragments. Thus removing the skb_decrease_gso_size()
call doesn't actually increase the size of the packets your transmit side must
be able to handle. ie. to handle non-GSO max-MTU packets, the IPv4/IPv6 device/
route MTUs must already be set correctly. Since for example with an IPv4 egress
MTU of 1500, IPv4 to IPv6 translation will already build 1520 byte IPv6 frames,
so you need a 1520 byte device MTU. This means if your IPv6 device's egress
MTU is 1280, your IPv4 route must be 1260 (and actually 1252, because of the
need to handle fragments). This is to handle normal non-GSO packets. Thus the
reduction is simply not needed for GSO packets, because when they're correctly
built, they will already be the right size.

(f) TSO/GSO should be able to exactly undo GRO: the number of packets (TCP
segments) should not be modified, so that TCP's MSS counting works correctly
(this matters for congestion control). If protocol conversion changes the
gso_size, then the number of TCP segments may increase or decrease. Packet loss
after protocol conversion can result in partial loss of MSS segments that the
sender sent. How's the sending TCP stack going to react to receiving ACKs/SACKs
in the middle of the segments it sent?

(g) skb_{decrease,increase}_gso_size() are already no-ops for GSO_BY_FRAGS
case (besides triggering WARN_ON_ONCE). This means you already cannot guarantee
that gso_size (and thus resulting packet MTU) is changed. ie. you must assume
it won't be changed.

(h) changing gso_size is outright buggy for UDP GSO packets, where framing
matters (I believe that's also the case for SCTP, but it's already excluded
by [g]).  So the only remaining case is TCP, which also doesn't want it
(see [f]).

(i) see also the reasoning on the previous attempt at fixing this
(commit fa7b83bf3b156c767f3e4a25bbf3817b08f3ff8e) which shows that the current
behaviour causes TCP packet loss:

  In the forwarding path GRO -> BPF 6 to 4 -> GSO for TCP traffic, the
  coalesced packet payload can be > MSS, but < MSS + 20.

  bpf_skb_proto_6_to_4() will upgrade the MSS and it can be > the payload
  length. After that, tcp_gso_segment() checks whether the payload length
  is <= MSS. That condition causes the packet to be dropped.

  tcp_gso_segment():
    [...]
    mss = skb_shinfo(skb)->gso_size;
    if (unlikely(skb->len <= mss)) goto out;
    [...]

Thus changing the gso_size is simply a very bad idea. Increasing is unnecessary
and buggy, and decreasing can go negative.

Fixes: 6578171a7ff0 ("bpf: add bpf_skb_change_proto helper")
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dongseok Yi <dseok.yi@samsung.com>
Cc: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/bpf/CANP3RGfjLikQ6dg=YpBU0OeHvyv7JOki7CyOUS9modaXAi-9vQ@mail.gmail.com
Link: https://lore.kernel.org/bpf/20210617000953.2787453-2-zenczykowski@gmail.com
2 years ago  Revert "bpf: Check for BPF_F_ADJ_ROOM_FIXED_GSO when bpf_skb_change_proto"
Maciej Żenczykowski [Thu, 17 Jun 2021 00:09:50 +0000 (17:09 -0700)]
Revert "bpf: Check for BPF_F_ADJ_ROOM_FIXED_GSO when bpf_skb_change_proto"

This reverts commit fa7b83bf3b156c767f3e4a25bbf3817b08f3ff8e.

See the follow-up commit for the reasoning why I believe the appropriate
approach is to simply make this change without a flag; it can basically
be summarized as: using this helper without the flag is bug-prone or outright
buggy, so the default should be this new behaviour.

As this commit has only made it into net-next/master, but not into
any real release, such a backwards incompatible change is still ok.

Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dongseok Yi <dseok.yi@samsung.com>
Cc: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/bpf/20210617000953.2787453-1-zenczykowski@gmail.com
2 years ago  media, bpf: Do not copy more entries than user space requested
Sean Young [Wed, 23 Jun 2021 21:37:54 +0000 (22:37 +0100)]
media, bpf: Do not copy more entries than user space requested

The syscall bpf(BPF_PROG_QUERY, &attr) should use the prog_cnt field to
see how many entries user space provided and return ENOSPC if there are
more programs than that. Before this patch, this is not checked and
ENOSPC is never returned.

Note that one lirc device is limited to 64 bpf programs, and the user space
I'm aware of -- ir-keytable -- always gives enough space for 64 entries
already. However, we should not copy more program ids than are requested.
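
From the userspace side, the contract being enforced looks roughly like
this sketch (the lirc target is just one example of a BPF_PROG_QUERY
target, and the wrapper itself is illustrative):

  #include <stdint.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <linux/bpf.h>

  static int query_lirc_progs(int lirc_fd, uint32_t *ids, uint32_t *cnt)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.query.target_fd = lirc_fd;
          attr.query.attach_type = BPF_LIRC_MODE2;
          attr.query.prog_ids = (uint64_t)(unsigned long)ids;
          attr.query.prog_cnt = *cnt;     /* capacity of 'ids' given by the caller */

          if (syscall(__NR_bpf, BPF_PROG_QUERY, &attr, sizeof(attr)))
                  return -1;      /* errno is ENOSPC if more programs are attached */

          *cnt = attr.query.prog_cnt;     /* entries actually filled in */
          return 0;
  }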

Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210623213754.632-1-sean@mess.org
2 years ago  bpf, x86: Remove unused cnt increase from EMIT macro
Jiri Olsa [Wed, 23 Jun 2021 11:25:04 +0000 (13:25 +0200)]
bpf, x86: Remove unused cnt increase from EMIT macro

Remove the unused cnt increase from the EMIT macro together with the cnt
declarations. This was introduced in commit [1] to ensure proper code
generation, but that code was removed in commit [2] and this extra code
was left in.

  [1] b52f00e6a715 ("x86: bpf_jit: implement bpf_tail_call() helper")
  [2] ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210623112504.709856-1-jolsa@kernel.org
2 years ago  docs, af_xdp: Consistent indentation in examples
Ilya Maximets [Tue, 22 Jun 2021 18:56:47 +0000 (20:56 +0200)]
docs, af_xdp: Consistent indentation in examples

Examples in this document use all kinds of indentation, from 3 to 5
spaces, sometimes even mixed with tabs. Make them all consistent and
equal to 4 spaces.

Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20210622185647.3705104-1-i.maximets@ovn.org
2 years ago  libbpf: Switch to void * casting in netlink helpers
Kumar Kartikeya Dwivedi [Sat, 19 Jun 2021 04:14:54 +0000 (09:44 +0530)]
libbpf: Switch to void * casting in netlink helpers

Netlink helpers I added in 8bbb77b7c7a2 ("libbpf: Add various netlink
helpers") used char * casts everywhere, and there were a few more that
existed from before.

Convert all of them to void * cast, as it is treated equivalently by
clang/gcc for the purposes of pointer arithmetic and to follow the
convention elsewhere in the kernel/libbpf.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210619041454.417577-2-memxor@gmail.com
2 years ago  libbpf: Add request buffer type for netlink messages
Kumar Kartikeya Dwivedi [Sat, 19 Jun 2021 04:14:53 +0000 (09:44 +0530)]
libbpf: Add request buffer type for netlink messages

Coverity complains about OOB writes to nlmsghdr. There is no OOB as we
write to the trailing buffer, but static analyzers and compilers may
rightfully be confused as the nlmsghdr pointer has subobject provenance
(and hence subobject bounds).

Fix this by using an explicit request structure containing the nlmsghdr,
struct tcmsg/ifinfomsg, and attribute buffer.

Also switch nh_tail (renamed to req_tail) to cast req * to char * so
that it can be understood as arithmetic on pointer to the representation
array (hence having same bound as request structure), which should
further appease analyzers.

As a bonus, callers don't have to pass sizeof(req) all the time now, as
size is implicitly obtained using the pointer. While at it, also reduce
the size of attribute buffer to 128 bytes (132 for ifinfomsg using
functions due to the padding).

Summary of problem:

  Even though the C standard allows interconvertibility of a pointer to the
  first member and a pointer to the struct, for the purposes of alias analysis
  the former is still considered to have pointer value "pointer to T", where T
  is the type of the first member, and hence subobject bounds. This allows
  analyzers, within reason, to complain when the object is accessed beyond
  the size of the pointed-to object.

  The only exception to this rule may be when a char * is formed to a
  member subobject. It is not possible for the compiler to tell whether
  the programmer intends it as a pointer to the member object or to the
  underlying representation array of the containing object, so such a
  diagnosis is suppressed.
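
A sketch of the request layout described above (the struct and function
names are illustrative rather than libbpf's internal ones):

  #include <linux/netlink.h>
  #include <linux/rtnetlink.h>

  struct nl_req {
          struct nlmsghdr nh;
          union {
                  struct ifinfomsg ifinfo;
                  struct tcmsg tc;
          };
          char buf[128];          /* trailing attribute space */
  };

  /* End of the message, computed from the request pointer itself so the
   * resulting pointer shares the bounds of the whole request object.
   */
  static void *req_tail(struct nl_req *req)
  {
          return (char *)req + NLMSG_ALIGN(req->nh.nlmsg_len);
  }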

Fixes: 715c5ce454a6 ("libbpf: Add low level TC-BPF management API")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210619041454.417577-1-memxor@gmail.com
2 years ago  libbpf: Add extra BPF_PROG_TYPE check to bpf_object__probe_loading
Jonathan Edwards [Sat, 19 Jun 2021 15:10:07 +0000 (11:10 -0400)]
libbpf: Add extra BPF_PROG_TYPE check to bpf_object__probe_loading

eBPF has been backported for RHEL 7 w/ kernel 3.10-940+ [0]. However only
the following program types are supported [1]:

  BPF_PROG_TYPE_KPROBE
  BPF_PROG_TYPE_TRACEPOINT
  BPF_PROG_TYPE_PERF_EVENT

For libbpf this causes an EINVAL return during the bpf_object__probe_loading
call, which only checks whether programs of type BPF_PROG_TYPE_SOCKET_FILTER
can load.

This patch tries BPF_PROG_TYPE_TRACEPOINT as a fallback before erroring out.
BPF_PROG_TYPE_KPROBE was not a good candidate because on some kernels it
requires knowledge of the LINUX_VERSION_CODE.

  [0] https://www.redhat.com/en/blog/introduction-ebpf-red-hat-enterprise-linux-7
  [1] https://access.redhat.com/articles/3550581
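
A simplified sketch of the fallback (not the exact libbpf code, which
handles a few more error cases): load a trivial "return 0" program as
BPF_PROG_TYPE_SOCKET_FILTER and retry as BPF_PROG_TYPE_TRACEPOINT if that
is rejected.

  #include <errno.h>
  #include <unistd.h>
  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  static int probe_loading(void)
  {
          struct bpf_insn insns[] = {
                  /* r0 = 0; exit; */
                  { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 0 },
                  { .code = BPF_JMP | BPF_EXIT },
          };
          int fd;

          fd = bpf_load_program(BPF_PROG_TYPE_SOCKET_FILTER, insns, 2, "GPL",
                                0, NULL, 0);
          if (fd < 0)     /* e.g. EINVAL on kernels without socket filter support */
                  fd = bpf_load_program(BPF_PROG_TYPE_TRACEPOINT, insns, 2, "GPL",
                                        0, NULL, 0);
          if (fd < 0)
                  return -errno;

          close(fd);
          return 0;
  }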

Signed-off-by: Jonathan Edwards <jonathan.edwards@165gc.onmicrosoft.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210619151007.GA6963@165gc.onmicrosoft.com
2 years ago  bpf: Add documentation for libbpf including API autogen
Grant Seltzer [Fri, 18 Jun 2021 14:04:59 +0000 (14:04 +0000)]
bpf: Add documentation for libbpf including API autogen

This patch is meant to start the initiative to document libbpf.
It includes .rst files which are text documentation describing building,
API naming convention, as well as an index to generated API documentation.

In this approach the generated API documentation is enabled by the kernel's
existing documentation system, which uses sphinx. The resulting docs
would then be synced to kernel.org/doc.

You can test this by running `make htmldocs` and serving the html in
Documentation/output. Since libbpf does not yet have comments in kernel
doc format, see kernel.org/doc/html/latest/doc-guide/kernel-doc.html for
an example so you can test this.

The advantage of this approach is to use the existing sphinx
infrastructure that the kernel has, and have libbpf docs in
the same place as everything else.

The current plan is to have the libbpf mirror sync the generated docs
and version them based on the libbpf releases which are cut on github.

This patch includes the addition of libbpf_api.rst which pulls comment
documentation from header files in libbpf under tools/lib/bpf/. The comment
docs would be of the standard kernel doc format.

Signed-off-by: Grant Seltzer <grantseltzer@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210618140459.9887-2-grantseltzer@gmail.com
2 years ago  samples/bpf: Fix the error return code of xdp_redirect's main()
Wang Hai [Wed, 16 Jun 2021 04:25:34 +0000 (12:25 +0800)]
samples/bpf: Fix the error return code of xdp_redirect's main()

Fix to return a negative error code from the error handling
case instead of 0, as done elsewhere in this function.

If bpf_map_update_elem() failed, main() should return a negative error.

Fixes: 832622e6bd18 ("xdp: sample program for new bpf_redirect helper")
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210616042534.315097-1-wanghai38@huawei.com
2 years ago  samples/bpf: Fix Segmentation fault for xdp_redirect command
Wang Hai [Wed, 16 Jun 2021 04:23:24 +0000 (12:23 +0800)]
samples/bpf: Fix Segmentation fault for xdp_redirect command

A segmentation fault occurs when the following command is executed.

$ sudo ./samples/bpf/xdp_redirect lo
Segmentation fault

This command is missing a device <IFNAME|IFINDEX> as an argument, resulting
in out-of-bounds access from argv.

If the number of devices for the xdp_redirect parameter is not 2,
we should report an error and exit.

Fixes: 24251c264798 ("samples/bpf: add option for native and skb mode for redirect apps")
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210616042324.314832-1-wanghai38@huawei.com
2 years ago  selftests/bpf: Fix ringbuf test fetching map FD
Andrii Nakryiko [Fri, 18 Jun 2021 00:28:24 +0000 (17:28 -0700)]
selftests/bpf: Fix ringbuf test fetching map FD

It seems that 4d1b62986125 ("selftests/bpf: Convert few tests to light skeleton.")
and 704e2beba23c ("selftests/bpf: Test ringbuf mmap read-only and read-write
restrictions") were done independently on the bpf and bpf-next trees and
conflict with each other, despite a clean merge. Fix the fetching of the
ringbuf's map_fd to use the light skeleton properly.

Fixes: 704e2beba23c ("selftests/bpf: Test ringbuf mmap read-only and read-write restrictions")
Fixes: 4d1b62986125 ("selftests/bpf: Convert few tests to light skeleton.")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210618002824.2081922-1-andrii@kernel.org
2 years ago  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next...
David S. Miller [Thu, 17 Jun 2021 19:11:28 +0000 (12:11 -0700)]
Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
100GbE Intel Wired LAN Driver Updates 2021-06-17

This series contains updates to ice driver only.

Jake corrects a couple of entries in the PTYPE table to properly
reflect the datasheet and removes unneeded NULL checks for some
PTP calls.

Paul reduces the scope of variables and removes the use of a local
variable.

Shaokun Zhang removes a duplicate function declaration.

Lorenzo Bianconi fixes a compilation warning if PTP_1588_CLOCK is
disabled.

Colin Ian King changes a for loop to remove an unneeded 'continue'.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'hdlc_ppp-cleanups'
David S. Miller [Thu, 17 Jun 2021 19:08:46 +0000 (12:08 -0700)]
Merge branch 'hdlc_ppp-cleanups'

Guangbin Huang says:

====================
net: hdlc_ppp: clean up some code style issues

This patchset cleans up some code style issues.

---
Change Log:
V1 -> V2:
1. remove patch "net: hdlc_ppp: fix the comments style issue" and
patch "net: hdlc_ppp: remove redundant spaces" from this patchset.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: hdlc_ppp: add required space
Peng Li [Thu, 17 Jun 2021 14:03:19 +0000 (22:03 +0800)]
net: hdlc_ppp: add required space

Add the space required after ','.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: hdlc_ppp: remove unnecessary out of memory message
Peng Li [Thu, 17 Jun 2021 14:03:18 +0000 (22:03 +0800)]
net: hdlc_ppp: remove unnecessary out of memory message

This patch removes an unnecessary out-of-memory message
to fix the following checkpatch.pl warning:
"WARNING: Possible unnecessary 'out of memory' message"

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: hdlc_ppp: move out assignment in if condition
Peng Li [Thu, 17 Jun 2021 14:03:17 +0000 (22:03 +0800)]
net: hdlc_ppp: move out assignment in if condition

Assignments should not be made inside an if condition.
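
For illustration (a hypothetical snippet, not the actual hdlc_ppp change),
the rule amounts to:

  #include <linux/errno.h>
  #include <linux/skbuff.h>

  static int example_alloc(struct sk_buff **out, unsigned int len)
  {
          struct sk_buff *skb;

          /* before: if ((skb = dev_alloc_skb(len)) == NULL) return -ENOMEM; */
          skb = dev_alloc_skb(len);
          if (!skb)
                  return -ENOMEM;

          *out = skb;
          return 0;
  }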

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: hdlc_ppp: fix the code style issue about "foo* bar"
Peng Li [Thu, 17 Jun 2021 14:03:16 +0000 (22:03 +0800)]
net: hdlc_ppp: fix the code style issue about "foo* bar"

Fix the checkpatch error: "foo* bar" or "foo*bar" should be "foo *bar".

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: hdlc_ppp: add blank line after declarations
Peng Li [Thu, 17 Jun 2021 14:03:15 +0000 (22:03 +0800)]
net: hdlc_ppp: add blank line after declarations

This patch fixes the checkpatch error about missing a blank line
after declarations.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: hdlc_ppp: remove redundant blank lines
Peng Li [Thu, 17 Jun 2021 14:03:14 +0000 (22:03 +0800)]
net: hdlc_ppp: remove redundant blank lines

This patch removes some redundant blank lines.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'mdio-nodes'
David S. Miller [Thu, 17 Jun 2021 19:06:53 +0000 (12:06 -0700)]
Merge branch 'mdio-nodes'

Ioana Ciornei says:

====================
net: mdio: setup both fwnode and of_node

The first patch in this series fixes a bug introduced by mistake in the
previous ACPI MDIO patch set.

The next two patches add a new helper which takes a device and a
fwnode_handle and populates both the of_node and the fwnode, making
sure that a bug like this does not happen anymore.
Also, the new helper is used in the MDIO area.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: mdio: use device_set_node() to setup both fwnode and of
Ioana Ciornei [Thu, 17 Jun 2021 12:29:05 +0000 (15:29 +0300)]
net: mdio: use device_set_node() to setup both fwnode and of

Use the newly introduced helper to setup both the of_node and the
fwnode for a given device.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  driver core: add a helper to setup both the of_node and fwnode of a device
Ioana Ciornei [Thu, 17 Jun 2021 12:29:04 +0000 (15:29 +0300)]
driver core: add a helper to setup both the of_node and fwnode of a device

There are many places where both the fwnode_handle and the of_node of a
device need to be populated. Add a function which does both so that we
have consistency.
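
Roughly, such a helper boils down to keeping the two handles in sync; the
sketch below is a simplified rendition (see drivers/base/core.c for the
real device_set_node()):

  #include <linux/device.h>
  #include <linux/of.h>

  static void example_set_node(struct device *dev, struct fwnode_handle *fwnode)
  {
          dev->fwnode = fwnode;                   /* fwnode-based lookups */
          dev->of_node = to_of_node(fwnode);      /* OF-based lookups (NULL if not an OF node) */
  }

Callers such as the MDIO code can then use the helper instead of assigning
the two fields by hand.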

Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: mdio: setup of_node for the MDIO device
Ioana Ciornei [Thu, 17 Jun 2021 12:29:03 +0000 (15:29 +0300)]
net: mdio: setup of_node for the MDIO device

By mistake, the of_node of the MDIO device was not set up in the patch
linked below. As a consequence, any PHY driver that depends on the
of_node in its probe callback was not able to successfully finish its
probe on a PHY, so the Generic PHY driver was used instead.

Fix this by actually setting up the of_node.

Fixes: bc1bee3b87ee ("net: mdiobus: Introduce fwnode_mdiobus_register_phy()")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  r8152: store the information of the pipes
Hayes Wang [Thu, 17 Jun 2021 10:00:15 +0000 (18:00 +0800)]
r8152: store the information of the pipes

Store the information of the pipes to avoid calling usb_rcvctrlpipe(),
usb_sndctrlpipe(), usb_rcvbulkpipe(), usb_sndbulkpipe(), and
usb_rcvintpipe() frequently.

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller [Thu, 17 Jun 2021 18:54:56 +0000 (11:54 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2021-06-17

The following pull-request contains BPF updates for your *net-next* tree.

We've added 50 non-merge commits during the last 25 day(s) which contain
a total of 148 files changed, 4779 insertions(+), 1248 deletions(-).

The main changes are:

1) BPF infrastructure to migrate TCP child sockets from a listener to another
   in the same reuseport group/map, from Kuniyuki Iwashima.

2) Add a provably sound, faster and more precise algorithm for tnum_mul() as
   noted in https://arxiv.org/abs/2105.05398, from Harishankar Vishwanathan.

3) Streamline error reporting changes in libbpf as planned out in the
   'libbpf: the road to v1.0' effort, from Andrii Nakryiko.

4) Add broadcast support to xdp_redirect_map(), from Hangbin Liu.

5) Extends bpf_map_lookup_and_delete_elem() functionality to 4 more map
   types, that is, {LRU_,PERCPU_,LRU_PERCPU_,}HASH, from Denis Salopek.

6) Support new LLVM relocations in libbpf to make them more linker friendly,
   also add a doc to describe the BPF backend relocations, from Yonghong Song.

7) Silence long standing KUBSAN complaints on register-based shifts in
   interpreter, from Daniel Borkmann and Eric Biggers.

8) Add dummy PT_REGS macros in libbpf to fail BPF program compilation when
   target arch cannot be determined, from Lorenz Bauer.

9) Extend AF_XDP to support large umems with 1M+ pages, from Magnus Karlsson.

10) Fix two minor libbpf tc BPF API issues, from Kumar Kartikeya Dwivedi.

11) Move libbpf BPF_SEQ_PRINTF/BPF_SNPRINTF macros that can be used by BPF
    programs to bpf_helpers.h header, from Florent Revest.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'gianfar-64-bit-stats'
David S. Miller [Thu, 17 Jun 2021 18:39:48 +0000 (11:39 -0700)]
Merge branch 'gianfar-64-bit-stats'

Esben Haabendal says:

====================
net: gianfar: 64-bit statistics and rx_missed_errors counter

This series replaces the legacy 32-bit statistics with proper 64-bit
counters, and implements an rx_missed_errors counter on top of that.

The device supports a 16-bit RDRP counter, and a related carry bit and
interrupt, which allows implementation of a robust 64-bit counter.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: gianfar: Implement rx_missed_errors counter
Esben Haabendal [Thu, 17 Jun 2021 09:49:28 +0000 (11:49 +0200)]
net: gianfar: Implement rx_missed_errors counter

Devices with RMON support have a 16-bit RDRP counter.  It provides: "Receive
dropped packets counter. Increments for frames received which are streamed
to system but are later dropped due to lack of system resources."

To handle more than 2^16 dropped packets, a carry bit in the CAR1 register
is set on overflow, so we enable an interrupt for that bit, extending the
counter to 2^64 and covering situations where lots of packets are missed
(e.g. during heavy network storms).
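
The carry-extension scheme, sketched with hypothetical helper, field and
bit names:

    /* carry interrupt handler: fold one 16-bit wrap into a 64-bit total */
    static void gfar_rdrp_carry(struct gfar_private *priv)
    {
            priv->rdrp_overflow += 0x10000ULL;            /* one full wrap */
            gfar_write(&priv->gfargrp[0].regs->rmon.car1,
                       CAR1_C1RDR);                       /* W1C: ack the carry */
    }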

Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: gianfar: Add definitions for CAR1 and CAM1 register bits
Esben Haabendal [Thu, 17 Jun 2021 09:49:26 +0000 (11:49 +0200)]
net: gianfar: Add definitions for CAR1 and CAM1 register bits

These are for carry status and interrupt mask bits of statistics registers.

Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: gianfar: Avoid 16 bytes of memset
Esben Haabendal [Thu, 17 Jun 2021 09:49:23 +0000 (11:49 +0200)]
net: gianfar: Avoid 16 bytes of memset

The memset on CAMx is wrong, as it actually unmasks all carry IRQs,
which we are clearly not interested in.

The memset on CARx registers is just pointless, as they are W1C.

So let's just stop the memset before CAR1.
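
One way to express "stop before CAR1" is to bound the clear by the register
offset (a sketch, assuming the counters sit in struct rmon_mib directly
before a car1 member):

    /* zero only the counters; CARx are W1C and CAMx must stay masked */
    memset_io(&regs->rmon, 0, offsetof(struct rmon_mib, car1));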

Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: gianfar: Clear CAR registers
Esben Haabendal [Thu, 17 Jun 2021 09:49:20 +0000 (11:49 +0200)]
net: gianfar: Clear CAR registers

The CAR1 and CAR2 registers are W1C style registers, so the memset does
not actually clear them.
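
For W1C registers, clearing means writing the set bits back rather than
zeroing; a hedged sketch:

    /* write-1-to-clear: writing a set bit clears it, writing 0 is a no-op */
    gfar_write(&regs->rmon.car1, 0xffffffff);
    gfar_write(&regs->rmon.car2, 0xffffffff);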

Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: gianfar: Extend statistics counters to 64-bit
Esben Haabendal [Thu, 17 Jun 2021 09:49:17 +0000 (11:49 +0200)]
net: gianfar: Extend statistics counters to 64-bit

There is no reason to wrap counter values at 2^32; the byte counters in
particular can wrap pretty fast on Gbit networks.

Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: gianfar: Convert to ndo_get_stats64 interface
Esben Haabendal [Thu, 17 Jun 2021 09:49:15 +0000 (11:49 +0200)]
net: gianfar: Convert to ndo_get_stats64 interface

There is no reason to produce the legacy net_device_stats struct only to
have it converted to rtnl_link_stats64.  As a bonus, this allows the
counters to be widened to 64 bits.
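
The callback shape this converts to, sketched with illustrative field
names (priv->stats64 is an assumption, not the driver's real layout):

    static void gfar_get_stats64(struct net_device *ndev,
                                 struct rtnl_link_stats64 *stats)
    {
            struct gfar_private *priv = netdev_priv(ndev);

            /* 64-bit driver counters copied straight in, no legacy
             * net_device_stats intermediate
             */
            stats->rx_packets = priv->stats64.rx_packets;
            stats->rx_bytes   = priv->stats64.rx_bytes;
            stats->tx_packets = priv->stats64.tx_packets;
            stats->tx_bytes   = priv->stats64.tx_bytes;
    }

    static const struct net_device_ops gfar_netdev_ops = {
            .ndo_get_stats64 = gfar_get_stats64,
    };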

Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: sched: fix error return code in tcf_del_walker()
Yang Yingliang [Thu, 17 Jun 2021 08:02:07 +0000 (16:02 +0800)]
net: sched: fix error return code in tcf_del_walker()

When nla_put_u32() fails, 'ret' could still be 0; tcf_del_walker()
should return a proper error code instead.
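
The fix pattern, sketched (attribute name and error code chosen to match
typical netlink dump code; treat them as assumptions):

    if (nla_put_u32(skb, TCA_FCNT, n_i)) {
            ret = -EMSGSIZE;        /* don't fall through with ret == 0 */
            goto nla_put_failure;
    }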

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: ipa: Add missing of_node_put() in ipa_firmware_load()
Yang Yingliang [Thu, 17 Jun 2021 05:11:19 +0000 (13:11 +0800)]
net: ipa: Add missing of_node_put() in ipa_firmware_load()

This node pointer is returned by of_parse_phandle() with its refcount
incremented.  Call of_node_put() on it before exiting this function.
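
The usual shape of this pattern (the property name below is an assumption
for illustration):

    node = of_parse_phandle(dev->of_node, "memory-region", 0);
    if (!node)
            return -EINVAL;

    ret = of_address_to_resource(node, 0, &res);
    of_node_put(node);      /* balance the reference taken by of_parse_phandle() */
    if (ret)
            return ret;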

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Acked-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: fix mistake path for netdev_features_strings
Jian Shen [Thu, 17 Jun 2021 03:37:11 +0000 (11:37 +0800)]
net: fix mistake path for netdev_features_strings

The string arrays netdev_features_strings, tunable_strings, and
phy_tunable_strings have been moved to net/ethtool/common.c.
Fix the comment accordingly.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodocumentation: networking: devlink: fix prestera.rst formatting that causes build...
Oleksandr Mazur [Wed, 16 Jun 2021 17:46:07 +0000 (20:46 +0300)]
documentation: networking: devlink: fix prestera.rst formatting that causes build warnings

Fixes: 66826c43e63d ("documentation: networking: devlink: add prestera switched driver Documentation")

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: pcs: xpcs: Fix a less than zero u16 comparison error
Colin Ian King [Tue, 15 Jun 2021 13:52:53 +0000 (14:52 +0100)]
net: pcs: xpcs: Fix a less than zero u16 comparison error

Currently the check for the u16 variable val being less than zero is
always false because val is unsigned. Fix this by using an int variable
for the assignment and the less-than-zero check.
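
The bug class, sketched generically (the read helper here is hypothetical):

    int ret;
    u16 val;

    ret = xpcs_read_reg(xpcs, reg);         /* hypothetical helper returning int */
    if (ret < 0)                            /* meaningful: ret is signed */
            return ret;
    val = ret;                              /* only now narrow to u16 */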

Addresses-Coverity: ("Unsigned compared against 0")
Fixes: f7380bba42fd ("net: pcs: xpcs: add support for NXP SJA1110")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoice: remove redundant continue statement in a for-loop
Colin Ian King [Tue, 15 Jun 2021 14:28:47 +0000 (15:28 +0100)]
ice: remove redundant continue statement in a for-loop

The continue statement in the for-loop is redundant. Re-work the hw_lock
check to remove it.

Addresses-Coverity: ("Continue has no effect")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agonet: ice: ptp: fix compilation warning if PTP_1588_CLOCK is disabled
Lorenzo Bianconi [Tue, 15 Jun 2021 14:14:12 +0000 (16:14 +0200)]
net: ice: ptp: fix compilation warning if PTP_1588_CLOCK is disabled

Fix the following compilation warning if PTP_1588_CLOCK is not enabled

drivers/net/ethernet/intel/ice/ice_ptp.h:149:1:
   error: return type defaults to ‘int’ [-Werror=return-type]
   ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
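
The fix is to give the stub an explicit return type; a sketch, where the
s8 type mirrors what the non-stub version presumably returns (an
assumption, not confirmed here):

    static inline s8
    ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
    {
            return -1;      /* no Tx timestamp index without PTP support */
    }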

Fixes: ea9b847cda647 ("ice: enable transmit timestamps for E810 devices")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: remove unnecessary NULL checks before ptp_read_system_*
Jacob Keller [Mon, 14 Jun 2021 16:59:16 +0000 (09:59 -0700)]
ice: remove unnecessary NULL checks before ptp_read_system_*

The ptp_read_system_prets and ptp_read_system_postts functions already
check for the NULL value of the ptp_system_timestamp structure pointer.
There is no need to check this manually in the ice driver code. Remove
the checks.
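
For reference, the helpers are roughly of this shape (reproduced from
memory of include/linux/ptp_clock_kernel.h, so treat it as illustrative):

    static inline void ptp_read_system_prets(struct ptp_system_timestamp *sts)
    {
            if (sts)
                    ktime_get_real_ts64(&sts->pre_ts);
    }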

Reported-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: Remove the repeated declaration
Shaokun Zhang [Mon, 24 May 2021 08:39:01 +0000 (16:39 +0800)]
ice: Remove the repeated declaration

Function 'ice_is_vsi_valid' is declared twice; remove the repeated
declaration.

Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: remove local variable
Paul M Stillwell Jr [Thu, 6 May 2021 15:40:08 +0000 (08:40 -0700)]
ice: remove local variable

Remove the local variable since it's only used once; use the value
directly instead.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: reduce scope of variables
Paul M Stillwell Jr [Thu, 6 May 2021 15:40:07 +0000 (08:40 -0700)]
ice: reduce scope of variables

There are some places where the scope of a variable can be reduced,
so do that.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: mark PTYPE 2 as reserved
Jacob Keller [Thu, 6 May 2021 15:40:05 +0000 (08:40 -0700)]
ice: mark PTYPE 2 as reserved

The entry for PTYPE 2 in the ice_ptype_lkup table incorrectly states
that this is an L2 packet with no payload. According to the datasheet,
this PTYPE is actually unused and reserved.

Fix the lookup entry to indicate this is an unused entry that is
reserved.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: fix incorrect payload indicator on PTYPE
Jacob Keller [Thu, 6 May 2021 15:40:04 +0000 (08:40 -0700)]
ice: fix incorrect payload indicator on PTYPE

The entry for PTYPE 90 indicates that the payload is layer 3. This does
not match the specification in the datasheet, which indicates the packet
is a MAC, IPv6, UDP packet, with a payload in layer 4.

Fix the lookup table to match the data sheet.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoselftests/bpf: Fix selftests build with old system-wide headers
Andrii Nakryiko [Thu, 17 Jun 2021 04:14:46 +0000 (21:14 -0700)]
selftests/bpf: Fix selftests build with old system-wide headers

The migrate_reuseport.c selftest relies on having TCP_FASTOPEN_CONNECT defined
in the system-wide netinet/tcp.h. Selftests can use the up-to-date
uapi/linux/tcp.h, but that one doesn't have SOL_TCP. So instead of switching
everything to the uapi header, add a #define for TCP_FASTOPEN_CONNECT to fix
the build.
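
The workaround amounts to a fallback define in the selftest; the value
below matches uapi/linux/tcp.h, but treat the snippet as a sketch:

    #ifndef TCP_FASTOPEN_CONNECT
    #define TCP_FASTOPEN_CONNECT 30         /* from uapi/linux/tcp.h */
    #endif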

Fixes: c9d0bdef89a6 ("bpf: Test BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
Link: https://lore.kernel.org/bpf/20210617041446.425283-1-andrii@kernel.org
2 years agobpf: Fix up register-based shifts in interpreter to silence KUBSAN
Daniel Borkmann [Wed, 16 Jun 2021 09:25:11 +0000 (11:25 +0200)]
bpf: Fix up register-based shifts in interpreter to silence KUBSAN

syzbot reported a shift-out-of-bounds that KUBSAN observed in the
interpreter:

  [...]
  UBSAN: shift-out-of-bounds in kernel/bpf/core.c:1420:2
  shift exponent 255 is too large for 64-bit type 'long long unsigned int'
  CPU: 1 PID: 11097 Comm: syz-executor.4 Not tainted 5.12.0-rc2-syzkaller #0
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
  Call Trace:
   __dump_stack lib/dump_stack.c:79 [inline]
   dump_stack+0x141/0x1d7 lib/dump_stack.c:120
   ubsan_epilogue+0xb/0x5a lib/ubsan.c:148
   __ubsan_handle_shift_out_of_bounds.cold+0xb1/0x181 lib/ubsan.c:327
   ___bpf_prog_run.cold+0x19/0x56c kernel/bpf/core.c:1420
   __bpf_prog_run32+0x8f/0xd0 kernel/bpf/core.c:1735
   bpf_dispatcher_nop_func include/linux/bpf.h:644 [inline]
   bpf_prog_run_pin_on_cpu include/linux/filter.h:624 [inline]
   bpf_prog_run_clear_cb include/linux/filter.h:755 [inline]
   run_filter+0x1a1/0x470 net/packet/af_packet.c:2031
   packet_rcv+0x313/0x13e0 net/packet/af_packet.c:2104
   dev_queue_xmit_nit+0x7c2/0xa90 net/core/dev.c:2387
   xmit_one net/core/dev.c:3588 [inline]
   dev_hard_start_xmit+0xad/0x920 net/core/dev.c:3609
   __dev_queue_xmit+0x2121/0x2e00 net/core/dev.c:4182
   __bpf_tx_skb net/core/filter.c:2116 [inline]
   __bpf_redirect_no_mac net/core/filter.c:2141 [inline]
   __bpf_redirect+0x548/0xc80 net/core/filter.c:2164
   ____bpf_clone_redirect net/core/filter.c:2448 [inline]
   bpf_clone_redirect+0x2ae/0x420 net/core/filter.c:2420
   ___bpf_prog_run+0x34e1/0x77d0 kernel/bpf/core.c:1523
   __bpf_prog_run512+0x99/0xe0 kernel/bpf/core.c:1737
   bpf_dispatcher_nop_func include/linux/bpf.h:644 [inline]
   bpf_test_run+0x3ed/0xc50 net/bpf/test_run.c:50
   bpf_prog_test_run_skb+0xabc/0x1c50 net/bpf/test_run.c:582
   bpf_prog_test_run kernel/bpf/syscall.c:3127 [inline]
   __do_sys_bpf+0x1ea9/0x4f00 kernel/bpf/syscall.c:4406
   do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
   entry_SYSCALL_64_after_hwframe+0x44/0xae
  [...]

Generally speaking, KUBSAN reports from the kernel should be fixed.
However, in case of BPF, this particular report caused concerns since
the large shift is not wrong from BPF point of view, just undefined.
In the verifier, K-based shifts that are >= {64,32} (depending on the
bitwidth of the instruction) are already rejected. The register-based
cases were not, given their content might not be known at verification
time. Ideas such as verifier instruction rewrite with an additional
AND instruction for the source register were brought up, but regularly
rejected due to the additional runtime overhead they incur.

As Edward Cree rightly put it:

  Shifts by more than insn bitness are legal in the BPF ISA; they are
  implementation-defined behaviour [of the underlying architecture],
  rather than UB, and have been made legal for performance reasons.
  Each of the JIT backends compiles the BPF shift operations to machine
  instructions which produce implementation-defined results in such a
  case; the resulting contents of the register may be arbitrary but
  program behaviour as a whole remains defined.

  Guard checks in the fast path (i.e. affecting JITted code) will thus
  not be accepted.

  The case of division by zero is not truly analogous here, as division
  instructions on many of the JIT-targeted architectures will raise a
  machine exception / fault on division by zero, whereas (to the best
  of my knowledge) none will do so on an out-of-bounds shift.

Given the KUBSAN report only affects the BPF interpreter, but not JITs,
one solution is to add the ANDs with 63 or 31 into ___bpf_prog_run().
That would make the shifts defined, and thus shut up KUBSAN, and the
compiler would optimize out the AND on any CPU that interprets the shift
amounts modulo the width anyway (e.g., confirmed from disassembly that
on x86-64 and arm64 the generated interpreter code is the same before
and after this fix).

The BPF interpreter is slow path, and most likely compiled out anyway
as distros select BPF_JIT_ALWAYS_ON to avoid speculative execution of
BPF instructions by the interpreter. Given the main argument was to
avoid sacrificing performance, the fact that the AND is optimized away
from compiler for mainstream archs helps as well as a solution moving
forward. Also add a comment on LSH/RSH/ARSH translation for JIT authors
to provide guidance when they see the ___bpf_prog_run() interpreter
code and use it as a model for a new JIT backend.
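
In interpreter terms the change boils down to masking the shift amount;
a simplified excerpt-style sketch of the ___bpf_prog_run() handlers (not
a verbatim copy of the patch):

    ALU64_LSH_X:
            DST = DST << (SRC & 63);
            CONT;
    ALU_LSH_X:
            DST = (u32) DST << (SRC & 31);
            CONT;
    ALU64_ARSH_X:
            (*(s64 *) &DST) >>= (SRC & 63);
            CONT;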

Reported-by: syzbot+bed360704c521841c85d@syzkaller.appspotmail.com
Reported-by: Kurt Manucredo <fuzzybritches0@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Co-developed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: syzbot+bed360704c521841c85d@syzkaller.appspotmail.com
Cc: Edward Cree <ecree.xilinx@gmail.com>
Link: https://lore.kernel.org/bpf/0000000000008f912605bd30d5d7@google.com
Link: https://lore.kernel.org/bpf/bac16d8d-c174-bdc4-91bd-bfa62b410190@gmail.com
2 years agolibbpf: Fail compilation if target arch is missing
Lorenz Bauer [Wed, 16 Jun 2021 08:36:35 +0000 (09:36 +0100)]
libbpf: Fail compilation if target arch is missing

bpf2go is the Go equivalent of libbpf skeleton. The convention is that
the compiled BPF is checked into the repository to facilitate distributing
BPF as part of Go packages. To make this portable, bpf2go by default
generates both bpfel and bpfeb variants of the C.

Using bpf_tracing.h is inherently non-portable since the fields of
struct pt_regs differ between platforms, so CO-RE can't help us here.
The only way of working around this is to compile for each target
platform independently. bpf2go can't do this by default since there
are too many platforms.

Define the various PT_... macros when no target can be determined and
turn them into compilation failures. This works because bpf2go always
compiles for bpf targets, so the compiler fallback doesn't kick in.
Conditionally define __BPF_MISSING_TARGET so that we can inject a
more appropriate error message at build time. The user can then
choose which platform to target explicitly.
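
One plausible shape of the bpf_tracing.h fallback, using the macro name
mentioned above (a sketch under that assumption, not the verbatim patch):

    #if !defined(bpf_target_defined)

    #define __BPF_MISSING_TARGET \
            "GCC error \"Must specify a BPF target arch via __TARGET_ARCH_xxx\""

    /* any use of these now fails the build with a clear message */
    #define PT_REGS_PARM1(x) ({ _Pragma(__BPF_MISSING_TARGET); 0l; })
    #define PT_REGS_RC(x)    ({ _Pragma(__BPF_MISSING_TARGET); 0l; })

    #endif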

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210616083635.11434-1-lmb@cloudflare.com
2 years agosamples/bpf: Add missing option to xdp_sample_pkts usage
Wang Hai [Tue, 15 Jun 2021 13:57:24 +0000 (21:57 +0800)]
samples/bpf: Add missing option to xdp_sample_pkts usage

The xdp_sample_pkts usage() output is missing the "-S" option; this
patch adds it.

Fixes: d50ecc46d18f ("samples/bpf: Attach XDP programs in driver mode by default")
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20210615135724.29528-1-wanghai38@huawei.com
2 years agosamples/bpf: Add missing option to xdp_fwd usage
Wang Hai [Tue, 15 Jun 2021 13:55:54 +0000 (21:55 +0800)]
samples/bpf: Add missing option to xdp_fwd usage

The xdp_fwd usage() output is missing the "-S" and "-F" options; this
patch adds them.

Fixes: d50ecc46d18f ("samples/bpf: Attach XDP programs in driver mode by default")
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20210615135554.29158-1-wanghai38@huawei.com
2 years agobpf: Fix typo in kernel/bpf/bpf_lsm.c
Shuyi Cheng [Wed, 16 Jun 2021 02:04:36 +0000 (10:04 +0800)]
bpf: Fix typo in kernel/bpf/bpf_lsm.c

Fix s/sleeable/sleepable/ typo in a comment.

Signed-off-by: Shuyi Cheng <chengshuyi@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1623809076-97907-1-git-send-email-chengshuyi@linux.alibaba.com
2 years agoselftests/bpf: Whitelist test_progs.h from .gitignore
Daniel Xu [Wed, 16 Jun 2021 21:52:11 +0000 (14:52 -0700)]
selftests/bpf: Whitelist test_progs.h from .gitignore

Somehow test_progs.h was being matched (and thus ignored) by the existing rule:

    /test_progs*

This is bad because:

    1) test_progs.h is a checked in file
    2) grep-like tools like ripgrep[0] respect gitignore and
       test_progs.h was being hidden from searches

[0]: https://github.com/BurntSushi/ripgrep
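
The usual way to carve an exception out of such a pattern is a negated
.gitignore entry; presumably the fix looks something like:

    /test_progs*
    !/test_progs.h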

Fixes: 74b5a5968fe8 ("selftests/bpf: Replace test_progs and test_maps w/ general rule")
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/a46f64944bf678bc652410ca6028d3450f4f7f4b.1623880296.git.dxu@dxuuu.xyz
2 years agoMerge tag 'wireless-drivers-next-2021-06-16' of git://git.kernel.org/pub/scm/linux...
David S. Miller [Wed, 16 Jun 2021 19:59:42 +0000 (12:59 -0700)]
Merge tag 'wireless-drivers-next-2021-06-16' of git://git./linux/kernel/git/kvalo/wireless-drivers-next

Kalle Valo says:

====================
wireless-drivers-next patches for v5.14

First set of patches for v5.14. The major new features here are support
for WCN6855 PCI in ath11k and WoWLAN support for wcn36xx. Also smaller
fixes and cleanups all over.

ath9k

* provide STBC info in the received frames

brcmfmac

* fix setting of station info chains bitmask

* correctly report average RSSI in station info

rsi

* support for changing beacon interval in AP mode

ath11k

* support for WCN6855 PCI hardware

wcn36xx

* WoWLAN support with magic packets and GTK rekeying
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'marvell-prestera-flower-match-all'
David S. Miller [Wed, 16 Jun 2021 19:58:28 +0000 (12:58 -0700)]
Merge branch 'marvell-prestera-flower-match-all'

Vadym Kochan says:

====================
Marvell Prestera add flower and match all support

Add ACL infrastructure for Prestera Switch ASICs family devices to
offload cls_flower rules to be processed in the HW.

The ACL implementation is based on the tc filter API. The flower
classifier is supported for configuring ACL rules/matches/actions.

Supported actions:

    - drop
    - trap
    - pass

Supported dissector keys:

    - indev
    - src_mac
    - dst_mac
    - src_ip
    - dst_ip
    - ip_proto
    - src_port
    - dst_port
    - vlan_id
    - vlan_ethtype
    - icmp type/code

- Introduce matchall filter support
- Add SPAN API to configure port mirroring.
- Add tc mirror action.

At this moment, only mirror (egress) action is supported.

Example:
    tc filter ... action mirred egress mirror dev DEV

v2:
    Fixed "newline at EOF warnings" from "git am" by
        re-applying with --whitespace=fix

    patch #1:
        1) Set TC HW Offload always enabled without disable it     [suggested by Vladimir Oltean]
           by user. It reduced the logic by removing feature
           handling and acl block disable counting.

    patch #2:
        1) Removed extra not needed diff with prestera_port and    [suggested by Vladimir Oltean]
           prestera_switch  lines exchanging in prestera_acl.h

        2) Fix local variables ordering to reverse chrostmas tree  [suggested by Vladimir Oltean]

        3) Use tc_cls_can_offload_and_chain0() in                  [suggested by Vladimir Oltean]
           prestera_span_replace()

        4) Removed TODO about prio check                           [suggested by Vladimir Oltean]

        5) Rephrase error message if prestera_netdev_check()       [suggested by Vladimir Oltean]
           fails in prestera_span_replace()
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: marvell: prestera: Add matchall support
Serhiy Boiko [Wed, 16 Jun 2021 16:01:45 +0000 (19:01 +0300)]
net: marvell: prestera: Add matchall support

- Introduce matchall filter support
- Add SPAN API to configure port mirroring.
- Add tc mirror action.

At this moment, only mirror (egress) action is supported.

Example:
    tc filter ... action mirred egress mirror dev DEV

Co-developed-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Signed-off-by: Vadym Kochan <vkochan@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: marvell: Implement TC flower offload
Serhiy Boiko [Wed, 16 Jun 2021 16:01:44 +0000 (19:01 +0300)]
net: marvell: Implement TC flower offload

Add ACL infrastructure for Prestera Switch ASICs family devices to
offload cls_flower rules to be processed in the HW.

The ACL implementation is based on the tc filter API. The flower
classifier is supported for configuring ACL rules/matches/actions.

Supported actions:

    - drop
    - trap
    - pass

Supported dissector keys:

    - indev
    - src_mac
    - dst_mac
    - src_ip
    - dst_ip
    - ip_proto
    - src_port
    - dst_port
    - vlan_id
    - vlan_ethtype
    - icmp type/code

Co-developed-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Signed-off-by: Vadym Kochan <vkochan@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'net-smc-stats'
David S. Miller [Wed, 16 Jun 2021 19:54:02 +0000 (12:54 -0700)]
Merge branch 'net-smc-stats'

Karsten Graul says:

====================
net/smc: Add SMC statistic support

Please apply the following patch series for smc to netdev's net-next tree.

This v2 is a resend of the code contained in v1 but with an updated
cover letter to describe why we have chosen to use the generic netlink
mechanism to access the smc protocol's statistic data.

The patchset adds statistic support to the SMC protocol. Per-cpu
variables are used to collect the statistic information for better
performance and for reducing concurrency pitfalls. The code that is
collecting statistic data is implemented in macros to increase code
reuse and readability.
The generic netlink mechanism in SMC is extended to provide the
collected statistics to userspace.
Network namespace awareness is also part of the statistics
implementation.

SMC is a protocol interacting with PCI devices (like RoCE Cards) and
runs on top of the TCP protocol. As SMC is a network protocol and not
an ethernet device driver, we decided to use the generic netlink
interface. This should be comparable to what other protocols in the
net subsystem like tipc, ncsi, ieee802154 or tcp, et al, do.
There is already an established internal generic netlink interface
mechanism in SMC which is used to collect SMC Protocol internal
information. This patchset extends that existing mechanism.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Make SMC statistics network namespace aware
Guvenc Gulce [Wed, 16 Jun 2021 14:52:58 +0000 (16:52 +0200)]
net/smc: Make SMC statistics network namespace aware

Make the gathered SMC statistics network namespace aware; collect a
separate set of statistics for each namespace.

Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Add netlink support for SMC fallback statistics
Guvenc Gulce [Wed, 16 Jun 2021 14:52:57 +0000 (16:52 +0200)]
net/smc: Add netlink support for SMC fallback statistics

Add support to collect more detailed SMC fallback reason statistics and
provide these statistics to user space on the netlink interface.

Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Add netlink support for SMC statistics
Guvenc Gulce [Wed, 16 Jun 2021 14:52:56 +0000 (16:52 +0200)]
net/smc: Add netlink support for SMC statistics

Add the netlink function which collects the statistics information and
delivers it to userspace.

Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Add SMC statistics support
Guvenc Gulce [Wed, 16 Jun 2021 14:52:55 +0000 (16:52 +0200)]
net/smc: Add SMC statistics support

Add the ability to collect SMC statistics information. Per-cpu
variables are used to collect the statistic information for better
performance and for reducing concurrency pitfalls. The code that is
collecting statistic data is implemented in macros to increase code
reuse and readability.
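
The per-cpu idea, sketched with hypothetical structure, field and macro
names:

    /* one counter set per CPU; readers sum over all CPUs */
    struct smc_stats {
            u64     srv_conn_cnt;
            u64     clnt_conn_cnt;
    };

    DEFINE_PER_CPU(struct smc_stats, smc_stats);

    #define SMC_STAT_INC(field)     this_cpu_inc(smc_stats.field)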

Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agomlxsw: spectrum_router: remove redundant continue statement
Colin Ian King [Wed, 16 Jun 2021 13:02:58 +0000 (14:02 +0100)]
mlxsw: spectrum_router: remove redundant continue statement

The continue statement at the end of a for-loop has no effect;
remove it.

Addresses-Coverity: ("Continue has no effect")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'nfp-ct-part-two'
David S. Miller [Wed, 16 Jun 2021 19:42:53 +0000 (12:42 -0700)]
Merge branch 'nfp-ct-part-two'

Simon Horman says:

====================
Next set of conntrack patches for the nfp driver

Louis Peens says:

This follows on from the previous series of a similar nature.
Looking at the diagram explained in the previous series, this
implements changes up to the point where the merged nft entries
are saved. There are still bits of stubbed-out code where
offloading of the flows will be implemented.

+-------------+                      +----------+
| pre_ct flow +--------+             | nft flow |
+-------------+        v             +------+---+
                  +----------+              |
                  | tc_merge +--------+     |
                  +----------+        v     v
+--------------+       ^           +-------------+
| post_ct flow +-------+       +---+nft_tc merge |
+--------------+               |   +-------------+
                               |
                               |
                               |
                               v
                        Offload to nfp
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: implement action_merge check
Louis Peens [Wed, 16 Jun 2021 10:02:07 +0000 (12:02 +0200)]
nfp: flower-ct: implement action_merge check

Fill in the code stub to check that the flow actions are valid for
merging. The actions of flow X should not conflict with the
matches of flow X+1. For now this check is quite strict and
set_actions are very limited; this will need to be updated when
NAT support is added.

Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: fill ct metadata check function
Louis Peens [Wed, 16 Jun 2021 10:02:06 +0000 (12:02 +0200)]
nfp: flower-ct: fill ct metadata check function

Fill in the check_meta stub to check that the ct_metadata action fields
in the nft flow match the ct_match data of the post_ct flow.

Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: fill in ct merge check function
Louis Peens [Wed, 16 Jun 2021 10:02:05 +0000 (12:02 +0200)]
nfp: flower-ct: fill in ct merge check function

Replace the merge check stub code with the actual implementation. This
checks that the match parts of two tc flows do not conflict.
Only overlapping keys need to be checked, and only the narrowest
masked parts need to be checked, so each key is masked with the
AND'd result of both masks before comparing.
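
The comparison rule, sketched on a single field (generic illustration,
not the driver's actual per-key code):

    /* only bits covered by both masks can actually conflict */
    u32 common = mask_a & mask_b;

    if ((key_a & common) != (key_b & common))
            return -EINVAL;         /* matches conflict, cannot merge */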

Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: implement code to save merge of tc and nft flows
Louis Peens [Wed, 16 Jun 2021 10:02:04 +0000 (12:02 +0200)]
nfp: flower-ct: implement code to save merge of tc and nft flows

Add in the code to merge the tc_merge objects with the flows
received from nft. At the moment flows are just merged blindly
as the validity check functions are stubbed out; these will
be populated in follow-up patches.

Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: add nft_merge table
Louis Peens [Wed, 16 Jun 2021 10:02:03 +0000 (12:02 +0200)]
nfp: flower-ct: add nft_merge table

Add a table and struct to save the result of the three-way merge
between pre_ct, post_ct, and nft flows. The merging code is to be
added in follow-up patches.

Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: make a full copy of the rule when it is a NFT flow
Yinjun Zhang [Wed, 16 Jun 2021 10:02:02 +0000 (12:02 +0200)]
nfp: flower-ct: make a full copy of the rule when it is a NFT flow

The nft flow will be destroyed after the offload cb returns. This means
we need to save a full copy of it, since it can be referenced through
paths other than just the offload cb, for example when a new
pre_ct or post_ct entry is added and needs to be merged with
an existing nft entry.

Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: add nft flows to nft list
Louis Peens [Wed, 16 Jun 2021 10:02:01 +0000 (12:02 +0200)]
nfp: flower-ct: add nft flows to nft list

Implement code to add and remove nft flows to and from the relevant
list. Registering and deregistering the callback function for the nft
table is quite complicated. The safest approach is to delete the
callback on removal of the last pre_ct flow. This is because if this
is also the last pre_ct flow in software, this specific nft table
will be freed, so there will not be a later opportunity to do this.
Another place where it looks possible
to delete the callback is when the last nft_flow is deleted,
but this happens under the flow_table lock, which is also taken
when deregistering the callback, leading to a deadlock situation.

This means the final solution here is to delete the callback
when removing the last pre_ct flow, and then clean up any
remaining nft_flow entries which may still be present, since
there will never be a callback now to do this, leaving them
orphaned if not cleaned up here as well.

Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: add nft callback stubs
Louis Peens [Wed, 16 Jun 2021 10:02:00 +0000 (12:02 +0200)]
nfp: flower-ct: add nft callback stubs

Add register/unregister of the nft callback. For now just add
stub code to accept the flows, but don't do anything with them.
We decided to accept the flows since netfilter will keep on trying
to offload a flow if it is rejected, which is quite noisy.
Follow-up patches will start implementing the functions to add
nft flows to the relevant tables.

Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfp: flower-ct: add delete flow handling for ct
Louis Peens [Wed, 16 Jun 2021 10:01:59 +0000 (12:01 +0200)]
nfp: flower-ct: add delete flow handling for ct

Add functions to handle delete flow callbacks for ct flows. Also
accept the flows for offloading by returning 0 instead of -EOPNOTSUPP.
Flows will still not actually be offloaded to hw, but at this point
it's difficult to not accept the flows and also exercise the cleanup
paths properly. Traffic will still be handled safely through the
fallback path.

Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'net-phy-cleanups'
David S. Miller [Wed, 16 Jun 2021 19:34:08 +0000 (12:34 -0700)]
Merge branch 'net-phy-cleanups'

Weihang Li says:

====================
net: phy: fix some coding-style issues

Make some cleanups according to the coding style of kernel.

Changes since v1:
- Update commit description of #1 and #3.
- Avoid changing the indentation in #2.
- Change a group of if-else statement into switch from #4 and put it into
  a single patch.
- Put '|' at the end of line in #5 and #7.
- Avoid deleting spaces in definition of 'settings' in #5.
- Drop #8 from the series which needs more discussion with David.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: phy: replace if-else statements with switch
Weihang Li [Wed, 16 Jun 2021 10:01:26 +0000 (18:01 +0800)]
net: phy: replace if-else statements with switch

A switch statement is clearer than a group of 'if-else' statements.

Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>