Brett Creeley [Thu, 2 Dec 2021 16:38:39 +0000 (08:38 -0800)]
ice: Refactor spoofcheck configuration functions
Add functions to configure Tx VLAN antispoof based on the iproute
configuration and/or VLAN mode and VF driver support. This is needed so
that, later, the driver can control when spoof checking may be
configured. Also, add functions that can be used to enable and disable
MAC and VLAN spoofcheck. Move spoofchk configuration during VSI setup
into the SR-IOV initialization path and into the post VSI rebuild flow
for VF VSIs.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Oleksij Rempel [Wed, 9 Feb 2022 12:42:55 +0000 (13:42 +0100)]
net: usb: smsc95xx: add generic selftest support
Provide generic selftest support. Tested with LAN9500 and LAN9512.
Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wang Qing [Wed, 9 Feb 2022 08:39:19 +0000 (00:39 -0800)]
net: ethernet: cavium: use div64_u64() instead of do_div()
do_div() does a 64-by-32 division.
When the divisor is u64, do_div() truncates it to 32 bits: a divisor
that tests as non-zero can still be truncated to zero for the division.
Fix the do_div.cocci warning:
do_div() does a 64-by-32 division, please consider using div64_u64 instead.
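A minimal sketch of the pitfall (illustrative values):

    #include <linux/math64.h>

    u64 dividend = 1000000000ULL;
    u64 divisor = 0x100000000ULL;	/* non-zero u64, but low 32 bits are 0 */

    /*
     * do_div(dividend, divisor) would truncate the divisor to its low
     * 32 bits (zero here) even though the u64 tested as non-zero.
     * div64_u64() performs a full 64-by-64 division instead.
     */
    u64 quotient = div64_u64(dividend, divisor);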
Signed-off-by: Wang Qing <wangqing@vivo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Po Liu [Wed, 9 Feb 2022 12:33:03 +0000 (20:33 +0800)]
net:enetc: enetc qos using the CBDR dma alloc function
Now we can use enetc_cbd_alloc_data_mem() to replace the complicated
DMA data allocation method and the basic CBDR memory setup.
Signed-off-by: Po Liu <po.liu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Po Liu [Wed, 9 Feb 2022 12:33:02 +0000 (20:33 +0800)]
net:enetc: command BD ring data memory alloc as one function alone
Split the CBDR data memory allocation out into a standalone function.
This makes it convenient to reuse from other parts of the driver, for
example the ENETC QOS part.
Reported-and-suggested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Po Liu <po.liu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Po Liu [Wed, 9 Feb 2022 12:33:01 +0000 (20:33 +0800)]
net:enetc: allocate CBD ring data memory using DMA coherent methods
Replace the dma_map_single() streaming DMA mapping with the simpler
coherent method dma_alloc_coherent(). Tim Gardner found that the
dma_map_single() usage was not proper; Claudiu Manoil and Jakub
Kicinski suggested using dma_alloc_coherent(). Discussion at:
https://lore.kernel.org/netdev/AM9PR04MB8397F300DECD3C44D2EBD07796BD9@AM9PR04MB8397.eurprd04.prod.outlook.com/t/
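A hedged sketch of the shape of the change (field names are
illustrative, not the exact enetc code):

    /* before: separate allocation plus streaming DMA mapping */
    ring->bd_base = kzalloc(size, GFP_KERNEL);
    ring->bd_dma_base = dma_map_single(dev, ring->bd_base, size,
                                       DMA_BIDIRECTIONAL);

    /* after: one call returns both the CPU and DMA addresses, coherently */
    ring->bd_base = dma_alloc_coherent(dev, size, &ring->bd_dma_base,
                                       GFP_KERNEL);
    /* ... */
    dma_free_coherent(dev, size, ring->bd_base, ring->bd_dma_base);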
Fixes: 888ae5a3952ba ("net: enetc: add tc flower psfp offload driver")
cc: Claudiu Manoil <claudiu.manoil@nxp.com>
Reported-by: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Po Liu <po.liu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 13:15:35 +0000 (13:15 +0000)]
Merge branch 'dpaa2-eth-sw-TSO'
Ioana Ciornei says:
====================
dpaa2-eth: add support for software TSO
This series adds support for driver level TSO in the dpaa2-eth driver.
The first 5 patches lay the groundwork for the actual feature:
rearranging some variable declarations, cleaning up the interaction
with the S/G Table buffer cache etc.
The 6th patch adds the actual driver level software TSO support by using
the usual tso_build_hdr()/tso_build_data() APIs and creates the S/G FDs.
With this patch set we can see the following improvement in a TCP flow
running on a single A72@2.2GHz of the LX2160A SoC:
before: 6.38Gbit/s
after: 8.48Gbit/s
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:35 +0000 (11:23 +0200)]
soc: fsl: dpio: read the consumer index from the cache inhibited area
Once we added support in dpaa2-eth for driver level software TSO, we
observed the following situation: if the EQCR CI (consumer index) is
read from the cache-enabled area, we sometimes end up with a computed
value of available enqueue entries bigger than the size of the ring.
This eventually leads to the same FD being enqueued multiple times,
which makes the same FD end up on the Tx confirmation path and the
same skb being freed twice.
Just read the consumer index from the cache inhibited area so that we
avoid this situation.
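A hedged sketch of why a stale CI breaks the available-entries math
(illustrative, not the exact dpio code):

    /* EQCR free-entry math; pi and ci are producer/consumer indices */
    static unsigned int eqcr_available(unsigned int pi, unsigned int ci,
                                       unsigned int ring_size)
    {
            unsigned int used = pi - ci;	/* entries currently in flight */

            /*
             * If ci comes from a stale cache-enabled snapshot it can
             * appear to be ahead of pi; 'used' then underflows and the
             * result exceeds ring_size, letting the same FD be enqueued
             * twice. Reading CI from the cache-inhibited register area
             * returns the real index and keeps the result <= ring_size.
             */
            return ring_size - used;
    }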
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:34 +0000 (11:23 +0200)]
dpaa2-eth: add support for software TSO
This patch adds support for driver level TSO in the dpaa2-eth driver
using the TSO API.
There is not much to say about this specific implementation. We use
the usual tso_build_hdr() and tso_build_data() to create each data
segment, and we create an array of S/G FDs where the first S/G entry
references the header data and the remaining ones the data portion.
For the S/G Table buffer we use the same cache of buffers used in the
other non-GSO cases - dpaa2_eth_sgt_get() and dpaa2_eth_sgt_recycle().
We cannot keep a DMA coherent buffer for all the TSO headers because
the DPAA2 architecture does not work in a ring-based fashion, so we
just allocate a buffer each time.
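A sketch of the generic net/tso.h loop this implementation follows
(driver-specific details elided; get_segment_header() is a hypothetical
stand-in for the per-segment header buffer):

    #include <net/tso.h>

    struct tso_t tso;
    int hdr_len = tso_start(skb, &tso);
    int total_len = skb->len - hdr_len;

    while (total_len > 0) {
            int data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);
            char *hdr = get_segment_header();	/* hypothetical buffer */

            total_len -= data_left;
            /* write the header for this segment */
            tso_build_hdr(skb, hdr, &tso, data_left, total_len == 0);
            /* point S/G entries at the payload, chunk by chunk */
            while (data_left > 0) {
                    int size = min_t(int, tso.size, data_left);

                    /* ... add an S/G entry for (tso.data, size) ... */
                    data_left -= size;
                    tso_build_data(skb, &tso, size);
            }
    }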
Even with these limitations we get the following improvement in TCP
termination on the LX2160A SoC, on a single A72 core running at 2.2GHz.
before: 6.38Gbit/s
after: 8.48Gbit/s
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:33 +0000 (11:23 +0200)]
dpaa2-eth: work with an array of FDs
Up until now, the __dpaa2_eth_tx function used a single FD on the stack
to construct the structure to be enqueued. Since we are now preparing
the groundwork to add support for TSO done in software at the driver
level, the same function needs to work with an array of FDs and enqueue
as many as the build_*_fd functions create.
Make the necessary adjustments in order to do this. These include:
keeping an array of FDs in a percpu structure, cleaning up the
necessary FDs before populating it, and then retrying the enqueue
process until all the generated FDs are enqueued or the maximum number
of retries is reached.
This patch does not change the fact that only a single FD will result
from a __dpaa2_eth_tx call but rather just creates the necessary changes
for the next patch.
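An illustrative sketch of the retry shape described above (hw_enqueue()
and MAX_RETRIES are hypothetical stand-ins for the driver's enqueue
callback and retry limit):

    int enqueued = 0, retries = 0;

    while (enqueued < num_fds && retries < MAX_RETRIES) {
            int done = hw_enqueue(fq, &fds[enqueued], num_fds - enqueued);

            if (done < 0) {		/* portal busy, try again */
                    retries++;
                    continue;
            }
            enqueued += done;
    }
    /* any FDs that never made it out are unmapped and counted as drops */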
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:32 +0000 (11:23 +0200)]
dpaa2-eth: use the S/G table cache also for the normal S/G path
Instead of allocating memory for an S/G table each time a nonlinear skb
is processed, and then freeing it on the Tx confirmation path, use the
S/G table cache in order to reuse the memory.
For this to work we have to change the size of the cached buffers so
that they can hold the maximum number of scatterlist entries.
Other than that, each allocate/free call is replaced by a call to the
dpaa2_eth_sgt_get/dpaa2_eth_sgt_recycle functions, introduced in the
previous patch.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:31 +0000 (11:23 +0200)]
dpaa2-eth: extract the S/G table buffer cache interaction into functions
The dpaa2-eth driver uses, in certain circumstances, a buffer cache for
the S/G tables needed in case of an S/G FD. At the moment, the
interaction with the cache is open-coded and cannot be reused easily.
Add two new functions - dpaa2_eth_sgt_get and dpaa2_eth_sgt_recycle -
which help with code reusability.
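A rough sketch of the get/recycle pattern (per-CPU cache with a fixed
depth; the struct layout and the SGT_CACHE_SIZE constant are
assumptions, not the exact driver code):

    static void *dpaa2_eth_sgt_get(struct dpaa2_eth_priv *priv)
    {
            struct dpaa2_eth_sgt_cache *cache = this_cpu_ptr(priv->sgt_cache);

            if (cache->count == 0)	/* cache empty: fall back to a fresh alloc */
                    return napi_alloc_frag(priv->sgt_buf_size);
            return cache->buf[--cache->count];
    }

    static void dpaa2_eth_sgt_recycle(struct dpaa2_eth_priv *priv, void *buf)
    {
            struct dpaa2_eth_sgt_cache *cache = this_cpu_ptr(priv->sgt_cache);

            if (cache->count >= SGT_CACHE_SIZE) {	/* cache full: free it */
                    skb_free_frag(buf);
                    return;
            }
            cache->buf[cache->count++] = buf;
    }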
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:30 +0000 (11:23 +0200)]
dpaa2-eth: allocate a fragment already aligned
Instead of allocating memory and then manually aligning it to the
desired value, use napi_alloc_frag_align() directly to streamline the
process.
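In other words, roughly (illustrative variables):

    /* before: over-allocate, then align the pointer by hand */
    buf = napi_alloc_frag(size + align);
    addr = PTR_ALIGN(buf, align);

    /* after: the helper hands back an already-aligned fragment */
    addr = napi_alloc_frag_align(size, align);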
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:29 +0000 (11:23 +0200)]
dpaa2-eth: rearrange variable declaration in __dpaa2_eth_tx
In the next patches we'll be moving things around in the mentioned
function and also adding some new variable declarations. Before all
this, clean up the variable declaration order.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 13:02:33 +0000 (13:02 +0000)]
Merge branch 'octeontx2-af-priority-flow-control'
Hariprasad Kelam says:
====================
Priority flow control support for RVU netdev
Under network congestion, instead of pausing all traffic on a link,
PFC allows the user to selectively pause traffic according to its
class. This series of patches adds PFC support to the RVU netdev
drivers.
Patch 1 adds support to disable pause frames by default, as with PFC
the user can enable either PFC or 802.3 pause frames.
Patches 2 & 3 add resource management support for flow control
and configure the necessary registers for PFC.
Patch 4 adds dcb ops registration for the netdev drivers.
V2 changes:
Fix compilation error by exporting the required symbol 'otx2_config_pause_frm'
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Kelam [Wed, 9 Feb 2022 07:15:19 +0000 (12:45 +0530)]
octeontx2-pf: PFC config support with DCBx
Data centric bridging designed to eliminate packet loss due to
queue overflow by adding enhancements to ethernet network such as
proprity flow control etc. This patch adds support for management
of Priority flow control(PFC) on Octeontx2 and CN10K interfaces.
To enable PFC for all priorities
dcb pfc set dev eth0 prio-pfc all:on/off
To enable PFC on selected priorites
dcb pfc set dev eth0 prio-pfc 0:on/off 1:on/off ..7:on/off
With the ntuple commands user can map Priority to receive queues.
On queue overflow NIX will assert backpressure such that PFC pause frames
are genarated with mapped priority.
To map priority 7 to Queue 1
ethtool -U eth0 flow-type ether dst xx:xx:xx:xx:xx:xx vlan 0xe00a
m 0x1fff queue 1
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Kelam [Wed, 9 Feb 2022 07:15:18 +0000 (12:45 +0530)]
octeontx2-af: Flow control resource management
The CN10K MAC block (RPM) and the Octeontx2 MAC block (CGX) both
support PFC flow control and 802.3X flow control pause frames.
Each MAC block supports a maximum of 4 LMACs, and the AF driver
assigns the same (MAC, LMAC) pair to a PF and its VFs. As a PF and its
VFs share the same (MAC, LMAC) pair, we need resource management to
address the scenarios below:
1. Keep PFC and 802.3X pause frames mutually exclusive.
2. Reject a disable flow control request if another PF or VF
enabled it.
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sunil Kumar Kori [Wed, 9 Feb 2022 07:15:17 +0000 (12:45 +0530)]
octeontx2-af: Priority flow control configuration support
The priority-based flow control (802.1Qbb) mechanism is similar to
Ethernet pause frames (802.3x), except that instead of pausing all
traffic on a link, PFC allows the user to selectively pause traffic
according to its class. The Octeontx2 MAC block (CGX) and the CN10K
MAC block (RPM) both support PFC. As the upper layer mbox handler is
the same for both MACs, this patch configures PFC by calling the
appropriate callbacks.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Kelam [Wed, 9 Feb 2022 07:15:16 +0000 (12:45 +0530)]
octeontx2-af: Don't enable Pause frames by default
In the current implementation, 802.3x pause frames are enabled by
default. As the CGX and RPM blocks also support PFC (priority flow
control), instead of the driver enabling one of them, enable them only
upon request from a PF or its VFs. Also add support to disable pause
frames on driver unbind.
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 12:00:11 +0000 (12:00 +0000)]
Merge branch 'MCTP-tag-control-interface'
Jeremy Kerr says:
====================
MCTP tag control interface
This series implements a small interface for userspace-controlled
message tag allocation for the MCTP protocol. Rather than leaving the
kernel to allocate per-message tag values, userspace can explicitly
allocate (and release) message tags through two new ioctls:
SIOCMCTPALLOCTAG and SIOCMCTPDROPTAG.
In order to do this, we first introduce some minor changes to the tag
handling, including a couple of new tests for the route input paths.
As always, any comments/queries/etc are most welcome.
v2:
- make mctp_lookup_prealloc_tag static
- minor checkpatch formatting fixes
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Matt Johnston [Wed, 9 Feb 2022 04:05:57 +0000 (12:05 +0800)]
mctp: Add SIOCMCTP{ALLOC,DROP}TAG ioctls for tag control
This change adds a couple of new ioctls for mctp sockets:
SIOCMCTPALLOCTAG and SIOCMCTPDROPTAG. These ioctls provide facilities
for explicit allocation / release of tags, overriding the automatic
allocate-on-send/release-on-reply and timeout behaviours. This allows
userspace more control over messages that may not fit a simple
request/response model.
In order to indicate a pre-allocated tag to the sendmsg() syscall, we
introduce a new flag to the struct sockaddr_mctp.smctp_tag value:
MCTP_TAG_PREALLOC.
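A hedged userspace sketch of the intended flow, assuming the uapi
additions described above (struct mctp_ioc_tag_ctl with peer_addr/tag
fields; the message type value is arbitrary):

    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/mctp.h>

    int mctp_send_prealloc(int sd, unsigned char eid, const void *buf, size_t len)
    {
            struct mctp_ioc_tag_ctl ctl = { .peer_addr = eid };
            struct sockaddr_mctp addr = { 0 };
            int rc;

            if (ioctl(sd, SIOCMCTPALLOCTAG, &ctl))	/* kernel fills ctl.tag */
                    return -1;

            addr.smctp_family = AF_MCTP;
            addr.smctp_addr.s_addr = eid;
            addr.smctp_type = 1;			/* arbitrary message type */
            addr.smctp_tag = ctl.tag | MCTP_TAG_PREALLOC;

            rc = sendto(sd, buf, len, 0, (struct sockaddr *)&addr, sizeof(addr));
            ioctl(sd, SIOCMCTPDROPTAG, &ctl);	/* release the tag when done */
            return rc;
    }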
Additional changes from Jeremy Kerr <jk@codeconstruct.com.au>.
Contains a fix that was:
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Matt Johnston <matt@codeconstruct.com.au>
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeremy Kerr [Wed, 9 Feb 2022 04:05:56 +0000 (12:05 +0800)]
mctp: Allow keys matching any local address
Currently, we require an exact match on an incoming packet's dest
address, and the key's local_addr field.
In a future change, we may want to set up a key before packets are
routed, meaning we have no local address to match on.
This change allows key lookups to match on local_addr = MCTP_ADDR_ANY.
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeremy Kerr [Wed, 9 Feb 2022 04:05:55 +0000 (12:05 +0800)]
mctp: Add helper for address match checking
Currently, we have a couple of paths that check whether an EID matches,
or whether the match value is MCTP_ADDR_ANY.
Rather than open-coding this, add a little helper.
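The helper is presumably along these lines:

    static bool mctp_address_matches(mctp_eid_t match, mctp_eid_t eid)
    {
            return match == eid || match == MCTP_ADDR_ANY;
    }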
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeremy Kerr [Wed, 9 Feb 2022 04:05:54 +0000 (12:05 +0800)]
mctp: tests: Add key state tests
This change adds a few more tests to check the key/tag lookups on route
input. We add a specific entry to the keys lists, route a packet with
specific header values, and check for key match/mismatch.
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeremy Kerr [Wed, 9 Feb 2022 04:05:53 +0000 (12:05 +0800)]
mctp: tests: Rename FL_T macro to FL_TO
This is a definition for the tag-owner flag, which has TO as a standard
abbreviation. We'll want to add a helper for the actual tag value in a
future change.
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 11:57:54 +0000 (11:57 +0000)]
Merge branch '40GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
40GbE Intel Wired LAN Driver Updates 2022-02-08
Joe Damato says:
This patch set makes several updates to the i40e driver stats collection
and reporting code to help users of i40e get a better sense of how the
driver is performing and interacting with the rest of the kernel.
These patches include some new stats (like waived and busy) which were
inspired by other drivers that track stats using the same nomenclature.
The new stats and an existing stat, rx_reuse, are now accessible with
ethtool to make harvesting this data more convenient for users.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 8 Feb 2022 21:41:48 +0000 (13:41 -0800)]
ip6_tunnel: fix possible NULL deref in ip6_tnl_xmit
Make sure to test that skb has a dst attached to it.
general protection fault, probably for non-canonical address 0xdffffc0000000011: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000088-0x000000000000008f]
CPU: 0 PID: 32650 Comm: syz-executor.4 Not tainted 5.17.0-rc2-next-20220204-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:ip6_tnl_xmit+0x2140/0x35f0 net/ipv6/ip6_tunnel.c:1127
Code: 4d 85 f6 0f 85 c5 04 00 00 e8 9c b0 66 f9 48 83 e3 fe 48 b8 00 00 00 00 00 fc ff df 48 8d bb 88 00 00 00 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84 c0 74 07 7f 05 e8 11 25 b2 f9 44 0f b6 b3 88 00 00
RSP: 0018:ffffc900141b7310 EFLAGS: 00010206
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffc9000c77a000
RDX: 0000000000000011 RSI: ffffffff8811f854 RDI: 0000000000000088
RBP: ffffc900141b7480 R08: 0000000000000000 R09: 0000000000000008
R10: ffffffff8811f846 R11: 0000000000000008 R12: ffffc900141b7548
R13: ffff8880297c6000 R14: 0000000000000000 R15: ffff8880351c8dc0
FS:  00007f9827ba2700(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b31322000 CR3: 0000000033a70000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
ipxip6_tnl_xmit net/ipv6/ip6_tunnel.c:1386 [inline]
ip6_tnl_start_xmit+0x71e/0x1830 net/ipv6/ip6_tunnel.c:1435
__netdev_start_xmit include/linux/netdevice.h:4683 [inline]
netdev_start_xmit include/linux/netdevice.h:4697 [inline]
xmit_one net/core/dev.c:3473 [inline]
dev_hard_start_xmit+0x1eb/0x920 net/core/dev.c:3489
__dev_queue_xmit+0x2a24/0x3760 net/core/dev.c:4116
packet_snd net/packet/af_packet.c:3057 [inline]
packet_sendmsg+0x2265/0x5460 net/packet/af_packet.c:3084
sock_sendmsg_nosec net/socket.c:705 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:725
sock_write_iter+0x289/0x3c0 net/socket.c:1061
call_write_iter include/linux/fs.h:2075 [inline]
do_iter_readv_writev+0x47a/0x750 fs/read_write.c:726
do_iter_write+0x188/0x710 fs/read_write.c:852
vfs_writev+0x1aa/0x630 fs/read_write.c:925
do_writev+0x27f/0x300 fs/read_write.c:968
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f9828c2d059
Fixes: c1f55c5e0482 ("ip6_tunnel: allow routing IPv4 traffic in NBMA mode")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Qing Deng <i@moy.cat>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tianyu Lan [Tue, 8 Feb 2022 14:26:52 +0000 (09:26 -0500)]
Netvsc: Call hv_unmap_memory() in the netvsc_device_remove()
hv_unmap_memory() calls vunmap() internally, which should not be
called in interrupt context. The current code calls hv_unmap_memory()
in free_netvsc_device(), which is an RCU callback and may be called in
interrupt context. This will trigger the BUG_ON(in_interrupt()) in
vunmap(). Fix it by moving hv_unmap_memory() to
netvsc_device_remove().
Fixes: 846da38de0e8 ("net: netvsc: Add Isolation VM support for netvsc driver")
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 11:51:23 +0000 (11:51 +0000)]
Merge branch '1GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
1GbE Intel Wired LAN Driver Updates 2022-02-07
Corinna Vinschen says:
Fix the kernel warning "Missing unregister, handled but fix driver"
when running, e.g.,
$ ethtool -G eth0 rx 1024
on igc. Remove memset hack from igb and align igb code to igc.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Biju Das [Sun, 6 Feb 2022 20:24:25 +0000 (20:24 +0000)]
dt-bindings: net: renesas,etheravb: Document RZ/G2UL SoC
Document the Gigabit Ethernet IP found on the RZ/G2UL SoC. The Gigabit
Ethernet Interface is identical to the one found on the RZ/G2L SoC. No
driver changes are required, as the generic compatible string
"renesas,rzg2l-gbeth" will be used as a fallback.
Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Biju Das [Sun, 6 Feb 2022 20:24:24 +0000 (20:24 +0000)]
dt-bindings: net: renesas,etheravb: Document RZ/V2L SoC
Document the Gigabit Ethernet IP found on the RZ/V2L SoC. The Gigabit
Ethernet Interface is identical to the one found on the RZ/G2L SoC. No
driver changes are required, as the generic compatible string
"renesas,rzg2l-gbeth" will be used as a fallback.
Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Acked-by: Rob Herring <robh@kernel.org>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Luiz Angelo Daros de Luca [Tue, 8 Feb 2022 05:32:10 +0000 (02:32 -0300)]
net: dsa: typo in comment
Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220208053210.14831-1-luizluca@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Andy Shevchenko [Mon, 7 Feb 2022 21:07:30 +0000 (23:07 +0200)]
ptp_pch: Remove unused pch_pm_ops
The default values for the hooks in driver.pm are NULL.
Hence drop the unused pch_pm_ops.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20220207210730.75252-6-andriy.shevchenko@linux.intel.com
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Andy Shevchenko [Mon, 7 Feb 2022 21:07:29 +0000 (23:07 +0200)]
ptp_pch: Convert to use managed functions pcim_* and devm_*
This makes the error handling much simpler than open-coding everything
and in addition makes the probe function smaller and tidier.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20220207210730.75252-5-andriy.shevchenko@linux.intel.com
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Andy Shevchenko [Mon, 7 Feb 2022 21:07:28 +0000 (23:07 +0200)]
ptp_pch: Switch to use module_pci_driver() macro
Eliminate some boilerplate code by using module_pci_driver() instead of
init/exit, and, if needed, moving the salient bits from init into probe.
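The macro expands to the boilerplate init/exit pair; the result looks
roughly like this (the id-table name is illustrative):

    static struct pci_driver pch_driver = {
            .name     = KBUILD_MODNAME,
            .id_table = pch_ieee1588_pcidev_id,	/* illustrative name */
            .probe    = pch_probe,
            .remove   = pch_remove,
    };
    module_pci_driver(pch_driver);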
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20220207210730.75252-4-andriy.shevchenko@linux.intel.com
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Andy Shevchenko [Mon, 7 Feb 2022 21:07:27 +0000 (23:07 +0200)]
ptp_pch: Use ioread64_hi_lo() / iowrite64_hi_lo()
There are already helper functions to do 64-bit I/O on 32-bit machines
or buses, thus we don't need to reinvent the wheel.
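For illustration, the helper replaces the usual open-coded pair of
32-bit accesses (a sketch, not the ptp_pch diff itself):

    #include <linux/io-64-nonatomic-hi-lo.h>

    /* before: two 32-bit reads stitched together by hand, high word first */
    hi = ioread32(addr + 4);
    lo = ioread32(addr);
    val = ((u64)hi << 32) | lo;

    /* after: the helper performs the same high-then-low sequence */
    val = ioread64_hi_lo(addr);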
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20220207210730.75252-3-andriy.shevchenko@linux.intel.com
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Andy Shevchenko [Mon, 7 Feb 2022 21:07:26 +0000 (23:07 +0200)]
ptp_pch: Use ioread64_lo_hi() / iowrite64_lo_hi()
There are already helper functions to do 64-bit I/O on 32-bit machines
or buses, thus we don't need to reinvent the wheel.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20220207210730.75252-2-andriy.shevchenko@linux.intel.com
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Andy Shevchenko [Mon, 7 Feb 2022 21:07:25 +0000 (23:07 +0200)]
ptp_pch: use mac_pton()
Use mac_pton() instead of custom approach.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Link: https://lore.kernel.org/r/20220207210730.75252-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 9 Feb 2022 04:41:46 +0000 (20:41 -0800)]
Merge branch 'net-speedup-netns-dismantles'
Eric Dumazet says:
====================
net: speedup netns dismantles
From: Eric Dumazet <edumazet@google.com>
In this series, I made network namespace deletions more scalable,
by 4x on the little benchmark described in this cover letter.
- Remove a bottleneck in ipv6 addrconf, by replacing a global
hash table with a per-netns one.
- Rework many (struct pernet_operations)->exit() handlers into
exit_batch() ones. This removes many rtnl acquisitions,
and gives cleanup_net() a kind of priority over rtnl
ownership.
Tested on a host with 24 cpus (48 HT)
Test script:
for nr in {1..10}
do
(for i in {1..10000}; do unshare -n /bin/bash -c "ifconfig lo up"; done) &
done
wait
for i in {1..10}
do
sleep 1
echo 3 >/proc/sys/vm/drop_caches
grep net_namespace /proc/slabinfo
done
Before: We can see the host struggles to clean up the netns, even after
there are no new creations.
Memory cost is high, because each netns consumes a good amount of memory.
time ./unshare10.sh
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
net_namespace 37214 37792 3968 1 1 : tunables 24 12 8 : slabdata 37214 37792 192
real 6m57.766s
user 3m37.277s
sys 40m4.826s
After: We can see the script completes much faster, and
the kernel thread doing the cleanup_net() keeps up just fine.
Memory cost is not too big.
time ./unshare10.sh
net_namespace 9945 9945 4096 1 1 : tunables 24 12 8 : slabdata 9945 9945 0
net_namespace 4087 4665 4096 1 1 : tunables 24 12 8 : slabdata 4087 4665 192
net_namespace 4082 4607 4096 1 1 : tunables 24 12 8 : slabdata 4082 4607 192
net_namespace 234 761 4096 1 1 : tunables 24 12 8 : slabdata 234 761 192
net_namespace 224 751 4096 1 1 : tunables 24 12 8 : slabdata 224 751 192
net_namespace 218 745 4096 1 1 : tunables 24 12 8 : slabdata 218 745 192
net_namespace 193 667 4096 1 1 : tunables 24 12 8 : slabdata 193 667 172
net_namespace 167 609 4096 1 1 : tunables 24 12 8 : slabdata 167 609 152
net_namespace 167 609 4096 1 1 : tunables 24 12 8 : slabdata 167 609 152
net_namespace 157 609 4096 1 1 : tunables 24 12 8 : slabdata 157 609 152
real 1m43.876s
user 3m39.728s
sys 7m36.342s
====================
Link: https://lore.kernel.org/r/20220208045038.2635826-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:38 +0000 (20:50 -0800)]
net: remove default_device_exit()
For some reason default_device_ops kept two exit methods:
1) default_device_exit() is called for each netns being dismantled in
a cleanup_net() round. This acquires rtnl for each invocation.
2) default_device_exit_batch() is called once with the list of all netns
in the batch, allowing for a single rtnl invocation.
Get rid of the .exit() method and handle its logic from
default_device_exit_batch(), to decrease the number of rtnl
acquisitions to one.
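The conversion pattern used throughout this series looks roughly like
this (the foo_* names are placeholders):

    static void foo_net_exit_batch(struct list_head *net_exit_list)
    {
            struct net *net;

            rtnl_lock();	/* one acquisition for the whole batch */
            list_for_each_entry(net, net_exit_list, exit_list)
                    foo_exit_one(net);	/* placeholder per-netns teardown */
            rtnl_unlock();
    }

    static struct pernet_operations foo_net_ops = {
            .exit_batch = foo_net_exit_batch,
    };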
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:37 +0000 (20:50 -0800)]
bonding: switch bond_net_exit() to batch mode
cleanup_net() is competing with other rtnl users.
Batching bond_net_exit() factorizes all rtnl acquisitions
into a single one, giving cleanup_net() a chance
to progress much faster while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jay Vosburgh <j.vosburgh@gmail.com>
Cc: Veaceslav Falico <vfalico@gmail.com>
Cc: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:36 +0000 (20:50 -0800)]
can: gw: switch cangw_pernet_exit() to batch mode
cleanup_net() is competing with other rtnl users.
Avoiding acquiring rtnl for each netns before calling
cgw_remove_all_jobs() gives cleanup_net() a chance
to progress much faster while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Acked-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:35 +0000 (20:50 -0800)]
ipmr: introduce ipmr_net_exit_batch()
cleanup_net() is competing with other rtnl users.
Avoiding acquiring rtnl for each netns before calling
ipmr_rules_exit() gives cleanup_net() a chance
to progress much faster while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:34 +0000 (20:50 -0800)]
ip6mr: introduce ip6mr_net_exit_batch()
cleanup_net() is competing with other rtnl users.
Avoiding acquiring rtnl for each netns before calling
ip6mr_rules_exit() gives cleanup_net() a chance
to progress much faster while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:33 +0000 (20:50 -0800)]
ipv6: change fib6_rules_net_exit() to batch mode
cleanup_net() is competing with other rtnl users.
fib6_rules_net_exit() seems a good candidate for exit_batch(),
as this gives cleanup_net() a chance to progress much faster
while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:32 +0000 (20:50 -0800)]
ipv4: add fib_net_exit_batch()
cleanup_net() is competing with other rtnl users.
Instead of acquiring rtnl at each fib_net_exit() invocation,
add fib_net_exit_batch() so that rtnl is acquired once.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:31 +0000 (20:50 -0800)]
nexthop: change nexthop_net_exit() to nexthop_net_exit_batch()
cleanup_net() is competing with other rtnl users.
nexthop_net_exit() seems a good candidate for exit_batch(),
as this gives cleanup_net() a chance to progress much faster
while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:30 +0000 (20:50 -0800)]
ipv6/addrconf: switch to per netns inet6_addr_lst hash table
IPv6 does not scale very well with the number of IPv6 addresses.
It uses a global (shared by all netns) hash table with 256 buckets.
Some functions like addrconf_verify_rtnl() and addrconf_ifdown()
have to iterate all addresses in the hash table.
I have seen addrconf_verify_rtnl() holding the cpu for 10ms or more.
Switch to the per netns hashtable (and spinlock) added
in prior patches.
This considerably speeds up netns dismantle times on hosts
with thousands of netns. This also has an impact
on regular (fast path) IPv6 processing.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:29 +0000 (20:50 -0800)]
ipv6/addrconf: use one delayed work per netns
The next step for using a per-netns inet6_addr_lst
is to have a per-netns work item to ultimately
call addrconf_verify_rtnl() and addrconf_verify()
with a new 'struct net *' argument.
Everything still uses the global inet6_addr_lst[] table.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:28 +0000 (20:50 -0800)]
ipv6/addrconf: allocate a per netns hash table
Add a per-netns hash table and a dedicated spinlock, as a
first step to getting rid of the global inet6_addr_lst[] table.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 7 Feb 2022 18:41:07 +0000 (10:41 -0800)]
net: add dev->dev_registered_tracker
Convert one dev_hold()/dev_put() pair in register_netdevice()
and unregister_netdevice_many() to dev_hold_track()
and dev_put_track().
This allows detecting a rogue dev_put() a bit earlier.
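For reference, the tracked variants take an extra tracker cookie that
lets the ref_tracker infrastructure pair each hold with its put (a
sketch; the tracker normally lives in a longer-lived structure):

    netdevice_tracker tracker;

    dev_hold_track(dev, &tracker, GFP_KERNEL);
    /* ... use dev ... */
    dev_put_track(dev, &tracker);	/* mismatches are flagged by ref_tracker */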
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20220207184107.1401096-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 00:48:55 +0000 (16:48 -0800)]
et131x: support arbitrary MAX_SKB_FRAGS
This NIC does not support TSO, so it is very unlikely it would
have to send packets with many fragments.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20220208004855.1887345-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 9 Feb 2022 00:23:38 +0000 (16:23 -0800)]
Merge branch 'iwl-next' of git://git./linux/kernel/git/tnguy/linux
Nguyen, Anthony L says:
====================
iwl-next Intel Wired LAN Driver Updates 2022-02-07
Dave adds support for the ice driver to provide DSCP QoS mappings to
the irdma driver.
[1] https://lore.kernel.org/netdev/20220202191921.1638-1-shiraz.saleem@intel.com/
* 'iwl-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux:
ice: add support for DSCP QoS for IDC
====================
Link: https://lore.kernel.org/r/20220207235921.1303522-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Joe Damato [Fri, 17 Dec 2021 19:35:19 +0000 (11:35 -0800)]
i40e: Add a stat for tracking busy rx pages
In some cases, pages cannot be reused by i40e because the page is busy. Add
a counter for this event.
Busy page count is accessible via ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Dave Switzer <david.switzer@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Joe Damato [Fri, 17 Dec 2021 19:35:18 +0000 (11:35 -0800)]
i40e: Add a stat for tracking pages waived
In some cases, pages cannot be reused because they are not associated
with the correct NUMA zone. Knowing how often pages are waived helps
users understand the interaction between the driver's memory usage and
their system.
Pass rx_stats through to i40e_can_reuse_rx_page to allow tracking when
pages are waived.
The page waive count is accessible via ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Dave Switzer <david.switzer@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Joe Damato [Fri, 17 Dec 2021 19:35:17 +0000 (11:35 -0800)]
i40e: Add a stat tracking new RX page allocations
Add a counter for new page allocations in the i40e RX path. This stat is
accessible with ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Dave Switzer <david.switzer@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Joe Damato [Fri, 17 Dec 2021 19:35:16 +0000 (11:35 -0800)]
i40e: Aggregate and export RX page reuse stat
RX page reuse was already being tracked by the i40e driver per RX ring.
Aggregate the counts and make them accessible via ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Dave Switzer <david.switzer@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Joe Damato [Fri, 17 Dec 2021 19:35:15 +0000 (11:35 -0800)]
i40e: Remove rx page reuse double count
Page reuse was being tracked from two locations:
- i40e_reuse_rx_page (via i40e_clean_rx_irq), and
- i40e_alloc_mapped_page
Remove the double count and only count reuse from i40e_alloc_mapped_page
when the page is about to be reused.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Dave Switzer <david.switzer@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jakub Kicinski [Tue, 8 Feb 2022 04:12:48 +0000 (20:12 -0800)]
Merge branch 'inet-separate-dscp-from-ecn-bits-using-new-dscp_t-type'
Guillaume Nault says:
====================
inet: Separate DSCP from ECN bits using new dscp_t type
The networking stack currently doesn't clearly distinguish between DSCP
and ECN bits. The entire DSCP+ECN bits are stored in u8 variables (or
structure fields), and each part of the stack handles them in their own
way, using different macros. This has created several bugs in the past
and some uncommon code paths are still unfixed.
Such bugs generally manifest by selecting invalid routes because of ECN
bits interfering with FIB routes and rules lookups (more details in the
LPC 2021 talk[1] and in the RFC of this series[2]).
This patch series aims at preventing the introduction of such bugs (and
detecting existing ones), by introducing a dscp_t type, representing
"sanitised" DSCP values (that is, with no ECN information), as opposed
to plain u8 values that contain both DSCP and ECN information. dscp_t
makes it clear for the reader what we're working on, and Sparse can
flag invalid interactions between dscp_t and plain u8.
This series converts only a few variables and structures:
* Patch 1 converts the tclass field of struct fib6_rule. It
effectively forbids the use of ECN bits in the tos/dsfield option
of ip -6 rule. Rules now match packets solely based on their DSCP
bits, so ECN doesn't influence the result any more. This contrasts
with the previous behaviour where all 8 bits of the Traffic Class
field were used. It is believed that this change is acceptable as
matching ECN bits wasn't usable for IPv4, so only IPv6-only
deployments could be depending on it. Also the previous behaviour
made DSCP-based ip6-rules fail for packets with both a DSCP and an
ECN mark, which is another reason why any such deploy is unlikely.
* Patch 2 converts the tos field of struct fib4_rule. This one too
effectively forbids defining ECN bits, this time in ip -4 rule.
Before that, setting ECN bit 1 was accepted, while ECN bit 0 was
rejected. But even when accepted, the rule would never match, as
the packets would have their ECN bits cleared before doing the
rule lookup.
* Patch 3 converts the fc_tos field of struct fib_config. This is
equivalent to patch 2, but for IPv4 routes. Routes using a
tos/dsfield option with any ECN bit set are now rejected. Before
this patch, they were accepted but, as with ip4 rules, these routes
couldn't match any packet, since their ECN bits are cleared before
the lookup.
* Patch 4 converts the fa_tos field of struct fib_alias. This one is
pure internal u8 to dscp_t conversion. While patches 1-3 had user
facing consequences, this patch shouldn't have any side effect and
is there to give an overview of what future conversion patches will
look like. Conversions are quite mechanical, but imply some code
churn, which is the price for the extra clarity and the possibility
of type checking.
To summarise, all the behaviour changes required for the dscp_t type
approach to work should be contained in patches 1-3. These changes are
edge cases of ip-route and ip-rule that don't currently work properly.
So they should be safe. Also, a kernel selftest is added for each of
them.
Finally, this work also paves the way for allowing the usage of the 3
high order DSCP bits in IPv4 (a few call paths already handle them, but
in general the stack clears them before IPv4 rule and route lookups).
References:
[1] LPC 2021 talk:
- https://linuxplumbersconf.org/event/11/contributions/943/
- Direct link to slide deck:
https://linuxplumbersconf.org/event/11/contributions/943/attachments/901/1780/inet_tos_lpc2021.pdf
[2] RFC version of this series:
- https://lore.kernel.org/netdev/cover.1638814614.git.gnault@redhat.com/
====================
Link: https://lore.kernel.org/r/cover.1643981839.git.gnault@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Guillaume Nault [Fri, 4 Feb 2022 13:58:19 +0000 (14:58 +0100)]
ipv4: Use dscp_t in struct fib_alias
Use the new dscp_t type to replace the fa_tos field of fib_alias. This
ensures ECN bits are ignored and makes the field compatible with the
fc_dscp field of struct fib_config.
Converting old *tos variables and fields to dscp_t allows sparse to
flag incorrect uses of DSCP and ECN bits. This patch is entirely about
type annotation and shouldn't change any existing behaviour.
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Acked-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Guillaume Nault [Fri, 4 Feb 2022 13:58:16 +0000 (14:58 +0100)]
ipv4: Reject routes specifying ECN bits in rtm_tos
Use the new dscp_t type to replace the fc_tos field of fib_config, to
ensure IPv4 routes aren't influenced by ECN bits when configured with
non-zero rtm_tos.
Before this patch, IPv4 routes specifying an rtm_tos with some of the
ECN bits set were accepted. However they wouldn't work (never match) as
IPv4 normally clears the ECN bits with IPTOS_RT_MASK before doing a FIB
lookup (although a few buggy code paths don't).
After this patch, IPv4 routes specifying an rtm_tos with any ECN bit
set are rejected.
Note: IPv6 routes ignore rtm_tos altogether, any rtm_tos is accepted,
but treated as if it were 0.
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Acked-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Guillaume Nault [Fri, 4 Feb 2022 13:58:14 +0000 (14:58 +0100)]
ipv4: Stop taking ECN bits into account in fib4-rules
Use the new dscp_t type to replace the tos field of struct fib4_rule,
so that fib4-rules consistently ignore ECN bits.
Before this patch, fib4-rules did accept rules with the high order ECN
bit set (but not the low order one). Also, it relied on its callers
masking the ECN bits of ->flowi4_tos to prevent those from influencing
the result. This was brittle and a few call paths still do the lookup
without masking the ECN bits first.
After this patch fib4-rules only compare the DSCP bits. ECN can't
influence the result anymore, even if the caller didn't mask these
bits. Also, fib4-rules now must have both ECN bits cleared or they will
be rejected.
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Acked-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Guillaume Nault [Fri, 4 Feb 2022 13:58:11 +0000 (14:58 +0100)]
ipv6: Define dscp_t and stop taking ECN bits into account in fib6-rules
Define a dscp_t type and its appropriate helpers that ensure ECN bits
are not taken into account when handling DSCP.
Use this new type to replace the tclass field of struct fib6_rule, so
that fib6-rules don't get influenced by ECN bits anymore.
Before this patch, fib6-rules didn't make any distinction between the
DSCP and ECN bits. Therefore, rules specifying a DSCP (tos or dsfield
options in iproute2) stopped working as soon as a packet had at least
one of its ECN bits set (as a workaround, one could create four rules
for each DSCP value to match, one for each possible ECN value).
After this patch fib6-rules only compare the DSCP bits. ECN doesn't
influence the result anymore. Also, fib6-rules now must have the ECN
bits cleared or they will be rejected.
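In spirit, the new type and its conversion helpers look like this (a
sketch based on the description above; __bitwise is what lets Sparse
flag mixed dscp_t/u8 arithmetic):

    typedef u8 __bitwise dscp_t;

    #define INET_DSCP_MASK 0xfc	/* the six high-order bits of the dsfield */

    static inline dscp_t inet_dsfield_to_dscp(__u8 dsfield)
    {
            return (__force dscp_t)(dsfield & INET_DSCP_MASK);
    }

    static inline __u8 inet_dscp_to_dsfield(dscp_t dscp)
    {
            return (__force __u8)dscp;
    }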
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Acked-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Yannick Vignon [Fri, 4 Feb 2022 13:55:44 +0000 (14:55 +0100)]
net: stmmac: optimize locking around PTP clock reads
Reading the PTP clock is a simple operation requiring only 3 register
reads. Under a PREEMPT_RT kernel, protecting those reads by a spin_lock is
counter-productive: if the 2nd task preempting the 1st has a higher prio
but needs to read time as well, it will require 2 context switches, which
will pretty much always be more costly than just disabling preemption for
the duration of the reads. Moreover, with the code logic recently added
to get_systime(), disabling preemption is not even required anymore:
reads and writes just need to be protected from each other, to prevent a
clock read while the clock is being updated.
Improve the above situation by replacing the PTP spinlock with a
rwlock, and using read_lock for PTP clock reads so simultaneous reads
do not block each other.
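A sketch of the resulting locking split (names are illustrative):

    /* clock reads: may run concurrently with each other */
    read_lock_irqsave(&priv->ptp_lock, flags);
    ns = read_systime_regs();	/* hypothetical 3-register read */
    read_unlock_irqrestore(&priv->ptp_lock, flags);

    /* clock updates: exclusive against readers and other writers */
    write_lock_irqsave(&priv->ptp_lock, flags);
    write_systime_regs(ns);
    write_unlock_irqrestore(&priv->ptp_lock, flags);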
Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
Link: https://lore.kernel.org/r/20220204135545.2770625-1-yannick.vignon@oss.nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 00:35:02 +0000 (16:35 -0800)]
net: typhoon: include <net/vxlan.h>
We need this to get the vxlan_features_check() definition.
Fixes: d2692eee05b8 ("net: typhoon: implement ndo_features_check method")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20220208003502.1799728-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Corinna Vinschen [Wed, 19 Jan 2022 14:52:59 +0000 (15:52 +0100)]
igb: refactor XDP registration
When changing the RX ring parameters, igb uses a hack to avoid a
warning when calling xdp_rxq_info_reg via igb_setup_rx_resources: it
just clears the struct xdp_rxq_info contents.
Instead, change this to unregister if we're already registered. Align
the code with the igc code.
Fixes: 9cbc948b5a20c ("igb: add XDP support")
Signed-off-by: Corinna Vinschen <vinschen@redhat.com>
Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Corinna Vinschen [Wed, 19 Jan 2022 14:52:58 +0000 (15:52 +0100)]
igc: avoid kernel warning when changing RX ring parameters
Calling ethtool changing the RX ring parameters like this:
$ ethtool -G eth0 rx 1024
on igc triggers kernel warnings like this:
[ 225.198467] ------------[ cut here ]------------
[ 225.198473] Missing unregister, handled but fix driver
[ 225.198485] WARNING: CPU: 7 PID: 959 at net/core/xdp.c:168 xdp_rxq_info_reg+0x79/0xd0
[...]
[ 225.198601] Call Trace:
[ 225.198604] <TASK>
[ 225.198609] igc_setup_rx_resources+0x3f/0xe0 [igc]
[ 225.198617] igc_ethtool_set_ringparam+0x30e/0x450 [igc]
[ 225.198626] ethnl_set_rings+0x18a/0x250
[ 225.198631] genl_family_rcv_msg_doit+0xca/0x110
[ 225.198637] genl_rcv_msg+0xce/0x1c0
[ 225.198640] ? rings_prepare_data+0x60/0x60
[ 225.198644] ? genl_get_cmd+0xd0/0xd0
[ 225.198647] netlink_rcv_skb+0x4e/0xf0
[ 225.198652] genl_rcv+0x24/0x40
[ 225.198655] netlink_unicast+0x20e/0x330
[ 225.198659] netlink_sendmsg+0x23f/0x480
[ 225.198663] sock_sendmsg+0x5b/0x60
[ 225.198667] __sys_sendto+0xf0/0x160
[ 225.198671] ? handle_mm_fault+0xb2/0x280
[ 225.198676] ? do_user_addr_fault+0x1eb/0x690
[ 225.198680] __x64_sys_sendto+0x20/0x30
[ 225.198683] do_syscall_64+0x38/0x90
[ 225.198687] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 225.198693] RIP: 0033:0x7f7ae38ac3aa
igc_ethtool_set_ringparam() copies the igc_ring structure but neglects to
reset the xdp_rxq_info member before calling igc_setup_rx_resources().
This in turn calls xdp_rxq_info_reg() with an already registered xdp_rxq_info.
Make sure to unregister the xdp_rxq_info structure first in
igc_setup_rx_resources.
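The fix boils down to the following shape (a sketch of the pattern,
not the exact diff):

    if (xdp_rxq_info_is_reg(&ring->xdp_rxq))
            xdp_rxq_info_unreg(&ring->xdp_rxq);

    res = xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
                           ring->queue_index, 0);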
Fixes: 73f1071c1d29 ("igc: Add support for XDP_TX action")
Reported-by: Lennert Buytenhek <buytenh@arista.com>
Signed-off-by: Corinna Vinschen <vinschen@redhat.com>
Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Dan Carpenter [Mon, 7 Feb 2022 08:24:39 +0000 (11:24 +0300)]
net: dsa: mv88e6xxx: Unlock on error in mv88e6xxx_port_bridge_join()
Call mv88e6xxx_reg_unlock(chip) before returning on this error path.
Fixes: 7af4a361a62f ("net: dsa: mv88e6xxx: Improve isolation of standalone ports")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Mon, 7 Feb 2022 08:22:53 +0000 (11:22 +0300)]
net: dsa: mv88e6xxx: Fix off by one in mv88e6185_phylink_get_caps()
The <= ARRAY_SIZE() needs to be < ARRAY_SIZE() to prevent an out of
bounds error.
Fixes: d4ebf12bcec4 ("net: dsa: mv88e6xxx: populate supported_interfaces and mac_capabilities")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yufeng Mo [Mon, 7 Feb 2022 01:44:23 +0000 (09:44 +0800)]
net: hns3: add support for TX push mode
For the device that supports the TX push capability, the BD can
be directly copied to the device memory. However, due to hardware
restrictions, the push mode can be used only when there are no
more than two BDs, otherwise, the doorbell mode based on device
memory is used.
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pavel Skripkin [Sun, 6 Feb 2022 18:05:16 +0000 (21:05 +0300)]
net: asix: add proper error handling of usb read errors
Syzbot once again hit an uninitialized value in the asix driver. The
problem is still the same: asix_read_cmd() reads fewer bytes than were
requested by the caller.
Since all read requests are performed via asix_read_cmd(), let's catch
USB-related errors there and add the __must_check notation to be sure
all callers actually check the return value.
So, this patch adds a sanity check inside asix_read_cmd() that simply
checks whether fewer bytes were read than requested, and adds the
missing error handling of asix_read_cmd() all across the driver code.
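A sketch of the shape of the check (signature simplified; the real
function also handles the in_pm variant):

    static int __must_check asix_read_cmd(struct usbnet *dev, u8 cmd,
                                          u16 value, u16 index,
                                          u16 size, void *data)
    {
            int ret = usbnet_read_cmd(dev, cmd,
                                      USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                                      value, index, data, size);

            if (unlikely(ret < 0))
                    return ret;
            if (unlikely(ret != size))
                    return -ENODATA;	/* short read: caller must not use data */
            return ret;
    }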
Fixes: d9fe64e51114 ("net: asix: Add in_pm parameter")
Reported-and-tested-by: syzbot+6ca9f7867b77c2d316ac@syzkaller.appspotmail.com
Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
Tested-by: Oleksij Rempel <o.rempel@pengutronix.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Sun, 6 Feb 2022 16:07:13 +0000 (17:07 +0100)]
r8169: factor out redundant RTL8168d PHY config functionality to rtl8168d_1_common()
rtl8168d_2_hw_phy_config() shares quite a lot of functionality with
rtl8168d_1_hw_phy_config(), so let's factor out the common part into a
new function, rtl8168d_1_common(). In addition, improve the code a little.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sun, 6 Feb 2022 15:56:00 +0000 (07:56 -0800)]
ip6mr: fix use-after-free in ip6mr_sk_done()
Apparently addrconf_exit_net() is called before igmp6_net_exit()
and ndisc_net_exit() at netns dismantle time:
net_namespace: call ip6table_mangle_net_exit()
net_namespace: call ip6_tables_net_exit()
net_namespace: call ipv6_sysctl_net_exit()
net_namespace: call ioam6_net_exit()
net_namespace: call seg6_net_exit()
net_namespace: call ping_v6_proc_exit_net()
net_namespace: call tcpv6_net_exit()
ip6mr_sk_done sk=ffffa354c78a74c0
net_namespace: call ipv6_frags_exit_net()
net_namespace: call addrconf_exit_net()
net_namespace: call ip6addrlbl_net_exit()
net_namespace: call ip6_flowlabel_net_exit()
net_namespace: call ip6_route_net_exit_late()
net_namespace: call fib6_rules_net_exit()
net_namespace: call xfrm6_net_exit()
net_namespace: call fib6_net_exit()
net_namespace: call ip6_route_net_exit()
net_namespace: call ipv6_inetpeer_exit()
net_namespace: call if6_proc_net_exit()
net_namespace: call ipv6_proc_exit_net()
net_namespace: call udplite6_proc_exit_net()
net_namespace: call raw6_exit_net()
net_namespace: call igmp6_net_exit()
ip6mr_sk_done sk=ffffa35472b2a180
ip6mr_sk_done sk=ffffa354c78a7980
net_namespace: call ndisc_net_exit()
ip6mr_sk_done sk=ffffa35472b2ab00
net_namespace: call ip6mr_net_exit()
net_namespace: call inet6_net_exit()
This was fine because ip6mr_sk_done() would not reach the point of
decreasing net->ipv6.devconf_all->mc_forwarding until my recent change
to ip6mr_sk_done(). To fix this without changing the
struct pernet_operations ordering, we can clear net->ipv6.devconf_dflt
and net->ipv6.devconf_all when they are freed from addrconf_exit_net().
BUG: KASAN: use-after-free in instrument_atomic_read include/linux/instrumented.h:71 [inline]
BUG: KASAN: use-after-free in atomic_read include/linux/atomic/atomic-instrumented.h:27 [inline]
BUG: KASAN: use-after-free in ip6mr_sk_done+0x11b/0x410 net/ipv6/ip6mr.c:1578
Read of size 4 at addr ffff88801ff08688 by task kworker/u4:4/963
CPU: 0 PID: 963 Comm: kworker/u4:4 Not tainted 5.17.0-rc2-syzkaller-00650-g5a8fb33e5305 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: netns cleanup_net
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
print_address_description.constprop.0.cold+0x8d/0x336 mm/kasan/report.c:255
__kasan_report mm/kasan/report.c:442 [inline]
kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
check_region_inline mm/kasan/generic.c:183 [inline]
kasan_check_range+0x13d/0x180 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:71 [inline]
atomic_read include/linux/atomic/atomic-instrumented.h:27 [inline]
ip6mr_sk_done+0x11b/0x410 net/ipv6/ip6mr.c:1578
rawv6_close+0x58/0x80 net/ipv6/raw.c:1201
inet_release+0x12e/0x280 net/ipv4/af_inet.c:428
inet6_release+0x4c/0x70 net/ipv6/af_inet6.c:478
__sock_release net/socket.c:650 [inline]
sock_release+0x87/0x1b0 net/socket.c:678
inet_ctl_sock_destroy include/net/inet_common.h:65 [inline]
igmp6_net_exit+0x6b/0x170 net/ipv6/mcast.c:3173
ops_exit_list+0xb0/0x170 net/core/net_namespace.c:168
cleanup_net+0x4ea/0xb00 net/core/net_namespace.c:600
process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
worker_thread+0x657/0x1110 kernel/workqueue.c:2454
kthread+0x2e9/0x3a0 kernel/kthread.c:377
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Fixes: f2f2325ec799 ("ip6mr: ip6mr_sk_done() can exit early in common cases")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Rix [Sun, 6 Feb 2022 14:55:21 +0000 (06:55 -0800)]
caif: cleanup double word in comment
Replace the second 'so' with 'free'.
Signed-off-by: Tom Rix <trix@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 7 Feb 2022 11:59:57 +0000 (11:59 +0000)]
Merge branch 'mlxsw-dip-sip-mangling'
Ido Schimmel says:
====================
mlxsw: Add SIP and DIP mangling support
Danielle says:
On Spectrum-2 onwards, it is possible to overwrite the SIP and DIP
addresses of an IPv4 or IPv6 packet in the ACL engine. That corresponds
to pedit munges of, respectively, the ip src and ip dst fields, and
likewise for ip6. Offload these munges on the systems where they are
supported.
Patchset overview:
Patch #1: introduces SIP_DIP_ACTION and its fields.
Patches #2-#3: add the new pedit fields, and dispatch on them on
Spectrum-2 and above.
Patch #4 adds a selftest.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Sun, 6 Feb 2022 15:36:13 +0000 (17:36 +0200)]
selftests: forwarding: Add a test for pedit munge SIP and DIP
Add a test that checks that pedit adjusts source and destination
addresses of IPv4 and IPv6 packets.
Output example:
$ ./pedit_ip.sh
TEST: ping [ OK ]
TEST: ping6 [ OK ]
TEST: dev swp2 ingress pedit ip src set 198.51.100.1 [ OK ]
TEST: dev swp3 egress pedit ip src set 198.51.100.1 [ OK ]
TEST: dev swp2 ingress pedit ip dst set 198.51.100.1 [ OK ]
TEST: dev swp3 egress pedit ip dst set 198.51.100.1 [ OK ]
TEST: dev swp2 ingress pedit ip6 src set 2001:db8:2::1 [ OK ]
TEST: dev swp3 egress pedit ip6 src set 2001:db8:2::1 [ OK ]
TEST: dev swp2 ingress pedit ip6 dst set 2001:db8:2::1 [ OK ]
TEST: dev swp3 egress pedit ip6 dst set 2001:db8:2::1 [ OK ]
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Sun, 6 Feb 2022 15:36:12 +0000 (17:36 +0200)]
mlxsw: Support FLOW_ACTION_MANGLE for SIP and DIP IPv6 addresses
Spectrum-2 supports an ACL action SIP_DIP, which allows changing the
IPv4 and IPv6 source and destination addresses. Offload suitable
mangles to the IPv6 address change action.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Sun, 6 Feb 2022 15:36:11 +0000 (17:36 +0200)]
mlxsw: Support FLOW_ACTION_MANGLE for SIP and DIP IPv4 addresses
Spectrum-2 supports an ACL action SIP_DIP, which allows changing the
IPv4 and IPv6 source and destination addresses. Offload suitable
mangles to the IPv4 address change action.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Sun, 6 Feb 2022 15:36:10 +0000 (17:36 +0200)]
mlxsw: core_acl_flex_actions: Add SIP_DIP_ACTION
Add fields related to SIP_DIP_ACTION, which is used for changing SIP
and DIP addresses.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 7 Feb 2022 11:18:49 +0000 (11:18 +0000)]
Merge branch 'ipv6-kfree_skb_reason'
Menglong Dong says:
====================
net: use kfree_skb_reason() for ip/udp packet receive
In this series, kfree_skb() is replaced with kfree_skb_reason()
in the IPv4 and UDP packet receive paths, and the following drop
reasons are introduced:
SKB_DROP_REASON_SOCKET_FILTER
SKB_DROP_REASON_NETFILTER_DROP
SKB_DROP_REASON_OTHERHOST
SKB_DROP_REASON_IP_CSUM
SKB_DROP_REASON_IP_INHDR
SKB_DROP_REASON_IP_RPFILTER
SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST
SKB_DROP_REASON_XFRM_POLICY
SKB_DROP_REASON_IP_NOPROTO
SKB_DROP_REASON_SOCKET_RCVBUFF
SKB_DROP_REASON_PROTO_MEM
TCP is more complex, so I left it for the next series.
I just figured out how __print_symbolic() works: it is not based on
the array index, but searches for symbols in a loop, so I am a little
concerned about its performance.
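For reference, a minimal sketch of the conversion pattern applied
across the series (the call site is simplified; each site picks its
own reason constant):
/* before: the drop point is anonymous */
kfree_skb(skb);

/* after: the same free, but attributed, so tracing the kfree_skb
 * tracepoint can report why the packet was dropped
 */
kfree_skb_reason(skb, SKB_DROP_REASON_IP_CSUM);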
Changes since v3:
- fix some small problems in the third patch (net: ipv4: use
kfree_skb_reason() in ip_rcv_core()), as David Ahern said
Changes since v2:
- use SKB_DROP_REASON_PKT_TOO_SMALL for a path in ip_rcv_core()
Changes since v1:
- add document for all drop reasons, as David advised
- remove unrelated cleanup
- remove EARLY_DEMUX and IP_ROUTE_INPUT drop reason
- replace {UDP, TCP}_FILTER with SOCKET_FILTER
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Sat, 5 Feb 2022 07:47:39 +0000 (15:47 +0800)]
net: udp: use kfree_skb_reason() in __udp_queue_rcv_skb()
Replace kfree_skb() with kfree_skb_reason() in __udp_queue_rcv_skb().
The following new drop reasons are introduced:
SKB_DROP_REASON_SOCKET_RCVBUFF
SKB_DROP_REASON_PROTO_MEM
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Sat, 5 Feb 2022 07:47:38 +0000 (15:47 +0800)]
net: udp: use kfree_skb_reason() in udp_queue_rcv_one_skb()
Replace kfree_skb() with kfree_skb_reason() in udp_queue_rcv_one_skb().
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Sat, 5 Feb 2022 07:47:37 +0000 (15:47 +0800)]
net: ipv4: use kfree_skb_reason() in ip_protocol_deliver_rcu()
Replace kfree_skb() with kfree_skb_reason() in ip_protocol_deliver_rcu().
The following new drop reasons are introduced:
SKB_DROP_REASON_XFRM_POLICY
SKB_DROP_REASON_IP_NOPROTO
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Sat, 5 Feb 2022 07:47:36 +0000 (15:47 +0800)]
net: ipv4: use kfree_skb_reason() in ip_rcv_finish_core()
Replace kfree_skb() with kfree_skb_reason() in ip_rcv_finish_core().
The following drop reasons are introduced:
SKB_DROP_REASON_IP_RPFILTER
SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Sat, 5 Feb 2022 07:47:35 +0000 (15:47 +0800)]
net: ipv4: use kfree_skb_reason() in ip_rcv_core()
Replace kfree_skb() with kfree_skb_reason() in ip_rcv_core(). Three new
drop reasons are introduced:
SKB_DROP_REASON_OTHERHOST
SKB_DROP_REASON_IP_CSUM
SKB_DROP_REASON_IP_INHDR
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Sat, 5 Feb 2022 07:47:34 +0000 (15:47 +0800)]
net: netfilter: use kfree_skb_reason() for NF_DROP
Replace kfree_skb() with kfree_skb_reason() in nf_hook_slow() when
the skb is dropped because of NF_DROP. The following new drop reason
is introduced:
SKB_DROP_REASON_NETFILTER_DROP
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Sat, 5 Feb 2022 07:47:33 +0000 (15:47 +0800)]
net: skb_drop_reason: add document for drop reasons
Add documentation for the following existing drop reasons:
SKB_DROP_REASON_NOT_SPECIFIED
SKB_DROP_REASON_NO_SOCKET
SKB_DROP_REASON_PKT_TOO_SMALL
SKB_DROP_REASON_TCP_CSUM
SKB_DROP_REASON_SOCKET_FILTER
SKB_DROP_REASON_UDP_CSUM
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 5 Feb 2022 17:27:11 +0000 (09:27 -0800)]
ref_tracker: remove filter_irq_stacks() call
After commit e94006608949 ("lib/stackdepot: always do
filter_irq_stacks() in stack_depot_save()"), it became unnecessary to
filter the stack before calling stack_depot_save().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 5 Feb 2022 17:01:25 +0000 (09:01 -0800)]
net: initialize init_net earlier
While testing a patch that will follow later
("net: add netns refcount tracker to struct nsproxy"),
I found that devtmpfs_init() was called before init_net
was initialized.
This is a bug, because devtmpfs_setup() calls
ksys_unshare(CLONE_NEWNS);
This has the effect of increasing the init_net refcount,
which is later overwritten to 1 as part of setup_net(&init_net).
We had too many prior patches [1] trying to work around the root cause.
Really, make sure init_net is in the BSS section, and that net_ns_init()
is called earlier at boot time.
Note that another patch ("vfs: add netns refcount tracker
to struct fs_context") will also need net_ns_init() to be called
before vfs_caches_init().
As a bonus, this patch saves around 4KB in the .data section.
[1]
f8c46cb39079 ("netns: do not call pernet ops for not yet set up init_net namespace")
b5082df8019a ("net: Initialise init_net.count to 1")
734b65417b24 ("net: Statically initialize init_net.dev_base_head")
v2: fixed a build error reported by kernel build bots (CONFIG_NET=n)
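To illustrate the section placement (a simplified sketch, not the
actual diff; only the .count field is shown):
/* An explicit initializer keeps the object in .data: */
struct net init_net = {
	.count = REFCOUNT_INIT(1),
};

/* A plain definition is zero-filled into .bss instead, and
 * setup_net(&init_net) initializes it for real once net_ns_init()
 * runs early in boot:
 */
struct net init_net;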
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Juhee Kang [Sat, 5 Feb 2022 15:40:38 +0000 (15:40 +0000)]
net: hsr: use hlist_head instead of list_head for mac addresses
Currently, HSR manages the MAC addresses of known HSR nodes using
list_head. Finding the node for a specific MAC address requires a
linear search, which takes a long time when many nodes are registered.
We can reduce this time by using an hlist. Thus, this patch converts
the MAC address bookkeeping from list_head to hlist_head, which allows
for a further improvement of network performance.
Condition: registered 10,000 known HSR nodes
Before:
# iperf3 -c 192.168.10.1 -i 1 -t 10
Connecting to host 192.168.10.1, port 5201
[ 5] local 192.168.10.2 port 59442 connected to 192.168.10.1 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.49 sec 3.75 MBytes 21.1 Mbits/sec 0 158 KBytes
[ 5] 1.49-2.05 sec 1.25 MBytes 18.7 Mbits/sec 0 166 KBytes
[ 5] 2.05-3.06 sec 2.44 MBytes 20.3 Mbits/sec 56 16.9 KBytes
[ 5] 3.06-4.08 sec 1.43 MBytes 11.7 Mbits/sec 11 38.0 KBytes
[ 5] 4.08-5.00 sec 951 KBytes 8.49 Mbits/sec 0 56.3 KBytes
After:
# iperf3 -c 192.168.10.1 -i 1 -t 10
Connecting to host 192.168.10.1, port 5201
[ 5] local 192.168.10.2 port 36460 connected to 192.168.10.1 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 7.39 MBytes 62.0 Mbits/sec 3 130 KBytes
[ 5] 1.00-2.00 sec 5.06 MBytes 42.4 Mbits/sec 16 113 KBytes
[ 5] 2.00-3.00 sec 8.58 MBytes 72.0 Mbits/sec 42 94.3 KBytes
[ 5] 3.00-4.00 sec 7.44 MBytes 62.4 Mbits/sec 2 131 KBytes
[ 5] 4.00-5.07 sec 8.13 MBytes 63.5 Mbits/sec 38 92.9 KBytes
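For illustration, a minimal sketch of the bucketed lookup an hlist
enables (bucket count, hash seed, and the simplified node layout are
assumptions):
#include <linux/etherdevice.h>
#include <linux/jhash.h>

#define NODE_HASH_BITS	8	/* assumed: 256 buckets */

struct hsr_node {			/* simplified for the sketch */
	struct hlist_node mac_list;
	unsigned char macaddress_A[ETH_ALEN];
};

static struct hlist_head node_db[1 << NODE_HASH_BITS];

static struct hsr_node *hsr_node_lookup(const unsigned char *addr)
{
	u32 bucket = jhash(addr, ETH_ALEN, 0) & ((1 << NODE_HASH_BITS) - 1);
	struct hsr_node *node;

	/* Scan one bucket instead of the whole node list */
	hlist_for_each_entry(node, &node_db[bucket], mac_list)
		if (ether_addr_equal(node->macaddress_A, addr))
			return node;
	return NULL;
}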
Signed-off-by: Juhee Kang <claudiajkang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 5 Feb 2022 04:56:14 +0000 (20:56 -0800)]
skmsg: convert struct sk_msg_sg::copy to a bitmap
We have plans for increasing MAX_SKB_FRAGS, but sk_msg_sg::copy
is currently an unsigned long, limiting MAX_SKB_FRAGS to 30 on 32bit arches.
Convert it to a bitmap, as Jakub suggested.
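The shape of the change, sketched (the surrounding struct is elided,
and the exact bitmap size constant is an assumption):
/* before: one machine word, so at most 32 usable bits on 32-bit */
unsigned long copy;

/* after: a bitmap whose size no longer depends on the word size */
DECLARE_BITMAP(copy, MAX_MSG_FRAGS + 2);

/* readers and writers become bit operations */
bool must_copy = test_bit(i, msg->sg.copy);
set_bit(i, msg->sg.copy);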
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 5 Feb 2022 04:54:59 +0000 (20:54 -0800)]
net: typhoon: implement ndo_features_check method
Instead of disabling TSO at compile time if MAX_SKB_FRAGS > 32,
implement ndo_features_check() method for this driver for
a more dynamic handling.
If skb has more than 32 frags and is a GSO packet, force
software segmentation.
Most locally generated packets will use a small number
of fragments anyway.
For forwarding workloads, we can limit gro_max_size at ingress,
we might also implement gro_max_segs if needed.
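A sketch of such a handler (the 32-frag limit is the one described
above; the exact function body is illustrative):
static netdev_features_t typhoon_features_check(struct sk_buff *skb,
						struct net_device *dev,
						netdev_features_t features)
{
	/* Too many frags for the descriptor layout: clear the GSO
	 * feature bits so the core segments the packet in software.
	 */
	if (skb_is_gso(skb) && skb_shinfo(skb)->nr_frags > 32)
		features &= ~NETIF_F_GSO_MASK;
	return features;
}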
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Fri, 4 Feb 2022 23:29:06 +0000 (17:29 -0600)]
net: sundance: Replace one-element array with non-array object
It seems this one-element array is not actually being used as an
array of variable size, so we can just replace it with a non-array
object of type struct desc_frag and refactor the rest of the code a
bit.
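Schematically (layout loosely follows the driver and is simplified):
struct desc_frag { __le32 addr, length; };

/* before: a one-element array that only ever held one entry */
struct netdev_desc_old {
	__le32 next_desc;
	__le32 status;
	struct desc_frag frag[1];
};

/* after: a plain member; users drop the frag[0] indexing */
struct netdev_desc {
	__le32 next_desc;
	__le32 status;
	struct desc_frag frag;
};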
This helps with the ongoing efforts to globally enable -Warray-bounds
and get us closer to being able to tighten the FORTIFY_SOURCE routines
on memcpy().
This issue was found with the help of Coccinelle, and audited and
fixed manually.
[1] https://en.wikipedia.org/wiki/Flexible_array_member
[2] https://www.kernel.org/doc/html/v5.16/process/deprecated.html#zero-length-and-one-element-arrays
Link: https://github.com/KSPP/linux/issues/79
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Fri, 4 Feb 2022 23:21:44 +0000 (17:21 -0600)]
bnx2x: Replace one-element array with flexible-array member
There is a regular need in the kernel to provide a way to declare having
a dynamically sized set of trailing elements in a structure. Kernel code
should always use “flexible array members”[1] for these cases. The older
style of one-element or zero-length arrays should no longer be used[2].
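As a generic illustration (not the bnx2x structure itself):
struct example_old {
	u32 count;
	u32 data[1];	/* deprecated one-element array */
};

struct example {
	u32 count;
	u32 data[];	/* flexible array member */
};

/* allocations size the trailing elements explicitly */
struct example *p = kzalloc(struct_size(p, data, n), GFP_KERNEL);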
This helps with the ongoing efforts to globally enable -Warray-bounds
and get us closer to being able to tighten the FORTIFY_SOURCE routines
on memcpy().
This issue was found with the help of Coccinelle, and audited and
fixed manually.
[1] https://en.wikipedia.org/wiki/Flexible_array_member
[2] https://www.kernel.org/doc/html/v5.16/process/deprecated.html#zero-length-and-one-element-arrays
Link: https://github.com/KSPP/linux/issues/79
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 5 Feb 2022 15:26:00 +0000 (15:26 +0000)]
Merge branch 'net-mana-next'
Haiyang Zhang says:
====================
net: mana: Add handling of CQE_RX_TRUNCATED and a cleanup
Add handling of CQE_RX_TRUNCATED and a cleanup patch
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Haiyang Zhang [Fri, 4 Feb 2022 22:45:45 +0000 (14:45 -0800)]
net: mana: Remove unnecessary check of cqe_type in mana_process_rx_cqe()
The switch statement already ensures cqe_type == CQE_RX_OKAY at that
point.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haiyang Zhang [Fri, 4 Feb 2022 22:45:44 +0000 (14:45 -0800)]
net: mana: Add handling of CQE_RX_TRUNCATED
The proper way to drop this kind of CQE is to advance the rxq tail
without indicating the packet to the upper network layer.
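Sketched against the RX CQE dispatch (the helper name is hypothetical):
case CQE_RX_TRUNCATED:
	/* Consume the completion and recycle the buffer, but build
	 * no skb and make no receive call up the stack: the
	 * truncated frame is dropped.
	 */
	mana_advance_rxq_tail(rxq);	/* hypothetical helper */
	break;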
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 5 Feb 2022 15:22:45 +0000 (15:22 +0000)]
Merge branch 'net-dev-tracking-improvements'
Eric Dumazet says:
====================
net: device tracking improvements
The main goal of this series is to be able to detect the following
case, which apparently is still haunting us:
dev_hold_track(dev, tracker_1, GFP_ATOMIC);
dev_hold(dev);
dev_put(dev);
dev_put(dev); // Should complain loudly here.
dev_put_track(dev, tracker_1); // instead of here (as before this series)
v2: third patch:
I replaced the dev_put() in linkwatch_do_dev() with __dev_put().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 4 Feb 2022 22:42:37 +0000 (14:42 -0800)]
net: refine dev_put()/dev_hold() debugging
We are still chasing some syzbot reports where we think a rogue dev_put()
is called with no corresponding prior dev_hold().
Unfortunately, it eats a reference on dev->dev_refcnt taken by an
innocent dev_hold_track(), meaning that the refcount saturation splat
comes too late to be useful.
Make sure that untracked dev_put() and dev_hold() make better use of
the CONFIG_NET_DEV_REFCNT_TRACKER=y debug infrastructure:
the prior patch in the series allowed ref_tracker_alloc() and
ref_tracker_free() to be called with a NULL @trackerp parameter, using
a separate refcount just to detect an excess put(), even in the
following case:
dev_hold_track(dev, tracker_1, GFP_ATOMIC);
dev_hold(dev);
dev_put(dev);
dev_put(dev); // Should complain loudly here.
dev_put_track(dev, tracker_1); // instead of here
Add clarification about netdev_tracker_alloc() role.
v2: I replaced the dev_put() in linkwatch_do_dev()
with __dev_put() because callers called netdev_tracker_free().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 4 Feb 2022 22:42:36 +0000 (14:42 -0800)]
ref_tracker: add a count of untracked references
We are still chasing a netdev refcount imbalance, and we suspect
we have one rogue dev_put() that is consuming a reference taken
by a dev_hold_track().
To detect this case, allow ref_tracker_alloc() and ref_tracker_free()
to be called with a NULL @trackerp parameter, and use a dedicated
refcount_t just for them.
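In practice (a sketch; refcnt_tracker is the netdev tracker directory
added by the tracking infrastructure):
netdevice_tracker tracker;

/* tracked hold/put: the caller supplies tracker storage */
ref_tracker_alloc(&dev->refcnt_tracker, &tracker, GFP_ATOMIC);
ref_tracker_free(&dev->refcnt_tracker, &tracker);

/* untracked dev_hold()/dev_put() now pass NULL: only a dedicated
 * refcount is touched, so one put() too many complains at the
 * faulty call site instead of at saturation time
 */
ref_tracker_alloc(&dev->refcnt_tracker, NULL, GFP_ATOMIC);
ref_tracker_free(&dev->refcnt_tracker, NULL);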
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>