linux-2.6-microblaze.git
Karsten Graul [Sat, 16 Oct 2021 09:37:51 +0000 (11:37 +0200)]
net/smc: add netlink support for SMC-Rv2

Implement the netlink support for SMC-Rv2 related attributes that are
provided to user space.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:50 +0000 (11:37 +0200)]
net/smc: extend LLC layer for SMC-Rv2

Add support for large v2 LLC control messages in smc_llc.c.
The new large work request buffer allows combining control
messages into one packet that previously had to be spread over
several packets.
Add handling of the new v2 LLC messages.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:49 +0000 (11:37 +0200)]
net/smc: add v2 support to the work request layer

In the work request layer define one large v2 buffer for each link group
that is used to transmit and receive large LLC control messages.
Add the completion queue handling for this buffer.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:48 +0000 (11:37 +0200)]
net/smc: retrieve v2 gid from IB device

In smc_ib.c, scan for RoCE devices that support UDP encapsulation.
Find an eligible device and check that there is a route to the
remote peer.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:47 +0000 (11:37 +0200)]
net/smc: add v2 format of CLC decline message

The CLC decline message changed with SMC-Rv2 and supports up to
4 additional diagnosis codes.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:46 +0000 (11:37 +0200)]
net/smc: add listen processing for SMC-Rv2

Implement the server side of the SMC-Rv2 processing. Process incoming
CLC messages, find eligible devices and check for a valid route to the
remote peer.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:45 +0000 (11:37 +0200)]
net/smc: add SMC-Rv2 connection establishment

Send a CLC proposal message; the remote side processes this type of
message and determines the target GID. Check for a valid route to this
GID, and complete the connection establishment.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:44 +0000 (11:37 +0200)]
net/smc: prepare for SMC-Rv2 connection

Prepare the connection establishment with SMC-Rv2. Detect eligible
RoCE cards and indicate all supported SMC modes for the connection.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:43 +0000 (11:37 +0200)]
net/smc: save stack space and allocate smc_init_info

The struct smc_init_info grew over time; it's time to save stack space
and allocate this struct dynamically.
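
A minimal sketch of the described change (kzalloc() instead of a large
on-stack variable; illustrative only, not the exact SMC code):

  struct smc_init_info *ini;

  ini = kzalloc(sizeof(*ini), GFP_KERNEL);  /* heap instead of a big on-stack struct */
  if (!ini)
          return -ENOMEM;
  /* ... fill in and use ini as the stack variable was used before ... */
  kfree(ini);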

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 13:37:39 +0000 (06:37 -0700)]
net: stream: don't purge sk_error_queue in sk_stream_kill_queues()

sk_stream_kill_queues() can be called on close when there are
still outstanding skbs to transmit. Those skbs may try to queue
notifications to the error queue (e.g. timestamps).
If sk_stream_kill_queues() purges the queue without taking
its lock the queue may get corrupted, and skbs leaked.

This shows up as a warning about an rmem leak:

WARNING: CPU: 24 PID: 0 at net/ipv4/af_inet.c:154 inet_sock_destruct+0x...

The leak is always a multiple of 0x300 bytes (the value is in
%rax on my builds, so RAX: 0000000000000300). 0x300 is the truesize of
an empty sk_buff. Indeed, if we dump the socket state at the time
of the warning the sk_error_queue is often (but not always)
corrupted. The ->next pointer points back at the list head,
but not the ->prev pointer. Indeed we can find the leaked skb
by scanning the kernel memory for something that looks like
an skb with ->sk = socket in question, and ->truesize = 0x300.
The contents of ->cb[] of the skb confirms the suspicion that
it is indeed a timestamp notification (as generated in
__skb_complete_tx_timestamp()).

Removing purging of sk_error_queue should be okay, since
inet_sock_destruct() does it again once all socket refs
are gone. Eric suggests this may cause sockets that go
thru disconnect() to maintain notifications from the
previous incarnations of the socket, but that should be
okay since the race was there anyway, and disconnect()
is not exactly dependable.

Thanks to Jonathan Lemon and Omar Sandoval for help at various
stages of tracing the issue.

Fixes: cb9eff097831 ("net: new user space API for time stamping of incoming and outgoing packets")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 16 Oct 2021 07:53:46 +0000 (08:53 +0100)]
Merge branch 'dev_addr-conversions-part-1'

Jakub Kicinski says:

====================
ethernet: manual netdev->dev_addr conversions (part 1)

Manual conversions of drivers writing directly
to netdev->dev_addr (part 1 out of 3).
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:52 +0000 (15:16 -0700)]
ethernet: ixgb: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Read the address into an array on the stack, then call
eth_hw_addr_set(). ixgb_get_ee_mac_addr() is used with
a non-netdev->dev_addr pointer so we can't deal with the problem
inside it.
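
For illustration, the conversion pattern looks roughly like this (a
sketch; only eth_hw_addr_set() and ixgb_get_ee_mac_addr() come from the
text above, the surrounding names are assumed):

  u8 addr[ETH_ALEN];

  ixgb_get_ee_mac_addr(&adapter->hw, addr);  /* fill a temporary stack array */
  eth_hw_addr_set(netdev, addr);             /* single helper write to netdev->dev_addr */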

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:51 +0000 (15:16 -0700)]
ethernet: ibmveth: use ether_addr_to_u64()

We'll want to make netdev->dev_addr const; remove the local
helper, which is missing a const qualifier on its argument,
and use ether_addr_to_u64().

Similar story to mlx4.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:50 +0000 (15:16 -0700)]
ethernet: enetc: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Pass a netdev into the helper instead of just the address,
read the address into an array on the stack, then call
eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:49 +0000 (15:16 -0700)]
ethernet: ec_bhf: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Copy the address into an array on the stack, then call
eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:48 +0000 (15:16 -0700)]
ethernet: enic: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Use a zeroed array on the stack, then call eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:47 +0000 (15:16 -0700)]
ethernet: bcmgenet: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Read the address into an array on the stack, then call
eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:46 +0000 (15:16 -0700)]
ethernet: bnx2x: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Read the address into an array on the stack, then call
eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:45 +0000 (15:16 -0700)]
ethernet: aquantia: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Use an array on the stack, then call eth_hw_addr_set().
eth_hw_addr_set() is called after the error checking; this should
be fine, as an error propagates all the way to a failing probe.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:44 +0000 (15:16 -0700)]
ethernet: amd: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Read the address into an array on the stack, then call
eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:43 +0000 (15:16 -0700)]
ethernet: alteon: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Break the address apart into an array on the stack, then call
eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:42 +0000 (15:16 -0700)]
ethernet: aeroflex: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

macaddr[] is a module parameter of type int, so copy the address into
an array of u8 on the stack, then call eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:41 +0000 (15:16 -0700)]
ethernet: adaptec: use eth_hw_addr_set()

Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.

Read the address into an array on the stack, then call
eth_hw_addr_set().

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jean Sacren [Sat, 16 Oct 2021 05:41:35 +0000 (23:41 -0600)]
net: ipvtap: fix template string argument of device_create() call

The last argument of the device_create() call should be a template
(format) string. The tap_name variable should be an argument to that
format string, not the format string itself. Add the template string
and turn tap_name into its argument.
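
A hedged sketch of the described change (all arguments other than the
format string are placeholders):

  /* before: tap_name used directly as the format string */
  dev = device_create(tap_class, parent, devt, drvdata, tap_name);

  /* after: constant template string, tap_name becomes its argument */
  dev = device_create(tap_class, parent, devt, drvdata, "%s", tap_name);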

Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jean Sacren [Sat, 16 Oct 2021 05:41:34 +0000 (23:41 -0600)]
net: macvtap: fix template string argument of device_create() call

The last argument of the device_create() call should be a template
(format) string. The tap_name variable should be an argument to that
format string, not the format string itself. Add the template string
and turn tap_name into its argument.

Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 16 Oct 2021 07:49:19 +0000 (08:49 +0100)]
Merge tag 'mlx5-updates-2021-10-15' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2021-10-15

1) From Rongwei Liu:

Use system_image_guid and native_port_num when bonding.
Don't rely on PCIe ids anymore.

With some specific NICs, the physical devices may have PCIe IDs like
0001:01:00.0/1 and 0002:02:00.0/1. All of these devices should have
the same system_image_guid, and the device index can be queried from
native_port_num.

For matching sibling devices/ports of the same HCA, compare the HCA
GUID reported on each device rather than just assuming PCIe ids have
similar attributes.

2) From Amir Tzin: Use HCA defined Timeouts

Replace hard coded timeouts with values stored by firmware in the default
timeouts register (DTOR). Timeouts are read during driver load. If DTOR
is not supported by firmware then fall back to hard coded defaults
instead.

3) From Shay Drory: Disable RoCE at HCA level
Disable RoCE in firmware when the devlink roce parameter is set to off.

4) A small set of trivial cleanups
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 16 Oct 2021 07:47:23 +0000 (08:47 +0100)]
Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
100GbE Intel Wired LAN Driver Updates 2021-10-14

Maciej Machnikowski says:

Extend the driver implementation to support PTP pins on E810-T and
derivative devices.

E810-T adapters are equipped with:
- 2 external bidirectional SMA connectors
- 1 internal TX U.FL shared with SMA1
- 1 internal RX U.FL shared with SMA2

The SMA and U.FL configuration is controlled by the external
multiplexer.

E810-T Derivatives are equipped with:
- 2 1PPS outputs on SDP20 and SDP22
- 2 1PPS inputs on SDP21 and SDP23
---
v2:
- Remove defensive programming check and simplify return statement
  (Patch 3)
- Remove unnecessary parentheses (Patch 4)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 16 Oct 2021 07:46:08 +0000 (08:46 +0100)]
Merge branch 'mptcp-fixes'

Mat Martineau says:

====================
mptcp: A few fixes

This set has three separate changes for the net-next tree:

Patch 1 guarantees safe handling and a warning if a NULL value is
encountered when gathering subflow data for the MPTCP_SUBFLOW_ADDRS
socket option.

Patch 2 increases the default number of subflows allowed per MPTCP
connection.

Patch 3 makes an existing function 'static'.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Mat Martineau [Fri, 15 Oct 2021 23:05:52 +0000 (16:05 -0700)]
mptcp: Make mptcp_pm_nl_mp_prio_send_ack() static

This function is only used within pm_netlink.c now.

Fixes: 067065422fcd ("mptcp: add the outgoing MP_PRIO support")
Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 15 Oct 2021 23:05:51 +0000 (16:05 -0700)]
mptcp: increase default max additional subflows to 2

The current default does not allow additional subflows, mostly
as a safety restriction to avoid uncontrolled resource consumption
on busy servers.

Still, the system admin and/or the application have to opt in to
MPTCP explicitly. After that, they need to change (increase) the
default maximum number of additional subflows.

Let's set that to a reasonable default, and make end-users' life easier.

Additionally we need to update some self-tests accordingly.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tim Gardner [Fri, 15 Oct 2021 23:05:50 +0000 (16:05 -0700)]
mptcp: Avoid NULL dereference in mptcp_getsockopt_subflow_addrs()

Coverity complains of a possible NULL dereference in
mptcp_getsockopt_subflow_addrs():

 861       } else if (sk->sk_family == AF_INET6) {
     3. returned_null: inet6_sk returns NULL. [show details]
     4. var_assigned: Assigning: np = NULL return value from inet6_sk.
 862                const struct ipv6_pinfo *np = inet6_sk(sk);

Fix this by checking for NULL.
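
A sketch of such a check, following the snippet above and the
WARN_ON_ONCE() note below (the error value returned is an assumption):

  } else if (sk->sk_family == AF_INET6) {
          const struct ipv6_pinfo *np = inet6_sk(sk);

          if (WARN_ON_ONCE(!np))          /* not expected for an AF_INET6 socket */
                  return -EINVAL;         /* assumed error handling */
          /* ... copy the IPv6 addresses from np ... */
  }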

Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/231
Fixes: c11c5906bc0a ("mptcp: add MPTCP_SUBFLOW_ADDRS getsockopt support")
Cc: Florian Westphal <fw@strlen.de>
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
[mjm: Added WARN_ON_ONCE() to the unexpected case]
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rongwei Liu [Tue, 12 Oct 2021 07:53:00 +0000 (10:53 +0300)]
net/mlx5: Use system_image_guid to determine bonding

With specific NICs, the PFs may have different PCIe ids like
0001:01:00.0/1 and 0002:02:00.0/1.

For PFs with the same system_image_guid, the driver should consider
them part of the same physical NIC, and they are legal to bond together.

If firmware doesn't support system_image_guid, set it to zero and
fall back to using PCIe ids.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Rongwei Liu [Tue, 12 Oct 2021 07:40:52 +0000 (10:40 +0300)]
net/mlx5: Use native_port_num as 1st option of device index

Using "native_port_num" can support more NICs.

Fall back to PCIe IDs if the "native_port_num" query fails.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Rongwei Liu [Thu, 16 Sep 2021 07:46:17 +0000 (10:46 +0300)]
net/mlx5: Introduce new device index wrapper

Downstream patches.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Rongwei Liu [Fri, 8 Oct 2021 06:02:39 +0000 (09:02 +0300)]
net/mlx5: Check return status first when querying system_image_guid

When querying system_image_guid from firmware, we should check the return
value first. The buffer content is valid only if the query succeeded.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Len Baker [Sun, 5 Sep 2021 07:49:36 +0000 (09:49 +0200)]
net/mlx5: DR, Prefer kcalloc over open coded arithmetic

As noted in the "Deprecated Interfaces, Language Features, Attributes,
and Conventions" documentation [1], size calculations (especially
multiplication) should not be performed in memory allocator (or similar)
function arguments due to the risk of them overflowing. This could lead
to values wrapping around and a smaller allocation being made than the
caller was expecting. Using those allocations could lead to linear
overflows of heap memory and other misbehaviors.

So, refactor the code a bit to use the purpose specific kcalloc()
function instead of the argument size * count in the kzalloc() function.

[1] https://www.kernel.org/doc/html/v5.14/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments
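
Concretely, the transformation is of this shape (names illustrative):

  /* open-coded multiplication in the allocator argument can overflow */
  arr = kzalloc(num_entries * sizeof(*arr), GFP_KERNEL);

  /* overflow-checked, zeroing equivalent */
  arr = kcalloc(num_entries, sizeof(*arr), GFP_KERNEL);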

Signed-off-by: Len Baker <len.baker@gmx.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Abhiram R N [Wed, 22 Sep 2021 06:30:07 +0000 (12:00 +0530)]
net/mlx5e: Add extack msgs related to TC for better debug

As EOPNOTSUPP and EINVAL are returned from multiple places in the driver,
it becomes difficult to understand the reason from the error code alone.
With a netlink extack message the exact reason will be known and will
aid in debugging.

Signed-off-by: Abhiram R N <abhiramrn@gmail.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Paul Blakey [Thu, 30 Sep 2021 11:23:32 +0000 (14:23 +0300)]
net/mlx5: CT: Fix missing cleanup of ct nat table on init failure

If CT fails to initialize its rhashtables, it doesn't destroy
the ct nat global table.

Destroy the ct nat global table on ct init failure.

Fixes: d7cade513752 ("net/mlx5e: check return value of rhashtable_init")
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Shay Drory [Wed, 18 Aug 2021 10:21:30 +0000 (13:21 +0300)]
net/mlx5: Disable roce at HCA level

Currently, when a user disables roce via the devlink param, this change
isn't passed down to the device.
If the device allows disabling RoCE at the device level, make use of it. This
instructs the device to skip memory allocations related to RoCE
functionality which are otherwise done by the device.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Moosa Baransi [Sun, 26 Sep 2021 14:59:52 +0000 (17:59 +0300)]
net/mlx5i: Enable Rx steering for IPoIB via ethtool

Enable steering IPoIB packets via ethtool, the same way it is done today
for Ethernet packets.

Signed-off-by: Moosa Baransi <moosab@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Vlad Buslov [Tue, 12 Oct 2021 11:49:15 +0000 (14:49 +0300)]
net/mlx5: Bridge, provide flow source hints

Currently, SMFS mode doesn't support rx-loopback flows, which causes bridge
egress rules to be rejected because, without a hint, rules for both rx and tx
destinations are created by default. Provide explicit flow source hints for
compatibility with SMFS.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Amir Tzin [Wed, 13 Oct 2021 06:07:13 +0000 (09:07 +0300)]
net/mlx5: Read timeout values from DTOR

Replace hard coded timeouts with values stored by firmware in the default
timeouts register (DTOR). Timeouts are read during driver load. If DTOR
is not supported by firmware then fall back to hard coded defaults
instead.

Signed-off-by: Amir Tzin <amirtz@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Amir Tzin [Thu, 7 Oct 2021 15:00:27 +0000 (18:00 +0300)]
net/mlx5: Read timeout values from init segment

Replace hard coded timeouts with values stored in firmware's init
segment. Timeouts are read from the init segment during driver load. If init
segment timeouts are not supported then fall back to hard coded defaults
instead. Also move pre-initialization timeouts which cannot be read from
firmware to the new mechanism.

Signed-off-by: Amir Tzin <amirtz@mellanox.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Amir Tzin [Wed, 21 Jul 2021 13:14:12 +0000 (16:14 +0300)]
net/mlx5: Add layout to support default timeouts register

Add the needed structures and defines for DTOR (default timeouts register).
This will be used to get timeout values from FW instead of hard coded
values in the driver code, thus enabling support for slower devices which
need longer timeouts.

Signed-off-by: Amir Tzin <amirtz@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 12:00:04 +0000 (14:00 +0200)]
ice: make use of ice_for_each_* macros

Go through the code base and use the ice_for_each_* macros. While at it,
introduce the ice_for_each_xdp_txq() macro that can be used for looping over
the xdp_rings array.

This commit does not introduce any new functionality.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 12:00:03 +0000 (14:00 +0200)]
ice: introduce XDP_TX fallback path

Under rare circumstances there might be a situation where the requirement
of having an XDP Tx queue per CPU could not be fulfilled and some of the Tx
resources have to be shared between CPUs. This yields a need for placing
accesses to xdp_ring inside a critical section protected by a spinlock.
These accesses happen to be in the hot path, so let's introduce a
static branch that will be triggered from the control plane when the driver
could not provide a Tx queue dedicated to XDP on each CPU.

Currently, the design that has been picked is to allow any number of XDP
Tx queues that is at least half of the count of CPUs that the platform has.
For a lower number the driver will bail out with a response to the user that
there were not enough Tx resources to allow configuring XDP. The
sharing of rings is signalled via static branch enablement, which in turn
indicates that the lock for xdp_ring accesses needs to be taken in the hot path.

The approach based on a static branch has no impact on the performance of the
non-fallback path. One thing that needs to be mentioned is the fact
that the static branch will act as a global driver switch, meaning that
if one PF runs out of Tx resources, then the other PFs that the ice driver is
servicing will suffer. However, given the fact that the HW that the ice driver
handles has 1024 Tx queues per PF, this is currently an
unlikely scenario.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 12:00:02 +0000 (14:00 +0200)]
ice: optimize XDP_TX workloads

Optimize Tx descriptor cleaning for XDP. Current approach doesn't
really scale and chokes when multiple flows are handled.

Introduce two ring fields, @next_dd and @next_rs, that will keep track of
the descriptor that should be looked at when the need for cleaning arises and
the descriptor that should have the RS bit set, respectively.

Note that at this point the threshold is a constant (32), but it is
something that we could make configurable.

The first thing is to get away from setting the RS bit on each descriptor. Let's
do this only once NTU is higher than the current @next_rs value. In
such a case, grab tx_desc[next_rs], set the RS bit in the descriptor and
advance @next_rs by 32.

The second thing is to clean the Tx ring only when there are fewer than 32
free entries. For that case, look up tx_desc[next_dd] for the DD bit.
This bit is written back by HW to let the driver know that the xmit was
successful. It will happen only for those descriptors that had the RS bit
set. Clean only 32 descriptors and advance the DD bit.
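
In rough pseudo-C the idea reads like this (only @next_dd, @next_rs and
the threshold of 32 come from the description above; the bit and helper
names are illustrative, and ring wrap-around handling is omitted):

  #define TX_THRESH 32

  /* xmit side: request a completion (RS) only every TX_THRESH descriptors */
  if (xdp_ring->next_to_use > xdp_ring->next_rs) {
          tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
          tx_desc->cmd_type_offset_bsz |= RS_BIT;         /* illustrative bit name */
          xdp_ring->next_rs += TX_THRESH;
  }

  /* clean side: only when fewer than TX_THRESH free entries remain */
  if (free_descs(xdp_ring) < TX_THRESH) {                 /* hypothetical helper */
          tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_dd);
          if (tx_desc->cmd_type_offset_bsz & DD_BIT) {    /* HW wrote DD back */
                  clean_descs(xdp_ring, TX_THRESH);       /* hypothetical helper */
                  xdp_ring->next_dd += TX_THRESH;
          }
  }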

The actual cleaning routine is moved from ice_napi_poll() down to
ice_xmit_xdp_ring(). It is safe to do so as the XDP ring will not get any
SKBs in there that would rely on interrupts for the cleaning. A nice side
effect is that for the rare case of the Tx fallback path (which the next patch
is going to introduce) we don't have to trigger the SW irq to clean the
ring.

With those two concepts, the ring is kept almost full, but it is
guaranteed that the driver will be able to produce Tx descriptors.

This approach seems to work out well even though the Tx descriptors are
produced in a one-by-one manner. The test was conducted with the ice HW
bombarded with packets from a HW generator, configured to generate 30
flows.

Xdp2 sample yields the following results:
<snip>
proto 17:   79973066 pkt/s
proto 17:   80018911 pkt/s
proto 17:   80004654 pkt/s
proto 17:   79992395 pkt/s
proto 17:   79975162 pkt/s
proto 17:   79955054 pkt/s
proto 17:   79869168 pkt/s
proto 17:   79823947 pkt/s
proto 17:   79636971 pkt/s
</snip>

As that sample reports the Rx'ed frames, let's look at sar output.
It says that what we Rx'ed we do actually Tx, no noticeable drops.
Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s txcmp/s  rxmcst/s   %ifutil
Average:       ens4f1 79842324.00 79842310.40 4678261.17 4678260.38 0.00      0.00      0.00     38.32

with tx_busy staying calm.

When compared to a state before:
Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s txcmp/s  rxmcst/s   %ifutil
Average:       ens4f1 90919711.60 42233822.60 5327326.85 2474638.04 0.00      0.00      0.00     43.64

it can be observed that the amount of txpck/s is almost doubled, meaning
that the performance is improved by around 90%. All of this due to the
drops in the driver, previously the tx_busy stat was bumped at a 7mpps
rate.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 12:00:01 +0000 (14:00 +0200)]
ice: propagate xdp_ring onto rx_ring

With rings being split, it is now convenient to introduce a pointer to
XDP ring within the Rx ring. For XDP_TX workloads this means that
xdp_rings array access will be skipped, which was executed per each
processed frame.

Also, read the XDP prog once per NAPI and if prog is present, set up the
local xdp_ring pointer. Reading prog a single time was discussed in [1]
with some concern raised by Toke around dispatcher handling and having
the need for going through the RCU grace period in the ndo_bpf driver
callback, but ice currently is tearing down NAPI instances regardless of
the prog presence on VSI.

Although the pointer to XDP ring introduced to Rx ring makes things a
lot slimmer/simpler, I still feel that single prog read per NAPI
lifetime is beneficial.

A further patch that will introduce the fallback path will also profit
from that, as the xdp_ring pointer will be set during the XDP rings
setup.

[1]: https://lore.kernel.org/bpf/87k0oseo6e.fsf@toke.dk/

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 12:00:00 +0000 (14:00 +0200)]
ice: do not create xdp_frame on XDP_TX

An xdp_frame is not needed for the XDP_TX data path in the ice driver's case.
For this data path, cleaning of sent descriptors will not happen anywhere
outside of the driver, which means that the information about
the underlying memory model carried via xdp_frame will not be used. Therefore,
this conversion can simply be dropped, which relieves the CPU a bit.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 11:59:59 +0000 (13:59 +0200)]
ice: unify xdp_rings accesses

There has been a long-lasting issue of improper xdp_rings indexing for
XDP_TX and XDP_REDIRECT actions. Given that currently rx_ring->q_index
is mixed with smp_processor_id(), there could be a situation where Tx
descriptors are produced onto the XDP Tx ring, but the tail is never bumped -
for example when pinning a particular queue id to a non-matching IRQ line.

Address this problem by ignoring the user ring count setting and always
initializing the xdp_rings array to be of num_possible_cpus() size. Then,
always use smp_processor_id() as the index into the xdp_rings array. This
provides serialization, as at a given time only a single softirq can run on
a particular CPU.
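
The resulting access pattern is then simply (sketch):

  /* xdp_rings[] is sized num_possible_cpus(), so this index is always valid;
   * only one softirq runs per CPU at a time, which serializes ring accesses */
  xdp_ring = vsi->xdp_rings[smp_processor_id()];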

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 11:59:58 +0000 (13:59 +0200)]
ice: split ice_ring onto Tx/Rx separate structs

While it was convenient to have a generic ring structure that served
both Tx and Rx sides, next commits are going to introduce several
Tx-specific fields, so in order to avoid hurting the Rx side, let's
pull out the Tx ring onto new ice_tx_ring and ice_rx_ring structs.

Rx ring could be handled by the old ice_ring which would reduce the code
churn within this patch, but this would make things asymmetric.

Make the union out of the ring container within ice_q_vector so that it
is possible to iterate over newly introduced ice_tx_ring.

Remove the @size as it's only accessed from control path and it can be
calculated pretty easily.

Change definitions of ice_update_ring_stats and
ice_fetch_u64_stats_per_ring so that they are ring agnostic and can be
used for both Rx and Tx rings.

Sizes of Rx and Tx ring structs are 256 and 192 bytes, respectively. In
Rx ring xdp_rxq_info occupies its own cacheline, so it's the major
difference now.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 11:59:57 +0000 (13:59 +0200)]
ice: move ice_container_type onto ice_ring_container

Currently ice_container_type is scoped only for ice_ethtool.c. Next
commit that will split the ice_ring struct onto Rx/Tx specific ring
structs is going to also modify the type of linked list of rings that is
within ice_ring_container. Therefore, the functions that are taking the
ice_ring_container as an input argument will need to be aware of a ring
type that will be looked up.

Embed ice_container_type within ice_ring_container and initialize it
properly when allocating the q_vectors.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Maciej Fijalkowski [Thu, 19 Aug 2021 11:59:56 +0000 (13:59 +0200)]
ice: remove ring_active from ice_ring

This field is dead and driver is not making any use of it. Simply remove
it.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
David S. Miller [Fri, 15 Oct 2021 13:32:41 +0000 (14:32 +0100)]
Merge branch 'dpaa2-irq-coalescing'

Ioana Ciornei says:

====================
dpaa2-eth: add support for IRQ coalescing

This patch set adds support for interrupt coalescing in dpaa2-eth.
The first patches add support for the hardware-level configuration of
IRQ coalescing in the dpio driver, while the ones that touch the
dpaa2-eth driver are responsible for the ethtool user interaction.

With the adaptive IRQ coalescing in place and enabled we have observed
the following changes in interrupt rates on one A72 core @2.2GHz
(LX2160A) while running a Rx TCP flow.  The TCP stream is sent on a
10Gbit link and the only cpu that does Rx is fully utilized.
              Throughput     IRQ rate (irqs / sec)
before:   4.59 Gbits/sec                      24k
after:    5.67 Gbits/sec                     1.3k
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 15 Oct 2021 09:01:27 +0000 (12:01 +0300)]
net: dpaa2: add adaptive interrupt coalescing

Add support for adaptive interrupt coalescing to the dpaa2-eth driver.
First of all, ETHTOOL_COALESCE_USE_ADAPTIVE_RX is defined as a supported
coalesce parameter and the requested state is configured through the
dpio APIs added in the previous patch.

Besides the ethtool API interaction, we keep track of how many bytes and
frames are dequeued per CDAN (Channel Data Availability Notification)
and update the Net DIM instance through the dpaa2_io_update_net_dim()
API.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 15 Oct 2021 09:01:26 +0000 (12:01 +0300)]
soc: fsl: dpio: add Net DIM integration

Use the generic dynamic interrupt moderation (dim) framework to
implement adaptive interrupt coalescing on Rx. With the per-packet
interrupt scheme, a high interrupt rate has been noted for moderate
traffic flows leading to high CPU utilization.

The dpio driver exports new functions to enable/disable adaptive IRQ
coalescing on a DPIO object, to query the state or to update Net DIM
with a new set of bytes and frames dequeued.
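
Internally this kind of update path feeds the generic DIM machinery from
include/linux/dim.h; a hedged sketch of that generic pattern (not the
literal dpio code, the dim instance name is assumed):

  struct dim_sample sample = {};

  dim_update_sample(events, frames, bytes, &sample);  /* totals observed since last decision */
  net_dim(&ch->rx_dim, sample);                       /* let DIM pick a new moderation profile */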

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 15 Oct 2021 09:01:25 +0000 (12:01 +0300)]
net: dpaa2: add support for manual setup of IRQ coalescing

Use the newly exported dpio driver API to manually configure the IRQ
coalescing parameters requested by the user.
The .get_coalesce() and .set_coalesce() ethtool callbacks are
implemented and directly export or set up the rx-usecs on all the
configured channels.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 15 Oct 2021 09:01:24 +0000 (12:01 +0300)]
soc: fsl: dpio: add support for irq coalescing per software portal

In DPAA2 based SoCs, the IRQ coalescing support per software portal has 2
configurable parameters:
 - the IRQ timeout period (QBMAN_CINH_SWP_ITPR): how many 256 QBMAN
   cycles need to pass until a dequeue interrupt is asserted.
 - the IRQ threshold (QBMAN_CINH_SWP_DQRR_ITR): how many dequeue
   responses in the DQRR ring would generate an IRQ.

Add support for setting up and querying these IRQ coalescing related
parameters.
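
With the QBMAN clock frequency known (the dpio attributes patch in this
series exposes it), converting a user-supplied rx-usecs value into the
ITPR units described above is simple arithmetic; a sketch, with the exact
rounding left as an assumption:

  /* ITPR counts ticks of 256 QBMAN cycles; qbman_clk is in Hz */
  static u32 usecs_to_itp_ticks(u32 usecs, u32 qbman_clk)
  {
          u64 cycles = (u64)usecs * qbman_clk;

          do_div(cycles, 1000000);        /* cycles elapsed in 'usecs' */
          do_div(cycles, 256);            /* convert to 256-cycle ticks */
          return (u32)cycles;
  }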

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 15 Oct 2021 09:01:23 +0000 (12:01 +0300)]
soc: fsl: dpio: extract the QBMAN clock frequency from the attributes

Through the dpio_get_attributes() firmware call the dpio driver has
access to the QBMAN clock frequency. Extend the structure which holds
the firmware's response so that we can have access to this information.

This will be needed in the next patches which also add support for
interrupt coalescing which needs to be configured based on the
frequency.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 15 Oct 2021 10:33:09 +0000 (11:33 +0100)]
Merge branch 'L4S-style-ce_threshold_ect1-marking'

Eric Dumazet says:

====================
net/sched: implement L4S style ce_threshold_ect1 marking

As suggested by Ingemar Johansson, Neal Cardwell, and others, fq_codel can be used
for Low Latency, Low Loss, Scalable Throughput (L4S) with a small change.

In ce_threshold_ect1 mode, only ECT(1) packets can be marked to CE if
their sojourn time is above the threshold.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 14 Oct 2021 17:59:18 +0000 (10:59 -0700)]
fq_codel: implement L4S style ce_threshold_ect1 marking

Add TCA_FQ_CODEL_CE_THRESHOLD_ECT1 boolean option to select Low Latency,
Low Loss, Scalable Throughput (L4S) style marking, along with ce_threshold.

If enabled, only packets with ECT(1) can be transformed to CE
if their sojourn time is above the ce_threshold.

Note that this new option does not change rules for codel law.
In particular, if TCA_FQ_CODEL_ECN is left enabled (this is
the default when fq_codel qdisc is created), ECT(0) packets can
still get CE if codel law (as governed by limit/target) decides so.
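
In pseudo-C, the dequeue-time decision described above looks roughly like
this (a sketch, not the literal codel/fq_codel code; q and sojourn are
illustrative):

  int dsfield = skb_get_dsfield(skb);
  bool ect1 = dsfield >= 0 && (dsfield & INET_ECN_MASK) == INET_ECN_ECT_1;

  /* shallow ce_threshold marking, restricted to ECT(1) when the new mode is on */
  if (codel_time_after(sojourn, q->ce_threshold) &&
      (!q->ce_threshold_ect1 || ect1))
          INET_ECN_set_ce(skb);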

Section 4.3.b of current draft [1] states:

b.  A scheduler with per-flow queues such as FQ-CoDel or FQ-PIE can
    be used for L4S.  For instance within each queue of an FQ-CoDel
    system, as well as a CoDel AQM, there is typically also ECN
    marking at an immediate (unsmoothed) shallow threshold to support
    use in data centres (see Sec.5.2.7 of [RFC8290]).  This can be
    modified so that the shallow threshold is solely applied to
    ECT(1) packets.  Then if there is a flow of non-ECN or ECT(0)
    packets in the per-flow-queue, the Classic AQM (e.g.  CoDel) is
    applied; while if there is a flow of ECT(1) packets in the queue,
    the shallower (typically sub-millisecond) threshold is applied.

Tested:

tc qd replace dev eth1 root fq_codel ce_threshold_ect1 50usec

netperf ... -t TCP_STREAM -- K dctcp

tc -s -d qd sh dev eth1
qdisc fq_codel 8022: root refcnt 32 limit 10240p flows 1024 quantum 9212 target 5ms ce_threshold_ect1 49us interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 14388596616 bytes 9543449 pkt (dropped 0, overlimits 0 requeues 152013)
 backlog 0b 0p requeues 152013
  maxpacket 68130 drop_overlimit 0 new_flow_count 95678 ecn_mark 0 ce_mark 7639
  new_flows_len 0 old_flows_len 0

[1] L4S current draft:
https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-l4s-arch

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Ingemar Johansson S <ingemar.s.johansson@ericsson.com>
Cc: Tom Henderson <tomh@tomh.org>
Cc: Bob Briscoe <in@bobbriscoe.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 14 Oct 2021 17:59:17 +0000 (10:59 -0700)]
net: add skb_get_dsfield() helper

skb_get_dsfield(skb) gets dsfield from skb, or -1
if an error was found.

This is basically a wrapper around ipv4_get_dsfield()
and ipv6_get_dsfield().

Used by following patch for fq_codel.
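
A plausible shape for such a wrapper, consistent with the description
above (a sketch, not necessarily the helper exactly as merged):

  static inline int skb_get_dsfield(struct sk_buff *skb)
  {
          switch (skb_protocol(skb, true)) {
          case htons(ETH_P_IP):
                  if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
                          break;
                  return ipv4_get_dsfield(ip_hdr(skb));
          case htons(ETH_P_IPV6):
                  if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr)))
                          break;
                  return ipv6_get_dsfield(ipv6_hdr(skb));
          }
          return -1;
  }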

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Ingemar Johansson S <ingemar.s.johansson@ericsson.com>
Cc: Tom Henderson <tomh@tomh.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 14 Oct 2021 13:41:26 +0000 (06:41 -0700)]
tcp: switch orphan_count to bare per-cpu counters

Use of percpu_counter structure to track count of orphaned
sockets is causing problems on modern hosts with 256 cpus
or more.

Stefan Bach reported a serious spinlock contention in real workloads,
that I was able to reproduce with a netfilter rule dropping
incoming FIN packets.

    53.56%  server  [kernel.kallsyms]      [k] queued_spin_lock_slowpath
            |
            ---queued_spin_lock_slowpath
               |
                --53.51%--_raw_spin_lock_irqsave
                          |
                           --53.51%--__percpu_counter_sum
                                     tcp_check_oom
                                     |
                                     |--39.03%--__tcp_close
                                     |          tcp_close
                                     |          inet_release
                                     |          inet6_release
                                     |          sock_close
                                     |          __fput
                                     |          ____fput
                                     |          task_work_run
                                     |          exit_to_usermode_loop
                                     |          do_syscall_64
                                     |          entry_SYSCALL_64_after_hwframe
                                     |          __GI___libc_close
                                     |
                                      --14.48%--tcp_out_of_resources
                                                tcp_write_timeout
                                                tcp_retransmit_timer
                                                tcp_write_timer_handler
                                                tcp_write_timer
                                                call_timer_fn
                                                expire_timers
                                                __run_timers
                                                run_timer_softirq
                                                __softirqentry_text_start

As explained in commit cf86a086a180 ("net/dst: use a smaller percpu_counter
batch for dst entries accounting"), default batch size is too big
for the default value of tcp_max_orphans (262144).

But even if we reduce batch sizes, there would still be cases
where the estimated count of orphans is beyond the limit,
and where tcp_too_many_orphans() has to call the expensive
percpu_counter_sum_positive().

One solution is to use plain per-cpu counters, and have
a timer to periodically refresh this cache.

Updating this cache every 100ms seems about right, tcp pressure
state is not radically changing over shorter periods.

percpu_counter was nice 15 years ago while hosts had less
than 16 cpus, not anymore by current standards.
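
A generic sketch of that pattern (plain per-cpu counters plus a
timer-refreshed cached sum; names are illustrative, not the exact TCP
symbols):

  static DEFINE_PER_CPU(int, orphan_count);     /* fast path: this_cpu_inc()/this_cpu_dec() */
  static int orphan_cache;                      /* read lock-free by the pressure checks */

  static void orphan_timer_fn(struct timer_list *t)
  {
          int cpu, sum = 0;

          for_each_possible_cpu(cpu)
                  sum += per_cpu(orphan_count, cpu);
          WRITE_ONCE(orphan_cache, max(sum, 0));
          mod_timer(t, jiffies + msecs_to_jiffies(100));
  }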

v2: Fix the build issue for CONFIG_CRYPTO_DEV_CHELSIO_TLS=m,
    reported by kernel test robot <lkp@intel.com>
    Remove unused socket argument from tcp_too_many_orphans()

Fixes: dd24c00191d5 ("net: Use a percpu_counter for orphan_count")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Stefan Bach <sfb@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matt Johnston [Thu, 14 Oct 2021 08:10:50 +0000 (16:10 +0800)]
mctp: Avoid leak of mctp_sk_key

mctp_key_alloc() returns a key already referenced.

The mctp_route_input() path receives a packet for a bind socket and
allocates a key. It passes the key to mctp_key_add() which takes a
refcount and adds the key to lists. mctp_route_input() should then
release its own refcount when setting the key pointer to NULL.

In the mctp_alloc_local_tag() path (for mctp_local_output()) we
similarly need to unref the key before returning (mctp_reserve_tag()
takes a refcount and adds the key to lists).

Fixes: 73c618456dc5 ("mctp: locking, lifetime and validity changes for sk_keys")
Signed-off-by: Matt Johnston <matt@codeconstruct.com.au>
Reviewed-by: Jeremy Kerr <jk@codeconstruct.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 15 Oct 2021 10:06:38 +0000 (11:06 +0100)]
Merge branch 'qca8337-improvements'

Ansuel Smith says:

====================
Multiple improvement for qca8337 switch

This series is the final step of a long process of porting 80+ devices
to use the new qca8k driver instead of the hacky qca one based on the
never-merged swconfig platform.
Some background to justify all these additions:
QCA used a special binding to declare raw initvals to set up the switch. I
made a script to convert all these magic values, converted 80+ dts and
scanned all the needed "unsupported regs". We found a baseline where we
managed to find the common and used regs, so in theory hopefully we don't
have to add any more things.
We discovered lots of things with this, especially about how differently
qca8327 works compared to qca8337.

In short, we found that qca8327 has some problems with suspend/resume for
its internal phy. It instead sets some dedicated regs that suspend the
phy without setting the standard bit. The first 4 patches fix this.
There is also a patch about preferring master. This is taken directly from the
original driver and it seems to be needed to prevent some problems with
pause frames.

Every ipq806x target sets the mac power sel and this specific reg
regulates the output voltage of the regulator. Without this some
instability can occur.

Some configurations (for some reason) swap mac6 with mac0. We add support
for this.
Also, we discovered that some devices don't work at all with the pll enabled
on the sgmii line. In the original code this was based on the switch
revision. In later revisions the pll regs were decided based on the switch
type (disabled for qca8327 and enabled for qca8337), but some devices
still had it disabled in the initval regs.
Considering we found at least one qca8337 device that required the pll
disabled to work (no traffic otherwise), we decided to introduce a binding
to enable the pll and set it only when requested.

Lastly, we add support for led open drain, which requires power-on-sel
to be set. Also, some devices have only the power-on-sel set in the initval,
so we add support for that as well. This is needed for the switch leds to
function correctly.
Qca8327 has a special reg in the pws regs that sets it to a reduced
48-pin layout. This is needed or the switch doesn't work.

These are all the special configurations we found on these devices, which
come from various targets, mostly ath79, ipq806x and bcm53xx.
Changes v7:
- Fix missing newline in yaml
- Handle error with wrong cpu port detected
- Move yaml commit as last to fix bot error

Changes v6:
- Convert Documentation to yaml
- Add extra check for cpu port and invalid phy mode
- Add co developed by tag to give credits to Matthew

Changes v5:
- Swap patch. Document first then implement.
- Fix some grammar error reported.
- Rework function. Remove phylink mac_config DT scan and move everything
  to dedicated function in probe.
- Introduce new logic for delay selection where it is also supported with
  internal delay declared and rgmii set as phy mode
- Start working on yaml conversion. Will later post this in v6 when we
  finally take final decision about mac swap.

Changes v4:
- Fix typo in SGMII falling edge about using PHY id instead of
  switch id

Changes v3:
- Drop phy patches (proposed separateley)
- Drop special pwr binding. Rework to ipq806x specific
- Better describe compatible and add serial print on switch chip
- Drop mac exchange. Rework falling edge and move it to mac_config
- Add support for port 6 cpu port. Drop hardcoded cpu port to port0
- Improve port stability with sgmii. QCA source have intenal delay also
  for sgmii
- Add warning with pll enabled on wrong configuration

Changes v2:
- Reword Documentation patch to dt-bindings
- Propose first 2 phy patch to net
- Better describe and add hint on how to use all the new
  bindings
- Rework delay scan function and move to phylink mac_config
- Drop package48 wrong binding
- Introduce support for qca8328 switch
- Fix wrong binding name power-on-sel
- Return error on wrong config with led open drain and
  ignore-power-on-sel not set
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Matthew Hagan [Wed, 13 Oct 2021 22:39:21 +0000 (00:39 +0200)]
dt-bindings: net: dsa: qca8k: convert to YAML schema

Convert the qca8k bindings to YAML format.

Signed-off-by: Matthew Hagan <mnhagan88@gmail.com>
Co-developed-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ansuel Smith [Wed, 13 Oct 2021 22:39:20 +0000 (00:39 +0200)]
dt-bindings: net: ipq8064-mdio: fix warning with new qca8k switch

Fix a warning now that the qca8k switch Documentation uses yaml.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ansuel Smith [Wed, 13 Oct 2021 22:39:19 +0000 (00:39 +0200)]
net: dsa: qca8k: move port config to dedicated struct

Move port-related config to a dedicated struct to keep things organized.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ansuel Smith [Wed, 13 Oct 2021 22:39:18 +0000 (00:39 +0200)]
net: dsa: qca8k: set internal delay also for sgmii

The original QCA code reports port instability and says that SGMII also
requires setting an internal delay. Generalize the rgmii delay function and
apply the advised values if they are not defined in DT.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ansuel Smith [Wed, 13 Oct 2021 22:39:17 +0000 (00:39 +0200)]
net: dsa: qca8k: add support for QCA8328

The QCA8328 switch is the bigger brother of the qca8327. Same regs, different
chip. Change the function to set the correct pin layout and introduce a
new match_data to differentiate the 2 switches, as they have the same ID
and their internal PHYs have the same ID.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ansuel Smith [Wed, 13 Oct 2021 22:39:16 +0000 (00:39 +0200)]
dt-bindings: net: dsa: qca8k: document support for qca8328

QCA8328 is the bigger brother of qca8327. Document the new compatible
binding and add some information to understand the various switch
compatibles.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: dsa: qca8k: add support for pws config reg
Ansuel Smith [Wed, 13 Oct 2021 22:39:15 +0000 (00:39 +0200)]
net: dsa: qca8k: add support for pws config reg

Some qca8327 switches require the power-on-sel strapping to be forcefully
ignored. Some switches require the led open-drain mode to be set in regs
instead of using strapping. While most devices implement this the correct
way, using pin strapping, there are still some broken devices that
require it to be set using sw regs.
Introduce new bindings and support these special configurations.
As led open drain requires the pin strapping to be ignored to work, the
probe fails with an EINVAL error on an incorrect configuration.
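
A rough sketch of the probe-time sanity check described above (function
name is hypothetical, property names follow the new bindings; the actual
register writes are elided):

  /* sketch: led open drain only works with power-on-sel ignored */
  static int qca8k_check_pws_config(struct device_node *np)
  {
          bool ignore_pos, led_od;

          ignore_pos = of_property_read_bool(np, "qca,ignore-power-on-sel");
          led_od = of_property_read_bool(np, "qca,led-open-drain");

          if (led_od && !ignore_pos) {
                  pr_err("qca8k: led-open-drain requires ignore-power-on-sel\n");
                  return -EINVAL;
          }

          /* ...apply the corresponding PWS register bits here... */
          return 0;
  }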

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodt-bindings: net: dsa: qca8k: Document qca,led-open-drain binding
Ansuel Smith [Wed, 13 Oct 2021 22:39:14 +0000 (00:39 +0200)]
dt-bindings: net: dsa: qca8k: Document qca,led-open-drain binding

Document the new binding qca,ignore-power-on-sel, used to ignore the
power-on strapping and use sw regs instead.
Document qca,led-open-drain, used to set the leds to open-drain mode;
qca,ignore-power-on-sel is mandatory when this is enabled or an error
will be reported.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: dsa: qca8k: add explicit SGMII PLL enable
Ansuel Smith [Wed, 13 Oct 2021 22:39:13 +0000 (00:39 +0200)]
net: dsa: qca8k: add explicit SGMII PLL enable

Support enabling the PLL on the SGMII CPU port. Some devices require this
special configuration or no traffic is transmitted and the switch
doesn't work at all. A dedicated binding is added to the CPU port node
to apply the correct reg on mac config.
Refuse the PLL on the qca8327 switch, where it fails to correctly
configure sgmii, and warn if the pll is used on a qca8337 with a
revision greater than 1.
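
A minimal sketch of the idea (the property name matches the binding
documented below; the bit position is a placeholder, not the real
register layout):

  /* sketch: set a PLL enable bit in the SGMII value used at mac_config */
  static void qca8k_apply_sgmii_pll(struct device_node *port_np, u32 *sgmii_val)
  {
          if (of_property_read_bool(port_np, "qca,sgmii-enable-pll"))
                  *sgmii_val |= BIT(1);   /* hypothetical PLL enable bit */
  }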

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodt-bindings: net: dsa: qca8k: Document qca,sgmii-enable-pll
Ansuel Smith [Wed, 13 Oct 2021 22:39:12 +0000 (00:39 +0200)]
dt-bindings: net: dsa: qca8k: Document qca,sgmii-enable-pll

Document the qca,sgmii-enable-pll binding, used in the CPU port nodes to
enable the SGMII PLL on MAC config.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: dsa: qca8k: rework rgmii delay logic and scan for cpu port 6
Ansuel Smith [Wed, 13 Oct 2021 22:39:11 +0000 (00:39 +0200)]
net: dsa: qca8k: rework rgmii delay logic and scan for cpu port 6

Future-proof commit. This switch has 2 CPU ports and one valid
configuration is the first CPU port set to sgmii and the second CPU port
set to rgmii-id. The current implementation detects the delay only for
CPU port zero set to rgmii and doesn't account for any delay set on a
secondary CPU port. Drop the current delay scan function and move it to
the sgmii parser function to generalize it and implicitly add support
for a secondary CPU port set to rgmii-id. Introduce new logic where the
delay is also enabled when the internal delay binding is declared and
rgmii is set as the PHY mode.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: dsa: qca8k: add support for cpu port 6
Ansuel Smith [Wed, 13 Oct 2021 22:39:10 +0000 (00:39 +0200)]
net: dsa: qca8k: add support for cpu port 6

Currently the CPU port is always hardcoded to port 0. This switch has 2
CPU ports. The original intention of this driver seems to be to use the
mac06_exchange bit to swap MAC0 with MAC6 in the strange configuration
where the device has only CPU port 6 connected. To skip the introduction
of a new binding, rework the driver to address the secondary CPU port as
primary and drop any reference to the hardcoded port. Instead of
configuring the mac06 exchange, just skip the definition of port0 and
define the CPU port as a secondary one. The driver will autoconfigure
the switch to use that as the primary CPU port.
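
A hedged sketch of the port selection, using the generic DSA helper
dsa_is_cpu_port() (simplified, not the actual driver code):

  /* sketch: use the first CPU port found in the DT as the primary one */
  static int qca8k_find_cpu_port(struct dsa_switch *ds)
  {
          int port;

          /* only port 0 and port 6 can act as CPU port on this switch */
          for (port = 0; port < ds->num_ports; port++)
                  if (dsa_is_cpu_port(ds, port) && (port == 0 || port == 6))
                          return port;

          return -EINVAL;
  }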

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodt-bindings: net: dsa: qca8k: Document support for CPU port 6
Ansuel Smith [Wed, 13 Oct 2021 22:39:09 +0000 (00:39 +0200)]
dt-bindings: net: dsa: qca8k: Document support for CPU port 6

The switch now supports the CPU port being set to 6 instead of being
hardcoded to 0. Document support for it and describe the selection logic.

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: dsa: qca8k: add support for sgmii falling edge
Ansuel Smith [Wed, 13 Oct 2021 22:39:08 +0000 (00:39 +0200)]
net: dsa: qca8k: add support for sgmii falling edge

Add support for this in the qca8k driver. Also add support for the SGMII
rx/tx clock falling edge. This is only present for pad0; pad5 and pad6
have these bits reserved according to the documentation. Add a comment
that this is hardcoded to PAD0, as qca8327/28/34/37 have a unique sgmii
line and setting the falling edge in port0 applies to both
configurations, with sgmii used for port0 or port6.
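
A minimal sketch of the PAD0 handling described above (bit positions are
placeholders; the property names match the bindings added below):

  /* sketch: falling-edge bits always land in the PORT0 pad register */
  static u32 qca8k_sgmii_falling_edge_bits(struct device_node *port_np)
  {
          u32 val = 0;

          if (of_property_read_bool(port_np, "qca,sgmii-rxclk-falling-edge"))
                  val |= BIT(19);         /* hypothetical rx falling-edge bit */
          if (of_property_read_bool(port_np, "qca,sgmii-txclk-falling-edge"))
                  val |= BIT(18);         /* hypothetical tx falling-edge bit */

          return val;
  }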

Co-developed-by: Matthew Hagan <mnhagan88@gmail.com>
Signed-off-by: Matthew Hagan <mnhagan88@gmail.com>
Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodt-bindings: net: dsa: qca8k: Add SGMII clock phase properties
Ansuel Smith [Wed, 13 Oct 2021 22:39:07 +0000 (00:39 +0200)]
dt-bindings: net: dsa: qca8k: Add SGMII clock phase properties

Add names and descriptions of the additional PORT0_PAD_CTRL properties.
qca,sgmii-(rx|tx)clk-falling-edge are for setting the respective clock
phase to falling edge.

Co-developed-by: Matthew Hagan <mnhagan88@gmail.com>
Signed-off-by: Matthew Hagan <mnhagan88@gmail.com>
Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodsa: qca8k: add mac_power_sel support
Ansuel Smith [Wed, 13 Oct 2021 22:39:06 +0000 (00:39 +0200)]
dsa: qca8k: add mac_power_sel support

Add the missing mac power sel support needed for the ipq8064/5 SoCs that
require 1.8v for the internal regulator port instead of the default 1.5v.
If other devices need this, consider adding a dedicated binding to
support it.
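
A hedged sketch of the register write this implies (register and bit
macros are placeholders; only the 1.8v-vs-1.5v selection comes from the
text above, and qca8k_priv/regmap are assumed driver context):

  /* sketch: select 1.8v for the internal regulator port via sw regs */
  static int qca8k_setup_mac_pwr_sel(struct qca8k_priv *priv)
  {
          u32 mask = QCA8K_MAC_PWR_RGMII0_1_8V | QCA8K_MAC_PWR_RGMII1_1_8V;

          return regmap_update_bits(priv->regmap, QCA8K_REG_MAC_PWR_SEL,
                                    mask, mask);
  }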

Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoxen-netback: Remove redundant initialization of variable err
Colin Ian King [Wed, 13 Oct 2021 16:51:42 +0000 (17:51 +0100)]
xen-netback: Remove redundant initialization of variable err

The variable err is being initialized with a value that is never read; it
is updated immediately afterwards. The assignment is redundant and can
be removed.

Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agopage_pool: disable dma mapping support for 32-bit arch with 64-bit DMA
Yunsheng Lin [Wed, 13 Oct 2021 09:19:20 +0000 (17:19 +0800)]
page_pool: disable dma mapping support for 32-bit arch with 64-bit DMA

As 32-bit arches with 64-bit DMA seem to be rare these days, page pool
would carry a lot of code and complexity for systems that possibly do
not even exist.

So disable dma mapping support for such systems; if drivers really want
to work on such systems, they have to implement their own DMA-mapping
fallback tracking outside page_pool.
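
A hedged sketch of what such a guard could look like at pool-init time
(PP_FLAG_DMA_MAP is the existing page_pool flag; the helper name and the
exact check in the patch are illustrative):

  /* sketch: reject DMA mapping when dma_addr_t does not fit in a long */
  static int page_pool_check_dma_support(const struct page_pool_params *p)
  {
          if ((p->flags & PP_FLAG_DMA_MAP) &&
              sizeof(dma_addr_t) > sizeof(unsigned long))
                  return -EOPNOTSUPP;     /* 32-bit arch with 64-bit DMA */

          return 0;
  }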

Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoMerge branch 'octeontx2-af-miscellaneous-changes-for-cpt'
Jakub Kicinski [Fri, 15 Oct 2021 03:01:08 +0000 (20:01 -0700)]
Merge branch 'octeontx2-af-miscellaneous-changes-for-cpt'

Srujana Challa says:

====================
octeontx2-af: Miscellaneous changes for CPT

This patchset consists of miscellaneous changes for CPT.
The first patch enables the CPT HW interrupts, the second patch adds
support for CPT LF teardown in the non-FLR path, and the final patch
does a CPT CTX flush in the FLR handler.

v2:
- Fixed a warning reported by kernel test robot.
====================

Link: https://lore.kernel.org/r/20211013055621.1812301-1-schalla@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agoocteontx2-af: Add support to flush full CPT CTX cache
Srujana Challa [Wed, 13 Oct 2021 05:56:21 +0000 (11:26 +0530)]
octeontx2-af: Add support to flush full CPT CTX cache

Adds support to flush or invalidate CPT CTX entries as part of FLR
and also provides a mailbox to flush CPT CTX entries in case of
graceful exit.
This patch also adds support for AF -> CPT PF uplink mailbox messages
and adds a new mbox message to submit a CPT instruction from AF.

Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agoocteontx2-af: Perform cpt lf teardown in non FLR path
Nithin Dabilpuram [Wed, 13 Oct 2021 05:56:20 +0000 (11:26 +0530)]
octeontx2-af: Perform cpt lf teardown in non FLR path

Perform the CPT LF teardown in the non-FLR path as well via
cpt_lf_free(). Currently the CPT LF teardown and reset sequence is only
done when FLR is handled with the CPT LF still attached.

This patch also fixes cpt_lf_alloc() to set EXEC_LDWB in
CPT_AF_LFX_CTL2 when it is being completely overwritten, as that is
the default value and is better for performance.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agoocteontx2-af: Enable CPT HW interrupts
Srujana Challa [Wed, 13 Oct 2021 05:56:19 +0000 (11:26 +0530)]
octeontx2-af: Enable CPT HW interrupts

This patch enables CPT HW interrupts and registers an interrupt handler
for them.

Signed-off-by: Srujana Challa <schalla@marvell.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet: tulip: winbond-840: fix build for UML
Randy Dunlap [Thu, 14 Oct 2021 05:06:06 +0000 (22:06 -0700)]
net: tulip: winbond-840: fix build for UML

On i386, when builtin (not a loadable module), the winbond-840 driver
inspects boot_cpu_data to see what CPU family it is running on, and
then acts on that data. The "family" struct member (x86) does not exist
when running on UML, so prevent that test and do the default action.
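
A minimal sketch of the kind of guard this implies (the helper name is
hypothetical and the exact ifdef condition used by the patch may differ):

  /* sketch: the family check only compiles on real x86,
   * UML takes the default path
   */
  static bool old_x86_family(void)
  {
  #if defined(__i386__) && !defined(CONFIG_UML)
          return boot_cpu_data.x86 <= 4;
  #else
          return false;                   /* default action */
  #endif
  }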

Prevents this build error on UML + i386:

../drivers/net/ethernet/dec/tulip/winbond-840.c: In function ‘init_registers’:
../drivers/net/ethernet/dec/tulip/winbond-840.c:882:19: error: ‘struct cpuinfo_um’ has no member named ‘x86’
  if (boot_cpu_data.x86 <= 4) {

Fixes: 68f5d3f3b654 ("um: add PCI over virtio emulation driver")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-um@lists.infradead.org
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Link: https://lore.kernel.org/r/20211014050606.7288-1-rdunlap@infradead.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet: intel: igc_ptp: fix build for UML
Randy Dunlap [Thu, 14 Oct 2021 05:05:16 +0000 (22:05 -0700)]
net: intel: igc_ptp: fix build for UML

On a UML build, the igc_ptp driver uses CONFIG_X86_TSC for timestamp
conversion. The function that is used is not available on UML builds,
so have the function use the default system_counterval_t timestamp
instead for UML builds.

Prevents this build error on UML:

../drivers/net/ethernet/intel/igc/igc_ptp.c: In function ‘igc_device_tstamp_to_system’:
../drivers/net/ethernet/intel/igc/igc_ptp.c:777:9: error: implicit declaration of function ‘convert_art_ns_to_tsc’ [-Werror=implicit-function-declaration]
  return convert_art_ns_to_tsc(tstamp);
../drivers/net/ethernet/intel/igc/igc_ptp.c:777:9: error: incompatible types when returning type ‘int’ but ‘struct system_counterval_t’ was expected
  return convert_art_ns_to_tsc(tstamp);

Fixes: 68f5d3f3b654 ("um: add PCI over virtio emulation driver")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-um@lists.infradead.org
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: intel-wired-lan@lists.osuosl.org
Link: https://lore.kernel.org/r/20211014050516.6846-1-rdunlap@infradead.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet: fealnx: fix build for UML
Randy Dunlap [Thu, 14 Oct 2021 05:05:00 +0000 (22:05 -0700)]
net: fealnx: fix build for UML

On i386, when builtin (not a loadable module), the fealnx driver
inspects boot_cpu_data to see what CPU family it is running on, and
then acts on that data. The "family" struct member (x86) does not exist
when running on UML, so prevent that test and do the default action.

Prevents this build error on UML + i386:

../drivers/net/ethernet/fealnx.c: In function ‘netdev_open’:
../drivers/net/ethernet/fealnx.c:861:19: error: ‘struct cpuinfo_um’ has no member named ‘x86’

Fixes: 68f5d3f3b654 ("um: add PCI over virtio emulation driver")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-um@lists.infradead.org
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Link: https://lore.kernel.org/r/20211014050500.5620-1-rdunlap@infradead.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agohv_netvsc: Add comment of netvsc_xdp_xmit()
Jiasheng Jiang [Thu, 14 Oct 2021 01:26:26 +0000 (01:26 +0000)]
hv_netvsc: Add comment of netvsc_xdp_xmit()

Add a comment to avoid misuse of netvsc_xdp_xmit().
Otherwise the value of skb->queue_mapping could be 0 and then the
return value of skb_get_rx_queue() could be MAX_U16, caused by
overflow.
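
For reference, a hedged illustration of the pitfall: skb_get_rx_queue()
returns skb->queue_mapping - 1, so a zero (never recorded) mapping wraps
the u16 around (the helper name below is hypothetical):

  /* sketch: callers must record an rx queue before netvsc_xdp_xmit() */
  static bool xdp_xmit_queue_is_valid(const struct sk_buff *skb)
  {
          /* queue_mapping == 0 makes skb_get_rx_queue() return 0xffff */
          return skb->queue_mapping != 0;
  }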

Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Link: https://lore.kernel.org/r/1634174786-1810351-1-git-send-email-jiasheng@iscas.ac.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agoMerge branch 'minor-managed-neighbor-follow-ups'
Jakub Kicinski [Fri, 15 Oct 2021 02:16:24 +0000 (19:16 -0700)]
Merge branch 'minor-managed-neighbor-follow-ups'

Daniel Borkmann says:

====================
Minor managed neighbor follow-ups

Minor follow-up series to address prior feedback from David and Jakub.
Patch 1 adds a build time assertion to prevent overflows when shifting
in extended flags, patch 2 is a cleanup to use NLA_POLICY_MASK instead
of open-coding invalid flags rejection and patch 3 rejects creating new
neighbors with NUD_PERMANENT & NTF_MANAGED. For details, see individual
patches.
====================

Link: https://lore.kernel.org/r/20211013132140.11143-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet, neigh: Reject creating NUD_PERMANENT with NTF_MANAGED entries
Daniel Borkmann [Wed, 13 Oct 2021 13:21:40 +0000 (15:21 +0200)]
net, neigh: Reject creating NUD_PERMANENT with NTF_MANAGED entries

The combination of NUD_PERMANENT + NTF_MANAGED is not supported and does
not make sense either given the former indicates a static/fixed neighbor
entry whereas the latter a dynamically resolved one. While it is possible
to transition from one over to the other, we should however reject such
creation attempts.
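
A hedged sketch of the check this implies in the netlink create path
(helper name and error string are illustrative; NTF_EXT_MANAGED is the
UAPI bit carried in NDA_FLAGS_EXT):

  /* sketch: a managed entry cannot also be created as a permanent one */
  static int neigh_check_managed(struct netlink_ext_ack *extack,
                                 const struct ndmsg *ndm, u32 ext_flags)
  {
          if ((ext_flags & NTF_EXT_MANAGED) &&
              (ndm->ndm_state & NUD_PERMANENT)) {
                  NL_SET_ERR_MSG(extack, "Invalid NTF_MANAGED with NUD_PERMANENT");
                  return -EINVAL;
          }

          return 0;
  }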

Fixes: 7482e3841d52 ("net, neigh: Add NTF_MANAGED flag for managed neighbor entries")
Suggested-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet, neigh: Use NLA_POLICY_MASK helper for NDA_FLAGS_EXT attribute
Daniel Borkmann [Wed, 13 Oct 2021 13:21:39 +0000 (15:21 +0200)]
net, neigh: Use NLA_POLICY_MASK helper for NDA_FLAGS_EXT attribute

Instead of open-coding a check for invalid bits in NTF_EXT_MASK, we can just
use the NLA_POLICY_MASK() helper instead, and simplify NDA_FLAGS_EXT sanity
check this way.
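
A sketch of what the policy entry then looks like (table context
trimmed; NLA_POLICY_MASK rejects any bit outside NTF_EXT_MASK during
attribute validation):

  static const struct nla_policy nda_policy[NDA_MAX + 1] = {
          /* ... other NDA_* attributes ... */
          [NDA_FLAGS_EXT] = NLA_POLICY_MASK(NLA_U32, NTF_EXT_MASK),
  };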

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet, neigh: Add build-time assertion to avoid neigh->flags overflow
Daniel Borkmann [Wed, 13 Oct 2021 13:21:38 +0000 (15:21 +0200)]
net, neigh: Add build-time assertion to avoid neigh->flags overflow

Currently, NDA_FLAGS_EXT flags allow a maximum of 24 bits to be used for
extended neighbor flags. These are eventually fed into neigh->flags by
shifting with NTF_EXT_SHIFT as per commit 2c611ad97a82 ("net, neigh:
Extend neigh->flags to 32 bit to allow for extensions").

If ever really needed in the future, the full 32 bits from NDA_FLAGS_EXT
can be used; it would only require moving neigh->flags from u32 to u64
inside the kernel.

Add a build-time assertion such that when extending the NTF_EXT_MASK with
new bits, we'll trigger an error once we surpass the 24th bit. This assumes
that no bit holes in new NTF_EXT_* flags will slip in from UAPI, but I
think this is reasonable to assume.
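
A hedged sketch of such an assertion, placed in an init path (the exact
expression in the patch may differ):

  /* sketch: fail the build if the shifted ext mask no longer fits in u32 */
  BUILD_BUG_ON(((u64)NTF_EXT_MASK << NTF_EXT_SHIFT) > U32_MAX);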

Suggested-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet: mvneta: Delete unused variable
Yuval Shaia [Wed, 13 Oct 2021 06:49:21 +0000 (09:49 +0300)]
net: mvneta: Delete unused variable

The variable pp is not in use - delete it.

Signed-off-by: Yuval Shaia <yshaia@marvell.com>
Link: https://lore.kernel.org/r/20211013064921.26346-1-yshaia@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet: phy: dp83867: introduce critical chip default init for non-of platform
Lay, Kuan Loon [Wed, 13 Oct 2021 06:59:41 +0000 (14:59 +0800)]
net: phy: dp83867: introduce critical chip default init for non-of platform

The dp83867 PHY driver has rich support for OF platforms to fine-tune the
PHY chip during phy configuration. However, for non-OF platforms, certain
PHY tunable parameters such as IO impedance and RX & TX internal delays
are critical and should be initialized to their defaults during PHY
driver probe.

Tested-by: Clement <clement@intel.com>
Signed-off-by: Lay, Kuan Loon <kuan.loon.lay@intel.com>
Co-developed-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Tested-by: Kurt Kanzenbach <kurt@linutronix.de>
Link: https://lore.kernel.org/r/20211013065941.2124858-1-boon.leong.ong@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet: microchip: lan743x: add support for PTP pulse width (duty cycle)
Yuiko Oshino [Tue, 12 Oct 2021 13:49:53 +0000 (09:49 -0400)]
net: microchip: lan743x: add support for PTP pulse width (duty cycle)

If the PTP_PEROUT_DUTY_CYCLE flag is set, then check if the
request_on value in ptp_perout_request matches the pre-defined
values or a toggle option.
Return a failure if the value is not supported.

Preserve the old behavior if the PTP_PEROUT_DUTY_CYCLE flag is not
set.

Tested with an oscilloscope on EVB-LAN7430:
e.g., to output PPS 1sec period 500mS on (high) to GPIO 2.
 ./testptp -L 2,2
 ./testptp -p 1000000000 -w 500000000

Signed-off-by: Yuiko Oshino <yuiko.oshino@microchip.com>
Link: https://lore.kernel.org/r/1634046593-64312-1-git-send-email-yuiko.oshino@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agonet: phy: micrel: make *-skew-ps check more lenient
Matthias Schiffer [Tue, 12 Oct 2021 10:34:02 +0000 (12:34 +0200)]
net: phy: micrel: make *-skew-ps check more lenient

It seems reasonable to fine-tune only some of the skew values when using
one of the rgmii-*id PHY modes, and even when all skew values are
specified, using the correct ID PHY mode makes sense for documentation
purposes. Such a configuration also appears in the binding docs in
Documentation/devicetree/bindings/net/micrel-ksz90x1.txt, so the driver
should not warn about it.

Signed-off-by: Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
Link: https://lore.kernel.org/r/20211012103402.21438-1-matthias.schiffer@ew.tq-group.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Jakub Kicinski [Thu, 14 Oct 2021 23:50:14 +0000 (16:50 -0700)]
Merge git://git./linux/kernel/git/netdev/net

tools/testing/selftests/net/ioam6.sh
  7b1700e009cc ("selftests: net: modify IOAM tests for undef bits")
  bf77b1400a56 ("selftests: net: Test for the IOAM encapsulation with IPv6")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>