David S. Miller [Fri, 27 Mar 2020 02:35:13 +0000 (19:35 -0700)]
Merge branch 'veth-stats'
Lorenzo Bianconi says:
====================
veth: move ndo_xdp_xmit stats to peer veth_rq
Move ndo_xdp_xmit ethtool stats accounting to peer veth_rq.
Move XDP_TX accounting to veth_xdp_flush_bq routine.
Changes since v1:
- rename xdp_xmit[_err] counters to peer_tq_xdp_xmit[_err]
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Lorenzo Bianconi [Thu, 26 Mar 2020 22:10:20 +0000 (23:10 +0100)]
veth: rely on peer veth_rq for ndo_xdp_xmit accounting
Rely on 'remote' veth_rq to account ndo_xdp_xmit ethtool counters.
Move XDP_TX accounting to veth_xdp_flush_bq routine.
Remove the 'rx' prefix from the rx XDP ethtool counters.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Acked-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lorenzo Bianconi [Thu, 26 Mar 2020 22:10:19 +0000 (23:10 +0100)]
veth: rely on veth_rq in veth_xdp_flush_bq signature
Substitute the net_device pointer with a veth_rq one in the
veth_xdp_flush_bq, veth_xdp_flush and veth_xdp_tx signatures. This is a
preliminary patch to account the xdp_xmit counters on the 'receiving' veth_rq.
Acked-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wolfram Sang [Thu, 26 Mar 2020 21:10:00 +0000 (22:10 +0100)]
sfc: falcon: convert to use i2c_new_client_device()
Move away from the deprecated API and return the shiny new ERRPTR where
useful.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wolfram Sang [Thu, 26 Mar 2020 21:09:59 +0000 (22:09 +0100)]
igb: convert to use i2c_new_client_device()
Move away from the deprecated API and return the shiny new ERRPTR where
useful.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
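For reference, a minimal sketch of the conversion pattern used by the two patches above, with a hypothetical caller name: the deprecated i2c_new_device() returned NULL on failure, while i2c_new_client_device() returns an ERR_PTR that can be propagated directly.

#include <linux/i2c.h>
#include <linux/err.h>

/* Hypothetical probe helper illustrating the conversion described above. */
static int example_attach_i2c(struct i2c_adapter *adap,
			      const struct i2c_board_info *info)
{
	struct i2c_client *client;

	/* Old, deprecated style: i2c_new_device() returned NULL on failure.
	 * New style: i2c_new_client_device() returns an ERR_PTR, which can
	 * be checked with IS_ERR() and propagated via PTR_ERR().
	 */
	client = i2c_new_client_device(adap, info);
	if (IS_ERR(client))
		return PTR_ERR(client);

	return 0;
}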
David S. Miller [Fri, 27 Mar 2020 02:20:37 +0000 (19:20 -0700)]
Merge branch 'Implement-stats_update-callback-for-pedit-and-skbedit'
Petr Machata says:
====================
Implement stats_update callback for pedit and skbedit
The stats_update callback is used for adding HW counters to the SW ones.
Both skbedit and pedit actions are actually recognized by flow_offload.h,
but do not implement these callbacks. As a consequence, the reported values
are only the SW ones, even where there is a HW counter available.
Patch #1 adds the callback to action skbedit, patch #2 adds it to action
pedit. Patch #3 tweaks an skbedit selftest with a check that would have
caught this problem.
The pedit test is not likewise tweaked, because the iproute2 pedit action
currently does not support JSON dumping. This will be addressed later.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 20:45:57 +0000 (22:45 +0200)]
selftests: skbedit_priority: Test counters at the skbedit rule
Currently the test checks the observable effect of skbedit priority:
queueing of packets at the correct qdisc band. It therefore misses the fact
that the counters for offloaded rules are not updated. Add an extra check
for the counter.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 20:45:56 +0000 (22:45 +0200)]
sched: act_pedit: Implement stats_update callback
Implement this callback in order to get the offloaded stats added to the
kernel stats.
Reported-by: Alexander Petrovskiy <alexpe@mellanox.com>
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 20:45:55 +0000 (22:45 +0200)]
sched: act_skbedit: Implement stats_update callback
Implement this callback in order to get the offloaded stats added to the
kernel stats.
Reported-by: Alexander Petrovskiy <alexpe@mellanox.com>
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
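A schematic sketch of the callback shape added by the two patches above, for the skbedit case; the struct accessors follow the act_* conventions, and the common counter-fold helper name and exact signature are given from memory, so treat them as assumptions rather than the exact in-tree code.

#include <net/act_api.h>
#include <net/tc_act/tc_skbedit.h>

static void example_skbedit_stats_update(struct tc_action *a, u64 bytes,
					 u32 packets, u64 lastuse, bool hw)
{
	struct tcf_skbedit *d = to_skbedit(a);
	struct tcf_t *tm = &d->tcf_tm;

	/* Fold the HW-reported counters into the SW ones (helper name and
	 * signature from memory) and refresh the last-use timestamp.
	 */
	tcf_action_update_stats(a, bytes, packets, false, hw);
	tm->lastuse = max_t(u64, tm->lastuse, lastuse);
}

/* Registered through the action ops:
 *	.stats_update	= example_skbedit_stats_update,
 */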
David S. Miller [Thu, 26 Mar 2020 18:55:41 +0000 (11:55 -0700)]
Merge branch 'mlxsw-Offload-TC-action-pedit-munge-dsfield'
Ido Schimmel says:
====================
mlxsw: Offload TC action pedit munge dsfield
Petr says:
The Spectrum switches allow packet prioritization based on DSCP on ingress,
and update of DSCP on egress. This is configured through the DCB APP rules.
For some use cases, assigning a custom DSCP value based on an ACL match is
a better tool. To that end, offload FLOW_ACTION_MANGLE to permit changing
of dsfield as a whole, or DSCP and ECN values in isolation.
After fixing a commentary nit in patch #1, and mlxsw naming in patch #2,
patches #3 and #4 add the offload to mlxsw.
Patch #5 adds a forwarding selftest for pedit dsfield, applicable to SW as
well as HW datapaths. Patch #6 adds a mlxsw-specific test to verify DSCP
rewrite due to DCB APP rules is not performed on pedited packets.
The tests only cover IPv4 dsfield setting. We have tests for IPv6 as well,
but would like to postpone their contribution until the corresponding
iproute patches have been accepted.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 14:01:14 +0000 (16:01 +0200)]
selftests: mlxsw: qos_dscp_router: Test no DSCP rewrite after pedit
When DSCP is updated through an offloaded pedit action, DSCP rewrite on
egress should be disabled. Add a test that checks that this is so.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 14:01:13 +0000 (16:01 +0200)]
selftests: forwarding: Add a forwarding test for pedit munge dsfield
Add a test that sends packets with dsfield set, and checks that pedit
adjusts the DSCP part, the ECN part, or the whole field.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 14:01:12 +0000 (16:01 +0200)]
mlxsw: spectrum_flower: Offload FLOW_ACTION_MANGLE
Offload action pedit ex munge when used with a flower classifier. Only
allow setting of DSCP, ECN, or the whole DSField in IPv4 and IPv6 packets.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 14:01:11 +0000 (16:01 +0200)]
mlxsw: core: Add DSCP, ECN, dscp_rw to QOS_ACTION
The QOS_ACTION is used for manipulating the QOS attributes of the packet.
Add the defines and helpers related to DSCP and ECN fields, and dscp_rw.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 14:01:10 +0000 (16:01 +0200)]
mlxsw: core: Rename mlxsw_afa_qos_cmd to mlxsw_afa_qos_switch_prio_cmd
The original idea was to reuse this set of actions for ECN rewrite as well,
but on second look, it's not such a great idea. These two items should each
have its own command. Rename the existing enum to make it obvious that it
belongs to switch_prio_cmd.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 26 Mar 2020 14:01:09 +0000 (16:01 +0200)]
net: flow_offload.h: Fix a comment at flow_action_entry.mangle
This field references FLOW_ACTION_PACKET_EDIT. No such action exists,
though; the field is actually used for FLOW_ACTION_MANGLE and _ADD.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 26 Mar 2020 18:38:48 +0000 (11:38 -0700)]
Merge tag 'mlx5-updates-2020-03-25' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2020-03-25
1) Cleanups from Dan Carpenter and wenxu.
2) Paul and Roi: some minor updates and fixes to E-Switch, to address
issues introduced in the previous reg_c0 updates series.
3) Eli Cohen simplifies and improves flow steering matching group searches
and flow table entry version management.
4) Parav Pandit improves devlink eswitch mode change thread safety, by
making devlink rely on the driver for thread safety and introducing mlx5
eswitch mode change protection.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Thu, 26 Mar 2020 03:21:50 +0000 (11:21 +0800)]
atl2: remove unused variable 'atl2_driver_string'
drivers/net/ethernet/atheros/atlx/atl2.c:40:19: warning: ‘atl2_driver_string’ defined but not used [-Wunused-const-variable=]
static const char atl2_driver_string[] = "Atheros(R) L2 Ethernet Driver";
^~~~~~~~~~~~~~~~~~
Commit ea973742140b ("net/atheros: Clean atheros code from driver version")
left this behind; remove it.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hoang Le [Thu, 26 Mar 2020 02:50:29 +0000 (09:50 +0700)]
tipc: Add a missing case of TIPC_DIRECT_MSG type
In commit f73b12812a3d ("tipc: improve throughput between nodes in netns")
we missed a check for the TIPC_DIRECT_MSG type, so this message type still
uses the old sending mechanism and the throughput improvement is not as
significant as expected.
Besides that, when sending a large message of that type, we also pick the
wrong receiving queue: the message should be enqueued in the socket receive
queue instead of the multicast queue.
Fix this by adding the missing case for TIPC_DIRECT_MSG.
Fixes: f73b12812a3d ("tipc: improve throughput between nodes in netns")
Reported-by: Tuong Lien <tuong.t.lien@dektech.com.au>
Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
Acked-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parav Pandit [Wed, 18 Dec 2019 08:51:19 +0000 (02:51 -0600)]
net/mlx5: E-switch, Protect eswitch mode changes
Currently, the eswitch mode change occurs from two different execution
contexts:
1. sriov sysfs enable/disable
2. devlink eswitch set commands
Both of them need to access eswitch-related data structures in a
synchronized manner. Without any synchronization, the race condition
below exists between SR-IOV enable/disable and the devlink eswitch mode
change:

cpu-0                                 cpu-1
-----                                 -----
mlx5_device_disable_sriov()           mlx5_devlink_eswitch_mode_set()
  mlx5_eswitch_disable()                esw_offloads_stop()
    esw_offloads_disable()                mlx5_eswitch_disable()
                                            esw_offloads_disable()

Hence, they are synchronized using a new mode_lock.
The eswitch's state_lock is not used, as it can lead to the deadlock
scenario below, and state_lock is only for vport and fdb exclusive
access:

ip link set vf <param>
  netlink rcv_msg() - Lock A
    rtnl_lock
      vfinfo()
        esw->state_lock() - Lock B

devlink eswitch_set
  devlink_mutex
    esw->state_lock() - Lock B
      attach_netdev()
        register_netdev()
          rtnl_lock - Lock A
Alternatives considered:
1. Acquiring the rtnl lock before taking esw->state_lock, to follow a
locking sequence similar to the ip link flow during eswitch mode set.
The rtnl lock is not a good idea for two reasons:
(a) Holding the rtnl lock for several hundred device commands is not a
good idea.
(b) It leads to the deadlock below, and more similar ones:

devlink eswitch_set
  devlink_mutex
    rtnl_lock - Lock A
      esw->state_lock() - Lock B
        eswitch_disable()
          reload()
            ib_register_device()
              ib_cache_setup_one()
                rtnl_lock()

2. Exporting the devlink lock may lead to undesired use of it in vendor
driver(s) in the future.
3. Unloading representors outside of the mode_lock requires
serialization with other processes trying to enable the eswitch.
4. Deferring the representor life cycle to a different workqueue
requires synchronization with the func_change_handler workqueue.
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Bodong Wang <bodong@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
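A minimal sketch of the serialization scheme described above, with illustrative struct and function names rather than the actual mlx5 symbols: a driver-private mode_lock taken on both the SR-IOV sysfs path and the devlink eswitch set path.

#include <linux/mutex.h>

struct example_eswitch {
	struct mutex mode_lock;	/* protects eswitch mode transitions */
	int mode;
};

static int example_devlink_eswitch_mode_set(struct example_eswitch *esw,
					    int new_mode)
{
	/* The same lock is taken on the sriov sysfs enable/disable path,
	 * so both contexts see a consistent eswitch state.
	 */
	mutex_lock(&esw->mode_lock);
	if (esw->mode != new_mode)
		esw->mode = new_mode;	/* real code tears down/rebuilds offloads */
	mutex_unlock(&esw->mode_lock);
	return 0;
}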
Parav Pandit [Wed, 18 Dec 2019 10:58:58 +0000 (04:58 -0600)]
net/mlx5: E-switch, Extend eswitch enable to handle num_vfs change
A subsequent patch protects eswitch mode changes across the sriov and
devlink interfaces. It is desirable for the eswitch to provide
thread-safe enable and disable APIs.
Hence, extend the eswitch enable API to optionally update num_vfs when
requested.
In a subsequent patch, the eswitch num_vfs is updated after the eswitch
users drop their reference counts.
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Bodong Wang <bodong@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Parav Pandit [Sat, 14 Dec 2019 10:09:04 +0000 (04:09 -0600)]
net/mlx5: Split eswitch mode check to different helper function
In order to check eswitch state under a lock, prepare code to split
capability check and eswitch state check into two helper functions.
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Bodong Wang <bodong@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Parav Pandit [Mon, 24 Feb 2020 01:06:56 +0000 (19:06 -0600)]
devlink: Rely on driver eswitch thread safety instead of devlink
devlink_nl_cmd_eswitch_set_doit() doesn't hold the devlink->lock mutex
while invoking the driver callback. This is likely because eswitch mode
setting involves adding/removing devlink ports, health reporters or
other devlink objects for a devlink device.
So it is the driver's responsibility to ensure a thread-safe eswitch
state transition, whether it happens via sriov legacy enablement or via
the devlink eswitch set callback.
Therefore, the get() callback should also be invoked without holding the
devlink->lock mutex. The vendor driver can use the same internal lock it
uses during the eswitch mode set() callback. This makes the get() and
set() implementations symmetric in the devlink core and in vendor
drivers.
Hence, remove the holding of the devlink->lock mutex during the eswitch
get() callback. Failing to do so results in the deadlock scenario below
once the mlx5_core driver is improved, in a subsequent patch, to handle
the eswitch mode set critical section invoked by devlink and the sriov
sysfs interface:
devlink_nl_cmd_eswitch_set_doit()
  mlx5_eswitch_mode_set()
    mutex_lock(esw->mode_lock) <- Lock A
    [...]
    register_devlink_port()
      mutex_lock(&devlink->lock); <- lock B

mutex_lock(&devlink->lock); <- lock B
  devlink_nl_cmd_eswitch_get_doit()
    mlx5_eswitch_mode_get()
      mutex_lock(esw->mode_lock) <- Lock A
In a subsequent patch, the mlx5_core driver uses its internal lock during
the get() and set() eswitch callbacks.
Other drivers have been inspected; they either return a constant during
get operations or read the value from an already allocated structure.
Hence it is safe to remove the lock in the get() callback and let the
vendor driver handle it.
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Parav Pandit [Mon, 9 Mar 2020 04:17:37 +0000 (23:17 -0500)]
net/mlx5: Simplify mlx5_unload_one() and its callers
mlx5_unload_one() always returns 0.
Simplify callers of mlx5_unload_one() and remove the dead code.
Reviewed-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Parav Pandit [Mon, 9 Mar 2020 03:19:51 +0000 (22:19 -0500)]
net/mlx5: Simplify mlx5_register_device to return void
mlx5_register_device() doesn't check for any error and always returns 0.
Simplify mlx5_register_device() to return void, simplify its caller, and
remove the dead code related to it.
Reviewed-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Eli Cohen [Wed, 4 Mar 2020 08:32:56 +0000 (10:32 +0200)]
net/mlx5: Avoid group version scan when not necessary
Group version is used, when modifying a rule is allowed
(FLOW_ACT_NO_APPEND is clear), to detect the case where the rule was
found but a new FTE was added while the groups were unlocked. In this
case the added FTE could be one with an identical match value, so we
need to attempt again with the group lock held.
Change the code so the version is retrieved only when FLOW_ACT_NO_APPEND
is cleared. As a result, the later compare can also be avoided if
FLOW_ACT_NO_APPEND is cleared.
Also improve the comment text.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Eli Cohen [Wed, 4 Mar 2020 08:32:56 +0000 (10:32 +0200)]
net/mlx5: Avoid incrementing FTE version
FTE version is not used anywhere in the code so avoid incrementing it.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Eli Cohen [Wed, 4 Mar 2020 08:32:56 +0000 (10:32 +0200)]
net/mlx5: Fix group version management
When adding a rule to a flow group we need to increment the version of
the group. The current code fails to do that, and as a result, when
trying to add a rule, we will fail to discover the case where an FTE with
the same match value was added while we scanned the groups of the same
match criteria. Thus we may try to add an FTE that was already added.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Eli Cohen [Wed, 26 Feb 2020 13:03:16 +0000 (15:03 +0200)]
net/mlx5: Simplify matching group searches
Instead of using two different structs for searching groups with the
same match, use a single struct, thus simplifying the code and making it
more readable and smaller in size, which means fewer code cache misses.

        text    data    bss     dec     hex
before: 35524   2744    0       38268   957c
after:  35038   2744    0       37782   9396

In a test that adds 70000 rules, deletes all the rules, and repeats this
three times, the average times (in seconds) are:
Before the change: insert 16.80, delete 11.02
After the change:  insert 16.55, delete 10.95
Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Roi Dayan [Mon, 23 Mar 2020 10:14:58 +0000 (12:14 +0200)]
net/mlx5: E-Switch, Use correct type for chain, prio and level values
The correct type is u32.
Fixes: d18296ffd9cc ("net/mlx5: E-Switch, Introduce global tables")
Signed-off-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Roi Dayan [Thu, 19 Mar 2020 15:48:18 +0000 (17:48 +0200)]
net/mlx5: E-Switch, free flow_group_in after creating the restore table
We allocate temporary memory but forget to free it.
Fixes: 11b717d61526 ("net/mlx5: E-Switch, Get reg_c0 value on CQE")
Signed-off-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Paul Blakey [Wed, 18 Mar 2020 09:43:06 +0000 (11:43 +0200)]
net/mlx5: E-Switch, Enable chains only if regs loopback is enabled
Register c0 loopback is needed to fully support chains and prios.
Enable chains and prios only if loopback of reg c1 (which came together
with c0) is enabled. To be able to check that, move the enabling of
loopback before the eswitch chains init.
Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Paul Blakey [Wed, 18 Mar 2020 08:55:12 +0000 (10:55 +0200)]
net/mlx5: E-Switch, Enable restore table only if reg_c1 is supported
Reg c0/c1 matching, rewrite of regs c0/c1, and the copy header of reg c1
are needed for the restore table to function. These might not be
supported by firmware, in which case creation of the restore table or of
the copy header will fail.
Check for reg_c1 loopback support, as firmware which supports this
should have all of the above.
Fixes: 11b717d61526 ("net/mlx5: E-Switch, Get reg_c0 value on CQE")
Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
wenxu [Wed, 18 Mar 2020 09:05:43 +0000 (17:05 +0800)]
net/mlx5e: remove duplicated check chain_index in mlx5e_rep_setup_ft_cb
The function mlx5e_rep_setup_ft_cb() checks that chain_index is zero twice.
Signed-off-by: wenxu <wenxu@ucloud.cn>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Dan Carpenter [Fri, 20 Mar 2020 13:23:05 +0000 (16:23 +0300)]
net/mlx5e: Fix actions_match_supported() return
The actions_match_supported() function returns a bool, true for success
and false for failure. This error path is returning a negative, which is
converted to true, but it should return false.
Fixes: 4c3844d9e97e ("net/mlx5e: CT: Introduce connection tracking")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
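An illustration of the bug class fixed above, with a made-up condition: in a C function declared to return bool, a negative errno such as -EOPNOTSUPP is implicitly converted to 'true', silently inverting the intended result.

#include <linux/types.h>
#include <linux/errno.h>

static bool example_match_supported(bool ct_flow, bool modify_header)
{
	if (ct_flow && modify_header)
		return false;	/* correct; "return -EOPNOTSUPP;" here would read as true */

	return true;
}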
David S. Miller [Thu, 26 Mar 2020 01:58:11 +0000 (18:58 -0700)]
Merge git://git./linux/kernel/git/netdev/net
Overlapping header include additions in macsec.c
A bug fix in 'net' overlapping with the removal of 'version'
string in ena_netdev.c
Overlapping test additions in selftests Makefile
Overlapping PCI ID table adjustments in iwlwifi driver.
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Wed, 25 Mar 2020 20:58:05 +0000 (13:58 -0700)]
Merge git://git./linux/kernel/git/netdev/net
Pull networking fixes from David Miller:
1) Fix deadlock in bpf_send_signal() from Yonghong Song.
2) Fix off by one in kTLS offload of mlx5, from Tariq Toukan.
3) Add missing locking in iwlwifi mvm code, from Avraham Stern.
4) Fix MSG_WAITALL handling in rxrpc, from David Howells.
5) Need to hold RTNL mutex in tcindex_partial_destroy_work(), from Cong
Wang.
6) Fix producer race condition in AF_PACKET, from Willem de Bruijn.
7) cls_route removes the wrong filter during change operations, from
Cong Wang.
8) Reject unrecognized request flags in ethtool netlink code, from
Michal Kubecek.
9) Need to keep MAC in reset until PHY is up in bcmgenet driver, from
Doug Berger.
10) Don't leak ct zone template in act_ct during replace, from Paul
Blakey.
11) Fix flushing of offloaded netfilter flowtable flows, also from Paul
Blakey.
12) Fix throughput drop during tx backpressure in cxgb4, from Rahul
Lakkireddy.
13) Don't let a non-NULL skb->dev leave the TCP stack, from Eric
Dumazet.
14) TCP_QUEUE_SEQ socket option has to update tp->copied_seq as well,
also from Eric Dumazet.
15) Restrict macsec to ethernet devices, from Willem de Bruijn.
16) Fix reference leak in some ethtool *_SET handlers, from Michal
Kubecek.
17) Fix accidental disabling of MSI for some r8169 chips, from Heiner
Kallweit.
* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (138 commits)
net: Fix CONFIG_NET_CLS_ACT=n and CONFIG_NFT_FWD_NETDEV={y, m} build
net: ena: Add PCI shutdown handler to allow safe kexec
selftests/net/forwarding: define libs as TEST_PROGS_EXTENDED
selftests/net: add missing tests to Makefile
r8169: re-enable MSI on RTL8168c
net: phy: mdio-bcm-unimac: Fix clock handling
cxgb4/ptp: pass the sign of offset delta in FW CMD
net: dsa: tag_8021q: replace dsa_8021q_remove_header with __skb_vlan_pop
net: cbs: Fix software cbs to consider packet sending time
net/mlx5e: Do not recover from a non-fatal syndrome
net/mlx5e: Fix ICOSQ recovery flow with Striding RQ
net/mlx5e: Fix missing reset of SW metadata in Striding RQ reset
net/mlx5e: Enhance ICOSQ WQE info fields
net/mlx5_core: Set IB capability mask1 to fix ib_srpt connection failure
selftests: netfilter: add nfqueue test case
netfilter: nft_fwd_netdev: allow to redirect to ifb via ingress
netfilter: nft_fwd_netdev: validate family and chain type
netfilter: nft_set_rbtree: Detect partial overlaps on insertion
netfilter: nft_set_rbtree: Introduce and use nft_rbtree_interval_start()
netfilter: nft_set_pipapo: Separate partial and complete overlap cases on insertion
...
Linus Torvalds [Wed, 25 Mar 2020 20:52:36 +0000 (13:52 -0700)]
Merge tag 'gpio-v5.6-3' of git://git./linux/kernel/git/linusw/linux-gpio
Pull GPIO fixes from Linus Walleij:
- One core quirk by myself to fix the .irq_disable() semantics when the
gpiolib core takes over this callback.
- The rest is an elaborate series of four patches fixing Intel laptop
ACPI wakeup quirks.
* tag 'gpio-v5.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
gpiolib: acpi: Add quirk to ignore EC wakeups on HP x2 10 CHT + AXP288 model
gpiolib: acpi: Add quirk to ignore EC wakeups on HP x2 10 BYT + AXP288 model
gpiolib: acpi: Rework honor_wakeup option into an ignore_wake option
gpiolib: acpi: Correct comment for HP x2 10 honor_wakeup quirk
gpiolib: Fix irq_disable() semantics
David S. Miller [Wed, 25 Mar 2020 20:12:26 +0000 (13:12 -0700)]
Merge tag 'wireless-drivers-2020-03-25' of git://git./linux/kernel/git/kvalo/wireless-drivers
Kalle Valo says:
====================
wireless-drivers fixes for v5.6
Fourth, and last, set of fixes for v5.6. Just two important fixes to
iwlwifi regressions.
iwlwifi
* fix GEO_TX_POWER_LIMIT command on certain devices which caused
firmware to crash during initialisation
* add back device ids for three devices which were accidentally
removed
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Pablo Neira Ayuso [Wed, 25 Mar 2020 12:47:18 +0000 (13:47 +0100)]
net: Fix CONFIG_NET_CLS_ACT=n and CONFIG_NFT_FWD_NETDEV={y, m} build
net/netfilter/nft_fwd_netdev.c: In function ‘nft_fwd_netdev_eval’:
net/netfilter/nft_fwd_netdev.c:32:10: error: ‘struct sk_buff’ has no member named ‘tc_redirected’
pkt->skb->tc_redirected = 1;
^~
net/netfilter/nft_fwd_netdev.c:33:10: error: ‘struct sk_buff’ has no member named ‘tc_from_ingress’
pkt->skb->tc_from_ingress = 1;
^~
To avoid a direct dependency with tc actions from netfilter, wrap the
redirect bits around CONFIG_NET_REDIRECT and move helpers to
include/linux/skbuff.h. Turn on this toggle from the ifb driver, the
only existing client of these bits in the tree.
This patch adds skb_set_redirected(), which sets the redirected bit on
the skbuff, specifies whether the packet was redirected from ingress,
and resets the timestamp (the timestamp reset was originally missing in
the netfilter bugfix).
Fixes: bcfabee1afd99484 ("netfilter: nft_fwd_netdev: allow to redirect to ifb via ingress")
Reported-by: noreply@ellerman.id.au
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
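The new skbuff.h helper is roughly of this shape (reproduced from memory, not verbatim), wrapped in the CONFIG_NET_REDIRECT toggle described above:

#ifdef CONFIG_NET_REDIRECT
static inline void skb_set_redirected(struct sk_buff *skb, bool from_ingress)
{
	skb->redirected = 1;
	skb->from_ingress = from_ingress;
	if (from_ingress)
		skb->tstamp = 0;	/* the timestamp reset the netfilter fix missed */
}
#endif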
Rahul Kundu [Wed, 25 Mar 2020 11:53:09 +0000 (04:53 -0700)]
cxgb4: Add support to catch bits set in INT_CAUSE5
This commit adds support to catch any bits set in SGE_INT_CAUSE5 for
parity errors. The F_ERR_T_RXCRC flag is used to ignore that particular
bit, as it is not considered fatal, so we clear out the bit before
checking for errors.
The patch now reads and reports all three registers (Cause1, Cause2,
Cause5) separately, and checks each of them for errors.
Signed-off-by: Raju Rangoju <rajur@chelsio.com>
Signed-off-by: Rahul Kundu <rahul.kundu@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 25 Mar 2020 19:20:00 +0000 (12:20 -0700)]
Merge branch 'octeontx2-pf-Miscellaneous-fixes'
Sunil Goutham says:
====================
octeontx2-pf: Miscellaneous fixes
This patchset fixes a couple of issues related to a missing page
refcount update and to taking a mutex lock in atomic context.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sunil Goutham [Wed, 25 Mar 2020 11:41:17 +0000 (17:11 +0530)]
octeontx2-pf: Fix ndo_set_rx_mode
Since set_rx_mode() takes a mutex for sending a mailbox message to the
admin function to set the mode, but is called in atomic context, move
the logic to a workqueue.
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sunil Goutham [Wed, 25 Mar 2020 11:41:16 +0000 (17:11 +0530)]
octeontx2-pf: Fix rx buffer page refcount
Fixed an issue wherein, while refilling receive buffers, the refcount is
not being updated for the last allocated page.
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 25 Mar 2020 19:07:16 +0000 (12:07 -0700)]
Merge branch 's390-next'
Julian Wiedmann says:
====================
s390/qeth: updates 2020-03-25
please apply the following patch series for qeth to netdev's net-next
tree.
Same series as yesterday, with one minor update to patch 1 as per
your review.
This adds
1) NAPI poll support for the async-Completion Queue (with one qdio layer
patch acked by Heiko),
2) ethtool support for per-queue TX IRQ coalescing,
3) various cleanups.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:35:07 +0000 (10:35 +0100)]
s390/qeth: modernize two list helpers
Replace list_for_each() with list_for_each_entry(), and
list_entry(head.next) with list_first_entry().
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
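An illustrative before/after for the two list-helper modernizations above, using a made-up item type:

#include <linux/list.h>

struct example_item {
	struct list_head list;
	int id;
};

static int example_first_id(struct list_head *head)
{
	struct example_item *item;

	/* Before: item = list_entry(head->next, struct example_item, list); */
	item = list_first_entry(head, struct example_item, list);
	return item->id;
}

static int example_count(struct list_head *head)
{
	struct example_item *item;
	int n = 0;

	/* Before: list_for_each(pos, head) plus a list_entry() per iteration. */
	list_for_each_entry(item, head, list)
		n++;
	return n;
}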
Julian Wiedmann [Wed, 25 Mar 2020 09:35:06 +0000 (10:35 +0100)]
s390/qeth: keep track of fixed prio-queue configuration
When a device is configured in prio-queue mode to pin all traffic onto
a specific HW queue, treat this as a distinct variant of prio-queueing
instead of QETH_NO_PRIO_QUEUEING.
This corrects an error message from qeth_osa_set_output_queues() for
devices configured in such a mode.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:35:05 +0000 (10:35 +0100)]
s390/qeth: fine-tune MAC Address-related errnos
Return the correct errnos when .ndo_set_mac_address fails to set a new
MAC address.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:35:04 +0000 (10:35 +0100)]
s390/qeth: add TX IRQ coalescing support for IQD devices
Since IQD devices complete (most of) their transmissions synchronously,
they don't offer TX completion IRQs and have no HW coalescing controls.
But we can fake the easy parts in SW, and give the user some control over
how often the TX NAPI code should be triggered to process the TX
completions.
Having per-queue controls can in particular help the dedicated mcast
queue, as it likely benefits from different fine-tuning than what the
ucast queues need.
CC: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:35:03 +0000 (10:35 +0100)]
s390/qeth: collect more TX statistics
Count the number of TX doorbells we issue to the qdio layer.
Also count the number of actual frames in a TX buffer, and then
use this data along with the byte count during TX completion.
We'll make additional use of the frame count in a subsequent patch.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:35:02 +0000 (10:35 +0100)]
s390/qeth: clean up the mac_bits
We're down to a single bit flag for MAC-address related status, reflect
that in the info struct.
Also set up the flag during initialization instead of clearing it during
shutdown - one more little step towards unifying the shutdown code.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:35:01 +0000 (10:35 +0100)]
s390/qeth: simplify L3 dev_id logic
The logic that deals with errors from qeth_l3_get_unique_id() is quite
complex: it sets card->unique_id to 0xfffe, additionally flags it as
UNIQUE_ID_NOT_BY_CARD and later takes this flag as a cue to not propagate
card->unique_id to dev->dev_id. With dev->dev_id thus holding 0,
addrconf_ifid_eui48() applies its default behaviour.
Get rid of all the special bit masks, and just return the old uid in
case of an error. For the vast majority of cases this will be 0 (and so
we still get the desired default behaviour) - with the rare exception
where qeth_l3_get_unique_id() might have been called earlier but the
initialization then failed at a later point.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:35:00 +0000 (10:35 +0100)]
s390/qdio: extend polling support to multiple queues
When the support for polling drivers was initially added, it only
considered Input Queue 0. But as QDIO interrupts are actually for the
full device and not a single queue, this doesn't really fit for
configurations where multiple Input Queues are used.
Rework the qdio code so that interrupts for a polling driver are not
split up into actions for each queue. Instead deliver the interrupt as
a single event, and let the driver decide which queue needs what action.
When re-enabling the QDIO interrupt via qdio_start_irq(), this means
that the qdio code needs to
(1) put _all_ eligible queues back into a state where they raise IRQs,
(2) and afterwards check _all_ eligible queues for new work to bridge
the race window.
On the qeth side of things (as the only qdio polling driver), we can now
add CQ polling support to the main NAPI poll routine. It doesn't consume
NAPI budget, and to avoid hogging the CPU we yield control after
completing one full queue worth of buffers.
The subsequent qdio_start_irq() will check for any additional work, and
have us re-schedule the NAPI instance accordingly.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:34:59 +0000 (10:34 +0100)]
s390/qeth: remove redundant if-clause in RX poll code
Whenever all completed RX buffers have been processed
(ie. rx->b_count == 0), we call down to the HW layer to scan for
additional buffers. If no further buffers are available, the code
breaks out of the while-loop.
So we never reach the 'process an RX buffer' step with rx->b_count == 0,
eliminate that check and one level of indentation.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:34:58 +0000 (10:34 +0100)]
s390/qeth: split out RX poll code
The main NAPI poll routine should eventually handle more types of work,
beyond just the RX ring.
Split off the RX poll logic into a separate function, and simplify the
nested while-loop.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Wed, 25 Mar 2020 09:34:57 +0000 (10:34 +0100)]
s390/qeth: simplify RX buffer tracking
Since RX buffers may contain multiple packets, qeth's NAPI poll code can
exhaust its budget in the middle of an RX buffer. Thus we keep track of
our current position within the active RX buffer, so we can resume
processing here in the next NAPI poll period.
Clean up that code by tracking the index of the active buffer element,
instead of a pointer to it.
Also simplify the code that advances to the next RX buffer when the
current buffer has been fully processed.
v2: - remove QDIO_ELEMENT_NO() macro (davem)
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guilherme G. Piccoli [Fri, 20 Mar 2020 12:55:34 +0000 (09:55 -0300)]
net: ena: Add PCI shutdown handler to allow safe kexec
Currently ENA only provides the PCI remove() handler, used during rmmod
for example. This is not called on shutdown/kexec path; we are potentially
creating a failure scenario on kexec:
(a) Kexec is triggered, no shutdown() / remove() handler is called for ENA;
instead pci_device_shutdown() clears the master bit of the PCI device,
stopping all DMA transactions;
(b) Kexec reboot happens and the device gets enabled again, likely having
its FW with that DMA transaction buffered; then it may trigger the (now
invalid) memory operation in the new kernel, corrupting kernel memory area.
This patch aims to prevent this, by implementing a shutdown() handler
quite similar to the remove() one - the difference being the handling
of the netdev, which is unregistered on remove(), but following the
convention observed in other drivers, it's only detached on shutdown().
This prevents an odd issue in AWS Nitro instances, in which after the 2nd
kexec the next one will fail with an initrd corruption, caused by a wild
DMA write to invalid kernel memory. The lspci output for the adapter
present in my instance is:
00:05.0 Ethernet controller [0200]: Amazon.com, Inc. Elastic Network
Adapter (ENA) [1d0f:ec20]
Suggested-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
Acked-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
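A sketch of the shutdown()/remove() split described above; the function names are illustrative, not the actual ena_netdev.c symbols.

#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

static void example_destroy_device(struct pci_dev *pdev, bool shutdown)
{
	struct net_device *netdev = pci_get_drvdata(pdev);

	rtnl_lock();
	if (shutdown)
		netif_device_detach(netdev);	/* keep the netdev registered */
	else
		unregister_netdevice(netdev);	/* full teardown on remove() */
	/* ... stop queues, free resources, disable the PCI device ... */
	rtnl_unlock();
}

static void example_pci_shutdown(struct pci_dev *pdev)
{
	example_destroy_device(pdev, true);
}

/* Wired up via:  .shutdown = example_pci_shutdown,  in struct pci_driver. */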
Hangbin Liu [Wed, 25 Mar 2020 08:41:01 +0000 (16:41 +0800)]
selftests/net/forwarding: define libs as TEST_PROGS_EXTENDED
The lib files should not be defined as TEST_PROGS, or we will run them
in run_kselftest.sh.
Also remove the exec permission from ethtool_lib.sh.
Fixes: 81573b18f26d ("selftests/net/forwarding: add Makefile to install tests")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hangbin Liu [Wed, 25 Mar 2020 08:07:01 +0000 (16:07 +0800)]
selftests/net: add missing tests to Makefile
Found some tests missing from the Makefile by running:
for file in $(ls *.sh); do grep -q $file Makefile || echo $file; done
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 25 Mar 2020 02:23:21 +0000 (19:23 -0700)]
net: use indirect call wrappers for skb_copy_datagram_iter()
TCP recvmsg() calls skb_copy_datagram_iter(), which
calls an indirect function (cb pointing to simple_copy_to_iter())
for every MSS (fragment) present in the skb.
CONFIG_RETPOLINE=y forces a very expensive operation
that we can avoid thanks to indirect call wrappers.
This patch gives a 13% performance increase on a single flow, if the
bottleneck is the thread reading the TCP socket.
Fixes: 950fcaecd5cc ("datagram: consolidate datagram copy to iter helpers")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
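The shape of the optimization described above: when a callback is almost always one known function, INDIRECT_CALL_1() turns the retpoline-penalized indirect call into a pointer compare plus a direct call. The callback below is a made-up stand-in, not the skb_copy_datagram_iter() internals.

#include <linux/indirect_call_wrapper.h>
#include <linux/types.h>

typedef int (*example_copy_cb_t)(const void *addr, size_t len, void *arg);

static int example_likely_cb(const void *addr, size_t len, void *arg)
{
	return 0;	/* stands in for the common-case copy routine */
}

static int example_copy(example_copy_cb_t cb, const void *addr, size_t len,
			void *arg)
{
	/* Before: return cb(addr, len, arg);  -- indirect call per fragment */
	return INDIRECT_CALL_1(cb, example_likely_cb, addr, len, arg);
}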
YueHaibing [Wed, 25 Mar 2020 01:17:50 +0000 (09:17 +0800)]
cxgb4: remove set but not used variable 'tab'
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c: In function cxgb4_get_free_ftid:
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c:547:23:
warning: variable tab set but not used [-Wunused-but-set-variable]
Commit 8d174351f285 ("cxgb4: rework TC filter rule insertion across regions")
left this variable unused; remove it.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Wed, 25 Mar 2020 17:34:02 +0000 (10:34 -0700)]
Merge tag 'zonefs-5.6-rc7' of git://git./linux/kernel/git/dlemoal/zonefs
Pull zonefs fix from Damien Le Moal:
"A single fix from me to correctly handle the size of read-only zone
files"
* tag 'zonefs-5.6-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs:
zonfs: Fix handling of read-only zones
Damien Le Moal [Fri, 20 Mar 2020 12:36:54 +0000 (21:36 +0900)]
zonfs: Fix handling of read-only zones
The write pointer of zones in the read-only condition is defined as
invalid by the SCSI ZBC and ATA ZAC specifications. It is thus not
possible to determine the correct size of a read-only zone file on
mount. Fix this by handling read-only zones in the same manner as
offline zones: disable all accesses to the zone (read and write) and
initialize the inode size of the read-only zone to 0.
For zones found to be in the read-only condition at runtime, only
disable write access to the zone and keep the size of the zone file to
its last updated value to allow the user to recover previously written
data.
Also fix zonefs documentation file to reflect this change.
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
David S. Miller [Wed, 25 Mar 2020 00:30:40 +0000 (17:30 -0700)]
Merge git://git./pub/scm/linux/kernel/git/pablo/nf
Pablo Neira Ayuso says:
====================
Netfilter fixes for net
The following patchset contains Netfilter fixes for net:
1) A new selftest for nf_queue, from Florian Westphal. This test
covers two recent fixes:
07f8e4d0fddb ("tcp: also NULL skb->dev when copy was needed") and
b738a185beaa ("tcp: ensure skb->dev is NULL before leaving TCP stack").
2) The fwd action breaks with ifb. For safety in next extensions,
make sure the fwd action only runs from ingress until it is extended
to be used from a different hook.
3) The pipapo set type now reports EEXIST in case of subrange overlaps.
Update the rbtree set to validate range overlaps; so far this
validation was only done from userspace. From Stefano Brivio.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 25 Mar 2020 00:25:54 +0000 (17:25 -0700)]
Merge tag 'mlx5-fixes-2020-03-24' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
Mellanox, mlx5 fixes 2020-03-24
This series introduces some fixes to mlx5 driver.
From Aya, Fixes to the RX error recovery flows
From Leon, Fix IB capability mask
Please pull and let me know if there is any problem.
For -stable v5.5
('net/mlx5_core: Set IB capability mask1 to fix ib_srpt connection failure')
For -stable v5.4
('net/mlx5e: Fix ICOSQ recovery flow with Striding RQ')
('net/mlx5e: Do not recover from a non-fatal syndrome')
('net/mlx5e: Fix missing reset of SW metadata in Striding RQ reset')
('net/mlx5e: Enhance ICOSQ WQE info fields')
The above patch ('net/mlx5e: Enhance ICOSQ WQE info fields')
will fail to apply cleanly on v5.4 due to a trivial contextual conflict,
but it is an important fix. Do I need to do something about it, or just
assume Greg will know how to handle this?
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Tue, 24 Mar 2020 19:58:29 +0000 (20:58 +0100)]
r8169: re-enable MSI on RTL8168c
The original change fixed an issue on RTL8168b by mimicking the vendor
driver behavior to disable MSI on chip versions before RTL8168d.
This however now caused an issue on a system with RTL8168c, see [0].
Therefore leave MSI disabled on RTL8168b, but re-enable it on RTL8168c.
[0] https://bugzilla.redhat.com/show_bug.cgi?id=1792839
Fixes: 003bd5b4a7b4 ("r8169: don't use MSI before RTL8168d")
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 24 Mar 2020 17:30:16 +0000 (10:30 -0700)]
devlink: expand the devlink-info documentation
We are having multiple review cycles with all vendors trying
to implement devlink-info. Let's expand the documentation with
more information about what's implemented and motivation behind
this interface in an attempt to make the implementations easier.
Describe what each info section is supposed to contain, and make
some references to other HW interfaces (PCI caps).
Document how firmware management is expected to look, to make
it clear how devlink-info and devlink-flash work in concert.
Name some future work.
v2: - improve wording
v3: - improve wording
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 16:10:10 +0000 (16:10 +0000)]
net: phy: mdio-bcm-unimac: Fix clock handling
The DT binding for this PHY describes an *optional* clock property.
Due to a bug in the error handling logic, we are actually ignoring this
clock *all* of the time so far.
Fix this by using devm_clk_get_optional() to handle this clock properly.
Fixes: b78ac6ecd1b6b ("net: phy: mdio-bcm-unimac: Allow configuring MDIO clock divider")
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
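A sketch of the optional-clock handling described above; the helper name is illustrative. devm_clk_get_optional() returns NULL (not an error) when no clock is specified in DT, and an ERR_PTR only for real errors such as -EPROBE_DEFER, and the clk API treats a NULL clock as a no-op.

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_get_mdio_clk(struct device *dev, struct clk **clk_out)
{
	struct clk *clk;

	clk = devm_clk_get_optional(dev, NULL);
	if (IS_ERR(clk))
		return PTR_ERR(clk);	/* a genuine error, e.g. -EPROBE_DEFER */

	*clk_out = clk;
	return clk_prepare_enable(clk);	/* no-op for a NULL (absent) clock */
}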
Vladimir Oltean [Tue, 24 Mar 2020 14:13:58 +0000 (16:13 +0200)]
net: phy: mscc: consolidate a common RGMII delay implementation
It looks like the VSC8584 PHY driver is rolling its own RGMII delay
configuration code, despite the fact that the logic is mostly the same.
In fact, only the register layout and position of the RGMII controls have
changed. So we need to adapt and parameterize the PHY-dependent bit
fields when calling the new generic function.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 24 Mar 2020 23:33:05 +0000 (16:33 -0700)]
Merge branch 'axienet-Update-error-handling-and-add-64-bit-DMA-support'
Andre Przywara says:
====================
net: axienet: Update error handling and add 64-bit DMA support
A minor update, fixing the 32-bit build breakage, and brightening up
Dave's Christmas tree. Rebased against latest net-next/master.
This series is based on net-next as of today (9970de8b013a), which
includes Russell's fixes [1], solving the SGMII issues I have had.
[1] https://lore.kernel.org/netdev/E1j6trA-0003GY-N1@rmk-PC.armlinux.org.uk/
Changelog v2 .. v3:
- Use two "left-shifts by 16" to fix builds with 32-bit phys_addr_t
- reorder variable declarations
Changelog v1 .. v2:
- Add Reviewed-by: tags from Radhey
- Extend kerndoc documentation
- Convert DMA error handler tasklet to work queue
- log DMA mapping errors
- mark DMA mapping error checks as unlikely (in "hot" paths)
- return NETDEV_TX_OK on TX DMA mapping error (increasing TX drop counter)
- Request eth IRQ as an optional IRQ
- Remove no longer needed MDIO IRQ register names
- Drop DT propery check for address width, assume full 64 bit
This series updates the Xilinx Axienet driver to work on our board
here. One big issue was broken SGMII support, which Russell fixed already
(in net-next).
While debugging and understanding the driver, I found several problems
in the error handling and cleanup paths, which patches 2-7 address.
Patch 8 removes an annoying error message, patch 9 paves the way for newer
revisions of the IP. The next patch adds mii-tool support, just for good
measure.
The next four patches add support for 64-bit DMA. This is an integration
option on newer IP revisions (>= v7.1), and expects MSB bits in formerly
reserved registers. Without writing to those MSB registers, the state
machine won't trigger, so it's mandatory to access them, even if they
are zero. Patches 11 and 12 prepare the code by adding accessors, to
wrap this properly and keep it working on older IP revisions.
Patch 13 enables access to the MSB registers, by trying to write a
non-zero value to them and checking if that sticks. Older IP revisions
always read those registers as zero.
Patch 14 then adjusts the DMA mask, based on the autodetected MSB
feature. It uses the full 64 bits in this case, the rest of the system
(actual physical addresses in use) should provide a natural limit if the
chip has connected fewer address lines. If not, the parent DT node can
use a dma-range property.
The Xilinx PG138 and PG021 documents (in versions 7.1 in both cases)
were used for this series.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 13:23:47 +0000 (13:23 +0000)]
net: axienet: Allow DMA to beyond 4GB
With all DMA address accesses wrapped, we can actually support 64-bit
DMA if this option was chosen at IP integration time.
If the IP has been configured for an address width greater than 32 bits,
we assume the full 64-bit DMA width is working. In practice this will be
limited by the actual system address bus width, which will ideally be the
same as the DMA IP address width.
If this is not the case, the actual width can still be configured using a
dma-ranges property in the parent of the MAC node.
This increases the DMA mask on those systems to let the kernel choose
buffers from memory at higher addresses.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
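A sketch of the mask selection described above; 'has_64bit_dma' stands for the autodetected capability and the helper name is illustrative. The platform's dma-ranges (if any) further constrains what the kernel will actually hand out within this mask.

#include <linux/dma-mapping.h>

static int example_set_dma_mask(struct device *dev, bool has_64bit_dma)
{
	unsigned int width = has_64bit_dma ? 64 : 32;

	return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(width));
}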
Andre Przywara [Tue, 24 Mar 2020 13:23:46 +0000 (13:23 +0000)]
net: axienet: Autodetect 64-bit DMA capability
When newer revisions of the Axienet IP are configured for a 64-bit bus,
we *need* to write to the MSB part of the address registers, otherwise
otherwise the IP won't recognise this as a DMA start condition.
This is even true when the actual DMA address comes from the lower 4 GB.
To autodetect this configuration, at probe time we write all 1's to such
an MSB register, and see if any bits stick. If this is configured for a
32-bit bus, those MSB registers are RES0, so reading back 0 indicates
that no MSB writes are necessary.
On the other hand, reading anything other than 0 indicates the need to
write the MSB registers, so we set the respective flag.
The actual DMA mask stays at 32-bit for now. To help bisecting, a
separate patch will enable allocations from higher addresses.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
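A sketch of the probe-time detection described above; the MSB register offset is a placeholder, not the real AXI DMA register map.

#include <linux/io.h>

#define EXAMPLE_TX_DESC_MSB	0x4C	/* placeholder offset */

static bool example_detect_64bit_dma(void __iomem *dma_regs)
{
	u32 value;

	/* Write all 1's to an MSB word; on 32-bit configurations these
	 * registers are RES0 and read back as zero.
	 */
	iowrite32(0xffffffff, dma_regs + EXAMPLE_TX_DESC_MSB);
	value = ioread32(dma_regs + EXAMPLE_TX_DESC_MSB);
	iowrite32(0, dma_regs + EXAMPLE_TX_DESC_MSB);	/* restore */

	return value != 0;
}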
Andre Przywara [Tue, 24 Mar 2020 13:23:45 +0000 (13:23 +0000)]
net: axienet: Upgrade descriptors to hold 64-bit addresses
Newer revisions of the AXI DMA IP (>= v7.1) support 64-bit addresses,
both for the descriptors itself, as well as for the buffers they are
pointing to.
This is realised by adding "MSB" words for the next and phys pointer
right behind the existing address word, now named "LSB". These MSB words
live in formerly reserved areas of the descriptor.
If the hardware supports it, write both words when setting an address.
The buffer address is handled by two wrapper functions; the two
occasions where we set the next pointers are open-coded.
For now this is guarded by a flag which we don't set yet.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 13:23:44 +0000 (13:23 +0000)]
net: axienet: Wrap DMA pointer writes to prepare for 64 bit
Newer versions of the Xilinx DMA IP support buses with more than 32
address bits, by introducing an MSB word for the registers holding DMA
pointers (tail/current, RX/TX descriptor addresses).
On IP configured for more than 32 bits, it is also *required* to write
both words, to let the IP recognise this as a start condition for an
MM2S request, for instance.
Wrap the DMA pointer writes with a separate function, to add this
functionality later. For now we stick to the lower 32 bits.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
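A sketch of the accessor described above; the names and the 64-bit flag are illustrative. For now only the lower 32 bits are written; later changes in the series add the MSB write when the IP is configured for wide addresses.

#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/types.h>

static void example_dma_out_addr(void __iomem *dma_regs, unsigned long reg,
				 dma_addr_t addr, bool has_64bit_dma)
{
	iowrite32(lower_32_bits(addr), dma_regs + reg);
	if (has_64bit_dma)
		iowrite32(upper_32_bits(addr), dma_regs + reg + 4);
}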
Andre Przywara [Tue, 24 Mar 2020 13:23:43 +0000 (13:23 +0000)]
net: axienet: Add mii-tool support
mii-tool is useful for debugging, and all it requires to work is to wire
up the ioctl ops function pointer.
Add this to the axienet driver to enable mii-tool.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
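The generic shape of the hook-up described above, as a phylib-style sketch (the driver itself goes through its own PHY abstraction): mii-tool issues SIOCGMIIPHY/SIOCGMIIREG/SIOCSMIIREG ioctls, which are forwarded to the PHY.

#include <linux/netdevice.h>
#include <linux/phy.h>

static int example_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
{
	if (!netif_running(ndev) || !ndev->phydev)
		return -EINVAL;

	return phy_mii_ioctl(ndev->phydev, rq, cmd);
}

/* Wired up via:  .ndo_do_ioctl = example_ioctl,  in struct net_device_ops. */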
Andre Przywara [Tue, 24 Mar 2020 13:23:42 +0000 (13:23 +0000)]
net: axienet: Drop MDIO interrupt registers from ethtools dump
Newer revisions of the IP don't have these registers. Since we don't
really use them, just drop them from the ethtools dump.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 13:23:41 +0000 (13:23 +0000)]
net: axienet: Mark eth_irq as optional
According to the DT binding, the Ethernet core interrupt is optional.
Use platform_get_irq_optional() to avoid the error message when the
IRQ is not specified.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
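A sketch of the optional-IRQ lookup described above; index 1 is a placeholder for whichever cell carries the Ethernet core interrupt. Unlike platform_get_irq(), the _optional variant does not print an error when the interrupt is simply absent.

#include <linux/platform_device.h>
#include <linux/errno.h>

static int example_get_eth_irq(struct platform_device *pdev)
{
	int irq;

	irq = platform_get_irq_optional(pdev, 1);
	if (irq == -EPROBE_DEFER)
		return irq;

	return irq < 0 ? 0 : irq;	/* treat "missing" as "no eth IRQ" */
}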
Andre Przywara [Tue, 24 Mar 2020 13:23:40 +0000 (13:23 +0000)]
net: axienet: Check for DMA mapping errors
Especially with the default 32-bit DMA mask, DMA buffers are a limited
resource, so their allocation can fail.
So as the DMA API documentation requires, add error checking code after
dma_map_single() calls to catch the case where we run out of "low" memory.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
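A sketch of the error check described above; the skb/ring handling around it is omitted, and the helper name is illustrative.

#include <linux/dma-mapping.h>
#include <linux/device.h>

static int example_map_tx_buf(struct device *dev, void *data, size_t len,
			      dma_addr_t *phys)
{
	*phys = dma_map_single(dev, data, len, DMA_TO_DEVICE);
	if (unlikely(dma_mapping_error(dev, *phys))) {
		dev_err(dev, "TX DMA mapping error\n");
		return -ENOMEM;	/* caller drops the packet, returns NETDEV_TX_OK */
	}

	return 0;
}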
Andre Przywara [Tue, 24 Mar 2020 13:23:39 +0000 (13:23 +0000)]
net: axienet: Factor out TX descriptor chain cleanup
Factor out the code that cleans up a number of connected TX descriptors,
as we will need it to properly roll back a failed _xmit() call.
There are subtle differences between cleaning up a successfully sent
chain (unknown number of involved descriptors, total data size needed)
and a chain that was about to be set up (number of descriptors known), so
cater for those variations with some extra parameters.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 13:23:38 +0000 (13:23 +0000)]
net: axienet: Improve DMA error handling
Since 0 is a valid DMA address, we cannot use the physical address to
check whether a TX descriptor is valid and is holding a DMA mapping.
Use the "cntrl" member of the descriptor to make this decision, as it
contains at least the length of the buffer, so 0 points to an
uninitialised buffer.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 13:23:37 +0000 (13:23 +0000)]
net: axienet: Fix DMA descriptor cleanup path
When axienet_dma_bd_init() bails out during the initialisation process,
it might do so with parts of the structure already allocated and
initialised, while other parts have not been touched yet. Before
returning in this case, we call axienet_dma_bd_release(), which does not
take care of this corner case.
This is most obvious by the first loop happily dereferencing
lp->rx_bd_v, which we actually check to be non-NULL *afterwards*.
Make sure we only unmap or free already allocated structures, by:
- directly returning with -ENOMEM if nothing has been allocated at all
- checking for lp->rx_bd_v to be non-NULL *before* using it
- only unmapping allocated DMA RX regions
This avoids NULL pointer dereferences when initialisation fails.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 13:23:36 +0000 (13:23 +0000)]
net: axienet: Propagate failure of DMA descriptor setup
When we fail allocating the DMA buffers in axienet_dma_bd_init(), we
report this error, but carry on with initialisation nevertheless.
This leads to a kernel panic when the driver later wants to send a
packet, as it uses uninitialised data structures.
Make the axienet_device_reset() routine return an error value, as it
contains the DMA buffer initialisation. Make sure we propagate the error
up the chain and eventually fail the driver initialisation, to avoid
relying on non-initialised buffers.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 13:23:35 +0000 (13:23 +0000)]
net: axienet: Convert DMA error handler to a work queue
The DMA error handler routine is currently a tasklet, scheduled to run
after the DMA error IRQ was handled.
However it needs to take the MDIO mutex, which is not allowed to do in a
tasklet. A kernel (with debug options) complains consequently:
[ 614.050361] net eth0: DMA Tx error 0x174019
[ 614.064002] net eth0: Current BD is at: 0x8f84aa0ce
[ 614.080195] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:935
[ 614.109484] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 40, name: kworker/u4:4
[ 614.135428] 3 locks held by kworker/u4:4/40:
[ 614.149075] #0: ffff000879863328 ((wq_completion)rpciod){....}, at: process_one_work+0x1f0/0x6a8
[ 614.177528] #1: ffff80001251bdf8 ((work_completion)(&task->u.tk_work)){....}, at: process_one_work+0x1f0/0x6a8
[ 614.209033] #2: ffff0008784e0110 (sk_lock-AF_INET-RPC){....}, at: tcp_sendmsg+0x24/0x58
[ 614.235429] CPU: 0 PID: 40 Comm: kworker/u4:4 Not tainted 5.6.0-rc3-00926-g4a165a9d5921 #26
[ 614.260854] Hardware name: ARM Test FPGA (DT)
[ 614.274734] Workqueue: rpciod rpc_async_schedule
[ 614.289022] Call trace:
[ 614.296871] dump_backtrace+0x0/0x1a0
[ 614.308311] show_stack+0x14/0x20
[ 614.318751] dump_stack+0xbc/0x100
[ 614.329403] ___might_sleep+0xf0/0x140
[ 614.341018] __might_sleep+0x4c/0x80
[ 614.352201] __mutex_lock+0x5c/0x8a8
[ 614.363348] mutex_lock_nested+0x1c/0x28
[ 614.375654] axienet_dma_err_handler+0x38/0x388
[ 614.389999] tasklet_action_common.isra.15+0x160/0x1a8
[ 614.405894] tasklet_action+0x24/0x30
[ 614.417297] efi_header_end+0xe0/0x494
[ 614.429020] irq_exit+0xd0/0xd8
[ 614.439047] __handle_domain_irq+0x60/0xb0
[ 614.451877] gic_handle_irq+0xdc/0x2d0
[ 614.463486] el1_irq+0xcc/0x180
[ 614.473451] __tcp_transmit_skb+0x41c/0xb58
[ 614.486513] tcp_write_xmit+0x224/0x10a0
[ 614.498792] __tcp_push_pending_frames+0x38/0xc8
[ 614.513126] tcp_rcv_established+0x41c/0x820
[ 614.526301] tcp_v4_do_rcv+0x8c/0x218
[ 614.537784] __release_sock+0x5c/0x108
[ 614.549466] release_sock+0x34/0xa0
[ 614.560318] tcp_sendmsg+0x40/0x58
[ 614.571053] inet_sendmsg+0x40/0x68
[ 614.582061] sock_sendmsg+0x18/0x30
[ 614.593074] xs_sendpages+0x218/0x328
[ 614.604506] xs_tcp_send_request+0xa0/0x1b8
[ 614.617461] xprt_transmit+0xc8/0x4f0
[ 614.628943] call_transmit+0x8c/0xa0
[ 614.640028] __rpc_execute+0xbc/0x6f8
[ 614.651380] rpc_async_schedule+0x28/0x48
[ 614.663846] process_one_work+0x298/0x6a8
[ 614.676299] worker_thread+0x40/0x490
[ 614.687687] kthread+0x134/0x138
[ 614.697804] ret_from_fork+0x10/0x18
[ 614.717319] xilinx_axienet 7fe00000.ethernet eth0: Link is Down
[ 615.748343] xilinx_axienet 7fe00000.ethernet eth0: Link is Up - 1Gbps/Full - flow control off
Since tasklets are not really popular anymore anyway, let's convert this
over to a work queue, which can sleep and thus can take the MDIO mutex.
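For reference, the usual tasklet-to-workqueue conversion pattern looks
roughly like this (member and function names follow the driver, but this
is a sketch, not the exact diff):

/* struct axienet_local: a work item replaces the tasklet */
struct work_struct dma_err_task;

/* The handler now runs in process context and may sleep, so taking the
 * MDIO mutex is fine here.
 */
static void axienet_dma_err_handler(struct work_struct *work)
{
        struct axienet_local *lp = container_of(work, struct axienet_local,
                                                dma_err_task);
        /* ... reset DMA, reconfigure MAC, take the MDIO mutex as needed ... */
}

/* setup, e.g. in axienet_open() */
INIT_WORK(&lp->dma_err_task, axienet_dma_err_handler);

/* in the DMA error interrupt handlers */
schedule_work(&lp->dma_err_task);

/* teardown, e.g. in axienet_stop() */
cancel_work_sync(&lp->dma_err_task);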
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andre Przywara [Tue, 24 Mar 2020 13:23:34 +0000 (13:23 +0000)]
net: xilinx: temac: Relax Kconfig dependencies
Similar to axienet, the temac driver is now architecture agnostic, and
can be at least compiled for several architectures.
Especially the fact that this is a soft IP for implementing in FPGAs
makes the current restriction rather pointless, as it could literally
appear on any architecture, as long as an FPGA is connected to the bus.
The driver hasn't actually been tried on any hardware; this is just a
drive-by patch made while doing the same for axienet (a similar patch for
axienet is already merged).
Both drivers (temac and axienet) have been compile-tested for:
alpha hppa64 microblaze mips64 powerpc powerpc64 riscv64 s390 sparc64
(using kernel.org cross compilers).
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladyslav Tarasiuk [Tue, 24 Mar 2020 11:57:08 +0000 (13:57 +0200)]
ethtool: fix incorrect tx-checksumming settings reporting
Currently, the ethtool feature mask for the checksum command is ORed with
NETIF_F_FCOE_CRC_BIT, which is the bit's position number, instead of the
actual feature bit, NETIF_F_FCOE_CRC.
The invalid bitmask here might affect unrelated features when toggling
TX checksumming. For example, TX checksumming is always mistakenly
reported as enabled on the netdevs tested (mlx5, virtio_net).
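To make the difference concrete, a hedged illustration of the bug class
(the exact composition of the checksum mask is simplified here):

/* NETIF_F_FCOE_CRC_BIT is a bit position (a small integer), while
 * NETIF_F_FCOE_CRC is the feature mask derived from it, roughly
 * ((netdev_features_t)1 << NETIF_F_FCOE_CRC_BIT).
 *
 * ORing the position into a feature mask sets an unrelated low-order
 * feature bit instead of the FCoE CRC one:
 */
netdev_features_t bad  = NETIF_F_CSUM_MASK | NETIF_F_SCTP_CRC | NETIF_F_FCOE_CRC_BIT;
netdev_features_t good = NETIF_F_CSUM_MASK | NETIF_F_SCTP_CRC | NETIF_F_FCOE_CRC;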
Fixes: f70bb06563ed ("ethtool: update mapping of features to legacy ioctl requests")
Signed-off-by: Vladyslav Tarasiuk <vladyslavt@mellanox.com>
Reviewed-by: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
Raju Rangoju [Tue, 24 Mar 2020 11:40:00 +0000 (17:10 +0530)]
cxgb4/ptp: pass the sign of offset delta in FW CMD
cxgb4_ptp_fineadjtime() doesn't pass the signedness of the offset delta
in FW_PTP_CMD. Fix it by passing the correct sign.
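A minimal, hypothetical sketch of carrying the sign separately from the
magnitude (the field names below are assumptions, not the actual
FW_PTP_CMD layout):

bool negative = delta < 0;
u64 abs_delta = negative ? (u64)-delta : (u64)delta;

/* Hypothetical fields: without an explicit sign, a negative delta would
 * be interpreted by the firmware as a huge positive adjustment.
 */
c.u.ts.sign = negative;
c.u.ts.tm = cpu_to_be64(abs_delta);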
Signed-off-by: Raju Rangoju <rajur@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dejin Zheng [Tue, 24 Mar 2020 11:26:47 +0000 (19:26 +0800)]
net: phy: mdio-mux-bcm-iproc: use readl_poll_timeout() to simplify code
Use readl_poll_timeout() to replace the open-coded polling loop and
simplify the iproc_mdio_wait_for_idle() function.
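readl_poll_timeout() from <linux/iopoll.h> wraps the read/sleep/timeout
loop in a single call; a hedged sketch of the resulting function (the
register offset, DONE bit and timeout values are assumptions):

#include <linux/iopoll.h>

static int iproc_mdio_wait_for_idle(void __iomem *base, bool result)
{
        u32 val;

        /* Poll MDIO_STAT every ~2ms until the DONE bit matches 'result',
         * giving up after one second.
         */
        return readl_poll_timeout(base + MDIO_STAT_OFFSET, val,
                                  (val & MDIO_STAT_DONE) == result,
                                  2000, 1000000);
}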
Signed-off-by: Dejin Zheng <zhengdejin5@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 24 Mar 2020 09:45:34 +0000 (11:45 +0200)]
net: dsa: tag_8021q: replace dsa_8021q_remove_header with __skb_vlan_pop
Not only did this wheel not need reinventing, but there is also an issue
with it: it doesn't remove the VLAN header in a way that preserves the L2
payload checksum when that is being provided by the DSA master hw. It
should recalculate the checksum both for the push, before removing the
header, and for the pull afterwards. But the current implementation is
quite dizzying, with pulls followed immediately by pushes, the memmove
done before the push, etc. This makes a DSA master with RX checksumming
offload print stack traces with the infamous 'hw csum failure' message.
So remove the dsa_8021q_remove_header function and replace it with
something that actually works with inet checksumming.
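__skb_vlan_pop() already does the checksum bookkeeping, so the
replacement in the tag receive path boils down to something like this
(error handling and tag parsing simplified):

u16 tci;

/* Let the core helper strip the VLAN header; it keeps skb->csum
 * consistent for CHECKSUM_COMPLETE devices instead of corrupting it.
 */
if (__skb_vlan_pop(skb, &tci))
        return NULL;

/* The VID and PCP can still be extracted from 'tci' afterwards. */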
Fixes: d461933638ae ("net: dsa: tag_8021q: Create helper function for removing VLAN header")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 24 Mar 2020 23:15:58 +0000 (16:15 -0700)]
Merge tag 'wireless-drivers-next-2020-03-24' of git://git./linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for v5.7
Second set of patches for v5.7. Lots of cleanup patches this time, but
of course various new features as well as fixes.
When merging with wireless-drivers this pull request has a conflict in:
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
To solve that just drop the changes from commit cf52c8a776d1 in
wireless-drivers and take the hunk from wireless-drivers-next as is.
The list of specific subsystem device IDs is not necessary anymore after
commit d6f2134a3831 (in wireless-drivers-next); the detection is based on
other characteristics of the devices.
Major changes:
qtnfmac
* support WPA3 SAE and OWE in AP mode
ath10k
* support for getting btcoex settings from Device Tree
* support QCA9377 SDIO device
ath11k
* add HE rate accounting
* add thermal sensor and cooling devices
mt76
* MT7663 support for the MT7615 driver
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Zh-yuan Ye [Tue, 24 Mar 2020 08:28:25 +0000 (17:28 +0900)]
net: cbs: Fix software cbs to consider packet sending time
Currently the software CBS does not consider the packet sending time
when depleting the credits. It caused the throughput to be
Idleslope[kbps] * (Port transmit rate[kbps] / |Sendslope[kbps]|), whereas
Idleslope * (Port transmit rate / (Idleslope + |Sendslope|)) = Idleslope
is expected. In order to fix the issue above, this patch takes the time
when the packet sending completes into account, by moving the anchor time
variable "last" ahead to the send completion time upon transmission and
adding a wait when the next dequeue request comes before the send
completion time of the previous packet.
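Conceptually, the estimated send completion time is just the wire time of
the packet at the port rate; a hedged sketch of the idea (variable names
assumed, not the exact sch_cbs diff):

/* Move the credit anchor to the moment the last bit of this packet has
 * left the port, so credits keep draining during transmission. A later
 * dequeue arriving before this time has to wait.
 */
if (q->port_rate)
        q->last = now + div64_s64((s64)len * NSEC_PER_SEC, q->port_rate);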
changelog:
V2->V3:
- remove unnecessary whitespace cleanup
- add the checks if port_rate is 0 before division
V1->V2:
- combine variable "send_completed" into "last"
- add the comment for estimate of the packet sending
Fixes: 585d763af09c ("net/sched: Introduce Credit Based Shaper (CBS) qdisc")
Signed-off-by: Zh-yuan Ye <ye.zh-yuan@socionext.com>
Reviewed-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Aya Levin [Thu, 19 Mar 2020 11:25:17 +0000 (13:25 +0200)]
net/mlx5e: Do not recover from a non-fatal syndrome
For non-fatal syndromes like LOCAL_LENGTH_ERR, recovery shouldn't be
triggered. In these scenarios, the RQ is not actually in ERR state.
This misleads the recovery flow which assumes that the RQ is really in
error state and no more completions arrive, causing crashes on bad page
state.
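Schematically, the fix amounts to filtering on the CQE error syndrome
before arming recovery; a sketch under the assumption that the syndrome
is available at that point (the helper name is invented for
illustration):

/* Only truly fatal syndromes put the RQ into an error state that needs
 * the recovery flow; e.g. LOCAL_LENGTH_ERR does not.
 */
static bool rq_cqe_syndrome_needs_recover(u8 syndrome)
{
        switch (syndrome) {
        case MLX5_CQE_SYNDROME_LOCAL_LENGTH_ERR:
                return false;
        default:
                return true;
        }
}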
Fixes: 8276ea1353a4 ("net/mlx5e: Report and recover from CQE with error on RQ")
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Aya Levin [Mon, 16 Mar 2020 14:53:10 +0000 (16:53 +0200)]
net/mlx5e: Fix ICOSQ recovery flow with Striding RQ
In striding RQ mode, the buffers of an RX WQE are first
prepared and posted to the HW using a UMR WQEs via the ICOSQ.
We maintain the state of these in-progress WQEs in the RQ
SW struct.
In the flow of ICOSQ recovery, the corresponding RQ is not
in error state, hence:
- The buffers of the in-progress WQEs must be released
and the RQ metadata should reflect it.
- Existing RX WQEs in the RQ should not be affected.
For this, wrap the dealloc of the in-progress WQEs in
a function, and use it in the ICOSQ recovery flow
instead of mlx5e_free_rx_descs().
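In outline, the change looks like the following (the wrapper's name and
body here are illustrative guesses based on the description above, not
the exact driver code):

/* Release only the in-progress UMR WQEs tracked in the RQ SW struct and
 * bring the RQ metadata back in sync; already-posted RX WQEs are left
 * untouched.
 */
static void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq)
{
        /* walk the in-flight UMR WQE state, release its buffers,
         * and reset the corresponding bookkeeping fields
         */
}

/* ICOSQ recovery flow: */
mlx5e_free_rx_in_progress_descs(rq);    /* was: mlx5e_free_rx_descs(rq) */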
Fixes: be5323c8379f ("net/mlx5e: Report and recover from CQE error on ICOSQ")
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Aya Levin [Thu, 12 Mar 2020 10:35:32 +0000 (12:35 +0200)]
net/mlx5e: Fix missing reset of SW metadata in Striding RQ reset
When resetting the RQ (moving RQ state from RST to RDY), the driver
resets the WQ's SW metadata.
In striding RQ mode, we maintain a field that reflects the actual
expected WQ head (including in progress WQEs posted to the ICOSQ).
It was mistakenly not reset together with the WQ. Fix this here.
Fixes: 8276ea1353a4 ("net/mlx5e: Report and recover from CQE with error on RQ")
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Aya Levin [Mon, 9 Mar 2020 07:44:18 +0000 (09:44 +0200)]
net/mlx5e: Enhance ICOSQ WQE info fields
Add number of WQEBBs (WQE's Basic Block) to WQE info struct. Set the
number of WQEBBs on WQE post, and increment the consumer counter (cc)
on completion.
In case of error completions, the cc was mistakenly not incremented,
leaving a gap between cc and pc (producer counter). This broke the
recovery flow on the ICOSQ after a CQE error, which timed out waiting for
the cc and pc to meet.
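Schematically (struct and field names are assumptions based on the
description), the completion path now always advances the consumer
counter by the WQE's size in basic blocks:

struct mlx5e_icosq_wqe_info {
        u8 opcode;
        u8 num_wqebbs;  /* set when the WQE is posted */
};

/* On completion, successful or not: */
wi = &sq->db.wqe_info[ci];
sqcc += wi->num_wqebbs; /* previously skipped for error CQEs, so cc never caught up with pc */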
Fixes: be5323c8379f ("net/mlx5e: Report and recover from CQE error on ICOSQ")
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Leon Romanovsky [Mon, 16 Mar 2020 07:31:03 +0000 (09:31 +0200)]
net/mlx5_core: Set IB capability mask1 to fix ib_srpt connection failure
The cap_mask1 field isn't protected by field_select and is not listed
among the RW fields, but it must be written to properly initialize ports
in IB virtualization mode.
Link: https://lore.kernel.org/linux-rdma/88bab94d2fd72f3145835b4518bc63dda587add6.camel@redhat.com
Fixes: ab118da4c10a ("net/mlx5: Don't write read-only fields in MODIFY_HCA_VPORT_CONTEXT command")
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Florian Westphal [Mon, 23 Mar 2020 16:34:30 +0000 (17:34 +0100)]
selftests: netfilter: add nfqueue test case
Add a test case to check nf queue infrastructure.
Could be extended in the future to also cover serialization of
conntrack, uid and secctx attributes in nfqueue.
For now, this checks that 'queue bypass' works, that a queue rule with
no bypass option blocks traffic and that userspace receives the expected
number of packets.
For this we add two queues and hook all of
prerouting/input/forward/output/postrouting.
Packets get queued twice with a dummy base chain in between:
This passes with the current nf tree, but reverting commit
946c0d8e6ed4 ("netfilter: nf_queue: fix reinject verdict handling")
makes this trip (it processes 30 instead of the expected 20 packets).
v2: update config file with queue and other options missing/needed for
other tests.
v3: also test with tcp; this reveals a problem with commit
28f8bfd1ac94 ("netfilter: Support iif matches in POSTROUTING"), due to
skb->dev pointing at another skb in the retransmit rbtree (skb->dev
aliases to the rbnode child).
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Mon, 23 Mar 2020 18:53:10 +0000 (19:53 +0100)]
netfilter: nft_fwd_netdev: allow to redirect to ifb via ingress
Set skb->tc_redirected to 1, otherwise the ifb driver drops the packet.
Set skb->tc_from_ingress to 1 to reinject the packet back to the ingress
path after leaving the ifb egress path.
This patch unconditionally sets these two skb fields, which are
meaningful to the ifb driver. The existing forward action is guaranteed
to run from the ingress path.
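In code terms, the change amounts to marking the skb before it is handed
to the ifb device; a minimal sketch (the surrounding transmit path is
omitted):

/* ifb only accepts packets that look like a tc ingress redirect, so mark
 * the skb accordingly before queueing it on the target device.
 */
skb->tc_redirected = 1;
skb->tc_from_ingress = 1;
dev_queue_xmit(skb);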
Fixes: 39e6dea28adc ("netfilter: nf_tables: add forward expression to the netdev family")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Mon, 23 Mar 2020 13:27:16 +0000 (14:27 +0100)]
netfilter: nft_fwd_netdev: validate family and chain type
Make sure the forward action is only used from ingress.
Fixes: 39e6dea28adc ("netfilter: nf_tables: add forward expression to the netdev family")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Stefano Brivio [Sun, 22 Mar 2020 02:22:01 +0000 (03:22 +0100)]
netfilter: nft_set_rbtree: Detect partial overlaps on insertion
...and return -ENOTEMPTY to the front-end in this case, instead of
proceeding. Currently, nft takes care of checking for these cases
and not sending them to the kernel, but if we drop the set_overlap()
call in nft we can end up in situations like:
# nft add table t
# nft add set t s '{ type inet_service ; flags interval ; }'
# nft add element t s '{ 1 - 5 }'
# nft add element t s '{ 6 - 10 }'
# nft add element t s '{ 4 - 7 }'
# nft list set t s
table ip t {
set s {
type inet_service
flags interval
elements = { 1-3, 4-5, 6-7 }
}
}
This change has the primary purpose of making the behaviour
consistent with nft_set_pipapo, but is also functional to avoid
inconsistent behaviour if userspace sends overlapping elements for
any reason.
v2: When we meet the same key data in the tree, as a start element while
inserting an end element, or as an end element while inserting a start
element, actually check that the existing element is active before
resetting the overlap flag (Pablo Neira Ayuso)
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Stefano Brivio [Sun, 22 Mar 2020 02:22:00 +0000 (03:22 +0100)]
netfilter: nft_set_rbtree: Introduce and use nft_rbtree_interval_start()
Replace negations of nft_rbtree_interval_end() with a new helper,
nft_rbtree_interval_start(), wherever this helps to visualise the
problem at hand, that is, for all the occurrences except for the
comparison against given flags in __nft_rbtree_get().
This gets especially useful in the next patch.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>