linux-2.6-microblaze.git
3 years ago dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-eth-dcb.c
Ioana Ciornei [Mon, 31 Aug 2020 18:12:40 +0000 (21:12 +0300)]
dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-eth-dcb.c

Some static functions in the dpaa2-eth driver don't have the dpaa2_eth_
prefix and this is becoming an inconvenience when looking at, for
example, a perf top output and trying to determine easily which entries
are dpaa2-eth related. Amend this by adding the prefix to all the
functions.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-eth.c
Ioana Ciornei [Mon, 31 Aug 2020 18:12:39 +0000 (21:12 +0300)]
dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-eth.c

Some static functions in the dpaa2-eth driver don't have the dpaa2_eth_
prefix and this is becoming an inconvenience when looking at, for
example, a perf top output and trying to determine easily which entries
are dpaa2-eth related. Amend this by adding the prefix to all the
functions.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-ethtool.c
Ioana Ciornei [Mon, 31 Aug 2020 18:12:38 +0000 (21:12 +0300)]
dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-ethtool.c

Some static functions in the dpaa2-eth driver don't have the dpaa2_eth_
prefix and this is becoming an inconvenience when looking at, for
example, a perf top output and trying to determine easily which entries
are dpaa2-eth related. Amend this by adding the prefix to all the
functions.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: openvswitch: fixes crash if nf_conncount_init() fails
Eelco Chaudron [Mon, 31 Aug 2020 09:57:57 +0000 (11:57 +0200)]
net: openvswitch: fixes crash if nf_conncount_init() fails

Currently, if nf_conncount_init() fails, the dispatched work is not canceled,
causing problems when the timer fires. This change fixes this by not
scheduling the work until all initialization is successful.

Fixes: a65878d6f00b ("net: openvswitch: fixes potential deadlock in dp cleanup code")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Reviewed-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago ibmvnic: Harden device Command Response Queue handshake
Thomas Falcon [Mon, 31 Aug 2020 16:59:57 +0000 (11:59 -0500)]
ibmvnic: Harden device Command Response Queue handshake

In some cases, the device or firmware may be busy when the
driver attempts to perform the CRQ initialization handshake.
If the partner is busy, the hypervisor will return the H_CLOSED
return code. With this patch, if the device is not ready, the driver
queries the device a number of times, with a small wait time in
between queries. If all initialization requests fail,
the driver will remain in a dormant state, awaiting a signal
from the device that it is ready for operation.

Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller [Tue, 1 Sep 2020 20:05:08 +0000 (13:05 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2020-09-01

The following pull-request contains BPF updates for your *net-next* tree.

There are two small conflicts when pulling, resolve as follows:

1) Merge conflict in tools/lib/bpf/libbpf.c between 88a82120282b ("libbpf: Factor
   out common ELF operations and improve logging") in bpf-next and 1e891e513e16
   ("libbpf: Fix map index used in error message") in net-next. Resolve by taking
   the hunk in bpf-next:

        [...]
        scn = elf_sec_by_idx(obj, obj->efile.btf_maps_shndx);
        data = elf_sec_data(obj, scn);
        if (!scn || !data) {
                pr_warn("elf: failed to get %s map definitions for %s\n",
                        MAPS_ELF_SEC, obj->path);
                return -EINVAL;
        }
        [...]

2) Merge conflict in drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c between
   9647c57b11e5 ("xsk: i40e: ice: ixgbe: mlx5: Test for dma_need_sync earlier for
   better performance") in bpf-next and e20f0dbf204f ("net/mlx5e: RX, Add a prefetch
   command for small L1_CACHE_BYTES") in net-next. Resolve the two locations by retaining
   net_prefetch() and taking xsk_buff_dma_sync_for_cpu() from bpf-next. Should look like:

        [...]
        xdp_set_data_meta_invalid(xdp);
        xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
        net_prefetch(xdp->data);
        [...]

We've added 133 non-merge commits during the last 14 day(s) which contain
a total of 246 files changed, 13832 insertions(+), 3105 deletions(-).

The main changes are:

1) Initial support for sleepable BPF programs along with bpf_copy_from_user() helper
   for tracing to reliably access user memory, from Alexei Starovoitov.

2) Add BPF infra for writing and parsing TCP header options, from Martin KaFai Lau.

3) bpf_d_path() helper for returning full path for given 'struct path', from Jiri Olsa.

4) AF_XDP support for shared umems between devices and queues, from Magnus Karlsson.

5) Initial prep work for full BPF-to-BPF call support in libbpf, from Andrii Nakryiko.

6) Generalize bpf_sk_storage map & add local storage for inodes, from KP Singh.

7) Implement sockmap/hash updates from BPF context, from Lorenz Bauer.

8) BPF xor verification for scalar types & add BPF link iterator, from Yonghong Song.

9) Use target's prog type for BPF_PROG_TYPE_EXT prog verification, from Udip Pant.

10) Rework BPF tracing samples to use libbpf loader, from Daniel T. Lee.

11) Fix xdpsock sample to really cycle through all buffers, from Weqaar Janjua.

12) Improve type safety for tun/veth XDP frame handling, from Maciej Żenczykowski.

13) Various smaller cleanups and improvements all over the place.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago liquidio: Remove unneeded cast from memory allocation
YueHaibing [Tue, 1 Sep 2020 14:11:15 +0000 (22:11 +0800)]
liquidio: Remove unneeded cast from memory allocation

Remove unneeded return value cast.
This is detected by coccinelle.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: sungem: Remove unneeded cast from memory allocation
YueHaibing [Tue, 1 Sep 2020 14:10:28 +0000 (22:10 +0800)]
net: sungem: Remove unneeded cast from memory allocation

Remove dma_alloc_coherent return value cast.
This is detected by coccinelle.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net/tls: Implement getsockopt SOL_TLS TLS_RX
Yutaro Hayakawa [Tue, 1 Sep 2020 13:59:45 +0000 (22:59 +0900)]
net/tls: Implement getsockopt SOL_TLS TLS_RX

Implement the getsockopt SOL_TLS TLS_RX which is currently missing. The
primary usecase is to use it in conjunction with TCP_REPAIR to
checkpoint/restore the TLS record layer state.

TLS connection state usually lives in the user-space library, so
basically we can easily extract it from there; but when the TLS
connections are delegated to kTLS, that is no longer the case. We need
a way to extract the TLS state from the kernel for both the TX and
RX sides.

The new TLS_RX getsockopt copies the crypto_info to user in the same
way as TLS_TX does.

We have described use cases in our research work in Netdev 0x14
Transport Workshop [1].

Also, there is a TLS implementation called tlse [2] which supports
TLS connection migration. It has support for kTLS, and its code
shows that it anticipates future support of this option.

[1] https://speakerdeck.com/yutarohayakawa/prism-proxies-without-the-pain
[2] https://github.com/eduardsui/tlse

Signed-off-by: Yutaro Hayakawa <yhayakawa3720@gmail.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
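For illustration, a user-space sketch of reading the RX state back might look like the following; it assumes an AES-GCM-128 cipher was installed earlier via setsockopt(SOL_TLS, TLS_RX, ...) and is not part of the patch itself:

        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <linux/tls.h>

        #ifndef SOL_TLS
        #define SOL_TLS 282
        #endif

        /* sketch: dump the kTLS RX crypto state of an established kTLS socket */
        static int dump_tls_rx_state(int sk)
        {
                struct tls12_crypto_info_aes_gcm_128 info;
                socklen_t len = sizeof(info);

                memset(&info, 0, sizeof(info));
                if (getsockopt(sk, SOL_TLS, TLS_RX, &info, &len) < 0)
                        return -1;

                /* info.rec_seq now holds the current RX record sequence number,
                 * which is what a checkpoint/restore tool would want to save. */
                printf("version 0x%x cipher %u\n", info.info.version,
                       info.info.cipher_type);
                return 0;
        }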
3 years ago Merge branch 'net-openvswitch-improve-the-codes'
David S. Miller [Tue, 1 Sep 2020 18:42:15 +0000 (11:42 -0700)]
Merge branch 'net-openvswitch-improve-the-codes'

Tonghao Zhang says:

====================
net: openvswitch: improve the codes

This patch series is not a bug fix; it just improves the code.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: openvswitch: remove unused keep_flows
Tonghao Zhang [Tue, 1 Sep 2020 12:26:14 +0000 (20:26 +0800)]
net: openvswitch: remove unused keep_flows

keep_flows was introduced by [1] and was used as a flag indicating whether or
not to delete flows. When rehashing or expanding the table instance, we do not
flush the flows. It is not used anymore, so remove it.

[1] - https://github.com/openvswitch/ovs/commit/acd051f1761569205827dc9b037e15568a8d59f8
Cc: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: openvswitch: refactor flow free function
Tonghao Zhang [Tue, 1 Sep 2020 12:26:13 +0000 (20:26 +0800)]
net: openvswitch: refactor flow free function

Decrease table->count and ufid_count unconditionally;
the only case where count and ufid_count are not used
is when flushing the flows. To simplify the code, we
remove the "count" argument of table_instance_flow_free().

To guard against future bugs when deleting flows, add a
WARN_ON in the flow flush function.

Cc: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: openvswitch: improve the coding style
Tonghao Zhang [Tue, 1 Sep 2020 12:26:12 +0000 (20:26 +0800)]
net: openvswitch: improve the coding style

This does not change the logic, it just improves the coding style.

Cc: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago bpf: {cpu,dev}map: Change various functions return type from int to void
Björn Töpel [Tue, 1 Sep 2020 08:39:28 +0000 (10:39 +0200)]
bpf: {cpu,dev}map: Change various functions return type from int to void

The functions bq_enqueue(), bq_flush_to_queue(), and bq_xmit_all() in
{cpu,dev}map.c always return zero. Changing the return type from int
to void makes the code easier to follow.

Suggested-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20200901083928.6199-1-bjorn.topel@gmail.com
3 years ago bpf: Remove bpf_lsm_file_mprotect from sleepable list.
Alexei Starovoitov [Mon, 31 Aug 2020 20:16:51 +0000 (13:16 -0700)]
bpf: Remove bpf_lsm_file_mprotect from sleepable list.

Technically the bpf programs can sleep while attached to bpf_lsm_file_mprotect,
but such programs need to access user memory. So they're in the might_fault()
category, which means they cannot be called from the file_mprotect lsm hook that
takes a write lock on mm->mmap_lock.
Adjust the test accordingly.

Also add might_fault() to __bpf_prog_enter_sleepable() to catch such deadlocks early.

Fixes: 1e6c62a88215 ("bpf: Introduce sleepable BPF programs")
Fixes: e68a144547fc ("selftests/bpf: Add sleepable tests")
Reported-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200831201651.82447-1-alexei.starovoitov@gmail.com
3 years ago samples/bpf: Fix to xdpsock to avoid recycling frames
Weqaar Janjua [Fri, 28 Aug 2020 16:17:17 +0000 (00:17 +0800)]
samples/bpf: Fix to xdpsock to avoid recycling frames

The txpush program in the xdpsock sample application is supposed
to send out all packets in the umem in a round-robin fashion.
The problem is that it only cycled through the first BATCH_SIZE
worth of packets. Fix this so that it cycles through all buffers
in the umem as intended.

Fixes: 248c7f9c0e21 ("samples/bpf: convert xdpsock to use libbpf for AF_XDP access")
Signed-off-by: Weqaar Janjua <weqaar.a.janjua@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/20200828161717.42705-1-weqaar.a.janjua@intel.com
3 years ago samples/bpf: Optimize l2fwd performance in xdpsock
Magnus Karlsson [Fri, 28 Aug 2020 12:51:05 +0000 (14:51 +0200)]
samples/bpf: Optimize l2fwd performance in xdpsock

Optimize the throughput performance of the l2fwd sub-app in the
xdpsock sample application by removing a duplicate syscall and
increasing the size of the fill ring.

The latter needs some further explanation. We recommend that you set
the fill ring size >= HW RX ring size + AF_XDP RX ring size. Make sure
you keep the fill ring topped up with buffers at regular intervals, and
with this setting you will avoid allocation failures in the driver. These
are usually quite expensive since drivers have not been written to
assume that allocation failures are common. For regular sockets,
kernel-allocated memory is used, and that only runs out in OOM situations,
which should be rare.

These two performance optimizations together lead to a 6%
improvement for the l2fwd app on my machine.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1598619065-1944-1-git-send-email-magnus.karlsson@intel.com
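To make the sizing rule concrete, a libbpf-based application might configure the umem roughly along these lines (a sketch: the HW RX ring size of 1024 is a placeholder to be read with ethtool -g, and ring sizes must be powers of two):

        #include <bpf/xsk.h>

        #define HW_RX_RING_SIZE 1024    /* hypothetical; read the real value with ethtool -g */

        static const struct xsk_umem_config umem_cfg = {
                /* fill ring >= HW RX ring size + AF_XDP RX ring size
                 * (1024 + 2048 here), rounded up to the next power of two */
                .fill_size      = 4096,
                .comp_size      = XSK_RING_PROD__DEFAULT_NUM_DESCS,
                .frame_size     = XSK_UMEM__DEFAULT_FRAME_SIZE,
                .frame_headroom = XSK_UMEM__DEFAULT_FRAME_HEADROOM,
        };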
3 years ago net: ipv4: remove unused arg exact_dif in compute_score
Miaohe Lin [Mon, 31 Aug 2020 06:26:34 +0000 (02:26 -0400)]
net: ipv4: remove unused arg exact_dif in compute_score

The arg exact_dif is not used anymore, remove it. inet_exact_dif_match()
is no longer needed after the above is removed, so remove it too.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: ipv6: remove unused arg exact_dif in compute_score
Miaohe Lin [Mon, 31 Aug 2020 06:26:10 +0000 (02:26 -0400)]
net: ipv6: remove unused arg exact_dif in compute_score

The arg exact_dif is not used anymore, remove it. inet6_exact_dif_match()
is no longer needed after the above is removed, so remove it too.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago Merge branch 'net-phy-add-Lynx-PCS-MDIO-module'
David S. Miller [Mon, 31 Aug 2020 19:52:33 +0000 (12:52 -0700)]
Merge branch 'net-phy-add-Lynx-PCS-MDIO-module'

Ioana Ciornei says:

====================
net: phy: add Lynx PCS MDIO module

Add support for the Lynx PCS as a separate module in drivers/net/phy/.
The advantage of this structure is that multiple ethernet or switch
drivers used on NXP hardware (ENETC, Seville, Felix DSA switch etc) can
share the same implementation of PCS configuration and runtime
management.

The module implements phylink_pcs_ops and exports a phylink_pcs
(incorporated into a lynx_pcs) which can be directly passed to phylink
through phylink_pcs_set.

The first 3 patches add some missing pieces in phylink and the locked
mdiobus write accessor. Next, the Lynx PCS MDIO module is added as a
standalone module. The majority of the code is extracted from the Felix
DSA driver. The last patch makes the necessary changes in the Felix and
Seville drivers in order to use the new common PCS implementation.

At the moment, USXGMII (only with in-band AN), SGMII, QSGMII (with and
without in-band AN) and 2500Base-X (only w/o in-band AN) are supported
by the Lynx PCS MDIO module since these were also supported by Felix and
no functional change is intended at this time.

Changes in v2:
 * got rid of the mdio_lynx_pcs structure and directly exported the
 functions without the need of an indirection
 * made the necessary adjustments for this in the Felix DSA driver
 * solved the broken allmodconfig build test by making the module
 tristate instead of bool
 * fixed a memory leakage in the Felix driver (the pcs structure was
 allocated twice)

Changes in v3:
 * added support for PHYLINK PCS ops in DSA (patch 5/9)
 * cleanup in Felix PHYLINK operations and migrate to
 phylink_mac_link_up() being the callback of choice for applying MAC
 configuration (patches 6-8)

Changes in v4:
 * use the newly introduced phylink PCS mechanism
 * install the phylink_pcs in the phylink_mac_config DSA ops
 * remove the direct implementations of the PCS ops
 * do no use the SGMII_ prefix when referring to the IF_MORE register
 * add a phylink helper to decode the USXGMII code word
 * remove cleanup patches for Felix (these have been already accepted)
 * Seville (recently introduced) now has PCS support through the same
 Lynx PCS module

Changes in v5:
 - move the pcs-lynx driver to drivers/net/pcs
 - reword the commit message a bit in 4/5
 - add error checking and error propagation in 4/5
 - s/IF_MODE_DUPLEX/IF_MODE_HALF_DUPLEX in 4/5
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: dsa: ocelot: use the Lynx PCS helpers in Felix and Seville
Ioana Ciornei [Sun, 30 Aug 2020 08:34:02 +0000 (11:34 +0300)]
net: dsa: ocelot: use the Lynx PCS helpers in Felix and Seville

Use the helper functions introduced by the newly added
Lynx PCS MDIO module in the Felix VSC9959 and Seville VSC9953.

Instead of representing the PCS as a phy_device, a mdio_device structure
will be passed to the Lynx module which is now actually implementing all
the PCS configuration and status reporting.

All code previously used for PCS monitoring and runtime configuration
is removed and replaced with calls to the Lynx PCS operations.

Tested on the following SERDES protocols of LS1028A: 0x7777
(2500Base-X), 0x85bb (QSGMII), 0x9999 (SGMII) and 0x13bb (USXGMII).

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: phy: add Lynx PCS module
Ioana Ciornei [Sun, 30 Aug 2020 08:34:01 +0000 (11:34 +0300)]
net: phy: add Lynx PCS module

Add a Lynx PCS module which exposes the necessary operations to drive
the PCS using phylink.

The majority of the code is extracted from the Felix DSA driver, which
will be also modified in a later patch, and exposed as a separate module
for code reusability purposes.
As such, this aims at feature and bug parity with the existing Felix DSA
driver, and thus USXGMII, SGMII, QSGMII and 2500Base-X (only w/o in-band
AN) are supported by the Lynx PCS module since these were also supported
by Felix.

The module can only be enabled by the drivers that need it and is not
user selectable.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: mdiobus: add clause 45 mdiobus write accessor
Ioana Ciornei [Sun, 30 Aug 2020 08:34:00 +0000 (11:34 +0300)]
net: mdiobus: add clause 45 mdiobus write accessor

Add the locked variant of the clause 45 mdiobus write accessor -
mdiobus_c45_write().

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: phylink: consider QSGMII interface mode in phylink_mii_c22_pcs_get_state
Ioana Ciornei [Sun, 30 Aug 2020 08:33:59 +0000 (11:33 +0300)]
net: phylink: consider QSGMII interface mode in phylink_mii_c22_pcs_get_state

The same link partner advertisement word is used for both QSGMII and
SGMII, thus treat both interface modes using the same
phylink_decode_sgmii_word() function.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
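For illustration, the shape of the change inside phylink_mii_c22_pcs_get_state() is roughly the following (a simplified sketch, not the verbatim upstream hunk):

        [...]
        switch (state->interface) {
        case PHY_INTERFACE_MODE_SGMII:
        case PHY_INTERFACE_MODE_QSGMII:         /* now decoded the same way as SGMII */
                phylink_decode_sgmii_word(state, lpa);
                break;
        [...]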
3 years ago net: phylink: add helper function to decode USXGMII word
Ioana Ciornei [Sun, 30 Aug 2020 08:33:58 +0000 (11:33 +0300)]
net: phylink: add helper function to decode USXGMII word

With the new addition of the USXGMII link partner ability constants we
can now introduce a phylink helper that decodes the USXGMII word and
populates the appropriate fields in the phylink_link_state structure
based on them.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net/wan/fsl_ucc_hdlc: Add MODULE_DESCRIPTION
YueHaibing [Sat, 29 Aug 2020 11:58:23 +0000 (19:58 +0800)]
net/wan/fsl_ucc_hdlc: Add MODULE_DESCRIPTION

Add missing MODULE_DESCRIPTION.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: hns: Remove unused macro AE_NAME_PORT_ID_IDX
YueHaibing [Sat, 29 Aug 2020 11:57:37 +0000 (19:57 +0800)]
net: hns: Remove unused macro AE_NAME_PORT_ID_IDX

There is no caller in tree.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: dl2k: Remove unused macro DRV_NAME
YueHaibing [Sat, 29 Aug 2020 11:56:23 +0000 (19:56 +0800)]
net: dl2k: Remove unused macro DRV_NAME

There is no caller in tree any more.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: wan: slic_ds26522: Remove unused macro DRV_NAME
YueHaibing [Sat, 29 Aug 2020 11:55:49 +0000 (19:55 +0800)]
net: wan: slic_ds26522: Remove unused macro DRV_NAME

There is no caller in tree any more.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago tipc: Remove unused macro TIPC_NACK_INTV
YueHaibing [Sat, 29 Aug 2020 11:52:14 +0000 (19:52 +0800)]
tipc: Remove unused macro TIPC_NACK_INTV

There is no caller in tree any more.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago tipc: Remove unused macro TIPC_FWD_MSG
YueHaibing [Sat, 29 Aug 2020 11:50:26 +0000 (19:50 +0800)]
tipc: Remove unused macro TIPC_FWD_MSG

There is no caller in tree any more.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago mptcp: Remove unused macro MPTCP_SAME_STATE
YueHaibing [Sat, 29 Aug 2020 11:42:26 +0000 (19:42 +0800)]
mptcp: Remove unused macro MPTCP_SAME_STATE

There is no caller in tree any more.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: clean up codestyle
Miaohe Lin [Sat, 29 Aug 2020 09:21:30 +0000 (05:21 -0400)]
net: clean up codestyle

This is a pure codestyle cleanup patch. No functional change intended.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: Use helper macro IP_MAX_MTU in __ip_append_data()
Miaohe Lin [Sat, 29 Aug 2020 09:09:18 +0000 (05:09 -0400)]
net: Use helper macro IP_MAX_MTU in __ip_append_data()

What 0xFFFF means here is actually the maximum MTU of an IP packet. Use the
helper macro IP_MAX_MTU here.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: ethernet: ti: am65-cpts: fix i2083 genf (and estf) Reconfiguration Issue
Grygorii Strashko [Fri, 28 Aug 2020 20:33:25 +0000 (23:33 +0300)]
net: ethernet: ti: am65-cpts: fix i2083 genf (and estf) Reconfiguration Issue

The new bit TX_GENF_CLR_EN has been added in AM65x SR2.0 to fix the i2083
erratum; it can simply be set unconditionally for all SoCs.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago Merge branch 'sfc-clean-up-some-W-1-build-warnings'
David S. Miller [Mon, 31 Aug 2020 19:28:50 +0000 (12:28 -0700)]
Merge branch 'sfc-clean-up-some-W-1-build-warnings'

Edward Cree says:

====================
sfc: clean up some W=1 build warnings

A collection of minor fixes to issues flagged up by W=1.
After this series, the only remaining warnings in the sfc driver are
 some 'member missing in kerneldoc' warnings from ptp.c.
Tested by building on x86_64 and running 'ethtool -p' on an EF10 NIC;
 there was no error, but I couldn't observe the actual LED as I'm
 working remotely.

[ Incidentally, ethtool_phys_id()'s behaviour on an error return
  looks strange — if I'm reading it right, it will break out of the
  inner loop but not the outer one, and eventually return the rc
  from the last run of the inner loop.  Is this intended? ]
====================

Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago sfc: return errors from efx_mcdi_set_id_led, and de-indirect
Edward Cree [Fri, 28 Aug 2020 17:51:04 +0000 (18:51 +0100)]
sfc: return errors from efx_mcdi_set_id_led, and de-indirect

W=1 warnings indicated that 'rc' was unused in efx_mcdi_set_id_led();
 change the function to return int instead of void and plumb the rc
 through the caller efx_ethtool_phys_id().
Since (post-Falcon) all sfc NICs use MCDI for this, there's no point in
 indirecting through a nic_type method, so remove that and just call
 efx_mcdi_set_id_led() directly.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago sfc: fix kernel-doc on struct efx_loopback_state
Edward Cree [Fri, 28 Aug 2020 17:50:39 +0000 (18:50 +0100)]
sfc: fix kernel-doc on struct efx_loopback_state

Missing 'struct' keyword caused "cannot understand function prototype"
 warnings.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago sfc: fix unused-but-set-variable warning in efx_farch_filter_remove_safe
Edward Cree [Fri, 28 Aug 2020 17:50:24 +0000 (18:50 +0100)]
sfc: fix unused-but-set-variable warning in efx_farch_filter_remove_safe

Thanks to a past refactor, 'spec' is not actually used in this
 function; the code using it moved to the callee efx_farch_filter_remove.
Remove the variable to fix a W=1 warning.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago sfc: fix W=1 warnings in efx_farch_handle_rx_not_ok
Edward Cree [Fri, 28 Aug 2020 17:50:02 +0000 (18:50 +0100)]
sfc: fix W=1 warnings in efx_farch_handle_rx_not_ok

Some of these RX-event flags aren't used at all, so remove them.
Others are used only #ifdef DEBUG to log a message; suppress the
 unused-var warnings #ifndef DEBUG with a void cast.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago Merge branch 'Add-ip6_fragment-in-ipv6_stub'
David S. Miller [Mon, 31 Aug 2020 19:26:39 +0000 (12:26 -0700)]
Merge branch 'Add-ip6_fragment-in-ipv6_stub'

wenxu says:

====================
Add ip6_fragment in ipv6_stub

Add ip6_fragment in ipv6_stub and use it in openvswitch.
This version adds the default function eafnosupport_ipv6_fragment.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago openvswitch: using ip6_fragment in ipv6_stub
wenxu [Fri, 28 Aug 2020 15:14:32 +0000 (23:14 +0800)]
openvswitch: using ip6_fragment in ipv6_stub

Using ipv6_stub->ipv6_fragment to avoid the netfilter dependency

Signed-off-by: wenxu <wenxu@ucloud.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago ipv6: add ipv6_fragment hook in ipv6_stub
wenxu [Fri, 28 Aug 2020 15:14:31 +0000 (23:14 +0800)]
ipv6: add ipv6_fragment hook in ipv6_stub

Add ipv6_fragment to ipv6_stub to avoid calling into netfilter when
accessing ip6_fragment.

Signed-off-by: wenxu <wenxu@ucloud.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago Merge branch 'gtp-minor-enhancements'
David S. Miller [Mon, 31 Aug 2020 19:24:35 +0000 (12:24 -0700)]
Merge branch 'gtp-minor-enhancements'

Nicolas Dichtel says:

====================
gtp: minor enhancements

The first patch removes a useless rcu lock and the second relaxes alloc
constraints when a PDP context is added.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago gtp: relax alloc constraint when adding a pdp
Nicolas Dichtel [Fri, 28 Aug 2020 13:30:56 +0000 (15:30 +0200)]
gtp: relax alloc constraint when adding a pdp

When a PDP context is added, the rtnl lock is held, thus there is no need
to force GFP_ATOMIC.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago gtp: remove useless rcu_read_lock()
Nicolas Dichtel [Fri, 28 Aug 2020 13:30:55 +0000 (15:30 +0200)]
gtp: remove useless rcu_read_lock()

The rtnl lock is taken just one line above; there is no need to also take the rcu lock.

Fixes: 1788b8569f5d ("gtp: fix use-after-free in gtp_encap_destroy()")
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago net: phylink: avoid oops during initialisation
Russell King [Fri, 28 Aug 2020 10:53:53 +0000 (11:53 +0100)]
net: phylink: avoid oops during initialisation

If we intend to use PCS operations, mac_pcs_get_state() will not be
implemented, so it will be NULL. If we also intend to register the PCS
operations in mac_prepare() or mac_config(), then this leads to an
attempt to call a NULL function pointer during phylink_start(). Avoid
this, but we must then report the link as down.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago Merge branch 'hinic-add-debugfs-support'
David S. Miller [Mon, 31 Aug 2020 19:21:27 +0000 (12:21 -0700)]
Merge branch 'hinic-add-debugfs-support'

Luo bin says:

====================
hinic: add debugfs support

add debugfs node for querying sq/rq info and function table
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago hinic: add support to query function table
Luo bin [Fri, 28 Aug 2020 03:37:48 +0000 (11:37 +0800)]
hinic: add support to query function table

add debugfs node for querying function table, for example:
cat /sys/kernel/debug/hinic/0000:15:00.0/func_table/valid

Signed-off-by: Luo bin <luobin9@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago hinic: add support to query rq info
Luo bin [Fri, 28 Aug 2020 03:37:47 +0000 (11:37 +0800)]
hinic: add support to query rq info

add debugfs node for querying rq info, for example:
cat /sys/kernel/debug/hinic/0000:15:00.0/RQs/0x0/rq_hw_pi

Signed-off-by: Luo bin <luobin9@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago hinic: add support to query sq info
Luo bin [Fri, 28 Aug 2020 03:37:46 +0000 (11:37 +0800)]
hinic: add support to query sq info

add debugfs node for querying sq info, for example:
cat /sys/kernel/debug/hinic/0000:15:00.0/SQs/0x0/sq_pi

Signed-off-by: Luo bin <luobin9@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago xsk: Documentation for XDP_SHARED_UMEM between queues and netdevs
Magnus Karlsson [Fri, 28 Aug 2020 08:26:29 +0000 (10:26 +0200)]
xsk: Documentation for XDP_SHARED_UMEM between queues and netdevs

Add documentation for the XDP_SHARED_UMEM feature when a UMEM is
shared between different queues and/or netdevs.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-16-git-send-email-magnus.karlsson@intel.com
3 years ago samples/bpf: Add new sample xsk_fwd.c
Cristian Dumitrescu [Fri, 28 Aug 2020 08:26:28 +0000 (10:26 +0200)]
samples/bpf: Add new sample xsk_fwd.c

This sample code illustrates packet forwarding between multiple
AF_XDP sockets in a multi-threaded environment. All the threads and
sockets are sharing a common buffer pool, with each socket having
its own private buffer cache. The sockets are created with the
xsk_socket__create_shared() function, which allows multiple AF_XDP
sockets to share the same UMEM object.

Example 1: Single thread handling two sockets. Packets received
from socket A (on top of interface IFA, queue QA) are forwarded
to socket B (on top of interface IFB, queue QB) and vice-versa.
The thread is affinitized to CPU core C:

./xsk_fwd -i IFA -q QA -i IFB -q QB -c C

Example 2: Two threads, each handling two sockets. Packets from
socket A are sent to socket B (by thread X), packets
from socket B are sent to socket A (by thread X); packets from
socket C are sent to socket D (by thread Y), packets from socket
D are sent to socket C (by thread Y). The two threads are bound
to CPU cores CX and CY:

./xsk_fwd -i IFA -q QA -i IFB -q QB -i IFC -q QC -i IFD -q QD -c CX -c CY

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-15-git-send-email-magnus.karlsson@intel.com
3 years ago libbpf: Support shared umems between queues and devices
Magnus Karlsson [Fri, 28 Aug 2020 08:26:27 +0000 (10:26 +0200)]
libbpf: Support shared umems between queues and devices

Add support for shared umems between hardware queues and devices to
the AF_XDP part of libbpf. This is so that zero-copy can be achieved in
applications that want to send and receive packets between HW queues
on one device or between different devices/netdevs.

In order to create sockets that share a umem between hardware queues
and devices, a new function has been added called
xsk_socket__create_shared(). It takes the same arguments as
xsk_socket_create() plus references to a fill ring and a completion
ring. So for every socket that share a umem, you need to have one more
set of fill and completion rings. This in order to maintain the
single-producer single-consumer semantics of the rings.

You can create all the sockets via the new xsk_socket__create_shared()
call, or create the first one with xsk_socket__create() and the rest
with xsk_socket__create_shared(). Both methods work.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-14-git-send-email-magnus.karlsson@intel.com
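For illustration, creating a second socket that shares an already-created umem might look roughly like this (a sketch: 'umem' and the config are assumed to have been set up earlier with xsk_umem__create() and a struct xsk_socket_config, and the interface name and queue id are placeholders):

        #include <stdio.h>
        #include <bpf/xsk.h>

        /* sketch: add a second socket on queue 1 of "eth1" that reuses 'umem' */
        static struct xsk_socket *add_shared_socket(struct xsk_umem *umem,
                                                    const struct xsk_socket_config *cfg,
                                                    struct xsk_ring_cons *rx,
                                                    struct xsk_ring_prod *tx,
                                                    struct xsk_ring_prod *fill,
                                                    struct xsk_ring_cons *comp)
        {
                struct xsk_socket *xsk;
                int err;

                /* every socket sharing the umem brings its own fill/completion rings */
                err = xsk_socket__create_shared(&xsk, "eth1", 1 /* queue_id */, umem,
                                                rx, tx, fill, comp, cfg);
                if (err) {
                        fprintf(stderr, "xsk_socket__create_shared: %d\n", err);
                        return NULL;
                }
                return xsk;
        }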
3 years ago xsk: Add shared umem support between devices
Magnus Karlsson [Fri, 28 Aug 2020 08:26:26 +0000 (10:26 +0200)]
xsk: Add shared umem support between devices

Add support to share a umem between different devices. This mode
can be invoked with the XDP_SHARED_UMEM bind flag. Previously,
sharing was only supported within the same device. Note that when
sharing a umem between devices, just as in the case of sharing a
umem between queue ids, you need to create a fill ring and a
completion ring and tie them to the socket (with two setsockopts,
one for each ring) before you do the bind with the
XDP_SHARED_UMEM flag. This is so that the single-producer
single-consumer semantics of the rings can be upheld.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-13-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: Add shared umem support between queue ids
Magnus Karlsson [Fri, 28 Aug 2020 08:26:25 +0000 (10:26 +0200)]
xsk: Add shared umem support between queue ids

Add support to share a umem between queue ids on the same
device. This mode can be invoked with the XDP_SHARED_UMEM bind
flag. Previously, sharing was only supported within the same
queue id and device, and you shared one set of fill and
completion rings. However, note that when sharing a umem between
queue ids, you need to create a fill ring and a completion ring
and tie them to the socket before you do the bind with the
XDP_SHARED_UMEM flag. This is so that the single-producer
single-consumer semantics can be upheld.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-12-git-send-email-magnus.karlsson@intel.com
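At the raw socket level, the sequence described above looks roughly like this (a sketch: the fds, ifindex and queue id are placeholders, and the socket's usual RX/TX ring setup is omitted):

        #include <linux/if_xdp.h>
        #include <string.h>
        #include <sys/socket.h>

        #ifndef SOL_XDP
        #define SOL_XDP 283
        #endif
        #ifndef AF_XDP
        #define AF_XDP 44
        #endif

        /* sketch: bind 'fd' to another queue, reusing the umem registered on 'umem_fd' */
        static int bind_shared(int fd, int umem_fd, int ifindex, __u32 queue_id)
        {
                struct sockaddr_xdp sxdp;
                int entries = 2048;     /* ring sizes must be powers of two */

                /* each sharing socket gets its own fill and completion rings */
                setsockopt(fd, SOL_XDP, XDP_UMEM_FILL_RING, &entries, sizeof(entries));
                setsockopt(fd, SOL_XDP, XDP_UMEM_COMPLETION_RING, &entries, sizeof(entries));

                memset(&sxdp, 0, sizeof(sxdp));
                sxdp.sxdp_family = AF_XDP;
                sxdp.sxdp_ifindex = ifindex;
                sxdp.sxdp_queue_id = queue_id;
                sxdp.sxdp_flags = XDP_SHARED_UMEM;
                sxdp.sxdp_shared_umem_fd = umem_fd;

                return bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp));
        }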
3 years ago xsk: i40e: ice: ixgbe: mlx5: Test for dma_need_sync earlier for better performance
Magnus Karlsson [Fri, 28 Aug 2020 08:26:24 +0000 (10:26 +0200)]
xsk: i40e: ice: ixgbe: mlx5: Test for dma_need_sync earlier for better performance

Test for dma_need_sync earlier to increase
performance. xsk_buff_dma_sync_for_cpu() takes an xdp_buff as
parameter and from that the xsk_buff_pool reference is dug out. Perf
shows that this dereference causes a lot of cache misses. But as the
buffer pool is now sent down to the driver at zero-copy initialization
time, we might as well use this pointer directly, instead of going via
the xsk_buff and we can do so already in xsk_buff_dma_sync_for_cpu()
instead of in xp_dma_sync_for_cpu. This gets rid of these cache
misses.

Throughput increases by 3% for the xdpsock l2fwd sample application
on my machine.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-11-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: Rearrange internal structs for better performance
Magnus Karlsson [Fri, 28 Aug 2020 08:26:23 +0000 (10:26 +0200)]
xsk: Rearrange internal structs for better performance

Rearrange the xdp_sock, xdp_umem and xsk_buff_pool structures so
that they get smaller and align better to the cache lines. In the
previous commits of this patch set, these structs have been
reordered with the focus on functionality and simplicity, not
performance. This patch improves throughput performance by around
3%.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-10-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: Enable sharing of dma mappings
Magnus Karlsson [Fri, 28 Aug 2020 08:26:22 +0000 (10:26 +0200)]
xsk: Enable sharing of dma mappings

Enable the sharing of dma mappings by moving them out from the buffer
pool. Instead we put each dma mapped umem region in a list in the umem
structure. If dma has already been mapped for this umem and device, it
is not mapped again and the existing dma mappings are reused.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-9-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: Move addrs from buffer pool to umem
Magnus Karlsson [Fri, 28 Aug 2020 08:26:21 +0000 (10:26 +0200)]
xsk: Move addrs from buffer pool to umem

Replicate the addrs pointer in the buffer pool to the umem. This mapping
will be the same for all buffer pools sharing the same umem. In the
buffer pool we leave the addrs pointer for performance reasons.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-8-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: Move xsk_tx_list and its lock to buffer pool
Magnus Karlsson [Fri, 28 Aug 2020 08:26:20 +0000 (10:26 +0200)]
xsk: Move xsk_tx_list and its lock to buffer pool

Move the xsk_tx_list and the xsk_tx_list_lock from the umem to
the buffer pool. This is so that, in a later commit, we can share the
umem between multiple HW queues. There is one xsk_tx_list per
device and queue id, so it should be located in the buffer pool.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-7-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: Move queue_id, dev and need_wakeup to buffer pool
Magnus Karlsson [Fri, 28 Aug 2020 08:26:19 +0000 (10:26 +0200)]
xsk: Move queue_id, dev and need_wakeup to buffer pool

Move queue_id, dev, and need_wakeup from the umem to the
buffer pool. This is so that, in a later commit, we can share the umem
between multiple HW queues. There is one buffer pool per dev and
queue id, so these variables should belong to the buffer pool, not
the umem. Need_wakeup is also something that is set on a per napi
level, so there is usually one per device and queue id. So move
this to the buffer pool too.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-6-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: Move fill and completion rings to buffer pool
Magnus Karlsson [Fri, 28 Aug 2020 08:26:18 +0000 (10:26 +0200)]
xsk: Move fill and completion rings to buffer pool

Move the fill and completion rings from the umem to the buffer
pool. This is so that, in a later commit, we can share the umem
between multiple HW queue ids. In this case, we need one fill and
completion ring per queue id. As the buffer pool is per queue id
and napi id, this is a natural place for it, and one umem
structure can be shared between these buffer pools.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-5-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: Create and free buffer pool independently from umem
Magnus Karlsson [Fri, 28 Aug 2020 08:26:17 +0000 (10:26 +0200)]
xsk: Create and free buffer pool independently from umem

Create and free the buffer pool independently from the umem. Move
these operations that are performed on the buffer pool from the
umem create and destroy functions to new create and destroy
functions just for the buffer pool. This is so that in later commits
we can instantiate multiple buffer pools per umem when sharing a
umem between HW queues and/or devices. We also eradicate the
back pointer from the umem to the buffer pool as this will not
work when we introduce the possibility to have multiple buffer
pools per umem.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-4-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: i40e: ice: ixgbe: mlx5: Rename xsk zero-copy driver interfaces
Magnus Karlsson [Fri, 28 Aug 2020 08:26:16 +0000 (10:26 +0200)]
xsk: i40e: ice: ixgbe: mlx5: Rename xsk zero-copy driver interfaces

Rename the AF_XDP zero-copy driver interface functions to better
reflect what they do after the replacement of umems with buffer
pools in the previous commit. Mostly it is about replacing the
umem name from the function names with xsk_buff and also have
them take the a buffer pool pointer instead of a umem. The
various ring functions have also been renamed in the process so
that they have the same naming convention as the internal
functions in xsk_queue.h. This so that it will be clearer what
they do and also for consistency.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-3-git-send-email-magnus.karlsson@intel.com
3 years ago xsk: i40e: ice: ixgbe: mlx5: Pass buffer pool to driver instead of umem
Magnus Karlsson [Fri, 28 Aug 2020 08:26:15 +0000 (10:26 +0200)]
xsk: i40e: ice: ixgbe: mlx5: Pass buffer pool to driver instead of umem

Replace the explicit umem reference passed to the driver in AF_XDP
zero-copy mode with the buffer pool instead. This is in preparation for
extending the functionality of the zero-copy mode so that umems can be
shared between queues on the same netdev and also between netdevs. In
this commit, only an umem reference has been added to the buffer pool
struct. But later commits will add other entities to it. These are
going to be entities that are different between different queue ids
and netdevs even though the umem is shared between them.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-2-git-send-email-magnus.karlsson@intel.com
3 years ago netlink: policy: correct validation type check
Johannes Berg [Mon, 31 Aug 2020 18:28:05 +0000 (20:28 +0200)]
netlink: policy: correct validation type check

In the policy export for binary attributes I erroneously used
a != NLA_VALIDATE_NONE comparison instead of checking for the
two possible values, which meant that if a validation function
pointer ended up aliasing the min/max as negatives, we'd hit
a warning in nla_get_range_unsigned().

Fix this to correctly check for only the two types that should
be handled here, i.e. range with or without warn-too-long.

Reported-by: syzbot+353df1490da781637624@syzkaller.appspotmail.com
Fixes: 8aa26c575fb3 ("netlink: make NLA_BINARY validation more flexible")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
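For illustration, the corrected check has roughly this shape (a sketch, not the verbatim hunk):

        [...]
        /* only these two validation types carry a usable range for a binary attribute */
        if (pt->validation_type == NLA_VALIDATE_RANGE ||
            pt->validation_type == NLA_VALIDATE_RANGE_WARN_TOO_LONG) {
                struct netlink_range_validation range;

                nla_get_range_unsigned(pt, &range);
                [...]
        }
        [...]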
3 years ago bpf: Fix build without BPF_LSM.
Alexei Starovoitov [Mon, 31 Aug 2020 16:31:32 +0000 (09:31 -0700)]
bpf: Fix build without BPF_LSM.

resolve_btfids doesn't like an empty set. Add an unused ID when BPF_LSM is off.

Fixes: 1e6c62a88215 ("bpf: Introduce sleepable BPF programs")
Reported-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Song Liu <songliubraving@fb.com>
Acked-by: KP Singh <kpsingh@google.com>
Link: https://lore.kernel.org/bpf/20200831163132.66521-1-alexei.starovoitov@gmail.com
3 years ago bpf: Fix build without BPF_SYSCALL, but with BPF_JIT.
Alexei Starovoitov [Mon, 31 Aug 2020 15:51:55 +0000 (08:51 -0700)]
bpf: Fix build without BPF_SYSCALL, but with BPF_JIT.

When CONFIG_BPF_SYSCALL is not set, but CONFIG_BPF_JIT=y
the kernel build fails:
In file included from ../kernel/bpf/trampoline.c:11:
../kernel/bpf/trampoline.c: In function ‘bpf_trampoline_update’:
../kernel/bpf/trampoline.c:220:39: error: ‘call_rcu_tasks_trace’ undeclared
../kernel/bpf/trampoline.c: In function ‘__bpf_prog_enter_sleepable’:
../kernel/bpf/trampoline.c:411:2: error: implicit declaration of function ‘rcu_read_lock_trace’
../kernel/bpf/trampoline.c: In function ‘__bpf_prog_exit_sleepable’:
../kernel/bpf/trampoline.c:416:2: error: implicit declaration of function ‘rcu_read_unlock_trace’

This is due to:
obj-$(CONFIG_BPF_JIT) += trampoline.o
obj-$(CONFIG_BPF_JIT) += dispatcher.o
There are a number of functions that arch/x86/net/bpf_jit_comp.c is
using from these two files, but none of them will be used when
only cBPF is on (which is the case for BPF_SYSCALL=n BPF_JIT=y).

Add rcu_trace functions to rcupdate_trace.h. The JITed code won't execute them
and BPF trampoline logic won't be used without BPF_SYSCALL.

Fixes: 1e6c62a88215 ("bpf: Introduce sleepable BPF programs")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/bpf/20200831155155.62754-1-alexei.starovoitov@gmail.com
3 years ago Merge branch 'bpf-sleepable'
Daniel Borkmann [Fri, 28 Aug 2020 19:20:33 +0000 (21:20 +0200)]
Merge branch 'bpf-sleepable'

Alexei Starovoitov says:

====================
v2->v3:
- switched to a minimal allowlist approach. Essentially that means that syscall
  entry, a few btrfs allow_error_inject functions, should_fail_bio(), and two LSM
  hooks: file_mprotect and bprm_committed_creds are the only hooks that allow
  attaching of sleepable BPF programs. When a comprehensive analysis of LSM hooks
  has been done, this allowlist will be extended.
- added patch 1 that fixes prototypes of two mm functions to reliably work with
  error injection. It's also necessary for resolve_btfids tool to recognize
  these two funcs, but that's secondary.

v1->v2:
- split fmod_ret fix into separate patch
- added denylist

v1:
This patch set introduces the minimal viable support for sleepable bpf programs.
In this patch only fentry/fexit/fmod_ret and lsm progs can be sleepable.
Only array and pre-allocated hash and lru maps allowed.

Here is 'perf report' difference of sleepable vs non-sleepable:
   3.86%  bench     [k] __srcu_read_unlock
   3.22%  bench     [k] __srcu_read_lock
   0.92%  bench     [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
   0.50%  bench     [k] bpf_trampoline_10297
   0.26%  bench     [k] __bpf_prog_exit_sleepable
   0.21%  bench     [k] __bpf_prog_enter_sleepable
vs
   0.88%  bench     [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry
   0.84%  bench     [k] bpf_trampoline_10297
   0.13%  bench     [k] __bpf_prog_enter
   0.12%  bench     [k] __bpf_prog_exit
vs
   0.79%  bench     [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
   0.72%  bench     [k] bpf_trampoline_10381
   0.31%  bench     [k] __bpf_prog_exit_sleepable
   0.29%  bench     [k] __bpf_prog_enter_sleepable

Sleepable vs non-sleepable program invocation overhead is only marginally higher
due to rcu_trace. The srcu approach is much slower.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
3 years ago selftests/bpf: Add sleepable tests
Alexei Starovoitov [Thu, 27 Aug 2020 22:01:14 +0000 (15:01 -0700)]
selftests/bpf: Add sleepable tests

Modify a few tests to sanity-test sleepable bpf functionality.

Running 'bench trig-fentry-sleep' vs 'bench trig-fentry' and 'perf report':
sleepable with SRCU:
   3.86%  bench     [k] __srcu_read_unlock
   3.22%  bench     [k] __srcu_read_lock
   0.92%  bench     [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
   0.50%  bench     [k] bpf_trampoline_10297
   0.26%  bench     [k] __bpf_prog_exit_sleepable
   0.21%  bench     [k] __bpf_prog_enter_sleepable

sleepable with RCU_TRACE:
   0.79%  bench     [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
   0.72%  bench     [k] bpf_trampoline_10381
   0.31%  bench     [k] __bpf_prog_exit_sleepable
   0.29%  bench     [k] __bpf_prog_enter_sleepable

non-sleepable with RCU:
   0.88%  bench     [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry
   0.84%  bench     [k] bpf_trampoline_10297
   0.13%  bench     [k] __bpf_prog_enter
   0.12%  bench     [k] __bpf_prog_exit

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: KP Singh <kpsingh@google.com>
Link: https://lore.kernel.org/bpf/20200827220114.69225-6-alexei.starovoitov@gmail.com
3 years ago libbpf: Support sleepable progs
Alexei Starovoitov [Thu, 27 Aug 2020 22:01:13 +0000 (15:01 -0700)]
libbpf: Support sleepable progs

Pass the request to load a program as sleepable via a ".s" suffix in the section name.
If it happens in the future that all map types and helpers are allowed with
the BPF_F_SLEEPABLE flag, "fmod_ret/" and "lsm/" can be aliased to "fmod_ret.s/" and
"lsm.s/" to make all lsm and fmod_ret programs sleepable by default. The fentry
and fexit programs would always need to have sleepable vs non-sleepable
distinction, since not all fentry/fexit progs will be attached to sleepable
kernel functions.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: KP Singh <kpsingh@google.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200827220114.69225-5-alexei.starovoitov@gmail.com
3 years ago bpf: Add bpf_copy_from_user() helper.
Alexei Starovoitov [Thu, 27 Aug 2020 22:01:12 +0000 (15:01 -0700)]
bpf: Add bpf_copy_from_user() helper.

Sleepable BPF programs can now use copy_from_user() to access user memory.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: KP Singh <kpsingh@google.com>
Link: https://lore.kernel.org/bpf/20200827220114.69225-4-alexei.starovoitov@gmail.com
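Put together with the ".s" section convention from the libbpf patch above, a sleepable program using the new helper might look roughly like this (a sketch: the program name and the user_buf global are illustrative, user_buf is assumed to be filled in with a user-space address before attach, and bprm_committed_creds is one of the hooks allowlisted for sleepable attachment in this series):

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        char LICENSE[] SEC("license") = "GPL";

        void *user_buf;         /* user-space address, set via the skeleton (illustrative) */

        SEC("lsm.s/bprm_committed_creds")       /* ".s" = sleepable */
        int BPF_PROG(exec_hook, struct linux_binprm *bprm)
        {
                char buf[16] = {};

                /* may sleep: can fault the user page in, unlike bpf_probe_read_user() */
                bpf_copy_from_user(buf, sizeof(buf), user_buf);
                return 0;
        }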
3 years ago bpf: Introduce sleepable BPF programs
Alexei Starovoitov [Thu, 27 Aug 2020 22:01:11 +0000 (15:01 -0700)]
bpf: Introduce sleepable BPF programs

Introduce sleepable BPF programs that can request this property for themselves
via the BPF_F_SLEEPABLE flag at program load time. In that case they will be able
to use helpers like bpf_copy_from_user() that might sleep. At present only
fentry/fexit/fmod_ret and lsm programs can request to be sleepable and only
when they are attached to kernel functions that are known to allow sleeping.

The non-sleepable programs are relying on implicit rcu_read_lock() and
migrate_disable() to protect the lifetime of programs, the maps that they use and
per-cpu kernel structures used to pass info between bpf programs and the
kernel. The sleepable programs cannot be enclosed into rcu_read_lock().
migrate_disable() maps to preempt_disable() in non-RT kernels, so the progs
should not be enclosed in migrate_disable() as well. Therefore
rcu_read_lock_trace is used to protect the lifetime of sleepable progs.

There are many networking and tracing program types. In many cases the
'struct bpf_prog *' pointer itself is rcu protected within some other kernel
data structure and the kernel code is using rcu_dereference() to load that
program pointer and call BPF_PROG_RUN() on it. All these cases are not touched.
Instead sleepable bpf programs are allowed with bpf trampoline only. The
program pointers are hard-coded into generated assembly of bpf trampoline and
synchronize_rcu_tasks_trace() is used to protect the lifetime of the program.
The same trampoline can hold both sleepable and non-sleepable progs.

When rcu_read_lock_trace is held it means that some sleepable bpf program is
running from bpf trampoline. Those programs can use bpf arrays and preallocated
hash/lru maps. These map types are waiting on programs to complete via
synchronize_rcu_tasks_trace().

Updates to the trampoline now have to do synchronize_rcu_tasks_trace() and
synchronize_rcu_tasks() to wait for sleepable progs to finish and for
trampoline assembly to finish.

This is the first step of introducing sleepable progs. Eventually dynamically
allocated hash maps can be allowed and networking program types can become
sleepable too.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: KP Singh <kpsingh@google.com>
Link: https://lore.kernel.org/bpf/20200827220114.69225-3-alexei.starovoitov@gmail.com
3 years ago mm/error_inject: Fix allow_error_inject function signatures.
Alexei Starovoitov [Thu, 27 Aug 2020 22:01:10 +0000 (15:01 -0700)]
mm/error_inject: Fix allow_error_inject function signatures.

'static' and 'static noinline' function attributes make no guarantees that
gcc/clang won't optimize them. The compiler may decide to inline 'static'
function and in such case ALLOW_ERROR_INJECT becomes meaningless. The compiler
could have inlined __add_to_page_cache_locked() in one callsite and didn't
inline in another. In such case injecting errors into it would cause
unpredictable behavior. It's worse with 'static noinline' which won't be
inlined, but it can still be optimized. For example, the compiler may decide to
remove one argument or constant-propagate the value depending on the callsite.

To avoid such issues make sure that these functions are global noinline.

Fixes: af3b854492f3 ("mm/page_alloc.c: allow error injection")
Fixes: cfcbfb1382db ("mm/filemap.c: enable error injection at add_to_page_cache()")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Link: https://lore.kernel.org/bpf/20200827220114.69225-2-alexei.starovoitov@gmail.com
3 years ago netlabel: remove unused param from audit_log_format()
Alex Dewar [Fri, 28 Aug 2020 13:55:23 +0000 (14:55 +0100)]
netlabel: remove unused param from audit_log_format()

Commit d3b990b7f327 ("netlabel: fix problems with mapping removal")
added a check that returns an error if ret_val != 0 before ret_val is
later used in a log message; the message will now unconditionally print
"... res=1". So just drop the check.

Addresses-Coverity: ("Dead code")
Fixes: d3b990b7f327 ("netlabel: fix problems with mapping removal")
Signed-off-by: Alex Dewar <alex.dewar90@gmail.com>
Acked-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoMerge branch 'ionic-memory-usage-rework'
David S. Miller [Fri, 28 Aug 2020 15:01:30 +0000 (08:01 -0700)]
Merge branch 'ionic-memory-usage-rework'

Shannon Nelson says:

====================
ionic memory usage rework

Previous review comments [1],[2] have suggested that this driver
needs to rework how queue resources are managed and reconfigured,
so that we don't do a full driver reset and can better handle
potential allocation failures.  This patchset is intended to
address those comments.

The first few patches clean some general issues and
simplify some of the memory structures.  The last 4 patches
specifically address queue parameter changes without a full
ionic_stop()/ionic_open().

[1] https://lore.kernel.org/netdev/20200706103305.182bd727@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/
[2] https://lore.kernel.org/netdev/20200724.194417.2151242753657227232.davem@davemloft.net/

v3: use PTR_ALIGN without typecast
    fix up Neel's attribution

v2: use PTR_ALIGN
    recovery if netif_set_real_num_tx/rx_queues fails
    less racy queue bring up after reconfig
    common-ize the reconfig queue stop and start
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: pull reset_queues into tx_timeout handler
Shannon Nelson [Thu, 27 Aug 2020 23:00:30 +0000 (16:00 -0700)]
ionic: pull reset_queues into tx_timeout handler

Convert tx_timeout handler to not do the full reset.  As this was
the last user of ionic_reset_queues(), we can drop it.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: change queue count with no reset
Shannon Nelson [Thu, 27 Aug 2020 23:00:29 +0000 (16:00 -0700)]
ionic: change queue count with no reset

Extend our new ionic_reconfigure_queues() so that it can also change
the number of queues in use, and change the queue interrupt layout
between split and combined.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: change the descriptor ring length without full reset
Shannon Nelson [Thu, 27 Aug 2020 23:00:28 +0000 (16:00 -0700)]
ionic: change the descriptor ring length without full reset

The original way of changing ring length was to completely
tear down the lif's queue structure and then rebuild it, while
running the risk of allocations that might fail in the middle
and leave us with a broken driver.

Instead, we can set up all the new queue and descriptor
allocations first, then swap them out and delete the old
allocations.  If the new allocations fail, we report the error,
stay with the old setup and continue running.  This gives us
a safer path, and a smaller window of time where we're not
processing traffic.
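
The allocate-first, swap-later flow described above looks roughly like this
(a sketch with placeholder names, not the actual ionic code):

    static int example_set_ring_len(struct example_lif *lif, u32 new_len)
    {
            struct example_rings *new_rings;

            /* build the replacement rings before touching the live ones */
            new_rings = example_rings_alloc(lif, new_len);
            if (IS_ERR(new_rings))
                    return PTR_ERR(new_rings);      /* old setup keeps running */

            example_rings_stop(lif);                /* brief traffic pause */
            swap(lif->rings, new_rings);            /* switch to the new allocation */
            example_rings_start(lif);

            example_rings_free(lif, new_rings);     /* now holds the old rings */
            return 0;
    }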

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: change mtu without full queue rebuild
Shannon Nelson [Thu, 27 Aug 2020 23:00:27 +0000 (16:00 -0700)]
ionic: change mtu without full queue rebuild

We really don't need to tear down and rebuild the whole queue structure
when changing the MTU; we can simply stop the queues, clean and refill,
then restart the queues.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: use index not pointer for queue tracking
Shannon Nelson [Thu, 27 Aug 2020 23:00:26 +0000 (16:00 -0700)]
ionic: use index not pointer for queue tracking

Use index counters rather than pointers for tracking head
and tail in the queues, to save a little memory and perhaps
make queue processing slightly faster.
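
Roughly, the index-based tracking looks like this (a sketch; the field names
are illustrative, not the exact ionic structures):

    struct example_queue {
            u16 head_idx;           /* producer index */
            u16 tail_idx;           /* consumer index */
            u16 num_descs;          /* power of two */
    };

    static inline u16 example_q_next(const struct example_queue *q, u16 idx)
    {
            return (idx + 1) & (q->num_descs - 1);
    }

    static inline u16 example_q_space_avail(const struct example_queue *q)
    {
            return (q->tail_idx - q->head_idx - 1) & (q->num_descs - 1);
    }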

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: reduce contiguous memory allocation requirement
Shannon Nelson [Thu, 27 Aug 2020 23:00:25 +0000 (16:00 -0700)]
ionic: reduce contiguous memory allocation requirement

Split out the queue descriptor blocks into separate dma
allocations to make for smaller blocks.
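
As a sketch of the idea (illustrative names, not the exact hunks), each ring
gets its own coherent DMA allocation rather than being carved out of one
large block:

    /* one dma_alloc_coherent() per descriptor ring instead of a
     * single large block covering all rings
     */
    q_base = dma_alloc_coherent(dev, q_size, &q_base_pa, GFP_KERNEL);
    if (!q_base)
            return -ENOMEM;

    cq_base = dma_alloc_coherent(dev, cq_size, &cq_base_pa, GFP_KERNEL);
    if (!cq_base) {
            dma_free_coherent(dev, q_size, q_base, q_base_pa);
            return -ENOMEM;
    }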

Co-developed-by: Neel Patel <neel@pensando.io>
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: clean up unnecessary non-static functions
Shannon Nelson [Thu, 27 Aug 2020 23:00:24 +0000 (16:00 -0700)]
ionic: clean up unnecessary non-static functions

ionic_open() and ionic_stop() are not referenced outside of their
defining file, so make them static.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: rework and simplify handling of the queue stats block
Shannon Nelson [Thu, 27 Aug 2020 23:00:23 +0000 (16:00 -0700)]
ionic: rework and simplify handling of the queue stats block

Use a block of stats structs attached to the lif instead of
little ones attached to each qcq.  This simplifies our memory
management and gets rid of a lot of unnecessary indirection.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: remove lif list concept
Shannon Nelson [Thu, 27 Aug 2020 23:00:22 +0000 (16:00 -0700)]
ionic: remove lif list concept

As we aren't yet supporting multiple lifs, we can remove
complexity by removing the list concept and related code,
to be re-engineered later when actually needed.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: use kcalloc for new arrays
Shannon Nelson [Thu, 27 Aug 2020 23:00:21 +0000 (16:00 -0700)]
ionic: use kcalloc for new arrays

Use kcalloc for allocating arrays of structures.

Following along after
commit e71642009cbd ("ionic_lif: Use devm_kcalloc() in ionic_qcq_alloc()")
there are a couple more array allocations that can be converted
to using devm_kcalloc().
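
A representative conversion looks roughly like this (the array and count
names are illustrative):

    /* overflow-checked, device-managed array allocation */
    lif->txqcqs = devm_kcalloc(dev, lif->ionic->ntxqs_per_lif,
                               sizeof(*lif->txqcqs), GFP_KERNEL);
    if (!lif->txqcqs)
            return -ENOMEM;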

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: fix up a couple of debug strings
Shannon Nelson [Thu, 27 Aug 2020 23:00:20 +0000 (16:00 -0700)]
ionic: fix up a couple of debug strings

Fix the queue name displayed.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: set MTU floor at ETH_MIN_MTU
Shannon Nelson [Thu, 27 Aug 2020 23:00:19 +0000 (16:00 -0700)]
ionic: set MTU floor at ETH_MIN_MTU

The NIC might tell us its minimum MTU, but let's be sure not
to use something smaller than ETH_MIN_MTU.
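
A minimal sketch of the clamp (the identity field path is illustrative):

    /* never advertise a minimum below the 68-byte ETH_MIN_MTU floor */
    netdev->min_mtu = max_t(unsigned int, ETH_MIN_MTU,
                            le32_to_cpu(ident->eth.min_frame_size));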

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoMerge branch 'Enable-Fiber-on-DP83822-PHY'
David S. Miller [Fri, 28 Aug 2020 13:57:58 +0000 (06:57 -0700)]
Merge branch 'Enable-Fiber-on-DP83822-PHY'

Dan Murphy says:

====================
Enable Fiber on DP83822 PHY

The DP83822 Ethernet PHY has the ability to connect via a Fiber port.  The
derivative PHYs DP83825 and DP83826 do not have this ability. In fiber mode
the DP83822 disables auto negotiation and has a fixed 100Mbps speed with
support for full or half duplex modes.

A devicetree binding was added to set the signal polarity for the fiber
connection.  This property is only applicable if the FX_EN strap is set in
hardware; otherwise, signal loss detection is disabled on the PHY.

If FX_EN is not strapped, the device can still be configured to run in fiber
mode via the device tree, albeit without signal loss detection.

v2 review from a long time ago can be found here - https://lore.kernel.org/patchwork/patch/1270958/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: phy: DP83822: Add ability to advertise Fiber connection
Dan Murphy [Thu, 27 Aug 2020 13:45:09 +0000 (08:45 -0500)]
net: phy: DP83822: Add ability to advertise Fiber connection

The DP83822 can be configured to use a Fiber connection.  The strap
register is read to determine if the device has been configured to use
a fiber connection.  With the fiber connection the PHY can be configured
to detect whether the fiber connection is active by either a high signal
or a low signal.

Fiber mode is only applicable to the DP83822 so rework the PHY match
table so that non-fiber PHYs can still use the same driver but not call
or use any of the fiber features.

Signed-off-by: Dan Murphy <dmurphy@ti.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodt-bindings: net: dp83822: Add TI dp83822 phy
Dan Murphy [Thu, 27 Aug 2020 13:45:08 +0000 (08:45 -0500)]
dt-bindings: net: dp83822: Add TI dp83822 phy

Add a dt binding for the TI dp83822 ethernet phy device.

Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Dan Murphy <dmurphy@ti.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: add option to not create fall-back tunnels in root-ns as well
Mahesh Bandewar [Wed, 26 Aug 2020 16:05:35 +0000 (09:05 -0700)]
net: add option to not create fall-back tunnels in root-ns as well

The sysctl added earlier by commit 79134e6ce2c ("net: do not create
fallback tunnels for non-default namespaces") allows fallback tunnels to
be created only in the root netns. This patch enhances that behavior to
also provide an option not to create fallback tunnels in the root netns.
Since the modules that create fallback tunnels can be built-in, and setting
the sysctl value after booting is pointless, a kernel command-line option
is added to change this default. The default setting is preserved for
backward compatibility. The kernel command-line option fb_tunnels=initns
will set the sysctl value to 1 and create fallback tunnels only in the init
netns, while fb_tunnels=none will set the sysctl value to 2 so that
fallback tunnels are skipped in every netns.
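
For reference, assuming the sysctl knob introduced by the earlier commit
referenced above (net.core.fb_tunnels_only_for_init_net), the mapping is
roughly:

    fb_tunnels=initns   ->  net.core.fb_tunnels_only_for_init_net = 1  (init netns only)
    fb_tunnels=none     ->  net.core.fb_tunnels_only_for_init_net = 2  (no fallback tunnels)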

Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Maciej Zenczykowski <maze@google.com>
Cc: Jian Yang <jianyang@google.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoMerge branch 'Add-phylib-support-to-smsc95xx'
David S. Miller [Fri, 28 Aug 2020 13:48:44 +0000 (06:48 -0700)]
Merge branch 'Add-phylib-support-to-smsc95xx'

Andre Edich says:

====================
Add phylib support to smsc95xx

To allow external PHY drivers to be probed, this patch series adds
phylib support to the smsc95xx driver.

Changes in v5:
- Removed all phy_read calls from the smsc95xx driver.

Changes in v4:
- Removed useless inline type qualifier.

Changes in v3:
- Moved all MDI-X functionality to the corresponding phy driver;
- Removed field internal_phy from a struct smsc95xx_priv;
- Initialized field is_internal of a struct phy_device;
- Kconfig: Added selection of PHYLIB and SMSC_PHY for USB_NET_SMSC95XX.

Changes in v2:
- Moved 'net' patches from here to the separate patch series;
- Removed redundant call of the phy_start_aneg after phy_start;
- Removed netif_dbg tracing "speed, duplex, lcladv, and rmtadv";
- mdiobus: added dependency from the usbnet device;
- Moved making of the MII address from 'phy_id' and 'idx' into the
  function mii_address;
- Moved direct MDIO accesses under the condition 'if (pdata->internal_phy)',
  as they are only needed for the internal PHY;
- To make sure that this set of patches is git-bisectable, tested each
  subset of patches to be functional for both internal and external
  PHYs, including a suspend/resume test on the devices
  (5.7.8-1-ARCH, Raspberry Pi 3 Model B).
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agosmsc95xx: add phylib support
Andre Edich [Wed, 26 Aug 2020 11:17:17 +0000 (13:17 +0200)]
smsc95xx: add phylib support

Generally, each PHY has its own configuration, which can be done
through an external PHY driver.  The smsc95xx driver uses only the
hard-coded internal PHY configuration.

This patch adds phylib support to probe external PHY drivers for
configuring external PHYs.

The MDI-X configuration for the internal PHYs moves from
drivers/net/usb/smsc95xx.c to drivers/net/phy/smsc.c.

Signed-off-by: Andre Edich <andre.edich@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agosmsc95xx: use usbnet->driver_priv
Andre Edich [Wed, 26 Aug 2020 11:17:16 +0000 (13:17 +0200)]
smsc95xx: use usbnet->driver_priv

Using `void *driver_priv` instead of `unsigned long data[]` is a more
straightforward way to recover the `struct smsc95xx_priv *` from the
`struct net_device *`.
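
Sketched (not the exact hunk), the accessor changes roughly like this:

    struct usbnet *dev = netdev_priv(net);
    struct smsc95xx_priv *pdata;

    /* before: priv pointer squeezed into the usbnet data[] scratch words */
    pdata = (struct smsc95xx_priv *)dev->data[0];

    /* after: a dedicated, typed pointer on struct usbnet */
    pdata = dev->driver_priv;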

Signed-off-by: Andre Edich <andre.edich@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agosmsc95xx: remove redundant function arguments
Andre Edich [Wed, 26 Aug 2020 11:17:15 +0000 (13:17 +0200)]
smsc95xx: remove redundant function arguments

This patch removes arguments netdev and phy_id from the functions
smsc95xx_mdio_read_nopm and smsc95xx_mdio_write_nopm.  Both removed
arguments are recovered from a new argument `struct usbnet *dev`.

Signed-off-by: Andre Edich <andre.edich@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agobpf: selftests: Add test for different inner map size
Martin KaFai Lau [Fri, 28 Aug 2020 01:18:19 +0000 (18:18 -0700)]
bpf: selftests: Add test for different inner map size

This patch tests that the inner map size can be different
for reuseport_sockarray but has to be the same for
arraymap.  A new subtest "diff_size" is added for this.

The existing test is moved to a subtest "lookup_update".

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200828011819.1970825-1-kafai@fb.com
3 years agobpf: Relax max_entries check for most of the inner map types
Martin KaFai Lau [Fri, 28 Aug 2020 01:18:13 +0000 (18:18 -0700)]
bpf: Relax max_entries check for most of the inner map types

Most maps do not use max_entries at verification time.
Thus, their map_meta_equal() does not need to enforce max_entries
when a map is inserted as an inner map at runtime.  The max_entries
check is removed from the default implementation bpf_map_meta_equal().

The prog_array_map and xsk_map are exceptions: their map_gen_lookup
uses max_entries to generate inline lookup code, so they implement
their own map_meta_equal() to enforce max_entries.
Since there are only two such cases now, the max_entries check
is not refactored and stays in each map's own .c file.
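
Sketched, the relaxed default check looks roughly like this (see the patch
for the exact implementation):

    bool bpf_map_meta_equal(const struct bpf_map *meta0,
                            const struct bpf_map *meta1)
    {
            /* no max_entries comparison for the common case */
            return meta0->map_type == meta1->map_type &&
                   meta0->key_size == meta1->key_size &&
                   meta0->value_size == meta1->value_size &&
                   meta0->map_flags == meta1->map_flags;
    }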

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200828011813.1970516-1-kafai@fb.com
3 years agobpf: Add map_meta_equal map ops
Martin KaFai Lau [Fri, 28 Aug 2020 01:18:06 +0000 (18:18 -0700)]
bpf: Add map_meta_equal map ops

Some properties of the inner map are used at verification time.
When an inner map is inserted into an outer map at runtime,
bpf_map_meta_equal() is currently used to ensure those properties
of the inserted inner map stay the same as at verification
time.

In particular, the current bpf_map_meta_equal() checks max_entries, which
turns out to be too restrictive for most of the maps that do not use
max_entries during verification.  It limits the use case of replacing a
smaller inner map with a larger inner map.  There are some maps that do
use max_entries during verification though.  For example, the
map_gen_lookup in array_map_ops uses max_entries to generate the inline
lookup code.

To accommodate differences between maps, a map_meta_equal op is added
to bpf_map_ops.  Each map type can decide what to check when its
map is used as an inner map at runtime.

Also, some map types cannot be used as an inner map and they are
currently blacklisted in bpf_map_meta_alloc() in map_in_map.c.
It is not unusual that new map types may not be aware that such a
blacklist exists.  This patch enforces an explicit opt-in
and only allows a map to be used as an inner map if it has
implemented the map_meta_equal ops.  It is based on the
discussion in [1].
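
As an illustration of the opt-in (shown against a hypothetical map type,
not one of the converted ones):

    /* A map type that supports being used as an inner map wires up
     * .map_meta_equal; types that leave it NULL are rejected by
     * bpf_map_meta_alloc().  Most converted types simply point at the
     * generic helper.
     */
    const struct bpf_map_ops example_map_ops = {
            .map_meta_equal = bpf_map_meta_equal,
            /* ...alloc/free/lookup/update ops... */
    };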

All maps that support being used as an inner map have their map_meta_equal
pointing to bpf_map_meta_equal in this patch.  A later patch will
relax the max_entries check for most maps.  bpf_types.h
counts 28 map types.  This patch adds 23 ".map_meta_equal"
entries by using coccinelle.  The remaining 5 are:
BPF_MAP_TYPE_PROG_ARRAY
BPF_MAP_TYPE_(PERCPU)_CGROUP_STORAGE
BPF_MAP_TYPE_STRUCT_OPS
BPF_MAP_TYPE_ARRAY_OF_MAPS
BPF_MAP_TYPE_HASH_OF_MAPS

The "if (inner_map->inner_map_meta)" check in bpf_map_meta_alloc()
is moved such that the same error is returned.

[1]: https://lore.kernel.org/bpf/20200522022342.899756-1-kafai@fb.com/

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200828011806.1970400-1-kafai@fb.com