linux-2.6-microblaze.git
3 years agoMerge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox...
Jason Gunthorpe [Mon, 12 Apr 2021 16:49:48 +0000 (13:49 -0300)]
Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux

Saeed Mahameed says:
====================

This PR contains changes from the mlx5-next branch,
already reviewed on netdev and rdma mailing lists, links below.

1) From Leon, Dynamically assign MSI-X vectors count
Already Acked by Bjorn Helgaas.
https://patchwork.kernel.org/project/netdevbpf/cover/20210314124256.70253-1-leon@kernel.org/

2) Cleanup series:
https://patchwork.kernel.org/project/netdevbpf/cover/20210311070915.321814-1-saeed@kernel.org/

From Mark, E-Switch cleanups and refactoring, and the addition
of the HW bits needed for single FDB mode.

From Mikhael, Remove unused struct field

From Saeed, Cleanup W=1 prototype warning

From Zheng, E-Switch related cleanup

From Tariq, Use order-0 page allocation for EQs

====================

* mlx5-next:
  net/mlx5: Implement sriov_get_vf_total_msix/count() callbacks
  net/mlx5: Dynamically assign MSI-X vectors count
  net/mlx5: Add dynamic MSI-X capabilities bits
  PCI/IOV: Add sysfs MSI-X vector assignment interface
  net/mlx5: Use order-0 allocations for EQs
  net/mlx5: Add IFC bits needed for single FDB mode
  net/mlx5: E-Switch, Refactor send to vport to be more generic
  RDMA/mlx5: Use representor E-Switch when getting netdev and metadata
  net/mlx5: E-Switch, Add eswitch pointer to each representor
  net/mlx5: E-Switch, Add match on vhca id to default send rules
  net/mlx5: Remove unused mlx5_core_health member recover_work
  net/mlx5: simplify the return expression of mlx5_esw_offloads_pair()
  net/mlx5: Cleanup prototype warning

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Prevent le32 from being implicitly converted to u32
Lang Cheng [Fri, 2 Apr 2021 09:07:34 +0000 (17:07 +0800)]
RDMA/hns: Prevent le32 from being implicitly converted to u32

Replace BUILD_BUG_ON_ZERO() with BUILD_BUG_ON() to avoid sparse
complaining "restricted __le32 degrades to integer".

Link: https://lore.kernel.org/r/1617354454-47840-10-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Simplify the function config_eqc()
Yixing Liu [Fri, 2 Apr 2021 09:07:33 +0000 (17:07 +0800)]
RDMA/hns: Simplify the function config_eqc()

Use "hr_reg_write" replace "roce_set_filed".

Link: https://lore.kernel.org/r/1617354454-47840-9-git-send-email-liweihang@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Add XRC subtype in QPC and XRC type in SRQC
Wenpeng Liang [Fri, 2 Apr 2021 09:07:32 +0000 (17:07 +0800)]
RDMA/hns: Add XRC subtype in QPC and XRC type in SRQC

The field in SRQC that distinguishes a basic SRQ from an XRC SRQ and the
field in QPC that determines whether a QP is an XRC TGT QP or an XRC INI
QP are missing.

Fixes: 32548870d438 ("RDMA/hns: Add support for XRC on HIP09")
Link: https://lore.kernel.org/r/1617354454-47840-8-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Remove unsupported QP types
Wenpeng Liang [Fri, 2 Apr 2021 09:07:31 +0000 (17:07 +0800)]
RDMA/hns: Remove unsupported QP types

The hns ROCEE does not currently support UC QPs.

Link: https://lore.kernel.org/r/1617354454-47840-7-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Delete unused members in the structure hns_roce_hw
Yangyang Li [Fri, 2 Apr 2021 09:07:30 +0000 (17:07 +0800)]
RDMA/hns: Delete unused members in the structure hns_roce_hw

Some structure members in hns_roce_hw have never been used and need to be
deleted.

Fixes: 9a4435375cd1 ("IB/hns: Add driver files for hns RoCE driver")
Fixes: b156269d88e4 ("RDMA/hns: Add modify CQ support for hip08")
Fixes: c7bcb13442e1 ("RDMA/hns: Add SRQ support for hip08 kernel mode")
Link: https://lore.kernel.org/r/1617354454-47840-6-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Delete redundant abnormal interrupt status
Wenpeng Liang [Fri, 2 Apr 2021 09:07:29 +0000 (17:07 +0800)]
RDMA/hns: Delete redundant abnormal interrupt status

The hardware supports only two types of abnormal interrupts.

Fixes: a5073d6054f7 ("RDMA/hns: Add eq support of hip08")
Link: https://lore.kernel.org/r/1617354454-47840-5-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Delete redundant condition judgment related to eq
Yangyang Li [Fri, 2 Apr 2021 09:07:28 +0000 (17:07 +0800)]
RDMA/hns: Delete redundant condition judgment related to eq

The register value related to the eq interrupt depends only on
enable_flag, so the redundant condition judgment is deleted.

Fixes: a5073d6054f7 ("RDMA/hns: Add eq support of hip08")
Link: https://lore.kernel.org/r/1617354454-47840-4-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Fix missing assignment of max_inline_data
Weihang Li [Fri, 2 Apr 2021 09:07:27 +0000 (17:07 +0800)]
RDMA/hns: Fix missing assignment of max_inline_data

When querying a QP, the ULPs should be informed of the max length of
inline data supported by the hardware.
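
A minimal sketch of the kind of fix (the field names are assumptions
based on the description above):

        /* in the query_qp path, report the HW inline limit to the ULP */
        qp_attr->cap.max_inline_data = hr_qp->max_inline_data;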

Fixes: 30b707886aeb ("RDMA/hns: Support inline data in extented sge space for RC")
Link: https://lore.kernel.org/r/1617354454-47840-3-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Avoid enabling RQ inline on UD
Weihang Li [Fri, 2 Apr 2021 09:07:26 +0000 (17:07 +0800)]
RDMA/hns: Avoid enabling RQ inline on UD

RQ inline is not supported on UD/GSI QPs, so it should be disabled in QPC.

Link: https://lore.kernel.org/r/1617354454-47840-2-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Modify prints for mailbox and command queue
Wenpeng Liang [Thu, 1 Apr 2021 07:32:21 +0000 (15:32 +0800)]
RDMA/hns: Modify prints for mailbox and command queue

Use rate-limited prints in mbox and cmq, and print the mailbox operation
when a mailbox command fails, because it is useful information for the user.

Link: https://lore.kernel.org/r/1617262341-37571-4-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Support more return types of command queue
Lang Cheng [Thu, 1 Apr 2021 07:32:20 +0000 (15:32 +0800)]
RDMA/hns: Support more return types of command queue

Add error code definition according to the return code from firmware to
help find out more detailed reasons why a command fails to be sent.

Link: https://lore.kernel.org/r/1617262341-37571-3-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Enable all CMDQ context
Lang Cheng [Thu, 1 Apr 2021 07:32:19 +0000 (15:32 +0800)]
RDMA/hns: Enable all CMDQ context

Fix an error in the cmd context number calculation algorithm to enable
all 32 cmd entries and support 32 concurrent accesses.

Link: https://lore.kernel.org/r/1617262341-37571-2-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rxe: Fix missing acks from responder
Bob Pearson [Fri, 2 Apr 2021 00:10:17 +0000 (19:10 -0500)]
RDMA/rxe: Fix missing acks from responder

For RC QPs, responder errors from request packets that do not consume a
receive WQE fail to generate ACKs. This patch corrects this behavior by
making the flow follow the same path as request packets that do consume
a WQE after the completion.

Link: https://lore.kernel.org/r/20210402001016.3210-1-rpearson@hpe.com
Link: https://lore.kernel.org/linux-rdma/1a7286ac-bcea-40fb-2267-480134dd301b@gmail.com/
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/core: Make the wc status prompt message clearer
Yixian Liu [Tue, 6 Apr 2021 08:46:12 +0000 (16:46 +0800)]
RDMA/core: Make the wc status prompt message clearer

Local invalidate is also a kind of memory management operation, not only
a memory bind operation. Furthermore, since invalidate operations include
both local and remote ones, add a prefix to the prompt message to make it
clearer.

Link: https://lore.kernel.org/r/1617698772-13871-1-git-send-email-liweihang@huawei.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Use GFP_ATOMIC under spin lock
Wei Yongjun [Wed, 7 Apr 2021 15:49:00 +0000 (15:49 +0000)]
RDMA/hns: Use GFP_ATOMIC under spin lock

A spin lock is taken here so we should use GFP_ATOMIC.
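
The general pattern, as a sketch (the lock and allocation are
illustrative, not the driver's exact code):

        spin_lock(&table->lock);
        /* GFP_KERNEL may sleep to reclaim memory; sleeping while holding
         * a spinlock is a bug, so GFP_ATOMIC is required here */
        entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
        spin_unlock(&table->lock);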

Fixes: f91696f2f053 ("RDMA/hns: Support congestion control type selection according to the FW")
Link: https://lore.kernel.org/r/20210407154900.3486268-1-weiyongjun1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/mlx5: Reduce max order of memory allocated for xlt update
Praveen Kumar Kannoju [Sat, 3 Apr 2021 04:53:55 +0000 (04:53 +0000)]
IB/mlx5: Reduce max order of memory allocated for xlt update

To update xlt (during mlx5_ib_reg_user_mr()), the driver can request up to
1 MB (order-8) memory, depending on the size of the MR. This costly
allocation can sometimes take very long to return (a few seconds). This
causes the calling application to hang for a long time, especially when
the system is fragmented. To avoid these long latency spikes, the
higher-order allocations need to fail faster when they are not available.

To achieve this, add the __GFP_NORETRY flag to the gfp_mask before
fetching the free pages, and allow the algorithm to automatically fall
back to smaller page sizes.
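
A sketch of the fail-fast fall-back idea (not the exact driver code):

        struct page *pages = NULL;
        gfp_t gfp = GFP_KERNEL | __GFP_NORETRY;
        int order;

        /* try the largest order first; __GFP_NORETRY makes a failed
         * attempt return quickly instead of retrying reclaim, so the
         * loop can fall back to a smaller order */
        for (order = get_order(size); order >= 0; order--) {
                pages = alloc_pages(gfp, order);
                if (pages)
                        break;
        }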

Link: https://lore.kernel.org/r/1617425635-35631-1-git-send-email-praveen.kannoju@oracle.com
Signed-off-by: Praveen Kumar Kannoju <praveen.kannoju@oracle.com>
Acked-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hfi1: Remove unused function
Kaike Wan [Mon, 29 Mar 2021 14:06:31 +0000 (10:06 -0400)]
IB/hfi1: Remove unused function

Remove the unused function sdma_iowait_schedule().

Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Link: https://lore.kernel.org/r/1617026791-89997-1-git-send-email-dennis.dalessandro@cornelisnetworks.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hfi1: Use kzalloc() for mmu_rb_handler allocation
Mike Marciniszyn [Mon, 29 Mar 2021 13:54:14 +0000 (09:54 -0400)]
IB/hfi1: Use kzalloc() for mmu_rb_handler allocation

The code currently assumes that the mmu_notifier struct
embedded in mmu_rb_handler only contains two fields.

There are now extra fields:

struct mmu_notifier {
        struct hlist_node hlist;
        const struct mmu_notifier_ops *ops;
        struct mm_struct *mm;
        struct rcu_head rcu;
        unsigned int users;
};

Given that there is no init for the mmu_notifier, kzalloc() should
be used to ensure that any newly added fields are given a predictable
initial value of zero.
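
In other words, a minimal sketch of the change:

        /* before: fields beyond the explicitly set ones hold garbage */
        handler = kmalloc(sizeof(*handler), GFP_KERNEL);

        /* after: every field, including ones added later, starts at zero */
        handler = kzalloc(sizeof(*handler), GFP_KERNEL);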

Fixes: 06e0ffa69312 ("IB/hfi1: Re-factor MMU notification code")
Link: https://lore.kernel.org/r/1617026056-50483-9-git-send-email-dennis.dalessandro@cornelisnetworks.com
Reviewed-by: Adam Goldman <adam.goldman@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hfi1: Add additional usdma traces
Mike Marciniszyn [Mon, 29 Mar 2021 13:54:13 +0000 (09:54 -0400)]
IB/hfi1: Add additional usdma traces

Add traces that were vital in isolating an issue with pq waitlist in
commit fa8dac396863 ("IB/hfi1: Fix another case where pq is left on
waitlist")

Link: https://lore.kernel.org/r/1617026056-50483-8-git-send-email-dennis.dalessandro@cornelisnetworks.com
Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hfi1: Remove indirect call to hfi1_ipoib_send_dma()
Mike Marciniszyn [Mon, 29 Mar 2021 13:54:11 +0000 (09:54 -0400)]
IB/hfi1: Remove indirect call to hfi1_ipoib_send_dma()

hfi1_ipoib_send() directly calls hfi1_ipoib_send_dma() with no value add.

Fix by renaming hfi1_ipoib_send_dma() to hfi1_ipoib_send().

Link: https://lore.kernel.org/r/1617026056-50483-6-git-send-email-dennis.dalessandro@cornelisnetworks.com
Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hfi1: Use napi_schedule_irqoff() for tx napi
Mike Marciniszyn [Mon, 29 Mar 2021 13:54:10 +0000 (09:54 -0400)]
IB/hfi1: Use napi_schedule_irqoff() for tx napi

The call is from an ISR context and napi_schedule_irqoff() can be used.

Change the call to the more efficient type.
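
A sketch of the pattern (the handler name and wiring are assumptions):

        static irqreturn_t hfi1_tx_isr(int irq, void *data)
        {
                struct napi_struct *napi = data;

                /* interrupts are already disabled in hard-IRQ context,
                 * so the irqoff variant skips local_irq_save/restore */
                napi_schedule_irqoff(napi);
                return IRQ_HANDLED;
        }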

Link: https://lore.kernel.org/r/1617026056-50483-5-git-send-email-dennis.dalessandro@cornelisnetworks.com
Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hfi1: Correct oversized ring allocation
Mike Marciniszyn [Mon, 29 Mar 2021 13:54:09 +0000 (09:54 -0400)]
IB/hfi1: Correct oversized ring allocation

The tx completion ring is sized using the wrong value, oversizing the
ring by two orders of magnitude.

Correct the allocation size and use kcalloc_node() to allocate the ring.
Fix mistaken GFP defines in similar allocations.
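
A sketch of the corrected allocation (the names are illustrative):

        /* size by entry count times element size, zeroed and NUMA-local;
         * kcalloc_node() also guards against multiplication overflow */
        tx_ring->items = kcalloc_node(ring_size, sizeof(*tx_ring->items),
                                      GFP_KERNEL, node);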

Link: https://lore.kernel.org/r/1617026056-50483-4-git-send-email-dennis.dalessandro@cornelisnetworks.com
Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/{ipoib,hfi1}: Add a timeout handler for rdma_netdev
Mike Marciniszyn [Mon, 29 Mar 2021 13:54:08 +0000 (09:54 -0400)]
IB/{ipoib,hfi1}: Add a timeout handler for rdma_netdev

The current rdma_netdev handling in ipoib hooks the tx_timeout handler,
but prints out a totally useless message that prevents effective debugging
especially when multiple transmit queues are being used.

Add a tx_timeout rdma_netdev hook and implement the callback in the hfi1
to print additional information.

The existing non-helpful message is avoided when the driver has presented
a callback.

Link: https://lore.kernel.org/r/1617026056-50483-3-git-send-email-dennis.dalessandro@cornelisnetworks.com
Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hfi1: Add AIP tx traces
Mike Marciniszyn [Mon, 29 Mar 2021 13:54:07 +0000 (09:54 -0400)]
IB/hfi1: Add AIP tx traces

Add traces to allow for debugging issues with AIP tx.

Link: https://lore.kernel.org/r/1617026056-50483-2-git-send-email-dennis.dalessandro@cornelisnetworks.com
Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agonet/mlx5: Implement sriov_get_vf_total_msix/count() callbacks
Leon Romanovsky [Sun, 14 Mar 2021 12:42:56 +0000 (14:42 +0200)]
net/mlx5: Implement sriov_get_vf_total_msix/count() callbacks

The mlx5 implementation executes a firmware command on the PF to change
the configuration of the selected VF.

Link: https://lore.kernel.org/linux-pci/20210314124256.70253-5-leon@kernel.org
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
3 years agonet/mlx5: Dynamically assign MSI-X vectors count
Leon Romanovsky [Sun, 14 Mar 2021 12:42:55 +0000 (14:42 +0200)]
net/mlx5: Dynamically assign MSI-X vectors count

The number of MSI-X vectors is a PCI property visible through lspci. The
field is read-only and configured by the device. The mlx5 devices work in
a static or dynamic assignment mode.

Static assignment means that all newly created VFs have a preset number of
MSI-X vectors determined by device configuration parameters. This can
result in some VFs having too many or too few MSI-X vectors. Until now
this has been the only means of fine-tuning the MSI-X vector count, and
it was acceptable for small numbers of VFs.

With dynamic assignment the inefficiency of having a fixed number of MSI-X
vectors can be avoided with each VF having exactly the required
vectors. Userspace will provide this information while provisioning the VF
for use, based on the intended use. For instance if being used with a VM,
the MSI-X vector count might be matched to the CPU count of the VM.

For compatibility, mlx5 continues to start up with static MSI-X vector
assignment, but the kernel can now access a larger dynamic vector pool
and assign more vectors to created VFs.

Link: https://lore.kernel.org/linux-pci/20210314124256.70253-4-leon@kernel.org
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
3 years agonet/mlx5: Add dynamic MSI-X capabilities bits
Leon Romanovsky [Sun, 14 Mar 2021 12:42:54 +0000 (14:42 +0200)]
net/mlx5: Add dynamic MSI-X capabilities bits

These new fields declare the number of MSI-X vectors that is possible to
allocate on the VF through PF configuration.

Value must be in range defined by min_dynamic_vf_msix_table_size and
max_dynamic_vf_msix_table_size.

The driver should continue to query its MSI-X table through PCI
configuration header.

Link: https://lore.kernel.org/linux-pci/20210314124256.70253-3-leon@kernel.org
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
3 years agoPCI/IOV: Add sysfs MSI-X vector assignment interface
Leon Romanovsky [Sun, 4 Apr 2021 07:22:18 +0000 (10:22 +0300)]
PCI/IOV: Add sysfs MSI-X vector assignment interface

A typical cloud provider SR-IOV use case is to create many VFs for use by
guest VMs. The VFs may not be assigned to a VM until a customer requests a
VM of a certain size, e.g., number of CPUs. A VF may need MSI-X vectors
proportional to the number of CPUs in the VM, but there is no standard way
to change the number of MSI-X vectors supported by a VF.

Some Mellanox ConnectX devices support dynamic assignment of MSI-X vectors
to SR-IOV VFs. This can be done by the PF driver after VFs are enabled,
and it can be done without affecting VFs that are already in use. The
hardware supports a limited pool of MSI-X vectors that can be assigned to
the PF or to individual VFs.  This is device-specific behavior that
requires support in the PF driver.

Add a read-only "sriov_vf_total_msix" sysfs file for the PF and a writable
"sriov_vf_msix_count" file for each VF. Management software may use these
to learn how many MSI-X vectors are available and to dynamically assign
them to VFs before the VFs are passed through to a VM.

If the PF driver implements the ->sriov_get_vf_total_msix() callback,
"sriov_vf_total_msix" contains the total number of MSI-X vectors available
for distribution among VFs.

If no driver is bound to the VF, writing "N" to "sriov_vf_msix_count" uses
the PF driver ->sriov_set_msix_vec_count() callback to assign "N" MSI-X
vectors to the VF.  When a VF driver subsequently reads the MSI-X Message
Control register, it will see the new Table Size "N".
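
A userspace usage sketch (the PCI addresses are examples; error handling
trimmed to the essentials):

        #include <stdio.h>

        int main(void)
        {
                FILE *f;
                int total, want = 16;

                /* read the vector pool size from the PF */
                f = fopen("/sys/bus/pci/devices/0000:01:00.0/sriov_vf_total_msix", "r");
                if (!f || fscanf(f, "%d", &total) != 1)
                        return 1;
                fclose(f);

                /* assign 16 vectors to a VF that has no driver bound */
                f = fopen("/sys/bus/pci/devices/0000:01:00.2/sriov_vf_msix_count", "w");
                if (!f)
                        return 1;
                fprintf(f, "%d\n", want);
                return fclose(f) ? 1 : 0;
        }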

Link: https://lore.kernel.org/linux-pci/20210314124256.70253-2-leon@kernel.org
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
3 years agoRDMA/hns: Reorganize doorbell update interfaces for all queues
Yixian Liu [Sat, 27 Mar 2021 10:25:38 +0000 (18:25 +0800)]
RDMA/hns: Reorganize doorbell update interfaces for all queues

The doorbell update interfaces are very similar for different queues, such
as SQ, RQ, SRQ, CQ and EQ. So reorganize these code and also fix some
inappropriate naming.

Link: https://lore.kernel.org/r/1616840738-7866-3-git-send-email-liweihang@huawei.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Support configuring doorbell mode of RQ and CQ
Yixian Liu [Sat, 27 Mar 2021 10:25:37 +0000 (18:25 +0800)]
RDMA/hns: Support configuring doorbell mode of RQ and CQ

HIP08 supports both normal and record doorbell modes for RQ and CQ. SQ
record doorbell for userspace is also supported by the software for the
flush CQE process. As the capabilities of HIP08 are now exposed to the
user and are configurable, support for normal doorbell should be added
back.

Note that if switching to normal doorbell, the kernel will report "flush
cqe is unsupported" when modifying a QP to the error state, as the flush
is based on the record doorbell.

Link: https://lore.kernel.org/r/1616840738-7866-2-git-send-email-liweihang@huawei.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Simplify command fields for HEM base address configuration
Xi Wang [Sat, 27 Mar 2021 03:21:34 +0000 (11:21 +0800)]
RDMA/hns: Simplify command fields for HEM base address configuration

Use hr_reg_write() instead of roce_set_field() to simplify codes about
configuring HEM BA.

Link: https://lore.kernel.org/r/1616815294-13434-6-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Reorganize process of setting HEM
Xi Wang [Sat, 27 Mar 2021 03:21:33 +0000 (11:21 +0800)]
RDMA/hns: Reorganize process of setting HEM

Encapsulate configuring GMV base address and other type of HEM table into
two separate functions to make process of setting HEM clearer.

Link: https://lore.kernel.org/r/1616815294-13434-5-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Refactor reset state checking flow
Xi Wang [Sat, 27 Mar 2021 03:21:32 +0000 (11:21 +0800)]
RDMA/hns: Refactor reset state checking flow

The 'HNS_ROCE_OPC_QUERY_MB_ST' command returns the mailbox complete
status and the hardware busy flag, and the complete status is only valid
when the busy flag is 0, so it's better to query these two fields at a
time rather than separately.

Link: https://lore.kernel.org/r/1616815294-13434-4-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Reorganize hns_roce_create_cq()
Yixing Liu [Sat, 27 Mar 2021 03:21:31 +0000 (11:21 +0800)]
RDMA/hns: Reorganize hns_roce_create_cq()

Encapsulate two subprocesses into functions.

Link: https://lore.kernel.org/r/1616815294-13434-3-git-send-email-liweihang@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Refactor hns_roce_v2_poll_one()
Weihang Li [Sat, 27 Mar 2021 03:21:30 +0000 (11:21 +0800)]
RDMA/hns: Refactor hns_roce_v2_poll_one()

Encapsulate the process of obtaining the current QP and filling WC as
functions, also merge some duplicate code.

Link: https://lore.kernel.org/r/1616815294-13434-2-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoMAINTAINERS: remove Xavier as maintainer of HISILICON ROCE DRIVER
Weihang Li [Mon, 29 Mar 2021 08:46:24 +0000 (16:46 +0800)]
MAINTAINERS: remove Xavier as maintainer of HISILICON ROCE DRIVER

Wei Hu(Xavier) has left Hisilicon and his email address is invalid now.
I'd be glad to add him back with another address if he wants to continue
maintaining this module.

Link: https://lore.kernel.org/r/1617007584-39842-1-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs-clt: Cap max_io_size
Jack Wang [Thu, 25 Mar 2021 15:33:05 +0000 (16:33 +0100)]
RDMA/rtrs-clt: Cap max_io_size

The max IO size is limited by both the remote buffer size and the max FR
pages per MR.

Link: https://lore.kernel.org/r/20210325153308.1214057-20-gi-oh.kim@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs: Cleanup unused 's' variable in __alloc_sess
Jack Wang [Thu, 25 Mar 2021 15:33:03 +0000 (16:33 +0100)]
RDMA/rtrs: Cleanup unused 's' variable in __alloc_sess

Link: https://lore.kernel.org/r/20210325153308.1214057-18-gi-oh.kim@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs-srv: Report temporary sessname for error message
Gioh Kim [Thu, 25 Mar 2021 15:33:02 +0000 (16:33 +0100)]
RDMA/rtrs-srv: Report temporary sessname for error message

Before receiving the session name, the error message cannot include any
information about which connection generates the error.

This patch stores the addresses of the source and target in the sessname
field to show which connection generates the error. That field will be
overwritten when the session name is received from the client.

Link: https://lore.kernel.org/r/20210325153308.1214057-17-gi-oh.kim@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs: New function converting rtrs_addr to string
Gioh Kim [Thu, 25 Mar 2021 15:32:59 +0000 (16:32 +0100)]
RDMA/rtrs: New function converting rtrs_addr to string

There is common code converting the addresses of the source and
destination machines to a string. We already have a struct rtrs_addr to
store the two addresses. This patch introduces a new function that
converts the two addresses of a struct rtrs_addr into one string.
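
One possible shape for such a helper, as a sketch (the function name is
made up; %pIS is the kernel's sockaddr format specifier):

        static int rtrs_addr_pair_to_str(const struct rtrs_addr *addr,
                                         char *buf, size_t len)
        {
                /* render "source@destination" into one buffer */
                return scnprintf(buf, len, "%pIS@%pIS",
                                 addr->src, addr->dst);
        }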

Link: https://lore.kernel.org/r/20210325153308.1214057-14-gi-oh.kim@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs: Cleanup the code in rtrs_srv_rdma_cm_handler
Guoqing Jiang [Thu, 25 Mar 2021 15:32:56 +0000 (16:32 +0100)]
RDMA/rtrs: Cleanup the code in rtrs_srv_rdma_cm_handler

Let these cases share the same path since all of them need to close
session.

Link: https://lore.kernel.org/r/20210325153308.1214057-11-gi-oh.kim@ionos.com
Signed-off-by: Guoqing Jiang <guoqing.jiang@ionos.com>
Reviewed-by: Danil Kipnis <danil.kipnis@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs: Remove sessname and sess_kobj from rtrs_attrs
Guoqing Jiang [Thu, 25 Mar 2021 15:32:55 +0000 (16:32 +0100)]
RDMA/rtrs: Remove sessname and sess_kobj from rtrs_attrs

The two members are not used in the code, so remove them.

Link: https://lore.kernel.org/r/20210325153308.1214057-10-gi-oh.kim@ionos.com
Signed-off-by: Guoqing Jiang <guoqing.jiang@ionos.com>
Reviewed-by: Danil Kipnis <danil.kipnis@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs: Kill the put label in rtrs_srv_create_once_sysfs_root_folders
Guoqing Jiang [Thu, 25 Mar 2021 15:32:54 +0000 (16:32 +0100)]
RDMA/rtrs: Kill the put label in rtrs_srv_create_once_sysfs_root_folders

We can remove the label after moving put_device() to the right place.

Link: https://lore.kernel.org/r/20210325153308.1214057-9-gi-oh.kim@ionos.com
Signed-off-by: Guoqing Jiang <guoqing.jiang@ionos.com>
Reviewed-by: Danil Kipnis <danil.kipnis@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs-clt: Remove redundant code from rtrs_clt_read_req
Guoqing Jiang [Thu, 25 Mar 2021 15:32:53 +0000 (16:32 +0100)]
RDMA/rtrs-clt: Remove redundant code from rtrs_clt_read_req

There is no need to dereference 's' from 'sess', since we have "sess =
to_clt_sess(s)" before.

And we can dereference 'dev' from 's' earlier.

Link: https://lore.kernel.org/r/20210325153308.1214057-8-gi-oh.kim@ionos.com
Signed-off-by: Guoqing Jiang <guoqing.jiang@ionos.com>
Reviewed-by: Danil Kipnis <danil.kipnis@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoMAINTAINERS: Change maintainer for rtrs module
Danil Kipnis [Thu, 25 Mar 2021 15:32:47 +0000 (16:32 +0100)]
MAINTAINERS: Change maintainer for rtrs module

Danil will step down and Haris will take over. Also update the email
addresses to ionos.com; cloud.ionos.com will still work for some time.

Link: https://lore.kernel.org/r/20210325153308.1214057-2-gi-oh.kim@ionos.com
Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Acked-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/uverbs: Fix -Wunused-function warning
YueHaibing [Thu, 1 Apr 2021 02:10:28 +0000 (10:10 +0800)]
RDMA/uverbs: Fix -Wunused-function warning

make W=1 warns this:

In file included from drivers/infiniband/sw/rdmavt/mmap.c:51:0:
./include/rdma/uverbs_ioctl.h:937:1:
 warning: '_uverbs_get_const_unsigned' defined but not used [-Wunused-function]
 _uverbs_get_const_unsigned(u64 *to,
 ^~~~~~~~~~~~~~~~~~~~~~~~~~
./include/rdma/uverbs_ioctl.h:930:1:
 warning: '_uverbs_get_const_signed' defined but not used [-Wunused-function]
 _uverbs_get_const_signed(s64 *to, const struct uverbs_attr_bundle *attrs_bundle,
 ^~~~~~~~~~~~~~~~~~~~~~~~

Make these functions inline to fix these warnings.
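
The underlying rule, in a minimal before/after sketch:

        /* before: a plain static function defined in a header warns under
         * -Wunused-function in every TU that includes but never calls it */
        static int helper(void) { return 0; }

        /* after: static inline functions are exempt from the warning */
        static inline int helper(void) { return 0; }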

Fixes: 2904bb37b35d ("IB/core: Split uverbs_get_const/default to consider target type")
Link: https://lore.kernel.org/r/20210401021028.25720-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Support congestion control type selection according to the FW
Yangyang Li [Thu, 25 Mar 2021 13:33:56 +0000 (21:33 +0800)]
RDMA/hns: Support congestion control type selection according to the FW

The type of congestion control algorithm includes DCQCN, LDCP, HC3 and
DIP. The driver will select one of them according to the firmware when
querying PF capabilities, and then set the related configuration fields
into QPC.

Link: https://lore.kernel.org/r/1616679236-7795-3-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Support query information of functions from FW
Wei Xu [Thu, 25 Mar 2021 13:33:55 +0000 (21:33 +0800)]
RDMA/hns: Support query information of functions from FW

Add a new type of command to query the MAC ID of functions from the
firmware; it is used to select the template of the congestion algorithm.
More info will be supported in the future.

Link: https://lore.kernel.org/r/1616679236-7795-2-git-send-email-liweihang@huawei.com
Signed-off-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Shengming Shu <shushengming1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/core: Fix corrupted SL on passive side
Håkon Bugge [Mon, 22 Mar 2021 13:35:32 +0000 (14:35 +0100)]
RDMA/core: Fix corrupted SL on passive side

On RoCE systems, a CM REQ contains a Primary Hop Limit > 1 and Primary
Subnet Local is zero.

In cm_req_handler(), the cm_process_routed_req() function is called. Since
the Primary Subnet Local value is zero in the request, and since this is
RoCE (Primary Local LID is permissive), the following statement will be
executed:

      IBA_SET(CM_REQ_PRIMARY_SL, req_msg, wc->sl);

This corrupts SL in req_msg if it was different from zero. In other words,
a request to setup a connection using an SL != zero, will not be honored,
and a connection using SL zero will be created instead.

Fix this by not calling cm_process_routed_req() on RoCE systems;
cm_process_routed_req() is only for IB anyhow.

Fixes: 3971c9f6dbf2 ("IB/cm: Add interim support for routed paths")
Link: https://lore.kernel.org/r/1616420132-31005-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rxe: Remove rxe_dma_device declaration
Kamal Heib [Wed, 31 Mar 2021 10:20:43 +0000 (13:20 +0300)]
RDMA/rxe: Remove rxe_dma_device declaration

The function isn't implemented - delete the declaration.

Fixes: a9d2e9ae953f ("RDMA/siw,rxe: Make emulated devices virtual in the device tree")
Link: https://lore.kernel.org/r/20210331102043.691950-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/iw_cxgb4: Use DEFINE_SPINLOCK() for spinlock
Tang Yizhou [Wed, 31 Mar 2021 02:01:05 +0000 (10:01 +0800)]
RDMA/iw_cxgb4: Use DEFINE_SPINLOCK() for spinlock

spinlock can be initialized automatically with DEFINE_SPINLOCK() rather
than explicitly calling spin_lock_init().
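
A minimal sketch of the change:

        /* before: the lock must be initialized at runtime */
        static spinlock_t timeout_lock;
        spin_lock_init(&timeout_lock);   /* somewhere in init code */

        /* after: initialized at compile time, no init call needed */
        static DEFINE_SPINLOCK(timeout_lock);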

Link: https://lore.kernel.org/r/20210331020105.4858-1-tangyizhou@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Tang Yizhou <tangyizhou@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/iser: struct iscsi_iser_task is declared twice
Wan Jiabing [Fri, 26 Mar 2021 11:33:46 +0000 (19:33 +0800)]
RDMA/iser: struct iscsi_iser_task is declared twice

struct iscsi_iser_task has already been declared at line 201. Remove the
duplicate declaration.

Link: https://lore.kernel.org/r/20210326113347.903976-1-wanjiabing@vivo.com
Signed-off-by: Wan Jiabing <wanjiabing@vivo.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Fix a spelling mistake in hns_roce_hw_v1.c
Ruiqi Gong [Tue, 30 Mar 2021 12:29:12 +0000 (08:29 -0400)]
RDMA/hns: Fix a spelling mistake in hns_roce_hw_v1.c

s/caculating/calculating

Link: https://lore.kernel.org/r/20210330122912.19989-1-gongruiqi1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Ruiqi Gong <gongruiqi1@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rxe: Split MEM into MR and MW
Bob Pearson [Thu, 25 Mar 2021 21:24:26 +0000 (16:24 -0500)]
RDMA/rxe: Split MEM into MR and MW

In the original rxe implementation it was intended to use a common object
to represent MRs and MWs but they are different enough to separate these
into two objects.

This allows replacing the mem name with mr for MRs which is more
consistent with the style for the other objects and less likely to be
confusing. This is a long patch that mostly changes mem to mr where it
makes sense and adds a new rxe_mw struct.

Link: https://lore.kernel.org/r/20210325212425.2792-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Acked-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/efa: Use strscpy instead of strlcpy
Gal Pressman [Mon, 29 Mar 2021 12:01:28 +0000 (15:01 +0300)]
RDMA/efa: Use strscpy instead of strlcpy

The strlcpy function doesn't limit the source length, use the preferred
strscpy function instead.
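
A sketch of the difference (the destination field is illustrative):

        /* strlcpy() always walks the whole source to compute its return
         * value, which can over-read an unterminated source string */
        strlcpy(dev->name, src, sizeof(dev->name));

        /* strscpy() stops at the destination bound and returns -E2BIG
         * on truncation instead */
        strscpy(dev->name, src, sizeof(dev->name));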

Link: https://lore.kernel.org/r/20210329120131.18793-1-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/isert: Fix a use after free in isert_connect_request
Lv Yunlong [Mon, 22 Mar 2021 16:13:25 +0000 (09:13 -0700)]
IB/isert: Fix a use after free in isert_connect_request

The device is obtained by isert_device_get() with a refcount of 1, and
is assigned to isert_conn by
  isert_conn->device = device.

When isert_create_qp() fails, the device is freed with
isert_device_put().

Later, the device is used in isert_free_login_buf(isert_conn) by the
isert_conn->device->ib_device statement.

Free the device in the correct order.
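
Conceptually, users of the device must be torn down before the last
reference is dropped, e.g.:

        /* the login buffer teardown dereferences isert_conn->device,
         * so it must run before the device reference is dropped */
        isert_free_login_buf(isert_conn);
        isert_device_put(device);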

Fixes: ae9ea9ed38c9 ("iser-target: Split some logic in isert_connect_request to routines")
Link: https://lore.kernel.org/r/20210322161325.7491-1-lyl2019@mail.ustc.edu.cn
Signed-off-by: Lv Yunlong <lyl2019@mail.ustc.edu.cn>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA: Fix a typo
Bhaskar Chowdhury [Mon, 22 Mar 2021 06:43:22 +0000 (12:13 +0530)]
RDMA: Fix a typo

s/struture/structure/

Link: https://lore.kernel.org/r/20210322064322.3933985-1-unixbhaskar@gmail.com
Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hfi1: Fix a typo
Bhaskar Chowdhury [Mon, 22 Mar 2021 06:29:23 +0000 (11:59 +0530)]
IB/hfi1: Fix a typo

s/struture/structure/

And add the missing colon for kdoc

Link: https://lore.kernel.org/r/20210322062923.3306167-1-unixbhaskar@gmail.com
Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/core: Correct misspellings of two words in comments
Yangyang Li [Fri, 19 Mar 2021 09:55:49 +0000 (17:55 +0800)]
RDMA/core: Correct misspellings of two words in comments

Correct the following spelling errors:
1. shold -> should
2. uncontext -> ucontext

Link: https://lore.kernel.org/r/1616147749-49106-1-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Set ODP caps only if device profile support ODP
Shay Drory [Thu, 18 Mar 2021 13:52:59 +0000 (15:52 +0200)]
RDMA/mlx5: Set ODP caps only if device profile support ODP

Currently, ODP caps are set during the init stage of mlx5_ib_dev,
regardless of whether the device profile supports ODP or not.  There is no
point in setting ODP caps if the device profile doesn't support
ODP. Hence, move setting the ODP caps to the odp_init stage.

Link: https://lore.kernel.org/r/20210318135259.681264-1-leon@kernel.org
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Fix drop packet rule in egress table
Maor Gottlieb [Thu, 18 Mar 2021 13:51:23 +0000 (15:51 +0200)]
RDMA/mlx5: Fix drop packet rule in egress table

Initial drop action support missed that drop action can be added to egress
flow tables as well. Add the missing support.

This requires making sure that dest_type isn't set to PORT, which in
turn exposes the possibility of passing dst while indicating the number
of dsts as zero. Explicitly check the number of dsts and pass the
appropriate pointer.

Fixes: f29de9eee782 ("RDMA/mlx5: Add support for drop action in DV steering")
Link: https://lore.kernel.org/r/20210318135123.680759-1-leon@kernel.org
Reviewed-by: Mark Bloch <markb@nvidia.com>
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/uverbs: Refactor rdma_counter_set_auto_mode and __counter_set_mode
Patrisious Haddad [Thu, 18 Mar 2021 11:05:02 +0000 (13:05 +0200)]
RDMA/uverbs: Refactor rdma_counter_set_auto_mode and __counter_set_mode

Success is returned in the following flows:
 * New mode is the same as the current one.
 * Switched to new mode and there are no bound counters yet.

Link: https://lore.kernel.org/r/20210318110502.673676-1-leon@kernel.org
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/bnxt_re: Move device to error state upon device crash
Selvin Xavier [Wed, 17 Mar 2021 08:15:42 +0000 (01:15 -0700)]
RDMA/bnxt_re: Move device to error state upon device crash

When the L2 driver detects a device crash or the device has undergone a
reset, it invokes a stop callback to recover from the error.

The current RoCE driver doesn't recover the device. So move the device to
the error state and dispatch fatal events to all QPs. Release the MSI-X
vectors to avoid a crash when the L2 driver disables MSI-X. Also, check
the device state to avoid posting further commands to the HW.

Link: https://lore.kernel.org/r/1615968942-30970-1-git-send-email-selvin.xavier@broadcom.com
Signed-off-by: Naresh Kumar PBS <nareshkumar.pbs@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA: Support more than 255 rdma ports
Mark Bloch [Mon, 1 Mar 2021 07:04:20 +0000 (09:04 +0200)]
RDMA: Support more than 255 rdma ports

Current code uses many different types when dealing with a port of an
RDMA device: u8, unsigned int and u32. Switch to u32 to clean up the logic.

This allows us to make (at least) the core view consistent and use the
same type. Unfortunately not all places can be converted. Many uverbs
functions expect port to be u8 so keep those places in order not to break
UAPIs.  HW/Spec defined values must also not be changed.
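
One representative prototype, simplified, to show the direction of the
change:

        /* before */
        int ib_query_port(struct ib_device *device, u8 port_num,
                          struct ib_port_attr *port_attr);

        /* after */
        int ib_query_port(struct ib_device *device, u32 port_num,
                          struct ib_port_attr *port_attr);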

With the switch to u32 we now can support devices with more than 255
ports. U32_MAX is reserved to make control logic a bit easier to deal
with. As a device with U32_MAX ports probably isn't going to happen any
time soon this seems like a non issue.

When a device with more than 255 ports is created, uverbs will report the
RDMA device as having 255 ports, as this is the max currently supported.

The verbs interface is not changed yet because the IBTA spec limits the
port size to u8 in too many places and all applications that rely on
verbs won't be able to cope with this change. At this stage, we are only
extending the interfaces that use the vendor channel.

Once the limitation is lifted, mlx5 in switchdev mode will be able to
have thousands of SFs created by the device. As the only instance of an
RDMA device that reports more than 255 ports will be a representor
device, and it exposes itself as a RAW Ethernet only device, CM/MAD/IPoIB
and other ULPs aren't affected by this change, and their sysfs interfaces
that are exposed to userspace can remain unchanged.

While here, clean up some alignment issues and remove unneeded sanity
checks (mainly in rdmavt).

Link: https://lore.kernel.org/r/20210301070420.439400-1-leon@kernel.org
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Support to query firmware version
Lang Cheng [Tue, 16 Mar 2021 08:09:21 +0000 (16:09 +0800)]
RDMA/hns: Support to query firmware version

Implement the ops named get_dev_fw_str to support ib_get_device_fw_str().

Link: https://lore.kernel.org/r/1615882161-53827-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Create ODP EQ only when ODP MR is created
Shay Drory [Sun, 14 Mar 2021 12:54:18 +0000 (14:54 +0200)]
RDMA/mlx5: Create ODP EQ only when ODP MR is created

There is no need to create the ODP EQ if the user doesn't use ODP MRs.
Hence, create it only when the first ODP MR is created. This EQ will be
destroyed only when the device is unloaded.
This decreases the number of EQs created per device. For example, if we
create 1K devices (SFs/VFs/etc.), we will decrease the number of EQs
by 1K.
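
A sketch of the lazy-creation idea (the names are assumptions):

        /* on the first ODP MR registration, create the EQ exactly once */
        mutex_lock(&dev->odp_eq_mutex);
        if (!dev->odp_pf_eq)
                dev->odp_pf_eq = create_odp_pf_eq(dev);
        mutex_unlock(&dev->odp_eq_mutex);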

Link: https://lore.kernel.org/r/20210314125418.179716-1-leon@kernel.org
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Fix memory corruption when allocating XRCDN
Weihang Li [Mon, 22 Mar 2021 02:44:29 +0000 (10:44 +0800)]
RDMA/hns: Fix memory corruption when allocating XRCDN

It's incorrect to cast the type of the pointer to xrcdn from (u32 *) to
(unsigned long *) and then pass it into hns_roce_bitmap_alloc(); this
leads to memory corruption.

Fixes: 32548870d438 ("RDMA/hns: Add support for XRC on HIP09")
Link: https://lore.kernel.org/r/1616381069-51759-1-git-send-email-liweihang@huawei.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/hns: Fix mispelling of subsystem
Bhaskar Chowdhury [Mon, 22 Mar 2021 02:27:51 +0000 (07:57 +0530)]
IB/hns: Fix mispelling of subsystem

s/wubsytem/subsystem/

Link: https://lore.kernel.org/r/20210322022751.4137205-1-unixbhaskar@gmail.com
Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/include: Mundane typo fixes throughout the file
Bhaskar Chowdhury [Thu, 18 Mar 2021 10:04:53 +0000 (15:34 +0530)]
RDMA/include: Mundane typo fixes throughout the file

s/proviee/provide/
s/undelying/underlying/
s/quesiton/question/
s/drivr/driver/

Link: https://lore.kernel.org/r/20210318100453.9759-1-unixbhaskar@gmail.com
Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/cma: Remove unused leftovers in cma code
Gal Pressman [Sun, 14 Mar 2021 14:34:27 +0000 (16:34 +0200)]
RDMA/cma: Remove unused leftovers in cma code

Commit ee1c60b1bff8 ("IB/SA: Modify SA to implicitly cache Class Port
info") removed the class_port_info_context struct usage, remove a couple
of leftovers.

Link: https://lore.kernel.org/r/20210314143427.76101-1-galpress@amazon.com
Signed-off-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA: Delete not-used static inline functions
Leon Romanovsky [Sun, 14 Mar 2021 13:39:08 +0000 (15:39 +0200)]
RDMA: Delete not-used static inline functions

Perform mass deletion of static inline functions that are not used.

Link: https://lore.kernel.org/r/20210314133908.291945-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA: Fix kernel-doc compilation warnings
Leon Romanovsky [Sun, 14 Mar 2021 13:39:07 +0000 (15:39 +0200)]
RDMA: Fix kernel-doc compilation warnings

This patch fixes a bunch of kernel-doc compilation warnings like below:

 drivers/infiniband/hw/i40iw/i40iw_cm.c:4372: warning: expecting prototype for i40iw_ifdown_notify(). Prototype was for i40iw_if_notify() instead

Link: https://lore.kernel.org/r/20210314133908.291945-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Add missing returned error check of mlx5_ib_dereg_mr
Leon Romanovsky [Sun, 14 Mar 2021 08:22:50 +0000 (10:22 +0200)]
RDMA/mlx5: Add missing returned error check of mlx5_ib_dereg_mr

Fix the following smatch error:

drivers/infiniband/hw/mlx5/mr.c:1950 mlx5_ib_dereg_mr() error: uninitialized symbol 'rc'.

Fixes: e6fb246ccafb ("RDMA/mlx5: Consolidate MR destruction to mlx5_ib_dereg_mr()")
Link: https://lore.kernel.org/r/20210314082250.10143-1-leon@kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agonet/mlx5: Use order-0 allocations for EQs
Tariq Toukan [Thu, 11 Mar 2021 07:09:15 +0000 (23:09 -0800)]
net/mlx5: Use order-0 allocations for EQs

Currently we are allocating high-order page for EQs. In case of
fragmented system, VF hot remove/add in VMs for example, there isn't
enough contiguous memory for EQs allocation, which results in crashing
of the VM.
Therefore, use order-0 fragments for the EQ allocations instead.

Performance tests:
ConnectX-5 100Gbps, CPU: Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
Performance tests show no noticeable degradation.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agonet/mlx5: Add IFC bits needed for single FDB mode
Mark Bloch [Thu, 11 Mar 2021 07:09:14 +0000 (23:09 -0800)]
net/mlx5: Add IFC bits needed for single FDB mode

Currently we operate in a mode where each eswitch manager has a separate
FDB. In order to combine these multiple FDBs we expose new caps to allow
this:

- Set a root flow table which isn't native.
- Set a different FDB selection mode when in LAG mode.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agonet/mlx5: E-Switch, Refactor send to vport to be more generic
Mark Bloch [Thu, 11 Mar 2021 07:09:13 +0000 (23:09 -0800)]
net/mlx5: E-Switch, Refactor send to vport to be more generic

Now that each representor stores a pointer to the managing E-Switch,
use that information when creating the send-to-vport rules.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agoRDMA/mlx5: Use representor E-Switch when getting netdev and metadata
Mark Bloch [Thu, 11 Mar 2021 07:09:12 +0000 (23:09 -0800)]
RDMA/mlx5: Use representor E-Switch when getting netdev and metadata

Now that a pointer to the managing E-Switch is stored in the
representor, use it.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Acked-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agonet/mlx5: E-Switch, Add eswitch pointer to each representor
Mark Bloch [Thu, 11 Mar 2021 07:09:11 +0000 (23:09 -0800)]
net/mlx5: E-Switch, Add eswitch pointer to each representor

Store the managing E-Switch of each representor. This will be used
when a representor is created on eswitch manager 0 but the vport
belongs to eswitch manager 1.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agonet/mlx5: E-Switch, Add match on vhca id to default send rules
Mark Bloch [Thu, 11 Mar 2021 07:09:10 +0000 (23:09 -0800)]
net/mlx5: E-Switch, Add match on vhca id to default send rules

Match on the vhca id of the E-Switch owner when creating the send-to-vport
representor rules.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agonet/mlx5: Remove unused mlx5_core_health member recover_work
Mikhael Goikhman [Thu, 11 Mar 2021 07:09:09 +0000 (23:09 -0800)]
net/mlx5: Remove unused mlx5_core_health member recover_work

The code related to health->recover_work was removed in
commit 63cbc552eebf ("net/mlx5: Handle SW reset of FW in error flow")

Fix struct mlx5_core_health accordingly.

Signed-off-by: Mikhael Goikhman <migo@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agonet/mlx5: simplify the return expression of mlx5_esw_offloads_pair()
Zheng Yongjun [Thu, 11 Mar 2021 07:09:08 +0000 (23:09 -0800)]
net/mlx5: simplify the return expression of mlx5_esw_offloads_pair()

Simplify the return expression.
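
A sketch of the transformation (the callee shown is illustrative):

        /* before */
        err = esw_add_fdb_peer_miss_rules(esw, peer_esw->dev);
        if (err)
                return err;

        return 0;

        /* after */
        return esw_add_fdb_peer_miss_rules(esw, peer_esw->dev);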

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agonet/mlx5: Cleanup prototype warning
Saeed Mahameed [Thu, 11 Mar 2021 07:09:07 +0000 (23:09 -0800)]
net/mlx5: Cleanup prototype warning

Cleanup W=1 warning:

drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c:49:
   warning: expecting prototype for Set lag port affinity().
   Prototype was for mlx5_lag_set_port_affinity() instead

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years agoRDMA/mlx5: Allow larger pages in DevX umem
Jason Gunthorpe [Thu, 4 Mar 2021 13:05:01 +0000 (15:05 +0200)]
RDMA/mlx5: Allow larger pages in DevX umem

The umem DMA list calculation was locked at 4k pages due to confusion
around how this API works and is used when larger pages are present.

The conclusion is:

 - umems cannot extend past what is mapped into the process, so creating
   a large page size and referring to a sub-range is not allowed

 - umems must always have a page offset of zero, except for sub PAGE_SIZE
   umems

 - The feature of umem_offset to create multiple objects inside a umem
   is buggy and isn't used anywhere. Thus we can assume all users of the
   current API have umem_offset == 0 as well

Provide a new page size calculator that limits the DMA list to the VA
range and enforces umem_offset == 0.

Allow user space to specify the page sizes which it can accept, this
bitmap must be derived from the intended use of the umem, based on
per-usage HW limitations.

Link: https://lore.kernel.org/r/20210304130501.1102577-4-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/core: Split uverbs_get_const/default to consider target type
Yishai Hadas [Thu, 4 Mar 2021 13:05:00 +0000 (15:05 +0200)]
IB/core: Split uverbs_get_const/default to consider target type

Change uverbs_get_const/uverbs_get_const_default to work properly with
both signed/unsigned parameters.

The current APIs mix s64 and u64, which leads to an incorrect check when
a u64 value is supplied and its upper bit is set. In that case the
uverbs_get_const() / uverbs_get_const_default() lower bound check may
fail unexpectedly: the target is unsigned (lower bound is 0) but the
value becomes negative due to the s64 usage.

Split to have two different APIs, no change to callers as the required
API will be called internally according to the target type.

Link: https://lore.kernel.org/r/20210304130501.1102577-3-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoIB/core: Drop WARN_ON() from ib_umem_find_best_pgsz()
Yishai Hadas [Thu, 4 Mar 2021 13:04:59 +0000 (15:04 +0200)]
IB/core: Drop WARN_ON() from ib_umem_find_best_pgsz()

The WARN_ON() issued as part of ib_umem_find_best_pgsz() blocked cases
where only page sizes larger than PAGE_SIZE were set; drop it to enable
those cases.

In addition, there is no need for a specific check for a zero
pgsz_bitmap; the function will do its job and return 0 at the end if no
match is found.

Link: https://lore.kernel.org/r/20210304130501.1102577-2-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Fix mlx5 rates to IB rates map
Mark Zhang [Thu, 4 Mar 2021 12:45:17 +0000 (14:45 +0200)]
RDMA/mlx5: Fix mlx5 rates to IB rates map

Correct the map between mlx5 rates and the corresponding IB rates, as
they don't always have a fixed offset between them.
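
A sketch of the shape of the fix: an explicit translation rather than a
constant offset. The DEV_RATE_* codes are placeholders, not the real
mlx5 values; the IB_RATE_* values are the standard enum ib_rate ones:

  static enum ib_rate dev_to_ib_rate(int dev_rate)
  {
          switch (dev_rate) {
          case DEV_RATE_2_5_GBPS: return IB_RATE_2_5_GBPS;
          case DEV_RATE_5_GBPS:   return IB_RATE_5_GBPS;
          case DEV_RATE_10_GBPS:  return IB_RATE_10_GBPS;
          /* ... one explicit arm per supported rate ... */
          default:                return IB_RATE_PORT_CURRENT;
          }
  }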

Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Link: https://lore.kernel.org/r/20210304124517.1100608-4-leon@kernel.org
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Fix query RoCE port
Maor Gottlieb [Thu, 4 Mar 2021 12:45:15 +0000 (14:45 +0200)]
RDMA/mlx5: Fix query RoCE port

mlx5_is_roce_enabled() returns the devlink RoCE init value, therefore
it should be used only when the driver is loaded. Instead we just need
to read the roce_en field.
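
A sketch of the intended check, reading the live capability with the
standard mlx5 accessor (the helper name here is hypothetical):

  static bool roce_enabled_now(struct mlx5_core_dev *mdev)
  {
          /* the current capability, not the devlink init-time value */
          return MLX5_CAP_GEN(mdev, roce);
  }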

In addition, rename mlx5_is_roce_enabled to mlx5_is_roce_init_enabled.

Fixes: 7a58779edd75 ("IB/mlx5: Improve query port for representor port")
Link: https://lore.kernel.org/r/20210304124517.1100608-2-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Rename mlx5_mr_cache_invalidate() to revoke_mr()
Jason Gunthorpe [Thu, 4 Mar 2021 12:07:45 +0000 (14:07 +0200)]
RDMA/mlx5: Rename mlx5_mr_cache_invalidate() to revoke_mr()

Now that this is only used in a few places in mr.c, give it a sensible
name. It has nothing to do with the cache and can be invoked on any MR.
DMA is stopped and the user cannot touch the MR any further once it
completes.

Link: https://lore.kernel.org/r/20210304120745.1090751-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Consolidate MR destruction to mlx5_ib_dereg_mr()
Jason Gunthorpe [Thu, 4 Mar 2021 12:07:44 +0000 (14:07 +0200)]
RDMA/mlx5: Consolidate MR destruction to mlx5_ib_dereg_mr()

Now that the SRCU stuff has been removed, the entire MR destroy logic
can be made a lot simpler. Currently there are many different ways to
destroy an MR, which makes it really hard to do this task correctly.
Route all destruction through mlx5_ib_dereg_mr() and make it work for
all situations.

Since it turns out all the different MR types do basically the same
thing, this removes a lot of knowledge of MR internals from ODP and
leaves ODP just exporting an operation to clean up children.

This fixes a few weird corner-case bugs and firmly uses the correct
ordering of the MR destruction (a code sketch follows the list):
 - Stop parallel access to the mkey via the ODP xarray
 - Stop DMA
 - Release the umem
 - Clean up ODP children
 - Free/Recycle the MR
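
A structural sketch of that single path; every helper below is a
stand-in name for driver internals, not a real function:

  struct mr;                                     /* mlx5_ib_mr stand-in */

  static void stop_odp_lookups(struct mr *mr)  { /* xarray erase */ }
  static void stop_dma(struct mr *mr)          { /* revoke the mkey */ }
  static void release_umem(struct mr *mr)      { /* drop page pins */ }
  static void free_odp_children(struct mr *mr) { /* implicit ODP only */ }
  static void free_or_recycle(struct mr *mr)   { /* cache or kfree */ }

  static void dereg_mr(struct mr *mr)
  {
          stop_odp_lookups(mr);   /* 1: block new mkey lookups */
          stop_dma(mr);           /* 2 */
          release_umem(mr);       /* 3 */
          free_odp_children(mr);  /* 4 */
          free_or_recycle(mr);    /* 5 */
  }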

Link: https://lore.kernel.org/r/20210304120745.1090751-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Use a union inside mlx5_ib_mr
Jason Gunthorpe [Thu, 4 Mar 2021 12:07:43 +0000 (14:07 +0200)]
RDMA/mlx5: Use a union inside mlx5_ib_mr

The struct mlx5_ib_mr can be used for three different things, but only one
at a time:

 - In the user MR cache
 - As a kernel MR
 - As a user MR

Overlay the three things into a single union with the following rules
(a structural sketch follows the list):

 - If the mr is found on the cache_ent->head list then it is a cache MR
   and umem == NULL. The entire union is zero after the MR is removed from
   the cache.

 - If umem != NULL or type == IB_MR_TYPE_USER then it is a user MR.

 - If umem == NULL then it is a kernel MR.
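
A sketch of such an overlay; the field names are illustrative, not the
exact mlx5_ib_mr layout:

  struct sketch_mr {
          struct ib_umem *umem;   /* NULL unless this is a user MR */
          int type;               /* e.g. IB_MR_TYPE_USER */
          union {
                  struct list_head cache_head;    /* cache MR: linked on
                                                   * cache_ent->head */
                  struct {                        /* user MR */
                          unsigned long odp_stats;
                  } user;
                  struct {                        /* kernel MR */
                          void *descs;
                  } kern;
          };
  };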

This reduces the size of struct mlx5_ib_mr to 552 bytes from 702.

The only place the three flows overlap in the code is during dereg, so add
a few extra checks along there.

Link: https://lore.kernel.org/r/20210304120745.1090751-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/mlx5: Zero out ODP related items in the mlx5_ib_mr
Jason Gunthorpe [Thu, 4 Mar 2021 12:07:42 +0000 (14:07 +0200)]
RDMA/mlx5: Zero out ODP related items in the mlx5_ib_mr

All of the ODP code assumes that when it calls mlx5_mr_cache_alloc()
the ODP related fields are zero'd. This is true if the MR was just
allocated, but if the MR is recycled through the cache then the values
are never zero'd.

This causes a bug in the odp_stats: they don't reset when the MR is
reallocated. Also, is_odp_implicit is never 0'd.

So that we can use memset() on a block of the mlx5_ib_mr, reorganize
the structure to put all the data that can be zero'd by the cache at
the end.

It is organized as an anonymous struct because the next patch will make
this a union.
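
A sketch of the layout trick (names illustrative): everything from the
marker member to the end of the struct is cleared in one call when the
cache recycles an MR:

  struct sketch_mr {
          int type;                       /* survives recycling */

          /* All fields from odp_stats onward are zero'd on reuse. */
          struct {
                  unsigned long odp_stats;
                  bool is_odp_implicit;
                  void *parent;
          };
  };

  static void reset_on_reuse(struct sketch_mr *mr)
  {
          memset(&mr->odp_stats, 0,
                 sizeof(*mr) - offsetof(struct sketch_mr, odp_stats));
  }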

Delete the unused smr_info. Don't set the kernel-only desc_size on the
user path. There is no longer any need to zero mr->parent before
freeing it; the memset() will get it now.

Fixes: a3de94e3d61e ("IB/mlx5: Introduce ODP diagnostic counters")
Link: https://lore.kernel.org/r/20210304120745.1090751-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Add support for XRC on HIP09
Wenpeng Liang [Thu, 4 Mar 2021 02:55:58 +0000 (10:55 +0800)]
RDMA/hns: Add support for XRC on HIP09

HIP09 supports the XRC transport service, which greatly reduces the
number of QPs required to connect all processes in a large cluster.

Link: https://lore.kernel.org/r/1614826558-35423-1-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs-clt: Use rdma_event_msg in log
Jack Wang [Mon, 22 Feb 2021 14:15:51 +0000 (15:15 +0100)]
RDMA/rtrs-clt: Use rdma_event_msg in log

It's easier to understand a string than an enum value.
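
For example (the message text is illustrative; rdma_event_msg() is the
existing RDMA CM helper):

  /* Before: only the numeric event code */
  pr_err("RDMA CM event %d, status %d\n", ev->event, ev->status);

  /* After: the human-readable event name as well */
  pr_err("RDMA CM event %s (%d), status %d\n",
         rdma_event_msg(ev->event), ev->event, ev->status);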

Link: https://lore.kernel.org/r/20210222141551.54345-2-jinpu.wang@cloud.ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/rtrs: Use new shared CQ mechanism
Jack Wang [Mon, 22 Feb 2021 14:15:50 +0000 (15:15 +0100)]
RDMA/rtrs: Use new shared CQ mechanism

Have the driver use shared CQs, which provides a ~10%-20% improvement
during testing.

Instead of opening a dedicated CQ for each QP of a connection, the RDMA
core provides a CQ that is shared between the QPs on the same core,
reducing interrupt overhead.
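
The shared CQs come from the RDMA core's CQ pool API; roughly (error
handling trimmed, nr_cqe and comp_vector being the caller's sizing):

  struct ib_cq *cq;

  cq = ib_cq_pool_get(ib_dev, nr_cqe, comp_vector, IB_POLL_SOFTIRQ);
  if (IS_ERR(cq))
          return PTR_ERR(cq);

  /* ... attach the connection's QP to this shared CQ ... */

  ib_cq_pool_put(cq, nr_cqe);     /* on teardown */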

Link: https://lore.kernel.org/r/20210222141551.54345-1-jinpu.wang@cloud.ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/core: Remove unused req_ncomp_notif device operation
Gal Pressman [Thu, 11 Mar 2021 15:09:21 +0000 (17:09 +0200)]
RDMA/core: Remove unused req_ncomp_notif device operation

The req_ncomp_notif device operation and its function are unused;
remove them.

Link: https://lore.kernel.org/r/20210311150921.23726-1-galpress@amazon.com
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/iwcm: Allow AFONLY binding for IPv6 addresses
Bernard Metzler [Fri, 19 Feb 2021 14:34:41 +0000 (15:34 +0100)]
RDMA/iwcm: Allow AFONLY binding for IPv6 addresses

Binding IPv6 address/port to AF_INET6 domain only is provided via
rdma_set_afonly(), but was not signalled to the provider.  Applications
like NFS/RDMA bind the same port to both IPv4 and IPv6 addresses
simultaneously and thus rely on it working correctly.
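
A usage sketch on the rdma_cm side (ids and sockaddr setup
abbreviated): the AF_INET6 listener is marked AF-only before bind so
that an AF_INET listener can take the same port:

  /* IPv6 listener: restrict to AF_INET6 only */
  rdma_set_afonly(cm_id6, 1);
  rdma_bind_addr(cm_id6, (struct sockaddr *)&sin6);

  /* IPv4 listener on the same port now succeeds */
  rdma_bind_addr(cm_id4, (struct sockaddr *)&sin4);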

Link: https://lore.kernel.org/r/20210219143441.1068-1-bmt@zurich.ibm.com
Tested-by: Chuck Lever <chuck.lever@oracle.com>
Tested-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Bernard Metzler <bmt@zurich.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoRDMA/hns: Use new SQ doorbell register for HIP09
Lang Cheng [Tue, 23 Feb 2021 12:20:33 +0000 (20:20 +0800)]
RDMA/hns: Use new SQ doorbell register for HIP09

HIP09 uses a new address space to map the SQ doorbell registers; the
doorbell of each QP is isolated in its own 64KB region, which can
improve performance in concurrency scenarios.
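
Conceptually the doorbell address becomes a per-QP window; a sketch
with made-up names (the real offsets come from the HIP09 register
layout):

  #define SQ_DB_STRIDE    (64 * 1024)     /* one 64KB window per QP */

  static void __iomem *sq_db(void __iomem *db_base, u32 qpn)
  {
          return db_base + (unsigned long)qpn * SQ_DB_STRIDE;
  }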

Link: https://lore.kernel.org/r/1614082833-23130-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
3 years agoLinux 5.12-rc2
Linus Torvalds [Sat, 6 Mar 2021 01:33:41 +0000 (17:33 -0800)]
Linux 5.12-rc2

3 years agoMerge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Linus Torvalds [Sat, 6 Mar 2021 01:27:59 +0000 (17:27 -0800)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma

Pull rdma fixes from Jason Gunthorpe:
 "Nothing special here, though Bob's regression fixes for rxe would have
  made it before the rc cycle had there not been such strong winter
  weather!

   - Fix corner cases in the rxe reference counting cleanup that are
     causing regressions in blktests for SRP

   - Two kdoc fixes so W=1 is clean

   - Missing error return in error unwind for mlx5

   - Wrong lock type nesting in IB CM"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
  RDMA/rxe: Fix errant WARN_ONCE in rxe_completer()
  RDMA/rxe: Fix extra deref in rxe_rcv_mcast_pkt()
  RDMA/rxe: Fix missed IB reference counting in loopback
  RDMA/uverbs: Fix kernel-doc warning of _uverbs_alloc
  RDMA/mlx5: Set correct kernel-doc identifier
  IB/mlx5: Add missing error code
  RDMA/rxe: Fix missing kconfig dependency on CRYPTO
  RDMA/cm: Fix IRQ restore in ib_send_cm_sidr_rep