linux-2.6-microblaze.git
Wenpeng Liang [Thu, 26 Aug 2021 13:37:34 +0000 (21:37 +0800)]
RDMA/hns: Adjust the order in which irq are requested and enabled

The driver should first allocate the workqueue and request the irq, and
only then enable the irq.

Link: https://lore.kernel.org/r/1629985056-57004-6-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Weihang Li [Thu, 26 Aug 2021 13:37:33 +0000 (21:37 +0800)]
RDMA/hns: Remove RST2RST error prints for hw v1

There is no need to print an error for hw_v1.

Link: https://lore.kernel.org/r/1629985056-57004-5-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Wenpeng Liang [Thu, 26 Aug 2021 13:37:32 +0000 (21:37 +0800)]
RDMA/hns: Remove dqpn filling when modify qp from Init to Init

According to the IB specification, the destination qpn is allowed to be
filled into the qpc only when the qp transitions from Init to RTR, so this
code is unused.

Link: https://lore.kernel.org/r/1629985056-57004-4-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Wenpeng Liang [Thu, 26 Aug 2021 13:37:31 +0000 (21:37 +0800)]
RDMA/hns: Fix QP's resp incomplete assignment

The resp passed to user space carries the enable flags of the QP. An
incomplete assignment will cause some features of the user space to be
disabled.

Fixes: 90ae0b57e4a5 ("RDMA/hns: Combine enable flags of qp")
Fixes: aba457ca890c ("RDMA/hns: Support owner mode doorbell")
Link: https://lore.kernel.org/r/1629985056-57004-3-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Wenpeng Liang [Thu, 26 Aug 2021 13:37:30 +0000 (21:37 +0800)]
RDMA/hns: Fix query destination qpn

The bit width of dqpn is 24 bits, so using a u8 causes a truncation error.
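
For illustration (hypothetical values, not the driver code):

  u32 dqpn = 0xabcdef;         /* a 24-bit destination QPN */
  u8 wrong = dqpn;             /* truncated to 0xef */
  u32 right = dqpn & 0xffffff; /* all 24 bits preserved */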

Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
Link: https://lore.kernel.org/r/1629985056-57004-2-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cai Huoqing [Mon, 23 Aug 2021 04:26:22 +0000 (12:26 +0800)]
RDMA/hfi1: Convert to SPDX identifier

Use an SPDX-License-Identifier line instead of the verbose license text.
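
Schematically, each file's change looks like this (the exact identifier
depends on the file's license terms; hfi1 is typically dual licensed):

  -/*
  - * This file is provided under a dual BSD/GPLv2 license ...
  - * (dozens of lines of license boilerplate)
  - */
  +// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause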

Link: https://lore.kernel.org/r/20210823042622.109-1-caihuoqing@baidu.com
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cai Huoqing [Mon, 23 Aug 2021 02:35:30 +0000 (10:35 +0800)]
IB/rdmavt: Convert to SPDX identifier

Use an SPDX-License-Identifier line instead of the verbose license text.

Link: https://lore.kernel.org/r/20210823023530.48-1-caihuoqing@baidu.com
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Junxian Huang [Wed, 25 Aug 2021 09:43:12 +0000 (17:43 +0800)]
RDMA/hns: Bugfix for incorrect association between dip_idx and dgid

dip_idx and dgid should have a one-to-one mapping, but when qp_num wraps
back to the start number, two different dgids may incorrectly be
associated with the same dip_idx.

One solution is to store the qp_nums that are not assigned to a dip_idx
in an array. When a dip_idx needs to be allocated to a new dgid, a spare
qp_num is extracted and assigned to the dip_idx.
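
A sketch of the idea (names are illustrative, not the actual hns
structures):

  /* qp_nums currently not bound to any dip_idx */
  u32 spare_qpn[MAX_QP_NUM];
  u32 spare_cnt;

  /* binding a new dgid: take a spare qp_num as the dip_idx */
  if (!dip_found_by_dgid(hr_dev, dgid))
          dip->dip_idx = spare_qpn[--spare_cnt];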

Fixes: f91696f2f053 ("RDMA/hns: Support congestion control type selection according to the FW")
Link: https://lore.kernel.org/r/1629884592-23424-4-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Junxian Huang <huangjunxian4@hisilicon.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Junxian Huang [Wed, 25 Aug 2021 09:43:11 +0000 (17:43 +0800)]
RDMA/hns: Bugfix for the missing assignment for dip_idx

When the dgid-dip_idx mapping already exists, the dip should be assigned.

Fixes: f91696f2f053 ("RDMA/hns: Support congestion control type selection according to the FW")
Link: https://lore.kernel.org/r/1629884592-23424-3-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Junxian Huang <huangjunxian4@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Junxian Huang [Wed, 25 Aug 2021 09:43:10 +0000 (17:43 +0800)]
RDMA/hns: Bugfix for data type of dip_idx

dip_idx is associated with qp_num, whose data type is u32. However,
dip_idx is incorrectly defined as a u8 in the hns_roce_dip struct, which
leads to data truncation during value assignment.

Fixes: f91696f2f053 ("RDMA/hns: Support congestion control type selection according to the FW")
Link: https://lore.kernel.org/r/1629884592-23424-2-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Junxian Huang <huangjunxian4@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Yixing Liu [Wed, 25 Aug 2021 09:19:29 +0000 (17:19 +0800)]
RDMA/hns: Fix incorrect lsn field

In the RNR NAK scenario, according to the specification, when no credit
is available, only the first fragment of the send request can be sent.
The LSN (Limit Sequence Number) field should be 0, or the entire packet
will be resent.

Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
Link: https://lore.kernel.org/r/1629883169-2306-1-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Shaokun Zhang [Wed, 25 Aug 2021 03:21:14 +0000 (11:21 +0800)]
RDMA/irdma: Remove the repeated declaration

Functions 'irdma_alloc_ws_node_id' and 'irdma_free_ws_node_id' are
declared twice, so remove the repeated declaration.

Link: https://lore.kernel.org/r/1629861674-53343-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Håkon Bugge [Thu, 12 Aug 2021 16:12:35 +0000 (18:12 +0200)]
RDMA/core/sa_query: Retry SA queries

A MAD packet is sent as an unreliable datagram (UD). SA requests are sent
as MAD packets. As such, SA requests or responses may be silently dropped.

IB Core's MAD layer has a timeout and retry mechanism, which, amongst
others, is used by RDMA CM. But it is not used by SA queries. Without
retries, an SA query waits out the entire specified timeout and then
returns an error in case of packet loss, leaving the ULP or user-land
process to perform the retry.

Fix this by taking advantage of the MAD layer's retry mechanism.

First, a check against a zero timeout is added in rdma_resolve_route(). In
send_mad(), we set the MAD layer timeout to one tenth of the specified
timeout and the number of retries to 10. The special case when timeout is
less than 10 is handled.
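
The resulting split looks roughly like this (schematic; see send_mad()
for the exact logic):

  /* spread the caller's timeout over the MAD layer retries */
  mad_buf->retries = 10;
  mad_buf->timeout_ms = timeout_ms / 10;
  if (!mad_buf->timeout_ms) {
          /* timeout < 10: at least 1 ms per send */
          mad_buf->timeout_ms = 1;
          mad_buf->retries = timeout_ms;
  }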

With this fix:

 # ucmatose -c 1000 -S 1024 -C 1

runs stably on an InfiniBand fabric. Without this fix, the behavior is
intermittent and it errors out with:

cmatose: event: RDMA_CM_EVENT_ROUTE_ERROR, error: -110

(110 is ETIMEDOUT)

Link: https://lore.kernel.org/r/1628784755-28316-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Yangyang Li [Thu, 19 Aug 2021 01:36:20 +0000 (09:36 +0800)]
RDMA/hns: Delete unused hns bitmap interface

The resources that used the hns bitmap interface (qp, cq, mr, pd, xrcd,
uar, srq) have been converted to the IDA interface, so hns' own
now-unused bitmap interface can be deleted.

Link: https://lore.kernel.org/r/1629336980-17499-4-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Yangyang Li [Thu, 19 Aug 2021 01:36:19 +0000 (09:36 +0800)]
RDMA/hns: Use IDA interface to manage srq index

Switch srq index allocation and release from hns' own bitmap interface
to the IDA interface.
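
The replacement uses the generic IDA API, roughly as follows (hns
wrapper names omitted):

  struct ida srq_ida;
  ida_init(&srq_ida);

  /* allocate an index in [min, max], instead of the bitmap alloc */
  int id = ida_alloc_range(&srq_ida, min, max, GFP_KERNEL);
  if (id < 0)
          return id;

  /* release the index, instead of the bitmap free */
  ida_free(&srq_ida, id);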

Link: https://lore.kernel.org/r/1629336980-17499-3-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Yangyang Li [Thu, 19 Aug 2021 01:36:18 +0000 (09:36 +0800)]
RDMA/hns: Use IDA interface to manage uar index

Switch uar index allocation and release from hns' own bitmap interface
to the IDA interface.

Link: https://lore.kernel.org/r/1629336980-17499-2-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Lang Cheng [Sat, 21 Aug 2021 09:53:27 +0000 (17:53 +0800)]
RDMA/hns: Ownerbit mode add control field

The ownerbit mode is for external card mode. Make it controlled by the
firmware.

Fixes: aba457ca890c ("RDMA/hns: Support owner mode doorbell")
Link: https://lore.kernel.org/r/1629539607-33217-4-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Yixing Liu [Sat, 21 Aug 2021 09:53:26 +0000 (17:53 +0800)]
RDMA/hns: Enable stash feature of HIP09

The stash feature is enabled by default on HIP09.

Fixes: f93c39bc9547 ("RDMA/hns: Add support for QP stash")
Fixes: bfefae9f108d ("RDMA/hns: Add support for CQ stash")
Link: https://lore.kernel.org/r/1629539607-33217-3-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Lang Cheng [Sat, 21 Aug 2021 09:53:25 +0000 (17:53 +0800)]
RDMA/hns: Remove unsupport cmdq mode

CMDQ supports non-interrupt mode only, and the firmware ignores this
mode flag, so remove it.

Fixes: a04ff739f2a9 ("RDMA/hns: Add command queue support for hip08 RoCE driver")
Link: https://lore.kernel.org/r/1629539607-33217-2-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Christophe JAILLET [Sun, 22 Aug 2021 12:24:44 +0000 (14:24 +0200)]
RDMA: switch from 'pci_' to 'dma_' API

The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)
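
As an example of the hand-modified part, the paired mask-setting calls
collapse into one (schematic):

  -  err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
  -  if (!err)
  -          err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
  +  err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));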

Link: https://lore.kernel.org/r/259e53b7a00f64bf081d41da8761b171b2ad8f5c.1629634798.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Li Zhijian [Mon, 23 Aug 2021 03:52:46 +0000 (11:52 +0800)]
IB/core: Remove deprecated current_seq comments

current_seq was removed by the commit below.

Fixes: 36f30e486dce ("IB/core: Improve ODP to use hmm_range_fault()")
Link: https://lore.kernel.org/r/20210823035246.3506-1-lizhijian@cn.fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Gal Pressman [Wed, 11 Aug 2021 15:11:30 +0000 (18:11 +0300)]
RDMA/efa: Rename vector field in efa_irq struct to irqn

The vector field's name is quite confusing; it is better referred to as
irqn.

Link: https://lore.kernel.org/r/20210811151131.39138-4-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Gal Pressman [Wed, 11 Aug 2021 15:11:29 +0000 (18:11 +0300)]
RDMA/efa: Remove unused cpu field from irq struct

The cpu field in the efa_irq struct is unused, so remove it.

Link: https://lore.kernel.org/r/20210811151131.39138-3-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Gioh Kim [Fri, 6 Aug 2021 11:21:12 +0000 (13:21 +0200)]
RDMA/rtrs: Remove (void) casting for functions

Casting to (void) does nothing, so remove these casts.
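
i.e. changes of the form (illustrative; the function name is
hypothetical and the return values were already being ignored):

  -  (void) alloc_sess_reqs(sess);
  +  alloc_sess_reqs(sess);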

Link: https://lore.kernel.org/r/20210806112112.124313-7-haris.iqbal@ionos.com
Suggested-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Gioh Kim [Fri, 6 Aug 2021 11:21:11 +0000 (13:21 +0200)]
RDMA/rtrs-clt: Fix counting inflight IO

There is a mismatch in counting inflight IO after changing the multipath
policy.

For example, we started a fio test with the round-robin policy and then
changed the policy to min-inflight. IOs created under the RR policy
finished under the min-inflight policy, so the inflight counter was only
decreased and ended up negative. Likewise, when we started a fio test
with the min-inflight policy and changed the policy to round-robin, IOs
created under the min-inflight policy increased the inflight counter,
but the counter was never decreased because the policy was round-robin
when the IO finished.

So IOs should be counted only if they were created under the
min-inflight policy, regardless of which policy is active when the IO
finishes.

This patch adds an mp_policy field to struct rtrs_clt_io_req and stores
the multipath policy in effect when the rtrs_clt_io_req object is
created. rtrs-clt then checks the mp_policy of the rtrs_clt_io_req
itself instead of that of the struct rtrs_clt.
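
A sketch of the resulting shape (simplified field and enum names):

  struct rtrs_clt_io_req {
          ...
          enum rtrs_mp_policy mp_policy; /* policy at creation time */
  };

  /* on completion: only undo what was counted at creation */
  if (req->mp_policy == MP_POLICY_MIN_INFLIGHT)
          atomic_dec(&stats->inflight);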

Link: https://lore.kernel.org/r/20210806112112.124313-6-haris.iqbal@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Gioh Kim [Fri, 6 Aug 2021 11:21:10 +0000 (13:21 +0200)]
RDMA/rtrs: Remove all likely and unlikely

An IO performance test with fio after swapping the likely and unlikely
macros in all if-statements shows no difference. They do not help the
performance of rtrs.
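
i.e. changes of the form (illustrative):

  -  if (unlikely(wc->status != IB_WC_SUCCESS)) {
  +  if (wc->status != IB_WC_SUCCESS) {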

Thanks to Haakon Bugge for the test scenario.

The fio test did random read on 32 rnbd devices and 64 processes.
Test environment:
- Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
- 376G memory
- kernel version: 5.4.86
- gcc version: gcc (Debian 8.3.0-6) 8.3.0
- Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

Test result:
- before swapping:       IOPS=829k, BW=3239MiB/s
- after swapping:        IOPS=829k, BW=3238MiB/s
- remove all (un)likely: IOPS=829k, BW=3238MiB/s

Link: https://lore.kernel.org/r/20210806112112.124313-5-haris.iqbal@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Jack Wang [Fri, 6 Aug 2021 11:21:08 +0000 (13:21 +0200)]
RDMA/rtrs: Remove unused functions

The two functions are unused, so just remove them.

Link: https://lore.kernel.org/r/20210806112112.124313-3-haris.iqbal@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Md Haris Iqbal [Fri, 6 Aug 2021 11:21:07 +0000 (13:21 +0200)]
RDMA/rtrs-clt: During add_path change for_new_clt according to path_num

When all the paths are removed for a session, the addition of the first
path is like a new session for the storage server.

Hence, for_new_clt has to be set to 1.

Link: https://lore.kernel.org/r/20210806112112.124313-2-haris.iqbal@ionos.com
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Jason Gunthorpe [Fri, 20 Aug 2021 14:26:43 +0000 (11:26 -0300)]
Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux

Saeed Mahameed says:
====================

This pulls the mlx5-next branch, which includes patches already reviewed
on the net-next and rdma mailing lists.

1) mlx5 single E-Switch FDB for lag

2) IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq

3) Add DCS caps & fields support

We need this in net-next as multiple features are dependent on the
single FDB feature.

====================

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* mellanox/mlx5-next:
  net/mlx5: Lag, Create shared FDB when in switchdev mode
  net/mlx5: E-Switch, add logic to enable shared FDB
  net/mlx5: Lag, move lag destruction to a workqueue
  net/mlx5: Lag, properly lock eswitch if needed
  net/mlx5: Add send to vport rules on paired device
  net/mlx5: E-Switch, Add event callback for representors
  net/mlx5e: Use shared mappings for restoring from metadata
  net/mlx5e: Add an option to create a shared mapping
  net/mlx5: E-Switch, set flow source for send to uplink rule
  RDMA/mlx5: Add shared FDB support
  {net, RDMA}/mlx5: Extend send to vport rules
  RDMA/mlx5: Fill port info based on the relevant eswitch
  net/mlx5: Lag, add initial logic for shared FDB
  net/mlx5: Return mdev from eswitch
  IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq

Håkon Bugge [Wed, 11 Aug 2021 17:25:36 +0000 (19:25 +0200)]
RDMA/core/sa_query: Remove unused function

ib_sa_service_rec_query() was introduced in kernel v2.6.13 by
commit cbae32c56314 ("[PATCH] IB: Add Service Record support to SA client")
in 2005. It was not used then and has never been used since.

Remove it and the related functions/structs.

Link: https://lore.kernel.org/r/1628702736-12651-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Prabhakar Kushwaha [Wed, 11 Aug 2021 05:16:50 +0000 (08:16 +0300)]
RDMA/qedr: Move variables reset to qedr_set_common_qp_params()

The qedr code is tightly coupled with both of the existing INIT
transitions. During the first INIT transition all variables are reset,
and the RESET state is checked in post_recv() before any posting.

Commit dc70f7c3ed34 ("RDMA/cma: Remove unnecessary INIT->INIT transition")
exposed this bug.

So move the variable reset to qedr_set_common_qp_params() and also avoid
the RESET state check in post_recv().

Link: https://lore.kernel.org/r/20210811051650.14914-1-pkushwaha@marvell.com
Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Christoph Hellwig [Tue, 10 Aug 2021 15:17:11 +0000 (17:17 +0200)]
RDMA/hfi1: Stop using seq_get_buf in _driver_stats_seq_show

Just use seq_write to copy the stats into the seq_file buffer instead of
poking holes into the seq_file abstraction.
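
Schematically, the seq_file usage becomes:

  /* before: grab the buffer behind seq_file's back */
  buf = seq_get_buf(s, &size);
  memcpy(buf, stats, len);
  seq_commit(s, len);

  /* after: let seq_file manage its own buffer */
  seq_write(s, stats, len);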

Link: https://lore.kernel.org/r/20210810151711.1795374-1-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Tested-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Christophe JAILLET [Thu, 5 Aug 2021 17:43:36 +0000 (19:43 +0200)]
RDMA/rtrs: Remove a useless kfree()

'sess->rbufs' is known to be NULL here, so there is no point in kfree'ing
it.

Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality")
Link: https://lore.kernel.org/r/9a57c9f837fa2c6f0070578a1bc4840688f62962.1628185335.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
YueHaibing [Wed, 4 Aug 2021 12:59:39 +0000 (20:59 +0800)]
RDMA/hns: Fix return in hns_roce_rereg_user_mr()

If re-registering an MR in hns_roce_rereg_user_mr(), we should return NULL
instead of passing 0 to ERR_PTR for clarity.
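
That is (schematic):

  -  return ERR_PTR(0);  /* evaluates to NULL but obscures the intent */
  +  return NULL;        /* explicit: no new MR object is returned */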

Fixes: 4e9fc1dae2a9 ("RDMA/hns: Optimize the MR registration process")
Link: https://lore.kernel.org/r/20210804125939.20516-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:59 +0000 (16:19 -0700)]
net/mlx5: Lag, Create shared FDB when in switchdev mode

If both eswitches are in switchdev mode and the uplink representors
are enslaved to the same bond device, create a shared FDB configuration.

When moving to shared FDB mode, not only does the hardware need to be
configured, but the RDMA driver needs to reconfigure itself as well.

When such a change is made, unload the RDMA devices, configure the
hardware and load the RDMA representors.

When destroying the lag (which can happen if a PCI function is unbound,
the driver is unloaded, or simply a netdev is removed from the bond),
make sure to restore the system to the previous state, if possible.

For example, if a PCI function is unbound there is no need to load the
representors as the device is going away.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:58 +0000 (16:19 -0700)]
net/mlx5: E-Switch, add logic to enable shared FDB

Shared FDB allows to direct traffic from all the vports in the HCA to a
single eswitch. In order to do that three things are needed.

1) Point the ingress ACL of the slave uplink to that of the master.
   With this, wire traffic from both uplinks will reach the same eswitch
   with the same metadata where a single steering rule can catch traffic
   from both ports.

2) Set the FDB root flow table of the slave's eswitch to that of the
   master. As this flow table can change dynamically make sure to
   sync it on any set root flow table FDB command.
   This will make sure traffic from SFs, VFs, ECPFs and PFs reach the
   master eswitch.

3) Split wire traffic at the eswitch manager egress ACL so that it's
   directed to the native eswitch manager. We only treat wire traffic
   from both ports the same at the eswitch level. If such traffic wasn't
   handled in the eswitch it needs to reach the right representor to be
   processed by software. For example LACP packets should *always*
   reach the right uplink representor for correct operation.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:57 +0000 (16:19 -0700)]
net/mlx5: Lag, move lag destruction to a workqueue

If a netdev is removed from the lag, the lag should be destroyed.
With downstream patches this might trigger a reconfiguration of
representors on a different eswitch, and as such we don't have the
proper locking to do so from this path. Move the destruction to be done
by the workqueue.

As the destruction won't affect the netdev side, it is okay to do so.
The RDMA side will be reconfigured, and it is already coded to handle
such reconfiguration.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:56 +0000 (16:19 -0700)]
net/mlx5: Lag, properly lock eswitch if needed

Currently when doing hardware lag we check the eswitch mode, but as
this isn't done under a lock the check isn't valid.

As the code needs to sync between two different devices, extra care is
needed.

- When going to change eswitch mode, if hardware lag is active destroy it.
- While changing eswitch modes block any hardware bond creation.
- Delay handling bonding events until there are no mode changes in
  progress.
- When attaching a new mdev to lag, block until there is no mode change
  in progress. In order for the mode change to finish the interface lock
  will have to be taken. Release the lock and sleep for 100ms to
  allow forward progress. As this is a very rare condition (can happen if
  the user unbinds and binds a PCI function while also changing eswitch
  mode of the other PCI function) it has no real world impact.

As taking multiple eswitch mode locks is now required, lockdep will
complain about a possible deadlock. Register a key per eswitch to make
lockdep happy.
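
The per-instance key pattern (generic lockdep API; the field names here
are illustrative):

  lockdep_register_key(&esw->mode_lock_key);
  init_rwsem(&esw->mode_lock);
  lockdep_set_class(&esw->mode_lock, &esw->mode_lock_key);
  ...
  /* on teardown */
  lockdep_unregister_key(&esw->mode_lock_key);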

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:55 +0000 (16:19 -0700)]
net/mlx5: Add send to vport rules on paired device

When two mlx5 devices are paired in switchdev mode, always offload the
send-to-vport rule to the peer E-Switch. This allows abstracting the
logic of when this is really necessary (single FDB) and combining the
logic of both cases into one.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:54 +0000 (16:19 -0700)]
net/mlx5: E-Switch, Add event callback for representors

This callback allows notifying representors about relevant events when
in OFFLOADS mode. In downstream patches, this will be used to notify
about PAIR/UNPAIR devcom events.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Roi Dayan [Tue, 3 Aug 2021 23:19:53 +0000 (16:19 -0700)]
net/mlx5e: Use shared mappings for restoring from metadata

FTEs are added with mapped metadata which is saved per eswitch.
When the uplink reps are bonded and we are in single FDB mode,
we could fail to find metadata which was stored in one eswitch's
mapping but not the other's, or which was stored with a different id.
To resolve this issue, use a shared mapping between the eswitch ports.
There is no conflict in using a single mapping per type between the
ports.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Roi Dayan [Tue, 3 Aug 2021 23:19:52 +0000 (16:19 -0700)]
net/mlx5e: Add an option to create a shared mapping

The shared mapping is identified by an id and type.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Ariel Levkovich [Tue, 3 Aug 2021 23:19:51 +0000 (16:19 -0700)]
net/mlx5: E-Switch, set flow source for send to uplink rule

Set the flow source param to local vport for the uplink rep
send-to-vport rule.

This will comply with the recent changes in SW steering that
use the flow source as an indication for the rule type - rx or tx.

Since the uplink send-to-vport rule forwards traffic to the wire, it
has to indicate that it is an sx rule and can't use the 'any port'
value in the flow source.

Signed-off-by: Ariel Levkovich <lariel@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:50 +0000 (16:19 -0700)]
RDMA/mlx5: Add shared FDB support

Shared FDB allows creating a single RDMA device that holds representors
from both eswitches. As shared FDB is only active when both uplink
representors are enslaved, there is a single RDMA port that represents
both uplinks.

The number of ports is the number of vports on both eswitches minus
one, as we only need one port for both uplinks.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:49 +0000 (16:19 -0700)]
{net, RDMA}/mlx5: Extend send to vport rules

In shared FDB there is only one active eswitch, and it receives traffic
from all representors and all vports in the HCA.

While the Ethernet representor will always reside on its native PF,
the IB representor will not. Extend send-to-vport rule creation to
support such flows. We need to account for the source vport that sends
the traffic (on which the representor resides) and the target eswitch
that the traffic should reach.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:48 +0000 (16:19 -0700)]
RDMA/mlx5: Fill port info based on the relevant eswitch

In shared FDB a single RDMA device can have representors that are
connected to two different eswitches. Use the right eswitch when
preparing the response to userspace.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:47 +0000 (16:19 -0700)]
net/mlx5: Lag, add initial logic for shared FDB

As shared FDB requires changes in two subsystems, first expose the
needed core functions so the RDMA side can be changed.

mlx5_lag_is_master(): Returns true if a given mlx5 device is the lag master.
mlx5_lag_is_shared_fdb(): Returns true if the lag mode is shared FDB.
mlx5_lag_get_peer_mdev(): Returns the peer mdev in the lag.

The mentioned functions will be used by downstream patches in order
to add support for shared FDB for the RDMA side.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Tue, 3 Aug 2021 23:19:46 +0000 (16:19 -0700)]
net/mlx5: Return mdev from eswitch

Export a function so users can retrieve the Mellanox device that manages
the eswitch from the eswitch device.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Leon Romanovsky [Tue, 3 Aug 2021 18:20:38 +0000 (21:20 +0300)]
RDMA/core: Create clean QP creations interface for uverbs

Unify the QP creation interface to provide a clean approach for creating
both XRC_TGT and regular QPs.

Link: https://lore.kernel.org/r/5cd50e7d8ad9112545a1a61dea62799a5cb3224a.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Tue, 3 Aug 2021 18:20:37 +0000 (21:20 +0300)]
RDMA/core: Properly increment and decrement QP usecnts

The QP usecnts were incremented through the QP attributes structure but
decremented through the QP itself. Rely on the ib_create_qp_user() code
that initializes all QP parameters prior to returning to the user, and
increment exactly the way destroy decrements.

Link: https://lore.kernel.org/r/25d256a3bb1fc480b77d7fe439817b993de48610.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Tue, 3 Aug 2021 18:20:36 +0000 (21:20 +0300)]
RDMA/core: Configure selinux QP during creation

All QP creation flows called ib_create_qp_security(), but each did so
differently. This created the need for exclusion conditions for XRC_TGT,
because such a QP already had a selinux configuration call.

In order to fix it, move ib_create_qp_security() to the general QP
creation routine.

Link: https://lore.kernel.org/r/4d7cd6f5828aca37fb62283e6b126b73ab86b18c.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Tue, 3 Aug 2021 18:20:35 +0000 (21:20 +0300)]
RDMA/core: Reorganize create QP low-level functions

The low-level create QP function grew to be larger than any sensible
inline function should be. The inline attribute is not really needed for
that function, and it can be implemented as an exported symbol.

Link: https://lore.kernel.org/r/2c08709d86f876c3dfb77684357b2a939e570ca4.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Tue, 3 Aug 2021 18:20:34 +0000 (21:20 +0300)]
RDMA/core: Remove protection from wrong in-kernel API usage

ib_create_named_qp() is a kernel verb that is not used with
user-supplied attributes. In such a case, it is the ULP's responsibility
to provide valid QP attributes.

The in-kernel API shouldn't check them, exactly like other functions
that don't check device capabilities.

Link: https://lore.kernel.org/r/b9b9e981d1af148b750750196e686199dbbf61f8.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Tue, 3 Aug 2021 18:20:33 +0000 (21:20 +0300)]
RDMA/core: Delete duplicated and unreachable code

ib_create_named_qp() is a kernel verb, and no kernel users exist that
use an XRC_INI QP, so that QP path is unreachable. In addition, delete
duplicated assignments of QP attributes from the initialization
structure.

Link: https://lore.kernel.org/r/1b4c0d1def5f8f6d26839e14d19da950cc4a0b05.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Tue, 3 Aug 2021 18:20:32 +0000 (21:20 +0300)]
RDMA/mlx5: Delete not-available udata check

XRC_TGT QPs are created through kernel verbs and don't have udata at all.

Fixes: 6eefa839c4dd ("RDMA/mlx5: Protect from kernel crash if XRC_TGT doesn't have udata")
Fixes: e383085c2425 ("RDMA/mlx5: Set ECE options during QP create")
Link: https://lore.kernel.org/r/b68228597e730675020aa5162745390a2d39d3a2.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:51 +0000 (14:39 +0300)]
RDMA/mlx5: Drop in-driver verbs object creations

There is no real value in bypassing IB/core APIs for creating standard
objects with standard types. The open-coded variant didn't have any
restrack task management calls, which caused such objects to be missing
when running rdmatool.

Link: https://lore.kernel.org/r/f745590e5fb7d56f90fdb25f64ee3983ba17e1e4.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:50 +0000 (14:39 +0300)]
RDMA: Globally allocate and release QP memory

Convert the QP object to follow the IB/core general allocation scheme.
That change allows us to make sure that restrack properly krefs the
memory.

Link: https://lore.kernel.org/r/48e767124758aeecc433360ddd85eaa6325b34d9.1627040189.git.leonro@nvidia.com
Reviewed-by: Gal Pressman <galpress@amazon.com> #efa
Tested-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> #rdma and core
Tested-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Tested-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:49 +0000 (14:39 +0300)]
RDMA/rdmavt: Decouple QP and SGE lists allocations

The rdmavt QP has fields that are needed for both the control and the
data path. Such a mixed declaration led to a very specific allocation
flow, with kzalloc_node and the SGE list embedded into struct rvt_qp.

This patch separates QP creation into two parts: regular memory
allocation for the control path and specific code for the SGE list,
while access to the latter is performed through a dereferenced pointer.

Such a pointer and its context are expected to be in the cache, so the
performance difference is expected to be negligible, if any exists.
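
Schematically (simplified from the actual rdmavt code):

  /* before: control fields and SGE list in one node-local block */
  qp = kzalloc_node(sizeof(*qp) + sge_size, GFP_KERNEL, node);

  /* after: control path allocated normally, SGE list separately */
  qp = kzalloc(sizeof(*qp), GFP_KERNEL);
  qp->r_sg_list = kzalloc_node(sge_size, GFP_KERNEL, node);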

Link: https://lore.kernel.org/r/f66c1e20ccefba0db3c69c58ca9c897f062b4d1c.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:48 +0000 (14:39 +0300)]
RDMA/mlx5: Rework custom driver QP type creation

Starting from commit 2b1f747071c5 ("RDMA/core: Allow drivers to disable
restrack DB"), restrack is able to handle non-standard QP types as well.

That change allows us to rewrite the custom QP calls to their IB/core
counterparts, so the general QP creation flow is used even for the
driver QP types.

Link: https://lore.kernel.org/r/51682ab82298748941f38bd23ee3bf77ef1cab7b.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:47 +0000 (14:39 +0300)]
RDMA/mlx5: Delete device resource mutex that didn't protect anything

The dev->devr.mutex was intended to protect GSI QP pointer change in the
struct mlx5_ib_port_resources when it is accessed from the
pkey_change_work. However, that pointer isn't changed at runtime, and
once IB/core adds MAD support, it stays stable.

Link: https://lore.kernel.org/r/6e338c561033df20d92e1371fc6a7a0d93aad945.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:46 +0000 (14:39 +0300)]
RDMA/mlx5: Cancel pkey work before destroying device resources

In the driver release flow, we ensure that the notifier is disabled and
no new work can be added to the pkey_change_handler. This means we can
cancel that handler before destroying resources, making our unwind
routine symmetrical to the allocation one.

Link: https://lore.kernel.org/r/f2b1ea1bad952e4e7a48a6f731de9e0344986b29.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:45 +0000 (14:39 +0300)]
RDMA/efa: Remove double QP type assignment

The QP type is set by the IB/core and shouldn't be set in the driver.

Fixes: 40909f664d27 ("RDMA/efa: Add EFA verbs implementation")
Link: https://lore.kernel.org/r/838c40134c1590167b888ca06ad51071139ff2ae.1627040189.git.leonro@nvidia.com
Acked-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:44 +0000 (14:39 +0300)]
RDMA/hns: Don't overwrite supplied QP attributes

QP attributes that were supplied by IB/core already have all parameters
set when they are passed to the driver. The drivers are not supposed to
change anything in struct ib_qp_init_attr.

Fixes: 66d86e529dd5 ("RDMA/hns: Add UD support for HIP09")
Link: https://lore.kernel.org/r/5987138875e8ade9aa339d4db6e1bd9694ed4591.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 11:39:43 +0000 (14:39 +0300)]
RDMA/hns: Don't skip IB creation flow for regular RC QP

The call to the internal QP creation function skips the QP creation
checks and misses adding such device QPs to the restrack DB.

As a preparation for the general allocation scheme, convert hns to use
the proper API.

Link: https://lore.kernel.org/r/7b236c15f7d5abb368958297ac6962d8459cb824.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Prabhakar Kushwaha [Thu, 29 Jul 2021 15:17:32 +0000 (18:17 +0300)]
RDMA/qedr: Improve error logs for rdma_alloc_tid error return

Use the -EINVAL return value to identify whether an error is returned
because of "Out of MR resources" or for some other reason.

Link: https://lore.kernel.org/r/20210729151732.30995-2-pkushwaha@marvell.com
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Prabhakar Kushwaha [Thu, 29 Jul 2021 15:17:31 +0000 (18:17 +0300)]
RDMA/qed: Use accurate error num in qed_cxt_dynamic_ilt_alloc

To return a more accurate error, use -EOPNOTSUPP instead of -EINVAL.

Link: https://lore.kernel.org/r/20210729151732.30995-1-pkushwaha@marvell.com
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cai Huoqing [Thu, 29 Jul 2021 08:23:46 +0000 (16:23 +0800)]
RDMA/hfi1: Fix typo in comments

Remove the repeated word 'the' from comments.

Link: https://lore.kernel.org/r/20210729082346.1882-1-caihuoqing@baidu.com
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Wed, 28 Jul 2021 13:04:12 +0000 (16:04 +0300)]
docs: Fix infiniband uverbs minor number

Since the beginning of the InfiniBand subsystem, the uverbs char
devices have started from 192 as a minor number, see
commit bc38a6abdd5a ("[PATCH] IB uverbs: core implementation").

This patch updates the admin guide documentation to reflect that.

Fixes: 9d85025b0418 ("docs-rst: create an user's manual book")
Link: https://lore.kernel.org/r/bad03e6bcde45550c01e12908a6fe7dfa4770703.1627477347.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 14:08:57 +0000 (17:08 +0300)]
RDMA/iwpm: Rely on the rdma_nl_[un]register() to ensure that requests are valid

The core netlink code already guarantees that no netlink callback can be
running outside the rdma_nl_register/unregister() region, and this
registration happens during module init/exit. Thus iwpm_valid_client()
can never fail. Remove it.

Link: https://lore.kernel.org/r/a9f05a78f9996bf6ea47099b5e02671bf742f5ab.1627048781.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 14:08:56 +0000 (17:08 +0300)]
RDMA/iwpm: Remove not-needed reference counting

iwpm_init() and iwpm_exit() are called only once, during iw_cm module
load. This makes the whole reference count implementation unnecessary.

Link: https://lore.kernel.org/r/1778ded873ba58c9fadc5bb25038de1cec843bec.1627048781.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky [Fri, 23 Jul 2021 14:08:55 +0000 (17:08 +0300)]
RDMA/iwcm: Release resources if iw_cm module initialization fails

A failure during iw_cm module initialization left the system with
partially unreleased memory and other resources. Rewrite the module
init/exit routines in such a way that netlink commands are opened only
after successful initialization.

Fixes: b493d91d333e ("iwcm: common code for port mapper")
Link: https://lore.kernel.org/r/b01239f99cb1a3e6d2b0694c242d89e6410bcd93.1627048781.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Xiyu Yang [Mon, 19 Jul 2021 06:00:53 +0000 (14:00 +0800)]
RDMA/hfi1: Convert from atomic_t to refcount_t on hfi1_devdata->user_refcount

The refcount_t type and its corresponding API protect refcounters from
accidental underflow and overflow, and thus from further use-after-free
situations.
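
The conversion follows the usual pattern (schematic):

  -  atomic_t user_refcount;
  +  refcount_t user_refcount;

  -  atomic_inc(&dd->user_refcount);
  +  refcount_inc(&dd->user_refcount);

  -  if (atomic_dec_and_test(&dd->user_refcount))
  +  if (refcount_dec_and_test(&dd->user_refcount))
             complete(&dd->user_comp);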

Link: https://lore.kernel.org/r/1626674454-56075-1-git-send-email-xiyuyang19@fudan.edu.cn
Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
Tested-by: Josh Fisher <josh.fisher@cornelisnetworks.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Mike Marciniszyn [Thu, 15 Jul 2021 16:04:45 +0000 (12:04 -0400)]
IB/hfi1: Adjust pkey entry in index 0

It is possible for the primary IPoIB network device associated with any
RDMA device to fail to join certain multicast groups preventing IPv6
neighbor discovery and possibly other network ULPs from working
correctly. The IPv4 broadcast group is not affected as the IPoIB network
device handles joining that multicast group directly.

This is because the primary IPoIB network device uses the pkey at index
0 in the associated RDMA device's pkey table. Anytime the pkey value of
index 0 changes, the primary IPoIB network device automatically modifies
its broadcast address (i.e. /sys/class/net/[ib0]/broadcast), since the
broadcast address includes the pkey value, and then bounces carrier. This
includes initial pkey assignment, such as when the pkey at index 0
transitions from the opa default of invalid (0x0000) to some value such as
the OPA default pkey for Virtual Fabric 0: 0x8001 or when the fabric
manager is restarted with a configuration change causing the pkey at index
0 to change. Many network ULPs, including the Linux IPv6 stack, are not
sensitive to the carrier bounce and do not expect the broadcast address
to change. This problem does not affect IPoIB child network devices, as
their pkey value is constant for all time.

To mitigate this issue, change the default pkey at index 0 to 0x8001 to
cover the predominant case and avoid issues as ipoib comes up and the FM
sweeps.

At some point, ipoib multicast support should automatically fix
non-broadcast addresses as it does with the primary broadcast address.

Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Link: https://lore.kernel.org/r/20210715160445.142451.47651.stgit@awfm-01.cornelisnetworks.com
Suggested-by: Josh Collier <josh.d.collier@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Mike Marciniszyn [Thu, 15 Jul 2021 16:04:40 +0000 (12:04 -0400)]
IB/hfi1: Indicate DMA wait when txq is queued for wakeup

There is no counter for dmawait in AIP, which hampers debugging
performance issues.

Add the counter increment when the txq is queued.

Fixes: d99dc602e2a5 ("IB/hfi1: Add functions to transmit datagram ipoib packets")
Link: https://lore.kernel.org/r/20210715160440.142451.8278.stgit@awfm-01.cornelisnetworks.com
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tal Gilboa [Sun, 18 Jul 2021 11:54:13 +0000 (14:54 +0300)]
IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq

is_apu_thread_cq() used to detect CQs which are attached to APU
threads. This was extended to support other elements as well,
so the function was renamed to is_apu_cq().

c_eqn_or_apu_element was extended from 8 bits to 32 bits, which wasn't
reflected when APU support was first introduced.

Acked-by: Michael S. Tsirkin <mst@redhat.com> # vdpa
Signed-off-by: Tal Gilboa <talgi@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Jason Gunthorpe [Tue, 20 Jul 2021 18:11:38 +0000 (15:11 -0300)]
Merge branch 'mlx5_dcs' into rdma.git for-next

Leon Romanovsky says:

====================
Add ConnectX DCS offload support

This patchset from Lior adds DCI stream channel (DCS) support.

DCS is an offload of the SW load balancing of DC initiator work requests.

A single DC QP initiator (DCI) can be connected to only one target at a
time and can't start a new connection until the previous work request is
completed.

This limitation causes delays when the initiator process needs to
transfer data to multiple targets at the same time.
====================

* branch 'mlx5_dcs':
  RDMA/mlx5: Add DCS offload support
  RDMA/mlx5: Separate DCI QP creation logic
  net/mlx5: Add DCS caps & fields support

Lior Nahmanson [Mon, 21 Jun 2021 07:06:16 +0000 (10:06 +0300)]
RDMA/mlx5: Add DCS offload support

DCS is an offload of the SW load balancing of DC initiator work requests.

A single DCI can be connected to only one target at a time and can't
start a new connection until the previous work request is completed.
This limitation causes delays when the initiator process needs to
transfer data to multiple targets at the same time. The SW solution is
to use a process that handles and spreads the work requests over many
DCIs according to their destinations.

This feature offloads that process to reduce the load on the CPU and
improve performance.

Link: https://lore.kernel.org/r/491c2c2afdb5b07de7f03eab3f93cf0704549dbc.1624258894.git.leonro@nvidia.com
Reviewed-by: Meir Lichtinger <meirl@nvidia.com>
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Lior Nahmanson [Mon, 21 Jun 2021 07:06:15 +0000 (10:06 +0300)]
RDMA/mlx5: Separate DCI QP creation logic

This patch isolates the DCI QP creation logic into a separate function,
which reduces complexity when adding new features to the DCI QP without
interfering with other QP types.

The code was copied from create_user_qp(), taking only the DCI-relevant bits.

Link: https://lore.kernel.org/r/b4530bdd999349c59691224f016ff1efb5dc3b92.1624258894.git.leonro@nvidia.com
Reviewed-by: Meir Lichtinger <meirl@nvidia.com>
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Lior Nahmanson [Mon, 28 Dec 2020 08:38:12 +0000 (10:38 +0200)]
net/mlx5: Add DCS caps & fields support

These fields will be needed when adding support for DCS offload:

max_dci_stream_channels - maximum DCI stream channels supported per DCI.
max_dci_errored_streams - maximum DCI errored stream channels supported
per DCI before the DCI moves to the error state.

Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Bob Pearson [Wed, 7 Jul 2021 04:00:41 +0000 (23:00 -0500)]
RDMA/rxe: Fix types in rxe_icrc.c

Currently the ICRC is generated as a u32 type and then forced to a
__be32 and stored into the ICRC field in the packet, but the actual
type of the ICRC is __be32. This patch replaces u32 with __be32 and
eliminates the casts. The computation is exactly the same as the
original, but the types are more consistent.
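
The flavor of the change, simplified from the patch (names follow the
rxe helpers this series touches):

    __be32 icrc;                     /* was u32 plus forced casts */

    icrc = rxe_icrc_hdr(skb, pkt);   /* now returns __be32 */
    icrc = rxe_crc32(rxe, icrc, payload_addr(pkt), payload_size(pkt));
    *icrcp = ~icrc;                  /* stored directly, no cast needed */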

Link: https://lore.kernel.org/r/20210707040040.15434-10-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Add kernel-doc comments to rxe_icrc.c
Bob Pearson [Wed, 7 Jul 2021 04:00:40 +0000 (23:00 -0500)]
RDMA/rxe: Add kernel-doc comments to rxe_icrc.c

This patch adds kernel-doc style comments to rxe_icrc.c
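
For reference, the kernel-doc shape used (the exact wording in the
patch may differ):

    /**
     * rxe_icrc_hdr() - Compute the partial ICRC over the packet headers
     * @skb: packet buffer
     * @pkt: packet information
     *
     * Return: the partial ICRC
     */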

Link: https://lore.kernel.org/r/20210707040040.15434-9-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Move crc32 init code to rxe_icrc.c
Bob Pearson [Wed, 7 Jul 2021 04:00:39 +0000 (23:00 -0500)]
RDMA/rxe: Move crc32 init code to rxe_icrc.c

This patch collects the code from rxe_register_device() that sets up the
crc32 calculation into a subroutine rxe_icrc_init() in rxe_icrc.c.
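
A sketch of the consolidated init (error reporting trimmed):

    int rxe_icrc_init(struct rxe_dev *rxe)
    {
            struct crypto_shash *tfm;

            tfm = crypto_alloc_shash("crc32", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            rxe->tfm = tfm;
            return 0;
    }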

Link: https://lore.kernel.org/r/20210707040040.15434-8-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Fixup rxe_icrc_hdr
Bob Pearson [Wed, 7 Jul 2021 04:00:38 +0000 (23:00 -0500)]
RDMA/rxe: Fixup rxe_icrc_hdr

rxe_icrc_hdr() in rxe_icrc.c is no longer shared. This patch makes it
static and changes the parameter list to match the other routines there.

Link: https://lore.kernel.org/r/20210707040040.15434-7-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Move rxe_crc32 to a subroutine
Bob Pearson [Wed, 7 Jul 2021 04:00:37 +0000 (23:00 -0500)]
RDMA/rxe: Move rxe_crc32 to a subroutine

Move rxe_crc32() from rxe.h to rxe_icrc.c as a static local function.

Link: https://lore.kernel.org/r/20210707040040.15434-6-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Move ICRC generation to a subroutine
Bob Pearson [Wed, 7 Jul 2021 04:00:36 +0000 (23:00 -0500)]
RDMA/rxe: Move ICRC generation to a subroutine

Isolate ICRC generation into a single subroutine named rxe_generate_icrc()
in rxe_icrc.c. Remove scattered crc generation code from elsewhere.

Link: https://lore.kernel.org/r/20210707040040.15434-5-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Fixup rxe_send and rxe_loopback
Bob Pearson [Wed, 7 Jul 2021 04:00:35 +0000 (23:00 -0500)]
RDMA/rxe: Fixup rxe_send and rxe_loopback

Fixup rxe_send() and rxe_loopback() in rxe_net.c to have the same calling
sequence. This patch makes them static and have the same parameter list
and return value.
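
After this patch both functions present the same shape (parameter
order as assumed here; simplified):

    static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt);
    static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt);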

Link: https://lore.kernel.org/r/20210707040040.15434-4-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Move rxe_xmit_packet to a subroutine
Bob Pearson [Wed, 7 Jul 2021 04:00:34 +0000 (23:00 -0500)]
RDMA/rxe: Move rxe_xmit_packet to a subroutine

rxe_xmit_packet() was an overlong inline subroutine. This patch moves it
into rxe_net.c as an ordinary subroutine.

Link: https://lore.kernel.org/r/20210707040040.15434-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Move ICRC checking to a subroutine
Bob Pearson [Wed, 7 Jul 2021 04:00:33 +0000 (23:00 -0500)]
RDMA/rxe: Move ICRC checking to a subroutine

Move the code in rxe_recv() that checks the ICRC on incoming packets to a
subroutine rxe_check_icrc() and move that to rxe_icrc.c.
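
Simplified shape of the new subroutine (padding handling and the exact
byte-order juggling omitted):

    static int rxe_check_icrc(struct sk_buff *skb, struct rxe_pkt_info *pkt)
    {
            __be32 *icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
            __be32 icrc;

            icrc = rxe_icrc_hdr(skb, pkt);
            icrc = rxe_crc32(pkt->rxe, icrc, payload_addr(pkt),
                             payload_size(pkt));

            return ~icrc == *icrcp ? 0 : -EBADMSG;
    }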

Link: https://lore.kernel.org/r/20210707040040.15434-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoIB/core: Read subnet_prefix in ib_query_port via cache.
Anand Khoje [Mon, 12 Jul 2021 12:26:25 +0000 (17:56 +0530)]
IB/core: Read subnet_prefix in ib_query_port via cache.

ib_query_port() calls device->ops.query_port() to get the port
attributes. The method of querying is device driver specific.  The same
function calls device->ops.query_gid() to get the GID and extract the
subnet_prefix (gid_prefix).

The GID and subnet_prefix are stored in a cache, but they do not get
read from the cache if the device is an InfiniBand device.  The
following change takes advantage of the cached subnet_prefix.
Testing with RDBMS has shown a significant improvement in performance
with this change.
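
The cached read looks roughly like this (locking shown for
illustration; assumes the subnet_prefix field added to the per-port
cache earlier in this series):

    unsigned long flags;

    read_lock_irqsave(&device->cache_lock, flags);
    port_attr->subnet_prefix =
            device->port_data[port_num].cache.subnet_prefix;
    read_unlock_irqrestore(&device->cache_lock, flags);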

Link: https://lore.kernel.org/r/20210712122625.1147-4-anand.a.khoje@oracle.com
Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com>
Signed-off-by: Haakon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoIB/core: Shifting initialization of device->cache_lock
Anand Khoje [Mon, 12 Jul 2021 12:26:24 +0000 (17:56 +0530)]
IB/core: Shifting initialization of device->cache_lock

The lock cache_lock of struct ib_device is initialized in function
ib_cache_setup_one(). This is much later than the device initialization in
_ib_alloc_device().

This change shifts the initialization of cache_lock into
_ib_alloc_device().
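
The change itself is small; assuming cache_lock is the rwlock guarding
the GID cache, it amounts to (sketch):

    /* in _ib_alloc_device(), with the other early initialization */
    rwlock_init(&device->cache_lock);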

Link: https://lore.kernel.org/r/20210712122625.1147-3-anand.a.khoje@oracle.com
Suggested-by: Haakon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoIB/core: Updating cache for subnet_prefix in config_non_roce_gid_cache()
Anand Khoje [Mon, 12 Jul 2021 12:26:23 +0000 (17:56 +0530)]
IB/core: Updating cache for subnet_prefix in config_non_roce_gid_cache()

Currently, the cache for subnet_prefix is updated by reading the port
attributes via ib_query_port().  ib_query_port() calls ops.query_gid()
to get the subnet_prefix and returns it via port_attr.

In ib_cache_update(), config_non_roce_gid_cache() obtains GIDs by
calling ops.query_gid().  We utilize this to store the subnet_prefix
in the cache.
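
A sketch of the idea inside the per-port GID loop (index check and
error handling simplified; the cache field name is assumed):

    /* the first GID on the port carries the subnet prefix */
    if (i == 0)
            device->port_data[port].cache.subnet_prefix =
                    be64_to_cpu(gid.global.subnet_prefix);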

Link: https://lore.kernel.org/r/20210712122625.1147-2-anand.a.khoje@oracle.com
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Suggested-by: Aru Kolappan <aru.kolappan@oracle.com>
Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com>
Signed-off-by: Haakon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/efa: Split hardware stats to device and port stats
Gal Pressman [Mon, 12 Jul 2021 10:59:23 +0000 (13:59 +0300)]
RDMA/efa: Split hardware stats to device and port stats

The hardware stats API distinguishes between device and port
statistics; split the EFA stats accordingly instead of always dumping
everything.
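
Roughly what the split looks like with the per-scope alloc ops (the
stats name arrays are placeholders for the driver's counter lists):

    static struct rdma_hw_stats *
    efa_alloc_hw_device_stats(struct ib_device *ibdev)
    {
            return rdma_alloc_hw_stats_struct(efa_device_stats_names,
                            ARRAY_SIZE(efa_device_stats_names),
                            RDMA_HW_STATS_DEFAULT_LIFESPAN);
    }

    static struct rdma_hw_stats *
    efa_alloc_hw_port_stats(struct ib_device *ibdev, u32 port_num)
    {
            return rdma_alloc_hw_stats_struct(efa_port_stats_names,
                            ARRAY_SIZE(efa_port_stats_names),
                            RDMA_HW_STATS_DEFAULT_LIFESPAN);
    }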

Link: https://lore.kernel.org/r/20210712105923.17389-1-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoMAINTAINERS: Update maintainers of HiSilicon RoCE
Weihang Li [Thu, 8 Jul 2021 10:59:18 +0000 (18:59 +0800)]
MAINTAINERS: Update maintainers of HiSilicon RoCE

Lijun has moved to work in other technical areas, and Wenpeng will
maintain these modules instead.

Link: https://lore.kernel.org/r/1625741958-51363-1-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Remove the repeated 'mr->umem = umem'
Xiao Yang [Fri, 2 Jul 2021 12:30:24 +0000 (20:30 +0800)]
RDMA/rxe: Remove the repeated 'mr->umem = umem'

Drop the duplicated code.

Link: https://lore.kernel.org/r/20210702123024.37025-1-ice_yangxiao@163.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/siw: Convert siw_tx_hdt() to kmap_local_page()
Ira Weiny [Thu, 24 Jun 2021 17:48:14 +0000 (10:48 -0700)]
RDMA/siw: Convert siw_tx_hdt() to kmap_local_page()

kmap() is being deprecated and will break uses of device dax after PKS
protection is introduced.[1]

The use of kmap() in siw_tx_hdt() is all thread local, therefore
kmap_local_page() is a sufficient replacement and will work with pgmap
protected pages when those are implemented.

siw_tx_hdt() tracks pages used in a page_array.  It uses that array to
unmap pages which were mapped on function exit.  Not all entries in the
array are mapped and this is tracked in kmap_mask.

kunmap_local() takes a mapped address rather than a page.  Alter
siw_unmap_pages() to take the iov array to reuse the iov_base address of
each mapping.  Use PAGE_MASK to get the proper address for kunmap_local().

kmap_local_page() mappings are tracked in a stack and must be unmapped in
the opposite order they were mapped in.  Because segments are mapped into
the page array in increasing index order, modify siw_unmap_pages() to
unmap pages in decreasing order.

Use kmap_local_page() instead of kmap() to map pages in the page_array.
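
A sketch of the reverse-order unmap (simplified from the patch):

    static void siw_unmap_pages(struct kvec *iov, unsigned long kmap_mask,
                                int len)
    {
            int i;

            /* kmap_local mappings are a stack: release newest first */
            for (i = len - 1; i >= 0; i--) {
                    if (kmap_mask & BIT(i))
                            kunmap_local((void *)((unsigned long)iov[i].iov_base
                                                  & PAGE_MASK));
            }
    }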

[1] https://lore.kernel.org/lkml/20201009195033.3208459-59-ira.weiny@intel.com/

Link: https://lore.kernel.org/r/20210624174814.2822896-1-ira.weiny@intel.com
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Bernard Metzler <bmt@zurich.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/siw: Remove kmap()
Ira Weiny [Tue, 22 Jun 2021 06:14:21 +0000 (23:14 -0700)]
RDMA/siw: Remove kmap()

kmap() is being deprecated and will break uses of device dax after PKS
protection is introduced.[1]

These uses of kmap() in the SIW driver are thread local.  Therefore
kmap_local_page() is a sufficient replacement and will work with pgmap
protected pages when those are implemented.

There is one more use of kmap() in this driver which is split into its own
patch because kmap_local_page() has strict ordering rules and the use of
the kmap_mask over multiple segments must be handled carefully.
Therefore, that conversion is handled in a stand alone patch.

Use kmap_local_page() instead of kmap() in the 'easy' cases.
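
The conversion pattern for these cases (sketch; p, dst, off and len
stand in for the driver's locals):

    void *buf = kmap_local_page(p);   /* was: buf = kmap(p) */

    memcpy(dst, buf + off, len);      /* short, thread-local access */
    kunmap_local(buf);                /* was: kunmap(p); note it takes
                                       * the address, not the page */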

[1] https://lore.kernel.org/lkml/20201009195033.3208459-59-ira.weiny@intel.com/

Link: https://lore.kernel.org/r/20210622061422.2633501-4-ira.weiny@intel.com
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rtrs: Move sq_wr_avail to rtrs_con
Jack Wang [Mon, 12 Jul 2021 06:07:50 +0000 (08:07 +0200)]
RDMA/rtrs: Move sq_wr_avail to rtrs_con

In order to account the heartbeat (HB) against sq_wr_avail properly,
move sq_wr_avail from rtrs_srv_con to rtrs_con.

Although rtrs-clt does not care about sq_wr_avail, still initialize it
to max_send_wr.

Fixes: b38041d50add ("RDMA/rtrs: Do not signal for heatbeat")
Link: https://lore.kernel.org/r/20210712060750.16494-7-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com>
Reviewed-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rtrs: Remove unused flags parameter
Jack Wang [Mon, 12 Jul 2021 06:07:49 +0000 (08:07 +0200)]
RDMA/rtrs: Remove unused flags parameter

The flags parameter is not used, so remove it from
rtrs_post_rdma_write_imm_empty().

Link: https://lore.kernel.org/r/20210712060750.16494-6-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com>
Reviewed-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rtrs: Make rtrs_post_rdma_write_imm_empty static
Jack Wang [Mon, 12 Jul 2021 06:07:48 +0000 (08:07 +0200)]
RDMA/rtrs: Make rtrs_post_rdma_write_imm_empty static

It's only used in rtrs.c, so no need to export.

Link: https://lore.kernel.org/r/20210712060750.16494-5-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com>
Reviewed-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rtrs: Enable the same selective signal for heartbeat and IO
Jack Wang [Mon, 12 Jul 2021 06:07:47 +0000 (08:07 +0200)]
RDMA/rtrs: Enable the same selective signal for heartbeat and IO

On an idle session, because we do not signal the heartbeat, the send
queue will overflow after some time.

To avoid that, we need to enable signaling for the heartbeat. To do
that, add a new member signal_interval to rtrs_path, set it to the
minimum of queue_depth and SERVICE_CON_QUEUE_DEPTH, and track it for
both heartbeat and IO, so the send queue full accounting is correct.
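
Conceptually (simplified; wr_cnt names the shared per-connection
counter assumed here):

    /* set once at path init ... */
    path->signal_interval = min_t(u16, queue_depth,
                                  SERVICE_CON_QUEUE_DEPTH);

    /* ... then signal every Nth WR, heartbeat and IO alike */
    flags = atomic_inc_return(&con->wr_cnt) % path->signal_interval ?
            0 : IB_SEND_SIGNALED;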

Fixes: b38041d50add ("RDMA/rtrs: Do not signal for heatbeat")
Link: https://lore.kernel.org/r/20210712060750.16494-4-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com>
Reviewed-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>