enetc: Use generic rule to map Tx rings to interrupt vectors
authorClaudiu Manoil <claudiu.manoil@nxp.com>
Fri, 9 Apr 2021 07:16:13 +0000 (10:16 +0300)
committerJakub Kicinski <kuba@kernel.org>
Sat, 10 Apr 2021 01:22:09 +0000 (18:22 -0700)
Even though the current mapping is correct for the 1-CPU and 2-CPU cases
(currently enetc is included only in SoCs with up to 2 CPUs), it is
better to use a generic mapping rule that covers all possible cases.
The number of CPUs is the same as the number of interrupt vectors:

Per device Tx rings -
device_tx_ring[idx], where idx = 0..n_rings_total-1

Per interrupt vector Tx rings -
int_vector[i].ring[j], where i = 0..n_int_vects-1
     j = 0..n_rings_per_v-1

Mapping rule -
n_rings_per_v = n_rings_total / n_int_vects
for i = 0..n_int_vects - 1:
    for j = 0..n_rings_per_v - 1:
        idx = n_int_vects * j + i
        int_vector[i].ring[j] <- device_tx_ring[idx]

Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://lore.kernel.org/r/20210409071613.28912-1-claudiu.manoil@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
drivers/net/ethernet/freescale/enetc/enetc.c

index 182d808..41bfc6e 100644 (file)
@@ -2344,11 +2344,7 @@ int enetc_alloc_msix(struct enetc_ndev_priv *priv)
                        int idx;
 
                        /* default tx ring mapping policy */
-                       if (priv->bdr_int_num == ENETC_MAX_BDR_INT)
-                               idx = 2 * j + i; /* 2 CPUs */
-                       else
-                               idx = j + i * v_tx_rings; /* default */
-
+                       idx = priv->bdr_int_num * j + i;
                        __set_bit(idx, &v->tx_rings_map);
                        bdr = &v->tx_ring[j];
                        bdr->index = idx;