Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
author    Jakub Kicinski <kuba@kernel.org>
Sat, 9 Jul 2022 19:24:15 +0000 (12:24 -0700)
committer Jakub Kicinski <kuba@kernel.org>
Sat, 9 Jul 2022 19:24:16 +0000 (12:24 -0700)
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-07-09

We've added 94 non-merge commits during the last 19 day(s) which contain
a total of 125 files changed, 5141 insertions(+), 6701 deletions(-).

The main changes are:

1) Add a new way of performing BTF type queries to BPF, from Daniel Müller
   (see the code sketches following this list).

2) Add inlining of calls to the bpf_loop() helper when its callback function
   is statically known, from Eduard Zingerman (sketched below).

3) Implement usability improvements for the BPF TCP congestion control (CC)
   framework, from Jörn-Thorben Hinz.

4) Add an LSM flavor for attaching per-cgroup BPF programs to existing LSM
   hooks, from Stanislav Fomichev (sketched below).

5) Remove all deprecated libbpf APIs in preparation for the 1.0 release, from
   Andrii Nakryiko (migration sketched below).

6) Add benchmarks around local_storage to BPF selftests, from Dave Marchevsky.

7) Remove the AF_XDP samples (given the move to libxdp) and improve the
   AF_XDP selftests in various ways, from Magnus Karlsson & Maciej Fijalkowski.

8) Add bpftool improvements for memcg probing and bash completion, from Quentin Monnet.

9) Add arm64 JIT support for BPF-to-BPF calls combined with tail calls, from
   Jakub Sitnicki.

10) Sockmap optimizations that improve the throughput of UDP transmissions
    by 61%, from Cong Wang.

11) Rework perf's BPF prologue code to remove deprecated functions, from Jiri Olsa.

12) Fix the sockmap teardown path to avoid the sleepable sk_psock_stop(), from
    John Fastabend.

13) Fix libbpf's cleanup of legacy kprobe/uprobe events on the error path, from
    Chuang Wang.

14) Fix libbpf's bpf_helpers.h to work with GCC by adjusting its SEC() pragma
    macro, from James Hilliard (sketched below).

15) Fix libbpf's pt_regs macros for riscv to use a0 as the RC register, from
    Yixun Lan.

16) Fix bpftool to show the name of type BPF_OBJ_LINK, from Yafang Shao.

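For item 1, the query is exposed through libbpf's bpf_core_type_matches()
macro (per the type-match selftests in the shortlog below). A minimal
sketch, using an illustrative local type flavor:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  /* Illustrative local "flavor": every member listed here must exist in
   * the kernel's task_struct with a compatible type for the match to
   * succeed, which is stricter than bpf_core_type_exists().
   */
  struct task_struct___local {
          int pid;
  } __attribute__((preserve_access_index));

  SEC("raw_tp/sys_enter")
  int probe_types(void *ctx)
  {
          if (bpf_core_type_matches(struct task_struct___local))
                  bpf_printk("kernel task_struct matches local flavor");
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";
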
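For item 2, the inlining applies to the existing bpf_loop() helper when the
callback is a static function known at verification time. A sketch (names
are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct loop_ctx {
          __u64 sum;
  };

  /* The callback must be a static function so the verifier knows the
   * target; returning 0 continues the loop, 1 breaks out of it.
   */
  static long add_index(__u64 index, void *data)
  {
          struct loop_ctx *lc = data;

          lc->sum += index;
          return 0;
  }

  SEC("tp/syscalls/sys_enter_getpid")
  int sum_indices(void *ctx)
  {
          struct loop_ctx lc = { .sum = 0 };

          /* args: nr_loops, callback_fn, callback_ctx, flags (must be 0) */
          bpf_loop(128, add_index, &lc, 0);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";
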
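For item 4, programs use the new lsm_cgroup section prefix; the hook then
fires only for tasks in the cgroup the program is attached to. A sketch
against the socket_bind hook (return-value convention assumed to follow
regular BPF LSM programs):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("lsm_cgroup/socket_bind")
  int BPF_PROG(cgroup_bind_hook, struct socket *sock,
               struct sockaddr *address, int addrlen)
  {
          /* 0 allows the bind; a negative errno such as -EPERM rejects
           * it, but only for the attached cgroup (assumed semantics).
           */
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";
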
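For item 5, user-space code still on removed loaders such as
bpf_prog_load_xattr() moves to the bpf_object API. A hedged migration
sketch:

  #include <errno.h>
  #include <bpf/libbpf.h>

  int load_object(const char *path)
  {
          struct bpf_object *obj;
          int err;

          /* 1.0-style open + load: on failure, open returns NULL and
           * sets errno instead of encoding the error in the pointer.
           */
          obj = bpf_object__open_file(path, NULL);
          if (!obj)
                  return -errno;

          err = bpf_object__load(obj);
          if (err) {
                  bpf_object__close(obj);
                  return err;
          }

          /* A real caller would attach programs before closing. */
          bpf_object__close(obj);
          return 0;
  }
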
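For item 14, the fix (per the "libbpf: Disable SEC pragma macro on GCC"
commit in the shortlog) roughly selects a pragma-free SEC() definition when
building with GCC, which mishandles _Pragma in macro expansion:

  #if __GNUC__ && !__clang__
  /* GCC is broken here, see gcc PR55578 / PR90400. */
  #define SEC(name) __attribute__((section(name), used))
  #else
  #define SEC(name)                                                  \
          _Pragma("GCC diagnostic push")                             \
          _Pragma("GCC diagnostic ignored \"-Wignored-attributes\"") \
          __attribute__((section(name), used))                       \
          _Pragma("GCC diagnostic pop")
  #endif
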
* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (94 commits)
  selftests/bpf: Fix xdp_synproxy build failure if CONFIG_NF_CONNTRACK=m/n
  bpf: Correctly propagate errors up from bpf_core_composites_match
  libbpf: Disable SEC pragma macro on GCC
  bpf: Check attach_func_proto more carefully in check_return_code
  selftests/bpf: Add test involving restrict type qualifier
  bpftool: Add support for KIND_RESTRICT to gen min_core_btf command
  MAINTAINERS: Add entry for AF_XDP selftests files
  selftests, xsk: Rename AF_XDP testing app
  bpf, docs: Remove deprecated xsk libbpf APIs description
  selftests/bpf: Add benchmark for local_storage RCU Tasks Trace usage
  libbpf, riscv: Use a0 for RC register
  libbpf: Remove unnecessary usdt_rel_ip assignments
  selftests/bpf: Fix few more compiler warnings
  selftests/bpf: Fix bogus uninitialized variable warning
  bpftool: Remove zlib feature test from Makefile
  libbpf: Cleanup the legacy uprobe_event on failed add/attach_event()
  libbpf: Fix wrong variable used in perf_event_uprobe_open_legacy()
  libbpf: Cleanup the legacy kprobe_event on failed add/attach_event()
  selftests/bpf: Add type match test against kernel's task_struct
  selftests/bpf: Add nested type to type based tests
  ...
====================

Link: https://lore.kernel.org/r/20220708233145.32365-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
1301 files changed:
.mailmap
Documentation/ABI/testing/sysfs-bus-iio-vf610
Documentation/admin-guide/sysctl/net.rst
Documentation/devicetree/bindings/hwmon/ti,tmp401.yaml
Documentation/devicetree/bindings/interrupt-controller/socionext,uniphier-aidet.yaml
Documentation/devicetree/bindings/net/can/microchip,mpfs-can.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/net/cdns,macb.yaml
Documentation/devicetree/bindings/net/dsa/mediatek,mt7530.yaml
Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/net/ethernet-controller.yaml
Documentation/devicetree/bindings/net/mediatek,star-emac.yaml
Documentation/devicetree/bindings/net/micrel.txt
Documentation/devicetree/bindings/net/pcs/renesas,rzn1-miic.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/net/snps,dwmac.yaml
Documentation/devicetree/bindings/spi/microchip,mpfs-spi.yaml
Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
Documentation/devicetree/bindings/usb/generic-ehci.yaml
Documentation/devicetree/bindings/usb/generic-ohci.yaml
Documentation/driver-api/firmware/other_interfaces.rst
Documentation/driver-api/gpio/board.rst
Documentation/driver-api/gpio/consumer.rst
Documentation/driver-api/gpio/intro.rst
Documentation/filesystems/btrfs.rst
Documentation/filesystems/ext4/attributes.rst
Documentation/filesystems/ext4/bigalloc.rst
Documentation/filesystems/ext4/bitmaps.rst
Documentation/filesystems/ext4/blockgroup.rst
Documentation/filesystems/ext4/blockmap.rst
Documentation/filesystems/ext4/checksums.rst
Documentation/filesystems/ext4/directory.rst
Documentation/filesystems/ext4/eainode.rst
Documentation/filesystems/ext4/group_descr.rst
Documentation/filesystems/ext4/ifork.rst
Documentation/filesystems/ext4/inlinedata.rst
Documentation/filesystems/ext4/inodes.rst
Documentation/filesystems/ext4/journal.rst
Documentation/filesystems/ext4/mmp.rst
Documentation/filesystems/ext4/overview.rst
Documentation/filesystems/ext4/special_inodes.rst
Documentation/filesystems/ext4/super.rst
Documentation/kbuild/llvm.rst
Documentation/loongarch/introduction.rst
Documentation/loongarch/irq-chip-model.rst
Documentation/networking/bonding.rst
Documentation/networking/can.rst
Documentation/networking/device_drivers/can/can327.rst [new file with mode: 0644]
Documentation/networking/device_drivers/can/index.rst
Documentation/networking/device_drivers/ethernet/index.rst
Documentation/networking/device_drivers/ethernet/neterion/vxge.rst [deleted file]
Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst [new file with mode: 0644]
Documentation/networking/ip-sysctl.rst
Documentation/networking/tls.rst
Documentation/process/maintainer-netdev.rst
Documentation/translations/zh_CN/loongarch/introduction.rst
Documentation/translations/zh_CN/loongarch/irq-chip-model.rst
Documentation/vm/hwpoison.rst
MAINTAINERS
Makefile
arch/arm/boot/dts/Makefile
arch/arm/boot/dts/aspeed-bmc-nuvia-dc-scm.dts [deleted file]
arch/arm/boot/dts/aspeed-bmc-qcom-dc-scm-v1.dts [new file with mode: 0644]
arch/arm/boot/dts/at91-sam9x60ek.dts
arch/arm/boot/dts/at91-sama5d2_icp.dts
arch/arm/boot/dts/bcm2711-rpi-400.dts
arch/arm/boot/dts/imx6qdl-colibri.dtsi
arch/arm/boot/dts/imx6qdl.dtsi
arch/arm/boot/dts/imx7d-smegw01.dts
arch/arm/boot/dts/imx7s.dtsi
arch/arm/boot/dts/stm32mp15-scmi.dtsi [new file with mode: 0644]
arch/arm/boot/dts/stm32mp151.dtsi
arch/arm/boot/dts/stm32mp157a-dk1-scmi.dts
arch/arm/boot/dts/stm32mp157c-dk2-scmi.dts
arch/arm/boot/dts/stm32mp157c-ed1-scmi.dts
arch/arm/boot/dts/stm32mp157c-ev1-scmi.dts
arch/arm/configs/mxs_defconfig
arch/arm/mach-at91/pm.c
arch/arm/mach-axxia/platsmp.c
arch/arm/mach-cns3xxx/core.c
arch/arm/mach-exynos/exynos.c
arch/arm/mach-meson/platsmp.c
arch/arm/mach-spear/time.c
arch/arm/xen/p2m.c
arch/arm64/boot/dts/exynos/exynos7885.dtsi
arch/arm64/boot/dts/freescale/imx8mp-evk.dts
arch/arm64/boot/dts/freescale/imx8mp-icore-mx8mp-edimm2.2.dts
arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
arch/arm64/boot/dts/freescale/imx8mp.dtsi
arch/arm64/boot/dts/freescale/s32g2.dtsi
arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
arch/arm64/boot/dts/qcom/msm8994.dtsi
arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor.dtsi
arch/arm64/boot/dts/qcom/sdm845.dtsi
arch/arm64/boot/dts/qcom/sm8450.dtsi
arch/arm64/boot/dts/ti/k3-am64-main.dtsi
arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
arch/arm64/kernel/cpufeature.c
arch/arm64/kernel/entry-ftrace.S
arch/arm64/kernel/ftrace.c
arch/arm64/kernel/setup.c
arch/arm64/kvm/arm.c
arch/arm64/mm/cache.S
arch/arm64/mm/hugetlbpage.c
arch/loongarch/include/asm/branch.h
arch/loongarch/include/asm/pgtable.h
arch/loongarch/kernel/cpu-probe.c
arch/loongarch/kernel/head.S
arch/loongarch/kernel/traps.c
arch/loongarch/kernel/vmlinux.lds.S
arch/loongarch/mm/tlb.c
arch/mips/boot/dts/ingenic/x1000.dtsi
arch/mips/boot/dts/ingenic/x1830.dtsi
arch/mips/generic/board-ranchu.c
arch/mips/lantiq/falcon/sysctrl.c
arch/mips/lantiq/irq.c
arch/mips/lantiq/xway/sysctrl.c
arch/mips/mti-malta/malta-time.c
arch/mips/pic32/pic32mzda/init.c
arch/mips/pic32/pic32mzda/time.c
arch/mips/ralink/of.c
arch/mips/vr41xx/common/icu.c
arch/openrisc/kernel/unwinder.c
arch/parisc/Kconfig
arch/parisc/include/asm/fb.h
arch/parisc/kernel/asm-offsets.c
arch/parisc/kernel/cache.c
arch/parisc/kernel/unaligned.c
arch/parisc/math-emu/decode_exc.c
arch/powerpc/Kconfig
arch/powerpc/include/asm/bpf_perf_event.h [new file with mode: 0644]
arch/powerpc/include/uapi/asm/bpf_perf_event.h [deleted file]
arch/powerpc/kernel/process.c
arch/powerpc/kernel/prom_init.c
arch/powerpc/kernel/prom_init_check.sh
arch/powerpc/kernel/rtas.c
arch/powerpc/kernel/setup-common.c
arch/powerpc/mm/mem.c
arch/powerpc/mm/nohash/book3e_pgtable.c
arch/powerpc/platforms/microwatt/microwatt.h [new file with mode: 0644]
arch/powerpc/platforms/microwatt/rng.c
arch/powerpc/platforms/microwatt/setup.c
arch/powerpc/platforms/powernv/powernv.h
arch/powerpc/platforms/powernv/rng.c
arch/powerpc/platforms/powernv/setup.c
arch/powerpc/platforms/pseries/pseries.h
arch/powerpc/platforms/pseries/rng.c
arch/powerpc/platforms/pseries/setup.c
arch/powerpc/sysdev/xive/spapr.c
arch/riscv/Kconfig
arch/riscv/Kconfig.erratas
arch/riscv/boot/dts/microchip/mpfs.dtsi
arch/riscv/include/asm/errata_list.h
arch/riscv/kernel/cpufeature.c
arch/s390/Kconfig
arch/s390/crypto/arch_random.c
arch/s390/include/asm/archrandom.h
arch/s390/include/asm/qdio.h
arch/s390/kernel/crash_dump.c
arch/s390/kernel/perf_cpum_cf.c
arch/s390/kernel/perf_pai_crypto.c
arch/s390/kernel/setup.c
arch/s390/purgatory/Makefile
arch/x86/coco/tdx/tdx.c
arch/x86/hyperv/hv_init.c
arch/x86/hyperv/ivm.c
arch/x86/include/asm/e820/api.h
arch/x86/include/asm/efi.h
arch/x86/include/asm/mshyperv.h
arch/x86/include/asm/pci_x86.h
arch/x86/include/asm/setup.h
arch/x86/kernel/Makefile
arch/x86/kernel/ftrace_64.S
arch/x86/kernel/resource.c
arch/x86/kernel/setup.c
arch/x86/kernel/vmlinux.lds.S
arch/x86/kvm/svm/sev.c
arch/x86/kvm/svm/svm.c
arch/x86/kvm/svm/svm.h
arch/x86/net/bpf_jit_comp.c
arch/x86/pci/acpi.c
arch/xtensa/kernel/entry.S
arch/xtensa/kernel/time.c
arch/xtensa/platforms/xtfpga/setup.c
block/bfq-iosched.c
block/blk-core.c
block/blk-ia-ranges.c
block/blk-mq-debugfs.c
block/blk-mq-debugfs.h
block/blk-mq-sched.c
block/blk-mq.c
block/blk-rq-qos.c
block/blk-rq-qos.h
block/blk-sysfs.c
block/genhd.c
block/holder.c
block/kyber-iosched.c
block/mq-deadline.c
certs/Makefile
certs/blacklist.c
certs/common.c [deleted file]
certs/common.h [deleted file]
certs/system_keyring.c
crypto/Kconfig
crypto/Makefile
crypto/asymmetric_keys/Kconfig
crypto/asymmetric_keys/Makefile
crypto/asymmetric_keys/selftest.c [new file with mode: 0644]
crypto/asymmetric_keys/x509_loader.c [new file with mode: 0644]
crypto/asymmetric_keys/x509_parser.h
crypto/asymmetric_keys/x509_public_key.c
crypto/memneq.c [deleted file]
drivers/acpi/acpi_video.c
drivers/ata/pata_cs5535.c
drivers/base/init.c
drivers/base/memory.c
drivers/base/regmap/regmap-irq.c
drivers/base/regmap/regmap.c
drivers/block/xen-blkfront.c
drivers/bus/bt1-apb.c
drivers/bus/bt1-axi.c
drivers/bus/fsl-mc/fsl-mc-bus.c
drivers/char/lp.c
drivers/char/random.c
drivers/clk/stm32/reset-stm32.c
drivers/clocksource/hyperv_timer.c
drivers/comedi/drivers/vmk80xx.c
drivers/cpufreq/amd-pstate.c
drivers/cpufreq/cpufreq-dt-platdev.c
drivers/cpufreq/pmac32-cpufreq.c
drivers/cpufreq/qcom-cpufreq-hw.c
drivers/cpufreq/qoriq-cpufreq.c
drivers/crypto/Kconfig
drivers/crypto/ccp/sp-platform.c
drivers/devfreq/devfreq.c
drivers/devfreq/event/exynos-ppmu.c
drivers/devfreq/governor_passive.c
drivers/dma-buf/udmabuf.c
drivers/firewire/core-cdev.c
drivers/firewire/core-device.c
drivers/firmware/arm_scmi/base.c
drivers/firmware/arm_scmi/bus.c
drivers/firmware/arm_scmi/clock.c
drivers/firmware/arm_scmi/driver.c
drivers/firmware/arm_scmi/optee.c
drivers/firmware/arm_scmi/perf.c
drivers/firmware/arm_scmi/power.c
drivers/firmware/arm_scmi/protocols.h
drivers/firmware/arm_scmi/reset.c
drivers/firmware/arm_scmi/sensors.c
drivers/firmware/arm_scmi/voltage.c
drivers/firmware/efi/sysfb_efi.c
drivers/firmware/sysfb.c
drivers/firmware/sysfb_simplefb.c
drivers/gpio/gpio-grgpio.c
drivers/gpio/gpio-mxs.c
drivers/gpio/gpio-realtek-otto.c
drivers/gpio/gpio-vr41xx.c
drivers/gpio/gpio-winbond.c
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.c
drivers/gpu/drm/amd/display/dc/dcn201/dcn201_dpp.c
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
drivers/gpu/drm/drm_panel_orientation_quirks.c
drivers/gpu/drm/exynos/exynos_drm_drv.c
drivers/gpu/drm/exynos/exynos_drm_mic.c
drivers/gpu/drm/i915/display/intel_dp.c
drivers/gpu/drm/i915/display/intel_dpll_mgr.c
drivers/gpu/drm/i915/gem/i915_gem_context.c
drivers/gpu/drm/i915/gem/i915_gem_domain.c
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
drivers/gpu/drm/i915/gt/intel_gt.c
drivers/gpu/drm/i915/gt/intel_gt_sysfs.c
drivers/gpu/drm/i915/gt/intel_gt_sysfs.h
drivers/gpu/drm/i915/gt/intel_gt_types.h
drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
drivers/gpu/drm/i915/i915_driver.c
drivers/gpu/drm/i915/i915_drm_client.c
drivers/gpu/drm/i915/i915_sysfs.c
drivers/gpu/drm/i915/i915_vma.c
drivers/gpu/drm/msm/adreno/adreno_gpu.c
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
drivers/gpu/drm/msm/dp/dp_ctrl.c
drivers/gpu/drm/msm/dp/dp_ctrl.h
drivers/gpu/drm/msm/dp/dp_display.c
drivers/gpu/drm/msm/msm_drv.c
drivers/gpu/drm/msm/msm_drv.h
drivers/gpu/drm/msm/msm_fence.c
drivers/gpu/drm/msm/msm_gem.c
drivers/gpu/drm/msm/msm_gem.h
drivers/gpu/drm/msm/msm_gem_prime.c
drivers/gpu/drm/msm/msm_gem_submit.c
drivers/gpu/drm/msm/msm_gem_vma.c
drivers/gpu/drm/msm/msm_gpu.c
drivers/gpu/drm/msm/msm_iommu.c
drivers/gpu/drm/msm/msm_ringbuffer.c
drivers/gpu/drm/sun4i/sun4i_drv.c
drivers/gpu/drm/sun4i/sun4i_layer.c
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
drivers/gpu/drm/ttm/ttm_bo.c
drivers/gpu/drm/ttm/ttm_device.c
drivers/gpu/drm/ttm/ttm_resource.c
drivers/gpu/drm/vc4/vc4_bo.c
drivers/gpu/drm/vc4/vc4_crtc.c
drivers/gpu/drm/vc4/vc4_drv.c
drivers/gpu/drm/vc4/vc4_drv.h
drivers/gpu/drm/vc4/vc4_gem.c
drivers/gpu/drm/vc4/vc4_hdmi.c
drivers/gpu/drm/vc4/vc4_hvs.c
drivers/gpu/drm/vc4/vc4_irq.c
drivers/gpu/drm/vc4/vc4_kms.c
drivers/gpu/drm/vc4/vc4_perfmon.c
drivers/gpu/drm/vc4/vc4_plane.c
drivers/gpu/drm/vc4/vc4_render_cl.c
drivers/gpu/drm/vc4/vc4_v3d.c
drivers/gpu/drm/vc4/vc4_validate.c
drivers/gpu/drm/vc4/vc4_validate_shaders.c
drivers/gpu/drm/xen/xen_drm_front_gem.c
drivers/hid/hid-hyperv.c
drivers/hv/channel_mgmt.c
drivers/hv/hv_kvp.c
drivers/hv/vmbus_drv.c
drivers/hwmon/asus-ec-sensors.c
drivers/hwmon/ibmaem.c
drivers/hwmon/occ/common.c
drivers/hwmon/occ/common.h
drivers/hwmon/occ/p8_i2c.c
drivers/hwmon/occ/p9_sbe.c
drivers/hwmon/pmbus/ucd9200.c
drivers/i2c/busses/i2c-designware-common.c
drivers/i2c/busses/i2c-designware-platdrv.c
drivers/i2c/busses/i2c-mt65xx.c
drivers/i2c/busses/i2c-npcm7xx.c
drivers/iio/accel/bma180.c
drivers/iio/accel/kxcjk-1013.c
drivers/iio/accel/mma8452.c
drivers/iio/accel/mxc4005.c
drivers/iio/adc/adi-axi-adc.c
drivers/iio/adc/aspeed_adc.c
drivers/iio/adc/axp288_adc.c
drivers/iio/adc/rzg2l_adc.c
drivers/iio/adc/stm32-adc-core.c
drivers/iio/adc/stm32-adc.c
drivers/iio/adc/ti-ads131e08.c
drivers/iio/adc/xilinx-ams.c
drivers/iio/afe/iio-rescale.c
drivers/iio/chemical/ccs811.c
drivers/iio/frequency/admv1014.c
drivers/iio/gyro/mpu3050-core.c
drivers/iio/humidity/hts221_buffer.c
drivers/iio/imu/inv_icm42600/inv_icm42600.h
drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
drivers/iio/magnetometer/yamaha-yas530.c
drivers/iio/proximity/sx9324.c
drivers/iio/test/Kconfig
drivers/iio/test/Makefile
drivers/iio/trigger/iio-trig-sysfs.c
drivers/infiniband/core/cm.c
drivers/infiniband/hw/qedr/qedr.h
drivers/infiniband/hw/qedr/verbs.c
drivers/infiniband/ulp/ipoib/ipoib_ib.c
drivers/iommu/ipmmu-vmsa.c
drivers/irqchip/Kconfig
drivers/irqchip/irq-apple-aic.c
drivers/irqchip/irq-gic-realview.c
drivers/irqchip/irq-gic-v3.c
drivers/irqchip/irq-loongson-liointc.c
drivers/irqchip/irq-or1k-pic.c
drivers/irqchip/irq-realtek-rtl.c
drivers/irqchip/irq-uniphier-aidet.c
drivers/isdn/hardware/mISDN/hfcsusb.c
drivers/md/dm-core.h
drivers/md/dm-era-target.c
drivers/md/dm-log.c
drivers/md/dm-raid.c
drivers/md/dm.c
drivers/md/md.c
drivers/md/md.h
drivers/md/raid5-ppl.c
drivers/md/raid5.c
drivers/memory/Kconfig
drivers/memory/mtk-smi.c
drivers/memory/samsung/exynos5422-dmc.c
drivers/misc/atmel-ssc.c
drivers/misc/cardreader/rts5261.c
drivers/misc/eeprom/at25.c
drivers/misc/mei/hbm.c
drivers/misc/mei/hw-me-regs.h
drivers/misc/mei/hw-me.c
drivers/misc/mei/pci-me.c
drivers/mmc/host/mtk-sd.c
drivers/mmc/host/sdhci-pci-o2micro.c
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
drivers/mtd/nand/raw/nand_ids.c
drivers/net/Kconfig
drivers/net/amt.c
drivers/net/bonding/bond_3ad.c
drivers/net/bonding/bond_alb.c
drivers/net/bonding/bond_main.c
drivers/net/bonding/bond_netlink.c
drivers/net/bonding/bond_options.c
drivers/net/caif/caif_virtio.c
drivers/net/can/Kconfig
drivers/net/can/Makefile
drivers/net/can/can327.c [new file with mode: 0644]
drivers/net/can/ctucanfd/ctucanfd_base.c
drivers/net/can/dev/Makefile
drivers/net/can/dev/bittiming.c
drivers/net/can/dev/calc_bittiming.c [new file with mode: 0644]
drivers/net/can/dev/dev.c
drivers/net/can/dev/netlink.c
drivers/net/can/dev/skb.c
drivers/net/can/grcan.c
drivers/net/can/m_can/Kconfig
drivers/net/can/m_can/m_can.c
drivers/net/can/rcar/rcar_canfd.c
drivers/net/can/slcan.c [deleted file]
drivers/net/can/slcan/Makefile [new file with mode: 0644]
drivers/net/can/slcan/slcan-core.c [new file with mode: 0644]
drivers/net/can/slcan/slcan-ethtool.c [new file with mode: 0644]
drivers/net/can/slcan/slcan.h [new file with mode: 0644]
drivers/net/can/spi/mcp251xfd/Kconfig
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c
drivers/net/can/usb/Kconfig
drivers/net/can/usb/Makefile
drivers/net/can/usb/esd_usb.c [new file with mode: 0644]
drivers/net/can/usb/esd_usb2.c [deleted file]
drivers/net/can/usb/etas_es58x/es58x_core.c
drivers/net/can/usb/etas_es58x/es58x_core.h
drivers/net/can/usb/gs_usb.c
drivers/net/can/usb/kvaser_usb/kvaser_usb.h
drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
drivers/net/can/xilinx_can.c
drivers/net/dsa/Kconfig
drivers/net/dsa/Makefile
drivers/net/dsa/b53/b53_spi.c
drivers/net/dsa/bcm_sf2.c
drivers/net/dsa/hirschmann/hellcreek_ptp.c
drivers/net/dsa/microchip/Kconfig
drivers/net/dsa/microchip/Makefile
drivers/net/dsa/microchip/ksz8.h
drivers/net/dsa/microchip/ksz8795.c
drivers/net/dsa/microchip/ksz8795_reg.h
drivers/net/dsa/microchip/ksz8795_spi.c [deleted file]
drivers/net/dsa/microchip/ksz8863_smi.c
drivers/net/dsa/microchip/ksz9477.c
drivers/net/dsa/microchip/ksz9477.h [new file with mode: 0644]
drivers/net/dsa/microchip/ksz9477_i2c.c
drivers/net/dsa/microchip/ksz9477_reg.h
drivers/net/dsa/microchip/ksz9477_spi.c [deleted file]
drivers/net/dsa/microchip/ksz_common.c
drivers/net/dsa/microchip/ksz_common.h
drivers/net/dsa/microchip/ksz_spi.c [new file with mode: 0644]
drivers/net/dsa/microchip/lan937x.h [new file with mode: 0644]
drivers/net/dsa/microchip/lan937x_main.c [new file with mode: 0644]
drivers/net/dsa/microchip/lan937x_reg.h [new file with mode: 0644]
drivers/net/dsa/mv88e6xxx/chip.c
drivers/net/dsa/mv88e6xxx/chip.h
drivers/net/dsa/mv88e6xxx/port.c
drivers/net/dsa/mv88e6xxx/port.h
drivers/net/dsa/ocelot/Kconfig
drivers/net/dsa/ocelot/felix.c
drivers/net/dsa/ocelot/felix.h
drivers/net/dsa/ocelot/felix_vsc9959.c
drivers/net/dsa/qca/ar9331.c
drivers/net/dsa/qca8k.c
drivers/net/dsa/qca8k.h
drivers/net/dsa/rzn1_a5psw.c [new file with mode: 0644]
drivers/net/dsa/rzn1_a5psw.h [new file with mode: 0644]
drivers/net/ethernet/Kconfig
drivers/net/ethernet/Makefile
drivers/net/ethernet/agere/et131x.c
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
drivers/net/ethernet/amd/xgbe/xgbe.h
drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h
drivers/net/ethernet/atheros/ag71xx.c
drivers/net/ethernet/atheros/atl1c/atl1c_main.c
drivers/net/ethernet/atheros/atl1e/atl1e_main.c
drivers/net/ethernet/atheros/atlx/atl1.c
drivers/net/ethernet/broadcom/bcm63xx_enet.c
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
drivers/net/ethernet/broadcom/bnxt/bnxt.c
drivers/net/ethernet/broadcom/cnic.c
drivers/net/ethernet/broadcom/tg3.c
drivers/net/ethernet/brocade/bna/bnad.c
drivers/net/ethernet/cadence/macb_main.c
drivers/net/ethernet/cavium/thunder/nicvf_queues.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
drivers/net/ethernet/chelsio/cxgb4/sge.c
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
drivers/net/ethernet/cisco/enic/enic_main.c
drivers/net/ethernet/emulex/benet/be_main.c
drivers/net/ethernet/freescale/fec_main.c
drivers/net/ethernet/freescale/fs_enet/fs_enet.h
drivers/net/ethernet/freescale/gianfar.c
drivers/net/ethernet/freescale/gianfar_ethtool.c
drivers/net/ethernet/fungible/funcore/fun_hci.h
drivers/net/ethernet/fungible/funeth/funeth_ethtool.c
drivers/net/ethernet/fungible/funeth/funeth_main.c
drivers/net/ethernet/fungible/funeth/funeth_tx.c
drivers/net/ethernet/fungible/funeth/funeth_txrx.h
drivers/net/ethernet/google/gve/gve_tx_dqo.c
drivers/net/ethernet/hisilicon/hns/hns_enet.c
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
drivers/net/ethernet/hisilicon/hns3/hns3_trace.h
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
drivers/net/ethernet/hisilicon/hns_mdio.c
drivers/net/ethernet/huawei/hinic/hinic_dev.h
drivers/net/ethernet/huawei/hinic/hinic_main.c
drivers/net/ethernet/huawei/hinic/hinic_rx.c
drivers/net/ethernet/huawei/hinic/hinic_tx.c
drivers/net/ethernet/ibm/ehea/ehea_main.c
drivers/net/ethernet/ibm/ibmvnic.c
drivers/net/ethernet/intel/e100.c
drivers/net/ethernet/intel/e1000/e1000_hw.c
drivers/net/ethernet/intel/e1000/e1000_main.c
drivers/net/ethernet/intel/e1000/e1000_param.c
drivers/net/ethernet/intel/e1000e/mac.c
drivers/net/ethernet/intel/e1000e/netdev.c
drivers/net/ethernet/intel/e1000e/param.c
drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
drivers/net/ethernet/intel/fm10k/fm10k_tlv.c
drivers/net/ethernet/intel/i40e/i40e.h
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
drivers/net/ethernet/intel/i40e/i40e_main.c
drivers/net/ethernet/intel/i40e/i40e_ptp.c
drivers/net/ethernet/intel/i40e/i40e_register.h
drivers/net/ethernet/intel/i40e/i40e_txrx.c
drivers/net/ethernet/intel/i40e/i40e_type.h
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
drivers/net/ethernet/intel/i40e/i40e_xsk.c
drivers/net/ethernet/intel/iavf/iavf_main.c
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
drivers/net/ethernet/intel/ice/ice_ethtool.c
drivers/net/ethernet/intel/ice/ice_flex_pipe.c
drivers/net/ethernet/intel/ice/ice_lag.c
drivers/net/ethernet/intel/ice/ice_lib.c
drivers/net/ethernet/intel/ice/ice_protocol_type.h
drivers/net/ethernet/intel/ice/ice_switch.c
drivers/net/ethernet/intel/ice/ice_switch.h
drivers/net/ethernet/intel/ice/ice_tc_lib.c
drivers/net/ethernet/intel/ice/ice_tc_lib.h
drivers/net/ethernet/intel/ice/ice_vlan_mode.c
drivers/net/ethernet/intel/igb/e1000_82575.c
drivers/net/ethernet/intel/igb/e1000_mac.c
drivers/net/ethernet/intel/igb/igb_main.c
drivers/net/ethernet/intel/igbvf/igbvf.h
drivers/net/ethernet/intel/igbvf/netdev.c
drivers/net/ethernet/intel/igc/igc_mac.c
drivers/net/ethernet/intel/igc/igc_ptp.c
drivers/net/ethernet/intel/ixgb/ixgb_main.c
drivers/net/ethernet/intel/ixgb/ixgb_param.c
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
drivers/net/ethernet/intel/ixgbevf/ethtool.c
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
drivers/net/ethernet/intel/ixgbevf/vf.c
drivers/net/ethernet/marvell/mv643xx_eth.c
drivers/net/ethernet/marvell/mvneta.c
drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
drivers/net/ethernet/marvell/octeontx2/af/cgx.c
drivers/net/ethernet/marvell/octeontx2/af/rpm.c
drivers/net/ethernet/marvell/octeontx2/af/rpm.h
drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
drivers/net/ethernet/marvell/prestera/prestera.h
drivers/net/ethernet/marvell/prestera/prestera_acl.c
drivers/net/ethernet/marvell/prestera/prestera_acl.h
drivers/net/ethernet/marvell/prestera/prestera_flow.c
drivers/net/ethernet/marvell/prestera/prestera_flow.h
drivers/net/ethernet/marvell/prestera/prestera_flower.c
drivers/net/ethernet/marvell/prestera/prestera_hw.h
drivers/net/ethernet/marvell/sky2.c
drivers/net/ethernet/mediatek/mtk_star_emac.c
drivers/net/ethernet/mellanox/mlx4/en_tx.c
drivers/net/ethernet/mellanox/mlx5/core/Makefile
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/trap.c
drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.h [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.h [deleted file]
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/main.c
drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
drivers/net/ethernet/mellanox/mlx5/core/sriov.c
drivers/net/ethernet/mellanox/mlxsw/Makefile
drivers/net/ethernet/mellanox/mlxsw/cmd.h
drivers/net/ethernet/mellanox/mlxsw/core.h
drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
drivers/net/ethernet/mellanox/mlxsw/core_env.c
drivers/net/ethernet/mellanox/mlxsw/pci.c
drivers/net/ethernet/mellanox/mlxsw/reg.h
drivers/net/ethernet/mellanox/mlxsw/resources.h
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
drivers/net/ethernet/mellanox/mlxsw/spectrum.h
drivers/net/ethernet/mellanox/mlxsw/spectrum2_kvdl.c
drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c
drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
drivers/net/ethernet/microchip/lan743x_main.c
drivers/net/ethernet/microchip/lan966x/lan966x_main.c
drivers/net/ethernet/microchip/lan966x/lan966x_main.h
drivers/net/ethernet/microchip/sparx5/sparx5_switchdev.c
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
drivers/net/ethernet/natsemi/natsemi.c
drivers/net/ethernet/neterion/Kconfig
drivers/net/ethernet/neterion/Makefile
drivers/net/ethernet/neterion/s2io.c
drivers/net/ethernet/neterion/vxge/Makefile [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-config.c [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-config.h [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-ethtool.c [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-ethtool.h [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-main.c [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-main.h [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-reg.h [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-traffic.c [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-traffic.h [deleted file]
drivers/net/ethernet/neterion/vxge/vxge-version.h [deleted file]
drivers/net/ethernet/netronome/nfp/flower/action.c
drivers/net/ethernet/netronome/nfp/flower/conntrack.c
drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
drivers/net/ethernet/netronome/nfp/flower/metadata.c
drivers/net/ethernet/netronome/nfp/flower/offload.c
drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
drivers/net/ethernet/netronome/nfp/nfd3/dp.c
drivers/net/ethernet/netronome/nfp/nfd3/rings.c
drivers/net/ethernet/netronome/nfp/nfd3/xsk.c
drivers/net/ethernet/netronome/nfp/nfdk/dp.c
drivers/net/ethernet/netronome/nfp/nfdk/rings.c
drivers/net/ethernet/netronome/nfp/nfp_main.c
drivers/net/ethernet/netronome/nfp/nfp_net.h
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
drivers/net/ethernet/netronome/nfp/nfp_net_dp.c
drivers/net/ethernet/netronome/nfp/nfp_net_dp.h
drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
drivers/net/ethernet/qlogic/qed/qed_int.c
drivers/net/ethernet/qlogic/qed/qed_rdma.c
drivers/net/ethernet/qlogic/qede/qede_fp.c
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
drivers/net/ethernet/qualcomm/emac/emac-mac.c
drivers/net/ethernet/realtek/r8169_main.c
drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
drivers/net/ethernet/sfc/ef10.c
drivers/net/ethernet/sfc/ef100.c
drivers/net/ethernet/sfc/ef100_ethtool.c
drivers/net/ethernet/sfc/ef100_netdev.c
drivers/net/ethernet/sfc/ef100_netdev.h
drivers/net/ethernet/sfc/ef100_nic.c
drivers/net/ethernet/sfc/ef100_nic.h
drivers/net/ethernet/sfc/efx.c
drivers/net/ethernet/sfc/efx_common.c
drivers/net/ethernet/sfc/efx_common.h
drivers/net/ethernet/sfc/ethtool.c
drivers/net/ethernet/sfc/ethtool_common.c
drivers/net/ethernet/sfc/falcon/bitfield.h
drivers/net/ethernet/sfc/falcon/farch.c
drivers/net/ethernet/sfc/mcdi.c
drivers/net/ethernet/sfc/mcdi_pcol.h
drivers/net/ethernet/sfc/mcdi_port.c
drivers/net/ethernet/sfc/net_driver.h
drivers/net/ethernet/sfc/rx_common.c
drivers/net/ethernet/sfc/siena/farch.c
drivers/net/ethernet/sfc/siena/mcdi_pcol.h
drivers/net/ethernet/sfc/sriov.c
drivers/net/ethernet/sfc/tx.c
drivers/net/ethernet/smsc/epic100.c
drivers/net/ethernet/stmicro/stmmac/mmc_core.c
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
drivers/net/ethernet/sun/cassini.c
drivers/net/ethernet/sun/cassini.h
drivers/net/ethernet/sun/ldmvsw.c
drivers/net/ethernet/sun/sungem.c
drivers/net/ethernet/sunplus/spl2sw_driver.c
drivers/net/ethernet/synopsys/dwc-xlgmac-net.c
drivers/net/ethernet/wangxun/Kconfig [new file with mode: 0644]
drivers/net/ethernet/wangxun/Makefile [new file with mode: 0644]
drivers/net/ethernet/wangxun/txgbe/Makefile [new file with mode: 0644]
drivers/net/ethernet/wangxun/txgbe/txgbe.h [new file with mode: 0644]
drivers/net/ethernet/wangxun/txgbe/txgbe_main.c [new file with mode: 0644]
drivers/net/ethernet/wangxun/txgbe/txgbe_type.h [new file with mode: 0644]
drivers/net/ethernet/xilinx/xilinx_axienet_mdio.c
drivers/net/ethernet/xscale/ixp4xx_eth.c
drivers/net/hamradio/6pack.c
drivers/net/ipa/gsi_trans.c
drivers/net/pcs/Kconfig
drivers/net/pcs/Makefile
drivers/net/pcs/pcs-lynx.c
drivers/net/pcs/pcs-rzn1-miic.c [new file with mode: 0644]
drivers/net/phy/Kconfig
drivers/net/phy/Makefile
drivers/net/phy/aquantia_main.c
drivers/net/phy/at803x.c
drivers/net/phy/ax88796b.c
drivers/net/phy/bcm-phy-lib.h
drivers/net/phy/bcm-phy-ptp.c [new file with mode: 0644]
drivers/net/phy/broadcom.c
drivers/net/phy/dp83822.c
drivers/net/phy/dp83td510.c
drivers/net/phy/micrel.c
drivers/net/phy/mxl-gpy.c
drivers/net/phy/nxp-tja11xx.c
drivers/net/phy/phy.c
drivers/net/phy/phy_device.c
drivers/net/phy/phylink.c
drivers/net/phy/sfp.c
drivers/net/phy/smsc.c
drivers/net/tun.c
drivers/net/usb/asix.h
drivers/net/usb/asix_common.c
drivers/net/usb/ax88179_178a.c
drivers/net/usb/catc.c
drivers/net/usb/cdc_eem.c
drivers/net/usb/smsc95xx.c
drivers/net/usb/usbnet.c
drivers/net/veth.c
drivers/net/virtio_net.c
drivers/net/wireless/ath/wil6210/txrx.c
drivers/net/xen-netback/netback.c
drivers/net/xen-netfront.c
drivers/nfc/nfcmrvl/i2c.c
drivers/nfc/nfcmrvl/spi.c
drivers/nfc/nxp-nci/i2c.c
drivers/nvdimm/bus.c
drivers/nvme/host/core.c
drivers/nvme/host/nvme.h
drivers/nvme/host/pci.c
drivers/nvme/host/rdma.c
drivers/nvme/host/tcp.c
drivers/nvme/target/configfs.c
drivers/nvme/target/core.c
drivers/nvme/target/nvmet.h
drivers/nvme/target/passthru.c
drivers/nvme/target/tcp.c
drivers/pinctrl/aspeed/pinctrl-aspeed.c
drivers/pinctrl/freescale/pinctrl-imx93.c
drivers/pinctrl/stm32/pinctrl-stm32.c
drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c
drivers/pinctrl/sunxi/pinctrl-sunxi.c
drivers/platform/mellanox/nvsw-sn2201.c
drivers/platform/x86/Kconfig
drivers/platform/x86/hp-wmi.c
drivers/platform/x86/ideapad-laptop.c
drivers/platform/x86/intel/pmc/core.c
drivers/platform/x86/panasonic-laptop.c
drivers/platform/x86/thinkpad_acpi.c
drivers/regulator/qcom_smd-regulator.c
drivers/s390/char/sclp.c
drivers/s390/virtio/virtio_ccw.c
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
drivers/scsi/ibmvscsi/ibmvfc.c
drivers/scsi/ibmvscsi/ibmvfc.h
drivers/scsi/scsi_debug.c
drivers/scsi/scsi_transport_iscsi.c
drivers/scsi/storvsc_drv.c
drivers/soc/atmel/soc.c
drivers/soc/bcm/brcmstb/pm/pm-arm.c
drivers/soc/imx/imx8m-blk-ctrl.c
drivers/soc/ixp4xx/ixp4xx-npe.c
drivers/soc/qcom/smem.c
drivers/spi/spi-cadence.c
drivers/spi/spi-mem.c
drivers/spi/spi-rockchip.c
drivers/staging/olpc_dcon/Kconfig
drivers/staging/qlge/qlge_main.c
drivers/staging/r8188eu/core/rtw_xmit.c
drivers/staging/r8188eu/os_dep/ioctl_linux.c
drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
drivers/thermal/intel/intel_tcc_cooling.c
drivers/tty/goldfish.c
drivers/tty/n_gsm.c
drivers/tty/serial/8250/8250_port.c
drivers/tty/serial/qcom_geni_serial.c
drivers/tty/serial/serial_core.c
drivers/tty/sysrq.c
drivers/ufs/core/ufshcd.c
drivers/usb/cdns3/cdnsp-ring.c
drivers/usb/chipidea/udc.c
drivers/usb/dwc2/hcd.c
drivers/usb/dwc3/core.c
drivers/usb/dwc3/dwc3-pci.c
drivers/usb/dwc3/gadget.c
drivers/usb/gadget/function/f_fs.c
drivers/usb/gadget/function/u_ether.c
drivers/usb/gadget/function/uvc_video.c
drivers/usb/gadget/legacy/raw_gadget.c
drivers/usb/gadget/udc/lpc32xx_udc.c
drivers/usb/host/xhci-hub.c
drivers/usb/host/xhci-pci.c
drivers/usb/host/xhci.c
drivers/usb/host/xhci.h
drivers/usb/serial/io_ti.c
drivers/usb/serial/io_usbvend.h
drivers/usb/serial/option.c
drivers/usb/serial/pl2303.c
drivers/usb/typec/tcpm/Kconfig
drivers/vdpa/mlx5/net/mlx5_vnet.c
drivers/vdpa/vdpa_user/vduse_dev.c
drivers/vhost/vdpa.c
drivers/video/console/sticore.c
drivers/video/fbdev/au1100fb.c
drivers/video/fbdev/cirrusfb.c
drivers/video/fbdev/core/fbmem.c
drivers/video/fbdev/intelfb/intelfbdrv.c
drivers/video/fbdev/intelfb/intelfbhw.c
drivers/video/fbdev/omap/sossi.c
drivers/video/fbdev/omap2/omapfb/dss/hdmi_phy.c
drivers/video/fbdev/pxa3xx-gcu.c
drivers/video/fbdev/simplefb.c
drivers/video/fbdev/skeletonfb.c
drivers/virtio/Kconfig
drivers/virtio/virtio.c
drivers/virtio/virtio_mmio.c
drivers/virtio/virtio_pci_modern_dev.c
drivers/virtio/virtio_ring.c
drivers/watchdog/gxp-wdt.c
drivers/xen/features.c
drivers/xen/gntdev-common.h
drivers/xen/gntdev.c
fs/9p/fid.c
fs/9p/vfs_addr.c
fs/9p/vfs_inode.c
fs/9p/vfs_inode_dotl.c
fs/afs/inode.c
fs/btrfs/block-group.h
fs/btrfs/ctree.h
fs/btrfs/disk-io.c
fs/btrfs/extent-tree.c
fs/btrfs/extent_io.c
fs/btrfs/file.c
fs/btrfs/inode.c
fs/btrfs/locking.c
fs/btrfs/reflink.c
fs/btrfs/super.c
fs/btrfs/zoned.c
fs/btrfs/zoned.h
fs/ceph/caps.c
fs/cifs/cifs_debug.c
fs/cifs/cifsglob.h
fs/cifs/cifsproto.h
fs/cifs/connect.c
fs/cifs/misc.c
fs/cifs/sess.c
fs/cifs/smb2ops.c
fs/cifs/smb2pdu.c
fs/cifs/trace.h
fs/exfat/namei.c
fs/ext2/dir.c
fs/ext4/inode.c
fs/ext4/mballoc.c
fs/ext4/migrate.c
fs/ext4/namei.c
fs/ext4/page-io.c
fs/ext4/resize.c
fs/ext4/super.c
fs/ext4/xattr.c
fs/f2fs/iostat.c
fs/f2fs/namei.c
fs/f2fs/node.c
fs/hugetlbfs/inode.c
fs/io_uring.c
fs/jbd2/transaction.c
fs/ksmbd/smb2pdu.c
fs/ksmbd/transport_rdma.c
fs/ksmbd/transport_tcp.c
fs/ksmbd/vfs.c
fs/nfs/callback_proc.c
fs/nfs/dir.c
fs/nfs/nfs4file.c
fs/nfs/nfs4proc.c
fs/nfs/nfs4state.c
fs/nfs/pnfs.c
fs/nfs/pnfs.h
fs/nfsd/vfs.c
fs/notify/fanotify/fanotify_user.c
fs/read_write.c
fs/tracefs/inode.c
fs/xfs/libxfs/xfs_attr.c
fs/xfs/libxfs/xfs_attr.h
fs/xfs/libxfs/xfs_attr_leaf.c
fs/xfs/libxfs/xfs_attr_leaf.h
fs/xfs/libxfs/xfs_da_btree.h
fs/xfs/xfs_attr_item.c
fs/xfs/xfs_bmap_util.c
fs/xfs/xfs_icache.c
fs/xfs/xfs_icache.h
fs/xfs/xfs_inode.c
fs/xfs/xfs_ioctl.c
fs/xfs/xfs_log.c
fs/xfs/xfs_mount.h
fs/xfs/xfs_qm_syscalls.c
fs/xfs/xfs_super.c
fs/xfs/xfs_trace.h
fs/xfs/xfs_xattr.c
include/drm/drm_atomic.h
include/drm/ttm/ttm_resource.h
include/dt-bindings/net/pcs-rzn1-miic.h [new file with mode: 0644]
include/keys/asymmetric-type.h
include/linux/backing-dev.h
include/linux/blkdev.h
include/linux/brcmphy.h
include/linux/can/bittiming.h
include/linux/can/skb.h
include/linux/compiler_types.h
include/linux/console.h
include/linux/devfreq.h
include/linux/dim.h
include/linux/fanotify.h
include/linux/gpio/driver.h
include/linux/lockref.h
include/linux/mlx5/eswitch.h
include/linux/mm.h
include/linux/mroute_base.h
include/linux/netdevice.h
include/linux/nvme.h
include/linux/objtool.h
include/linux/pcs-rzn1-miic.h [new file with mode: 0644]
include/linux/phy.h
include/linux/printk.h
include/linux/ratelimit_types.h
include/linux/refcount.h
include/linux/scmi_protocol.h
include/linux/serial_core.h
include/linux/skbuff.h
include/linux/sockptr.h
include/linux/sysctl.h
include/linux/sysfb.h
include/linux/tcp.h
include/linux/time64.h
include/linux/virtio_config.h
include/linux/visorbus.h [deleted file]
include/net/af_unix.h
include/net/bond_options.h
include/net/bonding.h
include/net/dsa.h
include/net/flow_offload.h
include/net/inet_sock.h
include/net/mptcp.h
include/net/neighbour.h
include/net/net_namespace.h
include/net/netfilter/nf_tables.h
include/net/netns/unix.h
include/net/pkt_sched.h
include/net/raw.h
include/net/sock.h
include/net/strparser.h
include/net/switchdev.h
include/net/tls.h
include/soc/mscc/ocelot.h
include/sound/soc.h
include/trace/events/io_uring.h
include/trace/events/libata.h
include/trace/events/net.h
include/trace/events/qdisc.h
include/uapi/drm/drm_fourcc.h
include/uapi/linux/if_ether.h
include/uapi/linux/if_link.h
include/uapi/linux/io_uring.h
include/uapi/linux/mptcp.h
include/uapi/linux/neighbour.h
include/uapi/linux/snmp.h
include/uapi/linux/sysctl.h
include/uapi/linux/tls.h
include/uapi/linux/tty.h
kernel/auditsc.c
kernel/bpf/btf.c
kernel/bpf/verifier.c
kernel/dma/direct.c
kernel/hung_task.c
kernel/irq/chip.c
kernel/kthread.c
kernel/locking/lockdep.c
kernel/panic.c
kernel/power/hibernate.c
kernel/printk/printk.c
kernel/rcu/tree_stall.h
kernel/reboot.c
kernel/sched/core.c
kernel/sched/sched.h
kernel/signal.c
kernel/sysctl.c
kernel/time/tick-sched.c
kernel/trace/blktrace.c
kernel/trace/bpf_trace.c
kernel/trace/ftrace.c
kernel/trace/rethook.c
kernel/trace/trace.c
kernel/trace/trace_kprobe.c
kernel/trace/trace_uprobe.c
kernel/watchdog.c
kernel/watchdog_hld.c
lib/Kconfig
lib/Kconfig.ubsan
lib/Makefile
lib/crypto/Kconfig
lib/lockref.c
lib/memneq.c [new file with mode: 0644]
lib/sbitmap.c
mm/backing-dev.c
mm/damon/reclaim.c
mm/filemap.c
mm/huge_memory.c
mm/hwpoison-inject.c
mm/kfence/core.c
mm/madvise.c
mm/memcontrol.c
mm/memory-failure.c
mm/migrate.c
mm/page_isolation.c
mm/readahead.c
mm/slub.c
mm/swap.c
net/bluetooth/hci_core.c
net/bluetooth/hci_sync.c
net/bridge/br_netfilter_hooks.c
net/can/Kconfig
net/can/bcm.c
net/core/dev.c
net/core/filter.c
net/core/neighbour.c
net/core/net-sysfs.c
net/core/page_pool.c
net/core/skbuff.c
net/core/skmsg.c
net/core/sock.c
net/decnet/dn_neigh.c
net/dsa/Kconfig
net/dsa/Makefile
net/dsa/slave.c
net/dsa/tag_ksz.c
net/dsa/tag_rzn1_a5psw.c [new file with mode: 0644]
net/ethtool/eeprom.c
net/ipv4/arp.c
net/ipv4/esp4.c
net/ipv4/ip_gre.c
net/ipv4/ip_output.c
net/ipv4/ip_tunnel_core.c
net/ipv4/ipconfig.c
net/ipv4/ipmr.c
net/ipv4/ipmr_base.c
net/ipv4/ping.c
net/ipv4/raw.c
net/ipv4/raw_diag.c
net/ipv4/tcp.c
net/ipv4/tcp_bpf.c
net/ipv4/tcp_ipv4.c
net/ipv6/addrconf.c
net/ipv6/ip6_gre.c
net/ipv6/ip6mr.c
net/ipv6/ndisc.c
net/ipv6/raw.c
net/ipv6/route.c
net/ipv6/seg6_hmac.c
net/ipv6/sit.c
net/l2tp/l2tp_debugfs.c
net/l2tp/l2tp_ppp.c
net/mptcp/options.c
net/mptcp/pm.c
net/mptcp/pm_netlink.c
net/mptcp/pm_userspace.c
net/mptcp/protocol.c
net/mptcp/protocol.h
net/mptcp/subflow.c
net/ncsi/ncsi-manage.c
net/netfilter/nf_dup_netdev.c
net/netfilter/nf_tables_api.c
net/netfilter/nf_tables_core.c
net/netfilter/nf_tables_trace.c
net/netfilter/nfnetlink_cttimeout.c
net/netfilter/nft_meta.c
net/netfilter/nft_numgen.c
net/netfilter/nft_set_hash.c
net/netfilter/nft_set_pipapo.c
net/openvswitch/flow.c
net/rose/rose_route.c
net/rose/rose_timer.c
net/rxrpc/rxkad.c
net/sched/act_api.c
net/sched/act_police.c
net/sched/sch_netem.c
net/sched/sch_taprio.c
net/socket.c
net/strparser/strparser.c
net/sunrpc/clnt.c
net/sunrpc/xdr.c
net/tipc/core.c
net/tipc/node.c
net/tipc/socket.c
net/tls/tls.h [new file with mode: 0644]
net/tls/tls_device.c
net/tls/tls_device_fallback.c
net/tls/tls_main.c
net/tls/tls_proc.c
net/tls/tls_sw.c
net/tls/tls_toe.c
net/unix/af_unix.c
net/unix/diag.c
net/unix/sysctl_net_unix.c
net/xdp/xsk.c
net/xdp/xsk_buff_pool.c
samples/fprobe/fprobe_example.c
scripts/faddr2line
scripts/gen_autoksyms.sh
scripts/mod/modpost.c
security/selinux/hooks.c
sound/core/memalloc.c
sound/hda/hdac_i915.c
sound/hda/intel-dsp-config.c
sound/hda/intel-nhlt.c
sound/pci/cs46xx/cs46xx.c
sound/pci/hda/hda_auto_parser.c
sound/pci/hda/hda_local.h
sound/pci/hda/patch_conexant.c
sound/pci/hda/patch_realtek.c
sound/pci/hda/patch_via.c
sound/soc/codecs/ak4613.c
sound/soc/codecs/cs35l41-lib.c
sound/soc/codecs/cs35l41.c
sound/soc/codecs/cs47l15.c
sound/soc/codecs/madera.c
sound/soc/codecs/max98373-sdw.c
sound/soc/codecs/rt1308-sdw.c
sound/soc/codecs/rt1316-sdw.c
sound/soc/codecs/rt5682-sdw.c
sound/soc/codecs/rt700-sdw.c
sound/soc/codecs/rt700.c
sound/soc/codecs/rt711-sdca-sdw.c
sound/soc/codecs/rt711-sdca.c
sound/soc/codecs/rt711-sdw.c
sound/soc/codecs/rt711.c
sound/soc/codecs/rt715-sdca-sdw.c
sound/soc/codecs/rt715-sdw.c
sound/soc/codecs/wcd9335.c
sound/soc/codecs/wcd938x.c
sound/soc/codecs/wm5110.c
sound/soc/codecs/wm_adsp.c
sound/soc/intel/avs/topology.c
sound/soc/intel/boards/bytcr_wm5102.c
sound/soc/intel/boards/sof_sdw.c
sound/soc/qcom/qdsp6/q6apm-dai.c
sound/soc/rockchip/rockchip_i2s.c
sound/soc/soc-dapm.c
sound/soc/soc-ops.c
sound/soc/sof/intel/hda-dsp.c
sound/soc/sof/intel/hda-loader.c
sound/soc/sof/intel/hda-pcm.c
sound/soc/sof/intel/hda-stream.c
sound/soc/sof/intel/hda.h
sound/soc/sof/ipc3-topology.c
sound/soc/sof/mediatek/mt8186/mt8186.c
sound/soc/sof/pm.c
sound/soc/sof/sof-priv.h
sound/usb/mixer_us16x08.c
sound/usb/quirks-table.h
sound/usb/quirks.c
sound/x86/intel_hdmi_audio.c
tools/arch/arm64/include/asm/cputype.h
tools/arch/arm64/include/uapi/asm/kvm.h
tools/arch/x86/include/asm/cpufeatures.h
tools/arch/x86/include/asm/disabled-features.h
tools/arch/x86/include/uapi/asm/kvm.h
tools/arch/x86/include/uapi/asm/svm.h
tools/include/linux/objtool.h
tools/include/uapi/drm/i915_drm.h
tools/include/uapi/linux/if_link.h
tools/include/uapi/linux/kvm.h
tools/include/uapi/linux/prctl.h
tools/include/uapi/linux/vhost.h
tools/kvm/kvm_stat/kvm_stat
tools/lib/perf/evsel.c
tools/perf/builtin-inject.c
tools/perf/builtin-stat.c
tools/perf/tests/bp_account.c
tools/perf/tests/expr.c
tools/perf/tests/shell/lib/perf_csv_output_lint.py [deleted file]
tools/perf/tests/shell/stat+csv_output.sh
tools/perf/tests/shell/test_arm_callgraph_fp.sh
tools/perf/tests/topology.c
tools/perf/trace/beauty/arch_errno_names.sh
tools/perf/trace/beauty/include/linux/socket.h
tools/perf/util/arm-spe.c
tools/perf/util/bpf-utils.c
tools/perf/util/bpf_off_cpu.c
tools/perf/util/bpf_skel/off_cpu.bpf.c
tools/perf/util/build-id.c
tools/perf/util/evsel.c
tools/perf/util/expr.l
tools/perf/util/header.c
tools/perf/util/header.h
tools/perf/util/metricgroup.c
tools/perf/util/off_cpu.h
tools/perf/util/synthetic-events.c
tools/perf/util/unwind-libunwind-local.c
tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
tools/testing/selftests/bpf/prog_tests/tailcalls.c
tools/testing/selftests/bpf/progs/kprobe_multi.c
tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c [new file with mode: 0644]
tools/testing/selftests/bpf/verifier/jmp32.c
tools/testing/selftests/bpf/verifier/jump.c
tools/testing/selftests/dma/Makefile
tools/testing/selftests/dma/dma_map_benchmark.c
tools/testing/selftests/kvm/lib/aarch64/ucall.c
tools/testing/selftests/lib.mk
tools/testing/selftests/net/.gitignore
tools/testing/selftests/net/Makefile
tools/testing/selftests/net/af_unix/Makefile
tools/testing/selftests/net/af_unix/unix_connect.c [new file with mode: 0644]
tools/testing/selftests/net/bpf/Makefile
tools/testing/selftests/net/cmsg_sender.c
tools/testing/selftests/net/fcnal-test.sh
tools/testing/selftests/net/fib_rule_tests.sh
tools/testing/selftests/net/forwarding/Makefile
tools/testing/selftests/net/forwarding/bridge_mdb_port_down.sh [new file with mode: 0755]
tools/testing/selftests/net/forwarding/ethtool_extended_state.sh
tools/testing/selftests/net/forwarding/lib.sh
tools/testing/selftests/net/mptcp/diag.sh
tools/testing/selftests/net/mptcp/mptcp_connect.c
tools/testing/selftests/net/mptcp/mptcp_inq.c
tools/testing/selftests/net/mptcp/mptcp_join.sh
tools/testing/selftests/net/mptcp/mptcp_sockopt.c
tools/testing/selftests/net/mptcp/pm_nl_ctl.c
tools/testing/selftests/net/mptcp/simult_flows.sh
tools/testing/selftests/net/mptcp/userspace_pm.sh
tools/testing/selftests/net/tls.c
tools/testing/selftests/net/tun.c [new file with mode: 0644]
tools/testing/selftests/net/udpgro.sh
tools/testing/selftests/net/udpgro_bench.sh
tools/testing/selftests/net/udpgro_frglist.sh
tools/testing/selftests/net/udpgro_fwd.sh
tools/testing/selftests/net/udpgso_bench.sh
tools/testing/selftests/net/veth.sh
tools/testing/selftests/netfilter/nft_concat_range.sh
tools/testing/selftests/tc-testing/.gitignore
tools/testing/selftests/tc-testing/tc-tests/actions/gact.json
tools/testing/selftests/vm/gup_test.c
tools/testing/selftests/vm/ksm_tests.c
tools/testing/selftests/wireguard/qemu/Makefile
tools/testing/selftests/wireguard/qemu/arch/arm.config
tools/testing/selftests/wireguard/qemu/arch/armeb.config
tools/testing/selftests/wireguard/qemu/arch/i686.config
tools/testing/selftests/wireguard/qemu/arch/m68k.config
tools/testing/selftests/wireguard/qemu/arch/mips.config
tools/testing/selftests/wireguard/qemu/arch/mipsel.config
tools/testing/selftests/wireguard/qemu/arch/powerpc.config
tools/testing/selftests/wireguard/qemu/arch/x86_64.config
tools/testing/selftests/wireguard/qemu/init.c

index 825fae8..2ed1cf8 100644
--- a/.mailmap
+++ b/.mailmap
@@ -10,6 +10,8 @@
 # Please keep this list dictionary sorted.
 #
 Aaron Durbin <adurbin@google.com>
+Abel Vesa <abelvesa@kernel.org> <abel.vesa@nxp.com>
+Abel Vesa <abelvesa@kernel.org> <abelvesa@gmail.com>
 Abhinav Kumar <quic_abhinavk@quicinc.com> <abhinavk@codeaurora.org>
 Adam Oldham <oldhamca@gmail.com>
 Adam Radford <aradford@gmail.com>
@@ -85,6 +87,7 @@ Christian Borntraeger <borntraeger@linux.ibm.com> <borntrae@de.ibm.com>
 Christian Brauner <brauner@kernel.org> <christian@brauner.io>
 Christian Brauner <brauner@kernel.org> <christian.brauner@canonical.com>
 Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com>
+Christian Marangi <ansuelsmth@gmail.com>
 Christophe Ricard <christophe.ricard@gmail.com>
 Christoph Hellwig <hch@lst.de>
 Colin Ian King <colin.king@intel.com> <colin.king@canonical.com>
@@ -165,6 +168,7 @@ Jan Glauber <jan.glauber@gmail.com> <jang@de.ibm.com>
 Jan Glauber <jan.glauber@gmail.com> <jang@linux.vnet.ibm.com>
 Jan Glauber <jan.glauber@gmail.com> <jglauber@cavium.com>
 Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@linux.intel.com>
+Jarkko Sakkinen <jarkko@kernel.org> <jarkko@profian.com>
 Jason Gunthorpe <jgg@ziepe.ca> <jgg@mellanox.com>
 Jason Gunthorpe <jgg@ziepe.ca> <jgg@nvidia.com>
 Jason Gunthorpe <jgg@ziepe.ca> <jgunthorpe@obsidianresearch.com>
index 308a675..491ead8 100644
@@ -1,4 +1,4 @@
-What:          /sys/bus/iio/devices/iio:deviceX/conversion_mode
+What:          /sys/bus/iio/devices/iio:deviceX/in_conversion_mode
 KernelVersion: 4.2
 Contact:       linux-iio@vger.kernel.org
 Description:
index fcd650b..805f228 100644
@@ -391,6 +391,18 @@ GRO has decided not to coalesce, it is placed on a per-NAPI list. This
 list is then passed to the stack when the number of segments reaches the
 gro_normal_batch limit.
 
+high_order_alloc_disable
+------------------------
+
+By default the allocator for page frags tries to use high order pages (order-3
+on x86). While the default behavior gives good results in most cases, some users
+might have hit a contention in page allocations/freeing. This was especially
+true on older kernels (< 5.14) when high-order pages were not stored on per-cpu
+lists. This allows to opt-in for order-0 allocation instead but is now mostly of
+historical importance.
+
+Default: 0
+
 2. /proc/sys/net/unix - Parameters for Unix domain sockets
 ----------------------------------------------------------
 
index fe0ac08..0e8ddf0 100644
@@ -40,9 +40,8 @@ properties:
       value to be used for converting remote channel measurements to
       temperature.
     $ref: /schemas/types.yaml#/definitions/int32
-    items:
-      minimum: -128
-      maximum: 127
+    minimum: -128
+    maximum: 127
 
   ti,beta-compensation:
     description:
index f89ebde..de7c5e5 100644
@@ -30,6 +30,7 @@ properties:
       - socionext,uniphier-ld11-aidet
       - socionext,uniphier-ld20-aidet
       - socionext,uniphier-pxs3-aidet
+      - socionext,uniphier-nx1-aidet
 
   reg:
     maxItems: 1
diff --git a/Documentation/devicetree/bindings/net/can/microchip,mpfs-can.yaml b/Documentation/devicetree/bindings/net/can/microchip,mpfs-can.yaml
new file mode 100644
index 0000000..45aa3de
--- /dev/null
@@ -0,0 +1,45 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/can/microchip,mpfs-can.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title:
+  Microchip PolarFire SoC (MPFS) can controller
+
+maintainers:
+  - Conor Dooley <conor.dooley@microchip.com>
+
+allOf:
+  - $ref: can-controller.yaml#
+
+properties:
+  compatible:
+    const: microchip,mpfs-can
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+
+additionalProperties: false
+
+examples:
+  - |
+    can@2010c000 {
+        compatible = "microchip,mpfs-can";
+        reg = <0x2010c000 0x1000>;
+        clocks = <&clkcfg 17>;
+        interrupt-parent = <&plic>;
+        interrupts = <56>;
+    };
index 86fc31c..9c92156 100644
@@ -28,6 +28,7 @@ properties:
           - enum:
               - cdns,at91sam9260-macb # Atmel at91sam9 SoCs
               - cdns,sam9x60-macb     # Microchip sam9x60 SoC
+              - microchip,mpfs-macb   # Microchip PolarFire SoC
           - const: cdns,macb          # Generic
 
       - items:
index a3bf432..17ab6c6 100644
@@ -66,6 +66,9 @@ properties:
       - mediatek,mt7531
       - mediatek,mt7621
 
+  reg:
+    maxItems: 1
+
   core-supply:
     description:
       Phandle to the regulator node necessary for the core power.
diff --git a/Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml b/Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml
new file mode 100644
index 0000000..630bf0f
--- /dev/null
@@ -0,0 +1,192 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/dsa/microchip,lan937x.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: LAN937x Ethernet Switch Series Tree Bindings
+
+maintainers:
+  - UNGLinuxDriver@microchip.com
+
+allOf:
+  - $ref: dsa.yaml#
+
+properties:
+  compatible:
+    enum:
+      - microchip,lan9370
+      - microchip,lan9371
+      - microchip,lan9372
+      - microchip,lan9373
+      - microchip,lan9374
+
+  reg:
+    maxItems: 1
+
+  spi-max-frequency:
+    maximum: 50000000
+
+  reset-gpios:
+    description: Optional gpio specifier for a reset line
+    maxItems: 1
+
+  mdio:
+    $ref: /schemas/net/mdio.yaml#
+    unevaluatedProperties: false
+
+patternProperties:
+  "^(ethernet-)?ports$":
+    patternProperties:
+      "^(ethernet-)?port@[0-9]+$":
+        allOf:
+          - if:
+              properties:
+                phy-mode:
+                  contains:
+                    enum:
+                      - rgmii
+                      - rgmii-id
+                      - rgmii-txid
+                      - rgmii-rxid
+            then:
+              properties:
+                rx-internal-delay-ps:
+                  enum: [0, 2000]
+                  default: 0
+                tx-internal-delay-ps:
+                  enum: [0, 2000]
+                  default: 0
+
+required:
+  - compatible
+  - reg
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+
+    macb0 {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            fixed-link {
+                    speed = <1000>;
+                    full-duplex;
+            };
+    };
+
+    spi {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            lan9374: switch@0 {
+                    compatible = "microchip,lan9374";
+                    reg = <0>;
+                    spi-max-frequency = <44000000>;
+
+                    ethernet-ports {
+                            #address-cells = <1>;
+                            #size-cells = <0>;
+
+                            port@0 {
+                                    reg = <0>;
+                                    label = "lan1";
+                                    phy-mode = "internal";
+                                    phy-handle = <&t1phy0>;
+                            };
+
+                            port@1 {
+                                    reg = <1>;
+                                    label = "lan2";
+                                    phy-mode = "internal";
+                                    phy-handle = <&t1phy1>;
+                            };
+
+                            port@2 {
+                                    reg = <2>;
+                                    label = "lan4";
+                                    phy-mode = "internal";
+                                    phy-handle = <&t1phy2>;
+                            };
+
+                            port@3 {
+                                    reg = <3>;
+                                    label = "lan6";
+                                    phy-mode = "internal";
+                                    phy-handle = <&t1phy3>;
+                            };
+
+                            port@4 {
+                                    reg = <4>;
+                                    phy-mode = "rgmii";
+                                    tx-internal-delay-ps = <2000>;
+                                    rx-internal-delay-ps = <2000>;
+                                    ethernet = <&macb0>;
+
+                                    fixed-link {
+                                            speed = <1000>;
+                                            full-duplex;
+                                    };
+                            };
+
+                            port@5 {
+                                    reg = <5>;
+                                    label = "lan7";
+                                    phy-mode = "rgmii";
+                                    tx-internal-delay-ps = <2000>;
+                                    rx-internal-delay-ps = <2000>;
+
+                                    fixed-link {
+                                            speed = <1000>;
+                                            full-duplex;
+                                    };
+                            };
+
+                            port@6 {
+                                    reg = <6>;
+                                    label = "lan5";
+                                    phy-mode = "internal";
+                                    phy-handle = <&t1phy6>;
+                            };
+
+                            port@7 {
+                                    reg = <7>;
+                                    label = "lan3";
+                                    phy-mode = "internal";
+                                    phy-handle = <&t1phy7>;
+                            };
+                    };
+
+                    mdio {
+                            #address-cells = <1>;
+                            #size-cells = <0>;
+
+                            t1phy0: ethernet-phy@0{
+                                    reg = <0x0>;
+                            };
+
+                            t1phy1: ethernet-phy@1{
+                                    reg = <0x1>;
+                            };
+
+                            t1phy2: ethernet-phy@2{
+                                    reg = <0x2>;
+                            };
+
+                            t1phy3: ethernet-phy@3{
+                                    reg = <0x3>;
+                            };
+
+                            t1phy6: ethernet-phy@6{
+                                    reg = <0x6>;
+                            };
+
+                            t1phy7: ethernet-phy@7{
+                                    reg = <0x7>;
+                            };
+                    };
+            };
+    };
diff --git a/Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml b/Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml
new file mode 100644 (file)
index 0000000..4d428f5
--- /dev/null
@@ -0,0 +1,157 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/dsa/renesas,rzn1-a5psw.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Renesas RZ/N1 Advanced 5-port Ethernet switch
+
+maintainers:
+  - Clément Léger <clement.leger@bootlin.com>
+
+description: |
+  The advanced 5-port switch is present on the Renesas RZ/N1 SoC family and
+  handles 4 ports + 1 CPU management port.
+
+allOf:
+  - $ref: dsa.yaml#
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - renesas,r9a06g032-a5psw
+      - const: renesas,rzn1-a5psw
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    items:
+      - description: Device Level Ring (DLR) interrupt
+      - description: Switch interrupt
+      - description: Parallel Redundancy Protocol (PRP) interrupt
+      - description: Integrated HUB module interrupt
+      - description: Receive Pattern Match interrupt
+
+  interrupt-names:
+    items:
+      - const: dlr
+      - const: switch
+      - const: prp
+      - const: hub
+      - const: ptrn
+
+  power-domains:
+    maxItems: 1
+
+  mdio:
+    $ref: /schemas/net/mdio.yaml#
+    unevaluatedProperties: false
+
+  clocks:
+    items:
+      - description: AHB clock used for the switch register interface
+      - description: Switch system clock
+
+  clock-names:
+    items:
+      - const: hclk
+      - const: clk
+
+  ethernet-ports:
+    type: object
+    properties:
+      '#address-cells':
+        const: 1
+      '#size-cells':
+        const: 0
+
+    patternProperties:
+      "^(ethernet-)?port@[0-4]$":
+        type: object
+        description: Ethernet switch ports
+
+        properties:
+          pcs-handle:
+            description:
+              phandle pointing to a PCS sub-node compatible with
+              renesas,rzn1-miic.yaml#
+            $ref: /schemas/types.yaml#/definitions/phandle
+
+unevaluatedProperties: false
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - power-domains
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/clock/r9a06g032-sysctrl.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    switch@44050000 {
+        compatible = "renesas,r9a06g032-a5psw", "renesas,rzn1-a5psw";
+        reg = <0x44050000 0x10000>;
+        clocks = <&sysctrl R9A06G032_HCLK_SWITCH>, <&sysctrl R9A06G032_CLK_SWITCH>;
+        clock-names = "hclk", "clk";
+        power-domains = <&sysctrl>;
+        interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "dlr", "switch", "prp", "hub", "ptrn";
+
+        dsa,member = <0 0>;
+
+        ethernet-ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                reg = <0>;
+                label = "lan0";
+                phy-handle = <&switch0phy3>;
+                pcs-handle = <&mii_conv4>;
+            };
+
+            port@1 {
+                reg = <1>;
+                label = "lan1";
+                phy-handle = <&switch0phy1>;
+                pcs-handle = <&mii_conv3>;
+            };
+
+            port@4 {
+                reg = <4>;
+                ethernet = <&gmac2>;
+                label = "cpu";
+                fixed-link {
+                  speed = <1000>;
+                  full-duplex;
+                };
+            };
+        };
+
+        mdio {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            reset-gpios = <&gpio0a 2 GPIO_ACTIVE_HIGH>;
+            reset-delay-us = <15>;
+            clock-frequency = <2500000>;
+
+            switch0phy1: ethernet-phy@1{
+                reg = <1>;
+            };
+
+            switch0phy3: ethernet-phy@3{
+                reg = <3>;
+            };
+        };
+    };
index 4f15463..56d9aca 100644 (file)
@@ -133,12 +133,6 @@ properties:
       and is useful for determining certain configuration settings
       such as flow control thresholds.
 
-  rx-internal-delay-ps:
-    description: |
-      RGMII Receive Clock Delay defined in pico seconds.
-      This is used for controllers that have configurable RX internal delays.
-      If this property is present then the MAC applies the RX delay.
-
   sfp:
     $ref: /schemas/types.yaml#/definitions/phandle
     description:
@@ -150,12 +144,6 @@ properties:
       The size of the controller\'s transmit fifo in bytes. This
       is used for components that can have configurable fifo sizes.
 
-  tx-internal-delay-ps:
-    description: |
-      RGMII Transmit Clock Delay defined in pico seconds.
-      This is used for controllers that have configurable TX internal delays.
-      If this property is present then the MAC applies the TX delay.
-
   managed:
     description:
       Specifies the PHY management type. If auto is set and fixed-link
@@ -232,6 +220,29 @@ properties:
           required:
             - speed
 
+allOf:
+  - if:
+      properties:
+        phy-mode:
+          contains:
+            enum:
+              - rgmii
+              - rgmii-rxid
+              - rgmii-txid
+              - rgmii-id
+    then:
+      properties:
+        rx-internal-delay-ps:
+          description:
+            RGMII Receive Clock Delay defined in picoseconds. This is used for
+            controllers that have configurable RX internal delays. If this
+            property is present then the MAC applies the RX delay.
+        tx-internal-delay-ps:
+          description:
+            RGMII Transmit Clock Delay defined in picoseconds. This is used for
+            controllers that have configurable TX internal delays. If this
+            property is present then the MAC applies the TX delay.
+
 additionalProperties: true
 
 ...
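The schema above only validates the two delay properties; applying them is still
up to the MAC driver. A minimal, hypothetical probe-time sketch follows --
foo_mac_apply_rgmii_delays() and foo_mac_write_delay_regs() are illustrative
names, not an in-tree API, while device_property_read_u32() and
phy_interface_mode_is_rgmii() are the real kernel helpers:

    #include <linux/phy.h>
    #include <linux/property.h>

    /* Hypothetical: push the schema-validated RGMII delays to hardware. */
    static int foo_mac_apply_rgmii_delays(struct device *dev,
                                          phy_interface_t mode)
    {
            u32 rx_ps = 0, tx_ps = 0;

            /* The schema only permits the delays for rgmii* phy-modes. */
            if (!phy_interface_mode_is_rgmii(mode))
                    return 0;

            /* Both properties are optional; keep the 0 ps default on error. */
            device_property_read_u32(dev, "rx-internal-delay-ps", &rx_ps);
            device_property_read_u32(dev, "tx-internal-delay-ps", &tx_ps);

            return foo_mac_write_delay_regs(dev, rx_ps, tx_ps); /* hypothetical */
    }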
index def994c..64c893c 100644 (file)
@@ -23,6 +23,7 @@ properties:
       - mediatek,mt8516-eth
       - mediatek,mt8518-eth
       - mediatek,mt8175-eth
+      - mediatek,mt8365-eth
 
   reg:
     maxItems: 1
@@ -47,6 +48,22 @@ properties:
       Phandle to the device containing the PERICFG register range. This is used
       to control the MII mode.
 
+  mediatek,rmii-rxc:
+    type: boolean
+    description:
+      If present, indicates that the RMII reference clock, which comes from an
+      external PHY, is connected to the RXC pin. Otherwise, it is connected to
+      the TXC pin.
+
+  mediatek,rxc-inverse:
+    type: boolean
+    description:
+      If present, indicates that the clock on the RXC pad is inverted.
+
+  mediatek,txc-inverse:
+    type: boolean
+    description:
+      If present, indicates that the clock on the TXC pad is inverted.
+
   mdio:
     $ref: mdio.yaml#
     unevaluatedProperties: false
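A driver consuming the three new booleans would typically read them with
of_property_read_bool(); a hedged probe fragment (the priv fields are
illustrative, the property names come from the schema above):

    #include <linux/of.h>

    /* Sketch: cache the clock routing and polarity flags at probe time. */
    priv->rmii_rxc    = of_property_read_bool(np, "mediatek,rmii-rxc");
    priv->rxc_inverse = of_property_read_bool(np, "mediatek,rxc-inverse");
    priv->txc_inverse = of_property_read_bool(np, "mediatek,txc-inverse");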
index a9ed691..a407dd1 100644 (file)
@@ -16,6 +16,7 @@ Optional properties:
        KSZ8051: register 0x1f, bits 5..4
        KSZ8081: register 0x1f, bits 5..4
        KSZ8091: register 0x1f, bits 5..4
+       LAN8814: register EP5.0, bit 6
 
        See the respective PHY datasheet for the mode values.
 
diff --git a/Documentation/devicetree/bindings/net/pcs/renesas,rzn1-miic.yaml b/Documentation/devicetree/bindings/net/pcs/renesas,rzn1-miic.yaml
new file mode 100644 (file)
index 0000000..2d33bba
--- /dev/null
@@ -0,0 +1,171 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/pcs/renesas,rzn1-miic.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Renesas RZ/N1 MII converter
+
+maintainers:
+  - Clément Léger <clement.leger@bootlin.com>
+
+description: |
+  This MII converter is present on the Renesas RZ/N1 SoC family. It is
+  responsible for MII passthrough or for converting MII to RMII/RGMII.
+
+properties:
+  '#address-cells':
+    const: 1
+
+  '#size-cells':
+    const: 0
+
+  compatible:
+    items:
+      - enum:
+          - renesas,r9a06g032-miic
+      - const: renesas,rzn1-miic
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: MII reference clock
+      - description: RGMII reference clock
+      - description: RMII reference clock
+      - description: AHB clock used for the MII converter register interface
+
+  clock-names:
+    items:
+      - const: mii_ref
+      - const: rgmii_ref
+      - const: rmii_ref
+      - const: hclk
+
+  renesas,miic-switch-portin:
+    description: MII Switch PORTIN configuration. This value should use one of
+      the values defined in dt-bindings/net/pcs-rzn1-miic.h.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    enum: [1, 2]
+
+  power-domains:
+    maxItems: 1
+
+patternProperties:
+  "^mii-conv@[0-5]$":
+    type: object
+    description: MII converter port
+
+    properties:
+      reg:
+        description: MII Converter port number.
+        enum: [1, 2, 3, 4, 5]
+
+      renesas,miic-input:
+        description: Converter input port configuration. This value should use
+          one of the values defined in dt-bindings/net/pcs-rzn1-miic.h.
+        $ref: /schemas/types.yaml#/definitions/uint32
+
+    required:
+      - reg
+      - renesas,miic-input
+
+    additionalProperties: false
+
+    allOf:
+      - if:
+          properties:
+            reg:
+              const: 1
+        then:
+          properties:
+            renesas,miic-input:
+              const: 0
+      - if:
+          properties:
+            reg:
+              const: 2
+        then:
+          properties:
+            renesas,miic-input:
+              enum: [1, 11]
+      - if:
+          properties:
+            reg:
+              const: 3
+        then:
+          properties:
+            renesas,miic-input:
+              enum: [7, 10]
+      - if:
+          properties:
+            reg:
+              const: 4
+        then:
+          properties:
+            renesas,miic-input:
+              enum: [4, 6, 9, 13]
+      - if:
+          properties:
+            reg:
+              const: 5
+        then:
+          properties:
+            renesas,miic-input:
+              enum: [3, 5, 8, 12]
+
+required:
+  - '#address-cells'
+  - '#size-cells'
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - power-domains
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/net/pcs-rzn1-miic.h>
+    #include <dt-bindings/clock/r9a06g032-sysctrl.h>
+
+    eth-miic@44030000 {
+      #address-cells = <1>;
+      #size-cells = <0>;
+      compatible = "renesas,r9a06g032-miic", "renesas,rzn1-miic";
+      reg = <0x44030000 0x10000>;
+      clocks = <&sysctrl R9A06G032_CLK_MII_REF>,
+              <&sysctrl R9A06G032_CLK_RGMII_REF>,
+              <&sysctrl R9A06G032_CLK_RMII_REF>,
+              <&sysctrl R9A06G032_HCLK_SWITCH_RG>;
+      clock-names = "mii_ref", "rgmii_ref", "rmii_ref", "hclk";
+      renesas,miic-switch-portin = <MIIC_GMAC2_PORT>;
+      power-domains = <&sysctrl>;
+
+      mii_conv1: mii-conv@1 {
+        renesas,miic-input = <MIIC_GMAC1_PORT>;
+        reg = <1>;
+      };
+
+      mii_conv2: mii-conv@2 {
+        renesas,miic-input = <MIIC_SWITCH_PORTD>;
+        reg = <2>;
+      };
+
+      mii_conv3: mii-conv@3 {
+        renesas,miic-input = <MIIC_SWITCH_PORTC>;
+        reg = <3>;
+      };
+
+      mii_conv4: mii-conv@4 {
+        renesas,miic-input = <MIIC_SWITCH_PORTB>;
+        reg = <4>;
+      };
+
+      mii_conv5: mii-conv@5 {
+        renesas,miic-input = <MIIC_SWITCH_PORTA>;
+        reg = <5>;
+      };
+    };
index 36c85eb..491597c 100644 (file)
@@ -65,6 +65,8 @@ properties:
         - ingenic,x2000-mac
         - loongson,ls2k-dwmac
         - loongson,ls7a-dwmac
+        - renesas,r9a06g032-gmac
+        - renesas,rzn1-gmac
         - rockchip,px30-gmac
         - rockchip,rk3128-gmac
         - rockchip,rk3228-gmac
@@ -135,6 +137,9 @@ properties:
   reset-names:
     const: stmmaceth
 
+  power-domains:
+    maxItems: 1
+
   mac-mode:
     $ref: ethernet-controller.yaml#/properties/phy-connection-type
     description:
index ece261b..7326c0a 100644 (file)
@@ -47,6 +47,5 @@ examples:
         clocks = <&clkcfg CLK_SPI0>;
         interrupt-parent = <&plic>;
         interrupts = <54>;
-        spi-max-frequency = <25000000>;
     };
 ...
index e2c7b93..78ceb9d 100644 (file)
@@ -110,7 +110,6 @@ examples:
         pinctrl-names = "default";
         pinctrl-0 = <&qup_spi1_default>;
         interrupts = <GIC_SPI 602 IRQ_TYPE_LEVEL_HIGH>;
-        spi-max-frequency = <50000000>;
         #address-cells = <1>;
         #size-cells = <0>;
     };
index 0b4524b..1e84e1b 100644 (file)
@@ -136,7 +136,8 @@ properties:
       Phandle of a companion.
 
   phys:
-    maxItems: 1
+    minItems: 1
+    maxItems: 3
 
   phy-names:
     const: usb
index e2ac846..bb6bbd5 100644 (file)
@@ -103,7 +103,8 @@ properties:
       Overrides the detected port count
 
   phys:
-    maxItems: 1
+    minItems: 1
+    maxItems: 3
 
   phy-names:
     const: usb
index b81794e..06ac89a 100644 (file)
@@ -13,6 +13,12 @@ EDD Interfaces
 .. kernel-doc:: drivers/firmware/edd.c
    :internal:
 
+Generic System Framebuffers Interface
+-------------------------------------
+
+.. kernel-doc:: drivers/firmware/sysfb.c
+   :export:
+
 Intel Stratix10 SoC Service Layer
 ---------------------------------
 Some features of the Intel Stratix10 SoC require a level of privilege
index 4e3adf3..b33aa04 100644 (file)
@@ -6,7 +6,7 @@ This document explains how GPIOs can be assigned to given devices and functions.
 
 Note that it only applies to the new descriptor-based interface. For a
 description of the deprecated integer-based GPIO interface please refer to
-gpio-legacy.txt (actually, there is no real mapping possible with the old
+legacy.rst (actually, there is no real mapping possible with the old
 interface; you just fetch an integer from somewhere and request the
 corresponding GPIO).
 
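For contrast, a minimal consumer-side sketch of the descriptor-based request
this document describes (assuming the board mapping defines a "reset" line for
the device; error handling trimmed):

    #include <linux/gpio/consumer.h>

    /* Descriptor-based: look up the "reset" mapping for this device and
     * request it as an output, initially driven logically inactive. */
    struct gpio_desc *reset = gpiod_get(dev, "reset", GPIOD_OUT_LOW);

    if (IS_ERR(reset))
            return PTR_ERR(reset);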
index 47869ca..72bcf5f 100644 (file)
@@ -4,7 +4,7 @@ GPIO Descriptor Consumer Interface
 
 This document describes the consumer interface of the GPIO framework. Note that
 it describes the new descriptor-based interface. For a description of the
-deprecated integer-based GPIO interface please refer to gpio-legacy.txt.
+deprecated integer-based GPIO interface please refer to legacy.rst.
 
 
 Guidelines for GPIOs consumers
@@ -78,7 +78,7 @@ whether the line is configured active high or active low (see
 
 The two last flags are used for use cases where open drain is mandatory, such
 as I2C: if the line is not already configured as open drain in the mappings
-(see board.txt), then open drain will be enforced anyway and a warning will be
+(see board.rst), then open drain will be enforced anyway and a warning will be
 printed that the board configuration needs to be updated to match the use case.
 
 Both functions return either a valid GPIO descriptor, or an error code checkable
@@ -270,7 +270,7 @@ driven.
 The same is applicable for open drain or open source output lines: those do not
 actively drive their output high (open drain) or low (open source), they just
 switch their output to a high impedance value. The consumer should not need to
-care. (For details read about open drain in driver.txt.)
+care. (For details read about open drain in driver.rst.)
 
 With this, all the gpiod_set_(array)_value_xxx() functions interpret the
 parameter "value" as "asserted" ("1") or "de-asserted" ("0"). The physical line
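A short sketch of the logical-value convention just described, assuming an
"enable" line that the mapping declares active-low:

    /* The consumer speaks in logical values; polarity stays in the mapping. */
    gpiod_set_value(enable, 1);     /* assert: physical line goes low */
    gpiod_set_value(enable, 0);     /* de-assert: physical line goes high */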
index 2e924fb..c9c1924 100644 (file)
@@ -14,12 +14,12 @@ Due to the history of GPIO interfaces in the kernel, there are two different
 ways to obtain and use GPIOs:
 
   - The descriptor-based interface is the preferred way to manipulate GPIOs,
-    and is described by all the files in this directory excepted gpio-legacy.txt.
+    and is described by all the files in this directory excepted legacy.rst.
   - The legacy integer-based interface which is considered deprecated (but still
-    usable for compatibility reasons) is documented in gpio-legacy.txt.
+    usable for compatibility reasons) is documented in legacy.rst.
 
 The remainder of this document applies to the new descriptor-based interface.
-gpio-legacy.txt contains the same information applied to the legacy
+legacy.rst contains the same information applied to the legacy
 integer-based interface.
 
 
index d0904f6..992eddb 100644 (file)
@@ -19,13 +19,23 @@ The main Btrfs features include:
     * Subvolumes (separate internal filesystem roots)
     * Object level mirroring and striping
     * Checksums on data and metadata (multiple algorithms available)
-    * Compression
+    * Compression (multiple algorithms available)
+    * Reflink, deduplication
+    * Scrub (on-line checksum verification)
+    * Hierarchical quota groups (subvolume and snapshot support)
     * Integrated multiple device support, with several raid algorithms
     * Offline filesystem check
-    * Efficient incremental backup and FS mirroring
+    * Efficient incremental backup and FS mirroring (send/receive)
+    * Trim/discard
     * Online filesystem defragmentation
+    * Swapfile support
+    * Zoned mode
+    * Read/write metadata verification
+    * Online resize (shrink, grow)
 
-For more information please refer to the wiki
+For more information please refer to the documentation site or wiki
+
+  https://btrfs.readthedocs.io
 
   https://btrfs.wiki.kernel.org
 
index 871d2da..8781469 100644 (file)
@@ -13,8 +13,8 @@ disappeared as of Linux 3.0.
 
 There are two places where extended attributes can be found. The first
 place is between the end of each inode entry and the beginning of the
-next inode entry. For example, if inode.i\_extra\_isize = 28 and
-sb.inode\_size = 256, then there are 256 - (128 + 28) = 100 bytes
+next inode entry. For example, if inode.i_extra_isize = 28 and
+sb.inode_size = 256, then there are 256 - (128 + 28) = 100 bytes
 available for in-inode extended attribute storage. The second place
 where extended attributes can be found is in the block pointed to by
 ``inode.i_file_acl``. As of Linux 3.11, it is not possible for this
@@ -38,8 +38,8 @@ Extended attributes, when stored after the inode, have a header
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - h\_magic
+     - __le32
+     - h_magic
      - Magic number for identification, 0xEA020000. This value is set by the
        Linux driver, though e2fsprogs doesn't seem to check it(?)
 
@@ -55,28 +55,28 @@ The beginning of an extended attribute block is in
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - h\_magic
+     - __le32
+     - h_magic
      - Magic number for identification, 0xEA020000.
    * - 0x4
-     - \_\_le32
-     - h\_refcount
+     - __le32
+     - h_refcount
      - Reference count.
    * - 0x8
-     - \_\_le32
-     - h\_blocks
+     - __le32
+     - h_blocks
      - Number of disk blocks used.
    * - 0xC
-     - \_\_le32
-     - h\_hash
+     - __le32
+     - h_hash
      - Hash value of all attributes.
    * - 0x10
-     - \_\_le32
-     - h\_checksum
+     - __le32
+     - h_checksum
      - Checksum of the extended attribute block.
    * - 0x14
-     - \_\_u32
-     - h\_reserved[3]
+     - __u32
+     - h_reserved[3]
      - Zero.
 
 The checksum is calculated against the FS UUID, the 64-bit block number
@@ -100,46 +100,46 @@ Attributes stored inside an inode do not need be stored in sorted order.
      - Name
      - Description
    * - 0x0
-     - \_\_u8
-     - e\_name\_len
+     - __u8
+     - e_name_len
      - Length of name.
    * - 0x1
-     - \_\_u8
-     - e\_name\_index
+     - __u8
+     - e_name_index
      - Attribute name index. There is a discussion of this below.
    * - 0x2
-     - \_\_le16
-     - e\_value\_offs
+     - __le16
+     - e_value_offs
      - Location of this attribute's value on the disk block where it is stored.
        Multiple attributes can share the same value. For an inode attribute
        this value is relative to the start of the first entry; for a block this
        value is relative to the start of the block (i.e. the header).
    * - 0x4
-     - \_\_le32
-     - e\_value\_inum
+     - __le32
+     - e_value_inum
      - The inode where the value is stored. Zero indicates the value is in the
        same block as this entry. This field is only used if the
-       INCOMPAT\_EA\_INODE feature is enabled.
+       INCOMPAT_EA_INODE feature is enabled.
    * - 0x8
-     - \_\_le32
-     - e\_value\_size
+     - __le32
+     - e_value_size
      - Length of attribute value.
    * - 0xC
-     - \_\_le32
-     - e\_hash
+     - __le32
+     - e_hash
      - Hash value of attribute name and attribute value. The kernel doesn't
        update the hash for in-inode attributes, so for that case this value
        must be zero, because e2fsck validates any non-zero hash regardless of
        where the xattr lives.
    * - 0x10
      - char
-     - e\_name[e\_name\_len]
+     - e_name[e_name_len]
      - Attribute name. Does not include trailing NULL.
 
 Attribute values can follow the end of the entry table. There appears to
 be a requirement that they be aligned to 4-byte boundaries. The values
 are stored starting at the end of the block and grow towards the
-xattr\_header/xattr\_entry table. When the two collide, the overflow is
+xattr_header/xattr_entry table. When the two collide, the overflow is
 put into a separate disk block. If the disk block fills up, the
 filesystem returns -ENOSPC.
 
@@ -167,15 +167,15 @@ the key name. Here is a map of name index values to key prefixes:
    * - 1
      - “user.”
    * - 2
-     - “system.posix\_acl\_access”
+     - “system.posix_acl_access”
    * - 3
-     - “system.posix\_acl\_default”
+     - “system.posix_acl_default”
    * - 4
      - “trusted.”
    * - 6
      - “security.”
    * - 7
-     - “system.” (inline\_data only?)
+     - “system.” (inline_data only?)
    * - 8
      - “system.richacl” (SuSE kernels only?)
 
index 72075aa..976a180 100644 (file)
@@ -23,7 +23,7 @@ means that a block group addresses 32 gigabytes instead of 128 megabytes,
 also shrinking the amount of file system overhead for metadata.
 
 The administrator can set a block cluster size at mkfs time (which is
-stored in the s\_log\_cluster\_size field in the superblock); from then
+stored in the s_log_cluster_size field in the superblock); from then
 on, the block bitmaps track clusters, not individual blocks. This means
 that block groups can be several gigabytes in size (instead of just
 128MiB); however, the minimum allocation unit becomes a cluster, not a
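The sizing above follows from bitmap arithmetic; a hedged sketch (a block-sized
bitmap carries 8 * block_size bits, one per allocation unit):

    #include <stdint.h>

    /* With 4 KiB blocks the bitmap holds 32768 bits; tracking 1 MiB
     * clusters instead of single blocks grows the group's coverage
     * from 128 MiB to 32 GiB. */
    static uint64_t group_coverage_bytes(uint64_t block_size,
                                         uint64_t cluster_size)
    {
            return 8 * block_size * cluster_size;
    }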
index c7546db..91c45d8 100644 (file)
@@ -9,15 +9,15 @@ group.
 The inode bitmap records which entries in the inode table are in use.
 
 As with most bitmaps, one bit represents the usage status of one data
-block or inode table entry. This implies a block group size of 8 \*
-number\_of\_bytes\_in\_a\_logical\_block.
+block or inode table entry. This implies a block group size of 8 *
+number_of_bytes_in_a_logical_block.
 
 NOTE: If ``BLOCK_UNINIT`` is set for a given block group, various parts
 of the kernel and e2fsprogs code pretend that the block bitmap contains
 zeros (i.e. all blocks in the group are free). However, it is not
 necessarily the case that no blocks are in use -- if ``meta_bg`` is set,
 the bitmaps and group descriptor live inside the group. Unfortunately,
-ext2fs\_test\_block\_bitmap2() will return '0' for those locations,
+ext2fs_test_block_bitmap2() will return '0' for those locations,
 which produces confusing debugfs output.
 
 Inode Table
index d5d652a..46d78f8 100644 (file)
@@ -56,39 +56,39 @@ established that the super block and the group descriptor table, if
 present, will be at the beginning of the block group. The bitmaps and
 the inode table can be anywhere, and it is quite possible for the
 bitmaps to come after the inode table, or for both to be in different
-groups (flex\_bg). Leftover space is used for file data blocks, indirect
+groups (flex_bg). Leftover space is used for file data blocks, indirect
 block maps, extent tree blocks, and extended attributes.
 
 Flexible Block Groups
 ---------------------
 
 Starting in ext4, there is a new feature called flexible block groups
-(flex\_bg). In a flex\_bg, several block groups are tied together as one
+(flex_bg). In a flex_bg, several block groups are tied together as one
 logical block group; the bitmap spaces and the inode table space in the
-first block group of the flex\_bg are expanded to include the bitmaps
-and inode tables of all other block groups in the flex\_bg. For example,
-if the flex\_bg size is 4, then group 0 will contain (in order) the
+first block group of the flex_bg are expanded to include the bitmaps
+and inode tables of all other block groups in the flex_bg. For example,
+if the flex_bg size is 4, then group 0 will contain (in order) the
 superblock, group descriptors, data block bitmaps for groups 0-3, inode
 bitmaps for groups 0-3, inode tables for groups 0-3, and the remaining
 space in group 0 is for file data. The effect of this is to group the
 block group metadata close together for faster loading, and to enable
 large files to be continuous on disk. Backup copies of the superblock
 and group descriptors are always at the beginning of block groups, even
-if flex\_bg is enabled. The number of block groups that make up a
-flex\_bg is given by 2 ^ ``sb.s_log_groups_per_flex``.
+if flex_bg is enabled. The number of block groups that make up a
+flex_bg is given by 2 ^ ``sb.s_log_groups_per_flex``.
 
 Meta Block Groups
 -----------------
 
-Without the option META\_BG, for safety concerns, all block group
+Without the option META_BG, for safety concerns, all block group
 descriptors copies are kept in the first block group. Given the default
 128MiB(2^27 bytes) block group size and 64-byte group descriptors, ext4
 can have at most 2^27/64 = 2^21 block groups. This limits the entire
 filesystem size to 2^21 * 2^27 = 2^48 bytes or 256 TiB.
 
 The solution to this problem is to use the metablock group feature
-(META\_BG), which is already in ext3 for all 2.6 releases. With the
-META\_BG feature, ext4 filesystems are partitioned into many metablock
+(META_BG), which is already in ext3 for all 2.6 releases. With the
+META_BG feature, ext4 filesystems are partitioned into many metablock
 groups. Each metablock group is a cluster of block groups whose group
 descriptor structures can be stored in a single disk block. For ext4
 filesystems with 4 KB block size, a single metablock group partition
@@ -110,7 +110,7 @@ bytes, a meta-block group contains 32 block groups for filesystems with
 a 1KB block size, and 128 block groups for filesystems with a 4KB
 blocksize. Filesystems can either be created using this new block group
 descriptor layout, or existing filesystems can be resized on-line, and
-the field s\_first\_meta\_bg in the superblock will indicate the first
+the field s_first_meta_bg in the superblock will indicate the first
 block group using this new layout.
 
 Please see an important note about ``BLOCK_UNINIT`` in the section about
@@ -121,15 +121,15 @@ Lazy Block Group Initialization
 
 A new feature for ext4 are three block group descriptor flags that
 enable mkfs to skip initializing other parts of the block group
-metadata. Specifically, the INODE\_UNINIT and BLOCK\_UNINIT flags mean
+metadata. Specifically, the INODE_UNINIT and BLOCK_UNINIT flags mean
 that the inode and block bitmaps for that group can be calculated and
 therefore the on-disk bitmap blocks are not initialized. This is
 generally the case for an empty block group or a block group containing
-only fixed-location block group metadata. The INODE\_ZEROED flag means
+only fixed-location block group metadata. The INODE_ZEROED flag means
 that the inode table has been initialized; mkfs will unset this flag and
 rely on the kernel to initialize the inode tables in the background.
 
 By not writing zeroes to the bitmaps and inode table, mkfs time is
-reduced considerably. Note the feature flag is RO\_COMPAT\_GDT\_CSUM,
-but the dumpe2fs output prints this as “uninit\_bg”. They are the same
+reduced considerably. Note the feature flag is RO_COMPAT_GDT_CSUM,
+but the dumpe2fs output prints this as “uninit_bg”. They are the same
 thing.
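Two of the derived quantities above, as a compile-checkable sketch:

    #include <stdint.h>

    /* flex_bg: number of groups tied together is 2^s_log_groups_per_flex. */
    static uint32_t groups_per_flex(uint32_t s_log_groups_per_flex)
    {
            return 1U << s_log_groups_per_flex;
    }

    /* Without META_BG: 2^21 groups x 2^27 bytes per group = 2^48 = 256 TiB. */
    static uint64_t max_fs_bytes_without_meta_bg(void)
    {
            uint64_t groups = (128ULL << 20) / 64;  /* 128 MiB group, 64 B desc */

            return groups * (128ULL << 20);         /* == 1ULL << 48 */
    }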
index 30e2575..2bd9904 100644 (file)
@@ -1,7 +1,7 @@
 .. SPDX-License-Identifier: GPL-2.0
 
 +---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| i.i\_block Offset   | Where It Points                                                                                                                                                                                                              |
+| i.i_block Offset   | Where It Points                                                                                                                                                                                                              |
 +=====================+==============================================================================================================================================================================================================================+
 | 0 to 11             | Direct map to file blocks 0 to 11.                                                                                                                                                                                           |
 +---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
index 5519e25..e232749 100644 (file)
@@ -4,7 +4,7 @@ Checksums
 ---------
 
 Starting in early 2012, metadata checksums were added to all major ext4
-and jbd2 data structures. The associated feature flag is metadata\_csum.
+and jbd2 data structures. The associated feature flag is metadata_csum.
 The desired checksum algorithm is indicated in the superblock, though as
 of October 2012 the only supported algorithm is crc32c. Some data
 structures did not have space to fit a full 32-bit checksum, so only the
@@ -20,7 +20,7 @@ encounters directory blocks that lack sufficient empty space to add a
 checksum, it will request that you run ``e2fsck -D`` to have the
 directories rebuilt with checksums. This has the added benefit of
 removing slack space from the directory files and rebalancing the htree
-indexes. If you \_ignore\_ this step, your directories will not be
+indexes. If you _ignore_ this step, your directories will not be
 protected by a checksum!
 
 The following table describes the data elements that go into each type
@@ -35,39 +35,39 @@ of checksum. The checksum function is whatever the superblock describes
      - Length
      - Ingredients
    * - Superblock
-     - \_\_le32
+     - __le32
      - The entire superblock up to the checksum field. The UUID lives inside
        the superblock.
    * - MMP
-     - \_\_le32
+     - __le32
      - UUID + the entire MMP block up to the checksum field.
    * - Extended Attributes
-     - \_\_le32
+     - __le32
      - UUID + the entire extended attribute block. The checksum field is set to
        zero.
    * - Directory Entries
-     - \_\_le32
+     - __le32
      - UUID + inode number + inode generation + the directory block up to the
        fake entry enclosing the checksum field.
    * - HTREE Nodes
-     - \_\_le32
+     - __le32
      - UUID + inode number + inode generation + all valid extents + HTREE tail.
        The checksum field is set to zero.
    * - Extents
-     - \_\_le32
+     - __le32
      - UUID + inode number + inode generation + the entire extent block up to
        the checksum field.
    * - Bitmaps
-     - \_\_le32 or \_\_le16
+     - __le32 or __le16
      - UUID + the entire bitmap. Checksums are stored in the group descriptor,
        and truncated if the group descriptor size is 32 bytes (i.e. ^64bit)
    * - Inodes
-     - \_\_le32
+     - __le32
      - UUID + inode number + inode generation + the entire inode. The checksum
        field is set to zero. Each inode has its own checksum.
    * - Group Descriptors
-     - \_\_le16
-     - If metadata\_csum, then UUID + group number + the entire descriptor;
-       else if gdt\_csum, then crc16(UUID + group number + the entire
+     - __le16
+     - If metadata_csum, then UUID + group number + the entire descriptor;
+       else if gdt_csum, then crc16(UUID + group number + the entire
        descriptor). In all cases, only the lower 16 bits are stored.
 
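A hedged sketch of the seeding scheme described above, assuming a userspace
crc32c(seed, buf, len) helper and glossing over the little-endian conversions
the kernel performs on the numeric ingredients:

    #include <stddef.h>
    #include <stdint.h>

    uint32_t crc32c(uint32_t seed, const void *buf, size_t len); /* assumed */

    /* Fold the FS UUID into a per-filesystem seed once... */
    static uint32_t fs_csum_seed(const uint8_t uuid[16])
    {
            return crc32c(~0U, uuid, 16);
    }

    /* ...then chain the per-object ingredients, e.g. for an inode:
     * UUID + inode number + generation + inode (checksum field zeroed). */
    static uint32_t inode_csum(uint32_t seed, uint32_t ino, uint32_t gen,
                               const void *inode_with_csum_zeroed, size_t len)
    {
            uint32_t c = crc32c(seed, &ino, sizeof(ino));

            c = crc32c(c, &gen, sizeof(gen));
            return crc32c(c, inode_with_csum_zeroed, len);
    }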
index 55f618b..6eece8e 100644 (file)
@@ -42,24 +42,24 @@ is at most 263 bytes long, though on disk you'll need to reference
      - Name
      - Description
    * - 0x0
-     - \_\_le32
+     - __le32
      - inode
      - Number of the inode that this directory entry points to.
    * - 0x4
-     - \_\_le16
-     - rec\_len
+     - __le16
+     - rec_len
      - Length of this directory entry. Must be a multiple of 4.
    * - 0x6
-     - \_\_le16
-     - name\_len
+     - __le16
+     - name_len
      - Length of the file name.
    * - 0x8
      - char
-     - name[EXT4\_NAME\_LEN]
+     - name[EXT4_NAME_LEN]
      - File name.
 
 Since file names cannot be longer than 255 bytes, the new directory
-entry format shortens the name\_len field and uses the space for a file
+entry format shortens the name_len field and uses the space for a file
 type flag, probably to avoid having to load every inode during directory
 tree traversal. This format is ``ext4_dir_entry_2``, which is at most
 263 bytes long, though on disk you'll need to reference
@@ -74,24 +74,24 @@ tree traversal. This format is ``ext4_dir_entry_2``, which is at most
      - Name
      - Description
    * - 0x0
-     - \_\_le32
+     - __le32
      - inode
      - Number of the inode that this directory entry points to.
    * - 0x4
-     - \_\_le16
-     - rec\_len
+     - __le16
+     - rec_len
      - Length of this directory entry.
    * - 0x6
-     - \_\_u8
-     - name\_len
+     - __u8
+     - name_len
      - Length of the file name.
    * - 0x7
-     - \_\_u8
-     - file\_type
+     - __u8
+     - file_type
      - File type code, see ftype_ table below.
    * - 0x8
      - char
-     - name[EXT4\_NAME\_LEN]
+     - name[EXT4_NAME_LEN]
      - File name.
 
 .. _ftype:
@@ -137,19 +137,19 @@ entry uses this extension, it may be up to 271 bytes.
      - Name
      - Description
    * - 0x0
-     - \_\_le32
+     - __le32
      - hash
      - The hash of the directory name
    * - 0x4
-     - \_\_le32
-     - minor\_hash
+     - __le32
+     - minor_hash
      - The minor hash of the directory name
 
 
 In order to add checksums to these classic directory blocks, a phony
 ``struct ext4_dir_entry`` is placed at the end of each leaf block to
 hold the checksum. The directory entry is 12 bytes long. The inode
-number and name\_len fields are set to zero to fool old software into
+number and name_len fields are set to zero to fool old software into
 ignoring an apparently empty directory entry, and the checksum is stored
 in the place where the name normally goes. The structure is
 ``struct ext4_dir_entry_tail``:
@@ -163,24 +163,24 @@ in the place where the name normally goes. The structure is
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - det\_reserved\_zero1
+     - __le32
+     - det_reserved_zero1
      - Inode number, which must be zero.
    * - 0x4
-     - \_\_le16
-     - det\_rec\_len
+     - __le16
+     - det_rec_len
      - Length of this directory entry, which must be 12.
    * - 0x6
-     - \_\_u8
-     - det\_reserved\_zero2
+     - __u8
+     - det_reserved_zero2
      - Length of the file name, which must be zero.
    * - 0x7
-     - \_\_u8
-     - det\_reserved\_ft
+     - __u8
+     - det_reserved_ft
      - File type, which must be 0xDE.
    * - 0x8
-     - \_\_le32
-     - det\_checksum
+     - __le32
+     - det_checksum
      - Directory leaf block checksum.
 
 The leaf directory block checksum is calculated against the FS UUID, the
@@ -194,7 +194,7 @@ Hash Tree Directories
 A linear array of directory entries isn't great for performance, so a
 new feature was added to ext3 to provide a faster (but peculiar)
 balanced tree keyed off a hash of the directory entry name. If the
-EXT4\_INDEX\_FL (0x1000) flag is set in the inode, this directory uses a
+EXT4_INDEX_FL (0x1000) flag is set in the inode, this directory uses a
 hashed btree (htree) to organize and find directory entries. For
 backwards read-only compatibility with ext2, this tree is actually
 hidden inside the directory file, masquerading as “empty” directory data
@@ -206,14 +206,14 @@ rest of the directory block is empty so that it moves on.
 The root of the tree always lives in the first data block of the
 directory. By ext2 custom, the '.' and '..' entries must appear at the
 beginning of this first block, so they are put here as two
-``struct ext4_dir_entry_2``\ s and not stored in the tree. The rest of
+``struct ext4_dir_entry_2`` entries and not stored in the tree. The rest of
 the root node contains metadata about the tree and finally a hash->block
 map to find nodes that are lower in the htree. If
 ``dx_root.info.indirect_levels`` is non-zero then the htree has two
 levels; the data block pointed to by the root node's map is an interior
 node, which is indexed by a minor hash. Interior nodes in this tree
 contain a zeroed out ``struct ext4_dir_entry_2`` followed by a
-minor\_hash->block map to find leafe nodes. Leaf nodes contain a linear
+minor_hash->block map to find leaf nodes. Leaf nodes contain a linear
 array of all ``struct ext4_dir_entry_2``; all of these entries
 (presumably) hash to the same value. If there is an overflow, the
 entries simply overflow into the next leaf node, and the
@@ -245,83 +245,83 @@ of a data block:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
+     - __le32
      - dot.inode
      - inode number of this directory.
    * - 0x4
-     - \_\_le16
-     - dot.rec\_len
+     - __le16
+     - dot.rec_len
      - Length of this record, 12.
    * - 0x6
      - u8
-     - dot.name\_len
+     - dot.name_len
      - Length of the name, 1.
    * - 0x7
      - u8
-     - dot.file\_type
+     - dot.file_type
      - File type of this entry, 0x2 (directory) (if the feature flag is set).
    * - 0x8
      - char
      - dot.name[4]
-     - “.\\0\\0\\0”
+     - “.\0\0\0”
    * - 0xC
-     - \_\_le32
+     - __le32
      - dotdot.inode
      - inode number of parent directory.
    * - 0x10
-     - \_\_le16
-     - dotdot.rec\_len
-     - block\_size - 12. The record length is long enough to cover all htree
+     - __le16
+     - dotdot.rec_len
+     - block_size - 12. The record length is long enough to cover all htree
        data.
    * - 0x12
      - u8
-     - dotdot.name\_len
+     - dotdot.name_len
      - Length of the name, 2.
    * - 0x13
      - u8
-     - dotdot.file\_type
+     - dotdot.file_type
      - File type of this entry, 0x2 (directory) (if the feature flag is set).
    * - 0x14
      - char
-     - dotdot\_name[4]
-     - “..\\0\\0”
+     - dotdot_name[4]
+     - “..\0\0”
    * - 0x18
-     - \_\_le32
-     - struct dx\_root\_info.reserved\_zero
+     - __le32
+     - struct dx_root_info.reserved_zero
      - Zero.
    * - 0x1C
      - u8
-     - struct dx\_root\_info.hash\_version
+     - struct dx_root_info.hash_version
      - Hash type, see dirhash_ table below.
    * - 0x1D
      - u8
-     - struct dx\_root\_info.info\_length
+     - struct dx_root_info.info_length
      - Length of the tree information, 0x8.
    * - 0x1E
      - u8
-     - struct dx\_root\_info.indirect\_levels
-     - Depth of the htree. Cannot be larger than 3 if the INCOMPAT\_LARGEDIR
+     - struct dx_root_info.indirect_levels
+     - Depth of the htree. Cannot be larger than 3 if the INCOMPAT_LARGEDIR
        feature is set; cannot be larger than 2 otherwise.
    * - 0x1F
      - u8
-     - struct dx\_root\_info.unused\_flags
+     - struct dx_root_info.unused_flags
      -
    * - 0x20
-     - \_\_le16
+     - __le16
      - limit
-     - Maximum number of dx\_entries that can follow this header, plus 1 for
+     - Maximum number of dx_entries that can follow this header, plus 1 for
        the header itself.
    * - 0x22
-     - \_\_le16
+     - __le16
      - count
-     - Actual number of dx\_entries that follow this header, plus 1 for the
+     - Actual number of dx_entries that follow this header, plus 1 for the
        header itself.
    * - 0x24
-     - \_\_le32
+     - __le32
      - block
      - The block number (within the directory file) that goes with hash=0.
    * - 0x28
-     - struct dx\_entry
+     - struct dx_entry
      - entries[0]
      - As many 8-byte ``struct dx_entry`` as fits in the rest of the data block.
 
@@ -362,38 +362,38 @@ also the full length of a data block:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
+     - __le32
      - fake.inode
      - Zero, to make it look like this entry is not in use.
    * - 0x4
-     - \_\_le16
-     - fake.rec\_len
-     - The size of the block, in order to hide all of the dx\_node data.
+     - __le16
+     - fake.rec_len
+     - The size of the block, in order to hide all of the dx_node data.
    * - 0x6
      - u8
-     - name\_len
+     - name_len
      - Zero. There is no name for this “unused” directory entry.
    * - 0x7
      - u8
-     - file\_type
+     - file_type
      - Zero. There is no file type for this “unused” directory entry.
    * - 0x8
-     - \_\_le16
+     - __le16
      - limit
-     - Maximum number of dx\_entries that can follow this header, plus 1 for
+     - Maximum number of dx_entries that can follow this header, plus 1 for
        the header itself.
    * - 0xA
-     - \_\_le16
+     - __le16
      - count
-     - Actual number of dx\_entries that follow this header, plus 1 for the
+     - Actual number of dx_entries that follow this header, plus 1 for the
        header itself.
    * - 0xE
-     - \_\_le32
+     - __le32
      - block
      - The block number (within the directory file) that goes with the lowest
        hash value of this block. This value is stored in the parent block.
    * - 0x12
-     - struct dx\_entry
+     - struct dx_entry
      - entries[0]
      - As many 8-byte ``struct dx_entry`` as fits in the rest of the data block.
 
@@ -410,11 +410,11 @@ long:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
+     - __le32
      - hash
      - Hash code.
    * - 0x4
-     - \_\_le32
+     - __le32
      - block
      - Block number (within the directory file, not filesystem blocks) of the
        next node in the htree.
@@ -423,13 +423,13 @@ long:
 author.)
 
 If metadata checksums are enabled, the last 8 bytes of the directory
-block (precisely the length of one dx\_entry) are used to store a
+block (precisely the length of one dx_entry) are used to store a
 ``struct dx_tail``, which contains the checksum. The ``limit`` and
-``count`` entries in the dx\_root/dx\_node structures are adjusted as
-necessary to fit the dx\_tail into the block. If there is no space for
-the dx\_tail, the user is notified to run e2fsck -D to rebuild the
+``count`` entries in the dx_root/dx_node structures are adjusted as
+necessary to fit the dx_tail into the block. If there is no space for
+the dx_tail, the user is notified to run e2fsck -D to rebuild the
 directory index (which will ensure that there's space for the checksum).
-The dx\_tail structure is 8 bytes long and looks like this:
+The dx_tail structure is 8 bytes long and looks like this:
 
 .. list-table::
    :widths: 8 8 24 40
@@ -441,13 +441,13 @@ The dx\_tail structure is 8 bytes long and looks like this:
      - Description
    * - 0x0
      - u32
-     - dt\_reserved
+     - dt_reserved
      - Zero.
    * - 0x4
-     - \_\_le32
-     - dt\_checksum
+     - __le32
+     - dt_checksum
      - Checksum of the htree directory block.
 
 The checksum is calculated against the FS UUID, the htree index header
-(dx\_root or dx\_node), all of the htree indices (dx\_entry) that are in
-use, and the tail block (dx\_tail).
+(dx_root or dx_node), all of the htree indices (dx_entry) that are in
+use, and the tail block (dx_tail).
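For the classic linear directory blocks described at the top of this file
(before the htree material), a self-contained sketch of walking one block via
rec_len, assuming a little-endian host and a block already read into buf:

    #include <stdint.h>
    #include <stdio.h>

    struct ext4_dir_entry_2 {       /* layout per the table above */
            uint32_t inode;         /* __le32; 0 means "unused entry" */
            uint16_t rec_len;       /* __le16; distance to the next entry */
            uint8_t  name_len;
            uint8_t  file_type;
            char     name[];        /* not NUL-terminated */
    };

    static void walk_dir_block(const uint8_t *buf, size_t block_size)
    {
            size_t off = 0;

            while (off + 8 <= block_size) {
                    const struct ext4_dir_entry_2 *de =
                            (const void *)(buf + off);

                    if (de->rec_len < 8 || off + de->rec_len > block_size)
                            break;  /* corrupt chain */
                    if (de->inode)
                            printf("%.*s -> inode %u\n",
                                   de->name_len, de->name, de->inode);
                    off += de->rec_len;     /* rec_len chains the entries */
            }
    }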
index ecc0d01..7a2ef26 100644 (file)
@@ -5,14 +5,14 @@ Large Extended Attribute Values
 
 To enable ext4 to store extended attribute values that do not fit in the
 inode or in the single extended attribute block attached to an inode,
-the EA\_INODE feature allows us to store the value in the data blocks of
+the EA_INODE feature allows us to store the value in the data blocks of
 a regular file inode. This “EA inode” is linked only from the extended
 attribute name index and must not appear in a directory entry. The
-inode's i\_atime field is used to store a checksum of the xattr value;
-and i\_ctime/i\_version store a 64-bit reference count, which enables
+inode's i_atime field is used to store a checksum of the xattr value;
+and i_ctime/i_version store a 64-bit reference count, which enables
 sharing of large xattr values between multiple owning inodes. For
 backward compatibility with older versions of this feature, the
-i\_mtime/i\_generation *may* store a back-reference to the inode number
-and i\_generation of the **one** owning inode (in cases where the EA
+i_mtime/i_generation *may* store a back-reference to the inode number
+and i_generation of the **one** owning inode (in cases where the EA
 inode is not referenced by multiple inodes) to verify that the EA inode
 is the correct one being accessed.
index 7ba6114..392ec44 100644 (file)
@@ -7,34 +7,34 @@ Each block group on the filesystem has one of these descriptors
 associated with it. As noted in the Layout section above, the group
 descriptors (if present) are the second item in the block group. The
 standard configuration is for each block group to contain a full copy of
-the block group descriptor table unless the sparse\_super feature flag
+the block group descriptor table unless the sparse_super feature flag
 is set.
 
 Notice how the group descriptor records the location of both bitmaps and
 the inode table (i.e. they can float). This means that within a block
 group, the only data structures with fixed locations are the superblock
-and the group descriptor table. The flex\_bg mechanism uses this
+and the group descriptor table. The flex_bg mechanism uses this
 property to group several block groups into a flex group and lay out all
 of the groups' bitmaps and inode tables into one long run in the first
 group of the flex group.
 
-If the meta\_bg feature flag is set, then several block groups are
-grouped together into a meta group. Note that in the meta\_bg case,
+If the meta_bg feature flag is set, then several block groups are
+grouped together into a meta group. Note that in the meta_bg case,
 however, the first and last two block groups within the larger meta
 group contain only group descriptors for the groups inside the meta
 group.
 
-flex\_bg and meta\_bg do not appear to be mutually exclusive features.
+flex_bg and meta_bg do not appear to be mutually exclusive features.
 
 In ext2, ext3, and ext4 (when the 64bit feature is not enabled), the
 block group descriptor was only 32 bytes long and therefore ends at
-bg\_checksum. On an ext4 filesystem with the 64bit feature enabled, the
+bg_checksum. On an ext4 filesystem with the 64bit feature enabled, the
 block group descriptor expands to at least the 64 bytes described below;
 the size is stored in the superblock.
 
-If gdt\_csum is set and metadata\_csum is not set, the block group
+If gdt_csum is set and metadata_csum is not set, the block group
 checksum is the crc16 of the FS UUID, the group number, and the group
-descriptor structure. If metadata\_csum is set, then the block group
+descriptor structure. If metadata_csum is set, then the block group
 checksum is the lower 16 bits of the checksum of the FS UUID, the group
 number, and the group descriptor structure. Both block and inode bitmap
 checksums are calculated against the FS UUID, the group number, and the
@@ -51,59 +51,59 @@ The block group descriptor is laid out in ``struct ext4_group_desc``.
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - bg\_block\_bitmap\_lo
+     - __le32
+     - bg_block_bitmap_lo
      - Lower 32-bits of location of block bitmap.
    * - 0x4
-     - \_\_le32
-     - bg\_inode\_bitmap\_lo
+     - __le32
+     - bg_inode_bitmap_lo
      - Lower 32-bits of location of inode bitmap.
    * - 0x8
-     - \_\_le32
-     - bg\_inode\_table\_lo
+     - __le32
+     - bg_inode_table_lo
      - Lower 32-bits of location of inode table.
    * - 0xC
-     - \_\_le16
-     - bg\_free\_blocks\_count\_lo
+     - __le16
+     - bg_free_blocks_count_lo
      - Lower 16-bits of free block count.
    * - 0xE
-     - \_\_le16
-     - bg\_free\_inodes\_count\_lo
+     - __le16
+     - bg_free_inodes_count_lo
      - Lower 16-bits of free inode count.
    * - 0x10
-     - \_\_le16
-     - bg\_used\_dirs\_count\_lo
+     - __le16
+     - bg_used_dirs_count_lo
      - Lower 16-bits of directory count.
    * - 0x12
-     - \_\_le16
-     - bg\_flags
+     - __le16
+     - bg_flags
      - Block group flags. See the bgflags_ table below.
    * - 0x14
-     - \_\_le32
-     - bg\_exclude\_bitmap\_lo
+     - __le32
+     - bg_exclude_bitmap_lo
      - Lower 32-bits of location of snapshot exclusion bitmap.
    * - 0x18
-     - \_\_le16
-     - bg\_block\_bitmap\_csum\_lo
+     - __le16
+     - bg_block_bitmap_csum_lo
      - Lower 16-bits of the block bitmap checksum.
    * - 0x1A
-     - \_\_le16
-     - bg\_inode\_bitmap\_csum\_lo
+     - __le16
+     - bg_inode_bitmap_csum_lo
      - Lower 16-bits of the inode bitmap checksum.
    * - 0x1C
-     - \_\_le16
-     - bg\_itable\_unused\_lo
+     - __le16
+     - bg_itable_unused_lo
      - Lower 16-bits of unused inode count. If set, we needn't scan past the
-       ``(sb.s_inodes_per_group - gdt.bg_itable_unused)``\ th entry in the
+       ``(sb.s_inodes_per_group - gdt.bg_itable_unused)``-th entry in the
        inode table for this group.
    * - 0x1E
-     - \_\_le16
-     - bg\_checksum
-     - Group descriptor checksum; crc16(sb\_uuid+group\_num+bg\_desc) if the
-       RO\_COMPAT\_GDT\_CSUM feature is set, or
-       crc32c(sb\_uuid+group\_num+bg\_desc) & 0xFFFF if the
-       RO\_COMPAT\_METADATA\_CSUM feature is set.  The bg\_checksum
-       field in bg\_desc is skipped when calculating crc16 checksum,
+     - __le16
+     - bg_checksum
+     - Group descriptor checksum; crc16(sb_uuid+group_num+bg_desc) if the
+       RO_COMPAT_GDT_CSUM feature is set, or
+       crc32c(sb_uuid+group_num+bg_desc) & 0xFFFF if the
+       RO_COMPAT_METADATA_CSUM feature is set.  The bg_checksum
+       field in bg_desc is skipped when calculating crc16 checksum,
        and set to zero if crc32c checksum is used.
    * -
      -
@@ -111,48 +111,48 @@ The block group descriptor is laid out in ``struct ext4_group_desc``.
      - These fields only exist if the 64bit feature is enabled and s_desc_size
        > 32.
    * - 0x20
-     - \_\_le32
-     - bg\_block\_bitmap\_hi
+     - __le32
+     - bg_block_bitmap_hi
      - Upper 32-bits of location of block bitmap.
    * - 0x24
-     - \_\_le32
-     - bg\_inode\_bitmap\_hi
+     - __le32
+     - bg_inode_bitmap_hi
      - Upper 32-bits of location of inodes bitmap.
    * - 0x28
-     - \_\_le32
-     - bg\_inode\_table\_hi
+     - __le32
+     - bg_inode_table_hi
      - Upper 32-bits of location of inodes table.
    * - 0x2C
-     - \_\_le16
-     - bg\_free\_blocks\_count\_hi
+     - __le16
+     - bg_free_blocks_count_hi
      - Upper 16-bits of free block count.
    * - 0x2E
-     - \_\_le16
-     - bg\_free\_inodes\_count\_hi
+     - __le16
+     - bg_free_inodes_count_hi
      - Upper 16-bits of free inode count.
    * - 0x30
-     - \_\_le16
-     - bg\_used\_dirs\_count\_hi
+     - __le16
+     - bg_used_dirs_count_hi
      - Upper 16-bits of directory count.
    * - 0x32
-     - \_\_le16
-     - bg\_itable\_unused\_hi
+     - __le16
+     - bg_itable_unused_hi
      - Upper 16-bits of unused inode count.
    * - 0x34
-     - \_\_le32
-     - bg\_exclude\_bitmap\_hi
+     - __le32
+     - bg_exclude_bitmap_hi
      - Upper 32-bits of location of snapshot exclusion bitmap.
    * - 0x38
-     - \_\_le16
-     - bg\_block\_bitmap\_csum\_hi
+     - __le16
+     - bg_block_bitmap_csum_hi
      - Upper 16-bits of the block bitmap checksum.
    * - 0x3A
-     - \_\_le16
-     - bg\_inode\_bitmap\_csum\_hi
+     - __le16
+     - bg_inode_bitmap_csum_hi
      - Upper 16-bits of the inode bitmap checksum.
    * - 0x3C
-     - \_\_u32
-     - bg\_reserved
+     - __u32
+     - bg_reserved
      - Padding to 64 bytes.
 
 .. _bgflags:
@@ -166,8 +166,8 @@ Block group flags can be any combination of the following:
    * - Value
      - Description
    * - 0x1
-     - inode table and bitmap are not initialized (EXT4\_BG\_INODE\_UNINIT).
+     - inode table and bitmap are not initialized (EXT4_BG_INODE_UNINIT).
    * - 0x2
-     - block bitmap is not initialized (EXT4\_BG\_BLOCK\_UNINIT).
+     - block bitmap is not initialized (EXT4_BG_BLOCK_UNINIT).
    * - 0x4
-     - inode table is zeroed (EXT4\_BG\_INODE\_ZEROED).
+     - inode table is zeroed (EXT4_BG_INODE_ZEROED).
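With the 64bit feature enabled (and s_desc_size > 32), the _lo/_hi halves in
the descriptor recombine as in this sketch:

    #include <stdint.h>

    /* e.g. bg_block_bitmap_lo/_hi -> full 64-bit block number */
    static uint64_t bg_location(uint32_t lo, uint32_t hi, int has_64bit)
    {
            return has_64bit ? ((uint64_t)hi << 32) | lo : lo;
    }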
index b9816d5..dc31f50 100644 (file)
@@ -1,6 +1,6 @@
 .. SPDX-License-Identifier: GPL-2.0
 
-The Contents of inode.i\_block
+The Contents of inode.i_block
 ------------------------------
 
 Depending on the type of file an inode describes, the 60 bytes of
@@ -47,7 +47,7 @@ In ext4, the file to logical block map has been replaced with an extent
 tree. Under the old scheme, allocating a contiguous run of 1,000 blocks
 requires an indirect block to map all 1,000 entries; with extents, the
 mapping is reduced to a single ``struct ext4_extent`` with
-``ee_len = 1000``. If flex\_bg is enabled, it is possible to allocate
+``ee_len = 1000``. If flex_bg is enabled, it is possible to allocate
 very large files with a single extent, at a considerable reduction in
 metadata block use, and some improvement in disk efficiency. The inode
 must have the extents (0x80000) flag set for this feature to be in
@@ -76,28 +76,28 @@ which is 12 bytes long:
      - Name
      - Description
    * - 0x0
-     - \_\_le16
-     - eh\_magic
+     - __le16
+     - eh_magic
      - Magic number, 0xF30A.
    * - 0x2
-     - \_\_le16
-     - eh\_entries
+     - __le16
+     - eh_entries
      - Number of valid entries following the header.
    * - 0x4
-     - \_\_le16
-     - eh\_max
+     - __le16
+     - eh_max
      - Maximum number of entries that could follow the header.
    * - 0x6
-     - \_\_le16
-     - eh\_depth
+     - __le16
+     - eh_depth
      - Depth of this extent node in the extent tree. 0 = this extent node
        points to data blocks; otherwise, this extent node points to other
        extent nodes. The extent tree can be at most 5 levels deep: a logical
        block number can be at most ``2^32``, and the smallest ``n`` that
        satisfies ``4*(((blocksize - 12)/12)^n) >= 2^32`` is 5 (a worked
        check of this bound follows the table).
    * - 0x8
-     - \_\_le32
-     - eh\_generation
+     - __le32
+     - eh_generation
      - Generation of the tree. (Used by Lustre, but not standard ext4).
 
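That depth bound is easy to verify mechanically. The following throwaway
program (an editorial sketch, not kernel code) reproduces the arithmetic for
the worst case, a 1 KiB block size, where each tree block holds
``(blocksize - 12) / 12 = 84`` entries and the root in ``i_block`` holds 4:

.. code-block:: c

   #include <stdio.h>

   int main(void)
   {
       unsigned long long blocksize = 1024;   /* worst case: 1 KiB blocks */
       unsigned long long fanout = (blocksize - 12) / 12;   /* 84 */
       unsigned long long reach = 4;          /* 4 root entries in i_block */
       int n = 0;

       while (reach < (1ULL << 32)) {         /* grow until 2^32 covered */
           reach *= fanout;
           n++;
       }
       printf("depth needed: %d\n", n);       /* prints 5 */
       return 0;
   }

With 4 KiB blocks the fanout rises to 340 and four levels already suffice.
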
 Internal nodes of the extent tree, also known as index nodes, are
@@ -112,22 +112,22 @@ recorded as ``struct ext4_extent_idx``, and are 12 bytes long:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - ei\_block
+     - __le32
+     - ei_block
      - This index node covers file blocks from 'block' onward.
    * - 0x4
-     - \_\_le32
-     - ei\_leaf\_lo
+     - __le32
+     - ei_leaf_lo
      - Lower 32-bits of the block number of the extent node that is the next
        level lower in the tree. The tree node pointed to can be either another
        internal node or a leaf node, described below.
    * - 0x8
-     - \_\_le16
-     - ei\_leaf\_hi
+     - __le16
+     - ei_leaf_hi
      - Upper 16-bits of the previous field.
    * - 0xA
-     - \_\_u16
-     - ei\_unused
+     - __u16
+     - ei_unused
      -
 
 Leaf nodes of the extent tree are recorded as ``struct ext4_extent``,
@@ -142,24 +142,24 @@ and are also 12 bytes long:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - ee\_block
+     - __le32
+     - ee_block
      - First file block number that this extent covers.
    * - 0x4
-     - \_\_le16
-     - ee\_len
+     - __le16
+     - ee_len
      - Number of blocks covered by extent. If the value of this field is <=
        32768, the extent is initialized. If the value of the field is > 32768,
        the extent is uninitialized and the actual extent length is ``ee_len`` -
        32768. Therefore, the maximum length of an initialized extent is 32768
        blocks, and the maximum length of an uninitialized extent is 32767
        (a decoding sketch follows this table).
    * - 0x6
-     - \_\_le16
-     - ee\_start\_hi
+     - __le16
+     - ee_start_hi
      - Upper 16-bits of the block number to which this extent points.
    * - 0x8
-     - \_\_le32
-     - ee\_start\_lo
+     - __le32
+     - ee_start_lo
      - Lower 32-bits of the block number to which this extent points.
 
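A minimal decoding sketch for ``ee_len``, following the rules in the table
above (the struct and function names are this sketch's own):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   struct ext_len {
       bool initialized;
       uint16_t blocks;
   };

   /* ee_len is assumed already converted from little-endian. */
   static struct ext_len decode_ee_len(uint16_t ee_len)
   {
       struct ext_len d;

       d.initialized = ee_len <= 32768;
       d.blocks = d.initialized ? ee_len : (uint16_t)(ee_len - 32768);
       return d;
   }

Note the asymmetry the table calls out: 32768 itself decodes as an
initialized extent, so uninitialized extents top out at 32767 blocks.
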
 Prior to the introduction of metadata checksums, the extent header +
@@ -182,8 +182,8 @@ including) the checksum itself.
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - eb\_checksum
+     - __le32
+     - eb_checksum
      - Checksum of the extent block, crc32c(uuid+inum+igeneration+extentblock)
 
 Inline Data
index d107517..a728af0 100644 (file)
@@ -11,12 +11,12 @@ file is smaller than 60 bytes, then the data are stored inline in
 attribute space, then it might be found as an extended attribute
 “system.data” within the inode body (“ibody EA”). This of course
 constrains the amount of extended attributes one can attach to an inode.
-If the data size increases beyond i\_block + ibody EA, a regular block
+If the data size increases beyond i_block + ibody EA, a regular block
 is allocated and the contents moved to that block.
 
 Pending a change to compact the extended attribute key used to store
 inline data, one ought to be able to store 160 bytes of data in a
-256-byte inode (as of June 2015, when i\_extra\_isize is 28). Prior to
+256-byte inode (as of June 2015, when i_extra_isize is 28). Prior to
 that, the limit was 156 bytes due to inefficient use of inode space.
 
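The 160-byte figure decomposes as the 60 bytes of ``i_block`` plus the
in-inode space past the extended fields, assuming the compacted key leaves
that space fully usable (this breakdown is an editorial reconstruction)::

    i_block                                   60 bytes
    256 - (128 + i_extra_isize of 28)     =  100 bytes
    -------------------------------------------------
    inline data capacity                  =  160 bytes

The earlier 156-byte limit presumably reflects 4 of those bytes being lost
to the uncompacted attribute key.
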
 The inline data feature requires the presence of an extended attribute
@@ -25,12 +25,12 @@ for “system.data”, even if the attribute value is zero length.
 Inline Directories
 ~~~~~~~~~~~~~~~~~~
 
-The first four bytes of i\_block are the inode number of the parent
+The first four bytes of i_block are the inode number of the parent
 directory. Following that is a 56-byte space for an array of directory
 entries; see ``struct ext4_dir_entry``. If there is a “system.data”
 attribute in the inode body, the EA value is an array of
 ``struct ext4_dir_entry`` as well. Note that for inline directories, the
-i\_block and EA space are treated as separate dirent blocks; directory
+i_block and EA space are treated as separate dirent blocks; directory
 entries cannot span the two.
 
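In C terms, the ``i_block`` layout for an inline directory looks like the
following sketch (the struct name is ours, and because
``struct ext4_dir_entry`` records are variable-length, a byte array stands
in for them):

.. code-block:: c

   #include <stdint.h>

   struct inline_dir_iblock {
       uint32_t parent_ino;   /* inode number of the parent directory */
       uint8_t  dirents[56];  /* packed struct ext4_dir_entry records */
   };
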
 Inline directory entries are not checksummed, as the inode checksum
index 6c5ce66..cfc6c16 100644 (file)
@@ -38,138 +38,138 @@ The inode table entry is laid out in ``struct ext4_inode``.
      - Name
      - Description
    * - 0x0
-     - \_\_le16
-     - i\_mode
+     - __le16
+     - i_mode
      - File mode. See the table i_mode_ below.
    * - 0x2
-     - \_\_le16
-     - i\_uid
+     - __le16
+     - i_uid
      - Lower 16-bits of Owner UID.
    * - 0x4
-     - \_\_le32
-     - i\_size\_lo
+     - __le32
+     - i_size_lo
      - Lower 32-bits of size in bytes.
    * - 0x8
-     - \_\_le32
-     - i\_atime
-     - Last access time, in seconds since the epoch. However, if the EA\_INODE
+     - __le32
+     - i_atime
+     - Last access time, in seconds since the epoch. However, if the EA_INODE
        inode flag is set, this inode stores an extended attribute value and
        this field contains the checksum of the value.
    * - 0xC
-     - \_\_le32
-     - i\_ctime
+     - __le32
+     - i_ctime
      - Last inode change time, in seconds since the epoch. However, if the
-       EA\_INODE inode flag is set, this inode stores an extended attribute
+       EA_INODE inode flag is set, this inode stores an extended attribute
        value and this field contains the lower 32 bits of the attribute value's
        reference count.
    * - 0x10
-     - \_\_le32
-     - i\_mtime
+     - __le32
+     - i_mtime
      - Last data modification time, in seconds since the epoch. However, if the
-       EA\_INODE inode flag is set, this inode stores an extended attribute
+       EA_INODE inode flag is set, this inode stores an extended attribute
        value and this field contains the number of the inode that owns the
        extended attribute.
    * - 0x14
-     - \_\_le32
-     - i\_dtime
+     - __le32
+     - i_dtime
      - Deletion Time, in seconds since the epoch.
    * - 0x18
-     - \_\_le16
-     - i\_gid
+     - __le16
+     - i_gid
      - Lower 16-bits of GID.
    * - 0x1A
-     - \_\_le16
-     - i\_links\_count
+     - __le16
+     - i_links_count
      - Hard link count. Normally, ext4 does not permit an inode to have more
        than 65,000 hard links. This applies to files as well as directories,
        which means that there cannot be more than 64,998 subdirectories in a
        directory (each subdirectory's '..' entry counts as a hard link, as does
-       the '.' entry in the directory itself). With the DIR\_NLINK feature
+       the '.' entry in the directory itself). With the DIR_NLINK feature
        enabled, ext4 supports more than 64,998 subdirectories by setting this
        field to 1 to indicate that the number of hard links is not known.
    * - 0x1C
-     - \_\_le32
-     - i\_blocks\_lo
-     - Lower 32-bits of “block” count. If the huge\_file feature flag is not
+     - __le32
+     - i_blocks_lo
+     - Lower 32-bits of “block” count. If the huge_file feature flag is not
        set on the filesystem, the file consumes ``i_blocks_lo`` 512-byte blocks
-       on disk. If huge\_file is set and EXT4\_HUGE\_FILE\_FL is NOT set in
+       on disk. If huge_file is set and EXT4_HUGE_FILE_FL is NOT set in
        ``inode.i_flags``, then the file consumes ``i_blocks_lo + (i_blocks_hi
-       << 32)`` 512-byte blocks on disk. If huge\_file is set and
-       EXT4\_HUGE\_FILE\_FL IS set in ``inode.i_flags``, then this file
+       << 32)`` 512-byte blocks on disk. If huge_file is set and
+       EXT4_HUGE_FILE_FL IS set in ``inode.i_flags``, then this file
        consumes (``i_blocks_lo + i_blocks_hi`` << 32) filesystem blocks on
        disk.
    * - 0x20
-     - \_\_le32
-     - i\_flags
+     - __le32
+     - i_flags
      - Inode flags. See the table i_flags_ below.
    * - 0x24
      - 4 bytes
-     - i\_osd1
+     - i_osd1
      - See the table i_osd1_ for more details.
    * - 0x28
      - 60 bytes
-     - i\_block[EXT4\_N\_BLOCKS=15]
-     - Block map or extent tree. See the section “The Contents of inode.i\_block”.
+     - i_block[EXT4_N_BLOCKS=15]
+     - Block map or extent tree. See the section “The Contents of inode.i_block”.
    * - 0x64
-     - \_\_le32
-     - i\_generation
+     - __le32
+     - i_generation
      - File version (for NFS).
    * - 0x68
-     - \_\_le32
-     - i\_file\_acl\_lo
+     - __le32
+     - i_file_acl_lo
      - Lower 32-bits of extended attribute block. ACLs are of course one of
        many possible extended attributes; I think the name of this field is a
        result of the first use of extended attributes being for ACLs.
    * - 0x6C
-     - \_\_le32
-     - i\_size\_high / i\_dir\_acl
+     - __le32
+     - i_size_high / i_dir_acl
      - Upper 32-bits of file/directory size. In ext2/3 this field was named
-       i\_dir\_acl, though it was usually set to zero and never used.
+       i_dir_acl, though it was usually set to zero and never used.
    * - 0x70
-     - \_\_le32
-     - i\_obso\_faddr
+     - __le32
+     - i_obso_faddr
      - (Obsolete) fragment address.
    * - 0x74
      - 12 bytes
-     - i\_osd2
+     - i_osd2
      - See the table i_osd2_ for more details.
    * - 0x80
-     - \_\_le16
-     - i\_extra\_isize
+     - __le16
+     - i_extra_isize
      - Size of this inode - 128. Alternatively, the size of the extended inode
        fields beyond the original ext2 inode, including this field.
    * - 0x82
-     - \_\_le16
-     - i\_checksum\_hi
+     - __le16
+     - i_checksum_hi
      - Upper 16-bits of the inode checksum.
    * - 0x84
-     - \_\_le32
-     - i\_ctime\_extra
+     - __le32
+     - i_ctime_extra
      - Extra change time bits. This provides sub-second precision. See Inode
        Timestamps section.
    * - 0x88
-     - \_\_le32
-     - i\_mtime\_extra
+     - __le32
+     - i_mtime_extra
      - Extra modification time bits. This provides sub-second precision.
    * - 0x8C
-     - \_\_le32
-     - i\_atime\_extra
+     - __le32
+     - i_atime_extra
      - Extra access time bits. This provides sub-second precision.
    * - 0x90
-     - \_\_le32
-     - i\_crtime
+     - __le32
+     - i_crtime
      - File creation time, in seconds since the epoch.
    * - 0x94
-     - \_\_le32
-     - i\_crtime\_extra
+     - __le32
+     - i_crtime_extra
      - Extra file creation time bits. This provides sub-second precision.
    * - 0x98
-     - \_\_le32
-     - i\_version\_hi
+     - __le32
+     - i_version_hi
      - Upper 32-bits for version number.
    * - 0x9C
-     - \_\_le32
-     - i\_projid
+     - __le32
+     - i_projid
      - Project ID.
 
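The three ``i_blocks`` cases above collapse into a single rule if the result
is expressed in bytes. A sketch, with names of this sketch's own choosing
and fields assumed already converted from little-endian:

.. code-block:: c

   #include <stdint.h>

   #define EXT4_HUGE_FILE_FL 0x40000U   /* from the i_flags table below */

   static uint64_t inode_bytes_on_disk(uint32_t i_blocks_lo,
                                       uint16_t i_blocks_hi,
                                       uint32_t i_flags,
                                       int huge_file_feature,
                                       uint32_t block_size)
   {
       uint64_t n = i_blocks_lo;

       if (!huge_file_feature)
           return n * 512;              /* case 1: 512-byte units, lo only */

       n += (uint64_t)i_blocks_hi << 32;
       if (i_flags & EXT4_HUGE_FILE_FL)
           return n * block_size;       /* case 3: filesystem blocks */
       return n * 512;                  /* case 2: 512-byte units */
   }
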
 .. _i_mode:
@@ -183,45 +183,45 @@ The ``i_mode`` value is a combination of the following flags:
    * - Value
      - Description
    * - 0x1
-     - S\_IXOTH (Others may execute)
+     - S_IXOTH (Others may execute)
    * - 0x2
-     - S\_IWOTH (Others may write)
+     - S_IWOTH (Others may write)
    * - 0x4
-     - S\_IROTH (Others may read)
+     - S_IROTH (Others may read)
    * - 0x8
-     - S\_IXGRP (Group members may execute)
+     - S_IXGRP (Group members may execute)
    * - 0x10
-     - S\_IWGRP (Group members may write)
+     - S_IWGRP (Group members may write)
    * - 0x20
-     - S\_IRGRP (Group members may read)
+     - S_IRGRP (Group members may read)
    * - 0x40
-     - S\_IXUSR (Owner may execute)
+     - S_IXUSR (Owner may execute)
    * - 0x80
-     - S\_IWUSR (Owner may write)
+     - S_IWUSR (Owner may write)
    * - 0x100
-     - S\_IRUSR (Owner may read)
+     - S_IRUSR (Owner may read)
    * - 0x200
-     - S\_ISVTX (Sticky bit)
+     - S_ISVTX (Sticky bit)
    * - 0x400
-     - S\_ISGID (Set GID)
+     - S_ISGID (Set GID)
    * - 0x800
-     - S\_ISUID (Set UID)
+     - S_ISUID (Set UID)
    * -
      - These are mutually-exclusive file types (an extraction sketch follows
        this table):
    * - 0x1000
-     - S\_IFIFO (FIFO)
+     - S_IFIFO (FIFO)
    * - 0x2000
-     - S\_IFCHR (Character device)
+     - S_IFCHR (Character device)
    * - 0x4000
-     - S\_IFDIR (Directory)
+     - S_IFDIR (Directory)
    * - 0x6000
-     - S\_IFBLK (Block device)
+     - S_IFBLK (Block device)
    * - 0x8000
-     - S\_IFREG (Regular file)
+     - S_IFREG (Regular file)
    * - 0xA000
-     - S\_IFLNK (Symbolic link)
+     - S_IFLNK (Symbolic link)
    * - 0xC000
-     - S\_IFSOCK (Socket)
+     - S_IFSOCK (Socket)
 
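Because the file-type values are mutually exclusive and occupy the top
nibble, a single mask extracts them, mirroring the classic ``S_IFMT`` idiom
(the ``EXT4_``-prefixed macro names below are this sketch's own):

.. code-block:: c

   #include <stdint.h>

   #define EXT4_S_IFMT   0xF000
   #define EXT4_S_IFDIR  0x4000
   #define EXT4_S_IFREG  0x8000

   static int is_regular(uint16_t i_mode)
   {
       return (i_mode & EXT4_S_IFMT) == EXT4_S_IFREG;
   }

   static int is_dir(uint16_t i_mode)
   {
       return (i_mode & EXT4_S_IFMT) == EXT4_S_IFDIR;
   }

Permission, setuid/setgid, and sticky bits live in the low 12 bits and
combine freely with exactly one file type.
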
 .. _i_flags:
 
@@ -234,56 +234,56 @@ The ``i_flags`` field is a combination of these values:
    * - Value
      - Description
    * - 0x1
-     - This file requires secure deletion (EXT4\_SECRM\_FL). (not implemented)
+     - This file requires secure deletion (EXT4_SECRM_FL). (not implemented)
    * - 0x2
      - This file should be preserved, should undeletion be desired
-       (EXT4\_UNRM\_FL). (not implemented)
+       (EXT4_UNRM_FL). (not implemented)
    * - 0x4
-     - File is compressed (EXT4\_COMPR\_FL). (not really implemented)
+     - File is compressed (EXT4_COMPR_FL). (not really implemented)
    * - 0x8
-     - All writes to the file must be synchronous (EXT4\_SYNC\_FL).
+     - All writes to the file must be synchronous (EXT4_SYNC_FL).
    * - 0x10
-     - File is immutable (EXT4\_IMMUTABLE\_FL).
+     - File is immutable (EXT4_IMMUTABLE_FL).
    * - 0x20
-     - File can only be appended (EXT4\_APPEND\_FL).
+     - File can only be appended (EXT4_APPEND_FL).
    * - 0x40
-     - The dump(1) utility should not dump this file (EXT4\_NODUMP\_FL).
+     - The dump(1) utility should not dump this file (EXT4_NODUMP_FL).
    * - 0x80
-     - Do not update access time (EXT4\_NOATIME\_FL).
+     - Do not update access time (EXT4_NOATIME_FL).
    * - 0x100
-     - Dirty compressed file (EXT4\_DIRTY\_FL). (not used)
+     - Dirty compressed file (EXT4_DIRTY_FL). (not used)
    * - 0x200
-     - File has one or more compressed clusters (EXT4\_COMPRBLK\_FL). (not used)
+     - File has one or more compressed clusters (EXT4_COMPRBLK_FL). (not used)
    * - 0x400
-     - Do not compress file (EXT4\_NOCOMPR\_FL). (not used)
+     - Do not compress file (EXT4_NOCOMPR_FL). (not used)
    * - 0x800
-     - Encrypted inode (EXT4\_ENCRYPT\_FL). This bit value previously was
-       EXT4\_ECOMPR\_FL (compression error), which was never used.
+     - Encrypted inode (EXT4_ENCRYPT_FL). This bit value previously was
+       EXT4_ECOMPR_FL (compression error), which was never used.
    * - 0x1000
-     - Directory has hashed indexes (EXT4\_INDEX\_FL).
+     - Directory has hashed indexes (EXT4_INDEX_FL).
    * - 0x2000
-     - AFS magic directory (EXT4\_IMAGIC\_FL).
+     - AFS magic directory (EXT4_IMAGIC_FL).
    * - 0x4000
      - File data must always be written through the journal
-       (EXT4\_JOURNAL\_DATA\_FL).
+       (EXT4_JOURNAL_DATA_FL).
    * - 0x8000
-     - File tail should not be merged (EXT4\_NOTAIL\_FL). (not used by ext4)
+     - File tail should not be merged (EXT4_NOTAIL_FL). (not used by ext4)
    * - 0x10000
      - All directory entry data should be written synchronously (see
-       ``dirsync``) (EXT4\_DIRSYNC\_FL).
+       ``dirsync``) (EXT4_DIRSYNC_FL).
    * - 0x20000
-     - Top of directory hierarchy (EXT4\_TOPDIR\_FL).
+     - Top of directory hierarchy (EXT4_TOPDIR_FL).
    * - 0x40000
-     - This is a huge file (EXT4\_HUGE\_FILE\_FL).
+     - This is a huge file (EXT4_HUGE_FILE_FL).
    * - 0x80000
-     - Inode uses extents (EXT4\_EXTENTS\_FL).
+     - Inode uses extents (EXT4_EXTENTS_FL).
    * - 0x100000
-     - Verity protected file (EXT4\_VERITY\_FL).
+     - Verity protected file (EXT4_VERITY_FL).
    * - 0x200000
      - Inode stores a large extended attribute value in its data blocks
-       (EXT4\_EA\_INODE\_FL).
+       (EXT4_EA_INODE_FL).
    * - 0x400000
-     - This file has blocks allocated past EOF (EXT4\_EOFBLOCKS\_FL).
+     - This file has blocks allocated past EOF (EXT4_EOFBLOCKS_FL).
        (deprecated)
    * - 0x01000000
      - Inode is a snapshot (``EXT4_SNAPFILE_FL``). (not in mainline)
@@ -294,21 +294,21 @@ The ``i_flags`` field is a combination of these values:
      - Snapshot shrink has completed (``EXT4_SNAPFILE_SHRUNK_FL``). (not in
        mainline)
    * - 0x10000000
-     - Inode has inline data (EXT4\_INLINE\_DATA\_FL).
+     - Inode has inline data (EXT4_INLINE_DATA_FL).
    * - 0x20000000
-     - Create children with the same project ID (EXT4\_PROJINHERIT\_FL).
+     - Create children with the same project ID (EXT4_PROJINHERIT_FL).
    * - 0x80000000
-     - Reserved for ext4 library (EXT4\_RESERVED\_FL).
+     - Reserved for ext4 library (EXT4_RESERVED_FL).
    * -
      - Aggregate flags:
    * - 0x705BDFFF
      - User-visible flags.
    * - 0x604BC0FF
-     - User-modifiable flags. Note that while EXT4\_JOURNAL\_DATA\_FL and
-       EXT4\_EXTENTS\_FL can be set with setattr, they are not in the kernel's
-       EXT4\_FL\_USER\_MODIFIABLE mask, since it needs to handle the setting of
+     - User-modifiable flags. Note that while EXT4_JOURNAL_DATA_FL and
+       EXT4_EXTENTS_FL can be set with setattr, they are not in the kernel's
+       EXT4_FL_USER_MODIFIABLE mask, since it needs to handle the setting of
        these flags in a special manner and they are masked out of the set of
-       flags that are saved directly to i\_flags.
+       flags that are saved directly to i_flags.
 
 .. _i_osd1:
 
@@ -325,9 +325,9 @@ Linux:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - l\_i\_version
-     - Inode version. However, if the EA\_INODE inode flag is set, this inode
+     - __le32
+     - l_i_version
+     - Inode version. However, if the EA_INODE inode flag is set, this inode
        stores an extended attribute value and this field contains the upper 32
        bits of the attribute value's reference count.
 
@@ -342,8 +342,8 @@ Hurd:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - h\_i\_translator
+     - __le32
+     - h_i_translator
      - ??
 
 Masix:
@@ -357,8 +357,8 @@ Masix:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - m\_i\_reserved
+     - __le32
+     - m_i_reserved
      - ??
 
 .. _i_osd2:
@@ -376,30 +376,30 @@ Linux:
      - Name
      - Description
    * - 0x0
-     - \_\_le16
-     - l\_i\_blocks\_high
+     - __le16
+     - l_i_blocks_high
      - Upper 16-bits of the block count. Please see the note attached to
-       i\_blocks\_lo.
+       i_blocks_lo.
    * - 0x2
-     - \_\_le16
-     - l\_i\_file\_acl\_high
+     - __le16
+     - l_i_file_acl_high
      - Upper 16-bits of the extended attribute block (historically, the file
        ACL location). See the Extended Attributes section below.
    * - 0x4
-     - \_\_le16
-     - l\_i\_uid\_high
+     - __le16
+     - l_i_uid_high
      - Upper 16-bits of the Owner UID.
    * - 0x6
-     - \_\_le16
-     - l\_i\_gid\_high
+     - __le16
+     - l_i_gid_high
      - Upper 16-bits of the GID.
    * - 0x8
-     - \_\_le16
-     - l\_i\_checksum\_lo
+     - __le16
+     - l_i_checksum_lo
      - Lower 16-bits of the inode checksum.
    * - 0xA
-     - \_\_le16
-     - l\_i\_reserved
+     - __le16
+     - l_i_reserved
      - Unused.
 
 Hurd:
@@ -413,24 +413,24 @@ Hurd:
      - Name
      - Description
    * - 0x0
-     - \_\_le16
-     - h\_i\_reserved1
+     - __le16
+     - h_i_reserved1
      - ??
    * - 0x2
-     - \_\_u16
-     - h\_i\_mode\_high
+     - __u16
+     - h_i_mode_high
      - Upper 16-bits of the file mode.
    * - 0x4
-     - \_\_le16
-     - h\_i\_uid\_high
+     - __le16
+     - h_i_uid_high
      - Upper 16-bits of the Owner UID.
    * - 0x6
-     - \_\_le16
-     - h\_i\_gid\_high
+     - __le16
+     - h_i_gid_high
      - Upper 16-bits of the GID.
    * - 0x8
-     - \_\_u32
-     - h\_i\_author
+     - __u32
+     - h_i_author
      - Author code?
 
 Masix:
@@ -444,17 +444,17 @@ Masix:
      - Name
      - Description
    * - 0x0
-     - \_\_le16
-     - h\_i\_reserved1
+     - __le16
+     - h_i_reserved1
      - ??
    * - 0x2
-     - \_\_u16
-     - m\_i\_file\_acl\_high
+     - __u16
+     - m_i_file_acl_high
      - Upper 16-bits of the extended attribute block (historically, the file
        ACL location).
    * - 0x4
-     - \_\_u32
-     - m\_i\_reserved2[2]
+     - __u32
+     - m_i_reserved2[2]
      - ??
 
 Inode Size
@@ -466,11 +466,11 @@ In ext2 and ext3, the inode structure size was fixed at 128 bytes
 on-disk inode at format time for all inodes in the filesystem to provide
 space beyond the end of the original ext2 inode. The on-disk inode
 record size is recorded in the superblock as ``s_inode_size``. The
-number of bytes actually used by struct ext4\_inode beyond the original
+number of bytes actually used by struct ext4_inode beyond the original
 128-byte ext2 inode is recorded in the ``i_extra_isize`` field for each
-inode, which allows struct ext4\_inode to grow for a new kernel without
+inode, which allows struct ext4_inode to grow for a new kernel without
 having to upgrade all of the on-disk inodes. Access to fields beyond
-EXT2\_GOOD\_OLD\_INODE\_SIZE should be verified to be within
+EXT2_GOOD_OLD_INODE_SIZE should be verified to be within
 ``i_extra_isize``. By default, ext4 inode records are 256 bytes, and (as
 of August 2019) the inode structure is 160 bytes
 (``i_extra_isize = 32``). The extra space between the end of the inode
@@ -516,7 +516,7 @@ creation time (crtime); this field is 64-bits wide and decoded in the
 same manner as 64-bit [cma]time. Neither crtime nor dtime are accessible
 through the regular stat() interface, though debugfs will report them.
 
-We use the 32-bit signed time value plus (2^32 \* (extra epoch bits)).
+We use the 32-bit signed time value plus (2^32 * (extra epoch bits)).
 In other words:
 
 .. list-table::
@@ -525,8 +525,8 @@ In other words:
 
    * - Extra epoch bits
      - MSB of 32-bit time
-     - Adjustment for signed 32-bit to 64-bit tv\_sec
-     - Decoded 64-bit tv\_sec
+     - Adjustment for signed 32-bit to 64-bit tv_sec
+     - Decoded 64-bit tv_sec
      - valid time range
    * - 0 0
      - 1
index 5fad388..a6bef52 100644 (file)
@@ -63,8 +63,8 @@ Generally speaking, the journal has this format:
    :header-rows: 1
 
    * - Superblock
-     - descriptor\_block (data\_blocks or revocation\_block) [more data or
-       revocations] commmit\_block
+     - descriptor_block (data_blocks or revocation_block) [more data or
+       revocations] commit_block
      - [more transactions...]
    * - 
      - One transaction
@@ -93,8 +93,8 @@ superblock.
    * - 1024 bytes of padding
      - ext4 Superblock
      - Journal Superblock
-     - descriptor\_block (data\_blocks or revocation\_block) [more data or
-       revocations] commmit\_block
+     - descriptor_block (data_blocks or revocation_block) [more data or
+       revocations] commit_block
      - [more transactions...]
    * - 
      -
@@ -117,17 +117,17 @@ Every block in the journal starts with a common 12-byte header
      - Name
      - Description
    * - 0x0
-     - \_\_be32
-     - h\_magic
+     - __be32
+     - h_magic
      - jbd2 magic number, 0xC03B3998.
    * - 0x4
-     - \_\_be32
-     - h\_blocktype
+     - __be32
+     - h_blocktype
      - Description of what this block contains. See the jbd2_blocktype_ table
        below.
    * - 0x8
-     - \_\_be32
-     - h\_sequence
+     - __be32
+     - h_sequence
      - The transaction ID that goes with this block.
 
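In C terms, the common header looks like the following sketch; plain
``uint32_t`` stands in for the kernel's ``__be32``, so a reader must
byte-swap on little-endian hosts:

.. code-block:: c

   #include <stdint.h>
   #include <arpa/inet.h>   /* ntohl(): big-endian to host order */

   struct journal_header {       /* mirrors jbd2's journal_header_t */
       uint32_t h_magic;         /* 0xC03B3998, big-endian on disk */
       uint32_t h_blocktype;     /* see the jbd2_blocktype table below */
       uint32_t h_sequence;      /* transaction ID */
   };

   static int is_jbd2_block(const struct journal_header *h)
   {
       return ntohl(h->h_magic) == 0xC03B3998U;
   }
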
 .. _jbd2_blocktype:
@@ -177,99 +177,99 @@ which is 1024 bytes long:
      -
      - Static information describing the journal.
    * - 0x0
-     - journal\_header\_t (12 bytes)
-     - s\_header
+     - journal_header_t (12 bytes)
+     - s_header
      - Common header identifying this as a superblock.
    * - 0xC
-     - \_\_be32
-     - s\_blocksize
+     - __be32
+     - s_blocksize
      - Journal device block size.
    * - 0x10
-     - \_\_be32
-     - s\_maxlen
+     - __be32
+     - s_maxlen
      - Total number of blocks in this journal.
    * - 0x14
-     - \_\_be32
-     - s\_first
+     - __be32
+     - s_first
      - First block of log information.
    * -
      -
      -
      - Dynamic information describing the current state of the log.
    * - 0x18
-     - \_\_be32
-     - s\_sequence
+     - __be32
+     - s_sequence
      - First commit ID expected in log.
    * - 0x1C
-     - \_\_be32
-     - s\_start
+     - __be32
+     - s_start
      - Block number of the start of log. Contrary to the comments, this field
        being zero does not imply that the journal is clean!
    * - 0x20
-     - \_\_be32
-     - s\_errno
-     - Error value, as set by jbd2\_journal\_abort().
+     - __be32
+     - s_errno
+     - Error value, as set by jbd2_journal_abort().
    * -
      -
      -
      - The remaining fields are only valid in a v2 superblock.
    * - 0x24
-     - \_\_be32
-     - s\_feature\_compat;
+     - __be32
+     - s_feature_compat;
      - Compatible feature set. See the table jbd2_compat_ below.
    * - 0x28
-     - \_\_be32
-     - s\_feature\_incompat
+     - __be32
+     - s_feature_incompat
      - Incompatible feature set. See the table jbd2_incompat_ below.
    * - 0x2C
-     - \_\_be32
-     - s\_feature\_ro\_compat
+     - __be32
+     - s_feature_ro_compat
      - Read-only compatible feature set. There aren't any of these currently.
    * - 0x30
-     - \_\_u8
-     - s\_uuid[16]
+     - __u8
+     - s_uuid[16]
      - 128-bit uuid for journal. This is compared against the copy in the ext4
        super block at mount time.
    * - 0x40
-     - \_\_be32
-     - s\_nr\_users
+     - __be32
+     - s_nr_users
      - Number of file systems sharing this journal.
    * - 0x44
-     - \_\_be32
-     - s\_dynsuper
+     - __be32
+     - s_dynsuper
      - Location of dynamic super block copy. (Not used?)
    * - 0x48
-     - \_\_be32
-     - s\_max\_transaction
+     - __be32
+     - s_max_transaction
      - Limit of journal blocks per transaction. (Not used?)
    * - 0x4C
-     - \_\_be32
-     - s\_max\_trans\_data
+     - __be32
+     - s_max_trans_data
      - Limit of data blocks per transaction. (Not used?)
    * - 0x50
-     - \_\_u8
-     - s\_checksum\_type
+     - __u8
+     - s_checksum_type
      - Checksum algorithm used for the journal.  See jbd2_checksum_type_ for
        more info.
    * - 0x51
-     - \_\_u8[3]
-     - s\_padding2
+     - __u8[3]
+     - s_padding2
      -
    * - 0x54
-     - \_\_be32
-     - s\_num\_fc\_blocks
+     - __be32
+     - s_num_fc_blocks
      - Number of fast commit blocks in the journal.
    * - 0x58
-     - \_\_u32
-     - s\_padding[42]
+     - __u32
+     - s_padding[42]
      -
    * - 0xFC
-     - \_\_be32
-     - s\_checksum
+     - __be32
+     - s_checksum
      - Checksum of the entire superblock, with this field set to zero.
    * - 0x100
-     - \_\_u8
-     - s\_users[16\*48]
+     - __u8
+     - s_users[16*48]
+     - IDs of all file systems sharing the log. e2fsprogs/Linux don't allow
        shared external journals, but I imagine Lustre (or ocfs2?), which use
        the jbd2 code, might.
@@ -286,7 +286,7 @@ The journal compat features are any combination of the following:
      - Description
    * - 0x1
      - Journal maintains checksums on the data blocks.
-       (JBD2\_FEATURE\_COMPAT\_CHECKSUM)
+       (JBD2_FEATURE_COMPAT_CHECKSUM)
 
 .. _jbd2_incompat:
 
@@ -299,23 +299,23 @@ The journal incompat features are any combination of the following:
    * - Value
      - Description
    * - 0x1
-     - Journal has block revocation records. (JBD2\_FEATURE\_INCOMPAT\_REVOKE)
+     - Journal has block revocation records. (JBD2_FEATURE_INCOMPAT_REVOKE)
    * - 0x2
      - Journal can deal with 64-bit block numbers.
-       (JBD2\_FEATURE\_INCOMPAT\_64BIT)
+       (JBD2_FEATURE_INCOMPAT_64BIT)
    * - 0x4
-     - Journal commits asynchronously. (JBD2\_FEATURE\_INCOMPAT\_ASYNC\_COMMIT)
+     - Journal commits asynchronously. (JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)
    * - 0x8
      - This journal uses v2 of the checksum on-disk format. Each journal
        metadata block gets its own checksum, and the block tags in the
        descriptor table contain checksums for each of the data blocks in the
-       journal. (JBD2\_FEATURE\_INCOMPAT\_CSUM\_V2)
+       journal. (JBD2_FEATURE_INCOMPAT_CSUM_V2)
    * - 0x10
      - This journal uses v3 of the checksum on-disk format. This is the same as
        v2, but the journal block tag size is fixed regardless of the size of
-       block numbers. (JBD2\_FEATURE\_INCOMPAT\_CSUM\_V3)
+       block numbers. (JBD2_FEATURE_INCOMPAT_CSUM_V3)
    * - 0x20
-     - Journal has fast commit blocks. (JBD2\_FEATURE\_INCOMPAT\_FAST\_COMMIT)
+     - Journal has fast commit blocks. (JBD2_FEATURE_INCOMPAT_FAST_COMMIT)
 
 .. _jbd2_checksum_type:
 
@@ -355,11 +355,11 @@ Descriptor blocks consume at least 36 bytes, but use a full block:
      - Name
      - Descriptor
    * - 0x0
-     - journal\_header\_t
+     - journal_header_t
      - (open coded)
      - Common block header.
    * - 0xC
-     - struct journal\_block\_tag\_s
+     - struct journal_block_tag_s
      - open coded array[]
      - Enough tags either to fill up the block or to describe all the data
        blocks that follow this descriptor block.
@@ -367,7 +367,7 @@ Descriptor blocks consume at least 36 bytes, but use a full block:
 Journal block tags have any of the following formats, depending on which
 journal feature and block tag flags are set.
 
-If JBD2\_FEATURE\_INCOMPAT\_CSUM\_V3 is set, the journal block tag is
+If JBD2_FEATURE_INCOMPAT_CSUM_V3 is set, the journal block tag is
 defined as ``struct journal_block_tag3_s``, which looks like the
 following. The size is 16 or 32 bytes.
 
@@ -380,24 +380,24 @@ following. The size is 16 or 32 bytes.
      - Name
      - Descriptor
    * - 0x0
-     - \_\_be32
-     - t\_blocknr
+     - __be32
+     - t_blocknr
      - Lower 32-bits of the location of where the corresponding data block
        should end up on disk.
    * - 0x4
-     - \_\_be32
-     - t\_flags
+     - __be32
+     - t_flags
      - Flags that go with the descriptor. See the table jbd2_tag_flags_ for
        more info.
    * - 0x8
-     - \_\_be32
-     - t\_blocknr\_high
+     - __be32
+     - t_blocknr_high
      - Upper 32-bits of the location of where the corresponding data block
-       should end up on disk. This is zero if JBD2\_FEATURE\_INCOMPAT\_64BIT is
+       should end up on disk. This is zero if JBD2_FEATURE_INCOMPAT_64BIT is
        not enabled.
    * - 0xC
-     - \_\_be32
-     - t\_checksum
+     - __be32
+     - t_checksum
      - Checksum of the journal UUID, the sequence number, and the data block.
    * -
      -
@@ -433,7 +433,7 @@ The journal tag flags are any combination of the following:
    * - 0x8
      - This is the last tag in this descriptor block.
 
-If JBD2\_FEATURE\_INCOMPAT\_CSUM\_V3 is NOT set, the journal block tag
+If JBD2_FEATURE_INCOMPAT_CSUM_V3 is NOT set, the journal block tag
 is defined as ``struct journal_block_tag_s``, which looks like the
 following. The size is 8, 12, 24, or 28 bytes:
 
@@ -446,18 +446,18 @@ following. The size is 8, 12, 24, or 28 bytes:
      - Name
      - Descriptor
    * - 0x0
-     - \_\_be32
-     - t\_blocknr
+     - __be32
+     - t_blocknr
      - Lower 32-bits of the location of where the corresponding data block
        should end up on disk.
    * - 0x4
-     - \_\_be16
-     - t\_checksum
+     - __be16
+     - t_checksum
      - Checksum of the journal UUID, the sequence number, and the data block.
        Note that only the lower 16 bits are stored.
    * - 0x6
-     - \_\_be16
-     - t\_flags
+     - __be16
+     - t_flags
      - Flags that go with the descriptor. See the table jbd2_tag_flags_ for
        more info.
    * -
@@ -466,8 +466,8 @@ following. The size is 8, 12, 24, or 28 bytes:
      - This next field is only present if the super block indicates support for
        64-bit block numbers.
    * - 0x8
-     - \_\_be32
-     - t\_blocknr\_high
+     - __be32
+     - t_blocknr_high
      - Upper 32-bits of the location of where the corresponding data block
        should end up on disk.
    * -
@@ -483,8 +483,8 @@ following. The size is 8, 12, 24, or 28 bytes:
        ``j_uuid`` field in ``struct journal_s``, but only tune2fs touches that
        field.
 
-If JBD2\_FEATURE\_INCOMPAT\_CSUM\_V2 or
-JBD2\_FEATURE\_INCOMPAT\_CSUM\_V3 are set, the end of the block is a
+If JBD2_FEATURE_INCOMPAT_CSUM_V2 or
+JBD2_FEATURE_INCOMPAT_CSUM_V3 are set, the end of the block is a
 ``struct jbd2_journal_block_tail``, which looks like this:
 
 .. list-table::
@@ -496,8 +496,8 @@ JBD2\_FEATURE\_INCOMPAT\_CSUM\_V3 are set, the end of the block is a
      - Name
      - Descriptor
    * - 0x0
-     - \_\_be32
-     - t\_checksum
+     - __be32
+     - t_checksum
      - Checksum of the journal UUID + the descriptor block, with this field set
        to zero.
 
@@ -538,25 +538,25 @@ length, but use a full block:
      - Name
      - Description
    * - 0x0
-     - journal\_header\_t
-     - r\_header
+     - journal_header_t
+     - r_header
      - Common block header.
    * - 0xC
-     - \_\_be32
-     - r\_count
+     - __be32
+     - r_count
      - Number of bytes used in this block.
    * - 0x10
-     - \_\_be32 or \_\_be64
+     - __be32 or __be64
      - blocks[0]
      - Blocks to revoke.
 
-After r\_count is a linear array of block numbers that are effectively
+After r_count is a linear array of block numbers that are effectively
 revoked by this transaction. The size of each block number is 8 bytes if
 the superblock advertises 64-bit block number support, or 4 bytes
 otherwise.
 
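A sketch of that walk; the helper names are this sketch's own, and
``r_count`` is taken to include the 16-byte header, so records start at
offset 0x10:

.. code-block:: c

   #include <arpa/inet.h>   /* ntohl(): on-disk fields are big-endian */
   #include <stddef.h>
   #include <stdint.h>
   #include <string.h>

   static uint32_t be32_at(const uint8_t *p)
   {
       uint32_t v;
       memcpy(&v, p, 4);     /* memcpy avoids unaligned access */
       return ntohl(v);
   }

   /* Walk one revocation block, calling revoke() for each revoked
    * block number.  'blk' points at the start of the block. */
   static void walk_revoke_block(const uint8_t *blk, int feature_64bit,
                                 void (*revoke)(uint64_t blocknr))
   {
       uint32_t r_count = be32_at(blk + 0xC);
       size_t width = feature_64bit ? 8 : 4;

       for (size_t off = 0x10; off + width <= r_count; off += width) {
           uint64_t nr = be32_at(blk + off);
           if (feature_64bit)
               nr = (nr << 32) | be32_at(blk + off + 4);
           revoke(nr);
       }
   }
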
-If JBD2\_FEATURE\_INCOMPAT\_CSUM\_V2 or
-JBD2\_FEATURE\_INCOMPAT\_CSUM\_V3 are set, the end of the revocation
+If JBD2_FEATURE_INCOMPAT_CSUM_V2 or
+JBD2_FEATURE_INCOMPAT_CSUM_V3 are set, the end of the revocation
 block is a ``struct jbd2_journal_revoke_tail``, which has this format:
 
 .. list-table::
@@ -568,8 +568,8 @@ block is a ``struct jbd2_journal_revoke_tail``, which has this format:
      - Name
      - Description
    * - 0x0
-     - \_\_be32
-     - r\_checksum
+     - __be32
+     - r_checksum
      - Checksum of the journal UUID + revocation block
 
 Commit Block
@@ -592,38 +592,38 @@ bytes long (but uses a full block):
      - Name
      - Descriptor
    * - 0x0
-     - journal\_header\_s
+     - journal_header_s
      - (open coded)
      - Common block header.
    * - 0xC
      - unsigned char
-     - h\_chksum\_type
+     - h_chksum_type
      - The type of checksum to use to verify the integrity of the data blocks
        in the transaction. See jbd2_checksum_type_ for more info.
    * - 0xD
      - unsigned char
-     - h\_chksum\_size
+     - h_chksum_size
      - The number of bytes used by the checksum. Most likely 4.
    * - 0xE
      - unsigned char
-     - h\_padding[2]
+     - h_padding[2]
      -
    * - 0x10
-     - \_\_be32
-     - h\_chksum[JBD2\_CHECKSUM\_BYTES]
+     - __be32
+     - h_chksum[JBD2_CHECKSUM_BYTES]
      - 32 bytes of space to store checksums. If
-       JBD2\_FEATURE\_INCOMPAT\_CSUM\_V2 or JBD2\_FEATURE\_INCOMPAT\_CSUM\_V3
+       JBD2_FEATURE_INCOMPAT_CSUM_V2 or JBD2_FEATURE_INCOMPAT_CSUM_V3
        are set, the first ``__be32`` is the checksum of the journal UUID and
        the entire commit block, with this field zeroed. If
-       JBD2\_FEATURE\_COMPAT\_CHECKSUM is set, the first ``__be32`` is the
+       JBD2_FEATURE_COMPAT_CHECKSUM is set, the first ``__be32`` is the
        crc32 of all the blocks already written to the transaction.
    * - 0x30
-     - \_\_be64
-     - h\_commit\_sec
+     - __be64
+     - h_commit_sec
      - The time that the transaction was committed, in seconds since the epoch.
    * - 0x38
-     - \_\_be32
-     - h\_commit\_nsec
+     - __be32
+     - h_commit_nsec
      - Nanoseconds component of the above timestamp.
 
 Fast commits
index 2566098..174dd65 100644 (file)
@@ -7,8 +7,8 @@ Multiple mount protection (MMP) is a feature that protects the
 filesystem against multiple hosts trying to use the filesystem
 simultaneously. When a filesystem is opened (for mounting, or fsck,
 etc.), the MMP code running on the node (call it node A) checks a
-sequence number. If the sequence number is EXT4\_MMP\_SEQ\_CLEAN, the
-open continues. If the sequence number is EXT4\_MMP\_SEQ\_FSCK, then
+sequence number. If the sequence number is EXT4_MMP_SEQ_CLEAN, the
+open continues. If the sequence number is EXT4_MMP_SEQ_FSCK, then
 fsck is (hopefully) running, and open fails immediately. Otherwise, the
 open code will wait for twice the specified MMP check interval and check
 the sequence number again. If the sequence number has changed, then the
@@ -40,38 +40,38 @@ The MMP structure (``struct mmp_struct``) is as follows:
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - mmp\_magic
+     - __le32
+     - mmp_magic
      - Magic number for MMP, 0x004D4D50 (“MMP”).
    * - 0x4
-     - \_\_le32
-     - mmp\_seq
+     - __le32
+     - mmp_seq
      - Sequence number, updated periodically.
    * - 0x8
-     - \_\_le64
-     - mmp\_time
+     - __le64
+     - mmp_time
      - Time that the MMP block was last updated.
    * - 0x10
      - char[64]
-     - mmp\_nodename
+     - mmp_nodename
      - Hostname of the node that opened the filesystem.
    * - 0x50
      - char[32]
-     - mmp\_bdevname
+     - mmp_bdevname
      - Block device name of the filesystem.
    * - 0x70
-     - \_\_le16
-     - mmp\_check\_interval
+     - __le16
+     - mmp_check_interval
      - The MMP re-check interval, in seconds.
    * - 0x72
-     - \_\_le16
-     - mmp\_pad1
+     - __le16
+     - mmp_pad1
      - Zero.
    * - 0x74
-     - \_\_le32[226]
-     - mmp\_pad2
+     - __le32[226]
+     - mmp_pad2
      - Zero.
    * - 0x3FC
-     - \_\_le32
-     - mmp\_checksum
+     - __le32
+     - mmp_checksum
      - Checksum of the MMP block.
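A sketch of the open-time check described at the start of this section.
``read_mmp_seq()`` is a hypothetical stand-in for "re-read ``mmp_seq`` from
the MMP block on disk"; the sequence constants come from the kernel's ext4
MMP code (note they embed the 0x4D4D50 "MMP" magic):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>
   #include <unistd.h>

   #define EXT4_MMP_SEQ_CLEAN 0xFF4D4D50U
   #define EXT4_MMP_SEQ_FSCK  0xE24D4D50U

   extern uint32_t read_mmp_seq(void);   /* hypothetical helper */

   static bool mmp_open_ok(unsigned int mmp_check_interval)
   {
       uint32_t seq = read_mmp_seq();

       if (seq == EXT4_MMP_SEQ_CLEAN)
           return true;               /* clean: proceed with the open */
       if (seq == EXT4_MMP_SEQ_FSCK)
           return false;              /* fsck is (hopefully) running */

       /* Unknown state: wait twice the check interval and re-read.  A
        * changed sequence number means another node is alive and using
        * the filesystem, so the open must fail. */
       sleep(2 * mmp_check_interval);
       return read_mmp_seq() == seq;
   }
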
index 123ebfd..0fad6ed 100644 (file)
@@ -7,7 +7,7 @@ An ext4 file system is split into a series of block groups. To reduce
 performance difficulties due to fragmentation, the block allocator tries
 very hard to keep each file's blocks within the same group, thereby
 reducing seek times. The size of a block group is specified in
-``sb.s_blocks_per_group`` blocks, though it can also calculated as 8 \*
+``sb.s_blocks_per_group`` blocks, though it can also be calculated as 8 *
 ``block_size_in_bytes``. With the default block size of 4KiB, each group
 will contain 32,768 blocks, for a length of 128MiB. The number of block
 groups is the size of the device divided by the size of a block group.
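The quoted defaults follow directly; the factor of 8 is simply the number of
bits per byte, since a single block used as the block bitmap tracks one
block per bit::

    blocks per group = 8 bits/byte * 4096 bytes = 32,768 blocks
    group size       = 32,768 blocks * 4 KiB    = 128 MiB
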
index 94f304e..fc06369 100644 (file)
@@ -34,7 +34,7 @@ ext4 reserves some inode for special features, as follows:
    * - 10
      - Replica inode, used for some non-upstream feature?
    * - 11
-     - Traditional first non-reserved inode. Usually this is the lost+found directory. See s\_first\_ino in the superblock.
+     - Traditional first non-reserved inode. Usually this is the lost+found directory. See s_first_ino in the superblock.
 
 Note that there are also some inodes allocated from non-reserved inode numbers
 for other filesystem features which are not referenced from standard directory
 hierarchy. These are generally referenced from the superblock. They are:
    * - Superblock field
      - Description
 
-   * - s\_lpf\_ino
+   * - s_lpf_ino
      - Inode number of lost+found directory.
-   * - s\_prj\_quota\_inum
+   * - s_prj_quota_inum
      - Inode number of quota file tracking project quotas
-   * - s\_orphan\_file\_inum
+   * - s_orphan_file_inum
      - Inode number of file tracking orphan inodes.
index f6a548e..2688885 100644 (file)
@@ -7,7 +7,7 @@ The superblock records various information about the enclosing
 filesystem, such as block counts, inode counts, supported features,
 maintenance information, and more.
 
-If the sparse\_super feature flag is set, redundant copies of the
+If the sparse_super feature flag is set, redundant copies of the
 superblock and group descriptors are kept only in the groups whose group
 number is either 0 or a power of 3, 5, or 7. If the flag is not set,
 redundant copies are kept in all groups.
@@ -27,107 +27,107 @@ The ext4 superblock is laid out as follows in
      - Name
      - Description
    * - 0x0
-     - \_\_le32
-     - s\_inodes\_count
+     - __le32
+     - s_inodes_count
      - Total inode count.
    * - 0x4
-     - \_\_le32
-     - s\_blocks\_count\_lo
+     - __le32
+     - s_blocks_count_lo
      - Total block count.
    * - 0x8
-     - \_\_le32
-     - s\_r\_blocks\_count\_lo
+     - __le32
+     - s_r_blocks_count_lo
      - This number of blocks can only be allocated by the super-user.
    * - 0xC
-     - \_\_le32
-     - s\_free\_blocks\_count\_lo
+     - __le32
+     - s_free_blocks_count_lo
      - Free block count.
    * - 0x10
-     - \_\_le32
-     - s\_free\_inodes\_count
+     - __le32
+     - s_free_inodes_count
      - Free inode count.
    * - 0x14
-     - \_\_le32
-     - s\_first\_data\_block
+     - __le32
+     - s_first_data_block
      - First data block. This must be at least 1 for 1k-block filesystems and
        is typically 0 for all other block sizes.
    * - 0x18
-     - \_\_le32
-     - s\_log\_block\_size
-     - Block size is 2 ^ (10 + s\_log\_block\_size).
+     - __le32
+     - s_log_block_size
+     - Block size is 2 ^ (10 + s_log_block_size) (a decoding sketch follows
+       this table).
    * - 0x1C
-     - \_\_le32
-     - s\_log\_cluster\_size
-     - Cluster size is 2 ^ (10 + s\_log\_cluster\_size) blocks if bigalloc is
-       enabled. Otherwise s\_log\_cluster\_size must equal s\_log\_block\_size.
+     - __le32
+     - s_log_cluster_size
+     - Cluster size is 2 ^ (10 + s_log_cluster_size) blocks if bigalloc is
+       enabled. Otherwise s_log_cluster_size must equal s_log_block_size.
    * - 0x20
-     - \_\_le32
-     - s\_blocks\_per\_group
+     - __le32
+     - s_blocks_per_group
      - Blocks per group.
    * - 0x24
-     - \_\_le32
-     - s\_clusters\_per\_group
+     - __le32
+     - s_clusters_per_group
      - Clusters per group, if bigalloc is enabled. Otherwise
-       s\_clusters\_per\_group must equal s\_blocks\_per\_group.
+       s_clusters_per_group must equal s_blocks_per_group.
    * - 0x28
-     - \_\_le32
-     - s\_inodes\_per\_group
+     - __le32
+     - s_inodes_per_group
      - Inodes per group.
    * - 0x2C
-     - \_\_le32
-     - s\_mtime
+     - __le32
+     - s_mtime
      - Mount time, in seconds since the epoch.
    * - 0x30
-     - \_\_le32
-     - s\_wtime
+     - __le32
+     - s_wtime
      - Write time, in seconds since the epoch.
    * - 0x34
-     - \_\_le16
-     - s\_mnt\_count
+     - __le16
+     - s_mnt_count
      - Number of mounts since the last fsck.
    * - 0x36
-     - \_\_le16
-     - s\_max\_mnt\_count
+     - __le16
+     - s_max_mnt_count
      - Number of mounts beyond which a fsck is needed.
    * - 0x38
-     - \_\_le16
-     - s\_magic
+     - __le16
+     - s_magic
      - Magic signature, 0xEF53
    * - 0x3A
-     - \_\_le16
-     - s\_state
+     - __le16
+     - s_state
      - File system state. See super_state_ for more info.
    * - 0x3C
-     - \_\_le16
-     - s\_errors
+     - __le16
+     - s_errors
      - Behaviour when detecting errors. See super_errors_ for more info.
    * - 0x3E
-     - \_\_le16
-     - s\_minor\_rev\_level
+     - __le16
+     - s_minor_rev_level
      - Minor revision level.
    * - 0x40
-     - \_\_le32
-     - s\_lastcheck
+     - __le32
+     - s_lastcheck
      - Time of last check, in seconds since the epoch.
    * - 0x44
-     - \_\_le32
-     - s\_checkinterval
+     - __le32
+     - s_checkinterval
      - Maximum time between checks, in seconds.
    * - 0x48
-     - \_\_le32
-     - s\_creator\_os
+     - __le32
+     - s_creator_os
      - Creator OS. See the table super_creator_ for more info.
    * - 0x4C
-     - \_\_le32
-     - s\_rev\_level
+     - __le32
+     - s_rev_level
      - Revision level. See the table super_revision_ for more info.
    * - 0x50
-     - \_\_le16
-     - s\_def\_resuid
+     - __le16
+     - s_def_resuid
      - Default uid for reserved blocks.
    * - 0x52
-     - \_\_le16
-     - s\_def\_resgid
+     - __le16
+     - s_def_resgid
      - Default gid for reserved blocks.
    * -
      -
@@ -143,50 +143,50 @@ The ext4 superblock is laid out as follows in
        about a feature in either the compatible or incompatible feature set, it
        must abort and not try to meddle with things it doesn't understand...
    * - 0x54
-     - \_\_le32
-     - s\_first\_ino
+     - __le32
+     - s_first_ino
      - First non-reserved inode.
    * - 0x58
-     - \_\_le16
-     - s\_inode\_size
+     - __le16
+     - s_inode_size
      - Size of inode structure, in bytes.
    * - 0x5A
-     - \_\_le16
-     - s\_block\_group\_nr
+     - __le16
+     - s_block_group_nr
      - Block group # of this superblock.
    * - 0x5C
-     - \_\_le32
-     - s\_feature\_compat
+     - __le32
+     - s_feature_compat
      - Compatible feature set flags. Kernel can still read/write this fs even
        if it doesn't understand a flag; fsck should not do that. See the
        super_compat_ table for more info.
    * - 0x60
-     - \_\_le32
-     - s\_feature\_incompat
+     - __le32
+     - s_feature_incompat
      - Incompatible feature set. If the kernel or fsck doesn't understand one
        of these bits, it should stop. See the super_incompat_ table for more
        info.
    * - 0x64
-     - \_\_le32
-     - s\_feature\_ro\_compat
+     - __le32
+     - s_feature_ro_compat
      - Readonly-compatible feature set. If the kernel doesn't understand one of
        these bits, it can still mount read-only. See the super_rocompat_ table
        for more info.
    * - 0x68
-     - \_\_u8
-     - s\_uuid[16]
+     - __u8
+     - s_uuid[16]
      - 128-bit UUID for volume.
    * - 0x78
      - char
-     - s\_volume\_name[16]
+     - s_volume_name[16]
      - Volume label.
    * - 0x88
      - char
-     - s\_last\_mounted[64]
+     - s_last_mounted[64]
      - Directory where filesystem was last mounted.
    * - 0xC8
-     - \_\_le32
-     - s\_algorithm\_usage\_bitmap
+     - __le32
+     - s_algorithm_usage_bitmap
      - For compression (Not used in e2fsprogs/Linux)
    * -
      -
@@ -194,18 +194,18 @@ The ext4 superblock is laid out as follows in
      - Performance hints.  Directory preallocation should only happen if the
        EXT4_FEATURE_COMPAT_DIR_PREALLOC flag is on.
    * - 0xCC
-     - \_\_u8
-     - s\_prealloc\_blocks
+     - __u8
+     - s_prealloc_blocks
      - #. of blocks to try to preallocate for ... files? (Not used in
        e2fsprogs/Linux)
    * - 0xCD
-     - \_\_u8
-     - s\_prealloc\_dir\_blocks
+     - __u8
+     - s_prealloc_dir_blocks
      - #. of blocks to preallocate for directories. (Not used in
        e2fsprogs/Linux)
    * - 0xCE
-     - \_\_le16
-     - s\_reserved\_gdt\_blocks
+     - __le16
+     - s_reserved_gdt_blocks
      - Number of reserved GDT entries for future filesystem expansion.
    * -
      -
@@ -213,281 +213,281 @@ The ext4 superblock is laid out as follows in
      - Journalling support is valid only if EXT4_FEATURE_COMPAT_HAS_JOURNAL is
        set.
    * - 0xD0
-     - \_\_u8
-     - s\_journal\_uuid[16]
+     - __u8
+     - s_journal_uuid[16]
      - UUID of journal superblock
    * - 0xE0
-     - \_\_le32
-     - s\_journal\_inum
+     - __le32
+     - s_journal_inum
      - inode number of journal file.
    * - 0xE4
-     - \_\_le32
-     - s\_journal\_dev
+     - __le32
+     - s_journal_dev
      - Device number of journal file, if the external journal feature flag is
        set.
    * - 0xE8
-     - \_\_le32
-     - s\_last\_orphan
+     - __le32
+     - s_last_orphan
      - Start of list of orphaned inodes to delete.
    * - 0xEC
-     - \_\_le32
-     - s\_hash\_seed[4]
+     - __le32
+     - s_hash_seed[4]
      - HTREE hash seed.
    * - 0xFC
-     - \_\_u8
-     - s\_def\_hash\_version
+     - __u8
+     - s_def_hash_version
      - Default hash algorithm to use for directory hashes. See super_def_hash_
        for more info.
    * - 0xFD
-     - \_\_u8
-     - s\_jnl\_backup\_type
-     - If this value is 0 or EXT3\_JNL\_BACKUP\_BLOCKS (1), then the
+     - __u8
+     - s_jnl_backup_type
+     - If this value is 0 or EXT3_JNL_BACKUP_BLOCKS (1), then the
        ``s_jnl_blocks`` field contains a duplicate copy of the inode's
        ``i_block[]`` array and ``i_size``.
    * - 0xFE
-     - \_\_le16
-     - s\_desc\_size
+     - __le16
+     - s_desc_size
      - Size of group descriptors, in bytes, if the 64bit incompat feature flag
        is set.
    * - 0x100
-     - \_\_le32
-     - s\_default\_mount\_opts
+     - __le32
+     - s_default_mount_opts
      - Default mount options. See the super_mountopts_ table for more info.
    * - 0x104
-     - \_\_le32
-     - s\_first\_meta\_bg
-     - First metablock block group, if the meta\_bg feature is enabled.
+     - __le32
+     - s_first_meta_bg
+     - First metablock block group, if the meta_bg feature is enabled.
    * - 0x108
-     - \_\_le32
-     - s\_mkfs\_time
+     - __le32
+     - s_mkfs_time
      - When the filesystem was created, in seconds since the epoch.
    * - 0x10C
-     - \_\_le32
-     - s\_jnl\_blocks[17]
+     - __le32
+     - s_jnl_blocks[17]
      - Backup copy of the journal inode's ``i_block[]`` array in the first 15
-       elements and i\_size\_high and i\_size in the 16th and 17th elements,
+       elements and i_size_high and i_size in the 16th and 17th elements,
        respectively.
    * -
      -
      -
      - 64bit support is valid only if EXT4_FEATURE_COMPAT_64BIT is set.
    * - 0x150
-     - \_\_le32
-     - s\_blocks\_count\_hi
+     - __le32
+     - s_blocks_count_hi
      - High 32-bits of the block count.
    * - 0x154
-     - \_\_le32
-     - s\_r\_blocks\_count\_hi
+     - __le32
+     - s_r_blocks_count_hi
      - High 32-bits of the reserved block count.
    * - 0x158
-     - \_\_le32
-     - s\_free\_blocks\_count\_hi
+     - __le32
+     - s_free_blocks_count_hi
      - High 32-bits of the free block count.
    * - 0x15C
-     - \_\_le16
-     - s\_min\_extra\_isize
+     - __le16
+     - s_min_extra_isize
      - All inodes have at least # bytes.
    * - 0x15E
-     - \_\_le16
-     - s\_want\_extra\_isize
+     - __le16
+     - s_want_extra_isize
      - New inodes should reserve # bytes.
    * - 0x160
-     - \_\_le32
-     - s\_flags
+     - __le32
+     - s_flags
      - Miscellaneous flags. See the super_flags_ table for more info.
    * - 0x164
-     - \_\_le16
-     - s\_raid\_stride
+     - __le16
+     - s_raid_stride
      - RAID stride. This is the number of logical blocks read from or written
        to the disk before moving to the next disk. This affects the placement
        of filesystem metadata, which will hopefully make RAID storage faster.
    * - 0x166
-     - \_\_le16
-     - s\_mmp\_interval
+     - __le16
+     - s_mmp_interval
      - #. seconds to wait in multi-mount prevention (MMP) checking. In theory,
        MMP is a mechanism to record in the superblock which host and device
        have mounted the filesystem, in order to prevent multiple mounts. This
        feature does not seem to be implemented...
    * - 0x168
-     - \_\_le64
-     - s\_mmp\_block
+     - __le64
+     - s_mmp_block
      - Block # for multi-mount protection data.
    * - 0x170
-     - \_\_le32
-     - s\_raid\_stripe\_width
+     - __le32
+     - s_raid_stripe_width
      - RAID stripe width. This is the number of logical blocks read from or
        written to the disk before coming back to the current disk. This is used
        by the block allocator to try to reduce the number of read-modify-write
        operations in a RAID5/6.
    * - 0x174
-     - \_\_u8
-     - s\_log\_groups\_per\_flex
+     - __u8
+     - s_log_groups_per_flex
      - Size of a flexible block group is 2 ^ ``s_log_groups_per_flex``.
    * - 0x175
-     - \_\_u8
-     - s\_checksum\_type
+     - __u8
+     - s_checksum_type
      - Metadata checksum algorithm type. The only valid value is 1 (crc32c).
    * - 0x176
-     - \_\_le16
-     - s\_reserved\_pad
+     - __le16
+     - s_reserved_pad
      -
    * - 0x178
-     - \_\_le64
-     - s\_kbytes\_written
+     - __le64
+     - s_kbytes_written
      - Number of KiB written to this filesystem over its lifetime.
    * - 0x180
-     - \_\_le32
-     - s\_snapshot\_inum
+     - __le32
+     - s_snapshot_inum
      - inode number of active snapshot. (Not used in e2fsprogs/Linux.)
    * - 0x184
-     - \_\_le32
-     - s\_snapshot\_id
+     - __le32
+     - s_snapshot_id
      - Sequential ID of active snapshot. (Not used in e2fsprogs/Linux.)
    * - 0x188
-     - \_\_le64
-     - s\_snapshot\_r\_blocks\_count
+     - __le64
+     - s_snapshot_r_blocks_count
      - Number of blocks reserved for active snapshot's future use. (Not used in
        e2fsprogs/Linux.)
    * - 0x190
-     - \_\_le32
-     - s\_snapshot\_list
+     - __le32
+     - s_snapshot_list
      - inode number of the head of the on-disk snapshot list. (Not used in
        e2fsprogs/Linux.)
    * - 0x194
-     - \_\_le32
-     - s\_error\_count
+     - __le32
+     - s_error_count
      - Number of errors seen.
    * - 0x198
-     - \_\_le32
-     - s\_first\_error\_time
+     - __le32
+     - s_first_error_time
      - First time an error happened, in seconds since the epoch.
    * - 0x19C
-     - \_\_le32
-     - s\_first\_error\_ino
+     - __le32
+     - s_first_error_ino
      - inode involved in first error.
    * - 0x1A0
-     - \_\_le64
-     - s\_first\_error\_block
+     - __le64
+     - s_first_error_block
      - Block number involved in the first error.
    * - 0x1A8
-     - \_\_u8
-     - s\_first\_error\_func[32]
+     - __u8
+     - s_first_error_func[32]
      - Name of function where the error happened.
    * - 0x1C8
-     - \_\_le32
-     - s\_first\_error\_line
+     - __le32
+     - s_first_error_line
      - Line number where error happened.
    * - 0x1CC
-     - \_\_le32
-     - s\_last\_error\_time
+     - __le32
+     - s_last_error_time
      - Time of most recent error, in seconds since the epoch.
    * - 0x1D0
-     - \_\_le32
-     - s\_last\_error\_ino
+     - __le32
+     - s_last_error_ino
      - inode involved in most recent error.
    * - 0x1D4
-     - \_\_le32
-     - s\_last\_error\_line
+     - __le32
+     - s_last_error_line
      - Line number where most recent error happened.
    * - 0x1D8
-     - \_\_le64
-     - s\_last\_error\_block
+     - __le64
+     - s_last_error_block
      - Block number involved in the most recent error.
    * - 0x1E0
-     - \_\_u8
-     - s\_last\_error\_func[32]
+     - __u8
+     - s_last_error_func[32]
      - Name of function where the most recent error happened.
    * - 0x200
-     - \_\_u8
-     - s\_mount\_opts[64]
+     - __u8
+     - s_mount_opts[64]
      - ASCIIZ string of mount options.
    * - 0x240
-     - \_\_le32
-     - s\_usr\_quota\_inum
+     - __le32
+     - s_usr_quota_inum
      - Inode number of user `quota <quota>`__ file.
    * - 0x244
-     - \_\_le32
-     - s\_grp\_quota\_inum
+     - __le32
+     - s_grp_quota_inum
      - Inode number of group `quota <quota>`__ file.
    * - 0x248
-     - \_\_le32
-     - s\_overhead\_blocks
+     - __le32
+     - s_overhead_blocks
      - Overhead blocks/clusters in fs. (Huh? This field is always zero, which
        means that the kernel calculates it dynamically.)
    * - 0x24C
-     - \_\_le32
-     - s\_backup\_bgs[2]
-     - Block groups containing superblock backups (if sparse\_super2)
+     - __le32
+     - s_backup_bgs[2]
+     - Block groups containing superblock backups (if sparse_super2)
    * - 0x254
-     - \_\_u8
-     - s\_encrypt\_algos[4]
+     - __u8
+     - s_encrypt_algos[4]
      - Encryption algorithms in use. There can be up to four algorithms in use
        at any time; valid algorithm codes are given in the super_encrypt_ table
        below.
    * - 0x258
-     - \_\_u8
-     - s\_encrypt\_pw\_salt[16]
+     - __u8
+     - s_encrypt_pw_salt[16]
      - Salt for the string2key algorithm for encryption.
    * - 0x268
-     - \_\_le32
-     - s\_lpf\_ino
+     - __le32
+     - s_lpf_ino
      - Inode number of lost+found
    * - 0x26C
-     - \_\_le32
-     - s\_prj\_quota\_inum
+     - __le32
+     - s_prj_quota_inum
      - Inode that tracks project quotas.
    * - 0x270
-     - \_\_le32
-     - s\_checksum\_seed
-     - Checksum seed used for metadata\_csum calculations. This value is
-       crc32c(~0, $orig\_fs\_uuid).
+     - __le32
+     - s_checksum_seed
+     - Checksum seed used for metadata_csum calculations. This value is
+       crc32c(~0, $orig_fs_uuid).
    * - 0x274
-     - \_\_u8
-     - s\_wtime_hi
+     - __u8
+     - s_wtime_hi
      - Upper 8 bits of the s_wtime field.
    * - 0x275
-     - \_\_u8
-     - s\_mtime_hi
+     - __u8
+     - s_mtime_hi
      - Upper 8 bits of the s_mtime field.
    * - 0x276
-     - \_\_u8
-     - s\_mkfs_time_hi
+     - __u8
+     - s_mkfs_time_hi
      - Upper 8 bits of the s_mkfs_time field.
    * - 0x277
-     - \_\_u8
-     - s\_lastcheck_hi
+     - __u8
+     - s_lastcheck_hi
      - Upper 8 bits of the s_lastcheck field.
    * - 0x278
-     - \_\_u8
-     - s\_first_error_time_hi
+     - __u8
+     - s_first_error_time_hi
      - Upper 8 bits of the s_first_error_time field.
    * - 0x279
-     - \_\_u8
-     - s\_last_error_time_hi
+     - __u8
+     - s_last_error_time_hi
      - Upper 8 bits of the s_last_error_time field.
    * - 0x27A
-     - \_\_u8
-     - s\_pad[2]
+     - __u8
+     - s_pad[2]
      - Zero padding.
    * - 0x27C
-     - \_\_le16
-     - s\_encoding
+     - __le16
+     - s_encoding
      - Filename charset encoding.
    * - 0x27E
-     - \_\_le16
-     - s\_encoding_flags
+     - __le16
+     - s_encoding_flags
      - Filename charset encoding flags.
    * - 0x280
-     - \_\_le32
-     - s\_orphan\_file\_inum
+     - __le32
+     - s_orphan_file_inum
      - Orphan file inode number.
    * - 0x284
-     - \_\_le32
-     - s\_reserved[94]
+     - __le32
+     - s_reserved[94]
      - Padding to the end of the block.
    * - 0x3FC
-     - \_\_le32
-     - s\_checksum
+     - __le32
+     - s_checksum
      - Superblock checksum.
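
 As a sketch of reading these on-disk fields (not a replacement for
 dumpe2fs), the ``s_error_count`` field at offset 0x194 can be fetched
 with ``pread()``; the superblock itself is assumed to start 1024 bytes
 into the device or image, per the standard ext4 layout::

     #include <fcntl.h>
     #include <stdint.h>
     #include <stdio.h>
     #include <unistd.h>

     int main(int argc, char **argv)
     {
             uint8_t b[4];
             int fd;

             if (argc < 2)
                     return 1;
             fd = open(argv[1], O_RDONLY);
             /* 1024-byte superblock base plus the 0x194 field offset. */
             if (fd < 0 || pread(fd, b, 4, 1024 + 0x194) != 4) {
                     perror(argv[1]);
                     return 1;
             }
             /* __le32: assemble little-endian independent of host order. */
             printf("s_error_count = %u\n",
                    b[0] | b[1] << 8 |
                    (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24);
             close(fd);
             return 0;
     }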
 
 .. _super_state:
@@ -574,44 +574,44 @@ following:
    * - Value
      - Description
    * - 0x1
-     - Directory preallocation (COMPAT\_DIR\_PREALLOC).
+     - Directory preallocation (COMPAT_DIR_PREALLOC).
    * - 0x2
      - “imagic inodes”. Not clear from the code what this does
-       (COMPAT\_IMAGIC\_INODES).
+       (COMPAT_IMAGIC_INODES).
    * - 0x4
-     - Has a journal (COMPAT\_HAS\_JOURNAL).
+     - Has a journal (COMPAT_HAS_JOURNAL).
    * - 0x8
-     - Supports extended attributes (COMPAT\_EXT\_ATTR).
+     - Supports extended attributes (COMPAT_EXT_ATTR).
    * - 0x10
      - Has reserved GDT blocks for filesystem expansion
-       (COMPAT\_RESIZE\_INODE). Requires RO\_COMPAT\_SPARSE\_SUPER.
+       (COMPAT_RESIZE_INODE). Requires RO_COMPAT_SPARSE_SUPER.
    * - 0x20
-     - Has directory indices (COMPAT\_DIR\_INDEX).
+     - Has directory indices (COMPAT_DIR_INDEX).
    * - 0x40
      - “Lazy BG”. Not in Linux kernel, seems to have been for uninitialized
-       block groups? (COMPAT\_LAZY\_BG)
+       block groups? (COMPAT_LAZY_BG)
    * - 0x80
-     - “Exclude inode”. Not used. (COMPAT\_EXCLUDE\_INODE).
+     - “Exclude inode”. Not used. (COMPAT_EXCLUDE_INODE).
    * - 0x100
      - “Exclude bitmap”. Seems to be used to indicate the presence of
        snapshot-related exclude bitmaps? Not defined in kernel or used in
-       e2fsprogs (COMPAT\_EXCLUDE\_BITMAP).
+       e2fsprogs (COMPAT_EXCLUDE_BITMAP).
    * - 0x200
-     - Sparse Super Block, v2. If this flag is set, the SB field s\_backup\_bgs
+     - Sparse Super Block, v2. If this flag is set, the SB field s_backup_bgs
        points to the two block groups that contain backup superblocks
-       (COMPAT\_SPARSE\_SUPER2).
+       (COMPAT_SPARSE_SUPER2).
    * - 0x400
      - Fast commits supported. Although fast commit blocks are
        backward incompatible, fast commit blocks are not always
        present in the journal. If fast commit blocks are present in
        the journal, JBD2 incompat feature
-       (JBD2\_FEATURE\_INCOMPAT\_FAST\_COMMIT) gets
-       set (COMPAT\_FAST\_COMMIT).
+       (JBD2_FEATURE_INCOMPAT_FAST_COMMIT) gets
+       set (COMPAT_FAST_COMMIT).
    * - 0x1000
      - Orphan file allocated. This is the special file for more efficient
        tracking of unlinked but still open inodes. When there may be any
        entries in the file, we additionally set the proper rocompat feature
-       (RO\_COMPAT\_ORPHAN\_PRESENT).
+       (RO_COMPAT_ORPHAN_PRESENT).
 
 .. _super_incompat:
 
@@ -625,45 +625,45 @@ following:
    * - Value
      - Description
    * - 0x1
-     - Compression (INCOMPAT\_COMPRESSION).
+     - Compression (INCOMPAT_COMPRESSION).
    * - 0x2
-     - Directory entries record the file type. See ext4\_dir\_entry\_2 below
-       (INCOMPAT\_FILETYPE).
+     - Directory entries record the file type. See ext4_dir_entry_2 below
+       (INCOMPAT_FILETYPE).
    * - 0x4
-     - Filesystem needs recovery (INCOMPAT\_RECOVER).
+     - Filesystem needs recovery (INCOMPAT_RECOVER).
    * - 0x8
-     - Filesystem has a separate journal device (INCOMPAT\_JOURNAL\_DEV).
+     - Filesystem has a separate journal device (INCOMPAT_JOURNAL_DEV).
    * - 0x10
      - Meta block groups. See the earlier discussion of this feature
-       (INCOMPAT\_META\_BG).
+       (INCOMPAT_META_BG).
    * - 0x40
-     - Files in this filesystem use extents (INCOMPAT\_EXTENTS).
+     - Files in this filesystem use extents (INCOMPAT_EXTENTS).
    * - 0x80
-     - Enable a filesystem size of 2^64 blocks (INCOMPAT\_64BIT).
+     - Enable a filesystem size of 2^64 blocks (INCOMPAT_64BIT).
    * - 0x100
-     - Multiple mount protection (INCOMPAT\_MMP).
+     - Multiple mount protection (INCOMPAT_MMP).
    * - 0x200
      - Flexible block groups. See the earlier discussion of this feature
-       (INCOMPAT\_FLEX\_BG).
+       (INCOMPAT_FLEX_BG).
    * - 0x400
      - Inodes can be used to store large extended attribute values
-       (INCOMPAT\_EA\_INODE).
+       (INCOMPAT_EA_INODE).
    * - 0x1000
-     - Data in directory entry (INCOMPAT\_DIRDATA). (Not implemented?)
+     - Data in directory entry (INCOMPAT_DIRDATA). (Not implemented?)
    * - 0x2000
      - Metadata checksum seed is stored in the superblock. This feature enables
-       the administrator to change the UUID of a metadata\_csum filesystem
+       the administrator to change the UUID of a metadata_csum filesystem
        while the filesystem is mounted; without it, the checksum definition
-       requires all metadata blocks to be rewritten (INCOMPAT\_CSUM\_SEED).
+       requires all metadata blocks to be rewritten (INCOMPAT_CSUM_SEED).
    * - 0x4000
-     - Large directory >2GB or 3-level htree (INCOMPAT\_LARGEDIR). Prior to
+     - Large directory >2GB or 3-level htree (INCOMPAT_LARGEDIR). Prior to
        this feature, directories could not be larger than 4GiB and could not
        have an htree more than 2 levels deep. If this feature is enabled,
        directories can be larger than 4GiB and have a maximum htree depth of 3.
    * - 0x8000
-     - Data in inode (INCOMPAT\_INLINE\_DATA).
+     - Data in inode (INCOMPAT_INLINE_DATA).
    * - 0x10000
-     - Encrypted inodes are present on the filesystem. (INCOMPAT\_ENCRYPT).
+     - Encrypted inodes are present on the filesystem. (INCOMPAT_ENCRYPT).
 
 .. _super_rocompat:
 
@@ -678,54 +678,54 @@ the following:
      - Description
    * - 0x1
      - Sparse superblocks. See the earlier discussion of this feature
-       (RO\_COMPAT\_SPARSE\_SUPER).
+       (RO_COMPAT_SPARSE_SUPER).
    * - 0x2
      - This filesystem has been used to store a file greater than 2GiB
-       (RO\_COMPAT\_LARGE\_FILE).
+       (RO_COMPAT_LARGE_FILE).
    * - 0x4
-     - Not used in kernel or e2fsprogs (RO\_COMPAT\_BTREE\_DIR).
+     - Not used in kernel or e2fsprogs (RO_COMPAT_BTREE_DIR).
    * - 0x8
      - This filesystem has files whose sizes are represented in units of
        logical blocks, not 512-byte sectors. This implies a very large file
-       indeed! (RO\_COMPAT\_HUGE\_FILE)
+       indeed! (RO_COMPAT_HUGE_FILE)
    * - 0x10
      - Group descriptors have checksums. In addition to detecting corruption,
        this is useful for lazy formatting with uninitialized groups
-       (RO\_COMPAT\_GDT\_CSUM).
+       (RO_COMPAT_GDT_CSUM).
    * - 0x20
      - Indicates that the old ext3 32,000 subdirectory limit no longer applies
-       (RO\_COMPAT\_DIR\_NLINK). A directory's i\_links\_count will be set to 1
+       (RO_COMPAT_DIR_NLINK). A directory's i_links_count will be set to 1
        if it is incremented past 64,999.
    * - 0x40
      - Indicates that large inodes exist on this filesystem
-       (RO\_COMPAT\_EXTRA\_ISIZE).
+       (RO_COMPAT_EXTRA_ISIZE).
    * - 0x80
-     - This filesystem has a snapshot (RO\_COMPAT\_HAS\_SNAPSHOT).
+     - This filesystem has a snapshot (RO_COMPAT_HAS_SNAPSHOT).
    * - 0x100
-     - `Quota <Quota>`__ (RO\_COMPAT\_QUOTA).
+     - `Quota <Quota>`__ (RO_COMPAT_QUOTA).
    * - 0x200
      - This filesystem supports “bigalloc”, which means that file extents are
        tracked in units of clusters (of blocks) instead of blocks
-       (RO\_COMPAT\_BIGALLOC).
+       (RO_COMPAT_BIGALLOC).
    * - 0x400
      - This filesystem supports metadata checksumming.
-       (RO\_COMPAT\_METADATA\_CSUM; implies RO\_COMPAT\_GDT\_CSUM, though
-       GDT\_CSUM must not be set)
+       (RO_COMPAT_METADATA_CSUM; implies RO_COMPAT_GDT_CSUM, though
+       GDT_CSUM must not be set)
    * - 0x800
      - Filesystem supports replicas. This feature is neither in the kernel nor
-       e2fsprogs. (RO\_COMPAT\_REPLICA)
+       e2fsprogs. (RO_COMPAT_REPLICA)
    * - 0x1000
      - Read-only filesystem image; the kernel will not mount this image
        read-write and most tools will refuse to write to the image.
-       (RO\_COMPAT\_READONLY)
+       (RO_COMPAT_READONLY)
    * - 0x2000
-     - Filesystem tracks project quotas. (RO\_COMPAT\_PROJECT)
+     - Filesystem tracks project quotas. (RO_COMPAT_PROJECT)
    * - 0x8000
-     - Verity inodes may be present on the filesystem. (RO\_COMPAT\_VERITY)
+     - Verity inodes may be present on the filesystem. (RO_COMPAT_VERITY)
    * - 0x10000
      - Indicates orphan file may have valid orphan entries and thus we need
        to clean them up when mounting the filesystem
-       (RO\_COMPAT\_ORPHAN\_PRESENT).
+       (RO_COMPAT_ORPHAN_PRESENT).
 
 .. _super_def_hash:
 
@@ -761,36 +761,36 @@ The ``s_default_mount_opts`` field is any combination of the following:
    * - Value
      - Description
    * - 0x0001
-     - Print debugging info upon (re)mount. (EXT4\_DEFM\_DEBUG)
+     - Print debugging info upon (re)mount. (EXT4_DEFM_DEBUG)
    * - 0x0002
      - New files take the gid of the containing directory (instead of the fsgid
-       of the current process). (EXT4\_DEFM\_BSDGROUPS)
+       of the current process). (EXT4_DEFM_BSDGROUPS)
    * - 0x0004
-     - Support userspace-provided extended attributes. (EXT4\_DEFM\_XATTR\_USER)
+     - Support userspace-provided extended attributes. (EXT4_DEFM_XATTR_USER)
    * - 0x0008
-     - Support POSIX access control lists (ACLs). (EXT4\_DEFM\_ACL)
+     - Support POSIX access control lists (ACLs). (EXT4_DEFM_ACL)
    * - 0x0010
-     - Do not support 32-bit UIDs. (EXT4\_DEFM\_UID16)
+     - Do not support 32-bit UIDs. (EXT4_DEFM_UID16)
    * - 0x0020
      - All data and metadata are committed to the journal.
-       (EXT4\_DEFM\_JMODE\_DATA)
+       (EXT4_DEFM_JMODE_DATA)
    * - 0x0040
      - All data are flushed to the disk before metadata are committed to the
-       journal. (EXT4\_DEFM\_JMODE\_ORDERED)
+       journal. (EXT4_DEFM_JMODE_ORDERED)
    * - 0x0060
      - Data ordering is not preserved; data may be written after the metadata
-       has been written. (EXT4\_DEFM\_JMODE\_WBACK)
+       has been written. (EXT4_DEFM_JMODE_WBACK)
    * - 0x0100
-     - Disable write flushes. (EXT4\_DEFM\_NOBARRIER)
+     - Disable write flushes. (EXT4_DEFM_NOBARRIER)
    * - 0x0200
      - Track which blocks in a filesystem are metadata and therefore should not
        be used as data blocks. This option will be enabled by default on 3.18,
-       hopefully. (EXT4\_DEFM\_BLOCK\_VALIDITY)
+       hopefully. (EXT4_DEFM_BLOCK_VALIDITY)
    * - 0x0400
      - Enable DISCARD support, where the storage device is told about blocks
-       becoming unused. (EXT4\_DEFM\_DISCARD)
+       becoming unused. (EXT4_DEFM_DISCARD)
    * - 0x0800
-     - Disable delayed allocation. (EXT4\_DEFM\_NODELALLOC)
+     - Disable delayed allocation. (EXT4_DEFM_NODELALLOC)
 
 .. _super_flags:
 
@@ -820,12 +820,12 @@ The ``s_encrypt_algos`` list can contain any of the following:
    * - Value
      - Description
    * - 0
-     - Invalid algorithm (ENCRYPTION\_MODE\_INVALID).
+     - Invalid algorithm (ENCRYPTION_MODE_INVALID).
    * - 1
-     - 256-bit AES in XTS mode (ENCRYPTION\_MODE\_AES\_256\_XTS).
+     - 256-bit AES in XTS mode (ENCRYPTION_MODE_AES_256_XTS).
    * - 2
-     - 256-bit AES in GCM mode (ENCRYPTION\_MODE\_AES\_256\_GCM).
+     - 256-bit AES in GCM mode (ENCRYPTION_MODE_AES_256_GCM).
    * - 3
-     - 256-bit AES in CBC mode (ENCRYPTION\_MODE\_AES\_256\_CBC).
+     - 256-bit AES in CBC mode (ENCRYPTION_MODE_AES_256_CBC).
 
 Total size of the superblock is 1024 bytes.
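
 The ``s_checksum_seed`` definition above, crc32c(~0, $orig_fs_uuid), can
 be reproduced in userspace. The sketch below is illustrative only: it
 uses a bit-at-a-time CRC-32C and a made-up UUID, while the kernel uses a
 table-driven implementation; note that, as with the kernel's crc32c(),
 no final bit inversion is applied::

     #include <stddef.h>
     #include <stdint.h>
     #include <stdio.h>

     /* Bit-at-a-time CRC-32C (Castagnoli), reflected poly 0x82F63B78. */
     static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
     {
             while (len--) {
                     crc ^= *buf++;
                     for (int k = 0; k < 8; k++)
                             crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78 : 0);
             }
             return crc;
     }

     int main(void)
     {
             /* Hypothetical filesystem UUID, for illustration only. */
             const uint8_t uuid[16] = {
                     0xde, 0xad, 0xbe, 0xef, 0x00, 0x11, 0x22, 0x33,
                     0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb,
             };

             /* Seed with all-ones; the field stores the raw CRC state. */
             printf("s_checksum_seed = 0x%08x\n",
                    crc32c(~0u, uuid, sizeof(uuid)));
             return 0;
     }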
index b854bb4..6b2bac8 100644 (file)
@@ -129,18 +129,24 @@ yet. Bug reports are always welcome at the issue tracker below!
    * - arm64
      - Supported
      - ``LLVM=1``
+   * - hexagon
+     - Maintained
+     - ``LLVM=1``
    * - mips
      - Maintained
-     - ``CC=clang``
+     - ``LLVM=1``
    * - powerpc
      - Maintained
      - ``CC=clang``
    * - riscv
      - Maintained
-     - ``CC=clang``
+     - ``LLVM=1``
    * - s390
      - Maintained
      - ``CC=clang``
+   * - um (User Mode)
+     - Maintained
+     - ``LLVM=1``
    * - x86
      - Supported
      - ``LLVM=1``
index 2bf40ad..216b3f3 100644 (file)
@@ -45,10 +45,12 @@ Name              Alias           Usage               Preserved
 ``$r23``-``$r31`` ``$s0``-``$s8`` Static registers    Yes
 ================= =============== =================== ============
 
-Note: The register ``$r21`` is reserved in the ELF psABI, but used by the Linux
-kernel for storing the percpu base address. It normally has no ABI name, but is
-called ``$u0`` in the kernel. You may also see ``$v0`` or ``$v1`` in some old code,
-however they are deprecated aliases of ``$a0`` and ``$a1`` respectively.
+.. Note::
+    The register ``$r21`` is reserved in the ELF psABI, but used by the Linux
+    kernel for storing the percpu base address. It normally has no ABI name,
+    but is called ``$u0`` in the kernel. You may also see ``$v0`` or ``$v1``
+    in some old code, however they are deprecated aliases of ``$a0`` and
+    ``$a1`` respectively.
 
 FPRs
 ----
@@ -69,8 +71,9 @@ Name              Alias              Usage               Preserved
 ``$f24``-``$f31`` ``$fs0``-``$fs7``  Static registers    Yes
 ================= ================== =================== ============
 
-Note: You may see ``$fv0`` or ``$fv1`` in some old code, however they are deprecated
-aliases of ``$fa0`` and ``$fa1`` respectively.
+.. Note::
+    You may see ``$fv0`` or ``$fv1`` in some old code, however they are
+    deprecated aliases of ``$fa0`` and ``$fa1`` respectively.
 
 VRs
 ----
index 8d88f7a..7988f41 100644 (file)
@@ -145,12 +145,16 @@ Documentation of Loongson's LS7A chipset:
 
   https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-7A1000-usermanual-2.00-EN.pdf (in English)
 
-Note: CPUINTC is CSR.ECFG/CSR.ESTAT and its interrupt controller described
-in Section 7.4 of "LoongArch Reference Manual, Vol 1"; LIOINTC is "Legacy I/O
-Interrupts" described in Section 11.1 of "Loongson 3A5000 Processor Reference
-Manual"; EIOINTC is "Extended I/O Interrupts" described in Section 11.2 of
-"Loongson 3A5000 Processor Reference Manual"; HTVECINTC is "HyperTransport
-Interrupts" described in Section 14.3 of "Loongson 3A5000 Processor Reference
-Manual"; PCH-PIC/PCH-MSI is "Interrupt Controller" described in Section 5 of
-"Loongson 7A1000 Bridge User Manual"; PCH-LPC is "LPC Interrupts" described in
-Section 24.3 of "Loongson 7A1000 Bridge User Manual".
+.. Note::
+    - CPUINTC is CSR.ECFG/CSR.ESTAT and its interrupt controller described
+      in Section 7.4 of "LoongArch Reference Manual, Vol 1";
+    - LIOINTC is "Legacy I/O Interrupts" described in Section 11.1 of
+      "Loongson 3A5000 Processor Reference Manual";
+    - EIOINTC is "Extended I/O Interrupts" described in Section 11.2 of
+      "Loongson 3A5000 Processor Reference Manual";
+    - HTVECINTC is "HyperTransport Interrupts" described in Section 14.3 of
+      "Loongson 3A5000 Processor Reference Manual";
+    - PCH-PIC/PCH-MSI is "Interrupt Controller" described in Section 5 of
+      "Loongson 7A1000 Bridge User Manual";
+    - PCH-LPC is "LPC Interrupts" described in Section 24.3 of
+      "Loongson 7A1000 Bridge User Manual".
index 43be378..53a18ff 100644 (file)
@@ -780,6 +780,17 @@ peer_notif_delay
        value is 0 which means to match the value of the link monitor
        interval.
 
+prio
+       Slave priority. A higher number means higher priority.
+       The primary slave has the highest priority. This option also
+       follows the primary_reselect rules.
+
+       This option can only be configured via netlink, and is only valid
+       for active-backup (1), balance-tlb (5) and balance-alb (6) modes.
+       The valid range is that of a signed 32 bit integer.
+
+       The default value is 0.
+
 primary
 
        A string (eth0, eth2, etc) specifying which slave is the
index f34cb0e..ebc822e 100644 (file)
@@ -168,7 +168,7 @@ reflect the correct [#f1]_ traffic on the node the loopback of the sent
 data has to be performed right after a successful transmission. If
 the CAN network interface is not capable of performing the loopback for
 some reason the SocketCAN core can do this task as a fallback solution.
-See :ref:`socketcan-local-loopback1` for details (recommended).
+See :ref:`socketcan-local-loopback2` for details (recommended).
 
 The loopback functionality is enabled by default to reflect standard
 networking behaviour for CAN applications. Due to some requests from
diff --git a/Documentation/networking/device_drivers/can/can327.rst b/Documentation/networking/device_drivers/can/can327.rst
new file mode 100644 (file)
index 0000000..b87bfbe
--- /dev/null
@@ -0,0 +1,331 @@
+.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
+
+can327: ELM327 driver for Linux SocketCAN
+==========================================
+
+Authors
+--------
+
+Max Staudt <max@enpas.org>
+
+
+
+Motivation
+-----------
+
+This driver aims to lower the initial cost for hackers interested in
+working with CAN buses.
+
+CAN adapters are expensive, few, and far between.
+ELM327 interfaces are cheap and plentiful.
+Let's use ELM327s as CAN adapters.
+
+
+
+Introduction
+-------------
+
+This driver is an effort to turn abundant ELM327-based OBD interfaces
+into full-fledged (as far as possible) CAN interfaces.
+
+Since the ELM327 was never meant to be a stand-alone CAN controller,
+the driver has to switch between its modes as quickly as possible in
+order to fake full-duplex operation.
+
+As such, can327 is a best effort driver. However, this is more than
+enough to implement simple request-response protocols (such as OBD II),
+and to monitor broadcast messages on a bus (such as in a vehicle).
+
+Most ELM327s come as nondescript serial devices, attached via USB or
+Bluetooth. The driver cannot recognize them by itself, and as such it
+is up to the user to attach it in form of a TTY line discipline
+(similar to PPP, SLIP, slcan, ...).
+
+This driver is meant for ELM327 versions 1.4b and up; see below for
+known limitations in older controllers and clones.
+
+
+
+Data sheet
+-----------
+
+The official data sheets can be found at ELM electronics' home page:
+
+  https://www.elmelectronics.com/
+
+
+
+How to attach the line discipline
+----------------------------------
+
+Every ELM327 chip is factory programmed to operate at a serial setting
+of 38400 baud, 8 data bits, no parity, 1 stop bit.
+
+If you have kept this default configuration, the line discipline can
+be attached on a command prompt as follows::
+
+    sudo ldattach \
+           --debug \
+           --speed 38400 \
+           --eightbits \
+           --noparity \
+           --onestopbit \
+           --iflag -ICRNL,INLCR,-IXOFF \
+           30 \
+           /dev/ttyUSB0
+
+To change the ELM327's serial settings, please refer to its data
+sheet. This needs to be done before attaching the line discipline.
+
+Once the ldisc is attached, the CAN interface starts out unconfigured.
+Set the speed before starting it::
+
+    # The interface needs to be down to change parameters
+    sudo ip link set can0 down
+    sudo ip link set can0 type can bitrate 500000
+    sudo ip link set can0 up
+
+500000 bit/s is a common rate for OBD-II diagnostics.
+If you're connecting straight to a car's OBD port, this is the speed
+that most cars (but not all!) expect.
+
+After this, you can set out as usual with candump, cansniffer, etc.
+
+
+
+How to check the controller version
+------------------------------------
+
+Use a terminal program to attach to the controller.
+
+After issuing the "``AT WS``" command, the controller will respond with
+its version::
+
+    >AT WS
+
+
+    ELM327 v1.4b
+
+    >
+
+Note that clones may claim to be any version they like; the reported
+version is not indicative of their actual feature set.
+
+
+
+
+Communication example
+----------------------
+
+This is a short and incomplete introduction on how to talk to an ELM327.
+It is here to guide understanding of the controller's and the driver's
+limitations (listed below), as well as to aid manual testing.
+
+
+The ELM327 has two modes:
+
+- Command mode
+- Reception mode
+
+In command mode, it expects one command per line, terminated by CR.
+By default, the prompt is a "``>``", after which a command can be
+entered::
+
+    >ATE1
+    OK
+    >
+
+The init script in the driver switches off several configuration options
+that are only meaningful in the original OBD scenario the chip is meant
+for, and are actually a hindrance for can327.
+
+
+When a command is not recognized, such as by an older version of the
+ELM327, a question mark is printed as a response instead of OK::
+
+    >ATUNKNOWN
+    ?
+    >
+
+At present, can327 does not evaluate this response. See the section
+below on known limitations for details.
+
+
+When a CAN frame is to be sent, the target address is configured, after
+which the frame is sent as a command that consists of the data's hex
+dump::
+
+    >ATSH123
+    OK
+    >DEADBEEF12345678
+    OK
+    >
+
+The above interaction sends the SFF frame "``DE AD BE EF 12 34 56 78``"
+with (11 bit) CAN ID ``0x123``.
+For this to function, the controller must be configured for SFF sending
+mode (using "``AT PB``", see code or datasheet).
+
+
+Once a frame has been sent and wait-for-reply mode is on (``ATR1``,
+configured when ``listen-only=off``), or when the reply timeout expires
+and the driver sets the controller into monitoring mode (``ATMA``),
+the ELM327 will send one line for each received CAN frame, consisting
+of CAN ID, DLC, and data::
+
+    123 8 DEADBEEF12345678
+
+For EFF (29 bit) CAN frames, the address format is slightly different,
+which can327 uses to tell the two apart::
+
+    12 34 56 78 8 DEADBEEF12345678
+
+The ELM327 will receive both SFF and EFF frames - the current CAN
+config (``ATPB``) does not matter.
+
+
+If the ELM327's internal UART sending buffer runs full, it will abort
+the monitoring mode, print "BUFFER FULL" and drop back into command
+mode. Note that in this case, unlike with other error messages, the
+error message may appear on the same line as the last (usually
+incomplete) data frame::
+
+    12 34 56 78 8 DEADBEEF123 BUFFER FULL
+
+
+
+Known limitations of the controller
+------------------------------------
+
+- Clone devices ("v1.5" and others)
+
+  Sending RTR frames is not supported; they will be dropped silently.
+
+  A received RTR frame with DLC 8 will appear to be a regular frame
+  carrying the last received frame's DLC and payload.
+
+  "``AT CSM``" (CAN Silent Monitoring, i.e. don't send CAN ACKs) is
+  not supported, and is hard coded to ON. Thus, frames are not ACKed
+  while listening: "``AT MA``" (Monitor All) will always be "silent".
+  However, immediately after sending a frame, the ELM327 will be in
+  "receive reply" mode, in which it *does* ACK any received frames.
+  Once the bus goes silent, or an error occurs (such as BUFFER FULL),
+  or the receive reply timeout runs out, the ELM327 will end reply
+  reception mode on its own and can327 will fall back to "``AT MA``"
+  in order to keep monitoring the bus.
+
+  Other limitations may apply, depending on the clone and the quality
+  of its firmware.
+
+
+- All versions
+
+  No full duplex operation is supported. The driver will switch
+  between input/output mode as quickly as possible.
+
+  The length of outgoing RTR frames cannot be set. In fact, some
+  clones (tested with one identifying as "``v1.5``") are unable to
+  send RTR frames at all.
+
+  We don't have a way to get real-time notifications on CAN errors.
+  While there is a command (``AT CS``) to retrieve some basic stats,
+  we don't poll it as it would force us to interrupt reception mode.
+
+
+- Versions prior to 1.4b
+
+  These versions do not send CAN ACKs when in monitoring mode (AT MA).
+  However, they do send ACKs while waiting for a reply immediately
+  after sending a frame. The driver maximizes this time to make the
+  controller as useful as possible.
+
+  Starting with version 1.4b, the ELM327 supports the "``AT CSM``"
+  command, and the "listen-only" CAN option will take effect.
+
+
+- Versions prior to 1.4
+
+  These chips do not support the "``AT PB``" command, and thus cannot
+  change bitrate or SFF/EFF mode on-the-fly. This will have to be
+  programmed by the user before attaching the line discipline. See the
+  data sheet for details.
+
+
+- Versions prior to 1.3
+
+  These chips cannot be used at all with can327. They do not support
+  the "``AT D1``" command, which is necessary to avoid parsing conflicts
+  on incoming data, as well as distinction of RTR frame lengths.
+
+  Specifically, this allows for easy distinction of SFF and EFF
+  frames, and for checking whether frames are complete. While it is possible
+  to deduce the type and length from the length of the line the ELM327
+  sends us, this method fails when the ELM327's UART output buffer
+  overruns. It may abort sending in the middle of the line, which will
+  then be mistaken for something else.
+
+
+
+Known limitations of the driver
+--------------------------------
+
+- No 8/7 timing.
+
+  The ELM327 can only set CAN bitrates of the form 500000/n, where n is
+  an integer divisor. However, there is an exception: with a separate
+  flag, it may set the speed to be 8/7 of the speed indicated by the
+  divisor. This mode is not currently implemented. (A small numeric
+  sketch of the plain divisor rates follows this list.)
+
+- No evaluation of command responses.
+
+  The ELM327 will reply with OK when a command is understood, and with ?
+  when it is not. The driver does not currently check this, and simply
+  assumes that the chip understands every command.
+  The driver is built such that functionality degrades gracefully
+  nevertheless. See the section on known limitations of the controller.
+
+- No use of hardware CAN ID filtering
+
+  An ELM327's UART sending buffer will easily overflow on heavy CAN bus
+  load, resulting in the "``BUFFER FULL``" message. Using the hardware
+  filters available through "``AT CF xxx``" and "``AT CM xxx``" would be
+  helpful here, however SocketCAN does not currently provide a facility
+  to make use of such hardware features.
+
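+As an illustration only (this is not driver code), the plain divisor
+rates reachable without the 8/7 flag can be enumerated like this::
+
+    #include <stdio.h>
+
+    int main(void)
+    {
+            /* Rates of the form 500000/n for small integer divisors. */
+            for (unsigned n = 1; n <= 8; n++)
+                    printf("n=%u -> %.0f bit/s\n", n, 500000.0 / n);
+            return 0;
+    }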
+
+
+Rationale behind the chosen configuration
+------------------------------------------
+
+``AT E1``
+  Echo on
+
+  We need this to be able to get a prompt reliably.
+
+``AT S1``
+  Spaces on
+
+  We need this to distinguish 11/29 bit CAN addresses received.
+
+  Note:
+  We can usually do this using the line length (odd/even),
+  but this fails if the line is not transmitted fully to
+  the host (BUFFER FULL).
+
+``AT D1``
+  DLC on
+
+  We need this to tell the "length" of RTR frames.
+
+
+
+A note on CAN bus termination
+------------------------------
+
+Your adapter may have resistors soldered in which are meant to terminate
+the bus. This is correct when it is plugged into an OBD-II socket, but
+not helpful when trying to tap into the middle of an existing CAN bus.
+
+If communications don't work with the adapter connected, check for the
+termination resistors on its PCB and try removing them.
index 0c3cc66..6a8a4f7 100644 (file)
@@ -10,6 +10,7 @@ Contents:
 .. toctree::
    :maxdepth: 2
 
+   can327
    ctu/ctucanfd-driver
    freescale/flexcan
 
index 4e06684..7f17771 100644 (file)
@@ -42,7 +42,6 @@ Contents:
    mellanox/mlx5
    microsoft/netvsc
    neterion/s2io
-   neterion/vxge
    netronome/nfp
    pensando/ionic
    smsc/smc9
@@ -52,6 +51,7 @@ Contents:
    ti/am65_nuss_cpsw_switchdev
    ti/tlan
    toshiba/spider_net
+   wangxun/txgbe
 
 .. only::  subproject and html
 
diff --git a/Documentation/networking/device_drivers/ethernet/neterion/vxge.rst b/Documentation/networking/device_drivers/ethernet/neterion/vxge.rst
deleted file mode 100644 (file)
index 589c6b1..0000000
+++ /dev/null
@@ -1,115 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0
-
-==============================================================================
-Neterion's (Formerly S2io) X3100 Series 10GbE PCIe Server Adapter Linux driver
-==============================================================================
-
-.. Contents
-
-  1) Introduction
-  2) Features supported
-  3) Configurable driver parameters
-  4) Troubleshooting
-
-1. Introduction
-===============
-
-This Linux driver supports all Neterion's X3100 series 10 GbE PCIe I/O
-Virtualized Server adapters.
-
-The X3100 series supports four modes of operation, configurable via
-firmware:
-
-       - Single function mode
-       - Multi function mode
-       - SRIOV mode
-       - MRIOV mode
-
-The functions share a 10GbE link and the pci-e bus, but hardly anything else
-inside the ASIC. Features like independent hw reset, statistics, bandwidth/
-priority allocation and guarantees, GRO, TSO, interrupt moderation etc are
-supported independently on each function.
-
-(See below for a complete list of features supported for both IPv4 and IPv6)
-
-2. Features supported
-=====================
-
-i)   Single function mode (up to 17 queues)
-
-ii)  Multi function mode (up to 17 functions)
-
-iii) PCI-SIG's I/O Virtualization
-
-       - Single Root mode: v1.0 (up to 17 functions)
-       - Multi-Root mode: v1.0 (up to 17 functions)
-
-iv)  Jumbo frames
-
-       X3100 Series supports MTU up to 9600 bytes, modifiable using
-       ip command.
-
-v)   Offloads supported: (Enabled by default)
-
-       - Checksum offload (TCP/UDP/IP) on transmit and receive paths
-       - TCP Segmentation Offload (TSO) on transmit path
-       - Generic Receive Offload (GRO) on receive path
-
-vi)  MSI-X: (Enabled by default)
-
-       Resulting in noticeable performance improvement (up to 7% on certain
-       platforms).
-
-vii) NAPI: (Enabled by default)
-
-       For better Rx interrupt moderation.
-
-viii)RTH (Receive Traffic Hash): (Enabled by default)
-
-       Receive side steering for better scaling.
-
-ix)  Statistics
-
-       Comprehensive MAC-level and software statistics displayed using
-       "ethtool -S" option.
-
-x)   Multiple hardware queues: (Enabled by default)
-
-       Up to 17 hardware based transmit and receive data channels, with
-       multiple steering options (transmit multiqueue enabled by default).
-
-3) Configurable driver parameters:
-----------------------------------
-
-i)  max_config_dev
-       Specifies maximum device functions to be enabled.
-
-       Valid range: 1-8
-
-ii) max_config_port
-       Specifies number of ports to be enabled.
-
-       Valid range: 1,2
-
-       Default: 1
-
-iii) max_config_vpath
-       Specifies maximum VPATH(s) configured for each device function.
-
-       Valid range: 1-17
-
-iv) vlan_tag_strip
-       Enables/disables vlan tag stripping from all received tagged frames that
-       are not replicated at the internal L2 switch.
-
-       Valid range: 0,1 (disabled, enabled respectively)
-
-       Default: 1
-
-v)  addr_learn_en
-       Enable learning the mac address of the guest OS interface in
-       virtualization environment.
-
-       Valid range: 0,1 (disabled, enabled respectively)
-
-       Default: 0
diff --git a/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst b/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst
new file mode 100644 (file)
index 0000000..eaa87db
--- /dev/null
@@ -0,0 +1,20 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================================================
+Linux Base Driver for WangXun(R) 10 Gigabit PCI Express Adapters
+================================================================
+
+WangXun 10 Gigabit Linux driver.
+Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd.
+
+
+Contents
+========
+
+- Support
+
+
+Support
+=======
+If you have any problems, contact the WangXun support team via
+support@trustnetic.com and Cc: netdev.
index 9f41961..4c8bbf5 100644 (file)
@@ -202,6 +202,12 @@ neigh/default/unres_qlen - INTEGER
 
        Default: 101
 
+neigh/default/interval_probe_time_ms - INTEGER
+       The probe interval, in milliseconds, for neighbor entries with the
+       NTF_MANAGED flag. The minimum value is 1.
+
+       Default: 5000
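+
+       As a sketch, the interval can also be changed from C by writing
+       the procfs file (the ipv4 instance of the knob is assumed; ipv6
+       has its own copy)::
+
+           #include <stdio.h>
+
+           int main(void)
+           {
+                   FILE *f = fopen("/proc/sys/net/ipv4/neigh/default/"
+                                   "interval_probe_time_ms", "w");
+
+                   if (!f) {
+                           perror("fopen");
+                           return 1;
+                   }
+                   fprintf(f, "2000");  /* probe every 2 seconds */
+                   return fclose(f) ? 1 : 0;
+           }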
+
 mtu_expires - INTEGER
        Time, in seconds, that cached PMTU information is kept.
 
index be8e10c..7a66438 100644 (file)
@@ -239,6 +239,19 @@ for the original TCP transmission and TCP retransmissions. To the receiver
 this will look like TLS records had been tampered with and will result
 in record authentication failures.
 
+TLS_RX_EXPECT_NO_PAD
+~~~~~~~~~~~~~~~~~~~~
+
+TLS 1.3 only. Expect the sender to not pad records. This allows the data
+to be decrypted directly into user space buffers with TLS 1.3.
+
+This optimization is safe to enable only if the remote end is trusted;
+otherwise it is an attack vector that can double the TLS processing cost.
+
+If a decrypted record turns out to have been padded, or is not a data
+record, it will be decrypted again into a kernel buffer without zero copy.
+Such events are counted in the ``TlsDecryptRetry`` statistic.
+
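+As a sketch of enabling it (``TLS_RX_EXPECT_NO_PAD`` requires new enough
+kernel headers, ``SOL_TLS`` is assumed to be 282 where headers lack it,
+and the socket must already have the ``tls`` ULP with RX crypto state
+installed)::
+
+    #include <linux/tls.h>
+    #include <sys/socket.h>
+
+    #ifndef SOL_TLS
+    #define SOL_TLS 282
+    #endif
+
+    static int expect_no_pad(int fd)
+    {
+            int one = 1;
+
+            /* Only safe when the peer is trusted not to pad records. */
+            return setsockopt(fd, SOL_TLS, TLS_RX_EXPECT_NO_PAD,
+                              &one, sizeof(one));
+    }
+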
 Statistics
 ==========
 
@@ -264,3 +277,8 @@ TLS implementation exposes the following per-namespace statistics
 
 - ``TlsDeviceRxResync`` -
   number of RX resyncs sent to NICs handling cryptography
+
+- ``TlsDecryptRetry`` -
+  number of RX records which had to be re-decrypted due to
+  ``TLS_RX_EXPECT_NO_PAD`` mis-prediction. Note that this counter will
+  also increment for non-data records.
index c456b52..d140070 100644 (file)
@@ -6,6 +6,15 @@
 netdev FAQ
 ==========
 
+tl;dr
+-----
+
+ - designate your patch to a tree - ``[PATCH net]`` or ``[PATCH net-next]``
+ - for fixes the ``Fixes:`` tag is required, regardless of the tree
+ - don't post large series (> 15 patches), break them up
+ - don't repost your patches within one 24h period
+ - reverse xmas tree
+
 What is netdev?
 ---------------
 It is a mailing list for all network-related Linux stuff.  This
@@ -136,6 +145,20 @@ it to the maintainer to figure out what is the most recent and current
 version that should be applied. If there is any doubt, the maintainer
 will reply and ask what should be done.
 
+How do I divide my work into patches?
+-------------------------------------
+
+Put yourself in the shoes of the reviewer. Each patch is read separately
+and therefore should constitute a comprehensible step towards your stated
+goal.
+
+Avoid sending series longer than 15 patches. Larger series take longer
+to review, as reviewers will defer looking at them until they find a large
+chunk of time. A small series can be reviewed in a short time, so maintainers
+just do it. As a result, a sequence of smaller series gets merged quicker and
+with better review coverage. Re-posting large series also increases the
+mailing list traffic.
+
 I made changes to only a few patches in a patch series should I resend only those changed?
 ------------------------------------------------------------------------------------------
 No, please resend the entire patch series and make sure you do number your
@@ -183,6 +206,19 @@ it is requested that you make it look like this::
    * another line of text
    */
 
+What is "reverse xmas tree"?
+----------------------------
+
+Netdev has a convention for ordering local variables in functions.
+Order the variable declaration lines longest to shortest, e.g.::
+
+  struct scatterlist *sg;
+  struct sk_buff *skb;
+  int err, i;
+
+If there are dependencies between the variables that prevent the ordering,
+move the initialization out of line.
+
 I am working in existing code which uses non-standard formatting. Which formatting should I use?
 ------------------------------------------------------------------------------------------------
 Make your code follow the most recent guidelines, so that eventually all code
index e31a1a9..11686ee 100644 (file)
@@ -46,10 +46,11 @@ LA64中每个寄存器为64位宽。 ``$r0`` 的内容总是固定为0,而其
 ``$r23``-``$r31`` ``$s0``-``$s8`` 静态寄存器          是
 ================= =============== =================== ==========
 
-注意:``$r21``寄存器在ELF psABI中保留未使用,但是在Linux内核用于保存每CPU
-变量基地址。该寄存器没有ABI命名,不过在内核中称为``$u0``。在一些遗留代码
-中有时可能见到``$v0``和``$v1``,它们是``$a0``和``$a1``的别名,属于已经废弃
-的用法。
+.. note::
+    注意: ``$r21`` 寄存器在ELF psABI中保留未使用,但是在Linux内核用于保
+    存每CPU变量基地址。该寄存器没有ABI命名,不过在内核中称为 ``$u0`` 。在
+    一些遗留代码中有时可能见到 ``$v0`` 和 ``$v1`` ,它们是 ``$a0`` 和
+    ``$a1`` 的别名,属于已经废弃的用法。
 
 浮点寄存器
 ----------
@@ -68,8 +69,9 @@ LA64中每个寄存器为64位宽。 ``$r0`` 的内容总是固定为0,而其
 ``$f24``-``$f31`` ``$fs0``-``$fs7``  静态寄存器          是
 ================= ================== =================== ==========
 
-注意:在一些遗留代码中有时可能见到 ``$v0`` 和 ``$v1`` ,它们是 ``$a0``
-和 ``$a1`` 的别名,属于已经废弃的用法。
+.. note::
+    注意:在一些遗留代码中有时可能见到 ``$v0`` 和 ``$v1`` ,它们是
+    ``$a0`` 和 ``$a1`` 的别名,属于已经废弃的用法。
 
 
 向量寄存器
index 2a4c3ad..fb5d23b 100644 (file)
@@ -147,9 +147,11 @@ PCH-LPC::
 
   https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-7A1000-usermanual-2.00-EN.pdf (英文版)
 
-注:CPUINTC即《龙芯架构参考手册卷一》第7.4节所描述的CSR.ECFG/CSR.ESTAT寄存器及其中断
-控制逻辑;LIOINTC即《龙芯3A5000处理器使用手册》第11.1节所描述的“传统I/O中断”;EIOINTC
-即《龙芯3A5000处理器使用手册》第11.2节所描述的“扩展I/O中断”;HTVECINTC即《龙芯3A5000
-处理器使用手册》第14.3节所描述的“HyperTransport中断”;PCH-PIC/PCH-MSI即《龙芯7A1000桥
-片用户手册》第5章所描述的“中断控制器”;PCH-LPC即《龙芯7A1000桥片用户手册》第24.3节所
-描述的“LPC中断”。
+.. note::
+    - CPUINTC:即《龙芯架构参考手册卷一》第7.4节所描述的CSR.ECFG/CSR.ESTAT寄存器及其
+      中断控制逻辑;
+    - LIOINTC:即《龙芯3A5000处理器使用手册》第11.1节所描述的“传统I/O中断”;
+    - EIOINTC:即《龙芯3A5000处理器使用手册》第11.2节所描述的“扩展I/O中断”;
+    - HTVECINTC:即《龙芯3A5000处理器使用手册》第14.3节所描述的“HyperTransport中断”;
+    - PCH-PIC/PCH-MSI:即《龙芯7A1000桥片用户手册》第5章所描述的“中断控制器”;
+    - PCH-LPC:即《龙芯7A1000桥片用户手册》第24.3节所描述的“LPC中断”。
index c742de1..b9d5253 100644 (file)
@@ -120,7 +120,8 @@ Testing
   unpoison-pfn
        Software-unpoison page at PFN echoed into this file. This way
        a page can be reused again.  This only works for Linux
-       injected failures, not for real memory failures.
+       injected failures, not for real memory failures. Once any hardware
+       memory failure happens, this feature is disabled.
 
   Note these injection interfaces are not stable and might change between
   kernel versions
index 1da8a8d..14b0749 100644 (file)
@@ -426,6 +426,7 @@ ACPI VIOT DRIVER
 M:     Jean-Philippe Brucker <jean-philippe@linaro.org>
 L:     linux-acpi@vger.kernel.org
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 S:     Maintained
 F:     drivers/acpi/viot.c
 F:     include/linux/acpi_viot.h
@@ -959,6 +960,7 @@ AMD IOMMU (AMD-VI)
 M:     Joerg Roedel <joro@8bytes.org>
 R:     Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 S:     Maintained
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
 F:     drivers/iommu/amd/
@@ -2466,6 +2468,7 @@ ARM/NXP S32G ARCHITECTURE
 M:     Chester Lin <clin@suse.com>
 R:     Andreas Färber <afaerber@suse.de>
 R:     Matthias Brugger <mbrugger@suse.com>
+R:     NXP S32 Linux Team <s32@nxp.com>
 L:     linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:     Maintained
 F:     arch/arm64/boot/dts/freescale/s32g*.dts*
@@ -2536,6 +2539,7 @@ W:        http://www.armlinux.org.uk/
 ARM/QUALCOMM SUPPORT
 M:     Andy Gross <agross@kernel.org>
 M:     Bjorn Andersson <bjorn.andersson@linaro.org>
+R:     Konrad Dybcio <konrad.dybcio@somainline.org>
 L:     linux-arm-msm@vger.kernel.org
 S:     Maintained
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git
@@ -3613,16 +3617,18 @@ S:      Maintained
 F:     Documentation/devicetree/bindings/iio/accel/bosch,bma400.yaml
 F:     drivers/iio/accel/bma400*
 
-BPF (Safe dynamic programs and tools)
+BPF [GENERAL] (Safe Dynamic Programs and Tools)
 M:     Alexei Starovoitov <ast@kernel.org>
 M:     Daniel Borkmann <daniel@iogearbox.net>
 M:     Andrii Nakryiko <andrii@kernel.org>
-R:     Martin KaFai Lau <kafai@fb.com>
-R:     Song Liu <songliubraving@fb.com>
+R:     Martin KaFai Lau <martin.lau@linux.dev>
+R:     Song Liu <song@kernel.org>
 R:     Yonghong Song <yhs@fb.com>
 R:     John Fastabend <john.fastabend@gmail.com>
 R:     KP Singh <kpsingh@kernel.org>
-L:     netdev@vger.kernel.org
+R:     Stanislav Fomichev <sdf@google.com>
+R:     Hao Luo <haoluo@google.com>
+R:     Jiri Olsa <jolsa@kernel.org>
 L:     bpf@vger.kernel.org
 S:     Supported
 W:     https://bpf.io/
@@ -3654,21 +3660,17 @@ F:      scripts/pahole-version.sh
 F:     tools/bpf/
 F:     tools/lib/bpf/
 F:     tools/testing/selftests/bpf/
-N:     bpf
-K:     bpf
 
 BPF JIT for ARM
 M:     Shubham Bansal <illusionist.neo@gmail.com>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
-S:     Maintained
+S:     Odd Fixes
 F:     arch/arm/net/
 
 BPF JIT for ARM64
 M:     Daniel Borkmann <daniel@iogearbox.net>
 M:     Alexei Starovoitov <ast@kernel.org>
 M:     Zi Shen Lim <zlim.lnx@gmail.com>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
 S:     Supported
 F:     arch/arm64/net/
@@ -3676,29 +3678,26 @@ F:      arch/arm64/net/
 BPF JIT for MIPS (32-BIT AND 64-BIT)
 M:     Johan Almbladh <johan.almbladh@anyfinetworks.com>
 M:     Paul Burton <paulburton@kernel.org>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
 S:     Maintained
 F:     arch/mips/net/
 
 BPF JIT for NFP NICs
 M:     Jakub Kicinski <kuba@kernel.org>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
-S:     Supported
+S:     Odd Fixes
 F:     drivers/net/ethernet/netronome/nfp/bpf/
 
 BPF JIT for POWERPC (32-BIT AND 64-BIT)
 M:     Naveen N. Rao <naveen.n.rao@linux.ibm.com>
-L:     netdev@vger.kernel.org
+M:     Michael Ellerman <mpe@ellerman.id.au>
 L:     bpf@vger.kernel.org
-S:     Maintained
+S:     Supported
 F:     arch/powerpc/net/
 
 BPF JIT for RISC-V (32-bit)
 M:     Luke Nelson <luke.r.nels@gmail.com>
 M:     Xi Wang <xi.wang@gmail.com>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
 S:     Maintained
 F:     arch/riscv/net/
@@ -3706,7 +3705,6 @@ X:        arch/riscv/net/bpf_jit_comp64.c
 
 BPF JIT for RISC-V (64-bit)
 M:     Björn Töpel <bjorn@kernel.org>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
 S:     Maintained
 F:     arch/riscv/net/
@@ -3716,36 +3714,80 @@ BPF JIT for S390
 M:     Ilya Leoshkevich <iii@linux.ibm.com>
 M:     Heiko Carstens <hca@linux.ibm.com>
 M:     Vasily Gorbik <gor@linux.ibm.com>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
-S:     Maintained
+S:     Supported
 F:     arch/s390/net/
 X:     arch/s390/net/pnet.c
 
 BPF JIT for SPARC (32-BIT AND 64-BIT)
 M:     David S. Miller <davem@davemloft.net>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
-S:     Maintained
+S:     Odd Fixes
 F:     arch/sparc/net/
 
 BPF JIT for X86 32-BIT
 M:     Wang YanQing <udknight@gmail.com>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
-S:     Maintained
+S:     Odd Fixes
 F:     arch/x86/net/bpf_jit_comp32.c
 
 BPF JIT for X86 64-BIT
 M:     Alexei Starovoitov <ast@kernel.org>
 M:     Daniel Borkmann <daniel@iogearbox.net>
-L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
 S:     Supported
 F:     arch/x86/net/
 X:     arch/x86/net/bpf_jit_comp32.c
 
-BPF LSM (Security Audit and Enforcement using BPF)
+BPF [CORE]
+M:     Alexei Starovoitov <ast@kernel.org>
+M:     Daniel Borkmann <daniel@iogearbox.net>
+R:     John Fastabend <john.fastabend@gmail.com>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     kernel/bpf/verifier.c
+F:     kernel/bpf/tnum.c
+F:     kernel/bpf/core.c
+F:     kernel/bpf/syscall.c
+F:     kernel/bpf/dispatcher.c
+F:     kernel/bpf/trampoline.c
+F:     include/linux/bpf*
+F:     include/linux/filter.h
+
+BPF [BTF]
+M:     Martin KaFai Lau <martin.lau@linux.dev>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     kernel/bpf/btf.c
+F:     include/linux/btf*
+
+BPF [TRACING]
+M:     Song Liu <song@kernel.org>
+R:     Jiri Olsa <jolsa@kernel.org>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     kernel/trace/bpf_trace.c
+F:     kernel/bpf/stackmap.c
+
+BPF [NETWORKING] (tc BPF, sock_addr)
+M:     Martin KaFai Lau <martin.lau@linux.dev>
+M:     Daniel Borkmann <daniel@iogearbox.net>
+R:     John Fastabend <john.fastabend@gmail.com>
+L:     bpf@vger.kernel.org
+L:     netdev@vger.kernel.org
+S:     Maintained
+F:     net/core/filter.c
+F:     net/sched/act_bpf.c
+F:     net/sched/cls_bpf.c
+
+BPF [NETWORKING] (struct_ops, reuseport)
+M:     Martin KaFai Lau <martin.lau@linux.dev>
+L:     bpf@vger.kernel.org
+L:     netdev@vger.kernel.org
+S:     Maintained
+F:     kernel/bpf/bpf_struct*
+
+BPF [SECURITY & LSM] (Security Audit and Enforcement using BPF)
 M:     KP Singh <kpsingh@kernel.org>
 R:     Florent Revest <revest@chromium.org>
 R:     Brendan Jackman <jackmanb@chromium.org>
@@ -3756,13 +3798,64 @@ F:      include/linux/bpf_lsm.h
 F:     kernel/bpf/bpf_lsm.c
 F:     security/bpf/
 
-BPFTOOL
+BPF [STORAGE & CGROUPS]
+M:     Martin KaFai Lau <martin.lau@linux.dev>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     kernel/bpf/cgroup.c
+F:     kernel/bpf/*storage.c
+F:     kernel/bpf/bpf_lru*
+
+BPF [RINGBUF]
+M:     Andrii Nakryiko <andrii@kernel.org>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     kernel/bpf/ringbuf.c
+
+BPF [ITERATOR]
+M:     Yonghong Song <yhs@fb.com>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     kernel/bpf/*iter.c
+
+BPF [L7 FRAMEWORK] (sockmap)
+M:     John Fastabend <john.fastabend@gmail.com>
+M:     Jakub Sitnicki <jakub@cloudflare.com>
+L:     netdev@vger.kernel.org
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     include/linux/skmsg.h
+F:     net/core/skmsg.c
+F:     net/core/sock_map.c
+F:     net/ipv4/tcp_bpf.c
+F:     net/ipv4/udp_bpf.c
+F:     net/unix/unix_bpf.c
+
+BPF [LIBRARY] (libbpf)
+M:     Andrii Nakryiko <andrii@kernel.org>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     tools/lib/bpf/
+
+BPF [TOOLING] (bpftool)
 M:     Quentin Monnet <quentin@isovalent.com>
 L:     bpf@vger.kernel.org
 S:     Maintained
 F:     kernel/bpf/disasm.*
 F:     tools/bpf/bpftool/
 
+BPF [SELFTESTS] (Test Runners & Infrastructure)
+M:     Andrii Nakryiko <andrii@kernel.org>
+R:     Mykola Lysenko <mykolal@fb.com>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     tools/testing/selftests/bpf/
+
+BPF [MISC]
+L:     bpf@vger.kernel.org
+S:     Odd Fixes
+K:     (?:\b|_)bpf(?:\b|_)
+
 BROADCOM B44 10/100 ETHERNET DRIVER
 M:     Michael Chan <michael.chan@broadcom.com>
 L:     netdev@vger.kernel.org
@@ -3795,12 +3888,12 @@ N:      bcmbca
 N:     bcm[9]?47622
 
 BROADCOM BCM2711/BCM2835 ARM ARCHITECTURE
-M:     Nicolas Saenz Julienne <nsaenz@kernel.org>
+M:     Florian Fainelli <f.fainelli@gmail.com>
 R:     Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:     linux-rpi-kernel@lists.infradead.org (moderated for non-subscribers)
 L:     linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:     Maintained
-T:     git git://git.kernel.org/pub/scm/linux/kernel/git/nsaenz/linux-rpi.git
+T:     git git://github.com/broadcom/stblinux.git
 F:     Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
 F:     drivers/pci/controller/pcie-brcmstb.c
 F:     drivers/staging/vc04_services
@@ -4958,6 +5051,7 @@ Q:        http://patchwork.kernel.org/project/linux-clk/list/
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git
 F:     Documentation/devicetree/bindings/clock/
 F:     drivers/clk/
+F:     include/dt-bindings/clock/
 F:     include/linux/clk-pr*
 F:     include/linux/clk/
 F:     include/linux/of_clk.h
@@ -5961,6 +6055,7 @@ M:        Christoph Hellwig <hch@lst.de>
 M:     Marek Szyprowski <m.szyprowski@samsung.com>
 R:     Robin Murphy <robin.murphy@arm.com>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 S:     Supported
 W:     http://git.infradead.org/users/hch/dma-mapping.git
 T:     git git://git.infradead.org/users/hch/dma-mapping.git
@@ -5973,6 +6068,7 @@ F:        kernel/dma/
 DMA MAPPING BENCHMARK
 M:     Xiang Chen <chenxiang66@hisilicon.com>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 F:     kernel/dma/map_benchmark.c
 F:     tools/testing/selftests/dma/
 
@@ -7301,6 +7397,13 @@ L:       netdev@vger.kernel.org
 S:     Maintained
 F:     drivers/net/ethernet/ibm/ehea/
 
+ELM327 CAN NETWORK DRIVER
+M:     Max Staudt <max@enpas.org>
+L:     linux-can@vger.kernel.org
+S:     Maintained
+F:     Documentation/networking/device_drivers/can/can327.rst
+F:     drivers/net/can/can327.c
+
 EM28XX VIDEO4LINUX DRIVER
 M:     Mauro Carvalho Chehab <mchehab@kernel.org>
 L:     linux-media@vger.kernel.org
@@ -7406,6 +7509,13 @@ S:       Maintained
 F:     include/linux/errseq.h
 F:     lib/errseq.c
 
+ESD CAN/USB DRIVERS
+M:     Frank Jungclaus <frank.jungclaus@esd.eu>
+R:     socketcan@esd.eu
+L:     linux-can@vger.kernel.org
+S:     Maintained
+F:     drivers/net/can/usb/esd_usb.c
+
 ET131X NETWORK DRIVER
 M:     Mark Einon <mark.einon@gmail.com>
 S:     Odd Fixes
@@ -7557,6 +7667,7 @@ F:        drivers/gpu/drm/exynos/exynos_dp*
 EXYNOS SYSMMU (IOMMU) driver
 M:     Marek Szyprowski <m.szyprowski@samsung.com>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 S:     Maintained
 F:     drivers/iommu/exynos-iommu.c
 
@@ -8478,6 +8589,7 @@ F:        Documentation/devicetree/bindings/gpio/
 F:     Documentation/driver-api/gpio/
 F:     drivers/gpio/
 F:     include/asm-generic/gpio.h
+F:     include/dt-bindings/gpio/
 F:     include/linux/gpio.h
 F:     include/linux/gpio/
 F:     include/linux/of_gpio.h
@@ -9131,6 +9243,7 @@ F:        drivers/media/platform/st/sti/hva
 
 HWPOISON MEMORY FAILURE HANDLING
 M:     Naoya Horiguchi <naoya.horiguchi@nec.com>
+R:     Miaohe Lin <linmiaohe@huawei.com>
 L:     linux-mm@kvack.org
 S:     Maintained
 F:     mm/hwpoison-inject.c
@@ -9275,6 +9388,7 @@ T:        git git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux.git
 F:     Documentation/devicetree/bindings/i2c/i2c.txt
 F:     Documentation/i2c/
 F:     drivers/i2c/*
+F:     include/dt-bindings/i2c/i2c.h
 F:     include/linux/i2c-dev.h
 F:     include/linux/i2c-smbus.h
 F:     include/linux/i2c.h
@@ -9290,6 +9404,7 @@ T:        git git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux.git
 F:     Documentation/devicetree/bindings/i2c/
 F:     drivers/i2c/algos/
 F:     drivers/i2c/busses/
+F:     include/dt-bindings/i2c/
 
 I2C-TAOS-EVM DRIVER
 M:     Jean Delvare <jdelvare@suse.com>
@@ -9810,7 +9925,10 @@ INTEL ASoC DRIVERS
 M:     Cezary Rojewski <cezary.rojewski@intel.com>
 M:     Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
 M:     Liam Girdwood <liam.r.girdwood@linux.intel.com>
-M:     Jie Yang <yang.jie@linux.intel.com>
+M:     Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
+M:     Bard Liao <yung-chuan.liao@linux.intel.com>
+M:     Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
+M:     Kai Vehmanen <kai.vehmanen@linux.intel.com>
 L:     alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:     Supported
 F:     sound/soc/intel/
@@ -9974,6 +10092,7 @@ INTEL IOMMU (VT-d)
 M:     David Woodhouse <dwmw2@infradead.org>
 M:     Lu Baolu <baolu.lu@linux.intel.com>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 S:     Supported
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
 F:     drivers/iommu/intel/
@@ -10353,6 +10472,7 @@ IOMMU DRIVERS
 M:     Joerg Roedel <joro@8bytes.org>
 M:     Will Deacon <will@kernel.org>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 S:     Maintained
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
 F:     Documentation/devicetree/bindings/iommu/
@@ -10829,6 +10949,7 @@ M:      Marc Zyngier <maz@kernel.org>
 R:     James Morse <james.morse@arm.com>
 R:     Alexandru Elisei <alexandru.elisei@arm.com>
 R:     Suzuki K Poulose <suzuki.poulose@arm.com>
+R:     Oliver Upton <oliver.upton@linux.dev>
 L:     linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:     kvmarm@lists.cs.columbia.edu (moderated for non-subscribers)
 S:     Maintained
@@ -10895,28 +11016,51 @@ F:    tools/testing/selftests/kvm/*/s390x/
 F:     tools/testing/selftests/kvm/s390x/
 
 KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
+M:     Sean Christopherson <seanjc@google.com>
 M:     Paolo Bonzini <pbonzini@redhat.com>
-R:     Sean Christopherson <seanjc@google.com>
-R:     Vitaly Kuznetsov <vkuznets@redhat.com>
-R:     Wanpeng Li <wanpengli@tencent.com>
-R:     Jim Mattson <jmattson@google.com>
-R:     Joerg Roedel <joro@8bytes.org>
 L:     kvm@vger.kernel.org
 S:     Supported
-W:     http://www.linux-kvm.org
 T:     git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
 F:     arch/x86/include/asm/kvm*
-F:     arch/x86/include/asm/pvclock-abi.h
 F:     arch/x86/include/asm/svm.h
 F:     arch/x86/include/asm/vmx*.h
 F:     arch/x86/include/uapi/asm/kvm*
 F:     arch/x86/include/uapi/asm/svm.h
 F:     arch/x86/include/uapi/asm/vmx.h
-F:     arch/x86/kernel/kvm.c
-F:     arch/x86/kernel/kvmclock.c
 F:     arch/x86/kvm/
 F:     arch/x86/kvm/*/
 
+KVM PARAVIRT (KVM/paravirt)
+M:     Paolo Bonzini <pbonzini@redhat.com>
+R:     Wanpeng Li <wanpengli@tencent.com>
+R:     Vitaly Kuznetsov <vkuznets@redhat.com>
+L:     kvm@vger.kernel.org
+S:     Supported
+T:     git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
+F:     arch/x86/kernel/kvm.c
+F:     arch/x86/kernel/kvmclock.c
+F:     arch/x86/include/asm/pvclock-abi.h
+F:     include/linux/kvm_para.h
+F:     include/uapi/linux/kvm_para.h
+F:     include/uapi/asm-generic/kvm_para.h
+F:     include/asm-generic/kvm_para.h
+F:     arch/um/include/asm/kvm_para.h
+F:     arch/x86/include/asm/kvm_para.h
+F:     arch/x86/include/uapi/asm/kvm_para.h
+
+KVM X86 HYPER-V (KVM/hyper-v)
+M:     Vitaly Kuznetsov <vkuznets@redhat.com>
+M:     Sean Christopherson <seanjc@google.com>
+M:     Paolo Bonzini <pbonzini@redhat.com>
+L:     kvm@vger.kernel.org
+S:     Supported
+T:     git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
+F:     arch/x86/kvm/hyperv.*
+F:     arch/x86/kvm/kvm_onhyperv.*
+F:     arch/x86/kvm/svm/hyperv.*
+F:     arch/x86/kvm/svm/svm_onhyperv.*
+F:     arch/x86/kvm/vmx/evmcs.*
+
 KERNFS
 M:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 M:     Tejun Heo <tj@kernel.org>
@@ -11095,20 +11239,6 @@ S:     Maintained
 F:     include/net/l3mdev.h
 F:     net/l3mdev
 
-L7 BPF FRAMEWORK
-M:     John Fastabend <john.fastabend@gmail.com>
-M:     Daniel Borkmann <daniel@iogearbox.net>
-M:     Jakub Sitnicki <jakub@cloudflare.com>
-L:     netdev@vger.kernel.org
-L:     bpf@vger.kernel.org
-S:     Maintained
-F:     include/linux/skmsg.h
-F:     net/core/skmsg.c
-F:     net/core/sock_map.c
-F:     net/ipv4/tcp_bpf.c
-F:     net/ipv4/udp_bpf.c
-F:     net/unix/unix_bpf.c
-
 LANDLOCK SECURITY MODULE
 M:     Mickaël Salaün <mic@digikod.net>
 L:     linux-security-module@vger.kernel.org
@@ -11588,6 +11718,7 @@ F:      drivers/gpu/drm/bridge/lontium-lt8912b.c
 LOONGARCH
 M:     Huacai Chen <chenhuacai@kernel.org>
 R:     WANG Xuerui <kernel@xen0n.name>
+L:     loongarch@lists.linux.dev
 S:     Maintained
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson.git
 F:     arch/loongarch/
@@ -12501,6 +12632,7 @@ F:      drivers/i2c/busses/i2c-mt65xx.c
 MEDIATEK IOMMU DRIVER
 M:     Yong Wu <yong.wu@mediatek.com>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 L:     linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S:     Supported
 F:     Documentation/devicetree/bindings/iommu/mediatek*
@@ -12843,9 +12975,8 @@ M:      Andrew Morton <akpm@linux-foundation.org>
 L:     linux-mm@kvack.org
 S:     Maintained
 W:     http://www.linux-mm.org
-T:     quilt https://ozlabs.org/~akpm/mmotm/
-T:     quilt https://ozlabs.org/~akpm/mmots/
-T:     git git://github.com/hnaz/linux-mm.git
+T:     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
+T:     quilt git://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new
 F:     include/linux/gfp.h
 F:     include/linux/memory_hotplug.h
 F:     include/linux/mm.h
@@ -12855,6 +12986,18 @@ F:     include/linux/vmalloc.h
 F:     mm/
 F:     tools/testing/selftests/vm/
 
+MEMORY HOT(UN)PLUG
+M:     David Hildenbrand <david@redhat.com>
+M:     Oscar Salvador <osalvador@suse.de>
+L:     linux-mm@kvack.org
+S:     Maintained
+F:     Documentation/admin-guide/mm/memory-hotplug.rst
+F:     Documentation/core-api/memory-hotplug.rst
+F:     drivers/base/memory.c
+F:     include/linux/memory_hotplug.h
+F:     mm/memory_hotplug.c
+F:     tools/testing/selftests/memory-hotplug/
+
 MEMORY TECHNOLOGY DEVICES (MTD)
 M:     Miquel Raynal <miquel.raynal@bootlin.com>
 M:     Richard Weinberger <richard@nod.at>
@@ -13048,6 +13191,7 @@ M:      UNGLinuxDriver@microchip.com
 L:     netdev@vger.kernel.org
 S:     Maintained
 F:     Documentation/devicetree/bindings/net/dsa/microchip,ksz.yaml
+F:     Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml
 F:     drivers/net/dsa/microchip/*
 F:     include/linux/platform_data/microchip-ksz.h
 F:     net/dsa/tag_ksz.c
@@ -13711,12 +13855,11 @@ L:    netdev@vger.kernel.org
 S:     Maintained
 F:     net/sched/sch_netem.c
 
-NETERION 10GbE DRIVERS (s2io/vxge)
+NETERION 10GbE DRIVERS (s2io)
 M:     Jon Mason <jdmason@kudzu.us>
 L:     netdev@vger.kernel.org
 S:     Supported
 F:     Documentation/networking/device_drivers/ethernet/neterion/s2io.rst
-F:     Documentation/networking/device_drivers/ethernet/neterion/vxge.rst
 F:     drivers/net/ethernet/neterion/
 
 NETFILTER
@@ -13951,7 +14094,6 @@ F:      net/ipv6/tcp*.c
 NETWORKING [TLS]
 M:     Boris Pismenny <borisp@nvidia.com>
 M:     John Fastabend <john.fastabend@gmail.com>
-M:     Daniel Borkmann <daniel@iogearbox.net>
 M:     Jakub Kicinski <kuba@kernel.org>
 L:     netdev@vger.kernel.org
 S:     Maintained
@@ -14260,7 +14402,7 @@ F:      drivers/iio/gyro/fxas21002c_i2c.c
 F:     drivers/iio/gyro/fxas21002c_spi.c
 
 NXP i.MX CLOCK DRIVERS
-M:     Abel Vesa <abel.vesa@nxp.com>
+M:     Abel Vesa <abelvesa@kernel.org>
 L:     linux-clk@vger.kernel.org
 L:     linux-imx@nxp.com
 S:     Maintained
@@ -14348,9 +14490,8 @@ F:      Documentation/devicetree/bindings/sound/nxp,tfa989x.yaml
 F:     sound/soc/codecs/tfa989x.c
 
 NXP-NCI NFC DRIVER
-R:     Charles Gorand <charles.gorand@effinnov.com>
 L:     linux-nfc@lists.01.org (subscribers-only)
-S:     Supported
+S:     Orphan
 F:     Documentation/devicetree/bindings/net/nfc/nxp,nci.yaml
 F:     drivers/nfc/nxp-nci
 
@@ -14868,6 +15009,7 @@ F:      include/dt-bindings/
 
 OPENCOMPUTE PTP CLOCK DRIVER
 M:     Jonathan Lemon <jonathan.lemon@gmail.com>
+M:     Vadim Fedorenko <vadfed@fb.com>
 L:     netdev@vger.kernel.org
 S:     Maintained
 F:     drivers/ptp/ptp_ocp.c
@@ -15738,7 +15880,7 @@ F:      drivers/pinctrl/freescale/
 PIN CONTROLLER - INTEL
 M:     Mika Westerberg <mika.westerberg@linux.intel.com>
 M:     Andy Shevchenko <andy@kernel.org>
-S:     Maintained
+S:     Supported
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
 F:     drivers/pinctrl/intel/
 
@@ -16260,7 +16402,7 @@ F:      drivers/crypto/qat/
 
 QCOM AUDIO (ASoC) DRIVERS
 M:     Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
-M:     Banajit Goswami <bgoswami@codeaurora.org>
+M:     Banajit Goswami <bgoswami@quicinc.com>
 L:     alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:     Supported
 F:     sound/soc/codecs/lpass-va-macro.c
@@ -16487,7 +16629,7 @@ F:      Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
 F:     drivers/cpufreq/qcom-cpufreq-nvmem.c
 
 QUALCOMM CRYPTO DRIVERS
-M:     Thara Gopinath <thara.gopinath@linaro.org>
+M:     Thara Gopinath <thara.gopinath@gmail.com>
 L:     linux-crypto@vger.kernel.org
 L:     linux-arm-msm@vger.kernel.org
 S:     Maintained
@@ -16542,6 +16684,7 @@ F:      drivers/i2c/busses/i2c-qcom-cci.c
 QUALCOMM IOMMU
 M:     Rob Clark <robdclark@gmail.com>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 L:     linux-arm-msm@vger.kernel.org
 S:     Maintained
 F:     drivers/iommu/arm/arm-smmu/qcom_iommu.c
@@ -16597,7 +16740,7 @@ F:      include/linux/if_rmnet.h
 
 QUALCOMM TSENS THERMAL DRIVER
 M:     Amit Kucheria <amitk@kernel.org>
-M:     Thara Gopinath <thara.gopinath@linaro.org>
+M:     Thara Gopinath <thara.gopinath@gmail.com>
 L:     linux-pm@vger.kernel.org
 L:     linux-arm-msm@vger.kernel.org
 S:     Maintained
@@ -17054,6 +17197,19 @@ S:     Supported
 F:     Documentation/devicetree/bindings/iio/adc/renesas,rzg2l-adc.yaml
 F:     drivers/iio/adc/rzg2l_adc.c
 
+RENESAS RZ/N1 A5PSW SWITCH DRIVER
+M:     Clément Léger <clement.leger@bootlin.com>
+L:     linux-renesas-soc@vger.kernel.org
+L:     netdev@vger.kernel.org
+S:     Maintained
+F:     Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml
+F:     Documentation/devicetree/bindings/net/pcs/renesas,rzn1-miic.yaml
+F:     drivers/net/dsa/rzn1_a5psw*
+F:     drivers/net/pcs/pcs-rzn1-miic.c
+F:     include/dt-bindings/net/pcs-rzn1-miic.h
+F:     include/linux/pcs-rzn1-miic.h
+F:     net/dsa/tag_rzn1_a5psw.c
+
 RENESAS RZ/N1 RTC CONTROLLER DRIVER
 M:     Miquel Raynal <miquel.raynal@bootlin.com>
 L:     linux-rtc@vger.kernel.org
@@ -18054,6 +18210,7 @@ F:      drivers/misc/sgi-xp/
 
 SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
 M:     Karsten Graul <kgraul@linux.ibm.com>
+M:     Wenjia Zhang <wenjia@linux.ibm.com>
 L:     linux-s390@vger.kernel.org
 S:     Supported
 W:     http://www.ibm.com/developerworks/linux/linux390/
@@ -18686,8 +18843,10 @@ F:     sound/soc/
 SOUND - SOUND OPEN FIRMWARE (SOF) DRIVERS
 M:     Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
 M:     Liam Girdwood <lgirdwood@gmail.com>
+M:     Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
+M:     Bard Liao <yung-chuan.liao@linux.intel.com>
 M:     Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
-M:     Kai Vehmanen <kai.vehmanen@linux.intel.com>
+R:     Kai Vehmanen <kai.vehmanen@linux.intel.com>
 M:     Daniel Baluta <daniel.baluta@nxp.com>
 L:     sound-open-firmware@alsa-project.org (moderated for non-subscribers)
 S:     Supported
@@ -19167,6 +19326,7 @@ F:      arch/x86/boot/video*
 SWIOTLB SUBSYSTEM
 M:     Christoph Hellwig <hch@infradead.org>
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 S:     Supported
 W:     http://git.infradead.org/users/hch/dma-mapping.git
 T:     git git://git.infradead.org/users/hch/dma-mapping.git
@@ -19304,7 +19464,7 @@ R:      Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 R:     Mika Westerberg <mika.westerberg@linux.intel.com>
 R:     Jan Dabros <jsd@semihalf.com>
 L:     linux-i2c@vger.kernel.org
-S:     Maintained
+S:     Supported
 F:     drivers/i2c/busses/i2c-designware-*
 
 SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER
@@ -20711,6 +20871,7 @@ T:      git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git
 F:     Documentation/devicetree/bindings/usb/
 F:     Documentation/usb/
 F:     drivers/usb/
+F:     include/dt-bindings/usb/
 F:     include/linux/usb.h
 F:     include/linux/usb/
 
@@ -21433,6 +21594,13 @@ L:     linux-input@vger.kernel.org
 S:     Maintained
 F:     drivers/input/tablet/wacom_serial4.c
 
+WANGXUN ETHERNET DRIVER
+M:     Jiawen Wu <jiawenwu@trustnetic.com>
+L:     netdev@vger.kernel.org
+S:     Maintained
+F:     Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst
+F:     drivers/net/ethernet/wangxun/
+
 WATCHDOG DEVICE DRIVERS
 M:     Wim Van Sebroeck <wim@linux-watchdog.org>
 M:     Guenter Roeck <linux@roeck-us.net>
@@ -21840,6 +22008,7 @@ M:      Juergen Gross <jgross@suse.com>
 M:     Stefano Stabellini <sstabellini@kernel.org>
 L:     xen-devel@lists.xenproject.org (moderated for non-subscribers)
 L:     iommu@lists.linux-foundation.org
+L:     iommu@lists.linux.dev
 S:     Supported
 F:     arch/x86/xen/*swiotlb*
 F:     drivers/xen/*swiotlb*
diff --git a/Makefile b/Makefile
index 1a6678d..990d2ee 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc5
 NAME = Superb Owl
 
 # *DOCUMENTATION*
@@ -1141,7 +1141,7 @@ KBUILD_MODULES := 1
 
 autoksyms_recursive: descend modules.order
        $(Q)$(CONFIG_SHELL) $(srctree)/scripts/adjust_autoksyms.sh \
-         "$(MAKE) -f $(srctree)/Makefile vmlinux"
+         "$(MAKE) -f $(srctree)/Makefile autoksyms_recursive"
 endif
 
 autoksyms_h := $(if $(CONFIG_TRIM_UNUSED_KSYMS), include/generated/autoksyms.h)
index 1848998..5112f49 100644
@@ -1586,7 +1586,6 @@ dtb-$(CONFIG_ARCH_ASPEED) += \
        aspeed-bmc-lenovo-hr630.dtb \
        aspeed-bmc-lenovo-hr855xg2.dtb \
        aspeed-bmc-microsoft-olympus.dtb \
-       aspeed-bmc-nuvia-dc-scm.dtb \
        aspeed-bmc-opp-lanyang.dtb \
        aspeed-bmc-opp-mihawk.dtb \
        aspeed-bmc-opp-mowgli.dtb \
@@ -1599,6 +1598,7 @@ dtb-$(CONFIG_ARCH_ASPEED) += \
        aspeed-bmc-opp-witherspoon.dtb \
        aspeed-bmc-opp-zaius.dtb \
        aspeed-bmc-portwell-neptune.dtb \
+       aspeed-bmc-qcom-dc-scm-v1.dtb \
        aspeed-bmc-quanta-q71l.dtb \
        aspeed-bmc-quanta-s6q.dtb \
        aspeed-bmc-supermicro-x11spi.dtb \
diff --git a/arch/arm/boot/dts/aspeed-bmc-nuvia-dc-scm.dts b/arch/arm/boot/dts/aspeed-bmc-nuvia-dc-scm.dts
deleted file mode 100644
index f4a97cf..0000000
--- a/arch/arm/boot/dts/aspeed-bmc-nuvia-dc-scm.dts
+++ /dev/null
@@ -1,190 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-// Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
-
-/dts-v1/;
-
-#include "aspeed-g6.dtsi"
-
-/ {
-       model = "Nuvia DC-SCM BMC";
-       compatible = "nuvia,dc-scm-bmc", "aspeed,ast2600";
-
-       aliases {
-               serial4 = &uart5;
-       };
-
-       chosen {
-               stdout-path = &uart5;
-               bootargs = "console=ttyS4,115200n8";
-       };
-
-       memory@80000000 {
-               device_type = "memory";
-               reg = <0x80000000 0x40000000>;
-       };
-};
-
-&mdio3 {
-       status = "okay";
-
-       ethphy3: ethernet-phy@1 {
-               compatible = "ethernet-phy-ieee802.3-c22";
-               reg = <1>;
-       };
-};
-
-&mac2 {
-       status = "okay";
-
-       /* Bootloader sets up the MAC to insert delay */
-       phy-mode = "rgmii";
-       phy-handle = <&ethphy3>;
-
-       pinctrl-names = "default";
-       pinctrl-0 = <&pinctrl_rgmii3_default>;
-};
-
-&mac3 {
-       status = "okay";
-
-       pinctrl-names = "default";
-       pinctrl-0 = <&pinctrl_rmii4_default>;
-
-       use-ncsi;
-};
-
-&rtc {
-       status = "okay";
-};
-
-&fmc {
-       status = "okay";
-
-       flash@0 {
-               status = "okay";
-               m25p,fast-read;
-               label = "bmc";
-               spi-max-frequency = <133000000>;
-#include "openbmc-flash-layout-64.dtsi"
-       };
-
-       flash@1 {
-               status = "okay";
-               m25p,fast-read;
-               label = "alt-bmc";
-               spi-max-frequency = <133000000>;
-#include "openbmc-flash-layout-64-alt.dtsi"
-       };
-};
-
-&spi1 {
-       status = "okay";
-       pinctrl-names = "default";
-       pinctrl-0 = <&pinctrl_spi1_default>;
-
-       flash@0 {
-               status = "okay";
-               m25p,fast-read;
-               label = "bios";
-               spi-max-frequency = <133000000>;
-       };
-};
-
-&gpio0 {
-       gpio-line-names =
-       /*A0-A7*/       "","","","","","","","",
-       /*B0-B7*/       "BMC_FLASH_MUX_SEL","","","","","","","",
-       /*C0-C7*/       "","","","","","","","",
-       /*D0-D7*/       "","","","","","","","",
-       /*E0-E7*/       "","","","","","","","",
-       /*F0-F7*/       "","","","","","","","",
-       /*G0-G7*/       "","","","","","","","",
-       /*H0-H7*/       "","","","","","","","",
-       /*I0-I7*/       "","","","","","","","",
-       /*J0-J7*/       "","","","","","","","",
-       /*K0-K7*/       "","","","","","","","",
-       /*L0-L7*/       "","","","","","","","",
-       /*M0-M7*/       "","","","","","","","",
-       /*N0-N7*/       "BMC_FWSPI_RST_N","","GPIO_1_BMC_3V3","","","","","",
-       /*O0-O7*/       "JTAG_MUX_A","JTAG_MUX_B","","","","","","",
-       /*P0-P7*/       "","","","","","","","",
-       /*Q0-Q7*/       "","","","","","","","",
-       /*R0-R7*/       "","","","","","","","",
-       /*S0-S7*/       "","","","","","","","",
-       /*T0-T7*/       "","","","","","","","",
-       /*U0-U7*/       "","","","","","","","",
-       /*V0-V7*/       "","","","SCMFPGA_SPARE_GPIO1_3V3",
-                       "SCMFPGA_SPARE_GPIO2_3V3","SCMFPGA_SPARE_GPIO3_3V3",
-                       "SCMFPGA_SPARE_GPIO4_3V3","SCMFPGA_SPARE_GPIO5_3V3",
-       /*W0-W7*/       "","","","","","","","",
-       /*X0-X7*/       "","","","","","","","",
-       /*Y0-Y7*/       "","","","","","","","",
-       /*Z0-Z7*/       "","","","","","","","",
-       /*AA0-AA7*/     "","","","","","","","",
-       /*AB0-AB7*/     "","","","","","","","",
-       /*AC0-AC7*/     "","","","","","","","";
-};
-
-&gpio1 {
-       gpio-line-names =
-       /*A0-A7*/       "GPI_1_BMC_1V8","","","","","",
-                       "SCMFPGA_SPARE_GPIO1_1V8","SCMFPGA_SPARE_GPIO2_1V8",
-       /*B0-B7*/       "SCMFPGA_SPARE_GPIO3_1V8","SCMFPGA_SPARE_GPIO4_1V8",
-                       "SCMFPGA_SPARE_GPIO5_1V8","","","","","",
-       /*C0-C7*/       "","","","","","","","",
-       /*D0-D7*/       "","BMC_SPI1_RST_N","BIOS_FLASH_MUX_SEL","",
-                       "","TPM2_PIRQ_N","TPM2_RST_N","",
-       /*E0-E7*/       "","","","","","","","";
-};
-
-&i2c2 {
-       status = "okay";
-};
-
-&i2c4 {
-       status = "okay";
-};
-
-&i2c5 {
-       status = "okay";
-};
-
-&i2c6 {
-       status = "okay";
-};
-
-&i2c7 {
-       status = "okay";
-};
-
-&i2c8 {
-       status = "okay";
-};
-
-&i2c9 {
-       status = "okay";
-};
-
-&i2c10 {
-       status = "okay";
-};
-
-&i2c12 {
-       status = "okay";
-};
-
-&i2c13 {
-       status = "okay";
-};
-
-&i2c14 {
-       status = "okay";
-};
-
-&i2c15 {
-       status = "okay";
-};
-
-&vhub {
-       status = "okay";
-};
diff --git a/arch/arm/boot/dts/aspeed-bmc-qcom-dc-scm-v1.dts b/arch/arm/boot/dts/aspeed-bmc-qcom-dc-scm-v1.dts
new file mode 100644
index 0000000..259ef3f
--- /dev/null
+++ b/arch/arm/boot/dts/aspeed-bmc-qcom-dc-scm-v1.dts
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+// Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+
+/dts-v1/;
+
+#include "aspeed-g6.dtsi"
+
+/ {
+       model = "Qualcomm DC-SCM V1 BMC";
+       compatible = "qcom,dc-scm-v1-bmc", "aspeed,ast2600";
+
+       aliases {
+               serial4 = &uart5;
+       };
+
+       chosen {
+               stdout-path = &uart5;
+               bootargs = "console=ttyS4,115200n8";
+       };
+
+       memory@80000000 {
+               device_type = "memory";
+               reg = <0x80000000 0x40000000>;
+       };
+};
+
+&mdio3 {
+       status = "okay";
+
+       ethphy3: ethernet-phy@1 {
+               compatible = "ethernet-phy-ieee802.3-c22";
+               reg = <1>;
+       };
+};
+
+&mac2 {
+       status = "okay";
+
+       /* Bootloader sets up the MAC to insert delay */
+       phy-mode = "rgmii";
+       phy-handle = <&ethphy3>;
+
+       pinctrl-names = "default";
+       pinctrl-0 = <&pinctrl_rgmii3_default>;
+};
+
+&mac3 {
+       status = "okay";
+
+       pinctrl-names = "default";
+       pinctrl-0 = <&pinctrl_rmii4_default>;
+
+       use-ncsi;
+};
+
+&rtc {
+       status = "okay";
+};
+
+&fmc {
+       status = "okay";
+
+       flash@0 {
+               status = "okay";
+               m25p,fast-read;
+               label = "bmc";
+               spi-max-frequency = <133000000>;
+#include "openbmc-flash-layout-64.dtsi"
+       };
+
+       flash@1 {
+               status = "okay";
+               m25p,fast-read;
+               label = "alt-bmc";
+               spi-max-frequency = <133000000>;
+#include "openbmc-flash-layout-64-alt.dtsi"
+       };
+};
+
+&spi1 {
+       status = "okay";
+       pinctrl-names = "default";
+       pinctrl-0 = <&pinctrl_spi1_default>;
+
+       flash@0 {
+               status = "okay";
+               m25p,fast-read;
+               label = "bios";
+               spi-max-frequency = <133000000>;
+       };
+};
+
+&gpio0 {
+       gpio-line-names =
+       /*A0-A7*/       "","","","","","","","",
+       /*B0-B7*/       "BMC_FLASH_MUX_SEL","","","","","","","",
+       /*C0-C7*/       "","","","","","","","",
+       /*D0-D7*/       "","","","","","","","",
+       /*E0-E7*/       "","","","","","","","",
+       /*F0-F7*/       "","","","","","","","",
+       /*G0-G7*/       "","","","","","","","",
+       /*H0-H7*/       "","","","","","","","",
+       /*I0-I7*/       "","","","","","","","",
+       /*J0-J7*/       "","","","","","","","",
+       /*K0-K7*/       "","","","","","","","",
+       /*L0-L7*/       "","","","","","","","",
+       /*M0-M7*/       "","","","","","","","",
+       /*N0-N7*/       "BMC_FWSPI_RST_N","","GPIO_1_BMC_3V3","","","","","",
+       /*O0-O7*/       "JTAG_MUX_A","JTAG_MUX_B","","","","","","",
+       /*P0-P7*/       "","","","","","","","",
+       /*Q0-Q7*/       "","","","","","","","",
+       /*R0-R7*/       "","","","","","","","",
+       /*S0-S7*/       "","","","","","","","",
+       /*T0-T7*/       "","","","","","","","",
+       /*U0-U7*/       "","","","","","","","",
+       /*V0-V7*/       "","","","SCMFPGA_SPARE_GPIO1_3V3",
+                       "SCMFPGA_SPARE_GPIO2_3V3","SCMFPGA_SPARE_GPIO3_3V3",
+                       "SCMFPGA_SPARE_GPIO4_3V3","SCMFPGA_SPARE_GPIO5_3V3",
+       /*W0-W7*/       "","","","","","","","",
+       /*X0-X7*/       "","","","","","","","",
+       /*Y0-Y7*/       "","","","","","","","",
+       /*Z0-Z7*/       "","","","","","","","",
+       /*AA0-AA7*/     "","","","","","","","",
+       /*AB0-AB7*/     "","","","","","","","",
+       /*AC0-AC7*/     "","","","","","","","";
+};
+
+&gpio1 {
+       gpio-line-names =
+       /*A0-A7*/       "GPI_1_BMC_1V8","","","","","",
+                       "SCMFPGA_SPARE_GPIO1_1V8","SCMFPGA_SPARE_GPIO2_1V8",
+       /*B0-B7*/       "SCMFPGA_SPARE_GPIO3_1V8","SCMFPGA_SPARE_GPIO4_1V8",
+                       "SCMFPGA_SPARE_GPIO5_1V8","","","","","",
+       /*C0-C7*/       "","","","","","","","",
+       /*D0-D7*/       "","BMC_SPI1_RST_N","BIOS_FLASH_MUX_SEL","",
+                       "","TPM2_PIRQ_N","TPM2_RST_N","",
+       /*E0-E7*/       "","","","","","","","";
+};
+
+&i2c2 {
+       status = "okay";
+};
+
+&i2c4 {
+       status = "okay";
+};
+
+&i2c5 {
+       status = "okay";
+};
+
+&i2c6 {
+       status = "okay";
+};
+
+&i2c7 {
+       status = "okay";
+};
+
+&i2c8 {
+       status = "okay";
+};
+
+&i2c9 {
+       status = "okay";
+};
+
+&i2c10 {
+       status = "okay";
+};
+
+&i2c12 {
+       status = "okay";
+};
+
+&i2c13 {
+       status = "okay";
+};
+
+&i2c14 {
+       status = "okay";
+};
+
+&i2c15 {
+       status = "okay";
+};
+
+&vhub {
+       status = "okay";
+};
index 7719ea3..81ccb06 100644
                status = "okay";
 
                eeprom@53 {
-                       compatible = "atmel,24c32";
+                       compatible = "atmel,24c02";
                        reg = <0x53>;
                        pagesize = <16>;
-                       size = <128>;
                        status = "okay";
                };
        };
index 806eb1d..164201a 100644
        status = "okay";
 
        eeprom@50 {
-               compatible = "atmel,24c32";
+               compatible = "atmel,24c02";
                reg = <0x50>;
                pagesize = <16>;
                status = "okay";
        };
 
        eeprom@52 {
-               compatible = "atmel,24c32";
+               compatible = "atmel,24c02";
                reg = <0x52>;
                pagesize = <16>;
                status = "disabled";
        };
 
        eeprom@53 {
-               compatible = "atmel,24c32";
+               compatible = "atmel,24c02";
                reg = <0x53>;
                pagesize = <16>;
                status = "disabled";
index f4d2fc2..c53d9eb 100644
 &expgpio {
        gpio-line-names = "BT_ON",
                          "WL_ON",
-                         "",
+                         "PWR_LED_OFF",
                          "GLOBAL_RESET",
                          "VDD_SD_IO_SEL",
-                         "CAM_GPIO",
+                         "GLOBAL_SHUTDOWN",
                          "SD_PWR_ON",
-                         "SD_OC_N";
+                         "SHUTDOWN_REQUEST";
 };
 
 &genet_mdio {
index c383e0e..7df270c 100644
                pinctrl-names = "default";
                pinctrl-0 = <&pinctrl_atmel_conn>;
                reg = <0x4a>;
-               reset-gpios = <&gpio1 14 GPIO_ACTIVE_HIGH>;     /* SODIMM 106 */
+               reset-gpios = <&gpio1 14 GPIO_ACTIVE_LOW>;      /* SODIMM 106 */
                status = "disabled";
        };
 };
index d27beb4..652feff 100644
                                        regulator-name = "vddpu";
                                        regulator-min-microvolt = <725000>;
                                        regulator-max-microvolt = <1450000>;
-                                       regulator-enable-ramp-delay = <150>;
+                                       regulator-enable-ramp-delay = <380>;
                                        anatop-reg-offset = <0x140>;
                                        anatop-vol-bit-shift = <9>;
                                        anatop-vol-bit-width = <5>;
index c6b3206..21b509c 100644
        pinctrl-names = "default";
        pinctrl-0 = <&pinctrl_usdhc2>;
        bus-width = <4>;
+       no-1-8-v;
        non-removable;
-       cap-sd-highspeed;
-       sd-uhs-ddr50;
-       mmc-ddr-1_8v;
        vmmc-supply = <&reg_wifi>;
        enable-sdio-wakeup;
        status = "okay";
index 008e3da..039eed7 100644
                compatible = "usb-nop-xceiv";
                clocks = <&clks IMX7D_USB_HSIC_ROOT_CLK>;
                clock-names = "main_clk";
+               power-domains = <&pgc_hsic_phy>;
                #phy-cells = <0>;
        };
 
                                compatible = "fsl,imx7d-usb", "fsl,imx27-usb";
                                reg = <0x30b30000 0x200>;
                                interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
-                               power-domains = <&pgc_hsic_phy>;
                                clocks = <&clks IMX7D_USB_CTRL_CLK>;
                                fsl,usbphy = <&usbphynop3>;
                                fsl,usbmisc = <&usbmisc3 0>;
diff --git a/arch/arm/boot/dts/stm32mp15-scmi.dtsi b/arch/arm/boot/dts/stm32mp15-scmi.dtsi
new file mode 100644
index 0000000..543f24c
--- /dev/null
+++ b/arch/arm/boot/dts/stm32mp15-scmi.dtsi
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
+/*
+ * Copyright (C) STMicroelectronics 2022 - All Rights Reserved
+ * Author: Alexandre Torgue <alexandre.torgue@foss.st.com> for STMicroelectronics.
+ */
+
+/ {
+       firmware {
+               optee: optee {
+                       compatible = "linaro,optee-tz";
+                       method = "smc";
+               };
+
+               scmi: scmi {
+                       compatible = "linaro,scmi-optee";
+                       #address-cells = <1>;
+                       #size-cells = <0>;
+                       linaro,optee-channel-id = <0>;
+                       shmem = <&scmi_shm>;
+
+                       scmi_clk: protocol@14 {
+                               reg = <0x14>;
+                               #clock-cells = <1>;
+                       };
+
+                       scmi_reset: protocol@16 {
+                               reg = <0x16>;
+                               #reset-cells = <1>;
+                       };
+
+                       scmi_voltd: protocol@17 {
+                               reg = <0x17>;
+
+                               scmi_reguls: regulators {
+                                       #address-cells = <1>;
+                                       #size-cells = <0>;
+
+                                       scmi_reg11: reg11@0 {
+                                               reg = <0>;
+                                               regulator-name = "reg11";
+                                               regulator-min-microvolt = <1100000>;
+                                               regulator-max-microvolt = <1100000>;
+                                       };
+
+                                       scmi_reg18: reg18@1 {
+                                               voltd-name = "reg18";
+                                               reg = <1>;
+                                               regulator-name = "reg18";
+                                               regulator-min-microvolt = <1800000>;
+                                               regulator-max-microvolt = <1800000>;
+                                       };
+
+                                       scmi_usb33: usb33@2 {
+                                               reg = <2>;
+                                               regulator-name = "usb33";
+                                               regulator-min-microvolt = <3300000>;
+                                               regulator-max-microvolt = <3300000>;
+                                       };
+                               };
+                       };
+               };
+       };
+
+       soc {
+               scmi_sram: sram@2ffff000 {
+                       compatible = "mmio-sram";
+                       reg = <0x2ffff000 0x1000>;
+                       #address-cells = <1>;
+                       #size-cells = <1>;
+                       ranges = <0 0x2ffff000 0x1000>;
+
+                       scmi_shm: scmi-sram@0 {
+                               compatible = "arm,scmi-shmem";
+                               reg = <0 0x80>;
+                       };
+               };
+       };
+};
+
+&reg11 {
+       status = "disabled";
+};
+
+&reg18 {
+       status = "disabled";
+};
+
+&usb33 {
+       status = "disabled";
+};
+
+&usbotg_hs {
+       usb33d-supply = <&scmi_usb33>;
+};
+
+&usbphyc {
+       vdda1v1-supply = <&scmi_reg11>;
+       vdda1v8-supply = <&scmi_reg18>;
+};
+
+/delete-node/ &clk_hse;
+/delete-node/ &clk_hsi;
+/delete-node/ &clk_lse;
+/delete-node/ &clk_lsi;
+/delete-node/ &clk_csi;
index 1b2fd34..e04dda5 100644
                status = "disabled";
        };
 
-       firmware {
-               optee: optee {
-                       compatible = "linaro,optee-tz";
-                       method = "smc";
-                       status = "disabled";
-               };
-
-               scmi: scmi {
-                       compatible = "linaro,scmi-optee";
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       linaro,optee-channel-id = <0>;
-                       shmem = <&scmi_shm>;
-                       status = "disabled";
-
-                       scmi_clk: protocol@14 {
-                               reg = <0x14>;
-                               #clock-cells = <1>;
-                       };
-
-                       scmi_reset: protocol@16 {
-                               reg = <0x16>;
-                               #reset-cells = <1>;
-                       };
-               };
-       };
-
        soc {
                compatible = "simple-bus";
                #address-cells = <1>;
                interrupt-parent = <&intc>;
                ranges;
 
-               scmi_sram: sram@2ffff000 {
-                       compatible = "mmio-sram";
-                       reg = <0x2ffff000 0x1000>;
-                       #address-cells = <1>;
-                       #size-cells = <1>;
-                       ranges = <0 0x2ffff000 0x1000>;
-
-                       scmi_shm: scmi-sram@0 {
-                               compatible = "arm,scmi-shmem";
-                               reg = <0 0x80>;
-                               status = "disabled";
-                       };
-               };
-
                timers2: timer@40000000 {
                        #address-cells = <1>;
                        #size-cells = <0>;
                        compatible = "st,stm32-cec";
                        reg = <0x40016000 0x400>;
                        interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
-                       clocks = <&rcc CEC_K>, <&clk_lse>;
+                       clocks = <&rcc CEC_K>, <&rcc CEC>;
                        clock-names = "cec", "hdmi-cec";
                        status = "disabled";
                };
                usbh_ohci: usb@5800c000 {
                        compatible = "generic-ohci";
                        reg = <0x5800c000 0x1000>;
-                       clocks = <&rcc USBH>, <&usbphyc>;
+                       clocks = <&usbphyc>, <&rcc USBH>;
                        resets = <&rcc USBH_R>;
                        interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>;
                        status = "disabled";
                usbh_ehci: usb@5800d000 {
                        compatible = "generic-ehci";
                        reg = <0x5800d000 0x1000>;
-                       clocks = <&rcc USBH>;
+                       clocks = <&usbphyc>, <&rcc USBH>;
                        resets = <&rcc USBH_R>;
                        interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
                        companion = <&usbh_ohci>;
index e3d3f3f..e539cc8 100644
@@ -7,6 +7,7 @@
 /dts-v1/;
 
 #include "stm32mp157a-dk1.dts"
+#include "stm32mp15-scmi.dtsi"
 
 / {
        model = "STMicroelectronics STM32MP157A-DK1 SCMI Discovery Board";
        clocks = <&scmi_clk CK_SCMI_MPU>;
 };
 
+&dsi {
+       clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
+};
+
 &gpioz {
        clocks = <&scmi_clk CK_SCMI_GPIOZ>;
 };
        resets = <&scmi_reset RST_SCMI_MCU>;
 };
 
-&optee {
-       status = "okay";
-};
-
 &rcc {
        compatible = "st,stm32mp1-rcc-secure", "syscon";
        clock-names = "hse", "hsi", "csi", "lse", "lsi";
 &rtc {
        clocks = <&scmi_clk CK_SCMI_RTCAPB>, <&scmi_clk CK_SCMI_RTC>;
 };
-
-&scmi {
-       status = "okay";
-};
-
-&scmi_shm {
-       status = "okay";
-};
index 45dcd29..97e4f94 100644
@@ -7,6 +7,7 @@
 /dts-v1/;
 
 #include "stm32mp157c-dk2.dts"
+#include "stm32mp15-scmi.dtsi"
 
 / {
        model = "STMicroelectronics STM32MP157C-DK2 SCMI Discovery Board";
@@ -34,6 +35,7 @@
 };
 
 &dsi {
+       phy-dsi-supply = <&scmi_reg18>;
        clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
 };
 
        resets = <&scmi_reset RST_SCMI_MCU>;
 };
 
-&optee {
-       status = "okay";
-};
-
 &rcc {
        compatible = "st,stm32mp1-rcc-secure", "syscon";
        clock-names = "hse", "hsi", "csi", "lse", "lsi";
 &rtc {
        clocks = <&scmi_clk CK_SCMI_RTCAPB>, <&scmi_clk CK_SCMI_RTC>;
 };
-
-&scmi {
-       status = "okay";
-};
-
-&scmi_shm {
-       status = "okay";
-};
index 458e0ca..9cf0a44 100644
@@ -7,6 +7,7 @@
 /dts-v1/;
 
 #include "stm32mp157c-ed1.dts"
+#include "stm32mp15-scmi.dtsi"
 
 / {
        model = "STMicroelectronics STM32MP157C-ED1 SCMI eval daughter";
        resets = <&scmi_reset RST_SCMI_CRYP1>;
 };
 
+&dsi {
+       clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
+};
+
 &gpioz {
        clocks = <&scmi_clk CK_SCMI_GPIOZ>;
 };
        resets = <&scmi_reset RST_SCMI_MCU>;
 };
 
-&optee {
-       status = "okay";
-};
-
 &rcc {
        compatible = "st,stm32mp1-rcc-secure", "syscon";
        clock-names = "hse", "hsi", "csi", "lse", "lsi";
 &rtc {
        clocks = <&scmi_clk CK_SCMI_RTCAPB>, <&scmi_clk CK_SCMI_RTC>;
 };
-
-&scmi {
-       status = "okay";
-};
-
-&scmi_shm {
-       status = "okay";
-};
index df9c113..3b9dd6f 100644
@@ -7,6 +7,7 @@
 /dts-v1/;
 
 #include "stm32mp157c-ev1.dts"
+#include "stm32mp15-scmi.dtsi"
 
 / {
        model = "STMicroelectronics STM32MP157C-EV1 SCMI eval daughter on eval mother";
@@ -35,6 +36,7 @@
 };
 
 &dsi {
+       phy-dsi-supply = <&scmi_reg18>;
        clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
 };
 
        resets = <&scmi_reset RST_SCMI_MCU>;
 };
 
-&optee {
-       status = "okay";
-};
-
 &rcc {
        compatible = "st,stm32mp1-rcc-secure", "syscon";
        clock-names = "hse", "hsi", "csi", "lse", "lsi";
 &rtc {
        clocks = <&scmi_clk CK_SCMI_RTCAPB>, <&scmi_clk CK_SCMI_RTC>;
 };
-
-&scmi {
-       status = "okay";
-};
-
-&scmi_shm {
-       status = "okay";
-};
index ca32446..f53086d 100644
@@ -93,6 +93,7 @@ CONFIG_REGULATOR_FIXED_VOLTAGE=y
 CONFIG_DRM=y
 CONFIG_DRM_PANEL_SEIKO_43WVF1G=y
 CONFIG_DRM_MXSFB=y
+CONFIG_FB=y
 CONFIG_FB_MODE_HELPERS=y
 CONFIG_LCD_CLASS_DEVICE=y
 CONFIG_BACKLIGHT_CLASS_DEVICE=y
index b1a43d7..df6d673 100644
@@ -202,7 +202,7 @@ static const struct wakeup_source_info ws_info[] = {
 
 static const struct of_device_id sama5d2_ws_ids[] = {
        { .compatible = "atmel,sama5d2-gem",            .data = &ws_info[0] },
-       { .compatible = "atmel,at91rm9200-rtc",         .data = &ws_info[1] },
+       { .compatible = "atmel,sama5d2-rtc",            .data = &ws_info[1] },
        { .compatible = "atmel,sama5d3-udc",            .data = &ws_info[2] },
        { .compatible = "atmel,at91rm9200-ohci",        .data = &ws_info[2] },
        { .compatible = "usb-ohci",                     .data = &ws_info[2] },
@@ -213,24 +213,24 @@ static const struct of_device_id sama5d2_ws_ids[] = {
 };
 
 static const struct of_device_id sam9x60_ws_ids[] = {
-       { .compatible = "atmel,at91sam9x5-rtc",         .data = &ws_info[1] },
+       { .compatible = "microchip,sam9x60-rtc",        .data = &ws_info[1] },
        { .compatible = "atmel,at91rm9200-ohci",        .data = &ws_info[2] },
        { .compatible = "usb-ohci",                     .data = &ws_info[2] },
        { .compatible = "atmel,at91sam9g45-ehci",       .data = &ws_info[2] },
        { .compatible = "usb-ehci",                     .data = &ws_info[2] },
-       { .compatible = "atmel,at91sam9260-rtt",        .data = &ws_info[4] },
+       { .compatible = "microchip,sam9x60-rtt",        .data = &ws_info[4] },
        { .compatible = "cdns,sam9x60-macb",            .data = &ws_info[5] },
        { /* sentinel */ }
 };
 
 static const struct of_device_id sama7g5_ws_ids[] = {
-       { .compatible = "atmel,at91sam9x5-rtc",         .data = &ws_info[1] },
+       { .compatible = "microchip,sama7g5-rtc",        .data = &ws_info[1] },
        { .compatible = "microchip,sama7g5-ohci",       .data = &ws_info[2] },
        { .compatible = "usb-ohci",                     .data = &ws_info[2] },
        { .compatible = "atmel,at91sam9g45-ehci",       .data = &ws_info[2] },
        { .compatible = "usb-ehci",                     .data = &ws_info[2] },
        { .compatible = "microchip,sama7g5-sdhci",      .data = &ws_info[3] },
-       { .compatible = "atmel,at91sam9260-rtt",        .data = &ws_info[4] },
+       { .compatible = "microchip,sama7g5-rtt",        .data = &ws_info[4] },
        { /* sentinel */ }
 };
 
@@ -1079,7 +1079,7 @@ securam_fail:
        return ret;
 }
 
-static void at91_pm_secure_init(void)
+static void __init at91_pm_secure_init(void)
 {
        int suspend_mode;
        struct arm_smccc_res res;
index 512943e..2e20362 100644
@@ -39,6 +39,7 @@ static int axxia_boot_secondary(unsigned int cpu, struct task_struct *idle)
                return -ENOENT;
 
        syscon = of_iomap(syscon_np, 0);
+       of_node_put(syscon_np);
        if (!syscon)
                return -ENOMEM;
 
index e4f4b20..3fc4ec8 100644
@@ -372,6 +372,7 @@ static void __init cns3xxx_init(void)
                /* De-Asscer SATA Reset */
                cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SATA));
        }
+       of_node_put(dn);
 
        dn = of_find_compatible_node(NULL, NULL, "cavium,cns3420-sdhci");
        if (of_device_is_available(dn)) {
@@ -385,6 +386,7 @@ static void __init cns3xxx_init(void)
                cns3xxx_pwr_clk_en(CNS3XXX_PWR_CLK_EN(SDIO));
                cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SDIO));
        }
+       of_node_put(dn);
 
        pm_power_off = cns3xxx_power_off;
 
index 8b48326..51a247c 100644
@@ -149,6 +149,7 @@ static void exynos_map_pmu(void)
        np = of_find_matching_node(NULL, exynos_dt_pmu_match);
        if (np)
                pmu_base_addr = of_iomap(np, 0);
+       of_node_put(np);
 }
 
 static void __init exynos_init_irq(void)
index 4b8ad72..32ac60b 100644
@@ -71,6 +71,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,
        }
 
        sram_base = of_iomap(node, 0);
+       of_node_put(node);
        if (!sram_base) {
                pr_err("Couldn't map SRAM registers\n");
                return;
@@ -91,6 +92,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,
        }
 
        scu_base = of_iomap(node, 0);
+       of_node_put(node);
        if (!scu_base) {
                pr_err("Couldn't map SCU registers\n");
                return;
index d1fdb60..c7c17c0 100644
@@ -218,13 +218,13 @@ void __init spear_setup_of_timer(void)
        irq = irq_of_parse_and_map(np, 0);
        if (!irq) {
                pr_err("%s: No irq passed for timer via DT\n", __func__);
-               return;
+               goto err_put_np;
        }
 
        gpt_base = of_iomap(np, 0);
        if (!gpt_base) {
                pr_err("%s: of iomap failed\n", __func__);
-               return;
+               goto err_put_np;
        }
 
        gpt_clk = clk_get_sys("gpt0", NULL);
@@ -239,6 +239,8 @@ void __init spear_setup_of_timer(void)
                goto err_prepare_enable_clk;
        }
 
+       of_node_put(np);
+
        spear_clockevent_init(irq);
        spear_clocksource_init();
 
@@ -248,4 +250,6 @@ err_prepare_enable_clk:
        clk_put(gpt_clk);
 err_iomap:
        iounmap(gpt_base);
+err_put_np:
+       of_node_put(np);
 }
index 84a1cea..309648c 100644
@@ -63,11 +63,12 @@ out:
 
 unsigned long __pfn_to_mfn(unsigned long pfn)
 {
-       struct rb_node *n = phys_to_mach.rb_node;
+       struct rb_node *n;
        struct xen_p2m_entry *entry;
        unsigned long irqflags;
 
        read_lock_irqsave(&p2m_lock, irqflags);
+       n = phys_to_mach.rb_node;
        while (n) {
                entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
                if (entry->pfn <= pfn &&
@@ -152,10 +153,11 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
        int rc;
        unsigned long irqflags;
        struct xen_p2m_entry *p2m_entry;
-       struct rb_node *n = phys_to_mach.rb_node;
+       struct rb_node *n;
 
        if (mfn == INVALID_P2M_ENTRY) {
                write_lock_irqsave(&p2m_lock, irqflags);
+               n = phys_to_mach.rb_node;
                while (n) {
                        p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
                        if (p2m_entry->pfn <= pfn &&
index 3170661..9c233c5 100644
                        interrupts = <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>;
                        pinctrl-names = "default";
                        pinctrl-0 = <&uart0_bus>;
-                       clocks = <&cmu_peri CLK_GOUT_UART0_EXT_UCLK>,
-                                <&cmu_peri CLK_GOUT_UART0_PCLK>;
+                       clocks = <&cmu_peri CLK_GOUT_UART0_PCLK>,
+                                <&cmu_peri CLK_GOUT_UART0_EXT_UCLK>;
                        clock-names = "uart", "clk_uart_baud0";
                        samsung,uart-fifosize = <64>;
                        status = "disabled";
                        interrupts = <GIC_SPI 247 IRQ_TYPE_LEVEL_HIGH>;
                        pinctrl-names = "default";
                        pinctrl-0 = <&uart1_bus>;
-                       clocks = <&cmu_peri CLK_GOUT_UART1_EXT_UCLK>,
-                                <&cmu_peri CLK_GOUT_UART1_PCLK>;
+                       clocks = <&cmu_peri CLK_GOUT_UART1_PCLK>,
+                                <&cmu_peri CLK_GOUT_UART1_EXT_UCLK>;
                        clock-names = "uart", "clk_uart_baud0";
                        samsung,uart-fifosize = <256>;
                        status = "disabled";
                        interrupts = <GIC_SPI 279 IRQ_TYPE_LEVEL_HIGH>;
                        pinctrl-names = "default";
                        pinctrl-0 = <&uart2_bus>;
-                       clocks = <&cmu_peri CLK_GOUT_UART2_EXT_UCLK>,
-                                <&cmu_peri CLK_GOUT_UART2_PCLK>;
+                       clocks = <&cmu_peri CLK_GOUT_UART2_PCLK>,
+                                <&cmu_peri CLK_GOUT_UART2_EXT_UCLK>;
                        clock-names = "uart", "clk_uart_baud0";
                        samsung,uart-fifosize = <256>;
                        status = "disabled";
index 4c3ac42..9a4de73 100644
 &iomuxc {
        pinctrl_eqos: eqosgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC                             0x3
-                       MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO                           0x3
-                       MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0                       0x91
-                       MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1                       0x91
-                       MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2                       0x91
-                       MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3                       0x91
-                       MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK       0x91
-                       MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL                 0x91
-                       MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0                       0x1f
-                       MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1                       0x1f
-                       MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2                       0x1f
-                       MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3                       0x1f
-                       MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL                 0x1f
-                       MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK       0x1f
-                       MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22                               0x19
+                       MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC                             0x2
+                       MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO                           0x2
+                       MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0                       0x90
+                       MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1                       0x90
+                       MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2                       0x90
+                       MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3                       0x90
+                       MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK       0x90
+                       MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL                 0x90
+                       MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0                       0x16
+                       MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1                       0x16
+                       MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2                       0x16
+                       MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3                       0x16
+                       MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL                 0x16
+                       MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK       0x16
+                       MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22                               0x10
                >;
        };
 
        pinctrl_fec: fecgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC               0x3
-                       MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO              0x3
-                       MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0         0x91
-                       MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1         0x91
-                       MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2         0x91
-                       MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3         0x91
-                       MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC          0x91
-                       MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL      0x91
-                       MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0         0x1f
-                       MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1         0x1f
-                       MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2         0x1f
-                       MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3         0x1f
-                       MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL      0x1f
-                       MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC         0x1f
-                       MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02              0x19
+                       MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC               0x2
+                       MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO              0x2
+                       MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0         0x90
+                       MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1         0x90
+                       MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2         0x90
+                       MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3         0x90
+                       MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC          0x90
+                       MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL      0x90
+                       MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0         0x16
+                       MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1         0x16
+                       MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2         0x16
+                       MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3         0x16
+                       MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL      0x16
+                       MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC         0x16
+                       MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02              0x10
                >;
        };
 
 
        pinctrl_gpio_led: gpioledgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16   0x19
+                       MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16   0x140
                >;
        };
 
        pinctrl_i2c1: i2c1grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL         0x400001c3
-                       MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA         0x400001c3
+                       MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL         0x400001c2
+                       MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA         0x400001c2
                >;
        };
 
        pinctrl_i2c3: i2c3grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL         0x400001c3
-                       MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA         0x400001c3
+                       MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL         0x400001c2
+                       MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA         0x400001c2
                >;
        };
 
        pinctrl_i2c5: i2c5grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SPDIF_RX__I2C5_SDA         0x400001c3
-                       MX8MP_IOMUXC_SPDIF_TX__I2C5_SCL         0x400001c3
+                       MX8MP_IOMUXC_SPDIF_RX__I2C5_SDA         0x400001c2
+                       MX8MP_IOMUXC_SPDIF_TX__I2C5_SCL         0x400001c2
                >;
        };
 
 
        pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19    0x41
+                       MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19    0x40
                >;
        };
 
        pinctrl_uart2: uart2grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX    0x49
-                       MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX    0x49
+                       MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX    0x140
+                       MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX    0x140
                >;
        };
 
        pinctrl_usb1_vbus: usb1grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR   0x19
+                       MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR   0x10
                >;
        };
 
                        MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1    0x1d0
                        MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2    0x1d0
                        MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3    0x1d0
-                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
+                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
                >;
        };
 
                        MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1    0x1d4
                        MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2    0x1d4
                        MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3    0x1d4
-                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
+                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
                >;
        };
 
                        MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1    0x1d6
                        MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2    0x1d6
                        MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3    0x1d6
-                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
+                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
                >;
        };
 
index 70a701a..dd703b6 100644
 &iomuxc {
        pinctrl_eqos: eqosgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC                             0x3
-                       MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO                           0x3
-                       MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0                       0x91
-                       MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1                       0x91
-                       MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2                       0x91
-                       MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3                       0x91
-                       MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK       0x91
-                       MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL                 0x91
-                       MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0                       0x1f
-                       MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1                       0x1f
-                       MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2                       0x1f
-                       MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3                       0x1f
-                       MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL                 0x1f
-                       MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK       0x1f
-                       MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07                            0x19
+                       MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC                             0x2
+                       MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO                           0x2
+                       MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0                       0x90
+                       MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1                       0x90
+                       MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2                       0x90
+                       MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3                       0x90
+                       MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK       0x90
+                       MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL                 0x90
+                       MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0                       0x16
+                       MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1                       0x16
+                       MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2                       0x16
+                       MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3                       0x16
+                       MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL                 0x16
+                       MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK       0x16
+                       MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07                            0x10
                >;
        };
 
        pinctrl_uart2: uart2grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX    0x49
-                       MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX    0x49
+                       MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX    0x40
+                       MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX    0x40
                >;
        };
 
                        MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1    0x1d0
                        MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2    0x1d0
                        MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3    0x1d0
-                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
+                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
                >;
        };
 
 
        pinctrl_reg_usb1: regusb1grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_GPIO1_IO14__GPIO1_IO14     0x19
+                       MX8MP_IOMUXC_GPIO1_IO14__GPIO1_IO14     0x10
                >;
        };
 
        pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19    0x41
+                       MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19    0x40
                >;
        };
 };
index 984a6b9..6aa720b 100644
 &iomuxc {
        pinctrl_eqos: eqosgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC                     0x3
-                       MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO                   0x3
-                       MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0               0x91
-                       MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1               0x91
-                       MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2               0x91
-                       MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3               0x91
-                       MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK       0x91
-                       MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL         0x91
-                       MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0               0x1f
-                       MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1               0x1f
-                       MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2               0x1f
-                       MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3               0x1f
-                       MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL         0x1f
-                       MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK       0x1f
+                       MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC                     0x2
+                       MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO                   0x2
+                       MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0               0x90
+                       MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1               0x90
+                       MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2               0x90
+                       MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3               0x90
+                       MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK       0x90
+                       MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL         0x90
+                       MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0               0x16
+                       MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1               0x16
+                       MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2               0x16
+                       MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3               0x16
+                       MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL         0x16
+                       MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK       0x16
                        MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20                      0x10
                >;
        };
 
        pinctrl_i2c2: i2c2grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL         0x400001c3
-                       MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA         0x400001c3
+                       MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL         0x400001c2
+                       MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA         0x400001c2
                >;
        };
 
        pinctrl_i2c2_gpio: i2c2gpiogrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16       0x1e3
-                       MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17       0x1e3
+                       MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16       0x1e2
+                       MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17       0x1e2
                >;
        };
 
        pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19    0x41
+                       MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19    0x40
                >;
        };
 
        pinctrl_uart1: uart1grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX    0x49
-                       MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX    0x49
+                       MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX    0x40
+                       MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX    0x40
                >;
        };
 
                        MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1    0x1d0
                        MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2    0x1d0
                        MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3    0x1d0
-                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
+                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
                >;
        };
 
                        MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1    0x1d4
                        MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2    0x1d4
                        MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3    0x1d4
-                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
+                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
                >;
        };
 
                        MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1    0x1d6
                        MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2    0x1d6
                        MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3    0x1d6
-                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
+                       MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
                >;
        };
 };
index 101d311..5212155 100644 (file)
 
        pinctrl_hog: hoggrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09     0x40000041 /* DIO0 */
-                       MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11     0x40000041 /* DIO1 */
-                       MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14       0x40000041 /* M2SKT_OFF# */
-                       MX8MP_IOMUXC_SD2_DATA2__GPIO2_IO17      0x40000159 /* PCIE1_WDIS# */
-                       MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18      0x40000159 /* PCIE2_WDIS# */
-                       MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14        0x40000159 /* PCIE3_WDIS# */
-                       MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06    0x40000041 /* M2SKT_RST# */
-                       MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18      0x40000159 /* M2SKT_WDIS# */
-                       MX8MP_IOMUXC_NAND_ALE__GPIO3_IO00       0x40000159 /* M2SKT_GDIS# */
+                       MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09     0x40000040 /* DIO0 */
+                       MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11     0x40000040 /* DIO1 */
+                       MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14       0x40000040 /* M2SKT_OFF# */
+                       MX8MP_IOMUXC_SD2_DATA2__GPIO2_IO17      0x40000150 /* PCIE1_WDIS# */
+                       MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18      0x40000150 /* PCIE2_WDIS# */
+                       MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14        0x40000150 /* PCIE3_WDIS# */
+                       MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06    0x40000040 /* M2SKT_RST# */
+                       MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18      0x40000150 /* M2SKT_WDIS# */
+                       MX8MP_IOMUXC_NAND_ALE__GPIO3_IO00       0x40000150 /* M2SKT_GDIS# */
                        MX8MP_IOMUXC_SAI3_TXD__GPIO5_IO01       0x40000104 /* UART_TERM */
                        MX8MP_IOMUXC_SAI3_TXFS__GPIO4_IO31      0x40000104 /* UART_RS485 */
                        MX8MP_IOMUXC_SAI3_TXC__GPIO5_IO00       0x40000104 /* UART_HALF */
 
        pinctrl_accel: accelgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_GPIO1_IO07__GPIO1_IO07     0x159
+                       MX8MP_IOMUXC_GPIO1_IO07__GPIO1_IO07     0x150
                >;
        };
 
        pinctrl_eqos: eqosgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC                             0x3
-                       MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO                           0x3
-                       MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0               0x91
-                       MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1               0x91
-                       MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2               0x91
-                       MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3               0x91
-                       MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK       0x91
-                       MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL         0x91
-                       MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0               0x1f
-                       MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1               0x1f
-                       MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2               0x1f
-                       MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3               0x1f
-                       MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL         0x1f
-                       MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK       0x1f
-                       MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30               0x141 /* RST# */
-                       MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28              0x159 /* IRQ# */
+                       MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC                             0x2
+                       MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO                           0x2
+                       MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0               0x90
+                       MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1               0x90
+                       MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2               0x90
+                       MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3               0x90
+                       MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK       0x90
+                       MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL         0x90
+                       MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0               0x16
+                       MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1               0x16
+                       MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2               0x16
+                       MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3               0x16
+                       MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL         0x16
+                       MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK       0x16
+                       MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30               0x140 /* RST# */
+                       MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28              0x150 /* IRQ# */
                >;
        };
 
        pinctrl_fec: fecgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0         0x91
-                       MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1         0x91
-                       MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2         0x91
-                       MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3         0x91
-                       MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC          0x91
-                       MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL      0x91
-                       MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0         0x1f
-                       MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1         0x1f
-                       MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2         0x1f
-                       MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3         0x1f
-                       MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL      0x1f
-                       MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC         0x1f
-                       MX8MP_IOMUXC_SAI1_RXFS__ENET1_1588_EVENT0_IN    0x141
-                       MX8MP_IOMUXC_SAI1_RXC__ENET1_1588_EVENT0_OUT    0x141
+                       MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0         0x90
+                       MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1         0x90
+                       MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2         0x90
+                       MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3         0x90
+                       MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC          0x90
+                       MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL      0x90
+                       MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0         0x16
+                       MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1         0x16
+                       MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2         0x16
+                       MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3         0x16
+                       MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL      0x16
+                       MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC         0x16
+                       MX8MP_IOMUXC_SAI1_RXFS__ENET1_1588_EVENT0_IN    0x140
+                       MX8MP_IOMUXC_SAI1_RXC__ENET1_1588_EVENT0_OUT    0x140
                >;
        };
 
 
        pinctrl_gsc: gscgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20      0x159
+                       MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20      0x150
                >;
        };
 
        pinctrl_i2c1: i2c1grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL         0x400001c3
-                       MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA         0x400001c3
+                       MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL         0x400001c2
+                       MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA         0x400001c2
                >;
        };
 
        pinctrl_i2c2: i2c2grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL         0x400001c3
-                       MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA         0x400001c3
+                       MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL         0x400001c2
+                       MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA         0x400001c2
                >;
        };
 
        pinctrl_i2c3: i2c3grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL         0x400001c3
-                       MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA         0x400001c3
+                       MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL         0x400001c2
+                       MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA         0x400001c2
                >;
        };
 
        pinctrl_i2c4: i2c4grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_I2C4_SCL__I2C4_SCL         0x400001c3
-                       MX8MP_IOMUXC_I2C4_SDA__I2C4_SDA         0x400001c3
+                       MX8MP_IOMUXC_I2C4_SCL__I2C4_SCL         0x400001c2
+                       MX8MP_IOMUXC_I2C4_SDA__I2C4_SDA         0x400001c2
                >;
        };
 
        pinctrl_ksz: kszgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SAI3_RXC__GPIO4_IO29       0x159 /* IRQ# */
-                       MX8MP_IOMUXC_SAI3_MCLK__GPIO5_IO02      0x141 /* RST# */
+                       MX8MP_IOMUXC_SAI3_RXC__GPIO4_IO29       0x150 /* IRQ# */
+                       MX8MP_IOMUXC_SAI3_MCLK__GPIO5_IO02      0x140 /* RST# */
                >;
        };
 
        pinctrl_gpio_leds: ledgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15      0x19
-                       MX8MP_IOMUXC_SD2_DATA1__GPIO2_IO16      0x19
+                       MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15      0x10
+                       MX8MP_IOMUXC_SD2_DATA1__GPIO2_IO16      0x10
                >;
        };
 
        pinctrl_pmic: pmicgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07    0x141
+                       MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07    0x140
                >;
        };
 
        pinctrl_pps: ppsgrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_GPIO1_IO12__GPIO1_IO12     0x141
+                       MX8MP_IOMUXC_GPIO1_IO12__GPIO1_IO12     0x140
                >;
        };
 
 
        pinctrl_reg_usb2: regusb2grp {
                fsl,pins = <
-                       MX8MP_IOMUXC_GPIO1_IO06__GPIO1_IO06     0x141
+                       MX8MP_IOMUXC_GPIO1_IO06__GPIO1_IO06     0x140
                >;
        };
 
        pinctrl_reg_wifi: regwifigrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_NAND_DATA03__GPIO3_IO09    0x119
+                       MX8MP_IOMUXC_NAND_DATA03__GPIO3_IO09    0x110
                >;
        };
 
 
        pinctrl_uart3_gpio: uart3gpiogrp {
                fsl,pins = <
-                       MX8MP_IOMUXC_NAND_DATA02__GPIO3_IO08    0x119
+                       MX8MP_IOMUXC_NAND_DATA02__GPIO3_IO08    0x110
                >;
        };
 
index d9542df..410d0d5 100644 (file)
                                        pgc_ispdwp: power-domain@18 {
                                                #power-domain-cells = <0>;
                                                reg = <IMX8MP_POWER_DOMAIN_MEDIAMIX_ISPDWP>;
-                                               clocks = <&clk IMX8MP_CLK_MEDIA_ISP_DIV>;
+                                               clocks = <&clk IMX8MP_CLK_MEDIA_ISP_ROOT>;
                                        };
                                };
                        };
index 59ea8a2..824d401 100644 (file)
@@ -79,7 +79,7 @@
                };
        };
 
-       soc {
+       soc@0 {
                compatible = "simple-bus";
                #address-cells = <1>;
                #size-cells = <1>;
index 3b0cc85..71e373b 100644 (file)
@@ -74,7 +74,7 @@
                vdd_l17_29-supply = <&vph_pwr>;
                vdd_l20_21-supply = <&vph_pwr>;
                vdd_l25-supply = <&pm8994_s5>;
-               vdd_lvs1_2 = <&pm8994_s4>;
+               vdd_lvs1_2-supply = <&pm8994_s4>;
 
                /* S1, S2, S6 and S12 are managed by RPMPD */
 
index 7748b74..afa91ca 100644 (file)
                vdd_l17_29-supply = <&vph_pwr>;
                vdd_l20_21-supply = <&vph_pwr>;
                vdd_l25-supply = <&pm8994_s5>;
-               vdd_lvs1_2 = <&pm8994_s4>;
+               vdd_lvs1_2-supply = <&pm8994_s4>;
 
                /* S1, S2, S6 and S12 are managed by RPMPD */
 
index 0318d42..1ac2913 100644 (file)
                CPU6: cpu@102 {
                        device_type = "cpu";
                        compatible = "arm,cortex-a57";
-                       reg = <0x0 0x101>;
+                       reg = <0x0 0x102>;
                        enable-method = "psci";
                        next-level-cache = <&L2_1>;
                };
                CPU7: cpu@103 {
                        device_type = "cpu";
                        compatible = "arm,cortex-a57";
-                       reg = <0x0 0x101>;
+                       reg = <0x0 0x103>;
                        enable-method = "psci";
                        next-level-cache = <&L2_1>;
                };
index 9b3e3d1..d1e2df5 100644 (file)
@@ -5,7 +5,7 @@
  * Copyright 2021 Google LLC.
  */
 
-#include "sc7180-trogdor.dtsi"
+/* This file must be included after sc7180-trogdor.dtsi */
 
 / {
        /* BOARD-SPECIFIC TOP LEVEL NODES */
index fe2369c..88f6a7d 100644 (file)
@@ -5,7 +5,7 @@
  * Copyright 2020 Google LLC.
  */
 
-#include "sc7180-trogdor.dtsi"
+/* This file must be included after sc7180-trogdor.dtsi */
 
 &ap_sar_sensor {
        semtech,cs0-ground;
index 0692ae0..038538c 100644 (file)
 
                        power-domains = <&dispcc MDSS_GDSC>;
 
-                       clocks = <&gcc GCC_DISP_AHB_CLK>,
+                       clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
                                 <&dispcc DISP_CC_MDSS_MDP_CLK>;
                        clock-names = "iface", "core";
 
index 7d08fad..b87756b 100644 (file)
                        reg = <0x0 0x17100000 0x0 0x10000>,     /* GICD */
                              <0x0 0x17180000 0x0 0x200000>;    /* GICR * 8 */
                        interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+                       #address-cells = <2>;
+                       #size-cells = <2>;
+                       ranges;
+
+                       gic_its: msi-controller@17140000 {
+                               compatible = "arm,gic-v3-its";
+                               reg = <0x0 0x17140000 0x0 0x20000>;
+                               msi-controller;
+                               #msi-cells = <1>;
+                       };
                };
 
                timer@17420000 {
 
                        iommus = <&apps_smmu 0xe0 0x0>;
 
-                       interconnects = <&aggre1_noc MASTER_UFS_MEM &mc_virt SLAVE_EBI1>,
-                                       <&gem_noc MASTER_APPSS_PROC &config_noc SLAVE_UFS_MEM_CFG>;
+                       interconnects = <&aggre1_noc MASTER_UFS_MEM 0 &mc_virt SLAVE_EBI1 0>,
+                                       <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_UFS_MEM_CFG 0>;
                        interconnect-names = "ufs-ddr", "cpu-ufs";
                        clock-names =
                                "core_clk",
index f64b368..cdb5305 100644 (file)
                clock-names = "clk_ahb", "clk_xin";
                mmc-ddr-1_8v;
                mmc-hs200-1_8v;
-               mmc-hs400-1_8v;
                ti,trm-icp = <0x2>;
                ti,otap-del-sel-legacy = <0x0>;
                ti,otap-del-sel-mmc-hs = <0x0>;
                ti,otap-del-sel-ddr52 = <0x6>;
                ti,otap-del-sel-hs200 = <0x7>;
-               ti,otap-del-sel-hs400 = <0x4>;
        };
 
        sdhci1: mmc@fa00000 {
index be7f392..19966f7 100644 (file)
@@ -33,7 +33,7 @@
                ranges;
                #interrupt-cells = <3>;
                interrupt-controller;
-               reg = <0x00 0x01800000 0x00 0x200000>, /* GICD */
+               reg = <0x00 0x01800000 0x00 0x100000>, /* GICD */
                      <0x00 0x01900000 0x00 0x100000>, /* GICR */
                      <0x00 0x6f000000 0x00 0x2000>,   /* GICC */
                      <0x00 0x6f010000 0x00 0x1000>,   /* GICH */
index 79fac13..8d88433 100644 (file)
@@ -3101,7 +3101,6 @@ void cpu_set_feature(unsigned int num)
        WARN_ON(num >= MAX_CPU_FEATURES);
        elf_hwcap |= BIT(num);
 }
-EXPORT_SYMBOL_GPL(cpu_set_feature);
 
 bool cpu_have_feature(unsigned int num)
 {
index d42a205..bd5df50 100644 (file)
@@ -102,7 +102,6 @@ SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
  * x19-x29 per the AAPCS, and we created frame records upon entry, so we need
  * to restore x0-x8, x29, and x30.
  */
-ftrace_common_return:
        /* Restore function arguments */
        ldp     x0, x1, [sp]
        ldp     x2, x3, [sp, #S_X2]
index f447c4a..ea5dc7c 100644 (file)
@@ -78,47 +78,76 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
 }
 
 /*
- * Turn on the call to ftrace_caller() in instrumented function
+ * Find the address the callsite must branch to in order to reach '*addr'.
+ *
+ * Due to the limited range of 'BL' instructions, modules may be placed too far
+ * away to branch directly and must use a PLT.
+ *
+ * Returns true when '*addr' contains a reachable target address, or has been
+ * modified to contain a PLT address. Returns false otherwise.
  */
-int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+static bool ftrace_find_callable_addr(struct dyn_ftrace *rec,
+                                     struct module *mod,
+                                     unsigned long *addr)
 {
        unsigned long pc = rec->ip;
-       u32 old, new;
-       long offset = (long)pc - (long)addr;
+       long offset = (long)*addr - (long)pc;
+       struct plt_entry *plt;
 
-       if (offset < -SZ_128M || offset >= SZ_128M) {
-               struct module *mod;
-               struct plt_entry *plt;
+       /*
+        * When the target is within range of the 'BL' instruction, use 'addr'
+        * as-is and branch to that directly.
+        */
+       if (offset >= -SZ_128M && offset < SZ_128M)
+               return true;
 
-               if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
-                       return -EINVAL;
+       /*
+        * When the target is outside of the range of a 'BL' instruction, we
+        * must use a PLT to reach it. We can only place PLTs for modules, and
+        * only when module PLT support is built-in.
+        */
+       if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
+               return false;
 
-               /*
-                * On kernels that support module PLTs, the offset between the
-                * branch instruction and its target may legally exceed the
-                * range of an ordinary relative 'bl' opcode. In this case, we
-                * need to branch via a trampoline in the module.
-                *
-                * NOTE: __module_text_address() must be called with preemption
-                * disabled, but we can rely on ftrace_lock to ensure that 'mod'
-                * retains its validity throughout the remainder of this code.
-                */
+       /*
+        * 'mod' is only set at module load time, but if we end up
+        * dealing with an out-of-range condition, we can assume it
+        * is due to a module being loaded far away from the kernel.
+        *
+        * NOTE: __module_text_address() must be called with preemption
+        * disabled, but we can rely on ftrace_lock to ensure that 'mod'
+        * retains its validity throughout the remainder of this code.
+        */
+       if (!mod) {
                preempt_disable();
                mod = __module_text_address(pc);
                preempt_enable();
+       }
 
-               if (WARN_ON(!mod))
-                       return -EINVAL;
+       if (WARN_ON(!mod))
+               return false;
 
-               plt = get_ftrace_plt(mod, addr);
-               if (!plt) {
-                       pr_err("ftrace: no module PLT for %ps\n", (void *)addr);
-                       return -EINVAL;
-               }
-
-               addr = (unsigned long)plt;
+       plt = get_ftrace_plt(mod, *addr);
+       if (!plt) {
+               pr_err("ftrace: no module PLT for %ps\n", (void *)*addr);
+               return false;
        }
 
+       *addr = (unsigned long)plt;
+       return true;
+}
+
+/*
+ * Turn on the call to ftrace_caller() in instrumented function
+ */
+int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+{
+       unsigned long pc = rec->ip;
+       u32 old, new;
+
+       if (!ftrace_find_callable_addr(rec, NULL, &addr))
+               return -EINVAL;
+
        old = aarch64_insn_gen_nop();
        new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
 
@@ -132,6 +161,11 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
        unsigned long pc = rec->ip;
        u32 old, new;
 
+       if (!ftrace_find_callable_addr(rec, NULL, &old_addr))
+               return -EINVAL;
+       if (!ftrace_find_callable_addr(rec, NULL, &addr))
+               return -EINVAL;
+
        old = aarch64_insn_gen_branch_imm(pc, old_addr,
                                          AARCH64_INSN_BRANCH_LINK);
        new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
@@ -181,54 +215,15 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
                    unsigned long addr)
 {
        unsigned long pc = rec->ip;
-       bool validate = true;
        u32 old = 0, new;
-       long offset = (long)pc - (long)addr;
 
-       if (offset < -SZ_128M || offset >= SZ_128M) {
-               u32 replaced;
-
-               if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
-                       return -EINVAL;
-
-               /*
-                * 'mod' is only set at module load time, but if we end up
-                * dealing with an out-of-range condition, we can assume it
-                * is due to a module being loaded far away from the kernel.
-                */
-               if (!mod) {
-                       preempt_disable();
-                       mod = __module_text_address(pc);
-                       preempt_enable();
-
-                       if (WARN_ON(!mod))
-                               return -EINVAL;
-               }
-
-               /*
-                * The instruction we are about to patch may be a branch and
-                * link instruction that was redirected via a PLT entry. In
-                * this case, the normal validation will fail, but we can at
-                * least check that we are dealing with a branch and link
-                * instruction that points into the right module.
-                */
-               if (aarch64_insn_read((void *)pc, &replaced))
-                       return -EFAULT;
-
-               if (!aarch64_insn_is_bl(replaced) ||
-                   !within_module(pc + aarch64_get_branch_offset(replaced),
-                                  mod))
-                       return -EINVAL;
-
-               validate = false;
-       } else {
-               old = aarch64_insn_gen_branch_imm(pc, addr,
-                                                 AARCH64_INSN_BRANCH_LINK);
-       }
+       if (!ftrace_find_callable_addr(rec, mod, &addr))
+               return -EINVAL;
 
+       old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
        new = aarch64_insn_gen_nop();
 
-       return ftrace_modify_code(pc, old, new, validate);
+       return ftrace_modify_code(pc, old, new, true);
 }
 
 void arch_ftrace_update_code(int command)
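
The whole refactor above hangs on one arithmetic fact: an AArch64 BL instruction encodes a signed 26-bit word offset, so it can reach only targets within plus or minus 128 MiB of the callsite; anything further away needs a PLT. A standalone sketch of the reachability test, mirroring the check in ftrace_find_callable_addr():

#include <stdbool.h>
#include <stdint.h>

#define SZ_128M	(128 * 1024 * 1024LL)

/* Can a BL at 'pc' branch directly to 'target', without a PLT? */
static bool bl_can_reach(uint64_t pc, uint64_t target)
{
	int64_t offset = (int64_t)(target - pc);

	return offset >= -SZ_128M && offset < SZ_128M;
}
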
index cf3a759..fea3223 100644 (file)
@@ -303,14 +303,13 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
        early_fixmap_init();
        early_ioremap_init();
 
+       setup_machine_fdt(__fdt_pointer);
+
        /*
         * Initialise the static keys early as they may be enabled by the
-        * cpufeature code, early parameters, and DT setup.
+        * cpufeature code and early parameters.
         */
        jump_label_init();
-
-       setup_machine_fdt(__fdt_pointer);
-
        parse_early_param();
 
        /*
index a018814..83a7f61 100644 (file)
@@ -2112,11 +2112,11 @@ static int finalize_hyp_mode(void)
                return 0;
 
        /*
-        * Exclude HYP BSS from kmemleak so that it doesn't get peeked
-        * at, which would end badly once the section is inaccessible.
-        * None of other sections should ever be introspected.
+        * Exclude HYP sections from kmemleak so that they don't get peeked
+        * at, which would end badly once inaccessible.
         */
        kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start);
+       kmemleak_free_part(__va(hyp_mem_base), hyp_mem_size);
        return pkvm_drop_host_privileges();
 }
 
index 0ea6cc2..21c9079 100644 (file)
@@ -218,8 +218,6 @@ SYM_FUNC_ALIAS(__dma_flush_area, __pi___dma_flush_area)
  */
 SYM_FUNC_START(__pi___dma_map_area)
        add     x1, x0, x1
-       cmp     w2, #DMA_FROM_DEVICE
-       b.eq    __pi_dcache_inval_poc
        b       __pi_dcache_clean_poc
 SYM_FUNC_END(__pi___dma_map_area)
 SYM_FUNC_ALIAS(__dma_map_area, __pi___dma_map_area)
index e2a5ec9..3618ef3 100644 (file)
@@ -214,6 +214,19 @@ static pte_t get_clear_contig(struct mm_struct *mm,
        return orig_pte;
 }
 
+static pte_t get_clear_contig_flush(struct mm_struct *mm,
+                                   unsigned long addr,
+                                   pte_t *ptep,
+                                   unsigned long pgsize,
+                                   unsigned long ncontig)
+{
+       pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
+       struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+
+       flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
+       return orig_pte;
+}
+
 /*
  * Changing some bits of contiguous entries requires us to follow a
  * Break-Before-Make approach, breaking the whole contiguous set
@@ -447,19 +460,20 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
        int ncontig, i;
        size_t pgsize = 0;
        unsigned long pfn = pte_pfn(pte), dpfn;
+       struct mm_struct *mm = vma->vm_mm;
        pgprot_t hugeprot;
        pte_t orig_pte;
 
        if (!pte_cont(pte))
                return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
 
-       ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
+       ncontig = find_num_contig(mm, addr, ptep, &pgsize);
        dpfn = pgsize >> PAGE_SHIFT;
 
        if (!__cont_access_flags_changed(ptep, pte, ncontig))
                return 0;
 
-       orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
+       orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 
        /* Make sure we don't lose the dirty or young state */
        if (pte_dirty(orig_pte))
@@ -470,7 +484,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 
        hugeprot = pte_pgprot(pte);
        for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
-               set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
+               set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
 
        return 1;
 }
@@ -492,7 +506,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
        ncontig = find_num_contig(mm, addr, ptep, &pgsize);
        dpfn = pgsize >> PAGE_SHIFT;
 
-       pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
+       pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
        pte = pte_wrprotect(pte);
 
        hugeprot = pte_pgprot(pte);
@@ -505,17 +519,15 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
                            unsigned long addr, pte_t *ptep)
 {
+       struct mm_struct *mm = vma->vm_mm;
        size_t pgsize;
        int ncontig;
-       pte_t orig_pte;
 
        if (!pte_cont(READ_ONCE(*ptep)))
                return ptep_clear_flush(vma, addr, ptep);
 
-       ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
-       orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
-       flush_tlb_range(vma, addr, addr + pgsize * ncontig);
-       return orig_pte;
+       ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+       return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 }
 
 static int __init hugetlbpage_init(void)
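
The hugetlb hunks above fold the recurring clear-then-flush pair into get_clear_contig_flush(); per the Break-Before-Make comment in this file, the TLB invalidation for a contiguous set of entries has to land after the old entries are cleared and before the replacements are written. Condensed from the hunks (ordering sketch, not compilable on its own):

/* Break-before-make for a contiguous PTE range, as the helper enforces:
 *
 *   orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
 *   flush_tlb_range(&vma, addr, addr + pgsize * ncontig);
 *   ... only now install the new entries with set_pte_at() ...
 */
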
index 3f33c89..9a133e4 100644 (file)
@@ -12,10 +12,9 @@ static inline unsigned long exception_era(struct pt_regs *regs)
        return regs->csr_era;
 }
 
-static inline int compute_return_era(struct pt_regs *regs)
+static inline void compute_return_era(struct pt_regs *regs)
 {
        regs->csr_era += 4;
-       return 0;
 }
 
 #endif /* _ASM_BRANCH_H */
index 5dc84d8..d9e86cf 100644 (file)
@@ -426,6 +426,11 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 
 #define kern_addr_valid(addr)  (1)
 
+static inline unsigned long pmd_pfn(pmd_t pmd)
+{
+       return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT;
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
 /* We don't have hardware dirty/accessed bits, generic_pmdp_establish is fine.*/
@@ -497,11 +502,6 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd)
        return pmd;
 }
 
-static inline unsigned long pmd_pfn(pmd_t pmd)
-{
-       return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT;
-}
-
 static inline struct page *pmd_page(pmd_t pmd)
 {
        if (pmd_trans_huge(pmd))
index 6c87ea3..529ab8f 100644 (file)
@@ -263,7 +263,7 @@ void cpu_probe(void)
 
        c->cputype      = CPU_UNKNOWN;
        c->processor_id = read_cpucfg(LOONGARCH_CPUCFG0);
-       c->fpu_vers     = (read_cpucfg(LOONGARCH_CPUCFG2) >> 3) & 0x3;
+       c->fpu_vers     = (read_cpucfg(LOONGARCH_CPUCFG2) & CPUCFG2_FPVERS) >> 3;
 
        c->fpu_csr0     = FPU_CSR_RN;
        c->fpu_mask     = FPU_CSR_RSVD;
index e596dfc..d01e62d 100644 (file)
@@ -14,8 +14,6 @@
 
        __REF
 
-SYM_ENTRY(_stext, SYM_L_GLOBAL, SYM_A_NONE)
-
 SYM_CODE_START(kernel_entry)                   # kernel entry point
 
        /* Config direct window and set PG */
index e4060f8..1bf58c6 100644 (file)
@@ -475,8 +475,7 @@ asmlinkage void noinstr do_ri(struct pt_regs *regs)
 
        die_if_kernel("Reserved instruction in kernel code", regs);
 
-       if (unlikely(compute_return_era(regs) < 0))
-               goto out;
+       compute_return_era(regs);
 
        if (unlikely(get_user(opcode, era) < 0)) {
                status = SIGSEGV;
index 9d50815..69c76f2 100644 (file)
@@ -37,6 +37,7 @@ SECTIONS
        HEAD_TEXT_SECTION
 
        . = ALIGN(PECOFF_SEGMENT_ALIGN);
+       _stext = .;
        .text : {
                TEXT_TEXT
                SCHED_TEXT
@@ -101,6 +102,7 @@ SECTIONS
 
        STABS_DEBUG
        DWARF_DEBUG
+       ELF_DETAILS
 
        .gptab.sdata : {
                *(.gptab.data)
index e272f8a..9818ce1 100644 (file)
@@ -281,15 +281,16 @@ void setup_tlb_handler(int cpu)
                if (pcpu_handlers[cpu])
                        return;
 
-               page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, get_order(vec_sz));
+               page = alloc_pages_node(cpu_to_node(cpu), GFP_ATOMIC, get_order(vec_sz));
                if (!page)
                        return;
 
                addr = page_address(page);
-               pcpu_handlers[cpu] = virt_to_phys(addr);
+               pcpu_handlers[cpu] = (unsigned long)addr;
                memcpy((void *)addr, (void *)eentry, vec_sz);
                local_flush_icache_range((unsigned long)addr, (unsigned long)addr + vec_sz);
-               csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_TLBRENTRY);
+               csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_EENTRY);
+               csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_MERRENTRY);
                csr_write64(pcpu_handlers[cpu] + 80*VECSIZE, LOONGARCH_CSR_TLBRENTRY);
        }
 #endif
index b0a034b..42e6966 100644 (file)
 
                clocks = <&cgu X1000_CLK_RTCLK>,
                         <&cgu X1000_CLK_EXCLK>,
-                        <&cgu X1000_CLK_PCLK>;
-               clock-names = "rtc", "ext", "pclk";
+                        <&cgu X1000_CLK_PCLK>,
+                        <&cgu X1000_CLK_TCU>;
+               clock-names = "rtc", "ext", "pclk", "tcu";
 
                interrupt-controller;
                #interrupt-cells = <1>;
index dbf21af..65a5da7 100644 (file)
 
                clocks = <&cgu X1830_CLK_RTCLK>,
                         <&cgu X1830_CLK_EXCLK>,
-                        <&cgu X1830_CLK_PCLK>;
-               clock-names = "rtc", "ext", "pclk";
+                        <&cgu X1830_CLK_PCLK>,
+                        <&cgu X1830_CLK_TCU>;
+               clock-names = "rtc", "ext", "pclk", "tcu";
 
                interrupt-controller;
                #interrupt-cells = <1>;
index a89aaad..930c450 100644 (file)
@@ -44,6 +44,7 @@ static __init unsigned int ranchu_measure_hpt_freq(void)
                      __func__);
 
        rtc_base = of_iomap(np, 0);
+       of_node_put(np);
        if (!rtc_base)
                panic("%s(): Failed to ioremap Goldfish RTC base!", __func__);
 
index 5204fc6..1187729 100644 (file)
@@ -208,6 +208,12 @@ void __init ltq_soc_init(void)
                        of_address_to_resource(np_sysgpe, 0, &res_sys[2]))
                panic("Failed to get core resources");
 
+       of_node_put(np_status);
+       of_node_put(np_ebu);
+       of_node_put(np_sys1);
+       of_node_put(np_syseth);
+       of_node_put(np_sysgpe);
+
        if ((request_mem_region(res_status.start, resource_size(&res_status),
                                res_status.name) < 0) ||
                (request_mem_region(res_ebu.start, resource_size(&res_ebu),
index b732495..20622bf 100644 (file)
@@ -408,6 +408,7 @@ int __init icu_of_init(struct device_node *node, struct device_node *parent)
                if (!ltq_eiu_membase)
                        panic("Failed to remap eiu memory");
        }
+       of_node_put(eiu_node);
 
        return 0;
 }
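
The Goldfish RTC and lantiq hunks above, and the MIPS hunks that follow, all apply the same rule: of_find_compatible_node() and related lookups return a device_node with its reference count raised, and the caller must drop it with of_node_put() once the node is no longer needed. A minimal sketch of the pattern; the compatible string is hypothetical:

#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/of_address.h>

static int example_fetch_resource(struct resource *res)
{
	struct device_node *np;
	int ret;

	np = of_find_compatible_node(NULL, NULL, "vendor,example-block");
	if (!np)
		return -ENODEV;

	ret = of_address_to_resource(np, 0, res);
	of_node_put(np);	/* drop the reference taken by the lookup */

	return ret;
}
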
index 084f6ca..d444a1b 100644 (file)
@@ -441,6 +441,10 @@ void __init ltq_soc_init(void)
                        of_address_to_resource(np_ebu, 0, &res_ebu))
                panic("Failed to get core resources");
 
+       of_node_put(np_pmu);
+       of_node_put(np_cgu);
+       of_node_put(np_ebu);
+
        if (!request_mem_region(res_pmu.start, resource_size(&res_pmu),
                                res_pmu.name) ||
                !request_mem_region(res_cgu.start, resource_size(&res_cgu),
index bbf1e38..2cb708c 100644 (file)
@@ -214,6 +214,8 @@ static void update_gic_frequency_dt(void)
 
        if (of_update_property(node, &gic_frequency_prop) < 0)
                pr_err("error updating gic frequency property\n");
+
+       of_node_put(node);
 }
 
 #endif
index 1299156..d9c8c4e 100644 (file)
@@ -98,13 +98,18 @@ static int __init pic32_of_prepare_platform_data(struct of_dev_auxdata *lookup)
                np = of_find_compatible_node(NULL, NULL, lookup->compatible);
                if (np) {
                        lookup->name = (char *)np->name;
-                       if (lookup->phys_addr)
+                       if (lookup->phys_addr) {
+                               of_node_put(np);
                                continue;
+                       }
                        if (!of_address_to_resource(np, 0, &res))
                                lookup->phys_addr = res.start;
+                       of_node_put(np);
                }
        }
 
+       of_node_put(root);
+
        return 0;
 }
 
index 7174e9a..777b515 100644 (file)
@@ -32,6 +32,9 @@ static unsigned int pic32_xlate_core_timer_irq(void)
                goto default_map;
 
        irq = irq_of_parse_and_map(node, 0);
+
+       of_node_put(node);
+
        if (!irq)
                goto default_map;
 
index 587c7b9..ea8072a 100644 (file)
@@ -40,6 +40,8 @@ __iomem void *plat_of_remap_node(const char *node)
        if (of_address_to_resource(np, 0, &res))
                panic("Failed to get resource for %s", node);
 
+       of_node_put(np);
+
        if (!request_mem_region(res.start,
                                resource_size(&res),
                                res.name))
index 7b7f25b..9240bcd 100644 (file)
@@ -640,8 +640,6 @@ static int icu_get_irq(unsigned int irq)
 
        printk(KERN_ERR "spurious ICU interrupt: %04x,%04x\n", pend1, pend2);
 
-       atomic_inc(&irq_err_count);
-
        return -1;
 }
 
index 8ae15c2..c6ad6f8 100644 (file)
@@ -25,7 +25,7 @@ struct or1k_frameinfo {
 /*
  * Verify a frameinfo structure.  The return address should be a valid text
  * address.  The frame pointer may be null if its the last frame, otherwise
- * the frame pointer should point to a location in the stack after the the
+ * the frame pointer should point to a location in the stack after the
  * top of the next frame up.
  */
 static inline int or1k_frameinfo_valid(struct or1k_frameinfo *frameinfo)
index 5f2448d..fa40005 100644 (file)
@@ -10,6 +10,7 @@ config PARISC
        select ARCH_WANT_FRAME_POINTERS
        select ARCH_HAS_ELF_RANDOMIZE
        select ARCH_HAS_STRICT_KERNEL_RWX
+       select ARCH_HAS_STRICT_MODULE_RWX
        select ARCH_HAS_UBSAN_SANITIZE_ALL
        select ARCH_HAS_PTE_SPECIAL
        select ARCH_NO_SG_CHAIN
index d63a2ac..55d29c4 100644 (file)
@@ -12,7 +12,7 @@ static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
        pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE;
 }
 
-#if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI)
+#if defined(CONFIG_FB_STI)
 int fb_is_primary_device(struct fb_info *info);
 #else
 static inline int fb_is_primary_device(struct fb_info *info)
index 2673d57..94652e1 100644 (file)
@@ -224,8 +224,13 @@ int main(void)
        BLANK();
        DEFINE(ASM_SIGFRAME_SIZE, PARISC_RT_SIGFRAME_SIZE);
        DEFINE(SIGFRAME_CONTEXT_REGS, offsetof(struct rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE);
+#ifdef CONFIG_64BIT
        DEFINE(ASM_SIGFRAME_SIZE32, PARISC_RT_SIGFRAME_SIZE32);
        DEFINE(SIGFRAME_CONTEXT_REGS32, offsetof(struct compat_rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE32);
+#else
+       DEFINE(ASM_SIGFRAME_SIZE32, PARISC_RT_SIGFRAME_SIZE);
+       DEFINE(SIGFRAME_CONTEXT_REGS32, offsetof(struct rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE);
+#endif
        BLANK();
        DEFINE(ICACHE_BASE, offsetof(struct pdc_cache_info, ic_base));
        DEFINE(ICACHE_STRIDE, offsetof(struct pdc_cache_info, ic_stride));
index c8a11fc..a9bc578 100644 (file)
@@ -722,7 +722,10 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned lon
                return;
 
        if (parisc_requires_coherency()) {
-               flush_user_cache_page(vma, vmaddr);
+               if (vma->vm_flags & VM_SHARED)
+                       flush_data_cache();
+               else
+                       flush_user_cache_page(vma, vmaddr);
                return;
        }
 
index ed1e88a..bac581b 100644 (file)
@@ -146,7 +146,7 @@ static int emulate_ldw(struct pt_regs *regs, int toreg, int flop)
 "      depw    %%r0,31,2,%4\n"
 "1:    ldw     0(%%sr1,%4),%0\n"
 "2:    ldw     4(%%sr1,%4),%3\n"
-"      subi    32,%4,%2\n"
+"      subi    32,%2,%2\n"
 "      mtctl   %2,11\n"
 "      vshd    %0,%3,%0\n"
 "3:    \n"
index 494ca41..d41ddb3 100644 (file)
@@ -102,7 +102,7 @@ decode_fpu(unsigned int Fpu_register[], unsigned int trap_counts[])
      * that happen.  Want to keep this overhead low, but still provide
      * some information to the customer.  All exits from this routine
      * need to restore Fpu_register[0]
-    */
+     */
 
     bflags=(Fpu_register[0] & 0xf8000000);
     Fpu_register[0] &= 0x07ffffff;
index c2ce2e6..7aa12e8 100644 (file)
@@ -358,6 +358,10 @@ config ARCH_SUSPEND_NONZERO_CPU
        def_bool y
        depends on PPC_POWERNV || PPC_PSERIES
 
+config ARCH_HAS_ADD_PAGES
+       def_bool y
+       depends on ARCH_ENABLE_MEMORY_HOTPLUG
+
 config PPC_DCR_NATIVE
        bool
 
diff --git a/arch/powerpc/include/asm/bpf_perf_event.h b/arch/powerpc/include/asm/bpf_perf_event.h
new file mode 100644 (file)
index 0000000..e8a7b4f
--- /dev/null
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_BPF_PERF_EVENT_H
+#define _ASM_POWERPC_BPF_PERF_EVENT_H
+
+#include <asm/ptrace.h>
+
+typedef struct user_pt_regs bpf_user_pt_regs_t;
+
+#endif /* _ASM_POWERPC_BPF_PERF_EVENT_H */
diff --git a/arch/powerpc/include/uapi/asm/bpf_perf_event.h b/arch/powerpc/include/uapi/asm/bpf_perf_event.h
deleted file mode 100644 (file)
index 5e1e648..0000000
+++ /dev/null
@@ -1,9 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
-#define _UAPI__ASM_BPF_PERF_EVENT_H__
-
-#include <asm/ptrace.h>
-
-typedef struct user_pt_regs bpf_user_pt_regs_t;
-
-#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
index ee04338..0fbda89 100644 (file)
@@ -1855,7 +1855,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
                tm_reclaim_current(0);
 #endif
 
-       memset(regs->gpr, 0, sizeof(regs->gpr));
+       memset(&regs->gpr[1], 0, sizeof(regs->gpr) - sizeof(regs->gpr[0]));
        regs->ctr = 0;
        regs->link = 0;
        regs->xer = 0;
index 04694ec..13d6cb1 100644 (file)
@@ -2302,7 +2302,7 @@ static void __init prom_init_stdout(void)
 
 static int __init prom_find_machine_type(void)
 {
-       char compat[256];
+       static char compat[256] __prombss;
        int len, i = 0;
 #ifdef CONFIG_PPC64
        phandle rtas;
index b183ab9..dfa5f72 100644 (file)
@@ -13,7 +13,7 @@
 # If you really need to reference something from prom_init.o add
 # it to the list below:
 
-grep "^CONFIG_KASAN=y$" .config >/dev/null
+grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
 if [ $? -eq 0 ]
 then
        MEM_FUNCS="__memcpy __memset"
index a6fce31..6931339 100644 (file)
@@ -1071,7 +1071,7 @@ static struct rtas_filter rtas_filters[] __ro_after_init = {
        { "get-time-of-day", -1, -1, -1, -1, -1 },
        { "ibm,get-vpd", -1, 0, -1, 1, 2 },
        { "ibm,lpar-perftools", -1, 2, 3, -1, -1 },
-       { "ibm,platform-dump", -1, 4, 5, -1, -1 },
+       { "ibm,platform-dump", -1, 4, 5, -1, -1 },              /* Special cased */
        { "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 },
        { "ibm,scan-log-dump", -1, 0, 1, -1, -1 },
        { "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 },
@@ -1120,6 +1120,15 @@ static bool block_rtas_call(int token, int nargs,
                                size = 1;
 
                        end = base + size - 1;
+
+                       /*
+                        * Special case for ibm,platform-dump - NULL buffer
+                        * address is used to indicate end of dump processing
+                        */
+                       if (!strcmp(f->name, "ibm,platform-dump") &&
+                           base == 0)
+                               return false;
+
                        if (!in_rmo_buf(base, end))
                                goto err;
                }
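
The rtas filter change above whitelists exactly one case: for "ibm,platform-dump", a NULL buffer address marks the end of dump processing rather than pointing at a real RMO buffer, so the range check is skipped when base == 0. Condensed into an illustrative predicate (not the kernel's code):

#include <linux/string.h>
#include <linux/types.h>

static bool rtas_dump_end_marker(const char *name, unsigned long base)
{
	/* NULL buffer address == "end of dump processing" for this call */
	return !strcmp(name, "ibm,platform-dump") && base == 0;
}
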
index eb0077b..1a02629 100644 (file)
@@ -935,12 +935,6 @@ void __init setup_arch(char **cmdline_p)
        /* Print various info about the machine that has been gathered so far. */
        print_system_info();
 
-       /* Reserve large chunks of memory for use by CMA for KVM. */
-       kvm_cma_reserve();
-
-       /*  Reserve large chunks of memory for us by CMA for hugetlb */
-       gigantic_hugetlb_cma_reserve();
-
        klp_init_thread_info(&init_task);
 
        setup_initial_init_mm(_stext, _etext, _edata, _end);
@@ -955,6 +949,13 @@ void __init setup_arch(char **cmdline_p)
 
        initmem_init();
 
+       /*
+        * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
+        * be called after initmem_init(), so that pageblock_order is initialised.
+        */
+       kvm_cma_reserve();
+       gigantic_hugetlb_cma_reserve();
+
        early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
 
        if (ppc_md.setup_arch)
index 52b7768..a97128a 100644 (file)
@@ -105,6 +105,37 @@ void __ref arch_remove_linear_mapping(u64 start, u64 size)
        vm_unmap_aliases();
 }
 
+/*
+ * After memory hotplug the variables max_pfn, max_low_pfn and high_memory need
+ * updating.
+ */
+static void update_end_of_memory_vars(u64 start, u64 size)
+{
+       unsigned long end_pfn = PFN_UP(start + size);
+
+       if (end_pfn > max_pfn) {
+               max_pfn = end_pfn;
+               max_low_pfn = end_pfn;
+               high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
+       }
+}
+
+int __ref add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
+                   struct mhp_params *params)
+{
+       int ret;
+
+       ret = __add_pages(nid, start_pfn, nr_pages, params);
+       if (ret)
+               return ret;
+
+       /* update max_pfn, max_low_pfn and high_memory */
+       update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
+                                 nr_pages << PAGE_SHIFT);
+
+       return ret;
+}
+
 int __ref arch_add_memory(int nid, u64 start, u64 size,
                          struct mhp_params *params)
 {
@@ -115,7 +146,7 @@ int __ref arch_add_memory(int nid, u64 start, u64 size,
        rc = arch_create_linear_mapping(nid, start, size, params);
        if (rc)
                return rc;
-       rc = __add_pages(nid, start_pfn, nr_pages, params);
+       rc = add_pages(nid, start_pfn, nr_pages, params);
        if (rc)
                arch_remove_linear_mapping(start, size);
        return rc;
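
The add_pages() wrapper above exists so the end-of-memory bookkeeping travels with every hotplug path: PFN_UP() rounds the byte address one past the added range up to a page frame number, and max_pfn, max_low_pfn and high_memory only ever grow. A worked example of the arithmetic, assuming 4 KiB pages; the names are prefixed to mark them as a standalone illustration:

#define EX_PAGE_SHIFT	12			/* 4 KiB pages assumed */
#define EX_PAGE_SIZE	(1UL << EX_PAGE_SHIFT)
#define EX_PFN_UP(x)	(((x) + EX_PAGE_SIZE - 1) >> EX_PAGE_SHIFT)

/* Hotplugging 1 GiB at the 4 GiB mark:
 *   EX_PFN_UP(0x100000000UL + 0x40000000UL) == 0x140000
 * so max_pfn advances to 0x140000 if it was smaller. */
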
index 7d4368d..b80fc4a 100644 (file)
@@ -96,8 +96,8 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
                pgdp = pgd_offset_k(ea);
                p4dp = p4d_offset(pgdp, ea);
                if (p4d_none(*p4dp)) {
-                       pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
-                       p4d_populate(&init_mm, p4dp, pmdp);
+                       pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
+                       p4d_populate(&init_mm, p4dp, pudp);
                }
                pudp = pud_offset(p4dp, ea);
                if (pud_none(*pudp)) {
@@ -106,7 +106,7 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
                }
                pmdp = pmd_offset(pudp, ea);
                if (!pmd_present(*pmdp)) {
-                       ptep = early_alloc_pgtable(PAGE_SIZE);
+                       ptep = early_alloc_pgtable(PTE_TABLE_SIZE);
                        pmd_populate_kernel(&init_mm, pmdp, ptep);
                }
                ptep = pte_offset_kernel(pmdp, ea);
diff --git a/arch/powerpc/platforms/microwatt/microwatt.h b/arch/powerpc/platforms/microwatt/microwatt.h
new file mode 100644 (file)
index 0000000..335417e
--- /dev/null
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _MICROWATT_H
+#define _MICROWATT_H
+
+void microwatt_rng_init(void);
+
+#endif /* _MICROWATT_H */
index 7bc4d1c..8ece87d 100644 (file)
@@ -11,6 +11,7 @@
 #include <asm/archrandom.h>
 #include <asm/cputable.h>
 #include <asm/machdep.h>
+#include "microwatt.h"
 
 #define DARN_ERR 0xFFFFFFFFFFFFFFFFul
 
@@ -29,7 +30,7 @@ static int microwatt_get_random_darn(unsigned long *v)
        return 1;
 }
 
-static __init int rng_init(void)
+void __init microwatt_rng_init(void)
 {
        unsigned long val;
        int i;
@@ -37,12 +38,7 @@ static __init int rng_init(void)
        for (i = 0; i < 10; i++) {
                if (microwatt_get_random_darn(&val)) {
                        ppc_md.get_random_seed = microwatt_get_random_darn;
-                       return 0;
+                       return;
                }
        }
-
-       pr_warn("Unable to use DARN for get_random_seed()\n");
-
-       return -EIO;
 }
-machine_subsys_initcall(, rng_init);
index 0b02603..6b32539 100644 (file)
@@ -16,6 +16,8 @@
 #include <asm/xics.h>
 #include <asm/udbg.h>
 
+#include "microwatt.h"
+
 static void __init microwatt_init_IRQ(void)
 {
        xics_init();
@@ -32,10 +34,16 @@ static int __init microwatt_populate(void)
 }
 machine_arch_initcall(microwatt, microwatt_populate);
 
+static void __init microwatt_setup_arch(void)
+{
+       microwatt_rng_init();
+}
+
 define_machine(microwatt) {
        .name                   = "microwatt",
        .probe                  = microwatt_probe,
        .init_IRQ               = microwatt_init_IRQ,
+       .setup_arch             = microwatt_setup_arch,
        .progress               = udbg_progress,
        .calibrate_decr         = generic_calibrate_decr,
 };
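
The microwatt hunks above, and the powernv and pseries hunks below, share one shape: the RNG hook is no longer registered from a machine subsys initcall but wired up directly from the platform's setup_arch(), which runs much earlier in boot, presumably so that ppc_md.get_random_seed is usable for early seeding (the motivation is an inference, not stated in the hunks). A condensed sketch of the shape; every name here is illustrative:

#include <linux/types.h>
#include <asm/machdep.h>

static int example_get_random_seed(unsigned long *v);	/* illustrative */
static bool example_hwrng_usable(void);			/* illustrative */

static void __init example_rng_init(void)
{
	if (example_hwrng_usable())
		ppc_md.get_random_seed = example_get_random_seed;
}

static void __init example_setup_arch(void)
{
	example_rng_init();	/* replaces machine_subsys_initcall() */
}
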
index e297bf4..866efdc 100644 (file)
@@ -42,4 +42,6 @@ ssize_t memcons_copy(struct memcons *mc, char *to, loff_t pos, size_t count);
 u32 __init memcons_get_size(struct memcons *mc);
 struct memcons *__init memcons_init(struct device_node *node, const char *mc_prop_name);
 
+void pnv_rng_init(void);
+
 #endif /* _POWERNV_H */
index e3d44b3..463c78c 100644 (file)
@@ -17,6 +17,7 @@
 #include <asm/prom.h>
 #include <asm/machdep.h>
 #include <asm/smp.h>
+#include "powernv.h"
 
 #define DARN_ERR 0xFFFFFFFFFFFFFFFFul
 
@@ -28,7 +29,6 @@ struct powernv_rng {
 
 static DEFINE_PER_CPU(struct powernv_rng *, powernv_rng);
 
-
 int powernv_hwrng_present(void)
 {
        struct powernv_rng *rng;
@@ -98,9 +98,6 @@ static int __init initialise_darn(void)
                        return 0;
                }
        }
-
-       pr_warn("Unable to use DARN for get_random_seed()\n");
-
        return -EIO;
 }
 
@@ -163,32 +160,55 @@ static __init int rng_create(struct device_node *dn)
 
        rng_init_per_cpu(rng, dn);
 
-       pr_info_once("Registering arch random hook.\n");
-
        ppc_md.get_random_seed = powernv_get_random_long;
 
        return 0;
 }
 
-static __init int rng_init(void)
+static int __init pnv_get_random_long_early(unsigned long *v)
 {
        struct device_node *dn;
-       int rc;
+
+       if (!slab_is_available())
+               return 0;
+
+       if (cmpxchg(&ppc_md.get_random_seed, pnv_get_random_long_early,
+                   NULL) != pnv_get_random_long_early)
+               return 0;
 
        for_each_compatible_node(dn, NULL, "ibm,power-rng") {
-               rc = rng_create(dn);
-               if (rc) {
-                       pr_err("Failed creating rng for %pOF (%d).\n",
-                               dn, rc);
+               if (rng_create(dn))
                        continue;
-               }
-
                /* Create devices for hwrng driver */
                of_platform_device_create(dn, NULL, NULL);
        }
 
-       initialise_darn();
+       if (!ppc_md.get_random_seed)
+               return 0;
+       return ppc_md.get_random_seed(v);
+}
+
+void __init pnv_rng_init(void)
+{
+       struct device_node *dn;
 
+       /* Prefer darn over the rest. */
+       if (!initialise_darn())
+               return;
+
+       dn = of_find_compatible_node(NULL, NULL, "ibm,power-rng");
+       if (dn)
+               ppc_md.get_random_seed = pnv_get_random_long_early;
+
+       of_node_put(dn);
+}
+
+static int __init pnv_rng_late_init(void)
+{
+       unsigned long v;
+       /* In case it wasn't called during init for some other reason. */
+       if (ppc_md.get_random_seed == pnv_get_random_long_early)
+               pnv_get_random_long_early(&v);
        return 0;
 }
-machine_subsys_initcall(powernv, rng_init);
+machine_subsys_initcall(powernv, pnv_rng_late_init);
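
pnv_get_random_long_early() above uses cmpxchg() on the function pointer itself as a one-shot latch: the single caller that swaps the early hook out performs the device-tree probing, while any concurrent or repeat caller sees the swap fail and backs off. The idiom in isolation, with illustrative names:

#include <linux/atomic.h>

static int seed_early(unsigned long *v);
static void probe_and_install_real_seed(void);	/* hypothetical setup */

/* The hook starts at the early, self-replacing implementation. */
static int (*get_seed)(unsigned long *v) = seed_early;

static int seed_early(unsigned long *v)
{
	/* Only the caller that swaps seed_early out runs the probing. */
	if (cmpxchg(&get_seed, seed_early, NULL) != seed_early)
		return 0;

	probe_and_install_real_seed();	/* may repopulate get_seed */

	if (!get_seed)
		return 0;
	return get_seed(v);
}
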
index 824c3ad..dac545a 100644 (file)
@@ -203,6 +203,8 @@ static void __init pnv_setup_arch(void)
        pnv_check_guarded_cores();
 
        /* XXX PMCS */
+
+       pnv_rng_init();
 }
 
 static void __init pnv_init(void)
index f5c916c..1d75b77 100644 (file)
@@ -122,4 +122,6 @@ void pseries_lpar_read_hblkrm_characteristics(void);
 static inline void pseries_lpar_read_hblkrm_characteristics(void) { }
 #endif
 
+void pseries_rng_init(void);
+
 #endif /* _PSERIES_PSERIES_H */
index 6268545..6ddfdea 100644 (file)
@@ -10,6 +10,7 @@
 #include <asm/archrandom.h>
 #include <asm/machdep.h>
 #include <asm/plpar_wrappers.h>
+#include "pseries.h"
 
 
 static int pseries_get_random_long(unsigned long *v)
@@ -24,19 +25,13 @@ static int pseries_get_random_long(unsigned long *v)
        return 0;
 }
 
-static __init int rng_init(void)
+void __init pseries_rng_init(void)
 {
        struct device_node *dn;
 
        dn = of_find_compatible_node(NULL, NULL, "ibm,random");
        if (!dn)
-               return -ENODEV;
-
-       pr_info("Registering arch random hook.\n");
-
+               return;
        ppc_md.get_random_seed = pseries_get_random_long;
-
        of_node_put(dn);
-       return 0;
 }
-machine_subsys_initcall(pseries, rng_init);
index afb0742..ee4f1db 100644 (file)
@@ -839,6 +839,7 @@ static void __init pSeries_setup_arch(void)
        }
 
        ppc_md.pcibios_root_bridge_prepare = pseries_root_bridge_prepare;
+       pseries_rng_init();
 }
 
 static void pseries_panic(char *str)
index 7d51286..d02911e 100644 (file)
@@ -15,6 +15,7 @@
 #include <linux/of_fdt.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/bitmap.h>
 #include <linux/cpumask.h>
 #include <linux/mm.h>
 #include <linux/delay.h>
@@ -57,7 +58,7 @@ static int __init xive_irq_bitmap_add(int base, int count)
        spin_lock_init(&xibm->lock);
        xibm->base = base;
        xibm->count = count;
-       xibm->bitmap = kzalloc(xibm->count, GFP_KERNEL);
+       xibm->bitmap = bitmap_zalloc(xibm->count, GFP_KERNEL);
        if (!xibm->bitmap) {
                kfree(xibm);
                return -ENOMEM;
@@ -75,7 +76,7 @@ static void xive_irq_bitmap_remove_all(void)
 
        list_for_each_entry_safe(xibm, tmp, &xive_irq_bitmaps, list) {
                list_del(&xibm->list);
-               kfree(xibm->bitmap);
+               bitmap_free(xibm->bitmap);
                kfree(xibm);
        }
 }
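The conversion above is not purely cosmetic. kzalloc(xibm->count, GFP_KERNEL) allocated count *bytes* for what is a count-*bit* bitmap, eight times more memory than needed, while bitmap_zalloc(count) sizes the buffer as exactly the number of longs required to hold count bits (with bitmap_free() as its pair). A minimal userspace model of the sizing, with BITS_TO_LONGS mimicking the kernel macro:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(long))
#define BITS_TO_LONGS(nbits) (((nbits) + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long *bitmap_zalloc_model(unsigned int nbits)
{
	return calloc(BITS_TO_LONGS(nbits), sizeof(unsigned long));
}

int main(void)
{
	unsigned int count = 2048;        /* a typical XIVE IRQ range size */
	unsigned long *bm = bitmap_zalloc_model(count);

	printf("kzalloc(count) would use %u bytes, the bitmap needs %zu\n",
	       count, BITS_TO_LONGS(count) * sizeof(unsigned long));
	free(bm);
	return 0;
}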
index c22f581..32ffef9 100644 (file)
@@ -364,8 +364,13 @@ config RISCV_ISA_SVPBMT
        select RISCV_ALTERNATIVE
        default y
        help
-          Adds support to dynamically detect the presence of the SVPBMT extension
-          (Supervisor-mode: page-based memory types) and enable its usage.
+          Adds support to dynamically detect the presence of the SVPBMT
+          ISA-extension (Supervisor-mode: page-based memory types) and
+          enable its usage.
+
+          The memory type for a page contains a combination of attributes
+          that indicate the cacheability, idempotency, and ordering
+          properties for access to that page.
 
           The SVPBMT extension is only available on 64Bit cpus.
 
index ebfcd5c..457ac72 100644 (file)
@@ -35,6 +35,7 @@ config ERRATA_SIFIVE_CIP_1200
 
 config ERRATA_THEAD
        bool "T-HEAD errata"
+       depends on !XIP_KERNEL
        select RISCV_ALTERNATIVE
        help
          All T-HEAD errata Kconfig depend on this Kconfig. Disabling
index 8c32591..0ec9a14 100644 (file)
                        riscv,ndev = <186>;
                };
 
+               pdma: dma-controller@3000000 {
+                       compatible = "sifive,fu540-c000-pdma", "sifive,pdma0";
+                       reg = <0x0 0x3000000 0x0 0x8000>;
+                       interrupt-parent = <&plic>;
+                       interrupts = <5 6>, <7 8>, <9 10>, <11 12>;
+                       dma-channels = <4>;
+                       #dma-cells = <1>;
+               };
+
                clkcfg: clkcfg@20002000 {
                        compatible = "microchip,mpfs-clkcfg";
                        reg = <0x0 0x20002000 0x0 0x1000>, <0x0 0x3E001000 0x0 0x1000>;
                        status = "disabled";
                };
 
+               can0: can@2010c000 {
+                       compatible = "microchip,mpfs-can";
+                       reg = <0x0 0x2010c000 0x0 0x1000>;
+                       clocks = <&clkcfg CLK_CAN0>;
+                       interrupt-parent = <&plic>;
+                       interrupts = <56>;
+                       status = "disabled";
+               };
+
+               can1: can@2010d000 {
+                       compatible = "microchip,mpfs-can";
+                       reg = <0x0 0x2010d000 0x0 0x1000>;
+                       clocks = <&clkcfg CLK_CAN1>;
+                       interrupt-parent = <&plic>;
+                       interrupts = <57>;
+                       status = "disabled";
+               };
+
                mac0: ethernet@20110000 {
                        compatible = "cdns,macb";
                        reg = <0x0 0x20110000 0x0 0x2000>;
index 9e2888d..416ead0 100644 (file)
@@ -75,20 +75,20 @@ asm volatile(ALTERNATIVE(                                           \
        "nop\n\t"                                                       \
        "nop\n\t"                                                       \
        "nop",                                                          \
-       "li      t3, %2\n\t"                                            \
-       "slli    t3, t3, %4\n\t"                                        \
+       "li      t3, %1\n\t"                                            \
+       "slli    t3, t3, %3\n\t"                                        \
        "and     t3, %0, t3\n\t"                                        \
        "bne     t3, zero, 2f\n\t"                                      \
-       "li      t3, %3\n\t"                                            \
-       "slli    t3, t3, %4\n\t"                                        \
+       "li      t3, %2\n\t"                                            \
+       "slli    t3, t3, %3\n\t"                                        \
        "or      %0, %0, t3\n\t"                                        \
        "2:",  THEAD_VENDOR_ID,                                         \
                ERRATA_THEAD_PBMT, CONFIG_ERRATA_THEAD_PBMT)            \
        : "+r"(_val)                                                    \
-       : "0"(_val),                                                    \
-         "I"(_PAGE_MTMASK_THEAD >> ALT_THEAD_PBMT_SHIFT),              \
+       : "I"(_PAGE_MTMASK_THEAD >> ALT_THEAD_PBMT_SHIFT),              \
          "I"(_PAGE_PMA_THEAD >> ALT_THEAD_PBMT_SHIFT),                 \
-         "I"(ALT_THEAD_PBMT_SHIFT))
+         "I"(ALT_THEAD_PBMT_SHIFT)                                     \
+       : "t3")
 #else
 #define ALT_THEAD_PMA(_val)
 #endif
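The constraint changes above matter for correctness: the alternative sequence scratches t3, and an asm body that modifies a register without declaring it leaves the compiler free to keep a live value there. Listing "t3" in the clobber list fixes that, and since "+r"(_val) already makes _val both input and output, the redundant "0"(_val) tie can be dropped. A minimal x86-64 GCC/Clang illustration of the same rule (assumed toolchain for the sketch; not riscv code):

static inline unsigned long set_flag_bit(unsigned long val)
{
	asm volatile("orq  $4, %0\n\t"    /* operates on the operand...   */
		     "movq $0, %%r11\n\t" /* ...but also scratches r11    */
		     : "+r"(val)
		     :
		     : "r11");            /* so r11 must be declared here */
	return val;
}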
index a6f62a6..12b05ce 100644 (file)
@@ -293,7 +293,6 @@ void __init_or_module riscv_cpufeature_patch_func(struct alt_entry *begin,
                                                  unsigned int stage)
 {
        u32 cpu_req_feature = cpufeature_probe(stage);
-       u32 cpu_apply_feature = 0;
        struct alt_entry *alt;
        u32 tmp;
 
@@ -307,10 +306,8 @@ void __init_or_module riscv_cpufeature_patch_func(struct alt_entry *begin,
                }
 
                tmp = (1U << alt->errata_id);
-               if (cpu_req_feature & tmp) {
+               if (cpu_req_feature & tmp)
                        patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len);
-                       cpu_apply_feature |= tmp;
-               }
        }
 }
 #endif
index 91c0b80..8cd9e56 100644 (file)
@@ -484,7 +484,6 @@ config KEXEC
 config KEXEC_FILE
        bool "kexec file based system call"
        select KEXEC_CORE
-       select BUILD_BIN2C
        depends on CRYPTO
        depends on CRYPTO_SHA256
        depends on CRYPTO_SHA256_S390
index 56007c7..1f2d409 100644 (file)
  *
  * Copyright IBM Corp. 2017, 2020
  * Author(s): Harald Freudenberger
- *
- * The s390_arch_random_generate() function may be called from random.c
- * in interrupt context. So this implementation does the best to be very
- * fast. There is a buffer of random data which is asynchronously checked
- * and filled by a workqueue thread.
- * If there are enough bytes in the buffer the s390_arch_random_generate()
- * just delivers these bytes. Otherwise false is returned until the
- * worker thread refills the buffer.
- * The worker fills the rng buffer by pulling fresh entropy from the
- * high quality (but slow) true hardware random generator. This entropy
- * is then spread over the buffer with an pseudo random generator PRNG.
- * As the arch_get_random_seed_long() fetches 8 bytes and the calling
- * function add_interrupt_randomness() counts this as 1 bit entropy the
- * distribution needs to make sure there is in fact 1 bit entropy contained
- * in 8 bytes of the buffer. The current values pull 32 byte entropy
- * and scatter this into a 2048 byte buffer. So 8 byte in the buffer
- * will contain 1 bit of entropy.
- * The worker thread is rescheduled based on the charge level of the
- * buffer but at least with 500 ms delay to avoid too much CPU consumption.
- * So the max. amount of rng data delivered via arch_get_random_seed is
- * limited to 4k bytes per second.
  */
 
 #include <linux/kernel.h>
 #include <linux/atomic.h>
 #include <linux/random.h>
-#include <linux/slab.h>
 #include <linux/static_key.h>
-#include <linux/workqueue.h>
-#include <linux/moduleparam.h>
 #include <asm/cpacf.h>
 
 DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);
 
 atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0);
 EXPORT_SYMBOL(s390_arch_random_counter);
-
-#define ARCH_REFILL_TICKS (HZ/2)
-#define ARCH_PRNG_SEED_SIZE 32
-#define ARCH_RNG_BUF_SIZE 2048
-
-static DEFINE_SPINLOCK(arch_rng_lock);
-static u8 *arch_rng_buf;
-static unsigned int arch_rng_buf_idx;
-
-static void arch_rng_refill_buffer(struct work_struct *);
-static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
-
-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
-{
-       /* max hunk is ARCH_RNG_BUF_SIZE */
-       if (nbytes > ARCH_RNG_BUF_SIZE)
-               return false;
-
-       /* lock rng buffer */
-       if (!spin_trylock(&arch_rng_lock))
-               return false;
-
-       /* try to resolve the requested amount of bytes from the buffer */
-       arch_rng_buf_idx -= nbytes;
-       if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) {
-               memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes);
-               atomic64_add(nbytes, &s390_arch_random_counter);
-               spin_unlock(&arch_rng_lock);
-               return true;
-       }
-
-       /* not enough bytes in rng buffer, refill is done asynchronously */
-       spin_unlock(&arch_rng_lock);
-
-       return false;
-}
-EXPORT_SYMBOL(s390_arch_random_generate);
-
-static void arch_rng_refill_buffer(struct work_struct *unused)
-{
-       unsigned int delay = ARCH_REFILL_TICKS;
-
-       spin_lock(&arch_rng_lock);
-       if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) {
-               /* buffer is exhausted and needs refill */
-               u8 seed[ARCH_PRNG_SEED_SIZE];
-               u8 prng_wa[240];
-               /* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */
-               cpacf_trng(NULL, 0, seed, sizeof(seed));
-               /* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */
-               memset(prng_wa, 0, sizeof(prng_wa));
-               cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
-                          &prng_wa, NULL, 0, seed, sizeof(seed));
-               cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN,
-                          &prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0);
-               arch_rng_buf_idx = ARCH_RNG_BUF_SIZE;
-       }
-       delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE;
-       spin_unlock(&arch_rng_lock);
-
-       /* kick next check */
-       queue_delayed_work(system_long_wq, &arch_rng_work, delay);
-}
-
-/*
- * Here follows the implementation of s390_arch_get_random_long().
- *
- * The random longs to be pulled by arch_get_random_long() are
- * prepared in an 4K buffer which is filled from the NIST 800-90
- * compliant s390 drbg. By default the random long buffer is refilled
- * 256 times before the drbg itself needs a reseed. The reseed of the
- * drbg is done with 32 bytes fetched from the high quality (but slow)
- * trng which is assumed to deliver 100% entropy. So the 32 * 8 = 256
- * bits of entropy are spread over 256 * 4KB = 1MB serving 131072
- * arch_get_random_long() invocations before reseeded.
- *
- * How often the 4K random long buffer is refilled with the drbg
- * before the drbg is reseeded can be adjusted. There is a module
- * parameter 's390_arch_rnd_long_drbg_reseed' accessible via
- *   /sys/module/arch_random/parameters/rndlong_drbg_reseed
- * or as kernel command line parameter
- *   arch_random.rndlong_drbg_reseed=<value>
- * This parameter tells how often the drbg fills the 4K buffer before
- * it is re-seeded by fresh entropy from the trng.
- * A value of 16 results in reseeding the drbg at every 16 * 4 KB = 64
- * KB with 32 bytes of fresh entropy pulled from the trng. So a value
- * of 16 would result in 256 bits entropy per 64 KB.
- * A value of 256 results in 1MB of drbg output before a reseed of the
- * drbg is done. So this would spread the 256 bits of entropy among 1MB.
- * Setting this parameter to 0 forces the reseed to take place every
- * time the 4K buffer is depleted, so the entropy rises to 256 bits
- * entropy per 4K or 0.5 bit entropy per arch_get_random_long().  With
- * setting this parameter to negative values all this effort is
- * disabled, arch_get_random long() returns false and thus indicating
- * that the arch_get_random_long() feature is disabled at all.
- */
-
-static unsigned long rndlong_buf[512];
-static DEFINE_SPINLOCK(rndlong_lock);
-static int rndlong_buf_index;
-
-static int rndlong_drbg_reseed = 256;
-module_param_named(rndlong_drbg_reseed, rndlong_drbg_reseed, int, 0600);
-MODULE_PARM_DESC(rndlong_drbg_reseed, "s390 arch_get_random_long() drbg reseed");
-
-static inline void refill_rndlong_buf(void)
-{
-       static u8 prng_ws[240];
-       static int drbg_counter;
-
-       if (--drbg_counter < 0) {
-               /* need to re-seed the drbg */
-               u8 seed[32];
-
-               /* fetch seed from trng */
-               cpacf_trng(NULL, 0, seed, sizeof(seed));
-               /* seed drbg */
-               memset(prng_ws, 0, sizeof(prng_ws));
-               cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
-                          &prng_ws, NULL, 0, seed, sizeof(seed));
-               /* re-init counter for drbg */
-               drbg_counter = rndlong_drbg_reseed;
-       }
-
-       /* fill the arch_get_random_long buffer from drbg */
-       cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN, &prng_ws,
-                  (u8 *) rndlong_buf, sizeof(rndlong_buf),
-                  NULL, 0);
-}
-
-bool s390_arch_get_random_long(unsigned long *v)
-{
-       bool rc = false;
-       unsigned long flags;
-
-       /* arch_get_random_long() disabled ? */
-       if (rndlong_drbg_reseed < 0)
-               return false;
-
-       /* try to lock the random long lock */
-       if (!spin_trylock_irqsave(&rndlong_lock, flags))
-               return false;
-
-       if (--rndlong_buf_index >= 0) {
-               /* deliver next long value from the buffer */
-               *v = rndlong_buf[rndlong_buf_index];
-               rc = true;
-               goto out;
-       }
-
-       /* buffer is depleted and needs refill */
-       if (in_interrupt()) {
-               /* delay refill in interrupt context to next caller */
-               rndlong_buf_index = 0;
-               goto out;
-       }
-
-       /* refill random long buffer */
-       refill_rndlong_buf();
-       rndlong_buf_index = ARRAY_SIZE(rndlong_buf);
-
-       /* and provide one random long */
-       *v = rndlong_buf[--rndlong_buf_index];
-       rc = true;
-
-out:
-       spin_unlock_irqrestore(&rndlong_lock, flags);
-       return rc;
-}
-EXPORT_SYMBOL(s390_arch_get_random_long);
-
-static int __init s390_arch_random_init(void)
-{
-       /* all the needed PRNO subfunctions available ? */
-       if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) &&
-           cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) {
-
-               /* alloc arch random working buffer */
-               arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL);
-               if (!arch_rng_buf)
-                       return -ENOMEM;
-
-               /* kick worker queue job to fill the random buffer */
-               queue_delayed_work(system_long_wq,
-                                  &arch_rng_work, ARCH_REFILL_TICKS);
-
-               /* enable arch random to the outside world */
-               static_branch_enable(&s390_arch_random_available);
-       }
-
-       return 0;
-}
-arch_initcall(s390_arch_random_init);
index 5dc712f..2c6e1c6 100644 (file)
 
 #include <linux/static_key.h>
 #include <linux/atomic.h>
+#include <asm/cpacf.h>
 
 DECLARE_STATIC_KEY_FALSE(s390_arch_random_available);
 extern atomic64_t s390_arch_random_counter;
 
-bool s390_arch_get_random_long(unsigned long *v);
-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes);
-
 static inline bool __must_check arch_get_random_long(unsigned long *v)
 {
-       if (static_branch_likely(&s390_arch_random_available))
-               return s390_arch_get_random_long(v);
        return false;
 }
 
@@ -37,7 +33,9 @@ static inline bool __must_check arch_get_random_int(unsigned int *v)
 static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
 {
        if (static_branch_likely(&s390_arch_random_available)) {
-               return s390_arch_random_generate((u8 *)v, sizeof(*v));
+               cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
+               atomic64_add(sizeof(*v), &s390_arch_random_counter);
+               return true;
        }
        return false;
 }
@@ -45,7 +43,9 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
 static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
 {
        if (static_branch_likely(&s390_arch_random_available)) {
-               return s390_arch_random_generate((u8 *)v, sizeof(*v));
+               cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
+               atomic64_add(sizeof(*v), &s390_arch_random_counter);
+               return true;
        }
        return false;
 }
index 54ae2dc..2f983e0 100644 (file)
@@ -133,9 +133,9 @@ struct slibe {
  * @sb_count: number of storage blocks
  * @sba: storage block element addresses
  * @dcount: size of storage block elements
- * @user0: user defineable value
- * @res4: reserved paramater
- * @user1: user defineable value
+ * @user0: user definable value
+ * @res4: reserved parameter
+ * @user1: user definable value
  */
 struct qaob {
        u64 res0[6];
index a2c1c55..28124d0 100644 (file)
@@ -219,6 +219,11 @@ ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn, size_t csize,
        unsigned long src;
        int rc;
 
+       if (!(iter_is_iovec(iter) || iov_iter_is_kvec(iter)))
+               return -EINVAL;
+       /* Multi-segment iterators are not supported */
+       if (iter->nr_segs > 1)
+               return -EINVAL;
        if (!csize)
                return 0;
        src = pfn_to_phys(pfn) + offset;
@@ -228,7 +233,10 @@ ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn, size_t csize,
                rc = copy_oldmem_user(iter->iov->iov_base, src, csize);
        else
                rc = copy_oldmem_kernel(iter->kvec->iov_base, src, csize);
-       return rc;
+       if (rc < 0)
+               return rc;
+       iov_iter_advance(iter, csize);
+       return csize;
 }
 
 /*
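The return-value change above aligns copy_oldmem_page() with the read-iterator contract: on success the function must both advance the iterator by the bytes it consumed and return that count, so a caller walking the old memory image can track progress through the iterator alone. Modeled in isolation (a userspace sketch with illustrative names):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

struct iter { char *buf; size_t remaining; };

static ssize_t copy_page_model(struct iter *it, const char *src, size_t csize)
{
	if (csize > it->remaining)
		csize = it->remaining;
	memcpy(it->buf, src, csize);
	it->buf += csize;          /* advance the iterator ...        */
	it->remaining -= csize;
	return (ssize_t)csize;     /* ... and report the bytes copied */
}

int main(void)
{
	char out[8] = { 0 };
	struct iter it = { out, sizeof(out) };

	printf("copied %zd bytes\n", copy_page_model(&it, "oldmem", 6));
	return 0;
}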
index 483ab5e..f7dd3c8 100644 (file)
@@ -516,6 +516,26 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type)
        return err;
 }
 
+/* Events CPU_CYCLES and INSTRUCTIONS can be submitted with two different
+ * attribute::type values:
+ * - PERF_TYPE_HARDWARE:
+ * - pmu->type:
+ * Handle both types of invocation identically; they address the same
+ * hardware. The result differs when the event modifiers exclude_kernel
+ * and/or exclude_user are also set.
+ */
+static int cpumf_pmu_event_type(struct perf_event *event)
+{
+       u64 ev = event->attr.config;
+
+       if (cpumf_generic_events_basic[PERF_COUNT_HW_CPU_CYCLES] == ev ||
+           cpumf_generic_events_basic[PERF_COUNT_HW_INSTRUCTIONS] == ev ||
+           cpumf_generic_events_user[PERF_COUNT_HW_CPU_CYCLES] == ev ||
+           cpumf_generic_events_user[PERF_COUNT_HW_INSTRUCTIONS] == ev)
+               return PERF_TYPE_HARDWARE;
+       return PERF_TYPE_RAW;
+}
+
 static int cpumf_pmu_event_init(struct perf_event *event)
 {
        unsigned int type = event->attr.type;
@@ -525,7 +545,7 @@ static int cpumf_pmu_event_init(struct perf_event *event)
                err = __hw_perf_event_init(event, type);
        else if (event->pmu->type == type)
                /* Registered as unknown PMU */
-               err = __hw_perf_event_init(event, PERF_TYPE_RAW);
+               err = __hw_perf_event_init(event, cpumf_pmu_event_type(event));
        else
                return -ENOENT;
 
index 8c15459..b38b4ae 100644 (file)
@@ -193,8 +193,9 @@ static int paicrypt_event_init(struct perf_event *event)
        /* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */
        if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type)
                return -ENOENT;
-       /* PAI crypto event must be valid */
-       if (a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
+       /* PAI crypto event must be in valid range */
+       if (a->config < PAI_CRYPTO_BASE ||
+           a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
                return -EINVAL;
        /* Allow only CPU wide operation, no process context for now. */
        if (event->hw.target || event->cpu == -1)
@@ -208,6 +209,12 @@ static int paicrypt_event_init(struct perf_event *event)
        if (rc)
                return rc;
 
+       /* Event initialization sets last_tag to 0. When the events are later
+        * deleted and re-added, do not reset the event count value to zero.
+        * Events are deleted and re-added whenever two or more events are
+        * active at the same time.
+        */
+       event->hw.last_tag = 0;
        cpump->event = event;
        event->destroy = paicrypt_event_destroy;
 
@@ -242,9 +249,12 @@ static void paicrypt_start(struct perf_event *event, int flags)
 {
        u64 sum;
 
-       sum = paicrypt_getall(event);           /* Get current value */
-       local64_set(&event->hw.prev_count, sum);
-       local64_set(&event->count, 0);
+       if (!event->hw.last_tag) {
+               event->hw.last_tag = 1;
+               sum = paicrypt_getall(event);           /* Get current value */
+               local64_set(&event->count, 0);
+               local64_set(&event->hw.prev_count, sum);
+       }
 }
 
 static int paicrypt_add(struct perf_event *event, int flags)
index 8d91ecc..0a37f5d 100644 (file)
@@ -875,6 +875,11 @@ static void __init setup_randomness(void)
        if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
                add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
        memblock_free(vmms, PAGE_SIZE);
+
+#ifdef CONFIG_ARCH_RANDOM
+       if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG))
+               static_branch_enable(&s390_arch_random_available);
+#endif
 }
 
 /*
index 360ada8..d237bc6 100644 (file)
@@ -48,7 +48,6 @@ OBJCOPYFLAGS_purgatory.ro += --remove-section='.note.*'
 $(obj)/purgatory.ro: $(obj)/purgatory $(obj)/purgatory.chk FORCE
                $(call if_changed,objcopy)
 
-$(obj)/kexec-purgatory.o: $(obj)/kexec-purgatory.S $(obj)/purgatory.ro FORCE
-       $(call if_changed_rule,as_o_S)
+$(obj)/kexec-purgatory.o: $(obj)/purgatory.ro
 
-obj-$(CONFIG_ARCH_HAS_KEXEC_PURGATORY) += kexec-purgatory.o
+obj-y += kexec-purgatory.o
index 03deb4d..928dcf7 100644 (file)
@@ -124,6 +124,51 @@ static u64 get_cc_mask(void)
        return BIT_ULL(gpa_width - 1);
 }
 
+/*
+ * The TDX module spec states that #VE may be injected for a limited set of
+ * reasons:
+ *
+ *  - Emulation of the architectural #VE injection on EPT violation;
+ *
+ *  - As a result of guest TD execution of a disallowed instruction,
+ *    a disallowed MSR access, or CPUID virtualization;
+ *
+ *  - A notification to the guest TD about anomalous behavior;
+ *
+ * The last one is opt-in and is not used by the kernel.
+ *
+ * The Intel Software Developer's Manual describes the cases in which the
+ * instruction length field can be used, in the section "Information for
+ * VM Exits Due to Instruction Execution".
+ *
+ * For TDX, it ultimately means GET_VEINFO provides reliable instruction length
+ * information if #VE occurred due to instruction execution, but not for EPT
+ * violations.
+ */
+static int ve_instr_len(struct ve_info *ve)
+{
+       switch (ve->exit_reason) {
+       case EXIT_REASON_HLT:
+       case EXIT_REASON_MSR_READ:
+       case EXIT_REASON_MSR_WRITE:
+       case EXIT_REASON_CPUID:
+       case EXIT_REASON_IO_INSTRUCTION:
+               /* It is safe to use ve->instr_len for #VE due to these instructions */
+               return ve->instr_len;
+       case EXIT_REASON_EPT_VIOLATION:
+               /*
+                * For EPT violations, ve->insn_len is not defined. For those,
+                * the kernel must decode instructions manually and should not
+                * be using this function.
+                */
+               WARN_ONCE(1, "ve->instr_len is not defined for EPT violations");
+               return 0;
+       default:
+               WARN_ONCE(1, "Unexpected #VE-type: %lld\n", ve->exit_reason);
+               return ve->instr_len;
+       }
+}
+
 static u64 __cpuidle __halt(const bool irq_disabled, const bool do_sti)
 {
        struct tdx_hypercall_args args = {
@@ -147,7 +192,7 @@ static u64 __cpuidle __halt(const bool irq_disabled, const bool do_sti)
        return __tdx_hypercall(&args, do_sti ? TDX_HCALL_ISSUE_STI : 0);
 }
 
-static bool handle_halt(void)
+static int handle_halt(struct ve_info *ve)
 {
        /*
         * Since non safe halt is mainly used in CPU offlining
@@ -158,9 +203,9 @@ static bool handle_halt(void)
        const bool do_sti = false;
 
        if (__halt(irq_disabled, do_sti))
-               return false;
+               return -EIO;
 
-       return true;
+       return ve_instr_len(ve);
 }
 
 void __cpuidle tdx_safe_halt(void)
@@ -180,7 +225,7 @@ void __cpuidle tdx_safe_halt(void)
                WARN_ONCE(1, "HLT instruction emulation failed\n");
 }
 
-static bool read_msr(struct pt_regs *regs)
+static int read_msr(struct pt_regs *regs, struct ve_info *ve)
 {
        struct tdx_hypercall_args args = {
                .r10 = TDX_HYPERCALL_STANDARD,
@@ -194,14 +239,14 @@ static bool read_msr(struct pt_regs *regs)
         * (GHCI), section titled "TDG.VP.VMCALL<Instruction.RDMSR>".
         */
        if (__tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT))
-               return false;
+               return -EIO;
 
        regs->ax = lower_32_bits(args.r11);
        regs->dx = upper_32_bits(args.r11);
-       return true;
+       return ve_instr_len(ve);
 }
 
-static bool write_msr(struct pt_regs *regs)
+static int write_msr(struct pt_regs *regs, struct ve_info *ve)
 {
        struct tdx_hypercall_args args = {
                .r10 = TDX_HYPERCALL_STANDARD,
@@ -215,10 +260,13 @@ static bool write_msr(struct pt_regs *regs)
         * can be found in TDX Guest-Host-Communication Interface
         * (GHCI) section titled "TDG.VP.VMCALL<Instruction.WRMSR>".
         */
-       return !__tdx_hypercall(&args, 0);
+       if (__tdx_hypercall(&args, 0))
+               return -EIO;
+
+       return ve_instr_len(ve);
 }
 
-static bool handle_cpuid(struct pt_regs *regs)
+static int handle_cpuid(struct pt_regs *regs, struct ve_info *ve)
 {
        struct tdx_hypercall_args args = {
                .r10 = TDX_HYPERCALL_STANDARD,
@@ -236,7 +284,7 @@ static bool handle_cpuid(struct pt_regs *regs)
         */
        if (regs->ax < 0x40000000 || regs->ax > 0x4FFFFFFF) {
                regs->ax = regs->bx = regs->cx = regs->dx = 0;
-               return true;
+               return ve_instr_len(ve);
        }
 
        /*
@@ -245,7 +293,7 @@ static bool handle_cpuid(struct pt_regs *regs)
         * (GHCI), section titled "VP.VMCALL<Instruction.CPUID>".
         */
        if (__tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT))
-               return false;
+               return -EIO;
 
        /*
         * As per TDX GHCI CPUID ABI, r12-r15 registers contain contents of
@@ -257,7 +305,7 @@ static bool handle_cpuid(struct pt_regs *regs)
        regs->cx = args.r14;
        regs->dx = args.r15;
 
-       return true;
+       return ve_instr_len(ve);
 }
 
 static bool mmio_read(int size, unsigned long addr, unsigned long *val)
@@ -283,10 +331,10 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
                               EPT_WRITE, addr, val);
 }
 
-static bool handle_mmio(struct pt_regs *regs, struct ve_info *ve)
+static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
 {
+       unsigned long *reg, val, vaddr;
        char buffer[MAX_INSN_SIZE];
-       unsigned long *reg, val;
        struct insn insn = {};
        enum mmio_type mmio;
        int size, extend_size;
@@ -294,34 +342,49 @@ static bool handle_mmio(struct pt_regs *regs, struct ve_info *ve)
 
        /* Only in-kernel MMIO is supported */
        if (WARN_ON_ONCE(user_mode(regs)))
-               return false;
+               return -EFAULT;
 
        if (copy_from_kernel_nofault(buffer, (void *)regs->ip, MAX_INSN_SIZE))
-               return false;
+               return -EFAULT;
 
        if (insn_decode(&insn, buffer, MAX_INSN_SIZE, INSN_MODE_64))
-               return false;
+               return -EINVAL;
 
        mmio = insn_decode_mmio(&insn, &size);
        if (WARN_ON_ONCE(mmio == MMIO_DECODE_FAILED))
-               return false;
+               return -EINVAL;
 
        if (mmio != MMIO_WRITE_IMM && mmio != MMIO_MOVS) {
                reg = insn_get_modrm_reg_ptr(&insn, regs);
                if (!reg)
-                       return false;
+                       return -EINVAL;
        }
 
-       ve->instr_len = insn.length;
+       /*
+        * Reject EPT violation #VEs that split pages.
+        *
+        * MMIO accesses are supposed to be naturally aligned and therefore
+        * never cross page boundaries. Seeing split page accesses indicates
+        * a bug or a load_unaligned_zeropad() that stepped into an MMIO page.
+        *
+        * load_unaligned_zeropad() will recover using exception fixups.
+        */
+       vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
+       if (vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE)
+               return -EFAULT;
 
        /* Handle writes first */
        switch (mmio) {
        case MMIO_WRITE:
                memcpy(&val, reg, size);
-               return mmio_write(size, ve->gpa, val);
+               if (!mmio_write(size, ve->gpa, val))
+                       return -EIO;
+               return insn.length;
        case MMIO_WRITE_IMM:
                val = insn.immediate.value;
-               return mmio_write(size, ve->gpa, val);
+               if (!mmio_write(size, ve->gpa, val))
+                       return -EIO;
+               return insn.length;
        case MMIO_READ:
        case MMIO_READ_ZERO_EXTEND:
        case MMIO_READ_SIGN_EXTEND:
@@ -334,15 +397,15 @@ static bool handle_mmio(struct pt_regs *regs, struct ve_info *ve)
                 * decoded or handled properly. It was likely not using io.h
                 * helpers or accessed MMIO accidentally.
                 */
-               return false;
+               return -EINVAL;
        default:
                WARN_ONCE(1, "Unknown insn_decode_mmio() decode value?");
-               return false;
+               return -EINVAL;
        }
 
        /* Handle reads */
        if (!mmio_read(size, ve->gpa, &val))
-               return false;
+               return -EIO;
 
        switch (mmio) {
        case MMIO_READ:
@@ -364,13 +427,13 @@ static bool handle_mmio(struct pt_regs *regs, struct ve_info *ve)
        default:
                /* All other cases has to be covered with the first switch() */
                WARN_ON_ONCE(1);
-               return false;
+               return -EINVAL;
        }
 
        if (extend_size)
                memset(reg, extend_val, extend_size);
        memcpy(reg, &val, size);
-       return true;
+       return insn.length;
 }
 
 static bool handle_in(struct pt_regs *regs, int size, int port)
@@ -421,13 +484,14 @@ static bool handle_out(struct pt_regs *regs, int size, int port)
  *
 * Return the number of bytes to skip on success or -errno on failure.
  */
-static bool handle_io(struct pt_regs *regs, u32 exit_qual)
+static int handle_io(struct pt_regs *regs, struct ve_info *ve)
 {
+       u32 exit_qual = ve->exit_qual;
        int size, port;
-       bool in;
+       bool in, ret;
 
        if (VE_IS_IO_STRING(exit_qual))
-               return false;
+               return -EIO;
 
        in   = VE_IS_IO_IN(exit_qual);
        size = VE_GET_IO_SIZE(exit_qual);
@@ -435,9 +499,13 @@ static bool handle_io(struct pt_regs *regs, u32 exit_qual)
 
 
        if (in)
-               return handle_in(regs, size, port);
+               ret = handle_in(regs, size, port);
        else
-               return handle_out(regs, size, port);
+               ret = handle_out(regs, size, port);
+       if (!ret)
+               return -EIO;
+
+       return ve_instr_len(ve);
 }
 
 /*
@@ -447,13 +515,19 @@ static bool handle_io(struct pt_regs *regs, u32 exit_qual)
 __init bool tdx_early_handle_ve(struct pt_regs *regs)
 {
        struct ve_info ve;
+       int insn_len;
 
        tdx_get_ve_info(&ve);
 
        if (ve.exit_reason != EXIT_REASON_IO_INSTRUCTION)
                return false;
 
-       return handle_io(regs, ve.exit_qual);
+       insn_len = handle_io(regs, &ve);
+       if (insn_len < 0)
+               return false;
+
+       regs->ip += insn_len;
+       return true;
 }
 
 void tdx_get_ve_info(struct ve_info *ve)
@@ -486,54 +560,65 @@ void tdx_get_ve_info(struct ve_info *ve)
        ve->instr_info  = upper_32_bits(out.r10);
 }
 
-/* Handle the user initiated #VE */
-static bool virt_exception_user(struct pt_regs *regs, struct ve_info *ve)
+/*
+ * Handle the user initiated #VE.
+ *
+ * On success, returns the number of bytes RIP should be incremented (>=0)
+ * or -errno on error.
+ */
+static int virt_exception_user(struct pt_regs *regs, struct ve_info *ve)
 {
        switch (ve->exit_reason) {
        case EXIT_REASON_CPUID:
-               return handle_cpuid(regs);
+               return handle_cpuid(regs, ve);
        default:
                pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
-               return false;
+               return -EIO;
        }
 }
 
-/* Handle the kernel #VE */
-static bool virt_exception_kernel(struct pt_regs *regs, struct ve_info *ve)
+/*
+ * Handle the kernel #VE.
+ *
+ * On success, returns the number of bytes RIP should be incremented (>=0)
+ * or -errno on error.
+ */
+static int virt_exception_kernel(struct pt_regs *regs, struct ve_info *ve)
 {
        switch (ve->exit_reason) {
        case EXIT_REASON_HLT:
-               return handle_halt();
+               return handle_halt(ve);
        case EXIT_REASON_MSR_READ:
-               return read_msr(regs);
+               return read_msr(regs, ve);
        case EXIT_REASON_MSR_WRITE:
-               return write_msr(regs);
+               return write_msr(regs, ve);
        case EXIT_REASON_CPUID:
-               return handle_cpuid(regs);
+               return handle_cpuid(regs, ve);
        case EXIT_REASON_EPT_VIOLATION:
                return handle_mmio(regs, ve);
        case EXIT_REASON_IO_INSTRUCTION:
-               return handle_io(regs, ve->exit_qual);
+               return handle_io(regs, ve);
        default:
                pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
-               return false;
+               return -EIO;
        }
 }
 
 bool tdx_handle_virt_exception(struct pt_regs *regs, struct ve_info *ve)
 {
-       bool ret;
+       int insn_len;
 
        if (user_mode(regs))
-               ret = virt_exception_user(regs, ve);
+               insn_len = virt_exception_user(regs, ve);
        else
-               ret = virt_exception_kernel(regs, ve);
+               insn_len = virt_exception_kernel(regs, ve);
+       if (insn_len < 0)
+               return false;
 
        /* After successful #VE handling, move the IP */
-       if (ret)
-               regs->ip += ve->instr_len;
+       regs->ip += insn_len;
 
-       return ret;
+       return true;
 }
 
 static bool tdx_tlb_flush_required(bool private)
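Taken together, the hunks above replace a bool-plus-side-channel convention (handlers returned true or false and the dispatcher read ve->instr_len) with a single int: a non-negative return is the number of bytes to advance RIP, a negative return is an errno. The dispatch shape, reduced to a standalone sketch with simplified stand-in types:

#include <errno.h>
#include <stdbool.h>

struct regs { unsigned long ip; };

static int handle_one_ve(struct regs *regs)
{
	/* e.g. a handled exit: report the emulated instruction's length */
	return 2;                      /* or return -EIO on failure */
}

static bool dispatch_ve(struct regs *regs)
{
	int insn_len = handle_one_ve(regs);

	if (insn_len < 0)
		return false;          /* caller raises a fault */

	regs->ip += insn_len;          /* skip the emulated instruction */
	return true;
}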
index 8b392b6..3de6d8b 100644 (file)
@@ -13,6 +13,7 @@
 #include <linux/io.h>
 #include <asm/apic.h>
 #include <asm/desc.h>
+#include <asm/sev.h>
 #include <asm/hypervisor.h>
 #include <asm/hyperv-tlfs.h>
 #include <asm/mshyperv.h>
@@ -405,6 +406,11 @@ void __init hyperv_init(void)
        }
 
        if (hv_isolation_type_snp()) {
+               /* Negotiate GHCB Version. */
+               if (!hv_ghcb_negotiate_protocol())
+                       hv_ghcb_terminate(SEV_TERM_SET_GEN,
+                                         GHCB_SEV_ES_PROT_UNSUPPORTED);
+
                hv_ghcb_pg = alloc_percpu(union hv_ghcb *);
                if (!hv_ghcb_pg)
                        goto free_vp_assist_page;
index 2b99411..1dbcbd9 100644 (file)
@@ -53,6 +53,8 @@ union hv_ghcb {
        } hypercall;
 } __packed __aligned(HV_HYP_PAGE_SIZE);
 
+static u16 hv_ghcb_version __ro_after_init;
+
 u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size)
 {
        union hv_ghcb *hv_ghcb;
@@ -96,12 +98,85 @@ u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size)
        return status;
 }
 
+static inline u64 rd_ghcb_msr(void)
+{
+       return __rdmsr(MSR_AMD64_SEV_ES_GHCB);
+}
+
+static inline void wr_ghcb_msr(u64 val)
+{
+       native_wrmsrl(MSR_AMD64_SEV_ES_GHCB, val);
+}
+
+static enum es_result hv_ghcb_hv_call(struct ghcb *ghcb, u64 exit_code,
+                                  u64 exit_info_1, u64 exit_info_2)
+{
+       /* Fill in protocol and format specifiers */
+       ghcb->protocol_version = hv_ghcb_version;
+       ghcb->ghcb_usage       = GHCB_DEFAULT_USAGE;
+
+       ghcb_set_sw_exit_code(ghcb, exit_code);
+       ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
+       ghcb_set_sw_exit_info_2(ghcb, exit_info_2);
+
+       VMGEXIT();
+
+       if (ghcb->save.sw_exit_info_1 & GENMASK_ULL(31, 0))
+               return ES_VMM_ERROR;
+       else
+               return ES_OK;
+}
+
+void hv_ghcb_terminate(unsigned int set, unsigned int reason)
+{
+       u64 val = GHCB_MSR_TERM_REQ;
+
+       /* Tell the hypervisor what went wrong. */
+       val |= GHCB_SEV_TERM_REASON(set, reason);
+
+       /* Request Guest Termination from the Hypervisor */
+       wr_ghcb_msr(val);
+       VMGEXIT();
+
+       while (true)
+               asm volatile("hlt\n" : : : "memory");
+}
+
+bool hv_ghcb_negotiate_protocol(void)
+{
+       u64 ghcb_gpa;
+       u64 val;
+
+       /* Save ghcb page gpa. */
+       ghcb_gpa = rd_ghcb_msr();
+
+       /* Do the GHCB protocol version negotiation */
+       wr_ghcb_msr(GHCB_MSR_SEV_INFO_REQ);
+       VMGEXIT();
+       val = rd_ghcb_msr();
+
+       if (GHCB_MSR_INFO(val) != GHCB_MSR_SEV_INFO_RESP)
+               return false;
+
+       if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTOCOL_MIN ||
+           GHCB_MSR_PROTO_MIN(val) > GHCB_PROTOCOL_MAX)
+               return false;
+
+       hv_ghcb_version = min_t(size_t, GHCB_MSR_PROTO_MAX(val),
+                            GHCB_PROTOCOL_MAX);
+
+       /* Write ghcb page back after negotiating protocol. */
+       wr_ghcb_msr(ghcb_gpa);
+       VMGEXIT();
+
+       return true;
+}
+
 void hv_ghcb_msr_write(u64 msr, u64 value)
 {
        union hv_ghcb *hv_ghcb;
        void **ghcb_base;
        unsigned long flags;
-       struct es_em_ctxt ctxt;
 
        if (!hv_ghcb_pg)
                return;
@@ -120,8 +195,7 @@ void hv_ghcb_msr_write(u64 msr, u64 value)
        ghcb_set_rax(&hv_ghcb->ghcb, lower_32_bits(value));
        ghcb_set_rdx(&hv_ghcb->ghcb, upper_32_bits(value));
 
-       if (sev_es_ghcb_hv_call(&hv_ghcb->ghcb, false, &ctxt,
-                               SVM_EXIT_MSR, 1, 0))
+       if (hv_ghcb_hv_call(&hv_ghcb->ghcb, SVM_EXIT_MSR, 1, 0))
                pr_warn("Fail to write msr via ghcb %llx.\n", msr);
 
        local_irq_restore(flags);
@@ -133,7 +207,6 @@ void hv_ghcb_msr_read(u64 msr, u64 *value)
        union hv_ghcb *hv_ghcb;
        void **ghcb_base;
        unsigned long flags;
-       struct es_em_ctxt ctxt;
 
        /* Check size of union hv_ghcb here. */
        BUILD_BUG_ON(sizeof(union hv_ghcb) != HV_HYP_PAGE_SIZE);
@@ -152,8 +225,7 @@ void hv_ghcb_msr_read(u64 msr, u64 *value)
        }
 
        ghcb_set_rcx(&hv_ghcb->ghcb, msr);
-       if (sev_es_ghcb_hv_call(&hv_ghcb->ghcb, false, &ctxt,
-                               SVM_EXIT_MSR, 0, 0))
+       if (hv_ghcb_hv_call(&hv_ghcb->ghcb, SVM_EXIT_MSR, 0, 0))
                pr_warn("Fail to read msr via ghcb %llx.\n", msr);
        else
                *value = (u64)lower_32_bits(hv_ghcb->ghcb.save.rax)
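The negotiation added above follows the GHCB MSR protocol: request the hypervisor's supported [min, max] version range, refuse to continue if it does not overlap the guest's own range, and otherwise run with the highest mutually supported version. The selection arithmetic in isolation (illustrative constants, not the real GHCB values):

#include <stdbool.h>

#define OUR_PROTOCOL_MIN 1u
#define OUR_PROTOCOL_MAX 2u

static bool negotiate(unsigned int their_min, unsigned int their_max,
		      unsigned int *version)
{
	if (their_max < OUR_PROTOCOL_MIN || their_min > OUR_PROTOCOL_MAX)
		return false;          /* ranges do not overlap */

	*version = their_max < OUR_PROTOCOL_MAX ? their_max : OUR_PROTOCOL_MAX;
	return true;
}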
index 5a39ed5..e8f58dd 100644 (file)
@@ -4,9 +4,6 @@
 
 #include <asm/e820/types.h>
 
-struct device;
-struct resource;
-
 extern struct e820_table *e820_table;
 extern struct e820_table *e820_table_kexec;
 extern struct e820_table *e820_table_firmware;
@@ -46,8 +43,6 @@ extern void e820__register_nosave_regions(unsigned long limit_pfn);
 
 extern int  e820__get_entry_type(u64 start, u64 end);
 
-extern void remove_e820_regions(struct device *dev, struct resource *avail);
-
 /*
  * Returns true iff the specified range [start,end) is completely contained inside
  * the ISA region.
index 71943dc..9636742 100644 (file)
@@ -323,7 +323,7 @@ static inline u32 efi64_convert_status(efi_status_t status)
 #define __efi64_argmap_get_memory_space_descriptor(phys, desc) \
        (__efi64_split(phys), (desc))
 
-#define __efi64_argmap_set_memory_space_descriptor(phys, size, flags) \
+#define __efi64_argmap_set_memory_space_attributes(phys, size, flags) \
        (__efi64_split(phys), __efi64_split(size), __efi64_split(flags))
 
 /*
index a82f603..61f0c20 100644 (file)
@@ -179,9 +179,13 @@ int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible);
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 void hv_ghcb_msr_write(u64 msr, u64 value);
 void hv_ghcb_msr_read(u64 msr, u64 *value);
+bool hv_ghcb_negotiate_protocol(void);
+void hv_ghcb_terminate(unsigned int set, unsigned int reason);
 #else
 static inline void hv_ghcb_msr_write(u64 msr, u64 value) {}
 static inline void hv_ghcb_msr_read(u64 msr, u64 *value) {}
+static inline bool hv_ghcb_negotiate_protocol(void) { return false; }
+static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {}
 #endif
 
 extern bool hv_isolation_type_snp(void);
index f52a886..70533fd 100644 (file)
@@ -69,6 +69,8 @@ void pcibios_scan_specific_bus(int busn);
 
 /* pci-irq.c */
 
+struct pci_dev;
+
 struct irq_info {
        u8 bus, devfn;                  /* Bus, device and function */
        struct {
@@ -246,3 +248,9 @@ static inline void mmio_config_writel(void __iomem *pos, u32 val)
 # define x86_default_pci_init_irq      NULL
 # define x86_default_pci_fixup_irqs    NULL
 #endif
+
+#if defined(CONFIG_PCI) && defined(CONFIG_ACPI)
+extern bool pci_use_e820;
+#else
+#define pci_use_e820 false
+#endif
index 7590ac2..f8b9ee9 100644 (file)
@@ -108,19 +108,16 @@ extern unsigned long _brk_end;
 void *extend_brk(size_t size, size_t align);
 
 /*
- * Reserve space in the brk section.  The name must be unique within the file,
- * and somewhat descriptive.  The size is in bytes.
+ * Reserve space in the .brk section, which is a block of memory from which the
+ * caller is allowed to allocate very early (before even memblock is available)
+ * by calling extend_brk().  All allocated memory will eventually be converted
+ * to memblock.  Any leftover unallocated memory will be freed.
  *
- * The allocation is done using inline asm (rather than using a section
- * attribute on a normal variable) in order to allow the use of @nobits, so
- * that it doesn't take up any space in the vmlinux file.
+ * The size is in bytes.
  */
-#define RESERVE_BRK(name, size)                                                \
-       asm(".pushsection .brk_reservation,\"aw\",@nobits\n\t"          \
-           ".brk." #name ":\n\t"                                       \
-           ".skip " __stringify(size) "\n\t"                           \
-           ".size .brk." #name ", " __stringify(size) "\n\t"           \
-           ".popsection\n\t")
+#define RESERVE_BRK(name, size)                                        \
+       __section(".bss..brk") __aligned(1) __used      \
+       static char __brk_##name[size]
 
 extern void probe_roms(void);
 #ifdef __i386__
@@ -133,12 +130,19 @@ asmlinkage void __init x86_64_start_reservations(char *real_mode_data);
 
 #endif /* __i386__ */
 #endif /* _SETUP */
-#else
-#define RESERVE_BRK(name,sz)                           \
-       .pushsection .brk_reservation,"aw",@nobits;     \
-.brk.name:                                             \
-1:     .skip sz;                                       \
-       .size .brk.name,.-1b;                           \
+
+#else  /* __ASSEMBLY */
+
+.macro __RESERVE_BRK name, size
+       .pushsection .bss..brk, "aw"
+SYM_DATA_START(__brk_\name)
+       .skip \size
+SYM_DATA_END(__brk_\name)
        .popsection
+.endm
+
+#define RESERVE_BRK(name, size) __RESERVE_BRK name, size
+
 #endif /* __ASSEMBLY__ */
+
 #endif /* _ASM_X86_SETUP_H */
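After this change, a reservation such as RESERVE_BRK(dmi_alloc, 65536); is ordinary C rather than hand-rolled asm: it expands to a zero-filled array the linker places in .bss..brk, which the vmlinux.lds.S hunk further below then collects into the .brk section. A sketch of the expansion, assuming plain GCC attribute spelling for the kernel's __section/__aligned/__used shorthands:

/* RESERVE_BRK(dmi_alloc, 65536); expands roughly to: */
__attribute__((__section__(".bss..brk"), __aligned__(1), __used__))
static char __brk_dmi_alloc[65536];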
index 03364dc..4c8b6ae 100644 (file)
@@ -36,10 +36,6 @@ KCSAN_SANITIZE := n
 
 OBJECT_FILES_NON_STANDARD_test_nx.o                    := y
 
-ifdef CONFIG_FRAME_POINTER
-OBJECT_FILES_NON_STANDARD_ftrace_$(BITS).o             := y
-endif
-
 # If instrumentation of this dir is enabled, boot hangs during first second.
 # Probably could be more selective here, but note that files related to irqs,
 # boot, dumpstack/stacktrace, etc are either non-interesting or can lead to
index 4ec1360..dfeb227 100644 (file)
@@ -175,6 +175,7 @@ SYM_INNER_LABEL(ftrace_caller_end, SYM_L_GLOBAL)
 
        jmp ftrace_epilogue
 SYM_FUNC_END(ftrace_caller);
+STACK_FRAME_NON_STANDARD_FP(ftrace_caller)
 
 SYM_FUNC_START(ftrace_epilogue)
 /*
@@ -282,6 +283,7 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
        jmp     ftrace_epilogue
 
 SYM_FUNC_END(ftrace_regs_caller)
+STACK_FRAME_NON_STANDARD_FP(ftrace_regs_caller)
 
 
 #else /* ! CONFIG_DYNAMIC_FTRACE */
@@ -311,10 +313,14 @@ trace:
        jmp ftrace_stub
 SYM_FUNC_END(__fentry__)
 EXPORT_SYMBOL(__fentry__)
+STACK_FRAME_NON_STANDARD_FP(__fentry__)
+
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-SYM_FUNC_START(return_to_handler)
+SYM_CODE_START(return_to_handler)
+       UNWIND_HINT_EMPTY
+       ANNOTATE_NOENDBR
        subq  $16, %rsp
 
        /* Save the return values */
@@ -339,7 +345,6 @@ SYM_FUNC_START(return_to_handler)
        int3
 .Ldo_rop:
        mov %rdi, (%rsp)
-       UNWIND_HINT_FUNC
        RET
-SYM_FUNC_END(return_to_handler)
+SYM_CODE_END(return_to_handler)
 #endif
index db2b350..bba1abd 100644 (file)
@@ -1,7 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
-#include <linux/dev_printk.h>
 #include <linux/ioport.h>
+#include <linux/printk.h>
 #include <asm/e820/api.h>
+#include <asm/pci_x86.h>
 
 static void resource_clip(struct resource *res, resource_size_t start,
                          resource_size_t end)
@@ -24,14 +25,14 @@ static void resource_clip(struct resource *res, resource_size_t start,
                res->start = end + 1;
 }
 
-void remove_e820_regions(struct device *dev, struct resource *avail)
+static void remove_e820_regions(struct resource *avail)
 {
        int i;
        struct e820_entry *entry;
        u64 e820_start, e820_end;
        struct resource orig = *avail;
 
-       if (!(avail->flags & IORESOURCE_MEM))
+       if (!pci_use_e820)
                return;
 
        for (i = 0; i < e820_table->nr_entries; i++) {
@@ -41,7 +42,7 @@ void remove_e820_regions(struct device *dev, struct resource *avail)
 
                resource_clip(avail, e820_start, e820_end);
                if (orig.start != avail->start || orig.end != avail->end) {
-                       dev_info(dev, "clipped %pR to %pR for e820 entry [mem %#010Lx-%#010Lx]\n",
+                       pr_info("clipped %pR to %pR for e820 entry [mem %#010Lx-%#010Lx]\n",
                                 &orig, avail, e820_start, e820_end);
                        orig = *avail;
                }
@@ -55,6 +56,9 @@ void arch_remove_reservations(struct resource *avail)
         * the low 1MB unconditionally, as this area is needed for some ISA
         * cards requiring a memory range, e.g. the i82365 PCMCIA controller.
         */
-       if (avail->flags & IORESOURCE_MEM)
+       if (avail->flags & IORESOURCE_MEM) {
                resource_clip(avail, BIOS_ROM_BASE, BIOS_ROM_END);
+
+               remove_e820_regions(avail);
+       }
 }
index 3ebb853..bd6c6fd 100644 (file)
@@ -67,11 +67,6 @@ RESERVE_BRK(dmi_alloc, 65536);
 #endif
 
 
-/*
- * Range of the BSS area. The size of the BSS area is determined
- * at link time, with RESERVE_BRK() facility reserving additional
- * chunks.
- */
 unsigned long _brk_start = (unsigned long)__brk_base;
 unsigned long _brk_end   = (unsigned long)__brk_base;
 
index f5f6dc2..81aba71 100644 (file)
@@ -385,10 +385,10 @@ SECTIONS
        __end_of_kernel_reserve = .;
 
        . = ALIGN(PAGE_SIZE);
-       .brk : AT(ADDR(.brk) - LOAD_OFFSET) {
+       .brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {
                __brk_base = .;
                . += 64 * 1024;         /* 64k alignment slop space */
-               *(.brk_reservation)     /* areas brk users have reserved */
+               *(.bss..brk)            /* areas brk users have reserved */
                __brk_limit = .;
        }
 
index 51fd985..0c240ed 100644 (file)
@@ -844,7 +844,7 @@ static int __sev_dbg_encrypt_user(struct kvm *kvm, unsigned long paddr,
 
        /* If source buffer is not aligned then use an intermediate buffer */
        if (!IS_ALIGNED((unsigned long)vaddr, 16)) {
-               src_tpage = alloc_page(GFP_KERNEL);
+               src_tpage = alloc_page(GFP_KERNEL_ACCOUNT);
                if (!src_tpage)
                        return -ENOMEM;
 
@@ -865,7 +865,7 @@ static int __sev_dbg_encrypt_user(struct kvm *kvm, unsigned long paddr,
        if (!IS_ALIGNED((unsigned long)dst_vaddr, 16) || !IS_ALIGNED(size, 16)) {
                int dst_offset;
 
-               dst_tpage = alloc_page(GFP_KERNEL);
+               dst_tpage = alloc_page(GFP_KERNEL_ACCOUNT);
                if (!dst_tpage) {
                        ret = -ENOMEM;
                        goto e_free;
@@ -1665,19 +1665,24 @@ static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
 {
        struct kvm_sev_info *dst = &to_kvm_svm(dst_kvm)->sev_info;
        struct kvm_sev_info *src = &to_kvm_svm(src_kvm)->sev_info;
+       struct kvm_vcpu *dst_vcpu, *src_vcpu;
+       struct vcpu_svm *dst_svm, *src_svm;
        struct kvm_sev_info *mirror;
+       unsigned long i;
 
        dst->active = true;
        dst->asid = src->asid;
        dst->handle = src->handle;
        dst->pages_locked = src->pages_locked;
        dst->enc_context_owner = src->enc_context_owner;
+       dst->es_active = src->es_active;
 
        src->asid = 0;
        src->active = false;
        src->handle = 0;
        src->pages_locked = 0;
        src->enc_context_owner = NULL;
+       src->es_active = false;
 
        list_cut_before(&dst->regions_list, &src->regions_list, &src->regions_list);
 
@@ -1704,26 +1709,21 @@ static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
                list_del(&src->mirror_entry);
                list_add_tail(&dst->mirror_entry, &owner_sev_info->mirror_vms);
        }
-}
 
-static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
-{
-       unsigned long i;
-       struct kvm_vcpu *dst_vcpu, *src_vcpu;
-       struct vcpu_svm *dst_svm, *src_svm;
+       kvm_for_each_vcpu(i, dst_vcpu, dst_kvm) {
+               dst_svm = to_svm(dst_vcpu);
 
-       if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
-               return -EINVAL;
+               sev_init_vmcb(dst_svm);
 
-       kvm_for_each_vcpu(i, src_vcpu, src) {
-               if (!src_vcpu->arch.guest_state_protected)
-                       return -EINVAL;
-       }
+               if (!dst->es_active)
+                       continue;
 
-       kvm_for_each_vcpu(i, src_vcpu, src) {
+               /*
+                * Note, the source is not required to have the same number of
+                * vCPUs as the destination when migrating a vanilla SEV VM.
+                */
+               src_vcpu = kvm_get_vcpu(dst_kvm, i);
                src_svm = to_svm(src_vcpu);
-               dst_vcpu = kvm_get_vcpu(dst, i);
-               dst_svm = to_svm(dst_vcpu);
 
                /*
                 * Transfer VMSA and GHCB state to the destination.  Nullify and
@@ -1740,8 +1740,23 @@ static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
                src_svm->vmcb->control.vmsa_pa = INVALID_PAGE;
                src_vcpu->arch.guest_state_protected = false;
        }
-       to_kvm_svm(src)->sev_info.es_active = false;
-       to_kvm_svm(dst)->sev_info.es_active = true;
+}
+
+static int sev_check_source_vcpus(struct kvm *dst, struct kvm *src)
+{
+       struct kvm_vcpu *src_vcpu;
+       unsigned long i;
+
+       if (!sev_es_guest(src))
+               return 0;
+
+       if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
+               return -EINVAL;
+
+       kvm_for_each_vcpu(i, src_vcpu, src) {
+               if (!src_vcpu->arch.guest_state_protected)
+                       return -EINVAL;
+       }
 
        return 0;
 }
@@ -1789,11 +1804,9 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
        if (ret)
                goto out_dst_vcpu;
 
-       if (sev_es_guest(source_kvm)) {
-               ret = sev_es_migrate_from(kvm, source_kvm);
-               if (ret)
-                       goto out_source_vcpu;
-       }
+       ret = sev_check_source_vcpus(kvm, source_kvm);
+       if (ret)
+               goto out_source_vcpu;
 
        sev_migrate_from(kvm, source_kvm);
        kvm_vm_dead(source_kvm);
@@ -2914,7 +2927,7 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
                                    count, in);
 }
 
-void sev_es_init_vmcb(struct vcpu_svm *svm)
+static void sev_es_init_vmcb(struct vcpu_svm *svm)
 {
        struct kvm_vcpu *vcpu = &svm->vcpu;
 
@@ -2967,6 +2980,15 @@ void sev_es_init_vmcb(struct vcpu_svm *svm)
        }
 }
 
+void sev_init_vmcb(struct vcpu_svm *svm)
+{
+       svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
+       clr_exception_intercept(svm, UD_VECTOR);
+
+       if (sev_es_guest(svm->vcpu.kvm))
+               sev_es_init_vmcb(svm);
+}
+
 void sev_es_vcpu_reset(struct vcpu_svm *svm)
 {
        /*
index 87da903..44bbf25 100644 (file)
@@ -1212,15 +1212,8 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
                svm->vmcb->control.int_ctl |= V_GIF_ENABLE_MASK;
        }
 
-       if (sev_guest(vcpu->kvm)) {
-               svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
-               clr_exception_intercept(svm, UD_VECTOR);
-
-               if (sev_es_guest(vcpu->kvm)) {
-                       /* Perform SEV-ES specific VMCB updates */
-                       sev_es_init_vmcb(svm);
-               }
-       }
+       if (sev_guest(vcpu->kvm))
+               sev_init_vmcb(svm);
 
        svm_hv_init_vmcb(vmcb);
        init_vmcb_after_set_cpuid(vcpu);
index 1bddd33..9223ac1 100644 (file)
@@ -649,10 +649,10 @@ void __init sev_set_cpu_caps(void);
 void __init sev_hardware_setup(void);
 void sev_hardware_unsetup(void);
 int sev_cpu_init(struct svm_cpu_data *sd);
+void sev_init_vmcb(struct vcpu_svm *svm);
 void sev_free_vcpu(struct kvm_vcpu *vcpu);
 int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
 int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
-void sev_es_init_vmcb(struct vcpu_svm *svm);
 void sev_es_vcpu_reset(struct vcpu_svm *svm);
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
 void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
index 2f460c6..b88f43c 100644 (file)
@@ -1420,8 +1420,9 @@ st:                       if (is_imm8(insn->off))
                case BPF_JMP | BPF_CALL:
                        func = (u8 *) __bpf_call_base + imm32;
                        if (tail_call_reachable) {
+                               /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
                                EMIT3_off32(0x48, 0x8B, 0x85,
-                                           -(bpf_prog->aux->stack_depth + 8));
+                                           -round_up(bpf_prog->aux->stack_depth, 8) - 8);
                                if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
                                        return -EINVAL;
                        } else {
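
The JIT prologue reserves round_up(stack_depth, 8) bytes and keeps the tail-call counter in the slot just below that area, so the load emitted here must use the rounded depth as well; with an unaligned stack_depth, the old displacement landed inside the stack area and read the wrong slot. A standalone worked example of the arithmetic (hypothetical stack_depth; a generic round_up, not the kernel macro):

	#include <stdio.h>

	#define round_up(x, y) ((((x) + (y) - 1) / (y)) * (y))

	int main(void)
	{
		unsigned int stack_depth = 12;	/* not a multiple of 8 */

		/* old, buggy displacement: points inside the aligned stack area */
		int old_off = -(int)(stack_depth + 8);			/* -20 */
		/* fixed displacement: the slot just below the aligned area */
		int new_off = -(int)round_up(stack_depth, 8) - 8;	/* -24 */

		printf("old %d, fixed %d\n", old_off, new_off);
		return 0;
	}
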
index a4f4305..2f82480 100644 (file)
@@ -8,7 +8,6 @@
 #include <linux/pci-acpi.h>
 #include <asm/numa.h>
 #include <asm/pci_x86.h>
-#include <asm/e820/api.h>
 
 struct pci_root_info {
        struct acpi_pci_root_info common;
@@ -20,7 +19,7 @@ struct pci_root_info {
 #endif
 };
 
-static bool pci_use_e820 = true;
+bool pci_use_e820 = true;
 static bool pci_use_crs = true;
 static bool pci_ignore_seg;
 
@@ -387,11 +386,6 @@ static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci)
 
        status = acpi_pci_probe_root_resources(ci);
 
-       if (pci_use_e820) {
-               resource_list_for_each_entry(entry, &ci->resources)
-                       remove_e820_regions(&device->dev, entry->res);
-       }
-
        if (pci_use_crs) {
                resource_list_for_each_entry_safe(entry, tmp, &ci->resources)
                        if (resource_is_pcicfg_ioport(entry->res))
index e3eae64..ab30bcb 100644 (file)
@@ -2173,7 +2173,7 @@ ENDPROC(ret_from_kernel_thread)
 
 #ifdef CONFIG_HIBERNATION
 
-       .bss
+       .section        .bss, "aw"
        .align  4
 .Lsaved_regs:
 #if defined(__XTENSA_WINDOWED_ABI__)
index e8ceb15..16b8a62 100644 (file)
@@ -154,6 +154,7 @@ static void __init calibrate_ccount(void)
        cpu = of_find_compatible_node(NULL, NULL, "cdns,xtensa-cpu");
        if (cpu) {
                clk = of_clk_get(cpu, 0);
+               of_node_put(cpu);
                if (!IS_ERR(clk)) {
                        ccount_freq = clk_get_rate(clk);
                        return;
index 538e674..c79c1d0 100644 (file)
@@ -133,6 +133,7 @@ static int __init machine_setup(void)
 
        if ((eth = of_find_compatible_node(eth, NULL, "opencores,ethoc")))
                update_local_mac(eth);
+       of_node_put(eth);
        return 0;
 }
 arch_initcall(machine_setup);
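
Both xtensa hunks above fix the same leak pattern: of_find_compatible_node() returns its result with the node's refcount raised, and the caller owns that reference. A minimal sketch of the idiom (hypothetical compatible string and caller):

	#include <linux/of.h>

	static int example_setup(void)		/* hypothetical caller */
	{
		struct device_node *np;

		np = of_find_compatible_node(NULL, NULL, "vendor,example");
		if (!np)
			return -ENODEV;
		/* ... read properties, look up clocks, etc. ... */
		of_node_put(np);	/* drop the reference the lookup took */
		return 0;
	}
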
index 0d46cb7..e6d7e6b 100644 (file)
@@ -7046,6 +7046,7 @@ static void bfq_exit_queue(struct elevator_queue *e)
        spin_unlock_irq(&bfqd->lock);
 #endif
 
+       blk_stat_disable_accounting(bfqd->queue);
        wbt_enable_default(bfqd->queue);
 
        kfree(bfqd);
@@ -7188,7 +7189,12 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
        bfq_init_root_group(bfqd->root_group, bfqd);
        bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
 
+       /* We dispatch from the request queue as a whole instead of per hw queue */
+       blk_queue_flag_set(QUEUE_FLAG_SQ_SCHED, q);
+
        wbt_disable_default(q);
+       blk_stat_enable_accounting(q);
+
        return 0;
 
 out_free:
index 06ff5bb..27fb135 100644 (file)
@@ -322,19 +322,6 @@ void blk_cleanup_queue(struct request_queue *q)
                blk_mq_exit_queue(q);
        }
 
-       /*
-        * In theory, request pool of sched_tags belongs to request queue.
-        * However, the current implementation requires tag_set for freeing
-        * requests, so free the pool now.
-        *
-        * Queue has become frozen, there can't be any in-queue requests, so
-        * it is safe to free requests now.
-        */
-       mutex_lock(&q->sysfs_lock);
-       if (q->elevator)
-               blk_mq_sched_free_rqs(q);
-       mutex_unlock(&q->sysfs_lock);
-
        /* @q is and will stay empty, shutdown and put */
        blk_put_queue(q);
 }
index 56ed48d..47c89e6 100644 (file)
@@ -144,7 +144,6 @@ int disk_register_independent_access_ranges(struct gendisk *disk,
        }
 
        for (i = 0; i < iars->nr_ia_ranges; i++) {
-               iars->ia_range[i].queue = q;
                ret = kobject_init_and_add(&iars->ia_range[i].kobj,
                                           &blk_ia_range_ktype, &iars->kobj,
                                           "%d", i);
index 7e4136a..4d1ce9e 100644 (file)
@@ -711,11 +711,6 @@ void blk_mq_debugfs_register(struct request_queue *q)
        }
 }
 
-void blk_mq_debugfs_unregister(struct request_queue *q)
-{
-       q->sched_debugfs_dir = NULL;
-}
-
 static void blk_mq_debugfs_register_ctx(struct blk_mq_hw_ctx *hctx,
                                        struct blk_mq_ctx *ctx)
 {
@@ -746,6 +741,8 @@ void blk_mq_debugfs_register_hctx(struct request_queue *q,
 
 void blk_mq_debugfs_unregister_hctx(struct blk_mq_hw_ctx *hctx)
 {
+       if (!hctx->queue->debugfs_dir)
+               return;
        debugfs_remove_recursive(hctx->debugfs_dir);
        hctx->sched_debugfs_dir = NULL;
        hctx->debugfs_dir = NULL;
@@ -773,6 +770,8 @@ void blk_mq_debugfs_register_sched(struct request_queue *q)
 {
        struct elevator_type *e = q->elevator->type;
 
+       lockdep_assert_held(&q->debugfs_mutex);
+
        /*
         * If the parent directory has not been created yet, return; we will be
         * called again later on and the directory/files will be created then.
@@ -790,6 +789,8 @@ void blk_mq_debugfs_register_sched(struct request_queue *q)
 
 void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 {
+       lockdep_assert_held(&q->debugfs_mutex);
+
        debugfs_remove_recursive(q->sched_debugfs_dir);
        q->sched_debugfs_dir = NULL;
 }
@@ -811,6 +812,10 @@ static const char *rq_qos_id_to_name(enum rq_qos_id id)
 
 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
 {
+       lockdep_assert_held(&rqos->q->debugfs_mutex);
+
+       if (!rqos->q->debugfs_dir)
+               return;
        debugfs_remove_recursive(rqos->debugfs_dir);
        rqos->debugfs_dir = NULL;
 }
@@ -820,6 +825,8 @@ void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
        struct request_queue *q = rqos->q;
        const char *dir_name = rq_qos_id_to_name(rqos->id);
 
+       lockdep_assert_held(&q->debugfs_mutex);
+
        if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs)
                return;
 
@@ -833,17 +840,13 @@ void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
        debugfs_create_files(rqos->debugfs_dir, rqos, rqos->ops->debugfs_attrs);
 }
 
-void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q)
-{
-       debugfs_remove_recursive(q->rqos_debugfs_dir);
-       q->rqos_debugfs_dir = NULL;
-}
-
 void blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
                                        struct blk_mq_hw_ctx *hctx)
 {
        struct elevator_type *e = q->elevator->type;
 
+       lockdep_assert_held(&q->debugfs_mutex);
+
        /*
         * If the parent debugfs directory has not been created yet, return;
         * We will be called again later on with appropriate parent debugfs
@@ -863,6 +866,10 @@ void blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
 
 void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx)
 {
+       lockdep_assert_held(&hctx->queue->debugfs_mutex);
+
+       if (!hctx->queue->debugfs_dir)
+               return;
        debugfs_remove_recursive(hctx->sched_debugfs_dir);
        hctx->sched_debugfs_dir = NULL;
 }
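
The blk-mq-debugfs.c changes above establish a single locking contract: every register/unregister helper now runs with q->debugfs_mutex held by the caller and merely asserts it, and the unregister helpers bail out early when the parent q->debugfs_dir was never created. A sketch of the caller side, which is exactly the pattern the blk-mq-sched.c hunks below adopt:

	mutex_lock(&q->debugfs_mutex);
	blk_mq_debugfs_register_sched(q);	/* lockdep_assert_held() fires
						 * if the caller forgot the lock */
	mutex_unlock(&q->debugfs_mutex);
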
index 69918f4..9c7d4b6 100644 (file)
@@ -21,7 +21,6 @@ int __blk_mq_debugfs_rq_show(struct seq_file *m, struct request *rq);
 int blk_mq_debugfs_rq_show(struct seq_file *m, void *v);
 
 void blk_mq_debugfs_register(struct request_queue *q);
-void blk_mq_debugfs_unregister(struct request_queue *q);
 void blk_mq_debugfs_register_hctx(struct request_queue *q,
                                  struct blk_mq_hw_ctx *hctx);
 void blk_mq_debugfs_unregister_hctx(struct blk_mq_hw_ctx *hctx);
@@ -36,16 +35,11 @@ void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx);
 
 void blk_mq_debugfs_register_rqos(struct rq_qos *rqos);
 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos);
-void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q);
 #else
 static inline void blk_mq_debugfs_register(struct request_queue *q)
 {
 }
 
-static inline void blk_mq_debugfs_unregister(struct request_queue *q)
-{
-}
-
 static inline void blk_mq_debugfs_register_hctx(struct request_queue *q,
                                                struct blk_mq_hw_ctx *hctx)
 {
@@ -87,10 +81,6 @@ static inline void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 static inline void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
 {
 }
-
-static inline void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q)
-{
-}
 #endif
 
 #ifdef CONFIG_BLK_DEBUG_FS_ZONED
index 9e56a69..a4f7c10 100644 (file)
@@ -564,6 +564,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
        int ret;
 
        if (!e) {
+               blk_queue_flag_clear(QUEUE_FLAG_SQ_SCHED, q);
                q->elevator = NULL;
                q->nr_requests = q->tag_set->queue_depth;
                return 0;
@@ -593,7 +594,9 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
        if (ret)
                goto err_free_map_and_rqs;
 
+       mutex_lock(&q->debugfs_mutex);
        blk_mq_debugfs_register_sched(q);
+       mutex_unlock(&q->debugfs_mutex);
 
        queue_for_each_hw_ctx(q, hctx, i) {
                if (e->ops.init_hctx) {
@@ -606,7 +609,9 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
                                return ret;
                        }
                }
+               mutex_lock(&q->debugfs_mutex);
                blk_mq_debugfs_register_sched_hctx(q, hctx);
+               mutex_unlock(&q->debugfs_mutex);
        }
 
        return 0;
@@ -647,14 +652,21 @@ void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e)
        unsigned int flags = 0;
 
        queue_for_each_hw_ctx(q, hctx, i) {
+               mutex_lock(&q->debugfs_mutex);
                blk_mq_debugfs_unregister_sched_hctx(hctx);
+               mutex_unlock(&q->debugfs_mutex);
+
                if (e->type->ops.exit_hctx && hctx->sched_data) {
                        e->type->ops.exit_hctx(hctx, i);
                        hctx->sched_data = NULL;
                }
                flags = hctx->flags;
        }
+
+       mutex_lock(&q->debugfs_mutex);
        blk_mq_debugfs_unregister_sched(q);
+       mutex_unlock(&q->debugfs_mutex);
+
        if (e->type->ops.exit_sched)
                e->type->ops.exit_sched(e);
        blk_mq_sched_tags_teardown(q, flags);
index e9bf950..93d9d60 100644 (file)
@@ -579,6 +579,8 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
        if (!blk_mq_hw_queue_mapped(data.hctx))
                goto out_queue_exit;
        cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
+       if (cpu >= nr_cpu_ids)
+               goto out_queue_exit;
        data.ctx = __blk_mq_get_ctx(q, cpu);
 
        if (!q->elevator)
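
A note on the check added above: cpumask_first_and() returns nr_cpu_ids when the two masks have an empty intersection, i.e. when every CPU mapped to the chosen hctx is offline. Without the test, the out-of-range index would be fed straight into the per-cpu lookup:

	/* cpumask_first_and(hctx->cpumask, cpu_online_mask) == nr_cpu_ids
	 *   => no online CPU serves this hw queue; fail the allocation
	 *      via out_queue_exit instead of calling __blk_mq_get_ctx()
	 *      with an invalid cpu number.
	 */
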
@@ -2140,20 +2142,6 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 }
 EXPORT_SYMBOL(blk_mq_run_hw_queue);
 
-/*
- * Is the request queue handled by an IO scheduler that does not respect
- * hardware queues when dispatching?
- */
-static bool blk_mq_has_sqsched(struct request_queue *q)
-{
-       struct elevator_queue *e = q->elevator;
-
-       if (e && e->type->ops.dispatch_request &&
-           !(e->type->elevator_features & ELEVATOR_F_MQ_AWARE))
-               return true;
-       return false;
-}
-
 /*
  * Return preferred queue to dispatch from (if any) for a non-mq-aware IO
  * scheduler.
@@ -2186,7 +2174,7 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
        unsigned long i;
 
        sq_hctx = NULL;
-       if (blk_mq_has_sqsched(q))
+       if (blk_queue_sq_sched(q))
                sq_hctx = blk_mq_get_sq_hctx(q);
        queue_for_each_hw_ctx(q, hctx, i) {
                if (blk_mq_hctx_stopped(hctx))
@@ -2214,7 +2202,7 @@ void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
        unsigned long i;
 
        sq_hctx = NULL;
-       if (blk_mq_has_sqsched(q))
+       if (blk_queue_sq_sched(q))
                sq_hctx = blk_mq_get_sq_hctx(q);
        queue_for_each_hw_ctx(q, hctx, i) {
                if (blk_mq_hctx_stopped(hctx))
@@ -2777,15 +2765,20 @@ static inline struct request *blk_mq_get_cached_request(struct request_queue *q,
                return NULL;
        }
 
-       rq_qos_throttle(q, *bio);
-
        if (blk_mq_get_hctx_type((*bio)->bi_opf) != rq->mq_hctx->type)
                return NULL;
        if (op_is_flush(rq->cmd_flags) != op_is_flush((*bio)->bi_opf))
                return NULL;
 
-       rq->cmd_flags = (*bio)->bi_opf;
+       /*
+        * If any qos ->throttle() ends up blocking, we will have flushed the
+        * plug and hence killed the cached_rq list as well. Pop this entry
+        * before we throttle.
+        */
        plug->cached_rq = rq_list_next(rq);
+       rq_qos_throttle(q, *bio);
+
+       rq->cmd_flags = (*bio)->bi_opf;
        INIT_LIST_HEAD(&rq->queuelist);
        return rq;
 }
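
The reordering above closes a use-after-free window; a sketch of the old sequence (call chain hedged, following the new comment in the hunk):

	/* rq = plug->cached_rq;        // rq still linked into the plug
	 * rq_qos_throttle(q, *bio);    // may block ...
	 *   -> schedule()              // ... blocking flushes the plug ...
	 *      -> blk_mq_free_plug_rqs()  // ... freeing the cached list,
	 *                                 //     rq included
	 *
	 * Popping rq off plug->cached_rq before throttling means a plug
	 * flush can no longer free the request we are about to return.
	 */
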
@@ -3443,8 +3436,9 @@ static void blk_mq_exit_hctx(struct request_queue *q,
        if (blk_mq_hw_queue_mapped(hctx))
                blk_mq_tag_idle(hctx);
 
-       blk_mq_clear_flush_rq_mapping(set->tags[hctx_idx],
-                       set->queue_depth, flush_rq);
+       if (blk_queue_init_done(q))
+               blk_mq_clear_flush_rq_mapping(set->tags[hctx_idx],
+                               set->queue_depth, flush_rq);
        if (set->ops->exit_request)
                set->ops->exit_request(set, flush_rq, hctx_idx);
 
@@ -4438,12 +4432,14 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
        if (!qe)
                return false;
 
+       /* q->elevator needs protection from ->sysfs_lock */
+       mutex_lock(&q->sysfs_lock);
+
        INIT_LIST_HEAD(&qe->node);
        qe->q = q;
        qe->type = q->elevator->type;
        list_add(&qe->node, head);
 
-       mutex_lock(&q->sysfs_lock);
        /*
         * After elevator_switch_mq, the previous elevator_queue will be
         * released by elevator_release. The reference of the io scheduler
index e83af7b..d3a7569 100644 (file)
@@ -294,8 +294,6 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
 
 void rq_qos_exit(struct request_queue *q)
 {
-       blk_mq_debugfs_unregister_queue_rqos(q);
-
        while (q->rq_qos) {
                struct rq_qos *rqos = q->rq_qos;
                q->rq_qos = rqos->next;
index 6826700..0e46052 100644 (file)
@@ -104,8 +104,11 @@ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
 
        blk_mq_unfreeze_queue(q);
 
-       if (rqos->ops->debugfs_attrs)
+       if (rqos->ops->debugfs_attrs) {
+               mutex_lock(&q->debugfs_mutex);
                blk_mq_debugfs_register_rqos(rqos);
+               mutex_unlock(&q->debugfs_mutex);
+       }
 }
 
 static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
@@ -129,7 +132,9 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
 
        blk_mq_unfreeze_queue(q);
 
+       mutex_lock(&q->debugfs_mutex);
        blk_mq_debugfs_unregister_rqos(rqos);
+       mutex_unlock(&q->debugfs_mutex);
 }
 
 typedef bool (acquire_inflight_cb_t)(struct rq_wait *rqw, void *private_data);
index 88bd41d..9b905e9 100644 (file)
@@ -779,14 +779,6 @@ static void blk_release_queue(struct kobject *kobj)
        if (queue_is_mq(q))
                blk_mq_release(q);
 
-       blk_trace_shutdown(q);
-       mutex_lock(&q->debugfs_mutex);
-       debugfs_remove_recursive(q->debugfs_dir);
-       mutex_unlock(&q->debugfs_mutex);
-
-       if (queue_is_mq(q))
-               blk_mq_debugfs_unregister(q);
-
        bioset_exit(&q->bio_split);
 
        if (blk_queue_has_srcu(q))
@@ -836,17 +828,16 @@ int blk_register_queue(struct gendisk *disk)
                goto unlock;
        }
 
+       if (queue_is_mq(q))
+               __blk_mq_register_dev(dev, q);
+       mutex_lock(&q->sysfs_lock);
+
        mutex_lock(&q->debugfs_mutex);
        q->debugfs_dir = debugfs_create_dir(kobject_name(q->kobj.parent),
                                            blk_debugfs_root);
-       mutex_unlock(&q->debugfs_mutex);
-
-       if (queue_is_mq(q)) {
-               __blk_mq_register_dev(dev, q);
+       if (queue_is_mq(q))
                blk_mq_debugfs_register(q);
-       }
-
-       mutex_lock(&q->sysfs_lock);
+       mutex_unlock(&q->debugfs_mutex);
 
        ret = disk_register_independent_access_ranges(disk, NULL);
        if (ret)
@@ -948,8 +939,15 @@ void blk_unregister_queue(struct gendisk *disk)
        /* Now that we've deleted all child objects, we can delete the queue. */
        kobject_uevent(&q->kobj, KOBJ_REMOVE);
        kobject_del(&q->kobj);
-
        mutex_unlock(&q->sysfs_dir_lock);
 
+       mutex_lock(&q->debugfs_mutex);
+       blk_trace_shutdown(q);
+       debugfs_remove_recursive(q->debugfs_dir);
+       q->debugfs_dir = NULL;
+       q->sched_debugfs_dir = NULL;
+       q->rqos_debugfs_dir = NULL;
+       mutex_unlock(&q->debugfs_mutex);
+
        kobject_put(&disk_to_dev(disk)->kobj);
 }
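
Taken together, the two blk-sysfs.c hunks make the debugfs directory's lifetime symmetric and fully covered by debugfs_mutex. A sketch of the result:

	/* blk_register_queue()              blk_unregister_queue()
	 *   mutex_lock(&q->debugfs_mutex)     mutex_lock(&q->debugfs_mutex)
	 *   q->debugfs_dir = debugfs_...      blk_trace_shutdown(q)
	 *   blk_mq_debugfs_register(q)        debugfs_remove_recursive(dir)
	 *   mutex_unlock(&q->debugfs_mutex)   NULL the cached dentries
	 *                                     mutex_unlock(&q->debugfs_mutex)
	 *
	 * Removal now happens while the disk is being unregistered rather
	 * than being deferred to blk_release_queue(), whose debugfs code
	 * is deleted above.
	 */
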
index 27205ae..278227b 100644 (file)
@@ -623,6 +623,7 @@ void del_gendisk(struct gendisk *disk)
         * Prevent new I/O from crossing bio_queue_enter().
         */
        blk_queue_start_drain(q);
+       blk_mq_freeze_queue_wait(q);
 
        if (!(disk->flags & GENHD_FL_HIDDEN)) {
                sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
@@ -646,12 +647,21 @@ void del_gendisk(struct gendisk *disk)
        pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
        device_del(disk_to_dev(disk));
 
-       blk_mq_freeze_queue_wait(q);
-
        blk_throtl_cancel_bios(disk->queue);
 
        blk_sync_queue(q);
        blk_flush_integrity();
+       blk_mq_cancel_work_sync(q);
+
+       blk_mq_quiesce_queue(q);
+       if (q->elevator) {
+               mutex_lock(&q->sysfs_lock);
+               elevator_exit(q);
+               mutex_unlock(&q->sysfs_lock);
+       }
+       rq_qos_exit(q);
+       blk_mq_unquiesce_queue(q);
+
        /*
         * Allow using passthrough request again after the queue is torn down.
         */
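
del_gendisk() now performs the I/O-scheduler and rq_qos teardown itself, on a frozen and quiesced queue, instead of leaving it to the disk release callback (see the disk_release_mq() removal just below). The resulting order, roughly:

	/* blk_queue_start_drain(q);
	 * blk_mq_freeze_queue_wait(q);   // moved up: no requests in flight
	 * ...
	 * blk_mq_quiesce_queue(q);
	 * elevator_exit(q);              // under q->sysfs_lock, while the
	 *                                //   cgroup data is still reachable
	 * rq_qos_exit(q);
	 * blk_mq_unquiesce_queue(q);
	 */
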
@@ -1120,31 +1130,6 @@ static const struct attribute_group *disk_attr_groups[] = {
        NULL
 };
 
-static void disk_release_mq(struct request_queue *q)
-{
-       blk_mq_cancel_work_sync(q);
-
-       /*
-        * There can't be any non non-passthrough bios in flight here, but
-        * requests stay around longer, including passthrough ones so we
-        * still need to freeze the queue here.
-        */
-       blk_mq_freeze_queue(q);
-
-       /*
-        * Since the I/O scheduler exit code may access cgroup information,
-        * perform I/O scheduler exit before disassociating from the block
-        * cgroup controller.
-        */
-       if (q->elevator) {
-               mutex_lock(&q->sysfs_lock);
-               elevator_exit(q);
-               mutex_unlock(&q->sysfs_lock);
-       }
-       rq_qos_exit(q);
-       __blk_mq_unfreeze_queue(q, true);
-}
-
 /**
  * disk_release - releases all allocated resources of the gendisk
  * @dev: the device representing this disk
@@ -1166,9 +1151,6 @@ static void disk_release(struct device *dev)
        might_sleep();
        WARN_ON_ONCE(disk_live(disk));
 
-       if (queue_is_mq(disk->queue))
-               disk_release_mq(disk->queue);
-
        blkcg_exit_queue(disk->queue);
 
        disk_release_events(disk);
index 8d75028..5283bc8 100644 (file)
@@ -79,10 +79,6 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 
        WARN_ON_ONCE(!bdev->bd_holder);
 
-       /* FIXME: remove the following once add_disk() handles errors */
-       if (WARN_ON(!bdev->bd_holder_dir))
-               goto out_unlock;
-
        holder = bd_find_holder_disk(bdev, disk);
        if (holder) {
                holder->refcnt++;
index 70ff2a5..8f7c745 100644 (file)
@@ -421,6 +421,8 @@ static int kyber_init_sched(struct request_queue *q, struct elevator_type *e)
 
        blk_stat_enable_accounting(q);
 
+       blk_queue_flag_clear(QUEUE_FLAG_SQ_SCHED, q);
+
        eq->elevator_data = kqd;
        q->elevator = eq;
 
@@ -1033,7 +1035,6 @@ static struct elevator_type kyber_sched = {
 #endif
        .elevator_attrs = kyber_sched_attrs,
        .elevator_name = "kyber",
-       .elevator_features = ELEVATOR_F_MQ_AWARE,
        .elevator_owner = THIS_MODULE,
 };
 
index 6ed602b..1a9e835 100644 (file)
@@ -642,6 +642,9 @@ static int dd_init_sched(struct request_queue *q, struct elevator_type *e)
        spin_lock_init(&dd->lock);
        spin_lock_init(&dd->zone_lock);
 
+       /* We dispatch from the request queue as a whole instead of per hw queue */
+       blk_queue_flag_set(QUEUE_FLAG_SQ_SCHED, q);
+
        q->elevator = eq;
        return 0;
 
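
This is the last piece of the QUEUE_FLAG_SQ_SCHED conversion visible in this pull: schedulers that dispatch from the whole request queue (bfq above, mq-deadline here) set the flag in their init_sched hook, kyber clears it, and blk_mq_init_sched() clears it for the no-elevator case. The hot path in blk_mq_run_hw_queues() thereby shrinks from an elevator-feature probe to a flag test, presumably along these lines:

	/* sketch of the accessor used in blk-mq.c above; the definition is
	 * assumed from this series' blkdev.h change, which is not part of
	 * this excerpt */
	#define blk_queue_sq_sched(q)	test_bit(QUEUE_FLAG_SQ_SCHED, &(q)->queue_flags)
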
index a8d628f..88a73b2 100644 (file)
@@ -3,8 +3,8 @@
 # Makefile for the linux kernel signature checking certificates.
 #
 
-obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o common.o
-obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o common.o
+obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o
+obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o
 obj-$(CONFIG_SYSTEM_REVOCATION_LIST) += revocation_certificates.o
 ifneq ($(CONFIG_SYSTEM_BLACKLIST_HASH_LIST),)
 
index 25094ea..41f1060 100644 (file)
 #include <linux/err.h>
 #include <linux/seq_file.h>
 #include <linux/uidgid.h>
-#include <linux/verification.h>
+#include <keys/asymmetric-type.h>
 #include <keys/system_keyring.h>
 #include "blacklist.h"
-#include "common.h"
 
 /*
  * According to crypto/asymmetric_keys/x509_cert_parser.c:x509_note_pkey_algo(),
@@ -365,8 +364,9 @@ static __init int load_revocation_certificate_list(void)
        if (revocation_certificate_list_size)
                pr_notice("Loading compiled-in revocation X.509 certificates\n");
 
-       return load_certificate_list(revocation_certificate_list, revocation_certificate_list_size,
-                                    blacklist_keyring);
+       return x509_load_certificate_list(revocation_certificate_list,
+                                         revocation_certificate_list_size,
+                                         blacklist_keyring);
 }
 late_initcall(load_revocation_certificate_list);
 #endif
diff --git a/certs/common.c b/certs/common.c
deleted file mode 100644 (file)
index 16a2208..0000000
+++ /dev/null
@@ -1,57 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-
-#include <linux/kernel.h>
-#include <linux/key.h>
-#include "common.h"
-
-int load_certificate_list(const u8 cert_list[],
-                         const unsigned long list_size,
-                         const struct key *keyring)
-{
-       key_ref_t key;
-       const u8 *p, *end;
-       size_t plen;
-
-       p = cert_list;
-       end = p + list_size;
-       while (p < end) {
-               /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
-                * than 256 bytes in size.
-                */
-               if (end - p < 4)
-                       goto dodgy_cert;
-               if (p[0] != 0x30 &&
-                   p[1] != 0x82)
-                       goto dodgy_cert;
-               plen = (p[2] << 8) | p[3];
-               plen += 4;
-               if (plen > end - p)
-                       goto dodgy_cert;
-
-               key = key_create_or_update(make_key_ref(keyring, 1),
-                                          "asymmetric",
-                                          NULL,
-                                          p,
-                                          plen,
-                                          ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
-                                          KEY_USR_VIEW | KEY_USR_READ),
-                                          KEY_ALLOC_NOT_IN_QUOTA |
-                                          KEY_ALLOC_BUILT_IN |
-                                          KEY_ALLOC_BYPASS_RESTRICTION);
-               if (IS_ERR(key)) {
-                       pr_err("Problem loading in-kernel X.509 certificate (%ld)\n",
-                              PTR_ERR(key));
-               } else {
-                       pr_notice("Loaded X.509 cert '%s'\n",
-                                 key_ref_to_ptr(key)->description);
-                       key_ref_put(key);
-               }
-               p += plen;
-       }
-
-       return 0;
-
-dodgy_cert:
-       pr_err("Problem parsing in-kernel X.509 certificate list\n");
-       return 0;
-}
diff --git a/certs/common.h b/certs/common.h
deleted file mode 100644 (file)
index abdb579..0000000
+++ /dev/null
@@ -1,9 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-
-#ifndef _CERT_COMMON_H
-#define _CERT_COMMON_H
-
-int load_certificate_list(const u8 cert_list[], const unsigned long list_size,
-                         const struct key *keyring);
-
-#endif
index 05b66ce..5042cc5 100644 (file)
@@ -16,7 +16,6 @@
 #include <keys/asymmetric-type.h>
 #include <keys/system_keyring.h>
 #include <crypto/pkcs7.h>
-#include "common.h"
 
 static struct key *builtin_trusted_keys;
 #ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
@@ -183,7 +182,8 @@ __init int load_module_cert(struct key *keyring)
 
        pr_notice("Loading compiled-in module X.509 certificates\n");
 
-       return load_certificate_list(system_certificate_list, module_cert_size, keyring);
+       return x509_load_certificate_list(system_certificate_list,
+                                         module_cert_size, keyring);
 }
 
 /*
@@ -204,7 +204,7 @@ static __init int load_system_certificate_list(void)
        size = system_certificate_list_size - module_cert_size;
 #endif
 
-       return load_certificate_list(p, size, builtin_trusted_keys);
+       return x509_load_certificate_list(p, size, builtin_trusted_keys);
 }
 late_initcall(load_system_certificate_list);
 
index 1919746..7b81685 100644 (file)
@@ -15,6 +15,7 @@ source "crypto/async_tx/Kconfig"
 #
 menuconfig CRYPTO
        tristate "Cryptographic API"
+       select LIB_MEMNEQ
        help
          This option provides the core Cryptographic API.
 
@@ -665,6 +666,18 @@ config CRYPTO_CRC32_MIPS
          CRC32c and CRC32 CRC algorithms implemented using mips crypto
          instructions, when available.
 
+config CRYPTO_CRC32_S390
+       tristate "CRC-32 algorithms"
+       depends on S390
+       select CRYPTO_HASH
+       select CRC32
+       help
+         Select this option if you want to use hardware accelerated
+         implementations of CRC algorithms.  With this option, you
+         can optimize the computation of CRC-32 (IEEE 802.3 Ethernet)
+         and CRC-32C (Castagnoli).
+
+         It is available with IBM z13 or later.
 
 config CRYPTO_XXHASH
        tristate "xxHash hash algorithm"
@@ -897,6 +910,16 @@ config CRYPTO_SHA512_SSSE3
          Extensions version 1 (AVX1), or Advanced Vector Extensions
          version 2 (AVX2) instructions, when available.
 
+config CRYPTO_SHA512_S390
+       tristate "SHA384 and SHA512 digest algorithm"
+       depends on S390
+       select CRYPTO_HASH
+       help
+         This is the s390 hardware accelerated implementation of the
+         SHA512 secure hash standard.
+
+         It is available as of z10.
+
 config CRYPTO_SHA1_OCTEON
        tristate "SHA1 digest algorithm (OCTEON)"
        depends on CPU_CAVIUM_OCTEON
@@ -929,6 +952,16 @@ config CRYPTO_SHA1_PPC_SPE
          SHA-1 secure hash standard (DFIPS 180-4) implemented
          using powerpc SPE SIMD instruction set.
 
+config CRYPTO_SHA1_S390
+       tristate "SHA1 digest algorithm"
+       depends on S390
+       select CRYPTO_HASH
+       help
+         This is the s390 hardware accelerated implementation of the
+         SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
+
+         It is available as of z990.
+
 config CRYPTO_SHA256
        tristate "SHA224 and SHA256 digest algorithm"
        select CRYPTO_HASH
@@ -969,6 +1002,16 @@ config CRYPTO_SHA256_SPARC64
          SHA-256 secure hash standard (DFIPS 180-2) implemented
          using sparc64 crypto instructions, when available.
 
+config CRYPTO_SHA256_S390
+       tristate "SHA256 digest algorithm"
+       depends on S390
+       select CRYPTO_HASH
+       help
+         This is the s390 hardware accelerated implementation of the
+         SHA256 secure hash standard (DFIPS 180-2).
+
+         It is available as of z9.
+
 config CRYPTO_SHA512
        tristate "SHA384 and SHA512 digest algorithms"
        select CRYPTO_HASH
@@ -1009,6 +1052,26 @@ config CRYPTO_SHA3
          References:
          http://keccak.noekeon.org/
 
+config CRYPTO_SHA3_256_S390
+       tristate "SHA3_224 and SHA3_256 digest algorithm"
+       depends on S390
+       select CRYPTO_HASH
+       help
+         This is the s390 hardware accelerated implementation of the
+         SHA3_256 secure hash standard.
+
+         It is available as of z14.
+
+config CRYPTO_SHA3_512_S390
+       tristate "SHA3_384 and SHA3_512 digest algorithm"
+       depends on S390
+       select CRYPTO_HASH
+       help
+         This is the s390 hardware accelerated implementation of the
+         SHA3_512 secure hash standard.
+
+         It is available as of z14.
+
 config CRYPTO_SM3
        tristate
 
@@ -1069,6 +1132,16 @@ config CRYPTO_GHASH_CLMUL_NI_INTEL
          This is the x86_64 CLMUL-NI accelerated implementation of
          GHASH, the hash function used in GCM (Galois/Counter mode).
 
+config CRYPTO_GHASH_S390
+       tristate "GHASH hash function"
+       depends on S390
+       select CRYPTO_HASH
+       help
+         This is the s390 hardware accelerated implementation of GHASH,
+         the hash function used in GCM (Galois/Counter mode).
+
+         It is available as of z196.
+
 comment "Ciphers"
 
 config CRYPTO_AES
@@ -1184,6 +1257,23 @@ config CRYPTO_AES_PPC_SPE
          architecture specific assembler implementations that work on 1KB
          tables or 256 bytes S-boxes.
 
+config CRYPTO_AES_S390
+       tristate "AES cipher algorithms"
+       depends on S390
+       select CRYPTO_ALGAPI
+       select CRYPTO_SKCIPHER
+       help
+         This is the s390 hardware accelerated implementation of the
+         AES cipher algorithms (FIPS-197).
+
+         As of z9 the ECB and CBC modes are hardware accelerated
+         for 128 bit keys.
+         As of z10 the ECB and CBC modes are hardware accelerated
+         for all AES key sizes.
+         As of z196 the CTR mode is hardware accelerated for all AES
+         key sizes and XTS mode is hardware accelerated for 256 and
+         512 bit keys.
+
 config CRYPTO_ANUBIS
        tristate "Anubis cipher algorithm"
        depends on CRYPTO_USER_API_ENABLE_OBSOLETE
@@ -1414,6 +1504,19 @@ config CRYPTO_DES3_EDE_X86_64
          algorithm are provided; regular processing one input block and
          one that processes three blocks parallel.
 
+config CRYPTO_DES_S390
+       tristate "DES and Triple DES cipher algorithms"
+       depends on S390
+       select CRYPTO_ALGAPI
+       select CRYPTO_SKCIPHER
+       select CRYPTO_LIB_DES
+       help
+         This is the s390 hardware accelerated implementation of the
+         DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
+
+         As of z990 the ECB and CBC mode are hardware accelerated.
+         As of z196 the CTR mode is hardware accelerated.
+
 config CRYPTO_FCRYPT
        tristate "FCrypt cipher algorithm"
        select CRYPTO_ALGAPI
@@ -1473,6 +1576,18 @@ config CRYPTO_CHACHA_MIPS
        select CRYPTO_SKCIPHER
        select CRYPTO_ARCH_HAVE_LIB_CHACHA
 
+config CRYPTO_CHACHA_S390
+       tristate "ChaCha20 stream cipher"
+       depends on S390
+       select CRYPTO_SKCIPHER
+       select CRYPTO_LIB_CHACHA_GENERIC
+       select CRYPTO_ARCH_HAVE_LIB_CHACHA
+       help
+         This is the s390 SIMD implementation of the ChaCha20 stream
+         cipher (RFC 7539).
+
+         It is available as of z13.
+
 config CRYPTO_SEED
        tristate "SEED cipher algorithm"
        depends on CRYPTO_USER_API_ENABLE_OBSOLETE
index 43bc33e..ceaaa9f 100644 (file)
@@ -4,7 +4,7 @@
 #
 
 obj-$(CONFIG_CRYPTO) += crypto.o
-crypto-y := api.o cipher.o compress.o memneq.o
+crypto-y := api.o cipher.o compress.o
 
 obj-$(CONFIG_CRYPTO_ENGINE) += crypto_engine.o
 obj-$(CONFIG_CRYPTO_FIPS) += fips.o
index 460bc5d..3df3fe4 100644 (file)
@@ -75,4 +75,14 @@ config SIGNED_PE_FILE_VERIFICATION
          This option provides support for verifying the signature(s) on a
          signed PE binary.
 
+config FIPS_SIGNATURE_SELFTEST
+       bool "Run FIPS selftests on the X.509+PKCS7 signature verification"
+       help
+         This option causes some selftests to be run on the signature
+         verification code, using some built in data.  This is required
+         for FIPS.
+       depends on KEYS
+       depends on ASYMMETRIC_KEY_TYPE
+       depends on PKCS7_MESSAGE_PARSER
+
 endif # ASYMMETRIC_KEY_TYPE
index c38424f..0d1fa1b 100644 (file)
@@ -20,7 +20,9 @@ x509_key_parser-y := \
        x509.asn1.o \
        x509_akid.asn1.o \
        x509_cert_parser.o \
+       x509_loader.o \
        x509_public_key.o
+x509_key_parser-$(CONFIG_FIPS_SIGNATURE_SELFTEST) += selftest.o
 
 $(obj)/x509_cert_parser.o: \
        $(obj)/x509.asn1.h \
diff --git a/crypto/asymmetric_keys/selftest.c b/crypto/asymmetric_keys/selftest.c
new file mode 100644 (file)
index 0000000..fa0bf7f
--- /dev/null
@@ -0,0 +1,224 @@
+/* Self-testing for signature checking.
+ *
+ * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include <linux/kernel.h>
+#include <linux/cred.h>
+#include <linux/key.h>
+#include <crypto/pkcs7.h>
+#include "x509_parser.h"
+
+struct certs_test {
+       const u8        *data;
+       size_t          data_len;
+       const u8        *pkcs7;
+       size_t          pkcs7_len;
+};
+
+/*
+ * Set of X.509 certificates to provide public keys for the tests.  These will
+ * be loaded into a temporary keyring for the duration of the testing.
+ */
+static const __initconst u8 certs_selftest_keys[] = {
+       "\x30\x82\x05\x55\x30\x82\x03\x3d\xa0\x03\x02\x01\x02\x02\x14\x73"
+       "\x98\xea\x98\x2d\xd0\x2e\xa8\xb1\xcf\x57\xc7\xf2\x97\xb3\xe6\x1a"
+       "\xfc\x8c\x0a\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01\x0b"
+       "\x05\x00\x30\x34\x31\x32\x30\x30\x06\x03\x55\x04\x03\x0c\x29\x43"
+       "\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x20\x76\x65\x72\x69\x66"
+       "\x69\x63\x61\x74\x69\x6f\x6e\x20\x73\x65\x6c\x66\x2d\x74\x65\x73"
+       "\x74\x69\x6e\x67\x20\x6b\x65\x79\x30\x20\x17\x0d\x32\x32\x30\x35"
+       "\x31\x38\x32\x32\x33\x32\x34\x31\x5a\x18\x0f\x32\x31\x32\x32\x30"
+       "\x34\x32\x34\x32\x32\x33\x32\x34\x31\x5a\x30\x34\x31\x32\x30\x30"
+       "\x06\x03\x55\x04\x03\x0c\x29\x43\x65\x72\x74\x69\x66\x69\x63\x61"
+       "\x74\x65\x20\x76\x65\x72\x69\x66\x69\x63\x61\x74\x69\x6f\x6e\x20"
+       "\x73\x65\x6c\x66\x2d\x74\x65\x73\x74\x69\x6e\x67\x20\x6b\x65\x79"
+       "\x30\x82\x02\x22\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01"
+       "\x01\x05\x00\x03\x82\x02\x0f\x00\x30\x82\x02\x0a\x02\x82\x02\x01"
+       "\x00\xcc\xac\x49\xdd\x3b\xca\xb0\x15\x7e\x84\x6a\xb2\x0a\x69\x5f"
+       "\x1c\x0a\x61\x82\x3b\x4f\x2c\xa3\x95\x2c\x08\x58\x4b\xb1\x5d\x99"
+       "\xe0\xc3\xc1\x79\xc2\xb3\xeb\xc0\x1e\x6d\x3e\x54\x1d\xbd\xb7\x92"
+       "\x7b\x4d\xb5\x95\x58\xb2\x52\x2e\xc6\x24\x4b\x71\x63\x80\x32\x77"
+       "\xa7\x38\x5e\xdb\x72\xae\x6e\x0d\xec\xfb\xb6\x6d\x01\x7f\xe9\x55"
+       "\x66\xdf\xbf\x1d\x76\x78\x02\x31\xe8\xe5\x07\xf8\xb7\x82\x5c\x0d"
+       "\xd4\xbb\xfb\xa2\x59\x0d\x2e\x3a\x78\x95\x3a\x8b\x46\x06\x47\x44"
+       "\x46\xd7\xcd\x06\x6a\x41\x13\xe3\x19\xf6\xbb\x6e\x38\xf4\x83\x01"
+       "\xa3\xbf\x4a\x39\x4f\xd7\x0a\xe9\x38\xb3\xf5\x94\x14\x4e\xdd\xf7"
+       "\x43\xfd\x24\xb2\x49\x3c\xa5\xf7\x7a\x7c\xd4\x45\x3d\x97\x75\x68"
+       "\xf1\xed\x4c\x42\x0b\x70\xca\x85\xf3\xde\xe5\x88\x2c\xc5\xbe\xb6"
+       "\x97\x34\xba\x24\x02\xcd\x8b\x86\x9f\xa9\x73\xca\x73\xcf\x92\x81"
+       "\xee\x75\x55\xbb\x18\x67\x5c\xff\x3f\xb5\xdd\x33\x1b\x0c\xe9\x78"
+       "\xdb\x5c\xcf\xaa\x5c\x43\x42\xdf\x5e\xa9\x6d\xec\xd7\xd7\xff\xe6"
+       "\xa1\x3a\x92\x1a\xda\xae\xf6\x8c\x6f\x7b\xd5\xb4\x6e\x06\xe9\x8f"
+       "\xe8\xde\x09\x31\x89\xed\x0e\x11\xa1\xfa\x8a\xe9\xe9\x64\x59\x62"
+       "\x53\xda\xd1\x70\xbe\x11\xd4\x99\x97\x11\xcf\x99\xde\x0b\x9d\x94"
+       "\x7e\xaa\xb8\x52\xea\x37\xdb\x90\x7e\x35\xbd\xd9\xfe\x6d\x0a\x48"
+       "\x70\x28\xdd\xd5\x0d\x7f\x03\x80\x93\x14\x23\x8f\xb9\x22\xcd\x7c"
+       "\x29\xfe\xf1\x72\xb5\x5c\x0b\x12\xcf\x9c\x15\xf6\x11\x4c\x7a\x45"
+       "\x25\x8c\x45\x0a\x34\xac\x2d\x9a\x81\xca\x0b\x13\x22\xcd\xeb\x1a"
+       "\x38\x88\x18\x97\x96\x08\x81\xaa\xcc\x8f\x0f\x8a\x32\x7b\x76\x68"
+       "\x03\x68\x43\xbf\x11\xba\x55\x60\xfd\x80\x1c\x0d\x9b\x69\xb6\x09"
+       "\x72\xbc\x0f\x41\x2f\x07\x82\xc6\xe3\xb2\x13\x91\xc4\x6d\x14\x95"
+       "\x31\xbe\x19\xbd\xbc\xed\xe1\x4c\x74\xa2\xe0\x78\x0b\xbb\x94\xec"
+       "\x4c\x53\x3a\xa2\xb5\x84\x1d\x4b\x65\x7e\xdc\xf7\xdb\x36\x7d\xbe"
+       "\x9e\x3b\x36\x66\x42\x66\x76\x35\xbf\xbe\xf0\xc1\x3c\x7c\xe9\x42"
+       "\x5c\x24\x53\x03\x05\xa8\x67\x24\x50\x02\x75\xff\x24\x46\x3b\x35"
+       "\x89\x76\xe6\x70\xda\xc5\x51\x8c\x9a\xe5\x05\xb0\x0b\xd0\x2d\xd4"
+       "\x7d\x57\x75\x94\x6b\xf9\x0a\xad\x0e\x41\x00\x15\xd0\x4f\xc0\x7f"
+       "\x90\x2d\x18\x48\x8f\x28\xfe\x5d\xa7\xcd\x99\x9e\xbd\x02\x6c\x8a"
+       "\x31\xf3\x1c\xc7\x4b\xe6\x93\xcd\x42\xa2\xe4\x68\x10\x47\x9d\xfc"
+       "\x21\x02\x03\x01\x00\x01\xa3\x5d\x30\x5b\x30\x0c\x06\x03\x55\x1d"
+       "\x13\x01\x01\xff\x04\x02\x30\x00\x30\x0b\x06\x03\x55\x1d\x0f\x04"
+       "\x04\x03\x02\x07\x80\x30\x1d\x06\x03\x55\x1d\x0e\x04\x16\x04\x14"
+       "\xf5\x87\x03\xbb\x33\xce\x1b\x73\xee\x02\xec\xcd\xee\x5b\x88\x17"
+       "\x51\x8f\xe3\xdb\x30\x1f\x06\x03\x55\x1d\x23\x04\x18\x30\x16\x80"
+       "\x14\xf5\x87\x03\xbb\x33\xce\x1b\x73\xee\x02\xec\xcd\xee\x5b\x88"
+       "\x17\x51\x8f\xe3\xdb\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01"
+       "\x01\x0b\x05\x00\x03\x82\x02\x01\x00\xc0\x2e\x12\x41\x7b\x73\x85"
+       "\x16\xc8\xdb\x86\x79\xe8\xf5\xcd\x44\xf4\xc6\xe2\x81\x23\x5e\x47"
+       "\xcb\xab\x25\xf1\x1e\x58\x3e\x31\x7f\x78\xad\x85\xeb\xfe\x14\x88"
+       "\x60\xf7\x7f\xd2\x26\xa2\xf4\x98\x2a\xfd\xba\x05\x0c\x20\x33\x12"
+       "\xcc\x4d\x14\x61\x64\x81\x93\xd3\x33\xed\xc8\xff\xf1\x78\xcc\x5f"
+       "\x51\x9f\x09\xd7\xbe\x0d\x5c\x74\xfd\x9b\xdf\x52\x4a\xc9\xa8\x71"
+       "\x25\x33\x04\x10\x67\x36\xd0\xb3\x0b\xc9\xa1\x40\x72\xae\x41\x7b"
+       "\x68\xe6\xe4\x7b\xd0\x28\xf7\x6d\xe7\x3f\x50\xfc\x91\x7c\x91\x56"
+       "\xd4\xdf\xa6\xbb\xe8\x4d\x1b\x58\xaa\x28\xfa\xc1\x19\xeb\x11\x2f"
+       "\x24\x8b\x7c\xc5\xa9\x86\x26\xaa\x6e\xb7\x9b\xd5\xf8\x06\xfb\x02"
+       "\x52\x7b\x9c\x9e\xa1\xe0\x07\x8b\x5e\xe4\xb8\x55\x29\xf6\x48\x52"
+       "\x1c\x1b\x54\x2d\x46\xd8\xe5\x71\xb9\x60\xd1\x45\xb5\x92\x89\x8a"
+       "\x63\x58\x2a\xb3\xc6\xb2\x76\xe2\x3c\x82\x59\x04\xae\x5a\xc4\x99"
+       "\x7b\x2e\x4b\x46\x57\xb8\x29\x24\xb2\xfd\xee\x2c\x0d\xa4\x83\xfa"
+       "\x65\x2a\x07\x35\x8b\x97\xcf\xbd\x96\x2e\xd1\x7e\x6c\xc2\x1e\x87"
+       "\xb6\x6c\x76\x65\xb5\xb2\x62\xda\x8b\xe9\x73\xe3\xdb\x33\xdd\x13"
+       "\x3a\x17\x63\x6a\x76\xde\x8d\x8f\xe0\x47\x61\x28\x3a\x83\xff\x8f"
+       "\xe7\xc7\xe0\x4a\xa3\xe5\x07\xcf\xe9\x8c\x35\x35\x2e\xe7\x80\x66"
+       "\x31\xbf\x91\x58\x0a\xe1\x25\x3d\x38\xd3\xa4\xf0\x59\x34\x47\x07"
+       "\x62\x0f\xbe\x30\xdd\x81\x88\x58\xf0\x28\xb0\x96\xe5\x82\xf8\x05"
+       "\xb7\x13\x01\xbc\xfa\xc6\x1f\x86\x72\xcc\xf9\xee\x8e\xd9\xd6\x04"
+       "\x8c\x24\x6c\xbf\x0f\x5d\x37\x39\xcf\x45\xc1\x93\x3a\xd2\xed\x5c"
+       "\x58\x79\x74\x86\x62\x30\x7e\x8e\xbb\xdd\x7a\xa9\xed\xca\x40\xcb"
+       "\x62\x47\xf4\xb4\x9f\x52\x7f\x72\x63\xa8\xf0\x2b\xaf\x45\x2a\x48"
+       "\x19\x6d\xe3\xfb\xf9\x19\x66\x69\xc8\xcc\x62\x87\x6c\x53\x2b\x2d"
+       "\x6e\x90\x6c\x54\x3a\x82\x25\x41\xcb\x18\x6a\xa4\x22\xa8\xa1\xc4"
+       "\x47\xd7\x81\x00\x1c\x15\x51\x0f\x1a\xaf\xef\x9f\xa6\x61\x8c\xbd"
+       "\x6b\x8b\xed\xe6\xac\x0e\xb6\x3a\x4c\x92\xe6\x0f\x91\x0a\x0f\x71"
+       "\xc7\xa0\xb9\x0d\x3a\x17\x5a\x6f\x35\xc8\xe7\x50\x4f\x46\xe8\x70"
+       "\x60\x48\x06\x82\x8b\x66\x58\xe6\x73\x91\x9c\x12\x3d\x35\x8e\x46"
+       "\xad\x5a\xf5\xb3\xdb\x69\x21\x04\xfd\xd3\x1c\xdf\x94\x9d\x56\xb0"
+       "\x0a\xd1\x95\x76\x8d\xec\x9e\xdd\x0b\x15\x97\x64\xad\xe5\xf2\x62"
+       "\x02\xfc\x9e\x5f\x56\x42\x39\x05\xb3"
+};
+
+/*
+ * Signed data and detached signature blobs that form the verification tests.
+ */
+static const __initconst u8 certs_selftest_1_data[] = {
+       "\x54\x68\x69\x73\x20\x69\x73\x20\x73\x6f\x6d\x65\x20\x74\x65\x73"
+       "\x74\x20\x64\x61\x74\x61\x20\x75\x73\x65\x64\x20\x66\x6f\x72\x20"
+       "\x73\x65\x6c\x66\x2d\x74\x65\x73\x74\x69\x6e\x67\x20\x63\x65\x72"
+       "\x74\x69\x66\x69\x63\x61\x74\x65\x20\x76\x65\x72\x69\x66\x69\x63"
+       "\x61\x74\x69\x6f\x6e\x2e\x0a"
+};
+
+static const __initconst u8 certs_selftest_1_pkcs7[] = {
+       "\x30\x82\x02\xab\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x07\x02\xa0"
+       "\x82\x02\x9c\x30\x82\x02\x98\x02\x01\x01\x31\x0d\x30\x0b\x06\x09"
+       "\x60\x86\x48\x01\x65\x03\x04\x02\x01\x30\x0b\x06\x09\x2a\x86\x48"
+       "\x86\xf7\x0d\x01\x07\x01\x31\x82\x02\x75\x30\x82\x02\x71\x02\x01"
+       "\x01\x30\x4c\x30\x34\x31\x32\x30\x30\x06\x03\x55\x04\x03\x0c\x29"
+       "\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x20\x76\x65\x72\x69"
+       "\x66\x69\x63\x61\x74\x69\x6f\x6e\x20\x73\x65\x6c\x66\x2d\x74\x65"
+       "\x73\x74\x69\x6e\x67\x20\x6b\x65\x79\x02\x14\x73\x98\xea\x98\x2d"
+       "\xd0\x2e\xa8\xb1\xcf\x57\xc7\xf2\x97\xb3\xe6\x1a\xfc\x8c\x0a\x30"
+       "\x0b\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x30\x0d\x06\x09"
+       "\x2a\x86\x48\x86\xf7\x0d\x01\x01\x01\x05\x00\x04\x82\x02\x00\xac"
+       "\xb0\xf2\x07\xd6\x99\x6d\xc0\xc0\xd9\x8d\x31\x0d\x7e\x04\xeb\xc3"
+       "\x88\x90\xc4\x58\x46\xd4\xe2\xa0\xa3\x25\xe3\x04\x50\x37\x85\x8c"
+       "\x91\xc6\xfc\xc5\xd4\x92\xfd\x05\xd8\xb8\xa3\xb8\xba\x89\x13\x00"
+       "\x88\x79\x99\x51\x6b\x5b\x28\x31\xc0\xb3\x1b\x7a\x68\x2c\x00\xdb"
+       "\x4b\x46\x11\xf3\xfa\x50\x8e\x19\x89\xa2\x4c\xda\x4c\x89\x01\x11"
+       "\x89\xee\xd3\xc8\xc1\xe7\xa7\xf6\xb2\xa2\xf8\x65\xb8\x35\x20\x33"
+       "\xba\x12\x62\xd5\xbd\xaa\x71\xe5\x5b\xc0\x6a\x32\xff\x6a\x2e\x23"
+       "\xef\x2b\xb6\x58\xb1\xfb\x5f\x82\x34\x40\x6d\x9f\xbc\x27\xac\x37"
+       "\x23\x99\xcf\x7d\x20\xb2\x39\x01\xc0\x12\xce\xd7\x5d\x2f\xb6\xab"
+       "\xb5\x56\x4f\xef\xf4\x72\x07\x58\x65\xa9\xeb\x1f\x75\x1c\x5f\x0c"
+       "\x88\xe0\xa4\xe2\xcd\x73\x2b\x9e\xb2\x05\x7e\x12\xf8\xd0\x66\x41"
+       "\xcc\x12\x63\xd4\xd6\xac\x9b\x1d\x14\x77\x8d\x1c\x57\xd5\x27\xc6"
+       "\x49\xa2\x41\x43\xf3\x59\x29\xe5\xcb\xd1\x75\xbc\x3a\x97\x2a\x72"
+       "\x22\x66\xc5\x3b\xc1\xba\xfc\x53\x18\x98\xe2\x21\x64\xc6\x52\x87"
+       "\x13\xd5\x7c\x42\xe8\xfb\x9c\x9a\x45\x32\xd5\xa5\x22\x62\x9d\xd4"
+       "\xcb\xa4\xfa\x77\xbb\x50\x24\x0b\x8b\x88\x99\x15\x56\xa9\x1e\x92"
+       "\xbf\x5d\x94\x77\xb6\xf1\x67\x01\x60\x06\x58\x5c\xdf\x18\x52\x79"
+       "\x37\x30\x93\x7d\x87\x04\xf1\xe0\x55\x59\x52\xf3\xc2\xb1\x1c\x5b"
+       "\x12\x7c\x49\x87\xfb\xf7\xed\xdd\x95\x71\xec\x4b\x1a\x85\x08\xb0"
+       "\xa0\x36\xc4\x7b\xab\x40\xe0\xf1\x98\xcc\xaf\x19\x40\x8f\x47\x6f"
+       "\xf0\x6c\x84\x29\x7f\x7f\x04\x46\xcb\x08\x0f\xe0\xc1\xc9\x70\x6e"
+       "\x95\x3b\xa4\xbc\x29\x2b\x53\x67\x45\x1b\x0d\xbc\x13\xa5\x76\x31"
+       "\xaf\xb9\xd0\xe0\x60\x12\xd2\xf4\xb7\x7c\x58\x7e\xf6\x2d\xbb\x24"
+       "\x14\x5a\x20\x24\xa8\x12\xdf\x25\xbd\x42\xce\x96\x7c\x2e\xba\x14"
+       "\x1b\x81\x9f\x18\x45\xa4\xc6\x70\x3e\x0e\xf0\xd3\x7b\x9c\x10\xbe"
+       "\xb8\x7a\x89\xc5\x9e\xd9\x97\xdf\xd7\xe7\xc6\x1d\xc0\x20\x6c\xb8"
+       "\x1e\x3a\x63\xb8\x39\x8e\x8e\x62\xd5\xd2\xb4\xcd\xff\x46\xfc\x8e"
+       "\xec\x07\x35\x0c\xff\xb0\x05\xe6\xf4\xe5\xfe\xa2\xe3\x0a\xe6\x36"
+       "\xa7\x4a\x7e\x62\x1d\xc4\x50\x39\x35\x4e\x28\xcb\x4a\xfb\x9d\xdb"
+       "\xdd\x23\xd6\x53\xb1\x74\x77\x12\xf7\x9c\xf0\x9a\x6b\xf7\xa9\x64"
+       "\x2d\x86\x21\x2a\xcf\xc6\x54\xf5\xc9\xad\xfa\xb5\x12\xb4\xf3\x51"
+       "\x77\x55\x3c\x6f\x0c\x32\xd3\x8c\x44\x39\x71\x25\xfe\x96\xd2"
+};
+
+/*
+ * List of tests to be run.
+ */
+#define TEST(data, pkcs7) { data, sizeof(data) - 1, pkcs7, sizeof(pkcs7) - 1 }
+static const struct certs_test certs_tests[] __initconst = {
+       TEST(certs_selftest_1_data, certs_selftest_1_pkcs7),
+};
+
+int __init fips_signature_selftest(void)
+{
+       struct key *keyring;
+       int ret, i;
+
+       pr_notice("Running certificate verification selftests\n");
+
+       keyring = keyring_alloc(".certs_selftest",
+                               GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, current_cred(),
+                               (KEY_POS_ALL & ~KEY_POS_SETATTR) |
+                               KEY_USR_VIEW | KEY_USR_READ |
+                               KEY_USR_SEARCH,
+                               KEY_ALLOC_NOT_IN_QUOTA,
+                               NULL, NULL);
+       if (IS_ERR(keyring))
+               panic("Can't allocate certs selftest keyring: %ld\n",
+                     PTR_ERR(keyring));
+
+       ret = x509_load_certificate_list(certs_selftest_keys,
+                                        sizeof(certs_selftest_keys) - 1, keyring);
+       if (ret < 0)
+               panic("Can't allocate certs selftest keyring: %d\n", ret);
+
+       for (i = 0; i < ARRAY_SIZE(certs_tests); i++) {
+               const struct certs_test *test = &certs_tests[i];
+               struct pkcs7_message *pkcs7;
+
+               pkcs7 = pkcs7_parse_message(test->pkcs7, test->pkcs7_len);
+               if (IS_ERR(pkcs7))
+                       panic("Certs selftest %d: pkcs7_parse_message() = %d\n", i, ret);
+
+               pkcs7_supply_detached_data(pkcs7, test->data, test->data_len);
+
+               ret = pkcs7_verify(pkcs7, VERIFYING_MODULE_SIGNATURE);
+               if (ret < 0)
+                       panic("Certs selftest %d: pkcs7_verify() = %d\n", i, ret);
+
+               ret = pkcs7_validate_trust(pkcs7, keyring);
+               if (ret < 0)
+                       panic("Certs selftest %d: pkcs7_validate_trust() = %d\n", i, ret);
+
+               pkcs7_free_message(pkcs7);
+       }
+
+       key_put(keyring);
+       return 0;
+}
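
One subtlety in the selftest above: the key, data, and PKCS#7 blobs are u8 arrays initialized from string literals, so sizeof() counts the terminating NUL the literal appends; the `- 1` in both TEST() and the x509_load_certificate_list() call strips it. Expanded:

	/* TEST(certs_selftest_1_data, certs_selftest_1_pkcs7) becomes:
	 *
	 * { certs_selftest_1_data,  sizeof(certs_selftest_1_data) - 1,
	 *   certs_selftest_1_pkcs7, sizeof(certs_selftest_1_pkcs7) - 1 }
	 */
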
diff --git a/crypto/asymmetric_keys/x509_loader.c b/crypto/asymmetric_keys/x509_loader.c
new file mode 100644 (file)
index 0000000..1bc169d
--- /dev/null
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <linux/kernel.h>
+#include <linux/key.h>
+#include <keys/asymmetric-type.h>
+
+int x509_load_certificate_list(const u8 cert_list[],
+                              const unsigned long list_size,
+                              const struct key *keyring)
+{
+       key_ref_t key;
+       const u8 *p, *end;
+       size_t plen;
+
+       p = cert_list;
+       end = p + list_size;
+       while (p < end) {
+               /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
+                * than 256 bytes in size.
+                */
+               if (end - p < 4)
+                       goto dodgy_cert;
+               if (p[0] != 0x30 &&
+                   p[1] != 0x82)
+                       goto dodgy_cert;
+               plen = (p[2] << 8) | p[3];
+               plen += 4;
+               if (plen > end - p)
+                       goto dodgy_cert;
+
+               key = key_create_or_update(make_key_ref(keyring, 1),
+                                          "asymmetric",
+                                          NULL,
+                                          p,
+                                          plen,
+                                          ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
+                                          KEY_USR_VIEW | KEY_USR_READ),
+                                          KEY_ALLOC_NOT_IN_QUOTA |
+                                          KEY_ALLOC_BUILT_IN |
+                                          KEY_ALLOC_BYPASS_RESTRICTION);
+               if (IS_ERR(key)) {
+                       pr_err("Problem loading in-kernel X.509 certificate (%ld)\n",
+                              PTR_ERR(key));
+               } else {
+                       pr_notice("Loaded X.509 cert '%s'\n",
+                                 key_ref_to_ptr(key)->description);
+                       key_ref_put(key);
+               }
+               p += plen;
+       }
+
+       return 0;
+
+dodgy_cert:
+       pr_err("Problem parsing in-kernel X.509 certificate list\n");
+       return 0;
+}
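
The parser above accepts only certificates whose outer SEQUENCE uses the two-byte long-form DER length (tag 0x30, length byte 0x82), which is why the comment says each cert must exceed 256 bytes. A worked example, using the first four bytes of the selftest key earlier in this pull:

	/* 0x30 0x82 0x05 0x55 ...
	 *  |    |    `--+---'
	 *  |    |       payload length = 0x0555 = 1365 bytes
	 *  |    `-- long form, length encoded in the next 2 bytes
	 *  `-- ASN.1 SEQUENCE tag
	 *
	 * plen = ((0x05 << 8) | 0x55) + 4 = 1369 bytes including the
	 * 4-byte header, which is how far p advances to the next cert.
	 */
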
index 97a886c..a299c9c 100644 (file)
@@ -40,6 +40,15 @@ struct x509_certificate {
        bool            blacklisted;
 };
 
+/*
+ * selftest.c
+ */
+#ifdef CONFIG_FIPS_SIGNATURE_SELFTEST
+extern int __init fips_signature_selftest(void);
+#else
+static inline int fips_signature_selftest(void) { return 0; }
+#endif
+
 /*
  * x509_cert_parser.c
  */
index 77ed4e9..0b4943a 100644 (file)
@@ -244,9 +244,15 @@ static struct asymmetric_key_parser x509_key_parser = {
 /*
  * Module stuff
  */
+extern int __init certs_selftest(void);
 static int __init x509_key_init(void)
 {
-       return register_asymmetric_key_parser(&x509_key_parser);
+       int ret;
+
+       ret = register_asymmetric_key_parser(&x509_key_parser);
+       if (ret < 0)
+               return ret;
+       return fips_signature_selftest();
 }
 
 static void __exit x509_key_exit(void)
diff --git a/crypto/memneq.c b/crypto/memneq.c
deleted file mode 100644 (file)
index fb11608..0000000
+++ /dev/null
@@ -1,176 +0,0 @@
-/*
- * Constant-time equality testing of memory regions.
- *
- * Authors:
- *
- *   James Yonan <james@openvpn.net>
- *   Daniel Borkmann <dborkman@redhat.com>
- *
- * This file is provided under a dual BSD/GPLv2 license.  When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- * The full GNU General Public License is included in this distribution
- * in the file called LICENSE.GPL.
- *
- * BSD LICENSE
- *
- * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * Redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * Redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
- *   * Neither the name of OpenVPN Technologies nor the names of its
- *     contributors may be used to endorse or promote products derived
- *     from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <crypto/algapi.h>
-#include <asm/unaligned.h>
-
-#ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
-
-/* Generic path for arbitrary size */
-static inline unsigned long
-__crypto_memneq_generic(const void *a, const void *b, size_t size)
-{
-       unsigned long neq = 0;
-
-#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
-       while (size >= sizeof(unsigned long)) {
-               neq |= get_unaligned((unsigned long *)a) ^
-                      get_unaligned((unsigned long *)b);
-               OPTIMIZER_HIDE_VAR(neq);
-               a += sizeof(unsigned long);
-               b += sizeof(unsigned long);
-               size -= sizeof(unsigned long);
-       }
-#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
-       while (size > 0) {
-               neq |= *(unsigned char *)a ^ *(unsigned char *)b;
-               OPTIMIZER_HIDE_VAR(neq);
-               a += 1;
-               b += 1;
-               size -= 1;
-       }
-       return neq;
-}
-
-/* Loop-free fast-path for frequently used 16-byte size */
-static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
-{
-       unsigned long neq = 0;
-
-#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
-       if (sizeof(unsigned long) == 8) {
-               neq |= get_unaligned((unsigned long *)a) ^
-                      get_unaligned((unsigned long *)b);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= get_unaligned((unsigned long *)(a + 8)) ^
-                      get_unaligned((unsigned long *)(b + 8));
-               OPTIMIZER_HIDE_VAR(neq);
-       } else if (sizeof(unsigned int) == 4) {
-               neq |= get_unaligned((unsigned int *)a) ^
-                      get_unaligned((unsigned int *)b);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= get_unaligned((unsigned int *)(a + 4)) ^
-                      get_unaligned((unsigned int *)(b + 4));
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= get_unaligned((unsigned int *)(a + 8)) ^
-                      get_unaligned((unsigned int *)(b + 8));
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= get_unaligned((unsigned int *)(a + 12)) ^
-                      get_unaligned((unsigned int *)(b + 12));
-               OPTIMIZER_HIDE_VAR(neq);
-       } else
-#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
-       {
-               neq |= *(unsigned char *)(a)    ^ *(unsigned char *)(b);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+1)  ^ *(unsigned char *)(b+1);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+2)  ^ *(unsigned char *)(b+2);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+3)  ^ *(unsigned char *)(b+3);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+4)  ^ *(unsigned char *)(b+4);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+5)  ^ *(unsigned char *)(b+5);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+6)  ^ *(unsigned char *)(b+6);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+7)  ^ *(unsigned char *)(b+7);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+8)  ^ *(unsigned char *)(b+8);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+9)  ^ *(unsigned char *)(b+9);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+10) ^ *(unsigned char *)(b+10);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+11) ^ *(unsigned char *)(b+11);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+12) ^ *(unsigned char *)(b+12);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+13) ^ *(unsigned char *)(b+13);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+14) ^ *(unsigned char *)(b+14);
-               OPTIMIZER_HIDE_VAR(neq);
-               neq |= *(unsigned char *)(a+15) ^ *(unsigned char *)(b+15);
-               OPTIMIZER_HIDE_VAR(neq);
-       }
-
-       return neq;
-}
-
-/* Compare two areas of memory without leaking timing information,
- * and with special optimizations for common sizes.  Users should
- * not call this function directly, but should instead use
- * crypto_memneq defined in crypto/algapi.h.
- */
-noinline unsigned long __crypto_memneq(const void *a, const void *b,
-                                      size_t size)
-{
-       switch (size) {
-       case 16:
-               return __crypto_memneq_16(a, b);
-       default:
-               return __crypto_memneq_generic(a, b, size);
-       }
-}
-EXPORT_SYMBOL(__crypto_memneq);
-
-#endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
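The helpers removed above are the generic backing for crypto_memneq(); architectures defining __HAVE_ARCH_CRYPTO_MEMNEQ now supply their own. A minimal usage sketch (buffer names hypothetical), going through the crypto/algapi.h wrapper as the deleted comment instructs:

    #include <crypto/algapi.h>      /* crypto_memneq() */

    /* Constant-time tag check: unlike memcmp(), the time taken does
     * not depend on where the first differing byte sits, so an
     * attacker cannot binary-search a secret one byte at a time.
     */
    static bool tag_matches(const u8 *expected, const u8 *received,
                            size_t len)
    {
            return crypto_memneq(expected, received, len) == 0;
    }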
index e07782b..43177c2 100644 (file)
@@ -73,6 +73,7 @@ module_param(device_id_scheme, bool, 0444);
 static int only_lcd = -1;
 module_param(only_lcd, int, 0444);
 
+static bool has_backlight;
 static int register_count;
 static DEFINE_MUTEX(register_count_mutex);
 static DEFINE_MUTEX(video_list_lock);
@@ -1222,6 +1223,9 @@ acpi_video_bus_get_one_device(struct acpi_device *device,
        acpi_video_device_bind(video, data);
        acpi_video_device_find_cap(data);
 
+       if (data->cap._BCM && data->cap._BCL)
+               has_backlight = true;
+
        mutex_lock(&video->device_list_lock);
        list_add_tail(&data->entry, &video->video_device_list);
        mutex_unlock(&video->device_list_lock);
@@ -2249,6 +2253,7 @@ void acpi_video_unregister(void)
        if (register_count) {
                acpi_bus_unregister_driver(&acpi_video_bus);
                register_count = 0;
+               has_backlight = false;
        }
        mutex_unlock(&register_count_mutex);
 }
@@ -2270,13 +2275,7 @@ void acpi_video_unregister_backlight(void)
 
 bool acpi_video_handles_brightness_key_presses(void)
 {
-       bool have_video_busses;
-
-       mutex_lock(&video_list_lock);
-       have_video_busses = !list_empty(&video_bus_head);
-       mutex_unlock(&video_list_lock);
-
-       return have_video_busses &&
+       return has_backlight &&
               (report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS);
 }
 EXPORT_SYMBOL(acpi_video_handles_brightness_key_presses);
index 6725931..c2c3238 100644 (file)
@@ -90,7 +90,7 @@ static void cs5535_set_piomode(struct ata_port *ap, struct ata_device *adev)
        static const u16 pio_cmd_timings[5] = {
                0xF7F4, 0x53F3, 0x13F1, 0x5131, 0x1131
        };
-       u32 reg, dummy;
+       u32 reg, __maybe_unused dummy;
        struct ata_device *pair = ata_dev_pair(adev);
 
        int mode = adev->pio_mode - XFER_PIO_0;
@@ -129,7 +129,7 @@ static void cs5535_set_dmamode(struct ata_port *ap, struct ata_device *adev)
        static const u32 mwdma_timings[3] = {
                0x7F0FFFF3, 0x7F035352, 0x7F024241
        };
-       u32 reg, dummy;
+       u32 reg, __maybe_unused dummy;
        int mode = adev->dma_mode;
 
        rdmsr(ATAC_CH0D0_DMA + 2 * adev->devno, reg, dummy);
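A sketch of the __maybe_unused idiom both hunks rely on (MSR number and consumer are placeholders): rdmsr() must be handed two lvalues even when only one half is wanted, so the unused half is annotated instead of dropped:

    u32 lo, __maybe_unused hi;   /* __maybe_unused silences -Wunused */

    rdmsr(SOME_MSR, lo, hi);     /* SOME_MSR is hypothetical */
    configure_timings(lo);       /* only the low half is consumed */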
index d8d0fe6..397eb98 100644 (file)
@@ -8,6 +8,7 @@
 #include <linux/init.h>
 #include <linux/memory.h>
 #include <linux/of.h>
+#include <linux/backing-dev.h>
 
 #include "base.h"
 
@@ -20,6 +21,7 @@
 void __init driver_init(void)
 {
        /* These are the core pieces */
+       bdi_init(&noop_backing_dev_info);
        devtmpfs_init();
        devices_init();
        buses_init();
index 084d67f..bc60c9c 100644 (file)
@@ -558,7 +558,7 @@ static ssize_t hard_offline_page_store(struct device *dev,
        if (kstrtoull(buf, 0, &pfn) < 0)
                return -EINVAL;
        pfn >>= PAGE_SHIFT;
-       ret = memory_failure(pfn, 0);
+       ret = memory_failure(pfn, MF_SW_SIMULATED);
        if (ret == -EOPNOTSUPP)
                ret = 0;
        return ret ? ret : count;
index 400c741..a6db605 100644 (file)
@@ -252,6 +252,7 @@ static void regmap_irq_enable(struct irq_data *data)
        struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);
        struct regmap *map = d->map;
        const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq);
+       unsigned int reg = irq_data->reg_offset / map->reg_stride;
        unsigned int mask, type;
 
        type = irq_data->type.type_falling_val | irq_data->type.type_rising_val;
@@ -268,14 +269,14 @@ static void regmap_irq_enable(struct irq_data *data)
         * at the corresponding offset in regmap_irq_set_type().
         */
        if (d->chip->type_in_mask && type)
-               mask = d->type_buf[irq_data->reg_offset / map->reg_stride];
+               mask = d->type_buf[reg] & irq_data->mask;
        else
                mask = irq_data->mask;
 
        if (d->chip->clear_on_unmask)
                d->clear_status = true;
 
-       d->mask_buf[irq_data->reg_offset / map->reg_stride] &= ~mask;
+       d->mask_buf[reg] &= ~mask;
 }
 
 static void regmap_irq_disable(struct irq_data *data)
@@ -386,6 +387,7 @@ static inline int read_sub_irq_data(struct regmap_irq_chip_data *data,
                subreg = &chip->sub_reg_offsets[b];
                for (i = 0; i < subreg->num_regs; i++) {
                        unsigned int offset = subreg->offset[i];
+                       unsigned int index = offset / map->reg_stride;
 
                        if (chip->not_fixed_stride)
                                ret = regmap_read(map,
@@ -394,7 +396,7 @@ static inline int read_sub_irq_data(struct regmap_irq_chip_data *data,
                        else
                                ret = regmap_read(map,
                                                chip->status_base + offset,
-                                               &data->status_buf[offset]);
+                                               &data->status_buf[index]);
 
                        if (ret)
                                break;
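Both regmap-irq hunks above hinge on the same offset-to-index conversion; a sketch with hypothetical values:

    /* Register offsets advance in units of map->reg_stride (bytes),
     * while status_buf/mask_buf/type_buf are arrays with one slot per
     * register, so the buffer slot is offset / stride.
     */
    unsigned int reg_stride = 4;                  /* bytes per register */
    unsigned int reg_offset = 8;                  /* this IRQ's register */
    unsigned int index = reg_offset / reg_stride; /* buffer slot 2 */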
index 2221d98..c3517cc 100644 (file)
@@ -1880,8 +1880,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
  */
 bool regmap_can_raw_write(struct regmap *map)
 {
-       return map->bus && map->bus->write && map->format.format_val &&
-               map->format.format_reg;
+       return map->write && map->format.format_val && map->format.format_reg;
 }
 EXPORT_SYMBOL_GPL(regmap_can_raw_write);
 
@@ -2155,10 +2154,9 @@ int regmap_noinc_write(struct regmap *map, unsigned int reg,
        size_t write_len;
        int ret;
 
-       if (!map->bus)
-               return -EINVAL;
-       if (!map->bus->write)
+       if (!map->write)
                return -ENOTSUPP;
+
        if (val_len % map->format.val_bytes)
                return -EINVAL;
        if (!IS_ALIGNED(reg, map->reg_stride))
@@ -2278,7 +2276,7 @@ int regmap_bulk_write(struct regmap *map, unsigned int reg, const void *val,
         * Some devices don't support bulk write, for them we have a series of
         * single write operations.
         */
-       if (!map->bus || !map->format.parse_inplace) {
+       if (!map->write || !map->format.parse_inplace) {
                map->lock(map->lock_arg);
                for (i = 0; i < val_count; i++) {
                        unsigned int ival;
@@ -2904,6 +2902,9 @@ int regmap_noinc_read(struct regmap *map, unsigned int reg,
        size_t read_len;
        int ret;
 
+       if (!map->read)
+               return -ENOTSUPP;
+
        if (val_len % map->format.val_bytes)
                return -EINVAL;
        if (!IS_ALIGNED(reg, map->reg_stride))
@@ -3017,7 +3018,7 @@ int regmap_bulk_read(struct regmap *map, unsigned int reg, void *val,
        if (val_count == 0)
                return -EINVAL;
 
-       if (map->format.parse_inplace && (vol || map->cache_type == REGCACHE_NONE)) {
+       if (map->read && map->format.parse_inplace && (vol || map->cache_type == REGCACHE_NONE)) {
                ret = regmap_raw_read(map, reg, val, val_bytes * val_count);
                if (ret != 0)
                        return ret;
index a88ce44..3646c0c 100644 (file)
@@ -152,6 +152,10 @@ static unsigned int xen_blkif_max_ring_order;
 module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, 0444);
 MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring");
 
+static bool __read_mostly xen_blkif_trusted = true;
+module_param_named(trusted, xen_blkif_trusted, bool, 0644);
+MODULE_PARM_DESC(trusted, "Is the backend trusted");
+
 #define BLK_RING_SIZE(info)    \
        __CONST_RING_SIZE(blkif, XEN_PAGE_SIZE * (info)->nr_ring_pages)
 
@@ -210,6 +214,7 @@ struct blkfront_info
        unsigned int feature_discard:1;
        unsigned int feature_secdiscard:1;
        unsigned int feature_persistent:1;
+       unsigned int bounce:1;
        unsigned int discard_granularity;
        unsigned int discard_alignment;
        /* Number of 4KB segments handled */
@@ -310,8 +315,8 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
                if (!gnt_list_entry)
                        goto out_of_memory;
 
-               if (info->feature_persistent) {
-                       granted_page = alloc_page(GFP_NOIO);
+               if (info->bounce) {
+                       granted_page = alloc_page(GFP_NOIO | __GFP_ZERO);
                        if (!granted_page) {
                                kfree(gnt_list_entry);
                                goto out_of_memory;
@@ -330,7 +335,7 @@ out_of_memory:
        list_for_each_entry_safe(gnt_list_entry, n,
                                 &rinfo->grants, node) {
                list_del(&gnt_list_entry->node);
-               if (info->feature_persistent)
+               if (info->bounce)
                        __free_page(gnt_list_entry->page);
                kfree(gnt_list_entry);
                i--;
@@ -376,7 +381,7 @@ static struct grant *get_grant(grant_ref_t *gref_head,
        /* Assign a gref to this page */
        gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
        BUG_ON(gnt_list_entry->gref == -ENOSPC);
-       if (info->feature_persistent)
+       if (info->bounce)
                grant_foreign_access(gnt_list_entry, info);
        else {
                /* Grant access to the GFN passed by the caller */
@@ -400,7 +405,7 @@ static struct grant *get_indirect_grant(grant_ref_t *gref_head,
        /* Assign a gref to this page */
        gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
        BUG_ON(gnt_list_entry->gref == -ENOSPC);
-       if (!info->feature_persistent) {
+       if (!info->bounce) {
                struct page *indirect_page;
 
                /* Fetch a pre-allocated page to use for indirect grefs */
@@ -703,7 +708,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
                .grant_idx = 0,
                .segments = NULL,
                .rinfo = rinfo,
-               .need_copy = rq_data_dir(req) && info->feature_persistent,
+               .need_copy = rq_data_dir(req) && info->bounce,
        };
 
        /*
@@ -981,11 +986,12 @@ static void xlvbd_flush(struct blkfront_info *info)
 {
        blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
                              info->feature_fua ? true : false);
-       pr_info("blkfront: %s: %s %s %s %s %s\n",
+       pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
                info->gd->disk_name, flush_info(info),
                "persistent grants:", info->feature_persistent ?
                "enabled;" : "disabled;", "indirect descriptors:",
-               info->max_indirect_segments ? "enabled;" : "disabled;");
+               info->max_indirect_segments ? "enabled;" : "disabled;",
+               "bounce buffer:", info->bounce ? "enabled;" : "disabled;");
 }
 
 static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
@@ -1207,7 +1213,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
        if (!list_empty(&rinfo->indirect_pages)) {
                struct page *indirect_page, *n;
 
-               BUG_ON(info->feature_persistent);
+               BUG_ON(info->bounce);
                list_for_each_entry_safe(indirect_page, n, &rinfo->indirect_pages, lru) {
                        list_del(&indirect_page->lru);
                        __free_page(indirect_page);
@@ -1224,7 +1230,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
                                                          NULL);
                                rinfo->persistent_gnts_c--;
                        }
-                       if (info->feature_persistent)
+                       if (info->bounce)
                                __free_page(persistent_gnt->page);
                        kfree(persistent_gnt);
                }
@@ -1245,7 +1251,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
                for (j = 0; j < segs; j++) {
                        persistent_gnt = rinfo->shadow[i].grants_used[j];
                        gnttab_end_foreign_access(persistent_gnt->gref, NULL);
-                       if (info->feature_persistent)
+                       if (info->bounce)
                                __free_page(persistent_gnt->page);
                        kfree(persistent_gnt);
                }
@@ -1428,7 +1434,7 @@ static int blkif_completion(unsigned long *id,
        data.s = s;
        num_sg = s->num_sg;
 
-       if (bret->operation == BLKIF_OP_READ && info->feature_persistent) {
+       if (bret->operation == BLKIF_OP_READ && info->bounce) {
                for_each_sg(s->sg, sg, num_sg, i) {
                        BUG_ON(sg->offset + sg->length > PAGE_SIZE);
 
@@ -1487,7 +1493,7 @@ static int blkif_completion(unsigned long *id,
                                 * Add the used indirect page back to the list of
                                 * available pages for indirect grefs.
                                 */
-                               if (!info->feature_persistent) {
+                               if (!info->bounce) {
                                        indirect_page = s->indirect_grants[i]->page;
                                        list_add(&indirect_page->lru, &rinfo->indirect_pages);
                                }
@@ -1764,6 +1770,10 @@ static int talk_to_blkback(struct xenbus_device *dev,
        if (!info)
                return -ENODEV;
 
+       /* Check if backend is trusted. */
+       info->bounce = !xen_blkif_trusted ||
+                      !xenbus_read_unsigned(dev->nodename, "trusted", 1);
+
        max_page_order = xenbus_read_unsigned(info->xbdev->otherend,
                                              "max-ring-page-order", 0);
        ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
@@ -2114,9 +2124,11 @@ static void blkfront_closing(struct blkfront_info *info)
                return;
 
        /* No more blkif_request(). */
-       blk_mq_stop_hw_queues(info->rq);
-       blk_mark_disk_dead(info->gd);
-       set_capacity(info->gd, 0);
+       if (info->rq && info->gd) {
+               blk_mq_stop_hw_queues(info->rq);
+               blk_mark_disk_dead(info->gd);
+               set_capacity(info->gd, 0);
+       }
 
        for_each_rinfo(info, rinfo, i) {
                /* No more gnttab callback work. */
@@ -2171,17 +2183,18 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
        if (err)
                goto out_of_memory;
 
-       if (!info->feature_persistent && info->max_indirect_segments) {
+       if (!info->bounce && info->max_indirect_segments) {
                /*
-                * We are using indirect descriptors but not persistent
-                * grants, we need to allocate a set of pages that can be
+                * We are using indirect descriptors but don't have a bounce
+                * buffer, we need to allocate a set of pages that can be
                 * used for mapping indirect grefs
                 */
                int num = INDIRECT_GREFS(grants) * BLK_RING_SIZE(info);
 
                BUG_ON(!list_empty(&rinfo->indirect_pages));
                for (i = 0; i < num; i++) {
-                       struct page *indirect_page = alloc_page(GFP_KERNEL);
+                       struct page *indirect_page = alloc_page(GFP_KERNEL |
+                                                               __GFP_ZERO);
                        if (!indirect_page)
                                goto out_of_memory;
                        list_add(&indirect_page->lru, &rinfo->indirect_pages);
@@ -2274,6 +2287,8 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
                info->feature_persistent =
                        !!xenbus_read_unsigned(info->xbdev->otherend,
                                               "feature-persistent", 0);
+       if (info->feature_persistent)
+               info->bounce = true;
 
        indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
                                        "feature-max-indirect-segments", 0);
@@ -2457,16 +2472,19 @@ static int blkfront_remove(struct xenbus_device *xbdev)
 
        dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
 
-       del_gendisk(info->gd);
+       if (info->gd)
+               del_gendisk(info->gd);
 
        mutex_lock(&blkfront_mutex);
        list_del(&info->info_list);
        mutex_unlock(&blkfront_mutex);
 
        blkif_free(info, 0);
-       xlbd_release_minors(info->gd->first_minor, info->gd->minors);
-       blk_cleanup_disk(info->gd);
-       blk_mq_free_tag_set(&info->tag_set);
+       if (info->gd) {
+               xlbd_release_minors(info->gd->first_minor, info->gd->minors);
+               blk_cleanup_disk(info->gd);
+               blk_mq_free_tag_set(&info->tag_set);
+       }
 
        kfree(info);
        return 0;
@@ -2542,6 +2560,13 @@ static void blkfront_delay_work(struct work_struct *work)
        struct blkfront_info *info;
        bool need_schedule_work = false;
 
+       /*
+        * Note that when using bounce buffers but not persistent grants
+        * there's no need to run blkfront_delay_work because grants are
+        * revoked in blkif_completion or else an error is reported and the
+        * connection is closed.
+        */
+
        mutex_lock(&blkfront_mutex);
 
        list_for_each_entry(info, &info_list, info_list) {
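Pulling the scattered xen-blkfront hunks together, the new bounce policy reduces to the following condensed sketch (not a verbatim excerpt):

    /* Bounce unless both the module parameter (xen_blkfront.trusted,
     * default true) and the per-device xenstore "trusted" node
     * (default 1) say the backend is trusted; persistent grants
     * always imply bouncing.
     */
    info->bounce = !xen_blkif_trusted ||
                   !xenbus_read_unsigned(dev->nodename, "trusted", 1);
    if (info->feature_persistent)
            info->bounce = true;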
index b25ff94..63b1b4a 100644 (file)
@@ -175,10 +175,9 @@ static int bt1_apb_request_rst(struct bt1_apb *apb)
        int ret;
 
        apb->prst = devm_reset_control_get_optional_exclusive(apb->dev, "prst");
-       if (IS_ERR(apb->prst)) {
-               dev_warn(apb->dev, "Couldn't get reset control line\n");
-               return PTR_ERR(apb->prst);
-       }
+       if (IS_ERR(apb->prst))
+               return dev_err_probe(apb->dev, PTR_ERR(apb->prst),
+                                    "Couldn't get reset control line\n");
 
        ret = reset_control_deassert(apb->prst);
        if (ret)
@@ -199,10 +198,9 @@ static int bt1_apb_request_clk(struct bt1_apb *apb)
        int ret;
 
        apb->pclk = devm_clk_get(apb->dev, "pclk");
-       if (IS_ERR(apb->pclk)) {
-               dev_err(apb->dev, "Couldn't get APB clock descriptor\n");
-               return PTR_ERR(apb->pclk);
-       }
+       if (IS_ERR(apb->pclk))
+               return dev_err_probe(apb->dev, PTR_ERR(apb->pclk),
+                                    "Couldn't get APB clock descriptor\n");
 
        ret = clk_prepare_enable(apb->pclk);
        if (ret) {
index e7a6744..70e49a6 100644 (file)
@@ -135,10 +135,9 @@ static int bt1_axi_request_rst(struct bt1_axi *axi)
        int ret;
 
        axi->arst = devm_reset_control_get_optional_exclusive(axi->dev, "arst");
-       if (IS_ERR(axi->arst)) {
-               dev_warn(axi->dev, "Couldn't get reset control line\n");
-               return PTR_ERR(axi->arst);
-       }
+       if (IS_ERR(axi->arst))
+               return dev_err_probe(axi->dev, PTR_ERR(axi->arst),
+                                    "Couldn't get reset control line\n");
 
        ret = reset_control_deassert(axi->arst);
        if (ret)
@@ -159,10 +158,9 @@ static int bt1_axi_request_clk(struct bt1_axi *axi)
        int ret;
 
        axi->aclk = devm_clk_get(axi->dev, "aclk");
-       if (IS_ERR(axi->aclk)) {
-               dev_err(axi->dev, "Couldn't get AXI Interconnect clock\n");
-               return PTR_ERR(axi->aclk);
-       }
+       if (IS_ERR(axi->aclk))
+               return dev_err_probe(axi->dev, PTR_ERR(axi->aclk),
+                                    "Couldn't get AXI Interconnect clock\n");
 
        ret = clk_prepare_enable(axi->aclk);
        if (ret) {
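Both bt1-apb and bt1-axi conversions above use the dev_err_probe() idiom; a minimal sketch (clock name hypothetical):

    clk = devm_clk_get(dev, "pclk");
    if (IS_ERR(clk))
            /* Logs quietly for -EPROBE_DEFER and loudly otherwise,
             * records the deferral reason for debugfs, and returns
             * the error code in a single expression.
             */
            return dev_err_probe(dev, PTR_ERR(clk),
                                 "Couldn't get the clock\n");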
index e81a970..6143dbf 100644 (file)
@@ -1239,14 +1239,14 @@ error_cleanup_mc_io:
 static int fsl_mc_bus_remove(struct platform_device *pdev)
 {
        struct fsl_mc *mc = platform_get_drvdata(pdev);
+       struct fsl_mc_io *mc_io;
 
        if (!fsl_mc_is_root_dprc(&mc->root_mc_bus_dev->dev))
                return -EINVAL;
 
+       mc_io = mc->root_mc_bus_dev->mc_io;
        fsl_mc_device_remove(mc->root_mc_bus_dev);
-
-       fsl_destroy_mc_io(mc->root_mc_bus_dev->mc_io);
-       mc->root_mc_bus_dev->mc_io = NULL;
+       fsl_destroy_mc_io(mc_io);
 
        bus_unregister_notifier(&fsl_mc_bus_type, &fsl_mc_nb);
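The reordering above follows the usual use-after-free fix shape (a sketch of the hazard, not the full driver code):

    /* fsl_mc_device_remove() tears down root_mc_bus_dev, so any field
     * needed afterwards must be snapshotted first.
     */
    struct fsl_mc_io *mc_io = mc->root_mc_bus_dev->mc_io;

    fsl_mc_device_remove(mc->root_mc_bus_dev); /* may free the device */
    fsl_destroy_mc_io(mc_io);                  /* uses the saved pointer */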
 
index 0e22e3b..38aad99 100644 (file)
@@ -1019,7 +1019,7 @@ static struct parport_driver lp_driver = {
 
 static int __init lp_init(void)
 {
-       int i, err = 0;
+       int i, err;
 
        if (parport_nr[0] == LP_PARPORT_OFF)
                return 0;
index 655e327..e3dd1dd 100644 (file)
@@ -87,7 +87,7 @@ static struct fasync_struct *fasync;
 
 /* Control how we warn userspace. */
 static struct ratelimit_state urandom_warning =
-       RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3);
+       RATELIMIT_STATE_INIT_FLAGS("urandom_warning", HZ, 3, RATELIMIT_MSG_ON_RELEASE);
 static int ratelimit_disable __read_mostly =
        IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM);
 module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
@@ -408,7 +408,7 @@ static ssize_t get_random_bytes_user(struct iov_iter *iter)
 
        /*
         * Immediately overwrite the ChaCha key at index 4 with random
-        * bytes, in case userspace causes copy_to_user() below to sleep
+        * bytes, in case userspace causes copy_to_iter() below to sleep
         * forever, so that we still retain forward secrecy in that case.
         */
        crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);
@@ -1009,7 +1009,7 @@ void add_interrupt_randomness(int irq)
        if (new_count & MIX_INFLIGHT)
                return;
 
-       if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
+       if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
                return;
 
        if (unlikely(!fast_pool->mix.func))
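The RATELIMIT_STATE_INIT_FLAGS() change above defers the suppression summary; a minimal sketch of the flag's effect (names hypothetical):

    #include <linux/ratelimit.h>

    static struct ratelimit_state my_rs =
            RATELIMIT_STATE_INIT_FLAGS("my_rs", HZ, 3,
                                       RATELIMIT_MSG_ON_RELEASE);

    static void my_event(void)
    {
            /* With RATELIMIT_MSG_ON_RELEASE, the "callbacks
             * suppressed" summary prints when the state is released,
             * not at the start of each new interval.
             */
            if (__ratelimit(&my_rs))
                    pr_notice("noisy event\n");
    }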
index 0408701..e893815 100644 (file)
@@ -111,6 +111,7 @@ int stm32_rcc_reset_init(struct device *dev, const struct of_device_id *match,
        if (!reset_data)
                return -ENOMEM;
 
+       spin_lock_init(&reset_data->lock);
        reset_data->membase = base;
        reset_data->rcdev.owner = THIS_MODULE;
        reset_data->rcdev.ops = &stm32_reset_ops;
index ff188ab..bb47610 100644 (file)
@@ -565,4 +565,3 @@ void __init hv_init_clocksource(void)
        hv_sched_clock_offset = hv_read_reference_counter();
        hv_setup_sched_clock(read_hv_sched_clock_msr);
 }
-EXPORT_SYMBOL_GPL(hv_init_clocksource);
index 46023ad..4536ed4 100644 (file)
@@ -684,7 +684,7 @@ static int vmk80xx_alloc_usb_buffers(struct comedi_device *dev)
        if (!devpriv->usb_rx_buf)
                return -ENOMEM;
 
-       size = max(usb_endpoint_maxp(devpriv->ep_rx), MIN_BUF_SIZE);
+       size = max(usb_endpoint_maxp(devpriv->ep_tx), MIN_BUF_SIZE);
        devpriv->usb_tx_buf = kzalloc(size, GFP_KERNEL);
        if (!devpriv->usb_tx_buf)
                return -ENOMEM;
index 7be38bc..9ac75c1 100644 (file)
@@ -566,6 +566,28 @@ static int amd_pstate_cpu_exit(struct cpufreq_policy *policy)
        return 0;
 }
 
+static int amd_pstate_cpu_resume(struct cpufreq_policy *policy)
+{
+       int ret;
+
+       ret = amd_pstate_enable(true);
+       if (ret)
+               pr_err("failed to enable amd-pstate during resume, return %d\n", ret);
+
+       return ret;
+}
+
+static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy)
+{
+       int ret;
+
+       ret = amd_pstate_enable(false);
+       if (ret)
+               pr_err("failed to disable amd-pstate during suspend, return %d\n", ret);
+
+       return ret;
+}
+
 /* Sysfs attributes */
 
 /*
@@ -636,6 +658,8 @@ static struct cpufreq_driver amd_pstate_driver = {
        .target         = amd_pstate_target,
        .init           = amd_pstate_cpu_init,
        .exit           = amd_pstate_cpu_exit,
+       .suspend        = amd_pstate_cpu_suspend,
+       .resume         = amd_pstate_cpu_resume,
        .set_boost      = amd_pstate_set_boost,
        .name           = "amd-pstate",
        .attr           = amd_pstate_attr,
index 96de153..2c96de3 100644 (file)
@@ -127,6 +127,7 @@ static const struct of_device_id blocklist[] __initconst = {
        { .compatible = "mediatek,mt8173", },
        { .compatible = "mediatek,mt8176", },
        { .compatible = "mediatek,mt8183", },
+       { .compatible = "mediatek,mt8186", },
        { .compatible = "mediatek,mt8365", },
        { .compatible = "mediatek,mt8516", },
 
index 20f64a8..4b8ee20 100644 (file)
@@ -470,6 +470,10 @@ static int pmac_cpufreq_init_MacRISC3(struct device_node *cpunode)
        if (slew_done_gpio_np)
                slew_done_gpio = read_gpio(slew_done_gpio_np);
 
+       of_node_put(volt_gpio_np);
+       of_node_put(freq_gpio_np);
+       of_node_put(slew_done_gpio_np);
+
        /* If we use the frequency GPIOs, calculate the min/max speeds based
         * on the bus frequencies
         */
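The added of_node_put() calls follow the standard device-tree refcount rule; a sketch (node name hypothetical):

    #include <linux/of.h>

    /* Every of_find_*()/of_get_*() that hands back a device_node
     * takes a reference the caller must drop on every path.
     */
    struct device_node *np = of_find_node_by_name(NULL, "example");
    if (np) {
            /* ... use np ... */
            of_node_put(np);
    }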
index 0253731..36c7958 100644 (file)
@@ -442,6 +442,9 @@ static int qcom_cpufreq_hw_cpu_online(struct cpufreq_policy *policy)
        struct platform_device *pdev = cpufreq_get_driver_data();
        int ret;
 
+       if (data->throttle_irq <= 0)
+               return 0;
+
        ret = irq_set_affinity_hint(data->throttle_irq, policy->cpus);
        if (ret)
                dev_err(&pdev->dev, "Failed to set CPU affinity of %s[%d]\n",
@@ -469,6 +472,9 @@ static int qcom_cpufreq_hw_cpu_offline(struct cpufreq_policy *policy)
 
 static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data)
 {
+       if (data->throttle_irq <= 0)
+               return;
+
        free_irq(data->throttle_irq, data);
 }
 
index 6b6b20d..573b417 100644 (file)
@@ -275,6 +275,7 @@ static int qoriq_cpufreq_probe(struct platform_device *pdev)
 
        np = of_find_matching_node(NULL, qoriq_cpufreq_blacklist);
        if (np) {
+               of_node_put(np);
                dev_info(&pdev->dev, "Disabling due to erratum A-008083");
                return -ENODEV;
        }
index ee99c02..3e6aa31 100644 (file)
@@ -133,98 +133,6 @@ config CRYPTO_PAES_S390
          Select this option if you want to use the paes cipher
          for example to use protected key encrypted devices.
 
-config CRYPTO_SHA1_S390
-       tristate "SHA1 digest algorithm"
-       depends on S390
-       select CRYPTO_HASH
-       help
-         This is the s390 hardware accelerated implementation of the
-         SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
-
-         It is available as of z990.
-
-config CRYPTO_SHA256_S390
-       tristate "SHA256 digest algorithm"
-       depends on S390
-       select CRYPTO_HASH
-       help
-         This is the s390 hardware accelerated implementation of the
-         SHA256 secure hash standard (DFIPS 180-2).
-
-         It is available as of z9.
-
-config CRYPTO_SHA512_S390
-       tristate "SHA384 and SHA512 digest algorithm"
-       depends on S390
-       select CRYPTO_HASH
-       help
-         This is the s390 hardware accelerated implementation of the
-         SHA512 secure hash standard.
-
-         It is available as of z10.
-
-config CRYPTO_SHA3_256_S390
-       tristate "SHA3_224 and SHA3_256 digest algorithm"
-       depends on S390
-       select CRYPTO_HASH
-       help
-         This is the s390 hardware accelerated implementation of the
-         SHA3_256 secure hash standard.
-
-         It is available as of z14.
-
-config CRYPTO_SHA3_512_S390
-       tristate "SHA3_384 and SHA3_512 digest algorithm"
-       depends on S390
-       select CRYPTO_HASH
-       help
-         This is the s390 hardware accelerated implementation of the
-         SHA3_512 secure hash standard.
-
-         It is available as of z14.
-
-config CRYPTO_DES_S390
-       tristate "DES and Triple DES cipher algorithms"
-       depends on S390
-       select CRYPTO_ALGAPI
-       select CRYPTO_SKCIPHER
-       select CRYPTO_LIB_DES
-       help
-         This is the s390 hardware accelerated implementation of the
-         DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
-
-         As of z990 the ECB and CBC mode are hardware accelerated.
-         As of z196 the CTR mode is hardware accelerated.
-
-config CRYPTO_AES_S390
-       tristate "AES cipher algorithms"
-       depends on S390
-       select CRYPTO_ALGAPI
-       select CRYPTO_SKCIPHER
-       help
-         This is the s390 hardware accelerated implementation of the
-         AES cipher algorithms (FIPS-197).
-
-         As of z9 the ECB and CBC modes are hardware accelerated
-         for 128 bit keys.
-         As of z10 the ECB and CBC modes are hardware accelerated
-         for all AES key sizes.
-         As of z196 the CTR mode is hardware accelerated for all AES
-         key sizes and XTS mode is hardware accelerated for 256 and
-         512 bit keys.
-
-config CRYPTO_CHACHA_S390
-       tristate "ChaCha20 stream cipher"
-       depends on S390
-       select CRYPTO_SKCIPHER
-       select CRYPTO_LIB_CHACHA_GENERIC
-       select CRYPTO_ARCH_HAVE_LIB_CHACHA
-       help
-         This is the s390 SIMD implementation of the ChaCha20 stream
-         cipher (RFC 7539).
-
-         It is available as of z13.
-
 config S390_PRNG
        tristate "Pseudo random number generator device driver"
        depends on S390
@@ -238,29 +146,6 @@ config S390_PRNG
 
          It is available as of z9.
 
-config CRYPTO_GHASH_S390
-       tristate "GHASH hash function"
-       depends on S390
-       select CRYPTO_HASH
-       help
-         This is the s390 hardware accelerated implementation of GHASH,
-         the hash function used in GCM (Galois/Counter mode).
-
-         It is available as of z196.
-
-config CRYPTO_CRC32_S390
-       tristate "CRC-32 algorithms"
-       depends on S390
-       select CRYPTO_HASH
-       select CRC32
-       help
-         Select this option if you want to use hardware accelerated
-         implementations of CRC algorithms.  With this option, you
-         can optimize the computation of CRC-32 (IEEE 802.3 Ethernet)
-         and CRC-32C (Castagnoli).
-
-         It is available with IBM z13 or later.
-
 config CRYPTO_DEV_NIAGARA2
        tristate "Niagara2 Stream Processing Unit driver"
        select CRYPTO_LIB_DES
index 9dba52f..7d79a87 100644 (file)
@@ -85,17 +85,9 @@ static int sp_get_irqs(struct sp_device *sp)
        struct sp_platform *sp_platform = sp->dev_specific;
        struct device *dev = sp->dev;
        struct platform_device *pdev = to_platform_device(dev);
-       unsigned int i, count;
        int ret;
 
-       for (i = 0, count = 0; i < pdev->num_resources; i++) {
-               struct resource *res = &pdev->resource[i];
-
-               if (resource_type(res) == IORESOURCE_IRQ)
-                       count++;
-       }
-
-       sp_platform->irq_count = count;
+       sp_platform->irq_count = platform_irq_count(pdev);
 
        ret = platform_get_irq(pdev, 0);
        if (ret < 0) {
@@ -104,7 +96,7 @@ static int sp_get_irqs(struct sp_device *sp)
        }
 
        sp->psp_irq = ret;
-       if (count == 1) {
+       if (sp_platform->irq_count == 1) {
                sp->ccp_irq = ret;
        } else {
                ret = platform_get_irq(pdev, 1);
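platform_irq_count() replaces the open-coded IORESOURCE_IRQ scan above; a sketch of the simplification:

    #include <linux/platform_device.h>

    /* Returns the number of IRQs described for the device, or a
     * negative errno (e.g. -EPROBE_DEFER) to be propagated.
     */
    int nirq = platform_irq_count(pdev);
    if (nirq < 0)
            return nirq;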
index 01474da..9602141 100644 (file)
@@ -123,7 +123,7 @@ void devfreq_get_freq_range(struct devfreq *devfreq,
                            unsigned long *min_freq,
                            unsigned long *max_freq)
 {
-       unsigned long *freq_table = devfreq->profile->freq_table;
+       unsigned long *freq_table = devfreq->freq_table;
        s32 qos_min_freq, qos_max_freq;
 
        lockdep_assert_held(&devfreq->lock);
@@ -133,11 +133,11 @@ void devfreq_get_freq_range(struct devfreq *devfreq,
         * The devfreq drivers can initialize this in either ascending or
         * descending order and devfreq core supports both.
         */
-       if (freq_table[0] < freq_table[devfreq->profile->max_state - 1]) {
+       if (freq_table[0] < freq_table[devfreq->max_state - 1]) {
                *min_freq = freq_table[0];
-               *max_freq = freq_table[devfreq->profile->max_state - 1];
+               *max_freq = freq_table[devfreq->max_state - 1];
        } else {
-               *min_freq = freq_table[devfreq->profile->max_state - 1];
+               *min_freq = freq_table[devfreq->max_state - 1];
                *max_freq = freq_table[0];
        }
 
@@ -169,8 +169,8 @@ static int devfreq_get_freq_level(struct devfreq *devfreq, unsigned long freq)
 {
        int lev;
 
-       for (lev = 0; lev < devfreq->profile->max_state; lev++)
-               if (freq == devfreq->profile->freq_table[lev])
+       for (lev = 0; lev < devfreq->max_state; lev++)
+               if (freq == devfreq->freq_table[lev])
                        return lev;
 
        return -EINVAL;
@@ -178,7 +178,6 @@ static int devfreq_get_freq_level(struct devfreq *devfreq, unsigned long freq)
 
 static int set_freq_table(struct devfreq *devfreq)
 {
-       struct devfreq_dev_profile *profile = devfreq->profile;
        struct dev_pm_opp *opp;
        unsigned long freq;
        int i, count;
@@ -188,25 +187,22 @@ static int set_freq_table(struct devfreq *devfreq)
        if (count <= 0)
                return -EINVAL;
 
-       profile->max_state = count;
-       profile->freq_table = devm_kcalloc(devfreq->dev.parent,
-                                       profile->max_state,
-                                       sizeof(*profile->freq_table),
-                                       GFP_KERNEL);
-       if (!profile->freq_table) {
-               profile->max_state = 0;
+       devfreq->max_state = count;
+       devfreq->freq_table = devm_kcalloc(devfreq->dev.parent,
+                                          devfreq->max_state,
+                                          sizeof(*devfreq->freq_table),
+                                          GFP_KERNEL);
+       if (!devfreq->freq_table)
                return -ENOMEM;
-       }
 
-       for (i = 0, freq = 0; i < profile->max_state; i++, freq++) {
+       for (i = 0, freq = 0; i < devfreq->max_state; i++, freq++) {
                opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &freq);
                if (IS_ERR(opp)) {
-                       devm_kfree(devfreq->dev.parent, profile->freq_table);
-                       profile->max_state = 0;
+                       devm_kfree(devfreq->dev.parent, devfreq->freq_table);
                        return PTR_ERR(opp);
                }
                dev_pm_opp_put(opp);
-               profile->freq_table[i] = freq;
+               devfreq->freq_table[i] = freq;
        }
 
        return 0;
@@ -246,7 +242,7 @@ int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
 
        if (lev != prev_lev) {
                devfreq->stats.trans_table[
-                       (prev_lev * devfreq->profile->max_state) + lev]++;
+                       (prev_lev * devfreq->max_state) + lev]++;
                devfreq->stats.total_trans++;
        }
 
@@ -835,6 +831,9 @@ struct devfreq *devfreq_add_device(struct device *dev,
                if (err < 0)
                        goto err_dev;
                mutex_lock(&devfreq->lock);
+       } else {
+               devfreq->freq_table = devfreq->profile->freq_table;
+               devfreq->max_state = devfreq->profile->max_state;
        }
 
        devfreq->scaling_min_freq = find_available_min_freq(devfreq);
@@ -870,8 +869,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
 
        devfreq->stats.trans_table = devm_kzalloc(&devfreq->dev,
                        array3_size(sizeof(unsigned int),
-                                   devfreq->profile->max_state,
-                                   devfreq->profile->max_state),
+                                   devfreq->max_state,
+                                   devfreq->max_state),
                        GFP_KERNEL);
        if (!devfreq->stats.trans_table) {
                mutex_unlock(&devfreq->lock);
@@ -880,7 +879,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
        }
 
        devfreq->stats.time_in_state = devm_kcalloc(&devfreq->dev,
-                       devfreq->profile->max_state,
+                       devfreq->max_state,
                        sizeof(*devfreq->stats.time_in_state),
                        GFP_KERNEL);
        if (!devfreq->stats.time_in_state) {
@@ -932,8 +931,9 @@ struct devfreq *devfreq_add_device(struct device *dev,
        err = devfreq->governor->event_handler(devfreq, DEVFREQ_GOV_START,
                                                NULL);
        if (err) {
-               dev_err(dev, "%s: Unable to start governor for the device\n",
-                       __func__);
+               dev_err_probe(dev, err,
+                       "%s: Unable to start governor for the device\n",
+                        __func__);
                goto err_init;
        }
        create_sysfs_files(devfreq, devfreq->governor);
@@ -1665,9 +1665,9 @@ static ssize_t available_frequencies_show(struct device *d,
 
        mutex_lock(&df->lock);
 
-       for (i = 0; i < df->profile->max_state; i++)
+       for (i = 0; i < df->max_state; i++)
                count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
-                               "%lu ", df->profile->freq_table[i]);
+                               "%lu ", df->freq_table[i]);
 
        mutex_unlock(&df->lock);
        /* Truncate the trailing space */
@@ -1690,7 +1690,7 @@ static ssize_t trans_stat_show(struct device *dev,
 
        if (!df->profile)
                return -EINVAL;
-       max_state = df->profile->max_state;
+       max_state = df->max_state;
 
        if (max_state == 0)
                return sprintf(buf, "Not Supported.\n");
@@ -1707,19 +1707,17 @@ static ssize_t trans_stat_show(struct device *dev,
        len += sprintf(buf + len, "           :");
        for (i = 0; i < max_state; i++)
                len += sprintf(buf + len, "%10lu",
-                               df->profile->freq_table[i]);
+                               df->freq_table[i]);
 
        len += sprintf(buf + len, "   time(ms)\n");
 
        for (i = 0; i < max_state; i++) {
-               if (df->profile->freq_table[i]
-                                       == df->previous_freq) {
+               if (df->freq_table[i] == df->previous_freq)
                        len += sprintf(buf + len, "*");
-               } else {
+               else
                        len += sprintf(buf + len, " ");
-               }
-               len += sprintf(buf + len, "%10lu:",
-                               df->profile->freq_table[i]);
+
+               len += sprintf(buf + len, "%10lu:", df->freq_table[i]);
                for (j = 0; j < max_state; j++)
                        len += sprintf(buf + len, "%10u",
                                df->stats.trans_table[(i * max_state) + j]);
@@ -1743,7 +1741,7 @@ static ssize_t trans_stat_store(struct device *dev,
        if (!df->profile)
                return -EINVAL;
 
-       if (df->profile->max_state == 0)
+       if (df->max_state == 0)
                return count;
 
        err = kstrtoint(buf, 10, &value);
@@ -1751,11 +1749,11 @@ static ssize_t trans_stat_store(struct device *dev,
                return -EINVAL;
 
        mutex_lock(&df->lock);
-       memset(df->stats.time_in_state, 0, (df->profile->max_state *
+       memset(df->stats.time_in_state, 0, (df->max_state *
                                        sizeof(*df->stats.time_in_state)));
        memset(df->stats.trans_table, 0, array3_size(sizeof(unsigned int),
-                                       df->profile->max_state,
-                                       df->profile->max_state));
+                                       df->max_state,
+                                       df->max_state));
        df->stats.total_trans = 0;
        df->stats.last_update = get_jiffies_64();
        mutex_unlock(&df->lock);
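The net effect of the devfreq.c hunks, condensed (a sketch, not verbatim): freq_table and max_state now live on struct devfreq itself and are valid either way.

    /* Built from the OPP table when the driver supplies none,
     * otherwise copied from the driver's profile, so consumers can
     * index devfreq->freq_table[0..devfreq->max_state-1] directly.
     */
    if (!devfreq->profile->max_state) {
            err = set_freq_table(devfreq);      /* build from OPPs */
    } else {
            devfreq->freq_table = devfreq->profile->freq_table;
            devfreq->max_state = devfreq->profile->max_state;
    }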
index 9b849d7..a443e7c 100644 (file)
@@ -519,15 +519,19 @@ static int of_get_devfreq_events(struct device_node *np,
 
        count = of_get_child_count(events_np);
        desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL);
-       if (!desc)
+       if (!desc) {
+               of_node_put(events_np);
                return -ENOMEM;
+       }
        info->num_events = count;
 
        of_id = of_match_device(exynos_ppmu_id_match, dev);
        if (of_id)
                info->ppmu_type = (enum exynos_ppmu_type)of_id->data;
-       else
+       else {
+               of_node_put(events_np);
                return -EINVAL;
+       }
 
        j = 0;
        for_each_child_of_node(events_np, node) {
index 72c6797..953cf9a 100644 (file)
@@ -1,4 +1,4 @@
-       // SPDX-License-Identifier: GPL-2.0-only
+// SPDX-License-Identifier: GPL-2.0-only
 /*
  * linux/drivers/devfreq/governor_passive.c
  *
 #include <linux/slab.h>
 #include <linux/device.h>
 #include <linux/devfreq.h>
+#include <linux/units.h>
 #include "governor.h"
 
-#define HZ_PER_KHZ     1000
-
 static struct devfreq_cpu_data *
 get_parent_cpu_data(struct devfreq_passive_data *p_data,
                    struct cpufreq_policy *policy)
@@ -34,6 +33,20 @@ get_parent_cpu_data(struct devfreq_passive_data *p_data,
        return NULL;
 }
 
+static void delete_parent_cpu_data(struct devfreq_passive_data *p_data)
+{
+       struct devfreq_cpu_data *parent_cpu_data, *tmp;
+
+       list_for_each_entry_safe(parent_cpu_data, tmp, &p_data->cpu_data_list, node) {
+               list_del(&parent_cpu_data->node);
+
+               if (parent_cpu_data->opp_table)
+                       dev_pm_opp_put_opp_table(parent_cpu_data->opp_table);
+
+               kfree(parent_cpu_data);
+       }
+}
+
 static unsigned long get_target_freq_by_required_opp(struct device *p_dev,
                                                struct opp_table *p_opp_table,
                                                struct opp_table *opp_table,
@@ -131,18 +144,18 @@ static int get_target_freq_with_devfreq(struct devfreq *devfreq,
                goto out;
 
        /* Use interpolation if required opps is not available */
-       for (i = 0; i < parent_devfreq->profile->max_state; i++)
-               if (parent_devfreq->profile->freq_table[i] == *freq)
+       for (i = 0; i < parent_devfreq->max_state; i++)
+               if (parent_devfreq->freq_table[i] == *freq)
                        break;
 
-       if (i == parent_devfreq->profile->max_state)
+       if (i == parent_devfreq->max_state)
                return -EINVAL;
 
-       if (i < devfreq->profile->max_state) {
-               child_freq = devfreq->profile->freq_table[i];
+       if (i < devfreq->max_state) {
+               child_freq = devfreq->freq_table[i];
        } else {
-               count = devfreq->profile->max_state;
-               child_freq = devfreq->profile->freq_table[count - 1];
+               count = devfreq->max_state;
+               child_freq = devfreq->freq_table[count - 1];
        }
 
 out:
@@ -222,8 +235,7 @@ static int cpufreq_passive_unregister_notifier(struct devfreq *devfreq)
 {
        struct devfreq_passive_data *p_data
                        = (struct devfreq_passive_data *)devfreq->data;
-       struct devfreq_cpu_data *parent_cpu_data;
-       int cpu, ret = 0;
+       int ret;
 
        if (p_data->nb.notifier_call) {
                ret = cpufreq_unregister_notifier(&p_data->nb,
@@ -232,27 +244,9 @@ static int cpufreq_passive_unregister_notifier(struct devfreq *devfreq)
                        return ret;
        }
 
-       for_each_possible_cpu(cpu) {
-               struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
-               if (!policy) {
-                       ret = -EINVAL;
-                       continue;
-               }
-
-               parent_cpu_data = get_parent_cpu_data(p_data, policy);
-               if (!parent_cpu_data) {
-                       cpufreq_cpu_put(policy);
-                       continue;
-               }
+       delete_parent_cpu_data(p_data);
 
-               list_del(&parent_cpu_data->node);
-               if (parent_cpu_data->opp_table)
-                       dev_pm_opp_put_opp_table(parent_cpu_data->opp_table);
-               kfree(parent_cpu_data);
-               cpufreq_cpu_put(policy);
-       }
-
-       return ret;
+       return 0;
 }
 
 static int cpufreq_passive_register_notifier(struct devfreq *devfreq)
@@ -336,7 +330,6 @@ err_free_cpu_data:
 err_put_policy:
        cpufreq_cpu_put(policy);
 err:
-       WARN_ON(cpufreq_passive_unregister_notifier(devfreq));
 
        return ret;
 }
@@ -407,8 +400,7 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
        if (!p_data)
                return -EINVAL;
 
-       if (!p_data->this)
-               p_data->this = devfreq;
+       p_data->this = devfreq;
 
        switch (event) {
        case DEVFREQ_GOV_START:
index e733068..9631f2f 100644 (file)
@@ -32,8 +32,11 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
 {
        struct vm_area_struct *vma = vmf->vma;
        struct udmabuf *ubuf = vma->vm_private_data;
+       pgoff_t pgoff = vmf->pgoff;
 
-       vmf->page = ubuf->pages[vmf->pgoff];
+       if (pgoff >= ubuf->pagecount)
+               return VM_FAULT_SIGBUS;
+       vmf->page = ubuf->pages[pgoff];
        get_page(vmf->page);
        return 0;
 }
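The bounds check above is the canonical shape for a .fault handler over a page array; a self-contained sketch (struct and names hypothetical):

    struct my_buf {                 /* hypothetical backing object */
            struct page **pages;
            pgoff_t pagecount;
    };

    static vm_fault_t my_vm_fault(struct vm_fault *vmf)
    {
            struct my_buf *buf = vmf->vma->vm_private_data;

            /* vmf->pgoff is attacker-influenced (mremap() can grow
             * the VMA), so validate it before indexing.
             */
            if (vmf->pgoff >= buf->pagecount)
                    return VM_FAULT_SIGBUS;
            vmf->page = buf->pages[vmf->pgoff];
            get_page(vmf->page);
            return 0;
    }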
index c9fe590..9c89f7d 100644 (file)
@@ -1211,7 +1211,7 @@ static int ioctl_get_cycle_timer2(struct client *client, union ioctl_arg *arg)
        struct fw_cdev_get_cycle_timer2 *a = &arg->get_cycle_timer2;
        struct fw_card *card = client->device->card;
        struct timespec64 ts = {0, 0};
-       u32 cycle_time;
+       u32 cycle_time = 0;
        int ret = 0;
 
        local_irq_disable();
index 90ed8fd..adddd8c 100644 (file)
@@ -372,8 +372,7 @@ static ssize_t rom_index_show(struct device *dev,
        struct fw_device *device = fw_device(dev->parent);
        struct fw_unit *unit = fw_unit(dev);
 
-       return snprintf(buf, PAGE_SIZE, "%d\n",
-                       (int)(unit->directory - device->config_rom));
+       return sysfs_emit(buf, "%td\n", unit->directory - device->config_rom);
 }
 
 static struct device_attribute fw_unit_attributes[] = {
@@ -403,8 +402,7 @@ static ssize_t guid_show(struct device *dev,
        int ret;
 
        down_read(&fw_device_rwsem);
-       ret = snprintf(buf, PAGE_SIZE, "0x%08x%08x\n",
-                      device->config_rom[3], device->config_rom[4]);
+       ret = sysfs_emit(buf, "0x%08x%08x\n", device->config_rom[3], device->config_rom[4]);
        up_read(&fw_device_rwsem);
 
        return ret;
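Both hunks convert to sysfs_emit(), the preferred formatter for sysfs show() callbacks; a minimal sketch:

    static ssize_t example_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            /* sysfs_emit() checks that buf is a page-aligned
             * PAGE_SIZE sysfs buffer and returns the byte count,
             * replacing snprintf(buf, PAGE_SIZE, ...).
             */
            return sysfs_emit(buf, "%d\n", 42); /* value hypothetical */
    }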
index 20fba73..a52f084 100644 (file)
@@ -36,7 +36,7 @@ struct scmi_msg_resp_base_attributes {
 
 struct scmi_msg_resp_base_discover_agent {
        __le32 agent_id;
-       u8 name[SCMI_MAX_STR_SIZE];
+       u8 name[SCMI_SHORT_NAME_MAX_SIZE];
 };
 
 
@@ -119,7 +119,7 @@ scmi_base_vendor_id_get(const struct scmi_protocol_handle *ph, bool sub_vendor)
 
        ret = ph->xops->do_xfer(ph, t);
        if (!ret)
-               memcpy(vendor_id, t->rx.buf, size);
+               strscpy(vendor_id, t->rx.buf, size);
 
        ph->xops->xfer_put(ph, t);
 
@@ -221,11 +221,17 @@ scmi_base_implementation_list_get(const struct scmi_protocol_handle *ph,
                calc_list_sz = (1 + (loop_num_ret - 1) / sizeof(u32)) *
                                sizeof(u32);
                if (calc_list_sz != real_list_sz) {
-                       dev_err(dev,
-                               "Malformed reply - real_sz:%zd  calc_sz:%u\n",
-                               real_list_sz, calc_list_sz);
-                       ret = -EPROTO;
-                       break;
+                       dev_warn(dev,
+                                "Malformed reply - real_sz:%zd  calc_sz:%u  (loop_num_ret:%d)\n",
+                                real_list_sz, calc_list_sz, loop_num_ret);
+                       /*
+                        * Bail out if the expected list size is bigger than the
+                        * total payload size of the received reply.
+                        */
+                       if (calc_list_sz > real_list_sz) {
+                               ret = -EPROTO;
+                               break;
+                       }
                }
 
                for (loop = 0; loop < loop_num_ret; loop++)
@@ -270,7 +276,7 @@ static int scmi_base_discover_agent_get(const struct scmi_protocol_handle *ph,
        ret = ph->xops->do_xfer(ph, t);
        if (!ret) {
                agent_info = t->rx.buf;
-               strlcpy(name, agent_info->name, SCMI_MAX_STR_SIZE);
+               strscpy(name, agent_info->name, SCMI_SHORT_NAME_MAX_SIZE);
        }
 
        ph->xops->xfer_put(ph, t);
@@ -369,7 +375,7 @@ static int scmi_base_protocol_init(const struct scmi_protocol_handle *ph)
        int id, ret;
        u8 *prot_imp;
        u32 version;
-       char name[SCMI_MAX_STR_SIZE];
+       char name[SCMI_SHORT_NAME_MAX_SIZE];
        struct device *dev = ph->dev;
        struct scmi_revision_info *rev = scmi_revision_area_get(ph);
 
index f6fe723..d4e2310 100644 (file)
@@ -181,7 +181,7 @@ scmi_device_create(struct device_node *np, struct device *parent, int protocol,
                return NULL;
        }
 
-       id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL);
+       id = ida_alloc_min(&scmi_bus_id, 1, GFP_KERNEL);
        if (id < 0) {
                kfree_const(scmi_dev->name);
                kfree(scmi_dev);
@@ -204,7 +204,7 @@ scmi_device_create(struct device_node *np, struct device *parent, int protocol,
 put_dev:
        kfree_const(scmi_dev->name);
        put_device(&scmi_dev->dev);
-       ida_simple_remove(&scmi_bus_id, id);
+       ida_free(&scmi_bus_id, id);
        return NULL;
 }
 
@@ -212,7 +212,7 @@ void scmi_device_destroy(struct scmi_device *scmi_dev)
 {
        kfree_const(scmi_dev->name);
        scmi_handle_put(scmi_dev->handle);
-       ida_simple_remove(&scmi_bus_id, scmi_dev->id);
+       ida_free(&scmi_bus_id, scmi_dev->id);
        device_unregister(&scmi_dev->dev);
 }
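The ida conversion above uses the 1:1 mapping between the deprecated and current allocator APIs; a sketch:

    /* ida_simple_get(ida, 1, 0, gfp) -- an 'end' of 0 means no upper
     * bound -- is spelled ida_alloc_min(ida, 1, gfp) today, and
     * ida_simple_remove() becomes ida_free().
     */
    int id = ida_alloc_min(&scmi_bus_id, 1, GFP_KERNEL);

    if (id < 0)
            return id;
    /* ... */
    ida_free(&scmi_bus_id, id);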
 
index 4d36a9a..3ed7ae0 100644 (file)
@@ -153,7 +153,7 @@ static int scmi_clock_attributes_get(const struct scmi_protocol_handle *ph,
        if (!ret) {
                u32 latency = 0;
                attributes = le32_to_cpu(attr->attributes);
-               strlcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE);
+               strscpy(clk->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE);
                /* clock_enable_latency field is present only since SCMI v3.1 */
                if (PROTOCOL_REV_MAJOR(version) >= 0x2)
                        latency = le32_to_cpu(attr->clock_enable_latency);
@@ -194,6 +194,7 @@ static int rate_cmp_func(const void *_r1, const void *_r2)
 }
 
 struct scmi_clk_ipriv {
+       struct device *dev;
        u32 clk_id;
        struct scmi_clock_info *clk;
 };
@@ -223,6 +224,29 @@ iter_clk_describe_update_state(struct scmi_iterator_state *st,
        st->num_returned = NUM_RETURNED(flags);
        p->clk->rate_discrete = RATE_DISCRETE(flags);
 
+       /* Warn about out of spec replies ... */
+       if (!p->clk->rate_discrete &&
+           (st->num_returned != 3 || st->num_remaining != 0)) {
+               dev_warn(p->dev,
+                        "Out-of-spec CLOCK_DESCRIBE_RATES reply for %s - returned:%d remaining:%d rx_len:%zd\n",
+                        p->clk->name, st->num_returned, st->num_remaining,
+                        st->rx_len);
+
+               /*
+                * A known quirk: a triplet is returned but num_returned != 3
+                * Check for a safe payload size and fix.
+                */
+               if (st->num_returned != 3 && st->num_remaining == 0 &&
+                   st->rx_len == sizeof(*r) + sizeof(__le32) * 2 * 3) {
+                       st->num_returned = 3;
+                       st->num_remaining = 0;
+               } else {
+                       dev_err(p->dev,
+                               "Cannot fix out-of-spec reply!\n");
+                       return -EPROTO;
+               }
+       }
+
        return 0;
 }
 
@@ -255,7 +279,6 @@ iter_clk_describe_process_response(const struct scmi_protocol_handle *ph,
 
                *rate = RATE_TO_U64(r->rate[st->loop_idx]);
                p->clk->list.num_rates++;
-               //XXX dev_dbg(ph->dev, "Rate %llu Hz\n", *rate);
        }
 
        return ret;
@@ -266,9 +289,7 @@ scmi_clock_describe_rates_get(const struct scmi_protocol_handle *ph, u32 clk_id,
                              struct scmi_clock_info *clk)
 {
        int ret;
-
        void *iter;
-       struct scmi_msg_clock_describe_rates *msg;
        struct scmi_iterator_ops ops = {
                .prepare_message = iter_clk_describe_prepare_message,
                .update_state = iter_clk_describe_update_state,
@@ -277,11 +298,13 @@ scmi_clock_describe_rates_get(const struct scmi_protocol_handle *ph, u32 clk_id,
        struct scmi_clk_ipriv cpriv = {
                .clk_id = clk_id,
                .clk = clk,
+               .dev = ph->dev,
        };
 
        iter = ph->hops->iter_response_init(ph, &ops, SCMI_MAX_NUM_RATES,
                                            CLOCK_DESCRIBE_RATES,
-                                           sizeof(*msg), &cpriv);
+                                           sizeof(struct scmi_msg_clock_describe_rates),
+                                           &cpriv);
        if (IS_ERR(iter))
                return PTR_ERR(iter);
 
index c1922bd..8b7ac66 100644 (file)
@@ -1223,6 +1223,7 @@ static int scmi_iterator_run(void *iter)
                if (ret)
                        break;
 
+               st->rx_len = i->t->rx.len;
                ret = iops->update_state(st, i->resp, i->priv);
                if (ret)
                        break;
index b503c22..8abace5 100644 (file)
@@ -117,6 +117,7 @@ struct scmi_optee_channel {
        u32 channel_id;
        u32 tee_session;
        u32 caps;
+       u32 rx_len;
        struct mutex mu;
        struct scmi_chan_info *cinfo;
        union {
@@ -302,6 +303,9 @@ static int invoke_process_msg_channel(struct scmi_optee_channel *channel, size_t
                return -EIO;
        }
 
+       /* Save response size */
+       channel->rx_len = param[2].u.memref.size;
+
        return 0;
 }
 
@@ -353,6 +357,7 @@ static int setup_dynamic_shmem(struct device *dev, struct scmi_optee_channel *ch
        shbuf = tee_shm_get_va(channel->tee_shm, 0);
        memset(shbuf, 0, msg_size);
        channel->req.msg = shbuf;
+       channel->rx_len = msg_size;
 
        return 0;
 }
@@ -508,7 +513,7 @@ static void scmi_optee_fetch_response(struct scmi_chan_info *cinfo,
        struct scmi_optee_channel *channel = cinfo->transport_info;
 
        if (channel->tee_shm)
-               msg_fetch_response(channel->req.msg, SCMI_OPTEE_MAX_MSG_SIZE, xfer);
+               msg_fetch_response(channel->req.msg, channel->rx_len, xfer);
        else
                shmem_fetch_response(channel->req.shmem, xfer);
 }
index 8f4051a..bbb0331 100644 (file)
@@ -252,7 +252,7 @@ scmi_perf_domain_attributes_get(const struct scmi_protocol_handle *ph,
                        dom_info->mult_factor =
                                        (dom_info->sustained_freq_khz * 1000) /
                                        dom_info->sustained_perf_level;
-               strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+               strscpy(dom_info->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE);
        }
 
        ph->xops->xfer_put(ph, t);
@@ -332,7 +332,6 @@ scmi_perf_describe_levels_get(const struct scmi_protocol_handle *ph, u32 domain,
 {
        int ret;
        void *iter;
-       struct scmi_msg_perf_describe_levels *msg;
        struct scmi_iterator_ops ops = {
                .prepare_message = iter_perf_levels_prepare_message,
                .update_state = iter_perf_levels_update_state,
@@ -345,7 +344,8 @@ scmi_perf_describe_levels_get(const struct scmi_protocol_handle *ph, u32 domain,
 
        iter = ph->hops->iter_response_init(ph, &ops, MAX_OPPS,
                                            PERF_DESCRIBE_LEVELS,
-                                           sizeof(*msg), &ppriv);
+                                           sizeof(struct scmi_msg_perf_describe_levels),
+                                           &ppriv);
        if (IS_ERR(iter))
                return PTR_ERR(iter);
 
index 964882c..356e836 100644 (file)
@@ -122,7 +122,7 @@ scmi_power_domain_attributes_get(const struct scmi_protocol_handle *ph,
                dom_info->state_set_notify = SUPPORTS_STATE_SET_NOTIFY(flags);
                dom_info->state_set_async = SUPPORTS_STATE_SET_ASYNC(flags);
                dom_info->state_set_sync = SUPPORTS_STATE_SET_SYNC(flags);
-               strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+               strscpy(dom_info->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE);
        }
        ph->xops->xfer_put(ph, t);
 
index 73304af..51c3137 100644 (file)
@@ -24,8 +24,6 @@
 
 #include <asm/unaligned.h>
 
-#define SCMI_SHORT_NAME_MAX_SIZE       16
-
 #define PROTOCOL_REV_MINOR_MASK        GENMASK(15, 0)
 #define PROTOCOL_REV_MAJOR_MASK        GENMASK(31, 16)
 #define PROTOCOL_REV_MAJOR(x)  ((u16)(FIELD_GET(PROTOCOL_REV_MAJOR_MASK, (x))))
@@ -181,6 +179,8 @@ struct scmi_protocol_handle {
  * @max_resources: Maximum acceptable number of items, configured by the caller
  *                depending on the underlying resources that it is querying.
  * @loop_idx: The iterator loop index in the current multi-part reply.
+ * @rx_len: Size in bytes of the currently processed message; it can be used by
+ *         the user of the iterator to verify a reply size.
  * @priv: Optional pointer to some additional state-related private data setup
  *       by the caller during the iterations.
  */
@@ -190,6 +190,7 @@ struct scmi_iterator_state {
        unsigned int num_remaining;
        unsigned int max_resources;
        unsigned int loop_idx;
+       size_t rx_len;
        void *priv;
 };
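
The new @rx_len field gives process_response callbacks a way to bound what
they parse by what the transport actually received. A minimal sketch of such
a check, with hypothetical sizes and names (real protocols define their own
reply layouts):

static int iter_example_process_response(const struct scmi_protocol_handle *ph,
					 const void *response,
					 struct scmi_iterator_state *st,
					 void *priv)
{
	/* Hypothetical layout: fixed header followed by fixed-size entries. */
	const size_t hdr_sz = 8, entry_sz = 4;
	size_t needed = hdr_sz + (st->loop_idx + 1) * entry_sz;

	/* Reject a reply shorter than the entry we are about to parse. */
	if (st->rx_len < needed)
		return -EPROTO;

	/* ... parse entry st->loop_idx from response ... */
	return 0;
}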
 
index a420a91..673f3eb 100644 (file)
@@ -116,7 +116,7 @@ scmi_reset_domain_attributes_get(const struct scmi_protocol_handle *ph,
                dom_info->latency_us = le32_to_cpu(attr->latency);
                if (dom_info->latency_us == U32_MAX)
                        dom_info->latency_us = 0;
-               strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+               strscpy(dom_info->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE);
        }
 
        ph->xops->xfer_put(ph, t);
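
These conversions also tighten the copy bound from SCMI_MAX_STR_SIZE to
SCMI_SHORT_NAME_MAX_SIZE, matching the short-name field in the replies.
Unlike strlcpy(), strscpy() never reads past a short destination-sized
window of the source and reports truncation directly; a minimal sketch,
assuming a hypothetical src buffer:

	char dst[SCMI_SHORT_NAME_MAX_SIZE];
	ssize_t n;

	/* strscpy() always NUL-terminates and reports truncation, instead
	 * of returning strlen(src) like strlcpy(), which could read past a
	 * non-terminated source buffer.
	 */
	n = strscpy(dst, src, sizeof(dst));
	if (n == -E2BIG)
		pr_debug("name was truncated to fit\n");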
index 21e0ce8..7288c61 100644 (file)
@@ -338,7 +338,6 @@ static int scmi_sensor_update_intervals(const struct scmi_protocol_handle *ph,
                                        struct scmi_sensor_info *s)
 {
        void *iter;
-       struct scmi_msg_sensor_list_update_intervals *msg;
        struct scmi_iterator_ops ops = {
                .prepare_message = iter_intervals_prepare_message,
                .update_state = iter_intervals_update_state,
@@ -351,22 +350,28 @@ static int scmi_sensor_update_intervals(const struct scmi_protocol_handle *ph,
 
        iter = ph->hops->iter_response_init(ph, &ops, s->intervals.count,
                                            SENSOR_LIST_UPDATE_INTERVALS,
-                                           sizeof(*msg), &upriv);
+                                           sizeof(struct scmi_msg_sensor_list_update_intervals),
+                                           &upriv);
        if (IS_ERR(iter))
                return PTR_ERR(iter);
 
        return ph->hops->iter_response_run(iter);
 }
 
+struct scmi_apriv {
+       bool any_axes_support_extended_names;
+       struct scmi_sensor_info *s;
+};
+
 static void iter_axes_desc_prepare_message(void *message,
                                           const unsigned int desc_index,
                                           const void *priv)
 {
        struct scmi_msg_sensor_axis_description_get *msg = message;
-       const struct scmi_sensor_info *s = priv;
+       const struct scmi_apriv *apriv = priv;
 
        /* Set the number of sensors to be skipped/already read */
-       msg->id = cpu_to_le32(s->id);
+       msg->id = cpu_to_le32(apriv->s->id);
        msg->axis_desc_index = cpu_to_le32(desc_index);
 }
 
@@ -393,19 +398,21 @@ iter_axes_desc_process_response(const struct scmi_protocol_handle *ph,
        u32 attrh, attrl;
        struct scmi_sensor_axis_info *a;
        size_t dsize = SCMI_MSG_RESP_AXIS_DESCR_BASE_SZ;
-       struct scmi_sensor_info *s = priv;
+       struct scmi_apriv *apriv = priv;
        const struct scmi_axis_descriptor *adesc = st->priv;
 
        attrl = le32_to_cpu(adesc->attributes_low);
+       if (SUPPORTS_EXTENDED_AXIS_NAMES(attrl))
+               apriv->any_axes_support_extended_names = true;
 
-       a = &s->axis[st->desc_index + st->loop_idx];
+       a = &apriv->s->axis[st->desc_index + st->loop_idx];
        a->id = le32_to_cpu(adesc->id);
        a->extended_attrs = SUPPORTS_EXTEND_ATTRS(attrl);
 
        attrh = le32_to_cpu(adesc->attributes_high);
        a->scale = S32_EXT(SENSOR_SCALE(attrh));
        a->type = SENSOR_TYPE(attrh);
-       strscpy(a->name, adesc->name, SCMI_MAX_STR_SIZE);
+       strscpy(a->name, adesc->name, SCMI_SHORT_NAME_MAX_SIZE);
 
        if (a->extended_attrs) {
                unsigned int ares = le32_to_cpu(adesc->resolution);
@@ -444,10 +451,19 @@ iter_axes_extended_name_process_response(const struct scmi_protocol_handle *ph,
                                         void *priv)
 {
        struct scmi_sensor_axis_info *a;
-       const struct scmi_sensor_info *s = priv;
+       const struct scmi_apriv *apriv = priv;
        struct scmi_sensor_axis_name_descriptor *adesc = st->priv;
+       u32 axis_id = le32_to_cpu(adesc->axis_id);
 
-       a = &s->axis[st->desc_index + st->loop_idx];
+       if (axis_id >= st->max_resources)
+               return -EPROTO;
+
+       /*
+        * Pick the corresponding descriptor based on the axis_id embedded
+        * in the reply since the list of axes supporting extended names
+        * can be a subset of all the axes.
+        */
+       a = &apriv->s->axis[axis_id];
        strscpy(a->name, adesc->name, SCMI_MAX_STR_SIZE);
        st->priv = ++adesc;
 
@@ -458,21 +474,36 @@ static int
 scmi_sensor_axis_extended_names_get(const struct scmi_protocol_handle *ph,
                                    struct scmi_sensor_info *s)
 {
+       int ret;
        void *iter;
-       struct scmi_msg_sensor_axis_description_get *msg;
        struct scmi_iterator_ops ops = {
                .prepare_message = iter_axes_desc_prepare_message,
                .update_state = iter_axes_extended_name_update_state,
                .process_response = iter_axes_extended_name_process_response,
        };
+       struct scmi_apriv apriv = {
+               .any_axes_support_extended_names = false,
+               .s = s,
+       };
 
        iter = ph->hops->iter_response_init(ph, &ops, s->num_axis,
                                            SENSOR_AXIS_NAME_GET,
-                                           sizeof(*msg), s);
+                                           sizeof(struct scmi_msg_sensor_axis_description_get),
+                                           &apriv);
        if (IS_ERR(iter))
                return PTR_ERR(iter);
 
-       return ph->hops->iter_response_run(iter);
+       /*
+        * Do not cause whole protocol initialization failure when failing to
+        * get extended names for axes.
+        */
+       ret = ph->hops->iter_response_run(iter);
+       if (ret)
+               dev_warn(ph->dev,
+                        "Failed to get axes extended names for %s (ret:%d).\n",
+                        s->name, ret);
+
+       return 0;
 }
 
 static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph,
@@ -481,12 +512,15 @@ static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph,
 {
        int ret;
        void *iter;
-       struct scmi_msg_sensor_axis_description_get *msg;
        struct scmi_iterator_ops ops = {
                .prepare_message = iter_axes_desc_prepare_message,
                .update_state = iter_axes_desc_update_state,
                .process_response = iter_axes_desc_process_response,
        };
+       struct scmi_apriv apriv = {
+               .any_axes_support_extended_names = false,
+               .s = s,
+       };
 
        s->axis = devm_kcalloc(ph->dev, s->num_axis,
                               sizeof(*s->axis), GFP_KERNEL);
@@ -495,7 +529,8 @@ static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph,
 
        iter = ph->hops->iter_response_init(ph, &ops, s->num_axis,
                                            SENSOR_AXIS_DESCRIPTION_GET,
-                                           sizeof(*msg), s);
+                                           sizeof(struct scmi_msg_sensor_axis_description_get),
+                                           &apriv);
        if (IS_ERR(iter))
                return PTR_ERR(iter);
 
@@ -503,7 +538,8 @@ static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph,
        if (ret)
                return ret;
 
-       if (PROTOCOL_REV_MAJOR(version) >= 0x3)
+       if (PROTOCOL_REV_MAJOR(version) >= 0x3 &&
+           apriv.any_axes_support_extended_names)
                ret = scmi_sensor_axis_extended_names_get(ph, s);
 
        return ret;
@@ -598,7 +634,7 @@ iter_sens_descr_process_response(const struct scmi_protocol_handle *ph,
                            SUPPORTS_AXIS(attrh) ?
                            SENSOR_AXIS_NUMBER(attrh) : 0,
                            SCMI_MAX_NUM_SENSOR_AXIS);
-       strscpy(s->name, sdesc->name, SCMI_MAX_STR_SIZE);
+       strscpy(s->name, sdesc->name, SCMI_SHORT_NAME_MAX_SIZE);
 
        /*
         * If supported overwrite short name with the extended
index 9d195d8..eaa8d94 100644 (file)
@@ -180,7 +180,6 @@ static int scmi_voltage_levels_get(const struct scmi_protocol_handle *ph,
 {
        int ret;
        void *iter;
-       struct scmi_msg_cmd_describe_levels *msg;
        struct scmi_iterator_ops ops = {
                .prepare_message = iter_volt_levels_prepare_message,
                .update_state = iter_volt_levels_update_state,
@@ -193,7 +192,8 @@ static int scmi_voltage_levels_get(const struct scmi_protocol_handle *ph,
 
        iter = ph->hops->iter_response_init(ph, &ops, v->num_levels,
                                            VOLTAGE_DESCRIBE_LEVELS,
-                                           sizeof(*msg), &vpriv);
+                                           sizeof(struct scmi_msg_cmd_describe_levels),
+                                           &vpriv);
        if (IS_ERR(iter))
                return PTR_ERR(iter);
 
@@ -225,15 +225,14 @@ static int scmi_voltage_descriptors_get(const struct scmi_protocol_handle *ph,
 
                /* Retrieve domain attributes at first ... */
                put_unaligned_le32(dom, td->tx.buf);
-               ret = ph->xops->do_xfer(ph, td);
                /* Skip domain on comms error */
-               if (ret)
+               if (ph->xops->do_xfer(ph, td))
                        continue;
 
                v = vinfo->domains + dom;
                v->id = dom;
                attributes = le32_to_cpu(resp_dom->attr);
-               strlcpy(v->name, resp_dom->name, SCMI_MAX_STR_SIZE);
+               strscpy(v->name, resp_dom->name, SCMI_SHORT_NAME_MAX_SIZE);
 
                /*
                 * If supported overwrite short name with the extended one;
@@ -249,12 +248,8 @@ static int scmi_voltage_descriptors_get(const struct scmi_protocol_handle *ph,
                                v->async_level_set = true;
                }
 
-               ret = scmi_voltage_levels_get(ph, v);
                /* Skip invalid voltage descriptors */
-               if (ret)
-                       continue;
-
-               ph->xops->reset_rx_to_maxsz(ph, td);
+               scmi_voltage_levels_get(ph, v);
        }
 
        ph->xops->xfer_put(ph, td);
index 4c7c9dd..7882d4b 100644 (file)
@@ -26,8 +26,6 @@
 #include <linux/sysfb.h>
 #include <video/vga.h>
 
-#include <asm/efi.h>
-
 enum {
        OVERRIDE_NONE = 0x0,
        OVERRIDE_BASE = 0x1,
index 2bfbb05..1f276f1 100644 (file)
 #include <linux/screen_info.h>
 #include <linux/sysfb.h>
 
+static struct platform_device *pd;
+static DEFINE_MUTEX(disable_lock);
+static bool disabled;
+
+static bool sysfb_unregister(void)
+{
+       if (IS_ERR_OR_NULL(pd))
+               return false;
+
+       platform_device_unregister(pd);
+       pd = NULL;
+
+       return true;
+}
+
+/**
+ * sysfb_disable() - disable the Generic System Framebuffers support
+ *
+ * This disables the registration of system framebuffer devices that match the
+ * generic drivers that make use of the system framebuffer set up by firmware.
+ *
+ * It also unregisters a device if one was already registered by sysfb_init().
+ *
+ * Context: The function can sleep. A @disable_lock mutex is acquired to serialize
+ *          against sysfb_init(), which registers a system framebuffer device.
+ */
+void sysfb_disable(void)
+{
+       mutex_lock(&disable_lock);
+       sysfb_unregister();
+       disabled = true;
+       mutex_unlock(&disable_lock);
+}
+EXPORT_SYMBOL_GPL(sysfb_disable);
+
 static __init int sysfb_init(void)
 {
        struct screen_info *si = &screen_info;
        struct simplefb_platform_data mode;
-       struct platform_device *pd;
        const char *name;
        bool compatible;
-       int ret;
+       int ret = 0;
+
+       mutex_lock(&disable_lock);
+       if (disabled)
+               goto unlock_mutex;
 
        /* try to create a simple-framebuffer device */
        compatible = sysfb_parse_mode(si, &mode);
        if (compatible) {
-               ret = sysfb_create_simplefb(si, &mode);
-               if (!ret)
-                       return 0;
+               pd = sysfb_create_simplefb(si, &mode);
+               if (!IS_ERR(pd))
+                       goto unlock_mutex;
        }
 
        /* if the FB is incompatible, create a legacy framebuffer device */
@@ -60,8 +98,10 @@ static __init int sysfb_init(void)
                name = "platform-framebuffer";
 
        pd = platform_device_alloc(name, 0);
-       if (!pd)
-               return -ENOMEM;
+       if (!pd) {
+               ret = -ENOMEM;
+               goto unlock_mutex;
+       }
 
        sysfb_apply_efi_quirks(pd);
 
@@ -73,9 +113,11 @@ static __init int sysfb_init(void)
        if (ret)
                goto err;
 
-       return 0;
+       goto unlock_mutex;
 err:
        platform_device_put(pd);
+unlock_mutex:
+       mutex_unlock(&disable_lock);
        return ret;
 }
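
sysfb_disable() gives a native driver that takes over the firmware
framebuffer a way to both remove the generic device and keep sysfb_init()
from registering one later. A hedged sketch of a caller, assuming a
hypothetical native-driver probe path:

	/* Hypothetical native GPU driver probe: once the hardware is
	 * claimed, drop (and block) the generic system-framebuffer device.
	 * The call may sleep, so it must not be used from atomic context.
	 */
	sysfb_disable();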
 
index bda8712..a353e27 100644 (file)
@@ -57,8 +57,8 @@ __init bool sysfb_parse_mode(const struct screen_info *si,
        return false;
 }
 
-__init int sysfb_create_simplefb(const struct screen_info *si,
-                                const struct simplefb_platform_data *mode)
+__init struct platform_device *sysfb_create_simplefb(const struct screen_info *si,
+                                                    const struct simplefb_platform_data *mode)
 {
        struct platform_device *pd;
        struct resource res;
@@ -76,7 +76,7 @@ __init int sysfb_create_simplefb(const struct screen_info *si,
                base |= (u64)si->ext_lfb_base << 32;
        if (!base || (u64)(resource_size_t)base != base) {
                printk(KERN_DEBUG "sysfb: inaccessible VRAM base\n");
-               return -EINVAL;
+               return ERR_PTR(-EINVAL);
        }
 
        /*
@@ -93,7 +93,7 @@ __init int sysfb_create_simplefb(const struct screen_info *si,
        length = mode->height * mode->stride;
        if (length > size) {
                printk(KERN_WARNING "sysfb: VRAM smaller than advertised\n");
-               return -EINVAL;
+               return ERR_PTR(-EINVAL);
        }
        length = PAGE_ALIGN(length);
 
@@ -104,11 +104,11 @@ __init int sysfb_create_simplefb(const struct screen_info *si,
        res.start = base;
        res.end = res.start + length - 1;
        if (res.end <= res.start)
-               return -EINVAL;
+               return ERR_PTR(-EINVAL);
 
        pd = platform_device_alloc("simple-framebuffer", 0);
        if (!pd)
-               return -ENOMEM;
+               return ERR_PTR(-ENOMEM);
 
        sysfb_apply_efi_quirks(pd);
 
@@ -124,10 +124,10 @@ __init int sysfb_create_simplefb(const struct screen_info *si,
        if (ret)
                goto err_put_device;
 
-       return 0;
+       return pd;
 
 err_put_device:
        platform_device_put(pd);
 
-       return ret;
+       return ERR_PTR(ret);
 }
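
With sysfb_create_simplefb() returning the device (or an ERR_PTR) instead of
an int, callers keep the device handle and propagate failures with the usual
helpers; a minimal caller sketch:

	struct platform_device *pd;

	pd = sysfb_create_simplefb(si, &mode);
	if (IS_ERR(pd))
		return PTR_ERR(pd);	/* -EINVAL, -ENOMEM, ... */

	/* pd stays reachable so sysfb_unregister() can tear it down later */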
index df56361..bea0e32 100644 (file)
@@ -434,25 +434,13 @@ static int grgpio_probe(struct platform_device *ofdev)
 static int grgpio_remove(struct platform_device *ofdev)
 {
        struct grgpio_priv *priv = platform_get_drvdata(ofdev);
-       int i;
-       int ret = 0;
-
-       if (priv->domain) {
-               for (i = 0; i < GRGPIO_MAX_NGPIO; i++) {
-                       if (priv->uirqs[i].refcnt != 0) {
-                               ret = -EBUSY;
-                               goto out;
-                       }
-               }
-       }
 
        gpiochip_remove(&priv->gc);
 
        if (priv->domain)
                irq_domain_remove(priv->domain);
 
-out:
-       return ret;
+       return 0;
 }
 
 static const struct of_device_id grgpio_match[] = {
index c5166cd..7f59e5d 100644 (file)
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0+
 //
-// MXC GPIO support. (c) 2008 Daniel Mack <daniel@caiaq.de>
+// MXS GPIO support. (c) 2008 Daniel Mack <daniel@caiaq.de>
 // Copyright 2008 Juergen Beisert, kernel@pengutronix.de
 //
 // Based on code from Freescale,
index c52b2cb..63dcf42 100644 (file)
@@ -172,6 +172,8 @@ static void realtek_gpio_irq_unmask(struct irq_data *data)
        unsigned long flags;
        u16 m;
 
+       gpiochip_enable_irq(&ctrl->gc, line);
+
        raw_spin_lock_irqsave(&ctrl->lock, flags);
        m = ctrl->intr_mask[port];
        m |= realtek_gpio_imr_bits(port_pin, REALTEK_GPIO_IMR_LINE_MASK);
@@ -195,6 +197,8 @@ static void realtek_gpio_irq_mask(struct irq_data *data)
        ctrl->intr_mask[port] = m;
        realtek_gpio_write_imr(ctrl, port, ctrl->intr_type[port], m);
        raw_spin_unlock_irqrestore(&ctrl->lock, flags);
+
+       gpiochip_disable_irq(&ctrl->gc, line);
 }
 
 static int realtek_gpio_irq_set_type(struct irq_data *data, unsigned int flow_type)
@@ -315,13 +319,15 @@ static int realtek_gpio_irq_init(struct gpio_chip *gc)
        return 0;
 }
 
-static struct irq_chip realtek_gpio_irq_chip = {
+static const struct irq_chip realtek_gpio_irq_chip = {
        .name = "realtek-otto-gpio",
        .irq_ack = realtek_gpio_irq_ack,
        .irq_mask = realtek_gpio_irq_mask,
        .irq_unmask = realtek_gpio_irq_unmask,
        .irq_set_type = realtek_gpio_irq_set_type,
        .irq_set_affinity = realtek_gpio_irq_set_affinity,
+       .flags = IRQCHIP_IMMUTABLE,
+       GPIOCHIP_IRQ_RESOURCE_HELPERS,
 };
 
 static const struct of_device_id realtek_gpio_of_match[] = {
@@ -404,7 +410,7 @@ static int realtek_gpio_probe(struct platform_device *pdev)
        irq = platform_get_irq_optional(pdev, 0);
        if (!(dev_flags & GPIO_INTERRUPTS_DISABLED) && irq > 0) {
                girq = &ctrl->gc.irq;
-               girq->chip = &realtek_gpio_irq_chip;
+               gpio_irq_chip_set_chip(girq, &realtek_gpio_irq_chip);
                girq->default_type = IRQ_TYPE_NONE;
                girq->handler = handle_bad_irq;
                girq->parent_handler = realtek_gpio_irq_handler;
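
Making the irq_chip const with IRQCHIP_IMMUTABLE means the gpiolib core no
longer patches the chip at runtime, so the driver itself must mark lines as
in-use around mask/unmask, as the realtek-otto hooks above now do. The
general shape, sketched for a hypothetical driver:

static void example_gpio_irq_unmask(struct irq_data *data)
{
	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
	irq_hw_number_t hwirq = irqd_to_hwirq(data);

	gpiochip_enable_irq(gc, hwirq);	/* before unmasking in hardware */
	/* ... write the hardware interrupt-enable register ... */
}

static void example_gpio_irq_mask(struct irq_data *data)
{
	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
	irq_hw_number_t hwirq = irqd_to_hwirq(data);

	/* ... write the hardware interrupt-disable register ... */
	gpiochip_disable_irq(gc, hwirq);	/* after masking in hardware */
}

static const struct irq_chip example_gpio_irq_chip = {
	.name = "example-gpio",
	.irq_mask = example_gpio_irq_mask,
	.irq_unmask = example_gpio_irq_unmask,
	.flags = IRQCHIP_IMMUTABLE,
	GPIOCHIP_IRQ_RESOURCE_HELPERS,
};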
index 98cd715..8d09b61 100644 (file)
@@ -217,8 +217,6 @@ static int giu_get_irq(unsigned int irq)
        printk(KERN_ERR "spurious GIU interrupt: %04x(%04x),%04x(%04x)\n",
               maskl, pendl, maskh, pendh);
 
-       atomic_inc(&irq_err_count);
-
        return -EINVAL;
 }
 
index 7f8f5b0..4b61d97 100644 (file)
@@ -385,12 +385,13 @@ static int winbond_gpio_get(struct gpio_chip *gc, unsigned int offset)
        unsigned long *base = gpiochip_get_data(gc);
        const struct winbond_gpio_info *info;
        bool val;
+       int ret;
 
        winbond_gpio_get_info(&offset, &info);
 
-       val = winbond_sio_enter(*base);
-       if (val)
-               return val;
+       ret = winbond_sio_enter(*base);
+       if (ret)
+               return ret;
 
        winbond_sio_select_logical(*base, info->dev);
 
index 1f8161c..3b1c675 100644 (file)
@@ -714,7 +714,8 @@ int amdgpu_amdkfd_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
 {
        bool all_hub = false;
 
-       if (adev->family == AMDGPU_FAMILY_AI)
+       if (adev->family == AMDGPU_FAMILY_AI ||
+           adev->family == AMDGPU_FAMILY_RV)
                all_hub = true;
 
        return amdgpu_gmc_flush_gpu_tlb_pasid(adev, pasid, flush_type, all_hub);
index 625424f..58df107 100644 (file)
@@ -5164,7 +5164,7 @@ int amdgpu_device_gpu_recover_imp(struct amdgpu_device *adev,
                 */
                amdgpu_unregister_gpu_instance(tmp_adev);
 
-               drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, true);
+               drm_fb_helper_set_suspend_unlocked(adev_to_drm(tmp_adev)->fb_helper, true);
 
                /* disable ras on ALL IPs */
                if (!need_emergency_restart &&
index b4cf871..89011ba 100644 (file)
@@ -320,6 +320,7 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
        if (!amdgpu_device_has_dc_support(adev)) {
                if (!adev->enable_virtual_display)
                        /* Disable vblank IRQs aggressively for power-saving */
+                       /* XXX: can this be enabled for DC? */
                        adev_to_drm(adev)->vblank_disable_immediate = true;
 
                r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
index 801f6fa..6de63ea 100644 (file)
@@ -642,7 +642,6 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
                            atomic64_read(&adev->visible_pin_size),
                            vram_gtt.vram_size);
                vram_gtt.gtt_size = ttm_manager_type(&adev->mman.bdev, TTM_PL_TT)->size;
-               vram_gtt.gtt_size *= PAGE_SIZE;
                vram_gtt.gtt_size -= atomic64_read(&adev->gart_pin_size);
                return copy_to_user(out, &vram_gtt,
                                    min((size_t)size, sizeof(vram_gtt))) ? -EFAULT : 0;
@@ -675,7 +674,6 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
                        mem.cpu_accessible_vram.usable_heap_size * 3 / 4;
 
                mem.gtt.total_heap_size = gtt_man->size;
-               mem.gtt.total_heap_size *= PAGE_SIZE;
                mem.gtt.usable_heap_size = mem.gtt.total_heap_size -
                        atomic64_read(&adev->gart_pin_size);
                mem.gtt.heap_usage = ttm_resource_manager_usage(gtt_man);
index be6f76a..3b4c194 100644 (file)
@@ -1798,18 +1798,26 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
        DRM_INFO("amdgpu: %uM of VRAM memory ready\n",
                 (unsigned) (adev->gmc.real_vram_size / (1024 * 1024)));
 
-       /* Compute GTT size, either bsaed on 3/4th the size of RAM size
+       /* Compute GTT size, either based on 1/2 of the RAM size
         * or whatever the user passed on module init */
        if (amdgpu_gtt_size == -1) {
                struct sysinfo si;
 
                si_meminfo(&si);
-               gtt_size = min(max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),
-                              adev->gmc.mc_vram_size),
-                              ((uint64_t)si.totalram * si.mem_unit * 3/4));
-       }
-       else
+               /* Certain GL unit tests for large textures can cause problems
+                * with the OOM killer since there is no way to link this memory
+                * to a process.  This was originally mitigated (but not necessarily
+                * eliminated) by limiting the GTT size.  The problem is this limit
+                * is often too low for many modern games, so just make the limit
+                * 1/2 of system memory, which aligns with TTM. The OOM accounting needs
+                * to be addressed, but we shouldn't prevent common 3D applications
+                * from being usable just to potentially mitigate that corner case.
+                */
+               gtt_size = max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),
+                              (u64)si.totalram * si.mem_unit / 2);
+       } else {
                gtt_size = (uint64_t)amdgpu_gtt_size << 20;
+       }
 
        /* Initialize GTT memory pool */
        r = amdgpu_gtt_mgr_init(adev, gtt_size);
index 70be67a..9dd2e06 100644 (file)
@@ -2812,7 +2812,7 @@ static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = {
 
 static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
 {
-       u32 max_cll, min_cll, max, min, q, r;
+       u32 max_avg, min_cll, max, min, q, r;
        struct amdgpu_dm_backlight_caps *caps;
        struct amdgpu_display_manager *dm;
        struct drm_connector *conn_base;
@@ -2842,7 +2842,7 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
        caps = &dm->backlight_caps[i];
        caps->ext_caps = &aconnector->dc_link->dpcd_sink_ext_caps;
        caps->aux_support = false;
-       max_cll = conn_base->hdr_sink_metadata.hdmi_type1.max_cll;
+       max_avg = conn_base->hdr_sink_metadata.hdmi_type1.max_fall;
        min_cll = conn_base->hdr_sink_metadata.hdmi_type1.min_cll;
 
        if (caps->ext_caps->bits.oled == 1 /*||
@@ -2870,8 +2870,8 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
         * The results of the above expressions can be verified at
         * pre_computed_values.
         */
-       q = max_cll >> 5;
-       r = max_cll % 32;
+       q = max_avg >> 5;
+       r = max_avg % 32;
        max = (1 << q) * pre_computed_values[r];
 
        // min luminance: maxLum * (CV/255)^2 / 100
@@ -4259,9 +4259,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
                }
        }
 
-       /* Disable vblank IRQs aggressively for power-saving. */
-       adev_to_drm(adev)->vblank_disable_immediate = true;
-
        /* loops over all connectors on the board */
        for (i = 0; i < link_cnt; i++) {
                struct dc_link *link = NULL;
index fb4ae80..f438172 100644 (file)
@@ -550,7 +550,7 @@ static void dcn315_clk_mgr_helper_populate_bw_params(
                if (!bw_params->clk_table.entries[i].dtbclk_mhz)
                        bw_params->clk_table.entries[i].dtbclk_mhz = def_max.dtbclk_mhz;
        }
-       ASSERT(bw_params->clk_table.entries[i].dcfclk_mhz);
+       ASSERT(bw_params->clk_table.entries[i-1].dcfclk_mhz);
        bw_params->vram_type = bios_info->memory_type;
        bw_params->num_channels = bios_info->ma_channel_number;
        if (!bw_params->num_channels)
index cbc47ae..d8eee89 100644 (file)
@@ -944,7 +944,7 @@ static void override_lane_settings(const struct link_training_settings *lt_setti
 
                return;
 
-       for (lane = 1; lane < LANE_COUNT_DP_MAX; lane++) {
+       for (lane = 0; lane < LANE_COUNT_DP_MAX; lane++) {
                if (lt_settings->voltage_swing)
                        lane_settings[lane].VOLTAGE_SWING = *lt_settings->voltage_swing;
                if (lt_settings->pre_emphasis)
index 7eff781..5f2afa5 100644 (file)
@@ -1766,29 +1766,9 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context)
                                break;
                        }
                }
-
-               /*
-                * TO-DO: So far the code logic below only addresses single eDP case.
-                * For dual eDP case, there are a few things that need to be
-                * implemented first:
-                *
-                * 1. Change the fastboot logic above, so eDP link[0 or 1]'s
-                * stream[0 or 1] will all be checked.
-                *
-                * 2. Change keep_edp_vdd_on to an array, and maintain keep_edp_vdd_on
-                * for each eDP.
-                *
-                * Once above 2 things are completed, we can then change the logic below
-                * correspondingly, so dual eDP case will be fully covered.
-                */
-
-               // We are trying to enable eDP, don't power down VDD if eDP stream is existing
-               if ((edp_stream_num == 1 && edp_streams[0] != NULL) || can_apply_edp_fast_boot) {
+               // We are trying to enable eDP, don't power down VDD
+               if (can_apply_edp_fast_boot)
                        keep_edp_vdd_on = true;
-                       DC_LOG_EVENT_LINK_TRAINING("Keep eDP Vdd on\n");
-               } else {
-                       DC_LOG_EVENT_LINK_TRAINING("No eDP stream enabled, turn eDP Vdd off\n");
-               }
        }
 
        // Check seamless boot support
index 970b65e..eaa7032 100644 (file)
@@ -212,6 +212,9 @@ static void dpp2_cnv_setup (
                break;
        }
 
+       /* Set default color space based on format if none is given. */
+       color_space = input_color_space ? input_color_space : color_space;
+
        if (is_2bit == 1 && alpha_2bit_lut != NULL) {
                REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);
                REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
index 8b6505b..f50ab96 100644 (file)
@@ -153,6 +153,9 @@ static void dpp201_cnv_setup(
                break;
        }
 
+       /* Set default color space based on format if none is given. */
+       color_space = input_color_space ? input_color_space : color_space;
+
        if (is_2bit == 1 && alpha_2bit_lut != NULL) {
                REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);
                REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
index ab3918c..0dcc075 100644 (file)
@@ -294,6 +294,9 @@ static void dpp3_cnv_setup (
                break;
        }
 
+       /* Set default color space based on format if none is given. */
+       color_space = input_color_space ? input_color_space : color_space;
+
        if (is_2bit == 1 && alpha_2bit_lut != NULL) {
                REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);
                REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
index 4e853ac..df87ba9 100644 (file)
@@ -152,6 +152,12 @@ static const struct dmi_system_id orientation_data[] = {
                  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYA NEO 2021"),
                },
                .driver_data = (void *)&lcd800x1280_rightside_up,
+       }, {    /* AYA NEO NEXT */
+               .matches = {
+                 DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AYANEO"),
+                 DMI_MATCH(DMI_BOARD_NAME, "NEXT"),
+               },
+               .driver_data = (void *)&lcd800x1280_rightside_up,
        }, {    /* Chuwi HiBook (CWI514) */
                .matches = {
                        DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
index 424ea23..16c5396 100644 (file)
@@ -176,15 +176,15 @@ static struct exynos_drm_driver_info exynos_drm_drivers[] = {
        }, {
                DRV_PTR(mixer_driver, CONFIG_DRM_EXYNOS_MIXER),
                DRM_COMPONENT_DRIVER
-       }, {
-               DRV_PTR(mic_driver, CONFIG_DRM_EXYNOS_MIC),
-               DRM_COMPONENT_DRIVER
        }, {
                DRV_PTR(dp_driver, CONFIG_DRM_EXYNOS_DP),
                DRM_COMPONENT_DRIVER
        }, {
                DRV_PTR(dsi_driver, CONFIG_DRM_EXYNOS_DSI),
                DRM_COMPONENT_DRIVER
+       }, {
+               DRV_PTR(mic_driver, CONFIG_DRM_EXYNOS_MIC),
+               DRM_COMPONENT_DRIVER
        }, {
                DRV_PTR(hdmi_driver, CONFIG_DRM_EXYNOS_HDMI),
                DRM_COMPONENT_DRIVER
index 9e06f8e..09ce28e 100644 (file)
@@ -26,6 +26,7 @@
 #include <drm/drm_print.h>
 
 #include "exynos_drm_drv.h"
+#include "exynos_drm_crtc.h"
 
 /* Sysreg registers for MIC */
 #define DSD_CFG_MUX    0x1004
@@ -100,9 +101,7 @@ struct exynos_mic {
 
        bool i80_mode;
        struct videomode vm;
-       struct drm_encoder *encoder;
        struct drm_bridge bridge;
-       struct drm_bridge *next_bridge;
 
        bool enabled;
 };
@@ -229,8 +228,6 @@ static void mic_set_reg_on(struct exynos_mic *mic, bool enable)
        writel(reg, mic->reg + MIC_OP);
 }
 
-static void mic_disable(struct drm_bridge *bridge) { }
-
 static void mic_post_disable(struct drm_bridge *bridge)
 {
        struct exynos_mic *mic = bridge->driver_private;
@@ -297,34 +294,30 @@ unlock:
        mutex_unlock(&mic_mutex);
 }
 
-static void mic_enable(struct drm_bridge *bridge) { }
-
-static int mic_attach(struct drm_bridge *bridge,
-                     enum drm_bridge_attach_flags flags)
-{
-       struct exynos_mic *mic = bridge->driver_private;
-
-       return drm_bridge_attach(bridge->encoder, mic->next_bridge,
-                                &mic->bridge, flags);
-}
-
 static const struct drm_bridge_funcs mic_bridge_funcs = {
-       .disable = mic_disable,
        .post_disable = mic_post_disable,
        .mode_set = mic_mode_set,
        .pre_enable = mic_pre_enable,
-       .enable = mic_enable,
-       .attach = mic_attach,
 };
 
 static int exynos_mic_bind(struct device *dev, struct device *master,
                           void *data)
 {
        struct exynos_mic *mic = dev_get_drvdata(dev);
+       struct drm_device *drm_dev = data;
+       struct exynos_drm_crtc *crtc = exynos_drm_crtc_get_by_type(drm_dev,
+                                                      EXYNOS_DISPLAY_TYPE_LCD);
+       struct drm_encoder *e, *encoder = NULL;
+
+       drm_for_each_encoder(e, drm_dev)
+               if (e->possible_crtcs == drm_crtc_mask(&crtc->base))
+                       encoder = e;
+       if (!encoder)
+               return -ENODEV;
 
        mic->bridge.driver_private = mic;
 
-       return 0;
+       return drm_bridge_attach(encoder, &mic->bridge, NULL, 0);
 }
 
 static void exynos_mic_unbind(struct device *dev, struct device *master,
@@ -388,7 +381,6 @@ static int exynos_mic_probe(struct platform_device *pdev)
 {
        struct device *dev = &pdev->dev;
        struct exynos_mic *mic;
-       struct device_node *remote;
        struct resource res;
        int ret, i;
 
@@ -432,16 +424,6 @@ static int exynos_mic_probe(struct platform_device *pdev)
                }
        }
 
-       remote = of_graph_get_remote_node(dev->of_node, 1, 0);
-       mic->next_bridge = of_drm_find_bridge(remote);
-       if (IS_ERR(mic->next_bridge)) {
-               DRM_DEV_ERROR(dev, "mic: Failed to find next bridge\n");
-               ret = PTR_ERR(mic->next_bridge);
-               goto err;
-       }
-
-       of_node_put(remote);
-
        platform_set_drvdata(pdev, mic);
 
        mic->bridge.funcs = &mic_bridge_funcs;
index e4a79c1..ff67899 100644 (file)
@@ -388,13 +388,23 @@ static int dg2_max_source_rate(struct intel_dp *intel_dp)
        return intel_dp_is_edp(intel_dp) ? 810000 : 1350000;
 }
 
+static bool is_low_voltage_sku(struct drm_i915_private *i915, enum phy phy)
+{
+       u32 voltage;
+
+       voltage = intel_de_read(i915, ICL_PORT_COMP_DW3(phy)) & VOLTAGE_INFO_MASK;
+
+       return voltage == VOLTAGE_INFO_0_85V;
+}
+
 static int icl_max_source_rate(struct intel_dp *intel_dp)
 {
        struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
        struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
        enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port);
 
-       if (intel_phy_is_combo(dev_priv, phy) && !intel_dp_is_edp(intel_dp))
+       if (intel_phy_is_combo(dev_priv, phy) &&
+           (is_low_voltage_sku(dev_priv, phy) || !intel_dp_is_edp(intel_dp)))
                return 540000;
 
        return 810000;
@@ -402,7 +412,23 @@ static int icl_max_source_rate(struct intel_dp *intel_dp)
 
 static int ehl_max_source_rate(struct intel_dp *intel_dp)
 {
-       if (intel_dp_is_edp(intel_dp))
+       struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+       struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
+       enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port);
+
+       if (intel_dp_is_edp(intel_dp) || is_low_voltage_sku(dev_priv, phy))
+               return 540000;
+
+       return 810000;
+}
+
+static int dg1_max_source_rate(struct intel_dp *intel_dp)
+{
+       struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+       struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
+       enum phy phy = intel_port_to_phy(i915, dig_port->base.port);
+
+       if (intel_phy_is_combo(i915, phy) && is_low_voltage_sku(i915, phy))
                return 540000;
 
        return 810000;
@@ -445,7 +471,7 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
                        max_rate = dg2_max_source_rate(intel_dp);
                else if (IS_ALDERLAKE_P(dev_priv) || IS_ALDERLAKE_S(dev_priv) ||
                         IS_DG1(dev_priv) || IS_ROCKETLAKE(dev_priv))
-                       max_rate = 810000;
+                       max_rate = dg1_max_source_rate(intel_dp);
                else if (IS_JSL_EHL(dev_priv))
                        max_rate = ehl_max_source_rate(intel_dp);
                else
index 22f5557..88c2f38 100644 (file)
@@ -2396,7 +2396,7 @@ static void icl_wrpll_params_populate(struct skl_wrpll_params *params,
 }
 
 /*
- * Display WA #22010492432: ehl, tgl, adl-p
+ * Display WA #22010492432: ehl, tgl, adl-s, adl-p
  * Program half of the nominal DCO divider fraction value.
  */
 static bool
@@ -2404,7 +2404,7 @@ ehl_combo_pll_div_frac_wa_needed(struct drm_i915_private *i915)
 {
        return ((IS_PLATFORM(i915, INTEL_ELKHARTLAKE) &&
                 IS_JSL_EHL_DISPLAY_STEP(i915, STEP_B0, STEP_FOREVER)) ||
-                IS_TIGERLAKE(i915) || IS_ALDERLAKE_P(i915)) &&
+                IS_TIGERLAKE(i915) || IS_ALDERLAKE_S(i915) || IS_ALDERLAKE_P(i915)) &&
                 i915->dpll.ref_clks.nssc == 38400;
 }
 
index ab4c5ab..321af10 100644 (file)
@@ -933,8 +933,9 @@ static int set_proto_ctx_param(struct drm_i915_file_private *fpriv,
        case I915_CONTEXT_PARAM_PERSISTENCE:
                if (args->size)
                        ret = -EINVAL;
-               ret = proto_context_set_persistence(fpriv->dev_priv, pc,
-                                                   args->value);
+               else
+                       ret = proto_context_set_persistence(fpriv->dev_priv, pc,
+                                                           args->value);
                break;
 
        case I915_CONTEXT_PARAM_PROTECTED_CONTENT:
index 3e5d605..1674b0c 100644 (file)
@@ -35,12 +35,12 @@ bool i915_gem_cpu_write_needs_clflush(struct drm_i915_gem_object *obj)
        if (obj->cache_dirty)
                return false;
 
-       if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE))
-               return true;
-
        if (IS_DGFX(i915))
                return false;
 
+       if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE))
+               return true;
+
        /* Currently in use by HW (display engine)? Keep flushed. */
        return i915_gem_object_is_framebuffer(obj);
 }
index c326bd2..30fe847 100644 (file)
@@ -999,7 +999,8 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
                        }
                }
 
-               err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
+               /* Reserve enough slots to accommodate composite fences */
+               err = dma_resv_reserve_fences(vma->obj->base.resv, eb->num_batches);
                if (err)
                        return err;
 
index 53307ca..51a0fe6 100644 (file)
@@ -785,6 +785,7 @@ void intel_gt_driver_unregister(struct intel_gt *gt)
 {
        intel_wakeref_t wakeref;
 
+       intel_gt_sysfs_unregister(gt);
        intel_rps_driver_unregister(&gt->rps);
        intel_gsc_fini(&gt->gsc);
 
index 8ec8bc6..9e4ebf5 100644 (file)
@@ -24,7 +24,7 @@ bool is_object_gt(struct kobject *kobj)
 
 static struct intel_gt *kobj_to_gt(struct kobject *kobj)
 {
-       return container_of(kobj, struct kobj_gt, base)->gt;
+       return container_of(kobj, struct intel_gt, sysfs_gt);
 }
 
 struct intel_gt *intel_gt_sysfs_get_drvdata(struct device *dev,
@@ -72,9 +72,9 @@ static struct attribute *id_attrs[] = {
 };
 ATTRIBUTE_GROUPS(id);
 
+/* A kobject needs a release() method even if it does nothing */
 static void kobj_gt_release(struct kobject *kobj)
 {
-       kfree(kobj);
 }
 
 static struct kobj_type kobj_gt_type = {
@@ -85,8 +85,6 @@ static struct kobj_type kobj_gt_type = {
 
 void intel_gt_sysfs_register(struct intel_gt *gt)
 {
-       struct kobj_gt *kg;
-
        /*
         * We need to make things right with the
         * ABI compatibility. The files were originally
@@ -98,25 +96,22 @@ void intel_gt_sysfs_register(struct intel_gt *gt)
        if (gt_is_root(gt))
                intel_gt_sysfs_pm_init(gt, gt_get_parent_obj(gt));
 
-       kg = kzalloc(sizeof(*kg), GFP_KERNEL);
-       if (!kg)
+       /* init and xfer ownership to sysfs tree */
+       if (kobject_init_and_add(&gt->sysfs_gt, &kobj_gt_type,
+                                gt->i915->sysfs_gt, "gt%d", gt->info.id))
                goto exit_fail;
 
-       kobject_init(&kg->base, &kobj_gt_type);
-       kg->gt = gt;
-
-       /* xfer ownership to sysfs tree */
-       if (kobject_add(&kg->base, gt->i915->sysfs_gt, "gt%d", gt->info.id))
-               goto exit_kobj_put;
-
-       intel_gt_sysfs_pm_init(gt, &kg->base);
+       intel_gt_sysfs_pm_init(gt, &gt->sysfs_gt);
 
        return;
 
-exit_kobj_put:
-       kobject_put(&kg->base);
-
 exit_fail:
+       kobject_put(&gt->sysfs_gt);
        drm_warn(&gt->i915->drm,
                 "failed to initialize gt%d sysfs root\n", gt->info.id);
 }
+
+void intel_gt_sysfs_unregister(struct intel_gt *gt)
+{
+       kobject_put(&gt->sysfs_gt);
+}
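
Embedding the kobject in struct intel_gt removes the separately allocated
wrapper: container_of() recovers the gt directly, and the now-empty
kobj_gt_release() only satisfies the kobject core's requirement for a
release() method, since the memory belongs to struct intel_gt. The general
embedded-kobject shape, sketched with hypothetical names:

struct example_obj {
	int id;
	struct kobject kobj;	/* embedded; lifetime tied to example_obj */
};

static struct example_obj *kobj_to_example(struct kobject *kobj)
{
	return container_of(kobj, struct example_obj, kobj);
}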
index 9471b26..a99aa7e 100644 (file)
 
 struct intel_gt;
 
-struct kobj_gt {
-       struct kobject base;
-       struct intel_gt *gt;
-};
-
 bool is_object_gt(struct kobject *kobj);
 
 struct drm_i915_private *kobj_to_i915(struct kobject *kobj);
@@ -28,6 +23,7 @@ intel_gt_create_kobj(struct intel_gt *gt,
                     const char *name);
 
 void intel_gt_sysfs_register(struct intel_gt *gt);
+void intel_gt_sysfs_unregister(struct intel_gt *gt);
 struct intel_gt *intel_gt_sysfs_get_drvdata(struct device *dev,
                                            const char *name);
 
index b06611c..edd7a3c 100644 (file)
@@ -224,6 +224,9 @@ struct intel_gt {
        } mocs;
 
        struct intel_pxp pxp;
+
+       /* gt/gtN sysfs */
+       struct kobject sysfs_gt;
 };
 
 enum intel_gt_scratch_field {
index d078f88..f0d7b57 100644 (file)
@@ -156,7 +156,7 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
                [INTEL_UC_FW_TYPE_GUC] = { blobs_guc, ARRAY_SIZE(blobs_guc) },
                [INTEL_UC_FW_TYPE_HUC] = { blobs_huc, ARRAY_SIZE(blobs_huc) },
        };
-       static const struct uc_fw_platform_requirement *fw_blobs;
+       const struct uc_fw_platform_requirement *fw_blobs;
        enum intel_platform p = INTEL_INFO(i915)->platform;
        u32 fw_count;
        u8 rev = INTEL_REVID(i915);
index 90b0ce5..1041b53 100644 (file)
@@ -530,6 +530,7 @@ mask_err:
 static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 {
        struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
+       struct pci_dev *root_pdev;
        int ret;
 
        if (i915_inject_probe_failure(dev_priv))
@@ -641,6 +642,15 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 
        intel_bw_init_hw(dev_priv);
 
+       /*
+        * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
+        * This should be totally removed when we handle the pci states properly
+        * on runtime PM and on s2idle cases.
+        */
+       root_pdev = pcie_find_root_port(pdev);
+       if (root_pdev)
+               pci_d3cold_disable(root_pdev);
+
        return 0;
 
 err_msi:
@@ -664,11 +674,16 @@ err_perf:
 static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
 {
        struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
+       struct pci_dev *root_pdev;
 
        i915_perf_fini(dev_priv);
 
        if (pdev->msi_enabled)
                pci_disable_msi(pdev);
+
+       root_pdev = pcie_find_root_port(pdev);
+       if (root_pdev)
+               pci_d3cold_enable(root_pdev);
 }
 
 /**
@@ -1193,14 +1208,6 @@ static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
                goto out;
        }
 
-       /*
-        * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
-        * This should be totally removed when we handle the pci states properly
-        * on runtime PM and on s2idle cases.
-        */
-       if (suspend_to_idle(dev_priv))
-               pci_d3cold_disable(pdev);
-
        pci_disable_device(pdev);
        /*
         * During hibernation on some platforms the BIOS may try to access
@@ -1365,8 +1372,6 @@ static int i915_drm_resume_early(struct drm_device *dev)
 
        pci_set_master(pdev);
 
-       pci_d3cold_enable(pdev);
-
        disable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
 
        ret = vlv_resume_prepare(dev_priv, false);
@@ -1543,7 +1548,6 @@ static int intel_runtime_suspend(struct device *kdev)
 {
        struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
        struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
-       struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
        int ret;
 
        if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv)))
@@ -1589,12 +1593,6 @@ static int intel_runtime_suspend(struct device *kdev)
                drm_err(&dev_priv->drm,
                        "Unclaimed access detected prior to suspending\n");
 
-       /*
-        * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
-        * This should be totally removed when we handle the pci states properly
-        * on runtime PM and on s2idle cases.
-        */
-       pci_d3cold_disable(pdev);
        rpm->suspended = true;
 
        /*
@@ -1633,7 +1631,6 @@ static int intel_runtime_resume(struct device *kdev)
 {
        struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
        struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
-       struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
        int ret;
 
        if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv)))
@@ -1646,7 +1643,6 @@ static int intel_runtime_resume(struct device *kdev)
 
        intel_opregion_notify_adapter(dev_priv, PCI_D0);
        rpm->suspended = false;
-       pci_d3cold_enable(pdev);
        if (intel_uncore_unclaimed_mmio(&dev_priv->uncore))
                drm_dbg(&dev_priv->drm,
                        "Unclaimed access during suspend, bios?\n");
index 18d38cb..b09d1d3 100644 (file)
@@ -116,8 +116,9 @@ show_client_class(struct seq_file *m,
                total += busy_add(ctx, class);
        rcu_read_unlock();
 
-       seq_printf(m, "drm-engine-%s:\t%llu ns\n",
-                  uabi_class_names[class], total);
+       if (capacity)
+               seq_printf(m, "drm-engine-%s:\t%llu ns\n",
+                          uabi_class_names[class], total);
 
        if (capacity > 1)
                seq_printf(m, "drm-engine-capacity-%s:\t%u\n",
index 8521dab..1e27502 100644 (file)
@@ -166,7 +166,14 @@ static ssize_t error_state_read(struct file *filp, struct kobject *kobj,
        struct device *kdev = kobj_to_dev(kobj);
        struct drm_i915_private *i915 = kdev_minor_to_i915(kdev);
        struct i915_gpu_coredump *gpu;
-       ssize_t ret;
+       ssize_t ret = 0;
+
+       /*
+        * FIXME: Concurrent clients triggering resets and reading + clearing
+        * dumps can cause inconsistent sysfs reads when a user calls in with a
+        * non-zero offset to complete a prior partial read but the
+        * gpu_coredump has been cleared or replaced.
+        */
 
        gpu = i915_first_error_state(i915);
        if (IS_ERR(gpu)) {
@@ -178,8 +185,10 @@ static ssize_t error_state_read(struct file *filp, struct kobject *kobj,
                const char *str = "No error state collected\n";
                size_t len = strlen(str);
 
-               ret = min_t(size_t, count, len - off);
-               memcpy(buf, str + off, ret);
+               if (off < len) {
+                       ret = min_t(size_t, count, len - off);
+                       memcpy(buf, str + off, ret);
+               }
        }
 
        return ret;
@@ -259,4 +268,6 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
 
        device_remove_bin_file(kdev,  &dpf_attrs_1);
        device_remove_bin_file(kdev,  &dpf_attrs);
+
+       kobject_put(dev_priv->sysfs_gt);
 }
index 4f6db53..0bffb70 100644 (file)
@@ -23,6 +23,7 @@
  */
 
 #include <linux/sched/mm.h>
+#include <linux/dma-fence-array.h>
 #include <drm/drm_gem.h>
 
 #include "display/intel_frontbuffer.h"
@@ -1823,6 +1824,21 @@ int _i915_vma_move_to_active(struct i915_vma *vma,
        if (unlikely(err))
                return err;
 
+       /*
+        * Reserve fences slot early to prevent an allocation after preparing
+        * the workload and associating fences with dma_resv.
+        */
+       if (fence && !(flags & __EXEC_OBJECT_NO_RESERVE)) {
+               struct dma_fence *curr;
+               int idx;
+
+               dma_fence_array_for_each(curr, idx, fence)
+                       ;
+               err = dma_resv_reserve_fences(vma->obj->base.resv, idx);
+               if (unlikely(err))
+                       return err;
+       }
+
        if (flags & EXEC_OBJECT_WRITE) {
                struct intel_frontbuffer *front;
 
@@ -1832,31 +1848,23 @@ int _i915_vma_move_to_active(struct i915_vma *vma,
                                i915_active_add_request(&front->write, rq);
                        intel_frontbuffer_put(front);
                }
+       }
 
-               if (!(flags & __EXEC_OBJECT_NO_RESERVE)) {
-                       err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
-                       if (unlikely(err))
-                               return err;
-               }
+       if (fence) {
+               struct dma_fence *curr;
+               enum dma_resv_usage usage;
+               int idx;
 
-               if (fence) {
-                       dma_resv_add_fence(vma->obj->base.resv, fence,
-                                          DMA_RESV_USAGE_WRITE);
+               obj->read_domains = 0;
+               if (flags & EXEC_OBJECT_WRITE) {
+                       usage = DMA_RESV_USAGE_WRITE;
                        obj->write_domain = I915_GEM_DOMAIN_RENDER;
-                       obj->read_domains = 0;
-               }
-       } else {
-               if (!(flags & __EXEC_OBJECT_NO_RESERVE)) {
-                       err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
-                       if (unlikely(err))
-                               return err;
+               } else {
+                       usage = DMA_RESV_USAGE_READ;
                }
 
-               if (fence) {
-                       dma_resv_add_fence(vma->obj->base.resv, fence,
-                                          DMA_RESV_USAGE_READ);
-                       obj->write_domain = 0;
-               }
+               dma_fence_array_for_each(curr, idx, fence)
+                       dma_resv_add_fence(vma->obj->base.resv, curr, usage);
        }
 
        if (flags & EXEC_OBJECT_NEEDS_FENCE && vma->fence)
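
dma_fence_array_for_each() iterates the components of a composite fence and
treats a plain fence as a single element, so the empty-bodied loop above
leaves idx equal to the number of dma_resv slots to reserve. The counting
idiom in isolation, assuming a 'fence' in scope:

	struct dma_fence *curr;
	int idx;

	/* After the loop, idx is the component count of a dma_fence_array,
	 * or 1 for a plain fence; no reference juggling is needed here.
	 */
	dma_fence_array_for_each(curr, idx, fence)
		;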
index 4e665c8..efe9840 100644 (file)
@@ -498,10 +498,15 @@ int adreno_hw_init(struct msm_gpu *gpu)
 
                ring->cur = ring->start;
                ring->next = ring->start;
-
-               /* reset completed fence seqno: */
-               ring->memptrs->fence = ring->fctx->completed_fence;
                ring->memptrs->rptr = 0;
+
+               /* Detect and clean up an impossible fence, i.e. if the GPU
+                * managed to scribble something invalid, we don't want that to
+                * confuse us into mistakenly believing that submits have completed.
+                */
+               if (fence_before(ring->fctx->last_fence, ring->memptrs->fence)) {
+                       ring->memptrs->fence = ring->fctx->last_fence;
+               }
        }
 
        return 0;
@@ -1057,7 +1062,8 @@ void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
        for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
                release_firmware(adreno_gpu->fw[i]);
 
-       pm_runtime_disable(&priv->gpu_pdev->dev);
+       if (pm_runtime_enabled(&priv->gpu_pdev->dev))
+               pm_runtime_disable(&priv->gpu_pdev->dev);
 
        msm_gpu_cleanup(&adreno_gpu->base);
 }
index 3a462e3..a1b8c45 100644 (file)
@@ -1251,12 +1251,13 @@ static void dpu_encoder_vblank_callback(struct drm_encoder *drm_enc,
        DPU_ATRACE_BEGIN("encoder_vblank_callback");
        dpu_enc = to_dpu_encoder_virt(drm_enc);
 
+       atomic_inc(&phy_enc->vsync_cnt);
+
        spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags);
        if (dpu_enc->crtc)
                dpu_crtc_vblank_callback(dpu_enc->crtc);
        spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags);
 
-       atomic_inc(&phy_enc->vsync_cnt);
        DPU_ATRACE_END("encoder_vblank_callback");
 }
 
index 59da348..0ec809a 100644 (file)
@@ -252,11 +252,6 @@ static int dpu_encoder_phys_wb_atomic_check(
        DPU_DEBUG("[atomic_check:%d, \"%s\",%d,%d]\n",
                        phys_enc->wb_idx, mode->name, mode->hdisplay, mode->vdisplay);
 
-       if (!conn_state->writeback_job || !conn_state->writeback_job->fb)
-               return 0;
-
-       fb = conn_state->writeback_job->fb;
-
        if (!conn_state || !conn_state->connector) {
                DPU_ERROR("invalid connector state\n");
                return -EINVAL;
@@ -267,6 +262,11 @@ static int dpu_encoder_phys_wb_atomic_check(
                return -EINVAL;
        }
 
+       if (!conn_state->writeback_job || !conn_state->writeback_job->fb)
+               return 0;
+
+       fb = conn_state->writeback_job->fb;
+
        DPU_DEBUG("[fb_id:%u][fb:%u,%u]\n", fb->base.id,
                        fb->width, fb->height);
 
index 399115e..2fd7870 100644 (file)
@@ -11,7 +11,14 @@ static int dpu_wb_conn_get_modes(struct drm_connector *connector)
        struct msm_drm_private *priv = dev->dev_private;
        struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms);
 
-       return drm_add_modes_noedid(connector, dpu_kms->catalog->caps->max_linewidth,
+       /*
+        * Ideally we should limit the modes only to the max_linewidth, but on
+        * some chipsets this would allow even 4K modes to be added, which then
+        * fail the per-SSPP bandwidth checks. So, until dual-SSPP and source
+        * split support are added, limit the modes based on max_mixer_width,
+        * after which 4K modes can be supported.
+        */
+       return drm_add_modes_noedid(connector, dpu_kms->catalog->caps->max_mixer_width,
                        dev->mode_config.max_height);
 }
 
index fb48c8c..17cb1fc 100644 (file)
@@ -216,6 +216,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms,
                encoder = mdp4_lcdc_encoder_init(dev, panel_node);
                if (IS_ERR(encoder)) {
                        DRM_DEV_ERROR(dev->dev, "failed to construct LCDC encoder\n");
+                       of_node_put(panel_node);
                        return PTR_ERR(encoder);
                }
 
@@ -225,6 +226,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms,
                connector = mdp4_lvds_connector_init(dev, panel_node, encoder);
                if (IS_ERR(connector)) {
                        DRM_DEV_ERROR(dev->dev, "failed to initialize LVDS connector\n");
+                       of_node_put(panel_node);
                        return PTR_ERR(connector);
                }
 
index b7f5b8d..7032493 100644 (file)
@@ -1534,6 +1534,8 @@ end:
        return ret;
 }
 
+static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl);
+
 static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
 {
        int ret = 0;
@@ -1557,7 +1559,7 @@ static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
 
        ret = dp_ctrl_on_link(&ctrl->dp_ctrl);
        if (!ret)
-               ret = dp_ctrl_on_stream(&ctrl->dp_ctrl);
+               ret = dp_ctrl_on_stream_phy_test_report(&ctrl->dp_ctrl);
        else
                DRM_ERROR("failed to enable DP link controller\n");
 
@@ -1813,7 +1815,27 @@ static int dp_ctrl_link_retrain(struct dp_ctrl_private *ctrl)
        return dp_ctrl_setup_main_link(ctrl, &training_step);
 }
 
-int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl)
+{
+       int ret;
+       struct dp_ctrl_private *ctrl;
+
+       ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
+
+       ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;
+
+       ret = dp_ctrl_enable_stream_clocks(ctrl);
+       if (ret) {
+               DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret);
+               return ret;
+       }
+
+       dp_ctrl_send_phy_test_pattern(ctrl);
+
+       return 0;
+}
+
+int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train)
 {
        int ret = 0;
        bool mainlink_ready = false;
@@ -1849,12 +1871,7 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
                goto end;
        }
 
-       if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN) {
-               dp_ctrl_send_phy_test_pattern(ctrl);
-               return 0;
-       }
-
-       if (!dp_ctrl_channel_eq_ok(ctrl))
+       if (force_link_train || !dp_ctrl_channel_eq_ok(ctrl))
                dp_ctrl_link_retrain(ctrl);
 
        /* stop txing train pattern to end link training */
index 0745fde..b563e2e 100644
@@ -21,7 +21,7 @@ struct dp_ctrl {
 };
 
 int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl);
-int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl);
+int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train);
 int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl);
 int dp_ctrl_off_link(struct dp_ctrl *dp_ctrl);
 int dp_ctrl_off(struct dp_ctrl *dp_ctrl);
index bce7793..239c8e3 100644
@@ -309,12 +309,15 @@ static void dp_display_unbind(struct device *dev, struct device *master,
        struct msm_drm_private *priv = dev_get_drvdata(master);
 
        /* disable all HPD interrupts */
-       dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
+       if (dp->core_initialized)
+               dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
 
        kthread_stop(dp->ev_tsk);
 
        dp_power_client_deinit(dp->power);
        dp_aux_unregister(dp->aux);
+       dp->drm_dev = NULL;
+       dp->aux->drm_dev = NULL;
        priv->dp[dp->id] = NULL;
 }
 
@@ -872,7 +875,7 @@ static int dp_display_enable(struct dp_display_private *dp, u32 data)
                return 0;
        }
 
-       rc = dp_ctrl_on_stream(dp->ctrl);
+       rc = dp_ctrl_on_stream(dp->ctrl, data);
        if (!rc)
                dp_display->power_on = true;
 
@@ -1659,6 +1662,7 @@ void dp_bridge_enable(struct drm_bridge *drm_bridge)
        int rc = 0;
        struct dp_display_private *dp_display;
        u32 state;
+       bool force_link_train = false;
 
        dp_display = container_of(dp, struct dp_display_private, dp_display);
        if (!dp_display->dp_mode.drm_mode.clock) {
@@ -1693,10 +1697,12 @@ void dp_bridge_enable(struct drm_bridge *drm_bridge)
 
        state =  dp_display->hpd_state;
 
-       if (state == ST_DISPLAY_OFF)
+       if (state == ST_DISPLAY_OFF) {
                dp_display_host_phy_init(dp_display);
+               force_link_train = true;
+       }
 
-       dp_display_enable(dp_display, 0);
+       dp_display_enable(dp_display, force_link_train);
 
        rc = dp_display_post_enable(dp);
        if (rc) {
@@ -1705,10 +1711,6 @@ void dp_bridge_enable(struct drm_bridge *drm_bridge)
                dp_display_unprepare(dp);
        }
 
-       /* manual kick off plug event to train link */
-       if (state == ST_DISPLAY_OFF)
-               dp_add_event(dp_display, EV_IRQ_HPD_INT, 0, 0);
-
        /* completed connection */
        dp_display->hpd_state = ST_CONNECTED;
 
index 4448536..14ab9a6 100644
@@ -964,7 +964,7 @@ static const struct drm_driver msm_driver = {
        .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
        .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
        .gem_prime_import_sg_table = msm_gem_prime_import_sg_table,
-       .gem_prime_mmap     = drm_gem_prime_mmap,
+       .gem_prime_mmap     = msm_gem_prime_mmap,
 #ifdef CONFIG_DEBUG_FS
        .debugfs_init       = msm_debugfs_init,
 #endif
index 08388d7..099a67d 100644
@@ -246,6 +246,7 @@ unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_t
 void msm_gem_shrinker_init(struct drm_device *dev);
 void msm_gem_shrinker_cleanup(struct drm_device *dev);
 
+int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
index 3df2554..38e3323 100644
@@ -46,12 +46,14 @@ bool msm_fence_completed(struct msm_fence_context *fctx, uint32_t fence)
                (int32_t)(*fctx->fenceptr - fence) >= 0;
 }
 
-/* called from workqueue */
+/* called from irq handler and workqueue (in recover path) */
 void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence)
 {
-       spin_lock(&fctx->spinlock);
+       unsigned long flags;
+
+       spin_lock_irqsave(&fctx->spinlock, flags);
        fctx->completed_fence = max(fence, fctx->completed_fence);
-       spin_unlock(&fctx->spinlock);
+       spin_unlock_irqrestore(&fctx->spinlock, flags);
 }
 
 struct msm_fence {
index 97d5b4d..7f92231 100644
@@ -439,14 +439,12 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
        return ret;
 }
 
-void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
+void msm_gem_unpin_locked(struct drm_gem_object *obj)
 {
        struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
        GEM_WARN_ON(!msm_gem_is_locked(obj));
 
-       msm_gem_unpin_vma(vma);
-
        msm_obj->pin_count--;
        GEM_WARN_ON(msm_obj->pin_count < 0);
 
@@ -586,7 +584,8 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj,
        msm_gem_lock(obj);
        vma = lookup_vma(obj, aspace);
        if (!GEM_WARN_ON(!vma)) {
-               msm_gem_unpin_vma_locked(obj, vma);
+               msm_gem_unpin_vma(vma);
+               msm_gem_unpin_locked(obj);
        }
        msm_gem_unlock(obj);
 }
index c75d3b8..6b7d5bb 100644
@@ -145,7 +145,7 @@ struct msm_gem_object {
 
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
 int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
-void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+void msm_gem_unpin_locked(struct drm_gem_object *obj);
 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
                                           struct msm_gem_address_space *aspace);
 int msm_gem_get_iova(struct drm_gem_object *obj,
@@ -377,10 +377,11 @@ struct msm_gem_submit {
        } *cmd;  /* array of size nr_cmds */
        struct {
 /* make sure these don't conflict w/ MSM_SUBMIT_BO_x */
-#define BO_VALID    0x8000   /* is current addr in cmdstream correct/valid? */
-#define BO_LOCKED   0x4000   /* obj lock is held */
-#define BO_ACTIVE   0x2000   /* active refcnt is held */
-#define BO_PINNED   0x1000   /* obj is pinned and on active list */
+#define BO_VALID       0x8000  /* is current addr in cmdstream correct/valid? */
+#define BO_LOCKED      0x4000  /* obj lock is held */
+#define BO_ACTIVE      0x2000  /* active refcnt is held */
+#define BO_OBJ_PINNED  0x1000  /* obj (pages) is pinned and on active list */
+#define BO_VMA_PINNED  0x0800  /* vma (virtual address) is pinned */
                uint32_t flags;
                union {
                        struct msm_gem_object *obj;
index 94ab705..dcc8a57 100644
 #include "msm_drv.h"
 #include "msm_gem.h"
 
+int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+{
+       int ret;
+
+       /* Ensure the mmap offset is initialized.  It is set up lazily, so if
+        * the object has not first been mmap'd directly as a GEM object, the
+        * mmap offset will not yet be initialized.
+        */
+       ret = drm_gem_create_mmap_offset(obj);
+       if (ret)
+               return ret;
+
+       return drm_gem_prime_mmap(obj, vma);
+}
+
 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
 {
        struct msm_gem_object *msm_obj = to_msm_bo(obj);
index 8097522..c9e4aeb 100644
@@ -232,8 +232,11 @@ static void submit_cleanup_bo(struct msm_gem_submit *submit, int i,
         */
        submit->bos[i].flags &= ~cleanup_flags;
 
-       if (flags & BO_PINNED)
-               msm_gem_unpin_vma_locked(obj, submit->bos[i].vma);
+       if (flags & BO_VMA_PINNED)
+               msm_gem_unpin_vma(submit->bos[i].vma);
+
+       if (flags & BO_OBJ_PINNED)
+               msm_gem_unpin_locked(obj);
 
        if (flags & BO_ACTIVE)
                msm_gem_active_put(obj);
@@ -244,7 +247,9 @@ static void submit_cleanup_bo(struct msm_gem_submit *submit, int i,
 
 static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, int i)
 {
-       submit_cleanup_bo(submit, i, BO_PINNED | BO_ACTIVE | BO_LOCKED);
+       unsigned cleanup_flags = BO_VMA_PINNED | BO_OBJ_PINNED |
+                                BO_ACTIVE | BO_LOCKED;
+       submit_cleanup_bo(submit, i, cleanup_flags);
 
        if (!(submit->bos[i].flags & BO_VALID))
                submit->bos[i].iova = 0;
@@ -375,7 +380,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
                if (ret)
                        break;
 
-               submit->bos[i].flags |= BO_PINNED;
+               submit->bos[i].flags |= BO_OBJ_PINNED | BO_VMA_PINNED;
                submit->bos[i].vma = vma;
 
                if (vma->iova == submit->bos[i].iova) {
@@ -511,7 +516,7 @@ static void submit_cleanup(struct msm_gem_submit *submit, bool error)
        unsigned i;
 
        if (error)
-               cleanup_flags |= BO_PINNED | BO_ACTIVE;
+               cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED | BO_ACTIVE;
 
        for (i = 0; i < submit->nr_bos; i++) {
                struct msm_gem_object *msm_obj = submit->bos[i].obj;
@@ -529,7 +534,8 @@ void msm_submit_retire(struct msm_gem_submit *submit)
                struct drm_gem_object *obj = &submit->bos[i].obj->base;
 
                msm_gem_lock(obj);
-               submit_cleanup_bo(submit, i, BO_PINNED | BO_ACTIVE);
+               /* Note, VMA already fence-unpinned before submit: */
+               submit_cleanup_bo(submit, i, BO_OBJ_PINNED | BO_ACTIVE);
                msm_gem_unlock(obj);
                drm_gem_object_put(obj);
        }
@@ -922,7 +928,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
                                                    INT_MAX, GFP_KERNEL);
        }
        if (submit->fence_id < 0) {
-               ret = submit->fence_id = 0;
+               ret = submit->fence_id;
                submit->fence_id = 0;
        }
 
index 3c1dc92..c471aeb 100644
@@ -62,8 +62,7 @@ void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
        unsigned size = vma->node.size;
 
        /* Print a message if we try to purge a vma in use */
-       if (GEM_WARN_ON(msm_gem_vma_inuse(vma)))
-               return;
+       GEM_WARN_ON(msm_gem_vma_inuse(vma));
 
        /* Don't do anything if the memory isn't mapped */
        if (!vma->mapped)
@@ -128,8 +127,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
 void msm_gem_close_vma(struct msm_gem_address_space *aspace,
                struct msm_gem_vma *vma)
 {
-       if (GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped))
-               return;
+       GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped);
 
        spin_lock(&aspace->lock);
        if (vma->iova)
index eb8a666..c8cd9bf 100644
@@ -164,24 +164,6 @@ int msm_gpu_hw_init(struct msm_gpu *gpu)
        return ret;
 }
 
-static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
-               uint32_t fence)
-{
-       struct msm_gem_submit *submit;
-       unsigned long flags;
-
-       spin_lock_irqsave(&ring->submit_lock, flags);
-       list_for_each_entry(submit, &ring->submits, node) {
-               if (fence_after(submit->seqno, fence))
-                       break;
-
-               msm_update_fence(submit->ring->fctx,
-                       submit->hw_fence->seqno);
-               dma_fence_signal(submit->hw_fence);
-       }
-       spin_unlock_irqrestore(&ring->submit_lock, flags);
-}
-
 #ifdef CONFIG_DEV_COREDUMP
 static ssize_t msm_gpu_devcoredump_read(char *buffer, loff_t offset,
                size_t count, void *data, size_t datalen)
@@ -436,9 +418,9 @@ static void recover_worker(struct kthread_work *work)
                 * one more to clear the faulting submit
                 */
                if (ring == cur_ring)
-                       fence++;
+                       ring->memptrs->fence = ++fence;
 
-               update_fences(gpu, ring, fence);
+               msm_update_fence(ring->fctx, fence);
        }
 
        if (msm_gpu_active(gpu)) {
@@ -672,7 +654,6 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
        msm_submit_retire(submit);
 
        pm_runtime_mark_last_busy(&gpu->pdev->dev);
-       pm_runtime_put_autosuspend(&gpu->pdev->dev);
 
        spin_lock_irqsave(&ring->submit_lock, flags);
        list_del(&submit->node);
@@ -686,6 +667,8 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
                msm_devfreq_idle(gpu);
        mutex_unlock(&gpu->active_lock);
 
+       pm_runtime_put_autosuspend(&gpu->pdev->dev);
+
        msm_gem_submit_put(submit);
 }
 
@@ -735,7 +718,7 @@ void msm_gpu_retire(struct msm_gpu *gpu)
        int i;
 
        for (i = 0; i < gpu->nr_rings; i++)
-               update_fences(gpu, gpu->rb[i], gpu->rb[i]->memptrs->fence);
+               msm_update_fence(gpu->rb[i]->fctx, gpu->rb[i]->memptrs->fence);
 
        kthread_queue_work(gpu->worker, &gpu->retire_work);
        update_sw_cntrs(gpu);
index bcaddbb..a54ed35 100644
@@ -58,7 +58,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
        u64 addr = iova;
        unsigned int i;
 
-       for_each_sg(sgt->sgl, sg, sgt->nents, i) {
+       for_each_sgtable_sg(sgt, sg, i) {
                size_t size = sg->length;
                phys_addr_t phys = sg_phys(sg);
 
index 4306632..56eecb4 100644
@@ -25,7 +25,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 
                msm_gem_lock(obj);
                msm_gem_unpin_vma_fenced(submit->bos[i].vma, fctx);
-               submit->bos[i].flags &= ~BO_PINNED;
+               submit->bos[i].flags &= ~BO_VMA_PINNED;
                msm_gem_unlock(obj);
        }
 
index 275f7e4..6eb1aab 100644
@@ -7,6 +7,7 @@
  */
 
 #include <linux/component.h>
+#include <linux/dma-mapping.h>
 #include <linux/kfifo.h>
 #include <linux/module.h>
 #include <linux/of_graph.h>
@@ -73,7 +74,6 @@ static int sun4i_drv_bind(struct device *dev)
                goto free_drm;
        }
 
-       dev_set_drvdata(dev, drm);
        drm->dev_private = drv;
        INIT_LIST_HEAD(&drv->frontend_list);
        INIT_LIST_HEAD(&drv->engine_list);
@@ -114,6 +114,8 @@ static int sun4i_drv_bind(struct device *dev)
 
        drm_fbdev_generic_setup(drm, 32);
 
+       dev_set_drvdata(dev, drm);
+
        return 0;
 
 finish_poll:
@@ -130,6 +132,7 @@ static void sun4i_drv_unbind(struct device *dev)
 {
        struct drm_device *drm = dev_get_drvdata(dev);
 
+       dev_set_drvdata(dev, NULL);
        drm_dev_unregister(drm);
        drm_kms_helper_poll_fini(drm);
        drm_atomic_helper_shutdown(drm);
@@ -367,6 +370,13 @@ static int sun4i_drv_probe(struct platform_device *pdev)
 
        INIT_KFIFO(list.fifo);
 
+       /*
+        * DE2 and DE3 cores actually support 40-bit addresses, but the
+        * driver does not.
+        */
+       dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+       dma_set_max_seg_size(&pdev->dev, UINT_MAX);
+
        for (i = 0;; i++) {
                struct device_node *pipeline = of_parse_phandle(np,
                                                                "allwinner,pipelines",
index 6d43080..85fb9e8 100644
@@ -117,7 +117,7 @@ static bool sun4i_layer_format_mod_supported(struct drm_plane *plane,
        struct sun4i_layer *layer = plane_to_sun4i_layer(plane);
 
        if (IS_ERR_OR_NULL(layer->backend->frontend))
-               sun4i_backend_format_is_supported(format, modifier);
+               return sun4i_backend_format_is_supported(format, modifier);
 
        return sun4i_backend_format_is_supported(format, modifier) ||
               sun4i_frontend_format_is_supported(format, modifier);
index a8d75fd..477cb69 100644
@@ -93,34 +93,10 @@ crtcs_exit:
        return crtcs;
 }
 
-static int sun8i_dw_hdmi_find_connector_pdev(struct device *dev,
-                                            struct platform_device **pdev_out)
-{
-       struct platform_device *pdev;
-       struct device_node *remote;
-
-       remote = of_graph_get_remote_node(dev->of_node, 1, -1);
-       if (!remote)
-               return -ENODEV;
-
-       if (!of_device_is_compatible(remote, "hdmi-connector")) {
-               of_node_put(remote);
-               return -ENODEV;
-       }
-
-       pdev = of_find_device_by_node(remote);
-       of_node_put(remote);
-       if (!pdev)
-               return -ENODEV;
-
-       *pdev_out = pdev;
-       return 0;
-}
-
 static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master,
                              void *data)
 {
-       struct platform_device *pdev = to_platform_device(dev), *connector_pdev;
+       struct platform_device *pdev = to_platform_device(dev);
        struct dw_hdmi_plat_data *plat_data;
        struct drm_device *drm = data;
        struct device_node *phy_node;
@@ -167,30 +143,16 @@ static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master,
                return dev_err_probe(dev, PTR_ERR(hdmi->regulator),
                                     "Couldn't get regulator\n");
 
-       ret = sun8i_dw_hdmi_find_connector_pdev(dev, &connector_pdev);
-       if (!ret) {
-               hdmi->ddc_en = gpiod_get_optional(&connector_pdev->dev,
-                                                 "ddc-en", GPIOD_OUT_HIGH);
-               platform_device_put(connector_pdev);
-
-               if (IS_ERR(hdmi->ddc_en)) {
-                       dev_err(dev, "Couldn't get ddc-en gpio\n");
-                       return PTR_ERR(hdmi->ddc_en);
-               }
-       }
-
        ret = regulator_enable(hdmi->regulator);
        if (ret) {
                dev_err(dev, "Failed to enable regulator\n");
-               goto err_unref_ddc_en;
+               return ret;
        }
 
-       gpiod_set_value(hdmi->ddc_en, 1);
-
        ret = reset_control_deassert(hdmi->rst_ctrl);
        if (ret) {
                dev_err(dev, "Could not deassert ctrl reset control\n");
-               goto err_disable_ddc_en;
+               goto err_disable_regulator;
        }
 
        ret = clk_prepare_enable(hdmi->clk_tmds);
@@ -245,12 +207,8 @@ err_disable_clk_tmds:
        clk_disable_unprepare(hdmi->clk_tmds);
 err_assert_ctrl_reset:
        reset_control_assert(hdmi->rst_ctrl);
-err_disable_ddc_en:
-       gpiod_set_value(hdmi->ddc_en, 0);
+err_disable_regulator:
        regulator_disable(hdmi->regulator);
-err_unref_ddc_en:
-       if (hdmi->ddc_en)
-               gpiod_put(hdmi->ddc_en);
 
        return ret;
 }
@@ -264,11 +222,7 @@ static void sun8i_dw_hdmi_unbind(struct device *dev, struct device *master,
        sun8i_hdmi_phy_deinit(hdmi->phy);
        clk_disable_unprepare(hdmi->clk_tmds);
        reset_control_assert(hdmi->rst_ctrl);
-       gpiod_set_value(hdmi->ddc_en, 0);
        regulator_disable(hdmi->regulator);
-
-       if (hdmi->ddc_en)
-               gpiod_put(hdmi->ddc_en);
 }
 
 static const struct component_ops sun8i_dw_hdmi_ops = {
index bffe1b9..9ad0952 100644
@@ -9,7 +9,6 @@
 #include <drm/bridge/dw_hdmi.h>
 #include <drm/drm_encoder.h>
 #include <linux/clk.h>
-#include <linux/gpio/consumer.h>
 #include <linux/regmap.h>
 #include <linux/regulator/consumer.h>
 #include <linux/reset.h>
@@ -193,7 +192,6 @@ struct sun8i_dw_hdmi {
        struct regulator                *regulator;
        const struct sun8i_dw_hdmi_quirks *quirks;
        struct reset_control            *rst_ctrl;
-       struct gpio_desc                *ddc_en;
 };
 
 extern struct platform_driver sun8i_hdmi_phy_driver;
index 75d308e..406e9c3 100644
@@ -109,11 +109,11 @@ void ttm_bo_set_bulk_move(struct ttm_buffer_object *bo,
                return;
 
        spin_lock(&bo->bdev->lru_lock);
-       if (bo->bulk_move && bo->resource)
-               ttm_lru_bulk_move_del(bo->bulk_move, bo->resource);
+       if (bo->resource)
+               ttm_resource_del_bulk_move(bo->resource, bo);
        bo->bulk_move = bulk;
-       if (bo->bulk_move && bo->resource)
-               ttm_lru_bulk_move_add(bo->bulk_move, bo->resource);
+       if (bo->resource)
+               ttm_resource_add_bulk_move(bo->resource, bo);
        spin_unlock(&bo->bdev->lru_lock);
 }
 EXPORT_SYMBOL(ttm_bo_set_bulk_move);
@@ -689,8 +689,11 @@ void ttm_bo_pin(struct ttm_buffer_object *bo)
 {
        dma_resv_assert_held(bo->base.resv);
        WARN_ON_ONCE(!kref_read(&bo->kref));
-       if (!(bo->pin_count++) && bo->bulk_move && bo->resource)
-               ttm_lru_bulk_move_del(bo->bulk_move, bo->resource);
+       spin_lock(&bo->bdev->lru_lock);
+       if (bo->resource)
+               ttm_resource_del_bulk_move(bo->resource, bo);
+       ++bo->pin_count;
+       spin_unlock(&bo->bdev->lru_lock);
 }
 EXPORT_SYMBOL(ttm_bo_pin);
 
@@ -707,8 +710,11 @@ void ttm_bo_unpin(struct ttm_buffer_object *bo)
        if (WARN_ON_ONCE(!bo->pin_count))
                return;
 
-       if (!(--bo->pin_count) && bo->bulk_move && bo->resource)
-               ttm_lru_bulk_move_add(bo->bulk_move, bo->resource);
+       spin_lock(&bo->bdev->lru_lock);
+       --bo->pin_count;
+       if (bo->resource)
+               ttm_resource_add_bulk_move(bo->resource, bo);
+       spin_unlock(&bo->bdev->lru_lock);
 }
 EXPORT_SYMBOL(ttm_bo_unpin);
 
index a0562ab..e7147e3 100644
@@ -156,8 +156,12 @@ int ttm_device_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx,
 
                ttm_resource_manager_for_each_res(man, &cursor, res) {
                        struct ttm_buffer_object *bo = res->bo;
-                       uint32_t num_pages = PFN_UP(bo->base.size);
+                       uint32_t num_pages;
 
+                       if (!bo)
+                               continue;
+
+                       num_pages = PFN_UP(bo->base.size);
                        ret = ttm_bo_swapout(bo, ctx, gfp_flags);
                        /* ttm_bo_swapout has dropped the lru_lock */
                        if (!ret)
index 65889b3..20f9adc 100644
@@ -91,8 +91,8 @@ static void ttm_lru_bulk_move_pos_tail(struct ttm_lru_bulk_move_pos *pos,
 }
 
 /* Add the resource to a bulk_move cursor */
-void ttm_lru_bulk_move_add(struct ttm_lru_bulk_move *bulk,
-                          struct ttm_resource *res)
+static void ttm_lru_bulk_move_add(struct ttm_lru_bulk_move *bulk,
+                                 struct ttm_resource *res)
 {
        struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res);
 
@@ -105,8 +105,8 @@ void ttm_lru_bulk_move_add(struct ttm_lru_bulk_move *bulk,
 }
 
 /* Remove the resource from a bulk_move range */
-void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk,
-                          struct ttm_resource *res)
+static void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk,
+                                 struct ttm_resource *res)
 {
        struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res);
 
@@ -122,6 +122,22 @@ void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk,
        }
 }
 
+/* Add the resource to a bulk move if the BO is configured for it */
+void ttm_resource_add_bulk_move(struct ttm_resource *res,
+                               struct ttm_buffer_object *bo)
+{
+       if (bo->bulk_move && !bo->pin_count)
+               ttm_lru_bulk_move_add(bo->bulk_move, res);
+}
+
+/* Remove the resource from a bulk move if the BO is configured for it */
+void ttm_resource_del_bulk_move(struct ttm_resource *res,
+                               struct ttm_buffer_object *bo)
+{
+       if (bo->bulk_move && !bo->pin_count)
+               ttm_lru_bulk_move_del(bo->bulk_move, res);
+}
+
 /* Move a resource to the LRU or bulk tail */
 void ttm_resource_move_to_lru_tail(struct ttm_resource *res)
 {
@@ -169,15 +185,14 @@ void ttm_resource_init(struct ttm_buffer_object *bo,
        res->bus.is_iomem = false;
        res->bus.caching = ttm_cached;
        res->bo = bo;
-       INIT_LIST_HEAD(&res->lru);
 
        man = ttm_manager_type(bo->bdev, place->mem_type);
        spin_lock(&bo->bdev->lru_lock);
-       man->usage += res->num_pages << PAGE_SHIFT;
-       if (bo->bulk_move)
-               ttm_lru_bulk_move_add(bo->bulk_move, res);
+       if (bo->pin_count)
+               list_add_tail(&res->lru, &bo->bdev->pinned);
        else
-               ttm_resource_move_to_lru_tail(res);
+               list_add_tail(&res->lru, &man->lru[bo->priority]);
+       man->usage += res->num_pages << PAGE_SHIFT;
        spin_unlock(&bo->bdev->lru_lock);
 }
 EXPORT_SYMBOL(ttm_resource_init);
@@ -210,8 +225,16 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
 {
        struct ttm_resource_manager *man =
                ttm_manager_type(bo->bdev, place->mem_type);
+       int ret;
+
+       ret = man->func->alloc(man, bo, place, res_ptr);
+       if (ret)
+               return ret;
 
-       return man->func->alloc(man, bo, place, res_ptr);
+       spin_lock(&bo->bdev->lru_lock);
+       ttm_resource_add_bulk_move(*res_ptr, bo);
+       spin_unlock(&bo->bdev->lru_lock);
+       return 0;
 }
 
 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
@@ -221,12 +244,9 @@ void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
        if (!*res)
                return;
 
-       if (bo->bulk_move) {
-               spin_lock(&bo->bdev->lru_lock);
-               ttm_lru_bulk_move_del(bo->bulk_move, *res);
-               spin_unlock(&bo->bdev->lru_lock);
-       }
-
+       spin_lock(&bo->bdev->lru_lock);
+       ttm_resource_del_bulk_move(*res, bo);
+       spin_unlock(&bo->bdev->lru_lock);
        man = ttm_manager_type(bo->bdev, (*res)->mem_type);
        man->func->free(man, *res);
        *res = NULL;
index 49c0f2a..b8d8563 100644
@@ -248,6 +248,9 @@ void vc4_bo_add_to_purgeable_pool(struct vc4_bo *bo)
 {
        struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        mutex_lock(&vc4->purgeable.lock);
        list_add_tail(&bo->size_head, &vc4->purgeable.list);
        vc4->purgeable.num++;
@@ -259,6 +262,9 @@ static void vc4_bo_remove_from_purgeable_pool_locked(struct vc4_bo *bo)
 {
        struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        /* list_del_init() is used here because the caller might release
         * the purgeable lock in order to acquire the madv one and update the
         * madv status.
@@ -387,6 +393,9 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
        struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_bo *bo;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return ERR_PTR(-ENODEV);
+
        bo = kzalloc(sizeof(*bo), GFP_KERNEL);
        if (!bo)
                return ERR_PTR(-ENOMEM);
@@ -413,6 +422,9 @@ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
        struct drm_gem_cma_object *cma_obj;
        struct vc4_bo *bo;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return ERR_PTR(-ENODEV);
+
        if (size == 0)
                return ERR_PTR(-EINVAL);
 
@@ -471,19 +483,20 @@ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
        return bo;
 }
 
-int vc4_dumb_create(struct drm_file *file_priv,
-                   struct drm_device *dev,
-                   struct drm_mode_create_dumb *args)
+int vc4_bo_dumb_create(struct drm_file *file_priv,
+                      struct drm_device *dev,
+                      struct drm_mode_create_dumb *args)
 {
-       int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_bo *bo = NULL;
        int ret;
 
-       if (args->pitch < min_pitch)
-               args->pitch = min_pitch;
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
 
-       if (args->size < args->pitch * args->height)
-               args->size = args->pitch * args->height;
+       ret = vc4_dumb_fixup_args(args);
+       if (ret)
+               return ret;
 
        bo = vc4_bo_create(dev, args->size, false, VC4_BO_TYPE_DUMB);
        if (IS_ERR(bo))
@@ -601,8 +614,12 @@ static void vc4_bo_cache_time_work(struct work_struct *work)
 
 int vc4_bo_inc_usecnt(struct vc4_bo *bo)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
        int ret;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        /* Fast path: if the BO is already retained by someone, no need to
         * check the madv status.
         */
@@ -637,6 +654,11 @@ int vc4_bo_inc_usecnt(struct vc4_bo *bo)
 
 void vc4_bo_dec_usecnt(struct vc4_bo *bo)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        /* Fast path: if the BO is still retained by someone, no need to test
         * the madv value.
         */
@@ -756,6 +778,9 @@ int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
        struct vc4_bo *bo = NULL;
        int ret;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        ret = vc4_grab_bin_bo(vc4, vc4file);
        if (ret)
                return ret;
@@ -779,9 +804,13 @@ int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
 int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
                      struct drm_file *file_priv)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct drm_vc4_mmap_bo *args = data;
        struct drm_gem_object *gem_obj;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        gem_obj = drm_gem_object_lookup(file_priv, args->handle);
        if (!gem_obj) {
                DRM_DEBUG("Failed to look up GEM BO %d\n", args->handle);
@@ -805,6 +834,9 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
        struct vc4_bo *bo = NULL;
        int ret;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (args->size == 0)
                return -EINVAL;
 
@@ -875,11 +907,15 @@ fail:
 int vc4_set_tiling_ioctl(struct drm_device *dev, void *data,
                         struct drm_file *file_priv)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct drm_vc4_set_tiling *args = data;
        struct drm_gem_object *gem_obj;
        struct vc4_bo *bo;
        bool t_format;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (args->flags != 0)
                return -EINVAL;
 
@@ -918,10 +954,14 @@ int vc4_set_tiling_ioctl(struct drm_device *dev, void *data,
 int vc4_get_tiling_ioctl(struct drm_device *dev, void *data,
                         struct drm_file *file_priv)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct drm_vc4_get_tiling *args = data;
        struct drm_gem_object *gem_obj;
        struct vc4_bo *bo;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (args->flags != 0 || args->modifier != 0)
                return -EINVAL;
 
@@ -948,6 +988,9 @@ int vc4_bo_cache_init(struct drm_device *dev)
        struct vc4_dev *vc4 = to_vc4_dev(dev);
        int i;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        /* Create the initial set of BO labels that the kernel will
         * use.  This lets us avoid a bunch of string reallocation in
         * the kernel's draw and BO allocation paths.
@@ -1007,6 +1050,9 @@ int vc4_label_bo_ioctl(struct drm_device *dev, void *data,
        struct drm_gem_object *gem_obj;
        int ret = 0, label;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (!args->len)
                return -EINVAL;
 
index 59b20c8..9355213 100644
@@ -256,7 +256,7 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
                 * Removing 1 from the FIFO full level however
                 * seems to completely remove that issue.
                 */
-               if (!vc4->hvs->hvs5)
+               if (!vc4->is_vc5)
                        return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;
 
                return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX;
@@ -389,7 +389,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
        if (is_dsi)
                CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep);
 
-       if (vc4->hvs->hvs5)
+       if (vc4->is_vc5)
                CRTC_WRITE(PV_MUX_CFG,
                           VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP,
                                         PV_MUX_CFG_RGB_PIXEL_MUX_MODE));
@@ -775,17 +775,18 @@ struct vc4_async_flip_state {
        struct drm_framebuffer *old_fb;
        struct drm_pending_vblank_event *event;
 
-       struct vc4_seqno_cb cb;
+       union {
+               struct dma_fence_cb fence;
+               struct vc4_seqno_cb seqno;
+       } cb;
 };
 
 /* Called when the V3D execution for the BO being flipped to is done, so that
  * we can actually update the plane's address to point to it.
  */
 static void
-vc4_async_page_flip_complete(struct vc4_seqno_cb *cb)
+vc4_async_page_flip_complete(struct vc4_async_flip_state *flip_state)
 {
-       struct vc4_async_flip_state *flip_state =
-               container_of(cb, struct vc4_async_flip_state, cb);
        struct drm_crtc *crtc = flip_state->crtc;
        struct drm_device *dev = crtc->dev;
        struct drm_plane *plane = crtc->primary;
@@ -802,59 +803,96 @@ vc4_async_page_flip_complete(struct vc4_seqno_cb *cb)
        drm_crtc_vblank_put(crtc);
        drm_framebuffer_put(flip_state->fb);
 
-       /* Decrement the BO usecnt in order to keep the inc/dec calls balanced
-        * when the planes are updated through the async update path.
-        * FIXME: we should move to generic async-page-flip when it's
-        * available, so that we can get rid of this hand-made cleanup_fb()
-        * logic.
-        */
-       if (flip_state->old_fb) {
-               struct drm_gem_cma_object *cma_bo;
-               struct vc4_bo *bo;
+       if (flip_state->old_fb)
+               drm_framebuffer_put(flip_state->old_fb);
+
+       kfree(flip_state);
+}
+
+static void vc4_async_page_flip_seqno_complete(struct vc4_seqno_cb *cb)
+{
+       struct vc4_async_flip_state *flip_state =
+               container_of(cb, struct vc4_async_flip_state, cb.seqno);
+       struct vc4_bo *bo = NULL;
 
-               cma_bo = drm_fb_cma_get_gem_obj(flip_state->old_fb, 0);
+       if (flip_state->old_fb) {
+               struct drm_gem_cma_object *cma_bo =
+                       drm_fb_cma_get_gem_obj(flip_state->old_fb, 0);
                bo = to_vc4_bo(&cma_bo->base);
-               vc4_bo_dec_usecnt(bo);
-               drm_framebuffer_put(flip_state->old_fb);
        }
 
-       kfree(flip_state);
+       vc4_async_page_flip_complete(flip_state);
+
+       /*
+        * Decrement the BO usecnt in order to keep the inc/dec
+        * calls balanced when the planes are updated through
+        * the async update path.
+        *
+        * FIXME: we should move to generic async-page-flip when
+        * it's available, so that we can get rid of this
+        * hand-made cleanup_fb() logic.
+        */
+       if (bo)
+               vc4_bo_dec_usecnt(bo);
 }
 
-/* Implements async (non-vblank-synced) page flips.
- *
- * The page flip ioctl needs to return immediately, so we grab the
- * modeset semaphore on the pipe, and queue the address update for
- * when V3D is done with the BO being flipped to.
- */
-static int vc4_async_page_flip(struct drm_crtc *crtc,
-                              struct drm_framebuffer *fb,
-                              struct drm_pending_vblank_event *event,
-                              uint32_t flags)
+static void vc4_async_page_flip_fence_complete(struct dma_fence *fence,
+                                              struct dma_fence_cb *cb)
 {
-       struct drm_device *dev = crtc->dev;
-       struct drm_plane *plane = crtc->primary;
-       int ret = 0;
-       struct vc4_async_flip_state *flip_state;
+       struct vc4_async_flip_state *flip_state =
+               container_of(cb, struct vc4_async_flip_state, cb.fence);
+
+       vc4_async_page_flip_complete(flip_state);
+       dma_fence_put(fence);
+}
+
+static int vc4_async_set_fence_cb(struct drm_device *dev,
+                                 struct vc4_async_flip_state *flip_state)
+{
+       struct drm_framebuffer *fb = flip_state->fb;
        struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
-       struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
+       struct dma_fence *fence;
+       int ret;
 
-       /* Increment the BO usecnt here, so that we never end up with an
-        * unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the
-        * plane is later updated through the non-async path.
-        * FIXME: we should move to generic async-page-flip when it's
-        * available, so that we can get rid of this hand-made prepare_fb()
-        * logic.
-        */
-       ret = vc4_bo_inc_usecnt(bo);
+       if (!vc4->is_vc5) {
+               struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
+
+               return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno,
+                                         vc4_async_page_flip_seqno_complete);
+       }
+
+       ret = dma_resv_get_singleton(cma_bo->base.resv, DMA_RESV_USAGE_READ, &fence);
        if (ret)
                return ret;
 
+       /* If there's no fence, complete the page flip immediately */
+       if (!fence) {
+               vc4_async_page_flip_fence_complete(fence, &flip_state->cb.fence);
+               return 0;
+       }
+
+       /* If the fence has already been completed, complete the page flip */
+       if (dma_fence_add_callback(fence, &flip_state->cb.fence,
+                                  vc4_async_page_flip_fence_complete))
+               vc4_async_page_flip_fence_complete(fence, &flip_state->cb.fence);
+
+       return 0;
+}
+
+static int
+vc4_async_page_flip_common(struct drm_crtc *crtc,
+                          struct drm_framebuffer *fb,
+                          struct drm_pending_vblank_event *event,
+                          uint32_t flags)
+{
+       struct drm_device *dev = crtc->dev;
+       struct drm_plane *plane = crtc->primary;
+       struct vc4_async_flip_state *flip_state;
+
        flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL);
-       if (!flip_state) {
-               vc4_bo_dec_usecnt(bo);
+       if (!flip_state)
                return -ENOMEM;
-       }
 
        drm_framebuffer_get(fb);
        flip_state->fb = fb;
@@ -881,23 +919,79 @@ static int vc4_async_page_flip(struct drm_crtc *crtc,
         */
        drm_atomic_set_fb_for_plane(plane->state, fb);
 
-       vc4_queue_seqno_cb(dev, &flip_state->cb, bo->seqno,
-                          vc4_async_page_flip_complete);
+       vc4_async_set_fence_cb(dev, flip_state);
 
        /* Driver takes ownership of state on successful async commit. */
        return 0;
 }
 
+/* Implements async (non-vblank-synced) page flips.
+ *
+ * The page flip ioctl needs to return immediately, so we grab the
+ * modeset semaphore on the pipe, and queue the address update for
+ * when V3D is done with the BO being flipped to.
+ */
+static int vc4_async_page_flip(struct drm_crtc *crtc,
+                              struct drm_framebuffer *fb,
+                              struct drm_pending_vblank_event *event,
+                              uint32_t flags)
+{
+       struct drm_device *dev = crtc->dev;
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
+       struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
+       struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
+       int ret;
+
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
+       /*
+        * Increment the BO usecnt here, so that we never end up with an
+        * unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the
+        * plane is later updated through the non-async path.
+        *
+        * FIXME: we should move to generic async-page-flip when
+        * it's available, so that we can get rid of this
+        * hand-made prepare_fb() logic.
+        */
+       ret = vc4_bo_inc_usecnt(bo);
+       if (ret)
+               return ret;
+
+       ret = vc4_async_page_flip_common(crtc, fb, event, flags);
+       if (ret) {
+               vc4_bo_dec_usecnt(bo);
+               return ret;
+       }
+
+       return 0;
+}
+
+static int vc5_async_page_flip(struct drm_crtc *crtc,
+                              struct drm_framebuffer *fb,
+                              struct drm_pending_vblank_event *event,
+                              uint32_t flags)
+{
+       return vc4_async_page_flip_common(crtc, fb, event, flags);
+}
+
 int vc4_page_flip(struct drm_crtc *crtc,
                  struct drm_framebuffer *fb,
                  struct drm_pending_vblank_event *event,
                  uint32_t flags,
                  struct drm_modeset_acquire_ctx *ctx)
 {
-       if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
-               return vc4_async_page_flip(crtc, fb, event, flags);
-       else
+       if (flags & DRM_MODE_PAGE_FLIP_ASYNC) {
+               struct drm_device *dev = crtc->dev;
+               struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+               if (vc4->is_vc5)
+                       return vc5_async_page_flip(crtc, fb, event, flags);
+               else
+                       return vc4_async_page_flip(crtc, fb, event, flags);
+       } else {
                return drm_atomic_helper_page_flip(crtc, fb, event, flags, ctx);
+       }
 }
 
 struct drm_crtc_state *vc4_crtc_duplicate_state(struct drm_crtc *crtc)
@@ -1149,7 +1243,7 @@ int vc4_crtc_init(struct drm_device *drm, struct vc4_crtc *vc4_crtc,
                                  crtc_funcs, NULL);
        drm_crtc_helper_add(crtc, crtc_helper_funcs);
 
-       if (!vc4->hvs->hvs5) {
+       if (!vc4->is_vc5) {
                drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r));
 
                drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size);
index 162bc18..0f0f026 100644
@@ -63,6 +63,32 @@ void __iomem *vc4_ioremap_regs(struct platform_device *pdev, int index)
        return map;
 }
 
+int vc4_dumb_fixup_args(struct drm_mode_create_dumb *args)
+{
+       int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+
+       if (args->pitch < min_pitch)
+               args->pitch = min_pitch;
+
+       if (args->size < args->pitch * args->height)
+               args->size = args->pitch * args->height;
+
+       return 0;
+}
+
+static int vc5_dumb_create(struct drm_file *file_priv,
+                          struct drm_device *dev,
+                          struct drm_mode_create_dumb *args)
+{
+       int ret;
+
+       ret = vc4_dumb_fixup_args(args);
+       if (ret)
+               return ret;
+
+       return drm_gem_cma_dumb_create_internal(file_priv, dev, args);
+}
+
 static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
                               struct drm_file *file_priv)
 {
@@ -73,6 +99,9 @@ static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
        if (args->pad != 0)
                return -EINVAL;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (!vc4->v3d)
                return -ENODEV;
 
@@ -116,11 +145,16 @@ static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
 
 static int vc4_open(struct drm_device *dev, struct drm_file *file)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_file *vc4file;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL);
        if (!vc4file)
                return -ENOMEM;
+       vc4file->dev = vc4;
 
        vc4_perfmon_open_file(vc4file);
        file->driver_priv = vc4file;
@@ -132,6 +166,9 @@ static void vc4_close(struct drm_device *dev, struct drm_file *file)
        struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_file *vc4file = file->driver_priv;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        if (vc4file->bin_bo_used)
                vc4_v3d_bin_bo_put(vc4);
 
@@ -160,7 +197,7 @@ static const struct drm_ioctl_desc vc4_drm_ioctls[] = {
        DRM_IOCTL_DEF_DRV(VC4_PERFMON_GET_VALUES, vc4_perfmon_get_values_ioctl, DRM_RENDER_ALLOW),
 };
 
-static struct drm_driver vc4_drm_driver = {
+static const struct drm_driver vc4_drm_driver = {
        .driver_features = (DRIVER_MODESET |
                            DRIVER_ATOMIC |
                            DRIVER_GEM |
@@ -175,7 +212,7 @@ static struct drm_driver vc4_drm_driver = {
 
        .gem_create_object = vc4_create_object,
 
-       DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_dumb_create),
+       DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_bo_dumb_create),
 
        .ioctls = vc4_drm_ioctls,
        .num_ioctls = ARRAY_SIZE(vc4_drm_ioctls),
@@ -189,6 +226,27 @@ static struct drm_driver vc4_drm_driver = {
        .patchlevel = DRIVER_PATCHLEVEL,
 };
 
+static const struct drm_driver vc5_drm_driver = {
+       .driver_features = (DRIVER_MODESET |
+                           DRIVER_ATOMIC |
+                           DRIVER_GEM),
+
+#if defined(CONFIG_DEBUG_FS)
+       .debugfs_init = vc4_debugfs_init,
+#endif
+
+       DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc5_dumb_create),
+
+       .fops = &vc4_drm_fops,
+
+       .name = DRIVER_NAME,
+       .desc = DRIVER_DESC,
+       .date = DRIVER_DATE,
+       .major = DRIVER_MAJOR,
+       .minor = DRIVER_MINOR,
+       .patchlevel = DRIVER_PATCHLEVEL,
+};
+
 static void vc4_match_add_drivers(struct device *dev,
                                  struct component_match **match,
                                  struct platform_driver *const *drivers,
@@ -212,42 +270,49 @@ static void vc4_match_add_drivers(struct device *dev,
 static int vc4_drm_bind(struct device *dev)
 {
        struct platform_device *pdev = to_platform_device(dev);
+       const struct drm_driver *driver;
        struct rpi_firmware *firmware = NULL;
        struct drm_device *drm;
        struct vc4_dev *vc4;
        struct device_node *node;
        struct drm_crtc *crtc;
+       bool is_vc5;
        int ret = 0;
 
        dev->coherent_dma_mask = DMA_BIT_MASK(32);
 
-       /* If VC4 V3D is missing, don't advertise render nodes. */
-       node = of_find_matching_node_and_match(NULL, vc4_v3d_dt_match, NULL);
-       if (!node || !of_device_is_available(node))
-               vc4_drm_driver.driver_features &= ~DRIVER_RENDER;
-       of_node_put(node);
+       is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5");
+       if (is_vc5)
+               driver = &vc5_drm_driver;
+       else
+               driver = &vc4_drm_driver;
 
-       vc4 = devm_drm_dev_alloc(dev, &vc4_drm_driver, struct vc4_dev, base);
+       vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base);
        if (IS_ERR(vc4))
                return PTR_ERR(vc4);
+       vc4->is_vc5 = is_vc5;
 
        drm = &vc4->base;
        platform_set_drvdata(pdev, drm);
        INIT_LIST_HEAD(&vc4->debugfs_list);
 
-       mutex_init(&vc4->bin_bo_lock);
+       if (!is_vc5) {
+               mutex_init(&vc4->bin_bo_lock);
 
-       ret = vc4_bo_cache_init(drm);
-       if (ret)
-               return ret;
+               ret = vc4_bo_cache_init(drm);
+               if (ret)
+                       return ret;
+       }
 
        ret = drmm_mode_config_init(drm);
        if (ret)
                return ret;
 
-       ret = vc4_gem_init(drm);
-       if (ret)
-               return ret;
+       if (!is_vc5) {
+               ret = vc4_gem_init(drm);
+               if (ret)
+                       return ret;
+       }
 
        node = of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware");
        if (node) {
@@ -258,7 +323,7 @@ static int vc4_drm_bind(struct device *dev)
                        return -EPROBE_DEFER;
        }
 
-       ret = drm_aperture_remove_framebuffers(false, &vc4_drm_driver);
+       ret = drm_aperture_remove_framebuffers(false, driver);
        if (ret)
                return ret;
 
index 15e0c2a..93fd55b 100644
@@ -48,6 +48,8 @@ enum vc4_kernel_bo_type {
  * done. This way, only events related to a specific job will be counted.
  */
 struct vc4_perfmon {
+       struct vc4_dev *dev;
+
        /* Tracks the number of users of the perfmon, when this counter reaches
         * zero the perfmon is destroyed.
         */
@@ -74,6 +76,8 @@ struct vc4_perfmon {
 struct vc4_dev {
        struct drm_device base;
 
+       bool is_vc5;
+
        unsigned int irq;
 
        struct vc4_hvs *hvs;
@@ -316,6 +320,7 @@ struct vc4_v3d {
 };
 
 struct vc4_hvs {
+       struct vc4_dev *vc4;
        struct platform_device *pdev;
        void __iomem *regs;
        u32 __iomem *dlist;
@@ -333,9 +338,6 @@ struct vc4_hvs {
        struct drm_mm_node mitchell_netravali_filter;
 
        struct debugfs_regset32 regset;
-
-       /* HVS version 5 flag, therefore requires updated dlist structures */
-       bool hvs5;
 };
 
 struct vc4_plane {
@@ -580,6 +582,8 @@ to_vc4_crtc_state(struct drm_crtc_state *crtc_state)
 #define VC4_REG32(reg) { .name = #reg, .offset = reg }
 
 struct vc4_exec_info {
+       struct vc4_dev *dev;
+
        /* Sequence number for this bin/render job. */
        uint64_t seqno;
 
@@ -701,6 +705,8 @@ struct vc4_exec_info {
  * released when the DRM file is closed should be placed here.
  */
 struct vc4_file {
+       struct vc4_dev *dev;
+
        struct {
                struct idr idr;
                struct mutex lock;
@@ -814,9 +820,9 @@ struct vc4_validated_shader_info {
 struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size);
 struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t size,
                             bool from_cache, enum vc4_kernel_bo_type type);
-int vc4_dumb_create(struct drm_file *file_priv,
-                   struct drm_device *dev,
-                   struct drm_mode_create_dumb *args);
+int vc4_bo_dumb_create(struct drm_file *file_priv,
+                      struct drm_device *dev,
+                      struct drm_mode_create_dumb *args);
 int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
                        struct drm_file *file_priv);
 int vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
@@ -885,6 +891,7 @@ static inline void vc4_debugfs_add_regset32(struct drm_device *drm,
 
 /* vc4_drv.c */
 void __iomem *vc4_ioremap_regs(struct platform_device *dev, int index);
+int vc4_dumb_fixup_args(struct drm_mode_create_dumb *args);
 
 /* vc4_dpi.c */
 extern struct platform_driver vc4_dpi_driver;
index 9eaf304..fe10d9c 100644
@@ -76,6 +76,9 @@ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
        u32 i;
        int ret = 0;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (!vc4->v3d) {
                DRM_DEBUG("VC4_GET_HANG_STATE with no VC4 V3D probed\n");
                return -ENODEV;
@@ -386,6 +389,9 @@ vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno, uint64_t timeout_ns,
        unsigned long timeout_expire;
        DEFINE_WAIT(wait);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (vc4->finished_seqno >= seqno)
                return 0;
 
@@ -468,6 +474,9 @@ vc4_submit_next_bin_job(struct drm_device *dev)
        struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_exec_info *exec;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
 again:
        exec = vc4_first_bin_job(vc4);
        if (!exec)
@@ -513,6 +522,9 @@ vc4_submit_next_render_job(struct drm_device *dev)
        if (!exec)
                return;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        /* A previous RCL may have written to one of our textures, and
         * our full cache flush at bin time may have occurred before
         * that RCL completed.  Flush the texture cache now, but not
@@ -531,6 +543,9 @@ vc4_move_job_to_render(struct drm_device *dev, struct vc4_exec_info *exec)
        struct vc4_dev *vc4 = to_vc4_dev(dev);
        bool was_empty = list_empty(&vc4->render_job_list);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        list_move_tail(&exec->head, &vc4->render_job_list);
        if (was_empty)
                vc4_submit_next_render_job(dev);
@@ -997,6 +1012,9 @@ vc4_job_handle_completed(struct vc4_dev *vc4)
        unsigned long irqflags;
        struct vc4_seqno_cb *cb, *cb_temp;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        spin_lock_irqsave(&vc4->job_lock, irqflags);
        while (!list_empty(&vc4->job_done_list)) {
                struct vc4_exec_info *exec =
@@ -1033,6 +1051,9 @@ int vc4_queue_seqno_cb(struct drm_device *dev,
        struct vc4_dev *vc4 = to_vc4_dev(dev);
        unsigned long irqflags;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        cb->func = func;
        INIT_WORK(&cb->work, vc4_seqno_cb_work);
 
@@ -1083,8 +1104,12 @@ int
 vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
                     struct drm_file *file_priv)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct drm_vc4_wait_seqno *args = data;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno,
                                               &args->timeout_ns);
 }
@@ -1093,11 +1118,15 @@ int
 vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
                  struct drm_file *file_priv)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        int ret;
        struct drm_vc4_wait_bo *args = data;
        struct drm_gem_object *gem_obj;
        struct vc4_bo *bo;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (args->pad != 0)
                return -EINVAL;
 
@@ -1144,6 +1173,9 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
                                  args->shader_rec_size,
                                  args->bo_handle_count);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (!vc4->v3d) {
                DRM_DEBUG("VC4_SUBMIT_CL with no VC4 V3D probed\n");
                return -ENODEV;
@@ -1167,6 +1199,7 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
                DRM_ERROR("malloc failure on exec struct\n");
                return -ENOMEM;
        }
+       exec->dev = vc4;
 
        ret = vc4_v3d_pm_get(vc4);
        if (ret) {
@@ -1276,6 +1309,9 @@ int vc4_gem_init(struct drm_device *dev)
 {
        struct vc4_dev *vc4 = to_vc4_dev(dev);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        vc4->dma_fence_context = dma_fence_context_alloc(1);
 
        INIT_LIST_HEAD(&vc4->bin_job_list);
@@ -1321,11 +1357,15 @@ static void vc4_gem_destroy(struct drm_device *dev, void *unused)
 int vc4_gem_madvise_ioctl(struct drm_device *dev, void *data,
                          struct drm_file *file_priv)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct drm_vc4_gem_madvise *args = data;
        struct drm_gem_object *gem_obj;
        struct vc4_bo *bo;
        int ret;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        switch (args->madv) {
        case VC4_MADV_DONTNEED:
        case VC4_MADV_WILLNEED:
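
Every ioctl and helper in this file now fails fast on BCM2711 ("VC5") hardware, where these VC4-era GEM/V3D paths must never run. The guard has one uniform shape: WARN_ON_ONCE() flags the broken caller exactly once, then the function bails with -ENODEV (value-returning paths) or silently (void helpers). A minimal sketch of the idiom, with hypothetical names:

#include <linux/bug.h>
#include <linux/errno.h>

struct my_dev {
        bool is_vc5;            /* set once at bind time */
};

static int my_legacy_ioctl(struct my_dev *mydev)
{
        /* This path drives VC4-only hardware: warn once and refuse
         * service if the probed device is actually a VC5. */
        if (WARN_ON_ONCE(mydev->is_vc5))
                return -ENODEV;

        /* ...VC4-only register access would follow... */
        return 0;
}
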
index 823d812..ce9d166 100644
@@ -1481,7 +1481,7 @@ vc4_hdmi_encoder_compute_mode_clock(const struct drm_display_mode *mode,
                                    unsigned int bpc,
                                    enum vc4_hdmi_output_format fmt)
 {
-       unsigned long long clock = mode->clock * 1000;
+       unsigned long long clock = mode->clock * 1000ULL;
 
        if (mode->flags & DRM_MODE_FLAG_DBLCLK)
                clock = clock * 2;
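
This one-character fix matters because mode->clock is a 32-bit int holding kHz: with a plain int constant, the multiplication is performed in 32 bits and wraps once the product exceeds what 32 bits can hold, and only then is the truncated result widened to unsigned long long. Making one operand 1000ULL promotes the multiply itself to 64 bits. A standalone demonstration (hypothetical clock value, unsigned to keep the wraparound well-defined):

#include <stdio.h>

int main(void)
{
        unsigned int clock_khz = 5000000u;   /* hypothetical ~5 GHz, in kHz */

        /* 32-bit multiply: wraps modulo 2^32 before the widening */
        unsigned long long bad  = clock_khz * 1000u;
        /* one ULL operand promotes the whole multiply to 64 bits */
        unsigned long long good = clock_khz * 1000ULL;

        printf("bad=%llu good=%llu\n", bad, good);  /* 705032704 vs 5000000000 */
        return 0;
}
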
index 2a58fc4..ba2c8e5 100644
@@ -220,10 +220,11 @@ u8 vc4_hvs_get_fifo_frame_count(struct vc4_hvs *hvs, unsigned int fifo)
 
 int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output)
 {
+       struct vc4_dev *vc4 = hvs->vc4;
        u32 reg;
        int ret;
 
-       if (!hvs->hvs5)
+       if (!vc4->is_vc5)
                return output;
 
        switch (output) {
@@ -273,6 +274,7 @@ int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output)
 static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
                                struct drm_display_mode *mode, bool oneshot)
 {
+       struct vc4_dev *vc4 = hvs->vc4;
        struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
        struct vc4_crtc_state *vc4_crtc_state = to_vc4_crtc_state(crtc->state);
        unsigned int chan = vc4_crtc_state->assigned_channel;
@@ -291,7 +293,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
         */
        dispctrl = SCALER_DISPCTRLX_ENABLE;
 
-       if (!hvs->hvs5)
+       if (!vc4->is_vc5)
                dispctrl |= VC4_SET_FIELD(mode->hdisplay,
                                          SCALER_DISPCTRLX_WIDTH) |
                            VC4_SET_FIELD(mode->vdisplay,
@@ -312,7 +314,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
 
        HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx |
                  SCALER_DISPBKGND_AUTOHS |
-                 ((!hvs->hvs5) ? SCALER_DISPBKGND_GAMMA : 0) |
+                 ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) |
                  (interlace ? SCALER_DISPBKGND_INTERLACE : 0));
 
        /* Reload the LUT, since the SRAMs would have been disabled if
@@ -617,11 +619,9 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
        if (!hvs)
                return -ENOMEM;
 
+       hvs->vc4 = vc4;
        hvs->pdev = pdev;
 
-       if (of_device_is_compatible(pdev->dev.of_node, "brcm,bcm2711-hvs"))
-               hvs->hvs5 = true;
-
        hvs->regs = vc4_ioremap_regs(pdev, 0);
        if (IS_ERR(hvs->regs))
                return PTR_ERR(hvs->regs);
@@ -630,7 +630,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
        hvs->regset.regs = hvs_regs;
        hvs->regset.nregs = ARRAY_SIZE(hvs_regs);
 
-       if (hvs->hvs5) {
+       if (vc4->is_vc5) {
                hvs->core_clk = devm_clk_get(&pdev->dev, NULL);
                if (IS_ERR(hvs->core_clk)) {
                        dev_err(&pdev->dev, "Couldn't get core clock\n");
@@ -644,7 +644,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
                }
        }
 
-       if (!hvs->hvs5)
+       if (!vc4->is_vc5)
                hvs->dlist = hvs->regs + SCALER_DLIST_START;
        else
                hvs->dlist = hvs->regs + SCALER5_DLIST_START;
@@ -665,7 +665,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
         * between planes when they don't overlap on the screen, but
         * for now we just allocate globally.
         */
-       if (!hvs->hvs5)
+       if (!vc4->is_vc5)
                /* 48k words of 2x12-bit pixels */
                drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024);
        else
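
Every hvs->hvs5 test in this file becomes a read of the shared vc4->is_vc5 flag, and the per-component of_device_is_compatible() probing is dropped. The underlying idea, sketched under the assumption that the flag is derived once from the DT compatible when the DRM device is bound (the bind-time plumbing itself is outside this hunk):

#include <linux/device.h>
#include <linux/of.h>

/* Hedged sketch: compute the generation flag in one place so every
 * sub-driver can test a plain bool instead of re-parsing the DT. */
static bool dev_is_vc5(struct device *dev)
{
        return of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5");
}
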
index 4342fb4..2eacfb6 100644
@@ -265,6 +265,9 @@ vc4_irq_enable(struct drm_device *dev)
 {
        struct vc4_dev *vc4 = to_vc4_dev(dev);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        if (!vc4->v3d)
                return;
 
@@ -279,6 +282,9 @@ vc4_irq_disable(struct drm_device *dev)
 {
        struct vc4_dev *vc4 = to_vc4_dev(dev);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        if (!vc4->v3d)
                return;
 
@@ -296,8 +302,12 @@ vc4_irq_disable(struct drm_device *dev)
 
 int vc4_irq_install(struct drm_device *dev, int irq)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        int ret;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (irq == IRQ_NOTCONNECTED)
                return -ENOTCONN;
 
@@ -316,6 +326,9 @@ void vc4_irq_uninstall(struct drm_device *dev)
 {
        struct vc4_dev *vc4 = to_vc4_dev(dev);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        vc4_irq_disable(dev);
        free_irq(vc4->irq, dev);
 }
@@ -326,6 +339,9 @@ void vc4_irq_reset(struct drm_device *dev)
        struct vc4_dev *vc4 = to_vc4_dev(dev);
        unsigned long irqflags;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        /* Acknowledge any stale IRQs. */
        V3D_WRITE(V3D_INTCTL, V3D_DRIVER_IRQS);
 
index c169bd7..893d831 100644
@@ -393,7 +393,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
                old_hvs_state->fifo_state[channel].pending_commit = NULL;
        }
 
-       if (vc4->hvs->hvs5) {
+       if (vc4->is_vc5) {
                unsigned long state_rate = max(old_hvs_state->core_clock_rate,
                                               new_hvs_state->core_clock_rate);
                unsigned long core_rate = max_t(unsigned long,
@@ -412,7 +412,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
 
        vc4_ctm_commit(vc4, state);
 
-       if (vc4->hvs->hvs5)
+       if (vc4->is_vc5)
                vc5_hvs_pv_muxing_commit(vc4, state);
        else
                vc4_hvs_pv_muxing_commit(vc4, state);
@@ -430,7 +430,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
 
        drm_atomic_helper_cleanup_planes(dev, state);
 
-       if (vc4->hvs->hvs5) {
+       if (vc4->is_vc5) {
                drm_dbg(dev, "Running the core clock at %lu Hz\n",
                        new_hvs_state->core_clock_rate);
 
@@ -479,8 +479,12 @@ static struct drm_framebuffer *vc4_fb_create(struct drm_device *dev,
                                             struct drm_file *file_priv,
                                             const struct drm_mode_fb_cmd2 *mode_cmd)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct drm_mode_fb_cmd2 mode_cmd_local;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return ERR_PTR(-ENODEV);
+
        /* If the user didn't specify a modifier, use the
         * vc4_set_tiling_ioctl() state for the BO.
         */
@@ -997,11 +1001,15 @@ static const struct drm_mode_config_funcs vc4_mode_funcs = {
        .fb_create = vc4_fb_create,
 };
 
+static const struct drm_mode_config_funcs vc5_mode_funcs = {
+       .atomic_check = vc4_atomic_check,
+       .atomic_commit = drm_atomic_helper_commit,
+       .fb_create = drm_gem_fb_create,
+};
+
 int vc4_kms_load(struct drm_device *dev)
 {
        struct vc4_dev *vc4 = to_vc4_dev(dev);
-       bool is_vc5 = of_device_is_compatible(dev->dev->of_node,
-                                             "brcm,bcm2711-vc5");
        int ret;
 
        /*
@@ -1009,7 +1017,7 @@ int vc4_kms_load(struct drm_device *dev)
         * the BCM2711, but the load tracker computations are used for
         * the core clock rate calculation.
         */
-       if (!is_vc5) {
+       if (!vc4->is_vc5) {
                /* Start with the load tracker enabled. Can be
                 * disabled through the debugfs load_tracker file.
                 */
@@ -1025,7 +1033,7 @@ int vc4_kms_load(struct drm_device *dev)
                return ret;
        }
 
-       if (is_vc5) {
+       if (vc4->is_vc5) {
                dev->mode_config.max_width = 7680;
                dev->mode_config.max_height = 7680;
        } else {
@@ -1033,7 +1041,7 @@ int vc4_kms_load(struct drm_device *dev)
                dev->mode_config.max_height = 2048;
        }
 
-       dev->mode_config.funcs = &vc4_mode_funcs;
+       dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs;
        dev->mode_config.helper_private = &vc4_mode_config_helpers;
        dev->mode_config.preferred_depth = 24;
        dev->mode_config.async_page_flip = true;
index 18abc06..79a7418 100644
 
 void vc4_perfmon_get(struct vc4_perfmon *perfmon)
 {
-       if (perfmon)
-               refcount_inc(&perfmon->refcnt);
+       struct vc4_dev *vc4;
+
+       if (!perfmon)
+               return;
+
+       vc4 = perfmon->dev;
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
+       refcount_inc(&perfmon->refcnt);
 }
 
 void vc4_perfmon_put(struct vc4_perfmon *perfmon)
 {
-       if (perfmon && refcount_dec_and_test(&perfmon->refcnt))
+       struct vc4_dev *vc4;
+
+       if (!perfmon)
+               return;
+
+       vc4 = perfmon->dev;
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
+       if (refcount_dec_and_test(&perfmon->refcnt))
                kfree(perfmon);
 }
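
Setting the is_vc5 guards aside, the get/put pair above is the standard refcount_t idiom: gets and puts are paired, and the caller whose put drops the count to zero frees the object. Reduced to its core (hypothetical obj type):

#include <linux/refcount.h>
#include <linux/slab.h>

struct obj {
        refcount_t refcnt;
};

static void obj_get(struct obj *o)
{
        if (o)
                refcount_inc(&o->refcnt);
}

static void obj_put(struct obj *o)
{
        /* refcount_dec_and_test() returns true only for the final put,
         * so exactly one caller ends up freeing the object */
        if (o && refcount_dec_and_test(&o->refcnt))
                kfree(o);
}
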
 
@@ -32,6 +49,9 @@ void vc4_perfmon_start(struct vc4_dev *vc4, struct vc4_perfmon *perfmon)
        unsigned int i;
        u32 mask;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon))
                return;
 
@@ -49,6 +69,9 @@ void vc4_perfmon_stop(struct vc4_dev *vc4, struct vc4_perfmon *perfmon,
 {
        unsigned int i;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        if (WARN_ON_ONCE(!vc4->active_perfmon ||
                         perfmon != vc4->active_perfmon))
                return;
@@ -64,8 +87,12 @@ void vc4_perfmon_stop(struct vc4_dev *vc4, struct vc4_perfmon *perfmon,
 
 struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id)
 {
+       struct vc4_dev *vc4 = vc4file->dev;
        struct vc4_perfmon *perfmon;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return NULL;
+
        mutex_lock(&vc4file->perfmon.lock);
        perfmon = idr_find(&vc4file->perfmon.idr, id);
        vc4_perfmon_get(perfmon);
@@ -76,8 +103,14 @@ struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id)
 
 void vc4_perfmon_open_file(struct vc4_file *vc4file)
 {
+       struct vc4_dev *vc4 = vc4file->dev;
+
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        mutex_init(&vc4file->perfmon.lock);
        idr_init_base(&vc4file->perfmon.idr, VC4_PERFMONID_MIN);
+       vc4file->dev = vc4;
 }
 
 static int vc4_perfmon_idr_del(int id, void *elem, void *data)
@@ -91,6 +124,11 @@ static int vc4_perfmon_idr_del(int id, void *elem, void *data)
 
 void vc4_perfmon_close_file(struct vc4_file *vc4file)
 {
+       struct vc4_dev *vc4 = vc4file->dev;
+
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        mutex_lock(&vc4file->perfmon.lock);
        idr_for_each(&vc4file->perfmon.idr, vc4_perfmon_idr_del, NULL);
        idr_destroy(&vc4file->perfmon.idr);
@@ -107,6 +145,9 @@ int vc4_perfmon_create_ioctl(struct drm_device *dev, void *data,
        unsigned int i;
        int ret;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (!vc4->v3d) {
                DRM_DEBUG("Creating perfmon no VC4 V3D probed\n");
                return -ENODEV;
@@ -127,6 +168,7 @@ int vc4_perfmon_create_ioctl(struct drm_device *dev, void *data,
                          GFP_KERNEL);
        if (!perfmon)
                return -ENOMEM;
+       perfmon->dev = vc4;
 
        for (i = 0; i < req->ncounters; i++)
                perfmon->events[i] = req->events[i];
@@ -157,6 +199,9 @@ int vc4_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
        struct drm_vc4_perfmon_destroy *req = data;
        struct vc4_perfmon *perfmon;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (!vc4->v3d) {
                DRM_DEBUG("Destroying perfmon no VC4 V3D probed\n");
                return -ENODEV;
@@ -182,6 +227,9 @@ int vc4_perfmon_get_values_ioctl(struct drm_device *dev, void *data,
        struct vc4_perfmon *perfmon;
        int ret;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (!vc4->v3d) {
                DRM_DEBUG("Getting perfmon no VC4 V3D probed\n");
                return -ENODEV;
index b3438f4..1e866dc 100644
@@ -489,10 +489,10 @@ static u32 vc4_lbm_size(struct drm_plane_state *state)
        }
 
        /* Align it to 64 or 128 (hvs5) bytes */
-       lbm = roundup(lbm, vc4->hvs->hvs5 ? 128 : 64);
+       lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64);
 
        /* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */
-       lbm /= vc4->hvs->hvs5 ? 4 : 2;
+       lbm /= vc4->is_vc5 ? 4 : 2;
 
        return lbm;
 }
@@ -608,7 +608,7 @@ static int vc4_plane_allocate_lbm(struct drm_plane_state *state)
                ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm,
                                                 &vc4_state->lbm,
                                                 lbm_size,
-                                                vc4->hvs->hvs5 ? 64 : 32,
+                                                vc4->is_vc5 ? 64 : 32,
                                                 0, 0);
                spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);
 
@@ -917,7 +917,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
        mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE &&
                          fb->format->has_alpha;
 
-       if (!vc4->hvs->hvs5) {
+       if (!vc4->is_vc5) {
        /* Control word */
                vc4_dlist_write(vc4_state,
                                SCALER_CTL0_VALID |
@@ -1321,6 +1321,10 @@ static int vc4_plane_atomic_async_check(struct drm_plane *plane,
 
        old_vc4_state = to_vc4_plane_state(plane->state);
        new_vc4_state = to_vc4_plane_state(new_plane_state);
+
+       if (!new_vc4_state->hw_dlist)
+               return -EINVAL;
+
        if (old_vc4_state->dlist_count != new_vc4_state->dlist_count ||
            old_vc4_state->pos0_offset != new_vc4_state->pos0_offset ||
            old_vc4_state->pos2_offset != new_vc4_state->pos2_offset ||
@@ -1385,6 +1389,13 @@ static const struct drm_plane_helper_funcs vc4_plane_helper_funcs = {
        .atomic_async_update = vc4_plane_atomic_async_update,
 };
 
+static const struct drm_plane_helper_funcs vc5_plane_helper_funcs = {
+       .atomic_check = vc4_plane_atomic_check,
+       .atomic_update = vc4_plane_atomic_update,
+       .atomic_async_check = vc4_plane_atomic_async_check,
+       .atomic_async_update = vc4_plane_atomic_async_update,
+};
+
 static bool vc4_format_mod_supported(struct drm_plane *plane,
                                     uint32_t format,
                                     uint64_t modifier)
@@ -1453,14 +1464,13 @@ static const struct drm_plane_funcs vc4_plane_funcs = {
 struct drm_plane *vc4_plane_init(struct drm_device *dev,
                                 enum drm_plane_type type)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct drm_plane *plane = NULL;
        struct vc4_plane *vc4_plane;
        u32 formats[ARRAY_SIZE(hvs_formats)];
        int num_formats = 0;
        int ret = 0;
        unsigned i;
-       bool hvs5 = of_device_is_compatible(dev->dev->of_node,
-                                           "brcm,bcm2711-vc5");
        static const uint64_t modifiers[] = {
                DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED,
                DRM_FORMAT_MOD_BROADCOM_SAND128,
@@ -1476,7 +1486,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
                return ERR_PTR(-ENOMEM);
 
        for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) {
-               if (!hvs_formats[i].hvs5_only || hvs5) {
+               if (!hvs_formats[i].hvs5_only || vc4->is_vc5) {
                        formats[num_formats] = hvs_formats[i].drm;
                        num_formats++;
                }
@@ -1490,7 +1500,10 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
        if (ret)
                return ERR_PTR(ret);
 
-       drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
+       if (vc4->is_vc5)
+               drm_plane_helper_add(plane, &vc5_plane_helper_funcs);
+       else
+               drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
 
        drm_plane_create_alpha_property(plane);
        drm_plane_create_rotation_property(plane, DRM_MODE_ROTATE_0,
index 3c918ee..f6b7dc3 100644
@@ -593,11 +593,15 @@ vc4_rcl_render_config_surface_setup(struct vc4_exec_info *exec,
 
 int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_rcl_setup setup = {0};
        struct drm_vc4_submit_cl *args = exec->args;
        bool has_bin = args->bin_cl_size != 0;
        int ret;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        if (args->min_x_tile > args->max_x_tile ||
            args->min_y_tile > args->max_y_tile) {
                DRM_DEBUG("Bad render tile set (%d,%d)-(%d,%d)\n",
index 7bb3067..cc714dc 100644
@@ -127,6 +127,9 @@ static int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused)
 int
 vc4_v3d_pm_get(struct vc4_dev *vc4)
 {
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        mutex_lock(&vc4->power_lock);
        if (vc4->power_refcount++ == 0) {
                int ret = pm_runtime_get_sync(&vc4->v3d->pdev->dev);
@@ -145,6 +148,9 @@ vc4_v3d_pm_get(struct vc4_dev *vc4)
 void
 vc4_v3d_pm_put(struct vc4_dev *vc4)
 {
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        mutex_lock(&vc4->power_lock);
        if (--vc4->power_refcount == 0) {
                pm_runtime_mark_last_busy(&vc4->v3d->pdev->dev);
@@ -172,6 +178,9 @@ int vc4_v3d_get_bin_slot(struct vc4_dev *vc4)
        uint64_t seqno = 0;
        struct vc4_exec_info *exec;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
 try_again:
        spin_lock_irqsave(&vc4->job_lock, irqflags);
        slot = ffs(~vc4->bin_alloc_used);
@@ -316,6 +325,9 @@ int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used)
 {
        int ret = 0;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        mutex_lock(&vc4->bin_bo_lock);
 
        if (used && *used)
@@ -348,6 +360,9 @@ static void bin_bo_release(struct kref *ref)
 
 void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
 {
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return;
+
        mutex_lock(&vc4->bin_bo_lock);
        kref_put(&vc4->bin_bo_kref, bin_bo_release);
        mutex_unlock(&vc4->bin_bo_lock);
index eec76af..2feba55 100644
@@ -105,9 +105,13 @@ size_is_lt(uint32_t width, uint32_t height, int cpp)
 struct drm_gem_cma_object *
 vc4_use_bo(struct vc4_exec_info *exec, uint32_t hindex)
 {
+       struct vc4_dev *vc4 = exec->dev;
        struct drm_gem_cma_object *obj;
        struct vc4_bo *bo;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return NULL;
+
        if (hindex >= exec->bo_count) {
                DRM_DEBUG("BO index %d greater than BO count %d\n",
                          hindex, exec->bo_count);
@@ -160,10 +164,14 @@ vc4_check_tex_size(struct vc4_exec_info *exec, struct drm_gem_cma_object *fbo,
                   uint32_t offset, uint8_t tiling_format,
                   uint32_t width, uint32_t height, uint8_t cpp)
 {
+       struct vc4_dev *vc4 = exec->dev;
        uint32_t aligned_width, aligned_height, stride, size;
        uint32_t utile_w = utile_width(cpp);
        uint32_t utile_h = utile_height(cpp);
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return false;
+
        /* The shaded vertex format stores signed 12.4 fixed point
         * (-2048,2047) offsets from the viewport center, so we should
         * never have a render target larger than 4096.  The texture
@@ -482,10 +490,14 @@ vc4_validate_bin_cl(struct drm_device *dev,
                    void *unvalidated,
                    struct vc4_exec_info *exec)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        uint32_t len = exec->args->bin_cl_size;
        uint32_t dst_offset = 0;
        uint32_t src_offset = 0;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        while (src_offset < len) {
                void *dst_pkt = validated + dst_offset;
                void *src_pkt = unvalidated + src_offset;
@@ -926,9 +938,13 @@ int
 vc4_validate_shader_recs(struct drm_device *dev,
                         struct vc4_exec_info *exec)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        uint32_t i;
        int ret = 0;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return -ENODEV;
+
        for (i = 0; i < exec->shader_state_count; i++) {
                ret = validate_gl_shader_rec(dev, exec, &exec->shader_state[i]);
                if (ret)
index 7cf82b0..e315aeb 100644
@@ -778,6 +778,7 @@ vc4_handle_branch_target(struct vc4_shader_validation_state *validation_state)
 struct vc4_validated_shader_info *
 vc4_validate_shader(struct drm_gem_cma_object *shader_obj)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(shader_obj->base.dev);
        bool found_shader_end = false;
        int shader_end_ip = 0;
        uint32_t last_thread_switch_ip = -3;
@@ -785,6 +786,9 @@ vc4_validate_shader(struct drm_gem_cma_object *shader_obj)
        struct vc4_validated_shader_info *validated_shader = NULL;
        struct vc4_shader_validation_state validation_state;
 
+       if (WARN_ON_ONCE(vc4->is_vc5))
+               return NULL;
+
        memset(&validation_state, 0, sizeof(validation_state));
        validation_state.shader = shader_obj->vaddr;
        validation_state.max_ip = shader_obj->base.size / sizeof(uint64_t);
index 5a5bf4e..e31554d 100644
@@ -71,7 +71,7 @@ static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
         * the whole buffer.
         */
        vma->vm_flags &= ~VM_PFNMAP;
-       vma->vm_flags |= VM_MIXEDMAP;
+       vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
        vma->vm_pgoff = 0;
 
        /*
index 978ee2a..e0bc731 100644
@@ -199,7 +199,8 @@ static void mousevsc_on_receive_device_info(struct mousevsc_dev *input_device,
        if (!input_device->hid_desc)
                goto cleanup;
 
-       input_device->report_desc_size = desc->desc[0].wDescriptorLength;
+       input_device->report_desc_size = le16_to_cpu(
+                                       desc->desc[0].wDescriptorLength);
        if (input_device->report_desc_size == 0) {
                input_device->dev_info_status = -EINVAL;
                goto cleanup;
@@ -217,7 +218,7 @@ static void mousevsc_on_receive_device_info(struct mousevsc_dev *input_device,
 
        memcpy(input_device->report_desc,
               ((unsigned char *)desc) + desc->bLength,
-              desc->desc[0].wDescriptorLength);
+              le16_to_cpu(desc->desc[0].wDescriptorLength));
 
        /* Send the ack */
        memset(&ack, 0, sizeof(struct mousevsc_prt_msg));
index b60f134..5b12040 100644
@@ -21,6 +21,7 @@
 #include <linux/cpu.h>
 #include <linux/hyperv.h>
 #include <asm/mshyperv.h>
+#include <linux/sched/isolation.h>
 
 #include "hyperv_vmbus.h"
 
@@ -638,6 +639,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
                 */
                if (newchannel->offermsg.offer.sub_channel_index == 0) {
                        mutex_unlock(&vmbus_connection.channel_mutex);
+                       cpus_read_unlock();
                        /*
                         * Don't call free_channel(), because newchannel->kobj
                         * is not initialized yet.
@@ -728,16 +730,20 @@ static void init_vp_index(struct vmbus_channel *channel)
        u32 i, ncpu = num_online_cpus();
        cpumask_var_t available_mask;
        struct cpumask *allocated_mask;
+       const struct cpumask *hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
        u32 target_cpu;
        int numa_node;
 
        if (!perf_chn ||
-           !alloc_cpumask_var(&available_mask, GFP_KERNEL)) {
+           !alloc_cpumask_var(&available_mask, GFP_KERNEL) ||
+           cpumask_empty(hk_mask)) {
                /*
                 * If the channel is not a performance critical
                 * channel, bind it to VMBUS_CONNECT_CPU.
                 * In case alloc_cpumask_var() fails, bind it to
                 * VMBUS_CONNECT_CPU.
+                * If all the CPUs are isolated, bind it to
+                * VMBUS_CONNECT_CPU.
                 */
                channel->target_cpu = VMBUS_CONNECT_CPU;
                if (perf_chn)
@@ -758,17 +764,19 @@ static void init_vp_index(struct vmbus_channel *channel)
                }
                allocated_mask = &hv_context.hv_numa_map[numa_node];
 
-               if (cpumask_equal(allocated_mask, cpumask_of_node(numa_node))) {
+retry:
+               cpumask_xor(available_mask, allocated_mask, cpumask_of_node(numa_node));
+               cpumask_and(available_mask, available_mask, hk_mask);
+
+               if (cpumask_empty(available_mask)) {
                        /*
                         * We have cycled through all the CPUs in the node;
                         * reset the allocated map.
                         */
                        cpumask_clear(allocated_mask);
+                       goto retry;
                }
 
-               cpumask_xor(available_mask, allocated_mask,
-                           cpumask_of_node(numa_node));
-
                target_cpu = cpumask_first(available_mask);
                cpumask_set_cpu(target_cpu, allocated_mask);
 
index c698592..d35b60c 100644
@@ -394,7 +394,7 @@ kvp_send_key(struct work_struct *dummy)
        in_msg = kvp_transaction.kvp_msg;
 
        /*
-        * The key/value strings sent from the host are encoded in
+        * The key/value strings sent from the host are encoded
         * in utf16; convert it to utf8 strings.
         * The host assures us that the utf16 strings will not exceed
         * the max lengths specified. We will however, reserve room
index 714d549..547ae33 100644
@@ -21,6 +21,7 @@
 #include <linux/kernel_stat.h>
 #include <linux/clockchips.h>
 #include <linux/cpu.h>
+#include <linux/sched/isolation.h>
 #include <linux/sched/task_stack.h>
 
 #include <linux/delay.h>
@@ -1770,6 +1771,9 @@ static ssize_t target_cpu_store(struct vmbus_channel *channel,
        if (target_cpu >= nr_cpumask_bits)
                return -EINVAL;
 
+       if (!cpumask_test_cpu(target_cpu, housekeeping_cpumask(HK_TYPE_MANAGED_IRQ)))
+               return -EINVAL;
+
        /* No CPUs should come up or down during this. */
        cpus_read_lock();
 
index 57e11b2..3633ab6 100644
@@ -259,7 +259,7 @@ static const struct ec_board_info board_info[] = {
        },
        {
                .board_names = {
-                       "ROG CROSSHAIR VIII FORMULA"
+                       "ROG CROSSHAIR VIII FORMULA",
                        "ROG CROSSHAIR VIII HERO",
                        "ROG CROSSHAIR VIII HERO (WI-FI)",
                },
index 5c4cf74..157e232 100644
@@ -550,7 +550,7 @@ static int aem_init_aem1_inst(struct aem_ipmi_data *probe, u8 module_handle)
 
        res = platform_device_add(data->pdev);
        if (res)
-               goto ipmi_err;
+               goto dev_add_err;
 
        platform_set_drvdata(data->pdev, data);
 
@@ -598,7 +598,9 @@ hwmon_reg_err:
        ipmi_destroy_user(data->ipmi.user);
 ipmi_err:
        platform_set_drvdata(data->pdev, NULL);
-       platform_device_unregister(data->pdev);
+       platform_device_del(data->pdev);
+dev_add_err:
+       platform_device_put(data->pdev);
 dev_err:
        ida_free(&aem_ida, data->id);
 id_err:
@@ -690,7 +692,7 @@ static int aem_init_aem2_inst(struct aem_ipmi_data *probe,
 
        res = platform_device_add(data->pdev);
        if (res)
-               goto ipmi_err;
+               goto dev_add_err;
 
        platform_set_drvdata(data->pdev, data);
 
@@ -738,7 +740,9 @@ hwmon_reg_err:
        ipmi_destroy_user(data->ipmi.user);
 ipmi_err:
        platform_set_drvdata(data->pdev, NULL);
-       platform_device_unregister(data->pdev);
+       platform_device_del(data->pdev);
+dev_add_err:
+       platform_device_put(data->pdev);
 dev_err:
        ida_free(&aem_ida, data->id);
 id_err:
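
Both ibmaem error paths were unwinding with platform_device_unregister(), which performs del + put and therefore touches state that only exists after a successful platform_device_add(). The lifetime rule the fix restores, sketched with hypothetical names:

#include <linux/errno.h>
#include <linux/platform_device.h>

static int setup_extra(struct platform_device *pdev)
{
        return 0;       /* placeholder for later probe steps */
}

static int register_child(void)
{
        struct platform_device *pdev;
        int ret;

        pdev = platform_device_alloc("my-child", 0);
        if (!pdev)
                return -ENOMEM;

        ret = platform_device_add(pdev);
        if (ret) {
                /* add failed: never registered, so drop only the
                 * allocation reference */
                platform_device_put(pdev);
                return ret;
        }

        ret = setup_extra(pdev);
        if (ret) {
                /* failures after a successful add unwind with del + put
                 * (together, platform_device_unregister()) */
                platform_device_del(pdev);
                platform_device_put(pdev);
                return ret;
        }

        return 0;
}
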
index d78f4be..157b73a 100644
@@ -145,7 +145,7 @@ static int occ_poll(struct occ *occ)
        cmd[6] = 0;                     /* checksum lsb */
 
        /* mutex should already be locked if necessary */
-       rc = occ->send_cmd(occ, cmd, sizeof(cmd));
+       rc = occ->send_cmd(occ, cmd, sizeof(cmd), &occ->resp, sizeof(occ->resp));
        if (rc) {
                occ->last_error = rc;
                if (occ->error_count++ > OCC_ERROR_COUNT_THRESHOLD)
@@ -182,6 +182,7 @@ static int occ_set_user_power_cap(struct occ *occ, u16 user_power_cap)
 {
        int rc;
        u8 cmd[8];
+       u8 resp[8];
        __be16 user_power_cap_be = cpu_to_be16(user_power_cap);
 
        cmd[0] = 0;     /* sequence number */
@@ -198,7 +199,7 @@ static int occ_set_user_power_cap(struct occ *occ, u16 user_power_cap)
        if (rc)
                return rc;
 
-       rc = occ->send_cmd(occ, cmd, sizeof(cmd));
+       rc = occ->send_cmd(occ, cmd, sizeof(cmd), resp, sizeof(resp));
 
        mutex_unlock(&occ->lock);
 
@@ -1228,10 +1229,15 @@ EXPORT_SYMBOL_GPL(occ_setup);
 
 void occ_shutdown(struct occ *occ)
 {
+       mutex_lock(&occ->lock);
+
        occ_shutdown_sysfs(occ);
 
        if (occ->hwmon)
                hwmon_device_unregister(occ->hwmon);
+       occ->hwmon = NULL;
+
+       mutex_unlock(&occ->lock);
 }
 EXPORT_SYMBOL_GPL(occ_shutdown);
 
index 64d5ec7..7ac4b2f 100644
@@ -96,7 +96,8 @@ struct occ {
 
        int powr_sample_time_us;        /* average power sample time */
        u8 poll_cmd_data;               /* to perform OCC poll command */
-       int (*send_cmd)(struct occ *occ, u8 *cmd, size_t len);
+       int (*send_cmd)(struct occ *occ, u8 *cmd, size_t len, void *resp,
+                       size_t resp_len);
 
        unsigned long next_update;
        struct mutex lock;              /* lock OCC access */
index da39ea2..b221be1 100644
@@ -111,7 +111,8 @@ static int p8_i2c_occ_putscom_be(struct i2c_client *client, u32 address,
                                      be32_to_cpu(data1));
 }
 
-static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len)
+static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len,
+                              void *resp, size_t resp_len)
 {
        int i, rc;
        unsigned long start;
@@ -120,7 +121,7 @@ static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len)
        const long wait_time = msecs_to_jiffies(OCC_CMD_IN_PRG_WAIT_MS);
        struct p8_i2c_occ *ctx = to_p8_i2c_occ(occ);
        struct i2c_client *client = ctx->client;
-       struct occ_response *resp = &occ->resp;
+       struct occ_response *or = (struct occ_response *)resp;
 
        start = jiffies;
 
@@ -151,7 +152,7 @@ static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len)
                        return rc;
 
                /* wait for OCC */
-               if (resp->return_status == OCC_RESP_CMD_IN_PRG) {
+               if (or->return_status == OCC_RESP_CMD_IN_PRG) {
                        rc = -EALREADY;
 
                        if (time_after(jiffies, start + timeout))
@@ -163,7 +164,7 @@ static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len)
        } while (rc);
 
        /* check the OCC response */
-       switch (resp->return_status) {
+       switch (or->return_status) {
        case OCC_RESP_CMD_IN_PRG:
                rc = -ETIMEDOUT;
                break;
@@ -192,8 +193,8 @@ static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len)
        if (rc < 0)
                return rc;
 
-       data_length = get_unaligned_be16(&resp->data_length);
-       if (data_length > OCC_RESP_DATA_BYTES)
+       data_length = get_unaligned_be16(&or->data_length);
+       if ((data_length + 7) > resp_len)
                return -EMSGSIZE;
 
        /* fetch the rest of the response data */
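
The reworked bound check validates the device-reported payload length against the caller's buffer instead of the old fixed OCC_RESP_DATA_BYTES cap. By my reading of the response layout, the +7 is the fixed overhead around the payload (a 5-byte header ending in the big-endian length field, plus a 2-byte trailing checksum); treat that exact breakdown as an assumption. The check in isolation:

#include <linux/errno.h>
#include <linux/types.h>

/* Hedged sketch: does header + payload + checksum fit in the buffer? */
static int occ_check_len(u16 data_length, size_t resp_len)
{
        const size_t hdr = 5, csum = 2;         /* assumed fixed overhead */

        if (data_length + hdr + csum > resp_len)
                return -EMSGSIZE;
        return 0;
}
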
index 42fc7b9..a91937e 100644
@@ -78,11 +78,10 @@ done:
        return notify;
 }
 
-static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len)
+static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len,
+                              void *resp, size_t resp_len)
 {
-       struct occ_response *resp = &occ->resp;
        struct p9_sbe_occ *ctx = to_p9_sbe_occ(occ);
-       size_t resp_len = sizeof(*resp);
        int rc;
 
        rc = fsi_occ_submit(ctx->sbe, cmd, len, resp, &resp_len);
@@ -96,7 +95,7 @@ static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len)
                return rc;
        }
 
-       switch (resp->return_status) {
+       switch (((struct occ_response *)resp)->return_status) {
        case OCC_RESP_CMD_IN_PRG:
                rc = -ETIMEDOUT;
                break;
index 6bc3273..3ad375a 100644
@@ -148,7 +148,7 @@ static int ucd9200_probe(struct i2c_client *client)
         * This only affects the READ_IOUT and READ_TEMPERATURE2 registers.
         * READ_IOUT will return the sum of currents of all phases of a rail,
         * and READ_TEMPERATURE2 will return the maximum temperature detected
-        * for the the phases of the rail.
+        * for the phases of the rail.
         */
        for (i = 0; i < info->pages; i++) {
                /*
index e7d316b..c023b69 100644
@@ -477,9 +477,6 @@ int i2c_dw_prepare_clk(struct dw_i2c_dev *dev, bool prepare)
 {
        int ret;
 
-       if (IS_ERR(dev->clk))
-               return PTR_ERR(dev->clk);
-
        if (prepare) {
                /* Optional interface clock */
                ret = clk_prepare_enable(dev->pclk);
index 70ade53..ba043b5 100644
@@ -320,8 +320,17 @@ static int dw_i2c_plat_probe(struct platform_device *pdev)
                goto exit_reset;
        }
 
-       dev->clk = devm_clk_get(&pdev->dev, NULL);
-       if (!i2c_dw_prepare_clk(dev, true)) {
+       dev->clk = devm_clk_get_optional(&pdev->dev, NULL);
+       if (IS_ERR(dev->clk)) {
+               ret = PTR_ERR(dev->clk);
+               goto exit_reset;
+       }
+
+       ret = i2c_dw_prepare_clk(dev, true);
+       if (ret)
+               goto exit_reset;
+
+       if (dev->clk) {
                u64 clk_khz;
 
                dev->get_clk_rate_khz = i2c_dw_get_clk_rate_khz;
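
This is the canonical optional-clock pattern: devm_clk_get_optional() returns NULL, not an error, when the firmware describes no clock, and a real ERR_PTR (including -EPROBE_DEFER) otherwise. That is also why the IS_ERR() test could be deleted from i2c_dw_prepare_clk() above: a NULL clk is legal, and clk_prepare_enable(NULL) is a no-op. The pattern in isolation:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int probe_clk(struct device *dev, struct clk **out)
{
        struct clk *clk = devm_clk_get_optional(dev, NULL);

        if (IS_ERR(clk))
                return PTR_ERR(clk);    /* genuine error, e.g. -EPROBE_DEFER */

        /* clk may be NULL here: no clock in firmware. The clk API
         * treats NULL as a no-op clock, so callers need no special case. */
        *out = clk;
        return 0;
}
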
index bdecb78..8e69853 100644
@@ -1420,17 +1420,22 @@ static int mtk_i2c_probe(struct platform_device *pdev)
        if (ret < 0) {
                dev_err(&pdev->dev,
                        "Request I2C IRQ %d fail\n", irq);
-               return ret;
+               goto err_bulk_unprepare;
        }
 
        i2c_set_adapdata(&i2c->adap, i2c);
        ret = i2c_add_adapter(&i2c->adap);
        if (ret)
-               return ret;
+               goto err_bulk_unprepare;
 
        platform_set_drvdata(pdev, i2c);
 
        return 0;
+
+err_bulk_unprepare:
+       clk_bulk_unprepare(I2C_MT65XX_CLK_MAX, i2c->clocks);
+
+       return ret;
 }
 
 static int mtk_i2c_remove(struct platform_device *pdev)
index 5960ccd..aede9d5 100644
@@ -2372,8 +2372,7 @@ static struct platform_driver npcm_i2c_bus_driver = {
 static int __init npcm_i2c_init(void)
 {
        npcm_i2c_debugfs_dir = debugfs_create_dir("npcm_i2c", NULL);
-       platform_driver_register(&npcm_i2c_bus_driver);
-       return 0;
+       return platform_driver_register(&npcm_i2c_bus_driver);
 }
 module_init(npcm_i2c_init);
 
index 4f73bc8..9c9e985 100644
@@ -1006,11 +1006,12 @@ static int bma180_probe(struct i2c_client *client,
 
                data->trig->ops = &bma180_trigger_ops;
                iio_trigger_set_drvdata(data->trig, indio_dev);
-               indio_dev->trig = iio_trigger_get(data->trig);
 
                ret = iio_trigger_register(data->trig);
                if (ret)
                        goto err_trigger_free;
+
+               indio_dev->trig = iio_trigger_get(data->trig);
        }
 
        ret = iio_triggered_buffer_setup(indio_dev, NULL,
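
The bma180 hunk is one instance of an ordering fix repeated in several IIO drivers below (kxcjk1013, mxc4005, ccs811, hts221): take the iio_trigger_get() reference only after iio_trigger_register() has succeeded, otherwise the error path is left holding a reference on a trigger that was never registered. The corrected order, schematically:

#include <linux/iio/iio.h>
#include <linux/iio/trigger.h>

static const struct iio_trigger_ops my_trigger_ops;    /* hypothetical */

static int my_setup_trigger(struct iio_dev *indio_dev,
                            struct iio_trigger *trig)
{
        int ret;

        trig->ops = &my_trigger_ops;
        iio_trigger_set_drvdata(trig, indio_dev);

        ret = iio_trigger_register(trig);
        if (ret)
                return ret;             /* no reference taken yet */

        /* registration can no longer fail: publish and reference */
        indio_dev->trig = iio_trigger_get(trig);
        return 0;
}
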
index ac74cdc..748b35c 100644
@@ -1554,12 +1554,12 @@ static int kxcjk1013_probe(struct i2c_client *client,
 
                data->dready_trig->ops = &kxcjk1013_trigger_ops;
                iio_trigger_set_drvdata(data->dready_trig, indio_dev);
-               indio_dev->trig = data->dready_trig;
-               iio_trigger_get(indio_dev->trig);
                ret = iio_trigger_register(data->dready_trig);
                if (ret)
                        goto err_poweroff;
 
+               indio_dev->trig = iio_trigger_get(data->dready_trig);
+
                data->motion_trig->ops = &kxcjk1013_trigger_ops;
                iio_trigger_set_drvdata(data->motion_trig, indio_dev);
                ret = iio_trigger_register(data->motion_trig);
index 912a447..c7d9ca9 100644
@@ -1511,10 +1511,14 @@ static int mma8452_reset(struct i2c_client *client)
        int i;
        int ret;
 
-       ret = i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,
+       /*
+        * On the fxls8471, the chip resets immediately once the reset bit
+        * is configured and does not ACK the transfer, so do not check the
+        * return value here. The code below reads the reset register back
+        * to verify that the reset actually took effect.
+        */
+       i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,
                                        MMA8452_CTRL_REG2_RST);
-       if (ret < 0)
-               return ret;
 
        for (i = 0; i < 10; i++) {
                usleep_range(100, 200);
@@ -1557,11 +1561,13 @@ static int mma8452_probe(struct i2c_client *client,
        mutex_init(&data->lock);
 
        data->chip_info = device_get_match_data(&client->dev);
-       if (!data->chip_info && id) {
-               data->chip_info = &mma_chip_info_table[id->driver_data];
-       } else {
-               dev_err(&client->dev, "unknown device model\n");
-               return -ENODEV;
+       if (!data->chip_info) {
+               if (id) {
+                       data->chip_info = &mma_chip_info_table[id->driver_data];
+               } else {
+                       dev_err(&client->dev, "unknown device model\n");
+                       return -ENODEV;
+               }
        }
 
        ret = iio_read_mount_matrix(&client->dev, &data->orientation);
index b3afbf0..df600d2 100644
@@ -456,8 +456,6 @@ static int mxc4005_probe(struct i2c_client *client,
 
                data->dready_trig->ops = &mxc4005_trigger_ops;
                iio_trigger_set_drvdata(data->dready_trig, indio_dev);
-               indio_dev->trig = data->dready_trig;
-               iio_trigger_get(indio_dev->trig);
                ret = devm_iio_trigger_register(&client->dev,
                                                data->dready_trig);
                if (ret) {
@@ -465,6 +463,8 @@ static int mxc4005_probe(struct i2c_client *client,
                                "failed to register trigger\n");
                        return ret;
                }
+
+               indio_dev->trig = iio_trigger_get(data->dready_trig);
        }
 
        return devm_iio_device_register(&client->dev, indio_dev);
index a73e3c2..a9e655e 100644
@@ -322,16 +322,19 @@ static struct adi_axi_adc_client *adi_axi_adc_attach_client(struct device *dev)
 
                if (!try_module_get(cl->dev->driver->owner)) {
                        mutex_unlock(&registered_clients_lock);
+                       of_node_put(cln);
                        return ERR_PTR(-ENODEV);
                }
 
                get_device(cl->dev);
                cl->info = info;
                mutex_unlock(&registered_clients_lock);
+               of_node_put(cln);
                return cl;
        }
 
        mutex_unlock(&registered_clients_lock);
+       of_node_put(cln);
 
        return ERR_PTR(-EPROBE_DEFER);
 }
index 0793d24..9341e0e 100644
@@ -186,6 +186,7 @@ static int aspeed_adc_set_trim_data(struct iio_dev *indio_dev)
                return -EOPNOTSUPP;
        }
        scu = syscon_node_to_regmap(syscon);
+       of_node_put(syscon);
        if (IS_ERR(scu)) {
                dev_warn(data->dev, "Failed to get syscon regmap\n");
                return -EOPNOTSUPP;
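
The adi-axi and aspeed hunks plug the same leak class: an OF lookup hands back a device_node with an elevated refcount, and that reference must be dropped on every exit path once the node is no longer needed — the regmap lookup does not consume it. The shape of the fix (phandle name hypothetical):

#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>

static struct regmap *lookup_syscon(struct device_node *parent)
{
        struct device_node *np;
        struct regmap *map;

        np = of_parse_phandle(parent, "my,syscon", 0);  /* hypothetical */
        if (!np)
                return ERR_PTR(-ENODEV);

        map = syscon_node_to_regmap(np);
        /* drop our reference on success and failure alike */
        of_node_put(np);

        return map;     /* may itself be an ERR_PTR; caller checks */
}
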
index a4b8be5..580361b 100644
@@ -196,6 +196,14 @@ static const struct dmi_system_id axp288_adc_ts_bias_override[] = {
                },
                .driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA,
        },
+       {
+               /* Nuvision Solo 10 Draw */
+               .matches = {
+                 DMI_MATCH(DMI_SYS_VENDOR, "TMAX"),
+                 DMI_MATCH(DMI_PRODUCT_NAME, "TM101W610L"),
+               },
+               .driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA,
+       },
        {}
 };
 
index 7585144..5b09a93 100644
@@ -334,11 +334,15 @@ static int rzg2l_adc_parse_properties(struct platform_device *pdev, struct rzg2l
        i = 0;
        device_for_each_child_node(&pdev->dev, fwnode) {
                ret = fwnode_property_read_u32(fwnode, "reg", &channel);
-               if (ret)
+               if (ret) {
+                       fwnode_handle_put(fwnode);
                        return ret;
+               }
 
-               if (channel >= RZG2L_ADC_MAX_CHANNELS)
+               if (channel >= RZG2L_ADC_MAX_CHANNELS) {
+                       fwnode_handle_put(fwnode);
                        return -EINVAL;
+               }
 
                chan_array[i].type = IIO_VOLTAGE;
                chan_array[i].indexed = 1;
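
The rzg2l hunk (and the ads131e08 one further down) fix the same child-node leak: device_for_each_child_node() holds a reference on the current child for the duration of each iteration and drops it when advancing, so any early return out of the loop body must put the handle itself. A sketch:

#include <linux/property.h>
#include <linux/types.h>

static int walk_channels(struct device *dev)
{
        struct fwnode_handle *child;
        u32 reg;
        int ret;

        device_for_each_child_node(dev, child) {
                ret = fwnode_property_read_u32(child, "reg", &reg);
                if (ret) {
                        /* leaving early: the iterator's reference on
                         * 'child' is ours to drop */
                        fwnode_handle_put(child);
                        return ret;
                }
        }
        return 0;
}
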
index 1426562..3efb8c4 100644
@@ -64,6 +64,7 @@ struct stm32_adc_priv;
  * @max_clk_rate_hz: maximum analog clock rate (Hz, from datasheet)
  * @has_syscfg: SYSCFG capability flags
  * @num_irqs:  number of interrupt lines
+ * @num_adcs:   maximum number of ADC instances in the common registers
  */
 struct stm32_adc_priv_cfg {
        const struct stm32_adc_common_regs *regs;
@@ -71,6 +72,7 @@ struct stm32_adc_priv_cfg {
        u32 max_clk_rate_hz;
        unsigned int has_syscfg;
        unsigned int num_irqs;
+       unsigned int num_adcs;
 };
 
 /**
@@ -352,7 +354,7 @@ static void stm32_adc_irq_handler(struct irq_desc *desc)
         * before invoking the interrupt handler (e.g. call ISR only for
         * IRQ-enabled ADCs).
         */
-       for (i = 0; i < priv->cfg->num_irqs; i++) {
+       for (i = 0; i < priv->cfg->num_adcs; i++) {
                if ((status & priv->cfg->regs->eoc_msk[i] &&
                     stm32_adc_eoc_enabled(priv, i)) ||
                     (status & priv->cfg->regs->ovr_msk[i]))
@@ -792,6 +794,7 @@ static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = {
        .clk_sel = stm32f4_adc_clk_sel,
        .max_clk_rate_hz = 36000000,
        .num_irqs = 1,
+       .num_adcs = 3,
 };
 
 static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
@@ -800,14 +803,16 @@ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
        .max_clk_rate_hz = 36000000,
        .has_syscfg = HAS_VBOOSTER,
        .num_irqs = 1,
+       .num_adcs = 2,
 };
 
 static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
        .regs = &stm32h7_adc_common_regs,
        .clk_sel = stm32h7_adc_clk_sel,
-       .max_clk_rate_hz = 40000000,
+       .max_clk_rate_hz = 36000000,
        .has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD,
        .num_irqs = 2,
+       .num_adcs = 2,
 };
 
 static const struct of_device_id stm32_adc_of_match[] = {
index a68ecbd..11ef873 100644
@@ -1365,7 +1365,7 @@ static int stm32_adc_read_raw(struct iio_dev *indio_dev,
                else
                        ret = -EINVAL;
 
-               if (mask == IIO_CHAN_INFO_PROCESSED && adc->vrefint.vrefint_cal)
+               if (mask == IIO_CHAN_INFO_PROCESSED)
                        *val = STM32_ADC_VREFINT_VOLTAGE * adc->vrefint.vrefint_cal / *val;
 
                iio_device_release_direct_mode(indio_dev);
@@ -1407,7 +1407,6 @@ static irqreturn_t stm32_adc_threaded_isr(int irq, void *data)
        struct stm32_adc *adc = iio_priv(indio_dev);
        const struct stm32_adc_regspec *regs = adc->cfg->regs;
        u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg);
-       u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg);
 
        /* Check ovr status right now, as ovr mask should be already disabled */
        if (status & regs->isr_ovr.mask) {
@@ -1422,11 +1421,6 @@ static irqreturn_t stm32_adc_threaded_isr(int irq, void *data)
                return IRQ_HANDLED;
        }
 
-       if (!(status & mask))
-               dev_err_ratelimited(&indio_dev->dev,
-                                   "Unexpected IRQ: IER=0x%08x, ISR=0x%08x\n",
-                                   mask, status);
-
        return IRQ_NONE;
 }
 
@@ -1436,10 +1430,6 @@ static irqreturn_t stm32_adc_isr(int irq, void *data)
        struct stm32_adc *adc = iio_priv(indio_dev);
        const struct stm32_adc_regspec *regs = adc->cfg->regs;
        u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg);
-       u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg);
-
-       if (!(status & mask))
-               return IRQ_WAKE_THREAD;
 
        if (status & regs->isr_ovr.mask) {
                /*
@@ -1979,10 +1969,10 @@ static int stm32_adc_populate_int_ch(struct iio_dev *indio_dev, const char *ch_n
 
        for (i = 0; i < STM32_ADC_INT_CH_NB; i++) {
                if (!strncmp(stm32_adc_ic[i].name, ch_name, STM32_ADC_CH_SZ)) {
-                       adc->int_ch[i] = chan;
-
-                       if (stm32_adc_ic[i].idx != STM32_ADC_INT_CH_VREFINT)
-                               continue;
+                       if (stm32_adc_ic[i].idx != STM32_ADC_INT_CH_VREFINT) {
+                               adc->int_ch[i] = chan;
+                               break;
+                       }
 
                        /* Get calibration data for vrefint channel */
                        ret = nvmem_cell_read_u16(&indio_dev->dev, "vrefint", &vrefint);
@@ -1990,10 +1980,15 @@ static int stm32_adc_populate_int_ch(struct iio_dev *indio_dev, const char *ch_n
                                return dev_err_probe(indio_dev->dev.parent, ret,
                                                     "nvmem access error\n");
                        }
-                       if (ret == -ENOENT)
-                               dev_dbg(&indio_dev->dev, "vrefint calibration not found\n");
-                       else
-                               adc->vrefint.vrefint_cal = vrefint;
+                       if (ret == -ENOENT) {
+                               dev_dbg(&indio_dev->dev, "vrefint calibration not found. Skip vrefint channel\n");
+                               return ret;
+                       } else if (!vrefint) {
+                               dev_dbg(&indio_dev->dev, "Null vrefint calibration value. Skip vrefint channel\n");
+                               return -ENOENT;
+                       }
+                       adc->int_ch[i] = chan;
+                       adc->vrefint.vrefint_cal = vrefint;
                }
        }
 
@@ -2030,7 +2025,9 @@ static int stm32_adc_generic_chan_init(struct iio_dev *indio_dev,
                        }
                        strncpy(adc->chan_name[val], name, STM32_ADC_CH_SZ);
                        ret = stm32_adc_populate_int_ch(indio_dev, name, val);
-                       if (ret)
+                       if (ret == -ENOENT)
+                               continue;
+                       else if (ret)
                                goto err;
                } else if (ret != -EINVAL) {
                        dev_err(&indio_dev->dev, "Invalid label %d\n", ret);
index 0c2025a..80a0981 100644
@@ -739,7 +739,7 @@ static int ads131e08_alloc_channels(struct iio_dev *indio_dev)
        device_for_each_child_node(dev, node) {
                ret = fwnode_property_read_u32(node, "reg", &channel);
                if (ret)
-                       return ret;
+                       goto err_child_out;
 
                ret = fwnode_property_read_u32(node, "ti,gain", &tmp);
                if (ret) {
@@ -747,7 +747,7 @@ static int ads131e08_alloc_channels(struct iio_dev *indio_dev)
                } else {
                        ret = ads131e08_pga_gain_to_field_value(st, tmp);
                        if (ret < 0)
-                               return ret;
+                               goto err_child_out;
 
                        channel_config[i].pga_gain = tmp;
                }
@@ -758,7 +758,7 @@ static int ads131e08_alloc_channels(struct iio_dev *indio_dev)
                } else {
                        ret = ads131e08_validate_channel_mux(st, tmp);
                        if (ret)
-                               return ret;
+                               goto err_child_out;
 
                        channel_config[i].mux = tmp;
                }
@@ -784,6 +784,10 @@ static int ads131e08_alloc_channels(struct iio_dev *indio_dev)
        st->channel_config = channel_config;
 
        return 0;
+
+err_child_out:
+       fwnode_handle_put(node);
+       return ret;
 }
 
 static void ads131e08_regulator_disable(void *data)
index a55396c..a768770 100644
@@ -1409,7 +1409,7 @@ static int ams_probe(struct platform_device *pdev)
 
        irq = platform_get_irq(pdev, 0);
        if (irq < 0)
-               return ret;
+               return irq;
 
        ret = devm_request_irq(&pdev->dev, irq, &ams_irq, 0, "ams-irq",
                               indio_dev);
index c6cf709..6949d21 100644
@@ -277,7 +277,7 @@ static int rescale_configure_channel(struct device *dev,
        chan->ext_info = rescale->ext_info;
        chan->type = rescale->cfg->type;
 
-       if (iio_channel_has_info(schan, IIO_CHAN_INFO_RAW) ||
+       if (iio_channel_has_info(schan, IIO_CHAN_INFO_RAW) &&
            iio_channel_has_info(schan, IIO_CHAN_INFO_SCALE)) {
                dev_info(dev, "using raw+scale source channel\n");
        } else if (iio_channel_has_info(schan, IIO_CHAN_INFO_PROCESSED)) {
index 847194f..80ef1aa 100644
@@ -499,11 +499,11 @@ static int ccs811_probe(struct i2c_client *client,
 
                data->drdy_trig->ops = &ccs811_trigger_ops;
                iio_trigger_set_drvdata(data->drdy_trig, indio_dev);
-               indio_dev->trig = data->drdy_trig;
-               iio_trigger_get(indio_dev->trig);
                ret = iio_trigger_register(data->drdy_trig);
                if (ret)
                        goto err_poweroff;
+
+               indio_dev->trig = iio_trigger_get(data->drdy_trig);
        }
 
        ret = iio_triggered_buffer_setup(indio_dev, NULL,
index a7994f8..1aac566 100644
@@ -700,8 +700,10 @@ static int admv1014_init(struct admv1014_state *st)
                         ADMV1014_DET_EN_MSK;
 
        enable_reg = FIELD_PREP(ADMV1014_P1DB_COMPENSATION_MSK, st->p1db_comp ? 3 : 0) |
-                    FIELD_PREP(ADMV1014_IF_AMP_PD_MSK, !(st->input_mode)) |
-                    FIELD_PREP(ADMV1014_BB_AMP_PD_MSK, st->input_mode) |
+                    FIELD_PREP(ADMV1014_IF_AMP_PD_MSK,
+                               (st->input_mode == ADMV1014_IF_MODE) ? 0 : 1) |
+                    FIELD_PREP(ADMV1014_BB_AMP_PD_MSK,
+                               (st->input_mode == ADMV1014_IF_MODE) ? 1 : 0) |
                     FIELD_PREP(ADMV1014_DET_EN_MSK, st->det_en);
 
        return __admv1014_spi_update_bits(st, ADMV1014_REG_ENABLE, enable_reg_msk, enable_reg);
index 4f19dc7..5908a96 100644
@@ -875,6 +875,7 @@ static int mpu3050_power_up(struct mpu3050 *mpu3050)
        ret = regmap_update_bits(mpu3050->map, MPU3050_PWR_MGM,
                                 MPU3050_PWR_MGM_SLEEP, 0);
        if (ret) {
+               regulator_bulk_disable(ARRAY_SIZE(mpu3050->regs), mpu3050->regs);
                dev_err(mpu3050->dev, "error setting power mode\n");
                return ret;
        }
index f29692b..66b3241 100644
@@ -135,9 +135,12 @@ int hts221_allocate_trigger(struct iio_dev *iio_dev)
 
        iio_trigger_set_drvdata(hw->trig, iio_dev);
        hw->trig->ops = &hts221_trigger_ops;
+
+       err = devm_iio_trigger_register(hw->dev, hw->trig);
+
        iio_dev->trig = iio_trigger_get(hw->trig);
 
-       return devm_iio_trigger_register(hw->dev, hw->trig);
+       return err;
 }
 
 static int hts221_buffer_preenable(struct iio_dev *iio_dev)
index c0f5059..995a9dc 100644
@@ -17,6 +17,7 @@
 #include "inv_icm42600_buffer.h"
 
 enum inv_icm42600_chip {
+       INV_CHIP_INVALID,
        INV_CHIP_ICM42600,
        INV_CHIP_ICM42602,
        INV_CHIP_ICM42605,
index 86858da..ca85fcc 100644
@@ -565,7 +565,7 @@ int inv_icm42600_core_probe(struct regmap *regmap, int chip, int irq,
        bool open_drain;
        int ret;
 
-       if (chip < 0 || chip >= INV_CHIP_NB) {
+       if (chip <= INV_CHIP_INVALID || chip >= INV_CHIP_NB) {
                dev_err(dev, "invalid chip = %d\n", chip);
                return -ENODEV;
        }
index 9ff7b0e..b2bc637 100644
@@ -639,7 +639,7 @@ static int yas532_get_calibration_data(struct yas5xx *yas5xx)
        dev_dbg(yas5xx->dev, "calibration data: %*ph\n", 14, data);
 
        /* Sanity check, is this all zeroes? */
-       if (memchr_inv(data, 0x00, 13)) {
+       if (memchr_inv(data, 0x00, 13) == NULL) {
                if (!(data[13] & BIT(7)))
                        dev_warn(yas5xx->dev, "calibration is blank!\n");
        }
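
The yas532 change fixes an inverted test: memchr_inv() returns a pointer to the first byte that differs from the given pattern, or NULL when the entire range matches it — so "all zeroes" is the NULL case, which is exactly when the blank-calibration warning should fire. Reduced:

#include <linux/string.h>
#include <linux/types.h>

/* true iff every byte of buf[0..len) is zero */
static bool all_zeroes(const u8 *buf, size_t len)
{
        return memchr_inv(buf, 0x00, len) == NULL;
}
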
index 70c37f6..63fbcaa 100644
@@ -885,6 +885,9 @@ sx9324_get_default_reg(struct device *dev, int idx,
                        break;
                ret = device_property_read_u32_array(dev, prop, pin_defs,
                                                     ARRAY_SIZE(pin_defs));
+               if (ret)
+                       break;
+
                for (pin = 0; pin < SX9324_NUM_PINS; pin++)
                        raw |= (pin_defs[pin] << (2 * pin)) &
                               SX9324_REG_AFE_PH0_PIN_MASK(pin);
index 56ca0ad..4c66c3f 100644
@@ -6,7 +6,7 @@
 # Keep in alphabetical order
 config IIO_RESCALE_KUNIT_TEST
        bool "Test IIO rescale conversion functions"
-       depends on KUNIT=y && !IIO_RESCALE
+       depends on KUNIT=y && IIO_RESCALE=y
        default KUNIT_ALL_TESTS
        help
          If you want to run tests on the iio-rescale code say Y here.
index f15ae0a..880360f 100644
@@ -4,6 +4,6 @@
 #
 
 # Keep in alphabetical order
-obj-$(CONFIG_IIO_RESCALE_KUNIT_TEST) += iio-test-rescale.o ../afe/iio-rescale.o
+obj-$(CONFIG_IIO_RESCALE_KUNIT_TEST) += iio-test-rescale.o
 obj-$(CONFIG_IIO_TEST_FORMAT) += iio-test-format.o
 CFLAGS_iio-test-format.o += $(DISABLE_STRUCTLEAK_PLUGIN)
index f1a8704..d6c5e96 100644
@@ -190,6 +190,7 @@ static int iio_sysfs_trigger_remove(int id)
        }
 
        iio_trigger_unregister(t->trig);
+       irq_work_sync(&t->work);
        iio_trigger_free(t->trig);
 
        list_del(&t->l);
index 1c107d6..b985e0d 100644
@@ -1252,8 +1252,10 @@ struct ib_cm_id *ib_cm_insert_listen(struct ib_device *device,
                return ERR_CAST(cm_id_priv);
 
        err = cm_init_listen(cm_id_priv, service_id, 0);
-       if (err)
+       if (err) {
+               ib_destroy_cm_id(&cm_id_priv->id);
                return ERR_PTR(err);
+       }
 
        spin_lock_irq(&cm_id_priv->lock);
        listen_id_priv = cm_insert_listen(cm_id_priv, cm_handler);
index 8def88c..db9ef3e 100644
@@ -418,6 +418,7 @@ struct qedr_qp {
        u32 sq_psn;
        u32 qkey;
        u32 dest_qp_num;
+       u8 timeout;
 
        /* Relevant to qps created from kernel space only (ULPs) */
        u8 prev_wqe_size;
index f0f43b6..03ed7c0 100644
@@ -2613,6 +2613,8 @@ int qedr_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
                                        1 << max_t(int, attr->timeout - 8, 0);
                else
                        qp_params.ack_timeout = 0;
+
+               qp->timeout = attr->timeout;
        }
 
        if (attr_mask & IB_QP_RETRY_CNT) {
@@ -2772,7 +2774,7 @@ int qedr_query_qp(struct ib_qp *ibqp,
        rdma_ah_set_dgid_raw(&qp_attr->ah_attr, &params.dgid.bytes[0]);
        rdma_ah_set_port_num(&qp_attr->ah_attr, 1);
        rdma_ah_set_sl(&qp_attr->ah_attr, 0);
-       qp_attr->timeout = params.timeout;
+       qp_attr->timeout = qp->timeout;
        qp_attr->rnr_retry = params.rnr_retry;
        qp_attr->retry_cnt = params.retry_cnt;
        qp_attr->min_rnr_timer = params.min_rnr_nak_timer;
index 2c3dca4..f799551 100644 (file)
@@ -573,7 +573,7 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb,
        unsigned int usable_sge = priv->max_send_sge - !!skb_headlen(skb);
 
        if (skb_is_gso(skb)) {
-               hlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hlen = skb_tcp_all_headers(skb);
                phead = skb->data;
                if (unlikely(!skb_pull(skb, hlen))) {
                        ipoib_warn(priv, "linear data too small\n");
index 8fdb84b..1d42084 100644 (file)
@@ -987,7 +987,7 @@ static const struct of_device_id ipmmu_of_ids[] = {
                .compatible = "renesas,ipmmu-r8a779a0",
                .data = &ipmmu_features_rcar_gen4,
        }, {
-               .compatible = "renesas,rcar-gen4-ipmmu",
+               .compatible = "renesas,rcar-gen4-ipmmu-vmsa",
                .data = &ipmmu_features_rcar_gen4,
        }, {
                /* Terminator */
index 4ab1038..1f23a6b 100644 (file)
@@ -298,7 +298,7 @@ config XTENSA_MX
 
 config XILINX_INTC
        bool "Xilinx Interrupt Controller IP"
-       depends on MICROBLAZE || ARCH_ZYNQ || ARCH_ZYNQMP
+       depends on OF
        select IRQ_DOMAIN
        help
          Support for the Xilinx Interrupt Controller IP core.
index 12dd487..5ac8318 100644 (file)
@@ -1035,6 +1035,7 @@ static void build_fiq_affinity(struct aic_irq_chip *ic, struct device_node *aff)
                        continue;
 
                cpu = of_cpu_node_to_id(cpu_node);
+               of_node_put(cpu_node);
                if (WARN_ON(cpu < 0))
                        continue;
 
@@ -1143,6 +1144,7 @@ static int __init aic_of_ic_init(struct device_node *node, struct device_node *p
                for_each_child_of_node(affs, chld)
                        build_fiq_affinity(irqc, chld);
        }
+       of_node_put(affs);
 
        set_handle_irq(aic_handle_irq);
        set_handle_fiq(aic_handle_fiq);
index b4c1924..38fab02 100644 (file)
@@ -57,6 +57,7 @@ realview_gic_of_init(struct device_node *node, struct device_node *parent)
 
        /* The PB11MPCore GIC needs to be configured in the syscon */
        map = syscon_node_to_regmap(np);
+       of_node_put(np);
        if (!IS_ERR(map)) {
                /* new irq mode with no DCC */
                regmap_write(map, REALVIEW_SYS_LOCK_OFFSET,
index 2be8dea..5c1cf90 100644 (file)
@@ -1932,7 +1932,7 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
 
        gic_data.ppi_descs = kcalloc(gic_data.ppi_nr, sizeof(*gic_data.ppi_descs), GFP_KERNEL);
        if (!gic_data.ppi_descs)
-               return;
+               goto out_put_node;
 
        nr_parts = of_get_child_count(parts_node);
 
@@ -1973,12 +1973,15 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
                                continue;
 
                        cpu = of_cpu_node_to_id(cpu_node);
-                       if (WARN_ON(cpu < 0))
+                       if (WARN_ON(cpu < 0)) {
+                               of_node_put(cpu_node);
                                continue;
+                       }
 
                        pr_cont("%pOF[%d] ", cpu_node, cpu);
 
                        cpumask_set_cpu(cpu, &part->mask);
+                       of_node_put(cpu_node);
                }
 
                pr_cont("}\n");
index aed8885..8d05d8b 100644 (file)
 
 #define LIOINTC_ERRATA_IRQ     10
 
+#if defined(CONFIG_MIPS)
+#define liointc_core_id get_ebase_cpunum()
+#else
+#define liointc_core_id get_csr_cpuid()
+#endif
+
 struct liointc_handler_data {
        struct liointc_priv     *priv;
        u32                     parent_int_map;
@@ -57,7 +63,7 @@ static void liointc_chained_handle_irq(struct irq_desc *desc)
        struct liointc_handler_data *handler = irq_desc_get_handler_data(desc);
        struct irq_chip *chip = irq_desc_get_chip(desc);
        struct irq_chip_generic *gc = handler->priv->gc;
-       int core = cpu_logical_map(smp_processor_id()) % LIOINTC_NUM_CORES;
+       int core = liointc_core_id % LIOINTC_NUM_CORES;
        u32 pending;
 
        chained_irq_enter(chip, desc);
index 49b47e7..f289ccd 100644 (file)
@@ -66,7 +66,6 @@ static struct or1k_pic_dev or1k_pic_level = {
                .name = "or1k-PIC-level",
                .irq_unmask = or1k_pic_unmask,
                .irq_mask = or1k_pic_mask,
-               .irq_mask_ack = or1k_pic_mask_ack,
        },
        .handle = handle_level_irq,
        .flags = IRQ_LEVEL | IRQ_NOPROBE,
index 50a5682..56bf502 100644 (file)
@@ -134,9 +134,9 @@ static int __init map_interrupts(struct device_node *node, struct irq_domain *do
                if (!cpu_ictl)
                        return -EINVAL;
                ret = of_property_read_u32(cpu_ictl, "#interrupt-cells", &tmp);
+               of_node_put(cpu_ictl);
                if (ret || tmp != 1)
                        return -EINVAL;
-               of_node_put(cpu_ictl);
 
                cpu_int = be32_to_cpup(imap + 2);
                if (cpu_int > 7 || cpu_int < 2)
index 89121b3..716b1bb 100644 (file)
@@ -237,6 +237,7 @@ static const struct of_device_id uniphier_aidet_match[] = {
        { .compatible = "socionext,uniphier-ld11-aidet" },
        { .compatible = "socionext,uniphier-ld20-aidet" },
        { .compatible = "socionext,uniphier-pxs3-aidet" },
+       { .compatible = "socionext,uniphier-nx1-aidet" },
        { /* sentinel */ }
 };
 
index cd5642c..651f2f8 100644 (file)
@@ -1557,7 +1557,7 @@ reset_hfcsusb(struct hfcsusb *hw)
        write_reg(hw, HFCUSB_USB_SIZE, (hw->packet_size / 8) |
                  ((hw->packet_size / 8) << 4));
 
-       /* set USB_SIZE_I to match the the wMaxPacketSize for ISO transfers */
+       /* set USB_SIZE_I to match the wMaxPacketSize for ISO transfers */
        write_reg(hw, HFCUSB_USB_SIZE_I, hw->iso_packet_size);
 
        /* enable PCM/GCI master mode */
index 54c0473..c954ff9 100644 (file)
@@ -272,6 +272,7 @@ struct dm_io {
        atomic_t io_count;
        struct mapped_device *md;
 
+       struct bio *split_bio;
        /* The three fields represent mapped part of original bio */
        struct bio *orig_bio;
        unsigned int sector_offset; /* offset to end of orig_bio */
index 1f6bf15..e92c1af 100644 (file)
@@ -1400,7 +1400,7 @@ static void start_worker(struct era *era)
 static void stop_worker(struct era *era)
 {
        atomic_set(&era->suspended, 1);
-       flush_workqueue(era->wq);
+       drain_workqueue(era->wq);
 }
 
 /*----------------------------------------------------------------
@@ -1570,6 +1570,12 @@ static void era_postsuspend(struct dm_target *ti)
        }
 
        stop_worker(era);
+
+       r = metadata_commit(era->md);
+       if (r) {
+               DMERR("%s: metadata_commit failed", __func__);
+               /* FIXME: fail mode */
+       }
 }
 
 static int era_preresume(struct dm_target *ti)
index 06f3289..0c6620e 100644 (file)
@@ -415,8 +415,7 @@ static int create_log_context(struct dm_dirty_log *log, struct dm_target *ti,
        /*
         * Work out how many "unsigned long"s we need to hold the bitset.
         */
-       bitset_size = dm_round_up(region_count,
-                                 sizeof(*lc->clean_bits) << BYTE_SHIFT);
+       bitset_size = dm_round_up(region_count, BITS_PER_LONG);
        bitset_size >>= BYTE_SHIFT;
 
        lc->bitset_uint32_count = bitset_size / sizeof(*lc->clean_bits);
@@ -616,7 +615,7 @@ static int disk_resume(struct dm_dirty_log *log)
                        log_clear_bit(lc, lc->clean_bits, i);
 
        /* clear any old bits -- device has shrunk */
-       for (i = lc->region_count; i % (sizeof(*lc->clean_bits) << BYTE_SHIFT); i++)
+       for (i = lc->region_count; i % BITS_PER_LONG; i++)
                log_clear_bit(lc, lc->clean_bits, i);
 
        /* copy clean across to sync */
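
The dm-log change above replaces a byte-oriented rounding expression with an explicit round-up to whole longs. As a worked example of the arithmetic (assuming a 64-bit kernel, so BITS_PER_LONG == 64): region_count = 100 rounds up to 128 bits, which is 16 bytes after the >> BYTE_SHIFT, or 4 words for lc->bitset_uint32_count. A hypothetical helper expressing the same computation:

#include <linux/bits.h>
#include <linux/math.h>
#include <linux/types.h>

/* bytes needed for a clean-bits bitset covering region_count regions,
 * padded to a whole number of unsigned longs
 */
static size_t clean_bits_bytes(unsigned long region_count)
{
	return roundup(region_count, BITS_PER_LONG) / BITS_PER_BYTE;
}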
index 5e41fba..80c9f71 100644 (file)
@@ -1001,12 +1001,13 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size)
 static int validate_raid_redundancy(struct raid_set *rs)
 {
        unsigned int i, rebuild_cnt = 0;
-       unsigned int rebuilds_per_group = 0, copies;
+       unsigned int rebuilds_per_group = 0, copies, raid_disks;
        unsigned int group_size, last_group_start;
 
-       for (i = 0; i < rs->md.raid_disks; i++)
-               if (!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
-                   !rs->dev[i].rdev.sb_page)
+       for (i = 0; i < rs->raid_disks; i++)
+               if (!test_bit(FirstUse, &rs->dev[i].rdev.flags) &&
+                   ((!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
+                     !rs->dev[i].rdev.sb_page)))
                        rebuild_cnt++;
 
        switch (rs->md.level) {
@@ -1046,8 +1047,9 @@ static int validate_raid_redundancy(struct raid_set *rs)
                 *          A    A    B    B    C
                 *          C    D    D    E    E
                 */
+               raid_disks = min(rs->raid_disks, rs->md.raid_disks);
                if (__is_raid10_near(rs->md.new_layout)) {
-                       for (i = 0; i < rs->md.raid_disks; i++) {
+                       for (i = 0; i < raid_disks; i++) {
                                if (!(i % copies))
                                        rebuilds_per_group = 0;
                                if ((!rs->dev[i].rdev.sb_page ||
@@ -1070,10 +1072,10 @@ static int validate_raid_redundancy(struct raid_set *rs)
                 * results in the need to treat the last (potentially larger)
                 * set differently.
                 */
-               group_size = (rs->md.raid_disks / copies);
-               last_group_start = (rs->md.raid_disks / group_size) - 1;
+               group_size = (raid_disks / copies);
+               last_group_start = (raid_disks / group_size) - 1;
                last_group_start *= group_size;
-               for (i = 0; i < rs->md.raid_disks; i++) {
+               for (i = 0; i < raid_disks; i++) {
                        if (!(i % copies) && !(i > last_group_start))
                                rebuilds_per_group = 0;
                        if ((!rs->dev[i].rdev.sb_page ||
@@ -1588,7 +1590,7 @@ static sector_t __rdev_sectors(struct raid_set *rs)
 {
        int i;
 
-       for (i = 0; i < rs->md.raid_disks; i++) {
+       for (i = 0; i < rs->raid_disks; i++) {
                struct md_rdev *rdev = &rs->dev[i].rdev;
 
                if (!test_bit(Journal, &rdev->flags) &&
@@ -3725,7 +3727,7 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
        if (!strcasecmp(argv[0], "idle") || !strcasecmp(argv[0], "frozen")) {
                if (mddev->sync_thread) {
                        set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-                       md_reap_sync_thread(mddev, false);
+                       md_reap_sync_thread(mddev);
                }
        } else if (decipher_sync_action(mddev, mddev->recovery) != st_idle)
                return -EBUSY;
@@ -3766,13 +3768,13 @@ static int raid_iterate_devices(struct dm_target *ti,
        unsigned int i;
        int r = 0;
 
-       for (i = 0; !r && i < rs->md.raid_disks; i++)
-               if (rs->dev[i].data_dev)
-                       r = fn(ti,
-                                rs->dev[i].data_dev,
-                                0, /* No offset on data devs */
-                                rs->md.dev_sectors,
-                                data);
+       for (i = 0; !r && i < rs->raid_disks; i++) {
+               if (rs->dev[i].data_dev) {
+                       r = fn(ti, rs->dev[i].data_dev,
+                              0, /* No offset on data devs */
+                              rs->md.dev_sectors, data);
+               }
+       }
 
        return r;
 }
index d8f1618..2b75f1e 100644 (file)
@@ -555,6 +555,10 @@ static void dm_start_io_acct(struct dm_io *io, struct bio *clone)
                unsigned long flags;
                /* Can afford locking given DM_TIO_IS_DUPLICATE_BIO */
                spin_lock_irqsave(&io->lock, flags);
+               if (dm_io_flagged(io, DM_IO_ACCOUNTED)) {
+                       spin_unlock_irqrestore(&io->lock, flags);
+                       return;
+               }
                dm_io_set_flag(io, DM_IO_ACCOUNTED);
                spin_unlock_irqrestore(&io->lock, flags);
        }
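
The hunk above closes a double-accounting window: DM_IO_ACCOUNTED is now re-checked under io->lock before being set, so only the first path through dm_start_io_acct() does the accounting. A condensed sketch of the test-then-set-under-lock pattern, with hypothetical types:

#include <linux/bitops.h>
#include <linux/spinlock.h>

struct ex_io {
	spinlock_t lock;
	unsigned long flags;		/* bit 0: already accounted */
};

/* true only for the first caller; later callers see the bit set */
static bool ex_account_once(struct ex_io *io)
{
	unsigned long irqflags;
	bool first;

	spin_lock_irqsave(&io->lock, irqflags);
	first = !test_bit(0, &io->flags);
	if (first)
		set_bit(0, &io->flags);
	spin_unlock_irqrestore(&io->lock, irqflags);
	return first;
}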
@@ -590,6 +594,7 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
        atomic_set(&io->io_count, 2);
        this_cpu_inc(*md->pending_io);
        io->orig_bio = bio;
+       io->split_bio = NULL;
        io->md = md;
        spin_lock_init(&io->lock);
        io->start_time = jiffies;
@@ -711,18 +716,18 @@ static void dm_put_live_table_fast(struct mapped_device *md) __releases(RCU)
 }
 
 static inline struct dm_table *dm_get_live_table_bio(struct mapped_device *md,
-                                                    int *srcu_idx, struct bio *bio)
+                                                    int *srcu_idx, unsigned bio_opf)
 {
-       if (bio->bi_opf & REQ_NOWAIT)
+       if (bio_opf & REQ_NOWAIT)
                return dm_get_live_table_fast(md);
        else
                return dm_get_live_table(md, srcu_idx);
 }
 
 static inline void dm_put_live_table_bio(struct mapped_device *md, int srcu_idx,
-                                        struct bio *bio)
+                                        unsigned bio_opf)
 {
-       if (bio->bi_opf & REQ_NOWAIT)
+       if (bio_opf & REQ_NOWAIT)
                dm_put_live_table_fast(md);
        else
                dm_put_live_table(md, srcu_idx);
@@ -883,7 +888,7 @@ static void dm_io_complete(struct dm_io *io)
 {
        blk_status_t io_error;
        struct mapped_device *md = io->md;
-       struct bio *bio = io->orig_bio;
+       struct bio *bio = io->split_bio ? io->split_bio : io->orig_bio;
 
        if (io->status == BLK_STS_DM_REQUEUE) {
                unsigned long flags;
@@ -935,9 +940,11 @@ static void dm_io_complete(struct dm_io *io)
                        if (io_error == BLK_STS_AGAIN) {
                                /* io_uring doesn't handle BLK_STS_AGAIN (yet) */
                                queue_io(md, bio);
+                               return;
                        }
                }
-               return;
+               if (io_error == BLK_STS_DM_REQUEUE)
+                       return;
        }
 
        if (bio_is_flush_with_data(bio)) {
@@ -1609,7 +1616,12 @@ static blk_status_t __split_and_process_bio(struct clone_info *ci)
        ti = dm_table_find_target(ci->map, ci->sector);
        if (unlikely(!ti))
                return BLK_STS_IOERR;
-       else if (unlikely(ci->is_abnormal_io))
+
+       if (unlikely((ci->bio->bi_opf & REQ_NOWAIT) != 0) &&
+           unlikely(!dm_target_supports_nowait(ti->type)))
+               return BLK_STS_NOTSUPP;
+
+       if (unlikely(ci->is_abnormal_io))
                return __process_abnormal_io(ci, ti);
 
        /*
@@ -1682,9 +1694,11 @@ static void dm_split_and_process_bio(struct mapped_device *md,
         * Remainder must be passed to submit_bio_noacct() so it gets handled
         * *after* bios already submitted have been completely processed.
         */
-       bio_trim(bio, io->sectors, ci.sector_count);
-       trace_block_split(bio, bio->bi_iter.bi_sector);
-       bio_inc_remaining(bio);
+       WARN_ON_ONCE(!dm_io_flagged(io, DM_IO_WAS_SPLIT));
+       io->split_bio = bio_split(bio, io->sectors, GFP_NOIO,
+                                 &md->queue->bio_split);
+       bio_chain(io->split_bio, bio);
+       trace_block_split(io->split_bio, bio->bi_iter.bi_sector);
        submit_bio_noacct(bio);
 out:
        /*
@@ -1711,8 +1725,9 @@ static void dm_submit_bio(struct bio *bio)
        struct mapped_device *md = bio->bi_bdev->bd_disk->private_data;
        int srcu_idx;
        struct dm_table *map;
+       unsigned bio_opf = bio->bi_opf;
 
-       map = dm_get_live_table_bio(md, &srcu_idx, bio);
+       map = dm_get_live_table_bio(md, &srcu_idx, bio_opf);
 
        /* If suspended, or map not yet available, queue this IO for later */
        if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
@@ -1728,7 +1743,7 @@ static void dm_submit_bio(struct bio *bio)
 
        dm_split_and_process_bio(md, map, bio);
 out:
-       dm_put_live_table_bio(md, srcu_idx, bio);
+       dm_put_live_table_bio(md, srcu_idx, bio_opf);
 }
 
 static bool dm_poll_dm_io(struct dm_io *io, struct io_comp_batch *iob,
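
Worth noting in the dm.c hunks above is the switch from bio_trim()/bio_inc_remaining() to an explicit bio_split()/bio_chain() pair, which keeps a reference to the front part in io->split_bio. A minimal sketch of that pairing (hypothetical helper; error handling elided):

#include <linux/bio.h>

/* carve the first @sectors off @bio, requeue the remainder, and hand
 * the front part back for immediate processing
 */
static struct bio *ex_split_front(struct bio *bio, int sectors,
				  struct bio_set *bs)
{
	struct bio *split = bio_split(bio, sectors, GFP_NOIO, bs);

	bio_chain(split, bio);		/* bio completes after split */
	submit_bio_noacct(bio);		/* remainder goes back to the queue */
	return split;
}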
index 8273ac5..c7ecb0b 100644 (file)
@@ -4831,7 +4831,7 @@ action_store(struct mddev *mddev, const char *page, size_t len)
                                flush_workqueue(md_misc_wq);
                        if (mddev->sync_thread) {
                                set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-                               md_reap_sync_thread(mddev, true);
+                               md_reap_sync_thread(mddev);
                        }
                        mddev_unlock(mddev);
                }
@@ -6197,7 +6197,7 @@ static void __md_stop_writes(struct mddev *mddev)
                flush_workqueue(md_misc_wq);
        if (mddev->sync_thread) {
                set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-               md_reap_sync_thread(mddev, true);
+               md_reap_sync_thread(mddev);
        }
 
        del_timer_sync(&mddev->safemode_timer);
@@ -9303,7 +9303,7 @@ void md_check_recovery(struct mddev *mddev)
                         * ->spare_active and clear saved_raid_disk
                         */
                        set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-                       md_reap_sync_thread(mddev, true);
+                       md_reap_sync_thread(mddev);
                        clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
                        clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
                        clear_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags);
@@ -9338,7 +9338,7 @@ void md_check_recovery(struct mddev *mddev)
                        goto unlock;
                }
                if (mddev->sync_thread) {
-                       md_reap_sync_thread(mddev, true);
+                       md_reap_sync_thread(mddev);
                        goto unlock;
                }
                /* Set RUNNING before clearing NEEDED to avoid
@@ -9411,18 +9411,14 @@ void md_check_recovery(struct mddev *mddev)
 }
 EXPORT_SYMBOL(md_check_recovery);
 
-void md_reap_sync_thread(struct mddev *mddev, bool reconfig_mutex_held)
+void md_reap_sync_thread(struct mddev *mddev)
 {
        struct md_rdev *rdev;
        sector_t old_dev_sectors = mddev->dev_sectors;
        bool is_reshaped = false;
 
-       if (reconfig_mutex_held)
-               mddev_unlock(mddev);
        /* resync has finished, collect result */
        md_unregister_thread(&mddev->sync_thread);
-       if (reconfig_mutex_held)
-               mddev_lock_nointr(mddev);
        if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
            !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
            mddev->degraded != mddev->raid_disks) {
index 5f62c46..cf2cbb1 100644 (file)
@@ -719,7 +719,7 @@ extern struct md_thread *md_register_thread(
 extern void md_unregister_thread(struct md_thread **threadp);
 extern void md_wakeup_thread(struct md_thread *thread);
 extern void md_check_recovery(struct mddev *mddev);
-extern void md_reap_sync_thread(struct mddev *mddev, bool reconfig_mutex_held);
+extern void md_reap_sync_thread(struct mddev *mddev);
 extern int mddev_init_writes_pending(struct mddev *mddev);
 extern bool md_write_start(struct mddev *mddev, struct bio *bi);
 extern void md_write_inc(struct mddev *mddev, struct bio *bi);
index 973e2e0..0a2e480 100644 (file)
@@ -629,9 +629,9 @@ static void ppl_do_flush(struct ppl_io_unit *io)
                if (bdev) {
                        struct bio *bio;
 
-                       bio = bio_alloc_bioset(bdev, 0, GFP_NOIO,
+                       bio = bio_alloc_bioset(bdev, 0,
                                               REQ_OP_WRITE | REQ_PREFLUSH,
-                                              &ppl_conf->flush_bs);
+                                              GFP_NOIO, &ppl_conf->flush_bs);
                        bio->bi_private = io;
                        bio->bi_end_io = ppl_flush_endio;
 
index 5d09256..20e53b1 100644 (file)
@@ -7933,7 +7933,7 @@ static int raid5_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
        int err = 0;
        int number = rdev->raid_disk;
        struct md_rdev __rcu **rdevp;
-       struct disk_info *p = conf->disks + number;
+       struct disk_info *p;
        struct md_rdev *tmp;
 
        print_raid5_conf(conf);
@@ -7952,6 +7952,9 @@ static int raid5_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
                log_exit(conf);
                return 0;
        }
+       if (unlikely(number >= conf->pool_size))
+               return 0;
+       p = conf->disks + number;
        if (rdev == rcu_access_pointer(p->rdev))
                rdevp = &p->rdev;
        else if (rdev == rcu_access_pointer(p->replacement))
@@ -8062,6 +8065,7 @@ static int raid5_add_disk(struct mddev *mddev, struct md_rdev *rdev)
         */
        if (rdev->saved_raid_disk >= 0 &&
            rdev->saved_raid_disk >= first &&
+           rdev->saved_raid_disk <= last &&
            conf->disks[rdev->saved_raid_disk].rdev == NULL)
                first = rdev->saved_raid_disk;
 
index b7800b3..ac1a411 100644 (file)
@@ -105,6 +105,7 @@ config TI_EMIF
 config OMAP_GPMC
        tristate "Texas Instruments OMAP SoC GPMC driver"
        depends on OF_ADDRESS
+       depends on ARCH_OMAP2PLUS || ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST
        select GPIOLIB
        help
          This driver is for the General Purpose Memory Controller (GPMC)
index 86a3d34..4c5154e 100644 (file)
@@ -404,13 +404,16 @@ static int mtk_smi_device_link_common(struct device *dev, struct device **com_de
        of_node_put(smi_com_node);
        if (smi_com_pdev) {
                /* smi common is the supplier, Make sure it is ready before */
-               if (!platform_get_drvdata(smi_com_pdev))
+               if (!platform_get_drvdata(smi_com_pdev)) {
+                       put_device(&smi_com_pdev->dev);
                        return -EPROBE_DEFER;
+               }
                smi_com_dev = &smi_com_pdev->dev;
                link = device_link_add(dev, smi_com_dev,
                                       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
                if (!link) {
                        dev_err(dev, "Unable to link smi-common dev\n");
+                       put_device(&smi_com_pdev->dev);
                        return -ENODEV;
                }
                *com_dev = smi_com_dev;
index 4733e78..c491cd5 100644 (file)
@@ -1187,33 +1187,39 @@ static int of_get_dram_timings(struct exynos5_dmc *dmc)
 
        dmc->timing_row = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
                                             sizeof(u32), GFP_KERNEL);
-       if (!dmc->timing_row)
-               return -ENOMEM;
+       if (!dmc->timing_row) {
+               ret = -ENOMEM;
+               goto put_node;
+       }
 
        dmc->timing_data = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
                                              sizeof(u32), GFP_KERNEL);
-       if (!dmc->timing_data)
-               return -ENOMEM;
+       if (!dmc->timing_data) {
+               ret = -ENOMEM;
+               goto put_node;
+       }
 
        dmc->timing_power = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
                                               sizeof(u32), GFP_KERNEL);
-       if (!dmc->timing_power)
-               return -ENOMEM;
+       if (!dmc->timing_power) {
+               ret = -ENOMEM;
+               goto put_node;
+       }
 
        dmc->timings = of_lpddr3_get_ddr_timings(np_ddr, dmc->dev,
                                                 DDR_TYPE_LPDDR3,
                                                 &dmc->timings_arr_size);
        if (!dmc->timings) {
-               of_node_put(np_ddr);
                dev_warn(dmc->dev, "could not get timings from DT\n");
-               return -EINVAL;
+               ret = -EINVAL;
+               goto put_node;
        }
 
        dmc->min_tck = of_lpddr3_get_min_tck(np_ddr, dmc->dev);
        if (!dmc->min_tck) {
-               of_node_put(np_ddr);
                dev_warn(dmc->dev, "could not get tck from DT\n");
-               return -EINVAL;
+               ret = -EINVAL;
+               goto put_node;
        }
 
        /* Sorted array of OPPs with frequency ascending */
@@ -1227,13 +1233,14 @@ static int of_get_dram_timings(struct exynos5_dmc *dmc)
                                             clk_period_ps);
        }
 
-       of_node_put(np_ddr);
 
        /* Take the highest frequency's timings as 'bypass' */
        dmc->bypass_timing_row = dmc->timing_row[idx - 1];
        dmc->bypass_timing_data = dmc->timing_data[idx - 1];
        dmc->bypass_timing_power = dmc->timing_power[idx - 1];
 
+put_node:
+       of_node_put(np_ddr);
        return ret;
 }
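
The exynos5422-dmc rework above is a textbook conversion to single-exit cleanup: every return path, success or failure, now funnels through one of_node_put(np_ddr) instead of scattering puts (and leaks) across the early returns. A condensed, hypothetical form of the shape:

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/slab.h>

static int ex_get_timings(struct device_node *np_ddr, u32 **rows)
{
	int ret = 0;

	*rows = kmalloc_array(4, sizeof(u32), GFP_KERNEL);
	if (!*rows) {
		ret = -ENOMEM;
		goto put_node;
	}
	/* ...further parsing, each failure doing "goto put_node"... */
put_node:
	of_node_put(np_ddr);	/* dropped exactly once on every path */
	return ret;
}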
 
index d6cd553..69f9b03 100644 (file)
@@ -232,9 +232,9 @@ static int ssc_probe(struct platform_device *pdev)
        clk_disable_unprepare(ssc->clk);
 
        ssc->irq = platform_get_irq(pdev, 0);
-       if (!ssc->irq) {
+       if (ssc->irq < 0) {
                dev_dbg(&pdev->dev, "could not get irq\n");
-               return -ENXIO;
+               return ssc->irq;
        }
 
        mutex_lock(&user_lock);
index 749cc5a..b1e7603 100644 (file)
@@ -407,6 +407,8 @@ static void rts5261_init_from_hw(struct rtsx_pcr *pcr)
                // default
                setting_reg1 = PCR_SETTING_REG1;
                setting_reg2 = PCR_SETTING_REG2;
+       } else {
+               return;
        }
 
        pci_read_config_dword(pdev, setting_reg2, &lval2);
index 8d169a3..c9c56fd 100644 (file)
@@ -79,6 +79,11 @@ static int at25_ee_read(void *priv, unsigned int offset,
 {
        struct at25_data *at25 = priv;
        char *buf = val;
+       size_t max_chunk = spi_max_transfer_size(at25->spi);
+       size_t num_msgs = DIV_ROUND_UP(count, max_chunk);
+       size_t nr_bytes = 0;
+       unsigned int msg_offset;
+       size_t msg_count;
        u8                      *cp;
        ssize_t                 status;
        struct spi_transfer     t[2];
@@ -92,54 +97,59 @@ static int at25_ee_read(void *priv, unsigned int offset,
        if (unlikely(!count))
                return -EINVAL;
 
-       cp = at25->command;
+       msg_offset = (unsigned int)offset;
+       msg_count = min(count, max_chunk);
+       while (num_msgs) {
+               cp = at25->command;
 
-       instr = AT25_READ;
-       if (at25->chip.flags & EE_INSTR_BIT3_IS_ADDR)
-               if (offset >= BIT(at25->addrlen * 8))
-                       instr |= AT25_INSTR_BIT3;
+               instr = AT25_READ;
+               if (at25->chip.flags & EE_INSTR_BIT3_IS_ADDR)
+                       if (msg_offset >= BIT(at25->addrlen * 8))
+                               instr |= AT25_INSTR_BIT3;
 
-       mutex_lock(&at25->lock);
+               mutex_lock(&at25->lock);
 
-       *cp++ = instr;
-
-       /* 8/16/24-bit address is written MSB first */
-       switch (at25->addrlen) {
-       default:        /* case 3 */
-               *cp++ = offset >> 16;
-               fallthrough;
-       case 2:
-               *cp++ = offset >> 8;
-               fallthrough;
-       case 1:
-       case 0: /* can't happen: for better code generation */
-               *cp++ = offset >> 0;
-       }
+               *cp++ = instr;
 
-       spi_message_init(&m);
-       memset(t, 0, sizeof(t));
+               /* 8/16/24-bit address is written MSB first */
+               switch (at25->addrlen) {
+               default:        /* case 3 */
+                       *cp++ = msg_offset >> 16;
+                       fallthrough;
+               case 2:
+                       *cp++ = msg_offset >> 8;
+                       fallthrough;
+               case 1:
+               case 0: /* can't happen: for better code generation */
+                       *cp++ = msg_offset >> 0;
+               }
 
-       t[0].tx_buf = at25->command;
-       t[0].len = at25->addrlen + 1;
-       spi_message_add_tail(&t[0], &m);
+               spi_message_init(&m);
+               memset(t, 0, sizeof(t));
 
-       t[1].rx_buf = buf;
-       t[1].len = count;
-       spi_message_add_tail(&t[1], &m);
+               t[0].tx_buf = at25->command;
+               t[0].len = at25->addrlen + 1;
+               spi_message_add_tail(&t[0], &m);
 
-       /*
-        * Read it all at once.
-        *
-        * REVISIT that's potentially a problem with large chips, if
-        * other devices on the bus need to be accessed regularly or
-        * this chip is clocked very slowly.
-        */
-       status = spi_sync(at25->spi, &m);
-       dev_dbg(&at25->spi->dev, "read %zu bytes at %d --> %zd\n",
-               count, offset, status);
+               t[1].rx_buf = buf + nr_bytes;
+               t[1].len = msg_count;
+               spi_message_add_tail(&t[1], &m);
 
-       mutex_unlock(&at25->lock);
-       return status;
+               status = spi_sync(at25->spi, &m);
+
+               mutex_unlock(&at25->lock);
+
+               if (status)
+                       return status;
+
+               --num_msgs;
+               msg_offset += msg_count;
+               nr_bytes += msg_count;
+       }
+
+       dev_dbg(&at25->spi->dev, "read %zu bytes at %d\n",
+               count, offset);
+       return 0;
 }
 
 /* Read extra registers as ID or serial number */
@@ -190,6 +200,7 @@ ATTRIBUTE_GROUPS(sernum);
 static int at25_ee_write(void *priv, unsigned int off, void *val, size_t count)
 {
        struct at25_data *at25 = priv;
+       size_t maxsz = spi_max_transfer_size(at25->spi);
        const char *buf = val;
        int                     status = 0;
        unsigned                buf_size;
@@ -253,6 +264,8 @@ static int at25_ee_write(void *priv, unsigned int off, void *val, size_t count)
                segment = buf_size - (offset % buf_size);
                if (segment > count)
                        segment = count;
+               if (segment > maxsz)
+                       segment = maxsz;
                memcpy(cp, buf, segment);
                status = spi_write(at25->spi, bounce,
                                segment + at25->addrlen + 1);
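
The at25 read path above is restructured to respect spi_max_transfer_size(): a read of count bytes becomes DIV_ROUND_UP(count, max_chunk) SPI messages, each re-sending the command/address header at its own offset (the write path gets the same clamp via maxsz). A standalone sketch of the per-chunk bookkeeping; note it clamps the final chunk on each iteration, whereas the hunk computes msg_count once before the loop:

#include <linux/math.h>
#include <linux/minmax.h>
#include <linux/types.h>

/* hypothetical walk: issue one transfer per chunk of at most max_chunk */
static void ex_walk_chunks(unsigned int offset, size_t count,
			   size_t max_chunk)
{
	size_t num_msgs = DIV_ROUND_UP(count, max_chunk);
	size_t nr_bytes = 0;

	while (num_msgs--) {
		size_t msg_count = min(count - nr_bytes, max_chunk);

		/* ...build and spi_sync() one message at (offset) here... */
		offset += msg_count;
		nr_bytes += msg_count;
	}
}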
index cebcca6..cf2b826 100644 (file)
@@ -1351,7 +1351,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
 
                if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
                    dev->hbm_state != MEI_HBM_CAP_SETUP) {
-                       if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+                       if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+                           dev->dev_state == MEI_DEV_POWERING_DOWN) {
                                dev_dbg(dev->dev, "hbm: capabilities response: on shutdown, ignoring\n");
                                return 0;
                        }
index 64ce3f8..15e8e2b 100644 (file)
 #define MEI_DEV_ID_ADP_P      0x51E0  /* Alder Lake Point P */
 #define MEI_DEV_ID_ADP_N      0x54E0  /* Alder Lake Point N */
 
+#define MEI_DEV_ID_RPL_S      0x7A68  /* Raptor Lake Point S */
+
 /*
  * MEI HW Section
  */
index 9870bf7..befa491 100644 (file)
@@ -1154,6 +1154,8 @@ static int mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
                        ret = mei_me_d0i3_exit_sync(dev);
                        if (ret)
                                return ret;
+               } else {
+                       hw->pg_state = MEI_PG_OFF;
                }
        }
 
index 33e5882..5435604 100644 (file)
@@ -116,6 +116,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
        {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
        {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)},
 
+       {MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
+
        /* required last entry */
        {0, }
 };
index 195dc89..9da4489 100644 (file)
@@ -1356,7 +1356,7 @@ static void msdc_data_xfer_next(struct msdc_host *host, struct mmc_request *mrq)
                msdc_request_done(host, mrq);
 }
 
-static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
+static void msdc_data_xfer_done(struct msdc_host *host, u32 events,
                                struct mmc_request *mrq, struct mmc_data *data)
 {
        struct mmc_command *stop;
@@ -1376,7 +1376,7 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
        spin_unlock_irqrestore(&host->lock, flags);
 
        if (done)
-               return true;
+               return;
        stop = data->stop;
 
        if (check_data || (stop && stop->error)) {
@@ -1385,12 +1385,15 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
                sdr_set_field(host->base + MSDC_DMA_CTRL, MSDC_DMA_CTRL_STOP,
                                1);
 
+               ret = readl_poll_timeout_atomic(host->base + MSDC_DMA_CTRL, val,
+                                               !(val & MSDC_DMA_CTRL_STOP), 1, 20000);
+               if (ret)
+                       dev_dbg(host->dev, "DMA stop timed out\n");
+
                ret = readl_poll_timeout_atomic(host->base + MSDC_DMA_CFG, val,
                                                !(val & MSDC_DMA_CFG_STS), 1, 20000);
-               if (ret) {
-                       dev_dbg(host->dev, "DMA stop timed out\n");
-                       return false;
-               }
+               if (ret)
+                       dev_dbg(host->dev, "DMA inactive timed out\n");
 
                sdr_clr_bits(host->base + MSDC_INTEN, data_ints_mask);
                dev_dbg(host->dev, "DMA stop\n");
@@ -1415,9 +1418,7 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
                }
 
                msdc_data_xfer_next(host, mrq);
-               done = true;
        }
-       return done;
 }
 
 static void msdc_set_buswidth(struct msdc_host *host, u32 width)
@@ -2416,6 +2417,9 @@ static void msdc_cqe_disable(struct mmc_host *mmc, bool recovery)
        if (recovery) {
                sdr_set_field(host->base + MSDC_DMA_CTRL,
                              MSDC_DMA_CTRL_STOP, 1);
+               if (WARN_ON(readl_poll_timeout(host->base + MSDC_DMA_CTRL, val,
+                       !(val & MSDC_DMA_CTRL_STOP), 1, 3000)))
+                       return;
                if (WARN_ON(readl_poll_timeout(host->base + MSDC_DMA_CFG, val,
                        !(val & MSDC_DMA_CFG_STS), 1, 3000)))
                        return;
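
The mtk-sd hunks above split DMA teardown into two polls: first wait for MSDC_DMA_CTRL_STOP to self-clear, then for MSDC_DMA_CFG_STS to report the engine idle, logging (rather than aborting on) a timeout at each stage. A sketch of the two-stage quiesce, with hypothetical bit positions:

#include <linux/bits.h>
#include <linux/iopoll.h>
#include <linux/printk.h>

static void ex_dma_quiesce(void __iomem *ctrl, void __iomem *cfg)
{
	u32 val;

	/* stage 1: the STOP request bit clears once the engine accepts it */
	if (readl_poll_timeout_atomic(ctrl, val, !(val & BIT(0)), 1, 20000))
		pr_debug("DMA stop timed out\n");

	/* stage 2: the status bit drops once the engine is truly idle */
	if (readl_poll_timeout_atomic(cfg, val, !(val & BIT(1)), 1, 20000))
		pr_debug("DMA inactive timed out\n");
}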
index 92c20cb..0d4d343 100644 (file)
@@ -152,6 +152,8 @@ static int sdhci_o2_get_cd(struct mmc_host *mmc)
 
        if (!(sdhci_readw(host, O2_PLL_DLL_WDT_CONTROL1) & O2_PLL_LOCK_STATUS))
                sdhci_o2_enable_internal_clock(host);
+       else
+               sdhci_o2_wait_card_detect_stable(host);
 
        return !!(sdhci_readl(host, SDHCI_PRESENT_STATE) & SDHCI_CARD_PRESENT);
 }
index 0b68d05..889e403 100644 (file)
@@ -890,7 +890,7 @@ static int gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
        hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) |
                      BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) |
                      BF_GPMI_TIMING0_DATA_SETUP(data_setup_cycles);
-       hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(busy_timeout_cycles * 4096);
+       hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(DIV_ROUND_UP(busy_timeout_cycles, 4096));
 
        /*
         * Derive NFC ideal delay from {3}:
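
The gpmi-nand one-liner above flips the direction of a unit conversion: BUSY_TIMEOUT is programmed in units of 4096 GPMI cycles, so a raw cycle count must be divided (rounding up, so the timeout never undershoots), not multiplied. For example, 10000 cycles becomes DIV_ROUND_UP(10000, 4096) = 3 units, i.e. 12288 cycles >= 10000. A trivial sketch, assuming that unit:

#include <linux/math.h>
#include <linux/types.h>

/* cycles -> register units of 4096 cycles, rounded up */
static u32 ex_busy_timeout_units(u32 busy_timeout_cycles)
{
	return DIV_ROUND_UP(busy_timeout_cycles, 4096);
}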
index 88c2440..dacc552 100644 (file)
@@ -29,9 +29,6 @@ struct nand_flash_dev nand_flash_ids[] = {
        {"TC58NVG0S3E 1G 3.3V 8-bit",
                { .id = {0x98, 0xd1, 0x90, 0x15, 0x76, 0x14, 0x01, 0x00} },
                  SZ_2K, SZ_128, SZ_128K, 0, 8, 64, NAND_ECC_INFO(1, SZ_512), },
-       {"TC58NVG0S3HTA00 1G 3.3V 8-bit",
-               { .id = {0x98, 0xf1, 0x80, 0x15} },
-                 SZ_2K, SZ_128, SZ_128K, 0, 4, 128, NAND_ECC_INFO(8, SZ_512), },
        {"TC58NVG2S0F 4G 3.3V 8-bit",
                { .id = {0x98, 0xdc, 0x90, 0x26, 0x76, 0x15, 0x01, 0x08} },
                  SZ_4K, SZ_512, SZ_256K, 0, 8, 224, NAND_ECC_INFO(4, SZ_512) },
index b2a4f99..94c8898 100644 (file)
@@ -94,6 +94,7 @@ config WIREGUARD
        select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
        select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
        select CRYPTO_POLY1305_MIPS if MIPS
+       select CRYPTO_CHACHA_S390 if S390
        help
          WireGuard is a secure, fast, and easy to use replacement for IPSec
          that uses modern cryptography and clever networking tricks. It's
@@ -499,6 +500,8 @@ config NET_SB1000
 
 source "drivers/net/phy/Kconfig"
 
+source "drivers/net/can/Kconfig"
+
 source "drivers/net/mctp/Kconfig"
 
 source "drivers/net/mdio/Kconfig"
index be2719a..732f4c0 100644 (file)
@@ -1373,11 +1373,11 @@ static void amt_add_srcs(struct amt_dev *amt, struct amt_tunnel_list *tunnel,
        int i;
 
        if (!v6) {
-               igmp_grec = (struct igmpv3_grec *)grec;
+               igmp_grec = grec;
                nsrcs = ntohs(igmp_grec->grec_nsrcs);
        } else {
 #if IS_ENABLED(CONFIG_IPV6)
-               mld_grec = (struct mld2_grec *)grec;
+               mld_grec = grec;
                nsrcs = ntohs(mld_grec->grec_nsrcs);
 #else
        return;
@@ -1458,11 +1458,11 @@ static void amt_lookup_act_srcs(struct amt_tunnel_list *tunnel,
        int i, j;
 
        if (!v6) {
-               igmp_grec = (struct igmpv3_grec *)grec;
+               igmp_grec = grec;
                nsrcs = ntohs(igmp_grec->grec_nsrcs);
        } else {
 #if IS_ENABLED(CONFIG_IPV6)
-               mld_grec = (struct mld2_grec *)grec;
+               mld_grec = grec;
                nsrcs = ntohs(mld_grec->grec_nsrcs);
 #else
        return;
index a86b1f7..d7fb33c 100644 (file)
@@ -2228,7 +2228,8 @@ void bond_3ad_unbind_slave(struct slave *slave)
                                temp_aggregator->num_of_ports--;
                                if (__agg_active_ports(temp_aggregator) == 0) {
                                        select_new_active_agg = temp_aggregator->is_active;
-                                       ad_clear_agg(temp_aggregator);
+                                       if (temp_aggregator->num_of_ports == 0)
+                                               ad_clear_agg(temp_aggregator);
                                        if (select_new_active_agg) {
                                                slave_info(bond->dev, slave->dev, "Removing an active aggregator\n");
                                                /* select new active aggregator */
index 303c8d3..007d43e 100644 (file)
@@ -1302,12 +1302,12 @@ int bond_alb_initialize(struct bonding *bond, int rlb_enabled)
                return res;
 
        if (rlb_enabled) {
-               bond->alb_info.rlb_enabled = 1;
                res = rlb_initialize(bond);
                if (res) {
                        tlb_deinitialize(bond);
                        return res;
                }
+               bond->alb_info.rlb_enabled = 1;
        } else {
                bond->alb_info.rlb_enabled = 0;
        }
index 3d42718..e75acb1 100644 (file)
@@ -1026,12 +1026,38 @@ out:
 
 }
 
+/**
+ * bond_choose_primary_or_current - select the primary or high priority slave
+ * @bond: our bonding struct
+ *
+ * - Check if there is a primary link. If the primary link is set and up,
+ *   go on and do link reselection.
+ *
+ * - If the primary link is not set or is down, find the highest priority
+ *   link. If the highest priority link is not the current slave, set it as
+ *   the primary link and do link reselection.
+ */
 static struct slave *bond_choose_primary_or_current(struct bonding *bond)
 {
        struct slave *prim = rtnl_dereference(bond->primary_slave);
        struct slave *curr = rtnl_dereference(bond->curr_active_slave);
+       struct slave *slave, *hprio = NULL;
+       struct list_head *iter;
 
        if (!prim || prim->link != BOND_LINK_UP) {
+               bond_for_each_slave(bond, slave, iter) {
+                       if (slave->link == BOND_LINK_UP) {
+                               hprio = hprio ?: slave;
+                               if (slave->prio > hprio->prio)
+                                       hprio = slave;
+                       }
+               }
+
+               if (hprio && hprio != curr) {
+                       prim = hprio;
+                       goto link_reselect;
+               }
+
                if (!curr || curr->link != BOND_LINK_UP)
                        return NULL;
                return curr;
@@ -1042,6 +1068,7 @@ static struct slave *bond_choose_primary_or_current(struct bonding *bond)
                return prim;
        }
 
+link_reselect:
        if (!curr || curr->link != BOND_LINK_UP)
                return prim;
 
@@ -3684,9 +3711,11 @@ re_arm:
                if (!rtnl_trylock())
                        return;
 
-               if (should_notify_peers)
+               if (should_notify_peers) {
+                       bond->send_peer_notif--;
                        call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
                                                 bond->dev);
+               }
                if (should_notify_rtnl) {
                        bond_slave_state_notify(bond);
                        bond_slave_link_notify(bond);
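
The highest-prio scan added to bond_choose_primary_or_current() above pairs with the prio option plumbed in through the bond_netlink.c and bond_options.c hunks that follow: when no primary is configured, failover picks the up slave with the largest prio. A freestanding sketch of that scan, with hypothetical types:

#include <linux/stddef.h>
#include <linux/types.h>

struct ex_slave {
	bool link_up;
	int prio;			/* IFLA_BOND_SLAVE_PRIO is s32 */
	struct ex_slave *next;
};

/* highest-priority up slave; ties keep the earliest one found */
static struct ex_slave *ex_pick_highest_prio(struct ex_slave *head)
{
	struct ex_slave *s, *hprio = NULL;

	for (s = head; s; s = s->next) {
		if (!s->link_up)
			continue;
		if (!hprio || s->prio > hprio->prio)
			hprio = s;
	}
	return hprio;
}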
index 5a6f444..c2d080f 100644 (file)
@@ -27,6 +27,7 @@ static size_t bond_get_slave_size(const struct net_device *bond_dev,
                nla_total_size(sizeof(u16)) +   /* IFLA_BOND_SLAVE_AD_AGGREGATOR_ID */
                nla_total_size(sizeof(u8)) +    /* IFLA_BOND_SLAVE_AD_ACTOR_OPER_PORT_STATE */
                nla_total_size(sizeof(u16)) +   /* IFLA_BOND_SLAVE_AD_PARTNER_OPER_PORT_STATE */
+               nla_total_size(sizeof(s32)) +   /* IFLA_BOND_SLAVE_PRIO */
                0;
 }
 
@@ -53,6 +54,9 @@ static int bond_fill_slave_info(struct sk_buff *skb,
        if (nla_put_u16(skb, IFLA_BOND_SLAVE_QUEUE_ID, slave->queue_id))
                goto nla_put_failure;
 
+       if (nla_put_s32(skb, IFLA_BOND_SLAVE_PRIO, slave->prio))
+               goto nla_put_failure;
+
        if (BOND_MODE(slave->bond) == BOND_MODE_8023AD) {
                const struct aggregator *agg;
                const struct port *ad_port;
@@ -117,6 +121,7 @@ static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
 
 static const struct nla_policy bond_slave_policy[IFLA_BOND_SLAVE_MAX + 1] = {
        [IFLA_BOND_SLAVE_QUEUE_ID]      = { .type = NLA_U16 },
+       [IFLA_BOND_SLAVE_PRIO]          = { .type = NLA_S32 },
 };
 
 static int bond_validate(struct nlattr *tb[], struct nlattr *data[],
@@ -157,6 +162,16 @@ static int bond_slave_changelink(struct net_device *bond_dev,
                        return err;
        }
 
+       if (data[IFLA_BOND_SLAVE_PRIO]) {
+               int prio = nla_get_s32(data[IFLA_BOND_SLAVE_PRIO]);
+
+               bond_opt_slave_initval(&newval, &slave_dev, prio);
+               err = __bond_opt_set(bond, BOND_OPT_PRIO, &newval,
+                                    data[IFLA_BOND_SLAVE_PRIO], extack);
+               if (err)
+                       return err;
+       }
+
        return 0;
 }
 
index 96eef19..3498db1 100644 (file)
@@ -40,6 +40,8 @@ static int bond_option_arp_validate_set(struct bonding *bond,
                                        const struct bond_opt_value *newval);
 static int bond_option_arp_all_targets_set(struct bonding *bond,
                                           const struct bond_opt_value *newval);
+static int bond_option_prio_set(struct bonding *bond,
+                               const struct bond_opt_value *newval);
 static int bond_option_primary_set(struct bonding *bond,
                                   const struct bond_opt_value *newval);
 static int bond_option_primary_reselect_set(struct bonding *bond,
@@ -365,6 +367,16 @@ static const struct bond_option bond_opts[BOND_OPT_LAST] = {
                .values = bond_intmax_tbl,
                .set = bond_option_miimon_set
        },
+       [BOND_OPT_PRIO] = {
+               .id = BOND_OPT_PRIO,
+               .name = "prio",
+               .desc = "Link priority for failover re-selection",
+               .flags = BOND_OPTFLAG_RAWVAL,
+               .unsuppmodes = BOND_MODE_ALL_EX(BIT(BOND_MODE_ACTIVEBACKUP) |
+                                               BIT(BOND_MODE_TLB) |
+                                               BIT(BOND_MODE_ALB)),
+               .set = bond_option_prio_set
+       },
        [BOND_OPT_PRIMARY] = {
                .id = BOND_OPT_PRIMARY,
                .name = "primary",
@@ -1306,6 +1318,27 @@ static int bond_option_missed_max_set(struct bonding *bond,
        return 0;
 }
 
+static int bond_option_prio_set(struct bonding *bond,
+                               const struct bond_opt_value *newval)
+{
+       struct slave *slave;
+
+       slave = bond_slave_get_rtnl(newval->slave_dev);
+       if (!slave) {
+               netdev_dbg(newval->slave_dev, "%s called on NULL slave\n", __func__);
+               return -ENODEV;
+       }
+       slave->prio = newval->value;
+
+       if (rtnl_dereference(bond->primary_slave))
+               slave_warn(bond->dev, slave->dev,
+                          "prio updated, but will not affect failover re-selection as primary slave has been set\n");
+       else
+               bond_select_active_slave(bond);
+
+       return 0;
+}
+
 static int bond_option_primary_set(struct bonding *bond,
                                   const struct bond_opt_value *newval)
 {
index 5458f57..0b0f234 100644 (file)
@@ -722,13 +722,21 @@ static int cfv_probe(struct virtio_device *vdev)
        /* Carrier is off until netdevice is opened */
        netif_carrier_off(netdev);
 
+       /* serialize netdev register + virtio_device_ready() with ndo_open() */
+       rtnl_lock();
+
        /* register Netdev */
-       err = register_netdev(netdev);
+       err = register_netdevice(netdev);
        if (err) {
+               rtnl_unlock();
                dev_err(&vdev->dev, "Unable to register netdev (%d)\n", err);
                goto err;
        }
 
+       virtio_device_ready(vdev);
+
+       rtnl_unlock();
+
        debugfs_init(cfv);
 
        return 0;
index b2dcc1e..3048ad7 100644 (file)
@@ -1,5 +1,26 @@
 # SPDX-License-Identifier: GPL-2.0-only
-menu "CAN Device Drivers"
+
+menuconfig CAN_DEV
+       tristate "CAN Device Drivers"
+       default y
+       depends on CAN
+       help
+         Controller Area Network (CAN) is a serial communications protocol
+         running at up to 1Mbit/s in its original release (now known as
+         Classical CAN) and at up to 8Mbit/s in the more recent CAN with
+         Flexible Data-Rate (CAN-FD). The CAN bus was originally designed
+         mainly for automotive use, but is now widely used in marine
+         (NMEA2000), industrial, and medical applications. More information
+         on the CAN network protocol family PF_CAN is contained in
+         <Documentation/networking/can.rst>.
+
+         This section contains all the CAN(-FD) device drivers including the
+         virtual ones. If you own such devices or plan to use the virtual CAN
+         interfaces to develop applications, say Y here.
+
+         To compile as a module, choose M here: the module will be called
+         can-dev.
+
+if CAN_DEV
 
 config CAN_VCAN
        tristate "Virtual Local CAN Interface (vcan)"
@@ -28,35 +49,22 @@ config CAN_VXCAN
          This driver can also be built as a module.  If so, the module
          will be called vxcan.
 
-config CAN_SLCAN
-       tristate "Serial / USB serial CAN Adaptors (slcan)"
-       depends on TTY
+config CAN_NETLINK
+       bool "CAN device drivers with Netlink support"
+       default y
        help
-         CAN driver for several 'low cost' CAN interfaces that are attached
-         via serial lines or via USB-to-serial adapters using the LAWICEL
-         ASCII protocol. The driver implements the tty linediscipline N_SLCAN.
+         Enables the common framework for CAN device drivers. This is the
+         standard library and provides features for the Netlink interface such
+         as bittiming validation, support for CAN error states, device
+         restart, and others.
 
-         As only the sending and receiving of CAN frames is implemented, this
-         driver should work with the (serial/USB) CAN hardware from:
-         www.canusb.com / www.can232.com / www.mictronics.de / www.canhack.de
-
-         Userspace tools to attach the SLCAN line discipline (slcan_attach,
-         slcand) can be found in the can-utils at the linux-can project, see
-         https://github.com/linux-can/can-utils for details.
-
-         The slcan driver supports up to 10 CAN netdevices by default which
-         can be changed by the 'maxdev=xx' module option. This driver can
-         also be built as a module. If so, the module will be called slcan.
+         The additional features selected by this option will be added to the
+         can-dev module.
 
-config CAN_DEV
-       tristate "Platform CAN drivers with Netlink support"
-       default y
-       help
-         Enables the common framework for platform CAN drivers with Netlink
-         support. This is the standard library for CAN drivers.
-         If unsure, say Y.
+         This is required by all platform and hardware CAN drivers. If you
+         plan to use such devices or if unsure, say Y.
 
-if CAN_DEV
+if CAN_NETLINK
 
 config CAN_CALC_BITTIMING
        bool "CAN bit-timing calculation"
@@ -69,8 +77,15 @@ config CAN_CALC_BITTIMING
          source clock frequencies. Disabling saves some space, but then the
          bit-timing parameters must be specified directly using the Netlink
          arguments "tq", "prop_seg", "phase_seg1", "phase_seg2" and "sjw".
+
+         The additional features selected by this option will be added to the
+         can-dev module.
+
          If unsure, say Y.
 
+config CAN_RX_OFFLOAD
+       bool
+
 config CAN_AT91
        tristate "Atmel AT91 onchip CAN controller"
        depends on (ARCH_AT91 || COMPILE_TEST) && HAS_IOMEM
@@ -78,10 +93,29 @@ config CAN_AT91
          This is a driver for the SoC CAN controller in Atmel's AT91SAM9263
          and AT91SAM9X5 processors.
 
+config CAN_CAN327
+       tristate "Serial / USB serial ELM327 based OBD-II Interfaces (can327)"
+       depends on TTY
+       select CAN_RX_OFFLOAD
+       help
+         CAN driver for several 'low cost' OBD-II interfaces based on the
+         ELM327 OBD-II interpreter chip.
+
+         This is a best-effort driver - the ELM327 interface was never
+         designed to be used as a standalone CAN interface. However, it can
+         still be used for simple request-response protocols (such as OBD II),
+         and to monitor broadcast messages on a bus (such as in a vehicle).
+
+         Please refer to the documentation for information on how to use it:
+         Documentation/networking/device_drivers/can/can327.rst
+
+         If this driver is built as a module, it will be called can327.
+
 config CAN_FLEXCAN
        tristate "Support for Freescale FLEXCAN based chips"
        depends on OF || COLDFIRE || COMPILE_TEST
        depends on HAS_IOMEM
+       select CAN_RX_OFFLOAD
        help
          Say Y here if you want to support for Freescale FlexCAN.
 
@@ -118,6 +152,26 @@ config CAN_KVASER_PCIEFD
            Kvaser Mini PCI Express HS v2
            Kvaser Mini PCI Express 2xHS v2
 
+config CAN_SLCAN
+       tristate "Serial / USB serial CAN Adaptors (slcan)"
+       depends on TTY
+       help
+         CAN driver for several 'low cost' CAN interfaces that are attached
+         via serial lines or via USB-to-serial adapters using the LAWICEL
+         ASCII protocol. The driver implements the tty linediscipline N_SLCAN.
+
+         As only the sending and receiving of CAN frames is implemented, this
+         driver should work with the (serial/USB) CAN hardware from:
+         www.canusb.com / www.can232.com / www.mictronics.de / www.canhack.de
+
+         Userspace tools to attach the SLCAN line discipline (slcan_attach,
+         slcand) can be found in the can-utils at the linux-can project, see
+         https://github.com/linux-can/can-utils for details.
+
+         The slcan driver supports up to 10 CAN netdevices by default which
+         can be changed by the 'maxdev=xx' module option. This driver can
+         also be built as a module. If so, the module will be called slcan.
+
 config CAN_SUN4I
        tristate "Allwinner A10 CAN controller"
        depends on MACH_SUN4I || MACH_SUN7I || COMPILE_TEST
@@ -131,6 +185,7 @@ config CAN_SUN4I
 config CAN_TI_HECC
        depends on ARM
        tristate "TI High End CAN Controller"
+       select CAN_RX_OFFLOAD
        help
          Driver for TI HECC (High End CAN Controller) module found on many
          TI devices. The device specifications are available from www.ti.com
@@ -164,7 +219,7 @@ source "drivers/net/can/softing/Kconfig"
 source "drivers/net/can/spi/Kconfig"
 source "drivers/net/can/usb/Kconfig"
 
-endif
+endif #CAN_NETLINK
 
 config CAN_DEBUG_DEVICES
        bool "CAN devices debugging messages"
@@ -174,4 +229,4 @@ config CAN_DEBUG_DEVICES
          a problem with CAN support and want to see more of what is going
          on.
 
-endmenu
+endif #CAN_DEV
index 0af8598..61c75ce 100644 (file)
@@ -5,7 +5,7 @@
 
 obj-$(CONFIG_CAN_VCAN)         += vcan.o
 obj-$(CONFIG_CAN_VXCAN)                += vxcan.o
-obj-$(CONFIG_CAN_SLCAN)                += slcan.o
+obj-$(CONFIG_CAN_SLCAN)                += slcan/
 
 obj-y                          += dev/
 obj-y                          += rcar/
@@ -14,6 +14,7 @@ obj-y                         += usb/
 obj-y                          += softing/
 
 obj-$(CONFIG_CAN_AT91)         += at91_can.o
+obj-$(CONFIG_CAN_CAN327)       += can327.o
 obj-$(CONFIG_CAN_CC770)                += cc770/
 obj-$(CONFIG_CAN_C_CAN)                += c_can/
 obj-$(CONFIG_CAN_CTUCANFD)     += ctucanfd/
diff --git a/drivers/net/can/can327.c b/drivers/net/can/can327.c
new file mode 100644 (file)
index 0000000..5da7778
--- /dev/null
@@ -0,0 +1,1137 @@
+// SPDX-License-Identifier: GPL-2.0
+/* ELM327 based CAN interface driver (tty line discipline)
+ *
+ * This driver started as a derivative of linux/drivers/net/can/slcan.c
+ * and my thanks go to the original authors for their inspiration.
+ *
+ * can327.c Author : Max Staudt <max-linux@enpas.org>
+ * slcan.c Author  : Oliver Hartkopp <socketcan@hartkopp.net>
+ * slip.c Authors  : Laurence Culhane <loz@holmes.demon.co.uk>
+ *                   Fred N. van Kempen <waltje@uwalt.nl.mugnet.org>
+ */
+
+#define pr_fmt(fmt) "can327: " fmt
+
+#include <linux/init.h>
+#include <linux/module.h>
+
+#include <linux/bitops.h>
+#include <linux/ctype.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/lockdep.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/tty.h>
+#include <linux/tty_ldisc.h>
+#include <linux/workqueue.h>
+
+#include <uapi/linux/tty.h>
+
+#include <linux/can.h>
+#include <linux/can/dev.h>
+#include <linux/can/error.h>
+#include <linux/can/rx-offload.h>
+
+#define CAN327_NAPI_WEIGHT 4
+
+#define CAN327_SIZE_TXBUF 32
+#define CAN327_SIZE_RXBUF 1024
+
+#define CAN327_CAN_CONFIG_SEND_SFF 0x8000
+#define CAN327_CAN_CONFIG_VARIABLE_DLC 0x4000
+#define CAN327_CAN_CONFIG_RECV_BOTH_SFF_EFF 0x2000
+#define CAN327_CAN_CONFIG_BAUDRATE_MULT_8_7 0x1000
+
+#define CAN327_DUMMY_CHAR 'y'
+#define CAN327_DUMMY_STRING "y"
+#define CAN327_READY_CHAR '>'
+
+/* Bits in elm->cmds_todo */
+enum can327_tx_do {
+       CAN327_TX_DO_CAN_DATA = 0,
+       CAN327_TX_DO_CANID_11BIT,
+       CAN327_TX_DO_CANID_29BIT_LOW,
+       CAN327_TX_DO_CANID_29BIT_HIGH,
+       CAN327_TX_DO_CAN_CONFIG_PART2,
+       CAN327_TX_DO_CAN_CONFIG,
+       CAN327_TX_DO_RESPONSES,
+       CAN327_TX_DO_SILENT_MONITOR,
+       CAN327_TX_DO_INIT,
+};
+
+struct can327 {
+       /* This must be the first member when using alloc_candev() */
+       struct can_priv can;
+
+       struct can_rx_offload offload;
+
+       /* TTY buffers */
+       u8 txbuf[CAN327_SIZE_TXBUF];
+       u8 rxbuf[CAN327_SIZE_RXBUF];
+
+       /* Per-channel lock */
+       spinlock_t lock;
+
+       /* TTY and netdev devices that we're bridging */
+       struct tty_struct *tty;
+       struct net_device *dev;
+
+       /* TTY buffer accounting */
+       struct work_struct tx_work;     /* Flushes TTY TX buffer */
+       u8 *txhead;                     /* Next TX byte */
+       size_t txleft;                  /* Bytes left to TX */
+       int rxfill;                     /* Bytes already RX'd in buffer */
+
+       /* State machine */
+       enum {
+               CAN327_STATE_NOTINIT = 0,
+               CAN327_STATE_GETDUMMYCHAR,
+               CAN327_STATE_GETPROMPT,
+               CAN327_STATE_RECEIVING,
+       } state;
+
+       /* Things we have yet to send */
+       char **next_init_cmd;
+       unsigned long cmds_todo;
+
+       /* The CAN frame and config the ELM327 is sending/using,
+        * or will send/use after finishing all cmds_todo
+        */
+       struct can_frame can_frame_to_send;
+       u16 can_config;
+       u8 can_bitrate_divisor;
+
+       /* Parser state */
+       bool drop_next_line;
+
+       /* Stop the channel on UART side hardware failure, e.g. stray
+        * characters or never-ending lines. This may be caused by bad
+        * UART wiring, a bad ELM327, a bad UART bridge...
+        * Once this is true, nothing will be sent to the TTY.
+        */
+       bool uart_side_failure;
+};
+
+static inline void can327_uart_side_failure(struct can327 *elm);
+
+static void can327_send(struct can327 *elm, const void *buf, size_t len)
+{
+       int written;
+
+       lockdep_assert_held(&elm->lock);
+
+       if (elm->uart_side_failure)
+               return;
+
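+       /* Note: callers must keep len <= CAN327_SIZE_TXBUF; all current
+        * callers pass short fixed strings or local_txbuf, which fits.
+        */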
+       memcpy(elm->txbuf, buf, len);
+
+       /* The order of the next two lines is *very* important.
+        * When we are sending a small amount of data, the transfer may
+        * be completed inside the ops->write() routine, because it runs
+        * with interrupts enabled. In that case we will *never* get a
+        * WRITE_WAKEUP event unless we request it before the write
+        * operation.
+        *       14 Oct 1994  Dmitry Gorodchanin.
+        */
+       set_bit(TTY_DO_WRITE_WAKEUP, &elm->tty->flags);
+       written = elm->tty->ops->write(elm->tty, elm->txbuf, len);
+       if (written < 0) {
+               netdev_err(elm->dev, "Failed to write to tty %s.\n",
+                          elm->tty->name);
+               can327_uart_side_failure(elm);
+               return;
+       }
+
+       elm->txleft = len - written;
+       elm->txhead = elm->txbuf + written;
+}
+
+/* Take the ELM327 out of almost any state and back into command mode.
+ * We send CAN327_DUMMY_CHAR which will either abort any running
+ * operation, or be echoed back to us in case we're already in command
+ * mode.
+ */
+static void can327_kick_into_cmd_mode(struct can327 *elm)
+{
+       lockdep_assert_held(&elm->lock);
+
+       if (elm->state != CAN327_STATE_GETDUMMYCHAR &&
+           elm->state != CAN327_STATE_GETPROMPT) {
+               can327_send(elm, CAN327_DUMMY_STRING, 1);
+
+               elm->state = CAN327_STATE_GETDUMMYCHAR;
+       }
+}
+
+/* Schedule a CAN frame and necessary config changes to be sent to the TTY. */
+static void can327_send_frame(struct can327 *elm, struct can_frame *frame)
+{
+       lockdep_assert_held(&elm->lock);
+
+       /* Schedule any necessary changes in ELM327's CAN configuration */
+       if (elm->can_frame_to_send.can_id != frame->can_id) {
+               /* Set the new CAN ID for transmission. */
+               if ((frame->can_id ^ elm->can_frame_to_send.can_id)
+                   & CAN_EFF_FLAG) {
+                       elm->can_config =
+                               (frame->can_id & CAN_EFF_FLAG ? 0 : CAN327_CAN_CONFIG_SEND_SFF) |
+                               CAN327_CAN_CONFIG_VARIABLE_DLC |
+                               CAN327_CAN_CONFIG_RECV_BOTH_SFF_EFF |
+                               elm->can_bitrate_divisor;
+
+                       set_bit(CAN327_TX_DO_CAN_CONFIG, &elm->cmds_todo);
+               }
+
+               if (frame->can_id & CAN_EFF_FLAG) {
+                       clear_bit(CAN327_TX_DO_CANID_11BIT, &elm->cmds_todo);
+                       set_bit(CAN327_TX_DO_CANID_29BIT_LOW, &elm->cmds_todo);
+                       set_bit(CAN327_TX_DO_CANID_29BIT_HIGH, &elm->cmds_todo);
+               } else {
+                       set_bit(CAN327_TX_DO_CANID_11BIT, &elm->cmds_todo);
+                       clear_bit(CAN327_TX_DO_CANID_29BIT_LOW,
+                                 &elm->cmds_todo);
+                       clear_bit(CAN327_TX_DO_CANID_29BIT_HIGH,
+                                 &elm->cmds_todo);
+               }
+       }
+
+       /* Schedule the CAN frame itself. */
+       elm->can_frame_to_send = *frame;
+       set_bit(CAN327_TX_DO_CAN_DATA, &elm->cmds_todo);
+
+       can327_kick_into_cmd_mode(elm);
+}
+
+/* ELM327 initialisation sequence.
+ * The line length is limited by the buffer in can327_handle_prompt().
+ */
+static char *can327_init_script[] = {
+       "AT WS\r",        /* v1.0: Warm Start */
+       "AT PP FF OFF\r", /* v1.0: All Programmable Parameters Off */
+       "AT M0\r",        /* v1.0: Memory Off */
+       "AT AL\r",        /* v1.0: Allow Long messages */
+       "AT BI\r",        /* v1.0: Bypass Initialisation */
+       "AT CAF0\r",      /* v1.0: CAN Auto Formatting Off */
+       "AT CFC0\r",      /* v1.0: CAN Flow Control Off */
+       "AT CF 000\r",    /* v1.0: Reset CAN ID Filter */
+       "AT CM 000\r",    /* v1.0: Reset CAN ID Mask */
+       "AT E1\r",        /* v1.0: Echo On */
+       "AT H1\r",        /* v1.0: Headers On */
+       "AT L0\r",        /* v1.0: Linefeeds Off */
+       "AT SH 7DF\r",    /* v1.0: Set CAN sending ID to 0x7df */
+       "AT ST FF\r",     /* v1.0: Set maximum Timeout for response after TX */
+       "AT AT0\r",       /* v1.2: Adaptive Timing Off */
+       "AT D1\r",        /* v1.3: Print DLC On */
+       "AT S1\r",        /* v1.3: Spaces On */
+       "AT TP B\r",      /* v1.0: Try Protocol B */
+       NULL
+};
+
+static void can327_init_device(struct can327 *elm)
+{
+       lockdep_assert_held(&elm->lock);
+
+       elm->state = CAN327_STATE_NOTINIT;
+       elm->can_frame_to_send.can_id = 0x7df; /* ELM327 HW default */
+       elm->rxfill = 0;
+       elm->drop_next_line = false;
+
+       /* We can only set the bitrate as a fraction of 500000.
+        * The bitrates listed in can327_bitrate_const will
+        * limit the user to the right values.
+        */
+       elm->can_bitrate_divisor = 500000 / elm->can.bittiming.bitrate;
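+       /* e.g. a 125000 bit/s bitrate gives a divisor of 4 */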
+       elm->can_config =
+               CAN327_CAN_CONFIG_SEND_SFF | CAN327_CAN_CONFIG_VARIABLE_DLC |
+               CAN327_CAN_CONFIG_RECV_BOTH_SFF_EFF | elm->can_bitrate_divisor;
+
+       /* Configure ELM327 and then start monitoring */
+       elm->next_init_cmd = &can327_init_script[0];
+       set_bit(CAN327_TX_DO_INIT, &elm->cmds_todo);
+       set_bit(CAN327_TX_DO_SILENT_MONITOR, &elm->cmds_todo);
+       set_bit(CAN327_TX_DO_RESPONSES, &elm->cmds_todo);
+       set_bit(CAN327_TX_DO_CAN_CONFIG, &elm->cmds_todo);
+
+       can327_kick_into_cmd_mode(elm);
+}
+
+static void can327_feed_frame_to_netdev(struct can327 *elm, struct sk_buff *skb)
+{
+       lockdep_assert_held(&elm->lock);
+
+       if (!netif_running(elm->dev))
+               return;
+
+       /* Queue for NAPI pickup.
+        * rx-offload will update stats and LEDs for us.
+        */
+       if (can_rx_offload_queue_tail(&elm->offload, skb))
+               elm->dev->stats.rx_fifo_errors++;
+
+       /* Wake NAPI */
+       can_rx_offload_irq_finish(&elm->offload);
+}
+
+/* Called when we're out of ideas and just want it all to end. */
+static inline void can327_uart_side_failure(struct can327 *elm)
+{
+       struct can_frame *frame;
+       struct sk_buff *skb;
+
+       lockdep_assert_held(&elm->lock);
+
+       elm->uart_side_failure = true;
+
+       clear_bit(TTY_DO_WRITE_WAKEUP, &elm->tty->flags);
+
+       elm->can.can_stats.bus_off++;
+       netif_stop_queue(elm->dev);
+       elm->can.state = CAN_STATE_BUS_OFF;
+       can_bus_off(elm->dev);
+
+       netdev_err(elm->dev,
+                  "ELM327 misbehaved. Blocking further communication.\n");
+
+       skb = alloc_can_err_skb(elm->dev, &frame);
+       if (!skb)
+               return;
+
+       frame->can_id |= CAN_ERR_BUSOFF;
+       can327_feed_frame_to_netdev(elm, skb);
+}
+
+/* Compares a byte buffer (non-NUL terminated) to the payload part of
+ * a string, and returns true iff the buffer (content *and* length) is
+ * exactly that string, without the terminating NUL byte.
+ *
+ * Example: If reference is "BUS ERROR", then this returns true iff nbytes == 9
+ *          and !memcmp(buf, "BUS ERROR", 9).
+ *
+ * We use strings so that they are easy to include in the C code, and
+ * so that their lengths need not be hardcoded.
+ */
+static inline bool can327_rxbuf_cmp(const u8 *buf, size_t nbytes,
+                                   const char *reference)
+{
+       size_t ref_len = strlen(reference);
+
+       return (nbytes == ref_len) && !memcmp(buf, reference, ref_len);
+}
+
+static void can327_parse_error(struct can327 *elm, size_t len)
+{
+       struct can_frame *frame;
+       struct sk_buff *skb;
+
+       lockdep_assert_held(&elm->lock);
+
+       skb = alloc_can_err_skb(elm->dev, &frame);
+       if (!skb)
+               /* It's okay to return here:
+                * The outer parsing loop will drop this UART buffer.
+                */
+               return;
+
+       /* Filter possible error messages based on length of RX'd line */
+       if (can327_rxbuf_cmp(elm->rxbuf, len, "UNABLE TO CONNECT")) {
+               netdev_err(elm->dev,
+                          "ELM327 reported UNABLE TO CONNECT. Please check your setup.\n");
+       } else if (can327_rxbuf_cmp(elm->rxbuf, len, "BUFFER FULL")) {
+               /* This will only happen if the last data line was complete.
+                * Otherwise, can327_parse_frame() will heuristically
+                * emit this kind of error frame instead.
+                */
+               frame->can_id |= CAN_ERR_CRTL;
+               frame->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+       } else if (can327_rxbuf_cmp(elm->rxbuf, len, "BUS ERROR")) {
+               frame->can_id |= CAN_ERR_BUSERROR;
+       } else if (can327_rxbuf_cmp(elm->rxbuf, len, "CAN ERROR")) {
+               frame->can_id |= CAN_ERR_PROT;
+       } else if (can327_rxbuf_cmp(elm->rxbuf, len, "<RX ERROR")) {
+               frame->can_id |= CAN_ERR_PROT;
+       } else if (can327_rxbuf_cmp(elm->rxbuf, len, "BUS BUSY")) {
+               frame->can_id |= CAN_ERR_PROT;
+               frame->data[2] = CAN_ERR_PROT_OVERLOAD;
+       } else if (can327_rxbuf_cmp(elm->rxbuf, len, "FB ERROR")) {
+               frame->can_id |= CAN_ERR_PROT;
+               frame->data[2] = CAN_ERR_PROT_TX;
+       } else if (len == 5 && !memcmp(elm->rxbuf, "ERR", 3)) {
+               /* ERR is followed by two digits, hence line length 5 */
+               netdev_err(elm->dev, "ELM327 reported an ERR%c%c. Please power it off and on again.\n",
+                          elm->rxbuf[3], elm->rxbuf[4]);
+               frame->can_id |= CAN_ERR_CRTL;
+       } else {
+               /* Something else has happened.
+                * Maybe garbage on the UART line.
+                * Emit a generic error frame.
+                */
+       }
+
+       can327_feed_frame_to_netdev(elm, skb);
+}
+
+/* Parse CAN frames coming as ASCII from ELM327.
+ * They can be of various formats:
+ *
+ * 29-bit ID (EFF):  12 34 56 78 D PL PL PL PL PL PL PL PL
+ * 11-bit ID (!EFF): 123 D PL PL PL PL PL PL PL PL
+ *
+ * where D = DLC, PL = payload byte
+ *
+ * Instead of a payload, RTR indicates a remote request.
+ *
+ * We will use the spaces and line length to guess the format.
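+ *
+ * For example, an 11-bit frame with DLC 8 might look like:
+ * "123 8 DE AD BE EF 12 34 56 78"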
+ */
+static int can327_parse_frame(struct can327 *elm, size_t len)
+{
+       struct can_frame *frame;
+       struct sk_buff *skb;
+       int hexlen;
+       int datastart;
+       int i;
+
+       lockdep_assert_held(&elm->lock);
+
+       skb = alloc_can_skb(elm->dev, &frame);
+       if (!skb)
+               return -ENOMEM;
+
+       /* Find first non-hex and non-space character:
+        *  - In the simplest case, there is none.
+        *  - For RTR frames, 'R' is the first non-hex character.
+        *  - An error message may replace the end of the data line.
+        */
+       for (hexlen = 0; hexlen <= len; hexlen++) {
+               if (hex_to_bin(elm->rxbuf[hexlen]) < 0 &&
+                   elm->rxbuf[hexlen] != ' ') {
+                       break;
+               }
+       }
+
+       /* Sanity check whether the line is really a clean hexdump,
+        * or terminated by an error message, or contains garbage.
+        */
+       if (hexlen < len && !isdigit(elm->rxbuf[hexlen]) &&
+           !isupper(elm->rxbuf[hexlen]) && elm->rxbuf[hexlen] != '<' &&
+           elm->rxbuf[hexlen] != ' ') {
+               /* The line is likely garbled anyway, so bail.
+                * The main code will restart listening.
+                */
+               kfree_skb(skb);
+               return -ENODATA;
+       }
+
+       /* Use spaces in CAN ID to distinguish 29 or 11 bit address length.
+        * No out-of-bounds access:
+        * We use the fact that we can always read from elm->rxbuf.
+        */
+       if (elm->rxbuf[2] == ' ' && elm->rxbuf[5] == ' ' &&
+           elm->rxbuf[8] == ' ' && elm->rxbuf[11] == ' ' &&
+           elm->rxbuf[13] == ' ') {
+               frame->can_id = CAN_EFF_FLAG;
+               datastart = 14;
+       } else if (elm->rxbuf[3] == ' ' && elm->rxbuf[5] == ' ') {
+               datastart = 6;
+       } else {
+               /* This is not a well-formatted data line.
+                * Assume it's an error message.
+                */
+               kfree_skb(skb);
+               return -ENODATA;
+       }
+
+       if (hexlen < datastart) {
+               /* The line is too short to be a valid frame hex dump.
+                * Something interrupted the hex dump or it is invalid.
+                */
+               kfree_skb(skb);
+               return -ENODATA;
+       }
+
+       /* From here on all chars up to buf[hexlen] are hex or spaces,
+        * at well-defined offsets.
+        */
+
+       /* Read CAN data length */
+       frame->len = (hex_to_bin(elm->rxbuf[datastart - 2]) << 0);
+
+       /* Read CAN ID */
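+       /* The nibble offsets skip the space separators: EFF IDs use
+        * chars 0-1, 3-4, 6-7 and 9-10; SFF IDs use chars 0-2.
+        */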
+       if (frame->can_id & CAN_EFF_FLAG) {
+               frame->can_id |= (hex_to_bin(elm->rxbuf[0]) << 28) |
+                                (hex_to_bin(elm->rxbuf[1]) << 24) |
+                                (hex_to_bin(elm->rxbuf[3]) << 20) |
+                                (hex_to_bin(elm->rxbuf[4]) << 16) |
+                                (hex_to_bin(elm->rxbuf[6]) << 12) |
+                                (hex_to_bin(elm->rxbuf[7]) << 8) |
+                                (hex_to_bin(elm->rxbuf[9]) << 4) |
+                                (hex_to_bin(elm->rxbuf[10]) << 0);
+       } else {
+               frame->can_id |= (hex_to_bin(elm->rxbuf[0]) << 8) |
+                                (hex_to_bin(elm->rxbuf[1]) << 4) |
+                                (hex_to_bin(elm->rxbuf[2]) << 0);
+       }
+
+       /* Check for RTR frame */
+       if (elm->rxfill >= hexlen + 3 &&
+           !memcmp(&elm->rxbuf[hexlen], "RTR", 3)) {
+               frame->can_id |= CAN_RTR_FLAG;
+       }
+
+       /* Is the line long enough to hold the advertised payload?
+        * Note: RTR frames have a DLC, but no actual payload.
+        */
+       if (!(frame->can_id & CAN_RTR_FLAG) &&
+           (hexlen < frame->len * 3 + datastart)) {
+               /* Incomplete frame.
+                * Probably the ELM327's RS232 TX buffer was full.
+                * Emit an error frame and exit.
+                */
+               frame->can_id = CAN_ERR_FLAG | CAN_ERR_CRTL;
+               frame->len = CAN_ERR_DLC;
+               frame->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+               can327_feed_frame_to_netdev(elm, skb);
+
+               /* Signal failure to parse.
+                * The line will be re-parsed as an error line, which will fail.
+                * However, this will correctly drop the state machine back into
+                * command mode.
+                */
+               return -ENODATA;
+       }
+
+       /* Parse the data nibbles. */
+       for (i = 0; i < frame->len; i++) {
+               frame->data[i] =
+                       (hex_to_bin(elm->rxbuf[datastart + 3 * i]) << 4) |
+                       (hex_to_bin(elm->rxbuf[datastart + 3 * i + 1]));
+       }
+
+       /* Feed the frame to the network layer. */
+       can327_feed_frame_to_netdev(elm, skb);
+
+       return 0;
+}
+
+static void can327_parse_line(struct can327 *elm, size_t len)
+{
+       lockdep_assert_held(&elm->lock);
+
+       /* Skip empty lines */
+       if (!len)
+               return;
+
+       /* Skip echo lines */
+       if (elm->drop_next_line) {
+               elm->drop_next_line = false;
+               return;
+       } else if (!memcmp(elm->rxbuf, "AT", 2)) {
+               return;
+       }
+
+       /* Regular parsing */
+       if (elm->state == CAN327_STATE_RECEIVING &&
+           can327_parse_frame(elm, len)) {
+               /* Parse an error line. */
+               can327_parse_error(elm, len);
+
+               /* Start afresh. */
+               can327_kick_into_cmd_mode(elm);
+       }
+}
+
+static void can327_handle_prompt(struct can327 *elm)
+{
+       struct can_frame *frame = &elm->can_frame_to_send;
+       /* Size this buffer for the largest ELM327 line we may generate,
+        * which is currently an 8 byte CAN frame's payload hexdump.
+        * Items in can327_init_script must fit here, too!
+        */
+       char local_txbuf[sizeof("0102030405060708\r")];
+
+       lockdep_assert_held(&elm->lock);
+
+       if (!elm->cmds_todo) {
+               /* Enter CAN monitor mode */
+               can327_send(elm, "ATMA\r", 5);
+               elm->state = CAN327_STATE_RECEIVING;
+
+               /* We will be in the default state once this command is
+                * sent, so enable the TX packet queue.
+                */
+               netif_wake_queue(elm->dev);
+
+               return;
+       }
+
+       /* Reconfigure ELM327 step by step as indicated by elm->cmds_todo */
+       if (test_bit(CAN327_TX_DO_INIT, &elm->cmds_todo)) {
+               snprintf(local_txbuf, sizeof(local_txbuf), "%s",
+                        *elm->next_init_cmd);
+
+               elm->next_init_cmd++;
+               if (!(*elm->next_init_cmd)) {
+                       clear_bit(CAN327_TX_DO_INIT, &elm->cmds_todo);
+                       /* Init finished. */
+               }
+
+       } else if (test_and_clear_bit(CAN327_TX_DO_SILENT_MONITOR, &elm->cmds_todo)) {
+               snprintf(local_txbuf, sizeof(local_txbuf),
+                        "ATCSM%i\r",
+                        !!(elm->can.ctrlmode & CAN_CTRLMODE_LISTENONLY));
+
+       } else if (test_and_clear_bit(CAN327_TX_DO_RESPONSES, &elm->cmds_todo)) {
+               snprintf(local_txbuf, sizeof(local_txbuf),
+                        "ATR%i\r",
+                        !(elm->can.ctrlmode & CAN_CTRLMODE_LISTENONLY));
+
+       } else if (test_and_clear_bit(CAN327_TX_DO_CAN_CONFIG, &elm->cmds_todo)) {
+               snprintf(local_txbuf, sizeof(local_txbuf),
+                        "ATPC\r");
+               set_bit(CAN327_TX_DO_CAN_CONFIG_PART2, &elm->cmds_todo);
+
+       } else if (test_and_clear_bit(CAN327_TX_DO_CAN_CONFIG_PART2, &elm->cmds_todo)) {
+               snprintf(local_txbuf, sizeof(local_txbuf),
+                        "ATPB%04X\r",
+                        elm->can_config);
+
+       } else if (test_and_clear_bit(CAN327_TX_DO_CANID_29BIT_HIGH, &elm->cmds_todo)) {
+               snprintf(local_txbuf, sizeof(local_txbuf),
+                        "ATCP%02X\r",
+                        (frame->can_id & CAN_EFF_MASK) >> 24);
+
+       } else if (test_and_clear_bit(CAN327_TX_DO_CANID_29BIT_LOW, &elm->cmds_todo)) {
+               snprintf(local_txbuf, sizeof(local_txbuf),
+                        "ATSH%06X\r",
+                        frame->can_id & CAN_EFF_MASK & ((1 << 24) - 1));
+
+       } else if (test_and_clear_bit(CAN327_TX_DO_CANID_11BIT, &elm->cmds_todo)) {
+               snprintf(local_txbuf, sizeof(local_txbuf),
+                        "ATSH%03X\r",
+                        frame->can_id & CAN_SFF_MASK);
+
+       } else if (test_and_clear_bit(CAN327_TX_DO_CAN_DATA, &elm->cmds_todo)) {
+               if (frame->can_id & CAN_RTR_FLAG) {
+                       /* Send an RTR frame. Their DLC is fixed.
+                        * Some chips don't send them at all.
+                        */
+                       snprintf(local_txbuf, sizeof(local_txbuf), "ATRTR\r");
+               } else {
+                       /* Send a regular CAN data frame */
+                       int i;
+
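+                       /* A Classical CAN frame carries at most 8 data
+                        * bytes, so 2 * 8 hex chars plus "\r" exactly
+                        * fill local_txbuf (see its size above).
+                        */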
+                       for (i = 0; i < frame->len; i++) {
+                               snprintf(&local_txbuf[2 * i],
+                                        sizeof(local_txbuf), "%02X",
+                                        frame->data[i]);
+                       }
+
+                       snprintf(&local_txbuf[2 * i], sizeof(local_txbuf),
+                                "\r");
+               }
+
+               elm->drop_next_line = true;
+               elm->state = CAN327_STATE_RECEIVING;
+
+               /* We will be in the default state once this command is
+                * sent, so enable the TX packet queue.
+                */
+               netif_wake_queue(elm->dev);
+       }
+
+       can327_send(elm, local_txbuf, strlen(local_txbuf));
+}
+
+static bool can327_is_ready_char(char c)
+{
+       /* Bits 0xc0 are sometimes set (randomly), hence the mask.
+        * Probably bad hardware.
+        */
+       return (c & 0x3f) == CAN327_READY_CHAR;
+}
+
+static void can327_drop_bytes(struct can327 *elm, size_t i)
+{
+       lockdep_assert_held(&elm->lock);
+
+       memmove(&elm->rxbuf[0], &elm->rxbuf[i], CAN327_SIZE_RXBUF - i);
+       elm->rxfill -= i;
+}
+
+static void can327_parse_rxbuf(struct can327 *elm, size_t first_new_char_idx)
+{
+       size_t len, pos;
+
+       lockdep_assert_held(&elm->lock);
+
+       switch (elm->state) {
+       case CAN327_STATE_NOTINIT:
+               elm->rxfill = 0;
+               break;
+
+       case CAN327_STATE_GETDUMMYCHAR:
+               /* Wait for 'y' or '>' */
+               for (pos = 0; pos < elm->rxfill; pos++) {
+                       if (elm->rxbuf[pos] == CAN327_DUMMY_CHAR) {
+                               can327_send(elm, "\r", 1);
+                               elm->state = CAN327_STATE_GETPROMPT;
+                               pos++;
+                               break;
+                       } else if (can327_is_ready_char(elm->rxbuf[pos])) {
+                               can327_send(elm, CAN327_DUMMY_STRING, 1);
+                               pos++;
+                               break;
+                       }
+               }
+
+               can327_drop_bytes(elm, pos);
+               break;
+
+       case CAN327_STATE_GETPROMPT:
+               /* Wait for '>' */
+               if (can327_is_ready_char(elm->rxbuf[elm->rxfill - 1]))
+                       can327_handle_prompt(elm);
+
+               elm->rxfill = 0;
+               break;
+
+       case CAN327_STATE_RECEIVING:
+               /* Find <CR> delimiting feedback lines. */
+               len = first_new_char_idx;
+               while (len < elm->rxfill && elm->rxbuf[len] != '\r')
+                       len++;
+
+               if (len == CAN327_SIZE_RXBUF) {
+                       /* Assume the buffer ran full with garbage.
+                        * Did we even connect at the right baud rate?
+                        */
+                       netdev_err(elm->dev,
+                                  "RX buffer overflow. Faulty ELM327 or UART?\n");
+                       can327_uart_side_failure(elm);
+               } else if (len == elm->rxfill) {
+                       if (can327_is_ready_char(elm->rxbuf[elm->rxfill - 1])) {
+                               /* The ELM327's AT ST response timeout ran out,
+                                * so we got a prompt.
+                                * Clear RX buffer and restart listening.
+                                */
+                               elm->rxfill = 0;
+
+                               can327_handle_prompt(elm);
+                       }
+
+                       /* No <CR> found - we haven't received a full line yet.
+                        * Wait for more data.
+                        */
+               } else {
+                       /* We have a full line to parse. */
+                       can327_parse_line(elm, len);
+
+                       /* Remove parsed data from RX buffer. */
+                       can327_drop_bytes(elm, len + 1);
+
+                       /* More data to parse? */
+                       if (elm->rxfill)
+                               can327_parse_rxbuf(elm, 0);
+               }
+       }
+}
+
+static int can327_netdev_open(struct net_device *dev)
+{
+       struct can327 *elm = netdev_priv(dev);
+       int err;
+
+       spin_lock_bh(&elm->lock);
+
+       if (!elm->tty) {
+               spin_unlock_bh(&elm->lock);
+               return -ENODEV;
+       }
+
+       if (elm->uart_side_failure)
+               netdev_warn(elm->dev,
+                           "Reopening netdev after a UART side fault has been detected.\n");
+
+       /* Clear TTY buffers */
+       elm->rxfill = 0;
+       elm->txleft = 0;
+
+       /* open_candev() checks for elm->can.bittiming.bitrate != 0 */
+       err = open_candev(dev);
+       if (err) {
+               spin_unlock_bh(&elm->lock);
+               return err;
+       }
+
+       can327_init_device(elm);
+       spin_unlock_bh(&elm->lock);
+
+       err = can_rx_offload_add_manual(dev, &elm->offload, CAN327_NAPI_WEIGHT);
+       if (err) {
+               close_candev(dev);
+               return err;
+       }
+
+       can_rx_offload_enable(&elm->offload);
+
+       elm->can.state = CAN_STATE_ERROR_ACTIVE;
+       netif_start_queue(dev);
+
+       return 0;
+}
+
+static int can327_netdev_close(struct net_device *dev)
+{
+       struct can327 *elm = netdev_priv(dev);
+
+       /* Interrupt whatever the ELM327 is doing right now */
+       spin_lock_bh(&elm->lock);
+       can327_send(elm, CAN327_DUMMY_STRING, 1);
+       spin_unlock_bh(&elm->lock);
+
+       netif_stop_queue(dev);
+
+       /* Give UART one final chance to flush. */
+       clear_bit(TTY_DO_WRITE_WAKEUP, &elm->tty->flags);
+       flush_work(&elm->tx_work);
+
+       can_rx_offload_disable(&elm->offload);
+       elm->can.state = CAN_STATE_STOPPED;
+       can_rx_offload_del(&elm->offload);
+       close_candev(dev);
+
+       return 0;
+}
+
+/* Send a can_frame to a TTY. */
+static netdev_tx_t can327_netdev_start_xmit(struct sk_buff *skb,
+                                           struct net_device *dev)
+{
+       struct can327 *elm = netdev_priv(dev);
+       struct can_frame *frame = (struct can_frame *)skb->data;
+
+       if (can_dropped_invalid_skb(dev, skb))
+               return NETDEV_TX_OK;
+
+       /* We shouldn't get here after a hardware fault:
+        * can_bus_off() calls netif_carrier_off()
+        */
+       if (elm->uart_side_failure) {
+               WARN_ON_ONCE(elm->uart_side_failure);
+               goto out;
+       }
+
+       netif_stop_queue(dev);
+
+       /* BHs are already disabled, so no spin_lock_bh().
+        * See Documentation/networking/netdevices.rst
+        */
+       spin_lock(&elm->lock);
+       can327_send_frame(elm, frame);
+       spin_unlock(&elm->lock);
+
+       dev->stats.tx_packets++;
+       dev->stats.tx_bytes += frame->can_id & CAN_RTR_FLAG ? 0 : frame->len;
+
+out:
+       kfree_skb(skb);
+       return NETDEV_TX_OK;
+}
+
+static const struct net_device_ops can327_netdev_ops = {
+       .ndo_open = can327_netdev_open,
+       .ndo_stop = can327_netdev_close,
+       .ndo_start_xmit = can327_netdev_start_xmit,
+       .ndo_change_mtu = can_change_mtu,
+};
+
+static bool can327_is_valid_rx_char(u8 c)
+{
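+       /* Lookup table indexed by the received character. Its size is
+        * 'z' (0x7a) entries, since the highest valid character is
+        * CAN327_DUMMY_CHAR ('y'); the BUILD_BUG_ON below enforces this.
+        */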
+       static const bool lut_char_is_valid['z'] = {
+               ['\r'] = true,
+               [' '] = true,
+               ['.'] = true,
+               ['0'] = true, true, true, true, true,
+               ['5'] = true, true, true, true, true,
+               ['<'] = true,
+               [CAN327_READY_CHAR] = true,
+               ['?'] = true,
+               ['A'] = true, true, true, true, true, true, true,
+               ['H'] = true, true, true, true, true, true, true,
+               ['O'] = true, true, true, true, true, true, true,
+               ['V'] = true, true, true, true, true,
+               ['a'] = true,
+               ['b'] = true,
+               ['v'] = true,
+               [CAN327_DUMMY_CHAR] = true,
+       };
+       BUILD_BUG_ON(CAN327_DUMMY_CHAR >= 'z');
+
+       return (c < ARRAY_SIZE(lut_char_is_valid) && lut_char_is_valid[c]);
+}
+
+/* Handle incoming ELM327 ASCII data.
+ * This will not be re-entered while running, but other ldisc
+ * functions may be called in parallel.
+ */
+static void can327_ldisc_rx(struct tty_struct *tty, const unsigned char *cp,
+                           const char *fp, int count)
+{
+       struct can327 *elm = (struct can327 *)tty->disc_data;
+       size_t first_new_char_idx;
+
+       if (elm->uart_side_failure)
+               return;
+
+       spin_lock_bh(&elm->lock);
+
+       /* Store old rxfill, so can327_parse_rxbuf() will have
+        * the option of skipping already checked characters.
+        */
+       first_new_char_idx = elm->rxfill;
+
+       while (count-- && elm->rxfill < CAN327_SIZE_RXBUF) {
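+               /* A non-zero flag byte from the TTY layer marks a
+                * receive error (e.g. framing or overrun) for the
+                * corresponding character.
+                */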
+               if (fp && *fp++) {
+                       netdev_err(elm->dev,
+                                  "Error in received character stream. Check your wiring.");
+
+                       can327_uart_side_failure(elm);
+
+                       spin_unlock_bh(&elm->lock);
+                       return;
+               }
+
+               /* Ignore NUL characters, which the PIC microcontroller may
+                * inadvertently insert due to a known hardware bug.
+                * See ELM327 documentation, which refers to a Microchip PIC
+                * bug description.
+                */
+               if (*cp) {
+                       /* Check for stray characters on the UART line.
+                        * Likely caused by bad hardware.
+                        */
+                       if (!can327_is_valid_rx_char(*cp)) {
+                               netdev_err(elm->dev,
+                                          "Received illegal character %02x.\n",
+                                          *cp);
+                               can327_uart_side_failure(elm);
+
+                               spin_unlock_bh(&elm->lock);
+                               return;
+                       }
+
+                       elm->rxbuf[elm->rxfill++] = *cp;
+               }
+
+               cp++;
+       }
+
+       if (count >= 0) {
+               netdev_err(elm->dev,
+                          "Receive buffer overflowed. Bad chip or wiring? count = %i",
+                          count);
+
+               can327_uart_side_failure(elm);
+
+               spin_unlock_bh(&elm->lock);
+               return;
+       }
+
+       can327_parse_rxbuf(elm, first_new_char_idx);
+       spin_unlock_bh(&elm->lock);
+}
+
+/* Write out remaining transmit buffer.
+ * Scheduled when TTY is writable.
+ */
+static void can327_ldisc_tx_worker(struct work_struct *work)
+{
+       struct can327 *elm = container_of(work, struct can327, tx_work);
+       ssize_t written;
+
+       if (elm->uart_side_failure)
+               return;
+
+       spin_lock_bh(&elm->lock);
+
+       if (elm->txleft) {
+               written = elm->tty->ops->write(elm->tty, elm->txhead,
+                                              elm->txleft);
+               if (written < 0) {
+                       netdev_err(elm->dev, "Failed to write to tty %s.\n",
+                                  elm->tty->name);
+                       can327_uart_side_failure(elm);
+
+                       spin_unlock_bh(&elm->lock);
+                       return;
+               }
+
+               elm->txleft -= written;
+               elm->txhead += written;
+       }
+
+       if (!elm->txleft)
+               clear_bit(TTY_DO_WRITE_WAKEUP, &elm->tty->flags);
+
+       spin_unlock_bh(&elm->lock);
+}
+
+/* Called by the driver when there's room for more data. */
+static void can327_ldisc_tx_wakeup(struct tty_struct *tty)
+{
+       struct can327 *elm = (struct can327 *)tty->disc_data;
+
+       schedule_work(&elm->tx_work);
+}
+
+/* The ELM327 can only handle bitrates that are integer divisors of
+ * 500 kbit/s, or 7/8 of that. Divisors are 1 to 64.
+ * Currently we don't implement support for 7/8 rates.
+ */
+static const u32 can327_bitrate_const[] = {
+       7812,  7936,  8064,  8196,   8333,   8474,   8620,   8771,
+       8928,  9090,  9259,  9433,   9615,   9803,   10000,  10204,
+       10416, 10638, 10869, 11111,  11363,  11627,  11904,  12195,
+       12500, 12820, 13157, 13513,  13888,  14285,  14705,  15151,
+       15625, 16129, 16666, 17241,  17857,  18518,  19230,  20000,
+       20833, 21739, 22727, 23809,  25000,  26315,  27777,  29411,
+       31250, 33333, 35714, 38461,  41666,  45454,  50000,  55555,
+       62500, 71428, 83333, 100000, 125000, 166666, 250000, 500000
+};
+
+static int can327_ldisc_open(struct tty_struct *tty)
+{
+       struct net_device *dev;
+       struct can327 *elm;
+       int err;
+
+       if (!capable(CAP_NET_ADMIN))
+               return -EPERM;
+
+       if (!tty->ops->write)
+               return -EOPNOTSUPP;
+
+       dev = alloc_candev(sizeof(struct can327), 0);
+       if (!dev)
+               return -ENFILE;
+       elm = netdev_priv(dev);
+
+       /* Configure TTY interface */
+       tty->receive_room = 65536; /* We don't do flow control */
+       spin_lock_init(&elm->lock);
+       INIT_WORK(&elm->tx_work, can327_ldisc_tx_worker);
+
+       /* Configure CAN metadata */
+       elm->can.bitrate_const = can327_bitrate_const;
+       elm->can.bitrate_const_cnt = ARRAY_SIZE(can327_bitrate_const);
+       elm->can.ctrlmode_supported = CAN_CTRLMODE_LISTENONLY;
+
+       /* Configure netdev interface */
+       elm->dev = dev;
+       dev->netdev_ops = &can327_netdev_ops;
+
+       /* Mark ldisc channel as alive */
+       elm->tty = tty;
+       tty->disc_data = elm;
+
+       /* Let 'er rip */
+       err = register_candev(elm->dev);
+       if (err) {
+               free_candev(elm->dev);
+               return err;
+       }
+
+       netdev_info(elm->dev, "can327 on %s.\n", tty->name);
+
+       return 0;
+}
+
+/* Close down a can327 channel.
+ * This means flushing out any pending queues, and then returning.
+ * This call is serialized against other ldisc functions:
+ * Once this is called, no other ldisc function of ours is entered.
+ *
+ * We also use this function for a hangup event.
+ */
+static void can327_ldisc_close(struct tty_struct *tty)
+{
+       struct can327 *elm = (struct can327 *)tty->disc_data;
+
+       /* unregister_netdev() calls .ndo_stop() so we don't have to.
+        * Our .ndo_stop() also flushes the TTY write wakeup handler,
+        * so we can safely set elm->tty = NULL after this.
+        */
+       unregister_candev(elm->dev);
+
+       /* Mark channel as dead */
+       spin_lock_bh(&elm->lock);
+       tty->disc_data = NULL;
+       elm->tty = NULL;
+       spin_unlock_bh(&elm->lock);
+
+       netdev_info(elm->dev, "can327 off %s.\n", tty->name);
+
+       free_candev(elm->dev);
+}
+
+static int can327_ldisc_ioctl(struct tty_struct *tty, unsigned int cmd,
+                             unsigned long arg)
+{
+       struct can327 *elm = (struct can327 *)tty->disc_data;
+       unsigned int tmp;
+
+       switch (cmd) {
+       case SIOCGIFNAME:
+               tmp = strnlen(elm->dev->name, IFNAMSIZ - 1) + 1;
+               if (copy_to_user((void __user *)arg, elm->dev->name, tmp))
+                       return -EFAULT;
+               return 0;
+
+       case SIOCSIFHWADDR:
+               return -EINVAL;
+
+       default:
+               return tty_mode_ioctl(tty, cmd, arg);
+       }
+}
+
+static struct tty_ldisc_ops can327_ldisc = {
+       .owner = THIS_MODULE,
+       .name = "can327",
+       .num = N_CAN327,
+       .receive_buf = can327_ldisc_rx,
+       .write_wakeup = can327_ldisc_tx_wakeup,
+       .open = can327_ldisc_open,
+       .close = can327_ldisc_close,
+       .ioctl = can327_ldisc_ioctl,
+};
+
+static int __init can327_init(void)
+{
+       int status;
+
+       status = tty_register_ldisc(&can327_ldisc);
+       if (status)
+               pr_err("Can't register line discipline\n");
+
+       return status;
+}
+
+static void __exit can327_exit(void)
+{
+       /* This will only be called when all channels have been closed by
+        * userspace - tty_ldisc.c takes care of the module's refcount.
+        */
+       tty_unregister_ldisc(&can327_ldisc);
+}
+
+module_init(can327_init);
+module_exit(can327_exit);
+
+MODULE_ALIAS_LDISC(N_CAN327);
+MODULE_DESCRIPTION("ELM327 based CAN interface");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Max Staudt <max@enpas.org>");
index 64990bf..14ac7c0 100644 (file)
@@ -1087,7 +1087,7 @@ clear:
 /**
  * ctucan_interrupt() - CAN Isr
  * @irq:       irq number
- * @dev_id:    device id poniter
+ * @dev_id:    device id pointer
  *
  * This is the CTU CAN FD ISR. It checks for the type of interrupt
  * and invokes the corresponding ISR.
index af2901d..633687d 100644 (file)
@@ -1,9 +1,12 @@
 # SPDX-License-Identifier: GPL-2.0
 
-obj-$(CONFIG_CAN_DEV)          += can-dev.o
-can-dev-y                      += bittiming.o
-can-dev-y                      += dev.o
-can-dev-y                      += length.o
-can-dev-y                      += netlink.o
-can-dev-y                      += rx-offload.o
-can-dev-y                       += skb.o
+obj-$(CONFIG_CAN_DEV) += can-dev.o
+
+can-dev-y += skb.o
+
+can-dev-$(CONFIG_CAN_CALC_BITTIMING) += calc_bittiming.o
+can-dev-$(CONFIG_CAN_NETLINK) += bittiming.o
+can-dev-$(CONFIG_CAN_NETLINK) += dev.o
+can-dev-$(CONFIG_CAN_NETLINK) += length.o
+can-dev-$(CONFIG_CAN_NETLINK) += netlink.o
+can-dev-$(CONFIG_CAN_RX_OFFLOAD) += rx-offload.o
index c1e76f0..7ae8076 100644 (file)
@@ -4,205 +4,8 @@
  * Copyright (C) 2008-2009 Wolfgang Grandegger <wg@grandegger.com>
  */
 
-#include <linux/units.h>
 #include <linux/can/dev.h>
 
-#ifdef CONFIG_CAN_CALC_BITTIMING
-#define CAN_CALC_MAX_ERROR 50 /* in one-tenth of a percent */
-
-/* Bit-timing calculation derived from:
- *
- * Code based on LinCAN sources and H8S2638 project
- * Copyright 2004-2006 Pavel Pisa - DCE FELK CVUT cz
- * Copyright 2005      Stanislav Marek
- * email: pisa@cmp.felk.cvut.cz
- *
- * Calculates proper bit-timing parameters for a specified bit-rate
- * and sample-point, which can then be used to set the bit-timing
- * registers of the CAN controller. You can find more information
- * in the header file linux/can/netlink.h.
- */
-static int
-can_update_sample_point(const struct can_bittiming_const *btc,
-                       const unsigned int sample_point_nominal, const unsigned int tseg,
-                       unsigned int *tseg1_ptr, unsigned int *tseg2_ptr,
-                       unsigned int *sample_point_error_ptr)
-{
-       unsigned int sample_point_error, best_sample_point_error = UINT_MAX;
-       unsigned int sample_point, best_sample_point = 0;
-       unsigned int tseg1, tseg2;
-       int i;
-
-       for (i = 0; i <= 1; i++) {
-               tseg2 = tseg + CAN_SYNC_SEG -
-                       (sample_point_nominal * (tseg + CAN_SYNC_SEG)) /
-                       1000 - i;
-               tseg2 = clamp(tseg2, btc->tseg2_min, btc->tseg2_max);
-               tseg1 = tseg - tseg2;
-               if (tseg1 > btc->tseg1_max) {
-                       tseg1 = btc->tseg1_max;
-                       tseg2 = tseg - tseg1;
-               }
-
-               sample_point = 1000 * (tseg + CAN_SYNC_SEG - tseg2) /
-                       (tseg + CAN_SYNC_SEG);
-               sample_point_error = abs(sample_point_nominal - sample_point);
-
-               if (sample_point <= sample_point_nominal &&
-                   sample_point_error < best_sample_point_error) {
-                       best_sample_point = sample_point;
-                       best_sample_point_error = sample_point_error;
-                       *tseg1_ptr = tseg1;
-                       *tseg2_ptr = tseg2;
-               }
-       }
-
-       if (sample_point_error_ptr)
-               *sample_point_error_ptr = best_sample_point_error;
-
-       return best_sample_point;
-}
-
-int can_calc_bittiming(const struct net_device *dev, struct can_bittiming *bt,
-                      const struct can_bittiming_const *btc)
-{
-       struct can_priv *priv = netdev_priv(dev);
-       unsigned int bitrate;                   /* current bitrate */
-       unsigned int bitrate_error;             /* difference between current and nominal value */
-       unsigned int best_bitrate_error = UINT_MAX;
-       unsigned int sample_point_error;        /* difference between current and nominal value */
-       unsigned int best_sample_point_error = UINT_MAX;
-       unsigned int sample_point_nominal;      /* nominal sample point */
-       unsigned int best_tseg = 0;             /* current best value for tseg */
-       unsigned int best_brp = 0;              /* current best value for brp */
-       unsigned int brp, tsegall, tseg, tseg1 = 0, tseg2 = 0;
-       u64 v64;
-
-       /* Use CiA recommended sample points */
-       if (bt->sample_point) {
-               sample_point_nominal = bt->sample_point;
-       } else {
-               if (bt->bitrate > 800 * KILO /* BPS */)
-                       sample_point_nominal = 750;
-               else if (bt->bitrate > 500 * KILO /* BPS */)
-                       sample_point_nominal = 800;
-               else
-                       sample_point_nominal = 875;
-       }
-
-       /* tseg even = round down, odd = round up */
-       for (tseg = (btc->tseg1_max + btc->tseg2_max) * 2 + 1;
-            tseg >= (btc->tseg1_min + btc->tseg2_min) * 2; tseg--) {
-               tsegall = CAN_SYNC_SEG + tseg / 2;
-
-               /* Compute all possible tseg choices (tseg=tseg1+tseg2) */
-               brp = priv->clock.freq / (tsegall * bt->bitrate) + tseg % 2;
-
-               /* choose brp step which is possible in system */
-               brp = (brp / btc->brp_inc) * btc->brp_inc;
-               if (brp < btc->brp_min || brp > btc->brp_max)
-                       continue;
-
-               bitrate = priv->clock.freq / (brp * tsegall);
-               bitrate_error = abs(bt->bitrate - bitrate);
-
-               /* tseg brp biterror */
-               if (bitrate_error > best_bitrate_error)
-                       continue;
-
-               /* reset sample point error if we have a better bitrate */
-               if (bitrate_error < best_bitrate_error)
-                       best_sample_point_error = UINT_MAX;
-
-               can_update_sample_point(btc, sample_point_nominal, tseg / 2,
-                                       &tseg1, &tseg2, &sample_point_error);
-               if (sample_point_error >= best_sample_point_error)
-                       continue;
-
-               best_sample_point_error = sample_point_error;
-               best_bitrate_error = bitrate_error;
-               best_tseg = tseg / 2;
-               best_brp = brp;
-
-               if (bitrate_error == 0 && sample_point_error == 0)
-                       break;
-       }
-
-       if (best_bitrate_error) {
-               /* Error in one-tenth of a percent */
-               v64 = (u64)best_bitrate_error * 1000;
-               do_div(v64, bt->bitrate);
-               bitrate_error = (u32)v64;
-               if (bitrate_error > CAN_CALC_MAX_ERROR) {
-                       netdev_err(dev,
-                                  "bitrate error %d.%d%% too high\n",
-                                  bitrate_error / 10, bitrate_error % 10);
-                       return -EDOM;
-               }
-               netdev_warn(dev, "bitrate error %d.%d%%\n",
-                           bitrate_error / 10, bitrate_error % 10);
-       }
-
-       /* real sample point */
-       bt->sample_point = can_update_sample_point(btc, sample_point_nominal,
-                                                  best_tseg, &tseg1, &tseg2,
-                                                  NULL);
-
-       v64 = (u64)best_brp * 1000 * 1000 * 1000;
-       do_div(v64, priv->clock.freq);
-       bt->tq = (u32)v64;
-       bt->prop_seg = tseg1 / 2;
-       bt->phase_seg1 = tseg1 - bt->prop_seg;
-       bt->phase_seg2 = tseg2;
-
-       /* check for sjw user settings */
-       if (!bt->sjw || !btc->sjw_max) {
-               bt->sjw = 1;
-       } else {
-               /* bt->sjw is at least 1 -> sanitize upper bound to sjw_max */
-               if (bt->sjw > btc->sjw_max)
-                       bt->sjw = btc->sjw_max;
-               /* bt->sjw must not be higher than tseg2 */
-               if (tseg2 < bt->sjw)
-                       bt->sjw = tseg2;
-       }
-
-       bt->brp = best_brp;
-
-       /* real bitrate */
-       bt->bitrate = priv->clock.freq /
-               (bt->brp * (CAN_SYNC_SEG + tseg1 + tseg2));
-
-       return 0;
-}
-
-void can_calc_tdco(struct can_tdc *tdc, const struct can_tdc_const *tdc_const,
-                  const struct can_bittiming *dbt,
-                  u32 *ctrlmode, u32 ctrlmode_supported)
-
-{
-       if (!tdc_const || !(ctrlmode_supported & CAN_CTRLMODE_TDC_AUTO))
-               return;
-
-       *ctrlmode &= ~CAN_CTRLMODE_TDC_MASK;
-
-       /* As specified in ISO 11898-1 section 11.3.3 "Transmitter
-        * delay compensation" (TDC) is only applicable if data BRP is
-        * one or two.
-        */
-       if (dbt->brp == 1 || dbt->brp == 2) {
-               /* Sample point in clock periods */
-               u32 sample_point_in_tc = (CAN_SYNC_SEG + dbt->prop_seg +
-                                         dbt->phase_seg1) * dbt->brp;
-
-               if (sample_point_in_tc < tdc_const->tdco_min)
-                       return;
-               tdc->tdco = min(sample_point_in_tc, tdc_const->tdco_max);
-               *ctrlmode |= CAN_CTRLMODE_TDC_AUTO;
-       }
-}
-#endif /* CONFIG_CAN_CALC_BITTIMING */
-
 /* Checks the validity of the specified bit-timing parameters prop_seg,
  * phase_seg1, phase_seg2 and sjw and tries to determine the bitrate
  * prescaler value brp. You can find more information in the header
diff --git a/drivers/net/can/dev/calc_bittiming.c b/drivers/net/can/dev/calc_bittiming.c
new file mode 100644 (file)
index 0000000..d3caa04
--- /dev/null
@@ -0,0 +1,202 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2005 Marc Kleine-Budde, Pengutronix
+ * Copyright (C) 2006 Andrey Volkov, Varma Electronics
+ * Copyright (C) 2008-2009 Wolfgang Grandegger <wg@grandegger.com>
+ */
+
+#include <linux/units.h>
+#include <linux/can/dev.h>
+
+#define CAN_CALC_MAX_ERROR 50 /* in one-tenth of a percent */
+
+/* Bit-timing calculation derived from:
+ *
+ * Code based on LinCAN sources and H8S2638 project
+ * Copyright 2004-2006 Pavel Pisa - DCE FELK CVUT cz
+ * Copyright 2005      Stanislav Marek
+ * email: pisa@cmp.felk.cvut.cz
+ *
+ * Calculates proper bit-timing parameters for a specified bit-rate
+ * and sample-point, which can then be used to set the bit-timing
+ * registers of the CAN controller. You can find more information
+ * in the header file linux/can/netlink.h.
+ */
+static int
+can_update_sample_point(const struct can_bittiming_const *btc,
+                       const unsigned int sample_point_nominal, const unsigned int tseg,
+                       unsigned int *tseg1_ptr, unsigned int *tseg2_ptr,
+                       unsigned int *sample_point_error_ptr)
+{
+       unsigned int sample_point_error, best_sample_point_error = UINT_MAX;
+       unsigned int sample_point, best_sample_point = 0;
+       unsigned int tseg1, tseg2;
+       int i;
+
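+       /* Evaluate two neighbouring tseg2 candidates (i = 0 and 1) and
+        * keep the one whose sample point is closest to the nominal
+        * value without exceeding it.
+        */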
+       for (i = 0; i <= 1; i++) {
+               tseg2 = tseg + CAN_SYNC_SEG -
+                       (sample_point_nominal * (tseg + CAN_SYNC_SEG)) /
+                       1000 - i;
+               tseg2 = clamp(tseg2, btc->tseg2_min, btc->tseg2_max);
+               tseg1 = tseg - tseg2;
+               if (tseg1 > btc->tseg1_max) {
+                       tseg1 = btc->tseg1_max;
+                       tseg2 = tseg - tseg1;
+               }
+
+               sample_point = 1000 * (tseg + CAN_SYNC_SEG - tseg2) /
+                       (tseg + CAN_SYNC_SEG);
+               sample_point_error = abs(sample_point_nominal - sample_point);
+
+               if (sample_point <= sample_point_nominal &&
+                   sample_point_error < best_sample_point_error) {
+                       best_sample_point = sample_point;
+                       best_sample_point_error = sample_point_error;
+                       *tseg1_ptr = tseg1;
+                       *tseg2_ptr = tseg2;
+               }
+       }
+
+       if (sample_point_error_ptr)
+               *sample_point_error_ptr = best_sample_point_error;
+
+       return best_sample_point;
+}
+
+int can_calc_bittiming(const struct net_device *dev, struct can_bittiming *bt,
+                      const struct can_bittiming_const *btc)
+{
+       struct can_priv *priv = netdev_priv(dev);
+       unsigned int bitrate;                   /* current bitrate */
+       unsigned int bitrate_error;             /* difference between current and nominal value */
+       unsigned int best_bitrate_error = UINT_MAX;
+       unsigned int sample_point_error;        /* difference between current and nominal value */
+       unsigned int best_sample_point_error = UINT_MAX;
+       unsigned int sample_point_nominal;      /* nominal sample point */
+       unsigned int best_tseg = 0;             /* current best value for tseg */
+       unsigned int best_brp = 0;              /* current best value for brp */
+       unsigned int brp, tsegall, tseg, tseg1 = 0, tseg2 = 0;
+       u64 v64;
+
+       /* Use CiA recommended sample points */
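+       /* Sample point values are in tenths of a percent: 875 means
+        * 87.5%.
+        */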
+       if (bt->sample_point) {
+               sample_point_nominal = bt->sample_point;
+       } else {
+               if (bt->bitrate > 800 * KILO /* BPS */)
+                       sample_point_nominal = 750;
+               else if (bt->bitrate > 500 * KILO /* BPS */)
+                       sample_point_nominal = 800;
+               else
+                       sample_point_nominal = 875;
+       }
+
+       /* tseg even = round down, odd = round up */
+       for (tseg = (btc->tseg1_max + btc->tseg2_max) * 2 + 1;
+            tseg >= (btc->tseg1_min + btc->tseg2_min) * 2; tseg--) {
+               tsegall = CAN_SYNC_SEG + tseg / 2;
+
+               /* Compute all possible tseg choices (tseg=tseg1+tseg2) */
+               brp = priv->clock.freq / (tsegall * bt->bitrate) + tseg % 2;
+
+               /* choose brp step which is possible in system */
+               brp = (brp / btc->brp_inc) * btc->brp_inc;
+               if (brp < btc->brp_min || brp > btc->brp_max)
+                       continue;
+
+               bitrate = priv->clock.freq / (brp * tsegall);
+               bitrate_error = abs(bt->bitrate - bitrate);
+
+               /* tseg brp biterror */
+               if (bitrate_error > best_bitrate_error)
+                       continue;
+
+               /* reset sample point error if we have a better bitrate */
+               if (bitrate_error < best_bitrate_error)
+                       best_sample_point_error = UINT_MAX;
+
+               can_update_sample_point(btc, sample_point_nominal, tseg / 2,
+                                       &tseg1, &tseg2, &sample_point_error);
+               if (sample_point_error >= best_sample_point_error)
+                       continue;
+
+               best_sample_point_error = sample_point_error;
+               best_bitrate_error = bitrate_error;
+               best_tseg = tseg / 2;
+               best_brp = brp;
+
+               if (bitrate_error == 0 && sample_point_error == 0)
+                       break;
+       }
+
+       if (best_bitrate_error) {
+               /* Error in one-tenth of a percent */
+               v64 = (u64)best_bitrate_error * 1000;
+               do_div(v64, bt->bitrate);
+               bitrate_error = (u32)v64;
+               if (bitrate_error > CAN_CALC_MAX_ERROR) {
+                       netdev_err(dev,
+                                  "bitrate error %d.%d%% too high\n",
+                                  bitrate_error / 10, bitrate_error % 10);
+                       return -EDOM;
+               }
+               netdev_warn(dev, "bitrate error %d.%d%%\n",
+                           bitrate_error / 10, bitrate_error % 10);
+       }
+
+       /* real sample point */
+       bt->sample_point = can_update_sample_point(btc, sample_point_nominal,
+                                                  best_tseg, &tseg1, &tseg2,
+                                                  NULL);
+
+       v64 = (u64)best_brp * 1000 * 1000 * 1000;
+       do_div(v64, priv->clock.freq);
+       bt->tq = (u32)v64;
+       bt->prop_seg = tseg1 / 2;
+       bt->phase_seg1 = tseg1 - bt->prop_seg;
+       bt->phase_seg2 = tseg2;
+
+       /* check for sjw user settings */
+       if (!bt->sjw || !btc->sjw_max) {
+               bt->sjw = 1;
+       } else {
+               /* bt->sjw is at least 1 -> sanitize upper bound to sjw_max */
+               if (bt->sjw > btc->sjw_max)
+                       bt->sjw = btc->sjw_max;
+               /* bt->sjw must not be higher than tseg2 */
+               if (tseg2 < bt->sjw)
+                       bt->sjw = tseg2;
+       }
+
+       bt->brp = best_brp;
+
+       /* real bitrate */
+       bt->bitrate = priv->clock.freq /
+               (bt->brp * (CAN_SYNC_SEG + tseg1 + tseg2));
+
+       return 0;
+}
+
+void can_calc_tdco(struct can_tdc *tdc, const struct can_tdc_const *tdc_const,
+                  const struct can_bittiming *dbt,
+                  u32 *ctrlmode, u32 ctrlmode_supported)
+
+{
+       if (!tdc_const || !(ctrlmode_supported & CAN_CTRLMODE_TDC_AUTO))
+               return;
+
+       *ctrlmode &= ~CAN_CTRLMODE_TDC_MASK;
+
+       /* As specified in ISO 11898-1 section 11.3.3 "Transmitter
+        * delay compensation" (TDC) is only applicable if data BRP is
+        * one or two.
+        */
+       if (dbt->brp == 1 || dbt->brp == 2) {
+               /* Sample point in clock periods */
+               u32 sample_point_in_tc = (CAN_SYNC_SEG + dbt->prop_seg +
+                                         dbt->phase_seg1) * dbt->brp;
+
+               if (sample_point_in_tc < tdc_const->tdco_min)
+                       return;
+               tdc->tdco = min(sample_point_in_tc, tdc_const->tdco_max);
+               *ctrlmode |= CAN_CTRLMODE_TDC_AUTO;
+       }
+}
index 96c9d9d..523eaac 100644 (file)
@@ -4,7 +4,6 @@
  * Copyright (C) 2008-2009 Wolfgang Grandegger <wg@grandegger.com>
  */
 
-#include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/netdevice.h>
 #include <linux/gpio/consumer.h>
 #include <linux/of.h>
 
-#define MOD_DESC "CAN device driver interface"
-
-MODULE_DESCRIPTION(MOD_DESC);
-MODULE_LICENSE("GPL v2");
-MODULE_AUTHOR("Wolfgang Grandegger <wg@grandegger.com>");
-
 static void can_update_state_error_stats(struct net_device *dev,
                                         enum can_state new_state)
 {
@@ -513,7 +506,7 @@ static __init int can_dev_init(void)
 
        err = can_netlink_register();
        if (!err)
-               pr_info(MOD_DESC "\n");
+               pr_info("CAN device driver interface\n");
 
        return err;
 }
index 7633d98..8efa22d 100644
@@ -176,7 +176,8 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
                 * directly via do_set_bitrate(). Bail out if neither
                 * is given.
                 */
-               if (!priv->bittiming_const && !priv->do_set_bittiming)
+               if (!priv->bittiming_const && !priv->do_set_bittiming &&
+                   !priv->bitrate_const)
                        return -EOPNOTSUPP;
 
                memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
@@ -278,7 +279,8 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
                 * directly via do_set_bitrate(). Bail out if neither
                 * is given.
                 */
-               if (!priv->data_bittiming_const && !priv->do_set_data_bittiming)
+               if (!priv->data_bittiming_const && !priv->do_set_data_bittiming &&
+                   !priv->data_bitrate_const)
                        return -EOPNOTSUPP;
 
                memcpy(&dbt, nla_data(data[IFLA_CAN_DATA_BITTIMING]),
@@ -509,7 +511,8 @@ static int can_fill_info(struct sk_buff *skb, const struct net_device *dev)
        if (priv->do_get_state)
                priv->do_get_state(dev, &state);
 
-       if ((priv->bittiming.bitrate &&
+       if ((priv->bittiming.bitrate != CAN_BITRATE_UNSET &&
+            priv->bittiming.bitrate != CAN_BITRATE_UNKNOWN &&
             nla_put(skb, IFLA_CAN_BITTIMING,
                     sizeof(priv->bittiming), &priv->bittiming)) ||
 
index 6166024..8bb62dd 100644
@@ -5,6 +5,14 @@
  */
 
 #include <linux/can/dev.h>
+#include <linux/can/netlink.h>
+#include <linux/module.h>
+
+#define MOD_DESC "CAN device driver interface"
+
+MODULE_DESCRIPTION(MOD_DESC);
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Wolfgang Grandegger <wg@grandegger.com>");
 
 /* Local echo of CAN messages
  *
@@ -252,3 +260,67 @@ struct sk_buff *alloc_can_err_skb(struct net_device *dev, struct can_frame **cf)
        return skb;
 }
 EXPORT_SYMBOL_GPL(alloc_can_err_skb);
+
+/* Check for outgoing skbs that have not been created by the CAN subsystem */
+static bool can_skb_headroom_valid(struct net_device *dev, struct sk_buff *skb)
+{
+       /* af_packet creates a headroom of HH_DATA_MOD bytes which is fine */
+       if (WARN_ON_ONCE(skb_headroom(skb) < sizeof(struct can_skb_priv)))
+               return false;
+
+       /* af_packet does not apply CAN skb specific settings */
+       if (skb->ip_summed == CHECKSUM_NONE) {
+               /* init headroom */
+               can_skb_prv(skb)->ifindex = dev->ifindex;
+               can_skb_prv(skb)->skbcnt = 0;
+
+               skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+               /* perform proper loopback on capable devices */
+               if (dev->flags & IFF_ECHO)
+                       skb->pkt_type = PACKET_LOOPBACK;
+               else
+                       skb->pkt_type = PACKET_HOST;
+
+               skb_reset_mac_header(skb);
+               skb_reset_network_header(skb);
+               skb_reset_transport_header(skb);
+       }
+
+       return true;
+}
+
+/* Drop a given socket buffer if it does not contain a valid CAN frame. */
+bool can_dropped_invalid_skb(struct net_device *dev, struct sk_buff *skb)
+{
+       const struct canfd_frame *cfd = (struct canfd_frame *)skb->data;
+       struct can_priv *priv = netdev_priv(dev);
+
+       if (skb->protocol == htons(ETH_P_CAN)) {
+               if (unlikely(skb->len != CAN_MTU ||
+                            cfd->len > CAN_MAX_DLEN))
+                       goto inval_skb;
+       } else if (skb->protocol == htons(ETH_P_CANFD)) {
+               if (unlikely(skb->len != CANFD_MTU ||
+                            cfd->len > CANFD_MAX_DLEN))
+                       goto inval_skb;
+       } else {
+               goto inval_skb;
+       }
+
+       if (!can_skb_headroom_valid(dev, skb)) {
+               goto inval_skb;
+       } else if (priv->ctrlmode & CAN_CTRLMODE_LISTENONLY) {
+               netdev_info_once(dev,
+                                "interface in listen only mode, dropping skb\n");
+               goto inval_skb;
+       }
+
+       return false;
+
+inval_skb:
+       kfree_skb(skb);
+       dev->stats.tx_dropped++;
+       return true;
+}
+EXPORT_SYMBOL_GPL(can_dropped_invalid_skb);
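The helper is meant to sit at the top of a CAN driver's ndo_start_xmit(); a
minimal kernel-style sketch of the calling convention, using a hypothetical
example_xmit() handler (the reworked slcan driver further down does the same
in slc_xmit()):

#include <linux/can/dev.h>

static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* frees the skb and bumps dev->stats.tx_dropped when it drops */
	if (can_dropped_invalid_skb(dev, skb))
		return NETDEV_TX_OK;

	/* ... hand the validated frame to the hardware from here on ... */
	return NETDEV_TX_OK;
}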
index 76df480..4c47c10 100644
@@ -1646,7 +1646,6 @@ static int grcan_probe(struct platform_device *ofdev)
         */
        sysid_parent = of_find_node_by_path("/ambapp0");
        if (sysid_parent) {
-               of_node_get(sysid_parent);
                err = of_property_read_u32(sysid_parent, "systemid", &sysid);
                if (!err && ((sysid & GRLIB_VERSION_MASK) >=
                             GRCAN_TXBUG_SAFE_GRLIB_VERSION))
index 45ad1b3..fc2afab 100644
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 menuconfig CAN_M_CAN
        tristate "Bosch M_CAN support"
+       select CAN_RX_OFFLOAD
        help
          Say Y here if you want support for Bosch M_CAN controller framework.
          This is common support for devices that embed the Bosch M_CAN IP.
index 5d0c82d..afaaeb6 100644
@@ -529,7 +529,7 @@ static int m_can_read_fifo(struct net_device *dev, u32 rxfs)
        /* acknowledge rx fifo 0 */
        m_can_write(cdev, M_CAN_RXF0A, fgi);
 
-       timestamp = FIELD_GET(RX_BUF_RXTS_MASK, fifo_header.dlc);
+       timestamp = FIELD_GET(RX_BUF_RXTS_MASK, fifo_header.dlc) << 16;
 
        m_can_receive_skb(cdev, skb, timestamp);
 
@@ -1030,7 +1030,7 @@ static int m_can_echo_tx_event(struct net_device *dev)
                }
 
                msg_mark = FIELD_GET(TX_EVENT_MM_MASK, txe);
-               timestamp = FIELD_GET(TX_EVENT_TXTS_MASK, txe);
+               timestamp = FIELD_GET(TX_EVENT_TXTS_MASK, txe) << 16;
 
                /* ack txe element */
                m_can_write(cdev, M_CAN_TXEFA, FIELD_PREP(TXEFA_EFAI_MASK,
@@ -1348,10 +1348,12 @@ static void m_can_chip_config(struct net_device *dev)
        /* set bittiming params */
        m_can_set_bittiming(dev);
 
-       /* enable internal timestamp generation, with a prescalar of 16. The
-        * prescalar is applied to the nominal bit timing
+       /* enable internal timestamp generation, with a prescaler of 16. The
+        * prescaler is applied to the nominal bit timing
         */
-       m_can_write(cdev, M_CAN_TSCC, FIELD_PREP(TSCC_TCP_MASK, 0xf));
+       m_can_write(cdev, M_CAN_TSCC,
+                   FIELD_PREP(TSCC_TCP_MASK, 0xf) |
+                   FIELD_PREP(TSCC_TSS_MASK, TSCC_TSS_INTERNAL));
 
        m_can_config_endisable(cdev, false);
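The TSCC write above now also selects the internal counter (TSS_INTERNAL), so
the register value finally matches the comment, and the two timestamp reads
earlier in this file gained a "<< 16". A hedged reading of that shift, as a
standalone sketch: the hardware counter is only 16 bits wide, and moving it
into the upper half of a 32-bit value keeps its wrap-around ordering intact
for whoever compares the widened timestamps.

#include <stdint.h>

static uint32_t widen_m_can_timestamp(uint16_t ts_hw)
{
	return (uint32_t)ts_hw << 16;	/* 16 significant bits, upper half */
}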
 
index 40a1144..ba42cef 100644
@@ -1332,7 +1332,10 @@ static void rcar_canfd_set_bittiming(struct net_device *dev)
                cfg = (RCANFD_DCFG_DTSEG1(gpriv, tseg1) | RCANFD_DCFG_DBRP(brp) |
                       RCANFD_DCFG_DSJW(sjw) | RCANFD_DCFG_DTSEG2(gpriv, tseg2));
 
-               rcar_canfd_write(priv->base, RCANFD_F_DCFG(ch), cfg);
+               if (is_v3u(gpriv))
+                       rcar_canfd_write(priv->base, RCANFD_V3U_DCFG(ch), cfg);
+               else
+                       rcar_canfd_write(priv->base, RCANFD_F_DCFG(ch), cfg);
                netdev_dbg(priv->ndev, "drate: brp %u, sjw %u, tseg1 %u, tseg2 %u\n",
                           brp, sjw, tseg1, tseg2);
        } else {
diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c
deleted file mode 100644
index 64a3aee..0000000
+++ /dev/null
@@ -1,793 +0,0 @@
-/*
- * slcan.c - serial line CAN interface driver (using tty line discipline)
- *
- * This file is derived from linux/drivers/net/slip/slip.c
- *
- * slip.c Authors  : Laurence Culhane <loz@holmes.demon.co.uk>
- *                   Fred N. van Kempen <waltje@uwalt.nl.mugnet.org>
- * slcan.c Author  : Oliver Hartkopp <socketcan@hartkopp.net>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; either version 2 of the License, or (at your
- * option) any later version.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, see http://www.gnu.org/licenses/gpl.html
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
- * DAMAGE.
- *
- */
-
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-
-#include <linux/uaccess.h>
-#include <linux/bitops.h>
-#include <linux/string.h>
-#include <linux/tty.h>
-#include <linux/errno.h>
-#include <linux/netdevice.h>
-#include <linux/skbuff.h>
-#include <linux/rtnetlink.h>
-#include <linux/if_arp.h>
-#include <linux/if_ether.h>
-#include <linux/sched.h>
-#include <linux/delay.h>
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/workqueue.h>
-#include <linux/can.h>
-#include <linux/can/skb.h>
-#include <linux/can/can-ml.h>
-
-MODULE_ALIAS_LDISC(N_SLCAN);
-MODULE_DESCRIPTION("serial line CAN interface");
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Oliver Hartkopp <socketcan@hartkopp.net>");
-
-#define SLCAN_MAGIC 0x53CA
-
-static int maxdev = 10;                /* MAX number of SLCAN channels;
-                                  This can be overridden with
-                                  insmod slcan.ko maxdev=nnn   */
-module_param(maxdev, int, 0);
-MODULE_PARM_DESC(maxdev, "Maximum number of slcan interfaces");
-
-/* maximum rx buffer len: extended CAN frame with timestamp */
-#define SLC_MTU (sizeof("T1111222281122334455667788EA5F\r")+1)
-
-#define SLC_CMD_LEN 1
-#define SLC_SFF_ID_LEN 3
-#define SLC_EFF_ID_LEN 8
-
-struct slcan {
-       int                     magic;
-
-       /* Various fields. */
-       struct tty_struct       *tty;           /* ptr to TTY structure      */
-       struct net_device       *dev;           /* easy for intr handling    */
-       spinlock_t              lock;
-       struct work_struct      tx_work;        /* Flushes transmit buffer   */
-
-       /* These are pointers to the malloc()ed frame buffers. */
-       unsigned char           rbuff[SLC_MTU]; /* receiver buffer           */
-       int                     rcount;         /* received chars counter    */
-       unsigned char           xbuff[SLC_MTU]; /* transmitter buffer        */
-       unsigned char           *xhead;         /* pointer to next XMIT byte */
-       int                     xleft;          /* bytes left in XMIT queue  */
-
-       unsigned long           flags;          /* Flag values/ mode etc     */
-#define SLF_INUSE              0               /* Channel in use            */
-#define SLF_ERROR              1               /* Parity, etc. error        */
-};
-
-static struct net_device **slcan_devs;
-
- /************************************************************************
-  *                    SLCAN ENCAPSULATION FORMAT                       *
-  ************************************************************************/
-
-/*
- * A CAN frame has a can_id (11 bit standard frame format OR 29 bit extended
- * frame format) a data length code (len) which can be from 0 to 8
- * and up to <len> data bytes as payload.
- * Additionally a CAN frame may become a remote transmission frame if the
- * RTR-bit is set. This causes another ECU to send a CAN frame with the
- * given can_id.
- *
- * The SLCAN ASCII representation of these different frame types is:
- * <type> <id> <dlc> <data>*
- *
- * Extended frames (29 bit) are defined by capital characters in the type.
- * RTR frames are defined as 'r' types - normal frames have 't' type:
- * t => 11 bit data frame
- * r => 11 bit RTR frame
- * T => 29 bit data frame
- * R => 29 bit RTR frame
- *
- * The <id> is 3 (standard) or 8 (extended) bytes in ASCII Hex (base64).
- * The <dlc> is a one byte ASCII number ('0' - '8')
- * The <data> section has at much ASCII Hex bytes as defined by the <dlc>
- *
- * Examples:
- *
- * t1230 : can_id 0x123, len 0, no data
- * t4563112233 : can_id 0x456, len 3, data 0x11 0x22 0x33
- * T12ABCDEF2AA55 : extended can_id 0x12ABCDEF, len 2, data 0xAA 0x55
- * r1230 : can_id 0x123, len 0, no data, remote transmission request
- *
- */
-
- /************************************************************************
-  *                    STANDARD SLCAN DECAPSULATION                     *
-  ************************************************************************/
-
-/* Send one completely decapsulated can_frame to the network layer */
-static void slc_bump(struct slcan *sl)
-{
-       struct sk_buff *skb;
-       struct can_frame cf;
-       int i, tmp;
-       u32 tmpid;
-       char *cmd = sl->rbuff;
-
-       memset(&cf, 0, sizeof(cf));
-
-       switch (*cmd) {
-       case 'r':
-               cf.can_id = CAN_RTR_FLAG;
-               fallthrough;
-       case 't':
-               /* store dlc ASCII value and terminate SFF CAN ID string */
-               cf.len = sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN];
-               sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN] = 0;
-               /* point to payload data behind the dlc */
-               cmd += SLC_CMD_LEN + SLC_SFF_ID_LEN + 1;
-               break;
-       case 'R':
-               cf.can_id = CAN_RTR_FLAG;
-               fallthrough;
-       case 'T':
-               cf.can_id |= CAN_EFF_FLAG;
-               /* store dlc ASCII value and terminate EFF CAN ID string */
-               cf.len = sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN];
-               sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN] = 0;
-               /* point to payload data behind the dlc */
-               cmd += SLC_CMD_LEN + SLC_EFF_ID_LEN + 1;
-               break;
-       default:
-               return;
-       }
-
-       if (kstrtou32(sl->rbuff + SLC_CMD_LEN, 16, &tmpid))
-               return;
-
-       cf.can_id |= tmpid;
-
-       /* get len from sanitized ASCII value */
-       if (cf.len >= '0' && cf.len < '9')
-               cf.len -= '0';
-       else
-               return;
-
-       /* RTR frames may have a dlc > 0 but they never have any data bytes */
-       if (!(cf.can_id & CAN_RTR_FLAG)) {
-               for (i = 0; i < cf.len; i++) {
-                       tmp = hex_to_bin(*cmd++);
-                       if (tmp < 0)
-                               return;
-                       cf.data[i] = (tmp << 4);
-                       tmp = hex_to_bin(*cmd++);
-                       if (tmp < 0)
-                               return;
-                       cf.data[i] |= tmp;
-               }
-       }
-
-       skb = dev_alloc_skb(sizeof(struct can_frame) +
-                           sizeof(struct can_skb_priv));
-       if (!skb)
-               return;
-
-       skb->dev = sl->dev;
-       skb->protocol = htons(ETH_P_CAN);
-       skb->pkt_type = PACKET_BROADCAST;
-       skb->ip_summed = CHECKSUM_UNNECESSARY;
-
-       can_skb_reserve(skb);
-       can_skb_prv(skb)->ifindex = sl->dev->ifindex;
-       can_skb_prv(skb)->skbcnt = 0;
-
-       skb_put_data(skb, &cf, sizeof(struct can_frame));
-
-       sl->dev->stats.rx_packets++;
-       if (!(cf.can_id & CAN_RTR_FLAG))
-               sl->dev->stats.rx_bytes += cf.len;
-
-       netif_rx(skb);
-}
-
-/* parse tty input stream */
-static void slcan_unesc(struct slcan *sl, unsigned char s)
-{
-       if ((s == '\r') || (s == '\a')) { /* CR or BEL ends the pdu */
-               if (!test_and_clear_bit(SLF_ERROR, &sl->flags) &&
-                   (sl->rcount > 4))  {
-                       slc_bump(sl);
-               }
-               sl->rcount = 0;
-       } else {
-               if (!test_bit(SLF_ERROR, &sl->flags))  {
-                       if (sl->rcount < SLC_MTU)  {
-                               sl->rbuff[sl->rcount++] = s;
-                               return;
-                       } else {
-                               sl->dev->stats.rx_over_errors++;
-                               set_bit(SLF_ERROR, &sl->flags);
-                       }
-               }
-       }
-}
-
- /************************************************************************
-  *                    STANDARD SLCAN ENCAPSULATION                     *
-  ************************************************************************/
-
-/* Encapsulate one can_frame and stuff into a TTY queue. */
-static void slc_encaps(struct slcan *sl, struct can_frame *cf)
-{
-       int actual, i;
-       unsigned char *pos;
-       unsigned char *endpos;
-       canid_t id = cf->can_id;
-
-       pos = sl->xbuff;
-
-       if (cf->can_id & CAN_RTR_FLAG)
-               *pos = 'R'; /* becomes 'r' in standard frame format (SFF) */
-       else
-               *pos = 'T'; /* becomes 't' in standard frame format (SSF) */
-
-       /* determine number of chars for the CAN-identifier */
-       if (cf->can_id & CAN_EFF_FLAG) {
-               id &= CAN_EFF_MASK;
-               endpos = pos + SLC_EFF_ID_LEN;
-       } else {
-               *pos |= 0x20; /* convert R/T to lower case for SFF */
-               id &= CAN_SFF_MASK;
-               endpos = pos + SLC_SFF_ID_LEN;
-       }
-
-       /* build 3 (SFF) or 8 (EFF) digit CAN identifier */
-       pos++;
-       while (endpos >= pos) {
-               *endpos-- = hex_asc_upper[id & 0xf];
-               id >>= 4;
-       }
-
-       pos += (cf->can_id & CAN_EFF_FLAG) ? SLC_EFF_ID_LEN : SLC_SFF_ID_LEN;
-
-       *pos++ = cf->len + '0';
-
-       /* RTR frames may have a dlc > 0 but they never have any data bytes */
-       if (!(cf->can_id & CAN_RTR_FLAG)) {
-               for (i = 0; i < cf->len; i++)
-                       pos = hex_byte_pack_upper(pos, cf->data[i]);
-
-               sl->dev->stats.tx_bytes += cf->len;
-       }
-
-       *pos++ = '\r';
-
-       /* Order of next two lines is *very* important.
-        * When we are sending a little amount of data,
-        * the transfer may be completed inside the ops->write()
-        * routine, because it's running with interrupts enabled.
-        * In this case we *never* got WRITE_WAKEUP event,
-        * if we did not request it before write operation.
-        *       14 Oct 1994  Dmitry Gorodchanin.
-        */
-       set_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
-       actual = sl->tty->ops->write(sl->tty, sl->xbuff, pos - sl->xbuff);
-       sl->xleft = (pos - sl->xbuff) - actual;
-       sl->xhead = sl->xbuff + actual;
-}
-
-/* Write out any remaining transmit buffer. Scheduled when tty is writable */
-static void slcan_transmit(struct work_struct *work)
-{
-       struct slcan *sl = container_of(work, struct slcan, tx_work);
-       int actual;
-
-       spin_lock_bh(&sl->lock);
-       /* First make sure we're connected. */
-       if (!sl->tty || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev)) {
-               spin_unlock_bh(&sl->lock);
-               return;
-       }
-
-       if (sl->xleft <= 0)  {
-               /* Now serial buffer is almost free & we can start
-                * transmission of another packet */
-               sl->dev->stats.tx_packets++;
-               clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
-               spin_unlock_bh(&sl->lock);
-               netif_wake_queue(sl->dev);
-               return;
-       }
-
-       actual = sl->tty->ops->write(sl->tty, sl->xhead, sl->xleft);
-       sl->xleft -= actual;
-       sl->xhead += actual;
-       spin_unlock_bh(&sl->lock);
-}
-
-/*
- * Called by the driver when there's room for more data.
- * Schedule the transmit.
- */
-static void slcan_write_wakeup(struct tty_struct *tty)
-{
-       struct slcan *sl;
-
-       rcu_read_lock();
-       sl = rcu_dereference(tty->disc_data);
-       if (sl)
-               schedule_work(&sl->tx_work);
-       rcu_read_unlock();
-}
-
-/* Send a can_frame to a TTY queue. */
-static netdev_tx_t slc_xmit(struct sk_buff *skb, struct net_device *dev)
-{
-       struct slcan *sl = netdev_priv(dev);
-
-       if (can_dropped_invalid_skb(dev, skb))
-               return NETDEV_TX_OK;
-
-       spin_lock(&sl->lock);
-       if (!netif_running(dev))  {
-               spin_unlock(&sl->lock);
-               printk(KERN_WARNING "%s: xmit: iface is down\n", dev->name);
-               goto out;
-       }
-       if (sl->tty == NULL) {
-               spin_unlock(&sl->lock);
-               goto out;
-       }
-
-       netif_stop_queue(sl->dev);
-       slc_encaps(sl, (struct can_frame *) skb->data); /* encaps & send */
-       spin_unlock(&sl->lock);
-
-out:
-       kfree_skb(skb);
-       return NETDEV_TX_OK;
-}
-
-
-/******************************************
- *   Routines looking at netdevice side.
- ******************************************/
-
-/* Netdevice UP -> DOWN routine */
-static int slc_close(struct net_device *dev)
-{
-       struct slcan *sl = netdev_priv(dev);
-
-       spin_lock_bh(&sl->lock);
-       if (sl->tty) {
-               /* TTY discipline is running. */
-               clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
-       }
-       netif_stop_queue(dev);
-       sl->rcount   = 0;
-       sl->xleft    = 0;
-       spin_unlock_bh(&sl->lock);
-
-       return 0;
-}
-
-/* Netdevice DOWN -> UP routine */
-static int slc_open(struct net_device *dev)
-{
-       struct slcan *sl = netdev_priv(dev);
-
-       if (sl->tty == NULL)
-               return -ENODEV;
-
-       sl->flags &= (1 << SLF_INUSE);
-       netif_start_queue(dev);
-       return 0;
-}
-
-/* Hook the destructor so we can free slcan devs at the right point in time */
-static void slc_free_netdev(struct net_device *dev)
-{
-       int i = dev->base_addr;
-
-       slcan_devs[i] = NULL;
-}
-
-static int slcan_change_mtu(struct net_device *dev, int new_mtu)
-{
-       return -EINVAL;
-}
-
-static const struct net_device_ops slc_netdev_ops = {
-       .ndo_open               = slc_open,
-       .ndo_stop               = slc_close,
-       .ndo_start_xmit         = slc_xmit,
-       .ndo_change_mtu         = slcan_change_mtu,
-};
-
-static void slc_setup(struct net_device *dev)
-{
-       dev->netdev_ops         = &slc_netdev_ops;
-       dev->needs_free_netdev  = true;
-       dev->priv_destructor    = slc_free_netdev;
-
-       dev->hard_header_len    = 0;
-       dev->addr_len           = 0;
-       dev->tx_queue_len       = 10;
-
-       dev->mtu                = CAN_MTU;
-       dev->type               = ARPHRD_CAN;
-
-       /* New-style flags. */
-       dev->flags              = IFF_NOARP;
-       dev->features           = NETIF_F_HW_CSUM;
-}
-
-/******************************************
-  Routines looking at TTY side.
- ******************************************/
-
-/*
- * Handle the 'receiver data ready' interrupt.
- * This function is called by the 'tty_io' module in the kernel when
- * a block of SLCAN data has been received, which can now be decapsulated
- * and sent on to some IP layer for further processing. This will not
- * be re-entered while running but other ldisc functions may be called
- * in parallel
- */
-
-static void slcan_receive_buf(struct tty_struct *tty,
-                             const unsigned char *cp, const char *fp,
-                             int count)
-{
-       struct slcan *sl = (struct slcan *) tty->disc_data;
-
-       if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev))
-               return;
-
-       /* Read the characters out of the buffer */
-       while (count--) {
-               if (fp && *fp++) {
-                       if (!test_and_set_bit(SLF_ERROR, &sl->flags))
-                               sl->dev->stats.rx_errors++;
-                       cp++;
-                       continue;
-               }
-               slcan_unesc(sl, *cp++);
-       }
-}
-
-/************************************
- *  slcan_open helper routines.
- ************************************/
-
-/* Collect hanged up channels */
-static void slc_sync(void)
-{
-       int i;
-       struct net_device *dev;
-       struct slcan      *sl;
-
-       for (i = 0; i < maxdev; i++) {
-               dev = slcan_devs[i];
-               if (dev == NULL)
-                       break;
-
-               sl = netdev_priv(dev);
-               if (sl->tty)
-                       continue;
-               if (dev->flags & IFF_UP)
-                       dev_close(dev);
-       }
-}
-
-/* Find a free SLCAN channel, and link in this `tty' line. */
-static struct slcan *slc_alloc(void)
-{
-       int i;
-       char name[IFNAMSIZ];
-       struct net_device *dev = NULL;
-       struct can_ml_priv *can_ml;
-       struct slcan       *sl;
-       int size;
-
-       for (i = 0; i < maxdev; i++) {
-               dev = slcan_devs[i];
-               if (dev == NULL)
-                       break;
-
-       }
-
-       /* Sorry, too many, all slots in use */
-       if (i >= maxdev)
-               return NULL;
-
-       sprintf(name, "slcan%d", i);
-       size = ALIGN(sizeof(*sl), NETDEV_ALIGN) + sizeof(struct can_ml_priv);
-       dev = alloc_netdev(size, name, NET_NAME_UNKNOWN, slc_setup);
-       if (!dev)
-               return NULL;
-
-       dev->base_addr  = i;
-       sl = netdev_priv(dev);
-       can_ml = (void *)sl + ALIGN(sizeof(*sl), NETDEV_ALIGN);
-       can_set_ml_priv(dev, can_ml);
-
-       /* Initialize channel control data */
-       sl->magic = SLCAN_MAGIC;
-       sl->dev = dev;
-       spin_lock_init(&sl->lock);
-       INIT_WORK(&sl->tx_work, slcan_transmit);
-       slcan_devs[i] = dev;
-
-       return sl;
-}
-
-/*
- * Open the high-level part of the SLCAN channel.
- * This function is called by the TTY module when the
- * SLCAN line discipline is called for.  Because we are
- * sure the tty line exists, we only have to link it to
- * a free SLCAN channel...
- *
- * Called in process context serialized from other ldisc calls.
- */
-
-static int slcan_open(struct tty_struct *tty)
-{
-       struct slcan *sl;
-       int err;
-
-       if (!capable(CAP_NET_ADMIN))
-               return -EPERM;
-
-       if (tty->ops->write == NULL)
-               return -EOPNOTSUPP;
-
-       /* RTnetlink lock is misused here to serialize concurrent
-          opens of slcan channels. There are better ways, but it is
-          the simplest one.
-        */
-       rtnl_lock();
-
-       /* Collect hanged up channels. */
-       slc_sync();
-
-       sl = tty->disc_data;
-
-       err = -EEXIST;
-       /* First make sure we're not already connected. */
-       if (sl && sl->magic == SLCAN_MAGIC)
-               goto err_exit;
-
-       /* OK.  Find a free SLCAN channel to use. */
-       err = -ENFILE;
-       sl = slc_alloc();
-       if (sl == NULL)
-               goto err_exit;
-
-       sl->tty = tty;
-       tty->disc_data = sl;
-
-       if (!test_bit(SLF_INUSE, &sl->flags)) {
-               /* Perform the low-level SLCAN initialization. */
-               sl->rcount   = 0;
-               sl->xleft    = 0;
-
-               set_bit(SLF_INUSE, &sl->flags);
-
-               err = register_netdevice(sl->dev);
-               if (err)
-                       goto err_free_chan;
-       }
-
-       /* Done.  We have linked the TTY line to a channel. */
-       rtnl_unlock();
-       tty->receive_room = 65536;      /* We don't flow control */
-
-       /* TTY layer expects 0 on success */
-       return 0;
-
-err_free_chan:
-       sl->tty = NULL;
-       tty->disc_data = NULL;
-       clear_bit(SLF_INUSE, &sl->flags);
-       slc_free_netdev(sl->dev);
-       /* do not call free_netdev before rtnl_unlock */
-       rtnl_unlock();
-       free_netdev(sl->dev);
-       return err;
-
-err_exit:
-       rtnl_unlock();
-
-       /* Count references from TTY module */
-       return err;
-}
-
-/*
- * Close down a SLCAN channel.
- * This means flushing out any pending queues, and then returning. This
- * call is serialized against other ldisc functions.
- *
- * We also use this method for a hangup event.
- */
-
-static void slcan_close(struct tty_struct *tty)
-{
-       struct slcan *sl = (struct slcan *) tty->disc_data;
-
-       /* First make sure we're connected. */
-       if (!sl || sl->magic != SLCAN_MAGIC || sl->tty != tty)
-               return;
-
-       spin_lock_bh(&sl->lock);
-       rcu_assign_pointer(tty->disc_data, NULL);
-       sl->tty = NULL;
-       spin_unlock_bh(&sl->lock);
-
-       synchronize_rcu();
-       flush_work(&sl->tx_work);
-
-       /* Flush network side */
-       unregister_netdev(sl->dev);
-       /* This will complete via sl_free_netdev */
-}
-
-static void slcan_hangup(struct tty_struct *tty)
-{
-       slcan_close(tty);
-}
-
-/* Perform I/O control on an active SLCAN channel. */
-static int slcan_ioctl(struct tty_struct *tty, unsigned int cmd,
-                      unsigned long arg)
-{
-       struct slcan *sl = (struct slcan *) tty->disc_data;
-       unsigned int tmp;
-
-       /* First make sure we're connected. */
-       if (!sl || sl->magic != SLCAN_MAGIC)
-               return -EINVAL;
-
-       switch (cmd) {
-       case SIOCGIFNAME:
-               tmp = strlen(sl->dev->name) + 1;
-               if (copy_to_user((void __user *)arg, sl->dev->name, tmp))
-                       return -EFAULT;
-               return 0;
-
-       case SIOCSIFHWADDR:
-               return -EINVAL;
-
-       default:
-               return tty_mode_ioctl(tty, cmd, arg);
-       }
-}
-
-static struct tty_ldisc_ops slc_ldisc = {
-       .owner          = THIS_MODULE,
-       .num            = N_SLCAN,
-       .name           = "slcan",
-       .open           = slcan_open,
-       .close          = slcan_close,
-       .hangup         = slcan_hangup,
-       .ioctl          = slcan_ioctl,
-       .receive_buf    = slcan_receive_buf,
-       .write_wakeup   = slcan_write_wakeup,
-};
-
-static int __init slcan_init(void)
-{
-       int status;
-
-       if (maxdev < 4)
-               maxdev = 4; /* Sanity */
-
-       pr_info("slcan: serial line CAN interface driver\n");
-       pr_info("slcan: %d dynamic interface channels.\n", maxdev);
-
-       slcan_devs = kcalloc(maxdev, sizeof(struct net_device *), GFP_KERNEL);
-       if (!slcan_devs)
-               return -ENOMEM;
-
-       /* Fill in our line protocol discipline, and register it */
-       status = tty_register_ldisc(&slc_ldisc);
-       if (status)  {
-               printk(KERN_ERR "slcan: can't register line discipline\n");
-               kfree(slcan_devs);
-       }
-       return status;
-}
-
-static void __exit slcan_exit(void)
-{
-       int i;
-       struct net_device *dev;
-       struct slcan *sl;
-       unsigned long timeout = jiffies + HZ;
-       int busy = 0;
-
-       if (slcan_devs == NULL)
-               return;
-
-       /* First of all: check for active disciplines and hangup them.
-        */
-       do {
-               if (busy)
-                       msleep_interruptible(100);
-
-               busy = 0;
-               for (i = 0; i < maxdev; i++) {
-                       dev = slcan_devs[i];
-                       if (!dev)
-                               continue;
-                       sl = netdev_priv(dev);
-                       spin_lock_bh(&sl->lock);
-                       if (sl->tty) {
-                               busy++;
-                               tty_hangup(sl->tty);
-                       }
-                       spin_unlock_bh(&sl->lock);
-               }
-       } while (busy && time_before(jiffies, timeout));
-
-       /* FIXME: hangup is async so we should wait when doing this second
-          phase */
-
-       for (i = 0; i < maxdev; i++) {
-               dev = slcan_devs[i];
-               if (!dev)
-                       continue;
-               slcan_devs[i] = NULL;
-
-               sl = netdev_priv(dev);
-               if (sl->tty) {
-                       printk(KERN_ERR "%s: tty discipline still running\n",
-                              dev->name);
-               }
-
-               unregister_netdev(dev);
-       }
-
-       kfree(slcan_devs);
-       slcan_devs = NULL;
-
-       tty_unregister_ldisc(&slc_ldisc);
-}
-
-module_init(slcan_init);
-module_exit(slcan_exit);
diff --git a/drivers/net/can/slcan/Makefile b/drivers/net/can/slcan/Makefile
new file mode 100644
index 0000000..8a88e48
--- /dev/null
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_CAN_SLCAN) += slcan.o
+
+slcan-objs :=
+slcan-objs += slcan-core.o
+slcan-objs += slcan-ethtool.o
diff --git a/drivers/net/can/slcan/slcan-core.c b/drivers/net/can/slcan/slcan-core.c
new file mode 100644
index 0000000..54d29a4
--- /dev/null
@@ -0,0 +1,1131 @@
+/*
+ * slcan.c - serial line CAN interface driver (using tty line discipline)
+ *
+ * This file is derived from linux/drivers/net/slip/slip.c
+ *
+ * slip.c Authors  : Laurence Culhane <loz@holmes.demon.co.uk>
+ *                   Fred N. van Kempen <waltje@uwalt.nl.mugnet.org>
+ * slcan.c Author  : Oliver Hartkopp <socketcan@hartkopp.net>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, see http://www.gnu.org/licenses/gpl.html
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+
+#include <linux/uaccess.h>
+#include <linux/bitops.h>
+#include <linux/string.h>
+#include <linux/tty.h>
+#include <linux/errno.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/if_arp.h>
+#include <linux/if_ether.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/workqueue.h>
+#include <linux/can.h>
+#include <linux/can/dev.h>
+#include <linux/can/skb.h>
+
+#include "slcan.h"
+
+MODULE_ALIAS_LDISC(N_SLCAN);
+MODULE_DESCRIPTION("serial line CAN interface");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Oliver Hartkopp <socketcan@hartkopp.net>");
+
+#define SLCAN_MAGIC 0x53CA
+
+static int maxdev = 10;                /* MAX number of SLCAN channels;
+                                  This can be overridden with
+                                  insmod slcan.ko maxdev=nnn   */
+module_param(maxdev, int, 0);
+MODULE_PARM_DESC(maxdev, "Maximum number of slcan interfaces");
+
+/* maximum rx buffer len: extended CAN frame with timestamp */
+#define SLC_MTU (sizeof("T1111222281122334455667788EA5F\r")+1)
+
+#define SLC_CMD_LEN 1
+#define SLC_SFF_ID_LEN 3
+#define SLC_EFF_ID_LEN 8
+#define SLC_STATE_LEN 1
+#define SLC_STATE_BE_RXCNT_LEN 3
+#define SLC_STATE_BE_TXCNT_LEN 3
+#define SLC_STATE_FRAME_LEN       (1 + SLC_CMD_LEN + SLC_STATE_BE_RXCNT_LEN + \
+                                  SLC_STATE_BE_TXCNT_LEN)
+struct slcan {
+       struct can_priv         can;
+       int                     magic;
+
+       /* Various fields. */
+       struct tty_struct       *tty;           /* ptr to TTY structure      */
+       struct net_device       *dev;           /* easy for intr handling    */
+       spinlock_t              lock;
+       struct work_struct      tx_work;        /* Flushes transmit buffer   */
+
+       /* These are pointers to the malloc()ed frame buffers. */
+       unsigned char           rbuff[SLC_MTU]; /* receiver buffer           */
+       int                     rcount;         /* received chars counter    */
+       unsigned char           xbuff[SLC_MTU]; /* transmitter buffer        */
+       unsigned char           *xhead;         /* pointer to next XMIT byte */
+       int                     xleft;          /* bytes left in XMIT queue  */
+
+       unsigned long           flags;          /* Flag values/ mode etc     */
+#define SLF_INUSE              0               /* Channel in use            */
+#define SLF_ERROR              1               /* Parity, etc. error        */
+#define SLF_XCMD               2               /* Command transmission      */
+       unsigned long           cmd_flags;      /* Command flags             */
+#define CF_ERR_RST             0               /* Reset errors on open      */
+       wait_queue_head_t       xcmd_wait;      /* Wait queue for commands   */
+                                               /* transmission              */
+};
+
+static struct net_device **slcan_devs;
+
+static const u32 slcan_bitrate_const[] = {
+       10000, 20000, 50000, 100000, 125000,
+       250000, 500000, 800000, 1000000
+};
+
+bool slcan_err_rst_on_open(struct net_device *ndev)
+{
+       struct slcan *sl = netdev_priv(ndev);
+
+       return !!test_bit(CF_ERR_RST, &sl->cmd_flags);
+}
+
+int slcan_enable_err_rst_on_open(struct net_device *ndev, bool on)
+{
+       struct slcan *sl = netdev_priv(ndev);
+
+       if (netif_running(ndev))
+               return -EBUSY;
+
+       if (on)
+               set_bit(CF_ERR_RST, &sl->cmd_flags);
+       else
+               clear_bit(CF_ERR_RST, &sl->cmd_flags);
+
+       return 0;
+}
+
+ /************************************************************************
+  *                    SLCAN ENCAPSULATION FORMAT                       *
+  ************************************************************************/
+
+/*
+ * A CAN frame has a can_id (11 bit standard frame format OR 29 bit extended
+ * frame format) a data length code (len) which can be from 0 to 8
+ * and up to <len> data bytes as payload.
+ * Additionally a CAN frame may become a remote transmission frame if the
+ * RTR-bit is set. This causes another ECU to send a CAN frame with the
+ * given can_id.
+ *
+ * The SLCAN ASCII representation of these different frame types is:
+ * <type> <id> <dlc> <data>*
+ *
+ * Extended frames (29 bit) are defined by capital characters in the type.
+ * RTR frames are defined as 'r' types - normal frames have 't' type:
+ * t => 11 bit data frame
+ * r => 11 bit RTR frame
+ * T => 29 bit data frame
+ * R => 29 bit RTR frame
+ *
+ * The <id> is 3 (standard) or 8 (extended) bytes in ASCII Hex (base 16).
+ * The <dlc> is a one-byte ASCII number ('0' - '8').
+ * The <data> section has as many ASCII Hex bytes as defined by the <dlc>.
+ *
+ * Examples:
+ *
+ * t1230 : can_id 0x123, len 0, no data
+ * t4563112233 : can_id 0x456, len 3, data 0x11 0x22 0x33
+ * T12ABCDEF2AA55 : extended can_id 0x12ABCDEF, len 2, data 0xAA 0x55
+ * r1230 : can_id 0x123, len 0, no data, remote transmission request
+ *
+ */
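As a standalone sketch (not part of the driver), the "t4563112233" example
above can be composed with plain C string formatting:

#include <stdio.h>

static void example_encode(void)
{
	const unsigned char data[3] = { 0x11, 0x22, 0x33 };
	char buf[32];
	int n, i;

	n = snprintf(buf, sizeof(buf), "t%03X%u", 0x456, 3u);	/* type, id, dlc */
	for (i = 0; i < 3; i++)
		n += snprintf(buf + n, sizeof(buf) - n, "%02X", data[i]);
	snprintf(buf + n, sizeof(buf) - n, "\r");	/* buf == "t4563112233\r" */
}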
+
+ /************************************************************************
+  *                    STANDARD SLCAN DECAPSULATION                     *
+  ************************************************************************/
+
+/* Send one completely decapsulated can_frame to the network layer */
+static void slc_bump_frame(struct slcan *sl)
+{
+       struct sk_buff *skb;
+       struct can_frame *cf;
+       int i, tmp;
+       u32 tmpid;
+       char *cmd = sl->rbuff;
+
+       skb = alloc_can_skb(sl->dev, &cf);
+       if (unlikely(!skb)) {
+               sl->dev->stats.rx_dropped++;
+               return;
+       }
+
+       switch (*cmd) {
+       case 'r':
+               cf->can_id = CAN_RTR_FLAG;
+               fallthrough;
+       case 't':
+               /* store dlc ASCII value and terminate SFF CAN ID string */
+               cf->len = sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN];
+               sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN] = 0;
+               /* point to payload data behind the dlc */
+               cmd += SLC_CMD_LEN + SLC_SFF_ID_LEN + 1;
+               break;
+       case 'R':
+               cf->can_id = CAN_RTR_FLAG;
+               fallthrough;
+       case 'T':
+               cf->can_id |= CAN_EFF_FLAG;
+               /* store dlc ASCII value and terminate EFF CAN ID string */
+               cf->len = sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN];
+               sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN] = 0;
+               /* point to payload data behind the dlc */
+               cmd += SLC_CMD_LEN + SLC_EFF_ID_LEN + 1;
+               break;
+       default:
+               goto decode_failed;
+       }
+
+       if (kstrtou32(sl->rbuff + SLC_CMD_LEN, 16, &tmpid))
+               goto decode_failed;
+
+       cf->can_id |= tmpid;
+
+       /* get len from sanitized ASCII value */
+       if (cf->len >= '0' && cf->len < '9')
+               cf->len -= '0';
+       else
+               goto decode_failed;
+
+       /* RTR frames may have a dlc > 0 but they never have any data bytes */
+       if (!(cf->can_id & CAN_RTR_FLAG)) {
+               for (i = 0; i < cf->len; i++) {
+                       tmp = hex_to_bin(*cmd++);
+                       if (tmp < 0)
+                               goto decode_failed;
+
+                       cf->data[i] = (tmp << 4);
+                       tmp = hex_to_bin(*cmd++);
+                       if (tmp < 0)
+                               goto decode_failed;
+
+                       cf->data[i] |= tmp;
+               }
+       }
+
+       sl->dev->stats.rx_packets++;
+       if (!(cf->can_id & CAN_RTR_FLAG))
+               sl->dev->stats.rx_bytes += cf->len;
+
+       netif_rx(skb);
+       return;
+
+decode_failed:
+       sl->dev->stats.rx_errors++;
+       dev_kfree_skb(skb);
+}
+
+/* A change state frame must contain state info and receive and transmit
+ * error counters.
+ *
+ * Examples:
+ *
+ * sb256256 : state bus-off: rx counter 256, tx counter 256
+ * sa057033 : state active, rx counter 57, tx counter 33
+ */
+static void slc_bump_state(struct slcan *sl)
+{
+       struct net_device *dev = sl->dev;
+       struct sk_buff *skb;
+       struct can_frame *cf;
+       char *cmd = sl->rbuff;
+       u32 rxerr, txerr;
+       enum can_state state, rx_state, tx_state;
+
+       switch (cmd[1]) {
+       case 'a':
+               state = CAN_STATE_ERROR_ACTIVE;
+               break;
+       case 'w':
+               state = CAN_STATE_ERROR_WARNING;
+               break;
+       case 'p':
+               state = CAN_STATE_ERROR_PASSIVE;
+               break;
+       case 'b':
+               state = CAN_STATE_BUS_OFF;
+               break;
+       default:
+               return;
+       }
+
+       if (state == sl->can.state || sl->rcount < SLC_STATE_FRAME_LEN)
+               return;
+
+       cmd += SLC_STATE_BE_RXCNT_LEN + SLC_CMD_LEN + 1;
+       cmd[SLC_STATE_BE_TXCNT_LEN] = 0;
+       if (kstrtou32(cmd, 10, &txerr))
+               return;
+
+       *cmd = 0;
+       cmd -= SLC_STATE_BE_RXCNT_LEN;
+       if (kstrtou32(cmd, 10, &rxerr))
+               return;
+
+       skb = alloc_can_err_skb(dev, &cf);
+       if (skb) {
+               cf->data[6] = txerr;
+               cf->data[7] = rxerr;
+       } else {
+               cf = NULL;
+       }
+
+       tx_state = txerr >= rxerr ? state : 0;
+       rx_state = txerr <= rxerr ? state : 0;
+       can_change_state(dev, cf, tx_state, rx_state);
+
+       if (state == CAN_STATE_BUS_OFF)
+               can_bus_off(dev);
+
+       if (skb)
+               netif_rx(skb);
+}
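A worked, standalone sketch of the in-place split above, using the "sa057033"
example from the comment: advancing by SLC_STATE_BE_RXCNT_LEN + SLC_CMD_LEN +
1 = 5 bytes lands on the tx counter "033"; writing a NUL there afterwards and
stepping back three bytes isolates the rx counter "057".

#include <stdio.h>

int main(void)
{
	char rbuff[] = "sa057033";	/* state active, rxerr 57, txerr 33 */
	char *cmd = rbuff;
	unsigned int rxerr, txerr;

	cmd += 3 + 1 + 1;		/* SLC_STATE_BE_RXCNT_LEN + SLC_CMD_LEN + 1 */
	cmd[3] = '\0';
	sscanf(cmd, "%u", &txerr);	/* "033" -> 33 */
	*cmd = '\0';
	cmd -= 3;
	sscanf(cmd, "%u", &rxerr);	/* "057" -> 57 */

	printf("rx %u tx %u\n", rxerr, txerr);
	return 0;
}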
+
+/* An error frame can contain more than one type of error.
+ *
+ * Examples:
+ *
+ * e1a : len 1, errors: ACK error
+ * e3bcO: len 3, errors: Bit0 error, CRC error, Tx overrun error
+ */
+static void slc_bump_err(struct slcan *sl)
+{
+       struct net_device *dev = sl->dev;
+       struct sk_buff *skb;
+       struct can_frame *cf;
+       char *cmd = sl->rbuff;
+       bool rx_errors = false, tx_errors = false, rx_over_errors = false;
+       int i, len;
+
+       /* get len from sanitized ASCII value */
+       len = cmd[1];
+       if (len >= '0' && len < '9')
+               len -= '0';
+       else
+               return;
+
+       if ((len + SLC_CMD_LEN + 1) > sl->rcount)
+               return;
+
+       skb = alloc_can_err_skb(dev, &cf);
+
+       if (skb)
+               cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+       cmd += SLC_CMD_LEN + 1;
+       for (i = 0; i < len; i++, cmd++) {
+               switch (*cmd) {
+               case 'a':
+                       netdev_dbg(dev, "ACK error\n");
+                       tx_errors = true;
+                       if (skb) {
+                               cf->can_id |= CAN_ERR_ACK;
+                               cf->data[3] = CAN_ERR_PROT_LOC_ACK;
+                       }
+
+                       break;
+               case 'b':
+                       netdev_dbg(dev, "Bit0 error\n");
+                       tx_errors = true;
+                       if (skb)
+                               cf->data[2] |= CAN_ERR_PROT_BIT0;
+
+                       break;
+               case 'B':
+                       netdev_dbg(dev, "Bit1 error\n");
+                       tx_errors = true;
+                       if (skb)
+                               cf->data[2] |= CAN_ERR_PROT_BIT1;
+
+                       break;
+               case 'c':
+                       netdev_dbg(dev, "CRC error\n");
+                       rx_errors = true;
+                       if (skb) {
+                               cf->data[2] |= CAN_ERR_PROT_BIT;
+                               cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
+                       }
+
+                       break;
+               case 'f':
+                       netdev_dbg(dev, "Form Error\n");
+                       rx_errors = true;
+                       if (skb)
+                               cf->data[2] |= CAN_ERR_PROT_FORM;
+
+                       break;
+               case 'o':
+                       netdev_dbg(dev, "Rx overrun error\n");
+                       rx_over_errors = true;
+                       rx_errors = true;
+                       if (skb) {
+                               cf->can_id |= CAN_ERR_CRTL;
+                               cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+                       }
+
+                       break;
+               case 'O':
+                       netdev_dbg(dev, "Tx overrun error\n");
+                       tx_errors = true;
+                       if (skb) {
+                               cf->can_id |= CAN_ERR_CRTL;
+                               cf->data[1] = CAN_ERR_CRTL_TX_OVERFLOW;
+                       }
+
+                       break;
+               case 's':
+                       netdev_dbg(dev, "Stuff error\n");
+                       rx_errors = true;
+                       if (skb)
+                               cf->data[2] |= CAN_ERR_PROT_STUFF;
+
+                       break;
+               default:
+                       if (skb)
+                               dev_kfree_skb(skb);
+
+                       return;
+               }
+       }
+
+       if (rx_errors)
+               dev->stats.rx_errors++;
+
+       if (rx_over_errors)
+               dev->stats.rx_over_errors++;
+
+       if (tx_errors)
+               dev->stats.tx_errors++;
+
+       if (skb)
+               netif_rx(skb);
+}
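Decoding the "e3bcO" example from the comment by hand, as a compact sketch
that mirrors the switch above: the '3' announces three error marks, 'b' maps
to a Bit0 (tx) error, 'c' to a CRC (rx) error and 'O' to a Tx overrun (tx)
error.

static void example_decode_err(void)
{
	const char frame[] = "e3bcO";
	int i, len = frame[1] - '0';	/* three error marks follow */

	for (i = 0; i < len; i++) {
		switch (frame[2 + i]) {
		case 'b':	/* Bit0 error -> tx_errors */
			break;
		case 'c':	/* CRC error -> rx_errors */
			break;
		case 'O':	/* Tx overrun -> tx_errors */
			break;
		}
	}
}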
+
+static void slc_bump(struct slcan *sl)
+{
+       switch (sl->rbuff[0]) {
+       case 'r':
+               fallthrough;
+       case 't':
+               fallthrough;
+       case 'R':
+               fallthrough;
+       case 'T':
+               return slc_bump_frame(sl);
+       case 'e':
+               return slc_bump_err(sl);
+       case 's':
+               return slc_bump_state(sl);
+       default:
+               return;
+       }
+}
+
+/* parse tty input stream */
+static void slcan_unesc(struct slcan *sl, unsigned char s)
+{
+       if ((s == '\r') || (s == '\a')) { /* CR or BEL ends the pdu */
+               if (!test_and_clear_bit(SLF_ERROR, &sl->flags) &&
+                   (sl->rcount > 4))  {
+                       slc_bump(sl);
+               }
+               sl->rcount = 0;
+       } else {
+               if (!test_bit(SLF_ERROR, &sl->flags))  {
+                       if (sl->rcount < SLC_MTU)  {
+                               sl->rbuff[sl->rcount++] = s;
+                               return;
+                       } else {
+                               sl->dev->stats.rx_over_errors++;
+                               set_bit(SLF_ERROR, &sl->flags);
+                       }
+               }
+       }
+}
+
+ /************************************************************************
+  *                    STANDARD SLCAN ENCAPSULATION                     *
+  ************************************************************************/
+
+/* Encapsulate one can_frame and stuff into a TTY queue. */
+static void slc_encaps(struct slcan *sl, struct can_frame *cf)
+{
+       int actual, i;
+       unsigned char *pos;
+       unsigned char *endpos;
+       canid_t id = cf->can_id;
+
+       pos = sl->xbuff;
+
+       if (cf->can_id & CAN_RTR_FLAG)
+               *pos = 'R'; /* becomes 'r' in standard frame format (SFF) */
+       else
+               *pos = 'T'; /* becomes 't' in standard frame format (SFF) */
+
+       /* determine number of chars for the CAN-identifier */
+       if (cf->can_id & CAN_EFF_FLAG) {
+               id &= CAN_EFF_MASK;
+               endpos = pos + SLC_EFF_ID_LEN;
+       } else {
+               *pos |= 0x20; /* convert R/T to lower case for SFF */
+               id &= CAN_SFF_MASK;
+               endpos = pos + SLC_SFF_ID_LEN;
+       }
+
+       /* build 3 (SFF) or 8 (EFF) digit CAN identifier */
+       pos++;
+       while (endpos >= pos) {
+               *endpos-- = hex_asc_upper[id & 0xf];
+               id >>= 4;
+       }
+
+       pos += (cf->can_id & CAN_EFF_FLAG) ? SLC_EFF_ID_LEN : SLC_SFF_ID_LEN;
+
+       *pos++ = cf->len + '0';
+
+       /* RTR frames may have a dlc > 0 but they never have any data bytes */
+       if (!(cf->can_id & CAN_RTR_FLAG)) {
+               for (i = 0; i < cf->len; i++)
+                       pos = hex_byte_pack_upper(pos, cf->data[i]);
+
+               sl->dev->stats.tx_bytes += cf->len;
+       }
+
+       *pos++ = '\r';
+
+       /* Order of next two lines is *very* important.
+        * When we are sending a little amount of data,
+        * the transfer may be completed inside the ops->write()
+        * routine, because it's running with interrupts enabled.
+        * In this case we *never* got WRITE_WAKEUP event,
+        * if we did not request it before write operation.
+        *       14 Oct 1994  Dmitry Gorodchanin.
+        */
+       set_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
+       actual = sl->tty->ops->write(sl->tty, sl->xbuff, pos - sl->xbuff);
+       sl->xleft = (pos - sl->xbuff) - actual;
+       sl->xhead = sl->xbuff + actual;
+}
+
+/* Write out any remaining transmit buffer. Scheduled when tty is writable */
+static void slcan_transmit(struct work_struct *work)
+{
+       struct slcan *sl = container_of(work, struct slcan, tx_work);
+       int actual;
+
+       spin_lock_bh(&sl->lock);
+       /* First make sure we're connected. */
+       if (!sl->tty || sl->magic != SLCAN_MAGIC ||
+           (unlikely(!netif_running(sl->dev)) &&
+            likely(!test_bit(SLF_XCMD, &sl->flags)))) {
+               spin_unlock_bh(&sl->lock);
+               return;
+       }
+
+       if (sl->xleft <= 0)  {
+               if (unlikely(test_bit(SLF_XCMD, &sl->flags))) {
+                       clear_bit(SLF_XCMD, &sl->flags);
+                       clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
+                       spin_unlock_bh(&sl->lock);
+                       wake_up(&sl->xcmd_wait);
+                       return;
+               }
+
+               /* Now serial buffer is almost free & we can start
+                * transmission of another packet */
+               sl->dev->stats.tx_packets++;
+               clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
+               spin_unlock_bh(&sl->lock);
+               netif_wake_queue(sl->dev);
+               return;
+       }
+
+       actual = sl->tty->ops->write(sl->tty, sl->xhead, sl->xleft);
+       sl->xleft -= actual;
+       sl->xhead += actual;
+       spin_unlock_bh(&sl->lock);
+}
+
+/*
+ * Called by the driver when there's room for more data.
+ * Schedule the transmit.
+ */
+static void slcan_write_wakeup(struct tty_struct *tty)
+{
+       struct slcan *sl;
+
+       rcu_read_lock();
+       sl = rcu_dereference(tty->disc_data);
+       if (sl)
+               schedule_work(&sl->tx_work);
+       rcu_read_unlock();
+}
+
+/* Send a can_frame to a TTY queue. */
+static netdev_tx_t slc_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+       struct slcan *sl = netdev_priv(dev);
+
+       if (can_dropped_invalid_skb(dev, skb))
+               return NETDEV_TX_OK;
+
+       spin_lock(&sl->lock);
+       if (!netif_running(dev))  {
+               spin_unlock(&sl->lock);
+               netdev_warn(dev, "xmit: iface is down\n");
+               goto out;
+       }
+       if (sl->tty == NULL) {
+               spin_unlock(&sl->lock);
+               goto out;
+       }
+
+       netif_stop_queue(sl->dev);
+       slc_encaps(sl, (struct can_frame *) skb->data); /* encaps & send */
+       spin_unlock(&sl->lock);
+
+out:
+       kfree_skb(skb);
+       return NETDEV_TX_OK;
+}
+
+
+/******************************************
+ *   Routines looking at netdevice side.
+ ******************************************/
+
+static int slcan_transmit_cmd(struct slcan *sl, const unsigned char *cmd)
+{
+       int ret, actual, n;
+
+       spin_lock(&sl->lock);
+       if (!sl->tty) {
+               spin_unlock(&sl->lock);
+               return -ENODEV;
+       }
+
+       n = snprintf(sl->xbuff, sizeof(sl->xbuff), "%s", cmd);
+       set_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
+       actual = sl->tty->ops->write(sl->tty, sl->xbuff, n);
+       sl->xleft = n - actual;
+       sl->xhead = sl->xbuff + actual;
+       set_bit(SLF_XCMD, &sl->flags);
+       spin_unlock(&sl->lock);
+       ret = wait_event_interruptible_timeout(sl->xcmd_wait,
+                                              !test_bit(SLF_XCMD, &sl->flags),
+                                              HZ);
+       clear_bit(SLF_XCMD, &sl->flags);
+       if (ret == -ERESTARTSYS)
+               return ret;
+
+       if (ret == 0)
+               return -ETIMEDOUT;
+
+       return 0;
+}
+
+/* Netdevice UP -> DOWN routine */
+static int slc_close(struct net_device *dev)
+{
+       struct slcan *sl = netdev_priv(dev);
+       int err;
+
+       spin_lock_bh(&sl->lock);
+       if (sl->tty) {
+               if (sl->can.bittiming.bitrate &&
+                   sl->can.bittiming.bitrate != CAN_BITRATE_UNKNOWN) {
+                       spin_unlock_bh(&sl->lock);
+                       err = slcan_transmit_cmd(sl, "C\r");
+                       spin_lock_bh(&sl->lock);
+                       if (err)
+                               netdev_warn(dev,
+                                           "failed to send close command 'C\\r'\n");
+               }
+
+               /* TTY discipline is running. */
+               clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
+       }
+       netif_stop_queue(dev);
+       close_candev(dev);
+       sl->can.state = CAN_STATE_STOPPED;
+       if (sl->can.bittiming.bitrate == CAN_BITRATE_UNKNOWN)
+               sl->can.bittiming.bitrate = CAN_BITRATE_UNSET;
+
+       sl->rcount   = 0;
+       sl->xleft    = 0;
+       spin_unlock_bh(&sl->lock);
+
+       return 0;
+}
+
+/* Netdevice DOWN -> UP routine */
+static int slc_open(struct net_device *dev)
+{
+       struct slcan *sl = netdev_priv(dev);
+       unsigned char cmd[SLC_MTU];
+       int err, s;
+
+       if (sl->tty == NULL)
+               return -ENODEV;
+
+       /* If the baud rate was not set with the command
+        * `ip link set <iface> type can bitrate <baud>', then
+        * can.bittiming.bitrate is CAN_BITRATE_UNSET (0), which makes
+        * open_candev() fail. So let's set it to a fake value.
+        */
+       if (sl->can.bittiming.bitrate == CAN_BITRATE_UNSET)
+               sl->can.bittiming.bitrate = CAN_BITRATE_UNKNOWN;
+
+       err = open_candev(dev);
+       if (err) {
+               netdev_err(dev, "failed to open can device\n");
+               return err;
+       }
+
+       sl->flags &= BIT(SLF_INUSE);
+
+       if (sl->can.bittiming.bitrate != CAN_BITRATE_UNKNOWN) {
+               for (s = 0; s < ARRAY_SIZE(slcan_bitrate_const); s++) {
+                       if (sl->can.bittiming.bitrate == slcan_bitrate_const[s])
+                               break;
+               }
+
+               /* The CAN framework has already validated the bitrate
+                * value, so there is no need to check whether `s' was
+                * set properly.
+                */
+
+               snprintf(cmd, sizeof(cmd), "C\rS%d\r", s);
+               err = slcan_transmit_cmd(sl, cmd);
+               if (err) {
+                       netdev_err(dev,
+                                  "failed to send bitrate command 'C\\rS%d\\r'\n",
+                                  s);
+                       goto cmd_transmit_failed;
+               }
+
+               if (test_bit(CF_ERR_RST, &sl->cmd_flags)) {
+                       err = slcan_transmit_cmd(sl, "F\r");
+                       if (err) {
+                               netdev_err(dev,
+                                          "failed to send error command 'F\\r'\n");
+                               goto cmd_transmit_failed;
+                       }
+               }
+
+               err = slcan_transmit_cmd(sl, "O\r");
+               if (err) {
+                       netdev_err(dev, "failed to send open command 'O\\r'\n");
+                       goto cmd_transmit_failed;
+               }
+       }
+
+       sl->can.state = CAN_STATE_ERROR_ACTIVE;
+       netif_start_queue(dev);
+       return 0;
+
+cmd_transmit_failed:
+       close_candev(dev);
+       return err;
+}
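
The `S%d' index sent by slc_open() is simply the position of the configured
bitrate in slcan_bitrate_const, which is defined earlier in the file.
Assuming the conventional SLCAN table, where S0 maps to 10 kbit/s and S8 to
1 Mbit/s, the lookup performed by the loop above boils down to:

    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Illustrative only: assumes the conventional SLCAN bitrate table */
    static const u32 example_bitrates[] = {
            10000, 20000, 50000, 100000, 125000,
            250000, 500000, 800000, 1000000,        /* S0 .. S8 */
    };

    static int bitrate_to_s_index(u32 bitrate)
    {
            int s;

            for (s = 0; s < ARRAY_SIZE(example_bitrates); s++)
                    if (bitrate == example_bitrates[s])
                            return s;       /* e.g. 500000 -> "S6\r" */
            return -EINVAL;
    }
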
+
+static void slc_dealloc(struct slcan *sl)
+{
+       int i = sl->dev->base_addr;
+
+       free_candev(sl->dev);
+       slcan_devs[i] = NULL;
+}
+
+static int slcan_change_mtu(struct net_device *dev, int new_mtu)
+{
+       return -EINVAL;
+}
+
+static const struct net_device_ops slc_netdev_ops = {
+       .ndo_open               = slc_open,
+       .ndo_stop               = slc_close,
+       .ndo_start_xmit         = slc_xmit,
+       .ndo_change_mtu         = slcan_change_mtu,
+};
+
+/******************************************
+  Routines looking at TTY side.
+ ******************************************/
+
+/*
+ * Handle the 'receiver data ready' interrupt.
+ * This function is called by the 'tty_io' module in the kernel when
+ * a block of SLCAN data has been received, which can now be decapsulated
+ * and sent on to the CAN layer for further processing. This will not
+ * be re-entered while running, but other ldisc functions may be called
+ * in parallel.
+ */
+
+static void slcan_receive_buf(struct tty_struct *tty,
+                             const unsigned char *cp, const char *fp,
+                             int count)
+{
+       struct slcan *sl = (struct slcan *) tty->disc_data;
+
+       if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev))
+               return;
+
+       /* Read the characters out of the buffer */
+       while (count--) {
+               if (fp && *fp++) {
+                       if (!test_and_set_bit(SLF_ERROR, &sl->flags))
+                               sl->dev->stats.rx_errors++;
+                       cp++;
+                       continue;
+               }
+               slcan_unesc(sl, *cp++);
+       }
+}
+
+/************************************
+ *  slcan_open helper routines.
+ ************************************/
+
+/* Collect hung-up channels */
+static void slc_sync(void)
+{
+       int i;
+       struct net_device *dev;
+       struct slcan      *sl;
+
+       for (i = 0; i < maxdev; i++) {
+               dev = slcan_devs[i];
+               if (dev == NULL)
+                       break;
+
+               sl = netdev_priv(dev);
+               if (sl->tty)
+                       continue;
+               if (dev->flags & IFF_UP)
+                       dev_close(dev);
+       }
+}
+
+/* Find a free SLCAN channel, and link in this `tty' line. */
+static struct slcan *slc_alloc(void)
+{
+       int i;
+       struct net_device *dev = NULL;
+       struct slcan       *sl;
+
+       for (i = 0; i < maxdev; i++) {
+               dev = slcan_devs[i];
+               if (dev == NULL)
+                       break;
+       }
+
+       /* Sorry, too many, all slots in use */
+       if (i >= maxdev)
+               return NULL;
+
+       dev = alloc_candev(sizeof(*sl), 1);
+       if (!dev)
+               return NULL;
+
+       snprintf(dev->name, sizeof(dev->name), "slcan%d", i);
+       dev->netdev_ops = &slc_netdev_ops;
+       dev->base_addr  = i;
+       slcan_set_ethtool_ops(dev);
+       sl = netdev_priv(dev);
+
+       /* Initialize channel control data */
+       sl->magic = SLCAN_MAGIC;
+       sl->dev = dev;
+       sl->can.bitrate_const = slcan_bitrate_const;
+       sl->can.bitrate_const_cnt = ARRAY_SIZE(slcan_bitrate_const);
+       spin_lock_init(&sl->lock);
+       INIT_WORK(&sl->tx_work, slcan_transmit);
+       init_waitqueue_head(&sl->xcmd_wait);
+       slcan_devs[i] = dev;
+
+       return sl;
+}
+
+/*
+ * Open the high-level part of the SLCAN channel.
+ * This function is called by the TTY module when the
+ * SLCAN line discipline is called for.  Because we are
+ * sure the tty line exists, we only have to link it to
+ * a free SLCAN channel...
+ *
+ * Called in process context serialized from other ldisc calls.
+ */
+
+static int slcan_open(struct tty_struct *tty)
+{
+       struct slcan *sl;
+       int err;
+
+       if (!capable(CAP_NET_ADMIN))
+               return -EPERM;
+
+       if (tty->ops->write == NULL)
+               return -EOPNOTSUPP;
+
+       /* The RTnetlink lock is misused here to serialize concurrent
+          opens of slcan channels. There are better ways, but this is
+          the simplest one.
+        */
+       rtnl_lock();
+
+       /* Collect hung-up channels. */
+       slc_sync();
+
+       sl = tty->disc_data;
+
+       err = -EEXIST;
+       /* First make sure we're not already connected. */
+       if (sl && sl->magic == SLCAN_MAGIC)
+               goto err_exit;
+
+       /* OK.  Find a free SLCAN channel to use. */
+       err = -ENFILE;
+       sl = slc_alloc();
+       if (sl == NULL)
+               goto err_exit;
+
+       sl->tty = tty;
+       tty->disc_data = sl;
+
+       if (!test_bit(SLF_INUSE, &sl->flags)) {
+               /* Perform the low-level SLCAN initialization. */
+               sl->rcount   = 0;
+               sl->xleft    = 0;
+
+               set_bit(SLF_INUSE, &sl->flags);
+
+               rtnl_unlock();
+               err = register_candev(sl->dev);
+               if (err) {
+                       pr_err("slcan: can't register candev\n");
+                       goto err_free_chan;
+               }
+       } else {
+               rtnl_unlock();
+       }
+
+       tty->receive_room = 65536;      /* We don't flow control */
+
+       /* TTY layer expects 0 on success */
+       return 0;
+
+err_free_chan:
+       rtnl_lock();
+       sl->tty = NULL;
+       tty->disc_data = NULL;
+       clear_bit(SLF_INUSE, &sl->flags);
+       slc_dealloc(sl);
+       rtnl_unlock();
+       return err;
+
+err_exit:
+       rtnl_unlock();
+
+       return err;
+}
+
+/*
+ * Close down a SLCAN channel.
+ * This means flushing out any pending queues, and then returning. This
+ * call is serialized against other ldisc functions.
+ *
+ * We also use this method for a hangup event.
+ */
+
+static void slcan_close(struct tty_struct *tty)
+{
+       struct slcan *sl = (struct slcan *) tty->disc_data;
+
+       /* First make sure we're connected. */
+       if (!sl || sl->magic != SLCAN_MAGIC || sl->tty != tty)
+               return;
+
+       spin_lock_bh(&sl->lock);
+       rcu_assign_pointer(tty->disc_data, NULL);
+       sl->tty = NULL;
+       spin_unlock_bh(&sl->lock);
+
+       synchronize_rcu();
+       flush_work(&sl->tx_work);
+
+       slc_close(sl->dev);
+       unregister_candev(sl->dev);
+       rtnl_lock();
+       slc_dealloc(sl);
+       rtnl_unlock();
+}
+
+static void slcan_hangup(struct tty_struct *tty)
+{
+       slcan_close(tty);
+}
+
+/* Perform I/O control on an active SLCAN channel. */
+static int slcan_ioctl(struct tty_struct *tty, unsigned int cmd,
+                      unsigned long arg)
+{
+       struct slcan *sl = (struct slcan *) tty->disc_data;
+       unsigned int tmp;
+
+       /* First make sure we're connected. */
+       if (!sl || sl->magic != SLCAN_MAGIC)
+               return -EINVAL;
+
+       switch (cmd) {
+       case SIOCGIFNAME:
+               tmp = strlen(sl->dev->name) + 1;
+               if (copy_to_user((void __user *)arg, sl->dev->name, tmp))
+                       return -EFAULT;
+               return 0;
+
+       case SIOCSIFHWADDR:
+               return -EINVAL;
+
+       default:
+               return tty_mode_ioctl(tty, cmd, arg);
+       }
+}
+
+static struct tty_ldisc_ops slc_ldisc = {
+       .owner          = THIS_MODULE,
+       .num            = N_SLCAN,
+       .name           = "slcan",
+       .open           = slcan_open,
+       .close          = slcan_close,
+       .hangup         = slcan_hangup,
+       .ioctl          = slcan_ioctl,
+       .receive_buf    = slcan_receive_buf,
+       .write_wakeup   = slcan_write_wakeup,
+};
+
+static int __init slcan_init(void)
+{
+       int status;
+
+       if (maxdev < 4)
+               maxdev = 4; /* Sanity */
+
+       pr_info("slcan: serial line CAN interface driver\n");
+       pr_info("slcan: %d dynamic interface channels.\n", maxdev);
+
+       slcan_devs = kcalloc(maxdev, sizeof(struct net_device *), GFP_KERNEL);
+       if (!slcan_devs)
+               return -ENOMEM;
+
+       /* Fill in our line protocol discipline, and register it */
+       status = tty_register_ldisc(&slc_ldisc);
+       if (status)  {
+               printk(KERN_ERR "slcan: can't register line discipline\n");
+               kfree(slcan_devs);
+       }
+       return status;
+}
+
+static void __exit slcan_exit(void)
+{
+       int i;
+       struct net_device *dev;
+       struct slcan *sl;
+       unsigned long timeout = jiffies + HZ;
+       int busy = 0;
+
+       if (slcan_devs == NULL)
+               return;
+
+       /* First of all: check for active disciplines and hang them up.
+        */
+       do {
+               if (busy)
+                       msleep_interruptible(100);
+
+               busy = 0;
+               for (i = 0; i < maxdev; i++) {
+                       dev = slcan_devs[i];
+                       if (!dev)
+                               continue;
+                       sl = netdev_priv(dev);
+                       spin_lock_bh(&sl->lock);
+                       if (sl->tty) {
+                               busy++;
+                               tty_hangup(sl->tty);
+                       }
+                       spin_unlock_bh(&sl->lock);
+               }
+       } while (busy && time_before(jiffies, timeout));
+
+       /* FIXME: hangup is asynchronous, so we should wait for it to
+          complete before doing this second phase */
+
+       for (i = 0; i < maxdev; i++) {
+               dev = slcan_devs[i];
+               if (!dev)
+                       continue;
+
+               sl = netdev_priv(dev);
+               if (sl->tty)
+                       netdev_err(dev, "tty discipline still running\n");
+
+               slc_close(dev);
+               unregister_candev(dev);
+               slc_dealloc(sl);
+       }
+
+       kfree(slcan_devs);
+       slcan_devs = NULL;
+
+       tty_unregister_ldisc(&slc_ldisc);
+}
+
+module_init(slcan_init);
+module_exit(slcan_exit);
diff --git a/drivers/net/can/slcan/slcan-ethtool.c b/drivers/net/can/slcan/slcan-ethtool.c
new file mode 100644
index 0000000..bf0afdc
--- /dev/null
+++ b/drivers/net/can/slcan/slcan-ethtool.c
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Copyright (c) 2022 Amarula Solutions, Dario Binacchi <dario.binacchi@amarulasolutions.com>
+ *
+ */
+
+#include <linux/can/dev.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/platform_device.h>
+
+#include "slcan.h"
+
+static const char slcan_priv_flags_strings[][ETH_GSTRING_LEN] = {
+#define SLCAN_PRIV_FLAGS_ERR_RST_ON_OPEN BIT(0)
+       "err-rst-on-open",
+};
+
+static void slcan_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
+{
+       switch (stringset) {
+       case ETH_SS_PRIV_FLAGS:
+               memcpy(data, slcan_priv_flags_strings,
+                      sizeof(slcan_priv_flags_strings));
+       }
+}
+
+static u32 slcan_get_priv_flags(struct net_device *ndev)
+{
+       u32 flags = 0;
+
+       if (slcan_err_rst_on_open(ndev))
+               flags |= SLCAN_PRIV_FLAGS_ERR_RST_ON_OPEN;
+
+       return flags;
+}
+
+static int slcan_set_priv_flags(struct net_device *ndev, u32 flags)
+{
+       bool err_rst_on_open = !!(flags & SLCAN_PRIV_FLAGS_ERR_RST_ON_OPEN);
+
+       return slcan_enable_err_rst_on_open(ndev, err_rst_on_open);
+}
+
+static int slcan_get_sset_count(struct net_device *netdev, int sset)
+{
+       switch (sset) {
+       case ETH_SS_PRIV_FLAGS:
+               return ARRAY_SIZE(slcan_priv_flags_strings);
+       default:
+               return -EOPNOTSUPP;
+       }
+}
+
+static const struct ethtool_ops slcan_ethtool_ops = {
+       .get_strings = slcan_get_strings,
+       .get_priv_flags = slcan_get_priv_flags,
+       .set_priv_flags = slcan_set_priv_flags,
+       .get_sset_count = slcan_get_sset_count,
+};
+
+void slcan_set_ethtool_ops(struct net_device *netdev)
+{
+       netdev->ethtool_ops = &slcan_ethtool_ops;
+}
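
By ethtool convention, string index n in the ETH_SS_PRIV_FLAGS string set
corresponds to bit n of the u32 exchanged through get_priv_flags and
set_priv_flags, which is why the BIT(0) define sits right next to the first
(and only) string above. A small sketch of that pairing (illustrative, not
driver code):

    #include <linux/bits.h>
    #include <linux/types.h>

    static const char example_flag_strings[][32] = { /* 32 == ETH_GSTRING_LEN */
            "err-rst-on-open",      /* index 0 <-> BIT(0) */
    };

    /* true if the flag named by string_index is set in the priv-flags word */
    static bool example_flag_is_set(u32 flags, unsigned int string_index)
    {
            return flags & BIT(string_index);
    }

From user space the flag would then be toggled with something along the
lines of `ethtool --set-priv-flags <iface> err-rst-on-open on' (shown for
illustration).
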
diff --git a/drivers/net/can/slcan/slcan.h b/drivers/net/can/slcan/slcan.h
new file mode 100644
index 0000000..d463c8d
--- /dev/null
+++ b/drivers/net/can/slcan/slcan.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * slcan.h - serial line CAN interface driver
+ *
+ * Copyright (C) Laurence Culhane <loz@holmes.demon.co.uk>
+ * Copyright (C) Fred N. van Kempen <waltje@uwalt.nl.mugnet.org>
+ * Copyright (C) Oliver Hartkopp <socketcan@hartkopp.net>
+ * Copyright (C) 2022 Amarula Solutions, Dario Binacchi <dario.binacchi@amarulasolutions.com>
+ *
+ */
+
+#ifndef _SLCAN_H
+#define _SLCAN_H
+
+bool slcan_err_rst_on_open(struct net_device *ndev);
+int slcan_enable_err_rst_on_open(struct net_device *ndev, bool on);
+void slcan_set_ethtool_ops(struct net_device *ndev);
+
+#endif /* _SLCAN_H */
diff --git a/drivers/net/can/spi/mcp251xfd/Kconfig b/drivers/net/can/spi/mcp251xfd/Kconfig
index dd0fc0a..877e435 100644
@@ -2,6 +2,7 @@
 
 config CAN_MCP251XFD
        tristate "Microchip MCP251xFD SPI CAN controllers"
+       select CAN_RX_OFFLOAD
        select REGMAP
        select WANT_DEV_COREDUMP
        help
diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
index b212523..9b47b07 100644
@@ -12,6 +12,7 @@
 // Copyright (c) 2019 Martin Sperl <kernel@martin.sperl.org>
 //
 
+#include <asm/unaligned.h>
 #include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/device.h>
@@ -1650,6 +1651,7 @@ static int mcp251xfd_stop(struct net_device *ndev)
        netif_stop_queue(ndev);
        set_bit(MCP251XFD_FLAGS_DOWN, priv->flags);
        hrtimer_cancel(&priv->rx_irq_timer);
+       hrtimer_cancel(&priv->tx_irq_timer);
        mcp251xfd_chip_interrupts_disable(priv);
        free_irq(ndev->irq, priv);
        can_rx_offload_disable(&priv->offload);
@@ -1777,7 +1779,7 @@ mcp251xfd_register_get_dev_id(const struct mcp251xfd_priv *priv, u32 *dev_id,
        xfer[0].len = sizeof(buf_tx->cmd);
        xfer[0].speed_hz = priv->spi_max_speed_hz_slow;
        xfer[1].rx_buf = buf_rx->data;
-       xfer[1].len = sizeof(dev_id);
+       xfer[1].len = sizeof(*dev_id);
        xfer[1].speed_hz = priv->spi_max_speed_hz_fast;
 
        mcp251xfd_spi_cmd_read_nocrc(&buf_tx->cmd, MCP251XFD_REG_DEVID);
@@ -1786,7 +1788,7 @@ mcp251xfd_register_get_dev_id(const struct mcp251xfd_priv *priv, u32 *dev_id,
        if (err)
                goto out_kfree_buf_tx;
 
-       *dev_id = be32_to_cpup((__be32 *)buf_rx->data);
+       *dev_id = get_unaligned_le32(buf_rx->data);
        *effective_speed_hz_slow = xfer[0].effective_speed_hz;
        *effective_speed_hz_fast = xfer[1].effective_speed_hz;
 
diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c
index 217510c..92b7bc7 100644
@@ -334,19 +334,21 @@ mcp251xfd_regmap_crc_read(void *context,
                 * register. It increments once per SYS clock tick,
                 * which is 20 or 40 MHz.
                 *
-                * Observation shows that if the lowest byte (which is
-                * transferred first on the SPI bus) of that register
-                * is 0x00 or 0x80 the calculated CRC doesn't always
-                * match the transferred one.
+                * Observation on the mcp2518fd shows that if the
+                * lowest byte (which is transferred first on the SPI
+                * bus) of that register is 0x00 or 0x80 the
+                * calculated CRC doesn't always match the transferred
+                * one. On the mcp2517fd this problem is not limited
+                * to the first byte being 0x00 or 0x80.
                 *
                 * If the highest bit in the lowest byte is flipped
                 * the transferred CRC matches the calculated one. We
-                * assume for now the CRC calculation in the chip
-                * works on wrong data and the transferred data is
-                * correct.
+                * assume for now the CRC operates on the correct
+                * data.
                 */
                if (reg == MCP251XFD_REG_TBC &&
-                   (buf_rx->data[0] == 0x0 || buf_rx->data[0] == 0x80)) {
+                   ((buf_rx->data[0] & 0xf8) == 0x0 ||
+                    (buf_rx->data[0] & 0xf8) == 0x80)) {
                        /* Flip highest bit in lowest byte of le32 */
                        buf_rx->data[0] ^= 0x80;
 
@@ -356,10 +358,8 @@ mcp251xfd_regmap_crc_read(void *context,
                                                                  val_len);
                        if (!err) {
                                /* If CRC is now correct, assume
-                                * transferred data was OK, flip bit
-                                * back to original value.
+                                * flipped data is OK.
                                 */
-                               buf_rx->data[0] ^= 0x80;
                                goto out;
                        }
                }
diff --git a/drivers/net/can/usb/Kconfig b/drivers/net/can/usb/Kconfig
index f959215..1218f96 100644
@@ -14,11 +14,18 @@ config CAN_EMS_USB
          This driver is for the one channel CPC-USB/ARM7 CAN/USB interface
          from EMS Dr. Thomas Wuensche (http://www.ems-wuensche.de).
 
-config CAN_ESD_USB2
-       tristate "ESD USB/2 CAN/USB interface"
+config CAN_ESD_USB
+       tristate "esd electronics gmbh CAN/USB interfaces"
        help
-         This driver supports the CAN-USB/2 interface
-         from esd electronic system design gmbh (http://www.esd.eu).
+         This driver adds support for several CAN/USB interfaces
+         from esd electronics gmbh (https://www.esd.eu).
+
+         The driver supports the following devices:
+           - esd CAN-USB/2
+           - esd CAN-USB/Micro
+
+         To compile this driver as a module, choose M here: the module
+         will be called esd_usb.
 
 config CAN_ETAS_ES58X
        tristate "ETAS ES58X CAN/USB interfaces"
diff --git a/drivers/net/can/usb/Makefile b/drivers/net/can/usb/Makefile
index 748cf31..1ea16be 100644
@@ -5,7 +5,7 @@
 
 obj-$(CONFIG_CAN_8DEV_USB) += usb_8dev.o
 obj-$(CONFIG_CAN_EMS_USB) += ems_usb.o
-obj-$(CONFIG_CAN_ESD_USB2) += esd_usb2.o
+obj-$(CONFIG_CAN_ESD_USB) += esd_usb.o
 obj-$(CONFIG_CAN_ETAS_ES58X) += etas_es58x/
 obj-$(CONFIG_CAN_GS_USB) += gs_usb.o
 obj-$(CONFIG_CAN_KVASER_USB) += kvaser_usb/
diff --git a/drivers/net/can/usb/esd_usb.c b/drivers/net/can/usb/esd_usb.c
new file mode 100644
index 0000000..8a4bf29
--- /dev/null
+++ b/drivers/net/can/usb/esd_usb.c
@@ -0,0 +1,1146 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * CAN driver for esd electronics gmbh CAN-USB/2 and CAN-USB/Micro
+ *
+ * Copyright (C) 2010-2012 esd electronic system design gmbh, Matthias Fuchs <socketcan@esd.eu>
+ * Copyright (C) 2022 esd electronics gmbh, Frank Jungclaus <frank.jungclaus@esd.eu>
+ */
+#include <linux/signal.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/usb.h>
+
+#include <linux/can.h>
+#include <linux/can/dev.h>
+#include <linux/can/error.h>
+
+MODULE_AUTHOR("Matthias Fuchs <socketcan@esd.eu>");
+MODULE_AUTHOR("Frank Jungclaus <frank.jungclaus@esd.eu>");
+MODULE_DESCRIPTION("CAN driver for esd electronics gmbh CAN-USB/2 and CAN-USB/Micro interfaces");
+MODULE_LICENSE("GPL v2");
+
+/* USB vendor and product ID */
+#define USB_ESDGMBH_VENDOR_ID  0x0ab4
+#define USB_CANUSB2_PRODUCT_ID 0x0010
+#define USB_CANUSBM_PRODUCT_ID 0x0011
+
+/* CAN controller clock frequencies */
+#define ESD_USB2_CAN_CLOCK     60000000
+#define ESD_USBM_CAN_CLOCK     36000000
+
+/* Maximum number of CAN nets */
+#define ESD_USB_MAX_NETS       2
+
+/* USB commands */
+#define CMD_VERSION            1 /* also used for VERSION_REPLY */
+#define CMD_CAN_RX             2 /* device to host only */
+#define CMD_CAN_TX             3 /* also used for TX_DONE */
+#define CMD_SETBAUD            4 /* also used for SETBAUD_REPLY */
+#define CMD_TS                 5 /* also used for TS_REPLY */
+#define CMD_IDADD              6 /* also used for IDADD_REPLY */
+
+/* esd CAN message flags - dlc field */
+#define ESD_RTR                        0x10
+
+/* esd CAN message flags - id field */
+#define ESD_EXTID              0x20000000
+#define ESD_EVENT              0x40000000
+#define ESD_IDMASK             0x1fffffff
+
+/* esd CAN event ids */
+#define ESD_EV_CAN_ERROR_EXT   2 /* CAN controller specific diagnostic data */
+
+/* baudrate message flags */
+#define ESD_USB_UBR            0x80000000
+#define ESD_USB_LOM            0x40000000
+#define ESD_USB_NO_BAUDRATE    0x7fffffff
+
+/* bit timing CAN-USB/2 */
+#define ESD_USB2_TSEG1_MIN     1
+#define ESD_USB2_TSEG1_MAX     16
+#define ESD_USB2_TSEG1_SHIFT   16
+#define ESD_USB2_TSEG2_MIN     1
+#define ESD_USB2_TSEG2_MAX     8
+#define ESD_USB2_TSEG2_SHIFT   20
+#define ESD_USB2_SJW_MAX       4
+#define ESD_USB2_SJW_SHIFT     14
+#define ESD_USBM_SJW_SHIFT     24
+#define ESD_USB2_BRP_MIN       1
+#define ESD_USB2_BRP_MAX       1024
+#define ESD_USB2_BRP_INC       1
+#define ESD_USB2_3_SAMPLES     0x00800000
+
+/* esd IDADD message */
+#define ESD_ID_ENABLE          0x80
+#define ESD_MAX_ID_SEGMENT     64
+
+/* SJA1000 ECC register (emulated by usb firmware) */
+#define SJA1000_ECC_SEG                0x1F
+#define SJA1000_ECC_DIR                0x20
+#define SJA1000_ECC_ERR                0x06
+#define SJA1000_ECC_BIT                0x00
+#define SJA1000_ECC_FORM       0x40
+#define SJA1000_ECC_STUFF      0x80
+#define SJA1000_ECC_MASK       0xc0
+
+/* esd bus state event codes */
+#define ESD_BUSSTATE_MASK      0xc0
+#define ESD_BUSSTATE_WARN      0x40
+#define ESD_BUSSTATE_ERRPASSIVE        0x80
+#define ESD_BUSSTATE_BUSOFF    0xc0
+
+#define RX_BUFFER_SIZE         1024
+#define MAX_RX_URBS            4
+#define MAX_TX_URBS            16 /* must be power of 2 */
+
+struct header_msg {
+       u8 len; /* len is always the total message length in 32-bit words */
+       u8 cmd;
+       u8 rsvd[2];
+};
+
+struct version_msg {
+       u8 len;
+       u8 cmd;
+       u8 rsvd;
+       u8 flags;
+       __le32 drv_version;
+};
+
+struct version_reply_msg {
+       u8 len;
+       u8 cmd;
+       u8 nets;
+       u8 features;
+       __le32 version;
+       u8 name[16];
+       __le32 rsvd;
+       __le32 ts;
+};
+
+struct rx_msg {
+       u8 len;
+       u8 cmd;
+       u8 net;
+       u8 dlc;
+       __le32 ts;
+       __le32 id; /* upper 3 bits contain flags */
+       u8 data[8];
+};
+
+struct tx_msg {
+       u8 len;
+       u8 cmd;
+       u8 net;
+       u8 dlc;
+       u32 hnd;        /* opaque handle, not used by device */
+       __le32 id; /* upper 3 bits contain flags */
+       u8 data[8];
+};
+
+struct tx_done_msg {
+       u8 len;
+       u8 cmd;
+       u8 net;
+       u8 status;
+       u32 hnd;        /* opaque handle, not used by device */
+       __le32 ts;
+};
+
+struct id_filter_msg {
+       u8 len;
+       u8 cmd;
+       u8 net;
+       u8 option;
+       __le32 mask[ESD_MAX_ID_SEGMENT + 1];
+};
+
+struct set_baudrate_msg {
+       u8 len;
+       u8 cmd;
+       u8 net;
+       u8 rsvd;
+       __le32 baud;
+};
+
+/* Main message type used between driver and device */
+struct __packed esd_usb_msg {
+       union {
+               struct header_msg hdr;
+               struct version_msg version;
+               struct version_reply_msg version_reply;
+               struct rx_msg rx;
+               struct tx_msg tx;
+               struct tx_done_msg txdone;
+               struct set_baudrate_msg setbaud;
+               struct id_filter_msg filter;
+       } msg;
+};
+
+static struct usb_device_id esd_usb_table[] = {
+       {USB_DEVICE(USB_ESDGMBH_VENDOR_ID, USB_CANUSB2_PRODUCT_ID)},
+       {USB_DEVICE(USB_ESDGMBH_VENDOR_ID, USB_CANUSBM_PRODUCT_ID)},
+       {}
+};
+MODULE_DEVICE_TABLE(usb, esd_usb_table);
+
+struct esd_usb_net_priv;
+
+struct esd_tx_urb_context {
+       struct esd_usb_net_priv *priv;
+       u32 echo_index;
+};
+
+struct esd_usb {
+       struct usb_device *udev;
+       struct esd_usb_net_priv *nets[ESD_USB_MAX_NETS];
+
+       struct usb_anchor rx_submitted;
+
+       int net_count;
+       u32 version;
+       int rxinitdone;
+       void *rxbuf[MAX_RX_URBS];
+       dma_addr_t rxbuf_dma[MAX_RX_URBS];
+};
+
+struct esd_usb_net_priv {
+       struct can_priv can; /* must be the first member */
+
+       atomic_t active_tx_jobs;
+       struct usb_anchor tx_submitted;
+       struct esd_tx_urb_context tx_contexts[MAX_TX_URBS];
+
+       struct esd_usb *usb;
+       struct net_device *netdev;
+       int index;
+       u8 old_state;
+       struct can_berr_counter bec;
+};
+
+static void esd_usb_rx_event(struct esd_usb_net_priv *priv,
+                            struct esd_usb_msg *msg)
+{
+       struct net_device_stats *stats = &priv->netdev->stats;
+       struct can_frame *cf;
+       struct sk_buff *skb;
+       u32 id = le32_to_cpu(msg->msg.rx.id) & ESD_IDMASK;
+
+       if (id == ESD_EV_CAN_ERROR_EXT) {
+               u8 state = msg->msg.rx.data[0];
+               u8 ecc = msg->msg.rx.data[1];
+               u8 rxerr = msg->msg.rx.data[2];
+               u8 txerr = msg->msg.rx.data[3];
+
+               skb = alloc_can_err_skb(priv->netdev, &cf);
+               if (skb == NULL) {
+                       stats->rx_dropped++;
+                       return;
+               }
+
+               if (state != priv->old_state) {
+                       priv->old_state = state;
+
+                       switch (state & ESD_BUSSTATE_MASK) {
+                       case ESD_BUSSTATE_BUSOFF:
+                               priv->can.state = CAN_STATE_BUS_OFF;
+                               cf->can_id |= CAN_ERR_BUSOFF;
+                               priv->can.can_stats.bus_off++;
+                               can_bus_off(priv->netdev);
+                               break;
+                       case ESD_BUSSTATE_WARN:
+                               priv->can.state = CAN_STATE_ERROR_WARNING;
+                               priv->can.can_stats.error_warning++;
+                               break;
+                       case ESD_BUSSTATE_ERRPASSIVE:
+                               priv->can.state = CAN_STATE_ERROR_PASSIVE;
+                               priv->can.can_stats.error_passive++;
+                               break;
+                       default:
+                               priv->can.state = CAN_STATE_ERROR_ACTIVE;
+                               break;
+                       }
+               } else {
+                       priv->can.can_stats.bus_error++;
+                       stats->rx_errors++;
+
+                       cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+                       switch (ecc & SJA1000_ECC_MASK) {
+                       case SJA1000_ECC_BIT:
+                               cf->data[2] |= CAN_ERR_PROT_BIT;
+                               break;
+                       case SJA1000_ECC_FORM:
+                               cf->data[2] |= CAN_ERR_PROT_FORM;
+                               break;
+                       case SJA1000_ECC_STUFF:
+                               cf->data[2] |= CAN_ERR_PROT_STUFF;
+                               break;
+                       default:
+                               cf->data[3] = ecc & SJA1000_ECC_SEG;
+                               break;
+                       }
+
+                       /* Error occurred during transmission? */
+                       if (!(ecc & SJA1000_ECC_DIR))
+                               cf->data[2] |= CAN_ERR_PROT_TX;
+
+                       if (priv->can.state == CAN_STATE_ERROR_WARNING ||
+                           priv->can.state == CAN_STATE_ERROR_PASSIVE) {
+                               cf->data[1] = (txerr > rxerr) ?
+                                       CAN_ERR_CRTL_TX_PASSIVE :
+                                       CAN_ERR_CRTL_RX_PASSIVE;
+                       }
+                       cf->data[6] = txerr;
+                       cf->data[7] = rxerr;
+               }
+
+               priv->bec.txerr = txerr;
+               priv->bec.rxerr = rxerr;
+
+               netif_rx(skb);
+       }
+}
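
The decode above follows the SJA1000 error code capture layout that the
device firmware emulates (see the SJA1000_ECC_* defines): bits 7-6 select
the error type, bit 5 the direction (set for reception), and bits 4-0 the
frame segment in which the error was captured. Restated as a hypothetical
helper, not driver code:

    #include <linux/types.h>

    struct ecc_info {
            u8 type;        /* SJA1000_ECC_BIT, _FORM or _STUFF */
            u8 segment;     /* frame segment in which the error hit */
            bool rx;        /* clear means the error hit a transmission */
    };

    static struct ecc_info decode_ecc(u8 ecc)
    {
            return (struct ecc_info){
                    .type    = ecc & 0xc0,  /* SJA1000_ECC_MASK */
                    .segment = ecc & 0x1f,  /* SJA1000_ECC_SEG */
                    .rx      = ecc & 0x20,  /* SJA1000_ECC_DIR */
            };
    }
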
+
+static void esd_usb_rx_can_msg(struct esd_usb_net_priv *priv,
+                              struct esd_usb_msg *msg)
+{
+       struct net_device_stats *stats = &priv->netdev->stats;
+       struct can_frame *cf;
+       struct sk_buff *skb;
+       int i;
+       u32 id;
+
+       if (!netif_device_present(priv->netdev))
+               return;
+
+       id = le32_to_cpu(msg->msg.rx.id);
+
+       if (id & ESD_EVENT) {
+               esd_usb_rx_event(priv, msg);
+       } else {
+               skb = alloc_can_skb(priv->netdev, &cf);
+               if (skb == NULL) {
+                       stats->rx_dropped++;
+                       return;
+               }
+
+               cf->can_id = id & ESD_IDMASK;
+               can_frame_set_cc_len(cf, msg->msg.rx.dlc & ~ESD_RTR,
+                                    priv->can.ctrlmode);
+
+               if (id & ESD_EXTID)
+                       cf->can_id |= CAN_EFF_FLAG;
+
+               if (msg->msg.rx.dlc & ESD_RTR) {
+                       cf->can_id |= CAN_RTR_FLAG;
+               } else {
+                       for (i = 0; i < cf->len; i++)
+                               cf->data[i] = msg->msg.rx.data[i];
+
+                       stats->rx_bytes += cf->len;
+               }
+               stats->rx_packets++;
+
+               netif_rx(skb);
+       }
+}
+
+static void esd_usb_tx_done_msg(struct esd_usb_net_priv *priv,
+                               struct esd_usb_msg *msg)
+{
+       struct net_device_stats *stats = &priv->netdev->stats;
+       struct net_device *netdev = priv->netdev;
+       struct esd_tx_urb_context *context;
+
+       if (!netif_device_present(netdev))
+               return;
+
+       context = &priv->tx_contexts[msg->msg.txdone.hnd & (MAX_TX_URBS - 1)];
+
+       if (!msg->msg.txdone.status) {
+               stats->tx_packets++;
+               stats->tx_bytes += can_get_echo_skb(netdev, context->echo_index,
+                                                   NULL);
+       } else {
+               stats->tx_errors++;
+               can_free_echo_skb(netdev, context->echo_index, NULL);
+       }
+
+       /* Release context */
+       context->echo_index = MAX_TX_URBS;
+       atomic_dec(&priv->active_tx_jobs);
+
+       netif_wake_queue(netdev);
+}
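
The `hnd & (MAX_TX_URBS - 1)' lookup is the reason the MAX_TX_URBS comment
insists on a power of two: subtracting one yields an all-ones mask over the
low bits, which both selects the context slot and strips the 0x80000000
marker the transmit path ORs into the handle. Illustratively:

    #include <linux/types.h>

    #define EX_MAX_TX_URBS 16u      /* must be a power of 2: 16 - 1 == 0xf */

    static unsigned int handle_to_slot(u32 hnd)
    {
            /* e.g. 0x80000005 & 0xf == 5; the MSB marker vanishes for free */
            return hnd & (EX_MAX_TX_URBS - 1);
    }
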
+
+static void esd_usb_read_bulk_callback(struct urb *urb)
+{
+       struct esd_usb *dev = urb->context;
+       int retval;
+       int pos = 0;
+       int i;
+
+       switch (urb->status) {
+       case 0: /* success */
+               break;
+
+       case -ENOENT:
+       case -EPIPE:
+       case -EPROTO:
+       case -ESHUTDOWN:
+               return;
+
+       default:
+               dev_info(dev->udev->dev.parent,
+                        "Rx URB aborted (%d)\n", urb->status);
+               goto resubmit_urb;
+       }
+
+       while (pos < urb->actual_length) {
+               struct esd_usb_msg *msg;
+
+               msg = (struct esd_usb_msg *)(urb->transfer_buffer + pos);
+
+               switch (msg->msg.hdr.cmd) {
+               case CMD_CAN_RX:
+                       if (msg->msg.rx.net >= dev->net_count) {
+                               dev_err(dev->udev->dev.parent, "format error\n");
+                               break;
+                       }
+
+                       esd_usb_rx_can_msg(dev->nets[msg->msg.rx.net], msg);
+                       break;
+
+               case CMD_CAN_TX:
+                       if (msg->msg.txdone.net >= dev->net_count) {
+                               dev_err(dev->udev->dev.parent, "format error\n");
+                               break;
+                       }
+
+                       esd_usb_tx_done_msg(dev->nets[msg->msg.txdone.net],
+                                           msg);
+                       break;
+               }
+
+               pos += msg->msg.hdr.len << 2;
+
+               if (pos > urb->actual_length) {
+                       dev_err(dev->udev->dev.parent, "format error\n");
+                       break;
+               }
+       }
+
+resubmit_urb:
+       usb_fill_bulk_urb(urb, dev->udev, usb_rcvbulkpipe(dev->udev, 1),
+                         urb->transfer_buffer, RX_BUFFER_SIZE,
+                         esd_usb_read_bulk_callback, dev);
+
+       retval = usb_submit_urb(urb, GFP_ATOMIC);
+       if (retval == -ENODEV) {
+               for (i = 0; i < dev->net_count; i++) {
+                       if (dev->nets[i])
+                               netif_device_detach(dev->nets[i]->netdev);
+               }
+       } else if (retval) {
+               dev_err(dev->udev->dev.parent,
+                       "failed resubmitting read bulk urb: %d\n", retval);
+       }
+}
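
Since header_msg.len counts 32-bit words, the loop above advances by
`len << 2' bytes per message and bails out if a message claims to extend
past the received length. The framing logic in isolation (a sketch reusing
the esd_usb_msg layout defined above):

    /* One bulk buffer may carry several back-to-back messages;
     * hdr.len is in 32-bit words, so each message occupies len << 2 bytes.
     */
    static void walk_messages(const u8 *buf, int actual_length)
    {
            int pos = 0;

            while (pos < actual_length) {
                    const struct esd_usb_msg *msg =
                            (const struct esd_usb_msg *)(buf + pos);

                    /* dispatch on msg->msg.hdr.cmd here */
                    pos += msg->msg.hdr.len << 2;   /* words -> bytes */
                    if (pos > actual_length)        /* truncated message */
                            break;
            }
    }
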
+
+/* callback for bulk TX (OUT) urb */
+static void esd_usb_write_bulk_callback(struct urb *urb)
+{
+       struct esd_tx_urb_context *context = urb->context;
+       struct esd_usb_net_priv *priv;
+       struct net_device *netdev;
+       size_t size = sizeof(struct esd_usb_msg);
+
+       WARN_ON(!context);
+
+       priv = context->priv;
+       netdev = priv->netdev;
+
+       /* free up our allocated buffer */
+       usb_free_coherent(urb->dev, size,
+                         urb->transfer_buffer, urb->transfer_dma);
+
+       if (!netif_device_present(netdev))
+               return;
+
+       if (urb->status)
+               netdev_info(netdev, "Tx URB aborted (%d)\n", urb->status);
+
+       netif_trans_update(netdev);
+}
+
+static ssize_t firmware_show(struct device *d,
+                            struct device_attribute *attr, char *buf)
+{
+       struct usb_interface *intf = to_usb_interface(d);
+       struct esd_usb *dev = usb_get_intfdata(intf);
+
+       return sprintf(buf, "%d.%d.%d\n",
+                      (dev->version >> 12) & 0xf,
+                      (dev->version >> 8) & 0xf,
+                      dev->version & 0xff);
+}
+static DEVICE_ATTR_RO(firmware);
+
+static ssize_t hardware_show(struct device *d,
+                            struct device_attribute *attr, char *buf)
+{
+       struct usb_interface *intf = to_usb_interface(d);
+       struct esd_usb *dev = usb_get_intfdata(intf);
+
+       return sprintf(buf, "%d.%d.%d\n",
+                      (dev->version >> 28) & 0xf,
+                      (dev->version >> 24) & 0xf,
+                      (dev->version >> 16) & 0xff);
+}
+static DEVICE_ATTR_RO(hardware);
+
+static ssize_t nets_show(struct device *d,
+                        struct device_attribute *attr, char *buf)
+{
+       struct usb_interface *intf = to_usb_interface(d);
+       struct esd_usb *dev = usb_get_intfdata(intf);
+
+       return sprintf(buf, "%d", dev->net_count);
+}
+static DEVICE_ATTR_RO(nets);
+
+static int esd_usb_send_msg(struct esd_usb *dev, struct esd_usb_msg *msg)
+{
+       int actual_length;
+
+       return usb_bulk_msg(dev->udev,
+                           usb_sndbulkpipe(dev->udev, 2),
+                           msg,
+                           msg->msg.hdr.len << 2,
+                           &actual_length,
+                           1000);
+}
+
+static int esd_usb_wait_msg(struct esd_usb *dev,
+                           struct esd_usb_msg *msg)
+{
+       int actual_length;
+
+       return usb_bulk_msg(dev->udev,
+                           usb_rcvbulkpipe(dev->udev, 1),
+                           msg,
+                           sizeof(*msg),
+                           &actual_length,
+                           1000);
+}
+
+static int esd_usb_setup_rx_urbs(struct esd_usb *dev)
+{
+       int i, err = 0;
+
+       if (dev->rxinitdone)
+               return 0;
+
+       for (i = 0; i < MAX_RX_URBS; i++) {
+               struct urb *urb = NULL;
+               u8 *buf = NULL;
+               dma_addr_t buf_dma;
+
+               /* create a URB, and a buffer for it */
+               urb = usb_alloc_urb(0, GFP_KERNEL);
+               if (!urb) {
+                       err = -ENOMEM;
+                       break;
+               }
+
+               buf = usb_alloc_coherent(dev->udev, RX_BUFFER_SIZE, GFP_KERNEL,
+                                        &buf_dma);
+               if (!buf) {
+                       dev_warn(dev->udev->dev.parent,
+                                "No memory left for USB buffer\n");
+                       err = -ENOMEM;
+                       goto freeurb;
+               }
+
+               urb->transfer_dma = buf_dma;
+
+               usb_fill_bulk_urb(urb, dev->udev,
+                                 usb_rcvbulkpipe(dev->udev, 1),
+                                 buf, RX_BUFFER_SIZE,
+                                 esd_usb_read_bulk_callback, dev);
+               urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+               usb_anchor_urb(urb, &dev->rx_submitted);
+
+               err = usb_submit_urb(urb, GFP_KERNEL);
+               if (err) {
+                       usb_unanchor_urb(urb);
+                       usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf,
+                                         urb->transfer_dma);
+                       goto freeurb;
+               }
+
+               dev->rxbuf[i] = buf;
+               dev->rxbuf_dma[i] = buf_dma;
+
+freeurb:
+               /* Drop reference, USB core will take care of freeing it */
+               usb_free_urb(urb);
+               if (err)
+                       break;
+       }
+
+       /* Did we submit any URBs? */
+       if (i == 0) {
+               dev_err(dev->udev->dev.parent, "couldn't setup read URBs\n");
+               return err;
+       }
+
+       /* Warn if we couldn't submit all the URBs */
+       if (i < MAX_RX_URBS) {
+               dev_warn(dev->udev->dev.parent,
+                        "rx performance may be slow\n");
+       }
+
+       dev->rxinitdone = 1;
+       return 0;
+}
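
Each RX slot above follows the pre-mapped coherent-buffer pattern: allocate
the URB and a DMA-coherent buffer, set URB_NO_TRANSFER_DMA_MAP so the USB
core does not map the buffer again, anchor the URB so unlink_all_urbs() can
cancel everything in one call, then drop the local reference after
submission. Condensed into one hypothetical helper (not driver code):

    #include <linux/usb.h>

    static int setup_one_rx_urb(struct usb_device *udev,
                                struct usb_anchor *anchor,
                                usb_complete_t complete, void *context)
    {
            struct urb *urb;
            dma_addr_t buf_dma;
            u8 *buf;
            int err;

            urb = usb_alloc_urb(0, GFP_KERNEL);
            if (!urb)
                    return -ENOMEM;

            buf = usb_alloc_coherent(udev, 1024, GFP_KERNEL, &buf_dma);
            if (!buf) {
                    usb_free_urb(urb);
                    return -ENOMEM;
            }

            usb_fill_bulk_urb(urb, udev, usb_rcvbulkpipe(udev, 1),
                              buf, 1024, complete, context);
            urb->transfer_dma = buf_dma;
            urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; /* pre-mapped */
            usb_anchor_urb(urb, anchor);

            err = usb_submit_urb(urb, GFP_KERNEL);
            if (err) {
                    usb_unanchor_urb(urb);
                    usb_free_coherent(udev, 1024, buf, buf_dma);
            }

            usb_free_urb(urb);      /* drop our ref; the core holds its own */
            return err;
    }
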
+
+/* Start interface */
+static int esd_usb_start(struct esd_usb_net_priv *priv)
+{
+       struct esd_usb *dev = priv->usb;
+       struct net_device *netdev = priv->netdev;
+       struct esd_usb_msg *msg;
+       int err, i;
+
+       msg = kmalloc(sizeof(*msg), GFP_KERNEL);
+       if (!msg) {
+               err = -ENOMEM;
+               goto out;
+       }
+
+       /* Enable all IDs
+        * The IDADD message takes up to 64 32-bit bitmasks (2048 bits).
+        * Each bit represents one 11-bit CAN identifier. A set bit
+        * enables reception of the corresponding CAN identifier. A cleared
+        * bit disables this identifier. An additional bitmask value
+        * following the CAN 2.0A bits is used to enable reception of
+        * extended CAN frames. Only the LSB of this final mask is checked
+        * for the complete 29-bit ID range. The IDADD message also allows
+        * filter configuration for an ID subset. In this case you can add
+        * the number of the starting bitmask (0..64) to the filter.option
+        * field, followed by only some of the bitmasks.
+        */
+       msg->msg.hdr.cmd = CMD_IDADD;
+       msg->msg.hdr.len = 2 + ESD_MAX_ID_SEGMENT;
+       msg->msg.filter.net = priv->index;
+       msg->msg.filter.option = ESD_ID_ENABLE; /* start with segment 0 */
+       for (i = 0; i < ESD_MAX_ID_SEGMENT; i++)
+               msg->msg.filter.mask[i] = cpu_to_le32(0xffffffff);
+       /* enable 29-bit extended IDs */
+       msg->msg.filter.mask[ESD_MAX_ID_SEGMENT] = cpu_to_le32(0x00000001);
+
+       err = esd_usb_send_msg(dev, msg);
+       if (err)
+               goto out;
+
+       err = esd_usb_setup_rx_urbs(dev);
+       if (err)
+               goto out;
+
+       priv->can.state = CAN_STATE_ERROR_ACTIVE;
+
+out:
+       if (err == -ENODEV)
+               netif_device_detach(netdev);
+       if (err)
+               netdev_err(netdev, "couldn't start device: %d\n", err);
+
+       kfree(msg);
+       return err;
+}
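
As the comment in esd_usb_start() explains, the 64 mask words cover the
standard identifiers 0..2047 with one bit each. A hypothetical helper (not
driver API) locating the filter bit for an 11-bit identifier:

    #include <linux/types.h>

    static void id_to_filter_pos(u16 can_id /* 0..2047 */,
                                 unsigned int *word, unsigned int *bit)
    {
            *word = can_id >> 5;    /* can_id / 32: which __le32 mask word */
            *bit  = can_id & 31;    /* can_id % 32: which bit inside it */
    }
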
+
+static void unlink_all_urbs(struct esd_usb *dev)
+{
+       struct esd_usb_net_priv *priv;
+       int i, j;
+
+       usb_kill_anchored_urbs(&dev->rx_submitted);
+
+       for (i = 0; i < MAX_RX_URBS; ++i)
+               usb_free_coherent(dev->udev, RX_BUFFER_SIZE,
+                                 dev->rxbuf[i], dev->rxbuf_dma[i]);
+
+       for (i = 0; i < dev->net_count; i++) {
+               priv = dev->nets[i];
+               if (priv) {
+                       usb_kill_anchored_urbs(&priv->tx_submitted);
+                       atomic_set(&priv->active_tx_jobs, 0);
+
+                       for (j = 0; j < MAX_TX_URBS; j++)
+                               priv->tx_contexts[j].echo_index = MAX_TX_URBS;
+               }
+       }
+}
+
+static int esd_usb_open(struct net_device *netdev)
+{
+       struct esd_usb_net_priv *priv = netdev_priv(netdev);
+       int err;
+
+       /* common open */
+       err = open_candev(netdev);
+       if (err)
+               return err;
+
+       /* finally start device */
+       err = esd_usb_start(priv);
+       if (err) {
+               netdev_warn(netdev, "couldn't start device: %d\n", err);
+               close_candev(netdev);
+               return err;
+       }
+
+       netif_start_queue(netdev);
+
+       return 0;
+}
+
+static netdev_tx_t esd_usb_start_xmit(struct sk_buff *skb,
+                                     struct net_device *netdev)
+{
+       struct esd_usb_net_priv *priv = netdev_priv(netdev);
+       struct esd_usb *dev = priv->usb;
+       struct esd_tx_urb_context *context = NULL;
+       struct net_device_stats *stats = &netdev->stats;
+       struct can_frame *cf = (struct can_frame *)skb->data;
+       struct esd_usb_msg *msg;
+       struct urb *urb;
+       u8 *buf;
+       int i, err;
+       int ret = NETDEV_TX_OK;
+       size_t size = sizeof(struct esd_usb_msg);
+
+       if (can_dropped_invalid_skb(netdev, skb))
+               return NETDEV_TX_OK;
+
+       /* create a URB, and a buffer for it, and copy the data to the URB */
+       urb = usb_alloc_urb(0, GFP_ATOMIC);
+       if (!urb) {
+               stats->tx_dropped++;
+               dev_kfree_skb(skb);
+               goto nourbmem;
+       }
+
+       buf = usb_alloc_coherent(dev->udev, size, GFP_ATOMIC,
+                                &urb->transfer_dma);
+       if (!buf) {
+               netdev_err(netdev, "No memory left for USB buffer\n");
+               stats->tx_dropped++;
+               dev_kfree_skb(skb);
+               goto nobufmem;
+       }
+
+       msg = (struct esd_usb_msg *)buf;
+
+       msg->msg.hdr.len = 3; /* minimal length */
+       msg->msg.hdr.cmd = CMD_CAN_TX;
+       msg->msg.tx.net = priv->index;
+       msg->msg.tx.dlc = can_get_cc_dlc(cf, priv->can.ctrlmode);
+       msg->msg.tx.id = cpu_to_le32(cf->can_id & CAN_ERR_MASK);
+
+       if (cf->can_id & CAN_RTR_FLAG)
+               msg->msg.tx.dlc |= ESD_RTR;
+
+       if (cf->can_id & CAN_EFF_FLAG)
+               msg->msg.tx.id |= cpu_to_le32(ESD_EXTID);
+
+       for (i = 0; i < cf->len; i++)
+               msg->msg.tx.data[i] = cf->data[i];
+
+       msg->msg.hdr.len += (cf->len + 3) >> 2;
+
+       for (i = 0; i < MAX_TX_URBS; i++) {
+               if (priv->tx_contexts[i].echo_index == MAX_TX_URBS) {
+                       context = &priv->tx_contexts[i];
+                       break;
+               }
+       }
+
+       /* This should never happen */
+       if (!context) {
+               netdev_warn(netdev, "couldn't find free context\n");
+               ret = NETDEV_TX_BUSY;
+               goto releasebuf;
+       }
+
+       context->priv = priv;
+       context->echo_index = i;
+
+       /* hnd must not be 0 - MSB is stripped in txdone handling */
+       msg->msg.tx.hnd = 0x80000000 | i; /* returned in TX done message */
+
+       usb_fill_bulk_urb(urb, dev->udev, usb_sndbulkpipe(dev->udev, 2), buf,
+                         msg->msg.hdr.len << 2,
+                         esd_usb_write_bulk_callback, context);
+
+       urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+
+       usb_anchor_urb(urb, &priv->tx_submitted);
+
+       can_put_echo_skb(skb, netdev, context->echo_index, 0);
+
+       atomic_inc(&priv->active_tx_jobs);
+
+       /* Slow down tx path */
+       if (atomic_read(&priv->active_tx_jobs) >= MAX_TX_URBS)
+               netif_stop_queue(netdev);
+
+       err = usb_submit_urb(urb, GFP_ATOMIC);
+       if (err) {
+               can_free_echo_skb(netdev, context->echo_index, NULL);
+
+               atomic_dec(&priv->active_tx_jobs);
+               usb_unanchor_urb(urb);
+
+               stats->tx_dropped++;
+
+               if (err == -ENODEV)
+                       netif_device_detach(netdev);
+               else
+                       netdev_warn(netdev, "failed tx_urb %d\n", err);
+
+               goto releasebuf;
+       }
+
+       netif_trans_update(netdev);
+
+       /* Release our reference to this URB; the USB core will eventually
+        * free it entirely.
+        */
+       usb_free_urb(urb);
+
+       return NETDEV_TX_OK;
+
+releasebuf:
+       usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
+
+nobufmem:
+       usb_free_urb(urb);
+
+nourbmem:
+       return ret;
+}
+
+static int esd_usb_close(struct net_device *netdev)
+{
+       struct esd_usb_net_priv *priv = netdev_priv(netdev);
+       struct esd_usb_msg *msg;
+       int i;
+
+       msg = kmalloc(sizeof(*msg), GFP_KERNEL);
+       if (!msg)
+               return -ENOMEM;
+
+       /* Disable all IDs (see esd_usb_start()) */
+       msg->msg.hdr.cmd = CMD_IDADD;
+       msg->msg.hdr.len = 2 + ESD_MAX_ID_SEGMENT;
+       msg->msg.filter.net = priv->index;
+       msg->msg.filter.option = ESD_ID_ENABLE; /* start with segment 0 */
+       for (i = 0; i <= ESD_MAX_ID_SEGMENT; i++)
+               msg->msg.filter.mask[i] = 0;
+       if (esd_usb_send_msg(priv->usb, msg) < 0)
+               netdev_err(netdev, "sending idadd message failed\n");
+
+       /* set CAN controller to reset mode */
+       msg->msg.hdr.len = 2;
+       msg->msg.hdr.cmd = CMD_SETBAUD;
+       msg->msg.setbaud.net = priv->index;
+       msg->msg.setbaud.rsvd = 0;
+       msg->msg.setbaud.baud = cpu_to_le32(ESD_USB_NO_BAUDRATE);
+       if (esd_usb_send_msg(priv->usb, msg) < 0)
+               netdev_err(netdev, "sending setbaud message failed\n");
+
+       priv->can.state = CAN_STATE_STOPPED;
+
+       netif_stop_queue(netdev);
+
+       close_candev(netdev);
+
+       kfree(msg);
+
+       return 0;
+}
+
+static const struct net_device_ops esd_usb_netdev_ops = {
+       .ndo_open = esd_usb_open,
+       .ndo_stop = esd_usb_close,
+       .ndo_start_xmit = esd_usb_start_xmit,
+       .ndo_change_mtu = can_change_mtu,
+};
+
+static const struct can_bittiming_const esd_usb2_bittiming_const = {
+       .name = "esd_usb2",
+       .tseg1_min = ESD_USB2_TSEG1_MIN,
+       .tseg1_max = ESD_USB2_TSEG1_MAX,
+       .tseg2_min = ESD_USB2_TSEG2_MIN,
+       .tseg2_max = ESD_USB2_TSEG2_MAX,
+       .sjw_max = ESD_USB2_SJW_MAX,
+       .brp_min = ESD_USB2_BRP_MIN,
+       .brp_max = ESD_USB2_BRP_MAX,
+       .brp_inc = ESD_USB2_BRP_INC,
+};
+
+static int esd_usb2_set_bittiming(struct net_device *netdev)
+{
+       struct esd_usb_net_priv *priv = netdev_priv(netdev);
+       struct can_bittiming *bt = &priv->can.bittiming;
+       struct esd_usb_msg *msg;
+       int err;
+       u32 canbtr;
+       int sjw_shift;
+
+       canbtr = ESD_USB_UBR;
+       if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
+               canbtr |= ESD_USB_LOM;
+
+       canbtr |= (bt->brp - 1) & (ESD_USB2_BRP_MAX - 1);
+
+       if (le16_to_cpu(priv->usb->udev->descriptor.idProduct) ==
+           USB_CANUSBM_PRODUCT_ID)
+               sjw_shift = ESD_USBM_SJW_SHIFT;
+       else
+               sjw_shift = ESD_USB2_SJW_SHIFT;
+
+       canbtr |= ((bt->sjw - 1) & (ESD_USB2_SJW_MAX - 1))
+               << sjw_shift;
+       canbtr |= ((bt->prop_seg + bt->phase_seg1 - 1)
+                  & (ESD_USB2_TSEG1_MAX - 1))
+               << ESD_USB2_TSEG1_SHIFT;
+       canbtr |= ((bt->phase_seg2 - 1) & (ESD_USB2_TSEG2_MAX - 1))
+               << ESD_USB2_TSEG2_SHIFT;
+       if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
+               canbtr |= ESD_USB2_3_SAMPLES;
+
+       msg = kmalloc(sizeof(*msg), GFP_KERNEL);
+       if (!msg)
+               return -ENOMEM;
+
+       msg->msg.hdr.len = 2;
+       msg->msg.hdr.cmd = CMD_SETBAUD;
+       msg->msg.setbaud.net = priv->index;
+       msg->msg.setbaud.rsvd = 0;
+       msg->msg.setbaud.baud = cpu_to_le32(canbtr);
+
+       netdev_info(netdev, "setting BTR=%#x\n", canbtr);
+
+       err = esd_usb_send_msg(priv->usb, msg);
+
+       kfree(msg);
+       return err;
+}
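
To make the packing concrete, here is one illustrative operating point (the
real values come from the CAN bit-timing framework, not from the driver):
at the 60 MHz CAN-USB/2 clock, brp = 8 with 15 time quanta per bit
(sync + tseg1 = 11 + tseg2 = 3) gives 60 MHz / (8 * 15) = 500 kbit/s, and
with sjw = 1 the word packs to 0x802a0007:

    #include <linux/types.h>

    #define EX_UBR          0x80000000u     /* ESD_USB_UBR */
    #define EX_BRP          8u
    #define EX_TSEG1        11u             /* prop_seg + phase_seg1 */
    #define EX_TSEG2        3u
    #define EX_SJW          1u

    static u32 example_canbtr(void)
    {
            return EX_UBR |
                   ((EX_BRP - 1) & 1023) |              /* 0x00000007 */
                   (((EX_SJW - 1) & 3) << 14) |         /* 0x00000000 */
                   (((EX_TSEG1 - 1) & 15) << 16) |      /* 0x000a0000 */
                   (((EX_TSEG2 - 1) & 7) << 20);        /* 0x00200000 */
    }                                                   /* = 0x802a0007 */
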
+
+static int esd_usb_get_berr_counter(const struct net_device *netdev,
+                                   struct can_berr_counter *bec)
+{
+       struct esd_usb_net_priv *priv = netdev_priv(netdev);
+
+       bec->txerr = priv->bec.txerr;
+       bec->rxerr = priv->bec.rxerr;
+
+       return 0;
+}
+
+static int esd_usb_set_mode(struct net_device *netdev, enum can_mode mode)
+{
+       switch (mode) {
+       case CAN_MODE_START:
+               netif_wake_queue(netdev);
+               break;
+
+       default:
+               return -EOPNOTSUPP;
+       }
+
+       return 0;
+}
+
+static int esd_usb_probe_one_net(struct usb_interface *intf, int index)
+{
+       struct esd_usb *dev = usb_get_intfdata(intf);
+       struct net_device *netdev;
+       struct esd_usb_net_priv *priv;
+       int err = 0;
+       int i;
+
+       netdev = alloc_candev(sizeof(*priv), MAX_TX_URBS);
+       if (!netdev) {
+               dev_err(&intf->dev, "couldn't alloc candev\n");
+               err = -ENOMEM;
+               goto done;
+       }
+
+       priv = netdev_priv(netdev);
+
+       init_usb_anchor(&priv->tx_submitted);
+       atomic_set(&priv->active_tx_jobs, 0);
+
+       for (i = 0; i < MAX_TX_URBS; i++)
+               priv->tx_contexts[i].echo_index = MAX_TX_URBS;
+
+       priv->usb = dev;
+       priv->netdev = netdev;
+       priv->index = index;
+
+       priv->can.state = CAN_STATE_STOPPED;
+       priv->can.ctrlmode_supported = CAN_CTRLMODE_LISTENONLY |
+               CAN_CTRLMODE_CC_LEN8_DLC;
+
+       if (le16_to_cpu(dev->udev->descriptor.idProduct) ==
+           USB_CANUSBM_PRODUCT_ID)
+               priv->can.clock.freq = ESD_USBM_CAN_CLOCK;
+       else {
+               priv->can.clock.freq = ESD_USB2_CAN_CLOCK;
+               priv->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
+       }
+
+       priv->can.bittiming_const = &esd_usb2_bittiming_const;
+       priv->can.do_set_bittiming = esd_usb2_set_bittiming;
+       priv->can.do_set_mode = esd_usb_set_mode;
+       priv->can.do_get_berr_counter = esd_usb_get_berr_counter;
+
+       netdev->flags |= IFF_ECHO; /* we support local echo */
+
+       netdev->netdev_ops = &esd_usb_netdev_ops;
+
+       SET_NETDEV_DEV(netdev, &intf->dev);
+       netdev->dev_id = index;
+
+       err = register_candev(netdev);
+       if (err) {
+               dev_err(&intf->dev, "couldn't register CAN device: %d\n", err);
+               free_candev(netdev);
+               goto done;
+       }
+
+       dev->nets[index] = priv;
+       netdev_info(netdev, "device %s registered\n", netdev->name);
+
+done:
+       return err;
+}
+
+/* probe function for new USB devices
+ *
+ * Checks the version information and the number of available
+ * CAN interfaces.
+ */
+static int esd_usb_probe(struct usb_interface *intf,
+                        const struct usb_device_id *id)
+{
+       struct esd_usb *dev;
+       struct esd_usb_msg *msg;
+       int i, err;
+
+       dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+       if (!dev) {
+               err = -ENOMEM;
+               goto done;
+       }
+
+       dev->udev = interface_to_usbdev(intf);
+
+       init_usb_anchor(&dev->rx_submitted);
+
+       usb_set_intfdata(intf, dev);
+
+       msg = kmalloc(sizeof(*msg), GFP_KERNEL);
+       if (!msg) {
+               err = -ENOMEM;
+               goto free_msg;
+       }
+
+       /* query number of CAN interfaces (nets) */
+       msg->msg.hdr.cmd = CMD_VERSION;
+       msg->msg.hdr.len = 2;
+       msg->msg.version.rsvd = 0;
+       msg->msg.version.flags = 0;
+       msg->msg.version.drv_version = 0;
+
+       err = esd_usb_send_msg(dev, msg);
+       if (err < 0) {
+               dev_err(&intf->dev, "sending version message failed\n");
+               goto free_msg;
+       }
+
+       err = esd_usb_wait_msg(dev, msg);
+       if (err < 0) {
+               dev_err(&intf->dev, "no version message answer\n");
+               goto free_msg;
+       }
+
+       dev->net_count = (int)msg->msg.version_reply.nets;
+       dev->version = le32_to_cpu(msg->msg.version_reply.version);
+
+       if (device_create_file(&intf->dev, &dev_attr_firmware))
+               dev_err(&intf->dev,
+                       "Couldn't create device file for firmware\n");
+
+       if (device_create_file(&intf->dev, &dev_attr_hardware))
+               dev_err(&intf->dev,
+                       "Couldn't create device file for hardware\n");
+
+       if (device_create_file(&intf->dev, &dev_attr_nets))
+               dev_err(&intf->dev,
+                       "Couldn't create device file for nets\n");
+
+       /* do per-device probing */
+       for (i = 0; i < dev->net_count; i++)
+               esd_usb_probe_one_net(intf, i);
+
+free_msg:
+       kfree(msg);
+       if (err)
+               kfree(dev);
+done:
+       return err;
+}
+
+/* called by the usb core when the device is removed from the system */
+static void esd_usb_disconnect(struct usb_interface *intf)
+{
+       struct esd_usb *dev = usb_get_intfdata(intf);
+       struct net_device *netdev;
+       int i;
+
+       device_remove_file(&intf->dev, &dev_attr_firmware);
+       device_remove_file(&intf->dev, &dev_attr_hardware);
+       device_remove_file(&intf->dev, &dev_attr_nets);
+
+       usb_set_intfdata(intf, NULL);
+
+       if (dev) {
+               for (i = 0; i < dev->net_count; i++) {
+                       if (dev->nets[i]) {
+                               netdev = dev->nets[i]->netdev;
+                               unregister_netdev(netdev);
+                               free_candev(netdev);
+                       }
+               }
+               unlink_all_urbs(dev);
+               kfree(dev);
+       }
+}
+
+/* usb specific object needed to register this driver with the usb subsystem */
+static struct usb_driver esd_usb_driver = {
+       .name = "esd_usb",
+       .probe = esd_usb_probe,
+       .disconnect = esd_usb_disconnect,
+       .id_table = esd_usb_table,
+};
+
+module_usb_driver(esd_usb_driver);
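 
For reference, the CMD_VERSION reply queried in the probe routine above stores a single 32-bit word in dev->version; the firmware/hardware sysfs attributes (visible in the removed esd_usb2.c below) split it into sub-fields. A stand-alone sketch of that decoding, with an arbitrary example value:

        #include <stdint.h>
        #include <stdio.h>

        /* Split the 32-bit CMD_VERSION reply word the way the driver's
         * firmware_show()/hardware_show() sysfs attributes do.
         */
        static void esd_print_versions(uint32_t v)
        {
                printf("firmware %u.%u.%u\n", (unsigned)((v >> 12) & 0xf),
                       (unsigned)((v >> 8) & 0xf), (unsigned)(v & 0xff));
                printf("hardware %u.%u.%u\n", (unsigned)((v >> 28) & 0xf),
                       (unsigned)((v >> 24) & 0xf), (unsigned)((v >> 16) & 0xff));
        }

        int main(void)
        {
                esd_print_versions(0x12345678); /* arbitrary example value */
                return 0;
        }
 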
diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
deleted file mode 100644
index 286daaa..0000000
+++ /dev/null
@@ -1,1154 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * CAN driver for esd CAN-USB/2 and CAN-USB/Micro
- *
- * Copyright (C) 2010-2012 Matthias Fuchs <matthias.fuchs@esd.eu>, esd gmbh
- */
-#include <linux/signal.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/usb.h>
-
-#include <linux/can.h>
-#include <linux/can/dev.h>
-#include <linux/can/error.h>
-
-MODULE_AUTHOR("Matthias Fuchs <matthias.fuchs@esd.eu>");
-MODULE_DESCRIPTION("CAN driver for esd CAN-USB/2 and CAN-USB/Micro interfaces");
-MODULE_LICENSE("GPL v2");
-
-/* Define these values to match your devices */
-#define USB_ESDGMBH_VENDOR_ID  0x0ab4
-#define USB_CANUSB2_PRODUCT_ID 0x0010
-#define USB_CANUSBM_PRODUCT_ID 0x0011
-
-#define ESD_USB2_CAN_CLOCK     60000000
-#define ESD_USBM_CAN_CLOCK     36000000
-#define ESD_USB2_MAX_NETS      2
-
-/* USB2 commands */
-#define CMD_VERSION            1 /* also used for VERSION_REPLY */
-#define CMD_CAN_RX             2 /* device to host only */
-#define CMD_CAN_TX             3 /* also used for TX_DONE */
-#define CMD_SETBAUD            4 /* also used for SETBAUD_REPLY */
-#define CMD_TS                 5 /* also used for TS_REPLY */
-#define CMD_IDADD              6 /* also used for IDADD_REPLY */
-
-/* esd CAN message flags - dlc field */
-#define ESD_RTR                        0x10
-
-/* esd CAN message flags - id field */
-#define ESD_EXTID              0x20000000
-#define ESD_EVENT              0x40000000
-#define ESD_IDMASK             0x1fffffff
-
-/* esd CAN event ids used by this driver */
-#define ESD_EV_CAN_ERROR_EXT   2
-
-/* baudrate message flags */
-#define ESD_USB2_UBR           0x80000000
-#define ESD_USB2_LOM           0x40000000
-#define ESD_USB2_NO_BAUDRATE   0x7fffffff
-#define ESD_USB2_TSEG1_MIN     1
-#define ESD_USB2_TSEG1_MAX     16
-#define ESD_USB2_TSEG1_SHIFT   16
-#define ESD_USB2_TSEG2_MIN     1
-#define ESD_USB2_TSEG2_MAX     8
-#define ESD_USB2_TSEG2_SHIFT   20
-#define ESD_USB2_SJW_MAX       4
-#define ESD_USB2_SJW_SHIFT     14
-#define ESD_USBM_SJW_SHIFT     24
-#define ESD_USB2_BRP_MIN       1
-#define ESD_USB2_BRP_MAX       1024
-#define ESD_USB2_BRP_INC       1
-#define ESD_USB2_3_SAMPLES     0x00800000
-
-/* esd IDADD message */
-#define ESD_ID_ENABLE          0x80
-#define ESD_MAX_ID_SEGMENT     64
-
-/* SJA1000 ECC register (emulated by usb2 firmware) */
-#define SJA1000_ECC_SEG                0x1F
-#define SJA1000_ECC_DIR                0x20
-#define SJA1000_ECC_ERR                0x06
-#define SJA1000_ECC_BIT                0x00
-#define SJA1000_ECC_FORM       0x40
-#define SJA1000_ECC_STUFF      0x80
-#define SJA1000_ECC_MASK       0xc0
-
-/* esd bus state event codes */
-#define ESD_BUSSTATE_MASK      0xc0
-#define ESD_BUSSTATE_WARN      0x40
-#define ESD_BUSSTATE_ERRPASSIVE        0x80
-#define ESD_BUSSTATE_BUSOFF    0xc0
-
-#define RX_BUFFER_SIZE         1024
-#define MAX_RX_URBS            4
-#define MAX_TX_URBS            16 /* must be power of 2 */
-
-struct header_msg {
-       u8 len; /* len is always the total message length in 32-bit words */
-       u8 cmd;
-       u8 rsvd[2];
-};
-
-struct version_msg {
-       u8 len;
-       u8 cmd;
-       u8 rsvd;
-       u8 flags;
-       __le32 drv_version;
-};
-
-struct version_reply_msg {
-       u8 len;
-       u8 cmd;
-       u8 nets;
-       u8 features;
-       __le32 version;
-       u8 name[16];
-       __le32 rsvd;
-       __le32 ts;
-};
-
-struct rx_msg {
-       u8 len;
-       u8 cmd;
-       u8 net;
-       u8 dlc;
-       __le32 ts;
-       __le32 id; /* upper 3 bits contain flags */
-       u8 data[8];
-};
-
-struct tx_msg {
-       u8 len;
-       u8 cmd;
-       u8 net;
-       u8 dlc;
-       u32 hnd;        /* opaque handle, not used by device */
-       __le32 id; /* upper 3 bits contain flags */
-       u8 data[8];
-};
-
-struct tx_done_msg {
-       u8 len;
-       u8 cmd;
-       u8 net;
-       u8 status;
-       u32 hnd;        /* opaque handle, not used by device */
-       __le32 ts;
-};
-
-struct id_filter_msg {
-       u8 len;
-       u8 cmd;
-       u8 net;
-       u8 option;
-       __le32 mask[ESD_MAX_ID_SEGMENT + 1];
-};
-
-struct set_baudrate_msg {
-       u8 len;
-       u8 cmd;
-       u8 net;
-       u8 rsvd;
-       __le32 baud;
-};
-
-/* Main message type used between library and application */
-struct __attribute__ ((packed)) esd_usb2_msg {
-       union {
-               struct header_msg hdr;
-               struct version_msg version;
-               struct version_reply_msg version_reply;
-               struct rx_msg rx;
-               struct tx_msg tx;
-               struct tx_done_msg txdone;
-               struct set_baudrate_msg setbaud;
-               struct id_filter_msg filter;
-       } msg;
-};
-
-static struct usb_device_id esd_usb2_table[] = {
-       {USB_DEVICE(USB_ESDGMBH_VENDOR_ID, USB_CANUSB2_PRODUCT_ID)},
-       {USB_DEVICE(USB_ESDGMBH_VENDOR_ID, USB_CANUSBM_PRODUCT_ID)},
-       {}
-};
-MODULE_DEVICE_TABLE(usb, esd_usb2_table);
-
-struct esd_usb2_net_priv;
-
-struct esd_tx_urb_context {
-       struct esd_usb2_net_priv *priv;
-       u32 echo_index;
-};
-
-struct esd_usb2 {
-       struct usb_device *udev;
-       struct esd_usb2_net_priv *nets[ESD_USB2_MAX_NETS];
-
-       struct usb_anchor rx_submitted;
-
-       int net_count;
-       u32 version;
-       int rxinitdone;
-       void *rxbuf[MAX_RX_URBS];
-       dma_addr_t rxbuf_dma[MAX_RX_URBS];
-};
-
-struct esd_usb2_net_priv {
-       struct can_priv can; /* must be the first member */
-
-       atomic_t active_tx_jobs;
-       struct usb_anchor tx_submitted;
-       struct esd_tx_urb_context tx_contexts[MAX_TX_URBS];
-
-       struct esd_usb2 *usb2;
-       struct net_device *netdev;
-       int index;
-       u8 old_state;
-       struct can_berr_counter bec;
-};
-
-static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
-                             struct esd_usb2_msg *msg)
-{
-       struct net_device_stats *stats = &priv->netdev->stats;
-       struct can_frame *cf;
-       struct sk_buff *skb;
-       u32 id = le32_to_cpu(msg->msg.rx.id) & ESD_IDMASK;
-
-       if (id == ESD_EV_CAN_ERROR_EXT) {
-               u8 state = msg->msg.rx.data[0];
-               u8 ecc = msg->msg.rx.data[1];
-               u8 rxerr = msg->msg.rx.data[2];
-               u8 txerr = msg->msg.rx.data[3];
-
-               skb = alloc_can_err_skb(priv->netdev, &cf);
-               if (skb == NULL) {
-                       stats->rx_dropped++;
-                       return;
-               }
-
-               if (state != priv->old_state) {
-                       priv->old_state = state;
-
-                       switch (state & ESD_BUSSTATE_MASK) {
-                       case ESD_BUSSTATE_BUSOFF:
-                               priv->can.state = CAN_STATE_BUS_OFF;
-                               cf->can_id |= CAN_ERR_BUSOFF;
-                               priv->can.can_stats.bus_off++;
-                               can_bus_off(priv->netdev);
-                               break;
-                       case ESD_BUSSTATE_WARN:
-                               priv->can.state = CAN_STATE_ERROR_WARNING;
-                               priv->can.can_stats.error_warning++;
-                               break;
-                       case ESD_BUSSTATE_ERRPASSIVE:
-                               priv->can.state = CAN_STATE_ERROR_PASSIVE;
-                               priv->can.can_stats.error_passive++;
-                               break;
-                       default:
-                               priv->can.state = CAN_STATE_ERROR_ACTIVE;
-                               break;
-                       }
-               } else {
-                       priv->can.can_stats.bus_error++;
-                       stats->rx_errors++;
-
-                       cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
-
-                       switch (ecc & SJA1000_ECC_MASK) {
-                       case SJA1000_ECC_BIT:
-                               cf->data[2] |= CAN_ERR_PROT_BIT;
-                               break;
-                       case SJA1000_ECC_FORM:
-                               cf->data[2] |= CAN_ERR_PROT_FORM;
-                               break;
-                       case SJA1000_ECC_STUFF:
-                               cf->data[2] |= CAN_ERR_PROT_STUFF;
-                               break;
-                       default:
-                               cf->data[3] = ecc & SJA1000_ECC_SEG;
-                               break;
-                       }
-
-                       /* Error occurred during transmission? */
-                       if (!(ecc & SJA1000_ECC_DIR))
-                               cf->data[2] |= CAN_ERR_PROT_TX;
-
-                       if (priv->can.state == CAN_STATE_ERROR_WARNING ||
-                           priv->can.state == CAN_STATE_ERROR_PASSIVE) {
-                               cf->data[1] = (txerr > rxerr) ?
-                                       CAN_ERR_CRTL_TX_PASSIVE :
-                                       CAN_ERR_CRTL_RX_PASSIVE;
-                       }
-                       cf->data[6] = txerr;
-                       cf->data[7] = rxerr;
-               }
-
-               priv->bec.txerr = txerr;
-               priv->bec.rxerr = rxerr;
-
-               netif_rx(skb);
-       }
-}
-
-static void esd_usb2_rx_can_msg(struct esd_usb2_net_priv *priv,
-                               struct esd_usb2_msg *msg)
-{
-       struct net_device_stats *stats = &priv->netdev->stats;
-       struct can_frame *cf;
-       struct sk_buff *skb;
-       int i;
-       u32 id;
-
-       if (!netif_device_present(priv->netdev))
-               return;
-
-       id = le32_to_cpu(msg->msg.rx.id);
-
-       if (id & ESD_EVENT) {
-               esd_usb2_rx_event(priv, msg);
-       } else {
-               skb = alloc_can_skb(priv->netdev, &cf);
-               if (skb == NULL) {
-                       stats->rx_dropped++;
-                       return;
-               }
-
-               cf->can_id = id & ESD_IDMASK;
-               can_frame_set_cc_len(cf, msg->msg.rx.dlc & ~ESD_RTR,
-                                    priv->can.ctrlmode);
-
-               if (id & ESD_EXTID)
-                       cf->can_id |= CAN_EFF_FLAG;
-
-               if (msg->msg.rx.dlc & ESD_RTR) {
-                       cf->can_id |= CAN_RTR_FLAG;
-               } else {
-                       for (i = 0; i < cf->len; i++)
-                               cf->data[i] = msg->msg.rx.data[i];
-
-                       stats->rx_bytes += cf->len;
-               }
-               stats->rx_packets++;
-
-               netif_rx(skb);
-       }
-
-       return;
-}
-
-static void esd_usb2_tx_done_msg(struct esd_usb2_net_priv *priv,
-                                struct esd_usb2_msg *msg)
-{
-       struct net_device_stats *stats = &priv->netdev->stats;
-       struct net_device *netdev = priv->netdev;
-       struct esd_tx_urb_context *context;
-
-       if (!netif_device_present(netdev))
-               return;
-
-       context = &priv->tx_contexts[msg->msg.txdone.hnd & (MAX_TX_URBS - 1)];
-
-       if (!msg->msg.txdone.status) {
-               stats->tx_packets++;
-               stats->tx_bytes += can_get_echo_skb(netdev, context->echo_index,
-                                                   NULL);
-       } else {
-               stats->tx_errors++;
-               can_free_echo_skb(netdev, context->echo_index, NULL);
-       }
-
-       /* Release context */
-       context->echo_index = MAX_TX_URBS;
-       atomic_dec(&priv->active_tx_jobs);
-
-       netif_wake_queue(netdev);
-}
-
-static void esd_usb2_read_bulk_callback(struct urb *urb)
-{
-       struct esd_usb2 *dev = urb->context;
-       int retval;
-       int pos = 0;
-       int i;
-
-       switch (urb->status) {
-       case 0: /* success */
-               break;
-
-       case -ENOENT:
-       case -EPIPE:
-       case -EPROTO:
-       case -ESHUTDOWN:
-               return;
-
-       default:
-               dev_info(dev->udev->dev.parent,
-                        "Rx URB aborted (%d)\n", urb->status);
-               goto resubmit_urb;
-       }
-
-       while (pos < urb->actual_length) {
-               struct esd_usb2_msg *msg;
-
-               msg = (struct esd_usb2_msg *)(urb->transfer_buffer + pos);
-
-               switch (msg->msg.hdr.cmd) {
-               case CMD_CAN_RX:
-                       if (msg->msg.rx.net >= dev->net_count) {
-                               dev_err(dev->udev->dev.parent, "format error\n");
-                               break;
-                       }
-
-                       esd_usb2_rx_can_msg(dev->nets[msg->msg.rx.net], msg);
-                       break;
-
-               case CMD_CAN_TX:
-                       if (msg->msg.txdone.net >= dev->net_count) {
-                               dev_err(dev->udev->dev.parent, "format error\n");
-                               break;
-                       }
-
-                       esd_usb2_tx_done_msg(dev->nets[msg->msg.txdone.net],
-                                            msg);
-                       break;
-               }
-
-               pos += msg->msg.hdr.len << 2;
-
-               if (pos > urb->actual_length) {
-                       dev_err(dev->udev->dev.parent, "format error\n");
-                       break;
-               }
-       }
-
-resubmit_urb:
-       usb_fill_bulk_urb(urb, dev->udev, usb_rcvbulkpipe(dev->udev, 1),
-                         urb->transfer_buffer, RX_BUFFER_SIZE,
-                         esd_usb2_read_bulk_callback, dev);
-
-       retval = usb_submit_urb(urb, GFP_ATOMIC);
-       if (retval == -ENODEV) {
-               for (i = 0; i < dev->net_count; i++) {
-                       if (dev->nets[i])
-                               netif_device_detach(dev->nets[i]->netdev);
-               }
-       } else if (retval) {
-               dev_err(dev->udev->dev.parent,
-                       "failed resubmitting read bulk urb: %d\n", retval);
-       }
-
-       return;
-}
-
-/*
- * callback for bulk OUT urb
- */
-static void esd_usb2_write_bulk_callback(struct urb *urb)
-{
-       struct esd_tx_urb_context *context = urb->context;
-       struct esd_usb2_net_priv *priv;
-       struct net_device *netdev;
-       size_t size = sizeof(struct esd_usb2_msg);
-
-       WARN_ON(!context);
-
-       priv = context->priv;
-       netdev = priv->netdev;
-
-       /* free up our allocated buffer */
-       usb_free_coherent(urb->dev, size,
-                         urb->transfer_buffer, urb->transfer_dma);
-
-       if (!netif_device_present(netdev))
-               return;
-
-       if (urb->status)
-               netdev_info(netdev, "Tx URB aborted (%d)\n", urb->status);
-
-       netif_trans_update(netdev);
-}
-
-static ssize_t firmware_show(struct device *d,
-                            struct device_attribute *attr, char *buf)
-{
-       struct usb_interface *intf = to_usb_interface(d);
-       struct esd_usb2 *dev = usb_get_intfdata(intf);
-
-       return sprintf(buf, "%d.%d.%d\n",
-                      (dev->version >> 12) & 0xf,
-                      (dev->version >> 8) & 0xf,
-                      dev->version & 0xff);
-}
-static DEVICE_ATTR_RO(firmware);
-
-static ssize_t hardware_show(struct device *d,
-                            struct device_attribute *attr, char *buf)
-{
-       struct usb_interface *intf = to_usb_interface(d);
-       struct esd_usb2 *dev = usb_get_intfdata(intf);
-
-       return sprintf(buf, "%d.%d.%d\n",
-                      (dev->version >> 28) & 0xf,
-                      (dev->version >> 24) & 0xf,
-                      (dev->version >> 16) & 0xff);
-}
-static DEVICE_ATTR_RO(hardware);
-
-static ssize_t nets_show(struct device *d,
-                        struct device_attribute *attr, char *buf)
-{
-       struct usb_interface *intf = to_usb_interface(d);
-       struct esd_usb2 *dev = usb_get_intfdata(intf);
-
-       return sprintf(buf, "%d", dev->net_count);
-}
-static DEVICE_ATTR_RO(nets);
-
-static int esd_usb2_send_msg(struct esd_usb2 *dev, struct esd_usb2_msg *msg)
-{
-       int actual_length;
-
-       return usb_bulk_msg(dev->udev,
-                           usb_sndbulkpipe(dev->udev, 2),
-                           msg,
-                           msg->msg.hdr.len << 2,
-                           &actual_length,
-                           1000);
-}
-
-static int esd_usb2_wait_msg(struct esd_usb2 *dev,
-                            struct esd_usb2_msg *msg)
-{
-       int actual_length;
-
-       return usb_bulk_msg(dev->udev,
-                           usb_rcvbulkpipe(dev->udev, 1),
-                           msg,
-                           sizeof(*msg),
-                           &actual_length,
-                           1000);
-}
-
-static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
-{
-       int i, err = 0;
-
-       if (dev->rxinitdone)
-               return 0;
-
-       for (i = 0; i < MAX_RX_URBS; i++) {
-               struct urb *urb = NULL;
-               u8 *buf = NULL;
-               dma_addr_t buf_dma;
-
-               /* create a URB, and a buffer for it */
-               urb = usb_alloc_urb(0, GFP_KERNEL);
-               if (!urb) {
-                       err = -ENOMEM;
-                       break;
-               }
-
-               buf = usb_alloc_coherent(dev->udev, RX_BUFFER_SIZE, GFP_KERNEL,
-                                        &buf_dma);
-               if (!buf) {
-                       dev_warn(dev->udev->dev.parent,
-                                "No memory left for USB buffer\n");
-                       err = -ENOMEM;
-                       goto freeurb;
-               }
-
-               urb->transfer_dma = buf_dma;
-
-               usb_fill_bulk_urb(urb, dev->udev,
-                                 usb_rcvbulkpipe(dev->udev, 1),
-                                 buf, RX_BUFFER_SIZE,
-                                 esd_usb2_read_bulk_callback, dev);
-               urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
-               usb_anchor_urb(urb, &dev->rx_submitted);
-
-               err = usb_submit_urb(urb, GFP_KERNEL);
-               if (err) {
-                       usb_unanchor_urb(urb);
-                       usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf,
-                                         urb->transfer_dma);
-                       goto freeurb;
-               }
-
-               dev->rxbuf[i] = buf;
-               dev->rxbuf_dma[i] = buf_dma;
-
-freeurb:
-               /* Drop reference, USB core will take care of freeing it */
-               usb_free_urb(urb);
-               if (err)
-                       break;
-       }
-
-       /* Did we submit any URBs? */
-       if (i == 0) {
-               dev_err(dev->udev->dev.parent, "couldn't setup read URBs\n");
-               return err;
-       }
-
-       /* Warn if we couldn't submit all the URBs */
-       if (i < MAX_RX_URBS) {
-               dev_warn(dev->udev->dev.parent,
-                        "rx performance may be slow\n");
-       }
-
-       dev->rxinitdone = 1;
-       return 0;
-}
-
-/*
- * Start interface
- */
-static int esd_usb2_start(struct esd_usb2_net_priv *priv)
-{
-       struct esd_usb2 *dev = priv->usb2;
-       struct net_device *netdev = priv->netdev;
-       struct esd_usb2_msg *msg;
-       int err, i;
-
-       msg = kmalloc(sizeof(*msg), GFP_KERNEL);
-       if (!msg) {
-               err = -ENOMEM;
-               goto out;
-       }
-
-       /*
-        * Enable all IDs
-        * The IDADD message takes up to 64 32-bit bitmasks (2048 bits).
-        * Each bit represents one 11-bit CAN identifier. A set bit
-        * enables reception of the corresponding CAN identifier. A cleared
-        * bit disables this identifier. An additional bitmask value
-        * following the CAN 2.0A bits is used to enable reception of
-        * extended CAN frames. Only the LSB of this final mask is checked
-        * for the complete 29-bit ID range. The IDADD message also allows
-        * filter configuration for an ID subset. In this case you can add
-        * the number of the starting bitmask (0..64) to the filter.option
-        * field followed by only some bitmasks.
-        */
-       msg->msg.hdr.cmd = CMD_IDADD;
-       msg->msg.hdr.len = 2 + ESD_MAX_ID_SEGMENT;
-       msg->msg.filter.net = priv->index;
-       msg->msg.filter.option = ESD_ID_ENABLE; /* start with segment 0 */
-       for (i = 0; i < ESD_MAX_ID_SEGMENT; i++)
-               msg->msg.filter.mask[i] = cpu_to_le32(0xffffffff);
-       /* enable 29bit extended IDs */
-       msg->msg.filter.mask[ESD_MAX_ID_SEGMENT] = cpu_to_le32(0x00000001);
-
-       err = esd_usb2_send_msg(dev, msg);
-       if (err)
-               goto out;
-
-       err = esd_usb2_setup_rx_urbs(dev);
-       if (err)
-               goto out;
-
-       priv->can.state = CAN_STATE_ERROR_ACTIVE;
-
-out:
-       if (err == -ENODEV)
-               netif_device_detach(netdev);
-       if (err)
-               netdev_err(netdev, "couldn't start device: %d\n", err);
-
-       kfree(msg);
-       return err;
-}
-
-static void unlink_all_urbs(struct esd_usb2 *dev)
-{
-       struct esd_usb2_net_priv *priv;
-       int i, j;
-
-       usb_kill_anchored_urbs(&dev->rx_submitted);
-
-       for (i = 0; i < MAX_RX_URBS; ++i)
-               usb_free_coherent(dev->udev, RX_BUFFER_SIZE,
-                                 dev->rxbuf[i], dev->rxbuf_dma[i]);
-
-       for (i = 0; i < dev->net_count; i++) {
-               priv = dev->nets[i];
-               if (priv) {
-                       usb_kill_anchored_urbs(&priv->tx_submitted);
-                       atomic_set(&priv->active_tx_jobs, 0);
-
-                       for (j = 0; j < MAX_TX_URBS; j++)
-                               priv->tx_contexts[j].echo_index = MAX_TX_URBS;
-               }
-       }
-}
-
-static int esd_usb2_open(struct net_device *netdev)
-{
-       struct esd_usb2_net_priv *priv = netdev_priv(netdev);
-       int err;
-
-       /* common open */
-       err = open_candev(netdev);
-       if (err)
-               return err;
-
-       /* finally start device */
-       err = esd_usb2_start(priv);
-       if (err) {
-               netdev_warn(netdev, "couldn't start device: %d\n", err);
-               close_candev(netdev);
-               return err;
-       }
-
-       netif_start_queue(netdev);
-
-       return 0;
-}
-
-static netdev_tx_t esd_usb2_start_xmit(struct sk_buff *skb,
-                                     struct net_device *netdev)
-{
-       struct esd_usb2_net_priv *priv = netdev_priv(netdev);
-       struct esd_usb2 *dev = priv->usb2;
-       struct esd_tx_urb_context *context = NULL;
-       struct net_device_stats *stats = &netdev->stats;
-       struct can_frame *cf = (struct can_frame *)skb->data;
-       struct esd_usb2_msg *msg;
-       struct urb *urb;
-       u8 *buf;
-       int i, err;
-       int ret = NETDEV_TX_OK;
-       size_t size = sizeof(struct esd_usb2_msg);
-
-       if (can_dropped_invalid_skb(netdev, skb))
-               return NETDEV_TX_OK;
-
-       /* create a URB, and a buffer for it, and copy the data to the URB */
-       urb = usb_alloc_urb(0, GFP_ATOMIC);
-       if (!urb) {
-               stats->tx_dropped++;
-               dev_kfree_skb(skb);
-               goto nourbmem;
-       }
-
-       buf = usb_alloc_coherent(dev->udev, size, GFP_ATOMIC,
-                                &urb->transfer_dma);
-       if (!buf) {
-               netdev_err(netdev, "No memory left for USB buffer\n");
-               stats->tx_dropped++;
-               dev_kfree_skb(skb);
-               goto nobufmem;
-       }
-
-       msg = (struct esd_usb2_msg *)buf;
-
-       msg->msg.hdr.len = 3; /* minimal length */
-       msg->msg.hdr.cmd = CMD_CAN_TX;
-       msg->msg.tx.net = priv->index;
-       msg->msg.tx.dlc = can_get_cc_dlc(cf, priv->can.ctrlmode);
-       msg->msg.tx.id = cpu_to_le32(cf->can_id & CAN_ERR_MASK);
-
-       if (cf->can_id & CAN_RTR_FLAG)
-               msg->msg.tx.dlc |= ESD_RTR;
-
-       if (cf->can_id & CAN_EFF_FLAG)
-               msg->msg.tx.id |= cpu_to_le32(ESD_EXTID);
-
-       for (i = 0; i < cf->len; i++)
-               msg->msg.tx.data[i] = cf->data[i];
-
-       msg->msg.hdr.len += (cf->len + 3) >> 2;
-
-       for (i = 0; i < MAX_TX_URBS; i++) {
-               if (priv->tx_contexts[i].echo_index == MAX_TX_URBS) {
-                       context = &priv->tx_contexts[i];
-                       break;
-               }
-       }
-
-       /*
-        * This should never happen.
-        */
-       if (!context) {
-               netdev_warn(netdev, "couldn't find free context\n");
-               ret = NETDEV_TX_BUSY;
-               goto releasebuf;
-       }
-
-       context->priv = priv;
-       context->echo_index = i;
-
-       /* hnd must not be 0 - MSB is stripped in txdone handling */
-       msg->msg.tx.hnd = 0x80000000 | i; /* returned in TX done message */
-
-       usb_fill_bulk_urb(urb, dev->udev, usb_sndbulkpipe(dev->udev, 2), buf,
-                         msg->msg.hdr.len << 2,
-                         esd_usb2_write_bulk_callback, context);
-
-       urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
-
-       usb_anchor_urb(urb, &priv->tx_submitted);
-
-       can_put_echo_skb(skb, netdev, context->echo_index, 0);
-
-       atomic_inc(&priv->active_tx_jobs);
-
-       /* Slow down tx path */
-       if (atomic_read(&priv->active_tx_jobs) >= MAX_TX_URBS)
-               netif_stop_queue(netdev);
-
-       err = usb_submit_urb(urb, GFP_ATOMIC);
-       if (err) {
-               can_free_echo_skb(netdev, context->echo_index, NULL);
-
-               atomic_dec(&priv->active_tx_jobs);
-               usb_unanchor_urb(urb);
-
-               stats->tx_dropped++;
-
-               if (err == -ENODEV)
-                       netif_device_detach(netdev);
-               else
-                       netdev_warn(netdev, "failed tx_urb %d\n", err);
-
-               goto releasebuf;
-       }
-
-       netif_trans_update(netdev);
-
-       /*
-        * Release our reference to this URB; the USB core will eventually
-        * free it entirely.
-        */
-       usb_free_urb(urb);
-
-       return NETDEV_TX_OK;
-
-releasebuf:
-       usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
-
-nobufmem:
-       usb_free_urb(urb);
-
-nourbmem:
-       return ret;
-}
-
-static int esd_usb2_close(struct net_device *netdev)
-{
-       struct esd_usb2_net_priv *priv = netdev_priv(netdev);
-       struct esd_usb2_msg *msg;
-       int i;
-
-       msg = kmalloc(sizeof(*msg), GFP_KERNEL);
-       if (!msg)
-               return -ENOMEM;
-
-       /* Disable all IDs (see esd_usb2_start()) */
-       msg->msg.hdr.cmd = CMD_IDADD;
-       msg->msg.hdr.len = 2 + ESD_MAX_ID_SEGMENT;
-       msg->msg.filter.net = priv->index;
-       msg->msg.filter.option = ESD_ID_ENABLE; /* start with segment 0 */
-       for (i = 0; i <= ESD_MAX_ID_SEGMENT; i++)
-               msg->msg.filter.mask[i] = 0;
-       if (esd_usb2_send_msg(priv->usb2, msg) < 0)
-               netdev_err(netdev, "sending idadd message failed\n");
-
-       /* set CAN controller to reset mode */
-       msg->msg.hdr.len = 2;
-       msg->msg.hdr.cmd = CMD_SETBAUD;
-       msg->msg.setbaud.net = priv->index;
-       msg->msg.setbaud.rsvd = 0;
-       msg->msg.setbaud.baud = cpu_to_le32(ESD_USB2_NO_BAUDRATE);
-       if (esd_usb2_send_msg(priv->usb2, msg) < 0)
-               netdev_err(netdev, "sending setbaud message failed\n");
-
-       priv->can.state = CAN_STATE_STOPPED;
-
-       netif_stop_queue(netdev);
-
-       close_candev(netdev);
-
-       kfree(msg);
-
-       return 0;
-}
-
-static const struct net_device_ops esd_usb2_netdev_ops = {
-       .ndo_open = esd_usb2_open,
-       .ndo_stop = esd_usb2_close,
-       .ndo_start_xmit = esd_usb2_start_xmit,
-       .ndo_change_mtu = can_change_mtu,
-};
-
-static const struct can_bittiming_const esd_usb2_bittiming_const = {
-       .name = "esd_usb2",
-       .tseg1_min = ESD_USB2_TSEG1_MIN,
-       .tseg1_max = ESD_USB2_TSEG1_MAX,
-       .tseg2_min = ESD_USB2_TSEG2_MIN,
-       .tseg2_max = ESD_USB2_TSEG2_MAX,
-       .sjw_max = ESD_USB2_SJW_MAX,
-       .brp_min = ESD_USB2_BRP_MIN,
-       .brp_max = ESD_USB2_BRP_MAX,
-       .brp_inc = ESD_USB2_BRP_INC,
-};
-
-static int esd_usb2_set_bittiming(struct net_device *netdev)
-{
-       struct esd_usb2_net_priv *priv = netdev_priv(netdev);
-       struct can_bittiming *bt = &priv->can.bittiming;
-       struct esd_usb2_msg *msg;
-       int err;
-       u32 canbtr;
-       int sjw_shift;
-
-       canbtr = ESD_USB2_UBR;
-       if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
-               canbtr |= ESD_USB2_LOM;
-
-       canbtr |= (bt->brp - 1) & (ESD_USB2_BRP_MAX - 1);
-
-       if (le16_to_cpu(priv->usb2->udev->descriptor.idProduct) ==
-           USB_CANUSBM_PRODUCT_ID)
-               sjw_shift = ESD_USBM_SJW_SHIFT;
-       else
-               sjw_shift = ESD_USB2_SJW_SHIFT;
-
-       canbtr |= ((bt->sjw - 1) & (ESD_USB2_SJW_MAX - 1))
-               << sjw_shift;
-       canbtr |= ((bt->prop_seg + bt->phase_seg1 - 1)
-                  & (ESD_USB2_TSEG1_MAX - 1))
-               << ESD_USB2_TSEG1_SHIFT;
-       canbtr |= ((bt->phase_seg2 - 1) & (ESD_USB2_TSEG2_MAX - 1))
-               << ESD_USB2_TSEG2_SHIFT;
-       if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
-               canbtr |= ESD_USB2_3_SAMPLES;
-
-       msg = kmalloc(sizeof(*msg), GFP_KERNEL);
-       if (!msg)
-               return -ENOMEM;
-
-       msg->msg.hdr.len = 2;
-       msg->msg.hdr.cmd = CMD_SETBAUD;
-       msg->msg.setbaud.net = priv->index;
-       msg->msg.setbaud.rsvd = 0;
-       msg->msg.setbaud.baud = cpu_to_le32(canbtr);
-
-       netdev_info(netdev, "setting BTR=%#x\n", canbtr);
-
-       err = esd_usb2_send_msg(priv->usb2, msg);
-
-       kfree(msg);
-       return err;
-}
-
-static int esd_usb2_get_berr_counter(const struct net_device *netdev,
-                                    struct can_berr_counter *bec)
-{
-       struct esd_usb2_net_priv *priv = netdev_priv(netdev);
-
-       bec->txerr = priv->bec.txerr;
-       bec->rxerr = priv->bec.rxerr;
-
-       return 0;
-}
-
-static int esd_usb2_set_mode(struct net_device *netdev, enum can_mode mode)
-{
-       switch (mode) {
-       case CAN_MODE_START:
-               netif_wake_queue(netdev);
-               break;
-
-       default:
-               return -EOPNOTSUPP;
-       }
-
-       return 0;
-}
-
-static int esd_usb2_probe_one_net(struct usb_interface *intf, int index)
-{
-       struct esd_usb2 *dev = usb_get_intfdata(intf);
-       struct net_device *netdev;
-       struct esd_usb2_net_priv *priv;
-       int err = 0;
-       int i;
-
-       netdev = alloc_candev(sizeof(*priv), MAX_TX_URBS);
-       if (!netdev) {
-               dev_err(&intf->dev, "couldn't alloc candev\n");
-               err = -ENOMEM;
-               goto done;
-       }
-
-       priv = netdev_priv(netdev);
-
-       init_usb_anchor(&priv->tx_submitted);
-       atomic_set(&priv->active_tx_jobs, 0);
-
-       for (i = 0; i < MAX_TX_URBS; i++)
-               priv->tx_contexts[i].echo_index = MAX_TX_URBS;
-
-       priv->usb2 = dev;
-       priv->netdev = netdev;
-       priv->index = index;
-
-       priv->can.state = CAN_STATE_STOPPED;
-       priv->can.ctrlmode_supported = CAN_CTRLMODE_LISTENONLY |
-               CAN_CTRLMODE_CC_LEN8_DLC;
-
-       if (le16_to_cpu(dev->udev->descriptor.idProduct) ==
-           USB_CANUSBM_PRODUCT_ID)
-               priv->can.clock.freq = ESD_USBM_CAN_CLOCK;
-       else {
-               priv->can.clock.freq = ESD_USB2_CAN_CLOCK;
-               priv->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
-       }
-
-       priv->can.bittiming_const = &esd_usb2_bittiming_const;
-       priv->can.do_set_bittiming = esd_usb2_set_bittiming;
-       priv->can.do_set_mode = esd_usb2_set_mode;
-       priv->can.do_get_berr_counter = esd_usb2_get_berr_counter;
-
-       netdev->flags |= IFF_ECHO; /* we support local echo */
-
-       netdev->netdev_ops = &esd_usb2_netdev_ops;
-
-       SET_NETDEV_DEV(netdev, &intf->dev);
-       netdev->dev_id = index;
-
-       err = register_candev(netdev);
-       if (err) {
-               dev_err(&intf->dev, "couldn't register CAN device: %d\n", err);
-               free_candev(netdev);
-               err = -ENOMEM;
-               goto done;
-       }
-
-       dev->nets[index] = priv;
-       netdev_info(netdev, "device %s registered\n", netdev->name);
-
-done:
-       return err;
-}
-
-/*
- * probe function for new USB2 devices
- *
- * check version information and number of available
- * CAN interfaces
- */
-static int esd_usb2_probe(struct usb_interface *intf,
-                        const struct usb_device_id *id)
-{
-       struct esd_usb2 *dev;
-       struct esd_usb2_msg *msg;
-       int i, err;
-
-       dev = kzalloc(sizeof(*dev), GFP_KERNEL);
-       if (!dev) {
-               err = -ENOMEM;
-               goto done;
-       }
-
-       dev->udev = interface_to_usbdev(intf);
-
-       init_usb_anchor(&dev->rx_submitted);
-
-       usb_set_intfdata(intf, dev);
-
-       msg = kmalloc(sizeof(*msg), GFP_KERNEL);
-       if (!msg) {
-               err = -ENOMEM;
-               goto free_msg;
-       }
-
-       /* query number of CAN interfaces (nets) */
-       msg->msg.hdr.cmd = CMD_VERSION;
-       msg->msg.hdr.len = 2;
-       msg->msg.version.rsvd = 0;
-       msg->msg.version.flags = 0;
-       msg->msg.version.drv_version = 0;
-
-       err = esd_usb2_send_msg(dev, msg);
-       if (err < 0) {
-               dev_err(&intf->dev, "sending version message failed\n");
-               goto free_msg;
-       }
-
-       err = esd_usb2_wait_msg(dev, msg);
-       if (err < 0) {
-               dev_err(&intf->dev, "no version message answer\n");
-               goto free_msg;
-       }
-
-       dev->net_count = (int)msg->msg.version_reply.nets;
-       dev->version = le32_to_cpu(msg->msg.version_reply.version);
-
-       if (device_create_file(&intf->dev, &dev_attr_firmware))
-               dev_err(&intf->dev,
-                       "Couldn't create device file for firmware\n");
-
-       if (device_create_file(&intf->dev, &dev_attr_hardware))
-               dev_err(&intf->dev,
-                       "Couldn't create device file for hardware\n");
-
-       if (device_create_file(&intf->dev, &dev_attr_nets))
-               dev_err(&intf->dev,
-                       "Couldn't create device file for nets\n");
-
-       /* do per device probing */
-       for (i = 0; i < dev->net_count; i++)
-               esd_usb2_probe_one_net(intf, i);
-
-free_msg:
-       kfree(msg);
-       if (err)
-               kfree(dev);
-done:
-       return err;
-}
-
-/*
- * called by the usb core when the device is removed from the system
- */
-static void esd_usb2_disconnect(struct usb_interface *intf)
-{
-       struct esd_usb2 *dev = usb_get_intfdata(intf);
-       struct net_device *netdev;
-       int i;
-
-       device_remove_file(&intf->dev, &dev_attr_firmware);
-       device_remove_file(&intf->dev, &dev_attr_hardware);
-       device_remove_file(&intf->dev, &dev_attr_nets);
-
-       usb_set_intfdata(intf, NULL);
-
-       if (dev) {
-               for (i = 0; i < dev->net_count; i++) {
-                       if (dev->nets[i]) {
-                               netdev = dev->nets[i]->netdev;
-                               unregister_netdev(netdev);
-                               free_candev(netdev);
-                       }
-               }
-               unlink_all_urbs(dev);
-               kfree(dev);
-       }
-}
-
-/* usb specific object needed to register this driver with the usb subsystem */
-static struct usb_driver esd_usb2_driver = {
-       .name = "esd_usb2",
-       .probe = esd_usb2_probe,
-       .disconnect = esd_usb2_disconnect,
-       .id_table = esd_usb2_table,
-};
-
-module_usb_driver(esd_usb2_driver);
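 
Beyond the esd_usb2_ to esd_usb_ renaming, the file removed above is carried over essentially unchanged into the new esd_usb.c. One non-obvious detail is how set_bittiming() folds the CAN bit-timing parameters into the single BTR word sent via CMD_SETBAUD. A stand-alone sketch of that packing, using the shift/mask constants from the header block above (CAN-USB/2 variant; the CAN-USB/Micro shifts SJW to bit 24 instead):

        #include <stdint.h>
        #include <stdio.h>

        #define ESD_USB2_UBR            0x80000000u
        #define ESD_USB2_SJW_SHIFT      14
        #define ESD_USB2_TSEG1_SHIFT    16
        #define ESD_USB2_TSEG2_SHIFT    20

        /* Pack bit-timing values as esd_usb2_set_bittiming() does.
         * All fields use the usual "value minus one" register convention.
         */
        static uint32_t esd_pack_btr(unsigned brp, unsigned sjw,
                                     unsigned tseg1, unsigned tseg2)
        {
                uint32_t canbtr = ESD_USB2_UBR;         /* "use bit rate" flag */

                canbtr |= (brp - 1) & 1023;             /* ESD_USB2_BRP_MAX - 1 */
                canbtr |= ((sjw - 1) & 3) << ESD_USB2_SJW_SHIFT;
                canbtr |= ((tseg1 - 1) & 15) << ESD_USB2_TSEG1_SHIFT;
                canbtr |= ((tseg2 - 1) & 7) << ESD_USB2_TSEG2_SHIFT;
                return canbtr;
        }

        int main(void)
        {
                /* Example: 500 kbit/s from the 60 MHz core clock. brp=12
                 * gives 5 MHz time quanta; 1 + tseg1(7) + tseg2(2) = 10 tq
                 * per bit.
                 */
                printf("BTR=%#x\n", (unsigned)esd_pack_btr(12, 1, 7, 2));
                return 0;
        }
 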
index 2d73ebb..7353745 100644
@@ -1707,7 +1707,7 @@ static int es58x_alloc_rx_urbs(struct es58x_device *es58x_dev)
 {
        const struct device *dev = es58x_dev->dev;
        const struct es58x_parameters *param = es58x_dev->param;
-       size_t rx_buf_len = es58x_dev->rx_max_packet_size;
+       u16 rx_buf_len = usb_maxpacket(es58x_dev->udev, es58x_dev->rx_pipe);
        struct urb *urb;
        u8 *buf;
        int i;
@@ -1739,7 +1739,7 @@ static int es58x_alloc_rx_urbs(struct es58x_device *es58x_dev)
                dev_err(dev, "%s: Could not setup any rx URBs\n", __func__);
                return ret;
        }
-       dev_dbg(dev, "%s: Allocated %d rx URBs each of size %zu\n",
+       dev_dbg(dev, "%s: Allocated %d rx URBs each of size %u\n",
                __func__, i, rx_buf_len);
 
        return ret;
@@ -2223,7 +2223,6 @@ static struct es58x_device *es58x_init_es58x_dev(struct usb_interface *intf,
                                             ep_in->bEndpointAddress);
        es58x_dev->tx_pipe = usb_sndbulkpipe(es58x_dev->udev,
                                             ep_out->bEndpointAddress);
-       es58x_dev->rx_max_packet_size = le16_to_cpu(ep_in->wMaxPacketSize);
 
        return es58x_dev;
 }
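 
The es58x hunks above replace the cached rx_max_packet_size with an on-demand usb_maxpacket() lookup and widen the pipe fields to unsigned int, matching the unsigned values the usb_*pipe() helpers produce. A minimal sketch of the resulting pattern (kernel context assumed; names follow the hunks above):

        /* Build the bulk-in pipe once at init time ... */
        unsigned int rx_pipe = usb_rcvbulkpipe(es58x_dev->udev,
                                               ep_in->bEndpointAddress);

        /* ... and ask the endpoint for its wMaxPacketSize when the rx
         * URBs are allocated, instead of caching a copy in the device
         * structure.
         */
        u16 rx_buf_len = usb_maxpacket(es58x_dev->udev, rx_pipe);
 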
index e5033cb..d769bdf 100644
@@ -380,7 +380,6 @@ struct es58x_operators {
  * @timestamps: a temporary buffer to store the time stamps before
  *     feeding them to es58x_can_get_echo_skb(). Can only be used
  *     in RX branches.
- * @rx_max_packet_size: Maximum length of bulk-in URB.
  * @num_can_ch: Number of CAN channel (i.e. number of elements of @netdev).
  * @opened_channel_cnt: number of channels opened. Free of race
  *     conditions because its two users (net_device_ops:ndo_open()
@@ -401,8 +400,8 @@ struct es58x_device {
        const struct es58x_parameters *param;
        const struct es58x_operators *ops;
 
-       int rx_pipe;
-       int tx_pipe;
+       unsigned int rx_pipe;
+       unsigned int tx_pipe;
 
        struct usb_anchor rx_urbs;
        struct usb_anchor tx_urbs_busy;
@@ -414,7 +413,6 @@ struct es58x_device {
 
        u64 timestamps[ES58X_ECHO_BULK_MAX];
 
-       u16 rx_max_packet_size;
        u8 num_can_ch;
        u8 opened_channel_cnt;
 
index b29ba91..d3a658b 100644
@@ -268,6 +268,8 @@ struct gs_can {
 
        struct usb_anchor tx_submitted;
        atomic_t active_tx_urbs;
+       void *rxbuf[GS_MAX_RX_URBS];
+       dma_addr_t rxbuf_dma[GS_MAX_RX_URBS];
 };
 
 /* usb interface struct */
@@ -742,6 +744,7 @@ static int gs_can_open(struct net_device *netdev)
                for (i = 0; i < GS_MAX_RX_URBS; i++) {
                        struct urb *urb;
                        u8 *buf;
+                       dma_addr_t buf_dma;
 
                        /* alloc rx urb */
                        urb = usb_alloc_urb(0, GFP_KERNEL);
@@ -752,7 +755,7 @@ static int gs_can_open(struct net_device *netdev)
                        buf = usb_alloc_coherent(dev->udev,
                                                 dev->parent->hf_size_rx,
                                                 GFP_KERNEL,
-                                                &urb->transfer_dma);
+                                                &buf_dma);
                        if (!buf) {
                                netdev_err(netdev,
                                           "No memory left for USB buffer\n");
@@ -760,6 +763,8 @@ static int gs_can_open(struct net_device *netdev)
                                return -ENOMEM;
                        }
 
+                       urb->transfer_dma = buf_dma;
+
                        /* fill, anchor, and submit rx urb */
                        usb_fill_bulk_urb(urb,
                                          dev->udev,
@@ -781,10 +786,17 @@ static int gs_can_open(struct net_device *netdev)
                                           "usb_submit failed (err=%d)\n", rc);
 
                                usb_unanchor_urb(urb);
+                               usb_free_coherent(dev->udev,
+                                                 sizeof(struct gs_host_frame),
+                                                 buf,
+                                                 buf_dma);
                                usb_free_urb(urb);
                                break;
                        }
 
+                       dev->rxbuf[i] = buf;
+                       dev->rxbuf_dma[i] = buf_dma;
+
                        /* Drop reference,
                         * USB core will take care of freeing it
                         */
@@ -842,13 +854,20 @@ static int gs_can_close(struct net_device *netdev)
        int rc;
        struct gs_can *dev = netdev_priv(netdev);
        struct gs_usb *parent = dev->parent;
+       unsigned int i;
 
        netif_stop_queue(netdev);
 
        /* Stop polling */
        parent->active_channels--;
-       if (!parent->active_channels)
+       if (!parent->active_channels) {
                usb_kill_anchored_urbs(&parent->rx_submitted);
+               for (i = 0; i < GS_MAX_RX_URBS; i++)
+                       usb_free_coherent(dev->udev,
+                                         sizeof(struct gs_host_frame),
+                                         dev->rxbuf[i],
+                                         dev->rxbuf_dma[i]);
+       }
 
        /* Stop sending URBs */
        usb_kill_anchored_urbs(&dev->tx_submitted);
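 
The gs_usb hunks above fix a leak: buffers obtained from usb_alloc_coherent() are not freed automatically when the anchored URBs are killed, so every open/close cycle leaked GS_MAX_RX_URBS rx buffers. The fix records each CPU/DMA address pair so close() can hand them back. The pattern, reduced to its core (kernel context assumed):

        /* gs_can_open(): keep the CPU and DMA addresses of each coherent
         * rx buffer so they outlive the URB.
         */
        dma_addr_t buf_dma;
        u8 *buf = usb_alloc_coherent(dev->udev, dev->parent->hf_size_rx,
                                     GFP_KERNEL, &buf_dma);

        urb->transfer_dma = buf_dma;
        dev->rxbuf[i] = buf;
        dev->rxbuf_dma[i] = buf_dma;

        /* gs_can_close(): release them explicitly; killing the anchored
         * URBs alone does not free coherent buffers.
         */
        usb_free_coherent(dev->udev, sizeof(struct gs_host_frame),
                          dev->rxbuf[i], dev->rxbuf_dma[i]);
 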
index 3a49257..eefcbe3 100644
 #define KVASER_USB_RX_BUFFER_SIZE              3072
 #define KVASER_USB_MAX_NET_DEVICES             5
 
-/* USB devices features */
-#define KVASER_USB_HAS_SILENT_MODE             BIT(0)
-#define KVASER_USB_HAS_TXRX_ERRORS             BIT(1)
+/* Kvaser USB device quirks */
+#define KVASER_USB_QUIRK_HAS_SILENT_MODE       BIT(0)
+#define KVASER_USB_QUIRK_HAS_TXRX_ERRORS       BIT(1)
+#define KVASER_USB_QUIRK_IGNORE_CLK_FREQ       BIT(2)
 
 /* Device capabilities */
 #define KVASER_USB_CAP_BERR_CAP                        0x01
@@ -65,12 +66,7 @@ struct kvaser_usb_dev_card_data_hydra {
 struct kvaser_usb_dev_card_data {
        u32 ctrlmode_supported;
        u32 capabilities;
-       union {
-               struct {
-                       enum kvaser_usb_leaf_family family;
-               } leaf;
-               struct kvaser_usb_dev_card_data_hydra hydra;
-       };
+       struct kvaser_usb_dev_card_data_hydra hydra;
 };
 
 /* Context for an outstanding, not yet ACKed, transmission */
@@ -83,7 +79,7 @@ struct kvaser_usb {
        struct usb_device *udev;
        struct usb_interface *intf;
        struct kvaser_usb_net_priv *nets[KVASER_USB_MAX_NET_DEVICES];
-       const struct kvaser_usb_dev_ops *ops;
+       const struct kvaser_usb_driver_info *driver_info;
        const struct kvaser_usb_dev_cfg *cfg;
 
        struct usb_endpoint_descriptor *bulk_in, *bulk_out;
@@ -165,6 +161,12 @@ struct kvaser_usb_dev_ops {
                                  u16 transid);
 };
 
+struct kvaser_usb_driver_info {
+       u32 quirks;
+       enum kvaser_usb_leaf_family family;
+       const struct kvaser_usb_dev_ops *ops;
+};
+
 struct kvaser_usb_dev_cfg {
        const struct can_clock clock;
        const unsigned int timestamp_freq;
@@ -184,4 +186,7 @@ int kvaser_usb_send_cmd_async(struct kvaser_usb_net_priv *priv, void *cmd,
                              int len);
 
 int kvaser_usb_can_rx_over_error(struct net_device *netdev);
+
+extern const struct can_bittiming_const kvaser_usb_flexc_bittiming_const;
+
 #endif /* KVASER_USB_H */
index e67658b..f211bfc 100644
@@ -61,8 +61,6 @@
 #define USB_USBCAN_R_V2_PRODUCT_ID             294
 #define USB_LEAF_LIGHT_R_V2_PRODUCT_ID         295
 #define USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID   296
-#define USB_LEAF_PRODUCT_ID_END \
-       USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID
 
 /* Kvaser USBCan-II devices product ids */
 #define USB_USBCAN_REVB_PRODUCT_ID             2
 #define USB_USBCAN_PRO_4HS_PRODUCT_ID          276
 #define USB_HYBRID_CANLIN_PRODUCT_ID           277
 #define USB_HYBRID_PRO_CANLIN_PRODUCT_ID       278
-#define USB_HYDRA_PRODUCT_ID_END \
-       USB_HYBRID_PRO_CANLIN_PRODUCT_ID
 
-static inline bool kvaser_is_leaf(const struct usb_device_id *id)
-{
-       return (id->idProduct >= USB_LEAF_DEVEL_PRODUCT_ID &&
-               id->idProduct <= USB_CAN_R_PRODUCT_ID) ||
-               (id->idProduct >= USB_LEAF_LITE_V2_PRODUCT_ID &&
-                id->idProduct <= USB_LEAF_PRODUCT_ID_END);
-}
+static const struct kvaser_usb_driver_info kvaser_usb_driver_info_hydra = {
+       .quirks = 0,
+       .ops = &kvaser_usb_hydra_dev_ops,
+};
 
-static inline bool kvaser_is_usbcan(const struct usb_device_id *id)
-{
-       return id->idProduct >= USB_USBCAN_REVB_PRODUCT_ID &&
-              id->idProduct <= USB_MEMORATOR_PRODUCT_ID;
-}
+static const struct kvaser_usb_driver_info kvaser_usb_driver_info_usbcan = {
+       .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS |
+                 KVASER_USB_QUIRK_HAS_SILENT_MODE,
+       .family = KVASER_USBCAN,
+       .ops = &kvaser_usb_leaf_dev_ops,
+};
 
-static inline bool kvaser_is_hydra(const struct usb_device_id *id)
-{
-       return id->idProduct >= USB_BLACKBIRD_V2_PRODUCT_ID &&
-              id->idProduct <= USB_HYDRA_PRODUCT_ID_END;
-}
+static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf = {
+       .quirks = KVASER_USB_QUIRK_IGNORE_CLK_FREQ,
+       .family = KVASER_LEAF,
+       .ops = &kvaser_usb_leaf_dev_ops,
+};
+
+static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err = {
+       .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS |
+                 KVASER_USB_QUIRK_IGNORE_CLK_FREQ,
+       .family = KVASER_LEAF,
+       .ops = &kvaser_usb_leaf_dev_ops,
+};
+
+static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err_listen = {
+       .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS |
+                 KVASER_USB_QUIRK_HAS_SILENT_MODE |
+                 KVASER_USB_QUIRK_IGNORE_CLK_FREQ,
+       .family = KVASER_LEAF,
+       .ops = &kvaser_usb_leaf_dev_ops,
+};
+
+static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leafimx = {
+       .quirks = 0,
+       .ops = &kvaser_usb_leaf_dev_ops,
+};
 
 static const struct usb_device_id kvaser_usb_table[] = {
-       /* Leaf USB product IDs */
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID) },
+       /* Leaf M32C USB product IDs */
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LS_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_SWC_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LIN_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_LS_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_SWC_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_DEVEL_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSHS_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_UPRO_HSHS_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID) },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_OBDII_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS |
-                              KVASER_USB_HAS_SILENT_MODE },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSLS_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_CH_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_SPRO_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_MERCURY_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_LEAF_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_CAN_R_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_R_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_R_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID) },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
+
+       /* Leaf i.MX28 USB product IDs */
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_R_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_R_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
 
        /* USBCANII USB product IDs */
        { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN2_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_REVB_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMORATOR_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan },
        { USB_DEVICE(KVASER_VENDOR_ID, USB_VCI2_PRODUCT_ID),
-               .driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan },
 
        /* Minihydra USB product IDs */
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_2CANLIN_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_2CANLIN_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_U100_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_U100P_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_U100S_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_4HS_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID) },
-       { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID) },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_2CANLIN_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_2CANLIN_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_U100_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_U100P_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_U100S_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_4HS_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+       { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID),
+               .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
        { }
 };
 MODULE_DEVICE_TABLE(usb, kvaser_usb_table);
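[Editor's note] The table above replaces per-device quirk flags with a pointer to a per-family descriptor, recovered in probe() by casting id->driver_info back. A minimal sketch of that descriptor, inferred from the accessors used in this diff (the authoritative definition lives in the driver's header, and the exact layout may differ):

	struct kvaser_usb_driver_info {
		u32 quirks;				/* KVASER_USB_QUIRK_* bits */
		enum kvaser_usb_leaf_family family;	/* KVASER_LEAF or KVASER_USBCAN */
		const struct kvaser_usb_dev_ops *ops;	/* per-family callbacks */
	};

	static const struct kvaser_usb_driver_info kvaser_usb_driver_info_hydra = {
		.ops = &kvaser_usb_hydra_dev_ops,
	};

Carrying the ops, the quirks, and the Leaf/USBCAN family in a single lookup is what lets kvaser_usb_probe() below drop the kvaser_is_leaf/usbcan/hydra helper chain.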
@@ -285,6 +320,7 @@ int kvaser_usb_can_rx_over_error(struct net_device *netdev)
 static void kvaser_usb_read_bulk_callback(struct urb *urb)
 {
        struct kvaser_usb *dev = urb->context;
+       const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
        int err;
        unsigned int i;
 
@@ -301,8 +337,8 @@ static void kvaser_usb_read_bulk_callback(struct urb *urb)
                goto resubmit_urb;
        }
 
-       dev->ops->dev_read_bulk_callback(dev, urb->transfer_buffer,
-                                        urb->actual_length);
+       ops->dev_read_bulk_callback(dev, urb->transfer_buffer,
+                                   urb->actual_length);
 
 resubmit_urb:
        usb_fill_bulk_urb(urb, dev->udev,
@@ -396,6 +432,7 @@ static int kvaser_usb_open(struct net_device *netdev)
 {
        struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
        struct kvaser_usb *dev = priv->dev;
+       const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
        int err;
 
        err = open_candev(netdev);
@@ -406,11 +443,11 @@ static int kvaser_usb_open(struct net_device *netdev)
        if (err)
                goto error;
 
-       err = dev->ops->dev_set_opt_mode(priv);
+       err = ops->dev_set_opt_mode(priv);
        if (err)
                goto error;
 
-       err = dev->ops->dev_start_chip(priv);
+       err = ops->dev_start_chip(priv);
        if (err) {
                netdev_warn(netdev, "Cannot start device, error %d\n", err);
                goto error;
@@ -467,22 +504,23 @@ static int kvaser_usb_close(struct net_device *netdev)
 {
        struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
        struct kvaser_usb *dev = priv->dev;
+       const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
        int err;
 
        netif_stop_queue(netdev);
 
-       err = dev->ops->dev_flush_queue(priv);
+       err = ops->dev_flush_queue(priv);
        if (err)
                netdev_warn(netdev, "Cannot flush queue, error %d\n", err);
 
-       if (dev->ops->dev_reset_chip) {
-               err = dev->ops->dev_reset_chip(dev, priv->channel);
+       if (ops->dev_reset_chip) {
+               err = ops->dev_reset_chip(dev, priv->channel);
                if (err)
                        netdev_warn(netdev, "Cannot reset card, error %d\n",
                                    err);
        }
 
-       err = dev->ops->dev_stop_chip(priv);
+       err = ops->dev_stop_chip(priv);
        if (err)
                netdev_warn(netdev, "Cannot stop device, error %d\n", err);
 
@@ -521,6 +559,7 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
 {
        struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
        struct kvaser_usb *dev = priv->dev;
+       const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
        struct net_device_stats *stats = &netdev->stats;
        struct kvaser_usb_tx_urb_context *context = NULL;
        struct urb *urb;
@@ -563,8 +602,7 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
                goto freeurb;
        }
 
-       buf = dev->ops->dev_frame_to_cmd(priv, skb, &cmd_len,
-                                        context->echo_index);
+       buf = ops->dev_frame_to_cmd(priv, skb, &cmd_len, context->echo_index);
        if (!buf) {
                stats->tx_dropped++;
                dev_kfree_skb(skb);
@@ -648,15 +686,16 @@ static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
        }
 }
 
-static int kvaser_usb_init_one(struct kvaser_usb *dev,
-                              const struct usb_device_id *id, int channel)
+static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
 {
        struct net_device *netdev;
        struct kvaser_usb_net_priv *priv;
+       const struct kvaser_usb_driver_info *driver_info = dev->driver_info;
+       const struct kvaser_usb_dev_ops *ops = driver_info->ops;
        int err;
 
-       if (dev->ops->dev_reset_chip) {
-               err = dev->ops->dev_reset_chip(dev, channel);
+       if (ops->dev_reset_chip) {
+               err = ops->dev_reset_chip(dev, channel);
                if (err)
                        return err;
        }
@@ -685,20 +724,19 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev,
        priv->can.state = CAN_STATE_STOPPED;
        priv->can.clock.freq = dev->cfg->clock.freq;
        priv->can.bittiming_const = dev->cfg->bittiming_const;
-       priv->can.do_set_bittiming = dev->ops->dev_set_bittiming;
-       priv->can.do_set_mode = dev->ops->dev_set_mode;
-       if ((id->driver_info & KVASER_USB_HAS_TXRX_ERRORS) ||
+       priv->can.do_set_bittiming = ops->dev_set_bittiming;
+       priv->can.do_set_mode = ops->dev_set_mode;
+       if ((driver_info->quirks & KVASER_USB_QUIRK_HAS_TXRX_ERRORS) ||
            (priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP))
-               priv->can.do_get_berr_counter = dev->ops->dev_get_berr_counter;
-       if (id->driver_info & KVASER_USB_HAS_SILENT_MODE)
+               priv->can.do_get_berr_counter = ops->dev_get_berr_counter;
+       if (driver_info->quirks & KVASER_USB_QUIRK_HAS_SILENT_MODE)
                priv->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY;
 
        priv->can.ctrlmode_supported |= dev->card_data.ctrlmode_supported;
 
        if (priv->can.ctrlmode_supported & CAN_CTRLMODE_FD) {
                priv->can.data_bittiming_const = dev->cfg->data_bittiming_const;
-               priv->can.do_set_data_bittiming =
-                                       dev->ops->dev_set_data_bittiming;
+               priv->can.do_set_data_bittiming = ops->dev_set_data_bittiming;
        }
 
        netdev->flags |= IFF_ECHO;
@@ -729,29 +767,22 @@ static int kvaser_usb_probe(struct usb_interface *intf,
        struct kvaser_usb *dev;
        int err;
        int i;
+       const struct kvaser_usb_driver_info *driver_info;
+       const struct kvaser_usb_dev_ops *ops;
+
+       driver_info = (const struct kvaser_usb_driver_info *)id->driver_info;
+       if (!driver_info)
+               return -ENODEV;
 
        dev = devm_kzalloc(&intf->dev, sizeof(*dev), GFP_KERNEL);
        if (!dev)
                return -ENOMEM;
 
-       if (kvaser_is_leaf(id)) {
-               dev->card_data.leaf.family = KVASER_LEAF;
-               dev->ops = &kvaser_usb_leaf_dev_ops;
-       } else if (kvaser_is_usbcan(id)) {
-               dev->card_data.leaf.family = KVASER_USBCAN;
-               dev->ops = &kvaser_usb_leaf_dev_ops;
-       } else if (kvaser_is_hydra(id)) {
-               dev->ops = &kvaser_usb_hydra_dev_ops;
-       } else {
-               dev_err(&intf->dev,
-                       "Product ID (%d) is not a supported Kvaser USB device\n",
-                       id->idProduct);
-               return -ENODEV;
-       }
-
        dev->intf = intf;
+       dev->driver_info = driver_info;
+       ops = driver_info->ops;
 
-       err = dev->ops->dev_setup_endpoints(dev);
+       err = ops->dev_setup_endpoints(dev);
        if (err) {
                dev_err(&intf->dev, "Cannot get usb endpoint(s)");
                return err;
@@ -765,22 +796,22 @@ static int kvaser_usb_probe(struct usb_interface *intf,
 
        dev->card_data.ctrlmode_supported = 0;
        dev->card_data.capabilities = 0;
-       err = dev->ops->dev_init_card(dev);
+       err = ops->dev_init_card(dev);
        if (err) {
                dev_err(&intf->dev,
                        "Failed to initialize card, error %d\n", err);
                return err;
        }
 
-       err = dev->ops->dev_get_software_info(dev);
+       err = ops->dev_get_software_info(dev);
        if (err) {
                dev_err(&intf->dev,
                        "Cannot get software info, error %d\n", err);
                return err;
        }
 
-       if (dev->ops->dev_get_software_details) {
-               err = dev->ops->dev_get_software_details(dev);
+       if (ops->dev_get_software_details) {
+               err = ops->dev_get_software_details(dev);
                if (err) {
                        dev_err(&intf->dev,
                                "Cannot get software details, error %d\n", err);
@@ -798,14 +829,14 @@ static int kvaser_usb_probe(struct usb_interface *intf,
 
        dev_dbg(&intf->dev, "Max outstanding tx = %d URBs\n", dev->max_tx_urbs);
 
-       err = dev->ops->dev_get_card_info(dev);
+       err = ops->dev_get_card_info(dev);
        if (err) {
                dev_err(&intf->dev, "Cannot get card info, error %d\n", err);
                return err;
        }
 
-       if (dev->ops->dev_get_capabilities) {
-               err = dev->ops->dev_get_capabilities(dev);
+       if (ops->dev_get_capabilities) {
+               err = ops->dev_get_capabilities(dev);
                if (err) {
                        dev_err(&intf->dev,
                                "Cannot get capabilities, error %d\n", err);
@@ -815,7 +846,7 @@ static int kvaser_usb_probe(struct usb_interface *intf,
        }
 
        for (i = 0; i < dev->nchannels; i++) {
-               err = kvaser_usb_init_one(dev, id, i);
+               err = kvaser_usb_init_one(dev, i);
                if (err) {
                        kvaser_usb_remove_interfaces(dev);
                        return err;
index a26823c..5d70844 100644
@@ -375,7 +375,7 @@ static const struct can_bittiming_const kvaser_usb_hydra_kcan_bittiming_c = {
        .brp_inc = 1,
 };
 
-static const struct can_bittiming_const kvaser_usb_hydra_flexc_bittiming_c = {
+const struct can_bittiming_const kvaser_usb_flexc_bittiming_const = {
        .name = "kvaser_usb_flex",
        .tseg1_min = 4,
        .tseg1_max = 16,
@@ -2052,7 +2052,7 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_flexc = {
                .freq = 24 * MEGA /* Hz */,
        },
        .timestamp_freq = 1,
-       .bittiming_const = &kvaser_usb_hydra_flexc_bittiming_c,
+       .bittiming_const = &kvaser_usb_flexc_bittiming_const,
 };
 
 static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_rt = {
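[Editor's note] Dropping the static qualifier and renaming the symbol exports these FlexCAN timing constants for reuse; the leaf code in the next file picks them up for the i.MX-based devices. Presumably the shared header gains a matching declaration along these lines (an assumption, not shown in this hunk):

	extern const struct can_bittiming_const kvaser_usb_flexc_bittiming_const;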
index c805b99..cc809ec 100644
 #define USBCAN_ERROR_STATE_RX_ERROR    BIT(1)
 #define USBCAN_ERROR_STATE_BUSERROR    BIT(2)
 
-/* bittiming parameters */
-#define KVASER_USB_TSEG1_MIN           1
-#define KVASER_USB_TSEG1_MAX           16
-#define KVASER_USB_TSEG2_MIN           1
-#define KVASER_USB_TSEG2_MAX           8
-#define KVASER_USB_SJW_MAX             4
-#define KVASER_USB_BRP_MIN             1
-#define KVASER_USB_BRP_MAX             64
-#define KVASER_USB_BRP_INC             1
-
 /* ctrl modes */
 #define KVASER_CTRL_MODE_NORMAL                1
 #define KVASER_CTRL_MODE_SILENT                2
@@ -343,48 +333,68 @@ struct kvaser_usb_err_summary {
        };
 };
 
-static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
-       .name = "kvaser_usb",
-       .tseg1_min = KVASER_USB_TSEG1_MIN,
-       .tseg1_max = KVASER_USB_TSEG1_MAX,
-       .tseg2_min = KVASER_USB_TSEG2_MIN,
-       .tseg2_max = KVASER_USB_TSEG2_MAX,
-       .sjw_max = KVASER_USB_SJW_MAX,
-       .brp_min = KVASER_USB_BRP_MIN,
-       .brp_max = KVASER_USB_BRP_MAX,
-       .brp_inc = KVASER_USB_BRP_INC,
+static const struct can_bittiming_const kvaser_usb_leaf_m16c_bittiming_const = {
+       .name = "kvaser_usb_ucii",
+       .tseg1_min = 4,
+       .tseg1_max = 16,
+       .tseg2_min = 2,
+       .tseg2_max = 8,
+       .sjw_max = 4,
+       .brp_min = 1,
+       .brp_max = 16,
+       .brp_inc = 1,
+};
+
+static const struct can_bittiming_const kvaser_usb_leaf_m32c_bittiming_const = {
+       .name = "kvaser_usb_leaf",
+       .tseg1_min = 3,
+       .tseg1_max = 16,
+       .tseg2_min = 2,
+       .tseg2_max = 8,
+       .sjw_max = 4,
+       .brp_min = 2,
+       .brp_max = 128,
+       .brp_inc = 2,
 };
 
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = {
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_usbcan_dev_cfg = {
        .clock = {
                .freq = 8 * MEGA /* Hz */,
        },
        .timestamp_freq = 1,
-       .bittiming_const = &kvaser_usb_leaf_bittiming_const,
+       .bittiming_const = &kvaser_usb_leaf_m16c_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_m32c_dev_cfg = {
+       .clock = {
+               .freq = 16 * MEGA /* Hz */,
+       },
+       .timestamp_freq = 1,
+       .bittiming_const = &kvaser_usb_leaf_m32c_bittiming_const,
 };
 
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = {
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_16mhz = {
        .clock = {
                .freq = 16 * MEGA /* Hz */,
        },
        .timestamp_freq = 1,
-       .bittiming_const = &kvaser_usb_leaf_bittiming_const,
+       .bittiming_const = &kvaser_usb_flexc_bittiming_const,
 };
 
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = {
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_24mhz = {
        .clock = {
                .freq = 24 * MEGA /* Hz */,
        },
        .timestamp_freq = 1,
-       .bittiming_const = &kvaser_usb_leaf_bittiming_const,
+       .bittiming_const = &kvaser_usb_flexc_bittiming_const,
 };
 
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = {
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_32mhz = {
        .clock = {
                .freq = 32 * MEGA /* Hz */,
        },
        .timestamp_freq = 1,
-       .bittiming_const = &kvaser_usb_leaf_bittiming_const,
+       .bittiming_const = &kvaser_usb_flexc_bittiming_const,
 };
 
 static void *
@@ -404,7 +414,7 @@ kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
                                      sizeof(struct kvaser_cmd_tx_can);
                cmd->u.tx_can.channel = priv->channel;
 
-               switch (dev->card_data.leaf.family) {
+               switch (dev->driver_info->family) {
                case KVASER_LEAF:
                        cmd_tx_can_flags = &cmd->u.tx_can.leaf.flags;
                        break;
@@ -524,16 +534,23 @@ static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
        dev->fw_version = le32_to_cpu(softinfo->fw_version);
        dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
 
-       switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) {
-       case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK:
-               dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz;
-               break;
-       case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK:
-               dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz;
-               break;
-       case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK:
-               dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz;
-               break;
+       if (dev->driver_info->quirks & KVASER_USB_QUIRK_IGNORE_CLK_FREQ) {
+               /* Firmware expects bittiming parameters calculated for a
+                * 16 MHz clock, regardless of the actual clock frequency
+                */
+               dev->cfg = &kvaser_usb_leaf_m32c_dev_cfg;
+       } else {
+               switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) {
+               case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK:
+                       dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_16mhz;
+                       break;
+               case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK:
+                       dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_24mhz;
+                       break;
+               case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK:
+                       dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_32mhz;
+                       break;
+               }
        }
 }
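
[Editor's note] To make the quirk concrete, a worked example (illustration only, not driver code) of the arithmetic the CAN core performs against the 16 MHz m32c config selected above: bitrate = clock / (brp * time quanta per bit), so 500 kbit/s falls out of brp = 2 with 16 time quanta per bit, which the m32c constants permit (tseg1 3..16, tseg2 2..8, brp_inc = 2):

	static inline u32 example_bitrate(u32 clk_hz, u32 brp, u32 tq_per_bit)
	{
		/* 16000000 / (2 * 16) = 500000 bit/s */
		return clk_hz / (brp * tq_per_bit);
	}

Because the firmware always interprets the parameters against 16 MHz, computing them against the true 24 or 32 MHz oscillator would silently yield the wrong wire bitrate; pinning dev->cfg to the m32c config sidesteps that.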
 
@@ -550,7 +567,7 @@ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
        if (err)
                return err;
 
-       switch (dev->card_data.leaf.family) {
+       switch (dev->driver_info->family) {
        case KVASER_LEAF:
                kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo);
                break;
@@ -558,7 +575,7 @@ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
                dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version);
                dev->max_tx_urbs =
                        le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx);
-               dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz;
+               dev->cfg = &kvaser_usb_leaf_usbcan_dev_cfg;
                break;
        }
 
@@ -597,7 +614,7 @@ static int kvaser_usb_leaf_get_card_info(struct kvaser_usb *dev)
 
        dev->nchannels = cmd.u.cardinfo.nchannels;
        if (dev->nchannels > KVASER_USB_MAX_NET_DEVICES ||
-           (dev->card_data.leaf.family == KVASER_USBCAN &&
+           (dev->driver_info->family == KVASER_USBCAN &&
             dev->nchannels > MAX_USBCAN_NET_DEVICES))
                return -EINVAL;
 
@@ -730,7 +747,7 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
            new_state < CAN_STATE_BUS_OFF)
                priv->can.can_stats.restarts++;
 
-       switch (dev->card_data.leaf.family) {
+       switch (dev->driver_info->family) {
        case KVASER_LEAF:
                if (es->leaf.error_factor) {
                        priv->can.can_stats.bus_error++;
@@ -809,7 +826,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
                }
        }
 
-       switch (dev->card_data.leaf.family) {
+       switch (dev->driver_info->family) {
        case KVASER_LEAF:
                if (es->leaf.error_factor) {
                        cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT;
@@ -999,7 +1016,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
        stats = &priv->netdev->stats;
 
        if ((cmd->u.rx_can_header.flag & MSG_FLAG_ERROR_FRAME) &&
-           (dev->card_data.leaf.family == KVASER_LEAF &&
+           (dev->driver_info->family == KVASER_LEAF &&
             cmd->id == CMD_LEAF_LOG_MESSAGE)) {
                kvaser_usb_leaf_leaf_rx_error(dev, cmd);
                return;
@@ -1015,7 +1032,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
                return;
        }
 
-       switch (dev->card_data.leaf.family) {
+       switch (dev->driver_info->family) {
        case KVASER_LEAF:
                rx_data = cmd->u.leaf.rx_can.data;
                break;
@@ -1030,7 +1047,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
                return;
        }
 
-       if (dev->card_data.leaf.family == KVASER_LEAF && cmd->id ==
+       if (dev->driver_info->family == KVASER_LEAF && cmd->id ==
            CMD_LEAF_LOG_MESSAGE) {
                cf->can_id = le32_to_cpu(cmd->u.leaf.log_message.id);
                if (cf->can_id & KVASER_EXTENDED_FRAME)
@@ -1128,14 +1145,14 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
                break;
 
        case CMD_LEAF_LOG_MESSAGE:
-               if (dev->card_data.leaf.family != KVASER_LEAF)
+               if (dev->driver_info->family != KVASER_LEAF)
                        goto warn;
                kvaser_usb_leaf_rx_can_msg(dev, cmd);
                break;
 
        case CMD_CHIP_STATE_EVENT:
        case CMD_CAN_ERROR_EVENT:
-               if (dev->card_data.leaf.family == KVASER_LEAF)
+               if (dev->driver_info->family == KVASER_LEAF)
                        kvaser_usb_leaf_leaf_rx_error(dev, cmd);
                else
                        kvaser_usb_leaf_usbcan_rx_error(dev, cmd);
@@ -1147,12 +1164,12 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
 
        /* Ignored commands */
        case CMD_USBCAN_CLOCK_OVERFLOW_EVENT:
-               if (dev->card_data.leaf.family != KVASER_USBCAN)
+               if (dev->driver_info->family != KVASER_USBCAN)
                        goto warn;
                break;
 
        case CMD_FLUSH_QUEUE_REPLY:
-               if (dev->card_data.leaf.family != KVASER_LEAF)
+               if (dev->driver_info->family != KVASER_LEAF)
                        goto warn;
                break;
 
index 8a3b7b1..0de2f97 100644
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 /* Xilinx CAN device driver
  *
- * Copyright (C) 2012 - 2014 Xilinx, Inc.
+ * Copyright (C) 2012 - 2022 Xilinx, Inc.
  * Copyright (C) 2009 PetaLogix. All rights reserved.
  * Copyright (C) 2017 - 2018 Sandvik Mining and Construction Oy
  *
@@ -9,6 +9,7 @@
  * This driver is developed for Axi CAN IP and for Zynq CANPS Controller.
  */
 
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/errno.h>
 #include <linux/init.h>
@@ -50,7 +51,7 @@ enum xcan_reg {
 
        /* only on CAN FD cores */
        XCAN_F_BRPR_OFFSET      = 0x088, /* Data Phase Baud Rate
-                                         * Prescalar
+                                         * Prescaler
                                          */
        XCAN_F_BTR_OFFSET       = 0x08C, /* Data Phase Bit Timing */
        XCAN_TRR_OFFSET         = 0x0090, /* TX Buffer Ready Request */
@@ -86,6 +87,8 @@ enum xcan_reg {
 #define XCAN_MSR_LBACK_MASK            0x00000002 /* Loop back mode select */
 #define XCAN_MSR_SLEEP_MASK            0x00000001 /* Sleep mode select */
 #define XCAN_BRPR_BRP_MASK             0x000000FF /* Baud rate prescaler */
+#define XCAN_BRPR_TDCO_MASK            GENMASK(12, 8)  /* TDCO */
+#define XCAN_2_BRPR_TDCO_MASK          GENMASK(13, 8)  /* TDCO for CANFD 2.0 */
 #define XCAN_BTR_SJW_MASK              0x00000180 /* Synchronous jump width */
 #define XCAN_BTR_TS2_MASK              0x00000070 /* Time segment 2 */
 #define XCAN_BTR_TS1_MASK              0x0000000F /* Time segment 1 */
@@ -99,6 +102,7 @@ enum xcan_reg {
 #define XCAN_ESR_STER_MASK             0x00000004 /* Stuff error */
 #define XCAN_ESR_FMER_MASK             0x00000002 /* Form error */
 #define XCAN_ESR_CRCER_MASK            0x00000001 /* CRC error */
+#define XCAN_SR_TDCV_MASK              GENMASK(22, 16) /* TDCV Value */
 #define XCAN_SR_TXFLL_MASK             0x00000400 /* TX FIFO is full */
 #define XCAN_SR_ESTAT_MASK             0x00000180 /* Error status */
 #define XCAN_SR_ERRWRN_MASK            0x00000040 /* Error warning */
@@ -132,6 +136,7 @@ enum xcan_reg {
 #define XCAN_DLCR_BRS_MASK             0x04000000 /* BRS Mask in DLC */
 
 /* CAN register bit shift - XCAN_<REG>_<BIT>_SHIFT */
+#define XCAN_BRPR_TDC_ENABLE           BIT(16) /* Transmitter Delay Compensation (TDC) Enable */
 #define XCAN_BTR_SJW_SHIFT             7  /* Synchronous jump width */
 #define XCAN_BTR_TS2_SHIFT             4  /* Time segment 2 */
 #define XCAN_BTR_SJW_SHIFT_CANFD       16 /* Synchronous jump width */
@@ -258,7 +263,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = {
        .tseg2_min = 1,
        .tseg2_max = 128,
        .sjw_max = 128,
-       .brp_min = 2,
+       .brp_min = 1,
        .brp_max = 256,
        .brp_inc = 1,
 };
@@ -271,11 +276,31 @@ static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
        .tseg2_min = 1,
        .tseg2_max = 16,
        .sjw_max = 16,
-       .brp_min = 2,
+       .brp_min = 1,
        .brp_max = 256,
        .brp_inc = 1,
 };
 
+/* Transmission Delay Compensation constants for CANFD 1.0 */
+static const struct can_tdc_const xcan_tdc_const_canfd = {
+       .tdcv_min = 0,
+       .tdcv_max = 0, /* Manual mode not supported. */
+       .tdco_min = 0,
+       .tdco_max = 32,
+       .tdcf_min = 0, /* Filter window not supported */
+       .tdcf_max = 0,
+};
+
+/* Transmission Delay Compensation constants for CANFD 2.0 */
+static const struct can_tdc_const xcan_tdc_const_canfd2 = {
+       .tdcv_min = 0,
+       .tdcv_max = 0, /* Manual mode not supported. */
+       .tdco_min = 0,
+       .tdco_max = 64,
+       .tdcf_min = 0, /* Filter window not supported */
+       .tdcf_max = 0,
+};
+
 /**
  * xcan_write_reg_le - Write a value to the device register little endian
  * @priv:      Driver private data structure
@@ -405,7 +430,7 @@ static int xcan_set_bittiming(struct net_device *ndev)
                return -EPERM;
        }
 
-       /* Setting Baud Rate prescalar value in BRPR Register */
+       /* Setting Baud Rate prescaler value in BRPR Register */
        btr0 = (bt->brp - 1);
 
        /* Setting Time Segment 1 in BTR Register */
@@ -422,8 +447,16 @@ static int xcan_set_bittiming(struct net_device *ndev)
 
        if (priv->devtype.cantype == XAXI_CANFD ||
            priv->devtype.cantype == XAXI_CANFD_2_0) {
-               /* Setting Baud Rate prescalar value in F_BRPR Register */
+               /* Setting Baud Rate prescaler value in F_BRPR Register */
                btr0 = dbt->brp - 1;
+               if (can_tdc_is_enabled(&priv->can)) {
+                       if (priv->devtype.cantype == XAXI_CANFD)
+                               btr0 |= FIELD_PREP(XCAN_BRPR_TDCO_MASK, priv->can.tdc.tdco) |
+                                       XCAN_BRPR_TDC_ENABLE;
+                       else
+                               btr0 |= FIELD_PREP(XCAN_2_BRPR_TDCO_MASK, priv->can.tdc.tdco) |
+                                       XCAN_BRPR_TDC_ENABLE;
+               }
 
                /* Setting Time Segment 1 in BTR Register */
                btr1 = dbt->prop_seg + dbt->phase_seg1 - 1;
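
[Editor's note] The FIELD_PREP() calls above pack the TDC offset into the BRPR register next to the prescaler bits. A self-contained illustration of the helper pair from <linux/bitfield.h> using a mirror of the CANFD 1.0 mask (bits 12:8); the names and values here are made up purely to demonstrate the round trip:

	#include <linux/bitfield.h>
	#include <linux/bits.h>

	#define EXAMPLE_TDCO_MASK	GENMASK(12, 8)	/* mirrors XCAN_BRPR_TDCO_MASK */
	#define EXAMPLE_TDC_ENABLE	BIT(16)		/* mirrors XCAN_BRPR_TDC_ENABLE */

	static inline u32 example_pack_tdco(u32 tdco)
	{
		/* tdco lands in bits 12:8; bit 16 switches compensation on */
		return FIELD_PREP(EXAMPLE_TDCO_MASK, tdco) | EXAMPLE_TDC_ENABLE;
	}

	static inline u32 example_unpack_tdco(u32 brpr)
	{
		/* FIELD_GET() masks and shifts back:
		 * example_unpack_tdco(example_pack_tdco(17)) == 17
		 */
		return FIELD_GET(EXAMPLE_TDCO_MASK, brpr);
	}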
@@ -1483,6 +1516,22 @@ static int xcan_get_berr_counter(const struct net_device *ndev,
        return 0;
 }
 
+/**
+ * xcan_get_auto_tdcv - Get Transmitter Delay Compensation Value
+ * @ndev:      Pointer to net_device structure
+ * @tdcv:      Pointer to TDCV value
+ *
+ * Return: 0 on success
+ */
+static int xcan_get_auto_tdcv(const struct net_device *ndev, u32 *tdcv)
+{
+       struct xcan_priv *priv = netdev_priv(ndev);
+
+       *tdcv = FIELD_GET(XCAN_SR_TDCV_MASK, priv->read_reg(priv, XCAN_SR_OFFSET));
+
+       return 0;
+}
+
 static const struct net_device_ops xcan_netdev_ops = {
        .ndo_open       = xcan_open,
        .ndo_stop       = xcan_close,
@@ -1735,17 +1784,24 @@ static int xcan_probe(struct platform_device *pdev)
        priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
                                        CAN_CTRLMODE_BERR_REPORTING;
 
-       if (devtype->cantype == XAXI_CANFD)
+       if (devtype->cantype == XAXI_CANFD) {
                priv->can.data_bittiming_const =
                        &xcan_data_bittiming_const_canfd;
+               priv->can.tdc_const = &xcan_tdc_const_canfd;
+       }
 
-       if (devtype->cantype == XAXI_CANFD_2_0)
+       if (devtype->cantype == XAXI_CANFD_2_0) {
                priv->can.data_bittiming_const =
                        &xcan_data_bittiming_const_canfd2;
+               priv->can.tdc_const = &xcan_tdc_const_canfd2;
+       }
 
        if (devtype->cantype == XAXI_CANFD ||
-           devtype->cantype == XAXI_CANFD_2_0)
-               priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
+           devtype->cantype == XAXI_CANFD_2_0) {
+               priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD |
+                                               CAN_CTRLMODE_TDC_AUTO;
+               priv->can.do_get_auto_tdcv = xcan_get_auto_tdcv;
+       }
 
        priv->reg_base = addr;
        priv->tx_max = tx_max;
index 6d1fcb0..702d68a 100644
@@ -70,6 +70,15 @@ config NET_DSA_QCA8K
 
 source "drivers/net/dsa/realtek/Kconfig"
 
+config NET_DSA_RZN1_A5PSW
+       tristate "Renesas RZ/N1 A5PSW Ethernet switch support"
+       depends on OF && ARCH_RZN1
+       select NET_DSA_TAG_RZN1_A5PSW
+       select PCS_RZN1_MIIC
+       help
+         This driver supports the A5PSW switch, which is embedded in the
+         Renesas RZ/N1 SoC.
+
 config NET_DSA_SMSC_LAN9303
        tristate
        select NET_DSA_TAG_LAN9303
index e73838c..b32907a 100644
@@ -9,6 +9,7 @@ obj-$(CONFIG_NET_DSA_LANTIQ_GSWIP) += lantiq_gswip.o
 obj-$(CONFIG_NET_DSA_MT7530)   += mt7530.o
 obj-$(CONFIG_NET_DSA_MV88E6060) += mv88e6060.o
 obj-$(CONFIG_NET_DSA_QCA8K)    += qca8k.o
+obj-$(CONFIG_NET_DSA_RZN1_A5PSW) += rzn1_a5psw.o
 obj-$(CONFIG_NET_DSA_SMSC_LAN9303) += lan9303-core.o
 obj-$(CONFIG_NET_DSA_SMSC_LAN9303_I2C) += lan9303_i2c.o
 obj-$(CONFIG_NET_DSA_SMSC_LAN9303_MDIO) += lan9303_mdio.o
index 0e54b2a..308f15d 100644
@@ -320,8 +320,6 @@ static void b53_spi_remove(struct spi_device *spi)
 
        if (dev)
                b53_switch_remove(dev);
-
-       spi_set_drvdata(spi, NULL);
 }
 
 static void b53_spi_shutdown(struct spi_device *spi)
index 87e81c6..be0edfa 100644
@@ -878,6 +878,11 @@ static void bcm_sf2_sw_mac_link_up(struct dsa_switch *ds, int port,
                if (duplex == DUPLEX_FULL)
                        reg |= DUPLX_MODE;
 
+               if (tx_pause)
+                       reg |= TXFLOW_CNTL;
+               if (rx_pause)
+                       reg |= RXFLOW_CNTL;
+
                core_writel(priv, reg, offset);
        }
 
index 2572c60..b28baab 100644
@@ -300,6 +300,7 @@ static int hellcreek_led_setup(struct hellcreek *hellcreek)
        const char *label, *state;
        int ret = -EINVAL;
 
+       of_node_get(hellcreek->dev->of_node);
        leds = of_find_node_by_name(hellcreek->dev->of_node, "leds");
        if (!leds) {
                dev_err(hellcreek->dev, "No LEDs specified in device tree!\n");
index c9e2a89..06b1efd 100644
@@ -1,49 +1,29 @@
 # SPDX-License-Identifier: GPL-2.0-only
-config NET_DSA_MICROCHIP_KSZ_COMMON
-       select NET_DSA_TAG_KSZ
-       tristate
-
-menuconfig NET_DSA_MICROCHIP_KSZ9477
-       tristate "Microchip KSZ9477 series switch support"
+menuconfig NET_DSA_MICROCHIP_KSZ_COMMON
+       tristate "Microchip KSZ8795/KSZ9477/LAN937x series switch support"
        depends on NET_DSA
-       select NET_DSA_MICROCHIP_KSZ_COMMON
+       select NET_DSA_TAG_KSZ
        help
-         This driver adds support for Microchip KSZ9477 switch chips.
+         This driver adds support for Microchip KSZ9477 series switches and
+         KSZ8795/KSZ88X3 switch chips.
 
 config NET_DSA_MICROCHIP_KSZ9477_I2C
-       tristate "KSZ9477 series I2C connected switch driver"
-       depends on NET_DSA_MICROCHIP_KSZ9477 && I2C
+       tristate "KSZ series I2C connected switch driver"
+       depends on NET_DSA_MICROCHIP_KSZ_COMMON && I2C
        select REGMAP_I2C
        help
          Select to enable support for registering switches configured through I2C.
 
-config NET_DSA_MICROCHIP_KSZ9477_SPI
-       tristate "KSZ9477 series SPI connected switch driver"
-       depends on NET_DSA_MICROCHIP_KSZ9477 && SPI
+config NET_DSA_MICROCHIP_KSZ_SPI
+       tristate "KSZ series SPI connected switch driver"
+       depends on NET_DSA_MICROCHIP_KSZ_COMMON && SPI
        select REGMAP_SPI
        help
          Select to enable support for registering switches configured through SPI.
 
-menuconfig NET_DSA_MICROCHIP_KSZ8795
-       tristate "Microchip KSZ8795 series switch support"
-       depends on NET_DSA
-       select NET_DSA_MICROCHIP_KSZ_COMMON
-       help
-         This driver adds support for Microchip KSZ8795/KSZ88X3 switch chips.
-
-config NET_DSA_MICROCHIP_KSZ8795_SPI
-       tristate "KSZ8795 series SPI connected switch driver"
-       depends on NET_DSA_MICROCHIP_KSZ8795 && SPI
-       select REGMAP_SPI
-       help
-         This driver accesses KSZ8795 chip through SPI.
-
-         It is required to use the KSZ8795 switch driver as the only access
-         is through SPI.
-
 config NET_DSA_MICROCHIP_KSZ8863_SMI
        tristate "KSZ series SMI connected switch driver"
-       depends on NET_DSA_MICROCHIP_KSZ8795
+       depends on NET_DSA_MICROCHIP_KSZ_COMMON
        select MDIO_BITBANG
        help
          Select to enable support for registering switches configured through
index 2a03b21..2887355 100644
@@ -1,8 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ_COMMON)     += ksz_common.o
-obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ9477)                += ksz9477.o
+obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ_COMMON)     += ksz_switch.o
+ksz_switch-objs := ksz_common.o
+ksz_switch-objs += ksz9477.o
+ksz_switch-objs += ksz8795.o
+ksz_switch-objs += lan937x_main.o
 obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ9477_I2C)    += ksz9477_i2c.o
-obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ9477_SPI)    += ksz9477_spi.o
-obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ8795)                += ksz8795.o
-obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ8795_SPI)    += ksz8795_spi.o
+obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ_SPI)                += ksz_spi.o
 obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ8863_SMI)    += ksz8863_smi.o
index cae76f5..42c50cc 100644
@@ -9,63 +9,53 @@
 #define __KSZ8XXX_H
 
 #include <linux/types.h>
+#include <net/dsa.h>
+#include "ksz_common.h"
 
-enum ksz_regs {
-       REG_IND_CTRL_0,
-       REG_IND_DATA_8,
-       REG_IND_DATA_CHECK,
-       REG_IND_DATA_HI,
-       REG_IND_DATA_LO,
-       REG_IND_MIB_CHECK,
-       REG_IND_BYTE,
-       P_FORCE_CTRL,
-       P_LINK_STATUS,
-       P_LOCAL_CTRL,
-       P_NEG_RESTART_CTRL,
-       P_REMOTE_STATUS,
-       P_SPEED_STATUS,
-       S_TAIL_TAG_CTRL,
-};
-
-enum ksz_masks {
-       PORT_802_1P_REMAPPING,
-       SW_TAIL_TAG_ENABLE,
-       MIB_COUNTER_OVERFLOW,
-       MIB_COUNTER_VALID,
-       VLAN_TABLE_FID,
-       VLAN_TABLE_MEMBERSHIP,
-       VLAN_TABLE_VALID,
-       STATIC_MAC_TABLE_VALID,
-       STATIC_MAC_TABLE_USE_FID,
-       STATIC_MAC_TABLE_FID,
-       STATIC_MAC_TABLE_OVERRIDE,
-       STATIC_MAC_TABLE_FWD_PORTS,
-       DYNAMIC_MAC_TABLE_ENTRIES_H,
-       DYNAMIC_MAC_TABLE_MAC_EMPTY,
-       DYNAMIC_MAC_TABLE_NOT_READY,
-       DYNAMIC_MAC_TABLE_ENTRIES,
-       DYNAMIC_MAC_TABLE_FID,
-       DYNAMIC_MAC_TABLE_SRC_PORT,
-       DYNAMIC_MAC_TABLE_TIMESTAMP,
-};
-
-enum ksz_shifts {
-       VLAN_TABLE_MEMBERSHIP_S,
-       VLAN_TABLE,
-       STATIC_MAC_FWD_PORTS,
-       STATIC_MAC_FID,
-       DYNAMIC_MAC_ENTRIES_H,
-       DYNAMIC_MAC_ENTRIES,
-       DYNAMIC_MAC_FID,
-       DYNAMIC_MAC_TIMESTAMP,
-       DYNAMIC_MAC_SRC_PORT,
-};
-
-struct ksz8 {
-       const u8 *regs;
-       const u32 *masks;
-       const u8 *shifts;
-       void *priv;
-};
+int ksz8_setup(struct dsa_switch *ds);
+u32 ksz8_get_port_addr(int port, int offset);
+void ksz8_cfg_port_member(struct ksz_device *dev, int port, u8 member);
+void ksz8_flush_dyn_mac_table(struct ksz_device *dev, int port);
+void ksz8_port_setup(struct ksz_device *dev, int port, bool cpu_port);
+void ksz8_r_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 *val);
+void ksz8_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val);
+int ksz8_r_dyn_mac_table(struct ksz_device *dev, u16 addr, u8 *mac_addr,
+                        u8 *fid, u8 *src_port, u8 *timestamp, u16 *entries);
+int ksz8_r_sta_mac_table(struct ksz_device *dev, u16 addr,
+                        struct alu_struct *alu);
+void ksz8_w_sta_mac_table(struct ksz_device *dev, u16 addr,
+                         struct alu_struct *alu);
+void ksz8_r_mib_cnt(struct ksz_device *dev, int port, u16 addr, u64 *cnt);
+void ksz8_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
+                   u64 *dropped, u64 *cnt);
+void ksz8_freeze_mib(struct ksz_device *dev, int port, bool freeze);
+void ksz8_port_init_cnt(struct ksz_device *dev, int port);
+int ksz8_fdb_dump(struct ksz_device *dev, int port,
+                 dsa_fdb_dump_cb_t *cb, void *data);
+int ksz8_mdb_add(struct ksz_device *dev, int port,
+                const struct switchdev_obj_port_mdb *mdb, struct dsa_db db);
+int ksz8_mdb_del(struct ksz_device *dev, int port,
+                const struct switchdev_obj_port_mdb *mdb, struct dsa_db db);
+int ksz8_port_vlan_filtering(struct ksz_device *dev, int port, bool flag,
+                            struct netlink_ext_ack *extack);
+int ksz8_port_vlan_add(struct ksz_device *dev, int port,
+                      const struct switchdev_obj_port_vlan *vlan,
+                      struct netlink_ext_ack *extack);
+int ksz8_port_vlan_del(struct ksz_device *dev, int port,
+                      const struct switchdev_obj_port_vlan *vlan);
+int ksz8_port_mirror_add(struct ksz_device *dev, int port,
+                        struct dsa_mall_mirror_tc_entry *mirror,
+                        bool ingress, struct netlink_ext_ack *extack);
+void ksz8_port_mirror_del(struct ksz_device *dev, int port,
+                         struct dsa_mall_mirror_tc_entry *mirror);
+int ksz8_get_stp_reg(void);
+void ksz8_get_caps(struct ksz_device *dev, int port,
+                  struct phylink_config *config);
+void ksz8_config_cpu_port(struct dsa_switch *ds);
+int ksz8_enable_stp_addr(struct ksz_device *dev);
+int ksz8_reset_switch(struct ksz_device *dev);
+int ksz8_switch_detect(struct ksz_device *dev);
+int ksz8_switch_init(struct ksz_device *dev);
+void ksz8_switch_exit(struct ksz_device *dev);
 
 #endif
index 12a599d..911aace 100644
 #include "ksz8795_reg.h"
 #include "ksz8.h"
 
-static const u8 ksz8795_regs[] = {
-       [REG_IND_CTRL_0]                = 0x6E,
-       [REG_IND_DATA_8]                = 0x70,
-       [REG_IND_DATA_CHECK]            = 0x72,
-       [REG_IND_DATA_HI]               = 0x71,
-       [REG_IND_DATA_LO]               = 0x75,
-       [REG_IND_MIB_CHECK]             = 0x74,
-       [REG_IND_BYTE]                  = 0xA0,
-       [P_FORCE_CTRL]                  = 0x0C,
-       [P_LINK_STATUS]                 = 0x0E,
-       [P_LOCAL_CTRL]                  = 0x07,
-       [P_NEG_RESTART_CTRL]            = 0x0D,
-       [P_REMOTE_STATUS]               = 0x08,
-       [P_SPEED_STATUS]                = 0x09,
-       [S_TAIL_TAG_CTRL]               = 0x0C,
-};
-
-static const u32 ksz8795_masks[] = {
-       [PORT_802_1P_REMAPPING]         = BIT(7),
-       [SW_TAIL_TAG_ENABLE]            = BIT(1),
-       [MIB_COUNTER_OVERFLOW]          = BIT(6),
-       [MIB_COUNTER_VALID]             = BIT(5),
-       [VLAN_TABLE_FID]                = GENMASK(6, 0),
-       [VLAN_TABLE_MEMBERSHIP]         = GENMASK(11, 7),
-       [VLAN_TABLE_VALID]              = BIT(12),
-       [STATIC_MAC_TABLE_VALID]        = BIT(21),
-       [STATIC_MAC_TABLE_USE_FID]      = BIT(23),
-       [STATIC_MAC_TABLE_FID]          = GENMASK(30, 24),
-       [STATIC_MAC_TABLE_OVERRIDE]     = BIT(26),
-       [STATIC_MAC_TABLE_FWD_PORTS]    = GENMASK(24, 20),
-       [DYNAMIC_MAC_TABLE_ENTRIES_H]   = GENMASK(6, 0),
-       [DYNAMIC_MAC_TABLE_MAC_EMPTY]   = BIT(8),
-       [DYNAMIC_MAC_TABLE_NOT_READY]   = BIT(7),
-       [DYNAMIC_MAC_TABLE_ENTRIES]     = GENMASK(31, 29),
-       [DYNAMIC_MAC_TABLE_FID]         = GENMASK(26, 20),
-       [DYNAMIC_MAC_TABLE_SRC_PORT]    = GENMASK(26, 24),
-       [DYNAMIC_MAC_TABLE_TIMESTAMP]   = GENMASK(28, 27),
-};
-
-static const u8 ksz8795_shifts[] = {
-       [VLAN_TABLE_MEMBERSHIP_S]       = 7,
-       [VLAN_TABLE]                    = 16,
-       [STATIC_MAC_FWD_PORTS]          = 16,
-       [STATIC_MAC_FID]                = 24,
-       [DYNAMIC_MAC_ENTRIES_H]         = 3,
-       [DYNAMIC_MAC_ENTRIES]           = 29,
-       [DYNAMIC_MAC_FID]               = 16,
-       [DYNAMIC_MAC_TIMESTAMP]         = 27,
-       [DYNAMIC_MAC_SRC_PORT]          = 24,
-};
-
-static const u8 ksz8863_regs[] = {
-       [REG_IND_CTRL_0]                = 0x79,
-       [REG_IND_DATA_8]                = 0x7B,
-       [REG_IND_DATA_CHECK]            = 0x7B,
-       [REG_IND_DATA_HI]               = 0x7C,
-       [REG_IND_DATA_LO]               = 0x80,
-       [REG_IND_MIB_CHECK]             = 0x80,
-       [P_FORCE_CTRL]                  = 0x0C,
-       [P_LINK_STATUS]                 = 0x0E,
-       [P_LOCAL_CTRL]                  = 0x0C,
-       [P_NEG_RESTART_CTRL]            = 0x0D,
-       [P_REMOTE_STATUS]               = 0x0E,
-       [P_SPEED_STATUS]                = 0x0F,
-       [S_TAIL_TAG_CTRL]               = 0x03,
-};
-
-static const u32 ksz8863_masks[] = {
-       [PORT_802_1P_REMAPPING]         = BIT(3),
-       [SW_TAIL_TAG_ENABLE]            = BIT(6),
-       [MIB_COUNTER_OVERFLOW]          = BIT(7),
-       [MIB_COUNTER_VALID]             = BIT(6),
-       [VLAN_TABLE_FID]                = GENMASK(15, 12),
-       [VLAN_TABLE_MEMBERSHIP]         = GENMASK(18, 16),
-       [VLAN_TABLE_VALID]              = BIT(19),
-       [STATIC_MAC_TABLE_VALID]        = BIT(19),
-       [STATIC_MAC_TABLE_USE_FID]      = BIT(21),
-       [STATIC_MAC_TABLE_FID]          = GENMASK(29, 26),
-       [STATIC_MAC_TABLE_OVERRIDE]     = BIT(20),
-       [STATIC_MAC_TABLE_FWD_PORTS]    = GENMASK(18, 16),
-       [DYNAMIC_MAC_TABLE_ENTRIES_H]   = GENMASK(5, 0),
-       [DYNAMIC_MAC_TABLE_MAC_EMPTY]   = BIT(7),
-       [DYNAMIC_MAC_TABLE_NOT_READY]   = BIT(7),
-       [DYNAMIC_MAC_TABLE_ENTRIES]     = GENMASK(31, 28),
-       [DYNAMIC_MAC_TABLE_FID]         = GENMASK(19, 16),
-       [DYNAMIC_MAC_TABLE_SRC_PORT]    = GENMASK(21, 20),
-       [DYNAMIC_MAC_TABLE_TIMESTAMP]   = GENMASK(23, 22),
-};
-
-static u8 ksz8863_shifts[] = {
-       [VLAN_TABLE_MEMBERSHIP_S]       = 16,
-       [STATIC_MAC_FWD_PORTS]          = 16,
-       [STATIC_MAC_FID]                = 22,
-       [DYNAMIC_MAC_ENTRIES_H]         = 3,
-       [DYNAMIC_MAC_ENTRIES]           = 24,
-       [DYNAMIC_MAC_FID]               = 16,
-       [DYNAMIC_MAC_TIMESTAMP]         = 24,
-       [DYNAMIC_MAC_SRC_PORT]          = 20,
-};
-
 static bool ksz_is_ksz88x3(struct ksz_device *dev)
 {
        return dev->chip_id == 0x8830;
@@ -145,11 +45,12 @@ static void ksz_port_cfg(struct ksz_device *dev, int port, int offset, u8 bits,
 
 static int ksz8_ind_write8(struct ksz_device *dev, u8 table, u16 addr, u8 data)
 {
-       struct ksz8 *ksz8 = dev->priv;
-       const u8 *regs = ksz8->regs;
+       const u16 *regs;
        u16 ctrl_addr;
        int ret = 0;
 
+       regs = dev->info->regs;
+
        mutex_lock(&dev->alu_mutex);
 
        ctrl_addr = IND_ACC_TABLE(table) | addr;
@@ -162,7 +63,7 @@ static int ksz8_ind_write8(struct ksz_device *dev, u8 table, u16 addr, u8 data)
        return ret;
 }
 
-static int ksz8_reset_switch(struct ksz_device *dev)
+int ksz8_reset_switch(struct ksz_device *dev)
 {
        if (ksz_is_ksz88x3(dev)) {
                /* reset switch */
@@ -213,18 +114,17 @@ static void ksz8795_set_prio_queue(struct ksz_device *dev, int port, int queue)
                        true);
 }
 
-static void ksz8_r_mib_cnt(struct ksz_device *dev, int port, u16 addr, u64 *cnt)
+void ksz8_r_mib_cnt(struct ksz_device *dev, int port, u16 addr, u64 *cnt)
 {
-       struct ksz8 *ksz8 = dev->priv;
        const u32 *masks;
-       const u8 *regs;
+       const u16 *regs;
        u16 ctrl_addr;
        u32 data;
        u8 check;
        int loop;
 
-       masks = ksz8->masks;
-       regs = ksz8->regs;
+       masks = dev->info->masks;
+       regs = dev->info->regs;
 
        ctrl_addr = addr + dev->info->reg_mib_cnt * port;
        ctrl_addr |= IND_ACC_TABLE(TABLE_MIB | TABLE_READ);
@@ -252,16 +152,15 @@ static void ksz8_r_mib_cnt(struct ksz_device *dev, int port, u16 addr, u64 *cnt)
 static void ksz8795_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
                              u64 *dropped, u64 *cnt)
 {
-       struct ksz8 *ksz8 = dev->priv;
        const u32 *masks;
-       const u8 *regs;
+       const u16 *regs;
        u16 ctrl_addr;
        u32 data;
        u8 check;
        int loop;
 
-       masks = ksz8->masks;
-       regs = ksz8->regs;
+       masks = dev->info->masks;
+       regs = dev->info->regs;
 
        addr -= dev->info->reg_mib_cnt;
        ctrl_addr = (KSZ8795_MIB_TOTAL_RX_1 - KSZ8795_MIB_TOTAL_RX_0) * port;
@@ -305,13 +204,14 @@ static void ksz8795_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
 static void ksz8863_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
                              u64 *dropped, u64 *cnt)
 {
-       struct ksz8 *ksz8 = dev->priv;
-       const u8 *regs = ksz8->regs;
        u32 *last = (u32 *)dropped;
+       const u16 *regs;
        u16 ctrl_addr;
        u32 data;
        u32 cur;
 
+       regs = dev->info->regs;
+
        addr -= dev->info->reg_mib_cnt;
        ctrl_addr = addr ? KSZ8863_MIB_PACKET_DROPPED_TX_0 :
                           KSZ8863_MIB_PACKET_DROPPED_RX_0;
@@ -334,8 +234,8 @@ static void ksz8863_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
        }
 }
 
-static void ksz8_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
-                          u64 *dropped, u64 *cnt)
+void ksz8_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
+                   u64 *dropped, u64 *cnt)
 {
        if (ksz_is_ksz88x3(dev))
                ksz8863_r_mib_pkt(dev, port, addr, dropped, cnt);
@@ -343,7 +243,7 @@ static void ksz8_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
                ksz8795_r_mib_pkt(dev, port, addr, dropped, cnt);
 }
 
-static void ksz8_freeze_mib(struct ksz_device *dev, int port, bool freeze)
+void ksz8_freeze_mib(struct ksz_device *dev, int port, bool freeze)
 {
        if (ksz_is_ksz88x3(dev))
                return;
@@ -358,7 +258,7 @@ static void ksz8_freeze_mib(struct ksz_device *dev, int port, bool freeze)
                ksz_cfg(dev, REG_SW_CTRL_6, BIT(port), false);
 }
 
-static void ksz8_port_init_cnt(struct ksz_device *dev, int port)
+void ksz8_port_init_cnt(struct ksz_device *dev, int port)
 {
        struct ksz_port_mib *mib = &dev->ports[port].mib;
        u64 *dropped;
@@ -392,10 +292,11 @@ static void ksz8_port_init_cnt(struct ksz_device *dev, int port)
 
 static void ksz8_r_table(struct ksz_device *dev, int table, u16 addr, u64 *data)
 {
-       struct ksz8 *ksz8 = dev->priv;
-       const u8 *regs = ksz8->regs;
+       const u16 *regs;
        u16 ctrl_addr;
 
+       regs = dev->info->regs;
+
        ctrl_addr = IND_ACC_TABLE(table | TABLE_READ) | addr;
 
        mutex_lock(&dev->alu_mutex);
@@ -406,10 +307,11 @@ static void ksz8_r_table(struct ksz_device *dev, int table, u16 addr, u64 *data)
 
 static void ksz8_w_table(struct ksz_device *dev, int table, u16 addr, u64 data)
 {
-       struct ksz8 *ksz8 = dev->priv;
-       const u8 *regs = ksz8->regs;
+       const u16 *regs;
        u16 ctrl_addr;
 
+       regs = dev->info->regs;
+
        ctrl_addr = IND_ACC_TABLE(table) | addr;
 
        mutex_lock(&dev->alu_mutex);
@@ -420,13 +322,12 @@ static void ksz8_w_table(struct ksz_device *dev, int table, u16 addr, u64 data)
 
 static int ksz8_valid_dyn_entry(struct ksz_device *dev, u8 *data)
 {
-       struct ksz8 *ksz8 = dev->priv;
        int timeout = 100;
        const u32 *masks;
-       const u8 *regs;
+       const u16 *regs;
 
-       masks = ksz8->masks;
-       regs = ksz8->regs;
+       masks = dev->info->masks;
+       regs = dev->info->regs;
 
        do {
                ksz_read8(dev, regs[REG_IND_DATA_CHECK], data);
@@ -447,22 +348,20 @@ static int ksz8_valid_dyn_entry(struct ksz_device *dev, u8 *data)
        return 0;
 }
 
-static int ksz8_r_dyn_mac_table(struct ksz_device *dev, u16 addr,
-                               u8 *mac_addr, u8 *fid, u8 *src_port,
-                               u8 *timestamp, u16 *entries)
+int ksz8_r_dyn_mac_table(struct ksz_device *dev, u16 addr, u8 *mac_addr,
+                        u8 *fid, u8 *src_port, u8 *timestamp, u16 *entries)
 {
-       struct ksz8 *ksz8 = dev->priv;
        u32 data_hi, data_lo;
        const u8 *shifts;
        const u32 *masks;
-       const u8 *regs;
+       const u16 *regs;
        u16 ctrl_addr;
        u8 data;
        int rc;
 
-       shifts = ksz8->shifts;
-       masks = ksz8->masks;
-       regs = ksz8->regs;
+       shifts = dev->info->shifts;
+       masks = dev->info->masks;
+       regs = dev->info->regs;
 
        ctrl_addr = IND_ACC_TABLE(TABLE_DYNAMIC_MAC | TABLE_READ) | addr;
 
@@ -512,17 +411,16 @@ static int ksz8_r_dyn_mac_table(struct ksz_device *dev, u16 addr,
        return rc;
 }
 
-static int ksz8_r_sta_mac_table(struct ksz_device *dev, u16 addr,
-                               struct alu_struct *alu)
+int ksz8_r_sta_mac_table(struct ksz_device *dev, u16 addr,
+                        struct alu_struct *alu)
 {
-       struct ksz8 *ksz8 = dev->priv;
        u32 data_hi, data_lo;
        const u8 *shifts;
        const u32 *masks;
        u64 data;
 
-       shifts = ksz8->shifts;
-       masks = ksz8->masks;
+       shifts = dev->info->shifts;
+       masks = dev->info->masks;
 
        ksz8_r_table(dev, TABLE_STATIC_MAC, addr, &data);
        data_hi = data >> 32;
@@ -551,17 +449,16 @@ static int ksz8_r_sta_mac_table(struct ksz_device *dev, u16 addr,
        return -ENXIO;
 }
 
-static void ksz8_w_sta_mac_table(struct ksz_device *dev, u16 addr,
-                                struct alu_struct *alu)
+void ksz8_w_sta_mac_table(struct ksz_device *dev, u16 addr,
+                         struct alu_struct *alu)
 {
-       struct ksz8 *ksz8 = dev->priv;
        u32 data_hi, data_lo;
        const u8 *shifts;
        const u32 *masks;
        u64 data;
 
-       shifts = ksz8->shifts;
-       masks = ksz8->masks;
+       shifts = dev->info->shifts;
+       masks = dev->info->masks;
 
        data_lo = ((u32)alu->mac[2] << 24) |
                ((u32)alu->mac[3] << 16) |
@@ -587,12 +484,11 @@ static void ksz8_w_sta_mac_table(struct ksz_device *dev, u16 addr,
 static void ksz8_from_vlan(struct ksz_device *dev, u32 vlan, u8 *fid,
                           u8 *member, u8 *valid)
 {
-       struct ksz8 *ksz8 = dev->priv;
        const u8 *shifts;
        const u32 *masks;
 
-       shifts = ksz8->shifts;
-       masks = ksz8->masks;
+       shifts = dev->info->shifts;
+       masks = dev->info->masks;
 
        *fid = vlan & masks[VLAN_TABLE_FID];
        *member = (vlan & masks[VLAN_TABLE_MEMBERSHIP]) >>
@@ -603,12 +499,11 @@ static void ksz8_from_vlan(struct ksz_device *dev, u32 vlan, u8 *fid,
 static void ksz8_to_vlan(struct ksz_device *dev, u8 fid, u8 member, u8 valid,
                         u16 *vlan)
 {
-       struct ksz8 *ksz8 = dev->priv;
        const u8 *shifts;
        const u32 *masks;
 
-       shifts = ksz8->shifts;
-       masks = ksz8->masks;
+       shifts = dev->info->shifts;
+       masks = dev->info->masks;
 
        *vlan = fid;
        *vlan |= (u16)member << shifts[VLAN_TABLE_MEMBERSHIP_S];
@@ -618,12 +513,11 @@ static void ksz8_to_vlan(struct ksz_device *dev, u8 fid, u8 member, u8 valid,
 
 static void ksz8_r_vlan_entries(struct ksz_device *dev, u16 addr)
 {
-       struct ksz8 *ksz8 = dev->priv;
        const u8 *shifts;
        u64 data;
        int i;
 
-       shifts = ksz8->shifts;
+       shifts = dev->info->shifts;
 
        ksz8_r_table(dev, TABLE_VLAN, addr, &data);
        addr *= 4;
@@ -663,16 +557,17 @@ static void ksz8_w_vlan_table(struct ksz_device *dev, u16 vid, u16 vlan)
        ksz8_w_table(dev, TABLE_VLAN, addr, buf);
 }
 
-static void ksz8_r_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 *val)
+void ksz8_r_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 *val)
 {
-       struct ksz8 *ksz8 = dev->priv;
        u8 restart, speed, ctrl, link;
-       const u8 *regs = ksz8->regs;
        int processed = true;
+       const u16 *regs;
        u8 val1, val2;
        u16 data = 0;
        u8 p = phy;
 
+       regs = dev->info->regs;
+
        switch (reg) {
        case MII_BMCR:
                ksz_pread8(dev, p, regs[P_NEG_RESTART_CTRL], &restart);
@@ -786,13 +681,14 @@ static void ksz8_r_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 *val)
                *val = data;
 }
 
-static void ksz8_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val)
+void ksz8_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val)
 {
-       struct ksz8 *ksz8 = dev->priv;
        u8 restart, speed, ctrl, data;
-       const u8 *regs = ksz8->regs;
+       const u16 *regs;
        u8 p = phy;
 
+       regs = dev->info->regs;
+
        switch (reg) {
        case MII_BMCR:
 
@@ -898,30 +794,7 @@ static void ksz8_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val)
        }
 }
 
-static enum dsa_tag_protocol ksz8_get_tag_protocol(struct dsa_switch *ds,
-                                                  int port,
-                                                  enum dsa_tag_protocol mp)
-{
-       struct ksz_device *dev = ds->priv;
-
-       /* ksz88x3 uses the same tag schema as KSZ9893 */
-       return ksz_is_ksz88x3(dev) ?
-               DSA_TAG_PROTO_KSZ9893 : DSA_TAG_PROTO_KSZ8795;
-}
-
-static u32 ksz8_sw_get_phy_flags(struct dsa_switch *ds, int port)
-{
-       /* Silicon Errata Sheet (DS80000830A):
-        * Port 1 does not work with LinkMD Cable-Testing.
-        * Port 1 does not respond to received PAUSE control frames.
-        */
-       if (!port)
-               return MICREL_KSZ8_P1_ERRATA;
-
-       return 0;
-}
-
-static void ksz8_cfg_port_member(struct ksz_device *dev, int port, u8 member)
+void ksz8_cfg_port_member(struct ksz_device *dev, int port, u8 member)
 {
        u8 data;
 
@@ -931,16 +804,14 @@ static void ksz8_cfg_port_member(struct ksz_device *dev, int port, u8 member)
        ksz_pwrite8(dev, port, P_MIRROR_CTRL, data);
 }
 
-static void ksz8_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
-{
-       ksz_port_stp_state_set(ds, port, state, P_STP_CTRL);
-}
-
-static void ksz8_flush_dyn_mac_table(struct ksz_device *dev, int port)
+void ksz8_flush_dyn_mac_table(struct ksz_device *dev, int port)
 {
        u8 learn[DSA_MAX_PORTS];
        int first, index, cnt;
        struct ksz_port *p;
+       const u16 *regs;
+
+       regs = dev->info->regs;
 
        if ((uint)port < dev->info->port_cnt) {
                first = port;
@@ -954,9 +825,9 @@ static void ksz8_flush_dyn_mac_table(struct ksz_device *dev, int port)
                p = &dev->ports[index];
                if (!p->on)
                        continue;
-               ksz_pread8(dev, index, P_STP_CTRL, &learn[index]);
+               ksz_pread8(dev, index, regs[P_STP_CTRL], &learn[index]);
                if (!(learn[index] & PORT_LEARN_DISABLE))
-                       ksz_pwrite8(dev, index, P_STP_CTRL,
+                       ksz_pwrite8(dev, index, regs[P_STP_CTRL],
                                    learn[index] | PORT_LEARN_DISABLE);
        }
        ksz_cfg(dev, S_FLUSH_TABLE_CTRL, SW_FLUSH_DYN_MAC_TABLE, true);
@@ -965,15 +836,113 @@ static void ksz8_flush_dyn_mac_table(struct ksz_device *dev, int port)
                if (!p->on)
                        continue;
                if (!(learn[index] & PORT_LEARN_DISABLE))
-                       ksz_pwrite8(dev, index, P_STP_CTRL, learn[index]);
+                       ksz_pwrite8(dev, index, regs[P_STP_CTRL], learn[index]);
        }
 }
 
-static int ksz8_port_vlan_filtering(struct dsa_switch *ds, int port, bool flag,
-                                   struct netlink_ext_ack *extack)
+int ksz8_fdb_dump(struct ksz_device *dev, int port,
+                 dsa_fdb_dump_cb_t *cb, void *data)
 {
-       struct ksz_device *dev = ds->priv;
+       int ret = 0;
+       u16 i = 0;
+       u16 entries = 0;
+       u8 timestamp = 0;
+       u8 fid;
+       u8 member;
+       struct alu_struct alu;
+
+       do {
+               alu.is_static = false;
+               ret = ksz8_r_dyn_mac_table(dev, i, alu.mac, &fid, &member,
+                                          &timestamp, &entries);
+               if (!ret && (member & BIT(port))) {
+                       ret = cb(alu.mac, alu.fid, alu.is_static, data);
+                       if (ret)
+                               break;
+               }
+               i++;
+       } while (i < entries);
+       if (i >= entries)
+               ret = 0;
+
+       return ret;
+}
+
+int ksz8_mdb_add(struct ksz_device *dev, int port,
+                const struct switchdev_obj_port_mdb *mdb, struct dsa_db db)
+{
+       struct alu_struct alu;
+       int index;
+       int empty = 0;
+
+       alu.port_forward = 0;
+       for (index = 0; index < dev->info->num_statics; index++) {
+               if (!ksz8_r_sta_mac_table(dev, index, &alu)) {
+                       /* Found one already in static MAC table. */
+                       if (!memcmp(alu.mac, mdb->addr, ETH_ALEN) &&
+                           alu.fid == mdb->vid)
+                               break;
+               /* Remember the first empty entry. */
+               } else if (!empty) {
+                       empty = index + 1;
+               }
+       }
+
+       /* no available entry */
+       if (index == dev->info->num_statics && !empty)
+               return -ENOSPC;
+
+       /* add entry */
+       if (index == dev->info->num_statics) {
+               index = empty - 1;
+               memset(&alu, 0, sizeof(alu));
+               memcpy(alu.mac, mdb->addr, ETH_ALEN);
+               alu.is_static = true;
+       }
+       alu.port_forward |= BIT(port);
+       if (mdb->vid) {
+               alu.is_use_fid = true;
+
+               /* Need a way to map VID to FID. */
+               alu.fid = mdb->vid;
+       }
+       ksz8_w_sta_mac_table(dev, index, &alu);
+
+       return 0;
+}
+
+int ksz8_mdb_del(struct ksz_device *dev, int port,
+                const struct switchdev_obj_port_mdb *mdb, struct dsa_db db)
+{
+       struct alu_struct alu;
+       int index;
+
+       for (index = 0; index < dev->info->num_statics; index++) {
+               if (!ksz8_r_sta_mac_table(dev, index, &alu)) {
+                       /* Found one already in static MAC table. */
+                       if (!memcmp(alu.mac, mdb->addr, ETH_ALEN) &&
+                           alu.fid == mdb->vid)
+                               break;
+               }
+       }
+
+       /* no available entry */
+       if (index == dev->info->num_statics)
+               goto exit;
+
+       /* clear port */
+       alu.port_forward &= ~BIT(port);
+       if (!alu.port_forward)
+               alu.is_static = false;
+       ksz8_w_sta_mac_table(dev, index, &alu);
+
+exit:
+       return 0;
+}
 
+int ksz8_port_vlan_filtering(struct ksz_device *dev, int port, bool flag,
+                            struct netlink_ext_ack *extack)
+{
        if (ksz_is_ksz88x3(dev))
                return -ENOTSUPP;
 
@@ -998,12 +967,11 @@ static void ksz8_port_enable_pvid(struct ksz_device *dev, int port, bool state)
        }
 }
 
-static int ksz8_port_vlan_add(struct dsa_switch *ds, int port,
-                             const struct switchdev_obj_port_vlan *vlan,
-                             struct netlink_ext_ack *extack)
+int ksz8_port_vlan_add(struct ksz_device *dev, int port,
+                      const struct switchdev_obj_port_vlan *vlan,
+                      struct netlink_ext_ack *extack)
 {
        bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
-       struct ksz_device *dev = ds->priv;
        struct ksz_port *p = &dev->ports[port];
        u16 data, new_pvid = 0;
        u8 fid, member, valid;
@@ -1071,10 +1039,9 @@ static int ksz8_port_vlan_add(struct dsa_switch *ds, int port,
        return 0;
 }
 
-static int ksz8_port_vlan_del(struct dsa_switch *ds, int port,
-                             const struct switchdev_obj_port_vlan *vlan)
+int ksz8_port_vlan_del(struct ksz_device *dev, int port,
+                      const struct switchdev_obj_port_vlan *vlan)
 {
-       struct ksz_device *dev = ds->priv;
        u16 data, pvid;
        u8 fid, member, valid;
 
@@ -1104,12 +1071,10 @@ static int ksz8_port_vlan_del(struct dsa_switch *ds, int port,
        return 0;
 }
 
-static int ksz8_port_mirror_add(struct dsa_switch *ds, int port,
-                               struct dsa_mall_mirror_tc_entry *mirror,
-                               bool ingress, struct netlink_ext_ack *extack)
+int ksz8_port_mirror_add(struct ksz_device *dev, int port,
+                        struct dsa_mall_mirror_tc_entry *mirror,
+                        bool ingress, struct netlink_ext_ack *extack)
 {
-       struct ksz_device *dev = ds->priv;
-
        if (ingress) {
                ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_RX, true);
                dev->mirror_rx |= BIT(port);
@@ -1128,10 +1093,9 @@ static int ksz8_port_mirror_add(struct dsa_switch *ds, int port,
        return 0;
 }
 
-static void ksz8_port_mirror_del(struct dsa_switch *ds, int port,
-                                struct dsa_mall_mirror_tc_entry *mirror)
+void ksz8_port_mirror_del(struct ksz_device *dev, int port,
+                         struct dsa_mall_mirror_tc_entry *mirror)
 {
-       struct ksz_device *dev = ds->priv;
        u8 data;
 
        if (mirror->ingress) {
@@ -1197,14 +1161,13 @@ static void ksz8795_cpu_interface_select(struct ksz_device *dev, int port)
        p->phydev.duplex = 1;
 }
 
-static void ksz8_port_setup(struct ksz_device *dev, int port, bool cpu_port)
+void ksz8_port_setup(struct ksz_device *dev, int port, bool cpu_port)
 {
        struct dsa_switch *ds = dev->ds;
-       struct ksz8 *ksz8 = dev->priv;
        const u32 *masks;
        u8 member;
 
-       masks = ksz8->masks;
+       masks = dev->info->masks;
 
        /* enable broadcast storm limit */
        ksz_port_cfg(dev, port, P_BCAST_STORM_CTRL, PORT_BROADCAST_STORM, true);
@@ -1234,17 +1197,17 @@ static void ksz8_port_setup(struct ksz_device *dev, int port, bool cpu_port)
        ksz8_cfg_port_member(dev, port, member);
 }
 
-static void ksz8_config_cpu_port(struct dsa_switch *ds)
+void ksz8_config_cpu_port(struct dsa_switch *ds)
 {
        struct ksz_device *dev = ds->priv;
-       struct ksz8 *ksz8 = dev->priv;
-       const u8 *regs = ksz8->regs;
        struct ksz_port *p;
        const u32 *masks;
+       const u16 *regs;
        u8 remote;
        int i;
 
-       masks = ksz8->masks;
+       masks = dev->info->masks;
+       regs = dev->info->regs;
 
        /* Switch marks the maximum frame with extra byte as oversize. */
        ksz_cfg(dev, REG_SW_CTRL_2, SW_LEGAL_PACKET_DISABLE, true);
@@ -1258,7 +1221,7 @@ static void ksz8_config_cpu_port(struct dsa_switch *ds)
        for (i = 0; i < dev->phy_port_cnt; i++) {
                p = &dev->ports[i];
 
-               ksz8_port_stp_state_set(ds, i, BR_STATE_DISABLED);
+               ksz_port_stp_state_set(ds, i, BR_STATE_DISABLED);
 
                /* Last port may be disabled. */
                if (i == dev->phy_port_cnt)
@@ -1272,15 +1235,15 @@ static void ksz8_config_cpu_port(struct dsa_switch *ds)
                        continue;
                if (!ksz_is_ksz88x3(dev)) {
                        ksz_pread8(dev, i, regs[P_REMOTE_STATUS], &remote);
-                       if (remote & PORT_FIBER_MODE)
+                       if (remote & KSZ8_PORT_FIBER_MODE)
                                p->fiber = 1;
                }
                if (p->fiber)
-                       ksz_port_cfg(dev, i, P_STP_CTRL, PORT_FORCE_FLOW_CTRL,
-                                    true);
+                       ksz_port_cfg(dev, i, regs[P_STP_CTRL],
+                                    PORT_FORCE_FLOW_CTRL, true);
                else
-                       ksz_port_cfg(dev, i, P_STP_CTRL, PORT_FORCE_FLOW_CTRL,
-                                    false);
+                       ksz_port_cfg(dev, i, regs[P_STP_CTRL],
+                                    PORT_FORCE_FLOW_CTRL, false);
        }
 }
 
@@ -1301,22 +1264,26 @@ static int ksz8_handle_global_errata(struct dsa_switch *ds)
        return ret;
 }
 
-static int ksz8_setup(struct dsa_switch *ds)
+int ksz8_enable_stp_addr(struct ksz_device *dev)
 {
-       struct ksz_device *dev = ds->priv;
        struct alu_struct alu;
-       int i, ret = 0;
 
-       dev->vlan_cache = devm_kcalloc(dev->dev, sizeof(struct vlan_table),
-                                      dev->info->num_vlans, GFP_KERNEL);
-       if (!dev->vlan_cache)
-               return -ENOMEM;
+       /* Setup STP address for STP operation. */
+       memset(&alu, 0, sizeof(alu));
+       ether_addr_copy(alu.mac, eth_stp_addr);
+       alu.is_static = true;
+       alu.is_override = true;
+       alu.port_forward = dev->info->cpu_ports;
 
-       ret = ksz8_reset_switch(dev);
-       if (ret) {
-               dev_err(ds->dev, "failed to reset switch\n");
-               return ret;
-       }
+       ksz8_w_sta_mac_table(dev, 0, &alu);
+
+       return 0;
+}
+
+int ksz8_setup(struct dsa_switch *ds)
+{
+       struct ksz_device *dev = ds->priv;
+       int i;
 
        ksz_cfg(dev, S_REPLACE_VID_CTRL, SW_FLOW_CTRL, true);
 
@@ -1335,10 +1302,6 @@ static int ksz8_setup(struct dsa_switch *ds)
                           UNICAST_VLAN_BOUNDARY | NO_EXC_COLLISION_DROP,
                           UNICAST_VLAN_BOUNDARY | NO_EXC_COLLISION_DROP);
 
-       ksz8_config_cpu_port(ds);
-
-       ksz_cfg(dev, REG_SW_CTRL_2, MULTICAST_STORM_DISABLE, true);
-
        ksz_cfg(dev, S_REPLACE_VID_CTRL, SW_REPLACE_VID, false);
 
        ksz_cfg(dev, S_MIRROR_CTRL, SW_MIRROR_RX_TX, false);
@@ -1346,38 +1309,15 @@ static int ksz8_setup(struct dsa_switch *ds)
        if (!ksz_is_ksz88x3(dev))
                ksz_cfg(dev, REG_SW_CTRL_19, SW_INS_TAG_ENABLE, true);
 
-       /* set broadcast storm protection 10% rate */
-       regmap_update_bits(dev->regmap[1], S_REPLACE_VID_CTRL,
-                          BROADCAST_STORM_RATE,
-                          (BROADCAST_STORM_VALUE *
-                          BROADCAST_STORM_PROT_RATE) / 100);
-
        for (i = 0; i < (dev->info->num_vlans / 4); i++)
                ksz8_r_vlan_entries(dev, i);
 
-       /* Setup STP address for STP operation. */
-       memset(&alu, 0, sizeof(alu));
-       ether_addr_copy(alu.mac, eth_stp_addr);
-       alu.is_static = true;
-       alu.is_override = true;
-       alu.port_forward = dev->info->cpu_ports;
-
-       ksz8_w_sta_mac_table(dev, 0, &alu);
-
-       ksz_init_mib_timer(dev);
-
-       ds->configure_vlan_while_not_filtering = false;
-
        return ksz8_handle_global_errata(ds);
 }
 
-static void ksz8_get_caps(struct dsa_switch *ds, int port,
-                         struct phylink_config *config)
+void ksz8_get_caps(struct ksz_device *dev, int port,
+                  struct phylink_config *config)
 {
-       struct ksz_device *dev = ds->priv;
-
-       ksz_phylink_get_caps(ds, port, config);
-
        config->mac_capabilities = MAC_10 | MAC_100;
 
        /* Silicon Errata Sheet (DS80000830A):
@@ -1393,102 +1333,17 @@ static void ksz8_get_caps(struct dsa_switch *ds, int port,
                config->mac_capabilities |= MAC_ASYM_PAUSE;
 }
 
-static const struct dsa_switch_ops ksz8_switch_ops = {
-       .get_tag_protocol       = ksz8_get_tag_protocol,
-       .get_phy_flags          = ksz8_sw_get_phy_flags,
-       .setup                  = ksz8_setup,
-       .phy_read               = ksz_phy_read16,
-       .phy_write              = ksz_phy_write16,
-       .phylink_get_caps       = ksz8_get_caps,
-       .phylink_mac_link_down  = ksz_mac_link_down,
-       .port_enable            = ksz_enable_port,
-       .get_strings            = ksz_get_strings,
-       .get_ethtool_stats      = ksz_get_ethtool_stats,
-       .get_sset_count         = ksz_sset_count,
-       .port_bridge_join       = ksz_port_bridge_join,
-       .port_bridge_leave      = ksz_port_bridge_leave,
-       .port_stp_state_set     = ksz8_port_stp_state_set,
-       .port_fast_age          = ksz_port_fast_age,
-       .port_vlan_filtering    = ksz8_port_vlan_filtering,
-       .port_vlan_add          = ksz8_port_vlan_add,
-       .port_vlan_del          = ksz8_port_vlan_del,
-       .port_fdb_dump          = ksz_port_fdb_dump,
-       .port_mdb_add           = ksz_port_mdb_add,
-       .port_mdb_del           = ksz_port_mdb_del,
-       .port_mirror_add        = ksz8_port_mirror_add,
-       .port_mirror_del        = ksz8_port_mirror_del,
-};
-
-static u32 ksz8_get_port_addr(int port, int offset)
+u32 ksz8_get_port_addr(int port, int offset)
 {
        return PORT_CTRL_ADDR(port, offset);
 }
 
-static int ksz8_switch_detect(struct ksz_device *dev)
+int ksz8_switch_init(struct ksz_device *dev)
 {
-       u8 id1, id2;
-       u16 id16;
-       int ret;
-
-       /* read chip id */
-       ret = ksz_read16(dev, REG_CHIP_ID0, &id16);
-       if (ret)
-               return ret;
-
-       id1 = id16 >> 8;
-       id2 = id16 & SW_CHIP_ID_M;
-
-       switch (id1) {
-       case KSZ87_FAMILY_ID:
-               if ((id2 != CHIP_ID_94 && id2 != CHIP_ID_95))
-                       return -ENODEV;
-
-               if (id2 == CHIP_ID_95) {
-                       u8 val;
-
-                       id2 = 0x95;
-                       ksz_read8(dev, REG_PORT_STATUS_0, &val);
-                       if (val & PORT_FIBER_MODE)
-                               id2 = 0x65;
-               } else if (id2 == CHIP_ID_94) {
-                       id2 = 0x94;
-               }
-               break;
-       case KSZ88_FAMILY_ID:
-               if (id2 != CHIP_ID_63)
-                       return -ENODEV;
-               break;
-       default:
-               dev_err(dev->dev, "invalid family id: %d\n", id1);
-               return -ENODEV;
-       }
-       id16 &= ~0xff;
-       id16 |= id2;
-       dev->chip_id = id16;
-
-       return 0;
-}
-
-static int ksz8_switch_init(struct ksz_device *dev)
-{
-       struct ksz8 *ksz8 = dev->priv;
-
-       dev->ds->ops = &ksz8_switch_ops;
-
        dev->cpu_port = fls(dev->info->cpu_ports) - 1;
        dev->phy_port_cnt = dev->info->port_cnt - 1;
        dev->port_mask = (BIT(dev->phy_port_cnt) - 1) | dev->info->cpu_ports;
 
-       if (ksz_is_ksz88x3(dev)) {
-               ksz8->regs = ksz8863_regs;
-               ksz8->masks = ksz8863_masks;
-               ksz8->shifts = ksz8863_shifts;
-       } else {
-               ksz8->regs = ksz8795_regs;
-               ksz8->masks = ksz8795_masks;
-               ksz8->shifts = ksz8795_shifts;
-       }
-
        /* We rely on software untagging on the CPU port, so that we
         * can support both tagged and untagged VLANs
         */
@@ -1502,37 +1357,11 @@ static int ksz8_switch_init(struct ksz_device *dev)
        return 0;
 }
 
-static void ksz8_switch_exit(struct ksz_device *dev)
+void ksz8_switch_exit(struct ksz_device *dev)
 {
        ksz8_reset_switch(dev);
 }
 
-static const struct ksz_dev_ops ksz8_dev_ops = {
-       .get_port_addr = ksz8_get_port_addr,
-       .cfg_port_member = ksz8_cfg_port_member,
-       .flush_dyn_mac_table = ksz8_flush_dyn_mac_table,
-       .port_setup = ksz8_port_setup,
-       .r_phy = ksz8_r_phy,
-       .w_phy = ksz8_w_phy,
-       .r_dyn_mac_table = ksz8_r_dyn_mac_table,
-       .r_sta_mac_table = ksz8_r_sta_mac_table,
-       .w_sta_mac_table = ksz8_w_sta_mac_table,
-       .r_mib_cnt = ksz8_r_mib_cnt,
-       .r_mib_pkt = ksz8_r_mib_pkt,
-       .freeze_mib = ksz8_freeze_mib,
-       .port_init_cnt = ksz8_port_init_cnt,
-       .shutdown = ksz8_reset_switch,
-       .detect = ksz8_switch_detect,
-       .init = ksz8_switch_init,
-       .exit = ksz8_switch_exit,
-};
-
-int ksz8_switch_register(struct ksz_device *dev)
-{
-       return ksz_switch_register(dev, &ksz8_dev_ops);
-}
-EXPORT_SYMBOL(ksz8_switch_register);
-
 MODULE_AUTHOR("Tristram Ha <Tristram.Ha@microchip.com>");
 MODULE_DESCRIPTION("Microchip KSZ8795 Series Switch DSA Driver");
 MODULE_LICENSE("GPL");
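
The ksz8795.c hunks above all share one pattern: the regs/masks/shifts tables that used to hang off the ksz8 private struct are now reached through dev->info, so the helpers no longer touch dev->priv. A minimal sketch of the resulting lookup idiom, using only identifiers visible in the hunks above (the function name is illustrative, not part of the patch):

/* Illustrative sketch: resolve a per-chip register offset through the
 * shared chip-data table, as the refactored ksz8 helpers above do.
 */
static void example_read_stp_state(struct ksz_device *dev, int port, u8 *data)
{
	const u16 *regs = dev->info->regs;	/* per-chip register map */

	ksz_pread8(dev, port, regs[P_STP_CTRL], data);
}
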
index 4109433..a848eb4 100644
 #define KS_PRIO_M                      0x3
 #define KS_PRIO_S                      2
 
-#define REG_CHIP_ID0                   0x00
-
-#define KSZ87_FAMILY_ID                        0x87
-#define KSZ88_FAMILY_ID                        0x88
-
-#define REG_CHIP_ID1                   0x01
-
-#define SW_CHIP_ID_M                   0xF0
-#define SW_CHIP_ID_S                   4
 #define SW_REVISION_M                  0x0E
 #define SW_REVISION_S                  1
-#define SW_START                       0x01
-
-#define CHIP_ID_94                     0x60
-#define CHIP_ID_95                     0x90
-#define CHIP_ID_63                     0x30
 
 #define KSZ8863_REG_SW_RESET           0x43
 
@@ -57,7 +43,6 @@
 #define REG_SW_CTRL_2                  0x04
 
 #define UNICAST_VLAN_BOUNDARY          BIT(7)
-#define MULTICAST_STORM_DISABLE                BIT(6)
 #define SW_BACK_PRESSURE               BIT(5)
 #define FAIR_FLOW_CTRL                 BIT(4)
 #define NO_EXC_COLLISION_DROP          BIT(3)
 #define SW_FLOW_CTRL                   BIT(5)
 #define SW_10_MBIT                     BIT(4)
 #define SW_REPLACE_VID                 BIT(3)
-#define BROADCAST_STORM_RATE_HI                0x07
 
 #define REG_SW_CTRL_5                  0x07
 
-#define BROADCAST_STORM_RATE_LO                0xFF
-#define BROADCAST_STORM_RATE           0x07FF
-
 #define REG_SW_CTRL_6                  0x08
 
 #define SW_MIB_COUNTER_FLUSH           BIT(7)
 #define REG_PORT_4_STATUS_0            0x48
 
 /* For KSZ8765. */
-#define PORT_FIBER_MODE                        BIT(7)
-
 #define PORT_REMOTE_ASYM_PAUSE         BIT(5)
 #define PORT_REMOTE_SYM_PAUSE          BIT(4)
 #define PORT_REMOTE_100BTX_FD          BIT(3)
 
 #define REG_PORT_CTRL_5                        0x05
 
-#define REG_PORT_STATUS_0              0x08
 #define REG_PORT_STATUS_1              0x09
 #define REG_PORT_LINK_MD_CTRL          0x0A
 #define REG_PORT_LINK_MD_RESULT                0x0B
 #define P_TAG_CTRL                     REG_PORT_CTRL_0
 #define P_MIRROR_CTRL                  REG_PORT_CTRL_1
 #define P_802_1P_CTRL                  REG_PORT_CTRL_2
-#define P_STP_CTRL                     REG_PORT_CTRL_2
 #define P_PASS_ALL_CTRL                        REG_PORT_CTRL_12
 #define P_INS_SRC_PVID_CTRL            REG_PORT_CTRL_12
 #define P_DROP_TAG_CTRL                        REG_PORT_CTRL_13
 #define REG_IND_EEE_GLOB2_LO           0x34
 #define REG_IND_EEE_GLOB2_HI           0x35
 
-/* Driver set switch broadcast storm protection at 10% rate. */
-#define BROADCAST_STORM_PROT_RATE      10
-
-/* 148,800 frames * 67 ms / 100 */
-#define BROADCAST_STORM_VALUE          9969
-
 /**
  * MIB_COUNTER_VALUE                   00-00000000-3FFFFFFF
  * MIB_TOTAL_BYTES                     00-0000000F-FFFFFFFF
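
The broadcast-storm macros removed from this header encoded a fixed 10% protection rate, matching the regmap_update_bits() call deleted from ksz8_setup() above; the actual programming presumably moves to common code outside this diff. A sketch of the arithmetic those macros expressed (values copied from the deleted lines; the helper name is illustrative):

/* Sketch of the removed rate computation: 148,800 frames over a 67 ms
 * interval is ~9969, and 10% of that -- 996 -- was the raw value
 * written to the BROADCAST_STORM_RATE field.
 */
#define BROADCAST_STORM_PROT_RATE	10	/* percent */
#define BROADCAST_STORM_VALUE		9969	/* 148,800 frames * 67 ms / 100 */

static inline unsigned int broadcast_storm_raw_rate(void)
{
	return (BROADCAST_STORM_VALUE * BROADCAST_STORM_PROT_RATE) / 100; /* 996 */
}
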
diff --git a/drivers/net/dsa/microchip/ksz8795_spi.c b/drivers/net/dsa/microchip/ksz8795_spi.c
deleted file mode 100644
index 961a74c..0000000
+++ /dev/null
@@ -1,172 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Microchip KSZ8795 series register access through SPI
- *
- * Copyright (C) 2017 Microchip Technology Inc.
- *     Tristram Ha <Tristram.Ha@microchip.com>
- */
-
-#include <asm/unaligned.h>
-
-#include <linux/delay.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/regmap.h>
-#include <linux/spi/spi.h>
-
-#include "ksz8.h"
-#include "ksz_common.h"
-
-#define KSZ8795_SPI_ADDR_SHIFT                 12
-#define KSZ8795_SPI_ADDR_ALIGN                 3
-#define KSZ8795_SPI_TURNAROUND_SHIFT           1
-
-#define KSZ8863_SPI_ADDR_SHIFT                 8
-#define KSZ8863_SPI_ADDR_ALIGN                 8
-#define KSZ8863_SPI_TURNAROUND_SHIFT           0
-
-KSZ_REGMAP_TABLE(ksz8795, 16, KSZ8795_SPI_ADDR_SHIFT,
-                KSZ8795_SPI_TURNAROUND_SHIFT, KSZ8795_SPI_ADDR_ALIGN);
-
-KSZ_REGMAP_TABLE(ksz8863, 16, KSZ8863_SPI_ADDR_SHIFT,
-                KSZ8863_SPI_TURNAROUND_SHIFT, KSZ8863_SPI_ADDR_ALIGN);
-
-static int ksz8795_spi_probe(struct spi_device *spi)
-{
-       const struct regmap_config *regmap_config;
-       const struct ksz_chip_data *chip;
-       struct device *ddev = &spi->dev;
-       struct regmap_config rc;
-       struct ksz_device *dev;
-       struct ksz8 *ksz8;
-       int i, ret = 0;
-
-       ksz8 = devm_kzalloc(&spi->dev, sizeof(struct ksz8), GFP_KERNEL);
-       if (!ksz8)
-               return -ENOMEM;
-
-       ksz8->priv = spi;
-
-       dev = ksz_switch_alloc(&spi->dev, ksz8);
-       if (!dev)
-               return -ENOMEM;
-
-       chip = device_get_match_data(ddev);
-       if (!chip)
-               return -EINVAL;
-
-       if (chip->chip_id == KSZ8830_CHIP_ID)
-               regmap_config = ksz8863_regmap_config;
-       else
-               regmap_config = ksz8795_regmap_config;
-
-       for (i = 0; i < ARRAY_SIZE(ksz8795_regmap_config); i++) {
-               rc = regmap_config[i];
-               rc.lock_arg = &dev->regmap_mutex;
-               dev->regmap[i] = devm_regmap_init_spi(spi, &rc);
-               if (IS_ERR(dev->regmap[i])) {
-                       ret = PTR_ERR(dev->regmap[i]);
-                       dev_err(&spi->dev,
-                               "Failed to initialize regmap%i: %d\n",
-                               regmap_config[i].val_bits, ret);
-                       return ret;
-               }
-       }
-
-       if (spi->dev.platform_data)
-               dev->pdata = spi->dev.platform_data;
-
-       /* setup spi */
-       spi->mode = SPI_MODE_3;
-       ret = spi_setup(spi);
-       if (ret)
-               return ret;
-
-       ret = ksz8_switch_register(dev);
-
-       /* Main DSA driver may not be started yet. */
-       if (ret)
-               return ret;
-
-       spi_set_drvdata(spi, dev);
-
-       return 0;
-}
-
-static void ksz8795_spi_remove(struct spi_device *spi)
-{
-       struct ksz_device *dev = spi_get_drvdata(spi);
-
-       if (dev)
-               ksz_switch_remove(dev);
-
-       spi_set_drvdata(spi, NULL);
-}
-
-static void ksz8795_spi_shutdown(struct spi_device *spi)
-{
-       struct ksz_device *dev = spi_get_drvdata(spi);
-
-       if (!dev)
-               return;
-
-       if (dev->dev_ops->shutdown)
-               dev->dev_ops->shutdown(dev);
-
-       dsa_switch_shutdown(dev->ds);
-
-       spi_set_drvdata(spi, NULL);
-}
-
-static const struct of_device_id ksz8795_dt_ids[] = {
-       {
-               .compatible = "microchip,ksz8765",
-               .data = &ksz_switch_chips[KSZ8765]
-       },
-       {
-               .compatible = "microchip,ksz8794",
-               .data = &ksz_switch_chips[KSZ8794]
-       },
-       {
-               .compatible = "microchip,ksz8795",
-               .data = &ksz_switch_chips[KSZ8795]
-       },
-       {
-               .compatible = "microchip,ksz8863",
-               .data = &ksz_switch_chips[KSZ8830]
-       },
-       {
-               .compatible = "microchip,ksz8873",
-               .data = &ksz_switch_chips[KSZ8830]
-       },
-       {},
-};
-MODULE_DEVICE_TABLE(of, ksz8795_dt_ids);
-
-static const struct spi_device_id ksz8795_spi_ids[] = {
-       { "ksz8765" },
-       { "ksz8794" },
-       { "ksz8795" },
-       { "ksz8863" },
-       { "ksz8873" },
-       { },
-};
-MODULE_DEVICE_TABLE(spi, ksz8795_spi_ids);
-
-static struct spi_driver ksz8795_spi_driver = {
-       .driver = {
-               .name   = "ksz8795-switch",
-               .owner  = THIS_MODULE,
-               .of_match_table = of_match_ptr(ksz8795_dt_ids),
-       },
-       .id_table = ksz8795_spi_ids,
-       .probe  = ksz8795_spi_probe,
-       .remove = ksz8795_spi_remove,
-       .shutdown = ksz8795_spi_shutdown,
-};
-
-module_spi_driver(ksz8795_spi_driver);
-
-MODULE_AUTHOR("Tristram Ha <Tristram.Ha@microchip.com>");
-MODULE_DESCRIPTION("Microchip KSZ8795 Series Switch SPI Driver");
-MODULE_LICENSE("GPL");
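
With this SPI shim deleted (its KSZ9477 counterpart goes the same way further down), bus glue hands the bus device straight to the common core, as the ksz8863_smi.c hunks below show: dev->priv now holds the mdio/spi device itself, and the per-chip *_switch_register() exports disappear. A sketch of the slimmed-down probe flow, assuming only the two common-core calls visible in this diff (regmap setup and error paths trimmed):

/* Sketch: bus probe after the refactor -- no intermediate ksz8
 * wrapper struct, one common registration entry point.
 */
static int example_bus_probe(struct spi_device *spi)
{
	struct ksz_device *dev;

	dev = ksz_switch_alloc(&spi->dev, spi);	/* dev->priv = bus device */
	if (!dev)
		return -ENOMEM;

	return ksz_switch_register(dev);	/* common detect + init path */
}
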
index b6f99e6..5247fdf 100644
@@ -26,11 +26,9 @@ static int ksz8863_mdio_read(void *ctx, const void *reg_buf, size_t reg_len,
        struct mdio_device *mdev;
        u8 reg = *(u8 *)reg_buf;
        u8 *val = val_buf;
-       struct ksz8 *ksz8;
        int i, ret = 0;
 
-       ksz8 = dev->priv;
-       mdev = ksz8->priv;
+       mdev = dev->priv;
 
        mutex_lock_nested(&mdev->bus->mdio_lock, MDIO_MUTEX_NESTED);
        for (i = 0; i < val_len; i++) {
@@ -55,13 +53,11 @@ static int ksz8863_mdio_write(void *ctx, const void *data, size_t count)
 {
        struct ksz_device *dev = ctx;
        struct mdio_device *mdev;
-       struct ksz8 *ksz8;
        int i, ret = 0;
        u32 reg;
        u8 *val;
 
-       ksz8 = dev->priv;
-       mdev = ksz8->priv;
+       mdev = dev->priv;
 
        val = (u8 *)(data + 4);
        reg = *(u32 *)data;
@@ -142,17 +138,10 @@ static int ksz8863_smi_probe(struct mdio_device *mdiodev)
 {
        struct regmap_config rc;
        struct ksz_device *dev;
-       struct ksz8 *ksz8;
        int ret;
        int i;
 
-       ksz8 = devm_kzalloc(&mdiodev->dev, sizeof(struct ksz8), GFP_KERNEL);
-       if (!ksz8)
-               return -ENOMEM;
-
-       ksz8->priv = mdiodev;
-
-       dev = ksz_switch_alloc(&mdiodev->dev, ksz8);
+       dev = ksz_switch_alloc(&mdiodev->dev, mdiodev);
        if (!dev)
                return -ENOMEM;
 
@@ -174,7 +163,7 @@ static int ksz8863_smi_probe(struct mdio_device *mdiodev)
        if (mdiodev->dev.platform_data)
                dev->pdata = mdiodev->dev.platform_data;
 
-       ret = ksz8_switch_register(dev);
+       ret = ksz_switch_register(dev);
 
        /* Main DSA driver may not be started yet. */
        if (ret)
index ab40b70..6453642 100644
@@ -17,6 +17,7 @@
 
 #include "ksz9477_reg.h"
 #include "ksz_common.h"
+#include "ksz9477.h"
 
 /* Used with variable features to indicate capabilities. */
 #define GBIT_SUPPORT                   BIT(0)
@@ -47,9 +48,8 @@ static void ksz9477_port_cfg32(struct ksz_device *dev, int port, int offset,
                           bits, set ? bits : 0);
 }
 
-static int ksz9477_change_mtu(struct dsa_switch *ds, int port, int mtu)
+int ksz9477_change_mtu(struct ksz_device *dev, int port, int mtu)
 {
-       struct ksz_device *dev = ds->priv;
        u16 frame_size, max_frame = 0;
        int i;
 
@@ -65,7 +65,7 @@ static int ksz9477_change_mtu(struct dsa_switch *ds, int port, int mtu)
                                  REG_SW_MTU_MASK, max_frame);
 }
 
-static int ksz9477_max_mtu(struct dsa_switch *ds, int port)
+int ksz9477_max_mtu(struct ksz_device *dev, int port)
 {
        return KSZ9477_MAX_FRAME_SIZE - VLAN_ETH_HLEN - ETH_FCS_LEN;
 }
@@ -175,7 +175,7 @@ static int ksz9477_wait_alu_sta_ready(struct ksz_device *dev)
                                        10, 1000);
 }
 
-static int ksz9477_reset_switch(struct ksz_device *dev)
+int ksz9477_reset_switch(struct ksz_device *dev)
 {
        u8 data8;
        u32 data32;
@@ -198,12 +198,6 @@ static int ksz9477_reset_switch(struct ksz_device *dev)
        ksz_write32(dev, REG_SW_PORT_INT_MASK__4, 0x7F);
        ksz_read32(dev, REG_SW_PORT_INT_STATUS__4, &data32);
 
-       /* set broadcast storm protection 10% rate */
-       regmap_update_bits(dev->regmap[1], REG_SW_MAC_CTRL_2,
-                          BROADCAST_STORM_RATE,
-                          (BROADCAST_STORM_VALUE *
-                          BROADCAST_STORM_PROT_RATE) / 100);
-
        data8 = SW_ENABLE_REFCLKO;
        if (dev->synclko_disable)
                data8 = 0;
@@ -214,8 +208,7 @@ static int ksz9477_reset_switch(struct ksz_device *dev)
        return 0;
 }
 
-static void ksz9477_r_mib_cnt(struct ksz_device *dev, int port, u16 addr,
-                             u64 *cnt)
+void ksz9477_r_mib_cnt(struct ksz_device *dev, int port, u16 addr, u64 *cnt)
 {
        struct ksz_port *p = &dev->ports[port];
        unsigned int val;
@@ -242,14 +235,14 @@ static void ksz9477_r_mib_cnt(struct ksz_device *dev, int port, u16 addr,
        *cnt += data;
 }
 
-static void ksz9477_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
-                             u64 *dropped, u64 *cnt)
+void ksz9477_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
+                      u64 *dropped, u64 *cnt)
 {
        addr = dev->info->mib_names[addr].index;
        ksz9477_r_mib_cnt(dev, port, addr, cnt);
 }
 
-static void ksz9477_freeze_mib(struct ksz_device *dev, int port, bool freeze)
+void ksz9477_freeze_mib(struct ksz_device *dev, int port, bool freeze)
 {
        u32 val = freeze ? MIB_COUNTER_FLUSH_FREEZE : 0;
        struct ksz_port *p = &dev->ports[port];
@@ -263,7 +256,7 @@ static void ksz9477_freeze_mib(struct ksz_device *dev, int port, bool freeze)
        mutex_unlock(&p->mib.cnt_mutex);
 }
 
-static void ksz9477_port_init_cnt(struct ksz_device *dev, int port)
+void ksz9477_port_init_cnt(struct ksz_device *dev, int port)
 {
        struct ksz_port_mib *mib = &dev->ports[port].mib;
 
@@ -276,21 +269,8 @@ static void ksz9477_port_init_cnt(struct ksz_device *dev, int port)
        mutex_unlock(&mib->cnt_mutex);
 }
 
-static enum dsa_tag_protocol ksz9477_get_tag_protocol(struct dsa_switch *ds,
-                                                     int port,
-                                                     enum dsa_tag_protocol mp)
+void ksz9477_r_phy(struct ksz_device *dev, u16 addr, u16 reg, u16 *data)
 {
-       enum dsa_tag_protocol proto = DSA_TAG_PROTO_KSZ9477;
-       struct ksz_device *dev = ds->priv;
-
-       if (dev->features & IS_9893)
-               proto = DSA_TAG_PROTO_KSZ9893;
-       return proto;
-}
-
-static int ksz9477_phy_read16(struct dsa_switch *ds, int addr, int reg)
-{
-       struct ksz_device *dev = ds->priv;
        u16 val = 0xffff;
 
        /* No real PHY after this. Simulate the PHY.
@@ -335,40 +315,30 @@ static int ksz9477_phy_read16(struct dsa_switch *ds, int addr, int reg)
                ksz_pread16(dev, addr, 0x100 + (reg << 1), &val);
        }
 
-       return val;
+       *data = val;
 }
 
-static int ksz9477_phy_write16(struct dsa_switch *ds, int addr, int reg,
-                              u16 val)
+void ksz9477_w_phy(struct ksz_device *dev, u16 addr, u16 reg, u16 val)
 {
-       struct ksz_device *dev = ds->priv;
-
        /* No real PHY after this. */
        if (addr >= dev->phy_port_cnt)
-               return 0;
+               return;
 
        /* No gigabit support.  Do not write to this register. */
        if (!(dev->features & GBIT_SUPPORT) && reg == MII_CTRL1000)
-               return 0;
-       ksz_pwrite16(dev, addr, 0x100 + (reg << 1), val);
+               return;
 
-       return 0;
+       ksz_pwrite16(dev, addr, 0x100 + (reg << 1), val);
 }
 
-static void ksz9477_cfg_port_member(struct ksz_device *dev, int port,
-                                   u8 member)
+void ksz9477_cfg_port_member(struct ksz_device *dev, int port, u8 member)
 {
        ksz_pwrite32(dev, port, REG_PORT_VLAN_MEMBERSHIP__4, member);
 }
 
-static void ksz9477_port_stp_state_set(struct dsa_switch *ds, int port,
-                                      u8 state)
-{
-       ksz_port_stp_state_set(ds, port, state, P_STP_CTRL);
-}
-
-static void ksz9477_flush_dyn_mac_table(struct ksz_device *dev, int port)
+void ksz9477_flush_dyn_mac_table(struct ksz_device *dev, int port)
 {
+       const u16 *regs = dev->info->regs;
        u8 data;
 
        regmap_update_bits(dev->regmap[0], REG_SW_LUE_CTRL_2,
@@ -377,24 +347,21 @@ static void ksz9477_flush_dyn_mac_table(struct ksz_device *dev, int port)
 
        if (port < dev->info->port_cnt) {
                /* flush individual port */
-               ksz_pread8(dev, port, P_STP_CTRL, &data);
+               ksz_pread8(dev, port, regs[P_STP_CTRL], &data);
                if (!(data & PORT_LEARN_DISABLE))
-                       ksz_pwrite8(dev, port, P_STP_CTRL,
+                       ksz_pwrite8(dev, port, regs[P_STP_CTRL],
                                    data | PORT_LEARN_DISABLE);
                ksz_cfg(dev, S_FLUSH_TABLE_CTRL, SW_FLUSH_DYN_MAC_TABLE, true);
-               ksz_pwrite8(dev, port, P_STP_CTRL, data);
+               ksz_pwrite8(dev, port, regs[P_STP_CTRL], data);
        } else {
                /* flush all */
                ksz_cfg(dev, S_FLUSH_TABLE_CTRL, SW_FLUSH_STP_TABLE, true);
        }
 }
 
-static int ksz9477_port_vlan_filtering(struct dsa_switch *ds, int port,
-                                      bool flag,
-                                      struct netlink_ext_ack *extack)
+int ksz9477_port_vlan_filtering(struct ksz_device *dev, int port,
+                               bool flag, struct netlink_ext_ack *extack)
 {
-       struct ksz_device *dev = ds->priv;
-
        if (flag) {
                ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL,
                             PORT_VLAN_LOOKUP_VID_0, true);
@@ -408,11 +375,10 @@ static int ksz9477_port_vlan_filtering(struct dsa_switch *ds, int port,
        return 0;
 }
 
-static int ksz9477_port_vlan_add(struct dsa_switch *ds, int port,
-                                const struct switchdev_obj_port_vlan *vlan,
-                                struct netlink_ext_ack *extack)
+int ksz9477_port_vlan_add(struct ksz_device *dev, int port,
+                         const struct switchdev_obj_port_vlan *vlan,
+                         struct netlink_ext_ack *extack)
 {
-       struct ksz_device *dev = ds->priv;
        u32 vlan_table[3];
        bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
        int err;
@@ -445,10 +411,9 @@ static int ksz9477_port_vlan_add(struct dsa_switch *ds, int port,
        return 0;
 }
 
-static int ksz9477_port_vlan_del(struct dsa_switch *ds, int port,
-                                const struct switchdev_obj_port_vlan *vlan)
+int ksz9477_port_vlan_del(struct ksz_device *dev, int port,
+                         const struct switchdev_obj_port_vlan *vlan)
 {
-       struct ksz_device *dev = ds->priv;
        bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
        u32 vlan_table[3];
        u16 pvid;
@@ -479,11 +444,9 @@ static int ksz9477_port_vlan_del(struct dsa_switch *ds, int port,
        return 0;
 }
 
-static int ksz9477_port_fdb_add(struct dsa_switch *ds, int port,
-                               const unsigned char *addr, u16 vid,
-                               struct dsa_db db)
+int ksz9477_fdb_add(struct ksz_device *dev, int port,
+                   const unsigned char *addr, u16 vid, struct dsa_db db)
 {
-       struct ksz_device *dev = ds->priv;
        u32 alu_table[4];
        u32 data;
        int ret = 0;
@@ -537,11 +500,9 @@ exit:
        return ret;
 }
 
-static int ksz9477_port_fdb_del(struct dsa_switch *ds, int port,
-                               const unsigned char *addr, u16 vid,
-                               struct dsa_db db)
+int ksz9477_fdb_del(struct ksz_device *dev, int port,
+                   const unsigned char *addr, u16 vid, struct dsa_db db)
 {
-       struct ksz_device *dev = ds->priv;
        u32 alu_table[4];
        u32 data;
        int ret = 0;
@@ -628,10 +589,9 @@ static void ksz9477_convert_alu(struct alu_struct *alu, u32 *alu_table)
        alu->mac[5] = alu_table[3] & 0xFF;
 }
 
-static int ksz9477_port_fdb_dump(struct dsa_switch *ds, int port,
-                                dsa_fdb_dump_cb_t *cb, void *data)
+int ksz9477_fdb_dump(struct ksz_device *dev, int port,
+                    dsa_fdb_dump_cb_t *cb, void *data)
 {
-       struct ksz_device *dev = ds->priv;
        int ret = 0;
        u32 ksz_data;
        u32 alu_table[4];
@@ -680,17 +640,20 @@ exit:
        return ret;
 }
 
-static int ksz9477_port_mdb_add(struct dsa_switch *ds, int port,
-                               const struct switchdev_obj_port_mdb *mdb,
-                               struct dsa_db db)
+int ksz9477_mdb_add(struct ksz_device *dev, int port,
+                   const struct switchdev_obj_port_mdb *mdb, struct dsa_db db)
 {
-       struct ksz_device *dev = ds->priv;
        u32 static_table[4];
+       const u8 *shifts;
+       const u32 *masks;
        u32 data;
        int index;
        u32 mac_hi, mac_lo;
        int err = 0;
 
+       shifts = dev->info->shifts;
+       masks = dev->info->masks;
+
        mac_hi = ((mdb->addr[0] << 8) | mdb->addr[1]);
        mac_lo = ((mdb->addr[2] << 24) | (mdb->addr[3] << 16));
        mac_lo |= ((mdb->addr[4] << 8) | mdb->addr[5]);
@@ -699,8 +662,8 @@ static int ksz9477_port_mdb_add(struct dsa_switch *ds, int port,
 
        for (index = 0; index < dev->info->num_statics; index++) {
                /* find empty slot first */
-               data = (index << ALU_STAT_INDEX_S) |
-                       ALU_STAT_READ | ALU_STAT_START;
+               data = (index << shifts[ALU_STAT_INDEX]) |
+                       masks[ALU_STAT_READ] | ALU_STAT_START;
                ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
 
                /* wait to be finished */
@@ -744,7 +707,7 @@ static int ksz9477_port_mdb_add(struct dsa_switch *ds, int port,
 
        ksz9477_write_table(dev, static_table);
 
-       data = (index << ALU_STAT_INDEX_S) | ALU_STAT_START;
+       data = (index << shifts[ALU_STAT_INDEX]) | ALU_STAT_START;
        ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
 
        /* wait to be finished */
@@ -756,17 +719,20 @@ exit:
        return err;
 }
 
-static int ksz9477_port_mdb_del(struct dsa_switch *ds, int port,
-                               const struct switchdev_obj_port_mdb *mdb,
-                               struct dsa_db db)
+int ksz9477_mdb_del(struct ksz_device *dev, int port,
+                   const struct switchdev_obj_port_mdb *mdb, struct dsa_db db)
 {
-       struct ksz_device *dev = ds->priv;
        u32 static_table[4];
+       const u8 *shifts;
+       const u32 *masks;
        u32 data;
        int index;
        int ret = 0;
        u32 mac_hi, mac_lo;
 
+       shifts = dev->info->shifts;
+       masks = dev->info->masks;
+
        mac_hi = ((mdb->addr[0] << 8) | mdb->addr[1]);
        mac_lo = ((mdb->addr[2] << 24) | (mdb->addr[3] << 16));
        mac_lo |= ((mdb->addr[4] << 8) | mdb->addr[5]);
@@ -775,8 +741,8 @@ static int ksz9477_port_mdb_del(struct dsa_switch *ds, int port,
 
        for (index = 0; index < dev->info->num_statics; index++) {
                /* find empty slot first */
-               data = (index << ALU_STAT_INDEX_S) |
-                       ALU_STAT_READ | ALU_STAT_START;
+               data = (index << shifts[ALU_STAT_INDEX]) |
+                       masks[ALU_STAT_READ] | ALU_STAT_START;
                ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
 
                /* wait to be finished */
@@ -818,7 +784,7 @@ static int ksz9477_port_mdb_del(struct dsa_switch *ds, int port,
 
        ksz9477_write_table(dev, static_table);
 
-       data = (index << ALU_STAT_INDEX_S) | ALU_STAT_START;
+       data = (index << shifts[ALU_STAT_INDEX]) | ALU_STAT_START;
        ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
 
        /* wait to be finished */
@@ -832,11 +798,10 @@ exit:
        return ret;
 }
 
-static int ksz9477_port_mirror_add(struct dsa_switch *ds, int port,
-                                  struct dsa_mall_mirror_tc_entry *mirror,
-                                  bool ingress, struct netlink_ext_ack *extack)
+int ksz9477_port_mirror_add(struct ksz_device *dev, int port,
+                           struct dsa_mall_mirror_tc_entry *mirror,
+                           bool ingress, struct netlink_ext_ack *extack)
 {
-       struct ksz_device *dev = ds->priv;
        u8 data;
        int p;
 
@@ -872,10 +837,9 @@ static int ksz9477_port_mirror_add(struct dsa_switch *ds, int port,
        return 0;
 }
 
-static void ksz9477_port_mirror_del(struct dsa_switch *ds, int port,
-                                   struct dsa_mall_mirror_tc_entry *mirror)
+void ksz9477_port_mirror_del(struct ksz_device *dev, int port,
+                            struct dsa_mall_mirror_tc_entry *mirror)
 {
-       struct ksz_device *dev = ds->priv;
        bool in_use = false;
        u8 data;
        int p;
@@ -1097,16 +1061,17 @@ static void ksz9477_phy_errata_setup(struct ksz_device *dev, int port)
        ksz9477_port_mmd_write(dev, port, 0x1c, 0x20, 0xeeee);
 }
 
-static void ksz9477_get_caps(struct dsa_switch *ds, int port,
-                            struct phylink_config *config)
+void ksz9477_get_caps(struct ksz_device *dev, int port,
+                     struct phylink_config *config)
 {
-       ksz_phylink_get_caps(ds, port, config);
+       config->mac_capabilities = MAC_10 | MAC_100 | MAC_ASYM_PAUSE |
+                                  MAC_SYM_PAUSE;
 
-       config->mac_capabilities = MAC_10 | MAC_100 | MAC_1000FD |
-                                  MAC_ASYM_PAUSE | MAC_SYM_PAUSE;
+       if (dev->features & GBIT_SUPPORT)
+               config->mac_capabilities |= MAC_1000FD;
 }
 
-static void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
+void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
 {
        struct ksz_port *p = &dev->ports[port];
        struct dsa_switch *ds = dev->ds;
@@ -1203,7 +1168,7 @@ static void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
                ksz_pread16(dev, port, REG_PORT_PHY_INT_ENABLE, &data16);
 }
 
-static void ksz9477_config_cpu_port(struct dsa_switch *ds)
+void ksz9477_config_cpu_port(struct dsa_switch *ds)
 {
        struct ksz_device *dev = ds->priv;
        struct ksz_port *p;
@@ -1260,7 +1225,7 @@ static void ksz9477_config_cpu_port(struct dsa_switch *ds)
                        continue;
                p = &dev->ports[i];
 
-               ksz9477_port_stp_state_set(ds, i, BR_STATE_DISABLED);
+               ksz_port_stp_state_set(ds, i, BR_STATE_DISABLED);
                p->on = 1;
                if (i < dev->phy_port_cnt)
                        p->phy = 1;
@@ -1273,22 +1238,44 @@ static void ksz9477_config_cpu_port(struct dsa_switch *ds)
        }
 }
 
-static int ksz9477_setup(struct dsa_switch *ds)
+int ksz9477_enable_stp_addr(struct ksz_device *dev)
 {
-       struct ksz_device *dev = ds->priv;
-       int ret = 0;
+       const u32 *masks;
+       u32 data;
+       int ret;
 
-       dev->vlan_cache = devm_kcalloc(dev->dev, sizeof(struct vlan_table),
-                                      dev->info->num_vlans, GFP_KERNEL);
-       if (!dev->vlan_cache)
-               return -ENOMEM;
+       masks = dev->info->masks;
 
-       ret = ksz9477_reset_switch(dev);
-       if (ret) {
-               dev_err(ds->dev, "failed to reset switch\n");
+       /* Enable Reserved multicast table */
+       ksz_cfg(dev, REG_SW_LUE_CTRL_0, SW_RESV_MCAST_ENABLE, true);
+
+       /* Set the Override bit for forwarding BPDU packet to CPU */
+       ret = ksz_write32(dev, REG_SW_ALU_VAL_B,
+                         ALU_V_OVERRIDE | BIT(dev->cpu_port));
+       if (ret < 0)
+               return ret;
+
+       data = ALU_STAT_START | ALU_RESV_MCAST_ADDR | masks[ALU_STAT_WRITE];
+
+       ret = ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
+       if (ret < 0)
+               return ret;
+
+       /* wait to be finished */
+       ret = ksz9477_wait_alu_sta_ready(dev);
+       if (ret < 0) {
+               dev_err(dev->dev, "Failed to update Reserved Multicast table\n");
                return ret;
        }
 
+       return 0;
+}
+
+int ksz9477_setup(struct dsa_switch *ds)
+{
+       struct ksz_device *dev = ds->priv;
+       int ret = 0;
+
        /* Required for port partitioning. */
        ksz9477_cfg32(dev, REG_SW_QM_CTRL__4, UNICAST_VLAN_BOUNDARY,
                      true);
@@ -1305,69 +1292,27 @@ static int ksz9477_setup(struct dsa_switch *ds)
        if (ret)
                return ret;
 
-       ksz9477_config_cpu_port(ds);
-
-       ksz_cfg(dev, REG_SW_MAC_CTRL_1, MULTICAST_STORM_DISABLE, true);
-
        /* queue based egress rate limit */
        ksz_cfg(dev, REG_SW_MAC_CTRL_5, SW_OUT_RATE_LIMIT_QUEUE_BASED, true);
 
        /* enable global MIB counter freeze function */
        ksz_cfg(dev, REG_SW_MAC_CTRL_6, SW_MIB_COUNTER_FREEZE, true);
 
-       /* start switch */
-       ksz_cfg(dev, REG_SW_OPERATION, SW_START, true);
-
-       ksz_init_mib_timer(dev);
-
-       ds->configure_vlan_while_not_filtering = false;
-
        return 0;
 }
 
-static const struct dsa_switch_ops ksz9477_switch_ops = {
-       .get_tag_protocol       = ksz9477_get_tag_protocol,
-       .setup                  = ksz9477_setup,
-       .phy_read               = ksz9477_phy_read16,
-       .phy_write              = ksz9477_phy_write16,
-       .phylink_mac_link_down  = ksz_mac_link_down,
-       .phylink_get_caps       = ksz9477_get_caps,
-       .port_enable            = ksz_enable_port,
-       .get_strings            = ksz_get_strings,
-       .get_ethtool_stats      = ksz_get_ethtool_stats,
-       .get_sset_count         = ksz_sset_count,
-       .port_bridge_join       = ksz_port_bridge_join,
-       .port_bridge_leave      = ksz_port_bridge_leave,
-       .port_stp_state_set     = ksz9477_port_stp_state_set,
-       .port_fast_age          = ksz_port_fast_age,
-       .port_vlan_filtering    = ksz9477_port_vlan_filtering,
-       .port_vlan_add          = ksz9477_port_vlan_add,
-       .port_vlan_del          = ksz9477_port_vlan_del,
-       .port_fdb_dump          = ksz9477_port_fdb_dump,
-       .port_fdb_add           = ksz9477_port_fdb_add,
-       .port_fdb_del           = ksz9477_port_fdb_del,
-       .port_mdb_add           = ksz9477_port_mdb_add,
-       .port_mdb_del           = ksz9477_port_mdb_del,
-       .port_mirror_add        = ksz9477_port_mirror_add,
-       .port_mirror_del        = ksz9477_port_mirror_del,
-       .get_stats64            = ksz_get_stats64,
-       .port_change_mtu        = ksz9477_change_mtu,
-       .port_max_mtu           = ksz9477_max_mtu,
-};
-
-static u32 ksz9477_get_port_addr(int port, int offset)
+u32 ksz9477_get_port_addr(int port, int offset)
 {
        return PORT_CTRL_ADDR(port, offset);
 }
 
-static int ksz9477_switch_detect(struct ksz_device *dev)
+int ksz9477_switch_init(struct ksz_device *dev)
 {
        u8 data8;
-       u8 id_hi;
-       u8 id_lo;
-       u32 id32;
        int ret;
 
+       dev->port_mask = (1 << dev->info->port_cnt) - 1;
+
        /* turn off SPI DO Edge select */
        ret = ksz_read8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, &data8);
        if (ret)
@@ -1378,10 +1323,6 @@ static int ksz9477_switch_detect(struct ksz_device *dev)
        if (ret)
                return ret;
 
-       /* read chip id */
-       ret = ksz_read32(dev, REG_CHIP_ID0__1, &id32);
-       if (ret)
-               return ret;
        ret = ksz_read8(dev, REG_GLOBAL_OPTIONS, &data8);
        if (ret)
                return ret;
@@ -1392,12 +1333,7 @@ static int ksz9477_switch_detect(struct ksz_device *dev)
        /* Default capability is gigabit capable. */
        dev->features = GBIT_SUPPORT;
 
-       dev_dbg(dev->dev, "Switch detect: ID=%08x%02x\n", id32, data8);
-       id_hi = (u8)(id32 >> 16);
-       id_lo = (u8)(id32 >> 8);
-       if ((id_lo & 0xf) == 3) {
-               /* Chip is from KSZ9893 design. */
-               dev_info(dev->dev, "Found KSZ9893\n");
+       if (dev->chip_id == KSZ9893_CHIP_ID) {
                dev->features |= IS_9893;
 
                /* Chip does not support gigabit. */
@@ -1405,7 +1341,6 @@ static int ksz9477_switch_detect(struct ksz_device *dev)
                        dev->features &= ~GBIT_SUPPORT;
                dev->phy_port_cnt = 2;
        } else {
-               dev_info(dev->dev, "Found KSZ9477 or compatible\n");
                /* Chip uses new XMII register definitions. */
                dev->features |= NEW_XMII;
 
@@ -1414,72 +1349,14 @@ static int ksz9477_switch_detect(struct ksz_device *dev)
                        dev->features &= ~GBIT_SUPPORT;
        }
 
-       /* Change chip id to known ones so it can be matched against them. */
-       id32 = (id_hi << 16) | (id_lo << 8);
-
-       dev->chip_id = id32;
-
        return 0;
 }
 
-static int ksz9477_switch_init(struct ksz_device *dev)
-{
-       dev->ds->ops = &ksz9477_switch_ops;
-
-       dev->port_mask = (1 << dev->info->port_cnt) - 1;
-
-       return 0;
-}
-
-static void ksz9477_switch_exit(struct ksz_device *dev)
+void ksz9477_switch_exit(struct ksz_device *dev)
 {
        ksz9477_reset_switch(dev);
 }
 
-static const struct ksz_dev_ops ksz9477_dev_ops = {
-       .get_port_addr = ksz9477_get_port_addr,
-       .cfg_port_member = ksz9477_cfg_port_member,
-       .flush_dyn_mac_table = ksz9477_flush_dyn_mac_table,
-       .port_setup = ksz9477_port_setup,
-       .r_mib_cnt = ksz9477_r_mib_cnt,
-       .r_mib_pkt = ksz9477_r_mib_pkt,
-       .r_mib_stat64 = ksz_r_mib_stats64,
-       .freeze_mib = ksz9477_freeze_mib,
-       .port_init_cnt = ksz9477_port_init_cnt,
-       .shutdown = ksz9477_reset_switch,
-       .detect = ksz9477_switch_detect,
-       .init = ksz9477_switch_init,
-       .exit = ksz9477_switch_exit,
-};
-
-int ksz9477_switch_register(struct ksz_device *dev)
-{
-       int ret, i;
-       struct phy_device *phydev;
-
-       ret = ksz_switch_register(dev, &ksz9477_dev_ops);
-       if (ret)
-               return ret;
-
-       for (i = 0; i < dev->phy_port_cnt; ++i) {
-               if (!dsa_is_user_port(dev->ds, i))
-                       continue;
-
-               phydev = dsa_to_port(dev->ds, i)->slave->phydev;
-
-               /* The MAC actually cannot run in 1000 half-duplex mode. */
-               phy_remove_link_mode(phydev,
-                                    ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
-
-               /* PHY does not support gigabit. */
-               if (!(dev->features & GBIT_SUPPORT))
-                       phy_remove_link_mode(phydev,
-                                            ETHTOOL_LINK_MODE_1000baseT_Full_BIT);
-       }
-       return ret;
-}
-EXPORT_SYMBOL(ksz9477_switch_register);
-
 MODULE_AUTHOR("Woojung Huh <Woojung.Huh@microchip.com>");
 MODULE_DESCRIPTION("Microchip KSZ9477 Series Switch DSA Driver");
 MODULE_LICENSE("GPL");
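
The ALU static-table accesses above now compose their control words from per-chip shifts[] and masks[] tables rather than the fixed ALU_STAT_INDEX_S/ALU_STAT_READ macros (which the ksz9477_reg.h hunk below deletes). A sketch of the composition, using only identifiers that appear in the hunks above (the wrapper function is illustrative):

/* Sketch: build the ALU static-table read command the way
 * ksz9477_mdb_add() and ksz9477_mdb_del() above now do.
 */
static u32 example_alu_stat_read_cmd(struct ksz_device *dev, int index)
{
	const u8 *shifts = dev->info->shifts;
	const u32 *masks = dev->info->masks;

	return (index << shifts[ALU_STAT_INDEX]) |
	       masks[ALU_STAT_READ] | ALU_STAT_START;
}
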
diff --git a/drivers/net/dsa/microchip/ksz9477.h b/drivers/net/dsa/microchip/ksz9477.h
new file mode 100644
index 0000000..cd278b3
--- /dev/null
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Microchip KSZ9477 series Header file
+ *
+ * Copyright (C) 2017-2022 Microchip Technology Inc.
+ */
+
+#ifndef __KSZ9477_H
+#define __KSZ9477_H
+
+#include <net/dsa.h>
+#include "ksz_common.h"
+
+int ksz9477_setup(struct dsa_switch *ds);
+u32 ksz9477_get_port_addr(int port, int offset);
+void ksz9477_cfg_port_member(struct ksz_device *dev, int port, u8 member);
+void ksz9477_flush_dyn_mac_table(struct ksz_device *dev, int port);
+void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port);
+void ksz9477_r_phy(struct ksz_device *dev, u16 addr, u16 reg, u16 *data);
+void ksz9477_w_phy(struct ksz_device *dev, u16 addr, u16 reg, u16 val);
+void ksz9477_r_mib_cnt(struct ksz_device *dev, int port, u16 addr, u64 *cnt);
+void ksz9477_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
+                      u64 *dropped, u64 *cnt);
+void ksz9477_freeze_mib(struct ksz_device *dev, int port, bool freeze);
+void ksz9477_port_init_cnt(struct ksz_device *dev, int port);
+int ksz9477_port_vlan_filtering(struct ksz_device *dev, int port,
+                               bool flag, struct netlink_ext_ack *extack);
+int ksz9477_port_vlan_add(struct ksz_device *dev, int port,
+                         const struct switchdev_obj_port_vlan *vlan,
+                         struct netlink_ext_ack *extack);
+int ksz9477_port_vlan_del(struct ksz_device *dev, int port,
+                         const struct switchdev_obj_port_vlan *vlan);
+int ksz9477_port_mirror_add(struct ksz_device *dev, int port,
+                           struct dsa_mall_mirror_tc_entry *mirror,
+                           bool ingress, struct netlink_ext_ack *extack);
+void ksz9477_port_mirror_del(struct ksz_device *dev, int port,
+                            struct dsa_mall_mirror_tc_entry *mirror);
+int ksz9477_get_stp_reg(void);
+void ksz9477_get_caps(struct ksz_device *dev, int port,
+                     struct phylink_config *config);
+int ksz9477_fdb_dump(struct ksz_device *dev, int port,
+                    dsa_fdb_dump_cb_t *cb, void *data);
+int ksz9477_fdb_add(struct ksz_device *dev, int port,
+                   const unsigned char *addr, u16 vid, struct dsa_db db);
+int ksz9477_fdb_del(struct ksz_device *dev, int port,
+                   const unsigned char *addr, u16 vid, struct dsa_db db);
+int ksz9477_mdb_add(struct ksz_device *dev, int port,
+                   const struct switchdev_obj_port_mdb *mdb, struct dsa_db db);
+int ksz9477_mdb_del(struct ksz_device *dev, int port,
+                   const struct switchdev_obj_port_mdb *mdb, struct dsa_db db);
+int ksz9477_change_mtu(struct ksz_device *dev, int port, int mtu);
+int ksz9477_max_mtu(struct ksz_device *dev, int port);
+void ksz9477_config_cpu_port(struct dsa_switch *ds);
+int ksz9477_enable_stp_addr(struct ksz_device *dev);
+int ksz9477_reset_switch(struct ksz_device *dev);
+int ksz9477_dsa_init(struct ksz_device *dev);
+int ksz9477_switch_init(struct ksz_device *dev);
+void ksz9477_switch_exit(struct ksz_device *dev);
+
+#endif
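
With every operation exported through this new header, the per-chip dsa_switch_ops and ksz_dev_ops tables deleted earlier in the diff can be rebuilt once in common code. A hedged sketch of how a shared ops table would reference these exports -- the field names are taken from the deleted ksz9477_dev_ops above, but the actual table lives in ksz_common.c, outside this diff:

/* Sketch, not the verbatim upstream table: wiring the exported
 * ksz9477_* helpers into a common ksz_dev_ops instance.
 */
static const struct ksz_dev_ops example_ksz9477_ops = {
	.get_port_addr		= ksz9477_get_port_addr,
	.cfg_port_member	= ksz9477_cfg_port_member,
	.flush_dyn_mac_table	= ksz9477_flush_dyn_mac_table,
	.port_setup		= ksz9477_port_setup,
	.r_mib_cnt		= ksz9477_r_mib_cnt,
	.r_mib_pkt		= ksz9477_r_mib_pkt,
	.freeze_mib		= ksz9477_freeze_mib,
	.port_init_cnt		= ksz9477_port_init_cnt,
	.init			= ksz9477_switch_init,
	.exit			= ksz9477_switch_exit,
};
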
index faa3163..9996651 100644
@@ -41,7 +41,7 @@ static int ksz9477_i2c_probe(struct i2c_client *i2c,
        if (i2c->dev.platform_data)
                dev->pdata = i2c->dev.platform_data;
 
-       ret = ksz9477_switch_register(dev);
+       ret = ksz_switch_register(dev);
 
        /* Main DSA driver may not be started yet. */
        if (ret)
@@ -71,8 +71,8 @@ static void ksz9477_i2c_shutdown(struct i2c_client *i2c)
        if (!dev)
                return;
 
-       if (dev->dev_ops->shutdown)
-               dev->dev_ops->shutdown(dev);
+       if (dev->dev_ops->reset)
+               dev->dev_ops->reset(dev);
 
        dsa_switch_shutdown(dev->ds);
 
index 7a2c8d4..d0cce4c 100644
@@ -25,7 +25,6 @@
 
 #define REG_CHIP_ID2__1                        0x0002
 
-#define CHIP_ID_63                     0x63
 #define CHIP_ID_66                     0x66
 #define CHIP_ID_67                     0x67
 #define CHIP_ID_77                     0x77
 
 #define SW_DOUBLE_TAG                  BIT(7)
 #define SW_RESET                       BIT(1)
-#define SW_START                       BIT(0)
 
 #define REG_SW_MAC_ADDR_0              0x0302
 #define REG_SW_MAC_ADDR_1              0x0303
 
 #define REG_SW_MAC_CTRL_1              0x0331
 
-#define MULTICAST_STORM_DISABLE                BIT(6)
 #define SW_BACK_PRESSURE               BIT(5)
 #define FAIR_FLOW_CTRL                 BIT(4)
 #define NO_EXC_COLLISION_DROP          BIT(3)
 #define REG_SW_MAC_CTRL_2              0x0332
 
 #define SW_REPLACE_VID                 BIT(3)
-#define BROADCAST_STORM_RATE_HI                0x07
 
 #define REG_SW_MAC_CTRL_3              0x0333
 
-#define BROADCAST_STORM_RATE_LO                0xFF
-#define BROADCAST_STORM_RATE           0x07FF
-
 #define REG_SW_MAC_CTRL_4              0x0334
 
 #define SW_PASS_PAUSE                  BIT(3)
 
 #define REG_SW_ALU_STAT_CTRL__4                0x041C
 
-#define ALU_STAT_INDEX_M               (BIT(4) - 1)
-#define ALU_STAT_INDEX_S               16
 #define ALU_RESV_MCAST_INDEX_M         (BIT(6) - 1)
 #define ALU_STAT_START                 BIT(7)
 #define ALU_RESV_MCAST_ADDR            BIT(1)
-#define ALU_STAT_READ                  BIT(0)
 
 #define REG_SW_ALU_VAL_A               0x0420
 
 /* 5 - MIB Counters */
 #define REG_PORT_MIB_CTRL_STAT__4      0x0500
 
-#define MIB_COUNTER_OVERFLOW           BIT(31)
-#define MIB_COUNTER_VALID              BIT(30)
 #define MIB_COUNTER_READ               BIT(25)
 #define MIB_COUNTER_FLUSH_FREEZE       BIT(24)
 #define MIB_COUNTER_INDEX_M            (BIT(8) - 1)
 #define P_BCAST_STORM_CTRL             REG_PORT_MAC_CTRL_0
 #define P_PRIO_CTRL                    REG_PORT_MRI_PRIO_CTRL
 #define P_MIRROR_CTRL                  REG_PORT_MRI_MIRROR_CTRL
-#define P_STP_CTRL                     REG_PORT_LUE_MSTP_STATE
 #define P_PHY_CTRL                     REG_PORT_PHY_CTRL
-#define P_NEG_RESTART_CTRL             REG_PORT_PHY_CTRL
-#define P_LINK_STATUS                  REG_PORT_PHY_STATUS
-#define P_SPEED_STATUS                 REG_PORT_PHY_PHY_CTRL
 #define P_RATE_LIMIT_CTRL              REG_PORT_MAC_IN_RATE_LIMIT
 
 #define S_LINK_AGING_CTRL              REG_SW_LUE_CTRL_1
 #define PTP_TRIG_UNIT_M                        (BIT(MAX_TRIG_UNIT) - 1)
 #define PTP_TS_UNIT_M                  (BIT(MAX_TIMESTAMP_UNIT) - 1)
 
-/* Driver set switch broadcast storm protection at 10% rate. */
-#define BROADCAST_STORM_PROT_RATE      10
-
-/* 148,800 frames * 67 ms / 100 */
-#define BROADCAST_STORM_VALUE          9969
-
 #define KSZ9477_MAX_FRAME_SIZE         9000
 
 #endif /* KSZ9477_REGS_H */
diff --git a/drivers/net/dsa/microchip/ksz9477_spi.c b/drivers/net/dsa/microchip/ksz9477_spi.c
deleted file mode 100644
index 1bc8b0c..0000000
--- a/drivers/net/dsa/microchip/ksz9477_spi.c
+++ /dev/null
@@ -1,150 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Microchip KSZ9477 series register access through SPI
- *
- * Copyright (C) 2017-2019 Microchip Technology Inc.
- */
-
-#include <asm/unaligned.h>
-
-#include <linux/delay.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/regmap.h>
-#include <linux/spi/spi.h>
-
-#include "ksz_common.h"
-
-#define SPI_ADDR_SHIFT                 24
-#define SPI_ADDR_ALIGN                 3
-#define SPI_TURNAROUND_SHIFT           5
-
-KSZ_REGMAP_TABLE(ksz9477, 32, SPI_ADDR_SHIFT,
-                SPI_TURNAROUND_SHIFT, SPI_ADDR_ALIGN);
-
-static int ksz9477_spi_probe(struct spi_device *spi)
-{
-       struct regmap_config rc;
-       struct ksz_device *dev;
-       int i, ret;
-
-       dev = ksz_switch_alloc(&spi->dev, spi);
-       if (!dev)
-               return -ENOMEM;
-
-       for (i = 0; i < ARRAY_SIZE(ksz9477_regmap_config); i++) {
-               rc = ksz9477_regmap_config[i];
-               rc.lock_arg = &dev->regmap_mutex;
-               dev->regmap[i] = devm_regmap_init_spi(spi, &rc);
-               if (IS_ERR(dev->regmap[i])) {
-                       ret = PTR_ERR(dev->regmap[i]);
-                       dev_err(&spi->dev,
-                               "Failed to initialize regmap%i: %d\n",
-                               ksz9477_regmap_config[i].val_bits, ret);
-                       return ret;
-               }
-       }
-
-       if (spi->dev.platform_data)
-               dev->pdata = spi->dev.platform_data;
-
-       /* setup spi */
-       spi->mode = SPI_MODE_3;
-       ret = spi_setup(spi);
-       if (ret)
-               return ret;
-
-       ret = ksz9477_switch_register(dev);
-
-       /* Main DSA driver may not be started yet. */
-       if (ret)
-               return ret;
-
-       spi_set_drvdata(spi, dev);
-
-       return 0;
-}
-
-static void ksz9477_spi_remove(struct spi_device *spi)
-{
-       struct ksz_device *dev = spi_get_drvdata(spi);
-
-       if (dev)
-               ksz_switch_remove(dev);
-
-       spi_set_drvdata(spi, NULL);
-}
-
-static void ksz9477_spi_shutdown(struct spi_device *spi)
-{
-       struct ksz_device *dev = spi_get_drvdata(spi);
-
-       if (dev)
-               dsa_switch_shutdown(dev->ds);
-
-       spi_set_drvdata(spi, NULL);
-}
-
-static const struct of_device_id ksz9477_dt_ids[] = {
-       {
-               .compatible = "microchip,ksz9477",
-               .data = &ksz_switch_chips[KSZ9477]
-       },
-       {
-               .compatible = "microchip,ksz9897",
-               .data = &ksz_switch_chips[KSZ9897]
-       },
-       {
-               .compatible = "microchip,ksz9893",
-               .data = &ksz_switch_chips[KSZ9893]
-       },
-       {
-               .compatible = "microchip,ksz9563",
-               .data = &ksz_switch_chips[KSZ9893]
-       },
-       {
-               .compatible = "microchip,ksz8563",
-               .data = &ksz_switch_chips[KSZ9893]
-       },
-       {
-               .compatible = "microchip,ksz9567",
-               .data = &ksz_switch_chips[KSZ9567]
-       },
-       {},
-};
-MODULE_DEVICE_TABLE(of, ksz9477_dt_ids);
-
-static const struct spi_device_id ksz9477_spi_ids[] = {
-       { "ksz9477" },
-       { "ksz9897" },
-       { "ksz9893" },
-       { "ksz9563" },
-       { "ksz8563" },
-       { "ksz9567" },
-       { },
-};
-MODULE_DEVICE_TABLE(spi, ksz9477_spi_ids);
-
-static struct spi_driver ksz9477_spi_driver = {
-       .driver = {
-               .name   = "ksz9477-switch",
-               .owner  = THIS_MODULE,
-               .of_match_table = of_match_ptr(ksz9477_dt_ids),
-       },
-       .id_table = ksz9477_spi_ids,
-       .probe  = ksz9477_spi_probe,
-       .remove = ksz9477_spi_remove,
-       .shutdown = ksz9477_spi_shutdown,
-};
-
-module_spi_driver(ksz9477_spi_driver);
-
-MODULE_ALIAS("spi:ksz9477");
-MODULE_ALIAS("spi:ksz9897");
-MODULE_ALIAS("spi:ksz9893");
-MODULE_ALIAS("spi:ksz9563");
-MODULE_ALIAS("spi:ksz8563");
-MODULE_ALIAS("spi:ksz9567");
-MODULE_AUTHOR("Woojung Huh <Woojung.Huh@microchip.com>");
-MODULE_DESCRIPTION("Microchip KSZ9477 Series Switch SPI access Driver");
-MODULE_LICENSE("GPL");
diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
index 9ca8c8d..28d7cb2 100644
 #include <linux/if_bridge.h>
 #include <linux/of_device.h>
 #include <linux/of_net.h>
+#include <linux/micrel_phy.h>
 #include <net/dsa.h>
 #include <net/switchdev.h>
 
 #include "ksz_common.h"
+#include "ksz8.h"
+#include "ksz9477.h"
+#include "lan937x.h"
 
 #define MIB_COUNTER_NUM 0x20
 
@@ -138,6 +142,234 @@ static const struct ksz_mib_names ksz9477_mib_names[] = {
        { 0x83, "tx_discards" },
 };
 
+static const struct ksz_dev_ops ksz8_dev_ops = {
+       .setup = ksz8_setup,
+       .get_port_addr = ksz8_get_port_addr,
+       .cfg_port_member = ksz8_cfg_port_member,
+       .flush_dyn_mac_table = ksz8_flush_dyn_mac_table,
+       .port_setup = ksz8_port_setup,
+       .r_phy = ksz8_r_phy,
+       .w_phy = ksz8_w_phy,
+       .r_mib_pkt = ksz8_r_mib_pkt,
+       .freeze_mib = ksz8_freeze_mib,
+       .port_init_cnt = ksz8_port_init_cnt,
+       .fdb_dump = ksz8_fdb_dump,
+       .mdb_add = ksz8_mdb_add,
+       .mdb_del = ksz8_mdb_del,
+       .vlan_filtering = ksz8_port_vlan_filtering,
+       .vlan_add = ksz8_port_vlan_add,
+       .vlan_del = ksz8_port_vlan_del,
+       .mirror_add = ksz8_port_mirror_add,
+       .mirror_del = ksz8_port_mirror_del,
+       .get_caps = ksz8_get_caps,
+       .config_cpu_port = ksz8_config_cpu_port,
+       .enable_stp_addr = ksz8_enable_stp_addr,
+       .reset = ksz8_reset_switch,
+       .init = ksz8_switch_init,
+       .exit = ksz8_switch_exit,
+};
+
+static const struct ksz_dev_ops ksz9477_dev_ops = {
+       .setup = ksz9477_setup,
+       .get_port_addr = ksz9477_get_port_addr,
+       .cfg_port_member = ksz9477_cfg_port_member,
+       .flush_dyn_mac_table = ksz9477_flush_dyn_mac_table,
+       .port_setup = ksz9477_port_setup,
+       .r_phy = ksz9477_r_phy,
+       .w_phy = ksz9477_w_phy,
+       .r_mib_cnt = ksz9477_r_mib_cnt,
+       .r_mib_pkt = ksz9477_r_mib_pkt,
+       .r_mib_stat64 = ksz_r_mib_stats64,
+       .freeze_mib = ksz9477_freeze_mib,
+       .port_init_cnt = ksz9477_port_init_cnt,
+       .vlan_filtering = ksz9477_port_vlan_filtering,
+       .vlan_add = ksz9477_port_vlan_add,
+       .vlan_del = ksz9477_port_vlan_del,
+       .mirror_add = ksz9477_port_mirror_add,
+       .mirror_del = ksz9477_port_mirror_del,
+       .get_caps = ksz9477_get_caps,
+       .fdb_dump = ksz9477_fdb_dump,
+       .fdb_add = ksz9477_fdb_add,
+       .fdb_del = ksz9477_fdb_del,
+       .mdb_add = ksz9477_mdb_add,
+       .mdb_del = ksz9477_mdb_del,
+       .change_mtu = ksz9477_change_mtu,
+       .max_mtu = ksz9477_max_mtu,
+       .config_cpu_port = ksz9477_config_cpu_port,
+       .enable_stp_addr = ksz9477_enable_stp_addr,
+       .reset = ksz9477_reset_switch,
+       .init = ksz9477_switch_init,
+       .exit = ksz9477_switch_exit,
+};
+
+static const struct ksz_dev_ops lan937x_dev_ops = {
+       .setup = lan937x_setup,
+       .get_port_addr = ksz9477_get_port_addr,
+       .cfg_port_member = ksz9477_cfg_port_member,
+       .flush_dyn_mac_table = ksz9477_flush_dyn_mac_table,
+       .port_setup = lan937x_port_setup,
+       .r_phy = lan937x_r_phy,
+       .w_phy = lan937x_w_phy,
+       .r_mib_cnt = ksz9477_r_mib_cnt,
+       .r_mib_pkt = ksz9477_r_mib_pkt,
+       .r_mib_stat64 = ksz_r_mib_stats64,
+       .freeze_mib = ksz9477_freeze_mib,
+       .port_init_cnt = ksz9477_port_init_cnt,
+       .vlan_filtering = ksz9477_port_vlan_filtering,
+       .vlan_add = ksz9477_port_vlan_add,
+       .vlan_del = ksz9477_port_vlan_del,
+       .mirror_add = ksz9477_port_mirror_add,
+       .mirror_del = ksz9477_port_mirror_del,
+       .get_caps = lan937x_phylink_get_caps,
+       .phylink_mac_config = lan937x_phylink_mac_config,
+       .phylink_mac_link_up = lan937x_phylink_mac_link_up,
+       .fdb_dump = ksz9477_fdb_dump,
+       .fdb_add = ksz9477_fdb_add,
+       .fdb_del = ksz9477_fdb_del,
+       .mdb_add = ksz9477_mdb_add,
+       .mdb_del = ksz9477_mdb_del,
+       .change_mtu = lan937x_change_mtu,
+       .max_mtu = ksz9477_max_mtu,
+       .config_cpu_port = lan937x_config_cpu_port,
+       .enable_stp_addr = ksz9477_enable_stp_addr,
+       .reset = lan937x_reset_switch,
+       .init = lan937x_switch_init,
+       .exit = lan937x_switch_exit,
+};
+
+static const u16 ksz8795_regs[] = {
+       [REG_IND_CTRL_0]                = 0x6E,
+       [REG_IND_DATA_8]                = 0x70,
+       [REG_IND_DATA_CHECK]            = 0x72,
+       [REG_IND_DATA_HI]               = 0x71,
+       [REG_IND_DATA_LO]               = 0x75,
+       [REG_IND_MIB_CHECK]             = 0x74,
+       [REG_IND_BYTE]                  = 0xA0,
+       [P_FORCE_CTRL]                  = 0x0C,
+       [P_LINK_STATUS]                 = 0x0E,
+       [P_LOCAL_CTRL]                  = 0x07,
+       [P_NEG_RESTART_CTRL]            = 0x0D,
+       [P_REMOTE_STATUS]               = 0x08,
+       [P_SPEED_STATUS]                = 0x09,
+       [S_TAIL_TAG_CTRL]               = 0x0C,
+       [P_STP_CTRL]                    = 0x02,
+       [S_START_CTRL]                  = 0x01,
+       [S_BROADCAST_CTRL]              = 0x06,
+       [S_MULTICAST_CTRL]              = 0x04,
+};
+
+static const u32 ksz8795_masks[] = {
+       [PORT_802_1P_REMAPPING]         = BIT(7),
+       [SW_TAIL_TAG_ENABLE]            = BIT(1),
+       [MIB_COUNTER_OVERFLOW]          = BIT(6),
+       [MIB_COUNTER_VALID]             = BIT(5),
+       [VLAN_TABLE_FID]                = GENMASK(6, 0),
+       [VLAN_TABLE_MEMBERSHIP]         = GENMASK(11, 7),
+       [VLAN_TABLE_VALID]              = BIT(12),
+       [STATIC_MAC_TABLE_VALID]        = BIT(21),
+       [STATIC_MAC_TABLE_USE_FID]      = BIT(23),
+       [STATIC_MAC_TABLE_FID]          = GENMASK(30, 24),
+       [STATIC_MAC_TABLE_OVERRIDE]     = BIT(26),
+       [STATIC_MAC_TABLE_FWD_PORTS]    = GENMASK(24, 20),
+       [DYNAMIC_MAC_TABLE_ENTRIES_H]   = GENMASK(6, 0),
+       [DYNAMIC_MAC_TABLE_MAC_EMPTY]   = BIT(8),
+       [DYNAMIC_MAC_TABLE_NOT_READY]   = BIT(7),
+       [DYNAMIC_MAC_TABLE_ENTRIES]     = GENMASK(31, 29),
+       [DYNAMIC_MAC_TABLE_FID]         = GENMASK(26, 20),
+       [DYNAMIC_MAC_TABLE_SRC_PORT]    = GENMASK(26, 24),
+       [DYNAMIC_MAC_TABLE_TIMESTAMP]   = GENMASK(28, 27),
+};
+
+static const u8 ksz8795_shifts[] = {
+       [VLAN_TABLE_MEMBERSHIP_S]       = 7,
+       [VLAN_TABLE]                    = 16,
+       [STATIC_MAC_FWD_PORTS]          = 16,
+       [STATIC_MAC_FID]                = 24,
+       [DYNAMIC_MAC_ENTRIES_H]         = 3,
+       [DYNAMIC_MAC_ENTRIES]           = 29,
+       [DYNAMIC_MAC_FID]               = 16,
+       [DYNAMIC_MAC_TIMESTAMP]         = 27,
+       [DYNAMIC_MAC_SRC_PORT]          = 24,
+};
+
+static const u16 ksz8863_regs[] = {
+       [REG_IND_CTRL_0]                = 0x79,
+       [REG_IND_DATA_8]                = 0x7B,
+       [REG_IND_DATA_CHECK]            = 0x7B,
+       [REG_IND_DATA_HI]               = 0x7C,
+       [REG_IND_DATA_LO]               = 0x80,
+       [REG_IND_MIB_CHECK]             = 0x80,
+       [P_FORCE_CTRL]                  = 0x0C,
+       [P_LINK_STATUS]                 = 0x0E,
+       [P_LOCAL_CTRL]                  = 0x0C,
+       [P_NEG_RESTART_CTRL]            = 0x0D,
+       [P_REMOTE_STATUS]               = 0x0E,
+       [P_SPEED_STATUS]                = 0x0F,
+       [S_TAIL_TAG_CTRL]               = 0x03,
+       [P_STP_CTRL]                    = 0x02,
+       [S_START_CTRL]                  = 0x01,
+       [S_BROADCAST_CTRL]              = 0x06,
+       [S_MULTICAST_CTRL]              = 0x04,
+};
+
+static const u32 ksz8863_masks[] = {
+       [PORT_802_1P_REMAPPING]         = BIT(3),
+       [SW_TAIL_TAG_ENABLE]            = BIT(6),
+       [MIB_COUNTER_OVERFLOW]          = BIT(7),
+       [MIB_COUNTER_VALID]             = BIT(6),
+       [VLAN_TABLE_FID]                = GENMASK(15, 12),
+       [VLAN_TABLE_MEMBERSHIP]         = GENMASK(18, 16),
+       [VLAN_TABLE_VALID]              = BIT(19),
+       [STATIC_MAC_TABLE_VALID]        = BIT(19),
+       [STATIC_MAC_TABLE_USE_FID]      = BIT(21),
+       [STATIC_MAC_TABLE_FID]          = GENMASK(29, 26),
+       [STATIC_MAC_TABLE_OVERRIDE]     = BIT(20),
+       [STATIC_MAC_TABLE_FWD_PORTS]    = GENMASK(18, 16),
+       [DYNAMIC_MAC_TABLE_ENTRIES_H]   = GENMASK(5, 0),
+       [DYNAMIC_MAC_TABLE_MAC_EMPTY]   = BIT(7),
+       [DYNAMIC_MAC_TABLE_NOT_READY]   = BIT(7),
+       [DYNAMIC_MAC_TABLE_ENTRIES]     = GENMASK(31, 28),
+       [DYNAMIC_MAC_TABLE_FID]         = GENMASK(19, 16),
+       [DYNAMIC_MAC_TABLE_SRC_PORT]    = GENMASK(21, 20),
+       [DYNAMIC_MAC_TABLE_TIMESTAMP]   = GENMASK(23, 22),
+};
+
+static const u8 ksz8863_shifts[] = {
+       [VLAN_TABLE_MEMBERSHIP_S]       = 16,
+       [STATIC_MAC_FWD_PORTS]          = 16,
+       [STATIC_MAC_FID]                = 22,
+       [DYNAMIC_MAC_ENTRIES_H]         = 3,
+       [DYNAMIC_MAC_ENTRIES]           = 24,
+       [DYNAMIC_MAC_FID]               = 16,
+       [DYNAMIC_MAC_TIMESTAMP]         = 24,
+       [DYNAMIC_MAC_SRC_PORT]          = 20,
+};
+
+static const u16 ksz9477_regs[] = {
+       [P_STP_CTRL]                    = 0x0B04,
+       [S_START_CTRL]                  = 0x0300,
+       [S_BROADCAST_CTRL]              = 0x0332,
+       [S_MULTICAST_CTRL]              = 0x0331,
+};
+
+static const u32 ksz9477_masks[] = {
+       [ALU_STAT_WRITE]                = 0,
+       [ALU_STAT_READ]                 = 1,
+};
+
+static const u8 ksz9477_shifts[] = {
+       [ALU_STAT_INDEX]                = 16,
+};
+
+static const u32 lan937x_masks[] = {
+       [ALU_STAT_WRITE]                = 1,
+       [ALU_STAT_READ]                 = 2,
+};
+
+static const u8 lan937x_shifts[] = {
+       [ALU_STAT_INDEX]                = 8,
+};
+
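/* Editor's sketch, not part of the patch: a minimal illustration of how the
 * per-chip regs/masks/shifts tables above are meant to be consumed by common
 * code.  ALU_STAT_START and the enum indices come from this series; the
 * helper name and its use here are hypothetical.
 */
static u32 example_alu_stat_read_ctrl(struct ksz_device *dev, u32 index)
{
	const u32 *masks = dev->info->masks;
	const u8 *shifts = dev->info->shifts;

	/* The same composition works on KSZ9477 and LAN937x; only the table
	 * contents differ (BIT(0) vs BIT(1) for the read trigger, shift 16
	 * vs 8 for the index field).
	 */
	return ALU_STAT_START |
	       (index << shifts[ALU_STAT_INDEX]) |
	       masks[ALU_STAT_READ];
}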
 const struct ksz_chip_data ksz_switch_chips[] = {
        [KSZ8795] = {
                .chip_id = KSZ8795_CHIP_ID,
@@ -147,10 +379,14 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 8,
                .cpu_ports = 0x10,      /* can be configured as cpu port */
                .port_cnt = 5,          /* total cpu and user ports */
+               .ops = &ksz8_dev_ops,
                .ksz87xx_eee_link_erratum = true,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz8795_regs,
+               .masks = ksz8795_masks,
+               .shifts = ksz8795_shifts,
                .supports_mii = {false, false, false, false, true},
                .supports_rmii = {false, false, false, false, true},
                .supports_rgmii = {false, false, false, false, true},
@@ -179,10 +415,14 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 8,
                .cpu_ports = 0x10,      /* can be configured as cpu port */
                .port_cnt = 5,          /* total cpu and user ports */
+               .ops = &ksz8_dev_ops,
                .ksz87xx_eee_link_erratum = true,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz8795_regs,
+               .masks = ksz8795_masks,
+               .shifts = ksz8795_shifts,
                .supports_mii = {false, false, false, false, true},
                .supports_rmii = {false, false, false, false, true},
                .supports_rgmii = {false, false, false, false, true},
@@ -197,10 +437,14 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 8,
                .cpu_ports = 0x10,      /* can be configured as cpu port */
                .port_cnt = 5,          /* total cpu and user ports */
+               .ops = &ksz8_dev_ops,
                .ksz87xx_eee_link_erratum = true,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz8795_regs,
+               .masks = ksz8795_masks,
+               .shifts = ksz8795_shifts,
                .supports_mii = {false, false, false, false, true},
                .supports_rmii = {false, false, false, false, true},
                .supports_rgmii = {false, false, false, false, true},
@@ -215,9 +459,13 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 8,
                .cpu_ports = 0x4,       /* can be configured as cpu port */
                .port_cnt = 3,
+               .ops = &ksz8_dev_ops,
                .mib_names = ksz88xx_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz88xx_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz8863_regs,
+               .masks = ksz8863_masks,
+               .shifts = ksz8863_shifts,
                .supports_mii = {false, false, true},
                .supports_rmii = {false, false, true},
                .internal_phy = {true, true, false},
@@ -231,10 +479,14 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 16,
                .cpu_ports = 0x7F,      /* can be configured as cpu port */
                .port_cnt = 7,          /* total physical port count */
+               .ops = &ksz9477_dev_ops,
                .phy_errata_9477 = true,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = ksz9477_masks,
+               .shifts = ksz9477_shifts,
                .supports_mii   = {false, false, false, false,
                                   false, true, false},
                .supports_rmii  = {false, false, false, false,
@@ -253,10 +505,14 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 16,
                .cpu_ports = 0x7F,      /* can be configured as cpu port */
                .port_cnt = 7,          /* total physical port count */
+               .ops = &ksz9477_dev_ops,
                .phy_errata_9477 = true,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = ksz9477_masks,
+               .shifts = ksz9477_shifts,
                .supports_mii   = {false, false, false, false,
                                   false, true, true},
                .supports_rmii  = {false, false, false, false,
@@ -275,9 +531,13 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 16,
                .cpu_ports = 0x07,      /* can be configured as cpu port */
                .port_cnt = 3,          /* total port count */
+               .ops = &ksz9477_dev_ops,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = ksz9477_masks,
+               .shifts = ksz9477_shifts,
                .supports_mii = {false, false, true},
                .supports_rmii = {false, false, true},
                .supports_rgmii = {false, false, true},
@@ -292,10 +552,14 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 16,
                .cpu_ports = 0x7F,      /* can be configured as cpu port */
                .port_cnt = 7,          /* total physical port count */
+               .ops = &ksz9477_dev_ops,
                .phy_errata_9477 = true,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = ksz9477_masks,
+               .shifts = ksz9477_shifts,
                .supports_mii   = {false, false, false, false,
                                   false, true, true},
                .supports_rmii  = {false, false, false, false,
@@ -314,9 +578,13 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 256,
                .cpu_ports = 0x10,      /* can be configured as cpu port */
                .port_cnt = 5,          /* total physical port count */
+               .ops = &lan937x_dev_ops,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = lan937x_masks,
+               .shifts = lan937x_shifts,
                .supports_mii = {false, false, false, false, true},
                .supports_rmii = {false, false, false, false, true},
                .supports_rgmii = {false, false, false, false, true},
@@ -331,9 +599,13 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 256,
                .cpu_ports = 0x30,      /* can be configured as cpu port */
                .port_cnt = 6,          /* total physical port count */
+               .ops = &lan937x_dev_ops,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = lan937x_masks,
+               .shifts = lan937x_shifts,
                .supports_mii = {false, false, false, false, true, true},
                .supports_rmii = {false, false, false, false, true, true},
                .supports_rgmii = {false, false, false, false, true, true},
@@ -348,9 +620,13 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 256,
                .cpu_ports = 0x30,      /* can be configured as cpu port */
                .port_cnt = 8,          /* total physical port count */
+               .ops = &lan937x_dev_ops,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = lan937x_masks,
+               .shifts = lan937x_shifts,
                .supports_mii   = {false, false, false, false,
                                   true, true, false, false},
                .supports_rmii  = {false, false, false, false,
@@ -369,9 +645,13 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 256,
                .cpu_ports = 0x38,      /* can be configured as cpu port */
                .port_cnt = 5,          /* total physical port count */
+               .ops = &lan937x_dev_ops,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = lan937x_masks,
+               .shifts = lan937x_shifts,
                .supports_mii   = {false, false, false, false,
                                   true, true, false, false},
                .supports_rmii  = {false, false, false, false,
@@ -390,9 +670,13 @@ const struct ksz_chip_data ksz_switch_chips[] = {
                .num_statics = 256,
                .cpu_ports = 0x30,      /* can be configured as cpu port */
                .port_cnt = 8,          /* total physical port count */
+               .ops = &lan937x_dev_ops,
                .mib_names = ksz9477_mib_names,
                .mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
                .reg_mib_cnt = MIB_COUNTER_NUM,
+               .regs = ksz9477_regs,
+               .masks = lan937x_masks,
+               .shifts = lan937x_shifts,
                .supports_mii   = {false, false, false, false,
                                   true, true, false, false},
                .supports_rmii  = {false, false, false, false,
@@ -436,8 +720,8 @@ static int ksz_check_device_id(struct ksz_device *dev)
        return 0;
 }
 
-void ksz_phylink_get_caps(struct dsa_switch *ds, int port,
-                         struct phylink_config *config)
+static void ksz_phylink_get_caps(struct dsa_switch *ds, int port,
+                                struct phylink_config *config)
 {
        struct ksz_device *dev = ds->priv;
 
@@ -456,23 +740,29 @@ void ksz_phylink_get_caps(struct dsa_switch *ds, int port,
        if (dev->info->internal_phy[port])
                __set_bit(PHY_INTERFACE_MODE_INTERNAL,
                          config->supported_interfaces);
+
+       if (dev->dev_ops->get_caps)
+               dev->dev_ops->get_caps(dev, port, config);
 }
-EXPORT_SYMBOL_GPL(ksz_phylink_get_caps);
 
 void ksz_r_mib_stats64(struct ksz_device *dev, int port)
 {
+       struct ethtool_pause_stats *pstats;
        struct rtnl_link_stats64 *stats;
        struct ksz_stats_raw *raw;
        struct ksz_port_mib *mib;
 
        mib = &dev->ports[port].mib;
        stats = &mib->stats64;
+       pstats = &mib->pause_stats;
        raw = (struct ksz_stats_raw *)mib->counters;
 
        spin_lock(&mib->stats64_lock);
 
-       stats->rx_packets = raw->rx_bcast + raw->rx_mcast + raw->rx_ucast;
-       stats->tx_packets = raw->tx_bcast + raw->tx_mcast + raw->tx_ucast;
+       stats->rx_packets = raw->rx_bcast + raw->rx_mcast + raw->rx_ucast +
+               raw->rx_pause;
+       stats->tx_packets = raw->tx_bcast + raw->tx_mcast + raw->tx_ucast +
+               raw->tx_pause;
 
        /* HW counters are counting bytes + FCS which is not acceptable
         * for rtnl_link_stats64 interface
@@ -498,12 +788,14 @@ void ksz_r_mib_stats64(struct ksz_device *dev, int port)
        stats->multicast = raw->rx_mcast;
        stats->collisions = raw->tx_total_col;
 
+       pstats->tx_pause_frames = raw->tx_pause;
+       pstats->rx_pause_frames = raw->rx_pause;
+
        spin_unlock(&mib->stats64_lock);
 }
-EXPORT_SYMBOL_GPL(ksz_r_mib_stats64);
 
-void ksz_get_stats64(struct dsa_switch *ds, int port,
-                    struct rtnl_link_stats64 *s)
+static void ksz_get_stats64(struct dsa_switch *ds, int port,
+                           struct rtnl_link_stats64 *s)
 {
        struct ksz_device *dev = ds->priv;
        struct ksz_port_mib *mib;
@@ -514,10 +806,22 @@ void ksz_get_stats64(struct dsa_switch *ds, int port,
        memcpy(s, &mib->stats64, sizeof(*s));
        spin_unlock(&mib->stats64_lock);
 }
-EXPORT_SYMBOL_GPL(ksz_get_stats64);
 
-void ksz_get_strings(struct dsa_switch *ds, int port,
-                    u32 stringset, uint8_t *buf)
+static void ksz_get_pause_stats(struct dsa_switch *ds, int port,
+                               struct ethtool_pause_stats *pause_stats)
+{
+       struct ksz_device *dev = ds->priv;
+       struct ksz_port_mib *mib;
+
+       mib = &dev->ports[port].mib;
+
+       spin_lock(&mib->stats64_lock);
+       memcpy(pause_stats, &mib->pause_stats, sizeof(*pause_stats));
+       spin_unlock(&mib->stats64_lock);
+}
+
+static void ksz_get_strings(struct dsa_switch *ds, int port,
+                           u32 stringset, uint8_t *buf)
 {
        struct ksz_device *dev = ds->priv;
        int i;
@@ -530,9 +834,8 @@ void ksz_get_strings(struct dsa_switch *ds, int port,
                       dev->info->mib_names[i].string, ETH_GSTRING_LEN);
        }
 }
-EXPORT_SYMBOL_GPL(ksz_get_strings);
 
-void ksz_update_port_member(struct ksz_device *dev, int port)
+static void ksz_update_port_member(struct ksz_device *dev, int port)
 {
        struct ksz_port *p = &dev->ports[port];
        struct dsa_switch *ds = dev->ds;
@@ -589,7 +892,55 @@ void ksz_update_port_member(struct ksz_device *dev, int port)
 
        dev->dev_ops->cfg_port_member(dev, port, port_member | cpu_port);
 }
-EXPORT_SYMBOL_GPL(ksz_update_port_member);
+
+static int ksz_setup(struct dsa_switch *ds)
+{
+       struct ksz_device *dev = ds->priv;
+       const u16 *regs;
+       int ret;
+
+       regs = dev->info->regs;
+
+       dev->vlan_cache = devm_kcalloc(dev->dev, sizeof(struct vlan_table),
+                                      dev->info->num_vlans, GFP_KERNEL);
+       if (!dev->vlan_cache)
+               return -ENOMEM;
+
+       ret = dev->dev_ops->reset(dev);
+       if (ret) {
+               dev_err(ds->dev, "failed to reset switch\n");
+               return ret;
+       }
+
+       /* set broadcast storm protection at 10% rate */
+       regmap_update_bits(dev->regmap[1], regs[S_BROADCAST_CTRL],
+                          BROADCAST_STORM_RATE,
+                          (BROADCAST_STORM_VALUE *
+                          BROADCAST_STORM_PROT_RATE) / 100);
+
+       dev->dev_ops->config_cpu_port(ds);
+
+       dev->dev_ops->enable_stp_addr(dev);
+
+       regmap_update_bits(dev->regmap[0], regs[S_MULTICAST_CTRL],
+                          MULTICAST_STORM_DISABLE, MULTICAST_STORM_DISABLE);
+
+       ksz_init_mib_timer(dev);
+
+       ds->configure_vlan_while_not_filtering = false;
+
+       if (dev->dev_ops->setup) {
+               ret = dev->dev_ops->setup(ds);
+               if (ret)
+                       return ret;
+       }
+
+       /* start switch */
+       regmap_update_bits(dev->regmap[0], regs[S_START_CTRL],
+                          SW_START, SW_START);
+
+       return 0;
+}
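/* Editor's note, not part of the patch: the storm-protection write above
 * keeps the driver's long-standing 10% default.  Per the constants this
 * series deletes from ksz9477_reg.h (and presumably relocates, since
 * ksz_setup() still uses them), BROADCAST_STORM_VALUE is 9969
 * (148,800 frames * 67 ms / 100), so the programmed rate works out to
 * 9969 * 10 / 100 = 996 frames per measurement interval.
 */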
 
 static void port_r_cnt(struct ksz_device *dev, int port)
 {
@@ -667,9 +1018,8 @@ void ksz_init_mib_timer(struct ksz_device *dev)
                memset(mib->counters, 0, dev->info->mib_cnt * sizeof(u64));
        }
 }
-EXPORT_SYMBOL_GPL(ksz_init_mib_timer);
 
-int ksz_phy_read16(struct dsa_switch *ds, int addr, int reg)
+static int ksz_phy_read16(struct dsa_switch *ds, int addr, int reg)
 {
        struct ksz_device *dev = ds->priv;
        u16 val = 0xffff;
@@ -678,9 +1028,8 @@ int ksz_phy_read16(struct dsa_switch *ds, int addr, int reg)
 
        return val;
 }
-EXPORT_SYMBOL_GPL(ksz_phy_read16);
 
-int ksz_phy_write16(struct dsa_switch *ds, int addr, int reg, u16 val)
+static int ksz_phy_write16(struct dsa_switch *ds, int addr, int reg, u16 val)
 {
        struct ksz_device *dev = ds->priv;
 
@@ -688,10 +1037,25 @@ int ksz_phy_write16(struct dsa_switch *ds, int addr, int reg, u16 val)
 
        return 0;
 }
-EXPORT_SYMBOL_GPL(ksz_phy_write16);
 
-void ksz_mac_link_down(struct dsa_switch *ds, int port, unsigned int mode,
-                      phy_interface_t interface)
+static u32 ksz_get_phy_flags(struct dsa_switch *ds, int port)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (dev->chip_id == KSZ8830_CHIP_ID) {
+               /* Silicon Errata Sheet (DS80000830A):
+                * Port 1 does not work with LinkMD Cable-Testing.
+                * Port 1 does not respond to received PAUSE control frames.
+                */
+               if (!port)
+                       return MICREL_KSZ8_P1_ERRATA;
+       }
+
+       return 0;
+}
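/* Editor's note, not part of the patch: MICREL_KSZ8_P1_ERRATA comes from the
 * <linux/micrel_phy.h> include added at the top of this file; returning it
 * from .get_phy_flags passes the flag through to the attached PHY so the
 * Micrel PHY driver can apply the documented DS80000830A port-1 workarounds
 * on KSZ8830-family switches.
 */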
+
+static void ksz_mac_link_down(struct dsa_switch *ds, int port,
+                             unsigned int mode, phy_interface_t interface)
 {
        struct ksz_device *dev = ds->priv;
        struct ksz_port *p = &dev->ports[port];
@@ -702,9 +1066,8 @@ void ksz_mac_link_down(struct dsa_switch *ds, int port, unsigned int mode,
        if (dev->mib_read_interval)
                schedule_delayed_work(&dev->mib_read, 0);
 }
-EXPORT_SYMBOL_GPL(ksz_mac_link_down);
 
-int ksz_sset_count(struct dsa_switch *ds, int port, int sset)
+static int ksz_sset_count(struct dsa_switch *ds, int port, int sset)
 {
        struct ksz_device *dev = ds->priv;
 
@@ -713,9 +1076,9 @@ int ksz_sset_count(struct dsa_switch *ds, int port, int sset)
 
        return dev->info->mib_cnt;
 }
-EXPORT_SYMBOL_GPL(ksz_sset_count);
 
-void ksz_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *buf)
+static void ksz_get_ethtool_stats(struct dsa_switch *ds, int port,
+                                 uint64_t *buf)
 {
        const struct dsa_port *dp = dsa_to_port(ds, port);
        struct ksz_device *dev = ds->priv;
@@ -731,12 +1094,11 @@ void ksz_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *buf)
        memcpy(buf, mib->counters, dev->info->mib_cnt * sizeof(u64));
        mutex_unlock(&mib->cnt_mutex);
 }
-EXPORT_SYMBOL_GPL(ksz_get_ethtool_stats);
 
-int ksz_port_bridge_join(struct dsa_switch *ds, int port,
-                        struct dsa_bridge bridge,
-                        bool *tx_fwd_offload,
-                        struct netlink_ext_ack *extack)
+static int ksz_port_bridge_join(struct dsa_switch *ds, int port,
+                               struct dsa_bridge bridge,
+                               bool *tx_fwd_offload,
+                               struct netlink_ext_ack *extack)
 {
        /* port_stp_state_set() will be called after to put the port in
         * appropriate state so there is no need to do anything.
@@ -744,135 +1106,83 @@ int ksz_port_bridge_join(struct dsa_switch *ds, int port,
 
        return 0;
 }
-EXPORT_SYMBOL_GPL(ksz_port_bridge_join);
 
-void ksz_port_bridge_leave(struct dsa_switch *ds, int port,
-                          struct dsa_bridge bridge)
+static void ksz_port_bridge_leave(struct dsa_switch *ds, int port,
+                                 struct dsa_bridge bridge)
 {
        /* port_stp_state_set() will be called after to put the port in
         * forwarding state so there is no need to do anything.
         */
 }
-EXPORT_SYMBOL_GPL(ksz_port_bridge_leave);
 
-void ksz_port_fast_age(struct dsa_switch *ds, int port)
+static void ksz_port_fast_age(struct dsa_switch *ds, int port)
 {
        struct ksz_device *dev = ds->priv;
 
        dev->dev_ops->flush_dyn_mac_table(dev, port);
 }
-EXPORT_SYMBOL_GPL(ksz_port_fast_age);
 
-int ksz_port_fdb_dump(struct dsa_switch *ds, int port, dsa_fdb_dump_cb_t *cb,
-                     void *data)
+static int ksz_port_fdb_add(struct dsa_switch *ds, int port,
+                           const unsigned char *addr, u16 vid,
+                           struct dsa_db db)
 {
        struct ksz_device *dev = ds->priv;
-       int ret = 0;
-       u16 i = 0;
-       u16 entries = 0;
-       u8 timestamp = 0;
-       u8 fid;
-       u8 member;
-       struct alu_struct alu;
-
-       do {
-               alu.is_static = false;
-               ret = dev->dev_ops->r_dyn_mac_table(dev, i, alu.mac, &fid,
-                                                   &member, &timestamp,
-                                                   &entries);
-               if (!ret && (member & BIT(port))) {
-                       ret = cb(alu.mac, alu.fid, alu.is_static, data);
-                       if (ret)
-                               break;
-               }
-               i++;
-       } while (i < entries);
-       if (i >= entries)
-               ret = 0;
 
-       return ret;
+       if (!dev->dev_ops->fdb_add)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->fdb_add(dev, port, addr, vid, db);
 }
-EXPORT_SYMBOL_GPL(ksz_port_fdb_dump);
 
-int ksz_port_mdb_add(struct dsa_switch *ds, int port,
-                    const struct switchdev_obj_port_mdb *mdb,
-                    struct dsa_db db)
+static int ksz_port_fdb_del(struct dsa_switch *ds, int port,
+                           const unsigned char *addr,
+                           u16 vid, struct dsa_db db)
 {
        struct ksz_device *dev = ds->priv;
-       struct alu_struct alu;
-       int index;
-       int empty = 0;
-
-       alu.port_forward = 0;
-       for (index = 0; index < dev->info->num_statics; index++) {
-               if (!dev->dev_ops->r_sta_mac_table(dev, index, &alu)) {
-                       /* Found one already in static MAC table. */
-                       if (!memcmp(alu.mac, mdb->addr, ETH_ALEN) &&
-                           alu.fid == mdb->vid)
-                               break;
-               /* Remember the first empty entry. */
-               } else if (!empty) {
-                       empty = index + 1;
-               }
-       }
 
-       /* no available entry */
-       if (index == dev->info->num_statics && !empty)
-               return -ENOSPC;
+       if (!dev->dev_ops->fdb_del)
+               return -EOPNOTSUPP;
 
-       /* add entry */
-       if (index == dev->info->num_statics) {
-               index = empty - 1;
-               memset(&alu, 0, sizeof(alu));
-               memcpy(alu.mac, mdb->addr, ETH_ALEN);
-               alu.is_static = true;
-       }
-       alu.port_forward |= BIT(port);
-       if (mdb->vid) {
-               alu.is_use_fid = true;
+       return dev->dev_ops->fdb_del(dev, port, addr, vid, db);
+}
 
-               /* Need a way to map VID to FID. */
-               alu.fid = mdb->vid;
-       }
-       dev->dev_ops->w_sta_mac_table(dev, index, &alu);
+static int ksz_port_fdb_dump(struct dsa_switch *ds, int port,
+                            dsa_fdb_dump_cb_t *cb, void *data)
+{
+       struct ksz_device *dev = ds->priv;
 
-       return 0;
+       if (!dev->dev_ops->fdb_dump)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->fdb_dump(dev, port, cb, data);
 }
-EXPORT_SYMBOL_GPL(ksz_port_mdb_add);
 
-int ksz_port_mdb_del(struct dsa_switch *ds, int port,
-                    const struct switchdev_obj_port_mdb *mdb,
-                    struct dsa_db db)
+static int ksz_port_mdb_add(struct dsa_switch *ds, int port,
+                           const struct switchdev_obj_port_mdb *mdb,
+                           struct dsa_db db)
 {
        struct ksz_device *dev = ds->priv;
-       struct alu_struct alu;
-       int index;
-
-       for (index = 0; index < dev->info->num_statics; index++) {
-               if (!dev->dev_ops->r_sta_mac_table(dev, index, &alu)) {
-                       /* Found one already in static MAC table. */
-                       if (!memcmp(alu.mac, mdb->addr, ETH_ALEN) &&
-                           alu.fid == mdb->vid)
-                               break;
-               }
-       }
 
-       /* no available entry */
-       if (index == dev->info->num_statics)
-               goto exit;
+       if (!dev->dev_ops->mdb_add)
+               return -EOPNOTSUPP;
 
-       /* clear port */
-       alu.port_forward &= ~BIT(port);
-       if (!alu.port_forward)
-               alu.is_static = false;
-       dev->dev_ops->w_sta_mac_table(dev, index, &alu);
+       return dev->dev_ops->mdb_add(dev, port, mdb, db);
+}
 
-exit:
-       return 0;
+static int ksz_port_mdb_del(struct dsa_switch *ds, int port,
+                           const struct switchdev_obj_port_mdb *mdb,
+                           struct dsa_db db)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (!dev->dev_ops->mdb_del)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->mdb_del(dev, port, mdb, db);
 }
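/* Editor's note, not part of the patch: the fdb/mdb/vlan/mirror wrappers
 * above all follow the same shape -- a NULL dev_ops hook means the chip has
 * no implementation, and -EOPNOTSUPP reports that back through the DSA core
 * instead of risking an indirect call through a NULL pointer.
 */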
-EXPORT_SYMBOL_GPL(ksz_port_mdb_del);
 
-int ksz_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
+static int ksz_enable_port(struct dsa_switch *ds, int port,
+                          struct phy_device *phy)
 {
        struct ksz_device *dev = ds->priv;
 
@@ -888,16 +1198,17 @@ int ksz_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
 
        return 0;
 }
-EXPORT_SYMBOL_GPL(ksz_enable_port);
 
-void ksz_port_stp_state_set(struct dsa_switch *ds, int port,
-                           u8 state, int reg)
+void ksz_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
 {
        struct ksz_device *dev = ds->priv;
        struct ksz_port *p;
+       const u16 *regs;
        u8 data;
 
-       ksz_pread8(dev, port, reg, &data);
+       regs = dev->info->regs;
+
+       ksz_pread8(dev, port, regs[P_STP_CTRL], &data);
        data &= ~(PORT_TX_ENABLE | PORT_RX_ENABLE | PORT_LEARN_DISABLE);
 
        switch (state) {
@@ -921,14 +1232,239 @@ void ksz_port_stp_state_set(struct dsa_switch *ds, int port,
                return;
        }
 
-       ksz_pwrite8(dev, port, reg, data);
+       ksz_pwrite8(dev, port, regs[P_STP_CTRL], data);
 
        p = &dev->ports[port];
        p->stp_state = state;
 
        ksz_update_port_member(dev, port);
 }
-EXPORT_SYMBOL_GPL(ksz_port_stp_state_set);
+
+static enum dsa_tag_protocol ksz_get_tag_protocol(struct dsa_switch *ds,
+                                                 int port,
+                                                 enum dsa_tag_protocol mp)
+{
+       struct ksz_device *dev = ds->priv;
+       enum dsa_tag_protocol proto = DSA_TAG_PROTO_NONE;
+
+       if (dev->chip_id == KSZ8795_CHIP_ID ||
+           dev->chip_id == KSZ8794_CHIP_ID ||
+           dev->chip_id == KSZ8765_CHIP_ID)
+               proto = DSA_TAG_PROTO_KSZ8795;
+
+       if (dev->chip_id == KSZ8830_CHIP_ID ||
+           dev->chip_id == KSZ9893_CHIP_ID)
+               proto = DSA_TAG_PROTO_KSZ9893;
+
+       if (dev->chip_id == KSZ9477_CHIP_ID ||
+           dev->chip_id == KSZ9897_CHIP_ID ||
+           dev->chip_id == KSZ9567_CHIP_ID)
+               proto = DSA_TAG_PROTO_KSZ9477;
+
+       if (is_lan937x(dev))
+               proto = DSA_TAG_PROTO_LAN937X_VALUE;
+
+       return proto;
+}
+
+static int ksz_port_vlan_filtering(struct dsa_switch *ds, int port,
+                                  bool flag, struct netlink_ext_ack *extack)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (!dev->dev_ops->vlan_filtering)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->vlan_filtering(dev, port, flag, extack);
+}
+
+static int ksz_port_vlan_add(struct dsa_switch *ds, int port,
+                            const struct switchdev_obj_port_vlan *vlan,
+                            struct netlink_ext_ack *extack)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (!dev->dev_ops->vlan_add)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->vlan_add(dev, port, vlan, extack);
+}
+
+static int ksz_port_vlan_del(struct dsa_switch *ds, int port,
+                            const struct switchdev_obj_port_vlan *vlan)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (!dev->dev_ops->vlan_del)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->vlan_del(dev, port, vlan);
+}
+
+static int ksz_port_mirror_add(struct dsa_switch *ds, int port,
+                              struct dsa_mall_mirror_tc_entry *mirror,
+                              bool ingress, struct netlink_ext_ack *extack)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (!dev->dev_ops->mirror_add)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->mirror_add(dev, port, mirror, ingress, extack);
+}
+
+static void ksz_port_mirror_del(struct dsa_switch *ds, int port,
+                               struct dsa_mall_mirror_tc_entry *mirror)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (dev->dev_ops->mirror_del)
+               dev->dev_ops->mirror_del(dev, port, mirror);
+}
+
+static int ksz_change_mtu(struct dsa_switch *ds, int port, int mtu)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (!dev->dev_ops->change_mtu)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->change_mtu(dev, port, mtu);
+}
+
+static int ksz_max_mtu(struct dsa_switch *ds, int port)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (!dev->dev_ops->max_mtu)
+               return -EOPNOTSUPP;
+
+       return dev->dev_ops->max_mtu(dev, port);
+}
+
+static void ksz_phylink_mac_config(struct dsa_switch *ds, int port,
+                                  unsigned int mode,
+                                  const struct phylink_link_state *state)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (dev->dev_ops->phylink_mac_config)
+               dev->dev_ops->phylink_mac_config(dev, port, mode, state);
+}
+
+static void ksz_phylink_mac_link_up(struct dsa_switch *ds, int port,
+                                   unsigned int mode,
+                                   phy_interface_t interface,
+                                   struct phy_device *phydev, int speed,
+                                   int duplex, bool tx_pause, bool rx_pause)
+{
+       struct ksz_device *dev = ds->priv;
+
+       if (dev->dev_ops->phylink_mac_link_up)
+               dev->dev_ops->phylink_mac_link_up(dev, port, mode, interface,
+                                                 phydev, speed, duplex,
+                                                 tx_pause, rx_pause);
+}
+
+static int ksz_switch_detect(struct ksz_device *dev)
+{
+       u8 id1, id2;
+       u16 id16;
+       u32 id32;
+       int ret;
+
+       /* read chip id */
+       ret = ksz_read16(dev, REG_CHIP_ID0, &id16);
+       if (ret)
+               return ret;
+
+       id1 = FIELD_GET(SW_FAMILY_ID_M, id16);
+       id2 = FIELD_GET(SW_CHIP_ID_M, id16);
+
+       switch (id1) {
+       case KSZ87_FAMILY_ID:
+               if (id2 == KSZ87_CHIP_ID_95) {
+                       u8 val;
+
+                       dev->chip_id = KSZ8795_CHIP_ID;
+
+                       ksz_read8(dev, KSZ8_PORT_STATUS_0, &val);
+                       if (val & KSZ8_PORT_FIBER_MODE)
+                               dev->chip_id = KSZ8765_CHIP_ID;
+               } else if (id2 == KSZ87_CHIP_ID_94) {
+                       dev->chip_id = KSZ8794_CHIP_ID;
+               } else {
+                       return -ENODEV;
+               }
+               break;
+       case KSZ88_FAMILY_ID:
+               if (id2 == KSZ88_CHIP_ID_63)
+                       dev->chip_id = KSZ8830_CHIP_ID;
+               else
+                       return -ENODEV;
+               break;
+       default:
+               ret = ksz_read32(dev, REG_CHIP_ID0, &id32);
+               if (ret)
+                       return ret;
+
+               dev->chip_rev = FIELD_GET(SW_REV_ID_M, id32);
+               id32 &= ~0xFF;
+
+               switch (id32) {
+               case KSZ9477_CHIP_ID:
+               case KSZ9897_CHIP_ID:
+               case KSZ9893_CHIP_ID:
+               case KSZ9567_CHIP_ID:
+               case LAN9370_CHIP_ID:
+               case LAN9371_CHIP_ID:
+               case LAN9372_CHIP_ID:
+               case LAN9373_CHIP_ID:
+               case LAN9374_CHIP_ID:
+                       dev->chip_id = id32;
+                       break;
+               default:
+                       dev_err(dev->dev,
+                               "unsupported switch detected %x)\n", id32);
+                       return -ENODEV;
+               }
+       }
+       return 0;
+}
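/* Editor's note, not part of the patch: detection tries the 16-bit
 * REG_CHIP_ID0 read first, which suffices for the KSZ87xx/KSZ88xx family
 * IDs; everything else falls back to the 32-bit read, extracts the revision
 * via SW_REV_ID_M into chip_rev, and masks off the low byte before matching,
 * so e.g. a KSZ9477 reads back as 0x009477xx and compares equal to
 * KSZ9477_CHIP_ID (0x00947700) after `id32 &= ~0xFF`.
 */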
+
+static const struct dsa_switch_ops ksz_switch_ops = {
+       .get_tag_protocol       = ksz_get_tag_protocol,
+       .get_phy_flags          = ksz_get_phy_flags,
+       .setup                  = ksz_setup,
+       .phy_read               = ksz_phy_read16,
+       .phy_write              = ksz_phy_write16,
+       .phylink_get_caps       = ksz_phylink_get_caps,
+       .phylink_mac_config     = ksz_phylink_mac_config,
+       .phylink_mac_link_up    = ksz_phylink_mac_link_up,
+       .phylink_mac_link_down  = ksz_mac_link_down,
+       .port_enable            = ksz_enable_port,
+       .get_strings            = ksz_get_strings,
+       .get_ethtool_stats      = ksz_get_ethtool_stats,
+       .get_sset_count         = ksz_sset_count,
+       .port_bridge_join       = ksz_port_bridge_join,
+       .port_bridge_leave      = ksz_port_bridge_leave,
+       .port_stp_state_set     = ksz_port_stp_state_set,
+       .port_fast_age          = ksz_port_fast_age,
+       .port_vlan_filtering    = ksz_port_vlan_filtering,
+       .port_vlan_add          = ksz_port_vlan_add,
+       .port_vlan_del          = ksz_port_vlan_del,
+       .port_fdb_dump          = ksz_port_fdb_dump,
+       .port_fdb_add           = ksz_port_fdb_add,
+       .port_fdb_del           = ksz_port_fdb_del,
+       .port_mdb_add           = ksz_port_mdb_add,
+       .port_mdb_del           = ksz_port_mdb_del,
+       .port_mirror_add        = ksz_port_mirror_add,
+       .port_mirror_del        = ksz_port_mirror_del,
+       .get_stats64            = ksz_get_stats64,
+       .get_pause_stats        = ksz_get_pause_stats,
+       .port_change_mtu        = ksz_change_mtu,
+       .port_max_mtu           = ksz_max_mtu,
+};
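/* Editor's sketch, not part of the patch: with ds->ops assigned centrally in
 * ksz_switch_alloc() below, a bus glue driver reduces to roughly this probe
 * body (regmap setup and error paths trimmed; the function name is
 * hypothetical, the two ksz_* calls are from this file).
 */
static int example_bus_probe(struct device *bus_dev, void *bus_priv)
{
	struct ksz_device *dev;

	dev = ksz_switch_alloc(bus_dev, bus_priv);
	if (!dev)
		return -ENOMEM;

	/* chip-specific ops now come from ksz_switch_chips[], not the caller */
	return ksz_switch_register(dev);
}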
 
 struct ksz_device *ksz_switch_alloc(struct device *base, void *priv)
 {
@@ -941,6 +1477,7 @@ struct ksz_device *ksz_switch_alloc(struct device *base, void *priv)
 
        ds->dev = base;
        ds->num_ports = DSA_MAX_PORTS;
+       ds->ops = &ksz_switch_ops;
 
        swdev = devm_kzalloc(base, sizeof(*swdev), GFP_KERNEL);
        if (!swdev)
@@ -956,8 +1493,7 @@ struct ksz_device *ksz_switch_alloc(struct device *base, void *priv)
 }
 EXPORT_SYMBOL(ksz_switch_alloc);
 
-int ksz_switch_register(struct ksz_device *dev,
-                       const struct ksz_dev_ops *ops)
+int ksz_switch_register(struct ksz_device *dev)
 {
        const struct ksz_chip_data *info;
        struct device_node *port, *ports;
@@ -986,10 +1522,9 @@ int ksz_switch_register(struct ksz_device *dev,
        mutex_init(&dev->alu_mutex);
        mutex_init(&dev->vlan_mutex);
 
-       dev->dev_ops = ops;
-
-       if (dev->dev_ops->detect(dev))
-               return -EINVAL;
+       ret = ksz_switch_detect(dev);
+       if (ret)
+               return ret;
 
        info = ksz_lookup_info(dev->chip_id);
        if (!info)
@@ -998,10 +1533,15 @@ int ksz_switch_register(struct ksz_device *dev,
        /* Update the compatible info with the probed one */
        dev->info = info;
 
+       dev_info(dev->dev, "found switch: %s, rev %i\n",
+                dev->info->dev_name, dev->chip_rev);
+
        ret = ksz_check_device_id(dev);
        if (ret)
                return ret;
 
+       dev->dev_ops = dev->info->ops;
+
        ret = dev->dev_ops->init(dev);
        if (ret)
                return ret;
@@ -1072,7 +1612,7 @@ int ksz_switch_register(struct ksz_device *dev,
        /* Start the MIB timer. */
        schedule_delayed_work(&dev->mib_read, 0);
 
-       return 0;
+       return ret;
 }
 EXPORT_SYMBOL(ksz_switch_register);
 
diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
index 8500eae..d5dddb7 100644
@@ -25,6 +25,7 @@ struct ksz_port_mib {
        u8 cnt_ptr;
        u64 *counters;
        struct rtnl_link_stats64 stats64;
+       struct ethtool_pause_stats pause_stats;
        struct spinlock stats64_lock;
 };
 
@@ -41,11 +42,19 @@ struct ksz_chip_data {
        int num_statics;
        int cpu_ports;
        int port_cnt;
+       const struct ksz_dev_ops *ops;
        bool phy_errata_9477;
        bool ksz87xx_eee_link_erratum;
        const struct ksz_mib_names *mib_names;
        int mib_cnt;
        u8 reg_mib_cnt;
+       const u16 *regs;
+       const u32 *masks;
+       const u8 *shifts;
+       int stp_ctrl_reg;
+       int broadcast_ctrl_reg;
+       int multicast_ctrl_reg;
+       int start_ctrl_reg;
        bool supports_mii[KSZ_MAX_NUM_PORTS];
        bool supports_rmii[KSZ_MAX_NUM_PORTS];
        bool supports_rgmii[KSZ_MAX_NUM_PORTS];
@@ -90,6 +99,7 @@ struct ksz_device {
 
        /* chip specific data */
        u32 chip_id;
+       u8 chip_rev;
        int cpu_port;                   /* port connected to CPU */
        int phy_port_cnt;
        phy_interface_t compat_interface;
@@ -140,6 +150,64 @@ enum ksz_chip_id {
        LAN9374_CHIP_ID = 0x00937400,
 };
 
+enum ksz_regs {
+       REG_IND_CTRL_0,
+       REG_IND_DATA_8,
+       REG_IND_DATA_CHECK,
+       REG_IND_DATA_HI,
+       REG_IND_DATA_LO,
+       REG_IND_MIB_CHECK,
+       REG_IND_BYTE,
+       P_FORCE_CTRL,
+       P_LINK_STATUS,
+       P_LOCAL_CTRL,
+       P_NEG_RESTART_CTRL,
+       P_REMOTE_STATUS,
+       P_SPEED_STATUS,
+       S_TAIL_TAG_CTRL,
+       P_STP_CTRL,
+       S_START_CTRL,
+       S_BROADCAST_CTRL,
+       S_MULTICAST_CTRL,
+};
+
+enum ksz_masks {
+       PORT_802_1P_REMAPPING,
+       SW_TAIL_TAG_ENABLE,
+       MIB_COUNTER_OVERFLOW,
+       MIB_COUNTER_VALID,
+       VLAN_TABLE_FID,
+       VLAN_TABLE_MEMBERSHIP,
+       VLAN_TABLE_VALID,
+       STATIC_MAC_TABLE_VALID,
+       STATIC_MAC_TABLE_USE_FID,
+       STATIC_MAC_TABLE_FID,
+       STATIC_MAC_TABLE_OVERRIDE,
+       STATIC_MAC_TABLE_FWD_PORTS,
+       DYNAMIC_MAC_TABLE_ENTRIES_H,
+       DYNAMIC_MAC_TABLE_MAC_EMPTY,
+       DYNAMIC_MAC_TABLE_NOT_READY,
+       DYNAMIC_MAC_TABLE_ENTRIES,
+       DYNAMIC_MAC_TABLE_FID,
+       DYNAMIC_MAC_TABLE_SRC_PORT,
+       DYNAMIC_MAC_TABLE_TIMESTAMP,
+       ALU_STAT_WRITE,
+       ALU_STAT_READ,
+};
+
+enum ksz_shifts {
+       VLAN_TABLE_MEMBERSHIP_S,
+       VLAN_TABLE,
+       STATIC_MAC_FWD_PORTS,
+       STATIC_MAC_FID,
+       DYNAMIC_MAC_ENTRIES_H,
+       DYNAMIC_MAC_ENTRIES,
+       DYNAMIC_MAC_FID,
+       DYNAMIC_MAC_TIMESTAMP,
+       DYNAMIC_MAC_SRC_PORT,
+       ALU_STAT_INDEX,
+};
+
 struct alu_struct {
        /* entry 1 */
        u8      is_static:1;
@@ -160,6 +228,7 @@ struct alu_struct {
 };
 
 struct ksz_dev_ops {
+       int (*setup)(struct dsa_switch *ds);
        u32 (*get_port_addr)(int port, int offset);
        void (*cfg_port_member)(struct ksz_device *dev, int port, u8 member);
        void (*flush_dyn_mac_table)(struct ksz_device *dev, int port);
@@ -167,71 +236,65 @@ struct ksz_dev_ops {
        void (*port_setup)(struct ksz_device *dev, int port, bool cpu_port);
        void (*r_phy)(struct ksz_device *dev, u16 phy, u16 reg, u16 *val);
        void (*w_phy)(struct ksz_device *dev, u16 phy, u16 reg, u16 val);
-       int (*r_dyn_mac_table)(struct ksz_device *dev, u16 addr, u8 *mac_addr,
-                              u8 *fid, u8 *src_port, u8 *timestamp,
-                              u16 *entries);
-       int (*r_sta_mac_table)(struct ksz_device *dev, u16 addr,
-                              struct alu_struct *alu);
-       void (*w_sta_mac_table)(struct ksz_device *dev, u16 addr,
-                               struct alu_struct *alu);
        void (*r_mib_cnt)(struct ksz_device *dev, int port, u16 addr,
                          u64 *cnt);
        void (*r_mib_pkt)(struct ksz_device *dev, int port, u16 addr,
                          u64 *dropped, u64 *cnt);
        void (*r_mib_stat64)(struct ksz_device *dev, int port);
+       int  (*vlan_filtering)(struct ksz_device *dev, int port,
+                              bool flag, struct netlink_ext_ack *extack);
+       int  (*vlan_add)(struct ksz_device *dev, int port,
+                        const struct switchdev_obj_port_vlan *vlan,
+                        struct netlink_ext_ack *extack);
+       int  (*vlan_del)(struct ksz_device *dev, int port,
+                        const struct switchdev_obj_port_vlan *vlan);
+       int (*mirror_add)(struct ksz_device *dev, int port,
+                         struct dsa_mall_mirror_tc_entry *mirror,
+                         bool ingress, struct netlink_ext_ack *extack);
+       void (*mirror_del)(struct ksz_device *dev, int port,
+                          struct dsa_mall_mirror_tc_entry *mirror);
+       int (*fdb_add)(struct ksz_device *dev, int port,
+                      const unsigned char *addr, u16 vid, struct dsa_db db);
+       int (*fdb_del)(struct ksz_device *dev, int port,
+                      const unsigned char *addr, u16 vid, struct dsa_db db);
+       int (*fdb_dump)(struct ksz_device *dev, int port,
+                       dsa_fdb_dump_cb_t *cb, void *data);
+       int (*mdb_add)(struct ksz_device *dev, int port,
+                      const struct switchdev_obj_port_mdb *mdb,
+                      struct dsa_db db);
+       int (*mdb_del)(struct ksz_device *dev, int port,
+                      const struct switchdev_obj_port_mdb *mdb,
+                      struct dsa_db db);
+       void (*get_caps)(struct ksz_device *dev, int port,
+                        struct phylink_config *config);
+       int (*change_mtu)(struct ksz_device *dev, int port, int mtu);
+       int (*max_mtu)(struct ksz_device *dev, int port);
        void (*freeze_mib)(struct ksz_device *dev, int port, bool freeze);
        void (*port_init_cnt)(struct ksz_device *dev, int port);
-       int (*shutdown)(struct ksz_device *dev);
-       int (*detect)(struct ksz_device *dev);
+       void (*phylink_mac_config)(struct ksz_device *dev, int port,
+                                  unsigned int mode,
+                                  const struct phylink_link_state *state);
+       void (*phylink_mac_link_up)(struct ksz_device *dev, int port,
+                                   unsigned int mode,
+                                   phy_interface_t interface,
+                                   struct phy_device *phydev, int speed,
+                                   int duplex, bool tx_pause, bool rx_pause);
+       void (*config_cpu_port)(struct dsa_switch *ds);
+       int (*enable_stp_addr)(struct ksz_device *dev);
+       int (*reset)(struct ksz_device *dev);
        int (*init)(struct ksz_device *dev);
        void (*exit)(struct ksz_device *dev);
 };
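
Folding the per-chip helpers into ksz_dev_ops turns the shared DSA
callbacks into thin dispatchers. A minimal sketch of the resulting shape,
with an illustrative function name (the real common-layer wrappers live in
ksz_common.c):

	static int ksz_port_vlan_add_sketch(struct dsa_switch *ds, int port,
					    const struct switchdev_obj_port_vlan *vlan,
					    struct netlink_ext_ack *extack)
	{
		struct ksz_device *dev = ds->priv;

		/* Chips lacking the feature leave the hook NULL. */
		if (!dev->dev_ops->vlan_add)
			return -EOPNOTSUPP;

		return dev->dev_ops->vlan_add(dev, port, vlan, extack);
	}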
 
 struct ksz_device *ksz_switch_alloc(struct device *base, void *priv);
-int ksz_switch_register(struct ksz_device *dev,
-                       const struct ksz_dev_ops *ops);
+int ksz_switch_register(struct ksz_device *dev);
 void ksz_switch_remove(struct ksz_device *dev);
 
-int ksz8_switch_register(struct ksz_device *dev);
-int ksz9477_switch_register(struct ksz_device *dev);
-
-void ksz_update_port_member(struct ksz_device *dev, int port);
 void ksz_init_mib_timer(struct ksz_device *dev);
 void ksz_r_mib_stats64(struct ksz_device *dev, int port);
-void ksz_get_stats64(struct dsa_switch *ds, int port,
-                    struct rtnl_link_stats64 *s);
-void ksz_phylink_get_caps(struct dsa_switch *ds, int port,
-                         struct phylink_config *config);
+void ksz_port_stp_state_set(struct dsa_switch *ds, int port, u8 state);
 extern const struct ksz_chip_data ksz_switch_chips[];
 
-/* Common DSA access functions */
-
-int ksz_phy_read16(struct dsa_switch *ds, int addr, int reg);
-int ksz_phy_write16(struct dsa_switch *ds, int addr, int reg, u16 val);
-void ksz_mac_link_down(struct dsa_switch *ds, int port, unsigned int mode,
-                      phy_interface_t interface);
-int ksz_sset_count(struct dsa_switch *ds, int port, int sset);
-void ksz_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *buf);
-int ksz_port_bridge_join(struct dsa_switch *ds, int port,
-                        struct dsa_bridge bridge, bool *tx_fwd_offload,
-                        struct netlink_ext_ack *extack);
-void ksz_port_bridge_leave(struct dsa_switch *ds, int port,
-                          struct dsa_bridge bridge);
-void ksz_port_stp_state_set(struct dsa_switch *ds, int port,
-                           u8 state, int reg);
-void ksz_port_fast_age(struct dsa_switch *ds, int port);
-int ksz_port_fdb_dump(struct dsa_switch *ds, int port, dsa_fdb_dump_cb_t *cb,
-                     void *data);
-int ksz_port_mdb_add(struct dsa_switch *ds, int port,
-                    const struct switchdev_obj_port_mdb *mdb,
-                    struct dsa_db db);
-int ksz_port_mdb_del(struct dsa_switch *ds, int port,
-                    const struct switchdev_obj_port_mdb *mdb,
-                    struct dsa_db db);
-int ksz_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy);
-void ksz_get_strings(struct dsa_switch *ds, int port,
-                    u32 stringset, uint8_t *buf);
-
 /* Common register access functions */
 
 static inline int ksz_read8(struct ksz_device *dev, u32 reg, u8 *val)
@@ -348,11 +411,51 @@ static inline void ksz_regmap_unlock(void *__mtx)
        mutex_unlock(mtx);
 }
 
+static inline int is_lan937x(struct ksz_device *dev)
+{
+       return dev->chip_id == LAN9370_CHIP_ID ||
+               dev->chip_id == LAN9371_CHIP_ID ||
+               dev->chip_id == LAN9372_CHIP_ID ||
+               dev->chip_id == LAN9373_CHIP_ID ||
+               dev->chip_id == LAN9374_CHIP_ID;
+}
+
 /* STP State Defines */
 #define PORT_TX_ENABLE                 BIT(2)
 #define PORT_RX_ENABLE                 BIT(1)
 #define PORT_LEARN_DISABLE             BIT(0)
 
+/* Switch ID Defines */
+#define REG_CHIP_ID0                   0x00
+
+#define SW_FAMILY_ID_M                 GENMASK(15, 8)
+#define KSZ87_FAMILY_ID                        0x87
+#define KSZ88_FAMILY_ID                        0x88
+
+#define KSZ8_PORT_STATUS_0             0x08
+#define KSZ8_PORT_FIBER_MODE           BIT(7)
+
+#define SW_CHIP_ID_M                   GENMASK(7, 4)
+#define KSZ87_CHIP_ID_94               0x6
+#define KSZ87_CHIP_ID_95               0x9
+#define KSZ88_CHIP_ID_63               0x3
+
+#define SW_REV_ID_M                    GENMASK(7, 4)
+
+/* The driver sets the switch broadcast storm protection at a 10% rate. */
+#define BROADCAST_STORM_PROT_RATE      10
+
+/* 148,800 frames/s (64-byte wire rate at 100 Mbit/s) * 67 ms = 9969 frames */
+#define BROADCAST_STORM_VALUE          9969
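
Worked numbers for the two defines above: 148,800 frames/s * 0.067 s is
9969.6, truncated to 9969, the frame capacity of one measurement interval.
At programming time the common setup code scales that capacity by the
protection rate, 9969 * 10 / 100 = 996 frames per 67 ms interval, a value
that fits in the 11-bit BROADCAST_STORM_RATE field spread across the HI/LO
register pair defined below.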
+
+#define BROADCAST_STORM_RATE_HI                0x07
+#define BROADCAST_STORM_RATE_LO                0xFF
+#define BROADCAST_STORM_RATE           0x07FF
+
+#define MULTICAST_STORM_DISABLE                BIT(6)
+
+#define SW_START                       0x01
+
 /* Regmap tables generation */
 #define KSZ_SPI_OP_RD          3
 #define KSZ_SPI_OP_WR          2
diff --git a/drivers/net/dsa/microchip/ksz_spi.c b/drivers/net/dsa/microchip/ksz_spi.c
new file mode 100644 (file)
index 0000000..4844830
--- /dev/null
@@ -0,0 +1,237 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Microchip ksz series register access through SPI
+ *
+ * Copyright (C) 2017 Microchip Technology Inc.
+ *     Tristram Ha <Tristram.Ha@microchip.com>
+ */
+
+#include <asm/unaligned.h>
+
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/regmap.h>
+#include <linux/spi/spi.h>
+
+#include "ksz_common.h"
+
+#define KSZ8795_SPI_ADDR_SHIFT                 12
+#define KSZ8795_SPI_ADDR_ALIGN                 3
+#define KSZ8795_SPI_TURNAROUND_SHIFT           1
+
+#define KSZ8863_SPI_ADDR_SHIFT                 8
+#define KSZ8863_SPI_ADDR_ALIGN                 8
+#define KSZ8863_SPI_TURNAROUND_SHIFT           0
+
+#define KSZ9477_SPI_ADDR_SHIFT                 24
+#define KSZ9477_SPI_ADDR_ALIGN                 3
+#define KSZ9477_SPI_TURNAROUND_SHIFT           5
+
+KSZ_REGMAP_TABLE(ksz8795, 16, KSZ8795_SPI_ADDR_SHIFT,
+                KSZ8795_SPI_TURNAROUND_SHIFT, KSZ8795_SPI_ADDR_ALIGN);
+
+KSZ_REGMAP_TABLE(ksz8863, 16, KSZ8863_SPI_ADDR_SHIFT,
+                KSZ8863_SPI_TURNAROUND_SHIFT, KSZ8863_SPI_ADDR_ALIGN);
+
+KSZ_REGMAP_TABLE(ksz9477, 32, KSZ9477_SPI_ADDR_SHIFT,
+                KSZ9477_SPI_TURNAROUND_SHIFT, KSZ9477_SPI_ADDR_ALIGN);
+
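Each KSZ_REGMAP_TABLE() line above expands to an array of regmap_config
entries, one per access width, sharing that chip's address shift,
turnaround bits and alignment; the probe loop below instantiates them into
dev->regmap[]. By the convention used throughout this series, index 0
serves the 8-bit accessors, 1 the 16-bit and 2 the 32-bit ones: compare
lan937x_cfg(), which updates regmap[0], with the 16-bit VPHY poll on
regmap[1] later in this patch.
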
+static int ksz_spi_probe(struct spi_device *spi)
+{
+       const struct regmap_config *regmap_config;
+       const struct ksz_chip_data *chip;
+       struct device *ddev = &spi->dev;
+       struct regmap_config rc;
+       struct ksz_device *dev;
+       int i, ret = 0;
+
+       dev = ksz_switch_alloc(&spi->dev, spi);
+       if (!dev)
+               return -ENOMEM;
+
+       chip = device_get_match_data(ddev);
+       if (!chip)
+               return -EINVAL;
+
+       if (chip->chip_id == KSZ8830_CHIP_ID)
+               regmap_config = ksz8863_regmap_config;
+       else if (chip->chip_id == KSZ8795_CHIP_ID ||
+                chip->chip_id == KSZ8794_CHIP_ID ||
+                chip->chip_id == KSZ8765_CHIP_ID)
+               regmap_config = ksz8795_regmap_config;
+       else
+               regmap_config = ksz9477_regmap_config;
+
+       /* All chips generate regmap config arrays of the same size via
+        * KSZ_REGMAP_TABLE(), so any of them can provide the loop bound.
+        */
+       for (i = 0; i < ARRAY_SIZE(ksz8795_regmap_config); i++) {
+               rc = regmap_config[i];
+               rc.lock_arg = &dev->regmap_mutex;
+               dev->regmap[i] = devm_regmap_init_spi(spi, &rc);
+               if (IS_ERR(dev->regmap[i])) {
+                       ret = PTR_ERR(dev->regmap[i]);
+                       dev_err(&spi->dev,
+                               "Failed to initialize regmap%i: %d\n",
+                               regmap_config[i].val_bits, ret);
+                       return ret;
+               }
+       }
+
+       if (spi->dev.platform_data)
+               dev->pdata = spi->dev.platform_data;
+
+       /* setup spi */
+       spi->mode = SPI_MODE_3;
+       ret = spi_setup(spi);
+       if (ret)
+               return ret;
+
+       ret = ksz_switch_register(dev);
+
+       /* Main DSA driver may not be started yet. */
+       if (ret)
+               return ret;
+
+       spi_set_drvdata(spi, dev);
+
+       return 0;
+}
+
+static void ksz_spi_remove(struct spi_device *spi)
+{
+       struct ksz_device *dev = spi_get_drvdata(spi);
+
+       if (dev)
+               ksz_switch_remove(dev);
+
+       spi_set_drvdata(spi, NULL);
+}
+
+static void ksz_spi_shutdown(struct spi_device *spi)
+{
+       struct ksz_device *dev = spi_get_drvdata(spi);
+
+       if (!dev)
+               return;
+
+       if (dev->dev_ops->reset)
+               dev->dev_ops->reset(dev);
+
+       dsa_switch_shutdown(dev->ds);
+
+       spi_set_drvdata(spi, NULL);
+}
+
+static const struct of_device_id ksz_dt_ids[] = {
+       {
+               .compatible = "microchip,ksz8765",
+               .data = &ksz_switch_chips[KSZ8765]
+       },
+       {
+               .compatible = "microchip,ksz8794",
+               .data = &ksz_switch_chips[KSZ8794]
+       },
+       {
+               .compatible = "microchip,ksz8795",
+               .data = &ksz_switch_chips[KSZ8795]
+       },
+       {
+               .compatible = "microchip,ksz8863",
+               .data = &ksz_switch_chips[KSZ8830]
+       },
+       {
+               .compatible = "microchip,ksz8873",
+               .data = &ksz_switch_chips[KSZ8830]
+       },
+       {
+               .compatible = "microchip,ksz9477",
+               .data = &ksz_switch_chips[KSZ9477]
+       },
+       {
+               .compatible = "microchip,ksz9897",
+               .data = &ksz_switch_chips[KSZ9897]
+       },
+       {
+               .compatible = "microchip,ksz9893",
+               .data = &ksz_switch_chips[KSZ9893]
+       },
+       {
+               .compatible = "microchip,ksz9563",
+               .data = &ksz_switch_chips[KSZ9893]
+       },
+       {
+               .compatible = "microchip,ksz8563",
+               .data = &ksz_switch_chips[KSZ9893]
+       },
+       {
+               .compatible = "microchip,ksz9567",
+               .data = &ksz_switch_chips[KSZ9567]
+       },
+       {
+               .compatible = "microchip,lan9370",
+               .data = &ksz_switch_chips[LAN9370]
+       },
+       {
+               .compatible = "microchip,lan9371",
+               .data = &ksz_switch_chips[LAN9371]
+       },
+       {
+               .compatible = "microchip,lan9372",
+               .data = &ksz_switch_chips[LAN9372]
+       },
+       {
+               .compatible = "microchip,lan9373",
+               .data = &ksz_switch_chips[LAN9373]
+       },
+       {
+               .compatible = "microchip,lan9374",
+               .data = &ksz_switch_chips[LAN9374]
+       },
+       {},
+};
+MODULE_DEVICE_TABLE(of, ksz_dt_ids);
+
+static const struct spi_device_id ksz_spi_ids[] = {
+       { "ksz8765" },
+       { "ksz8794" },
+       { "ksz8795" },
+       { "ksz8863" },
+       { "ksz8873" },
+       { "ksz9477" },
+       { "ksz9897" },
+       { "ksz9893" },
+       { "ksz9563" },
+       { "ksz8563" },
+       { "ksz9567" },
+       { "lan9370" },
+       { "lan9371" },
+       { "lan9372" },
+       { "lan9373" },
+       { "lan9374" },
+       { },
+};
+MODULE_DEVICE_TABLE(spi, ksz_spi_ids);
+
+static struct spi_driver ksz_spi_driver = {
+       .driver = {
+               .name   = "ksz-switch",
+               .owner  = THIS_MODULE,
+               .of_match_table = of_match_ptr(ksz_dt_ids),
+       },
+       .id_table = ksz_spi_ids,
+       .probe  = ksz_spi_probe,
+       .remove = ksz_spi_remove,
+       .shutdown = ksz_spi_shutdown,
+};
+
+module_spi_driver(ksz_spi_driver);
+
+MODULE_ALIAS("spi:ksz9477");
+MODULE_ALIAS("spi:ksz9897");
+MODULE_ALIAS("spi:ksz9893");
+MODULE_ALIAS("spi:ksz9563");
+MODULE_ALIAS("spi:ksz8563");
+MODULE_ALIAS("spi:ksz9567");
+MODULE_ALIAS("spi:lan937x");
+MODULE_AUTHOR("Tristram Ha <Tristram.Ha@microchip.com>");
+MODULE_DESCRIPTION("Microchip ksz Series Switch SPI Driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/dsa/microchip/lan937x.h b/drivers/net/dsa/microchip/lan937x.h
new file mode 100644 (file)
index 0000000..72ba9cb
--- /dev/null
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Microchip lan937x dev ops headers
+ * Copyright (C) 2019-2022 Microchip Technology Inc.
+ */
+
+#ifndef __LAN937X_CFG_H
+#define __LAN937X_CFG_H
+
+int lan937x_reset_switch(struct ksz_device *dev);
+int lan937x_setup(struct dsa_switch *ds);
+void lan937x_port_setup(struct ksz_device *dev, int port, bool cpu_port);
+void lan937x_config_cpu_port(struct dsa_switch *ds);
+int lan937x_switch_init(struct ksz_device *dev);
+void lan937x_switch_exit(struct ksz_device *dev);
+void lan937x_r_phy(struct ksz_device *dev, u16 addr, u16 reg, u16 *data);
+void lan937x_w_phy(struct ksz_device *dev, u16 addr, u16 reg, u16 val);
+int lan937x_change_mtu(struct ksz_device *dev, int port, int new_mtu);
+void lan937x_phylink_get_caps(struct ksz_device *dev, int port,
+                             struct phylink_config *config);
+void lan937x_phylink_mac_link_up(struct ksz_device *dev, int port,
+                                unsigned int mode, phy_interface_t interface,
+                                struct phy_device *phydev, int speed,
+                                int duplex, bool tx_pause, bool rx_pause);
+void lan937x_phylink_mac_config(struct ksz_device *dev, int port,
+                               unsigned int mode,
+                               const struct phylink_link_state *state);
+#endif
diff --git a/drivers/net/dsa/microchip/lan937x_main.c b/drivers/net/dsa/microchip/lan937x_main.c
new file mode 100644 (file)
index 0000000..c29d175
--- /dev/null
@@ -0,0 +1,484 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Microchip LAN937X switch driver main logic
+ * Copyright (C) 2019-2022 Microchip Technology Inc.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/iopoll.h>
+#include <linux/phy.h>
+#include <linux/of_net.h>
+#include <linux/of_mdio.h>
+#include <linux/if_bridge.h>
+#include <linux/if_vlan.h>
+#include <linux/math.h>
+#include <net/dsa.h>
+#include <net/switchdev.h>
+
+#include "lan937x_reg.h"
+#include "ksz_common.h"
+#include "lan937x.h"
+
+static int lan937x_cfg(struct ksz_device *dev, u32 addr, u8 bits, bool set)
+{
+       return regmap_update_bits(dev->regmap[0], addr, bits, set ? bits : 0);
+}
+
+static int lan937x_port_cfg(struct ksz_device *dev, int port, int offset,
+                           u8 bits, bool set)
+{
+       return regmap_update_bits(dev->regmap[0], PORT_CTRL_ADDR(port, offset),
+                                 bits, set ? bits : 0);
+}
+
+static int lan937x_enable_spi_indirect_access(struct ksz_device *dev)
+{
+       u16 data16;
+       int ret;
+
+       /* Enable Phy access through SPI */
+       ret = lan937x_cfg(dev, REG_GLOBAL_CTRL_0, SW_PHY_REG_BLOCK, false);
+       if (ret < 0)
+               return ret;
+
+       ret = ksz_read16(dev, REG_VPHY_SPECIAL_CTRL__2, &data16);
+       if (ret < 0)
+               return ret;
+
+       /* Allow SPI access */
+       data16 |= VPHY_SPI_INDIRECT_ENABLE;
+
+       return ksz_write16(dev, REG_VPHY_SPECIAL_CTRL__2, data16);
+}
+
+static int lan937x_vphy_ind_addr_wr(struct ksz_device *dev, int addr, int reg)
+{
+       u16 addr_base = REG_PORT_T1_PHY_CTRL_BASE;
+       u16 temp;
+
+       /* get register address based on the logical port */
+       temp = PORT_CTRL_ADDR(addr, (addr_base + (reg << 2)));
+
+       return ksz_write16(dev, REG_VPHY_IND_ADDR__2, temp);
+}
+
+static int lan937x_internal_phy_write(struct ksz_device *dev, int addr, int reg,
+                                     u16 val)
+{
+       unsigned int value;
+       int ret;
+
+       /* Check for internal phy port */
+       if (!dev->info->internal_phy[addr])
+               return -EOPNOTSUPP;
+
+       ret = lan937x_vphy_ind_addr_wr(dev, addr, reg);
+       if (ret < 0)
+               return ret;
+
+       /* Write the data to be written to the VPHY reg */
+       ret = ksz_write16(dev, REG_VPHY_IND_DATA__2, val);
+       if (ret < 0)
+               return ret;
+
+       /* Write the Write En and Busy bit */
+       ret = ksz_write16(dev, REG_VPHY_IND_CTRL__2,
+                         (VPHY_IND_WRITE | VPHY_IND_BUSY));
+       if (ret < 0)
+               return ret;
+
+       ret = regmap_read_poll_timeout(dev->regmap[1], REG_VPHY_IND_CTRL__2,
+                                      value, !(value & VPHY_IND_BUSY), 10,
+                                      1000);
+       if (ret < 0) {
+               dev_err(dev->dev, "Failed to write phy register\n");
+               return ret;
+       }
+
+       return 0;
+}
+
+static int lan937x_internal_phy_read(struct ksz_device *dev, int addr, int reg,
+                                    u16 *val)
+{
+       unsigned int value;
+       int ret;
+
+       /* Non-existent phy: report 0xffff, as a real absent PHY would */
+       if (!dev->info->internal_phy[addr]) {
+               *val = 0xffff;
+               return 0;
+       }
+
+       ret = lan937x_vphy_ind_addr_wr(dev, addr, reg);
+       if (ret < 0)
+               return ret;
+
+       /* Start the transaction by setting only the Busy bit; a cleared
+        * Write bit selects a read
+        */
+       ret = ksz_write16(dev, REG_VPHY_IND_CTRL__2, VPHY_IND_BUSY);
+       if (ret < 0)
+               return ret;
+
+       ret = regmap_read_poll_timeout(dev->regmap[1], REG_VPHY_IND_CTRL__2,
+                                      value, !(value & VPHY_IND_BUSY), 10,
+                                      1000);
+       if (ret < 0) {
+               dev_err(dev->dev, "Failed to read phy register\n");
+               return ret;
+       }
+
+       /* Read the VPHY register which has the PHY data */
+       return ksz_read16(dev, REG_VPHY_IND_DATA__2, val);
+}
+
+void lan937x_r_phy(struct ksz_device *dev, u16 addr, u16 reg, u16 *data)
+{
+       lan937x_internal_phy_read(dev, addr, reg, data);
+}
+
+void lan937x_w_phy(struct ksz_device *dev, u16 addr, u16 reg, u16 val)
+{
+       lan937x_internal_phy_write(dev, addr, reg, val);
+}
+
+static int lan937x_sw_mdio_read(struct mii_bus *bus, int addr, int regnum)
+{
+       struct ksz_device *dev = bus->priv;
+       u16 val;
+       int ret;
+
+       if (regnum & MII_ADDR_C45)
+               return -EOPNOTSUPP;
+
+       ret = lan937x_internal_phy_read(dev, addr, regnum, &val);
+       if (ret < 0)
+               return ret;
+
+       return val;
+}
+
+static int lan937x_sw_mdio_write(struct mii_bus *bus, int addr, int regnum,
+                                u16 val)
+{
+       struct ksz_device *dev = bus->priv;
+
+       if (regnum & MII_ADDR_C45)
+               return -EOPNOTSUPP;
+
+       return lan937x_internal_phy_write(dev, addr, regnum, val);
+}
+
+static int lan937x_mdio_register(struct ksz_device *dev)
+{
+       struct dsa_switch *ds = dev->ds;
+       struct device_node *mdio_np;
+       struct mii_bus *bus;
+       int ret;
+
+       mdio_np = of_get_child_by_name(dev->dev->of_node, "mdio");
+       if (!mdio_np) {
+               dev_err(ds->dev, "no MDIO bus node\n");
+               return -ENODEV;
+       }
+
+       bus = devm_mdiobus_alloc(ds->dev);
+       if (!bus) {
+               of_node_put(mdio_np);
+               return -ENOMEM;
+       }
+
+       bus->priv = dev;
+       bus->read = lan937x_sw_mdio_read;
+       bus->write = lan937x_sw_mdio_write;
+       bus->name = "lan937x slave smi";
+       snprintf(bus->id, MII_BUS_ID_SIZE, "SMI-%d", ds->index);
+       bus->parent = ds->dev;
+       bus->phy_mask = ~ds->phys_mii_mask;
+
+       ds->slave_mii_bus = bus;
+
+       ret = devm_of_mdiobus_register(ds->dev, bus, mdio_np);
+       if (ret) {
+               dev_err(ds->dev, "unable to register MDIO bus %s\n",
+                       bus->id);
+       }
+
+       of_node_put(mdio_np);
+
+       return ret;
+}
+
+int lan937x_reset_switch(struct ksz_device *dev)
+{
+       u32 data32;
+       int ret;
+
+       /* reset switch */
+       ret = lan937x_cfg(dev, REG_SW_OPERATION, SW_RESET, true);
+       if (ret < 0)
+               return ret;
+
+       /* Enable Auto Aging */
+       ret = lan937x_cfg(dev, REG_SW_LUE_CTRL_1, SW_LINK_AUTO_AGING, true);
+       if (ret < 0)
+               return ret;
+
+       /* disable interrupts */
+       ret = ksz_write32(dev, REG_SW_INT_MASK__4, SWITCH_INT_MASK);
+       if (ret < 0)
+               return ret;
+
+       ret = ksz_write32(dev, REG_SW_PORT_INT_MASK__4, 0xFF);
+       if (ret < 0)
+               return ret;
+
+       return ksz_read32(dev, REG_SW_PORT_INT_STATUS__4, &data32);
+}
+
+void lan937x_port_setup(struct ksz_device *dev, int port, bool cpu_port)
+{
+       struct dsa_switch *ds = dev->ds;
+       u8 member;
+
+       /* enable the tail tag for the host port */
+       if (cpu_port)
+               lan937x_port_cfg(dev, port, REG_PORT_CTRL_0,
+                                PORT_TAIL_TAG_ENABLE, true);
+
+       /* disable checking of the frame length field */
+       lan937x_port_cfg(dev, port, REG_PORT_MAC_CTRL_0, PORT_CHECK_LENGTH,
+                        false);
+
+       /* set back pressure for half duplex */
+       lan937x_port_cfg(dev, port, REG_PORT_MAC_CTRL_1, PORT_BACK_PRESSURE,
+                        true);
+
+       /* enable 802.1p priority */
+       lan937x_port_cfg(dev, port, P_PRIO_CTRL, PORT_802_1P_PRIO_ENABLE, true);
+
+       if (!dev->info->internal_phy[port])
+               lan937x_port_cfg(dev, port, REG_PORT_XMII_CTRL_0,
+                                PORT_MII_TX_FLOW_CTRL | PORT_MII_RX_FLOW_CTRL,
+                                true);
+
+       if (cpu_port)
+               member = dsa_user_ports(ds);
+       else
+               member = BIT(dsa_upstream_port(ds, port));
+
+       dev->dev_ops->cfg_port_member(dev, port, member);
+}
+
+void lan937x_config_cpu_port(struct dsa_switch *ds)
+{
+       struct ksz_device *dev = ds->priv;
+       struct dsa_port *dp;
+
+       dsa_switch_for_each_cpu_port(dp, ds) {
+               if (dev->info->cpu_ports & (1 << dp->index)) {
+                       dev->cpu_port = dp->index;
+
+                       /* enable cpu port */
+                       lan937x_port_setup(dev, dp->index, true);
+               }
+       }
+
+       dsa_switch_for_each_user_port(dp, ds) {
+               ksz_port_stp_state_set(ds, dp->index, BR_STATE_DISABLED);
+       }
+}
+
+int lan937x_change_mtu(struct ksz_device *dev, int port, int new_mtu)
+{
+       struct dsa_switch *ds = dev->ds;
+       int ret;
+
+       new_mtu += VLAN_ETH_HLEN + ETH_FCS_LEN;
+
+       if (dsa_is_cpu_port(ds, port))
+               new_mtu += LAN937X_TAG_LEN;
+
+       if (new_mtu >= FR_MIN_SIZE)
+               ret = lan937x_port_cfg(dev, port, REG_PORT_MAC_CTRL_0,
+                                      PORT_JUMBO_PACKET, true);
+       else
+               ret = lan937x_port_cfg(dev, port, REG_PORT_MAC_CTRL_0,
+                                      PORT_JUMBO_PACKET, false);
+       if (ret < 0) {
+               dev_err(ds->dev, "failed to enable jumbo\n");
+               return ret;
+       }
+
+       /* Write the frame size in PORT_MAX_FR_SIZE register */
+       ksz_pwrite16(dev, port, PORT_MAX_FR_SIZE, new_mtu);
+
+       return 0;
+}
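
Worked example of the threshold: a standard 1500-byte MTU on a user port
becomes 1500 + VLAN_ETH_HLEN (18) + ETH_FCS_LEN (4) = 1522, exactly
FR_MIN_SIZE, so the >= check enables jumbo handling and PORT_MAX_FR_SIZE
is programmed to 1522; on a CPU port the 2-byte LAN937X_TAG_LEN tail tag
raises this to 1524.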
+
+static void lan937x_config_gbit(struct ksz_device *dev, bool gbit, u8 *data)
+{
+       if (gbit)
+               *data &= ~PORT_MII_NOT_1GBIT;
+       else
+               *data |= PORT_MII_NOT_1GBIT;
+}
+
+static void lan937x_mac_config(struct ksz_device *dev, int port,
+                              phy_interface_t interface)
+{
+       u8 data8;
+
+       ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &data8);
+
+       /* clear MII selection & set it based on interface later */
+       data8 &= ~PORT_MII_SEL_M;
+
+       /* configure MAC based on interface */
+       switch (interface) {
+       case PHY_INTERFACE_MODE_MII:
+               lan937x_config_gbit(dev, false, &data8);
+               data8 |= PORT_MII_SEL;
+               break;
+       case PHY_INTERFACE_MODE_RMII:
+               lan937x_config_gbit(dev, false, &data8);
+               data8 |= PORT_RMII_SEL;
+               break;
+       default:
+               dev_err(dev->dev, "Unsupported interface '%s' for port %d\n",
+                       phy_modes(interface), port);
+               return;
+       }
+
+       /* Write the updated value */
+       ksz_pwrite8(dev, port, REG_PORT_XMII_CTRL_1, data8);
+}
+
+static void lan937x_config_interface(struct ksz_device *dev, int port,
+                                    int speed, int duplex,
+                                    bool tx_pause, bool rx_pause)
+{
+       u8 xmii_ctrl0, xmii_ctrl1;
+
+       ksz_pread8(dev, port, REG_PORT_XMII_CTRL_0, &xmii_ctrl0);
+       ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &xmii_ctrl1);
+
+       xmii_ctrl0 &= ~(PORT_MII_100MBIT | PORT_MII_FULL_DUPLEX |
+                       PORT_MII_TX_FLOW_CTRL | PORT_MII_RX_FLOW_CTRL);
+
+       if (speed == SPEED_1000)
+               lan937x_config_gbit(dev, true, &xmii_ctrl1);
+       else
+               lan937x_config_gbit(dev, false, &xmii_ctrl1);
+
+       if (speed == SPEED_100)
+               xmii_ctrl0 |= PORT_MII_100MBIT;
+
+       if (duplex)
+               xmii_ctrl0 |= PORT_MII_FULL_DUPLEX;
+
+       if (tx_pause)
+               xmii_ctrl0 |= PORT_MII_TX_FLOW_CTRL;
+
+       if (rx_pause)
+               xmii_ctrl0 |= PORT_MII_RX_FLOW_CTRL;
+
+       ksz_pwrite8(dev, port, REG_PORT_XMII_CTRL_0, xmii_ctrl0);
+       ksz_pwrite8(dev, port, REG_PORT_XMII_CTRL_1, xmii_ctrl1);
+}
+
+void lan937x_phylink_get_caps(struct ksz_device *dev, int port,
+                             struct phylink_config *config)
+{
+       config->mac_capabilities = MAC_100FD;
+
+       if (dev->info->supports_rgmii[port]) {
+               /* MII/RMII/RGMII ports */
+               config->mac_capabilities |= MAC_ASYM_PAUSE | MAC_SYM_PAUSE |
+                                           MAC_100HD | MAC_10 | MAC_1000FD;
+       }
+}
+
+void lan937x_phylink_mac_link_up(struct ksz_device *dev, int port,
+                                unsigned int mode, phy_interface_t interface,
+                                struct phy_device *phydev, int speed,
+                                int duplex, bool tx_pause, bool rx_pause)
+{
+       /* Internal PHYs */
+       if (dev->info->internal_phy[port])
+               return;
+
+       lan937x_config_interface(dev, port, speed, duplex,
+                                tx_pause, rx_pause);
+}
+
+void lan937x_phylink_mac_config(struct ksz_device *dev, int port,
+                               unsigned int mode,
+                               const struct phylink_link_state *state)
+{
+       /* Internal PHYs */
+       if (dev->info->internal_phy[port])
+               return;
+
+       if (phylink_autoneg_inband(mode)) {
+               dev_err(dev->dev, "In-band AN not supported!\n");
+               return;
+       }
+
+       lan937x_mac_config(dev, port, state->interface);
+}
+
+int lan937x_setup(struct dsa_switch *ds)
+{
+       struct ksz_device *dev = ds->priv;
+       int ret;
+
+       /* enable Indirect Access from SPI to the VPHY registers */
+       ret = lan937x_enable_spi_indirect_access(dev);
+       if (ret < 0) {
+               dev_err(dev->dev, "failed to enable spi indirect access");
+               return ret;
+       }
+
+       ret = lan937x_mdio_register(dev);
+       if (ret < 0) {
+               dev_err(dev->dev, "failed to register the mdio");
+               return ret;
+       }
+
+       /* VLAN awareness is a global setting. Mixed VLAN
+        * filtering across ports is not supported.
+        */
+       ds->vlan_filtering_is_global = true;
+
+       /* Enable aggressive backoff for half duplex & UNH mode */
+       lan937x_cfg(dev, REG_SW_MAC_CTRL_0,
+                   (SW_PAUSE_UNH_MODE | SW_NEW_BACKOFF | SW_AGGR_BACKOFF),
+                   true);
+
+       /* If NO_EXC_COLLISION_DROP bit is set, the switch will not drop
+        * packets when 16 or more collisions occur
+        */
+       lan937x_cfg(dev, REG_SW_MAC_CTRL_1, NO_EXC_COLLISION_DROP, true);
+
+       /* enable global MIB counter freeze function */
+       lan937x_cfg(dev, REG_SW_MAC_CTRL_6, SW_MIB_COUNTER_FREEZE, true);
+
+       /* disable CLK125 & CLK25, 1: disable, 0: enable */
+       lan937x_cfg(dev, REG_SW_GLOBAL_OUTPUT_CTRL__1,
+                   (SW_CLK125_ENB | SW_CLK25_ENB), true);
+
+       return 0;
+}
+
+int lan937x_switch_init(struct ksz_device *dev)
+{
+       dev->port_mask = (1 << dev->info->port_cnt) - 1;
+
+       return 0;
+}
+
+void lan937x_switch_exit(struct ksz_device *dev)
+{
+       lan937x_reset_switch(dev);
+}
+
+MODULE_AUTHOR("Arun Ramadoss <arun.ramadoss@microchip.com>");
+MODULE_DESCRIPTION("Microchip LAN937x Series Switch DSA Driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/dsa/microchip/lan937x_reg.h b/drivers/net/dsa/microchip/lan937x_reg.h
new file mode 100644 (file)
index 0000000..c187d0a
--- /dev/null
@@ -0,0 +1,180 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Microchip LAN937X switch register definitions
+ * Copyright (C) 2019-2021 Microchip Technology Inc.
+ */
+#ifndef __LAN937X_REG_H
+#define __LAN937X_REG_H
+
+#define PORT_CTRL_ADDR(port, addr)     ((addr) | (((port) + 1)  << 12))
+
+/* 0 - Operation */
+#define REG_GLOBAL_CTRL_0              0x0007
+
+#define SW_PHY_REG_BLOCK               BIT(7)
+#define SW_FAST_MODE                   BIT(3)
+#define SW_FAST_MODE_OVERRIDE          BIT(2)
+
+#define REG_SW_INT_STATUS__4           0x0010
+#define REG_SW_INT_MASK__4             0x0014
+
+#define LUE_INT                                BIT(31)
+#define TRIG_TS_INT                    BIT(30)
+#define APB_TIMEOUT_INT                        BIT(29)
+#define OVER_TEMP_INT                  BIT(28)
+#define HSR_INT                                BIT(27)
+#define PIO_INT                                BIT(26)
+#define POR_READY_INT                  BIT(25)
+
+#define SWITCH_INT_MASK                        \
+       (LUE_INT | TRIG_TS_INT | APB_TIMEOUT_INT | OVER_TEMP_INT | HSR_INT | \
+        PIO_INT | POR_READY_INT)
+
+#define REG_SW_PORT_INT_STATUS__4      0x0018
+#define REG_SW_PORT_INT_MASK__4                0x001C
+
+/* 1 - Global */
+#define REG_SW_GLOBAL_OUTPUT_CTRL__1   0x0103
+#define SW_CLK125_ENB                  BIT(1)
+#define SW_CLK25_ENB                   BIT(0)
+
+/* 3 - Operation Control */
+#define REG_SW_OPERATION               0x0300
+
+#define SW_DOUBLE_TAG                  BIT(7)
+#define SW_OVER_TEMP_ENABLE            BIT(2)
+#define SW_RESET                       BIT(1)
+
+#define REG_SW_LUE_CTRL_0              0x0310
+
+#define SW_VLAN_ENABLE                 BIT(7)
+#define SW_DROP_INVALID_VID            BIT(6)
+#define SW_AGE_CNT_M                   0x7
+#define SW_AGE_CNT_S                   3
+#define SW_RESV_MCAST_ENABLE           BIT(2)
+
+#define REG_SW_LUE_CTRL_1              0x0311
+
+#define UNICAST_LEARN_DISABLE          BIT(7)
+#define SW_FLUSH_STP_TABLE             BIT(5)
+#define SW_FLUSH_MSTP_TABLE            BIT(4)
+#define SW_SRC_ADDR_FILTER             BIT(3)
+#define SW_AGING_ENABLE                        BIT(2)
+#define SW_FAST_AGING                  BIT(1)
+#define SW_LINK_AUTO_AGING             BIT(0)
+
+#define REG_SW_MAC_CTRL_0              0x0330
+#define SW_NEW_BACKOFF                 BIT(7)
+#define SW_PAUSE_UNH_MODE              BIT(1)
+#define SW_AGGR_BACKOFF                        BIT(0)
+
+#define REG_SW_MAC_CTRL_1              0x0331
+#define SW_SHORT_IFG                   BIT(7)
+#define MULTICAST_STORM_DISABLE                BIT(6)
+#define SW_BACK_PRESSURE               BIT(5)
+#define FAIR_FLOW_CTRL                 BIT(4)
+#define NO_EXC_COLLISION_DROP          BIT(3)
+#define SW_LEGAL_PACKET_DISABLE                BIT(1)
+#define SW_PASS_SHORT_FRAME            BIT(0)
+
+#define REG_SW_MAC_CTRL_6              0x0336
+#define SW_MIB_COUNTER_FLUSH           BIT(7)
+#define SW_MIB_COUNTER_FREEZE          BIT(6)
+
+/* 4 - LUE */
+#define REG_SW_ALU_STAT_CTRL__4                0x041C
+
+#define REG_SW_ALU_VAL_B               0x0424
+#define ALU_V_OVERRIDE                 BIT(31)
+#define ALU_V_USE_FID                  BIT(30)
+#define ALU_V_PORT_MAP                 0xFF
+
+/* 7 - VPhy */
+#define REG_VPHY_IND_ADDR__2           0x075C
+#define REG_VPHY_IND_DATA__2           0x0760
+
+#define REG_VPHY_IND_CTRL__2           0x0768
+
+#define VPHY_IND_WRITE                 BIT(1)
+#define VPHY_IND_BUSY                  BIT(0)
+
+#define REG_VPHY_SPECIAL_CTRL__2       0x077C
+#define VPHY_SMI_INDIRECT_ENABLE       BIT(15)
+#define VPHY_SW_LOOPBACK               BIT(14)
+#define VPHY_MDIO_INTERNAL_ENABLE      BIT(13)
+#define VPHY_SPI_INDIRECT_ENABLE       BIT(12)
+#define VPHY_PORT_MODE_M               0x3
+#define VPHY_PORT_MODE_S               8
+#define VPHY_MODE_RGMII                        0
+#define VPHY_MODE_MII_PHY              1
+#define VPHY_MODE_SGMII                        2
+#define VPHY_MODE_RMII_PHY             3
+#define VPHY_SW_COLLISION_TEST         BIT(7)
+#define VPHY_SPEED_DUPLEX_STAT_M       0x7
+#define VPHY_SPEED_DUPLEX_STAT_S       2
+#define VPHY_SPEED_1000                        BIT(4)
+#define VPHY_SPEED_100                 BIT(3)
+#define VPHY_FULL_DUPLEX               BIT(2)
+
+/* Port Registers */
+
+/* 0 - Operation */
+#define REG_PORT_CTRL_0                        0x0020
+
+#define PORT_MAC_LOOPBACK              BIT(7)
+#define PORT_MAC_REMOTE_LOOPBACK       BIT(6)
+#define PORT_K2L_INSERT_ENABLE         BIT(5)
+#define PORT_K2L_DEBUG_ENABLE          BIT(4)
+#define PORT_TAIL_TAG_ENABLE           BIT(2)
+#define PORT_QUEUE_SPLIT_ENABLE                0x3
+
+/* 1 - Phy */
+#define REG_PORT_T1_PHY_CTRL_BASE      0x0100
+
+/* 3 - xMII */
+#define REG_PORT_XMII_CTRL_0           0x0300
+#define PORT_SGMII_SEL                 BIT(7)
+#define PORT_MII_FULL_DUPLEX           BIT(6)
+#define PORT_MII_TX_FLOW_CTRL          BIT(5)
+#define PORT_MII_100MBIT               BIT(4)
+#define PORT_MII_RX_FLOW_CTRL          BIT(3)
+#define PORT_GRXC_ENABLE               BIT(0)
+
+#define REG_PORT_XMII_CTRL_1           0x0301
+#define PORT_MII_NOT_1GBIT             BIT(6)
+#define PORT_MII_SEL_EDGE              BIT(5)
+#define PORT_RGMII_ID_IG_ENABLE                BIT(4)
+#define PORT_RGMII_ID_EG_ENABLE                BIT(3)
+#define PORT_MII_MAC_MODE              BIT(2)
+#define PORT_MII_SEL_M                 0x3
+#define PORT_RGMII_SEL                 0x0
+#define PORT_RMII_SEL                  0x1
+#define PORT_MII_SEL                   0x2
+
+/* 4 - MAC */
+#define REG_PORT_MAC_CTRL_0            0x0400
+#define PORT_CHECK_LENGTH              BIT(2)
+#define PORT_BROADCAST_STORM           BIT(1)
+#define PORT_JUMBO_PACKET              BIT(0)
+
+#define REG_PORT_MAC_CTRL_1            0x0401
+#define PORT_BACK_PRESSURE             BIT(3)
+#define PORT_PASS_ALL                  BIT(0)
+
+#define PORT_MAX_FR_SIZE               0x0404
+#define FR_MIN_SIZE                    1522
+
+/* 8 - Classification and Policing */
+#define REG_PORT_MRI_PRIO_CTRL         0x0801
+#define PORT_HIGHEST_PRIO              BIT(7)
+#define PORT_OR_PRIO                   BIT(6)
+#define PORT_MAC_PRIO_ENABLE           BIT(4)
+#define PORT_VLAN_PRIO_ENABLE          BIT(3)
+#define PORT_802_1P_PRIO_ENABLE                BIT(2)
+#define PORT_DIFFSERV_PRIO_ENABLE      BIT(1)
+#define PORT_ACL_PRIO_ENABLE           BIT(0)
+
+#define P_PRIO_CTRL                    REG_PORT_MRI_PRIO_CTRL
+
+#define LAN937X_TAG_LEN                        2
+
+#endif
index 0b49d24..37b6495 100644 (file)
@@ -449,9 +449,6 @@ static int mv88e6xxx_port_setup_mac(struct mv88e6xxx_chip *chip, int port,
                        goto restore_link;
        }
 
-       if (speed == SPEED_MAX && chip->info->ops->port_max_speed_mode)
-               mode = chip->info->ops->port_max_speed_mode(port);
-
        if (chip->info->ops->port_set_pause) {
                err = chip->info->ops->port_set_pause(chip, port, pause);
                if (err)
@@ -3280,28 +3277,51 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
 {
        struct device_node *phy_handle = NULL;
        struct dsa_switch *ds = chip->ds;
+       phy_interface_t mode;
        struct dsa_port *dp;
-       int tx_amp;
+       int tx_amp, speed;
        int err;
        u16 reg;
 
        chip->ports[port].chip = chip;
        chip->ports[port].port = port;
 
+       dp = dsa_to_port(ds, port);
+
        /* MAC Forcing register: don't force link, speed, duplex or flow control
         * state to any particular values on physical ports, but force the CPU
         * port and all DSA ports to their maximum bandwidth and full duplex.
         */
-       if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port))
+       if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port)) {
+               unsigned long caps = dp->pl_config.mac_capabilities;
+
+               if (chip->info->ops->port_max_speed_mode)
+                       mode = chip->info->ops->port_max_speed_mode(port);
+               else
+                       mode = PHY_INTERFACE_MODE_NA;
+
+               if (caps & MAC_10000FD)
+                       speed = SPEED_10000;
+               else if (caps & MAC_5000FD)
+                       speed = SPEED_5000;
+               else if (caps & MAC_2500FD)
+                       speed = SPEED_2500;
+               else if (caps & MAC_1000)
+                       speed = SPEED_1000;
+               else if (caps & MAC_100)
+                       speed = SPEED_100;
+               else
+                       speed = SPEED_10;
+
                err = mv88e6xxx_port_setup_mac(chip, port, LINK_FORCED_UP,
-                                              SPEED_MAX, DUPLEX_FULL,
-                                              PAUSE_OFF,
-                                              PHY_INTERFACE_MODE_NA);
-       else
+                                              speed, DUPLEX_FULL,
+                                              PAUSE_OFF, mode);
+       } else {
                err = mv88e6xxx_port_setup_mac(chip, port, LINK_UNFORCED,
                                               SPEED_UNFORCED, DUPLEX_UNFORCED,
                                               PAUSE_ON,
                                               PHY_INTERFACE_MODE_NA);
+       }
        if (err)
                return err;
 
@@ -3473,7 +3493,6 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
        }
 
        if (chip->info->ops->serdes_set_tx_amplitude) {
-               dp = dsa_to_port(ds, port);
                if (dp)
                        phy_handle = of_parse_phandle(dp->dn, "phy-handle", 0);
 
index 5e03cfe..e693154 100644 (file)
@@ -488,14 +488,13 @@ struct mv88e6xxx_ops {
        int (*port_set_pause)(struct mv88e6xxx_chip *chip, int port,
                              int pause);
 
-#define SPEED_MAX              INT_MAX
 #define SPEED_UNFORCED         -2
 #define DUPLEX_UNFORCED                -2
 
        /* Port's MAC speed (in Mbps) and MAC duplex mode
         *
         * Depending on the chip, 10, 100, 200, 1000, 2500, 10000 are valid.
-        * Use SPEED_UNFORCED for normal detection, SPEED_MAX for max value.
+        * Use SPEED_UNFORCED for normal detection.
         *
         * Use DUPLEX_HALF or DUPLEX_FULL to force half or full duplex,
         * or DUPLEX_UNFORCED for normal duplex detection.
index 795b312..90c55f2 100644 (file)
@@ -294,28 +294,10 @@ static int mv88e6xxx_port_set_speed_duplex(struct mv88e6xxx_chip *chip,
        return 0;
 }
 
-/* Support 10, 100, 200 Mbps (e.g. 88E6065 family) */
-int mv88e6065_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
-                                   int speed, int duplex)
-{
-       if (speed == SPEED_MAX)
-               speed = 200;
-
-       if (speed > 200)
-               return -EOPNOTSUPP;
-
-       /* Setting 200 Mbps on port 0 to 3 selects 100 Mbps */
-       return mv88e6xxx_port_set_speed_duplex(chip, port, speed, false, false,
-                                              duplex);
-}
-
 /* Support 10, 100, 1000 Mbps (e.g. 88E6185 family) */
 int mv88e6185_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
                                    int speed, int duplex)
 {
-       if (speed == SPEED_MAX)
-               speed = 1000;
-
        if (speed == 200 || speed > 1000)
                return -EOPNOTSUPP;
 
@@ -327,9 +309,6 @@ int mv88e6185_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
 int mv88e6250_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
                                    int speed, int duplex)
 {
-       if (speed == SPEED_MAX)
-               speed = 100;
-
        if (speed > 100)
                return -EOPNOTSUPP;
 
@@ -341,9 +320,6 @@ int mv88e6250_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
 int mv88e6341_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
                                    int speed, int duplex)
 {
-       if (speed == SPEED_MAX)
-               speed = port < 5 ? 1000 : 2500;
-
        if (speed > 2500)
                return -EOPNOTSUPP;
 
@@ -369,9 +345,6 @@ phy_interface_t mv88e6341_port_max_speed_mode(int port)
 int mv88e6352_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
                                    int speed, int duplex)
 {
-       if (speed == SPEED_MAX)
-               speed = 1000;
-
        if (speed > 1000)
                return -EOPNOTSUPP;
 
@@ -386,9 +359,6 @@ int mv88e6352_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
 int mv88e6390_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
                                    int speed, int duplex)
 {
-       if (speed == SPEED_MAX)
-               speed = port < 9 ? 1000 : 2500;
-
        if (speed > 2500)
                return -EOPNOTSUPP;
 
@@ -414,9 +384,6 @@ phy_interface_t mv88e6390_port_max_speed_mode(int port)
 int mv88e6390x_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
                                     int speed, int duplex)
 {
-       if (speed == SPEED_MAX)
-               speed = port < 9 ? 1000 : 10000;
-
        if (speed == 200 && port != 0)
                return -EOPNOTSUPP;
 
@@ -445,9 +412,6 @@ int mv88e6393x_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
        u16 reg, ctrl;
        int err;
 
-       if (speed == SPEED_MAX)
-               speed = (port > 0 && port < 9) ? 1000 : 10000;
-
        if (speed == 200 && port != 0)
                return -EOPNOTSUPP;
 
index e0a705d..cb04243 100644 (file)
@@ -342,8 +342,6 @@ int mv88e6xxx_port_set_link(struct mv88e6xxx_chip *chip, int port, int link);
 int mv88e6xxx_port_sync_link(struct mv88e6xxx_chip *chip, int port, unsigned int mode, bool isup);
 int mv88e6185_port_sync_link(struct mv88e6xxx_chip *chip, int port, unsigned int mode, bool isup);
 
-int mv88e6065_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
-                                   int speed, int duplex);
 int mv88e6185_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
                                    int speed, int duplex);
 int mv88e6250_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
index 220b0b0..08db9cf 100644 (file)
@@ -6,6 +6,7 @@ config NET_DSA_MSCC_FELIX
        depends on NET_VENDOR_FREESCALE
        depends on HAS_IOMEM
        depends on PTP_1588_CLOCK_OPTIONAL
+       depends on NET_SCH_TAPRIO || NET_SCH_TAPRIO=n
        select MSCC_OCELOT_SWITCH_LIB
        select NET_DSA_TAG_OCELOT_8021Q
        select NET_DSA_TAG_OCELOT
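
The added line is the usual Kconfig idiom for an optional helper:
NET_SCH_TAPRIO may be y or n alongside any value of this symbol, but
NET_DSA_MSCC_FELIX=y combined with NET_SCH_TAPRIO=m is rejected, since
built-in felix code could not reference taprio offload symbols that live
in a module.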
index 3e07dc3..8591968 100644 (file)
@@ -1553,9 +1553,18 @@ static void felix_txtstamp(struct dsa_switch *ds, int port,
 static int felix_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
 {
        struct ocelot *ocelot = ds->priv;
+       struct ocelot_port *ocelot_port = ocelot->ports[port];
+       struct felix *felix = ocelot_to_felix(ocelot);
 
        ocelot_port_set_maxlen(ocelot, port, new_mtu);
 
+       mutex_lock(&ocelot->tas_lock);
+
+       if (ocelot_port->taprio && felix->info->tas_guard_bands_update)
+               felix->info->tas_guard_bands_update(ocelot, port);
+
+       mutex_unlock(&ocelot->tas_lock);
+
        return 0;
 }
 
index 9e07eb7..deb8dde 100644 (file)
@@ -53,6 +53,7 @@ struct felix_info {
                                    struct phylink_link_state *state);
        int     (*port_setup_tc)(struct dsa_switch *ds, int port,
                                 enum tc_setup_type type, void *type_data);
+       void    (*tas_guard_bands_update)(struct ocelot *ocelot, int port);
        void    (*port_sched_speed_set)(struct ocelot *ocelot, int port,
                                        u32 speed);
        struct regmap *(*init_regmap)(struct ocelot *ocelot,
index dd9085a..61ed317 100644 (file)
@@ -16,6 +16,7 @@
 #include <linux/iopoll.h>
 #include <linux/mdio.h>
 #include <linux/pci.h>
+#include <linux/time.h>
 #include "felix.h"
 
 #define VSC9959_NUM_PORTS              6
@@ -1127,9 +1128,199 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
        mdiobus_free(felix->imdio);
 }
 
+/* Extract shortest continuous gate open intervals in ns for each traffic class
+ * of a cyclic tc-taprio schedule. If a gate is always open, the duration is
+ * considered U64_MAX. If the gate is always closed, it is considered 0.
+ */
+static void vsc9959_tas_min_gate_lengths(struct tc_taprio_qopt_offload *taprio,
+                                        u64 min_gate_len[OCELOT_NUM_TC])
+{
+       struct tc_taprio_sched_entry *entry;
+       u64 gate_len[OCELOT_NUM_TC];
+       u8 gates_ever_opened = 0;
+       int tc, i, n;
+
+       /* Initialize arrays */
+       for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
+               min_gate_len[tc] = U64_MAX;
+               gate_len[tc] = 0;
+       }
+
+       /* If we don't have taprio, consider all gates as permanently open */
+       if (!taprio)
+               return;
+
+       n = taprio->num_entries;
+
+       /* Walk through the gate list twice to determine the length
+        * of consecutively open gates for a traffic class, including
+        * open gates that wrap around. We are just interested in the
+        * minimum window size, and this doesn't change what the
+        * minimum is (if the gate never closes, min_gate_len will
+        * remain U64_MAX).
+        */
+       for (i = 0; i < 2 * n; i++) {
+               entry = &taprio->entries[i % n];
+
+               for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
+                       if (entry->gate_mask & BIT(tc)) {
+                               gate_len[tc] += entry->interval;
+                               gates_ever_opened |= BIT(tc);
+                       } else {
+                               /* Gate closes now, record a potential new
+                                * minimum and reinitialize length. Skip
+                                * zero-length runs: the walk may begin in
+                                * the middle of a closed window, and a TC
+                                * whose gate opens later must not be
+                                * mistaken for a permanently closed one.
+                                */
+                               if (min_gate_len[tc] > gate_len[tc] &&
+                                   gate_len[tc])
+                                       min_gate_len[tc] = gate_len[tc];
+                               gate_len[tc] = 0;
+                       }
+               }
+       }
+
+       /* A gate that never opened still has min_gate_len at U64_MAX and
+        * would look permanently open; force it to 0, as the comment on
+        * this function promises.
+        */
+       for (tc = 0; tc < OCELOT_NUM_TC; tc++)
+               if (!(gates_ever_opened & BIT(tc)))
+                       min_gate_len[tc] = 0;
+}
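
Worked example of the scan above, for an illustrative three-entry cycle:
E0 opens {TC0} for 30 us, E1 opens {TC0, TC1} for 20 us, E2 opens {TC1}
for 50 us. TC0's only contiguous open run is E0+E1 = 50 us, so
min_gate_len[0] = 50000 ns; TC1's run is E1+E2 = 70 us, seen whole only
when the walk wraps past the closed E0, so min_gate_len[1] = 70000 ns; the
remaining classes never open and end at 0. This is why the list is walked
twice: a run that wraps across the cycle boundary is only complete on the
second pass.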
+
+/* Update QSYS_PORT_MAX_SDU to make sure the static guard bands added by the
+ * switch (see the ALWAYS_GUARD_BAND_SCH_Q comment) are correct at all MTU
+ * values (the default value is 1518). Also, for traffic class windows smaller
+ * than one MTU sized frame, update QSYS_QMAXSDU_CFG to enable oversized frame
+ * dropping, such that these won't hang the port, as they will never be sent.
+ */
+static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
+{
+       struct ocelot_port *ocelot_port = ocelot->ports[port];
+       u64 min_gate_len[OCELOT_NUM_TC];
+       int speed, picos_per_byte;
+       u64 needed_bit_time_ps;
+       u32 val, maxlen;
+       u8 tas_speed;
+       int tc;
+
+       lockdep_assert_held(&ocelot->tas_lock);
+
+       val = ocelot_read_rix(ocelot, QSYS_TAG_CONFIG, port);
+       tas_speed = QSYS_TAG_CONFIG_LINK_SPEED_X(val);
+
+       switch (tas_speed) {
+       case OCELOT_SPEED_10:
+               speed = SPEED_10;
+               break;
+       case OCELOT_SPEED_100:
+               speed = SPEED_100;
+               break;
+       case OCELOT_SPEED_1000:
+               speed = SPEED_1000;
+               break;
+       case OCELOT_SPEED_2500:
+               speed = SPEED_2500;
+               break;
+       default:
+               return;
+       }
+
+       picos_per_byte = (USEC_PER_SEC * 8) / speed;
+
+       val = ocelot_port_readl(ocelot_port, DEV_MAC_MAXLEN_CFG);
+       /* MAXLEN_CFG accounts automatically for VLAN. We need to include it
+        * manually in the bit time calculation, plus the preamble and SFD.
+        */
+       maxlen = val + 2 * VLAN_HLEN;
+       /* Consider the standard Ethernet overhead of 8 octets preamble+SFD,
+        * 4 octets FCS, 12 octets IFG.
+        */
+       needed_bit_time_ps = (maxlen + 24) * picos_per_byte;
+
+       dev_dbg(ocelot->dev,
+               "port %d: max frame size %d needs %llu ps at speed %d\n",
+               port, maxlen, needed_bit_time_ps, speed);
+
+       vsc9959_tas_min_gate_lengths(ocelot_port->taprio, min_gate_len);
+
+       for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
+               u32 max_sdu;
+
+               if (min_gate_len[tc] == U64_MAX /* Gate always open */ ||
+                   min_gate_len[tc] * PSEC_PER_NSEC > needed_bit_time_ps) {
+                       /* Setting QMAXSDU_CFG to 0 disables oversized frame
+                        * dropping.
+                        */
+                       max_sdu = 0;
+                       dev_dbg(ocelot->dev,
+                               "port %d tc %d min gate len %llu, sending all frames\n",
+                               port, tc, min_gate_len[tc]);
+               } else {
+                       /* If traffic class doesn't support a full MTU sized
+                        * frame, make sure to enable oversize frame dropping
+                        * for frames larger than the smallest that would fit.
+                        */
+                       max_sdu = div_u64(min_gate_len[tc] * PSEC_PER_NSEC,
+                                         picos_per_byte);
+                       /* A TC gate may be completely closed, which is a
+                        * special case where all packets are oversized;
+                        * any limit smaller than 64 octets accomplishes
+                        * this.
+                        */
+                       if (!max_sdu)
+                               max_sdu = 1;
+                       /* Take L1 overhead into account, but don't let
+                        * max_sdu underflow or drop to 0. Subtract 20
+                        * rather than 24 octets because QSYS_MAXSDU_CFG_*
+                        * already counts the 4 FCS octets as part of the
+                        * packet size.
+                        */
+                       if (max_sdu > 20)
+                               max_sdu -= 20;
+                       dev_info(ocelot->dev,
+                                "port %d tc %d min gate length %llu ns not enough for max frame size %d at %d Mbps, dropping frames over %d octets including FCS\n",
+                                port, tc, min_gate_len[tc], maxlen, speed,
+                                max_sdu);
+               }
+
+               /* ocelot_write_rix is a macro that concatenates
+                * QSYS_MAXSDU_CFG_* with _RSZ, so we need to spell out
+                * the writes to each traffic class
+                */
+               switch (tc) {
+               case 0:
+                       ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_0,
+                                        port);
+                       break;
+               case 1:
+                       ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_1,
+                                        port);
+                       break;
+               case 2:
+                       ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_2,
+                                        port);
+                       break;
+               case 3:
+                       ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_3,
+                                        port);
+                       break;
+               case 4:
+                       ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_4,
+                                        port);
+                       break;
+               case 5:
+                       ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_5,
+                                        port);
+                       break;
+               case 6:
+                       ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_6,
+                                        port);
+                       break;
+               case 7:
+                       ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_7,
+                                        port);
+                       break;
+               }
+       }
+
+       ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port);
+}
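
Worked numbers for the common case, assuming the MAC still has its default
MAXLEN of 1518 and the port runs at 1 Gbps: picos_per_byte = (1000000 * 8)
/ 1000 = 8000 ps, maxlen = 1518 + 2 * VLAN_HLEN = 1526, and
needed_bit_time_ps = (1526 + 24) * 8000 = 12400000 ps, i.e. a full-size
frame needs 12.4 us of open gate. A traffic class whose shortest window is
10 us then gets max_sdu = 10000 ns * 1000 / 8000 = 1250, minus the
20-octet L1 allowance = 1230 octets including FCS.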
+
 static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
                                    u32 speed)
 {
+       struct ocelot_port *ocelot_port = ocelot->ports[port];
        u8 tas_speed;
 
        switch (speed) {
@@ -1154,6 +1345,13 @@ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
                       QSYS_TAG_CONFIG_LINK_SPEED(tas_speed),
                       QSYS_TAG_CONFIG_LINK_SPEED_M,
                       QSYS_TAG_CONFIG, port);
+
+       mutex_lock(&ocelot->tas_lock);
+
+       if (ocelot_port->taprio)
+               vsc9959_tas_guard_bands_update(ocelot, port);
+
+       mutex_unlock(&ocelot->tas_lock);
 }
 
 static void vsc9959_new_base_time(struct ocelot *ocelot, ktime_t base_time,
@@ -1204,12 +1402,14 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
        mutex_lock(&ocelot->tas_lock);
 
        if (!taprio->enable) {
-               ocelot_rmw_rix(ocelot,
-                              QSYS_TAG_CONFIG_INIT_GATE_STATE(0xFF),
-                              QSYS_TAG_CONFIG_ENABLE |
-                              QSYS_TAG_CONFIG_INIT_GATE_STATE_M,
+               ocelot_rmw_rix(ocelot, 0, QSYS_TAG_CONFIG_ENABLE,
                               QSYS_TAG_CONFIG, port);
 
+               taprio_offload_free(ocelot_port->taprio);
+               ocelot_port->taprio = NULL;
+
+               vsc9959_tas_guard_bands_update(ocelot, port);
+
                mutex_unlock(&ocelot->tas_lock);
                return 0;
        }
@@ -1258,8 +1458,6 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
                       QSYS_TAG_CONFIG_SCH_TRAFFIC_QUEUES_M,
                       QSYS_TAG_CONFIG, port);
 
-       ocelot_port->base_time = taprio->base_time;
-
        vsc9959_new_base_time(ocelot, taprio->base_time,
                              taprio->cycle_time, &base_ts);
        ocelot_write(ocelot, base_ts.tv_nsec, QSYS_PARAM_CFG_REG_1);
@@ -1282,6 +1480,11 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
        ret = readx_poll_timeout(vsc9959_tas_read_cfg_status, ocelot, val,
                                 !(val & QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE),
                                 10, 100000);
+       if (ret)
+               goto err;
+
+       ocelot_port->taprio = taprio_offload_get(taprio);
+       vsc9959_tas_guard_bands_update(ocelot, port);
 
 err:
        mutex_unlock(&ocelot->tas_lock);
@@ -1291,17 +1494,18 @@ err:
 
 static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
 {
+       struct tc_taprio_qopt_offload *taprio;
        struct ocelot_port *ocelot_port;
        struct timespec64 base_ts;
-       u64 cycletime;
        int port;
        u32 val;
 
        mutex_lock(&ocelot->tas_lock);
 
        for (port = 0; port < ocelot->num_phys_ports; port++) {
-               val = ocelot_read_rix(ocelot, QSYS_TAG_CONFIG, port);
-               if (!(val & QSYS_TAG_CONFIG_ENABLE))
+               ocelot_port = ocelot->ports[port];
+               taprio = ocelot_port->taprio;
+               if (!taprio)
                        continue;
 
                ocelot_rmw(ocelot,
@@ -1309,17 +1513,12 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
                           QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M,
                           QSYS_TAS_PARAM_CFG_CTRL);
 
-               ocelot_rmw_rix(ocelot,
-                              QSYS_TAG_CONFIG_INIT_GATE_STATE(0xFF),
-                              QSYS_TAG_CONFIG_ENABLE |
-                              QSYS_TAG_CONFIG_INIT_GATE_STATE_M,
+               /* Disable time-aware shaper */
+               ocelot_rmw_rix(ocelot, 0, QSYS_TAG_CONFIG_ENABLE,
                               QSYS_TAG_CONFIG, port);
 
-               cycletime = ocelot_read(ocelot, QSYS_PARAM_CFG_REG_4);
-               ocelot_port = ocelot->ports[port];
-
-               vsc9959_new_base_time(ocelot, ocelot_port->base_time,
-                                     cycletime, &base_ts);
+               vsc9959_new_base_time(ocelot, taprio->base_time,
+                                     taprio->cycle_time, &base_ts);
 
                ocelot_write(ocelot, base_ts.tv_nsec, QSYS_PARAM_CFG_REG_1);
                ocelot_write(ocelot, lower_32_bits(base_ts.tv_sec),
@@ -1334,11 +1533,9 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
                           QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
                           QSYS_TAS_PARAM_CFG_CTRL);
 
-               ocelot_rmw_rix(ocelot,
-                              QSYS_TAG_CONFIG_INIT_GATE_STATE(0xFF) |
+               /* Re-enable time-aware shaper */
+               ocelot_rmw_rix(ocelot, QSYS_TAG_CONFIG_ENABLE,
                               QSYS_TAG_CONFIG_ENABLE,
-                              QSYS_TAG_CONFIG_ENABLE |
-                              QSYS_TAG_CONFIG_INIT_GATE_STATE_M,
                               QSYS_TAG_CONFIG, port);
        }
        mutex_unlock(&ocelot->tas_lock);
@@ -1956,6 +2153,8 @@ static void vsc9959_psfp_sgi_table_del(struct ocelot *ocelot,
 static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index,
                                      struct felix_stream_filter_counters *counters)
 {
+       mutex_lock(&ocelot->stats_lock);
+
        ocelot_rmw(ocelot, SYS_STAT_CFG_STAT_VIEW(index),
                   SYS_STAT_CFG_STAT_VIEW_M,
                   SYS_STAT_CFG);
@@ -1970,6 +2169,8 @@ static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index,
                     SYS_STAT_CFG_STAT_VIEW(index) |
                     SYS_STAT_CFG_STAT_CLEAR_SHOT(0x10),
                     SYS_STAT_CFG);
+
+       mutex_unlock(&ocelot->stats_lock);
 }
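
The new stats_lock pair is needed because SYS_STAT_CFG_STAT_VIEW is a single switch-global window register shared by every counter reader. A rough sketch of the interleaving it rules out (illustrative only, not driver code):

    /*
     *   reader A: STAT_VIEW = index_a
     *   reader B: STAT_VIEW = index_b   <- window now points at B's filter
     *   reader A: reads the counters    <- A silently gets B's values
     */

Holding ocelot->stats_lock across both the view selection and the dependent reads closes that window.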
 
 static int vsc9959_psfp_filter_add(struct ocelot *ocelot, int port,
@@ -2307,6 +2508,7 @@ static const struct felix_info felix_info_vsc9959 = {
        .port_modes             = vsc9959_port_modes,
        .port_setup_tc          = vsc9959_port_setup_tc,
        .port_sched_speed_set   = vsc9959_sched_speed_set,
+       .tas_guard_bands_update = vsc9959_tas_guard_bands_update,
        .init_regmap            = ocelot_regmap_init,
 };
 
index f23ce56..0796b7c 100644 (file)
@@ -231,6 +231,7 @@ struct ar9331_sw_port {
        int idx;
        struct delayed_work mib_read;
        struct rtnl_link_stats64 stats;
+       struct ethtool_pause_stats pause_stats;
        struct spinlock stats_lock;
 };
 
@@ -604,6 +605,7 @@ static void ar9331_sw_phylink_mac_link_up(struct dsa_switch *ds, int port,
 static void ar9331_read_stats(struct ar9331_sw_port *port)
 {
        struct ar9331_sw_priv *priv = ar9331_sw_port_to_priv(port);
+       struct ethtool_pause_stats *pstats = &port->pause_stats;
        struct rtnl_link_stats64 *stats = &port->stats;
        struct ar9331_sw_stats_raw raw;
        int ret;
@@ -644,6 +646,9 @@ static void ar9331_read_stats(struct ar9331_sw_port *port)
        stats->multicast += raw.rxmulti;
        stats->collisions += raw.txcollision;
 
+       pstats->tx_pause_frames += raw.txpause;
+       pstats->rx_pause_frames += raw.rxpause;
+
        spin_unlock(&port->stats_lock);
 }
 
@@ -668,6 +673,17 @@ static void ar9331_get_stats64(struct dsa_switch *ds, int port,
        spin_unlock(&p->stats_lock);
 }
 
+static void ar9331_get_pause_stats(struct dsa_switch *ds, int port,
+                                  struct ethtool_pause_stats *pause_stats)
+{
+       struct ar9331_sw_priv *priv = (struct ar9331_sw_priv *)ds->priv;
+       struct ar9331_sw_port *p = &priv->port[port];
+
+       spin_lock(&p->stats_lock);
+       memcpy(pause_stats, &p->pause_stats, sizeof(*pause_stats));
+       spin_unlock(&p->stats_lock);
+}
+
 static const struct dsa_switch_ops ar9331_sw_ops = {
        .get_tag_protocol       = ar9331_sw_get_tag_protocol,
        .setup                  = ar9331_sw_setup,
@@ -677,6 +693,7 @@ static const struct dsa_switch_ops ar9331_sw_ops = {
        .phylink_mac_link_down  = ar9331_sw_phylink_mac_link_down,
        .phylink_mac_link_up    = ar9331_sw_phylink_mac_link_up,
        .get_stats64            = ar9331_get_stats64,
+       .get_pause_stats        = ar9331_get_pause_stats,
 };
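
With .get_pause_stats wired up, the counters surface through the standard ethtool pause API. Assuming a reasonably recent ethtool build (the flag, interface name and counter values below are illustrative only):

    # ethtool --include-statistics -a lan0
    Pause parameters for lan0:
    ...
    Statistics:
      tx_pause_frames: 1024
      rx_pause_frames: 512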
 
 static irqreturn_t ar9331_sw_irq(int irq, void *data)
index 2727d31..1cbb05b 100644 (file)
@@ -2334,6 +2334,7 @@ static int
 qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
 {
        struct qca8k_priv *priv = ds->priv;
+       int ret;
 
        /* We only have a general MTU setting.
         * DSA always sets the CPU port's MTU to the largest MTU of the slave
@@ -2344,8 +2345,27 @@ qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
        if (!dsa_is_cpu_port(ds, port))
                return 0;
 
+       /* To change MAX_FRAME_SIZE, the CPU ports must be disabled, or
+        * the switch panics.
+        * Turn off both CPU ports before applying the new value to prevent
+        * this.
+        */
+       if (priv->port_enabled_map & BIT(0))
+               qca8k_port_set_status(priv, 0, 0);
+
+       if (priv->port_enabled_map & BIT(6))
+               qca8k_port_set_status(priv, 6, 0);
+
        /* Include L2 header / FCS length */
-       return qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN);
+       ret = qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN);
+
+       if (priv->port_enabled_map & BIT(0))
+               qca8k_port_set_status(priv, 0, 1);
+
+       if (priv->port_enabled_map & BIT(6))
+               qca8k_port_set_status(priv, 6, 1);
+
+       return ret;
 }
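
Note that the return value of qca8k_write() is captured in ret rather than returned straight away, so both CPU ports are unconditionally re-enabled even when the MAX_FRAME_SIZE write fails; only then is the error propagated.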
 
 static int
index 04408e1..ec58d0e 100644 (file)
@@ -15,7 +15,7 @@
 
 #define QCA8K_ETHERNET_MDIO_PRIORITY                   7
 #define QCA8K_ETHERNET_PHY_PRIORITY                    6
-#define QCA8K_ETHERNET_TIMEOUT                         100
+#define QCA8K_ETHERNET_TIMEOUT                         5
 
 #define QCA8K_NUM_PORTS                                        7
 #define QCA8K_NUM_CPU_PORTS                            2
diff --git a/drivers/net/dsa/rzn1_a5psw.c b/drivers/net/dsa/rzn1_a5psw.c
new file mode 100644 (file)
index 0000000..0744e81
--- /dev/null
@@ -0,0 +1,1064 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Schneider-Electric
+ *
+ * Clément Léger <clement.leger@bootlin.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/etherdevice.h>
+#include <linux/if_bridge.h>
+#include <linux/if_ether.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_mdio.h>
+#include <net/dsa.h>
+
+#include "rzn1_a5psw.h"
+
+struct a5psw_stats {
+       u16 offset;
+       const char name[ETH_GSTRING_LEN];
+};
+
+#define STAT_DESC(_offset) {   \
+       .offset = A5PSW_##_offset,      \
+       .name = __stringify(_offset),   \
+}
+
+static const struct a5psw_stats a5psw_stats[] = {
+       STAT_DESC(aFramesTransmittedOK),
+       STAT_DESC(aFramesReceivedOK),
+       STAT_DESC(aFrameCheckSequenceErrors),
+       STAT_DESC(aAlignmentErrors),
+       STAT_DESC(aOctetsTransmittedOK),
+       STAT_DESC(aOctetsReceivedOK),
+       STAT_DESC(aTxPAUSEMACCtrlFrames),
+       STAT_DESC(aRxPAUSEMACCtrlFrames),
+       STAT_DESC(ifInErrors),
+       STAT_DESC(ifOutErrors),
+       STAT_DESC(ifInUcastPkts),
+       STAT_DESC(ifInMulticastPkts),
+       STAT_DESC(ifInBroadcastPkts),
+       STAT_DESC(ifOutDiscards),
+       STAT_DESC(ifOutUcastPkts),
+       STAT_DESC(ifOutMulticastPkts),
+       STAT_DESC(ifOutBroadcastPkts),
+       STAT_DESC(etherStatsDropEvents),
+       STAT_DESC(etherStatsOctets),
+       STAT_DESC(etherStatsPkts),
+       STAT_DESC(etherStatsUndersizePkts),
+       STAT_DESC(etherStatsOversizePkts),
+       STAT_DESC(etherStatsPkts64Octets),
+       STAT_DESC(etherStatsPkts65to127Octets),
+       STAT_DESC(etherStatsPkts128to255Octets),
+       STAT_DESC(etherStatsPkts256to511Octets),
+       STAT_DESC(etherStatsPkts512to1023Octets),
+       STAT_DESC(etherStatsPkts1024to1518Octets),
+       STAT_DESC(etherStatsPkts1519toXOctets),
+       STAT_DESC(etherStatsJabbers),
+       STAT_DESC(etherStatsFragments),
+       STAT_DESC(VLANReceived),
+       STAT_DESC(VLANTransmitted),
+       STAT_DESC(aDeferred),
+       STAT_DESC(aMultipleCollisions),
+       STAT_DESC(aSingleCollisions),
+       STAT_DESC(aLateCollisions),
+       STAT_DESC(aExcessiveCollisions),
+       STAT_DESC(aCarrierSenseErrors),
+};
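
The STAT_DESC() macro keeps each table entry a one-liner by pairing token pasting with __stringify(), so the register offset and the ethtool string can never drift apart. For example, STAT_DESC(aFramesTransmittedOK) expands to:

    {
            .offset = A5PSW_aFramesTransmittedOK,
            .name = "aFramesTransmittedOK",
    }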
+
+static void a5psw_reg_writel(struct a5psw *a5psw, int offset, u32 value)
+{
+       writel(value, a5psw->base + offset);
+}
+
+static u32 a5psw_reg_readl(struct a5psw *a5psw, int offset)
+{
+       return readl(a5psw->base + offset);
+}
+
+static void a5psw_reg_rmw(struct a5psw *a5psw, int offset, u32 mask, u32 val)
+{
+       u32 reg;
+
+       spin_lock(&a5psw->reg_lock);
+
+       reg = a5psw_reg_readl(a5psw, offset);
+       reg &= ~mask;
+       reg |= val;
+       a5psw_reg_writel(a5psw, offset, reg);
+
+       spin_unlock(&a5psw->reg_lock);
+}
+
+static enum dsa_tag_protocol a5psw_get_tag_protocol(struct dsa_switch *ds,
+                                                   int port,
+                                                   enum dsa_tag_protocol mp)
+{
+       return DSA_TAG_PROTO_RZN1_A5PSW;
+}
+
+static void a5psw_port_pattern_set(struct a5psw *a5psw, int port, int pattern,
+                                  bool enable)
+{
+       u32 rx_match = 0;
+
+       if (enable)
+               rx_match |= A5PSW_RXMATCH_CONFIG_PATTERN(pattern);
+
+       a5psw_reg_rmw(a5psw, A5PSW_RXMATCH_CONFIG(port),
+                     A5PSW_RXMATCH_CONFIG_PATTERN(pattern), rx_match);
+}
+
+static void a5psw_port_mgmtfwd_set(struct a5psw *a5psw, int port, bool enable)
+{
+       /* Enable "management forward" pattern matching, this will forward
+        * packets from this port only towards the management port and thus
+        * isolate the port.
+        */
+       a5psw_port_pattern_set(a5psw, port, A5PSW_PATTERN_MGMTFWD, enable);
+}
+
+static void a5psw_port_enable_set(struct a5psw *a5psw, int port, bool enable)
+{
+       u32 port_ena = 0;
+
+       if (enable)
+               port_ena |= A5PSW_PORT_ENA_TX_RX(port);
+
+       a5psw_reg_rmw(a5psw, A5PSW_PORT_ENA, A5PSW_PORT_ENA_TX_RX(port),
+                     port_ena);
+}
+
+static int a5psw_lk_execute_ctrl(struct a5psw *a5psw, u32 *ctrl)
+{
+       int ret;
+
+       a5psw_reg_writel(a5psw, A5PSW_LK_ADDR_CTRL, *ctrl);
+
+       ret = readl_poll_timeout(a5psw->base + A5PSW_LK_ADDR_CTRL, *ctrl,
+                                !(*ctrl & A5PSW_LK_ADDR_CTRL_BUSY),
+                                A5PSW_LK_BUSY_USEC_POLL, A5PSW_CTRL_TIMEOUT);
+       if (ret)
+               dev_err(a5psw->dev, "LK_CTRL timeout waiting for BUSY bit\n");
+
+       return ret;
+}
+
+static void a5psw_port_fdb_flush(struct a5psw *a5psw, int port)
+{
+       u32 ctrl = A5PSW_LK_ADDR_CTRL_DELETE_PORT | BIT(port);
+
+       mutex_lock(&a5psw->lk_lock);
+       a5psw_lk_execute_ctrl(a5psw, &ctrl);
+       mutex_unlock(&a5psw->lk_lock);
+}
+
+static void a5psw_port_authorize_set(struct a5psw *a5psw, int port,
+                                    bool authorize)
+{
+       u32 reg = a5psw_reg_readl(a5psw, A5PSW_AUTH_PORT(port));
+
+       if (authorize)
+               reg |= A5PSW_AUTH_PORT_AUTHORIZED;
+       else
+               reg &= ~A5PSW_AUTH_PORT_AUTHORIZED;
+
+       a5psw_reg_writel(a5psw, A5PSW_AUTH_PORT(port), reg);
+}
+
+static void a5psw_port_disable(struct dsa_switch *ds, int port)
+{
+       struct a5psw *a5psw = ds->priv;
+
+       a5psw_port_authorize_set(a5psw, port, false);
+       a5psw_port_enable_set(a5psw, port, false);
+}
+
+static int a5psw_port_enable(struct dsa_switch *ds, int port,
+                            struct phy_device *phy)
+{
+       struct a5psw *a5psw = ds->priv;
+
+       a5psw_port_authorize_set(a5psw, port, true);
+       a5psw_port_enable_set(a5psw, port, true);
+
+       return 0;
+}
+
+static int a5psw_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+{
+       struct a5psw *a5psw = ds->priv;
+
+       new_mtu += ETH_HLEN + A5PSW_EXTRA_MTU_LEN + ETH_FCS_LEN;
+       a5psw_reg_writel(a5psw, A5PSW_FRM_LENGTH(port), new_mtu);
+
+       return 0;
+}
+
+static int a5psw_port_max_mtu(struct dsa_switch *ds, int port)
+{
+       return A5PSW_MAX_MTU;
+}
+
+static void a5psw_phylink_get_caps(struct dsa_switch *ds, int port,
+                                  struct phylink_config *config)
+{
+       unsigned long *intf = config->supported_interfaces;
+
+       config->mac_capabilities = MAC_1000FD;
+
+       if (dsa_is_cpu_port(ds, port)) {
+               /* GMII is used internally and GMAC2 is connected to the switch
+                * using 1000Mbps Full-Duplex mode only (cf. the Ethernet
+                * manual)
+                */
+               __set_bit(PHY_INTERFACE_MODE_GMII, intf);
+       } else {
+               config->mac_capabilities |= MAC_100 | MAC_10;
+               phy_interface_set_rgmii(intf);
+               __set_bit(PHY_INTERFACE_MODE_RMII, intf);
+               __set_bit(PHY_INTERFACE_MODE_MII, intf);
+       }
+}
+
+static struct phylink_pcs *
+a5psw_phylink_mac_select_pcs(struct dsa_switch *ds, int port,
+                            phy_interface_t interface)
+{
+       struct dsa_port *dp = dsa_to_port(ds, port);
+       struct a5psw *a5psw = ds->priv;
+
+       if (!dsa_port_is_cpu(dp) && a5psw->pcs[port])
+               return a5psw->pcs[port];
+
+       return NULL;
+}
+
+static void a5psw_phylink_mac_link_down(struct dsa_switch *ds, int port,
+                                       unsigned int mode,
+                                       phy_interface_t interface)
+{
+       struct a5psw *a5psw = ds->priv;
+       u32 cmd_cfg;
+
+       cmd_cfg = a5psw_reg_readl(a5psw, A5PSW_CMD_CFG(port));
+       cmd_cfg &= ~(A5PSW_CMD_CFG_RX_ENA | A5PSW_CMD_CFG_TX_ENA);
+       a5psw_reg_writel(a5psw, A5PSW_CMD_CFG(port), cmd_cfg);
+}
+
+static void a5psw_phylink_mac_link_up(struct dsa_switch *ds, int port,
+                                     unsigned int mode,
+                                     phy_interface_t interface,
+                                     struct phy_device *phydev, int speed,
+                                     int duplex, bool tx_pause, bool rx_pause)
+{
+       u32 cmd_cfg = A5PSW_CMD_CFG_RX_ENA | A5PSW_CMD_CFG_TX_ENA |
+                     A5PSW_CMD_CFG_TX_CRC_APPEND;
+       struct a5psw *a5psw = ds->priv;
+
+       if (speed == SPEED_1000)
+               cmd_cfg |= A5PSW_CMD_CFG_ETH_SPEED;
+
+       if (duplex == DUPLEX_HALF)
+               cmd_cfg |= A5PSW_CMD_CFG_HD_ENA;
+
+       cmd_cfg |= A5PSW_CMD_CFG_CNTL_FRM_ENA;
+
+       if (!rx_pause)
+               cmd_cfg &= ~A5PSW_CMD_CFG_PAUSE_IGNORE;
+
+       a5psw_reg_writel(a5psw, A5PSW_CMD_CFG(port), cmd_cfg);
+}
+
+static int a5psw_set_ageing_time(struct dsa_switch *ds, unsigned int msecs)
+{
+       struct a5psw *a5psw = ds->priv;
+       unsigned long rate;
+       u64 max, tmp;
+       u32 agetime;
+
+       rate = clk_get_rate(a5psw->clk);
+       max = div64_ul(((u64)A5PSW_LK_AGETIME_MASK * A5PSW_TABLE_ENTRIES * 1024),
+                      rate) * 1000;
+       if (msecs > max)
+               return -EINVAL;
+
+       tmp = div_u64(rate, MSEC_PER_SEC);
+       agetime = div_u64(msecs * tmp, 1024 * A5PSW_TABLE_ENTRIES);
+
+       a5psw_reg_writel(a5psw, A5PSW_LK_AGETIME, agetime);
+
+       return 0;
+}
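
To make the unit conversion above concrete: the hardware age time is counted in units of 1024 * A5PSW_TABLE_ENTRIES clock cycles. Assuming, purely for illustration, a 125 MHz switch clock and the bridge default ageing time of 300 seconds (msecs = 300000):

    tmp     = 125000000 / 1000 = 125000        (cycles per millisecond)
    agetime = (300000 * 125000) / (1024 * 8192)
            = 37500000000 / 8388608 ~= 4470

which fits comfortably within the 24-bit A5PSW_LK_AGETIME_MASK field.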
+
+static void a5psw_flooding_set_resolution(struct a5psw *a5psw, int port,
+                                         bool set)
+{
+       u8 offsets[] = {A5PSW_UCAST_DEF_MASK, A5PSW_BCAST_DEF_MASK,
+                       A5PSW_MCAST_DEF_MASK};
+       int i;
+
+       if (set)
+               a5psw->bridged_ports |= BIT(port);
+       else
+               a5psw->bridged_ports &= ~BIT(port);
+
+       for (i = 0; i < ARRAY_SIZE(offsets); i++)
+               a5psw_reg_writel(a5psw, offsets[i], a5psw->bridged_ports);
+}
+
+static int a5psw_port_bridge_join(struct dsa_switch *ds, int port,
+                                 struct dsa_bridge bridge,
+                                 bool *tx_fwd_offload,
+                                 struct netlink_ext_ack *extack)
+{
+       struct a5psw *a5psw = ds->priv;
+
+       /* We only support 1 bridge device */
+       if (a5psw->br_dev && bridge.dev != a5psw->br_dev) {
+               NL_SET_ERR_MSG_MOD(extack,
+                                  "Forwarding offload supported for a single bridge");
+               return -EOPNOTSUPP;
+       }
+
+       a5psw->br_dev = bridge.dev;
+       a5psw_flooding_set_resolution(a5psw, port, true);
+       a5psw_port_mgmtfwd_set(a5psw, port, false);
+
+       return 0;
+}
+
+static void a5psw_port_bridge_leave(struct dsa_switch *ds, int port,
+                                   struct dsa_bridge bridge)
+{
+       struct a5psw *a5psw = ds->priv;
+
+       a5psw_flooding_set_resolution(a5psw, port, false);
+       a5psw_port_mgmtfwd_set(a5psw, port, true);
+
+       /* No more ports bridged */
+       if (a5psw->bridged_ports == BIT(A5PSW_CPU_PORT))
+               a5psw->br_dev = NULL;
+}
+
+static void a5psw_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
+{
+       u32 mask = A5PSW_INPUT_LEARN_DIS(port) | A5PSW_INPUT_LEARN_BLOCK(port);
+       struct a5psw *a5psw = ds->priv;
+       u32 reg = 0;
+
+       switch (state) {
+       case BR_STATE_DISABLED:
+       case BR_STATE_BLOCKING:
+               reg |= A5PSW_INPUT_LEARN_DIS(port);
+               reg |= A5PSW_INPUT_LEARN_BLOCK(port);
+               break;
+       case BR_STATE_LISTENING:
+               reg |= A5PSW_INPUT_LEARN_DIS(port);
+               break;
+       case BR_STATE_LEARNING:
+               reg |= A5PSW_INPUT_LEARN_BLOCK(port);
+               break;
+       case BR_STATE_FORWARDING:
+       default:
+               break;
+       }
+
+       a5psw_reg_rmw(a5psw, A5PSW_INPUT_LEARN, mask, reg);
+}
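
The switch statement reduces to the following mapping of STP states onto the two per-port A5PSW_INPUT_LEARN bits (derived directly from the code above):

    state            LEARN_DIS   LEARN_BLOCK
    DISABLED             1            1
    BLOCKING             1            1
    LISTENING            1            0
    LEARNING             0            1
    FORWARDING           0            0

so BLOCK gates acceptance of incoming traffic while DIS gates address learning, and FORWARDING clears both.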
+
+static void a5psw_port_fast_age(struct dsa_switch *ds, int port)
+{
+       struct a5psw *a5psw = ds->priv;
+
+       a5psw_port_fdb_flush(a5psw, port);
+}
+
+static int a5psw_lk_execute_lookup(struct a5psw *a5psw, union lk_data *lk_data,
+                                  u16 *entry)
+{
+       u32 ctrl;
+       int ret;
+
+       a5psw_reg_writel(a5psw, A5PSW_LK_DATA_LO, lk_data->lo);
+       a5psw_reg_writel(a5psw, A5PSW_LK_DATA_HI, lk_data->hi);
+
+       ctrl = A5PSW_LK_ADDR_CTRL_LOOKUP;
+       ret = a5psw_lk_execute_ctrl(a5psw, &ctrl);
+       if (ret)
+               return ret;
+
+       *entry = ctrl & A5PSW_LK_ADDR_CTRL_ADDRESS;
+
+       return 0;
+}
+
+static int a5psw_port_fdb_add(struct dsa_switch *ds, int port,
+                             const unsigned char *addr, u16 vid,
+                             struct dsa_db db)
+{
+       struct a5psw *a5psw = ds->priv;
+       union lk_data lk_data = {0};
+       bool inc_learncount = false;
+       int ret = 0;
+       u16 entry;
+       u32 reg;
+
+       ether_addr_copy(lk_data.entry.mac, addr);
+       lk_data.entry.port_mask = BIT(port);
+
+       mutex_lock(&a5psw->lk_lock);
+
+       /* Set the value to be written in the lookup table */
+       ret = a5psw_lk_execute_lookup(a5psw, &lk_data, &entry);
+       if (ret)
+               goto lk_unlock;
+
+       lk_data.hi = a5psw_reg_readl(a5psw, A5PSW_LK_DATA_HI);
+       if (!lk_data.entry.valid) {
+               inc_learncount = true;
+               /* port_mask set to 0x1f when entry is not valid, clear it */
+               lk_data.entry.port_mask = 0;
+               lk_data.entry.prio = 0;
+       }
+
+       lk_data.entry.port_mask |= BIT(port);
+       lk_data.entry.is_static = 1;
+       lk_data.entry.valid = 1;
+
+       a5psw_reg_writel(a5psw, A5PSW_LK_DATA_HI, lk_data.hi);
+
+       reg = A5PSW_LK_ADDR_CTRL_WRITE | entry;
+       ret = a5psw_lk_execute_ctrl(a5psw, &reg);
+       if (ret)
+               goto lk_unlock;
+
+       if (inc_learncount) {
+               reg = A5PSW_LK_LEARNCOUNT_MODE_INC;
+               a5psw_reg_writel(a5psw, A5PSW_LK_LEARNCOUNT, reg);
+       }
+
+lk_unlock:
+       mutex_unlock(&a5psw->lk_lock);
+
+       return ret;
+}
+
+static int a5psw_port_fdb_del(struct dsa_switch *ds, int port,
+                             const unsigned char *addr, u16 vid,
+                             struct dsa_db db)
+{
+       struct a5psw *a5psw = ds->priv;
+       union lk_data lk_data = {0};
+       bool clear = false;
+       u16 entry;
+       u32 reg;
+       int ret;
+
+       ether_addr_copy(lk_data.entry.mac, addr);
+
+       mutex_lock(&a5psw->lk_lock);
+
+       ret = a5psw_lk_execute_lookup(a5psw, &lk_data, &entry);
+       if (ret)
+               goto lk_unlock;
+
+       lk_data.hi = a5psw_reg_readl(a5psw, A5PSW_LK_DATA_HI);
+
+       /* Our hardware does not associate any VID with FDB entries. This
+        * means that if two entries were added for the same MAC but with
+        * different VIDs, deleting the first one would also delete the
+        * second one. Since there is unfortunately nothing we can do about
+        * that, do not return an error...
+        */
+       if (!lk_data.entry.valid)
+               goto lk_unlock;
+
+       lk_data.entry.port_mask &= ~BIT(port);
+       /* If there is no more port in the mask, clear the entry */
+       if (lk_data.entry.port_mask == 0)
+               clear = true;
+
+       a5psw_reg_writel(a5psw, A5PSW_LK_DATA_HI, lk_data.hi);
+
+       reg = entry;
+       if (clear)
+               reg |= A5PSW_LK_ADDR_CTRL_CLEAR;
+       else
+               reg |= A5PSW_LK_ADDR_CTRL_WRITE;
+
+       ret = a5psw_lk_execute_ctrl(a5psw, &reg);
+       if (ret)
+               goto lk_unlock;
+
+       /* Decrement LEARNCOUNT */
+       if (clear) {
+               reg = A5PSW_LK_LEARNCOUNT_MODE_DEC;
+               a5psw_reg_writel(a5psw, A5PSW_LK_LEARNCOUNT, reg);
+       }
+
+lk_unlock:
+       mutex_unlock(&a5psw->lk_lock);
+
+       return ret;
+}
+
+static int a5psw_port_fdb_dump(struct dsa_switch *ds, int port,
+                              dsa_fdb_dump_cb_t *cb, void *data)
+{
+       struct a5psw *a5psw = ds->priv;
+       union lk_data lk_data;
+       int i = 0, ret = 0;
+       u32 reg;
+
+       mutex_lock(&a5psw->lk_lock);
+
+       for (i = 0; i < A5PSW_TABLE_ENTRIES; i++) {
+               reg = A5PSW_LK_ADDR_CTRL_READ | A5PSW_LK_ADDR_CTRL_WAIT | i;
+
+               ret = a5psw_lk_execute_ctrl(a5psw, &reg);
+               if (ret)
+                       goto out_unlock;
+
+               lk_data.hi = a5psw_reg_readl(a5psw, A5PSW_LK_DATA_HI);
+               /* If entry is not valid or does not contain the port, skip */
+               if (!lk_data.entry.valid ||
+                   !(lk_data.entry.port_mask & BIT(port)))
+                       continue;
+
+               lk_data.lo = a5psw_reg_readl(a5psw, A5PSW_LK_DATA_LO);
+
+               ret = cb(lk_data.entry.mac, 0, lk_data.entry.is_static, data);
+               if (ret)
+                       goto out_unlock;
+       }
+
+out_unlock:
+       mutex_unlock(&a5psw->lk_lock);
+
+       return ret;
+}
+
+static u64 a5psw_read_stat(struct a5psw *a5psw, u32 offset, int port)
+{
+       u32 reg_lo, reg_hi;
+
+       reg_lo = a5psw_reg_readl(a5psw, offset + A5PSW_PORT_OFFSET(port));
+       /* A5PSW_STATS_HIWORD is latched on stat read */
+       reg_hi = a5psw_reg_readl(a5psw, A5PSW_STATS_HIWORD);
+
+       return ((u64)reg_hi << 32) | reg_lo;
+}
+
+static void a5psw_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+                             uint8_t *data)
+{
+       unsigned int u;
+
+       if (stringset != ETH_SS_STATS)
+               return;
+
+       for (u = 0; u < ARRAY_SIZE(a5psw_stats); u++) {
+               memcpy(data + u * ETH_GSTRING_LEN, a5psw_stats[u].name,
+                      ETH_GSTRING_LEN);
+       }
+}
+
+static void a5psw_get_ethtool_stats(struct dsa_switch *ds, int port,
+                                   uint64_t *data)
+{
+       struct a5psw *a5psw = ds->priv;
+       unsigned int u;
+
+       for (u = 0; u < ARRAY_SIZE(a5psw_stats); u++)
+               data[u] = a5psw_read_stat(a5psw, a5psw_stats[u].offset, port);
+}
+
+static int a5psw_get_sset_count(struct dsa_switch *ds, int port, int sset)
+{
+       if (sset != ETH_SS_STATS)
+               return 0;
+
+       return ARRAY_SIZE(a5psw_stats);
+}
+
+static void a5psw_get_eth_mac_stats(struct dsa_switch *ds, int port,
+                                   struct ethtool_eth_mac_stats *mac_stats)
+{
+       struct a5psw *a5psw = ds->priv;
+
+#define RD(name) a5psw_read_stat(a5psw, A5PSW_##name, port)
+       mac_stats->FramesTransmittedOK = RD(aFramesTransmittedOK);
+       mac_stats->SingleCollisionFrames = RD(aSingleCollisions);
+       mac_stats->MultipleCollisionFrames = RD(aMultipleCollisions);
+       mac_stats->FramesReceivedOK = RD(aFramesReceivedOK);
+       mac_stats->FrameCheckSequenceErrors = RD(aFrameCheckSequenceErrors);
+       mac_stats->AlignmentErrors = RD(aAlignmentErrors);
+       mac_stats->OctetsTransmittedOK = RD(aOctetsTransmittedOK);
+       mac_stats->FramesWithDeferredXmissions = RD(aDeferred);
+       mac_stats->LateCollisions = RD(aLateCollisions);
+       mac_stats->FramesAbortedDueToXSColls = RD(aExcessiveCollisions);
+       mac_stats->FramesLostDueToIntMACXmitError = RD(ifOutErrors);
+       mac_stats->CarrierSenseErrors = RD(aCarrierSenseErrors);
+       mac_stats->OctetsReceivedOK = RD(aOctetsReceivedOK);
+       mac_stats->FramesLostDueToIntMACRcvError = RD(ifInErrors);
+       mac_stats->MulticastFramesXmittedOK = RD(ifOutMulticastPkts);
+       mac_stats->BroadcastFramesXmittedOK = RD(ifOutBroadcastPkts);
+       mac_stats->FramesWithExcessiveDeferral = RD(aDeferred);
+       mac_stats->MulticastFramesReceivedOK = RD(ifInMulticastPkts);
+       mac_stats->BroadcastFramesReceivedOK = RD(ifInBroadcastPkts);
+#undef RD
+}
+
+static const struct ethtool_rmon_hist_range a5psw_rmon_ranges[] = {
+       { 0, 64 },
+       { 65, 127 },
+       { 128, 255 },
+       { 256, 511 },
+       { 512, 1023 },
+       { 1024, 1518 },
+       { 1519, A5PSW_MAX_MTU },
+       {}
+};
+
+static void a5psw_get_rmon_stats(struct dsa_switch *ds, int port,
+                                struct ethtool_rmon_stats *rmon_stats,
+                                const struct ethtool_rmon_hist_range **ranges)
+{
+       struct a5psw *a5psw = ds->priv;
+
+#define RD(name) a5psw_read_stat(a5psw, A5PSW_##name, port)
+       rmon_stats->undersize_pkts = RD(etherStatsUndersizePkts);
+       rmon_stats->oversize_pkts = RD(etherStatsOversizePkts);
+       rmon_stats->fragments = RD(etherStatsFragments);
+       rmon_stats->jabbers = RD(etherStatsJabbers);
+       rmon_stats->hist[0] = RD(etherStatsPkts64Octets);
+       rmon_stats->hist[1] = RD(etherStatsPkts65to127Octets);
+       rmon_stats->hist[2] = RD(etherStatsPkts128to255Octets);
+       rmon_stats->hist[3] = RD(etherStatsPkts256to511Octets);
+       rmon_stats->hist[4] = RD(etherStatsPkts512to1023Octets);
+       rmon_stats->hist[5] = RD(etherStatsPkts1024to1518Octets);
+       rmon_stats->hist[6] = RD(etherStatsPkts1519toXOctets);
+#undef RD
+
+       *ranges = a5psw_rmon_ranges;
+}
+
+static void a5psw_get_eth_ctrl_stats(struct dsa_switch *ds, int port,
+                                    struct ethtool_eth_ctrl_stats *ctrl_stats)
+{
+       struct a5psw *a5psw = ds->priv;
+       u64 stat;
+
+       stat = a5psw_read_stat(a5psw, A5PSW_aTxPAUSEMACCtrlFrames, port);
+       ctrl_stats->MACControlFramesTransmitted = stat;
+       stat = a5psw_read_stat(a5psw, A5PSW_aRxPAUSEMACCtrlFrames, port);
+       ctrl_stats->MACControlFramesReceived = stat;
+}
+
+static int a5psw_setup(struct dsa_switch *ds)
+{
+       struct a5psw *a5psw = ds->priv;
+       int port, vlan, ret;
+       struct dsa_port *dp;
+       u32 reg;
+
+       /* Validate that there is only 1 CPU port with index A5PSW_CPU_PORT */
+       dsa_switch_for_each_cpu_port(dp, ds) {
+               if (dp->index != A5PSW_CPU_PORT) {
+                       dev_err(a5psw->dev, "Invalid CPU port\n");
+                       return -EINVAL;
+               }
+       }
+
+       /* Configure management port */
+       reg = A5PSW_CPU_PORT | A5PSW_MGMT_CFG_DISCARD;
+       a5psw_reg_writel(a5psw, A5PSW_MGMT_CFG, reg);
+
+       /* Set pattern 0 to forward all frames to the mgmt port */
+       a5psw_reg_writel(a5psw, A5PSW_PATTERN_CTRL(A5PSW_PATTERN_MGMTFWD),
+                        A5PSW_PATTERN_CTRL_MGMTFWD);
+
+       /* Enable port tagging */
+       reg = FIELD_PREP(A5PSW_MGMT_TAG_CFG_TAGFIELD, ETH_P_DSA_A5PSW);
+       reg |= A5PSW_MGMT_TAG_CFG_ENABLE | A5PSW_MGMT_TAG_CFG_ALL_FRAMES;
+       a5psw_reg_writel(a5psw, A5PSW_MGMT_TAG_CFG, reg);
+
+       /* Enable normal switch operation */
+       reg = A5PSW_LK_ADDR_CTRL_BLOCKING | A5PSW_LK_ADDR_CTRL_LEARNING |
+             A5PSW_LK_ADDR_CTRL_AGEING | A5PSW_LK_ADDR_CTRL_ALLOW_MIGR |
+             A5PSW_LK_ADDR_CTRL_CLEAR_TABLE;
+       a5psw_reg_writel(a5psw, A5PSW_LK_CTRL, reg);
+
+       ret = readl_poll_timeout(a5psw->base + A5PSW_LK_CTRL, reg,
+                                !(reg & A5PSW_LK_ADDR_CTRL_CLEAR_TABLE),
+                                A5PSW_LK_BUSY_USEC_POLL, A5PSW_CTRL_TIMEOUT);
+       if (ret) {
+               dev_err(a5psw->dev, "Failed to clear lookup table\n");
+               return ret;
+       }
+
+       /* Reset learn count to 0 */
+       reg = A5PSW_LK_LEARNCOUNT_MODE_SET;
+       a5psw_reg_writel(a5psw, A5PSW_LK_LEARNCOUNT, reg);
+
+       /* Clear VLAN resource table */
+       reg = A5PSW_VLAN_RES_WR_PORTMASK | A5PSW_VLAN_RES_WR_TAGMASK;
+       for (vlan = 0; vlan < A5PSW_VLAN_COUNT; vlan++)
+               a5psw_reg_writel(a5psw, A5PSW_VLAN_RES(vlan), reg);
+
+       /* Reset all ports */
+       dsa_switch_for_each_port(dp, ds) {
+               port = dp->index;
+
+               /* Reset the port */
+               a5psw_reg_writel(a5psw, A5PSW_CMD_CFG(port),
+                                A5PSW_CMD_CFG_SW_RESET);
+
+               /* Enable only CPU port */
+               a5psw_port_enable_set(a5psw, port, dsa_port_is_cpu(dp));
+
+               if (dsa_port_is_unused(dp))
+                       continue;
+
+               /* Enable egress flooding for CPU port */
+               if (dsa_port_is_cpu(dp))
+                       a5psw_flooding_set_resolution(a5psw, port, true);
+
+               /* Enable management forward only for user ports */
+               if (dsa_port_is_user(dp))
+                       a5psw_port_mgmtfwd_set(a5psw, port, true);
+       }
+
+       return 0;
+}
+
+static const struct dsa_switch_ops a5psw_switch_ops = {
+       .get_tag_protocol = a5psw_get_tag_protocol,
+       .setup = a5psw_setup,
+       .port_disable = a5psw_port_disable,
+       .port_enable = a5psw_port_enable,
+       .phylink_get_caps = a5psw_phylink_get_caps,
+       .phylink_mac_select_pcs = a5psw_phylink_mac_select_pcs,
+       .phylink_mac_link_down = a5psw_phylink_mac_link_down,
+       .phylink_mac_link_up = a5psw_phylink_mac_link_up,
+       .port_change_mtu = a5psw_port_change_mtu,
+       .port_max_mtu = a5psw_port_max_mtu,
+       .get_sset_count = a5psw_get_sset_count,
+       .get_strings = a5psw_get_strings,
+       .get_ethtool_stats = a5psw_get_ethtool_stats,
+       .get_eth_mac_stats = a5psw_get_eth_mac_stats,
+       .get_eth_ctrl_stats = a5psw_get_eth_ctrl_stats,
+       .get_rmon_stats = a5psw_get_rmon_stats,
+       .set_ageing_time = a5psw_set_ageing_time,
+       .port_bridge_join = a5psw_port_bridge_join,
+       .port_bridge_leave = a5psw_port_bridge_leave,
+       .port_stp_state_set = a5psw_port_stp_state_set,
+       .port_fast_age = a5psw_port_fast_age,
+       .port_fdb_add = a5psw_port_fdb_add,
+       .port_fdb_del = a5psw_port_fdb_del,
+       .port_fdb_dump = a5psw_port_fdb_dump,
+};
+
+static int a5psw_mdio_wait_busy(struct a5psw *a5psw)
+{
+       u32 status;
+       int err;
+
+       err = readl_poll_timeout(a5psw->base + A5PSW_MDIO_CFG_STATUS, status,
+                                !(status & A5PSW_MDIO_CFG_STATUS_BUSY), 10,
+                                1000 * USEC_PER_MSEC);
+       if (err)
+               dev_err(a5psw->dev, "MDIO command timeout\n");
+
+       return err;
+}
+
+static int a5psw_mdio_read(struct mii_bus *bus, int phy_id, int phy_reg)
+{
+       struct a5psw *a5psw = bus->priv;
+       u32 cmd, status;
+       int ret;
+
+       if (phy_reg & MII_ADDR_C45)
+               return -EOPNOTSUPP;
+
+       cmd = A5PSW_MDIO_COMMAND_READ;
+       cmd |= FIELD_PREP(A5PSW_MDIO_COMMAND_REG_ADDR, phy_reg);
+       cmd |= FIELD_PREP(A5PSW_MDIO_COMMAND_PHY_ADDR, phy_id);
+
+       a5psw_reg_writel(a5psw, A5PSW_MDIO_COMMAND, cmd);
+
+       ret = a5psw_mdio_wait_busy(a5psw);
+       if (ret)
+               return ret;
+
+       ret = a5psw_reg_readl(a5psw, A5PSW_MDIO_DATA) & A5PSW_MDIO_DATA_MASK;
+
+       status = a5psw_reg_readl(a5psw, A5PSW_MDIO_CFG_STATUS);
+       if (status & A5PSW_MDIO_CFG_STATUS_READERR)
+               return -EIO;
+
+       return ret;
+}
+
+static int a5psw_mdio_write(struct mii_bus *bus, int phy_id, int phy_reg,
+                           u16 phy_data)
+{
+       struct a5psw *a5psw = bus->priv;
+       u32 cmd;
+
+       if (phy_reg & MII_ADDR_C45)
+               return -EOPNOTSUPP;
+
+       cmd = FIELD_PREP(A5PSW_MDIO_COMMAND_REG_ADDR, phy_reg);
+       cmd |= FIELD_PREP(A5PSW_MDIO_COMMAND_PHY_ADDR, phy_id);
+
+       a5psw_reg_writel(a5psw, A5PSW_MDIO_COMMAND, cmd);
+       a5psw_reg_writel(a5psw, A5PSW_MDIO_DATA, phy_data);
+
+       return a5psw_mdio_wait_busy(a5psw);
+}
+
+static int a5psw_mdio_config(struct a5psw *a5psw, u32 mdio_freq)
+{
+       unsigned long rate;
+       unsigned long div;
+       u32 cfgstatus;
+
+       rate = clk_get_rate(a5psw->hclk);
+       div = ((rate / mdio_freq) / 2);
+       if (div > FIELD_MAX(A5PSW_MDIO_CFG_STATUS_CLKDIV) ||
+           div < A5PSW_MDIO_CLK_DIV_MIN) {
+               dev_err(a5psw->dev, "MDIO clock div %ld out of range\n", div);
+               return -ERANGE;
+       }
+
+       cfgstatus = FIELD_PREP(A5PSW_MDIO_CFG_STATUS_CLKDIV, div);
+
+       a5psw_reg_writel(a5psw, A5PSW_MDIO_CFG_STATUS, cfgstatus);
+
+       return 0;
+}
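
As a worked example of the divider math (the input clock is board specific, so treat the rate as an assumption): with hclk at 125 MHz and the default A5PSW_MDIO_DEF_FREQ of 2.5 MHz,

    div = (125000000 / 2500000) / 2 = 25

which sits inside the accepted window of A5PSW_MDIO_CLK_DIV_MIN (5) up to FIELD_MAX(A5PSW_MDIO_CFG_STATUS_CLKDIV) (511, since the field spans bits 15:7).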
+
+static int a5psw_probe_mdio(struct a5psw *a5psw, struct device_node *node)
+{
+       struct device *dev = a5psw->dev;
+       struct mii_bus *bus;
+       u32 mdio_freq;
+       int ret;
+
+       if (of_property_read_u32(node, "clock-frequency", &mdio_freq))
+               mdio_freq = A5PSW_MDIO_DEF_FREQ;
+
+       ret = a5psw_mdio_config(a5psw, mdio_freq);
+       if (ret)
+               return ret;
+
+       bus = devm_mdiobus_alloc(dev);
+       if (!bus)
+               return -ENOMEM;
+
+       bus->name = "a5psw_mdio";
+       bus->read = a5psw_mdio_read;
+       bus->write = a5psw_mdio_write;
+       bus->priv = a5psw;
+       bus->parent = dev;
+       snprintf(bus->id, MII_BUS_ID_SIZE, "%s", dev_name(dev));
+
+       a5psw->mii_bus = bus;
+
+       return devm_of_mdiobus_register(dev, bus, node);
+}
+
+static void a5psw_pcs_free(struct a5psw *a5psw)
+{
+       int i;
+
+       for (i = 0; i < ARRAY_SIZE(a5psw->pcs); i++) {
+               if (a5psw->pcs[i])
+                       miic_destroy(a5psw->pcs[i]);
+       }
+}
+
+static int a5psw_pcs_get(struct a5psw *a5psw)
+{
+       struct device_node *ports, *port, *pcs_node;
+       struct phylink_pcs *pcs;
+       int ret;
+       u32 reg;
+
+       ports = of_get_child_by_name(a5psw->dev->of_node, "ethernet-ports");
+       if (!ports)
+               return -EINVAL;
+
+       for_each_available_child_of_node(ports, port) {
+               pcs_node = of_parse_phandle(port, "pcs-handle", 0);
+               if (!pcs_node)
+                       continue;
+
+               if (of_property_read_u32(port, "reg", &reg)) {
+                       ret = -EINVAL;
+                       goto free_pcs;
+               }
+
+               if (reg >= ARRAY_SIZE(a5psw->pcs)) {
+                       ret = -ENODEV;
+                       goto free_pcs;
+               }
+
+               pcs = miic_create(a5psw->dev, pcs_node);
+               if (IS_ERR(pcs)) {
+                       dev_err(a5psw->dev, "Failed to create PCS for port %d\n",
+                               reg);
+                       ret = PTR_ERR(pcs);
+                       goto free_pcs;
+               }
+
+               a5psw->pcs[reg] = pcs;
+               of_node_put(pcs_node);
+       }
+       of_node_put(ports);
+
+       return 0;
+
+free_pcs:
+       of_node_put(pcs_node);
+       of_node_put(port);
+       of_node_put(ports);
+       a5psw_pcs_free(a5psw);
+
+       return ret;
+}
+
+static int a5psw_probe(struct platform_device *pdev)
+{
+       struct device *dev = &pdev->dev;
+       struct device_node *mdio;
+       struct dsa_switch *ds;
+       struct a5psw *a5psw;
+       int ret;
+
+       a5psw = devm_kzalloc(dev, sizeof(*a5psw), GFP_KERNEL);
+       if (!a5psw)
+               return -ENOMEM;
+
+       a5psw->dev = dev;
+       mutex_init(&a5psw->lk_lock);
+       spin_lock_init(&a5psw->reg_lock);
+       a5psw->base = devm_platform_ioremap_resource(pdev, 0);
+       if (IS_ERR(a5psw->base))
+               return PTR_ERR(a5psw->base);
+
+       ret = a5psw_pcs_get(a5psw);
+       if (ret)
+               return ret;
+
+       a5psw->hclk = devm_clk_get(dev, "hclk");
+       if (IS_ERR(a5psw->hclk)) {
+               dev_err(dev, "failed get hclk clock\n");
+               ret = PTR_ERR(a5psw->hclk);
+               goto free_pcs;
+       }
+
+       a5psw->clk = devm_clk_get(dev, "clk");
+       if (IS_ERR(a5psw->clk)) {
+               dev_err(dev, "failed get clk_switch clock\n");
+               ret = PTR_ERR(a5psw->clk);
+               goto free_pcs;
+       }
+
+       ret = clk_prepare_enable(a5psw->clk);
+       if (ret)
+               goto free_pcs;
+
+       ret = clk_prepare_enable(a5psw->hclk);
+       if (ret)
+               goto clk_disable;
+
+       mdio = of_get_child_by_name(dev->of_node, "mdio");
+       if (of_device_is_available(mdio)) {
+               ret = a5psw_probe_mdio(a5psw, mdio);
+               if (ret) {
+                       of_node_put(mdio);
+                       dev_err(dev, "Failed to register MDIO: %d\n", ret);
+                       goto hclk_disable;
+               }
+       }
+
+       of_node_put(mdio);
+
+       ds = &a5psw->ds;
+       ds->dev = dev;
+       ds->num_ports = A5PSW_PORTS_NUM;
+       ds->ops = &a5psw_switch_ops;
+       ds->priv = a5psw;
+
+       /* remove()/shutdown() retrieve the switch via drvdata */
+       platform_set_drvdata(pdev, a5psw);
+
+       ret = dsa_register_switch(ds);
+       if (ret) {
+               dev_err(dev, "Failed to register DSA switch: %d\n", ret);
+               goto hclk_disable;
+       }
+
+       return 0;
+
+hclk_disable:
+       clk_disable_unprepare(a5psw->hclk);
+clk_disable:
+       clk_disable_unprepare(a5psw->clk);
+free_pcs:
+       a5psw_pcs_free(a5psw);
+
+       return ret;
+}
+
+static int a5psw_remove(struct platform_device *pdev)
+{
+       struct a5psw *a5psw = platform_get_drvdata(pdev);
+
+       if (!a5psw)
+               return 0;
+
+       dsa_unregister_switch(&a5psw->ds);
+       a5psw_pcs_free(a5psw);
+       clk_disable_unprepare(a5psw->hclk);
+       clk_disable_unprepare(a5psw->clk);
+
+       platform_set_drvdata(pdev, NULL);
+
+       return 0;
+}
+
+static void a5psw_shutdown(struct platform_device *pdev)
+{
+       struct a5psw *a5psw = platform_get_drvdata(pdev);
+
+       if (!a5psw)
+               return;
+
+       dsa_switch_shutdown(&a5psw->ds);
+
+       platform_set_drvdata(pdev, NULL);
+}
+
+static const struct of_device_id a5psw_of_mtable[] = {
+       { .compatible = "renesas,rzn1-a5psw", },
+       { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, a5psw_of_mtable);
+
+static struct platform_driver a5psw_driver = {
+       .driver = {
+               .name    = "rzn1_a5psw",
+               .of_match_table = of_match_ptr(a5psw_of_mtable),
+       },
+       .probe = a5psw_probe,
+       .remove = a5psw_remove,
+       .shutdown = a5psw_shutdown,
+};
+module_platform_driver(a5psw_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Renesas RZ/N1 Advanced 5-port Switch driver");
+MODULE_AUTHOR("Clément Léger <clement.leger@bootlin.com>");
diff --git a/drivers/net/dsa/rzn1_a5psw.h b/drivers/net/dsa/rzn1_a5psw.h
new file mode 100644 (file)
index 0000000..c67abd4
--- /dev/null
@@ -0,0 +1,259 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 Schneider Electric
+ *
+ * Clément Léger <clement.leger@bootlin.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/debugfs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_mdio.h>
+#include <linux/platform_device.h>
+#include <linux/pcs-rzn1-miic.h>
+#include <net/dsa.h>
+
+#define A5PSW_REVISION                 0x0
+#define A5PSW_PORT_OFFSET(port)                (0x400 * (port))
+
+#define A5PSW_PORT_ENA                 0x8
+#define A5PSW_PORT_ENA_RX_SHIFT                16
+#define A5PSW_PORT_ENA_TX_RX(port)     (BIT((port) + A5PSW_PORT_ENA_RX_SHIFT) | \
+                                        BIT(port))
+#define A5PSW_UCAST_DEF_MASK           0xC
+
+#define A5PSW_VLAN_VERIFY              0x10
+#define A5PSW_VLAN_VERI_SHIFT          0
+#define A5PSW_VLAN_DISC_SHIFT          16
+
+#define A5PSW_BCAST_DEF_MASK           0x14
+#define A5PSW_MCAST_DEF_MASK           0x18
+
+#define A5PSW_INPUT_LEARN              0x1C
+#define A5PSW_INPUT_LEARN_DIS(p)       BIT((p) + 16)
+#define A5PSW_INPUT_LEARN_BLOCK(p)     BIT(p)
+
+#define A5PSW_MGMT_CFG                 0x20
+#define A5PSW_MGMT_CFG_DISCARD         BIT(7)
+
+#define A5PSW_MODE_CFG                 0x24
+#define A5PSW_MODE_STATS_RESET         BIT(31)
+
+#define A5PSW_VLAN_IN_MODE             0x28
+#define A5PSW_VLAN_IN_MODE_PORT_SHIFT(port)    ((port) * 2)
+#define A5PSW_VLAN_IN_MODE_PORT(port)          (GENMASK(1, 0) << \
+                                       A5PSW_VLAN_IN_MODE_PORT_SHIFT(port))
+#define A5PSW_VLAN_IN_MODE_SINGLE_PASSTHROUGH  0x0
+#define A5PSW_VLAN_IN_MODE_SINGLE_REPLACE      0x1
+#define A5PSW_VLAN_IN_MODE_TAG_ALWAYS          0x2
+
+#define A5PSW_VLAN_OUT_MODE            0x2C
+#define A5PSW_VLAN_OUT_MODE_PORT(port) (GENMASK(1, 0) << ((port) * 2))
+#define A5PSW_VLAN_OUT_MODE_DIS                0x0
+#define A5PSW_VLAN_OUT_MODE_STRIP      0x1
+#define A5PSW_VLAN_OUT_MODE_TAG_THROUGH        0x2
+#define A5PSW_VLAN_OUT_MODE_TRANSPARENT        0x3
+
+#define A5PSW_VLAN_IN_MODE_ENA         0x30
+#define A5PSW_VLAN_TAG_ID              0x34
+
+#define A5PSW_SYSTEM_TAGINFO(port)     (0x200 + A5PSW_PORT_OFFSET(port))
+
+#define A5PSW_AUTH_PORT(port)          (0x240 + 4 * (port))
+#define A5PSW_AUTH_PORT_AUTHORIZED     BIT(0)
+
+#define A5PSW_VLAN_RES(entry)          (0x280 + 4 * (entry))
+#define A5PSW_VLAN_RES_WR_PORTMASK     BIT(30)
+#define A5PSW_VLAN_RES_WR_TAGMASK      BIT(29)
+#define A5PSW_VLAN_RES_RD_TAGMASK      BIT(28)
+#define A5PSW_VLAN_RES_ID              GENMASK(16, 5)
+#define A5PSW_VLAN_RES_PORTMASK                GENMASK(4, 0)
+
+#define A5PSW_RXMATCH_CONFIG(port)     (0x3e80 + 4 * (port))
+#define A5PSW_RXMATCH_CONFIG_PATTERN(p)        BIT(p)
+
+#define A5PSW_PATTERN_CTRL(p)          (0x3eb0 + 4  * (p))
+#define A5PSW_PATTERN_CTRL_MGMTFWD     BIT(1)
+
+#define A5PSW_LK_CTRL          0x400
+#define A5PSW_LK_ADDR_CTRL_BLOCKING    BIT(0)
+#define A5PSW_LK_ADDR_CTRL_LEARNING    BIT(1)
+#define A5PSW_LK_ADDR_CTRL_AGEING      BIT(2)
+#define A5PSW_LK_ADDR_CTRL_ALLOW_MIGR  BIT(3)
+#define A5PSW_LK_ADDR_CTRL_CLEAR_TABLE BIT(6)
+
+#define A5PSW_LK_ADDR_CTRL             0x408
+#define A5PSW_LK_ADDR_CTRL_BUSY                BIT(31)
+#define A5PSW_LK_ADDR_CTRL_DELETE_PORT BIT(30)
+#define A5PSW_LK_ADDR_CTRL_CLEAR       BIT(29)
+#define A5PSW_LK_ADDR_CTRL_LOOKUP      BIT(28)
+#define A5PSW_LK_ADDR_CTRL_WAIT                BIT(27)
+#define A5PSW_LK_ADDR_CTRL_READ                BIT(26)
+#define A5PSW_LK_ADDR_CTRL_WRITE       BIT(25)
+#define A5PSW_LK_ADDR_CTRL_ADDRESS     GENMASK(12, 0)
+
+#define A5PSW_LK_DATA_LO               0x40C
+#define A5PSW_LK_DATA_HI               0x410
+#define A5PSW_LK_DATA_HI_VALID         BIT(16)
+#define A5PSW_LK_DATA_HI_PORT          BIT(16)
+
+#define A5PSW_LK_LEARNCOUNT            0x418
+#define A5PSW_LK_LEARNCOUNT_COUNT      GENMASK(13, 0)
+#define A5PSW_LK_LEARNCOUNT_MODE       GENMASK(31, 30)
+#define A5PSW_LK_LEARNCOUNT_MODE_SET   0x0
+#define A5PSW_LK_LEARNCOUNT_MODE_INC   0x1
+#define A5PSW_LK_LEARNCOUNT_MODE_DEC   0x2
+
+#define A5PSW_MGMT_TAG_CFG             0x480
+#define A5PSW_MGMT_TAG_CFG_TAGFIELD    GENMASK(31, 16)
+#define A5PSW_MGMT_TAG_CFG_ALL_FRAMES  BIT(1)
+#define A5PSW_MGMT_TAG_CFG_ENABLE      BIT(0)
+
+#define A5PSW_LK_AGETIME               0x41C
+#define A5PSW_LK_AGETIME_MASK          GENMASK(23, 0)
+
+#define A5PSW_MDIO_CFG_STATUS          0x700
+#define A5PSW_MDIO_CFG_STATUS_CLKDIV   GENMASK(15, 7)
+#define A5PSW_MDIO_CFG_STATUS_READERR  BIT(1)
+#define A5PSW_MDIO_CFG_STATUS_BUSY     BIT(0)
+
+#define A5PSW_MDIO_COMMAND             0x704
+/* Register is named TRAININIT in the datasheet and should be set when reading */
+#define A5PSW_MDIO_COMMAND_READ                BIT(15)
+#define A5PSW_MDIO_COMMAND_PHY_ADDR    GENMASK(9, 5)
+#define A5PSW_MDIO_COMMAND_REG_ADDR    GENMASK(4, 0)
+
+#define A5PSW_MDIO_DATA                        0x708
+#define A5PSW_MDIO_DATA_MASK           GENMASK(15, 0)
+
+#define A5PSW_CMD_CFG(port)            (0x808 + A5PSW_PORT_OFFSET(port))
+#define A5PSW_CMD_CFG_CNTL_FRM_ENA     BIT(23)
+#define A5PSW_CMD_CFG_SW_RESET         BIT(13)
+#define A5PSW_CMD_CFG_TX_CRC_APPEND    BIT(11)
+#define A5PSW_CMD_CFG_HD_ENA           BIT(10)
+#define A5PSW_CMD_CFG_PAUSE_IGNORE     BIT(8)
+#define A5PSW_CMD_CFG_CRC_FWD          BIT(6)
+#define A5PSW_CMD_CFG_ETH_SPEED                BIT(3)
+#define A5PSW_CMD_CFG_RX_ENA           BIT(1)
+#define A5PSW_CMD_CFG_TX_ENA           BIT(0)
+
+#define A5PSW_FRM_LENGTH(port)         (0x814 + A5PSW_PORT_OFFSET(port))
+#define A5PSW_FRM_LENGTH_MASK          GENMASK(13, 0)
+
+#define A5PSW_STATUS(port)             (0x840 + A5PSW_PORT_OFFSET(port))
+
+#define A5PSW_STATS_HIWORD             0x900
+
+/* Stats */
+#define A5PSW_aFramesTransmittedOK             0x868
+#define A5PSW_aFramesReceivedOK                        0x86C
+#define A5PSW_aFrameCheckSequenceErrors                0x870
+#define A5PSW_aAlignmentErrors                 0x874
+#define A5PSW_aOctetsTransmittedOK             0x878
+#define A5PSW_aOctetsReceivedOK                        0x87C
+#define A5PSW_aTxPAUSEMACCtrlFrames            0x880
+#define A5PSW_aRxPAUSEMACCtrlFrames            0x884
+/* If */
+#define A5PSW_ifInErrors                       0x888
+#define A5PSW_ifOutErrors                      0x88C
+#define A5PSW_ifInUcastPkts                    0x890
+#define A5PSW_ifInMulticastPkts                        0x894
+#define A5PSW_ifInBroadcastPkts                        0x898
+#define A5PSW_ifOutDiscards                    0x89C
+#define A5PSW_ifOutUcastPkts                   0x8A0
+#define A5PSW_ifOutMulticastPkts               0x8A4
+#define A5PSW_ifOutBroadcastPkts               0x8A8
+/* Ether */
+#define A5PSW_etherStatsDropEvents             0x8AC
+#define A5PSW_etherStatsOctets                 0x8B0
+#define A5PSW_etherStatsPkts                   0x8B4
+#define A5PSW_etherStatsUndersizePkts          0x8B8
+#define A5PSW_etherStatsOversizePkts           0x8BC
+#define A5PSW_etherStatsPkts64Octets           0x8C0
+#define A5PSW_etherStatsPkts65to127Octets      0x8C4
+#define A5PSW_etherStatsPkts128to255Octets     0x8C8
+#define A5PSW_etherStatsPkts256to511Octets     0x8CC
+#define A5PSW_etherStatsPkts512to1023Octets    0x8D0
+#define A5PSW_etherStatsPkts1024to1518Octets   0x8D4
+#define A5PSW_etherStatsPkts1519toXOctets      0x8D8
+#define A5PSW_etherStatsJabbers                        0x8DC
+#define A5PSW_etherStatsFragments              0x8E0
+
+#define A5PSW_VLANReceived                     0x8E8
+#define A5PSW_VLANTransmitted                  0x8EC
+
+#define A5PSW_aDeferred                                0x910
+#define A5PSW_aMultipleCollisions              0x914
+#define A5PSW_aSingleCollisions                        0x918
+#define A5PSW_aLateCollisions                  0x91C
+#define A5PSW_aExcessiveCollisions             0x920
+#define A5PSW_aCarrierSenseErrors              0x924
+
+#define A5PSW_VLAN_TAG(prio, id)       (((prio) << 12) | (id))
+#define A5PSW_PORTS_NUM                        5
+#define A5PSW_CPU_PORT                 (A5PSW_PORTS_NUM - 1)
+#define A5PSW_MDIO_DEF_FREQ            2500000
+#define A5PSW_MDIO_TIMEOUT             100
+#define A5PSW_JUMBO_LEN                        (10 * SZ_1K)
+#define A5PSW_MDIO_CLK_DIV_MIN         5
+#define A5PSW_TAG_LEN                  8
+#define A5PSW_VLAN_COUNT               32
+
+/* Ensure enough space for 2 VLAN tags */
+#define A5PSW_EXTRA_MTU_LEN            (A5PSW_TAG_LEN + 8)
+#define A5PSW_MAX_MTU                  (A5PSW_JUMBO_LEN - A5PSW_EXTRA_MTU_LEN)
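
Spelled out: A5PSW_EXTRA_MTU_LEN = 8 (switch tag) + 8 (two VLAN tags) = 16 bytes, so A5PSW_MAX_MTU = 10 * 1024 - 16 = 10224 bytes.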
+
+#define A5PSW_PATTERN_MGMTFWD          0
+
+#define A5PSW_LK_BUSY_USEC_POLL                10
+#define A5PSW_CTRL_TIMEOUT             1000
+#define A5PSW_TABLE_ENTRIES            8192
+
+struct fdb_entry {
+       u8 mac[ETH_ALEN];
+       u16 valid:1;
+       u16 is_static:1;
+       u16 prio:3;
+       u16 port_mask:5;
+       u16 reserved:6;
+} __packed;
+
+union lk_data {
+       struct {
+               u32 lo;
+               u32 hi;
+       };
+       struct fdb_entry entry;
+};
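
The union overlays the packed 64-bit FDB entry onto the pair of 32-bit A5PSW_LK_DATA_LO/HI registers, which is what lets a5psw_port_fdb_add() fill the entry field by field and then push it out as two raw register writes. A build-time size check would pin that layout assumption down; it is not in the driver, but a sketch could look like:

    #include <linux/build_bug.h>

    static_assert(sizeof(union lk_data) == 2 * sizeof(u32),
                  "lk_data must map exactly onto LK_DATA_LO/HI");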
+
+/**
+ * struct a5psw - switch struct
+ * @base: Base address of the switch
+ * @hclk: hclk_switch clock
+ * @clk: clk_switch clock
+ * @dev: Device associated to the switch
+ * @mii_bus: MDIO bus struct
+ * @pcs: Array of PCS connected to the switch ports (not for the CPU)
+ * @ds: DSA switch struct
+ * @lk_lock: Lock for the lookup table
+ * @reg_lock: Lock for register read-modify-write operation
+ * @bridged_ports: Mask of ports that are bridged and should be flooded
+ * @br_dev: Bridge net device
+ */
+struct a5psw {
+       void __iomem *base;
+       struct clk *hclk;
+       struct clk *clk;
+       struct device *dev;
+       struct mii_bus  *mii_bus;
+       struct phylink_pcs *pcs[A5PSW_PORTS_NUM - 1];
+       struct dsa_switch ds;
+       struct mutex lk_lock;
+       spinlock_t reg_lock;
+       u32 bridged_ports;
+       struct net_device *br_dev;
+};
index 955abbc..9a55c1d 100644 (file)
@@ -84,6 +84,7 @@ source "drivers/net/ethernet/huawei/Kconfig"
 source "drivers/net/ethernet/i825xx/Kconfig"
 source "drivers/net/ethernet/ibm/Kconfig"
 source "drivers/net/ethernet/intel/Kconfig"
+source "drivers/net/ethernet/wangxun/Kconfig"
 source "drivers/net/ethernet/xscale/Kconfig"
 
 config JME
index 9eb0116..c06e75e 100644 (file)
@@ -97,6 +97,7 @@ obj-$(CONFIG_NET_VENDOR_TOSHIBA) += toshiba/
 obj-$(CONFIG_NET_VENDOR_TUNDRA) += tundra/
 obj-$(CONFIG_NET_VENDOR_VERTEXCOM) += vertexcom/
 obj-$(CONFIG_NET_VENDOR_VIA) += via/
+obj-$(CONFIG_NET_VENDOR_WANGXUN) += wangxun/
 obj-$(CONFIG_NET_VENDOR_WIZNET) += wiznet/
 obj-$(CONFIG_NET_VENDOR_XILINX) += xilinx/
 obj-$(CONFIG_NET_VENDOR_XIRCOM) += xircom/
index fbf4588..d19d157 100644 (file)
@@ -1106,7 +1106,7 @@ static void et1310_config_rxmac_regs(struct et131x_adapter *adapter)
        writel(0, &rxmac->mif_ctrl);
        writel(0, &rxmac->space_avail);
 
-       /* Initialize the the mif_ctrl register
+       /* Initialize the mif_ctrl register
         * bit 3:  Receive code error. One or more nibbles were signaled as
         *         errors  during the reception of the packet.  Clear this
         *         bit in Gigabit, set it in 100Mbit.  This was derived
index 4d46780..f342bb8 100644 (file)
@@ -1673,12 +1673,10 @@ static int xgbe_prep_tso(struct sk_buff *skb, struct xgbe_packet_data *packet)
                return ret;
 
        if (XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES, VXLAN)) {
-               packet->header_len = skb_inner_transport_offset(skb) +
-                                    inner_tcp_hdrlen(skb);
+               packet->header_len = skb_inner_tcp_all_headers(skb);
                packet->tcp_header_len = inner_tcp_hdrlen(skb);
        } else {
-               packet->header_len = skb_transport_offset(skb) +
-                                    tcp_hdrlen(skb);
+               packet->header_len = skb_tcp_all_headers(skb);
                packet->tcp_header_len = tcp_hdrlen(skb);
        }
        packet->tcp_payload_len = skb->len - packet->header_len;
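
This hunk (and the atl1c/atl1e conversions below) are mechanical switches to the recently added skb_tcp_all_headers()/skb_inner_tcp_all_headers() helpers, which fold the recurring offset-plus-header-length sum into one call; essentially:

    static inline int skb_tcp_all_headers(const struct sk_buff *skb)
    {
            /* bytes from skb->data up to and including the TCP header */
            return skb_transport_offset(skb) + tcp_hdrlen(skb);
    }

    static inline int skb_inner_tcp_all_headers(const struct sk_buff *skb)
    {
            /* same, computed against the inner (encapsulated) headers */
            return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
    }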
index d954755..b875c43 100644 (file)
@@ -417,7 +417,7 @@ struct xgbe_rx_ring_data {
 
 /* Structure used to hold information related to the descriptor
  * and the packet associated with the descriptor (always use
- * use the XGBE_GET_DESC_DATA macro to access this data from the ring)
+ * the XGBE_GET_DESC_DATA macro to access this data from the ring)
  */
 struct xgbe_ring_data {
        struct xgbe_ring_desc *rdesc;   /* Virtual address of descriptor */
index b6119dc..c2fda80 100644 (file)
@@ -158,7 +158,7 @@ struct aq_mss_egress_class_record {
         *  1: compare the SNAP header.
         *  If this bit is set to 1, the extracted filed will assume the
         *  SNAP header exist as encapsulated in 802.3 (RFC 1042). I.E. the
-        *  next 5 bytes after the the LLC header is SNAP header.
+        *  next 5 bytes after the LLC header is SNAP header.
         */
        u32 snap_mask;
        /*! 0: don't care and no LLC header exist.
@@ -422,7 +422,7 @@ struct aq_mss_ingress_preclass_record {
         *  1: compare the SNAP header.
         *  If this bit is set to 1, the extracted filed will assume the
         *  SNAP header exist as encapsulated in 802.3 (RFC 1042). I.E. the
-        *  next 5 bytes after the the LLC header is SNAP header.
+        *  next 5 bytes after the LLC header is SNAP header.
         */
        u32 snap_mask;
        /*! Mask is per-byte.
index 1c6ea67..e461f47 100644 (file)
@@ -786,7 +786,7 @@ static bool ag71xx_check_dma_stuck(struct ag71xx *ag)
        return false;
 }
 
-static int ag71xx_tx_packets(struct ag71xx *ag, bool flush)
+static int ag71xx_tx_packets(struct ag71xx *ag, bool flush, int budget)
 {
        struct ag71xx_ring *ring = &ag->tx_ring;
        int sent = 0, bytes_compl = 0, n = 0;
@@ -825,7 +825,7 @@ static int ag71xx_tx_packets(struct ag71xx *ag, bool flush)
                if (!skb)
                        continue;
 
-               dev_kfree_skb_any(skb);
+               napi_consume_skb(skb, budget);
                ring->buf[i].tx.skb = NULL;
 
                bytes_compl += ring->buf[i].tx.len;
@@ -970,7 +970,7 @@ static void ag71xx_fast_reset(struct ag71xx *ag)
        mii_reg = ag71xx_rr(ag, AG71XX_REG_MII_CFG);
        rx_ds = ag71xx_rr(ag, AG71XX_REG_RX_DESC);
 
-       ag71xx_tx_packets(ag, true);
+       ag71xx_tx_packets(ag, true, 0);
 
        reset_control_assert(ag->mac_reset);
        usleep_range(10, 20);
@@ -1657,7 +1657,7 @@ static int ag71xx_rx_packets(struct ag71xx *ag, int limit)
                ndev->stats.rx_packets++;
                ndev->stats.rx_bytes += pktlen;
 
-               skb = build_skb(ring->buf[i].rx.rx_buf, ag71xx_buffer_size(ag));
+               skb = napi_build_skb(ring->buf[i].rx.rx_buf, ag71xx_buffer_size(ag));
                if (!skb) {
                        skb_free_frag(ring->buf[i].rx.rx_buf);
                        goto next;
@@ -1703,7 +1703,7 @@ static int ag71xx_poll(struct napi_struct *napi, int limit)
        int tx_done, rx_done;
        u32 status;
 
-       tx_done = ag71xx_tx_packets(ag, false);
+       tx_done = ag71xx_tx_packets(ag, false, limit);
 
        netif_dbg(ag, rx_status, ndev, "processing RX ring\n");
        rx_done = ag71xx_rx_packets(ag, limit);
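
The budget parameter threaded through ag71xx_tx_packets() exists because napi_consume_skb(skb, budget) only defers the free onto the per-CPU batch when called from NAPI context; a budget of 0 signals non-NAPI context and degrades to an immediate dev_consume_skb_any(). Hence the two call sites:

    tx_done = ag71xx_tx_packets(ag, false, limit);  /* NAPI poll */
    ag71xx_tx_packets(ag, true, 0);                 /* fast-reset path */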
index 4945939..9485847 100644 (file)
@@ -2072,7 +2072,7 @@ static u16 atl1c_cal_tpd_req(const struct sk_buff *skb)
        tpd_req = skb_shinfo(skb)->nr_frags + 1;
 
        if (skb_is_gso(skb)) {
-               proto_hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               proto_hdr_len = skb_tcp_all_headers(skb);
                if (proto_hdr_len < skb_headlen(skb))
                        tpd_req++;
                if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
@@ -2107,7 +2107,7 @@ static int atl1c_tso_csum(struct atl1c_adapter *adapter,
                        if (real_len < skb->len)
                                pskb_trim(skb, real_len);
 
-                       hdr_len = (skb_transport_offset(skb) + tcp_hdrlen(skb));
+                       hdr_len = skb_tcp_all_headers(skb);
                        if (unlikely(skb->len == hdr_len)) {
                                /* only xsum need */
                                if (netif_msg_tx_queued(adapter))
@@ -2132,7 +2132,7 @@ static int atl1c_tso_csum(struct atl1c_adapter *adapter,
                        *tpd = atl1c_get_tpd(adapter, queue);
                        ipv6_hdr(skb)->payload_len = 0;
                        /* check payload == 0 byte ? */
-                       hdr_len = (skb_transport_offset(skb) + tcp_hdrlen(skb));
+                       hdr_len = skb_tcp_all_headers(skb);
                        if (unlikely(skb->len == hdr_len)) {
                                /* only xsum need */
                                if (netif_msg_tx_queued(adapter))
@@ -2219,7 +2219,8 @@ static int atl1c_tx_map(struct atl1c_adapter *adapter,
        tso = (tpd->word1 >> TPD_LSO_EN_SHIFT) & TPD_LSO_EN_MASK;
        if (tso) {
                /* TSO */
-               map_len = hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
+               map_len = hdr_len;
                use_tpd = tpd;
 
                buffer_info = atl1c_get_tx_buffer(adapter, use_tpd);
@@ -2849,7 +2850,7 @@ static pci_ers_result_t atl1c_io_error_detected(struct pci_dev *pdev,
 
        pci_disable_device(pdev);
 
-       /* Request a slot slot reset. */
+       /* Request a slot reset. */
        return PCI_ERS_RESULT_NEED_RESET;
 }
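
Most conversions in this atl1c hunk -- and in the atl1e, atl1, bnx2x, bnxt,
tg3, bna, macb, nicvf, cxgb4, chcr, enic, be2net, fec, gve, hns, hns3 and
funeth hunks that follow -- replace the open-coded "transport offset plus
TCP header length" sum with helpers introduced by this series. Going by the
expressions they replace, the helpers reduce to:

	static inline int skb_tcp_all_headers(const struct sk_buff *skb)
	{
		return skb_transport_offset(skb) + tcp_hdrlen(skb);
	}

	static inline int skb_inner_tcp_all_headers(const struct sk_buff *skb)
	{
		return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
	}
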
 
index 2068186..57a51fb 100644 (file)
@@ -1609,8 +1609,7 @@ static u16 atl1e_cal_tdp_req(const struct sk_buff *skb)
        if (skb_is_gso(skb)) {
                if (skb->protocol == htons(ETH_P_IP) ||
                   (skb_shinfo(skb)->gso_type == SKB_GSO_TCPV6)) {
-                       proto_hdr_len = skb_transport_offset(skb) +
-                                       tcp_hdrlen(skb);
+                       proto_hdr_len = skb_tcp_all_headers(skb);
                        if (proto_hdr_len < skb_headlen(skb)) {
                                tpd_req += ((skb_headlen(skb) - proto_hdr_len +
                                           MAX_TX_BUF_LEN - 1) >>
@@ -1645,7 +1644,7 @@ static int atl1e_tso_csum(struct atl1e_adapter *adapter,
                        if (real_len < skb->len)
                                pskb_trim(skb, real_len);
 
-                       hdr_len = (skb_transport_offset(skb) + tcp_hdrlen(skb));
+                       hdr_len = skb_tcp_all_headers(skb);
                        if (unlikely(skb->len == hdr_len)) {
                                /* only xsum need */
                                netdev_warn(adapter->netdev,
@@ -1713,7 +1712,8 @@ static int atl1e_tx_map(struct atl1e_adapter *adapter,
        segment = (tpd->word3 >> TPD_SEGMENT_EN_SHIFT) & TPD_SEGMENT_EN_MASK;
        if (segment) {
                /* TSO */
-               map_len = hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
+               map_len = hdr_len;
                use_tpd = tpd;
 
                tx_buffer = atl1e_get_tx_buffer(adapter, use_tpd);
@@ -2482,7 +2482,7 @@ atl1e_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
 
        pci_disable_device(pdev);
 
-       /* Request a slot slot reset. */
+       /* Request a slot reset. */
        return PCI_ERS_RESULT_NEED_RESET;
 }
 
index 6a96996..ff1fe09 100644 (file)
@@ -2115,7 +2115,7 @@ static int atl1_tso(struct atl1_adapter *adapter, struct sk_buff *skb,
                                ntohs(iph->tot_len));
                        if (real_len < skb->len)
                                pskb_trim(skb, real_len);
-                       hdr_len = (skb_transport_offset(skb) + tcp_hdrlen(skb));
+                       hdr_len = skb_tcp_all_headers(skb);
                        if (skb->len == hdr_len) {
                                iph->check = 0;
                                tcp_hdr(skb)->check =
@@ -2206,7 +2206,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
        retval = (ptpd->word3 >> TPD_SEGMENT_EN_SHIFT) & TPD_SEGMENT_EN_MASK;
        if (retval) {
                /* TSO */
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
                buffer_info->length = hdr_len;
                page = virt_to_page(skb->data);
                offset = offset_in_page(skb->data);
@@ -2367,8 +2367,7 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
        mss = skb_shinfo(skb)->gso_size;
        if (mss) {
                if (skb->protocol == htons(ETH_P_IP)) {
-                       proto_hdr_len = (skb_transport_offset(skb) +
-                                        tcp_hdrlen(skb));
+                       proto_hdr_len = skb_tcp_all_headers(skb);
                        if (unlikely(proto_hdr_len > len)) {
                                dev_kfree_skb_any(skb);
                                return NETDEV_TX_OK;
index 514d61d..fcaa6a2 100644 (file)
@@ -1935,7 +1935,7 @@ static int bcm_enet_remove(struct platform_device *pdev)
        return 0;
 }
 
-struct platform_driver bcm63xx_enet_driver = {
+static struct platform_driver bcm63xx_enet_driver = {
        .probe  = bcm_enet_probe,
        .remove = bcm_enet_remove,
        .driver = {
@@ -2756,7 +2756,7 @@ static int bcm_enetsw_remove(struct platform_device *pdev)
        return 0;
 }
 
-struct platform_driver bcm63xx_enetsw_driver = {
+static struct platform_driver bcm63xx_enetsw_driver = {
        .probe  = bcm_enetsw_probe,
        .remove = bcm_enetsw_remove,
        .driver = {
index 5729a5a..712b559 100644 (file)
@@ -3421,12 +3421,9 @@ static int bnx2x_pkt_req_lin(struct bnx2x *bp, struct sk_buff *skb,
 
                        /* Headers length */
                        if (xmit_type & XMIT_GSO_ENC)
-                               hlen = (int)(skb_inner_transport_header(skb) -
-                                            skb->data) +
-                                            inner_tcp_hdrlen(skb);
+                               hlen = skb_inner_tcp_all_headers(skb);
                        else
-                               hlen = (int)(skb_transport_header(skb) -
-                                            skb->data) + tcp_hdrlen(skb);
+                               hlen = skb_tcp_all_headers(skb);
 
                        /* Amount of data (w/o headers) on linear part of SKB*/
                        first_bd_sz = skb_headlen(skb) - hlen;
@@ -3534,15 +3531,13 @@ static u8 bnx2x_set_pbd_csum_enc(struct bnx2x *bp, struct sk_buff *skb,
                        ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW_SHIFT) &
                        ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW;
 
-               return skb_inner_transport_header(skb) +
-                       inner_tcp_hdrlen(skb) - skb->data;
+               return skb_inner_tcp_all_headers(skb);
        }
 
        /* We support checksum offload for TCP and UDP only.
         * No need to pass the UDP header length - it's a constant.
         */
-       return skb_inner_transport_header(skb) +
-               sizeof(struct udphdr) - skb->data;
+       return skb_inner_transport_offset(skb) + sizeof(struct udphdr);
 }
 
 /**
@@ -3568,12 +3563,12 @@ static u8 bnx2x_set_pbd_csum_e2(struct bnx2x *bp, struct sk_buff *skb,
                        ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW_SHIFT) &
                        ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW;
 
-               return skb_transport_header(skb) + tcp_hdrlen(skb) - skb->data;
+               return skb_tcp_all_headers(skb);
        }
        /* We support checksum offload for TCP and UDP only.
         * No need to pass the UDP header length - it's a constant.
         */
-       return skb_transport_header(skb) + sizeof(struct udphdr) - skb->data;
+       return skb_transport_offset(skb) + sizeof(struct udphdr);
 }
 
 /* set FW indication according to inner or outer protocols if tunneled */
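
The UDP branches in bnx2x switch from pointer arithmetic to offsets using
the identity skb_transport_offset(skb) == skb_transport_header(skb) -
skb->data (and likewise for the inner variants). A sketch with a
hypothetical helper name:

	/* hypothetical helper; equivalent to the old
	 * skb_transport_header(skb) + sizeof(struct udphdr) - skb->data
	 */
	static inline int udp_all_headers(const struct sk_buff *skb)
	{
		return skb_transport_offset(skb) + sizeof(struct udphdr);
	}
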
index 56b46b8..c74b2e4 100644 (file)
@@ -535,12 +535,9 @@ normal_tx:
                u32 hdr_len;
 
                if (skb->encapsulation)
-                       hdr_len = skb_inner_network_offset(skb) +
-                               skb_inner_network_header_len(skb) +
-                               inner_tcp_hdrlen(skb);
+                       hdr_len = skb_inner_tcp_all_headers(skb);
                else
-                       hdr_len = skb_transport_offset(skb) +
-                               tcp_hdrlen(skb);
+                       hdr_len = skb_tcp_all_headers(skb);
 
                txbd1->tx_bd_hsize_lflags |= cpu_to_le32(TX_BD_FLAGS_LSO |
                                        TX_BD_FLAGS_T_IPID |
@@ -4480,7 +4477,7 @@ static void bnxt_free_ntp_fltrs(struct bnxt *bp, bool irq_reinit)
                }
        }
        if (irq_reinit) {
-               kfree(bp->ntp_fltr_bmap);
+               bitmap_free(bp->ntp_fltr_bmap);
                bp->ntp_fltr_bmap = NULL;
        }
        bp->ntp_fltr_count = 0;
@@ -4499,9 +4496,7 @@ static int bnxt_alloc_ntp_fltrs(struct bnxt *bp)
                INIT_HLIST_HEAD(&bp->ntp_fltr_hash_tbl[i]);
 
        bp->ntp_fltr_count = 0;
-       bp->ntp_fltr_bmap = kcalloc(BITS_TO_LONGS(BNXT_NTP_FLTR_MAX_FLTR),
-                                   sizeof(long),
-                                   GFP_KERNEL);
+       bp->ntp_fltr_bmap = bitmap_zalloc(BNXT_NTP_FLTR_MAX_FLTR, GFP_KERNEL);
 
        if (!bp->ntp_fltr_bmap)
                rc = -ENOMEM;
@@ -10658,7 +10653,7 @@ static void __bnxt_close_nic(struct bnxt *bp, bool irq_re_init,
        while (bnxt_drv_busy(bp))
                msleep(20);
 
-       /* Flush rings and and disable interrupts */
+       /* Flush rings and disable interrupts */
        bnxt_shutdown_nic(bp, irq_re_init);
 
        /* TODO CHIMP_FW: Link/PHY related cleanup if (link_re_init) */
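
bnxt -- like cnic, cxgb4 and hinic further down -- moves its open-coded
bitmap allocations to the dedicated API. Both forms allocate the same
zeroed storage; bitmap_zalloc()/bitmap_free() simply keep the
BITS_TO_LONGS arithmetic in one place and document intent:

	/* before */
	bmap = kcalloc(BITS_TO_LONGS(nbits), sizeof(long), GFP_KERNEL);
	kfree(bmap);

	/* after: same storage, clearer intent */
	bmap = bitmap_zalloc(nbits, GFP_KERNEL);
	bitmap_free(bmap);
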
index f7f10cf..e86503d 100644 (file)
@@ -660,7 +660,7 @@ static int cnic_init_id_tbl(struct cnic_id_tbl *id_tbl, u32 size, u32 start_id,
        id_tbl->max = size;
        id_tbl->next = next;
        spin_lock_init(&id_tbl->lock);
-       id_tbl->table = kcalloc(BITS_TO_LONGS(size), sizeof(long), GFP_KERNEL);
+       id_tbl->table = bitmap_zalloc(size, GFP_KERNEL);
        if (!id_tbl->table)
                return -ENOMEM;
 
@@ -669,7 +669,7 @@ static int cnic_init_id_tbl(struct cnic_id_tbl *id_tbl, u32 size, u32 start_id,
 
 static void cnic_free_id_tbl(struct cnic_id_tbl *id_tbl)
 {
-       kfree(id_tbl->table);
+       bitmap_free(id_tbl->table);
        id_tbl->table = NULL;
 }
 
index c28f8cc..db1e9d8 100644 (file)
@@ -7944,7 +7944,7 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
                iph = ip_hdr(skb);
                tcp_opt_len = tcp_optlen(skb);
 
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb) - ETH_HLEN;
+               hdr_len = skb_tcp_all_headers(skb) - ETH_HLEN;
 
                /* HW/FW can not correctly segment packets that have been
                 * vlan encapsulated.
index f6fe08d..29dd0f9 100644 (file)
@@ -2823,8 +2823,7 @@ bnad_txq_wi_prepare(struct bnad *bnad, struct bna_tcb *tcb,
                        BNAD_UPDATE_CTR(bnad, tx_skb_mss_too_long);
                        return -EINVAL;
                }
-               if (unlikely((gso_size + skb_transport_offset(skb) +
-                             tcp_hdrlen(skb)) >= skb->len)) {
+               if (unlikely((gso_size + skb_tcp_all_headers(skb)) >= skb->len)) {
                        txqent->hdr.wi.opcode = htons(BNA_TXQ_WI_SEND);
                        txqent->hdr.wi.lso_mss = 0;
                        BNAD_UPDATE_CTR(bnad, tx_skb_tso_too_short);
@@ -2872,8 +2871,7 @@ bnad_txq_wi_prepare(struct bnad *bnad, struct bna_tcb *tcb,
                                BNAD_UPDATE_CTR(bnad, tcpcsum_offload);
 
                                if (unlikely(skb_headlen(skb) <
-                                           skb_transport_offset(skb) +
-                                   tcp_hdrlen(skb))) {
+                                           skb_tcp_all_headers(skb))) {
                                        BNAD_UPDATE_CTR(bnad, tx_skb_tcp_hdr);
                                        return -EINVAL;
                                }
index d0ea8db..d5f7a9f 100644 (file)
@@ -2267,7 +2267,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
                        /* only queue eth + ip headers separately for UDP */
                        hdrlen = skb_transport_offset(skb);
                else
-                       hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
+                       hdrlen = skb_tcp_all_headers(skb);
                if (skb_headlen(skb) < hdrlen) {
                        netdev_err(bp->dev, "Error - LSO headers fragmented!!!\n");
                        /* if this is required, would need to copy to single buffer */
@@ -4600,6 +4600,40 @@ static int fu540_c000_init(struct platform_device *pdev)
        return macb_init(pdev);
 }
 
+static int init_reset_optional(struct platform_device *pdev)
+{
+       struct net_device *dev = platform_get_drvdata(pdev);
+       struct macb *bp = netdev_priv(dev);
+       int ret;
+
+       if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
+               /* Ensure PHY device used in SGMII mode is ready */
+               bp->sgmii_phy = devm_phy_optional_get(&pdev->dev, NULL);
+
+               if (IS_ERR(bp->sgmii_phy))
+                       return dev_err_probe(&pdev->dev, PTR_ERR(bp->sgmii_phy),
+                                            "failed to get SGMII PHY\n");
+
+               ret = phy_init(bp->sgmii_phy);
+               if (ret)
+                       return dev_err_probe(&pdev->dev, ret,
+                                            "failed to init SGMII PHY\n");
+       }
+
+       /* Fully reset controller at hardware level if mapped in device tree */
+       ret = device_reset_optional(&pdev->dev);
+       if (ret) {
+               phy_exit(bp->sgmii_phy);
+               return dev_err_probe(&pdev->dev, ret, "failed to reset controller");
+       }
+
+       ret = macb_init(pdev);
+       if (ret)
+               phy_exit(bp->sgmii_phy);
+
+       return ret;
+}
+
 static const struct macb_usrio_config sama7g5_usrio = {
        .mii = 0,
        .rmii = 1,
@@ -4626,8 +4660,8 @@ static const struct macb_config at91sam9260_config = {
 };
 
 static const struct macb_config sama5d3macb_config = {
-       .caps = MACB_CAPS_SG_DISABLED
-             | MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII,
+       .caps = MACB_CAPS_SG_DISABLED |
+               MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII,
        .clk_init = macb_clk_init,
        .init = macb_init,
        .usrio = &macb_default_usrio,
@@ -4658,8 +4692,8 @@ static const struct macb_config sama5d29_config = {
 };
 
 static const struct macb_config sama5d3_config = {
-       .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE
-             | MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII | MACB_CAPS_JUMBO,
+       .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE |
+               MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII | MACB_CAPS_JUMBO,
        .dma_burst_length = 16,
        .clk_init = macb_clk_init,
        .init = macb_init,
@@ -4689,55 +4723,13 @@ static const struct macb_config np4_config = {
        .usrio = &macb_default_usrio,
 };
 
-static int zynqmp_init(struct platform_device *pdev)
-{
-       struct net_device *dev = platform_get_drvdata(pdev);
-       struct macb *bp = netdev_priv(dev);
-       int ret;
-
-       if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
-               /* Ensure PS-GTR PHY device used in SGMII mode is ready */
-               bp->sgmii_phy = devm_phy_optional_get(&pdev->dev, NULL);
-
-               if (IS_ERR(bp->sgmii_phy)) {
-                       ret = PTR_ERR(bp->sgmii_phy);
-                       dev_err_probe(&pdev->dev, ret,
-                                     "failed to get PS-GTR PHY\n");
-                       return ret;
-               }
-
-               ret = phy_init(bp->sgmii_phy);
-               if (ret) {
-                       dev_err(&pdev->dev, "failed to init PS-GTR PHY: %d\n",
-                               ret);
-                       return ret;
-               }
-       }
-
-       /* Fully reset GEM controller at hardware level using zynqmp-reset driver,
-        * if mapped in device tree.
-        */
-       ret = device_reset_optional(&pdev->dev);
-       if (ret) {
-               dev_err_probe(&pdev->dev, ret, "failed to reset controller");
-               phy_exit(bp->sgmii_phy);
-               return ret;
-       }
-
-       ret = macb_init(pdev);
-       if (ret)
-               phy_exit(bp->sgmii_phy);
-
-       return ret;
-}
-
 static const struct macb_config zynqmp_config = {
        .caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE |
-                       MACB_CAPS_JUMBO |
-                       MACB_CAPS_GEM_HAS_PTP | MACB_CAPS_BD_RD_PREFETCH,
+               MACB_CAPS_JUMBO |
+               MACB_CAPS_GEM_HAS_PTP | MACB_CAPS_BD_RD_PREFETCH,
        .dma_burst_length = 16,
        .clk_init = macb_clk_init,
-       .init = zynqmp_init,
+       .init = init_reset_optional,
        .jumbo_max_len = 10240,
        .usrio = &macb_default_usrio,
 };
@@ -4751,6 +4743,17 @@ static const struct macb_config zynq_config = {
        .usrio = &macb_default_usrio,
 };
 
+static const struct macb_config mpfs_config = {
+       .caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE |
+               MACB_CAPS_JUMBO |
+               MACB_CAPS_GEM_HAS_PTP,
+       .dma_burst_length = 16,
+       .clk_init = macb_clk_init,
+       .init = init_reset_optional,
+       .usrio = &macb_default_usrio,
+       .jumbo_max_len = 10240,
+};
+
 static const struct macb_config sama7g5_gem_config = {
        .caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_CLK_HW_CHG |
                MACB_CAPS_MIIONRGMII,
@@ -4787,6 +4790,7 @@ static const struct of_device_id macb_dt_ids[] = {
        { .compatible = "cdns,zynqmp-gem", .data = &zynqmp_config},
        { .compatible = "cdns,zynq-gem", .data = &zynq_config },
        { .compatible = "sifive,fu540-c000-gem", .data = &fu540_c000_config },
+       { .compatible = "microchip,mpfs-macb", .data = &mpfs_config },
        { .compatible = "microchip,sama7g5-gem", .data = &sama7g5_gem_config },
        { .compatible = "microchip,sama7g5-emac", .data = &sama7g5_emac_config },
        { /* sentinel */ }
@@ -4796,8 +4800,8 @@ MODULE_DEVICE_TABLE(of, macb_dt_ids);
 
 static const struct macb_config default_gem_config = {
        .caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE |
-                       MACB_CAPS_JUMBO |
-                       MACB_CAPS_GEM_HAS_PTP,
+               MACB_CAPS_JUMBO |
+               MACB_CAPS_GEM_HAS_PTP,
        .dma_burst_length = 16,
        .clk_init = macb_clk_init,
        .init = macb_init,
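
The macb rework renames zynqmp_init() to init_reset_optional() so the new
mpfs_config can share the SGMII-PHY-plus-optional-reset sequence instead of
duplicating it. It also folds error logging into dev_err_probe(), which
returns the error passed to it and, for -EPROBE_DEFER, records the deferral
reason instead of spamming the log. The pattern, as used above:

	ret = phy_init(bp->sgmii_phy);
	if (ret)
		/* logs (or records the deferral reason) and returns ret */
		return dev_err_probe(&pdev->dev, ret,
				     "failed to init SGMII PHY\n");
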
index 4367edb..06397cc 100644 (file)
@@ -1261,7 +1261,7 @@ int nicvf_xdp_sq_append_pkt(struct nicvf *nic, struct snd_queue *sq,
 static int nicvf_tso_count_subdescs(struct sk_buff *skb)
 {
        struct skb_shared_info *sh = skb_shinfo(skb);
-       unsigned int sh_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       unsigned int sh_len = skb_tcp_all_headers(skb);
        unsigned int data_len = skb->len - sh_len;
        unsigned int p_len = sh->gso_size;
        long f_id = -1;    /* id of the current fragment */
@@ -1382,7 +1382,7 @@ nicvf_sq_add_hdr_subdesc(struct nicvf *nic, struct snd_queue *sq, int qentry,
 
        if (nic->hw_tso && skb_shinfo(skb)->gso_size) {
                hdr->tso = 1;
-               hdr->tso_start = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr->tso_start = skb_tcp_all_headers(skb);
                hdr->tso_max_paysize = skb_shinfo(skb)->gso_size;
                /* For non-tunneled pkts, point this to L2 ethertype */
                hdr->inner_l3_offset = skb_network_offset(skb) - 2;
index 4a872f3..7d52048 100644 (file)
@@ -85,7 +85,7 @@ static void cxgb4_dcb_cleanup_apps(struct net_device *dev)
 
                if (err) {
                        dev_err(adap->pdev_dev,
-                               "Failed DCB Clear %s Application Priority: sel=%d, prot=%d, err=%d\n",
+                               "Failed DCB Clear %s Application Priority: sel=%d, prot=%d, err=%d\n",
                                dcb_ver_array[dcb->dcb_version], app.selector,
                                app.protocol, -err);
                        break;
index 7d49fd4..14e0d98 100644 (file)
@@ -3429,18 +3429,18 @@ static ssize_t blocked_fl_write(struct file *filp, const char __user *ubuf,
        unsigned long *t;
        struct adapter *adap = filp->private_data;
 
-       t = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz), sizeof(long), GFP_KERNEL);
+       t = bitmap_zalloc(adap->sge.egr_sz, GFP_KERNEL);
        if (!t)
                return -ENOMEM;
 
        err = bitmap_parse_user(ubuf, count, t, adap->sge.egr_sz);
        if (err) {
-               kfree(t);
+               bitmap_free(t);
                return err;
        }
 
        bitmap_copy(adap->sge.blocked_fl, t, adap->sge.egr_sz);
-       kfree(t);
+       bitmap_free(t);
        return count;
 }
 
index 6c790af..77897ed 100644 (file)
@@ -2227,7 +2227,7 @@ void cxgb4_cleanup_ethtool_filters(struct adapter *adap)
        if (eth_filter_info) {
                for (i = 0; i < adap->params.nports; i++) {
                        kvfree(eth_filter_info[i].loc_array);
-                       kfree(eth_filter_info[i].bmap);
+                       bitmap_free(eth_filter_info[i].bmap);
                }
                kfree(eth_filter_info);
        }
@@ -2270,9 +2270,7 @@ int cxgb4_init_ethtool_filters(struct adapter *adap)
                        goto free_eth_finfo;
                }
 
-               eth_filter->port[i].bmap = kcalloc(BITS_TO_LONGS(nentries),
-                                                  sizeof(unsigned long),
-                                                  GFP_KERNEL);
+               eth_filter->port[i].bmap = bitmap_zalloc(nentries, GFP_KERNEL);
                if (!eth_filter->port[i].bmap) {
                        ret = -ENOMEM;
                        goto free_eth_finfo;
@@ -2284,7 +2282,7 @@ int cxgb4_init_ethtool_filters(struct adapter *adap)
 
 free_eth_finfo:
        while (i-- > 0) {
-               kfree(eth_filter->port[i].bmap);
+               bitmap_free(eth_filter->port[i].bmap);
                kvfree(eth_filter->port[i].loc_array);
        }
        kfree(eth_filter_info);
index 0c78c0d..d006192 100644 (file)
@@ -5047,28 +5047,24 @@ static int adap_init0(struct adapter *adap, int vpd_skip)
        /* Allocate the memory for the various egress queue bitmaps
         * i.e. starving_fl, txq_maperr and blocked_fl.
         */
-       adap->sge.starving_fl = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),
-                                       sizeof(long), GFP_KERNEL);
+       adap->sge.starving_fl = bitmap_zalloc(adap->sge.egr_sz, GFP_KERNEL);
        if (!adap->sge.starving_fl) {
                ret = -ENOMEM;
                goto bye;
        }
 
-       adap->sge.txq_maperr = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),
-                                      sizeof(long), GFP_KERNEL);
+       adap->sge.txq_maperr = bitmap_zalloc(adap->sge.egr_sz, GFP_KERNEL);
        if (!adap->sge.txq_maperr) {
                ret = -ENOMEM;
                goto bye;
        }
 
 #ifdef CONFIG_DEBUG_FS
-       adap->sge.blocked_fl = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),
-                                      sizeof(long), GFP_KERNEL);
+       adap->sge.blocked_fl = bitmap_zalloc(adap->sge.egr_sz, GFP_KERNEL);
        if (!adap->sge.blocked_fl) {
                ret = -ENOMEM;
                goto bye;
        }
-       bitmap_zero(adap->sge.blocked_fl, adap->sge.egr_sz);
 #endif
 
        params[0] = FW_PARAM_PFVF(CLIP_START);
@@ -5417,10 +5413,10 @@ bye:
        adap_free_hma_mem(adap);
        kfree(adap->sge.egr_map);
        kfree(adap->sge.ingr_map);
-       kfree(adap->sge.starving_fl);
-       kfree(adap->sge.txq_maperr);
+       bitmap_free(adap->sge.starving_fl);
+       bitmap_free(adap->sge.txq_maperr);
 #ifdef CONFIG_DEBUG_FS
-       kfree(adap->sge.blocked_fl);
+       bitmap_free(adap->sge.blocked_fl);
 #endif
        if (ret != -ETIMEDOUT && ret != -EIO)
                t4_fw_bye(adap, adap->mbox);
@@ -5854,8 +5850,7 @@ static int alloc_msix_info(struct adapter *adap, u32 num_vec)
        if (!msix_info)
                return -ENOMEM;
 
-       adap->msix_bmap.msix_bmap = kcalloc(BITS_TO_LONGS(num_vec),
-                                           sizeof(long), GFP_KERNEL);
+       adap->msix_bmap.msix_bmap = bitmap_zalloc(num_vec, GFP_KERNEL);
        if (!adap->msix_bmap.msix_bmap) {
                kfree(msix_info);
                return -ENOMEM;
@@ -5870,7 +5865,7 @@ static int alloc_msix_info(struct adapter *adap, u32 num_vec)
 
 static void free_msix_info(struct adapter *adap)
 {
-       kfree(adap->msix_bmap.msix_bmap);
+       bitmap_free(adap->msix_bmap.msix_bmap);
        kfree(adap->msix_info);
 }
 
@@ -6189,10 +6184,10 @@ static void free_some_resources(struct adapter *adapter)
        cxgb4_cleanup_ethtool_filters(adapter);
        kfree(adapter->sge.egr_map);
        kfree(adapter->sge.ingr_map);
-       kfree(adapter->sge.starving_fl);
-       kfree(adapter->sge.txq_maperr);
+       bitmap_free(adapter->sge.starving_fl);
+       bitmap_free(adapter->sge.txq_maperr);
 #ifdef CONFIG_DEBUG_FS
-       kfree(adapter->sge.blocked_fl);
+       bitmap_free(adapter->sge.blocked_fl);
 #endif
        disable_msi(adapter);
 
index f889f40..ee52e3b 100644 (file)
@@ -1531,7 +1531,7 @@ static netdev_tx_t cxgb4_eth_xmit(struct sk_buff *skb, struct net_device *dev)
 
 #if IS_ENABLED(CONFIG_CHELSIO_TLS_DEVICE)
        if (cxgb4_is_ktls_skb(skb) &&
-           (skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb))))
+           (skb->len - skb_tcp_all_headers(skb)))
                return adap->uld[CXGB4_ULD_KTLS].tx_handler(skb, dev);
 #endif /* CHELSIO_TLS_DEVICE */
 
index 7de3800..c2822e6 100644 (file)
@@ -2859,7 +2859,7 @@ static const struct net_device_ops cxgb4vf_netdev_ops     = {
  *                             address stored on the adapter
  *     @adapter: The adapter
  *
- *     Find the the port mask for the VF based on the index of mac
+ *     Find the port mask for the VF based on the index of mac
  *     address stored in the adapter. If no mac address is stored on
  *     the adapter for the VF, use the port mask received from the
  *     firmware.
index d546993..1c52592 100644 (file)
@@ -877,7 +877,7 @@ int t4vf_get_sge_params(struct adapter *adapter)
 
        /* T4 uses a single control field to specify both the PCIe Padding and
         * Packing Boundary.  T5 introduced the ability to specify these
-        * separately with the Padding Boundary in SGE_CONTROL and and Packing
+        * separately with the Padding Boundary in SGE_CONTROL and Packing
         * Boundary in SGE_CONTROL2.  So for T5 and later we need to grab
         * SGE_CONTROL in order to determine how ingress packet data will be
         * laid out in Packed Buffer Mode.  Unfortunately, older versions of
index 60b648b..bfee0e4 100644 (file)
@@ -1012,7 +1012,7 @@ chcr_ktls_write_tcp_options(struct chcr_ktls_info *tx_info, struct sk_buff *skb,
        /* packet length = eth hdr len + ip hdr len + tcp hdr len
         * (including options).
         */
-       pktlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       pktlen = skb_tcp_all_headers(skb);
 
        ctrl = sizeof(*cpl) + pktlen;
        len16 = DIV_ROUND_UP(sizeof(*wr) + ctrl, 16);
@@ -1907,7 +1907,7 @@ static int chcr_ktls_sw_fallback(struct sk_buff *skb,
                return 0;
 
        th = tcp_hdr(nskb);
-       skb_offset =  skb_transport_offset(nskb) + tcp_hdrlen(nskb);
+       skb_offset = skb_tcp_all_headers(nskb);
        data_len = nskb->len - skb_offset;
        skb_tx_timestamp(nskb);
 
@@ -1938,7 +1938,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
        unsigned long flags;
 
        tcp_seq = ntohl(th->seq);
-       skb_offset = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       skb_offset = skb_tcp_all_headers(skb);
        skb_data_len = skb->len - skb_offset;
        data_len = skb_data_len;
 
index 1c81b16..372fb7b 100644 (file)
@@ -680,11 +680,10 @@ static int enic_queue_wq_skb_tso(struct enic *enic, struct vnic_wq *wq,
        skb_frag_t *frag;
 
        if (skb->encapsulation) {
-               hdr_len = skb_inner_transport_header(skb) - skb->data;
-               hdr_len += inner_tcp_hdrlen(skb);
+               hdr_len = skb_inner_tcp_all_headers(skb);
                enic_preload_tcp_csum_encap(skb);
        } else {
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
                enic_preload_tcp_csum(skb);
        }
 
index cd4e243..414362f 100644 (file)
@@ -737,9 +737,9 @@ void be_link_status_update(struct be_adapter *adapter, u8 link_status)
 static int be_gso_hdr_len(struct sk_buff *skb)
 {
        if (skb->encapsulation)
-               return skb_inner_transport_offset(skb) +
-                      inner_tcp_hdrlen(skb);
-       return skb_transport_offset(skb) + tcp_hdrlen(skb);
+               return skb_inner_tcp_all_headers(skb);
+
+       return skb_tcp_all_headers(skb);
 }
 
 static void be_tx_stats_update(struct be_tx_obj *txo, struct sk_buff *skb)
@@ -3178,7 +3178,7 @@ static irqreturn_t be_intx(int irq, void *dev)
        }
        be_eq_notify(adapter, eqo->q.id, false, true, num_evts, 0);
 
-       /* Return IRQ_HANDLED only for the the first spurious intr
+       /* Return IRQ_HANDLED only for the first spurious intr
         * after a valid intr to stop the kernel from branding
         * this irq as a bad one!
         */
index a902751..e8e2aa1 100644 (file)
@@ -691,7 +691,7 @@ fec_enet_txq_put_hdr_tso(struct fec_enet_priv_tx_q *txq,
                         struct bufdesc *bdp, int index)
 {
        struct fec_enet_private *fep = netdev_priv(ndev);
-       int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       int hdr_len = skb_tcp_all_headers(skb);
        struct bufdesc_ex *ebdp = container_of(bdp, struct bufdesc_ex, desc);
        void *bufaddr;
        unsigned long dmabuf;
index 5ff2634..cb419ae 100644 (file)
@@ -201,7 +201,7 @@ void fs_enet_platform_cleanup(void);
 
 /* access macros */
 #if defined(CONFIG_CPM1)
-/* for a CPM1 __raw_xxx's are sufficient */
+/* for a CPM1 __raw_xxx's are sufficient */
 #define __cbd_out32(addr, x)   __raw_writel(x, addr)
 #define __cbd_out16(addr, x)   __raw_writew(x, addr)
 #define __cbd_in32(addr)       __raw_readl(addr)
index 3dc9369..e7bf152 100644 (file)
@@ -1944,6 +1944,7 @@ static netdev_tx_t gfar_start_xmit(struct sk_buff *skb, struct net_device *dev)
                lstatus |= BD_LFLAG(TXBD_CRC | TXBD_READY) | skb_headlen(skb);
        }
 
+       skb_tx_timestamp(skb);
        netdev_tx_sent_queue(txq, bytes_sent);
 
        gfar_wmb();
index 9a2c16d..81fb687 100644 (file)
@@ -1457,6 +1457,7 @@ static int gfar_get_ts_info(struct net_device *dev,
 
        if (!(priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER)) {
                info->so_timestamping = SOF_TIMESTAMPING_RX_SOFTWARE |
+                                       SOF_TIMESTAMPING_TX_SOFTWARE |
                                        SOF_TIMESTAMPING_SOFTWARE;
                return 0;
        }
@@ -1474,7 +1475,10 @@ static int gfar_get_ts_info(struct net_device *dev,
 
        info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
                                SOF_TIMESTAMPING_RX_HARDWARE |
-                               SOF_TIMESTAMPING_RAW_HARDWARE;
+                               SOF_TIMESTAMPING_RAW_HARDWARE |
+                               SOF_TIMESTAMPING_RX_SOFTWARE |
+                               SOF_TIMESTAMPING_TX_SOFTWARE |
+                               SOF_TIMESTAMPING_SOFTWARE;
        info->tx_types = (1 << HWTSTAMP_TX_OFF) |
                         (1 << HWTSTAMP_TX_ON);
        info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
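
The two gianfar hunks pair up: gfar_start_xmit() now calls
skb_tx_timestamp() before queueing the frame, and get_ts_info() advertises
the software timestamping bits that call enables -- advertising
SOF_TIMESTAMPING_TX_SOFTWARE without the driver-side call would mislead
userspace. A minimal sketch of the pairing, using names from the hunks:

	/* xmit path: take the software TX timestamp, if requested */
	skb_tx_timestamp(skb);

	/* get_ts_info(): only then advertise the capability */
	info->so_timestamping |= SOF_TIMESTAMPING_TX_SOFTWARE |
				 SOF_TIMESTAMPING_RX_SOFTWARE |
				 SOF_TIMESTAMPING_SOFTWARE;
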
index 257203e..f218196 100644 (file)
@@ -442,6 +442,7 @@ enum fun_port_lane_attr {
 };
 
 enum fun_admin_port_subop {
+       FUN_ADMIN_PORT_SUBOP_XCVR_READ = 0x23,
        FUN_ADMIN_PORT_SUBOP_INETADDR_EVENT = 0x24,
 };
 
@@ -595,6 +596,19 @@ struct fun_admin_port_req {
 
                        struct fun_admin_read48_req read48[];
                } read;
+               struct fun_admin_port_xcvr_read_req {
+                       u8 subop;
+                       u8 rsvd0;
+                       __be16 flags;
+                       __be32 id;
+
+                       u8 bank;
+                       u8 page;
+                       u8 offset;
+                       u8 length;
+                       u8 dev_addr;
+                       u8 rsvd1[3];
+               } xcvr_read;
                struct fun_admin_port_inetaddr_event_req {
                        __u8 subop;
                        __u8 rsvd0;
@@ -625,6 +639,15 @@ struct fun_admin_port_req {
                .id = cpu_to_be32(_id),                          \
        }
 
+#define FUN_ADMIN_PORT_XCVR_READ_REQ_INIT(_flags, _id, _bank, _page,   \
+                                         _offset, _length, _dev_addr) \
+       ((struct fun_admin_port_xcvr_read_req) {                       \
+               .subop = FUN_ADMIN_PORT_SUBOP_XCVR_READ,               \
+               .flags = cpu_to_be16(_flags), .id = cpu_to_be32(_id),  \
+               .bank = (_bank), .page = (_page), .offset = (_offset), \
+               .length = (_length), .dev_addr = (_dev_addr),          \
+       })
+
 struct fun_admin_port_rsp {
        struct fun_admin_rsp_common common;
 
@@ -659,6 +682,23 @@ struct fun_admin_port_rsp {
        } u;
 };
 
+struct fun_admin_port_xcvr_read_rsp {
+       struct fun_admin_rsp_common common;
+
+       u8 subop;
+       u8 rsvd0[3];
+       __be32 id;
+
+       u8 bank;
+       u8 page;
+       u8 offset;
+       u8 length;
+       u8 dev_addr;
+       u8 rsvd1[3];
+
+       u8 data[128];
+};
+
 enum fun_xcvr_type {
        FUN_XCVR_BASET = 0x0,
        FUN_XCVR_CU = 0x1,
index d081168..31aa185 100644 (file)
@@ -78,6 +78,7 @@ static const char * const txq_stat_names[] = {
        "tx_cso",
        "tx_tso",
        "tx_encapsulated_tso",
+       "tx_uso",
        "tx_more",
        "tx_queue_stops",
        "tx_queue_restarts",
@@ -778,6 +779,7 @@ static void fun_get_ethtool_stats(struct net_device *netdev,
                ADD_STAT(txs.tx_cso);
                ADD_STAT(txs.tx_tso);
                ADD_STAT(txs.tx_encap_tso);
+               ADD_STAT(txs.tx_uso);
                ADD_STAT(txs.tx_more);
                ADD_STAT(txs.tx_nstops);
                ADD_STAT(txs.tx_nrestarts);
@@ -1116,6 +1118,39 @@ static int fun_set_fecparam(struct net_device *netdev,
        return fun_port_write_cmd(fp, FUN_ADMIN_PORT_KEY_FEC, fec_mode);
 }
 
+static int fun_get_port_module_page(struct net_device *netdev,
+                                   const struct ethtool_module_eeprom *req,
+                                   struct netlink_ext_ack *extack)
+{
+       union {
+               struct fun_admin_port_req req;
+               struct fun_admin_port_xcvr_read_rsp rsp;
+       } cmd;
+       struct funeth_priv *fp = netdev_priv(netdev);
+       int rc;
+
+       if (fp->port_caps & FUN_PORT_CAP_VPORT) {
+               NL_SET_ERR_MSG_MOD(extack,
+                                  "Specified port is virtual, only physical ports have modules");
+               return -EOPNOTSUPP;
+       }
+
+       cmd.req.common = FUN_ADMIN_REQ_COMMON_INIT2(FUN_ADMIN_OP_PORT,
+                                                   sizeof(cmd.req));
+       cmd.req.u.xcvr_read =
+               FUN_ADMIN_PORT_XCVR_READ_REQ_INIT(0, netdev->dev_port,
+                                                 req->bank, req->page,
+                                                 req->offset, req->length,
+                                                 req->i2c_address);
+       rc = fun_submit_admin_sync_cmd(fp->fdev, &cmd.req.common, &cmd.rsp,
+                                      sizeof(cmd.rsp), 0);
+       if (rc)
+               return rc;
+
+       memcpy(req->data, cmd.rsp.data, req->length);
+       return req->length;
+}
+
 static const struct ethtool_ops fun_ethtool_ops = {
        .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
                                     ETHTOOL_COALESCE_MAX_FRAMES,
@@ -1154,6 +1189,7 @@ static const struct ethtool_ops fun_ethtool_ops = {
        .get_eth_mac_stats   = fun_get_802_3_stats,
        .get_eth_ctrl_stats  = fun_get_802_3_ctrl_stats,
        .get_rmon_stats      = fun_get_rmon_stats,
+       .get_module_eeprom_by_page = fun_get_port_module_page,
 };
 
 void fun_set_ethtool_ops(struct net_device *netdev)
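
fun_get_port_module_page() follows the get_module_eeprom_by_page contract:
copy req->length bytes into req->data and return the number of bytes read,
or a negative errno (extack carries the human-readable reason back over
netlink). With an ethtool binary new enough to use the netlink paged-read
interface this backs commands of roughly the following form (the exact
option syntax is an assumption about the installed ethtool version):

	# ethtool -m <ifname> page 0x00 offset 0 length 32
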
index 9485cf6..f247b7a 100644 (file)
@@ -1357,7 +1357,8 @@ static const struct net_device_ops fun_netdev_ops = {
 #define GSO_ENCAP_FLAGS (NETIF_F_GSO_GRE | NETIF_F_GSO_IPXIP4 | \
                         NETIF_F_GSO_IPXIP6 | NETIF_F_GSO_UDP_TUNNEL | \
                         NETIF_F_GSO_UDP_TUNNEL_CSUM)
-#define TSO_FLAGS (NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_TSO_ECN)
+#define TSO_FLAGS (NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_TSO_ECN | \
+                  NETIF_F_GSO_UDP_L4)
 #define VLAN_FEAT (NETIF_F_SG | NETIF_F_HW_CSUM | TSO_FLAGS | \
                   GSO_ENCAP_FLAGS | NETIF_F_HIGHDMA)
 
index ff6e292..a97e3af 100644 (file)
@@ -83,7 +83,7 @@ static struct sk_buff *fun_tls_tx(struct sk_buff *skb, struct funeth_txq *q,
        const struct fun_ktls_tx_ctx *tls_ctx;
        u32 datalen, seq;
 
-       datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+       datalen = skb->len - skb_tcp_all_headers(skb);
        if (!datalen)
                return skb;
 
@@ -130,6 +130,7 @@ static unsigned int write_pkt_desc(struct sk_buff *skb, struct funeth_txq *q,
        struct fun_dataop_gl *gle;
        const struct tcphdr *th;
        unsigned int ngle, i;
+       unsigned int l4_hlen;
        u16 flags;
 
        if (unlikely(map_skb(skb, q->dma_dev, addrs, lens))) {
@@ -178,6 +179,7 @@ static unsigned int write_pkt_desc(struct sk_buff *skb, struct funeth_txq *q,
                                                 FUN_ETH_UPDATE_INNER_L3_LEN;
                        }
                        th = inner_tcp_hdr(skb);
+                       l4_hlen = __tcp_hdrlen(th);
                        fun_eth_offload_init(&req->offload, flags,
                                             shinfo->gso_size,
                                             tcp_hdr_doff_flags(th), 0,
@@ -185,6 +187,24 @@ static unsigned int write_pkt_desc(struct sk_buff *skb, struct funeth_txq *q,
                                             skb_inner_transport_offset(skb),
                                             skb_network_offset(skb), ol4_ofst);
                        FUN_QSTAT_INC(q, tx_encap_tso);
+               } else if (shinfo->gso_type & SKB_GSO_UDP_L4) {
+                       flags = FUN_ETH_INNER_LSO | FUN_ETH_INNER_UDP |
+                               FUN_ETH_UPDATE_INNER_L4_CKSUM |
+                               FUN_ETH_UPDATE_INNER_L4_LEN |
+                               FUN_ETH_UPDATE_INNER_L3_LEN;
+
+                       if (ip_hdr(skb)->version == 4)
+                               flags |= FUN_ETH_UPDATE_INNER_L3_CKSUM;
+                       else
+                               flags |= FUN_ETH_INNER_IPV6;
+
+                       l4_hlen = sizeof(struct udphdr);
+                       fun_eth_offload_init(&req->offload, flags,
+                                            shinfo->gso_size,
+                                            cpu_to_be16(l4_hlen << 10), 0,
+                                            skb_network_offset(skb),
+                                            skb_transport_offset(skb), 0, 0);
+                       FUN_QSTAT_INC(q, tx_uso);
                } else {
                        /* HW considers one set of headers as inner */
                        flags = FUN_ETH_INNER_LSO |
@@ -195,6 +215,7 @@ static unsigned int write_pkt_desc(struct sk_buff *skb, struct funeth_txq *q,
                        else
                                flags |= FUN_ETH_UPDATE_INNER_L3_CKSUM;
                        th = tcp_hdr(skb);
+                       l4_hlen = __tcp_hdrlen(th);
                        fun_eth_offload_init(&req->offload, flags,
                                             shinfo->gso_size,
                                             tcp_hdr_doff_flags(th), 0,
@@ -209,7 +230,7 @@ static unsigned int write_pkt_desc(struct sk_buff *skb, struct funeth_txq *q,
 
                extra_pkts = shinfo->gso_segs - 1;
                extra_bytes = (be16_to_cpu(req->offload.inner_l4_off) +
-                              __tcp_hdrlen(th)) * extra_pkts;
+                              l4_hlen) * extra_pkts;
        } else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
                flags = FUN_ETH_UPDATE_INNER_L4_CKSUM;
                if (skb->csum_offset == offsetof(struct udphdr, check))
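
The funeth TX hunks add UDP segmentation offload (SKB_GSO_UDP_L4,
advertised through NETIF_F_GSO_UDP_L4 in TSO_FLAGS above). The new l4_hlen
local generalises the wire-length accounting: every extra segment repeats
the headers up to and including the L4 header, whose length is variable
for TCP but fixed at 8 bytes for UDP. Condensed from the hunk above:

	/* L4 header length by GSO type */
	l4_hlen = (shinfo->gso_type & SKB_GSO_UDP_L4) ?
		  sizeof(struct udphdr) : __tcp_hdrlen(th);

	extra_pkts  = shinfo->gso_segs - 1;
	extra_bytes = (be16_to_cpu(req->offload.inner_l4_off) + l4_hlen) *
		      extra_pkts;
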
index 04c9f91..1711f82 100644 (file)
@@ -82,6 +82,7 @@ struct funeth_txq_stats {  /* per Tx queue SW counters */
        u64 tx_cso;        /* # of packets with checksum offload */
        u64 tx_tso;        /* # of non-encapsulated TSO super-packets */
        u64 tx_encap_tso;  /* # of encapsulated TSO super-packets */
+       u64 tx_uso;        /* # of non-encapsulated UDP LSO super-packets */
        u64 tx_more;       /* # of DBs elided due to xmit_more */
        u64 tx_nstops;     /* # of times the queue has stopped */
        u64 tx_nrestarts;  /* # of times the queue has restarted */
index ec394d9..588d648 100644 (file)
@@ -386,7 +386,7 @@ static int gve_prep_tso(struct sk_buff *skb)
                                     (__force __wsum)htonl(paylen));
 
                /* Compute length of segmentation header. */
-               header_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               header_len = skb_tcp_all_headers(skb);
                break;
        default:
                return -EINVAL;
@@ -598,9 +598,9 @@ static int gve_num_buffer_descs_needed(const struct sk_buff *skb)
  */
 static bool gve_can_send_tso(const struct sk_buff *skb)
 {
-       const int header_len = skb_checksum_start_offset(skb) + tcp_hdrlen(skb);
        const int max_bufs_per_seg = GVE_TX_MAX_DATA_DESCS - 1;
        const struct skb_shared_info *shinfo = skb_shinfo(skb);
+       const int header_len = skb_tcp_all_headers(skb);
        const int gso_size = shinfo->gso_size;
        int cur_seg_num_bufs;
        int cur_seg_size;
@@ -795,7 +795,7 @@ static void gve_handle_packet_completion(struct gve_priv *priv,
                             GVE_PACKET_STATE_PENDING_REINJECT_COMPL)) {
                        /* No outstanding miss completion but packet allocated
                         * implies packet receives a re-injection completion
-                        * without a prior miss completion. Return without
+                        * without a prior miss completion. Return without
                         * completing the packet.
                         */
                        net_err_ratelimited("%s: Re-injection completion received without corresponding miss completion: %d\n",
index 2f0bd21..d94cc8c 100644 (file)
@@ -31,8 +31,6 @@
 #define HNS_BUFFER_SIZE_2048 2048
 
 #define BD_MAX_SEND_SIZE 8191
-#define SKB_TMP_LEN(SKB) \
-       (((SKB)->transport_header - (SKB)->mac_header) + tcp_hdrlen(SKB))
 
 static void fill_v2_desc_hw(struct hnae_ring *ring, void *priv, int size,
                            int send_sz, dma_addr_t dma, int frag_end,
@@ -94,7 +92,7 @@ static void fill_v2_desc_hw(struct hnae_ring *ring, void *priv, int size,
                                                     HNSV2_TXD_TSE_B, 1);
                                        l4_len = tcp_hdrlen(skb);
                                        mss = skb_shinfo(skb)->gso_size;
-                                       paylen = skb->len - SKB_TMP_LEN(skb);
+                                       paylen = skb->len - skb_tcp_all_headers(skb);
                                }
                        } else if (skb->protocol == htons(ETH_P_IPV6)) {
                                hnae_set_bit(tvsvsn, HNSV2_TXD_IPV6_B, 1);
@@ -108,7 +106,7 @@ static void fill_v2_desc_hw(struct hnae_ring *ring, void *priv, int size,
                                                     HNSV2_TXD_TSE_B, 1);
                                        l4_len = tcp_hdrlen(skb);
                                        mss = skb_shinfo(skb)->gso_size;
-                                       paylen = skb->len - SKB_TMP_LEN(skb);
+                                       paylen = skb->len - skb_tcp_all_headers(skb);
                                }
                        }
                        desc->tx.ip_offset = ip_offset;
index ae56306..35d7004 100644 (file)
@@ -1838,9 +1838,9 @@ static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size,
 static unsigned int hns3_gso_hdr_len(struct sk_buff *skb)
 {
        if (!skb->encapsulation)
-               return skb_transport_offset(skb) + tcp_hdrlen(skb);
+               return skb_tcp_all_headers(skb);
 
-       return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
+       return skb_inner_tcp_all_headers(skb);
 }
 
 /* HW need every continuous max_non_tso_bd_num buffer data to be larger
index 5153e5d..b8a1ecb 100644 (file)
@@ -37,8 +37,7 @@ DECLARE_EVENT_CLASS(hns3_skb_template,
                __entry->gso_segs = skb_shinfo(skb)->gso_segs;
                __entry->gso_type = skb_shinfo(skb)->gso_type;
                __entry->hdr_len = skb->encapsulation ?
-               skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb) :
-               skb_transport_offset(skb) + tcp_hdrlen(skb);
+               skb_inner_tcp_all_headers(skb) : skb_tcp_all_headers(skb);
                __entry->ip_summed = skb->ip_summed;
                __entry->fraglist = skb_has_frag_list(skb);
                hns3_shinfo_pack(skb_shinfo(skb), __entry->size);
index 5eaf09e..26f8733 100644 (file)
@@ -979,7 +979,7 @@ static int hclgevf_update_mac_list(struct hnae3_handle *handle,
 
        /* if the mac addr is already in the mac list, no need to add a new
         * one into it, just check the mac addr state, convert it to a new
-        * new state, or just remove it, or do nothing.
+        * state, or just remove it, or do nothing.
         */
        mac_node = hclgevf_find_mac_node(list, addr);
        if (mac_node) {
index 07fdab5..c2ae1b4 100644 (file)
@@ -174,7 +174,7 @@ static int hns_mdio_wait_ready(struct mii_bus *bus)
        u32 cmd_reg_value;
        int i;
 
-       /* waitting for MDIO_COMMAND_REG 's mdio_start==0 */
+       /* waiting for MDIO_COMMAND_REG's mdio_start==0 */
        /* after that can do read or write*/
        for (i = 0; i < MDIO_TIMEOUT; i++) {
                cmd_reg_value = MDIO_GET_REG_BIT(mdio_dev,
@@ -319,7 +319,7 @@ static int hns_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
                                   MDIO_C45_READ, phy_id, devad);
        }
 
-       /* Step 5: waitting for MDIO_COMMAND_REG 's mdio_start==0,*/
+       /* Step 5: waiting for MDIO_COMMAND_REG's mdio_start==0,*/
        /* check for read or write opt is finished */
        ret = hns_mdio_wait_ready(bus);
        if (ret) {
index fb3e891..a4fbf44 100644 (file)
@@ -95,9 +95,6 @@ struct hinic_dev {
        u16                             sq_depth;
        u16                             rq_depth;
 
-       struct hinic_txq_stats          tx_stats;
-       struct hinic_rxq_stats          rx_stats;
-
        u8                              rss_tmpl_idx;
        u8                              rss_hash_engine;
        u16                             num_rss;
index 0532929..c23ee2d 100644 (file)
@@ -62,8 +62,6 @@ MODULE_PARM_DESC(rx_weight, "Number Rx packets for NAPI budget (default=64)");
 
 #define HINIC_LRO_RX_TIMER_DEFAULT     16
 
-#define VLAN_BITMAP_SIZE(nic_dev)       (ALIGN(VLAN_N_VID, 8) / 8)
-
 #define work_to_rx_mode_work(work)      \
                container_of(work, struct hinic_rx_mode_work, work)
 
@@ -82,56 +80,44 @@ static int set_features(struct hinic_dev *nic_dev,
                        netdev_features_t pre_features,
                        netdev_features_t features, bool force_change);
 
-static void update_rx_stats(struct hinic_dev *nic_dev, struct hinic_rxq *rxq)
+static void gather_rx_stats(struct hinic_rxq_stats *nic_rx_stats, struct hinic_rxq *rxq)
 {
-       struct hinic_rxq_stats *nic_rx_stats = &nic_dev->rx_stats;
        struct hinic_rxq_stats rx_stats;
 
-       u64_stats_init(&rx_stats.syncp);
-
        hinic_rxq_get_stats(rxq, &rx_stats);
 
-       u64_stats_update_begin(&nic_rx_stats->syncp);
        nic_rx_stats->bytes += rx_stats.bytes;
        nic_rx_stats->pkts  += rx_stats.pkts;
        nic_rx_stats->errors += rx_stats.errors;
        nic_rx_stats->csum_errors += rx_stats.csum_errors;
        nic_rx_stats->other_errors += rx_stats.other_errors;
-       u64_stats_update_end(&nic_rx_stats->syncp);
-
-       hinic_rxq_clean_stats(rxq);
 }
 
-static void update_tx_stats(struct hinic_dev *nic_dev, struct hinic_txq *txq)
+static void gather_tx_stats(struct hinic_txq_stats *nic_tx_stats, struct hinic_txq *txq)
 {
-       struct hinic_txq_stats *nic_tx_stats = &nic_dev->tx_stats;
        struct hinic_txq_stats tx_stats;
 
-       u64_stats_init(&tx_stats.syncp);
-
        hinic_txq_get_stats(txq, &tx_stats);
 
-       u64_stats_update_begin(&nic_tx_stats->syncp);
        nic_tx_stats->bytes += tx_stats.bytes;
        nic_tx_stats->pkts += tx_stats.pkts;
        nic_tx_stats->tx_busy += tx_stats.tx_busy;
        nic_tx_stats->tx_wake += tx_stats.tx_wake;
        nic_tx_stats->tx_dropped += tx_stats.tx_dropped;
        nic_tx_stats->big_frags_pkts += tx_stats.big_frags_pkts;
-       u64_stats_update_end(&nic_tx_stats->syncp);
-
-       hinic_txq_clean_stats(txq);
 }
 
-static void update_nic_stats(struct hinic_dev *nic_dev)
+static void gather_nic_stats(struct hinic_dev *nic_dev,
+                            struct hinic_rxq_stats *nic_rx_stats,
+                            struct hinic_txq_stats *nic_tx_stats)
 {
        int i, num_qps = hinic_hwdev_num_qps(nic_dev->hwdev);
 
        for (i = 0; i < num_qps; i++)
-               update_rx_stats(nic_dev, &nic_dev->rxqs[i]);
+               gather_rx_stats(nic_rx_stats, &nic_dev->rxqs[i]);
 
        for (i = 0; i < num_qps; i++)
-               update_tx_stats(nic_dev, &nic_dev->txqs[i]);
+               gather_tx_stats(nic_tx_stats, &nic_dev->txqs[i]);
 }
 
 /**
@@ -560,8 +546,6 @@ int hinic_close(struct net_device *netdev)
        netif_carrier_off(netdev);
        netif_tx_disable(netdev);
 
-       update_nic_stats(nic_dev);
-
        up(&nic_dev->mgmt_lock);
 
        if (!HINIC_IS_VF(nic_dev->hwdev->hwif))
@@ -855,26 +839,19 @@ static void hinic_get_stats64(struct net_device *netdev,
                              struct rtnl_link_stats64 *stats)
 {
        struct hinic_dev *nic_dev = netdev_priv(netdev);
-       struct hinic_rxq_stats *nic_rx_stats;
-       struct hinic_txq_stats *nic_tx_stats;
-
-       nic_rx_stats = &nic_dev->rx_stats;
-       nic_tx_stats = &nic_dev->tx_stats;
-
-       down(&nic_dev->mgmt_lock);
+       struct hinic_rxq_stats nic_rx_stats = {};
+       struct hinic_txq_stats nic_tx_stats = {};
 
        if (nic_dev->flags & HINIC_INTF_UP)
-               update_nic_stats(nic_dev);
-
-       up(&nic_dev->mgmt_lock);
+               gather_nic_stats(nic_dev, &nic_rx_stats, &nic_tx_stats);
 
-       stats->rx_bytes   = nic_rx_stats->bytes;
-       stats->rx_packets = nic_rx_stats->pkts;
-       stats->rx_errors  = nic_rx_stats->errors;
+       stats->rx_bytes   = nic_rx_stats.bytes;
+       stats->rx_packets = nic_rx_stats.pkts;
+       stats->rx_errors  = nic_rx_stats.errors;
 
-       stats->tx_bytes   = nic_tx_stats->bytes;
-       stats->tx_packets = nic_tx_stats->pkts;
-       stats->tx_errors  = nic_tx_stats->tx_dropped;
+       stats->tx_bytes   = nic_tx_stats.bytes;
+       stats->tx_packets = nic_tx_stats.pkts;
+       stats->tx_errors  = nic_tx_stats.tx_dropped;
 }
 
 static int hinic_set_features(struct net_device *netdev,
@@ -1173,8 +1150,6 @@ static void hinic_free_intr_coalesce(struct hinic_dev *nic_dev)
 static int nic_dev_init(struct pci_dev *pdev)
 {
        struct hinic_rx_mode_work *rx_mode_work;
-       struct hinic_txq_stats *tx_stats;
-       struct hinic_rxq_stats *rx_stats;
        struct hinic_dev *nic_dev;
        struct net_device *netdev;
        struct hinic_hwdev *hwdev;
@@ -1236,15 +1211,8 @@ static int nic_dev_init(struct pci_dev *pdev)
 
        sema_init(&nic_dev->mgmt_lock, 1);
 
-       tx_stats = &nic_dev->tx_stats;
-       rx_stats = &nic_dev->rx_stats;
-
-       u64_stats_init(&tx_stats->syncp);
-       u64_stats_init(&rx_stats->syncp);
-
-       nic_dev->vlan_bitmap = devm_kzalloc(&pdev->dev,
-                                           VLAN_BITMAP_SIZE(nic_dev),
-                                           GFP_KERNEL);
+       nic_dev->vlan_bitmap = devm_bitmap_zalloc(&pdev->dev, VLAN_N_VID,
+                                                 GFP_KERNEL);
        if (!nic_dev->vlan_bitmap) {
                err = -ENOMEM;
                goto err_vlan_bitmap;
index 24b7b81..a866bea 100644 (file)
@@ -73,7 +73,6 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
        struct hinic_rxq_stats *rxq_stats = &rxq->rxq_stats;
        unsigned int start;
 
-       u64_stats_update_begin(&stats->syncp);
        do {
                start = u64_stats_fetch_begin(&rxq_stats->syncp);
                stats->pkts = rxq_stats->pkts;
@@ -83,7 +82,6 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
                stats->csum_errors = rxq_stats->csum_errors;
                stats->other_errors = rxq_stats->other_errors;
        } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
-       u64_stats_update_end(&stats->syncp);
 }
 
 /**
index 87408e7..5051cdf 100644 (file)
@@ -98,7 +98,6 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
        struct hinic_txq_stats *txq_stats = &txq->txq_stats;
        unsigned int start;
 
-       u64_stats_update_begin(&stats->syncp);
        do {
                start = u64_stats_fetch_begin(&txq_stats->syncp);
                stats->pkts    = txq_stats->pkts;
@@ -108,7 +107,6 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
                stats->tx_dropped = txq_stats->tx_dropped;
                stats->big_frags_pkts = txq_stats->big_frags_pkts;
        } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
-       u64_stats_update_end(&stats->syncp);
 }
 
 /**
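
The hinic hunks drop the device-level accumulators (and their syncp, which
was being taken from the reader side around writes to a private snapshot no
other CPU could observe) and instead sum per-queue snapshots into on-stack
structs at stats-read time. What remains is the canonical u64_stats reader
loop, per queue:

	do {
		start = u64_stats_fetch_begin(&rxq_stats->syncp);
		stats->pkts  = rxq_stats->pkts;
		stats->bytes = rxq_stats->bytes;
	} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
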
index 8ce3348..5dc3028 100644 (file)
@@ -1617,7 +1617,7 @@ static void write_swqe2_immediate(struct sk_buff *skb, struct ehea_swqe *swqe,
                 * For TSO packets we only copy the headers into the
                 * immediate area.
                 */
-               immediate_len = ETH_HLEN + ip_hdrlen(skb) + tcp_hdrlen(skb);
+               immediate_len = skb_tcp_all_headers(skb);
        }
 
        if (skb_is_gso(skb) || skb_data_size >= SWQE2_MAX_IMM) {
index 7e7fe5b..5ab7c0f 100644 (file)
@@ -5981,6 +5981,15 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
                        release_sub_crqs(adapter, 0);
                        rc = init_sub_crqs(adapter);
                } else {
+                       /* no need to reinitialize completely, but we do
+                        * need to clean up transmits that were in flight
+                        * when we processed the reset.  Failure to do so
+                        * will confound the upper layer, usually TCP, by
+                        * creating the illusion of transmits that are
+                        * awaiting completion.
+                        */
+                       clean_tx_pools(adapter);
+
                        rc = reset_sub_crq_queues(adapter);
                }
        } else {
index 36418b5..11a884a 100644 (file)
@@ -1430,7 +1430,6 @@ static int e100_phy_check_without_mii(struct nic *nic)
 #define MII_NSC_CONG           MII_RESV1
 #define NSC_CONG_ENABLE                0x0100
 #define NSC_CONG_TXREADY       0x0400
-#define ADVERTISE_FC_SUPPORTED 0x0400
 static int e100_phy_init(struct nic *nic)
 {
        struct net_device *netdev = nic->netdev;
index f8860f2..4542e2b 100644 (file)
@@ -2000,7 +2000,7 @@ s32 e1000_force_mac_fc(struct e1000_hw *hw)
         *      1:  Rx flow control is enabled (we can receive pause
         *          frames but not send pause frames).
         *      2:  Tx flow control is enabled (we can send pause frames
-        *          frames but we do not receive pause frames).
+        *          but we do not receive pause frames).
         *      3:  Both Rx and TX flow control (symmetric) is enabled.
         *  other:  No other values should be possible at this point.
         */
index 3f5feb5..23299fc 100644 (file)
@@ -2708,7 +2708,7 @@ static int e1000_tso(struct e1000_adapter *adapter,
                if (err < 0)
                        return err;
 
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
                mss = skb_shinfo(skb)->gso_size;
                if (protocol == htons(ETH_P_IP)) {
                        struct iphdr *iph = ip_hdr(skb);
@@ -3139,7 +3139,7 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb,
                max_per_txd = min(mss << 2, max_per_txd);
                max_txd_pwr = fls(max_per_txd) - 1;
 
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
                if (skb->data_len && hdr_len == len) {
                        switch (hw->mac_type) {
                        case e1000_82544: {
index 4d4f5bf..f4154ca 100644 (file)
@@ -82,7 +82,6 @@ E1000_PARAM(Duplex, "Duplex setting");
  */
 E1000_PARAM(AutoNeg, "Advertised auto-negotiation setting");
 #define AUTONEG_ADV_DEFAULT  0x2F
-#define AUTONEG_ADV_MASK     0x2F
 
 /* User Specified Flow Control Override
  *
@@ -95,7 +94,6 @@ E1000_PARAM(AutoNeg, "Advertised auto-negotiation setting");
  * Default Value: Read flow control settings from the EEPROM
  */
 E1000_PARAM(FlowControl, "Flow Control setting");
-#define FLOW_CONTROL_DEFAULT FLOW_CONTROL_FULL
 
 /* XsumRX - Receive Checksum Offload Enable/Disable
  *
index 51512a7..5df7ad9 100644 (file)
@@ -957,7 +957,7 @@ s32 e1000e_force_mac_fc(struct e1000_hw *hw)
         *      1:  Rx flow control is enabled (we can receive pause
         *          frames but not send pause frames).
         *      2:  Tx flow control is enabled (we can send pause frames
-        *          frames but we do not receive pause frames).
+        *          but we do not receive pause frames).
         *      3:  Both Rx and Tx flow control (symmetric) is enabled.
         *  other:  No other values should be possible at this point.
         */
index fa06f68..38e60de 100644 (file)
@@ -5474,7 +5474,7 @@ static int e1000_tso(struct e1000_ring *tx_ring, struct sk_buff *skb,
        if (err < 0)
                return err;
 
-       hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       hdr_len = skb_tcp_all_headers(skb);
        mss = skb_shinfo(skb)->gso_size;
        if (protocol == htons(ETH_P_IP)) {
                struct iphdr *iph = ip_hdr(skb);
@@ -5846,7 +5846,7 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb,
                 * points to just header, pull a few bytes of payload from
                 * frags into skb->data
                 */
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
                /* we do this workaround for ES2LAN, but it is un-necessary,
                 * avoiding it could save a lot of cycles
                 */
index ebe121d..3132d8f 100644 (file)
@@ -101,8 +101,6 @@ E1000_PARAM(InterruptThrottleRate, "Interrupt Throttling Rate");
  * demoted to the most advanced interrupt mode available.
  */
 E1000_PARAM(IntMode, "Interrupt Mode");
-#define MAX_INTMODE    2
-#define MIN_INTMODE    0
 
 /* Enable Smart Power Down of the PHY
  *
index f2fba6e..87fa587 100644 (file)
@@ -809,7 +809,7 @@ static s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
  *  @hw: pointer to hardware structure
  *  @mbx: pointer to mailbox
  *
- *  This function copies the message from the the message array to mbmem
+ *  This function copies the message from the message array to mbmem
  **/
 static void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
 {
index f6d5686..75cbdf2 100644 (file)
@@ -78,7 +78,7 @@ static s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id,
  *  @string: Pointer to location of destination string
  *
  *  This function pulls the string back out of the attribute and will place
- *  it in the array pointed by by string.  It will return success if provided
+ *  it in the array pointed by string.  It will return success if provided
  *  with a valid pointers.
  **/
 static s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string)
@@ -584,7 +584,7 @@ s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg,
  *  @mbx: Unused mailbox pointer
  *
  *  This function is a default handler for unrecognized messages.  At a
- *  minimum it just indicates that the message requested was
+ *  minimum it just indicates that the message requested was
  *  unimplemented.
  **/
 s32 fm10k_tlv_msg_error(struct fm10k_hw __always_unused *hw,
index 57f4ec4..97c574a 100644 (file)
@@ -37,6 +37,7 @@
 #include <net/tc_act/tc_mirred.h>
 #include <net/udp_tunnel.h>
 #include <net/xdp_sock.h>
+#include <linux/bitfield.h>
 #include "i40e_type.h"
 #include "i40e_prototype.h"
 #include <linux/net/intel/i40e_client.h>
@@ -1093,6 +1094,21 @@ static inline void i40e_write_fd_input_set(struct i40e_pf *pf,
                          (u32)(val & 0xFFFFFFFFULL));
 }
 
+/**
+ * i40e_get_pf_count - get PCI PF count.
+ * @hw: pointer to a hw.
+ *
+ * Reports the function number of the highest PCI physical
+ * function plus 1 as it is loaded from the NVM.
+ *
+ * Return: PCI PF count.
+ **/
+static inline u32 i40e_get_pf_count(struct i40e_hw *hw)
+{
+       return FIELD_GET(I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK,
+                        rd32(hw, I40E_GLGEN_PCIFCNCNT));
+}
+
 /* needed by i40e_ethtool.c */
 int i40e_up(struct i40e_vsi *vsi);
 void i40e_down(struct i40e_vsi *vsi);
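FIELD_GET(), from the newly included <linux/bitfield.h>, masks a register value and shifts the field down to bit 0, replacing hand-rolled mask/shift pairs. A self-contained illustration with a made-up register layout (DEMO_PFCNT_MASK and demo_pf_count are hypothetical):

    #include <linux/bitfield.h>
    #include <linux/bits.h>

    #define DEMO_PFCNT_MASK GENMASK(4, 0)  /* 5-bit count in bits 4:0 */

    static u32 demo_pf_count(u32 reg)
    {
            /* equivalent to (reg & DEMO_PFCNT_MASK) >> 0 */
            return FIELD_GET(DEMO_PFCNT_MASK, reg);
    }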
index 552aad6..d9a934c 100644 (file)
@@ -236,8 +236,6 @@ static void __i40e_add_stat_strings(u8 **p, const struct i40e_stats stats[],
        I40E_STAT(struct i40e_cp_veb_tc_stats, _name, _stat)
 #define I40E_PFC_STAT(_name, _stat) \
        I40E_STAT(struct i40e_pfc_stats, _name, _stat)
-#define I40E_QUEUE_STAT(_name, _stat) \
-       I40E_STAT(struct i40e_ring, _name, _stat)
 
 static const struct i40e_stats i40e_gstrings_net_stats[] = {
        I40E_NETDEV_STAT(rx_packets),
@@ -1143,6 +1141,71 @@ static int i40e_get_link_ksettings(struct net_device *netdev,
        return 0;
 }
 
+#define I40E_LBIT_SIZE 8
+/**
+ * i40e_speed_to_link_speed - Translate decimal speed to i40e_aq_link_speed
+ * @speed: speed in decimal
+ * @ks: ethtool ksettings
+ *
+ * Return i40e_aq_link_speed based on speed
+ **/
+static enum i40e_aq_link_speed
+i40e_speed_to_link_speed(__u32 speed, const struct ethtool_link_ksettings *ks)
+{
+       enum i40e_aq_link_speed link_speed = I40E_LINK_SPEED_UNKNOWN;
+       bool speed_changed = false;
+       int i, j;
+
+       static const struct {
+               __u32 speed;
+               enum i40e_aq_link_speed link_speed;
+               __u8 bit[I40E_LBIT_SIZE];
+       } i40e_speed_lut[] = {
+#define I40E_LBIT(mode) ETHTOOL_LINK_MODE_ ## mode ##_Full_BIT
+               {SPEED_100, I40E_LINK_SPEED_100MB, {I40E_LBIT(100baseT)} },
+               {SPEED_1000, I40E_LINK_SPEED_1GB,
+                {I40E_LBIT(1000baseT), I40E_LBIT(1000baseX),
+                 I40E_LBIT(1000baseKX)} },
+               {SPEED_10000, I40E_LINK_SPEED_10GB,
+                {I40E_LBIT(10000baseT), I40E_LBIT(10000baseKR),
+                 I40E_LBIT(10000baseLR), I40E_LBIT(10000baseCR),
+                 I40E_LBIT(10000baseSR), I40E_LBIT(10000baseKX4)} },
+
+               {SPEED_25000, I40E_LINK_SPEED_25GB,
+                {I40E_LBIT(25000baseCR), I40E_LBIT(25000baseKR),
+                 I40E_LBIT(25000baseSR)} },
+               {SPEED_40000, I40E_LINK_SPEED_40GB,
+                {I40E_LBIT(40000baseKR4), I40E_LBIT(40000baseCR4),
+                 I40E_LBIT(40000baseSR4), I40E_LBIT(40000baseLR4)} },
+               {SPEED_20000, I40E_LINK_SPEED_20GB,
+                {I40E_LBIT(20000baseKR2)} },
+               {SPEED_2500, I40E_LINK_SPEED_2_5GB, {I40E_LBIT(2500baseT)} },
+               {SPEED_5000, I40E_LINK_SPEED_5GB, {I40E_LBIT(5000baseT)} }
+#undef I40E_LBIT
+};
+
+       for (i = 0; i < ARRAY_SIZE(i40e_speed_lut); i++) {
+               if (i40e_speed_lut[i].speed == speed) {
+                       for (j = 0; j < I40E_LBIT_SIZE; j++) {
+                               if (test_bit(i40e_speed_lut[i].bit[j],
+                                            ks->link_modes.supported)) {
+                                       speed_changed = true;
+                                       break;
+                               }
+                               if (!i40e_speed_lut[i].bit[j])
+                                       break;
+                       }
+                       if (speed_changed) {
+                               link_speed = i40e_speed_lut[i].link_speed;
+                               break;
+                       }
+               }
+       }
+       return link_speed;
+}
+
+#undef I40E_LBIT_SIZE
+
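A subtlety in the i40e_speed_lut[] scan above: bit[] slots that are not spelled out are zero-initialized, and the inner loop treats the first zero entry as an end-of-list sentinel. Bit 0 happens to be ETHTOOL_LINK_MODE_10baseT_Half_BIT, so this is safe only because this hardware never advertises 10 Mb/s half duplex. In sketch form:

    /* zero-filled tail slots end the per-speed list */
    for (j = 0; j < I40E_LBIT_SIZE; j++) {
            if (test_bit(lut[i].bit[j], ks->link_modes.supported))
                    return lut[i].link_speed;
            if (!lut[i].bit[j])
                    break;
    }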
 /**
  * i40e_set_link_ksettings - Set Speed and Duplex
  * @netdev: network interface device structure
@@ -1159,12 +1222,14 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
        struct ethtool_link_ksettings copy_ks;
        struct i40e_aq_set_phy_config config;
        struct i40e_pf *pf = np->vsi->back;
+       enum i40e_aq_link_speed link_speed;
        struct i40e_vsi *vsi = np->vsi;
        struct i40e_hw *hw = &pf->hw;
        bool autoneg_changed = false;
        i40e_status status = 0;
        int timeout = 50;
        int err = 0;
+       __u32 speed;
        u8 autoneg;
 
        /* Changing port settings is not supported if this isn't the
@@ -1197,6 +1262,7 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
 
        /* save autoneg out of ksettings */
        autoneg = copy_ks.base.autoneg;
+       speed = copy_ks.base.speed;
 
        /* get our own copy of the bits to check against */
        memset(&safe_ks, 0, sizeof(struct ethtool_link_ksettings));
@@ -1215,6 +1281,7 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
 
        /* set autoneg back to what it currently is */
        copy_ks.base.autoneg = safe_ks.base.autoneg;
+       copy_ks.base.speed  = safe_ks.base.speed;
 
        /* If copy_ks.base and safe_ks.base are not the same now, then they are
         * trying to set something that we do not support.
@@ -1331,6 +1398,27 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
                                                  40000baseLR4_Full))
                config.link_speed |= I40E_LINK_SPEED_40GB;
 
+       /* Autonegotiation must be disabled to change speed */
+       if ((speed != SPEED_UNKNOWN && safe_ks.base.speed != speed) &&
+           (autoneg == AUTONEG_DISABLE ||
+           (safe_ks.base.autoneg == AUTONEG_DISABLE && !autoneg_changed))) {
+               link_speed = i40e_speed_to_link_speed(speed, ks);
+               if (link_speed == I40E_LINK_SPEED_UNKNOWN) {
+                       netdev_info(netdev, "Given speed is not supported\n");
+                       err = -EOPNOTSUPP;
+                       goto done;
+               } else {
+                       config.link_speed = link_speed;
+               }
+       } else {
+               if (safe_ks.base.speed != speed) {
+                       netdev_info(netdev,
+                                   "Unable to set speed, disable autoneg\n");
+                       err = -EOPNOTSUPP;
+                       goto done;
+               }
+       }
+
        /* If speed didn't get set, set it to what it currently is.
         * This is needed because if advertise is 0 (as it is when autoneg
         * is disabled) then speed won't get set.
index 83e0cf4..151e9b6 100644 (file)
@@ -550,6 +550,47 @@ void i40e_pf_reset_stats(struct i40e_pf *pf)
        pf->hw_csum_rx_error = 0;
 }
 
+/**
+ * i40e_compute_pci_to_hw_id - compute index from PCI function.
+ * @vsi: ptr to the VSI to read from.
+ * @hw: ptr to the hardware info.
+ **/
+static u32 i40e_compute_pci_to_hw_id(struct i40e_vsi *vsi, struct i40e_hw *hw)
+{
+       int pf_count = i40e_get_pf_count(hw);
+
+       if (vsi->type == I40E_VSI_SRIOV)
+               return (hw->port * BIT(7)) / pf_count + vsi->vf_id;
+
+       return hw->port + BIT(7);
+}
+
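The RXERR1 index math, unpacked: BIT(7) = 128 VF slots are split evenly among the PFs, and the 16 entries above them (the registers run 0..143) belong to the PFs themselves. A worked example with made-up values:

    #include <assert.h>

    int main(void)
    {
            unsigned int pf_count = 8, port = 2, vf_id = 3;

            assert(port * 128 / pf_count + vf_id == 35); /* SRIOV VSI slot */
            assert(port + 128 == 130);                   /* the PF's own slot */
            return 0;
    }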
+/**
+ * i40e_stat_update64 - read and update a 64 bit stat from the chip.
+ * @hw: ptr to the hardware info.
+ * @hireg: the high 32 bit reg to read.
+ * @loreg: the low 32 bit reg to read.
+ * @offset_loaded: has the initial offset been loaded yet.
+ * @offset: ptr to current offset value.
+ * @stat: ptr to the stat.
+ *
+ * Since the device stats are not reset at PFReset, they will not
+ * be zeroed when the driver starts.  We'll save the first values read
+ * and use them as offsets to be subtracted from the raw values in order
+ * to report stats that count from zero.
+ **/
+static void i40e_stat_update64(struct i40e_hw *hw, u32 hireg, u32 loreg,
+                              bool offset_loaded, u64 *offset, u64 *stat)
+{
+       u64 new_data;
+
+       new_data = rd64(hw, loreg);
+
+       if (!offset_loaded || new_data < *offset)
+               *offset = new_data;
+       *stat = new_data - *offset;
+}
+
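Note why a single rd64() at the low register suffices here and the hireg argument goes unused: per the register map additions later in this merge, the two 32-bit halves sit adjacent with an 8-byte stride, so one 64-bit MMIO read covers both.

    /* I40E_GL_RXERR1L(i) = 0x00318000 + i * 8
     * I40E_GL_RXERR1H(i) = 0x00318004 + i * 8
     * => rd64(hw, I40E_GL_RXERR1L(i)) returns the full 64-bit counter;
     *    hireg appears to be kept only for symmetry with i40e_stat_update48().
     */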
 /**
  * i40e_stat_update48 - read and update a 48 bit stat from the chip
  * @hw: ptr to the hardware info
@@ -621,6 +662,34 @@ static void i40e_stat_update_and_clear32(struct i40e_hw *hw, u32 reg, u64 *stat)
        *stat += new_data;
 }
 
+/**
+ * i40e_stats_update_rx_discards - update rx_discards.
+ * @vsi: ptr to the VSI to be updated.
+ * @hw: ptr to the hardware info.
+ * @stat_idx: VSI's stat_counter_idx.
+ * @offset_loaded: ptr to the VSI's stat_offsets_loaded.
+ * @stat_offset: ptr to stat_offset to store first read of specific register.
+ * @stat: ptr to VSI's stat to be updated.
+ **/
+static void
+i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw,
+                             int stat_idx, bool offset_loaded,
+                             struct i40e_eth_stats *stat_offset,
+                             struct i40e_eth_stats *stat)
+{
+       u64 rx_rdpc, rx_rxerr;
+
+       i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded,
+                          &stat_offset->rx_discards, &rx_rdpc);
+       i40e_stat_update64(hw,
+                          I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)),
+                          I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)),
+                          offset_loaded, &stat_offset->rx_discards_other,
+                          &rx_rxerr);
+
+       stat->rx_discards = rx_rdpc + rx_rxerr;
+}
+
 /**
  * i40e_update_eth_stats - Update VSI-specific ethernet statistics counters.
  * @vsi: the VSI to be updated
@@ -680,6 +749,10 @@ void i40e_update_eth_stats(struct i40e_vsi *vsi)
                           I40E_GLV_BPTCL(stat_idx),
                           vsi->stat_offsets_loaded,
                           &oes->tx_broadcast, &es->tx_broadcast);
+
+       i40e_stats_update_rx_discards(vsi, hw, stat_idx,
+                                     vsi->stat_offsets_loaded, oes, es);
+
        vsi->stat_offsets_loaded = true;
 }
 
@@ -4162,7 +4235,6 @@ static void i40e_free_misc_vector(struct i40e_pf *pf)
        i40e_flush(&pf->hw);
 
        if (pf->flags & I40E_FLAG_MSIX_ENABLED && pf->msix_entries) {
-               synchronize_irq(pf->msix_entries[0].vector);
                free_irq(pf->msix_entries[0].vector, pf);
                clear_bit(__I40E_MISC_IRQ_REQUESTED, pf->state);
        }
@@ -4901,7 +4973,6 @@ static void i40e_vsi_free_irq(struct i40e_vsi *vsi)
                        irq_set_affinity_notifier(irq_num, NULL);
                        /* remove our suggested affinity mask for this IRQ */
                        irq_update_affinity_hint(irq_num, NULL);
-                       synchronize_irq(irq_num);
                        free_irq(irq_num, vsi->q_vectors[i]);
 
                        /* Tear down the interrupt queue link list
@@ -13158,7 +13229,7 @@ static netdev_features_t i40e_features_check(struct sk_buff *skb,
        }
 
        /* No need to validate L4LEN as TCP is the only protocol with a
-        * flexible value and we support all possible values supported
+        * flexible value and we support all possible values supported
         * by TCP, which is at most 15 dwords
         */
 
index 61e5789..57a71fa 100644 (file)
@@ -27,7 +27,6 @@
 #define I40E_PRTTSYN_CTL1_TSYNTYPE_V2  (2 << \
                                        I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT)
 #define I40E_SUBDEV_ID_25G_PTP_PIN     0xB
-#define to_dev(obj) container_of(obj, struct device, kobj)
 
 enum i40e_ptp_pin {
        SDP3_2 = 0,
index 1908eed..7339003 100644 (file)
 #define I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT 0
 #define I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT 16
 #define I40E_GLGEN_MSRWD_MDIRDDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT)
+#define I40E_GLGEN_PCIFCNCNT                0x001C0AB4 /* Reset: PCIR */
+#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT 0
+#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK  I40E_MASK(0x1F, I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT)
+#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT 16
+#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_MASK  I40E_MASK(0xFF, I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT)
 #define I40E_GLGEN_RSTAT 0x000B8188 /* Reset: POR */
 #define I40E_GLGEN_RSTAT_DEVSTATE_SHIFT 0
 #define I40E_GLGEN_RSTAT_DEVSTATE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_DEVSTATE_SHIFT)
 #define I40E_VFQF_HKEY1_MAX_INDEX 12
 #define I40E_VFQF_HLUT1(_i, _VF) (0x00220000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: CORER */
 #define I40E_VFQF_HLUT1_MAX_INDEX 15
+#define I40E_GL_RXERR1H(_i)             (0x00318004 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
+#define I40E_GL_RXERR1H_MAX_INDEX       143
+#define I40E_GL_RXERR1H_RXERR1H_SHIFT   0
+#define I40E_GL_RXERR1H_RXERR1H_MASK    I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1H_RXERR1H_SHIFT)
+#define I40E_GL_RXERR1L(_i)             (0x00318000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
+#define I40E_GL_RXERR1L_MAX_INDEX       143
+#define I40E_GL_RXERR1L_RXERR1L_SHIFT   0
+#define I40E_GL_RXERR1L_RXERR1L_MASK    I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1L_RXERR1L_SHIFT)
 #define I40E_GLPRT_BPRCH(_i) (0x003005E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
 #define I40E_GLPRT_BPRCL(_i) (0x003005E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
 #define I40E_GLPRT_BPTCH(_i) (0x00300A04 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
index b796710..f6ba97a 100644 (file)
@@ -372,7 +372,6 @@ static void i40e_change_filter_num(bool ipv4, bool add, u16 *ipv4_filter_num,
        }
 }
 
-#define IP_HEADER_OFFSET               14
 #define I40E_UDPIP_DUMMY_PACKET_LEN    42
 #define I40E_UDPIP6_DUMMY_PACKET_LEN   62
 /**
@@ -1483,10 +1482,8 @@ void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
        if (!rx_ring->rx_bi)
                return;
 
-       if (rx_ring->skb) {
-               dev_kfree_skb(rx_ring->skb);
-               rx_ring->skb = NULL;
-       }
+       dev_kfree_skb(rx_ring->skb);
+       rx_ring->skb = NULL;
 
        if (rx_ring->xsk_pool) {
                i40e_xsk_clean_rx_ring(rx_ring);
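The dropped NULL check is safe because dev_kfree_skb() (consume_skb() under the hood) already tolerates a NULL skb; roughly (a behavioral sketch, not the exact implementation):

    void consume_skb(struct sk_buff *skb)
    {
            if (!skb)
                    return;
            /* ... drop the reference and free the skb ... */
    }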
@@ -2291,16 +2288,14 @@ int i40e_xmit_xdp_tx_ring(struct xdp_buff *xdp, struct i40e_ring *xdp_ring)
  * i40e_run_xdp - run an XDP program
  * @rx_ring: Rx ring being processed
  * @xdp: XDP buffer containing the frame
+ * @xdp_prog: XDP program to run
  **/
-static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
+static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp, struct bpf_prog *xdp_prog)
 {
        int err, result = I40E_XDP_PASS;
        struct i40e_ring *xdp_ring;
-       struct bpf_prog *xdp_prog;
        u32 act;
 
-       xdp_prog = READ_ONCE(rx_ring->xdp_prog);
-
        if (!xdp_prog)
                goto xdp_out;
 
@@ -2445,6 +2440,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
        unsigned int offset = rx_ring->rx_offset;
        struct sk_buff *skb = rx_ring->skb;
        unsigned int xdp_xmit = 0;
+       struct bpf_prog *xdp_prog;
        bool failure = false;
        struct xdp_buff xdp;
        int xdp_res = 0;
@@ -2454,6 +2450,8 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 #endif
        xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
 
+       xdp_prog = READ_ONCE(rx_ring->xdp_prog);
+
        while (likely(total_rx_packets < (unsigned int)budget)) {
                struct i40e_rx_buffer *rx_buffer;
                union i40e_rx_desc *rx_desc;
@@ -2514,7 +2512,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
                        /* At larger PAGE_SIZE, frame_sz depend on len size */
                        xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
 #endif
-                       xdp_res = i40e_run_xdp(rx_ring, &xdp);
+                       xdp_res = i40e_run_xdp(rx_ring, &xdp, xdp_prog);
                }
 
                if (xdp_res) {
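The shape of this refactor: READ_ONCE(rx_ring->xdp_prog) moves out of the per-frame helper into the NAPI poll loop, so the program pointer is loaded once per poll rather than once per packet and is simply passed down.

    /* In outline:
     *
     *   xdp_prog = READ_ONCE(rx_ring->xdp_prog);      // once per poll
     *   while (total_rx_packets < budget)
     *           xdp_res = i40e_run_xdp(rx_ring, &xdp, xdp_prog);
     */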
index 36a4ca1..7b3f30b 100644 (file)
@@ -1172,6 +1172,7 @@ struct i40e_eth_stats {
        u64 tx_broadcast;               /* bptc */
        u64 tx_discards;                /* tdpc */
        u64 tx_errors;                  /* tepc */
+       u64 rx_discards_other;          /* rxerr1 */
 };
 
 /* Statistics collected per VEB per TC */
index d01fb59..4f184c5 100644 (file)
@@ -2147,6 +2147,10 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
                /* VFs only use TC 0 */
                vfres->vsi_res[0].qset_handle
                                          = le16_to_cpu(vsi->info.qs_handle[0]);
+               if (!(vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO) && !vf->pf_set_mac) {
+                       i40e_del_mac_filter(vsi, vf->default_lan_addr.addr);
+                       eth_zero_addr(vf->default_lan_addr.addr);
+               }
                ether_addr_copy(vfres->vsi_res[0].default_mac_addr,
                                vf->default_lan_addr.addr);
        }
index af3e7e6..6d4009e 100644 (file)
@@ -143,20 +143,17 @@ int i40e_xsk_pool_setup(struct i40e_vsi *vsi, struct xsk_buff_pool *pool,
  * i40e_run_xdp_zc - Executes an XDP program on an xdp_buff
  * @rx_ring: Rx ring
  * @xdp: xdp_buff used as input to the XDP program
+ * @xdp_prog: XDP program to run
  *
  * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR}
  **/
-static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
+static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp,
+                          struct bpf_prog *xdp_prog)
 {
        int err, result = I40E_XDP_PASS;
        struct i40e_ring *xdp_ring;
-       struct bpf_prog *xdp_prog;
        u32 act;
 
-       /* NB! xdp_prog will always be !NULL, due to the fact that
-        * this path is enabled by setting an XDP program.
-        */
-       xdp_prog = READ_ONCE(rx_ring->xdp_prog);
        act = bpf_prog_run_xdp(xdp_prog, xdp);
 
        if (likely(act == XDP_REDIRECT)) {
@@ -339,9 +336,15 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
        u16 next_to_clean = rx_ring->next_to_clean;
        u16 count_mask = rx_ring->count - 1;
        unsigned int xdp_res, xdp_xmit = 0;
+       struct bpf_prog *xdp_prog;
        bool failure = false;
        u16 cleaned_count;
 
+       /* NB! xdp_prog will always be !NULL, due to the fact that
+        * this path is enabled by setting an XDP program.
+        */
+       xdp_prog = READ_ONCE(rx_ring->xdp_prog);
+
        while (likely(total_rx_packets < (unsigned int)budget)) {
                union i40e_rx_desc *rx_desc;
                unsigned int rx_packets;
@@ -378,7 +381,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
                xsk_buff_set_size(bi, size);
                xsk_buff_dma_sync_for_cpu(bi, rx_ring->xsk_pool);
 
-               xdp_res = i40e_run_xdp_zc(rx_ring, bi);
+               xdp_res = i40e_run_xdp_zc(rx_ring, bi, xdp_prog);
                i40e_handle_xdp_result_zc(rx_ring, bi, rx_desc, &rx_packets,
                                          &rx_bytes, size, xdp_res, &failure);
                if (failure)
index 5411039..69ade65 100644 (file)
@@ -4250,7 +4250,7 @@ static netdev_features_t iavf_features_check(struct sk_buff *skb,
        }
 
        /* No need to validate L4LEN as TCP is the only protocol with a
-        * flexible value and we support all possible values supported
+        * flexible value and we support all possible values supported
         * by TCP, which is at most 15 dwords
         */
 
index e2b4ba9..0d22bba 100644 (file)
@@ -5,10 +5,6 @@
 #include "iavf_prototype.h"
 #include "iavf_client.h"
 
-/* busy wait delay in msec */
-#define IAVF_BUSY_WAIT_DELAY 10
-#define IAVF_BUSY_WAIT_COUNT 50
-
 /**
  * iavf_send_pf_msg
  * @adapter: adapter structure
index 1e71b70..70335f6 100644 (file)
@@ -2189,6 +2189,42 @@ ice_setup_autoneg(struct ice_port_info *p, struct ethtool_link_ksettings *ks,
        return err;
 }
 
+/**
+ * ice_set_phy_type_from_speed - set phy_types based on speeds
+ * and advertised modes
+ * @ks: ethtool link ksettings struct
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @phy_type_high: pointer to the higher part of phy_type
+ * @adv_link_speed: targeted link speeds bitmap
+ */
+static void
+ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks,
+                           u64 *phy_type_low, u64 *phy_type_high,
+                           u16 adv_link_speed)
+{
+       /* Handle 1000M speed in a special way because ice_update_phy_type
+        * enables all link modes, but having mixed copper and optical
+        * standards is not supported.
+        */
+       adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB;
+
+       if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+                                                 1000baseT_Full))
+               *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T |
+                                ICE_PHY_TYPE_LOW_1G_SGMII;
+
+       if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+                                                 1000baseKX_Full))
+               *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX;
+
+       if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+                                                 1000baseX_Full))
+               *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX |
+                                ICE_PHY_TYPE_LOW_1000BASE_LX;
+
+       ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed);
+}
+
 /**
  * ice_set_link_ksettings - Set Speed and Duplex
  * @netdev: network interface device structure
@@ -2320,7 +2356,8 @@ ice_set_link_ksettings(struct net_device *netdev,
                adv_link_speed = curr_link_speed;
 
        /* Convert the advertise link speeds to their corresponded PHY_TYPE */
-       ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed);
+       ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high,
+                                   adv_link_speed);
 
        if (!autoneg_changed && adv_link_speed == curr_link_speed) {
                netdev_info(netdev, "Nothing changed, exiting without setting anything.\n");
@@ -3470,6 +3507,16 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
        new_rx = ch->combined_count + ch->rx_count;
        new_tx = ch->combined_count + ch->tx_count;
 
+       if (new_rx < vsi->tc_cfg.numtc) {
+               netdev_err(dev, "Cannot set less Rx channels, than Traffic Classes you have (%u)\n",
+                          vsi->tc_cfg.numtc);
+               return -EINVAL;
+       }
+       if (new_tx < vsi->tc_cfg.numtc) {
+               netdev_err(dev, "Cannot set less Tx channels, than Traffic Classes you have (%u)\n",
+                          vsi->tc_cfg.numtc);
+               return -EINVAL;
+       }
        if (new_rx > ice_get_max_rxq(pf)) {
                netdev_err(dev, "Maximum allowed Rx channels is %d\n",
                           ice_get_max_rxq(pf));
index c73cdab..ada5198 100644 (file)
@@ -2639,7 +2639,7 @@ ice_ptg_remove_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg)
  *
  * This function will either add or move a ptype to a particular PTG depending
  * on if the ptype is already part of another group. Note that using a
- * destination PTG ID of ICE_DEFAULT_PTG (0) will move the ptype to the
+ * destination PTG ID of ICE_DEFAULT_PTG (0) will move the ptype to the
  * default PTG.
  */
 static int
index 4f954db..c9f7393 100644 (file)
@@ -447,11 +447,9 @@ void ice_deinit_lag(struct ice_pf *pf)
        if (lag->pf)
                ice_unregister_lag_handler(lag);
 
-       if (lag->upper_netdev)
-               dev_put(lag->upper_netdev);
+       dev_put(lag->upper_netdev);
 
-       if (lag->peer_netdev)
-               dev_put(lag->peer_netdev);
+       dev_put(lag->peer_netdev);
 
        kfree(lag);
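Same cleanup idiom as the skb case earlier in this merge: dev_put() accepts a NULL pointer, so the two guards above were redundant. Sketch of the relevant behavior:

    static inline void dev_put(struct net_device *dev)
    {
            if (dev) {
                    /* ... release the reference on the netdev ... */
            }
    }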
 
index b28fb8e..a6c4be5 100644 (file)
@@ -912,7 +912,7 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt)
  * @vsi: the VSI being configured
  * @ctxt: VSI context structure
  */
-static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
 {
        u16 offset = 0, qmap = 0, tx_count = 0, pow = 0;
        u16 num_txq_per_tc, num_rxq_per_tc;
@@ -985,7 +985,18 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
        else
                vsi->num_rxq = num_rxq_per_tc;
 
+       if (vsi->num_rxq > vsi->alloc_rxq) {
+               dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
+                       vsi->num_rxq, vsi->alloc_rxq);
+               return -EINVAL;
+       }
+
        vsi->num_txq = tx_count;
+       if (vsi->num_txq > vsi->alloc_txq) {
+               dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
+                       vsi->num_txq, vsi->alloc_txq);
+               return -EINVAL;
+       }
 
        if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) {
                dev_dbg(ice_pf_to_dev(vsi->back), "VF VSI should have same number of Tx and Rx queues. Hence making them equal\n");
@@ -1003,6 +1014,8 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
         */
        ctxt->info.q_mapping[0] = cpu_to_le16(vsi->rxq_map[0]);
        ctxt->info.q_mapping[1] = cpu_to_le16(vsi->num_rxq);
+
+       return 0;
 }
 
 /**
@@ -1190,7 +1203,10 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi)
        if (vsi->type == ICE_VSI_CHNL) {
                ice_chnl_vsi_setup_q_map(vsi, ctxt);
        } else {
-               ice_vsi_setup_q_map(vsi, ctxt);
+               ret = ice_vsi_setup_q_map(vsi, ctxt);
+               if (ret)
+                       goto out;
+
                if (!init_vsi) /* means VSI being updated */
                        /* must to indicate which section of VSI context are
                         * being modified
@@ -3467,7 +3483,7 @@ void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc)
  *
  * Prepares VSI tc_config to have queue configurations based on MQPRIO options.
  */
-static void
+static int
 ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
                           u8 ena_tc)
 {
@@ -3516,7 +3532,18 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
 
        /* Set actual Tx/Rx queue pairs */
        vsi->num_txq = offset + qcount_tx;
+       if (vsi->num_txq > vsi->alloc_txq) {
+               dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
+                       vsi->num_txq, vsi->alloc_txq);
+               return -EINVAL;
+       }
+
        vsi->num_rxq = offset + qcount_rx;
+       if (vsi->num_rxq > vsi->alloc_rxq) {
+               dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
+                       vsi->num_rxq, vsi->alloc_rxq);
+               return -EINVAL;
+       }
 
        /* Setup queue TC[0].qmap for given VSI context */
        ctxt->info.tc_mapping[0] = cpu_to_le16(qmap);
@@ -3534,6 +3561,8 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
        dev_dbg(ice_pf_to_dev(vsi->back), "vsi->num_rxq = %d\n",  vsi->num_rxq);
        dev_dbg(ice_pf_to_dev(vsi->back), "all_numtc %u, all_enatc: 0x%04x, tc_cfg.numtc %u\n",
                vsi->all_numtc, vsi->all_enatc, vsi->tc_cfg.numtc);
+
+       return 0;
 }
 
 /**
@@ -3583,9 +3612,12 @@ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc)
 
        if (vsi->type == ICE_VSI_PF &&
            test_bit(ICE_FLAG_TC_MQPRIO, pf->flags))
-               ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc);
+               ret = ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc);
        else
-               ice_vsi_setup_q_map(vsi, ctx);
+               ret = ice_vsi_setup_q_map(vsi, ctx);
+
+       if (ret)
+               goto out;
 
        /* must to indicate which section of VSI context are being modified */
        ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
index 3f64300..d4a0d08 100644 (file)
@@ -43,6 +43,8 @@ enum ice_protocol_type {
        ICE_NVGRE,
        ICE_GTP,
        ICE_GTP_NO_PAY,
+       ICE_VLAN_EX,
+       ICE_VLAN_IN,
        ICE_VXLAN_GPE,
        ICE_SCTP_IL,
        ICE_PROTOCOL_LAST
@@ -109,13 +111,18 @@ enum ice_prot_id {
 #define ICE_GRE_OF_HW          64
 
 #define ICE_UDP_OF_HW  52 /* UDP Tunnels */
-#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel type */
+#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel and VLAN type */
 
 #define ICE_MDID_SIZE 2
+
 #define ICE_TUN_FLAG_MDID 21
 #define ICE_TUN_FLAG_MDID_OFF (ICE_MDID_SIZE * ICE_TUN_FLAG_MDID)
 #define ICE_TUN_FLAG_MASK 0xFF
 
+#define ICE_VLAN_FLAG_MDID 20
+#define ICE_VLAN_FLAG_MDID_OFF (ICE_MDID_SIZE * ICE_VLAN_FLAG_MDID)
+#define ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK 0xD000
+
 #define ICE_TUN_FLAG_FV_IND 2
 
 /* Mapping of software defined protocol ID to hardware defined protocol ID */
index 8d8f3ee..2d12747 100644 (file)
@@ -31,16 +31,16 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
                                                        0x81, 0, 0, 0};
 
 enum {
-       ICE_PKT_VLAN            = BIT(0),
-       ICE_PKT_OUTER_IPV6      = BIT(1),
-       ICE_PKT_TUN_GTPC        = BIT(2),
-       ICE_PKT_TUN_GTPU        = BIT(3),
-       ICE_PKT_TUN_NVGRE       = BIT(4),
-       ICE_PKT_TUN_UDP         = BIT(5),
-       ICE_PKT_INNER_IPV6      = BIT(6),
-       ICE_PKT_INNER_TCP       = BIT(7),
-       ICE_PKT_INNER_UDP       = BIT(8),
-       ICE_PKT_GTP_NOPAY       = BIT(9),
+       ICE_PKT_OUTER_IPV6      = BIT(0),
+       ICE_PKT_TUN_GTPC        = BIT(1),
+       ICE_PKT_TUN_GTPU        = BIT(2),
+       ICE_PKT_TUN_NVGRE       = BIT(3),
+       ICE_PKT_TUN_UDP         = BIT(4),
+       ICE_PKT_INNER_IPV6      = BIT(5),
+       ICE_PKT_INNER_TCP       = BIT(6),
+       ICE_PKT_INNER_UDP       = BIT(7),
+       ICE_PKT_GTP_NOPAY       = BIT(8),
+       ICE_PKT_KMALLOC         = BIT(9),
 };
 
 struct ice_dummy_pkt_offsets {
@@ -53,22 +53,42 @@ struct ice_dummy_pkt_profile {
        const u8 *pkt;
        u32 match;
        u16 pkt_len;
+       u16 offsets_len;
 };
 
-#define ICE_DECLARE_PKT_OFFSETS(type)                          \
-       static const struct ice_dummy_pkt_offsets               \
+#define ICE_DECLARE_PKT_OFFSETS(type)                                  \
+       static const struct ice_dummy_pkt_offsets                       \
        ice_dummy_##type##_packet_offsets[]
 
-#define ICE_DECLARE_PKT_TEMPLATE(type)                         \
+#define ICE_DECLARE_PKT_TEMPLATE(type)                                 \
        static const u8 ice_dummy_##type##_packet[]
 
-#define ICE_PKT_PROFILE(type, m) {                             \
-       .match          = (m),                                  \
-       .pkt            = ice_dummy_##type##_packet,            \
-       .pkt_len        = sizeof(ice_dummy_##type##_packet),    \
-       .offsets        = ice_dummy_##type##_packet_offsets,    \
+#define ICE_PKT_PROFILE(type, m) {                                     \
+       .match          = (m),                                          \
+       .pkt            = ice_dummy_##type##_packet,                    \
+       .pkt_len        = sizeof(ice_dummy_##type##_packet),            \
+       .offsets        = ice_dummy_##type##_packet_offsets,            \
+       .offsets_len    = sizeof(ice_dummy_##type##_packet_offsets),    \
 }
 
+ICE_DECLARE_PKT_OFFSETS(vlan) = {
+       { ICE_VLAN_OFOS,        12 },
+};
+
+ICE_DECLARE_PKT_TEMPLATE(vlan) = {
+       0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */
+};
+
+ICE_DECLARE_PKT_OFFSETS(qinq) = {
+       { ICE_VLAN_EX,          12 },
+       { ICE_VLAN_IN,          16 },
+};
+
+ICE_DECLARE_PKT_TEMPLATE(qinq) = {
+       0x91, 0x00, 0x00, 0x00, /* ICE_VLAN_EX 12 */
+       0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_IN 16 */
+};
+
 ICE_DECLARE_PKT_OFFSETS(gre_tcp) = {
        { ICE_MAC_OFOS,         0 },
        { ICE_ETYPE_OL,         12 },
@@ -506,38 +526,6 @@ ICE_DECLARE_PKT_TEMPLATE(udp) = {
        0x00, 0x00,     /* 2 bytes for 4 byte alignment */
 };
 
-/* offset info for MAC + VLAN + IPv4 + UDP dummy packet */
-ICE_DECLARE_PKT_OFFSETS(vlan_udp) = {
-       { ICE_MAC_OFOS,         0 },
-       { ICE_VLAN_OFOS,        12 },
-       { ICE_ETYPE_OL,         16 },
-       { ICE_IPV4_OFOS,        18 },
-       { ICE_UDP_ILOS,         38 },
-       { ICE_PROTOCOL_LAST,    0 },
-};
-
-/* C-tag (801.1Q), IPv4:UDP dummy packet */
-ICE_DECLARE_PKT_TEMPLATE(vlan_udp) = {
-       0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */
-
-       0x08, 0x00,             /* ICE_ETYPE_OL 16 */
-
-       0x45, 0x00, 0x00, 0x1c, /* ICE_IPV4_OFOS 18 */
-       0x00, 0x01, 0x00, 0x00,
-       0x00, 0x11, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 38 */
-       0x00, 0x08, 0x00, 0x00,
-
-       0x00, 0x00,     /* 2 bytes for 4 byte alignment */
-};
-
 /* offset info for MAC + IPv4 + TCP dummy packet */
 ICE_DECLARE_PKT_OFFSETS(tcp) = {
        { ICE_MAC_OFOS,         0 },
@@ -570,41 +558,6 @@ ICE_DECLARE_PKT_TEMPLATE(tcp) = {
        0x00, 0x00,     /* 2 bytes for 4 byte alignment */
 };
 
-/* offset info for MAC + VLAN (C-tag, 802.1Q) + IPv4 + TCP dummy packet */
-ICE_DECLARE_PKT_OFFSETS(vlan_tcp) = {
-       { ICE_MAC_OFOS,         0 },
-       { ICE_VLAN_OFOS,        12 },
-       { ICE_ETYPE_OL,         16 },
-       { ICE_IPV4_OFOS,        18 },
-       { ICE_TCP_IL,           38 },
-       { ICE_PROTOCOL_LAST,    0 },
-};
-
-/* C-tag (801.1Q), IPv4:TCP dummy packet */
-ICE_DECLARE_PKT_TEMPLATE(vlan_tcp) = {
-       0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */
-
-       0x08, 0x00,             /* ICE_ETYPE_OL 16 */
-
-       0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_OFOS 18 */
-       0x00, 0x01, 0x00, 0x00,
-       0x00, 0x06, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 38 */
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x50, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x00, 0x00,     /* 2 bytes for 4 byte alignment */
-};
-
 ICE_DECLARE_PKT_OFFSETS(tcp_ipv6) = {
        { ICE_MAC_OFOS,         0 },
        { ICE_ETYPE_OL,         12 },
@@ -640,46 +593,6 @@ ICE_DECLARE_PKT_TEMPLATE(tcp_ipv6) = {
        0x00, 0x00, /* 2 bytes for 4 byte alignment */
 };
 
-/* C-tag (802.1Q): IPv6 + TCP */
-ICE_DECLARE_PKT_OFFSETS(vlan_tcp_ipv6) = {
-       { ICE_MAC_OFOS,         0 },
-       { ICE_VLAN_OFOS,        12 },
-       { ICE_ETYPE_OL,         16 },
-       { ICE_IPV6_OFOS,        18 },
-       { ICE_TCP_IL,           58 },
-       { ICE_PROTOCOL_LAST,    0 },
-};
-
-/* C-tag (802.1Q), IPv6 + TCP dummy packet */
-ICE_DECLARE_PKT_TEMPLATE(vlan_tcp_ipv6) = {
-       0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */
-
-       0x86, 0xDD,             /* ICE_ETYPE_OL 16 */
-
-       0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 18 */
-       0x00, 0x14, 0x06, 0x00, /* Next header is TCP */
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 58 */
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x50, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x00, 0x00, /* 2 bytes for 4 byte alignment */
-};
-
 /* IPv6 + UDP */
 ICE_DECLARE_PKT_OFFSETS(udp_ipv6) = {
        { ICE_MAC_OFOS,         0 },
@@ -717,43 +630,6 @@ ICE_DECLARE_PKT_TEMPLATE(udp_ipv6) = {
        0x00, 0x00, /* 2 bytes for 4 byte alignment */
 };
 
-/* C-tag (802.1Q): IPv6 + UDP */
-ICE_DECLARE_PKT_OFFSETS(vlan_udp_ipv6) = {
-       { ICE_MAC_OFOS,         0 },
-       { ICE_VLAN_OFOS,        12 },
-       { ICE_ETYPE_OL,         16 },
-       { ICE_IPV6_OFOS,        18 },
-       { ICE_UDP_ILOS,         58 },
-       { ICE_PROTOCOL_LAST,    0 },
-};
-
-/* C-tag (802.1Q), IPv6 + UDP dummy packet */
-ICE_DECLARE_PKT_TEMPLATE(vlan_udp_ipv6) = {
-       0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x81, 0x00, 0x00, 0x00,/* ICE_VLAN_OFOS 12 */
-
-       0x86, 0xDD,             /* ICE_ETYPE_OL 16 */
-
-       0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 18 */
-       0x00, 0x08, 0x11, 0x00, /* Next header UDP */
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-       0x00, 0x00, 0x00, 0x00,
-
-       0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 58 */
-       0x00, 0x08, 0x00, 0x00,
-
-       0x00, 0x00, /* 2 bytes for 4 byte alignment */
-};
-
 /* Outer IPv4 + Outer UDP + GTP + Inner IPv4 + Inner TCP */
 ICE_DECLARE_PKT_OFFSETS(ipv4_gtpu_ipv4_tcp) = {
        { ICE_MAC_OFOS,         0 },
@@ -1271,14 +1147,9 @@ static const struct ice_dummy_pkt_profile ice_dummy_pkt_profiles[] = {
        ICE_PKT_PROFILE(udp_tun_ipv6_udp, ICE_PKT_TUN_UDP |
                                          ICE_PKT_INNER_IPV6),
        ICE_PKT_PROFILE(udp_tun_udp, ICE_PKT_TUN_UDP),
-       ICE_PKT_PROFILE(vlan_udp_ipv6, ICE_PKT_OUTER_IPV6 | ICE_PKT_INNER_UDP |
-                                      ICE_PKT_VLAN),
        ICE_PKT_PROFILE(udp_ipv6, ICE_PKT_OUTER_IPV6 | ICE_PKT_INNER_UDP),
-       ICE_PKT_PROFILE(vlan_udp, ICE_PKT_INNER_UDP | ICE_PKT_VLAN),
        ICE_PKT_PROFILE(udp, ICE_PKT_INNER_UDP),
-       ICE_PKT_PROFILE(vlan_tcp_ipv6, ICE_PKT_OUTER_IPV6 | ICE_PKT_VLAN),
        ICE_PKT_PROFILE(tcp_ipv6, ICE_PKT_OUTER_IPV6),
-       ICE_PKT_PROFILE(vlan_tcp, ICE_PKT_VLAN),
        ICE_PKT_PROFILE(tcp, 0),
 };
 
@@ -4609,6 +4480,8 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[ICE_PROTOCOL_LAST] = {
        { ICE_NVGRE,            { 0, 2, 4, 6 } },
        { ICE_GTP,              { 8, 10, 12, 14, 16, 18, 20, 22 } },
        { ICE_GTP_NO_PAY,       { 8, 10, 12, 14 } },
+       { ICE_VLAN_EX,          { 2, 0 } },
+       { ICE_VLAN_IN,          { 2, 0 } },
 };
 
 static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
@@ -4629,6 +4502,8 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
        { ICE_NVGRE,            ICE_GRE_OF_HW },
        { ICE_GTP,              ICE_UDP_OF_HW },
        { ICE_GTP_NO_PAY,       ICE_UDP_ILOS_HW },
+       { ICE_VLAN_EX,          ICE_VLAN_OF_HW },
+       { ICE_VLAN_IN,          ICE_VLAN_OL_HW },
 };
 
 /**
@@ -5313,10 +5188,11 @@ static bool ice_tun_type_match_word(enum ice_sw_tunnel_type tun_type, u16 *mask)
  * ice_add_special_words - Add words that are not protocols, such as metadata
  * @rinfo: other information regarding the rule e.g. priority and action info
  * @lkup_exts: lookup word structure
+ * @dvm_ena: is double VLAN mode enabled
  */
 static int
 ice_add_special_words(struct ice_adv_rule_info *rinfo,
-                     struct ice_prot_lkup_ext *lkup_exts)
+                     struct ice_prot_lkup_ext *lkup_exts, bool dvm_ena)
 {
        u16 mask;
 
@@ -5335,6 +5211,19 @@ ice_add_special_words(struct ice_adv_rule_info *rinfo,
                }
        }
 
+       if (rinfo->vlan_type != 0 && dvm_ena) {
+               if (lkup_exts->n_val_words < ICE_MAX_CHAIN_WORDS) {
+                       u8 word = lkup_exts->n_val_words++;
+
+                       lkup_exts->fv_words[word].prot_id = ICE_META_DATA_ID_HW;
+                       lkup_exts->fv_words[word].off = ICE_VLAN_FLAG_MDID_OFF;
+                       lkup_exts->field_mask[word] =
+                                       ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK;
+               } else {
+                       return -ENOSPC;
+               }
+       }
+
        return 0;
 }
 
@@ -5454,7 +5343,7 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
        /* Create any special protocol/offset pairs, such as looking at tunnel
         * bits by extracting metadata
         */
-       status = ice_add_special_words(rinfo, lkup_exts);
+       status = ice_add_special_words(rinfo, lkup_exts, ice_is_dvm_ena(hw));
        if (status)
                goto err_free_lkup_exts;
 
@@ -5554,6 +5443,79 @@ err_free_lkup_exts:
        return status;
 }
 
+/**
+ * ice_dummy_packet_add_vlan - insert VLAN header to dummy pkt
+ *
+ * @dummy_pkt: dummy packet profile pattern to which VLAN tag(s) will be added
+ * @num_vlan: number of VLAN tags
+ */
+static struct ice_dummy_pkt_profile *
+ice_dummy_packet_add_vlan(const struct ice_dummy_pkt_profile *dummy_pkt,
+                         u32 num_vlan)
+{
+       struct ice_dummy_pkt_profile *profile;
+       struct ice_dummy_pkt_offsets *offsets;
+       u32 buf_len, off, etype_off, i;
+       u8 *pkt;
+
+       if (num_vlan < 1 || num_vlan > 2)
+               return ERR_PTR(-EINVAL);
+
+       off = num_vlan * VLAN_HLEN;
+
+       buf_len = array_size(num_vlan, sizeof(ice_dummy_vlan_packet_offsets)) +
+                 dummy_pkt->offsets_len;
+       offsets = kzalloc(buf_len, GFP_KERNEL);
+       if (!offsets)
+               return ERR_PTR(-ENOMEM);
+
+       offsets[0] = dummy_pkt->offsets[0];
+       if (num_vlan == 2) {
+               offsets[1] = ice_dummy_qinq_packet_offsets[0];
+               offsets[2] = ice_dummy_qinq_packet_offsets[1];
+       } else if (num_vlan == 1) {
+               offsets[1] = ice_dummy_vlan_packet_offsets[0];
+       }
+
+       for (i = 1; dummy_pkt->offsets[i].type != ICE_PROTOCOL_LAST; i++) {
+               offsets[i + num_vlan].type = dummy_pkt->offsets[i].type;
+               offsets[i + num_vlan].offset =
+                       dummy_pkt->offsets[i].offset + off;
+       }
+       offsets[i + num_vlan] = dummy_pkt->offsets[i];
+
+       etype_off = dummy_pkt->offsets[1].offset;
+
+       buf_len = array_size(num_vlan, sizeof(ice_dummy_vlan_packet)) +
+                 dummy_pkt->pkt_len;
+       pkt = kzalloc(buf_len, GFP_KERNEL);
+       if (!pkt) {
+               kfree(offsets);
+               return ERR_PTR(-ENOMEM);
+       }
+
+       memcpy(pkt, dummy_pkt->pkt, etype_off);
+       memcpy(pkt + etype_off,
+              num_vlan == 2 ? ice_dummy_qinq_packet : ice_dummy_vlan_packet,
+              off);
+       memcpy(pkt + etype_off + off, dummy_pkt->pkt + etype_off,
+              dummy_pkt->pkt_len - etype_off);
+
+       profile = kzalloc(sizeof(*profile), GFP_KERNEL);
+       if (!profile) {
+               kfree(offsets);
+               kfree(pkt);
+               return ERR_PTR(-ENOMEM);
+       }
+
+       profile->offsets = offsets;
+       profile->pkt = pkt;
+       profile->pkt_len = buf_len;
+       profile->match |= ICE_PKT_KMALLOC;
+
+       return profile;
+}
+
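In byte-layout terms, ice_dummy_packet_add_vlan() cuts the base dummy packet at its EtherType offset, splices in one or two 4-byte VLAN headers, and shifts every recorded offset past the cut by num_vlan * VLAN_HLEN. For a single tag (illustrative):

    /*  0..11  MAC header, copied verbatim from the base dummy packet
     * 12..15  0x81 0x00 <vid>   -- spliced-in ICE_VLAN_OFOS header
     * 16..    the base packet from its old EtherType offset onward,
     *         each offsets[] entry bumped by VLAN_HLEN (4)
     */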
 /**
  * ice_find_dummy_packet - find dummy packet
  *
@@ -5569,7 +5531,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
                      enum ice_sw_tunnel_type tun_type)
 {
        const struct ice_dummy_pkt_profile *ret = ice_dummy_pkt_profiles;
-       u32 match = 0;
+       u32 match = 0, vlan_count = 0;
        u16 i;
 
        switch (tun_type) {
@@ -5597,8 +5559,11 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
                        match |= ICE_PKT_INNER_TCP;
                else if (lkups[i].type == ICE_IPV6_OFOS)
                        match |= ICE_PKT_OUTER_IPV6;
-               else if (lkups[i].type == ICE_VLAN_OFOS)
-                       match |= ICE_PKT_VLAN;
+               else if (lkups[i].type == ICE_VLAN_OFOS ||
+                        lkups[i].type == ICE_VLAN_EX)
+                       vlan_count++;
+               else if (lkups[i].type == ICE_VLAN_IN)
+                       vlan_count++;
                else if (lkups[i].type == ICE_ETYPE_OL &&
                         lkups[i].h_u.ethertype.ethtype_id ==
                                cpu_to_be16(ICE_IPV6_ETHER_ID) &&
@@ -5620,6 +5585,9 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
        while (ret->match && (match & ret->match) != ret->match)
                ret++;
 
+       if (vlan_count != 0)
+               ret = ice_dummy_packet_add_vlan(ret, vlan_count);
+
        return ret;
 }
 
@@ -5678,6 +5646,8 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
                        len = sizeof(struct ice_ethtype_hdr);
                        break;
                case ICE_VLAN_OFOS:
+               case ICE_VLAN_EX:
+               case ICE_VLAN_IN:
                        len = sizeof(struct ice_vlan_hdr);
                        break;
                case ICE_IPV4_OFOS:
@@ -5782,6 +5752,36 @@ ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type,
        return -EIO;
 }
 
+/**
+ * ice_fill_adv_packet_vlan - fill dummy packet with VLAN tag type
+ * @vlan_type: VLAN tag type
+ * @pkt: dummy packet to fill in
+ * @offsets: offset info for the dummy packet
+ */
+static int
+ice_fill_adv_packet_vlan(u16 vlan_type, u8 *pkt,
+                        const struct ice_dummy_pkt_offsets *offsets)
+{
+       u16 i;
+
+       /* Find VLAN header and insert VLAN TPID */
+       for (i = 0; offsets[i].type != ICE_PROTOCOL_LAST; i++) {
+               if (offsets[i].type == ICE_VLAN_OFOS ||
+                   offsets[i].type == ICE_VLAN_EX) {
+                       struct ice_vlan_hdr *hdr;
+                       u16 offset;
+
+                       offset = offsets[i].offset;
+                       hdr = (struct ice_vlan_hdr *)&pkt[offset];
+                       hdr->type = cpu_to_be16(vlan_type);
+
+                       return 0;
+               }
+       }
+
+       return -EIO;
+}
+
 /**
  * ice_find_adv_rule_entry - Search a rule entry
  * @hw: pointer to the hardware structure
@@ -5817,6 +5817,7 @@ ice_find_adv_rule_entry(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
                        }
                if (rinfo->sw_act.flag == list_itr->rule_info.sw_act.flag &&
                    rinfo->tun_type == list_itr->rule_info.tun_type &&
+                   rinfo->vlan_type == list_itr->rule_info.vlan_type &&
                    lkups_matched)
                        return list_itr;
        }
@@ -5993,16 +5994,22 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
        /* locate a dummy packet */
        profile = ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type);
+       if (IS_ERR(profile))
+               return PTR_ERR(profile);
 
        if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
              rinfo->sw_act.fltr_act == ICE_FWD_TO_Q ||
              rinfo->sw_act.fltr_act == ICE_FWD_TO_QGRP ||
-             rinfo->sw_act.fltr_act == ICE_DROP_PACKET))
-               return -EIO;
+             rinfo->sw_act.fltr_act == ICE_DROP_PACKET)) {
+               status = -EIO;
+               goto free_pkt_profile;
+       }
 
        vsi_handle = rinfo->sw_act.vsi_handle;
-       if (!ice_is_vsi_valid(hw, vsi_handle))
-               return -EINVAL;
+       if (!ice_is_vsi_valid(hw, vsi_handle)) {
+               status =  -EINVAL;
+               goto free_pkt_profile;
+       }
 
        if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI)
                rinfo->sw_act.fwd_id.hw_vsi_id =
@@ -6012,7 +6019,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
        status = ice_add_adv_recipe(hw, lkups, lkups_cnt, rinfo, &rid);
        if (status)
-               return status;
+               goto free_pkt_profile;
        m_entry = ice_find_adv_rule_entry(hw, lkups, lkups_cnt, rid, rinfo);
        if (m_entry) {
                /* we have to add VSI to VSI_LIST and increment vsi_count.
@@ -6031,12 +6038,14 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
                        added_entry->rule_id = m_entry->rule_info.fltr_rule_id;
                        added_entry->vsi_handle = rinfo->sw_act.vsi_handle;
                }
-               return status;
+               goto free_pkt_profile;
        }
        rule_buf_sz = ICE_SW_RULE_RX_TX_HDR_SIZE(s_rule, profile->pkt_len);
        s_rule = kzalloc(rule_buf_sz, GFP_KERNEL);
-       if (!s_rule)
-               return -ENOMEM;
+       if (!s_rule) {
+               status = -ENOMEM;
+               goto free_pkt_profile;
+       }
        if (!rinfo->flags_info.act_valid) {
                act |= ICE_SINGLE_ACT_LAN_ENABLE;
                act |= ICE_SINGLE_ACT_LB_ENABLE;
@@ -6105,6 +6114,14 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
                        goto err_ice_add_adv_rule;
        }
 
+       if (rinfo->vlan_type != 0 && ice_is_dvm_ena(hw)) {
+               status = ice_fill_adv_packet_vlan(rinfo->vlan_type,
+                                                 s_rule->hdr_data,
+                                                 profile->offsets);
+               if (status)
+                       goto err_ice_add_adv_rule;
+       }
+
        status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
                                 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
                                 NULL);
@@ -6150,6 +6167,13 @@ err_ice_add_adv_rule:
 
        kfree(s_rule);
 
+free_pkt_profile:
+       if (profile->match & ICE_PKT_KMALLOC) {
+               kfree(profile->offsets);
+               kfree(profile->pkt);
+               kfree(profile);
+       }
+
        return status;
 }
 
@@ -6342,7 +6366,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
        /* Create any special protocol/offset pairs, such as looking at tunnel
         * bits by extracting metadata
         */
-       status = ice_add_special_words(rinfo, &lkup_exts);
+       status = ice_add_special_words(rinfo, &lkup_exts, ice_is_dvm_ena(hw));
        if (status)
                return status;
 
index eb641e5..59488e3 100644 (file)
@@ -192,6 +192,7 @@ struct ice_adv_rule_info {
        u32 priority;
        u8 rx; /* true means LOOKUP_RX otherwise LOOKUP_TX */
        u16 fltr_rule_id;
+       u16 vlan_type;
        struct ice_adv_rule_flags_info flags_info;
 };
 
index 0a0c55f..1479515 100644 (file)
@@ -50,6 +50,10 @@ ice_tc_count_lkups(u32 flags, struct ice_tc_flower_lyr_2_4_hdrs *headers,
        if (flags & ICE_TC_FLWR_FIELD_VLAN)
                lkups_cnt++;
 
+       /* is CVLAN specified? */
+       if (flags & ICE_TC_FLWR_FIELD_CVLAN)
+               lkups_cnt++;
+
        /* are IPv[4|6] fields specified? */
        if (flags & (ICE_TC_FLWR_FIELD_DEST_IPV4 | ICE_TC_FLWR_FIELD_SRC_IPV4 |
                     ICE_TC_FLWR_FIELD_DEST_IPV6 | ICE_TC_FLWR_FIELD_SRC_IPV6))
@@ -134,6 +138,18 @@ ice_sw_type_from_tunnel(enum ice_tunnel_type type)
        }
 }
 
+static u16 ice_check_supported_vlan_tpid(u16 vlan_tpid)
+{
+       switch (vlan_tpid) {
+       case ETH_P_8021Q:
+       case ETH_P_8021AD:
+       case ETH_P_QINQ1:
+               return vlan_tpid;
+       default:
+               return 0;
+       }
+}
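
ice_check_supported_vlan_tpid() whitelists the only three TPIDs the switch can
match on. For reference, the values behind these uapi constants (from
include/uapi/linux/if_ether.h) are:

    #define ETH_P_8021Q   0x8100  /* 802.1Q VLAN tag (single or inner) */
    #define ETH_P_8021AD  0x88A8  /* 802.1ad service tag (standard QinQ outer) */
    #define ETH_P_QINQ1   0x9100  /* pre-standard QinQ outer tag */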
+
 static int
 ice_tc_fill_tunnel_outer(u32 flags, struct ice_tc_flower_fltr *fltr,
                         struct ice_adv_lkup_elem *list)
@@ -269,8 +285,11 @@ ice_tc_fill_rules(struct ice_hw *hw, u32 flags,
 {
        struct ice_tc_flower_lyr_2_4_hdrs *headers = &tc_fltr->outer_headers;
        bool inner = false;
+       u16 vlan_tpid = 0;
        int i = 0;
 
+       rule_info->vlan_type = vlan_tpid;
+
        rule_info->tun_type = ice_sw_type_from_tunnel(tc_fltr->tunnel_type);
        if (tc_fltr->tunnel_type != TNL_LAST) {
                i = ice_tc_fill_tunnel_outer(flags, tc_fltr, list);
@@ -311,12 +330,26 @@ ice_tc_fill_rules(struct ice_hw *hw, u32 flags,
 
        /* copy VLAN info */
        if (flags & ICE_TC_FLWR_FIELD_VLAN) {
-               list[i].type = ICE_VLAN_OFOS;
+               vlan_tpid = be16_to_cpu(headers->vlan_hdr.vlan_tpid);
+               rule_info->vlan_type =
+                               ice_check_supported_vlan_tpid(vlan_tpid);
+
+               if (flags & ICE_TC_FLWR_FIELD_CVLAN)
+                       list[i].type = ICE_VLAN_EX;
+               else
+                       list[i].type = ICE_VLAN_OFOS;
                list[i].h_u.vlan_hdr.vlan = headers->vlan_hdr.vlan_id;
                list[i].m_u.vlan_hdr.vlan = cpu_to_be16(0xFFFF);
                i++;
        }
 
+       if (flags & ICE_TC_FLWR_FIELD_CVLAN) {
+               list[i].type = ICE_VLAN_IN;
+               list[i].h_u.vlan_hdr.vlan = headers->cvlan_hdr.vlan_id;
+               list[i].m_u.vlan_hdr.vlan = cpu_to_be16(0xFFFF);
+               i++;
+       }
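
With ICE_VLAN_EX carrying the outer tag and ICE_VLAN_IN the inner one, a
double-tagged (QinQ) match can now be offloaded from tc flower. One way to
exercise it from iproute2 (the device name and tag IDs here are placeholders):

    tc filter add dev $PF ingress protocol 802.1ad flower \
        vlan_id 100 vlan_ethtype 802.1q cvlan_id 200 \
        skip_sw action drop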
+
        /* copy L3 (IPv[4|6]: src, dest) address */
        if (flags & (ICE_TC_FLWR_FIELD_DEST_IPV4 |
                     ICE_TC_FLWR_FIELD_SRC_IPV4)) {
@@ -524,6 +557,7 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
         */
        fltr->rid = rule_added.rid;
        fltr->rule_id = rule_added.rule_id;
+       fltr->dest_id = rule_added.vsi_handle;
 
 exit:
        kfree(list);
@@ -944,6 +978,7 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
              BIT(FLOW_DISSECTOR_KEY_BASIC) |
              BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
              BIT(FLOW_DISSECTOR_KEY_VLAN) |
+             BIT(FLOW_DISSECTOR_KEY_CVLAN) |
              BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
              BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
              BIT(FLOW_DISSECTOR_KEY_ENC_CONTROL) |
@@ -993,7 +1028,9 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
                n_proto_key = ntohs(match.key->n_proto);
                n_proto_mask = ntohs(match.mask->n_proto);
 
-               if (n_proto_key == ETH_P_ALL || n_proto_key == 0) {
+               if (n_proto_key == ETH_P_ALL || n_proto_key == 0 ||
+                   fltr->tunnel_type == TNL_GTPU ||
+                   fltr->tunnel_type == TNL_GTPC) {
                        n_proto_key = 0;
                        n_proto_mask = 0;
                } else {
@@ -1057,6 +1094,34 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
                                cpu_to_be16(match.key->vlan_id & VLAN_VID_MASK);
                if (match.mask->vlan_priority)
                        headers->vlan_hdr.vlan_prio = match.key->vlan_priority;
+               if (match.mask->vlan_tpid)
+                       headers->vlan_hdr.vlan_tpid = match.key->vlan_tpid;
+       }
+
+       if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CVLAN)) {
+               struct flow_match_vlan match;
+
+               if (!ice_is_dvm_ena(&vsi->back->hw)) {
+                       NL_SET_ERR_MSG_MOD(fltr->extack, "Double VLAN mode is not enabled");
+                       return -EINVAL;
+               }
+
+               flow_rule_match_cvlan(rule, &match);
+
+               if (match.mask->vlan_id) {
+                       if (match.mask->vlan_id == VLAN_VID_MASK) {
+                               fltr->flags |= ICE_TC_FLWR_FIELD_CVLAN;
+                       } else {
+                               NL_SET_ERR_MSG_MOD(fltr->extack,
+                                                  "Bad CVLAN mask");
+                               return -EINVAL;
+                       }
+               }
+
+               headers->cvlan_hdr.vlan_id =
+                               cpu_to_be16(match.key->vlan_id & VLAN_VID_MASK);
+               if (match.mask->vlan_priority)
+                       headers->cvlan_hdr.vlan_prio = match.key->vlan_priority;
        }
 
        if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
@@ -1191,7 +1256,7 @@ ice_handle_tclass_action(struct ice_vsi *vsi,
                           ICE_TC_FLWR_FIELD_ENC_DST_MAC)) {
                ether_addr_copy(fltr->outer_headers.l2_key.dst_mac,
                                vsi->netdev->dev_addr);
-               memset(fltr->outer_headers.l2_mask.dst_mac, 0xff, ETH_ALEN);
+               eth_broadcast_addr(fltr->outer_headers.l2_mask.dst_mac);
        }
 
        /* validate specified dest MAC address, make sure either it belongs to
index e25e958..0193874 100644 (file)
@@ -23,6 +23,7 @@
 #define ICE_TC_FLWR_FIELD_ENC_DST_MAC          BIT(16)
 #define ICE_TC_FLWR_FIELD_ETH_TYPE_ID          BIT(17)
 #define ICE_TC_FLWR_FIELD_ENC_OPTS             BIT(18)
+#define ICE_TC_FLWR_FIELD_CVLAN                        BIT(19)
 
 #define ICE_TC_FLOWER_MASK_32   0xFFFFFFFF
 
@@ -40,6 +41,7 @@ struct ice_tc_flower_action {
 struct ice_tc_vlan_hdr {
        __be16 vlan_id; /* Only last 12 bits valid */
        u16 vlan_prio; /* Only last 3 bits valid (valid values: 0..7) */
+       __be16 vlan_tpid;
 };
 
 struct ice_tc_l2_hdr {
@@ -81,6 +83,7 @@ struct ice_tc_flower_lyr_2_4_hdrs {
        struct ice_tc_l2_hdr l2_key;
        struct ice_tc_l2_hdr l2_mask;
        struct ice_tc_vlan_hdr vlan_hdr;
+       struct ice_tc_vlan_hdr cvlan_hdr;
        /* L3 (IPv4[6]) layer fields with their mask */
        struct ice_tc_l3_hdr l3_key;
        struct ice_tc_l3_hdr l3_mask;
index 1b618de..bcda2e0 100644 (file)
@@ -199,7 +199,6 @@ static bool ice_is_dvm_supported(struct ice_hw *hw)
 #define ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX            2
 #define ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX          2
 #define ICE_PKT_FLAGS_0_TO_15_FV_IDX                   1
-#define ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK          0xD000
 static struct ice_update_recipe_lkup_idx_params ice_dvm_dflt_recipes[] = {
        {
                /* Update recipe ICE_SW_LKUP_VLAN to filter based on the
index cbe92fd..8d6e44e 100644 (file)
@@ -2207,7 +2207,7 @@ out:
  *  igb_reset_mdicnfg_82580 - Reset MDICNFG destination and com_mdio bits
  *  @hw: pointer to the HW structure
  *
- *  This resets the the MDICNFG.Destination and MDICNFG.Com_MDIO bits based on
+ *  This resets the MDICNFG.Destination and MDICNFG.Com_MDIO bits based on
  *  the values found in the EEPROM.  This addresses an issue in which these
  *  bits are not restored from EEPROM after reset.
  **/
index 1277c5c..205d577 100644 (file)
@@ -854,7 +854,7 @@ s32 igb_force_mac_fc(struct e1000_hw *hw)
         *      1:  Rx flow control is enabled (we can receive pause
         *          frames but not send pause frames).
         *      2:  Tx flow control is enabled (we can send pause frames
-        *          frames but we do not receive pause frames).
+        *          but we do not receive pause frames).
         *      3:  Both Rx and TX flow control (symmetric) is enabled.
         *  other:  No other values should be possible at this point.
         */
index 68be297..4f91a85 100644 (file)
@@ -1945,7 +1945,7 @@ static void igb_setup_tx_mode(struct igb_adapter *adapter)
                 * However, when we do so, no frame from queue 2 and 3 are
                 * transmitted.  It seems the MAX_TPKT_SIZE should not be great
                 * or _equal_ to the buffer size programmed in TXPBS. For this
-                * reason, we set set MAX_ TPKT_SIZE to (4kB - 1) / 64.
+                * reason, we set MAX_ TPKT_SIZE to (4kB - 1) / 64.
                 */
                val = (4096 - 1) / 64;
                wr32(E1000_I210_DTXMXPKTSZ, val);
@@ -4819,8 +4819,11 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring)
        while (i != tx_ring->next_to_use) {
                union e1000_adv_tx_desc *eop_desc, *tx_desc;
 
-               /* Free all the Tx ring sk_buffs */
-               dev_kfree_skb_any(tx_buffer->skb);
+               /* Free all the Tx ring sk_buffs or xdp frames */
+               if (tx_buffer->type == IGB_TYPE_SKB)
+                       dev_kfree_skb_any(tx_buffer->skb);
+               else
+                       xdp_return_frame(tx_buffer->xdpf);
 
                /* unmap skb header data */
                dma_unmap_single(tx_ring->dev,
@@ -9519,7 +9522,7 @@ static pci_ers_result_t igb_io_error_detected(struct pci_dev *pdev,
                igb_down(adapter);
        pci_disable_device(pdev);
 
-       /* Request a slot slot reset. */
+       /* Request a slot reset. */
        return PCI_ERS_RESULT_NEED_RESET;
 }
 
@@ -9898,11 +9901,10 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
        struct e1000_hw *hw = &adapter->hw;
        u32 dmac_thr;
        u16 hwm;
+       u32 reg;
 
        if (hw->mac.type > e1000_82580) {
                if (adapter->flags & IGB_FLAG_DMAC) {
-                       u32 reg;
-
                        /* force threshold to 0. */
                        wr32(E1000_DMCTXTH, 0);
 
@@ -9935,7 +9937,6 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
                        /* Disable BMC-to-OS Watchdog Enable */
                        if (hw->mac.type != e1000_i354)
                                reg &= ~E1000_DMACR_DC_BMC2OSW_EN;
-
                        wr32(E1000_DMACR, reg);
 
                        /* no lower threshold to disable
@@ -9952,12 +9953,12 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
                         */
                        wr32(E1000_DMCTXTH, (IGB_MIN_TXPBSIZE -
                             (IGB_TX_BUF_4096 + adapter->max_frame_size)) >> 6);
+               }
 
-                       /* make low power state decision controlled
-                        * by DMA coal
-                        */
+               if (hw->mac.type >= e1000_i210 ||
+                   (adapter->flags & IGB_FLAG_DMAC)) {
                        reg = rd32(E1000_PCIEMISC);
-                       reg &= ~E1000_PCIEMISC_LX_DECISION;
+                       reg |= E1000_PCIEMISC_LX_DECISION;
                        wr32(E1000_PCIEMISC, reg);
                } /* endif adapter->dmac is not disabled */
        } else if (hw->mac.type == e1000_82580) {
index 975eb47..57d39ee 100644 (file)
@@ -227,7 +227,7 @@ struct igbvf_adapter {
 
        /* The VF counters don't clear on read so we have to get a base
         * count on driver start up and always subtract that base on
-        * on the first update, thus the flag..
+        * the first update, thus the flag..
         */
        struct e1000_vf_stats stats;
        u64 zero_base;
index 43ced78..f4e91db 100644 (file)
@@ -2537,7 +2537,7 @@ static pci_ers_result_t igbvf_io_error_detected(struct pci_dev *pdev,
                igbvf_down(adapter);
        pci_disable_device(pdev);
 
-       /* Request a slot slot reset. */
+       /* Request a slot reset. */
        return PCI_ERS_RESULT_NEED_RESET;
 }
 
index 67b8ffd..a5c4b19 100644 (file)
@@ -193,7 +193,7 @@ s32 igc_force_mac_fc(struct igc_hw *hw)
         *      1:  Rx flow control is enabled (we can receive pause
         *          frames but not send pause frames).
         *      2:  Tx flow control is enabled (we can send pause frames
-        *          frames but we do not receive pause frames).
+        *          but we do not receive pause frames).
         *      3:  Both Rx and TX flow control (symmetric) is enabled.
         *  other:  No other values should be possible at this point.
         */
index 653e9f1..8dbb9f9 100644 (file)
@@ -15,7 +15,6 @@
 #define INCVALUE_MASK          0x7fffffff
 #define ISGN                   0x80000000
 
-#define IGC_SYSTIM_OVERFLOW_PERIOD     (HZ * 60 * 9)
 #define IGC_PTP_TX_TIMEOUT             (HZ * 15)
 
 #define IGC_PTM_STAT_SLEEP             2
index affdefc..45be9a1 100644 (file)
@@ -1187,7 +1187,7 @@ ixgb_tso(struct ixgb_adapter *adapter, struct sk_buff *skb)
                if (err < 0)
                        return err;
 
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
                mss = skb_shinfo(skb)->gso_size;
                iph = ip_hdr(skb);
                iph->tot_len = 0;
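
The skb_tcp_all_headers() conversions in this and the following drivers rely
on the helper added earlier in this series; it is simply a named form of the
open-coded sum, roughly (paraphrasing include/linux/tcp.h):

    /* Total length of all headers up to and including the TCP header. */
    static inline int skb_tcp_all_headers(const struct sk_buff *skb)
    {
            return skb_transport_offset(skb) + tcp_hdrlen(skb);
    }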
@@ -1704,7 +1704,6 @@ ixgb_update_stats(struct ixgb_adapter *adapter)
        netdev->stats.tx_window_errors = 0;
 }
 
-#define IXGB_MAX_INTR 10
 /**
  * ixgb_intr - Interrupt Handler
  * @irq: interrupt number
index f0cadd5..d40f962 100644 (file)
@@ -141,8 +141,6 @@ IXGB_PARAM(IntDelayEnable, "Transmit Interrupt Delay Enable");
 #define MAX_RDTR                        0xFFFF
 #define MIN_RDTR                             0
 
-#define XSUMRX_DEFAULT          OPTION_ENABLED
-
 #define DEFAULT_FCRTL                  0x28000
 #define DEFAULT_FCRTH                  0x30000
 #define MIN_FCRTL                            0
index 72e6ebf..e85f7d2 100644 (file)
@@ -8,12 +8,10 @@
 #include "ixgbe_sriov.h"
 
 /* Callbacks for DCB netlink in the kernel */
-#define BIT_DCB_MODE   0x01
 #define BIT_PFC                0x02
 #define BIT_PG_RX      0x04
 #define BIT_PG_TX      0x08
 #define BIT_APP_UPCHG  0x10
-#define BIT_LINKSPEED   0x80
 
 /* Responses for the DCB_C_SET_ALL command */
 #define DCB_HW_CHG_RST  0  /* DCB configuration changed with reset */
index 628d0eb..04f453e 100644 (file)
@@ -18,8 +18,6 @@
 #include "ixgbe_phy.h"
 
 
-#define IXGBE_ALL_RAR_ENTRIES 16
-
 enum {NETDEV_STATS, IXGBE_STATS};
 
 struct ixgbe_stats {
index 5c62e99..0493326 100644 (file)
@@ -5161,7 +5161,7 @@ static int ixgbe_hpbthresh(struct ixgbe_adapter *adapter, int pb)
 }
 
 /**
- * ixgbe_lpbthresh - calculate low water mark for for flow control
+ * ixgbe_lpbthresh - calculate low water mark for flow control
  *
  * @adapter: board private structure to calculate for
  * @pb: packet buffer to calculate
index 336426a..27a71fa 100644 (file)
 #define IXGBE_X550_BASE_PERIOD 0xC80000000ULL
 #define INCVALUE_MASK  0x7FFFFFFF
 #define ISGN           0x80000000
-#define MAX_TIMADJ     0x7FFFFFFF
 
 /**
  * ixgbe_ptp_setup_sdp_X540
index e4b50c7..35c2b9b 100644 (file)
@@ -1737,7 +1737,7 @@ static s32 ixgbe_setup_sfi_x550a(struct ixgbe_hw *hw, ixgbe_link_speed *speed)
  * @speed: link speed
  * @autoneg_wait_to_complete: unused
  *
- * Configure the the integrated PHY for native SFP support.
+ * Configure the integrated PHY for native SFP support.
  */
 static s32
 ixgbe_setup_mac_link_sfp_n(struct ixgbe_hw *hw, ixgbe_link_speed speed,
@@ -1786,7 +1786,7 @@ ixgbe_setup_mac_link_sfp_n(struct ixgbe_hw *hw, ixgbe_link_speed speed,
  * @speed: link speed
  * @autoneg_wait_to_complete: unused
  *
- * Configure the the integrated PHY for SFP support.
+ * Configure the integrated PHY for SFP support.
  */
 static s32
 ixgbe_setup_mac_link_sfp_x550a(struct ixgbe_hw *hw, ixgbe_link_speed speed,
index 3b41f83..fed4687 100644 (file)
@@ -17,8 +17,6 @@
 
 #include "ixgbevf.h"
 
-#define IXGBE_ALL_RAR_ENTRIES 16
-
 enum {NETDEV_STATS, IXGBEVF_STATS};
 
 struct ixgbe_stats {
@@ -130,8 +128,6 @@ static void ixgbevf_set_msglevel(struct net_device *netdev, u32 data)
        adapter->msg_enable = data;
 }
 
-#define IXGBE_GET_STAT(_A_, _R_) (_A_->stats._R_)
-
 static int ixgbevf_get_regs_len(struct net_device *netdev)
 {
 #define IXGBE_REGS_LEN 45
index 55b87bc..2f12fbe 100644 (file)
@@ -4787,7 +4787,7 @@ static pci_ers_result_t ixgbevf_io_error_detected(struct pci_dev *pdev,
                pci_disable_device(pdev);
        rtnl_unlock();
 
-       /* Request a slot slot reset. */
+       /* Request a slot reset. */
        return PCI_ERS_RESULT_NEED_RESET;
 }
 
index 68fc32e..1641d00 100644 (file)
@@ -964,7 +964,7 @@ int ixgbevf_get_queues(struct ixgbe_hw *hw, unsigned int *num_tcs,
        if (!err) {
                msg[0] &= ~IXGBE_VT_MSGTYPE_CTS;
 
-               /* if we we didn't get an ACK there must have been
+               /* if we didn't get an ACK there must have been
                 * some sort of mailbox error so we should treat it
                 * as such
                 */
index 57eff4e..b6be055 100644 (file)
@@ -775,7 +775,7 @@ txq_put_hdr_tso(struct sk_buff *skb, struct tx_queue *txq, int length,
                u32 *first_cmd_sts, bool first_desc)
 {
        struct mv643xx_eth_private *mp = txq_to_mp(txq);
-       int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       int hdr_len = skb_tcp_all_headers(skb);
        int tx_index;
        struct tx_desc *desc;
        int ret;
index 384f5a1..0caa2df 100644 (file)
@@ -2664,8 +2664,8 @@ err_drop_frame:
 static inline void
 mvneta_tso_put_hdr(struct sk_buff *skb, struct mvneta_tx_queue *txq)
 {
-       int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
        struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
+       int hdr_len = skb_tcp_all_headers(skb);
        struct mvneta_tx_desc *tx_desc;
 
        tx_desc = mvneta_txq_next_desc_get(txq);
@@ -2727,7 +2727,7 @@ static int mvneta_tx_tso(struct sk_buff *skb, struct net_device *dev,
        if ((txq->count + tso_count_descs(skb)) >= txq->size)
                return 0;
 
-       if (skb_headlen(skb) < (skb_transport_offset(skb) + tcp_hdrlen(skb))) {
+       if (skb_headlen(skb) < skb_tcp_all_headers(skb)) {
                pr_info("*** Is this even possible?\n");
                return 0;
        }
index cc51149..3d5d39a 100644 (file)
@@ -52,7 +52,7 @@
 
 #define    CN93_SDP_EPF_RINFO_SRN(val)           ((val) & 0xFF)
 #define    CN93_SDP_EPF_RINFO_RPVF(val)          (((val) >> 32) & 0xF)
-#define    CN93_SDP_EPF_RINFO_NVFS(val)          (((val) >> 48) && 0xFF)
+#define    CN93_SDP_EPF_RINFO_NVFS(val)          (((val) >> 48) & 0xFF)
 
 /* SDP Function select */
 #define    CN93_SDP_FUNC_SEL_EPF_BIT_POS         8
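
The one-character fix above matters because && is a logical operator: the old
macro collapsed the 8-bit VF-count field to 0 or 1 instead of extracting it.
A standalone illustration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t rinfo = (uint64_t)5 << 48;     /* hypothetical: 5 VFs */

            /* logical AND: any nonzero field collapses to 1 */
            printf("%llu\n", (unsigned long long)((rinfo >> 48) && 0xFF));
            /* bitwise AND: extracts the real field value, 5 */
            printf("%llu\n", (unsigned long long)((rinfo >> 48) & 0xFF));
            return 0;
    }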
index 25491ed..931a1a7 100644 (file)
@@ -847,6 +847,11 @@ static void cgx_lmac_pause_frm_config(void *cgxd, int lmac_id, bool enable)
        cfg |= CGX_CMR_RX_OVR_BP_EN(lmac_id);
        cfg &= ~CGX_CMR_RX_OVR_BP_BP(lmac_id);
        cgx_write(cgx, 0, CGXX_CMR_RX_OVR_BP, cfg);
+
+       /* Disable all PFC classes by default */
+       cfg = cgx_read(cgx, lmac_id, CGXX_SMUX_CBFC_CTL);
+       cfg = FIELD_SET(CGX_PFC_CLASS_MASK, 0, cfg);
+       cgx_write(cgx, lmac_id, CGXX_SMUX_CBFC_CTL, cfg);
 }
 
 int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
@@ -899,6 +904,7 @@ int cgx_lmac_pfc_config(void *cgxd, int lmac_id, u8 tx_pause,
                return 0;
 
        cfg = cgx_read(cgx, lmac_id, CGXX_SMUX_CBFC_CTL);
+       pfc_en |= FIELD_GET(CGX_PFC_CLASS_MASK, cfg);
 
        if (rx_pause) {
                cfg |= (CGXX_SMUX_CBFC_CTL_RX_EN |
@@ -910,12 +916,13 @@ int cgx_lmac_pfc_config(void *cgxd, int lmac_id, u8 tx_pause,
                        CGXX_SMUX_CBFC_CTL_DRP_EN);
        }
 
-       if (tx_pause)
+       if (tx_pause) {
                cfg |= CGXX_SMUX_CBFC_CTL_TX_EN;
-       else
+               cfg = FIELD_SET(CGX_PFC_CLASS_MASK, pfc_en, cfg);
+       } else {
                cfg &= ~CGXX_SMUX_CBFC_CTL_TX_EN;
-
-       cfg = FIELD_SET(CGX_PFC_CLASS_MASK, pfc_en, cfg);
+               cfg = FIELD_SET(CGX_PFC_CLASS_MASK, 0, cfg);
+       }
 
        cgx_write(cgx, lmac_id, CGXX_SMUX_CBFC_CTL, cfg);
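
Both the CGX and RPM (below) variants of this fix follow the same recipe:
read back the PFC classes already enabled in hardware, merge them with the
caller's pfc_en, and only clear the field when TX pause is being disabled, so
configuring one traffic class no longer clobbers another. With the generic
bitfield helpers the merge step is roughly this (the field position is a
placeholder, not the real register layout):

    #include <linux/bitfield.h>

    #define DEMO_PFC_CLASS_MASK     GENMASK_ULL(47, 32)     /* placeholder */

    static u64 demo_pfc_merge(u64 reg, u16 pfc_en)
    {
            /* keep whatever classes are already enabled in hardware */
            pfc_en |= FIELD_GET(DEMO_PFC_CLASS_MASK, reg);

            /* clear the field, then write back the merged class set */
            reg &= ~DEMO_PFC_CLASS_MASK;
            reg |= FIELD_PREP(DEMO_PFC_CLASS_MASK, pfc_en);
            return reg;
    }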
 
index 47e83d7..0566692 100644 (file)
@@ -276,6 +276,11 @@ void rpm_lmac_pause_frm_config(void *rpmd, int lmac_id, bool enable)
        cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
        cfg |= RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE;
        rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
+
+       /* Disable all PFC classes */
+       cfg = rpm_read(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL);
+       cfg = FIELD_SET(RPM_PFC_CLASS_MASK, 0, cfg);
+       rpm_write(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL, cfg);
 }
 
 int rpm_get_rx_stats(void *rpmd, int lmac_id, int idx, u64 *rx_stat)
@@ -387,15 +392,14 @@ void rpm_lmac_ptp_config(void *rpmd, int lmac_id, bool enable)
 int rpm_lmac_pfc_config(void *rpmd, int lmac_id, u8 tx_pause, u8 rx_pause, u16 pfc_en)
 {
        rpm_t *rpm = rpmd;
-       u64 cfg;
+       u64 cfg, class_en;
 
        if (!is_lmac_valid(rpm, lmac_id))
                return -ENODEV;
 
-       /* reset PFC class quanta and threshold */
-       rpm_cfg_pfc_quanta_thresh(rpm, lmac_id, 0xffff, false);
-
        cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
+       class_en = rpm_read(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL);
+       pfc_en |= FIELD_GET(RPM_PFC_CLASS_MASK, class_en);
 
        if (rx_pause) {
                cfg &= ~(RPMX_MTI_MAC100X_COMMAND_CONFIG_RX_P_DISABLE |
@@ -410,9 +414,11 @@ int rpm_lmac_pfc_config(void *rpmd, int lmac_id, u8 tx_pause, u8 rx_pause, u16 p
        if (tx_pause) {
                rpm_cfg_pfc_quanta_thresh(rpm, lmac_id, pfc_en, true);
                cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE;
+               class_en = FIELD_SET(RPM_PFC_CLASS_MASK, pfc_en, class_en);
        } else {
                rpm_cfg_pfc_quanta_thresh(rpm, lmac_id, 0xfff, false);
                cfg |= RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE;
+               class_en = FIELD_SET(RPM_PFC_CLASS_MASK, 0, class_en);
        }
 
        if (!rx_pause && !tx_pause)
@@ -422,9 +428,7 @@ int rpm_lmac_pfc_config(void *rpmd, int lmac_id, u8 tx_pause, u8 rx_pause, u16 p
 
        rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
 
-       cfg = rpm_read(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL);
-       cfg = FIELD_SET(RPM_PFC_CLASS_MASK, pfc_en, cfg);
-       rpm_write(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL, cfg);
+       rpm_write(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL, class_en);
 
        return 0;
 }
index 9ab8d49..8205f26 100644 (file)
@@ -48,7 +48,6 @@
 #define RPMX_MTI_MAC100X_CL1011_QUANTA_THRESH          0x8130
 #define RPMX_MTI_MAC100X_CL1213_QUANTA_THRESH          0x8138
 #define RPMX_MTI_MAC100X_CL1415_QUANTA_THRESH          0x8140
-#define RPM_DEFAULT_PAUSE_TIME                 0xFFFF
 #define RPMX_CMR_RX_OVR_BP             0x4120
 #define RPMX_CMR_RX_OVR_BP_EN(x)       BIT_ULL((x) + 8)
 #define RPMX_CMR_RX_OVR_BP_BP(x)       BIT_ULL((x) + 4)
@@ -70,7 +69,7 @@
 #define RPMX_MTI_MAC100X_COMMAND_CONFIG_PAUSE_FWD              BIT_ULL(7)
 #define RPMX_MTI_MAC100X_CL01_PAUSE_QUANTA              0x80A8
 #define RPMX_MTI_MAC100X_CL89_PAUSE_QUANTA             0x8108
-#define RPM_DEFAULT_PAUSE_TIME                          0xFFFF
+#define RPM_DEFAULT_PAUSE_TIME                          0x7FF
 
 /* Function Declarations */
 int rpm_get_nr_lmacs(void *rpmd);
index a9da85e..38bbae5 100644 (file)
@@ -17,7 +17,7 @@
 #define        PCI_DEVID_OTX2_CPT10K_PF 0xA0F2
 
 /* Length of initial context fetch in 128 byte words */
-#define CPT_CTX_ILEN    2
+#define CPT_CTX_ILEN    2ULL
 
 #define cpt_get_eng_sts(e_min, e_max, rsp, etype)                   \
 ({                                                                  \
@@ -480,7 +480,7 @@ static int cpt_inline_ipsec_cfg_inbound(struct rvu *rvu, int blkaddr, u8 cptlf,
         */
        if (!is_rvu_otx2(rvu)) {
                val = (ilog2(NIX_CHAN_CPT_X2P_MASK + 1) << 16);
-               val |= rvu->hw->cpt_chan_base;
+               val |= (u64)rvu->hw->cpt_chan_base;
 
                rvu_write64(rvu, blkaddr, CPT_AF_X2PX_LINK_CFG(0), val);
                rvu_write64(rvu, blkaddr, CPT_AF_X2PX_LINK_CFG(1), val);
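
The 2ULL and (u64) changes in these hunks guard against C's default int
arithmetic: an expression built from 32-bit operands is evaluated in 32 bits
first, so shifting it toward the top of a 64-bit register truncates (or is
outright undefined) before the widening assignment happens. For example:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t base = 0x80000000u;

            /* promoted to 64 bits before the shift: 0x8000000000000000 */
            uint64_t wide = (uint64_t)base << 32;

            /* without the cast, "base << 32" would be a 32-bit shift by
             * the full word width: undefined behavior, data lost
             */
            printf("%llx\n", (unsigned long long)wide);
            return 0;
    }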
index 3a31fb8..e05fd2b 100644 (file)
@@ -2534,7 +2534,7 @@ alloc:
 
        /* Copy MCAM entry indices into mbox response entry_list.
         * Requester always expects indices in ascending order, so
-        * so reverse the list if reverse bitmap is used for allocation.
+        * reverse the list if reverse bitmap is used for allocation.
         */
        if (!req->contig && rsp->count) {
                index = 0;
index 3baeafc..a18e8ef 100644 (file)
@@ -624,7 +624,7 @@ static void otx2_sqe_add_ext(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
        ext->subdc = NIX_SUBDC_EXT;
        if (skb_shinfo(skb)->gso_size) {
                ext->lso = 1;
-               ext->lso_sb = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               ext->lso_sb = skb_tcp_all_headers(skb);
                ext->lso_mps = skb_shinfo(skb)->gso_size;
 
                /* Only TSOv4 and TSOv6 GSO offloads are supported */
@@ -931,7 +931,7 @@ static bool is_hw_tso_supported(struct otx2_nic *pfvf,
         * be correctly modified, hence don't offload such TSO segments.
         */
 
-       payload_len = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+       payload_len = skb->len - skb_tcp_all_headers(skb);
        last_seg_size = payload_len % skb_shinfo(skb)->gso_size;
        if (last_seg_size && last_seg_size < 16)
                return false;
index 6f754ae..0bb46ee 100644 (file)
@@ -107,7 +107,8 @@ struct prestera_port_phy_config {
 struct prestera_port {
        struct net_device *dev;
        struct prestera_switch *sw;
-       struct prestera_flow_block *flow_block;
+       struct prestera_flow_block *ingress_flow_block;
+       struct prestera_flow_block *egress_flow_block;
        struct devlink_port dl_port;
        struct list_head lag_member;
        struct prestera_lag *lag;
index 3a141f2..3d4b85f 100644 (file)
@@ -61,6 +61,7 @@ struct prestera_acl_ruleset {
        u32 index;
        u16 pcl_id;
        bool offload;
+       bool ingress;
 };
 
 struct prestera_acl_vtcam {
@@ -70,6 +71,7 @@ struct prestera_acl_vtcam {
        u32 id;
        bool is_keymask_set;
        u8 lookup;
+       u8 direction;
 };
 
 static const struct rhashtable_params prestera_acl_ruleset_ht_params = {
@@ -93,23 +95,36 @@ static const struct rhashtable_params __prestera_acl_rule_entry_ht_params = {
        .automatic_shrinking = true,
 };
 
-int prestera_acl_chain_to_client(u32 chain_index, u32 *client)
+int prestera_acl_chain_to_client(u32 chain_index, bool ingress, u32 *client)
 {
-       static const u32 client_map[] = {
-               PRESTERA_HW_COUNTER_CLIENT_LOOKUP_0,
-               PRESTERA_HW_COUNTER_CLIENT_LOOKUP_1,
-               PRESTERA_HW_COUNTER_CLIENT_LOOKUP_2
+       static const u32 ingress_client_map[] = {
+               PRESTERA_HW_COUNTER_CLIENT_INGRESS_LOOKUP_0,
+               PRESTERA_HW_COUNTER_CLIENT_INGRESS_LOOKUP_1,
+               PRESTERA_HW_COUNTER_CLIENT_INGRESS_LOOKUP_2
        };
 
-       if (chain_index >= ARRAY_SIZE(client_map))
+       if (!ingress) {
+               /* prestera supports only one chain on egress */
+               if (chain_index > 0)
+                       return -EINVAL;
+
+               *client = PRESTERA_HW_COUNTER_CLIENT_EGRESS_LOOKUP;
+               return 0;
+       }
+
+       if (chain_index >= ARRAY_SIZE(ingress_client_map))
                return -EINVAL;
 
-       *client = client_map[chain_index];
+       *client = ingress_client_map[chain_index];
        return 0;
 }
 
-static bool prestera_acl_chain_is_supported(u32 chain_index)
+static bool prestera_acl_chain_is_supported(u32 chain_index, bool ingress)
+static bool prestera_acl_chain_is_supported(u32 chain_index, bool ingress)
 {
+       if (!ingress)
+               /* prestera supports only one chain on egress */
+               return chain_index == 0;
+
        return (chain_index & ~PRESTERA_ACL_CHAIN_MASK) == 0;
 }
 
@@ -122,7 +137,7 @@ prestera_acl_ruleset_create(struct prestera_acl *acl,
        u32 uid = 0;
        int err;
 
-       if (!prestera_acl_chain_is_supported(chain_index))
+       if (!prestera_acl_chain_is_supported(chain_index, block->ingress))
                return ERR_PTR(-EINVAL);
 
        ruleset = kzalloc(sizeof(*ruleset), GFP_KERNEL);
@@ -130,6 +145,7 @@ prestera_acl_ruleset_create(struct prestera_acl *acl,
                return ERR_PTR(-ENOMEM);
 
        ruleset->acl = acl;
+       ruleset->ingress = block->ingress;
        ruleset->ht_key.block = block;
        ruleset->ht_key.chain_index = chain_index;
        refcount_set(&ruleset->refcount, 1);
@@ -172,13 +188,18 @@ int prestera_acl_ruleset_offload(struct prestera_acl_ruleset *ruleset)
 {
        struct prestera_acl_iface iface;
        u32 vtcam_id;
+       int dir;
        int err;
 
+       dir = ruleset->ingress ?
+               PRESTERA_HW_VTCAM_DIR_INGRESS : PRESTERA_HW_VTCAM_DIR_EGRESS;
+
        if (ruleset->offload)
                return -EEXIST;
 
        err = prestera_acl_vtcam_id_get(ruleset->acl,
                                        ruleset->ht_key.chain_index,
+                                       dir,
                                        ruleset->keymask, &vtcam_id);
        if (err)
                goto err_vtcam_create;
@@ -719,7 +740,7 @@ vtcam_found:
        return 0;
 }
 
-int prestera_acl_vtcam_id_get(struct prestera_acl *acl, u8 lookup,
+int prestera_acl_vtcam_id_get(struct prestera_acl *acl, u8 lookup, u8 dir,
                              void *keymask, u32 *vtcam_id)
 {
        struct prestera_acl_vtcam *vtcam;
@@ -731,7 +752,8 @@ int prestera_acl_vtcam_id_get(struct prestera_acl *acl, u8 lookup,
         * fine for now
         */
        list_for_each_entry(vtcam, &acl->vtcam_list, list) {
-               if (lookup != vtcam->lookup)
+               if (lookup != vtcam->lookup ||
+                   dir != vtcam->direction)
                        continue;
 
                if (!keymask && !vtcam->is_keymask_set) {
@@ -752,7 +774,7 @@ int prestera_acl_vtcam_id_get(struct prestera_acl *acl, u8 lookup,
                return -ENOMEM;
 
        err = prestera_hw_vtcam_create(acl->sw, lookup, keymask, &new_vtcam_id,
-                                      PRESTERA_HW_VTCAM_DIR_INGRESS);
+                                      dir);
        if (err) {
                kfree(vtcam);
 
@@ -765,6 +787,7 @@ int prestera_acl_vtcam_id_get(struct prestera_acl *acl, u8 lookup,
                return 0;
        }
 
+       vtcam->direction = dir;
        vtcam->id = new_vtcam_id;
        vtcam->lookup = lookup;
        if (keymask) {
index f963e1e..03fc5b9 100644 (file)
@@ -199,9 +199,9 @@ void
 prestera_acl_rule_keymask_pcl_id_set(struct prestera_acl_rule *rule,
                                     u16 pcl_id);
 
-int prestera_acl_vtcam_id_get(struct prestera_acl *acl, u8 lookup,
+int prestera_acl_vtcam_id_get(struct prestera_acl *acl, u8 lookup, u8 dir,
                              void *keymask, u32 *vtcam_id);
 int prestera_acl_vtcam_id_put(struct prestera_acl *acl, u32 vtcam_id);
-int prestera_acl_chain_to_client(u32 chain_index, u32 *client);
+int prestera_acl_chain_to_client(u32 chain_index, bool ingress, u32 *client);
 
 #endif /* _PRESTERA_ACL_H_ */
index 05c3ad9..2262693 100644 (file)
@@ -75,7 +75,9 @@ static void prestera_flow_block_destroy(void *cb_priv)
 }
 
 static struct prestera_flow_block *
-prestera_flow_block_create(struct prestera_switch *sw, struct net *net)
+prestera_flow_block_create(struct prestera_switch *sw,
+                          struct net *net,
+                          bool ingress)
 {
        struct prestera_flow_block *block;
 
@@ -87,6 +89,7 @@ prestera_flow_block_create(struct prestera_switch *sw, struct net *net)
        INIT_LIST_HEAD(&block->template_list);
        block->net = net;
        block->sw = sw;
+       block->ingress = ingress;
 
        return block;
 }
@@ -165,7 +168,8 @@ static int prestera_flow_block_unbind(struct prestera_flow_block *block,
 static struct prestera_flow_block *
 prestera_flow_block_get(struct prestera_switch *sw,
                        struct flow_block_offload *f,
-                       bool *register_block)
+                       bool *register_block,
+                       bool ingress)
 {
        struct prestera_flow_block *block;
        struct flow_block_cb *block_cb;
@@ -173,7 +177,7 @@ prestera_flow_block_get(struct prestera_switch *sw,
        block_cb = flow_block_cb_lookup(f->block,
                                        prestera_flow_block_cb, sw);
        if (!block_cb) {
-               block = prestera_flow_block_create(sw, f->net);
+               block = prestera_flow_block_create(sw, f->net, ingress);
                if (!block)
                        return ERR_PTR(-ENOMEM);
 
@@ -209,7 +213,7 @@ static void prestera_flow_block_put(struct prestera_flow_block *block)
 }
 
 static int prestera_setup_flow_block_bind(struct prestera_port *port,
-                                         struct flow_block_offload *f)
+                                         struct flow_block_offload *f, bool ingress)
 {
        struct prestera_switch *sw = port->sw;
        struct prestera_flow_block *block;
@@ -217,7 +221,7 @@ static int prestera_setup_flow_block_bind(struct prestera_port *port,
        bool register_block;
        int err;
 
-       block = prestera_flow_block_get(sw, f, &register_block);
+       block = prestera_flow_block_get(sw, f, &register_block, ingress);
        if (IS_ERR(block))
                return PTR_ERR(block);
 
@@ -232,7 +236,11 @@ static int prestera_setup_flow_block_bind(struct prestera_port *port,
                list_add_tail(&block_cb->driver_list, &prestera_block_cb_list);
        }
 
-       port->flow_block = block;
+       if (ingress)
+               port->ingress_flow_block = block;
+       else
+               port->egress_flow_block = block;
+
        return 0;
 
 err_block_bind:
@@ -242,7 +250,7 @@ err_block_bind:
 }
 
 static void prestera_setup_flow_block_unbind(struct prestera_port *port,
-                                            struct flow_block_offload *f)
+                                            struct flow_block_offload *f, bool ingress)
 {
        struct prestera_switch *sw = port->sw;
        struct prestera_flow_block *block;
@@ -266,24 +274,38 @@ static void prestera_setup_flow_block_unbind(struct prestera_port *port,
                list_del(&block_cb->driver_list);
        }
 error:
-       port->flow_block = NULL;
+       if (ingress)
+               port->ingress_flow_block = NULL;
+       else
+               port->egress_flow_block = NULL;
 }
 
-int prestera_flow_block_setup(struct prestera_port *port,
-                             struct flow_block_offload *f)
+static int prestera_setup_flow_block_clsact(struct prestera_port *port,
+                                           struct flow_block_offload *f,
+                                           bool ingress)
 {
-       if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-               return -EOPNOTSUPP;
-
        f->driver_block_list = &prestera_block_cb_list;
 
        switch (f->command) {
        case FLOW_BLOCK_BIND:
-               return prestera_setup_flow_block_bind(port, f);
+               return prestera_setup_flow_block_bind(port, f, ingress);
        case FLOW_BLOCK_UNBIND:
-               prestera_setup_flow_block_unbind(port, f);
+               prestera_setup_flow_block_unbind(port, f, ingress);
                return 0;
        default:
                return -EOPNOTSUPP;
        }
 }
+
+int prestera_flow_block_setup(struct prestera_port *port,
+                             struct flow_block_offload *f)
+{
+       switch (f->binder_type) {
+       case FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS:
+               return prestera_setup_flow_block_clsact(port, f, true);
+       case FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS:
+               return prestera_setup_flow_block_clsact(port, f, false);
+       default:
+               return -EOPNOTSUPP;
+       }
+}
index 6550278..0c9e132 100644 (file)
@@ -23,6 +23,7 @@ struct prestera_flow_block {
        struct flow_block_cb *block_cb;
        struct list_head template_list;
        unsigned int rule_count;
+       bool ingress;
 };
 
 int prestera_flow_block_setup(struct prestera_port *port,
index d43e503..a54748a 100644 (file)
@@ -79,7 +79,7 @@ static int prestera_flower_parse_actions(struct prestera_flow_block *block,
        } else if (act->hw_stats & FLOW_ACTION_HW_STATS_DELAYED) {
                /* setup counter first */
                rule->re_arg.count.valid = true;
-               err = prestera_acl_chain_to_client(chain_index,
+               err = prestera_acl_chain_to_client(chain_index, block->ingress,
                                                   &rule->re_arg.count.client);
                if (err)
                        return err;
index 579d9ba..aa74f66 100644 (file)
@@ -123,9 +123,10 @@ enum prestera_hw_vtcam_direction_t {
 };
 
 enum {
-       PRESTERA_HW_COUNTER_CLIENT_LOOKUP_0 = 0,
-       PRESTERA_HW_COUNTER_CLIENT_LOOKUP_1 = 1,
-       PRESTERA_HW_COUNTER_CLIENT_LOOKUP_2 = 2,
+       PRESTERA_HW_COUNTER_CLIENT_INGRESS_LOOKUP_0 = 0,
+       PRESTERA_HW_COUNTER_CLIENT_INGRESS_LOOKUP_1 = 1,
+       PRESTERA_HW_COUNTER_CLIENT_INGRESS_LOOKUP_2 = 2,
+       PRESTERA_HW_COUNTER_CLIENT_EGRESS_LOOKUP = 3,
 };
 
 struct prestera_switch;
index a1e907c..bbea545 100644 (file)
@@ -1863,7 +1863,7 @@ static netdev_tx_t sky2_xmit_frame(struct sk_buff *skb,
        if (mss != 0) {
 
                if (!(hw->flags & SKY2_HW_NEW_LE))
-                       mss += ETH_HLEN + ip_hdrlen(skb) + tcp_hdrlen(skb);
+                       mss += skb_tcp_all_headers(skb);
 
                if (mss != sky2->tx_last_mss) {
                        le = get_tx_le(sky2, &slot);
@@ -4711,7 +4711,7 @@ static irqreturn_t sky2_test_intr(int irq, void *dev_id)
        return IRQ_HANDLED;
 }
 
-/* Test interrupt path by forcing a a software IRQ */
+/* Test interrupt path by forcing a software IRQ */
 static int sky2_test_msi(struct sky2_hw *hw)
 {
        struct pci_dev *pdev = hw->pdev;
index 95839fd..3f0e5e6 100644 (file)
@@ -17,6 +17,7 @@
 #include <linux/module.h>
 #include <linux/netdevice.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 #include <linux/of_mdio.h>
 #include <linux/of_net.h>
 #include <linux/platform_device.h>
@@ -32,6 +33,7 @@
 #define MTK_STAR_SKB_ALIGNMENT                 16
 #define MTK_STAR_HASHTABLE_MC_LIMIT            256
 #define MTK_STAR_HASHTABLE_SIZE_MAX            512
+#define MTK_STAR_DESC_NEEDED                   (MAX_SKB_FRAGS + 4)
 
 /* Normally we'd use NET_IP_ALIGN but on arm64 its value is 0 and it doesn't
  * work for this controller.
@@ -129,6 +131,11 @@ static const char *const mtk_star_clk_names[] = { "core", "reg", "trans" };
 #define MTK_STAR_REG_INT_MASK                  0x0054
 #define MTK_STAR_BIT_INT_MASK_FNRC             BIT(6)
 
+/* Delay-Macro Register */
+#define MTK_STAR_REG_TEST0                     0x0058
+#define MTK_STAR_BIT_INV_RX_CLK                        BIT(30)
+#define MTK_STAR_BIT_INV_TX_CLK                        BIT(31)
+
 /* Misc. Config Register */
 #define MTK_STAR_REG_TEST1                     0x005c
 #define MTK_STAR_BIT_TEST1_RST_HASH_MBIST      BIT(31)
@@ -149,6 +156,7 @@ static const char *const mtk_star_clk_names[] = { "core", "reg", "trans" };
 #define MTK_STAR_REG_MAC_CLK_CONF              0x00ac
 #define MTK_STAR_MSK_MAC_CLK_CONF              GENMASK(7, 0)
 #define MTK_STAR_BIT_CLK_DIV_10                        0x0a
+#define MTK_STAR_BIT_CLK_DIV_50                        0x32
 
 /* Counter registers. */
 #define MTK_STAR_REG_C_RXOKPKT                 0x0100
@@ -181,9 +189,14 @@ static const char *const mtk_star_clk_names[] = { "core", "reg", "trans" };
 #define MTK_STAR_REG_C_RX_TWIST                        0x0218
 
 /* Ethernet CFG Control */
-#define MTK_PERICFG_REG_NIC_CFG_CON            0x03c4
-#define MTK_PERICFG_MSK_NIC_CFG_CON_CFG_MII    GENMASK(3, 0)
-#define MTK_PERICFG_BIT_NIC_CFG_CON_RMII       BIT(0)
+#define MTK_PERICFG_REG_NIC_CFG0_CON           0x03c4
+#define MTK_PERICFG_REG_NIC_CFG1_CON           0x03c8
+#define MTK_PERICFG_REG_NIC_CFG_CON_V2         0x0c10
+#define MTK_PERICFG_REG_NIC_CFG_CON_CFG_INTF   GENMASK(3, 0)
+#define MTK_PERICFG_BIT_NIC_CFG_CON_MII                0
+#define MTK_PERICFG_BIT_NIC_CFG_CON_RMII       1
+#define MTK_PERICFG_BIT_NIC_CFG_CON_CLK                BIT(0)
+#define MTK_PERICFG_BIT_NIC_CFG_CON_CLK_V2     BIT(8)
 
 /* Represents the actual structure of descriptors used by the MAC. We can
  * reuse the same structure for both TX and RX - the layout is the same, only
@@ -216,7 +229,8 @@ struct mtk_star_ring_desc_data {
        struct sk_buff *skb;
 };
 
-#define MTK_STAR_RING_NUM_DESCS                        128
+#define MTK_STAR_RING_NUM_DESCS                        512
+#define MTK_STAR_TX_THRESH                     (MTK_STAR_RING_NUM_DESCS / 4)
 #define MTK_STAR_NUM_TX_DESCS                  MTK_STAR_RING_NUM_DESCS
 #define MTK_STAR_NUM_RX_DESCS                  MTK_STAR_RING_NUM_DESCS
 #define MTK_STAR_NUM_DESCS_TOTAL               (MTK_STAR_RING_NUM_DESCS * 2)
@@ -231,6 +245,11 @@ struct mtk_star_ring {
        unsigned int tail;
 };
 
+struct mtk_star_compat {
+       int (*set_interface_mode)(struct net_device *ndev);
+       unsigned char bit_clk_div;
+};
+
 struct mtk_star_priv {
        struct net_device *ndev;
 
@@ -246,7 +265,8 @@ struct mtk_star_priv {
        struct mtk_star_ring rx_ring;
 
        struct mii_bus *mii;
-       struct napi_struct napi;
+       struct napi_struct tx_napi;
+       struct napi_struct rx_napi;
 
        struct device_node *phy_node;
        phy_interface_t phy_intf;
@@ -255,6 +275,11 @@ struct mtk_star_priv {
        int speed;
        int duplex;
        int pause;
+       bool rmii_rxc;
+       bool rx_inv;
+       bool tx_inv;
+
+       const struct mtk_star_compat *compat_data;
 
        /* Protects against concurrent descriptor access. */
        spinlock_t lock;
@@ -357,19 +382,16 @@ mtk_star_ring_push_head_tx(struct mtk_star_ring *ring,
        mtk_star_ring_push_head(ring, desc_data, flags);
 }
 
-static unsigned int mtk_star_ring_num_used_descs(struct mtk_star_ring *ring)
+static unsigned int mtk_star_tx_ring_avail(struct mtk_star_ring *ring)
 {
-       return abs(ring->head - ring->tail);
-}
+       u32 avail;
 
-static bool mtk_star_ring_full(struct mtk_star_ring *ring)
-{
-       return mtk_star_ring_num_used_descs(ring) == MTK_STAR_RING_NUM_DESCS;
-}
+       if (ring->tail > ring->head)
+               avail = ring->tail - ring->head - 1;
+       else
+               avail = MTK_STAR_RING_NUM_DESCS - ring->head + ring->tail - 1;
 
-static bool mtk_star_ring_descs_available(struct mtk_star_ring *ring)
-{
-       return mtk_star_ring_num_used_descs(ring) > 0;
+       return avail;
 }
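
mtk_star_tx_ring_avail() replaces the old abs(head - tail) bookkeeping with
the classic circular-buffer formula, which sacrifices one slot so that
head == tail always means "empty" rather than "maybe full". In the general
case, for a producer index head and a consumer index tail:

    /* Free slots in a ring of size entries; one slot stays unused so the
     * full and empty states remain distinguishable.
     */
    static unsigned int ring_space(unsigned int head, unsigned int tail,
                                   unsigned int size)
    {
            if (tail > head)
                    return tail - head - 1;
            return size - head + tail - 1;
    }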
 
 static dma_addr_t mtk_star_dma_map_rx(struct mtk_star_priv *priv,
@@ -414,6 +436,36 @@ static void mtk_star_nic_disable_pd(struct mtk_star_priv *priv)
                          MTK_STAR_BIT_MAC_CFG_NIC_PD);
 }
 
+static void mtk_star_enable_dma_irq(struct mtk_star_priv *priv,
+                                   bool rx, bool tx)
+{
+       u32 value;
+
+       regmap_read(priv->regs, MTK_STAR_REG_INT_MASK, &value);
+
+       if (tx)
+               value &= ~MTK_STAR_BIT_INT_STS_TNTC;
+       if (rx)
+               value &= ~MTK_STAR_BIT_INT_STS_FNRC;
+
+       regmap_write(priv->regs, MTK_STAR_REG_INT_MASK, value);
+}
+
+static void mtk_star_disable_dma_irq(struct mtk_star_priv *priv,
+                                    bool rx, bool tx)
+{
+       u32 value;
+
+       regmap_read(priv->regs, MTK_STAR_REG_INT_MASK, &value);
+
+       if (tx)
+               value |= MTK_STAR_BIT_INT_STS_TNTC;
+       if (rx)
+               value |= MTK_STAR_BIT_INT_STS_FNRC;
+
+       regmap_write(priv->regs, MTK_STAR_REG_INT_MASK, value);
+}
+
 /* Unmask the three interrupts we care about, mask all others. */
 static void mtk_star_intr_enable(struct mtk_star_priv *priv)
 {
@@ -429,20 +481,11 @@ static void mtk_star_intr_disable(struct mtk_star_priv *priv)
        regmap_write(priv->regs, MTK_STAR_REG_INT_MASK, ~0);
 }
 
-static unsigned int mtk_star_intr_read(struct mtk_star_priv *priv)
-{
-       unsigned int val;
-
-       regmap_read(priv->regs, MTK_STAR_REG_INT_STS, &val);
-
-       return val;
-}
-
 static unsigned int mtk_star_intr_ack_all(struct mtk_star_priv *priv)
 {
        unsigned int val;
 
-       val = mtk_star_intr_read(priv);
+       regmap_read(priv->regs, MTK_STAR_REG_INT_STS, &val);
        regmap_write(priv->regs, MTK_STAR_REG_INT_STS, val);
 
        return val;
@@ -714,25 +757,44 @@ static void mtk_star_free_tx_skbs(struct mtk_star_priv *priv)
        mtk_star_ring_free_skbs(priv, ring, mtk_star_dma_unmap_tx);
 }
 
-/* All processing for TX and RX happens in the napi poll callback.
- *
- * FIXME: The interrupt handling should be more fine-grained with each
- * interrupt enabled/disabled independently when needed. Unfortunatly this
- * turned out to impact the driver's stability and until we have something
- * working properly, we're disabling all interrupts during TX & RX processing
- * or when resetting the counter registers.
- */
+/**
+ * mtk_star_handle_irq - Interrupt Handler.
+ * @irq: interrupt number.
+ * @data: pointer to a network interface device structure.
+ * Description : this is the driver interrupt service routine.
+ * it mainly handles:
+ *  1. tx complete interrupt for frame transmission.
+ *  2. rx complete interrupt for frame reception.
+ *  3. MAC Management Counter interrupt to avoid counter overflow.
+ **/
 static irqreturn_t mtk_star_handle_irq(int irq, void *data)
 {
-       struct mtk_star_priv *priv;
-       struct net_device *ndev;
-
-       ndev = data;
-       priv = netdev_priv(ndev);
+       struct net_device *ndev = data;
+       struct mtk_star_priv *priv = netdev_priv(ndev);
+       unsigned int intr_status = mtk_star_intr_ack_all(priv);
+       bool rx, tx;
+
+       rx = (intr_status & MTK_STAR_BIT_INT_STS_FNRC) &&
+            napi_schedule_prep(&priv->rx_napi);
+       tx = (intr_status & MTK_STAR_BIT_INT_STS_TNTC) &&
+            napi_schedule_prep(&priv->tx_napi);
+
+       if (rx || tx) {
+               spin_lock(&priv->lock);
+               /* mask Rx and TX Complete interrupt */
+               mtk_star_disable_dma_irq(priv, rx, tx);
+               spin_unlock(&priv->lock);
+
+               if (rx)
+                       __napi_schedule(&priv->rx_napi);
+               if (tx)
+                       __napi_schedule(&priv->tx_napi);
+       }
 
-       if (netif_running(ndev)) {
-               mtk_star_intr_disable(priv);
-               napi_schedule(&priv->napi);
+       /* interrupt is triggered once any counters reach 0x8000000 */
+       if (intr_status & MTK_STAR_REG_INT_STS_MIB_CNT_TH) {
+               mtk_star_update_stats(priv);
+               mtk_star_reset_counters(priv);
        }
 
        return IRQ_HANDLED;
@@ -821,32 +883,26 @@ static void mtk_star_phy_config(struct mtk_star_priv *priv)
        val <<= MTK_STAR_OFF_PHY_CTRL1_FORCE_SPD;
 
        val |= MTK_STAR_BIT_PHY_CTRL1_AN_EN;
-       val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_RX;
-       val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_TX;
-       /* Only full-duplex supported for now. */
-       val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_DPX;
-
-       regmap_write(priv->regs, MTK_STAR_REG_PHY_CTRL1, val);
-
        if (priv->pause) {
-               val = MTK_STAR_VAL_FC_CFG_SEND_PAUSE_TH_2K;
-               val <<= MTK_STAR_OFF_FC_CFG_SEND_PAUSE_TH;
-               val |= MTK_STAR_BIT_FC_CFG_UC_PAUSE_DIR;
+               val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_RX;
+               val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_TX;
+               val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_DPX;
        } else {
-               val = 0;
+               val &= ~MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_RX;
+               val &= ~MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_TX;
+               val &= ~MTK_STAR_BIT_PHY_CTRL1_FORCE_DPX;
        }
+       regmap_write(priv->regs, MTK_STAR_REG_PHY_CTRL1, val);
 
+       val = MTK_STAR_VAL_FC_CFG_SEND_PAUSE_TH_2K;
+       val <<= MTK_STAR_OFF_FC_CFG_SEND_PAUSE_TH;
+       val |= MTK_STAR_BIT_FC_CFG_UC_PAUSE_DIR;
        regmap_update_bits(priv->regs, MTK_STAR_REG_FC_CFG,
                           MTK_STAR_MSK_FC_CFG_SEND_PAUSE_TH |
                           MTK_STAR_BIT_FC_CFG_UC_PAUSE_DIR, val);
 
-       if (priv->pause) {
-               val = MTK_STAR_VAL_EXT_CFG_SND_PAUSE_RLS_1K;
-               val <<= MTK_STAR_OFF_EXT_CFG_SND_PAUSE_RLS;
-       } else {
-               val = 0;
-       }
-
+       val = MTK_STAR_VAL_EXT_CFG_SND_PAUSE_RLS_1K;
+       val <<= MTK_STAR_OFF_EXT_CFG_SND_PAUSE_RLS;
        regmap_update_bits(priv->regs, MTK_STAR_REG_EXT_CFG,
                           MTK_STAR_MSK_EXT_CFG_SND_PAUSE_RLS, val);
 }
@@ -898,14 +954,7 @@ static void mtk_star_init_config(struct mtk_star_priv *priv)
        regmap_write(priv->regs, MTK_STAR_REG_SYS_CONF, val);
        regmap_update_bits(priv->regs, MTK_STAR_REG_MAC_CLK_CONF,
                           MTK_STAR_MSK_MAC_CLK_CONF,
-                          MTK_STAR_BIT_CLK_DIV_10);
-}
-
-static void mtk_star_set_mode_rmii(struct mtk_star_priv *priv)
-{
-       regmap_update_bits(priv->pericfg, MTK_PERICFG_REG_NIC_CFG_CON,
-                          MTK_PERICFG_MSK_NIC_CFG_CON_CFG_MII,
-                          MTK_PERICFG_BIT_NIC_CFG_CON_RMII);
+                          priv->compat_data->bit_clk_div);
 }
 
 static int mtk_star_enable(struct net_device *ndev)
@@ -951,11 +1000,12 @@ static int mtk_star_enable(struct net_device *ndev)
 
        /* Request the interrupt */
        ret = request_irq(ndev->irq, mtk_star_handle_irq,
-                         IRQF_TRIGGER_FALLING, ndev->name, ndev);
+                         IRQF_TRIGGER_NONE, ndev->name, ndev);
        if (ret)
                goto err_free_skbs;
 
-       napi_enable(&priv->napi);
+       napi_enable(&priv->tx_napi);
+       napi_enable(&priv->rx_napi);
 
        mtk_star_intr_ack_all(priv);
        mtk_star_intr_enable(priv);
@@ -988,7 +1038,8 @@ static void mtk_star_disable(struct net_device *ndev)
        struct mtk_star_priv *priv = netdev_priv(ndev);
 
        netif_stop_queue(ndev);
-       napi_disable(&priv->napi);
+       napi_disable(&priv->tx_napi);
+       napi_disable(&priv->rx_napi);
        mtk_star_intr_disable(priv);
        mtk_star_dma_disable(priv);
        mtk_star_intr_ack_all(priv);
@@ -1020,13 +1071,45 @@ static int mtk_star_netdev_ioctl(struct net_device *ndev,
        return phy_mii_ioctl(ndev->phydev, req, cmd);
 }
 
-static int mtk_star_netdev_start_xmit(struct sk_buff *skb,
-                                     struct net_device *ndev)
+static int __mtk_star_maybe_stop_tx(struct mtk_star_priv *priv, u16 size)
+{
+       netif_stop_queue(priv->ndev);
+
+       /* Might race with mtk_star_tx_poll, check again */
+       smp_mb();
+       if (likely(mtk_star_tx_ring_avail(&priv->tx_ring) < size))
+               return -EBUSY;
+
+       netif_start_queue(priv->ndev);
+
+       return 0;
+}
+
+static inline int mtk_star_maybe_stop_tx(struct mtk_star_priv *priv, u16 size)
+{
+       if (likely(mtk_star_tx_ring_avail(&priv->tx_ring) >= size))
+               return 0;
+
+       return __mtk_star_maybe_stop_tx(priv, size);
+}
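
The stop-then-recheck helper above is the usual defense against the
xmit/completion race: the smp_mb() after netif_stop_queue() pairs with the
ring updates in mtk_star_tx_poll(), so if the completion path freed
descriptors in the meantime the queue is restarted immediately instead of
stalling forever. A sketch of the two sides (threshold and helpers are
illustrative):

    /* transmit path: stop first, then re-check under a full barrier */
    netif_stop_queue(ndev);
    smp_mb();
    if (ring_space(ring) >= needed)
            netif_start_queue(ndev);        /* space appeared meanwhile */

    /* completion path: wake only once plenty of space is back */
    if (netif_queue_stopped(ndev) && ring_space(ring) > wake_thresh)
            netif_wake_queue(ndev);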
+
+static netdev_tx_t mtk_star_netdev_start_xmit(struct sk_buff *skb,
+                                             struct net_device *ndev)
 {
        struct mtk_star_priv *priv = netdev_priv(ndev);
        struct mtk_star_ring *ring = &priv->tx_ring;
        struct device *dev = mtk_star_get_dev(priv);
        struct mtk_star_ring_desc_data desc_data;
+       int nfrags = skb_shinfo(skb)->nr_frags;
+
+       if (unlikely(mtk_star_tx_ring_avail(ring) < nfrags + 1)) {
+               if (!netif_queue_stopped(ndev)) {
+                       netif_stop_queue(ndev);
+                       /* This is a hard error, log it. */
+                       pr_err_ratelimited("Tx ring full when queue awake\n");
+               }
+               return NETDEV_TX_BUSY;
+       }
 
        desc_data.dma_addr = mtk_star_dma_map_tx(priv, skb);
        if (dma_mapping_error(dev, desc_data.dma_addr))
@@ -1034,17 +1117,11 @@ static int mtk_star_netdev_start_xmit(struct sk_buff *skb,
 
        desc_data.skb = skb;
        desc_data.len = skb->len;
-
-       spin_lock_bh(&priv->lock);
-
        mtk_star_ring_push_head_tx(ring, &desc_data);
 
        netdev_sent_queue(ndev, skb->len);
 
-       if (mtk_star_ring_full(ring))
-               netif_stop_queue(ndev);
-
-       spin_unlock_bh(&priv->lock);
+       mtk_star_maybe_stop_tx(priv, MTK_STAR_DESC_NEEDED);
 
        mtk_star_dma_resume_tx(priv);
 
@@ -1076,31 +1153,40 @@ static int mtk_star_tx_complete_one(struct mtk_star_priv *priv)
        return ret;
 }
 
-static void mtk_star_tx_complete_all(struct mtk_star_priv *priv)
+static int mtk_star_tx_poll(struct napi_struct *napi, int budget)
 {
+       struct mtk_star_priv *priv = container_of(napi, struct mtk_star_priv,
+                                                 tx_napi);
+       int ret = 0, pkts_compl = 0, bytes_compl = 0, count = 0;
        struct mtk_star_ring *ring = &priv->tx_ring;
        struct net_device *ndev = priv->ndev;
-       int ret, pkts_compl, bytes_compl;
-       bool wake = false;
-
-       spin_lock(&priv->lock);
-
-       for (pkts_compl = 0, bytes_compl = 0;;
-            pkts_compl++, bytes_compl += ret, wake = true) {
-               if (!mtk_star_ring_descs_available(ring))
-                       break;
+       unsigned int head = ring->head;
+       unsigned int entry = ring->tail;
 
+       while (entry != head && count < (MTK_STAR_RING_NUM_DESCS - 1)) {
                ret = mtk_star_tx_complete_one(priv);
                if (ret < 0)
                        break;
+
+               count++;
+               pkts_compl++;
+               bytes_compl += ret;
+               entry = ring->tail;
        }
 
        netdev_completed_queue(ndev, pkts_compl, bytes_compl);
 
-       if (wake && netif_queue_stopped(ndev))
+       if (unlikely(netif_queue_stopped(ndev)) &&
+           (mtk_star_tx_ring_avail(ring) > MTK_STAR_TX_THRESH))
                netif_wake_queue(ndev);
 
-       spin_unlock(&priv->lock);
+       if (napi_complete(napi)) {
+               spin_lock(&priv->lock);
+               mtk_star_enable_dma_irq(priv, false, true);
+               spin_unlock(&priv->lock);
+       }
+
+       return 0;
 }
 
 static void mtk_star_netdev_get_stats64(struct net_device *ndev,
@@ -1180,7 +1266,7 @@ static const struct ethtool_ops mtk_star_ethtool_ops = {
        .set_link_ksettings     = phy_ethtool_set_link_ksettings,
 };
 
-static int mtk_star_receive_packet(struct mtk_star_priv *priv)
+static int mtk_star_rx(struct mtk_star_priv *priv, int budget)
 {
        struct mtk_star_ring *ring = &priv->rx_ring;
        struct device *dev = mtk_star_get_dev(priv);
@@ -1188,107 +1274,85 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
        struct net_device *ndev = priv->ndev;
        struct sk_buff *curr_skb, *new_skb;
        dma_addr_t new_dma_addr;
-       int ret;
+       int ret, count = 0;
 
-       spin_lock(&priv->lock);
-       ret = mtk_star_ring_pop_tail(ring, &desc_data);
-       spin_unlock(&priv->lock);
-       if (ret)
-               return -1;
+       while (count < budget) {
+               ret = mtk_star_ring_pop_tail(ring, &desc_data);
+               if (ret)
+                       return -1;
 
-       curr_skb = desc_data.skb;
+               curr_skb = desc_data.skb;
 
-       if ((desc_data.flags & MTK_STAR_DESC_BIT_RX_CRCE) ||
-           (desc_data.flags & MTK_STAR_DESC_BIT_RX_OSIZE)) {
-               /* Error packet -> drop and reuse skb. */
-               new_skb = curr_skb;
-               goto push_new_skb;
-       }
+               if ((desc_data.flags & MTK_STAR_DESC_BIT_RX_CRCE) ||
+                   (desc_data.flags & MTK_STAR_DESC_BIT_RX_OSIZE)) {
+                       /* Error packet -> drop and reuse skb. */
+                       new_skb = curr_skb;
+                       goto push_new_skb;
+               }
 
-       /* Prepare new skb before receiving the current one. Reuse the current
-        * skb if we fail at any point.
-        */
-       new_skb = mtk_star_alloc_skb(ndev);
-       if (!new_skb) {
-               ndev->stats.rx_dropped++;
-               new_skb = curr_skb;
-               goto push_new_skb;
-       }
+               /* Prepare new skb before receiving the current one.
+                * Reuse the current skb if we fail at any point.
+                */
+               new_skb = mtk_star_alloc_skb(ndev);
+               if (!new_skb) {
+                       ndev->stats.rx_dropped++;
+                       new_skb = curr_skb;
+                       goto push_new_skb;
+               }
 
-       new_dma_addr = mtk_star_dma_map_rx(priv, new_skb);
-       if (dma_mapping_error(dev, new_dma_addr)) {
-               ndev->stats.rx_dropped++;
-               dev_kfree_skb(new_skb);
-               new_skb = curr_skb;
-               netdev_err(ndev, "DMA mapping error of RX descriptor\n");
-               goto push_new_skb;
-       }
+               new_dma_addr = mtk_star_dma_map_rx(priv, new_skb);
+               if (dma_mapping_error(dev, new_dma_addr)) {
+                       ndev->stats.rx_dropped++;
+                       dev_kfree_skb(new_skb);
+                       new_skb = curr_skb;
+                       netdev_err(ndev, "DMA mapping error of RX descriptor\n");
+                       goto push_new_skb;
+               }
 
-       /* We can't fail anymore at this point: it's safe to unmap the skb. */
-       mtk_star_dma_unmap_rx(priv, &desc_data);
+               /* We can't fail anymore at this point:
+                * it's safe to unmap the skb.
+                */
+               mtk_star_dma_unmap_rx(priv, &desc_data);
 
-       skb_put(desc_data.skb, desc_data.len);
-       desc_data.skb->ip_summed = CHECKSUM_NONE;
-       desc_data.skb->protocol = eth_type_trans(desc_data.skb, ndev);
-       desc_data.skb->dev = ndev;
-       netif_receive_skb(desc_data.skb);
+               skb_put(desc_data.skb, desc_data.len);
+               desc_data.skb->ip_summed = CHECKSUM_NONE;
+               desc_data.skb->protocol = eth_type_trans(desc_data.skb, ndev);
+               desc_data.skb->dev = ndev;
+               netif_receive_skb(desc_data.skb);
 
-       /* update dma_addr for new skb */
-       desc_data.dma_addr = new_dma_addr;
+               /* update dma_addr for new skb */
+               desc_data.dma_addr = new_dma_addr;
 
 push_new_skb:
-       desc_data.len = skb_tailroom(new_skb);
-       desc_data.skb = new_skb;
 
-       spin_lock(&priv->lock);
-       mtk_star_ring_push_head_rx(ring, &desc_data);
-       spin_unlock(&priv->lock);
+               count++;
 
-       return 0;
-}
-
-static int mtk_star_process_rx(struct mtk_star_priv *priv, int budget)
-{
-       int received, ret;
-
-       for (received = 0, ret = 0; received < budget && ret == 0; received++)
-               ret = mtk_star_receive_packet(priv);
+               desc_data.len = skb_tailroom(new_skb);
+               desc_data.skb = new_skb;
+               mtk_star_ring_push_head_rx(ring, &desc_data);
+       }
 
        mtk_star_dma_resume_rx(priv);
 
-       return received;
+       return count;
 }
 
-static int mtk_star_poll(struct napi_struct *napi, int budget)
+static int mtk_star_rx_poll(struct napi_struct *napi, int budget)
 {
        struct mtk_star_priv *priv;
-       unsigned int status;
-       int received = 0;
-
-       priv = container_of(napi, struct mtk_star_priv, napi);
-
-       status = mtk_star_intr_read(priv);
-       mtk_star_intr_ack_all(priv);
+       int work_done = 0;
 
-       if (status & MTK_STAR_BIT_INT_STS_TNTC)
-               /* Clean-up all TX descriptors. */
-               mtk_star_tx_complete_all(priv);
+       priv = container_of(napi, struct mtk_star_priv, rx_napi);
 
-       if (status & MTK_STAR_BIT_INT_STS_FNRC)
-               /* Receive up to $budget packets. */
-               received = mtk_star_process_rx(priv, budget);
-
-       if (unlikely(status & MTK_STAR_REG_INT_STS_MIB_CNT_TH)) {
-               mtk_star_update_stats(priv);
-               mtk_star_reset_counters(priv);
+       work_done = mtk_star_rx(priv, budget);
+       if (work_done < budget) {
+               napi_complete_done(napi, work_done);
+               spin_lock(&priv->lock);
+               mtk_star_enable_dma_irq(priv, true, false);
+               spin_unlock(&priv->lock);
        }
 
-       if (received < budget)
-               napi_complete_done(napi, received);
-
-       mtk_star_intr_enable(priv);
-
-       return received;
+       return work_done;
 }
 
 static void mtk_star_mdio_rwok_clear(struct mtk_star_priv *priv)
@@ -1442,6 +1506,25 @@ static void mtk_star_clk_disable_unprepare(void *data)
        clk_bulk_disable_unprepare(MTK_STAR_NCLKS, priv->clks);
 }
 
+static int mtk_star_set_timing(struct mtk_star_priv *priv)
+{
+       struct device *dev = mtk_star_get_dev(priv);
+       unsigned int delay_val = 0;
+
+       switch (priv->phy_intf) {
+       case PHY_INTERFACE_MODE_MII:
+       case PHY_INTERFACE_MODE_RMII:
+               delay_val |= FIELD_PREP(MTK_STAR_BIT_INV_RX_CLK, priv->rx_inv);
+               delay_val |= FIELD_PREP(MTK_STAR_BIT_INV_TX_CLK, priv->tx_inv);
+               break;
+       default:
+               dev_err(dev, "This interface not supported\n");
+               return -EINVAL;
+       }
+
+       return regmap_write(priv->regs, MTK_STAR_REG_TEST0, delay_val);
+}
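
mtk_star_set_timing() relies on FIELD_PREP() from <linux/bitfield.h>, which shifts a value into the bit positions described by a mask, so the inversion flags land on their register bits without open-coded shifts. A standalone illustration with a hypothetical two-bit field:

#include <linux/bitfield.h>

static u32 example_field_prep(void)
{
	/* 0x2 shifted into bits 31:30 of the result -> 0x80000000 */
	return FIELD_PREP(GENMASK(31, 30), 0x2);
}

With the single-bit MTK_STAR_BIT_INV_*_CLK masks, FIELD_PREP() simply places the boolean rx_inv/tx_inv flags on the right bit.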
+
 static int mtk_star_probe(struct platform_device *pdev)
 {
        struct device_node *of_node;
@@ -1460,6 +1543,7 @@ static int mtk_star_probe(struct platform_device *pdev)
 
        priv = netdev_priv(ndev);
        priv->ndev = ndev;
+       priv->compat_data = of_device_get_match_data(&pdev->dev);
        SET_NETDEV_DEV(ndev, dev);
        platform_set_drvdata(pdev, ndev);
 
@@ -1510,7 +1594,8 @@ static int mtk_star_probe(struct platform_device *pdev)
        ret = of_get_phy_mode(of_node, &priv->phy_intf);
        if (ret) {
                return ret;
-       } else if (priv->phy_intf != PHY_INTERFACE_MODE_RMII) {
+       } else if (priv->phy_intf != PHY_INTERFACE_MODE_RMII &&
+                  priv->phy_intf != PHY_INTERFACE_MODE_MII) {
                dev_err(dev, "unsupported phy mode: %s\n",
                        phy_modes(priv->phy_intf));
                return -EINVAL;
@@ -1522,7 +1607,23 @@ static int mtk_star_probe(struct platform_device *pdev)
                return -ENODEV;
        }
 
-       mtk_star_set_mode_rmii(priv);
+       priv->rmii_rxc = of_property_read_bool(of_node, "mediatek,rmii-rxc");
+       priv->rx_inv = of_property_read_bool(of_node, "mediatek,rxc-inverse");
+       priv->tx_inv = of_property_read_bool(of_node, "mediatek,txc-inverse");
+
+       if (priv->compat_data->set_interface_mode) {
+               ret = priv->compat_data->set_interface_mode(ndev);
+               if (ret) {
+                       dev_err(dev, "Failed to set phy interface, err = %d\n", ret);
+                       return -EINVAL;
+               }
+       }
+
+       ret = mtk_star_set_timing(priv);
+       if (ret) {
+               dev_err(dev, "Failed to set timing, err = %d\n", ret);
+               return -EINVAL;
+       }
 
        ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
        if (ret) {
@@ -1550,16 +1651,92 @@ static int mtk_star_probe(struct platform_device *pdev)
        ndev->netdev_ops = &mtk_star_netdev_ops;
        ndev->ethtool_ops = &mtk_star_ethtool_ops;
 
-       netif_napi_add(ndev, &priv->napi, mtk_star_poll, NAPI_POLL_WEIGHT);
+       netif_napi_add(ndev, &priv->rx_napi, mtk_star_rx_poll,
+                      NAPI_POLL_WEIGHT);
+       netif_napi_add_tx(ndev, &priv->tx_napi, mtk_star_tx_poll);
 
        return devm_register_netdev(dev, ndev);
 }
 
 #ifdef CONFIG_OF
+static int mt8516_set_interface_mode(struct net_device *ndev)
+{
+       struct mtk_star_priv *priv = netdev_priv(ndev);
+       struct device *dev = mtk_star_get_dev(priv);
+       unsigned int intf_val, ret, rmii_rxc;
+
+       switch (priv->phy_intf) {
+       case PHY_INTERFACE_MODE_MII:
+               intf_val = MTK_PERICFG_BIT_NIC_CFG_CON_MII;
+               rmii_rxc = 0;
+               break;
+       case PHY_INTERFACE_MODE_RMII:
+               intf_val = MTK_PERICFG_BIT_NIC_CFG_CON_RMII;
+               rmii_rxc = priv->rmii_rxc ? 0 : MTK_PERICFG_BIT_NIC_CFG_CON_CLK;
+               break;
+       default:
+               dev_err(dev, "This interface not supported\n");
+               return -EINVAL;
+       }
+
+       ret = regmap_update_bits(priv->pericfg,
+                                MTK_PERICFG_REG_NIC_CFG1_CON,
+                                MTK_PERICFG_BIT_NIC_CFG_CON_CLK,
+                                rmii_rxc);
+       if (ret)
+               return ret;
+
+       return regmap_update_bits(priv->pericfg,
+                                 MTK_PERICFG_REG_NIC_CFG0_CON,
+                                 MTK_PERICFG_REG_NIC_CFG_CON_CFG_INTF,
+                                 intf_val);
+}
+
+static int mt8365_set_interface_mode(struct net_device *ndev)
+{
+       struct mtk_star_priv *priv = netdev_priv(ndev);
+       struct device *dev = mtk_star_get_dev(priv);
+       unsigned int intf_val;
+
+       switch (priv->phy_intf) {
+       case PHY_INTERFACE_MODE_MII:
+               intf_val = MTK_PERICFG_BIT_NIC_CFG_CON_MII;
+               break;
+       case PHY_INTERFACE_MODE_RMII:
+               intf_val = MTK_PERICFG_BIT_NIC_CFG_CON_RMII;
+               intf_val |= priv->rmii_rxc ? 0 : MTK_PERICFG_BIT_NIC_CFG_CON_CLK_V2;
+               break;
+       default:
+               dev_err(dev, "This interface not supported\n");
+               return -EINVAL;
+       }
+
+       return regmap_update_bits(priv->pericfg,
+                                 MTK_PERICFG_REG_NIC_CFG_CON_V2,
+                                 MTK_PERICFG_REG_NIC_CFG_CON_CFG_INTF |
+                                 MTK_PERICFG_BIT_NIC_CFG_CON_CLK_V2,
+                                 intf_val);
+}
+
+static const struct mtk_star_compat mtk_star_mt8516_compat = {
+       .set_interface_mode = mt8516_set_interface_mode,
+       .bit_clk_div = MTK_STAR_BIT_CLK_DIV_10,
+};
+
+static const struct mtk_star_compat mtk_star_mt8365_compat = {
+       .set_interface_mode = mt8365_set_interface_mode,
+       .bit_clk_div = MTK_STAR_BIT_CLK_DIV_50,
+};
+
 static const struct of_device_id mtk_star_of_match[] = {
-       { .compatible = "mediatek,mt8516-eth", },
-       { .compatible = "mediatek,mt8518-eth", },
-       { .compatible = "mediatek,mt8175-eth", },
+       { .compatible = "mediatek,mt8516-eth",
+         .data = &mtk_star_mt8516_compat },
+       { .compatible = "mediatek,mt8518-eth",
+         .data = &mtk_star_mt8516_compat },
+       { .compatible = "mediatek,mt8175-eth",
+         .data = &mtk_star_mt8516_compat },
+       { .compatible = "mediatek,mt8365-eth",
+         .data = &mtk_star_mt8365_compat },
        { }
 };
 MODULE_DEVICE_TABLE(of, mtk_star_of_match);
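
The per-SoC .data pointers hand the driver a struct mtk_star_compat; its definition belongs to the header portion of this patch and is not shown here. Inferred from its uses (the set_interface_mode() callback at probe time and the bit_clk_div initializers above, presumably the MDIO clock divider), the shape is roughly:

/* Assumed layout, reconstructed from the uses in this hunk. */
struct mtk_star_compat {
	int (*set_interface_mode)(struct net_device *ndev);
	unsigned char bit_clk_div;
};

MT8516, MT8518 and MT8175 share one pericfg register layout, while MT8365 uses the _V2 register and clock bit, which is why two compat instances cover four compatibles.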
index af3b2b5..43a4102 100644
@@ -645,7 +645,7 @@ static int get_real_size(const struct sk_buff *skb,
                *inline_ok = false;
                *hopbyhop = 0;
                if (skb->encapsulation) {
-                       *lso_header_size = (skb_inner_transport_header(skb) - skb->data) + inner_tcp_hdrlen(skb);
+                       *lso_header_size = skb_inner_tcp_all_headers(skb);
                } else {
                        /* Detects large IPV6 TCP packets and prepares for removal of
                         * HBH header that has been pushed by ip6_xmit(),
@@ -653,7 +653,7 @@ static int get_real_size(const struct sk_buff *skb,
                         */
                        if (ipv6_has_hopopt_jumbo(skb))
                                *hopbyhop = sizeof(struct hop_jumbo_hdr);
-                       *lso_header_size = skb_transport_offset(skb) + tcp_hdrlen(skb);
+                       *lso_header_size = skb_tcp_all_headers(skb);
                }
                real_size = CTRL_SIZE + shinfo->nr_frags * DS_SIZE +
                        ALIGN(*lso_header_size - *hopbyhop + 4, DS_SIZE);
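
Both conversions fold open-coded header arithmetic into the recently added tcp.h helpers; note that the first deleted line's skb_inner_transport_header(skb) - skb->data is exactly skb_inner_transport_offset(skb). The helpers reduce to the expressions they replace:

static inline int skb_tcp_all_headers(const struct sk_buff *skb)
{
	return skb_transport_offset(skb) + tcp_hdrlen(skb);
}

static inline int skb_inner_tcp_all_headers(const struct sk_buff *skb)
{
	return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
}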
index 9ea867a..5dadc2f 100644
@@ -17,7 +17,7 @@ mlx5_core-y :=        main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
                fs_counters.o fs_ft_pool.o rl.o lag/debugfs.o lag/lag.o dev.o events.o wq.o lib/gid.o \
                lib/devcom.o lib/pci_vsc.o lib/dm.o lib/fs_ttc.o diag/fs_tracepoint.o \
                diag/fw_tracer.o diag/crdump.o devlink.o diag/rsc_dump.o \
-               fw_reset.o qos.o lib/tout.o
+               fw_reset.o qos.o lib/tout.o lib/aso.o
 
 #
 # Netdev basic
@@ -45,7 +45,8 @@ mlx5_core-$(CONFIG_MLX5_CLS_ACT)     += en_tc.o en/rep/tc.o en/rep/neigh.o \
                                        esw/indir_table.o en/tc_tun_encap.o \
                                        en/tc_tun_vxlan.o en/tc_tun_gre.o en/tc_tun_geneve.o \
                                        en/tc_tun_mplsoudp.o diag/en_tc_tracepoint.o \
-                                       en/tc/post_act.o en/tc/int_port.o
+                                       en/tc/post_act.o en/tc/int_port.o en/tc/meter.o \
+                                       en/tc/post_meter.o
 
 mlx5_core-$(CONFIG_MLX5_CLS_ACT)     += en/tc/act/act.o en/tc/act/drop.o en/tc/act/trap.o \
                                        en/tc/act/accept.o en/tc/act/mark.o en/tc/act/goto.o \
@@ -53,7 +54,7 @@ mlx5_core-$(CONFIG_MLX5_CLS_ACT)     += en/tc/act/act.o en/tc/act/drop.o en/tc/a
                                        en/tc/act/vlan.o en/tc/act/vlan_mangle.o en/tc/act/mpls.o \
                                        en/tc/act/mirred.o en/tc/act/mirred_nic.o \
                                        en/tc/act/ct.o en/tc/act/sample.o en/tc/act/ptype.o \
-                                       en/tc/act/redirect_ingress.o
+                                       en/tc/act/redirect_ingress.o en/tc/act/police.o
 
 ifneq ($(CONFIG_MLX5_TC_CT),)
        mlx5_core-y                          += en/tc_ct.o en/tc/ct_fs_dmfs.o
index 2755c25..305fde6 100644
@@ -30,7 +30,7 @@ static struct mlx5e_tc_act *tc_acts_fdb[NUM_FLOW_ACTIONS] = {
        NULL, /* FLOW_ACTION_WAKE, */
        NULL, /* FLOW_ACTION_QUEUE, */
        &mlx5e_tc_act_sample,
-       NULL, /* FLOW_ACTION_POLICE, */
+       &mlx5e_tc_act_police,
        &mlx5e_tc_act_ct,
        NULL, /* FLOW_ACTION_CT_METADATA, */
        &mlx5e_tc_act_mpls_push,
@@ -106,8 +106,8 @@ mlx5e_tc_act_init_parse_state(struct mlx5e_tc_act_parse_state *parse_state,
 {
        memset(parse_state, 0, sizeof(*parse_state));
        parse_state->flow = flow;
-       parse_state->num_actions = flow_action->num_entries;
        parse_state->extack = extack;
+       parse_state->flow_action = flow_action;
 }
 
 void
index f34714c..095ff8e 100644
@@ -13,7 +13,7 @@
 struct mlx5_flow_attr;
 
 struct mlx5e_tc_act_parse_state {
-       unsigned int num_actions;
+       struct flow_action *flow_action;
        struct mlx5e_tc_flow *flow;
        struct netlink_ext_ack *extack;
        u32 actions;
@@ -76,6 +76,7 @@ extern struct mlx5e_tc_act mlx5e_tc_act_ct;
 extern struct mlx5e_tc_act mlx5e_tc_act_sample;
 extern struct mlx5e_tc_act mlx5e_tc_act_ptype;
 extern struct mlx5e_tc_act mlx5e_tc_act_redirect_ingress;
+extern struct mlx5e_tc_act mlx5e_tc_act_police;
 
 struct mlx5e_tc_act *
 mlx5e_tc_act_get(enum flow_action_id act_id,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c
new file mode 100644
index 0000000..ab32fe6
--- /dev/null
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
+#include "act.h"
+#include "en/tc_priv.h"
+
+static bool
+tc_act_can_offload_police(struct mlx5e_tc_act_parse_state *parse_state,
+                         const struct flow_action_entry *act,
+                         int act_index,
+                         struct mlx5_flow_attr *attr)
+{
+       if (mlx5e_policer_validate(parse_state->flow_action, act,
+                                  parse_state->extack))
+               return false;
+
+       return !!mlx5e_get_flow_meters(parse_state->flow->priv->mdev);
+}
+
+static int
+tc_act_parse_police(struct mlx5e_tc_act_parse_state *parse_state,
+                   const struct flow_action_entry *act,
+                   struct mlx5e_priv *priv,
+                   struct mlx5_flow_attr *attr)
+{
+       struct mlx5e_flow_meter_params *params;
+
+       params = &attr->meter_attr.params;
+       params->index = act->hw_index;
+       if (act->police.rate_bytes_ps) {
+               params->mode = MLX5_RATE_LIMIT_BPS;
+               /* change rate to bits per second */
+               params->rate = act->police.rate_bytes_ps << 3;
+               params->burst = act->police.burst;
+       } else if (act->police.rate_pkt_ps) {
+               params->mode = MLX5_RATE_LIMIT_PPS;
+               params->rate = act->police.rate_pkt_ps;
+               params->burst = act->police.burst_pkt;
+       } else {
+               return -EOPNOTSUPP;
+       }
+
+       attr->action |= MLX5_FLOW_CONTEXT_ACTION_EXECUTE_ASO;
+       attr->exe_aso_type = MLX5_EXE_ASO_FLOW_METER;
+
+       return 0;
+}
+
+static bool
+tc_act_is_multi_table_act_police(struct mlx5e_priv *priv,
+                                const struct flow_action_entry *act,
+                                struct mlx5_flow_attr *attr)
+{
+       return true;
+}
+
+struct mlx5e_tc_act mlx5e_tc_act_police = {
+       .can_offload = tc_act_can_offload_police,
+       .parse_action = tc_act_parse_police,
+       .is_multi_table_act = tc_act_is_multi_table_act_police,
+};
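
This ops vector plugs into the FDB dispatch table via the tc_acts_fdb[] change earlier in this series (FLOW_ACTION_POLICE now points at &mlx5e_tc_act_police). The struct mlx5e_tc_act members exercised here can be read off the three static functions above; a trimmed sketch of the interface, omitting callbacks this file does not implement:

struct mlx5e_tc_act {
	bool (*can_offload)(struct mlx5e_tc_act_parse_state *parse_state,
			    const struct flow_action_entry *act,
			    int act_index,
			    struct mlx5_flow_attr *attr);
	int (*parse_action)(struct mlx5e_tc_act_parse_state *parse_state,
			    const struct flow_action_entry *act,
			    struct mlx5e_priv *priv,
			    struct mlx5_flow_attr *attr);
	bool (*is_multi_table_act)(struct mlx5e_priv *priv,
				   const struct flow_action_entry *act,
				   struct mlx5_flow_attr *attr);
};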
index a7d9eab..53b270f 100644
@@ -12,7 +12,7 @@ tc_act_can_offload_trap(struct mlx5e_tc_act_parse_state *parse_state,
 {
        struct netlink_ext_ack *extack = parse_state->extack;
 
-       if (parse_state->num_actions != 1) {
+       if (parse_state->flow_action->num_entries != 1) {
                NL_SET_ERR_MSG_MOD(extack, "action trap is supported as a sole action only");
                return false;
        }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c
new file mode 100644
index 0000000..ca33f67
--- /dev/null
@@ -0,0 +1,474 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
+#include <linux/math64.h>
+#include "lib/aso.h"
+#include "en/tc/post_act.h"
+#include "meter.h"
+#include "en/tc_priv.h"
+#include "post_meter.h"
+
+#define MLX5_START_COLOR_SHIFT 28
+#define MLX5_METER_MODE_SHIFT 24
+#define MLX5_CBS_EXP_SHIFT 24
+#define MLX5_CBS_MAN_SHIFT 16
+#define MLX5_CIR_EXP_SHIFT 8
+
+/* cir = 8*(10^9)*cir_mantissa/(2^cir_exponent) bits/s */
+#define MLX5_CONST_CIR 8000000000ULL
+#define MLX5_CALC_CIR(m, e)  ((MLX5_CONST_CIR * (m)) >> (e))
+#define MLX5_MAX_CIR ((MLX5_CONST_CIR * 0x100) - 1)
+
+/* cbs = cbs_mantissa*2^cbs_exponent */
+#define MLX5_CALC_CBS(m, e)  ((m) << (e))
+#define MLX5_MAX_CBS ((0x100ULL << 0x1F) - 1)
+#define MLX5_MAX_HW_CBS 0x7FFFFFFF
+
+struct mlx5e_flow_meter_aso_obj {
+       struct list_head entry;
+       int base_id;
+       int total_meters;
+
+       unsigned long meters_map[0]; /* must be at the end of this struct */
+};
+
+struct mlx5e_flow_meters {
+       enum mlx5_flow_namespace_type ns_type;
+       struct mlx5_aso *aso;
+       struct mutex aso_lock; /* Protects aso operations */
+       int log_granularity;
+       u32 pdn;
+
+       DECLARE_HASHTABLE(hashtbl, 8);
+
+       struct mutex sync_lock; /* protect flow meter operations */
+       struct list_head partial_list;
+       struct list_head full_list;
+
+       struct mlx5_core_dev *mdev;
+       struct mlx5e_post_act *post_act;
+
+       struct mlx5e_post_meter_priv *post_meter;
+};
+
+static void
+mlx5e_flow_meter_cir_calc(u64 cir, u8 *man, u8 *exp)
+{
+       s64 _cir, _delta, delta = S64_MAX;
+       u8 e, _man = 0, _exp = 0;
+       u64 m;
+
+       for (e = 0; e <= 0x1F; e++) { /* exp width 5bit */
+               m = cir << e;
+               if ((s64)m < 0) /* overflow */
+                       break;
+               m = div64_u64(m, MLX5_CONST_CIR);
+               if (m > 0xFF) /* man width 8 bit */
+                       continue;
+               _cir = MLX5_CALC_CIR(m, e);
+               _delta = cir - _cir;
+               if (_delta < delta) {
+                       _man = m;
+                       _exp = e;
+                       if (!_delta)
+                               goto found;
+                       delta = _delta;
+               }
+       }
+
+found:
+       *man = _man;
+       *exp = _exp;
+}
+
+static void
+mlx5e_flow_meter_cbs_calc(u64 cbs, u8 *man, u8 *exp)
+{
+       s64 _cbs, _delta, delta = S64_MAX;
+       u8 e, _man = 0, _exp = 0;
+       u64 m;
+
+       for (e = 0; e <= 0x1F; e++) { /* exp width 5bit */
+               m = cbs >> e;
+               if (m > 0xFF) /* man width 8 bit */
+                       continue;
+               _cbs = MLX5_CALC_CBS(m, e);
+               _delta = cbs - _cbs;
+               if (_delta < delta) {
+                       _man = m;
+                       _exp = e;
+                       if (!_delta)
+                               goto found;
+                       delta = _delta;
+               }
+       }
+
+found:
+       *man = _man;
+       *exp = _exp;
+}
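
Both searches approximate the requested value with an 8-bit mantissa and a 5-bit exponent, keeping the candidate with the smallest error (floor division means a candidate never exceeds the request). As a standalone illustration in plain userspace C, not driver code, asking for 100 Mbit/s settles on the closest representable CIR:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define CONST_CIR 8000000000ULL	/* 8 * 10^9, as in MLX5_CONST_CIR */

int main(void)
{
	uint64_t cir = 100000000ULL;	/* request: 100 Mbit/s */
	uint64_t best_man = 0, best_exp = 0, best_cir = 0;

	for (unsigned int e = 0; e <= 0x1F; e++) {	/* 5-bit exponent */
		uint64_t m;

		if ((int64_t)(cir << e) < 0)	/* overflow, as in the driver */
			break;
		m = (cir << e) / CONST_CIR;
		if (m > 0xFF)			/* 8-bit mantissa */
			continue;
		if ((CONST_CIR * m) >> e > best_cir) {
			best_cir = (CONST_CIR * m) >> e;
			best_man = m;
			best_exp = e;
		}
	}
	/* Prints: man=102 exp=13 -> 99609375 bit/s (~0.4% below request). */
	printf("man=%" PRIu64 " exp=%" PRIu64 " -> %" PRIu64 " bit/s\n",
	       best_man, best_exp, best_cir);
	return 0;
}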
+
+int
+mlx5e_tc_meter_modify(struct mlx5_core_dev *mdev,
+                     struct mlx5e_flow_meter_handle *meter,
+                     struct mlx5e_flow_meter_params *meter_params)
+{
+       struct mlx5_wqe_aso_ctrl_seg *aso_ctrl;
+       struct mlx5_wqe_aso_data_seg *aso_data;
+       struct mlx5e_flow_meters *flow_meters;
+       u8 cir_man, cir_exp, cbs_man, cbs_exp;
+       struct mlx5_aso_wqe *aso_wqe;
+       struct mlx5_aso *aso;
+       u64 rate, burst;
+       u8 ds_cnt;
+       int err;
+
+       rate = meter_params->rate;
+       burst = meter_params->burst;
+
+       /* HW treats each packet as 128 bytes (1024 bits) in PPS mode */
+       if (meter_params->mode == MLX5_RATE_LIMIT_PPS) {
+               rate <<= 10;
+               burst <<= 7;
+       }
+
+       if (!rate || rate > MLX5_MAX_CIR || !burst || burst > MLX5_MAX_CBS)
+               return -EINVAL;
+
+       /* HW has limitation of total 31 bits for cbs */
+       if (burst > MLX5_MAX_HW_CBS) {
+               mlx5_core_warn(mdev,
+                              "burst(%lld) is too large, use HW allowed value(%d)\n",
+                              burst, MLX5_MAX_HW_CBS);
+               burst = MLX5_MAX_HW_CBS;
+       }
+
+       mlx5_core_dbg(mdev, "meter mode=%d\n", meter_params->mode);
+       mlx5e_flow_meter_cir_calc(rate, &cir_man, &cir_exp);
+       mlx5_core_dbg(mdev, "rate=%lld, cir=%lld, exp=%d, man=%d\n",
+                     rate, MLX5_CALC_CIR(cir_man, cir_exp), cir_exp, cir_man);
+       mlx5e_flow_meter_cbs_calc(burst, &cbs_man, &cbs_exp);
+       mlx5_core_dbg(mdev, "burst=%lld, cbs=%lld, exp=%d, man=%d\n",
+                     burst, MLX5_CALC_CBS((u64)cbs_man, cbs_exp), cbs_exp, cbs_man);
+
+       if (!cir_man || !cbs_man)
+               return -EINVAL;
+
+       flow_meters = meter->flow_meters;
+       aso = flow_meters->aso;
+
+       mutex_lock(&flow_meters->aso_lock);
+       aso_wqe = mlx5_aso_get_wqe(aso);
+       ds_cnt = DIV_ROUND_UP(sizeof(struct mlx5_aso_wqe_data), MLX5_SEND_WQE_DS);
+       mlx5_aso_build_wqe(aso, ds_cnt, aso_wqe, meter->obj_id,
+                          MLX5_ACCESS_ASO_OPC_MOD_FLOW_METER);
+
+       aso_ctrl = &aso_wqe->aso_ctrl;
+       memset(aso_ctrl, 0, sizeof(*aso_ctrl));
+       aso_ctrl->data_mask_mode = MLX5_ASO_DATA_MASK_MODE_BYTEWISE_64BYTE << 6;
+       aso_ctrl->condition_1_0_operand = MLX5_ASO_ALWAYS_TRUE |
+                                         MLX5_ASO_ALWAYS_TRUE << 4;
+       aso_ctrl->data_offset_condition_operand = MLX5_ASO_LOGICAL_OR << 6;
+       aso_ctrl->data_mask = cpu_to_be64(0x80FFFFFFULL << (meter->idx ? 0 : 32));
+
+       aso_data = (struct mlx5_wqe_aso_data_seg *)(aso_wqe + 1);
+       memset(aso_data, 0, sizeof(*aso_data));
+       aso_data->bytewise_data[meter->idx * 8] = cpu_to_be32((0x1 << 31) | /* valid */
+                                       (MLX5_FLOW_METER_COLOR_GREEN << MLX5_START_COLOR_SHIFT));
+       if (meter_params->mode == MLX5_RATE_LIMIT_PPS)
+               aso_data->bytewise_data[meter->idx * 8] |=
+                       cpu_to_be32(MLX5_FLOW_METER_MODE_NUM_PACKETS << MLX5_METER_MODE_SHIFT);
+       else
+               aso_data->bytewise_data[meter->idx * 8] |=
+                       cpu_to_be32(MLX5_FLOW_METER_MODE_BYTES_IP_LENGTH << MLX5_METER_MODE_SHIFT);
+
+       aso_data->bytewise_data[meter->idx * 8 + 2] = cpu_to_be32((cbs_exp << MLX5_CBS_EXP_SHIFT) |
+                                                                 (cbs_man << MLX5_CBS_MAN_SHIFT) |
+                                                                 (cir_exp << MLX5_CIR_EXP_SHIFT) |
+                                                                 cir_man);
+
+       mlx5_aso_post_wqe(aso, true, &aso_wqe->ctrl);
+
+       /* With newer FW, the wait for the first ASO WQE can exceed 2us, so set the wait to 10ms. */
+       err = mlx5_aso_poll_cq(aso, true, 10);
+       mutex_unlock(&flow_meters->aso_lock);
+
+       return err;
+}
+
+static int
+mlx5e_flow_meter_create_aso_obj(struct mlx5e_flow_meters *flow_meters, int *obj_id)
+{
+       u32 in[MLX5_ST_SZ_DW(create_flow_meter_aso_obj_in)] = {};
+       u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
+       struct mlx5_core_dev *mdev = flow_meters->mdev;
+       void *obj;
+       int err;
+
+       MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+       MLX5_SET(general_obj_in_cmd_hdr, in, obj_type,
+                MLX5_GENERAL_OBJECT_TYPES_FLOW_METER_ASO);
+       MLX5_SET(general_obj_in_cmd_hdr, in, log_obj_range, flow_meters->log_granularity);
+
+       obj = MLX5_ADDR_OF(create_flow_meter_aso_obj_in, in, flow_meter_aso_obj);
+       MLX5_SET(flow_meter_aso_obj, obj, meter_aso_access_pd, flow_meters->pdn);
+
+       err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+       if (!err) {
+               *obj_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+               mlx5_core_dbg(mdev, "flow meter aso obj(0x%x) created\n", *obj_id);
+       }
+
+       return err;
+}
+
+static void
+mlx5e_flow_meter_destroy_aso_obj(struct mlx5_core_dev *mdev, u32 obj_id)
+{
+       u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {};
+       u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
+
+       MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_DESTROY_GENERAL_OBJECT);
+       MLX5_SET(general_obj_in_cmd_hdr, in, obj_type,
+                MLX5_GENERAL_OBJECT_TYPES_FLOW_METER_ASO);
+       MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, obj_id);
+
+       mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+       mlx5_core_dbg(mdev, "flow meter aso obj(0x%x) destroyed\n", obj_id);
+}
+
+static struct mlx5e_flow_meter_handle *
+__mlx5e_flow_meter_alloc(struct mlx5e_flow_meters *flow_meters)
+{
+       struct mlx5_core_dev *mdev = flow_meters->mdev;
+       struct mlx5e_flow_meter_aso_obj *meters_obj;
+       struct mlx5e_flow_meter_handle *meter;
+       int err, pos, total;
+       u32 id;
+
+       meter = kzalloc(sizeof(*meter), GFP_KERNEL);
+       if (!meter)
+               return ERR_PTR(-ENOMEM);
+
+       meters_obj = list_first_entry_or_null(&flow_meters->partial_list,
+                                             struct mlx5e_flow_meter_aso_obj,
+                                             entry);
+       /* 2 meters in one object */
+       total = 1 << (flow_meters->log_granularity + 1);
+       if (!meters_obj) {
+               err = mlx5e_flow_meter_create_aso_obj(flow_meters, &id);
+               if (err) {
+                       mlx5_core_err(mdev, "Failed to create flow meter ASO object\n");
+                       goto err_create;
+               }
+
+               meters_obj = kzalloc(sizeof(*meters_obj) + BITS_TO_BYTES(total),
+                                    GFP_KERNEL);
+               if (!meters_obj) {
+                       err = -ENOMEM;
+                       goto err_mem;
+               }
+
+               meters_obj->base_id = id;
+               meters_obj->total_meters = total;
+               list_add(&meters_obj->entry, &flow_meters->partial_list);
+               pos = 0;
+       } else {
+               pos = find_first_zero_bit(meters_obj->meters_map, total);
+               if (bitmap_weight(meters_obj->meters_map, total) == total - 1) {
+                       list_del(&meters_obj->entry);
+                       list_add(&meters_obj->entry, &flow_meters->full_list);
+               }
+       }
+
+       bitmap_set(meters_obj->meters_map, pos, 1);
+       meter->flow_meters = flow_meters;
+       meter->meters_obj = meters_obj;
+       meter->obj_id = meters_obj->base_id + pos / 2;
+       meter->idx = pos % 2;
+
+       mlx5_core_dbg(mdev, "flow meter allocated, obj_id=0x%x, index=%d\n",
+                     meter->obj_id, meter->idx);
+
+       return meter;
+
+err_mem:
+       mlx5e_flow_meter_destroy_aso_obj(mdev, id);
+err_create:
+       kfree(meter);
+       return ERR_PTR(err);
+}
+
+static void
+__mlx5e_flow_meter_free(struct mlx5e_flow_meter_handle *meter)
+{
+       struct mlx5e_flow_meters *flow_meters = meter->flow_meters;
+       struct mlx5_core_dev *mdev = flow_meters->mdev;
+       struct mlx5e_flow_meter_aso_obj *meters_obj;
+       int n, pos;
+
+       meters_obj = meter->meters_obj;
+       pos = (meter->obj_id - meters_obj->base_id) * 2 + meter->idx;
+       bitmap_clear(meters_obj->meters_map, pos, 1);
+       n = bitmap_weight(meters_obj->meters_map, meters_obj->total_meters);
+       if (n == 0) {
+               list_del(&meters_obj->entry);
+               mlx5e_flow_meter_destroy_aso_obj(mdev, meters_obj->base_id);
+               kfree(meters_obj);
+       } else if (n == meters_obj->total_meters - 1) {
+               list_del(&meters_obj->entry);
+               list_add(&meters_obj->entry, &flow_meters->partial_list);
+       }
+
+       mlx5_core_dbg(mdev, "flow meter freed, obj_id=0x%x, index=%d\n",
+                     meter->obj_id, meter->idx);
+       kfree(meter);
+}
+
+struct mlx5e_flow_meter_handle *
+mlx5e_tc_meter_get(struct mlx5_core_dev *mdev, struct mlx5e_flow_meter_params *params)
+{
+       struct mlx5e_flow_meters *flow_meters;
+       struct mlx5e_flow_meter_handle *meter;
+       int err;
+
+       flow_meters = mlx5e_get_flow_meters(mdev);
+       if (!flow_meters)
+               return ERR_PTR(-EOPNOTSUPP);
+
+       mutex_lock(&flow_meters->sync_lock);
+       hash_for_each_possible(flow_meters->hashtbl, meter, hlist, params->index)
+               if (meter->params.index == params->index)
+                       goto add_ref;
+
+       meter = __mlx5e_flow_meter_alloc(flow_meters);
+       if (IS_ERR(meter)) {
+               err = PTR_ERR(meter);
+               goto err_alloc;
+       }
+
+       hash_add(flow_meters->hashtbl, &meter->hlist, params->index);
+       meter->params.index = params->index;
+
+add_ref:
+       meter->refcnt++;
+
+       if (meter->params.mode != params->mode || meter->params.rate != params->rate ||
+           meter->params.burst != params->burst) {
+               err = mlx5e_tc_meter_modify(mdev, meter, params);
+               if (err)
+                       goto err_update;
+
+               meter->params.mode = params->mode;
+               meter->params.rate = params->rate;
+               meter->params.burst = params->burst;
+       }
+
+       mutex_unlock(&flow_meters->sync_lock);
+       return meter;
+
+err_update:
+       if (--meter->refcnt == 0) {
+               hash_del(&meter->hlist);
+               __mlx5e_flow_meter_free(meter);
+       }
+err_alloc:
+       mutex_unlock(&flow_meters->sync_lock);
+       return ERR_PTR(err);
+}
+
+void
+mlx5e_tc_meter_put(struct mlx5e_flow_meter_handle *meter)
+{
+       struct mlx5e_flow_meters *flow_meters = meter->flow_meters;
+
+       mutex_lock(&flow_meters->sync_lock);
+       if (--meter->refcnt == 0) {
+               hash_del(&meter->hlist);
+               __mlx5e_flow_meter_free(meter);
+       }
+       mutex_unlock(&flow_meters->sync_lock);
+}
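
Handles are keyed by the tc police index in the hashtable and reference-counted, so multiple flows sharing one police action share a single meter slot, and the last put releases it back to the ASO object bitmap. A hedged sketch of the expected caller pattern (the function and its variable names are illustrative, not part of this patch):

static int example_get_police_meter(struct mlx5_core_dev *mdev,
				    const struct flow_action_entry *act,
				    struct mlx5e_flow_meter_handle **meterp)
{
	struct mlx5e_flow_meter_params params = {
		.index = act->hw_index,	/* shared police instance id */
		.mode  = MLX5_RATE_LIMIT_BPS,
		.rate  = act->police.rate_bytes_ps << 3, /* bytes/s -> bits/s */
		.burst = act->police.burst,
	};
	struct mlx5e_flow_meter_handle *meter;

	meter = mlx5e_tc_meter_get(mdev, &params);
	if (IS_ERR(meter))
		return PTR_ERR(meter);

	/* ...use meter->obj_id / meter->idx in the flow attributes... */
	*meterp = meter;	/* balance with mlx5e_tc_meter_put() on teardown */
	return 0;
}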
+
+struct mlx5_flow_table *
+mlx5e_tc_meter_get_post_meter_ft(struct mlx5e_flow_meters *flow_meters)
+{
+       return mlx5e_post_meter_get_ft(flow_meters->post_meter);
+}
+
+struct mlx5e_flow_meters *
+mlx5e_flow_meters_init(struct mlx5e_priv *priv,
+                      enum mlx5_flow_namespace_type ns_type,
+                      struct mlx5e_post_act *post_act)
+{
+       struct mlx5_core_dev *mdev = priv->mdev;
+       struct mlx5e_flow_meters *flow_meters;
+       int err;
+
+       if (!(MLX5_CAP_GEN_64(mdev, general_obj_types) &
+             MLX5_HCA_CAP_GENERAL_OBJECT_TYPES_FLOW_METER_ASO))
+               return ERR_PTR(-EOPNOTSUPP);
+
+       if (IS_ERR_OR_NULL(post_act)) {
+               netdev_dbg(priv->netdev,
+                          "flow meter offload is not supported, post action is missing\n");
+               return ERR_PTR(-EOPNOTSUPP);
+       }
+
+       flow_meters = kzalloc(sizeof(*flow_meters), GFP_KERNEL);
+       if (!flow_meters)
+               return ERR_PTR(-ENOMEM);
+
+       err = mlx5_core_alloc_pd(mdev, &flow_meters->pdn);
+       if (err) {
+               mlx5_core_err(mdev, "Failed to alloc pd for flow meter aso, err=%d\n", err);
+               goto err_out;
+       }
+
+       flow_meters->aso = mlx5_aso_create(mdev, flow_meters->pdn);
+       if (IS_ERR(flow_meters->aso)) {
+               mlx5_core_warn(mdev, "Failed to create aso wqe for flow meter\n");
+               err = PTR_ERR(flow_meters->aso);
+               goto err_sq;
+       }
+
+       flow_meters->post_meter = mlx5e_post_meter_init(priv, ns_type, post_act);
+       if (IS_ERR(flow_meters->post_meter)) {
+               err = PTR_ERR(flow_meters->post_meter);
+               goto err_post_meter;
+       }
+
+       mutex_init(&flow_meters->sync_lock);
+       INIT_LIST_HEAD(&flow_meters->partial_list);
+       INIT_LIST_HEAD(&flow_meters->full_list);
+
+       flow_meters->ns_type = ns_type;
+       flow_meters->mdev = mdev;
+       flow_meters->post_act = post_act;
+       mutex_init(&flow_meters->aso_lock);
+       flow_meters->log_granularity = min_t(int, 6,
+                                            MLX5_CAP_QOS(mdev, log_meter_aso_max_alloc));
+
+       return flow_meters;
+
+err_post_meter:
+       mlx5_aso_destroy(flow_meters->aso);
+err_sq:
+       mlx5_core_dealloc_pd(mdev, flow_meters->pdn);
+err_out:
+       kfree(flow_meters);
+       return ERR_PTR(err);
+}
+
+void
+mlx5e_flow_meters_cleanup(struct mlx5e_flow_meters *flow_meters)
+{
+       if (IS_ERR_OR_NULL(flow_meters))
+               return;
+
+       mlx5e_post_meter_cleanup(flow_meters->post_meter);
+       mlx5_aso_destroy(flow_meters->aso);
+       mlx5_core_dealloc_pd(flow_meters->mdev, flow_meters->pdn);
+
+       kfree(flow_meters);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.h
new file mode 100644
index 0000000..78885db
--- /dev/null
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#ifndef __MLX5_EN_FLOW_METER_H__
+#define __MLX5_EN_FLOW_METER_H__
+
+struct mlx5e_flow_meter_aso_obj;
+struct mlx5e_flow_meters;
+struct mlx5_flow_attr;
+
+enum mlx5e_flow_meter_mode {
+       MLX5_RATE_LIMIT_BPS,
+       MLX5_RATE_LIMIT_PPS,
+};
+
+struct mlx5e_flow_meter_params {
+       enum mlx5e_flow_meter_mode mode;
+       /* police action index */
+       u32 index;
+       u64 rate;
+       u64 burst;
+};
+
+struct mlx5e_flow_meter_handle {
+       struct mlx5e_flow_meters *flow_meters;
+       struct mlx5e_flow_meter_aso_obj *meters_obj;
+       u32 obj_id;
+       u8 idx;
+
+       int refcnt;
+       struct hlist_node hlist;
+       struct mlx5e_flow_meter_params params;
+};
+
+struct mlx5e_meter_attr {
+       struct mlx5e_flow_meter_params params;
+       struct mlx5e_flow_meter_handle *meter;
+};
+
+int
+mlx5e_tc_meter_modify(struct mlx5_core_dev *mdev,
+                     struct mlx5e_flow_meter_handle *meter,
+                     struct mlx5e_flow_meter_params *meter_params);
+
+struct mlx5e_flow_meter_handle *
+mlx5e_tc_meter_get(struct mlx5_core_dev *mdev, struct mlx5e_flow_meter_params *params);
+void
+mlx5e_tc_meter_put(struct mlx5e_flow_meter_handle *meter);
+
+struct mlx5_flow_table *
+mlx5e_tc_meter_get_post_meter_ft(struct mlx5e_flow_meters *flow_meters);
+
+struct mlx5e_flow_meters *
+mlx5e_flow_meters_init(struct mlx5e_priv *priv,
+                      enum mlx5_flow_namespace_type ns_type,
+                      struct mlx5e_post_act *post_action);
+void
+mlx5e_flow_meters_cleanup(struct mlx5e_flow_meters *flow_meters);
+
+#endif /* __MLX5_EN_FLOW_METER_H__ */
index dea137d..2093cc2 100644
@@ -22,9 +22,9 @@ struct mlx5e_post_act_handle {
        u32 id;
 };
 
-#define MLX5_POST_ACTION_BITS (mlx5e_tc_attr_to_reg_mappings[FTEID_TO_REG].mlen)
-#define MLX5_POST_ACTION_MAX GENMASK(MLX5_POST_ACTION_BITS - 1, 0)
-#define MLX5_POST_ACTION_MASK MLX5_POST_ACTION_MAX
+#define MLX5_POST_ACTION_BITS MLX5_REG_MAPPING_MBITS(FTEID_TO_REG)
+#define MLX5_POST_ACTION_MASK MLX5_REG_MAPPING_MASK(FTEID_TO_REG)
+#define MLX5_POST_ACTION_MAX MLX5_POST_ACTION_MASK
 
 struct mlx5e_post_act *
 mlx5e_tc_post_act_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c
new file mode 100644
index 0000000..efa2035
--- /dev/null
@@ -0,0 +1,198 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
+#include "en/tc_priv.h"
+#include "post_meter.h"
+#include "en/tc/post_act.h"
+
+#define MLX5_PACKET_COLOR_BITS MLX5_REG_MAPPING_MBITS(PACKET_COLOR_TO_REG)
+#define MLX5_PACKET_COLOR_MASK MLX5_REG_MAPPING_MASK(PACKET_COLOR_TO_REG)
+
+struct mlx5e_post_meter_priv {
+       struct mlx5_flow_table *ft;
+       struct mlx5_flow_group *fg;
+       struct mlx5_flow_handle *fwd_green_rule;
+       struct mlx5_flow_handle *drop_red_rule;
+};
+
+struct mlx5_flow_table *
+mlx5e_post_meter_get_ft(struct mlx5e_post_meter_priv *post_meter)
+{
+       return post_meter->ft;
+}
+
+static int
+mlx5e_post_meter_table_create(struct mlx5e_priv *priv,
+                             enum mlx5_flow_namespace_type ns_type,
+                             struct mlx5e_post_meter_priv *post_meter)
+{
+       struct mlx5_flow_table_attr ft_attr = {};
+       struct mlx5_flow_namespace *root_ns;
+
+       root_ns = mlx5_get_flow_namespace(priv->mdev, ns_type);
+       if (!root_ns) {
+               mlx5_core_warn(priv->mdev, "Failed to get namespace for flow meter\n");
+               return -EOPNOTSUPP;
+       }
+
+       ft_attr.flags = MLX5_FLOW_TABLE_UNMANAGED;
+       ft_attr.prio = FDB_SLOW_PATH;
+       ft_attr.max_fte = 2;
+       ft_attr.level = 1;
+
+       post_meter->ft = mlx5_create_flow_table(root_ns, &ft_attr);
+       if (IS_ERR(post_meter->ft)) {
+               mlx5_core_warn(priv->mdev, "Failed to create post_meter table\n");
+               return PTR_ERR(post_meter->ft);
+       }
+
+       return 0;
+}
+
+static int
+mlx5e_post_meter_fg_create(struct mlx5e_priv *priv,
+                          struct mlx5e_post_meter_priv *post_meter)
+{
+       int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+       void *misc2, *match_criteria;
+       u32 *flow_group_in;
+       int err = 0;
+
+       flow_group_in = kvzalloc(inlen, GFP_KERNEL);
+       if (!flow_group_in)
+               return -ENOMEM;
+
+       MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
+                MLX5_MATCH_MISC_PARAMETERS_2);
+       match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in,
+                                     match_criteria);
+       misc2 = MLX5_ADDR_OF(fte_match_param, match_criteria, misc_parameters_2);
+       MLX5_SET(fte_match_set_misc2, misc2, metadata_reg_c_5, MLX5_PACKET_COLOR_MASK);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
+
+       post_meter->fg = mlx5_create_flow_group(post_meter->ft, flow_group_in);
+       if (IS_ERR(post_meter->fg)) {
+               mlx5_core_warn(priv->mdev, "Failed to create post_meter flow group\n");
+               err = PTR_ERR(post_meter->fg);
+       }
+
+       kvfree(flow_group_in);
+       return err;
+}
+
+static int
+mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
+                             struct mlx5e_post_meter_priv *post_meter,
+                             struct mlx5e_post_act *post_act)
+{
+       struct mlx5_flow_destination dest = {};
+       struct mlx5_flow_act flow_act = {};
+       struct mlx5_flow_handle *rule;
+       struct mlx5_flow_spec *spec;
+       int err;
+
+       spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+       if (!spec)
+               return -ENOMEM;
+
+       mlx5e_tc_match_to_reg_match(spec, PACKET_COLOR_TO_REG,
+                                   MLX5_FLOW_METER_COLOR_RED, MLX5_PACKET_COLOR_MASK);
+       flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
+       flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
+
+       rule = mlx5_add_flow_rules(post_meter->ft, spec, &flow_act, NULL, 0);
+       if (IS_ERR(rule)) {
+               mlx5_core_warn(priv->mdev, "Failed to create post_meter flow drop rule\n");
+               err = PTR_ERR(rule);
+               goto err_red;
+       }
+       post_meter->drop_red_rule = rule;
+
+       mlx5e_tc_match_to_reg_match(spec, PACKET_COLOR_TO_REG,
+                                   MLX5_FLOW_METER_COLOR_GREEN, MLX5_PACKET_COLOR_MASK);
+       flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+       dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+       dest.ft = mlx5e_tc_post_act_get_ft(post_act);
+
+       rule = mlx5_add_flow_rules(post_meter->ft, spec, &flow_act, &dest, 1);
+       if (IS_ERR(rule)) {
+               mlx5_core_warn(priv->mdev, "Failed to create post_meter flow fwd rule\n");
+               err = PTR_ERR(rule);
+               goto err_green;
+       }
+       post_meter->fwd_green_rule = rule;
+
+       kvfree(spec);
+       return 0;
+
+err_green:
+       mlx5_del_flow_rules(post_meter->drop_red_rule);
+err_red:
+       kvfree(spec);
+       return err;
+}
+
+static void
+mlx5e_post_meter_rules_destroy(struct mlx5e_post_meter_priv *post_meter)
+{
+       mlx5_del_flow_rules(post_meter->drop_red_rule);
+       mlx5_del_flow_rules(post_meter->fwd_green_rule);
+}
+
+static void
+mlx5e_post_meter_fg_destroy(struct mlx5e_post_meter_priv *post_meter)
+{
+       mlx5_destroy_flow_group(post_meter->fg);
+}
+
+static void
+mlx5e_post_meter_table_destroy(struct mlx5e_post_meter_priv *post_meter)
+{
+       mlx5_destroy_flow_table(post_meter->ft);
+}
+
+struct mlx5e_post_meter_priv *
+mlx5e_post_meter_init(struct mlx5e_priv *priv,
+                     enum mlx5_flow_namespace_type ns_type,
+                     struct mlx5e_post_act *post_act)
+{
+       struct mlx5e_post_meter_priv *post_meter;
+       int err;
+
+       post_meter = kzalloc(sizeof(*post_meter), GFP_KERNEL);
+       if (!post_meter)
+               return ERR_PTR(-ENOMEM);
+
+       err = mlx5e_post_meter_table_create(priv, ns_type, post_meter);
+       if (err)
+               goto err_ft;
+
+       err = mlx5e_post_meter_fg_create(priv, post_meter);
+       if (err)
+               goto err_fg;
+
+       err = mlx5e_post_meter_rules_create(priv, post_meter, post_act);
+       if (err)
+               goto err_rules;
+
+       return post_meter;
+
+err_rules:
+       mlx5e_post_meter_fg_destroy(post_meter);
+err_fg:
+       mlx5e_post_meter_table_destroy(post_meter);
+err_ft:
+       kfree(post_meter);
+       return ERR_PTR(err);
+}
+
+void
+mlx5e_post_meter_cleanup(struct mlx5e_post_meter_priv *post_meter)
+{
+       mlx5e_post_meter_rules_destroy(post_meter);
+       mlx5e_post_meter_fg_destroy(post_meter);
+       mlx5e_post_meter_table_destroy(post_meter);
+       kfree(post_meter);
+}
+
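
This table implements the meter's colour decision: the flow-meter ASO writes the packet colour into the low byte of metadata register C5 (the packet_color_to_reg mapping in post_meter.h below), and the two rules drop red packets and forward green ones to the post-action table. mlx5e_tc_match_to_reg_match() is defined elsewhere in en_tc.c; for this zero-offset mapping it amounts to setting value and mask on metadata_reg_c_5, roughly (sketch under that assumption):

static void example_match_color(struct mlx5_flow_spec *spec, u8 color)
{
	void *misc2;

	misc2 = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
			     misc_parameters_2);
	MLX5_SET(fte_match_set_misc2, misc2, metadata_reg_c_5,
		 MLX5_PACKET_COLOR_MASK);

	misc2 = MLX5_ADDR_OF(fte_match_param, spec->match_value,
			     misc_parameters_2);
	MLX5_SET(fte_match_set_misc2, misc2, metadata_reg_c_5,
		 color & MLX5_PACKET_COLOR_MASK);

	spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2;
}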
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h
new file mode 100644
index 0000000..c74f3cb
--- /dev/null
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#ifndef __MLX5_EN_POST_METER_H__
+#define __MLX5_EN_POST_METER_H__
+
+#define packet_color_to_reg { \
+       .mfield = MLX5_ACTION_IN_FIELD_METADATA_REG_C_5, \
+       .moffset = 0, \
+       .mlen = 8, \
+       .soffset = MLX5_BYTE_OFF(fte_match_param, \
+                                misc_parameters_2.metadata_reg_c_5), \
+}
+
+struct mlx5e_post_meter_priv;
+
+struct mlx5_flow_table *
+mlx5e_post_meter_get_ft(struct mlx5e_post_meter_priv *post_meter);
+
+struct mlx5e_post_meter_priv *
+mlx5e_post_meter_init(struct mlx5e_priv *priv,
+                     enum mlx5_flow_namespace_type ns_type,
+                     struct mlx5e_post_act *post_act);
+void
+mlx5e_post_meter_cleanup(struct mlx5e_post_meter_priv *post_meter);
+
+#endif /* __MLX5_EN_POST_METER_H__ */
index 25f51f8..af959fa 100644
@@ -36,8 +36,8 @@
 #define MLX5_CT_STATE_RELATED_BIT BIT(5)
 #define MLX5_CT_STATE_INVALID_BIT BIT(6)
 
-#define MLX5_CT_LABELS_BITS (mlx5e_tc_attr_to_reg_mappings[LABELS_TO_REG].mlen)
-#define MLX5_CT_LABELS_MASK GENMASK(MLX5_CT_LABELS_BITS - 1, 0)
+#define MLX5_CT_LABELS_BITS MLX5_REG_MAPPING_MBITS(LABELS_TO_REG)
+#define MLX5_CT_LABELS_MASK MLX5_REG_MAPPING_MASK(LABELS_TO_REG)
 
 /* Statically allocate modify actions for
  * ipv6 and port nat (5) + tuple fields (4) + nic mode zone restore (1) = 10.
index 00a3ba8..5bbd6b9 100644
@@ -62,10 +62,11 @@ struct mlx5_ct_attr {
                                 misc_parameters_2.metadata_reg_c_4),\
 }
 
+/* 8 LSB of metadata C5 are reserved for packet color */
 #define fteid_to_reg_ct {\
        .mfield = MLX5_ACTION_IN_FIELD_METADATA_REG_C_5,\
-       .moffset = 0,\
-       .mlen = 32,\
+       .moffset = 8,\
+       .mlen = 24,\
        .soffset = MLX5_BYTE_OFF(fte_match_param,\
                                 misc_parameters_2.metadata_reg_c_5),\
 }
@@ -84,10 +85,8 @@ struct mlx5_ct_attr {
        .mlen = ESW_ZONE_ID_BITS,\
 }
 
-#define REG_MAPPING_MLEN(reg) (mlx5e_tc_attr_to_reg_mappings[reg].mlen)
-#define REG_MAPPING_MOFFSET(reg) (mlx5e_tc_attr_to_reg_mappings[reg].moffset)
-#define MLX5_CT_ZONE_BITS (mlx5e_tc_attr_to_reg_mappings[ZONE_TO_REG].mlen)
-#define MLX5_CT_ZONE_MASK GENMASK(MLX5_CT_ZONE_BITS - 1, 0)
+#define MLX5_CT_ZONE_BITS MLX5_REG_MAPPING_MBITS(ZONE_TO_REG)
+#define MLX5_CT_ZONE_MASK MLX5_REG_MAPPING_MASK(ZONE_TO_REG)
 
 #if IS_ENABLED(CONFIG_MLX5_TC_CT)
 
index 3b74a6f..d2bdfd6 100644
@@ -203,7 +203,13 @@ struct mlx5_fc *mlx5e_tc_get_counter(struct mlx5e_tc_flow *flow);
 struct mlx5e_tc_int_port_priv *
 mlx5e_get_int_port_priv(struct mlx5e_priv *priv);
 
+struct mlx5e_flow_meters *mlx5e_get_flow_meters(struct mlx5_core_dev *dev);
+
 void *mlx5e_get_match_headers_value(u32 flags, struct mlx5_flow_spec *spec);
 void *mlx5e_get_match_headers_criteria(u32 flags, struct mlx5_flow_spec *spec);
 
+int mlx5e_policer_validate(const struct flow_action *action,
+                          const struct flow_action_entry *act,
+                          struct netlink_ext_ack *extack);
+
 #endif /* __MLX5_EN_TC_PRIV_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.h
deleted file mode 100644
index e4eeb2b..0000000
+++ /dev/null
@@ -1,21 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
-/* Copyright (c) 2020, Mellanox Technologies inc. All rights reserved. */
-
-#ifndef __MLX5_IPSEC_STEERING_H__
-#define __MLX5_IPSEC_STEERING_H__
-
-#include "en.h"
-#include "ipsec.h"
-#include "ipsec_offload.h"
-#include "en/fs.h"
-
-void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec);
-int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec);
-int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_priv *priv,
-                                 struct mlx5_accel_esp_xfrm_attrs *attrs,
-                                 u32 ipsec_obj_id,
-                                 struct mlx5e_ipsec_rule *ipsec_rule);
-void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_priv *priv,
-                                  struct mlx5_accel_esp_xfrm_attrs *attrs,
-                                  struct mlx5e_ipsec_rule *ipsec_rule);
-#endif /* __MLX5_IPSEC_STEERING_H__ */
index 4b6f0d1..cc5cb30 100644
@@ -458,7 +458,7 @@ bool mlx5e_ktls_handle_tx_skb(struct net_device *netdev, struct mlx5e_txqsq *sq,
        int datalen;
        u32 seq;
 
-       datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+       datalen = skb->len - skb_tcp_all_headers(skb);
        if (!datalen)
                return true;
 
index adf5cc6..dec183c 100644
@@ -62,6 +62,7 @@ struct mlx5_tc_int_port_priv;
 struct mlx5e_rep_bond;
 struct mlx5e_tc_tun_encap;
 struct mlx5e_post_act;
+struct mlx5e_flow_meters;
 
 struct mlx5_rep_uplink_priv {
        /* indirect block callbacks are invoked on bind/unbind events
@@ -97,6 +98,8 @@ struct mlx5_rep_uplink_priv {
 
        /* OVS internal port support */
        struct mlx5e_tc_int_port_priv *int_port_priv;
+
+       struct mlx5e_flow_meters *flow_meters;
 };
 
 struct mlx5e_rep_priv {
index 34bf11c..5e70e99 100644
@@ -59,6 +59,7 @@
 #include "en/tc_tun_encap.h"
 #include "en/tc/sample.h"
 #include "en/tc/act/act.h"
+#include "en/tc/post_meter.h"
 #include "lib/devcom.h"
 #include "lib/geneve.h"
 #include "lib/fs_chains.h"
@@ -104,6 +105,7 @@ struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[] = {
                .mlen = 16,
        },
        [NIC_ZONE_RESTORE_TO_REG] = nic_zone_restore_to_reg_ct,
+       [PACKET_COLOR_TO_REG] = packet_color_to_reg,
 };
 
 /* To avoid false lock dependency warning set the tc_ht lock
@@ -240,6 +242,30 @@ mlx5e_get_int_port_priv(struct mlx5e_priv *priv)
        return NULL;
 }
 
+struct mlx5e_flow_meters *
+mlx5e_get_flow_meters(struct mlx5_core_dev *dev)
+{
+       struct mlx5_eswitch *esw = dev->priv.eswitch;
+       struct mlx5_rep_uplink_priv *uplink_priv;
+       struct mlx5e_rep_priv *uplink_rpriv;
+       struct mlx5e_priv *priv;
+
+       if (is_mdev_switchdev_mode(dev)) {
+               uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH);
+               uplink_priv = &uplink_rpriv->uplink_priv;
+               priv = netdev_priv(uplink_rpriv->netdev);
+               if (!uplink_priv->flow_meters)
+                       uplink_priv->flow_meters =
+                               mlx5e_flow_meters_init(priv,
+                                                      MLX5_FLOW_NAMESPACE_FDB,
+                                                      uplink_priv->post_act);
+               if (!IS_ERR(uplink_priv->flow_meters))
+                       return uplink_priv->flow_meters;
+       }
+
+       return NULL;
+}
+
 static struct mlx5_tc_ct_priv *
 get_ct_priv(struct mlx5e_priv *priv)
 {
@@ -319,12 +345,39 @@ mlx5_tc_rule_delete(struct mlx5e_priv *priv,
        mlx5e_del_offloaded_nic_rule(priv, rule, attr);
 }
 
+static bool
+is_flow_meter_action(struct mlx5_flow_attr *attr)
+{
+       return ((attr->action & MLX5_FLOW_CONTEXT_ACTION_EXECUTE_ASO) &&
+               (attr->exe_aso_type == MLX5_EXE_ASO_FLOW_METER));
+}
+
+static int
+mlx5e_tc_add_flow_meter(struct mlx5e_priv *priv,
+                       struct mlx5_flow_attr *attr)
+{
+       struct mlx5e_flow_meter_handle *meter;
+
+       meter = mlx5e_tc_meter_get(priv->mdev, &attr->meter_attr.params);
+       if (IS_ERR(meter)) {
+               mlx5_core_err(priv->mdev, "Failed to get flow meter\n");
+               return PTR_ERR(meter);
+       }
+
+       attr->meter_attr.meter = meter;
+       attr->dest_ft = mlx5e_tc_meter_get_post_meter_ft(meter->flow_meters);
+       attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+
+       return 0;
+}
+
 struct mlx5_flow_handle *
 mlx5e_tc_rule_offload(struct mlx5e_priv *priv,
                      struct mlx5_flow_spec *spec,
                      struct mlx5_flow_attr *attr)
 {
        struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+       int err;
 
        if (attr->flags & MLX5_ATTR_FLAG_CT) {
                struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts =
@@ -341,6 +394,12 @@ mlx5e_tc_rule_offload(struct mlx5e_priv *priv,
        if (attr->flags & MLX5_ATTR_FLAG_SAMPLE)
                return mlx5e_tc_sample_offload(get_sample_priv(priv), spec, attr);
 
+       if (is_flow_meter_action(attr)) {
+               err = mlx5e_tc_add_flow_meter(priv, attr);
+               if (err)
+                       return ERR_PTR(err);
+       }
+
        return mlx5_eswitch_add_offloaded_rule(esw, spec, attr);
 }
 
@@ -367,6 +426,9 @@ mlx5e_tc_rule_unoffload(struct mlx5e_priv *priv,
        }
 
        mlx5_eswitch_del_offloaded_rule(esw, rule, attr);
+
+       if (attr->meter_attr.meter)
+               mlx5e_tc_meter_put(attr->meter_attr.meter);
 }
 
 int
@@ -4519,9 +4581,9 @@ static int apply_police_params(struct mlx5e_priv *priv, u64 rate,
        return err;
 }
 
-static int mlx5e_policer_validate(const struct flow_action *action,
-                                 const struct flow_action_entry *act,
-                                 struct netlink_ext_ack *extack)
+int mlx5e_policer_validate(const struct flow_action *action,
+                          const struct flow_action_entry *act,
+                          struct netlink_ext_ack *extack)
 {
        if (act->police.exceed.act_id != FLOW_ACTION_DROP) {
                NL_SET_ERR_MSG_MOD(extack,
@@ -4529,13 +4591,6 @@ static int mlx5e_policer_validate(const struct flow_action *action,
                return -EOPNOTSUPP;
        }
 
-       if (act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
-           act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) {
-               NL_SET_ERR_MSG_MOD(extack,
-                                  "Offload not supported when conform action is not pipe or ok");
-               return -EOPNOTSUPP;
-       }
-
        if (act->police.notexceed.act_id == FLOW_ACTION_ACCEPT &&
            !flow_action_is_last_entry(action, act)) {
                NL_SET_ERR_MSG_MOD(extack,
@@ -4586,6 +4641,12 @@ static int scan_tc_matchall_fdb_actions(struct mlx5e_priv *priv,
        flow_action_for_each(i, act, flow_action) {
                switch (act->id) {
                case FLOW_ACTION_POLICE:
+                       if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE) {
+                               NL_SET_ERR_MSG_MOD(extack,
+                                                  "Offload not supported when conform action is not continue");
+                               return -EOPNOTSUPP;
+                       }
+
                        err = mlx5e_policer_validate(flow_action, act, extack);
                        if (err)
                                return err;
@@ -4956,6 +5017,7 @@ void mlx5e_tc_esw_cleanup(struct mlx5_rep_uplink_priv *uplink_priv)
        mlx5e_tc_sample_cleanup(uplink_priv->tc_psample);
        mlx5e_tc_int_port_cleanup(uplink_priv->int_port_priv);
        mlx5_tc_ct_clean(uplink_priv->ct_priv);
+       mlx5e_flow_meters_cleanup(uplink_priv->flow_meters);
        mlx5e_tc_post_act_destroy(uplink_priv->post_act);
 }
 
@@ -5061,7 +5123,7 @@ bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe,
 
                tc_skb_ext->chain = chain;
 
-               zone_restore_id = (reg_b >> REG_MAPPING_MOFFSET(NIC_ZONE_RESTORE_TO_REG)) &
+               zone_restore_id = (reg_b >> MLX5_REG_MAPPING_MOFFSET(NIC_ZONE_RESTORE_TO_REG)) &
                        ESW_ZONE_ID_MASK;
 
                if (!mlx5e_tc_ct_restore_flow(tc->ct, skb,
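The meter hunks above follow a plain acquire/release pairing: the offload path resolves and attaches a meter before installing the rule, and the unoffload path drops the reference once the rule is removed. A condensed sketch of that lifecycle, using only the helpers visible in these hunks:

	/* offload: attach a meter (if the action carries one) before the rule */
	if (is_flow_meter_action(attr)) {
		err = mlx5e_tc_add_flow_meter(priv, attr);
		if (err)
			return ERR_PTR(err);
	}
	rule = mlx5_eswitch_add_offloaded_rule(esw, spec, attr);

	/* unoffload: release the meter reference after the rule is gone */
	mlx5_eswitch_del_offloaded_rule(esw, rule, attr);
	if (attr->meter_attr.meter)
		mlx5e_tc_meter_put(attr->meter_attr.meter);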
index e2a1250..517f225 100644 (file)
@@ -39,6 +39,7 @@
 #include "en/tc_ct.h"
 #include "en/tc_tun.h"
 #include "en/tc/int_port.h"
+#include "en/tc/meter.h"
 #include "en_rep.h"
 
 #define MLX5E_TC_FLOW_ID_MASK 0x0000ffff
@@ -71,6 +72,7 @@ struct mlx5_flow_attr {
        struct mlx5_modify_hdr *modify_hdr;
        struct mlx5_ct_attr ct_attr;
        struct mlx5e_sample_attr sample_attr;
+       struct mlx5e_meter_attr meter_attr;
        struct mlx5e_tc_flow_parse_attr *parse_attr;
        u32 chain;
        u16 prio;
@@ -83,6 +85,7 @@ struct mlx5_flow_attr {
        u8 tun_ip_version;
        int tunnel_id; /* mapped tunnel id */
        u32 flags;
+       u32 exe_aso_type;
        struct list_head list;
        struct mlx5e_post_act_handle *post_act_handle;
        struct {
@@ -229,6 +232,7 @@ enum mlx5e_tc_attr_to_reg {
        FTEID_TO_REG,
        NIC_CHAIN_TO_REG,
        NIC_ZONE_RESTORE_TO_REG,
+       PACKET_COLOR_TO_REG,
 };
 
 struct mlx5e_tc_attr_to_reg_mapping {
@@ -241,6 +245,10 @@ struct mlx5e_tc_attr_to_reg_mapping {
 
 extern struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[];
 
+#define MLX5_REG_MAPPING_MOFFSET(reg_id) (mlx5e_tc_attr_to_reg_mappings[reg_id].moffset)
+#define MLX5_REG_MAPPING_MBITS(reg_id) (mlx5e_tc_attr_to_reg_mappings[reg_id].mlen)
+#define MLX5_REG_MAPPING_MASK(reg_id) (GENMASK(mlx5e_tc_attr_to_reg_mappings[reg_id].mlen - 1, 0))
+
 bool mlx5e_is_valid_eswitch_fwd_dev(struct mlx5e_priv *priv,
                                    struct net_device *out_dev);
 
index 50d14ce..64d78fd 100644 (file)
@@ -152,14 +152,14 @@ mlx5e_tx_get_gso_ihs(struct mlx5e_txqsq *sq, struct sk_buff *skb, int *hopbyhop)
 
        *hopbyhop = 0;
        if (skb->encapsulation) {
-               ihs = skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
+               ihs = skb_inner_tcp_all_headers(skb);
                stats->tso_inner_packets++;
                stats->tso_inner_bytes += skb->len - ihs;
        } else {
                if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
                        ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
                } else {
-                       ihs = skb_transport_offset(skb) + tcp_hdrlen(skb);
+                       ihs = skb_tcp_all_headers(skb);
                        if (ipv6_has_hopopt_jumbo(skb)) {
                                *hopbyhop = sizeof(struct hop_jumbo_hdr);
                                ihs -= sizeof(struct hop_jumbo_hdr);
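For context, the helpers this hunk converts to are thin wrappers over the arithmetic they replace, roughly as below (see include/linux/tcp.h for the authoritative definitions):

	/* outer MAC + IP + TCP header length */
	static inline int skb_tcp_all_headers(const struct sk_buff *skb)
	{
		return skb_transport_offset(skb) + tcp_hdrlen(skb);
	}

	/* same, but against the inner headers of an encapsulated skb */
	static inline int skb_inner_tcp_all_headers(const struct sk_buff *skb)
	{
		return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
	}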
index 719ef26..b938632 100644 (file)
@@ -1152,8 +1152,6 @@ mlx5_eswitch_update_num_of_vfs(struct mlx5_eswitch *esw, int num_vfs)
 {
        const u32 *out;
 
-       WARN_ON_ONCE(esw->mode != MLX5_ESWITCH_NONE);
-
        if (num_vfs < 0)
                return;
 
@@ -1186,6 +1184,9 @@ static int mlx5_esw_acls_ns_init(struct mlx5_eswitch *esw)
        int total_vports;
        int err;
 
+       if (esw->flags & MLX5_ESWITCH_VPORT_ACL_NS_CREATED)
+               return 0;
+
        total_vports = mlx5_eswitch_get_total_vports(dev);
 
        if (MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support)) {
@@ -1203,6 +1204,7 @@ static int mlx5_esw_acls_ns_init(struct mlx5_eswitch *esw)
        } else {
                esw_warn(dev, "ingress ACL is not supported by FW\n");
        }
+       esw->flags |= MLX5_ESWITCH_VPORT_ACL_NS_CREATED;
        return 0;
 
 err:
@@ -1215,6 +1217,7 @@ static void mlx5_esw_acls_ns_cleanup(struct mlx5_eswitch *esw)
 {
        struct mlx5_core_dev *dev = esw->dev;
 
+       esw->flags &= ~MLX5_ESWITCH_VPORT_ACL_NS_CREATED;
        if (MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support))
                mlx5_fs_ingress_acls_cleanup(dev);
        if (MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support))
@@ -1224,7 +1227,6 @@ static void mlx5_esw_acls_ns_cleanup(struct mlx5_eswitch *esw)
 /**
  * mlx5_eswitch_enable_locked - Enable eswitch
  * @esw:       Pointer to eswitch
- * @mode:      Eswitch mode to enable
  * @num_vfs:   Enable eswitch for given number of VFs. This is optional.
 *             Valid values are 0, > 0 and MLX5_ESWITCH_IGNORE_NUM_VFS.
  *             Caller should pass num_vfs > 0 when enabling eswitch for
@@ -1238,7 +1240,7 @@ static void mlx5_esw_acls_ns_cleanup(struct mlx5_eswitch *esw)
 * mode. If num_vfs >= 0 is provided, it sets up VF-related eswitch vports.
  * It returns 0 on success or error code on failure.
  */
-int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int mode, int num_vfs)
+int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
 {
        int err;
 
@@ -1257,9 +1259,7 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int mode, int num_vfs)
 
        mlx5_eswitch_update_num_of_vfs(esw, num_vfs);
 
-       esw->mode = mode;
-
-       if (mode == MLX5_ESWITCH_LEGACY) {
+       if (esw->mode == MLX5_ESWITCH_LEGACY) {
                err = esw_legacy_enable(esw);
        } else {
                mlx5_rescan_drivers(esw->dev);
@@ -1269,22 +1269,19 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int mode, int num_vfs)
        if (err)
                goto abort;
 
+       esw->fdb_table.flags |= MLX5_ESW_FDB_CREATED;
+
        mlx5_eswitch_event_handlers_register(esw);
 
        esw_info(esw->dev, "Enable: mode(%s), nvfs(%d), active vports(%d)\n",
-                mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
+                esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
                 esw->esw_funcs.num_vfs, esw->enabled_vports);
 
-       mlx5_esw_mode_change_notify(esw, mode);
+       mlx5_esw_mode_change_notify(esw, esw->mode);
 
        return 0;
 
 abort:
-       esw->mode = MLX5_ESWITCH_NONE;
-
-       if (mode == MLX5_ESWITCH_OFFLOADS)
-               mlx5_rescan_drivers(esw->dev);
-
        mlx5_esw_acls_ns_cleanup(esw);
        return err;
 }
@@ -1305,14 +1302,14 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs)
        if (!mlx5_esw_allowed(esw))
                return 0;
 
-       toggle_lag = esw->mode == MLX5_ESWITCH_NONE;
+       toggle_lag = !mlx5_esw_is_fdb_created(esw);
 
        if (toggle_lag)
                mlx5_lag_disable_change(esw->dev);
 
        down_write(&esw->mode_lock);
-       if (esw->mode == MLX5_ESWITCH_NONE) {
-               ret = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_LEGACY, num_vfs);
+       if (!mlx5_esw_is_fdb_created(esw)) {
+               ret = mlx5_eswitch_enable_locked(esw, num_vfs);
        } else {
                enum mlx5_eswitch_vport_event vport_events;
 
@@ -1330,55 +1327,79 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs)
        return ret;
 }
 
-void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw, bool clear_vf)
+/* When disabling SR-IOV, free the driver-level resources. */
+void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf)
 {
-       struct devlink *devlink = priv_to_devlink(esw->dev);
-       int old_mode;
-
-       lockdep_assert_held_write(&esw->mode_lock);
-
-       if (esw->mode == MLX5_ESWITCH_NONE)
+       if (!mlx5_esw_allowed(esw))
                return;
 
-       esw_info(esw->dev, "Disable: mode(%s), nvfs(%d), active vports(%d)\n",
+       down_write(&esw->mode_lock);
+       /* When the driver is unloaded, this function is called twice: by
+        * remove_one() and by mlx5_unload(). Prevent the second call.
+        */
+       if (!esw->esw_funcs.num_vfs && !clear_vf)
+               goto unlock;
+
+       esw_info(esw->dev, "Unload vfs: mode(%s), nvfs(%d), active vports(%d)\n",
                 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
                 esw->esw_funcs.num_vfs, esw->enabled_vports);
 
-       /* Notify eswitch users that it is exiting from current mode.
-        * So that it can do necessary cleanup before the eswitch is disabled.
+       mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs);
+       if (clear_vf)
+               mlx5_eswitch_clear_vf_vports_info(esw);
+       /* If disabling SR-IOV in switchdev mode, free the meta rules here,
+        * because they depend on num_vfs.
         */
-       mlx5_esw_mode_change_notify(esw, MLX5_ESWITCH_NONE);
+       if (esw->mode == MLX5_ESWITCH_OFFLOADS) {
+               struct devlink *devlink = priv_to_devlink(esw->dev);
 
-       mlx5_eswitch_event_handlers_unregister(esw);
+               esw_offloads_del_send_to_vport_meta_rules(esw);
+               devlink_rate_nodes_destroy(devlink);
+       }
 
-       if (esw->mode == MLX5_ESWITCH_LEGACY)
-               esw_legacy_disable(esw);
-       else if (esw->mode == MLX5_ESWITCH_OFFLOADS)
-               esw_offloads_disable(esw);
+       esw->esw_funcs.num_vfs = 0;
 
-       old_mode = esw->mode;
-       esw->mode = MLX5_ESWITCH_NONE;
+unlock:
+       up_write(&esw->mode_lock);
+}
 
-       if (old_mode == MLX5_ESWITCH_OFFLOADS)
-               mlx5_rescan_drivers(esw->dev);
+/* Free the resources of the current eswitch mode. It is called by devlink
+ * when changing the eswitch mode and by modprobe when unloading the driver.
+ */
+void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw)
+{
+       struct devlink *devlink = priv_to_devlink(esw->dev);
+
+       /* Notify eswitch users that it is exiting from the current mode,
+        * so they can do any necessary cleanup before the eswitch is disabled.
+        */
+       mlx5_esw_mode_change_notify(esw, MLX5_ESWITCH_LEGACY);
 
-       devlink_rate_nodes_destroy(devlink);
+       mlx5_eswitch_event_handlers_unregister(esw);
 
+       esw_info(esw->dev, "Disable: mode(%s), nvfs(%d), active vports(%d)\n",
+                esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
+                esw->esw_funcs.num_vfs, esw->enabled_vports);
+
+       esw->fdb_table.flags &= ~MLX5_ESW_FDB_CREATED;
+       if (esw->mode == MLX5_ESWITCH_OFFLOADS)
+               esw_offloads_disable(esw);
+       else if (esw->mode == MLX5_ESWITCH_LEGACY)
+               esw_legacy_disable(esw);
        mlx5_esw_acls_ns_cleanup(esw);
 
-       if (clear_vf)
-               mlx5_eswitch_clear_vf_vports_info(esw);
+       if (esw->mode == MLX5_ESWITCH_OFFLOADS)
+               devlink_rate_nodes_destroy(devlink);
 }
 
-void mlx5_eswitch_disable(struct mlx5_eswitch *esw, bool clear_vf)
+void mlx5_eswitch_disable(struct mlx5_eswitch *esw)
 {
        if (!mlx5_esw_allowed(esw))
                return;
 
        mlx5_lag_disable_change(esw->dev);
        down_write(&esw->mode_lock);
-       mlx5_eswitch_disable_locked(esw, clear_vf);
-       esw->esw_funcs.num_vfs = 0;
+       mlx5_eswitch_disable_locked(esw);
        up_write(&esw->mode_lock);
        mlx5_lag_enable_change(esw->dev);
 }
@@ -1573,7 +1594,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
        refcount_set(&esw->qos.refcnt, 0);
 
        esw->enabled_vports = 0;
-       esw->mode = MLX5_ESWITCH_NONE;
+       esw->mode = MLX5_ESWITCH_LEGACY;
        esw->offloads.inline_mode = MLX5_INLINE_MODE_NONE;
        if (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, reformat) &&
            MLX5_CAP_ESW_FLOWTABLE_FDB(dev, decap))
@@ -1875,7 +1896,7 @@ u8 mlx5_eswitch_mode(const struct mlx5_core_dev *dev)
 {
        struct mlx5_eswitch *esw = dev->priv.eswitch;
 
-       return mlx5_esw_allowed(esw) ? esw->mode : MLX5_ESWITCH_NONE;
+       return mlx5_esw_allowed(esw) ? esw->mode : MLX5_ESWITCH_LEGACY;
 }
 EXPORT_SYMBOL_GPL(mlx5_eswitch_mode);
 
@@ -1995,8 +2016,6 @@ int mlx5_esw_try_lock(struct mlx5_eswitch *esw)
  */
 void mlx5_esw_unlock(struct mlx5_eswitch *esw)
 {
-       if (!mlx5_esw_allowed(esw))
-               return;
        up_write(&esw->mode_lock);
 }
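With MLX5_ESWITCH_NONE removed, esw->mode now always holds LEGACY or OFFLOADS, and "is the eswitch actually up" is carried by the FDB-created flag instead. The recurring check migration throughout this file, in sketch form:

	/* before: the absence of an eswitch was a third mode */
	if (esw->mode == MLX5_ESWITCH_NONE)
		return -EOPNOTSUPP;

	/* after: mode is always valid; test whether the FDB exists */
	if (!mlx5_esw_is_fdb_created(esw))
		return -EOPNOTSUPP;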
 
index 2754a73..c19604b 100644 (file)
@@ -282,10 +282,15 @@ struct mlx5_esw_functions {
 enum {
        MLX5_ESWITCH_VPORT_MATCH_METADATA = BIT(0),
        MLX5_ESWITCH_REG_C1_LOOPBACK_ENABLED = BIT(1),
+       MLX5_ESWITCH_VPORT_ACL_NS_CREATED = BIT(2),
 };
 
 struct mlx5_esw_bridge_offloads;
 
+enum {
+       MLX5_ESW_FDB_CREATED = BIT(0),
+};
+
 struct mlx5_eswitch {
        struct mlx5_core_dev    *dev;
        struct mlx5_nb          nb;
@@ -337,6 +342,7 @@ void esw_offloads_disable(struct mlx5_eswitch *esw);
 int esw_offloads_enable(struct mlx5_eswitch *esw);
 void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw);
 int esw_offloads_init_reps(struct mlx5_eswitch *esw);
+void esw_offloads_del_send_to_vport_meta_rules(struct mlx5_eswitch *esw);
 
 bool mlx5_esw_vport_match_metadata_supported(const struct mlx5_eswitch *esw);
 int mlx5_esw_offloads_vport_metadata_set(struct mlx5_eswitch *esw, bool enable);
@@ -350,10 +356,11 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev);
 void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw);
 
 #define MLX5_ESWITCH_IGNORE_NUM_VFS (-1)
-int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int mode, int num_vfs);
+int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs);
 int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs);
-void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw, bool clear_vf);
-void mlx5_eswitch_disable(struct mlx5_eswitch *esw, bool clear_vf);
+void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf);
+void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw);
+void mlx5_eswitch_disable(struct mlx5_eswitch *esw);
 int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
                               u16 vport, const u8 *mac);
 int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
@@ -575,6 +582,11 @@ mlx5_esw_devlink_port_index_to_vport_num(unsigned int dl_port_index)
        return dl_port_index & 0xffff;
 }
 
+static inline bool mlx5_esw_is_fdb_created(struct mlx5_eswitch *esw)
+{
+       return esw->fdb_table.flags & MLX5_ESW_FDB_CREATED;
+}
+
 /* TODO: This mlx5e_tc function shouldn't be called by eswitch */
 void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw);
 
@@ -719,7 +731,8 @@ int mlx5_eswitch_reload_reps(struct mlx5_eswitch *esw);
 static inline int  mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }
 static inline void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) {}
 static inline int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs) { return 0; }
-static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw, bool clear_vf) {}
+static inline void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf) {}
+static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw) {}
 static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev) { return false; }
 static inline
 int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, u16 vport, int link_state) { return 0; }
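The disable path is split accordingly: SR-IOV teardown only unloads VF vports and keeps the mode, while full teardown happens on driver unload or a devlink mode change. The assumed call sites, matching the sriov.c and main.c hunks later in this diff:

	/* mlx5_device_disable_sriov(): release VF vports, mode stays */
	mlx5_eswitch_disable_sriov(dev->priv.eswitch, clear_vf);

	/* mlx5_unload(): tear down whatever mode is active */
	mlx5_eswitch_disable(dev->priv.eswitch);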
index 2ce3728..e224ec7 100644 (file)
@@ -1040,6 +1040,15 @@ static void mlx5_eswitch_del_send_to_vport_meta_rules(struct mlx5_eswitch *esw)
                mlx5_del_flow_rules(flows[i]);
 
        kvfree(flows);
+       /* If the eswitch mode changes from switchdev to legacy while num_vfs is
+        * not 0, the meta rules could be freed again; set the pointer to NULL.
+        */
+       esw->fdb_table.offloads.send_to_vport_meta_rules = NULL;
+}
+
+void esw_offloads_del_send_to_vport_meta_rules(struct mlx5_eswitch *esw)
+{
+       mlx5_eswitch_del_send_to_vport_meta_rules(esw);
 }
 
 static int
@@ -2034,7 +2043,7 @@ static int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, u8 *mode)
        if (!MLX5_CAP_GEN(dev, vport_group_manager))
                return -EOPNOTSUPP;
 
-       if (esw->mode == MLX5_ESWITCH_NONE)
+       if (!mlx5_esw_is_fdb_created(esw))
                return -EOPNOTSUPP;
 
        switch (MLX5_CAP_ETH(dev, wqe_inline_mode)) {
@@ -2170,18 +2179,18 @@ static int esw_offloads_start(struct mlx5_eswitch *esw,
 {
        int err, err1;
 
-       mlx5_eswitch_disable_locked(esw, false);
-       err = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_OFFLOADS,
-                                        esw->dev->priv.sriov.num_vfs);
+       esw->mode = MLX5_ESWITCH_OFFLOADS;
+       err = mlx5_eswitch_enable_locked(esw, esw->dev->priv.sriov.num_vfs);
        if (err) {
                NL_SET_ERR_MSG_MOD(extack,
                                   "Failed setting eswitch to offloads");
-               err1 = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_LEGACY,
-                                                 MLX5_ESWITCH_IGNORE_NUM_VFS);
+               esw->mode = MLX5_ESWITCH_LEGACY;
+               err1 = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
                if (err1) {
                        NL_SET_ERR_MSG_MOD(extack,
                                           "Failed setting eswitch back to legacy");
                }
+               mlx5_rescan_drivers(esw->dev);
        }
        if (esw->offloads.inline_mode == MLX5_INLINE_MODE_NONE) {
                if (mlx5_eswitch_inline_mode_get(esw,
@@ -2894,7 +2903,7 @@ int mlx5_esw_offloads_vport_metadata_set(struct mlx5_eswitch *esw, bool enable)
        int err = 0;
 
        down_write(&esw->mode_lock);
-       if (esw->mode != MLX5_ESWITCH_NONE) {
+       if (mlx5_esw_is_fdb_created(esw)) {
                err = -EBUSY;
                goto done;
        }
@@ -3229,13 +3238,12 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
 {
        int err, err1;
 
-       mlx5_eswitch_disable_locked(esw, false);
-       err = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_LEGACY,
-                                        MLX5_ESWITCH_IGNORE_NUM_VFS);
+       esw->mode = MLX5_ESWITCH_LEGACY;
+       err = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
        if (err) {
                NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");
-               err1 = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_OFFLOADS,
-                                                 MLX5_ESWITCH_IGNORE_NUM_VFS);
+               esw->mode = MLX5_ESWITCH_OFFLOADS;
+               err1 = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
                if (err1) {
                        NL_SET_ERR_MSG_MOD(extack,
                                           "Failed setting eswitch back to offloads");
@@ -3334,15 +3342,6 @@ static int esw_inline_mode_to_devlink(u8 mlx5_mode, u8 *mode)
        return 0;
 }
 
-static int eswitch_devlink_esw_mode_check(const struct mlx5_eswitch *esw)
-{
-       /* devlink commands in NONE eswitch mode are currently supported only
-        * on ECPF.
-        */
-       return (esw->mode == MLX5_ESWITCH_NONE &&
-               !mlx5_core_is_ecpf_esw_manager(esw->dev)) ? -EOPNOTSUPP : 0;
-}
-
 /* FIXME: devl_unlock() followed by devl_lock() inside driver callback
  * is never correct and prone to races. It's a transitional workaround,
  * never repeat this pattern.
@@ -3399,6 +3398,7 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
        if (cur_mlx5_mode == mlx5_mode)
                goto unlock;
 
+       mlx5_eswitch_disable_locked(esw);
        if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) {
                if (mlx5_devlink_trap_get_num_active(esw->dev)) {
                        NL_SET_ERR_MSG_MOD(extack,
@@ -3409,6 +3409,7 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
                err = esw_offloads_start(esw, extack);
        } else if (mode == DEVLINK_ESWITCH_MODE_LEGACY) {
                err = esw_offloads_stop(esw, extack);
+               mlx5_rescan_drivers(esw->dev);
        } else {
                err = -EINVAL;
        }
@@ -3431,12 +3432,7 @@ int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode)
                return PTR_ERR(esw);
 
        mlx5_eswtich_mode_callback_enter(devlink, esw);
-       err = eswitch_devlink_esw_mode_check(esw);
-       if (err)
-               goto unlock;
-
        err = esw_mode_to_devlink(esw->mode, mode);
-unlock:
        mlx5_eswtich_mode_callback_exit(devlink, esw);
        return err;
 }
@@ -3485,9 +3481,6 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
                return PTR_ERR(esw);
 
        mlx5_eswtich_mode_callback_enter(devlink, esw);
-       err = eswitch_devlink_esw_mode_check(esw);
-       if (err)
-               goto out;
 
        switch (MLX5_CAP_ETH(dev, wqe_inline_mode)) {
        case MLX5_CAP_INLINE_MODE_NOT_REQUIRED:
@@ -3539,12 +3532,7 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
                return PTR_ERR(esw);
 
        mlx5_eswtich_mode_callback_enter(devlink, esw);
-       err = eswitch_devlink_esw_mode_check(esw);
-       if (err)
-               goto unlock;
-
        err = esw_inline_mode_to_devlink(esw->offloads.inline_mode, mode);
-unlock:
        mlx5_eswtich_mode_callback_exit(devlink, esw);
        return err;
 }
@@ -3555,16 +3543,13 @@ int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink,
 {
        struct mlx5_core_dev *dev = devlink_priv(devlink);
        struct mlx5_eswitch *esw;
-       int err;
+       int err = 0;
 
        esw = mlx5_devlink_eswitch_get(devlink);
        if (IS_ERR(esw))
                return PTR_ERR(esw);
 
        mlx5_eswtich_mode_callback_enter(devlink, esw);
-       err = eswitch_devlink_esw_mode_check(esw);
-       if (err)
-               goto unlock;
 
        if (encap != DEVLINK_ESWITCH_ENCAP_MODE_NONE &&
            (!MLX5_CAP_ESW_FLOWTABLE_FDB(dev, reformat) ||
@@ -3615,21 +3600,15 @@ int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink,
                                        enum devlink_eswitch_encap_mode *encap)
 {
        struct mlx5_eswitch *esw;
-       int err;
 
        esw = mlx5_devlink_eswitch_get(devlink);
        if (IS_ERR(esw))
                return PTR_ERR(esw);
 
        mlx5_eswtich_mode_callback_enter(devlink, esw);
-       err = eswitch_devlink_esw_mode_check(esw);
-       if (err)
-               goto unlock;
-
        *encap = esw->offloads.encap;
-unlock:
        mlx5_eswtich_mode_callback_exit(devlink, esw);
-       return err;
+       return 0;
 }
 
 static bool
index 2a8fc54..641505d 100644 (file)
@@ -632,6 +632,7 @@ static int mlx5_deactivate_lag(struct mlx5_lag *ldev)
 static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev)
 {
 #ifdef CONFIG_MLX5_ESWITCH
+       struct mlx5_core_dev *dev;
        u8 mode;
 #endif
        int i;
@@ -641,11 +642,11 @@ static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev)
                        return false;
 
 #ifdef CONFIG_MLX5_ESWITCH
-       mode = mlx5_eswitch_mode(ldev->pf[MLX5_LAG_P1].dev);
-
-       if (mode != MLX5_ESWITCH_NONE && mode != MLX5_ESWITCH_OFFLOADS)
+       dev = ldev->pf[MLX5_LAG_P1].dev;
+       if ((mlx5_sriov_is_enabled(dev)) && !is_mdev_switchdev_mode(dev))
                return false;
 
+       mode = mlx5_eswitch_mode(dev);
        for (i = 0; i < ldev->ports; i++)
                if (mlx5_eswitch_mode(ldev->pf[i].dev) != mode)
                        return false;
@@ -760,8 +761,7 @@ static bool mlx5_lag_is_roce_lag(struct mlx5_lag *ldev)
 
 #ifdef CONFIG_MLX5_ESWITCH
        for (i = 0; i < ldev->ports; i++)
-               roce_lag = roce_lag &&
-                       ldev->pf[i].dev->priv.eswitch->mode == MLX5_ESWITCH_NONE;
+               roce_lag = roce_lag && is_mdev_legacy_mode(ldev->pf[i].dev);
 #endif
 
        return roce_lag;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c
new file mode 100644 (file)
index 0000000..21e1450
--- /dev/null
@@ -0,0 +1,433 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
+#include <linux/mlx5/device.h>
+#include <linux/mlx5/transobj.h>
+#include "aso.h"
+#include "wq.h"
+
+struct mlx5_aso_cq {
+       /* data path - accessed per cqe */
+       struct mlx5_cqwq           wq;
+
+       /* data path - accessed per napi poll */
+       struct mlx5_core_cq        mcq;
+
+       /* control */
+       struct mlx5_core_dev      *mdev;
+       struct mlx5_wq_ctrl        wq_ctrl;
+} ____cacheline_aligned_in_smp;
+
+struct mlx5_aso {
+       /* data path */
+       u16                        cc;
+       u16                        pc;
+
+       struct mlx5_wqe_ctrl_seg  *doorbell_cseg;
+       struct mlx5_aso_cq         cq;
+
+       /* read only */
+       struct mlx5_wq_cyc         wq;
+       void __iomem              *uar_map;
+       u32                        sqn;
+
+       /* control path */
+       struct mlx5_wq_ctrl        wq_ctrl;
+
+} ____cacheline_aligned_in_smp;
+
+static void mlx5_aso_free_cq(struct mlx5_aso_cq *cq)
+{
+       mlx5_wq_destroy(&cq->wq_ctrl);
+}
+
+static int mlx5_aso_alloc_cq(struct mlx5_core_dev *mdev, int numa_node,
+                            void *cqc_data, struct mlx5_aso_cq *cq)
+{
+       struct mlx5_core_cq *mcq = &cq->mcq;
+       struct mlx5_wq_param param;
+       int err;
+       u32 i;
+
+       param.buf_numa_node = numa_node;
+       param.db_numa_node = numa_node;
+
+       err = mlx5_cqwq_create(mdev, &param, cqc_data, &cq->wq, &cq->wq_ctrl);
+       if (err)
+               return err;
+
+       mcq->cqe_sz     = 64;
+       mcq->set_ci_db  = cq->wq_ctrl.db.db;
+       mcq->arm_db     = cq->wq_ctrl.db.db + 1;
+
+       for (i = 0; i < mlx5_cqwq_get_size(&cq->wq); i++) {
+               struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, i);
+
+               cqe->op_own = 0xf1;
+       }
+
+       cq->mdev = mdev;
+
+       return 0;
+}
+
+static int create_aso_cq(struct mlx5_aso_cq *cq, void *cqc_data)
+{
+       u32 out[MLX5_ST_SZ_DW(create_cq_out)];
+       struct mlx5_core_dev *mdev = cq->mdev;
+       struct mlx5_core_cq *mcq = &cq->mcq;
+       void *in, *cqc;
+       int inlen, eqn;
+       int err;
+
+       err = mlx5_vector2eqn(mdev, 0, &eqn);
+       if (err)
+               return err;
+
+       inlen = MLX5_ST_SZ_BYTES(create_cq_in) +
+               sizeof(u64) * cq->wq_ctrl.buf.npages;
+       in = kvzalloc(inlen, GFP_KERNEL);
+       if (!in)
+               return -ENOMEM;
+
+       cqc = MLX5_ADDR_OF(create_cq_in, in, cq_context);
+
+       memcpy(cqc, cqc_data, MLX5_ST_SZ_BYTES(cqc));
+
+       mlx5_fill_page_frag_array(&cq->wq_ctrl.buf,
+                                 (__be64 *)MLX5_ADDR_OF(create_cq_in, in, pas));
+
+       MLX5_SET(cqc,   cqc, cq_period_mode, DIM_CQ_PERIOD_MODE_START_FROM_EQE);
+       MLX5_SET(cqc,   cqc, c_eqn_or_apu_element, eqn);
+       MLX5_SET(cqc,   cqc, uar_page,      mdev->priv.uar->index);
+       MLX5_SET(cqc,   cqc, log_page_size, cq->wq_ctrl.buf.page_shift -
+                                           MLX5_ADAPTER_PAGE_SHIFT);
+       MLX5_SET64(cqc, cqc, dbr_addr,      cq->wq_ctrl.db.dma);
+
+       err = mlx5_core_create_cq(mdev, mcq, in, inlen, out, sizeof(out));
+
+       kvfree(in);
+
+       return err;
+}
+
+static void mlx5_aso_destroy_cq(struct mlx5_aso_cq *cq)
+{
+       mlx5_core_destroy_cq(cq->mdev, &cq->mcq);
+       mlx5_wq_destroy(&cq->wq_ctrl);
+}
+
+static int mlx5_aso_create_cq(struct mlx5_core_dev *mdev, int numa_node,
+                             struct mlx5_aso_cq *cq)
+{
+       void *cqc_data;
+       int err;
+
+       cqc_data = kvzalloc(MLX5_ST_SZ_BYTES(cqc), GFP_KERNEL);
+       if (!cqc_data)
+               return -ENOMEM;
+
+       MLX5_SET(cqc, cqc_data, log_cq_size, 1);
+       MLX5_SET(cqc, cqc_data, uar_page, mdev->priv.uar->index);
+       if (MLX5_CAP_GEN(mdev, cqe_128_always) && cache_line_size() >= 128)
+               MLX5_SET(cqc, cqc_data, cqe_sz, CQE_STRIDE_128_PAD);
+
+       err = mlx5_aso_alloc_cq(mdev, numa_node, cqc_data, cq);
+       if (err) {
+               mlx5_core_err(mdev, "Failed to alloc aso wq cq, err=%d\n", err);
+               goto err_out;
+       }
+
+       err = create_aso_cq(cq, cqc_data);
+       if (err) {
+               mlx5_core_err(mdev, "Failed to create aso wq cq, err=%d\n", err);
+               goto err_free_cq;
+       }
+
+       kvfree(cqc_data);
+       return 0;
+
+err_free_cq:
+       mlx5_aso_free_cq(cq);
+err_out:
+       kvfree(cqc_data);
+       return err;
+}
+
+static int mlx5_aso_alloc_sq(struct mlx5_core_dev *mdev, int numa_node,
+                            void *sqc_data, struct mlx5_aso *sq)
+{
+       void *sqc_wq = MLX5_ADDR_OF(sqc, sqc_data, wq);
+       struct mlx5_wq_cyc *wq = &sq->wq;
+       struct mlx5_wq_param param;
+       int err;
+
+       sq->uar_map = mdev->mlx5e_res.hw_objs.bfreg.map;
+
+       param.db_numa_node = numa_node;
+       param.buf_numa_node = numa_node;
+       err = mlx5_wq_cyc_create(mdev, &param, sqc_wq, wq, &sq->wq_ctrl);
+       if (err)
+               return err;
+       wq->db = &wq->db[MLX5_SND_DBR];
+
+       return 0;
+}
+
+static int create_aso_sq(struct mlx5_core_dev *mdev, int pdn,
+                        void *sqc_data, struct mlx5_aso *sq)
+{
+       void *in, *sqc, *wq;
+       int inlen, err;
+
+       inlen = MLX5_ST_SZ_BYTES(create_sq_in) +
+               sizeof(u64) * sq->wq_ctrl.buf.npages;
+       in = kvzalloc(inlen, GFP_KERNEL);
+       if (!in)
+               return -ENOMEM;
+
+       sqc = MLX5_ADDR_OF(create_sq_in, in, ctx);
+       wq = MLX5_ADDR_OF(sqc, sqc, wq);
+
+       memcpy(sqc, sqc_data, MLX5_ST_SZ_BYTES(sqc));
+       MLX5_SET(sqc,  sqc, cqn, sq->cq.mcq.cqn);
+
+       MLX5_SET(sqc,  sqc, state, MLX5_SQC_STATE_RST);
+       MLX5_SET(sqc,  sqc, flush_in_error_en, 1);
+
+       MLX5_SET(wq,   wq, wq_type,       MLX5_WQ_TYPE_CYCLIC);
+       MLX5_SET(wq,   wq, uar_page,      mdev->mlx5e_res.hw_objs.bfreg.index);
+       MLX5_SET(wq,   wq, log_wq_pg_sz,  sq->wq_ctrl.buf.page_shift -
+                                         MLX5_ADAPTER_PAGE_SHIFT);
+       MLX5_SET64(wq, wq, dbr_addr,      sq->wq_ctrl.db.dma);
+
+       mlx5_fill_page_frag_array(&sq->wq_ctrl.buf,
+                                 (__be64 *)MLX5_ADDR_OF(wq, wq, pas));
+
+       err = mlx5_core_create_sq(mdev, in, inlen, &sq->sqn);
+
+       kvfree(in);
+
+       return err;
+}
+
+static int mlx5_aso_set_sq_rdy(struct mlx5_core_dev *mdev, u32 sqn)
+{
+       void *in, *sqc;
+       int inlen, err;
+
+       inlen = MLX5_ST_SZ_BYTES(modify_sq_in);
+       in = kvzalloc(inlen, GFP_KERNEL);
+       if (!in)
+               return -ENOMEM;
+
+       MLX5_SET(modify_sq_in, in, sq_state, MLX5_SQC_STATE_RST);
+       sqc = MLX5_ADDR_OF(modify_sq_in, in, ctx);
+       MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RDY);
+
+       err = mlx5_core_modify_sq(mdev, sqn, in);
+
+       kvfree(in);
+
+       return err;
+}
+
+static int mlx5_aso_create_sq_rdy(struct mlx5_core_dev *mdev, u32 pdn,
+                                 void *sqc_data, struct mlx5_aso *sq)
+{
+       int err;
+
+       err = create_aso_sq(mdev, pdn, sqc_data, sq);
+       if (err)
+               return err;
+
+       err = mlx5_aso_set_sq_rdy(mdev, sq->sqn);
+       if (err)
+               mlx5_core_destroy_sq(mdev, sq->sqn);
+
+       return err;
+}
+
+static void mlx5_aso_free_sq(struct mlx5_aso *sq)
+{
+       mlx5_wq_destroy(&sq->wq_ctrl);
+}
+
+static void mlx5_aso_destroy_sq(struct mlx5_aso *sq)
+{
+       mlx5_core_destroy_sq(sq->cq.mdev, sq->sqn);
+       mlx5_aso_free_sq(sq);
+}
+
+static int mlx5_aso_create_sq(struct mlx5_core_dev *mdev, int numa_node,
+                             u32 pdn, struct mlx5_aso *sq)
+{
+       void *sqc_data, *wq;
+       int err;
+
+       sqc_data = kvzalloc(MLX5_ST_SZ_BYTES(sqc), GFP_KERNEL);
+       if (!sqc_data)
+               return -ENOMEM;
+
+       wq = MLX5_ADDR_OF(sqc, sqc_data, wq);
+       MLX5_SET(wq, wq, log_wq_stride, ilog2(MLX5_SEND_WQE_BB));
+       MLX5_SET(wq, wq, pd, pdn);
+       MLX5_SET(wq, wq, log_wq_sz, 1);
+
+       err = mlx5_aso_alloc_sq(mdev, numa_node, sqc_data, sq);
+       if (err) {
+               mlx5_core_err(mdev, "Failed to alloc aso wq sq, err=%d\n", err);
+               goto err_out;
+       }
+
+       err = mlx5_aso_create_sq_rdy(mdev, pdn, sqc_data, sq);
+       if (err) {
+               mlx5_core_err(mdev, "Failed to open aso wq sq, err=%d\n", err);
+               goto err_free_asosq;
+       }
+
+       mlx5_core_dbg(mdev, "aso sq->sqn = 0x%x\n", sq->sqn);
+
+       kvfree(sqc_data);
+       return 0;
+
+err_free_asosq:
+       mlx5_aso_free_sq(sq);
+err_out:
+       kvfree(sqc_data);
+       return err;
+}
+
+struct mlx5_aso *mlx5_aso_create(struct mlx5_core_dev *mdev, u32 pdn)
+{
+       int numa_node = dev_to_node(mlx5_core_dma_dev(mdev));
+       struct mlx5_aso *aso;
+       int err;
+
+       aso = kzalloc(sizeof(*aso), GFP_KERNEL);
+       if (!aso)
+               return ERR_PTR(-ENOMEM);
+
+       err = mlx5_aso_create_cq(mdev, numa_node, &aso->cq);
+       if (err)
+               goto err_cq;
+
+       err = mlx5_aso_create_sq(mdev, numa_node, pdn, aso);
+       if (err)
+               goto err_sq;
+
+       return aso;
+
+err_sq:
+       mlx5_aso_destroy_cq(&aso->cq);
+err_cq:
+       kfree(aso);
+       return ERR_PTR(err);
+}
+
+void mlx5_aso_destroy(struct mlx5_aso *aso)
+{
+       if (IS_ERR_OR_NULL(aso))
+               return;
+
+       mlx5_aso_destroy_sq(aso);
+       mlx5_aso_destroy_cq(&aso->cq);
+       kfree(aso);
+}
+
+void mlx5_aso_build_wqe(struct mlx5_aso *aso, u8 ds_cnt,
+                       struct mlx5_aso_wqe *aso_wqe,
+                       u32 obj_id, u32 opc_mode)
+{
+       struct mlx5_wqe_ctrl_seg *cseg = &aso_wqe->ctrl;
+
+       cseg->opmod_idx_opcode = cpu_to_be32((opc_mode << MLX5_WQE_CTRL_WQE_OPC_MOD_SHIFT) |
+                                            (aso->pc << MLX5_WQE_CTRL_WQE_INDEX_SHIFT) |
+                                            MLX5_OPCODE_ACCESS_ASO);
+       cseg->qpn_ds     = cpu_to_be32((aso->sqn << MLX5_WQE_CTRL_QPN_SHIFT) | ds_cnt);
+       cseg->fm_ce_se   = MLX5_WQE_CTRL_CQ_UPDATE;
+       cseg->general_id = cpu_to_be32(obj_id);
+}
+
+void *mlx5_aso_get_wqe(struct mlx5_aso *aso)
+{
+       u16 pi;
+
+       pi = mlx5_wq_cyc_ctr2ix(&aso->wq, aso->pc);
+       return mlx5_wq_cyc_get_wqe(&aso->wq, pi);
+}
+
+void mlx5_aso_post_wqe(struct mlx5_aso *aso, bool with_data,
+                      struct mlx5_wqe_ctrl_seg *doorbell_cseg)
+{
+       doorbell_cseg->fm_ce_se |= MLX5_WQE_CTRL_CQ_UPDATE;
+       /* ensure wqe is visible to device before updating doorbell record */
+       dma_wmb();
+
+       if (with_data)
+               aso->pc += MLX5_ASO_WQEBBS_DATA;
+       else
+               aso->pc += MLX5_ASO_WQEBBS;
+       *aso->wq.db = cpu_to_be32(aso->pc);
+
+       /* ensure doorbell record is visible to device before ringing the
+        * doorbell
+        */
+       wmb();
+
+       mlx5_write64((__be32 *)doorbell_cseg, aso->uar_map);
+
+       /* Ensure doorbell is written on uar_page before poll_cq */
+       WRITE_ONCE(doorbell_cseg, NULL);
+}
+
+int mlx5_aso_poll_cq(struct mlx5_aso *aso, bool with_data, u32 interval_ms)
+{
+       struct mlx5_aso_cq *cq = &aso->cq;
+       struct mlx5_cqe64 *cqe;
+       unsigned long expires;
+
+       cqe = mlx5_cqwq_get_cqe(&cq->wq);
+
+       expires = jiffies + msecs_to_jiffies(interval_ms);
+       while (!cqe && time_is_after_jiffies(expires)) {
+               usleep_range(2, 10);
+               cqe = mlx5_cqwq_get_cqe(&cq->wq);
+       }
+
+       if (!cqe)
+               return -ETIMEDOUT;
+
+       /* sq->cc must be updated only after mlx5_cqwq_update_db_record(),
+        * otherwise a cq overrun may occur
+        */
+       mlx5_cqwq_pop(&cq->wq);
+
+       if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_REQ)) {
+               struct mlx5_err_cqe *err_cqe;
+
+               mlx5_core_err(cq->mdev, "Bad OP in ASOSQ CQE: 0x%x\n",
+                             get_cqe_opcode(cqe));
+
+               err_cqe = (struct mlx5_err_cqe *)cqe;
+               mlx5_core_err(cq->mdev, "vendor_err_synd=%x\n",
+                             err_cqe->vendor_err_synd);
+               mlx5_core_err(cq->mdev, "syndrome=%x\n",
+                             err_cqe->syndrome);
+               print_hex_dump(KERN_WARNING, "", DUMP_PREFIX_OFFSET,
+                              16, 1, err_cqe,
+                              sizeof(*err_cqe), false);
+       }
+
+       mlx5_cqwq_update_db_record(&cq->wq);
+
+       /* ensure cq space is freed before enabling more cqes */
+       wmb();
+
+       if (with_data)
+               aso->cc += MLX5_ASO_WQEBBS_DATA;
+       else
+               aso->cc += MLX5_ASO_WQEBBS;
+
+       return 0;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h
new file mode 100644 (file)
index 0000000..b3bbf28
--- /dev/null
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#ifndef __MLX5_LIB_ASO_H__
+#define __MLX5_LIB_ASO_H__
+
+#include <linux/mlx5/qp.h>
+#include "mlx5_core.h"
+
+#define MLX5_ASO_WQEBBS \
+       (DIV_ROUND_UP(sizeof(struct mlx5_aso_wqe), MLX5_SEND_WQE_BB))
+#define MLX5_ASO_WQEBBS_DATA \
+       (DIV_ROUND_UP(sizeof(struct mlx5_aso_wqe_data), MLX5_SEND_WQE_BB))
+#define MLX5_WQE_CTRL_WQE_OPC_MOD_SHIFT 24
+
+struct mlx5_wqe_aso_ctrl_seg {
+       __be32  va_h;
+       __be32  va_l; /* include read_enable */
+       __be32  l_key;
+       u8      data_mask_mode;
+       u8      condition_1_0_operand;
+       u8      condition_1_0_offset;
+       u8      data_offset_condition_operand;
+       __be32  condition_0_data;
+       __be32  condition_0_mask;
+       __be32  condition_1_data;
+       __be32  condition_1_mask;
+       __be64  bitwise_data;
+       __be64  data_mask;
+};
+
+struct mlx5_wqe_aso_data_seg {
+       __be32  bytewise_data[16];
+};
+
+struct mlx5_aso_wqe {
+       struct mlx5_wqe_ctrl_seg      ctrl;
+       struct mlx5_wqe_aso_ctrl_seg  aso_ctrl;
+};
+
+struct mlx5_aso_wqe_data {
+       struct mlx5_wqe_ctrl_seg      ctrl;
+       struct mlx5_wqe_aso_ctrl_seg  aso_ctrl;
+       struct mlx5_wqe_aso_data_seg  aso_data;
+};
+
+enum {
+       MLX5_ASO_LOGICAL_AND,
+       MLX5_ASO_LOGICAL_OR,
+};
+
+enum {
+       MLX5_ASO_ALWAYS_FALSE,
+       MLX5_ASO_ALWAYS_TRUE,
+       MLX5_ASO_EQUAL,
+       MLX5_ASO_NOT_EQUAL,
+       MLX5_ASO_GREATER_OR_EQUAL,
+       MLX5_ASO_LESSER_OR_EQUAL,
+       MLX5_ASO_LESSER,
+       MLX5_ASO_GREATER,
+       MLX5_ASO_CYCLIC_GREATER,
+       MLX5_ASO_CYCLIC_LESSER,
+};
+
+enum {
+       MLX5_ASO_DATA_MASK_MODE_BITWISE_64BIT,
+       MLX5_ASO_DATA_MASK_MODE_BYTEWISE_64BYTE,
+       MLX5_ASO_DATA_MASK_MODE_CALCULATED_64BYTE,
+};
+
+enum {
+       MLX5_ACCESS_ASO_OPC_MOD_FLOW_METER = 0x2,
+};
+
+struct mlx5_aso;
+
+void *mlx5_aso_get_wqe(struct mlx5_aso *aso);
+void mlx5_aso_build_wqe(struct mlx5_aso *aso, u8 ds_cnt,
+                       struct mlx5_aso_wqe *aso_wqe,
+                       u32 obj_id, u32 opc_mode);
+void mlx5_aso_post_wqe(struct mlx5_aso *aso, bool with_data,
+                      struct mlx5_wqe_ctrl_seg *doorbell_cseg);
+int mlx5_aso_poll_cq(struct mlx5_aso *aso, bool with_data, u32 interval_ms);
+
+struct mlx5_aso *mlx5_aso_create(struct mlx5_core_dev *mdev, u32 pdn);
+void mlx5_aso_destroy(struct mlx5_aso *aso);
+#endif /* __MLX5_LIB_ASO_H__ */
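Taken together, aso.c/aso.h implement a small single-producer SQ/CQ pair. A minimal round-trip sketch, assuming the caller provides mdev, pdn, obj_id and a ds_cnt sized for the WQE being posted:

	struct mlx5_aso *aso = mlx5_aso_create(mdev, pdn);
	struct mlx5_aso_wqe *wqe;
	int err;

	if (IS_ERR(aso))
		return PTR_ERR(aso);

	wqe = mlx5_aso_get_wqe(aso);
	mlx5_aso_build_wqe(aso, ds_cnt, wqe, obj_id,
			   MLX5_ACCESS_ASO_OPC_MOD_FLOW_METER);
	/* fill wqe->aso_ctrl for the object type before posting */
	mlx5_aso_post_wqe(aso, false, &wqe->ctrl);
	err = mlx5_aso_poll_cq(aso, false, 10 /* timeout in ms */);

	mlx5_aso_destroy(aso);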
index 2078d9f..a9e51c1 100644 (file)
@@ -1250,6 +1250,7 @@ static void mlx5_unload(struct mlx5_core_dev *dev)
 {
        mlx5_sf_dev_table_destroy(dev);
        mlx5_sriov_detach(dev);
+       mlx5_eswitch_disable(dev->priv.eswitch);
        mlx5_lag_remove_mdev(dev);
        mlx5_ec_cleanup(dev);
        mlx5_sf_hw_table_destroy(dev);
index 3be659c..7d955a4 100644 (file)
@@ -501,7 +501,7 @@ static int mlx5_sf_esw_event(struct notifier_block *nb, unsigned long event, voi
        case MLX5_ESWITCH_OFFLOADS:
                mlx5_sf_table_enable(table);
                break;
-       case MLX5_ESWITCH_NONE:
+       case MLX5_ESWITCH_LEGACY:
                mlx5_sf_table_disable(table);
                break;
        default:
index 2935614..5757cd6 100644 (file)
@@ -145,8 +145,7 @@ mlx5_device_disable_sriov(struct mlx5_core_dev *dev, int num_vfs, bool clear_vf)
                sriov->vfs_ctx[vf].enabled = 0;
        }
 
-       if (MLX5_ESWITCH_MANAGER(dev))
-               mlx5_eswitch_disable(dev->priv.eswitch, clear_vf);
+       mlx5_eswitch_disable_sriov(dev->priv.eswitch, clear_vf);
 
        if (mlx5_wait_for_pages(dev, &dev->priv.vfs_pages))
                mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
index c57e293..c2d6d64 100644 (file)
@@ -28,7 +28,8 @@ mlxsw_spectrum-objs           := spectrum.o spectrum_buffers.o \
                                   spectrum_qdisc.o spectrum_span.o \
                                   spectrum_nve.o spectrum_nve_vxlan.o \
                                   spectrum_dpipe.o spectrum_trap.o \
-                                  spectrum_ethtool.o spectrum_policer.o
+                                  spectrum_ethtool.o spectrum_policer.o \
+                                  spectrum_pgt.o
 mlxsw_spectrum-$(CONFIG_MLXSW_SPECTRUM_DCB)    += spectrum_dcb.o
 mlxsw_spectrum-$(CONFIG_PTP_1588_CLOCK)                += spectrum_ptp.o
 obj-$(CONFIG_MLXSW_MINIMAL)    += mlxsw_minimal.o
index 91f68fb..666d6b6 100644 (file)
@@ -633,6 +633,12 @@ MLXSW_ITEM32(cmd_mbox, config_profile,
  */
 MLXSW_ITEM32(cmd_mbox, config_profile, set_ar_sec, 0x0C, 15, 1);
 
+/* cmd_mbox_config_set_ubridge
+ * Capability bit. Setting a bit to 1 configures the profile
+ * according to the mailbox contents.
+ */
+MLXSW_ITEM32(cmd_mbox, config_profile, set_ubridge, 0x0C, 22, 1);
+
 /* cmd_mbox_config_set_kvd_linear_size
  * Capability bit. Setting a bit to 1 configures the profile
  * according to the mailbox contents.
@@ -713,16 +719,25 @@ MLXSW_ITEM32(cmd_mbox, config_profile, max_flood_tables, 0x30, 16, 4);
  */
 MLXSW_ITEM32(cmd_mbox, config_profile, max_vid_flood_tables, 0x30, 8, 4);
 
+enum mlxsw_cmd_mbox_config_profile_flood_mode {
+       /* Mixed mode, where:
+        * max_flood_tables indicates the number of single-entry tables.
+        * max_vid_flood_tables indicates the number of per-VID tables.
+        * max_fid_offset_flood_tables indicates the number of FID-offset
+        * tables. max_fid_flood_tables indicates the number of per-FID tables.
+        * Reserved when unified bridge model is used.
+        */
+       MLXSW_CMD_MBOX_CONFIG_PROFILE_FLOOD_MODE_MIXED = 3,
+       /* Controlled flood tables. Reserved when legacy bridge model is
+        * used.
+        */
+       MLXSW_CMD_MBOX_CONFIG_PROFILE_FLOOD_MODE_CONTROLLED = 4,
+};
+
 /* cmd_mbox_config_profile_flood_mode
  * Flooding mode to use.
- * 0-2 - Backward compatible modes for SwitchX devices.
- * 3 - Mixed mode, where:
- * max_flood_tables indicates the number of single-entry tables.
- * max_vid_flood_tables indicates the number of per-VID tables.
- * max_fid_offset_flood_tables indicates the number of FID-offset tables.
- * max_fid_flood_tables indicates the number of per-FID tables.
  */
-MLXSW_ITEM32(cmd_mbox, config_profile, flood_mode, 0x30, 0, 2);
+MLXSW_ITEM32(cmd_mbox, config_profile, flood_mode, 0x30, 0, 3);
 
 /* cmd_mbox_config_profile_max_fid_offset_flood_tables
  * Maximum number of FID-offset flooding tables.
@@ -783,6 +798,13 @@ MLXSW_ITEM32(cmd_mbox, config_profile, adaptive_routing_group_cap, 0x4C, 0, 16);
  */
 MLXSW_ITEM32(cmd_mbox, config_profile, arn, 0x50, 31, 1);
 
+/* cmd_mbox_config_profile_ubridge
+ * Unified Bridge
+ * 0 - non-unified bridge
+ * 1 - unified bridge
+ */
+MLXSW_ITEM32(cmd_mbox, config_profile, ubridge, 0x50, 4, 1);
+
 /* cmd_mbox_config_kvd_linear_size
  * KVD Linear Size
  * Valid for Spectrum only
index d1e8b8b..a3491ef 100644 (file)
@@ -295,6 +295,7 @@ struct mlxsw_config_profile {
                used_max_pkey:1,
                used_ar_sec:1,
                used_adaptive_routing_group_cap:1,
+               used_ubridge:1,
                used_kvd_sizes:1;
        u8      max_vepa_channels;
        u16     max_mid;
@@ -314,6 +315,7 @@ struct mlxsw_config_profile {
        u8      ar_sec;
        u16     adaptive_routing_group_cap;
        u8      arn;
+       u8      ubridge;
        u32     kvd_linear_size;
        u8      kvd_hash_single_parts;
        u8      kvd_hash_double_parts;
index fa33cae..636db9a 100644 (file)
@@ -1164,7 +1164,7 @@ EXPORT_SYMBOL(mlxsw_afa_block_append_vlan_modify);
  * trap control. In addition, the Trap / Discard action enables activating
  * SPAN (port mirroring).
  *
- * The Trap with userdef action action has the same functionality as
+ * The Trap with userdef action has the same functionality as
  * the Trap action with addition of user defined value that can be set
  * and used by higher layer applications.
  */
index 34bec9c..0107cbc 100644 (file)
@@ -180,7 +180,7 @@ mlxsw_env_query_module_eeprom(struct mlxsw_core *mlxsw_core, u8 slot_index,
                } else {
                        /* When reading upper pages 1, 2 and 3 the offset
                         * starts at 0 and I2C high address is used. Please refer
-                        * refer to "Memory Organization" figure in SFF-8472
+                        * to "Memory Organization" figure in SFF-8472
                         * specification for graphical depiction.
                         */
                        i2c_addr = MLXSW_REG_MCIA_I2C_ADDR_HIGH;
index 8dd2479..41f0f68 100644 (file)
@@ -1235,6 +1235,11 @@ static int mlxsw_pci_config_profile(struct mlxsw_pci *mlxsw_pci, char *mbox,
                mlxsw_cmd_mbox_config_profile_adaptive_routing_group_cap_set(
                        mbox, profile->adaptive_routing_group_cap);
        }
+       if (profile->used_ubridge) {
+               mlxsw_cmd_mbox_config_profile_set_ubridge_set(mbox, 1);
+               mlxsw_cmd_mbox_config_profile_ubridge_set(mbox,
+                                                         profile->ubridge);
+       }
        if (profile->used_kvd_sizes && MLXSW_RES_VALID(res, KVD_SIZE)) {
                err = mlxsw_pci_profile_get_kvd_sizes(mlxsw_pci, profile, res);
                if (err)
@@ -1551,6 +1556,14 @@ static int mlxsw_pci_init(void *bus_priv, struct mlxsw_core *mlxsw_core,
        if (err)
                goto err_config_profile;
 
+       /* Some resources depend on the unified bridge model, which is configured
+        * as part of config_profile. Query the resources again to get correct
+        * values.
+        */
+       err = mlxsw_core_resources_query(mlxsw_core, mbox, res);
+       if (err)
+               goto err_requery_resources;
+
        err = mlxsw_pci_aqs_init(mlxsw_pci, mbox);
        if (err)
                goto err_aqs_init;
@@ -1568,6 +1581,7 @@ static int mlxsw_pci_init(void *bus_priv, struct mlxsw_core *mlxsw_core,
 err_request_eq_irq:
        mlxsw_pci_aqs_fini(mlxsw_pci);
 err_aqs_init:
+err_requery_resources:
 err_config_profile:
 err_cqe_v_check:
 err_query_resources:
index c9070e2..17ce28e 100644 (file)
@@ -380,7 +380,7 @@ static inline void mlxsw_reg_sfd_rec_pack(char *payload, int rec_index,
 
 static inline void mlxsw_reg_sfd_uc_pack(char *payload, int rec_index,
                                         enum mlxsw_reg_sfd_rec_policy policy,
-                                        const char *mac, u16 fid_vid,
+                                        const char *mac, u16 fid_vid, u16 vid,
                                         enum mlxsw_reg_sfd_rec_action action,
                                         u16 local_port)
 {
@@ -389,6 +389,8 @@ static inline void mlxsw_reg_sfd_uc_pack(char *payload, int rec_index,
        mlxsw_reg_sfd_rec_policy_set(payload, rec_index, policy);
        mlxsw_reg_sfd_uc_sub_port_set(payload, rec_index, 0);
        mlxsw_reg_sfd_uc_fid_vid_set(payload, rec_index, fid_vid);
+       mlxsw_reg_sfd_uc_set_vid_set(payload, rec_index, vid ? true : false);
+       mlxsw_reg_sfd_uc_vid_set(payload, rec_index, vid);
        mlxsw_reg_sfd_uc_system_port_set(payload, rec_index, local_port);
 }
 
@@ -454,6 +456,7 @@ mlxsw_reg_sfd_uc_lag_pack(char *payload, int rec_index,
        mlxsw_reg_sfd_rec_policy_set(payload, rec_index, policy);
        mlxsw_reg_sfd_uc_lag_sub_port_set(payload, rec_index, 0);
        mlxsw_reg_sfd_uc_lag_fid_vid_set(payload, rec_index, fid_vid);
+       mlxsw_reg_sfd_uc_lag_set_vid_set(payload, rec_index, true);
        mlxsw_reg_sfd_uc_lag_lag_vid_set(payload, rec_index, lag_vid);
        mlxsw_reg_sfd_uc_lag_lag_id_set(payload, rec_index, lag_id);
 }
@@ -1054,9 +1057,10 @@ enum mlxsw_reg_sfgc_type {
  */
 MLXSW_ITEM32(reg, sfgc, type, 0x00, 0, 4);
 
-enum mlxsw_reg_sfgc_bridge_type {
-       MLXSW_REG_SFGC_BRIDGE_TYPE_1Q_FID = 0,
-       MLXSW_REG_SFGC_BRIDGE_TYPE_VFID = 1,
+/* bridge_type is used in SFGC and SFMR. */
+enum mlxsw_reg_bridge_type {
+       MLXSW_REG_BRIDGE_TYPE_0 = 0, /* Used for .1q FIDs. */
+       MLXSW_REG_BRIDGE_TYPE_1 = 1, /* Used for .1d FIDs. */
 };
 
 /* reg_sfgc_bridge_type
@@ -1111,15 +1115,16 @@ MLXSW_ITEM32(reg, sfgc, mid_base, 0x10, 0, 16);
 
 static inline void
 mlxsw_reg_sfgc_pack(char *payload, enum mlxsw_reg_sfgc_type type,
-                   enum mlxsw_reg_sfgc_bridge_type bridge_type,
+                   enum mlxsw_reg_bridge_type bridge_type,
                    enum mlxsw_flood_table_type table_type,
-                   unsigned int flood_table)
+                   unsigned int flood_table, u16 mid_base)
 {
        MLXSW_REG_ZERO(sfgc, payload);
        mlxsw_reg_sfgc_type_set(payload, type);
        mlxsw_reg_sfgc_bridge_type_set(payload, bridge_type);
        mlxsw_reg_sfgc_table_type_set(payload, table_type);
        mlxsw_reg_sfgc_flood_table_set(payload, flood_table);
+       mlxsw_reg_sfgc_mid_base_set(payload, mid_base);
 }
 
 /* SFDF - Switch Filtering DB Flush
@@ -1653,40 +1658,43 @@ MLXSW_ITEM32(reg, svfa, irif, 0x14, 0, 16);
 
 static inline void __mlxsw_reg_svfa_pack(char *payload,
                                         enum mlxsw_reg_svfa_mt mt, bool valid,
-                                        u16 fid)
+                                        u16 fid, bool irif_v, u16 irif)
 {
        MLXSW_REG_ZERO(svfa, payload);
        mlxsw_reg_svfa_swid_set(payload, 0);
        mlxsw_reg_svfa_mapping_table_set(payload, mt);
        mlxsw_reg_svfa_v_set(payload, valid);
        mlxsw_reg_svfa_fid_set(payload, fid);
+       mlxsw_reg_svfa_irif_v_set(payload, irif_v);
+       mlxsw_reg_svfa_irif_set(payload, irif_v ? irif : 0);
 }
 
 static inline void mlxsw_reg_svfa_port_vid_pack(char *payload, u16 local_port,
-                                               bool valid, u16 fid, u16 vid)
+                                               bool valid, u16 fid, u16 vid,
+                                               bool irif_v, u16 irif)
 {
        enum mlxsw_reg_svfa_mt mt = MLXSW_REG_SVFA_MT_PORT_VID_TO_FID;
 
-       __mlxsw_reg_svfa_pack(payload, mt, valid, fid);
+       __mlxsw_reg_svfa_pack(payload, mt, valid, fid, irif_v, irif);
        mlxsw_reg_svfa_local_port_set(payload, local_port);
        mlxsw_reg_svfa_vid_set(payload, vid);
 }
 
 static inline void mlxsw_reg_svfa_vid_pack(char *payload, bool valid, u16 fid,
-                                          u16 vid)
+                                          u16 vid, bool irif_v, u16 irif)
 {
        enum mlxsw_reg_svfa_mt mt = MLXSW_REG_SVFA_MT_VID_TO_FID;
 
-       __mlxsw_reg_svfa_pack(payload, mt, valid, fid);
+       __mlxsw_reg_svfa_pack(payload, mt, valid, fid, irif_v, irif);
        mlxsw_reg_svfa_vid_set(payload, vid);
 }
 
 static inline void mlxsw_reg_svfa_vni_pack(char *payload, bool valid, u16 fid,
-                                          u32 vni)
+                                          u32 vni, bool irif_v, u16 irif)
 {
        enum mlxsw_reg_svfa_mt mt = MLXSW_REG_SVFA_MT_VNI_TO_FID;
 
-       __mlxsw_reg_svfa_pack(payload, mt, valid, fid);
+       __mlxsw_reg_svfa_pack(payload, mt, valid, fid, irif_v, irif);
        mlxsw_reg_svfa_vni_set(payload, vni);
 }
 
@@ -1960,7 +1968,9 @@ MLXSW_ITEM32(reg, sfmr, smpe, 0x28, 0, 16);
 
 static inline void mlxsw_reg_sfmr_pack(char *payload,
                                       enum mlxsw_reg_sfmr_op op, u16 fid,
-                                      u16 fid_offset)
+                                      u16 fid_offset, bool flood_rsp,
+                                      enum mlxsw_reg_bridge_type bridge_type,
+                                      bool smpe_valid, u16 smpe)
 {
        MLXSW_REG_ZERO(sfmr, payload);
        mlxsw_reg_sfmr_op_set(payload, op);
@@ -1968,6 +1978,10 @@ static inline void mlxsw_reg_sfmr_pack(char *payload,
        mlxsw_reg_sfmr_fid_offset_set(payload, fid_offset);
        mlxsw_reg_sfmr_vtfp_set(payload, false);
        mlxsw_reg_sfmr_vv_set(payload, false);
+       mlxsw_reg_sfmr_flood_rsp_set(payload, flood_rsp);
+       mlxsw_reg_sfmr_flood_bridge_type_set(payload, bridge_type);
+       mlxsw_reg_sfmr_smpe_valid_set(payload, smpe_valid);
+       mlxsw_reg_sfmr_smpe_set(payload, smpe);
 }
 
 /* SPVMLR - Switch Port VLAN MAC Learning Register
@@ -6937,16 +6951,6 @@ MLXSW_ITEM32(reg, ritr, vlan_if_efid, 0x0C, 0, 16);
  */
 MLXSW_ITEM32(reg, ritr, fid_if_fid, 0x08, 0, 16);
 
-static inline void mlxsw_reg_ritr_fid_set(char *payload,
-                                         enum mlxsw_reg_ritr_if_type rif_type,
-                                         u16 fid)
-{
-       if (rif_type == MLXSW_REG_RITR_FID_IF)
-               mlxsw_reg_ritr_fid_if_fid_set(payload, fid);
-       else
-               mlxsw_reg_ritr_vlan_if_vlan_id_set(payload, fid);
-}
-
 /* Sub-port Interface */
 
 /* reg_ritr_sp_if_lag
@@ -7112,10 +7116,11 @@ static inline void mlxsw_reg_ritr_rif_pack(char *payload, u16 rif)
 }
 
 static inline void mlxsw_reg_ritr_sp_if_pack(char *payload, bool lag,
-                                            u16 system_port, u16 vid)
+                                            u16 system_port, u16 efid, u16 vid)
 {
        mlxsw_reg_ritr_sp_if_lag_set(payload, lag);
        mlxsw_reg_ritr_sp_if_system_port_set(payload, system_port);
+       mlxsw_reg_ritr_sp_if_efid_set(payload, efid);
        mlxsw_reg_ritr_sp_if_vid_set(payload, vid);
 }
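With the unified-bridge fields added above, an SFMR FID create now spells out the flooding and SMPE parameters explicitly. An illustrative call under the new signature (the values here are assumptions for the example; SMPE validity is per-ASIC, cf. pgt_smpe_index_valid in the spectrum hunks below):

	mlxsw_reg_sfmr_pack(sfmr_pl, MLXSW_REG_SFMR_OP_CREATE_FID, fid_index,
			    fid_offset, false /* flood_rsp */,
			    MLXSW_REG_BRIDGE_TYPE_0, smpe_valid, smpe_index);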
 
index daacf62..826e47f 100644 (file)
@@ -11,6 +11,7 @@ enum mlxsw_res_id {
        MLXSW_RES_ID_KVD_SIZE,
        MLXSW_RES_ID_KVD_SINGLE_MIN_SIZE,
        MLXSW_RES_ID_KVD_DOUBLE_MIN_SIZE,
+       MLXSW_RES_ID_PGT_SIZE,
        MLXSW_RES_ID_MAX_KVD_LINEAR_RANGE,
        MLXSW_RES_ID_MAX_KVD_ACTION_SETS,
        MLXSW_RES_ID_MAX_TRAP_GROUPS,
@@ -69,6 +70,7 @@ static u16 mlxsw_res_ids[] = {
        [MLXSW_RES_ID_KVD_SIZE] = 0x1001,
        [MLXSW_RES_ID_KVD_SINGLE_MIN_SIZE] = 0x1002,
        [MLXSW_RES_ID_KVD_DOUBLE_MIN_SIZE] = 0x1003,
+       [MLXSW_RES_ID_PGT_SIZE] = 0x1004,
        [MLXSW_RES_ID_MAX_KVD_LINEAR_RANGE] = 0x1005,
        [MLXSW_RES_ID_MAX_KVD_ACTION_SETS] = 0x1007,
        [MLXSW_RES_ID_MAX_TRAP_GROUPS] = 0x2201,
index a62887b..a703ca2 100644 (file)
@@ -3010,6 +3010,12 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core,
                return err;
        }
 
+       err = mlxsw_sp_pgt_init(mlxsw_sp);
+       if (err) {
+               dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize PGT\n");
+               goto err_pgt_init;
+       }
+
        err = mlxsw_sp_fids_init(mlxsw_sp);
        if (err) {
                dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize FIDs\n");
@@ -3201,6 +3207,8 @@ err_traps_init:
 err_policers_init:
        mlxsw_sp_fids_fini(mlxsw_sp);
 err_fids_init:
+       mlxsw_sp_pgt_fini(mlxsw_sp);
+err_pgt_init:
        mlxsw_sp_kvdl_fini(mlxsw_sp);
        mlxsw_sp_parsing_fini(mlxsw_sp);
        return err;
@@ -3232,7 +3240,9 @@ static int mlxsw_sp1_init(struct mlxsw_core *mlxsw_core,
        mlxsw_sp->router_ops = &mlxsw_sp1_router_ops;
        mlxsw_sp->listeners = mlxsw_sp1_listener;
        mlxsw_sp->listeners_count = ARRAY_SIZE(mlxsw_sp1_listener);
+       mlxsw_sp->fid_family_arr = mlxsw_sp1_fid_family_arr;
        mlxsw_sp->lowest_shaper_bs = MLXSW_REG_QEEC_LOWEST_SHAPER_BS_SP1;
+       mlxsw_sp->pgt_smpe_index_valid = true;
 
        return mlxsw_sp_init(mlxsw_core, mlxsw_bus_info, extack);
 }
@@ -3264,7 +3274,9 @@ static int mlxsw_sp2_init(struct mlxsw_core *mlxsw_core,
        mlxsw_sp->router_ops = &mlxsw_sp2_router_ops;
        mlxsw_sp->listeners = mlxsw_sp2_listener;
        mlxsw_sp->listeners_count = ARRAY_SIZE(mlxsw_sp2_listener);
+       mlxsw_sp->fid_family_arr = mlxsw_sp2_fid_family_arr;
        mlxsw_sp->lowest_shaper_bs = MLXSW_REG_QEEC_LOWEST_SHAPER_BS_SP2;
+       mlxsw_sp->pgt_smpe_index_valid = false;
 
        return mlxsw_sp_init(mlxsw_core, mlxsw_bus_info, extack);
 }
@@ -3296,7 +3308,9 @@ static int mlxsw_sp3_init(struct mlxsw_core *mlxsw_core,
        mlxsw_sp->router_ops = &mlxsw_sp2_router_ops;
        mlxsw_sp->listeners = mlxsw_sp2_listener;
        mlxsw_sp->listeners_count = ARRAY_SIZE(mlxsw_sp2_listener);
+       mlxsw_sp->fid_family_arr = mlxsw_sp2_fid_family_arr;
        mlxsw_sp->lowest_shaper_bs = MLXSW_REG_QEEC_LOWEST_SHAPER_BS_SP3;
+       mlxsw_sp->pgt_smpe_index_valid = false;
 
        return mlxsw_sp_init(mlxsw_core, mlxsw_bus_info, extack);
 }
@@ -3328,7 +3342,9 @@ static int mlxsw_sp4_init(struct mlxsw_core *mlxsw_core,
        mlxsw_sp->router_ops = &mlxsw_sp2_router_ops;
        mlxsw_sp->listeners = mlxsw_sp2_listener;
        mlxsw_sp->listeners_count = ARRAY_SIZE(mlxsw_sp2_listener);
+       mlxsw_sp->fid_family_arr = mlxsw_sp2_fid_family_arr;
        mlxsw_sp->lowest_shaper_bs = MLXSW_REG_QEEC_LOWEST_SHAPER_BS_SP4;
+       mlxsw_sp->pgt_smpe_index_valid = false;
 
        return mlxsw_sp_init(mlxsw_core, mlxsw_bus_info, extack);
 }
@@ -3361,28 +3377,20 @@ static void mlxsw_sp_fini(struct mlxsw_core *mlxsw_core)
        mlxsw_sp_traps_fini(mlxsw_sp);
        mlxsw_sp_policers_fini(mlxsw_sp);
        mlxsw_sp_fids_fini(mlxsw_sp);
+       mlxsw_sp_pgt_fini(mlxsw_sp);
        mlxsw_sp_kvdl_fini(mlxsw_sp);
        mlxsw_sp_parsing_fini(mlxsw_sp);
 }
 
-/* Per-FID flood tables are used for both "true" 802.1D FIDs and emulated
- * 802.1Q FIDs
- */
-#define MLXSW_SP_FID_FLOOD_TABLE_SIZE  (MLXSW_SP_FID_8021D_MAX + \
-                                        VLAN_VID_MASK - 1)
-
 static const struct mlxsw_config_profile mlxsw_sp1_config_profile = {
-       .used_max_mid                   = 1,
-       .max_mid                        = MLXSW_SP_MID_MAX,
-       .used_flood_tables              = 1,
-       .used_flood_mode                = 1,
-       .flood_mode                     = 3,
-       .max_fid_flood_tables           = 3,
-       .fid_flood_table_size           = MLXSW_SP_FID_FLOOD_TABLE_SIZE,
+       .used_flood_mode                = 1,
+       .flood_mode                     = MLXSW_CMD_MBOX_CONFIG_PROFILE_FLOOD_MODE_CONTROLLED,
        .used_max_ib_mc                 = 1,
        .max_ib_mc                      = 0,
        .used_max_pkey                  = 1,
        .max_pkey                       = 0,
+       .used_ubridge                   = 1,
+       .ubridge                        = 1,
        .used_kvd_sizes                 = 1,
        .kvd_hash_single_parts          = 59,
        .kvd_hash_double_parts          = 41,
@@ -3396,17 +3404,14 @@ static const struct mlxsw_config_profile mlxsw_sp1_config_profile = {
 };
 
 static const struct mlxsw_config_profile mlxsw_sp2_config_profile = {
-       .used_max_mid                   = 1,
-       .max_mid                        = MLXSW_SP_MID_MAX,
-       .used_flood_tables              = 1,
-       .used_flood_mode                = 1,
-       .flood_mode                     = 3,
-       .max_fid_flood_tables           = 3,
-       .fid_flood_table_size           = MLXSW_SP_FID_FLOOD_TABLE_SIZE,
+       .used_flood_mode                = 1,
+       .flood_mode                     = MLXSW_CMD_MBOX_CONFIG_PROFILE_FLOOD_MODE_CONTROLLED,
        .used_max_ib_mc                 = 1,
        .max_ib_mc                      = 0,
        .used_max_pkey                  = 1,
        .max_pkey                       = 0,
+       .used_ubridge                   = 1,
+       .ubridge                        = 1,
        .swid_config                    = {
                {
                        .used_type      = 1,
index 36c6f5b..50a9380 100644
@@ -112,15 +112,6 @@ enum mlxsw_sp_nve_type {
        MLXSW_SP_NVE_TYPE_VXLAN,
 };
 
-struct mlxsw_sp_mid {
-       struct list_head list;
-       unsigned char addr[ETH_ALEN];
-       u16 fid;
-       u16 mid;
-       bool in_hw;
-       unsigned long *ports_in_mid; /* bits array */
-};
-
 struct mlxsw_sp_sb;
 struct mlxsw_sp_bridge;
 struct mlxsw_sp_router;
@@ -143,6 +134,7 @@ struct mlxsw_sp_ptp_ops;
 struct mlxsw_sp_span_ops;
 struct mlxsw_sp_qdisc_state;
 struct mlxsw_sp_mall_entry;
+struct mlxsw_sp_pgt;
 
 struct mlxsw_sp_port_mapping {
        u8 module;
@@ -211,10 +203,13 @@ struct mlxsw_sp {
        const struct mlxsw_sp_mall_ops *mall_ops;
        const struct mlxsw_sp_router_ops *router_ops;
        const struct mlxsw_listener *listeners;
+       const struct mlxsw_sp_fid_family **fid_family_arr;
        size_t listeners_count;
        u32 lowest_shaper_bs;
        struct rhashtable ipv6_addr_ht;
        struct mutex ipv6_addr_ht_lock; /* Protects ipv6_addr_ht */
+       struct mlxsw_sp_pgt *pgt;
+       bool pgt_smpe_index_valid;
 };
 
 struct mlxsw_sp_ptp_ops {
@@ -390,6 +385,31 @@ struct mlxsw_sp_port_type_speed_ops {
        u32 (*ptys_proto_cap_masked_get)(u32 eth_proto_cap);
 };
 
+struct mlxsw_sp_ports_bitmap {
+       unsigned long *bitmap;
+       unsigned int nbits;
+};
+
+static inline int
+mlxsw_sp_port_bitmap_init(struct mlxsw_sp *mlxsw_sp,
+                         struct mlxsw_sp_ports_bitmap *ports_bm)
+{
+       unsigned int nbits = mlxsw_core_max_ports(mlxsw_sp->core);
+
+       ports_bm->nbits = nbits;
+       ports_bm->bitmap = bitmap_zalloc(nbits, GFP_KERNEL);
+       if (!ports_bm->bitmap)
+               return -ENOMEM;
+
+       return 0;
+}
+
+static inline void
+mlxsw_sp_port_bitmap_fini(struct mlxsw_sp_ports_bitmap *ports_bm)
+{
+       bitmap_free(ports_bm->bitmap);
+}
+
 static inline u8 mlxsw_sp_tunnel_ecn_decap(u8 outer_ecn, u8 inner_ecn,
                                           bool *trap_en)
 {
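(Illustration, not part of the commit: a minimal usage sketch for the new ports-bitmap helpers. The init call sizes the bitmap to mlxsw_core_max_ports() and must be balanced by the fini call; the function name is hypothetical.)

	static int example_walk_ports(struct mlxsw_sp *mlxsw_sp)
	{
		struct mlxsw_sp_ports_bitmap ports_bm;
		int err;

		err = mlxsw_sp_port_bitmap_init(mlxsw_sp, &ports_bm);
		if (err)
			return err;

		set_bit(1, ports_bm.bitmap);	/* mark a hypothetical port */
		/* ... for_each_set_bit(bit, ports_bm.bitmap, ports_bm.nbits) ... */

		mlxsw_sp_port_bitmap_fini(&ports_bm);
		return 0;
	}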
@@ -716,6 +736,7 @@ union mlxsw_sp_l3addr {
        struct in6_addr addr6;
 };
 
+u16 mlxsw_sp_rif_index(const struct mlxsw_sp_rif *rif);
 int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp,
                         struct netlink_ext_ack *extack);
 void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp);
@@ -1237,7 +1258,6 @@ int mlxsw_sp_setup_tc_block_qevent_mark(struct mlxsw_sp_port *mlxsw_sp_port,
 
 /* spectrum_fid.c */
 bool mlxsw_sp_fid_is_dummy(struct mlxsw_sp *mlxsw_sp, u16 fid_index);
-bool mlxsw_sp_fid_lag_vid_valid(const struct mlxsw_sp_fid *fid);
 struct mlxsw_sp_fid *mlxsw_sp_fid_lookup_by_index(struct mlxsw_sp *mlxsw_sp,
                                                  u16 fid_index);
 int mlxsw_sp_fid_nve_ifindex(const struct mlxsw_sp_fid *fid, int *nve_ifindex);
@@ -1265,7 +1285,8 @@ void mlxsw_sp_fid_port_vid_unmap(struct mlxsw_sp_fid *fid,
                                 struct mlxsw_sp_port *mlxsw_sp_port, u16 vid);
 u16 mlxsw_sp_fid_index(const struct mlxsw_sp_fid *fid);
 enum mlxsw_sp_fid_type mlxsw_sp_fid_type(const struct mlxsw_sp_fid *fid);
-void mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif);
+int mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif);
+void mlxsw_sp_fid_rif_unset(struct mlxsw_sp_fid *fid);
 struct mlxsw_sp_rif *mlxsw_sp_fid_rif(const struct mlxsw_sp_fid *fid);
 enum mlxsw_sp_rif_type
 mlxsw_sp_fid_type_rif_type(const struct mlxsw_sp *mlxsw_sp,
@@ -1287,6 +1308,9 @@ void mlxsw_sp_port_fids_fini(struct mlxsw_sp_port *mlxsw_sp_port);
 int mlxsw_sp_fids_init(struct mlxsw_sp *mlxsw_sp);
 void mlxsw_sp_fids_fini(struct mlxsw_sp *mlxsw_sp);
 
+extern const struct mlxsw_sp_fid_family *mlxsw_sp1_fid_family_arr[];
+extern const struct mlxsw_sp_fid_family *mlxsw_sp2_fid_family_arr[];
+
 /* spectrum_mr.c */
 enum mlxsw_sp_mr_route_prio {
        MLXSW_SP_MR_ROUTE_PRIO_SG,
@@ -1444,4 +1468,16 @@ int mlxsw_sp_policers_init(struct mlxsw_sp *mlxsw_sp);
 void mlxsw_sp_policers_fini(struct mlxsw_sp *mlxsw_sp);
 int mlxsw_sp_policer_resources_register(struct mlxsw_core *mlxsw_core);
 
+/* spectrum_pgt.c */
+int mlxsw_sp_pgt_mid_alloc(struct mlxsw_sp *mlxsw_sp, u16 *p_mid);
+void mlxsw_sp_pgt_mid_free(struct mlxsw_sp *mlxsw_sp, u16 mid_base);
+int mlxsw_sp_pgt_mid_alloc_range(struct mlxsw_sp *mlxsw_sp, u16 mid_base,
+                                u16 count);
+void mlxsw_sp_pgt_mid_free_range(struct mlxsw_sp *mlxsw_sp, u16 mid_base,
+                                u16 count);
+int mlxsw_sp_pgt_entry_port_set(struct mlxsw_sp *mlxsw_sp, u16 mid,
+                               u16 smpe, u16 local_port, bool member);
+int mlxsw_sp_pgt_init(struct mlxsw_sp *mlxsw_sp);
+void mlxsw_sp_pgt_fini(struct mlxsw_sp *mlxsw_sp);
+
 #endif
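(Illustration, not part of the commit: a hedged sketch of how the new spectrum_pgt.c API composes, judging from the declarations above — reserve a MID range up front, then update per-port membership of individual entries. All values are hypothetical.)

	static int example_pgt_usage(struct mlxsw_sp *mlxsw_sp)
	{
		u16 mid_base = 0, smpe = 0, local_port = 1;	/* made-up */
		int err;

		err = mlxsw_sp_pgt_mid_alloc_range(mlxsw_sp, mid_base, 16);
		if (err)
			return err;

		err = mlxsw_sp_pgt_entry_port_set(mlxsw_sp, mid_base, smpe,
						  local_port, true);
		if (err)
			mlxsw_sp_pgt_mid_free_range(mlxsw_sp, mid_base, 16);
		return err;
	}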
index 10ae111..24ff305 100644
@@ -15,7 +15,7 @@ struct mlxsw_sp2_kvdl_part_info {
         * usage bits we need and how many indexes there are
         * represented by a single bit. This could be got from FW
         * querying appropriate resources. So have the resource
-        * ids for for this purpose in partition definition.
+        * ids for this purpose in partition definition.
         */
        enum mlxsw_res_id usage_bit_count_res_id;
        enum mlxsw_res_id index_range_res_id;
index 86b88e6..045a24c 100644
@@ -22,11 +22,18 @@ struct mlxsw_sp_fid_core {
        unsigned int *port_fid_mappings;
 };
 
+struct mlxsw_sp_fid_port_vid {
+       struct list_head list;
+       u16 local_port;
+       u16 vid;
+};
+
 struct mlxsw_sp_fid {
        struct list_head list;
        struct mlxsw_sp_rif *rif;
        refcount_t ref_count;
        u16 fid_index;
+       u16 fid_offset;
        struct mlxsw_sp_fid_family *fid_family;
        struct rhash_head ht_node;
 
@@ -37,6 +44,7 @@ struct mlxsw_sp_fid {
        int nve_ifindex;
        u8 vni_valid:1,
           nve_flood_index_valid:1;
+       struct list_head port_vid_list; /* Ordered by local port. */
 };
 
 struct mlxsw_sp_fid_8021q {
@@ -63,7 +71,6 @@ static const struct rhashtable_params mlxsw_sp_fid_vni_ht_params = {
 
 struct mlxsw_sp_flood_table {
        enum mlxsw_sp_flood_type packet_type;
-       enum mlxsw_reg_sfgc_bridge_type bridge_type;
        enum mlxsw_flood_table_type table_type;
        int table_index;
 };
@@ -76,18 +83,18 @@ struct mlxsw_sp_fid_ops {
                           u16 *p_fid_index);
        bool (*compare)(const struct mlxsw_sp_fid *fid,
                        const void *arg);
-       u16 (*flood_index)(const struct mlxsw_sp_fid *fid);
        int (*port_vid_map)(struct mlxsw_sp_fid *fid,
                            struct mlxsw_sp_port *port, u16 vid);
        void (*port_vid_unmap)(struct mlxsw_sp_fid *fid,
                               struct mlxsw_sp_port *port, u16 vid);
-       int (*vni_set)(struct mlxsw_sp_fid *fid, __be32 vni);
+       int (*vni_set)(struct mlxsw_sp_fid *fid);
        void (*vni_clear)(struct mlxsw_sp_fid *fid);
-       int (*nve_flood_index_set)(struct mlxsw_sp_fid *fid,
-                                  u32 nve_flood_index);
+       int (*nve_flood_index_set)(struct mlxsw_sp_fid *fid);
        void (*nve_flood_index_clear)(struct mlxsw_sp_fid *fid);
        void (*fdb_clear_offload)(const struct mlxsw_sp_fid *fid,
                                  const struct net_device *nve_dev);
+       int (*vid_to_fid_rif_update)(const struct mlxsw_sp_fid *fid,
+                                    const struct mlxsw_sp_rif *rif);
 };
 
 struct mlxsw_sp_fid_family {
@@ -102,7 +109,10 @@ struct mlxsw_sp_fid_family {
        enum mlxsw_sp_rif_type rif_type;
        const struct mlxsw_sp_fid_ops *ops;
        struct mlxsw_sp *mlxsw_sp;
-       u8 lag_vid_valid:1;
+       bool flood_rsp;
+       enum mlxsw_reg_bridge_type bridge_type;
+       u16 pgt_base;
+       bool smpe_index_valid;
 };
 
 static const int mlxsw_sp_sfgc_uc_packet_types[MLXSW_REG_SFGC_TYPE_MAX] = {
@@ -137,11 +147,6 @@ bool mlxsw_sp_fid_is_dummy(struct mlxsw_sp *mlxsw_sp, u16 fid_index)
        return fid_family->start_index == fid_index;
 }
 
-bool mlxsw_sp_fid_lag_vid_valid(const struct mlxsw_sp_fid *fid)
-{
-       return fid->fid_family->lag_vid_valid;
-}
-
 struct mlxsw_sp_fid *mlxsw_sp_fid_lookup_by_index(struct mlxsw_sp *mlxsw_sp,
                                                  u16 fid_index)
 {
@@ -206,17 +211,20 @@ int mlxsw_sp_fid_nve_flood_index_set(struct mlxsw_sp_fid *fid,
        const struct mlxsw_sp_fid_ops *ops = fid_family->ops;
        int err;
 
-       if (WARN_ON(!ops->nve_flood_index_set || fid->nve_flood_index_valid))
+       if (WARN_ON(fid->nve_flood_index_valid))
                return -EINVAL;
 
-       err = ops->nve_flood_index_set(fid, nve_flood_index);
-       if (err)
-               return err;
-
        fid->nve_flood_index = nve_flood_index;
        fid->nve_flood_index_valid = true;
+       err = ops->nve_flood_index_set(fid);
+       if (err)
+               goto err_nve_flood_index_set;
 
        return 0;
+
+err_nve_flood_index_set:
+       fid->nve_flood_index_valid = false;
+       return err;
 }
 
 void mlxsw_sp_fid_nve_flood_index_clear(struct mlxsw_sp_fid *fid)
@@ -224,7 +232,7 @@ void mlxsw_sp_fid_nve_flood_index_clear(struct mlxsw_sp_fid *fid)
        struct mlxsw_sp_fid_family *fid_family = fid->fid_family;
        const struct mlxsw_sp_fid_ops *ops = fid_family->ops;
 
-       if (WARN_ON(!ops->nve_flood_index_clear || !fid->nve_flood_index_valid))
+       if (WARN_ON(!fid->nve_flood_index_valid))
                return;
 
        fid->nve_flood_index_valid = false;
@@ -244,7 +252,7 @@ int mlxsw_sp_fid_vni_set(struct mlxsw_sp_fid *fid, enum mlxsw_sp_nve_type type,
        struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
        int err;
 
-       if (WARN_ON(!ops->vni_set || fid->vni_valid))
+       if (WARN_ON(fid->vni_valid))
                return -EINVAL;
 
        fid->nve_type = type;
@@ -256,15 +264,15 @@ int mlxsw_sp_fid_vni_set(struct mlxsw_sp_fid *fid, enum mlxsw_sp_nve_type type,
        if (err)
                return err;
 
-       err = ops->vni_set(fid, vni);
+       fid->vni_valid = true;
+       err = ops->vni_set(fid);
        if (err)
                goto err_vni_set;
 
-       fid->vni_valid = true;
-
        return 0;
 
 err_vni_set:
+       fid->vni_valid = false;
        rhashtable_remove_fast(&mlxsw_sp->fid_core->vni_ht, &fid->vni_ht_node,
                               mlxsw_sp_fid_vni_ht_params);
        return err;
@@ -276,7 +284,7 @@ void mlxsw_sp_fid_vni_clear(struct mlxsw_sp_fid *fid)
        const struct mlxsw_sp_fid_ops *ops = fid_family->ops;
        struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
 
-       if (WARN_ON(!ops->vni_clear || !fid->vni_valid))
+       if (WARN_ON(!fid->vni_valid))
                return;
 
        fid->vni_valid = false;
@@ -316,34 +324,43 @@ mlxsw_sp_fid_flood_table_lookup(const struct mlxsw_sp_fid *fid,
        return NULL;
 }
 
+static u16
+mlxsw_sp_fid_family_num_fids(const struct mlxsw_sp_fid_family *fid_family)
+{
+       return fid_family->end_index - fid_family->start_index + 1;
+}
+
+static u16
+mlxsw_sp_fid_flood_table_mid(const struct mlxsw_sp_fid_family *fid_family,
+                            const struct mlxsw_sp_flood_table *flood_table,
+                            u16 fid_offset)
+{
+       u16 num_fids;
+
+       num_fids = mlxsw_sp_fid_family_num_fids(fid_family);
+       return fid_family->pgt_base + num_fids * flood_table->table_index +
+              fid_offset;
+}
+
 int mlxsw_sp_fid_flood_set(struct mlxsw_sp_fid *fid,
                           enum mlxsw_sp_flood_type packet_type, u16 local_port,
                           bool member)
 {
        struct mlxsw_sp_fid_family *fid_family = fid->fid_family;
-       const struct mlxsw_sp_fid_ops *ops = fid_family->ops;
        const struct mlxsw_sp_flood_table *flood_table;
-       char *sftr2_pl;
-       int err;
+       u16 mid_index;
 
-       if (WARN_ON(!fid_family->flood_tables || !ops->flood_index))
+       if (WARN_ON(!fid_family->flood_tables))
                return -EINVAL;
 
        flood_table = mlxsw_sp_fid_flood_table_lookup(fid, packet_type);
        if (!flood_table)
                return -ESRCH;
 
-       sftr2_pl = kmalloc(MLXSW_REG_SFTR2_LEN, GFP_KERNEL);
-       if (!sftr2_pl)
-               return -ENOMEM;
-
-       mlxsw_reg_sftr2_pack(sftr2_pl, flood_table->table_index,
-                            ops->flood_index(fid), flood_table->table_type, 1,
-                            local_port, member);
-       err = mlxsw_reg_write(fid_family->mlxsw_sp->core, MLXSW_REG(sftr2),
-                             sftr2_pl);
-       kfree(sftr2_pl);
-       return err;
+       mid_index = mlxsw_sp_fid_flood_table_mid(fid_family, flood_table,
+                                                fid->fid_offset);
+       return mlxsw_sp_pgt_entry_port_set(fid_family->mlxsw_sp, mid_index,
+                                          fid->fid_index, local_port, member);
 }
 
 int mlxsw_sp_fid_port_vid_map(struct mlxsw_sp_fid *fid,
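(Illustration, not part of the commit: a worked instance of the mlxsw_sp_fid_flood_table_mid() arithmetic above, assuming a 4094-FID family with three flood tables.)

	u16 pgt_base = 0;	/* family's first PGT entry */
	u16 num_fids = 4094;	/* end_index - start_index + 1 */
	u16 table_index = 1;	/* e.g. the MC flood table */
	u16 fid_offset = 10;	/* fid_index - start_index */

	u16 mid = pgt_base + num_fids * table_index + fid_offset; /* 4104 */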
@@ -370,11 +387,6 @@ enum mlxsw_sp_fid_type mlxsw_sp_fid_type(const struct mlxsw_sp_fid *fid)
        return fid->fid_family->type;
 }
 
-void mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif)
-{
-       fid->rif = rif;
-}
-
 struct mlxsw_sp_rif *mlxsw_sp_fid_rif(const struct mlxsw_sp_fid *fid)
 {
        return fid->rif;
@@ -405,6 +417,7 @@ static void mlxsw_sp_fid_8021q_setup(struct mlxsw_sp_fid *fid, const void *arg)
        u16 vid = *(u16 *) arg;
 
        mlxsw_sp_fid_8021q_fid(fid)->vid = vid;
+       fid->fid_offset = fid->fid_index - fid->fid_family->start_index;
 }
 
 static enum mlxsw_reg_sfmr_op mlxsw_sp_sfmr_op(bool valid)
@@ -413,38 +426,341 @@ static enum mlxsw_reg_sfmr_op mlxsw_sp_sfmr_op(bool valid)
                       MLXSW_REG_SFMR_OP_DESTROY_FID;
 }
 
-static int mlxsw_sp_fid_op(struct mlxsw_sp *mlxsw_sp, u16 fid_index,
-                          u16 fid_offset, bool valid)
+static int mlxsw_sp_fid_op(const struct mlxsw_sp_fid *fid, bool valid)
 {
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
        char sfmr_pl[MLXSW_REG_SFMR_LEN];
+       u16 smpe;
+
+       smpe = fid->fid_family->smpe_index_valid ? fid->fid_index : 0;
 
-       mlxsw_reg_sfmr_pack(sfmr_pl, mlxsw_sp_sfmr_op(valid), fid_index,
-                           fid_offset);
+       mlxsw_reg_sfmr_pack(sfmr_pl, mlxsw_sp_sfmr_op(valid), fid->fid_index,
+                           fid->fid_offset, fid->fid_family->flood_rsp,
+                           fid->fid_family->bridge_type,
+                           fid->fid_family->smpe_index_valid, smpe);
        return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfmr), sfmr_pl);
 }
 
-static int mlxsw_sp_fid_vni_op(struct mlxsw_sp *mlxsw_sp, u16 fid_index,
-                              __be32 vni, bool vni_valid, u32 nve_flood_index,
-                              bool nve_flood_index_valid)
+static int mlxsw_sp_fid_edit_op(const struct mlxsw_sp_fid *fid,
+                               const struct mlxsw_sp_rif *rif)
 {
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
        char sfmr_pl[MLXSW_REG_SFMR_LEN];
+       u16 smpe;
+
+       smpe = fid->fid_family->smpe_index_valid ? fid->fid_index : 0;
+
+       mlxsw_reg_sfmr_pack(sfmr_pl, MLXSW_REG_SFMR_OP_CREATE_FID,
+                           fid->fid_index, fid->fid_offset,
+                           fid->fid_family->flood_rsp,
+                           fid->fid_family->bridge_type,
+                           fid->fid_family->smpe_index_valid, smpe);
+       mlxsw_reg_sfmr_vv_set(sfmr_pl, fid->vni_valid);
+       mlxsw_reg_sfmr_vni_set(sfmr_pl, be32_to_cpu(fid->vni));
+       mlxsw_reg_sfmr_vtfp_set(sfmr_pl, fid->nve_flood_index_valid);
+       mlxsw_reg_sfmr_nve_tunnel_flood_ptr_set(sfmr_pl, fid->nve_flood_index);
+
+       if (rif) {
+               mlxsw_reg_sfmr_irif_v_set(sfmr_pl, true);
+               mlxsw_reg_sfmr_irif_set(sfmr_pl, mlxsw_sp_rif_index(rif));
+       }
 
-       mlxsw_reg_sfmr_pack(sfmr_pl, MLXSW_REG_SFMR_OP_CREATE_FID, fid_index,
-                           0);
-       mlxsw_reg_sfmr_vv_set(sfmr_pl, vni_valid);
-       mlxsw_reg_sfmr_vni_set(sfmr_pl, be32_to_cpu(vni));
-       mlxsw_reg_sfmr_vtfp_set(sfmr_pl, nve_flood_index_valid);
-       mlxsw_reg_sfmr_nve_tunnel_flood_ptr_set(sfmr_pl, nve_flood_index);
        return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfmr), sfmr_pl);
 }
 
-static int __mlxsw_sp_fid_port_vid_map(struct mlxsw_sp *mlxsw_sp, u16 fid_index,
+static int mlxsw_sp_fid_vni_to_fid_map(const struct mlxsw_sp_fid *fid,
+                                      const struct mlxsw_sp_rif *rif,
+                                      bool valid)
+{
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       char svfa_pl[MLXSW_REG_SVFA_LEN];
+       bool irif_valid;
+       u16 irif_index;
+
+       irif_valid = !!rif;
+       irif_index = rif ? mlxsw_sp_rif_index(rif) : 0;
+
+       mlxsw_reg_svfa_vni_pack(svfa_pl, valid, fid->fid_index,
+                               be32_to_cpu(fid->vni), irif_valid, irif_index);
+       return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(svfa), svfa_pl);
+}
+
+static int mlxsw_sp_fid_to_fid_rif_update(const struct mlxsw_sp_fid *fid,
+                                         const struct mlxsw_sp_rif *rif)
+{
+       return mlxsw_sp_fid_edit_op(fid, rif);
+}
+
+static int mlxsw_sp_fid_vni_to_fid_rif_update(const struct mlxsw_sp_fid *fid,
+                                             const struct mlxsw_sp_rif *rif)
+{
+       if (!fid->vni_valid)
+               return 0;
+
+       return mlxsw_sp_fid_vni_to_fid_map(fid, rif, fid->vni_valid);
+}
+
+static int
+mlxsw_sp_fid_vid_to_fid_map(const struct mlxsw_sp_fid *fid, u16 vid, bool valid,
+                           const struct mlxsw_sp_rif *rif)
+{
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       char svfa_pl[MLXSW_REG_SVFA_LEN];
+       bool irif_valid;
+       u16 irif_index;
+
+       irif_valid = !!rif;
+       irif_index = rif ? mlxsw_sp_rif_index(rif) : 0;
+
+       mlxsw_reg_svfa_vid_pack(svfa_pl, valid, fid->fid_index, vid, irif_valid,
+                               irif_index);
+       return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(svfa), svfa_pl);
+}
+
+static int
+mlxsw_sp_fid_8021q_vid_to_fid_rif_update(const struct mlxsw_sp_fid *fid,
+                                        const struct mlxsw_sp_rif *rif)
+{
+       struct mlxsw_sp_fid_8021q *fid_8021q = mlxsw_sp_fid_8021q_fid(fid);
+
+       /* Update the global VID => FID mapping we created when the FID was
+        * configured.
+        */
+       return mlxsw_sp_fid_vid_to_fid_map(fid, fid_8021q->vid, true, rif);
+}
+
+static int
+mlxsw_sp_fid_port_vid_to_fid_rif_update_one(const struct mlxsw_sp_fid *fid,
+                                           struct mlxsw_sp_fid_port_vid *pv,
+                                           bool irif_valid, u16 irif_index)
+{
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       char svfa_pl[MLXSW_REG_SVFA_LEN];
+
+       mlxsw_reg_svfa_port_vid_pack(svfa_pl, pv->local_port, true,
+                                    fid->fid_index, pv->vid, irif_valid,
+                                    irif_index);
+
+       return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(svfa), svfa_pl);
+}
+
+static int mlxsw_sp_fid_vid_to_fid_rif_set(const struct mlxsw_sp_fid *fid,
+                                          const struct mlxsw_sp_rif *rif)
+{
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       struct mlxsw_sp_fid_port_vid *pv;
+       u16 irif_index;
+       int err;
+
+       err = fid->fid_family->ops->vid_to_fid_rif_update(fid, rif);
+       if (err)
+               return err;
+
+       irif_index = mlxsw_sp_rif_index(rif);
+
+       list_for_each_entry(pv, &fid->port_vid_list, list) {
+               /* If port is not in virtual mode, then it does not have any
+                * {Port, VID}->FID mappings that need to be updated with the
+                * ingress RIF.
+                */
+               if (!mlxsw_sp->fid_core->port_fid_mappings[pv->local_port])
+                       continue;
+
+               err = mlxsw_sp_fid_port_vid_to_fid_rif_update_one(fid, pv,
+                                                                 true,
+                                                                 irif_index);
+               if (err)
+                       goto err_port_vid_to_fid_rif_update_one;
+       }
+
+       return 0;
+
+err_port_vid_to_fid_rif_update_one:
+       list_for_each_entry_continue_reverse(pv, &fid->port_vid_list, list) {
+               if (!mlxsw_sp->fid_core->port_fid_mappings[pv->local_port])
+                       continue;
+
+               mlxsw_sp_fid_port_vid_to_fid_rif_update_one(fid, pv, false, 0);
+       }
+
+       fid->fid_family->ops->vid_to_fid_rif_update(fid, NULL);
+       return err;
+}
+
+static void mlxsw_sp_fid_vid_to_fid_rif_unset(const struct mlxsw_sp_fid *fid)
+{
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       struct mlxsw_sp_fid_port_vid *pv;
+
+       list_for_each_entry(pv, &fid->port_vid_list, list) {
+               /* If port is not in virtual mode, then it does not have any
+                * {Port, VID}->FID mappings that need to be updated.
+                */
+               if (!mlxsw_sp->fid_core->port_fid_mappings[pv->local_port])
+                       continue;
+
+               mlxsw_sp_fid_port_vid_to_fid_rif_update_one(fid, pv, false, 0);
+       }
+
+       fid->fid_family->ops->vid_to_fid_rif_update(fid, NULL);
+}
+
+static int mlxsw_sp_fid_reiv_handle(struct mlxsw_sp_fid *fid, u16 rif_index,
+                                   bool valid, u8 port_page)
+{
+       u16 local_port_end = (port_page + 1) * MLXSW_REG_REIV_REC_MAX_COUNT - 1;
+       u16 local_port_start = port_page * MLXSW_REG_REIV_REC_MAX_COUNT;
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       struct mlxsw_sp_fid_port_vid *port_vid;
+       u8 rec_num, entries_num = 0;
+       char *reiv_pl;
+       int err;
+
+       reiv_pl = kmalloc(MLXSW_REG_REIV_LEN, GFP_KERNEL);
+       if (!reiv_pl)
+               return -ENOMEM;
+
+       mlxsw_reg_reiv_pack(reiv_pl, port_page, rif_index);
+
+       list_for_each_entry(port_vid, &fid->port_vid_list, list) {
+               /* port_vid_list is sorted by local_port. */
+               if (port_vid->local_port < local_port_start)
+                       continue;
+
+               if (port_vid->local_port > local_port_end)
+                       break;
+
+               rec_num = port_vid->local_port % MLXSW_REG_REIV_REC_MAX_COUNT;
+               mlxsw_reg_reiv_rec_update_set(reiv_pl, rec_num, true);
+               mlxsw_reg_reiv_rec_evid_set(reiv_pl, rec_num,
+                                           valid ? port_vid->vid : 0);
+               entries_num++;
+       }
+
+       if (!entries_num) {
+               kfree(reiv_pl);
+               return 0;
+       }
+
+       err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(reiv), reiv_pl);
+       if (err)
+               goto err_reg_write;
+
+       kfree(reiv_pl);
+       return 0;
+
+err_reg_write:
+       kfree(reiv_pl);
+       return err;
+}
+
+static int mlxsw_sp_fid_erif_eport_to_vid_map(struct mlxsw_sp_fid *fid,
+                                             u16 rif_index, bool valid)
+{
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       u8 num_port_pages;
+       int err, i;
+
+       num_port_pages = mlxsw_core_max_ports(mlxsw_sp->core) /
+                        MLXSW_REG_REIV_REC_MAX_COUNT + 1;
+
+       for (i = 0; i < num_port_pages; i++) {
+               err = mlxsw_sp_fid_reiv_handle(fid, rif_index, valid, i);
+               if (err)
+                       goto err_reiv_handle;
+       }
+
+       return 0;
+
+err_reiv_handle:
+       for (; i >= 0; i--)
+               mlxsw_sp_fid_reiv_handle(fid, rif_index, !valid, i);
+       return err;
+}
+
+int mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif)
+{
+       u16 rif_index = mlxsw_sp_rif_index(rif);
+       int err;
+
+       err = mlxsw_sp_fid_to_fid_rif_update(fid, rif);
+       if (err)
+               return err;
+
+       err = mlxsw_sp_fid_vni_to_fid_rif_update(fid, rif);
+       if (err)
+               goto err_vni_to_fid_rif_update;
+
+       err = mlxsw_sp_fid_vid_to_fid_rif_set(fid, rif);
+       if (err)
+               goto err_vid_to_fid_rif_set;
+
+       err = mlxsw_sp_fid_erif_eport_to_vid_map(fid, rif_index, true);
+       if (err)
+               goto err_erif_eport_to_vid_map;
+
+       fid->rif = rif;
+       return 0;
+
+err_erif_eport_to_vid_map:
+       mlxsw_sp_fid_vid_to_fid_rif_unset(fid);
+err_vid_to_fid_rif_set:
+       mlxsw_sp_fid_vni_to_fid_rif_update(fid, NULL);
+err_vni_to_fid_rif_update:
+       mlxsw_sp_fid_to_fid_rif_update(fid, NULL);
+       return err;
+}
+
+void mlxsw_sp_fid_rif_unset(struct mlxsw_sp_fid *fid)
+{
+       u16 rif_index;
+
+       if (!fid->rif)
+               return;
+
+       rif_index = mlxsw_sp_rif_index(fid->rif);
+       fid->rif = NULL;
+
+       mlxsw_sp_fid_erif_eport_to_vid_map(fid, rif_index, false);
+       mlxsw_sp_fid_vid_to_fid_rif_unset(fid);
+       mlxsw_sp_fid_vni_to_fid_rif_update(fid, NULL);
+       mlxsw_sp_fid_to_fid_rif_update(fid, NULL);
+}
+
+static int mlxsw_sp_fid_vni_op(const struct mlxsw_sp_fid *fid)
+{
+       int err;
+
+       err = mlxsw_sp_fid_vni_to_fid_map(fid, fid->rif, fid->vni_valid);
+       if (err)
+               return err;
+
+       err = mlxsw_sp_fid_edit_op(fid, fid->rif);
+       if (err)
+               goto err_fid_edit_op;
+
+       return 0;
+
+err_fid_edit_op:
+       mlxsw_sp_fid_vni_to_fid_map(fid, fid->rif, !fid->vni_valid);
+       return err;
+}
+
+static int __mlxsw_sp_fid_port_vid_map(const struct mlxsw_sp_fid *fid,
                                       u16 local_port, u16 vid, bool valid)
 {
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
        char svfa_pl[MLXSW_REG_SVFA_LEN];
+       bool irif_valid = false;
+       u16 irif_index = 0;
+
+       if (fid->rif) {
+               irif_valid = true;
+               irif_index = mlxsw_sp_rif_index(fid->rif);
+       }
 
-       mlxsw_reg_svfa_port_vid_pack(svfa_pl, local_port, valid, fid_index,
-                                    vid);
+       mlxsw_reg_svfa_port_vid_pack(svfa_pl, local_port, valid, fid->fid_index,
+                                    vid, irif_valid, irif_index);
        return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(svfa), svfa_pl);
 }
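(Illustration, not part of the commit: the REIV paging used by mlxsw_sp_fid_reiv_handle() above covers MLXSW_REG_REIV_REC_MAX_COUNT consecutive local ports per register instance, so a port maps to a (page, record) pair. The record count of 32 below is an assumed example value.)

	u16 local_port = 70;
	u8 port_page = local_port / 32;	/* MLXSW_REG_REIV_REC_MAX_COUNT */
	u8 rec_num = local_port % 32;	/* -> page 2, record 6 */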
 
@@ -459,20 +775,19 @@ static void mlxsw_sp_fid_8021d_setup(struct mlxsw_sp_fid *fid, const void *arg)
        int br_ifindex = *(int *) arg;
 
        mlxsw_sp_fid_8021d_fid(fid)->br_ifindex = br_ifindex;
+       fid->fid_offset = fid->fid_index - fid->fid_family->start_index;
 }
 
 static int mlxsw_sp_fid_8021d_configure(struct mlxsw_sp_fid *fid)
 {
-       struct mlxsw_sp_fid_family *fid_family = fid->fid_family;
-
-       return mlxsw_sp_fid_op(fid_family->mlxsw_sp, fid->fid_index, 0, true);
+       return mlxsw_sp_fid_op(fid, true);
 }
 
 static void mlxsw_sp_fid_8021d_deconfigure(struct mlxsw_sp_fid *fid)
 {
        if (fid->vni_valid)
                mlxsw_sp_nve_fid_disable(fid->fid_family->mlxsw_sp, fid);
-       mlxsw_sp_fid_op(fid->fid_family->mlxsw_sp, fid->fid_index, 0, false);
+       mlxsw_sp_fid_op(fid, false);
 }
 
 static int mlxsw_sp_fid_8021d_index_alloc(struct mlxsw_sp_fid *fid,
@@ -498,14 +813,8 @@ mlxsw_sp_fid_8021d_compare(const struct mlxsw_sp_fid *fid, const void *arg)
        return mlxsw_sp_fid_8021d_fid(fid)->br_ifindex == br_ifindex;
 }
 
-static u16 mlxsw_sp_fid_8021d_flood_index(const struct mlxsw_sp_fid *fid)
-{
-       return fid->fid_index - VLAN_N_VID;
-}
-
 static int mlxsw_sp_port_vp_mode_trans(struct mlxsw_sp_port *mlxsw_sp_port)
 {
-       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
        struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan;
        int err;
 
@@ -517,7 +826,7 @@ static int mlxsw_sp_port_vp_mode_trans(struct mlxsw_sp_port *mlxsw_sp_port)
                if (!fid)
                        continue;
 
-               err = __mlxsw_sp_fid_port_vid_map(mlxsw_sp, fid->fid_index,
+               err = __mlxsw_sp_fid_port_vid_map(fid,
                                                  mlxsw_sp_port->local_port,
                                                  vid, true);
                if (err)
@@ -540,8 +849,7 @@ err_fid_port_vid_map:
                if (!fid)
                        continue;
 
-               __mlxsw_sp_fid_port_vid_map(mlxsw_sp, fid->fid_index,
-                                           mlxsw_sp_port->local_port, vid,
+               __mlxsw_sp_fid_port_vid_map(fid, mlxsw_sp_port->local_port, vid,
                                            false);
        }
        return err;
@@ -549,7 +857,6 @@ err_fid_port_vid_map:
 
 static void mlxsw_sp_port_vlan_mode_trans(struct mlxsw_sp_port *mlxsw_sp_port)
 {
-       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
        struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan;
 
        mlxsw_sp_port_vp_mode_set(mlxsw_sp_port, false);
@@ -562,12 +869,108 @@ static void mlxsw_sp_port_vlan_mode_trans(struct mlxsw_sp_port *mlxsw_sp_port)
                if (!fid)
                        continue;
 
-               __mlxsw_sp_fid_port_vid_map(mlxsw_sp, fid->fid_index,
-                                           mlxsw_sp_port->local_port, vid,
+               __mlxsw_sp_fid_port_vid_map(fid, mlxsw_sp_port->local_port, vid,
                                            false);
        }
 }
 
+static int
+mlxsw_sp_fid_port_vid_list_add(struct mlxsw_sp_fid *fid, u16 local_port,
+                              u16 vid)
+{
+       struct mlxsw_sp_fid_port_vid *port_vid, *tmp_port_vid;
+
+       port_vid = kzalloc(sizeof(*port_vid), GFP_KERNEL);
+       if (!port_vid)
+               return -ENOMEM;
+
+       port_vid->local_port = local_port;
+       port_vid->vid = vid;
+
+       list_for_each_entry(tmp_port_vid, &fid->port_vid_list, list) {
+               if (tmp_port_vid->local_port > local_port)
+                       break;
+       }
+
+       list_add_tail(&port_vid->list, &tmp_port_vid->list);
+       return 0;
+}
+
+static void
+mlxsw_sp_fid_port_vid_list_del(struct mlxsw_sp_fid *fid, u16 local_port,
+                              u16 vid)
+{
+       struct mlxsw_sp_fid_port_vid *port_vid, *tmp;
+
+       list_for_each_entry_safe(port_vid, tmp, &fid->port_vid_list, list) {
+               if (port_vid->local_port != local_port || port_vid->vid != vid)
+                       continue;
+
+               list_del(&port_vid->list);
+               kfree(port_vid);
+               return;
+       }
+}
+
+static int
+mlxsw_sp_fid_mpe_table_map(const struct mlxsw_sp_fid *fid, u16 local_port,
+                          u16 vid, bool valid)
+{
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       char smpe_pl[MLXSW_REG_SMPE_LEN];
+
+       mlxsw_reg_smpe_pack(smpe_pl, local_port, fid->fid_index,
+                           valid ? vid : 0);
+       return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(smpe), smpe_pl);
+}
+
+static int
+mlxsw_sp_fid_erif_eport_to_vid_map_one(const struct mlxsw_sp_fid *fid,
+                                      u16 local_port, u16 vid, bool valid)
+{
+       u8 port_page = local_port / MLXSW_REG_REIV_REC_MAX_COUNT;
+       u8 rec_num = local_port % MLXSW_REG_REIV_REC_MAX_COUNT;
+       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+       u16 rif_index = mlxsw_sp_rif_index(fid->rif);
+       char *reiv_pl;
+       int err;
+
+       reiv_pl = kmalloc(MLXSW_REG_REIV_LEN, GFP_KERNEL);
+       if (!reiv_pl)
+               return -ENOMEM;
+
+       mlxsw_reg_reiv_pack(reiv_pl, port_page, rif_index);
+       mlxsw_reg_reiv_rec_update_set(reiv_pl, rec_num, true);
+       mlxsw_reg_reiv_rec_evid_set(reiv_pl, rec_num, valid ? vid : 0);
+       err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(reiv), reiv_pl);
+       kfree(reiv_pl);
+       return err;
+}
+
+static int mlxsw_sp_fid_evid_map(const struct mlxsw_sp_fid *fid, u16 local_port,
+                                u16 vid, bool valid)
+{
+       int err;
+
+       err = mlxsw_sp_fid_mpe_table_map(fid, local_port, vid, valid);
+       if (err)
+               return err;
+
+       if (!fid->rif)
+               return 0;
+
+       err = mlxsw_sp_fid_erif_eport_to_vid_map_one(fid, local_port, vid,
+                                                    valid);
+       if (err)
+               goto err_erif_eport_to_vid_map_one;
+
+       return 0;
+
+err_erif_eport_to_vid_map_one:
+       mlxsw_sp_fid_mpe_table_map(fid, local_port, vid, !valid);
+       return err;
+}
+
 static int mlxsw_sp_fid_8021d_port_vid_map(struct mlxsw_sp_fid *fid,
                                           struct mlxsw_sp_port *mlxsw_sp_port,
                                           u16 vid)
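(Illustration, not part of the commit: the ordered insertion in mlxsw_sp_fid_port_vid_list_add() above relies on a common list.h idiom — the iterator stops at the first entry with a larger key, or lands back on the head if none, and list_add_tail() on that position links the new node just before it. A generic sketch with hypothetical types:)

	struct item { struct list_head list; u16 key; };

	static void sorted_insert(struct list_head *head, struct item *new)
	{
		struct item *pos;

		list_for_each_entry(pos, head, list)
			if (pos->key > new->key)
				break;
		/* &pos->list is either the bigger entry or the head itself. */
		list_add_tail(&new->list, &pos->list);
	}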
@@ -576,11 +979,20 @@ static int mlxsw_sp_fid_8021d_port_vid_map(struct mlxsw_sp_fid *fid,
        u16 local_port = mlxsw_sp_port->local_port;
        int err;
 
-       err = __mlxsw_sp_fid_port_vid_map(mlxsw_sp, fid->fid_index,
-                                         mlxsw_sp_port->local_port, vid, true);
+       err = __mlxsw_sp_fid_port_vid_map(fid, mlxsw_sp_port->local_port, vid,
+                                         true);
        if (err)
                return err;
 
+       err = mlxsw_sp_fid_evid_map(fid, local_port, vid, true);
+       if (err)
+               goto err_fid_evid_map;
+
+       err = mlxsw_sp_fid_port_vid_list_add(fid, mlxsw_sp_port->local_port,
+                                            vid);
+       if (err)
+               goto err_port_vid_list_add;
+
        if (mlxsw_sp->fid_core->port_fid_mappings[local_port]++ == 0) {
                err = mlxsw_sp_port_vp_mode_trans(mlxsw_sp_port);
                if (err)
@@ -591,8 +1003,11 @@ static int mlxsw_sp_fid_8021d_port_vid_map(struct mlxsw_sp_fid *fid,
 
 err_port_vp_mode_trans:
        mlxsw_sp->fid_core->port_fid_mappings[local_port]--;
-       __mlxsw_sp_fid_port_vid_map(mlxsw_sp, fid->fid_index,
-                                   mlxsw_sp_port->local_port, vid, false);
+       mlxsw_sp_fid_port_vid_list_del(fid, mlxsw_sp_port->local_port, vid);
+err_port_vid_list_add:
+       mlxsw_sp_fid_evid_map(fid, local_port, vid, false);
+err_fid_evid_map:
+       __mlxsw_sp_fid_port_vid_map(fid, mlxsw_sp_port->local_port, vid, false);
        return err;
 }
 
@@ -606,43 +1021,29 @@ mlxsw_sp_fid_8021d_port_vid_unmap(struct mlxsw_sp_fid *fid,
        if (mlxsw_sp->fid_core->port_fid_mappings[local_port] == 1)
                mlxsw_sp_port_vlan_mode_trans(mlxsw_sp_port);
        mlxsw_sp->fid_core->port_fid_mappings[local_port]--;
-       __mlxsw_sp_fid_port_vid_map(mlxsw_sp, fid->fid_index,
-                                   mlxsw_sp_port->local_port, vid, false);
+       mlxsw_sp_fid_port_vid_list_del(fid, mlxsw_sp_port->local_port, vid);
+       mlxsw_sp_fid_evid_map(fid, local_port, vid, false);
+       __mlxsw_sp_fid_port_vid_map(fid, mlxsw_sp_port->local_port, vid, false);
 }
 
-static int mlxsw_sp_fid_8021d_vni_set(struct mlxsw_sp_fid *fid, __be32 vni)
+static int mlxsw_sp_fid_8021d_vni_set(struct mlxsw_sp_fid *fid)
 {
-       struct mlxsw_sp_fid_family *fid_family = fid->fid_family;
-
-       return mlxsw_sp_fid_vni_op(fid_family->mlxsw_sp, fid->fid_index, vni,
-                                  true, fid->nve_flood_index,
-                                  fid->nve_flood_index_valid);
+       return mlxsw_sp_fid_vni_op(fid);
 }
 
 static void mlxsw_sp_fid_8021d_vni_clear(struct mlxsw_sp_fid *fid)
 {
-       struct mlxsw_sp_fid_family *fid_family = fid->fid_family;
-
-       mlxsw_sp_fid_vni_op(fid_family->mlxsw_sp, fid->fid_index, 0, false,
-                           fid->nve_flood_index, fid->nve_flood_index_valid);
+       mlxsw_sp_fid_vni_op(fid);
 }
 
-static int mlxsw_sp_fid_8021d_nve_flood_index_set(struct mlxsw_sp_fid *fid,
-                                                 u32 nve_flood_index)
+static int mlxsw_sp_fid_8021d_nve_flood_index_set(struct mlxsw_sp_fid *fid)
 {
-       struct mlxsw_sp_fid_family *fid_family = fid->fid_family;
-
-       return mlxsw_sp_fid_vni_op(fid_family->mlxsw_sp, fid->fid_index,
-                                  fid->vni, fid->vni_valid, nve_flood_index,
-                                  true);
+       return mlxsw_sp_fid_edit_op(fid, fid->rif);
 }
 
 static void mlxsw_sp_fid_8021d_nve_flood_index_clear(struct mlxsw_sp_fid *fid)
 {
-       struct mlxsw_sp_fid_family *fid_family = fid->fid_family;
-
-       mlxsw_sp_fid_vni_op(fid_family->mlxsw_sp, fid->fid_index, fid->vni,
-                           fid->vni_valid, 0, false);
+       mlxsw_sp_fid_edit_op(fid, fid->rif);
 }
 
 static void
@@ -652,13 +1053,19 @@ mlxsw_sp_fid_8021d_fdb_clear_offload(const struct mlxsw_sp_fid *fid,
        br_fdb_clear_offload(nve_dev, 0);
 }
 
+static int
+mlxsw_sp_fid_8021d_vid_to_fid_rif_update(const struct mlxsw_sp_fid *fid,
+                                        const struct mlxsw_sp_rif *rif)
+{
+       return 0;
+}
+
 static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_8021d_ops = {
        .setup                  = mlxsw_sp_fid_8021d_setup,
        .configure              = mlxsw_sp_fid_8021d_configure,
        .deconfigure            = mlxsw_sp_fid_8021d_deconfigure,
        .index_alloc            = mlxsw_sp_fid_8021d_index_alloc,
        .compare                = mlxsw_sp_fid_8021d_compare,
-       .flood_index            = mlxsw_sp_fid_8021d_flood_index,
        .port_vid_map           = mlxsw_sp_fid_8021d_port_vid_map,
        .port_vid_unmap         = mlxsw_sp_fid_8021d_port_vid_unmap,
        .vni_set                = mlxsw_sp_fid_8021d_vni_set,
@@ -666,42 +1073,32 @@ static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_8021d_ops = {
        .nve_flood_index_set    = mlxsw_sp_fid_8021d_nve_flood_index_set,
        .nve_flood_index_clear  = mlxsw_sp_fid_8021d_nve_flood_index_clear,
        .fdb_clear_offload      = mlxsw_sp_fid_8021d_fdb_clear_offload,
+       .vid_to_fid_rif_update  = mlxsw_sp_fid_8021d_vid_to_fid_rif_update,
 };
 
+#define MLXSW_SP_FID_8021Q_MAX (VLAN_N_VID - 2)
+#define MLXSW_SP_FID_RFID_MAX (11 * 1024)
+#define MLXSW_SP_FID_8021Q_PGT_BASE 0
+#define MLXSW_SP_FID_8021D_PGT_BASE (3 * MLXSW_SP_FID_8021Q_MAX)
+
 static const struct mlxsw_sp_flood_table mlxsw_sp_fid_8021d_flood_tables[] = {
        {
                .packet_type    = MLXSW_SP_FLOOD_TYPE_UC,
-               .bridge_type    = MLXSW_REG_SFGC_BRIDGE_TYPE_VFID,
-               .table_type     = MLXSW_REG_SFGC_TABLE_TYPE_FID,
+               .table_type     = MLXSW_REG_SFGC_TABLE_TYPE_FID_OFFSET,
                .table_index    = 0,
        },
        {
                .packet_type    = MLXSW_SP_FLOOD_TYPE_MC,
-               .bridge_type    = MLXSW_REG_SFGC_BRIDGE_TYPE_VFID,
-               .table_type     = MLXSW_REG_SFGC_TABLE_TYPE_FID,
+               .table_type     = MLXSW_REG_SFGC_TABLE_TYPE_FID_OFFSET,
                .table_index    = 1,
        },
        {
                .packet_type    = MLXSW_SP_FLOOD_TYPE_BC,
-               .bridge_type    = MLXSW_REG_SFGC_BRIDGE_TYPE_VFID,
-               .table_type     = MLXSW_REG_SFGC_TABLE_TYPE_FID,
+               .table_type     = MLXSW_REG_SFGC_TABLE_TYPE_FID_OFFSET,
                .table_index    = 2,
        },
 };
 
-/* Range and flood configuration must match mlxsw_config_profile */
-static const struct mlxsw_sp_fid_family mlxsw_sp_fid_8021d_family = {
-       .type                   = MLXSW_SP_FID_TYPE_8021D,
-       .fid_size               = sizeof(struct mlxsw_sp_fid_8021d),
-       .start_index            = VLAN_N_VID,
-       .end_index              = VLAN_N_VID + MLXSW_SP_FID_8021D_MAX - 1,
-       .flood_tables           = mlxsw_sp_fid_8021d_flood_tables,
-       .nr_flood_tables        = ARRAY_SIZE(mlxsw_sp_fid_8021d_flood_tables),
-       .rif_type               = MLXSW_SP_RIF_TYPE_FID,
-       .ops                    = &mlxsw_sp_fid_8021d_ops,
-       .lag_vid_valid          = 1,
-};
-
 static bool
 mlxsw_sp_fid_8021q_compare(const struct mlxsw_sp_fid *fid, const void *arg)
 {
@@ -717,48 +1114,19 @@ mlxsw_sp_fid_8021q_fdb_clear_offload(const struct mlxsw_sp_fid *fid,
        br_fdb_clear_offload(nve_dev, mlxsw_sp_fid_8021q_vid(fid));
 }
 
-static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_8021q_emu_ops = {
-       .setup                  = mlxsw_sp_fid_8021q_setup,
-       .configure              = mlxsw_sp_fid_8021d_configure,
-       .deconfigure            = mlxsw_sp_fid_8021d_deconfigure,
-       .index_alloc            = mlxsw_sp_fid_8021d_index_alloc,
-       .compare                = mlxsw_sp_fid_8021q_compare,
-       .flood_index            = mlxsw_sp_fid_8021d_flood_index,
-       .port_vid_map           = mlxsw_sp_fid_8021d_port_vid_map,
-       .port_vid_unmap         = mlxsw_sp_fid_8021d_port_vid_unmap,
-       .vni_set                = mlxsw_sp_fid_8021d_vni_set,
-       .vni_clear              = mlxsw_sp_fid_8021d_vni_clear,
-       .nve_flood_index_set    = mlxsw_sp_fid_8021d_nve_flood_index_set,
-       .nve_flood_index_clear  = mlxsw_sp_fid_8021d_nve_flood_index_clear,
-       .fdb_clear_offload      = mlxsw_sp_fid_8021q_fdb_clear_offload,
-};
-
-/* There are 4K-2 emulated 802.1Q FIDs, starting right after the 802.1D FIDs */
-#define MLXSW_SP_FID_8021Q_EMU_START   (VLAN_N_VID + MLXSW_SP_FID_8021D_MAX)
-#define MLXSW_SP_FID_8021Q_EMU_END     (MLXSW_SP_FID_8021Q_EMU_START + \
-                                        VLAN_VID_MASK - 2)
-
-/* Range and flood configuration must match mlxsw_config_profile */
-static const struct mlxsw_sp_fid_family mlxsw_sp_fid_8021q_emu_family = {
-       .type                   = MLXSW_SP_FID_TYPE_8021Q,
-       .fid_size               = sizeof(struct mlxsw_sp_fid_8021q),
-       .start_index            = MLXSW_SP_FID_8021Q_EMU_START,
-       .end_index              = MLXSW_SP_FID_8021Q_EMU_END,
-       .flood_tables           = mlxsw_sp_fid_8021d_flood_tables,
-       .nr_flood_tables        = ARRAY_SIZE(mlxsw_sp_fid_8021d_flood_tables),
-       .rif_type               = MLXSW_SP_RIF_TYPE_VLAN,
-       .ops                    = &mlxsw_sp_fid_8021q_emu_ops,
-       .lag_vid_valid          = 1,
-};
+static void mlxsw_sp_fid_rfid_setup(struct mlxsw_sp_fid *fid, const void *arg)
+{
+       fid->fid_offset = 0;
+}
 
 static int mlxsw_sp_fid_rfid_configure(struct mlxsw_sp_fid *fid)
 {
-       /* rFIDs are allocated by the device during init */
-       return 0;
+       return mlxsw_sp_fid_op(fid, true);
 }
 
 static void mlxsw_sp_fid_rfid_deconfigure(struct mlxsw_sp_fid *fid)
 {
+       mlxsw_sp_fid_op(fid, false);
 }
 
 static int mlxsw_sp_fid_rfid_index_alloc(struct mlxsw_sp_fid *fid,
@@ -787,9 +1155,28 @@ static int mlxsw_sp_fid_rfid_port_vid_map(struct mlxsw_sp_fid *fid,
        u16 local_port = mlxsw_sp_port->local_port;
        int err;
 
-       /* We only need to transition the port to virtual mode since
-        * {Port, VID} => FID is done by the firmware upon RIF creation.
+       err = mlxsw_sp_fid_port_vid_list_add(fid, mlxsw_sp_port->local_port,
+                                            vid);
+       if (err)
+               return err;
+
+       /* Under the legacy bridge model, we only need to transition the
+        * port to virtual mode, since {Port, VID} => FID is done by the
+        * firmware upon RIF creation. Under the unified bridge model, we
+        * need to map {Port, VID} => FID and map the egress VID ourselves.
+        */
+       err = __mlxsw_sp_fid_port_vid_map(fid, mlxsw_sp_port->local_port, vid,
+                                         true);
+       if (err)
+               goto err_port_vid_map;
+
+       if (fid->rif) {
+               err = mlxsw_sp_fid_erif_eport_to_vid_map_one(fid, local_port,
+                                                            vid, true);
+               if (err)
+                       goto err_erif_eport_to_vid_map_one;
+       }
+
        if (mlxsw_sp->fid_core->port_fid_mappings[local_port]++ == 0) {
                err = mlxsw_sp_port_vp_mode_trans(mlxsw_sp_port);
                if (err)
@@ -800,6 +1187,13 @@ static int mlxsw_sp_fid_rfid_port_vid_map(struct mlxsw_sp_fid *fid,
 
 err_port_vp_mode_trans:
        mlxsw_sp->fid_core->port_fid_mappings[local_port]--;
+       if (fid->rif)
+               mlxsw_sp_fid_erif_eport_to_vid_map_one(fid, local_port, vid,
+                                                      false);
+err_erif_eport_to_vid_map_one:
+       __mlxsw_sp_fid_port_vid_map(fid, mlxsw_sp_port->local_port, vid, false);
+err_port_vid_map:
+       mlxsw_sp_fid_port_vid_list_del(fid, mlxsw_sp_port->local_port, vid);
        return err;
 }
 
@@ -813,39 +1207,69 @@ mlxsw_sp_fid_rfid_port_vid_unmap(struct mlxsw_sp_fid *fid,
        if (mlxsw_sp->fid_core->port_fid_mappings[local_port] == 1)
                mlxsw_sp_port_vlan_mode_trans(mlxsw_sp_port);
        mlxsw_sp->fid_core->port_fid_mappings[local_port]--;
+
+       if (fid->rif)
+               mlxsw_sp_fid_erif_eport_to_vid_map_one(fid, local_port, vid,
+                                                      false);
+       __mlxsw_sp_fid_port_vid_map(fid, mlxsw_sp_port->local_port, vid, false);
+       mlxsw_sp_fid_port_vid_list_del(fid, mlxsw_sp_port->local_port, vid);
+}
+
+static int mlxsw_sp_fid_rfid_vni_set(struct mlxsw_sp_fid *fid)
+{
+       return -EOPNOTSUPP;
+}
+
+static void mlxsw_sp_fid_rfid_vni_clear(struct mlxsw_sp_fid *fid)
+{
+       WARN_ON_ONCE(1);
+}
+
+static int mlxsw_sp_fid_rfid_nve_flood_index_set(struct mlxsw_sp_fid *fid)
+{
+       return -EOPNOTSUPP;
+}
+
+static void mlxsw_sp_fid_rfid_nve_flood_index_clear(struct mlxsw_sp_fid *fid)
+{
+       WARN_ON_ONCE(1);
+}
+
+static int
+mlxsw_sp_fid_rfid_vid_to_fid_rif_update(const struct mlxsw_sp_fid *fid,
+                                       const struct mlxsw_sp_rif *rif)
+{
+       return 0;
 }
 
 static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_rfid_ops = {
+       .setup                  = mlxsw_sp_fid_rfid_setup,
        .configure              = mlxsw_sp_fid_rfid_configure,
        .deconfigure            = mlxsw_sp_fid_rfid_deconfigure,
        .index_alloc            = mlxsw_sp_fid_rfid_index_alloc,
        .compare                = mlxsw_sp_fid_rfid_compare,
        .port_vid_map           = mlxsw_sp_fid_rfid_port_vid_map,
        .port_vid_unmap         = mlxsw_sp_fid_rfid_port_vid_unmap,
+       .vni_set                = mlxsw_sp_fid_rfid_vni_set,
+       .vni_clear              = mlxsw_sp_fid_rfid_vni_clear,
+       .nve_flood_index_set    = mlxsw_sp_fid_rfid_nve_flood_index_set,
+       .nve_flood_index_clear  = mlxsw_sp_fid_rfid_nve_flood_index_clear,
+       .vid_to_fid_rif_update  = mlxsw_sp_fid_rfid_vid_to_fid_rif_update,
 };
 
-#define MLXSW_SP_RFID_BASE     (15 * 1024)
-#define MLXSW_SP_RFID_MAX      1024
-
-static const struct mlxsw_sp_fid_family mlxsw_sp_fid_rfid_family = {
-       .type                   = MLXSW_SP_FID_TYPE_RFID,
-       .fid_size               = sizeof(struct mlxsw_sp_fid),
-       .start_index            = MLXSW_SP_RFID_BASE,
-       .end_index              = MLXSW_SP_RFID_BASE + MLXSW_SP_RFID_MAX - 1,
-       .rif_type               = MLXSW_SP_RIF_TYPE_SUBPORT,
-       .ops                    = &mlxsw_sp_fid_rfid_ops,
-};
+static void mlxsw_sp_fid_dummy_setup(struct mlxsw_sp_fid *fid, const void *arg)
+{
+       fid->fid_offset = 0;
+}
 
 static int mlxsw_sp_fid_dummy_configure(struct mlxsw_sp_fid *fid)
 {
-       struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
-
-       return mlxsw_sp_fid_op(mlxsw_sp, fid->fid_index, 0, true);
+       return mlxsw_sp_fid_op(fid, true);
 }
 
 static void mlxsw_sp_fid_dummy_deconfigure(struct mlxsw_sp_fid *fid)
 {
-       mlxsw_sp_fid_op(fid->fid_family->mlxsw_sp, fid->fid_index, 0, false);
+       mlxsw_sp_fid_op(fid, false);
 }
 
 static int mlxsw_sp_fid_dummy_index_alloc(struct mlxsw_sp_fid *fid,
@@ -862,26 +1286,252 @@ static bool mlxsw_sp_fid_dummy_compare(const struct mlxsw_sp_fid *fid,
        return true;
 }
 
+static int mlxsw_sp_fid_dummy_vni_set(struct mlxsw_sp_fid *fid)
+{
+       return -EOPNOTSUPP;
+}
+
+static void mlxsw_sp_fid_dummy_vni_clear(struct mlxsw_sp_fid *fid)
+{
+       WARN_ON_ONCE(1);
+}
+
+static int mlxsw_sp_fid_dummy_nve_flood_index_set(struct mlxsw_sp_fid *fid)
+{
+       return -EOPNOTSUPP;
+}
+
+static void mlxsw_sp_fid_dummy_nve_flood_index_clear(struct mlxsw_sp_fid *fid)
+{
+       WARN_ON_ONCE(1);
+}
+
 static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_dummy_ops = {
+       .setup                  = mlxsw_sp_fid_dummy_setup,
        .configure              = mlxsw_sp_fid_dummy_configure,
        .deconfigure            = mlxsw_sp_fid_dummy_deconfigure,
        .index_alloc            = mlxsw_sp_fid_dummy_index_alloc,
        .compare                = mlxsw_sp_fid_dummy_compare,
+       .vni_set                = mlxsw_sp_fid_dummy_vni_set,
+       .vni_clear              = mlxsw_sp_fid_dummy_vni_clear,
+       .nve_flood_index_set    = mlxsw_sp_fid_dummy_nve_flood_index_set,
+       .nve_flood_index_clear  = mlxsw_sp_fid_dummy_nve_flood_index_clear,
+};
+
+static int mlxsw_sp_fid_8021q_configure(struct mlxsw_sp_fid *fid)
+{
+       struct mlxsw_sp_fid_8021q *fid_8021q = mlxsw_sp_fid_8021q_fid(fid);
+       int err;
+
+       err = mlxsw_sp_fid_op(fid, true);
+       if (err)
+               return err;
+
+       err = mlxsw_sp_fid_vid_to_fid_map(fid, fid_8021q->vid, true, fid->rif);
+       if (err)
+               goto err_vid_to_fid_map;
+
+       return 0;
+
+err_vid_to_fid_map:
+       mlxsw_sp_fid_op(fid, false);
+       return err;
+}
+
+static void mlxsw_sp_fid_8021q_deconfigure(struct mlxsw_sp_fid *fid)
+{
+       struct mlxsw_sp_fid_8021q *fid_8021q = mlxsw_sp_fid_8021q_fid(fid);
+
+       if (fid->vni_valid)
+               mlxsw_sp_nve_fid_disable(fid->fid_family->mlxsw_sp, fid);
+
+       mlxsw_sp_fid_vid_to_fid_map(fid, fid_8021q->vid, false, NULL);
+       mlxsw_sp_fid_op(fid, false);
+}
+
+static int mlxsw_sp_fid_8021q_port_vid_map(struct mlxsw_sp_fid *fid,
+                                          struct mlxsw_sp_port *mlxsw_sp_port,
+                                          u16 vid)
+{
+       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+       u16 local_port = mlxsw_sp_port->local_port;
+       int err;
+
+       /* If there are no {Port, VID} => FID mappings on the port, we can
+        * use the global VID => FID mapping we created when the FID was
+        * configured; otherwise, configure a new mapping.
+        */
+       if (mlxsw_sp->fid_core->port_fid_mappings[local_port]) {
+               err = __mlxsw_sp_fid_port_vid_map(fid, local_port, vid, true);
+               if (err)
+                       return err;
+       }
+
+       err = mlxsw_sp_fid_evid_map(fid, local_port, vid, true);
+       if (err)
+               goto err_fid_evid_map;
+
+       err = mlxsw_sp_fid_port_vid_list_add(fid, mlxsw_sp_port->local_port,
+                                            vid);
+       if (err)
+               goto err_port_vid_list_add;
+
+       return 0;
+
+err_port_vid_list_add:
+       mlxsw_sp_fid_evid_map(fid, local_port, vid, false);
+err_fid_evid_map:
+       if (mlxsw_sp->fid_core->port_fid_mappings[local_port])
+               __mlxsw_sp_fid_port_vid_map(fid, local_port, vid, false);
+       return err;
+}
+
+static void
+mlxsw_sp_fid_8021q_port_vid_unmap(struct mlxsw_sp_fid *fid,
+                                 struct mlxsw_sp_port *mlxsw_sp_port, u16 vid)
+{
+       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+       u16 local_port = mlxsw_sp_port->local_port;
+
+       mlxsw_sp_fid_port_vid_list_del(fid, mlxsw_sp_port->local_port, vid);
+       mlxsw_sp_fid_evid_map(fid, local_port, vid, false);
+       if (mlxsw_sp->fid_core->port_fid_mappings[local_port])
+               __mlxsw_sp_fid_port_vid_map(fid, local_port, vid, false);
+}
+
+static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_8021q_ops = {
+       .setup                  = mlxsw_sp_fid_8021q_setup,
+       .configure              = mlxsw_sp_fid_8021q_configure,
+       .deconfigure            = mlxsw_sp_fid_8021q_deconfigure,
+       .index_alloc            = mlxsw_sp_fid_8021d_index_alloc,
+       .compare                = mlxsw_sp_fid_8021q_compare,
+       .port_vid_map           = mlxsw_sp_fid_8021q_port_vid_map,
+       .port_vid_unmap         = mlxsw_sp_fid_8021q_port_vid_unmap,
+       .vni_set                = mlxsw_sp_fid_8021d_vni_set,
+       .vni_clear              = mlxsw_sp_fid_8021d_vni_clear,
+       .nve_flood_index_set    = mlxsw_sp_fid_8021d_nve_flood_index_set,
+       .nve_flood_index_clear  = mlxsw_sp_fid_8021d_nve_flood_index_clear,
+       .fdb_clear_offload      = mlxsw_sp_fid_8021q_fdb_clear_offload,
+       .vid_to_fid_rif_update  = mlxsw_sp_fid_8021q_vid_to_fid_rif_update,
+};
+
+/* There are 4K-2 802.1Q FIDs */
+#define MLXSW_SP_FID_8021Q_START       1 /* FID 0 is reserved. */
+#define MLXSW_SP_FID_8021Q_END         (MLXSW_SP_FID_8021Q_START + \
+                                        MLXSW_SP_FID_8021Q_MAX - 1)
+
+/* There are 1K 802.1D FIDs */
+#define MLXSW_SP_FID_8021D_START       (MLXSW_SP_FID_8021Q_END + 1)
+#define MLXSW_SP_FID_8021D_END         (MLXSW_SP_FID_8021D_START + \
+                                        MLXSW_SP_FID_8021D_MAX - 1)
+
+/* There is one dummy FID */
+#define MLXSW_SP_FID_DUMMY             (MLXSW_SP_FID_8021D_END + 1)
+
+/* There are 11K rFIDs */
+#define MLXSW_SP_RFID_START            (MLXSW_SP_FID_DUMMY + 1)
+#define MLXSW_SP_RFID_END              (MLXSW_SP_RFID_START + \
+                                        MLXSW_SP_FID_RFID_MAX - 1)
+
+static const struct mlxsw_sp_fid_family mlxsw_sp1_fid_8021q_family = {
+       .type                   = MLXSW_SP_FID_TYPE_8021Q,
+       .fid_size               = sizeof(struct mlxsw_sp_fid_8021q),
+       .start_index            = MLXSW_SP_FID_8021Q_START,
+       .end_index              = MLXSW_SP_FID_8021Q_END,
+       .flood_tables           = mlxsw_sp_fid_8021d_flood_tables,
+       .nr_flood_tables        = ARRAY_SIZE(mlxsw_sp_fid_8021d_flood_tables),
+       .rif_type               = MLXSW_SP_RIF_TYPE_VLAN,
+       .ops                    = &mlxsw_sp_fid_8021q_ops,
+       .flood_rsp              = false,
+       .bridge_type            = MLXSW_REG_BRIDGE_TYPE_0,
+       .pgt_base               = MLXSW_SP_FID_8021Q_PGT_BASE,
+       .smpe_index_valid       = false,
+};
+
+static const struct mlxsw_sp_fid_family mlxsw_sp1_fid_8021d_family = {
+       .type                   = MLXSW_SP_FID_TYPE_8021D,
+       .fid_size               = sizeof(struct mlxsw_sp_fid_8021d),
+       .start_index            = MLXSW_SP_FID_8021D_START,
+       .end_index              = MLXSW_SP_FID_8021D_END,
+       .flood_tables           = mlxsw_sp_fid_8021d_flood_tables,
+       .nr_flood_tables        = ARRAY_SIZE(mlxsw_sp_fid_8021d_flood_tables),
+       .rif_type               = MLXSW_SP_RIF_TYPE_FID,
+       .ops                    = &mlxsw_sp_fid_8021d_ops,
+       .bridge_type            = MLXSW_REG_BRIDGE_TYPE_1,
+       .pgt_base               = MLXSW_SP_FID_8021D_PGT_BASE,
+       .smpe_index_valid       = false,
+};
+
+static const struct mlxsw_sp_fid_family mlxsw_sp1_fid_dummy_family = {
+       .type                   = MLXSW_SP_FID_TYPE_DUMMY,
+       .fid_size               = sizeof(struct mlxsw_sp_fid),
+       .start_index            = MLXSW_SP_FID_DUMMY,
+       .end_index              = MLXSW_SP_FID_DUMMY,
+       .ops                    = &mlxsw_sp_fid_dummy_ops,
+       .smpe_index_valid       = false,
+};
+
+static const struct mlxsw_sp_fid_family mlxsw_sp_fid_rfid_family = {
+       .type                   = MLXSW_SP_FID_TYPE_RFID,
+       .fid_size               = sizeof(struct mlxsw_sp_fid),
+       .start_index            = MLXSW_SP_RFID_START,
+       .end_index              = MLXSW_SP_RFID_END,
+       .rif_type               = MLXSW_SP_RIF_TYPE_SUBPORT,
+       .ops                    = &mlxsw_sp_fid_rfid_ops,
+       .flood_rsp              = true,
+       .smpe_index_valid       = false,
+};
+
+const struct mlxsw_sp_fid_family *mlxsw_sp1_fid_family_arr[] = {
+       [MLXSW_SP_FID_TYPE_8021Q]       = &mlxsw_sp1_fid_8021q_family,
+       [MLXSW_SP_FID_TYPE_8021D]       = &mlxsw_sp1_fid_8021d_family,
+       [MLXSW_SP_FID_TYPE_DUMMY]       = &mlxsw_sp1_fid_dummy_family,
+       [MLXSW_SP_FID_TYPE_RFID]        = &mlxsw_sp_fid_rfid_family,
+};
+
+static const struct mlxsw_sp_fid_family mlxsw_sp2_fid_8021q_family = {
+       .type                   = MLXSW_SP_FID_TYPE_8021Q,
+       .fid_size               = sizeof(struct mlxsw_sp_fid_8021q),
+       .start_index            = MLXSW_SP_FID_8021Q_START,
+       .end_index              = MLXSW_SP_FID_8021Q_END,
+       .flood_tables           = mlxsw_sp_fid_8021d_flood_tables,
+       .nr_flood_tables        = ARRAY_SIZE(mlxsw_sp_fid_8021d_flood_tables),
+       .rif_type               = MLXSW_SP_RIF_TYPE_VLAN,
+       .ops                    = &mlxsw_sp_fid_8021q_ops,
+       .flood_rsp              = false,
+       .bridge_type            = MLXSW_REG_BRIDGE_TYPE_0,
+       .pgt_base               = MLXSW_SP_FID_8021Q_PGT_BASE,
+       .smpe_index_valid       = true,
 };
 
-static const struct mlxsw_sp_fid_family mlxsw_sp_fid_dummy_family = {
+static const struct mlxsw_sp_fid_family mlxsw_sp2_fid_8021d_family = {
+       .type                   = MLXSW_SP_FID_TYPE_8021D,
+       .fid_size               = sizeof(struct mlxsw_sp_fid_8021d),
+       .start_index            = MLXSW_SP_FID_8021D_START,
+       .end_index              = MLXSW_SP_FID_8021D_END,
+       .flood_tables           = mlxsw_sp_fid_8021d_flood_tables,
+       .nr_flood_tables        = ARRAY_SIZE(mlxsw_sp_fid_8021d_flood_tables),
+       .rif_type               = MLXSW_SP_RIF_TYPE_FID,
+       .ops                    = &mlxsw_sp_fid_8021d_ops,
+       .bridge_type            = MLXSW_REG_BRIDGE_TYPE_1,
+       .pgt_base               = MLXSW_SP_FID_8021D_PGT_BASE,
+       .smpe_index_valid       = true,
+};
+
+static const struct mlxsw_sp_fid_family mlxsw_sp2_fid_dummy_family = {
        .type                   = MLXSW_SP_FID_TYPE_DUMMY,
        .fid_size               = sizeof(struct mlxsw_sp_fid),
-       .start_index            = VLAN_N_VID - 1,
-       .end_index              = VLAN_N_VID - 1,
+       .start_index            = MLXSW_SP_FID_DUMMY,
+       .end_index              = MLXSW_SP_FID_DUMMY,
        .ops                    = &mlxsw_sp_fid_dummy_ops,
+       .smpe_index_valid       = false,
 };
 
-static const struct mlxsw_sp_fid_family *mlxsw_sp_fid_family_arr[] = {
-       [MLXSW_SP_FID_TYPE_8021Q]       = &mlxsw_sp_fid_8021q_emu_family,
-       [MLXSW_SP_FID_TYPE_8021D]       = &mlxsw_sp_fid_8021d_family,
+const struct mlxsw_sp_fid_family *mlxsw_sp2_fid_family_arr[] = {
+       [MLXSW_SP_FID_TYPE_8021Q]       = &mlxsw_sp2_fid_8021q_family,
+       [MLXSW_SP_FID_TYPE_8021D]       = &mlxsw_sp2_fid_8021d_family,
+       [MLXSW_SP_FID_TYPE_DUMMY]       = &mlxsw_sp2_fid_dummy_family,
        [MLXSW_SP_FID_TYPE_RFID]        = &mlxsw_sp_fid_rfid_family,
-       [MLXSW_SP_FID_TYPE_DUMMY]       = &mlxsw_sp_fid_dummy_family,
 };
 
 static struct mlxsw_sp_fid *mlxsw_sp_fid_lookup(struct mlxsw_sp *mlxsw_sp,
@@ -919,6 +1569,8 @@ static struct mlxsw_sp_fid *mlxsw_sp_fid_get(struct mlxsw_sp *mlxsw_sp,
        fid = kzalloc(fid_family->fid_size, GFP_KERNEL);
        if (!fid)
                return ERR_PTR(-ENOMEM);
+
+       INIT_LIST_HEAD(&fid->port_vid_list);
        fid->fid_family = fid_family;
 
        err = fid->fid_family->ops->index_alloc(fid, arg, &fid_index);
@@ -927,8 +1579,7 @@ static struct mlxsw_sp_fid *mlxsw_sp_fid_get(struct mlxsw_sp *mlxsw_sp,
        fid->fid_index = fid_index;
        __set_bit(fid_index - fid_family->start_index, fid_family->fids_bitmap);
 
-       if (fid->fid_family->ops->setup)
-               fid->fid_family->ops->setup(fid, arg);
+       fid->fid_family->ops->setup(fid, arg);
 
        err = fid->fid_family->ops->configure(fid);
        if (err)
@@ -967,6 +1618,7 @@ void mlxsw_sp_fid_put(struct mlxsw_sp_fid *fid)
        fid->fid_family->ops->deconfigure(fid);
        __clear_bit(fid->fid_index - fid_family->start_index,
                    fid_family->fids_bitmap);
+       WARN_ON_ONCE(!list_empty(&fid->port_vid_list));
        kfree(fid);
 }
 
@@ -1010,26 +1662,49 @@ mlxsw_sp_fid_flood_table_init(struct mlxsw_sp_fid_family *fid_family,
                              const struct mlxsw_sp_flood_table *flood_table)
 {
        enum mlxsw_sp_flood_type packet_type = flood_table->packet_type;
+       struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
        const int *sfgc_packet_types;
-       int i;
+       u16 num_fids, mid_base;
+       int err, i;
+
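+       /* Each flood table is backed by a contiguous range of PGT entries,
+        * one per FID in the family.
+        */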
+       mid_base = mlxsw_sp_fid_flood_table_mid(fid_family, flood_table, 0);
+       num_fids = mlxsw_sp_fid_family_num_fids(fid_family);
+       err = mlxsw_sp_pgt_mid_alloc_range(mlxsw_sp, mid_base, num_fids);
+       if (err)
+               return err;
 
        sfgc_packet_types = mlxsw_sp_packet_type_sfgc_types[packet_type];
        for (i = 0; i < MLXSW_REG_SFGC_TYPE_MAX; i++) {
-               struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
                char sfgc_pl[MLXSW_REG_SFGC_LEN];
-               int err;
 
                if (!sfgc_packet_types[i])
                        continue;
-               mlxsw_reg_sfgc_pack(sfgc_pl, i, flood_table->bridge_type,
-                                   flood_table->table_type,
-                                   flood_table->table_index);
+
+               mlxsw_reg_sfgc_pack(sfgc_pl, i, fid_family->bridge_type,
+                                   flood_table->table_type, 0, mid_base);
+
                err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfgc), sfgc_pl);
                if (err)
-                       return err;
+                       goto err_reg_write;
        }
 
        return 0;
+
+err_reg_write:
+       mlxsw_sp_pgt_mid_free_range(mlxsw_sp, mid_base, num_fids);
+       return err;
+}
+
+static void
+mlxsw_sp_fid_flood_table_fini(struct mlxsw_sp_fid_family *fid_family,
+                             const struct mlxsw_sp_flood_table *flood_table)
+{
+       struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
+       u16 num_fids, mid_base;
+
+       mid_base = mlxsw_sp_fid_flood_table_mid(fid_family, flood_table, 0);
+       num_fids = mlxsw_sp_fid_family_num_fids(fid_family);
+       mlxsw_sp_pgt_mid_free_range(mlxsw_sp, mid_base, num_fids);
 }
 
 static int
@@ -1050,6 +1725,19 @@ mlxsw_sp_fid_flood_tables_init(struct mlxsw_sp_fid_family *fid_family)
        return 0;
 }
 
+static void
+mlxsw_sp_fid_flood_tables_fini(struct mlxsw_sp_fid_family *fid_family)
+{
+       int i;
+
+       for (i = 0; i < fid_family->nr_flood_tables; i++) {
+               const struct mlxsw_sp_flood_table *flood_table;
+
+               flood_table = &fid_family->flood_tables[i];
+               mlxsw_sp_fid_flood_table_fini(fid_family, flood_table);
+       }
+}
+
 static int mlxsw_sp_fid_family_register(struct mlxsw_sp *mlxsw_sp,
                                        const struct mlxsw_sp_fid_family *tmpl)
 {
@@ -1091,6 +1779,10 @@ mlxsw_sp_fid_family_unregister(struct mlxsw_sp *mlxsw_sp,
                               struct mlxsw_sp_fid_family *fid_family)
 {
        mlxsw_sp->fid_core->fid_family_arr[fid_family->type] = NULL;
+
+       if (fid_family->flood_tables)
+               mlxsw_sp_fid_flood_tables_fini(fid_family);
+
        bitmap_free(fid_family->fids_bitmap);
        WARN_ON_ONCE(!list_empty(&fid_family->fids_list));
        kfree(fid_family);
@@ -1144,7 +1836,7 @@ int mlxsw_sp_fids_init(struct mlxsw_sp *mlxsw_sp)
 
        for (i = 0; i < MLXSW_SP_FID_TYPE_MAX; i++) {
                err = mlxsw_sp_fid_family_register(mlxsw_sp,
-                                                  mlxsw_sp_fid_family_arr[i]);
+                                                  mlxsw_sp->fid_family_arr[i]);
 
                if (err)
                        goto err_fid_ops_register;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c
new file mode 100644
index 0000000..7dd3dba
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c
@@ -0,0 +1,346 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#include <linux/refcount.h>
+#include <linux/idr.h>
+
+#include "spectrum.h"
+#include "reg.h"
+
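+/* The PGT (Port Group Table) maps a MID (Multicast IDentifier) to the set
+ * of local ports that packets flooded through that MID egress from. MIDs
+ * are allocated from an IDR and port membership is programmed to hardware
+ * via the SMID2 register.
+ */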
+struct mlxsw_sp_pgt {
+       struct idr pgt_idr;
+       u16 end_index; /* Exclusive. */
+       struct mutex lock; /* Protects PGT. */
+       bool smpe_index_valid;
+};
+
+struct mlxsw_sp_pgt_entry {
+       struct list_head ports_list;
+       u16 index;
+       u16 smpe_index;
+};
+
+struct mlxsw_sp_pgt_entry_port {
+       struct list_head list; /* Member of 'ports_list'. */
+       u16 local_port;
+};
+
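+/* Allocate a single free MID from the PGT IDR; contiguous ranges used by
+ * flood tables are reserved separately via mlxsw_sp_pgt_mid_alloc_range().
+ */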
+int mlxsw_sp_pgt_mid_alloc(struct mlxsw_sp *mlxsw_sp, u16 *p_mid)
+{
+       int index, err = 0;
+
+       mutex_lock(&mlxsw_sp->pgt->lock);
+       index = idr_alloc(&mlxsw_sp->pgt->pgt_idr, NULL, 0,
+                         mlxsw_sp->pgt->end_index, GFP_KERNEL);
+
+       if (index < 0) {
+               err = index;
+               goto err_idr_alloc;
+       }
+
+       *p_mid = index;
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+       return 0;
+
+err_idr_alloc:
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+       return err;
+}
+
+void mlxsw_sp_pgt_mid_free(struct mlxsw_sp *mlxsw_sp, u16 mid_base)
+{
+       mutex_lock(&mlxsw_sp->pgt->lock);
+       WARN_ON(idr_remove(&mlxsw_sp->pgt->pgt_idr, mid_base));
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+}
+
+int
+mlxsw_sp_pgt_mid_alloc_range(struct mlxsw_sp *mlxsw_sp, u16 mid_base, u16 count)
+{
+       unsigned int idr_cursor;
+       int i, err;
+
+       mutex_lock(&mlxsw_sp->pgt->lock);
+
+       /* This function is supposed to be called several times as part of
+        * driver init, in a specific order. Verify that mid_base is the
+        * first free index in the IDR, to be able to free the indexes in
+        * case of error.
+        */
+       idr_cursor = idr_get_cursor(&mlxsw_sp->pgt->pgt_idr);
+       if (WARN_ON(idr_cursor != mid_base)) {
+               err = -EINVAL;
+               goto err_idr_cursor;
+       }
+
+       for (i = 0; i < count; i++) {
+               err = idr_alloc_cyclic(&mlxsw_sp->pgt->pgt_idr, NULL,
+                                      mid_base, mid_base + count, GFP_KERNEL);
+               if (err < 0)
+                       goto err_idr_alloc_cyclic;
+       }
+
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+       return 0;
+
+err_idr_alloc_cyclic:
+       for (i--; i >= 0; i--)
+               idr_remove(&mlxsw_sp->pgt->pgt_idr, mid_base + i);
+err_idr_cursor:
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+       return err;
+}
+
+void
+mlxsw_sp_pgt_mid_free_range(struct mlxsw_sp *mlxsw_sp, u16 mid_base, u16 count)
+{
+       struct idr *pgt_idr = &mlxsw_sp->pgt->pgt_idr;
+       int i;
+
+       mutex_lock(&mlxsw_sp->pgt->lock);
+
+       for (i = 0; i < count; i++)
+               WARN_ON_ONCE(idr_remove(pgt_idr, mid_base + i));
+
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+}
+
+static struct mlxsw_sp_pgt_entry_port *
+mlxsw_sp_pgt_entry_port_lookup(struct mlxsw_sp_pgt_entry *pgt_entry,
+                              u16 local_port)
+{
+       struct mlxsw_sp_pgt_entry_port *pgt_entry_port;
+
+       list_for_each_entry(pgt_entry_port, &pgt_entry->ports_list, list) {
+               if (pgt_entry_port->local_port == local_port)
+                       return pgt_entry_port;
+       }
+
+       return NULL;
+}
+
+static struct mlxsw_sp_pgt_entry *
+mlxsw_sp_pgt_entry_create(struct mlxsw_sp_pgt *pgt, u16 mid, u16 smpe)
+{
+       struct mlxsw_sp_pgt_entry *pgt_entry;
+       void *ret;
+       int err;
+
+       pgt_entry = kzalloc(sizeof(*pgt_entry), GFP_KERNEL);
+       if (!pgt_entry)
+               return ERR_PTR(-ENOMEM);
+
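+       /* The MID was already reserved in the IDR with a NULL pointer;
+        * replace the placeholder with the actual entry.
+        */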
+       ret = idr_replace(&pgt->pgt_idr, pgt_entry, mid);
+       if (IS_ERR(ret)) {
+               err = PTR_ERR(ret);
+               goto err_idr_replace;
+       }
+
+       INIT_LIST_HEAD(&pgt_entry->ports_list);
+       pgt_entry->index = mid;
+       pgt_entry->smpe_index = smpe;
+       return pgt_entry;
+
+err_idr_replace:
+       kfree(pgt_entry);
+       return ERR_PTR(err);
+}
+
+static void mlxsw_sp_pgt_entry_destroy(struct mlxsw_sp_pgt *pgt,
+                                      struct mlxsw_sp_pgt_entry *pgt_entry)
+{
+       WARN_ON(!list_empty(&pgt_entry->ports_list));
+
+       pgt_entry = idr_replace(&pgt->pgt_idr, NULL, pgt_entry->index);
+       if (WARN_ON(IS_ERR(pgt_entry)))
+               return;
+
+       kfree(pgt_entry);
+}
+
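+/* Look up the PGT entry for 'mid', creating it on first use. */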
+static struct mlxsw_sp_pgt_entry *
+mlxsw_sp_pgt_entry_get(struct mlxsw_sp_pgt *pgt, u16 mid, u16 smpe)
+{
+       struct mlxsw_sp_pgt_entry *pgt_entry;
+
+       pgt_entry = idr_find(&pgt->pgt_idr, mid);
+       if (pgt_entry)
+               return pgt_entry;
+
+       return mlxsw_sp_pgt_entry_create(pgt, mid, smpe);
+}
+
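+/* Destroy the PGT entry once no ports reference it. The MID itself stays
+ * allocated in the IDR until freed by its owner.
+ */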
+static void mlxsw_sp_pgt_entry_put(struct mlxsw_sp_pgt *pgt, u16 mid)
+{
+       struct mlxsw_sp_pgt_entry *pgt_entry;
+
+       pgt_entry = idr_find(&pgt->pgt_idr, mid);
+       if (WARN_ON(!pgt_entry))
+               return;
+
+       if (list_empty(&pgt_entry->ports_list))
+               mlxsw_sp_pgt_entry_destroy(pgt, pgt_entry);
+}
+
+static void mlxsw_sp_pgt_smid2_port_set(char *smid2_pl, u16 local_port,
+                                       bool member)
+{
+       mlxsw_reg_smid2_port_set(smid2_pl, local_port, member);
+       mlxsw_reg_smid2_port_mask_set(smid2_pl, local_port, 1);
+}
+
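+/* Program one port's membership in a PGT entry via the SMID2 register. */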
+static int
+mlxsw_sp_pgt_entry_port_write(struct mlxsw_sp *mlxsw_sp,
+                             const struct mlxsw_sp_pgt_entry *pgt_entry,
+                             u16 local_port, bool member)
+{
+       char *smid2_pl;
+       int err;
+
+       smid2_pl = kmalloc(MLXSW_REG_SMID2_LEN, GFP_KERNEL);
+       if (!smid2_pl)
+               return -ENOMEM;
+
+       mlxsw_reg_smid2_pack(smid2_pl, pgt_entry->index, 0, 0,
+                            mlxsw_sp->pgt->smpe_index_valid,
+                            pgt_entry->smpe_index);
+
+       mlxsw_sp_pgt_smid2_port_set(smid2_pl, local_port, member);
+       err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(smid2), smid2_pl);
+
+       kfree(smid2_pl);
+
+       return err;
+}
+
+static struct mlxsw_sp_pgt_entry_port *
+mlxsw_sp_pgt_entry_port_create(struct mlxsw_sp *mlxsw_sp,
+                              struct mlxsw_sp_pgt_entry *pgt_entry,
+                              u16 local_port)
+{
+       struct mlxsw_sp_pgt_entry_port *pgt_entry_port;
+       int err;
+
+       pgt_entry_port = kzalloc(sizeof(*pgt_entry_port), GFP_KERNEL);
+       if (!pgt_entry_port)
+               return ERR_PTR(-ENOMEM);
+
+       err = mlxsw_sp_pgt_entry_port_write(mlxsw_sp, pgt_entry, local_port,
+                                           true);
+       if (err)
+               goto err_pgt_entry_port_write;
+
+       pgt_entry_port->local_port = local_port;
+       list_add(&pgt_entry_port->list, &pgt_entry->ports_list);
+
+       return pgt_entry_port;
+
+err_pgt_entry_port_write:
+       kfree(pgt_entry_port);
+       return ERR_PTR(err);
+}
+
+static void
+mlxsw_sp_pgt_entry_port_destroy(struct mlxsw_sp *mlxsw_sp,
+                               struct mlxsw_sp_pgt_entry *pgt_entry,
+                               struct mlxsw_sp_pgt_entry_port *pgt_entry_port)
+{
+       list_del(&pgt_entry_port->list);
+       mlxsw_sp_pgt_entry_port_write(mlxsw_sp, pgt_entry,
+                                     pgt_entry_port->local_port, false);
+       kfree(pgt_entry_port);
+}
+
+static int mlxsw_sp_pgt_entry_port_add(struct mlxsw_sp *mlxsw_sp, u16 mid,
+                                      u16 smpe, u16 local_port)
+{
+       struct mlxsw_sp_pgt_entry_port *pgt_entry_port;
+       struct mlxsw_sp_pgt_entry *pgt_entry;
+       int err;
+
+       mutex_lock(&mlxsw_sp->pgt->lock);
+
+       pgt_entry = mlxsw_sp_pgt_entry_get(mlxsw_sp->pgt, mid, smpe);
+       if (IS_ERR(pgt_entry)) {
+               err = PTR_ERR(pgt_entry);
+               goto err_pgt_entry_get;
+       }
+
+       pgt_entry_port = mlxsw_sp_pgt_entry_port_create(mlxsw_sp, pgt_entry,
+                                                       local_port);
+       if (IS_ERR(pgt_entry_port)) {
+               err = PTR_ERR(pgt_entry_port);
+               goto err_pgt_entry_port_get;
+       }
+
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+       return 0;
+
+err_pgt_entry_port_get:
+       mlxsw_sp_pgt_entry_put(mlxsw_sp->pgt, mid);
+err_pgt_entry_get:
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+       return err;
+}
+
+static void mlxsw_sp_pgt_entry_port_del(struct mlxsw_sp *mlxsw_sp,
+                                       u16 mid, u16 smpe, u16 local_port)
+{
+       struct mlxsw_sp_pgt_entry_port *pgt_entry_port;
+       struct mlxsw_sp_pgt_entry *pgt_entry;
+
+       mutex_lock(&mlxsw_sp->pgt->lock);
+
+       pgt_entry = idr_find(&mlxsw_sp->pgt->pgt_idr, mid);
+       if (!pgt_entry)
+               goto out;
+
+       pgt_entry_port = mlxsw_sp_pgt_entry_port_lookup(pgt_entry, local_port);
+       if (!pgt_entry_port)
+               goto out;
+
+       mlxsw_sp_pgt_entry_port_destroy(mlxsw_sp, pgt_entry, pgt_entry_port);
+       mlxsw_sp_pgt_entry_put(mlxsw_sp->pgt, mid);
+
+out:
+       mutex_unlock(&mlxsw_sp->pgt->lock);
+}
+
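+/* External entry point: add ('member' == true) or remove a local port
+ * in the PGT entry of 'mid'.
+ */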
+int mlxsw_sp_pgt_entry_port_set(struct mlxsw_sp *mlxsw_sp, u16 mid,
+                               u16 smpe, u16 local_port, bool member)
+{
+       if (member)
+               return mlxsw_sp_pgt_entry_port_add(mlxsw_sp, mid, smpe,
+                                                  local_port);
+
+       mlxsw_sp_pgt_entry_port_del(mlxsw_sp, mid, smpe, local_port);
+       return 0;
+}
+
+int mlxsw_sp_pgt_init(struct mlxsw_sp *mlxsw_sp)
+{
+       struct mlxsw_sp_pgt *pgt;
+
+       if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, PGT_SIZE))
+               return -EIO;
+
+       pgt = kzalloc(sizeof(*mlxsw_sp->pgt), GFP_KERNEL);
+       if (!pgt)
+               return -ENOMEM;
+
+       idr_init(&pgt->pgt_idr);
+       pgt->end_index = MLXSW_CORE_RES_GET(mlxsw_sp->core, PGT_SIZE);
+       mutex_init(&pgt->lock);
+       pgt->smpe_index_valid = mlxsw_sp->pgt_smpe_index_valid;
+       mlxsw_sp->pgt = pgt;
+       return 0;
+}
+
+void mlxsw_sp_pgt_fini(struct mlxsw_sp *mlxsw_sp)
+{
+       mutex_destroy(&mlxsw_sp->pgt->lock);
+       WARN_ON(!idr_is_empty(&mlxsw_sp->pgt->pgt_idr));
+       idr_destroy(&mlxsw_sp->pgt->pgt_idr);
+       kfree(mlxsw_sp->pgt);
+}
index 4c77215..09009e8 100644
@@ -4323,6 +4323,8 @@ static int mlxsw_sp_nexthop4_init(struct mlxsw_sp *mlxsw_sp,
        return 0;
 
 err_nexthop_neigh_init:
+       list_del(&nh->router_list_node);
+       mlxsw_sp_nexthop_counter_free(mlxsw_sp, nh);
        mlxsw_sp_nexthop_remove(mlxsw_sp, nh);
        return err;
 }
@@ -6498,6 +6500,7 @@ static int mlxsw_sp_nexthop6_init(struct mlxsw_sp *mlxsw_sp,
                                  const struct fib6_info *rt)
 {
        struct net_device *dev = rt->fib6_nh->fib_nh_dev;
+       int err;
 
        nh->nhgi = nh_grp->nhgi;
        nh->nh_weight = rt->fib6_nh->fib_nh_weight;
@@ -6513,7 +6516,16 @@ static int mlxsw_sp_nexthop6_init(struct mlxsw_sp *mlxsw_sp,
                return 0;
        nh->ifindex = dev->ifindex;
 
-       return mlxsw_sp_nexthop_type_init(mlxsw_sp, nh, dev);
+       err = mlxsw_sp_nexthop_type_init(mlxsw_sp, nh, dev);
+       if (err)
+               goto err_nexthop_type_init;
+
+       return 0;
+
+err_nexthop_type_init:
+       list_del(&nh->router_list_node);
+       mlxsw_sp_nexthop_counter_free(mlxsw_sp, nh);
+       return err;
 }
 
 static void mlxsw_sp_nexthop6_fini(struct mlxsw_sp *mlxsw_sp,
@@ -9304,17 +9316,18 @@ static int mlxsw_sp_rif_subport_op(struct mlxsw_sp_rif *rif, bool enable)
        struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
        struct mlxsw_sp_rif_subport *rif_subport;
        char ritr_pl[MLXSW_REG_RITR_LEN];
+       u16 efid;
 
        rif_subport = mlxsw_sp_rif_subport_rif(rif);
        mlxsw_reg_ritr_pack(ritr_pl, enable, MLXSW_REG_RITR_SP_IF,
                            rif->rif_index, rif->vr_id, rif->dev->mtu);
        mlxsw_reg_ritr_mac_pack(ritr_pl, rif->dev->dev_addr);
        mlxsw_reg_ritr_if_mac_profile_id_set(ritr_pl, rif->mac_profile_id);
+       efid = mlxsw_sp_fid_index(rif->fid);
        mlxsw_reg_ritr_sp_if_pack(ritr_pl, rif_subport->lag,
                                  rif_subport->lag ? rif_subport->lag_id :
                                                     rif_subport->system_port,
-                                 rif_subport->vid);
-
+                                 efid, 0);
        return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ritr), ritr_pl);
 }
 
@@ -9339,9 +9352,15 @@ static int mlxsw_sp_rif_subport_configure(struct mlxsw_sp_rif *rif,
        if (err)
                goto err_rif_fdb_op;
 
-       mlxsw_sp_fid_rif_set(rif->fid, rif);
+       err = mlxsw_sp_fid_rif_set(rif->fid, rif);
+       if (err)
+               goto err_fid_rif_set;
+
        return 0;
 
+err_fid_rif_set:
+       mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
+                           mlxsw_sp_fid_index(rif->fid), false);
 err_rif_fdb_op:
        mlxsw_sp_rif_subport_op(rif, false);
 err_rif_subport_op:
@@ -9353,7 +9372,7 @@ static void mlxsw_sp_rif_subport_deconfigure(struct mlxsw_sp_rif *rif)
 {
        struct mlxsw_sp_fid *fid = rif->fid;
 
-       mlxsw_sp_fid_rif_set(fid, NULL);
+       mlxsw_sp_fid_rif_unset(fid);
        mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
                            mlxsw_sp_fid_index(fid), false);
        mlxsw_sp_rif_macvlan_flush(rif);
@@ -9377,10 +9396,9 @@ static const struct mlxsw_sp_rif_ops mlxsw_sp_rif_subport_ops = {
        .fid_get                = mlxsw_sp_rif_subport_fid_get,
 };
 
-static int mlxsw_sp_rif_vlan_fid_op(struct mlxsw_sp_rif *rif,
-                                   enum mlxsw_reg_ritr_if_type type,
-                                   u16 vid_fid, bool enable)
+static int mlxsw_sp_rif_fid_op(struct mlxsw_sp_rif *rif, u16 fid, bool enable)
 {
+       enum mlxsw_reg_ritr_if_type type = MLXSW_REG_RITR_FID_IF;
        struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
        char ritr_pl[MLXSW_REG_RITR_LEN];
 
@@ -9388,7 +9406,7 @@ static int mlxsw_sp_rif_vlan_fid_op(struct mlxsw_sp_rif *rif,
                            rif->dev->mtu);
        mlxsw_reg_ritr_mac_pack(ritr_pl, rif->dev->dev_addr);
        mlxsw_reg_ritr_if_mac_profile_id_set(ritr_pl, rif->mac_profile_id);
-       mlxsw_reg_ritr_fid_set(ritr_pl, type, vid_fid);
+       mlxsw_reg_ritr_fid_if_fid_set(ritr_pl, fid);
 
        return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ritr), ritr_pl);
 }
@@ -9412,10 +9430,9 @@ static int mlxsw_sp_rif_fid_configure(struct mlxsw_sp_rif *rif,
                return err;
        rif->mac_profile_id = mac_profile;
 
-       err = mlxsw_sp_rif_vlan_fid_op(rif, MLXSW_REG_RITR_FID_IF, fid_index,
-                                      true);
+       err = mlxsw_sp_rif_fid_op(rif, fid_index, true);
        if (err)
-               goto err_rif_vlan_fid_op;
+               goto err_rif_fid_op;
 
        err = mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_MC,
                                     mlxsw_sp_router_port(mlxsw_sp), true);
@@ -9432,9 +9449,15 @@ static int mlxsw_sp_rif_fid_configure(struct mlxsw_sp_rif *rif,
        if (err)
                goto err_rif_fdb_op;
 
-       mlxsw_sp_fid_rif_set(rif->fid, rif);
+       err = mlxsw_sp_fid_rif_set(rif->fid, rif);
+       if (err)
+               goto err_fid_rif_set;
+
        return 0;
 
+err_fid_rif_set:
+       mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
+                           mlxsw_sp_fid_index(rif->fid), false);
 err_rif_fdb_op:
        mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_BC,
                               mlxsw_sp_router_port(mlxsw_sp), false);
@@ -9442,8 +9465,8 @@ err_fid_bc_flood_set:
        mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_MC,
                               mlxsw_sp_router_port(mlxsw_sp), false);
 err_fid_mc_flood_set:
-       mlxsw_sp_rif_vlan_fid_op(rif, MLXSW_REG_RITR_FID_IF, fid_index, false);
-err_rif_vlan_fid_op:
+       mlxsw_sp_rif_fid_op(rif, fid_index, false);
+err_rif_fid_op:
        mlxsw_sp_rif_mac_profile_put(mlxsw_sp, mac_profile);
        return err;
 }
@@ -9454,7 +9477,7 @@ static void mlxsw_sp_rif_fid_deconfigure(struct mlxsw_sp_rif *rif)
        struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
        struct mlxsw_sp_fid *fid = rif->fid;
 
-       mlxsw_sp_fid_rif_set(fid, NULL);
+       mlxsw_sp_fid_rif_unset(fid);
        mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
                            mlxsw_sp_fid_index(fid), false);
        mlxsw_sp_rif_macvlan_flush(rif);
@@ -9462,7 +9485,7 @@ static void mlxsw_sp_rif_fid_deconfigure(struct mlxsw_sp_rif *rif)
                               mlxsw_sp_router_port(mlxsw_sp), false);
        mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_MC,
                               mlxsw_sp_router_port(mlxsw_sp), false);
-       mlxsw_sp_rif_vlan_fid_op(rif, MLXSW_REG_RITR_FID_IF, fid_index, false);
+       mlxsw_sp_rif_fid_op(rif, fid_index, false);
        mlxsw_sp_rif_mac_profile_put(rif->mlxsw_sp, rif->mac_profile_id);
 }
 
@@ -9539,11 +9562,119 @@ static void mlxsw_sp_rif_vlan_fdb_del(struct mlxsw_sp_rif *rif, const char *mac)
                                 NULL);
 }
 
-static const struct mlxsw_sp_rif_ops mlxsw_sp_rif_vlan_emu_ops = {
+static int mlxsw_sp_rif_vlan_op(struct mlxsw_sp_rif *rif, u16 vid, u16 efid,
+                               bool enable)
+{
+       struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
+       char ritr_pl[MLXSW_REG_RITR_LEN];
+
+       mlxsw_reg_ritr_vlan_if_pack(ritr_pl, enable, rif->rif_index, rif->vr_id,
+                                   rif->dev->mtu, rif->dev->dev_addr,
+                                   rif->mac_profile_id, vid, efid);
+
+       return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ritr), ritr_pl);
+}
+
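+/* Common configure path for VLAN RIFs. Spectrum-1 passes eFID 0, while
+ * Spectrum-2 and later pass the FID index (see the per-ASIC wrappers below).
+ */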
+static int mlxsw_sp_rif_vlan_configure(struct mlxsw_sp_rif *rif, u16 efid,
+                                      struct netlink_ext_ack *extack)
+{
+       u16 vid = mlxsw_sp_fid_8021q_vid(rif->fid);
+       struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
+       u8 mac_profile;
+       int err;
+
+       err = mlxsw_sp_rif_mac_profile_get(mlxsw_sp, rif->addr,
+                                          &mac_profile, extack);
+       if (err)
+               return err;
+       rif->mac_profile_id = mac_profile;
+
+       err = mlxsw_sp_rif_vlan_op(rif, vid, efid, true);
+       if (err)
+               goto err_rif_vlan_fid_op;
+
+       err = mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_MC,
+                                    mlxsw_sp_router_port(mlxsw_sp), true);
+       if (err)
+               goto err_fid_mc_flood_set;
+
+       err = mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_BC,
+                                    mlxsw_sp_router_port(mlxsw_sp), true);
+       if (err)
+               goto err_fid_bc_flood_set;
+
+       err = mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
+                                 mlxsw_sp_fid_index(rif->fid), true);
+       if (err)
+               goto err_rif_fdb_op;
+
+       err = mlxsw_sp_fid_rif_set(rif->fid, rif);
+       if (err)
+               goto err_fid_rif_set;
+
+       return 0;
+
+err_fid_rif_set:
+       mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
+                           mlxsw_sp_fid_index(rif->fid), false);
+err_rif_fdb_op:
+       mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_BC,
+                              mlxsw_sp_router_port(mlxsw_sp), false);
+err_fid_bc_flood_set:
+       mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_MC,
+                              mlxsw_sp_router_port(mlxsw_sp), false);
+err_fid_mc_flood_set:
+       mlxsw_sp_rif_vlan_op(rif, vid, 0, false);
+err_rif_vlan_fid_op:
+       mlxsw_sp_rif_mac_profile_put(mlxsw_sp, mac_profile);
+       return err;
+}
+
+static void mlxsw_sp_rif_vlan_deconfigure(struct mlxsw_sp_rif *rif)
+{
+       u16 vid = mlxsw_sp_fid_8021q_vid(rif->fid);
+       struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
+
+       mlxsw_sp_fid_rif_unset(rif->fid);
+       mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
+                           mlxsw_sp_fid_index(rif->fid), false);
+       mlxsw_sp_rif_macvlan_flush(rif);
+       mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_BC,
+                              mlxsw_sp_router_port(mlxsw_sp), false);
+       mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_MC,
+                              mlxsw_sp_router_port(mlxsw_sp), false);
+       mlxsw_sp_rif_vlan_op(rif, vid, 0, false);
+       mlxsw_sp_rif_mac_profile_put(rif->mlxsw_sp, rif->mac_profile_id);
+}
+
+static int mlxsw_sp1_rif_vlan_configure(struct mlxsw_sp_rif *rif,
+                                       struct netlink_ext_ack *extack)
+{
+       return mlxsw_sp_rif_vlan_configure(rif, 0, extack);
+}
+
+static const struct mlxsw_sp_rif_ops mlxsw_sp1_rif_vlan_ops = {
        .type                   = MLXSW_SP_RIF_TYPE_VLAN,
        .rif_size               = sizeof(struct mlxsw_sp_rif),
-       .configure              = mlxsw_sp_rif_fid_configure,
-       .deconfigure            = mlxsw_sp_rif_fid_deconfigure,
+       .configure              = mlxsw_sp1_rif_vlan_configure,
+       .deconfigure            = mlxsw_sp_rif_vlan_deconfigure,
+       .fid_get                = mlxsw_sp_rif_vlan_fid_get,
+       .fdb_del                = mlxsw_sp_rif_vlan_fdb_del,
+};
+
+static int mlxsw_sp2_rif_vlan_configure(struct mlxsw_sp_rif *rif,
+                                       struct netlink_ext_ack *extack)
+{
+       u16 efid = mlxsw_sp_fid_index(rif->fid);
+
+       return mlxsw_sp_rif_vlan_configure(rif, efid, extack);
+}
+
+static const struct mlxsw_sp_rif_ops mlxsw_sp2_rif_vlan_ops = {
+       .type                   = MLXSW_SP_RIF_TYPE_VLAN,
+       .rif_size               = sizeof(struct mlxsw_sp_rif),
+       .configure              = mlxsw_sp2_rif_vlan_configure,
+       .deconfigure            = mlxsw_sp_rif_vlan_deconfigure,
        .fid_get                = mlxsw_sp_rif_vlan_fid_get,
        .fdb_del                = mlxsw_sp_rif_vlan_fdb_del,
 };
@@ -9618,7 +9749,7 @@ static const struct mlxsw_sp_rif_ops mlxsw_sp1_rif_ipip_lb_ops = {
 
 static const struct mlxsw_sp_rif_ops *mlxsw_sp1_rif_ops_arr[] = {
        [MLXSW_SP_RIF_TYPE_SUBPORT]     = &mlxsw_sp_rif_subport_ops,
-       [MLXSW_SP_RIF_TYPE_VLAN]        = &mlxsw_sp_rif_vlan_emu_ops,
+       [MLXSW_SP_RIF_TYPE_VLAN]        = &mlxsw_sp1_rif_vlan_ops,
        [MLXSW_SP_RIF_TYPE_FID]         = &mlxsw_sp_rif_fid_ops,
        [MLXSW_SP_RIF_TYPE_IPIP_LB]     = &mlxsw_sp1_rif_ipip_lb_ops,
 };
@@ -9806,7 +9937,7 @@ static const struct mlxsw_sp_rif_ops mlxsw_sp2_rif_ipip_lb_ops = {
 
 static const struct mlxsw_sp_rif_ops *mlxsw_sp2_rif_ops_arr[] = {
        [MLXSW_SP_RIF_TYPE_SUBPORT]     = &mlxsw_sp_rif_subport_ops,
-       [MLXSW_SP_RIF_TYPE_VLAN]        = &mlxsw_sp_rif_vlan_emu_ops,
+       [MLXSW_SP_RIF_TYPE_VLAN]        = &mlxsw_sp2_rif_vlan_ops,
        [MLXSW_SP_RIF_TYPE_FID]         = &mlxsw_sp_rif_fid_ops,
        [MLXSW_SP_RIF_TYPE_IPIP_LB]     = &mlxsw_sp2_rif_ipip_lb_ops,
 };
index b5c83ec..c5dfb97 100644
@@ -82,7 +82,6 @@ struct mlxsw_sp_ipip_entry;
 
 struct mlxsw_sp_rif *mlxsw_sp_rif_by_index(const struct mlxsw_sp *mlxsw_sp,
                                           u16 rif_index);
-u16 mlxsw_sp_rif_index(const struct mlxsw_sp_rif *rif);
 u16 mlxsw_sp_ipip_lb_rif_index(const struct mlxsw_sp_rif_ipip_lb *rif);
 u16 mlxsw_sp_ipip_lb_ul_vr_id(const struct mlxsw_sp_rif_ipip_lb *rif);
 u16 mlxsw_sp_ipip_lb_ul_rif_id(const struct mlxsw_sp_rif_ipip_lb *lb_rif);
index a738d02..4efccd9 100644
@@ -48,7 +48,8 @@ struct mlxsw_sp_bridge_device {
        struct net_device *dev;
        struct list_head list;
        struct list_head ports_list;
-       struct list_head mids_list;
+       struct list_head mdb_list;
+       struct rhashtable mdb_ht;
        u8 vlan_enabled:1,
           multicast_enabled:1,
           mrouter:1;
@@ -102,6 +103,33 @@ struct mlxsw_sp_switchdev_ops {
        void (*init)(struct mlxsw_sp *mlxsw_sp);
 };
 
+struct mlxsw_sp_mdb_entry_key {
+       unsigned char addr[ETH_ALEN];
+       u16 fid;
+};
+
+struct mlxsw_sp_mdb_entry {
+       struct list_head list;
+       struct rhash_head ht_node;
+       struct mlxsw_sp_mdb_entry_key key;
+       u16 mid;
+       struct list_head ports_list;
+       u16 ports_count;
+};
+
+struct mlxsw_sp_mdb_entry_port {
+       struct list_head list; /* Member of 'ports_list'. */
+       u16 local_port;
+       refcount_t refcount;
+       bool mrouter;
+};
+
+static const struct rhashtable_params mlxsw_sp_mdb_ht_params = {
+       .key_offset = offsetof(struct mlxsw_sp_mdb_entry, key),
+       .head_offset = offsetof(struct mlxsw_sp_mdb_entry, ht_node),
+       .key_len = sizeof(struct mlxsw_sp_mdb_entry_key),
+};
+
 static int
 mlxsw_sp_bridge_port_fdb_flush(struct mlxsw_sp *mlxsw_sp,
                               struct mlxsw_sp_bridge_port *bridge_port,
@@ -109,12 +137,13 @@ mlxsw_sp_bridge_port_fdb_flush(struct mlxsw_sp *mlxsw_sp,
 
 static void
 mlxsw_sp_bridge_port_mdb_flush(struct mlxsw_sp_port *mlxsw_sp_port,
-                              struct mlxsw_sp_bridge_port *bridge_port);
+                              struct mlxsw_sp_bridge_port *bridge_port,
+                              u16 fid_index);
 
-static void
-mlxsw_sp_bridge_mdb_mc_enable_sync(struct mlxsw_sp_port *mlxsw_sp_port,
+static int
+mlxsw_sp_bridge_mdb_mc_enable_sync(struct mlxsw_sp *mlxsw_sp,
                                   struct mlxsw_sp_bridge_device
-                                  *bridge_device);
+                                  *bridge_device, bool mc_enabled);
 
 static void
 mlxsw_sp_port_mrouter_update_mdb(struct mlxsw_sp_port *mlxsw_sp_port,
@@ -237,6 +266,10 @@ mlxsw_sp_bridge_device_create(struct mlxsw_sp_bridge *bridge,
        if (!bridge_device)
                return ERR_PTR(-ENOMEM);
 
+       err = rhashtable_init(&bridge_device->mdb_ht, &mlxsw_sp_mdb_ht_params);
+       if (err)
+               goto err_mdb_rhashtable_init;
+
        bridge_device->dev = br_dev;
        bridge_device->vlan_enabled = vlan_enabled;
        bridge_device->multicast_enabled = br_multicast_enabled(br_dev);
@@ -254,7 +287,8 @@ mlxsw_sp_bridge_device_create(struct mlxsw_sp_bridge *bridge,
        } else {
                bridge_device->ops = bridge->bridge_8021d_ops;
        }
-       INIT_LIST_HEAD(&bridge_device->mids_list);
+       INIT_LIST_HEAD(&bridge_device->mdb_list);
+
        if (list_empty(&bridge->bridges_list))
                mlxsw_sp_fdb_notify_work_schedule(bridge->mlxsw_sp, false);
        list_add(&bridge_device->list, &bridge->bridges_list);
@@ -273,6 +307,8 @@ err_vxlan_init:
        list_del(&bridge_device->list);
        if (bridge_device->vlan_enabled)
                bridge->vlan_enabled_exists = false;
+       rhashtable_destroy(&bridge_device->mdb_ht);
+err_mdb_rhashtable_init:
        kfree(bridge_device);
        return ERR_PTR(err);
 }
@@ -290,7 +326,8 @@ mlxsw_sp_bridge_device_destroy(struct mlxsw_sp_bridge *bridge,
        if (bridge_device->vlan_enabled)
                bridge->vlan_enabled_exists = false;
        WARN_ON(!list_empty(&bridge_device->ports_list));
-       WARN_ON(!list_empty(&bridge_device->mids_list));
+       WARN_ON(!list_empty(&bridge_device->mdb_list));
+       rhashtable_destroy(&bridge_device->mdb_ht);
        kfree(bridge_device);
 }
 
@@ -642,6 +679,64 @@ err_port_bridge_vlan_flood_set:
        return err;
 }
 
+static int
+mlxsw_sp_bridge_vlans_flood_set(struct mlxsw_sp_bridge_vlan *bridge_vlan,
+                               enum mlxsw_sp_flood_type packet_type,
+                               bool member)
+{
+       struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan;
+       int err;
+
+       list_for_each_entry(mlxsw_sp_port_vlan, &bridge_vlan->port_vlan_list,
+                           bridge_vlan_node) {
+               u16 local_port = mlxsw_sp_port_vlan->mlxsw_sp_port->local_port;
+
+               err = mlxsw_sp_fid_flood_set(mlxsw_sp_port_vlan->fid,
+                                            packet_type, local_port, member);
+               if (err)
+                       goto err_fid_flood_set;
+       }
+
+       return 0;
+
+err_fid_flood_set:
+       list_for_each_entry_continue_reverse(mlxsw_sp_port_vlan,
+                                            &bridge_vlan->port_vlan_list,
+                                            bridge_vlan_node) {
+               u16 local_port = mlxsw_sp_port_vlan->mlxsw_sp_port->local_port;
+
+               mlxsw_sp_fid_flood_set(mlxsw_sp_port_vlan->fid, packet_type,
+                                      local_port, !member);
+       }
+
+       return err;
+}
+
+static int
+mlxsw_sp_bridge_ports_flood_table_set(struct mlxsw_sp_bridge_port *bridge_port,
+                                     enum mlxsw_sp_flood_type packet_type,
+                                     bool member)
+{
+       struct mlxsw_sp_bridge_vlan *bridge_vlan;
+       int err;
+
+       list_for_each_entry(bridge_vlan, &bridge_port->vlans_list, list) {
+               err = mlxsw_sp_bridge_vlans_flood_set(bridge_vlan, packet_type,
+                                                     member);
+               if (err)
+                       goto err_bridge_vlans_flood_set;
+       }
+
+       return 0;
+
+err_bridge_vlans_flood_set:
+       list_for_each_entry_continue_reverse(bridge_vlan,
+                                            &bridge_port->vlans_list, list)
+               mlxsw_sp_bridge_vlans_flood_set(bridge_vlan, packet_type,
+                                               !member);
+       return err;
+}
+
 static int
 mlxsw_sp_port_bridge_vlan_learning_set(struct mlxsw_sp_port *mlxsw_sp_port,
                                       struct mlxsw_sp_bridge_vlan *bridge_vlan,
@@ -813,6 +908,9 @@ static int mlxsw_sp_port_attr_mrouter_set(struct mlxsw_sp_port *mlxsw_sp_port,
        if (!bridge_port)
                return 0;
 
+       mlxsw_sp_port_mrouter_update_mdb(mlxsw_sp_port, bridge_port,
+                                        is_port_mrouter);
+
        if (!bridge_port->bridge_device->multicast_enabled)
                goto out;
 
@@ -822,8 +920,6 @@ static int mlxsw_sp_port_attr_mrouter_set(struct mlxsw_sp_port *mlxsw_sp_port,
        if (err)
                return err;
 
-       mlxsw_sp_port_mrouter_update_mdb(mlxsw_sp_port, bridge_port,
-                                        is_port_mrouter);
 out:
        bridge_port->mrouter = is_port_mrouter;
        return 0;
@@ -842,6 +938,7 @@ static int mlxsw_sp_port_mc_disabled_set(struct mlxsw_sp_port *mlxsw_sp_port,
                                         struct net_device *orig_dev,
                                         bool mc_disabled)
 {
+       enum mlxsw_sp_flood_type packet_type = MLXSW_SP_FLOOD_TYPE_MC;
        struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
        struct mlxsw_sp_bridge_device *bridge_device;
        struct mlxsw_sp_bridge_port *bridge_port;
@@ -854,43 +951,184 @@ static int mlxsw_sp_port_mc_disabled_set(struct mlxsw_sp_port *mlxsw_sp_port,
        if (!bridge_device)
                return 0;
 
-       if (bridge_device->multicast_enabled != !mc_disabled) {
-               bridge_device->multicast_enabled = !mc_disabled;
-               mlxsw_sp_bridge_mdb_mc_enable_sync(mlxsw_sp_port,
-                                                  bridge_device);
-       }
+       if (bridge_device->multicast_enabled == !mc_disabled)
+               return 0;
+
+       bridge_device->multicast_enabled = !mc_disabled;
+       err = mlxsw_sp_bridge_mdb_mc_enable_sync(mlxsw_sp, bridge_device,
+                                                !mc_disabled);
+       if (err)
+               goto err_mc_enable_sync;
 
        list_for_each_entry(bridge_port, &bridge_device->ports_list, list) {
-               enum mlxsw_sp_flood_type packet_type = MLXSW_SP_FLOOD_TYPE_MC;
                bool member = mlxsw_sp_mc_flood(bridge_port);
 
-               err = mlxsw_sp_bridge_port_flood_table_set(mlxsw_sp_port,
-                                                          bridge_port,
-                                                          packet_type, member);
+               err = mlxsw_sp_bridge_ports_flood_table_set(bridge_port,
+                                                           packet_type,
+                                                           member);
                if (err)
-                       return err;
+                       goto err_flood_table_set;
        }
 
-       bridge_device->multicast_enabled = !mc_disabled;
-
        return 0;
+
+err_flood_table_set:
+       list_for_each_entry_continue_reverse(bridge_port,
+                                            &bridge_device->ports_list, list) {
+               bool member = mlxsw_sp_mc_flood(bridge_port);
+
+               mlxsw_sp_bridge_ports_flood_table_set(bridge_port, packet_type,
+                                                     !member);
+       }
+       mlxsw_sp_bridge_mdb_mc_enable_sync(mlxsw_sp, bridge_device,
+                                          mc_disabled);
+err_mc_enable_sync:
+       bridge_device->multicast_enabled = mc_disabled;
+       return err;
+}
+
+static struct mlxsw_sp_mdb_entry_port *
+mlxsw_sp_mdb_entry_port_lookup(struct mlxsw_sp_mdb_entry *mdb_entry,
+                              u16 local_port)
+{
+       struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
+
+       list_for_each_entry(mdb_entry_port, &mdb_entry->ports_list, list) {
+               if (mdb_entry_port->local_port == local_port)
+                       return mdb_entry_port;
+       }
+
+       return NULL;
 }
 
-static int mlxsw_sp_smid_router_port_set(struct mlxsw_sp *mlxsw_sp,
-                                        u16 mid_idx, bool add)
+static struct mlxsw_sp_mdb_entry_port *
+mlxsw_sp_mdb_entry_port_get(struct mlxsw_sp *mlxsw_sp,
+                           struct mlxsw_sp_mdb_entry *mdb_entry,
+                           u16 local_port)
 {
-       char *smid2_pl;
+       struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
        int err;
 
-       smid2_pl = kmalloc(MLXSW_REG_SMID2_LEN, GFP_KERNEL);
-       if (!smid2_pl)
-               return -ENOMEM;
+       mdb_entry_port = mlxsw_sp_mdb_entry_port_lookup(mdb_entry, local_port);
+       if (mdb_entry_port) {
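+               /* A port referenced only as an mrouter is not accounted
+                * in 'ports_count'; account for it once the first regular
+                * reference is taken.
+                */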
+               if (mdb_entry_port->mrouter &&
+                   refcount_read(&mdb_entry_port->refcount) == 1)
+                       mdb_entry->ports_count++;
 
-       mlxsw_reg_smid2_pack(smid2_pl, mid_idx,
-                            mlxsw_sp_router_port(mlxsw_sp), add, false, 0);
-       err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(smid2), smid2_pl);
-       kfree(smid2_pl);
-       return err;
+               refcount_inc(&mdb_entry_port->refcount);
+               return mdb_entry_port;
+       }
+
+       err = mlxsw_sp_pgt_entry_port_set(mlxsw_sp, mdb_entry->mid,
+                                         mdb_entry->key.fid, local_port, true);
+       if (err)
+               return ERR_PTR(err);
+
+       mdb_entry_port = kzalloc(sizeof(*mdb_entry_port), GFP_KERNEL);
+       if (!mdb_entry_port) {
+               err = -ENOMEM;
+               goto err_mdb_entry_port_alloc;
+       }
+
+       mdb_entry_port->local_port = local_port;
+       refcount_set(&mdb_entry_port->refcount, 1);
+       list_add(&mdb_entry_port->list, &mdb_entry->ports_list);
+       mdb_entry->ports_count++;
+
+       return mdb_entry_port;
+
+err_mdb_entry_port_alloc:
+       mlxsw_sp_pgt_entry_port_set(mlxsw_sp, mdb_entry->mid,
+                                   mdb_entry->key.fid, local_port, false);
+       return ERR_PTR(err);
+}
+
+static void
+mlxsw_sp_mdb_entry_port_put(struct mlxsw_sp *mlxsw_sp,
+                           struct mlxsw_sp_mdb_entry *mdb_entry,
+                           u16 local_port, bool force)
+{
+       struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
+
+       mdb_entry_port = mlxsw_sp_mdb_entry_port_lookup(mdb_entry, local_port);
+       if (!mdb_entry_port)
+               return;
+
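+       /* With 'force', remove the port regardless of its reference count. */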
+       if (!force && !refcount_dec_and_test(&mdb_entry_port->refcount)) {
+               if (mdb_entry_port->mrouter &&
+                   refcount_read(&mdb_entry_port->refcount) == 1)
+                       mdb_entry->ports_count--;
+               return;
+       }
+
+       mdb_entry->ports_count--;
+       list_del(&mdb_entry_port->list);
+       kfree(mdb_entry_port);
+       mlxsw_sp_pgt_entry_port_set(mlxsw_sp, mdb_entry->mid,
+                                   mdb_entry->key.fid, local_port, false);
+}
+
+static __always_unused struct mlxsw_sp_mdb_entry_port *
+mlxsw_sp_mdb_entry_mrouter_port_get(struct mlxsw_sp *mlxsw_sp,
+                                   struct mlxsw_sp_mdb_entry *mdb_entry,
+                                   u16 local_port)
+{
+       struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
+       int err;
+
+       mdb_entry_port = mlxsw_sp_mdb_entry_port_lookup(mdb_entry, local_port);
+       if (mdb_entry_port) {
+               if (!mdb_entry_port->mrouter)
+                       refcount_inc(&mdb_entry_port->refcount);
+               return mdb_entry_port;
+       }
+
+       err = mlxsw_sp_pgt_entry_port_set(mlxsw_sp, mdb_entry->mid,
+                                         mdb_entry->key.fid, local_port, true);
+       if (err)
+               return ERR_PTR(err);
+
+       mdb_entry_port = kzalloc(sizeof(*mdb_entry_port), GFP_KERNEL);
+       if (!mdb_entry_port) {
+               err = -ENOMEM;
+               goto err_mdb_entry_port_alloc;
+       }
+
+       mdb_entry_port->local_port = local_port;
+       refcount_set(&mdb_entry_port->refcount, 1);
+       mdb_entry_port->mrouter = true;
+       list_add(&mdb_entry_port->list, &mdb_entry->ports_list);
+
+       return mdb_entry_port;
+
+err_mdb_entry_port_alloc:
+       mlxsw_sp_pgt_entry_port_set(mlxsw_sp, mdb_entry->mid,
+                                   mdb_entry->key.fid, local_port, false);
+       return ERR_PTR(err);
+}
+
+static __always_unused void
+mlxsw_sp_mdb_entry_mrouter_port_put(struct mlxsw_sp *mlxsw_sp,
+                                   struct mlxsw_sp_mdb_entry *mdb_entry,
+                                   u16 local_port)
+{
+       struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
+
+       mdb_entry_port = mlxsw_sp_mdb_entry_port_lookup(mdb_entry, local_port);
+       if (!mdb_entry_port)
+               return;
+
+       if (!mdb_entry_port->mrouter)
+               return;
+
+       mdb_entry_port->mrouter = false;
+       if (!refcount_dec_and_test(&mdb_entry_port->refcount))
+               return;
+
+       list_del(&mdb_entry_port->list);
+       kfree(mdb_entry_port);
+       mlxsw_sp_pgt_entry_port_set(mlxsw_sp, mdb_entry->mid,
+                                   mdb_entry->key.fid, local_port, false);
 }
 
 static void
@@ -898,10 +1136,17 @@ mlxsw_sp_bridge_mrouter_update_mdb(struct mlxsw_sp *mlxsw_sp,
                                   struct mlxsw_sp_bridge_device *bridge_device,
                                   bool add)
 {
-       struct mlxsw_sp_mid *mid;
+       u16 local_port = mlxsw_sp_router_port(mlxsw_sp);
+       struct mlxsw_sp_mdb_entry *mdb_entry;
 
-       list_for_each_entry(mid, &bridge_device->mids_list, list)
-               mlxsw_sp_smid_router_port_set(mlxsw_sp, mid->mid, add);
+       list_for_each_entry(mdb_entry, &bridge_device->mdb_list, list) {
+               if (add)
+                       mlxsw_sp_mdb_entry_mrouter_port_get(mlxsw_sp, mdb_entry,
+                                                           local_port);
+               else
+                       mlxsw_sp_mdb_entry_mrouter_port_put(mlxsw_sp, mdb_entry,
+                                                           local_port);
+       }
 }
 
 static int
@@ -1127,14 +1372,13 @@ mlxsw_sp_port_vlan_bridge_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan)
        struct mlxsw_sp_bridge_vlan *bridge_vlan;
        struct mlxsw_sp_bridge_port *bridge_port;
        u16 vid = mlxsw_sp_port_vlan->vid;
-       bool last_port, last_vlan;
+       bool last_port;
 
        if (WARN_ON(mlxsw_sp_fid_type(fid) != MLXSW_SP_FID_TYPE_8021Q &&
                    mlxsw_sp_fid_type(fid) != MLXSW_SP_FID_TYPE_8021D))
                return;
 
        bridge_port = mlxsw_sp_port_vlan->bridge_port;
-       last_vlan = list_is_singular(&bridge_port->vlans_list);
        bridge_vlan = mlxsw_sp_bridge_vlan_find(bridge_port, vid);
        last_port = list_is_singular(&bridge_vlan->port_vlan_list);
 
@@ -1146,8 +1390,9 @@ mlxsw_sp_port_vlan_bridge_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan)
                mlxsw_sp_bridge_port_fdb_flush(mlxsw_sp_port->mlxsw_sp,
                                               bridge_port,
                                               mlxsw_sp_fid_index(fid));
-       if (last_vlan)
-               mlxsw_sp_bridge_port_mdb_flush(mlxsw_sp_port, bridge_port);
+
+       mlxsw_sp_bridge_port_mdb_flush(mlxsw_sp_port, bridge_port,
+                                      mlxsw_sp_fid_index(fid));
 
        mlxsw_sp_port_vlan_fid_leave(mlxsw_sp_port_vlan);
 
@@ -1436,7 +1681,8 @@ static int mlxsw_sp_port_fdb_tunnel_uc_op(struct mlxsw_sp *mlxsw_sp,
 }
 
 static int __mlxsw_sp_port_fdb_uc_op(struct mlxsw_sp *mlxsw_sp, u16 local_port,
-                                    const char *mac, u16 fid, bool adding,
+                                    const char *mac, u16 fid, u16 vid,
+                                    bool adding,
                                     enum mlxsw_reg_sfd_rec_action action,
                                     enum mlxsw_reg_sfd_rec_policy policy)
 {
@@ -1449,7 +1695,8 @@ static int __mlxsw_sp_port_fdb_uc_op(struct mlxsw_sp *mlxsw_sp, u16 local_port,
                return -ENOMEM;
 
        mlxsw_reg_sfd_pack(sfd_pl, mlxsw_sp_sfd_op(adding), 0);
-       mlxsw_reg_sfd_uc_pack(sfd_pl, 0, policy, mac, fid, action, local_port);
+       mlxsw_reg_sfd_uc_pack(sfd_pl, 0, policy, mac, fid, vid, action,
+                             local_port);
        num_rec = mlxsw_reg_sfd_num_rec_get(sfd_pl);
        err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfd), sfd_pl);
        if (err)
@@ -1464,18 +1711,18 @@ out:
 }
 
 static int mlxsw_sp_port_fdb_uc_op(struct mlxsw_sp *mlxsw_sp, u16 local_port,
-                                  const char *mac, u16 fid, bool adding,
-                                  bool dynamic)
+                                  const char *mac, u16 fid, u16 vid,
+                                  bool adding, bool dynamic)
 {
-       return __mlxsw_sp_port_fdb_uc_op(mlxsw_sp, local_port, mac, fid, adding,
-                                        MLXSW_REG_SFD_REC_ACTION_NOP,
+       return __mlxsw_sp_port_fdb_uc_op(mlxsw_sp, local_port, mac, fid, vid,
+                                        adding, MLXSW_REG_SFD_REC_ACTION_NOP,
                                         mlxsw_sp_sfd_rec_policy(dynamic));
 }
 
 int mlxsw_sp_rif_fdb_op(struct mlxsw_sp *mlxsw_sp, const char *mac, u16 fid,
                        bool adding)
 {
-       return __mlxsw_sp_port_fdb_uc_op(mlxsw_sp, 0, mac, fid, adding,
+       return __mlxsw_sp_port_fdb_uc_op(mlxsw_sp, 0, mac, fid, 0, adding,
                                         MLXSW_REG_SFD_REC_ACTION_FORWARD_IP_ROUTER,
                                         MLXSW_REG_SFD_REC_POLICY_STATIC_ENTRY);
 }
@@ -1537,7 +1784,7 @@ mlxsw_sp_port_fdb_set(struct mlxsw_sp_port *mlxsw_sp_port,
        if (!bridge_port->lagged)
                return mlxsw_sp_port_fdb_uc_op(mlxsw_sp,
                                               bridge_port->system_port,
-                                              fdb_info->addr, fid_index,
+                                              fdb_info->addr, fid_index, vid,
                                               adding, false);
        else
                return mlxsw_sp_port_fdb_uc_lag_op(mlxsw_sp,
@@ -1546,8 +1793,9 @@ mlxsw_sp_port_fdb_set(struct mlxsw_sp_port *mlxsw_sp_port,
                                                   vid, adding, false);
 }
 
-static int mlxsw_sp_port_mdb_op(struct mlxsw_sp *mlxsw_sp, const char *addr,
-                               u16 fid, u16 mid_idx, bool adding)
+static int mlxsw_sp_mdb_entry_write(struct mlxsw_sp *mlxsw_sp,
+                                   const struct mlxsw_sp_mdb_entry *mdb_entry,
+                                   bool adding)
 {
        char *sfd_pl;
        u8 num_rec;
@@ -1558,8 +1806,9 @@ static int mlxsw_sp_port_mdb_op(struct mlxsw_sp *mlxsw_sp, const char *addr,
                return -ENOMEM;
 
        mlxsw_reg_sfd_pack(sfd_pl, mlxsw_sp_sfd_op(adding), 0);
-       mlxsw_reg_sfd_mc_pack(sfd_pl, 0, addr, fid,
-                             MLXSW_REG_SFD_REC_ACTION_NOP, mid_idx);
+       mlxsw_reg_sfd_mc_pack(sfd_pl, 0, mdb_entry->key.addr,
+                             mdb_entry->key.fid, MLXSW_REG_SFD_REC_ACTION_NOP,
+                             mdb_entry->mid);
        num_rec = mlxsw_reg_sfd_num_rec_get(sfd_pl);
        err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfd), sfd_pl);
        if (err)
@@ -1573,80 +1822,17 @@ out:
        return err;
 }
 
-static int mlxsw_sp_port_smid_full_entry(struct mlxsw_sp *mlxsw_sp, u16 mid_idx,
-                                        long *ports_bitmap,
-                                        bool set_router_port)
-{
-       char *smid2_pl;
-       int err, i;
-
-       smid2_pl = kmalloc(MLXSW_REG_SMID2_LEN, GFP_KERNEL);
-       if (!smid2_pl)
-               return -ENOMEM;
-
-       mlxsw_reg_smid2_pack(smid2_pl, mid_idx, 0, false, false, 0);
-       for (i = 1; i < mlxsw_core_max_ports(mlxsw_sp->core); i++) {
-               if (mlxsw_sp->ports[i])
-                       mlxsw_reg_smid2_port_mask_set(smid2_pl, i, 1);
-       }
-
-       mlxsw_reg_smid2_port_mask_set(smid2_pl,
-                                     mlxsw_sp_router_port(mlxsw_sp), 1);
-
-       for_each_set_bit(i, ports_bitmap, mlxsw_core_max_ports(mlxsw_sp->core))
-               mlxsw_reg_smid2_port_set(smid2_pl, i, 1);
-
-       mlxsw_reg_smid2_port_set(smid2_pl, mlxsw_sp_router_port(mlxsw_sp),
-                                set_router_port);
-
-       err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(smid2), smid2_pl);
-       kfree(smid2_pl);
-       return err;
-}
-
-static int mlxsw_sp_port_smid_set(struct mlxsw_sp_port *mlxsw_sp_port,
-                                 u16 mid_idx, bool add)
-{
-       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
-       char *smid2_pl;
-       int err;
-
-       smid2_pl = kmalloc(MLXSW_REG_SMID2_LEN, GFP_KERNEL);
-       if (!smid2_pl)
-               return -ENOMEM;
-
-       mlxsw_reg_smid2_pack(smid2_pl, mid_idx, mlxsw_sp_port->local_port, add,
-                            false, 0);
-       err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(smid2), smid2_pl);
-       kfree(smid2_pl);
-       return err;
-}
-
-static struct
-mlxsw_sp_mid *__mlxsw_sp_mc_get(struct mlxsw_sp_bridge_device *bridge_device,
-                               const unsigned char *addr,
-                               u16 fid)
-{
-       struct mlxsw_sp_mid *mid;
-
-       list_for_each_entry(mid, &bridge_device->mids_list, list) {
-               if (ether_addr_equal(mid->addr, addr) && mid->fid == fid)
-                       return mid;
-       }
-       return NULL;
-}
-
 static void
 mlxsw_sp_bridge_port_get_ports_bitmap(struct mlxsw_sp *mlxsw_sp,
                                      struct mlxsw_sp_bridge_port *bridge_port,
-                                     unsigned long *ports_bitmap)
+                                     struct mlxsw_sp_ports_bitmap *ports_bm)
 {
        struct mlxsw_sp_port *mlxsw_sp_port;
        u64 max_lag_members, i;
        int lag_id;
 
        if (!bridge_port->lagged) {
-               set_bit(bridge_port->system_port, ports_bitmap);
+               set_bit(bridge_port->system_port, ports_bm->bitmap);
        } else {
                max_lag_members = MLXSW_CORE_RES_GET(mlxsw_sp->core,
                                                     MAX_LAG_MEMBERS);
@@ -1656,13 +1842,13 @@ mlxsw_sp_bridge_port_get_ports_bitmap(struct mlxsw_sp *mlxsw_sp,
                                                                 lag_id, i);
                        if (mlxsw_sp_port)
                                set_bit(mlxsw_sp_port->local_port,
-                                       ports_bitmap);
+                                       ports_bm->bitmap);
                }
        }
 }
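
The bitmap refactor above replaces bare unsigned long pointers with struct
mlxsw_sp_ports_bitmap, which carries its own length, so consumers such as
mlxsw_sp_mc_mdb_mrouters_add() below can iterate without re-querying the core
for the maximum port count. The init/fini helpers used later in this file are
introduced elsewhere in the patch; a minimal sketch of what they plausibly
look like, assuming bitmap_zalloc()-backed storage:

struct mlxsw_sp_ports_bitmap {
        unsigned long *bitmap;
        unsigned int nbits;
};

static int mlxsw_sp_port_bitmap_init(struct mlxsw_sp *mlxsw_sp,
                                     struct mlxsw_sp_ports_bitmap *ports_bm)
{
        unsigned int nbits = mlxsw_core_max_ports(mlxsw_sp->core);

        ports_bm->nbits = nbits;
        ports_bm->bitmap = bitmap_zalloc(nbits, GFP_KERNEL);
        return ports_bm->bitmap ? 0 : -ENOMEM;
}

static void mlxsw_sp_port_bitmap_fini(struct mlxsw_sp_ports_bitmap *ports_bm)
{
        bitmap_free(ports_bm->bitmap);
}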
 
 static void
-mlxsw_sp_mc_get_mrouters_bitmap(unsigned long *flood_bitmap,
+mlxsw_sp_mc_get_mrouters_bitmap(struct mlxsw_sp_ports_bitmap *flood_bm,
                                struct mlxsw_sp_bridge_device *bridge_device,
                                struct mlxsw_sp *mlxsw_sp)
 {
@@ -1672,116 +1858,226 @@ mlxsw_sp_mc_get_mrouters_bitmap(unsigned long *flood_bitmap,
                if (bridge_port->mrouter) {
                        mlxsw_sp_bridge_port_get_ports_bitmap(mlxsw_sp,
                                                              bridge_port,
-                                                             flood_bitmap);
+                                                             flood_bm);
                }
        }
 }
 
-static bool
-mlxsw_sp_mc_write_mdb_entry(struct mlxsw_sp *mlxsw_sp,
-                           struct mlxsw_sp_mid *mid,
-                           struct mlxsw_sp_bridge_device *bridge_device)
+static int mlxsw_sp_mc_mdb_mrouters_add(struct mlxsw_sp *mlxsw_sp,
+                                       struct mlxsw_sp_ports_bitmap *ports_bm,
+                                       struct mlxsw_sp_mdb_entry *mdb_entry)
+{
+       struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
+       unsigned int nbits = ports_bm->nbits;
+       int i;
+
+       for_each_set_bit(i, ports_bm->bitmap, nbits) {
+               mdb_entry_port = mlxsw_sp_mdb_entry_mrouter_port_get(mlxsw_sp,
+                                                                    mdb_entry,
+                                                                    i);
+               if (IS_ERR(mdb_entry_port)) {
+                       nbits = i;
+                       goto err_mrouter_port_get;
+               }
+       }
+
+       return 0;
+
+err_mrouter_port_get:
+       for_each_set_bit(i, ports_bm->bitmap, nbits)
+               mlxsw_sp_mdb_entry_mrouter_port_put(mlxsw_sp, mdb_entry, i);
+       return PTR_ERR(mdb_entry_port);
+}
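
Note the unwind trick in mlxsw_sp_mc_mdb_mrouters_add() above: on failure,
nbits is clamped to the index of the port that failed, so re-running the same
for_each_set_bit() walk releases exactly the references already taken and
skips the failed one. The idiom in schematic form (res_get()/res_put() stand
in for the mrouter-port get/put pair):

        for_each_set_bit(i, bm->bitmap, nbits) {
                res = res_get(i);
                if (IS_ERR(res)) {
                        nbits = i;      /* cleanup stops before bit i */
                        goto err;
                }
        }
        return 0;
err:
        for_each_set_bit(i, bm->bitmap, nbits)
                res_put(i);
        return PTR_ERR(res);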
+
+static void mlxsw_sp_mc_mdb_mrouters_del(struct mlxsw_sp *mlxsw_sp,
+                                        struct mlxsw_sp_ports_bitmap *ports_bm,
+                                        struct mlxsw_sp_mdb_entry *mdb_entry)
+{
+       int i;
+
+       for_each_set_bit(i, ports_bm->bitmap, ports_bm->nbits)
+               mlxsw_sp_mdb_entry_mrouter_port_put(mlxsw_sp, mdb_entry, i);
+}
+
+static int
+mlxsw_sp_mc_mdb_mrouters_set(struct mlxsw_sp *mlxsw_sp,
+                            struct mlxsw_sp_bridge_device *bridge_device,
+                            struct mlxsw_sp_mdb_entry *mdb_entry, bool add)
 {
-       long *flood_bitmap;
-       int num_of_ports;
-       u16 mid_idx;
+       struct mlxsw_sp_ports_bitmap ports_bm;
        int err;
 
-       mid_idx = find_first_zero_bit(mlxsw_sp->bridge->mids_bitmap,
-                                     MLXSW_SP_MID_MAX);
-       if (mid_idx == MLXSW_SP_MID_MAX)
-               return false;
+       err = mlxsw_sp_port_bitmap_init(mlxsw_sp, &ports_bm);
+       if (err)
+               return err;
 
-       num_of_ports = mlxsw_core_max_ports(mlxsw_sp->core);
-       flood_bitmap = bitmap_alloc(num_of_ports, GFP_KERNEL);
-       if (!flood_bitmap)
-               return false;
+       mlxsw_sp_mc_get_mrouters_bitmap(&ports_bm, bridge_device, mlxsw_sp);
+
+       if (add)
+               err = mlxsw_sp_mc_mdb_mrouters_add(mlxsw_sp, &ports_bm,
+                                                  mdb_entry);
+       else
+               mlxsw_sp_mc_mdb_mrouters_del(mlxsw_sp, &ports_bm, mdb_entry);
+
+       mlxsw_sp_port_bitmap_fini(&ports_bm);
+       return err;
+}
 
-       bitmap_copy(flood_bitmap, mid->ports_in_mid, num_of_ports);
-       mlxsw_sp_mc_get_mrouters_bitmap(flood_bitmap, bridge_device, mlxsw_sp);
+static struct mlxsw_sp_mdb_entry *
+mlxsw_sp_mc_mdb_entry_init(struct mlxsw_sp *mlxsw_sp,
+                          struct mlxsw_sp_bridge_device *bridge_device,
+                          const unsigned char *addr, u16 fid, u16 local_port)
+{
+       struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
+       struct mlxsw_sp_mdb_entry *mdb_entry;
+       int err;
+
+       mdb_entry = kzalloc(sizeof(*mdb_entry), GFP_KERNEL);
+       if (!mdb_entry)
+               return ERR_PTR(-ENOMEM);
 
-       mid->mid = mid_idx;
-       err = mlxsw_sp_port_smid_full_entry(mlxsw_sp, mid_idx, flood_bitmap,
-                                           bridge_device->mrouter);
-       bitmap_free(flood_bitmap);
+       ether_addr_copy(mdb_entry->key.addr, addr);
+       mdb_entry->key.fid = fid;
+       err = mlxsw_sp_pgt_mid_alloc(mlxsw_sp, &mdb_entry->mid);
        if (err)
-               return false;
+               goto err_pgt_mid_alloc;
+
+       INIT_LIST_HEAD(&mdb_entry->ports_list);
 
-       err = mlxsw_sp_port_mdb_op(mlxsw_sp, mid->addr, mid->fid, mid_idx,
-                                  true);
+       err = mlxsw_sp_mc_mdb_mrouters_set(mlxsw_sp, bridge_device, mdb_entry,
+                                          true);
        if (err)
-               return false;
+               goto err_mdb_mrouters_set;
 
-       set_bit(mid_idx, mlxsw_sp->bridge->mids_bitmap);
-       mid->in_hw = true;
-       return true;
+       mdb_entry_port = mlxsw_sp_mdb_entry_port_get(mlxsw_sp, mdb_entry,
+                                                    local_port);
+       if (IS_ERR(mdb_entry_port)) {
+               err = PTR_ERR(mdb_entry_port);
+               goto err_mdb_entry_port_get;
+       }
+
+       if (bridge_device->multicast_enabled) {
+               err = mlxsw_sp_mdb_entry_write(mlxsw_sp, mdb_entry, true);
+               if (err)
+                       goto err_mdb_entry_write;
+       }
+
+       err = rhashtable_insert_fast(&bridge_device->mdb_ht,
+                                    &mdb_entry->ht_node,
+                                    mlxsw_sp_mdb_ht_params);
+       if (err)
+               goto err_rhashtable_insert;
+
+       list_add_tail(&mdb_entry->list, &bridge_device->mdb_list);
+
+       return mdb_entry;
+
+err_rhashtable_insert:
+       if (bridge_device->multicast_enabled)
+               mlxsw_sp_mdb_entry_write(mlxsw_sp, mdb_entry, false);
+err_mdb_entry_write:
+       mlxsw_sp_mdb_entry_port_put(mlxsw_sp, mdb_entry, local_port, false);
+err_mdb_entry_port_get:
+       mlxsw_sp_mc_mdb_mrouters_set(mlxsw_sp, bridge_device, mdb_entry, false);
+err_mdb_mrouters_set:
+       mlxsw_sp_pgt_mid_free(mlxsw_sp, mdb_entry->mid);
+err_pgt_mid_alloc:
+       kfree(mdb_entry);
+       return ERR_PTR(err);
 }
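
mlxsw_sp_mc_mdb_entry_init() and mlxsw_sp_mc_mdb_entry_fini() below are
strict mirrors: each construction step has an err_* label that unwinds only
what was acquired before it, and the fini path repeats the same teardown for
fully constructed entries. Schematically:

        init: kzalloc -> pgt_mid_alloc -> mrouters_set(true) -> port_get
              -> mdb_entry_write(true) (if mc enabled) -> rhashtable_insert
              -> list_add_tail
        fini: list_del -> rhashtable_remove -> mdb_entry_write(false)
              -> port_put -> mrouters_set(false) -> pgt_mid_free -> kfree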
 
-static int mlxsw_sp_mc_remove_mdb_entry(struct mlxsw_sp *mlxsw_sp,
-                                       struct mlxsw_sp_mid *mid)
+static void
+mlxsw_sp_mc_mdb_entry_fini(struct mlxsw_sp *mlxsw_sp,
+                          struct mlxsw_sp_mdb_entry *mdb_entry,
+                          struct mlxsw_sp_bridge_device *bridge_device,
+                          u16 local_port, bool force)
 {
-       if (!mid->in_hw)
-               return 0;
-
-       clear_bit(mid->mid, mlxsw_sp->bridge->mids_bitmap);
-       mid->in_hw = false;
-       return mlxsw_sp_port_mdb_op(mlxsw_sp, mid->addr, mid->fid, mid->mid,
-                                   false);
+       list_del(&mdb_entry->list);
+       rhashtable_remove_fast(&bridge_device->mdb_ht, &mdb_entry->ht_node,
+                              mlxsw_sp_mdb_ht_params);
+       if (bridge_device->multicast_enabled)
+               mlxsw_sp_mdb_entry_write(mlxsw_sp, mdb_entry, false);
+       mlxsw_sp_mdb_entry_port_put(mlxsw_sp, mdb_entry, local_port, force);
+       mlxsw_sp_mc_mdb_mrouters_set(mlxsw_sp, bridge_device, mdb_entry, false);
+       WARN_ON(!list_empty(&mdb_entry->ports_list));
+       mlxsw_sp_pgt_mid_free(mlxsw_sp, mdb_entry->mid);
+       kfree(mdb_entry);
 }
 
-static struct
-mlxsw_sp_mid *__mlxsw_sp_mc_alloc(struct mlxsw_sp *mlxsw_sp,
-                                 struct mlxsw_sp_bridge_device *bridge_device,
-                                 const unsigned char *addr,
-                                 u16 fid)
+static struct mlxsw_sp_mdb_entry *
+mlxsw_sp_mc_mdb_entry_get(struct mlxsw_sp *mlxsw_sp,
+                         struct mlxsw_sp_bridge_device *bridge_device,
+                         const unsigned char *addr, u16 fid, u16 local_port)
 {
-       struct mlxsw_sp_mid *mid;
+       struct mlxsw_sp_mdb_entry_key key = {};
+       struct mlxsw_sp_mdb_entry *mdb_entry;
 
-       mid = kzalloc(sizeof(*mid), GFP_KERNEL);
-       if (!mid)
-               return NULL;
+       ether_addr_copy(key.addr, addr);
+       key.fid = fid;
+       mdb_entry = rhashtable_lookup_fast(&bridge_device->mdb_ht, &key,
+                                          mlxsw_sp_mdb_ht_params);
+       if (mdb_entry) {
+               struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
 
-       mid->ports_in_mid = bitmap_zalloc(mlxsw_core_max_ports(mlxsw_sp->core),
-                                         GFP_KERNEL);
-       if (!mid->ports_in_mid)
-               goto err_ports_in_mid_alloc;
+               mdb_entry_port = mlxsw_sp_mdb_entry_port_get(mlxsw_sp,
+                                                            mdb_entry,
+                                                            local_port);
+               if (IS_ERR(mdb_entry_port))
+                       return ERR_CAST(mdb_entry_port);
 
-       ether_addr_copy(mid->addr, addr);
-       mid->fid = fid;
-       mid->in_hw = false;
+               return mdb_entry;
+       }
 
-       if (!bridge_device->multicast_enabled)
-               goto out;
+       return mlxsw_sp_mc_mdb_entry_init(mlxsw_sp, bridge_device, addr, fid,
+                                         local_port);
+}
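
MDB entries are now keyed by (MAC, FID) in an rhashtable. Zero-initializing
the on-stack key ("key = {}") matters because rhashtable hashes the key as
raw bytes, so any structure padding must be cleared. mlxsw_sp_mdb_ht_params
is defined elsewhere in the patch; under the usual conventions it would be
roughly (a sketch, not the actual definition):

static const struct rhashtable_params mlxsw_sp_mdb_ht_params = {
        .key_offset = offsetof(struct mlxsw_sp_mdb_entry, key),
        .head_offset = offsetof(struct mlxsw_sp_mdb_entry, ht_node),
        .key_len = sizeof(struct mlxsw_sp_mdb_entry_key),
};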
 
-       if (!mlxsw_sp_mc_write_mdb_entry(mlxsw_sp, mid, bridge_device))
-               goto err_write_mdb_entry;
+static bool
+mlxsw_sp_mc_mdb_entry_remove(struct mlxsw_sp_mdb_entry *mdb_entry,
+                            struct mlxsw_sp_mdb_entry_port *removed_entry_port,
+                            bool force)
+{
+       if (mdb_entry->ports_count > 1)
+               return false;
 
-out:
-       list_add_tail(&mid->list, &bridge_device->mids_list);
-       return mid;
+       if (force)
+               return true;
 
-err_write_mdb_entry:
-       bitmap_free(mid->ports_in_mid);
-err_ports_in_mid_alloc:
-       kfree(mid);
-       return NULL;
+       if (!removed_entry_port->mrouter &&
+           refcount_read(&removed_entry_port->refcount) > 1)
+               return false;
+
+       if (removed_entry_port->mrouter &&
+           refcount_read(&removed_entry_port->refcount) > 2)
+               return false;
+
+       return true;
 }
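
The asymmetric thresholds in mlxsw_sp_mc_mdb_entry_remove() encode the extra
reference that an mrouter port presumably holds on the same per-port object
via mlxsw_sp_mdb_entry_mrouter_port_get(). Summarized as comments:

        /* plain port:   refcount 1 == only this membership -> last user */
        /* mrouter port: refcount 2 == membership + mrouter ref -> last user */
        /* force: skip the refcount checks, but still only for the entry's
         * last remaining port (ports_count must not exceed 1)
         */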
 
-static int mlxsw_sp_port_remove_from_mid(struct mlxsw_sp_port *mlxsw_sp_port,
-                                        struct mlxsw_sp_mid *mid)
+static void
+mlxsw_sp_mc_mdb_entry_put(struct mlxsw_sp *mlxsw_sp,
+                         struct mlxsw_sp_bridge_device *bridge_device,
+                         struct mlxsw_sp_mdb_entry *mdb_entry, u16 local_port,
+                         bool force)
 {
-       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
-       int err = 0;
+       struct mlxsw_sp_mdb_entry_port *mdb_entry_port;
 
-       clear_bit(mlxsw_sp_port->local_port, mid->ports_in_mid);
-       if (bitmap_empty(mid->ports_in_mid,
-                        mlxsw_core_max_ports(mlxsw_sp->core))) {
-               err = mlxsw_sp_mc_remove_mdb_entry(mlxsw_sp, mid);
-               list_del(&mid->list);
-               bitmap_free(mid->ports_in_mid);
-               kfree(mid);
-       }
-       return err;
+       mdb_entry_port = mlxsw_sp_mdb_entry_port_lookup(mdb_entry, local_port);
+       if (!mdb_entry_port)
+               return;
+
+       /* Avoid a temporary situation in which the MDB entry points to an empty
+        * PGT entry, as otherwise packets will be temporarily dropped instead
+        * of being flooded. Instead, in this situation, call
+        * mlxsw_sp_mc_mdb_entry_fini(), which first deletes the MDB entry and
+        * then releases the PGT entry.
+        */
+       if (mlxsw_sp_mc_mdb_entry_remove(mdb_entry, mdb_entry_port, force))
+               mlxsw_sp_mc_mdb_entry_fini(mlxsw_sp, mdb_entry, bridge_device,
+                                          local_port, force);
+       else
+               mlxsw_sp_mdb_entry_port_put(mlxsw_sp, mdb_entry, local_port,
+                                           force);
 }
 
 static int mlxsw_sp_port_mdb_add(struct mlxsw_sp_port *mlxsw_sp_port,
@@ -1790,12 +2086,10 @@ static int mlxsw_sp_port_mdb_add(struct mlxsw_sp_port *mlxsw_sp_port,
        struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
        struct net_device *orig_dev = mdb->obj.orig_dev;
        struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan;
-       struct net_device *dev = mlxsw_sp_port->dev;
        struct mlxsw_sp_bridge_device *bridge_device;
        struct mlxsw_sp_bridge_port *bridge_port;
-       struct mlxsw_sp_mid *mid;
+       struct mlxsw_sp_mdb_entry *mdb_entry;
        u16 fid_index;
-       int err = 0;
 
        bridge_port = mlxsw_sp_bridge_port_find(mlxsw_sp->bridge, orig_dev);
        if (!bridge_port)
@@ -1810,54 +2104,35 @@ static int mlxsw_sp_port_mdb_add(struct mlxsw_sp_port *mlxsw_sp_port,
 
        fid_index = mlxsw_sp_fid_index(mlxsw_sp_port_vlan->fid);
 
-       mid = __mlxsw_sp_mc_get(bridge_device, mdb->addr, fid_index);
-       if (!mid) {
-               mid = __mlxsw_sp_mc_alloc(mlxsw_sp, bridge_device, mdb->addr,
-                                         fid_index);
-               if (!mid) {
-                       netdev_err(dev, "Unable to allocate MC group\n");
-                       return -ENOMEM;
-               }
-       }
-       set_bit(mlxsw_sp_port->local_port, mid->ports_in_mid);
-
-       if (!bridge_device->multicast_enabled)
-               return 0;
-
-       if (bridge_port->mrouter)
-               return 0;
-
-       err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, true);
-       if (err) {
-               netdev_err(dev, "Unable to set SMID\n");
-               goto err_out;
-       }
+       mdb_entry = mlxsw_sp_mc_mdb_entry_get(mlxsw_sp, bridge_device,
+                                             mdb->addr, fid_index,
+                                             mlxsw_sp_port->local_port);
+       if (IS_ERR(mdb_entry))
+               return PTR_ERR(mdb_entry);
 
        return 0;
-
-err_out:
-       mlxsw_sp_port_remove_from_mid(mlxsw_sp_port, mid);
-       return err;
 }
 
-static void
-mlxsw_sp_bridge_mdb_mc_enable_sync(struct mlxsw_sp_port *mlxsw_sp_port,
-                                  struct mlxsw_sp_bridge_device
-                                  *bridge_device)
+static int
+mlxsw_sp_bridge_mdb_mc_enable_sync(struct mlxsw_sp *mlxsw_sp,
+                                  struct mlxsw_sp_bridge_device *bridge_device,
+                                  bool mc_enabled)
 {
-       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
-       struct mlxsw_sp_mid *mid;
-       bool mc_enabled;
-
-       mc_enabled = bridge_device->multicast_enabled;
+       struct mlxsw_sp_mdb_entry *mdb_entry;
+       int err;
 
-       list_for_each_entry(mid, &bridge_device->mids_list, list) {
-               if (mc_enabled)
-                       mlxsw_sp_mc_write_mdb_entry(mlxsw_sp, mid,
-                                                   bridge_device);
-               else
-                       mlxsw_sp_mc_remove_mdb_entry(mlxsw_sp, mid);
+       list_for_each_entry(mdb_entry, &bridge_device->mdb_list, list) {
+               err = mlxsw_sp_mdb_entry_write(mlxsw_sp, mdb_entry, mc_enabled);
+               if (err)
+                       goto err_mdb_entry_write;
        }
+       return 0;
+
+err_mdb_entry_write:
+       list_for_each_entry_continue_reverse(mdb_entry,
+                                            &bridge_device->mdb_list, list)
+               mlxsw_sp_mdb_entry_write(mlxsw_sp, mdb_entry, !mc_enabled);
+       return err;
 }
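
The mc_enable sync now rolls back partial failures. Note the semantics of
list_for_each_entry_continue_reverse(): it resumes from the element *before*
the current cursor, so the entry whose write failed is itself skipped and
only the entries already written are reverted. The generic shape of the
idiom:

        list_for_each_entry(e, head, list) {
                err = write(e, state);
                if (err)
                        goto rollback;
        }
        return 0;
rollback:
        list_for_each_entry_continue_reverse(e, head, list)
                write(e, !state);
        return err;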
 
 static void
@@ -1865,14 +2140,20 @@ mlxsw_sp_port_mrouter_update_mdb(struct mlxsw_sp_port *mlxsw_sp_port,
                                 struct mlxsw_sp_bridge_port *bridge_port,
                                 bool add)
 {
+       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
        struct mlxsw_sp_bridge_device *bridge_device;
-       struct mlxsw_sp_mid *mid;
+       u16 local_port = mlxsw_sp_port->local_port;
+       struct mlxsw_sp_mdb_entry *mdb_entry;
 
        bridge_device = bridge_port->bridge_device;
 
-       list_for_each_entry(mid, &bridge_device->mids_list, list) {
-               if (!test_bit(mlxsw_sp_port->local_port, mid->ports_in_mid))
-                       mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, add);
+       list_for_each_entry(mdb_entry, &bridge_device->mdb_list, list) {
+               if (add)
+                       mlxsw_sp_mdb_entry_mrouter_port_get(mlxsw_sp, mdb_entry,
+                                                           local_port);
+               else
+                       mlxsw_sp_mdb_entry_mrouter_port_put(mlxsw_sp, mdb_entry,
+                                                           local_port);
        }
 }
 
@@ -1950,28 +2231,6 @@ static int mlxsw_sp_port_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port,
        return 0;
 }
 
-static int
-__mlxsw_sp_port_mdb_del(struct mlxsw_sp_port *mlxsw_sp_port,
-                       struct mlxsw_sp_bridge_port *bridge_port,
-                       struct mlxsw_sp_mid *mid)
-{
-       struct net_device *dev = mlxsw_sp_port->dev;
-       int err;
-
-       if (bridge_port->bridge_device->multicast_enabled &&
-           !bridge_port->mrouter) {
-               err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, false);
-               if (err)
-                       netdev_err(dev, "Unable to remove port from SMID\n");
-       }
-
-       err = mlxsw_sp_port_remove_from_mid(mlxsw_sp_port, mid);
-       if (err)
-               netdev_err(dev, "Unable to remove MC SFD\n");
-
-       return err;
-}
-
 static int mlxsw_sp_port_mdb_del(struct mlxsw_sp_port *mlxsw_sp_port,
                                 const struct switchdev_obj_port_mdb *mdb)
 {
@@ -1981,7 +2240,8 @@ static int mlxsw_sp_port_mdb_del(struct mlxsw_sp_port *mlxsw_sp_port,
        struct mlxsw_sp_bridge_device *bridge_device;
        struct net_device *dev = mlxsw_sp_port->dev;
        struct mlxsw_sp_bridge_port *bridge_port;
-       struct mlxsw_sp_mid *mid;
+       struct mlxsw_sp_mdb_entry_key key = {};
+       struct mlxsw_sp_mdb_entry *mdb_entry;
        u16 fid_index;
 
        bridge_port = mlxsw_sp_bridge_port_find(mlxsw_sp->bridge, orig_dev);
@@ -1997,32 +2257,44 @@ static int mlxsw_sp_port_mdb_del(struct mlxsw_sp_port *mlxsw_sp_port,
 
        fid_index = mlxsw_sp_fid_index(mlxsw_sp_port_vlan->fid);
 
-       mid = __mlxsw_sp_mc_get(bridge_device, mdb->addr, fid_index);
-       if (!mid) {
+       ether_addr_copy(key.addr, mdb->addr);
+       key.fid = fid_index;
+       mdb_entry = rhashtable_lookup_fast(&bridge_device->mdb_ht, &key,
+                                          mlxsw_sp_mdb_ht_params);
+       if (!mdb_entry) {
                netdev_err(dev, "Unable to remove port from MC DB\n");
                return -EINVAL;
        }
 
-       return __mlxsw_sp_port_mdb_del(mlxsw_sp_port, bridge_port, mid);
+       mlxsw_sp_mc_mdb_entry_put(mlxsw_sp, bridge_device, mdb_entry,
+                                 mlxsw_sp_port->local_port, false);
+       return 0;
 }
 
 static void
 mlxsw_sp_bridge_port_mdb_flush(struct mlxsw_sp_port *mlxsw_sp_port,
-                              struct mlxsw_sp_bridge_port *bridge_port)
+                              struct mlxsw_sp_bridge_port *bridge_port,
+                              u16 fid_index)
 {
+       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
        struct mlxsw_sp_bridge_device *bridge_device;
-       struct mlxsw_sp_mid *mid, *tmp;
+       struct mlxsw_sp_mdb_entry *mdb_entry, *tmp;
+       u16 local_port = mlxsw_sp_port->local_port;
 
        bridge_device = bridge_port->bridge_device;
 
-       list_for_each_entry_safe(mid, tmp, &bridge_device->mids_list, list) {
-               if (test_bit(mlxsw_sp_port->local_port, mid->ports_in_mid)) {
-                       __mlxsw_sp_port_mdb_del(mlxsw_sp_port, bridge_port,
-                                               mid);
-               } else if (bridge_device->multicast_enabled &&
-                          bridge_port->mrouter) {
-                       mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, false);
-               }
+       list_for_each_entry_safe(mdb_entry, tmp, &bridge_device->mdb_list,
+                                list) {
+               if (mdb_entry->key.fid != fid_index)
+                       continue;
+
+               if (bridge_port->mrouter)
+                       mlxsw_sp_mdb_entry_mrouter_port_put(mlxsw_sp,
+                                                           mdb_entry,
+                                                           local_port);
+
+               mlxsw_sp_mc_mdb_entry_put(mlxsw_sp, bridge_device, mdb_entry,
+                                         local_port, true);
        }
 }
 
@@ -2634,10 +2906,9 @@ static void mlxsw_sp_fdb_notify_mac_process(struct mlxsw_sp *mlxsw_sp,
        struct mlxsw_sp_bridge_device *bridge_device;
        struct mlxsw_sp_bridge_port *bridge_port;
        struct mlxsw_sp_port *mlxsw_sp_port;
+       u16 local_port, vid, fid, evid = 0;
        enum switchdev_notifier_type type;
        char mac[ETH_ALEN];
-       u16 local_port;
-       u16 vid, fid;
        bool do_notification = true;
        int err;
 
@@ -2668,9 +2939,10 @@ static void mlxsw_sp_fdb_notify_mac_process(struct mlxsw_sp *mlxsw_sp,
 
        bridge_device = bridge_port->bridge_device;
        vid = bridge_device->vlan_enabled ? mlxsw_sp_port_vlan->vid : 0;
+       evid = mlxsw_sp_port_vlan->vid;
 
 do_fdb_op:
-       err = mlxsw_sp_port_fdb_uc_op(mlxsw_sp, local_port, mac, fid,
+       err = mlxsw_sp_port_fdb_uc_op(mlxsw_sp, local_port, mac, fid, evid,
                                      adding, true);
        if (err) {
                dev_err_ratelimited(mlxsw_sp->bus_info->dev, "Failed to set FDB entry\n");
@@ -2730,8 +3002,7 @@ static void mlxsw_sp_fdb_notify_mac_lag_process(struct mlxsw_sp *mlxsw_sp,
 
        bridge_device = bridge_port->bridge_device;
        vid = bridge_device->vlan_enabled ? mlxsw_sp_port_vlan->vid : 0;
-       lag_vid = mlxsw_sp_fid_lag_vid_valid(mlxsw_sp_port_vlan->fid) ?
-                 mlxsw_sp_port_vlan->vid : 0;
+       lag_vid = mlxsw_sp_port_vlan->vid;
 
 do_fdb_op:
        err = mlxsw_sp_port_fdb_uc_lag_op(mlxsw_sp, lag_id, mac, fid, lag_vid,
index 79ecf29..a9a1dea 100644 (file)
@@ -1212,8 +1212,8 @@ static int lan743x_sgmii_config(struct lan743x_adapter *adapter)
 
        /* SGMII/1000/2500BASE-X PCS power down */
        mii_ctl = lan743x_sgmii_read(adapter, MDIO_MMD_VEND2, MII_BMCR);
-       if (ret < 0)
-               return ret;
+       if (mii_ctl < 0)
+               return mii_ctl;
 
        mii_ctl |= BMCR_PDOWN;
        ret = lan743x_sgmii_write(adapter, MDIO_MMD_VEND2, MII_BMCR, mii_ctl);
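
The lan743x fix above corrects a copy-and-paste error: lan743x_sgmii_read()
returns either the (non-negative) register value or a negative errno, but the
old code tested a stale ret from an earlier call, so read failures went
unnoticed. The repaired pattern, in general form (xxx_read()/SOME_BIT are
placeholders):

        val = xxx_read(adapter, devad, reg);    /* value or -errno */
        if (val < 0)
                return val;
        /* only now does val hold a valid register image */
        val |= SOME_BIT;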
index 5784c41..1d6e3b6 100644 (file)
@@ -994,7 +994,7 @@ static int lan966x_probe(struct platform_device *pdev)
        struct fwnode_handle *ports, *portnp;
        struct lan966x *lan966x;
        u8 mac_addr[ETH_ALEN];
-       int err, i;
+       int err;
 
        lan966x = devm_kzalloc(&pdev->dev, sizeof(*lan966x), GFP_KERNEL);
        if (!lan966x)
@@ -1025,11 +1025,7 @@ static int lan966x_probe(struct platform_device *pdev)
        if (err)
                return dev_err_probe(&pdev->dev, err, "Reset failed");
 
-       i = 0;
-       fwnode_for_each_available_child_node(ports, portnp)
-               ++i;
-
-       lan966x->num_phys_ports = i;
+       lan966x->num_phys_ports = NUM_PHYS_PORTS;
        lan966x->ports = devm_kcalloc(&pdev->dev, lan966x->num_phys_ports,
                                      sizeof(struct lan966x_port *),
                                      GFP_KERNEL);
index 3b86ddd..2787055 100644 (file)
@@ -34,6 +34,7 @@
 /* Reserved amount for (SRC, PRIO) at index 8*SRC + PRIO */
 #define QSYS_Q_RSRV                    95
 
+#define NUM_PHYS_PORTS                 8
 #define CPU_PORT                       8
 
 /* Reserved PGIDs */
index 40ef9fa..ec07f7d 100644 (file)
@@ -397,6 +397,9 @@ static int sparx5_handle_port_mdb_add(struct net_device *dev,
        bool is_host;
        int res, err;
 
+       if (!sparx5_netdevice_check(dev))
+               return -EOPNOTSUPP;
+
        is_host = netif_is_bridge_master(v->obj.orig_dev);
 
        /* When VLAN unaware the vlan value is not parsed and we receive vid 0.
@@ -480,6 +483,9 @@ static int sparx5_handle_port_mdb_del(struct net_device *dev,
        u32 mact_entry, res, pgid_entry[3], misc_cfg;
        bool host_ena;
 
+       if (!sparx5_netdevice_check(dev))
+               return -EOPNOTSUPP;
+
        if (!br_vlan_enabled(spx5->hw_bridge_dev))
                vid = 1;
        else
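
The sparx5 hunks add an ownership check: switchdev object notifications can
arrive for netdevices the driver does not manage (for example, foreign ports
in the same bridge), and returning -EOPNOTSUPP tells the caller the object is
simply not offloaded here rather than reporting a hard error:

        if (!sparx5_netdevice_check(dev))
                return -EOPNOTSUPP;     /* not our port; decline to offload */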
index 61497c3..971dde8 100644 (file)
@@ -2692,7 +2692,7 @@ again:
                 * send loop that we are still in the
                 * header portion of the TSO packet.
                 * TSO header can be at most 1KB long */
-               cum_len = -(skb_transport_offset(skb) + tcp_hdrlen(skb));
+               cum_len = -skb_tcp_all_headers(skb);
 
                /* for IPv6 TSO, the checksum offset stores the
                 * TCP header length, to save the firmware from
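
skb_tcp_all_headers() replaces the open-coded sum of the transport offset and
the TCP header length. Its definition is not part of this diff, but from the
conversion it is presumably equivalent to:

static inline int skb_tcp_all_headers(const struct sk_buff *skb)
{
        return skb_transport_offset(skb) + tcp_hdrlen(skb);
}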
index 50bca48..9aae7f1 100644 (file)
@@ -158,7 +158,7 @@ MODULE_PARM_DESC(full_duplex, "DP8381x full duplex setting(s) (1)");
 I. Board Compatibility
 
 This driver is designed for National Semiconductor DP83815 PCI Ethernet NIC.
-It also works with other chips in in the DP83810 series.
+It also works with other chips in the DP83810 series.
 
 II. Board-specific settings
 
index 0c0d127..09a89e7 100644 (file)
@@ -32,28 +32,4 @@ config S2IO
          To compile this driver as a module, choose M here. The module
          will be called s2io.
 
-config VXGE
-       tristate "Neterion (Exar) X3100 Series 10GbE PCIe Server Adapter"
-       depends on PCI
-       help
-         This driver supports Exar Corp's X3100 Series 10 GbE PCIe
-         I/O Virtualized Server Adapter.  These were originally released from
-         Neterion, which was later acquired by Exar.  So, the adapters might be
-         labeled as either one, depending on its age.
-
-         More specific information on configuring the driver is in
-         <file:Documentation/networking/device_drivers/ethernet/neterion/vxge.rst>.
-
-         To compile this driver as a module, choose M here. The module
-         will be called vxge.
-
-config VXGE_DEBUG_TRACE_ALL
-       bool "Enabling All Debug trace statements in driver"
-       default n
-       depends on VXGE
-       help
-         Say Y here if you want to enabling all the debug trace statements in
-         the vxge driver. By default only few debug trace statements are
-         enabled.
-
 endif # NET_VENDOR_NETERION
index 87ede8a..de98b4e 100644 (file)
@@ -4,4 +4,3 @@
 #
 
 obj-$(CONFIG_S2IO) += s2io.o
-obj-$(CONFIG_VXGE) += vxge/
index 6dd451a..30f955e 100644 (file)
@@ -2156,7 +2156,7 @@ static int verify_xena_quiescence(struct s2io_nic *sp)
 
        /*
         * In PCI 33 mode, the P_PLL is not used, and therefore,
-        * the the P_PLL_LOCK bit in the adapter_status register will
+        * the P_PLL_LOCK bit in the adapter_status register will
         * not be asserted.
         */
        if (!(val64 & ADAPTER_STATUS_P_PLL_LOCK) &&
@@ -3817,7 +3817,7 @@ static irqreturn_t s2io_test_intr(int irq, void *dev_id)
        return IRQ_HANDLED;
 }
 
-/* Test interrupt path by forcing a software IRQ */
+/* Test interrupt path by forcing a software IRQ */
 static int s2io_test_msi(struct s2io_nic *sp)
 {
        struct pci_dev *pdev = sp->pdev;
@@ -5492,7 +5492,7 @@ s2io_ethtool_gringparam(struct net_device *dev,
 }
 
 /**
- * s2io_ethtool_getpause_data -Pause frame frame generation and reception.
+ * s2io_ethtool_getpause_data -Pause frame generation and reception.
  * @dev: pointer to netdev
  * @ep : pointer to the structure with pause parameters given by ethtool.
  * Description:
@@ -7449,7 +7449,7 @@ aggregate:
  *  @link : inidicates whether link is UP/DOWN.
  *  Description:
  *  This function stops/starts the Tx queue depending on whether the link
- *  status of the NIC is is down or up. This is called by the Alarm
+ *  status of the NIC is down or up. This is called by the Alarm
  *  interrupt handler whenever a link change interrupt comes up.
  *  Return value:
  *  void.
@@ -7732,7 +7732,7 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre)
         * Setting the device configuration parameters.
         * Most of these parameters can be specified by the user during
         * module insertion as they are module loadable parameters. If
-        * these parameters are not not specified during load time, they
+        * these parameters are not specified during load time, they
         * are initialized with default values.
         */
        config = &sp->config;
diff --git a/drivers/net/ethernet/neterion/vxge/Makefile b/drivers/net/ethernet/neterion/vxge/Makefile
deleted file mode 100644 (file)
index 0820e81..0000000
+++ /dev/null
@@ -1,8 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# Makefile for Exar Corp's X3100 Series 10 GbE PCIe I/O
-# Virtualized Server Adapter linux driver
-
-obj-$(CONFIG_VXGE) += vxge.o
-
-vxge-objs := vxge-config.o vxge-traffic.o vxge-ethtool.o vxge-main.o
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.c b/drivers/net/ethernet/neterion/vxge/vxge-config.c
deleted file mode 100644 (file)
index a3204a7..0000000
+++ /dev/null
@@ -1,5099 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-config.c: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
- *                Virtualized Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#include <linux/vmalloc.h>
-#include <linux/etherdevice.h>
-#include <linux/io-64-nonatomic-lo-hi.h>
-#include <linux/pci.h>
-#include <linux/slab.h>
-
-#include "vxge-traffic.h"
-#include "vxge-config.h"
-#include "vxge-main.h"
-
-#define VXGE_HW_VPATH_STATS_PIO_READ(offset) {                         \
-       status = __vxge_hw_vpath_stats_access(vpath,                    \
-                                             VXGE_HW_STATS_OP_READ,    \
-                                             offset,                   \
-                                             &val64);                  \
-       if (status != VXGE_HW_OK)                                       \
-               return status;                                          \
-}
-
-static void
-vxge_hw_vpath_set_zero_rx_frm_len(struct vxge_hw_vpath_reg __iomem *vp_reg)
-{
-       u64 val64;
-
-       val64 = readq(&vp_reg->rxmac_vcfg0);
-       val64 &= ~VXGE_HW_RXMAC_VCFG0_RTS_MAX_FRM_LEN(0x3fff);
-       writeq(val64, &vp_reg->rxmac_vcfg0);
-       val64 = readq(&vp_reg->rxmac_vcfg0);
-}
-
-/*
- * vxge_hw_vpath_wait_receive_idle - Wait for Rx to become idle
- */
-int vxge_hw_vpath_wait_receive_idle(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-       struct __vxge_hw_virtualpath *vpath;
-       u64 val64, rxd_count, rxd_spat;
-       int count = 0, total_count = 0;
-
-       vpath = &hldev->virtual_paths[vp_id];
-       vp_reg = vpath->vp_reg;
-
-       vxge_hw_vpath_set_zero_rx_frm_len(vp_reg);
-
-       /* Check that the ring controller for this vpath has enough free RxDs
-        * to send frames to the host.  This is done by reading the
-        * PRC_RXD_DOORBELL_VPn register and comparing the read value to the
-        * RXD_SPAT value for the vpath.
-        */
-       val64 = readq(&vp_reg->prc_cfg6);
-       rxd_spat = VXGE_HW_PRC_CFG6_GET_RXD_SPAT(val64) + 1;
-       /* Use a factor of 2 when comparing rxd_count against rxd_spat for some
-        * leg room.
-        */
-       rxd_spat *= 2;
-
-       do {
-               mdelay(1);
-
-               rxd_count = readq(&vp_reg->prc_rxd_doorbell);
-
-               /* Check that the ring controller for this vpath does
-                * not have any frame in its pipeline.
-                */
-               val64 = readq(&vp_reg->frm_in_progress_cnt);
-               if ((rxd_count <= rxd_spat) || (val64 > 0))
-                       count = 0;
-               else
-                       count++;
-               total_count++;
-       } while ((count < VXGE_HW_MIN_SUCCESSIVE_IDLE_COUNT) &&
-                       (total_count < VXGE_HW_MAX_POLLING_COUNT));
-
-       if (total_count >= VXGE_HW_MAX_POLLING_COUNT)
-               printk(KERN_ALERT "%s: Still Receiving traffic. Abort wait\n",
-                       __func__);
-
-       return total_count;
-}
-
-/* vxge_hw_device_wait_receive_idle - This function waits until all frames
- * stored in the frame buffer for each vpath assigned to the given
- * function (hldev) have been sent to the host.
- */
-void vxge_hw_device_wait_receive_idle(struct __vxge_hw_device *hldev)
-{
-       int i, total_count = 0;
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!(hldev->vpaths_deployed & vxge_mBIT(i)))
-                       continue;
-
-               total_count += vxge_hw_vpath_wait_receive_idle(hldev, i);
-               if (total_count >= VXGE_HW_MAX_POLLING_COUNT)
-                       break;
-       }
-}
-
-/*
- * __vxge_hw_device_register_poll
- * Will poll certain register for specified amount of time.
- * Will poll until masked bit is not cleared.
- */
-static enum vxge_hw_status
-__vxge_hw_device_register_poll(void __iomem *reg, u64 mask, u32 max_millis)
-{
-       u64 val64;
-       u32 i = 0;
-
-       udelay(10);
-
-       do {
-               val64 = readq(reg);
-               if (!(val64 & mask))
-                       return VXGE_HW_OK;
-               udelay(100);
-       } while (++i <= 9);
-
-       i = 0;
-       do {
-               val64 = readq(reg);
-               if (!(val64 & mask))
-                       return VXGE_HW_OK;
-               mdelay(1);
-       } while (++i <= max_millis);
-
-       return VXGE_HW_FAIL;
-}
-
-static inline enum vxge_hw_status
-__vxge_hw_pio_mem_write64(u64 val64, void __iomem *addr,
-                         u64 mask, u32 max_millis)
-{
-       __vxge_hw_pio_mem_write32_lower((u32)vxge_bVALn(val64, 32, 32), addr);
-       wmb();
-       __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(val64, 0, 32), addr);
-       wmb();
-
-       return __vxge_hw_device_register_poll(addr, mask, max_millis);
-}
-
-static enum vxge_hw_status
-vxge_hw_vpath_fw_api(struct __vxge_hw_virtualpath *vpath, u32 action,
-                    u32 fw_memo, u32 offset, u64 *data0, u64 *data1,
-                    u64 *steer_ctrl)
-{
-       struct vxge_hw_vpath_reg __iomem *vp_reg = vpath->vp_reg;
-       enum vxge_hw_status status;
-       u64 val64;
-       u32 retry = 0, max_retry = 3;
-
-       spin_lock(&vpath->lock);
-       if (!vpath->vp_open) {
-               spin_unlock(&vpath->lock);
-               max_retry = 100;
-       }
-
-       writeq(*data0, &vp_reg->rts_access_steer_data0);
-       writeq(*data1, &vp_reg->rts_access_steer_data1);
-       wmb();
-
-       val64 = VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION(action) |
-               VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL(fw_memo) |
-               VXGE_HW_RTS_ACCESS_STEER_CTRL_OFFSET(offset) |
-               VXGE_HW_RTS_ACCESS_STEER_CTRL_STROBE |
-               *steer_ctrl;
-
-       status = __vxge_hw_pio_mem_write64(val64,
-                                          &vp_reg->rts_access_steer_ctrl,
-                                          VXGE_HW_RTS_ACCESS_STEER_CTRL_STROBE,
-                                          VXGE_HW_DEF_DEVICE_POLL_MILLIS);
-
-       /* The __vxge_hw_device_register_poll can udelay for a significant
-        * amount of time, blocking other processes from the CPU.  If it delays
-        * for ~5secs, an NMI error can occur.  A way around this is to give up
-        * the processor via msleep, but this is not allowed while under lock.
-        * So, only allow it to sleep for ~4secs if open.  Otherwise, delay for
-        * 1sec and sleep for 10ms until the firmware operation has completed
-        * or timed-out.
-        */
-       while ((status != VXGE_HW_OK) && retry++ < max_retry) {
-               if (!vpath->vp_open)
-                       msleep(20);
-               status = __vxge_hw_device_register_poll(
-                                       &vp_reg->rts_access_steer_ctrl,
-                                       VXGE_HW_RTS_ACCESS_STEER_CTRL_STROBE,
-                                       VXGE_HW_DEF_DEVICE_POLL_MILLIS);
-       }
-
-       if (status != VXGE_HW_OK)
-               goto out;
-
-       val64 = readq(&vp_reg->rts_access_steer_ctrl);
-       if (val64 & VXGE_HW_RTS_ACCESS_STEER_CTRL_RMACJ_STATUS) {
-               *data0 = readq(&vp_reg->rts_access_steer_data0);
-               *data1 = readq(&vp_reg->rts_access_steer_data1);
-               *steer_ctrl = val64;
-       } else
-               status = VXGE_HW_FAIL;
-
-out:
-       if (vpath->vp_open)
-               spin_unlock(&vpath->lock);
-       return status;
-}
-
-enum vxge_hw_status
-vxge_hw_upgrade_read_version(struct __vxge_hw_device *hldev, u32 *major,
-                            u32 *minor, u32 *build)
-{
-       u64 data0 = 0, data1 = 0, steer_ctrl = 0;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status;
-
-       vpath = &hldev->virtual_paths[hldev->first_vp_id];
-
-       status = vxge_hw_vpath_fw_api(vpath,
-                                     VXGE_HW_FW_UPGRADE_ACTION,
-                                     VXGE_HW_FW_UPGRADE_MEMO,
-                                     VXGE_HW_FW_UPGRADE_OFFSET_READ,
-                                     &data0, &data1, &steer_ctrl);
-       if (status != VXGE_HW_OK)
-               return status;
-
-       *major = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MAJOR(data0);
-       *minor = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MINOR(data0);
-       *build = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_BUILD(data0);
-
-       return status;
-}
-
-enum vxge_hw_status vxge_hw_flash_fw(struct __vxge_hw_device *hldev)
-{
-       u64 data0 = 0, data1 = 0, steer_ctrl = 0;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status;
-       u32 ret;
-
-       vpath = &hldev->virtual_paths[hldev->first_vp_id];
-
-       status = vxge_hw_vpath_fw_api(vpath,
-                                     VXGE_HW_FW_UPGRADE_ACTION,
-                                     VXGE_HW_FW_UPGRADE_MEMO,
-                                     VXGE_HW_FW_UPGRADE_OFFSET_COMMIT,
-                                     &data0, &data1, &steer_ctrl);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR, "%s: FW upgrade failed", __func__);
-               goto exit;
-       }
-
-       ret = VXGE_HW_RTS_ACCESS_STEER_CTRL_GET_ACTION(steer_ctrl) & 0x7F;
-       if (ret != 1) {
-               vxge_debug_init(VXGE_ERR, "%s: FW commit failed with error %d",
-                               __func__, ret);
-               status = VXGE_HW_FAIL;
-       }
-
-exit:
-       return status;
-}
-
-enum vxge_hw_status
-vxge_update_fw_image(struct __vxge_hw_device *hldev, const u8 *fwdata, int size)
-{
-       u64 data0 = 0, data1 = 0, steer_ctrl = 0;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status;
-       int ret_code, sec_code;
-
-       vpath = &hldev->virtual_paths[hldev->first_vp_id];
-
-       /* send upgrade start command */
-       status = vxge_hw_vpath_fw_api(vpath,
-                                     VXGE_HW_FW_UPGRADE_ACTION,
-                                     VXGE_HW_FW_UPGRADE_MEMO,
-                                     VXGE_HW_FW_UPGRADE_OFFSET_START,
-                                     &data0, &data1, &steer_ctrl);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR, " %s: Upgrade start cmd failed",
-                               __func__);
-               return status;
-       }
-
-       /* Transfer fw image to adapter 16 bytes at a time */
-       for (; size > 0; size -= VXGE_HW_FW_UPGRADE_BLK_SIZE) {
-               steer_ctrl = 0;
-
-               /* The next 128bits of fwdata to be loaded onto the adapter */
-               data0 = *((u64 *)fwdata);
-               data1 = *((u64 *)fwdata + 1);
-
-               status = vxge_hw_vpath_fw_api(vpath,
-                                             VXGE_HW_FW_UPGRADE_ACTION,
-                                             VXGE_HW_FW_UPGRADE_MEMO,
-                                             VXGE_HW_FW_UPGRADE_OFFSET_SEND,
-                                             &data0, &data1, &steer_ctrl);
-               if (status != VXGE_HW_OK) {
-                       vxge_debug_init(VXGE_ERR, "%s: Upgrade send failed",
-                                       __func__);
-                       goto out;
-               }
-
-               ret_code = VXGE_HW_UPGRADE_GET_RET_ERR_CODE(data0);
-               switch (ret_code) {
-               case VXGE_HW_FW_UPGRADE_OK:
-                       /* All OK, send next 16 bytes. */
-                       break;
-               case VXGE_FW_UPGRADE_BYTES2SKIP:
-                       /* skip bytes in the stream */
-                       fwdata += (data0 >> 8) & 0xFFFFFFFF;
-                       break;
-               case VXGE_HW_FW_UPGRADE_DONE:
-                       goto out;
-               case VXGE_HW_FW_UPGRADE_ERR:
-                       sec_code = VXGE_HW_UPGRADE_GET_SEC_ERR_CODE(data0);
-                       switch (sec_code) {
-                       case VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_1:
-                       case VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_7:
-                               printk(KERN_ERR
-                                      "corrupted data from .ncf file\n");
-                               break;
-                       case VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_3:
-                       case VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_4:
-                       case VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_5:
-                       case VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_6:
-                       case VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_8:
-                               printk(KERN_ERR "invalid .ncf file\n");
-                               break;
-                       case VXGE_HW_FW_UPGRADE_ERR_BUFFER_OVERFLOW:
-                               printk(KERN_ERR "buffer overflow\n");
-                               break;
-                       case VXGE_HW_FW_UPGRADE_ERR_FAILED_TO_FLASH:
-                               printk(KERN_ERR "failed to flash the image\n");
-                               break;
-                       case VXGE_HW_FW_UPGRADE_ERR_GENERIC_ERROR_UNKNOWN:
-                               printk(KERN_ERR
-                                      "generic error. Unknown error type\n");
-                               break;
-                       default:
-                               printk(KERN_ERR "Unknown error of type %d\n",
-                                      sec_code);
-                               break;
-                       }
-                       status = VXGE_HW_FAIL;
-                       goto out;
-               default:
-                       printk(KERN_ERR "Unknown FW error: %d\n", ret_code);
-                       status = VXGE_HW_FAIL;
-                       goto out;
-               }
-               /* point to next 16 bytes */
-               fwdata += VXGE_HW_FW_UPGRADE_BLK_SIZE;
-       }
-out:
-       return status;
-}
-
-enum vxge_hw_status
-vxge_hw_vpath_eprom_img_ver_get(struct __vxge_hw_device *hldev,
-                               struct eprom_image *img)
-{
-       u64 data0 = 0, data1 = 0, steer_ctrl = 0;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status;
-       int i;
-
-       vpath = &hldev->virtual_paths[hldev->first_vp_id];
-
-       for (i = 0; i < VXGE_HW_MAX_ROM_IMAGES; i++) {
-               data0 = VXGE_HW_RTS_ACCESS_STEER_ROM_IMAGE_INDEX(i);
-               data1 = steer_ctrl = 0;
-
-               status = vxge_hw_vpath_fw_api(vpath,
-                       VXGE_HW_FW_API_GET_EPROM_REV,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_FW_MEMO,
-                       0, &data0, &data1, &steer_ctrl);
-               if (status != VXGE_HW_OK)
-                       break;
-
-               img[i].is_valid = VXGE_HW_GET_EPROM_IMAGE_VALID(data0);
-               img[i].index = VXGE_HW_GET_EPROM_IMAGE_INDEX(data0);
-               img[i].type = VXGE_HW_GET_EPROM_IMAGE_TYPE(data0);
-               img[i].version = VXGE_HW_GET_EPROM_IMAGE_REV(data0);
-       }
-
-       return status;
-}
-
-/*
- * __vxge_hw_channel_free - Free memory allocated for channel
- * This function deallocates memory from the channel and various arrays
- * in the channel
- */
-static void __vxge_hw_channel_free(struct __vxge_hw_channel *channel)
-{
-       kfree(channel->work_arr);
-       kfree(channel->free_arr);
-       kfree(channel->reserve_arr);
-       kfree(channel->orig_arr);
-       kfree(channel);
-}
-
-/*
- * __vxge_hw_channel_initialize - Initialize a channel
- * This function initializes a channel by properly setting the
- * various references
- */
-static enum vxge_hw_status
-__vxge_hw_channel_initialize(struct __vxge_hw_channel *channel)
-{
-       u32 i;
-       struct __vxge_hw_virtualpath *vpath;
-
-       vpath = channel->vph->vpath;
-
-       if ((channel->reserve_arr != NULL) && (channel->orig_arr != NULL)) {
-               for (i = 0; i < channel->length; i++)
-                       channel->orig_arr[i] = channel->reserve_arr[i];
-       }
-
-       switch (channel->type) {
-       case VXGE_HW_CHANNEL_TYPE_FIFO:
-               vpath->fifoh = (struct __vxge_hw_fifo *)channel;
-               channel->stats = &((struct __vxge_hw_fifo *)
-                               channel)->stats->common_stats;
-               break;
-       case VXGE_HW_CHANNEL_TYPE_RING:
-               vpath->ringh = (struct __vxge_hw_ring *)channel;
-               channel->stats = &((struct __vxge_hw_ring *)
-                               channel)->stats->common_stats;
-               break;
-       default:
-               break;
-       }
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_channel_reset - Resets a channel
- * This function resets a channel by properly setting the various references
- */
-static enum vxge_hw_status
-__vxge_hw_channel_reset(struct __vxge_hw_channel *channel)
-{
-       u32 i;
-
-       for (i = 0; i < channel->length; i++) {
-               if (channel->reserve_arr != NULL)
-                       channel->reserve_arr[i] = channel->orig_arr[i];
-               if (channel->free_arr != NULL)
-                       channel->free_arr[i] = NULL;
-               if (channel->work_arr != NULL)
-                       channel->work_arr[i] = NULL;
-       }
-       channel->free_ptr = channel->length;
-       channel->reserve_ptr = channel->length;
-       channel->reserve_top = 0;
-       channel->post_index = 0;
-       channel->compl_index = 0;
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_device_pci_e_init
- * Initialize certain PCI/PCI-X configuration registers
- * with recommended values. Save config space for future hw resets.
- */
-static void __vxge_hw_device_pci_e_init(struct __vxge_hw_device *hldev)
-{
-       u16 cmd = 0;
-
-       /* Set the PErr Response bit and SERR in the PCI command register. */
-       pci_read_config_word(hldev->pdev, PCI_COMMAND, &cmd);
-       cmd |= 0x140;
-       pci_write_config_word(hldev->pdev, PCI_COMMAND, cmd);
-
-       pci_save_state(hldev->pdev);
-}
-
-/* __vxge_hw_device_vpath_reset_in_prog_check - Check if vpath reset
- * in progress
- * This routine checks that the vpath reset in progress register is cleared to zero
- */
-static enum vxge_hw_status
-__vxge_hw_device_vpath_reset_in_prog_check(u64 __iomem *vpath_rst_in_prog)
-{
-       enum vxge_hw_status status;
-       status = __vxge_hw_device_register_poll(vpath_rst_in_prog,
-                       VXGE_HW_VPATH_RST_IN_PROG_VPATH_RST_IN_PROG(0x1ffff),
-                       VXGE_HW_DEF_DEVICE_POLL_MILLIS);
-       return status;
-}
-
-/*
- * __vxge_hw_legacy_swapper_set - Set the swapper bits for the legacy section.
- * Set the swapper bits appropriately for the legacy section.
- */
-static enum vxge_hw_status
-__vxge_hw_legacy_swapper_set(struct vxge_hw_legacy_reg __iomem *legacy_reg)
-{
-       u64 val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       val64 = readq(&legacy_reg->toc_swapper_fb);
-
-       wmb();
-
-       switch (val64) {
-       case VXGE_HW_SWAPPER_INITIAL_VALUE:
-               return status;
-
-       case VXGE_HW_SWAPPER_BYTE_SWAPPED_BIT_FLIPPED:
-               writeq(VXGE_HW_SWAPPER_READ_BYTE_SWAP_ENABLE,
-                       &legacy_reg->pifm_rd_swap_en);
-               writeq(VXGE_HW_SWAPPER_READ_BIT_FLAP_ENABLE,
-                       &legacy_reg->pifm_rd_flip_en);
-               writeq(VXGE_HW_SWAPPER_WRITE_BYTE_SWAP_ENABLE,
-                       &legacy_reg->pifm_wr_swap_en);
-               writeq(VXGE_HW_SWAPPER_WRITE_BIT_FLAP_ENABLE,
-                       &legacy_reg->pifm_wr_flip_en);
-               break;
-
-       case VXGE_HW_SWAPPER_BYTE_SWAPPED:
-               writeq(VXGE_HW_SWAPPER_READ_BYTE_SWAP_ENABLE,
-                       &legacy_reg->pifm_rd_swap_en);
-               writeq(VXGE_HW_SWAPPER_WRITE_BYTE_SWAP_ENABLE,
-                       &legacy_reg->pifm_wr_swap_en);
-               break;
-
-       case VXGE_HW_SWAPPER_BIT_FLIPPED:
-               writeq(VXGE_HW_SWAPPER_READ_BIT_FLAP_ENABLE,
-                       &legacy_reg->pifm_rd_flip_en);
-               writeq(VXGE_HW_SWAPPER_WRITE_BIT_FLAP_ENABLE,
-                       &legacy_reg->pifm_wr_flip_en);
-               break;
-       }
-
-       wmb();
-
-       val64 = readq(&legacy_reg->toc_swapper_fb);
-
-       if (val64 != VXGE_HW_SWAPPER_INITIAL_VALUE)
-               status = VXGE_HW_ERR_SWAPPER_CTRL;
-
-       return status;
-}
-
-/*
- * __vxge_hw_device_toc_get
- * This routine sets the swapper and reads the toc pointer and returns the
- * memory mapped address of the toc
- */
-static struct vxge_hw_toc_reg __iomem *
-__vxge_hw_device_toc_get(void __iomem *bar0)
-{
-       u64 val64;
-       struct vxge_hw_toc_reg __iomem *toc = NULL;
-       enum vxge_hw_status status;
-
-       struct vxge_hw_legacy_reg __iomem *legacy_reg =
-               (struct vxge_hw_legacy_reg __iomem *)bar0;
-
-       status = __vxge_hw_legacy_swapper_set(legacy_reg);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       val64 = readq(&legacy_reg->toc_first_pointer);
-       toc = bar0 + val64;
-exit:
-       return toc;
-}
-
-/*
- * __vxge_hw_device_reg_addr_get
- * This routine sets the swapper and reads the toc pointer and initializes the
- * register location pointers in the device object. It waits until the ric
- * has completed initializing the registers.
- */
-static enum vxge_hw_status
-__vxge_hw_device_reg_addr_get(struct __vxge_hw_device *hldev)
-{
-       u64 val64;
-       u32 i;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       hldev->legacy_reg = hldev->bar0;
-
-       hldev->toc_reg = __vxge_hw_device_toc_get(hldev->bar0);
-       if (hldev->toc_reg  == NULL) {
-               status = VXGE_HW_FAIL;
-               goto exit;
-       }
-
-       val64 = readq(&hldev->toc_reg->toc_common_pointer);
-       hldev->common_reg = hldev->bar0 + val64;
-
-       val64 = readq(&hldev->toc_reg->toc_mrpcim_pointer);
-       hldev->mrpcim_reg = hldev->bar0 + val64;
-
-       for (i = 0; i < VXGE_HW_TITAN_SRPCIM_REG_SPACES; i++) {
-               val64 = readq(&hldev->toc_reg->toc_srpcim_pointer[i]);
-               hldev->srpcim_reg[i] = hldev->bar0 + val64;
-       }
-
-       for (i = 0; i < VXGE_HW_TITAN_VPMGMT_REG_SPACES; i++) {
-               val64 = readq(&hldev->toc_reg->toc_vpmgmt_pointer[i]);
-               hldev->vpmgmt_reg[i] = hldev->bar0 + val64;
-       }
-
-       for (i = 0; i < VXGE_HW_TITAN_VPATH_REG_SPACES; i++) {
-               val64 = readq(&hldev->toc_reg->toc_vpath_pointer[i]);
-               hldev->vpath_reg[i] = hldev->bar0 + val64;
-       }
-
-       val64 = readq(&hldev->toc_reg->toc_kdfc);
-
-       switch (VXGE_HW_TOC_GET_KDFC_INITIAL_BIR(val64)) {
-       case 0:
-               hldev->kdfc = hldev->bar0 + VXGE_HW_TOC_GET_KDFC_INITIAL_OFFSET(val64) ;
-               break;
-       default:
-               break;
-       }
-
-       status = __vxge_hw_device_vpath_reset_in_prog_check(
-                       (u64 __iomem *)&hldev->common_reg->vpath_rst_in_prog);
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_device_access_rights_get: Get Access Rights of the driver
- * This routine returns the Access Rights of the driver
- */
-static u32
-__vxge_hw_device_access_rights_get(u32 host_type, u32 func_id)
-{
-       u32 access_rights = VXGE_HW_DEVICE_ACCESS_RIGHT_VPATH;
-
-       switch (host_type) {
-       case VXGE_HW_NO_MR_NO_SR_NORMAL_FUNCTION:
-               if (func_id == 0) {
-                       access_rights |= VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM |
-                                       VXGE_HW_DEVICE_ACCESS_RIGHT_SRPCIM;
-               }
-               break;
-       case VXGE_HW_MR_NO_SR_VH0_BASE_FUNCTION:
-               access_rights |= VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM |
-                               VXGE_HW_DEVICE_ACCESS_RIGHT_SRPCIM;
-               break;
-       case VXGE_HW_NO_MR_SR_VH0_FUNCTION0:
-               access_rights |= VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM |
-                               VXGE_HW_DEVICE_ACCESS_RIGHT_SRPCIM;
-               break;
-       case VXGE_HW_NO_MR_SR_VH0_VIRTUAL_FUNCTION:
-       case VXGE_HW_SR_VH_VIRTUAL_FUNCTION:
-       case VXGE_HW_MR_SR_VH0_INVALID_CONFIG:
-               break;
-       case VXGE_HW_SR_VH_FUNCTION0:
-       case VXGE_HW_VH_NORMAL_FUNCTION:
-               access_rights |= VXGE_HW_DEVICE_ACCESS_RIGHT_SRPCIM;
-               break;
-       }
-
-       return access_rights;
-}
-
-/*
- * __vxge_hw_device_is_privilaged
- * This routine checks whether the device function is privileged
- */
-enum vxge_hw_status
-__vxge_hw_device_is_privilaged(u32 host_type, u32 func_id)
-{
-       if (__vxge_hw_device_access_rights_get(host_type,
-               func_id) &
-               VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM)
-               return VXGE_HW_OK;
-       else
-               return VXGE_HW_ERR_PRIVILEGED_OPERATION;
-}
-
-/*
- * __vxge_hw_vpath_func_id_get - Get the function id of the vpath.
- * Returns the function number of the vpath.
- */
-static u32
-__vxge_hw_vpath_func_id_get(struct vxge_hw_vpmgmt_reg __iomem *vpmgmt_reg)
-{
-       u64 val64;
-
-       val64 = readq(&vpmgmt_reg->vpath_to_func_map_cfg1);
-
-       return
-        (u32)VXGE_HW_VPATH_TO_FUNC_MAP_CFG1_GET_VPATH_TO_FUNC_MAP_CFG1(val64);
-}
-
-/*
- * __vxge_hw_device_host_info_get
- * This routine returns the host type assignments
- */
-static void __vxge_hw_device_host_info_get(struct __vxge_hw_device *hldev)
-{
-       u64 val64;
-       u32 i;
-
-       val64 = readq(&hldev->common_reg->host_type_assignments);
-
-       hldev->host_type =
-          (u32)VXGE_HW_HOST_TYPE_ASSIGNMENTS_GET_HOST_TYPE_ASSIGNMENTS(val64);
-
-       hldev->vpath_assignments = readq(&hldev->common_reg->vpath_assignments);
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!(hldev->vpath_assignments & vxge_mBIT(i)))
-                       continue;
-
-               hldev->func_id =
-                       __vxge_hw_vpath_func_id_get(hldev->vpmgmt_reg[i]);
-
-               hldev->access_rights = __vxge_hw_device_access_rights_get(
-                       hldev->host_type, hldev->func_id);
-
-               hldev->virtual_paths[i].vp_open = VXGE_HW_VP_NOT_OPEN;
-               hldev->virtual_paths[i].vp_reg = hldev->vpath_reg[i];
-
-               hldev->first_vp_id = i;
-               break;
-       }
-}
-
-/*
- * __vxge_hw_verify_pci_e_info - Validate the pci-e link parameters such as
- * link width and signalling rate.
- */
-static enum vxge_hw_status
-__vxge_hw_verify_pci_e_info(struct __vxge_hw_device *hldev)
-{
-       struct pci_dev *dev = hldev->pdev;
-       u16 lnk;
-
-       /* Get the negotiated link width and speed from PCI config space */
-       pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnk);
-
-       if ((lnk & PCI_EXP_LNKSTA_CLS) != 1)
-               return VXGE_HW_ERR_INVALID_PCI_INFO;
-
-       switch ((lnk & PCI_EXP_LNKSTA_NLW) >> 4) {
-       case PCIE_LNK_WIDTH_RESRV:
-       case PCIE_LNK_X1:
-       case PCIE_LNK_X2:
-       case PCIE_LNK_X4:
-       case PCIE_LNK_X8:
-               break;
-       default:
-               return VXGE_HW_ERR_INVALID_PCI_INFO;
-       }
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_device_initialize
- * Initialize Titan-V hardware.
- */
-static enum vxge_hw_status
-__vxge_hw_device_initialize(struct __vxge_hw_device *hldev)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (VXGE_HW_OK == __vxge_hw_device_is_privilaged(hldev->host_type,
-                               hldev->func_id)) {
-               /* Validate the pci-e link width and speed */
-               status = __vxge_hw_verify_pci_e_info(hldev);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-       }
-
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_fw_ver_get - Get the fw version
- * Returns FW Version
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_fw_ver_get(struct __vxge_hw_virtualpath *vpath,
-                          struct vxge_hw_device_hw_info *hw_info)
-{
-       struct vxge_hw_device_version *fw_version = &hw_info->fw_version;
-       struct vxge_hw_device_date *fw_date = &hw_info->fw_date;
-       struct vxge_hw_device_version *flash_version = &hw_info->flash_version;
-       struct vxge_hw_device_date *flash_date = &hw_info->flash_date;
-       u64 data0 = 0, data1 = 0, steer_ctrl = 0;
-       enum vxge_hw_status status;
-
-       status = vxge_hw_vpath_fw_api(vpath,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_READ_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_FW_MEMO,
-                       0, &data0, &data1, &steer_ctrl);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       fw_date->day =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_DAY(data0);
-       fw_date->month =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MONTH(data0);
-       fw_date->year =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_YEAR(data0);
-
-       snprintf(fw_date->date, VXGE_HW_FW_STRLEN, "%2.2d/%2.2d/%4.4d",
-                fw_date->month, fw_date->day, fw_date->year);
-
-       fw_version->major =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MAJOR(data0);
-       fw_version->minor =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MINOR(data0);
-       fw_version->build =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_BUILD(data0);
-
-       snprintf(fw_version->version, VXGE_HW_FW_STRLEN, "%d.%d.%d",
-                fw_version->major, fw_version->minor, fw_version->build);
-
-       flash_date->day =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_DAY(data1);
-       flash_date->month =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_MONTH(data1);
-       flash_date->year =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_YEAR(data1);
-
-       snprintf(flash_date->date, VXGE_HW_FW_STRLEN, "%2.2d/%2.2d/%4.4d",
-                flash_date->month, flash_date->day, flash_date->year);
-
-       flash_version->major =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_MAJOR(data1);
-       flash_version->minor =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_MINOR(data1);
-       flash_version->build =
-           (u32) VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_BUILD(data1);
-
-       snprintf(flash_version->version, VXGE_HW_FW_STRLEN, "%d.%d.%d",
-                flash_version->major, flash_version->minor,
-                flash_version->build);
-
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_card_info_get - Get the serial numbers,
- * part number and product description.
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_card_info_get(struct __vxge_hw_virtualpath *vpath,
-                             struct vxge_hw_device_hw_info *hw_info)
-{
-       __be64 *serial_number = (void *)hw_info->serial_number;
-       __be64 *product_desc = (void *)hw_info->product_desc;
-       __be64 *part_number = (void *)hw_info->part_number;
-       enum vxge_hw_status status;
-       u64 data0, data1 = 0, steer_ctrl = 0;
-       u32 i, j = 0;
-
-       data0 = VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_SERIAL_NUMBER;
-
-       status = vxge_hw_vpath_fw_api(vpath,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_READ_MEMO_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_FW_MEMO,
-                       0, &data0, &data1, &steer_ctrl);
-       if (status != VXGE_HW_OK)
-               return status;
-
-       serial_number[0] = cpu_to_be64(data0);
-       serial_number[1] = cpu_to_be64(data1);
-
-       data0 = VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_PART_NUMBER;
-       data1 = steer_ctrl = 0;
-
-       status = vxge_hw_vpath_fw_api(vpath,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_READ_MEMO_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_FW_MEMO,
-                       0, &data0, &data1, &steer_ctrl);
-       if (status != VXGE_HW_OK)
-               return status;
-
-       part_number[0] = cpu_to_be64(data0);
-       part_number[1] = cpu_to_be64(data1);
-
-       for (i = VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_DESC_0;
-            i <= VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_DESC_3; i++) {
-               data0 = i;
-               data1 = steer_ctrl = 0;
-
-               status = vxge_hw_vpath_fw_api(vpath,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_READ_MEMO_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_FW_MEMO,
-                       0, &data0, &data1, &steer_ctrl);
-               if (status != VXGE_HW_OK)
-                       return status;
-
-               product_desc[j++] = cpu_to_be64(data0);
-               product_desc[j++] = cpu_to_be64(data1);
-       }
-
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_pci_func_mode_get - Get the pci mode
- * Returns pci function mode
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_pci_func_mode_get(struct __vxge_hw_virtualpath *vpath,
-                                 struct vxge_hw_device_hw_info *hw_info)
-{
-       u64 data0, data1 = 0, steer_ctrl = 0;
-       enum vxge_hw_status status;
-
-       data0 = 0;
-
-       status = vxge_hw_vpath_fw_api(vpath,
-                       VXGE_HW_FW_API_GET_FUNC_MODE,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_FW_MEMO,
-                       0, &data0, &data1, &steer_ctrl);
-       if (status != VXGE_HW_OK)
-               return status;
-
-       hw_info->function_mode = VXGE_HW_GET_FUNC_MODE_VAL(data0);
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_addr_get - Get the hw address entry for this vpath
- *               from MAC address table.
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_addr_get(struct __vxge_hw_virtualpath *vpath,
-                        u8 *macaddr, u8 *macaddr_mask)
-{
-       u64 action = VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LIST_FIRST_ENTRY,
-           data0 = 0, data1 = 0, steer_ctrl = 0;
-       enum vxge_hw_status status;
-       int i;
-
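-       /* Walk the DA table entries until a valid MAC address is found. */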
-       do {
-               status = vxge_hw_vpath_fw_api(vpath, action,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
-                       0, &data0, &data1, &steer_ctrl);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-
-               data0 = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_DA_MAC_ADDR(data0);
-               data1 = VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_DA_MAC_ADDR_MASK(
-                                                                       data1);
-
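-               /* Unpack the MAC address and mask bytes from data0/data1. */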
-               for (i = ETH_ALEN; i > 0; i--) {
-                       macaddr[i - 1] = (u8) (data0 & 0xFF);
-                       data0 >>= 8;
-
-                       macaddr_mask[i - 1] = (u8) (data1 & 0xFF);
-                       data1 >>= 8;
-               }
-
-               action = VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LIST_NEXT_ENTRY;
-               data0 = data1 = steer_ctrl = 0;
-
-       } while (!is_valid_ether_addr(macaddr));
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_device_hw_info_get - Get the hw information
- * @bar0: memory mapped base address of the device's BAR0 register space
- * @hw_info: buffer that is filled in with the hardware information
- *
- * Returns the vpath mask that has the bits set for each vpath allocated
- * for the driver, FW version information, and the first mac address for
- * each vpath
- */
-enum vxge_hw_status
-vxge_hw_device_hw_info_get(void __iomem *bar0,
-                          struct vxge_hw_device_hw_info *hw_info)
-{
-       u32 i;
-       u64 val64;
-       struct vxge_hw_toc_reg __iomem *toc;
-       struct vxge_hw_mrpcim_reg __iomem *mrpcim_reg;
-       struct vxge_hw_common_reg __iomem *common_reg;
-       struct vxge_hw_vpmgmt_reg __iomem *vpmgmt_reg;
-       enum vxge_hw_status status;
-       struct __vxge_hw_virtualpath vpath;
-
-       memset(hw_info, 0, sizeof(struct vxge_hw_device_hw_info));
-
-       toc = __vxge_hw_device_toc_get(bar0);
-       if (toc == NULL) {
-               status = VXGE_HW_ERR_CRITICAL;
-               goto exit;
-       }
-
-       val64 = readq(&toc->toc_common_pointer);
-       common_reg = bar0 + val64;
-
-       status = __vxge_hw_device_vpath_reset_in_prog_check(
-               (u64 __iomem *)&common_reg->vpath_rst_in_prog);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       hw_info->vpath_mask = readq(&common_reg->vpath_assignments);
-
-       val64 = readq(&common_reg->host_type_assignments);
-
-       hw_info->host_type =
-          (u32)VXGE_HW_HOST_TYPE_ASSIGNMENTS_GET_HOST_TYPE_ASSIGNMENTS(val64);
-
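-       /* Use the first vpath assigned to this function to query the
-        * function mode, FW version and card info.
-        */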
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!((hw_info->vpath_mask) & vxge_mBIT(i)))
-                       continue;
-
-               val64 = readq(&toc->toc_vpmgmt_pointer[i]);
-
-               vpmgmt_reg = bar0 + val64;
-
-               hw_info->func_id = __vxge_hw_vpath_func_id_get(vpmgmt_reg);
-               if (__vxge_hw_device_access_rights_get(hw_info->host_type,
-                       hw_info->func_id) &
-                       VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM) {
-
-                       val64 = readq(&toc->toc_mrpcim_pointer);
-
-                       mrpcim_reg = bar0 + val64;
-
-                       writeq(0, &mrpcim_reg->xgmac_gen_fw_memo_mask);
-                       wmb();
-               }
-
-               val64 = readq(&toc->toc_vpath_pointer[i]);
-
-               spin_lock_init(&vpath.lock);
-               vpath.vp_reg = bar0 + val64;
-               vpath.vp_open = VXGE_HW_VP_NOT_OPEN;
-
-               status = __vxge_hw_vpath_pci_func_mode_get(&vpath, hw_info);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-
-               status = __vxge_hw_vpath_fw_ver_get(&vpath, hw_info);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-
-               status = __vxge_hw_vpath_card_info_get(&vpath, hw_info);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-
-               break;
-       }
-
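-       /* Read the first MAC address and mask of every assigned vpath. */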
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!((hw_info->vpath_mask) & vxge_mBIT(i)))
-                       continue;
-
-               val64 = readq(&toc->toc_vpath_pointer[i]);
-               vpath.vp_reg = bar0 + val64;
-               vpath.vp_open = VXGE_HW_VP_NOT_OPEN;
-
-               status =  __vxge_hw_vpath_addr_get(&vpath,
-                               hw_info->mac_addrs[i],
-                               hw_info->mac_addr_masks[i]);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-       }
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_blockpool_destroy - Deallocates the block pool
- */
-static void __vxge_hw_blockpool_destroy(struct __vxge_hw_blockpool *blockpool)
-{
-       struct __vxge_hw_device *hldev;
-       struct list_head *p, *n;
-
-       if (!blockpool)
-               return;
-
-       hldev = blockpool->hldev;
-
-       list_for_each_safe(p, n, &blockpool->free_block_list) {
-               dma_unmap_single(&hldev->pdev->dev,
-                                ((struct __vxge_hw_blockpool_entry *)p)->dma_addr,
-                                ((struct __vxge_hw_blockpool_entry *)p)->length,
-                                DMA_BIDIRECTIONAL);
-
-               vxge_os_dma_free(hldev->pdev,
-                       ((struct __vxge_hw_blockpool_entry *)p)->memblock,
-                       &((struct __vxge_hw_blockpool_entry *)p)->acc_handle);
-
-               list_del(&((struct __vxge_hw_blockpool_entry *)p)->item);
-               kfree(p);
-               blockpool->pool_size--;
-       }
-
-       list_for_each_safe(p, n, &blockpool->free_entry_list) {
-               list_del(&((struct __vxge_hw_blockpool_entry *)p)->item);
-               kfree(p);
-       }
-}
-
-/*
- * __vxge_hw_blockpool_create - Create block pool
- */
-static enum vxge_hw_status
-__vxge_hw_blockpool_create(struct __vxge_hw_device *hldev,
-                          struct __vxge_hw_blockpool *blockpool,
-                          u32 pool_size,
-                          u32 pool_max)
-{
-       u32 i;
-       struct __vxge_hw_blockpool_entry *entry = NULL;
-       void *memblock;
-       dma_addr_t dma_addr;
-       struct pci_dev *dma_handle;
-       struct pci_dev *acc_handle;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (blockpool == NULL) {
-               status = VXGE_HW_FAIL;
-               goto blockpool_create_exit;
-       }
-
-       blockpool->hldev = hldev;
-       blockpool->block_size = VXGE_HW_BLOCK_SIZE;
-       blockpool->pool_size = 0;
-       blockpool->pool_max = pool_max;
-       blockpool->req_out = 0;
-
-       INIT_LIST_HEAD(&blockpool->free_block_list);
-       INIT_LIST_HEAD(&blockpool->free_entry_list);
-
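-       /* Pre-allocate entries for the initial blocks and pool growth. */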
-       for (i = 0; i < pool_size + pool_max; i++) {
-               entry = kzalloc(sizeof(struct __vxge_hw_blockpool_entry),
-                               GFP_KERNEL);
-               if (entry == NULL) {
-                       __vxge_hw_blockpool_destroy(blockpool);
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
-                       goto blockpool_create_exit;
-               }
-               list_add(&entry->item, &blockpool->free_entry_list);
-       }
-
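-       /* Allocate and DMA-map the initial pool_size blocks. */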
-       for (i = 0; i < pool_size; i++) {
-               memblock = vxge_os_dma_malloc(
-                               hldev->pdev,
-                               VXGE_HW_BLOCK_SIZE,
-                               &dma_handle,
-                               &acc_handle);
-               if (memblock == NULL) {
-                       __vxge_hw_blockpool_destroy(blockpool);
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
-                       goto blockpool_create_exit;
-               }
-
-               dma_addr = dma_map_single(&hldev->pdev->dev, memblock,
-                                         VXGE_HW_BLOCK_SIZE,
-                                         DMA_BIDIRECTIONAL);
-               if (unlikely(dma_mapping_error(&hldev->pdev->dev, dma_addr))) {
-                       vxge_os_dma_free(hldev->pdev, memblock, &acc_handle);
-                       __vxge_hw_blockpool_destroy(blockpool);
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
-                       goto blockpool_create_exit;
-               }
-
-               if (!list_empty(&blockpool->free_entry_list))
-                       entry = (struct __vxge_hw_blockpool_entry *)
-                               list_first_entry(&blockpool->free_entry_list,
-                                       struct __vxge_hw_blockpool_entry,
-                                       item);
-
-               if (entry == NULL)
-                       entry =
-                           kzalloc(sizeof(struct __vxge_hw_blockpool_entry),
-                                       GFP_KERNEL);
-               if (entry != NULL) {
-                       list_del(&entry->item);
-                       entry->length = VXGE_HW_BLOCK_SIZE;
-                       entry->memblock = memblock;
-                       entry->dma_addr = dma_addr;
-                       entry->acc_handle = acc_handle;
-                       entry->dma_handle = dma_handle;
-                       list_add(&entry->item,
-                                         &blockpool->free_block_list);
-                       blockpool->pool_size++;
-               } else {
-                       __vxge_hw_blockpool_destroy(blockpool);
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
-                       goto blockpool_create_exit;
-               }
-       }
-
-blockpool_create_exit:
-       return status;
-}
-
-/*
- * __vxge_hw_device_fifo_config_check - Check fifo configuration.
- * Check the fifo configuration
- */
-static enum vxge_hw_status
-__vxge_hw_device_fifo_config_check(struct vxge_hw_fifo_config *fifo_config)
-{
-       if ((fifo_config->fifo_blocks < VXGE_HW_MIN_FIFO_BLOCKS) ||
-           (fifo_config->fifo_blocks > VXGE_HW_MAX_FIFO_BLOCKS))
-               return VXGE_HW_BADCFG_FIFO_BLOCKS;
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_device_vpath_config_check - Check vpath configuration.
- * Check the vpath configuration
- */
-static enum vxge_hw_status
-__vxge_hw_device_vpath_config_check(struct vxge_hw_vp_config *vp_config)
-{
-       enum vxge_hw_status status;
-
-       if ((vp_config->min_bandwidth < VXGE_HW_VPATH_BANDWIDTH_MIN) ||
-           (vp_config->min_bandwidth > VXGE_HW_VPATH_BANDWIDTH_MAX))
-               return VXGE_HW_BADCFG_VPATH_MIN_BANDWIDTH;
-
-       status = __vxge_hw_device_fifo_config_check(&vp_config->fifo);
-       if (status != VXGE_HW_OK)
-               return status;
-
-       if ((vp_config->mtu != VXGE_HW_VPATH_USE_FLASH_DEFAULT_INITIAL_MTU) &&
-               ((vp_config->mtu < VXGE_HW_VPATH_MIN_INITIAL_MTU) ||
-               (vp_config->mtu > VXGE_HW_VPATH_MAX_INITIAL_MTU)))
-               return VXGE_HW_BADCFG_VPATH_MTU;
-
-       if ((vp_config->rpa_strip_vlan_tag !=
-               VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_USE_FLASH_DEFAULT) &&
-               (vp_config->rpa_strip_vlan_tag !=
-               VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_ENABLE) &&
-               (vp_config->rpa_strip_vlan_tag !=
-               VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_DISABLE))
-               return VXGE_HW_BADCFG_VPATH_RPA_STRIP_VLAN_TAG;
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_device_config_check - Check device configuration.
- * Check the device configuration
- */
-static enum vxge_hw_status
-__vxge_hw_device_config_check(struct vxge_hw_device_config *new_config)
-{
-       u32 i;
-       enum vxge_hw_status status;
-
-       if ((new_config->intr_mode != VXGE_HW_INTR_MODE_IRQLINE) &&
-           (new_config->intr_mode != VXGE_HW_INTR_MODE_MSIX) &&
-           (new_config->intr_mode != VXGE_HW_INTR_MODE_MSIX_ONE_SHOT) &&
-           (new_config->intr_mode != VXGE_HW_INTR_MODE_DEF))
-               return VXGE_HW_BADCFG_INTR_MODE;
-
-       if ((new_config->rts_mac_en != VXGE_HW_RTS_MAC_DISABLE) &&
-           (new_config->rts_mac_en != VXGE_HW_RTS_MAC_ENABLE))
-               return VXGE_HW_BADCFG_RTS_MAC_EN;
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               status = __vxge_hw_device_vpath_config_check(
-                               &new_config->vp_config[i]);
-               if (status != VXGE_HW_OK)
-                       return status;
-       }
-
-       return VXGE_HW_OK;
-}
-
-/*
- * vxge_hw_device_initialize - Initialize Titan device.
- * Initialize Titan device. The driver cooperates with the OS to find a new
- * Titan device and to locate its PCI and memory spaces.
- *
- * This routine allocates a struct __vxge_hw_device, initializes it, and
- * returns it via the 'OUT' argument @devh so that the HW layer can perform
- * Titan hardware initialization.
- */
-enum vxge_hw_status
-vxge_hw_device_initialize(
-       struct __vxge_hw_device **devh,
-       struct vxge_hw_device_attr *attr,
-       struct vxge_hw_device_config *device_config)
-{
-       u32 i;
-       u32 nblocks = 0;
-       struct __vxge_hw_device *hldev = NULL;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       status = __vxge_hw_device_config_check(device_config);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       hldev = vzalloc(sizeof(struct __vxge_hw_device));
-       if (hldev == NULL) {
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto exit;
-       }
-
-       hldev->magic = VXGE_HW_DEVICE_MAGIC;
-
-       vxge_hw_device_debug_set(hldev, VXGE_ERR, VXGE_COMPONENT_ALL);
-
-       /* apply config */
-       memcpy(&hldev->config, device_config,
-               sizeof(struct vxge_hw_device_config));
-
-       hldev->bar0 = attr->bar0;
-       hldev->pdev = attr->pdev;
-
-       hldev->uld_callbacks = attr->uld_callbacks;
-
-       __vxge_hw_device_pci_e_init(hldev);
-
-       status = __vxge_hw_device_reg_addr_get(hldev);
-       if (status != VXGE_HW_OK) {
-               vfree(hldev);
-               goto exit;
-       }
-
-       __vxge_hw_device_host_info_get(hldev);
-
-       /* Incrementing for stats blocks */
-       nblocks++;
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!(hldev->vpath_assignments & vxge_mBIT(i)))
-                       continue;
-
-               if (device_config->vp_config[i].ring.enable ==
-                       VXGE_HW_RING_ENABLE)
-                       nblocks += device_config->vp_config[i].ring.ring_blocks;
-
-               if (device_config->vp_config[i].fifo.enable ==
-                       VXGE_HW_FIFO_ENABLE)
-                       nblocks += device_config->vp_config[i].fifo.fifo_blocks;
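-               /* One extra block per vpath (likely its stats block). */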
-               nblocks++;
-       }
-
-       if (__vxge_hw_blockpool_create(hldev,
-               &hldev->block_pool,
-               device_config->dma_blockpool_initial + nblocks,
-               device_config->dma_blockpool_max + nblocks) != VXGE_HW_OK) {
-
-               vxge_hw_device_terminate(hldev);
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto exit;
-       }
-
-       status = __vxge_hw_device_initialize(hldev);
-       if (status != VXGE_HW_OK) {
-               vxge_hw_device_terminate(hldev);
-               goto exit;
-       }
-
-       *devh = hldev;
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_device_terminate - Terminate Titan device.
- * Terminate HW device.
- */
-void
-vxge_hw_device_terminate(struct __vxge_hw_device *hldev)
-{
-       vxge_assert(hldev->magic == VXGE_HW_DEVICE_MAGIC);
-
-       hldev->magic = VXGE_HW_DEVICE_DEAD;
-       __vxge_hw_blockpool_destroy(&hldev->block_pool);
-       vfree(hldev);
-}
-
-/*
- * __vxge_hw_vpath_stats_access - Perform an operation on the statistics
- *                           at the given offset
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_stats_access(struct __vxge_hw_virtualpath *vpath,
-                            u32 operation, u32 offset, u64 *stat)
-{
-       u64 val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto vpath_stats_access_exit;
-       }
-
-       vp_reg = vpath->vp_reg;
-
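-       /* Compose the command: operation, offset select, and a strobe bit
-        * that the PIO write helper polls on for completion.
-        */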
-       val64 =  VXGE_HW_XMAC_STATS_ACCESS_CMD_OP(operation) |
-                VXGE_HW_XMAC_STATS_ACCESS_CMD_STROBE |
-                VXGE_HW_XMAC_STATS_ACCESS_CMD_OFFSET_SEL(offset);
-
-       status = __vxge_hw_pio_mem_write64(val64,
-                               &vp_reg->xmac_stats_access_cmd,
-                               VXGE_HW_XMAC_STATS_ACCESS_CMD_STROBE,
-                               vpath->hldev->config.device_poll_millis);
-       if ((status == VXGE_HW_OK) && (operation == VXGE_HW_STATS_OP_READ))
-               *stat = readq(&vp_reg->xmac_stats_access_data);
-       else
-               *stat = 0;
-
-vpath_stats_access_exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_xmac_tx_stats_get - Get the TX Statistics of a vpath
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_xmac_tx_stats_get(struct __vxge_hw_virtualpath *vpath,
-                       struct vxge_hw_xmac_vpath_tx_stats *vpath_tx_stats)
-{
-       u64 *val64;
-       int i;
-       u32 offset = VXGE_HW_STATS_VPATH_TX_OFFSET;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       val64 = (u64 *)vpath_tx_stats;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto exit;
-       }
-
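-       /* TX stat offsets are indexed in 64-bit words. */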
-       for (i = 0; i < sizeof(struct vxge_hw_xmac_vpath_tx_stats) / 8; i++) {
-               status = __vxge_hw_vpath_stats_access(vpath,
-                                       VXGE_HW_STATS_OP_READ,
-                                       offset, val64);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-               offset++;
-               val64++;
-       }
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_xmac_rx_stats_get - Get the RX Statistics of a vpath
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_xmac_rx_stats_get(struct __vxge_hw_virtualpath *vpath,
-                       struct vxge_hw_xmac_vpath_rx_stats *vpath_rx_stats)
-{
-       u64 *val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       int i;
-       u32 offset = VXGE_HW_STATS_VPATH_RX_OFFSET;
-
-       val64 = (u64 *) vpath_rx_stats;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto exit;
-       }
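-
-       /* RX stat offsets are byte-based; convert to a 64-bit word index. */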
-       for (i = 0; i < sizeof(struct vxge_hw_xmac_vpath_rx_stats) / 8; i++) {
-               status = __vxge_hw_vpath_stats_access(vpath,
-                                       VXGE_HW_STATS_OP_READ,
-                                       offset >> 3, val64);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-
-               offset += 8;
-               val64++;
-       }
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_stats_get - Get the vpath hw statistics.
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_stats_get(struct __vxge_hw_virtualpath *vpath,
-                         struct vxge_hw_vpath_stats_hw_info *hw_stats)
-{
-       u64 val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto exit;
-       }
-       vp_reg = vpath->vp_reg;
-
-       val64 = readq(&vp_reg->vpath_debug_stats0);
-       hw_stats->ini_num_mwr_sent =
-               (u32)VXGE_HW_VPATH_DEBUG_STATS0_GET_INI_NUM_MWR_SENT(val64);
-
-       val64 = readq(&vp_reg->vpath_debug_stats1);
-       hw_stats->ini_num_mrd_sent =
-               (u32)VXGE_HW_VPATH_DEBUG_STATS1_GET_INI_NUM_MRD_SENT(val64);
-
-       val64 = readq(&vp_reg->vpath_debug_stats2);
-       hw_stats->ini_num_cpl_rcvd =
-               (u32)VXGE_HW_VPATH_DEBUG_STATS2_GET_INI_NUM_CPL_RCVD(val64);
-
-       val64 = readq(&vp_reg->vpath_debug_stats3);
-       hw_stats->ini_num_mwr_byte_sent =
-               VXGE_HW_VPATH_DEBUG_STATS3_GET_INI_NUM_MWR_BYTE_SENT(val64);
-
-       val64 = readq(&vp_reg->vpath_debug_stats4);
-       hw_stats->ini_num_cpl_byte_rcvd =
-               VXGE_HW_VPATH_DEBUG_STATS4_GET_INI_NUM_CPL_BYTE_RCVD(val64);
-
-       val64 = readq(&vp_reg->vpath_debug_stats5);
-       hw_stats->wrcrdtarb_xoff =
-               (u32)VXGE_HW_VPATH_DEBUG_STATS5_GET_WRCRDTARB_XOFF(val64);
-
-       val64 = readq(&vp_reg->vpath_debug_stats6);
-       hw_stats->rdcrdtarb_xoff =
-               (u32)VXGE_HW_VPATH_DEBUG_STATS6_GET_RDCRDTARB_XOFF(val64);
-
-       val64 = readq(&vp_reg->vpath_genstats_count01);
-       hw_stats->vpath_genstats_count0 =
-       (u32)VXGE_HW_VPATH_GENSTATS_COUNT01_GET_PPIF_VPATH_GENSTATS_COUNT0(
-               val64);
-
-       val64 = readq(&vp_reg->vpath_genstats_count01);
-       hw_stats->vpath_genstats_count1 =
-       (u32)VXGE_HW_VPATH_GENSTATS_COUNT01_GET_PPIF_VPATH_GENSTATS_COUNT1(
-               val64);
-
-       val64 = readq(&vp_reg->vpath_genstats_count23);
-       hw_stats->vpath_genstats_count2 =
-       (u32)VXGE_HW_VPATH_GENSTATS_COUNT23_GET_PPIF_VPATH_GENSTATS_COUNT2(
-               val64);
-
-       val64 = readq(&vp_reg->vpath_genstats_count23);
-       hw_stats->vpath_genstats_count3 =
-       (u32)VXGE_HW_VPATH_GENSTATS_COUNT23_GET_PPIF_VPATH_GENSTATS_COUNT3(
-               val64);
-
-       val64 = readq(&vp_reg->vpath_genstats_count4);
-       hw_stats->vpath_genstats_count4 =
-       (u32)VXGE_HW_VPATH_GENSTATS_COUNT4_GET_PPIF_VPATH_GENSTATS_COUNT4(
-               val64);
-
-       val64 = readq(&vp_reg->vpath_genstats_count5);
-       hw_stats->vpath_genstats_count5 =
-       (u32)VXGE_HW_VPATH_GENSTATS_COUNT5_GET_PPIF_VPATH_GENSTATS_COUNT5(
-               val64);
-
-       status = __vxge_hw_vpath_xmac_tx_stats_get(vpath, &hw_stats->tx_stats);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       status = __vxge_hw_vpath_xmac_rx_stats_get(vpath, &hw_stats->rx_stats);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       VXGE_HW_VPATH_STATS_PIO_READ(
-               VXGE_HW_STATS_VPATH_PROG_EVENT_VNUM0_OFFSET);
-
-       hw_stats->prog_event_vnum0 =
-                       (u32)VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM0(val64);
-
-       hw_stats->prog_event_vnum1 =
-                       (u32)VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM1(val64);
-
-       VXGE_HW_VPATH_STATS_PIO_READ(
-               VXGE_HW_STATS_VPATH_PROG_EVENT_VNUM2_OFFSET);
-
-       hw_stats->prog_event_vnum2 =
-                       (u32)VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM2(val64);
-
-       hw_stats->prog_event_vnum3 =
-                       (u32)VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM3(val64);
-
-       val64 = readq(&vp_reg->rx_multi_cast_stats);
-       hw_stats->rx_multi_cast_frame_discard =
-               (u16)VXGE_HW_RX_MULTI_CAST_STATS_GET_FRAME_DISCARD(val64);
-
-       val64 = readq(&vp_reg->rx_frm_transferred);
-       hw_stats->rx_frm_transferred =
-               (u32)VXGE_HW_RX_FRM_TRANSFERRED_GET_RX_FRM_TRANSFERRED(val64);
-
-       val64 = readq(&vp_reg->rxd_returned);
-       hw_stats->rxd_returned =
-               (u16)VXGE_HW_RXD_RETURNED_GET_RXD_RETURNED(val64);
-
-       val64 = readq(&vp_reg->dbg_stats_rx_mpa);
-       hw_stats->rx_mpa_len_fail_frms =
-               (u16)VXGE_HW_DBG_STATS_GET_RX_MPA_LEN_FAIL_FRMS(val64);
-       hw_stats->rx_mpa_mrk_fail_frms =
-               (u16)VXGE_HW_DBG_STATS_GET_RX_MPA_MRK_FAIL_FRMS(val64);
-       hw_stats->rx_mpa_crc_fail_frms =
-               (u16)VXGE_HW_DBG_STATS_GET_RX_MPA_CRC_FAIL_FRMS(val64);
-
-       val64 = readq(&vp_reg->dbg_stats_rx_fau);
-       hw_stats->rx_permitted_frms =
-               (u16)VXGE_HW_DBG_STATS_GET_RX_FAU_RX_PERMITTED_FRMS(val64);
-       hw_stats->rx_vp_reset_discarded_frms =
-       (u16)VXGE_HW_DBG_STATS_GET_RX_FAU_RX_VP_RESET_DISCARDED_FRMS(val64);
-       hw_stats->rx_wol_frms =
-               (u16)VXGE_HW_DBG_STATS_GET_RX_FAU_RX_WOL_FRMS(val64);
-
-       val64 = readq(&vp_reg->tx_vp_reset_discarded_frms);
-       hw_stats->tx_vp_reset_discarded_frms =
-       (u16)VXGE_HW_TX_VP_RESET_DISCARDED_FRMS_GET_TX_VP_RESET_DISCARDED_FRMS(
-               val64);
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_device_stats_get - Get the device hw statistics.
- * Returns the vpath h/w stats for the device.
- */
-enum vxge_hw_status
-vxge_hw_device_stats_get(struct __vxge_hw_device *hldev,
-                       struct vxge_hw_device_stats_hw_info *hw_stats)
-{
-       u32 i;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!(hldev->vpaths_deployed & vxge_mBIT(i)) ||
-                       (hldev->virtual_paths[i].vp_open ==
-                               VXGE_HW_VP_NOT_OPEN))
-                       continue;
-
-               memcpy(hldev->virtual_paths[i].hw_stats_sav,
-                               hldev->virtual_paths[i].hw_stats,
-                               sizeof(struct vxge_hw_vpath_stats_hw_info));
-
-               status = __vxge_hw_vpath_stats_get(
-                       &hldev->virtual_paths[i],
-                       hldev->virtual_paths[i].hw_stats);
-       }
-
-       memcpy(hw_stats, &hldev->stats.hw_dev_info_stats,
-                       sizeof(struct vxge_hw_device_stats_hw_info));
-
-       return status;
-}
-
-/*
- * vxge_hw_driver_stats_get - Get the device sw statistics.
- * Returns the vpath s/w stats for the device.
- */
-enum vxge_hw_status vxge_hw_driver_stats_get(
-                       struct __vxge_hw_device *hldev,
-                       struct vxge_hw_device_stats_sw_info *sw_stats)
-{
-       memcpy(sw_stats, &hldev->stats.sw_dev_info_stats,
-               sizeof(struct vxge_hw_device_stats_sw_info));
-
-       return VXGE_HW_OK;
-}
-
-/*
- * vxge_hw_mrpcim_stats_access - Perform an operation on the statistics
- *                           at the given location and offset
- * Get the statistics from the given location and offset.
- */
-enum vxge_hw_status
-vxge_hw_mrpcim_stats_access(struct __vxge_hw_device *hldev,
-                           u32 operation, u32 location, u32 offset, u64 *stat)
-{
-       u64 val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       status = __vxge_hw_device_is_privilaged(hldev->host_type,
-                       hldev->func_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       val64 = VXGE_HW_XMAC_STATS_SYS_CMD_OP(operation) |
-               VXGE_HW_XMAC_STATS_SYS_CMD_STROBE |
-               VXGE_HW_XMAC_STATS_SYS_CMD_LOC_SEL(location) |
-               VXGE_HW_XMAC_STATS_SYS_CMD_OFFSET_SEL(offset);
-
-       status = __vxge_hw_pio_mem_write64(val64,
-                               &hldev->mrpcim_reg->xmac_stats_sys_cmd,
-                               VXGE_HW_XMAC_STATS_SYS_CMD_STROBE,
-                               hldev->config.device_poll_millis);
-
-       if ((status == VXGE_HW_OK) && (operation == VXGE_HW_STATS_OP_READ))
-               *stat = readq(&hldev->mrpcim_reg->xmac_stats_sys_data);
-       else
-               *stat = 0;
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_device_xmac_aggr_stats_get - Get the Statistics on aggregate port
- * Get the Statistics on aggregate port
- */
-static enum vxge_hw_status
-vxge_hw_device_xmac_aggr_stats_get(struct __vxge_hw_device *hldev, u32 port,
-                                  struct vxge_hw_xmac_aggr_stats *aggr_stats)
-{
-       u64 *val64;
-       int i;
-       u32 offset = VXGE_HW_STATS_AGGRn_OFFSET;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       val64 = (u64 *)aggr_stats;
-
-       status = __vxge_hw_device_is_privilaged(hldev->host_type,
-                       hldev->func_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       for (i = 0; i < sizeof(struct vxge_hw_xmac_aggr_stats) / 8; i++) {
-               status = vxge_hw_mrpcim_stats_access(hldev,
-                                       VXGE_HW_STATS_OP_READ,
-                                       VXGE_HW_STATS_LOC_AGGR,
-                                       ((offset + (104 * port)) >> 3), val64);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-
-               offset += 8;
-               val64++;
-       }
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_device_xmac_port_stats_get - Get the Statistics on a port
- * Get the Statistics on port
- */
-static enum vxge_hw_status
-vxge_hw_device_xmac_port_stats_get(struct __vxge_hw_device *hldev, u32 port,
-                                  struct vxge_hw_xmac_port_stats *port_stats)
-{
-       u64 *val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       int i;
-       u32 offset = 0x0;
-
-       val64 = (u64 *) port_stats;
-
-       status = __vxge_hw_device_is_privilaged(hldev->host_type,
-                       hldev->func_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       for (i = 0; i < sizeof(struct vxge_hw_xmac_port_stats) / 8; i++) {
-               status = vxge_hw_mrpcim_stats_access(hldev,
-                                       VXGE_HW_STATS_OP_READ,
-                                       VXGE_HW_STATS_LOC_AGGR,
-                                       ((offset + (608 * port)) >> 3), val64);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-
-               offset += 8;
-               val64++;
-       }
-
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_device_xmac_stats_get - Get the XMAC Statistics
- * Get the XMAC Statistics
- */
-enum vxge_hw_status
-vxge_hw_device_xmac_stats_get(struct __vxge_hw_device *hldev,
-                             struct vxge_hw_xmac_stats *xmac_stats)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       u32 i;
-
-       status = vxge_hw_device_xmac_aggr_stats_get(hldev,
-                                       0, &xmac_stats->aggr_stats[0]);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       status = vxge_hw_device_xmac_aggr_stats_get(hldev,
-                               1, &xmac_stats->aggr_stats[1]);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       for (i = 0; i <= VXGE_HW_MAC_MAX_MAC_PORT_ID; i++) {
-
-               status = vxge_hw_device_xmac_port_stats_get(hldev,
-                                       i, &xmac_stats->port_stats[i]);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-       }
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-
-               if (!(hldev->vpaths_deployed & vxge_mBIT(i)))
-                       continue;
-
-               status = __vxge_hw_vpath_xmac_tx_stats_get(
-                                       &hldev->virtual_paths[i],
-                                       &xmac_stats->vpath_tx_stats[i]);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-
-               status = __vxge_hw_vpath_xmac_rx_stats_get(
-                                       &hldev->virtual_paths[i],
-                                       &xmac_stats->vpath_rx_stats[i]);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-       }
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_device_debug_set - Set the debug module, level and timestamp
- * This routine is used to dynamically change the debug output
- */
-void vxge_hw_device_debug_set(struct __vxge_hw_device *hldev,
-                             enum vxge_debug_level level, u32 mask)
-{
-       if (hldev == NULL)
-               return;
-
-#if defined(VXGE_DEBUG_TRACE_MASK) || \
-       defined(VXGE_DEBUG_ERR_MASK)
-       hldev->debug_module_mask = mask;
-       hldev->debug_level = level;
-#endif
-
-#if defined(VXGE_DEBUG_ERR_MASK)
-       hldev->level_err = level & VXGE_ERR;
-#endif
-
-#if defined(VXGE_DEBUG_TRACE_MASK)
-       hldev->level_trace = level & VXGE_TRACE;
-#endif
-}
-
-/*
- * vxge_hw_device_error_level_get - Get the error level
- * This routine returns the current error level set
- */
-u32 vxge_hw_device_error_level_get(struct __vxge_hw_device *hldev)
-{
-#if defined(VXGE_DEBUG_ERR_MASK)
-       if (hldev == NULL)
-               return VXGE_ERR;
-       else
-               return hldev->level_err;
-#else
-       return 0;
-#endif
-}
-
-/*
- * vxge_hw_device_trace_level_get - Get the trace level
- * This routine returns the current trace level set
- */
-u32 vxge_hw_device_trace_level_get(struct __vxge_hw_device *hldev)
-{
-#if defined(VXGE_DEBUG_TRACE_MASK)
-       if (hldev == NULL)
-               return VXGE_TRACE;
-       else
-               return hldev->level_trace;
-#else
-       return 0;
-#endif
-}
-
-/*
- * vxge_hw_device_getpause_data - Pause frame generation and reception.
- * Returns the Pause frame generation and reception capability of the NIC.
- */
-enum vxge_hw_status vxge_hw_device_getpause_data(struct __vxge_hw_device *hldev,
-                                                u32 port, u32 *tx, u32 *rx)
-{
-       u64 val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((hldev == NULL) || (hldev->magic != VXGE_HW_DEVICE_MAGIC)) {
-               status = VXGE_HW_ERR_INVALID_DEVICE;
-               goto exit;
-       }
-
-       if (port > VXGE_HW_MAC_MAX_MAC_PORT_ID) {
-               status = VXGE_HW_ERR_INVALID_PORT;
-               goto exit;
-       }
-
-       if (!(hldev->access_rights & VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM)) {
-               status = VXGE_HW_ERR_PRIVILEGED_OPERATION;
-               goto exit;
-       }
-
-       val64 = readq(&hldev->mrpcim_reg->rxmac_pause_cfg_port[port]);
-       if (val64 & VXGE_HW_RXMAC_PAUSE_CFG_PORT_GEN_EN)
-               *tx = 1;
-       if (val64 & VXGE_HW_RXMAC_PAUSE_CFG_PORT_RCV_EN)
-               *rx = 1;
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_device_setpause_data - Set/reset pause frame generation.
- * It can be used to set or reset Pause frame generation or reception
- * support of the NIC.
- */
-enum vxge_hw_status vxge_hw_device_setpause_data(struct __vxge_hw_device *hldev,
-                                                u32 port, u32 tx, u32 rx)
-{
-       u64 val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((hldev == NULL) || (hldev->magic != VXGE_HW_DEVICE_MAGIC)) {
-               status = VXGE_HW_ERR_INVALID_DEVICE;
-               goto exit;
-       }
-
-       if (port > VXGE_HW_MAC_MAX_MAC_PORT_ID) {
-               status = VXGE_HW_ERR_INVALID_PORT;
-               goto exit;
-       }
-
-       status = __vxge_hw_device_is_privilaged(hldev->host_type,
-                       hldev->func_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       val64 = readq(&hldev->mrpcim_reg->rxmac_pause_cfg_port[port]);
-       if (tx)
-               val64 |= VXGE_HW_RXMAC_PAUSE_CFG_PORT_GEN_EN;
-       else
-               val64 &= ~VXGE_HW_RXMAC_PAUSE_CFG_PORT_GEN_EN;
-       if (rx)
-               val64 |= VXGE_HW_RXMAC_PAUSE_CFG_PORT_RCV_EN;
-       else
-               val64 &= ~VXGE_HW_RXMAC_PAUSE_CFG_PORT_RCV_EN;
-
-       writeq(val64, &hldev->mrpcim_reg->rxmac_pause_cfg_port[port]);
-exit:
-       return status;
-}
-
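-/*
- * vxge_hw_device_link_width_get - Get the negotiated PCIe link width
- * Returns the negotiated link width from the PCIe Link Status register.
- */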
-u16 vxge_hw_device_link_width_get(struct __vxge_hw_device *hldev)
-{
-       struct pci_dev *dev = hldev->pdev;
-       u16 lnk;
-
-       pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnk);
-       return (lnk & VXGE_HW_PCI_EXP_LNKCAP_LNK_WIDTH) >> 4;
-}
-
-/*
- * __vxge_hw_ring_block_memblock_idx - Return the memblock index
- * This function returns the index of memory block
- */
-static inline u32
-__vxge_hw_ring_block_memblock_idx(u8 *block)
-{
-       return (u32)*((u64 *)(block + VXGE_HW_RING_MEMBLOCK_IDX_OFFSET));
-}
-
-/*
- * __vxge_hw_ring_block_memblock_idx_set - Sets the memblock index
- * This function sets index to a memory block
- */
-static inline void
-__vxge_hw_ring_block_memblock_idx_set(u8 *block, u32 memblock_idx)
-{
-       *((u64 *)(block + VXGE_HW_RING_MEMBLOCK_IDX_OFFSET)) = memblock_idx;
-}
-
-/*
- * __vxge_hw_ring_block_next_pointer_set - Sets the next block pointer
- * in RxD block
- * Sets the next block pointer in RxD block
- */
-static inline void
-__vxge_hw_ring_block_next_pointer_set(u8 *block, dma_addr_t dma_next)
-{
-       *((u64 *)(block + VXGE_HW_RING_NEXT_BLOCK_POINTER_OFFSET)) = dma_next;
-}
-
-/*
- * __vxge_hw_ring_first_block_address_get - Returns the dma address of the
- *             first block
- * Returns the dma address of the first RxD block
- */
-static u64 __vxge_hw_ring_first_block_address_get(struct __vxge_hw_ring *ring)
-{
-       struct vxge_hw_mempool_dma *dma_object;
-
-       dma_object = ring->mempool->memblocks_dma_arr;
-       vxge_assert(dma_object != NULL);
-
-       return dma_object->addr;
-}
-
-/*
- * __vxge_hw_ring_item_dma_addr - Return the dma address of an item
- * This function returns the dma address of a given item
- */
-static dma_addr_t __vxge_hw_ring_item_dma_addr(struct vxge_hw_mempool *mempoolh,
-                                              void *item)
-{
-       u32 memblock_idx;
-       void *memblock;
-       struct vxge_hw_mempool_dma *memblock_dma_object;
-       ptrdiff_t dma_item_offset;
-
-       /* get owner memblock index */
-       memblock_idx = __vxge_hw_ring_block_memblock_idx(item);
-
-       /* get owner memblock by memblock index */
-       memblock = mempoolh->memblocks_arr[memblock_idx];
-
-       /* get memblock DMA object by memblock index */
-       memblock_dma_object = mempoolh->memblocks_dma_arr + memblock_idx;
-
-       /* calculate offset in the memblock of this item */
-       dma_item_offset = (u8 *)item - (u8 *)memblock;
-
-       return memblock_dma_object->addr + dma_item_offset;
-}
-
-/*
- * __vxge_hw_ring_rxdblock_link - Link the RxD blocks
- * This function links the "from" RxD block to the "to" RxD block
- */
-static void __vxge_hw_ring_rxdblock_link(struct vxge_hw_mempool *mempoolh,
-                                        struct __vxge_hw_ring *ring, u32 from,
-                                        u32 to)
-{
-       u8 *to_item, *from_item;
-       dma_addr_t to_dma;
-
-       /* get "from" RxD block */
-       from_item = mempoolh->items_arr[from];
-       vxge_assert(from_item);
-
-       /* get "to" RxD block */
-       to_item = mempoolh->items_arr[to];
-       vxge_assert(to_item);
-
-       /* get the DMA address of the beginning of the "to" RxD block */
-       to_dma = __vxge_hw_ring_item_dma_addr(mempoolh, to_item);
-
-       /* set the next pointer of the "from" RxD block to point to
-        * the "to" block's DMA start address */
-       __vxge_hw_ring_block_next_pointer_set(from_item, to_dma);
-}
-
-/*
- * __vxge_hw_ring_mempool_item_alloc - Allocate List blocks for RxD
- * block callback
- * This function is callback passed to __vxge_hw_mempool_create to create memory
- * pool for RxD block
- */
-static void
-__vxge_hw_ring_mempool_item_alloc(struct vxge_hw_mempool *mempoolh,
-                                 u32 memblock_index,
-                                 struct vxge_hw_mempool_dma *dma_object,
-                                 u32 index, u32 is_last)
-{
-       u32 i;
-       void *item = mempoolh->items_arr[index];
-       struct __vxge_hw_ring *ring =
-               (struct __vxge_hw_ring *)mempoolh->userdata;
-
-       /* format rxds array */
-       for (i = 0; i < ring->rxds_per_block; i++) {
-               void *rxdblock_priv;
-               void *uld_priv;
-               struct vxge_hw_ring_rxd_1 *rxdp;
-
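-               /* Fill the reserve array from the top down. */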
-               u32 reserve_index = ring->channel.reserve_ptr -
-                               (index * ring->rxds_per_block + i + 1);
-               u32 memblock_item_idx;
-
-               ring->channel.reserve_arr[reserve_index] = ((u8 *)item) +
-                                               i * ring->rxd_size;
-
-               /* Note: memblock_item_idx is index of the item within
-                *       the memblock. For instance, in case of three RxD-blocks
-                *       per memblock this value can be 0, 1 or 2. */
-               rxdblock_priv = __vxge_hw_mempool_item_priv(mempoolh,
-                                       memblock_index, item,
-                                       &memblock_item_idx);
-
-               rxdp = ring->channel.reserve_arr[reserve_index];
-
-               uld_priv = ((u8 *)rxdblock_priv + ring->rxd_priv_size * i);
-
-               /* pre-format Host_Control */
-               rxdp->host_control = (u64)(size_t)uld_priv;
-       }
-
-       __vxge_hw_ring_block_memblock_idx_set(item, memblock_index);
-
-       if (is_last) {
-               /* link last one with first one */
-               __vxge_hw_ring_rxdblock_link(mempoolh, ring, index, 0);
-       }
-
-       if (index > 0) {
-               /* link this RxD block with previous one */
-               __vxge_hw_ring_rxdblock_link(mempoolh, ring, index - 1, index);
-       }
-}
-
-/*
- * vxge_hw_ring_replenish - Initial replenish of RxDs
- * This function replenishes the RxDs from reserve array to work array
- */
-static enum vxge_hw_status
-vxge_hw_ring_replenish(struct __vxge_hw_ring *ring)
-{
-       void *rxd;
-       struct __vxge_hw_channel *channel;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       channel = &ring->channel;
-
-       while (vxge_hw_channel_dtr_count(channel) > 0) {
-
-               status = vxge_hw_ring_rxd_reserve(ring, &rxd);
-
-               vxge_assert(status == VXGE_HW_OK);
-
-               if (ring->rxd_init) {
-                       status = ring->rxd_init(rxd, channel->userdata);
-                       if (status != VXGE_HW_OK) {
-                               vxge_hw_ring_rxd_free(ring, rxd);
-                               goto exit;
-                       }
-               }
-
-               vxge_hw_ring_rxd_post(ring, rxd);
-       }
-       status = VXGE_HW_OK;
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_channel_allocate - Allocate memory for channel
- * This function allocates required memory for the channel and various arrays
- * in the channel
- */
-static struct __vxge_hw_channel *
-__vxge_hw_channel_allocate(struct __vxge_hw_vpath_handle *vph,
-                          enum __vxge_hw_channel_type type,
-                          u32 length, u32 per_dtr_space,
-                          void *userdata)
-{
-       struct __vxge_hw_channel *channel;
-       struct __vxge_hw_device *hldev;
-       int size = 0;
-       u32 vp_id;
-
-       hldev = vph->vpath->hldev;
-       vp_id = vph->vpath->vp_id;
-
-       switch (type) {
-       case VXGE_HW_CHANNEL_TYPE_FIFO:
-               size = sizeof(struct __vxge_hw_fifo);
-               break;
-       case VXGE_HW_CHANNEL_TYPE_RING:
-               size = sizeof(struct __vxge_hw_ring);
-               break;
-       default:
-               break;
-       }
-
-       channel = kzalloc(size, GFP_KERNEL);
-       if (channel == NULL)
-               goto exit0;
-       INIT_LIST_HEAD(&channel->item);
-
-       channel->common_reg = hldev->common_reg;
-       channel->first_vp_id = hldev->first_vp_id;
-       channel->type = type;
-       channel->devh = hldev;
-       channel->vph = vph;
-       channel->userdata = userdata;
-       channel->per_dtr_space = per_dtr_space;
-       channel->length = length;
-       channel->vp_id = vp_id;
-
-       channel->work_arr = kcalloc(length, sizeof(void *), GFP_KERNEL);
-       if (channel->work_arr == NULL)
-               goto exit1;
-
-       channel->free_arr = kcalloc(length, sizeof(void *), GFP_KERNEL);
-       if (channel->free_arr == NULL)
-               goto exit1;
-       channel->free_ptr = length;
-
-       channel->reserve_arr = kcalloc(length, sizeof(void *), GFP_KERNEL);
-       if (channel->reserve_arr == NULL)
-               goto exit1;
-       channel->reserve_ptr = length;
-       channel->reserve_top = 0;
-
-       channel->orig_arr = kcalloc(length, sizeof(void *), GFP_KERNEL);
-       if (channel->orig_arr == NULL)
-               goto exit1;
-
-       return channel;
-exit1:
-       __vxge_hw_channel_free(channel);
-
-exit0:
-       return NULL;
-}
-
-/*
- * vxge_hw_blockpool_block_add - callback for vxge_os_dma_malloc_async
- * Adds a block to block pool
- */
-static void vxge_hw_blockpool_block_add(struct __vxge_hw_device *devh,
-                                       void *block_addr,
-                                       u32 length,
-                                       struct pci_dev *dma_h,
-                                       struct pci_dev *acc_handle)
-{
-       struct __vxge_hw_blockpool *blockpool;
-       struct __vxge_hw_blockpool_entry *entry = NULL;
-       dma_addr_t dma_addr;
-
-       blockpool = &devh->block_pool;
-
-       if (block_addr == NULL) {
-               blockpool->req_out--;
-               goto exit;
-       }
-
-       dma_addr = dma_map_single(&devh->pdev->dev, block_addr, length,
-                                 DMA_BIDIRECTIONAL);
-
-       if (unlikely(dma_mapping_error(&devh->pdev->dev, dma_addr))) {
-               vxge_os_dma_free(devh->pdev, block_addr, &acc_handle);
-               blockpool->req_out--;
-               goto exit;
-       }
-
-       if (!list_empty(&blockpool->free_entry_list))
-               entry = (struct __vxge_hw_blockpool_entry *)
-                       list_first_entry(&blockpool->free_entry_list,
-                               struct __vxge_hw_blockpool_entry,
-                               item);
-
-       if (entry == NULL)
-               entry = vmalloc(sizeof(struct __vxge_hw_blockpool_entry));
-       else
-               list_del(&entry->item);
-
-       if (entry) {
-               entry->length = length;
-               entry->memblock = block_addr;
-               entry->dma_addr = dma_addr;
-               entry->acc_handle = acc_handle;
-               entry->dma_handle = dma_h;
-               list_add(&entry->item, &blockpool->free_block_list);
-               blockpool->pool_size++;
-       }
-
-       blockpool->req_out--;
-
-exit:
-       return;
-}
-
-static inline void
-vxge_os_dma_malloc_async(struct pci_dev *pdev, void *devh, unsigned long size)
-{
-       void *vaddr;
-
-       vaddr = kmalloc(size, GFP_KERNEL | GFP_DMA);
-       vxge_hw_blockpool_block_add(devh, vaddr, size, pdev, pdev);
-}
-
-/*
- * __vxge_hw_blockpool_blocks_add - Request additional blocks
- */
-static
-void __vxge_hw_blockpool_blocks_add(struct __vxge_hw_blockpool *blockpool)
-{
-       u32 nreq = 0, i;
-
-       if ((blockpool->pool_size  +  blockpool->req_out) <
-               VXGE_HW_MIN_DMA_BLOCK_POOL_SIZE) {
-               nreq = VXGE_HW_INCR_DMA_BLOCK_POOL_SIZE;
-               blockpool->req_out += nreq;
-       }
-
-       for (i = 0; i < nreq; i++)
-               vxge_os_dma_malloc_async(
-                       (blockpool->hldev)->pdev,
-                       blockpool->hldev, VXGE_HW_BLOCK_SIZE);
-}
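-
-/*
- * Invariant sketch (hypothetical helper, not part of the driver): a
- * top-up leaves pooled plus outstanding blocks covering the configured
- * minimum, which is what the check above maintains:
- */
-static inline bool
-__vxge_hw_blockpool_min_covered(const struct __vxge_hw_blockpool *blockpool)
-{
-       return (blockpool->pool_size + blockpool->req_out) >=
-                       VXGE_HW_MIN_DMA_BLOCK_POOL_SIZE;
-}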
-
-/*
- * __vxge_hw_blockpool_malloc - Allocate a memory block from the pool
- * Allocates a block of memory of the given size, either from the block
- * pool or by calling vxge_os_dma_malloc()
- */
-static void *__vxge_hw_blockpool_malloc(struct __vxge_hw_device *devh, u32 size,
-                                       struct vxge_hw_mempool_dma *dma_object)
-{
-       struct __vxge_hw_blockpool_entry *entry = NULL;
-       struct __vxge_hw_blockpool  *blockpool;
-       void *memblock = NULL;
-
-       blockpool = &devh->block_pool;
-
-       if (size != blockpool->block_size) {
-
-               memblock = vxge_os_dma_malloc(devh->pdev, size,
-                                               &dma_object->handle,
-                                               &dma_object->acc_handle);
-
-               if (!memblock)
-                       goto exit;
-
-               dma_object->addr = dma_map_single(&devh->pdev->dev, memblock,
-                                                 size, DMA_BIDIRECTIONAL);
-
-               if (unlikely(dma_mapping_error(&devh->pdev->dev, dma_object->addr))) {
-                       vxge_os_dma_free(devh->pdev, memblock,
-                               &dma_object->acc_handle);
-                       memblock = NULL;
-                       goto exit;
-               }
-
-       } else {
-
-               if (!list_empty(&blockpool->free_block_list))
-                       entry = (struct __vxge_hw_blockpool_entry *)
-                               list_first_entry(&blockpool->free_block_list,
-                                       struct __vxge_hw_blockpool_entry,
-                                       item);
-
-               if (entry != NULL) {
-                       list_del(&entry->item);
-                       dma_object->addr = entry->dma_addr;
-                       dma_object->handle = entry->dma_handle;
-                       dma_object->acc_handle = entry->acc_handle;
-                       memblock = entry->memblock;
-
-                       list_add(&entry->item,
-                               &blockpool->free_entry_list);
-                       blockpool->pool_size--;
-               }
-
-               if (memblock != NULL)
-                       __vxge_hw_blockpool_blocks_add(blockpool);
-       }
-exit:
-       return memblock;
-}
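-
-/*
- * Usage sketch (hypothetical, not part of the driver): block-sized
- * requests are served from the pool, any other size falls back to
- * vxge_os_dma_malloc(); both are released through the matching free
- * routine defined below.
- */
-static void __vxge_hw_blockpool_roundtrip(struct __vxge_hw_device *devh)
-{
-       struct vxge_hw_mempool_dma dma_object;
-       void *buf;
-
-       buf = __vxge_hw_blockpool_malloc(devh, VXGE_HW_BLOCK_SIZE, &dma_object);
-       if (buf)
-               __vxge_hw_blockpool_free(devh, buf, VXGE_HW_BLOCK_SIZE,
-                                        &dma_object);
-}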
-
-/*
- * __vxge_hw_blockpool_blocks_remove - Free additional blocks
- */
-static void
-__vxge_hw_blockpool_blocks_remove(struct __vxge_hw_blockpool *blockpool)
-{
-       struct list_head *p, *n;
-
-       list_for_each_safe(p, n, &blockpool->free_block_list) {
-
-               if (blockpool->pool_size < blockpool->pool_max)
-                       break;
-
-               dma_unmap_single(&(blockpool->hldev)->pdev->dev,
-                                ((struct __vxge_hw_blockpool_entry *)p)->dma_addr,
-                                ((struct __vxge_hw_blockpool_entry *)p)->length,
-                                DMA_BIDIRECTIONAL);
-
-               vxge_os_dma_free(
-                       (blockpool->hldev)->pdev,
-                       ((struct __vxge_hw_blockpool_entry *)p)->memblock,
-                       &((struct __vxge_hw_blockpool_entry *)p)->acc_handle);
-
-               list_del(&((struct __vxge_hw_blockpool_entry *)p)->item);
-
-               list_add(p, &blockpool->free_entry_list);
-
-               blockpool->pool_size--;
-
-       }
-}
-
-/*
- * __vxge_hw_blockpool_free - Frees the memory allocated with
- *                             __vxge_hw_blockpool_malloc
- */
-static void __vxge_hw_blockpool_free(struct __vxge_hw_device *devh,
-                                    void *memblock, u32 size,
-                                    struct vxge_hw_mempool_dma *dma_object)
-{
-       struct __vxge_hw_blockpool_entry *entry = NULL;
-       struct __vxge_hw_blockpool  *blockpool;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       blockpool = &devh->block_pool;
-
-       if (size != blockpool->block_size) {
-               dma_unmap_single(&devh->pdev->dev, dma_object->addr, size,
-                                DMA_BIDIRECTIONAL);
-               vxge_os_dma_free(devh->pdev, memblock, &dma_object->acc_handle);
-       } else {
-
-               if (!list_empty(&blockpool->free_entry_list))
-                       entry = (struct __vxge_hw_blockpool_entry *)
-                               list_first_entry(&blockpool->free_entry_list,
-                                       struct __vxge_hw_blockpool_entry,
-                                       item);
-
-               if (entry == NULL)
-                       entry = vmalloc(sizeof(
-                                       struct __vxge_hw_blockpool_entry));
-               else
-                       list_del(&entry->item);
-
-               if (entry != NULL) {
-                       entry->length = size;
-                       entry->memblock = memblock;
-                       entry->dma_addr = dma_object->addr;
-                       entry->acc_handle = dma_object->acc_handle;
-                       entry->dma_handle = dma_object->handle;
-                       list_add(&entry->item,
-                                       &blockpool->free_block_list);
-                       blockpool->pool_size++;
-                       status = VXGE_HW_OK;
-               } else
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
-
-               if (status == VXGE_HW_OK)
-                       __vxge_hw_blockpool_blocks_remove(blockpool);
-       }
-}
-
-/*
- * __vxge_hw_mempool_destroy
- */
-static void __vxge_hw_mempool_destroy(struct vxge_hw_mempool *mempool)
-{
-       u32 i, j;
-       struct __vxge_hw_device *devh = mempool->devh;
-
-       for (i = 0; i < mempool->memblocks_allocated; i++) {
-               struct vxge_hw_mempool_dma *dma_object;
-
-               vxge_assert(mempool->memblocks_arr[i]);
-               vxge_assert(mempool->memblocks_dma_arr + i);
-
-               dma_object = mempool->memblocks_dma_arr + i;
-
-               for (j = 0; j < mempool->items_per_memblock; j++) {
-                       u32 index = i * mempool->items_per_memblock + j;
-
-                       /* to skip the last partially filled (if any) memblock */
-                       if (index >= mempool->items_current)
-                               break;
-               }
-
-               vfree(mempool->memblocks_priv_arr[i]);
-
-               __vxge_hw_blockpool_free(devh, mempool->memblocks_arr[i],
-                               mempool->memblock_size, dma_object);
-       }
-
-       vfree(mempool->items_arr);
-       vfree(mempool->memblocks_dma_arr);
-       vfree(mempool->memblocks_priv_arr);
-       vfree(mempool->memblocks_arr);
-       vfree(mempool);
-}
-
-/*
- * __vxge_hw_mempool_grow
- * Grows the mempool by up to %num_allocate memblocks.
- */
-static enum vxge_hw_status
-__vxge_hw_mempool_grow(struct vxge_hw_mempool *mempool, u32 num_allocate,
-                      u32 *num_allocated)
-{
-       u32 i, first_time = mempool->memblocks_allocated == 0 ? 1 : 0;
-       u32 n_items = mempool->items_per_memblock;
-       u32 start_block_idx = mempool->memblocks_allocated;
-       u32 end_block_idx = mempool->memblocks_allocated + num_allocate;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       *num_allocated = 0;
-
-       if (end_block_idx > mempool->memblocks_max) {
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto exit;
-       }
-
-       for (i = start_block_idx; i < end_block_idx; i++) {
-               u32 j;
-               u32 is_last = ((end_block_idx - 1) == i);
-               struct vxge_hw_mempool_dma *dma_object =
-                       mempool->memblocks_dma_arr + i;
-               void *the_memblock;
-
-               /* allocate the memblock's private part. Each DMA memblock
-                * has space set aside for the items' private usage at the
-                * mempool user's request. Each time the mempool grows, it
-                * allocates a new memblock and its private part at once,
-                * which helps to minimize memory usage. */
-               mempool->memblocks_priv_arr[i] =
-                       vzalloc(array_size(mempool->items_priv_size, n_items));
-               if (mempool->memblocks_priv_arr[i] == NULL) {
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
-                       goto exit;
-               }
-
-               /* allocate DMA-capable memblock */
-               mempool->memblocks_arr[i] =
-                       __vxge_hw_blockpool_malloc(mempool->devh,
-                               mempool->memblock_size, dma_object);
-               if (mempool->memblocks_arr[i] == NULL) {
-                       vfree(mempool->memblocks_priv_arr[i]);
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
-                       goto exit;
-               }
-
-               (*num_allocated)++;
-               mempool->memblocks_allocated++;
-
-               memset(mempool->memblocks_arr[i], 0, mempool->memblock_size);
-
-               the_memblock = mempool->memblocks_arr[i];
-
-               /* fill the items hash array */
-               for (j = 0; j < n_items; j++) {
-                       u32 index = i * n_items + j;
-
-                       if (first_time && index >= mempool->items_initial)
-                               break;
-
-                       mempool->items_arr[index] =
-                               ((char *)the_memblock + j*mempool->item_size);
-
-                       /* let the caller do more work on each item */
-                       if (mempool->item_func_alloc != NULL)
-                               mempool->item_func_alloc(mempool, i,
-                                       dma_object, index, is_last);
-
-                       mempool->items_current = index + 1;
-               }
-
-               if (first_time && mempool->items_current ==
-                                       mempool->items_initial)
-                       break;
-       }
-exit:
-       return status;
-}
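-
-/*
- * Index-mapping sketch (hypothetical helpers, not part of the driver):
- * the grow loop above flattens (memblock i, slot j) into a single item
- * index; the inverse mapping is:
- */
-static inline u32
-vxge_mempool_block_of(const struct vxge_hw_mempool *mempool, u32 index)
-{
-       return index / mempool->items_per_memblock;
-}
-
-static inline u32
-vxge_mempool_slot_of(const struct vxge_hw_mempool *mempool, u32 index)
-{
-       return index % mempool->items_per_memblock;
-}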
-
-/*
- * __vxge_hw_mempool_create
- * This function creates a memory pool object. The pool may grow but
- * will never shrink. The pool consists of a number of dynamically
- * allocated blocks sized to hold %items_initial items. Memory is
- * DMA-able, but the client must map/unmap it before interoperating
- * with the device.
- */
-static struct vxge_hw_mempool *
-__vxge_hw_mempool_create(struct __vxge_hw_device *devh,
-                        u32 memblock_size,
-                        u32 item_size,
-                        u32 items_priv_size,
-                        u32 items_initial,
-                        u32 items_max,
-                        const struct vxge_hw_mempool_cbs *mp_callback,
-                        void *userdata)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       u32 memblocks_to_allocate;
-       struct vxge_hw_mempool *mempool = NULL;
-       u32 allocated;
-
-       if (memblock_size < item_size) {
-               status = VXGE_HW_FAIL;
-               goto exit;
-       }
-
-       mempool = vzalloc(sizeof(struct vxge_hw_mempool));
-       if (mempool == NULL) {
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto exit;
-       }
-
-       mempool->devh                   = devh;
-       mempool->memblock_size          = memblock_size;
-       mempool->items_max              = items_max;
-       mempool->items_initial          = items_initial;
-       mempool->item_size              = item_size;
-       mempool->items_priv_size        = items_priv_size;
-       mempool->item_func_alloc        = mp_callback->item_func_alloc;
-       mempool->userdata               = userdata;
-
-       mempool->memblocks_allocated = 0;
-
-       mempool->items_per_memblock = memblock_size / item_size;
-
-       mempool->memblocks_max = (items_max + mempool->items_per_memblock - 1) /
-                                       mempool->items_per_memblock;
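-       /* ceiling division; e.g. items_max = 1000 with 128 items per
-        * memblock gives (1000 + 127) / 128 = 8 memblocks (illustrative
-        * numbers only) */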
-
-       /* allocate array of memblocks */
-       mempool->memblocks_arr =
-               vzalloc(array_size(sizeof(void *), mempool->memblocks_max));
-       if (mempool->memblocks_arr == NULL) {
-               __vxge_hw_mempool_destroy(mempool);
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               mempool = NULL;
-               goto exit;
-       }
-
-       /* allocate array of private parts of items per memblocks */
-       mempool->memblocks_priv_arr =
-               vzalloc(array_size(sizeof(void *), mempool->memblocks_max));
-       if (mempool->memblocks_priv_arr == NULL) {
-               __vxge_hw_mempool_destroy(mempool);
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               mempool = NULL;
-               goto exit;
-       }
-
-       /* allocate array of memblocks DMA objects */
-       mempool->memblocks_dma_arr =
-               vzalloc(array_size(sizeof(struct vxge_hw_mempool_dma),
-                                  mempool->memblocks_max));
-       if (mempool->memblocks_dma_arr == NULL) {
-               __vxge_hw_mempool_destroy(mempool);
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               mempool = NULL;
-               goto exit;
-       }
-
-       /* allocate hash array of items */
-       mempool->items_arr = vzalloc(array_size(sizeof(void *),
-                                               mempool->items_max));
-       if (mempool->items_arr == NULL) {
-               __vxge_hw_mempool_destroy(mempool);
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               mempool = NULL;
-               goto exit;
-       }
-
-       /* calculate initial number of memblocks */
-       memblocks_to_allocate = (mempool->items_initial +
-                                mempool->items_per_memblock - 1) /
-                                               mempool->items_per_memblock;
-
-       /* pre-allocate the mempool */
-       status = __vxge_hw_mempool_grow(mempool, memblocks_to_allocate,
-                                       &allocated);
-       if (status != VXGE_HW_OK) {
-               __vxge_hw_mempool_destroy(mempool);
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               mempool = NULL;
-               goto exit;
-       }
-
-exit:
-       return mempool;
-}
-
-/*
- * __vxge_hw_ring_abort - Returns the RxDs
- * This function terminates the RxDs of the ring
- */
-static enum vxge_hw_status __vxge_hw_ring_abort(struct __vxge_hw_ring *ring)
-{
-       void *rxdh;
-       struct __vxge_hw_channel *channel;
-
-       channel = &ring->channel;
-
-       for (;;) {
-               vxge_hw_channel_dtr_try_complete(channel, &rxdh);
-
-               if (rxdh == NULL)
-                       break;
-
-               vxge_hw_channel_dtr_complete(channel);
-
-               if (ring->rxd_term)
-                       ring->rxd_term(rxdh, VXGE_HW_RXD_STATE_POSTED,
-                               channel->userdata);
-
-               vxge_hw_channel_dtr_free(channel, rxdh);
-       }
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_ring_reset - Resets the ring
- * This function resets the ring during a vpath reset operation
- */
-static enum vxge_hw_status __vxge_hw_ring_reset(struct __vxge_hw_ring *ring)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_channel *channel;
-
-       channel = &ring->channel;
-
-       __vxge_hw_ring_abort(ring);
-
-       status = __vxge_hw_channel_reset(channel);
-
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       if (ring->rxd_init) {
-               status = vxge_hw_ring_replenish(ring);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-       }
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_ring_delete - Removes the ring
- * This function frees up the memory pool and removes the ring
- */
-static enum vxge_hw_status
-__vxge_hw_ring_delete(struct __vxge_hw_vpath_handle *vp)
-{
-       struct __vxge_hw_ring *ring = vp->vpath->ringh;
-
-       __vxge_hw_ring_abort(ring);
-
-       if (ring->mempool)
-               __vxge_hw_mempool_destroy(ring->mempool);
-
-       vp->vpath->ringh = NULL;
-       __vxge_hw_channel_free(&ring->channel);
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_ring_create - Create a Ring
- * This function creates a Ring and initializes it.
- */
-static enum vxge_hw_status
-__vxge_hw_ring_create(struct __vxge_hw_vpath_handle *vp,
-                     struct vxge_hw_ring_attr *attr)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_ring *ring;
-       u32 ring_length;
-       struct vxge_hw_ring_config *config;
-       struct __vxge_hw_device *hldev;
-       u32 vp_id;
-       static const struct vxge_hw_mempool_cbs ring_mp_callback = {
-               .item_func_alloc = __vxge_hw_ring_mempool_item_alloc,
-       };
-
-       if ((vp == NULL) || (attr == NULL)) {
-               status = VXGE_HW_FAIL;
-               goto exit;
-       }
-
-       hldev = vp->vpath->hldev;
-       vp_id = vp->vpath->vp_id;
-
-       config = &hldev->config.vp_config[vp_id].ring;
-
-       ring_length = config->ring_blocks *
-                       vxge_hw_ring_rxds_per_block_get(config->buffer_mode);
-
-       ring = (struct __vxge_hw_ring *)__vxge_hw_channel_allocate(vp,
-                                               VXGE_HW_CHANNEL_TYPE_RING,
-                                               ring_length,
-                                               attr->per_rxd_space,
-                                               attr->userdata);
-       if (ring == NULL) {
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto exit;
-       }
-
-       vp->vpath->ringh = ring;
-       ring->vp_id = vp_id;
-       ring->vp_reg = vp->vpath->vp_reg;
-       ring->common_reg = hldev->common_reg;
-       ring->stats = &vp->vpath->sw_stats->ring_stats;
-       ring->config = config;
-       ring->callback = attr->callback;
-       ring->rxd_init = attr->rxd_init;
-       ring->rxd_term = attr->rxd_term;
-       ring->buffer_mode = config->buffer_mode;
-       ring->tim_rti_cfg1_saved = vp->vpath->tim_rti_cfg1_saved;
-       ring->tim_rti_cfg3_saved = vp->vpath->tim_rti_cfg3_saved;
-       ring->rxds_limit = config->rxds_limit;
-
-       ring->rxd_size = vxge_hw_ring_rxd_size_get(config->buffer_mode);
-       ring->rxd_priv_size =
-               sizeof(struct __vxge_hw_ring_rxd_priv) + attr->per_rxd_space;
-       ring->per_rxd_space = attr->per_rxd_space;
-
-       ring->rxd_priv_size =
-               ((ring->rxd_priv_size + VXGE_CACHE_LINE_SIZE - 1) /
-               VXGE_CACHE_LINE_SIZE) * VXGE_CACHE_LINE_SIZE;
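-       /* round up to a cache-line multiple; e.g. priv_size = 100 with a
-        * 64-byte line gives ((100 + 63) / 64) * 64 = 128 (illustrative
-        * numbers only) */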
-
-       /* how many RxDs can fit into one block. Depends on configured
-        * buffer_mode. */
-       ring->rxds_per_block =
-               vxge_hw_ring_rxds_per_block_get(config->buffer_mode);
-
-       /* calculate actual RxD block private size */
-       ring->rxdblock_priv_size = ring->rxd_priv_size * ring->rxds_per_block;
-       ring->mempool = __vxge_hw_mempool_create(hldev,
-                               VXGE_HW_BLOCK_SIZE,
-                               VXGE_HW_BLOCK_SIZE,
-                               ring->rxdblock_priv_size,
-                               ring->config->ring_blocks,
-                               ring->config->ring_blocks,
-                               &ring_mp_callback,
-                               ring);
-       if (ring->mempool == NULL) {
-               __vxge_hw_ring_delete(vp);
-               return VXGE_HW_ERR_OUT_OF_MEMORY;
-       }
-
-       status = __vxge_hw_channel_initialize(&ring->channel);
-       if (status != VXGE_HW_OK) {
-               __vxge_hw_ring_delete(vp);
-               goto exit;
-       }
-
-       /* Note:
-        * Specifying the rxd_init callback means two things:
-        * 1) rxds need to be initialized by the driver at channel-open time;
-        * 2) rxds need to be posted at channel-open time
-        *    (that's what the initial_replenish() below does)
-        * Currently we don't have a case where 1) is done without 2).
-        */
-       if (ring->rxd_init) {
-               status = vxge_hw_ring_replenish(ring);
-               if (status != VXGE_HW_OK) {
-                       __vxge_hw_ring_delete(vp);
-                       goto exit;
-               }
-       }
-
-       /* the initial replenish will increment the counter in its post()
-        * routine, so we have to reset it */
-       ring->stats->common_stats.usage_cnt = 0;
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_device_config_default_get - Initialize device config with defaults.
- * Initialize Titan device config with default values.
- */
-enum vxge_hw_status
-vxge_hw_device_config_default_get(struct vxge_hw_device_config *device_config)
-{
-       u32 i;
-
-       device_config->dma_blockpool_initial =
-                                       VXGE_HW_INITIAL_DMA_BLOCK_POOL_SIZE;
-       device_config->dma_blockpool_max = VXGE_HW_MAX_DMA_BLOCK_POOL_SIZE;
-       device_config->intr_mode = VXGE_HW_INTR_MODE_DEF;
-       device_config->rth_en = VXGE_HW_RTH_DEFAULT;
-       device_config->rth_it_type = VXGE_HW_RTH_IT_TYPE_DEFAULT;
-       device_config->device_poll_millis =  VXGE_HW_DEF_DEVICE_POLL_MILLIS;
-       device_config->rts_mac_en =  VXGE_HW_RTS_MAC_DEFAULT;
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               device_config->vp_config[i].vp_id = i;
-
-               device_config->vp_config[i].min_bandwidth =
-                               VXGE_HW_VPATH_BANDWIDTH_DEFAULT;
-
-               device_config->vp_config[i].ring.enable = VXGE_HW_RING_DEFAULT;
-
-               device_config->vp_config[i].ring.ring_blocks =
-                               VXGE_HW_DEF_RING_BLOCKS;
-
-               device_config->vp_config[i].ring.buffer_mode =
-                               VXGE_HW_RING_RXD_BUFFER_MODE_DEFAULT;
-
-               device_config->vp_config[i].ring.scatter_mode =
-                               VXGE_HW_RING_SCATTER_MODE_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].ring.rxds_limit =
-                               VXGE_HW_DEF_RING_RXDS_LIMIT;
-
-               device_config->vp_config[i].fifo.enable = VXGE_HW_FIFO_ENABLE;
-
-               device_config->vp_config[i].fifo.fifo_blocks =
-                               VXGE_HW_MIN_FIFO_BLOCKS;
-
-               device_config->vp_config[i].fifo.max_frags =
-                               VXGE_HW_MAX_FIFO_FRAGS;
-
-               device_config->vp_config[i].fifo.memblock_size =
-                               VXGE_HW_DEF_FIFO_MEMBLOCK_SIZE;
-
-               device_config->vp_config[i].fifo.alignment_size =
-                               VXGE_HW_DEF_FIFO_ALIGNMENT_SIZE;
-
-               device_config->vp_config[i].fifo.intr =
-                               VXGE_HW_FIFO_QUEUE_INTR_DEFAULT;
-
-               device_config->vp_config[i].fifo.no_snoop_bits =
-                               VXGE_HW_FIFO_NO_SNOOP_DEFAULT;
-               device_config->vp_config[i].tti.intr_enable =
-                               VXGE_HW_TIM_INTR_DEFAULT;
-
-               device_config->vp_config[i].tti.btimer_val =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.timer_ac_en =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.timer_ci_en =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.timer_ri_en =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.rtimer_val =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.util_sel =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.ltimer_val =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.urange_a =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.uec_a =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.urange_b =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.uec_b =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.urange_c =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.uec_c =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].tti.uec_d =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.intr_enable =
-                               VXGE_HW_TIM_INTR_DEFAULT;
-
-               device_config->vp_config[i].rti.btimer_val =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.timer_ac_en =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.timer_ci_en =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.timer_ri_en =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.rtimer_val =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.util_sel =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.ltimer_val =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.urange_a =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.uec_a =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.urange_b =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.uec_b =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.urange_c =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.uec_c =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].rti.uec_d =
-                               VXGE_HW_USE_FLASH_DEFAULT;
-
-               device_config->vp_config[i].mtu =
-                               VXGE_HW_VPATH_USE_FLASH_DEFAULT_INITIAL_MTU;
-
-               device_config->vp_config[i].rpa_strip_vlan_tag =
-                       VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_USE_FLASH_DEFAULT;
-       }
-
-       return VXGE_HW_OK;
-}
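-
-/*
- * Caller sketch (hypothetical, not part of the driver; the MSI-X
- * constant name is assumed): populate a config with defaults first,
- * then override selected fields.
- */
-static void vxge_example_config_init(struct vxge_hw_device_config *cfg)
-{
-       vxge_hw_device_config_default_get(cfg);
-       cfg->intr_mode = VXGE_HW_INTR_MODE_MSIX;        /* assumed constant */
-}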
-
-/*
- * __vxge_hw_vpath_swapper_set - Set the swapper bits for the vpath.
- * Enables byte swapping for the vpath on little-endian hosts.
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_swapper_set(struct vxge_hw_vpath_reg __iomem *vpath_reg)
-{
-#ifndef __BIG_ENDIAN
-       u64 val64;
-
-       val64 = readq(&vpath_reg->vpath_general_cfg1);
-       wmb();
-       val64 |= VXGE_HW_VPATH_GENERAL_CFG1_CTL_BYTE_SWAPEN;
-       writeq(val64, &vpath_reg->vpath_general_cfg1);
-       wmb();
-#endif
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_kdfc_swapper_set - Set the swapper bits for the kdfc.
- * Set the swapper bits appropriately for the KDFC FIFOs of the vpath.
- */
-static enum vxge_hw_status
-__vxge_hw_kdfc_swapper_set(struct vxge_hw_legacy_reg __iomem *legacy_reg,
-                          struct vxge_hw_vpath_reg __iomem *vpath_reg)
-{
-       u64 val64;
-
-       val64 = readq(&legacy_reg->pifm_wr_swap_en);
-
-       if (val64 == VXGE_HW_SWAPPER_WRITE_BYTE_SWAP_ENABLE) {
-               val64 = readq(&vpath_reg->kdfcctl_cfg0);
-               wmb();
-
-               val64 |= VXGE_HW_KDFCCTL_CFG0_BYTE_SWAPEN_FIFO0 |
-                       VXGE_HW_KDFCCTL_CFG0_BYTE_SWAPEN_FIFO1  |
-                       VXGE_HW_KDFCCTL_CFG0_BYTE_SWAPEN_FIFO2;
-
-               writeq(val64, &vpath_reg->kdfcctl_cfg0);
-               wmb();
-       }
-
-       return VXGE_HW_OK;
-}
-
-/*
- * vxge_hw_mgmt_reg_read - Read Titan register.
- */
-enum vxge_hw_status
-vxge_hw_mgmt_reg_read(struct __vxge_hw_device *hldev,
-                     enum vxge_hw_mgmt_reg_type type,
-                     u32 index, u32 offset, u64 *value)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((hldev == NULL) || (hldev->magic != VXGE_HW_DEVICE_MAGIC)) {
-               status = VXGE_HW_ERR_INVALID_DEVICE;
-               goto exit;
-       }
-
-       switch (type) {
-       case vxge_hw_mgmt_reg_type_legacy:
-               if (offset > sizeof(struct vxge_hw_legacy_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               *value = readq((void __iomem *)hldev->legacy_reg + offset);
-               break;
-       case vxge_hw_mgmt_reg_type_toc:
-               if (offset > sizeof(struct vxge_hw_toc_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               *value = readq((void __iomem *)hldev->toc_reg + offset);
-               break;
-       case vxge_hw_mgmt_reg_type_common:
-               if (offset > sizeof(struct vxge_hw_common_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               *value = readq((void __iomem *)hldev->common_reg + offset);
-               break;
-       case vxge_hw_mgmt_reg_type_mrpcim:
-               if (!(hldev->access_rights &
-                       VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM)) {
-                       status = VXGE_HW_ERR_PRIVILEGED_OPERATION;
-                       break;
-               }
-               if (offset > sizeof(struct vxge_hw_mrpcim_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               *value = readq((void __iomem *)hldev->mrpcim_reg + offset);
-               break;
-       case vxge_hw_mgmt_reg_type_srpcim:
-               if (!(hldev->access_rights &
-                       VXGE_HW_DEVICE_ACCESS_RIGHT_SRPCIM)) {
-                       status = VXGE_HW_ERR_PRIVILEGED_OPERATION;
-                       break;
-               }
-               if (index > VXGE_HW_TITAN_SRPCIM_REG_SPACES - 1) {
-                       status = VXGE_HW_ERR_INVALID_INDEX;
-                       break;
-               }
-               if (offset > sizeof(struct vxge_hw_srpcim_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               *value = readq((void __iomem *)hldev->srpcim_reg[index] +
-                               offset);
-               break;
-       case vxge_hw_mgmt_reg_type_vpmgmt:
-               if ((index > VXGE_HW_TITAN_VPMGMT_REG_SPACES - 1) ||
-                       (!(hldev->vpath_assignments & vxge_mBIT(index)))) {
-                       status = VXGE_HW_ERR_INVALID_INDEX;
-                       break;
-               }
-               if (offset > sizeof(struct vxge_hw_vpmgmt_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               *value = readq((void __iomem *)hldev->vpmgmt_reg[index] +
-                               offset);
-               break;
-       case vxge_hw_mgmt_reg_type_vpath:
-               if ((index > VXGE_HW_TITAN_VPATH_REG_SPACES - 1) ||
-                       (!(hldev->vpath_assignments & vxge_mBIT(index)))) {
-                       status = VXGE_HW_ERR_INVALID_INDEX;
-                       break;
-               }
-               if (offset > sizeof(struct vxge_hw_vpath_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               *value = readq((void __iomem *)hldev->vpath_reg[index] +
-                               offset);
-               break;
-       default:
-               status = VXGE_HW_ERR_INVALID_TYPE;
-               break;
-       }
-
-exit:
-       return status;
-}
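-
-/*
- * Usage sketch (hypothetical, not part of the driver): read the first
- * 64-bit word of the common register space; 'index' is ignored for the
- * non-indexed spaces.
- */
-static u64 vxge_example_read_common0(struct __vxge_hw_device *hldev)
-{
-       u64 val = 0;
-
-       if (vxge_hw_mgmt_reg_read(hldev, vxge_hw_mgmt_reg_type_common,
-                                 0, 0, &val) != VXGE_HW_OK)
-               return 0;
-
-       return val;
-}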
-
-/*
- * vxge_hw_vpath_strip_fcs_check - Check for FCS strip.
- */
-enum vxge_hw_status
-vxge_hw_vpath_strip_fcs_check(struct __vxge_hw_device *hldev, u64 vpath_mask)
-{
-       struct vxge_hw_vpmgmt_reg       __iomem *vpmgmt_reg;
-       int i = 0, j = 0;
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!((vpath_mask) & vxge_mBIT(i)))
-                       continue;
-               vpmgmt_reg = hldev->vpmgmt_reg[i];
-               for (j = 0; j < VXGE_HW_MAC_MAX_MAC_PORT_ID; j++) {
-                       if (readq(&vpmgmt_reg->rxmac_cfg0_port_vpmgmt_clone[j])
-                       & VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_STRIP_FCS)
-                               return VXGE_HW_FAIL;
-               }
-       }
-       return VXGE_HW_OK;
-}
-
-/*
- * vxge_hw_mgmt_reg_write - Write Titan register.
- */
-enum vxge_hw_status
-vxge_hw_mgmt_reg_write(struct __vxge_hw_device *hldev,
-                     enum vxge_hw_mgmt_reg_type type,
-                     u32 index, u32 offset, u64 value)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((hldev == NULL) || (hldev->magic != VXGE_HW_DEVICE_MAGIC)) {
-               status = VXGE_HW_ERR_INVALID_DEVICE;
-               goto exit;
-       }
-
-       switch (type) {
-       case vxge_hw_mgmt_reg_type_legacy:
-               if (offset > sizeof(struct vxge_hw_legacy_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               writeq(value, (void __iomem *)hldev->legacy_reg + offset);
-               break;
-       case vxge_hw_mgmt_reg_type_toc:
-               if (offset > sizeof(struct vxge_hw_toc_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               writeq(value, (void __iomem *)hldev->toc_reg + offset);
-               break;
-       case vxge_hw_mgmt_reg_type_common:
-               if (offset > sizeof(struct vxge_hw_common_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               writeq(value, (void __iomem *)hldev->common_reg + offset);
-               break;
-       case vxge_hw_mgmt_reg_type_mrpcim:
-               if (!(hldev->access_rights &
-                       VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM)) {
-                       status = VXGE_HW_ERR_PRIVILEGED_OPERATION;
-                       break;
-               }
-               if (offset > sizeof(struct vxge_hw_mrpcim_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               writeq(value, (void __iomem *)hldev->mrpcim_reg + offset);
-               break;
-       case vxge_hw_mgmt_reg_type_srpcim:
-               if (!(hldev->access_rights &
-                       VXGE_HW_DEVICE_ACCESS_RIGHT_SRPCIM)) {
-                       status = VXGE_HW_ERR_PRIVILEGED_OPERATION;
-                       break;
-               }
-               if (index > VXGE_HW_TITAN_SRPCIM_REG_SPACES - 1) {
-                       status = VXGE_HW_ERR_INVALID_INDEX;
-                       break;
-               }
-               if (offset > sizeof(struct vxge_hw_srpcim_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               writeq(value, (void __iomem *)hldev->srpcim_reg[index] +
-                       offset);
-
-               break;
-       case vxge_hw_mgmt_reg_type_vpmgmt:
-               if ((index > VXGE_HW_TITAN_VPMGMT_REG_SPACES - 1) ||
-                       (!(hldev->vpath_assignments & vxge_mBIT(index)))) {
-                       status = VXGE_HW_ERR_INVALID_INDEX;
-                       break;
-               }
-               if (offset > sizeof(struct vxge_hw_vpmgmt_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               writeq(value, (void __iomem *)hldev->vpmgmt_reg[index] +
-                       offset);
-               break;
-       case vxge_hw_mgmt_reg_type_vpath:
-               if ((index > VXGE_HW_TITAN_VPATH_REG_SPACES-1) ||
-                       (!(hldev->vpath_assignments & vxge_mBIT(index)))) {
-                       status = VXGE_HW_ERR_INVALID_INDEX;
-                       break;
-               }
-               if (offset > sizeof(struct vxge_hw_vpath_reg) - 8) {
-                       status = VXGE_HW_ERR_INVALID_OFFSET;
-                       break;
-               }
-               writeq(value, (void __iomem *)hldev->vpath_reg[index] +
-                       offset);
-               break;
-       default:
-               status = VXGE_HW_ERR_INVALID_TYPE;
-               break;
-       }
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_fifo_abort - Returns the TxDs
- * This function terminates the TxDs of the fifo
- */
-static enum vxge_hw_status __vxge_hw_fifo_abort(struct __vxge_hw_fifo *fifo)
-{
-       void *txdlh;
-
-       for (;;) {
-               vxge_hw_channel_dtr_try_complete(&fifo->channel, &txdlh);
-
-               if (txdlh == NULL)
-                       break;
-
-               vxge_hw_channel_dtr_complete(&fifo->channel);
-
-               if (fifo->txdl_term) {
-                       fifo->txdl_term(txdlh,
-                       VXGE_HW_TXDL_STATE_POSTED,
-                       fifo->channel.userdata);
-               }
-
-               vxge_hw_channel_dtr_free(&fifo->channel, txdlh);
-       }
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_fifo_reset - Resets the fifo
- * This function resets the fifo during a vpath reset operation
- */
-static enum vxge_hw_status __vxge_hw_fifo_reset(struct __vxge_hw_fifo *fifo)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       __vxge_hw_fifo_abort(fifo);
-       status = __vxge_hw_channel_reset(&fifo->channel);
-
-       return status;
-}
-
-/*
- * __vxge_hw_fifo_delete - Removes the FIFO
- * This function frees up the memory pool and removes the FIFO
- */
-static enum vxge_hw_status
-__vxge_hw_fifo_delete(struct __vxge_hw_vpath_handle *vp)
-{
-       struct __vxge_hw_fifo *fifo = vp->vpath->fifoh;
-
-       __vxge_hw_fifo_abort(fifo);
-
-       if (fifo->mempool)
-               __vxge_hw_mempool_destroy(fifo->mempool);
-
-       vp->vpath->fifoh = NULL;
-
-       __vxge_hw_channel_free(&fifo->channel);
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_fifo_mempool_item_alloc - Allocate list blocks for TxD
- * list callback
- * This function is the callback passed to __vxge_hw_mempool_create to
- * create the memory pool for the TxD list
- */
-static void
-__vxge_hw_fifo_mempool_item_alloc(
-       struct vxge_hw_mempool *mempoolh,
-       u32 memblock_index, struct vxge_hw_mempool_dma *dma_object,
-       u32 index, u32 is_last)
-{
-       u32 memblock_item_idx;
-       struct __vxge_hw_fifo_txdl_priv *txdl_priv;
-       struct vxge_hw_fifo_txd *txdp =
-               (struct vxge_hw_fifo_txd *)mempoolh->items_arr[index];
-       struct __vxge_hw_fifo *fifo =
-                       (struct __vxge_hw_fifo *)mempoolh->userdata;
-       void *memblock = mempoolh->memblocks_arr[memblock_index];
-
-       vxge_assert(txdp);
-
-       txdp->host_control = (u64) (size_t)
-       __vxge_hw_mempool_item_priv(mempoolh, memblock_index, txdp,
-                                       &memblock_item_idx);
-
-       txdl_priv = __vxge_hw_fifo_txdl_priv(fifo, txdp);
-
-       vxge_assert(txdl_priv);
-
-       fifo->channel.reserve_arr[fifo->channel.reserve_ptr - 1 - index] = txdp;
-
-       /* pre-format HW's TxDL's private */
-       txdl_priv->dma_offset = (char *)txdp - (char *)memblock;
-       txdl_priv->dma_addr = dma_object->addr + txdl_priv->dma_offset;
-       txdl_priv->dma_handle = dma_object->handle;
-       txdl_priv->memblock   = memblock;
-       txdl_priv->first_txdp = txdp;
-       txdl_priv->next_txdl_priv = NULL;
-       txdl_priv->alloc_frags = 0;
-}
-
-/*
- * __vxge_hw_fifo_create - Create a FIFO
- * This function creates a FIFO and initializes it.
- */
-static enum vxge_hw_status
-__vxge_hw_fifo_create(struct __vxge_hw_vpath_handle *vp,
-                     struct vxge_hw_fifo_attr *attr)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_fifo *fifo;
-       struct vxge_hw_fifo_config *config;
-       u32 txdl_size, txdl_per_memblock;
-       struct vxge_hw_mempool_cbs fifo_mp_callback;
-       struct __vxge_hw_virtualpath *vpath;
-
-       if ((vp == NULL) || (attr == NULL)) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-       vpath = vp->vpath;
-       config = &vpath->hldev->config.vp_config[vpath->vp_id].fifo;
-
-       txdl_size = config->max_frags * sizeof(struct vxge_hw_fifo_txd);
-
-       txdl_per_memblock = config->memblock_size / txdl_size;
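-       /* e.g. a 4096-byte memblock with 17 frags of (assumed) 32-byte
-        * TxDs gives txdl_size = 544 and 4096 / 544 = 7 TxDLs per
-        * memblock, the remainder being unused (illustrative sizes) */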
-
-       fifo = (struct __vxge_hw_fifo *)__vxge_hw_channel_allocate(vp,
-                                       VXGE_HW_CHANNEL_TYPE_FIFO,
-                                       config->fifo_blocks * txdl_per_memblock,
-                                       attr->per_txdl_space, attr->userdata);
-
-       if (fifo == NULL) {
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto exit;
-       }
-
-       vpath->fifoh = fifo;
-       fifo->nofl_db = vpath->nofl_db;
-
-       fifo->vp_id = vpath->vp_id;
-       fifo->vp_reg = vpath->vp_reg;
-       fifo->stats = &vpath->sw_stats->fifo_stats;
-
-       fifo->config = config;
-
-       /* apply "interrupts per txdl" attribute */
-       fifo->interrupt_type = VXGE_HW_FIFO_TXD_INT_TYPE_UTILZ;
-       fifo->tim_tti_cfg1_saved = vpath->tim_tti_cfg1_saved;
-       fifo->tim_tti_cfg3_saved = vpath->tim_tti_cfg3_saved;
-
-       if (fifo->config->intr)
-               fifo->interrupt_type = VXGE_HW_FIFO_TXD_INT_TYPE_PER_LIST;
-
-       fifo->no_snoop_bits = config->no_snoop_bits;
-
-       /*
-        * FIFO memory management strategy:
-        *
-        * TxDL split into three independent parts:
-        *      - set of TxDs
-        *      - TxD HW private part
-        *      - driver private part
-        *
-        * Adaptive memory allocation is used, i.e. memory is allocated on
-        * demand with a size that fits into one memory block. One memory
-        * block may contain more than one TxDL.
-        *
-        * During "reserve" operations more memory can be allocated on
-        * demand, for example due to a FIFO-full condition.
-        *
-        * The pool of memblocks never shrinks except in
-        * __vxge_hw_fifo_close, which essentially stops the channel and
-        * frees its resources.
-        */
-
-       /* TxDL common private size == TxDL private  +  driver private */
-       fifo->priv_size =
-               sizeof(struct __vxge_hw_fifo_txdl_priv) + attr->per_txdl_space;
-       fifo->priv_size = ((fifo->priv_size  +  VXGE_CACHE_LINE_SIZE - 1) /
-                       VXGE_CACHE_LINE_SIZE) * VXGE_CACHE_LINE_SIZE;
-
-       fifo->per_txdl_space = attr->per_txdl_space;
-
-       /* store the TxDL geometry computed above */
-       fifo->txdl_size = txdl_size;
-       fifo->txdl_per_memblock = txdl_per_memblock;
-
-       fifo->txdl_term = attr->txdl_term;
-       fifo->callback = attr->callback;
-
-       if (fifo->txdl_per_memblock == 0) {
-               __vxge_hw_fifo_delete(vp);
-               status = VXGE_HW_ERR_INVALID_BLOCK_SIZE;
-               goto exit;
-       }
-
-       fifo_mp_callback.item_func_alloc = __vxge_hw_fifo_mempool_item_alloc;
-
-       fifo->mempool =
-               __vxge_hw_mempool_create(vpath->hldev,
-                       fifo->config->memblock_size,
-                       fifo->txdl_size,
-                       fifo->priv_size,
-                       (fifo->config->fifo_blocks * fifo->txdl_per_memblock),
-                       (fifo->config->fifo_blocks * fifo->txdl_per_memblock),
-                       &fifo_mp_callback,
-                       fifo);
-
-       if (fifo->mempool == NULL) {
-               __vxge_hw_fifo_delete(vp);
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto exit;
-       }
-
-       status = __vxge_hw_channel_initialize(&fifo->channel);
-       if (status != VXGE_HW_OK) {
-               __vxge_hw_fifo_delete(vp);
-               goto exit;
-       }
-
-       vxge_assert(fifo->channel.reserve_ptr);
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_pci_read - Read the content of a given address
- *                          in PCI config space.
- * Read from the vpath PCI config space.
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_pci_read(struct __vxge_hw_virtualpath *vpath,
-                        u32 phy_func_0, u32 offset, u32 *val)
-{
-       u64 val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_hw_vpath_reg __iomem *vp_reg = vpath->vp_reg;
-
-       val64 = VXGE_HW_PCI_CONFIG_ACCESS_CFG1_ADDRESS(offset);
-
-       if (phy_func_0)
-               val64 |= VXGE_HW_PCI_CONFIG_ACCESS_CFG1_SEL_FUNC0;
-
-       writeq(val64, &vp_reg->pci_config_access_cfg1);
-       wmb();
-       writeq(VXGE_HW_PCI_CONFIG_ACCESS_CFG2_REQ,
-                       &vp_reg->pci_config_access_cfg2);
-       wmb();
-
-       status = __vxge_hw_device_register_poll(
-                       &vp_reg->pci_config_access_cfg2,
-                       VXGE_HW_INTR_MASK_ALL, VXGE_HW_DEF_DEVICE_POLL_MILLIS);
-
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       val64 = readq(&vp_reg->pci_config_access_status);
-
-       if (val64 & VXGE_HW_PCI_CONFIG_ACCESS_STATUS_ACCESS_ERR) {
-               status = VXGE_HW_FAIL;
-               *val = 0;
-       } else
-               *val = (u32)vxge_bVALn(val64, 32, 32);
-exit:
-       return status;
-}
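-
-/*
- * Usage sketch (hypothetical, not part of the driver): fetch the
- * vendor/device ID dword of physical function 0 through the vpath's
- * config-access window.
- */
-static u32 vxge_example_pci_id(struct __vxge_hw_virtualpath *vpath)
-{
-       u32 id = 0;
-
-       if (__vxge_hw_vpath_pci_read(vpath, 1, 0, &id) != VXGE_HW_OK)
-               return 0;
-
-       return id;
-}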
-
-/**
- * vxge_hw_device_flick_link_led - Flick (blink) link LED.
- * @hldev: HW device.
- * @on_off: TRUE to turn flickering on, FALSE to turn it off
- *
- * Flicker the link LED.
- */
-enum vxge_hw_status
-vxge_hw_device_flick_link_led(struct __vxge_hw_device *hldev, u64 on_off)
-{
-       struct __vxge_hw_virtualpath *vpath;
-       u64 data0, data1 = 0, steer_ctrl = 0;
-       enum vxge_hw_status status;
-
-       if (hldev == NULL) {
-               status = VXGE_HW_ERR_INVALID_DEVICE;
-               goto exit;
-       }
-
-       vpath = &hldev->virtual_paths[hldev->first_vp_id];
-
-       data0 = on_off;
-       status = vxge_hw_vpath_fw_api(vpath,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LED_CONTROL,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_FW_MEMO,
-                       0, &data0, &data1, &steer_ctrl);
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_rts_table_get - Get the entries from RTS access tables
- */
-enum vxge_hw_status
-__vxge_hw_vpath_rts_table_get(struct __vxge_hw_vpath_handle *vp,
-                             u32 action, u32 rts_table, u32 offset,
-                             u64 *data0, u64 *data1)
-{
-       enum vxge_hw_status status;
-       u64 steer_ctrl = 0;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       if ((rts_table ==
-            VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_SOLO_IT) ||
-           (rts_table ==
-            VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MULTI_IT) ||
-           (rts_table ==
-            VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MASK) ||
-           (rts_table ==
-            VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_RTH_KEY)) {
-               steer_ctrl = VXGE_HW_RTS_ACCESS_STEER_CTRL_TABLE_SEL;
-       }
-
-       status = vxge_hw_vpath_fw_api(vp->vpath, action, rts_table, offset,
-                                     data0, data1, &steer_ctrl);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       if ((rts_table != VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA) &&
-           (rts_table !=
-            VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MULTI_IT))
-               *data1 = 0;
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_rts_table_set - Set the entries of RTS access tables
- */
-enum vxge_hw_status
-__vxge_hw_vpath_rts_table_set(struct __vxge_hw_vpath_handle *vp, u32 action,
-                             u32 rts_table, u32 offset, u64 steer_data0,
-                             u64 steer_data1)
-{
-       u64 data0, data1 = 0, steer_ctrl = 0;
-       enum vxge_hw_status status;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       data0 = steer_data0;
-
-       if ((rts_table == VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA) ||
-           (rts_table ==
-            VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MULTI_IT))
-               data1 = steer_data1;
-
-       status = vxge_hw_vpath_fw_api(vp->vpath, action, rts_table, offset,
-                                     &data0, &data1, &steer_ctrl);
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_vpath_rts_rth_set - Set/configure RTS hashing.
- */
-enum vxge_hw_status vxge_hw_vpath_rts_rth_set(
-                       struct __vxge_hw_vpath_handle *vp,
-                       enum vxge_hw_rth_algoritms algorithm,
-                       struct vxge_hw_rth_hash_types *hash_type,
-                       u16 bucket_size)
-{
-       u64 data0, data1;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_rts_table_get(vp,
-                    VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_READ_ENTRY,
-                    VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_RTH_GEN_CFG,
-                       0, &data0, &data1);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       data0 &= ~(VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_BUCKET_SIZE(0xf) |
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ALG_SEL(0x3));
-
-       data0 |= VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_EN |
-       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_BUCKET_SIZE(bucket_size) |
-       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ALG_SEL(algorithm);
-
-       if (hash_type->hash_type_tcpipv4_en)
-               data0 |= VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_TCP_IPV4_EN;
-
-       if (hash_type->hash_type_ipv4_en)
-               data0 |= VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_IPV4_EN;
-
-       if (hash_type->hash_type_tcpipv6_en)
-               data0 |= VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_TCP_IPV6_EN;
-
-       if (hash_type->hash_type_ipv6_en)
-               data0 |= VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_IPV6_EN;
-
-       if (hash_type->hash_type_tcpipv6ex_en)
-               data0 |=
-               VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_TCP_IPV6_EX_EN;
-
-       if (hash_type->hash_type_ipv6ex_en)
-               data0 |= VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_IPV6_EX_EN;
-
-       if (VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_ACTIVE_TABLE(data0))
-               data0 &= ~VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ACTIVE_TABLE;
-       else
-               data0 |= VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ACTIVE_TABLE;
-
-       status = __vxge_hw_vpath_rts_table_set(vp,
-               VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_WRITE_ENTRY,
-               VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_RTH_GEN_CFG,
-               0, data0, 0);
-exit:
-       return status;
-}
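-
-/*
- * Usage sketch (hypothetical, not part of the driver; RTH_ALG_JENKINS
- * is an assumed enum value): enable Jenkins-hash RTH for TCP/IPv4 with
- * a 64-entry (2^6) bucket table.
- */
-static enum vxge_hw_status
-vxge_example_rth_enable(struct __vxge_hw_vpath_handle *vp)
-{
-       struct vxge_hw_rth_hash_types hash_types = {
-               .hash_type_tcpipv4_en = 1,
-       };
-
-       return vxge_hw_vpath_rts_rth_set(vp, RTH_ALG_JENKINS, &hash_types, 6);
-}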
-
-static void
-vxge_hw_rts_rth_data0_data1_get(u32 j, u64 *data0, u64 *data1,
-                               u16 flag, u8 *itable)
-{
-       switch (flag) {
-       case 1:
-               *data0 = VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM0_BUCKET_NUM(j)|
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM0_ENTRY_EN |
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM0_BUCKET_DATA(
-                       itable[j]);
-               fallthrough;
-       case 2:
-               *data0 |=
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM1_BUCKET_NUM(j)|
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM1_ENTRY_EN |
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM1_BUCKET_DATA(
-                       itable[j]);
-               fallthrough;
-       case 3:
-               *data1 = VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM0_BUCKET_NUM(j)|
-                       VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM0_ENTRY_EN |
-                       VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM0_BUCKET_DATA(
-                       itable[j]);
-               fallthrough;
-       case 4:
-               *data1 |=
-                       VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM1_BUCKET_NUM(j)|
-                       VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM1_ENTRY_EN |
-                       VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM1_BUCKET_DATA(
-                       itable[j]);
-               return;
-       default:
-               return;
-       }
-}
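-
-/*
- * Packing sketch (illustrative only): flags 1..4 place bucket j into
- * the four item slots of one (data0, data1) pair, so each firmware
- * write issued below can carry up to four indirection-table entries:
- *
- *   data0 = ITEM0(j0) | ITEM1(j1);
- *   data1 = ITEM0(j2) | ITEM1(j3);
- */
-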
-/*
- * vxge_hw_vpath_rts_rth_itable_set - Set/configure indirection table (IT).
- */
-enum vxge_hw_status vxge_hw_vpath_rts_rth_itable_set(
-                       struct __vxge_hw_vpath_handle **vpath_handles,
-                       u32 vpath_count,
-                       u8 *mtable,
-                       u8 *itable,
-                       u32 itable_size)
-{
-       u32 i, j, action, rts_table;
-       u64 data0;
-       u64 data1;
-       u32 max_entries;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_vpath_handle *vp = vpath_handles[0];
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       max_entries = (((u32)1) << itable_size);
-
-       if (vp->vpath->hldev->config.rth_it_type
-                               == VXGE_HW_RTH_IT_TYPE_SOLO_IT) {
-               action = VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_WRITE_ENTRY;
-               rts_table =
-                       VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_SOLO_IT;
-
-               for (j = 0; j < max_entries; j++) {
-
-                       data1 = 0;
-
-                       data0 =
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_SOLO_IT_BUCKET_DATA(
-                               itable[j]);
-
-                       status = __vxge_hw_vpath_rts_table_set(vpath_handles[0],
-                               action, rts_table, j, data0, data1);
-
-                       if (status != VXGE_HW_OK)
-                               goto exit;
-               }
-
-               for (j = 0; j < max_entries; j++) {
-
-                       data1 = 0;
-
-                       data0 =
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_SOLO_IT_ENTRY_EN |
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_SOLO_IT_BUCKET_DATA(
-                               itable[j]);
-
-                       status = __vxge_hw_vpath_rts_table_set(
-                               vpath_handles[mtable[itable[j]]], action,
-                               rts_table, j, data0, data1);
-
-                       if (status != VXGE_HW_OK)
-                               goto exit;
-               }
-       } else {
-               action = VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_WRITE_ENTRY;
-               rts_table =
-                       VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MULTI_IT;
-               for (i = 0; i < vpath_count; i++) {
-
-                       for (j = 0; j < max_entries;) {
-
-                               data0 = 0;
-                               data1 = 0;
-
-                               while (j < max_entries) {
-                                       if (mtable[itable[j]] != i) {
-                                               j++;
-                                               continue;
-                                       }
-                                       vxge_hw_rts_rth_data0_data1_get(j,
-                                               &data0, &data1, 1, itable);
-                                       j++;
-                                       break;
-                               }
-
-                               while (j < max_entries) {
-                                       if (mtable[itable[j]] != i) {
-                                               j++;
-                                               continue;
-                                       }
-                                       vxge_hw_rts_rth_data0_data1_get(j,
-                                               &data0, &data1, 2, itable);
-                                       j++;
-                                       break;
-                               }
-
-                               while (j < max_entries) {
-                                       if (mtable[itable[j]] != i) {
-                                               j++;
-                                               continue;
-                                       }
-                                       vxge_hw_rts_rth_data0_data1_get(j,
-                                               &data0, &data1, 3, itable);
-                                       j++;
-                                       break;
-                               }
-
-                               while (j < max_entries) {
-                                       if (mtable[itable[j]] != i) {
-                                               j++;
-                                               continue;
-                                       }
-                                       vxge_hw_rts_rth_data0_data1_get(j,
-                                               &data0, &data1, 4, itable);
-                                       j++;
-                                       break;
-                               }
-
-                               if (data0 != 0) {
-                                       status = __vxge_hw_vpath_rts_table_set(
-                                                       vpath_handles[i],
-                                                       action, rts_table,
-                                                       0, data0, data1);
-
-                                       if (status != VXGE_HW_OK)
-                                               goto exit;
-                               }
-                       }
-               }
-       }
-exit:
-       return status;
-}
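
/*
 * A compact, functionally equivalent sketch of one packing pass from the
 * multi-IT branch above, assuming the same locals plus an extra "item"
 * counter: the four consecutive while-loops simply pack up to four itable
 * entries owned by vpath i into a single data0/data1 pair (items 1 and 2
 * in data0, items 3 and 4 in data1) before one steering write is issued.
 */
        u32 item;

        for (item = 1; item <= 4 && j < max_entries; j++) {
                if (mtable[itable[j]] != i)
                        continue;
                vxge_hw_rts_rth_data0_data1_get(j, &data0, &data1,
                                                item++, itable);
        }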
-
-/**
- * vxge_hw_vpath_check_leak - Check for memory leak
- * @ring: Handle to the ring object used for receive
- *
- * If PRC_RXD_DOORBELL_VPn.NEW_QW_CNT is larger than or equal to
- * PRC_CFG6_VPn.RXD_SPAT then a leak has occurred.
- * Returns: VXGE_HW_FAIL if a leak has occurred.
- *
- */
-enum vxge_hw_status
-vxge_hw_vpath_check_leak(struct __vxge_hw_ring *ring)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       u64 rxd_new_count, rxd_spat;
-
-       if (ring == NULL)
-               return status;
-
-       rxd_new_count = readl(&ring->vp_reg->prc_rxd_doorbell);
-       rxd_spat = readq(&ring->vp_reg->prc_cfg6);
-       rxd_spat = VXGE_HW_PRC_CFG6_RXD_SPAT(rxd_spat);
-
-       if (rxd_new_count >= rxd_spat)
-               status = VXGE_HW_FAIL;
-
-       return status;
-}
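
/*
 * A usage sketch (not from the driver): a caller might poll the leak check
 * and request a vpath reset when it trips.  "vp" is a hypothetical handle
 * obtained from vxge_hw_vpath_open(); its receive ring is vp->vpath->ringh.
 */
static void example_rxd_leak_watchdog(struct __vxge_hw_vpath_handle *vp)
{
        if (vxge_hw_vpath_check_leak(vp->vpath->ringh) == VXGE_HW_FAIL)
                vxge_hw_vpath_reset(vp);
}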
-
-/*
- * __vxge_hw_vpath_mgmt_read
- * This routine reads the vpath_mgmt registers
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_mgmt_read(
-       struct __vxge_hw_device *hldev,
-       struct __vxge_hw_virtualpath *vpath)
-{
-       u32 i, mtu = 0, max_pyld = 0;
-       u64 val64;
-
-       for (i = 0; i < VXGE_HW_MAC_MAX_MAC_PORT_ID; i++) {
-
-               val64 = readq(&vpath->vpmgmt_reg->
-                               rxmac_cfg0_port_vpmgmt_clone[i]);
-               max_pyld =
-                       (u32)
-                       VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_GET_MAX_PYLD_LEN
-                       (val64);
-               if (mtu < max_pyld)
-                       mtu = max_pyld;
-       }
-
-       vpath->max_mtu = mtu + VXGE_HW_MAC_HEADER_MAX_SIZE;
-
-       val64 = readq(&vpath->vpmgmt_reg->xmac_vsport_choices_vp);
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (val64 & vxge_mBIT(i))
-                       vpath->vsport_number = i;
-       }
-
-       val64 = readq(&vpath->vpmgmt_reg->xgmac_gen_status_vpmgmt_clone);
-
-       if (val64 & VXGE_HW_XGMAC_GEN_STATUS_VPMGMT_CLONE_XMACJ_NTWK_OK)
-               VXGE_HW_DEVICE_LINK_STATE_SET(vpath->hldev, VXGE_HW_LINK_UP);
-       else
-               VXGE_HW_DEVICE_LINK_STATE_SET(vpath->hldev, VXGE_HW_LINK_DOWN);
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_vpath_reset_check - Check if resetting the vpath completed
- * This routine checks the vpath_rst_in_prog register to see if the
- * adapter has completed the reset process for the vpath
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_reset_check(struct __vxge_hw_virtualpath *vpath)
-{
-       enum vxge_hw_status status;
-
-       status = __vxge_hw_device_register_poll(
-                       &vpath->hldev->common_reg->vpath_rst_in_prog,
-                       VXGE_HW_VPATH_RST_IN_PROG_VPATH_RST_IN_PROG(
-                               1 << (16 - vpath->vp_id)),
-                       vpath->hldev->config.device_poll_millis);
-
-       return status;
-}
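
/*
 * The mask above follows the Titan convention of numbering vpath bits
 * downward from bit 16: vpath 0 maps to 1 << 16, vpath 1 to 1 << 15, and
 * so on.  Worked example for vp_id == 3:
 *
 *     1 << (16 - 3) == 1 << 13 == 0x2000
 */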
-
-/*
- * __vxge_hw_vpath_reset
- * This routine resets the vpath on the device
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_reset(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       u64 val64;
-
-       val64 = VXGE_HW_CMN_RSTHDLR_CFG0_SW_RESET_VPATH(1 << (16 - vp_id));
-
-       __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(val64, 0, 32),
-                               &hldev->common_reg->cmn_rsthdlr_cfg0);
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_vpath_sw_reset
- * This routine resets the vpath structures
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_sw_reset(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_virtualpath *vpath;
-
-       vpath = &hldev->virtual_paths[vp_id];
-
-       if (vpath->ringh) {
-               status = __vxge_hw_ring_reset(vpath->ringh);
-               if (status != VXGE_HW_OK)
-                       goto exit;
-       }
-
-       if (vpath->fifoh)
-               status = __vxge_hw_fifo_reset(vpath->fifoh);
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_prc_configure
- * This routine configures the PRC registers of the virtual path using the
- * config passed
- */
-static void
-__vxge_hw_vpath_prc_configure(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath;
-       struct vxge_hw_vp_config *vp_config;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-
-       vpath = &hldev->virtual_paths[vp_id];
-       vp_reg = vpath->vp_reg;
-       vp_config = vpath->vp_config;
-
-       if (vp_config->ring.enable == VXGE_HW_RING_DISABLE)
-               return;
-
-       val64 = readq(&vp_reg->prc_cfg1);
-       val64 |= VXGE_HW_PRC_CFG1_RTI_TINT_DISABLE;
-       writeq(val64, &vp_reg->prc_cfg1);
-
-       val64 = readq(&vpath->vp_reg->prc_cfg6);
-       val64 |= VXGE_HW_PRC_CFG6_DOORBELL_MODE_EN;
-       writeq(val64, &vpath->vp_reg->prc_cfg6);
-
-       val64 = readq(&vp_reg->prc_cfg7);
-
-       if (vpath->vp_config->ring.scatter_mode !=
-               VXGE_HW_RING_SCATTER_MODE_USE_FLASH_DEFAULT) {
-
-               val64 &= ~VXGE_HW_PRC_CFG7_SCATTER_MODE(0x3);
-
-               switch (vpath->vp_config->ring.scatter_mode) {
-               case VXGE_HW_RING_SCATTER_MODE_A:
-                       val64 |= VXGE_HW_PRC_CFG7_SCATTER_MODE(
-                                       VXGE_HW_PRC_CFG7_SCATTER_MODE_A);
-                       break;
-               case VXGE_HW_RING_SCATTER_MODE_B:
-                       val64 |= VXGE_HW_PRC_CFG7_SCATTER_MODE(
-                                       VXGE_HW_PRC_CFG7_SCATTER_MODE_B);
-                       break;
-               case VXGE_HW_RING_SCATTER_MODE_C:
-                       val64 |= VXGE_HW_PRC_CFG7_SCATTER_MODE(
-                                       VXGE_HW_PRC_CFG7_SCATTER_MODE_C);
-                       break;
-               }
-       }
-
-       writeq(val64, &vp_reg->prc_cfg7);
-
-       writeq(VXGE_HW_PRC_CFG5_RXD0_ADD(
-                               __vxge_hw_ring_first_block_address_get(
-                                       vpath->ringh) >> 3), &vp_reg->prc_cfg5);
-
-       val64 = readq(&vp_reg->prc_cfg4);
-       val64 |= VXGE_HW_PRC_CFG4_IN_SVC;
-       val64 &= ~VXGE_HW_PRC_CFG4_RING_MODE(0x3);
-
-       val64 |= VXGE_HW_PRC_CFG4_RING_MODE(
-                       VXGE_HW_PRC_CFG4_RING_MODE_ONE_BUFFER);
-
-       if (hldev->config.rth_en == VXGE_HW_RTH_DISABLE)
-               val64 |= VXGE_HW_PRC_CFG4_RTH_DISABLE;
-       else
-               val64 &= ~VXGE_HW_PRC_CFG4_RTH_DISABLE;
-
-       writeq(val64, &vp_reg->prc_cfg4);
-}
-
-/*
- * __vxge_hw_vpath_kdfc_configure
- * This routine configures the KDFC registers of the virtual path using the
- * config passed
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_kdfc_configure(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       u64 val64;
-       u64 vpath_stride;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_virtualpath *vpath;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-
-       vpath = &hldev->virtual_paths[vp_id];
-       vp_reg = vpath->vp_reg;
-       status = __vxge_hw_kdfc_swapper_set(hldev->legacy_reg, vp_reg);
-
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       val64 = readq(&vp_reg->kdfc_drbl_triplet_total);
-
-       vpath->max_kdfc_db =
-               (u32)VXGE_HW_KDFC_DRBL_TRIPLET_TOTAL_GET_KDFC_MAX_SIZE(
-                       val64+1)/2;
-
-       if (vpath->vp_config->fifo.enable == VXGE_HW_FIFO_ENABLE) {
-
-               vpath->max_nofl_db = vpath->max_kdfc_db;
-
-               if (vpath->max_nofl_db <
-                       ((vpath->vp_config->fifo.memblock_size /
-                       (vpath->vp_config->fifo.max_frags *
-                       sizeof(struct vxge_hw_fifo_txd))) *
-                       vpath->vp_config->fifo.fifo_blocks)) {
-
-                       return VXGE_HW_BADCFG_FIFO_BLOCKS;
-               }
-               val64 = VXGE_HW_KDFC_FIFO_TRPL_PARTITION_LENGTH_0(
-                               (vpath->max_nofl_db*2)-1);
-       }
-
-       writeq(val64, &vp_reg->kdfc_fifo_trpl_partition);
-
-       writeq(VXGE_HW_KDFC_FIFO_TRPL_CTRL_TRIPLET_ENABLE,
-               &vp_reg->kdfc_fifo_trpl_ctrl);
-
-       val64 = readq(&vp_reg->kdfc_trpl_fifo_0_ctrl);
-
-       val64 &= ~(VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_MODE(0x3) |
-                  VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_SELECT(0xFF));
-
-       val64 |= VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_MODE(
-                VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_MODE_NON_OFFLOAD_ONLY) |
-#ifndef __BIG_ENDIAN
-                VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_SWAP_EN |
-#endif
-                VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_SELECT(0);
-
-       writeq(val64, &vp_reg->kdfc_trpl_fifo_0_ctrl);
-       writeq((u64)0, &vp_reg->kdfc_trpl_fifo_0_wb_address);
-       wmb();
-       vpath_stride = readq(&hldev->toc_reg->toc_kdfc_vpath_stride);
-
-       vpath->nofl_db =
-               (struct __vxge_hw_non_offload_db_wrapper __iomem *)
-               (hldev->kdfc + (vp_id *
-               VXGE_HW_TOC_KDFC_VPATH_STRIDE_GET_TOC_KDFC_VPATH_STRIDE(
-                                       vpath_stride)));
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vpath_mac_configure
- * This routine configures the MAC of the virtual path using the config passed
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_mac_configure(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath;
-       struct vxge_hw_vp_config *vp_config;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-
-       vpath = &hldev->virtual_paths[vp_id];
-       vp_reg = vpath->vp_reg;
-       vp_config = vpath->vp_config;
-
-       writeq(VXGE_HW_XMAC_VSPORT_CHOICE_VSPORT_NUMBER(
-                       vpath->vsport_number), &vp_reg->xmac_vsport_choice);
-
-       if (vp_config->ring.enable == VXGE_HW_RING_ENABLE) {
-
-               val64 = readq(&vp_reg->xmac_rpa_vcfg);
-
-               if (vp_config->rpa_strip_vlan_tag !=
-                       VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_USE_FLASH_DEFAULT) {
-                       if (vp_config->rpa_strip_vlan_tag)
-                               val64 |= VXGE_HW_XMAC_RPA_VCFG_STRIP_VLAN_TAG;
-                       else
-                               val64 &= ~VXGE_HW_XMAC_RPA_VCFG_STRIP_VLAN_TAG;
-               }
-
-               writeq(val64, &vp_reg->xmac_rpa_vcfg);
-               val64 = readq(&vp_reg->rxmac_vcfg0);
-
-               if (vp_config->mtu !=
-                               VXGE_HW_VPATH_USE_FLASH_DEFAULT_INITIAL_MTU) {
-                       val64 &= ~VXGE_HW_RXMAC_VCFG0_RTS_MAX_FRM_LEN(0x3fff);
-                       if ((vp_config->mtu  +
-                               VXGE_HW_MAC_HEADER_MAX_SIZE) < vpath->max_mtu)
-                               val64 |= VXGE_HW_RXMAC_VCFG0_RTS_MAX_FRM_LEN(
-                                       vp_config->mtu  +
-                                       VXGE_HW_MAC_HEADER_MAX_SIZE);
-                       else
-                               val64 |= VXGE_HW_RXMAC_VCFG0_RTS_MAX_FRM_LEN(
-                                       vpath->max_mtu);
-               }
-
-               writeq(val64, &vp_reg->rxmac_vcfg0);
-
-               val64 = readq(&vp_reg->rxmac_vcfg1);
-
-               val64 &= ~(VXGE_HW_RXMAC_VCFG1_RTS_RTH_MULTI_IT_BD_MODE(0x3) |
-                       VXGE_HW_RXMAC_VCFG1_RTS_RTH_MULTI_IT_EN_MODE);
-
-               if (hldev->config.rth_it_type ==
-                               VXGE_HW_RTH_IT_TYPE_MULTI_IT) {
-                       val64 |= VXGE_HW_RXMAC_VCFG1_RTS_RTH_MULTI_IT_BD_MODE(
-                               0x2) |
-                               VXGE_HW_RXMAC_VCFG1_RTS_RTH_MULTI_IT_EN_MODE;
-               }
-
-               writeq(val64, &vp_reg->rxmac_vcfg1);
-       }
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_vpath_tim_configure
- * This routine configures the TIM registers of the virtual path using the
- * config passed
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_tim_configure(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-       struct vxge_hw_vp_config *config;
-
-       vpath = &hldev->virtual_paths[vp_id];
-       vp_reg = vpath->vp_reg;
-       config = vpath->vp_config;
-
-       writeq(0, &vp_reg->tim_dest_addr);
-       writeq(0, &vp_reg->tim_vpath_map);
-       writeq(0, &vp_reg->tim_bitmap);
-       writeq(0, &vp_reg->tim_remap);
-
-       if (config->ring.enable == VXGE_HW_RING_ENABLE)
-               writeq(VXGE_HW_TIM_RING_ASSN_INT_NUM(
-                       (vp_id * VXGE_HW_MAX_INTR_PER_VP) +
-                       VXGE_HW_VPATH_INTR_RX), &vp_reg->tim_ring_assn);
-
-       val64 = readq(&vp_reg->tim_pci_cfg);
-       val64 |= VXGE_HW_TIM_PCI_CFG_ADD_PAD;
-       writeq(val64, &vp_reg->tim_pci_cfg);
-
-       if (config->fifo.enable == VXGE_HW_FIFO_ENABLE) {
-
-               val64 = readq(&vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_TX]);
-
-               if (config->tti.btimer_val != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_BTIMER_VAL(
-                               0x3ffffff);
-                       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_BTIMER_VAL(
-                                       config->tti.btimer_val);
-               }
-
-               val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_BITMP_EN;
-
-               if (config->tti.timer_ac_en != VXGE_HW_USE_FLASH_DEFAULT) {
-                       if (config->tti.timer_ac_en)
-                               val64 |= VXGE_HW_TIM_CFG1_INT_NUM_TIMER_AC;
-                       else
-                               val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_TIMER_AC;
-               }
-
-               if (config->tti.timer_ci_en != VXGE_HW_USE_FLASH_DEFAULT) {
-                       if (config->tti.timer_ci_en)
-                               val64 |= VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI;
-                       else
-                               val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI;
-               }
-
-               if (config->tti.urange_a != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_URNG_A(0x3f);
-                       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_URNG_A(
-                                       config->tti.urange_a);
-               }
-
-               if (config->tti.urange_b != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_URNG_B(0x3f);
-                       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_URNG_B(
-                                       config->tti.urange_b);
-               }
-
-               if (config->tti.urange_c != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_URNG_C(0x3f);
-                       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_URNG_C(
-                                       config->tti.urange_c);
-               }
-
-               writeq(val64, &vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_TX]);
-               vpath->tim_tti_cfg1_saved = val64;
-
-               val64 = readq(&vp_reg->tim_cfg2_int_num[VXGE_HW_VPATH_INTR_TX]);
-
-               if (config->tti.uec_a != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG2_INT_NUM_UEC_A(0xffff);
-                       val64 |= VXGE_HW_TIM_CFG2_INT_NUM_UEC_A(
-                                               config->tti.uec_a);
-               }
-
-               if (config->tti.uec_b != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG2_INT_NUM_UEC_B(0xffff);
-                       val64 |= VXGE_HW_TIM_CFG2_INT_NUM_UEC_B(
-                                               config->tti.uec_b);
-               }
-
-               if (config->tti.uec_c != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG2_INT_NUM_UEC_C(0xffff);
-                       val64 |= VXGE_HW_TIM_CFG2_INT_NUM_UEC_C(
-                                               config->tti.uec_c);
-               }
-
-               if (config->tti.uec_d != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG2_INT_NUM_UEC_D(0xffff);
-                       val64 |= VXGE_HW_TIM_CFG2_INT_NUM_UEC_D(
-                                               config->tti.uec_d);
-               }
-
-               writeq(val64, &vp_reg->tim_cfg2_int_num[VXGE_HW_VPATH_INTR_TX]);
-               val64 = readq(&vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_TX]);
-
-               if (config->tti.timer_ri_en != VXGE_HW_USE_FLASH_DEFAULT) {
-                       if (config->tti.timer_ri_en)
-                               val64 |= VXGE_HW_TIM_CFG3_INT_NUM_TIMER_RI;
-                       else
-                               val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_TIMER_RI;
-               }
-
-               if (config->tti.rtimer_val != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(
-                                       0x3ffffff);
-                       val64 |= VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(
-                                       config->tti.rtimer_val);
-               }
-
-               if (config->tti.util_sel != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_UTIL_SEL(0x3f);
-                       val64 |= VXGE_HW_TIM_CFG3_INT_NUM_UTIL_SEL(vp_id);
-               }
-
-               if (config->tti.ltimer_val != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_LTIMER_VAL(
-                                       0x3ffffff);
-                       val64 |= VXGE_HW_TIM_CFG3_INT_NUM_LTIMER_VAL(
-                                       config->tti.ltimer_val);
-               }
-
-               writeq(val64, &vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_TX]);
-               vpath->tim_tti_cfg3_saved = val64;
-       }
-
-       if (config->ring.enable == VXGE_HW_RING_ENABLE) {
-
-               val64 = readq(&vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_RX]);
-
-               if (config->rti.btimer_val != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_BTIMER_VAL(
-                                       0x3ffffff);
-                       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_BTIMER_VAL(
-                                       config->rti.btimer_val);
-               }
-
-               val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_BITMP_EN;
-
-               if (config->rti.timer_ac_en != VXGE_HW_USE_FLASH_DEFAULT) {
-                       if (config->rti.timer_ac_en)
-                               val64 |= VXGE_HW_TIM_CFG1_INT_NUM_TIMER_AC;
-                       else
-                               val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_TIMER_AC;
-               }
-
-               if (config->rti.timer_ci_en != VXGE_HW_USE_FLASH_DEFAULT) {
-                       if (config->rti.timer_ci_en)
-                               val64 |= VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI;
-                       else
-                               val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI;
-               }
-
-               if (config->rti.urange_a != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_URNG_A(0x3f);
-                       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_URNG_A(
-                                       config->rti.urange_a);
-               }
-
-               if (config->rti.urange_b != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_URNG_B(0x3f);
-                       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_URNG_B(
-                                       config->rti.urange_b);
-               }
-
-               if (config->rti.urange_c != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG1_INT_NUM_URNG_C(0x3f);
-                       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_URNG_C(
-                                       config->rti.urange_c);
-               }
-
-               writeq(val64, &vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_RX]);
-               vpath->tim_rti_cfg1_saved = val64;
-
-               val64 = readq(&vp_reg->tim_cfg2_int_num[VXGE_HW_VPATH_INTR_RX]);
-
-               if (config->rti.uec_a != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG2_INT_NUM_UEC_A(0xffff);
-                       val64 |= VXGE_HW_TIM_CFG2_INT_NUM_UEC_A(
-                                               config->rti.uec_a);
-               }
-
-               if (config->rti.uec_b != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG2_INT_NUM_UEC_B(0xffff);
-                       val64 |= VXGE_HW_TIM_CFG2_INT_NUM_UEC_B(
-                                               config->rti.uec_b);
-               }
-
-               if (config->rti.uec_c != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG2_INT_NUM_UEC_C(0xffff);
-                       val64 |= VXGE_HW_TIM_CFG2_INT_NUM_UEC_C(
-                                               config->rti.uec_c);
-               }
-
-               if (config->rti.uec_d != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG2_INT_NUM_UEC_D(0xffff);
-                       val64 |= VXGE_HW_TIM_CFG2_INT_NUM_UEC_D(
-                                               config->rti.uec_d);
-               }
-
-               writeq(val64, &vp_reg->tim_cfg2_int_num[VXGE_HW_VPATH_INTR_RX]);
-               val64 = readq(&vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_RX]);
-
-               if (config->rti.timer_ri_en != VXGE_HW_USE_FLASH_DEFAULT) {
-                       if (config->rti.timer_ri_en)
-                               val64 |= VXGE_HW_TIM_CFG3_INT_NUM_TIMER_RI;
-                       else
-                               val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_TIMER_RI;
-               }
-
-               if (config->rti.rtimer_val != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(
-                                       0x3ffffff);
-                       val64 |= VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(
-                                       config->rti.rtimer_val);
-               }
-
-               if (config->rti.util_sel != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_UTIL_SEL(0x3f);
-                       val64 |= VXGE_HW_TIM_CFG3_INT_NUM_UTIL_SEL(vp_id);
-               }
-
-               if (config->rti.ltimer_val != VXGE_HW_USE_FLASH_DEFAULT) {
-                       val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_LTIMER_VAL(
-                                       0x3ffffff);
-                       val64 |= VXGE_HW_TIM_CFG3_INT_NUM_LTIMER_VAL(
-                                       config->rti.ltimer_val);
-               }
-
-               writeq(val64, &vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_RX]);
-               vpath->tim_rti_cfg3_saved = val64;
-       }
-
-       val64 = 0;
-       writeq(val64, &vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_EINTA]);
-       writeq(val64, &vp_reg->tim_cfg2_int_num[VXGE_HW_VPATH_INTR_EINTA]);
-       writeq(val64, &vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_EINTA]);
-       writeq(val64, &vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_BMAP]);
-       writeq(val64, &vp_reg->tim_cfg2_int_num[VXGE_HW_VPATH_INTR_BMAP]);
-       writeq(val64, &vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_BMAP]);
-
-       val64 = VXGE_HW_TIM_WRKLD_CLC_WRKLD_EVAL_PRD(150);
-       val64 |= VXGE_HW_TIM_WRKLD_CLC_WRKLD_EVAL_DIV(0);
-       val64 |= VXGE_HW_TIM_WRKLD_CLC_CNT_RX_TX(3);
-       writeq(val64, &vp_reg->tim_wrkld_clc);
-
-       return VXGE_HW_OK;
-}
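
/*
 * Every tti/rti field update above follows one read-modify-write pattern:
 * skip the field when the config says "use flash default", otherwise clear
 * the field's mask and OR in the new value.  A hypothetical helper (not
 * part of the driver) capturing one such update:
 */
static inline u64 example_tim_field_update(u64 reg, u32 cfg_val,
                                           u64 field_mask, u64 field_val)
{
        if (cfg_val == VXGE_HW_USE_FLASH_DEFAULT)
                return reg;             /* leave the flash default alone */
        return (reg & ~field_mask) | field_val;
}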
-
-/*
- * __vxge_hw_vpath_initialize
- * This routine is the final phase of init which initializes the
- * registers of the vpath using the configuration passed.
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_initialize(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       u64 val64;
-       u32 val32;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_virtualpath *vpath;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-
-       vpath = &hldev->virtual_paths[vp_id];
-
-       if (!(hldev->vpath_assignments & vxge_mBIT(vp_id))) {
-               status = VXGE_HW_ERR_VPATH_NOT_AVAILABLE;
-               goto exit;
-       }
-       vp_reg = vpath->vp_reg;
-
-       status =  __vxge_hw_vpath_swapper_set(vpath->vp_reg);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       status =  __vxge_hw_vpath_mac_configure(hldev, vp_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       status =  __vxge_hw_vpath_kdfc_configure(hldev, vp_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       status = __vxge_hw_vpath_tim_configure(hldev, vp_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       val64 = readq(&vp_reg->rtdma_rd_optimization_ctrl);
-
-       /* Get MRRS value from device control */
-       status  = __vxge_hw_vpath_pci_read(vpath, 1, 0x78, &val32);
-       if (status == VXGE_HW_OK) {
-               val32 = (val32 & VXGE_HW_PCI_EXP_DEVCTL_READRQ) >> 12;
-               val64 &=
-                   ~(VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_FILL_THRESH(7));
-               val64 |=
-                   VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_FILL_THRESH(val32);
-
-               val64 |= VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_WAIT_FOR_SPACE;
-       }
-
-       val64 &= ~(VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_ADDR_BDRY(7));
-       val64 |=
-           VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_ADDR_BDRY(
-                   VXGE_HW_MAX_PAYLOAD_SIZE_512);
-
-       val64 |= VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_ADDR_BDRY_EN;
-       writeq(val64, &vp_reg->rtdma_rd_optimization_ctrl);
-
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_vp_terminate - Terminate Virtual Path structure
- * This routine closes all channels it opened and frees up memory
- */
-static void __vxge_hw_vp_terminate(struct __vxge_hw_device *hldev, u32 vp_id)
-{
-       struct __vxge_hw_virtualpath *vpath;
-
-       vpath = &hldev->virtual_paths[vp_id];
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN)
-               goto exit;
-
-       VXGE_HW_DEVICE_TIM_INT_MASK_RESET(vpath->hldev->tim_int_mask0,
-               vpath->hldev->tim_int_mask1, vpath->vp_id);
-       hldev->stats.hw_dev_info_stats.vpath_info[vpath->vp_id] = NULL;
-
-       /* If the whole struct __vxge_hw_virtualpath is zeroed, nothing will
-        * work after the interface is brought down.
-        */
-       spin_lock(&vpath->lock);
-       vpath->vp_open = VXGE_HW_VP_NOT_OPEN;
-       spin_unlock(&vpath->lock);
-
-       vpath->vpmgmt_reg = NULL;
-       vpath->nofl_db = NULL;
-       vpath->max_mtu = 0;
-       vpath->vsport_number = 0;
-       vpath->max_kdfc_db = 0;
-       vpath->max_nofl_db = 0;
-       vpath->ringh = NULL;
-       vpath->fifoh = NULL;
-       memset(&vpath->vpath_handles, 0, sizeof(struct list_head));
-       vpath->stats_block = NULL;
-       vpath->hw_stats = NULL;
-       vpath->hw_stats_sav = NULL;
-       vpath->sw_stats = NULL;
-
-exit:
-       return;
-}
-
-/*
- * __vxge_hw_vp_initialize - Initialize Virtual Path structure
- * This routine is the initial phase of init which resets the vpath and
- * initializes the software support structures.
- */
-static enum vxge_hw_status
-__vxge_hw_vp_initialize(struct __vxge_hw_device *hldev, u32 vp_id,
-                       struct vxge_hw_vp_config *config)
-{
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (!(hldev->vpath_assignments & vxge_mBIT(vp_id))) {
-               status = VXGE_HW_ERR_VPATH_NOT_AVAILABLE;
-               goto exit;
-       }
-
-       vpath = &hldev->virtual_paths[vp_id];
-
-       spin_lock_init(&vpath->lock);
-       vpath->vp_id = vp_id;
-       vpath->vp_open = VXGE_HW_VP_OPEN;
-       vpath->hldev = hldev;
-       vpath->vp_config = config;
-       vpath->vp_reg = hldev->vpath_reg[vp_id];
-       vpath->vpmgmt_reg = hldev->vpmgmt_reg[vp_id];
-
-       __vxge_hw_vpath_reset(hldev, vp_id);
-
-       status = __vxge_hw_vpath_reset_check(vpath);
-       if (status != VXGE_HW_OK) {
-               memset(vpath, 0, sizeof(struct __vxge_hw_virtualpath));
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_mgmt_read(hldev, vpath);
-       if (status != VXGE_HW_OK) {
-               memset(vpath, 0, sizeof(struct __vxge_hw_virtualpath));
-               goto exit;
-       }
-
-       INIT_LIST_HEAD(&vpath->vpath_handles);
-
-       vpath->sw_stats = &hldev->stats.sw_dev_info_stats.vpath_info[vp_id];
-
-       VXGE_HW_DEVICE_TIM_INT_MASK_SET(hldev->tim_int_mask0,
-               hldev->tim_int_mask1, vp_id);
-
-       status = __vxge_hw_vpath_initialize(hldev, vp_id);
-       if (status != VXGE_HW_OK)
-               __vxge_hw_vp_terminate(hldev, vp_id);
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_vpath_mtu_set - Set MTU.
- * Set a new MTU value. For example, to use jumbo frames:
- * vxge_hw_vpath_mtu_set(my_device, 9600);
- */
-enum vxge_hw_status
-vxge_hw_vpath_mtu_set(struct __vxge_hw_vpath_handle *vp, u32 new_mtu)
-{
-       u64 val64;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_virtualpath *vpath;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-       vpath = vp->vpath;
-
-       new_mtu += VXGE_HW_MAC_HEADER_MAX_SIZE;
-
-       if ((new_mtu < VXGE_HW_MIN_MTU) || (new_mtu > vpath->max_mtu)) {
-               status = VXGE_HW_ERR_INVALID_MTU_SIZE;
-               goto exit;
-       }
-
-       val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
-
-       val64 &= ~VXGE_HW_RXMAC_VCFG0_RTS_MAX_FRM_LEN(0x3fff);
-       val64 |= VXGE_HW_RXMAC_VCFG0_RTS_MAX_FRM_LEN(new_mtu);
-
-       writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
-
-       vpath->vp_config->mtu = new_mtu - VXGE_HW_MAC_HEADER_MAX_SIZE;
-
-exit:
-       return status;
-}
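
/*
 * A usage sketch, assuming a handle "vp" returned by vxge_hw_vpath_open():
 * request a jumbo MTU and fall back to the default when the vpath's
 * max_mtu cannot accommodate it.
 */
static void example_enable_jumbo_frames(struct __vxge_hw_vpath_handle *vp)
{
        if (vxge_hw_vpath_mtu_set(vp, VXGE_HW_MAX_MTU) != VXGE_HW_OK)
                vxge_hw_vpath_mtu_set(vp, VXGE_HW_DEFAULT_MTU);
}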
-
-/*
- * vxge_hw_vpath_stats_enable - Enable vpath h/w statistics.
- * Enable the DMA vpath statistics. The function is to be called to re-enable
- * the adapter to update stats into the host memory
- */
-static enum vxge_hw_status
-vxge_hw_vpath_stats_enable(struct __vxge_hw_vpath_handle *vp)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_virtualpath *vpath;
-
-       vpath = vp->vpath;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto exit;
-       }
-
-       memcpy(vpath->hw_stats_sav, vpath->hw_stats,
-                       sizeof(struct vxge_hw_vpath_stats_hw_info));
-
-       status = __vxge_hw_vpath_stats_get(vpath, vpath->hw_stats);
-exit:
-       return status;
-}
-
-/*
- * __vxge_hw_blockpool_block_allocate - Allocates a block from the block pool
- * This function allocates a block from the block pool or from the system
- */
-static struct __vxge_hw_blockpool_entry *
-__vxge_hw_blockpool_block_allocate(struct __vxge_hw_device *devh, u32 size)
-{
-       struct __vxge_hw_blockpool_entry *entry = NULL;
-       struct __vxge_hw_blockpool  *blockpool;
-
-       blockpool = &devh->block_pool;
-
-       if (size == blockpool->block_size) {
-
-               if (!list_empty(&blockpool->free_block_list))
-                       entry = (struct __vxge_hw_blockpool_entry *)
-                               list_first_entry(&blockpool->free_block_list,
-                                       struct __vxge_hw_blockpool_entry,
-                                       item);
-
-               if (entry != NULL) {
-                       list_del(&entry->item);
-                       blockpool->pool_size--;
-               }
-       }
-
-       if (entry != NULL)
-               __vxge_hw_blockpool_blocks_add(blockpool);
-
-       return entry;
-}
-
-/*
- * vxge_hw_vpath_open - Open a virtual path on a given adapter
- * This function is used to open access to a virtual path of an
- * adapter for offload and GRO operations. This function returns
- * synchronously.
- */
-enum vxge_hw_status
-vxge_hw_vpath_open(struct __vxge_hw_device *hldev,
-                  struct vxge_hw_vpath_attr *attr,
-                  struct __vxge_hw_vpath_handle **vpath_handle)
-{
-       struct __vxge_hw_virtualpath *vpath;
-       struct __vxge_hw_vpath_handle *vp;
-       enum vxge_hw_status status;
-
-       vpath = &hldev->virtual_paths[attr->vp_id];
-
-       if (vpath->vp_open == VXGE_HW_VP_OPEN) {
-               status = VXGE_HW_ERR_INVALID_STATE;
-               goto vpath_open_exit1;
-       }
-
-       status = __vxge_hw_vp_initialize(hldev, attr->vp_id,
-                       &hldev->config.vp_config[attr->vp_id]);
-       if (status != VXGE_HW_OK)
-               goto vpath_open_exit1;
-
-       vp = vzalloc(sizeof(struct __vxge_hw_vpath_handle));
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto vpath_open_exit2;
-       }
-
-       vp->vpath = vpath;
-
-       if (vpath->vp_config->fifo.enable == VXGE_HW_FIFO_ENABLE) {
-               status = __vxge_hw_fifo_create(vp, &attr->fifo_attr);
-               if (status != VXGE_HW_OK)
-                       goto vpath_open_exit6;
-       }
-
-       if (vpath->vp_config->ring.enable == VXGE_HW_RING_ENABLE) {
-               status = __vxge_hw_ring_create(vp, &attr->ring_attr);
-               if (status != VXGE_HW_OK)
-                       goto vpath_open_exit7;
-
-               __vxge_hw_vpath_prc_configure(hldev, attr->vp_id);
-       }
-
-       vpath->fifoh->tx_intr_num =
-               (attr->vp_id * VXGE_HW_MAX_INTR_PER_VP)  +
-                       VXGE_HW_VPATH_INTR_TX;
-
-       vpath->stats_block = __vxge_hw_blockpool_block_allocate(hldev,
-                               VXGE_HW_BLOCK_SIZE);
-       if (vpath->stats_block == NULL) {
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
-               goto vpath_open_exit8;
-       }
-
-       vpath->hw_stats = vpath->stats_block->memblock;
-       memset(vpath->hw_stats, 0,
-               sizeof(struct vxge_hw_vpath_stats_hw_info));
-
-       hldev->stats.hw_dev_info_stats.vpath_info[attr->vp_id] =
-                                               vpath->hw_stats;
-
-       vpath->hw_stats_sav =
-               &hldev->stats.hw_dev_info_stats.vpath_info_sav[attr->vp_id];
-       memset(vpath->hw_stats_sav, 0,
-                       sizeof(struct vxge_hw_vpath_stats_hw_info));
-
-       writeq(vpath->stats_block->dma_addr, &vpath->vp_reg->stats_cfg);
-
-       status = vxge_hw_vpath_stats_enable(vp);
-       if (status != VXGE_HW_OK)
-               goto vpath_open_exit8;
-
-       list_add(&vp->item, &vpath->vpath_handles);
-
-       hldev->vpaths_deployed |= vxge_mBIT(vpath->vp_id);
-
-       *vpath_handle = vp;
-
-       attr->fifo_attr.userdata = vpath->fifoh;
-       attr->ring_attr.userdata = vpath->ringh;
-
-       return VXGE_HW_OK;
-
-vpath_open_exit8:
-       if (vpath->ringh != NULL)
-               __vxge_hw_ring_delete(vp);
-vpath_open_exit7:
-       if (vpath->fifoh != NULL)
-               __vxge_hw_fifo_delete(vp);
-vpath_open_exit6:
-       vfree(vp);
-vpath_open_exit2:
-       __vxge_hw_vp_terminate(hldev, attr->vp_id);
-vpath_open_exit1:
-
-       return status;
-}
-
-/**
- * vxge_hw_vpath_rx_doorbell_init - Initialize the Rx descriptor doorbell
- * @vp: Handle got from previous vpath open
- *
- * This function posts the count of initially available receive descriptors
- * to the PRC doorbell and caps the ring's rxds_limit accordingly.
- */
-void vxge_hw_vpath_rx_doorbell_init(struct __vxge_hw_vpath_handle *vp)
-{
-       struct __vxge_hw_virtualpath *vpath = vp->vpath;
-       struct __vxge_hw_ring *ring = vpath->ringh;
-       struct vxgedev *vdev = netdev_priv(vpath->hldev->ndev);
-       u64 new_count, val64, val164;
-
-       if (vdev->titan1) {
-               new_count = readq(&vpath->vp_reg->rxdmem_size);
-               new_count &= 0x1fff;
-       } else
-               new_count = ring->config->ring_blocks * VXGE_HW_BLOCK_SIZE / 8;
-
-       val164 = VXGE_HW_RXDMEM_SIZE_PRC_RXDMEM_SIZE(new_count);
-
-       writeq(VXGE_HW_PRC_RXD_DOORBELL_NEW_QW_CNT(val164),
-               &vpath->vp_reg->prc_rxd_doorbell);
-       readl(&vpath->vp_reg->prc_rxd_doorbell);
-
-       val164 /= 2;
-       val64 = readq(&vpath->vp_reg->prc_cfg6);
-       val64 = VXGE_HW_PRC_CFG6_RXD_SPAT(val64);
-       val64 &= 0x1ff;
-
-       /*
-        * Each RxD is of 4 qwords
-        */
-       new_count -= (val64 + 1);
-       val64 = min(val164, new_count) / 4;
-
-       ring->rxds_limit = min(ring->rxds_limit, val64);
-       if (ring->rxds_limit < 4)
-               ring->rxds_limit = 4;
-}
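
/*
 * Worked example of the limit math above, assuming a non-Titan1 adapter,
 * ring_blocks == 2 and a 4096-byte VXGE_HW_BLOCK_SIZE: new_count starts at
 * 2 * 4096 / 8 == 1024 qwords.  After subtracting RXD_SPAT + 1 qwords, the
 * division by 4 converts the remainder into whole RxDs, since each RxD
 * occupies four qwords.
 */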
-
-/*
- * __vxge_hw_blockpool_block_free - Frees a block back to the block pool
- * @devh: Hal device
- * @entry: Entry of block to be freed
- *
- * This function frees a block back to the block pool
- */
-static void
-__vxge_hw_blockpool_block_free(struct __vxge_hw_device *devh,
-                              struct __vxge_hw_blockpool_entry *entry)
-{
-       struct __vxge_hw_blockpool  *blockpool;
-
-       blockpool = &devh->block_pool;
-
-       if (entry->length == blockpool->block_size) {
-               list_add(&entry->item, &blockpool->free_block_list);
-               blockpool->pool_size++;
-       }
-
-       __vxge_hw_blockpool_blocks_remove(blockpool);
-}
-
-/*
- * vxge_hw_vpath_close - Close the handle got from a previous vpath open
- * This function is used to close access to a virtual path opened
- * earlier.
- */
-enum vxge_hw_status vxge_hw_vpath_close(struct __vxge_hw_vpath_handle *vp)
-{
-       struct __vxge_hw_virtualpath *vpath = NULL;
-       struct __vxge_hw_device *devh = NULL;
-       u32 vp_id = vp->vpath->vp_id;
-       u32 is_empty = TRUE;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       vpath = vp->vpath;
-       devh = vpath->hldev;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto vpath_close_exit;
-       }
-
-       list_del(&vp->item);
-
-       if (!list_empty(&vpath->vpath_handles)) {
-               list_add(&vp->item, &vpath->vpath_handles);
-               is_empty = FALSE;
-       }
-
-       if (!is_empty) {
-               status = VXGE_HW_FAIL;
-               goto vpath_close_exit;
-       }
-
-       devh->vpaths_deployed &= ~vxge_mBIT(vp_id);
-
-       if (vpath->ringh != NULL)
-               __vxge_hw_ring_delete(vp);
-
-       if (vpath->fifoh != NULL)
-               __vxge_hw_fifo_delete(vp);
-
-       if (vpath->stats_block != NULL)
-               __vxge_hw_blockpool_block_free(devh, vpath->stats_block);
-
-       vfree(vp);
-
-       __vxge_hw_vp_terminate(devh, vp_id);
-
-vpath_close_exit:
-       return status;
-}
-
-/*
- * vxge_hw_vpath_reset - Resets vpath
- * This function is used to request a reset of vpath
- */
-enum vxge_hw_status vxge_hw_vpath_reset(struct __vxge_hw_vpath_handle *vp)
-{
-       enum vxge_hw_status status;
-       u32 vp_id;
-       struct __vxge_hw_virtualpath *vpath = vp->vpath;
-
-       vp_id = vpath->vp_id;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_reset(vpath->hldev, vp_id);
-       if (status == VXGE_HW_OK)
-               vpath->sw_stats->soft_reset_cnt++;
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_vpath_recover_from_reset - Poll for reset complete and re-initialize.
- * This function polls for the vpath reset completion and re-initializes
- * the vpath.
- */
-enum vxge_hw_status
-vxge_hw_vpath_recover_from_reset(struct __vxge_hw_vpath_handle *vp)
-{
-       struct __vxge_hw_virtualpath *vpath = NULL;
-       enum vxge_hw_status status;
-       struct __vxge_hw_device *hldev;
-       u32 vp_id;
-
-       vp_id = vp->vpath->vp_id;
-       vpath = vp->vpath;
-       hldev = vpath->hldev;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_reset_check(vpath);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       status = __vxge_hw_vpath_sw_reset(hldev, vp_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       status = __vxge_hw_vpath_initialize(hldev, vp_id);
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       if (vpath->ringh != NULL)
-               __vxge_hw_vpath_prc_configure(hldev, vp_id);
-
-       memset(vpath->hw_stats, 0,
-               sizeof(struct vxge_hw_vpath_stats_hw_info));
-
-       memset(vpath->hw_stats_sav, 0,
-               sizeof(struct vxge_hw_vpath_stats_hw_info));
-
-       writeq(vpath->stats_block->dma_addr,
-               &vpath->vp_reg->stats_cfg);
-
-       status = vxge_hw_vpath_stats_enable(vp);
-
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_vpath_enable - Enable vpath.
- * This routine clears the vpath reset, thereby enabling the vpath
- * to start forwarding frames and generating interrupts.
- */
-void
-vxge_hw_vpath_enable(struct __vxge_hw_vpath_handle *vp)
-{
-       struct __vxge_hw_device *hldev;
-       u64 val64;
-
-       hldev = vp->vpath->hldev;
-
-       val64 = VXGE_HW_CMN_RSTHDLR_CFG1_CLR_VPATH_RESET(
-               1 << (16 - vp->vpath->vp_id));
-
-       __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(val64, 0, 32),
-               &hldev->common_reg->cmn_rsthdlr_cfg1);
-}
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.h b/drivers/net/ethernet/neterion/vxge/vxge-config.h
deleted file mode 100644 (file)
index 0cd0750..0000000
+++ /dev/null
@@ -1,2086 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-config.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
- *                Virtualized Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#ifndef VXGE_CONFIG_H
-#define VXGE_CONFIG_H
-#include <linux/hardirq.h>
-#include <linux/list.h>
-#include <linux/slab.h>
-#include <asm/io.h>
-
-#ifndef VXGE_CACHE_LINE_SIZE
-#define VXGE_CACHE_LINE_SIZE 128
-#endif
-
-#ifndef VXGE_ALIGN
-#define VXGE_ALIGN(adrs, size) \
-       (((size) - (((u64)adrs) & ((size)-1))) & ((size)-1))
-#endif
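
/*
 * VXGE_ALIGN yields the padding needed to bring "adrs" up to the next
 * "size"-byte boundary, or 0 if it is already aligned.  For example:
 *
 *     VXGE_ALIGN(0x1003, 8) == 5
 *     VXGE_ALIGN(0x1008, 8) == 0
 */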
-
-#define VXGE_HW_MIN_MTU                                ETH_MIN_MTU
-#define VXGE_HW_MAX_MTU                                9600
-#define VXGE_HW_DEFAULT_MTU                    1500
-
-#define VXGE_HW_MAX_ROM_IMAGES                 8
-
-struct eprom_image {
-       u8 is_valid:1;
-       u8 index;
-       u8 type;
-       u16 version;
-};
-
-#ifdef VXGE_DEBUG_ASSERT
-/**
- * vxge_assert
- * @test: C-condition to check
- *
- * This macro implements a traditional assert. By default assertions are
- * enabled. They can be disabled by undefining the VXGE_DEBUG_ASSERT macro
- * at compilation time.
- */
-#define vxge_assert(test) BUG_ON(!(test))
-#else
-#define vxge_assert(test)
-#endif /* end of VXGE_DEBUG_ASSERT */
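
/*
 * Typical use, assuming VXGE_DEBUG_ASSERT is defined; the condition is
 * checked with BUG_ON(), so a false condition halts the kernel:
 *
 *     vxge_assert(vpath != NULL);
 */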
-
-/**
- * enum vxge_debug_level
- * @VXGE_NONE: debug disabled
- * @VXGE_ERR: all errors are going to be logged out
- * @VXGE_TRACE: all errors plus all kinds of verbose tracing printouts
- *                 are going to be logged out. Very noisy.
- *
- * This enumeration is used to switch between different debug levels
- * at runtime if the DEBUG macro is defined during compilation. If the
- * DEBUG macro is not defined, the code is compiled out.
- */
-enum vxge_debug_level {
-       VXGE_NONE   = 0,
-       VXGE_TRACE  = 1,
-       VXGE_ERR    = 2
-};
-
-#define NULL_VPID                                      0xFFFFFFFF
-#ifdef CONFIG_VXGE_DEBUG_TRACE_ALL
-#define VXGE_DEBUG_MODULE_MASK  0xffffffff
-#define VXGE_DEBUG_TRACE_MASK   0xffffffff
-#define VXGE_DEBUG_ERR_MASK     0xffffffff
-#define VXGE_DEBUG_MASK         0x000001ff
-#else
-#define VXGE_DEBUG_MODULE_MASK  0x20000000
-#define VXGE_DEBUG_TRACE_MASK   0x20000000
-#define VXGE_DEBUG_ERR_MASK     0x20000000
-#define VXGE_DEBUG_MASK         0x00000001
-#endif
-
-/*
- * @VXGE_COMPONENT_LL: do debug for vxge link layer module
- * @VXGE_COMPONENT_ALL: activate debug for all modules with no exceptions
- *
- * This enumeration is used to distinguish modules or libraries
- * during compilation and at runtime.  The Makefile must declare the
- * VXGE_DEBUG_MODULE_MASK macro and set it to a proper value.
- */
-#define        VXGE_COMPONENT_LL                               0x20000000
-#define        VXGE_COMPONENT_ALL                              0xffffffff
-
-#define VXGE_HW_BASE_INF       100
-#define VXGE_HW_BASE_ERR       200
-#define VXGE_HW_BASE_BADCFG    300
-
-enum vxge_hw_status {
-       VXGE_HW_OK                                = 0,
-       VXGE_HW_FAIL                              = 1,
-       VXGE_HW_PENDING                           = 2,
-       VXGE_HW_COMPLETIONS_REMAIN                = 3,
-
-       VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS = VXGE_HW_BASE_INF + 1,
-       VXGE_HW_INF_OUT_OF_DESCRIPTORS            = VXGE_HW_BASE_INF + 2,
-
-       VXGE_HW_ERR_INVALID_HANDLE                = VXGE_HW_BASE_ERR + 1,
-       VXGE_HW_ERR_OUT_OF_MEMORY                 = VXGE_HW_BASE_ERR + 2,
-       VXGE_HW_ERR_VPATH_NOT_AVAILABLE           = VXGE_HW_BASE_ERR + 3,
-       VXGE_HW_ERR_VPATH_NOT_OPEN                = VXGE_HW_BASE_ERR + 4,
-       VXGE_HW_ERR_WRONG_IRQ                     = VXGE_HW_BASE_ERR + 5,
-       VXGE_HW_ERR_SWAPPER_CTRL                  = VXGE_HW_BASE_ERR + 6,
-       VXGE_HW_ERR_INVALID_MTU_SIZE              = VXGE_HW_BASE_ERR + 7,
-       VXGE_HW_ERR_INVALID_INDEX                 = VXGE_HW_BASE_ERR + 8,
-       VXGE_HW_ERR_INVALID_TYPE                  = VXGE_HW_BASE_ERR + 9,
-       VXGE_HW_ERR_INVALID_OFFSET                = VXGE_HW_BASE_ERR + 10,
-       VXGE_HW_ERR_INVALID_DEVICE                = VXGE_HW_BASE_ERR + 11,
-       VXGE_HW_ERR_VERSION_CONFLICT              = VXGE_HW_BASE_ERR + 12,
-       VXGE_HW_ERR_INVALID_PCI_INFO              = VXGE_HW_BASE_ERR + 13,
-       VXGE_HW_ERR_INVALID_TCODE                 = VXGE_HW_BASE_ERR + 14,
-       VXGE_HW_ERR_INVALID_BLOCK_SIZE            = VXGE_HW_BASE_ERR + 15,
-       VXGE_HW_ERR_INVALID_STATE                 = VXGE_HW_BASE_ERR + 16,
-       VXGE_HW_ERR_PRIVILEGED_OPERATION          = VXGE_HW_BASE_ERR + 17,
-       VXGE_HW_ERR_INVALID_PORT                  = VXGE_HW_BASE_ERR + 18,
-       VXGE_HW_ERR_FIFO                          = VXGE_HW_BASE_ERR + 19,
-       VXGE_HW_ERR_VPATH                         = VXGE_HW_BASE_ERR + 20,
-       VXGE_HW_ERR_CRITICAL                      = VXGE_HW_BASE_ERR + 21,
-       VXGE_HW_ERR_SLOT_FREEZE                   = VXGE_HW_BASE_ERR + 22,
-
-       VXGE_HW_BADCFG_RING_INDICATE_MAX_PKTS     = VXGE_HW_BASE_BADCFG + 1,
-       VXGE_HW_BADCFG_FIFO_BLOCKS                = VXGE_HW_BASE_BADCFG + 2,
-       VXGE_HW_BADCFG_VPATH_MTU                  = VXGE_HW_BASE_BADCFG + 3,
-       VXGE_HW_BADCFG_VPATH_RPA_STRIP_VLAN_TAG   = VXGE_HW_BASE_BADCFG + 4,
-       VXGE_HW_BADCFG_VPATH_MIN_BANDWIDTH        = VXGE_HW_BASE_BADCFG + 5,
-       VXGE_HW_BADCFG_INTR_MODE                  = VXGE_HW_BASE_BADCFG + 6,
-       VXGE_HW_BADCFG_RTS_MAC_EN                 = VXGE_HW_BASE_BADCFG + 7,
-
-       VXGE_HW_EOF_TRACE_BUF                     = -1
-};
-
-/**
- * enum vxge_hw_device_link_state - Link state enumeration.
- * @VXGE_HW_LINK_NONE: Invalid link state.
- * @VXGE_HW_LINK_DOWN: Link is down.
- * @VXGE_HW_LINK_UP: Link is up.
- *
- */
-enum vxge_hw_device_link_state {
-       VXGE_HW_LINK_NONE,
-       VXGE_HW_LINK_DOWN,
-       VXGE_HW_LINK_UP
-};
-
-/**
- * enum vxge_hw_fw_upgrade_code - FW upgrade return codes.
- * @VXGE_HW_FW_UPGRADE_OK: All OK, send next 16 bytes
- * @VXGE_HW_FW_UPGRADE_DONE:  upload completed
- * @VXGE_HW_FW_UPGRADE_ERR:  upload error
- * @VXGE_FW_UPGRADE_BYTES2SKIP:  skip bytes in the stream
- *
- */
-enum vxge_hw_fw_upgrade_code {
-       VXGE_HW_FW_UPGRADE_OK           = 0,
-       VXGE_HW_FW_UPGRADE_DONE         = 1,
-       VXGE_HW_FW_UPGRADE_ERR          = 2,
-       VXGE_FW_UPGRADE_BYTES2SKIP      = 3
-};
-
-/**
- * enum vxge_hw_fw_upgrade_err_code - FW upgrade error codes.
- * @VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_1: corrupt data
- * @VXGE_HW_FW_UPGRADE_ERR_BUFFER_OVERFLOW: buffer overflow
- * @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_3: invalid .ncf file
- * @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_4: invalid .ncf file
- * @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_5: invalid .ncf file
- * @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_6: invalid .ncf file
- * @VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_7: corrupt data
- * @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_8: invalid .ncf file
- * @VXGE_HW_FW_UPGRADE_ERR_GENERIC_ERROR_UNKNOWN: generic error of unknown type
- * @VXGE_HW_FW_UPGRADE_ERR_FAILED_TO_FLASH: failed to flash the image; check failed
- */
-enum vxge_hw_fw_upgrade_err_code {
-       VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_1           = 1,
-       VXGE_HW_FW_UPGRADE_ERR_BUFFER_OVERFLOW          = 2,
-       VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_3           = 3,
-       VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_4           = 4,
-       VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_5           = 5,
-       VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_6           = 6,
-       VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_7           = 7,
-       VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_8           = 8,
-       VXGE_HW_FW_UPGRADE_ERR_GENERIC_ERROR_UNKNOWN    = 9,
-       VXGE_HW_FW_UPGRADE_ERR_FAILED_TO_FLASH          = 10
-};
-
-/**
- * struct vxge_hw_device_date - Date Format
- * @day: Day
- * @month: Month
- * @year: Year
- * @date: Date in string format
- *
- * Structure for returning date
- */
-
-#define VXGE_HW_FW_STRLEN      32
-struct vxge_hw_device_date {
-       u32     day;
-       u32     month;
-       u32     year;
-       char    date[VXGE_HW_FW_STRLEN];
-};
-
-struct vxge_hw_device_version {
-       u32     major;
-       u32     minor;
-       u32     build;
-       char    version[VXGE_HW_FW_STRLEN];
-};
-
-/**
- * struct vxge_hw_fifo_config - Configuration of fifo.
- * @enable: Is this fifo to be commissioned
- * @fifo_blocks: Number of TxDL (that is, lists of Tx descriptors)
- *             blocks per queue.
- * @max_frags: Max number of Tx buffers per TxDL (that is, per single
- *             transmit operation).
- *             No more than 256 transmit buffers can be specified.
- * @memblock_size: Fifo descriptors are allocated in blocks of @memblock_size
- *             bytes. Setting @memblock_size to page size ensures
- *             by-page allocation of descriptors. 128K bytes is the
- *             maximum supported block size.
- * @alignment_size: per Tx fragment DMA-able memory used to align transmit data
- *             (e.g., to align on a cache line).
- * @intr: Boolean. Use 1 to generate interrupt for each completed TxDL.
- *             Use 0 otherwise.
- * @no_snoop_bits: If non-zero, specifies no-snoop PCI operation,
- *             which generally improves latency of the host bridge operation
- *             (see PCI specification). For valid values please refer
- *             to struct vxge_hw_fifo_config{} in the driver sources.
- * Configuration of all Titan fifos.
- * Note: Valid (min, max) range for each attribute is specified in the body of
- * the struct vxge_hw_fifo_config{} structure.
- */
-struct vxge_hw_fifo_config {
-       u32                             enable;
-#define VXGE_HW_FIFO_ENABLE                            1
-#define VXGE_HW_FIFO_DISABLE                           0
-
-       u32                             fifo_blocks;
-#define VXGE_HW_MIN_FIFO_BLOCKS                                2
-#define VXGE_HW_MAX_FIFO_BLOCKS                                128
-
-       u32                             max_frags;
-#define VXGE_HW_MIN_FIFO_FRAGS                         1
-#define VXGE_HW_MAX_FIFO_FRAGS                         256
-
-       u32                             memblock_size;
-#define VXGE_HW_MIN_FIFO_MEMBLOCK_SIZE                 VXGE_HW_BLOCK_SIZE
-#define VXGE_HW_MAX_FIFO_MEMBLOCK_SIZE                 131072
-#define VXGE_HW_DEF_FIFO_MEMBLOCK_SIZE                 8096
-
-       u32                             alignment_size;
-#define VXGE_HW_MIN_FIFO_ALIGNMENT_SIZE                0
-#define VXGE_HW_MAX_FIFO_ALIGNMENT_SIZE                65536
-#define VXGE_HW_DEF_FIFO_ALIGNMENT_SIZE                VXGE_CACHE_LINE_SIZE
-
-       u32                             intr;
-#define VXGE_HW_FIFO_QUEUE_INTR_ENABLE                 1
-#define VXGE_HW_FIFO_QUEUE_INTR_DISABLE                        0
-#define VXGE_HW_FIFO_QUEUE_INTR_DEFAULT                        0
-
-       u32                             no_snoop_bits;
-#define VXGE_HW_FIFO_NO_SNOOP_DISABLED                 0
-#define VXGE_HW_FIFO_NO_SNOOP_TXD                      1
-#define VXGE_HW_FIFO_NO_SNOOP_FRM                      2
-#define VXGE_HW_FIFO_NO_SNOOP_ALL                      3
-#define VXGE_HW_FIFO_NO_SNOOP_DEFAULT                  0
-
-};
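
/*
 * A configuration sketch (not from the driver): filling a fifo config
 * entirely from the documented defaults and bounds above.
 */
static void example_fifo_config_init(struct vxge_hw_fifo_config *cfg)
{
        cfg->enable         = VXGE_HW_FIFO_ENABLE;
        cfg->fifo_blocks    = VXGE_HW_MIN_FIFO_BLOCKS;
        cfg->max_frags      = VXGE_HW_MAX_FIFO_FRAGS;
        cfg->memblock_size  = VXGE_HW_DEF_FIFO_MEMBLOCK_SIZE;
        cfg->alignment_size = VXGE_HW_DEF_FIFO_ALIGNMENT_SIZE;
        cfg->intr           = VXGE_HW_FIFO_QUEUE_INTR_DEFAULT;
        cfg->no_snoop_bits  = VXGE_HW_FIFO_NO_SNOOP_DEFAULT;
}
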
-/**
- * struct vxge_hw_ring_config - Ring configurations.
- * @enable: Is this ring to be commissioned
- * @ring_blocks: Number of RxD blocks in the ring
- * @buffer_mode: Receive buffer mode (1, 3, or 5); for details please refer
- *             to Titan User Guide.
- * @scatter_mode: Titan supports two receive scatter modes: A and B.
- *             For details please refer to Titan User Guide.
- * @rx_timer_val: The number of 32ns periods that would be counted between two
- *             timer interrupts.
- * @greedy_return: If set, this forces the device to return absolutely all
- *             RxDs that are consumed and still on board when a timer
- *             interrupt triggers. If clear, and the device has already
- *             returned RxDs between the previous and the current timer
- *             interrupt, then it is not forced to return the remaining
- *             consumed RxDs that it has on board, which account for a byte
- *             count less than the one programmed into the
- *             PRC_CFG6.RXD_CRXDT field.
- * @rx_timer_ci: TBD
- * @backoff_interval_us: Time (in microseconds), after which Titan
- *             tries to download RxDs posted by the host.
- *             Note that the "backoff" does not happen if the host posts
- *             receive descriptors in a timely fashion.
- * Ring configuration.
- */
-struct vxge_hw_ring_config {
-       u32                             enable;
-#define VXGE_HW_RING_ENABLE                                    1
-#define VXGE_HW_RING_DISABLE                                   0
-#define VXGE_HW_RING_DEFAULT                                   1
-
-       u32                             ring_blocks;
-#define VXGE_HW_MIN_RING_BLOCKS                                        1
-#define VXGE_HW_MAX_RING_BLOCKS                                        128
-#define VXGE_HW_DEF_RING_BLOCKS                                        2
-
-       u32                             buffer_mode;
-#define VXGE_HW_RING_RXD_BUFFER_MODE_1                         1
-#define VXGE_HW_RING_RXD_BUFFER_MODE_3                         3
-#define VXGE_HW_RING_RXD_BUFFER_MODE_5                         5
-#define VXGE_HW_RING_RXD_BUFFER_MODE_DEFAULT                   1
-
-       u32                             scatter_mode;
-#define VXGE_HW_RING_SCATTER_MODE_A                            0
-#define VXGE_HW_RING_SCATTER_MODE_B                            1
-#define VXGE_HW_RING_SCATTER_MODE_C                            2
-#define VXGE_HW_RING_SCATTER_MODE_USE_FLASH_DEFAULT            0xffffffff
-
-       u64                             rxds_limit;
-#define VXGE_HW_DEF_RING_RXDS_LIMIT                            44
-};
-
-/**
- * struct vxge_hw_vp_config - Configuration of virtual path
- * @vp_id: Virtual Path Id
- * @min_bandwidth: Minimum Guaranteed bandwidth
- * @ring: See struct vxge_hw_ring_config{}.
- * @fifo: See struct vxge_hw_fifo_config{}.
- * @tti: Configuration of interrupt associated with Transmit.
- *             see struct vxge_hw_tim_intr_config();
- * @rti: Configuration of interrupt associated with Receive.
- *              see struct vxge_hw_tim_intr_config();
- * @mtu: mtu size used on this port.
- * @rpa_strip_vlan_tag: Strip VLAN Tag enable/disable. Instructs the device to
- *             remove the VLAN tag from all received tagged frames that are not
- *             replicated at the internal L2 switch.
- *             0 - Do not strip the VLAN tag.
- *             1 - Strip the VLAN tag. Regardless of this setting, VLAN tags are
- *                 always placed into the RxDMA descriptor.
- *
- * This structure is used by the driver to pass the configuration parameters to
- * configure Virtual Path.
- */
-struct vxge_hw_vp_config {
-       u32                             vp_id;
-
-#define        VXGE_HW_VPATH_PRIORITY_MIN                      0
-#define        VXGE_HW_VPATH_PRIORITY_MAX                      16
-#define        VXGE_HW_VPATH_PRIORITY_DEFAULT                  0
-
-       u32                             min_bandwidth;
-#define        VXGE_HW_VPATH_BANDWIDTH_MIN                     0
-#define        VXGE_HW_VPATH_BANDWIDTH_MAX                     100
-#define        VXGE_HW_VPATH_BANDWIDTH_DEFAULT                 0
-
-       struct vxge_hw_ring_config              ring;
-       struct vxge_hw_fifo_config              fifo;
-       struct vxge_hw_tim_intr_config  tti;
-       struct vxge_hw_tim_intr_config  rti;
-
-       u32                             mtu;
-#define VXGE_HW_VPATH_MIN_INITIAL_MTU                  VXGE_HW_MIN_MTU
-#define VXGE_HW_VPATH_MAX_INITIAL_MTU                  VXGE_HW_MAX_MTU
-#define VXGE_HW_VPATH_USE_FLASH_DEFAULT_INITIAL_MTU    0xffffffff
-
-       u32                             rpa_strip_vlan_tag;
-#define VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_ENABLE                        1
-#define VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_DISABLE               0
-#define VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_USE_FLASH_DEFAULT     0xffffffff
-
-};
-/**
- * struct vxge_hw_device_config - Device configuration.
- * @dma_blockpool_initial: Initial size of DMA Pool
- * @dma_blockpool_max: Maximum blocks in DMA pool
- * @intr_mode: Line, or MSI-X interrupt.
- *
- * @rth_en: Enable Receive Traffic Hashing(RTH) using IT(Indirection Table).
- * @rth_it_type: RTH IT table programming type
- * @rts_mac_en: Enable Receive Traffic Steering using MAC destination address
- * @vp_config: Configuration for virtual paths
- * @device_poll_millis: Specify the interval (in milliseconds)
- *                     to wait for register reads
- *
- * Titan configuration.
- * Contains per-device configuration parameters, including:
- * - stats sampling interval, etc.
- *
- * In addition, struct vxge_hw_device_config{} includes "subordinate"
- * configurations, including:
- * - fifos and rings;
- * - MAC (done at firmware level).
- *
- * See Titan User Guide for more details.
- * Note: Valid (min, max) range for each attribute is specified in the body of
- * the struct vxge_hw_device_config{} structure. Please refer to the
- * corresponding include file.
- * See also: struct vxge_hw_tim_intr_config{}.
- */
-struct vxge_hw_device_config {
-       u32                                     device_poll_millis;
-#define VXGE_HW_MIN_DEVICE_POLL_MILLIS         1
-#define VXGE_HW_MAX_DEVICE_POLL_MILLIS         100000
-#define VXGE_HW_DEF_DEVICE_POLL_MILLIS         1000
-
-       u32                                     dma_blockpool_initial;
-       u32                                     dma_blockpool_max;
-#define VXGE_HW_MIN_DMA_BLOCK_POOL_SIZE                0
-#define VXGE_HW_INITIAL_DMA_BLOCK_POOL_SIZE    0
-#define VXGE_HW_INCR_DMA_BLOCK_POOL_SIZE       4
-#define VXGE_HW_MAX_DMA_BLOCK_POOL_SIZE                4096
-
-#define        VXGE_HW_MAX_PAYLOAD_SIZE_512            2
-
-       u32                                     intr_mode:2,
-#define VXGE_HW_INTR_MODE_IRQLINE              0
-#define VXGE_HW_INTR_MODE_MSIX                 1
-#define VXGE_HW_INTR_MODE_MSIX_ONE_SHOT                2
-
-#define VXGE_HW_INTR_MODE_DEF                  0
-
-                                               rth_en:1,
-#define VXGE_HW_RTH_DISABLE                    0
-#define VXGE_HW_RTH_ENABLE                     1
-#define VXGE_HW_RTH_DEFAULT                    0
-
-                                               rth_it_type:1,
-#define VXGE_HW_RTH_IT_TYPE_SOLO_IT            0
-#define VXGE_HW_RTH_IT_TYPE_MULTI_IT           1
-#define VXGE_HW_RTH_IT_TYPE_DEFAULT            0
-
-                                               rts_mac_en:1,
-#define VXGE_HW_RTS_MAC_DISABLE                        0
-#define VXGE_HW_RTS_MAC_ENABLE                 1
-#define VXGE_HW_RTS_MAC_DEFAULT                        0
-
-                                               hwts_en:1;
-#define        VXGE_HW_HWTS_DISABLE                    0
-#define        VXGE_HW_HWTS_ENABLE                     1
-#define        VXGE_HW_HWTS_DEFAULT                    1
-
-       struct vxge_hw_vp_config vp_config[VXGE_HW_MAX_VIRTUAL_PATHS];
-};
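
Note that intr_mode, rth_en, rth_it_type, rts_mac_en and hwts_en above are
bitfields packed into a single word, but they are assigned like ordinary
members. A hypothetical sketch (helper name assumed, not from the driver):

static void example_device_config_init(struct vxge_hw_device_config *cfg)
{
	cfg->device_poll_millis = VXGE_HW_DEF_DEVICE_POLL_MILLIS;
	cfg->intr_mode          = VXGE_HW_INTR_MODE_MSIX;
	cfg->rth_en             = VXGE_HW_RTH_ENABLE;
	cfg->rth_it_type        = VXGE_HW_RTH_IT_TYPE_DEFAULT;
	cfg->rts_mac_en         = VXGE_HW_RTS_MAC_DEFAULT;
	cfg->hwts_en            = VXGE_HW_HWTS_DEFAULT;
}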
-
-/**
- * function vxge_uld_link_up_f - Link-Up callback provided by driver.
- * @devh: HW device handle.
- * Link-up notification callback provided by the driver.
- * This is one of the per-driver callbacks, see struct vxge_hw_uld_cbs{}.
- *
- * See also: struct vxge_hw_uld_cbs{}, vxge_uld_link_down_f{},
- * vxge_hw_driver_initialize().
- */
-
-/**
- * function vxge_uld_link_down_f - Link-Down callback provided by
- * driver.
- * @devh: HW device handle.
- *
- * Link-Down notification callback provided by the driver.
- * This is one of the per-driver callbacks, see struct vxge_hw_uld_cbs{}.
- *
- * See also: struct vxge_hw_uld_cbs{}, vxge_uld_link_up_f{},
- * vxge_hw_driver_initialize().
- */
-
-/**
- * function vxge_uld_crit_err_f - Critical Error notification callback.
- * @devh: HW device handle (typically - at HW device initialization time).
- * @type: Enumerated hw error, e.g.: double ECC.
- * @serr_data: Titan status.
- * @ext_data: Extended data. The contents depends on the @type.
- *
- * Critical error notification callback provided by the driver.
- * This is one of the per-driver callbacks, see struct vxge_hw_uld_cbs{}.
- *
- * See also: struct vxge_hw_uld_cbs{}, enum vxge_hw_event{},
- * vxge_hw_driver_initialize().
- */
-
-/**
- * struct vxge_hw_uld_cbs - driver "slow-path" callbacks.
- * @link_up: See vxge_uld_link_up_f{}.
- * @link_down: See vxge_uld_link_down_f{}.
- * @crit_err: See vxge_uld_crit_err_f{}.
- *
- * Driver slow-path (per-driver) callbacks.
- * Implemented by driver and provided to HW via
- * vxge_hw_driver_initialize().
- * Note that these callbacks are not mandatory: HW will not invoke
- * a callback if NULL is specified.
- *
- * See also: vxge_hw_driver_initialize().
- */
-struct vxge_hw_uld_cbs {
-       void (*link_up)(struct __vxge_hw_device *devh);
-       void (*link_down)(struct __vxge_hw_device *devh);
-       void (*crit_err)(struct __vxge_hw_device *devh,
-                       enum vxge_hw_event type, u64 ext_data);
-};
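
A hypothetical sketch of a driver supplying these callbacks (handler bodies
are placeholders); the table is then provided to HW via
vxge_hw_driver_initialize() as described above:

static void example_link_up(struct __vxge_hw_device *devh) { }
static void example_link_down(struct __vxge_hw_device *devh) { }
static void example_crit_err(struct __vxge_hw_device *devh,
			     enum vxge_hw_event type, u64 ext_data) { }

static const struct vxge_hw_uld_cbs example_uld_cbs = {
	.link_up   = example_link_up,
	.link_down = example_link_down,
	.crit_err  = example_crit_err,
};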
-
-/*
- * struct __vxge_hw_blockpool_entry - Block private data structure
- * @item: List header used to link.
- * @length: Length of the block
- * @memblock: Virtual address block
- * @dma_addr: DMA Address of the block.
- * @dma_handle: DMA handle of the block.
- * @acc_handle: DMA acc handle
- *
- * Block is allocated with a header to put the blocks into list.
- *
- */
-struct __vxge_hw_blockpool_entry {
-       struct list_head        item;
-       u32                     length;
-       void                    *memblock;
-       dma_addr_t              dma_addr;
-       struct pci_dev          *dma_handle;
-       struct pci_dev          *acc_handle;
-};
-
-/*
- * struct __vxge_hw_blockpool - Block Pool
- * @hldev: HW device
- * @block_size: size of each block.
- * @pool_size: Number of blocks in the pool
- * @pool_max: Maximum number of blocks above which to free additional blocks
- * @req_out: Number of block requests outstanding with the OS
- * @free_block_list: List of free blocks
- * @free_entry_list: List of free blockpool entries
- *
- * Block pool contains the DMA blocks preallocated.
- *
- */
-struct __vxge_hw_blockpool {
-       struct __vxge_hw_device *hldev;
-       u32                             block_size;
-       u32                             pool_size;
-       u32                             pool_max;
-       u32                             req_out;
-       struct list_head                free_block_list;
-       struct list_head                free_entry_list;
-};
-
-/*
- * enum __vxge_hw_channel_type - Enumerated channel types.
- * @VXGE_HW_CHANNEL_TYPE_UNKNOWN: Unknown channel.
- * @VXGE_HW_CHANNEL_TYPE_FIFO: fifo.
- * @VXGE_HW_CHANNEL_TYPE_RING: ring.
- * @VXGE_HW_CHANNEL_TYPE_MAX: Maximum number of HW-supported
- * (and recognized) channel types. Currently: 2.
- *
- * Enumerated channel types. Currently there are only two link-layer
- * channels - Titan fifo and Titan ring. In the future the list will grow.
- */
-enum __vxge_hw_channel_type {
-       VXGE_HW_CHANNEL_TYPE_UNKNOWN                    = 0,
-       VXGE_HW_CHANNEL_TYPE_FIFO                       = 1,
-       VXGE_HW_CHANNEL_TYPE_RING                       = 2,
-       VXGE_HW_CHANNEL_TYPE_MAX                        = 3
-};
-
-/*
- * struct __vxge_hw_channel
- * @item: List item; used to maintain a list of open channels.
- * @type: Channel type. See enum vxge_hw_channel_type{}.
- * @devh: Device handle. HW device object that contains _this_ channel.
- * @vph: Virtual path handle. Virtual Path Object that contains _this_ channel.
- * @length: Channel length. Currently allocated number of descriptors.
- *          The channel length "grows" when more descriptors get allocated.
- *          See _hw_mempool_grow.
- * @reserve_arr: Reserve array. Contains descriptors that can be reserved
- *               by driver for the subsequent send or receive operation.
- *               See vxge_hw_fifo_txdl_reserve(),
- *               vxge_hw_ring_rxd_reserve().
- * @reserve_ptr: Current pointer in the reserve array
- * @reserve_top: Reserve top gives the maximum number of descriptors
- *          available in the reserve array.
- * @work_arr: Work array. Contains descriptors posted to the channel.
- *            Note that at any point in time @work_arr contains 3 types of
- *            descriptors:
- *            1) posted but not yet consumed by Titan device;
- *            2) consumed but not yet completed;
- *            3) completed but not yet freed
- *            (via vxge_hw_fifo_txdl_free() or vxge_hw_ring_rxd_free())
- * @post_index: Post index. At any point in time points to the
- *              position in the channel that will contain the next
- *              to-be-posted descriptor.
- * @compl_index: Completion index. At any point in time points to the
- *               position in the channel that will contain the next
- *               to-be-completed descriptor.
- * @free_arr: Free array. Contains completed descriptors that were freed
- *            (i.e., handed over back to HW) by driver.
- *            See vxge_hw_fifo_txdl_free(), vxge_hw_ring_rxd_free().
- * @free_ptr: current pointer in free array
- * @per_dtr_space: Per-descriptor space (in bytes) that channel user can utilize
- *                 to store per-operation control information.
- * @stats: Pointer to common statistics
- * @userdata: Per-channel opaque (void*) user-defined context, which may be
- *            driver object, ULP connection, etc.
- *            Once channel is open, @userdata is passed back to user via
- *            vxge_hw_channel_callback_f.
- *
- * HW channel object.
- *
- * See also: enum vxge_hw_channel_type{}, enum vxge_hw_channel_flag
- */
-struct __vxge_hw_channel {
-       struct list_head                item;
-       enum __vxge_hw_channel_type     type;
-       struct __vxge_hw_device         *devh;
-       struct __vxge_hw_vpath_handle   *vph;
-       u32                     length;
-       u32                     vp_id;
-       void            **reserve_arr;
-       u32                     reserve_ptr;
-       u32                     reserve_top;
-       void            **work_arr;
-       u32                     post_index ____cacheline_aligned;
-       u32                     compl_index ____cacheline_aligned;
-       void            **free_arr;
-       u32                     free_ptr;
-       void            **orig_arr;
-       u32                     per_dtr_space;
-       void            *userdata;
-       struct vxge_hw_common_reg       __iomem *common_reg;
-       u32                     first_vp_id;
-       struct vxge_hw_vpath_stats_sw_common_info *stats;
-
-} ____cacheline_aligned;
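
One way to picture the reserve array described above (illustrative only --
the driver's real bookkeeping also involves @reserve_top and channel growth):

static void *example_reserve(struct __vxge_hw_channel *ch)
{
	/* Treat the reserve array as a stack of available descriptors. */
	if (ch->reserve_ptr == 0)
		return NULL;		/* nothing left to reserve */
	return ch->reserve_arr[--ch->reserve_ptr];
}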
-
-/*
- * struct __vxge_hw_virtualpath - Virtual Path
- *
- * @vp_id: Virtual path id
- * @vp_open: This flag specifies whether vxge_hw_vp_open has been called
- *             from the LL driver
- * @hldev: Hal device
- * @vp_config: Virtual Path Config
- * @vp_reg: VPATH Register map address in BAR0
- * @vpmgmt_reg: VPATH_MGMT register map address
- * @max_mtu: Max mtu that can be supported
- * @vsport_number: vsport attached to this vpath
- * @max_kdfc_db: Maximum kernel mode doorbells
- * @max_nofl_db: Maximum non offload doorbells
- * @tx_intr_num: Interrupt Number associated with the TX
- *
- * @ringh: Ring Queue
- * @fifoh: FIFO Queue
- * @vpath_handles: Virtual Path handles list
- * @stats_block: Memory for DMAing stats
- * @stats: Vpath statistics
- *
- * Virtual path structure to encapsulate the data related to a virtual path.
- * Virtual paths are allocated by the HW upon getting configuration from the
- * driver and inserted into the list of virtual paths.
- */
-struct __vxge_hw_virtualpath {
-       u32                             vp_id;
-
-       u32                             vp_open;
-#define VXGE_HW_VP_NOT_OPEN    0
-#define        VXGE_HW_VP_OPEN         1
-
-       struct __vxge_hw_device         *hldev;
-       struct vxge_hw_vp_config        *vp_config;
-       struct vxge_hw_vpath_reg        __iomem *vp_reg;
-       struct vxge_hw_vpmgmt_reg       __iomem *vpmgmt_reg;
-       struct __vxge_hw_non_offload_db_wrapper __iomem *nofl_db;
-
-       u32                             max_mtu;
-       u32                             vsport_number;
-       u32                             max_kdfc_db;
-       u32                             max_nofl_db;
-       u64                             tim_tti_cfg1_saved;
-       u64                             tim_tti_cfg3_saved;
-       u64                             tim_rti_cfg1_saved;
-       u64                             tim_rti_cfg3_saved;
-
-       struct __vxge_hw_ring *____cacheline_aligned ringh;
-       struct __vxge_hw_fifo *____cacheline_aligned fifoh;
-       struct list_head                vpath_handles;
-       struct __vxge_hw_blockpool_entry                *stats_block;
-       struct vxge_hw_vpath_stats_hw_info      *hw_stats;
-       struct vxge_hw_vpath_stats_hw_info      *hw_stats_sav;
-       struct vxge_hw_vpath_stats_sw_info      *sw_stats;
-       spinlock_t lock;
-};
-
-/*
- * struct __vxge_hw_vpath_handle - List item to store callback information
- * @item: List head to keep the item in linked list
- * @vpath: Virtual path to which this item belongs
- *
- * This structure is used to store the callback information.
- */
-struct __vxge_hw_vpath_handle {
-       struct list_head        item;
-       struct __vxge_hw_virtualpath    *vpath;
-};
-
-/**
- * struct __vxge_hw_device  - Hal device object
- * @magic: Magic Number
- * @bar0: BAR0 virtual address.
- * @pdev: Physical device handle
- * @config: Configuration passed by the LL driver at initialization
- * @link_state: Link state
- *
- * HW device object. Represents Titan adapter
- */
-struct __vxge_hw_device {
-       u32                             magic;
-#define VXGE_HW_DEVICE_MAGIC           0x12345678
-#define VXGE_HW_DEVICE_DEAD            0xDEADDEAD
-       void __iomem                    *bar0;
-       struct pci_dev                  *pdev;
-       struct net_device               *ndev;
-       struct vxge_hw_device_config    config;
-       enum vxge_hw_device_link_state  link_state;
-
-       const struct vxge_hw_uld_cbs    *uld_callbacks;
-
-       u32                             host_type;
-       u32                             func_id;
-       u32                             access_rights;
-#define VXGE_HW_DEVICE_ACCESS_RIGHT_VPATH      0x1
-#define VXGE_HW_DEVICE_ACCESS_RIGHT_SRPCIM     0x2
-#define VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM     0x4
-       struct vxge_hw_legacy_reg       __iomem *legacy_reg;
-       struct vxge_hw_toc_reg          __iomem *toc_reg;
-       struct vxge_hw_common_reg       __iomem *common_reg;
-       struct vxge_hw_mrpcim_reg       __iomem *mrpcim_reg;
-       struct vxge_hw_srpcim_reg       __iomem *srpcim_reg \
-                                       [VXGE_HW_TITAN_SRPCIM_REG_SPACES];
-       struct vxge_hw_vpmgmt_reg       __iomem *vpmgmt_reg \
-                                       [VXGE_HW_TITAN_VPMGMT_REG_SPACES];
-       struct vxge_hw_vpath_reg        __iomem *vpath_reg \
-                                       [VXGE_HW_TITAN_VPATH_REG_SPACES];
-       u8                              __iomem *kdfc;
-       u8                              __iomem *usdc;
-       struct __vxge_hw_virtualpath    virtual_paths \
-                                       [VXGE_HW_MAX_VIRTUAL_PATHS];
-       u64                             vpath_assignments;
-       u64                             vpaths_deployed;
-       u32                             first_vp_id;
-       u64                             tim_int_mask0[4];
-       u32                             tim_int_mask1[4];
-
-       struct __vxge_hw_blockpool      block_pool;
-       struct vxge_hw_device_stats     stats;
-       u32                             debug_module_mask;
-       u32                             debug_level;
-       u32                             level_err;
-       u32                             level_trace;
-       u16 eprom_versions[VXGE_HW_MAX_ROM_IMAGES];
-};
-
-#define VXGE_HW_INFO_LEN       64
-/**
- * struct vxge_hw_device_hw_info - Device information
- * @host_type: Host Type
- * @func_id: Function Id
- * @vpath_mask: vpath bit mask
- * @fw_version: Firmware version
- * @fw_date: Firmware Date
- * @flash_version: Flash version
- * @flash_date: Flash Date
- * @mac_addrs: Mac addresses for each vpath
- * @mac_addr_masks: Mac address masks for each vpath
- *
- * Returns the vpath mask that has the bits set for each vpath allocated
- * for the driver and the first mac address for each vpath
- */
-struct vxge_hw_device_hw_info {
-       u32             host_type;
-#define VXGE_HW_NO_MR_NO_SR_NORMAL_FUNCTION                    0
-#define VXGE_HW_MR_NO_SR_VH0_BASE_FUNCTION                     1
-#define VXGE_HW_NO_MR_SR_VH0_FUNCTION0                         2
-#define VXGE_HW_NO_MR_SR_VH0_VIRTUAL_FUNCTION                  3
-#define VXGE_HW_MR_SR_VH0_INVALID_CONFIG                       4
-#define VXGE_HW_SR_VH_FUNCTION0                                        5
-#define VXGE_HW_SR_VH_VIRTUAL_FUNCTION                         6
-#define VXGE_HW_VH_NORMAL_FUNCTION                             7
-       u64             function_mode;
-#define VXGE_HW_FUNCTION_MODE_SINGLE_FUNCTION                  0
-#define VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION                   1
-#define VXGE_HW_FUNCTION_MODE_SRIOV                            2
-#define VXGE_HW_FUNCTION_MODE_MRIOV                            3
-#define VXGE_HW_FUNCTION_MODE_MRIOV_8                          4
-#define VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION_17                        5
-#define VXGE_HW_FUNCTION_MODE_SRIOV_8                          6
-#define VXGE_HW_FUNCTION_MODE_SRIOV_4                          7
-#define VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION_2                 8
-#define VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION_4                 9
-#define VXGE_HW_FUNCTION_MODE_MRIOV_4                          10
-
-       u32             func_id;
-       u64             vpath_mask;
-       struct vxge_hw_device_version fw_version;
-       struct vxge_hw_device_date    fw_date;
-       struct vxge_hw_device_version flash_version;
-       struct vxge_hw_device_date    flash_date;
-       u8              serial_number[VXGE_HW_INFO_LEN];
-       u8              part_number[VXGE_HW_INFO_LEN];
-       u8              product_desc[VXGE_HW_INFO_LEN];
-       u8 mac_addrs[VXGE_HW_MAX_VIRTUAL_PATHS][ETH_ALEN];
-       u8 mac_addr_masks[VXGE_HW_MAX_VIRTUAL_PATHS][ETH_ALEN];
-};
-
-/**
- * struct vxge_hw_device_attr - Device memory spaces.
- * @bar0: BAR0 virtual address.
- * @pdev: PCI device object.
- *
- * Device memory spaces. Includes configuration, BAR0 and other per-device
- * mapped memories, plus a pointer to the OS-specific PCI device object.
- */
-struct vxge_hw_device_attr {
-       void __iomem            *bar0;
-       struct pci_dev          *pdev;
-       const struct vxge_hw_uld_cbs *uld_callbacks;
-};
-
-#define VXGE_HW_DEVICE_LINK_STATE_SET(hldev, ls)       (hldev->link_state = ls)
-
-#define VXGE_HW_DEVICE_TIM_INT_MASK_SET(m0, m1, i) {   \
-       if (i < 16) {                           \
-               m0[0] |= vxge_vBIT(0x8, (i*4), 4);      \
-               m0[1] |= vxge_vBIT(0x4, (i*4), 4);      \
-       }                                       \
-       else {                                  \
-               m1[0] = 0x80000000;             \
-               m1[1] = 0x40000000;             \
-       }                                       \
-}
-
-#define VXGE_HW_DEVICE_TIM_INT_MASK_RESET(m0, m1, i) { \
-       if (i < 16) {                                   \
-               m0[0] &= ~vxge_vBIT(0x8, (i*4), 4);             \
-               m0[1] &= ~vxge_vBIT(0x4, (i*4), 4);             \
-       }                                               \
-       else {                                          \
-               m1[0] = 0;                              \
-               m1[1] = 0;                              \
-       }                                               \
-}
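
Hypothetical usage of the two helpers above, operating on the mask arrays
kept in struct __vxge_hw_device (defined further below): for vpaths 0-15 the
per-path nibbles in mask0 are touched, otherwise mask1 is written wholesale.

static void example_unmask_tim_int(struct __vxge_hw_device *hldev, u32 vp_id)
{
	/* Clear the TIM interrupt mask bits for this virtual path. */
	VXGE_HW_DEVICE_TIM_INT_MASK_RESET(hldev->tim_int_mask0,
					  hldev->tim_int_mask1, vp_id);
}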
-
-#define VXGE_HW_DEVICE_STATS_PIO_READ(loc, offset) {           \
-       status = vxge_hw_mrpcim_stats_access(hldev, \
-                               VXGE_HW_STATS_OP_READ, \
-                               loc, \
-                               offset, \
-                               &val64);                        \
-       if (status != VXGE_HW_OK)                               \
-               return status;                                          \
-}
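
This macro implicitly uses "hldev", "status" and "val64" from the caller's
scope and returns from the caller on error. A hypothetical caller making
that contract explicit:

static enum vxge_hw_status example_read_stat(struct __vxge_hw_device *hldev,
					     u32 loc, u32 offset, u64 *out)
{
	enum vxge_hw_status status;
	u64 val64;

	/* Reads into val64; returns from this function on failure. */
	VXGE_HW_DEVICE_STATS_PIO_READ(loc, offset);
	*out = val64;
	return status;
}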
-
-/*
- * struct __vxge_hw_ring - Ring channel.
- * @channel: Channel "base" of this ring, the common part of all HW
- *           channels.
- * @mempool: Memory pool, the pool from which descriptors get allocated.
- *           (See vxge_hw_mm.h).
- * @config: Ring configuration, part of device configuration
- *          (see struct vxge_hw_device_config{}).
- * @ring_length: Length of the ring
- * @buffer_mode: 1, 3, or 5. The value specifies a receive buffer mode,
- *          as per Titan User Guide.
- * @rxd_size: RxD sizes for 1-, 3- or 5- buffer modes. As per Titan spec,
- *            1-buffer mode descriptor is 32 byte long, etc.
- * @rxd_priv_size: Per RxD size reserved (by HW) for driver to keep
- *                 per-descriptor data (e.g., DMA handle for Solaris)
- * @per_rxd_space: Per rxd space requested by driver
- * @rxds_per_block: Number of descriptors per hardware-defined RxD
- *                  block. Depends on the (1-, 3-, 5-) buffer mode.
- * @rxdblock_priv_size: Reserved at the end of each RxD block. HW internal
- *                      usage. Not to be confused with @rxd_priv_size.
- * @cmpl_cnt: Completion counter. Is reset to zero upon entering the ISR.
- * @callback: Channel completion callback. HW invokes the callback when there
- *            are new completions on that channel. In many implementations
- *            the @callback executes in the hw interrupt context.
- * @rxd_init: Channel's descriptor-initialize callback.
- *            See vxge_hw_ring_rxd_init_f{}.
- *            If not NULL, HW invokes the callback when opening
- *            the ring.
- * @rxd_term: Channel's descriptor-terminate callback. If not NULL,
- *          HW invokes the callback when closing the corresponding channel.
- *          See also vxge_hw_channel_rxd_term_f{}.
- * @stats: Statistics for ring
- * Ring channel.
- *
- * Note: The structure is cache line aligned to better utilize
- *       CPU cache performance.
- */
-struct __vxge_hw_ring {
-       struct __vxge_hw_channel                channel;
-       struct vxge_hw_mempool                  *mempool;
-       struct vxge_hw_vpath_reg                __iomem *vp_reg;
-       struct vxge_hw_common_reg               __iomem *common_reg;
-       u32                                     ring_length;
-       u32                                     buffer_mode;
-       u32                                     rxd_size;
-       u32                                     rxd_priv_size;
-       u32                                     per_rxd_space;
-       u32                                     rxds_per_block;
-       u32                                     rxdblock_priv_size;
-       u32                                     cmpl_cnt;
-       u32                                     vp_id;
-       u32                                     doorbell_cnt;
-       u32                                     total_db_cnt;
-       u64                                     rxds_limit;
-       u32                                     rtimer;
-       u64                                     tim_rti_cfg1_saved;
-       u64                                     tim_rti_cfg3_saved;
-
-       enum vxge_hw_status (*callback)(
-                       struct __vxge_hw_ring *ringh,
-                       void *rxdh,
-                       u8 t_code,
-                       void *userdata);
-
-       enum vxge_hw_status (*rxd_init)(
-                       void *rxdh,
-                       void *userdata);
-
-       void (*rxd_term)(
-                       void *rxdh,
-                       enum vxge_hw_rxd_state state,
-                       void *userdata);
-
-       struct vxge_hw_vpath_stats_sw_ring_info *stats  ____cacheline_aligned;
-       struct vxge_hw_ring_config              *config;
-} ____cacheline_aligned;
-
-/**
- * enum vxge_hw_txdl_state - Descriptor (TXDL) state.
- * @VXGE_HW_TXDL_STATE_NONE: Invalid state.
- * @VXGE_HW_TXDL_STATE_AVAIL: Descriptor is available for reservation.
- * @VXGE_HW_TXDL_STATE_POSTED: Descriptor is posted for processing by the
- * device.
- * @VXGE_HW_TXDL_STATE_FREED: Descriptor is free and can be reused for
- * filling-in and posting later.
- *
- * Titan/HW descriptor states.
- *
- */
-enum vxge_hw_txdl_state {
-       VXGE_HW_TXDL_STATE_NONE = 0,
-       VXGE_HW_TXDL_STATE_AVAIL        = 1,
-       VXGE_HW_TXDL_STATE_POSTED       = 2,
-       VXGE_HW_TXDL_STATE_FREED        = 3
-};
-/*
- * struct __vxge_hw_fifo - Fifo.
- * @channel: Channel "base" of this fifo, the common part of all HW
- *             channels.
- * @mempool: Memory pool, from which descriptors get allocated.
- * @config: Fifo configuration, part of device configuration
- *             (see struct vxge_hw_device_config{}).
- * @interrupt_type: Interrupt type to be used
- * @no_snoop_bits: See struct vxge_hw_fifo_config{}.
- * @txdl_per_memblock: Number of TxDLs (TxD lists) per memblock. For details
- *             on TxDLs please refer to the Titan UG.
- * @txdl_size: Configured TxDL size (i.e., number of TxDs in a list), plus
- *             per-TxDL HW private space (struct __vxge_hw_fifo_txdl_priv).
- * @priv_size: Per-Tx descriptor space reserved for driver
- *             usage.
- * @per_txdl_space: Per txdl private space for the driver
- * @callback: Fifo completion callback. HW invokes the callback when there
- *             are new completions on that fifo. In many implementations
- *             the @callback executes in the hw interrupt context.
- * @txdl_term: Fifo's descriptor-terminate callback. If not NULL,
- *             HW invokes the callback when closing the corresponding fifo.
- *             See also vxge_hw_fifo_txdl_term_f{}.
- * @stats: Statistics of this fifo
- *
- * Fifo channel.
- * Note: The structure is cache line aligned.
- */
-struct __vxge_hw_fifo {
-       struct __vxge_hw_channel                channel;
-       struct vxge_hw_mempool                  *mempool;
-       struct vxge_hw_fifo_config              *config;
-       struct vxge_hw_vpath_reg                __iomem *vp_reg;
-       struct __vxge_hw_non_offload_db_wrapper __iomem *nofl_db;
-       u64                                     interrupt_type;
-       u32                                     no_snoop_bits;
-       u32                                     txdl_per_memblock;
-       u32                                     txdl_size;
-       u32                                     priv_size;
-       u32                                     per_txdl_space;
-       u32                                     vp_id;
-       u32                                     tx_intr_num;
-       u32                                     rtimer;
-       u64                                     tim_tti_cfg1_saved;
-       u64                                     tim_tti_cfg3_saved;
-
-       enum vxge_hw_status (*callback)(
-                       struct __vxge_hw_fifo *fifo_handle,
-                       void *txdlh,
-                       enum vxge_hw_fifo_tcode t_code,
-                       void *userdata,
-                       struct sk_buff ***skb_ptr,
-                       int nr_skb,
-                       int *more);
-
-       void (*txdl_term)(
-                       void *txdlh,
-                       enum vxge_hw_txdl_state state,
-                       void *userdata);
-
-       struct vxge_hw_vpath_stats_sw_fifo_info *stats ____cacheline_aligned;
-} ____cacheline_aligned;
-
-/*
- * struct __vxge_hw_fifo_txdl_priv - Transmit descriptor HW-private data.
- * @dma_addr: DMA (mapped) address of _this_ descriptor.
- * @dma_handle: DMA handle used to map the descriptor onto device.
- * @dma_offset: Descriptor's offset in the memory block. HW allocates
- *      descriptors in memory blocks (see struct vxge_hw_fifo_config{})
- *             Each memblock is a contiguous block of DMA-able memory.
- * @frags: Total number of fragments (that is, contiguous data buffers)
- * carried by this TxDL.
- * @align_vaddr_start: Aligned virtual address start
- * @align_vaddr: Virtual address of the per-TxDL area in memory used for
- *             alignment. Used to place one or more mis-aligned fragments.
- * @align_dma_addr: DMA address translated from the @align_vaddr.
- * @align_dma_handle: DMA handle that corresponds to @align_dma_addr.
- * @align_dma_acch: DMA access handle that corresponds to @align_dma_addr.
- * @align_dma_offset: The current offset into the @align_vaddr area.
- * Grows while filling the descriptor, gets reset.
- * @align_used_frags: Number of fragments used.
- * @alloc_frags: Total number of fragments allocated.
- * @unused: TODO
- * @next_txdl_priv: (TODO).
- * @first_txdp: (TODO).
- * @linked_txdl_priv: Pointer to any linked TxDL for creating contiguous
- *             TxDL list.
- * @txdlh: Corresponding txdlh to this TxDL.
- * @memblock: Pointer to the TxDL memory block or memory page.
- * @dma_object: DMA address and handle of the memory block that contains
- *             the descriptor. This member is used only in the "checked"
- *             version of the HW (to enforce certain assertions);
- *             otherwise it gets compiled out.
- * @allocated: True if the descriptor is reserved, 0 otherwise. Internal usage.
- *
- * Per-transmit descriptor HW-private data. HW uses the space to keep DMA
- * information associated with the descriptor. Note that the driver can ask HW
- * to allocate additional per-descriptor space for its own (driver-specific)
- * purposes.
- *
- * See also: struct vxge_hw_ring_rxd_priv{}.
- */
-struct __vxge_hw_fifo_txdl_priv {
-       dma_addr_t              dma_addr;
-       struct pci_dev  *dma_handle;
-       ptrdiff_t               dma_offset;
-       u32                             frags;
-       u8                              *align_vaddr_start;
-       u8                              *align_vaddr;
-       dma_addr_t              align_dma_addr;
-       struct pci_dev  *align_dma_handle;
-       struct pci_dev  *align_dma_acch;
-       ptrdiff_t               align_dma_offset;
-       u32                             align_used_frags;
-       u32                             alloc_frags;
-       u32                             unused;
-       struct __vxge_hw_fifo_txdl_priv *next_txdl_priv;
-       struct vxge_hw_fifo_txd         *first_txdp;
-       void                    *memblock;
-};
-
-/*
- * struct __vxge_hw_non_offload_db_wrapper - Non-offload Doorbell Wrapper
- * @control_0: Bits 0 to 7 - Doorbell type.
- *             Bits 8 to 31 - Reserved.
- *             Bits 32 to 39 - The highest TxD in this TxDL.
- *             Bits 40 to 47 - Reserved.
- *             Bits 48 to 55 - Reserved.
- *             Bits 56 to 63 - No snoop flags.
- * @txdl_ptr:  The starting location of the TxDL in host memory.
- *
- * Created by the host and written to the adapter via PIO to a Kernel Doorbell
- * FIFO. All non-offload doorbell wrapper fields must be written by the host as
- * part of a doorbell write. Consumed by the adapter but is not written by the
- * adapter.
- */
-struct __vxge_hw_non_offload_db_wrapper {
-       u64             control_0;
-#define        VXGE_HW_NODBW_GET_TYPE(ctrl0)                   vxge_bVALn(ctrl0, 0, 8)
-#define VXGE_HW_NODBW_TYPE(val) vxge_vBIT(val, 0, 8)
-#define        VXGE_HW_NODBW_TYPE_NODBW                                0
-
-#define        VXGE_HW_NODBW_GET_LAST_TXD_NUMBER(ctrl0)        vxge_bVALn(ctrl0, 32, 8)
-#define VXGE_HW_NODBW_LAST_TXD_NUMBER(val) vxge_vBIT(val, 32, 8)
-
-#define        VXGE_HW_NODBW_GET_NO_SNOOP(ctrl0)               vxge_bVALn(ctrl0, 56, 8)
-#define VXGE_HW_NODBW_LIST_NO_SNOOP(val) vxge_vBIT(val, 56, 8)
-#define        VXGE_HW_NODBW_LIST_NO_SNOOP_TXD_READ_TXD0_WRITE         0x2
-#define        VXGE_HW_NODBW_LIST_NO_SNOOP_TX_FRAME_DATA_READ          0x1
-
-       u64             txdl_ptr;
-};
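
A hypothetical sketch of composing such a doorbell with the accessors above
(no-snoop flags left at zero; the wrapper would then be written to the
Kernel Doorbell FIFO via PIO, as described above):

static void example_fill_nodbw(struct __vxge_hw_non_offload_db_wrapper *db,
			       u64 txdl_dma, u32 last_txd)
{
	/* Doorbell type plus the highest TxD number in the TxDL. */
	db->control_0 = VXGE_HW_NODBW_TYPE(VXGE_HW_NODBW_TYPE_NODBW) |
			VXGE_HW_NODBW_LAST_TXD_NUMBER(last_txd);
	db->txdl_ptr = txdl_dma;
}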
-
-/*
- * TX Descriptor
- */
-
-/**
- * struct vxge_hw_fifo_txd - Transmit Descriptor
- * @control_0: Bits 0 to 6 - Reserved.
- *             Bit 7 - List Ownership. This field should be initialized
- *             to '1' by the driver before the transmit list pointer is
- *             written to the adapter. This field will be set to '0' by the
- *             adapter once it has completed transmitting the frame or frames in
- *             the list. Note - This field is only valid in TxD0. Additionally,
- *             for multi-list sequences, the driver should not release any
- *             buffers until the ownership of the last list in the multi-list
- *             sequence has been returned to the host.
- *             Bits 8 to 11 - Reserved
- *             Bits 12 to 15 - Transfer_Code. This field is only valid in
- *             TxD0. It is used to describe the status of the transmit data
- *             buffer transfer. This field is always overwritten by the
- *             adapter, so this field may be initialized to any value.
- *             Bits 16 to 17 - Host steering. This field allows the host to
- *             override the selection of the physical transmit port.
- *             Attention:
- *             "Normal" sounds as if learned from the switch rather than
- *             from the aggregation algorithms.
- *             00: Normal. Use Destination/MAC Address
- *             lookup to determine the transmit port.
- *             01: Send on physical Port1.
- *             10: Send on physical Port0.
- *             11: Send on both ports.
- *             Bits 18 to 21 - Reserved
- *             Bits 22 to 23 - Gather_Code. This field is set by the host and
- *             is used to describe how individual buffers comprise a frame.
- *             10: First descriptor of a frame.
- *             00: Middle of a multi-descriptor frame.
- *             01: Last descriptor of a frame.
- *             11: First and last descriptor of a frame (the entire frame
- *             resides in a single buffer).
- *             For multi-descriptor frames, the only valid gather code sequence
- *             is {10, [00], 01}. In other words, the descriptors must be placed
- *             in the list in the correct order.
- *             Bits 24 to 27 - Reserved
- *             Bits 28 to 29 - LSO_Frm_Encap. LSO Frame Encapsulation
- *             definition. Only valid in TxD0. This field allows the host to
- *             indicate the Ethernet encapsulation of an outbound LSO packet.
- *             00 - classic mode (best guess)
- *             01 - LLC
- *             10 - SNAP
- *             11 - DIX
- *             If "classic mode" is selected, the adapter will attempt to
- *             decode the frame's Ethernet encapsulation by examining the L/T
- *             field as follows:
- *             <= 0x05DC LLC/SNAP encoding; must examine DSAP/SSAP to determine
- *             if packet is IPv4 or IPv6.
- *             0x8870 Jumbo-SNAP encoding.
- *             0x0800 IPv4 DIX encoding
- *             0x86DD IPv6 DIX encoding
- *             others illegal encapsulation
- *             Bits 30 - LSO_ Flag. Large Send Offload (LSO) flag.
- *             Set to 1 to perform segmentation offload for TCP/UDP.
- *             This field is valid only in TxD0.
- *             Bits 31 to 33 - Reserved.
- *             Bits 34 to 47 - LSO_MSS. TCP/UDP LSO Maximum Segment Size
- *             This field is meaningful only when LSO_Control is non-zero.
- *             When LSO_Control is set to TCP_LSO, the single (possibly large)
- *             TCP segment described by this TxDL will be sent as a series of
- *             TCP segments each of which contains no more than LSO_MSS
- *             payload bytes.
- *             When LSO_Control is set to UDP_LSO, the single (possibly large)
- *             UDP datagram described by this TxDL will be sent as a series of
- *             UDP datagrams each of which contains no more than LSO_MSS
- *             payload bytes.
- *             All outgoing frames from this TxDL will have LSO_MSS bytes of UDP
- *             or TCP payload, with the exception of the last, which will have
- *             <= LSO_MSS bytes of payload.
- *             Bits 48 to 63 - Buffer_Size. Number of valid bytes in the
- *             buffer to be read by the adapter. This field is written by the
- *             host. A value of 0 is illegal.
- *            Bits 32 to 63 - This value is written by the adapter upon
- *            completion of a UDP or TCP LSO operation and indicates the number
- *             of UDP or TCP payload bytes that were transmitted. 0x0000 will be
- *             returned for any non-LSO operation.
- * @control_1: Bits 0 to 4 - Reserved.
- *             Bit 5 - Tx_CKO_IPv4 Set to a '1' to enable IPv4 header checksum
- *             offload. This field is only valid in the first TxD of a frame.
- *             Bit 6 - Tx_CKO_TCP Set to a '1' to enable TCP checksum offload.
- *             This field is only valid in the first TxD of a frame (the TxD's
- *             gather code must be 10 or 11). The driver should only set this
- *             bit if it can guarantee that TCP is present.
- *             Bit 7 - Tx_CKO_UDP Set to a '1' to enable UDP checksum offload.
- *             This field is only valid in the first TxD of a frame (the TxD's
- *             gather code must be 10 or 11). The driver should only set this
- *             bit if it can guarantee that UDP is present.
- *             Bits 8 to 14 - Reserved.
- *             Bit 15 - Tx_VLAN_Enable VLAN tag insertion flag. Set to a '1' to
- *             instruct the adapter to insert the VLAN tag specified by the
- *             Tx_VLAN_Tag field. This field is only valid in the first TxD of
- *             a frame.
- *             Bits 16 to 31 - Tx_VLAN_Tag. Variable portion of the VLAN tag
- *             to be inserted into the frame by the adapter (the first two bytes
- *             of a VLAN tag are always 0x8100). This field is only valid if the
- *             Tx_VLAN_Enable field is set to '1'.
- *             Bits 32 to 33 - Reserved.
- *             Bits 34 to 39 - Tx_Int_Number. Indicates which Tx interrupt
- *             number the frame associated with. This field is written by the
- *             host. It is only valid in the first TxD of a frame.
- *             Bits 40 to 42 - Reserved.
- *             Bit 43 - Set to 1 to exclude the frame from bandwidth metering
- *             functions. This field is valid only in the first TxD
- *             of a frame.
- *             Bits 44 to 45 - Reserved.
- *             Bit 46 - Tx_Int_Per_List Set to a '1' to instruct the adapter to
- *             generate an interrupt as soon as all of the frames in the list
- *             have been transmitted. In order to have per-frame interrupts,
- *             the driver should place a maximum of one frame per list. This
- *             field is only valid in the first TxD of a frame.
- *             Bit 47 - Tx_Int_Utilization Set to a '1' to instruct the adapter
- *             to count the frame toward the utilization interrupt specified in
- *             the Tx_Int_Number field. This field is only valid in the first
- *             TxD of a frame.
- *             Bits 48 to 63 - Reserved.
- * @buffer_pointer: Buffer start address.
- * @host_control: Host_Control.Opaque 64bit data stored by driver inside the
- *            Titan descriptor prior to posting the latter on the fifo
- *            via vxge_hw_fifo_txdl_post().The %host_control is returned as is
- *            to the driver with each completed descriptor.
- *
- * Transmit descriptor (TxD). Fifo descriptor contains a configured number
- * (list) of TxDs. For more details please refer to Titan User Guide,
- * Section 5.4.2 "Transmit Descriptor (TxD) Format".
- */
-struct vxge_hw_fifo_txd {
-       u64 control_0;
-#define VXGE_HW_FIFO_TXD_LIST_OWN_ADAPTER              vxge_mBIT(7)
-
-#define VXGE_HW_FIFO_TXD_T_CODE_GET(ctrl0)             vxge_bVALn(ctrl0, 12, 4)
-#define VXGE_HW_FIFO_TXD_T_CODE(val)                   vxge_vBIT(val, 12, 4)
-#define VXGE_HW_FIFO_TXD_T_CODE_UNUSED         VXGE_HW_FIFO_T_CODE_UNUSED
-
-
-#define VXGE_HW_FIFO_TXD_GATHER_CODE(val)              vxge_vBIT(val, 22, 2)
-#define VXGE_HW_FIFO_TXD_GATHER_CODE_FIRST     VXGE_HW_FIFO_GATHER_CODE_FIRST
-#define VXGE_HW_FIFO_TXD_GATHER_CODE_LAST      VXGE_HW_FIFO_GATHER_CODE_LAST
-
-
-#define VXGE_HW_FIFO_TXD_LSO_EN                                vxge_mBIT(30)
-
-#define VXGE_HW_FIFO_TXD_LSO_MSS(val)                  vxge_vBIT(val, 34, 14)
-
-#define VXGE_HW_FIFO_TXD_BUFFER_SIZE(val)              vxge_vBIT(val, 48, 16)
-
-       u64 control_1;
-#define VXGE_HW_FIFO_TXD_TX_CKO_IPV4_EN                        vxge_mBIT(5)
-#define VXGE_HW_FIFO_TXD_TX_CKO_TCP_EN                 vxge_mBIT(6)
-#define VXGE_HW_FIFO_TXD_TX_CKO_UDP_EN                 vxge_mBIT(7)
-#define VXGE_HW_FIFO_TXD_VLAN_ENABLE                   vxge_mBIT(15)
-
-#define VXGE_HW_FIFO_TXD_VLAN_TAG(val)                         vxge_vBIT(val, 16, 16)
-
-#define VXGE_HW_FIFO_TXD_INT_NUMBER(val)               vxge_vBIT(val, 34, 6)
-
-#define VXGE_HW_FIFO_TXD_INT_TYPE_PER_LIST             vxge_mBIT(46)
-#define VXGE_HW_FIFO_TXD_INT_TYPE_UTILZ                        vxge_mBIT(47)
-
-       u64 buffer_pointer;
-
-       u64 host_control;
-};
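
A hypothetical sketch of preparing a one-buffer TxD with the macros above;
assuming the FIRST (10) and LAST (01) gather codes documented earlier,
OR-ing them yields the first-and-last (11) code for a frame that fits in a
single buffer:

static void example_fill_txd(struct vxge_hw_fifo_txd *txdp,
			     dma_addr_t buf_dma, u32 len)
{
	txdp->buffer_pointer = buf_dma;
	txdp->host_control = 0;
	txdp->control_1 = VXGE_HW_FIFO_TXD_TX_CKO_IPV4_EN |
			  VXGE_HW_FIFO_TXD_TX_CKO_TCP_EN;
	/* Ownership is set to '1' by the driver before the list pointer
	 * is handed to the adapter, per the rules above. */
	txdp->control_0 =
		VXGE_HW_FIFO_TXD_GATHER_CODE(VXGE_HW_FIFO_TXD_GATHER_CODE_FIRST |
					     VXGE_HW_FIFO_TXD_GATHER_CODE_LAST) |
		VXGE_HW_FIFO_TXD_BUFFER_SIZE(len) |
		VXGE_HW_FIFO_TXD_LIST_OWN_ADAPTER;
}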
-
-/**
- * struct vxge_hw_ring_rxd_1 - One buffer mode RxD for ring
- * @host_control: This field is exclusively for host use and is "readonly"
- *             from the adapter's perspective.
- * @control_0: Bits 0 to 6 - RTH_Bucket.
- *           Bit 7 - Own Descriptor ownership bit. This bit is set to 1
- *            by the host, and is set to 0 by the adapter.
- *           0 - Host owns RxD and buffer.
- *           1 - The adapter owns RxD and buffer.
- *           Bit 8 - Fast_Path_Eligible When set, indicates that the
- *            received frame meets all of the criteria for fast path processing.
- *            The required criteria are as follows:
- *            !SYN &
- *            (Transfer_Code == "Transfer OK") &
- *            (!Is_IP_Fragment) &
- *            ((Is_IPv4 & computed_L3_checksum == 0xFFFF) |
- *            (Is_IPv6)) &
- *            ((Is_TCP & computed_L4_checksum == 0xFFFF) |
- *            (Is_UDP & (computed_L4_checksum == 0xFFFF |
- *            computed _L4_checksum == 0x0000)))
- *            (same meaning for all RxD buffer modes)
- *           Bit 9 - L3 Checksum Correct
- *           Bit 10 - L4 Checksum Correct
- *           Bit 11 - Reserved
- *           Bits 12 to 15 - Transfer_Code. This field is written by the
- *            adapter. It is used to report the status of the frame transfer
- *            to the host.
- *           0x0 - Transfer OK
- *           0x4 - RDA Failure During Transfer
- *           0x5 - Unparseable Packet, such as unknown IPv6 header.
- *           0x6 - Frame integrity error (FCS or ECC).
- *           0x7 - Buffer Size Error. The provided buffer(s) were not
- *                  appropriately sized and data loss occurred.
- *           0x8 - Internal ECC Error. RxD corrupted.
- *           0x9 - IPv4 Checksum error
- *           0xA - TCP/UDP Checksum error
- *           0xF - Unknown Error or Multiple Error. Indicates an
- *               unknown problem or that more than one of transfer codes is set.
- *           Bit 16 - SYN The adapter sets this field to indicate that
- *                the incoming frame contained a TCP segment with its SYN bit
- *               set and its ACK bit NOT set. (same meaning for all RxD buffer
- *                modes)
- *           Bit 17 - Is ICMP
- *           Bit 18 - RTH_SPDM_HIT Set to 1 if there was a match in the
- *                Socket Pair Direct Match Table and the frame was steered based
- *                on SPDM.
- *           Bit 19 - RTH_IT_HIT Set to 1 if there was a match in the
- *            Indirection Table and the frame was steered based on hash
- *            indirection.
- *           Bits 20 to 23 - RTH_HASH_TYPE Indicates the function (hash
- *               type) that was used to calculate the hash.
- *           Bit 24 - IS_VLAN Set to '1' if the frame was/is VLAN
- *               tagged.
- *           Bits 25 to 26 - ETHER_ENCAP Reflects the Ethernet encapsulation
- *                of the received frame.
- *           0x0 - Ethernet DIX
- *           0x1 - LLC
- *           0x2 - SNAP (includes Jumbo-SNAP)
- *           0x3 - IPX
- *           Bit 27 - IS_IPV4 Set to '1' if the frame contains an IPv4 packet.
- *           Bit 28 - IS_IPV6 Set to '1' if the frame contains an IPv6 packet.
- *           Bit 29 - IS_IP_FRAG Set to '1' if the frame contains a fragmented
- *            IP packet.
- *           Bit 30 - IS_TCP Set to '1' if the frame contains a TCP segment.
- *           Bit 31 - IS_UDP Set to '1' if the frame contains a UDP message.
- *           Bits 32 to 47 - L3_Checksum[0:15] The IPv4 checksum value that
- *            arrived with the frame. If the resulting computed IPv4 header
- *            checksum for the frame did not produce the expected 0xFFFF value,
- *            then the transfer code would be set to 0x9.
- *           Bits 48 to 63 - L4_Checksum[0:15] The TCP/UDP checksum value that
- *            arrived with the frame. If the resulting computed TCP/UDP checksum
- *            for the frame did not produce the expected 0xFFFF value, then the
- *            transfer code would be set to 0xA.
- * @control_1: Bits 0 to 1 - Reserved
- *            Bits 2 to 15 - Buffer0_Size. This field is set by the host and
- *            eventually overwritten by the adapter. The host writes the
- *            available buffer size in bytes when it passes the descriptor to
- *            the adapter. When a frame is delivered to the host, the adapter
- *            populates this field with the number of bytes written into the
- *            buffer. The largest supported buffer is 16,383 bytes.
- *           Bits 16 to 47 - RTH Hash Value. 32-bit RTH hash value. Only
- *           valid if RTH_HASH_TYPE (Control_0, bits 20:23) is nonzero.
- *           Bits 48 to 63 - VLAN_Tag[0:15] The contents of the variable portion
- *            of the VLAN tag, if one was detected by the adapter. This field is
- *            populated even if VLAN-tag stripping is enabled.
- * @buffer0_ptr: Pointer to buffer. This field is populated by the driver.
- *
- * One buffer mode RxD for ring structure
- */
-struct vxge_hw_ring_rxd_1 {
-       u64 host_control;
-       u64 control_0;
-#define VXGE_HW_RING_RXD_RTH_BUCKET_GET(ctrl0)         vxge_bVALn(ctrl0, 0, 7)
-
-#define VXGE_HW_RING_RXD_LIST_OWN_ADAPTER              vxge_mBIT(7)
-
-#define VXGE_HW_RING_RXD_FAST_PATH_ELIGIBLE_GET(ctrl0) vxge_bVALn(ctrl0, 8, 1)
-
-#define VXGE_HW_RING_RXD_L3_CKSUM_CORRECT_GET(ctrl0)   vxge_bVALn(ctrl0, 9, 1)
-
-#define VXGE_HW_RING_RXD_L4_CKSUM_CORRECT_GET(ctrl0)   vxge_bVALn(ctrl0, 10, 1)
-
-#define VXGE_HW_RING_RXD_T_CODE_GET(ctrl0)             vxge_bVALn(ctrl0, 12, 4)
-#define VXGE_HW_RING_RXD_T_CODE(val)                   vxge_vBIT(val, 12, 4)
-
-#define VXGE_HW_RING_RXD_T_CODE_UNUSED         VXGE_HW_RING_T_CODE_UNUSED
-
-#define VXGE_HW_RING_RXD_SYN_GET(ctrl0)                vxge_bVALn(ctrl0, 16, 1)
-
-#define VXGE_HW_RING_RXD_IS_ICMP_GET(ctrl0)            vxge_bVALn(ctrl0, 17, 1)
-
-#define VXGE_HW_RING_RXD_RTH_SPDM_HIT_GET(ctrl0)       vxge_bVALn(ctrl0, 18, 1)
-
-#define VXGE_HW_RING_RXD_RTH_IT_HIT_GET(ctrl0)         vxge_bVALn(ctrl0, 19, 1)
-
-#define VXGE_HW_RING_RXD_RTH_HASH_TYPE_GET(ctrl0)      vxge_bVALn(ctrl0, 20, 4)
-
-#define VXGE_HW_RING_RXD_IS_VLAN_GET(ctrl0)            vxge_bVALn(ctrl0, 24, 1)
-
-#define VXGE_HW_RING_RXD_ETHER_ENCAP_GET(ctrl0)                vxge_bVALn(ctrl0, 25, 2)
-
-#define VXGE_HW_RING_RXD_FRAME_PROTO_GET(ctrl0)                vxge_bVALn(ctrl0, 27, 5)
-
-#define VXGE_HW_RING_RXD_L3_CKSUM_GET(ctrl0)   vxge_bVALn(ctrl0, 32, 16)
-
-#define VXGE_HW_RING_RXD_L4_CKSUM_GET(ctrl0)   vxge_bVALn(ctrl0, 48, 16)
-
-       u64 control_1;
-
-#define VXGE_HW_RING_RXD_1_BUFFER0_SIZE_GET(ctrl1)     vxge_bVALn(ctrl1, 2, 14)
-#define VXGE_HW_RING_RXD_1_BUFFER0_SIZE(val) vxge_vBIT(val, 2, 14)
-#define VXGE_HW_RING_RXD_1_BUFFER0_SIZE_MASK           vxge_vBIT(0x3FFF, 2, 14)
-
-#define VXGE_HW_RING_RXD_1_RTH_HASH_VAL_GET(ctrl1)    vxge_bVALn(ctrl1, 16, 32)
-
-#define VXGE_HW_RING_RXD_VLAN_TAG_GET(ctrl1)   vxge_bVALn(ctrl1, 48, 16)
-
-       u64 buffer0_ptr;
-};
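
A hypothetical check of a completed descriptor using the accessors above
(transfer code 0x0 is "Transfer OK" in the list documented earlier):

static int example_rxd_ok(struct vxge_hw_ring_rxd_1 *rxdp)
{
	/* Host must own the RxD again and the transfer code must be OK. */
	return !(rxdp->control_0 & VXGE_HW_RING_RXD_LIST_OWN_ADAPTER) &&
	       VXGE_HW_RING_RXD_T_CODE_GET(rxdp->control_0) == 0;
}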
-
-enum vxge_hw_rth_algoritms {
-       RTH_ALG_JENKINS = 0,
-       RTH_ALG_MS_RSS  = 1,
-       RTH_ALG_CRC32C  = 2
-};
-
-/**
- * struct vxge_hw_rth_hash_types - RTH hash types.
- * @hash_type_tcpipv4_en: Enables RTH field type HashTypeTcpIPv4
- * @hash_type_ipv4_en: Enables RTH field type HashTypeIPv4
- * @hash_type_tcpipv6_en: Enables RTH field type HashTypeTcpIPv6
- * @hash_type_ipv6_en: Enables RTH field type HashTypeIPv6
- * @hash_type_tcpipv6ex_en: Enables RTH field type HashTypeTcpIPv6Ex
- * @hash_type_ipv6ex_en: Enables RTH field type HashTypeIPv6Ex
- *
- * Used to pass RTH hash types to vxge_hw_vpath_rts_rth_set().
- *
- * See also: vxge_hw_vpath_rts_rth_set(), vxge_hw_vpath_rts_rth_get().
- */
-struct vxge_hw_rth_hash_types {
-       u8 hash_type_tcpipv4_en:1,
-          hash_type_ipv4_en:1,
-          hash_type_tcpipv6_en:1,
-          hash_type_ipv6_en:1,
-          hash_type_tcpipv6ex_en:1,
-          hash_type_ipv6ex_en:1;
-};
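
For instance, a hypothetical initializer enabling only the IPv4 hash types
before handing the structure to vxge_hw_vpath_rts_rth_set():

static const struct vxge_hw_rth_hash_types example_hash_types = {
	.hash_type_tcpipv4_en = 1,
	.hash_type_ipv4_en    = 1,
};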
-
-void vxge_hw_device_debug_set(
-       struct __vxge_hw_device *devh,
-       enum vxge_debug_level level,
-       u32 mask);
-
-u32
-vxge_hw_device_error_level_get(struct __vxge_hw_device *devh);
-
-u32
-vxge_hw_device_trace_level_get(struct __vxge_hw_device *devh);
-
-/**
- * vxge_hw_ring_rxd_size_get - Get the size of ring descriptor.
- * @buf_mode: Buffer mode (1, 3 or 5); only 1-buffer mode is implemented,
- * so the size of struct vxge_hw_ring_rxd_1 is returned for all modes.
- *
- * This function returns the size of RxD for the given buffer mode
-static inline u32 vxge_hw_ring_rxd_size_get(u32 buf_mode)
-{
-       return sizeof(struct vxge_hw_ring_rxd_1);
-}
-
-/**
- * vxge_hw_ring_rxds_per_block_get - Get the number of rxds per block.
- * @buf_mode: Buffer mode (1 buffer mode only)
- *
- * This function returns the number of RxD for RxD block for given buffer mode
- */
-static inline u32 vxge_hw_ring_rxds_per_block_get(u32 buf_mode)
-{
-       return (u32)((VXGE_HW_BLOCK_SIZE-16) /
-               sizeof(struct vxge_hw_ring_rxd_1));
-}
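
Worked example, assuming VXGE_HW_BLOCK_SIZE is one 4096-byte page: struct
vxge_hw_ring_rxd_1 above consists of four u64 words (32 bytes), so
(4096 - 16) / 32 = 127 RxDs fit in each block, the 16 subtracted bytes being
reserved at the end of the block for HW-internal use (cf. @rxdblock_priv_size
in struct __vxge_hw_ring above).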
-
-/**
- * vxge_hw_ring_rxd_1b_set - Prepare 1-buffer-mode descriptor.
- * @rxdh: Descriptor handle.
- * @dma_pointer: DMA address of a single receive buffer this descriptor
- * should carry. Note that by the time vxge_hw_ring_rxd_1b_set is called,
- * the receive buffer should already be mapped to the device
- * @size: Size of the receive @dma_pointer buffer.
- *
- * Prepare 1-buffer-mode Rx descriptor for posting
- * (via vxge_hw_ring_rxd_post()).
- *
- * This inline helper-function does not return any parameters and always
- * succeeds.
- *
- */
-static inline
-void vxge_hw_ring_rxd_1b_set(
-       void *rxdh,
-       dma_addr_t dma_pointer,
-       u32 size)
-{
-       struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
-       rxdp->buffer0_ptr = dma_pointer;
-       rxdp->control_1 &= ~VXGE_HW_RING_RXD_1_BUFFER0_SIZE_MASK;
-       rxdp->control_1 |= VXGE_HW_RING_RXD_1_BUFFER0_SIZE(size);
-}
-
-/**
- * vxge_hw_ring_rxd_1b_get - Get data from the completed 1-buf
- * descriptor.
- * @ring_handle: Ring handle.
- * @rxdh: Descriptor handle.
- * @pkt_length: Length (in bytes) of the data in the buffer pointed to by
- * this descriptor. Returned by HW.
- *
- * Retrieve the packet length from the completed 1-buffer-mode Rx descriptor.
- * This inline helper-function uses the completed descriptor to populate the
- * "out" parameter. The function always succeeds.
- *
- */
-static inline
-void vxge_hw_ring_rxd_1b_get(
-       struct __vxge_hw_ring *ring_handle,
-       void *rxdh,
-       u32 *pkt_length)
-{
-       struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
-
-       *pkt_length =
-               (u32)VXGE_HW_RING_RXD_1_BUFFER0_SIZE_GET(rxdp->control_1);
-}
-
-/**
- * vxge_hw_ring_rxd_1b_info_get - Get extended information associated with
- * a completed receive descriptor for 1b mode.
- * @ring_handle: Ring handle.
- * @rxdh: Descriptor handle.
- * @rxd_info: Descriptor information
- *
- * Retrieve extended information associated with a completed receive descriptor.
- *
- */
-static inline
-void vxge_hw_ring_rxd_1b_info_get(
-       struct __vxge_hw_ring *ring_handle,
-       void *rxdh,
-       struct vxge_hw_ring_rxd_info *rxd_info)
-{
-
-       struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
-       rxd_info->syn_flag =
-               (u32)VXGE_HW_RING_RXD_SYN_GET(rxdp->control_0);
-       rxd_info->is_icmp =
-               (u32)VXGE_HW_RING_RXD_IS_ICMP_GET(rxdp->control_0);
-       rxd_info->fast_path_eligible =
-               (u32)VXGE_HW_RING_RXD_FAST_PATH_ELIGIBLE_GET(rxdp->control_0);
-       rxd_info->l3_cksum_valid =
-               (u32)VXGE_HW_RING_RXD_L3_CKSUM_CORRECT_GET(rxdp->control_0);
-       rxd_info->l3_cksum =
-               (u32)VXGE_HW_RING_RXD_L3_CKSUM_GET(rxdp->control_0);
-       rxd_info->l4_cksum_valid =
-               (u32)VXGE_HW_RING_RXD_L4_CKSUM_CORRECT_GET(rxdp->control_0);
-       rxd_info->l4_cksum =
-               (u32)VXGE_HW_RING_RXD_L4_CKSUM_GET(rxdp->control_0);
-       rxd_info->frame =
-               (u32)VXGE_HW_RING_RXD_ETHER_ENCAP_GET(rxdp->control_0);
-       rxd_info->proto =
-               (u32)VXGE_HW_RING_RXD_FRAME_PROTO_GET(rxdp->control_0);
-       rxd_info->is_vlan =
-               (u32)VXGE_HW_RING_RXD_IS_VLAN_GET(rxdp->control_0);
-       rxd_info->vlan =
-               (u32)VXGE_HW_RING_RXD_VLAN_TAG_GET(rxdp->control_1);
-       rxd_info->rth_bucket =
-               (u32)VXGE_HW_RING_RXD_RTH_BUCKET_GET(rxdp->control_0);
-       rxd_info->rth_it_hit =
-               (u32)VXGE_HW_RING_RXD_RTH_IT_HIT_GET(rxdp->control_0);
-       rxd_info->rth_spdm_hit =
-               (u32)VXGE_HW_RING_RXD_RTH_SPDM_HIT_GET(rxdp->control_0);
-       rxd_info->rth_hash_type =
-               (u32)VXGE_HW_RING_RXD_RTH_HASH_TYPE_GET(rxdp->control_0);
-       rxd_info->rth_value =
-               (u32)VXGE_HW_RING_RXD_1_RTH_HASH_VAL_GET(rxdp->control_1);
-}
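
A sketch of the matching ring completion callback combining the two
getters; my_ring_callback is hypothetical and the skb handling is
elided, but the vxge_hw_ring_rxd_1b_* calls are the ones defined above.

static enum vxge_hw_status my_ring_callback(struct __vxge_hw_ring *ringh,
                                            void *rxdh, u8 t_code,
                                            void *userdata)
{
        struct vxge_hw_ring_rxd_info ext_info;
        u32 pkt_length;

        vxge_hw_ring_rxd_1b_get(ringh, rxdh, &pkt_length);
        vxge_hw_ring_rxd_1b_info_get(ringh, rxdh, &ext_info);

        /* e.g. accept hardware checksums only when both levels are valid */
        if (ext_info.l3_cksum_valid && ext_info.l4_cksum_valid) {
                /* mark the skb CHECKSUM_UNNECESSARY and pass it up */
        }

        return VXGE_HW_OK;
}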
-
-/**
- * vxge_hw_ring_rxd_private_get - Get driver private per-descriptor data
- *                      of a 1b-mode or 3b-mode ring.
- * @rxdh: Descriptor handle.
- *
- * Returns: private driver info associated with the descriptor.
- * The driver requests per-descriptor space via struct vxge_hw_ring_attr.
- *
- */
-static inline void *vxge_hw_ring_rxd_private_get(void *rxdh)
-{
-       struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
-       return (void *)(size_t)rxdp->host_control;
-}
-
-/**
- * vxge_hw_fifo_txdl_cksum_set_bits - Offload checksum.
- * @txdlh: Descriptor handle.
- * @cksum_bits: Specifies which checksums are to be offloaded: IPv4,
- *              and/or TCP and/or UDP.
- *
- * Ask Titan to calculate IPv4 & transport checksums for _this_ transmit
- * descriptor.
- * This API is part of the preparation of the transmit descriptor for posting
- * (via vxge_hw_fifo_txdl_post()). The related "preparation" APIs include
- * vxge_hw_fifo_txdl_mss_set(), vxge_hw_fifo_txdl_buffer_set_aligned(),
- * and vxge_hw_fifo_txdl_buffer_set().
- * All these APIs fill in the fields of the fifo descriptor,
- * in accordance with the Titan specification.
- *
- */
-static inline void vxge_hw_fifo_txdl_cksum_set_bits(void *txdlh, u64 cksum_bits)
-{
-       struct vxge_hw_fifo_txd *txdp = (struct vxge_hw_fifo_txd *)txdlh;
-       txdp->control_1 |= cksum_bits;
-}
-
-/**
- * vxge_hw_fifo_txdl_mss_set - Set MSS.
- * @txdlh: Descriptor handle.
- * @mss: MSS size for _this_ TCP connection. Passed by TCP stack down to the
- *       driver, which in turn inserts the MSS into the @txdlh.
- *
- * This API is part of the preparation of the transmit descriptor for posting
- * (via vxge_hw_fifo_txdl_post()). The related "preparation" APIs include
- * vxge_hw_fifo_txdl_buffer_set(), vxge_hw_fifo_txdl_buffer_set_aligned(),
- * and vxge_hw_fifo_txdl_cksum_set_bits().
- * All these APIs fill in the fields of the fifo descriptor,
- * in accordance with the Titan specification.
- *
- */
-static inline void vxge_hw_fifo_txdl_mss_set(void *txdlh, int mss)
-{
-       struct vxge_hw_fifo_txd *txdp = (struct vxge_hw_fifo_txd *)txdlh;
-
-       txdp->control_0 |= VXGE_HW_FIFO_TXD_LSO_EN;
-       txdp->control_0 |= VXGE_HW_FIFO_TXD_LSO_MSS(mss);
-}
-
-/**
- * vxge_hw_fifo_txdl_vlan_set - Set VLAN tag.
- * @txdlh: Descriptor handle.
- * @vlan_tag: 16bit VLAN tag.
- *
- * Insert VLAN tag into specified transmit descriptor.
- * The actual insertion of the tag into outgoing frame is done by the hardware.
- */
-static inline void vxge_hw_fifo_txdl_vlan_set(void *txdlh, u16 vlan_tag)
-{
-       struct vxge_hw_fifo_txd *txdp = (struct vxge_hw_fifo_txd *)txdlh;
-
-       txdp->control_1 |= VXGE_HW_FIFO_TXD_VLAN_ENABLE;
-       txdp->control_1 |= VXGE_HW_FIFO_TXD_VLAN_TAG(vlan_tag);
-}
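
A sketch of the full "preparation" sequence these setters belong to,
assuming a reserved txdlh and an already-mapped buffer; my_prepare_and_post
is hypothetical, and the VXGE_HW_FIFO_TXD_TX_CKO_* flag names plus
vxge_hw_fifo_txdl_buffer_set()/vxge_hw_fifo_txdl_post() are assumed from
the rest of this header.

static void my_prepare_and_post(struct __vxge_hw_fifo *fifo_handle,
                                void *txdlh, struct sk_buff *skb,
                                dma_addr_t dma, u16 vlan_tag)
{
        /* offload IPv4 and TCP checksums for this descriptor */
        vxge_hw_fifo_txdl_cksum_set_bits(txdlh,
                                         VXGE_HW_FIFO_TXD_TX_CKO_IPV4_EN |
                                         VXGE_HW_FIFO_TXD_TX_CKO_TCP_EN);

        if (skb_is_gso(skb))
                vxge_hw_fifo_txdl_mss_set(txdlh, skb_shinfo(skb)->gso_size);

        if (vlan_tag)
                vxge_hw_fifo_txdl_vlan_set(txdlh, vlan_tag);

        /* attach the (already mapped) buffer and post to the fifo */
        vxge_hw_fifo_txdl_buffer_set(fifo_handle, txdlh, 0, dma, skb->len);
        vxge_hw_fifo_txdl_post(fifo_handle, txdlh);
}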
-
-/**
- * vxge_hw_fifo_txdl_private_get - Retrieve per-descriptor private data.
- * @txdlh: Descriptor handle.
- *
- * Retrieve per-descriptor private data.
- * Note that driver requests per-descriptor space via
- * struct vxge_hw_fifo_attr passed to
- * vxge_hw_vpath_open().
- *
- * Returns: private driver data associated with the descriptor.
- */
-static inline void *vxge_hw_fifo_txdl_private_get(void *txdlh)
-{
-       struct vxge_hw_fifo_txd *txdp  = (struct vxge_hw_fifo_txd *)txdlh;
-
-       return (void *)(size_t)txdp->host_control;
-}
-
-/**
- * struct vxge_hw_ring_attr - Ring open "template".
- * @callback: Ring completion callback. HW invokes the callback when there
- *            are new completions on that ring. In many implementations
- *            the @callback executes in the hw interrupt context.
- * @rxd_init: Ring's descriptor-initialize callback.
- *            See vxge_hw_ring_rxd_init_f{}.
- *            If not NULL, HW invokes the callback when opening
- *            the ring.
- * @rxd_term: Ring's descriptor-terminate callback. If not NULL,
- *          HW invokes the callback when closing the corresponding ring.
- *          See also vxge_hw_ring_rxd_term_f{}.
- * @userdata: User-defined "context" of _that_ ring. Passed back to the
- *            user as one of the @callback, @rxd_init, and @rxd_term arguments.
- * @per_rxd_space: If specified (i.e., greater than zero): extra space
- *              reserved by HW for each receive descriptor.
- *              Can be used to store, and retrieve on completion,
- *              information specific to the driver.
- *
- * Ring open "template". User fills the structure with ring
- * attributes and passes it to vxge_hw_vpath_open().
- */
-struct vxge_hw_ring_attr {
-       enum vxge_hw_status (*callback)(
-                       struct __vxge_hw_ring *ringh,
-                       void *rxdh,
-                       u8 t_code,
-                       void *userdata);
-
-       enum vxge_hw_status (*rxd_init)(
-                       void *rxdh,
-                       void *userdata);
-
-       void (*rxd_term)(
-                       void *rxdh,
-                       enum vxge_hw_rxd_state state,
-                       void *userdata);
-
-       void            *userdata;
-       u32             per_rxd_space;
-};
-
-/**
- * function vxge_hw_fifo_callback_f - FIFO callback.
- * @vpath_handle: Virtual path whose fifo contains 1 or more completed
- *             descriptors.
- * @txdlh: First completed descriptor.
- * @txdl_priv: Pointer to the allocated per-txdl private space.
- * @t_code: Transfer code, as per Titan User Guide.
- *          Returned by HW.
- * @host_control: Opaque 64bit data stored by driver inside the Titan
- *            descriptor prior to posting the latter on the fifo
- *            via vxge_hw_fifo_txdl_post(). The @host_control is returned
- *            as is to the driver with each completed descriptor.
- * @userdata: Opaque per-fifo data specified at fifo open
- *            time, via vxge_hw_vpath_open().
- *
- * Fifo completion callback (type declaration). A single per-fifo
- * callback is specified at fifo open time, via
- * vxge_hw_vpath_open(). Typically gets called as part of the processing
- * of the Interrupt Service Routine.
- *
- * Fifo callback gets called by HW if, and only if, there is at least
- * one new completion on a given fifo. Upon processing the first @txdlh driver
- * is _supposed_ to continue consuming completions using:
- *    - vxge_hw_fifo_txdl_next_completed()
- *
- * Note that failure to process new completions in a timely fashion
- * leads to VXGE_HW_INF_OUT_OF_DESCRIPTORS condition.
- *
- * Non-zero @t_code means failure to process transmit descriptor.
- *
- * In the "transmit" case the failure could happen, for instance, when the
- * link is down, in which case Titan completes the descriptor because it
- * is not able to send the data out.
- *
- * For details please refer to Titan User Guide.
- *
- * See also: vxge_hw_fifo_txdl_next_completed(), vxge_hw_fifo_txdl_term_f{}.
- */
-/**
- * function vxge_hw_fifo_txdl_term_f - Terminate descriptor callback.
- * @txdlh: First completed descriptor.
- * @txdl_priv: Pointer to the allocated per-txdl private space.
- * @state: One of the enum vxge_hw_txdl_state{} enumerated states.
- * @userdata: Per-fifo user data (a.k.a. context) specified at
- * fifo open time, via vxge_hw_vpath_open().
- *
- * Terminate descriptor callback. Unless NULL is specified in the
- * struct vxge_hw_fifo_attr{} structure passed to vxge_hw_vpath_open(),
- * HW invokes the callback when closing the fifo, prior to
- * de-allocating the fifo and associated data structures
- * (including descriptors).
- * The driver should utilize the callback to (for instance) unmap
- * and free DMA data buffers associated with the posted (state =
- * VXGE_HW_TXDL_STATE_POSTED) descriptors,
- * as well as other relevant cleanup functions.
- *
- * See also: struct vxge_hw_fifo_attr{}
- */
-/**
- * struct vxge_hw_fifo_attr - Fifo open "template".
- * @callback: Fifo completion callback. HW invokes the callback when there
- *            are new completions on that fifo. In many implementations
- *            the @callback executes in the hw interrupt context.
- * @txdl_term: Fifo's descriptor-terminate callback. If not NULL,
- *          HW invokes the callback when closing the corresponding fifo.
- *          See also vxge_hw_fifo_txdl_term_f{}.
- * @userdata: User-defined "context" of _that_ fifo. Passed back to the
- *            user as one of the @callback, and @txdl_term arguments.
- * @per_txdl_space: If specified (i.e., greater than zero): extra space
- *              reserved by HW for each transmit descriptor. Can be used to
- *              store, and retrieve on completion, information specific
- *              to the driver.
- *
- * Fifo open "template". User fills the structure with fifo
- * attributes and passes it to vxge_hw_vpath_open().
- */
-struct vxge_hw_fifo_attr {
-
-       enum vxge_hw_status (*callback)(
-                       struct __vxge_hw_fifo *fifo_handle,
-                       void *txdlh,
-                       enum vxge_hw_fifo_tcode t_code,
-                       void *userdata,
-                       struct sk_buff ***skb_ptr,
-                       int nr_skb, int *more);
-
-       void (*txdl_term)(
-                       void *txdlh,
-                       enum vxge_hw_txdl_state state,
-                       void *userdata);
-
-       void            *userdata;
-       u32             per_txdl_space;
-};
-
-/**
- * struct vxge_hw_vpath_attr - Attributes of virtual path
- * @vp_id: Identifier of Virtual Path
- * @ring_attr: Attributes of ring for non-offload receive
- * @fifo_attr: Attributes of fifo for non-offload transmit
- *
- * Attributes of virtual path.  This structure is passed as parameter
- * to the vxge_hw_vpath_open() routine to set the attributes of ring and fifo.
- */
-struct vxge_hw_vpath_attr {
-       u32                             vp_id;
-       struct vxge_hw_ring_attr        ring_attr;
-       struct vxge_hw_fifo_attr        fifo_attr;
-};
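
A sketch of filling this "template" and opening the vpath, tying the
hypothetical callbacks from the sketches above into the attribute
structures; my_vpath_open is hypothetical and error handling is reduced
to a single return code.

static int my_vpath_open(struct __vxge_hw_device *devh, u32 vp_id,
                         struct __vxge_hw_vpath_handle **vph, void *ctx)
{
        struct vxge_hw_vpath_attr attr = { 0 };

        attr.vp_id = vp_id;

        attr.ring_attr.callback      = my_ring_callback;
        attr.ring_attr.rxd_init      = my_rxd_init;
        attr.ring_attr.userdata      = ctx;
        attr.ring_attr.per_rxd_space = sizeof(void *);  /* room for an skb ptr */

        attr.fifo_attr.userdata      = ctx;     /* fifo callback elided here */

        return vxge_hw_vpath_open(devh, &attr, vph) == VXGE_HW_OK ? 0 : -EIO;
}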
-
-enum vxge_hw_status vxge_hw_device_hw_info_get(
-       void __iomem *bar0,
-       struct vxge_hw_device_hw_info *hw_info);
-
-enum vxge_hw_status vxge_hw_device_config_default_get(
-       struct vxge_hw_device_config *device_config);
-
-/**
- * vxge_hw_device_link_state_get - Get link state.
- * @devh: HW device handle.
- *
- * Get link state.
- * Returns: link state.
- */
-static inline
-enum vxge_hw_device_link_state vxge_hw_device_link_state_get(
-       struct __vxge_hw_device *devh)
-{
-       return devh->link_state;
-}
-
-void vxge_hw_device_terminate(struct __vxge_hw_device *devh);
-
-const u8 *
-vxge_hw_device_serial_number_get(struct __vxge_hw_device *devh);
-
-u16 vxge_hw_device_link_width_get(struct __vxge_hw_device *devh);
-
-const u8 *
-vxge_hw_device_product_name_get(struct __vxge_hw_device *devh);
-
-enum vxge_hw_status vxge_hw_device_initialize(
-       struct __vxge_hw_device **devh,
-       struct vxge_hw_device_attr *attr,
-       struct vxge_hw_device_config *device_config);
-
-enum vxge_hw_status vxge_hw_device_getpause_data(
-        struct __vxge_hw_device *devh,
-        u32 port,
-        u32 *tx,
-        u32 *rx);
-
-enum vxge_hw_status vxge_hw_device_setpause_data(
-       struct __vxge_hw_device *devh,
-       u32 port,
-       u32 tx,
-       u32 rx);
-
-static inline void *vxge_os_dma_malloc(struct pci_dev *pdev,
-                       unsigned long size,
-                       struct pci_dev **p_dmah,
-                       struct pci_dev **p_dma_acch)
-{
-       void *vaddr;
-       unsigned long misaligned = 0;
-       int realloc_flag = 0;
-       *p_dma_acch = *p_dmah = NULL;
-
-realloc:
-       vaddr = kmalloc(size, GFP_KERNEL | GFP_DMA);
-       if (vaddr == NULL)
-               return vaddr;
-       misaligned = (unsigned long)VXGE_ALIGN((unsigned long)vaddr,
-                               VXGE_CACHE_LINE_SIZE);
-       if (realloc_flag)
-               goto out;
-
-       if (misaligned) {
-               /* misaligned, free current one and try allocating
-                * size + VXGE_CACHE_LINE_SIZE memory
-                */
-               kfree(vaddr);
-               size += VXGE_CACHE_LINE_SIZE;
-               realloc_flag = 1;
-               goto realloc;
-       }
-out:
-       *(unsigned long *)p_dma_acch = misaligned;
-       vaddr = (void *)((u8 *)vaddr + misaligned);
-       return vaddr;
-}
-
-static inline void vxge_os_dma_free(struct pci_dev *pdev, const void *vaddr,
-                       struct pci_dev **p_dma_acch)
-{
-       unsigned long misaligned = *(unsigned long *)p_dma_acch;
-       u8 *tmp = (u8 *)vaddr;
-       tmp -= misaligned;
-       kfree((void *)tmp);
-}
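
A usage sketch for this pair: the helper over-allocates when kmalloc()
returns a misaligned block and records the fix-up offset through
p_dma_acch, so the free path can recover the original pointer. pdev is
a caller-supplied struct pci_dev; the two cookie variables are opaque.

        struct pci_dev *dmah, *dma_acch;
        void *block;

        block = vxge_os_dma_malloc(pdev, VXGE_HW_BLOCK_SIZE,
                                   &dmah, &dma_acch);
        if (block) {
                /* block is VXGE_CACHE_LINE_SIZE-aligned here */
                vxge_os_dma_free(pdev, block, &dma_acch);
        }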
-
-/*
- * __vxge_hw_mempool_item_priv - Return a pointer to the per-item private space.
- */
-static inline void*
-__vxge_hw_mempool_item_priv(
-       struct vxge_hw_mempool *mempool,
-       u32 memblock_idx,
-       void *item,
-       u32 *memblock_item_idx)
-{
-       ptrdiff_t offset;
-       void *memblock = mempool->memblocks_arr[memblock_idx];
-
-
-       offset = (u32)((u8 *)item - (u8 *)memblock);
-       vxge_assert(offset >= 0 && (u32)offset < mempool->memblock_size);
-
-       (*memblock_item_idx) = (u32) offset / mempool->item_size;
-       vxge_assert((*memblock_item_idx) < mempool->items_per_memblock);
-
-       return (u8 *)mempool->memblocks_priv_arr[memblock_idx] +
-                           (*memblock_item_idx) * mempool->items_priv_size;
-}
-
-/*
- * __vxge_hw_fifo_txdl_priv - Return the per-txdl private structure
- * for a descriptor.
- * @fifo: Fifo
- * @txdp: Pointer to a TxD
- */
-static inline struct __vxge_hw_fifo_txdl_priv *
-__vxge_hw_fifo_txdl_priv(
-       struct __vxge_hw_fifo *fifo,
-       struct vxge_hw_fifo_txd *txdp)
-{
-       return (struct __vxge_hw_fifo_txdl_priv *)
-                       (((char *)((ulong)txdp->host_control)) +
-                               fifo->per_txdl_space);
-}
-
-enum vxge_hw_status vxge_hw_vpath_open(
-       struct __vxge_hw_device *devh,
-       struct vxge_hw_vpath_attr *attr,
-       struct __vxge_hw_vpath_handle **vpath_handle);
-
-enum vxge_hw_status vxge_hw_vpath_close(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_status
-vxge_hw_vpath_reset(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_status
-vxge_hw_vpath_recover_from_reset(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-void
-vxge_hw_vpath_enable(struct __vxge_hw_vpath_handle *vp);
-
-enum vxge_hw_status
-vxge_hw_vpath_check_leak(struct __vxge_hw_ring *ringh);
-
-enum vxge_hw_status vxge_hw_vpath_mtu_set(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u32 new_mtu);
-
-void
-vxge_hw_vpath_rx_doorbell_init(struct __vxge_hw_vpath_handle *vp);
-
-static inline void __vxge_hw_pio_mem_write32_upper(u32 val, void __iomem *addr)
-{
-       writel(val, addr + 4);
-}
-
-static inline void __vxge_hw_pio_mem_write32_lower(u32 val, void __iomem *addr)
-{
-       writel(val, addr);
-}
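
The split matters on 32-bit hosts, where a 64-bit Titan register is
updated as two 32-bit PIO accesses in a device-defined order. A sketch
of one plausible pairing, with a write barrier between the halves;
my_pio_write64 is hypothetical and the required ordering should be
taken from the Titan documentation, not from this sketch.

static void my_pio_write64(u64 val, void __iomem *addr)
{
        __vxge_hw_pio_mem_write32_lower((u32)(val & 0xffffffff), addr);
        wmb();  /* make the low half visible before the high half */
        __vxge_hw_pio_mem_write32_upper((u32)(val >> 32), addr);
}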
-
-enum vxge_hw_status
-vxge_hw_device_flick_link_led(struct __vxge_hw_device *devh, u64 on_off);
-
-enum vxge_hw_status
-vxge_hw_vpath_strip_fcs_check(struct __vxge_hw_device *hldev, u64 vpath_mask);
-
-/**
- * vxge_debug_ll
- * @level: level of debug verbosity.
- * @mask: mask for the debug
- * @fmt: printf-like format string
- *
- * Provides logging facilities. Can be customized on a per-module
- * basis and/or with debug levels. Input parameters, except
- * module and level, are the same as POSIX printf. This macro
- * may be compiled out if the DEBUG macro was never defined.
- * See also: enum vxge_debug_level{}.
- */
-#if (VXGE_COMPONENT_LL & VXGE_DEBUG_MODULE_MASK)
-#define vxge_debug_ll(level, mask, fmt, ...) do {                             \
-       if ((level >= VXGE_ERR && VXGE_COMPONENT_LL & VXGE_DEBUG_ERR_MASK) ||  \
-           (level >= VXGE_TRACE && VXGE_COMPONENT_LL & VXGE_DEBUG_TRACE_MASK))\
-               if ((mask & VXGE_DEBUG_MASK) == mask)                          \
-                       printk(fmt "\n", ##__VA_ARGS__);                       \
-} while (0)
-#else
-#define vxge_debug_ll(level, mask, fmt, ...)
-#endif
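
A hypothetical call site: VXGE_TRACE comes from enum vxge_debug_level,
while MY_COMPONENT_MASK stands in for whatever per-module mask the
caller uses (it must be a subset of VXGE_DEBUG_MASK for the message to
print).

        vxge_debug_ll(VXGE_TRACE, MY_COMPONENT_MASK,
                      "%s: vpath %u opened", __func__, vp_id);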
-
-enum vxge_hw_status vxge_hw_vpath_rts_rth_itable_set(
-                       struct __vxge_hw_vpath_handle **vpath_handles,
-                       u32 vpath_count,
-                       u8 *mtable,
-                       u8 *itable,
-                       u32 itable_size);
-
-enum vxge_hw_status vxge_hw_vpath_rts_rth_set(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       enum vxge_hw_rth_algoritms algorithm,
-       struct vxge_hw_rth_hash_types *hash_type,
-       u16 bucket_size);
-
-enum vxge_hw_status
-__vxge_hw_device_is_privilaged(u32 host_type, u32 func_id);
-
-#define VXGE_HW_MIN_SUCCESSIVE_IDLE_COUNT 5
-#define VXGE_HW_MAX_POLLING_COUNT 100
-
-void
-vxge_hw_device_wait_receive_idle(struct __vxge_hw_device *hldev);
-
-enum vxge_hw_status
-vxge_hw_upgrade_read_version(struct __vxge_hw_device *hldev, u32 *major,
-                            u32 *minor, u32 *build);
-
-enum vxge_hw_status vxge_hw_flash_fw(struct __vxge_hw_device *hldev);
-
-enum vxge_hw_status
-vxge_update_fw_image(struct __vxge_hw_device *hldev, const u8 *filebuf,
-                    int size);
-
-enum vxge_hw_status
-vxge_hw_vpath_eprom_img_ver_get(struct __vxge_hw_device *hldev,
-                               struct eprom_image *eprom_image_data);
-
-int vxge_hw_vpath_wait_receive_idle(struct __vxge_hw_device *hldev, u32 vp_id);
-#endif
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-ethtool.c b/drivers/net/ethernet/neterion/vxge/vxge-ethtool.c
deleted file mode 100644 (file)
index 4d91026..0000000
+++ /dev/null
@@ -1,1154 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-ethtool.c: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
- *                 Virtualized Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#include <linux/ethtool.h>
-#include <linux/slab.h>
-#include <linux/pci.h>
-#include <linux/etherdevice.h>
-
-#include "vxge-ethtool.h"
-
-static const char ethtool_driver_stats_keys[][ETH_GSTRING_LEN] = {
-       {"\n DRIVER STATISTICS"},
-       {"vpaths_opened"},
-       {"vpath_open_fail_cnt"},
-       {"link_up_cnt"},
-       {"link_down_cnt"},
-       {"tx_frms"},
-       {"tx_errors"},
-       {"tx_bytes"},
-       {"txd_not_free"},
-       {"txd_out_of_desc"},
-       {"rx_frms"},
-       {"rx_errors"},
-       {"rx_bytes"},
-       {"rx_mcast"},
-       {"pci_map_fail_cnt"},
-       {"skb_alloc_fail_cnt"}
-};
-
-/**
- * vxge_ethtool_set_link_ksettings - Sets different link parameters.
- * @dev: device pointer.
- * @cmd: pointer to the structure with parameters given by ethtool to set
- * link information.
- *
- * The function sets different link parameters provided by the user onto
- * the NIC.
- * Return value:
- * 0 on success.
- */
-static int
-vxge_ethtool_set_link_ksettings(struct net_device *dev,
-                               const struct ethtool_link_ksettings *cmd)
-{
-       /* We currently only support 10Gb/FULL */
-       if ((cmd->base.autoneg == AUTONEG_ENABLE) ||
-           (cmd->base.speed != SPEED_10000) ||
-           (cmd->base.duplex != DUPLEX_FULL))
-               return -EINVAL;
-
-       return 0;
-}
-
-/**
- * vxge_ethtool_get_link_ksettings - Return link specific information.
- * @dev: device pointer.
- * @cmd: pointer to the structure with parameters given by ethtool
- * to return link information.
- *
- * Returns link-specific information like speed, duplex, etc. to ethtool.
- * Return value:
- * 0 on success.
- */
-static int vxge_ethtool_get_link_ksettings(struct net_device *dev,
-                                          struct ethtool_link_ksettings *cmd)
-{
-       ethtool_link_ksettings_zero_link_mode(cmd, supported);
-       ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);
-       ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
-
-       ethtool_link_ksettings_zero_link_mode(cmd, advertising);
-       ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseT_Full);
-       ethtool_link_ksettings_add_link_mode(cmd, advertising, FIBRE);
-
-       cmd->base.port = PORT_FIBRE;
-
-       if (netif_carrier_ok(dev)) {
-               cmd->base.speed = SPEED_10000;
-               cmd->base.duplex = DUPLEX_FULL;
-       } else {
-               cmd->base.speed = SPEED_UNKNOWN;
-               cmd->base.duplex = DUPLEX_UNKNOWN;
-       }
-
-       cmd->base.autoneg = AUTONEG_DISABLE;
-       return 0;
-}
-
-/**
- * vxge_ethtool_gdrvinfo - Returns driver specific information.
- * @dev: device pointer.
- * @info: pointer to the structure with parameters given by ethtool to
- * return driver information.
- *
- * Returns driver-specific information like name, version, etc. to ethtool.
- */
-static void vxge_ethtool_gdrvinfo(struct net_device *dev,
-                                 struct ethtool_drvinfo *info)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       strlcpy(info->driver, VXGE_DRIVER_NAME, sizeof(info->driver));
-       strlcpy(info->version, DRV_VERSION, sizeof(info->version));
-       strlcpy(info->fw_version, vdev->fw_version, sizeof(info->fw_version));
-       strlcpy(info->bus_info, pci_name(vdev->pdev), sizeof(info->bus_info));
-}
-
-/**
- * vxge_ethtool_gregs - dumps the entire space of Titan into the buffer.
- * @dev: device pointer.
- * @regs: pointer to the structure with parameters given by ethtool for
- * dumping the registers.
- * @space: The buffer into which all the registers are dumped.
- *
- * Dumps the vpath register space of Titan NIC into the user given
- * buffer area.
- */
-static void vxge_ethtool_gregs(struct net_device *dev,
-                              struct ethtool_regs *regs, void *space)
-{
-       int index, offset;
-       enum vxge_hw_status status;
-       u64 reg;
-       u64 *reg_space = (u64 *)space;
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct __vxge_hw_device *hldev = vdev->devh;
-
-       regs->len = sizeof(struct vxge_hw_vpath_reg) * vdev->no_of_vpath;
-       regs->version = vdev->pdev->subsystem_device;
-       for (index = 0; index < vdev->no_of_vpath; index++) {
-               for (offset = 0; offset < sizeof(struct vxge_hw_vpath_reg);
-                               offset += 8) {
-                       status = vxge_hw_mgmt_reg_read(hldev,
-                                       vxge_hw_mgmt_reg_type_vpath,
-                                       vdev->vpaths[index].device_id,
-                                       offset, &reg);
-                       if (status != VXGE_HW_OK) {
-                               vxge_debug_init(VXGE_ERR,
-                                       "%s:%d Getting reg dump Failed",
-                                               __func__, __LINE__);
-                               return;
-                       }
-                       *reg_space++ = reg;
-               }
-       }
-}
-
-/**
- * vxge_ethtool_idnic - Physically identify the NIC on the system.
- * @dev: device pointer.
- * @state: requested LED state
- *
- * Used to physically identify the NIC on the system.
- * Return value: 0 on success.
- */
-static int vxge_ethtool_idnic(struct net_device *dev,
-                             enum ethtool_phys_id_state state)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct __vxge_hw_device *hldev = vdev->devh;
-
-       switch (state) {
-       case ETHTOOL_ID_ACTIVE:
-               vxge_hw_device_flick_link_led(hldev, VXGE_FLICKER_ON);
-               break;
-
-       case ETHTOOL_ID_INACTIVE:
-               vxge_hw_device_flick_link_led(hldev, VXGE_FLICKER_OFF);
-               break;
-
-       default:
-               return -EINVAL;
-       }
-
-       return 0;
-}
-
-/**
- * vxge_ethtool_getpause_data - Pause frame generation and reception.
- * @dev : device pointer.
- * @ep : pointer to the structure with pause parameters given by ethtool.
- * Description:
- * Returns the Pause frame generation and reception capability of the NIC.
- * Return value:
- *  void
- */
-static void vxge_ethtool_getpause_data(struct net_device *dev,
-                                      struct ethtool_pauseparam *ep)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct __vxge_hw_device *hldev = vdev->devh;
-
-       vxge_hw_device_getpause_data(hldev, 0, &ep->tx_pause, &ep->rx_pause);
-}
-
-/**
- * vxge_ethtool_setpause_data -  set/reset pause frame generation.
- * @dev : device pointer.
- * @ep : pointer to the structure with pause parameters given by ethtool.
- * Description:
- * It can be used to set or reset Pause frame generation or reception
- * support of the NIC.
- * Return value:
- * 0 on success.
- */
-static int vxge_ethtool_setpause_data(struct net_device *dev,
-                                     struct ethtool_pauseparam *ep)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct __vxge_hw_device *hldev = vdev->devh;
-
-       vxge_hw_device_setpause_data(hldev, 0, ep->tx_pause, ep->rx_pause);
-
-       vdev->config.tx_pause_enable = ep->tx_pause;
-       vdev->config.rx_pause_enable = ep->rx_pause;
-
-       return 0;
-}
-
-static void vxge_get_ethtool_stats(struct net_device *dev,
-                                  struct ethtool_stats *estats, u64 *tmp_stats)
-{
-       int j, k;
-       enum vxge_hw_status status;
-       enum vxge_hw_status swstatus;
-       struct vxge_vpath *vpath = NULL;
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct __vxge_hw_device *hldev = vdev->devh;
-       struct vxge_hw_xmac_stats *xmac_stats;
-       struct vxge_hw_device_stats_sw_info *sw_stats;
-       struct vxge_hw_device_stats_hw_info *hw_stats;
-
-       u64 *ptr = tmp_stats;
-
-       memset(tmp_stats, 0,
-               vxge_ethtool_get_sset_count(dev, ETH_SS_STATS) * sizeof(u64));
-
-       xmac_stats = kzalloc(sizeof(struct vxge_hw_xmac_stats), GFP_KERNEL);
-       if (xmac_stats == NULL) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s : %d Memory Allocation failed for xmac_stats",
-                                __func__, __LINE__);
-               return;
-       }
-
-       sw_stats = kzalloc(sizeof(struct vxge_hw_device_stats_sw_info),
-                               GFP_KERNEL);
-       if (sw_stats == NULL) {
-               kfree(xmac_stats);
-               vxge_debug_init(VXGE_ERR,
-                       "%s : %d Memory Allocation failed for sw_stats",
-                       __func__, __LINE__);
-               return;
-       }
-
-       hw_stats = kzalloc(sizeof(struct vxge_hw_device_stats_hw_info),
-                               GFP_KERNEL);
-       if (hw_stats == NULL) {
-               kfree(xmac_stats);
-               kfree(sw_stats);
-               vxge_debug_init(VXGE_ERR,
-                       "%s : %d Memory Allocation failed for hw_stats",
-                       __func__, __LINE__);
-               return;
-       }
-
-       *ptr++ = 0;
-       status = vxge_hw_device_xmac_stats_get(hldev, xmac_stats);
-       if (status != VXGE_HW_OK) {
-               if (status != VXGE_HW_ERR_PRIVILEGED_OPERATION) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s : %d Failure in getting xmac stats",
-                               __func__, __LINE__);
-               }
-       }
-       swstatus = vxge_hw_driver_stats_get(hldev, sw_stats);
-       if (swstatus != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s : %d Failure in getting sw stats",
-                       __func__, __LINE__);
-       }
-
-       status = vxge_hw_device_stats_get(hldev, hw_stats);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s : %d hw_stats_get error", __func__, __LINE__);
-       }
-
-       for (k = 0; k < vdev->no_of_vpath; k++) {
-               struct vxge_hw_vpath_stats_hw_info *vpath_info;
-
-               vpath = &vdev->vpaths[k];
-               j = vpath->device_id;
-               vpath_info = hw_stats->vpath_info[j];
-               if (!vpath_info) {
-                       memset(ptr, 0, (VXGE_HW_VPATH_TX_STATS_LEN +
-                               VXGE_HW_VPATH_RX_STATS_LEN) * sizeof(u64));
-                       ptr += (VXGE_HW_VPATH_TX_STATS_LEN +
-                               VXGE_HW_VPATH_RX_STATS_LEN);
-                       continue;
-               }
-
-               *ptr++ = vpath_info->tx_stats.tx_ttl_eth_frms;
-               *ptr++ = vpath_info->tx_stats.tx_ttl_eth_octets;
-               *ptr++ = vpath_info->tx_stats.tx_data_octets;
-               *ptr++ = vpath_info->tx_stats.tx_mcast_frms;
-               *ptr++ = vpath_info->tx_stats.tx_bcast_frms;
-               *ptr++ = vpath_info->tx_stats.tx_ucast_frms;
-               *ptr++ = vpath_info->tx_stats.tx_tagged_frms;
-               *ptr++ = vpath_info->tx_stats.tx_vld_ip;
-               *ptr++ = vpath_info->tx_stats.tx_vld_ip_octets;
-               *ptr++ = vpath_info->tx_stats.tx_icmp;
-               *ptr++ = vpath_info->tx_stats.tx_tcp;
-               *ptr++ = vpath_info->tx_stats.tx_rst_tcp;
-               *ptr++ = vpath_info->tx_stats.tx_udp;
-               *ptr++ = vpath_info->tx_stats.tx_unknown_protocol;
-               *ptr++ = vpath_info->tx_stats.tx_lost_ip;
-               *ptr++ = vpath_info->tx_stats.tx_parse_error;
-               *ptr++ = vpath_info->tx_stats.tx_tcp_offload;
-               *ptr++ = vpath_info->tx_stats.tx_retx_tcp_offload;
-               *ptr++ = vpath_info->tx_stats.tx_lost_ip_offload;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_eth_frms;
-               *ptr++ = vpath_info->rx_stats.rx_vld_frms;
-               *ptr++ = vpath_info->rx_stats.rx_offload_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_eth_octets;
-               *ptr++ = vpath_info->rx_stats.rx_data_octets;
-               *ptr++ = vpath_info->rx_stats.rx_offload_octets;
-               *ptr++ = vpath_info->rx_stats.rx_vld_mcast_frms;
-               *ptr++ = vpath_info->rx_stats.rx_vld_bcast_frms;
-               *ptr++ = vpath_info->rx_stats.rx_accepted_ucast_frms;
-               *ptr++ = vpath_info->rx_stats.rx_accepted_nucast_frms;
-               *ptr++ = vpath_info->rx_stats.rx_tagged_frms;
-               *ptr++ = vpath_info->rx_stats.rx_long_frms;
-               *ptr++ = vpath_info->rx_stats.rx_usized_frms;
-               *ptr++ = vpath_info->rx_stats.rx_osized_frms;
-               *ptr++ = vpath_info->rx_stats.rx_frag_frms;
-               *ptr++ = vpath_info->rx_stats.rx_jabber_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_64_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_65_127_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_128_255_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_256_511_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_512_1023_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_1024_1518_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_1519_4095_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_4096_8191_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_8192_max_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ttl_gt_max_frms;
-               *ptr++ = vpath_info->rx_stats.rx_ip;
-               *ptr++ = vpath_info->rx_stats.rx_accepted_ip;
-               *ptr++ = vpath_info->rx_stats.rx_ip_octets;
-               *ptr++ = vpath_info->rx_stats.rx_err_ip;
-               *ptr++ = vpath_info->rx_stats.rx_icmp;
-               *ptr++ = vpath_info->rx_stats.rx_tcp;
-               *ptr++ = vpath_info->rx_stats.rx_udp;
-               *ptr++ = vpath_info->rx_stats.rx_err_tcp;
-               *ptr++ = vpath_info->rx_stats.rx_lost_frms;
-               *ptr++ = vpath_info->rx_stats.rx_lost_ip;
-               *ptr++ = vpath_info->rx_stats.rx_lost_ip_offload;
-               *ptr++ = vpath_info->rx_stats.rx_various_discard;
-               *ptr++ = vpath_info->rx_stats.rx_sleep_discard;
-               *ptr++ = vpath_info->rx_stats.rx_red_discard;
-               *ptr++ = vpath_info->rx_stats.rx_queue_full_discard;
-               *ptr++ = vpath_info->rx_stats.rx_mpa_ok_frms;
-       }
-       *ptr++ = 0;
-       for (k = 0; k < vdev->max_config_port; k++) {
-               *ptr++ = xmac_stats->aggr_stats[k].tx_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].tx_data_octets;
-               *ptr++ = xmac_stats->aggr_stats[k].tx_mcast_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].tx_bcast_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].tx_discarded_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].tx_errored_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].rx_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].rx_data_octets;
-               *ptr++ = xmac_stats->aggr_stats[k].rx_mcast_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].rx_bcast_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].rx_discarded_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].rx_errored_frms;
-               *ptr++ = xmac_stats->aggr_stats[k].rx_unknown_slow_proto_frms;
-       }
-       *ptr++ = 0;
-       for (k = 0; k < vdev->max_config_port; k++) {
-               *ptr++ = xmac_stats->port_stats[k].tx_ttl_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_ttl_octets;
-               *ptr++ = xmac_stats->port_stats[k].tx_data_octets;
-               *ptr++ = xmac_stats->port_stats[k].tx_mcast_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_bcast_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_ucast_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_tagged_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_vld_ip;
-               *ptr++ = xmac_stats->port_stats[k].tx_vld_ip_octets;
-               *ptr++ = xmac_stats->port_stats[k].tx_icmp;
-               *ptr++ = xmac_stats->port_stats[k].tx_tcp;
-               *ptr++ = xmac_stats->port_stats[k].tx_rst_tcp;
-               *ptr++ = xmac_stats->port_stats[k].tx_udp;
-               *ptr++ = xmac_stats->port_stats[k].tx_parse_error;
-               *ptr++ = xmac_stats->port_stats[k].tx_unknown_protocol;
-               *ptr++ = xmac_stats->port_stats[k].tx_pause_ctrl_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_marker_pdu_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_lacpdu_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_drop_ip;
-               *ptr++ = xmac_stats->port_stats[k].tx_marker_resp_pdu_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_xgmii_char2_match;
-               *ptr++ = xmac_stats->port_stats[k].tx_xgmii_char1_match;
-               *ptr++ = xmac_stats->port_stats[k].tx_xgmii_column2_match;
-               *ptr++ = xmac_stats->port_stats[k].tx_xgmii_column1_match;
-               *ptr++ = xmac_stats->port_stats[k].tx_any_err_frms;
-               *ptr++ = xmac_stats->port_stats[k].tx_drop_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_vld_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_offload_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_octets;
-               *ptr++ = xmac_stats->port_stats[k].rx_data_octets;
-               *ptr++ = xmac_stats->port_stats[k].rx_offload_octets;
-               *ptr++ = xmac_stats->port_stats[k].rx_vld_mcast_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_vld_bcast_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_accepted_ucast_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_accepted_nucast_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_tagged_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_long_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_usized_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_osized_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_frag_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_jabber_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_64_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_65_127_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_128_255_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_256_511_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_512_1023_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_1024_1518_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_1519_4095_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_4096_8191_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_8192_max_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ttl_gt_max_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_ip;
-               *ptr++ = xmac_stats->port_stats[k].rx_accepted_ip;
-               *ptr++ = xmac_stats->port_stats[k].rx_ip_octets;
-               *ptr++ = xmac_stats->port_stats[k].rx_err_ip;
-               *ptr++ = xmac_stats->port_stats[k].rx_icmp;
-               *ptr++ = xmac_stats->port_stats[k].rx_tcp;
-               *ptr++ = xmac_stats->port_stats[k].rx_udp;
-               *ptr++ = xmac_stats->port_stats[k].rx_err_tcp;
-               *ptr++ = xmac_stats->port_stats[k].rx_pause_count;
-               *ptr++ = xmac_stats->port_stats[k].rx_pause_ctrl_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_unsup_ctrl_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_fcs_err_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_in_rng_len_err_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_out_rng_len_err_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_drop_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_discarded_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_drop_ip;
-               *ptr++ = xmac_stats->port_stats[k].rx_drop_udp;
-               *ptr++ = xmac_stats->port_stats[k].rx_marker_pdu_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_lacpdu_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_unknown_pdu_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_marker_resp_pdu_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_fcs_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_illegal_pdu_frms;
-               *ptr++ = xmac_stats->port_stats[k].rx_switch_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_len_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_rpa_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_l2_mgmt_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_rts_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_trash_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_buff_full_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_red_discard;
-               *ptr++ = xmac_stats->port_stats[k].rx_xgmii_ctrl_err_cnt;
-               *ptr++ = xmac_stats->port_stats[k].rx_xgmii_data_err_cnt;
-               *ptr++ = xmac_stats->port_stats[k].rx_xgmii_char1_match;
-               *ptr++ = xmac_stats->port_stats[k].rx_xgmii_err_sym;
-               *ptr++ = xmac_stats->port_stats[k].rx_xgmii_column1_match;
-               *ptr++ = xmac_stats->port_stats[k].rx_xgmii_char2_match;
-               *ptr++ = xmac_stats->port_stats[k].rx_local_fault;
-               *ptr++ = xmac_stats->port_stats[k].rx_xgmii_column2_match;
-               *ptr++ = xmac_stats->port_stats[k].rx_jettison;
-               *ptr++ = xmac_stats->port_stats[k].rx_remote_fault;
-       }
-
-       *ptr++ = 0;
-       for (k = 0; k < vdev->no_of_vpath; k++) {
-               struct vxge_hw_vpath_stats_sw_info *vpath_info;
-
-               vpath = &vdev->vpaths[k];
-               j = vpath->device_id;
-               vpath_info = (struct vxge_hw_vpath_stats_sw_info *)
-                               &sw_stats->vpath_info[j];
-               *ptr++ = vpath_info->soft_reset_cnt;
-               *ptr++ = vpath_info->error_stats.unknown_alarms;
-               *ptr++ = vpath_info->error_stats.network_sustained_fault;
-               *ptr++ = vpath_info->error_stats.network_sustained_ok;
-               *ptr++ = vpath_info->error_stats.kdfcctl_fifo0_overwrite;
-               *ptr++ = vpath_info->error_stats.kdfcctl_fifo0_poison;
-               *ptr++ = vpath_info->error_stats.kdfcctl_fifo0_dma_error;
-               *ptr++ = vpath_info->error_stats.dblgen_fifo0_overflow;
-               *ptr++ = vpath_info->error_stats.statsb_pif_chain_error;
-               *ptr++ = vpath_info->error_stats.statsb_drop_timeout;
-               *ptr++ = vpath_info->error_stats.target_illegal_access;
-               *ptr++ = vpath_info->error_stats.ini_serr_det;
-               *ptr++ = vpath_info->error_stats.prc_ring_bumps;
-               *ptr++ = vpath_info->error_stats.prc_rxdcm_sc_err;
-               *ptr++ = vpath_info->error_stats.prc_rxdcm_sc_abort;
-               *ptr++ = vpath_info->error_stats.prc_quanta_size_err;
-               *ptr++ = vpath_info->ring_stats.common_stats.full_cnt;
-               *ptr++ = vpath_info->ring_stats.common_stats.usage_cnt;
-               *ptr++ = vpath_info->ring_stats.common_stats.usage_max;
-               *ptr++ = vpath_info->ring_stats.common_stats.
-                                       reserve_free_swaps_cnt;
-               *ptr++ = vpath_info->ring_stats.common_stats.total_compl_cnt;
-               for (j = 0; j < VXGE_HW_DTR_MAX_T_CODE; j++)
-                       *ptr++ = vpath_info->ring_stats.rxd_t_code_err_cnt[j];
-               *ptr++ = vpath_info->fifo_stats.common_stats.full_cnt;
-               *ptr++ = vpath_info->fifo_stats.common_stats.usage_cnt;
-               *ptr++ = vpath_info->fifo_stats.common_stats.usage_max;
-               *ptr++ = vpath_info->fifo_stats.common_stats.
-                                               reserve_free_swaps_cnt;
-               *ptr++ = vpath_info->fifo_stats.common_stats.total_compl_cnt;
-               *ptr++ = vpath_info->fifo_stats.total_posts;
-               *ptr++ = vpath_info->fifo_stats.total_buffers;
-               for (j = 0; j < VXGE_HW_DTR_MAX_T_CODE; j++)
-                       *ptr++ = vpath_info->fifo_stats.txd_t_code_err_cnt[j];
-       }
-
-       *ptr++ = 0;
-       for (k = 0; k < vdev->no_of_vpath; k++) {
-               struct vxge_hw_vpath_stats_hw_info *vpath_info;
-               vpath = &vdev->vpaths[k];
-               j = vpath->device_id;
-               vpath_info = hw_stats->vpath_info[j];
-               if (!vpath_info) {
-                       memset(ptr, 0, VXGE_HW_VPATH_STATS_LEN * sizeof(u64));
-                       ptr += VXGE_HW_VPATH_STATS_LEN;
-                       continue;
-               }
-               *ptr++ = vpath_info->ini_num_mwr_sent;
-               *ptr++ = vpath_info->ini_num_mrd_sent;
-               *ptr++ = vpath_info->ini_num_cpl_rcvd;
-               *ptr++ = vpath_info->ini_num_mwr_byte_sent;
-               *ptr++ = vpath_info->ini_num_cpl_byte_rcvd;
-               *ptr++ = vpath_info->wrcrdtarb_xoff;
-               *ptr++ = vpath_info->rdcrdtarb_xoff;
-               *ptr++ = vpath_info->vpath_genstats_count0;
-               *ptr++ = vpath_info->vpath_genstats_count1;
-               *ptr++ = vpath_info->vpath_genstats_count2;
-               *ptr++ = vpath_info->vpath_genstats_count3;
-               *ptr++ = vpath_info->vpath_genstats_count4;
-               *ptr++ = vpath_info->vpath_genstats_count5;
-               *ptr++ = vpath_info->prog_event_vnum0;
-               *ptr++ = vpath_info->prog_event_vnum1;
-               *ptr++ = vpath_info->prog_event_vnum2;
-               *ptr++ = vpath_info->prog_event_vnum3;
-               *ptr++ = vpath_info->rx_multi_cast_frame_discard;
-               *ptr++ = vpath_info->rx_frm_transferred;
-               *ptr++ = vpath_info->rxd_returned;
-               *ptr++ = vpath_info->rx_mpa_len_fail_frms;
-               *ptr++ = vpath_info->rx_mpa_mrk_fail_frms;
-               *ptr++ = vpath_info->rx_mpa_crc_fail_frms;
-               *ptr++ = vpath_info->rx_permitted_frms;
-               *ptr++ = vpath_info->rx_vp_reset_discarded_frms;
-               *ptr++ = vpath_info->rx_wol_frms;
-               *ptr++ = vpath_info->tx_vp_reset_discarded_frms;
-       }
-
-       *ptr++ = 0;
-       *ptr++ = vdev->stats.vpaths_open;
-       *ptr++ = vdev->stats.vpath_open_fail;
-       *ptr++ = vdev->stats.link_up;
-       *ptr++ = vdev->stats.link_down;
-
-       for (k = 0; k < vdev->no_of_vpath; k++) {
-               *ptr += vdev->vpaths[k].fifo.stats.tx_frms;
-               *(ptr + 1) += vdev->vpaths[k].fifo.stats.tx_errors;
-               *(ptr + 2) += vdev->vpaths[k].fifo.stats.tx_bytes;
-               *(ptr + 3) += vdev->vpaths[k].fifo.stats.txd_not_free;
-               *(ptr + 4) += vdev->vpaths[k].fifo.stats.txd_out_of_desc;
-               *(ptr + 5) += vdev->vpaths[k].ring.stats.rx_frms;
-               *(ptr + 6) += vdev->vpaths[k].ring.stats.rx_errors;
-               *(ptr + 7) += vdev->vpaths[k].ring.stats.rx_bytes;
-               *(ptr + 8) += vdev->vpaths[k].ring.stats.rx_mcast;
-               *(ptr + 9) += vdev->vpaths[k].fifo.stats.pci_map_fail +
-                               vdev->vpaths[k].ring.stats.pci_map_fail;
-               *(ptr + 10) += vdev->vpaths[k].ring.stats.skb_alloc_fail;
-       }
-
-       ptr += 12;
-
-       kfree(xmac_stats);
-       kfree(sw_stats);
-       kfree(hw_stats);
-}
-
-static void vxge_ethtool_get_strings(struct net_device *dev, u32 stringset,
-                                    u8 *data)
-{
-       int stat_size = 0;
-       int i, j;
-       struct vxgedev *vdev = netdev_priv(dev);
-       switch (stringset) {
-       case ETH_SS_STATS:
-               vxge_add_string("VPATH STATISTICS%s\t\t\t",
-                       &stat_size, data, "");
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vxge_add_string("tx_ttl_eth_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_ttl_eth_octects_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_data_octects_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_mcast_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_bcast_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_ucast_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_tagged_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_vld_ip_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_vld_ip_octects_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_icmp_%d\t\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_tcp_%d\t\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_rst_tcp_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_udp_%d\t\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_unknown_proto_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_lost_ip_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_parse_error_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_tcp_offload_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_retx_tcp_offload_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_lost_ip_offload_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_eth_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_vld_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_offload_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_eth_octects_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_data_octects_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_offload_octects_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_vld_mcast_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_vld_bcast_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_accepted_ucast_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_accepted_nucast_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_tagged_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_long_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_usized_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_osized_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_frag_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_jabber_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_64_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_65_127_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_128_255_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_256_511_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_512_1023_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_1024_1518_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_1519_4095_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_4096_8191_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_8192_max_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ttl_gt_max_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ip%d\t\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_accepted_ip_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_ip_octects_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_err_ip_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_icmp_%d\t\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_tcp_%d\t\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_udp_%d\t\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_err_tcp_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_lost_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_lost_ip_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_lost_ip_offload_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_various_discard_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_sleep_discard_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_red_discard_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_queue_full_discard_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_mpa_ok_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-               }
-
-               vxge_add_string("\nAGGR STATISTICS%s\t\t\t\t",
-                       &stat_size, data, "");
-               for (i = 0; i < vdev->max_config_port; i++) {
-                       vxge_add_string("tx_frms_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_data_octects_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_mcast_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_bcast_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_discarded_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_errored_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_frms_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_data_octects_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_mcast_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_bcast_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_discarded_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_errored_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_unknown_slow_proto_frms_%d\t",
-                               &stat_size, data, i);
-               }
-
-               vxge_add_string("\nPORT STATISTICS%s\t\t\t\t",
-                       &stat_size, data, "");
-               for (i = 0; i < vdev->max_config_port; i++) {
-                       vxge_add_string("tx_ttl_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_ttl_octects_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_data_octects_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_mcast_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_bcast_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_ucast_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_tagged_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_vld_ip_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_vld_ip_octects_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_icmp_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_tcp_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_rst_tcp_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_udp_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_parse_error_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_unknown_protocol_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_pause_ctrl_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_marker_pdu_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_lacpdu_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_drop_ip_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_marker_resp_pdu_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_xgmii_char2_match_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_xgmii_char1_match_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_xgmii_column2_match_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_xgmii_column1_match_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_any_err_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("tx_drop_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_vld_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_offload_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_octects_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_data_octects_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_offload_octects_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_vld_mcast_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_vld_bcast_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_accepted_ucast_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_accepted_nucast_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_tagged_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_long_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_usized_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_osized_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_frag_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_jabber_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_64_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_65_127_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_128_255_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_256_511_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_512_1023_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_1024_1518_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_1519_4095_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_4096_8191_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_8192_max_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ttl_gt_max_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ip_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_accepted_ip_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_ip_octets_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_err_ip_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_icmp_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_tcp_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_udp_%d\t\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_err_tcp_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_pause_count_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_pause_ctrl_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_unsup_ctrl_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_fcs_err_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_in_rng_len_err_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_out_rng_len_err_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_drop_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_discard_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_drop_ip_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_drop_udp_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_marker_pdu_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_lacpdu_frms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_unknown_pdu_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_marker_resp_pdu_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_fcs_discard_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_illegal_pdu_frms_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_switch_discard_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_len_discard_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_rpa_discard_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_l2_mgmt_discard_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_rts_discard_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_trash_discard_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_buff_full_discard_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_red_discard_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_xgmii_ctrl_err_cnt_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_xgmii_data_err_cnt_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_xgmii_char1_match_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_xgmii_err_sym_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_xgmii_column1_match_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_xgmii_char2_match_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_local_fault_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_xgmii_column2_match_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_jettison_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("rx_remote_fault_%d\t\t\t",
-                               &stat_size, data, i);
-               }
-
-               vxge_add_string("\n SOFTWARE STATISTICS%s\t\t\t",
-                       &stat_size, data, "");
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vxge_add_string("soft_reset_cnt_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("unknown_alarms_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("network_sustained_fault_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("network_sustained_ok_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("kdfcctl_fifo0_overwrite_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("kdfcctl_fifo0_poison_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("kdfcctl_fifo0_dma_error_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("dblgen_fifo0_overflow_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("statsb_pif_chain_error_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("statsb_drop_timeout_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("target_illegal_access_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("ini_serr_det_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("prc_ring_bumps_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("prc_rxdcm_sc_err_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("prc_rxdcm_sc_abort_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("prc_quanta_size_err_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("ring_full_cnt_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("ring_usage_cnt_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("ring_usage_max_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("ring_reserve_free_swaps_cnt_%d\t",
-                               &stat_size, data, i);
-                       vxge_add_string("ring_total_compl_cnt_%d\t\t",
-                               &stat_size, data, i);
-                       for (j = 0; j < VXGE_HW_DTR_MAX_T_CODE; j++)
-                               vxge_add_string("rxd_t_code_err_cnt%d_%d\t\t",
-                                       &stat_size, data, j, i);
-                       vxge_add_string("fifo_full_cnt_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("fifo_usage_cnt_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("fifo_usage_max_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("fifo_reserve_free_swaps_cnt_%d\t",
-                               &stat_size, data, i);
-                       vxge_add_string("fifo_total_compl_cnt_%d\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("fifo_total_posts_%d\t\t\t",
-                               &stat_size, data, i);
-                       vxge_add_string("fifo_total_buffers_%d\t\t",
-                               &stat_size, data, i);
-                       for (j = 0; j < VXGE_HW_DTR_MAX_T_CODE; j++)
-                               vxge_add_string("txd_t_code_err_cnt%d_%d\t\t",
-                                       &stat_size, data, j, i);
-               }
-
-               vxge_add_string("\n HARDWARE STATISTICS%s\t\t\t",
-                               &stat_size, data, "");
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vxge_add_string("ini_num_mwr_sent_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("ini_num_mrd_sent_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("ini_num_cpl_rcvd_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("ini_num_mwr_byte_sent_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("ini_num_cpl_byte_rcvd_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("wrcrdtarb_xoff_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rdcrdtarb_xoff_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("vpath_genstats_count0_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("vpath_genstats_count1_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("vpath_genstats_count2_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("vpath_genstats_count3_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("vpath_genstats_count4_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("vpath_genstats_count5_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("prog_event_vnum0_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("prog_event_vnum1_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("prog_event_vnum2_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("prog_event_vnum3_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_multi_cast_frame_discard_%d\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_frm_transferred_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rxd_returned_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_mpa_len_fail_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_mpa_mrk_fail_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_mpa_crc_fail_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_permitted_frms_%d\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_vp_reset_discarded_frms_%d\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("rx_wol_frms_%d\t\t\t",
-                                       &stat_size, data, i);
-                       vxge_add_string("tx_vp_reset_discarded_frms_%d\t",
-                                       &stat_size, data, i);
-               }
-
-               memcpy(data + stat_size, &ethtool_driver_stats_keys,
-                       sizeof(ethtool_driver_stats_keys));
-       }
-}
-
-static int vxge_ethtool_get_regs_len(struct net_device *dev)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-
-       return sizeof(struct vxge_hw_vpath_reg) * vdev->no_of_vpath;
-}
-
-static int vxge_ethtool_get_sset_count(struct net_device *dev, int sset)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-
-       switch (sset) {
-       case ETH_SS_STATS:
-               return VXGE_TITLE_LEN +
-                       (vdev->no_of_vpath * VXGE_HW_VPATH_STATS_LEN) +
-                       (vdev->max_config_port * VXGE_HW_AGGR_STATS_LEN) +
-                       (vdev->max_config_port * VXGE_HW_PORT_STATS_LEN) +
-                       (vdev->no_of_vpath * VXGE_HW_VPATH_TX_STATS_LEN) +
-                       (vdev->no_of_vpath * VXGE_HW_VPATH_RX_STATS_LEN) +
-                       (vdev->no_of_vpath * VXGE_SW_STATS_LEN) +
-                       DRIVER_STAT_LEN;
-       default:
-               return -EOPNOTSUPP;
-       }
-}
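
As a worked instance of the ETH_SS_STATS arithmetic above (the LEN constants come from vxge-ethtool.h later in this diff; the vpath and port counts are hypothetical), a device with no_of_vpath = 4 and max_config_port = 2 would report:

    5 (title) + 4*27 (vpath) + 2*13 (aggr) + 2*94 (port)
      + 4*19 (vpath tx) + 4*42 (vpath rx) + 4*60 (sw) + DRIVER_STAT_LEN
      = 811 + DRIVER_STAT_LEN strings
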
-
-static int vxge_fw_flash(struct net_device *dev, struct ethtool_flash *parms)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-
-       if (vdev->max_vpath_supported != VXGE_HW_MAX_VIRTUAL_PATHS) {
-               printk(KERN_INFO "Single Function Mode is required to flash the"
-                      " firmware\n");
-               return -EINVAL;
-       }
-
-       if (netif_running(dev)) {
-               printk(KERN_INFO "Interface %s must be down to flash the "
-                      "firmware\n", dev->name);
-               return -EBUSY;
-       }
-
-       return vxge_fw_upgrade(vdev, parms->data, 1);
-}
-
-static const struct ethtool_ops vxge_ethtool_ops = {
-       .get_drvinfo            = vxge_ethtool_gdrvinfo,
-       .get_regs_len           = vxge_ethtool_get_regs_len,
-       .get_regs               = vxge_ethtool_gregs,
-       .get_link               = ethtool_op_get_link,
-       .get_pauseparam         = vxge_ethtool_getpause_data,
-       .set_pauseparam         = vxge_ethtool_setpause_data,
-       .get_strings            = vxge_ethtool_get_strings,
-       .set_phys_id            = vxge_ethtool_idnic,
-       .get_sset_count         = vxge_ethtool_get_sset_count,
-       .get_ethtool_stats      = vxge_get_ethtool_stats,
-       .flash_device           = vxge_fw_flash,
-       .get_link_ksettings     = vxge_ethtool_get_link_ksettings,
-       .set_link_ksettings     = vxge_ethtool_set_link_ksettings,
-};
-
-void vxge_initialize_ethtool_ops(struct net_device *ndev)
-{
-       ndev->ethtool_ops = &vxge_ethtool_ops;
-}
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-ethtool.h b/drivers/net/ethernet/neterion/vxge/vxge-ethtool.h
deleted file mode 100644 (file)
index 065a2c0..0000000
+++ /dev/null
@@ -1,48 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-ethtool.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
- *                 Virtualized Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#ifndef _VXGE_ETHTOOL_H
-#define _VXGE_ETHTOOL_H
-
-#include "vxge-main.h"
-
-/* Ethtool related variables and Macros. */
-static int vxge_ethtool_get_sset_count(struct net_device *dev, int sset);
-
-#define VXGE_TITLE_LEN                 5
-#define VXGE_HW_VPATH_STATS_LEN        27
-#define VXGE_HW_AGGR_STATS_LEN         13
-#define VXGE_HW_PORT_STATS_LEN         94
-#define VXGE_HW_VPATH_TX_STATS_LEN     19
-#define VXGE_HW_VPATH_RX_STATS_LEN     42
-#define VXGE_SW_STATS_LEN              60
-#define VXGE_HW_STATS_LEN      (VXGE_HW_VPATH_STATS_LEN +\
-                               VXGE_HW_AGGR_STATS_LEN +\
-                               VXGE_HW_PORT_STATS_LEN +\
-                               VXGE_HW_VPATH_TX_STATS_LEN +\
-                               VXGE_HW_VPATH_RX_STATS_LEN)
-
-#define DRIVER_STAT_LEN (sizeof(ethtool_driver_stats_keys)/ETH_GSTRING_LEN)
-#define STAT_LEN (VXGE_HW_STATS_LEN + DRIVER_STAT_LEN + VXGE_SW_STATS_LEN)
-
-/* Maximum flicker time of adapter LED */
-#define VXGE_MAX_FLICKER_TIME (60 * HZ) /* 60 seconds */
-#define VXGE_FLICKER_ON                1
-#define VXGE_FLICKER_OFF       0
-
-#define vxge_add_string(fmt, size, buf, ...) {\
-       snprintf(buf + *size, ETH_GSTRING_LEN, fmt, __VA_ARGS__); \
-       *size += ETH_GSTRING_LEN; \
-}
-
-#endif /*_VXGE_ETHTOOL_H*/
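
The vxge_add_string() macro above packs each stat name into its own fixed ETH_GSTRING_LEN slot of one flat buffer, which is what keeps the ethtool string table index-aligned with the values returned by get_ethtool_stats. A minimal standalone sketch of the same pattern (the buffer size and stat names below are illustrative, not the driver's):

    #include <stdio.h>

    #define ETH_GSTRING_LEN 32

    /* Same shape as the driver macro: format into the next slot, advance. */
    #define vxge_add_string(fmt, size, buf, ...) {                    \
            snprintf(buf + *size, ETH_GSTRING_LEN, fmt, __VA_ARGS__); \
            *size += ETH_GSTRING_LEN;                                 \
    }

    int main(void)
    {
            char data[4 * ETH_GSTRING_LEN];
            int stat_size = 0;
            int i;

            for (i = 0; i < 2; i++) {
                    vxge_add_string("tx_frms_%d", &stat_size, data, i);
                    vxge_add_string("rx_frms_%d", &stat_size, data, i);
            }
            /* Slots now hold: tx_frms_0, rx_frms_0, tx_frms_1, rx_frms_1 */
            for (i = 0; i < 4; i++)
                    printf("%s\n", data + i * ETH_GSTRING_LEN);
            return 0;
    }
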
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c
deleted file mode 100644 (file)
index fa5d4dd..0000000
+++ /dev/null
@@ -1,4808 +0,0 @@
-/******************************************************************************
-* This software may be used and distributed according to the terms of
-* the GNU General Public License (GPL), incorporated herein by reference.
-* Drivers based on or derived from this code fall under the GPL and must
-* retain the authorship, copyright and license notice.  This file is not
-* a complete program and may only be used when the entire operating
-* system is licensed under the GPL.
-* See the file COPYING in this distribution for more information.
-*
-* vxge-main.c: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
-*              Virtualized Server Adapter.
-* Copyright(c) 2002-2010 Exar Corp.
-*
-* The module loadable parameters that are supported by the driver and a brief
-* explanation of all the variables:
-* vlan_tag_strip:
-*      Strip VLAN Tag enable/disable. Instructs the device to remove
-*      the VLAN tag from all received tagged frames that are not
-*      replicated at the internal L2 switch.
-*              0 - Do not strip the VLAN tag.
-*              1 - Strip the VLAN tag.
-*
-* addr_learn_en:
-*      Enable learning the mac address of the guest OS interface in
-*      a virtualization environment.
-*              0 - DISABLE
-*              1 - ENABLE
-*
-* max_config_port:
-*      Maximum number of ports to be supported.
-*              MIN - 1 and MAX - 2
-*
-* max_config_vpath:
-*      This configures the maximum number of VPATHs configured for
-*      each device function.
-*              MIN - 1 and MAX - 17
-*
-* max_config_dev:
-*      This configures the maximum number of device functions to be enabled.
-*              MIN - 1 and MAX - 17
-*
-******************************************************************************/
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/bitops.h>
-#include <linux/if_vlan.h>
-#include <linux/interrupt.h>
-#include <linux/pci.h>
-#include <linux/slab.h>
-#include <linux/tcp.h>
-#include <net/ip.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/firmware.h>
-#include <linux/net_tstamp.h>
-#include <linux/prefetch.h>
-#include <linux/module.h>
-#include "vxge-main.h"
-#include "vxge-reg.h"
-
-MODULE_LICENSE("Dual BSD/GPL");
-MODULE_DESCRIPTION("Neterion's X3100 Series 10GbE PCIe I/O "
-       "Virtualized Server Adapter");
-
-static const struct pci_device_id vxge_id_table[] = {
-       {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_TITAN_WIN, PCI_ANY_ID,
-       PCI_ANY_ID},
-       {PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_TITAN_UNI, PCI_ANY_ID,
-       PCI_ANY_ID},
-       {0}
-};
-
-MODULE_DEVICE_TABLE(pci, vxge_id_table);
-
-VXGE_MODULE_PARAM_INT(vlan_tag_strip, VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_ENABLE);
-VXGE_MODULE_PARAM_INT(addr_learn_en, VXGE_HW_MAC_ADDR_LEARN_DEFAULT);
-VXGE_MODULE_PARAM_INT(max_config_port, VXGE_MAX_CONFIG_PORT);
-VXGE_MODULE_PARAM_INT(max_config_vpath, VXGE_USE_DEFAULT);
-VXGE_MODULE_PARAM_INT(max_mac_vpath, VXGE_MAX_MAC_ADDR_COUNT);
-VXGE_MODULE_PARAM_INT(max_config_dev, VXGE_MAX_CONFIG_DEV);
-
-static u16 vpath_selector[VXGE_HW_MAX_VIRTUAL_PATHS] =
-               {0, 1, 3, 3, 7, 7, 7, 7, 15, 15, 15, 15, 15, 15, 15, 15, 31};
-static unsigned int bw_percentage[VXGE_HW_MAX_VIRTUAL_PATHS] =
-       {[0 ...(VXGE_HW_MAX_VIRTUAL_PATHS - 1)] = 0xFF};
-module_param_array(bw_percentage, uint, NULL, 0);
-
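
VXGE_MODULE_PARAM_INT is defined in vxge-main.h, which is not part of this hunk; a plausible expansion, shown only as an assumption to make the declarations above self-explanatory:

    /* Assumed definition; the real one lives in vxge-main.h. */
    #define VXGE_MODULE_PARAM_INT(p, val) \
            static int p = val;           \
            module_param(p, int, 0)
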
-static struct vxge_drv_config *driver_config;
-static void vxge_reset_all_vpaths(struct vxgedev *vdev);
-
-static inline int is_vxge_card_up(struct vxgedev *vdev)
-{
-       return test_bit(__VXGE_STATE_CARD_UP, &vdev->state);
-}
-
-static inline void VXGE_COMPLETE_VPATH_TX(struct vxge_fifo *fifo)
-{
-       struct sk_buff **skb_ptr = NULL;
-       struct sk_buff **temp;
-#define NR_SKB_COMPLETED 16
-       struct sk_buff *completed[NR_SKB_COMPLETED];
-       int more;
-
-       do {
-               more = 0;
-               skb_ptr = completed;
-
-               if (__netif_tx_trylock(fifo->txq)) {
-                       vxge_hw_vpath_poll_tx(fifo->handle, &skb_ptr,
-                                               NR_SKB_COMPLETED, &more);
-                       __netif_tx_unlock(fifo->txq);
-               }
-
-               /* free SKBs */
-               for (temp = completed; temp != skb_ptr; temp++)
-                       dev_consume_skb_irq(*temp);
-       } while (more);
-}
-
-static inline void VXGE_COMPLETE_ALL_TX(struct vxgedev *vdev)
-{
-       int i;
-
-       /* Complete all transmits */
-       for (i = 0; i < vdev->no_of_vpath; i++)
-               VXGE_COMPLETE_VPATH_TX(&vdev->vpaths[i].fifo);
-}
-
-static inline void VXGE_COMPLETE_ALL_RX(struct vxgedev *vdev)
-{
-       int i;
-       struct vxge_ring *ring;
-
-       /* Complete all receives */
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               ring = &vdev->vpaths[i].ring;
-               vxge_hw_vpath_poll_rx(ring->handle);
-       }
-}
-
-/*
- * vxge_callback_link_up
- *
- * Called in interrupt context to notify a link-up state change.
- */
-static void vxge_callback_link_up(struct __vxge_hw_device *hldev)
-{
-       struct net_device *dev = hldev->ndev;
-       struct vxgedev *vdev = netdev_priv(dev);
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-               vdev->ndev->name, __func__, __LINE__);
-       netdev_notice(vdev->ndev, "Link Up\n");
-       vdev->stats.link_up++;
-
-       netif_carrier_on(vdev->ndev);
-       netif_tx_wake_all_queues(vdev->ndev);
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d Exiting...", vdev->ndev->name, __func__, __LINE__);
-}
-
-/*
- * vxge_callback_link_down
- *
- * Called in interrupt context to notify a link-down state change.
- */
-static void vxge_callback_link_down(struct __vxge_hw_device *hldev)
-{
-       struct net_device *dev = hldev->ndev;
-       struct vxgedev *vdev = netdev_priv(dev);
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d", vdev->ndev->name, __func__, __LINE__);
-       netdev_notice(vdev->ndev, "Link Down\n");
-
-       vdev->stats.link_down++;
-       netif_carrier_off(vdev->ndev);
-       netif_tx_stop_all_queues(vdev->ndev);
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d Exiting...", vdev->ndev->name, __func__, __LINE__);
-}
-
-/*
- * vxge_rx_alloc
- *
- * Allocate SKB.
- */
-static struct sk_buff *
-vxge_rx_alloc(void *dtrh, struct vxge_ring *ring, const int skb_size)
-{
-       struct net_device    *dev;
-       struct sk_buff       *skb;
-       struct vxge_rx_priv *rx_priv;
-
-       dev = ring->ndev;
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-               ring->ndev->name, __func__, __LINE__);
-
-       rx_priv = vxge_hw_ring_rxd_private_get(dtrh);
-
-       /* Try to allocate the skb first; this allocation may fail. */
-       skb = netdev_alloc_skb(dev, skb_size +
-       VXGE_HW_HEADER_ETHERNET_II_802_3_ALIGN);
-       if (skb == NULL) {
-               vxge_debug_mem(VXGE_ERR,
-                       "%s: out of memory to allocate SKB", dev->name);
-               ring->stats.skb_alloc_fail++;
-               return NULL;
-       }
-
-       vxge_debug_mem(VXGE_TRACE,
-               "%s: %s:%d  Skb : 0x%p", ring->ndev->name,
-               __func__, __LINE__, skb);
-
-       skb_reserve(skb, VXGE_HW_HEADER_ETHERNET_II_802_3_ALIGN);
-
-       rx_priv->skb = skb;
-       rx_priv->skb_data = NULL;
-       rx_priv->data_size = skb_size;
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d Exiting...", ring->ndev->name, __func__, __LINE__);
-
-       return skb;
-}
-
-/*
- * vxge_rx_map
- */
-static int vxge_rx_map(void *dtrh, struct vxge_ring *ring)
-{
-       struct vxge_rx_priv *rx_priv;
-       dma_addr_t dma_addr;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-               ring->ndev->name, __func__, __LINE__);
-       rx_priv = vxge_hw_ring_rxd_private_get(dtrh);
-
-       rx_priv->skb_data = rx_priv->skb->data;
-       dma_addr = dma_map_single(&ring->pdev->dev, rx_priv->skb_data,
-                                 rx_priv->data_size, DMA_FROM_DEVICE);
-
-       if (unlikely(dma_mapping_error(&ring->pdev->dev, dma_addr))) {
-               ring->stats.pci_map_fail++;
-               return -EIO;
-       }
-       vxge_debug_mem(VXGE_TRACE,
-               "%s: %s:%d  1 buffer mode dma_addr = 0x%llx",
-               ring->ndev->name, __func__, __LINE__,
-               (unsigned long long)dma_addr);
-       vxge_hw_ring_rxd_1b_set(dtrh, dma_addr, rx_priv->data_size);
-
-       rx_priv->data_dma = dma_addr;
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d Exiting...", ring->ndev->name, __func__, __LINE__);
-
-       return 0;
-}
-
-/*
- * vxge_rx_initial_replenish
- * Allocates RxDs as part of the initial replenish procedure.
- */
-static enum vxge_hw_status
-vxge_rx_initial_replenish(void *dtrh, void *userdata)
-{
-       struct vxge_ring *ring = (struct vxge_ring *)userdata;
-       struct vxge_rx_priv *rx_priv;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-               ring->ndev->name, __func__, __LINE__);
-       if (vxge_rx_alloc(dtrh, ring,
-                         VXGE_LL_MAX_FRAME_SIZE(ring->ndev)) == NULL)
-               return VXGE_HW_FAIL;
-
-       if (vxge_rx_map(dtrh, ring)) {
-               rx_priv = vxge_hw_ring_rxd_private_get(dtrh);
-               dev_kfree_skb(rx_priv->skb);
-
-               return VXGE_HW_FAIL;
-       }
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d Exiting...", ring->ndev->name, __func__, __LINE__);
-
-       return VXGE_HW_OK;
-}
-
-static inline void
-vxge_rx_complete(struct vxge_ring *ring, struct sk_buff *skb, u16 vlan,
-                int pkt_length, struct vxge_hw_ring_rxd_info *ext_info)
-{
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-                       ring->ndev->name, __func__, __LINE__);
-       skb_record_rx_queue(skb, ring->driver_id);
-       skb->protocol = eth_type_trans(skb, ring->ndev);
-
-       u64_stats_update_begin(&ring->stats.syncp);
-       ring->stats.rx_frms++;
-       ring->stats.rx_bytes += pkt_length;
-
-       if (skb->pkt_type == PACKET_MULTICAST)
-               ring->stats.rx_mcast++;
-       u64_stats_update_end(&ring->stats.syncp);
-
-       vxge_debug_rx(VXGE_TRACE,
-               "%s: %s:%d  skb protocol = %d",
-               ring->ndev->name, __func__, __LINE__, skb->protocol);
-
-       if (ext_info->vlan &&
-           ring->vlan_tag_strip == VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_ENABLE)
-               __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), ext_info->vlan);
-       napi_gro_receive(ring->napi_p, skb);
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d Exiting...", ring->ndev->name, __func__, __LINE__);
-}
-
-static inline void vxge_re_pre_post(void *dtr, struct vxge_ring *ring,
-                                   struct vxge_rx_priv *rx_priv)
-{
-       dma_sync_single_for_device(&ring->pdev->dev, rx_priv->data_dma,
-                                  rx_priv->data_size, DMA_FROM_DEVICE);
-
-       vxge_hw_ring_rxd_1b_set(dtr, rx_priv->data_dma, rx_priv->data_size);
-       vxge_hw_ring_rxd_pre_post(ring->handle, dtr);
-}
-
-static inline void vxge_post(int *dtr_cnt, void **first_dtr,
-                            void *post_dtr, struct __vxge_hw_ring *ringh)
-{
-       int dtr_count = *dtr_cnt;
-       if ((*dtr_cnt % VXGE_HW_RXSYNC_FREQ_CNT) == 0) {
-               if (*first_dtr)
-                       vxge_hw_ring_rxd_post_post_wmb(ringh, *first_dtr);
-               *first_dtr = post_dtr;
-       } else
-               vxge_hw_ring_rxd_post_post(ringh, post_dtr);
-       dtr_count++;
-       *dtr_cnt = dtr_count;
-}
-
-/*
- * vxge_rx_1b_compl
- *
- * Called when an interrupt signals a received frame or when the receive
- * ring contains fresh, as-yet unprocessed frames.
- */
-static enum vxge_hw_status
-vxge_rx_1b_compl(struct __vxge_hw_ring *ringh, void *dtr,
-                u8 t_code, void *userdata)
-{
-       struct vxge_ring *ring = (struct vxge_ring *)userdata;
-       struct net_device *dev = ring->ndev;
-       unsigned int dma_sizes;
-       void *first_dtr = NULL;
-       int dtr_cnt = 0;
-       int data_size;
-       dma_addr_t data_dma;
-       int pkt_length;
-       struct sk_buff *skb;
-       struct vxge_rx_priv *rx_priv;
-       struct vxge_hw_ring_rxd_info ext_info;
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-               ring->ndev->name, __func__, __LINE__);
-
-       if (ring->budget <= 0)
-               goto out;
-
-       do {
-               prefetch((char *)dtr + L1_CACHE_BYTES);
-               rx_priv = vxge_hw_ring_rxd_private_get(dtr);
-               skb = rx_priv->skb;
-               data_size = rx_priv->data_size;
-               data_dma = rx_priv->data_dma;
-               prefetch(rx_priv->skb_data);
-
-               vxge_debug_rx(VXGE_TRACE,
-                       "%s: %s:%d  skb = 0x%p",
-                       ring->ndev->name, __func__, __LINE__, skb);
-
-               vxge_hw_ring_rxd_1b_get(ringh, dtr, &dma_sizes);
-               pkt_length = dma_sizes;
-
-               pkt_length -= ETH_FCS_LEN;
-
-               vxge_debug_rx(VXGE_TRACE,
-                       "%s: %s:%d  Packet Length = %d",
-                       ring->ndev->name, __func__, __LINE__, pkt_length);
-
-               vxge_hw_ring_rxd_1b_info_get(ringh, dtr, &ext_info);
-
-               /* check skb validity */
-               vxge_assert(skb);
-
-               prefetch((char *)skb + L1_CACHE_BYTES);
-               if (unlikely(t_code)) {
-                       if (vxge_hw_ring_handle_tcode(ringh, dtr, t_code) !=
-                               VXGE_HW_OK) {
-
-                               ring->stats.rx_errors++;
-                               vxge_debug_rx(VXGE_TRACE,
-                                       "%s: %s :%d Rx T_code is %d",
-                                       ring->ndev->name, __func__,
-                                       __LINE__, t_code);
-
-                               /* If the t_code is not supported and is
-                                * other than 0x5 (an unparseable packet,
-                                * such as an unknown IPv6 header), drop it.
-                                */
-                               vxge_re_pre_post(dtr, ring, rx_priv);
-
-                               vxge_post(&dtr_cnt, &first_dtr, dtr, ringh);
-                               ring->stats.rx_dropped++;
-                               continue;
-                       }
-               }
-
-               if (pkt_length > VXGE_LL_RX_COPY_THRESHOLD) {
-                       if (vxge_rx_alloc(dtr, ring, data_size) != NULL) {
-                               if (!vxge_rx_map(dtr, ring)) {
-                                       skb_put(skb, pkt_length);
-
-                                       dma_unmap_single(&ring->pdev->dev,
-                                                        data_dma, data_size,
-                                                        DMA_FROM_DEVICE);
-
-                                       vxge_hw_ring_rxd_pre_post(ringh, dtr);
-                                       vxge_post(&dtr_cnt, &first_dtr, dtr,
-                                               ringh);
-                               } else {
-                                       dev_kfree_skb(rx_priv->skb);
-                                       rx_priv->skb = skb;
-                                       rx_priv->data_size = data_size;
-                                       vxge_re_pre_post(dtr, ring, rx_priv);
-
-                                       vxge_post(&dtr_cnt, &first_dtr, dtr,
-                                               ringh);
-                                       ring->stats.rx_dropped++;
-                                       break;
-                               }
-                       } else {
-                               vxge_re_pre_post(dtr, ring, rx_priv);
-
-                               vxge_post(&dtr_cnt, &first_dtr, dtr, ringh);
-                               ring->stats.rx_dropped++;
-                               break;
-                       }
-               } else {
-                       struct sk_buff *skb_up;
-
-                       skb_up = netdev_alloc_skb(dev, pkt_length +
-                               VXGE_HW_HEADER_ETHERNET_II_802_3_ALIGN);
-                       if (skb_up != NULL) {
-                               skb_reserve(skb_up,
-                                   VXGE_HW_HEADER_ETHERNET_II_802_3_ALIGN);
-
-                               dma_sync_single_for_cpu(&ring->pdev->dev,
-                                                       data_dma, data_size,
-                                                       DMA_FROM_DEVICE);
-
-                               vxge_debug_mem(VXGE_TRACE,
-                                       "%s: %s:%d  skb_up = %p",
-                                       ring->ndev->name, __func__,
-                                       __LINE__, skb);
-                               memcpy(skb_up->data, skb->data, pkt_length);
-
-                               vxge_re_pre_post(dtr, ring, rx_priv);
-
-                               vxge_post(&dtr_cnt, &first_dtr, dtr,
-                                       ringh);
-                               /* pass the small copied SKB up instead */
-                               skb = skb_up;
-                               skb_put(skb, pkt_length);
-                       } else {
-                               vxge_re_pre_post(dtr, ring, rx_priv);
-
-                               vxge_post(&dtr_cnt, &first_dtr, dtr, ringh);
-                               vxge_debug_rx(VXGE_ERR,
-                                       "%s: vxge_rx_1b_compl: out of "
-                                       "memory", dev->name);
-                               ring->stats.skb_alloc_fail++;
-                               break;
-                       }
-               }
-
-               if ((ext_info.proto & VXGE_HW_FRAME_PROTO_TCP_OR_UDP) &&
-                   !(ext_info.proto & VXGE_HW_FRAME_PROTO_IP_FRAG) &&
-                   (dev->features & NETIF_F_RXCSUM) && /* Offload Rx side CSUM */
-                   ext_info.l3_cksum == VXGE_HW_L3_CKSUM_OK &&
-                   ext_info.l4_cksum == VXGE_HW_L4_CKSUM_OK)
-                       skb->ip_summed = CHECKSUM_UNNECESSARY;
-               else
-                       skb_checksum_none_assert(skb);
-
-
-               if (ring->rx_hwts) {
-                       struct skb_shared_hwtstamps *skb_hwts;
-                       u32 ns = *(u32 *)(skb->head + pkt_length);
-
-                       skb_hwts = skb_hwtstamps(skb);
-                       skb_hwts->hwtstamp = ns_to_ktime(ns);
-               }
-
-               /* rth_hash_type and rth_it_hit are non-zero regardless of
-                * whether rss is enabled.  Only the rth_value is zero/non-zero
-                * if rss is disabled/enabled, so key off of that.
-                */
-               if (ext_info.rth_value)
-                       skb_set_hash(skb, ext_info.rth_value,
-                                    PKT_HASH_TYPE_L3);
-
-               vxge_rx_complete(ring, skb, ext_info.vlan,
-                       pkt_length, &ext_info);
-
-               ring->budget--;
-               ring->pkts_processed++;
-               if (!ring->budget)
-                       break;
-
-       } while (vxge_hw_ring_rxd_next_completed(ringh, &dtr,
-               &t_code) == VXGE_HW_OK);
-
-       if (first_dtr)
-               vxge_hw_ring_rxd_post_post_wmb(ringh, first_dtr);
-
-out:
-       vxge_debug_entryexit(VXGE_TRACE,
-                               "%s:%d  Exiting...",
-                               __func__, __LINE__);
-       return VXGE_HW_OK;
-}
-
-/*
- * vxge_xmit_compl
- *
- * Called when an interrupt is raised to indicate DMA completion of a Tx
- * packet. It identifies the last TxD whose buffer was freed and frees all
- * skbs whose data has already been DMA'ed into the NIC's internal memory.
- */
-static enum vxge_hw_status
-vxge_xmit_compl(struct __vxge_hw_fifo *fifo_hw, void *dtr,
-               enum vxge_hw_fifo_tcode t_code, void *userdata,
-               struct sk_buff ***skb_ptr, int nr_skb, int *more)
-{
-       struct vxge_fifo *fifo = (struct vxge_fifo *)userdata;
-       struct sk_buff *skb, **done_skb = *skb_ptr;
-       int pkt_cnt = 0;
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s:%d Entered....", __func__, __LINE__);
-
-       do {
-               int frg_cnt;
-               skb_frag_t *frag;
-               int i = 0, j;
-               struct vxge_tx_priv *txd_priv =
-                       vxge_hw_fifo_txdl_private_get(dtr);
-
-               skb = txd_priv->skb;
-               frg_cnt = skb_shinfo(skb)->nr_frags;
-               frag = &skb_shinfo(skb)->frags[0];
-
-               vxge_debug_tx(VXGE_TRACE,
-                               "%s: %s:%d fifo_hw = %p dtr = %p "
-                               "tcode = 0x%x", fifo->ndev->name, __func__,
-                               __LINE__, fifo_hw, dtr, t_code);
-               /* check skb validity */
-               vxge_assert(skb);
-               vxge_debug_tx(VXGE_TRACE,
-                       "%s: %s:%d skb = %p txd_priv = %p frg_cnt = %d",
-                       fifo->ndev->name, __func__, __LINE__,
-                       skb, txd_priv, frg_cnt);
-               if (unlikely(t_code)) {
-                       fifo->stats.tx_errors++;
-                       vxge_debug_tx(VXGE_ERR,
-                               "%s: tx: dtr %p completed due to "
-                               "error t_code %01x", fifo->ndev->name,
-                               dtr, t_code);
-                       vxge_hw_fifo_handle_tcode(fifo_hw, dtr, t_code);
-               }
-
-               /*  for unfragmented skb */
-               dma_unmap_single(&fifo->pdev->dev, txd_priv->dma_buffers[i++],
-                                skb_headlen(skb), DMA_TO_DEVICE);
-
-               for (j = 0; j < frg_cnt; j++) {
-                       dma_unmap_page(&fifo->pdev->dev,
-                                      txd_priv->dma_buffers[i++],
-                                      skb_frag_size(frag), DMA_TO_DEVICE);
-                       frag += 1;
-               }
-
-               vxge_hw_fifo_txdl_free(fifo_hw, dtr);
-
-               /* Updating the statistics block */
-               u64_stats_update_begin(&fifo->stats.syncp);
-               fifo->stats.tx_frms++;
-               fifo->stats.tx_bytes += skb->len;
-               u64_stats_update_end(&fifo->stats.syncp);
-
-               *done_skb++ = skb;
-
-               if (--nr_skb <= 0) {
-                       *more = 1;
-                       break;
-               }
-
-               pkt_cnt++;
-               if (pkt_cnt > fifo->indicate_max_pkts)
-                       break;
-
-       } while (vxge_hw_fifo_txdl_next_completed(fifo_hw,
-                               &dtr, &t_code) == VXGE_HW_OK);
-
-       *skb_ptr = done_skb;
-       if (netif_tx_queue_stopped(fifo->txq))
-               netif_tx_wake_queue(fifo->txq);
-
-       vxge_debug_entryexit(VXGE_TRACE,
-                               "%s: %s:%d  Exiting...",
-                               fifo->ndev->name, __func__, __LINE__);
-       return VXGE_HW_OK;
-}
-
-/* select a vpath to transmit the packet */
-static u32 vxge_get_vpath_no(struct vxgedev *vdev, struct sk_buff *skb)
-{
-       u16 queue_len, counter = 0;
-       if (skb->protocol == htons(ETH_P_IP)) {
-               struct iphdr *ip;
-               struct tcphdr *th;
-
-               ip = ip_hdr(skb);
-
-               if (!ip_is_fragment(ip)) {
-                       th = (struct tcphdr *)(((unsigned char *)ip) +
-                                       ip->ihl*4);
-
-                       queue_len = vdev->no_of_vpath;
-                       counter = (ntohs(th->source) +
-                               ntohs(th->dest)) &
-                               vdev->vpath_selector[queue_len - 1];
-                       if (counter >= queue_len)
-                               counter = queue_len - 1;
-               }
-       }
-       return counter;
-}
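
A worked pass through the steering hash above, with made-up port numbers: for a device with no_of_vpath = 4, vpath_selector[4 - 1] = 3 (see the table near the top of this file), so a TCP flow with source port 34567 and destination port 80 is steered as:

    counter = (34567 + 80) & 3 = 34647 & 3 = 3;   /* lands on vpath 3 */
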
-
-static enum vxge_hw_status vxge_search_mac_addr_in_list(
-       struct vxge_vpath *vpath, u64 del_mac)
-{
-       struct list_head *entry, *next;
-       list_for_each_safe(entry, next, &vpath->mac_addr_list) {
-               if (((struct vxge_mac_addrs *)entry)->macaddr == del_mac)
-                       return TRUE;
-       }
-       return FALSE;
-}
-
-static int vxge_mac_list_add(struct vxge_vpath *vpath, struct macInfo *mac)
-{
-       struct vxge_mac_addrs *new_mac_entry;
-       u8 *mac_address = NULL;
-
-       if (vpath->mac_addr_cnt >= VXGE_MAX_LEARN_MAC_ADDR_CNT)
-               return TRUE;
-
-       new_mac_entry = kzalloc(sizeof(struct vxge_mac_addrs), GFP_ATOMIC);
-       if (!new_mac_entry) {
-               vxge_debug_mem(VXGE_ERR,
-                       "%s: memory allocation failed",
-                       VXGE_DRIVER_NAME);
-               return FALSE;
-       }
-
-       list_add(&new_mac_entry->item, &vpath->mac_addr_list);
-
-       /* Copy the new mac address to the list */
-       mac_address = (u8 *)&new_mac_entry->macaddr;
-       memcpy(mac_address, mac->macaddr, ETH_ALEN);
-
-       new_mac_entry->state = mac->state;
-       vpath->mac_addr_cnt++;
-
-       if (is_multicast_ether_addr(mac->macaddr))
-               vpath->mcast_addr_cnt++;
-
-       return TRUE;
-}
-
-/* Add a mac address to DA table */
-static enum vxge_hw_status
-vxge_add_mac_addr(struct vxgedev *vdev, struct macInfo *mac)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_vpath *vpath;
-       enum vxge_hw_vpath_mac_addr_add_mode duplicate_mode;
-
-       if (is_multicast_ether_addr(mac->macaddr))
-               duplicate_mode = VXGE_HW_VPATH_MAC_ADDR_ADD_DUPLICATE;
-       else
-               duplicate_mode = VXGE_HW_VPATH_MAC_ADDR_REPLACE_DUPLICATE;
-
-       vpath = &vdev->vpaths[mac->vpath_no];
-       status = vxge_hw_vpath_mac_addr_add(vpath->handle, mac->macaddr,
-                                               mac->macmask, duplicate_mode);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "DA config add entry failed for vpath:%d",
-                       vpath->device_id);
-       } else
-               if (FALSE == vxge_mac_list_add(vpath, mac))
-                       status = -EPERM;
-
-       return status;
-}
-
-static int vxge_learn_mac(struct vxgedev *vdev, u8 *mac_header)
-{
-       struct macInfo mac_info;
-       u8 *mac_address = NULL;
-       u64 mac_addr = 0, vpath_vector = 0;
-       int vpath_idx = 0;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_vpath *vpath = NULL;
-
-       mac_address = (u8 *)&mac_addr;
-       memcpy(mac_address, mac_header, ETH_ALEN);
-
-       /* Is this mac address already in the list? */
-       for (vpath_idx = 0; vpath_idx < vdev->no_of_vpath; vpath_idx++) {
-               vpath = &vdev->vpaths[vpath_idx];
-               if (vxge_search_mac_addr_in_list(vpath, mac_addr))
-                       return vpath_idx;
-       }
-
-       memset(&mac_info, 0, sizeof(struct macInfo));
-       memcpy(mac_info.macaddr, mac_header, ETH_ALEN);
-
-       /* Does any vpath have room to add the mac address to its DA table? */
-       for (vpath_idx = 0; vpath_idx < vdev->no_of_vpath; vpath_idx++) {
-               vpath = &vdev->vpaths[vpath_idx];
-               if (vpath->mac_addr_cnt < vpath->max_mac_addr_cnt) {
-                       /* Add this mac address to this vpath */
-                       mac_info.vpath_no = vpath_idx;
-                       mac_info.state = VXGE_LL_MAC_ADDR_IN_DA_TABLE;
-                       status = vxge_add_mac_addr(vdev, &mac_info);
-                       if (status != VXGE_HW_OK)
-                               return -EPERM;
-                       return vpath_idx;
-               }
-       }
-
-       mac_info.state = VXGE_LL_MAC_ADDR_IN_LIST;
-       vpath_idx = 0;
-       mac_info.vpath_no = vpath_idx;
-       /* Is the first vpath already selected as the catch-basin? */
-       vpath = &vdev->vpaths[vpath_idx];
-       if (vpath->mac_addr_cnt > vpath->max_mac_addr_cnt) {
-               /* Add this mac address to this vpath */
-               if (FALSE == vxge_mac_list_add(vpath, &mac_info))
-                       return -EPERM;
-               return vpath_idx;
-       }
-
-       /* Select first vpath as catch-basin */
-       vpath_vector = vxge_mBIT(vpath->device_id);
-       status = vxge_hw_mgmt_reg_write(vpath->vdev->devh,
-                               vxge_hw_mgmt_reg_type_mrpcim,
-                               0,
-                               (ulong)offsetof(
-                                       struct vxge_hw_mrpcim_reg,
-                                       rts_mgr_cbasin_cfg),
-                               vpath_vector);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_tx(VXGE_ERR,
-                       "%s: Unable to set the vpath-%d in catch-basin mode",
-                       VXGE_DRIVER_NAME, vpath->device_id);
-               return -EPERM;
-       }
-
-       if (FALSE == vxge_mac_list_add(vpath, &mac_info))
-               return -EPERM;
-
-       return vpath_idx;
-}
-
-/**
- * vxge_xmit
- * @skb: the socket buffer containing the Tx data.
- * @dev: device pointer.
- *
- * This function is the Tx entry point of the driver. The Neterion NIC supports
- * certain protocol-assist features on the Tx side, namely CSO (checksum
- * offload), S/G (scatter/gather) and LSO (large send offload).
- */
-static netdev_tx_t
-vxge_xmit(struct sk_buff *skb, struct net_device *dev)
-{
-       struct vxge_fifo *fifo = NULL;
-       void *dtr_priv;
-       void *dtr = NULL;
-       struct vxgedev *vdev = NULL;
-       enum vxge_hw_status status;
-       int frg_cnt, first_frg_len;
-       skb_frag_t *frag;
-       int i = 0, j = 0, avail;
-       u64 dma_pointer;
-       struct vxge_tx_priv *txdl_priv = NULL;
-       struct __vxge_hw_fifo *fifo_hw;
-       int offload_type;
-       int vpath_no = 0;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-                       dev->name, __func__, __LINE__);
-
-       /* A buffer with no data will be dropped */
-       if (unlikely(skb->len <= 0)) {
-               vxge_debug_tx(VXGE_ERR,
-                       "%s: Buffer has no data...", dev->name);
-               dev_kfree_skb_any(skb);
-               return NETDEV_TX_OK;
-       }
-
-       vdev = netdev_priv(dev);
-
-       if (unlikely(!is_vxge_card_up(vdev))) {
-               vxge_debug_tx(VXGE_ERR,
-                       "%s: vdev not initialized", dev->name);
-               dev_kfree_skb_any(skb);
-               return NETDEV_TX_OK;
-       }
-
-       if (vdev->config.addr_learn_en) {
-               vpath_no = vxge_learn_mac(vdev, skb->data + ETH_ALEN);
-               if (vpath_no == -EPERM) {
-                       vxge_debug_tx(VXGE_ERR,
-                               "%s: Failed to store the mac address",
-                               dev->name);
-                       dev_kfree_skb_any(skb);
-                       return NETDEV_TX_OK;
-               }
-       }
-
-       if (vdev->config.tx_steering_type == TX_MULTIQ_STEERING)
-               vpath_no = skb_get_queue_mapping(skb);
-       else if (vdev->config.tx_steering_type == TX_PORT_STEERING)
-               vpath_no = vxge_get_vpath_no(vdev, skb);
-
-       vxge_debug_tx(VXGE_TRACE, "%s: vpath_no= %d", dev->name, vpath_no);
-
-       if (vpath_no >= vdev->no_of_vpath)
-               vpath_no = 0;
-
-       fifo = &vdev->vpaths[vpath_no].fifo;
-       fifo_hw = fifo->handle;
-
-       if (netif_tx_queue_stopped(fifo->txq))
-               return NETDEV_TX_BUSY;
-
-       avail = vxge_hw_fifo_free_txdl_count_get(fifo_hw);
-       if (avail == 0) {
-               vxge_debug_tx(VXGE_ERR,
-                       "%s: No free TXDs available", dev->name);
-               fifo->stats.txd_not_free++;
-               goto _exit0;
-       }
-
-       /* Last TXD?  Stop tx queue to avoid dropping packets.  TX
-        * completion will resume the queue.
-        */
-       if (avail == 1)
-               netif_tx_stop_queue(fifo->txq);
-
-       status = vxge_hw_fifo_txdl_reserve(fifo_hw, &dtr, &dtr_priv);
-       if (unlikely(status != VXGE_HW_OK)) {
-               vxge_debug_tx(VXGE_ERR,
-                       "%s: Out of descriptors.", dev->name);
-               fifo->stats.txd_out_of_desc++;
-               goto _exit0;
-       }
-
-       vxge_debug_tx(VXGE_TRACE,
-               "%s: %s:%d fifo_hw = %p dtr = %p dtr_priv = %p",
-               dev->name, __func__, __LINE__,
-               fifo_hw, dtr, dtr_priv);
-
-       if (skb_vlan_tag_present(skb)) {
-               u16 vlan_tag = skb_vlan_tag_get(skb);
-               vxge_hw_fifo_txdl_vlan_set(dtr, vlan_tag);
-       }
-
-       first_frg_len = skb_headlen(skb);
-
-       dma_pointer = dma_map_single(&fifo->pdev->dev, skb->data,
-                                    first_frg_len, DMA_TO_DEVICE);
-
-       if (unlikely(dma_mapping_error(&fifo->pdev->dev, dma_pointer))) {
-               vxge_hw_fifo_txdl_free(fifo_hw, dtr);
-               fifo->stats.pci_map_fail++;
-               goto _exit0;
-       }
-
-       txdl_priv = vxge_hw_fifo_txdl_private_get(dtr);
-       txdl_priv->skb = skb;
-       txdl_priv->dma_buffers[j] = dma_pointer;
-
-       frg_cnt = skb_shinfo(skb)->nr_frags;
-       vxge_debug_tx(VXGE_TRACE,
-                       "%s: %s:%d skb = %p txdl_priv = %p "
-                       "frag_cnt = %d dma_pointer = 0x%llx", dev->name,
-                       __func__, __LINE__, skb, txdl_priv,
-                       frg_cnt, (unsigned long long)dma_pointer);
-
-       vxge_hw_fifo_txdl_buffer_set(fifo_hw, dtr, j++, dma_pointer,
-               first_frg_len);
-
-       frag = &skb_shinfo(skb)->frags[0];
-       for (i = 0; i < frg_cnt; i++) {
-               /* ignore 0 length fragment */
-               if (!skb_frag_size(frag))
-                       continue;
-
-               dma_pointer = (u64)skb_frag_dma_map(&fifo->pdev->dev, frag,
-                                                   0, skb_frag_size(frag),
-                                                   DMA_TO_DEVICE);
-
-               if (unlikely(dma_mapping_error(&fifo->pdev->dev, dma_pointer)))
-                       goto _exit2;
-               vxge_debug_tx(VXGE_TRACE,
-                       "%s: %s:%d frag = %d dma_pointer = 0x%llx",
-                               dev->name, __func__, __LINE__, i,
-                               (unsigned long long)dma_pointer);
-
-               txdl_priv->dma_buffers[j] = dma_pointer;
-               vxge_hw_fifo_txdl_buffer_set(fifo_hw, dtr, j++, dma_pointer,
-                                       skb_frag_size(frag));
-               frag += 1;
-       }
-
-       offload_type = vxge_offload_type(skb);
-
-       if (offload_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {
-               int mss = vxge_tcp_mss(skb);
-               if (mss) {
-                       vxge_debug_tx(VXGE_TRACE, "%s: %s:%d mss = %d",
-                               dev->name, __func__, __LINE__, mss);
-                       vxge_hw_fifo_txdl_mss_set(dtr, mss);
-               } else {
-                       vxge_assert(skb->len <=
-                               dev->mtu + VXGE_HW_MAC_HEADER_MAX_SIZE);
-                       vxge_assert(0);
-                       goto _exit1;
-               }
-       }
-
-       if (skb->ip_summed == CHECKSUM_PARTIAL)
-               vxge_hw_fifo_txdl_cksum_set_bits(dtr,
-                                       VXGE_HW_FIFO_TXD_TX_CKO_IPV4_EN |
-                                       VXGE_HW_FIFO_TXD_TX_CKO_TCP_EN |
-                                       VXGE_HW_FIFO_TXD_TX_CKO_UDP_EN);
-
-       vxge_hw_fifo_txdl_post(fifo_hw, dtr);
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d  Exiting...",
-               dev->name, __func__, __LINE__);
-       return NETDEV_TX_OK;
-
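-/* Error unwind: dma_buffers[0] holds the mapping of the linear part of
- * the skb (released with dma_unmap_single()); dma_buffers[1]..[i - 1]
- * hold the fragment mappings released with dma_unmap_page() below.
- */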
-_exit2:
-       vxge_debug_tx(VXGE_TRACE, "%s: pci_map_page failed", dev->name);
-_exit1:
-       j = 0;
-       frag = &skb_shinfo(skb)->frags[0];
-
-       dma_unmap_single(&fifo->pdev->dev, txdl_priv->dma_buffers[j++],
-                        skb_headlen(skb), DMA_TO_DEVICE);
-
-       for (; j < i; j++) {
-               dma_unmap_page(&fifo->pdev->dev, txdl_priv->dma_buffers[j],
-                              skb_frag_size(frag), DMA_TO_DEVICE);
-               frag += 1;
-       }
-
-       vxge_hw_fifo_txdl_free(fifo_hw, dtr);
-_exit0:
-       netif_tx_stop_queue(fifo->txq);
-       dev_kfree_skb_any(skb);
-
-       return NETDEV_TX_OK;
-}
-
-/*
- * vxge_rx_term
- *
- * Called by the HW layer to abort all outstanding receive descriptors.
- */
-static void
-vxge_rx_term(void *dtrh, enum vxge_hw_rxd_state state, void *userdata)
-{
-       struct vxge_ring *ring = (struct vxge_ring *)userdata;
-       struct vxge_rx_priv *rx_priv =
-               vxge_hw_ring_rxd_private_get(dtrh);
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-                       ring->ndev->name, __func__, __LINE__);
-       if (state != VXGE_HW_RXD_STATE_POSTED)
-               return;
-
-       dma_unmap_single(&ring->pdev->dev, rx_priv->data_dma,
-                        rx_priv->data_size, DMA_FROM_DEVICE);
-
-       dev_kfree_skb(rx_priv->skb);
-       rx_priv->skb_data = NULL;
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d  Exiting...",
-               ring->ndev->name, __func__, __LINE__);
-}
-
-/*
- * vxge_tx_term
- *
- * Called by the HW layer to abort all outstanding Tx descriptors.
- */
-static void
-vxge_tx_term(void *dtrh, enum vxge_hw_txdl_state state, void *userdata)
-{
-       struct vxge_fifo *fifo = (struct vxge_fifo *)userdata;
-       skb_frag_t *frag;
-       int i = 0, j, frg_cnt;
-       struct vxge_tx_priv *txd_priv = vxge_hw_fifo_txdl_private_get(dtrh);
-       struct sk_buff *skb = txd_priv->skb;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s:%d", __func__, __LINE__);
-
-       if (state != VXGE_HW_TXDL_STATE_POSTED)
-               return;
-
-       /* check skb validity */
-       vxge_assert(skb);
-       frg_cnt = skb_shinfo(skb)->nr_frags;
-       frag = &skb_shinfo(skb)->frags[0];
-
-       /* For the unfragmented (linear) part of the skb */
-       dma_unmap_single(&fifo->pdev->dev, txd_priv->dma_buffers[i++],
-                        skb_headlen(skb), DMA_TO_DEVICE);
-
-       for (j = 0; j < frg_cnt; j++) {
-               dma_unmap_page(&fifo->pdev->dev, txd_priv->dma_buffers[i++],
-                              skb_frag_size(frag), DMA_TO_DEVICE);
-               frag += 1;
-       }
-
-       dev_kfree_skb(skb);
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s:%d  Exiting...", __func__, __LINE__);
-}
-
-static int vxge_mac_list_del(struct vxge_vpath *vpath, struct macInfo *mac)
-{
-       struct list_head *entry, *next;
-       u64 del_mac = 0;
-       u8 *mac_address = (u8 *) (&del_mac);
-
-       /* Copy the mac address to delete from the list */
-       memcpy(mac_address, mac->macaddr, ETH_ALEN);
-
-       list_for_each_safe(entry, next, &vpath->mac_addr_list) {
-               if (((struct vxge_mac_addrs *)entry)->macaddr == del_mac) {
-                       list_del(entry);
-                       kfree(entry);
-                       vpath->mac_addr_cnt--;
-
-                       if (is_multicast_ether_addr(mac->macaddr))
-                               vpath->mcast_addr_cnt--;
-                       return TRUE;
-               }
-       }
-
-       return FALSE;
-}
-
-/* delete a mac address from DA table */
-static enum vxge_hw_status
-vxge_del_mac_addr(struct vxgedev *vdev, struct macInfo *mac)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_vpath *vpath;
-
-       vpath = &vdev->vpaths[mac->vpath_no];
-       status = vxge_hw_vpath_mac_addr_delete(vpath->handle, mac->macaddr,
-                                               mac->macmask);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "DA config delete entry failed for vpath:%d",
-                       vpath->device_id);
-       } else
-               vxge_mac_list_del(vpath, mac);
-       return status;
-}
-
-/**
- * vxge_set_multicast
- * @dev: pointer to the device structure
- *
- * Entry point for multicast address enable/disable
- * This function is a driver entry point which gets called by the kernel
- * whenever multicast addresses must be enabled/disabled. It also gets
- * called to set/reset promiscuous mode. Depending on the device flags, we
- * determine whether multicast addresses must be enabled or whether
- * promiscuous mode is to be enabled or disabled, etc.
- */
-static void vxge_set_multicast(struct net_device *dev)
-{
-       struct netdev_hw_addr *ha;
-       struct vxgedev *vdev;
-       int i, mcast_cnt = 0;
-       struct vxge_vpath *vpath;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct macInfo mac_info;
-       int vpath_idx = 0;
-       struct vxge_mac_addrs *mac_entry;
-       struct list_head *list_head;
-       struct list_head *entry, *next;
-       u8 *mac_address = NULL;
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s:%d", __func__, __LINE__);
-
-       vdev = netdev_priv(dev);
-
-       if (unlikely(!is_vxge_card_up(vdev)))
-               return;
-
-       if ((dev->flags & IFF_ALLMULTI) && (!vdev->all_multi_flg)) {
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vpath = &vdev->vpaths[i];
-                       vxge_assert(vpath->is_open);
-                       status = vxge_hw_vpath_mcast_enable(vpath->handle);
-                       if (status != VXGE_HW_OK)
-                               vxge_debug_init(VXGE_ERR, "failed to enable "
-                                               "multicast, status %d", status);
-                       vdev->all_multi_flg = 1;
-               }
-       } else if (!(dev->flags & IFF_ALLMULTI) && (vdev->all_multi_flg)) {
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vpath = &vdev->vpaths[i];
-                       vxge_assert(vpath->is_open);
-                       status = vxge_hw_vpath_mcast_disable(vpath->handle);
-                       if (status != VXGE_HW_OK)
-                               vxge_debug_init(VXGE_ERR, "failed to disable "
-                                               "multicast, status %d", status);
-                       vdev->all_multi_flg = 0;
-               }
-       }
-
-       if (!vdev->config.addr_learn_en) {
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vpath = &vdev->vpaths[i];
-                       vxge_assert(vpath->is_open);
-
-                       if (dev->flags & IFF_PROMISC)
-                               status = vxge_hw_vpath_promisc_enable(
-                                       vpath->handle);
-                       else
-                               status = vxge_hw_vpath_promisc_disable(
-                                       vpath->handle);
-                       if (status != VXGE_HW_OK)
-                               vxge_debug_init(VXGE_ERR,
-                                       "failed to %s promisc, status %d",
-                                       dev->flags & IFF_PROMISC ?
-                                       "enable" : "disable", status);
-               }
-       }
-
-       memset(&mac_info, 0, sizeof(struct macInfo));
-       /* Update the individual multicast address list */
-       if ((!vdev->all_multi_flg) && netdev_mc_count(dev)) {
-               mcast_cnt = vdev->vpaths[0].mcast_addr_cnt;
-               list_head = &vdev->vpaths[0].mac_addr_list;
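-               /* If the new multicast list plus the unicast entries
-                * already programmed would overflow vpath 0's DA table,
-                * give up and switch to all-multicast mode instead.
-                */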
-               if ((netdev_mc_count(dev) +
-                       (vdev->vpaths[0].mac_addr_cnt - mcast_cnt)) >
-                               vdev->vpaths[0].max_mac_addr_cnt)
-                       goto _set_all_mcast;
-
-               /* Delete previous MC's */
-               for (i = 0; i < mcast_cnt; i++) {
-                       list_for_each_safe(entry, next, list_head) {
-                               mac_entry = (struct vxge_mac_addrs *)entry;
-                               /* Copy the mac address to delete */
-                               mac_address = (u8 *)&mac_entry->macaddr;
-                               memcpy(mac_info.macaddr, mac_address, ETH_ALEN);
-
-                               if (is_multicast_ether_addr(mac_info.macaddr)) {
-                                       for (vpath_idx = 0; vpath_idx <
-                                               vdev->no_of_vpath;
-                                               vpath_idx++) {
-                                               mac_info.vpath_no = vpath_idx;
-                                               status = vxge_del_mac_addr(
-                                                               vdev,
-                                                               &mac_info);
-                                       }
-                               }
-                       }
-               }
-
-               /* Add new ones */
-               netdev_for_each_mc_addr(ha, dev) {
-                       memcpy(mac_info.macaddr, ha->addr, ETH_ALEN);
-                       for (vpath_idx = 0; vpath_idx < vdev->no_of_vpath;
-                                       vpath_idx++) {
-                               mac_info.vpath_no = vpath_idx;
-                               mac_info.state = VXGE_LL_MAC_ADDR_IN_DA_TABLE;
-                               status = vxge_add_mac_addr(vdev, &mac_info);
-                               if (status != VXGE_HW_OK) {
-                                       vxge_debug_init(VXGE_ERR,
-                                               "%s:%d Setting individual "
-                                               "multicast address failed",
-                                               __func__, __LINE__);
-                                       goto _set_all_mcast;
-                               }
-                       }
-               }
-
-               return;
-_set_all_mcast:
-               mcast_cnt = vdev->vpaths[0].mcast_addr_cnt;
-               /* Delete previous MC's */
-               for (i = 0; i < mcast_cnt; i++) {
-                       list_for_each_safe(entry, next, list_head) {
-                               mac_entry = (struct vxge_mac_addrs *)entry;
-                               /* Copy the mac address to delete */
-                               mac_address = (u8 *)&mac_entry->macaddr;
-                               memcpy(mac_info.macaddr, mac_address, ETH_ALEN);
-
-                               if (is_multicast_ether_addr(mac_info.macaddr))
-                                       break;
-                       }
-
-                       for (vpath_idx = 0; vpath_idx < vdev->no_of_vpath;
-                                       vpath_idx++) {
-                               mac_info.vpath_no = vpath_idx;
-                               status = vxge_del_mac_addr(vdev, &mac_info);
-                       }
-               }
-
-               /* Enable all multicast */
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vpath = &vdev->vpaths[i];
-                       vxge_assert(vpath->is_open);
-
-                       status = vxge_hw_vpath_mcast_enable(vpath->handle);
-                       if (status != VXGE_HW_OK) {
-                               vxge_debug_init(VXGE_ERR,
-                                       "%s:%d Enabling all multicasts failed",
-                                        __func__, __LINE__);
-                       }
-                       vdev->all_multi_flg = 1;
-               }
-               dev->flags |= IFF_ALLMULTI;
-       }
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s:%d  Exiting...", __func__, __LINE__);
-}
-
-/**
- * vxge_set_mac_addr
- * @dev: pointer to the device structure
- * @p: socket info
- *
- * Update entry "0" (default MAC addr)
- */
-static int vxge_set_mac_addr(struct net_device *dev, void *p)
-{
-       struct sockaddr *addr = p;
-       struct vxgedev *vdev;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct macInfo mac_info_new, mac_info_old;
-       int vpath_idx = 0;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s:%d", __func__, __LINE__);
-
-       vdev = netdev_priv(dev);
-
-       if (!is_valid_ether_addr(addr->sa_data))
-               return -EINVAL;
-
-       memset(&mac_info_new, 0, sizeof(struct macInfo));
-       memset(&mac_info_old, 0, sizeof(struct macInfo));
-
-       /* Get the old address */
-       memcpy(mac_info_old.macaddr, dev->dev_addr, dev->addr_len);
-
-       /* Copy the new address */
-       memcpy(mac_info_new.macaddr, addr->sa_data, dev->addr_len);
-
-       /* First delete the old mac address from all the vpaths,
-        * as we can't specify the index while adding a new mac address.
-        */
-       for (vpath_idx = 0; vpath_idx < vdev->no_of_vpath; vpath_idx++) {
-               struct vxge_vpath *vpath = &vdev->vpaths[vpath_idx];
-               if (!vpath->is_open) {
-                       /* This can happen when this interface is added to or
-                        * removed from a bonding interface. Delete this
-                        * station address from the linked list.
-                        */
-                       vxge_mac_list_del(vpath, &mac_info_old);
-
-                       /* Add this new address to the linked list
-                       for later restoring */
-                       vxge_mac_list_add(vpath, &mac_info_new);
-
-                       continue;
-               }
-               /* Delete the station address */
-               mac_info_old.vpath_no = vpath_idx;
-               status = vxge_del_mac_addr(vdev, &mac_info_old);
-       }
-
-       if (unlikely(!is_vxge_card_up(vdev))) {
-               eth_hw_addr_set(dev, addr->sa_data);
-               return VXGE_HW_OK;
-       }
-
-       /* Set this mac address to all the vpaths */
-       for (vpath_idx = 0; vpath_idx < vdev->no_of_vpath; vpath_idx++) {
-               mac_info_new.vpath_no = vpath_idx;
-               mac_info_new.state = VXGE_LL_MAC_ADDR_IN_DA_TABLE;
-               status = vxge_add_mac_addr(vdev, &mac_info_new);
-               if (status != VXGE_HW_OK)
-                       return -EINVAL;
-       }
-
-       eth_hw_addr_set(dev, addr->sa_data);
-
-       return status;
-}
-
-/*
- * vxge_vpath_intr_enable
- * @vdev: pointer to vdev
- * @vp_id: vpath for which to enable the interrupts
- *
- * Enables the interrupts for the vpath
-*/
-static void vxge_vpath_intr_enable(struct vxgedev *vdev, int vp_id)
-{
-       struct vxge_vpath *vpath = &vdev->vpaths[vp_id];
-       int msix_id = 0;
-       int tim_msix_id[4] = {0, 1, 0, 0};
-       int alarm_msix_id = VXGE_ALARM_MSIX_ID;
-
-       vxge_hw_vpath_intr_enable(vpath->handle);
-
-       if (vdev->config.intr_type == INTA)
-               vxge_hw_vpath_inta_unmask_tx_rx(vpath->handle);
-       else {
-               vxge_hw_vpath_msix_set(vpath->handle, tim_msix_id,
-                       alarm_msix_id);
-
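-               /* Each vpath owns VXGE_HW_VPATH_MSIX_ACTIVE vector slots:
-                * slot 0 carries Tx (fifo) and slot 1 carries Rx (ring)
-                * interrupts, per the tim_msix_id mapping above. The alarm
-                * vector is shared and lives in the first vpath's block.
-                */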
-               msix_id = vpath->device_id * VXGE_HW_VPATH_MSIX_ACTIVE;
-               vxge_hw_vpath_msix_unmask(vpath->handle, msix_id);
-               vxge_hw_vpath_msix_unmask(vpath->handle, msix_id + 1);
-
-               /* enable the alarm vector */
-               msix_id = (vpath->handle->vpath->hldev->first_vp_id *
-                       VXGE_HW_VPATH_MSIX_ACTIVE) + alarm_msix_id;
-               vxge_hw_vpath_msix_unmask(vpath->handle, msix_id);
-       }
-}
-
-/*
- * vxge_vpath_intr_disable
- * @vdev: pointer to vdev
- * @vp_id: vpath for which to disable the interrupts
- *
- * Disables the interrupts for the vpath
-*/
-static void vxge_vpath_intr_disable(struct vxgedev *vdev, int vp_id)
-{
-       struct vxge_vpath *vpath = &vdev->vpaths[vp_id];
-       struct __vxge_hw_device *hldev;
-       int msix_id;
-
-       hldev = pci_get_drvdata(vdev->pdev);
-
-       vxge_hw_vpath_wait_receive_idle(hldev, vpath->device_id);
-
-       vxge_hw_vpath_intr_disable(vpath->handle);
-
-       if (vdev->config.intr_type == INTA)
-               vxge_hw_vpath_inta_mask_tx_rx(vpath->handle);
-       else {
-               msix_id = vpath->device_id * VXGE_HW_VPATH_MSIX_ACTIVE;
-               vxge_hw_vpath_msix_mask(vpath->handle, msix_id);
-               vxge_hw_vpath_msix_mask(vpath->handle, msix_id + 1);
-
-               /* disable the alarm vector */
-               msix_id = (vpath->handle->vpath->hldev->first_vp_id *
-                       VXGE_HW_VPATH_MSIX_ACTIVE) + VXGE_ALARM_MSIX_ID;
-               vxge_hw_vpath_msix_mask(vpath->handle, msix_id);
-       }
-}
-
-/* Search for a mac address in the DA table */
-static enum vxge_hw_status
-vxge_search_mac_addr_in_da_table(struct vxge_vpath *vpath, struct macInfo *mac)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       unsigned char macmask[ETH_ALEN];
-       unsigned char macaddr[ETH_ALEN];
-
-       status = vxge_hw_vpath_mac_addr_get(vpath->handle,
-                               macaddr, macmask);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "DA config list entry failed for vpath:%d",
-                       vpath->device_id);
-               return status;
-       }
-
-       while (!ether_addr_equal(mac->macaddr, macaddr)) {
-               status = vxge_hw_vpath_mac_addr_get_next(vpath->handle,
-                               macaddr, macmask);
-               if (status != VXGE_HW_OK)
-                       break;
-       }
-
-       return status;
-}
-
-/* Store all mac addresses from the list to the DA table */
-static enum vxge_hw_status vxge_restore_vpath_mac_addr(struct vxge_vpath *vpath)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct macInfo mac_info;
-       u8 *mac_address = NULL;
-       struct list_head *entry, *next;
-
-       memset(&mac_info, 0, sizeof(struct macInfo));
-
-       if (vpath->is_open) {
-               list_for_each_safe(entry, next, &vpath->mac_addr_list) {
-                       mac_address =
-                               (u8 *)&((struct vxge_mac_addrs *)entry)->macaddr;
-                       memcpy(mac_info.macaddr, mac_address, ETH_ALEN);
-                       ((struct vxge_mac_addrs *)entry)->state =
-                               VXGE_LL_MAC_ADDR_IN_DA_TABLE;
-                       /* does this mac address already exist in da table? */
-                       status = vxge_search_mac_addr_in_da_table(vpath,
-                               &mac_info);
-                       if (status != VXGE_HW_OK) {
-                               /* Add this mac address to the DA table */
-                               status = vxge_hw_vpath_mac_addr_add(
-                                       vpath->handle, mac_info.macaddr,
-                                       mac_info.macmask,
-                                   VXGE_HW_VPATH_MAC_ADDR_ADD_DUPLICATE);
-                               if (status != VXGE_HW_OK) {
-                                       vxge_debug_init(VXGE_ERR,
-                                           "DA add entry failed for vpath:%d",
-                                           vpath->device_id);
-                                       ((struct vxge_mac_addrs *)entry)->state
-                                               = VXGE_LL_MAC_ADDR_IN_LIST;
-                               }
-                       }
-               }
-       }
-
-       return status;
-}
-
-/* Store all vlan ids from the list to the vid table */
-static enum vxge_hw_status
-vxge_restore_vpath_vid_table(struct vxge_vpath *vpath)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxgedev *vdev = vpath->vdev;
-       u16 vid;
-
-       if (!vpath->is_open)
-               return status;
-
-       for_each_set_bit(vid, vdev->active_vlans, VLAN_N_VID)
-               status = vxge_hw_vpath_vid_add(vpath->handle, vid);
-
-       return status;
-}
-
-/*
- * vxge_reset_vpath
- * @vdev: pointer to vdev
- * @vp_id: vpath to reset
- *
- * Resets the vpath
-*/
-static int vxge_reset_vpath(struct vxgedev *vdev, int vp_id)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_vpath *vpath = &vdev->vpaths[vp_id];
-       int ret = 0;
-
-       /* check if device is down already */
-       if (unlikely(!is_vxge_card_up(vdev)))
-               return 0;
-
-       /* is device reset already scheduled */
-       if (test_bit(__VXGE_STATE_RESET_CARD, &vdev->state))
-               return 0;
-
-       if (vpath->handle) {
-               if (vxge_hw_vpath_reset(vpath->handle) == VXGE_HW_OK) {
-                       if (is_vxge_card_up(vdev) &&
-                               vxge_hw_vpath_recover_from_reset(vpath->handle)
-                                       != VXGE_HW_OK) {
-                               vxge_debug_init(VXGE_ERR,
-                                       "vxge_hw_vpath_recover_from_reset "
-                                       "failed for vpath:%d", vp_id);
-                               return status;
-                       }
-               } else {
-                       vxge_debug_init(VXGE_ERR,
-                               "vxge_hw_vpath_reset failed for "
-                               "vpath:%d", vp_id);
-                       return status;
-               }
-       } else
-               return VXGE_HW_FAIL;
-
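-       /* After the reset, reprogram the hardware DA and VLAN tables from
-        * the driver's cached MAC address and VLAN id lists.
-        */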
-       vxge_restore_vpath_mac_addr(vpath);
-       vxge_restore_vpath_vid_table(vpath);
-
-       /* Enable all broadcast */
-       vxge_hw_vpath_bcast_enable(vpath->handle);
-
-       /* Enable all multicast */
-       if (vdev->all_multi_flg) {
-               status = vxge_hw_vpath_mcast_enable(vpath->handle);
-               if (status != VXGE_HW_OK)
-                       vxge_debug_init(VXGE_ERR,
-                               "%s:%d Enabling multicast failed",
-                               __func__, __LINE__);
-       }
-
-       /* Enable the interrupts */
-       vxge_vpath_intr_enable(vdev, vp_id);
-
-       smp_wmb();
-
-       /* Enable the flow of traffic through the vpath */
-       vxge_hw_vpath_enable(vpath->handle);
-
-       smp_wmb();
-       vxge_hw_vpath_rx_doorbell_init(vpath->handle);
-       vpath->ring.last_status = VXGE_HW_OK;
-
-       /* Vpath reset done */
-       clear_bit(vp_id, &vdev->vp_reset);
-
-       /* Start the vpath queue */
-       if (netif_tx_queue_stopped(vpath->fifo.txq))
-               netif_tx_wake_queue(vpath->fifo.txq);
-
-       return ret;
-}
-
-/* Configure CI */
-static void vxge_config_ci_for_tti_rti(struct vxgedev *vdev)
-{
-       int i = 0;
-
-       /* Enable CI for RTI */
-       if (vdev->config.intr_type == MSI_X) {
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       struct __vxge_hw_ring *hw_ring;
-
-                       hw_ring = vdev->vpaths[i].ring.handle;
-                       vxge_hw_vpath_dynamic_rti_ci_set(hw_ring);
-               }
-       }
-
-       /* Enable CI for TTI */
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               struct __vxge_hw_fifo *hw_fifo = vdev->vpaths[i].fifo.handle;
-               vxge_hw_vpath_tti_ci_set(hw_fifo);
-               /*
-                * For INTA (with or without NAPI), set CI ON for only one
-                * vpath (there is only one free-running timer).
-                */
-               if ((vdev->config.intr_type == INTA) && (i == 0))
-                       break;
-       }
-}
-
-static int do_vxge_reset(struct vxgedev *vdev, int event)
-{
-       int ret = 0, vp_id, i;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s:%d", __func__, __LINE__);
-
-       if ((event == VXGE_LL_FULL_RESET) || (event == VXGE_LL_START_RESET)) {
-               /* check if device is down already */
-               if (unlikely(!is_vxge_card_up(vdev)))
-                       return 0;
-
-               /* is reset already scheduled */
-               if (test_and_set_bit(__VXGE_STATE_RESET_CARD, &vdev->state))
-                       return 0;
-       }
-
-       if (event == VXGE_LL_FULL_RESET) {
-               netif_carrier_off(vdev->ndev);
-
-               /* wait for all the vpath reset to complete */
-               for (vp_id = 0; vp_id < vdev->no_of_vpath; vp_id++) {
-                       while (test_bit(vp_id, &vdev->vp_reset))
-                               msleep(50);
-               }
-
-               netif_carrier_on(vdev->ndev);
-
-               /* if execution mode is set to debug, don't reset the adapter */
-               if (unlikely(vdev->exec_mode)) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: execution mode is debug, returning...",
-                               vdev->ndev->name);
-                       clear_bit(__VXGE_STATE_CARD_UP, &vdev->state);
-                       netif_tx_stop_all_queues(vdev->ndev);
-                       return 0;
-               }
-       }
-
-       if (event == VXGE_LL_FULL_RESET) {
-               vxge_hw_device_wait_receive_idle(vdev->devh);
-               vxge_hw_device_intr_disable(vdev->devh);
-
-               switch (vdev->cric_err_event) {
-               case VXGE_HW_EVENT_UNKNOWN:
-                       netif_tx_stop_all_queues(vdev->ndev);
-                       vxge_debug_init(VXGE_ERR,
-                               "fatal: %s: Disabling device due to "
-                               "unknown error",
-                               vdev->ndev->name);
-                       ret = -EPERM;
-                       goto out;
-               case VXGE_HW_EVENT_RESET_START:
-                       break;
-               case VXGE_HW_EVENT_RESET_COMPLETE:
-               case VXGE_HW_EVENT_LINK_DOWN:
-               case VXGE_HW_EVENT_LINK_UP:
-               case VXGE_HW_EVENT_ALARM_CLEARED:
-               case VXGE_HW_EVENT_ECCERR:
-               case VXGE_HW_EVENT_MRPCIM_ECCERR:
-                       ret = -EPERM;
-                       goto out;
-               case VXGE_HW_EVENT_FIFO_ERR:
-               case VXGE_HW_EVENT_VPATH_ERR:
-                       break;
-               case VXGE_HW_EVENT_CRITICAL_ERR:
-                       netif_tx_stop_all_queues(vdev->ndev);
-                       vxge_debug_init(VXGE_ERR,
-                               "fatal: %s: Disabling device due to "
-                               "serious error",
-                               vdev->ndev->name);
-                       /* SOP or device reset required */
-                       /* This event is not currently used */
-                       ret = -EPERM;
-                       goto out;
-               case VXGE_HW_EVENT_SERR:
-                       netif_tx_stop_all_queues(vdev->ndev);
-                       vxge_debug_init(VXGE_ERR,
-                               "fatal: %s: Disabling device due to "
-                               "serious error",
-                               vdev->ndev->name);
-                       ret = -EPERM;
-                       goto out;
-               case VXGE_HW_EVENT_SRPCIM_SERR:
-               case VXGE_HW_EVENT_MRPCIM_SERR:
-                       ret = -EPERM;
-                       goto out;
-               case VXGE_HW_EVENT_SLOT_FREEZE:
-                       netif_tx_stop_all_queues(vdev->ndev);
-                       vxge_debug_init(VXGE_ERR,
-                               "fatal: %s: Disabling device due to "
-                               "slot freeze",
-                               vdev->ndev->name);
-                       ret = -EPERM;
-                       goto out;
-               default:
-                       break;
-
-               }
-       }
-
-       if ((event == VXGE_LL_FULL_RESET) || (event == VXGE_LL_START_RESET))
-               netif_tx_stop_all_queues(vdev->ndev);
-
-       if (event == VXGE_LL_FULL_RESET)
-               vxge_reset_all_vpaths(vdev);
-
-       if (event == VXGE_LL_COMPL_RESET) {
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       if (vdev->vpaths[i].handle) {
-                               if (vxge_hw_vpath_recover_from_reset(
-                                       vdev->vpaths[i].handle) != VXGE_HW_OK) {
-                                       vxge_debug_init(VXGE_ERR,
-                                               "vxge_hw_vpath_recover_"
-                                               "from_reset failed for vpath: "
-                                               "%d", i);
-                                       ret = -EPERM;
-                                       goto out;
-                               }
-                       } else {
-                               vxge_debug_init(VXGE_ERR,
-                                       "vxge_hw_vpath_reset failed for "
-                                       "vpath:%d", i);
-                               ret = -EPERM;
-                               goto out;
-                       }
-               }
-       }
-
-       if ((event == VXGE_LL_FULL_RESET) || (event == VXGE_LL_COMPL_RESET)) {
-               /* Reprogram the DA table with populated mac addresses */
-               for (vp_id = 0; vp_id < vdev->no_of_vpath; vp_id++) {
-                       vxge_restore_vpath_mac_addr(&vdev->vpaths[vp_id]);
-                       vxge_restore_vpath_vid_table(&vdev->vpaths[vp_id]);
-               }
-
-               /* enable vpath interrupts */
-               for (i = 0; i < vdev->no_of_vpath; i++)
-                       vxge_vpath_intr_enable(vdev, i);
-
-               vxge_hw_device_intr_enable(vdev->devh);
-
-               smp_wmb();
-
-               /* Indicate card up */
-               set_bit(__VXGE_STATE_CARD_UP, &vdev->state);
-
-               /* Get the traffic to flow through the vpaths */
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vxge_hw_vpath_enable(vdev->vpaths[i].handle);
-                       smp_wmb();
-                       vxge_hw_vpath_rx_doorbell_init(vdev->vpaths[i].handle);
-               }
-
-               netif_tx_wake_all_queues(vdev->ndev);
-       }
-
-       /* configure CI */
-       vxge_config_ci_for_tti_rti(vdev);
-
-out:
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s:%d  Exiting...", __func__, __LINE__);
-
-       /* Indicate reset done */
-       if ((event == VXGE_LL_FULL_RESET) || (event == VXGE_LL_COMPL_RESET))
-               clear_bit(__VXGE_STATE_RESET_CARD, &vdev->state);
-       return ret;
-}
-
-/*
- * vxge_reset
- * @vdev: pointer to ll device
- *
- * driver may reset the chip on events of serr, eccerr, etc
- */
-static void vxge_reset(struct work_struct *work)
-{
-       struct vxgedev *vdev = container_of(work, struct vxgedev, reset_task);
-
-       if (!netif_running(vdev->ndev))
-               return;
-
-       do_vxge_reset(vdev, VXGE_LL_FULL_RESET);
-}
-
-/**
- * vxge_poll_msix - Receive handler when Receive Polling is used.
- * @napi: pointer to the napi structure.
- * @budget: Number of packets budgeted to be processed in this iteration.
- *
- * This function comes into the picture only if the receive side is being
- * handled through polling (known as NAPI in Linux). It mostly does what the
- * normal Rx interrupt handler does in terms of descriptor and packet
- * processing, but not in an interrupt context. It also processes at most a
- * specified number of packets in one iteration; this value is passed down
- * by the kernel as the function argument 'budget'.
- */
-static int vxge_poll_msix(struct napi_struct *napi, int budget)
-{
-       struct vxge_ring *ring = container_of(napi, struct vxge_ring, napi);
-       int pkts_processed;
-       int budget_org = budget;
-
-       ring->budget = budget;
-       ring->pkts_processed = 0;
-       vxge_hw_vpath_poll_rx(ring->handle);
-       pkts_processed = ring->pkts_processed;
-
-       if (pkts_processed < budget_org) {
-               napi_complete_done(napi, pkts_processed);
-
-               /* Re-enable the Rx interrupts for the vpath */
-               vxge_hw_channel_msix_unmask(
-                               (struct __vxge_hw_channel *)ring->handle,
-                               ring->rx_vector_no);
-       }
-
-       /* Return the local copy: once the MSI-X vector is unmasked above,
-        * the interrupt can fire right away and preempt this NAPI thread,
-        * updating ring->pkts_processed under us.
-        */
-       return pkts_processed;
-}
-
-static int vxge_poll_inta(struct napi_struct *napi, int budget)
-{
-       struct vxgedev *vdev = container_of(napi, struct vxgedev, napi);
-       int pkts_processed = 0;
-       int i;
-       int budget_org = budget;
-       struct vxge_ring *ring;
-
-       struct __vxge_hw_device *hldev = pci_get_drvdata(vdev->pdev);
-
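-       /* In INTA mode a single NAPI context services every vpath's ring,
-        * so the budget is split across the rings until it is exhausted.
-        */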
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               ring = &vdev->vpaths[i].ring;
-               ring->budget = budget;
-               ring->pkts_processed = 0;
-               vxge_hw_vpath_poll_rx(ring->handle);
-               pkts_processed += ring->pkts_processed;
-               budget -= ring->pkts_processed;
-               if (budget <= 0)
-                       break;
-       }
-
-       VXGE_COMPLETE_ALL_TX(vdev);
-
-       if (pkts_processed < budget_org) {
-               napi_complete_done(napi, pkts_processed);
-               /* Re-enable the Rx interrupts for the ring */
-               vxge_hw_device_unmask_all(hldev);
-               vxge_hw_device_flush_io(hldev);
-       }
-
-       return pkts_processed;
-}
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-/**
- * vxge_netpoll - netpoll event handler entry point
- * @dev : pointer to the device structure.
- * Description:
- *      This function will be called by the upper layer to check for events
- * on the interface in situations where interrupts are disabled. It is used
- * for specific in-kernel networking tasks, such as remote consoles and
- * kernel debugging over the network (for example, netdump in Red Hat).
- */
-static void vxge_netpoll(struct net_device *dev)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct pci_dev *pdev = vdev->pdev;
-       struct __vxge_hw_device *hldev = pci_get_drvdata(pdev);
-       const int irq = pdev->irq;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s:%d", __func__, __LINE__);
-
-       if (pci_channel_offline(pdev))
-               return;
-
-       disable_irq(irq);
-       vxge_hw_device_clear_tx_rx(hldev);
-       VXGE_COMPLETE_ALL_RX(vdev);
-       VXGE_COMPLETE_ALL_TX(vdev);
-
-       enable_irq(irq);
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s:%d  Exiting...", __func__, __LINE__);
-}
-#endif
-
-/* RTH configuration */
-static enum vxge_hw_status vxge_rth_configure(struct vxgedev *vdev)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_hw_rth_hash_types hash_types;
-       u8 itable[256] = {0}; /* indirection table */
-       u8 mtable[256] = {0}; /* CPU to vpath mapping  */
-       int index;
-
-       /*
-        * Filling
-        *      - itable with bucket numbers
-        *      - mtable with bucket-to-vpath mapping
-        */
-       for (index = 0; index < (1 << vdev->config.rth_bkt_sz); index++) {
-               itable[index] = index;
-               mtable[index] = index % vdev->no_of_vpath;
-       }
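-       /* For example, with rth_bkt_sz = 2 and no_of_vpath = 3 this fills
-        * four buckets: itable = {0, 1, 2, 3} and mtable = {0, 1, 2, 0}.
-        */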
-
-       /* set indirection table, bucket-to-vpath mapping */
-       status = vxge_hw_vpath_rts_rth_itable_set(vdev->vp_handles,
-                                               vdev->no_of_vpath,
-                                               mtable, itable,
-                                               vdev->config.rth_bkt_sz);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "RTH indirection table configuration failed "
-                       "for vpath:%d", vdev->vpaths[0].device_id);
-               return status;
-       }
-
-       /* Fill RTH hash types */
-       hash_types.hash_type_tcpipv4_en   = vdev->config.rth_hash_type_tcpipv4;
-       hash_types.hash_type_ipv4_en      = vdev->config.rth_hash_type_ipv4;
-       hash_types.hash_type_tcpipv6_en   = vdev->config.rth_hash_type_tcpipv6;
-       hash_types.hash_type_ipv6_en      = vdev->config.rth_hash_type_ipv6;
-       hash_types.hash_type_tcpipv6ex_en =
-                                       vdev->config.rth_hash_type_tcpipv6ex;
-       hash_types.hash_type_ipv6ex_en    = vdev->config.rth_hash_type_ipv6ex;
-
-       /*
-        * Because the itable_set() method uses the active_table field
-        * for the target virtual path the RTH config should be updated
-        * for all VPATHs. The h/w only uses the lowest numbered VPATH
-        * when steering frames.
-        */
-       for (index = 0; index < vdev->no_of_vpath; index++) {
-               status = vxge_hw_vpath_rts_rth_set(
-                               vdev->vpaths[index].handle,
-                               vdev->config.rth_algorithm,
-                               &hash_types,
-                               vdev->config.rth_bkt_sz);
-               if (status != VXGE_HW_OK) {
-                       vxge_debug_init(VXGE_ERR,
-                               "RTH configuration failed for vpath:%d",
-                               vdev->vpaths[index].device_id);
-                       return status;
-               }
-       }
-
-       return status;
-}
-
-/* reset vpaths */
-static void vxge_reset_all_vpaths(struct vxgedev *vdev)
-{
-       struct vxge_vpath *vpath;
-       int i;
-
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               vpath = &vdev->vpaths[i];
-               if (vpath->handle) {
-                       if (vxge_hw_vpath_reset(vpath->handle) == VXGE_HW_OK) {
-                               if (is_vxge_card_up(vdev) &&
-                                       vxge_hw_vpath_recover_from_reset(
-                                               vpath->handle) != VXGE_HW_OK) {
-                                       vxge_debug_init(VXGE_ERR,
-                                               "vxge_hw_vpath_recover_"
-                                               "from_reset failed for vpath: "
-                                               "%d", i);
-                                       return;
-                               }
-                       } else {
-                               vxge_debug_init(VXGE_ERR,
-                                       "vxge_hw_vpath_reset failed for "
-                                       "vpath:%d", i);
-                               return;
-                       }
-               }
-       }
-}
-
-/* close vpaths */
-static void vxge_close_vpaths(struct vxgedev *vdev, int index)
-{
-       struct vxge_vpath *vpath;
-       int i;
-
-       for (i = index; i < vdev->no_of_vpath; i++) {
-               vpath = &vdev->vpaths[i];
-
-               if (vpath->handle && vpath->is_open) {
-                       vxge_hw_vpath_close(vpath->handle);
-                       vdev->stats.vpaths_open--;
-               }
-               vpath->is_open = 0;
-               vpath->handle = NULL;
-       }
-}
-
-/* open vpaths */
-static int vxge_open_vpaths(struct vxgedev *vdev)
-{
-       struct vxge_hw_vpath_attr attr;
-       enum vxge_hw_status status;
-       struct vxge_vpath *vpath;
-       u32 vp_id = 0;
-       int i;
-
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               vpath = &vdev->vpaths[i];
-               vxge_assert(vpath->is_configured);
-
-               if (!vdev->titan1) {
-                       struct vxge_hw_vp_config *vcfg;
-                       vcfg = &vdev->devh->config.vp_config[vpath->device_id];
-
-                       vcfg->rti.urange_a = RTI_T1A_RX_URANGE_A;
-                       vcfg->rti.urange_b = RTI_T1A_RX_URANGE_B;
-                       vcfg->rti.urange_c = RTI_T1A_RX_URANGE_C;
-                       vcfg->tti.uec_a = TTI_T1A_TX_UFC_A;
-                       vcfg->tti.uec_b = TTI_T1A_TX_UFC_B;
-                       vcfg->tti.uec_c = TTI_T1A_TX_UFC_C(vdev->mtu);
-                       vcfg->tti.uec_d = TTI_T1A_TX_UFC_D(vdev->mtu);
-                       vcfg->tti.ltimer_val = VXGE_T1A_TTI_LTIMER_VAL;
-                       vcfg->tti.rtimer_val = VXGE_T1A_TTI_RTIMER_VAL;
-               }
-
-               attr.vp_id = vpath->device_id;
-               attr.fifo_attr.callback = vxge_xmit_compl;
-               attr.fifo_attr.txdl_term = vxge_tx_term;
-               attr.fifo_attr.per_txdl_space = sizeof(struct vxge_tx_priv);
-               attr.fifo_attr.userdata = &vpath->fifo;
-
-               attr.ring_attr.callback = vxge_rx_1b_compl;
-               attr.ring_attr.rxd_init = vxge_rx_initial_replenish;
-               attr.ring_attr.rxd_term = vxge_rx_term;
-               attr.ring_attr.per_rxd_space = sizeof(struct vxge_rx_priv);
-               attr.ring_attr.userdata = &vpath->ring;
-
-               vpath->ring.ndev = vdev->ndev;
-               vpath->ring.pdev = vdev->pdev;
-
-               status = vxge_hw_vpath_open(vdev->devh, &attr, &vpath->handle);
-               if (status == VXGE_HW_OK) {
-                       vpath->fifo.handle =
-                           (struct __vxge_hw_fifo *)attr.fifo_attr.userdata;
-                       vpath->ring.handle =
-                           (struct __vxge_hw_ring *)attr.ring_attr.userdata;
-                       vpath->fifo.tx_steering_type =
-                               vdev->config.tx_steering_type;
-                       vpath->fifo.ndev = vdev->ndev;
-                       vpath->fifo.pdev = vdev->pdev;
-
-                       u64_stats_init(&vpath->fifo.stats.syncp);
-                       u64_stats_init(&vpath->ring.stats.syncp);
-
-                       if (vdev->config.tx_steering_type)
-                               vpath->fifo.txq =
-                                       netdev_get_tx_queue(vdev->ndev, i);
-                       else
-                               vpath->fifo.txq =
-                                       netdev_get_tx_queue(vdev->ndev, 0);
-                       vpath->fifo.indicate_max_pkts =
-                               vdev->config.fifo_indicate_max_pkts;
-                       vpath->fifo.tx_vector_no = 0;
-                       vpath->ring.rx_vector_no = 0;
-                       vpath->ring.rx_hwts = vdev->rx_hwts;
-                       vpath->is_open = 1;
-                       vdev->vp_handles[i] = vpath->handle;
-                       vpath->ring.vlan_tag_strip = vdev->vlan_tag_strip;
-                       vdev->stats.vpaths_open++;
-               } else {
-                       vdev->stats.vpath_open_fail++;
-                       vxge_debug_init(VXGE_ERR, "%s: vpath: %d failed to "
-                                       "open with status: %d",
-                                       vdev->ndev->name, vpath->device_id,
-                                       status);
-                       vxge_close_vpaths(vdev, 0);
-                       return -EPERM;
-               }
-
-               vp_id = vpath->handle->vpath->vp_id;
-               vdev->vpaths_deployed |= vxge_mBIT(vp_id);
-       }
-
-       return VXGE_HW_OK;
-}
-
-/**
- *  adaptive_coalesce_tx_interrupts - Changes the interrupt coalescing
- *  if the interrupts are not within a range
- *  @fifo: pointer to transmit fifo structure
- *  Description: The function changes the boundary timer and restriction
- *  timer values depending on the traffic.
- *  Return Value: None
- */
-static void adaptive_coalesce_tx_interrupts(struct vxge_fifo *fifo)
-{
-       fifo->interrupt_count++;
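-       /* Roughly every 10 ms (HZ / 100), decide whether the interrupt
-        * rate is high enough to stretch the restriction timer, or whether
-        * the timer can be turned back off.
-        */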
-       if (time_before(fifo->jiffies + HZ / 100, jiffies)) {
-               struct __vxge_hw_fifo *hw_fifo = fifo->handle;
-
-               fifo->jiffies = jiffies;
-               if (fifo->interrupt_count > VXGE_T1A_MAX_TX_INTERRUPT_COUNT &&
-                   hw_fifo->rtimer != VXGE_TTI_RTIMER_ADAPT_VAL) {
-                       hw_fifo->rtimer = VXGE_TTI_RTIMER_ADAPT_VAL;
-                       vxge_hw_vpath_dynamic_tti_rtimer_set(hw_fifo);
-               } else if (hw_fifo->rtimer != 0) {
-                       hw_fifo->rtimer = 0;
-                       vxge_hw_vpath_dynamic_tti_rtimer_set(hw_fifo);
-               }
-               fifo->interrupt_count = 0;
-       }
-}
-
-/**
- *  adaptive_coalesce_rx_interrupts - Changes the interrupt coalescing
- *  if the interrupts are not within a range
- *  @ring: pointer to receive ring structure
- *  Description: The function increases or decreases the packet count
- *  thresholds according to traffic utilization, if the interrupts due to
- *  this ring are not within a fixed range.
- *  Return Value: Nothing
- */
-static void adaptive_coalesce_rx_interrupts(struct vxge_ring *ring)
-{
-       ring->interrupt_count++;
-       if (time_before(ring->jiffies + HZ / 100, jiffies)) {
-               struct __vxge_hw_ring *hw_ring = ring->handle;
-
-               ring->jiffies = jiffies;
-               if (ring->interrupt_count > VXGE_T1A_MAX_INTERRUPT_COUNT &&
-                   hw_ring->rtimer != VXGE_RTI_RTIMER_ADAPT_VAL) {
-                       hw_ring->rtimer = VXGE_RTI_RTIMER_ADAPT_VAL;
-                       vxge_hw_vpath_dynamic_rti_rtimer_set(hw_ring);
-               } else if (hw_ring->rtimer != 0) {
-                       hw_ring->rtimer = 0;
-                       vxge_hw_vpath_dynamic_rti_rtimer_set(hw_ring);
-               }
-               ring->interrupt_count = 0;
-       }
-}
-
-/*
- *  vxge_isr_napi
- *  @irq: the irq of the device.
- *  @dev_id: a void pointer to the vxgedev structure of the Titan device
- *
- *  This function is the ISR handler of the device when napi is enabled. It
- *  identifies the reason for the interrupt and calls the relevant service
- *  routines.
- */
-static irqreturn_t vxge_isr_napi(int irq, void *dev_id)
-{
-       struct __vxge_hw_device *hldev;
-       u64 reason;
-       enum vxge_hw_status status;
-       struct vxgedev *vdev = (struct vxgedev *)dev_id;
-
-       vxge_debug_intr(VXGE_TRACE, "%s:%d", __func__, __LINE__);
-
-       hldev = pci_get_drvdata(vdev->pdev);
-
-       if (pci_channel_offline(vdev->pdev))
-               return IRQ_NONE;
-
-       if (unlikely(!is_vxge_card_up(vdev)))
-               return IRQ_HANDLED;
-
-       status = vxge_hw_device_begin_irq(hldev, vdev->exec_mode, &reason);
-       if (status == VXGE_HW_OK) {
-               vxge_hw_device_mask_all(hldev);
-
-               if (reason &
-                       VXGE_HW_TITAN_GENERAL_INT_STATUS_VPATH_TRAFFIC_INT(
-                       vdev->vpaths_deployed >>
-                       (64 - VXGE_HW_MAX_VIRTUAL_PATHS))) {
-
-                       vxge_hw_device_clear_tx_rx(hldev);
-                       napi_schedule(&vdev->napi);
-                       vxge_debug_intr(VXGE_TRACE,
-                               "%s:%d  Exiting...", __func__, __LINE__);
-                       return IRQ_HANDLED;
-               } else
-                       vxge_hw_device_unmask_all(hldev);
-       } else if (unlikely((status == VXGE_HW_ERR_VPATH) ||
-               (status == VXGE_HW_ERR_CRITICAL) ||
-               (status == VXGE_HW_ERR_FIFO))) {
-               vxge_hw_device_mask_all(hldev);
-               vxge_hw_device_flush_io(hldev);
-               return IRQ_HANDLED;
-       } else if (unlikely(status == VXGE_HW_ERR_SLOT_FREEZE))
-               return IRQ_HANDLED;
-
-       vxge_debug_intr(VXGE_TRACE, "%s:%d  Exiting...", __func__, __LINE__);
-       return IRQ_NONE;
-}
-
-static irqreturn_t vxge_tx_msix_handle(int irq, void *dev_id)
-{
-       struct vxge_fifo *fifo = (struct vxge_fifo *)dev_id;
-
-       adaptive_coalesce_tx_interrupts(fifo);
-
-       vxge_hw_channel_msix_mask((struct __vxge_hw_channel *)fifo->handle,
-                                 fifo->tx_vector_no);
-
-       vxge_hw_channel_msix_clear((struct __vxge_hw_channel *)fifo->handle,
-                                  fifo->tx_vector_no);
-
-       VXGE_COMPLETE_VPATH_TX(fifo);
-
-       vxge_hw_channel_msix_unmask((struct __vxge_hw_channel *)fifo->handle,
-                                   fifo->tx_vector_no);
-
-       return IRQ_HANDLED;
-}
-
-static irqreturn_t vxge_rx_msix_napi_handle(int irq, void *dev_id)
-{
-       struct vxge_ring *ring = (struct vxge_ring *)dev_id;
-
-       adaptive_coalesce_rx_interrupts(ring);
-
-       vxge_hw_channel_msix_mask((struct __vxge_hw_channel *)ring->handle,
-                                 ring->rx_vector_no);
-
-       vxge_hw_channel_msix_clear((struct __vxge_hw_channel *)ring->handle,
-                                  ring->rx_vector_no);
-
-       napi_schedule(&ring->napi);
-       return IRQ_HANDLED;
-}
-
-static irqreturn_t
-vxge_alarm_msix_handle(int irq, void *dev_id)
-{
-       int i;
-       enum vxge_hw_status status;
-       struct vxge_vpath *vpath = (struct vxge_vpath *)dev_id;
-       struct vxgedev *vdev = vpath->vdev;
-       int msix_id = (vpath->handle->vpath->vp_id *
-               VXGE_HW_VPATH_MSIX_ACTIVE) + VXGE_ALARM_MSIX_ID;
-
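-       /* The alarm vector is shared by all vpaths, so process each
-        * vpath's alarm in turn.
-        */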
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               /* Reduce the chance of losing alarm interrupts by masking
-                * the vector. A pending bit will be set if an alarm is
-                * generated and on unmask the interrupt will be fired.
-                */
-               vxge_hw_vpath_msix_mask(vdev->vpaths[i].handle, msix_id);
-               vxge_hw_vpath_msix_clear(vdev->vpaths[i].handle, msix_id);
-
-               status = vxge_hw_vpath_alarm_process(vdev->vpaths[i].handle,
-                       vdev->exec_mode);
-               if (status == VXGE_HW_OK) {
-                       vxge_hw_vpath_msix_unmask(vdev->vpaths[i].handle,
-                                                 msix_id);
-                       continue;
-               }
-               vxge_debug_intr(VXGE_ERR,
-                       "%s: vxge_hw_vpath_alarm_process failed %x ",
-                       VXGE_DRIVER_NAME, status);
-       }
-       return IRQ_HANDLED;
-}
-
-static int vxge_alloc_msix(struct vxgedev *vdev)
-{
-       int j, i, ret = 0;
-       int msix_intr_vect = 0, temp;
-       vdev->intr_cnt = 0;
-
-start:
-       /* Tx/Rx MSIX Vectors count */
-       vdev->intr_cnt = vdev->no_of_vpath * 2;
-
-       /* Alarm MSIX Vectors count */
-       vdev->intr_cnt++;
-
-       vdev->entries = kcalloc(vdev->intr_cnt, sizeof(struct msix_entry),
-                               GFP_KERNEL);
-       if (!vdev->entries) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s: memory allocation failed",
-                       VXGE_DRIVER_NAME);
-               ret = -ENOMEM;
-               goto alloc_entries_failed;
-       }
-
-       vdev->vxge_entries = kcalloc(vdev->intr_cnt,
-                                    sizeof(struct vxge_msix_entry),
-                                    GFP_KERNEL);
-       if (!vdev->vxge_entries) {
-               vxge_debug_init(VXGE_ERR, "%s: memory allocation failed",
-                       VXGE_DRIVER_NAME);
-               ret = -ENOMEM;
-               goto alloc_vxge_entries_failed;
-       }
-
-       for (i = 0, j = 0; i < vdev->no_of_vpath; i++) {
-
-               msix_intr_vect = i * VXGE_HW_VPATH_MSIX_ACTIVE;
-
-               /* Initialize the fifo vector */
-               vdev->entries[j].entry = msix_intr_vect;
-               vdev->vxge_entries[j].entry = msix_intr_vect;
-               vdev->vxge_entries[j].in_use = 0;
-               j++;
-
-               /* Initialize the ring vector */
-               vdev->entries[j].entry = msix_intr_vect + 1;
-               vdev->vxge_entries[j].entry = msix_intr_vect + 1;
-               vdev->vxge_entries[j].in_use = 0;
-               j++;
-       }
-
-       /* Initialize the alarm vector */
-       vdev->entries[j].entry = VXGE_ALARM_MSIX_ID;
-       vdev->vxge_entries[j].entry = VXGE_ALARM_MSIX_ID;
-       vdev->vxge_entries[j].in_use = 0;
-
-       ret = pci_enable_msix_range(vdev->pdev,
-                                   vdev->entries, 3, vdev->intr_cnt);
-       if (ret < 0) {
-               ret = -ENODEV;
-               goto enable_msix_failed;
-       } else if (ret < vdev->intr_cnt) {
-               pci_disable_msix(vdev->pdev);
-
-               vxge_debug_init(VXGE_ERR,
-                       "%s: MSI-X enable failed for %d vectors, ret: %d",
-                       VXGE_DRIVER_NAME, vdev->intr_cnt, ret);
-               if (max_config_vpath != VXGE_USE_DEFAULT) {
-                       ret = -ENODEV;
-                       goto enable_msix_failed;
-               }
-
-               kfree(vdev->entries);
-               kfree(vdev->vxge_entries);
-               vdev->entries = NULL;
-               vdev->vxge_entries = NULL;
-               /* Retry with fewer vectors by reducing the vpath count */
-               temp = (ret - 1)/2;
-               vxge_close_vpaths(vdev, temp);
-               vdev->no_of_vpath = temp;
-               goto start;
-       }
-       return 0;
-
-enable_msix_failed:
-       kfree(vdev->vxge_entries);
-alloc_vxge_entries_failed:
-       kfree(vdev->entries);
-alloc_entries_failed:
-       return ret;
-}
-
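-/* Illustrative walk-through of the vector budget in vxge_alloc_msix()
- * above; the numbers are examples, not values read from hardware. Each
- * vpath needs one Tx and one Rx vector, plus one shared alarm vector:
- *
- *     intr_cnt = no_of_vpath * 2 + 1;
- *
- * If pci_enable_msix_range() grants only ret < intr_cnt vectors, the
- * retry path recomputes the vpath count as (ret - 1) / 2, reserving one
- * vector for the alarm. For example, 8 vpaths request 17 vectors; if only
- * 9 are granted, the driver retries with (9 - 1) / 2 = 4 vpaths, i.e.
- * 9 vectors in total, which now fits.
- */
-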
-static int vxge_enable_msix(struct vxgedev *vdev)
-{
-
-       int i, ret = 0;
-       /* 0 - Tx, 1 - Rx  */
-       int tim_msix_id[4] = {0, 1, 0, 0};
-
-       vdev->intr_cnt = 0;
-
-       /* allocate msix vectors */
-       ret = vxge_alloc_msix(vdev);
-       if (!ret) {
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       struct vxge_vpath *vpath = &vdev->vpaths[i];
-
-                       /* If the fifo or ring is not enabled, its MSI-X
-                        * vector should be set to 0.
-                        */
-                       vpath->ring.rx_vector_no = (vpath->device_id *
-                                               VXGE_HW_VPATH_MSIX_ACTIVE) + 1;
-
-                       vpath->fifo.tx_vector_no = (vpath->device_id *
-                                               VXGE_HW_VPATH_MSIX_ACTIVE);
-
-                       vxge_hw_vpath_msix_set(vpath->handle, tim_msix_id,
-                                              VXGE_ALARM_MSIX_ID);
-               }
-       }
-
-       return ret;
-}
-
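-/* Sketch of the per-vpath MSI-X numbering assumed by vxge_enable_msix()
- * above. Within each vpath's block of VXGE_HW_VPATH_MSIX_ACTIVE vectors,
- * slot 0 is Tx and slot 1 is Rx (matching tim_msix_id[] = {0, 1, 0, 0}):
- *
- *     fifo.tx_vector_no = device_id * VXGE_HW_VPATH_MSIX_ACTIVE;
- *     ring.rx_vector_no = device_id * VXGE_HW_VPATH_MSIX_ACTIVE + 1;
- *
- * The alarm occupies slot VXGE_ALARM_MSIX_ID of the same block, as
- * computed in vxge_alarm_msix_handle().
- */
-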
-static void vxge_rem_msix_isr(struct vxgedev *vdev)
-{
-       int intr_cnt;
-
-       for (intr_cnt = 0; intr_cnt < (vdev->no_of_vpath * 2 + 1);
-               intr_cnt++) {
-               if (vdev->vxge_entries[intr_cnt].in_use) {
-                       free_irq(vdev->entries[intr_cnt].vector,
-                               vdev->vxge_entries[intr_cnt].arg);
-                       vdev->vxge_entries[intr_cnt].in_use = 0;
-               }
-       }
-
-       kfree(vdev->entries);
-       kfree(vdev->vxge_entries);
-       vdev->entries = NULL;
-       vdev->vxge_entries = NULL;
-
-       if (vdev->config.intr_type == MSI_X)
-               pci_disable_msix(vdev->pdev);
-}
-
-static void vxge_rem_isr(struct vxgedev *vdev)
-{
-       if (IS_ENABLED(CONFIG_PCI_MSI) &&
-           vdev->config.intr_type == MSI_X) {
-               vxge_rem_msix_isr(vdev);
-       } else if (vdev->config.intr_type == INTA) {
-                       free_irq(vdev->pdev->irq, vdev);
-       }
-}
-
-static int vxge_add_isr(struct vxgedev *vdev)
-{
-       int ret = 0;
-       int vp_idx = 0, intr_idx = 0, intr_cnt = 0, msix_idx = 0, irq_req = 0;
-       int pci_fun = PCI_FUNC(vdev->pdev->devfn);
-
-       if (IS_ENABLED(CONFIG_PCI_MSI) && vdev->config.intr_type == MSI_X)
-               ret = vxge_enable_msix(vdev);
-
-       if (ret) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s: Enabling MSI-X Failed", VXGE_DRIVER_NAME);
-               vxge_debug_init(VXGE_ERR,
-                       "%s: Defaulting to INTA", VXGE_DRIVER_NAME);
-               vdev->config.intr_type = INTA;
-       }
-
-       if (IS_ENABLED(CONFIG_PCI_MSI) && vdev->config.intr_type == MSI_X) {
-               for (intr_idx = 0;
-                    intr_idx < (vdev->no_of_vpath *
-                       VXGE_HW_VPATH_MSIX_ACTIVE); intr_idx++) {
-
-                       msix_idx = intr_idx % VXGE_HW_VPATH_MSIX_ACTIVE;
-                       irq_req = 0;
-
-                       switch (msix_idx) {
-                       case 0:
-                               snprintf(vdev->desc[intr_cnt], VXGE_INTR_STRLEN,
-                                       "%s:vxge:MSI-X %d - Tx - fn:%d vpath:%d",
-                                       vdev->ndev->name,
-                                       vdev->entries[intr_cnt].entry,
-                                       pci_fun, vp_idx);
-                               ret = request_irq(
-                                       vdev->entries[intr_cnt].vector,
-                                       vxge_tx_msix_handle, 0,
-                                       vdev->desc[intr_cnt],
-                                       &vdev->vpaths[vp_idx].fifo);
-                               vdev->vxge_entries[intr_cnt].arg =
-                                               &vdev->vpaths[vp_idx].fifo;
-                               irq_req = 1;
-                               break;
-                       case 1:
-                               snprintf(vdev->desc[intr_cnt], VXGE_INTR_STRLEN,
-                                       "%s:vxge:MSI-X %d - Rx - fn:%d vpath:%d",
-                                       vdev->ndev->name,
-                                       vdev->entries[intr_cnt].entry,
-                                       pci_fun, vp_idx);
-                               ret = request_irq(
-                                       vdev->entries[intr_cnt].vector,
-                                       vxge_rx_msix_napi_handle, 0,
-                                       vdev->desc[intr_cnt],
-                                       &vdev->vpaths[vp_idx].ring);
-                               vdev->vxge_entries[intr_cnt].arg =
-                                               &vdev->vpaths[vp_idx].ring;
-                               irq_req = 1;
-                               break;
-                       }
-
-                       if (ret) {
-                               vxge_debug_init(VXGE_ERR,
-                                       "%s: MSIX - %d  Registration failed",
-                                       vdev->ndev->name, intr_cnt);
-                               vxge_rem_msix_isr(vdev);
-                               vdev->config.intr_type = INTA;
-                               vxge_debug_init(VXGE_ERR,
-                                       "%s: Defaulting to INTA",
-                                       vdev->ndev->name);
-                               goto INTA_MODE;
-                       }
-
-                       if (irq_req) {
-                               /* We requested this MSI-X interrupt */
-                               vdev->vxge_entries[intr_cnt].in_use = 1;
-                               msix_idx +=  vdev->vpaths[vp_idx].device_id *
-                                       VXGE_HW_VPATH_MSIX_ACTIVE;
-                               vxge_hw_vpath_msix_unmask(
-                                       vdev->vpaths[vp_idx].handle,
-                                       msix_idx);
-                               intr_cnt++;
-                       }
-
-                       /* Point to next vpath handler */
-                       if (((intr_idx + 1) % VXGE_HW_VPATH_MSIX_ACTIVE == 0) &&
-                           (vp_idx < (vdev->no_of_vpath - 1)))
-                               vp_idx++;
-               }
-
-               intr_cnt = vdev->no_of_vpath * 2;
-               snprintf(vdev->desc[intr_cnt], VXGE_INTR_STRLEN,
-                       "%s:vxge:MSI-X %d - Alarm - fn:%d",
-                       vdev->ndev->name,
-                       vdev->entries[intr_cnt].entry,
-                       pci_fun);
-               /* For Alarm interrupts */
-               ret = request_irq(vdev->entries[intr_cnt].vector,
-                                       vxge_alarm_msix_handle, 0,
-                                       vdev->desc[intr_cnt],
-                                       &vdev->vpaths[0]);
-               if (ret) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: MSIX - %d Registration failed",
-                               vdev->ndev->name, intr_cnt);
-                       vxge_rem_msix_isr(vdev);
-                       vdev->config.intr_type = INTA;
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: Defaulting to INTA",
-                               vdev->ndev->name);
-                       goto INTA_MODE;
-               }
-
-               msix_idx = (vdev->vpaths[0].handle->vpath->vp_id *
-                       VXGE_HW_VPATH_MSIX_ACTIVE) + VXGE_ALARM_MSIX_ID;
-               vxge_hw_vpath_msix_unmask(vdev->vpaths[vp_idx].handle,
-                                       msix_idx);
-               vdev->vxge_entries[intr_cnt].in_use = 1;
-               vdev->vxge_entries[intr_cnt].arg = &vdev->vpaths[0];
-       }
-
-INTA_MODE:
-       if (vdev->config.intr_type == INTA) {
-               snprintf(vdev->desc[0], VXGE_INTR_STRLEN,
-                       "%s:vxge:INTA", vdev->ndev->name);
-               vxge_hw_device_set_intr_type(vdev->devh,
-                       VXGE_HW_INTR_MODE_IRQLINE);
-
-               vxge_hw_vpath_tti_ci_set(vdev->vpaths[0].fifo.handle);
-
-               ret = request_irq((int) vdev->pdev->irq,
-                       vxge_isr_napi,
-                       IRQF_SHARED, vdev->desc[0], vdev);
-               if (ret) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s %s-%d: ISR registration failed",
-                               VXGE_DRIVER_NAME, "IRQ", vdev->pdev->irq);
-                       return -ENODEV;
-               }
-               vxge_debug_init(VXGE_TRACE,
-                       "new %s-%d line allocated",
-                       "IRQ", vdev->pdev->irq);
-       }
-
-       return VXGE_HW_OK;
-}
-
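-/* With the request_irq() calls above, the MSI-X lines appear in
- * /proc/interrupts under names built by the snprintf() formats. An
- * illustrative listing, assuming netdev "eth0", PCI function 0 and
- * VXGE_ALARM_MSIX_ID == 2:
- *
- *     eth0:vxge:MSI-X 0 - Tx - fn:0 vpath:0
- *     eth0:vxge:MSI-X 1 - Rx - fn:0 vpath:0
- *     eth0:vxge:MSI-X 2 - Alarm - fn:0
- */
-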
-static void vxge_poll_vp_reset(struct timer_list *t)
-{
-       struct vxgedev *vdev = from_timer(vdev, t, vp_reset_timer);
-       int i, j = 0;
-
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               if (test_bit(i, &vdev->vp_reset)) {
-                       vxge_reset_vpath(vdev, i);
-                       j++;
-               }
-       }
-       if (j && (vdev->config.intr_type != MSI_X)) {
-               vxge_hw_device_unmask_all(vdev->devh);
-               vxge_hw_device_flush_io(vdev->devh);
-       }
-
-       mod_timer(&vdev->vp_reset_timer, jiffies + HZ / 2);
-}
-
-static void vxge_poll_vp_lockup(struct timer_list *t)
-{
-       struct vxgedev *vdev = from_timer(vdev, t, vp_lockup_timer);
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_vpath *vpath;
-       struct vxge_ring *ring;
-       int i;
-       unsigned long rx_frms;
-
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               ring = &vdev->vpaths[i].ring;
-
-               /* Frame count, truncated to machine word size */
-               rx_frms = READ_ONCE(ring->stats.rx_frms);
-
-               /* Did this vpath receive any packets? */
-               if (ring->stats.prev_rx_frms == rx_frms) {
-                       status = vxge_hw_vpath_check_leak(ring->handle);
-
-                       /* Did it also fail to receive packets last time? */
-                       if ((VXGE_HW_FAIL == status) &&
-                               (VXGE_HW_FAIL == ring->last_status)) {
-
-                               /* schedule vpath reset */
-                               if (!test_and_set_bit(i, &vdev->vp_reset)) {
-                                       vpath = &vdev->vpaths[i];
-
-                                       /* disable interrupts for this vpath */
-                                       vxge_vpath_intr_disable(vdev, i);
-
-                                       /* stop the queue for this vpath */
-                                       netif_tx_stop_queue(vpath->fifo.txq);
-                                       continue;
-                               }
-                       }
-               }
-               ring->stats.prev_rx_frms = rx_frms;
-               ring->last_status = status;
-       }
-
-       /* Check every millisecond */
-       mod_timer(&vdev->vp_lockup_timer, jiffies + HZ / 1000);
-}
-
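-/* Summary of the lockup heuristic above: every HZ / 1000 jiffies (about
- * one millisecond) the timer compares each ring's rx_frms counter with
- * the value seen on the previous poll. Only when the counter is unchanged
- * and vxge_hw_vpath_check_leak() reports VXGE_HW_FAIL on two consecutive
- * polls is the vpath marked in vdev->vp_reset; the reset itself is done
- * by vxge_poll_vp_reset(), which runs every HZ / 2.
- */
-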
-static netdev_features_t vxge_fix_features(struct net_device *dev,
-       netdev_features_t features)
-{
-       netdev_features_t changed = dev->features ^ features;
-
-       /* Enabling RTH requires some of the logic in vxge_device_register and a
-        * vpath reset.  Due to these restrictions, only allow modification
-        * while the interface is down.
-        */
-       if ((changed & NETIF_F_RXHASH) && netif_running(dev))
-               features ^= NETIF_F_RXHASH;
-
-       return features;
-}
-
-static int vxge_set_features(struct net_device *dev, netdev_features_t features)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       netdev_features_t changed = dev->features ^ features;
-
-       if (!(changed & NETIF_F_RXHASH))
-               return 0;
-
-       /* !netif_running() ensured by vxge_fix_features() */
-
-       vdev->devh->config.rth_en = !!(features & NETIF_F_RXHASH);
-       vxge_reset_all_vpaths(vdev);
-
-       return 0;
-}
-
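-/* A hedged usage sketch for the two feature callbacks above. The
- * NETIF_F_RXHASH bit corresponds to ethtool's "rxhash" feature, so the
- * expected sequence would be (interface name is illustrative):
- *
- *     ip link set eth0 down        # vxge_fix_features() rejects it if up
- *     ethtool -K eth0 rxhash off   # vxge_set_features() updates rth_en
- *     ip link set eth0 up
- */
-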
-/**
- * vxge_open
- * @dev: pointer to the device structure.
- *
- * This function is the open entry point of the driver. It mainly calls a
- * function to allocate Rx buffers, inserts them into the buffer
- * descriptors, and then enables the Rx part of the NIC.
- * Return value: '0' on success and an appropriate negative error value
- * as defined in errno.h on failure.
- */
-static int vxge_open(struct net_device *dev)
-{
-       enum vxge_hw_status status;
-       struct vxgedev *vdev;
-       struct __vxge_hw_device *hldev;
-       struct vxge_vpath *vpath;
-       int ret = 0;
-       int i;
-       u64 val64;
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d", dev->name, __func__, __LINE__);
-
-       vdev = netdev_priv(dev);
-       hldev = pci_get_drvdata(vdev->pdev);
-
-       /* Make sure the link is off by default every time the NIC is
-        * initialized */
-       netif_carrier_off(dev);
-
-       /* Open VPATHs */
-       status = vxge_open_vpaths(vdev);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s: fatal: Vpath open failed", vdev->ndev->name);
-               ret = -EPERM;
-               goto out0;
-       }
-
-       vdev->mtu = dev->mtu;
-
-       status = vxge_add_isr(vdev);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s: fatal: ISR add failed", dev->name);
-               ret = -EPERM;
-               goto out1;
-       }
-
-       if (vdev->config.intr_type != MSI_X) {
-               netif_napi_add_weight(dev, &vdev->napi, vxge_poll_inta,
-                                     vdev->config.napi_weight);
-               napi_enable(&vdev->napi);
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vpath = &vdev->vpaths[i];
-                       vpath->ring.napi_p = &vdev->napi;
-               }
-       } else {
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vpath = &vdev->vpaths[i];
-                       netif_napi_add_weight(dev, &vpath->ring.napi,
-                                             vxge_poll_msix,
-                                             vdev->config.napi_weight);
-                       napi_enable(&vpath->ring.napi);
-                       vpath->ring.napi_p = &vpath->ring.napi;
-               }
-       }
-
-       /* configure RTH */
-       if (vdev->config.rth_steering) {
-               status = vxge_rth_configure(vdev);
-               if (status != VXGE_HW_OK) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: fatal: RTH configuration failed",
-                               dev->name);
-                       ret = -EPERM;
-                       goto out2;
-               }
-       }
-       printk(KERN_INFO "%s: Receive Hashing Offload %s\n", dev->name,
-              hldev->config.rth_en ? "enabled" : "disabled");
-
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               vpath = &vdev->vpaths[i];
-
-               /* set initial mtu before enabling the device */
-               status = vxge_hw_vpath_mtu_set(vpath->handle, vdev->mtu);
-               if (status != VXGE_HW_OK) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: fatal: can not set new MTU", dev->name);
-                       ret = -EPERM;
-                       goto out2;
-               }
-       }
-
-       VXGE_DEVICE_DEBUG_LEVEL_SET(VXGE_TRACE, VXGE_COMPONENT_LL, vdev);
-       vxge_debug_init(vdev->level_trace,
-               "%s: MTU is %d", vdev->ndev->name, vdev->mtu);
-       VXGE_DEVICE_DEBUG_LEVEL_SET(VXGE_ERR, VXGE_COMPONENT_LL, vdev);
-
-       /* Restore the DA, VID table and also multicast and promiscuous mode
-        * states
-        */
-       if (vdev->all_multi_flg) {
-               for (i = 0; i < vdev->no_of_vpath; i++) {
-                       vpath = &vdev->vpaths[i];
-                       vxge_restore_vpath_mac_addr(vpath);
-                       vxge_restore_vpath_vid_table(vpath);
-
-                       status = vxge_hw_vpath_mcast_enable(vpath->handle);
-                       if (status != VXGE_HW_OK)
-                               vxge_debug_init(VXGE_ERR,
-                                       "%s:%d Enabling multicast failed",
-                                       __func__, __LINE__);
-               }
-       }
-
-       /* Enable the vpaths to sniff all unicast/multicast traffic that is
-        * not addressed to them. Promiscuous mode is allowed for the PF only.
-        */
-
-       val64 = 0;
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++)
-               val64 |= VXGE_HW_RXMAC_AUTHORIZE_ALL_ADDR_VP(i);
-
-       vxge_hw_mgmt_reg_write(vdev->devh,
-               vxge_hw_mgmt_reg_type_mrpcim,
-               0,
-               (ulong)offsetof(struct vxge_hw_mrpcim_reg,
-                       rxmac_authorize_all_addr),
-               val64);
-
-       vxge_hw_mgmt_reg_write(vdev->devh,
-               vxge_hw_mgmt_reg_type_mrpcim,
-               0,
-               (ulong)offsetof(struct vxge_hw_mrpcim_reg,
-                       rxmac_authorize_all_vid),
-               val64);
-
-       vxge_set_multicast(dev);
-
-       /* Enabling Bcast and mcast for all vpath */
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               vpath = &vdev->vpaths[i];
-               status = vxge_hw_vpath_bcast_enable(vpath->handle);
-               if (status != VXGE_HW_OK)
-                       vxge_debug_init(VXGE_ERR,
-                               "%s : Can not enable bcast for vpath "
-                               "id %d", dev->name, i);
-               if (vdev->config.addr_learn_en) {
-                       status = vxge_hw_vpath_mcast_enable(vpath->handle);
-                       if (status != VXGE_HW_OK)
-                               vxge_debug_init(VXGE_ERR,
-                                       "%s : Can not enable mcast for vpath "
-                                       "id %d", dev->name, i);
-               }
-       }
-
-       vxge_hw_device_setpause_data(vdev->devh, 0,
-               vdev->config.tx_pause_enable,
-               vdev->config.rx_pause_enable);
-
-       if (vdev->vp_reset_timer.function == NULL)
-               vxge_os_timer(&vdev->vp_reset_timer, vxge_poll_vp_reset,
-                             HZ / 2);
-
-       /* There is no need to check for RxD leak and RxD lookup on Titan1A */
-       if (vdev->titan1 && vdev->vp_lockup_timer.function == NULL)
-               vxge_os_timer(&vdev->vp_lockup_timer, vxge_poll_vp_lockup,
-                             HZ / 2);
-
-       set_bit(__VXGE_STATE_CARD_UP, &vdev->state);
-
-       smp_wmb();
-
-       if (vxge_hw_device_link_state_get(vdev->devh) == VXGE_HW_LINK_UP) {
-               netif_carrier_on(vdev->ndev);
-               netdev_notice(vdev->ndev, "Link Up\n");
-               vdev->stats.link_up++;
-       }
-
-       vxge_hw_device_intr_enable(vdev->devh);
-
-       smp_wmb();
-
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               vpath = &vdev->vpaths[i];
-
-               vxge_hw_vpath_enable(vpath->handle);
-               smp_wmb();
-               vxge_hw_vpath_rx_doorbell_init(vpath->handle);
-       }
-
-       netif_tx_start_all_queues(vdev->ndev);
-
-       /* configure CI */
-       vxge_config_ci_for_tti_rti(vdev);
-
-       goto out0;
-
-out2:
-       vxge_rem_isr(vdev);
-
-       /* Disable napi */
-       if (vdev->config.intr_type != MSI_X)
-               napi_disable(&vdev->napi);
-       else {
-               for (i = 0; i < vdev->no_of_vpath; i++)
-                       napi_disable(&vdev->vpaths[i].ring.napi);
-       }
-
-out1:
-       vxge_close_vpaths(vdev, 0);
-out0:
-       vxge_debug_entryexit(VXGE_TRACE,
-                               "%s: %s:%d  Exiting...",
-                               dev->name, __func__, __LINE__);
-       return ret;
-}
-
-/* Loop through the mac address list and delete all the entries */
-static void vxge_free_mac_add_list(struct vxge_vpath *vpath)
-{
-
-       struct list_head *entry, *next;
-       if (list_empty(&vpath->mac_addr_list))
-               return;
-
-       list_for_each_safe(entry, next, &vpath->mac_addr_list) {
-               list_del(entry);
-               kfree(entry);
-       }
-}
-
-static void vxge_napi_del_all(struct vxgedev *vdev)
-{
-       int i;
-       if (vdev->config.intr_type != MSI_X)
-               netif_napi_del(&vdev->napi);
-       else {
-               for (i = 0; i < vdev->no_of_vpath; i++)
-                       netif_napi_del(&vdev->vpaths[i].ring.napi);
-       }
-}
-
-static int do_vxge_close(struct net_device *dev, int do_io)
-{
-       enum vxge_hw_status status;
-       struct vxgedev *vdev;
-       struct __vxge_hw_device *hldev;
-       int i;
-       u64 val64, vpath_vector;
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d",
-               dev->name, __func__, __LINE__);
-
-       vdev = netdev_priv(dev);
-       hldev = pci_get_drvdata(vdev->pdev);
-
-       if (unlikely(!is_vxge_card_up(vdev)))
-               return 0;
-
-       /* If vxge_handle_crit_err task is executing,
-        * wait till it completes. */
-       while (test_and_set_bit(__VXGE_STATE_RESET_CARD, &vdev->state))
-               msleep(50);
-
-       if (do_io) {
-               /* Put the vpath back in normal mode */
-               vpath_vector = vxge_mBIT(vdev->vpaths[0].device_id);
-               status = vxge_hw_mgmt_reg_read(vdev->devh,
-                               vxge_hw_mgmt_reg_type_mrpcim,
-                               0,
-                               (ulong)offsetof(
-                                       struct vxge_hw_mrpcim_reg,
-                                       rts_mgr_cbasin_cfg),
-                               &val64);
-               if (status == VXGE_HW_OK) {
-                       val64 &= ~vpath_vector;
-                       status = vxge_hw_mgmt_reg_write(vdev->devh,
-                                       vxge_hw_mgmt_reg_type_mrpcim,
-                                       0,
-                                       (ulong)offsetof(
-                                               struct vxge_hw_mrpcim_reg,
-                                               rts_mgr_cbasin_cfg),
-                                       val64);
-               }
-
-               /* Remove the function 0 from promiscuous mode */
-               vxge_hw_mgmt_reg_write(vdev->devh,
-                       vxge_hw_mgmt_reg_type_mrpcim,
-                       0,
-                       (ulong)offsetof(struct vxge_hw_mrpcim_reg,
-                               rxmac_authorize_all_addr),
-                       0);
-
-               vxge_hw_mgmt_reg_write(vdev->devh,
-                       vxge_hw_mgmt_reg_type_mrpcim,
-                       0,
-                       (ulong)offsetof(struct vxge_hw_mrpcim_reg,
-                               rxmac_authorize_all_vid),
-                       0);
-
-               smp_wmb();
-       }
-
-       if (vdev->titan1)
-               del_timer_sync(&vdev->vp_lockup_timer);
-
-       del_timer_sync(&vdev->vp_reset_timer);
-
-       if (do_io)
-               vxge_hw_device_wait_receive_idle(hldev);
-
-       clear_bit(__VXGE_STATE_CARD_UP, &vdev->state);
-
-       /* Disable napi */
-       if (vdev->config.intr_type != MSI_X)
-               napi_disable(&vdev->napi);
-       else {
-               for (i = 0; i < vdev->no_of_vpath; i++)
-                       napi_disable(&vdev->vpaths[i].ring.napi);
-       }
-
-       netif_carrier_off(vdev->ndev);
-       netdev_notice(vdev->ndev, "Link Down\n");
-       netif_tx_stop_all_queues(vdev->ndev);
-
-       /* Note that at this point xmit() is stopped by upper layer */
-       if (do_io)
-               vxge_hw_device_intr_disable(vdev->devh);
-
-       vxge_rem_isr(vdev);
-
-       vxge_napi_del_all(vdev);
-
-       if (do_io)
-               vxge_reset_all_vpaths(vdev);
-
-       vxge_close_vpaths(vdev, 0);
-
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s: %s:%d  Exiting...", dev->name, __func__, __LINE__);
-
-       clear_bit(__VXGE_STATE_RESET_CARD, &vdev->state);
-
-       return 0;
-}
-
-/**
- * vxge_close
- * @dev: device pointer.
- *
- * This is the stop entry point of the driver. It needs to undo exactly
- * whatever was done by the open entry point, thus it is usually referred
- * to as the close function. Among other things, this function mainly stops
- * the Rx side of the NIC and frees all the Rx buffers in the Rx rings.
- * Return value: '0' on success and an appropriate negative error value
- * as defined in errno.h on failure.
- */
-static int vxge_close(struct net_device *dev)
-{
-       do_vxge_close(dev, 1);
-       return 0;
-}
-
-/**
- * vxge_change_mtu
- * @dev: net device pointer.
- * @new_mtu: the new MTU size for the device.
- *
- * A driver entry point to change MTU size for the device. Before changing
- * the MTU the device must be stopped.
- */
-static int vxge_change_mtu(struct net_device *dev, int new_mtu)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-
-       vxge_debug_entryexit(vdev->level_trace,
-               "%s:%d", __func__, __LINE__);
-
-       /* check if device is down already */
-       if (unlikely(!is_vxge_card_up(vdev))) {
-               /* just store the new value; it will be used later, on open() */
-               dev->mtu = new_mtu;
-               vxge_debug_init(vdev->level_err,
-                       "%s", "device is down on MTU change");
-               return 0;
-       }
-
-       vxge_debug_init(vdev->level_trace,
-               "trying to apply new MTU %d", new_mtu);
-
-       if (vxge_close(dev))
-               return -EIO;
-
-       dev->mtu = new_mtu;
-       vdev->mtu = new_mtu;
-
-       if (vxge_open(dev))
-               return -EIO;
-
-       vxge_debug_init(vdev->level_trace,
-               "%s: MTU changed to %d", vdev->ndev->name, new_mtu);
-
-       vxge_debug_entryexit(vdev->level_trace,
-               "%s:%d  Exiting...", __func__, __LINE__);
-
-       return 0;
-}
-
-/**
- * vxge_get_stats64
- * @dev: pointer to the device structure
- * @net_stats: pointer to struct rtnl_link_stats64
- *
- */
-static void
-vxge_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *net_stats)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       int k;
-
-       /* net_stats already zeroed by caller */
-       for (k = 0; k < vdev->no_of_vpath; k++) {
-               struct vxge_ring_stats *rxstats = &vdev->vpaths[k].ring.stats;
-               struct vxge_fifo_stats *txstats = &vdev->vpaths[k].fifo.stats;
-               unsigned int start;
-               u64 packets, bytes, multicast;
-
-               do {
-                       start = u64_stats_fetch_begin_irq(&rxstats->syncp);
-
-                       packets   = rxstats->rx_frms;
-                       multicast = rxstats->rx_mcast;
-                       bytes     = rxstats->rx_bytes;
-               } while (u64_stats_fetch_retry_irq(&rxstats->syncp, start));
-
-               net_stats->rx_packets += packets;
-               net_stats->rx_bytes += bytes;
-               net_stats->multicast += multicast;
-
-               net_stats->rx_errors += rxstats->rx_errors;
-               net_stats->rx_dropped += rxstats->rx_dropped;
-
-               do {
-                       start = u64_stats_fetch_begin_irq(&txstats->syncp);
-
-                       packets = txstats->tx_frms;
-                       bytes   = txstats->tx_bytes;
-               } while (u64_stats_fetch_retry_irq(&txstats->syncp, start));
-
-               net_stats->tx_packets += packets;
-               net_stats->tx_bytes += bytes;
-               net_stats->tx_errors += txstats->tx_errors;
-       }
-}
-
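-/* vxge_get_stats64() above uses the standard u64_stats seqcount pattern:
- * the reader retries until the sequence was stable across the whole read.
- * A minimal sketch of the matching writer side, as it would look on the
- * Rx hot path (names are illustrative):
- *
- *     u64_stats_update_begin(&rxstats->syncp);
- *     rxstats->rx_frms++;
- *     rxstats->rx_bytes += len;
- *     u64_stats_update_end(&rxstats->syncp);
- *
- * This keeps 64-bit counters tear-free on 32-bit SMP without a lock.
- */
-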
-static enum vxge_hw_status vxge_timestamp_config(struct __vxge_hw_device *devh)
-{
-       enum vxge_hw_status status;
-       u64 val64;
-
-       /* Timestamp is passed to the driver via the FCS, therefore we
-        * must disable the FCS stripping by the adapter.  Since this is
-        * required for the driver to load (due to a hardware bug),
-        * there is no need to do anything special here.
-        */
-       val64 = VXGE_HW_XMAC_TIMESTAMP_EN |
-               VXGE_HW_XMAC_TIMESTAMP_USE_LINK_ID(0) |
-               VXGE_HW_XMAC_TIMESTAMP_INTERVAL(0);
-
-       status = vxge_hw_mgmt_reg_write(devh,
-                                       vxge_hw_mgmt_reg_type_mrpcim,
-                                       0,
-                                       offsetof(struct vxge_hw_mrpcim_reg,
-                                                xmac_timestamp),
-                                       val64);
-       vxge_hw_device_flush_io(devh);
-       devh->config.hwts_en = VXGE_HW_HWTS_ENABLE;
-       return status;
-}
-
-static int vxge_hwtstamp_set(struct vxgedev *vdev, void __user *data)
-{
-       struct hwtstamp_config config;
-       int i;
-
-       if (copy_from_user(&config, data, sizeof(config)))
-               return -EFAULT;
-
-       /* Transmit HW Timestamp not supported */
-       switch (config.tx_type) {
-       case HWTSTAMP_TX_OFF:
-               break;
-       case HWTSTAMP_TX_ON:
-       default:
-               return -ERANGE;
-       }
-
-       switch (config.rx_filter) {
-       case HWTSTAMP_FILTER_NONE:
-               vdev->rx_hwts = 0;
-               config.rx_filter = HWTSTAMP_FILTER_NONE;
-               break;
-
-       case HWTSTAMP_FILTER_ALL:
-       case HWTSTAMP_FILTER_SOME:
-       case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
-       case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
-       case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
-       case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
-       case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
-       case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
-       case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
-       case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
-       case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
-       case HWTSTAMP_FILTER_PTP_V2_EVENT:
-       case HWTSTAMP_FILTER_PTP_V2_SYNC:
-       case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
-       case HWTSTAMP_FILTER_NTP_ALL:
-               if (vdev->devh->config.hwts_en != VXGE_HW_HWTS_ENABLE)
-                       return -EFAULT;
-
-               vdev->rx_hwts = 1;
-               config.rx_filter = HWTSTAMP_FILTER_ALL;
-               break;
-
-       default:
-                return -ERANGE;
-       }
-
-       for (i = 0; i < vdev->no_of_vpath; i++)
-               vdev->vpaths[i].ring.rx_hwts = vdev->rx_hwts;
-
-       if (copy_to_user(data, &config, sizeof(config)))
-               return -EFAULT;
-
-       return 0;
-}
-
-static int vxge_hwtstamp_get(struct vxgedev *vdev, void __user *data)
-{
-       struct hwtstamp_config config;
-
-       config.flags = 0;
-       config.tx_type = HWTSTAMP_TX_OFF;
-       config.rx_filter = (vdev->rx_hwts ?
-                           HWTSTAMP_FILTER_ALL : HWTSTAMP_FILTER_NONE);
-
-       if (copy_to_user(data, &config, sizeof(config)))
-               return -EFAULT;
-
-       return 0;
-}
-
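-/* A minimal, hedged userspace sketch for driving the two handlers above
- * through the standard hwtstamp ioctl (interface name and file descriptor
- * setup are illustrative):
- *
- *     #include <linux/net_tstamp.h>
- *     #include <linux/sockios.h>
- *     #include <net/if.h>
- *     #include <string.h>
- *     #include <sys/ioctl.h>
- *
- *     struct hwtstamp_config cfg;
- *     struct ifreq ifr;
- *
- *     memset(&cfg, 0, sizeof(cfg));
- *     cfg.tx_type = HWTSTAMP_TX_OFF;        // Tx stamping unsupported here
- *     cfg.rx_filter = HWTSTAMP_FILTER_ALL;  // PTP filters are coerced to
- *                                           // FILTER_ALL by the driver
- *     memset(&ifr, 0, sizeof(ifr));
- *     strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
- *     ifr.ifr_data = (void *)&cfg;
- *     ioctl(fd, SIOCSHWTSTAMP, &ifr);       // fd: any AF_INET socket
- */
-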
-/**
- * vxge_ioctl
- * @dev: Device pointer.
- * @rq: An IOCTL-specific structure that can contain a pointer to
- *       a proprietary structure used to pass information to the driver.
- * @cmd: This is used to distinguish between the different commands that
- *       can be passed to the IOCTL functions.
- *
- * Entry point for the Ioctl.
- */
-static int vxge_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-
-       switch (cmd) {
-       case SIOCSHWTSTAMP:
-               return vxge_hwtstamp_set(vdev, rq->ifr_data);
-       case SIOCGHWTSTAMP:
-               return vxge_hwtstamp_get(vdev, rq->ifr_data);
-       default:
-               return -EOPNOTSUPP;
-       }
-}
-
-/**
- * vxge_tx_watchdog
- * @dev: pointer to net device structure
- * @txqueue: index of the hanging queue
- *
- * Watchdog for transmit side.
- * This function is triggered if the Tx Queue is stopped
- * for a pre-defined amount of time when the Interface is still up.
- */
-static void vxge_tx_watchdog(struct net_device *dev, unsigned int txqueue)
-{
-       struct vxgedev *vdev;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s:%d", __func__, __LINE__);
-
-       vdev = netdev_priv(dev);
-
-       vdev->cric_err_event = VXGE_HW_EVENT_RESET_START;
-
-       schedule_work(&vdev->reset_task);
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s:%d  Exiting...", __func__, __LINE__);
-}
-
-/**
- * vxge_vlan_rx_add_vid
- * @dev: net device pointer.
- * @proto: vlan protocol
- * @vid: vid
- *
- * Add the vlan id to the device's vlan id table
- */
-static int
-vxge_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct vxge_vpath *vpath;
-       int vp_id;
-
-       /* Add this vlan to the vid table */
-       for (vp_id = 0; vp_id < vdev->no_of_vpath; vp_id++) {
-               vpath = &vdev->vpaths[vp_id];
-               if (!vpath->is_open)
-                       continue;
-               vxge_hw_vpath_vid_add(vpath->handle, vid);
-       }
-       set_bit(vid, vdev->active_vlans);
-       return 0;
-}
-
-/**
- * vxge_vlan_rx_kill_vid
- * @dev: net device pointer.
- * @proto: vlan protocol
- * @vid: vid
- *
- * Remove the vlan id from the device's vlan id table
- */
-static int
-vxge_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
-{
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct vxge_vpath *vpath;
-       int vp_id;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s:%d", __func__, __LINE__);
-
-       /* Delete this vlan from the vid table */
-       for (vp_id = 0; vp_id < vdev->no_of_vpath; vp_id++) {
-               vpath = &vdev->vpaths[vp_id];
-               if (!vpath->is_open)
-                       continue;
-               vxge_hw_vpath_vid_delete(vpath->handle, vid);
-       }
-       vxge_debug_entryexit(VXGE_TRACE,
-               "%s:%d  Exiting...", __func__, __LINE__);
-       clear_bit(vid, vdev->active_vlans);
-       return 0;
-}
-
-static const struct net_device_ops vxge_netdev_ops = {
-       .ndo_open               = vxge_open,
-       .ndo_stop               = vxge_close,
-       .ndo_get_stats64        = vxge_get_stats64,
-       .ndo_start_xmit         = vxge_xmit,
-       .ndo_validate_addr      = eth_validate_addr,
-       .ndo_set_rx_mode        = vxge_set_multicast,
-       .ndo_eth_ioctl          = vxge_ioctl,
-       .ndo_set_mac_address    = vxge_set_mac_addr,
-       .ndo_change_mtu         = vxge_change_mtu,
-       .ndo_fix_features       = vxge_fix_features,
-       .ndo_set_features       = vxge_set_features,
-       .ndo_vlan_rx_kill_vid   = vxge_vlan_rx_kill_vid,
-       .ndo_vlan_rx_add_vid    = vxge_vlan_rx_add_vid,
-       .ndo_tx_timeout         = vxge_tx_watchdog,
-#ifdef CONFIG_NET_POLL_CONTROLLER
-       .ndo_poll_controller    = vxge_netpoll,
-#endif
-};
-
-static int vxge_device_register(struct __vxge_hw_device *hldev,
-                               struct vxge_config *config,
-                               int no_of_vpath, struct vxgedev **vdev_out)
-{
-       struct net_device *ndev;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxgedev *vdev;
-       int ret = 0, no_of_queue = 1;
-       u64 stat;
-
-       *vdev_out = NULL;
-       if (config->tx_steering_type)
-               no_of_queue = no_of_vpath;
-
-       ndev = alloc_etherdev_mq(sizeof(struct vxgedev),
-                       no_of_queue);
-       if (ndev == NULL) {
-               vxge_debug_init(
-                       vxge_hw_device_trace_level_get(hldev),
-               "%s : device allocation failed", __func__);
-               ret = -ENODEV;
-               goto _out0;
-       }
-
-       vxge_debug_entryexit(
-               vxge_hw_device_trace_level_get(hldev),
-               "%s: %s:%d  Entering...",
-               ndev->name, __func__, __LINE__);
-
-       vdev = netdev_priv(ndev);
-       memset(vdev, 0, sizeof(struct vxgedev));
-
-       vdev->ndev = ndev;
-       vdev->devh = hldev;
-       vdev->pdev = hldev->pdev;
-       memcpy(&vdev->config, config, sizeof(struct vxge_config));
-       vdev->rx_hwts = 0;
-       vdev->titan1 = (vdev->pdev->revision == VXGE_HW_TITAN1_PCI_REVISION);
-
-       SET_NETDEV_DEV(ndev, &vdev->pdev->dev);
-
-       ndev->hw_features = NETIF_F_RXCSUM | NETIF_F_SG |
-               NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
-               NETIF_F_TSO | NETIF_F_TSO6 |
-               NETIF_F_HW_VLAN_CTAG_TX;
-       if (vdev->config.rth_steering != NO_STEERING)
-               ndev->hw_features |= NETIF_F_RXHASH;
-
-       ndev->features |= ndev->hw_features |
-               NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_FILTER;
-
-       ndev->netdev_ops = &vxge_netdev_ops;
-
-       ndev->watchdog_timeo = VXGE_LL_WATCH_DOG_TIMEOUT;
-       INIT_WORK(&vdev->reset_task, vxge_reset);
-
-       vxge_initialize_ethtool_ops(ndev);
-
-       /* Allocate memory for vpath */
-       vdev->vpaths = kcalloc(no_of_vpath, sizeof(struct vxge_vpath),
-                              GFP_KERNEL);
-       if (!vdev->vpaths) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s: vpath memory allocation failed",
-                       vdev->ndev->name);
-               ret = -ENOMEM;
-               goto _out1;
-       }
-
-       vxge_debug_init(vxge_hw_device_trace_level_get(hldev),
-               "%s : checksumming enabled", __func__);
-
-       ndev->features |= NETIF_F_HIGHDMA;
-
-       /* MTU range: 68 - 9600 */
-       ndev->min_mtu = VXGE_HW_MIN_MTU;
-       ndev->max_mtu = VXGE_HW_MAX_MTU;
-
-       ret = register_netdev(ndev);
-       if (ret) {
-               vxge_debug_init(vxge_hw_device_trace_level_get(hldev),
-                       "%s: %s : device registration failed!",
-                       ndev->name, __func__);
-               goto _out2;
-       }
-
-       /*  Set the factory defined MAC address initially */
-       ndev->addr_len = ETH_ALEN;
-
-       /* Set the link state to off at this point; when the link change
-        * interrupt arrives, the state will automatically be changed to
-        * the right state.
-        */
-       netif_carrier_off(ndev);
-
-       vxge_debug_init(vxge_hw_device_trace_level_get(hldev),
-               "%s: Ethernet device registered",
-               ndev->name);
-
-       hldev->ndev = ndev;
-       *vdev_out = vdev;
-
-       /* Resetting the Device stats */
-       status = vxge_hw_mrpcim_stats_access(
-                               hldev,
-                               VXGE_HW_STATS_OP_CLEAR_ALL_STATS,
-                               0,
-                               0,
-                               &stat);
-
-       if (status == VXGE_HW_ERR_PRIVILEGED_OPERATION)
-               vxge_debug_init(
-                       vxge_hw_device_trace_level_get(hldev),
-                       "%s: device stats clear returns"
-                       "VXGE_HW_ERR_PRIVILEGED_OPERATION", ndev->name);
-
-       vxge_debug_entryexit(vxge_hw_device_trace_level_get(hldev),
-               "%s: %s:%d  Exiting...",
-               ndev->name, __func__, __LINE__);
-
-       return ret;
-_out2:
-       kfree(vdev->vpaths);
-_out1:
-       free_netdev(ndev);
-_out0:
-       return ret;
-}
-
-/*
- * vxge_device_unregister
- *
- * This function will unregister and free network device
- */
-static void vxge_device_unregister(struct __vxge_hw_device *hldev)
-{
-       struct vxgedev *vdev;
-       struct net_device *dev;
-       char buf[IFNAMSIZ];
-
-       dev = hldev->ndev;
-       vdev = netdev_priv(dev);
-
-       vxge_debug_entryexit(vdev->level_trace, "%s: %s:%d", vdev->ndev->name,
-                            __func__, __LINE__);
-
-       strlcpy(buf, dev->name, IFNAMSIZ);
-
-       flush_work(&vdev->reset_task);
-
-       /* unregister_netdev() will call stop() if the device is up */
-       unregister_netdev(dev);
-
-       kfree(vdev->vpaths);
-
-       vxge_debug_init(vdev->level_trace, "%s: ethernet device unregistered",
-                       buf);
-       vxge_debug_entryexit(vdev->level_trace, "%s: %s:%d  Exiting...", buf,
-                            __func__, __LINE__);
-
-       /* we are safe to free it now */
-       free_netdev(dev);
-}
-
-/*
- * vxge_callback_crit_err
- *
- * This function is called by the alarm handler in interrupt context.
- * Driver must analyze it based on the event type.
- */
-static void
-vxge_callback_crit_err(struct __vxge_hw_device *hldev,
-                       enum vxge_hw_event type, u64 vp_id)
-{
-       struct net_device *dev = hldev->ndev;
-       struct vxgedev *vdev = netdev_priv(dev);
-       struct vxge_vpath *vpath = NULL;
-       int vpath_idx;
-
-       vxge_debug_entryexit(vdev->level_trace,
-               "%s: %s:%d", vdev->ndev->name, __func__, __LINE__);
-
-       /* Note: This event type should be used for device wide
-        * indications only - Serious errors, Slot freeze and critical errors
-        */
-       vdev->cric_err_event = type;
-
-       for (vpath_idx = 0; vpath_idx < vdev->no_of_vpath; vpath_idx++) {
-               vpath = &vdev->vpaths[vpath_idx];
-               if (vpath->device_id == vp_id)
-                       break;
-       }
-
-       if (!test_bit(__VXGE_STATE_RESET_CARD, &vdev->state)) {
-               if (type == VXGE_HW_EVENT_SLOT_FREEZE) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: Slot is frozen", vdev->ndev->name);
-               } else if (type == VXGE_HW_EVENT_SERR) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: Encountered Serious Error",
-                               vdev->ndev->name);
-               } else if (type == VXGE_HW_EVENT_CRITICAL_ERR)
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: Encountered Critical Error",
-                               vdev->ndev->name);
-       }
-
-       if ((type == VXGE_HW_EVENT_SERR) ||
-               (type == VXGE_HW_EVENT_SLOT_FREEZE)) {
-               if (unlikely(vdev->exec_mode))
-                       clear_bit(__VXGE_STATE_CARD_UP, &vdev->state);
-       } else if (type == VXGE_HW_EVENT_CRITICAL_ERR) {
-               vxge_hw_device_mask_all(hldev);
-               if (unlikely(vdev->exec_mode))
-                       clear_bit(__VXGE_STATE_CARD_UP, &vdev->state);
-       } else if ((type == VXGE_HW_EVENT_FIFO_ERR) ||
-                 (type == VXGE_HW_EVENT_VPATH_ERR)) {
-
-               if (unlikely(vdev->exec_mode))
-                       clear_bit(__VXGE_STATE_CARD_UP, &vdev->state);
-               else {
-                       /* check if this vpath is already set for reset */
-                       if (!test_and_set_bit(vpath_idx, &vdev->vp_reset)) {
-
-                               /* disable interrupts for this vpath */
-                               vxge_vpath_intr_disable(vdev, vpath_idx);
-
-                               /* stop the queue for this vpath */
-                               netif_tx_stop_queue(vpath->fifo.txq);
-                       }
-               }
-       }
-
-       vxge_debug_entryexit(vdev->level_trace,
-               "%s: %s:%d  Exiting...",
-               vdev->ndev->name, __func__, __LINE__);
-}
-
-static void verify_bandwidth(void)
-{
-       int i, band_width, total = 0, equal_priority = 0;
-
-       /* 1. If the user enters 0 for some fifo, give equal priority to all */
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (bw_percentage[i] == 0) {
-                       equal_priority = 1;
-                       break;
-               }
-       }
-
-       if (!equal_priority) {
-               /* 2. If sum exceeds 100, give equal priority to all */
-               for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-                       if (bw_percentage[i] == 0xFF)
-                               break;
-
-                       total += bw_percentage[i];
-                       if (total > VXGE_HW_VPATH_BANDWIDTH_MAX) {
-                               equal_priority = 1;
-                               break;
-                       }
-               }
-       }
-
-       if (!equal_priority) {
-               /* Is all the bandwidth consumed? */
-               if (total < VXGE_HW_VPATH_BANDWIDTH_MAX) {
-                       if (i < VXGE_HW_MAX_VIRTUAL_PATHS) {
-                               /* Split the rest of the bw equally among the remaining VPs */
-                               band_width =
-                                 (VXGE_HW_VPATH_BANDWIDTH_MAX  - total) /
-                                       (VXGE_HW_MAX_VIRTUAL_PATHS - i);
-                               if (band_width < 2) /* min of 2% */
-                                       equal_priority = 1;
-                               else {
-                                       for (; i < VXGE_HW_MAX_VIRTUAL_PATHS;
-                                               i++)
-                                               bw_percentage[i] =
-                                                       band_width;
-                               }
-                       }
-               } else if (i < VXGE_HW_MAX_VIRTUAL_PATHS)
-                       equal_priority = 1;
-       }
-
-       if (equal_priority) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s: Assigning equal bandwidth to all the vpaths",
-                       VXGE_DRIVER_NAME);
-               bw_percentage[0] = VXGE_HW_VPATH_BANDWIDTH_MAX /
-                                       VXGE_HW_MAX_VIRTUAL_PATHS;
-               for (i = 1; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++)
-                       bw_percentage[i] = bw_percentage[0];
-       }
-}
-
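-/* A worked example for verify_bandwidth() above, assuming
- * VXGE_HW_VPATH_BANDWIDTH_MAX == 100 and VXGE_HW_MAX_VIRTUAL_PATHS == 17
- * (values assumed for illustration). Suppose the user sets
- * bw_percentage = {40, 20} and leaves the rest at 0xFF. Then total = 60
- * at i = 2, and the leftover is split across the remaining paths:
- *
- *     band_width = (100 - 60) / (17 - 2) = 2;    // integer division
- *
- * Since 2% meets the minimum, vpaths 2..16 each get 2%. Had the result
- * dropped below 2, the driver would instead assign every vpath an equal
- * 100 / 17 = 5%.
- */
-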
-/*
- * Vpath configuration
- */
-static int vxge_config_vpaths(struct vxge_hw_device_config *device_config,
-                             u64 vpath_mask, struct vxge_config *config_param)
-{
-       int i, no_of_vpaths = 0, default_no_vpath = 0, temp;
-       u32 txdl_size, txdl_per_memblock;
-
-       temp = driver_config->vpath_per_dev;
-       if ((driver_config->vpath_per_dev == VXGE_USE_DEFAULT) &&
-               (max_config_dev == VXGE_MAX_CONFIG_DEV)) {
-               /* No more CPUs. Return the vpath count as zero. */
-               if (driver_config->g_no_cpus == -1)
-                       return 0;
-
-               if (!driver_config->g_no_cpus)
-                       driver_config->g_no_cpus =
-                               netif_get_num_default_rss_queues();
-
-               driver_config->vpath_per_dev = driver_config->g_no_cpus >> 1;
-               if (!driver_config->vpath_per_dev)
-                       driver_config->vpath_per_dev = 1;
-
-               for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++)
-                       if (vxge_bVALn(vpath_mask, i, 1))
-                               default_no_vpath++;
-
-               if (default_no_vpath < driver_config->vpath_per_dev)
-                       driver_config->vpath_per_dev = default_no_vpath;
-
-               driver_config->g_no_cpus = driver_config->g_no_cpus -
-                               (driver_config->vpath_per_dev * 2);
-               if (driver_config->g_no_cpus <= 0)
-                       driver_config->g_no_cpus = -1;
-       }
-
-       if (driver_config->vpath_per_dev == 1) {
-               vxge_debug_ll_config(VXGE_TRACE,
-                       "%s: Disable tx and rx steering, "
-                       "as single vpath is configured", VXGE_DRIVER_NAME);
-               config_param->rth_steering = NO_STEERING;
-               config_param->tx_steering_type = NO_STEERING;
-               device_config->rth_en = 0;
-       }
-
-       /* configure bandwidth */
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++)
-               device_config->vp_config[i].min_bandwidth = bw_percentage[i];
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               device_config->vp_config[i].vp_id = i;
-               device_config->vp_config[i].mtu = VXGE_HW_DEFAULT_MTU;
-               if (no_of_vpaths < driver_config->vpath_per_dev) {
-                       if (!vxge_bVALn(vpath_mask, i, 1)) {
-                               vxge_debug_ll_config(VXGE_TRACE,
-                                       "%s: vpath: %d is not available",
-                                       VXGE_DRIVER_NAME, i);
-                               continue;
-                       } else {
-                               vxge_debug_ll_config(VXGE_TRACE,
-                                       "%s: vpath: %d available",
-                                       VXGE_DRIVER_NAME, i);
-                               no_of_vpaths++;
-                       }
-               } else {
-                       vxge_debug_ll_config(VXGE_TRACE,
-                               "%s: vpath: %d is not configured, "
-                               "max_config_vpath exceeded",
-                               VXGE_DRIVER_NAME, i);
-                       break;
-               }
-
-               /* Configure Tx fifos */
-               device_config->vp_config[i].fifo.enable =
-                                               VXGE_HW_FIFO_ENABLE;
-               device_config->vp_config[i].fifo.max_frags =
-                               MAX_SKB_FRAGS + 1;
-               device_config->vp_config[i].fifo.memblock_size =
-                       VXGE_HW_MIN_FIFO_MEMBLOCK_SIZE;
-
-               txdl_size = device_config->vp_config[i].fifo.max_frags *
-                               sizeof(struct vxge_hw_fifo_txd);
-               txdl_per_memblock = VXGE_HW_MIN_FIFO_MEMBLOCK_SIZE / txdl_size;
-
-               device_config->vp_config[i].fifo.fifo_blocks =
-                       ((VXGE_DEF_FIFO_LENGTH - 1) / txdl_per_memblock) + 1;
-
-               device_config->vp_config[i].fifo.intr =
-                               VXGE_HW_FIFO_QUEUE_INTR_DISABLE;
-
-               /* Configure tti properties */
-               device_config->vp_config[i].tti.intr_enable =
-                                       VXGE_HW_TIM_INTR_ENABLE;
-
-               device_config->vp_config[i].tti.btimer_val =
-                       (VXGE_TTI_BTIMER_VAL * 1000) / 272;
-
-               device_config->vp_config[i].tti.timer_ac_en =
-                               VXGE_HW_TIM_TIMER_AC_ENABLE;
-
-               /* For msi-x with napi (each vector has a handler of its own) -
-                * Set CI to OFF for all vpaths
-                */
-               device_config->vp_config[i].tti.timer_ci_en =
-                       VXGE_HW_TIM_TIMER_CI_DISABLE;
-
-               device_config->vp_config[i].tti.timer_ri_en =
-                               VXGE_HW_TIM_TIMER_RI_DISABLE;
-
-               device_config->vp_config[i].tti.util_sel =
-                       VXGE_HW_TIM_UTIL_SEL_LEGACY_TX_NET_UTIL;
-
-               device_config->vp_config[i].tti.ltimer_val =
-                       (VXGE_TTI_LTIMER_VAL * 1000) / 272;
-
-               device_config->vp_config[i].tti.rtimer_val =
-                       (VXGE_TTI_RTIMER_VAL * 1000) / 272;
-
-               device_config->vp_config[i].tti.urange_a = TTI_TX_URANGE_A;
-               device_config->vp_config[i].tti.urange_b = TTI_TX_URANGE_B;
-               device_config->vp_config[i].tti.urange_c = TTI_TX_URANGE_C;
-               device_config->vp_config[i].tti.uec_a = TTI_TX_UFC_A;
-               device_config->vp_config[i].tti.uec_b = TTI_TX_UFC_B;
-               device_config->vp_config[i].tti.uec_c = TTI_TX_UFC_C;
-               device_config->vp_config[i].tti.uec_d = TTI_TX_UFC_D;
-
-               /* Configure Rx rings */
-               device_config->vp_config[i].ring.enable  =
-                                               VXGE_HW_RING_ENABLE;
-
-               device_config->vp_config[i].ring.ring_blocks  =
-                                               VXGE_HW_DEF_RING_BLOCKS;
-
-               device_config->vp_config[i].ring.buffer_mode =
-                       VXGE_HW_RING_RXD_BUFFER_MODE_1;
-
-               device_config->vp_config[i].ring.rxds_limit  =
-                               VXGE_HW_DEF_RING_RXDS_LIMIT;
-
-               device_config->vp_config[i].ring.scatter_mode =
-                                       VXGE_HW_RING_SCATTER_MODE_A;
-
-               /* Configure rti properties */
-               device_config->vp_config[i].rti.intr_enable =
-                                       VXGE_HW_TIM_INTR_ENABLE;
-
-               device_config->vp_config[i].rti.btimer_val =
-                       (VXGE_RTI_BTIMER_VAL * 1000)/272;
-
-               device_config->vp_config[i].rti.timer_ac_en =
-                                               VXGE_HW_TIM_TIMER_AC_ENABLE;
-
-               device_config->vp_config[i].rti.timer_ci_en =
-                                               VXGE_HW_TIM_TIMER_CI_DISABLE;
-
-               device_config->vp_config[i].rti.timer_ri_en =
-                                               VXGE_HW_TIM_TIMER_RI_DISABLE;
-
-               device_config->vp_config[i].rti.util_sel =
-                               VXGE_HW_TIM_UTIL_SEL_LEGACY_RX_NET_UTIL;
-
-               device_config->vp_config[i].rti.urange_a =
-                                               RTI_RX_URANGE_A;
-               device_config->vp_config[i].rti.urange_b =
-                                               RTI_RX_URANGE_B;
-               device_config->vp_config[i].rti.urange_c =
-                                               RTI_RX_URANGE_C;
-               device_config->vp_config[i].rti.uec_a = RTI_RX_UFC_A;
-               device_config->vp_config[i].rti.uec_b = RTI_RX_UFC_B;
-               device_config->vp_config[i].rti.uec_c = RTI_RX_UFC_C;
-               device_config->vp_config[i].rti.uec_d = RTI_RX_UFC_D;
-
-               device_config->vp_config[i].rti.rtimer_val =
-                       (VXGE_RTI_RTIMER_VAL * 1000) / 272;
-
-               device_config->vp_config[i].rti.ltimer_val =
-                       (VXGE_RTI_LTIMER_VAL * 1000) / 272;
-
-               device_config->vp_config[i].rpa_strip_vlan_tag =
-                       vlan_tag_strip;
-       }
-
-       driver_config->vpath_per_dev = temp;
-       return no_of_vpaths;
-}
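The fifo_blocks expression above is the standard integer ceiling division, writing ceil(a / b) as ((a - 1) / b) + 1. With the default VXGE_DEF_FIFO_LENGTH of 84 (from vxge-main.h) and a memblock that holds, say, 25 TxDs (a hypothetical figure, for illustration only):

	fifo_blocks = ((84 - 1) / 25) + 1;	/* 3 + 1 = 4 memblocks */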
-
-/* initialize device configurations */
-static void vxge_device_config_init(struct vxge_hw_device_config *device_config,
-                                   int *intr_type)
-{
-       /* Used for CQRQ/SRQ. */
-       device_config->dma_blockpool_initial =
-                       VXGE_HW_INITIAL_DMA_BLOCK_POOL_SIZE;
-
-       device_config->dma_blockpool_max =
-                       VXGE_HW_MAX_DMA_BLOCK_POOL_SIZE;
-
-       if (max_mac_vpath > VXGE_MAX_MAC_ADDR_COUNT)
-               max_mac_vpath = VXGE_MAX_MAC_ADDR_COUNT;
-
-       if (!IS_ENABLED(CONFIG_PCI_MSI)) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s: This Kernel does not support "
-                       "MSI-X. Defaulting to INTA", VXGE_DRIVER_NAME);
-               *intr_type = INTA;
-       }
-
-       /* Configure whether to use MSI-X or INTA (IRQ line). */
-       switch (*intr_type) {
-       case INTA:
-               device_config->intr_mode = VXGE_HW_INTR_MODE_IRQLINE;
-               break;
-
-       case MSI_X:
-               device_config->intr_mode = VXGE_HW_INTR_MODE_MSIX_ONE_SHOT;
-               break;
-       }
-
-       /* Timer period between device polls */
-       device_config->device_poll_millis = VXGE_TIMER_DELAY;
-
-       /* Configure mac based steering. */
-       device_config->rts_mac_en = addr_learn_en;
-
-       /* Configure Vpaths */
-       device_config->rth_it_type = VXGE_HW_RTH_IT_TYPE_MULTI_IT;
-
-       vxge_debug_ll_config(VXGE_TRACE, "%s : Device Config Params ",
-                       __func__);
-       vxge_debug_ll_config(VXGE_TRACE, "intr_mode : %d",
-                       device_config->intr_mode);
-       vxge_debug_ll_config(VXGE_TRACE, "device_poll_millis : %d",
-                       device_config->device_poll_millis);
-       vxge_debug_ll_config(VXGE_TRACE, "rth_en : %d",
-                       device_config->rth_en);
-       vxge_debug_ll_config(VXGE_TRACE, "rth_it_type : %d",
-                       device_config->rth_it_type);
-}
-
-static void vxge_print_parm(struct vxgedev *vdev, u64 vpath_mask)
-{
-       int i;
-
-       vxge_debug_init(VXGE_TRACE,
-               "%s: %d Vpath(s) opened",
-               vdev->ndev->name, vdev->no_of_vpath);
-
-       switch (vdev->config.intr_type) {
-       case INTA:
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Interrupt type INTA", vdev->ndev->name);
-               break;
-
-       case MSI_X:
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Interrupt type MSI-X", vdev->ndev->name);
-               break;
-       }
-
-       if (vdev->config.rth_steering) {
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: RTH steering enabled for TCP_IPV4",
-                       vdev->ndev->name);
-       } else {
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: RTH steering disabled", vdev->ndev->name);
-       }
-
-       switch (vdev->config.tx_steering_type) {
-       case NO_STEERING:
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Tx steering disabled", vdev->ndev->name);
-               break;
-       case TX_PRIORITY_STEERING:
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Unsupported tx steering option",
-                       vdev->ndev->name);
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Tx steering disabled", vdev->ndev->name);
-               vdev->config.tx_steering_type = 0;
-               break;
-       case TX_VLAN_STEERING:
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Unsupported tx steering option",
-                       vdev->ndev->name);
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Tx steering disabled", vdev->ndev->name);
-               vdev->config.tx_steering_type = 0;
-               break;
-       case TX_MULTIQ_STEERING:
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Tx multiqueue steering enabled",
-                       vdev->ndev->name);
-               break;
-       case TX_PORT_STEERING:
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Tx port steering enabled",
-                       vdev->ndev->name);
-               break;
-       default:
-               vxge_debug_init(VXGE_ERR,
-                       "%s: Unsupported tx steering type",
-                       vdev->ndev->name);
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: Tx steering disabled", vdev->ndev->name);
-               vdev->config.tx_steering_type = 0;
-       }
-
-       if (vdev->config.addr_learn_en)
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: MAC Address learning enabled", vdev->ndev->name);
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!vxge_bVALn(vpath_mask, i, 1))
-                       continue;
-               vxge_debug_ll_config(VXGE_TRACE,
-                       "%s: MTU size - %d", vdev->ndev->name,
-                       ((vdev->devh))->
-                               config.vp_config[i].mtu);
-               vxge_debug_init(VXGE_TRACE,
-                       "%s: VLAN tag stripping %s", vdev->ndev->name,
-                       ((vdev->devh))->
-                               config.vp_config[i].rpa_strip_vlan_tag
-                       ? "Enabled" : "Disabled");
-               vxge_debug_ll_config(VXGE_TRACE,
-                       "%s: Max frags : %d", vdev->ndev->name,
-                       ((vdev->devh))->
-                               config.vp_config[i].fifo.max_frags);
-               break;
-       }
-}
-
-/**
- * vxge_pm_suspend - vxge power management suspend entry point
- * @dev_d: device pointer
- *
- */
-static int __maybe_unused vxge_pm_suspend(struct device *dev_d)
-{
-       return -ENOSYS;
-}
-/**
- * vxge_pm_resume - vxge power management resume entry point
- * @dev_d: device pointer
- *
- */
-static int __maybe_unused vxge_pm_resume(struct device *dev_d)
-{
-       return -ENOSYS;
-}
-
-/**
- * vxge_io_error_detected - called when PCI error is detected
- * @pdev: Pointer to PCI device
- * @state: The current pci connection state
- *
- * This function is called after a PCI bus error affecting
- * this device has been detected.
- */
-static pci_ers_result_t vxge_io_error_detected(struct pci_dev *pdev,
-                                               pci_channel_state_t state)
-{
-       struct __vxge_hw_device *hldev = pci_get_drvdata(pdev);
-       struct net_device *netdev = hldev->ndev;
-
-       netif_device_detach(netdev);
-
-       if (state == pci_channel_io_perm_failure)
-               return PCI_ERS_RESULT_DISCONNECT;
-
-       if (netif_running(netdev)) {
-               /* Bring down the card, while avoiding PCI I/O */
-               do_vxge_close(netdev, 0);
-       }
-
-       pci_disable_device(pdev);
-
-       return PCI_ERS_RESULT_NEED_RESET;
-}
-
-/**
- * vxge_io_slot_reset - called after the pci bus has been reset.
- * @pdev: Pointer to PCI device
- *
- * Restart the card from scratch, as if from a cold-boot.
- * At this point, the card has experienced a hard reset,
- * followed by fixups by BIOS, and has its config space
- * set up identically to what it was at cold boot.
- */
-static pci_ers_result_t vxge_io_slot_reset(struct pci_dev *pdev)
-{
-       struct __vxge_hw_device *hldev = pci_get_drvdata(pdev);
-       struct net_device *netdev = hldev->ndev;
-
-       struct vxgedev *vdev = netdev_priv(netdev);
-
-       if (pci_enable_device(pdev)) {
-               netdev_err(netdev, "Cannot re-enable device after reset\n");
-               return PCI_ERS_RESULT_DISCONNECT;
-       }
-
-       pci_set_master(pdev);
-       do_vxge_reset(vdev, VXGE_LL_FULL_RESET);
-
-       return PCI_ERS_RESULT_RECOVERED;
-}
-
-/**
- * vxge_io_resume - called when traffic can start flowing again.
- * @pdev: Pointer to PCI device
- *
- * This callback is called when the error recovery driver tells
- * us that it's OK to resume normal operation.
- */
-static void vxge_io_resume(struct pci_dev *pdev)
-{
-       struct __vxge_hw_device *hldev = pci_get_drvdata(pdev);
-       struct net_device *netdev = hldev->ndev;
-
-       if (netif_running(netdev)) {
-               if (vxge_open(netdev)) {
-                       netdev_err(netdev,
-                                  "Can't bring device back up after reset\n");
-                       return;
-               }
-       }
-
-       netif_device_attach(netdev);
-}
-
-static inline u32 vxge_get_num_vfs(u64 function_mode)
-{
-       u32 num_functions = 0;
-
-       switch (function_mode) {
-       case VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION:
-       case VXGE_HW_FUNCTION_MODE_SRIOV_8:
-               num_functions = 8;
-               break;
-       case VXGE_HW_FUNCTION_MODE_SINGLE_FUNCTION:
-               num_functions = 1;
-               break;
-       case VXGE_HW_FUNCTION_MODE_SRIOV:
-       case VXGE_HW_FUNCTION_MODE_MRIOV:
-       case VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION_17:
-               num_functions = 17;
-               break;
-       case VXGE_HW_FUNCTION_MODE_SRIOV_4:
-               num_functions = 4;
-               break;
-       case VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION_2:
-               num_functions = 2;
-               break;
-       case VXGE_HW_FUNCTION_MODE_MRIOV_8:
-               num_functions = 8; /* TODO */
-               break;
-       }
-       return num_functions;
-}
-
-int vxge_fw_upgrade(struct vxgedev *vdev, char *fw_name, int override)
-{
-       struct __vxge_hw_device *hldev = vdev->devh;
-       u32 maj, min, bld, cmaj, cmin, cbld;
-       enum vxge_hw_status status;
-       const struct firmware *fw;
-       int ret;
-
-       ret = request_firmware(&fw, fw_name, &vdev->pdev->dev);
-       if (ret) {
-               vxge_debug_init(VXGE_ERR, "%s: Firmware file '%s' not found",
-                               VXGE_DRIVER_NAME, fw_name);
-               goto out;
-       }
-
-       /* Load the new firmware onto the adapter */
-       status = vxge_update_fw_image(hldev, fw->data, fw->size);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                               "%s: FW image download to adapter failed '%s'.",
-                               VXGE_DRIVER_NAME, fw_name);
-               ret = -EIO;
-               goto out;
-       }
-
-       /* Read the version of the new firmware */
-       status = vxge_hw_upgrade_read_version(hldev, &maj, &min, &bld);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                               "%s: Upgrade read version failed '%s'.",
-                               VXGE_DRIVER_NAME, fw_name);
-               ret = -EIO;
-               goto out;
-       }
-
-       cmaj = vdev->config.device_hw_info.fw_version.major;
-       cmin = vdev->config.device_hw_info.fw_version.minor;
-       cbld = vdev->config.device_hw_info.fw_version.build;
-       /* It's possible the version in /lib/firmware is not the latest version.
-        * If so, we could get into a loop of trying to upgrade to the latest
-        * and flashing the older version.
-        */
-       if (VXGE_FW_VER(maj, min, bld) == VXGE_FW_VER(cmaj, cmin, cbld) &&
-           !override) {
-               ret = -EINVAL;
-               goto out;
-       }
-
-       printk(KERN_NOTICE "Upgrade to firmware version %d.%d.%d commencing\n",
-              maj, min, bld);
-
-       /* Flash the adapter with the new firmware */
-       status = vxge_hw_flash_fw(hldev);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR, "%s: Upgrade commit failed '%s'.",
-                               VXGE_DRIVER_NAME, fw_name);
-               ret = -EIO;
-               goto out;
-       }
-
-       printk(KERN_NOTICE "Upgrade of firmware successful!  Adapter must be "
-              "hard reset before using, thus requiring a system reboot or a "
-              "hotplug event.\n");
-
-out:
-       release_firmware(fw);
-       return ret;
-}
-
-static int vxge_probe_fw_update(struct vxgedev *vdev)
-{
-       u32 maj, min, bld;
-       int ret, gpxe = 0;
-       char *fw_name;
-
-       maj = vdev->config.device_hw_info.fw_version.major;
-       min = vdev->config.device_hw_info.fw_version.minor;
-       bld = vdev->config.device_hw_info.fw_version.build;
-
-       if (VXGE_FW_VER(maj, min, bld) == VXGE_CERT_FW_VER)
-               return 0;
-
-       /* Ignore the build number when determining if the current firmware is
-        * "too new" to load the driver
-        */
-       if (VXGE_FW_VER(maj, min, 0) > VXGE_CERT_FW_VER) {
-               vxge_debug_init(VXGE_ERR, "%s: Firmware newer than last known "
-                               "version, unable to load driver\n",
-                               VXGE_DRIVER_NAME);
-               return -EINVAL;
-       }
-
-       /* Firmware 1.4.4 and older cannot be upgraded, and is too ancient to
-        * work with this driver.
-        */
-       if (VXGE_FW_VER(maj, min, bld) <= VXGE_FW_DEAD_VER) {
-               vxge_debug_init(VXGE_ERR, "%s: Firmware %d.%d.%d cannot be "
-                               "upgraded\n", VXGE_DRIVER_NAME, maj, min, bld);
-               return -EINVAL;
-       }
-
-       /* No firmware file was specified: determine whether to use the gPXE image */
-       if (VXGE_FW_VER(maj, min, bld) >= VXGE_EPROM_FW_VER) {
-               int i;
-               for (i = 0; i < VXGE_HW_MAX_ROM_IMAGES; i++)
-                       if (vdev->devh->eprom_versions[i]) {
-                               gpxe = 1;
-                               break;
-                       }
-       }
-       if (gpxe)
-               fw_name = "vxge/X3fw-pxe.ncf";
-       else
-               fw_name = "vxge/X3fw.ncf";
-
-       ret = vxge_fw_upgrade(vdev, fw_name, 0);
-       /* -EINVAL and -ENOENT are not fatal errors for flashing firmware on
-        * probe, so ignore them
-        */
-       if (ret != -EINVAL && ret != -ENOENT)
-               return -EIO;
-       else
-               ret = 0;
-
-       if (VXGE_FW_VER(VXGE_CERT_FW_VER_MAJOR, VXGE_CERT_FW_VER_MINOR, 0) >
-           VXGE_FW_VER(maj, min, 0)) {
-               vxge_debug_init(VXGE_ERR, "%s: Firmware %d.%d.%d is too old to"
-                               " be used with this driver.",
-                               VXGE_DRIVER_NAME, maj, min, bld);
-               return -EINVAL;
-       }
-
-       return ret;
-}
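The gating above relies on VXGE_FW_VER() packing major/minor/build into a single monotonically comparable integer. A sketch of the packing this presumes (the real macro lives in the driver's config header, so treat the exact shifts as an assumption):

	/* assumed packing: major/minor/build folded into one comparable int */
	#define VXGE_FW_VER(maj, min, bld)	(((maj) << 16) + ((min) << 8) + (bld))

	/* e.g. 1.8.1 packs to 0x010801, so any 1.8.x compares greater
	 * than any 1.7.y, and the build number breaks ties */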
-
-static int is_sriov_initialized(struct pci_dev *pdev)
-{
-       int pos;
-       u16 ctrl;
-
-       pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
-       if (pos) {
-               pci_read_config_word(pdev, pos + PCI_SRIOV_CTRL, &ctrl);
-               if (ctrl & PCI_SRIOV_CTRL_VFE)
-                       return 1;
-       }
-       return 0;
-}
-
-static const struct vxge_hw_uld_cbs vxge_callbacks = {
-       .link_up = vxge_callback_link_up,
-       .link_down = vxge_callback_link_down,
-       .crit_err = vxge_callback_crit_err,
-};
-
-/**
- * vxge_probe - probe entry point for the vxge driver
- * @pdev: structure containing the PCI related information of the device.
- * @pre: the entry of vxge_id_table (the list of PCI devices supported by the
- * driver) that matched this device.
- * Description:
- * This function is called when a new PCI device gets detected and initializes
- * it.
- * Return value:
- * returns 0 on success and negative on failure.
- *
- */
-static int
-vxge_probe(struct pci_dev *pdev, const struct pci_device_id *pre)
-{
-       struct __vxge_hw_device *hldev;
-       enum vxge_hw_status status;
-       int ret;
-       u64 vpath_mask = 0;
-       struct vxgedev *vdev;
-       struct vxge_config *ll_config = NULL;
-       struct vxge_hw_device_config *device_config = NULL;
-       struct vxge_hw_device_attr attr;
-       int i, j, no_of_vpath = 0, max_vpath_supported = 0;
-       u8 *macaddr;
-       struct vxge_mac_addrs *entry;
-       static int bus = -1, device = -1;
-       u32 host_type;
-       u8 new_device = 0;
-       enum vxge_hw_status is_privileged;
-       u32 function_mode;
-       u32 num_vfs = 0;
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s:%d", __func__, __LINE__);
-       attr.pdev = pdev;
-
-       /* In SRIOV-17 mode, functions of the same adapter
-        * can be deployed on different buses
-        */
-       if (((bus != pdev->bus->number) || (device != PCI_SLOT(pdev->devfn))) &&
-           !pdev->is_virtfn)
-               new_device = 1;
-
-       bus = pdev->bus->number;
-       device = PCI_SLOT(pdev->devfn);
-
-       if (new_device) {
-               if (driver_config->config_dev_cnt &&
-                  (driver_config->config_dev_cnt !=
-                       driver_config->total_dev_cnt))
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: Configured %d of %d devices",
-                               VXGE_DRIVER_NAME,
-                               driver_config->config_dev_cnt,
-                               driver_config->total_dev_cnt);
-               driver_config->config_dev_cnt = 0;
-               driver_config->total_dev_cnt = 0;
-       }
-
-       /* Make the CPU-based vpath-count calculation applicable
-        * to individual functions as well.
-        */
-       driver_config->g_no_cpus = 0;
-       driver_config->vpath_per_dev = max_config_vpath;
-
-       driver_config->total_dev_cnt++;
-       if (++driver_config->config_dev_cnt > max_config_dev) {
-               ret = 0;
-               goto _exit0;
-       }
-
-       device_config = kzalloc(sizeof(struct vxge_hw_device_config),
-               GFP_KERNEL);
-       if (!device_config) {
-               ret = -ENOMEM;
-               vxge_debug_init(VXGE_ERR,
-                       "device_config : malloc failed %s %d",
-                       __FILE__, __LINE__);
-               goto _exit0;
-       }
-
-       ll_config = kzalloc(sizeof(struct vxge_config), GFP_KERNEL);
-       if (!ll_config) {
-               ret = -ENOMEM;
-               vxge_debug_init(VXGE_ERR,
-                       "ll_config : malloc failed %s %d",
-                       __FILE__, __LINE__);
-               goto _exit0;
-       }
-       ll_config->tx_steering_type = TX_MULTIQ_STEERING;
-       ll_config->intr_type = MSI_X;
-       ll_config->napi_weight = NAPI_POLL_WEIGHT;
-       ll_config->rth_steering = RTH_STEERING;
-
-       /* get the default configuration parameters */
-       vxge_hw_device_config_default_get(device_config);
-
-       /* initialize configuration parameters */
-       vxge_device_config_init(device_config, &ll_config->intr_type);
-
-       ret = pci_enable_device(pdev);
-       if (ret) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s : can not enable PCI device", __func__);
-               goto _exit0;
-       }
-
-       if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64))) {
-               vxge_debug_ll_config(VXGE_TRACE,
-                       "%s : using 64bit DMA", __func__);
-       } else {
-               ret = -ENOMEM;
-               goto _exit1;
-       }
-
-       ret = pci_request_region(pdev, 0, VXGE_DRIVER_NAME);
-       if (ret) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s : request regions failed", __func__);
-               goto _exit1;
-       }
-
-       pci_set_master(pdev);
-
-       attr.bar0 = pci_ioremap_bar(pdev, 0);
-       if (!attr.bar0) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s : cannot remap io memory bar0", __func__);
-               ret = -ENODEV;
-               goto _exit2;
-       }
-       vxge_debug_ll_config(VXGE_TRACE,
-               "pci ioremap bar0: %p:0x%llx",
-               attr.bar0,
-               (unsigned long long)pci_resource_start(pdev, 0));
-
-       status = vxge_hw_device_hw_info_get(attr.bar0,
-                       &ll_config->device_hw_info);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "%s: Reading of hardware info failed. "
-                       "Please try upgrading the firmware.", VXGE_DRIVER_NAME);
-               ret = -EINVAL;
-               goto _exit3;
-       }
-
-       vpath_mask = ll_config->device_hw_info.vpath_mask;
-       if (vpath_mask == 0) {
-               vxge_debug_ll_config(VXGE_TRACE,
-                       "%s: No vpaths available in device", VXGE_DRIVER_NAME);
-               ret = -EINVAL;
-               goto _exit3;
-       }
-
-       vxge_debug_ll_config(VXGE_TRACE,
-               "%s:%d  Vpath mask = %llx", __func__, __LINE__,
-               (unsigned long long)vpath_mask);
-
-       function_mode = ll_config->device_hw_info.function_mode;
-       host_type = ll_config->device_hw_info.host_type;
-       is_privileged = __vxge_hw_device_is_privilaged(host_type,
-               ll_config->device_hw_info.func_id);
-
-       /* Check how many vpaths are available */
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!((vpath_mask) & vxge_mBIT(i)))
-                       continue;
-               max_vpath_supported++;
-       }
-
-       if (new_device)
-               num_vfs = vxge_get_num_vfs(function_mode) - 1;
-
-       /* Enable SRIOV mode, if firmware has SRIOV support and if it is a PF */
-       if (is_sriov(function_mode) && !is_sriov_initialized(pdev) &&
-          (ll_config->intr_type != INTA)) {
-               ret = pci_enable_sriov(pdev, num_vfs);
-               if (ret)
-                       vxge_debug_ll_config(VXGE_ERR,
-                               "Failed in enabling SRIOV mode: %d\n", ret);
-                       /* No need to fail out, as an error here is non-fatal */
-       }
-
-       /*
-        * Configure vpaths and get driver configured number of vpaths
-        * which is less than or equal to the maximum vpaths per function.
-        */
-       no_of_vpath = vxge_config_vpaths(device_config, vpath_mask, ll_config);
-       if (!no_of_vpath) {
-               vxge_debug_ll_config(VXGE_ERR,
-                       "%s: No more vpaths to configure", VXGE_DRIVER_NAME);
-               ret = 0;
-               goto _exit3;
-       }
-
-       /* Setting driver callbacks */
-       attr.uld_callbacks = &vxge_callbacks;
-
-       status = vxge_hw_device_initialize(&hldev, &attr, device_config);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR,
-                       "Failed to initialize device (%d)", status);
-               ret = -EINVAL;
-               goto _exit3;
-       }
-
-       if (VXGE_FW_VER(ll_config->device_hw_info.fw_version.major,
-                       ll_config->device_hw_info.fw_version.minor,
-                       ll_config->device_hw_info.fw_version.build) >=
-           VXGE_EPROM_FW_VER) {
-               struct eprom_image img[VXGE_HW_MAX_ROM_IMAGES];
-
-               status = vxge_hw_vpath_eprom_img_ver_get(hldev, img);
-               if (status != VXGE_HW_OK) {
-                       vxge_debug_init(VXGE_ERR, "%s: Reading of EPROM failed",
-                                       VXGE_DRIVER_NAME);
-                       /* This is a non-fatal error, continue */
-               }
-
-               for (i = 0; i < VXGE_HW_MAX_ROM_IMAGES; i++) {
-                       hldev->eprom_versions[i] = img[i].version;
-                       if (!img[i].is_valid)
-                               break;
-                       vxge_debug_init(VXGE_TRACE, "%s: EPROM %d, version "
-                                       "%d.%d.%d.%d", VXGE_DRIVER_NAME, i,
-                                       VXGE_EPROM_IMG_MAJOR(img[i].version),
-                                       VXGE_EPROM_IMG_MINOR(img[i].version),
-                                       VXGE_EPROM_IMG_FIX(img[i].version),
-                                       VXGE_EPROM_IMG_BUILD(img[i].version));
-               }
-       }
-
-       /* If FCS stripping is not disabled in the MAC, fail the driver load */
-       status = vxge_hw_vpath_strip_fcs_check(hldev, vpath_mask);
-       if (status != VXGE_HW_OK) {
-               vxge_debug_init(VXGE_ERR, "%s: FCS stripping is enabled in MAC"
-                               " failing driver load", VXGE_DRIVER_NAME);
-               ret = -EINVAL;
-               goto _exit4;
-       }
-
-       /* Always enable HWTS.  This will always cause the FCS to be invalid,
-        * due to the fact that HWTS is using the FCS as the location of the
-        * timestamp.  The HW FCS checking will still correctly determine if
-        * there is a valid checksum, and the FCS is being removed by the driver
-        * anyway.  So no functionality is being lost.  Since it is always
-        * enabled, we now simply use the ioctl call to set whether or not the
-        * driver should be paying attention to the HWTS.
-        */
-       if (is_privileged == VXGE_HW_OK) {
-               status = vxge_timestamp_config(hldev);
-               if (status != VXGE_HW_OK) {
-                       vxge_debug_init(VXGE_ERR, "%s: HWTS enable failed",
-                                       VXGE_DRIVER_NAME);
-                       ret = -EFAULT;
-                       goto _exit4;
-               }
-       }
-
-       vxge_hw_device_debug_set(hldev, VXGE_ERR, VXGE_COMPONENT_LL);
-
-       /* set private device info */
-       pci_set_drvdata(pdev, hldev);
-
-       ll_config->fifo_indicate_max_pkts = VXGE_FIFO_INDICATE_MAX_PKTS;
-       ll_config->addr_learn_en = addr_learn_en;
-       ll_config->rth_algorithm = RTH_ALG_JENKINS;
-       ll_config->rth_hash_type_tcpipv4 = 1;
-       ll_config->rth_hash_type_ipv4 = 0;
-       ll_config->rth_hash_type_tcpipv6 = 0;
-       ll_config->rth_hash_type_ipv6 = 0;
-       ll_config->rth_hash_type_tcpipv6ex = 0;
-       ll_config->rth_hash_type_ipv6ex = 0;
-       ll_config->rth_bkt_sz = RTH_BUCKET_SIZE;
-       ll_config->tx_pause_enable = VXGE_PAUSE_CTRL_ENABLE;
-       ll_config->rx_pause_enable = VXGE_PAUSE_CTRL_ENABLE;
-
-       ret = vxge_device_register(hldev, ll_config, no_of_vpath, &vdev);
-       if (ret) {
-               ret = -EINVAL;
-               goto _exit4;
-       }
-
-       ret = vxge_probe_fw_update(vdev);
-       if (ret)
-               goto _exit5;
-
-       vxge_hw_device_debug_set(hldev, VXGE_TRACE, VXGE_COMPONENT_LL);
-       VXGE_COPY_DEBUG_INFO_TO_LL(vdev, vxge_hw_device_error_level_get(hldev),
-               vxge_hw_device_trace_level_get(hldev));
-
-       /* set private HW device info */
-       vdev->mtu = VXGE_HW_DEFAULT_MTU;
-       vdev->bar0 = attr.bar0;
-       vdev->max_vpath_supported = max_vpath_supported;
-       vdev->no_of_vpath = no_of_vpath;
-
-       /* Initialize the configured virtual paths */
-       for (i = 0, j = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-               if (!vxge_bVALn(vpath_mask, i, 1))
-                       continue;
-               if (j >= vdev->no_of_vpath)
-                       break;
-
-               vdev->vpaths[j].is_configured = 1;
-               vdev->vpaths[j].device_id = i;
-               vdev->vpaths[j].ring.driver_id = j;
-               vdev->vpaths[j].vdev = vdev;
-               vdev->vpaths[j].max_mac_addr_cnt = max_mac_vpath;
-               memcpy((u8 *)vdev->vpaths[j].macaddr,
-                               ll_config->device_hw_info.mac_addrs[i],
-                               ETH_ALEN);
-
-               /* Initialize the mac address list header */
-               INIT_LIST_HEAD(&vdev->vpaths[j].mac_addr_list);
-
-               vdev->vpaths[j].mac_addr_cnt = 0;
-               vdev->vpaths[j].mcast_addr_cnt = 0;
-               j++;
-       }
-       vdev->exec_mode = VXGE_EXEC_MODE_DISABLE;
-       vdev->max_config_port = max_config_port;
-
-       vdev->vlan_tag_strip = vlan_tag_strip;
-
-       /* map the hashing selector table to the configured vpaths */
-       for (i = 0; i < vdev->no_of_vpath; i++)
-               vdev->vpath_selector[i] = vpath_selector[i];
-
-       macaddr = (u8 *)vdev->vpaths[0].macaddr;
-
-       ll_config->device_hw_info.serial_number[VXGE_HW_INFO_LEN - 1] = '\0';
-       ll_config->device_hw_info.product_desc[VXGE_HW_INFO_LEN - 1] = '\0';
-       ll_config->device_hw_info.part_number[VXGE_HW_INFO_LEN - 1] = '\0';
-
-       vxge_debug_init(VXGE_TRACE, "%s: SERIAL NUMBER: %s",
-               vdev->ndev->name, ll_config->device_hw_info.serial_number);
-
-       vxge_debug_init(VXGE_TRACE, "%s: PART NUMBER: %s",
-               vdev->ndev->name, ll_config->device_hw_info.part_number);
-
-       vxge_debug_init(VXGE_TRACE, "%s: Neterion %s Server Adapter",
-               vdev->ndev->name, ll_config->device_hw_info.product_desc);
-
-       vxge_debug_init(VXGE_TRACE, "%s: MAC ADDR: %pM",
-               vdev->ndev->name, macaddr);
-
-       vxge_debug_init(VXGE_TRACE, "%s: Link Width x%d",
-               vdev->ndev->name, vxge_hw_device_link_width_get(hldev));
-
-       vxge_debug_init(VXGE_TRACE,
-               "%s: Firmware version : %s Date : %s", vdev->ndev->name,
-               ll_config->device_hw_info.fw_version.version,
-               ll_config->device_hw_info.fw_date.date);
-
-       if (new_device) {
-               switch (ll_config->device_hw_info.function_mode) {
-               case VXGE_HW_FUNCTION_MODE_SINGLE_FUNCTION:
-                       vxge_debug_init(VXGE_TRACE,
-                       "%s: Single Function Mode Enabled", vdev->ndev->name);
-               break;
-               case VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION:
-                       vxge_debug_init(VXGE_TRACE,
-                       "%s: Multi Function Mode Enabled", vdev->ndev->name);
-               break;
-               case VXGE_HW_FUNCTION_MODE_SRIOV:
-                       vxge_debug_init(VXGE_TRACE,
-                       "%s: Single Root IOV Mode Enabled", vdev->ndev->name);
-               break;
-               case VXGE_HW_FUNCTION_MODE_MRIOV:
-                       vxge_debug_init(VXGE_TRACE,
-                       "%s: Multi Root IOV Mode Enabled", vdev->ndev->name);
-               break;
-               }
-       }
-
-       vxge_print_parm(vdev, vpath_mask);
-
-       /* Store the fw version for the ethtool option */
-       strcpy(vdev->fw_version, ll_config->device_hw_info.fw_version.version);
-       eth_hw_addr_set(vdev->ndev, (u8 *)vdev->vpaths[0].macaddr);
-
-       /* Copy the station mac address to the list */
-       for (i = 0; i < vdev->no_of_vpath; i++) {
-               entry = kzalloc(sizeof(struct vxge_mac_addrs), GFP_KERNEL);
-               if (NULL == entry) {
-                       vxge_debug_init(VXGE_ERR,
-                               "%s: mac_addr_list : memory allocation failed",
-                               vdev->ndev->name);
-                       ret = -EPERM;
-                       goto _exit6;
-               }
-               macaddr = (u8 *)&entry->macaddr;
-               memcpy(macaddr, vdev->ndev->dev_addr, ETH_ALEN);
-               list_add(&entry->item, &vdev->vpaths[i].mac_addr_list);
-               vdev->vpaths[i].mac_addr_cnt = 1;
-       }
-
-       kfree(device_config);
-
-       /*
-        * INTA is shared in multi-function mode. This is unlike the INTA
-        * implementation in MR mode, where each VH has its own INTA message.
-        * - INTA is masked (disabled) as long as at least one function sets
-        * its TITAN_MASK_ALL_INT.ALARM bit.
-        * - INTA is unmasked (enabled) when all enabled functions have cleared
-        * their own TITAN_MASK_ALL_INT.ALARM bit.
-        * The TITAN_MASK_ALL_INT ALARM & TRAFFIC bits are cleared on power up.
-        * Though this driver leaves the top level interrupts unmasked while
-        * leaving the required module interrupt bits masked on exit, there
-        * could be a rogue driver around that does not follow this procedure
-        * resulting in a failure to generate interrupts. The following code is
-        * present to prevent such a failure.
-        */
-
-       if (ll_config->device_hw_info.function_mode ==
-               VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION)
-               if (vdev->config.intr_type == INTA)
-                       vxge_hw_device_unmask_all(hldev);
-
-       vxge_debug_entryexit(VXGE_TRACE, "%s: %s:%d  Exiting...",
-               vdev->ndev->name, __func__, __LINE__);
-
-       vxge_hw_device_debug_set(hldev, VXGE_ERR, VXGE_COMPONENT_LL);
-       VXGE_COPY_DEBUG_INFO_TO_LL(vdev, vxge_hw_device_error_level_get(hldev),
-               vxge_hw_device_trace_level_get(hldev));
-
-       kfree(ll_config);
-       return 0;
-
-_exit6:
-       for (i = 0; i < vdev->no_of_vpath; i++)
-               vxge_free_mac_add_list(&vdev->vpaths[i]);
-_exit5:
-       vxge_device_unregister(hldev);
-_exit4:
-       vxge_hw_device_terminate(hldev);
-       pci_disable_sriov(pdev);
-_exit3:
-       iounmap(attr.bar0);
-_exit2:
-       pci_release_region(pdev, 0);
-_exit1:
-       pci_disable_device(pdev);
-_exit0:
-       kfree(ll_config);
-       kfree(device_config);
-       driver_config->config_dev_cnt--;
-       driver_config->total_dev_cnt--;
-       return ret;
-}
-
-/**
- * vxge_remove - Free the PCI device
- * @pdev: structure containing the PCI related information of the device.
- * Description: This function is called by the PCI subsystem to release a
- * PCI device and free up all resources held by the device.
- */
-static void vxge_remove(struct pci_dev *pdev)
-{
-       struct __vxge_hw_device *hldev;
-       struct vxgedev *vdev;
-       int i;
-
-       hldev = pci_get_drvdata(pdev);
-       if (hldev == NULL)
-               return;
-
-       vdev = netdev_priv(hldev->ndev);
-
-       vxge_debug_entryexit(vdev->level_trace, "%s:%d", __func__, __LINE__);
-       vxge_debug_init(vdev->level_trace, "%s : removing PCI device...",
-                       __func__);
-
-       for (i = 0; i < vdev->no_of_vpath; i++)
-               vxge_free_mac_add_list(&vdev->vpaths[i]);
-
-       vxge_device_unregister(hldev);
-       /* Do not call pci_disable_sriov here, as it will break child devices */
-       vxge_hw_device_terminate(hldev);
-       iounmap(vdev->bar0);
-       pci_release_region(pdev, 0);
-       pci_disable_device(pdev);
-       driver_config->config_dev_cnt--;
-       driver_config->total_dev_cnt--;
-
-       vxge_debug_init(vdev->level_trace, "%s:%d Device unregistered",
-                       __func__, __LINE__);
-       vxge_debug_entryexit(vdev->level_trace, "%s:%d  Exiting...", __func__,
-                            __LINE__);
-}
-
-static const struct pci_error_handlers vxge_err_handler = {
-       .error_detected = vxge_io_error_detected,
-       .slot_reset = vxge_io_slot_reset,
-       .resume = vxge_io_resume,
-};
-
-static SIMPLE_DEV_PM_OPS(vxge_pm_ops, vxge_pm_suspend, vxge_pm_resume);
-
-static struct pci_driver vxge_driver = {
-       .name = VXGE_DRIVER_NAME,
-       .id_table = vxge_id_table,
-       .probe = vxge_probe,
-       .remove = vxge_remove,
-       .driver.pm = &vxge_pm_ops,
-       .err_handler = &vxge_err_handler,
-};
-
-static int __init
-vxge_starter(void)
-{
-       int ret = 0;
-
-       pr_info("Copyright(c) 2002-2010 Exar Corp.\n");
-       pr_info("Driver version: %s\n", DRV_VERSION);
-
-       verify_bandwidth();
-
-       driver_config = kzalloc(sizeof(struct vxge_drv_config), GFP_KERNEL);
-       if (!driver_config)
-               return -ENOMEM;
-
-       ret = pci_register_driver(&vxge_driver);
-       if (ret) {
-               kfree(driver_config);
-               goto err;
-       }
-
-       if (driver_config->config_dev_cnt &&
-          (driver_config->config_dev_cnt != driver_config->total_dev_cnt))
-               vxge_debug_init(VXGE_ERR,
-                       "%s: Configured %d of %d devices",
-                       VXGE_DRIVER_NAME, driver_config->config_dev_cnt,
-                       driver_config->total_dev_cnt);
-err:
-       return ret;
-}
-
-static void __exit
-vxge_closer(void)
-{
-       pci_unregister_driver(&vxge_driver);
-       kfree(driver_config);
-}
-module_init(vxge_starter);
-module_exit(vxge_closer);
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.h b/drivers/net/ethernet/neterion/vxge/vxge-main.h
deleted file mode 100644 (file)
index da9d2c1..0000000
+++ /dev/null
@@ -1,516 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-main.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
- *              Virtualized Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#ifndef VXGE_MAIN_H
-#define VXGE_MAIN_H
-
-#include "vxge-traffic.h"
-#include "vxge-config.h"
-#include "vxge-version.h"
-#include <linux/list.h>
-#include <linux/bitops.h>
-#include <linux/if_vlan.h>
-
-#define VXGE_DRIVER_NAME               "vxge"
-#define VXGE_DRIVER_VENDOR             "Neterion, Inc"
-#define VXGE_DRIVER_FW_VERSION_MAJOR   1
-
-#define DRV_VERSION    VXGE_VERSION_MAJOR"."VXGE_VERSION_MINOR"."\
-       VXGE_VERSION_FIX"."VXGE_VERSION_BUILD"-"\
-       VXGE_VERSION_FOR
-
-#define PCI_DEVICE_ID_TITAN_WIN                0x5733
-#define PCI_DEVICE_ID_TITAN_UNI                0x5833
-#define VXGE_HW_TITAN1_PCI_REVISION    1
-#define VXGE_HW_TITAN1A_PCI_REVISION   2
-
-#define        VXGE_USE_DEFAULT                0xffffffff
-#define VXGE_HW_VPATH_MSIX_ACTIVE      4
-#define VXGE_ALARM_MSIX_ID             2
-#define VXGE_HW_RXSYNC_FREQ_CNT                4
-#define VXGE_LL_WATCH_DOG_TIMEOUT      (15 * HZ)
-#define VXGE_LL_RX_COPY_THRESHOLD      256
-#define VXGE_DEF_FIFO_LENGTH           84
-
-#define NO_STEERING            0
-#define PORT_STEERING          0x1
-#define RTH_STEERING           0x2
-#define RX_TOS_STEERING                0x3
-#define RX_VLAN_STEERING       0x4
-#define RTH_BUCKET_SIZE                4
-
-#define        TX_PRIORITY_STEERING    1
-#define        TX_VLAN_STEERING        2
-#define        TX_PORT_STEERING        3
-#define        TX_MULTIQ_STEERING      4
-
-#define VXGE_HW_MAC_ADDR_LEARN_DEFAULT VXGE_HW_RTS_MAC_DISABLE
-
-#define VXGE_TTI_BTIMER_VAL 250000
-
-#define VXGE_TTI_LTIMER_VAL    1000
-#define VXGE_T1A_TTI_LTIMER_VAL        80
-#define VXGE_TTI_RTIMER_VAL    0
-#define VXGE_TTI_RTIMER_ADAPT_VAL      10
-#define VXGE_T1A_TTI_RTIMER_VAL        400
-#define VXGE_RTI_BTIMER_VAL    250
-#define VXGE_RTI_LTIMER_VAL    100
-#define VXGE_RTI_RTIMER_VAL    0
-#define VXGE_RTI_RTIMER_ADAPT_VAL      15
-#define VXGE_FIFO_INDICATE_MAX_PKTS    VXGE_DEF_FIFO_LENGTH
-#define VXGE_ISR_POLLING_CNT   8
-#define VXGE_MAX_CONFIG_DEV    0xFF
-#define VXGE_EXEC_MODE_DISABLE 0
-#define VXGE_EXEC_MODE_ENABLE  1
-#define VXGE_MAX_CONFIG_PORT   1
-#define VXGE_ALL_VID_DISABLE   0
-#define VXGE_ALL_VID_ENABLE    1
-#define VXGE_PAUSE_CTRL_DISABLE        0
-#define VXGE_PAUSE_CTRL_ENABLE 1
-
-#define TTI_TX_URANGE_A        5
-#define TTI_TX_URANGE_B        15
-#define TTI_TX_URANGE_C        40
-#define TTI_TX_UFC_A   5
-#define TTI_TX_UFC_B   40
-#define TTI_TX_UFC_C   60
-#define TTI_TX_UFC_D   100
-#define TTI_T1A_TX_UFC_A       30
-#define TTI_T1A_TX_UFC_B       80
-/* UFC_C scales linearly with MTU: 60 at 9k MTU, 140 at 1.5k MTU.
- * Slope = (max_mtu - min_mtu) / (max_mtu_ufc - min_mtu_ufc)
- *       = (9000 - 1500) / (140 - 60) ~= 93
- */
-#define TTI_T1A_TX_UFC_C(mtu)  (60 + ((VXGE_HW_MAX_MTU - mtu) / 93))
-
-/* UFC_D: 100 at 9k MTU, 300 at 1.5k MTU.
- * Slope = (9000 - 1500) / (300 - 100) ~= 37
- */
-#define TTI_T1A_TX_UFC_D(mtu)  (100 + ((VXGE_HW_MAX_MTU - mtu) / 37))
-
-
-#define RTI_RX_URANGE_A                5
-#define RTI_RX_URANGE_B                15
-#define RTI_RX_URANGE_C                40
-#define RTI_T1A_RX_URANGE_A    1
-#define RTI_T1A_RX_URANGE_B    20
-#define RTI_T1A_RX_URANGE_C    50
-#define RTI_RX_UFC_A           1
-#define RTI_RX_UFC_B           5
-#define RTI_RX_UFC_C           10
-#define RTI_RX_UFC_D           15
-#define RTI_T1A_RX_UFC_B       20
-#define RTI_T1A_RX_UFC_C       50
-#define RTI_T1A_RX_UFC_D       60
-
-/*
- * The moderation parameters keep the interrupt rate at about 3k per second
- * for most, but not all, traffic. The values below are the maximum interrupt
- * counts allowed per function with INTA, or per vector in the case of MSI-X,
- * in a 10 millisecond time period. Enabled only for Titan 1A.
- */
-#define VXGE_T1A_MAX_INTERRUPT_COUNT   100
-#define VXGE_T1A_MAX_TX_INTERRUPT_COUNT        200
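As a hedged sketch of how such a cap is typically enforced (field names match struct vxge_fifo defined later in this header; the actual back-off step is elided), interrupts can be counted per 10 ms window like so:

	/* illustrative only: per-vector interrupt counting in 10 ms windows */
	if (time_after(jiffies, fifo->jiffies + HZ / 100)) {
		/* window expired: back off if the cap was exceeded, then restart */
		if (fifo->interrupt_count > VXGE_T1A_MAX_TX_INTERRUPT_COUNT)
			;	/* increase coalescing (raise UFC/timer values) */
		fifo->jiffies = jiffies;
		fifo->interrupt_count = 0;
	} else {
		fifo->interrupt_count++;
	}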
-
-/* Timer period in milliseconds */
-#define VXGE_TIMER_DELAY               10000
-
-#define VXGE_LL_MAX_FRAME_SIZE(dev) ((dev)->mtu + VXGE_HW_MAC_HEADER_MAX_SIZE)
-
-#define is_sriov(function_mode) \
-       ((function_mode == VXGE_HW_FUNCTION_MODE_SRIOV) || \
-       (function_mode == VXGE_HW_FUNCTION_MODE_SRIOV_8) || \
-       (function_mode == VXGE_HW_FUNCTION_MODE_SRIOV_4))
-
-enum vxge_reset_event {
-       /* reset events */
-       VXGE_LL_VPATH_RESET     = 0,
-       VXGE_LL_DEVICE_RESET    = 1,
-       VXGE_LL_FULL_RESET      = 2,
-       VXGE_LL_START_RESET     = 3,
-       VXGE_LL_COMPL_RESET     = 4
-};
-/* These flags represent the device's temporary state */
-enum vxge_device_state_t {
-       __VXGE_STATE_RESET_CARD = 0,
-       __VXGE_STATE_CARD_UP
-};
-
-enum vxge_mac_addr_state {
-       /* mac address states */
-       VXGE_LL_MAC_ADDR_IN_LIST        = 0,
-       VXGE_LL_MAC_ADDR_IN_DA_TABLE    = 1
-};
-
-struct vxge_drv_config {
-       int config_dev_cnt;
-       int total_dev_cnt;
-       int g_no_cpus;
-       unsigned int vpath_per_dev;
-};
-
-struct macInfo {
-       unsigned char macaddr[ETH_ALEN];
-       unsigned char macmask[ETH_ALEN];
-       unsigned int vpath_no;
-       enum vxge_mac_addr_state state;
-};
-
-struct vxge_config {
-       int             tx_pause_enable;
-       int             rx_pause_enable;
-       int             napi_weight;
-       int             intr_type;
-#define INTA   0
-#define MSI    1
-#define MSI_X  2
-
-       int             addr_learn_en;
-
-       u32             rth_steering:2,
-                       rth_algorithm:2,
-                       rth_hash_type_tcpipv4:1,
-                       rth_hash_type_ipv4:1,
-                       rth_hash_type_tcpipv6:1,
-                       rth_hash_type_ipv6:1,
-                       rth_hash_type_tcpipv6ex:1,
-                       rth_hash_type_ipv6ex:1,
-                       rth_bkt_sz:8;
-       int             rth_jhash_golden_ratio;
-       int             tx_steering_type;
-       int     fifo_indicate_max_pkts;
-       struct vxge_hw_device_hw_info device_hw_info;
-};
-
-struct vxge_msix_entry {
-       /* Mimicking the kernel's msix_entry struct. */
-       u16 vector;
-       u16 entry;
-       u16 in_use;
-       void *arg;
-};
-
-/* Software Statistics */
-
-struct vxge_sw_stats {
-
-       /* Virtual Path */
-       unsigned long vpaths_open;
-       unsigned long vpath_open_fail;
-
-       /* Misc. */
-       unsigned long link_up;
-       unsigned long link_down;
-};
-
-struct vxge_mac_addrs {
-       struct list_head item;
-       u64 macaddr;
-       u64 macmask;
-       enum vxge_mac_addr_state state;
-};
-
-struct vxgedev;
-
-struct vxge_fifo_stats {
-       struct u64_stats_sync   syncp;
-       u64 tx_frms;
-       u64 tx_bytes;
-
-       unsigned long tx_errors;
-       unsigned long txd_not_free;
-       unsigned long txd_out_of_desc;
-       unsigned long pci_map_fail;
-};
-
-struct vxge_fifo {
-       struct net_device *ndev;
-       struct pci_dev *pdev;
-       struct __vxge_hw_fifo *handle;
-       struct netdev_queue *txq;
-
-       int tx_steering_type;
-       int indicate_max_pkts;
-
-       /* Adaptive interrupt moderation parameters used in T1A */
-       unsigned long interrupt_count;
-       unsigned long jiffies;
-
-       u32 tx_vector_no;
-       /* Tx stats */
-       struct vxge_fifo_stats stats;
-} ____cacheline_aligned;
-
-struct vxge_ring_stats {
-       struct u64_stats_sync syncp;
-       u64 rx_frms;
-       u64 rx_mcast;
-       u64 rx_bytes;
-
-       unsigned long rx_errors;
-       unsigned long rx_dropped;
-       unsigned long prev_rx_frms;
-       unsigned long pci_map_fail;
-       unsigned long skb_alloc_fail;
-};
-
-struct vxge_ring {
-       struct net_device       *ndev;
-       struct pci_dev          *pdev;
-       struct __vxge_hw_ring   *handle;
-       /* The vpath id maintained in the driver -
-        * 0 to 'maximum_vpaths_in_function - 1'
-        */
-       int driver_id;
-
-       /* Adaptive interrupt moderation parameters used in T1A */
-       unsigned long interrupt_count;
-       unsigned long jiffies;
-
-       /* copy of the flag indicating whether rx_hwts is to be used */
-       u32 rx_hwts:1;
-
-       int pkts_processed;
-       int budget;
-
-       struct napi_struct napi;
-       struct napi_struct *napi_p;
-
-#define VXGE_MAX_MAC_ADDR_COUNT                30
-
-       int vlan_tag_strip;
-       u32 rx_vector_no;
-       enum vxge_hw_status last_status;
-
-       /* Rx stats */
-       struct vxge_ring_stats stats;
-} ____cacheline_aligned;
-
-struct vxge_vpath {
-       struct vxge_fifo fifo;
-       struct vxge_ring ring;
-
-       struct __vxge_hw_vpath_handle *handle;
-
-       /* Actual vpath id for this vpath in the device - 0 to 16 */
-       int device_id;
-       int max_mac_addr_cnt;
-       int is_configured;
-       int is_open;
-       struct vxgedev *vdev;
-       u8 macaddr[ETH_ALEN];
-       u8 macmask[ETH_ALEN];
-
-#define VXGE_MAX_LEARN_MAC_ADDR_CNT    2048
-       /* mac addresses currently programmed into NIC */
-       u16 mac_addr_cnt;
-       u16 mcast_addr_cnt;
-       struct list_head mac_addr_list;
-
-       u32 level_err;
-       u32 level_trace;
-};
-#define VXGE_COPY_DEBUG_INFO_TO_LL(vdev, err, trace) { \
-       for (i = 0; i < vdev->no_of_vpath; i++) {               \
-               vdev->vpaths[i].level_err = err;                \
-               vdev->vpaths[i].level_trace = trace;            \
-       }                                                       \
-       vdev->level_err = err;                                  \
-       vdev->level_trace = trace;                              \
-}
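One subtlety: the macro body uses a loop index i that it does not declare, so it compiles only where the caller already has an int i in scope. A minimal conforming call site, with illustrative level arguments:

	{
		int i;	/* required by VXGE_COPY_DEBUG_INFO_TO_LL */

		VXGE_COPY_DEBUG_INFO_TO_LL(vdev, VXGE_ERR, VXGE_TRACE);
	}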
-
-struct vxgedev {
-       struct net_device       *ndev;
-       struct pci_dev          *pdev;
-       struct __vxge_hw_device *devh;
-       unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
-       int vlan_tag_strip;
-       struct vxge_config      config;
-       unsigned long   state;
-
-       /* Indicates which vpath to reset */
-       unsigned long  vp_reset;
-
-       /* Timer used for polling vpath resets */
-       struct timer_list vp_reset_timer;
-
-       /* Timer used for polling vpath lockup */
-       struct timer_list vp_lockup_timer;
-
-       /*
-        * Flags to track whether device is in All Multicast
-        * or in promiscuous mode.
-        */
-       u16             all_multi_flg;
-
-       /* A flag indicating whether rx_hwts is to be used or not. */
-       u32     rx_hwts:1,
-               titan1:1;
-
-       struct vxge_msix_entry *vxge_entries;
-       struct msix_entry *entries;
-       /*
-        * 4 for each vpath * 17;
-        * total is 68
-        */
-#define        VXGE_MAX_REQUESTED_MSIX 68
-#define VXGE_INTR_STRLEN 80
-       char desc[VXGE_MAX_REQUESTED_MSIX][VXGE_INTR_STRLEN];
-
-       enum vxge_hw_event cric_err_event;
-
-       int max_vpath_supported;
-       int no_of_vpath;
-
-       struct napi_struct napi;
-       /* A debug option: when enabled, if an error condition occurs
-        * the driver will take the following steps:
-        * - mask all interrupts
-        * - not clear the source of the alarm
-        * - gracefully stop all I/O
-        * A diagnostic dump of registers and stats at this point
-        * reveals very useful information.
-        */
-       int exec_mode;
-       int max_config_port;
-       struct vxge_vpath       *vpaths;
-
-       struct __vxge_hw_vpath_handle *vp_handles[VXGE_HW_MAX_VIRTUAL_PATHS];
-       void __iomem *bar0;
-       struct vxge_sw_stats    stats;
-       int             mtu;
-       /* The variables below are used to select a vpath when transmitting a packet */
-       u8              vpath_selector[VXGE_HW_MAX_VIRTUAL_PATHS];
-       u64             vpaths_deployed;
-
-       u32             intr_cnt;
-       u32             level_err;
-       u32             level_trace;
-       char            fw_version[VXGE_HW_FW_STRLEN];
-       struct work_struct reset_task;
-};
-
-struct vxge_rx_priv {
-       struct sk_buff          *skb;
-       unsigned char           *skb_data;
-       dma_addr_t              data_dma;
-       dma_addr_t              data_size;
-};
-
-struct vxge_tx_priv {
-       struct sk_buff          *skb;
-       dma_addr_t              dma_buffers[MAX_SKB_FRAGS+1];
-};
-
-#define VXGE_MODULE_PARAM_INT(p, val) \
-       static int p = val; \
-       module_param(p, int, 0)
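The macro omits the trailing semicolon so that call sites supply their own. A usage sketch (this particular parameter name is illustrative):

	VXGE_MODULE_PARAM_INT(max_config_dev, VXGE_MAX_CONFIG_DEV);
	/* expands to:
	 *	static int max_config_dev = VXGE_MAX_CONFIG_DEV;
	 *	module_param(max_config_dev, int, 0);
	 */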
-
-static inline
-void vxge_os_timer(struct timer_list *timer, void (*func)(struct timer_list *),
-                  unsigned long timeout)
-{
-       timer_setup(timer, func, 0);
-       mod_timer(timer, jiffies + timeout);
-}
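For illustration, a sketch of how this helper arms one of the polling timers declared in struct vxgedev above (the callback body here is hypothetical):

	static void vxge_poll_vp_reset(struct timer_list *t)
	{
		struct vxgedev *vdev = from_timer(vdev, t, vp_reset_timer);

		/* ... service pending vpath resets ... */
		mod_timer(&vdev->vp_reset_timer, jiffies + HZ / 2);
	}

	/* arm the timer to fire in half a second */
	vxge_os_timer(&vdev->vp_reset_timer, vxge_poll_vp_reset, HZ / 2);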
-
-void vxge_initialize_ethtool_ops(struct net_device *ndev);
-int vxge_fw_upgrade(struct vxgedev *vdev, char *fw_name, int override);
-
-/* #define VXGE_DEBUG_INIT      : debug initialization functions
- * #define VXGE_DEBUG_TX        : debug transmit related functions
- * #define VXGE_DEBUG_RX        : debug receive related functions
- * #define VXGE_DEBUG_MEM       : debug memory module
- * #define VXGE_DEBUG_LOCK      : debug locks
- * #define VXGE_DEBUG_SEM       : debug semaphores
- * #define VXGE_DEBUG_ENTRYEXIT : debug functions by adding entry/exit statements
- */
-#define VXGE_DEBUG_INIT                0x00000001
-#define VXGE_DEBUG_TX          0x00000002
-#define VXGE_DEBUG_RX          0x00000004
-#define VXGE_DEBUG_MEM         0x00000008
-#define VXGE_DEBUG_LOCK                0x00000010
-#define VXGE_DEBUG_SEM         0x00000020
-#define VXGE_DEBUG_ENTRYEXIT   0x00000040
-#define VXGE_DEBUG_INTR                0x00000080
-#define VXGE_DEBUG_LL_CONFIG   0x00000100
-
-/* Debug tracing for VXGE driver */
-#ifndef VXGE_DEBUG_MASK
-#define VXGE_DEBUG_MASK        0x0
-#endif
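The mask is a compile-time switch: each vxge_debug_* macro below expands to a real trace call only when its bit is set in VXGE_DEBUG_MASK, and to no_printk() otherwise. For example, a build with init and Tx tracing enabled would define (illustrative):

	#define VXGE_DEBUG_MASK	(VXGE_DEBUG_INIT | VXGE_DEBUG_TX)	/* 0x3 */

before including this header, or pass -DVXGE_DEBUG_MASK=0x3 through the build flags.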
-
-#if (VXGE_DEBUG_LL_CONFIG & VXGE_DEBUG_MASK)
-#define vxge_debug_ll_config(level, fmt, ...) \
-       vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, ##__VA_ARGS__)
-#else
-#define vxge_debug_ll_config(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
-#endif
-
-#if (VXGE_DEBUG_INIT & VXGE_DEBUG_MASK)
-#define vxge_debug_init(level, fmt, ...) \
-       vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, ##__VA_ARGS__)
-#else
-#define vxge_debug_init(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
-#endif
-
-#if (VXGE_DEBUG_TX & VXGE_DEBUG_MASK)
-#define vxge_debug_tx(level, fmt, ...) \
-       vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, ##__VA_ARGS__)
-#else
-#define vxge_debug_tx(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
-#endif
-
-#if (VXGE_DEBUG_RX & VXGE_DEBUG_MASK)
-#define vxge_debug_rx(level, fmt, ...) \
-       vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, ##__VA_ARGS__)
-#else
-#define vxge_debug_rx(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
-#endif
-
-#if (VXGE_DEBUG_MEM & VXGE_DEBUG_MASK)
-#define vxge_debug_mem(level, fmt, ...) \
-       vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, ##__VA_ARGS__)
-#else
-#define vxge_debug_mem(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
-#endif
-
-#if (VXGE_DEBUG_ENTRYEXIT & VXGE_DEBUG_MASK)
-#define vxge_debug_entryexit(level, fmt, ...) \
-       vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, ##__VA_ARGS__)
-#else
-#define vxge_debug_entryexit(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
-#endif
-
-#if (VXGE_DEBUG_INTR & VXGE_DEBUG_MASK)
-#define vxge_debug_intr(level, fmt, ...) \
-       vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, ##__VA_ARGS__)
-#else
-#define vxge_debug_intr(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
-#endif
-
-#define VXGE_DEVICE_DEBUG_LEVEL_SET(level, mask, vdev) {\
-       vxge_hw_device_debug_set((struct __vxge_hw_device  *)vdev->devh, \
-               level, mask);\
-       VXGE_COPY_DEBUG_INFO_TO_LL(vdev, \
-               vxge_hw_device_error_level_get((struct __vxge_hw_device  *) \
-                       vdev->devh), \
-               vxge_hw_device_trace_level_get((struct __vxge_hw_device  *) \
-                       vdev->devh));\
-}
-
-#ifdef NETIF_F_GSO
-#define vxge_tcp_mss(skb) (skb_shinfo(skb)->gso_size)
-#define vxge_udp_mss(skb) (skb_shinfo(skb)->gso_size)
-#define vxge_offload_type(skb) (skb_shinfo(skb)->gso_type)
-#endif
-
-#endif
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-reg.h b/drivers/net/ethernet/neterion/vxge/vxge-reg.h
deleted file mode 100644 (file)
index 3e658b1..0000000
+++ /dev/null
@@ -1,4636 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-reg.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O Virtualized
- *             Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#ifndef VXGE_REG_H
-#define VXGE_REG_H
-
-/*
- * vxge_mBIT(loc) - set bit at offset
- */
-#define vxge_mBIT(loc)         (0x8000000000000000ULL >> (loc))
-
-/*
- * vxge_vBIT(val, loc, sz) - set bits at offset
- */
-#define vxge_vBIT(val, loc, sz)        (((u64)(val)) << (64-(loc)-(sz)))
-#define vxge_vBIT32(val, loc, sz)      (((u32)(val)) << (32-(loc)-(sz)))
-
-/*
- * vxge_bVALn(bits, loc, n) - Get the value of n bits at location
- */
-#define vxge_bVALn(bits, loc, n) \
-       ((((u64)bits) >> (64-(loc+n))) & ((0x1ULL << n) - 1))
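These helpers use MSB-0 bit numbering: offset 0 is the most significant bit of a 64-bit register, the reverse of the kernel's usual BIT() convention. A worked example for illustration:

	vxge_mBIT(3)				/* 0x1000000000000000ULL: bit 3 counted from the MSB */
	vxge_vBIT(0xA5, 0, 8)			/* 0xA500000000000000ULL: 8-bit field at offset 0 */
	vxge_bVALn(0xA500000000000000ULL, 0, 8)	/* 0xA5: reads the same field back */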
-
-#define        VXGE_HW_TITAN_ASIC_ID_GET_INITIAL_DEVICE_ID(bits) \
-                                                       vxge_bVALn(bits, 0, 16)
-#define        VXGE_HW_TITAN_ASIC_ID_GET_INITIAL_MAJOR_REVISION(bits) \
-                                                       vxge_bVALn(bits, 48, 8)
-#define        VXGE_HW_TITAN_ASIC_ID_GET_INITIAL_MINOR_REVISION(bits) \
-                                                       vxge_bVALn(bits, 56, 8)
-
-#define        VXGE_HW_VPATH_TO_FUNC_MAP_CFG1_GET_VPATH_TO_FUNC_MAP_CFG1(bits) \
-                                                       vxge_bVALn(bits, 3, 5)
-#define        VXGE_HW_HOST_TYPE_ASSIGNMENTS_GET_HOST_TYPE_ASSIGNMENTS(bits) \
-                                                       vxge_bVALn(bits, 5, 3)
-#define VXGE_HW_PF_SW_RESET_COMMAND                            0xA5
-
-#define VXGE_HW_TITAN_PCICFGMGMT_REG_SPACES            17
-#define VXGE_HW_TITAN_SRPCIM_REG_SPACES                        17
-#define VXGE_HW_TITAN_VPMGMT_REG_SPACES                        17
-#define VXGE_HW_TITAN_VPATH_REG_SPACES                 17
-
-#define VXGE_HW_FW_API_GET_EPROM_REV                   31
-
-#define VXGE_EPROM_IMG_MAJOR(val)              (u32) vxge_bVALn(val, 48, 4)
-#define VXGE_EPROM_IMG_MINOR(val)              (u32) vxge_bVALn(val, 52, 4)
-#define VXGE_EPROM_IMG_FIX(val)                        (u32) vxge_bVALn(val, 56, 4)
-#define VXGE_EPROM_IMG_BUILD(val)              (u32) vxge_bVALn(val, 60, 4)
-
-#define VXGE_HW_GET_EPROM_IMAGE_INDEX(val)             vxge_bVALn(val, 16, 8)
-#define VXGE_HW_GET_EPROM_IMAGE_VALID(val)             vxge_bVALn(val, 31, 1)
-#define VXGE_HW_GET_EPROM_IMAGE_TYPE(val)              vxge_bVALn(val, 40, 8)
-#define VXGE_HW_GET_EPROM_IMAGE_REV(val)               vxge_bVALn(val, 48, 16)
-#define VXGE_HW_RTS_ACCESS_STEER_ROM_IMAGE_INDEX(val)  vxge_vBIT(val, 16, 8)
-
-#define VXGE_HW_FW_API_GET_FUNC_MODE                   29
-#define VXGE_HW_GET_FUNC_MODE_VAL(val)                 ((val) & 0xFF)
-
-#define VXGE_HW_FW_UPGRADE_MEMO                                13
-#define VXGE_HW_FW_UPGRADE_ACTION                      16
-#define VXGE_HW_FW_UPGRADE_OFFSET_START                        2
-#define VXGE_HW_FW_UPGRADE_OFFSET_SEND                 3
-#define VXGE_HW_FW_UPGRADE_OFFSET_COMMIT               4
-#define VXGE_HW_FW_UPGRADE_OFFSET_READ                 5
-
-#define VXGE_HW_FW_UPGRADE_BLK_SIZE                    16
-#define VXGE_HW_UPGRADE_GET_RET_ERR_CODE(val)          ((val) & 0xff)
-#define VXGE_HW_UPGRADE_GET_SEC_ERR_CODE(val)          (((val) >> 8) & 0xff)
-
-#define VXGE_HW_ASIC_MODE_RESERVED                             0
-#define VXGE_HW_ASIC_MODE_NO_IOV                               1
-#define VXGE_HW_ASIC_MODE_SR_IOV                               2
-#define VXGE_HW_ASIC_MODE_MR_IOV                               3
-
-#define        VXGE_HW_TXMAC_GEN_CFG1_TMAC_PERMA_STOP_EN               vxge_mBIT(3)
-#define        VXGE_HW_TXMAC_GEN_CFG1_BLOCK_BCAST_TO_WIRE              vxge_mBIT(19)
-#define        VXGE_HW_TXMAC_GEN_CFG1_BLOCK_BCAST_TO_SWITCH    vxge_mBIT(23)
-#define        VXGE_HW_TXMAC_GEN_CFG1_HOST_APPEND_FCS                  vxge_mBIT(31)
-
-#define        VXGE_HW_VPATH_IS_FIRST_GET_VPATH_IS_FIRST(bits) vxge_bVALn(bits, 3, 1)
-
-#define        VXGE_HW_TIM_VPATH_ASSIGNMENT_GET_BMAP_ROOT(bits) \
-                                               vxge_bVALn(bits, 0, 32)
-
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_GET_MAX_PYLD_LEN(bits) \
-                                                       vxge_bVALn(bits, 50, 14)
-
-#define        VXGE_HW_XMAC_VSPORT_CHOICES_VP_GET_VSPORT_VECTOR(bits) \
-                                                       vxge_bVALn(bits, 0, 17)
-
-#define        VXGE_HW_XMAC_VPATH_TO_VSPORT_VPMGMT_CLONE_GET_VSPORT_NUMBER(bits) \
-                                                       vxge_bVALn(bits, 3, 5)
-
-#define        VXGE_HW_KDFC_DRBL_TRIPLET_TOTAL_GET_KDFC_MAX_SIZE(bits) \
-                                                       vxge_bVALn(bits, 17, 15)
-
-#define VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_MODE_LEGACY_MODE                 0
-#define VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_MODE_NON_OFFLOAD_ONLY            1
-#define VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_MODE_MULTI_OP_MODE               2
-
-#define VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_MODE_MESSAGES_ONLY               0
-#define VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_MODE_MULTI_OP_MODE               1
-
-#define        VXGE_HW_TOC_GET_KDFC_INITIAL_OFFSET(val) \
-                               ((val) & ~VXGE_HW_TOC_KDFC_INITIAL_BIR(7))
-#define        VXGE_HW_TOC_GET_KDFC_INITIAL_BIR(val) \
-                               vxge_bVALn(val, 61, 3)
-#define        VXGE_HW_TOC_GET_USDC_INITIAL_OFFSET(val) \
-                               ((val) & ~VXGE_HW_TOC_USDC_INITIAL_BIR(7))
-#define        VXGE_HW_TOC_GET_USDC_INITIAL_BIR(val) \
-                               vxge_bVALn(val, 61, 3)
-
-#define        VXGE_HW_TOC_KDFC_VPATH_STRIDE_GET_TOC_KDFC_VPATH_STRIDE(bits)   (bits)
-#define        VXGE_HW_TOC_KDFC_FIFO_STRIDE_GET_TOC_KDFC_FIFO_STRIDE(bits)     (bits)
-
-#define        VXGE_HW_KDFC_TRPL_FIFO_OFFSET_GET_KDFC_RCTR0(bits) \
-                                               vxge_bVALn(bits, 1, 15)
-#define        VXGE_HW_KDFC_TRPL_FIFO_OFFSET_GET_KDFC_RCTR1(bits) \
-                                               vxge_bVALn(bits, 17, 15)
-#define        VXGE_HW_KDFC_TRPL_FIFO_OFFSET_GET_KDFC_RCTR2(bits) \
-                                               vxge_bVALn(bits, 33, 15)
-
-#define VXGE_HW_KDFC_TRPL_FIFO_OFFSET_KDFC_VAPTH_NUM(val) vxge_vBIT(val, 42, 5)
-#define VXGE_HW_KDFC_TRPL_FIFO_OFFSET_KDFC_FIFO_NUM(val) vxge_vBIT(val, 47, 2)
-#define VXGE_HW_KDFC_TRPL_FIFO_OFFSET_KDFC_FIFO_OFFSET(val) \
-                                       vxge_vBIT(val, 49, 15)
-
-#define VXGE_HW_PRC_CFG4_RING_MODE_ONE_BUFFER                  0
-#define VXGE_HW_PRC_CFG4_RING_MODE_THREE_BUFFER                        1
-#define VXGE_HW_PRC_CFG4_RING_MODE_FIVE_BUFFER                 2
-
-#define VXGE_HW_PRC_CFG7_SCATTER_MODE_A                                0
-#define VXGE_HW_PRC_CFG7_SCATTER_MODE_B                                2
-#define VXGE_HW_PRC_CFG7_SCATTER_MODE_C                                1
-
-#define VXGE_HW_RTS_MGR_STEER_CTRL_WE_READ                             0
-#define VXGE_HW_RTS_MGR_STEER_CTRL_WE_WRITE                            1
-
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_DA                  0
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_VID                 1
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_ETYPE               2
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_PN                  3
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_RANGE_PN            4
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_RTH_GEN_CFG         5
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_RTH_SOLO_IT         6
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_RTH_JHASH_CFG       7
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_RTH_MASK            8
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_RTH_KEY             9
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_QOS                 10
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_DS                  11
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_RTH_MULTI_IT        12
-#define VXGE_HW_RTS_MGR_STEER_CTRL_DATA_STRUCT_SEL_FW_VERSION          13
-
-#define VXGE_HW_RTS_MGR_STEER_DATA0_GET_DA_MAC_ADDR(bits) \
-                                                       vxge_bVALn(bits, 0, 48)
-#define VXGE_HW_RTS_MGR_STEER_DATA0_DA_MAC_ADDR(val) vxge_vBIT(val, 0, 48)
-
-#define VXGE_HW_RTS_MGR_STEER_DATA1_GET_DA_MAC_ADDR_MASK(bits) \
-                                                       vxge_bVALn(bits, 0, 48)
-#define VXGE_HW_RTS_MGR_STEER_DATA1_DA_MAC_ADDR_MASK(val) vxge_vBIT(val, 0, 48)
-#define VXGE_HW_RTS_MGR_STEER_DATA1_DA_MAC_ADDR_ADD_PRIVILEGED_MODE \
-                                                               vxge_mBIT(54)
-#define VXGE_HW_RTS_MGR_STEER_DATA1_GET_DA_MAC_ADDR_ADD_VPATH(bits) \
-                                                       vxge_bVALn(bits, 55, 5)
-#define VXGE_HW_RTS_MGR_STEER_DATA1_DA_MAC_ADDR_ADD_VPATH(val) \
-                                                       vxge_vBIT(val, 55, 5)
-#define VXGE_HW_RTS_MGR_STEER_DATA1_GET_DA_MAC_ADDR_ADD_MODE(bits) \
-                                                       vxge_bVALn(bits, 62, 2)
-#define VXGE_HW_RTS_MGR_STEER_DATA1_DA_MAC_ADDR_MODE(val) vxge_vBIT(val, 62, 2)
-
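The DATA0/DATA1 accessors above compose the two 64-bit payloads of a destination-address steering entry. A hedged sketch of packing a MAC address plus an all-ones address mask (the helper name build_da_entry is illustrative, not driver code; the privileged-mode, vpath-add and mode fields are left out):

    /* Illustrative only: the 48-bit MAC occupies bits 0..47 (MSB-first)
     * of DATA0, and DATA1 carries the matching address mask.
     */
    static void build_da_entry(const u8 *mac, u64 *data0, u64 *data1)
    {
            u64 mac_val = 0;
            int i;

            for (i = 0; i < 6; i++)       /* MAC bytes, most significant first */
                    mac_val = (mac_val << 8) | mac[i];

            *data0 = VXGE_HW_RTS_MGR_STEER_DATA0_DA_MAC_ADDR(mac_val);
            *data1 = VXGE_HW_RTS_MGR_STEER_DATA1_DA_MAC_ADDR_MASK(0xffffffffffffULL);
    }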
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_ADD_ENTRY                  0
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_DELETE_ENTRY               1
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LIST_FIRST_ENTRY           2
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LIST_NEXT_ENTRY            3
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_READ_ENTRY                 0
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_WRITE_ENTRY                1
-#define VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_READ_MEMO_ENTRY           3
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LED_CONTROL                4
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_ALL_CLEAR                  172
-
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA                0
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_VID               1
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_ETYPE             2
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_PN                3
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_RTH_GEN_CFG       5
-#define        VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_SOLO_IT          6
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_RTH_JHASH_CFG     7
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MASK          8
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_RTH_KEY           9
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_QOS               10
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DS                11
-#define        VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MULTI_IT         12
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_FW_MEMO           13
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_DA_MAC_ADDR(bits) \
-                                                       vxge_bVALn(bits, 0, 48)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_DA_MAC_ADDR(val) vxge_vBIT(val, 0, 48)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_VLAN_ID(bits) vxge_bVALn(bits, 0, 12)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_VLAN_ID(val) vxge_vBIT(val, 0, 12)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_ETYPE(bits)  vxge_bVALn(bits, 0, 11)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_ETYPE(val) vxge_vBIT(val, 0, 16)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_PN_SRC_DEST_SEL(bits) \
-                                                       vxge_bVALn(bits, 3, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_PN_SRC_DEST_SEL          vxge_mBIT(3)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_PN_TCP_UDP_SEL(bits) \
-                                                       vxge_bVALn(bits, 7, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_PN_TCP_UDP_SEL           vxge_mBIT(7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_PN_PORT_NUM(bits) \
-                                                       vxge_bVALn(bits, 8, 16)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_PN_PORT_NUM(val) vxge_vBIT(val, 8, 16)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_RTH_EN(bits) \
-                                                       vxge_bVALn(bits, 3, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_EN           vxge_mBIT(3)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_BUCKET_SIZE(bits) \
-                                                       vxge_bVALn(bits, 4, 4)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_BUCKET_SIZE(val) \
-                                                       vxge_vBIT(val, 4, 4)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_ALG_SEL(bits) \
-                                                       vxge_bVALn(bits, 10, 2)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ALG_SEL(val) \
-                                                       vxge_vBIT(val, 10, 2)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ALG_SEL_JENKINS  0
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ALG_SEL_MS_RSS   1
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ALG_SEL_CRC32C   2
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_RTH_TCP_IPV4_EN(bits) \
-                                                       vxge_bVALn(bits, 15, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_TCP_IPV4_EN  vxge_mBIT(15)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_RTH_IPV4_EN(bits) \
-                                                       vxge_bVALn(bits, 19, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_IPV4_EN      vxge_mBIT(19)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_RTH_TCP_IPV6_EN(bits) \
-                                                       vxge_bVALn(bits, 23, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_TCP_IPV6_EN  vxge_mBIT(23)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_RTH_IPV6_EN(bits) \
-                                                       vxge_bVALn(bits, 27, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_IPV6_EN      vxge_mBIT(27)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_RTH_TCP_IPV6_EX_EN(bits) \
-                                                       vxge_bVALn(bits, 31, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_TCP_IPV6_EX_EN vxge_mBIT(31)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_RTH_IPV6_EX_EN(bits) \
-                                                       vxge_bVALn(bits, 35, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_RTH_IPV6_EX_EN   vxge_mBIT(35)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_ACTIVE_TABLE(bits) \
-                                                       vxge_bVALn(bits, 39, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_ACTIVE_TABLE     vxge_mBIT(39)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_GEN_REPL_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 43, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_GEN_REPL_ENTRY_EN    vxge_mBIT(43)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_SOLO_IT_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 3, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_SOLO_IT_ENTRY_EN     vxge_mBIT(3)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_SOLO_IT_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 9, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_SOLO_IT_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 9, 7)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_ITEM0_BUCKET_NUM(bits) \
-                                                       vxge_bVALn(bits, 0, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM0_BUCKET_NUM(val) \
-                                                       vxge_vBIT(val, 0, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_ITEM0_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 8, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM0_ENTRY_EN       vxge_mBIT(8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_ITEM0_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 9, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM0_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 9, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_ITEM1_BUCKET_NUM(bits) \
-                                                       vxge_bVALn(bits, 16, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM1_BUCKET_NUM(val) \
-                                                       vxge_vBIT(val, 16, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_ITEM1_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 24, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM1_ENTRY_EN       vxge_mBIT(24)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_ITEM1_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 25, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_ITEM1_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 25, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM0_BUCKET_NUM(bits) \
-                                                       vxge_bVALn(bits, 0, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM0_BUCKET_NUM(val) \
-                                                       vxge_vBIT(val, 0, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM0_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 8, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM0_ENTRY_EN       vxge_mBIT(8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM0_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 9, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM0_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 9, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM1_BUCKET_NUM(bits) \
-                                                       vxge_bVALn(bits, 16, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM1_BUCKET_NUM(val) \
-                                                       vxge_vBIT(val, 16, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM1_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 24, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM1_ENTRY_EN       vxge_mBIT(24)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM1_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 25, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM1_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 25, 7)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_JHASH_CFG_GOLDEN_RATIO(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_JHASH_CFG_GOLDEN_RATIO(val) \
-                                                       vxge_vBIT(val, 0, 32)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_JHASH_CFG_INIT_VALUE(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_JHASH_CFG_INIT_VALUE(val) \
-                                                       vxge_vBIT(val, 32, 32)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_MASK_IPV6_SA_MASK(bits) \
-                                                       vxge_bVALn(bits, 0, 16)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_MASK_IPV6_SA_MASK(val) \
-                                                       vxge_vBIT(val, 0, 16)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_MASK_IPV6_DA_MASK(bits) \
-                                                       vxge_bVALn(bits, 16, 16)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_MASK_IPV6_DA_MASK(val) \
-                                                       vxge_vBIT(val, 16, 16)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_MASK_IPV4_SA_MASK(bits) \
-                                                       vxge_bVALn(bits, 32, 4)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_MASK_IPV4_SA_MASK(val) \
-                                                       vxge_vBIT(val, 32, 4)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_MASK_IPV4_DA_MASK(bits) \
-                                                       vxge_bVALn(bits, 36, 4)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_MASK_IPV4_DA_MASK(val) \
-                                                       vxge_vBIT(val, 36, 4)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_MASK_L4SP_MASK(bits) \
-                                                       vxge_bVALn(bits, 40, 2)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_MASK_L4SP_MASK(val) \
-                                                       vxge_vBIT(val, 40, 2)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_MASK_L4DP_MASK(bits) \
-                                                       vxge_bVALn(bits, 42, 2)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_MASK_L4DP_MASK(val) \
-                                                       vxge_vBIT(val, 42, 2)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_RTH_KEY_KEY(bits) \
-                                                       vxge_bVALn(bits, 0, 64)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_RTH_KEY_KEY(val) vxge_vBIT(val, 0, 64)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_QOS_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 3, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_QOS_ENTRY_EN             vxge_mBIT(3)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_DS_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 3, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_DS_ENTRY_EN              vxge_mBIT(3)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_DA_MAC_ADDR_MASK(bits) \
-                                                       vxge_bVALn(bits, 0, 48)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_DA_MAC_ADDR_MASK(val) \
-                                                       vxge_vBIT(val, 0, 48)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_DA_MAC_ADDR_MODE(val) \
-                                                       vxge_vBIT(val, 62, 2)
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM4_BUCKET_NUM(bits) \
-                                                       vxge_bVALn(bits, 0, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM4_BUCKET_NUM(val) \
-                                                       vxge_vBIT(val, 0, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM4_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 8, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM4_ENTRY_EN       vxge_mBIT(8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM4_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 9, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM4_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 9, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM5_BUCKET_NUM(bits) \
-                                                       vxge_bVALn(bits, 16, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM5_BUCKET_NUM(val) \
-                                                       vxge_vBIT(val, 16, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM5_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 24, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM5_ENTRY_EN       vxge_mBIT(24)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM5_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 25, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM5_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 25, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM6_BUCKET_NUM(bits) \
-                                                       vxge_bVALn(bits, 32, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM6_BUCKET_NUM(val) \
-                                                       vxge_vBIT(val, 32, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM6_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 40, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM6_ENTRY_EN       vxge_mBIT(40)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM6_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 41, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM6_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 41, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM7_BUCKET_NUM(bits) \
-                                                       vxge_bVALn(bits, 48, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM7_BUCKET_NUM(val) \
-                                                       vxge_vBIT(val, 48, 8)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM7_ENTRY_EN(bits) \
-                                                       vxge_bVALn(bits, 56, 1)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM7_ENTRY_EN       vxge_mBIT(56)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_RTH_ITEM7_BUCKET_DATA(bits) \
-                                                       vxge_bVALn(bits, 57, 7)
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM7_BUCKET_DATA(val) \
-                                                       vxge_vBIT(val, 57, 7)
-
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_PART_NUMBER           0
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_SERIAL_NUMBER         1
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_VERSION               2
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_PCI_MODE              3
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_DESC_0                4
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_DESC_1                5
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_DESC_2                6
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_MEMO_ITEM_DESC_3                7
-
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_LED_CONTROL_ON                   1
-#define        VXGE_HW_RTS_ACCESS_STEER_DATA0_LED_CONTROL_OFF                  0
-
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_DAY(bits) \
-                                                       vxge_bVALn(bits, 0, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_FW_VER_DAY(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MONTH(bits) \
-                                                       vxge_bVALn(bits, 8, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_FW_VER_MONTH(val) vxge_vBIT(val, 8, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_YEAR(bits) \
-                                               vxge_bVALn(bits, 16, 16)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_FW_VER_YEAR(val) \
-                                                       vxge_vBIT(val, 16, 16)
-
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MAJOR(bits) \
-                                               vxge_bVALn(bits, 32, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_FW_VER_MAJOR(val) vxge_vBIT(val, 32, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MINOR(bits) \
-                                               vxge_bVALn(bits, 40, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_FW_VER_MINOR(val) vxge_vBIT(val, 40, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_BUILD(bits) \
-                                               vxge_bVALn(bits, 48, 16)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_FW_VER_BUILD(val) vxge_vBIT(val, 48, 16)
-
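The GET_FW_VER_* accessors above split the firmware-version word returned in DATA0 into date and version fields. A sketch of the decode (the struct and function names are illustrative, not driver code):

    struct fw_version { u32 day, month, year, major, minor, build; };

    /* Decode the DATA0 firmware-version word using the accessors above. */
    static struct fw_version decode_fw_version(u64 data0)
    {
            struct fw_version v = {
                    .day   = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_DAY(data0),
                    .month = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MONTH(data0),
                    .year  = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_YEAR(data0),
                    .major = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MAJOR(data0),
                    .minor = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_MINOR(data0),
                    .build = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_FW_VER_BUILD(data0),
            };
            return v;
    }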
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_DAY(bits) \
-                                               vxge_bVALn(bits, 0, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_FLASH_VER_DAY(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_MONTH(bits) \
-                                                       vxge_bVALn(bits, 8, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_FLASH_VER_MONTH(val) vxge_vBIT(val, 8, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_YEAR(bits) \
-                                                       vxge_bVALn(bits, 16, 16)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_FLASH_VER_YEAR(val) \
-                                                       vxge_vBIT(val, 16, 16)
-
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_MAJOR(bits) \
-                                                       vxge_bVALn(bits, 32, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_FLASH_VER_MAJOR(val) vxge_vBIT(val, 32, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_MINOR(bits) \
-                                                       vxge_bVALn(bits, 40, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_FLASH_VER_MINOR(val) vxge_vBIT(val, 40, 8)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_FLASH_VER_BUILD(bits) \
-                                                       vxge_bVALn(bits, 48, 16)
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_FLASH_VER_BUILD(val) vxge_vBIT(val, 48, 16)
-#define VXGE_HW_RTS_ACCESS_STEER_CTRL_GET_ACTION(bits) vxge_bVALn(bits, 0, 8)
-
-#define        VXGE_HW_SRPCIM_TO_VPATH_ALARM_REG_GET_PPIF_SRPCIM_TO_VPATH_ALARM(bits)\
-                                                       vxge_bVALn(bits, 0, 18)
-
-#define        VXGE_HW_RX_MULTI_CAST_STATS_GET_FRAME_DISCARD(bits) \
-                                                       vxge_bVALn(bits, 48, 16)
-#define        VXGE_HW_RX_FRM_TRANSFERRED_GET_RX_FRM_TRANSFERRED(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_RXD_RETURNED_GET_RXD_RETURNED(bits)     vxge_bVALn(bits, 48, 16)
-#define        VXGE_HW_VPATH_DEBUG_STATS0_GET_INI_NUM_MWR_SENT(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_VPATH_DEBUG_STATS1_GET_INI_NUM_MRD_SENT(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_VPATH_DEBUG_STATS2_GET_INI_NUM_CPL_RCVD(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_VPATH_DEBUG_STATS3_GET_INI_NUM_MWR_BYTE_SENT(bits)      (bits)
-#define        VXGE_HW_VPATH_DEBUG_STATS4_GET_INI_NUM_CPL_BYTE_RCVD(bits)      (bits)
-#define        VXGE_HW_VPATH_DEBUG_STATS5_GET_WRCRDTARB_XOFF(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_VPATH_DEBUG_STATS6_GET_RDCRDTARB_XOFF(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_VPATH_GENSTATS_COUNT01_GET_PPIF_VPATH_GENSTATS_COUNT1(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_VPATH_GENSTATS_COUNT01_GET_PPIF_VPATH_GENSTATS_COUNT0(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_VPATH_GENSTATS_COUNT23_GET_PPIF_VPATH_GENSTATS_COUNT3(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_VPATH_GENSTATS_COUNT23_GET_PPIF_VPATH_GENSTATS_COUNT2(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_VPATH_GENSTATS_COUNT4_GET_PPIF_VPATH_GENSTATS_COUNT4(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_VPATH_GENSTATS_COUNT5_GET_PPIF_VPATH_GENSTATS_COUNT5(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_TX_VP_RESET_DISCARDED_FRMS_GET_TX_VP_RESET_DISCARDED_FRMS(bits) \
-                                                       vxge_bVALn(bits, 48, 16)
-#define        VXGE_HW_DBG_STATS_GET_RX_MPA_CRC_FAIL_FRMS(bits) vxge_bVALn(bits, 0, 16)
-#define        VXGE_HW_DBG_STATS_GET_RX_MPA_MRK_FAIL_FRMS(bits) \
-                                                       vxge_bVALn(bits, 16, 16)
-#define        VXGE_HW_DBG_STATS_GET_RX_MPA_LEN_FAIL_FRMS(bits) \
-                                                       vxge_bVALn(bits, 32, 16)
-#define        VXGE_HW_DBG_STATS_GET_RX_FAU_RX_WOL_FRMS(bits)  vxge_bVALn(bits, 0, 16)
-#define        VXGE_HW_DBG_STATS_GET_RX_FAU_RX_VP_RESET_DISCARDED_FRMS(bits) \
-                                                       vxge_bVALn(bits, 16, 16)
-#define        VXGE_HW_DBG_STATS_GET_RX_FAU_RX_PERMITTED_FRMS(bits) \
-                                                       vxge_bVALn(bits, 32, 16)
-
-#define        VXGE_HW_MRPCIM_DEBUG_STATS0_GET_INI_WR_DROP(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_MRPCIM_DEBUG_STATS0_GET_INI_RD_DROP(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_MRPCIM_DEBUG_STATS1_GET_VPLANE_WRCRDTARB_PH_CRDT_DEPLETED(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_MRPCIM_DEBUG_STATS2_GET_VPLANE_WRCRDTARB_PD_CRDT_DEPLETED(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_MRPCIM_DEBUG_STATS3_GET_VPLANE_RDCRDTARB_NPH_CRDT_DEPLETED(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_MRPCIM_DEBUG_STATS4_GET_INI_WR_VPIN_DROP(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_MRPCIM_DEBUG_STATS4_GET_INI_RD_VPIN_DROP(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_GENSTATS_COUNT01_GET_GENSTATS_COUNT1(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_GENSTATS_COUNT01_GET_GENSTATS_COUNT0(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_GENSTATS_COUNT23_GET_GENSTATS_COUNT3(bits) \
-                                                       vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_GENSTATS_COUNT23_GET_GENSTATS_COUNT2(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_GENSTATS_COUNT4_GET_GENSTATS_COUNT4(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_GENSTATS_COUNT5_GET_GENSTATS_COUNT5(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-
-#define        VXGE_HW_DEBUG_STATS0_GET_RSTDROP_MSG(bits)      vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_DEBUG_STATS0_GET_RSTDROP_CPL(bits)      vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_DEBUG_STATS1_GET_RSTDROP_CLIENT0(bits)  vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_DEBUG_STATS1_GET_RSTDROP_CLIENT1(bits)  vxge_bVALn(bits, 32, 32)
-#define        VXGE_HW_DEBUG_STATS2_GET_RSTDROP_CLIENT2(bits)  vxge_bVALn(bits, 0, 32)
-#define        VXGE_HW_DEBUG_STATS3_GET_VPLANE_DEPL_PH(bits)   vxge_bVALn(bits, 0, 16)
-#define        VXGE_HW_DEBUG_STATS3_GET_VPLANE_DEPL_NPH(bits)  vxge_bVALn(bits, 16, 16)
-#define        VXGE_HW_DEBUG_STATS3_GET_VPLANE_DEPL_CPLH(bits) vxge_bVALn(bits, 32, 16)
-#define        VXGE_HW_DEBUG_STATS4_GET_VPLANE_DEPL_PD(bits)   vxge_bVALn(bits, 0, 16)
-#define        VXGE_HW_DEBUG_STATS4_GET_VPLANE_DEPL_NPD(bits)  vxge_bVALn(bits, 16, 16)
-#define        VXGE_HW_DEBUG_STATS4_GET_VPLANE_DEPL_CPLD(bits) vxge_bVALn(bits, 32, 16)
-
-#define        VXGE_HW_DBG_STATS_TPA_TX_PATH_GET_TX_PERMITTED_FRMS(bits) \
-                                                       vxge_bVALn(bits, 32, 32)
-
-#define        VXGE_HW_DBG_STAT_TX_ANY_FRMS_GET_PORT0_TX_ANY_FRMS(bits) \
-                                                       vxge_bVALn(bits, 0, 8)
-#define        VXGE_HW_DBG_STAT_TX_ANY_FRMS_GET_PORT1_TX_ANY_FRMS(bits) \
-                                                       vxge_bVALn(bits, 8, 8)
-#define        VXGE_HW_DBG_STAT_TX_ANY_FRMS_GET_PORT2_TX_ANY_FRMS(bits) \
-                                                       vxge_bVALn(bits, 16, 8)
-
-#define        VXGE_HW_DBG_STAT_RX_ANY_FRMS_GET_PORT0_RX_ANY_FRMS(bits) \
-                                                       vxge_bVALn(bits, 0, 8)
-#define        VXGE_HW_DBG_STAT_RX_ANY_FRMS_GET_PORT1_RX_ANY_FRMS(bits) \
-                                                       vxge_bVALn(bits, 8, 8)
-#define        VXGE_HW_DBG_STAT_RX_ANY_FRMS_GET_PORT2_RX_ANY_FRMS(bits) \
-                                                       vxge_bVALn(bits, 16, 8)
-
-#define VXGE_HW_CONFIG_PRIV_H
-
-#define VXGE_HW_SWAPPER_INITIAL_VALUE                  0x0123456789abcdefULL
-#define VXGE_HW_SWAPPER_BYTE_SWAPPED                   0xefcdab8967452301ULL
-#define VXGE_HW_SWAPPER_BIT_FLIPPED                    0x80c4a2e691d5b3f7ULL
-#define VXGE_HW_SWAPPER_BYTE_SWAPPED_BIT_FLIPPED       0xf7b3d591e6a2c480ULL
-
-#define VXGE_HW_SWAPPER_READ_BYTE_SWAP_ENABLE          0xFFFFFFFFFFFFFFFFULL
-#define VXGE_HW_SWAPPER_READ_BYTE_SWAP_DISABLE         0x0000000000000000ULL
-
-#define VXGE_HW_SWAPPER_READ_BIT_FLAP_ENABLE           0xFFFFFFFFFFFFFFFFULL
-#define VXGE_HW_SWAPPER_READ_BIT_FLAP_DISABLE          0x0000000000000000ULL
-
-#define VXGE_HW_SWAPPER_WRITE_BYTE_SWAP_ENABLE         0xFFFFFFFFFFFFFFFFULL
-#define VXGE_HW_SWAPPER_WRITE_BYTE_SWAP_DISABLE                0x0000000000000000ULL
-
-#define VXGE_HW_SWAPPER_WRITE_BIT_FLAP_ENABLE          0xFFFFFFFFFFFFFFFFULL
-#define VXGE_HW_SWAPPER_WRITE_BIT_FLAP_DISABLE         0x0000000000000000ULL
-
-/*
- * The registers are memory mapped and are native big-endian byte order. The
- * little-endian hosts are handled by enabling hardware byte-swapping for
- * register and dma operations.
- */
-struct vxge_hw_legacy_reg {
-
-       u8      unused00010[0x00010];
-
-/*0x00010*/    u64     toc_swapper_fb;
-#define VXGE_HW_TOC_SWAPPER_FB_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-/*0x00018*/    u64     pifm_rd_swap_en;
-#define VXGE_HW_PIFM_RD_SWAP_EN_PIFM_RD_SWAP_EN(val) vxge_vBIT(val, 0, 64)
-/*0x00020*/    u64     pifm_rd_flip_en;
-#define VXGE_HW_PIFM_RD_FLIP_EN_PIFM_RD_FLIP_EN(val) vxge_vBIT(val, 0, 64)
-/*0x00028*/    u64     pifm_wr_swap_en;
-#define VXGE_HW_PIFM_WR_SWAP_EN_PIFM_WR_SWAP_EN(val) vxge_vBIT(val, 0, 64)
-/*0x00030*/    u64     pifm_wr_flip_en;
-#define VXGE_HW_PIFM_WR_FLIP_EN_PIFM_WR_FLIP_EN(val) vxge_vBIT(val, 0, 64)
-/*0x00038*/    u64     toc_first_pointer;
-#define VXGE_HW_TOC_FIRST_POINTER_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-/*0x00040*/    u64     host_access_en;
-#define VXGE_HW_HOST_ACCESS_EN_HOST_ACCESS_EN(val) vxge_vBIT(val, 0, 64)
-
-} __packed;
-
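The swapper constants above let a driver infer the host's effective byte and bit order: hardware programs toc_swapper_fb with VXGE_HW_SWAPPER_INITIAL_VALUE, and the pattern the host reads back tells it which of the pifm_* swap/flip enables to set. A sketch of that classification (the enum and function names are illustrative, not driver code):

    enum swapper_cfg {
            SWAP_NONE,              /* host order already matches */
            SWAP_BYTES,             /* program the byte-swap enables */
            FLIP_BITS,              /* program the bit-flip enables */
            SWAP_BYTES_FLIP_BITS,   /* program both */
            SWAP_UNKNOWN,
    };

    static enum swapper_cfg classify_swapper(u64 readback)
    {
            switch (readback) {
            case VXGE_HW_SWAPPER_INITIAL_VALUE:            return SWAP_NONE;
            case VXGE_HW_SWAPPER_BYTE_SWAPPED:             return SWAP_BYTES;
            case VXGE_HW_SWAPPER_BIT_FLIPPED:              return FLIP_BITS;
            case VXGE_HW_SWAPPER_BYTE_SWAPPED_BIT_FLIPPED: return SWAP_BYTES_FLIP_BITS;
            default:                                       return SWAP_UNKNOWN;
            }
    }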
-struct vxge_hw_toc_reg {
-
-       u8      unused00050[0x00050];
-
-/*0x00050*/    u64     toc_common_pointer;
-#define VXGE_HW_TOC_COMMON_POINTER_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-/*0x00058*/    u64     toc_memrepair_pointer;
-#define VXGE_HW_TOC_MEMREPAIR_POINTER_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-/*0x00060*/    u64     toc_pcicfgmgmt_pointer[17];
-#define VXGE_HW_TOC_PCICFGMGMT_POINTER_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-       u8      unused001e0[0x001e0-0x000e8];
-
-/*0x001e0*/    u64     toc_mrpcim_pointer;
-#define VXGE_HW_TOC_MRPCIM_POINTER_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-/*0x001e8*/    u64     toc_srpcim_pointer[17];
-#define VXGE_HW_TOC_SRPCIM_POINTER_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-       u8      unused00278[0x00278-0x00270];
-
-/*0x00278*/    u64     toc_vpmgmt_pointer[17];
-#define VXGE_HW_TOC_VPMGMT_POINTER_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-       u8      unused00390[0x00390-0x00300];
-
-/*0x00390*/    u64     toc_vpath_pointer[17];
-#define VXGE_HW_TOC_VPATH_POINTER_INITIAL_VAL(val) vxge_vBIT(val, 0, 64)
-       u8      unused004a0[0x004a0-0x00418];
-
-/*0x004a0*/    u64     toc_kdfc;
-#define VXGE_HW_TOC_KDFC_INITIAL_OFFSET(val) vxge_vBIT(val, 0, 61)
-#define VXGE_HW_TOC_KDFC_INITIAL_BIR(val) vxge_vBIT(val, 61, 3)
-/*0x004a8*/    u64     toc_usdc;
-#define VXGE_HW_TOC_USDC_INITIAL_OFFSET(val) vxge_vBIT(val, 0, 61)
-#define VXGE_HW_TOC_USDC_INITIAL_BIR(val) vxge_vBIT(val, 61, 3)
-/*0x004b0*/    u64     toc_kdfc_vpath_stride;
-#define        VXGE_HW_TOC_KDFC_VPATH_STRIDE_INITIAL_TOC_KDFC_VPATH_STRIDE(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x004b8*/    u64     toc_kdfc_fifo_stride;
-#define        VXGE_HW_TOC_KDFC_FIFO_STRIDE_INITIAL_TOC_KDFC_FIFO_STRIDE(val) \
-                                                       vxge_vBIT(val, 0, 64)
-
-} __packed;
-
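The TOC is a table of byte offsets, relative to the start of BAR0, for every other register space, and legacy_reg.toc_first_pointer locates the TOC itself. A sketch of following it to one of the 17 per-vpath spaces (the helper name is illustrative; readq() is the usual 64-bit MMIO read):

    static void __iomem *map_vpath_regs(void __iomem *bar0, unsigned int vp_id)
    {
            struct vxge_hw_legacy_reg __iomem *legacy = bar0;
            struct vxge_hw_toc_reg __iomem *toc;

            /* toc_first_pointer holds the byte offset of the TOC in BAR0. */
            toc = bar0 + readq(&legacy->toc_first_pointer);

            /* One pointer per space; vp_id must be < 17 (see the
             * VXGE_HW_TITAN_*_REG_SPACES constants above).
             */
            return bar0 + readq(&toc->toc_vpath_pointer[vp_id]);
    }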
-struct vxge_hw_common_reg {
-
-       u8      unused00a00[0x00a00];
-
-/*0x00a00*/    u64     prc_status1;
-#define VXGE_HW_PRC_STATUS1_PRC_VP_QUIESCENT(n)        vxge_mBIT(n)
-/*0x00a08*/    u64     rxdcm_reset_in_progress;
-#define VXGE_HW_RXDCM_RESET_IN_PROGRESS_PRC_VP(n)      vxge_mBIT(n)
-/*0x00a10*/    u64     replicq_flush_in_progress;
-#define VXGE_HW_REPLICQ_FLUSH_IN_PROGRESS_NOA_VP(n)    vxge_mBIT(n)
-/*0x00a18*/    u64     rxpe_cmds_reset_in_progress;
-#define VXGE_HW_RXPE_CMDS_RESET_IN_PROGRESS_NOA_VP(n)  vxge_mBIT(n)
-/*0x00a20*/    u64     mxp_cmds_reset_in_progress;
-#define VXGE_HW_MXP_CMDS_RESET_IN_PROGRESS_NOA_VP(n)   vxge_mBIT(n)
-/*0x00a28*/    u64     noffload_reset_in_progress;
-#define VXGE_HW_NOFFLOAD_RESET_IN_PROGRESS_PRC_VP(n)   vxge_mBIT(n)
-/*0x00a30*/    u64     rd_req_in_progress;
-#define VXGE_HW_RD_REQ_IN_PROGRESS_VP(n)       vxge_mBIT(n)
-/*0x00a38*/    u64     rd_req_outstanding;
-#define VXGE_HW_RD_REQ_OUTSTANDING_VP(n)       vxge_mBIT(n)
-/*0x00a40*/    u64     kdfc_reset_in_progress;
-#define VXGE_HW_KDFC_RESET_IN_PROGRESS_NOA_VP(n)       vxge_mBIT(n)
-       u8      unused00b00[0x00b00-0x00a48];
-
-/*0x00b00*/    u64     one_cfg_vp;
-#define VXGE_HW_ONE_CFG_VP_RDY(n)      vxge_mBIT(n)
-/*0x00b08*/    u64     one_common;
-#define VXGE_HW_ONE_COMMON_PET_VPATH_RESET_IN_PROGRESS(n)      vxge_mBIT(n)
-       u8      unused00b80[0x00b80-0x00b10];
-
-/*0x00b80*/    u64     tim_int_en;
-#define VXGE_HW_TIM_INT_EN_TIM_VP(n)   vxge_mBIT(n)
-/*0x00b88*/    u64     tim_set_int_en;
-#define VXGE_HW_TIM_SET_INT_EN_VP(n)   vxge_mBIT(n)
-/*0x00b90*/    u64     tim_clr_int_en;
-#define VXGE_HW_TIM_CLR_INT_EN_VP(n)   vxge_mBIT(n)
-/*0x00b98*/    u64     tim_mask_int_during_reset;
-#define VXGE_HW_TIM_MASK_INT_DURING_RESET_VPATH(n)     vxge_mBIT(n)
-/*0x00ba0*/    u64     tim_reset_in_progress;
-#define VXGE_HW_TIM_RESET_IN_PROGRESS_TIM_VPATH(n)     vxge_mBIT(n)
-/*0x00ba8*/    u64     tim_outstanding_bmap;
-#define VXGE_HW_TIM_OUTSTANDING_BMAP_TIM_VPATH(n)      vxge_mBIT(n)
-       u8      unused00c00[0x00c00-0x00bb0];
-
-/*0x00c00*/    u64     msg_reset_in_progress;
-#define VXGE_HW_MSG_RESET_IN_PROGRESS_MSG_COMPOSITE(val) vxge_vBIT(val, 0, 17)
-/*0x00c08*/    u64     msg_mxp_mr_ready;
-#define VXGE_HW_MSG_MXP_MR_READY_MP_BOOTED(n)  vxge_mBIT(n)
-/*0x00c10*/    u64     msg_uxp_mr_ready;
-#define VXGE_HW_MSG_UXP_MR_READY_UP_BOOTED(n)  vxge_mBIT(n)
-/*0x00c18*/    u64     msg_dmq_noni_rtl_prefetch;
-#define VXGE_HW_MSG_DMQ_NONI_RTL_PREFETCH_BYPASS_ENABLE(n)     vxge_mBIT(n)
-/*0x00c20*/    u64     msg_umq_rtl_bwr;
-#define VXGE_HW_MSG_UMQ_RTL_BWR_PREFETCH_DISABLE(n)    vxge_mBIT(n)
-       u8      unused00d00[0x00d00-0x00c28];
-
-/*0x00d00*/    u64     cmn_rsthdlr_cfg0;
-#define VXGE_HW_CMN_RSTHDLR_CFG0_SW_RESET_VPATH(val) vxge_vBIT(val, 0, 17)
-/*0x00d08*/    u64     cmn_rsthdlr_cfg1;
-#define VXGE_HW_CMN_RSTHDLR_CFG1_CLR_VPATH_RESET(val) vxge_vBIT(val, 0, 17)
-/*0x00d10*/    u64     cmn_rsthdlr_cfg2;
-#define VXGE_HW_CMN_RSTHDLR_CFG2_SW_RESET_FIFO0(val) vxge_vBIT(val, 0, 17)
-/*0x00d18*/    u64     cmn_rsthdlr_cfg3;
-#define VXGE_HW_CMN_RSTHDLR_CFG3_SW_RESET_FIFO1(val) vxge_vBIT(val, 0, 17)
-/*0x00d20*/    u64     cmn_rsthdlr_cfg4;
-#define VXGE_HW_CMN_RSTHDLR_CFG4_SW_RESET_FIFO2(val) vxge_vBIT(val, 0, 17)
-       u8      unused00d40[0x00d40-0x00d28];
-
-/*0x00d40*/    u64     cmn_rsthdlr_cfg8;
-#define VXGE_HW_CMN_RSTHDLR_CFG8_INCR_VPATH_INST_NUM(val) vxge_vBIT(val, 0, 17)
-/*0x00d48*/    u64     stats_cfg0;
-#define VXGE_HW_STATS_CFG0_STATS_ENABLE(val) vxge_vBIT(val, 0, 17)
-       u8      unused00da8[0x00da8-0x00d50];
-
-/*0x00da8*/    u64     clear_msix_mask_vect[4];
-#define VXGE_HW_CLEAR_MSIX_MASK_VECT_CLEAR_MSIX_MASK_VECT(val) \
-                                               vxge_vBIT(val, 0, 17)
-/*0x00dc8*/    u64     set_msix_mask_vect[4];
-#define VXGE_HW_SET_MSIX_MASK_VECT_SET_MSIX_MASK_VECT(val) vxge_vBIT(val, 0, 17)
-/*0x00de8*/    u64     clear_msix_mask_all_vect;
-#define        VXGE_HW_CLEAR_MSIX_MASK_ALL_VECT_CLEAR_MSIX_MASK_ALL_VECT(val)  \
-                                                       vxge_vBIT(val, 0, 17)
-/*0x00df0*/    u64     set_msix_mask_all_vect;
-#define        VXGE_HW_SET_MSIX_MASK_ALL_VECT_SET_MSIX_MASK_ALL_VECT(val) \
-                                                       vxge_vBIT(val, 0, 17)
-/*0x00df8*/    u64     mask_vector[4];
-#define VXGE_HW_MASK_VECTOR_MASK_VECTOR(val) vxge_vBIT(val, 0, 17)
-/*0x00e18*/    u64     msix_pending_vector[4];
-#define VXGE_HW_MSIX_PENDING_VECTOR_MSIX_PENDING_VECTOR(val) \
-                                                       vxge_vBIT(val, 0, 17)
-/*0x00e38*/    u64     clr_msix_one_shot_vec[4];
-#define        VXGE_HW_CLR_MSIX_ONE_SHOT_VEC_CLR_MSIX_ONE_SHOT_VEC(val) \
-                                                       vxge_vBIT(val, 0, 17)
-/*0x00e58*/    u64     titan_asic_id;
-#define VXGE_HW_TITAN_ASIC_ID_INITIAL_DEVICE_ID(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_TITAN_ASIC_ID_INITIAL_MAJOR_REVISION(val) vxge_vBIT(val, 48, 8)
-#define VXGE_HW_TITAN_ASIC_ID_INITIAL_MINOR_REVISION(val) vxge_vBIT(val, 56, 8)
-/*0x00e60*/    u64     titan_general_int_status;
-#define        VXGE_HW_TITAN_GENERAL_INT_STATUS_MRPCIM_ALARM_INT       vxge_mBIT(0)
-#define        VXGE_HW_TITAN_GENERAL_INT_STATUS_SRPCIM_ALARM_INT       vxge_mBIT(1)
-#define        VXGE_HW_TITAN_GENERAL_INT_STATUS_VPATH_ALARM_INT        vxge_mBIT(2)
-#define        VXGE_HW_TITAN_GENERAL_INT_STATUS_VPATH_TRAFFIC_INT(val) \
-                                                       vxge_vBIT(val, 3, 17)
-       u8      unused00e70[0x00e70-0x00e68];
-
-/*0x00e70*/    u64     titan_mask_all_int;
-#define        VXGE_HW_TITAN_MASK_ALL_INT_ALARM        vxge_mBIT(7)
-#define        VXGE_HW_TITAN_MASK_ALL_INT_TRAFFIC      vxge_mBIT(15)
-       u8      unused00e80[0x00e80-0x00e78];
-
-/*0x00e80*/    u64     tim_int_status0;
-#define VXGE_HW_TIM_INT_STATUS0_TIM_INT_STATUS0(val) vxge_vBIT(val, 0, 64)
-/*0x00e88*/    u64     tim_int_mask0;
-#define VXGE_HW_TIM_INT_MASK0_TIM_INT_MASK0(val) vxge_vBIT(val, 0, 64)
-/*0x00e90*/    u64     tim_int_status1;
-#define VXGE_HW_TIM_INT_STATUS1_TIM_INT_STATUS1(val) vxge_vBIT(val, 0, 4)
-/*0x00e98*/    u64     tim_int_mask1;
-#define VXGE_HW_TIM_INT_MASK1_TIM_INT_MASK1(val) vxge_vBIT(val, 0, 4)
-/*0x00ea0*/    u64     rti_int_status;
-#define VXGE_HW_RTI_INT_STATUS_RTI_INT_STATUS(val) vxge_vBIT(val, 0, 17)
-/*0x00ea8*/    u64     rti_int_mask;
-#define VXGE_HW_RTI_INT_MASK_RTI_INT_MASK(val) vxge_vBIT(val, 0, 17)
-/*0x00eb0*/    u64     adapter_status;
-#define        VXGE_HW_ADAPTER_STATUS_RTDMA_RTDMA_READY        vxge_mBIT(0)
-#define        VXGE_HW_ADAPTER_STATUS_WRDMA_WRDMA_READY        vxge_mBIT(1)
-#define        VXGE_HW_ADAPTER_STATUS_KDFC_KDFC_READY  vxge_mBIT(2)
-#define        VXGE_HW_ADAPTER_STATUS_TPA_TMAC_BUF_EMPTY       vxge_mBIT(3)
-#define        VXGE_HW_ADAPTER_STATUS_RDCTL_PIC_QUIESCENT      vxge_mBIT(4)
-#define        VXGE_HW_ADAPTER_STATUS_XGMAC_NETWORK_FAULT      vxge_mBIT(5)
-#define        VXGE_HW_ADAPTER_STATUS_ROCRC_OFFLOAD_QUIESCENT  vxge_mBIT(6)
-#define        VXGE_HW_ADAPTER_STATUS_G3IF_FB_G3IF_FB_GDDR3_READY      vxge_mBIT(7)
-#define        VXGE_HW_ADAPTER_STATUS_G3IF_CM_G3IF_CM_GDDR3_READY      vxge_mBIT(8)
-#define        VXGE_HW_ADAPTER_STATUS_RIC_RIC_RUNNING  vxge_mBIT(9)
-#define        VXGE_HW_ADAPTER_STATUS_CMG_C_PLL_IN_LOCK        vxge_mBIT(10)
-#define        VXGE_HW_ADAPTER_STATUS_XGMAC_X_PLL_IN_LOCK      vxge_mBIT(11)
-#define        VXGE_HW_ADAPTER_STATUS_FBIF_M_PLL_IN_LOCK       vxge_mBIT(12)
-#define VXGE_HW_ADAPTER_STATUS_PCC_PCC_IDLE(val) vxge_vBIT(val, 24, 8)
-#define VXGE_HW_ADAPTER_STATUS_ROCRC_RC_PRC_QUIESCENT(val) vxge_vBIT(val, 44, 8)
-/*0x00eb8*/    u64     gen_ctrl;
-#define        VXGE_HW_GEN_CTRL_SPI_MRPCIM_WR_DIS      vxge_mBIT(0)
-#define        VXGE_HW_GEN_CTRL_SPI_MRPCIM_RD_DIS      vxge_mBIT(1)
-#define        VXGE_HW_GEN_CTRL_SPI_SRPCIM_WR_DIS      vxge_mBIT(2)
-#define        VXGE_HW_GEN_CTRL_SPI_SRPCIM_RD_DIS      vxge_mBIT(3)
-#define        VXGE_HW_GEN_CTRL_SPI_DEBUG_DIS  vxge_mBIT(4)
-#define        VXGE_HW_GEN_CTRL_SPI_APP_LTSSM_TIMER_DIS        vxge_mBIT(5)
-#define VXGE_HW_GEN_CTRL_SPI_NOT_USED(val) vxge_vBIT(val, 6, 4)
-       u8      unused00ed0[0x00ed0-0x00ec0];
-
-/*0x00ed0*/    u64     adapter_ready;
-#define        VXGE_HW_ADAPTER_READY_ADAPTER_READY     vxge_mBIT(63)
-/*0x00ed8*/    u64     outstanding_read;
-#define VXGE_HW_OUTSTANDING_READ_OUTSTANDING_READ(val) vxge_vBIT(val, 0, 17)
-/*0x00ee0*/    u64     vpath_rst_in_prog;
-#define VXGE_HW_VPATH_RST_IN_PROG_VPATH_RST_IN_PROG(val) vxge_vBIT(val, 0, 17)
-/*0x00ee8*/    u64     vpath_reg_modified;
-#define VXGE_HW_VPATH_REG_MODIFIED_VPATH_REG_MODIFIED(val) vxge_vBIT(val, 0, 17)
-       u8      unused00fc0[0x00fc0-0x00ef0];
-
-/*0x00fc0*/    u64     cp_reset_in_progress;
-#define VXGE_HW_CP_RESET_IN_PROGRESS_CP_VPATH(n)       vxge_mBIT(n)
-       u8      unused01080[0x01080-0x00fc8];
-
-/*0x01080*/    u64     xgmac_ready;
-#define VXGE_HW_XGMAC_READY_XMACJ_READY(val) vxge_vBIT(val, 0, 17)
-       u8      unused010c0[0x010c0-0x01088];
-
-/*0x010c0*/    u64     fbif_ready;
-#define VXGE_HW_FBIF_READY_FAU_READY(val) vxge_vBIT(val, 0, 17)
-       u8      unused01100[0x01100-0x010c8];
-
-/*0x01100*/    u64     vplane_assignments;
-#define VXGE_HW_VPLANE_ASSIGNMENTS_VPLANE_ASSIGNMENTS(val) vxge_vBIT(val, 3, 5)
-/*0x01108*/    u64     vpath_assignments;
-#define VXGE_HW_VPATH_ASSIGNMENTS_VPATH_ASSIGNMENTS(val) vxge_vBIT(val, 0, 17)
-/*0x01110*/    u64     resource_assignments;
-#define VXGE_HW_RESOURCE_ASSIGNMENTS_RESOURCE_ASSIGNMENTS(val) \
-                                               vxge_vBIT(val, 0, 17)
-/*0x01118*/    u64     host_type_assignments;
-#define        VXGE_HW_HOST_TYPE_ASSIGNMENTS_HOST_TYPE_ASSIGNMENTS(val) \
-                                                       vxge_vBIT(val, 5, 3)
-       u8      unused01128[0x01128-0x01120];
-
-/*0x01128*/    u64     max_resource_assignments;
-#define VXGE_HW_MAX_RESOURCE_ASSIGNMENTS_PCI_MAX_VPLANE(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_MAX_RESOURCE_ASSIGNMENTS_PCI_MAX_VPATHS(val) \
-                                               vxge_vBIT(val, 11, 5)
-/*0x01130*/    u64     pf_vpath_assignments;
-#define VXGE_HW_PF_VPATH_ASSIGNMENTS_PF_VPATH_ASSIGNMENTS(val) \
-                                               vxge_vBIT(val, 0, 17)
-       u8      unused01200[0x01200-0x01138];
-
-/*0x01200*/    u64     rts_access_icmp;
-#define VXGE_HW_RTS_ACCESS_ICMP_EN(val) vxge_vBIT(val, 0, 17)
-/*0x01208*/    u64     rts_access_tcpsyn;
-#define VXGE_HW_RTS_ACCESS_TCPSYN_EN(val) vxge_vBIT(val, 0, 17)
-/*0x01210*/    u64     rts_access_zl4pyld;
-#define VXGE_HW_RTS_ACCESS_ZL4PYLD_EN(val) vxge_vBIT(val, 0, 17)
-/*0x01218*/    u64     rts_access_l4prtcl_tcp;
-#define VXGE_HW_RTS_ACCESS_L4PRTCL_TCP_EN(val) vxge_vBIT(val, 0, 17)
-/*0x01220*/    u64     rts_access_l4prtcl_udp;
-#define VXGE_HW_RTS_ACCESS_L4PRTCL_UDP_EN(val) vxge_vBIT(val, 0, 17)
-/*0x01228*/    u64     rts_access_l4prtcl_flex;
-#define VXGE_HW_RTS_ACCESS_L4PRTCL_FLEX_EN(val) vxge_vBIT(val, 0, 17)
-/*0x01230*/    u64     rts_access_ipfrag;
-#define VXGE_HW_RTS_ACCESS_IPFRAG_EN(val) vxge_vBIT(val, 0, 17)
-
-} __packed;
-
-struct vxge_hw_memrepair_reg {
-       u64     unused1;
-       u64     unused2;
-} __packed;
-
-struct vxge_hw_pcicfgmgmt_reg {
-
-/*0x00000*/    u64     resource_no;
-#define        VXGE_HW_RESOURCE_NO_PFN_OR_VF   BIT(3)
-/*0x00008*/    u64     bargrp_pf_or_vf_bar0_mask;
-#define        VXGE_HW_BARGRP_PF_OR_VF_BAR0_MASK_BARGRP_PF_OR_VF_BAR0_MASK(val) \
-                                                       vxge_vBIT(val, 2, 6)
-/*0x00010*/    u64     bargrp_pf_or_vf_bar1_mask;
-#define        VXGE_HW_BARGRP_PF_OR_VF_BAR1_MASK_BARGRP_PF_OR_VF_BAR1_MASK(val) \
-                                                       vxge_vBIT(val, 2, 6)
-/*0x00018*/    u64     bargrp_pf_or_vf_bar2_mask;
-#define        VXGE_HW_BARGRP_PF_OR_VF_BAR2_MASK_BARGRP_PF_OR_VF_BAR2_MASK(val) \
-                                                       vxge_vBIT(val, 2, 6)
-/*0x00020*/    u64     msixgrp_no;
-#define VXGE_HW_MSIXGRP_NO_TABLE_SIZE(val) vxge_vBIT(val, 5, 11)
-
-} __packed;
-
-struct vxge_hw_mrpcim_reg {
-/*0x00000*/    u64     g3fbct_int_status;
-#define        VXGE_HW_G3FBCT_INT_STATUS_ERR_G3IF_INT  vxge_mBIT(0)
-/*0x00008*/    u64     g3fbct_int_mask;
-/*0x00010*/    u64     g3fbct_err_reg;
-#define        VXGE_HW_G3FBCT_ERR_REG_G3IF_SM_ERR      vxge_mBIT(4)
-#define        VXGE_HW_G3FBCT_ERR_REG_G3IF_GDDR3_DECC  vxge_mBIT(5)
-#define        VXGE_HW_G3FBCT_ERR_REG_G3IF_GDDR3_U_DECC        vxge_mBIT(6)
-#define        VXGE_HW_G3FBCT_ERR_REG_G3IF_CTRL_FIFO_DECC      vxge_mBIT(7)
-#define        VXGE_HW_G3FBCT_ERR_REG_G3IF_GDDR3_SECC  vxge_mBIT(29)
-#define        VXGE_HW_G3FBCT_ERR_REG_G3IF_GDDR3_U_SECC        vxge_mBIT(30)
-#define        VXGE_HW_G3FBCT_ERR_REG_G3IF_CTRL_FIFO_SECC      vxge_mBIT(31)
-/*0x00018*/    u64     g3fbct_err_mask;
-/*0x00020*/    u64     g3fbct_err_alarm;
-
-       u8      unused00a00[0x00a00-0x00028];
-
-/*0x00a00*/    u64     wrdma_int_status;
-#define        VXGE_HW_WRDMA_INT_STATUS_RC_ALARM_RC_INT        vxge_mBIT(0)
-#define        VXGE_HW_WRDMA_INT_STATUS_RXDRM_SM_ERR_RXDRM_INT vxge_mBIT(1)
-#define        VXGE_HW_WRDMA_INT_STATUS_RXDCM_SM_ERR_RXDCM_SM_INT      vxge_mBIT(2)
-#define        VXGE_HW_WRDMA_INT_STATUS_RXDWM_SM_ERR_RXDWM_INT vxge_mBIT(3)
-#define        VXGE_HW_WRDMA_INT_STATUS_RDA_ERR_RDA_INT        vxge_mBIT(6)
-#define        VXGE_HW_WRDMA_INT_STATUS_RDA_ECC_DB_RDA_ECC_DB_INT      vxge_mBIT(8)
-#define        VXGE_HW_WRDMA_INT_STATUS_RDA_ECC_SG_RDA_ECC_SG_INT      vxge_mBIT(9)
-#define        VXGE_HW_WRDMA_INT_STATUS_FRF_ALARM_FRF_INT      vxge_mBIT(12)
-#define        VXGE_HW_WRDMA_INT_STATUS_ROCRC_ALARM_ROCRC_INT  vxge_mBIT(13)
-#define        VXGE_HW_WRDMA_INT_STATUS_WDE0_ALARM_WDE0_INT    vxge_mBIT(14)
-#define        VXGE_HW_WRDMA_INT_STATUS_WDE1_ALARM_WDE1_INT    vxge_mBIT(15)
-#define        VXGE_HW_WRDMA_INT_STATUS_WDE2_ALARM_WDE2_INT    vxge_mBIT(16)
-#define        VXGE_HW_WRDMA_INT_STATUS_WDE3_ALARM_WDE3_INT    vxge_mBIT(17)
-/*0x00a08*/    u64     wrdma_int_mask;
-/*0x00a10*/    u64     rc_alarm_reg;
-#define        VXGE_HW_RC_ALARM_REG_FTC_SM_ERR vxge_mBIT(0)
-#define        VXGE_HW_RC_ALARM_REG_FTC_SM_PHASE_ERR   vxge_mBIT(1)
-#define        VXGE_HW_RC_ALARM_REG_BTDWM_SM_ERR       vxge_mBIT(2)
-#define        VXGE_HW_RC_ALARM_REG_BTC_SM_ERR vxge_mBIT(3)
-#define        VXGE_HW_RC_ALARM_REG_BTDCM_SM_ERR       vxge_mBIT(4)
-#define        VXGE_HW_RC_ALARM_REG_BTDRM_SM_ERR       vxge_mBIT(5)
-#define        VXGE_HW_RC_ALARM_REG_RMM_RXD_RC_ECC_DB_ERR      vxge_mBIT(6)
-#define        VXGE_HW_RC_ALARM_REG_RMM_RXD_RC_ECC_SG_ERR      vxge_mBIT(7)
-#define        VXGE_HW_RC_ALARM_REG_RHS_RXD_RHS_ECC_DB_ERR     vxge_mBIT(8)
-#define        VXGE_HW_RC_ALARM_REG_RHS_RXD_RHS_ECC_SG_ERR     vxge_mBIT(9)
-#define        VXGE_HW_RC_ALARM_REG_RMM_SM_ERR vxge_mBIT(10)
-#define        VXGE_HW_RC_ALARM_REG_BTC_VPATH_MISMATCH_ERR     vxge_mBIT(12)
-/*0x00a18*/    u64     rc_alarm_mask;
-/*0x00a20*/    u64     rc_alarm_alarm;
-/*0x00a28*/    u64     rxdrm_sm_err_reg;
-#define VXGE_HW_RXDRM_SM_ERR_REG_PRC_VP(n)     vxge_mBIT(n)
-/*0x00a30*/    u64     rxdrm_sm_err_mask;
-/*0x00a38*/    u64     rxdrm_sm_err_alarm;
-/*0x00a40*/    u64     rxdcm_sm_err_reg;
-#define VXGE_HW_RXDCM_SM_ERR_REG_PRC_VP(n)     vxge_mBIT(n)
-/*0x00a48*/    u64     rxdcm_sm_err_mask;
-/*0x00a50*/    u64     rxdcm_sm_err_alarm;
-/*0x00a58*/    u64     rxdwm_sm_err_reg;
-#define VXGE_HW_RXDWM_SM_ERR_REG_PRC_VP(n)     vxge_mBIT(n)
-/*0x00a60*/    u64     rxdwm_sm_err_mask;
-/*0x00a68*/    u64     rxdwm_sm_err_alarm;
-/*0x00a70*/    u64     rda_err_reg;
-#define        VXGE_HW_RDA_ERR_REG_RDA_SM0_ERR_ALARM   vxge_mBIT(0)
-#define        VXGE_HW_RDA_ERR_REG_RDA_MISC_ERR        vxge_mBIT(1)
-#define        VXGE_HW_RDA_ERR_REG_RDA_PCIX_ERR        vxge_mBIT(2)
-#define        VXGE_HW_RDA_ERR_REG_RDA_RXD_ECC_DB_ERR  vxge_mBIT(3)
-#define        VXGE_HW_RDA_ERR_REG_RDA_FRM_ECC_DB_ERR  vxge_mBIT(4)
-#define        VXGE_HW_RDA_ERR_REG_RDA_UQM_ECC_DB_ERR  vxge_mBIT(5)
-#define        VXGE_HW_RDA_ERR_REG_RDA_IMM_ECC_DB_ERR  vxge_mBIT(6)
-#define        VXGE_HW_RDA_ERR_REG_RDA_TIM_ECC_DB_ERR  vxge_mBIT(7)
-/*0x00a78*/    u64     rda_err_mask;
-/*0x00a80*/    u64     rda_err_alarm;
-/*0x00a88*/    u64     rda_ecc_db_reg;
-#define VXGE_HW_RDA_ECC_DB_REG_RDA_RXD_ERR(n)  vxge_mBIT(n)
-/*0x00a90*/    u64     rda_ecc_db_mask;
-/*0x00a98*/    u64     rda_ecc_db_alarm;
-/*0x00aa0*/    u64     rda_ecc_sg_reg;
-#define VXGE_HW_RDA_ECC_SG_REG_RDA_RXD_ERR(n)  vxge_mBIT(n)
-/*0x00aa8*/    u64     rda_ecc_sg_mask;
-/*0x00ab0*/    u64     rda_ecc_sg_alarm;
-/*0x00ab8*/    u64     rqa_err_reg;
-#define        VXGE_HW_RQA_ERR_REG_RQA_SM_ERR_ALARM    vxge_mBIT(0)
-/*0x00ac0*/    u64     rqa_err_mask;
-/*0x00ac8*/    u64     rqa_err_alarm;
-/*0x00ad0*/    u64     frf_alarm_reg;
-#define VXGE_HW_FRF_ALARM_REG_PRC_VP_FRF_SM_ERR(n)     vxge_mBIT(n)
-/*0x00ad8*/    u64     frf_alarm_mask;
-/*0x00ae0*/    u64     frf_alarm_alarm;
-/*0x00ae8*/    u64     rocrc_alarm_reg;
-#define        VXGE_HW_ROCRC_ALARM_REG_QCQ_QCC_BYP_ECC_DB      vxge_mBIT(0)
-#define        VXGE_HW_ROCRC_ALARM_REG_QCQ_QCC_BYP_ECC_SG      vxge_mBIT(1)
-#define        VXGE_HW_ROCRC_ALARM_REG_NOA_NMA_SM_ERR  vxge_mBIT(2)
-#define        VXGE_HW_ROCRC_ALARM_REG_NOA_IMMM_ECC_DB vxge_mBIT(3)
-#define        VXGE_HW_ROCRC_ALARM_REG_NOA_IMMM_ECC_SG vxge_mBIT(4)
-#define        VXGE_HW_ROCRC_ALARM_REG_UDQ_UMQM_ECC_DB vxge_mBIT(5)
-#define        VXGE_HW_ROCRC_ALARM_REG_UDQ_UMQM_ECC_SG vxge_mBIT(6)
-#define        VXGE_HW_ROCRC_ALARM_REG_NOA_RCBM_ECC_DB vxge_mBIT(11)
-#define        VXGE_HW_ROCRC_ALARM_REG_NOA_RCBM_ECC_SG vxge_mBIT(12)
-#define        VXGE_HW_ROCRC_ALARM_REG_QCQ_MULTI_EGB_RSVD_ERR  vxge_mBIT(13)
-#define        VXGE_HW_ROCRC_ALARM_REG_QCQ_MULTI_EGB_OWN_ERR   vxge_mBIT(14)
-#define        VXGE_HW_ROCRC_ALARM_REG_QCQ_MULTI_BYP_OWN_ERR   vxge_mBIT(15)
-#define        VXGE_HW_ROCRC_ALARM_REG_QCQ_OWN_NOT_ASSIGNED_ERR        vxge_mBIT(16)
-#define        VXGE_HW_ROCRC_ALARM_REG_QCQ_OWN_RSVD_SYNC_ERR   vxge_mBIT(17)
-#define        VXGE_HW_ROCRC_ALARM_REG_QCQ_LOST_EGB_ERR        vxge_mBIT(18)
-#define        VXGE_HW_ROCRC_ALARM_REG_RCQ_BYPQ0_OVERFLOW      vxge_mBIT(19)
-#define        VXGE_HW_ROCRC_ALARM_REG_RCQ_BYPQ1_OVERFLOW      vxge_mBIT(20)
-#define        VXGE_HW_ROCRC_ALARM_REG_RCQ_BYPQ2_OVERFLOW      vxge_mBIT(21)
-#define        VXGE_HW_ROCRC_ALARM_REG_NOA_WCT_CMD_FIFO_ERR    vxge_mBIT(22)
-/*0x00af0*/    u64     rocrc_alarm_mask;
-/*0x00af8*/    u64     rocrc_alarm_alarm;
-/*0x00b00*/    u64     wde0_alarm_reg;
-#define        VXGE_HW_WDE0_ALARM_REG_WDE0_DCC_SM_ERR  vxge_mBIT(0)
-#define        VXGE_HW_WDE0_ALARM_REG_WDE0_PRM_SM_ERR  vxge_mBIT(1)
-#define        VXGE_HW_WDE0_ALARM_REG_WDE0_CP_SM_ERR   vxge_mBIT(2)
-#define        VXGE_HW_WDE0_ALARM_REG_WDE0_CP_CMD_ERR  vxge_mBIT(3)
-#define        VXGE_HW_WDE0_ALARM_REG_WDE0_PCR_SM_ERR  vxge_mBIT(4)
-/*0x00b08*/    u64     wde0_alarm_mask;
-/*0x00b10*/    u64     wde0_alarm_alarm;
-/*0x00b18*/    u64     wde1_alarm_reg;
-#define        VXGE_HW_WDE1_ALARM_REG_WDE1_DCC_SM_ERR  vxge_mBIT(0)
-#define        VXGE_HW_WDE1_ALARM_REG_WDE1_PRM_SM_ERR  vxge_mBIT(1)
-#define        VXGE_HW_WDE1_ALARM_REG_WDE1_CP_SM_ERR   vxge_mBIT(2)
-#define        VXGE_HW_WDE1_ALARM_REG_WDE1_CP_CMD_ERR  vxge_mBIT(3)
-#define        VXGE_HW_WDE1_ALARM_REG_WDE1_PCR_SM_ERR  vxge_mBIT(4)
-/*0x00b20*/    u64     wde1_alarm_mask;
-/*0x00b28*/    u64     wde1_alarm_alarm;
-/*0x00b30*/    u64     wde2_alarm_reg;
-#define        VXGE_HW_WDE2_ALARM_REG_WDE2_DCC_SM_ERR  vxge_mBIT(0)
-#define        VXGE_HW_WDE2_ALARM_REG_WDE2_PRM_SM_ERR  vxge_mBIT(1)
-#define        VXGE_HW_WDE2_ALARM_REG_WDE2_CP_SM_ERR   vxge_mBIT(2)
-#define        VXGE_HW_WDE2_ALARM_REG_WDE2_CP_CMD_ERR  vxge_mBIT(3)
-#define        VXGE_HW_WDE2_ALARM_REG_WDE2_PCR_SM_ERR  vxge_mBIT(4)
-/*0x00b38*/    u64     wde2_alarm_mask;
-/*0x00b40*/    u64     wde2_alarm_alarm;
-/*0x00b48*/    u64     wde3_alarm_reg;
-#define        VXGE_HW_WDE3_ALARM_REG_WDE3_DCC_SM_ERR  vxge_mBIT(0)
-#define        VXGE_HW_WDE3_ALARM_REG_WDE3_PRM_SM_ERR  vxge_mBIT(1)
-#define        VXGE_HW_WDE3_ALARM_REG_WDE3_CP_SM_ERR   vxge_mBIT(2)
-#define        VXGE_HW_WDE3_ALARM_REG_WDE3_CP_CMD_ERR  vxge_mBIT(3)
-#define        VXGE_HW_WDE3_ALARM_REG_WDE3_PCR_SM_ERR  vxge_mBIT(4)
-/*0x00b50*/    u64     wde3_alarm_mask;
-/*0x00b58*/    u64     wde3_alarm_alarm;
-
-       u8      unused00be8[0x00be8-0x00b60];
-
-/*0x00be8*/    u64     rx_w_round_robin_0;
-#define VXGE_HW_RX_W_ROUND_ROBIN_0_RX_W_PRIORITY_SS_0(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_0_RX_W_PRIORITY_SS_1(val) vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_0_RX_W_PRIORITY_SS_2(val) vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_0_RX_W_PRIORITY_SS_3(val) vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_0_RX_W_PRIORITY_SS_4(val) vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_0_RX_W_PRIORITY_SS_5(val) vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_0_RX_W_PRIORITY_SS_6(val) vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_0_RX_W_PRIORITY_SS_7(val) vxge_vBIT(val, 59, 5)
-/*0x00bf0*/    u64     rx_w_round_robin_1;
-#define VXGE_HW_RX_W_ROUND_ROBIN_1_RX_W_PRIORITY_SS_8(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_1_RX_W_PRIORITY_SS_9(val) vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_1_RX_W_PRIORITY_SS_10(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_1_RX_W_PRIORITY_SS_11(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_1_RX_W_PRIORITY_SS_12(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_1_RX_W_PRIORITY_SS_13(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_1_RX_W_PRIORITY_SS_14(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_1_RX_W_PRIORITY_SS_15(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00bf8*/    u64     rx_w_round_robin_2;
-#define VXGE_HW_RX_W_ROUND_ROBIN_2_RX_W_PRIORITY_SS_16(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_2_RX_W_PRIORITY_SS_17(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_2_RX_W_PRIORITY_SS_18(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_2_RX_W_PRIORITY_SS_19(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_2_RX_W_PRIORITY_SS_20(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_2_RX_W_PRIORITY_SS_21(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_2_RX_W_PRIORITY_SS_22(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_2_RX_W_PRIORITY_SS_23(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c00*/    u64     rx_w_round_robin_3;
-#define VXGE_HW_RX_W_ROUND_ROBIN_3_RX_W_PRIORITY_SS_24(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_3_RX_W_PRIORITY_SS_25(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_3_RX_W_PRIORITY_SS_26(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_3_RX_W_PRIORITY_SS_27(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_3_RX_W_PRIORITY_SS_28(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_3_RX_W_PRIORITY_SS_29(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_3_RX_W_PRIORITY_SS_30(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_3_RX_W_PRIORITY_SS_31(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c08*/    u64     rx_w_round_robin_4;
-#define VXGE_HW_RX_W_ROUND_ROBIN_4_RX_W_PRIORITY_SS_32(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_4_RX_W_PRIORITY_SS_33(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_4_RX_W_PRIORITY_SS_34(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_4_RX_W_PRIORITY_SS_35(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_4_RX_W_PRIORITY_SS_36(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_4_RX_W_PRIORITY_SS_37(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_4_RX_W_PRIORITY_SS_38(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_4_RX_W_PRIORITY_SS_39(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c10*/    u64     rx_w_round_robin_5;
-#define VXGE_HW_RX_W_ROUND_ROBIN_5_RX_W_PRIORITY_SS_40(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_5_RX_W_PRIORITY_SS_41(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_5_RX_W_PRIORITY_SS_42(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_5_RX_W_PRIORITY_SS_43(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_5_RX_W_PRIORITY_SS_44(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_5_RX_W_PRIORITY_SS_45(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_5_RX_W_PRIORITY_SS_46(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_5_RX_W_PRIORITY_SS_47(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c18*/    u64     rx_w_round_robin_6;
-#define VXGE_HW_RX_W_ROUND_ROBIN_6_RX_W_PRIORITY_SS_48(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_6_RX_W_PRIORITY_SS_49(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_6_RX_W_PRIORITY_SS_50(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_6_RX_W_PRIORITY_SS_51(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_6_RX_W_PRIORITY_SS_52(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_6_RX_W_PRIORITY_SS_53(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_6_RX_W_PRIORITY_SS_54(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_6_RX_W_PRIORITY_SS_55(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c20*/    u64     rx_w_round_robin_7;
-#define VXGE_HW_RX_W_ROUND_ROBIN_7_RX_W_PRIORITY_SS_56(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_7_RX_W_PRIORITY_SS_57(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_7_RX_W_PRIORITY_SS_58(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_7_RX_W_PRIORITY_SS_59(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_7_RX_W_PRIORITY_SS_60(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_7_RX_W_PRIORITY_SS_61(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_7_RX_W_PRIORITY_SS_62(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_7_RX_W_PRIORITY_SS_63(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c28*/    u64     rx_w_round_robin_8;
-#define VXGE_HW_RX_W_ROUND_ROBIN_8_RX_W_PRIORITY_SS_64(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_8_RX_W_PRIORITY_SS_65(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_8_RX_W_PRIORITY_SS_66(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_8_RX_W_PRIORITY_SS_67(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_8_RX_W_PRIORITY_SS_68(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_8_RX_W_PRIORITY_SS_69(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_8_RX_W_PRIORITY_SS_70(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_8_RX_W_PRIORITY_SS_71(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c30*/    u64     rx_w_round_robin_9;
-#define VXGE_HW_RX_W_ROUND_ROBIN_9_RX_W_PRIORITY_SS_72(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_9_RX_W_PRIORITY_SS_73(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_9_RX_W_PRIORITY_SS_74(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_9_RX_W_PRIORITY_SS_75(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_9_RX_W_PRIORITY_SS_76(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_9_RX_W_PRIORITY_SS_77(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_9_RX_W_PRIORITY_SS_78(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_9_RX_W_PRIORITY_SS_79(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c38*/    u64     rx_w_round_robin_10;
-#define VXGE_HW_RX_W_ROUND_ROBIN_10_RX_W_PRIORITY_SS_80(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_10_RX_W_PRIORITY_SS_81(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_10_RX_W_PRIORITY_SS_82(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_10_RX_W_PRIORITY_SS_83(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_10_RX_W_PRIORITY_SS_84(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_10_RX_W_PRIORITY_SS_85(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_10_RX_W_PRIORITY_SS_86(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_10_RX_W_PRIORITY_SS_87(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c40*/    u64     rx_w_round_robin_11;
-#define VXGE_HW_RX_W_ROUND_ROBIN_11_RX_W_PRIORITY_SS_88(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_11_RX_W_PRIORITY_SS_89(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_11_RX_W_PRIORITY_SS_90(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_11_RX_W_PRIORITY_SS_91(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_11_RX_W_PRIORITY_SS_92(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_11_RX_W_PRIORITY_SS_93(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_11_RX_W_PRIORITY_SS_94(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_11_RX_W_PRIORITY_SS_95(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c48*/    u64     rx_w_round_robin_12;
-#define VXGE_HW_RX_W_ROUND_ROBIN_12_RX_W_PRIORITY_SS_96(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_12_RX_W_PRIORITY_SS_97(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_12_RX_W_PRIORITY_SS_98(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_12_RX_W_PRIORITY_SS_99(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_12_RX_W_PRIORITY_SS_100(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_12_RX_W_PRIORITY_SS_101(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_12_RX_W_PRIORITY_SS_102(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_12_RX_W_PRIORITY_SS_103(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c50*/    u64     rx_w_round_robin_13;
-#define VXGE_HW_RX_W_ROUND_ROBIN_13_RX_W_PRIORITY_SS_104(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_13_RX_W_PRIORITY_SS_105(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_13_RX_W_PRIORITY_SS_106(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_13_RX_W_PRIORITY_SS_107(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_13_RX_W_PRIORITY_SS_108(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_13_RX_W_PRIORITY_SS_109(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_13_RX_W_PRIORITY_SS_110(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_13_RX_W_PRIORITY_SS_111(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c58*/    u64     rx_w_round_robin_14;
-#define VXGE_HW_RX_W_ROUND_ROBIN_14_RX_W_PRIORITY_SS_112(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_14_RX_W_PRIORITY_SS_113(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_14_RX_W_PRIORITY_SS_114(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_14_RX_W_PRIORITY_SS_115(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_14_RX_W_PRIORITY_SS_116(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_14_RX_W_PRIORITY_SS_117(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_14_RX_W_PRIORITY_SS_118(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_14_RX_W_PRIORITY_SS_119(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c60*/    u64     rx_w_round_robin_15;
-#define VXGE_HW_RX_W_ROUND_ROBIN_15_RX_W_PRIORITY_SS_120(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_15_RX_W_PRIORITY_SS_121(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_15_RX_W_PRIORITY_SS_122(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_15_RX_W_PRIORITY_SS_123(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_15_RX_W_PRIORITY_SS_124(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_15_RX_W_PRIORITY_SS_125(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_15_RX_W_PRIORITY_SS_126(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_15_RX_W_PRIORITY_SS_127(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c68*/    u64     rx_w_round_robin_16;
-#define VXGE_HW_RX_W_ROUND_ROBIN_16_RX_W_PRIORITY_SS_128(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_16_RX_W_PRIORITY_SS_129(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_16_RX_W_PRIORITY_SS_130(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_16_RX_W_PRIORITY_SS_131(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_16_RX_W_PRIORITY_SS_132(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_16_RX_W_PRIORITY_SS_133(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_16_RX_W_PRIORITY_SS_134(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_16_RX_W_PRIORITY_SS_135(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c70*/    u64     rx_w_round_robin_17;
-#define VXGE_HW_RX_W_ROUND_ROBIN_17_RX_W_PRIORITY_SS_136(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_17_RX_W_PRIORITY_SS_137(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_17_RX_W_PRIORITY_SS_138(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_17_RX_W_PRIORITY_SS_139(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_17_RX_W_PRIORITY_SS_140(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_17_RX_W_PRIORITY_SS_141(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_17_RX_W_PRIORITY_SS_142(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_17_RX_W_PRIORITY_SS_143(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c78*/    u64     rx_w_round_robin_18;
-#define VXGE_HW_RX_W_ROUND_ROBIN_18_RX_W_PRIORITY_SS_144(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_18_RX_W_PRIORITY_SS_145(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_18_RX_W_PRIORITY_SS_146(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_18_RX_W_PRIORITY_SS_147(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_18_RX_W_PRIORITY_SS_148(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_18_RX_W_PRIORITY_SS_149(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_18_RX_W_PRIORITY_SS_150(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_18_RX_W_PRIORITY_SS_151(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c80*/    u64     rx_w_round_robin_19;
-#define VXGE_HW_RX_W_ROUND_ROBIN_19_RX_W_PRIORITY_SS_152(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_19_RX_W_PRIORITY_SS_153(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_19_RX_W_PRIORITY_SS_154(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_19_RX_W_PRIORITY_SS_155(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_19_RX_W_PRIORITY_SS_156(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_19_RX_W_PRIORITY_SS_157(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_19_RX_W_PRIORITY_SS_158(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_19_RX_W_PRIORITY_SS_159(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c88*/    u64     rx_w_round_robin_20;
-#define VXGE_HW_RX_W_ROUND_ROBIN_20_RX_W_PRIORITY_SS_160(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_20_RX_W_PRIORITY_SS_161(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_20_RX_W_PRIORITY_SS_162(val) \
-                                                       vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_20_RX_W_PRIORITY_SS_163(val) \
-                                                       vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_20_RX_W_PRIORITY_SS_164(val) \
-                                                       vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_20_RX_W_PRIORITY_SS_165(val) \
-                                                       vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_20_RX_W_PRIORITY_SS_166(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_20_RX_W_PRIORITY_SS_167(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00c90*/    u64     rx_w_round_robin_21;
-#define VXGE_HW_RX_W_ROUND_ROBIN_21_RX_W_PRIORITY_SS_168(val) \
-                                                       vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_21_RX_W_PRIORITY_SS_169(val) \
-                                                       vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_W_ROUND_ROBIN_21_RX_W_PRIORITY_SS_170(val) \
-                                                       vxge_vBIT(val, 19, 5)
-
-#define VXGE_HW_WRR_RING_SERVICE_STATES                        171
-#define VXGE_HW_WRR_RING_COUNT                         22
-
-/*0x00c98*/    u64     rx_queue_priority_0;
-#define VXGE_HW_RX_QUEUE_PRIORITY_0_RX_Q_NUMBER_0(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_0_RX_Q_NUMBER_1(val) vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_0_RX_Q_NUMBER_2(val) vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_0_RX_Q_NUMBER_3(val) vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_0_RX_Q_NUMBER_4(val) vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_0_RX_Q_NUMBER_5(val) vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_0_RX_Q_NUMBER_6(val) vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_0_RX_Q_NUMBER_7(val) vxge_vBIT(val, 59, 5)
-/*0x00ca0*/    u64     rx_queue_priority_1;
-#define VXGE_HW_RX_QUEUE_PRIORITY_1_RX_Q_NUMBER_8(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_1_RX_Q_NUMBER_9(val) vxge_vBIT(val, 11, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_1_RX_Q_NUMBER_10(val) vxge_vBIT(val, 19, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_1_RX_Q_NUMBER_11(val) vxge_vBIT(val, 27, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_1_RX_Q_NUMBER_12(val) vxge_vBIT(val, 35, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_1_RX_Q_NUMBER_13(val) vxge_vBIT(val, 43, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_1_RX_Q_NUMBER_14(val) vxge_vBIT(val, 51, 5)
-#define VXGE_HW_RX_QUEUE_PRIORITY_1_RX_Q_NUMBER_15(val) vxge_vBIT(val, 59, 5)
-/*0x00ca8*/    u64     rx_queue_priority_2;
-#define VXGE_HW_RX_QUEUE_PRIORITY_2_RX_Q_NUMBER_16(val) vxge_vBIT(val, 3, 5)
-       u8      unused00cc8[0x00cc8-0x00cb0];
-
-/*0x00cc8*/    u64     replication_queue_priority;
-#define        VXGE_HW_REPLICATION_QUEUE_PRIORITY_REPLICATION_QUEUE_PRIORITY(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00cd0*/    u64     rx_queue_select;
-#define VXGE_HW_RX_QUEUE_SELECT_NUMBER(n)      vxge_mBIT(n)
-#define        VXGE_HW_RX_QUEUE_SELECT_ENABLE_CODE     vxge_mBIT(15)
-#define        VXGE_HW_RX_QUEUE_SELECT_ENABLE_HIERARCHICAL_PRTY        vxge_mBIT(23)
-/*0x00cd8*/    u64     rqa_vpbp_ctrl;
-#define        VXGE_HW_RQA_VPBP_CTRL_WR_XON_DIS        vxge_mBIT(15)
-#define        VXGE_HW_RQA_VPBP_CTRL_ROCRC_DIS vxge_mBIT(23)
-#define        VXGE_HW_RQA_VPBP_CTRL_TXPE_DIS  vxge_mBIT(31)
-/*0x00ce0*/    u64     rx_multi_cast_ctrl;
-#define        VXGE_HW_RX_MULTI_CAST_CTRL_TIME_OUT_DIS vxge_mBIT(0)
-#define        VXGE_HW_RX_MULTI_CAST_CTRL_FRM_DROP_DIS vxge_mBIT(1)
-#define VXGE_HW_RX_MULTI_CAST_CTRL_NO_RXD_TIME_OUT_CNT(val) \
-                                                       vxge_vBIT(val, 2, 30)
-#define VXGE_HW_RX_MULTI_CAST_CTRL_TIME_OUT_CNT(val) vxge_vBIT(val, 32, 32)
-/*0x00ce8*/    u64     wde_prm_ctrl;
-#define VXGE_HW_WDE_PRM_CTRL_SPAV_THRESHOLD(val) vxge_vBIT(val, 2, 10)
-#define VXGE_HW_WDE_PRM_CTRL_SPLIT_THRESHOLD(val) vxge_vBIT(val, 18, 14)
-#define        VXGE_HW_WDE_PRM_CTRL_SPLIT_ON_1ST_ROW   vxge_mBIT(32)
-#define        VXGE_HW_WDE_PRM_CTRL_SPLIT_ON_ROW_BNDRY vxge_mBIT(33)
-#define VXGE_HW_WDE_PRM_CTRL_FB_ROW_SIZE(val) vxge_vBIT(val, 46, 2)
-/*0x00cf0*/    u64     noa_ctrl;
-#define VXGE_HW_NOA_CTRL_FRM_PRTY_QUOTA(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_NOA_CTRL_NON_FRM_PRTY_QUOTA(val) vxge_vBIT(val, 11, 5)
-#define        VXGE_HW_NOA_CTRL_IGNORE_KDFC_IF_STATUS  vxge_mBIT(16)
-#define VXGE_HW_NOA_CTRL_MAX_JOB_CNT_FOR_WDE0(val) vxge_vBIT(val, 37, 4)
-#define VXGE_HW_NOA_CTRL_MAX_JOB_CNT_FOR_WDE1(val) vxge_vBIT(val, 45, 4)
-#define VXGE_HW_NOA_CTRL_MAX_JOB_CNT_FOR_WDE2(val) vxge_vBIT(val, 53, 4)
-#define VXGE_HW_NOA_CTRL_MAX_JOB_CNT_FOR_WDE3(val) vxge_vBIT(val, 60, 4)
-/*0x00cf8*/    u64     phase_cfg;
-#define        VXGE_HW_PHASE_CFG_QCC_WR_PHASE_EN       vxge_mBIT(0)
-#define        VXGE_HW_PHASE_CFG_QCC_RD_PHASE_EN       vxge_mBIT(3)
-#define        VXGE_HW_PHASE_CFG_IMMM_WR_PHASE_EN      vxge_mBIT(7)
-#define        VXGE_HW_PHASE_CFG_IMMM_RD_PHASE_EN      vxge_mBIT(11)
-#define        VXGE_HW_PHASE_CFG_UMQM_WR_PHASE_EN      vxge_mBIT(15)
-#define        VXGE_HW_PHASE_CFG_UMQM_RD_PHASE_EN      vxge_mBIT(19)
-#define        VXGE_HW_PHASE_CFG_RCBM_WR_PHASE_EN      vxge_mBIT(23)
-#define        VXGE_HW_PHASE_CFG_RCBM_RD_PHASE_EN      vxge_mBIT(27)
-#define        VXGE_HW_PHASE_CFG_RXD_RC_WR_PHASE_EN    vxge_mBIT(31)
-#define        VXGE_HW_PHASE_CFG_RXD_RC_RD_PHASE_EN    vxge_mBIT(35)
-#define        VXGE_HW_PHASE_CFG_RXD_RHS_WR_PHASE_EN   vxge_mBIT(39)
-#define        VXGE_HW_PHASE_CFG_RXD_RHS_RD_PHASE_EN   vxge_mBIT(43)
-/*0x00d00*/    u64     rcq_bypq_cfg;
-#define VXGE_HW_RCQ_BYPQ_CFG_OVERFLOW_THRESHOLD(val) vxge_vBIT(val, 10, 22)
-#define VXGE_HW_RCQ_BYPQ_CFG_BYP_ON_THRESHOLD(val) vxge_vBIT(val, 39, 9)
-#define VXGE_HW_RCQ_BYPQ_CFG_BYP_OFF_THRESHOLD(val) vxge_vBIT(val, 55, 9)
-       u8      unused00e00[0x00e00-0x00d08];
-
-/*0x00e00*/    u64     doorbell_int_status;
-#define        VXGE_HW_DOORBELL_INT_STATUS_KDFC_ERR_REG_TXDMA_KDFC_INT vxge_mBIT(7)
-#define        VXGE_HW_DOORBELL_INT_STATUS_USDC_ERR_REG_TXDMA_USDC_INT vxge_mBIT(15)
-/*0x00e08*/    u64     doorbell_int_mask;
-/*0x00e10*/    u64     kdfc_err_reg;
-#define        VXGE_HW_KDFC_ERR_REG_KDFC_KDFC_ECC_SG_ERR       vxge_mBIT(7)
-#define        VXGE_HW_KDFC_ERR_REG_KDFC_KDFC_ECC_DB_ERR       vxge_mBIT(15)
-#define        VXGE_HW_KDFC_ERR_REG_KDFC_KDFC_SM_ERR_ALARM     vxge_mBIT(23)
-#define        VXGE_HW_KDFC_ERR_REG_KDFC_KDFC_MISC_ERR_1       vxge_mBIT(32)
-#define        VXGE_HW_KDFC_ERR_REG_KDFC_KDFC_PCIX_ERR vxge_mBIT(39)
-/*0x00e18*/    u64     kdfc_err_mask;
-/*0x00e20*/    u64     kdfc_err_reg_alarm;
-#define        VXGE_HW_KDFC_ERR_REG_ALARM_KDFC_KDFC_ECC_SG_ERR vxge_mBIT(7)
-#define        VXGE_HW_KDFC_ERR_REG_ALARM_KDFC_KDFC_ECC_DB_ERR vxge_mBIT(15)
-#define        VXGE_HW_KDFC_ERR_REG_ALARM_KDFC_KDFC_SM_ERR_ALARM       vxge_mBIT(23)
-#define        VXGE_HW_KDFC_ERR_REG_ALARM_KDFC_KDFC_MISC_ERR_1 vxge_mBIT(32)
-#define        VXGE_HW_KDFC_ERR_REG_ALARM_KDFC_KDFC_PCIX_ERR   vxge_mBIT(39)
-       u8      unused00e40[0x00e40-0x00e28];
-/*0x00e40*/    u64     kdfc_vp_partition_0;
-#define        VXGE_HW_KDFC_VP_PARTITION_0_ENABLE      vxge_mBIT(0)
-#define VXGE_HW_KDFC_VP_PARTITION_0_NUMBER_0(val) vxge_vBIT(val, 5, 3)
-#define VXGE_HW_KDFC_VP_PARTITION_0_LENGTH_0(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_VP_PARTITION_0_NUMBER_1(val) vxge_vBIT(val, 37, 3)
-#define VXGE_HW_KDFC_VP_PARTITION_0_LENGTH_1(val) vxge_vBIT(val, 49, 15)
-/*0x00e48*/    u64     kdfc_vp_partition_1;
-#define VXGE_HW_KDFC_VP_PARTITION_1_NUMBER_2(val) vxge_vBIT(val, 5, 3)
-#define VXGE_HW_KDFC_VP_PARTITION_1_LENGTH_2(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_VP_PARTITION_1_NUMBER_3(val) vxge_vBIT(val, 37, 3)
-#define VXGE_HW_KDFC_VP_PARTITION_1_LENGTH_3(val) vxge_vBIT(val, 49, 15)
-/*0x00e50*/    u64     kdfc_vp_partition_2;
-#define VXGE_HW_KDFC_VP_PARTITION_2_NUMBER_4(val) vxge_vBIT(val, 5, 3)
-#define VXGE_HW_KDFC_VP_PARTITION_2_LENGTH_4(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_VP_PARTITION_2_NUMBER_5(val) vxge_vBIT(val, 37, 3)
-#define VXGE_HW_KDFC_VP_PARTITION_2_LENGTH_5(val) vxge_vBIT(val, 49, 15)
-/*0x00e58*/    u64     kdfc_vp_partition_3;
-#define VXGE_HW_KDFC_VP_PARTITION_3_NUMBER_6(val) vxge_vBIT(val, 5, 3)
-#define VXGE_HW_KDFC_VP_PARTITION_3_LENGTH_6(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_VP_PARTITION_3_NUMBER_7(val) vxge_vBIT(val, 37, 3)
-#define VXGE_HW_KDFC_VP_PARTITION_3_LENGTH_7(val) vxge_vBIT(val, 49, 15)
-/*0x00e60*/    u64     kdfc_vp_partition_4;
-#define VXGE_HW_KDFC_VP_PARTITION_4_LENGTH_8(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_VP_PARTITION_4_LENGTH_9(val) vxge_vBIT(val, 49, 15)
-/*0x00e68*/    u64     kdfc_vp_partition_5;
-#define VXGE_HW_KDFC_VP_PARTITION_5_LENGTH_10(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_VP_PARTITION_5_LENGTH_11(val) vxge_vBIT(val, 49, 15)
-/*0x00e70*/    u64     kdfc_vp_partition_6;
-#define VXGE_HW_KDFC_VP_PARTITION_6_LENGTH_12(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_VP_PARTITION_6_LENGTH_13(val) vxge_vBIT(val, 49, 15)
-/*0x00e78*/    u64     kdfc_vp_partition_7;
-#define VXGE_HW_KDFC_VP_PARTITION_7_LENGTH_14(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_VP_PARTITION_7_LENGTH_15(val) vxge_vBIT(val, 49, 15)
-/*0x00e80*/    u64     kdfc_vp_partition_8;
-#define VXGE_HW_KDFC_VP_PARTITION_8_LENGTH_16(val) vxge_vBIT(val, 17, 15)
-/*0x00e88*/    u64     kdfc_w_round_robin_0;
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_0_NUMBER_0(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_0_NUMBER_1(val) vxge_vBIT(val, 11, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_0_NUMBER_2(val) vxge_vBIT(val, 19, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_0_NUMBER_3(val) vxge_vBIT(val, 27, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_0_NUMBER_4(val) vxge_vBIT(val, 35, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_0_NUMBER_5(val) vxge_vBIT(val, 43, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_0_NUMBER_6(val) vxge_vBIT(val, 51, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_0_NUMBER_7(val) vxge_vBIT(val, 59, 5)
-
-       u8      unused0f28[0x0f28-0x0e90];
-
-/*0x00f28*/    u64     kdfc_w_round_robin_20;
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_20_NUMBER_0(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_20_NUMBER_1(val) vxge_vBIT(val, 11, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_20_NUMBER_2(val) vxge_vBIT(val, 19, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_20_NUMBER_3(val) vxge_vBIT(val, 27, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_20_NUMBER_4(val) vxge_vBIT(val, 35, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_20_NUMBER_5(val) vxge_vBIT(val, 43, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_20_NUMBER_6(val) vxge_vBIT(val, 51, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_20_NUMBER_7(val) vxge_vBIT(val, 59, 5)
-
-#define VXGE_HW_WRR_FIFO_COUNT                         20
-
-       u8      unused0fc8[0x0fc8-0x0f30];
-
-/*0x00fc8*/    u64     kdfc_w_round_robin_40;
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_40_NUMBER_0(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_40_NUMBER_1(val) vxge_vBIT(val, 11, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_40_NUMBER_2(val) vxge_vBIT(val, 19, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_40_NUMBER_3(val) vxge_vBIT(val, 27, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_40_NUMBER_4(val) vxge_vBIT(val, 35, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_40_NUMBER_5(val) vxge_vBIT(val, 43, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_40_NUMBER_6(val) vxge_vBIT(val, 51, 5)
-#define VXGE_HW_KDFC_W_ROUND_ROBIN_40_NUMBER_7(val) vxge_vBIT(val, 59, 5)
-
-       u8      unused1068[0x01068-0x0fd0];
-
-/*0x01068*/    u64     kdfc_entry_type_sel_0;
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_0_NUMBER_0(val) vxge_vBIT(val, 6, 2)
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_0_NUMBER_1(val) vxge_vBIT(val, 14, 2)
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_0_NUMBER_2(val) vxge_vBIT(val, 22, 2)
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_0_NUMBER_3(val) vxge_vBIT(val, 30, 2)
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_0_NUMBER_4(val) vxge_vBIT(val, 38, 2)
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_0_NUMBER_5(val) vxge_vBIT(val, 46, 2)
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_0_NUMBER_6(val) vxge_vBIT(val, 54, 2)
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_0_NUMBER_7(val) vxge_vBIT(val, 62, 2)
-/*0x01070*/    u64     kdfc_entry_type_sel_1;
-#define VXGE_HW_KDFC_ENTRY_TYPE_SEL_1_NUMBER_8(val) vxge_vBIT(val, 6, 2)
-/*0x01078*/    u64     kdfc_fifo_0_ctrl;
-#define VXGE_HW_KDFC_FIFO_0_CTRL_WRR_NUMBER(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_WEIGHTED_RR_SERVICE_STATES             176
-#define VXGE_HW_WRR_FIFO_SERVICE_STATES                        153
-
-       u8      unused1100[0x01100-0x1080];
-
-/*0x01100*/    u64     kdfc_fifo_17_ctrl;
-#define VXGE_HW_KDFC_FIFO_17_CTRL_WRR_NUMBER(val) vxge_vBIT(val, 3, 5)
-
-       u8      unused1600[0x01600-0x1108];
-
-/*0x01600*/    u64     rxmac_int_status;
-#define        VXGE_HW_RXMAC_INT_STATUS_RXMAC_GEN_ERR_RXMAC_GEN_INT    vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_INT_STATUS_RXMAC_ECC_ERR_RXMAC_ECC_INT    vxge_mBIT(7)
-#define        VXGE_HW_RXMAC_INT_STATUS_RXMAC_VARIOUS_ERR_RXMAC_VARIOUS_INT \
-                                                               vxge_mBIT(11)
-/*0x01608*/    u64     rxmac_int_mask;
-       u8      unused01618[0x01618-0x01610];
-
-/*0x01618*/    u64     rxmac_gen_err_reg;
-/*0x01620*/    u64     rxmac_gen_err_mask;
-/*0x01628*/    u64     rxmac_gen_err_alarm;
-/*0x01630*/    u64     rxmac_ecc_err_reg;
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RMAC_PORT0_RMAC_RTS_PART_SG_ERR(val) \
-                                                       vxge_vBIT(val, 0, 4)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RMAC_PORT0_RMAC_RTS_PART_DB_ERR(val) \
-                                                       vxge_vBIT(val, 4, 4)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RMAC_PORT1_RMAC_RTS_PART_SG_ERR(val) \
-                                                       vxge_vBIT(val, 8, 4)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RMAC_PORT1_RMAC_RTS_PART_DB_ERR(val) \
-                                                       vxge_vBIT(val, 12, 4)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RMAC_PORT2_RMAC_RTS_PART_SG_ERR(val) \
-                                                       vxge_vBIT(val, 16, 4)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RMAC_PORT2_RMAC_RTS_PART_DB_ERR(val) \
-                                                       vxge_vBIT(val, 20, 4)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_DA_LKP_PRT0_SG_ERR(val) \
-                                                       vxge_vBIT(val, 24, 2)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_DA_LKP_PRT0_DB_ERR(val) \
-                                                       vxge_vBIT(val, 26, 2)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_DA_LKP_PRT1_SG_ERR(val) \
-                                                       vxge_vBIT(val, 28, 2)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_DA_LKP_PRT1_DB_ERR(val) \
-                                                       vxge_vBIT(val, 30, 2)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_VID_LKP_SG_ERR      vxge_mBIT(32)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_VID_LKP_DB_ERR      vxge_mBIT(33)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_PN_LKP_PRT0_SG_ERR  vxge_mBIT(34)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_PN_LKP_PRT0_DB_ERR  vxge_mBIT(35)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_PN_LKP_PRT1_SG_ERR  vxge_mBIT(36)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_PN_LKP_PRT1_DB_ERR  vxge_mBIT(37)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_PN_LKP_PRT2_SG_ERR  vxge_mBIT(38)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_PN_LKP_PRT2_DB_ERR  vxge_mBIT(39)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_RTH_MASK_SG_ERR(val) \
-                                                       vxge_vBIT(val, 40, 7)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_RTH_MASK_DB_ERR(val) \
-                                                       vxge_vBIT(val, 47, 7)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_RTH_LKP_SG_ERR(val) \
-                                                       vxge_vBIT(val, 54, 3)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_RTH_LKP_DB_ERR(val) \
-                                                       vxge_vBIT(val, 57, 3)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_DS_LKP_SG_ERR \
-                                                       vxge_mBIT(60)
-#define        VXGE_HW_RXMAC_ECC_ERR_REG_RTSJ_RMAC_DS_LKP_DB_ERR \
-                                                       vxge_mBIT(61)
-/*0x01638*/    u64     rxmac_ecc_err_mask;
-/*0x01640*/    u64     rxmac_ecc_err_alarm;
-/*0x01648*/    u64     rxmac_various_err_reg;
-#define        VXGE_HW_RXMAC_VARIOUS_ERR_REG_RMAC_RMAC_PORT0_FSM_ERR   vxge_mBIT(0)
-#define        VXGE_HW_RXMAC_VARIOUS_ERR_REG_RMAC_RMAC_PORT1_FSM_ERR   vxge_mBIT(1)
-#define        VXGE_HW_RXMAC_VARIOUS_ERR_REG_RMAC_RMAC_PORT2_FSM_ERR   vxge_mBIT(2)
-#define        VXGE_HW_RXMAC_VARIOUS_ERR_REG_RMACJ_RMACJ_FSM_ERR       vxge_mBIT(3)
-/*0x01650*/    u64     rxmac_various_err_mask;
-/*0x01658*/    u64     rxmac_various_err_alarm;
-/*0x01660*/    u64     rxmac_gen_cfg;
-#define        VXGE_HW_RXMAC_GEN_CFG_SCALE_RMAC_UTIL   vxge_mBIT(11)
-/*0x01668*/    u64     rxmac_authorize_all_addr;
-#define VXGE_HW_RXMAC_AUTHORIZE_ALL_ADDR_VP(n) vxge_mBIT(n)
-/*0x01670*/    u64     rxmac_authorize_all_vid;
-#define VXGE_HW_RXMAC_AUTHORIZE_ALL_VID_VP(n)  vxge_mBIT(n)
-       u8      unused016c0[0x016c0-0x01678];
-
-/*0x016c0*/    u64     rxmac_red_rate_repl_queue;
-#define VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_CRATE_THR0(val) vxge_vBIT(val, 0, 4)
-#define VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_CRATE_THR1(val) vxge_vBIT(val, 4, 4)
-#define VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_CRATE_THR2(val) vxge_vBIT(val, 8, 4)
-#define VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_CRATE_THR3(val) vxge_vBIT(val, 12, 4)
-#define VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_FRATE_THR0(val) vxge_vBIT(val, 16, 4)
-#define VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_FRATE_THR1(val) vxge_vBIT(val, 20, 4)
-#define VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_FRATE_THR2(val) vxge_vBIT(val, 24, 4)
-#define VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_FRATE_THR3(val) vxge_vBIT(val, 28, 4)
-#define        VXGE_HW_RXMAC_RED_RATE_REPL_QUEUE_TRICKLE_EN    vxge_mBIT(35)
-       u8      unused016e0[0x016e0-0x016c8];
-
-/*0x016e0*/    u64     rxmac_cfg0_port[3];
-#define        VXGE_HW_RXMAC_CFG0_PORT_RMAC_EN vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_CFG0_PORT_STRIP_FCS       vxge_mBIT(7)
-#define        VXGE_HW_RXMAC_CFG0_PORT_DISCARD_PFRM    vxge_mBIT(11)
-#define        VXGE_HW_RXMAC_CFG0_PORT_IGNORE_FCS_ERR  vxge_mBIT(15)
-#define        VXGE_HW_RXMAC_CFG0_PORT_IGNORE_LONG_ERR vxge_mBIT(19)
-#define        VXGE_HW_RXMAC_CFG0_PORT_IGNORE_USIZED_ERR       vxge_mBIT(23)
-#define        VXGE_HW_RXMAC_CFG0_PORT_IGNORE_LEN_MISMATCH     vxge_mBIT(27)
-#define VXGE_HW_RXMAC_CFG0_PORT_MAX_PYLD_LEN(val) vxge_vBIT(val, 50, 14)
-       u8      unused01710[0x01710-0x016f8];
-
-/*0x01710*/    u64     rxmac_cfg2_port[3];
-#define        VXGE_HW_RXMAC_CFG2_PORT_PROM_EN vxge_mBIT(3)
-/*0x01728*/    u64     rxmac_pause_cfg_port[3];
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_GEN_EN     vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_RCV_EN     vxge_mBIT(7)
-#define VXGE_HW_RXMAC_PAUSE_CFG_PORT_ACCEL_SEND(val) vxge_vBIT(val, 9, 3)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_DUAL_THR   vxge_mBIT(15)
-#define VXGE_HW_RXMAC_PAUSE_CFG_PORT_HIGH_PTIME(val) vxge_vBIT(val, 20, 16)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_IGNORE_PF_FCS_ERR  vxge_mBIT(39)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_IGNORE_PF_LEN_ERR  vxge_mBIT(43)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_LIMITER_EN vxge_mBIT(47)
-#define VXGE_HW_RXMAC_PAUSE_CFG_PORT_MAX_LIMIT(val) vxge_vBIT(val, 48, 8)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_PERMIT_RATEMGMT_CTRL       vxge_mBIT(59)
-       u8      unused01758[0x01758-0x01740];
-
-/*0x01758*/    u64     rxmac_red_cfg0_port[3];
-#define VXGE_HW_RXMAC_RED_CFG0_PORT_RED_EN_VP(n)       vxge_mBIT(n)
-/*0x01770*/    u64     rxmac_red_cfg1_port[3];
-#define        VXGE_HW_RXMAC_RED_CFG1_PORT_FINE_EN     vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_RED_CFG1_PORT_RED_EN_REPL_QUEUE   vxge_mBIT(11)
-/*0x01788*/    u64     rxmac_red_cfg2_port[3];
-#define VXGE_HW_RXMAC_RED_CFG2_PORT_TRICKLE_EN_VP(n)   vxge_mBIT(n)
-/*0x017a0*/    u64     rxmac_link_util_port[3];
-#define        VXGE_HW_RXMAC_LINK_UTIL_PORT_RMAC_RMAC_UTILIZATION(val) \
-                                                       vxge_vBIT(val, 1, 7)
-#define VXGE_HW_RXMAC_LINK_UTIL_PORT_RMAC_UTIL_CFG(val) vxge_vBIT(val, 8, 4)
-#define VXGE_HW_RXMAC_LINK_UTIL_PORT_RMAC_RMAC_FRAC_UTIL(val) \
-                                                       vxge_vBIT(val, 12, 4)
-#define VXGE_HW_RXMAC_LINK_UTIL_PORT_RMAC_PKT_WEIGHT(val) vxge_vBIT(val, 16, 4)
-#define        VXGE_HW_RXMAC_LINK_UTIL_PORT_RMAC_RMAC_SCALE_FACTOR     vxge_mBIT(23)
-       u8      unused017d0[0x017d0-0x017b8];
-
-/*0x017d0*/    u64     rxmac_status_port[3];
-#define        VXGE_HW_RXMAC_STATUS_PORT_RMAC_RX_FRM_RCVD      vxge_mBIT(3)
-       u8      unused01800[0x01800-0x017e8];
-
-/*0x01800*/    u64     rxmac_rx_pa_cfg0;
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_IGNORE_FRAME_ERR       vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_SUPPORT_SNAP_AB_N      vxge_mBIT(7)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_SEARCH_FOR_HAO vxge_mBIT(18)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_SUPPORT_MOBILE_IPV6_HDRS       vxge_mBIT(19)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_IPV6_STOP_SEARCHING    vxge_mBIT(23)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_NO_PS_IF_UNKNOWN       vxge_mBIT(27)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_SEARCH_FOR_ETYPE       vxge_mBIT(35)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_TOSS_ANY_FRM_IF_L3_CSUM_ERR    vxge_mBIT(39)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_TOSS_OFFLD_FRM_IF_L3_CSUM_ERR  vxge_mBIT(43)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_TOSS_ANY_FRM_IF_L4_CSUM_ERR    vxge_mBIT(47)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_TOSS_OFFLD_FRM_IF_L4_CSUM_ERR  vxge_mBIT(51)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_TOSS_ANY_FRM_IF_RPA_ERR        vxge_mBIT(55)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_TOSS_OFFLD_FRM_IF_RPA_ERR      vxge_mBIT(59)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_JUMBO_SNAP_EN  vxge_mBIT(63)
-/*0x01808*/    u64     rxmac_rx_pa_cfg1;
-#define        VXGE_HW_RXMAC_RX_PA_CFG1_REPL_IPV4_TCP_INCL_PH  vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_RX_PA_CFG1_REPL_IPV6_TCP_INCL_PH  vxge_mBIT(7)
-#define        VXGE_HW_RXMAC_RX_PA_CFG1_REPL_IPV4_UDP_INCL_PH  vxge_mBIT(11)
-#define        VXGE_HW_RXMAC_RX_PA_CFG1_REPL_IPV6_UDP_INCL_PH  vxge_mBIT(15)
-#define        VXGE_HW_RXMAC_RX_PA_CFG1_REPL_L4_INCL_CF        vxge_mBIT(19)
-#define        VXGE_HW_RXMAC_RX_PA_CFG1_REPL_STRIP_VLAN_TAG    vxge_mBIT(23)
-       u8      unused01828[0x01828-0x01810];
-
-/*0x01828*/    u64     rts_mgr_cfg0;
-#define        VXGE_HW_RTS_MGR_CFG0_RTS_DP_SP_PRIORITY vxge_mBIT(3)
-#define VXGE_HW_RTS_MGR_CFG0_FLEX_L4PRTCL_VALUE(val) vxge_vBIT(val, 24, 8)
-#define        VXGE_HW_RTS_MGR_CFG0_ICMP_TRASH vxge_mBIT(35)
-#define        VXGE_HW_RTS_MGR_CFG0_TCPSYN_TRASH       vxge_mBIT(39)
-#define        VXGE_HW_RTS_MGR_CFG0_ZL4PYLD_TRASH      vxge_mBIT(43)
-#define        VXGE_HW_RTS_MGR_CFG0_L4PRTCL_TCP_TRASH  vxge_mBIT(47)
-#define        VXGE_HW_RTS_MGR_CFG0_L4PRTCL_UDP_TRASH  vxge_mBIT(51)
-#define        VXGE_HW_RTS_MGR_CFG0_L4PRTCL_FLEX_TRASH vxge_mBIT(55)
-#define        VXGE_HW_RTS_MGR_CFG0_IPFRAG_TRASH       vxge_mBIT(59)
-/*0x01830*/    u64     rts_mgr_cfg1;
-#define        VXGE_HW_RTS_MGR_CFG1_DA_ACTIVE_TABLE    vxge_mBIT(3)
-#define        VXGE_HW_RTS_MGR_CFG1_PN_ACTIVE_TABLE    vxge_mBIT(7)
-/*0x01838*/    u64     rts_mgr_criteria_priority;
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_ETYPE(val) vxge_vBIT(val, 5, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_ICMP_TCPSYN(val) vxge_vBIT(val, 9, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_L4PN(val) vxge_vBIT(val, 13, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_RANGE_L4PN(val) vxge_vBIT(val, 17, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_RTH_IT(val) vxge_vBIT(val, 21, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_DS(val) vxge_vBIT(val, 25, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_QOS(val) vxge_vBIT(val, 29, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_ZL4PYLD(val) vxge_vBIT(val, 33, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_L4PRTCL(val) vxge_vBIT(val, 37, 3)
-/*0x01840*/    u64     rts_mgr_da_pause_cfg;
-#define VXGE_HW_RTS_MGR_DA_PAUSE_CFG_VPATH_VECTOR(val) vxge_vBIT(val, 0, 17)
-/*0x01848*/    u64     rts_mgr_da_slow_proto_cfg;
-#define VXGE_HW_RTS_MGR_DA_SLOW_PROTO_CFG_VPATH_VECTOR(val) \
-                                                       vxge_vBIT(val, 0, 17)
-       u8      unused01890[0x01890-0x01850];
-/*0x01890*/    u64     rts_mgr_cbasin_cfg;
-       u8      unused01968[0x01968-0x01898];
-
-/*0x01968*/    u64     dbg_stat_rx_any_frms;
-#define VXGE_HW_DBG_STAT_RX_ANY_FRMS_PORT0_RX_ANY_FRMS(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_DBG_STAT_RX_ANY_FRMS_PORT1_RX_ANY_FRMS(val) vxge_vBIT(val, 8, 8)
-#define VXGE_HW_DBG_STAT_RX_ANY_FRMS_PORT2_RX_ANY_FRMS(val) \
-                                                       vxge_vBIT(val, 16, 8)
-       u8      unused01a00[0x01a00-0x01970];
-
-/*0x01a00*/    u64     rxmac_red_rate_vp[17];
-#define VXGE_HW_RXMAC_RED_RATE_VP_CRATE_THR0(val) vxge_vBIT(val, 0, 4)
-#define VXGE_HW_RXMAC_RED_RATE_VP_CRATE_THR1(val) vxge_vBIT(val, 4, 4)
-#define VXGE_HW_RXMAC_RED_RATE_VP_CRATE_THR2(val) vxge_vBIT(val, 8, 4)
-#define VXGE_HW_RXMAC_RED_RATE_VP_CRATE_THR3(val) vxge_vBIT(val, 12, 4)
-#define VXGE_HW_RXMAC_RED_RATE_VP_FRATE_THR0(val) vxge_vBIT(val, 16, 4)
-#define VXGE_HW_RXMAC_RED_RATE_VP_FRATE_THR1(val) vxge_vBIT(val, 20, 4)
-#define VXGE_HW_RXMAC_RED_RATE_VP_FRATE_THR2(val) vxge_vBIT(val, 24, 4)
-#define VXGE_HW_RXMAC_RED_RATE_VP_FRATE_THR3(val) vxge_vBIT(val, 28, 4)
-       u8      unused01e00[0x01e00-0x01a88];
-
-/*0x01e00*/    u64     xgmac_int_status;
-#define        VXGE_HW_XGMAC_INT_STATUS_XMAC_GEN_ERR_XMAC_GEN_INT      vxge_mBIT(3)
-#define        VXGE_HW_XGMAC_INT_STATUS_XMAC_LINK_ERR_PORT0_XMAC_LINK_INT_PORT0 \
-                                                               vxge_mBIT(7)
-#define        VXGE_HW_XGMAC_INT_STATUS_XMAC_LINK_ERR_PORT1_XMAC_LINK_INT_PORT1 \
-                                                               vxge_mBIT(11)
-#define        VXGE_HW_XGMAC_INT_STATUS_XGXS_GEN_ERR_XGXS_GEN_INT      vxge_mBIT(15)
-#define        VXGE_HW_XGMAC_INT_STATUS_ASIC_NTWK_ERR_ASIC_NTWK_INT    vxge_mBIT(19)
-#define        VXGE_HW_XGMAC_INT_STATUS_ASIC_GPIO_ERR_ASIC_GPIO_INT    vxge_mBIT(23)
-/*0x01e08*/    u64     xgmac_int_mask;
-/*0x01e10*/    u64     xmac_gen_err_reg;
-#define        VXGE_HW_XMAC_GEN_ERR_REG_LAGC_LAG_PORT0_ACTOR_CHURN_DETECTED \
-                                                               vxge_mBIT(7)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_LAGC_LAG_PORT0_PARTNER_CHURN_DETECTED \
-                                                               vxge_mBIT(11)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_LAGC_LAG_PORT0_RECEIVED_LACPDU vxge_mBIT(15)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_LAGC_LAG_PORT1_ACTOR_CHURN_DETECTED \
-                                                               vxge_mBIT(19)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_LAGC_LAG_PORT1_PARTNER_CHURN_DETECTED \
-                                                               vxge_mBIT(23)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_LAGC_LAG_PORT1_RECEIVED_LACPDU vxge_mBIT(27)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XLCM_LAG_FAILOVER_DETECTED     vxge_mBIT(31)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE0_SG_ERR(val) \
-                                                       vxge_vBIT(val, 40, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE0_DB_ERR(val) \
-                                                       vxge_vBIT(val, 42, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE1_SG_ERR(val) \
-                                                       vxge_vBIT(val, 44, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE1_DB_ERR(val) \
-                                                       vxge_vBIT(val, 46, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE2_SG_ERR(val) \
-                                                       vxge_vBIT(val, 48, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE2_DB_ERR(val) \
-                                                       vxge_vBIT(val, 50, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE3_SG_ERR(val) \
-                                                       vxge_vBIT(val, 52, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE3_DB_ERR(val) \
-                                                       vxge_vBIT(val, 54, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE4_SG_ERR(val) \
-                                                       vxge_vBIT(val, 56, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XSTATS_RMAC_STATS_TILE4_DB_ERR(val) \
-                                                       vxge_vBIT(val, 58, 2)
-#define        VXGE_HW_XMAC_GEN_ERR_REG_XMACJ_XMAC_FSM_ERR     vxge_mBIT(63)
-/*0x01e18*/    u64     xmac_gen_err_mask;
-/*0x01e20*/    u64     xmac_gen_err_alarm;
-/*0x01e28*/    u64     xmac_link_err_port0_reg;
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMACJ_PORT_DOWN  vxge_mBIT(3)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMACJ_PORT_UP    vxge_mBIT(7)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMACJ_PORT_WENT_DOWN     vxge_mBIT(11)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMACJ_PORT_WENT_UP       vxge_mBIT(15)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMACJ_PORT_REAFFIRMED_FAULT \
-                                                               vxge_mBIT(19)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMACJ_PORT_REAFFIRMED_OK vxge_mBIT(23)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMACJ_LINK_DOWN  vxge_mBIT(27)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMACJ_LINK_UP    vxge_mBIT(31)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_RATEMGMT_RATE_CHANGE     vxge_mBIT(35)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_RATEMGMT_LASI_INV        vxge_mBIT(39)
-#define        VXGE_HW_XMAC_LINK_ERR_PORT_REG_XMDIO_MDIO_MGR_ACCESS_COMPLETE \
-                                                               vxge_mBIT(47)
-/*0x01e30*/    u64     xmac_link_err_port0_mask;
-/*0x01e38*/    u64     xmac_link_err_port0_alarm;
-/*0x01e40*/    u64     xmac_link_err_port1_reg;
-/*0x01e48*/    u64     xmac_link_err_port1_mask;
-/*0x01e50*/    u64     xmac_link_err_port1_alarm;
-/*0x01e58*/    u64     xgxs_gen_err_reg;
-#define        VXGE_HW_XGXS_GEN_ERR_REG_XGXS_XGXS_FSM_ERR      vxge_mBIT(63)
-/*0x01e60*/    u64     xgxs_gen_err_mask;
-/*0x01e68*/    u64     xgxs_gen_err_alarm;
-/*0x01e70*/    u64     asic_ntwk_err_reg;
-#define        VXGE_HW_ASIC_NTWK_ERR_REG_XMACJ_NTWK_DOWN       vxge_mBIT(3)
-#define        VXGE_HW_ASIC_NTWK_ERR_REG_XMACJ_NTWK_UP vxge_mBIT(7)
-#define        VXGE_HW_ASIC_NTWK_ERR_REG_XMACJ_NTWK_WENT_DOWN  vxge_mBIT(11)
-#define        VXGE_HW_ASIC_NTWK_ERR_REG_XMACJ_NTWK_WENT_UP    vxge_mBIT(15)
-#define        VXGE_HW_ASIC_NTWK_ERR_REG_XMACJ_NTWK_REAFFIRMED_FAULT   vxge_mBIT(19)
-#define        VXGE_HW_ASIC_NTWK_ERR_REG_XMACJ_NTWK_REAFFIRMED_OK      vxge_mBIT(23)
-/*0x01e78*/    u64     asic_ntwk_err_mask;
-/*0x01e80*/    u64     asic_ntwk_err_alarm;
-/*0x01e88*/    u64     asic_gpio_err_reg;
-#define VXGE_HW_ASIC_GPIO_ERR_REG_XMACJ_GPIO_INT(n)    vxge_mBIT(n)
-/*0x01e90*/    u64     asic_gpio_err_mask;
-/*0x01e98*/    u64     asic_gpio_err_alarm;
-/*0x01ea0*/    u64     xgmac_gen_status;
-#define        VXGE_HW_XGMAC_GEN_STATUS_XMACJ_NTWK_OK  vxge_mBIT(3)
-#define        VXGE_HW_XGMAC_GEN_STATUS_XMACJ_NTWK_DATA_RATE   vxge_mBIT(11)
-/*0x01ea8*/    u64     xgmac_gen_fw_memo_status;
-#define        VXGE_HW_XGMAC_GEN_FW_MEMO_STATUS_XMACJ_EVENTS_PENDING(val) \
-                                                       vxge_vBIT(val, 0, 17)
-/*0x01eb0*/    u64     xgmac_gen_fw_memo_mask;
-#define VXGE_HW_XGMAC_GEN_FW_MEMO_MASK_MASK(val) vxge_vBIT(val, 0, 64)
-/*0x01eb8*/    u64     xgmac_gen_fw_vpath_to_vsport_status;
-#define        VXGE_HW_XGMAC_GEN_FW_VPATH_TO_VSPORT_STATUS_XMACJ_EVENTS_PENDING(val) \
-                                               vxge_vBIT(val, 0, 17)
-/*0x01ec0*/    u64     xgmac_main_cfg_port[2];
-#define        VXGE_HW_XGMAC_MAIN_CFG_PORT_PORT_EN     vxge_mBIT(3)
-       u8      unused01f40[0x01f40-0x01ed0];
-
-/*0x01f40*/    u64     xmac_gen_cfg;
-#define VXGE_HW_XMAC_GEN_CFG_RATEMGMT_MAC_RATE_SEL(val) vxge_vBIT(val, 2, 2)
-#define        VXGE_HW_XMAC_GEN_CFG_TX_HEAD_DROP_WHEN_FAULT    vxge_mBIT(7)
-#define        VXGE_HW_XMAC_GEN_CFG_FAULT_BEHAVIOUR    vxge_mBIT(27)
-#define VXGE_HW_XMAC_GEN_CFG_PERIOD_NTWK_UP(val) vxge_vBIT(val, 28, 4)
-#define VXGE_HW_XMAC_GEN_CFG_PERIOD_NTWK_DOWN(val) vxge_vBIT(val, 32, 4)
-/*0x01f48*/    u64     xmac_timestamp;
-#define        VXGE_HW_XMAC_TIMESTAMP_EN       vxge_mBIT(3)
-#define VXGE_HW_XMAC_TIMESTAMP_USE_LINK_ID(val) vxge_vBIT(val, 6, 2)
-#define VXGE_HW_XMAC_TIMESTAMP_INTERVAL(val) vxge_vBIT(val, 12, 4)
-#define        VXGE_HW_XMAC_TIMESTAMP_TIMER_RESTART    vxge_mBIT(19)
-#define VXGE_HW_XMAC_TIMESTAMP_XMACJ_ROLLOVER_CNT(val) vxge_vBIT(val, 32, 16)
-/*0x01f50*/    u64     xmac_stats_gen_cfg;
-#define VXGE_HW_XMAC_STATS_GEN_CFG_PRTAGGR_CUM_TIMER(val) vxge_vBIT(val, 4, 4)
-#define VXGE_HW_XMAC_STATS_GEN_CFG_VPATH_CUM_TIMER(val) vxge_vBIT(val, 8, 4)
-#define        VXGE_HW_XMAC_STATS_GEN_CFG_VLAN_HANDLING        vxge_mBIT(15)
-/*0x01f58*/    u64     xmac_stats_sys_cmd;
-#define VXGE_HW_XMAC_STATS_SYS_CMD_OP(val) vxge_vBIT(val, 5, 3)
-#define        VXGE_HW_XMAC_STATS_SYS_CMD_STROBE       vxge_mBIT(15)
-#define VXGE_HW_XMAC_STATS_SYS_CMD_LOC_SEL(val) vxge_vBIT(val, 27, 5)
-#define VXGE_HW_XMAC_STATS_SYS_CMD_OFFSET_SEL(val) vxge_vBIT(val, 32, 8)
-/*0x01f60*/    u64     xmac_stats_sys_data;
-#define VXGE_HW_XMAC_STATS_SYS_DATA_XSMGR_DATA(val) vxge_vBIT(val, 0, 64)
-       u8      unused01f80[0x01f80-0x01f68];
-
-/*0x01f80*/    u64     asic_ntwk_ctrl;
-#define        VXGE_HW_ASIC_NTWK_CTRL_REQ_TEST_NTWK    vxge_mBIT(3)
-#define        VXGE_HW_ASIC_NTWK_CTRL_PORT0_REQ_TEST_PORT      vxge_mBIT(11)
-#define        VXGE_HW_ASIC_NTWK_CTRL_PORT1_REQ_TEST_PORT      vxge_mBIT(15)
-/*0x01f88*/    u64     asic_ntwk_cfg_show_port_info;
-#define VXGE_HW_ASIC_NTWK_CFG_SHOW_PORT_INFO_VP(n)     vxge_mBIT(n)
-/*0x01f90*/    u64     asic_ntwk_cfg_port_num;
-#define VXGE_HW_ASIC_NTWK_CFG_PORT_NUM_VP(n)   vxge_mBIT(n)
-/*0x01f98*/    u64     xmac_cfg_port[3];
-#define        VXGE_HW_XMAC_CFG_PORT_XGMII_LOOPBACK    vxge_mBIT(3)
-#define        VXGE_HW_XMAC_CFG_PORT_XGMII_REVERSE_LOOPBACK    vxge_mBIT(7)
-#define        VXGE_HW_XMAC_CFG_PORT_XGMII_TX_BEHAV    vxge_mBIT(11)
-#define        VXGE_HW_XMAC_CFG_PORT_XGMII_RX_BEHAV    vxge_mBIT(15)
-/*0x01fb0*/    u64     xmac_station_addr_port[2];
-#define VXGE_HW_XMAC_STATION_ADDR_PORT_MAC_ADDR(val) vxge_vBIT(val, 0, 48)
-       u8      unused02020[0x02020-0x01fc0];
-
-/*0x02020*/    u64     lag_cfg;
-#define        VXGE_HW_LAG_CFG_EN      vxge_mBIT(3)
-#define VXGE_HW_LAG_CFG_MODE(val) vxge_vBIT(val, 6, 2)
-#define        VXGE_HW_LAG_CFG_TX_DISCARD_BEHAV        vxge_mBIT(11)
-#define        VXGE_HW_LAG_CFG_RX_DISCARD_BEHAV        vxge_mBIT(15)
-#define        VXGE_HW_LAG_CFG_PREF_INDIV_PORT_NUM     vxge_mBIT(19)
-/*0x02028*/    u64     lag_status;
-#define        VXGE_HW_LAG_STATUS_XLCM_WAITING_TO_FAILBACK     vxge_mBIT(3)
-#define VXGE_HW_LAG_STATUS_XLCM_TIMER_VAL_COLD_FAILOVER(val) \
-                                                       vxge_vBIT(val, 8, 8)
-/*0x02030*/    u64     lag_active_passive_cfg;
-#define        VXGE_HW_LAG_ACTIVE_PASSIVE_CFG_HOT_STANDBY      vxge_mBIT(3)
-#define        VXGE_HW_LAG_ACTIVE_PASSIVE_CFG_LACP_DECIDES     vxge_mBIT(7)
-#define        VXGE_HW_LAG_ACTIVE_PASSIVE_CFG_PREF_ACTIVE_PORT_NUM     vxge_mBIT(11)
-#define        VXGE_HW_LAG_ACTIVE_PASSIVE_CFG_AUTO_FAILBACK    vxge_mBIT(15)
-#define        VXGE_HW_LAG_ACTIVE_PASSIVE_CFG_FAILBACK_EN      vxge_mBIT(19)
-#define        VXGE_HW_LAG_ACTIVE_PASSIVE_CFG_COLD_FAILOVER_TIMEOUT(val) \
-                                                       vxge_vBIT(val, 32, 16)
-       u8      unused02040[0x02040-0x02038];
-
-/*0x02040*/    u64     lag_lacp_cfg;
-#define        VXGE_HW_LAG_LACP_CFG_EN vxge_mBIT(3)
-#define        VXGE_HW_LAG_LACP_CFG_LACP_BEGIN vxge_mBIT(7)
-#define        VXGE_HW_LAG_LACP_CFG_DISCARD_LACP       vxge_mBIT(11)
-#define        VXGE_HW_LAG_LACP_CFG_LIBERAL_LEN_CHK    vxge_mBIT(15)
-/*0x02048*/    u64     lag_timer_cfg_1;
-#define VXGE_HW_LAG_TIMER_CFG_1_FAST_PER(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_LAG_TIMER_CFG_1_SLOW_PER(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_LAG_TIMER_CFG_1_SHORT_TIMEOUT(val) vxge_vBIT(val, 32, 16)
-#define VXGE_HW_LAG_TIMER_CFG_1_LONG_TIMEOUT(val) vxge_vBIT(val, 48, 16)
-/*0x02050*/    u64     lag_timer_cfg_2;
-#define VXGE_HW_LAG_TIMER_CFG_2_CHURN_DET(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_LAG_TIMER_CFG_2_AGGR_WAIT(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_LAG_TIMER_CFG_2_SHORT_TIMER_SCALE(val) vxge_vBIT(val, 32, 16)
-#define VXGE_HW_LAG_TIMER_CFG_2_LONG_TIMER_SCALE(val)  vxge_vBIT(val, 48, 16)
-/*0x02058*/    u64     lag_sys_id;
-#define VXGE_HW_LAG_SYS_ID_ADDR(val) vxge_vBIT(val, 0, 48)
-#define        VXGE_HW_LAG_SYS_ID_USE_PORT_ADDR        vxge_mBIT(51)
-#define        VXGE_HW_LAG_SYS_ID_ADDR_SEL     vxge_mBIT(55)
-/*0x02060*/    u64     lag_sys_cfg;
-#define VXGE_HW_LAG_SYS_CFG_SYS_PRI(val) vxge_vBIT(val, 0, 16)
-       u8      unused02070[0x02070-0x02068];
-
-/*0x02070*/    u64     lag_aggr_addr_cfg[2];
-#define VXGE_HW_LAG_AGGR_ADDR_CFG_ADDR(val) vxge_vBIT(val, 0, 48)
-#define        VXGE_HW_LAG_AGGR_ADDR_CFG_USE_PORT_ADDR vxge_mBIT(51)
-#define        VXGE_HW_LAG_AGGR_ADDR_CFG_ADDR_SEL      vxge_mBIT(55)
-/*0x02080*/    u64     lag_aggr_id_cfg[2];
-#define VXGE_HW_LAG_AGGR_ID_CFG_ID(val) vxge_vBIT(val, 0, 16)
-/*0x02090*/    u64     lag_aggr_admin_key[2];
-#define VXGE_HW_LAG_AGGR_ADMIN_KEY_KEY(val) vxge_vBIT(val, 0, 16)
-/*0x020a0*/    u64     lag_aggr_alt_admin_key;
-#define VXGE_HW_LAG_AGGR_ALT_ADMIN_KEY_KEY(val) vxge_vBIT(val, 0, 16)
-#define        VXGE_HW_LAG_AGGR_ALT_ADMIN_KEY_ALT_AGGR vxge_mBIT(19)
-/*0x020a8*/    u64     lag_aggr_oper_key[2];
-#define VXGE_HW_LAG_AGGR_OPER_KEY_LAGC_KEY(val) vxge_vBIT(val, 0, 16)
-/*0x020b8*/    u64     lag_aggr_partner_sys_id[2];
-#define VXGE_HW_LAG_AGGR_PARTNER_SYS_ID_LAGC_ADDR(val) vxge_vBIT(val, 0, 48)
-/*0x020c8*/    u64     lag_aggr_partner_info[2];
-#define VXGE_HW_LAG_AGGR_PARTNER_INFO_LAGC_SYS_PRI(val) vxge_vBIT(val, 0, 16)
-#define        VXGE_HW_LAG_AGGR_PARTNER_INFO_LAGC_OPER_KEY(val) \
-                                               vxge_vBIT(val, 16, 16)
-/*0x020d8*/    u64     lag_aggr_state[2];
-#define        VXGE_HW_LAG_AGGR_STATE_LAGC_TX  vxge_mBIT(3)
-#define        VXGE_HW_LAG_AGGR_STATE_LAGC_RX  vxge_mBIT(7)
-#define        VXGE_HW_LAG_AGGR_STATE_LAGC_READY       vxge_mBIT(11)
-#define        VXGE_HW_LAG_AGGR_STATE_LAGC_INDIVIDUAL  vxge_mBIT(15)
-       u8      unused020f0[0x020f0-0x020e8];
-
-/*0x020f0*/    u64     lag_port_cfg[2];
-#define        VXGE_HW_LAG_PORT_CFG_EN vxge_mBIT(3)
-#define        VXGE_HW_LAG_PORT_CFG_DISCARD_SLOW_PROTO vxge_mBIT(7)
-#define        VXGE_HW_LAG_PORT_CFG_HOST_CHOSEN_AGGR   vxge_mBIT(11)
-#define        VXGE_HW_LAG_PORT_CFG_DISCARD_UNKNOWN_SLOW_PROTO vxge_mBIT(15)
-/*0x02100*/    u64     lag_port_actor_admin_cfg[2];
-#define VXGE_HW_LAG_PORT_ACTOR_ADMIN_CFG_PORT_NUM(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_LAG_PORT_ACTOR_ADMIN_CFG_PORT_PRI(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_LAG_PORT_ACTOR_ADMIN_CFG_KEY_10G(val) vxge_vBIT(val, 32, 16)
-#define VXGE_HW_LAG_PORT_ACTOR_ADMIN_CFG_KEY_1G(val) vxge_vBIT(val, 48, 16)
-/*0x02110*/    u64     lag_port_actor_admin_state[2];
-#define        VXGE_HW_LAG_PORT_ACTOR_ADMIN_STATE_LACP_ACTIVITY        vxge_mBIT(3)
-#define        VXGE_HW_LAG_PORT_ACTOR_ADMIN_STATE_LACP_TIMEOUT vxge_mBIT(7)
-#define        VXGE_HW_LAG_PORT_ACTOR_ADMIN_STATE_AGGREGATION  vxge_mBIT(11)
-#define        VXGE_HW_LAG_PORT_ACTOR_ADMIN_STATE_SYNCHRONIZATION      vxge_mBIT(15)
-#define        VXGE_HW_LAG_PORT_ACTOR_ADMIN_STATE_COLLECTING   vxge_mBIT(19)
-#define        VXGE_HW_LAG_PORT_ACTOR_ADMIN_STATE_DISTRIBUTING vxge_mBIT(23)
-#define        VXGE_HW_LAG_PORT_ACTOR_ADMIN_STATE_DEFAULTED    vxge_mBIT(27)
-#define        VXGE_HW_LAG_PORT_ACTOR_ADMIN_STATE_EXPIRED      vxge_mBIT(31)
-/*0x02120*/    u64     lag_port_partner_admin_sys_id[2];
-#define VXGE_HW_LAG_PORT_PARTNER_ADMIN_SYS_ID_ADDR(val) vxge_vBIT(val, 0, 48)
-/*0x02130*/    u64     lag_port_partner_admin_cfg[2];
-#define VXGE_HW_LAG_PORT_PARTNER_ADMIN_CFG_SYS_PRI(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_LAG_PORT_PARTNER_ADMIN_CFG_KEY(val) vxge_vBIT(val, 16, 16)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_CFG_PORT_NUM(val) \
-                                                       vxge_vBIT(val, 32, 16)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_CFG_PORT_PRI(val) \
-                                                       vxge_vBIT(val, 48, 16)
-/*0x02140*/    u64     lag_port_partner_admin_state[2];
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_STATE_LACP_ACTIVITY      vxge_mBIT(3)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_STATE_LACP_TIMEOUT       vxge_mBIT(7)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_STATE_AGGREGATION        vxge_mBIT(11)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_STATE_SYNCHRONIZATION    vxge_mBIT(15)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_STATE_COLLECTING vxge_mBIT(19)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_STATE_DISTRIBUTING       vxge_mBIT(23)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_STATE_DEFAULTED  vxge_mBIT(27)
-#define        VXGE_HW_LAG_PORT_PARTNER_ADMIN_STATE_EXPIRED    vxge_mBIT(31)
-/*0x02150*/    u64     lag_port_to_aggr[2];
-#define VXGE_HW_LAG_PORT_TO_AGGR_LAGC_AGGR_ID(val) vxge_vBIT(val, 0, 16)
-#define        VXGE_HW_LAG_PORT_TO_AGGR_LAGC_AGGR_VLD_ID       vxge_mBIT(19)
-/*0x02160*/    u64     lag_port_actor_oper_key[2];
-#define VXGE_HW_LAG_PORT_ACTOR_OPER_KEY_LAGC_KEY(val) vxge_vBIT(val, 0, 16)
-/*0x02170*/    u64     lag_port_actor_oper_state[2];
-#define        VXGE_HW_LAG_PORT_ACTOR_OPER_STATE_LAGC_LACP_ACTIVITY    vxge_mBIT(3)
-#define        VXGE_HW_LAG_PORT_ACTOR_OPER_STATE_LAGC_LACP_TIMEOUT     vxge_mBIT(7)
-#define        VXGE_HW_LAG_PORT_ACTOR_OPER_STATE_LAGC_AGGREGATION      vxge_mBIT(11)
-#define        VXGE_HW_LAG_PORT_ACTOR_OPER_STATE_LAGC_SYNCHRONIZATION  vxge_mBIT(15)
-#define        VXGE_HW_LAG_PORT_ACTOR_OPER_STATE_LAGC_COLLECTING       vxge_mBIT(19)
-#define        VXGE_HW_LAG_PORT_ACTOR_OPER_STATE_LAGC_DISTRIBUTING     vxge_mBIT(23)
-#define        VXGE_HW_LAG_PORT_ACTOR_OPER_STATE_LAGC_DEFAULTED        vxge_mBIT(27)
-#define        VXGE_HW_LAG_PORT_ACTOR_OPER_STATE_LAGC_EXPIRED  vxge_mBIT(31)
-/*0x02180*/    u64     lag_port_partner_oper_sys_id[2];
-#define VXGE_HW_LAG_PORT_PARTNER_OPER_SYS_ID_LAGC_ADDR(val) \
-                                               vxge_vBIT(val, 0, 48)
-/*0x02190*/    u64     lag_port_partner_oper_info[2];
-#define VXGE_HW_LAG_PORT_PARTNER_OPER_INFO_LAGC_SYS_PRI(val) \
-                                               vxge_vBIT(val, 0, 16)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_INFO_LAGC_KEY(val) \
-                                               vxge_vBIT(val, 16, 16)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_INFO_LAGC_PORT_NUM(val) \
-                                               vxge_vBIT(val, 32, 16)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_INFO_LAGC_PORT_PRI(val) \
-                                               vxge_vBIT(val, 48, 16)
-/*0x021a0*/    u64     lag_port_partner_oper_state[2];
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_STATE_LAGC_LACP_ACTIVITY  vxge_mBIT(3)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_STATE_LAGC_LACP_TIMEOUT   vxge_mBIT(7)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_STATE_LAGC_AGGREGATION    vxge_mBIT(11)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_STATE_LAGC_SYNCHRONIZATION \
-                                                               vxge_mBIT(15)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_STATE_LAGC_COLLECTING     vxge_mBIT(19)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_STATE_LAGC_DISTRIBUTING   vxge_mBIT(23)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_STATE_LAGC_DEFAULTED      vxge_mBIT(27)
-#define        VXGE_HW_LAG_PORT_PARTNER_OPER_STATE_LAGC_EXPIRED        vxge_mBIT(31)
-/*0x021b0*/    u64     lag_port_state_vars[2];
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_READY  vxge_mBIT(3)
-#define VXGE_HW_LAG_PORT_STATE_VARS_LAGC_SELECTED(val) vxge_vBIT(val, 6, 2)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_AGGR_NUM       vxge_mBIT(11)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_PORT_MOVED     vxge_mBIT(15)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_PORT_ENABLED   vxge_mBIT(18)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_PORT_DISABLED  vxge_mBIT(19)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_NTT    vxge_mBIT(23)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_ACTOR_CHURN    vxge_mBIT(27)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_PARTNER_CHURN  vxge_mBIT(31)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_ACTOR_INFO_LEN_MISMATCH \
-                                                               vxge_mBIT(32)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_PARTNER_INFO_LEN_MISMATCH \
-                                                               vxge_mBIT(33)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_COLL_INFO_LEN_MISMATCH vxge_mBIT(34)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_TERM_INFO_LEN_MISMATCH vxge_mBIT(35)
-#define VXGE_HW_LAG_PORT_STATE_VARS_LAGC_RX_FSM_STATE(val) vxge_vBIT(val, 37, 3)
-#define VXGE_HW_LAG_PORT_STATE_VARS_LAGC_MUX_FSM_STATE(val) \
-                                                       vxge_vBIT(val, 41, 3)
-#define VXGE_HW_LAG_PORT_STATE_VARS_LAGC_MUX_REASON(val) vxge_vBIT(val, 44, 4)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_ACTOR_CHURN_STATE      vxge_mBIT(54)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_PARTNER_CHURN_STATE    vxge_mBIT(55)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_ACTOR_CHURN_COUNT(val) \
-                                                       vxge_vBIT(val, 56, 4)
-#define        VXGE_HW_LAG_PORT_STATE_VARS_LAGC_PARTNER_CHURN_COUNT(val) \
-                                                       vxge_vBIT(val, 60, 4)
-/*0x021c0*/    u64     lag_port_timer_cntr[2];
-#define VXGE_HW_LAG_PORT_TIMER_CNTR_LAGC_CURRENT_WHILE(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_LAG_PORT_TIMER_CNTR_LAGC_PERIODIC_WHILE(val) \
-                                                       vxge_vBIT(val, 8, 8)
-#define VXGE_HW_LAG_PORT_TIMER_CNTR_LAGC_WAIT_WHILE(val) vxge_vBIT(val, 16, 8)
-#define VXGE_HW_LAG_PORT_TIMER_CNTR_LAGC_TX_LACP(val) vxge_vBIT(val, 24, 8)
-#define        VXGE_HW_LAG_PORT_TIMER_CNTR_LAGC_ACTOR_SYNC_TRANSITION_COUNT(val) \
-                                                       vxge_vBIT(val, 32, 8)
-#define        VXGE_HW_LAG_PORT_TIMER_CNTR_LAGC_PARTNER_SYNC_TRANSITION_COUNT(val) \
-                                                       vxge_vBIT(val, 40, 8)
-#define        VXGE_HW_LAG_PORT_TIMER_CNTR_LAGC_ACTOR_CHANGE_COUNT(val) \
-                                                       vxge_vBIT(val, 48, 8)
-#define        VXGE_HW_LAG_PORT_TIMER_CNTR_LAGC_PARTNER_CHANGE_COUNT(val) \
-                                                       vxge_vBIT(val, 56, 8)
-       u8      unused02208[0x02700-0x021d0];
-
-/*0x02700*/    u64     rtdma_int_status;
-#define        VXGE_HW_RTDMA_INT_STATUS_PDA_ALARM_PDA_INT      vxge_mBIT(1)
-#define        VXGE_HW_RTDMA_INT_STATUS_PCC_ERROR_PCC_INT      vxge_mBIT(2)
-#define        VXGE_HW_RTDMA_INT_STATUS_LSO_ERROR_LSO_INT      vxge_mBIT(4)
-#define        VXGE_HW_RTDMA_INT_STATUS_SM_ERROR_SM_INT        vxge_mBIT(5)
-/*0x02708*/    u64     rtdma_int_mask;
-/*0x02710*/    u64     pda_alarm_reg;
-#define        VXGE_HW_PDA_ALARM_REG_PDA_HSC_FIFO_ERR  vxge_mBIT(0)
-#define        VXGE_HW_PDA_ALARM_REG_PDA_SM_ERR        vxge_mBIT(1)
-/*0x02718*/    u64     pda_alarm_mask;
-/*0x02720*/    u64     pda_alarm_alarm;
-/*0x02728*/    u64     pcc_error_reg;
-#define VXGE_HW_PCC_ERROR_REG_PCC_PCC_FRM_BUF_SBE(n)   vxge_mBIT(n)
-#define VXGE_HW_PCC_ERROR_REG_PCC_PCC_TXDO_SBE(n)      vxge_mBIT(n)
-#define VXGE_HW_PCC_ERROR_REG_PCC_PCC_FRM_BUF_DBE(n)   vxge_mBIT(n)
-#define VXGE_HW_PCC_ERROR_REG_PCC_PCC_TXDO_DBE(n)      vxge_mBIT(n)
-#define VXGE_HW_PCC_ERROR_REG_PCC_PCC_FSM_ERR_ALARM(n) vxge_mBIT(n)
-#define VXGE_HW_PCC_ERROR_REG_PCC_PCC_SERR(n)  vxge_mBIT(n)
-/*0x02730*/    u64     pcc_error_mask;
-/*0x02738*/    u64     pcc_error_alarm;
-/*0x02740*/    u64     lso_error_reg;
-#define VXGE_HW_LSO_ERROR_REG_PCC_LSO_ABORT(n) vxge_mBIT(n)
-#define VXGE_HW_LSO_ERROR_REG_PCC_LSO_FSM_ERR_ALARM(n) vxge_mBIT(n)
-/*0x02748*/    u64     lso_error_mask;
-/*0x02750*/    u64     lso_error_alarm;
-/*0x02758*/    u64     sm_error_reg;
-#define        VXGE_HW_SM_ERROR_REG_SM_FSM_ERR_ALARM   vxge_mBIT(15)
-/*0x02760*/    u64     sm_error_mask;
-/*0x02768*/    u64     sm_error_alarm;
-
-       u8      unused027a8[0x027a8-0x02770];
-
-/*0x027a8*/    u64     txd_ownership_ctrl;
-#define        VXGE_HW_TXD_OWNERSHIP_CTRL_KEEP_OWNERSHIP       vxge_mBIT(7)
-/*0x027b0*/    u64     pcc_cfg;
-#define VXGE_HW_PCC_CFG_PCC_ENABLE(n)  vxge_mBIT(n)
-#define VXGE_HW_PCC_CFG_PCC_ECC_ENABLE_N(n)    vxge_mBIT(n)
-/*0x027b8*/    u64     pcc_control;
-#define VXGE_HW_PCC_CONTROL_FE_ENABLE(val) vxge_vBIT(val, 6, 2)
-#define        VXGE_HW_PCC_CONTROL_EARLY_ASSIGN_EN     vxge_mBIT(15)
-#define        VXGE_HW_PCC_CONTROL_UNBLOCK_DB_ERR      vxge_mBIT(31)
-/*0x027c0*/    u64     pda_status1;
-#define VXGE_HW_PDA_STATUS1_PDA_WRAP_0_CTR(val) vxge_vBIT(val, 4, 4)
-#define VXGE_HW_PDA_STATUS1_PDA_WRAP_1_CTR(val) vxge_vBIT(val, 12, 4)
-#define VXGE_HW_PDA_STATUS1_PDA_WRAP_2_CTR(val) vxge_vBIT(val, 20, 4)
-#define VXGE_HW_PDA_STATUS1_PDA_WRAP_3_CTR(val) vxge_vBIT(val, 28, 4)
-#define VXGE_HW_PDA_STATUS1_PDA_WRAP_4_CTR(val) vxge_vBIT(val, 36, 4)
-#define VXGE_HW_PDA_STATUS1_PDA_WRAP_5_CTR(val) vxge_vBIT(val, 44, 4)
-#define VXGE_HW_PDA_STATUS1_PDA_WRAP_6_CTR(val) vxge_vBIT(val, 52, 4)
-#define VXGE_HW_PDA_STATUS1_PDA_WRAP_7_CTR(val) vxge_vBIT(val, 60, 4)
-/*0x027c8*/    u64     rtdma_bw_timer;
-#define VXGE_HW_RTDMA_BW_TIMER_TIMER_CTRL(val) vxge_vBIT(val, 12, 4)
-
-       u8      unused02900[0x02900-0x027d0];
-/*0x02900*/    u64     g3cmct_int_status;
-#define        VXGE_HW_G3CMCT_INT_STATUS_ERR_G3IF_INT  vxge_mBIT(0)
-/*0x02908*/    u64     g3cmct_int_mask;
-/*0x02910*/    u64     g3cmct_err_reg;
-#define        VXGE_HW_G3CMCT_ERR_REG_G3IF_SM_ERR      vxge_mBIT(4)
-#define        VXGE_HW_G3CMCT_ERR_REG_G3IF_GDDR3_DECC  vxge_mBIT(5)
-#define        VXGE_HW_G3CMCT_ERR_REG_G3IF_GDDR3_U_DECC        vxge_mBIT(6)
-#define        VXGE_HW_G3CMCT_ERR_REG_G3IF_CTRL_FIFO_DECC      vxge_mBIT(7)
-#define        VXGE_HW_G3CMCT_ERR_REG_G3IF_GDDR3_SECC  vxge_mBIT(29)
-#define        VXGE_HW_G3CMCT_ERR_REG_G3IF_GDDR3_U_SECC        vxge_mBIT(30)
-#define        VXGE_HW_G3CMCT_ERR_REG_G3IF_CTRL_FIFO_SECC      vxge_mBIT(31)
-/*0x02918*/    u64     g3cmct_err_mask;
-/*0x02920*/    u64     g3cmct_err_alarm;
-       u8      unused03000[0x03000-0x02928];
-
-/*0x03000*/    u64     mc_int_status;
-#define        VXGE_HW_MC_INT_STATUS_MC_ERR_MC_INT     vxge_mBIT(3)
-#define        VXGE_HW_MC_INT_STATUS_GROCRC_ALARM_ROCRC_INT    vxge_mBIT(7)
-#define        VXGE_HW_MC_INT_STATUS_FAU_GEN_ERR_FAU_GEN_INT   vxge_mBIT(11)
-#define        VXGE_HW_MC_INT_STATUS_FAU_ECC_ERR_FAU_ECC_INT   vxge_mBIT(15)
-/*0x03008*/    u64     mc_int_mask;
-/*0x03010*/    u64     mc_err_reg;
-#define        VXGE_HW_MC_ERR_REG_MC_XFMD_MEM_ECC_SG_ERR_A     vxge_mBIT(3)
-#define        VXGE_HW_MC_ERR_REG_MC_XFMD_MEM_ECC_SG_ERR_B     vxge_mBIT(4)
-#define        VXGE_HW_MC_ERR_REG_MC_G3IF_RD_FIFO_ECC_SG_ERR   vxge_mBIT(5)
-#define        VXGE_HW_MC_ERR_REG_MC_MIRI_ECC_SG_ERR_0 vxge_mBIT(6)
-#define        VXGE_HW_MC_ERR_REG_MC_MIRI_ECC_SG_ERR_1 vxge_mBIT(7)
-#define        VXGE_HW_MC_ERR_REG_MC_XFMD_MEM_ECC_DB_ERR_A     vxge_mBIT(10)
-#define        VXGE_HW_MC_ERR_REG_MC_XFMD_MEM_ECC_DB_ERR_B     vxge_mBIT(11)
-#define        VXGE_HW_MC_ERR_REG_MC_G3IF_RD_FIFO_ECC_DB_ERR   vxge_mBIT(12)
-#define        VXGE_HW_MC_ERR_REG_MC_MIRI_ECC_DB_ERR_0 vxge_mBIT(13)
-#define        VXGE_HW_MC_ERR_REG_MC_MIRI_ECC_DB_ERR_1 vxge_mBIT(14)
-#define        VXGE_HW_MC_ERR_REG_MC_SM_ERR    vxge_mBIT(15)
-/*0x03018*/    u64     mc_err_mask;
-/*0x03020*/    u64     mc_err_alarm;
-/*0x03028*/    u64     grocrc_alarm_reg;
-#define        VXGE_HW_GROCRC_ALARM_REG_XFMD_WR_FIFO_ERR       vxge_mBIT(3)
-#define        VXGE_HW_GROCRC_ALARM_REG_WDE2MSR_RD_FIFO_ERR    vxge_mBIT(7)
-/*0x03030*/    u64     grocrc_alarm_mask;
-/*0x03038*/    u64     grocrc_alarm_alarm;
-       u8      unused03100[0x03100-0x03040];
-
-/*0x03100*/    u64     rx_thresh_cfg_repl;
-#define VXGE_HW_RX_THRESH_CFG_REPL_PAUSE_LOW_THR(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_RX_THRESH_CFG_REPL_PAUSE_HIGH_THR(val) vxge_vBIT(val, 8, 8)
-#define VXGE_HW_RX_THRESH_CFG_REPL_RED_THR_0(val) vxge_vBIT(val, 16, 8)
-#define VXGE_HW_RX_THRESH_CFG_REPL_RED_THR_1(val) vxge_vBIT(val, 24, 8)
-#define VXGE_HW_RX_THRESH_CFG_REPL_RED_THR_2(val) vxge_vBIT(val, 32, 8)
-#define VXGE_HW_RX_THRESH_CFG_REPL_RED_THR_3(val) vxge_vBIT(val, 40, 8)
-#define        VXGE_HW_RX_THRESH_CFG_REPL_GLOBAL_WOL_EN        vxge_mBIT(62)
-#define        VXGE_HW_RX_THRESH_CFG_REPL_EXACT_VP_MATCH_REQ   vxge_mBIT(63)
-       u8      unused033b8[0x033b8-0x03108];
-
-/*0x033b8*/    u64     fbmc_ecc_cfg;
-#define VXGE_HW_FBMC_ECC_CFG_ENABLE(val) vxge_vBIT(val, 3, 5)
-       u8      unused03400[0x03400-0x033c0];
-
-/*0x03400*/    u64     pcipif_int_status;
-#define        VXGE_HW_PCIPIF_INT_STATUS_DBECC_ERR_DBECC_ERR_INT       vxge_mBIT(3)
-#define        VXGE_HW_PCIPIF_INT_STATUS_SBECC_ERR_SBECC_ERR_INT       vxge_mBIT(7)
-#define        VXGE_HW_PCIPIF_INT_STATUS_GENERAL_ERR_GENERAL_ERR_INT   vxge_mBIT(11)
-#define        VXGE_HW_PCIPIF_INT_STATUS_SRPCIM_MSG_SRPCIM_MSG_INT     vxge_mBIT(15)
-#define        VXGE_HW_PCIPIF_INT_STATUS_MRPCIM_SPARE_R1_MRPCIM_SPARE_R1_INT \
-                                                               vxge_mBIT(19)
-/*0x03408*/    u64     pcipif_int_mask;
-/*0x03410*/    u64     dbecc_err_reg;
-#define        VXGE_HW_DBECC_ERR_REG_PCI_RETRY_BUF_DB_ERR      vxge_mBIT(3)
-#define        VXGE_HW_DBECC_ERR_REG_PCI_RETRY_SOT_DB_ERR      vxge_mBIT(7)
-#define        VXGE_HW_DBECC_ERR_REG_PCI_P_HDR_DB_ERR  vxge_mBIT(11)
-#define        VXGE_HW_DBECC_ERR_REG_PCI_P_DATA_DB_ERR vxge_mBIT(15)
-#define        VXGE_HW_DBECC_ERR_REG_PCI_NP_HDR_DB_ERR vxge_mBIT(19)
-#define        VXGE_HW_DBECC_ERR_REG_PCI_NP_DATA_DB_ERR        vxge_mBIT(23)
-/*0x03418*/    u64     dbecc_err_mask;
-/*0x03420*/    u64     dbecc_err_alarm;
-/*0x03428*/    u64     sbecc_err_reg;
-#define        VXGE_HW_SBECC_ERR_REG_PCI_RETRY_BUF_SG_ERR      vxge_mBIT(3)
-#define        VXGE_HW_SBECC_ERR_REG_PCI_RETRY_SOT_SG_ERR      vxge_mBIT(7)
-#define        VXGE_HW_SBECC_ERR_REG_PCI_P_HDR_SG_ERR  vxge_mBIT(11)
-#define        VXGE_HW_SBECC_ERR_REG_PCI_P_DATA_SG_ERR vxge_mBIT(15)
-#define        VXGE_HW_SBECC_ERR_REG_PCI_NP_HDR_SG_ERR vxge_mBIT(19)
-#define        VXGE_HW_SBECC_ERR_REG_PCI_NP_DATA_SG_ERR        vxge_mBIT(23)
-/*0x03430*/    u64     sbecc_err_mask;
-/*0x03438*/    u64     sbecc_err_alarm;
-/*0x03440*/    u64     general_err_reg;
-#define        VXGE_HW_GENERAL_ERR_REG_PCI_DROPPED_ILLEGAL_CFG vxge_mBIT(3)
-#define        VXGE_HW_GENERAL_ERR_REG_PCI_ILLEGAL_MEM_MAP_PROG        vxge_mBIT(7)
-#define        VXGE_HW_GENERAL_ERR_REG_PCI_LINK_RST_FSM_ERR    vxge_mBIT(11)
-#define        VXGE_HW_GENERAL_ERR_REG_PCI_RX_ILLEGAL_TLP_VPLANE       vxge_mBIT(15)
-#define        VXGE_HW_GENERAL_ERR_REG_PCI_TRAINING_RESET_DET  vxge_mBIT(19)
-#define        VXGE_HW_GENERAL_ERR_REG_PCI_PCI_LINK_DOWN_DET   vxge_mBIT(23)
-#define        VXGE_HW_GENERAL_ERR_REG_PCI_RESET_ACK_DLLP      vxge_mBIT(27)
-/*0x03448*/    u64     general_err_mask;
-/*0x03450*/    u64     general_err_alarm;
-/*0x03458*/    u64     srpcim_msg_reg;
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE0_RMSG_INT \
-                                                               vxge_mBIT(0)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE1_RMSG_INT \
-                                                               vxge_mBIT(1)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE2_RMSG_INT \
-                                                               vxge_mBIT(2)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE3_RMSG_INT \
-                                                               vxge_mBIT(3)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE4_RMSG_INT \
-                                                               vxge_mBIT(4)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE5_RMSG_INT \
-                                                               vxge_mBIT(5)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE6_RMSG_INT \
-                                                               vxge_mBIT(6)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE7_RMSG_INT \
-                                                               vxge_mBIT(7)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE8_RMSG_INT \
-                                                               vxge_mBIT(8)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE9_RMSG_INT \
-                                                               vxge_mBIT(9)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE10_RMSG_INT \
-                                                               vxge_mBIT(10)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE11_RMSG_INT \
-                                                               vxge_mBIT(11)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE12_RMSG_INT \
-                                                               vxge_mBIT(12)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE13_RMSG_INT \
-                                                               vxge_mBIT(13)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE14_RMSG_INT \
-                                                               vxge_mBIT(14)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE15_RMSG_INT \
-                                                               vxge_mBIT(15)
-#define        VXGE_HW_SRPCIM_MSG_REG_SWIF_SRPCIM_TO_MRPCIM_VPLANE16_RMSG_INT \
-                                                               vxge_mBIT(16)
-/*0x03460*/    u64     srpcim_msg_mask;
-/*0x03468*/    u64     srpcim_msg_alarm;
-       u8      unused03600[0x03600-0x03470];
-
-/*0x03600*/    u64     gcmg1_int_status;
-#define        VXGE_HW_GCMG1_INT_STATUS_GSSCC_ERR_GSSCC_INT    vxge_mBIT(0)
-#define        VXGE_HW_GCMG1_INT_STATUS_GSSC0_ERR0_GSSC0_0_INT vxge_mBIT(1)
-#define        VXGE_HW_GCMG1_INT_STATUS_GSSC0_ERR1_GSSC0_1_INT vxge_mBIT(2)
-#define        VXGE_HW_GCMG1_INT_STATUS_GSSC1_ERR0_GSSC1_0_INT vxge_mBIT(3)
-#define        VXGE_HW_GCMG1_INT_STATUS_GSSC1_ERR1_GSSC1_1_INT vxge_mBIT(4)
-#define        VXGE_HW_GCMG1_INT_STATUS_GSSC2_ERR0_GSSC2_0_INT vxge_mBIT(5)
-#define        VXGE_HW_GCMG1_INT_STATUS_GSSC2_ERR1_GSSC2_1_INT vxge_mBIT(6)
-#define        VXGE_HW_GCMG1_INT_STATUS_UQM_ERR_UQM_INT        vxge_mBIT(7)
-#define        VXGE_HW_GCMG1_INT_STATUS_GQCC_ERR_GQCC_INT      vxge_mBIT(8)
-/*0x03608*/    u64     gcmg1_int_mask;
-       u8      unused03a00[0x03a00-0x03610];
-
-/*0x03a00*/    u64     pcmg1_int_status;
-#define        VXGE_HW_PCMG1_INT_STATUS_PSSCC_ERR_PSSCC_INT    vxge_mBIT(0)
-#define        VXGE_HW_PCMG1_INT_STATUS_PQCC_ERR_PQCC_INT      vxge_mBIT(1)
-#define        VXGE_HW_PCMG1_INT_STATUS_PQCC_CQM_ERR_PQCC_CQM_INT      vxge_mBIT(2)
-#define        VXGE_HW_PCMG1_INT_STATUS_PQCC_SQM_ERR_PQCC_SQM_INT      vxge_mBIT(3)
-/*0x03a08*/    u64     pcmg1_int_mask;
-       u8      unused04000[0x04000-0x03a10];
-
-/*0x04000*/    u64     one_int_status;
-#define        VXGE_HW_ONE_INT_STATUS_RXPE_ERR_RXPE_INT        vxge_mBIT(7)
-#define        VXGE_HW_ONE_INT_STATUS_TXPE_BCC_MEM_SG_ECC_ERR_TXPE_BCC_MEM_SG_ECC_INT \
-                                                       vxge_mBIT(13)
-#define        VXGE_HW_ONE_INT_STATUS_TXPE_BCC_MEM_DB_ECC_ERR_TXPE_BCC_MEM_DB_ECC_INT \
-                                                       vxge_mBIT(14)
-#define        VXGE_HW_ONE_INT_STATUS_TXPE_ERR_TXPE_INT        vxge_mBIT(15)
-#define        VXGE_HW_ONE_INT_STATUS_DLM_ERR_DLM_INT  vxge_mBIT(23)
-#define        VXGE_HW_ONE_INT_STATUS_PE_ERR_PE_INT    vxge_mBIT(31)
-#define        VXGE_HW_ONE_INT_STATUS_RPE_ERR_RPE_INT  vxge_mBIT(39)
-#define        VXGE_HW_ONE_INT_STATUS_RPE_FSM_ERR_RPE_FSM_INT  vxge_mBIT(47)
-#define        VXGE_HW_ONE_INT_STATUS_OES_ERR_OES_INT  vxge_mBIT(55)
-/*0x04008*/    u64     one_int_mask;
-       u8      unused04818[0x04818-0x04010];
-
-/*0x04818*/    u64     noa_wct_ctrl;
-#define        VXGE_HW_NOA_WCT_CTRL_VP_INT_NUM vxge_mBIT(0)
-/*0x04820*/    u64     rc_cfg2;
-#define VXGE_HW_RC_CFG2_BUFF1_SIZE(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_RC_CFG2_BUFF2_SIZE(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_RC_CFG2_BUFF3_SIZE(val) vxge_vBIT(val, 32, 16)
-#define VXGE_HW_RC_CFG2_BUFF4_SIZE(val) vxge_vBIT(val, 48, 16)
-/*0x04828*/    u64     rc_cfg3;
-#define VXGE_HW_RC_CFG3_BUFF5_SIZE(val) vxge_vBIT(val, 0, 16)
-/*0x04830*/    u64     rx_multi_cast_ctrl1;
-#define        VXGE_HW_RX_MULTI_CAST_CTRL1_ENABLE      vxge_mBIT(7)
-#define VXGE_HW_RX_MULTI_CAST_CTRL1_DELAY_COUNT(val) vxge_vBIT(val, 11, 5)
-/*0x04838*/    u64     rxdm_dbg_rd;
-#define VXGE_HW_RXDM_DBG_RD_ADDR(val) vxge_vBIT(val, 0, 12)
-#define        VXGE_HW_RXDM_DBG_RD_ENABLE      vxge_mBIT(31)
-/*0x04840*/    u64     rxdm_dbg_rd_data;
-#define VXGE_HW_RXDM_DBG_RD_DATA_RMC_RXDM_DBG_RD_DATA(val) vxge_vBIT(val, 0, 64)
-/*0x04848*/    u64     rqa_top_prty_for_vh[17];
-#define VXGE_HW_RQA_TOP_PRTY_FOR_VH_RQA_TOP_PRTY_FOR_VH(val) \
-                                                       vxge_vBIT(val, 59, 5)
-       u8      unused04900[0x04900-0x048d0];
-
-/*0x04900*/    u64     tim_status;
-#define        VXGE_HW_TIM_STATUS_TIM_RESET_IN_PROGRESS        vxge_mBIT(0)
-/*0x04908*/    u64     tim_ecc_enable;
-#define        VXGE_HW_TIM_ECC_ENABLE_VBLS_N   vxge_mBIT(7)
-#define        VXGE_HW_TIM_ECC_ENABLE_BMAP_N   vxge_mBIT(15)
-#define        VXGE_HW_TIM_ECC_ENABLE_BMAP_MSG_N       vxge_mBIT(23)
-/*0x04910*/    u64     tim_bp_ctrl;
-#define        VXGE_HW_TIM_BP_CTRL_RD_XON      vxge_mBIT(7)
-#define        VXGE_HW_TIM_BP_CTRL_WR_XON      vxge_mBIT(15)
-#define        VXGE_HW_TIM_BP_CTRL_ROCRC_BYP   vxge_mBIT(23)
-/*0x04918*/    u64     tim_resource_assignment_vh[17];
-#define VXGE_HW_TIM_RESOURCE_ASSIGNMENT_VH_BMAP_ROOT(val) vxge_vBIT(val, 0, 32)
-/*0x049a0*/    u64     tim_bmap_mapping_vp_err[17];
-#define VXGE_HW_TIM_BMAP_MAPPING_VP_ERR_TIM_DEST_VPATH(val) vxge_vBIT(val, 3, 5)
-       u8      unused04b00[0x04b00-0x04a28];
-
-/*0x04b00*/    u64     gcmg2_int_status;
-#define        VXGE_HW_GCMG2_INT_STATUS_GXTMC_ERR_GXTMC_INT    vxge_mBIT(7)
-#define        VXGE_HW_GCMG2_INT_STATUS_GCP_ERR_GCP_INT        vxge_mBIT(15)
-#define        VXGE_HW_GCMG2_INT_STATUS_CMC_ERR_CMC_INT        vxge_mBIT(23)
-/*0x04b08*/    u64     gcmg2_int_mask;
-/*0x04b10*/    u64     gxtmc_err_reg;
-#define VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_MEM_DB_ERR(val) vxge_vBIT(val, 0, 4)
-#define VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_MEM_SG_ERR(val) vxge_vBIT(val, 4, 4)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CMC_RD_DATA_DB_ERR   vxge_mBIT(8)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_REQ_FIFO_ERR vxge_mBIT(9)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_REQ_DATA_FIFO_ERR    vxge_mBIT(10)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_WR_RSP_FIFO_ERR      vxge_mBIT(11)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_RD_RSP_FIFO_ERR      vxge_mBIT(12)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CMI_WRP_FIFO_ERR     vxge_mBIT(13)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CMI_WRP_ERR  vxge_mBIT(14)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CMI_RRP_FIFO_ERR     vxge_mBIT(15)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CMI_RRP_ERR  vxge_mBIT(16)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CMI_DATA_SM_ERR      vxge_mBIT(17)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CMI_CMC0_IF_ERR      vxge_mBIT(18)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_ARB_SM_ERR   vxge_mBIT(19)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_CFC_SM_ERR   vxge_mBIT(20)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_DFETCH_CREDIT_OVERFLOW \
-                                                       vxge_mBIT(21)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_DFETCH_CREDIT_UNDERFLOW \
-                                                       vxge_mBIT(22)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_DFETCH_SM_ERR        vxge_mBIT(23)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_RCTRL_CREDIT_OVERFLOW \
-                                                       vxge_mBIT(24)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_RCTRL_CREDIT_UNDERFLOW \
-                                                       vxge_mBIT(25)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_RCTRL_SM_ERR vxge_mBIT(26)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_WCOMPL_SM_ERR        vxge_mBIT(27)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_WCOMPL_TAG_ERR       vxge_mBIT(28)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_WREQ_SM_ERR  vxge_mBIT(29)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_BDT_CMI_WREQ_FIFO_ERR        vxge_mBIT(30)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CP2BDT_RFIFO_POP_ERR vxge_mBIT(31)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_XTMC_BDT_CMI_OP_ERR  vxge_mBIT(32)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_XTMC_BDT_DFETCH_OP_ERR       vxge_mBIT(33)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_XTMC_BDT_DFIFO_ERR   vxge_mBIT(34)
-#define        VXGE_HW_GXTMC_ERR_REG_XTMC_CMI_ARB_SM_ERR       vxge_mBIT(35)
-/*0x04b18*/    u64     gxtmc_err_mask;
-/*0x04b20*/    u64     gxtmc_err_alarm;
-/*0x04b28*/    u64     cmc_err_reg;
-#define        VXGE_HW_CMC_ERR_REG_CMC_CMC_SM_ERR      vxge_mBIT(0)
-/*0x04b30*/    u64     cmc_err_mask;
-/*0x04b38*/    u64     cmc_err_alarm;
-/*0x04b40*/    u64     gcp_err_reg;
-#define        VXGE_HW_GCP_ERR_REG_CP_H2L2CP_FIFO_ERR  vxge_mBIT(0)
-#define        VXGE_HW_GCP_ERR_REG_CP_STC2CP_FIFO_ERR  vxge_mBIT(1)
-#define        VXGE_HW_GCP_ERR_REG_CP_STE2CP_FIFO_ERR  vxge_mBIT(2)
-#define        VXGE_HW_GCP_ERR_REG_CP_TTE2CP_FIFO_ERR  vxge_mBIT(3)
-/*0x04b48*/    u64     gcp_err_mask;
-/*0x04b50*/    u64     gcp_err_alarm;
-       u8      unused04f00[0x04f00-0x04b58];
-
-/*0x04f00*/    u64     pcmg2_int_status;
-#define        VXGE_HW_PCMG2_INT_STATUS_PXTMC_ERR_PXTMC_INT    vxge_mBIT(7)
-#define        VXGE_HW_PCMG2_INT_STATUS_CP_EXC_CP_XT_EXC_INT   vxge_mBIT(15)
-#define        VXGE_HW_PCMG2_INT_STATUS_CP_ERR_CP_ERR_INT      vxge_mBIT(23)
-/*0x04f08*/    u64     pcmg2_int_mask;
-/*0x04f10*/    u64     pxtmc_err_reg;
-#define VXGE_HW_PXTMC_ERR_REG_XTMC_XT_PIF_SRAM_DB_ERR(val) vxge_vBIT(val, 0, 2)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MPT_REQ_FIFO_ERR     vxge_mBIT(2)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MPT_PRSP_FIFO_ERR    vxge_mBIT(3)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MPT_WRSP_FIFO_ERR    vxge_mBIT(4)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UPT_REQ_FIFO_ERR     vxge_mBIT(5)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UPT_PRSP_FIFO_ERR    vxge_mBIT(6)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UPT_WRSP_FIFO_ERR    vxge_mBIT(7)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CPT_REQ_FIFO_ERR     vxge_mBIT(8)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CPT_PRSP_FIFO_ERR    vxge_mBIT(9)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CPT_WRSP_FIFO_ERR    vxge_mBIT(10)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_REQ_FIFO_ERR vxge_mBIT(11)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_REQ_DATA_FIFO_ERR    vxge_mBIT(12)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_WR_RSP_FIFO_ERR      vxge_mBIT(13)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_RD_RSP_FIFO_ERR      vxge_mBIT(14)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MPT_REQ_SHADOW_ERR   vxge_mBIT(15)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MPT_RSP_SHADOW_ERR   vxge_mBIT(16)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UPT_REQ_SHADOW_ERR   vxge_mBIT(17)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UPT_RSP_SHADOW_ERR   vxge_mBIT(18)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CPT_REQ_SHADOW_ERR   vxge_mBIT(19)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CPT_RSP_SHADOW_ERR   vxge_mBIT(20)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_XIL_SHADOW_ERR       vxge_mBIT(21)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_ARB_SHADOW_ERR       vxge_mBIT(22)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_RAM_SHADOW_ERR       vxge_mBIT(23)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CMW_SHADOW_ERR       vxge_mBIT(24)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CMR_SHADOW_ERR       vxge_mBIT(25)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MPT_REQ_FSM_ERR      vxge_mBIT(26)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MPT_RSP_FSM_ERR      vxge_mBIT(27)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UPT_REQ_FSM_ERR      vxge_mBIT(28)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UPT_RSP_FSM_ERR      vxge_mBIT(29)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CPT_REQ_FSM_ERR      vxge_mBIT(30)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CPT_RSP_FSM_ERR      vxge_mBIT(31)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_XIL_FSM_ERR  vxge_mBIT(32)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_ARB_FSM_ERR  vxge_mBIT(33)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CMW_FSM_ERR  vxge_mBIT(34)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CMR_FSM_ERR  vxge_mBIT(35)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MXP_RD_PROT_ERR      vxge_mBIT(36)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UXP_RD_PROT_ERR      vxge_mBIT(37)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CXP_RD_PROT_ERR      vxge_mBIT(38)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MXP_WR_PROT_ERR      vxge_mBIT(39)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UXP_WR_PROT_ERR      vxge_mBIT(40)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CXP_WR_PROT_ERR      vxge_mBIT(41)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MXP_INV_ADDR_ERR     vxge_mBIT(42)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UXP_INV_ADDR_ERR     vxge_mBIT(43)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CXP_INV_ADDR_ERR     vxge_mBIT(44)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MXP_RD_PROT_INFO_ERR vxge_mBIT(45)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UXP_RD_PROT_INFO_ERR vxge_mBIT(46)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CXP_RD_PROT_INFO_ERR vxge_mBIT(47)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MXP_WR_PROT_INFO_ERR vxge_mBIT(48)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UXP_WR_PROT_INFO_ERR vxge_mBIT(49)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CXP_WR_PROT_INFO_ERR vxge_mBIT(50)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_MXP_INV_ADDR_INFO_ERR        vxge_mBIT(51)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_UXP_INV_ADDR_INFO_ERR        vxge_mBIT(52)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CXP_INV_ADDR_INFO_ERR        vxge_mBIT(53)
-#define VXGE_HW_PXTMC_ERR_REG_XTMC_XT_PIF_SRAM_SG_ERR(val) vxge_vBIT(val, 54, 2)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CP2BDT_DFIFO_PUSH_ERR        vxge_mBIT(56)
-#define        VXGE_HW_PXTMC_ERR_REG_XTMC_CP2BDT_RFIFO_PUSH_ERR        vxge_mBIT(57)
-/*0x04f18*/    u64     pxtmc_err_mask;
-/*0x04f20*/    u64     pxtmc_err_alarm;
-/*0x04f28*/    u64     cp_err_reg;
-#define VXGE_HW_CP_ERR_REG_CP_CP_DCACHE_SG_ERR(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_CP_ERR_REG_CP_CP_ICACHE_SG_ERR(val) vxge_vBIT(val, 8, 2)
-#define        VXGE_HW_CP_ERR_REG_CP_CP_DTAG_SG_ERR    vxge_mBIT(10)
-#define        VXGE_HW_CP_ERR_REG_CP_CP_ITAG_SG_ERR    vxge_mBIT(11)
-#define        VXGE_HW_CP_ERR_REG_CP_CP_TRACE_SG_ERR   vxge_mBIT(12)
-#define        VXGE_HW_CP_ERR_REG_CP_DMA2CP_SG_ERR     vxge_mBIT(13)
-#define        VXGE_HW_CP_ERR_REG_CP_MP2CP_SG_ERR      vxge_mBIT(14)
-#define        VXGE_HW_CP_ERR_REG_CP_QCC2CP_SG_ERR     vxge_mBIT(15)
-#define VXGE_HW_CP_ERR_REG_CP_STC2CP_SG_ERR(val) vxge_vBIT(val, 16, 2)
-#define VXGE_HW_CP_ERR_REG_CP_CP_DCACHE_DB_ERR(val) vxge_vBIT(val, 24, 8)
-#define VXGE_HW_CP_ERR_REG_CP_CP_ICACHE_DB_ERR(val) vxge_vBIT(val, 32, 2)
-#define        VXGE_HW_CP_ERR_REG_CP_CP_DTAG_DB_ERR    vxge_mBIT(34)
-#define        VXGE_HW_CP_ERR_REG_CP_CP_ITAG_DB_ERR    vxge_mBIT(35)
-#define        VXGE_HW_CP_ERR_REG_CP_CP_TRACE_DB_ERR   vxge_mBIT(36)
-#define        VXGE_HW_CP_ERR_REG_CP_DMA2CP_DB_ERR     vxge_mBIT(37)
-#define        VXGE_HW_CP_ERR_REG_CP_MP2CP_DB_ERR      vxge_mBIT(38)
-#define        VXGE_HW_CP_ERR_REG_CP_QCC2CP_DB_ERR     vxge_mBIT(39)
-#define VXGE_HW_CP_ERR_REG_CP_STC2CP_DB_ERR(val) vxge_vBIT(val, 40, 2)
-#define        VXGE_HW_CP_ERR_REG_CP_H2L2CP_FIFO_ERR   vxge_mBIT(48)
-#define        VXGE_HW_CP_ERR_REG_CP_STC2CP_FIFO_ERR   vxge_mBIT(49)
-#define        VXGE_HW_CP_ERR_REG_CP_STE2CP_FIFO_ERR   vxge_mBIT(50)
-#define        VXGE_HW_CP_ERR_REG_CP_TTE2CP_FIFO_ERR   vxge_mBIT(51)
-#define        VXGE_HW_CP_ERR_REG_CP_SWIF2CP_FIFO_ERR  vxge_mBIT(52)
-#define        VXGE_HW_CP_ERR_REG_CP_CP2DMA_FIFO_ERR   vxge_mBIT(53)
-#define        VXGE_HW_CP_ERR_REG_CP_DAM2CP_FIFO_ERR   vxge_mBIT(54)
-#define        VXGE_HW_CP_ERR_REG_CP_MP2CP_FIFO_ERR    vxge_mBIT(55)
-#define        VXGE_HW_CP_ERR_REG_CP_QCC2CP_FIFO_ERR   vxge_mBIT(56)
-#define        VXGE_HW_CP_ERR_REG_CP_DMA2CP_FIFO_ERR   vxge_mBIT(57)
-#define        VXGE_HW_CP_ERR_REG_CP_CP_WAKE_FSM_INTEGRITY_ERR vxge_mBIT(60)
-#define        VXGE_HW_CP_ERR_REG_CP_CP_PMON_FSM_INTEGRITY_ERR vxge_mBIT(61)
-#define        VXGE_HW_CP_ERR_REG_CP_DMA_RD_SHADOW_ERR vxge_mBIT(62)
-#define        VXGE_HW_CP_ERR_REG_CP_PIFT_CREDIT_ERR   vxge_mBIT(63)
-/*0x04f30*/    u64     cp_err_mask;
-/*0x04f38*/    u64     cp_err_alarm;
-       u8      unused04fe8[0x04f50-0x04f40];
-
-/*0x04f50*/    u64     cp_exc_reg;
-#define        VXGE_HW_CP_EXC_REG_CP_CP_CAUSE_INFO_INT vxge_mBIT(47)
-#define        VXGE_HW_CP_EXC_REG_CP_CP_CAUSE_CRIT_INT vxge_mBIT(55)
-#define        VXGE_HW_CP_EXC_REG_CP_CP_SERR   vxge_mBIT(63)
-/*0x04f58*/    u64     cp_exc_mask;
-/*0x04f60*/    u64     cp_exc_alarm;
-/*0x04f68*/    u64     cp_exc_cause;
-#define VXGE_HW_CP_EXC_CAUSE_CP_CP_CAUSE(val) vxge_vBIT(val, 32, 32)
-       u8      unused05200[0x05200-0x04f70];
-
-/*0x05200*/    u64     msg_int_status;
-#define        VXGE_HW_MSG_INT_STATUS_TIM_ERR_TIM_INT  vxge_mBIT(7)
-#define        VXGE_HW_MSG_INT_STATUS_MSG_EXC_MSG_XT_EXC_INT   vxge_mBIT(60)
-#define        VXGE_HW_MSG_INT_STATUS_MSG_ERR3_MSG_ERR3_INT    vxge_mBIT(61)
-#define        VXGE_HW_MSG_INT_STATUS_MSG_ERR2_MSG_ERR2_INT    vxge_mBIT(62)
-#define        VXGE_HW_MSG_INT_STATUS_MSG_ERR_MSG_ERR_INT      vxge_mBIT(63)
-/*0x05208*/    u64     msg_int_mask;
-/*0x05210*/    u64     tim_err_reg;
-#define        VXGE_HW_TIM_ERR_REG_TIM_VBLS_SG_ERR     vxge_mBIT(4)
-#define        VXGE_HW_TIM_ERR_REG_TIM_BMAP_PA_SG_ERR  vxge_mBIT(5)
-#define        VXGE_HW_TIM_ERR_REG_TIM_BMAP_PB_SG_ERR  vxge_mBIT(6)
-#define        VXGE_HW_TIM_ERR_REG_TIM_BMAP_MSG_SG_ERR vxge_mBIT(7)
-#define        VXGE_HW_TIM_ERR_REG_TIM_VBLS_DB_ERR     vxge_mBIT(12)
-#define        VXGE_HW_TIM_ERR_REG_TIM_BMAP_PA_DB_ERR  vxge_mBIT(13)
-#define        VXGE_HW_TIM_ERR_REG_TIM_BMAP_PB_DB_ERR  vxge_mBIT(14)
-#define        VXGE_HW_TIM_ERR_REG_TIM_BMAP_MSG_DB_ERR vxge_mBIT(15)
-#define        VXGE_HW_TIM_ERR_REG_TIM_BMAP_MEM_CNTRL_SM_ERR   vxge_mBIT(18)
-#define        VXGE_HW_TIM_ERR_REG_TIM_BMAP_MSG_MEM_CNTRL_SM_ERR       vxge_mBIT(19)
-#define        VXGE_HW_TIM_ERR_REG_TIM_MPIF_PCIWR_ERR  vxge_mBIT(20)
-#define        VXGE_HW_TIM_ERR_REG_TIM_ROCRC_BMAP_UPDT_FIFO_ERR        vxge_mBIT(22)
-#define        VXGE_HW_TIM_ERR_REG_TIM_CREATE_BMAPMSG_FIFO_ERR vxge_mBIT(23)
-#define        VXGE_HW_TIM_ERR_REG_TIM_ROCRCIF_MISMATCH        vxge_mBIT(46)
-#define VXGE_HW_TIM_ERR_REG_TIM_BMAP_MAPPING_VP_ERR(n) vxge_mBIT(n)
-/*0x05218*/    u64     tim_err_mask;
-/*0x05220*/    u64     tim_err_alarm;
-/*0x05228*/    u64     msg_err_reg;
-#define        VXGE_HW_MSG_ERR_REG_UP_UXP_WAKE_FSM_INTEGRITY_ERR       vxge_mBIT(0)
-#define        VXGE_HW_MSG_ERR_REG_MP_MXP_WAKE_FSM_INTEGRITY_ERR       vxge_mBIT(1)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_DMQ_DMA_READ_CMD_FSM_INTEGRITY_ERR \
-                                                               vxge_mBIT(2)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_DMQ_DMA_RESP_FSM_INTEGRITY_ERR \
-                                                               vxge_mBIT(3)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_DMQ_OWN_FSM_INTEGRITY_ERR   vxge_mBIT(4)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_PDA_ACC_FSM_INTEGRITY_ERR   vxge_mBIT(5)
-#define        VXGE_HW_MSG_ERR_REG_MP_MXP_PMON_FSM_INTEGRITY_ERR       vxge_mBIT(6)
-#define        VXGE_HW_MSG_ERR_REG_UP_UXP_PMON_FSM_INTEGRITY_ERR       vxge_mBIT(7)
-#define        VXGE_HW_MSG_ERR_REG_UP_UXP_DTAG_SG_ERR  vxge_mBIT(8)
-#define        VXGE_HW_MSG_ERR_REG_UP_UXP_ITAG_SG_ERR  vxge_mBIT(10)
-#define        VXGE_HW_MSG_ERR_REG_MP_MXP_DTAG_SG_ERR  vxge_mBIT(12)
-#define        VXGE_HW_MSG_ERR_REG_MP_MXP_ITAG_SG_ERR  vxge_mBIT(14)
-#define        VXGE_HW_MSG_ERR_REG_UP_UXP_TRACE_SG_ERR vxge_mBIT(16)
-#define        VXGE_HW_MSG_ERR_REG_MP_MXP_TRACE_SG_ERR vxge_mBIT(17)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_CMG2MSG_SG_ERR      vxge_mBIT(18)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_TXPE2MSG_SG_ERR     vxge_mBIT(19)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_RXPE2MSG_SG_ERR     vxge_mBIT(20)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_RPE2MSG_SG_ERR      vxge_mBIT(21)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_UMQ_SG_ERR  vxge_mBIT(26)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_BWR_PF_SG_ERR       vxge_mBIT(27)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_DMQ_ECC_SG_ERR      vxge_mBIT(29)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_DMA_RESP_ECC_SG_ERR vxge_mBIT(31)
-#define        VXGE_HW_MSG_ERR_REG_MSG_XFMDQRY_FSM_INTEGRITY_ERR       vxge_mBIT(33)
-#define        VXGE_HW_MSG_ERR_REG_MSG_FRMQRY_FSM_INTEGRITY_ERR        vxge_mBIT(34)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_UMQ_WRITE_FSM_INTEGRITY_ERR vxge_mBIT(35)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_UMQ_BWR_PF_FSM_INTEGRITY_ERR \
-                                                               vxge_mBIT(36)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_REG_RESP_FIFO_ERR   vxge_mBIT(38)
-#define        VXGE_HW_MSG_ERR_REG_UP_UXP_DTAG_DB_ERR  vxge_mBIT(39)
-#define        VXGE_HW_MSG_ERR_REG_UP_UXP_ITAG_DB_ERR  vxge_mBIT(41)
-#define        VXGE_HW_MSG_ERR_REG_MP_MXP_DTAG_DB_ERR  vxge_mBIT(43)
-#define        VXGE_HW_MSG_ERR_REG_MP_MXP_ITAG_DB_ERR  vxge_mBIT(45)
-#define        VXGE_HW_MSG_ERR_REG_UP_UXP_TRACE_DB_ERR vxge_mBIT(47)
-#define        VXGE_HW_MSG_ERR_REG_MP_MXP_TRACE_DB_ERR vxge_mBIT(48)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_CMG2MSG_DB_ERR      vxge_mBIT(49)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_TXPE2MSG_DB_ERR     vxge_mBIT(50)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_RXPE2MSG_DB_ERR     vxge_mBIT(51)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_RPE2MSG_DB_ERR      vxge_mBIT(52)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_REG_READ_FIFO_ERR   vxge_mBIT(53)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_MXP2UXP_FIFO_ERR    vxge_mBIT(54)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_KDFC_SIF_FIFO_ERR   vxge_mBIT(55)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_CXP2SWIF_FIFO_ERR   vxge_mBIT(56)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_UMQ_DB_ERR  vxge_mBIT(57)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_BWR_PF_DB_ERR       vxge_mBIT(58)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_BWR_SIF_FIFO_ERR    vxge_mBIT(59)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_DMQ_ECC_DB_ERR      vxge_mBIT(60)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_DMA_READ_FIFO_ERR   vxge_mBIT(61)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_DMA_RESP_ECC_DB_ERR vxge_mBIT(62)
-#define        VXGE_HW_MSG_ERR_REG_MSG_QUE_UXP2MXP_FIFO_ERR    vxge_mBIT(63)
-/*0x05230*/    u64     msg_err_mask;
-/*0x05238*/    u64     msg_err_alarm;
-       u8      unused05340[0x05340-0x05240];
-
-/*0x05340*/    u64     msg_exc_reg;
-#define        VXGE_HW_MSG_EXC_REG_MP_MXP_CAUSE_INFO_INT       vxge_mBIT(50)
-#define        VXGE_HW_MSG_EXC_REG_MP_MXP_CAUSE_CRIT_INT       vxge_mBIT(51)
-#define        VXGE_HW_MSG_EXC_REG_UP_UXP_CAUSE_INFO_INT       vxge_mBIT(54)
-#define        VXGE_HW_MSG_EXC_REG_UP_UXP_CAUSE_CRIT_INT       vxge_mBIT(55)
-#define        VXGE_HW_MSG_EXC_REG_MP_MXP_SERR vxge_mBIT(62)
-#define        VXGE_HW_MSG_EXC_REG_UP_UXP_SERR vxge_mBIT(63)
-/*0x05348*/    u64     msg_exc_mask;
-/*0x05350*/    u64     msg_exc_alarm;
-/*0x05358*/    u64     msg_exc_cause;
-#define VXGE_HW_MSG_EXC_CAUSE_MP_MXP(val) vxge_vBIT(val, 0, 32)
-#define VXGE_HW_MSG_EXC_CAUSE_UP_UXP(val) vxge_vBIT(val, 32, 32)
-       u8      unused05368[0x05380-0x05360];
-
-/*0x05380*/    u64     msg_err2_reg;
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_CMG2MSG_DISPATCH_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(0)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_DMQ_DISPATCH_FSM_INTEGRITY_ERR \
-                                                               vxge_mBIT(1)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_SWIF_DISPATCH_FSM_INTEGRITY_ERR \
-                                                               vxge_mBIT(2)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_PIC_WRITE_FSM_INTEGRITY_ERR \
-                                                               vxge_mBIT(3)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_SWIFREG_FSM_INTEGRITY_ERR  vxge_mBIT(4)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_TIM_WRITE_FSM_INTEGRITY_ERR \
-                                                               vxge_mBIT(5)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_UMQ_TA_FSM_INTEGRITY_ERR   vxge_mBIT(6)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_TXPE_TA_FSM_INTEGRITY_ERR  vxge_mBIT(7)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_RXPE_TA_FSM_INTEGRITY_ERR  vxge_mBIT(8)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_SWIF_TA_FSM_INTEGRITY_ERR  vxge_mBIT(9)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_DMA_TA_FSM_INTEGRITY_ERR   vxge_mBIT(10)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_CP_TA_FSM_INTEGRITY_ERR    vxge_mBIT(11)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA16_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(12)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA15_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(13)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA14_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(14)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA13_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(15)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA12_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(16)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA11_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(17)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA10_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(18)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA9_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(19)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA8_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(20)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA7_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(21)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA6_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(22)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA5_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(23)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA4_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(24)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA3_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(25)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA2_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(26)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA1_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(27)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_LONGTERMUMQ_TA0_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(28)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_FBMC_OWN_FSM_INTEGRITY_ERR vxge_mBIT(29)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_TXPE2MSG_DISPATCH_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(30)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_RXPE2MSG_DISPATCH_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(31)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_RPE2MSG_DISPATCH_FSM_INTEGRITY_ERR \
-                                                       vxge_mBIT(32)
-#define        VXGE_HW_MSG_ERR2_REG_MP_MP_PIFT_IF_CREDIT_CNT_ERR       vxge_mBIT(33)
-#define        VXGE_HW_MSG_ERR2_REG_UP_UP_PIFT_IF_CREDIT_CNT_ERR       vxge_mBIT(34)
-#define        VXGE_HW_MSG_ERR2_REG_MSG_QUE_UMQ2PIC_CMD_FIFO_ERR       vxge_mBIT(62)
-#define        VXGE_HW_MSG_ERR2_REG_TIM_TIM2MSG_CMD_FIFO_ERR   vxge_mBIT(63)
-/*0x05388*/    u64     msg_err2_mask;
-/*0x05390*/    u64     msg_err2_alarm;
-/*0x05398*/    u64     msg_err3_reg;
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_SG_ERR0      vxge_mBIT(0)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_SG_ERR1      vxge_mBIT(1)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_SG_ERR2      vxge_mBIT(2)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_SG_ERR3      vxge_mBIT(3)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_SG_ERR4      vxge_mBIT(4)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_SG_ERR5      vxge_mBIT(5)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_SG_ERR6      vxge_mBIT(6)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_SG_ERR7      vxge_mBIT(7)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_ICACHE_SG_ERR0      vxge_mBIT(8)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_ICACHE_SG_ERR1      vxge_mBIT(9)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_SG_ERR0      vxge_mBIT(16)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_SG_ERR1      vxge_mBIT(17)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_SG_ERR2      vxge_mBIT(18)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_SG_ERR3      vxge_mBIT(19)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_SG_ERR4      vxge_mBIT(20)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_SG_ERR5      vxge_mBIT(21)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_SG_ERR6      vxge_mBIT(22)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_SG_ERR7      vxge_mBIT(23)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_ICACHE_SG_ERR0      vxge_mBIT(24)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_ICACHE_SG_ERR1      vxge_mBIT(25)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_DB_ERR0      vxge_mBIT(32)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_DB_ERR1      vxge_mBIT(33)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_DB_ERR2      vxge_mBIT(34)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_DB_ERR3      vxge_mBIT(35)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_DB_ERR4      vxge_mBIT(36)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_DB_ERR5      vxge_mBIT(37)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_DB_ERR6      vxge_mBIT(38)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_DCACHE_DB_ERR7      vxge_mBIT(39)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_ICACHE_DB_ERR0      vxge_mBIT(40)
-#define        VXGE_HW_MSG_ERR3_REG_UP_UXP_ICACHE_DB_ERR1      vxge_mBIT(41)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_DB_ERR0      vxge_mBIT(48)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_DB_ERR1      vxge_mBIT(49)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_DB_ERR2      vxge_mBIT(50)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_DB_ERR3      vxge_mBIT(51)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_DB_ERR4      vxge_mBIT(52)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_DB_ERR5      vxge_mBIT(53)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_DB_ERR6      vxge_mBIT(54)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_DCACHE_DB_ERR7      vxge_mBIT(55)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_ICACHE_DB_ERR0      vxge_mBIT(56)
-#define        VXGE_HW_MSG_ERR3_REG_MP_MXP_ICACHE_DB_ERR1      vxge_mBIT(57)
-/*0x053a0*/    u64     msg_err3_mask;
-/*0x053a8*/    u64     msg_err3_alarm;
-       u8      unused05600[0x05600-0x053b0];
-
-/*0x05600*/    u64     fau_gen_err_reg;
-#define        VXGE_HW_FAU_GEN_ERR_REG_FMPF_PORT0_PERMANENT_STOP       vxge_mBIT(3)
-#define        VXGE_HW_FAU_GEN_ERR_REG_FMPF_PORT1_PERMANENT_STOP       vxge_mBIT(7)
-#define        VXGE_HW_FAU_GEN_ERR_REG_FMPF_PORT2_PERMANENT_STOP       vxge_mBIT(11)
-#define        VXGE_HW_FAU_GEN_ERR_REG_FALR_AUTO_LRO_NOTIFICATION      vxge_mBIT(15)
-/*0x05608*/    u64     fau_gen_err_mask;
-/*0x05610*/    u64     fau_gen_err_alarm;
-/*0x05618*/    u64     fau_ecc_err_reg;
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT0_FAU_MAC2F_N_SG_ERR    vxge_mBIT(0)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT0_FAU_MAC2F_N_DB_ERR    vxge_mBIT(1)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT0_FAU_MAC2F_W_SG_ERR(val) \
-                                                       vxge_vBIT(val, 2, 2)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT0_FAU_MAC2F_W_DB_ERR(val) \
-                                                       vxge_vBIT(val, 4, 2)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT1_FAU_MAC2F_N_SG_ERR    vxge_mBIT(6)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT1_FAU_MAC2F_N_DB_ERR    vxge_mBIT(7)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT1_FAU_MAC2F_W_SG_ERR(val) \
-                                                       vxge_vBIT(val, 8, 2)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT1_FAU_MAC2F_W_DB_ERR(val) \
-                                                       vxge_vBIT(val, 10, 2)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT2_FAU_MAC2F_N_SG_ERR    vxge_mBIT(12)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT2_FAU_MAC2F_N_DB_ERR    vxge_mBIT(13)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT2_FAU_MAC2F_W_SG_ERR(val) \
-                                                       vxge_vBIT(val, 14, 2)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAU_PORT2_FAU_MAC2F_W_DB_ERR(val) \
-                                                       vxge_vBIT(val, 16, 2)
-#define VXGE_HW_FAU_ECC_ERR_REG_FAU_FAU_XFMD_INS_SG_ERR(val) \
-                                                       vxge_vBIT(val, 18, 2)
-#define VXGE_HW_FAU_ECC_ERR_REG_FAU_FAU_XFMD_INS_DB_ERR(val) \
-                                                       vxge_vBIT(val, 20, 2)
-#define        VXGE_HW_FAU_ECC_ERR_REG_FAUJ_FAU_FSM_ERR        vxge_mBIT(31)
-/*0x05620*/    u64     fau_ecc_err_mask;
-/*0x05628*/    u64     fau_ecc_err_alarm;
-       u8      unused05658[0x05658-0x05630];
-/*0x05658*/    u64     fau_pa_cfg;
-#define        VXGE_HW_FAU_PA_CFG_REPL_L4_COMP_CSUM    vxge_mBIT(3)
-#define        VXGE_HW_FAU_PA_CFG_REPL_L3_INCL_CF      vxge_mBIT(7)
-#define        VXGE_HW_FAU_PA_CFG_REPL_L3_COMP_CSUM    vxge_mBIT(11)
-       u8      unused05668[0x05668-0x05660];
-
-/*0x05668*/    u64     dbg_stats_fau_rx_path;
-#define        VXGE_HW_DBG_STATS_FAU_RX_PATH_RX_PERMITTED_FRMS(val) \
-                                               vxge_vBIT(val, 32, 32)
-       u8      unused056c0[0x056c0-0x05670];
-
-/*0x056c0*/    u64     fau_lag_cfg;
-#define VXGE_HW_FAU_LAG_CFG_COLL_ALG(val) vxge_vBIT(val, 2, 2)
-#define        VXGE_HW_FAU_LAG_CFG_INCR_RX_AGGR_STATS  vxge_mBIT(7)
-       u8      unused05800[0x05800-0x056c8];
-
-/*0x05800*/    u64     tpa_int_status;
-#define        VXGE_HW_TPA_INT_STATUS_ORP_ERR_ORP_INT  vxge_mBIT(15)
-#define        VXGE_HW_TPA_INT_STATUS_PTM_ALARM_PTM_INT        vxge_mBIT(23)
-#define        VXGE_HW_TPA_INT_STATUS_TPA_ERROR_TPA_INT        vxge_mBIT(31)
-/*0x05808*/    u64     tpa_int_mask;
-/*0x05810*/    u64     orp_err_reg;
-#define        VXGE_HW_ORP_ERR_REG_ORP_FIFO_SG_ERR     vxge_mBIT(3)
-#define        VXGE_HW_ORP_ERR_REG_ORP_FIFO_DB_ERR     vxge_mBIT(7)
-#define        VXGE_HW_ORP_ERR_REG_ORP_XFMD_FIFO_UFLOW_ERR     vxge_mBIT(11)
-#define        VXGE_HW_ORP_ERR_REG_ORP_FRM_FIFO_UFLOW_ERR      vxge_mBIT(15)
-#define        VXGE_HW_ORP_ERR_REG_ORP_XFMD_RCV_FSM_ERR        vxge_mBIT(19)
-#define        VXGE_HW_ORP_ERR_REG_ORP_OUTREAD_FSM_ERR vxge_mBIT(23)
-#define        VXGE_HW_ORP_ERR_REG_ORP_OUTQEM_FSM_ERR  vxge_mBIT(27)
-#define        VXGE_HW_ORP_ERR_REG_ORP_XFMD_RCV_SHADOW_ERR     vxge_mBIT(31)
-#define        VXGE_HW_ORP_ERR_REG_ORP_OUTREAD_SHADOW_ERR      vxge_mBIT(35)
-#define        VXGE_HW_ORP_ERR_REG_ORP_OUTQEM_SHADOW_ERR       vxge_mBIT(39)
-#define        VXGE_HW_ORP_ERR_REG_ORP_OUTFRM_SHADOW_ERR       vxge_mBIT(43)
-#define        VXGE_HW_ORP_ERR_REG_ORP_OPTPRS_SHADOW_ERR       vxge_mBIT(47)
-/*0x05818*/    u64     orp_err_mask;
-/*0x05820*/    u64     orp_err_alarm;
-/*0x05828*/    u64     ptm_alarm_reg;
-#define        VXGE_HW_PTM_ALARM_REG_PTM_RDCTRL_SYNC_ERR       vxge_mBIT(3)
-#define        VXGE_HW_PTM_ALARM_REG_PTM_RDCTRL_FIFO_ERR       vxge_mBIT(7)
-#define        VXGE_HW_PTM_ALARM_REG_XFMD_RD_FIFO_ERR  vxge_mBIT(11)
-#define        VXGE_HW_PTM_ALARM_REG_WDE2MSR_WR_FIFO_ERR       vxge_mBIT(15)
-#define VXGE_HW_PTM_ALARM_REG_PTM_FRMM_ECC_DB_ERR(val) vxge_vBIT(val, 18, 2)
-#define VXGE_HW_PTM_ALARM_REG_PTM_FRMM_ECC_SG_ERR(val) vxge_vBIT(val, 22, 2)
-/*0x05830*/    u64     ptm_alarm_mask;
-/*0x05838*/    u64     ptm_alarm_alarm;
-/*0x05840*/    u64     tpa_error_reg;
-#define        VXGE_HW_TPA_ERROR_REG_TPA_FSM_ERR_ALARM vxge_mBIT(3)
-#define        VXGE_HW_TPA_ERROR_REG_TPA_TPA_DA_LKUP_PRT0_DB_ERR       vxge_mBIT(7)
-#define        VXGE_HW_TPA_ERROR_REG_TPA_TPA_DA_LKUP_PRT0_SG_ERR       vxge_mBIT(11)
-/*0x05848*/    u64     tpa_error_mask;
-/*0x05850*/    u64     tpa_error_alarm;
-/*0x05858*/    u64     tpa_global_cfg;
-#define        VXGE_HW_TPA_GLOBAL_CFG_SUPPORT_SNAP_AB_N        vxge_mBIT(7)
-#define        VXGE_HW_TPA_GLOBAL_CFG_ECC_ENABLE_N     vxge_mBIT(35)
-       u8      unused05868[0x05870-0x05860];
-
-/*0x05870*/    u64     ptm_ecc_cfg;
-#define        VXGE_HW_PTM_ECC_CFG_PTM_FRMM_ECC_EN_N   vxge_mBIT(3)
-/*0x05878*/    u64     ptm_phase_cfg;
-#define        VXGE_HW_PTM_PHASE_CFG_FRMM_WR_PHASE_EN  vxge_mBIT(3)
-#define        VXGE_HW_PTM_PHASE_CFG_FRMM_RD_PHASE_EN  vxge_mBIT(7)
-       u8      unused05898[0x05898-0x05880];
-
-/*0x05898*/    u64     dbg_stats_tpa_tx_path;
-#define        VXGE_HW_DBG_STATS_TPA_TX_PATH_TX_PERMITTED_FRMS(val) \
-                                                       vxge_vBIT(val, 32, 32)
-       u8      unused05900[0x05900-0x058a0];
-
-/*0x05900*/    u64     tmac_int_status;
-#define        VXGE_HW_TMAC_INT_STATUS_TXMAC_GEN_ERR_TXMAC_GEN_INT     vxge_mBIT(3)
-#define        VXGE_HW_TMAC_INT_STATUS_TXMAC_ECC_ERR_TXMAC_ECC_INT     vxge_mBIT(7)
-/*0x05908*/    u64     tmac_int_mask;
-/*0x05910*/    u64     txmac_gen_err_reg;
-#define        VXGE_HW_TXMAC_GEN_ERR_REG_TMACJ_PERMANENT_STOP  vxge_mBIT(3)
-#define        VXGE_HW_TXMAC_GEN_ERR_REG_TMACJ_NO_VALID_VSPORT vxge_mBIT(7)
-/*0x05918*/    u64     txmac_gen_err_mask;
-/*0x05920*/    u64     txmac_gen_err_alarm;
-/*0x05928*/    u64     txmac_ecc_err_reg;
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMACJ_TMAC_TPA2MAC_SG_ERR     vxge_mBIT(3)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMACJ_TMAC_TPA2MAC_DB_ERR     vxge_mBIT(7)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMACJ_TMAC_TPA2M_SB_SG_ERR    vxge_mBIT(11)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMACJ_TMAC_TPA2M_SB_DB_ERR    vxge_mBIT(15)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMACJ_TMAC_TPA2M_DA_SG_ERR    vxge_mBIT(19)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMACJ_TMAC_TPA2M_DA_DB_ERR    vxge_mBIT(23)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMAC_TMAC_PORT0_FSM_ERR       vxge_mBIT(27)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMAC_TMAC_PORT1_FSM_ERR       vxge_mBIT(31)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMAC_TMAC_PORT2_FSM_ERR       vxge_mBIT(35)
-#define        VXGE_HW_TXMAC_ECC_ERR_REG_TMACJ_TMACJ_FSM_ERR   vxge_mBIT(39)
-/*0x05930*/    u64     txmac_ecc_err_mask;
-/*0x05938*/    u64     txmac_ecc_err_alarm;
-       u8      unused05978[0x05978-0x05940];
-
-/*0x05978*/    u64     dbg_stat_tx_any_frms;
-#define VXGE_HW_DBG_STAT_TX_ANY_FRMS_PORT0_TX_ANY_FRMS(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_DBG_STAT_TX_ANY_FRMS_PORT1_TX_ANY_FRMS(val) vxge_vBIT(val, 8, 8)
-#define VXGE_HW_DBG_STAT_TX_ANY_FRMS_PORT2_TX_ANY_FRMS(val) \
-                                                       vxge_vBIT(val, 16, 8)
-       u8      unused059a0[0x059a0-0x05980];
-
-/*0x059a0*/    u64     txmac_link_util_port[3];
-#define        VXGE_HW_TXMAC_LINK_UTIL_PORT_TMAC_TMAC_UTILIZATION(val) \
-                                                       vxge_vBIT(val, 1, 7)
-#define VXGE_HW_TXMAC_LINK_UTIL_PORT_TMAC_UTIL_CFG(val) vxge_vBIT(val, 8, 4)
-#define VXGE_HW_TXMAC_LINK_UTIL_PORT_TMAC_TMAC_FRAC_UTIL(val) \
-                                                       vxge_vBIT(val, 12, 4)
-#define VXGE_HW_TXMAC_LINK_UTIL_PORT_TMAC_PKT_WEIGHT(val) vxge_vBIT(val, 16, 4)
-#define        VXGE_HW_TXMAC_LINK_UTIL_PORT_TMAC_TMAC_SCALE_FACTOR     vxge_mBIT(23)
-/*0x059b8*/    u64     txmac_cfg0_port[3];
-#define        VXGE_HW_TXMAC_CFG0_PORT_TMAC_EN vxge_mBIT(3)
-#define        VXGE_HW_TXMAC_CFG0_PORT_APPEND_PAD      vxge_mBIT(7)
-#define VXGE_HW_TXMAC_CFG0_PORT_PAD_BYTE(val) vxge_vBIT(val, 8, 8)
-/*0x059d0*/    u64     txmac_cfg1_port[3];
-#define VXGE_HW_TXMAC_CFG1_PORT_AVG_IPG(val) vxge_vBIT(val, 40, 8)
-/*0x059e8*/    u64     txmac_status_port[3];
-#define        VXGE_HW_TXMAC_STATUS_PORT_TMAC_TX_FRM_SENT      vxge_mBIT(3)
-       u8      unused05a20[0x05a20-0x05a00];
-
-/*0x05a20*/    u64     lag_distrib_dest;
-#define VXGE_HW_LAG_DISTRIB_DEST_MAP_VPATH(n)  vxge_mBIT(n)
-/*0x05a28*/    u64     lag_marker_cfg;
-#define        VXGE_HW_LAG_MARKER_CFG_GEN_RCVR_EN      vxge_mBIT(3)
-#define        VXGE_HW_LAG_MARKER_CFG_RESP_EN  vxge_mBIT(7)
-#define VXGE_HW_LAG_MARKER_CFG_RESP_TIMEOUT(val) vxge_vBIT(val, 16, 16)
-#define        VXGE_HW_LAG_MARKER_CFG_SLOW_PROTO_MRKR_MIN_INTERVAL(val) \
-                                                       vxge_vBIT(val, 32, 16)
-#define        VXGE_HW_LAG_MARKER_CFG_THROTTLE_MRKR_RESP       vxge_mBIT(51)
-/*0x05a30*/    u64     lag_tx_cfg;
-#define        VXGE_HW_LAG_TX_CFG_INCR_TX_AGGR_STATS   vxge_mBIT(3)
-#define VXGE_HW_LAG_TX_CFG_DISTRIB_ALG_SEL(val) vxge_vBIT(val, 6, 2)
-#define        VXGE_HW_LAG_TX_CFG_DISTRIB_REMAP_IF_FAIL        vxge_mBIT(11)
-#define VXGE_HW_LAG_TX_CFG_COLL_MAX_DELAY(val) vxge_vBIT(val, 16, 16)
-/*0x05a38*/    u64     lag_tx_status;
-#define VXGE_HW_LAG_TX_STATUS_TLAG_TIMER_VAL_EMPTIED_LINK(val) \
-                                                       vxge_vBIT(val, 0, 8)
-#define        VXGE_HW_LAG_TX_STATUS_TLAG_TIMER_VAL_SLOW_PROTO_MRKR(val) \
-                                                       vxge_vBIT(val, 8, 8)
-#define        VXGE_HW_LAG_TX_STATUS_TLAG_TIMER_VAL_SLOW_PROTO_MRKRRESP(val) \
-                                                       vxge_vBIT(val, 16, 8)
-       u8      unused05d48[0x05d48-0x05a40];
-
-/*0x05d48*/    u64     srpcim_to_mrpcim_vplane_rmsg[17];
-#define        \
-VXGE_HAL_SRPCIM_TO_MRPCIM_VPLANE_RMSG_SWIF_SRPCIM_TO_MRPCIM_VPLANE_RMSG(val)\
- vxge_vBIT(val, 0, 64)
-       u8      unused06420[0x06420-0x05dd0];
-
-/*0x06420*/    u64     mrpcim_to_srpcim_vplane_wmsg[17];
-#define        VXGE_HW_MRPCIM_TO_SRPCIM_VPLANE_WMSG_MRPCIM_TO_SRPCIM_VPLANE_WMSG(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x064a8*/    u64     mrpcim_to_srpcim_vplane_wmsg_trig[17];
-
-/*0x06530*/    u64     debug_stats0;
-#define VXGE_HW_DEBUG_STATS0_RSTDROP_MSG(val) vxge_vBIT(val, 0, 32)
-#define VXGE_HW_DEBUG_STATS0_RSTDROP_CPL(val) vxge_vBIT(val, 32, 32)
-/*0x06538*/    u64     debug_stats1;
-#define VXGE_HW_DEBUG_STATS1_RSTDROP_CLIENT0(val) vxge_vBIT(val, 0, 32)
-#define VXGE_HW_DEBUG_STATS1_RSTDROP_CLIENT1(val) vxge_vBIT(val, 32, 32)
-/*0x06540*/    u64     debug_stats2;
-#define VXGE_HW_DEBUG_STATS2_RSTDROP_CLIENT2(val) vxge_vBIT(val, 0, 32)
-/*0x06548*/    u64     debug_stats3_vplane[17];
-#define VXGE_HW_DEBUG_STATS3_VPLANE_DEPL_PH(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_DEBUG_STATS3_VPLANE_DEPL_NPH(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_DEBUG_STATS3_VPLANE_DEPL_CPLH(val) vxge_vBIT(val, 32, 16)
-/*0x065d0*/    u64     debug_stats4_vplane[17];
-#define VXGE_HW_DEBUG_STATS4_VPLANE_DEPL_PD(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_DEBUG_STATS4_VPLANE_DEPL_NPD(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_DEBUG_STATS4_VPLANE_DEPL_CPLD(val) vxge_vBIT(val, 32, 16)
-
-       u8      unused07000[0x07000-0x06658];
-
-/*0x07000*/    u64     mrpcim_general_int_status;
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_PIC_INT       vxge_mBIT(0)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_PCI_INT       vxge_mBIT(1)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_RTDMA_INT     vxge_mBIT(2)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_WRDMA_INT     vxge_mBIT(3)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_G3CMCT_INT    vxge_mBIT(4)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_GCMG1_INT     vxge_mBIT(5)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_GCMG2_INT     vxge_mBIT(6)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_GCMG3_INT     vxge_mBIT(7)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_G3CMIFL_INT   vxge_mBIT(8)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_G3CMIFU_INT   vxge_mBIT(9)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_PCMG1_INT     vxge_mBIT(10)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_PCMG2_INT     vxge_mBIT(11)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_PCMG3_INT     vxge_mBIT(12)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_XMAC_INT      vxge_mBIT(13)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_RXMAC_INT     vxge_mBIT(14)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_TMAC_INT      vxge_mBIT(15)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_G3FBIF_INT    vxge_mBIT(16)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_FBMC_INT      vxge_mBIT(17)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_G3FBCT_INT    vxge_mBIT(18)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_TPA_INT       vxge_mBIT(19)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_DRBELL_INT    vxge_mBIT(20)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_ONE_INT       vxge_mBIT(21)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_STATUS_MSG_INT       vxge_mBIT(22)
-/*0x07008*/    u64     mrpcim_general_int_mask;
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_PIC_INT vxge_mBIT(0)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_PCI_INT vxge_mBIT(1)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_RTDMA_INT       vxge_mBIT(2)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_WRDMA_INT       vxge_mBIT(3)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_G3CMCT_INT      vxge_mBIT(4)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_GCMG1_INT       vxge_mBIT(5)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_GCMG2_INT       vxge_mBIT(6)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_GCMG3_INT       vxge_mBIT(7)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_G3CMIFL_INT     vxge_mBIT(8)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_G3CMIFU_INT     vxge_mBIT(9)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_PCMG1_INT       vxge_mBIT(10)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_PCMG2_INT       vxge_mBIT(11)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_PCMG3_INT       vxge_mBIT(12)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_XMAC_INT        vxge_mBIT(13)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_RXMAC_INT       vxge_mBIT(14)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_TMAC_INT        vxge_mBIT(15)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_G3FBIF_INT      vxge_mBIT(16)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_FBMC_INT        vxge_mBIT(17)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_G3FBCT_INT      vxge_mBIT(18)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_TPA_INT vxge_mBIT(19)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_DRBELL_INT      vxge_mBIT(20)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_ONE_INT vxge_mBIT(21)
-#define        VXGE_HW_MRPCIM_GENERAL_INT_MASK_MSG_INT vxge_mBIT(22)
-/*0x07010*/    u64     mrpcim_ppif_int_status;
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_INI_ERRORS_INI_INT       vxge_mBIT(3)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_DMA_ERRORS_DMA_INT       vxge_mBIT(7)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_TGT_ERRORS_TGT_INT       vxge_mBIT(11)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CONFIG_ERRORS_CONFIG_INT vxge_mBIT(15)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_CRDT_INT     vxge_mBIT(19)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_PLL_ERRORS_PLL_INT       vxge_mBIT(27)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE0_CRD_INT_VPLANE0_INT\
-                                                       vxge_mBIT(31)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE1_CRD_INT_VPLANE1_INT\
-                                                       vxge_mBIT(32)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE2_CRD_INT_VPLANE2_INT\
-                                                       vxge_mBIT(33)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE3_CRD_INT_VPLANE3_INT\
-                                                       vxge_mBIT(34)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE4_CRD_INT_VPLANE4_INT\
-                                                       vxge_mBIT(35)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE5_CRD_INT_VPLANE5_INT\
-                                                       vxge_mBIT(36)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE6_CRD_INT_VPLANE6_INT\
-                                                       vxge_mBIT(37)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE7_CRD_INT_VPLANE7_INT\
-                                                       vxge_mBIT(38)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE8_CRD_INT_VPLANE8_INT\
-                                                       vxge_mBIT(39)
-#define        VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE9_CRD_INT_VPLANE9_INT\
-                                                       vxge_mBIT(40)
-#define \
-VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE10_CRD_INT_VPLANE10_INT \
-                                                       vxge_mBIT(41)
-#define \
-VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE11_CRD_INT_VPLANE11_INT \
-                                                       vxge_mBIT(42)
-#define        \
-VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE12_CRD_INT_VPLANE12_INT \
-                                                       vxge_mBIT(43)
-#define        \
-VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE13_CRD_INT_VPLANE13_INT \
-                                                       vxge_mBIT(44)
-#define        \
-VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE14_CRD_INT_VPLANE14_INT \
-                                                       vxge_mBIT(45)
-#define        \
-VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE15_CRD_INT_VPLANE15_INT \
-                                                       vxge_mBIT(46)
-#define        \
-VXGE_HW_MRPCIM_PPIF_INT_STATUS_CRDT_ERRORS_VPLANE16_CRD_INT_VPLANE16_INT \
-                                                       vxge_mBIT(47)
-#define        \
-VXGE_HW_MRPCIM_PPIF_INT_STATUS_VPATH_TO_MRPCIM_ALARM_VPATH_TO_MRPCIM_ALARM_INT \
-                                                       vxge_mBIT(55)
-/*0x07018*/    u64     mrpcim_ppif_int_mask;
-       u8      unused07028[0x07028-0x07020];
-
-/*0x07028*/    u64     ini_errors_reg;
-#define        VXGE_HW_INI_ERRORS_REG_SCPL_CPL_TIMEOUT_UNUSED_TAG      vxge_mBIT(3)
-#define        VXGE_HW_INI_ERRORS_REG_SCPL_CPL_TIMEOUT vxge_mBIT(7)
-#define        VXGE_HW_INI_ERRORS_REG_DCPL_FSM_ERR     vxge_mBIT(11)
-#define        VXGE_HW_INI_ERRORS_REG_DCPL_POISON      vxge_mBIT(12)
-#define        VXGE_HW_INI_ERRORS_REG_DCPL_UNSUPPORTED vxge_mBIT(15)
-#define        VXGE_HW_INI_ERRORS_REG_DCPL_ABORT       vxge_mBIT(19)
-#define        VXGE_HW_INI_ERRORS_REG_INI_TLP_ABORT    vxge_mBIT(23)
-#define        VXGE_HW_INI_ERRORS_REG_INI_DLLP_ABORT   vxge_mBIT(27)
-#define        VXGE_HW_INI_ERRORS_REG_INI_ECRC_ERR     vxge_mBIT(31)
-#define        VXGE_HW_INI_ERRORS_REG_INI_BUF_DB_ERR   vxge_mBIT(35)
-#define        VXGE_HW_INI_ERRORS_REG_INI_BUF_SG_ERR   vxge_mBIT(39)
-#define        VXGE_HW_INI_ERRORS_REG_INI_DATA_OVERFLOW        vxge_mBIT(43)
-#define        VXGE_HW_INI_ERRORS_REG_INI_HDR_OVERFLOW vxge_mBIT(47)
-#define        VXGE_HW_INI_ERRORS_REG_INI_MRD_SYS_DROP vxge_mBIT(51)
-#define        VXGE_HW_INI_ERRORS_REG_INI_MWR_SYS_DROP vxge_mBIT(55)
-#define        VXGE_HW_INI_ERRORS_REG_INI_MRD_CLIENT_DROP      vxge_mBIT(59)
-#define        VXGE_HW_INI_ERRORS_REG_INI_MWR_CLIENT_DROP      vxge_mBIT(63)
-/*0x07030*/    u64     ini_errors_mask;
-/*0x07038*/    u64     ini_errors_alarm;
-/*0x07040*/    u64     dma_errors_reg;
-#define        VXGE_HW_DMA_ERRORS_REG_RDARB_FSM_ERR    vxge_mBIT(3)
-#define        VXGE_HW_DMA_ERRORS_REG_WRARB_FSM_ERR    vxge_mBIT(7)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_WRDMA_WR_HDR_OVERFLOW        vxge_mBIT(8)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_WRDMA_WR_HDR_UNDERFLOW       vxge_mBIT(9)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_WRDMA_WR_DATA_OVERFLOW       vxge_mBIT(10)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_WRDMA_WR_DATA_UNDERFLOW      vxge_mBIT(11)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_MSG_WR_HDR_OVERFLOW  vxge_mBIT(12)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_MSG_WR_HDR_UNDERFLOW vxge_mBIT(13)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_MSG_WR_DATA_OVERFLOW vxge_mBIT(14)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_MSG_WR_DATA_UNDERFLOW        vxge_mBIT(15)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_STATS_WR_HDR_OVERFLOW        vxge_mBIT(16)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_STATS_WR_HDR_UNDERFLOW       vxge_mBIT(17)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_STATS_WR_DATA_OVERFLOW       vxge_mBIT(18)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_STATS_WR_DATA_UNDERFLOW      vxge_mBIT(19)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_RTDMA_WR_HDR_OVERFLOW        vxge_mBIT(20)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_RTDMA_WR_HDR_UNDERFLOW       vxge_mBIT(21)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_RTDMA_WR_DATA_OVERFLOW       vxge_mBIT(22)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_RTDMA_WR_DATA_UNDERFLOW      vxge_mBIT(23)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_WRDMA_RD_HDR_OVERFLOW        vxge_mBIT(24)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_WRDMA_RD_HDR_UNDERFLOW       vxge_mBIT(25)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_RTDMA_RD_HDR_OVERFLOW        vxge_mBIT(28)
-#define        VXGE_HW_DMA_ERRORS_REG_DMA_RTDMA_RD_HDR_UNDERFLOW       vxge_mBIT(29)
-#define        VXGE_HW_DMA_ERRORS_REG_DBLGEN_FSM_ERR   vxge_mBIT(32)
-#define        VXGE_HW_DMA_ERRORS_REG_DBLGEN_CREDIT_FSM_ERR    vxge_mBIT(33)
-#define        VXGE_HW_DMA_ERRORS_REG_DBLGEN_DMA_WRR_SM_ERR    vxge_mBIT(34)
-/*0x07048*/    u64     dma_errors_mask;
-/*0x07050*/    u64     dma_errors_alarm;
-/*0x07058*/    u64     tgt_errors_reg;
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_VENDOR_MSG   vxge_mBIT(0)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_MSG_UNLOCK   vxge_mBIT(1)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_ILLEGAL_TLP_BE       vxge_mBIT(2)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_BOOT_WRITE   vxge_mBIT(3)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_PIF_WR_CROSS_QWRANGE vxge_mBIT(4)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_PIF_READ_CROSS_QWRANGE       vxge_mBIT(5)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_KDFC_READ    vxge_mBIT(6)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_USDC_READ    vxge_mBIT(7)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_USDC_WR_CROSS_QWRANGE        vxge_mBIT(8)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_MSIX_BEYOND_RANGE    vxge_mBIT(9)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_WR_TO_KDFC_POISON    vxge_mBIT(10)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_WR_TO_USDC_POISON    vxge_mBIT(11)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_WR_TO_PIF_POISON     vxge_mBIT(12)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_WR_TO_MSIX_POISON    vxge_mBIT(13)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_WR_TO_MRIOV_POISON   vxge_mBIT(14)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_NOT_MEM_TLP  vxge_mBIT(15)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_UNKNOWN_MEM_TLP      vxge_mBIT(16)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_REQ_FSM_ERR  vxge_mBIT(17)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_CPL_FSM_ERR  vxge_mBIT(18)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_KDFC_PROT_ERR        vxge_mBIT(19)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_SWIF_PROT_ERR        vxge_mBIT(20)
-#define        VXGE_HW_TGT_ERRORS_REG_TGT_MRIOV_MEM_MAP_CFG_ERR        vxge_mBIT(21)
-/*0x07060*/    u64     tgt_errors_mask;
-/*0x07068*/    u64     tgt_errors_alarm;
-/*0x07070*/    u64     config_errors_reg;
-#define        VXGE_HW_CONFIG_ERRORS_REG_I2C_ILLEGAL_STOP_COND vxge_mBIT(3)
-#define        VXGE_HW_CONFIG_ERRORS_REG_I2C_ILLEGAL_START_COND        vxge_mBIT(7)
-#define        VXGE_HW_CONFIG_ERRORS_REG_I2C_EXP_RD_CNT        vxge_mBIT(11)
-#define        VXGE_HW_CONFIG_ERRORS_REG_I2C_EXTRA_CYCLE       vxge_mBIT(15)
-#define        VXGE_HW_CONFIG_ERRORS_REG_I2C_MAIN_FSM_ERR      vxge_mBIT(19)
-#define        VXGE_HW_CONFIG_ERRORS_REG_I2C_REQ_COLLISION     vxge_mBIT(23)
-#define        VXGE_HW_CONFIG_ERRORS_REG_I2C_REG_FSM_ERR       vxge_mBIT(27)
-#define        VXGE_HW_CONFIG_ERRORS_REG_CFGM_I2C_TIMEOUT      vxge_mBIT(31)
-#define        VXGE_HW_CONFIG_ERRORS_REG_RIC_I2C_TIMEOUT       vxge_mBIT(35)
-#define        VXGE_HW_CONFIG_ERRORS_REG_CFGM_FSM_ERR  vxge_mBIT(39)
-#define        VXGE_HW_CONFIG_ERRORS_REG_RIC_FSM_ERR   vxge_mBIT(43)
-#define        VXGE_HW_CONFIG_ERRORS_REG_PIFM_ILLEGAL_ACCESS   vxge_mBIT(47)
-#define        VXGE_HW_CONFIG_ERRORS_REG_PIFM_TIMEOUT  vxge_mBIT(51)
-#define        VXGE_HW_CONFIG_ERRORS_REG_PIFM_FSM_ERR  vxge_mBIT(55)
-#define        VXGE_HW_CONFIG_ERRORS_REG_PIFM_TO_FSM_ERR       vxge_mBIT(59)
-#define        VXGE_HW_CONFIG_ERRORS_REG_RIC_RIC_RD_TIMEOUT    vxge_mBIT(63)
-/*0x07078*/    u64     config_errors_mask;
-/*0x07080*/    u64     config_errors_alarm;
-       u8      unused07090[0x07090-0x07088];
-
-/*0x07090*/    u64     crdt_errors_reg;
-#define        VXGE_HW_CRDT_ERRORS_REG_WRCRDTARB_FSM_ERR       vxge_mBIT(11)
-#define        VXGE_HW_CRDT_ERRORS_REG_WRCRDTARB_INTCTL_ILLEGAL_CRD_DEAL \
-                                                       vxge_mBIT(15)
-#define        VXGE_HW_CRDT_ERRORS_REG_WRCRDTARB_PDA_ILLEGAL_CRD_DEAL  vxge_mBIT(19)
-#define        VXGE_HW_CRDT_ERRORS_REG_WRCRDTARB_PCI_MSG_ILLEGAL_CRD_DEAL \
-                                                       vxge_mBIT(23)
-#define        VXGE_HW_CRDT_ERRORS_REG_RDCRDTARB_FSM_ERR       vxge_mBIT(35)
-#define        VXGE_HW_CRDT_ERRORS_REG_RDCRDTARB_RDA_ILLEGAL_CRD_DEAL  vxge_mBIT(39)
-#define        VXGE_HW_CRDT_ERRORS_REG_RDCRDTARB_PDA_ILLEGAL_CRD_DEAL  vxge_mBIT(43)
-#define        VXGE_HW_CRDT_ERRORS_REG_RDCRDTARB_DBLGEN_ILLEGAL_CRD_DEAL \
-                                                       vxge_mBIT(47)
-/*0x07098*/    u64     crdt_errors_mask;
-/*0x070a0*/    u64     crdt_errors_alarm;
-       u8      unused070b0[0x070b0-0x070a8];
-
-/*0x070b0*/    u64     mrpcim_general_errors_reg;
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_STATSB_FSM_ERR        vxge_mBIT(3)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_XGEN_FSM_ERR  vxge_mBIT(7)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_XMEM_FSM_ERR  vxge_mBIT(11)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_KDFCCTL_FSM_ERR       vxge_mBIT(15)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_MRIOVCTL_FSM_ERR      vxge_mBIT(19)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_SPI_FLSH_ERR  vxge_mBIT(23)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_SPI_IIC_ACK_ERR       vxge_mBIT(27)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_SPI_IIC_CHKSUM_ERR    vxge_mBIT(31)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_INI_SERR_DET  vxge_mBIT(35)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_INTCTL_MSIX_FSM_ERR   vxge_mBIT(39)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_INTCTL_MSI_OVERFLOW   vxge_mBIT(43)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_PPIF_PCI_NOT_FLUSH_DURING_SW_RESET \
-                                                       vxge_mBIT(47)
-#define        VXGE_HW_MRPCIM_GENERAL_ERRORS_REG_PPIF_SW_RESET_FSM_ERR vxge_mBIT(51)
-/*0x070b8*/    u64     mrpcim_general_errors_mask;
-/*0x070c0*/    u64     mrpcim_general_errors_alarm;
-       u8      unused070d0[0x070d0-0x070c8];
-
-/*0x070d0*/    u64     pll_errors_reg;
-#define        VXGE_HW_PLL_ERRORS_REG_CORE_CMG_PLL_OOL vxge_mBIT(3)
-#define        VXGE_HW_PLL_ERRORS_REG_CORE_FB_PLL_OOL  vxge_mBIT(7)
-#define        VXGE_HW_PLL_ERRORS_REG_CORE_X_PLL_OOL   vxge_mBIT(11)
-/*0x070d8*/    u64     pll_errors_mask;
-/*0x070e0*/    u64     pll_errors_alarm;
-/*0x070e8*/    u64     srpcim_to_mrpcim_alarm_reg;
-#define        VXGE_HW_SRPCIM_TO_MRPCIM_ALARM_REG_PPIF_SRPCIM_TO_MRPCIM_ALARM(val) \
-                                                       vxge_vBIT(val, 0, 17)
-/*0x070f0*/    u64     srpcim_to_mrpcim_alarm_mask;
-/*0x070f8*/    u64     srpcim_to_mrpcim_alarm_alarm;
-/*0x07100*/    u64     vpath_to_mrpcim_alarm_reg;
-#define        VXGE_HW_VPATH_TO_MRPCIM_ALARM_REG_PPIF_VPATH_TO_MRPCIM_ALARM(val) \
-                                                       vxge_vBIT(val, 0, 17)
-/*0x07108*/    u64     vpath_to_mrpcim_alarm_mask;
-/*0x07110*/    u64     vpath_to_mrpcim_alarm_alarm;
-       u8      unused07128[0x07128-0x07118];
-
-/*0x07128*/    u64     crdt_errors_vplane_reg[17];
-#define        VXGE_HW_CRDT_ERRORS_VPLANE_REG_WRCRDTARB_P_H_CONSUME_CRDT_ERR \
-                                                       vxge_mBIT(3)
-#define        VXGE_HW_CRDT_ERRORS_VPLANE_REG_WRCRDTARB_P_D_CONSUME_CRDT_ERR \
-                                                       vxge_mBIT(7)
-#define        VXGE_HW_CRDT_ERRORS_VPLANE_REG_WRCRDTARB_P_H_RETURN_CRDT_ERR \
-                                                       vxge_mBIT(11)
-#define        VXGE_HW_CRDT_ERRORS_VPLANE_REG_WRCRDTARB_P_D_RETURN_CRDT_ERR \
-                                                       vxge_mBIT(15)
-#define        VXGE_HW_CRDT_ERRORS_VPLANE_REG_RDCRDTARB_NP_H_CONSUME_CRDT_ERR \
-                                                       vxge_mBIT(19)
-#define        VXGE_HW_CRDT_ERRORS_VPLANE_REG_RDCRDTARB_NP_H_RETURN_CRDT_ERR \
-                                                       vxge_mBIT(23)
-#define        VXGE_HW_CRDT_ERRORS_VPLANE_REG_RDCRDTARB_TAG_CONSUME_TAG_ERR \
-                                                       vxge_mBIT(27)
-#define        VXGE_HW_CRDT_ERRORS_VPLANE_REG_RDCRDTARB_TAG_RETURN_TAG_ERR \
-                                                       vxge_mBIT(31)
-/*0x071b0*/    u64     crdt_errors_vplane_mask[17];
-/*0x07238*/    u64     crdt_errors_vplane_alarm[17];
-       u8      unused072f0[0x072f0-0x072c0];
-
-/*0x072f0*/    u64     mrpcim_rst_in_prog;
-#define        VXGE_HW_MRPCIM_RST_IN_PROG_MRPCIM_RST_IN_PROG   vxge_mBIT(7)
-/*0x072f8*/    u64     mrpcim_reg_modified;
-#define        VXGE_HW_MRPCIM_REG_MODIFIED_MRPCIM_REG_MODIFIED vxge_mBIT(7)
-
-       u8      unused07378[0x07378-0x07300];
-
-/*0x07378*/    u64     write_arb_pending;
-#define        VXGE_HW_WRITE_ARB_PENDING_WRARB_WRDMA   vxge_mBIT(3)
-#define        VXGE_HW_WRITE_ARB_PENDING_WRARB_RTDMA   vxge_mBIT(7)
-#define        VXGE_HW_WRITE_ARB_PENDING_WRARB_MSG     vxge_mBIT(11)
-#define        VXGE_HW_WRITE_ARB_PENDING_WRARB_STATSB  vxge_mBIT(15)
-#define        VXGE_HW_WRITE_ARB_PENDING_WRARB_INTCTL  vxge_mBIT(19)
-/*0x07380*/    u64     read_arb_pending;
-#define        VXGE_HW_READ_ARB_PENDING_RDARB_WRDMA    vxge_mBIT(3)
-#define        VXGE_HW_READ_ARB_PENDING_RDARB_RTDMA    vxge_mBIT(7)
-#define        VXGE_HW_READ_ARB_PENDING_RDARB_DBLGEN   vxge_mBIT(11)
-/*0x07388*/    u64     dmaif_dmadbl_pending;
-#define        VXGE_HW_DMAIF_DMADBL_PENDING_DMAIF_WRDMA_WR     vxge_mBIT(0)
-#define        VXGE_HW_DMAIF_DMADBL_PENDING_DMAIF_WRDMA_RD     vxge_mBIT(1)
-#define        VXGE_HW_DMAIF_DMADBL_PENDING_DMAIF_RTDMA_WR     vxge_mBIT(2)
-#define        VXGE_HW_DMAIF_DMADBL_PENDING_DMAIF_RTDMA_RD     vxge_mBIT(3)
-#define        VXGE_HW_DMAIF_DMADBL_PENDING_DMAIF_MSG_WR       vxge_mBIT(4)
-#define        VXGE_HW_DMAIF_DMADBL_PENDING_DMAIF_STATS_WR     vxge_mBIT(5)
-#define        VXGE_HW_DMAIF_DMADBL_PENDING_DBLGEN_IN_PROG(val) \
-                                                       vxge_vBIT(val, 13, 51)
-/*0x07390*/    u64     wrcrdtarb_status0_vplane[17];
-#define        VXGE_HW_WRCRDTARB_STATUS0_VPLANE_WRCRDTARB_ABS_AVAIL_P_H(val) \
-                                                       vxge_vBIT(val, 0, 8)
-/*0x07418*/    u64     wrcrdtarb_status1_vplane[17];
-#define        VXGE_HW_WRCRDTARB_STATUS1_VPLANE_WRCRDTARB_ABS_AVAIL_P_D(val) \
-                                                       vxge_vBIT(val, 4, 12)
-       u8      unused07500[0x07500-0x074a0];
-
-/*0x07500*/    u64     mrpcim_general_cfg1;
-#define        VXGE_HW_MRPCIM_GENERAL_CFG1_CLEAR_SERR  vxge_mBIT(7)
-/*0x07508*/    u64     mrpcim_general_cfg2;
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_INS_TX_WR_TD        vxge_mBIT(3)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_INS_TX_RD_TD        vxge_mBIT(7)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_INS_TX_CPL_TD       vxge_mBIT(11)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_INI_TIMEOUT_EN_MWR  vxge_mBIT(15)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_INI_TIMEOUT_EN_MRD  vxge_mBIT(19)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_IGNORE_VPATH_RST_FOR_MSIX   vxge_mBIT(23)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_FLASH_READ_MSB      vxge_mBIT(27)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_DIS_HOST_PIPELINE_WR        vxge_mBIT(31)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_MRPCIM_STATS_ENABLE vxge_mBIT(43)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_MRPCIM_STATS_MAP_TO_VPATH(val) \
-                                                       vxge_vBIT(val, 47, 5)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_EN_BLOCK_MSIX_DUE_TO_SERR   vxge_mBIT(55)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_FORCE_SENDING_INTA  vxge_mBIT(59)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG2_DIS_SWIF_PROT_ON_RDS        vxge_mBIT(63)
-/*0x07510*/    u64     mrpcim_general_cfg3;
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_PROTECTION_CA_OR_UNSUPN     vxge_mBIT(0)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_ILLEGAL_RD_CA_OR_UNSUPN     vxge_mBIT(3)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_RD_BYTE_SWAPEN      vxge_mBIT(7)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_RD_BIT_FLIPEN       vxge_mBIT(11)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_WR_BYTE_SWAPEN      vxge_mBIT(15)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_WR_BIT_FLIPEN       vxge_mBIT(19)
-#define VXGE_HW_MRPCIM_GENERAL_CFG3_MR_MAX_MVFS(val) vxge_vBIT(val, 20, 16)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_MR_MVF_TBL_SIZE(val) \
-                                                       vxge_vBIT(val, 36, 16)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_PF0_SW_RESET_EN     vxge_mBIT(55)
-#define VXGE_HW_MRPCIM_GENERAL_CFG3_REG_MODIFIED_CFG(val) vxge_vBIT(val, 56, 2)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_CPL_ECC_ENABLE_N    vxge_mBIT(59)
-#define        VXGE_HW_MRPCIM_GENERAL_CFG3_BYPASS_DAISY_CHAIN  vxge_mBIT(63)
-/*0x07518*/    u64     mrpcim_stats_start_host_addr;
-#define        VXGE_HW_MRPCIM_STATS_START_HOST_ADDR_MRPCIM_STATS_START_HOST_ADDR(val)\
-                                                       vxge_vBIT(val, 0, 57)
-
-       u8      unused07950[0x07950-0x07520];
-
-/*0x07950*/    u64     rdcrdtarb_cfg0;
-#define VXGE_HW_RDCRDTARB_CFG0_RDA_MAX_OUTSTANDING_RDS(val) \
-                                               vxge_vBIT(val, 18, 6)
-#define VXGE_HW_RDCRDTARB_CFG0_PDA_MAX_OUTSTANDING_RDS(val) \
-                                               vxge_vBIT(val, 26, 6)
-#define VXGE_HW_RDCRDTARB_CFG0_DBLGEN_MAX_OUTSTANDING_RDS(val) \
-                                               vxge_vBIT(val, 34, 6)
-#define VXGE_HW_RDCRDTARB_CFG0_WAIT_CNT(val) vxge_vBIT(val, 48, 4)
-#define VXGE_HW_RDCRDTARB_CFG0_MAX_OUTSTANDING_RDS(val) vxge_vBIT(val, 54, 6)
-#define        VXGE_HW_RDCRDTARB_CFG0_EN_XON   vxge_mBIT(63)
-       u8      unused07be8[0x07be8-0x07958];
-
-/*0x07be8*/    u64     bf_sw_reset;
-#define VXGE_HW_BF_SW_RESET_BF_SW_RESET(val) vxge_vBIT(val, 0, 8)
-/*0x07bf0*/    u64     sw_reset_status;
-#define        VXGE_HW_SW_RESET_STATUS_RESET_CMPLT     vxge_mBIT(7)
-#define        VXGE_HW_SW_RESET_STATUS_INIT_CMPLT      vxge_mBIT(15)
-       u8      unused07d30[0x07d30-0x07bf8];
-
-/*0x07d30*/    u64     mrpcim_debug_stats0;
-#define VXGE_HW_MRPCIM_DEBUG_STATS0_INI_WR_DROP(val) vxge_vBIT(val, 0, 32)
-#define VXGE_HW_MRPCIM_DEBUG_STATS0_INI_RD_DROP(val) vxge_vBIT(val, 32, 32)
-/*0x07d38*/    u64     mrpcim_debug_stats1_vplane[17];
-#define        VXGE_HW_MRPCIM_DEBUG_STATS1_VPLANE_WRCRDTARB_PH_CRDT_DEPLETED(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x07dc0*/    u64     mrpcim_debug_stats2_vplane[17];
-#define        VXGE_HW_MRPCIM_DEBUG_STATS2_VPLANE_WRCRDTARB_PD_CRDT_DEPLETED(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x07e48*/    u64     mrpcim_debug_stats3_vplane[17];
-#define        VXGE_HW_MRPCIM_DEBUG_STATS3_VPLANE_RDCRDTARB_NPH_CRDT_DEPLETED(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x07ed0*/    u64     mrpcim_debug_stats4;
-#define VXGE_HW_MRPCIM_DEBUG_STATS4_INI_WR_VPIN_DROP(val) vxge_vBIT(val, 0, 32)
-#define        VXGE_HW_MRPCIM_DEBUG_STATS4_INI_RD_VPIN_DROP(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x07ed8*/    u64     genstats_count01;
-#define VXGE_HW_GENSTATS_COUNT01_GENSTATS_COUNT1(val) vxge_vBIT(val, 0, 32)
-#define VXGE_HW_GENSTATS_COUNT01_GENSTATS_COUNT0(val) vxge_vBIT(val, 32, 32)
-/*0x07ee0*/    u64     genstats_count23;
-#define VXGE_HW_GENSTATS_COUNT23_GENSTATS_COUNT3(val) vxge_vBIT(val, 0, 32)
-#define VXGE_HW_GENSTATS_COUNT23_GENSTATS_COUNT2(val) vxge_vBIT(val, 32, 32)
-/*0x07ee8*/    u64     genstats_count4;
-#define VXGE_HW_GENSTATS_COUNT4_GENSTATS_COUNT4(val) vxge_vBIT(val, 32, 32)
-/*0x07ef0*/    u64     genstats_count5;
-#define VXGE_HW_GENSTATS_COUNT5_GENSTATS_COUNT5(val) vxge_vBIT(val, 32, 32)
-
-       u8      unused07f08[0x07f08-0x07ef8];
-
-/*0x07f08*/    u64     genstats_cfg[6];
-#define VXGE_HW_GENSTATS_CFG_DTYPE_SEL(val) vxge_vBIT(val, 3, 5)
-#define VXGE_HW_GENSTATS_CFG_CLIENT_NO_SEL(val) vxge_vBIT(val, 9, 3)
-#define VXGE_HW_GENSTATS_CFG_WR_RD_CPL_SEL(val) vxge_vBIT(val, 14, 2)
-#define VXGE_HW_GENSTATS_CFG_VPATH_SEL(val) vxge_vBIT(val, 31, 17)
-/*0x07f38*/    u64     genstat_64bit_cfg;
-#define        VXGE_HW_GENSTAT_64BIT_CFG_EN_FOR_GENSTATS0      vxge_mBIT(3)
-#define        VXGE_HW_GENSTAT_64BIT_CFG_EN_FOR_GENSTATS2      vxge_mBIT(7)
-       u8      unused08000[0x08000-0x07f40];
-/*0x08000*/    u64     gcmg3_int_status;
-#define        VXGE_HW_GCMG3_INT_STATUS_GSTC_ERR0_GSTC0_INT    vxge_mBIT(0)
-#define        VXGE_HW_GCMG3_INT_STATUS_GSTC_ERR1_GSTC1_INT    vxge_mBIT(1)
-#define        VXGE_HW_GCMG3_INT_STATUS_GH2L_ERR0_GH2L0_INT    vxge_mBIT(2)
-#define        VXGE_HW_GCMG3_INT_STATUS_GHSQ_ERR_GH2L1_INT     vxge_mBIT(3)
-#define        VXGE_HW_GCMG3_INT_STATUS_GHSQ_ERR2_GH2L2_INT    vxge_mBIT(4)
-#define        VXGE_HW_GCMG3_INT_STATUS_GH2L_SMERR0_GH2L3_INT  vxge_mBIT(5)
-#define        VXGE_HW_GCMG3_INT_STATUS_GHSQ_ERR3_GH2L4_INT    vxge_mBIT(6)
-/*0x08008*/    u64     gcmg3_int_mask;
-       u8      unused09000[0x09000-0x08010];
-
-/*0x09000*/    u64     g3ifcmd_fb_int_status;
-#define        VXGE_HW_G3IFCMD_FB_INT_STATUS_ERR_G3IF_INT      vxge_mBIT(0)
-/*0x09008*/    u64     g3ifcmd_fb_int_mask;
-/*0x09010*/    u64     g3ifcmd_fb_err_reg;
-#define        VXGE_HW_G3IFCMD_FB_ERR_REG_G3IF_CK_DLL_LOCK     vxge_mBIT(6)
-#define        VXGE_HW_G3IFCMD_FB_ERR_REG_G3IF_SM_ERR  vxge_mBIT(7)
-#define VXGE_HW_G3IFCMD_FB_ERR_REG_G3IF_RWDQS_DLL_LOCK(val) \
-                                               vxge_vBIT(val, 24, 8)
-#define        VXGE_HW_G3IFCMD_FB_ERR_REG_G3IF_IOCAL_FAULT     vxge_mBIT(55)
-/*0x09018*/    u64     g3ifcmd_fb_err_mask;
-/*0x09020*/    u64     g3ifcmd_fb_err_alarm;
-
-       u8      unused09400[0x09400-0x09028];
-
-/*0x09400*/    u64     g3ifcmd_cmu_int_status;
-#define        VXGE_HW_G3IFCMD_CMU_INT_STATUS_ERR_G3IF_INT     vxge_mBIT(0)
-/*0x09408*/    u64     g3ifcmd_cmu_int_mask;
-/*0x09410*/    u64     g3ifcmd_cmu_err_reg;
-#define        VXGE_HW_G3IFCMD_CMU_ERR_REG_G3IF_CK_DLL_LOCK    vxge_mBIT(6)
-#define        VXGE_HW_G3IFCMD_CMU_ERR_REG_G3IF_SM_ERR vxge_mBIT(7)
-#define VXGE_HW_G3IFCMD_CMU_ERR_REG_G3IF_RWDQS_DLL_LOCK(val) \
-                                                       vxge_vBIT(val, 24, 8)
-#define        VXGE_HW_G3IFCMD_CMU_ERR_REG_G3IF_IOCAL_FAULT    vxge_mBIT(55)
-/*0x09418*/    u64     g3ifcmd_cmu_err_mask;
-/*0x09420*/    u64     g3ifcmd_cmu_err_alarm;
-
-       u8      unused09800[0x09800-0x09428];
-
-/*0x09800*/    u64     g3ifcmd_cml_int_status;
-#define        VXGE_HW_G3IFCMD_CML_INT_STATUS_ERR_G3IF_INT     vxge_mBIT(0)
-/*0x09808*/    u64     g3ifcmd_cml_int_mask;
-/*0x09810*/    u64     g3ifcmd_cml_err_reg;
-#define        VXGE_HW_G3IFCMD_CML_ERR_REG_G3IF_CK_DLL_LOCK    vxge_mBIT(6)
-#define        VXGE_HW_G3IFCMD_CML_ERR_REG_G3IF_SM_ERR vxge_mBIT(7)
-#define VXGE_HW_G3IFCMD_CML_ERR_REG_G3IF_RWDQS_DLL_LOCK(val) \
-                                               vxge_vBIT(val, 24, 8)
-#define        VXGE_HW_G3IFCMD_CML_ERR_REG_G3IF_IOCAL_FAULT    vxge_mBIT(55)
-/*0x09818*/    u64     g3ifcmd_cml_err_mask;
-/*0x09820*/    u64     g3ifcmd_cml_err_alarm;
-       u8      unused09b00[0x09b00-0x09828];
-
-/*0x09b00*/    u64     vpath_to_vplane_map[17];
-#define VXGE_HW_VPATH_TO_VPLANE_MAP_VPATH_TO_VPLANE_MAP(val) \
-                                                       vxge_vBIT(val, 3, 5)
-       u8      unused09c30[0x09c30-0x09b88];
-
-/*0x09c30*/    u64     xgxs_cfg_port[2];
-#define VXGE_HW_XGXS_CFG_PORT_SIG_DETECT_FORCE_LOS(val) vxge_vBIT(val, 16, 4)
-#define VXGE_HW_XGXS_CFG_PORT_SIG_DETECT_FORCE_VALID(val) vxge_vBIT(val, 20, 4)
-#define        VXGE_HW_XGXS_CFG_PORT_SEL_INFO_0        vxge_mBIT(27)
-#define VXGE_HW_XGXS_CFG_PORT_SEL_INFO_1(val) vxge_vBIT(val, 29, 3)
-#define VXGE_HW_XGXS_CFG_PORT_TX_LANE0_SKEW(val) vxge_vBIT(val, 32, 4)
-#define VXGE_HW_XGXS_CFG_PORT_TX_LANE1_SKEW(val) vxge_vBIT(val, 36, 4)
-#define VXGE_HW_XGXS_CFG_PORT_TX_LANE2_SKEW(val) vxge_vBIT(val, 40, 4)
-#define VXGE_HW_XGXS_CFG_PORT_TX_LANE3_SKEW(val) vxge_vBIT(val, 44, 4)
-/*0x09c40*/    u64     xgxs_rxber_cfg_port[2];
-#define VXGE_HW_XGXS_RXBER_CFG_PORT_INTERVAL_DUR(val) vxge_vBIT(val, 0, 4)
-#define        VXGE_HW_XGXS_RXBER_CFG_PORT_RXGXS_INTERVAL_CNT(val) \
-                                                       vxge_vBIT(val, 16, 48)
-/*0x09c50*/    u64     xgxs_rxber_status_port[2];
-#define        VXGE_HW_XGXS_RXBER_STATUS_PORT_RXGXS_RXGXS_LANE_A_ERR_CNT(val)  \
-                                                       vxge_vBIT(val, 0, 16)
-#define        VXGE_HW_XGXS_RXBER_STATUS_PORT_RXGXS_RXGXS_LANE_B_ERR_CNT(val)  \
-                                                       vxge_vBIT(val, 16, 16)
-#define        VXGE_HW_XGXS_RXBER_STATUS_PORT_RXGXS_RXGXS_LANE_C_ERR_CNT(val)  \
-                                                       vxge_vBIT(val, 32, 16)
-#define        VXGE_HW_XGXS_RXBER_STATUS_PORT_RXGXS_RXGXS_LANE_D_ERR_CNT(val)  \
-                                                       vxge_vBIT(val, 48, 16)
-/*0x09c60*/    u64     xgxs_status_port[2];
-#define VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_TX_ACTIVITY(val) vxge_vBIT(val, 0, 4)
-#define VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_RX_ACTIVITY(val) vxge_vBIT(val, 4, 4)
-#define        VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_CTC_FIFO_ERR BIT(11)
-#define VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_BYTE_SYNC_LOST(val) \
-                                                       vxge_vBIT(val, 12, 4)
-#define VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_CTC_ERR(val) vxge_vBIT(val, 16, 4)
-#define        VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_ALIGNMENT_ERR        vxge_mBIT(23)
-#define VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_DEC_ERR(val) vxge_vBIT(val, 24, 8)
-#define VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_SKIP_INS_REQ(val) \
-                                                       vxge_vBIT(val, 32, 4)
-#define VXGE_HW_XGXS_STATUS_PORT_XMACJ_PCS_SKIP_DEL_REQ(val) \
-                                                       vxge_vBIT(val, 36, 4)
-/*0x09c70*/    u64     xgxs_pma_reset_port[2];
-#define VXGE_HW_XGXS_PMA_RESET_PORT_SERDES_RESET(val) vxge_vBIT(val, 0, 8)
-       u8      unused09c90[0x09c90-0x09c80];
-
-/*0x09c90*/    u64     xgxs_static_cfg_port[2];
-#define        VXGE_HW_XGXS_STATIC_CFG_PORT_FW_CTRL_SERDES     vxge_mBIT(3)
-       u8      unused09d40[0x09d40-0x09ca0];
-
-/*0x09d40*/    u64     xgxs_info_port[2];
-#define VXGE_HW_XGXS_INFO_PORT_XMACJ_INFO_0(val) vxge_vBIT(val, 0, 32)
-#define VXGE_HW_XGXS_INFO_PORT_XMACJ_INFO_1(val) vxge_vBIT(val, 32, 32)
-/*0x09d50*/    u64     ratemgmt_cfg_port[2];
-#define VXGE_HW_RATEMGMT_CFG_PORT_MODE(val) vxge_vBIT(val, 2, 2)
-#define        VXGE_HW_RATEMGMT_CFG_PORT_RATE  vxge_mBIT(7)
-#define        VXGE_HW_RATEMGMT_CFG_PORT_FIXED_USE_FSM vxge_mBIT(11)
-#define        VXGE_HW_RATEMGMT_CFG_PORT_ANTP_USE_FSM  vxge_mBIT(15)
-#define        VXGE_HW_RATEMGMT_CFG_PORT_ANBE_USE_FSM  vxge_mBIT(19)
-/*0x09d60*/    u64     ratemgmt_status_port[2];
-#define        VXGE_HW_RATEMGMT_STATUS_PORT_RATEMGMT_COMPLETE  vxge_mBIT(3)
-#define        VXGE_HW_RATEMGMT_STATUS_PORT_RATEMGMT_RATE      vxge_mBIT(7)
-#define        VXGE_HW_RATEMGMT_STATUS_PORT_RATEMGMT_MAC_MATCHES_PHY   vxge_mBIT(11)
-       u8      unused09d80[0x09d80-0x09d70];
-
-/*0x09d80*/    u64     ratemgmt_fixed_cfg_port[2];
-#define        VXGE_HW_RATEMGMT_FIXED_CFG_PORT_RESTART vxge_mBIT(7)
-/*0x09d90*/    u64     ratemgmt_antp_cfg_port[2];
-#define        VXGE_HW_RATEMGMT_ANTP_CFG_PORT_RESTART  vxge_mBIT(7)
-#define        VXGE_HW_RATEMGMT_ANTP_CFG_PORT_USE_PREAMBLE_EXT_PHY     vxge_mBIT(11)
-#define        VXGE_HW_RATEMGMT_ANTP_CFG_PORT_USE_ACT_SEL      vxge_mBIT(15)
-#define VXGE_HW_RATEMGMT_ANTP_CFG_PORT_T_RETRY_PHY_QUERY(val) \
-                                                       vxge_vBIT(val, 16, 4)
-#define        VXGE_HW_RATEMGMT_ANTP_CFG_PORT_T_WAIT_MDIO_RESPONSE(val) \
-                                                       vxge_vBIT(val, 20, 4)
-#define        VXGE_HW_RATEMGMT_ANTP_CFG_PORT_T_LDOWN_REAUTO_RESPONSE(val) \
-                                                       vxge_vBIT(val, 24, 4)
-#define        VXGE_HW_RATEMGMT_ANTP_CFG_PORT_ADVERTISE_10G    vxge_mBIT(31)
-#define        VXGE_HW_RATEMGMT_ANTP_CFG_PORT_ADVERTISE_1G     vxge_mBIT(35)
-/*0x09da0*/    u64     ratemgmt_anbe_cfg_port[2];
-#define        VXGE_HW_RATEMGMT_ANBE_CFG_PORT_RESTART  vxge_mBIT(7)
-#define        VXGE_HW_RATEMGMT_ANBE_CFG_PORT_PARALLEL_DETECT_10G_KX4_ENABLE \
-                                                               vxge_mBIT(11)
-#define        VXGE_HW_RATEMGMT_ANBE_CFG_PORT_PARALLEL_DETECT_1G_KX_ENABLE \
-                                                               vxge_mBIT(15)
-#define VXGE_HW_RATEMGMT_ANBE_CFG_PORT_T_SYNC_10G_KX4(val) vxge_vBIT(val, 16, 4)
-#define VXGE_HW_RATEMGMT_ANBE_CFG_PORT_T_SYNC_1G_KX(val) vxge_vBIT(val, 20, 4)
-#define VXGE_HW_RATEMGMT_ANBE_CFG_PORT_T_DME_EXCHANGE(val) vxge_vBIT(val, 24, 4)
-#define        VXGE_HW_RATEMGMT_ANBE_CFG_PORT_ADVERTISE_10G_KX4        vxge_mBIT(31)
-#define        VXGE_HW_RATEMGMT_ANBE_CFG_PORT_ADVERTISE_1G_KX  vxge_mBIT(35)
-/*0x09db0*/    u64     anbe_cfg_port[2];
-#define VXGE_HW_ANBE_CFG_PORT_RESET_CFG_REGS(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_ANBE_CFG_PORT_ALIGN_10G_KX4_OVERRIDE(val) vxge_vBIT(val, 10, 2)
-#define VXGE_HW_ANBE_CFG_PORT_SYNC_1G_KX_OVERRIDE(val) vxge_vBIT(val, 14, 2)
-/*0x09dc0*/    u64     anbe_mgr_ctrl_port[2];
-#define        VXGE_HW_ANBE_MGR_CTRL_PORT_WE   vxge_mBIT(3)
-#define        VXGE_HW_ANBE_MGR_CTRL_PORT_STROBE       vxge_mBIT(7)
-#define VXGE_HW_ANBE_MGR_CTRL_PORT_ADDR(val) vxge_vBIT(val, 15, 9)
-#define VXGE_HW_ANBE_MGR_CTRL_PORT_DATA(val) vxge_vBIT(val, 32, 32)
-       u8      unused09de0[0x09de0-0x09dd0];
-
-/*0x09de0*/    u64     anbe_fw_mstr_port[2];
-#define        VXGE_HW_ANBE_FW_MSTR_PORT_CONNECT_BEAN_TO_SERDES        vxge_mBIT(3)
-#define        VXGE_HW_ANBE_FW_MSTR_PORT_TX_ZEROES_TO_SERDES   vxge_mBIT(7)
-/*0x09df0*/    u64     anbe_hwfsm_gen_status_port[2];
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_CHOSE_10G_KX4_USING_PD \
-                                                       vxge_mBIT(3)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_CHOSE_10G_KX4_USING_DME \
-                                                       vxge_mBIT(7)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_CHOSE_1G_KX_USING_PD \
-                                                       vxge_mBIT(11)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_CHOSE_1G_KX_USING_DME \
-                                                       vxge_mBIT(15)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_ANBEFSM_STATE(val)  \
-                                                       vxge_vBIT(val, 18, 6)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_BEAN_NEXT_PAGE_RECEIVED \
-                                                       vxge_mBIT(27)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_BEAN_BASE_PAGE_RECEIVED \
-                                                       vxge_mBIT(35)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_BEAN_AUTONEG_COMPLETE \
-                                                       vxge_mBIT(39)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_UNEXPECTED_NP_BEFORE_BP \
-                                                       vxge_mBIT(43)
-#define        \
-VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_UNEXPECTED_AN_COMPLETE_BEFORE_BP \
-                                                       vxge_mBIT(47)
-#define        \
-VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_UNEXPECTED_AN_COMPLETE_BEFORE_NP \
-                                                       vxge_mBIT(51)
-#define        \
-VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_UNEXPECTED_MODE_WHEN_AN_COMPLETE \
-                                                       vxge_mBIT(55)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_COUNT_BP(val) \
-                                                       vxge_vBIT(val, 56, 4)
-#define        VXGE_HW_ANBE_HWFSM_GEN_STATUS_PORT_RATEMGMT_COUNT_NP(val) \
-                                                       vxge_vBIT(val, 60, 4)
-/*0x09e00*/    u64     anbe_hwfsm_bp_status_port[2];
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_FEC_ENABLE \
-                                                       vxge_mBIT(32)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_FEC_ABILITY \
-                                                       vxge_mBIT(33)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_10G_KR_CAPABLE \
-                                                       vxge_mBIT(40)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_10G_KX4_CAPABLE \
-                                                       vxge_mBIT(41)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_1G_KX_CAPABLE \
-                                                       vxge_mBIT(42)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_TX_NONCE(val)     \
-                                                       vxge_vBIT(val, 43, 5)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_NP        vxge_mBIT(48)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_ACK       vxge_mBIT(49)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_REMOTE_FAULT \
-                                                       vxge_mBIT(50)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_ASM_DIR   vxge_mBIT(51)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_PAUSE     vxge_mBIT(53)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_ECHOED_NONCE(val) \
-                                                       vxge_vBIT(val, 54, 5)
-#define        VXGE_HW_ANBE_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_SELECTOR_FIELD(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x09e10*/    u64     anbe_hwfsm_np_status_port[2];
-#define        VXGE_HW_ANBE_HWFSM_NP_STATUS_PORT_RATEMGMT_NP_BITS_47_TO_32(val) \
-                                                       vxge_vBIT(val, 16, 16)
-#define        VXGE_HW_ANBE_HWFSM_NP_STATUS_PORT_RATEMGMT_NP_BITS_31_TO_0(val) \
-                                                       vxge_vBIT(val, 32, 32)
-       u8      unused09e30[0x09e30-0x09e20];
-
-/*0x09e30*/    u64     antp_gen_cfg_port[2];
-/*0x09e40*/    u64     antp_hwfsm_gen_status_port[2];
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_CHOSE_10G   vxge_mBIT(3)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_CHOSE_1G    vxge_mBIT(7)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_ANTPFSM_STATE(val)  \
-                                                       vxge_vBIT(val, 10, 6)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_AUTONEG_COMPLETE \
-                                                               vxge_mBIT(23)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_UNEXPECTED_NO_LP_XNP \
-                                                       vxge_mBIT(27)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_GOT_LP_XNP  vxge_mBIT(31)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_UNEXPECTED_MESSAGE_CODE \
-                                                       vxge_mBIT(35)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_UNEXPECTED_NO_HCD \
-                                                       vxge_mBIT(43)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_FOUND_HCD   vxge_mBIT(47)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_UNEXPECTED_INVALID_RATE \
-                                                       vxge_mBIT(51)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_VALID_RATE  vxge_mBIT(55)
-#define        VXGE_HW_ANTP_HWFSM_GEN_STATUS_PORT_RATEMGMT_PERSISTENT_LDOWN \
-                                                       vxge_mBIT(59)
-/*0x09e50*/    u64     antp_hwfsm_bp_status_port[2];
-#define        VXGE_HW_ANTP_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_NP        vxge_mBIT(0)
-#define        VXGE_HW_ANTP_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_ACK       vxge_mBIT(1)
-#define        VXGE_HW_ANTP_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_RF        vxge_mBIT(2)
-#define        VXGE_HW_ANTP_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_XNP       vxge_mBIT(3)
-#define        VXGE_HW_ANTP_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_ABILITY_FIELD(val) \
-                                                       vxge_vBIT(val, 4, 7)
-#define        VXGE_HW_ANTP_HWFSM_BP_STATUS_PORT_RATEMGMT_BP_SELECTOR_FIELD(val) \
-                                                       vxge_vBIT(val, 11, 5)
-/*0x09e60*/    u64     antp_hwfsm_xnp_status_port[2];
-#define        VXGE_HW_ANTP_HWFSM_XNP_STATUS_PORT_RATEMGMT_XNP_NP      vxge_mBIT(0)
-#define        VXGE_HW_ANTP_HWFSM_XNP_STATUS_PORT_RATEMGMT_XNP_ACK     vxge_mBIT(1)
-#define        VXGE_HW_ANTP_HWFSM_XNP_STATUS_PORT_RATEMGMT_XNP_MP      vxge_mBIT(2)
-#define        VXGE_HW_ANTP_HWFSM_XNP_STATUS_PORT_RATEMGMT_XNP_ACK2    vxge_mBIT(3)
-#define        VXGE_HW_ANTP_HWFSM_XNP_STATUS_PORT_RATEMGMT_XNP_TOGGLE  vxge_mBIT(4)
-#define        VXGE_HW_ANTP_HWFSM_XNP_STATUS_PORT_RATEMGMT_XNP_MESSAGE_CODE(val) \
-                                                       vxge_vBIT(val, 5, 11)
-#define        VXGE_HW_ANTP_HWFSM_XNP_STATUS_PORT_RATEMGMT_XNP_UNF_CODE_FIELD1(val) \
-                                                       vxge_vBIT(val, 16, 16)
-#define        VXGE_HW_ANTP_HWFSM_XNP_STATUS_PORT_RATEMGMT_XNP_UNF_CODE_FIELD2(val) \
-                                                       vxge_vBIT(val, 32, 16)
-/*0x09e70*/    u64     mdio_mgr_access_port[2];
-#define        VXGE_HW_MDIO_MGR_ACCESS_PORT_STROBE_ONE BIT(3)
-#define VXGE_HW_MDIO_MGR_ACCESS_PORT_OP_TYPE(val) vxge_vBIT(val, 5, 3)
-#define VXGE_HW_MDIO_MGR_ACCESS_PORT_DEVAD(val) vxge_vBIT(val, 11, 5)
-#define VXGE_HW_MDIO_MGR_ACCESS_PORT_ADDR(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_MDIO_MGR_ACCESS_PORT_DATA(val) vxge_vBIT(val, 32, 16)
-#define VXGE_HW_MDIO_MGR_ACCESS_PORT_ST_PATTERN(val) vxge_vBIT(val, 49, 2)
-#define        VXGE_HW_MDIO_MGR_ACCESS_PORT_PREAMBLE   vxge_mBIT(51)
-#define VXGE_HW_MDIO_MGR_ACCESS_PORT_PRTAD(val) vxge_vBIT(val, 55, 5)
-#define        VXGE_HW_MDIO_MGR_ACCESS_PORT_STROBE_TWO vxge_mBIT(63)
-       u8      unused0a200[0x0a200-0x09e80];
-/*0x0a200*/    u64     xmac_vsport_choices_vh[17];
-#define VXGE_HW_XMAC_VSPORT_CHOICES_VH_VSPORT_VECTOR(val) vxge_vBIT(val, 0, 17)
-       u8      unused0a400[0x0a400-0x0a288];
-
-/*0x0a400*/    u64     rx_thresh_cfg_vp[17];
-#define VXGE_HW_RX_THRESH_CFG_VP_PAUSE_LOW_THR(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_RX_THRESH_CFG_VP_PAUSE_HIGH_THR(val) vxge_vBIT(val, 8, 8)
-#define VXGE_HW_RX_THRESH_CFG_VP_RED_THR_0(val) vxge_vBIT(val, 16, 8)
-#define VXGE_HW_RX_THRESH_CFG_VP_RED_THR_1(val) vxge_vBIT(val, 24, 8)
-#define VXGE_HW_RX_THRESH_CFG_VP_RED_THR_2(val) vxge_vBIT(val, 32, 8)
-#define VXGE_HW_RX_THRESH_CFG_VP_RED_THR_3(val) vxge_vBIT(val, 40, 8)
-       u8      unused0ac90[0x0ac90-0x0a488];
-} __packed;
-
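For orientation while reading these register maps: every field macro here is built from the vxge bit helpers, which number bits big-endian (bit 0 is the most-significant bit of the 64-bit register word). A minimal sketch of those helpers, as defined near the top of vxge-reg.h and reproduced here only for illustration — they are not part of this hunk:

	/* Illustrative copy -- the authoritative definitions live in
	 * vxge-reg.h.  Bit 0 is the MSB of the 64-bit register word. */
	#define vxge_mBIT(loc)          (0x8000000000000000ULL >> (loc))
	#define vxge_vBIT(val, loc, sz) (((u64)(val)) << (64 - (loc) - (sz)))
	#define vxge_bVALn(bits, loc, n) \
		((((u64)(bits)) >> (64 - (loc) - (n))) & ((0x1ULL << (n)) - 1))

For example, VXGE_HW_PRC_CFG4_RING_MODE(val) below expands to vxge_vBIT(val, 14, 2), placing a 2-bit value at big-endian bits 14..15 of the register, i.e. a left shift by 48.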
-/*VXGE_HW_SRPCIM_REGS_H*/
-struct vxge_hw_srpcim_reg {
-
-/*0x00000*/    u64     tim_mr2sr_resource_assignment_vh;
-#define        VXGE_HW_TIM_MR2SR_RESOURCE_ASSIGNMENT_VH_BMAP_ROOT(val) \
-                                                       vxge_vBIT(val, 0, 32)
-       u8      unused00100[0x00100-0x00008];
-
-/*0x00100*/    u64     srpcim_pcipif_int_status;
-#define        VXGE_HW_SRPCIM_PCIPIF_INT_STATUS_MRPCIM_MSG_MRPCIM_MSG_INT      BIT(3)
-#define        VXGE_HW_SRPCIM_PCIPIF_INT_STATUS_VPATH_MSG_VPATH_MSG_INT        BIT(7)
-#define        VXGE_HW_SRPCIM_PCIPIF_INT_STATUS_SRPCIM_SPARE_R1_SRPCIM_SPARE_R1_INT \
-                                                                       BIT(11)
-/*0x00108*/    u64     srpcim_pcipif_int_mask;
-/*0x00110*/    u64     mrpcim_msg_reg;
-#define        VXGE_HW_MRPCIM_MSG_REG_SWIF_MRPCIM_TO_SRPCIM_RMSG_INT   BIT(3)
-/*0x00118*/    u64     mrpcim_msg_mask;
-/*0x00120*/    u64     mrpcim_msg_alarm;
-/*0x00128*/    u64     vpath_msg_reg;
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH0_TO_SRPCIM_RMSG_INT    BIT(0)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH1_TO_SRPCIM_RMSG_INT    BIT(1)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH2_TO_SRPCIM_RMSG_INT    BIT(2)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH3_TO_SRPCIM_RMSG_INT    BIT(3)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH4_TO_SRPCIM_RMSG_INT    BIT(4)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH5_TO_SRPCIM_RMSG_INT    BIT(5)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH6_TO_SRPCIM_RMSG_INT    BIT(6)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH7_TO_SRPCIM_RMSG_INT    BIT(7)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH8_TO_SRPCIM_RMSG_INT    BIT(8)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH9_TO_SRPCIM_RMSG_INT    BIT(9)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH10_TO_SRPCIM_RMSG_INT   BIT(10)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH11_TO_SRPCIM_RMSG_INT   BIT(11)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH12_TO_SRPCIM_RMSG_INT   BIT(12)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH13_TO_SRPCIM_RMSG_INT   BIT(13)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH14_TO_SRPCIM_RMSG_INT   BIT(14)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH15_TO_SRPCIM_RMSG_INT   BIT(15)
-#define        VXGE_HW_VPATH_MSG_REG_SWIF_VPATH16_TO_SRPCIM_RMSG_INT   BIT(16)
-/*0x00130*/    u64     vpath_msg_mask;
-/*0x00138*/    u64     vpath_msg_alarm;
-       u8      unused00160[0x00160-0x00140];
-
-/*0x00160*/    u64     srpcim_to_mrpcim_wmsg;
-#define        VXGE_HW_SRPCIM_TO_MRPCIM_WMSG_SRPCIM_TO_MRPCIM_WMSG(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x00168*/    u64     srpcim_to_mrpcim_wmsg_trig;
-#define        VXGE_HW_SRPCIM_TO_MRPCIM_WMSG_TRIG_SRPCIM_TO_MRPCIM_WMSG_TRIG   BIT(0)
-/*0x00170*/    u64     mrpcim_to_srpcim_rmsg;
-#define        VXGE_HW_MRPCIM_TO_SRPCIM_RMSG_SWIF_MRPCIM_TO_SRPCIM_RMSG(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x00178*/    u64     vpath_to_srpcim_rmsg_sel;
-#define        VXGE_HW_VPATH_TO_SRPCIM_RMSG_SEL_VPATH_TO_SRPCIM_RMSG_SEL(val) \
-                                                       vxge_vBIT(val, 0, 5)
-/*0x00180*/    u64     vpath_to_srpcim_rmsg;
-#define        VXGE_HW_VPATH_TO_SRPCIM_RMSG_SWIF_VPATH_TO_SRPCIM_RMSG(val) \
-                                                       vxge_vBIT(val, 0, 64)
-       u8      unused00200[0x00200-0x00188];
-
-/*0x00200*/    u64     srpcim_general_int_status;
-#define        VXGE_HW_SRPCIM_GENERAL_INT_STATUS_PIC_INT       BIT(0)
-#define        VXGE_HW_SRPCIM_GENERAL_INT_STATUS_PCI_INT       BIT(3)
-#define        VXGE_HW_SRPCIM_GENERAL_INT_STATUS_XMAC_INT      BIT(7)
-       u8      unused00210[0x00210-0x00208];
-
-/*0x00210*/    u64     srpcim_general_int_mask;
-#define        VXGE_HW_SRPCIM_GENERAL_INT_MASK_PIC_INT BIT(0)
-#define        VXGE_HW_SRPCIM_GENERAL_INT_MASK_PCI_INT BIT(3)
-#define        VXGE_HW_SRPCIM_GENERAL_INT_MASK_XMAC_INT        BIT(7)
-       u8      unused00220[0x00220-0x00218];
-
-/*0x00220*/    u64     srpcim_ppif_int_status;
-
-/*0x00228*/    u64     srpcim_ppif_int_mask;
-/*0x00230*/    u64     srpcim_gen_errors_reg;
-#define        VXGE_HW_SRPCIM_GEN_ERRORS_REG_PCICONFIG_PF_STATUS_ERR   BIT(3)
-#define        VXGE_HW_SRPCIM_GEN_ERRORS_REG_PCICONFIG_PF_UNCOR_ERR    BIT(7)
-#define        VXGE_HW_SRPCIM_GEN_ERRORS_REG_PCICONFIG_PF_COR_ERR      BIT(11)
-#define        VXGE_HW_SRPCIM_GEN_ERRORS_REG_INTCTRL_SCHED_INT BIT(15)
-#define        VXGE_HW_SRPCIM_GEN_ERRORS_REG_INI_SERR_DET      BIT(19)
-#define        VXGE_HW_SRPCIM_GEN_ERRORS_REG_TGT_PF_ILLEGAL_ACCESS     BIT(23)
-/*0x00238*/    u64     srpcim_gen_errors_mask;
-/*0x00240*/    u64     srpcim_gen_errors_alarm;
-/*0x00248*/    u64     mrpcim_to_srpcim_alarm_reg;
-#define        VXGE_HW_MRPCIM_TO_SRPCIM_ALARM_REG_PPIF_MRPCIM_TO_SRPCIM_ALARM  BIT(3)
-/*0x00250*/    u64     mrpcim_to_srpcim_alarm_mask;
-/*0x00258*/    u64     mrpcim_to_srpcim_alarm_alarm;
-/*0x00260*/    u64     vpath_to_srpcim_alarm_reg;
-
-/*0x00268*/    u64     vpath_to_srpcim_alarm_mask;
-/*0x00270*/    u64     vpath_to_srpcim_alarm_alarm;
-       u8      unused00280[0x00280-0x00278];
-
-/*0x00280*/    u64     pf_sw_reset;
-#define VXGE_HW_PF_SW_RESET_PF_SW_RESET(val) vxge_vBIT(val, 0, 8)
-/*0x00288*/    u64     srpcim_general_cfg1;
-#define        VXGE_HW_SRPCIM_GENERAL_CFG1_BOOT_BYTE_SWAPEN    BIT(19)
-#define        VXGE_HW_SRPCIM_GENERAL_CFG1_BOOT_BIT_FLIPEN     BIT(23)
-#define        VXGE_HW_SRPCIM_GENERAL_CFG1_MSIX_ADDR_SWAPEN    BIT(27)
-#define        VXGE_HW_SRPCIM_GENERAL_CFG1_MSIX_ADDR_FLIPEN    BIT(31)
-#define        VXGE_HW_SRPCIM_GENERAL_CFG1_MSIX_DATA_SWAPEN    BIT(35)
-#define        VXGE_HW_SRPCIM_GENERAL_CFG1_MSIX_DATA_FLIPEN    BIT(39)
-/*0x00290*/    u64     srpcim_interrupt_cfg1;
-#define VXGE_HW_SRPCIM_INTERRUPT_CFG1_ALARM_MAP_TO_MSG(val) vxge_vBIT(val, 1, 7)
-#define VXGE_HW_SRPCIM_INTERRUPT_CFG1_TRAFFIC_CLASS(val) vxge_vBIT(val, 9, 3)
-       u8      unused002a8[0x002a8-0x00298];
-
-/*0x002a8*/    u64     srpcim_clear_msix_mask;
-#define        VXGE_HW_SRPCIM_CLEAR_MSIX_MASK_SRPCIM_CLEAR_MSIX_MASK   BIT(0)
-/*0x002b0*/    u64     srpcim_set_msix_mask;
-#define        VXGE_HW_SRPCIM_SET_MSIX_MASK_SRPCIM_SET_MSIX_MASK       BIT(0)
-/*0x002b8*/    u64     srpcim_clr_msix_one_shot;
-#define        VXGE_HW_SRPCIM_CLR_MSIX_ONE_SHOT_SRPCIM_CLR_MSIX_ONE_SHOT       BIT(0)
-/*0x002c0*/    u64     srpcim_rst_in_prog;
-#define        VXGE_HW_SRPCIM_RST_IN_PROG_SRPCIM_RST_IN_PROG   BIT(7)
-/*0x002c8*/    u64     srpcim_reg_modified;
-#define        VXGE_HW_SRPCIM_REG_MODIFIED_SRPCIM_REG_MODIFIED BIT(7)
-/*0x002d0*/    u64     tgt_pf_illegal_access;
-#define VXGE_HW_TGT_PF_ILLEGAL_ACCESS_SWIF_REGION(val) vxge_vBIT(val, 1, 7)
-/*0x002d8*/    u64     srpcim_msix_status;
-#define        VXGE_HW_SRPCIM_MSIX_STATUS_INTCTL_SRPCIM_MSIX_MASK      BIT(3)
-#define        VXGE_HW_SRPCIM_MSIX_STATUS_INTCTL_SRPCIM_MSIX_PENDING_VECTOR    BIT(7)
-       u8      unused00880[0x00880-0x002e0];
-
-/*0x00880*/    u64     xgmac_sr_int_status;
-#define        VXGE_HW_XGMAC_SR_INT_STATUS_ASIC_NTWK_SR_ERR_ASIC_NTWK_SR_INT   BIT(3)
-/*0x00888*/    u64     xgmac_sr_int_mask;
-/*0x00890*/    u64     asic_ntwk_sr_err_reg;
-#define        VXGE_HW_ASIC_NTWK_SR_ERR_REG_XMACJ_NTWK_SUSTAINED_FAULT BIT(3)
-#define        VXGE_HW_ASIC_NTWK_SR_ERR_REG_XMACJ_NTWK_SUSTAINED_OK    BIT(7)
-#define        VXGE_HW_ASIC_NTWK_SR_ERR_REG_XMACJ_NTWK_SUSTAINED_FAULT_OCCURRED \
-                                                                       BIT(11)
-#define        VXGE_HW_ASIC_NTWK_SR_ERR_REG_XMACJ_NTWK_SUSTAINED_OK_OCCURRED   BIT(15)
-/*0x00898*/    u64     asic_ntwk_sr_err_mask;
-/*0x008a0*/    u64     asic_ntwk_sr_err_alarm;
-       u8      unused008c0[0x008c0-0x008a8];
-
-/*0x008c0*/    u64     xmac_vsport_choices_sr_clone;
-#define        VXGE_HW_XMAC_VSPORT_CHOICES_SR_CLONE_VSPORT_VECTOR(val) \
-                                                       vxge_vBIT(val, 0, 17)
-       u8      unused00900[0x00900-0x008c8];
-
-/*0x00900*/    u64     mr_rqa_top_prty_for_vh;
-#define        VXGE_HW_MR_RQA_TOP_PRTY_FOR_VH_RQA_TOP_PRTY_FOR_VH(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00908*/    u64     umq_vh_data_list_empty;
-#define        VXGE_HW_UMQ_VH_DATA_LIST_EMPTY_ROCRC_UMQ_VH_DATA_LIST_EMPTY \
-                                                       BIT(0)
-/*0x00910*/    u64     wde_cfg;
-#define        VXGE_HW_WDE_CFG_NS0_FORCE_MWB_START     BIT(0)
-#define        VXGE_HW_WDE_CFG_NS0_FORCE_MWB_END       BIT(1)
-#define        VXGE_HW_WDE_CFG_NS0_FORCE_QB_START      BIT(2)
-#define        VXGE_HW_WDE_CFG_NS0_FORCE_QB_END        BIT(3)
-#define        VXGE_HW_WDE_CFG_NS0_FORCE_MPSB_START    BIT(4)
-#define        VXGE_HW_WDE_CFG_NS0_FORCE_MPSB_END      BIT(5)
-#define        VXGE_HW_WDE_CFG_NS0_MWB_OPT_EN  BIT(6)
-#define        VXGE_HW_WDE_CFG_NS0_QB_OPT_EN   BIT(7)
-#define        VXGE_HW_WDE_CFG_NS0_MPSB_OPT_EN BIT(8)
-#define        VXGE_HW_WDE_CFG_NS1_FORCE_MWB_START     BIT(9)
-#define        VXGE_HW_WDE_CFG_NS1_FORCE_MWB_END       BIT(10)
-#define        VXGE_HW_WDE_CFG_NS1_FORCE_QB_START      BIT(11)
-#define        VXGE_HW_WDE_CFG_NS1_FORCE_QB_END        BIT(12)
-#define        VXGE_HW_WDE_CFG_NS1_FORCE_MPSB_START    BIT(13)
-#define        VXGE_HW_WDE_CFG_NS1_FORCE_MPSB_END      BIT(14)
-#define        VXGE_HW_WDE_CFG_NS1_MWB_OPT_EN  BIT(15)
-#define        VXGE_HW_WDE_CFG_NS1_QB_OPT_EN   BIT(16)
-#define        VXGE_HW_WDE_CFG_NS1_MPSB_OPT_EN BIT(17)
-#define        VXGE_HW_WDE_CFG_DISABLE_QPAD_FOR_UNALIGNED_ADDR BIT(19)
-#define VXGE_HW_WDE_CFG_ALIGNMENT_PREFERENCE(val) vxge_vBIT(val, 30, 2)
-#define VXGE_HW_WDE_CFG_MEM_WORD_SIZE(val) vxge_vBIT(val, 46, 2)
-
-} __packed;
-
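The unusedXXXXX byte arrays in these structs are padding that keeps every field at the absolute offset given in its /*0x.....*/ comment. A hypothetical build-time check of that invariant for the srpcim map above (illustrative only, not code from the driver):

	#include <stddef.h>

	/* Hypothetical: verify pf_sw_reset sits at its commented
	 * offset 0x00280 within struct vxge_hw_srpcim_reg. */
	_Static_assert(offsetof(struct vxge_hw_srpcim_reg, pf_sw_reset) == 0x00280,
		       "vxge_hw_srpcim_reg padding is broken");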
-/*VXGE_HW_VPMGMT_REGS_H*/
-struct vxge_hw_vpmgmt_reg {
-
-       u8      unused00040[0x00040-0x00000];
-
-/*0x00040*/    u64     vpath_to_func_map_cfg1;
-#define        VXGE_HW_VPATH_TO_FUNC_MAP_CFG1_VPATH_TO_FUNC_MAP_CFG1(val) \
-                                                       vxge_vBIT(val, 3, 5)
-/*0x00048*/    u64     vpath_is_first;
-#define        VXGE_HW_VPATH_IS_FIRST_VPATH_IS_FIRST   vxge_mBIT(3)
-/*0x00050*/    u64     srpcim_to_vpath_wmsg;
-#define        VXGE_HW_SRPCIM_TO_VPATH_WMSG_SRPCIM_TO_VPATH_WMSG(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x00058*/    u64     srpcim_to_vpath_wmsg_trig;
-#define        VXGE_HW_SRPCIM_TO_VPATH_WMSG_TRIG_SRPCIM_TO_VPATH_WMSG_TRIG \
-                                                               vxge_mBIT(0)
-       u8      unused00100[0x00100-0x00060];
-
-/*0x00100*/    u64     tim_vpath_assignment;
-#define VXGE_HW_TIM_VPATH_ASSIGNMENT_BMAP_ROOT(val) vxge_vBIT(val, 0, 32)
-       u8      unused00140[0x00140-0x00108];
-
-/*0x00140*/    u64     rqa_top_prty_for_vp;
-#define VXGE_HW_RQA_TOP_PRTY_FOR_VP_RQA_TOP_PRTY_FOR_VP(val) \
-                                                       vxge_vBIT(val, 59, 5)
-       u8      unused001c0[0x001c0-0x00148];
-
-/*0x001c0*/    u64     rxmac_rx_pa_cfg0_vpmgmt_clone;
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_IGNORE_FRAME_ERR  vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_SUPPORT_SNAP_AB_N vxge_mBIT(7)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_SEARCH_FOR_HAO    vxge_mBIT(18)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_SUPPORT_MOBILE_IPV6_HDRS \
-                                                               vxge_mBIT(19)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_IPV6_STOP_SEARCHING \
-                                                               vxge_mBIT(23)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_NO_PS_IF_UNKNOWN  vxge_mBIT(27)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_SEARCH_FOR_ETYPE  vxge_mBIT(35)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_TOSS_ANY_FRM_IF_L3_CSUM_ERR \
-                                                               vxge_mBIT(39)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_TOSS_OFFLD_FRM_IF_L3_CSUM_ERR \
-                                                               vxge_mBIT(43)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_TOSS_ANY_FRM_IF_L4_CSUM_ERR \
-                                                               vxge_mBIT(47)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_TOSS_OFFLD_FRM_IF_L4_CSUM_ERR \
-                                                               vxge_mBIT(51)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_TOSS_ANY_FRM_IF_RPA_ERR \
-                                                               vxge_mBIT(55)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_TOSS_OFFLD_FRM_IF_RPA_ERR \
-                                                               vxge_mBIT(59)
-#define        VXGE_HW_RXMAC_RX_PA_CFG0_VPMGMT_CLONE_JUMBO_SNAP_EN     vxge_mBIT(63)
-/*0x001c8*/    u64     rts_mgr_cfg0_vpmgmt_clone;
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_RTS_DP_SP_PRIORITY    vxge_mBIT(3)
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_FLEX_L4PRTCL_VALUE(val) \
-                                                       vxge_vBIT(val, 24, 8)
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_ICMP_TRASH    vxge_mBIT(35)
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_TCPSYN_TRASH  vxge_mBIT(39)
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_ZL4PYLD_TRASH vxge_mBIT(43)
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_L4PRTCL_TCP_TRASH     vxge_mBIT(47)
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_L4PRTCL_UDP_TRASH     vxge_mBIT(51)
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_L4PRTCL_FLEX_TRASH    vxge_mBIT(55)
-#define        VXGE_HW_RTS_MGR_CFG0_VPMGMT_CLONE_IPFRAG_TRASH  vxge_mBIT(59)
-/*0x001d0*/    u64     rts_mgr_criteria_priority_vpmgmt_clone;
-#define        VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_ETYPE(val) \
-                                                       vxge_vBIT(val, 5, 3)
-#define        VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_ICMP_TCPSYN(val) \
-                                                       vxge_vBIT(val, 9, 3)
-#define        VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_L4PN(val) \
-                                                       vxge_vBIT(val, 13, 3)
-#define        VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_RANGE_L4PN(val) \
-                                                       vxge_vBIT(val, 17, 3)
-#define        VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_RTH_IT(val) \
-                                                       vxge_vBIT(val, 21, 3)
-#define VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_DS(val) \
-                                                       vxge_vBIT(val, 25, 3)
-#define        VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_QOS(val) \
-                                                       vxge_vBIT(val, 29, 3)
-#define        VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_ZL4PYLD(val) \
-                                                       vxge_vBIT(val, 33, 3)
-#define        VXGE_HW_RTS_MGR_CRITERIA_PRIORITY_VPMGMT_CLONE_L4PRTCL(val) \
-                                                       vxge_vBIT(val, 37, 3)
-/*0x001d8*/    u64     rxmac_cfg0_port_vpmgmt_clone[3];
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_RMAC_EN    vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_STRIP_FCS  vxge_mBIT(7)
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_DISCARD_PFRM       vxge_mBIT(11)
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_IGNORE_FCS_ERR     vxge_mBIT(15)
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_IGNORE_LONG_ERR    vxge_mBIT(19)
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_IGNORE_USIZED_ERR  vxge_mBIT(23)
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_IGNORE_LEN_MISMATCH \
-                                                               vxge_mBIT(27)
-#define        VXGE_HW_RXMAC_CFG0_PORT_VPMGMT_CLONE_MAX_PYLD_LEN(val) \
-                                                       vxge_vBIT(val, 50, 14)
-/*0x001f0*/    u64     rxmac_pause_cfg_port_vpmgmt_clone[3];
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_GEN_EN        vxge_mBIT(3)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_RCV_EN        vxge_mBIT(7)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_ACCEL_SEND(val) \
-                                                       vxge_vBIT(val, 9, 3)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_DUAL_THR      vxge_mBIT(15)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_HIGH_PTIME(val) \
-                                                       vxge_vBIT(val, 20, 16)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_IGNORE_PF_FCS_ERR \
-                                                               vxge_mBIT(39)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_IGNORE_PF_LEN_ERR \
-                                                               vxge_mBIT(43)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_LIMITER_EN    vxge_mBIT(47)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_MAX_LIMIT(val) \
-                                                       vxge_vBIT(val, 48, 8)
-#define        VXGE_HW_RXMAC_PAUSE_CFG_PORT_VPMGMT_CLONE_PERMIT_RATEMGMT_CTRL \
-                                                       vxge_mBIT(59)
-       u8      unused00240[0x00240-0x00208];
-
-/*0x00240*/    u64     xmac_vsport_choices_vp;
-#define VXGE_HW_XMAC_VSPORT_CHOICES_VP_VSPORT_VECTOR(val) vxge_vBIT(val, 0, 17)
-       u8      unused00260[0x00260-0x00248];
-
-/*0x00260*/    u64     xgmac_gen_status_vpmgmt_clone;
-#define        VXGE_HW_XGMAC_GEN_STATUS_VPMGMT_CLONE_XMACJ_NTWK_OK     vxge_mBIT(3)
-#define        VXGE_HW_XGMAC_GEN_STATUS_VPMGMT_CLONE_XMACJ_NTWK_DATA_RATE \
-                                                               vxge_mBIT(11)
-/*0x00268*/    u64     xgmac_status_port_vpmgmt_clone[2];
-#define        VXGE_HW_XGMAC_STATUS_PORT_VPMGMT_CLONE_RMAC_REMOTE_FAULT \
-                                                               vxge_mBIT(3)
-#define        VXGE_HW_XGMAC_STATUS_PORT_VPMGMT_CLONE_RMAC_LOCAL_FAULT vxge_mBIT(7)
-#define        VXGE_HW_XGMAC_STATUS_PORT_VPMGMT_CLONE_XMACJ_MAC_PHY_LAYER_AVAIL \
-                                                               vxge_mBIT(11)
-#define        VXGE_HW_XGMAC_STATUS_PORT_VPMGMT_CLONE_XMACJ_PORT_OK    vxge_mBIT(15)
-/*0x00278*/    u64     xmac_gen_cfg_vpmgmt_clone;
-#define        VXGE_HW_XMAC_GEN_CFG_VPMGMT_CLONE_RATEMGMT_MAC_RATE_SEL(val) \
-                                                       vxge_vBIT(val, 2, 2)
-#define        VXGE_HW_XMAC_GEN_CFG_VPMGMT_CLONE_TX_HEAD_DROP_WHEN_FAULT \
-                                                       vxge_mBIT(7)
-#define        VXGE_HW_XMAC_GEN_CFG_VPMGMT_CLONE_FAULT_BEHAVIOUR       vxge_mBIT(27)
-#define VXGE_HW_XMAC_GEN_CFG_VPMGMT_CLONE_PERIOD_NTWK_UP(val) \
-                                                       vxge_vBIT(val, 28, 4)
-#define        VXGE_HW_XMAC_GEN_CFG_VPMGMT_CLONE_PERIOD_NTWK_DOWN(val) \
-                                                       vxge_vBIT(val, 32, 4)
-/*0x00280*/    u64     xmac_timestamp_vpmgmt_clone;
-#define        VXGE_HW_XMAC_TIMESTAMP_VPMGMT_CLONE_EN  vxge_mBIT(3)
-#define VXGE_HW_XMAC_TIMESTAMP_VPMGMT_CLONE_USE_LINK_ID(val) \
-                                                       vxge_vBIT(val, 6, 2)
-#define VXGE_HW_XMAC_TIMESTAMP_VPMGMT_CLONE_INTERVAL(val) vxge_vBIT(val, 12, 4)
-#define        VXGE_HW_XMAC_TIMESTAMP_VPMGMT_CLONE_TIMER_RESTART       vxge_mBIT(19)
-#define        VXGE_HW_XMAC_TIMESTAMP_VPMGMT_CLONE_XMACJ_ROLLOVER_CNT(val) \
-                                                       vxge_vBIT(val, 32, 16)
-/*0x00288*/    u64     xmac_stats_gen_cfg_vpmgmt_clone;
-#define        VXGE_HW_XMAC_STATS_GEN_CFG_VPMGMT_CLONE_PRTAGGR_CUM_TIMER(val) \
-                                                       vxge_vBIT(val, 4, 4)
-#define        VXGE_HW_XMAC_STATS_GEN_CFG_VPMGMT_CLONE_VPATH_CUM_TIMER(val) \
-                                                       vxge_vBIT(val, 8, 4)
-#define        VXGE_HW_XMAC_STATS_GEN_CFG_VPMGMT_CLONE_VLAN_HANDLING   vxge_mBIT(15)
-/*0x00290*/    u64     xmac_cfg_port_vpmgmt_clone[3];
-#define        VXGE_HW_XMAC_CFG_PORT_VPMGMT_CLONE_XGMII_LOOPBACK       vxge_mBIT(3)
-#define        VXGE_HW_XMAC_CFG_PORT_VPMGMT_CLONE_XGMII_REVERSE_LOOPBACK \
-                                                               vxge_mBIT(7)
-#define        VXGE_HW_XMAC_CFG_PORT_VPMGMT_CLONE_XGMII_TX_BEHAV       vxge_mBIT(11)
-#define        VXGE_HW_XMAC_CFG_PORT_VPMGMT_CLONE_XGMII_RX_BEHAV       vxge_mBIT(15)
-       u8      unused002c0[0x002c0-0x002a8];
-
-/*0x002c0*/    u64     txmac_gen_cfg0_vpmgmt_clone;
-#define        VXGE_HW_TXMAC_GEN_CFG0_VPMGMT_CLONE_CHOSEN_TX_PORT      vxge_mBIT(7)
-/*0x002c8*/    u64     txmac_cfg0_port_vpmgmt_clone[3];
-#define        VXGE_HW_TXMAC_CFG0_PORT_VPMGMT_CLONE_TMAC_EN    vxge_mBIT(3)
-#define        VXGE_HW_TXMAC_CFG0_PORT_VPMGMT_CLONE_APPEND_PAD vxge_mBIT(7)
-#define VXGE_HW_TXMAC_CFG0_PORT_VPMGMT_CLONE_PAD_BYTE(val) vxge_vBIT(val, 8, 8)
-       u8      unused00300[0x00300-0x002e0];
-
-/*0x00300*/    u64     wol_mp_crc;
-#define VXGE_HW_WOL_MP_CRC_CRC(val) vxge_vBIT(val, 0, 32)
-#define        VXGE_HW_WOL_MP_CRC_RC_EN        vxge_mBIT(63)
-/*0x00308*/    u64     wol_mp_mask_a;
-#define VXGE_HW_WOL_MP_MASK_A_MASK(val) vxge_vBIT(val, 0, 64)
-/*0x00310*/    u64     wol_mp_mask_b;
-#define VXGE_HW_WOL_MP_MASK_B_MASK(val) vxge_vBIT(val, 0, 64)
-       u8      unused00360[0x00360-0x00318];
-
-/*0x00360*/    u64     fau_pa_cfg_vpmgmt_clone;
-#define        VXGE_HW_FAU_PA_CFG_VPMGMT_CLONE_REPL_L4_COMP_CSUM       vxge_mBIT(3)
-#define        VXGE_HW_FAU_PA_CFG_VPMGMT_CLONE_REPL_L3_INCL_CF vxge_mBIT(7)
-#define        VXGE_HW_FAU_PA_CFG_VPMGMT_CLONE_REPL_L3_COMP_CSUM       vxge_mBIT(11)
-/*0x00368*/    u64     rx_datapath_util_vp_clone;
-#define        VXGE_HW_RX_DATAPATH_UTIL_VP_CLONE_FAU_RX_UTILIZATION(val) \
-                                                       vxge_vBIT(val, 7, 9)
-#define        VXGE_HW_RX_DATAPATH_UTIL_VP_CLONE_RX_UTIL_CFG(val) \
-                                                       vxge_vBIT(val, 16, 4)
-#define        VXGE_HW_RX_DATAPATH_UTIL_VP_CLONE_FAU_RX_FRAC_UTIL(val) \
-                                                       vxge_vBIT(val, 20, 4)
-#define        VXGE_HW_RX_DATAPATH_UTIL_VP_CLONE_RX_PKT_WEIGHT(val) \
-                                                       vxge_vBIT(val, 24, 4)
-       u8      unused00380[0x00380-0x00370];
-
-/*0x00380*/    u64     tx_datapath_util_vp_clone;
-#define        VXGE_HW_TX_DATAPATH_UTIL_VP_CLONE_TPA_TX_UTILIZATION(val) \
-                                                       vxge_vBIT(val, 7, 9)
-#define        VXGE_HW_TX_DATAPATH_UTIL_VP_CLONE_TX_UTIL_CFG(val) \
-                                                       vxge_vBIT(val, 16, 4)
-#define        VXGE_HW_TX_DATAPATH_UTIL_VP_CLONE_TPA_TX_FRAC_UTIL(val) \
-                                                       vxge_vBIT(val, 20, 4)
-#define        VXGE_HW_TX_DATAPATH_UTIL_VP_CLONE_TX_PKT_WEIGHT(val) \
-                                                       vxge_vBIT(val, 24, 4)
-
-} __packed;
-
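As a usage sketch for the per-vpath map that follows: the driver programmed these registers by composing the field macros and issuing a single 64-bit MMIO write, e.g. the RxD doorbell at offset 0x00a88 below. A simplified version of that pattern (assuming the kernel MMIO helpers from <linux/io.h>; not the exact driver code):

	#include <linux/io.h>

	/* Simplified sketch: post qw_count quad-words of new RxDs
	 * to the PRC via the vpath doorbell register. */
	static void vxge_post_rxds(struct vxge_hw_vpath_reg __iomem *vp_reg,
				   u64 qw_count)
	{
		writeq(VXGE_HW_PRC_RXD_DOORBELL_NEW_QW_CNT(qw_count),
		       &vp_reg->prc_rxd_doorbell);
	}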
-struct vxge_hw_vpath_reg {
-
-       u8      unused00300[0x00300];
-
-/*0x00300*/    u64     usdc_vpath;
-#define VXGE_HW_USDC_VPATH_SGRP_ASSIGN(val) vxge_vBIT(val, 0, 32)
-       u8      unused00a00[0x00a00-0x00308];
-
-/*0x00a00*/    u64     wrdma_alarm_status;
-#define        VXGE_HW_WRDMA_ALARM_STATUS_PRC_ALARM_PRC_INT    vxge_mBIT(1)
-/*0x00a08*/    u64     wrdma_alarm_mask;
-       u8      unused00a30[0x00a30-0x00a10];
-
-/*0x00a30*/    u64     prc_alarm_reg;
-#define        VXGE_HW_PRC_ALARM_REG_PRC_RING_BUMP     vxge_mBIT(0)
-#define        VXGE_HW_PRC_ALARM_REG_PRC_RXDCM_SC_ERR  vxge_mBIT(1)
-#define        VXGE_HW_PRC_ALARM_REG_PRC_RXDCM_SC_ABORT        vxge_mBIT(2)
-#define        VXGE_HW_PRC_ALARM_REG_PRC_QUANTA_SIZE_ERR       vxge_mBIT(3)
-/*0x00a38*/    u64     prc_alarm_mask;
-/*0x00a40*/    u64     prc_alarm_alarm;
-/*0x00a48*/    u64     prc_cfg1;
-#define VXGE_HW_PRC_CFG1_RX_TIMER_VAL(val) vxge_vBIT(val, 3, 29)
-#define        VXGE_HW_PRC_CFG1_TIM_RING_BUMP_INT_ENABLE       vxge_mBIT(34)
-#define        VXGE_HW_PRC_CFG1_RTI_TINT_DISABLE       vxge_mBIT(35)
-#define        VXGE_HW_PRC_CFG1_GREEDY_RETURN  vxge_mBIT(36)
-#define        VXGE_HW_PRC_CFG1_QUICK_SHOT     vxge_mBIT(37)
-#define        VXGE_HW_PRC_CFG1_RX_TIMER_CI    vxge_mBIT(39)
-#define VXGE_HW_PRC_CFG1_RESET_TIMER_ON_RXD_RET(val) vxge_vBIT(val, 40, 2)
-       u8      unused00a60[0x00a60-0x00a50];
-
-/*0x00a60*/    u64     prc_cfg4;
-#define        VXGE_HW_PRC_CFG4_IN_SVC vxge_mBIT(7)
-#define VXGE_HW_PRC_CFG4_RING_MODE(val) vxge_vBIT(val, 14, 2)
-#define        VXGE_HW_PRC_CFG4_RXD_NO_SNOOP   vxge_mBIT(22)
-#define        VXGE_HW_PRC_CFG4_FRM_NO_SNOOP   vxge_mBIT(23)
-#define        VXGE_HW_PRC_CFG4_RTH_DISABLE    vxge_mBIT(31)
-#define        VXGE_HW_PRC_CFG4_IGNORE_OWNERSHIP       vxge_mBIT(32)
-#define        VXGE_HW_PRC_CFG4_SIGNAL_BENIGN_OVFLW    vxge_mBIT(36)
-#define        VXGE_HW_PRC_CFG4_BIMODAL_INTERRUPT      vxge_mBIT(37)
-#define VXGE_HW_PRC_CFG4_BACKOFF_INTERVAL(val) vxge_vBIT(val, 40, 24)
-/*0x00a68*/    u64     prc_cfg5;
-#define VXGE_HW_PRC_CFG5_RXD0_ADD(val) vxge_vBIT(val, 0, 61)
-/*0x00a70*/    u64     prc_cfg6;
-#define        VXGE_HW_PRC_CFG6_FRM_PAD_EN     vxge_mBIT(0)
-#define        VXGE_HW_PRC_CFG6_QSIZE_ALIGNED_RXD      vxge_mBIT(2)
-#define        VXGE_HW_PRC_CFG6_DOORBELL_MODE_EN       vxge_mBIT(5)
-#define        VXGE_HW_PRC_CFG6_L3_CPC_TRSFR_CODE_EN   vxge_mBIT(8)
-#define        VXGE_HW_PRC_CFG6_L4_CPC_TRSFR_CODE_EN   vxge_mBIT(9)
-#define VXGE_HW_PRC_CFG6_RXD_CRXDT(val) vxge_vBIT(val, 23, 9)
-#define VXGE_HW_PRC_CFG6_RXD_SPAT(val) vxge_vBIT(val, 36, 9)
-#define VXGE_HW_PRC_CFG6_GET_RXD_SPAT(val)     vxge_bVALn(val, 36, 9)
-/*0x00a78*/    u64     prc_cfg7;
-#define VXGE_HW_PRC_CFG7_SCATTER_MODE(val) vxge_vBIT(val, 6, 2)
-#define        VXGE_HW_PRC_CFG7_SMART_SCAT_EN  vxge_mBIT(11)
-#define        VXGE_HW_PRC_CFG7_RXD_NS_CHG_EN  vxge_mBIT(12)
-#define        VXGE_HW_PRC_CFG7_NO_HDR_SEPARATION      vxge_mBIT(14)
-#define VXGE_HW_PRC_CFG7_RXD_BUFF_SIZE_MASK(val) vxge_vBIT(val, 20, 4)
-#define VXGE_HW_PRC_CFG7_BUFF_SIZE0_MASK(val) vxge_vBIT(val, 27, 5)
-/*0x00a80*/    u64     tim_dest_addr;
-#define VXGE_HW_TIM_DEST_ADDR_TIM_DEST_ADDR(val) vxge_vBIT(val, 0, 64)
-/*0x00a88*/    u64     prc_rxd_doorbell;
-#define VXGE_HW_PRC_RXD_DOORBELL_NEW_QW_CNT(val) vxge_vBIT(val, 48, 16)
-/*0x00a90*/    u64     rqa_prty_for_vp;
-#define VXGE_HW_RQA_PRTY_FOR_VP_RQA_PRTY_FOR_VP(val) vxge_vBIT(val, 59, 5)
-/*0x00a98*/    u64     rxdmem_size;
-#define VXGE_HW_RXDMEM_SIZE_PRC_RXDMEM_SIZE(val) vxge_vBIT(val, 51, 13)
-/*0x00aa0*/    u64     frm_in_progress_cnt;
-#define        VXGE_HW_FRM_IN_PROGRESS_CNT_PRC_FRM_IN_PROGRESS_CNT(val) \
-                                                       vxge_vBIT(val, 59, 5)
-/*0x00aa8*/    u64     rx_multi_cast_stats;
-#define VXGE_HW_RX_MULTI_CAST_STATS_FRAME_DISCARD(val) vxge_vBIT(val, 48, 16)
-/*0x00ab0*/    u64     rx_frm_transferred;
-#define        VXGE_HW_RX_FRM_TRANSFERRED_RX_FRM_TRANSFERRED(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x00ab8*/    u64     rxd_returned;
-#define VXGE_HW_RXD_RETURNED_RXD_RETURNED(val) vxge_vBIT(val, 48, 16)
-       u8      unused00c00[0x00c00-0x00ac0];
-
-/*0x00c00*/    u64     kdfc_fifo_trpl_partition;
-#define VXGE_HW_KDFC_FIFO_TRPL_PARTITION_LENGTH_0(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_FIFO_TRPL_PARTITION_LENGTH_1(val) vxge_vBIT(val, 33, 15)
-#define VXGE_HW_KDFC_FIFO_TRPL_PARTITION_LENGTH_2(val) vxge_vBIT(val, 49, 15)
-/*0x00c08*/    u64     kdfc_fifo_trpl_ctrl;
-#define        VXGE_HW_KDFC_FIFO_TRPL_CTRL_TRIPLET_ENABLE      vxge_mBIT(7)
-/*0x00c10*/    u64     kdfc_trpl_fifo_0_ctrl;
-#define VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_MODE(val) vxge_vBIT(val, 14, 2)
-#define        VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_FLIP_EN   vxge_mBIT(22)
-#define        VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_SWAP_EN   vxge_mBIT(23)
-#define VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_INT_CTRL(val) vxge_vBIT(val, 26, 2)
-#define        VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_CTRL_STRUC        vxge_mBIT(28)
-#define        VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_ADD_PAD   vxge_mBIT(29)
-#define        VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_NO_SNOOP  vxge_mBIT(30)
-#define        VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_RLX_ORD   vxge_mBIT(31)
-#define VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_SELECT(val) vxge_vBIT(val, 32, 8)
-#define VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_INT_NO(val) vxge_vBIT(val, 41, 7)
-#define VXGE_HW_KDFC_TRPL_FIFO_0_CTRL_BIT_MAP(val) vxge_vBIT(val, 48, 16)
-/*0x00c18*/    u64     kdfc_trpl_fifo_1_ctrl;
-#define VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_MODE(val) vxge_vBIT(val, 14, 2)
-#define        VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_FLIP_EN   vxge_mBIT(22)
-#define        VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_SWAP_EN   vxge_mBIT(23)
-#define VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_INT_CTRL(val) vxge_vBIT(val, 26, 2)
-#define        VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_CTRL_STRUC        vxge_mBIT(28)
-#define        VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_ADD_PAD   vxge_mBIT(29)
-#define        VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_NO_SNOOP  vxge_mBIT(30)
-#define        VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_RLX_ORD   vxge_mBIT(31)
-#define VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_SELECT(val) vxge_vBIT(val, 32, 8)
-#define VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_INT_NO(val) vxge_vBIT(val, 41, 7)
-#define VXGE_HW_KDFC_TRPL_FIFO_1_CTRL_BIT_MAP(val) vxge_vBIT(val, 48, 16)
-/*0x00c20*/    u64     kdfc_trpl_fifo_2_ctrl;
-#define        VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_FLIP_EN   vxge_mBIT(22)
-#define        VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_SWAP_EN   vxge_mBIT(23)
-#define VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_INT_CTRL(val) vxge_vBIT(val, 26, 2)
-#define        VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_CTRL_STRUC        vxge_mBIT(28)
-#define        VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_ADD_PAD   vxge_mBIT(29)
-#define        VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_NO_SNOOP  vxge_mBIT(30)
-#define        VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_RLX_ORD   vxge_mBIT(31)
-#define VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_SELECT(val) vxge_vBIT(val, 32, 8)
-#define VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_INT_NO(val) vxge_vBIT(val, 41, 7)
-#define VXGE_HW_KDFC_TRPL_FIFO_2_CTRL_BIT_MAP(val) vxge_vBIT(val, 48, 16)
-/*0x00c28*/    u64     kdfc_trpl_fifo_0_wb_address;
-#define VXGE_HW_KDFC_TRPL_FIFO_0_WB_ADDRESS_ADD(val) vxge_vBIT(val, 0, 64)
-/*0x00c30*/    u64     kdfc_trpl_fifo_1_wb_address;
-#define VXGE_HW_KDFC_TRPL_FIFO_1_WB_ADDRESS_ADD(val) vxge_vBIT(val, 0, 64)
-/*0x00c38*/    u64     kdfc_trpl_fifo_2_wb_address;
-#define VXGE_HW_KDFC_TRPL_FIFO_2_WB_ADDRESS_ADD(val) vxge_vBIT(val, 0, 64)
-/*0x00c40*/    u64     kdfc_trpl_fifo_offset;
-#define VXGE_HW_KDFC_TRPL_FIFO_OFFSET_KDFC_RCTR0(val) vxge_vBIT(val, 1, 15)
-#define VXGE_HW_KDFC_TRPL_FIFO_OFFSET_KDFC_RCTR1(val) vxge_vBIT(val, 17, 15)
-#define VXGE_HW_KDFC_TRPL_FIFO_OFFSET_KDFC_RCTR2(val) vxge_vBIT(val, 33, 15)
-/*0x00c48*/    u64     kdfc_drbl_triplet_total;
-#define        VXGE_HW_KDFC_DRBL_TRIPLET_TOTAL_KDFC_MAX_SIZE(val) \
-                                                       vxge_vBIT(val, 17, 15)
-       u8      unused00c60[0x00c60-0x00c50];
-
-/*0x00c60*/    u64     usdc_drbl_ctrl;
-#define        VXGE_HW_USDC_DRBL_CTRL_FLIP_EN  vxge_mBIT(22)
-#define        VXGE_HW_USDC_DRBL_CTRL_SWAP_EN  vxge_mBIT(23)
-/*0x00c68*/    u64     usdc_vp_ready;
-#define        VXGE_HW_USDC_VP_READY_USDC_HTN_READY    vxge_mBIT(7)
-#define        VXGE_HW_USDC_VP_READY_USDC_SRQ_READY    vxge_mBIT(15)
-#define        VXGE_HW_USDC_VP_READY_USDC_CQRQ_READY   vxge_mBIT(23)
-/*0x00c70*/    u64     kdfc_status;
-#define        VXGE_HW_KDFC_STATUS_KDFC_WRR_0_READY    vxge_mBIT(0)
-#define        VXGE_HW_KDFC_STATUS_KDFC_WRR_1_READY    vxge_mBIT(1)
-#define        VXGE_HW_KDFC_STATUS_KDFC_WRR_2_READY    vxge_mBIT(2)
-       u8      unused00c80[0x00c80-0x00c78];
-
-/*0x00c80*/    u64     xmac_rpa_vcfg;
-#define        VXGE_HW_XMAC_RPA_VCFG_IPV4_TCP_INCL_PH  vxge_mBIT(3)
-#define        VXGE_HW_XMAC_RPA_VCFG_IPV6_TCP_INCL_PH  vxge_mBIT(7)
-#define        VXGE_HW_XMAC_RPA_VCFG_IPV4_UDP_INCL_PH  vxge_mBIT(11)
-#define        VXGE_HW_XMAC_RPA_VCFG_IPV6_UDP_INCL_PH  vxge_mBIT(15)
-#define        VXGE_HW_XMAC_RPA_VCFG_L4_INCL_CF        vxge_mBIT(19)
-#define        VXGE_HW_XMAC_RPA_VCFG_STRIP_VLAN_TAG    vxge_mBIT(23)
-/*0x00c88*/    u64     rxmac_vcfg0;
-#define VXGE_HW_RXMAC_VCFG0_RTS_MAX_FRM_LEN(val) vxge_vBIT(val, 2, 14)
-#define        VXGE_HW_RXMAC_VCFG0_RTS_USE_MIN_LEN     vxge_mBIT(19)
-#define VXGE_HW_RXMAC_VCFG0_RTS_MIN_FRM_LEN(val) vxge_vBIT(val, 26, 14)
-#define        VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN   vxge_mBIT(43)
-#define        VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN   vxge_mBIT(47)
-#define        VXGE_HW_RXMAC_VCFG0_BCAST_EN    vxge_mBIT(51)
-#define        VXGE_HW_RXMAC_VCFG0_ALL_VID_EN  vxge_mBIT(55)
-/*0x00c90*/    u64     rxmac_vcfg1;
-#define VXGE_HW_RXMAC_VCFG1_RTS_RTH_MULTI_IT_BD_MODE(val) vxge_vBIT(val, 42, 2)
-#define        VXGE_HW_RXMAC_VCFG1_RTS_RTH_MULTI_IT_EN_MODE    vxge_mBIT(47)
-#define        VXGE_HW_RXMAC_VCFG1_CONTRIB_L2_FLOW     vxge_mBIT(51)
-/*0x00c98*/    u64     rts_access_steer_ctrl;
-#define VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION(val) vxge_vBIT(val, 1, 7)
-#define VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL(val) vxge_vBIT(val, 8, 4)
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_STROBE    vxge_mBIT(15)
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_BEHAV_TBL_SEL     vxge_mBIT(23)
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_TABLE_SEL vxge_mBIT(27)
-#define        VXGE_HW_RTS_ACCESS_STEER_CTRL_RMACJ_STATUS      vxge_mBIT(0)
-#define VXGE_HW_RTS_ACCESS_STEER_CTRL_OFFSET(val) vxge_vBIT(val, 40, 8)
-/*0x00ca0*/    u64     rts_access_steer_data0;
-#define VXGE_HW_RTS_ACCESS_STEER_DATA0_DATA(val) vxge_vBIT(val, 0, 64)
-/*0x00ca8*/    u64     rts_access_steer_data1;
-#define VXGE_HW_RTS_ACCESS_STEER_DATA1_DATA(val) vxge_vBIT(val, 0, 64)
-       u8      unused00d00[0x00d00-0x00cb0];
-
-/*0x00d00*/    u64     xmac_vsport_choice;
-#define VXGE_HW_XMAC_VSPORT_CHOICE_VSPORT_NUMBER(val) vxge_vBIT(val, 3, 5)
-/*0x00d08*/    u64     xmac_stats_cfg;
-/*0x00d10*/    u64     xmac_stats_access_cmd;
-#define VXGE_HW_XMAC_STATS_ACCESS_CMD_OP(val) vxge_vBIT(val, 6, 2)
-#define        VXGE_HW_XMAC_STATS_ACCESS_CMD_STROBE    vxge_mBIT(15)
-#define VXGE_HW_XMAC_STATS_ACCESS_CMD_OFFSET_SEL(val) vxge_vBIT(val, 32, 8)
-/*0x00d18*/    u64     xmac_stats_access_data;
-#define VXGE_HW_XMAC_STATS_ACCESS_DATA_XSMGR_DATA(val) vxge_vBIT(val, 0, 64)
-/*0x00d20*/    u64     asic_ntwk_vp_ctrl;
-#define        VXGE_HW_ASIC_NTWK_VP_CTRL_REQ_TEST_NTWK vxge_mBIT(3)
-#define        VXGE_HW_ASIC_NTWK_VP_CTRL_XMACJ_SHOW_PORT_INFO  vxge_mBIT(55)
-#define        VXGE_HW_ASIC_NTWK_VP_CTRL_XMACJ_PORT_NUM        vxge_mBIT(63)
-       u8      unused00d30[0x00d30-0x00d28];
-
-/*0x00d30*/    u64     xgmac_vp_int_status;
-#define        VXGE_HW_XGMAC_VP_INT_STATUS_ASIC_NTWK_VP_ERR_ASIC_NTWK_VP_INT \
-                                                               vxge_mBIT(3)
-/*0x00d38*/    u64     xgmac_vp_int_mask;
-/*0x00d40*/    u64     asic_ntwk_vp_err_reg;
-#define        VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT        vxge_mBIT(3)
-#define        VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK vxge_mBIT(7)
-#define        VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT_OCCURR \
-                                                               vxge_mBIT(11)
-#define        VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK_OCCURR \
-                                                       vxge_mBIT(15)
-#define        VXGE_HW_ASIC_NTWK_VP_ERR_REG_XMACJ_NTWK_REAFFIRMED_FAULT \
-                                                       vxge_mBIT(19)
-#define        VXGE_HW_ASIC_NTWK_VP_ERR_REG_XMACJ_NTWK_REAFFIRMED_OK   vxge_mBIT(23)
-/*0x00d48*/    u64     asic_ntwk_vp_err_mask;
-/*0x00d50*/    u64     asic_ntwk_vp_err_alarm;
-       u8      unused00d80[0x00d80-0x00d58];
-
-/*0x00d80*/    u64     rtdma_bw_ctrl;
-#define        VXGE_HW_RTDMA_BW_CTRL_BW_CTRL_EN        vxge_mBIT(39)
-#define VXGE_HW_RTDMA_BW_CTRL_DESIRED_BW(val) vxge_vBIT(val, 46, 18)
-/*0x00d88*/    u64     rtdma_rd_optimization_ctrl;
-#define        VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_GEN_INT_AFTER_ABORT  vxge_mBIT(3)
-#define VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_PAD_MODE(val) vxge_vBIT(val, 6, 2)
-#define VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_PAD_PATTERN(val) vxge_vBIT(val, 8, 8)
-#define        VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_WAIT_FOR_SPACE    vxge_mBIT(19)
-#define VXGE_HW_PCI_EXP_DEVCTL_READRQ   0x7000  /* Max_Read_Request_Size */
-#define VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_FILL_THRESH(val) \
-                                                       vxge_vBIT(val, 21, 3)
-#define        VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_TXD_PYLD_WMARK_EN    vxge_mBIT(28)
-#define VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_TXD_PYLD_WMARK(val) \
-                                                       vxge_vBIT(val, 29, 3)
-#define        VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_ADDR_BDRY_EN      vxge_mBIT(35)
-#define VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_FB_ADDR_BDRY(val) \
-                                                       vxge_vBIT(val, 37, 3)
-#define        VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_TXD_WAIT_FOR_SPACE   vxge_mBIT(43)
-#define        VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_TXD_FILL_THRESH(val) \
-                                                       vxge_vBIT(val, 51, 5)
-#define        VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_TXD_ADDR_BDRY_EN     vxge_mBIT(59)
-#define VXGE_HW_RTDMA_RD_OPTIMIZATION_CTRL_TXD_ADDR_BDRY(val) \
-                                                       vxge_vBIT(val, 61, 3)
-/*0x00d90*/    u64     pda_pcc_job_monitor;
-#define        VXGE_HW_PDA_PCC_JOB_MONITOR_PDA_PCC_JOB_STATUS  vxge_mBIT(7)
-/*0x00d98*/    u64     tx_protocol_assist_cfg;
-#define        VXGE_HW_TX_PROTOCOL_ASSIST_CFG_LSOV2_EN vxge_mBIT(6)
-#define        VXGE_HW_TX_PROTOCOL_ASSIST_CFG_IPV6_KEEP_SEARCHING      vxge_mBIT(7)
-       u8      unused01000[0x01000-0x00da0];
-
-/*0x01000*/    u64     tim_cfg1_int_num[4];
-#define VXGE_HW_TIM_CFG1_INT_NUM_BTIMER_VAL(val) vxge_vBIT(val, 6, 26)
-#define        VXGE_HW_TIM_CFG1_INT_NUM_BITMP_EN       vxge_mBIT(35)
-#define        VXGE_HW_TIM_CFG1_INT_NUM_TXFRM_CNT_EN   vxge_mBIT(36)
-#define        VXGE_HW_TIM_CFG1_INT_NUM_TXD_CNT_EN     vxge_mBIT(37)
-#define        VXGE_HW_TIM_CFG1_INT_NUM_TIMER_AC       vxge_mBIT(38)
-#define        VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI       vxge_mBIT(39)
-#define VXGE_HW_TIM_CFG1_INT_NUM_URNG_A(val) vxge_vBIT(val, 41, 7)
-#define VXGE_HW_TIM_CFG1_INT_NUM_URNG_B(val) vxge_vBIT(val, 49, 7)
-#define VXGE_HW_TIM_CFG1_INT_NUM_URNG_C(val) vxge_vBIT(val, 57, 7)
-/*0x01020*/    u64     tim_cfg2_int_num[4];
-#define VXGE_HW_TIM_CFG2_INT_NUM_UEC_A(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_TIM_CFG2_INT_NUM_UEC_B(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_TIM_CFG2_INT_NUM_UEC_C(val) vxge_vBIT(val, 32, 16)
-#define VXGE_HW_TIM_CFG2_INT_NUM_UEC_D(val) vxge_vBIT(val, 48, 16)
-/*0x01040*/    u64     tim_cfg3_int_num[4];
-#define        VXGE_HW_TIM_CFG3_INT_NUM_TIMER_RI       vxge_mBIT(0)
-#define VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_EVENT_SF(val) vxge_vBIT(val, 1, 4)
-#define VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(val) vxge_vBIT(val, 6, 26)
-#define VXGE_HW_TIM_CFG3_INT_NUM_UTIL_SEL(val) vxge_vBIT(val, 32, 6)
-#define VXGE_HW_TIM_CFG3_INT_NUM_LTIMER_VAL(val) vxge_vBIT(val, 38, 26)
-/*0x01060*/    u64     tim_wrkld_clc;
-#define VXGE_HW_TIM_WRKLD_CLC_WRKLD_EVAL_PRD(val) vxge_vBIT(val, 0, 32)
-#define VXGE_HW_TIM_WRKLD_CLC_WRKLD_EVAL_DIV(val) vxge_vBIT(val, 35, 5)
-#define        VXGE_HW_TIM_WRKLD_CLC_CNT_FRM_BYTE      vxge_mBIT(40)
-#define VXGE_HW_TIM_WRKLD_CLC_CNT_RX_TX(val) vxge_vBIT(val, 41, 2)
-#define        VXGE_HW_TIM_WRKLD_CLC_CNT_LNK_EN        vxge_mBIT(43)
-#define VXGE_HW_TIM_WRKLD_CLC_HOST_UTIL(val) vxge_vBIT(val, 57, 7)
-/*0x01068*/    u64     tim_bitmap;
-#define VXGE_HW_TIM_BITMAP_MASK(val) vxge_vBIT(val, 0, 32)
-#define        VXGE_HW_TIM_BITMAP_LLROOT_RXD_EN        vxge_mBIT(32)
-#define        VXGE_HW_TIM_BITMAP_LLROOT_TXD_EN        vxge_mBIT(33)
-/*0x01070*/    u64     tim_ring_assn;
-#define VXGE_HW_TIM_RING_ASSN_INT_NUM(val) vxge_vBIT(val, 6, 2)
-/*0x01078*/    u64     tim_remap;
-#define        VXGE_HW_TIM_REMAP_TX_EN vxge_mBIT(5)
-#define        VXGE_HW_TIM_REMAP_RX_EN vxge_mBIT(6)
-#define        VXGE_HW_TIM_REMAP_OFFLOAD_EN    vxge_mBIT(7)
-#define VXGE_HW_TIM_REMAP_TO_VPATH_NUM(val) vxge_vBIT(val, 11, 5)
-/*0x01080*/    u64     tim_vpath_map;
-#define VXGE_HW_TIM_VPATH_MAP_BMAP_ROOT(val) vxge_vBIT(val, 0, 32)
-/*0x01088*/    u64     tim_pci_cfg;
-#define        VXGE_HW_TIM_PCI_CFG_ADD_PAD     vxge_mBIT(7)
-#define        VXGE_HW_TIM_PCI_CFG_NO_SNOOP    vxge_mBIT(15)
-#define        VXGE_HW_TIM_PCI_CFG_RELAXED     vxge_mBIT(23)
-#define        VXGE_HW_TIM_PCI_CFG_CTL_STR     vxge_mBIT(31)
-       u8      unused01100[0x01100-0x01090];
-
-/*0x01100*/    u64     sgrp_assign;
-#define VXGE_HW_SGRP_ASSIGN_SGRP_ASSIGN(val) vxge_vBIT(val, 0, 64)
-/*0x01108*/    u64     sgrp_aoa_and_result;
-#define        VXGE_HW_SGRP_AOA_AND_RESULT_PET_SGRP_AOA_AND_RESULT(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x01110*/    u64     rpe_pci_cfg;
-#define        VXGE_HW_RPE_PCI_CFG_PAD_LRO_DATA_ENABLE vxge_mBIT(7)
-#define        VXGE_HW_RPE_PCI_CFG_PAD_LRO_HDR_ENABLE  vxge_mBIT(8)
-#define        VXGE_HW_RPE_PCI_CFG_PAD_LRO_CQE_ENABLE  vxge_mBIT(9)
-#define        VXGE_HW_RPE_PCI_CFG_PAD_NONLL_CQE_ENABLE        vxge_mBIT(10)
-#define        VXGE_HW_RPE_PCI_CFG_PAD_BASE_LL_CQE_ENABLE      vxge_mBIT(11)
-#define        VXGE_HW_RPE_PCI_CFG_PAD_LL_CQE_IDATA_ENABLE     vxge_mBIT(12)
-#define        VXGE_HW_RPE_PCI_CFG_PAD_CQRQ_IR_ENABLE  vxge_mBIT(13)
-#define        VXGE_HW_RPE_PCI_CFG_PAD_CQSQ_IR_ENABLE  vxge_mBIT(14)
-#define        VXGE_HW_RPE_PCI_CFG_PAD_CQRR_IR_ENABLE  vxge_mBIT(15)
-#define        VXGE_HW_RPE_PCI_CFG_NOSNOOP_DATA        vxge_mBIT(18)
-#define        VXGE_HW_RPE_PCI_CFG_NOSNOOP_NONLL_CQE   vxge_mBIT(19)
-#define        VXGE_HW_RPE_PCI_CFG_NOSNOOP_LL_CQE      vxge_mBIT(20)
-#define        VXGE_HW_RPE_PCI_CFG_NOSNOOP_CQRQ_IR     vxge_mBIT(21)
-#define        VXGE_HW_RPE_PCI_CFG_NOSNOOP_CQSQ_IR     vxge_mBIT(22)
-#define        VXGE_HW_RPE_PCI_CFG_NOSNOOP_CQRR_IR     vxge_mBIT(23)
-#define        VXGE_HW_RPE_PCI_CFG_RELAXED_DATA        vxge_mBIT(26)
-#define        VXGE_HW_RPE_PCI_CFG_RELAXED_NONLL_CQE   vxge_mBIT(27)
-#define        VXGE_HW_RPE_PCI_CFG_RELAXED_LL_CQE      vxge_mBIT(28)
-#define        VXGE_HW_RPE_PCI_CFG_RELAXED_CQRQ_IR     vxge_mBIT(29)
-#define        VXGE_HW_RPE_PCI_CFG_RELAXED_CQSQ_IR     vxge_mBIT(30)
-#define        VXGE_HW_RPE_PCI_CFG_RELAXED_CQRR_IR     vxge_mBIT(31)
-/*0x01118*/    u64     rpe_lro_cfg;
-#define        VXGE_HW_RPE_LRO_CFG_SUPPRESS_LRO_ETH_TRLR       vxge_mBIT(7)
-#define        VXGE_HW_RPE_LRO_CFG_ALLOW_LRO_SNAP_SNAPJUMBO_MRG        vxge_mBIT(11)
-#define        VXGE_HW_RPE_LRO_CFG_ALLOW_LRO_LLC_LLCJUMBO_MRG  vxge_mBIT(15)
-#define        VXGE_HW_RPE_LRO_CFG_INCL_ACK_CNT_IN_CQE vxge_mBIT(23)
-/*0x01120*/    u64     pe_mr2vp_ack_blk_limit;
-#define VXGE_HW_PE_MR2VP_ACK_BLK_LIMIT_BLK_LIMIT(val) vxge_vBIT(val, 32, 32)
-/*0x01128*/    u64     pe_mr2vp_rirr_lirr_blk_limit;
-#define        VXGE_HW_PE_MR2VP_RIRR_LIRR_BLK_LIMIT_RIRR_BLK_LIMIT(val) \
-                                                       vxge_vBIT(val, 0, 32)
-#define        VXGE_HW_PE_MR2VP_RIRR_LIRR_BLK_LIMIT_LIRR_BLK_LIMIT(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x01130*/    u64     txpe_pci_nce_cfg;
-#define VXGE_HW_TXPE_PCI_NCE_CFG_NCE_THRESH(val) vxge_vBIT(val, 0, 32)
-#define        VXGE_HW_TXPE_PCI_NCE_CFG_PAD_TOWI_ENABLE        vxge_mBIT(55)
-#define        VXGE_HW_TXPE_PCI_NCE_CFG_NOSNOOP_TOWI   vxge_mBIT(63)
-       u8      unused01180[0x01180-0x01138];
-
-/*0x01180*/    u64     msg_qpad_en_cfg;
-#define        VXGE_HW_MSG_QPAD_EN_CFG_UMQ_BWR_READ    vxge_mBIT(3)
-#define        VXGE_HW_MSG_QPAD_EN_CFG_DMQ_BWR_READ    vxge_mBIT(7)
-#define        VXGE_HW_MSG_QPAD_EN_CFG_MXP_GENDMA_READ vxge_mBIT(11)
-#define        VXGE_HW_MSG_QPAD_EN_CFG_UXP_GENDMA_READ vxge_mBIT(15)
-#define        VXGE_HW_MSG_QPAD_EN_CFG_UMQ_MSG_WRITE   vxge_mBIT(19)
-#define        VXGE_HW_MSG_QPAD_EN_CFG_UMQDMQ_IR_WRITE vxge_mBIT(23)
-#define        VXGE_HW_MSG_QPAD_EN_CFG_MXP_GENDMA_WRITE        vxge_mBIT(27)
-#define        VXGE_HW_MSG_QPAD_EN_CFG_UXP_GENDMA_WRITE        vxge_mBIT(31)
-/*0x01188*/    u64     msg_pci_cfg;
-#define        VXGE_HW_MSG_PCI_CFG_GENDMA_NO_SNOOP     vxge_mBIT(3)
-#define        VXGE_HW_MSG_PCI_CFG_UMQDMQ_IR_NO_SNOOP  vxge_mBIT(7)
-#define        VXGE_HW_MSG_PCI_CFG_UMQ_NO_SNOOP        vxge_mBIT(11)
-#define        VXGE_HW_MSG_PCI_CFG_DMQ_NO_SNOOP        vxge_mBIT(15)
-/*0x01190*/    u64     umqdmq_ir_init;
-#define VXGE_HW_UMQDMQ_IR_INIT_HOST_WRITE_ADD(val) vxge_vBIT(val, 0, 64)
-/*0x01198*/    u64     dmq_ir_int;
-#define        VXGE_HW_DMQ_IR_INT_IMMED_ENABLE vxge_mBIT(6)
-#define        VXGE_HW_DMQ_IR_INT_EVENT_ENABLE vxge_mBIT(7)
-#define VXGE_HW_DMQ_IR_INT_NUMBER(val) vxge_vBIT(val, 9, 7)
-#define VXGE_HW_DMQ_IR_INT_BITMAP(val) vxge_vBIT(val, 16, 16)
-/*0x011a0*/    u64     dmq_bwr_init_add;
-#define VXGE_HW_DMQ_BWR_INIT_ADD_HOST(val) vxge_vBIT(val, 0, 64)
-/*0x011a8*/    u64     dmq_bwr_init_byte;
-#define VXGE_HW_DMQ_BWR_INIT_BYTE_COUNT(val) vxge_vBIT(val, 0, 32)
-/*0x011b0*/    u64     dmq_ir;
-#define VXGE_HW_DMQ_IR_POLICY(val) vxge_vBIT(val, 0, 8)
-/*0x011b8*/    u64     umq_int;
-#define        VXGE_HW_UMQ_INT_IMMED_ENABLE    vxge_mBIT(6)
-#define        VXGE_HW_UMQ_INT_EVENT_ENABLE    vxge_mBIT(7)
-#define VXGE_HW_UMQ_INT_NUMBER(val) vxge_vBIT(val, 9, 7)
-#define VXGE_HW_UMQ_INT_BITMAP(val) vxge_vBIT(val, 16, 16)
-/*0x011c0*/    u64     umq_mr2vp_bwr_pfch_init;
-#define VXGE_HW_UMQ_MR2VP_BWR_PFCH_INIT_NUMBER(val) vxge_vBIT(val, 0, 8)
-/*0x011c8*/    u64     umq_bwr_pfch_ctrl;
-#define        VXGE_HW_UMQ_BWR_PFCH_CTRL_POLL_EN       vxge_mBIT(3)
-/*0x011d0*/    u64     umq_mr2vp_bwr_eol;
-#define VXGE_HW_UMQ_MR2VP_BWR_EOL_POLL_LATENCY(val) vxge_vBIT(val, 32, 32)
-/*0x011d8*/    u64     umq_bwr_init_add;
-#define VXGE_HW_UMQ_BWR_INIT_ADD_HOST(val) vxge_vBIT(val, 0, 64)
-/*0x011e0*/    u64     umq_bwr_init_byte;
-#define VXGE_HW_UMQ_BWR_INIT_BYTE_COUNT(val) vxge_vBIT(val, 0, 32)
-/*0x011e8*/    u64     gendma_int;
-/*0x011f0*/    u64     umqdmq_ir_init_notify;
-#define        VXGE_HW_UMQDMQ_IR_INIT_NOTIFY_PULSE     vxge_mBIT(3)
-/*0x011f8*/    u64     dmq_init_notify;
-#define        VXGE_HW_DMQ_INIT_NOTIFY_PULSE   vxge_mBIT(3)
-/*0x01200*/    u64     umq_init_notify;
-#define        VXGE_HW_UMQ_INIT_NOTIFY_PULSE   vxge_mBIT(3)
-       u8      unused01380[0x01380-0x01208];
-
-/*0x01380*/    u64     tpa_cfg;
-#define        VXGE_HW_TPA_CFG_IGNORE_FRAME_ERR        vxge_mBIT(3)
-#define        VXGE_HW_TPA_CFG_IPV6_STOP_SEARCHING     vxge_mBIT(7)
-#define        VXGE_HW_TPA_CFG_L4_PSHDR_PRESENT        vxge_mBIT(11)
-#define        VXGE_HW_TPA_CFG_SUPPORT_MOBILE_IPV6_HDRS        vxge_mBIT(15)
-       u8      unused01400[0x01400-0x01388];
-
-/*0x01400*/    u64     tx_vp_reset_discarded_frms;
-#define        VXGE_HW_TX_VP_RESET_DISCARDED_FRMS_TX_VP_RESET_DISCARDED_FRMS(val) \
-                                                       vxge_vBIT(val, 48, 16)
-       u8      unused01480[0x01480-0x01408];
-
-/*0x01480*/    u64     fau_rpa_vcfg;
-#define        VXGE_HW_FAU_RPA_VCFG_L4_COMP_CSUM       vxge_mBIT(7)
-#define        VXGE_HW_FAU_RPA_VCFG_L3_INCL_CF vxge_mBIT(11)
-#define        VXGE_HW_FAU_RPA_VCFG_L3_COMP_CSUM       vxge_mBIT(15)
-       u8      unused014d0[0x014d0-0x01488];
-
-/*0x014d0*/    u64     dbg_stats_rx_mpa;
-#define VXGE_HW_DBG_STATS_RX_MPA_CRC_FAIL_FRMS(val) vxge_vBIT(val, 0, 16)
-#define VXGE_HW_DBG_STATS_RX_MPA_MRK_FAIL_FRMS(val) vxge_vBIT(val, 16, 16)
-#define VXGE_HW_DBG_STATS_RX_MPA_LEN_FAIL_FRMS(val) vxge_vBIT(val, 32, 16)
-/*0x014d8*/    u64     dbg_stats_rx_fau;
-#define VXGE_HW_DBG_STATS_RX_FAU_RX_WOL_FRMS(val) vxge_vBIT(val, 0, 16)
-#define        VXGE_HW_DBG_STATS_RX_FAU_RX_VP_RESET_DISCARDED_FRMS(val) \
-                                                       vxge_vBIT(val, 16, 16)
-#define        VXGE_HW_DBG_STATS_RX_FAU_RX_PERMITTED_FRMS(val) \
-                                                       vxge_vBIT(val, 32, 32)
-       u8      unused014f0[0x014f0-0x014e0];
-
-/*0x014f0*/    u64     fbmc_vp_rdy;
-#define        VXGE_HW_FBMC_VP_RDY_QUEUE_SPAV_FM       vxge_mBIT(0)
-       u8      unused01e00[0x01e00-0x014f8];
-
-/*0x01e00*/    u64     vpath_pcipif_int_status;
-#define \
-VXGE_HW_VPATH_PCIPIF_INT_STATUS_SRPCIM_MSG_TO_VPATH_SRPCIM_MSG_TO_VPATH_INT \
-                                                               vxge_mBIT(3)
-#define        VXGE_HW_VPATH_PCIPIF_INT_STATUS_VPATH_SPARE_R1_VPATH_SPARE_R1_INT \
-                                                               vxge_mBIT(7)
-/*0x01e08*/    u64     vpath_pcipif_int_mask;
-       u8      unused01e20[0x01e20-0x01e10];
-
-/*0x01e20*/    u64     srpcim_msg_to_vpath_reg;
-#define        VXGE_HW_SRPCIM_MSG_TO_VPATH_REG_SWIF_SRPCIM_TO_VPATH_RMSG_INT \
-                                                               vxge_mBIT(3)
-/*0x01e28*/    u64     srpcim_msg_to_vpath_mask;
-/*0x01e30*/    u64     srpcim_msg_to_vpath_alarm;
-       u8      unused01ea0[0x01ea0-0x01e38];
-
-/*0x01ea0*/    u64     vpath_to_srpcim_wmsg;
-#define VXGE_HW_VPATH_TO_SRPCIM_WMSG_VPATH_TO_SRPCIM_WMSG(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x01ea8*/    u64     vpath_to_srpcim_wmsg_trig;
-#define        VXGE_HW_VPATH_TO_SRPCIM_WMSG_TRIG_VPATH_TO_SRPCIM_WMSG_TRIG \
-                                                       vxge_mBIT(0)
-       u8      unused02000[0x02000-0x01eb0];
-
-/*0x02000*/    u64     vpath_general_int_status;
-#define        VXGE_HW_VPATH_GENERAL_INT_STATUS_PIC_INT        vxge_mBIT(3)
-#define        VXGE_HW_VPATH_GENERAL_INT_STATUS_PCI_INT        vxge_mBIT(7)
-#define        VXGE_HW_VPATH_GENERAL_INT_STATUS_WRDMA_INT      vxge_mBIT(15)
-#define        VXGE_HW_VPATH_GENERAL_INT_STATUS_XMAC_INT       vxge_mBIT(19)
-/*0x02008*/    u64     vpath_general_int_mask;
-#define        VXGE_HW_VPATH_GENERAL_INT_MASK_PIC_INT  vxge_mBIT(3)
-#define        VXGE_HW_VPATH_GENERAL_INT_MASK_PCI_INT  vxge_mBIT(7)
-#define        VXGE_HW_VPATH_GENERAL_INT_MASK_WRDMA_INT        vxge_mBIT(15)
-#define        VXGE_HW_VPATH_GENERAL_INT_MASK_XMAC_INT vxge_mBIT(19)
-/*0x02010*/    u64     vpath_ppif_int_status;
-#define        VXGE_HW_VPATH_PPIF_INT_STATUS_KDFCCTL_ERRORS_KDFCCTL_INT \
-                                                       vxge_mBIT(3)
-#define        VXGE_HW_VPATH_PPIF_INT_STATUS_GENERAL_ERRORS_GENERAL_INT \
-                                                       vxge_mBIT(7)
-#define        VXGE_HW_VPATH_PPIF_INT_STATUS_PCI_CONFIG_ERRORS_PCI_CONFIG_INT \
-                                                       vxge_mBIT(11)
-#define \
-VXGE_HW_VPATH_PPIF_INT_STATUS_MRPCIM_TO_VPATH_ALARM_MRPCIM_TO_VPATH_ALARM_INT \
-                                                       vxge_mBIT(15)
-#define \
-VXGE_HW_VPATH_PPIF_INT_STATUS_SRPCIM_TO_VPATH_ALARM_SRPCIM_TO_VPATH_ALARM_INT \
-                                                       vxge_mBIT(19)
-/*0x02018*/    u64     vpath_ppif_int_mask;
-/*0x02020*/    u64     kdfcctl_errors_reg;
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_OVRWR  vxge_mBIT(3)
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_OVRWR  vxge_mBIT(7)
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_OVRWR  vxge_mBIT(11)
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_POISON vxge_mBIT(15)
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_POISON vxge_mBIT(19)
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_POISON vxge_mBIT(23)
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_DMA_ERR        vxge_mBIT(31)
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_DMA_ERR        vxge_mBIT(35)
-#define        VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_DMA_ERR        vxge_mBIT(39)
-/*0x02028*/    u64     kdfcctl_errors_mask;
-/*0x02030*/    u64     kdfcctl_errors_alarm;
-       u8      unused02040[0x02040-0x02038];
-
-/*0x02040*/    u64     general_errors_reg;
-#define        VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO0_OVRFLOW vxge_mBIT(3)
-#define        VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO1_OVRFLOW vxge_mBIT(7)
-#define        VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO2_OVRFLOW vxge_mBIT(11)
-#define        VXGE_HW_GENERAL_ERRORS_REG_STATSB_PIF_CHAIN_ERR vxge_mBIT(15)
-#define        VXGE_HW_GENERAL_ERRORS_REG_STATSB_DROP_TIMEOUT_REQ      vxge_mBIT(19)
-#define        VXGE_HW_GENERAL_ERRORS_REG_TGT_ILLEGAL_ACCESS   vxge_mBIT(27)
-#define        VXGE_HW_GENERAL_ERRORS_REG_INI_SERR_DET vxge_mBIT(31)
-/*0x02048*/    u64     general_errors_mask;
-/*0x02050*/    u64     general_errors_alarm;
-/*0x02058*/    u64     pci_config_errors_reg;
-#define        VXGE_HW_PCI_CONFIG_ERRORS_REG_PCICONFIG_STATUS_ERR      vxge_mBIT(3)
-#define        VXGE_HW_PCI_CONFIG_ERRORS_REG_PCICONFIG_UNCOR_ERR       vxge_mBIT(7)
-#define        VXGE_HW_PCI_CONFIG_ERRORS_REG_PCICONFIG_COR_ERR vxge_mBIT(11)
-/*0x02060*/    u64     pci_config_errors_mask;
-/*0x02068*/    u64     pci_config_errors_alarm;
-/*0x02070*/    u64     mrpcim_to_vpath_alarm_reg;
-#define        VXGE_HW_MRPCIM_TO_VPATH_ALARM_REG_PPIF_MRPCIM_TO_VPATH_ALARM \
-                                                               vxge_mBIT(3)
-/*0x02078*/    u64     mrpcim_to_vpath_alarm_mask;
-/*0x02080*/    u64     mrpcim_to_vpath_alarm_alarm;
-/*0x02088*/    u64     srpcim_to_vpath_alarm_reg;
-#define        VXGE_HW_SRPCIM_TO_VPATH_ALARM_REG_PPIF_SRPCIM_TO_VPATH_ALARM(val) \
-                                                       vxge_vBIT(val, 0, 17)
-/*0x02090*/    u64     srpcim_to_vpath_alarm_mask;
-/*0x02098*/    u64     srpcim_to_vpath_alarm_alarm;
-       u8      unused02108[0x02108-0x020a0];
-
-/*0x02108*/    u64     kdfcctl_status;
-#define VXGE_HW_KDFCCTL_STATUS_KDFCCTL_FIFO0_PRES(val) vxge_vBIT(val, 0, 8)
-#define VXGE_HW_KDFCCTL_STATUS_KDFCCTL_FIFO1_PRES(val) vxge_vBIT(val, 8, 8)
-#define VXGE_HW_KDFCCTL_STATUS_KDFCCTL_FIFO2_PRES(val) vxge_vBIT(val, 16, 8)
-#define VXGE_HW_KDFCCTL_STATUS_KDFCCTL_FIFO0_OVRWR(val) vxge_vBIT(val, 24, 8)
-#define VXGE_HW_KDFCCTL_STATUS_KDFCCTL_FIFO1_OVRWR(val) vxge_vBIT(val, 32, 8)
-#define VXGE_HW_KDFCCTL_STATUS_KDFCCTL_FIFO2_OVRWR(val) vxge_vBIT(val, 40, 8)
-/*0x02110*/    u64     rsthdlr_status;
-#define        VXGE_HW_RSTHDLR_STATUS_RSTHDLR_CURRENT_RESET    vxge_mBIT(3)
-#define VXGE_HW_RSTHDLR_STATUS_RSTHDLR_CURRENT_VPIN(val) vxge_vBIT(val, 6, 2)
-/*0x02118*/    u64     fifo0_status;
-#define VXGE_HW_FIFO0_STATUS_DBLGEN_FIFO0_RDIDX(val) vxge_vBIT(val, 0, 12)
-/*0x02120*/    u64     fifo1_status;
-#define VXGE_HW_FIFO1_STATUS_DBLGEN_FIFO1_RDIDX(val) vxge_vBIT(val, 0, 12)
-/*0x02128*/    u64     fifo2_status;
-#define VXGE_HW_FIFO2_STATUS_DBLGEN_FIFO2_RDIDX(val) vxge_vBIT(val, 0, 12)
-       u8      unused02158[0x02158-0x02130];
-
-/*0x02158*/    u64     tgt_illegal_access;
-#define VXGE_HW_TGT_ILLEGAL_ACCESS_SWIF_REGION(val) vxge_vBIT(val, 1, 7)
-       u8      unused02200[0x02200-0x02160];
-
-/*0x02200*/    u64     vpath_general_cfg1;
-#define VXGE_HW_VPATH_GENERAL_CFG1_TC_VALUE(val) vxge_vBIT(val, 1, 3)
-#define        VXGE_HW_VPATH_GENERAL_CFG1_DATA_BYTE_SWAPEN     vxge_mBIT(7)
-#define        VXGE_HW_VPATH_GENERAL_CFG1_DATA_FLIPEN  vxge_mBIT(11)
-#define        VXGE_HW_VPATH_GENERAL_CFG1_CTL_BYTE_SWAPEN      vxge_mBIT(15)
-#define        VXGE_HW_VPATH_GENERAL_CFG1_CTL_FLIPEN   vxge_mBIT(23)
-#define        VXGE_HW_VPATH_GENERAL_CFG1_MSIX_ADDR_SWAPEN     vxge_mBIT(51)
-#define        VXGE_HW_VPATH_GENERAL_CFG1_MSIX_ADDR_FLIPEN     vxge_mBIT(55)
-#define        VXGE_HW_VPATH_GENERAL_CFG1_MSIX_DATA_SWAPEN     vxge_mBIT(59)
-#define        VXGE_HW_VPATH_GENERAL_CFG1_MSIX_DATA_FLIPEN     vxge_mBIT(63)
-/*0x02208*/    u64     vpath_general_cfg2;
-#define VXGE_HW_VPATH_GENERAL_CFG2_SIZE_QUANTUM(val) vxge_vBIT(val, 1, 3)
-/*0x02210*/    u64     vpath_general_cfg3;
-#define        VXGE_HW_VPATH_GENERAL_CFG3_IGNORE_VPATH_RST_FOR_INTA    vxge_mBIT(3)
-       u8      unused02220[0x02220-0x02218];
-
-/*0x02220*/    u64     kdfcctl_cfg0;
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_SWAPEN_FIFO0  vxge_mBIT(1)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_SWAPEN_FIFO1  vxge_mBIT(2)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_SWAPEN_FIFO2  vxge_mBIT(3)
-#define        VXGE_HW_KDFCCTL_CFG0_BIT_FLIPEN_FIFO0   vxge_mBIT(5)
-#define        VXGE_HW_KDFCCTL_CFG0_BIT_FLIPEN_FIFO1   vxge_mBIT(6)
-#define        VXGE_HW_KDFCCTL_CFG0_BIT_FLIPEN_FIFO2   vxge_mBIT(7)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE0_FIFO0      vxge_mBIT(9)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE0_FIFO1      vxge_mBIT(10)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE0_FIFO2      vxge_mBIT(11)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE1_FIFO0      vxge_mBIT(13)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE1_FIFO1      vxge_mBIT(14)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE1_FIFO2      vxge_mBIT(15)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE2_FIFO0      vxge_mBIT(17)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE2_FIFO1      vxge_mBIT(18)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE2_FIFO2      vxge_mBIT(19)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE3_FIFO0      vxge_mBIT(21)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE3_FIFO1      vxge_mBIT(22)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE3_FIFO2      vxge_mBIT(23)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE4_FIFO0      vxge_mBIT(25)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE4_FIFO1      vxge_mBIT(26)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE4_FIFO2      vxge_mBIT(27)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE5_FIFO0      vxge_mBIT(29)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE5_FIFO1      vxge_mBIT(30)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE5_FIFO2      vxge_mBIT(31)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE6_FIFO0      vxge_mBIT(33)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE6_FIFO1      vxge_mBIT(34)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE6_FIFO2      vxge_mBIT(35)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE7_FIFO0      vxge_mBIT(37)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE7_FIFO1      vxge_mBIT(38)
-#define        VXGE_HW_KDFCCTL_CFG0_BYTE_MASK_BYTE7_FIFO2      vxge_mBIT(39)
-
-       u8      unused02268[0x02268-0x02228];
-
-/*0x02268*/    u64     stats_cfg;
-#define VXGE_HW_STATS_CFG_START_HOST_ADDR(val) vxge_vBIT(val, 0, 57)
-/*0x02270*/    u64     interrupt_cfg0;
-#define VXGE_HW_INTERRUPT_CFG0_MSIX_FOR_RXTI(val) vxge_vBIT(val, 1, 7)
-#define VXGE_HW_INTERRUPT_CFG0_GROUP0_MSIX_FOR_TXTI(val) vxge_vBIT(val, 9, 7)
-#define VXGE_HW_INTERRUPT_CFG0_GROUP1_MSIX_FOR_TXTI(val) vxge_vBIT(val, 17, 7)
-#define VXGE_HW_INTERRUPT_CFG0_GROUP2_MSIX_FOR_TXTI(val) vxge_vBIT(val, 25, 7)
-#define VXGE_HW_INTERRUPT_CFG0_GROUP3_MSIX_FOR_TXTI(val) vxge_vBIT(val, 33, 7)
-       u8      unused02280[0x02280-0x02278];
-
-/*0x02280*/    u64     interrupt_cfg2;
-#define VXGE_HW_INTERRUPT_CFG2_ALARM_MAP_TO_MSG(val) vxge_vBIT(val, 1, 7)
-/*0x02288*/    u64     one_shot_vect0_en;
-#define        VXGE_HW_ONE_SHOT_VECT0_EN_ONE_SHOT_VECT0_EN     vxge_mBIT(3)
-/*0x02290*/    u64     one_shot_vect1_en;
-#define        VXGE_HW_ONE_SHOT_VECT1_EN_ONE_SHOT_VECT1_EN     vxge_mBIT(3)
-/*0x02298*/    u64     one_shot_vect2_en;
-#define        VXGE_HW_ONE_SHOT_VECT2_EN_ONE_SHOT_VECT2_EN     vxge_mBIT(3)
-/*0x022a0*/    u64     one_shot_vect3_en;
-#define        VXGE_HW_ONE_SHOT_VECT3_EN_ONE_SHOT_VECT3_EN     vxge_mBIT(3)
-       u8      unused022b0[0x022b0-0x022a8];
-
-/*0x022b0*/    u64     pci_config_access_cfg1;
-#define VXGE_HW_PCI_CONFIG_ACCESS_CFG1_ADDRESS(val) vxge_vBIT(val, 0, 12)
-#define        VXGE_HW_PCI_CONFIG_ACCESS_CFG1_SEL_FUNC0        vxge_mBIT(15)
-/*0x022b8*/    u64     pci_config_access_cfg2;
-#define        VXGE_HW_PCI_CONFIG_ACCESS_CFG2_REQ      vxge_mBIT(0)
-/*0x022c0*/    u64     pci_config_access_status;
-#define        VXGE_HW_PCI_CONFIG_ACCESS_STATUS_ACCESS_ERR     vxge_mBIT(0)
-#define VXGE_HW_PCI_CONFIG_ACCESS_STATUS_DATA(val) vxge_vBIT(val, 32, 32)
-       u8      unused02300[0x02300-0x022c8];
-
-/*0x02300*/    u64     vpath_debug_stats0;
-#define VXGE_HW_VPATH_DEBUG_STATS0_INI_NUM_MWR_SENT(val) vxge_vBIT(val, 0, 32)
-/*0x02308*/    u64     vpath_debug_stats1;
-#define VXGE_HW_VPATH_DEBUG_STATS1_INI_NUM_MRD_SENT(val) vxge_vBIT(val, 0, 32)
-/*0x02310*/    u64     vpath_debug_stats2;
-#define VXGE_HW_VPATH_DEBUG_STATS2_INI_NUM_CPL_RCVD(val) vxge_vBIT(val, 0, 32)
-/*0x02318*/    u64     vpath_debug_stats3;
-#define VXGE_HW_VPATH_DEBUG_STATS3_INI_NUM_MWR_BYTE_SENT(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x02320*/    u64     vpath_debug_stats4;
-#define VXGE_HW_VPATH_DEBUG_STATS4_INI_NUM_CPL_BYTE_RCVD(val) \
-                                                       vxge_vBIT(val, 0, 64)
-/*0x02328*/    u64     vpath_debug_stats5;
-#define VXGE_HW_VPATH_DEBUG_STATS5_WRCRDTARB_XOFF(val) vxge_vBIT(val, 32, 32)
-/*0x02330*/    u64     vpath_debug_stats6;
-#define VXGE_HW_VPATH_DEBUG_STATS6_RDCRDTARB_XOFF(val) vxge_vBIT(val, 32, 32)
-/*0x02338*/    u64     vpath_genstats_count01;
-#define        VXGE_HW_VPATH_GENSTATS_COUNT01_PPIF_VPATH_GENSTATS_COUNT1(val) \
-                                                       vxge_vBIT(val, 0, 32)
-#define        VXGE_HW_VPATH_GENSTATS_COUNT01_PPIF_VPATH_GENSTATS_COUNT0(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x02340*/    u64     vpath_genstats_count23;
-#define        VXGE_HW_VPATH_GENSTATS_COUNT23_PPIF_VPATH_GENSTATS_COUNT3(val) \
-                                                       vxge_vBIT(val, 0, 32)
-#define        VXGE_HW_VPATH_GENSTATS_COUNT23_PPIF_VPATH_GENSTATS_COUNT2(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x02348*/    u64     vpath_genstats_count4;
-#define        VXGE_HW_VPATH_GENSTATS_COUNT4_PPIF_VPATH_GENSTATS_COUNT4(val) \
-                                                       vxge_vBIT(val, 32, 32)
-/*0x02350*/    u64     vpath_genstats_count5;
-#define        VXGE_HW_VPATH_GENSTATS_COUNT5_PPIF_VPATH_GENSTATS_COUNT5(val) \
-                                                       vxge_vBIT(val, 32, 32)
-       u8      unused02648[0x02648-0x02358];
-} __packed;
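/*
 * For reference: the 0xNNNN comments above give each field's hardware
 * offset, and the "u8 unusedNNNN[to - from]" pad arrays make the next
 * field land exactly at that offset; __packed keeps the compiler from
 * adding padding of its own.  The bit helpers number bits from the MSB
 * down (Titan convention), as defined near the top of vxge-reg.h (not
 * shown in this hunk):
 *
 *   #define vxge_mBIT(loc)          (0x8000000000000000ULL >> (loc))
 *   #define vxge_vBIT(val, loc, sz) (((u64)(val)) << (64 - (loc) - (sz)))
 *
 * so vxge_mBIT(3) is the mask 0x1000000000000000ULL, and
 * vxge_vBIT(val, 0, 17) places a 17-bit field in bits 63..47 of a
 * 64-bit register.
 */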
-
-#define VXGE_HW_EEPROM_SIZE    (0x01 << 11)
-
-/* Capability lists */
-#define  VXGE_HW_PCI_EXP_LNKCAP_LNK_SPEED    0xf  /* Supported Link speeds */
-#define  VXGE_HW_PCI_EXP_LNKCAP_LNK_WIDTH    0x3f0 /* Supported Link widths. */
-#define  VXGE_HW_PCI_EXP_LNKCAP_LW_RES       0x0  /* Reserved. */
-
-#endif
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-traffic.c b/drivers/net/ethernet/neterion/vxge/vxge-traffic.c
deleted file mode 100644 (file)
index ee16497..0000000
+++ /dev/null
@@ -1,2428 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-traffic.c: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
- *                 Virtualized Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#include <linux/etherdevice.h>
-#include <linux/io-64-nonatomic-lo-hi.h>
-#include <linux/prefetch.h>
-
-#include "vxge-traffic.h"
-#include "vxge-config.h"
-#include "vxge-main.h"
-
-/*
- * vxge_hw_vpath_intr_enable - Enable vpath interrupts.
- * @vp: Virtual Path handle.
- *
- * Enable vpath interrupts. The function is to be executed last in the
- * vpath initialization sequence.
- *
- * See also: vxge_hw_vpath_intr_disable()
- */
-enum vxge_hw_status vxge_hw_vpath_intr_enable(struct __vxge_hw_vpath_handle *vp)
-{
-       struct __vxge_hw_virtualpath *vpath;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       vpath = vp->vpath;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto exit;
-       }
-
-       vp_reg = vpath->vp_reg;
-
-       writeq(VXGE_HW_INTR_MASK_ALL, &vp_reg->kdfcctl_errors_reg);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->general_errors_reg);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->pci_config_errors_reg);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->mrpcim_to_vpath_alarm_reg);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->srpcim_to_vpath_alarm_reg);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->vpath_ppif_int_status);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->srpcim_msg_to_vpath_reg);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->vpath_pcipif_int_status);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->prc_alarm_reg);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->wrdma_alarm_status);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->asic_ntwk_vp_err_reg);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->xgmac_vp_int_status);
-
-       readq(&vp_reg->vpath_general_int_status);
-
-       /* Mask unwanted interrupts */
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->vpath_pcipif_int_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->srpcim_msg_to_vpath_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->srpcim_to_vpath_alarm_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->mrpcim_to_vpath_alarm_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->pci_config_errors_mask);
-
-       /* Unmask the individual interrupts */
-
-       writeq((u32)vxge_bVALn((VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO1_OVRFLOW|
-               VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO2_OVRFLOW|
-               VXGE_HW_GENERAL_ERRORS_REG_STATSB_DROP_TIMEOUT_REQ|
-               VXGE_HW_GENERAL_ERRORS_REG_STATSB_PIF_CHAIN_ERR), 0, 32),
-               &vp_reg->general_errors_mask);
-
-       __vxge_hw_pio_mem_write32_upper(
-               (u32)vxge_bVALn((VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_OVRWR|
-               VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_OVRWR|
-               VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_POISON|
-               VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_POISON|
-               VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_DMA_ERR|
-               VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_DMA_ERR), 0, 32),
-               &vp_reg->kdfcctl_errors_mask);
-
-       __vxge_hw_pio_mem_write32_upper(0, &vp_reg->vpath_ppif_int_mask);
-
-       __vxge_hw_pio_mem_write32_upper(
-               (u32)vxge_bVALn(VXGE_HW_PRC_ALARM_REG_PRC_RING_BUMP, 0, 32),
-               &vp_reg->prc_alarm_mask);
-
-       __vxge_hw_pio_mem_write32_upper(0, &vp_reg->wrdma_alarm_mask);
-       __vxge_hw_pio_mem_write32_upper(0, &vp_reg->xgmac_vp_int_mask);
-
-       if (vpath->hldev->first_vp_id != vpath->vp_id)
-               __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->asic_ntwk_vp_err_mask);
-       else
-               __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn((
-               VXGE_HW_ASIC_NTWK_VP_ERR_REG_XMACJ_NTWK_REAFFIRMED_FAULT |
-               VXGE_HW_ASIC_NTWK_VP_ERR_REG_XMACJ_NTWK_REAFFIRMED_OK), 0, 32),
-               &vp_reg->asic_ntwk_vp_err_mask);
-
-       __vxge_hw_pio_mem_write32_upper(0,
-               &vp_reg->vpath_general_int_mask);
-exit:
-       return status;
-
-}
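/*
 * A minimal usage sketch (the example_ function name is illustrative,
 * and @vp is assumed to come from vxge_hw_vpath_open() in
 * vxge-config.c, not shown here): interrupts go on as the last step of
 * vpath bring-up and off again before the vpath is closed.
 */
static enum vxge_hw_status example_vpath_irq_bringup(
		struct __vxge_hw_vpath_handle *vp)
{
	enum vxge_hw_status status;

	status = vxge_hw_vpath_intr_enable(vp);	/* last init step */
	if (status != VXGE_HW_OK)
		return status;

	/* ... traffic runs; on teardown: */
	return vxge_hw_vpath_intr_disable(vp);
}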
-
-/*
- * vxge_hw_vpath_intr_disable - Disable vpath interrupts.
- * @vp: Virtual Path handle.
- *
- * Disable vpath interrupts. The function is to be executed as part of
- * the vpath shutdown sequence, before the vpath is closed.
- *
- * See also: vxge_hw_vpath_intr_enable()
- */
-enum vxge_hw_status vxge_hw_vpath_intr_disable(
-                       struct __vxge_hw_vpath_handle *vp)
-{
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       vpath = vp->vpath;
-
-       if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
-               status = VXGE_HW_ERR_VPATH_NOT_OPEN;
-               goto exit;
-       }
-       vp_reg = vpath->vp_reg;
-
-       __vxge_hw_pio_mem_write32_upper(
-               (u32)VXGE_HW_INTR_MASK_ALL,
-               &vp_reg->vpath_general_int_mask);
-
-       writeq(VXGE_HW_INTR_MASK_ALL, &vp_reg->kdfcctl_errors_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->general_errors_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->pci_config_errors_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->mrpcim_to_vpath_alarm_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->srpcim_to_vpath_alarm_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->vpath_ppif_int_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->srpcim_msg_to_vpath_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->vpath_pcipif_int_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->wrdma_alarm_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->prc_alarm_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->xgmac_vp_int_mask);
-
-       __vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
-                       &vp_reg->asic_ntwk_vp_err_mask);
-
-exit:
-       return status;
-}
-
-void vxge_hw_vpath_tti_ci_set(struct __vxge_hw_fifo *fifo)
-{
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-       struct vxge_hw_vp_config *config;
-       u64 val64;
-
-       if (fifo->config->enable != VXGE_HW_FIFO_ENABLE)
-               return;
-
-       vp_reg = fifo->vp_reg;
-       config = container_of(fifo->config, struct vxge_hw_vp_config, fifo);
-
-       if (config->tti.timer_ci_en != VXGE_HW_TIM_TIMER_CI_ENABLE) {
-               config->tti.timer_ci_en = VXGE_HW_TIM_TIMER_CI_ENABLE;
-               val64 = readq(&vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_TX]);
-               val64 |= VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI;
-               fifo->tim_tti_cfg1_saved = val64;
-               writeq(val64, &vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_TX]);
-       }
-}
-
-void vxge_hw_vpath_dynamic_rti_ci_set(struct __vxge_hw_ring *ring)
-{
-       u64 val64 = ring->tim_rti_cfg1_saved;
-
-       val64 |= VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI;
-       ring->tim_rti_cfg1_saved = val64;
-       writeq(val64, &ring->vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_RX]);
-}
-
-void vxge_hw_vpath_dynamic_tti_rtimer_set(struct __vxge_hw_fifo *fifo)
-{
-       u64 val64 = fifo->tim_tti_cfg3_saved;
-       u64 timer = (fifo->rtimer * 1000) / 272;
-
-       val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(0x3ffffff);
-       if (timer)
-               val64 |= VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(timer) |
-                       VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_EVENT_SF(5);
-
-       writeq(val64, &fifo->vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_TX]);
-       /* tti_cfg3_saved is not updated again because it is
-        * initialized at one place only - init time.
-        */
-}
-
-void vxge_hw_vpath_dynamic_rti_rtimer_set(struct __vxge_hw_ring *ring)
-{
-       u64 val64 = ring->tim_rti_cfg3_saved;
-       u64 timer = (ring->rtimer * 1000) / 272;
-
-       val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(0x3ffffff);
-       if (timer)
-               val64 |= VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(timer) |
-                       VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_EVENT_SF(4);
-
-       writeq(val64, &ring->vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_RX]);
-       /* rti_cfg3_saved is not updated again because it is
-        * initialized at one place only - init time.
-        */
-}
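/*
 * Worked example for the conversion used by both helpers above,
 * assuming @rtimer is in microseconds and one TIM restart-timer tick
 * is ~272 ns (the divisor used here): rtimer = 4 us gives
 * (4 * 1000) / 272 = 14 ticks, while rtimer = 0 leaves the RTIMER
 * field cleared and the timer disabled.
 */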
-
-/**
- * vxge_hw_channel_msix_mask - Mask MSIX Vector.
- * @channel: Channel for rx or tx handle
- * @msix_id:  MSIX ID
- *
- * The function masks the msix interrupt for the given msix_id.
- */
-void vxge_hw_channel_msix_mask(struct __vxge_hw_channel *channel, int msix_id)
-{
-
-       __vxge_hw_pio_mem_write32_upper(
-               (u32)vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
-               &channel->common_reg->set_msix_mask_vect[msix_id%4]);
-}
-
-/**
- * vxge_hw_channel_msix_unmask - Unmask the MSIX Vector.
- * @channel: Channel for rx or tx handle
- * @msix_id:  MSIX ID
- *
- * The function unmasks the msix interrupt for the given msix_id.
- */
-void
-vxge_hw_channel_msix_unmask(struct __vxge_hw_channel *channel, int msix_id)
-{
-
-       __vxge_hw_pio_mem_write32_upper(
-               (u32)vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
-               &channel->common_reg->clear_msix_mask_vect[msix_id%4]);
-}
-
-/**
- * vxge_hw_channel_msix_clear - Clear the MSIX Vector (one-shot mode).
- * @channel: Channel for rx or tx handle
- * @msix_id:  MSIX ID
- *
- * The function clears the msix one-shot vector for the given msix_id,
- * re-enabling the interrupt when the device is configured in MSIX
- * one-shot mode.
- */
-void vxge_hw_channel_msix_clear(struct __vxge_hw_channel *channel, int msix_id)
-{
-       __vxge_hw_pio_mem_write32_upper(
-               (u32) vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
-               &channel->common_reg->clr_msix_one_shot_vec[msix_id % 4]);
-}
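/*
 * Worked example for the msix_id decoding shared by the three helpers
 * above: msix_id = 9 selects entry [9 % 4 == 1] of the vector-mask
 * register array and bit vxge_mBIT(9 >> 2 == 2), i.e. bit 2 counting
 * from the MSB; vxge_bVALn(..., 0, 32) then extracts the upper 32 bits
 * of that mask for the upper-half PIO write.
 */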
-
-/**
- * vxge_hw_device_set_intr_type - Updates the configuration
- *             with new interrupt type.
- * @hldev: HW device handle.
- * @intr_mode: New interrupt type
- */
-u32 vxge_hw_device_set_intr_type(struct __vxge_hw_device *hldev, u32 intr_mode)
-{
-
-       if ((intr_mode != VXGE_HW_INTR_MODE_IRQLINE) &&
-          (intr_mode != VXGE_HW_INTR_MODE_MSIX) &&
-          (intr_mode != VXGE_HW_INTR_MODE_MSIX_ONE_SHOT) &&
-          (intr_mode != VXGE_HW_INTR_MODE_DEF))
-               intr_mode = VXGE_HW_INTR_MODE_IRQLINE;
-
-       hldev->config.intr_mode = intr_mode;
-       return intr_mode;
-}
-
-/**
- * vxge_hw_device_intr_enable - Enable interrupts.
- * @hldev: HW device handle.
- *
- * Enable Titan interrupts. The function is to be executed last in the
- * Titan initialization sequence.
- *
- * See also: vxge_hw_device_intr_disable()
- */
-void vxge_hw_device_intr_enable(struct __vxge_hw_device *hldev)
-{
-       u32 i;
-       u64 val64;
-       u32 val32;
-
-       vxge_hw_device_mask_all(hldev);
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-
-               if (!(hldev->vpaths_deployed & vxge_mBIT(i)))
-                       continue;
-
-               vxge_hw_vpath_intr_enable(
-                       VXGE_HW_VIRTUAL_PATH_HANDLE(&hldev->virtual_paths[i]));
-       }
-
-       if (hldev->config.intr_mode == VXGE_HW_INTR_MODE_IRQLINE) {
-               val64 = hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_TX] |
-                       hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_RX];
-
-               if (val64 != 0) {
-                       writeq(val64, &hldev->common_reg->tim_int_status0);
-
-                       writeq(~val64, &hldev->common_reg->tim_int_mask0);
-               }
-
-               val32 = hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_TX] |
-                       hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_RX];
-
-               if (val32 != 0) {
-                       __vxge_hw_pio_mem_write32_upper(val32,
-                                       &hldev->common_reg->tim_int_status1);
-
-                       __vxge_hw_pio_mem_write32_upper(~val32,
-                                       &hldev->common_reg->tim_int_mask1);
-               }
-       }
-
-       val64 = readq(&hldev->common_reg->titan_general_int_status);
-
-       vxge_hw_device_unmask_all(hldev);
-}
-
-/**
- * vxge_hw_device_intr_disable - Disable Titan interrupts.
- * @hldev: HW device handle.
- *
- * Disable Titan interrupts.
- *
- * See also: vxge_hw_device_intr_enable()
- */
-void vxge_hw_device_intr_disable(struct __vxge_hw_device *hldev)
-{
-       u32 i;
-
-       vxge_hw_device_mask_all(hldev);
-
-       /* mask all the tim interrupts */
-       writeq(VXGE_HW_INTR_MASK_ALL, &hldev->common_reg->tim_int_mask0);
-       __vxge_hw_pio_mem_write32_upper(VXGE_HW_DEFAULT_32,
-               &hldev->common_reg->tim_int_mask1);
-
-       for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-
-               if (!(hldev->vpaths_deployed & vxge_mBIT(i)))
-                       continue;
-
-               vxge_hw_vpath_intr_disable(
-                       VXGE_HW_VIRTUAL_PATH_HANDLE(&hldev->virtual_paths[i]));
-       }
-}
-
-/**
- * vxge_hw_device_mask_all - Mask all device interrupts.
- * @hldev: HW device handle.
- *
- * Mask all device interrupts.
- *
- * See also: vxge_hw_device_unmask_all()
- */
-void vxge_hw_device_mask_all(struct __vxge_hw_device *hldev)
-{
-       u64 val64;
-
-       val64 = VXGE_HW_TITAN_MASK_ALL_INT_ALARM |
-               VXGE_HW_TITAN_MASK_ALL_INT_TRAFFIC;
-
-       __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(val64, 0, 32),
-                               &hldev->common_reg->titan_mask_all_int);
-}
-
-/**
- * vxge_hw_device_unmask_all - Unmask all device interrupts.
- * @hldev: HW device handle.
- *
- * Unmask all device interrupts.
- *
- * See also: vxge_hw_device_mask_all()
- */
-void vxge_hw_device_unmask_all(struct __vxge_hw_device *hldev)
-{
-       u64 val64 = 0;
-
-       if (hldev->config.intr_mode == VXGE_HW_INTR_MODE_IRQLINE)
-               val64 =  VXGE_HW_TITAN_MASK_ALL_INT_TRAFFIC;
-
-       __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(val64, 0, 32),
-                       &hldev->common_reg->titan_mask_all_int);
-}
-
-/**
- * vxge_hw_device_flush_io - Flush io writes.
- * @hldev: HW device handle.
- *
- * The function performs a read operation to flush io writes.
- *
- * Returns: void
- */
-void vxge_hw_device_flush_io(struct __vxge_hw_device *hldev)
-{
-       readl(&hldev->common_reg->titan_general_int_status);
-}
-
-/**
- * __vxge_hw_device_handle_error - Handle error
- * @hldev: HW device
- * @vp_id: Vpath Id
- * @type: Error type. Please see enum vxge_hw_event{}
- *
- * Handle error.
- */
-static enum vxge_hw_status
-__vxge_hw_device_handle_error(struct __vxge_hw_device *hldev, u32 vp_id,
-                             enum vxge_hw_event type)
-{
-       switch (type) {
-       case VXGE_HW_EVENT_UNKNOWN:
-               break;
-       case VXGE_HW_EVENT_RESET_START:
-       case VXGE_HW_EVENT_RESET_COMPLETE:
-       case VXGE_HW_EVENT_LINK_DOWN:
-       case VXGE_HW_EVENT_LINK_UP:
-               goto out;
-       case VXGE_HW_EVENT_ALARM_CLEARED:
-               goto out;
-       case VXGE_HW_EVENT_ECCERR:
-       case VXGE_HW_EVENT_MRPCIM_ECCERR:
-               goto out;
-       case VXGE_HW_EVENT_FIFO_ERR:
-       case VXGE_HW_EVENT_VPATH_ERR:
-       case VXGE_HW_EVENT_CRITICAL_ERR:
-       case VXGE_HW_EVENT_SERR:
-               break;
-       case VXGE_HW_EVENT_SRPCIM_SERR:
-       case VXGE_HW_EVENT_MRPCIM_SERR:
-               goto out;
-       case VXGE_HW_EVENT_SLOT_FREEZE:
-               break;
-       default:
-               vxge_assert(0);
-               goto out;
-       }
-
-       /* notify driver */
-       if (hldev->uld_callbacks->crit_err)
-               hldev->uld_callbacks->crit_err(hldev,
-                       type, vp_id);
-out:
-
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_device_handle_link_down_ind
- * @hldev: HW device handle.
- *
- * Link down indication handler. The function is invoked by HW when
- * Titan indicates that the link is down.
- */
-static enum vxge_hw_status
-__vxge_hw_device_handle_link_down_ind(struct __vxge_hw_device *hldev)
-{
-       /*
-        * If the previous link state is already down, return.
-        */
-       if (hldev->link_state == VXGE_HW_LINK_DOWN)
-               goto exit;
-
-       hldev->link_state = VXGE_HW_LINK_DOWN;
-
-       /* notify driver */
-       if (hldev->uld_callbacks->link_down)
-               hldev->uld_callbacks->link_down(hldev);
-exit:
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_device_handle_link_up_ind
- * @hldev: HW device handle.
- *
- * Link up indication handler. The function is invoked by HW when
- * Titan indicates that the link is up for a programmable amount of time.
- */
-static enum vxge_hw_status
-__vxge_hw_device_handle_link_up_ind(struct __vxge_hw_device *hldev)
-{
-       /*
-        * If the previous link state is already up, return.
-        */
-       if (hldev->link_state == VXGE_HW_LINK_UP)
-               goto exit;
-
-       hldev->link_state = VXGE_HW_LINK_UP;
-
-       /* notify driver */
-       if (hldev->uld_callbacks->link_up)
-               hldev->uld_callbacks->link_up(hldev);
-exit:
-       return VXGE_HW_OK;
-}
-
-/*
- * __vxge_hw_vpath_alarm_process - Process Alarms.
- * @vpath: Virtual Path.
- * @skip_alarms: Do not clear the alarms
- *
- * Process vpath alarms.
- *
- */
-static enum vxge_hw_status
-__vxge_hw_vpath_alarm_process(struct __vxge_hw_virtualpath *vpath,
-                             u32 skip_alarms)
-{
-       u64 val64;
-       u64 alarm_status;
-       u64 pic_status;
-       struct __vxge_hw_device *hldev = NULL;
-       enum vxge_hw_event alarm_event = VXGE_HW_EVENT_UNKNOWN;
-       u64 mask64;
-       struct vxge_hw_vpath_stats_sw_info *sw_stats;
-       struct vxge_hw_vpath_reg __iomem *vp_reg;
-
-       if (vpath == NULL) {
-               alarm_event = VXGE_HW_SET_LEVEL(VXGE_HW_EVENT_UNKNOWN,
-                       alarm_event);
-               goto out2;
-       }
-
-       hldev = vpath->hldev;
-       vp_reg = vpath->vp_reg;
-       alarm_status = readq(&vp_reg->vpath_general_int_status);
-
-       if (alarm_status == VXGE_HW_ALL_FOXES) {
-               alarm_event = VXGE_HW_SET_LEVEL(VXGE_HW_EVENT_SLOT_FREEZE,
-                       alarm_event);
-               goto out;
-       }
-
-       sw_stats = vpath->sw_stats;
-
-       if (alarm_status & ~(
-               VXGE_HW_VPATH_GENERAL_INT_STATUS_PIC_INT |
-               VXGE_HW_VPATH_GENERAL_INT_STATUS_PCI_INT |
-               VXGE_HW_VPATH_GENERAL_INT_STATUS_WRDMA_INT |
-               VXGE_HW_VPATH_GENERAL_INT_STATUS_XMAC_INT)) {
-               sw_stats->error_stats.unknown_alarms++;
-
-               alarm_event = VXGE_HW_SET_LEVEL(VXGE_HW_EVENT_UNKNOWN,
-                       alarm_event);
-               goto out;
-       }
-
-       if (alarm_status & VXGE_HW_VPATH_GENERAL_INT_STATUS_XMAC_INT) {
-
-               val64 = readq(&vp_reg->xgmac_vp_int_status);
-
-               if (val64 &
-               VXGE_HW_XGMAC_VP_INT_STATUS_ASIC_NTWK_VP_ERR_ASIC_NTWK_VP_INT) {
-
-                       val64 = readq(&vp_reg->asic_ntwk_vp_err_reg);
-
-                       if (((val64 &
-                             VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT) &&
-                            (!(val64 &
-                               VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK))) ||
-                           ((val64 &
-                            VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT_OCCURR) &&
-                            (!(val64 &
-                               VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK_OCCURR)
-                                    ))) {
-                               sw_stats->error_stats.network_sustained_fault++;
-
-                               writeq(
-                               VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT,
-                                       &vp_reg->asic_ntwk_vp_err_mask);
-
-                               __vxge_hw_device_handle_link_down_ind(hldev);
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_LINK_DOWN, alarm_event);
-                       }
-
-                       if (((val64 &
-                             VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK) &&
-                            (!(val64 &
-                               VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT))) ||
-                           ((val64 &
-                             VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK_OCCURR) &&
-                            (!(val64 &
-                               VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT_OCCURR)
-                                    ))) {
-
-                               sw_stats->error_stats.network_sustained_ok++;
-
-                               writeq(
-                               VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK,
-                                       &vp_reg->asic_ntwk_vp_err_mask);
-
-                               __vxge_hw_device_handle_link_up_ind(hldev);
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_LINK_UP, alarm_event);
-                       }
-
-                       writeq(VXGE_HW_INTR_MASK_ALL,
-                               &vp_reg->asic_ntwk_vp_err_reg);
-
-                       alarm_event = VXGE_HW_SET_LEVEL(
-                               VXGE_HW_EVENT_ALARM_CLEARED, alarm_event);
-
-                       if (skip_alarms)
-                               return VXGE_HW_OK;
-               }
-       }
-
-       if (alarm_status & VXGE_HW_VPATH_GENERAL_INT_STATUS_PIC_INT) {
-
-               pic_status = readq(&vp_reg->vpath_ppif_int_status);
-
-               if (pic_status &
-                   VXGE_HW_VPATH_PPIF_INT_STATUS_GENERAL_ERRORS_GENERAL_INT) {
-
-                       val64 = readq(&vp_reg->general_errors_reg);
-                       mask64 = readq(&vp_reg->general_errors_mask);
-
-                       if ((val64 &
-                               VXGE_HW_GENERAL_ERRORS_REG_INI_SERR_DET) &
-                               ~mask64) {
-                               sw_stats->error_stats.ini_serr_det++;
-
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_SERR, alarm_event);
-                       }
-
-                       if ((val64 &
-                           VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO0_OVRFLOW) &
-                               ~mask64) {
-                               sw_stats->error_stats.dblgen_fifo0_overflow++;
-
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_FIFO_ERR, alarm_event);
-                       }
-
-                       if ((val64 &
-                           VXGE_HW_GENERAL_ERRORS_REG_STATSB_PIF_CHAIN_ERR) &
-                               ~mask64)
-                               sw_stats->error_stats.statsb_pif_chain_error++;
-
-                       if ((val64 &
-                          VXGE_HW_GENERAL_ERRORS_REG_STATSB_DROP_TIMEOUT_REQ) &
-                               ~mask64)
-                               sw_stats->error_stats.statsb_drop_timeout++;
-
-                       if ((val64 &
-                               VXGE_HW_GENERAL_ERRORS_REG_TGT_ILLEGAL_ACCESS) &
-                               ~mask64)
-                               sw_stats->error_stats.target_illegal_access++;
-
-                       if (!skip_alarms) {
-                               writeq(VXGE_HW_INTR_MASK_ALL,
-                                       &vp_reg->general_errors_reg);
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_ALARM_CLEARED,
-                                       alarm_event);
-                       }
-               }
-
-               if (pic_status &
-                   VXGE_HW_VPATH_PPIF_INT_STATUS_KDFCCTL_ERRORS_KDFCCTL_INT) {
-
-                       val64 = readq(&vp_reg->kdfcctl_errors_reg);
-                       mask64 = readq(&vp_reg->kdfcctl_errors_mask);
-
-                       if ((val64 &
-                           VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_OVRWR) &
-                               ~mask64) {
-                               sw_stats->error_stats.kdfcctl_fifo0_overwrite++;
-
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_FIFO_ERR,
-                                       alarm_event);
-                       }
-
-                       if ((val64 &
-                           VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_POISON) &
-                               ~mask64) {
-                               sw_stats->error_stats.kdfcctl_fifo0_poison++;
-
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_FIFO_ERR,
-                                       alarm_event);
-                       }
-
-                       if ((val64 &
-                           VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_DMA_ERR) &
-                               ~mask64) {
-                               sw_stats->error_stats.kdfcctl_fifo0_dma_error++;
-
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_FIFO_ERR,
-                                       alarm_event);
-                       }
-
-                       if (!skip_alarms) {
-                               writeq(VXGE_HW_INTR_MASK_ALL,
-                                       &vp_reg->kdfcctl_errors_reg);
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_ALARM_CLEARED,
-                                       alarm_event);
-                       }
-               }
-
-       }
-
-       if (alarm_status & VXGE_HW_VPATH_GENERAL_INT_STATUS_WRDMA_INT) {
-
-               val64 = readq(&vp_reg->wrdma_alarm_status);
-
-               if (val64 & VXGE_HW_WRDMA_ALARM_STATUS_PRC_ALARM_PRC_INT) {
-
-                       val64 = readq(&vp_reg->prc_alarm_reg);
-                       mask64 = readq(&vp_reg->prc_alarm_mask);
-
-                       if ((val64 & VXGE_HW_PRC_ALARM_REG_PRC_RING_BUMP)&
-                               ~mask64)
-                               sw_stats->error_stats.prc_ring_bumps++;
-
-                       if ((val64 & VXGE_HW_PRC_ALARM_REG_PRC_RXDCM_SC_ERR) &
-                               ~mask64) {
-                               sw_stats->error_stats.prc_rxdcm_sc_err++;
-
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_VPATH_ERR,
-                                       alarm_event);
-                       }
-
-                       if ((val64 & VXGE_HW_PRC_ALARM_REG_PRC_RXDCM_SC_ABORT)
-                               & ~mask64) {
-                               sw_stats->error_stats.prc_rxdcm_sc_abort++;
-
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                               VXGE_HW_EVENT_VPATH_ERR,
-                                               alarm_event);
-                       }
-
-                       if ((val64 & VXGE_HW_PRC_ALARM_REG_PRC_QUANTA_SIZE_ERR)
-                                & ~mask64) {
-                               sw_stats->error_stats.prc_quanta_size_err++;
-
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                       VXGE_HW_EVENT_VPATH_ERR,
-                                       alarm_event);
-                       }
-
-                       if (!skip_alarms) {
-                               writeq(VXGE_HW_INTR_MASK_ALL,
-                                       &vp_reg->prc_alarm_reg);
-                               alarm_event = VXGE_HW_SET_LEVEL(
-                                               VXGE_HW_EVENT_ALARM_CLEARED,
-                                               alarm_event);
-                       }
-               }
-       }
-out:
-       hldev->stats.sw_dev_err_stats.vpath_alarms++;
-out2:
-       if ((alarm_event == VXGE_HW_EVENT_ALARM_CLEARED) ||
-               (alarm_event == VXGE_HW_EVENT_UNKNOWN))
-               return VXGE_HW_OK;
-
-       __vxge_hw_device_handle_error(hldev, vpath->vp_id, alarm_event);
-
-       if (alarm_event == VXGE_HW_EVENT_SERR)
-               return VXGE_HW_ERR_CRITICAL;
-
-       return (alarm_event == VXGE_HW_EVENT_SLOT_FREEZE) ?
-               VXGE_HW_ERR_SLOT_FREEZE :
-               (alarm_event == VXGE_HW_EVENT_FIFO_ERR) ? VXGE_HW_ERR_FIFO :
-               VXGE_HW_ERR_VPATH;
-}
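/*
 * Two notes on the recurring patterns above: (val64 & FLAG) & ~mask64
 * is non-zero only when an alarm bit is set in the status register and
 * clear in its mask register, so masked alarms are neither counted nor
 * escalated; and VXGE_HW_SET_LEVEL() (defined in vxge-traffic.h, not
 * shown here) keeps the larger of the two event codes, so alarm_event
 * ratchets up to the most severe event seen.
 */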
-
-/**
- * vxge_hw_device_begin_irq - Begin IRQ processing.
- * @hldev: HW device handle.
- * @skip_alarms: Do not clear the alarms
- * @reason: "Reason" for the interrupt, the value of Titan's
- *     general_int_status register.
- *
- * The function performs two actions. It first checks whether (on a shared
- * IRQ line) the interrupt was raised by the device, and then it retrieves
- * the interrupt "reason" and processes any pending vpath alarms.
- *
- * Note:
- * vxge_hw_device_begin_irq() does not flush MMIO writes through the
- * bridge. Therefore, two back-to-back interrupts are potentially possible.
- *
- * Returns: VXGE_HW_ERR_WRONG_IRQ if the interrupt is not "ours" (note that
- * in this case the device remains enabled). Otherwise, the 64bit general
- * adapter status is returned through @reason.
- */
-enum vxge_hw_status vxge_hw_device_begin_irq(struct __vxge_hw_device *hldev,
-                                            u32 skip_alarms, u64 *reason)
-{
-       u32 i;
-       u64 val64;
-       u64 adapter_status;
-       u64 vpath_mask;
-       enum vxge_hw_status ret = VXGE_HW_OK;
-
-       val64 = readq(&hldev->common_reg->titan_general_int_status);
-
-       if (unlikely(!val64)) {
-               /* not Titan interrupt  */
-               *reason = 0;
-               ret = VXGE_HW_ERR_WRONG_IRQ;
-               goto exit;
-       }
-
-       if (unlikely(val64 == VXGE_HW_ALL_FOXES)) {
-
-               adapter_status = readq(&hldev->common_reg->adapter_status);
-
-               if (adapter_status == VXGE_HW_ALL_FOXES) {
-
-                       __vxge_hw_device_handle_error(hldev,
-                               NULL_VPID, VXGE_HW_EVENT_SLOT_FREEZE);
-                       *reason = 0;
-                       ret = VXGE_HW_ERR_SLOT_FREEZE;
-                       goto exit;
-               }
-       }
-
-       hldev->stats.sw_dev_info_stats.total_intr_cnt++;
-
-       *reason = val64;
-
-       vpath_mask = hldev->vpaths_deployed >>
-                               (64 - VXGE_HW_MAX_VIRTUAL_PATHS);
-
-       if (val64 &
-           VXGE_HW_TITAN_GENERAL_INT_STATUS_VPATH_TRAFFIC_INT(vpath_mask)) {
-               hldev->stats.sw_dev_info_stats.traffic_intr_cnt++;
-
-               return VXGE_HW_OK;
-       }
-
-       hldev->stats.sw_dev_info_stats.not_traffic_intr_cnt++;
-
-       if (unlikely(val64 &
-                       VXGE_HW_TITAN_GENERAL_INT_STATUS_VPATH_ALARM_INT)) {
-
-               enum vxge_hw_status error_level = VXGE_HW_OK;
-
-               hldev->stats.sw_dev_err_stats.vpath_alarms++;
-
-               for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
-
-                       if (!(hldev->vpaths_deployed & vxge_mBIT(i)))
-                               continue;
-
-                       ret = __vxge_hw_vpath_alarm_process(
-                               &hldev->virtual_paths[i], skip_alarms);
-
-                       error_level = VXGE_HW_SET_LEVEL(ret, error_level);
-
-                       if (unlikely((ret == VXGE_HW_ERR_CRITICAL) ||
-                               (ret == VXGE_HW_ERR_SLOT_FREEZE)))
-                               break;
-               }
-
-               ret = error_level;
-       }
-exit:
-       return ret;
-}
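/*
 * A minimal INTA-mode ISR sketch, loosely modeled on the real handler
 * in vxge-main.c (names here are illustrative; assumes
 * <linux/interrupt.h>):
 */
static irqreturn_t example_isr(int irq, void *dev_id)
{
	struct __vxge_hw_device *hldev = dev_id;
	u64 reason;

	if (vxge_hw_device_begin_irq(hldev, 0, &reason) ==
	    VXGE_HW_ERR_WRONG_IRQ)
		return IRQ_NONE;	/* shared line, not our device */

	if (reason) {
		vxge_hw_device_mask_all(hldev);
		/* ... schedule NAPI; the poll routine acknowledges the
		 * Tx/Rx condition via vxge_hw_device_clear_tx_rx() and
		 * finishes with vxge_hw_device_unmask_all() ... */
	}
	return IRQ_HANDLED;
}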
-
-/**
- * vxge_hw_device_clear_tx_rx - Acknowledge (that is, clear) the
- * condition that has caused the Tx and Rx interrupt.
- * @hldev: HW device.
- *
- * Acknowledge (that is, clear) the condition that has caused
- * the Tx and Rx interrupt.
- * See also: vxge_hw_device_begin_irq(),
- * vxge_hw_device_mask_tx_rx(), vxge_hw_device_unmask_tx_rx().
- */
-void vxge_hw_device_clear_tx_rx(struct __vxge_hw_device *hldev)
-{
-
-       if ((hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_TX] != 0) ||
-          (hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_RX] != 0)) {
-               writeq((hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_TX] |
-                                hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_RX]),
-                               &hldev->common_reg->tim_int_status0);
-       }
-
-       if ((hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_TX] != 0) ||
-          (hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_RX] != 0)) {
-               __vxge_hw_pio_mem_write32_upper(
-                               (hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_TX] |
-                                hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_RX]),
-                               &hldev->common_reg->tim_int_status1);
-       }
-}
-
-/*
- * vxge_hw_channel_dtr_alloc - Allocate a dtr from the channel
- * @channel: Channel
- * @dtrh: Buffer to return the DTR pointer
- *
- * Allocates a dtr from the reserve array. If the reserve array is empty,
- * it swaps the reserve and free arrays.
- *
- */
-static enum vxge_hw_status
-vxge_hw_channel_dtr_alloc(struct __vxge_hw_channel *channel, void **dtrh)
-{
-       if (channel->reserve_ptr - channel->reserve_top > 0) {
-_alloc_after_swap:
-               *dtrh = channel->reserve_arr[--channel->reserve_ptr];
-
-               return VXGE_HW_OK;
-       }
-
-       /* switch between empty and full arrays */
-
-       /* the idea behind such a design is that by keeping the free and
-        * reserve arrays separate we basically separate the irq and non-irq
-        * parts, i.e. no additional locking is needed when we free a resource */
-
-       if (channel->length - channel->free_ptr > 0) {
-               swap(channel->reserve_arr, channel->free_arr);
-               channel->reserve_ptr = channel->length;
-               channel->reserve_top = channel->free_ptr;
-               channel->free_ptr = channel->length;
-
-               channel->stats->reserve_free_swaps_cnt++;
-
-               goto _alloc_after_swap;
-       }
-
-       channel->stats->full_cnt++;
-
-       *dtrh = NULL;
-       return VXGE_HW_INF_OUT_OF_DESCRIPTORS;
-}
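/*
 * Worked example of the swap above, with channel->length = 4: after
 * three frees, free_ptr = 1 (free_arr[3..1] hold recycled DTRs) while
 * the reserve array is empty (reserve_ptr == reserve_top).  The swap
 * turns the old free array into the reserve array with reserve_ptr = 4
 * and reserve_top = 1, so exactly 4 - 1 = 3 DTRs can be popped - the
 * same count vxge_hw_channel_dtr_count() reports below.
 */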
-
-/*
- * vxge_hw_channel_dtr_post - Post a dtr to the channel
- * @channel: Channel
- * @dtrh: DTR pointer
- *
- * Posts a dtr to the work array.
- *
- */
-static void
-vxge_hw_channel_dtr_post(struct __vxge_hw_channel *channel, void *dtrh)
-{
-       vxge_assert(channel->work_arr[channel->post_index] == NULL);
-
-       channel->work_arr[channel->post_index++] = dtrh;
-
-       /* wrap-around */
-       if (channel->post_index == channel->length)
-               channel->post_index = 0;
-}
-
-/*
- * vxge_hw_channel_dtr_try_complete - Returns next completed dtr
- * @channel: Channel
- * @dtr: Buffer to return the next completed DTR pointer
- *
- * Returns the next completed dtr without removing it from the work array.
- *
- */
-void
-vxge_hw_channel_dtr_try_complete(struct __vxge_hw_channel *channel, void **dtrh)
-{
-       vxge_assert(channel->compl_index < channel->length);
-
-       *dtrh = channel->work_arr[channel->compl_index];
-       prefetch(*dtrh);
-}
-
-/*
- * vxge_hw_channel_dtr_complete - Removes next completed dtr from the work array
- * @channel: Channel handle
- *
- * Removes the next completed dtr from the work array.
- *
- */
-void vxge_hw_channel_dtr_complete(struct __vxge_hw_channel *channel)
-{
-       channel->work_arr[channel->compl_index] = NULL;
-
-       /* wrap-around */
-       if (++channel->compl_index == channel->length)
-               channel->compl_index = 0;
-
-       channel->stats->total_compl_cnt++;
-}
-
-/*
- * vxge_hw_channel_dtr_free - Frees a dtr
- * @channel: Channel handle
- * @dtr:  DTR pointer
- *
- * Returns the dtr to the free array.
- *
- */
-void vxge_hw_channel_dtr_free(struct __vxge_hw_channel *channel, void *dtrh)
-{
-       channel->free_arr[--channel->free_ptr] = dtrh;
-}
-
-/*
- * vxge_hw_channel_dtr_count
- * @channel: Channel handle. Obtained via vxge_hw_channel_open().
- *
- * Retrieve the number of DTRs available. This function cannot be called
- * from the data path; ring_initial_replenish() is the only user.
- */
-int vxge_hw_channel_dtr_count(struct __vxge_hw_channel *channel)
-{
-       return (channel->reserve_ptr - channel->reserve_top) +
-               (channel->length - channel->free_ptr);
-}
-
-/**
- * vxge_hw_ring_rxd_reserve    - Reserve ring descriptor.
- * @ring: Handle to the ring object used for receive
- * @rxdh: Reserved descriptor. On success HW fills this "out" parameter
- * with a valid handle.
- *
- * Reserve Rx descriptor for the subsequent filling-in by the driver
- * and posting on the corresponding channel (@channelh)
- * via vxge_hw_ring_rxd_post().
- *
- * Returns: VXGE_HW_OK - success.
- * VXGE_HW_INF_OUT_OF_DESCRIPTORS - Currently no descriptors available.
- *
- */
-enum vxge_hw_status vxge_hw_ring_rxd_reserve(struct __vxge_hw_ring *ring,
-       void **rxdh)
-{
-       enum vxge_hw_status status;
-       struct __vxge_hw_channel *channel;
-
-       channel = &ring->channel;
-
-       status = vxge_hw_channel_dtr_alloc(channel, rxdh);
-
-       if (status == VXGE_HW_OK) {
-               struct vxge_hw_ring_rxd_1 *rxdp =
-                       (struct vxge_hw_ring_rxd_1 *)*rxdh;
-
-               rxdp->control_0 = rxdp->control_1 = 0;
-       }
-
-       return status;
-}
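/*
 * A minimal replenish sketch using only the helpers in this file; the
 * buffer-programming step relies on RxD layout helpers from elsewhere
 * in the driver and is left as a comment:
 */
static enum vxge_hw_status example_replenish_one(struct __vxge_hw_ring *ring)
{
	void *rxdh;
	enum vxge_hw_status status;

	status = vxge_hw_ring_rxd_reserve(ring, &rxdh);
	if (status != VXGE_HW_OK)
		return status;	/* e.g. VXGE_HW_INF_OUT_OF_DESCRIPTORS */

	/* ... program buffer address and size into the RxD here ... */

	vxge_hw_ring_rxd_post(ring, rxdh);	/* wmb + ownership to HW */
	return VXGE_HW_OK;
}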
-
-/**
- * vxge_hw_ring_rxd_free - Free descriptor.
- * @ring: Handle to the ring object used for receive
- * @rxdh: Descriptor handle.
- *
- * Free the reserved descriptor. This operation is "symmetrical" to
- * vxge_hw_ring_rxd_reserve. The "free-ing" completes the descriptor's
- * lifecycle.
- *
- * After free-ing (see vxge_hw_ring_rxd_free()) the descriptor again can
- * be:
- *
- * - reserved (vxge_hw_ring_rxd_reserve);
- *
- * - posted (vxge_hw_ring_rxd_post);
- *
- * - completed (vxge_hw_ring_rxd_next_completed);
- *
- * - and recycled again (vxge_hw_ring_rxd_free).
- *
- * For alternative state transitions and more details please refer to
- * the design doc.
- *
- */
-void vxge_hw_ring_rxd_free(struct __vxge_hw_ring *ring, void *rxdh)
-{
-       struct __vxge_hw_channel *channel;
-
-       channel = &ring->channel;
-
-       vxge_hw_channel_dtr_free(channel, rxdh);
-
-}
-
-/**
- * vxge_hw_ring_rxd_pre_post - Prepare rxd and post
- * @ring: Handle to the ring object used for receive
- * @rxdh: Descriptor handle.
- *
- * This routine prepares an rxd and posts it.
- */
-void vxge_hw_ring_rxd_pre_post(struct __vxge_hw_ring *ring, void *rxdh)
-{
-       struct __vxge_hw_channel *channel;
-
-       channel = &ring->channel;
-
-       vxge_hw_channel_dtr_post(channel, rxdh);
-}
-
-/**
- * vxge_hw_ring_rxd_post_post - Process rxd after post.
- * @ring: Handle to the ring object used for receive
- * @rxdh: Descriptor handle.
- *
- * Processes rxd after post
- */
-void vxge_hw_ring_rxd_post_post(struct __vxge_hw_ring *ring, void *rxdh)
-{
-       struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
-
-       rxdp->control_0 = VXGE_HW_RING_RXD_LIST_OWN_ADAPTER;
-
-       if (ring->stats->common_stats.usage_cnt > 0)
-               ring->stats->common_stats.usage_cnt--;
-}
-
-/**
- * vxge_hw_ring_rxd_post - Post descriptor on the ring.
- * @ring: Handle to the ring object used for receive
- * @rxdh: Descriptor obtained via vxge_hw_ring_rxd_reserve().
- *
- * Post descriptor on the ring.
- * Prior to posting, the descriptor should be filled in accordance with the
- * Host/Titan interface specification for a given service (LL, etc.).
- *
- */
-void vxge_hw_ring_rxd_post(struct __vxge_hw_ring *ring, void *rxdh)
-{
-       struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
-       struct __vxge_hw_channel *channel;
-
-       channel = &ring->channel;
-
-       wmb();
-       rxdp->control_0 = VXGE_HW_RING_RXD_LIST_OWN_ADAPTER;
-
-       vxge_hw_channel_dtr_post(channel, rxdh);
-
-       if (ring->stats->common_stats.usage_cnt > 0)
-               ring->stats->common_stats.usage_cnt--;
-}
-
-/**
- * vxge_hw_ring_rxd_post_post_wmb - Process rxd after post with memory barrier.
- * @ring: Handle to the ring object used for receive
- * @rxdh: Descriptor handle.
- *
- * Processes rxd after post with memory barrier.
- */
-void vxge_hw_ring_rxd_post_post_wmb(struct __vxge_hw_ring *ring, void *rxdh)
-{
-       wmb();
-       vxge_hw_ring_rxd_post_post(ring, rxdh);
-}
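/*
 * The _wmb variant above covers the common path where the driver has
 * just written buffer details into the RxD: the barrier makes those
 * stores visible before the ownership bit hands the descriptor back to
 * the adapter, mirroring the wmb() inside vxge_hw_ring_rxd_post().
 */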
-
-/**
- * vxge_hw_ring_rxd_next_completed - Get the _next_ completed descriptor.
- * @ring: Handle to the ring object used for receive
- * @rxdh: Descriptor handle. Returned by HW.
- * @t_code:    Transfer code, as per Titan User Guide,
- *      Receive Descriptor Format. Returned by HW.
- *
- * Retrieve the _next_ completed descriptor.
- * HW uses the ring callback (*vxge_hw_ring_callback_f) to notify the
- * driver of new completed descriptors. After that
- * the driver can use vxge_hw_ring_rxd_next_completed to retrieve the rest
- * of the completions (the very first completion is passed by HW via
- * vxge_hw_ring_callback_f).
- *
- * Implementation-wise, the driver is free to call
- * vxge_hw_ring_rxd_next_completed either immediately from inside the
- * ring callback, or in a deferred fashion and separate (from HW)
- * context.
- *
- * Non-zero @t_code means failure to fill-in receive buffer(s)
- * of the descriptor.
- * For instance, a parity error detected during the data transfer.
- * In this case Titan will complete the descriptor and indicate
- * to the host that the received data is not to be used.
- * For details please refer to Titan User Guide.
- *
- * Returns: VXGE_HW_OK - success.
- * VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS - No completed descriptors
- * are currently available for processing.
- *
- * See also: vxge_hw_ring_callback_f{},
- * vxge_hw_fifo_rxd_next_completed(), enum vxge_hw_status{}.
- */
-enum vxge_hw_status vxge_hw_ring_rxd_next_completed(
-       struct __vxge_hw_ring *ring, void **rxdh, u8 *t_code)
-{
-       struct __vxge_hw_channel *channel;
-       struct vxge_hw_ring_rxd_1 *rxdp;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       u64 control_0, own;
-
-       channel = &ring->channel;
-
-       vxge_hw_channel_dtr_try_complete(channel, rxdh);
-
-       rxdp = *rxdh;
-       if (rxdp == NULL) {
-               status = VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS;
-               goto exit;
-       }
-
-       control_0 = rxdp->control_0;
-       own = control_0 & VXGE_HW_RING_RXD_LIST_OWN_ADAPTER;
-       *t_code = (u8)VXGE_HW_RING_RXD_T_CODE_GET(control_0);
-
-       /* check whether it is not the end */
-       if (!own || *t_code == VXGE_HW_RING_T_CODE_FRM_DROP) {
-
-               vxge_assert((rxdp)->host_control != 0);
-
-               ++ring->cmpl_cnt;
-               vxge_hw_channel_dtr_complete(channel);
-
-               vxge_assert(*t_code != VXGE_HW_RING_RXD_T_CODE_UNUSED);
-
-               ring->stats->common_stats.usage_cnt++;
-               if (ring->stats->common_stats.usage_max <
-                               ring->stats->common_stats.usage_cnt)
-                       ring->stats->common_stats.usage_max =
-                               ring->stats->common_stats.usage_cnt;
-
-               status = VXGE_HW_OK;
-               goto exit;
-       }
-
-       /* reset it. since we don't want to return
-        * garbage to the driver */
-       *rxdh = NULL;
-       status = VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS;
-exit:
-       return status;
-}
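/*
 * A minimal completion-path sketch (illustrative only; a real consumer
 * would unmap and deliver the frame before recycling the descriptor):
 */
static void example_ring_poll(struct __vxge_hw_ring *ring)
{
	void *rxdh;
	u8 t_code;

	while (vxge_hw_ring_rxd_next_completed(ring, &rxdh, &t_code) ==
	       VXGE_HW_OK) {
		if (vxge_hw_ring_handle_tcode(ring, rxdh, t_code) !=
		    VXGE_HW_OK) {
			vxge_hw_ring_rxd_free(ring, rxdh);
			continue;
		}
		/* ... deliver the received frame to the stack ... */
		vxge_hw_ring_rxd_free(ring, rxdh);	/* or re-fill and re-post */
	}
}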
-
-/**
- * vxge_hw_ring_handle_tcode - Handle transfer code.
- * @ring: Handle to the ring object used for receive
- * @rxdh: Descriptor handle.
- * @t_code: One of the enumerated (and documented in the Titan user guide)
- * "transfer codes".
- *
- * Handle descriptor's transfer code. The latter comes with each completed
- * descriptor.
- *
- * Returns: one of the enum vxge_hw_status{} enumerated types.
- * VXGE_HW_OK                  - for success.
- * VXGE_HW_ERR_CRITICAL         - when a critical error is encountered.
- */
-enum vxge_hw_status vxge_hw_ring_handle_tcode(
-       struct __vxge_hw_ring *ring, void *rxdh, u8 t_code)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       /* If the t_code is not supported, and it is other than 0x5
-        * (an unparseable packet, e.g. an unknown IPv6 header),
-        * drop it.
-        */
-
-       if (t_code ==  VXGE_HW_RING_T_CODE_OK ||
-               t_code == VXGE_HW_RING_T_CODE_L3_PKT_ERR) {
-               status = VXGE_HW_OK;
-               goto exit;
-       }
-
-       if (t_code > VXGE_HW_RING_T_CODE_MULTI_ERR) {
-               status = VXGE_HW_ERR_INVALID_TCODE;
-               goto exit;
-       }
-
-       ring->stats->rxd_t_code_err_cnt[t_code]++;
-exit:
-       return status;
-}
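
For illustration, a minimal sketch of a receive drain loop built on the two calls above; the frame handler my_process_rx_frame and the RxD re-post step are hypothetical, not part of this API:

	/* Hypothetical sketch: drain completed RxDs, either from inside the
	 * ring callback or from a deferred context, per the pattern above.
	 */
	void my_ring_drain(struct __vxge_hw_ring *ring)
	{
		void *rxdh;
		u8 t_code;

		while (vxge_hw_ring_rxd_next_completed(ring, &rxdh, &t_code) ==
		       VXGE_HW_OK) {
			if (vxge_hw_ring_handle_tcode(ring, rxdh, t_code) !=
			    VXGE_HW_OK)
				continue;	/* bad t_code: skip the frame */
			my_process_rx_frame(rxdh);	/* hypothetical helper */
			/* re-posting the RxD back to HW is driver-specific */
		}
	}
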
-
-/**
- * __vxge_hw_non_offload_db_post - Post non offload doorbell
- *
- * @fifo: Fifo handle
- * @txdl_ptr: The starting location of the TxDL in host memory
- * @num_txds: The highest TxD in this TxDL (0 to 255 means 1 to 256)
- * @no_snoop: No snoop flags
- *
- * This function posts a non-offload doorbell to the doorbell FIFO.
- *
- */
-static void __vxge_hw_non_offload_db_post(struct __vxge_hw_fifo *fifo,
-       u64 txdl_ptr, u32 num_txds, u32 no_snoop)
-{
-       writeq(VXGE_HW_NODBW_TYPE(VXGE_HW_NODBW_TYPE_NODBW) |
-               VXGE_HW_NODBW_LAST_TXD_NUMBER(num_txds) |
-               VXGE_HW_NODBW_GET_NO_SNOOP(no_snoop),
-               &fifo->nofl_db->control_0);
-
-       writeq(txdl_ptr, &fifo->nofl_db->txdl_ptr);
-}
-
-/**
- * vxge_hw_fifo_free_txdl_count_get - returns the number of txdls available in
- * the fifo
- * @fifoh: Handle to the fifo object used for non offload send
- */
-u32 vxge_hw_fifo_free_txdl_count_get(struct __vxge_hw_fifo *fifoh)
-{
-       return vxge_hw_channel_dtr_count(&fifoh->channel);
-}
-
-/**
- * vxge_hw_fifo_txdl_reserve - Reserve fifo descriptor.
- * @fifo: Handle to the fifo object used for non offload send
- * @txdlh: Reserved descriptor. On success HW fills this "out" parameter
- *        with a valid handle.
- * @txdl_priv: Buffer to return the pointer to per txdl space
- *
- * Reserve a single TxDL (that is, a fifo descriptor)
- * for subsequent filling-in by the driver
- * and posting on the corresponding fifo
- * via vxge_hw_fifo_txdl_post().
- *
- * Note: it is the responsibility of the driver to reserve multiple
- * descriptors for lengthy (e.g., LSO) transmit operations. A single fifo
- * descriptor carries up to a configured number (fifo.max_frags) of
- * contiguous buffers.
- *
- * Returns: VXGE_HW_OK - success;
- * VXGE_HW_INF_OUT_OF_DESCRIPTORS - Currently no descriptors available
- *
- */
-enum vxge_hw_status vxge_hw_fifo_txdl_reserve(
-       struct __vxge_hw_fifo *fifo,
-       void **txdlh, void **txdl_priv)
-{
-       struct __vxge_hw_channel *channel;
-       enum vxge_hw_status status;
-       int i;
-
-       channel = &fifo->channel;
-
-       status = vxge_hw_channel_dtr_alloc(channel, txdlh);
-
-       if (status == VXGE_HW_OK) {
-               struct vxge_hw_fifo_txd *txdp =
-                       (struct vxge_hw_fifo_txd *)*txdlh;
-               struct __vxge_hw_fifo_txdl_priv *priv;
-
-               priv = __vxge_hw_fifo_txdl_priv(fifo, txdp);
-
-               /* reset the TxDL's private */
-               priv->align_dma_offset = 0;
-               priv->align_vaddr_start = priv->align_vaddr;
-               priv->align_used_frags = 0;
-               priv->frags = 0;
-               priv->alloc_frags = fifo->config->max_frags;
-               priv->next_txdl_priv = NULL;
-
-               *txdl_priv = (void *)(size_t)txdp->host_control;
-
-               for (i = 0; i < fifo->config->max_frags; i++) {
-                       txdp = ((struct vxge_hw_fifo_txd *)*txdlh) + i;
-                       txdp->control_0 = txdp->control_1 = 0;
-               }
-       }
-
-       return status;
-}
-
-/**
- * vxge_hw_fifo_txdl_buffer_set - Set transmit buffer pointer in the
- * descriptor.
- * @fifo: Handle to the fifo object used for non offload send
- * @txdlh: Descriptor handle.
- * @frag_idx: Index of the data buffer in the caller's scatter-gather list
- *            (of buffers).
- * @dma_pointer: DMA address of the data buffer referenced by @frag_idx.
- * @size: Size of the data buffer (in bytes).
- *
- * This API is part of the preparation of the transmit descriptor for posting
- * (via vxge_hw_fifo_txdl_post()). The related "preparation" APIs include
- * vxge_hw_fifo_txdl_mss_set() and vxge_hw_fifo_txdl_cksum_set_bits().
- * All three APIs fill in the fields of the fifo descriptor,
- * in accordance with the Titan specification.
- *
- */
-void vxge_hw_fifo_txdl_buffer_set(struct __vxge_hw_fifo *fifo,
-                                 void *txdlh, u32 frag_idx,
-                                 dma_addr_t dma_pointer, u32 size)
-{
-       struct __vxge_hw_fifo_txdl_priv *txdl_priv;
-       struct vxge_hw_fifo_txd *txdp, *txdp_last;
-
-       txdl_priv = __vxge_hw_fifo_txdl_priv(fifo, txdlh);
-       txdp = (struct vxge_hw_fifo_txd *)txdlh + txdl_priv->frags;
-
-       if (frag_idx != 0)
-               txdp->control_0 = txdp->control_1 = 0;
-       else {
-               txdp->control_0 |= VXGE_HW_FIFO_TXD_GATHER_CODE(
-                       VXGE_HW_FIFO_TXD_GATHER_CODE_FIRST);
-               txdp->control_1 |= fifo->interrupt_type;
-               txdp->control_1 |= VXGE_HW_FIFO_TXD_INT_NUMBER(
-                       fifo->tx_intr_num);
-               if (txdl_priv->frags) {
-                       txdp_last = (struct vxge_hw_fifo_txd *)txdlh +
-                               (txdl_priv->frags - 1);
-                       txdp_last->control_0 |= VXGE_HW_FIFO_TXD_GATHER_CODE(
-                               VXGE_HW_FIFO_TXD_GATHER_CODE_LAST);
-               }
-       }
-
-       vxge_assert(frag_idx < txdl_priv->alloc_frags);
-
-       txdp->buffer_pointer = (u64)dma_pointer;
-       txdp->control_0 |= VXGE_HW_FIFO_TXD_BUFFER_SIZE(size);
-       fifo->stats->total_buffers++;
-       txdl_priv->frags++;
-}
-
-/**
- * vxge_hw_fifo_txdl_post - Post descriptor on the fifo channel.
- * @fifo: Handle to the fifo object used for non offload send
- * @txdlh: Descriptor obtained via vxge_hw_fifo_txdl_reserve()
- *
- * Post descriptor on the 'fifo' type channel for transmission.
- * Prior to posting, the descriptor should be filled in accordance with the
- * Host/Titan interface specification for the given service (LL, etc.).
- *
- */
-void vxge_hw_fifo_txdl_post(struct __vxge_hw_fifo *fifo, void *txdlh)
-{
-       struct __vxge_hw_fifo_txdl_priv *txdl_priv;
-       struct vxge_hw_fifo_txd *txdp_last;
-       struct vxge_hw_fifo_txd *txdp_first;
-
-       txdl_priv = __vxge_hw_fifo_txdl_priv(fifo, txdlh);
-       txdp_first = txdlh;
-
-       txdp_last = (struct vxge_hw_fifo_txd *)txdlh + (txdl_priv->frags - 1);
-       txdp_last->control_0 |=
-             VXGE_HW_FIFO_TXD_GATHER_CODE(VXGE_HW_FIFO_TXD_GATHER_CODE_LAST);
-       txdp_first->control_0 |= VXGE_HW_FIFO_TXD_LIST_OWN_ADAPTER;
-
-       vxge_hw_channel_dtr_post(&fifo->channel, txdlh);
-
-       __vxge_hw_non_offload_db_post(fifo,
-               (u64)txdl_priv->dma_addr,
-               txdl_priv->frags - 1,
-               fifo->no_snoop_bits);
-
-       fifo->stats->total_posts++;
-       fifo->stats->common_stats.usage_cnt++;
-       if (fifo->stats->common_stats.usage_max <
-               fifo->stats->common_stats.usage_cnt)
-               fifo->stats->common_stats.usage_max =
-                       fifo->stats->common_stats.usage_cnt;
-}
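
As a hedged usage sketch, the reserve/buffer_set/post calls compose like this for a single-fragment frame; the DMA mapping is assumed to be done by the caller, and my_xmit_one is hypothetical:

	/* Hypothetical sketch: transmit one pre-mapped, single-fragment frame. */
	enum vxge_hw_status my_xmit_one(struct __vxge_hw_fifo *fifo,
					dma_addr_t dma, u32 len)
	{
		void *txdlh, *txdl_priv;
		enum vxge_hw_status status;

		status = vxge_hw_fifo_txdl_reserve(fifo, &txdlh, &txdl_priv);
		if (status != VXGE_HW_OK)	/* e.g. out of descriptors */
			return status;

		vxge_hw_fifo_txdl_buffer_set(fifo, txdlh, 0, dma, len);
		vxge_hw_fifo_txdl_post(fifo, txdlh);
		return VXGE_HW_OK;
	}
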
-
-/**
- * vxge_hw_fifo_txdl_next_completed - Retrieve next completed descriptor.
- * @fifo: Handle to the fifo object used for non offload send
- * @txdlh: Descriptor handle. Returned by HW.
- * @t_code: Transfer code, as per Titan User Guide,
- *          Transmit Descriptor Format.
- *          Returned by HW.
- *
- * Retrieve the _next_ completed descriptor.
- * HW uses channel callback (*vxge_hw_channel_callback_f) to notify the
- * driver of new completed descriptors. After that
- * the driver can use vxge_hw_fifo_txdl_next_completed to retrieve the
- * remaining completions (the very first completion is passed by HW via
- * vxge_hw_channel_callback_f).
- *
- * Implementation-wise, the driver is free to call
- * vxge_hw_fifo_txdl_next_completed either immediately from inside the
- * channel callback, or in a deferred fashion and separate (from HW)
- * context.
- *
- * Non-zero @t_code means failure to process the descriptor.
- * The failure could happen, for instance, when the link is
- * down, in which case Titan completes the descriptor because it
- * is not able to send the data out.
- *
- * For details please refer to Titan User Guide.
- *
- * Returns: VXGE_HW_OK - success.
- * VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS - No completed descriptors
- * are currently available for processing.
- *
- */
-enum vxge_hw_status vxge_hw_fifo_txdl_next_completed(
-       struct __vxge_hw_fifo *fifo, void **txdlh,
-       enum vxge_hw_fifo_tcode *t_code)
-{
-       struct __vxge_hw_channel *channel;
-       struct vxge_hw_fifo_txd *txdp;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       channel = &fifo->channel;
-
-       vxge_hw_channel_dtr_try_complete(channel, txdlh);
-
-       txdp = *txdlh;
-       if (txdp == NULL) {
-               status = VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS;
-               goto exit;
-       }
-
-       /* check whether host owns it */
-       if (!(txdp->control_0 & VXGE_HW_FIFO_TXD_LIST_OWN_ADAPTER)) {
-
-               vxge_assert(txdp->host_control != 0);
-
-               vxge_hw_channel_dtr_complete(channel);
-
-               *t_code = (u8)VXGE_HW_FIFO_TXD_T_CODE_GET(txdp->control_0);
-
-               if (fifo->stats->common_stats.usage_cnt > 0)
-                       fifo->stats->common_stats.usage_cnt--;
-
-               status = VXGE_HW_OK;
-               goto exit;
-       }
-
-       /* no more completions */
-       *txdlh = NULL;
-       status = VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS;
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_fifo_handle_tcode - Handle transfer code.
- * @fifo: Handle to the fifo object used for non offload send
- * @txdlh: Descriptor handle.
- * @t_code: One of the enumerated (and documented in the Titan user guide)
- *          "transfer codes".
- *
- * Handle descriptor's transfer code. The latter comes with each completed
- * descriptor.
- *
- * Returns: one of the enum vxge_hw_status{} enumerated types.
- * VXGE_HW_OK - for success.
- * VXGE_HW_ERR_CRITICAL - when a critical error is encountered.
- */
-enum vxge_hw_status vxge_hw_fifo_handle_tcode(struct __vxge_hw_fifo *fifo,
-                                             void *txdlh,
-                                             enum vxge_hw_fifo_tcode t_code)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       /* (t_code & 0x7) can never be negative, so only the
-        * upper bound needs checking */
-       if ((t_code & 0x7) > 0x4) {
-               status = VXGE_HW_ERR_INVALID_TCODE;
-               goto exit;
-       }
-
-       fifo->stats->txd_t_code_err_cnt[t_code]++;
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_fifo_txdl_free - Free descriptor.
- * @fifo: Handle to the fifo object used for non offload send
- * @txdlh: Descriptor handle.
- *
- * Free the reserved descriptor. This operation is "symmetrical" to
- * vxge_hw_fifo_txdl_reserve. The "free-ing" completes the descriptor's
- * lifecycle.
- *
- * After freeing, the descriptor can again be:
- *
- * - reserved (vxge_hw_fifo_txdl_reserve);
- *
- * - posted (vxge_hw_fifo_txdl_post);
- *
- * - completed (vxge_hw_fifo_txdl_next_completed);
- *
- * - and recycled again (vxge_hw_fifo_txdl_free).
- *
- * For alternative state transitions and more details please refer to
- * the design doc.
- *
- */
-void vxge_hw_fifo_txdl_free(struct __vxge_hw_fifo *fifo, void *txdlh)
-{
-       struct __vxge_hw_channel *channel;
-
-       channel = &fifo->channel;
-
-       vxge_hw_channel_dtr_free(channel, txdlh);
-}
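
The reclaim half of the lifecycle listed above might look like the following sketch (hypothetical; buffer unmapping is driver-specific and only noted as a comment):

	/* Hypothetical sketch: reclaim completed TxDLs and recycle them. */
	void my_tx_reclaim(struct __vxge_hw_fifo *fifo)
	{
		void *txdlh;
		enum vxge_hw_fifo_tcode t_code;

		while (vxge_hw_fifo_txdl_next_completed(fifo, &txdlh, &t_code) ==
		       VXGE_HW_OK) {
			if (t_code != VXGE_HW_FIFO_T_CODE_OK)
				vxge_hw_fifo_handle_tcode(fifo, txdlh, t_code);
			/* unmap the TxDL's buffers here, then recycle it */
			vxge_hw_fifo_txdl_free(fifo, txdlh);
		}
	}
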
-
-/**
- * vxge_hw_vpath_mac_addr_add - Add a MAC address entry for this vpath to the MAC address table.
- * @vp: Vpath handle.
- * @macaddr: MAC address to be added for this vpath into the list
- * @macaddr_mask: MAC address mask for macaddr
- * @duplicate_mode: Duplicate MAC address add mode. Please see
- *             enum vxge_hw_vpath_mac_addr_add_mode{}
- *
- * Adds the given mac address and mac address mask into the list for this
- * vpath.
- * see also: vxge_hw_vpath_mac_addr_delete, vxge_hw_vpath_mac_addr_get and
- * vxge_hw_vpath_mac_addr_get_next
- *
- */
-enum vxge_hw_status
-vxge_hw_vpath_mac_addr_add(
-       struct __vxge_hw_vpath_handle *vp,
-       u8 *macaddr,
-       u8 *macaddr_mask,
-       enum vxge_hw_vpath_mac_addr_add_mode duplicate_mode)
-{
-       u32 i;
-       u64 data1 = 0ULL;
-       u64 data2 = 0ULL;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       for (i = 0; i < ETH_ALEN; i++) {
-               data1 <<= 8;
-               data1 |= (u8)macaddr[i];
-
-               data2 <<= 8;
-               data2 |= (u8)macaddr_mask[i];
-       }
-
-       switch (duplicate_mode) {
-       case VXGE_HW_VPATH_MAC_ADDR_ADD_DUPLICATE:
-               i = 0;
-               break;
-       case VXGE_HW_VPATH_MAC_ADDR_DISCARD_DUPLICATE:
-               i = 1;
-               break;
-       case VXGE_HW_VPATH_MAC_ADDR_REPLACE_DUPLICATE:
-               i = 2;
-               break;
-       default:
-               i = 0;
-               break;
-       }
-
-       status = __vxge_hw_vpath_rts_table_set(vp,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_ADD_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
-                       0,
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_DA_MAC_ADDR(data1),
-                       VXGE_HW_RTS_ACCESS_STEER_DATA1_DA_MAC_ADDR_MASK(data2)|
-                       VXGE_HW_RTS_ACCESS_STEER_DATA1_DA_MAC_ADDR_MODE(i));
-exit:
-       return status;
-}
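
A brief usage sketch; the address and mask values are illustrative only (mask semantics are defined in the Titan User Guide):

	/* Hypothetical sketch: add one unicast address to this vpath. */
	u8 addr[ETH_ALEN] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	u8 mask[ETH_ALEN] = { 0 };	/* illustrative mask value */
	enum vxge_hw_status status;

	status = vxge_hw_vpath_mac_addr_add(vp, addr, mask,
			VXGE_HW_VPATH_MAC_ADDR_ADD_DUPLICATE);
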
-
-/**
- * vxge_hw_vpath_mac_addr_get - Get the first mac address entry
- * @vp: Vpath handle.
- * @macaddr: First MAC address entry for this vpath in the list
- * @macaddr_mask: MAC address mask for macaddr
- *
- * Get the first MAC address entry for this vpath from the MAC address table.
- * Return: the first mac address and mac address mask in the list for this
- * vpath.
- * see also: vxge_hw_vpath_mac_addr_get_next
- *
- */
-enum vxge_hw_status
-vxge_hw_vpath_mac_addr_get(
-       struct __vxge_hw_vpath_handle *vp,
-       u8 *macaddr,
-       u8 *macaddr_mask)
-{
-       u32 i;
-       u64 data1 = 0ULL;
-       u64 data2 = 0ULL;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_rts_table_get(vp,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LIST_FIRST_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
-                       0, &data1, &data2);
-
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       data1 = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_DA_MAC_ADDR(data1);
-
-       data2 = VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_DA_MAC_ADDR_MASK(data2);
-
-       for (i = ETH_ALEN; i > 0; i--) {
-               macaddr[i-1] = (u8)(data1 & 0xFF);
-               data1 >>= 8;
-
-               macaddr_mask[i-1] = (u8)(data2 & 0xFF);
-               data2 >>= 8;
-       }
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_vpath_mac_addr_get_next - Get the next mac address entry
- * @vp: Vpath handle.
- * @macaddr: Next MAC address entry for this vpath in the list
- * @macaddr_mask: MAC address mask for macaddr
- *
- * Get the next MAC address entry for this vpath from the MAC address table.
- * Return: the next mac address and mac address mask in the list for this
- * vpath.
- * see also: vxge_hw_vpath_mac_addr_get
- *
- */
-enum vxge_hw_status
-vxge_hw_vpath_mac_addr_get_next(
-       struct __vxge_hw_vpath_handle *vp,
-       u8 *macaddr,
-       u8 *macaddr_mask)
-{
-       u32 i;
-       u64 data1 = 0ULL;
-       u64 data2 = 0ULL;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_rts_table_get(vp,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LIST_NEXT_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
-                       0, &data1, &data2);
-
-       if (status != VXGE_HW_OK)
-               goto exit;
-
-       data1 = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_DA_MAC_ADDR(data1);
-
-       data2 = VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_DA_MAC_ADDR_MASK(data2);
-
-       for (i = ETH_ALEN; i > 0; i--) {
-               macaddr[i-1] = (u8)(data1 & 0xFF);
-               data1 >>= 8;
-
-               macaddr_mask[i-1] = (u8)(data2 & 0xFF);
-               data2 >>= 8;
-       }
-
-exit:
-       return status;
-}
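
Together with vxge_hw_vpath_mac_addr_get(), this supports a simple table walk; a hedged sketch:

	/* Hypothetical sketch: enumerate the vpath's MAC address table. */
	u8 macaddr[ETH_ALEN], macmask[ETH_ALEN];
	enum vxge_hw_status status;

	status = vxge_hw_vpath_mac_addr_get(vp, macaddr, macmask);
	while (status == VXGE_HW_OK) {
		/* consume macaddr/macmask here */
		status = vxge_hw_vpath_mac_addr_get_next(vp, macaddr, macmask);
	}
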
-
-/**
- * vxge_hw_vpath_mac_addr_delete - Delete a MAC address entry for this vpath from the MAC address table.
- * @vp: Vpath handle.
- * @macaddr: MAC address to be deleted for this vpath from the list
- * @macaddr_mask: MAC address mask for macaddr
- *
- * Deletes the given MAC address and MAC address mask from the list for
- * this vpath.
- * see also: vxge_hw_vpath_mac_addr_add, vxge_hw_vpath_mac_addr_get and
- * vxge_hw_vpath_mac_addr_get_next
- *
- */
-enum vxge_hw_status
-vxge_hw_vpath_mac_addr_delete(
-       struct __vxge_hw_vpath_handle *vp,
-       u8 *macaddr,
-       u8 *macaddr_mask)
-{
-       u32 i;
-       u64 data1 = 0ULL;
-       u64 data2 = 0ULL;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       for (i = 0; i < ETH_ALEN; i++) {
-               data1 <<= 8;
-               data1 |= (u8)macaddr[i];
-
-               data2 <<= 8;
-               data2 |= (u8)macaddr_mask[i];
-       }
-
-       status = __vxge_hw_vpath_rts_table_set(vp,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_DELETE_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
-                       0,
-                       VXGE_HW_RTS_ACCESS_STEER_DATA0_DA_MAC_ADDR(data1),
-                       VXGE_HW_RTS_ACCESS_STEER_DATA1_DA_MAC_ADDR_MASK(data2));
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_vpath_vid_add - Add the vlan id entry for this vpath to the vlan id table.
- * @vp: Vpath handle.
- * @vid: vlan id to be added for this vpath into the list
- *
- * Adds the given vlan id into the list for this vpath.
- * see also: vxge_hw_vpath_vid_delete
- *
- */
-enum vxge_hw_status
-vxge_hw_vpath_vid_add(struct __vxge_hw_vpath_handle *vp, u64 vid)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_rts_table_set(vp,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_ADD_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_VID,
-                       0, VXGE_HW_RTS_ACCESS_STEER_DATA0_VLAN_ID(vid), 0);
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_vpath_vid_delete - Delete the vlan id entry for this vpath
- *               from the vlan id table.
- * @vp: Vpath handle.
- * @vid: vlan id to be deleted for this vpath from the list
- *
- * Deletes the given vlan id from the list for this vpath.
- * see also: vxge_hw_vpath_vid_add
- *
- */
-enum vxge_hw_status
-vxge_hw_vpath_vid_delete(struct __vxge_hw_vpath_handle *vp, u64 vid)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_rts_table_set(vp,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_DELETE_ENTRY,
-                       VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_VID,
-                       0, VXGE_HW_RTS_ACCESS_STEER_DATA0_VLAN_ID(vid), 0);
-exit:
-       return status;
-}
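
A hedged sketch of the add/delete pair in use:

	/* Hypothetical sketch: accept a VLAN id, later remove it again. */
	enum vxge_hw_status status;

	status = vxge_hw_vpath_vid_add(vp, vid);
	if (status == VXGE_HW_OK) {
		/* vid is now accepted on this vpath */
		status = vxge_hw_vpath_vid_delete(vp, vid);
	}
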
-
-/**
- * vxge_hw_vpath_promisc_enable - Enable promiscuous mode.
- * @vp: Vpath handle.
- *
- * Enable promiscuous mode of Titan-e operation.
- *
- * See also: vxge_hw_vpath_promisc_disable().
- */
-enum vxge_hw_status vxge_hw_vpath_promisc_enable(
-                       struct __vxge_hw_vpath_handle *vp)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       vpath = vp->vpath;
-
-       /* Enable promiscuous mode for function 0 only */
-       if (!(vpath->hldev->access_rights &
-               VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM))
-               return VXGE_HW_OK;
-
-       val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
-
-       if (!(val64 & VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN)) {
-
-               val64 |= VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN |
-                        VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN |
-                        VXGE_HW_RXMAC_VCFG0_BCAST_EN |
-                        VXGE_HW_RXMAC_VCFG0_ALL_VID_EN;
-
-               writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
-       }
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_vpath_promisc_disable - Disable promiscuous mode.
- * @vp: Vpath handle.
- *
- * Disable promiscuous mode of Titan-e operation.
- *
- * See also: vxge_hw_vpath_promisc_enable().
- */
-enum vxge_hw_status vxge_hw_vpath_promisc_disable(
-                       struct __vxge_hw_vpath_handle *vp)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       vpath = vp->vpath;
-
-       val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
-
-       if (val64 & VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN) {
-
-               val64 &= ~(VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN |
-                          VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN |
-                          VXGE_HW_RXMAC_VCFG0_ALL_VID_EN);
-
-               writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
-       }
-exit:
-       return status;
-}
-
-/*
- * vxge_hw_vpath_bcast_enable - Enable broadcast
- * @vp: Vpath handle.
- *
- * Enable receiving broadcasts.
- */
-enum vxge_hw_status vxge_hw_vpath_bcast_enable(
-                       struct __vxge_hw_vpath_handle *vp)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       vpath = vp->vpath;
-
-       val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
-
-       if (!(val64 & VXGE_HW_RXMAC_VCFG0_BCAST_EN)) {
-               val64 |= VXGE_HW_RXMAC_VCFG0_BCAST_EN;
-               writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
-       }
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_vpath_mcast_enable - Enable multicast addresses.
- * @vp: Vpath handle.
- *
- * Enable Titan-e multicast addresses.
- * Returns: VXGE_HW_OK on success.
- *
- */
-enum vxge_hw_status vxge_hw_vpath_mcast_enable(
-                       struct __vxge_hw_vpath_handle *vp)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       vpath = vp->vpath;
-
-       val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
-
-       if (!(val64 & VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN)) {
-               val64 |= VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN;
-               writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
-       }
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_vpath_mcast_disable - Disable multicast addresses.
- * @vp: Vpath handle.
- *
- * Disable Titan-e multicast addresses.
- * Returns: VXGE_HW_OK - success.
- * VXGE_HW_ERR_INVALID_HANDLE - Invalid handle
- *
- */
-enum vxge_hw_status
-vxge_hw_vpath_mcast_disable(struct __vxge_hw_vpath_handle *vp)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath;
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       vpath = vp->vpath;
-
-       val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
-
-       if (val64 & VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN) {
-               val64 &= ~VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN;
-               writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
-       }
-exit:
-       return status;
-}
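
A hedged sketch of how the promiscuous/multicast toggles above might be driven from a set_rx_mode-style helper; my_set_rx_mode is hypothetical and return values are ignored for brevity:

	/* Hypothetical sketch: apply an rx-mode policy to one vpath. */
	static void my_set_rx_mode(struct __vxge_hw_vpath_handle *vp,
				   bool promisc, bool allmulti)
	{
		if (promisc)
			vxge_hw_vpath_promisc_enable(vp);
		else
			vxge_hw_vpath_promisc_disable(vp);

		if (allmulti)
			vxge_hw_vpath_mcast_enable(vp);
		else
			vxge_hw_vpath_mcast_disable(vp);
	}
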
-
-/*
- * vxge_hw_vpath_alarm_process - Process Alarms.
- * @vp: Vpath handle.
- * @skip_alarms: Do not clear the alarms
- *
- * Process vpath alarms.
- *
- */
-enum vxge_hw_status vxge_hw_vpath_alarm_process(
-                       struct __vxge_hw_vpath_handle *vp,
-                       u32 skip_alarms)
-{
-       enum vxge_hw_status status = VXGE_HW_OK;
-
-       if (vp == NULL) {
-               status = VXGE_HW_ERR_INVALID_HANDLE;
-               goto exit;
-       }
-
-       status = __vxge_hw_vpath_alarm_process(vp->vpath, skip_alarms);
-exit:
-       return status;
-}
-
-/**
- * vxge_hw_vpath_msix_set - Associate MSIX vectors with TIM interrupts and
- *                          alarms
- * @vp: Virtual Path handle.
- * @tim_msix_id: MSIX vectors associated with VXGE_HW_MAX_INTR_PER_VP number
- *             of interrupts (can be repeated). If the fifo or ring is not
- *             enabled, the MSIX vector for it should be set to 0.
- * @alarm_msix_id: MSIX vector for alarm.
- *
- * This API associates the given MSIX vector numbers with the four TIM
- * interrupts and the alarm interrupt.
- */
-void
-vxge_hw_vpath_msix_set(struct __vxge_hw_vpath_handle *vp, int *tim_msix_id,
-                      int alarm_msix_id)
-{
-       u64 val64;
-       struct __vxge_hw_virtualpath *vpath = vp->vpath;
-       struct vxge_hw_vpath_reg __iomem *vp_reg = vpath->vp_reg;
-       u32 vp_id = vp->vpath->vp_id;
-
-       val64 =  VXGE_HW_INTERRUPT_CFG0_GROUP0_MSIX_FOR_TXTI(
-                 (vp_id * 4) + tim_msix_id[0]) |
-                VXGE_HW_INTERRUPT_CFG0_GROUP1_MSIX_FOR_TXTI(
-                 (vp_id * 4) + tim_msix_id[1]);
-
-       writeq(val64, &vp_reg->interrupt_cfg0);
-
-       writeq(VXGE_HW_INTERRUPT_CFG2_ALARM_MAP_TO_MSG(
-                       (vpath->hldev->first_vp_id * 4) + alarm_msix_id),
-                       &vp_reg->interrupt_cfg2);
-
-       if (vpath->hldev->config.intr_mode ==
-                                       VXGE_HW_INTR_MODE_MSIX_ONE_SHOT) {
-               __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(
-                               VXGE_HW_ONE_SHOT_VECT0_EN_ONE_SHOT_VECT0_EN,
-                               0, 32), &vp_reg->one_shot_vect0_en);
-               __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(
-                               VXGE_HW_ONE_SHOT_VECT1_EN_ONE_SHOT_VECT1_EN,
-                               0, 32), &vp_reg->one_shot_vect1_en);
-               __vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(
-                               VXGE_HW_ONE_SHOT_VECT2_EN_ONE_SHOT_VECT2_EN,
-                               0, 32), &vp_reg->one_shot_vect2_en);
-       }
-}
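
A hedged setup sketch; the vector numbering (TX on 0, RX on 1, alarm on 2) is an assumption for illustration:

	/* Hypothetical sketch: route TIM TX/RX to vectors 0/1, alarm to 2. */
	int tim_msix_id[VXGE_HW_MAX_INTR_PER_VP] = { 0, 1, 0, 0 };

	vxge_hw_vpath_msix_set(vp, tim_msix_id, 2 /* alarm vector */);
	vxge_hw_vpath_msix_unmask(vp, 0);
	vxge_hw_vpath_msix_unmask(vp, 1);
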
-
-/**
- * vxge_hw_vpath_msix_mask - Mask MSIX Vector.
- * @vp: Virtual Path handle.
- * @msix_id:  MSIX ID
- *
- * The function masks the MSIX interrupt for the given @msix_id.
- */
-void
-vxge_hw_vpath_msix_mask(struct __vxge_hw_vpath_handle *vp, int msix_id)
-{
-       struct __vxge_hw_device *hldev = vp->vpath->hldev;
-       __vxge_hw_pio_mem_write32_upper(
-               (u32) vxge_bVALn(vxge_mBIT(msix_id  >> 2), 0, 32),
-               &hldev->common_reg->set_msix_mask_vect[msix_id % 4]);
-}
-
-/**
- * vxge_hw_vpath_msix_clear - Clear MSIX Vector.
- * @vp: Virtual Path handle.
- * @msix_id: MSIX ID
- *
- * The function clears the MSIX interrupt for the given @msix_id.
- */
-void vxge_hw_vpath_msix_clear(struct __vxge_hw_vpath_handle *vp, int msix_id)
-{
-       struct __vxge_hw_device *hldev = vp->vpath->hldev;
-
-       if (hldev->config.intr_mode == VXGE_HW_INTR_MODE_MSIX_ONE_SHOT)
-               __vxge_hw_pio_mem_write32_upper(
-                       (u32) vxge_bVALn(vxge_mBIT((msix_id >> 2)), 0, 32),
-                       &hldev->common_reg->clr_msix_one_shot_vec[msix_id % 4]);
-       else
-               __vxge_hw_pio_mem_write32_upper(
-                       (u32) vxge_bVALn(vxge_mBIT((msix_id >> 2)), 0, 32),
-                       &hldev->common_reg->clear_msix_mask_vect[msix_id % 4]);
-}
-
-/**
- * vxge_hw_vpath_msix_unmask - Unmask the MSIX Vector.
- * @vp: Virtual Path handle.
- * @msix_id: MSIX ID
- *
- * The function unmasks the MSIX interrupt for the given @msix_id.
- */
-void
-vxge_hw_vpath_msix_unmask(struct __vxge_hw_vpath_handle *vp, int msix_id)
-{
-       struct __vxge_hw_device *hldev = vp->vpath->hldev;
-       __vxge_hw_pio_mem_write32_upper(
-                       (u32)vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
-                       &hldev->common_reg->clear_msix_mask_vect[msix_id%4]);
-}
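
A hedged sketch of the mask/clear pairing inside an MSI-X handler; struct my_vpath and its fields are hypothetical:

	/* Hypothetical sketch: one-shot style servicing in an MSI-X ISR. */
	static irqreturn_t my_vpath_isr(int irq, void *dev_id)
	{
		struct my_vpath *v = dev_id;	/* hypothetical context */

		vxge_hw_vpath_msix_mask(v->vp, v->msix_id);
		/* schedule NAPI / do the actual servicing here */
		vxge_hw_vpath_msix_clear(v->vp, v->msix_id);
		return IRQ_HANDLED;
	}
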
-
-/**
- * vxge_hw_vpath_inta_mask_tx_rx - Mask Tx and Rx interrupts.
- * @vp: Virtual Path handle.
- *
- * Mask Tx and Rx vpath interrupts.
- *
- * See also: vxge_hw_vpath_inta_unmask_tx_rx()
- */
-void vxge_hw_vpath_inta_mask_tx_rx(struct __vxge_hw_vpath_handle *vp)
-{
-       u64     tim_int_mask0[4] = {[0 ...3] = 0};
-       u32     tim_int_mask1[4] = {[0 ...3] = 0};
-       u64     val64;
-       struct __vxge_hw_device *hldev = vp->vpath->hldev;
-
-       VXGE_HW_DEVICE_TIM_INT_MASK_SET(tim_int_mask0,
-               tim_int_mask1, vp->vpath->vp_id);
-
-       val64 = readq(&hldev->common_reg->tim_int_mask0);
-
-       if ((tim_int_mask0[VXGE_HW_VPATH_INTR_TX] != 0) ||
-               (tim_int_mask0[VXGE_HW_VPATH_INTR_RX] != 0)) {
-               writeq((tim_int_mask0[VXGE_HW_VPATH_INTR_TX] |
-                       tim_int_mask0[VXGE_HW_VPATH_INTR_RX] | val64),
-                       &hldev->common_reg->tim_int_mask0);
-       }
-
-       val64 = readl(&hldev->common_reg->tim_int_mask1);
-
-       if ((tim_int_mask1[VXGE_HW_VPATH_INTR_TX] != 0) ||
-               (tim_int_mask1[VXGE_HW_VPATH_INTR_RX] != 0)) {
-               __vxge_hw_pio_mem_write32_upper(
-                       (tim_int_mask1[VXGE_HW_VPATH_INTR_TX] |
-                       tim_int_mask1[VXGE_HW_VPATH_INTR_RX] | val64),
-                       &hldev->common_reg->tim_int_mask1);
-       }
-}
-
-/**
- * vxge_hw_vpath_inta_unmask_tx_rx - Unmask Tx and Rx interrupts.
- * @vp: Virtual Path handle.
- *
- * Unmask Tx and Rx vpath interrupts.
- *
- * See also: vxge_hw_vpath_inta_mask_tx_rx()
- */
-void vxge_hw_vpath_inta_unmask_tx_rx(struct __vxge_hw_vpath_handle *vp)
-{
-       u64     tim_int_mask0[4] = {[0 ...3] = 0};
-       u32     tim_int_mask1[4] = {[0 ...3] = 0};
-       u64     val64;
-       struct __vxge_hw_device *hldev = vp->vpath->hldev;
-
-       VXGE_HW_DEVICE_TIM_INT_MASK_SET(tim_int_mask0,
-               tim_int_mask1, vp->vpath->vp_id);
-
-       val64 = readq(&hldev->common_reg->tim_int_mask0);
-
-       if ((tim_int_mask0[VXGE_HW_VPATH_INTR_TX] != 0) ||
-          (tim_int_mask0[VXGE_HW_VPATH_INTR_RX] != 0)) {
-               writeq((~(tim_int_mask0[VXGE_HW_VPATH_INTR_TX] |
-                       tim_int_mask0[VXGE_HW_VPATH_INTR_RX])) & val64,
-                       &hldev->common_reg->tim_int_mask0);
-       }
-
-       if ((tim_int_mask1[VXGE_HW_VPATH_INTR_TX] != 0) ||
-          (tim_int_mask1[VXGE_HW_VPATH_INTR_RX] != 0)) {
-               __vxge_hw_pio_mem_write32_upper(
-                       (~(tim_int_mask1[VXGE_HW_VPATH_INTR_TX] |
-                         tim_int_mask1[VXGE_HW_VPATH_INTR_RX])) & val64,
-                       &hldev->common_reg->tim_int_mask1);
-       }
-}
-
-/**
- * vxge_hw_vpath_poll_rx - Poll Rx Virtual Path for completed
- * descriptors and process the same.
- * @ring: Handle to the ring object used for receive
- *
- * The function polls the Rx for the completed descriptors and calls
- * the driver via the supplied completion callback.
- *
- * Returns: VXGE_HW_OK, if the polling completed successfully.
- * VXGE_HW_COMPLETIONS_REMAIN: There are still more completed
- * descriptors available which are yet to be processed.
- *
- * See also: vxge_hw_vpath_poll_tx()
- */
-enum vxge_hw_status vxge_hw_vpath_poll_rx(struct __vxge_hw_ring *ring)
-{
-       u8 t_code;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       void *first_rxdh;
-       int new_count = 0;
-
-       ring->cmpl_cnt = 0;
-
-       status = vxge_hw_ring_rxd_next_completed(ring, &first_rxdh, &t_code);
-       if (status == VXGE_HW_OK)
-               ring->callback(ring, first_rxdh,
-                       t_code, ring->channel.userdata);
-
-       if (ring->cmpl_cnt != 0) {
-               ring->doorbell_cnt += ring->cmpl_cnt;
-               if (ring->doorbell_cnt >= ring->rxds_limit) {
-                       /*
-                        * Each RxD is 4 qwords; update the number of
-                        * qwords replenished
-                        */
-                       new_count = (ring->doorbell_cnt * 4);
-
-                       /* For each block add 4 more qwords */
-                       ring->total_db_cnt += ring->doorbell_cnt;
-                       if (ring->total_db_cnt >= ring->rxds_per_block) {
-                               new_count += 4;
-                               /* Reset total count */
-                               ring->total_db_cnt %= ring->rxds_per_block;
-                       }
-                       writeq(VXGE_HW_PRC_RXD_DOORBELL_NEW_QW_CNT(new_count),
-                               &ring->vp_reg->prc_rxd_doorbell);
-                       readl(&ring->common_reg->titan_general_int_status);
-                       ring->doorbell_cnt = 0;
-               }
-       }
-
-       return status;
-}
-
-/**
- * vxge_hw_vpath_poll_tx - Poll Tx for completed descriptors and process the same.
- * @fifo: Handle to the fifo object used for non offload send
- * @skb_ptr: pointer to skb
- * @nr_skb: number of skbs
- * @more: more is coming
- *
- * The function polls the Tx for the completed descriptors and calls
- * the driver via the supplied completion callback.
- *
- * Returns: VXGE_HW_OK, if the polling completed successfully.
- * VXGE_HW_COMPLETIONS_REMAIN: There are still more completed
- * descriptors available which are yet to be processed.
- */
-enum vxge_hw_status vxge_hw_vpath_poll_tx(struct __vxge_hw_fifo *fifo,
-                                       struct sk_buff ***skb_ptr, int nr_skb,
-                                       int *more)
-{
-       enum vxge_hw_fifo_tcode t_code;
-       void *first_txdlh;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       struct __vxge_hw_channel *channel;
-
-       channel = &fifo->channel;
-
-       status = vxge_hw_fifo_txdl_next_completed(fifo,
-                               &first_txdlh, &t_code);
-       if (status == VXGE_HW_OK)
-               if (fifo->callback(fifo, first_txdlh, t_code,
-                       channel->userdata, skb_ptr, nr_skb, more) != VXGE_HW_OK)
-                       status = VXGE_HW_COMPLETIONS_REMAIN;
-
-       return status;
-}
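
A hedged sketch of calling both poll entry points from a driver's poll routine; MY_NR_SKB and the surrounding context (ring, fifo) are assumptions:

	/* Hypothetical sketch: service one vpath's Rx and Tx completions. */
	#define MY_NR_SKB 16	/* hypothetical batch size */
	struct sk_buff *skbs[MY_NR_SKB], **skb_ptr = skbs;
	int more = 0;

	vxge_hw_vpath_poll_rx(ring);
	vxge_hw_vpath_poll_tx(fifo, &skb_ptr, MY_NR_SKB, &more);
	/* completed skbs were collected into skbs[]; free them here */
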
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-traffic.h b/drivers/net/ethernet/neterion/vxge/vxge-traffic.h
deleted file mode 100644 (file)
index ba6f833..0000000
+++ /dev/null
@@ -1,2290 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-traffic.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
- *                 Virtualized Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#ifndef VXGE_TRAFFIC_H
-#define VXGE_TRAFFIC_H
-
-#include "vxge-reg.h"
-#include "vxge-version.h"
-
-#define VXGE_HW_DTR_MAX_T_CODE         16
-#define VXGE_HW_ALL_FOXES              0xFFFFFFFFFFFFFFFFULL
-#define VXGE_HW_INTR_MASK_ALL          0xFFFFFFFFFFFFFFFFULL
-#define        VXGE_HW_MAX_VIRTUAL_PATHS       17
-
-#define VXGE_HW_MAC_MAX_MAC_PORT_ID    2
-
-#define VXGE_HW_DEFAULT_32             0xffffffff
-/* frames sizes */
-#define VXGE_HW_HEADER_802_2_SIZE      3
-#define VXGE_HW_HEADER_SNAP_SIZE       5
-#define VXGE_HW_HEADER_VLAN_SIZE       4
-#define VXGE_HW_MAC_HEADER_MAX_SIZE \
-                       (ETH_HLEN + \
-                       VXGE_HW_HEADER_802_2_SIZE + \
-                       VXGE_HW_HEADER_VLAN_SIZE + \
-                       VXGE_HW_HEADER_SNAP_SIZE)
-
-/* 32bit alignments */
-#define VXGE_HW_HEADER_ETHERNET_II_802_3_ALIGN         2
-#define VXGE_HW_HEADER_802_2_SNAP_ALIGN                        2
-#define VXGE_HW_HEADER_802_2_ALIGN                     3
-#define VXGE_HW_HEADER_SNAP_ALIGN                      1
-
-#define VXGE_HW_L3_CKSUM_OK                            0xFFFF
-#define VXGE_HW_L4_CKSUM_OK                            0xFFFF
-
-/* Forward declarations */
-struct __vxge_hw_device;
-struct __vxge_hw_vpath_handle;
-struct vxge_hw_vp_config;
-struct __vxge_hw_virtualpath;
-struct __vxge_hw_channel;
-struct __vxge_hw_fifo;
-struct __vxge_hw_ring;
-struct vxge_hw_ring_attr;
-struct vxge_hw_mempool;
-
-#ifndef TRUE
-#define TRUE 1
-#endif
-
-#ifndef FALSE
-#define FALSE 0
-#endif
-
-/*VXGE_HW_STATUS_H*/
-
-#define VXGE_HW_EVENT_BASE                     0
-#define VXGE_LL_EVENT_BASE                     100
-
-/**
- * enum vxge_hw_event - Enumerates slow-path HW events.
- * @VXGE_HW_EVENT_UNKNOWN: Unknown (and invalid) event.
- * @VXGE_HW_EVENT_SERR: Serious vpath hardware error event.
- * @VXGE_HW_EVENT_ECCERR: vpath ECC error event.
- * @VXGE_HW_EVENT_VPATH_ERR: Error local to the respective vpath
- * @VXGE_HW_EVENT_FIFO_ERR: FIFO Doorbell fifo error.
- * @VXGE_HW_EVENT_SRPCIM_SERR: srpcim hardware error event.
- * @VXGE_HW_EVENT_MRPCIM_SERR: mrpcim hardware error event.
- * @VXGE_HW_EVENT_MRPCIM_ECCERR: mrpcim ecc error event.
- * @VXGE_HW_EVENT_RESET_START: Privileged entity is starting device reset
- * @VXGE_HW_EVENT_RESET_COMPLETE: Device reset has been completed
- * @VXGE_HW_EVENT_SLOT_FREEZE: Slot-freeze event. The driver tries to
- * distinguish slot-freeze from the rest of the critical events (e.g. ECC)
- * when it is impossible to PIO read "through" the bus, i.e. when getting
- * all-foxes.
- *
- * enum vxge_hw_event enumerates slow-path HW events.
- *
- * See also: struct vxge_hw_uld_cbs{}, vxge_uld_link_up_f{},
- * vxge_uld_link_down_f{}.
- */
-enum vxge_hw_event {
-       VXGE_HW_EVENT_UNKNOWN           = 0,
-       /* HW events */
-       VXGE_HW_EVENT_RESET_START       = VXGE_HW_EVENT_BASE + 1,
-       VXGE_HW_EVENT_RESET_COMPLETE    = VXGE_HW_EVENT_BASE + 2,
-       VXGE_HW_EVENT_LINK_DOWN         = VXGE_HW_EVENT_BASE + 3,
-       VXGE_HW_EVENT_LINK_UP           = VXGE_HW_EVENT_BASE + 4,
-       VXGE_HW_EVENT_ALARM_CLEARED     = VXGE_HW_EVENT_BASE + 5,
-       VXGE_HW_EVENT_ECCERR            = VXGE_HW_EVENT_BASE + 6,
-       VXGE_HW_EVENT_MRPCIM_ECCERR     = VXGE_HW_EVENT_BASE + 7,
-       VXGE_HW_EVENT_FIFO_ERR          = VXGE_HW_EVENT_BASE + 8,
-       VXGE_HW_EVENT_VPATH_ERR         = VXGE_HW_EVENT_BASE + 9,
-       VXGE_HW_EVENT_CRITICAL_ERR      = VXGE_HW_EVENT_BASE + 10,
-       VXGE_HW_EVENT_SERR              = VXGE_HW_EVENT_BASE + 11,
-       VXGE_HW_EVENT_SRPCIM_SERR       = VXGE_HW_EVENT_BASE + 12,
-       VXGE_HW_EVENT_MRPCIM_SERR       = VXGE_HW_EVENT_BASE + 13,
-       VXGE_HW_EVENT_SLOT_FREEZE       = VXGE_HW_EVENT_BASE + 14,
-};
-
-#define VXGE_HW_SET_LEVEL(a, b) (((a) > (b)) ? (a) : (b))
-
-/*
- * struct vxge_hw_mempool_dma - Represents DMA objects passed to the
- * caller.
- */
-struct vxge_hw_mempool_dma {
-       dma_addr_t                      addr;
-       struct pci_dev *handle;
-       struct pci_dev *acc_handle;
-};
-
-/*
- * vxge_hw_mempool_item_f  - Mempool item alloc/free callback
- * @mempoolh: Memory pool handle.
- * @memblock: Address of memory block
- * @memblock_index: Index of memory block
- * @item: Item that gets allocated or freed.
- * @index: Item's index in the memory pool.
- * @is_last: True, if this item is the last one in the pool; false otherwise.
- * @userdata: Per-pool user context.
- *
- * Memory pool allocation/deallocation callback.
- */
-
-/*
- * struct vxge_hw_mempool - Memory pool.
- */
-struct vxge_hw_mempool {
-
-       void (*item_func_alloc)(
-       struct vxge_hw_mempool *mempoolh,
-       u32                     memblock_index,
-       struct vxge_hw_mempool_dma      *dma_object,
-       u32                     index,
-       u32                     is_last);
-
-       void            *userdata;
-       void            **memblocks_arr;
-       void            **memblocks_priv_arr;
-       struct vxge_hw_mempool_dma      *memblocks_dma_arr;
-       struct __vxge_hw_device *devh;
-       u32                     memblock_size;
-       u32                     memblocks_max;
-       u32                     memblocks_allocated;
-       u32                     item_size;
-       u32                     items_max;
-       u32                     items_initial;
-       u32                     items_current;
-       u32                     items_per_memblock;
-       void            **items_arr;
-       u32                     items_priv_size;
-};
-
-#define        VXGE_HW_MAX_INTR_PER_VP                         4
-#define        VXGE_HW_VPATH_INTR_TX                           0
-#define        VXGE_HW_VPATH_INTR_RX                           1
-#define        VXGE_HW_VPATH_INTR_EINTA                        2
-#define        VXGE_HW_VPATH_INTR_BMAP                         3
-
-#define VXGE_HW_BLOCK_SIZE                             4096
-
-/**
- * struct vxge_hw_tim_intr_config - Titan Tim interrupt configuration.
- * @intr_enable: Set to 1, if interrupt is enabled.
- * @btimer_val: Boundary Timer Initialization value in units of 272 ns.
- * @timer_ac_en: Timer Automatic Cancel. 1 : Automatic Canceling Enable: when
- *             asserted, other interrupt-generating entities will cancel the
- *             scheduled timer interrupt.
- * @timer_ci_en: Timer Continuous Interrupt. 1 : Continuous Interrupting Enable:
- *             When asserted, an interrupt will be generated every time the
- *             boundary timer expires, even if no traffic has been transmitted
- *             on this interrupt.
- * @timer_ri_en: Timer Consecutive (Re-) Interrupt 1 : Consecutive
- *             (Re-) Interrupt Enable: When asserted, an interrupt will be
- *             generated the next time the timer expires, even if no traffic has
- *             been transmitted on this interrupt. (This will only happen once
- *             each time that this value is written to the TIM.) This bit is
- *             cleared by H/W at the end of the current-timer-interval when
- *             the interrupt is triggered.
- * @rtimer_val: Restriction Timer Initialization value in units of 272 ns.
- * @util_sel: Utilization Selector. Selects which of the workload approximations
- *             to use (e.g. legacy Tx utilization, Tx/Rx utilization, host
- *             specified utilization etc.), selects one of
- *             the 17 host configured values.
- *             0-Virtual Path 0
- *             1-Virtual Path 1
- *             ...
- *             16-Virtual Path 16
- *             17-Legacy Tx network utilization, provided by TPA
- *             18-Legacy Rx network utilization, provided by FAU
- *             19-Average of legacy Rx and Tx utilization calculated from link
- *                utilization values.
- *             20-31-Invalid configurations
- *             32-Host utilization for Virtual Path 0
- *             33-Host utilization for Virtual Path 1
- *             ...
- *             48-Host utilization for Virtual Path 16
- *             49-Legacy Tx network utilization, provided by TPA
- *             50-Legacy Rx network utilization, provided by FAU
- *             51-Average of legacy Rx and Tx utilization calculated from
- *                link utilization values.
- *             52-63-Invalid configurations
- * @ltimer_val: Latency Timer Initialization Value in units of 272 ns.
- * @txd_cnt_en: TxD Return Event Count Enable. This configuration bit when set
- *             to 1 enables counting of TxD0 returns (signalled by PCC's),
- *             towards utilization event count values.
- * @urange_a: Defines the upper limit (in percent) for this utilization range
- *             to be active. This range is considered active
- *             if 0 <= UTIL <= URNG_A
- *             and the UEC_A field (below) is non-zero.
- * @uec_a: Utilization Event Count A. If this range is active, the adapter will
- *             wait until UEC_A events have occurred on the interrupt before
- *             generating an interrupt.
- * @urange_b: Link utilization range B.
- * @uec_b: Utilization Event Count B.
- * @urange_c: Link utilization range C.
- * @uec_c: Utilization Event Count C.
- * @urange_d: Link utilization range D.
- * @uec_d: Utilization Event Count D.
- * Traffic Interrupt Controller Module interrupt configuration.
- */
-struct vxge_hw_tim_intr_config {
-
-       u32                             intr_enable;
-#define VXGE_HW_TIM_INTR_ENABLE                                1
-#define VXGE_HW_TIM_INTR_DISABLE                               0
-#define VXGE_HW_TIM_INTR_DEFAULT                               0
-
-       u32                             btimer_val;
-#define VXGE_HW_MIN_TIM_BTIMER_VAL                             0
-#define VXGE_HW_MAX_TIM_BTIMER_VAL                             67108864
-#define VXGE_HW_USE_FLASH_DEFAULT                              (~0)
-
-       u32                             timer_ac_en;
-#define VXGE_HW_TIM_TIMER_AC_ENABLE                            1
-#define VXGE_HW_TIM_TIMER_AC_DISABLE                           0
-
-       u32                             timer_ci_en;
-#define VXGE_HW_TIM_TIMER_CI_ENABLE                            1
-#define VXGE_HW_TIM_TIMER_CI_DISABLE                           0
-
-       u32                             timer_ri_en;
-#define VXGE_HW_TIM_TIMER_RI_ENABLE                            1
-#define VXGE_HW_TIM_TIMER_RI_DISABLE                           0
-
-       u32                             rtimer_val;
-#define VXGE_HW_MIN_TIM_RTIMER_VAL                             0
-#define VXGE_HW_MAX_TIM_RTIMER_VAL                             67108864
-
-       u32                             util_sel;
-#define VXGE_HW_TIM_UTIL_SEL_LEGACY_TX_NET_UTIL                17
-#define VXGE_HW_TIM_UTIL_SEL_LEGACY_RX_NET_UTIL                18
-#define VXGE_HW_TIM_UTIL_SEL_LEGACY_TX_RX_AVE_NET_UTIL         19
-#define VXGE_HW_TIM_UTIL_SEL_PER_VPATH                         63
-
-       u32                             ltimer_val;
-#define VXGE_HW_MIN_TIM_LTIMER_VAL                             0
-#define VXGE_HW_MAX_TIM_LTIMER_VAL                             67108864
-
-       /* Line utilization interrupts */
-       u32                             urange_a;
-#define VXGE_HW_MIN_TIM_URANGE_A                               0
-#define VXGE_HW_MAX_TIM_URANGE_A                               100
-
-       u32                             uec_a;
-#define VXGE_HW_MIN_TIM_UEC_A                                  0
-#define VXGE_HW_MAX_TIM_UEC_A                                  65535
-
-       u32                             urange_b;
-#define VXGE_HW_MIN_TIM_URANGE_B                               0
-#define VXGE_HW_MAX_TIM_URANGE_B                               100
-
-       u32                             uec_b;
-#define VXGE_HW_MIN_TIM_UEC_B                                  0
-#define VXGE_HW_MAX_TIM_UEC_B                                  65535
-
-       u32                             urange_c;
-#define VXGE_HW_MIN_TIM_URANGE_C                               0
-#define VXGE_HW_MAX_TIM_URANGE_C                               100
-
-       u32                             uec_c;
-#define VXGE_HW_MIN_TIM_UEC_C                                  0
-#define VXGE_HW_MAX_TIM_UEC_C                                  65535
-
-       u32                             uec_d;
-#define VXGE_HW_MIN_TIM_UEC_D                                  0
-#define VXGE_HW_MAX_TIM_UEC_D                                  65535
-};
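
To make the knobs above concrete, a hedged example initialization; all values are illustrative, not recommendations:

	/* Hypothetical sketch: a TX-side TIM configuration. */
	struct vxge_hw_tim_intr_config tti = {
		.intr_enable	= VXGE_HW_TIM_INTR_ENABLE,
		.btimer_val	= 250,	/* 250 * 272 ns = 68 us */
		.timer_ac_en	= VXGE_HW_TIM_TIMER_AC_ENABLE,
		.timer_ci_en	= VXGE_HW_TIM_TIMER_CI_DISABLE,
		.util_sel	= VXGE_HW_TIM_UTIL_SEL_LEGACY_TX_NET_UTIL,
		.urange_a	= 10,	/* range A active for 0..10% util */
		.uec_a		= 1,	/* interrupt after 1 event in range A */
	};
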
-
-#define        VXGE_HW_STATS_OP_READ                                   0
-#define        VXGE_HW_STATS_OP_CLEAR_STAT                             1
-#define        VXGE_HW_STATS_OP_CLEAR_ALL_VPATH_STATS                  2
-#define        VXGE_HW_STATS_OP_CLEAR_ALL_STATS_OF_LOC                 2
-#define        VXGE_HW_STATS_OP_CLEAR_ALL_STATS                        3
-
-#define        VXGE_HW_STATS_LOC_AGGR                                  17
-#define VXGE_HW_STATS_AGGRn_OFFSET                             0x00720
-
-#define VXGE_HW_STATS_VPATH_TX_OFFSET                          0x0
-#define VXGE_HW_STATS_VPATH_RX_OFFSET                          0x00090
-
-#define        VXGE_HW_STATS_VPATH_PROG_EVENT_VNUM0_OFFSET        (0x001d0 >> 3)
-#define        VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM0(bits) \
-                                               vxge_bVALn(bits, 0, 32)
-
-#define        VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM1(bits) \
-                                               vxge_bVALn(bits, 32, 32)
-
-#define        VXGE_HW_STATS_VPATH_PROG_EVENT_VNUM2_OFFSET        (0x001d8 >> 3)
-#define        VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM2(bits) \
-                                               vxge_bVALn(bits, 0, 32)
-
-#define        VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM3(bits) \
-                                               vxge_bVALn(bits, 32, 32)
-
-/**
- * struct vxge_hw_xmac_aggr_stats - Per-Aggregator XMAC Statistics
- *
- * @tx_frms: Count of data frames transmitted on this Aggregator on all
- *             its Aggregation ports. Does not include LACPDUs or Marker PDUs.
- *             However, does include frames discarded by the Distribution
- *             function.
- * @tx_data_octets: Count of data and padding octets of frames transmitted
- *             on this Aggregator on all its Aggregation ports. Does not include
- *             octets of LACPDUs or Marker PDUs. However, does include octets of
- *             frames discarded by the Distribution function.
- * @tx_mcast_frms: Count of data frames transmitted (to a group destination
- *             address other than the broadcast address) on this Aggregator on
- *             all its Aggregation ports. Does not include LACPDUs or Marker
- *             PDUs. However, does include frames discarded by the Distribution
- *             function.
- * @tx_bcast_frms: Count of broadcast data frames transmitted on this Aggregator
- *             on all its Aggregation ports. Does not include LACPDUs or Marker
- *             PDUs. However, does include frames discarded by the Distribution
- *             function.
- * @tx_discarded_frms: Count of data frames to be transmitted on this Aggregator
- *             that are discarded by the Distribution function. This occurs
- *             when conversations are allocated to different ports and have
- *             to be flushed on old ports.
- * @tx_errored_frms: Count of data frames transmitted on this Aggregator that
- *             experience transmission errors on its Aggregation ports.
- * @rx_frms: Count of data frames received on this Aggregator on all its
- *             Aggregation ports. Does not include LACPDUs or Marker PDUs.
- *             Also, does not include frames discarded by the Collection
- *             function.
- * @rx_data_octets: Count of data and padding octets of frames received on this
- *             Aggregator on all its Aggregation ports. Does not include octets
- *             of LACPDUs or Marker PDUs. Also, does not include
- *             octets of frames
- *             discarded by the Collection function.
- * @rx_mcast_frms: Count of data frames received (from a group destination
- *             address other than the broadcast address) on this Aggregator on
- *             all its Aggregation ports. Does not include LACPDUs or Marker
- *             PDUs. Also, does not include frames discarded by the Collection
- *             function.
- * @rx_bcast_frms: Count of broadcast data frames received on this Aggregator on
- *             all its Aggregation ports. Does not include LACPDUs or Marker
- *             PDUs. Also, does not include frames discarded by the Collection
- *             function.
- * @rx_discarded_frms: Count of data frames received on this Aggregator that are
- *             discarded by the Collection function because the Collection
- *             function was disabled on the port on which the frames were received.
- * @rx_errored_frms: Count of data frames received on this Aggregator that are
- *             discarded by its Aggregation ports, or are discarded by the
- *             Collection function of the Aggregator, or that are discarded by
- *             the Aggregator due to detection of an illegal Slow Protocols PDU.
- * @rx_unknown_slow_proto_frms: Count of data frames received on this Aggregator
- *             that are discarded by its Aggregation ports due to detection of
- *             an unknown Slow Protocols PDU.
- *
- * Per-aggregator XMAC statistics.
- */
-struct vxge_hw_xmac_aggr_stats {
-/*0x000*/              u64     tx_frms;
-/*0x008*/              u64     tx_data_octets;
-/*0x010*/              u64     tx_mcast_frms;
-/*0x018*/              u64     tx_bcast_frms;
-/*0x020*/              u64     tx_discarded_frms;
-/*0x028*/              u64     tx_errored_frms;
-/*0x030*/              u64     rx_frms;
-/*0x038*/              u64     rx_data_octets;
-/*0x040*/              u64     rx_mcast_frms;
-/*0x048*/              u64     rx_bcast_frms;
-/*0x050*/              u64     rx_discarded_frms;
-/*0x058*/              u64     rx_errored_frms;
-/*0x060*/              u64     rx_unknown_slow_proto_frms;
-} __packed;
-
-/**
- * struct vxge_hw_xmac_port_stats - XMAC Port Statistics
- *
- * @tx_ttl_frms: Count of successfully transmitted MAC frames
- * @tx_ttl_octets: Count of total octets of transmitted frames, not including
- *            framing characters (i.e. less framing bits). To determine the
- *            total octets of transmitted frames, including framing characters,
- *            multiply PORTn_TX_TTL_FRMS by 8 and add it to this stat (unless
- *            otherwise configured, this stat only counts frames that have
- *            8 bytes of preamble for each frame). This stat can be configured
- *            (see XMAC_STATS_GLOBAL_CFG.TTL_FRMS_HANDLING) to count everything
- *            including the preamble octets.
- * @tx_data_octets: Count of data and padding octets of successfully transmitted
- *            frames.
- * @tx_mcast_frms: Count of successfully transmitted frames to a group address
- *            other than the broadcast address.
- * @tx_bcast_frms: Count of successfully transmitted frames to the broadcast
- *            group address.
- * @tx_ucast_frms: Count of transmitted frames containing a unicast address.
- *            Includes discarded frames that are not sent to the network.
- * @tx_tagged_frms: Count of transmitted frames containing a VLAN tag.
- * @tx_vld_ip: Count of transmitted IP datagrams that are passed to the network.
- * @tx_vld_ip_octets: Count of total octets of transmitted IP datagrams that
- *            are passed to the network.
- * @tx_icmp: Count of transmitted ICMP messages. Includes messages not sent
- *            due to problems within ICMP.
- * @tx_tcp: Count of transmitted TCP segments. Does not include segments
- *            containing retransmitted octets.
- * @tx_rst_tcp: Count of transmitted TCP segments containing the RST flag.
- * @tx_udp: Count of transmitted UDP datagrams.
- * @tx_parse_error: Increments when the TPA is unable to parse a packet. This
- *            generally occurs when a packet is corrupt somehow, including
- *            packets that have IP version mismatches, invalid Layer 2 control
- *            fields, etc. L3/L4 checksums are not offloaded, but the packet
- *            is still transmitted.
- * @tx_unknown_protocol: Increments when the TPA encounters an unknown
- *            protocol, such as a new IPv6 extension header, or an unsupported
- *            Routing Type. The packet still has a checksum calculated but it
- *            may be incorrect.
- * @tx_pause_ctrl_frms: Count of MAC PAUSE control frames that are transmitted.
- *            Since the only control frames supported by this device are
- *            PAUSE frames, this register is a count of all transmitted MAC
- *            control frames.
- * @tx_marker_pdu_frms: Count of Marker PDUs transmitted on this
- *            Aggregation port.
- * @tx_lacpdu_frms: Count of LACPDUs transmitted on this Aggregation port.
- * @tx_drop_ip: Count of transmitted IP datagrams that could not be passed to
- *            the network. Increments because of: 1) An internal processing
- *            error (such as an uncorrectable ECC error). 2) A frame parsing
- *            error during IP checksum calculation.
- * @tx_marker_resp_pdu_frms: Count of Marker Response PDUs transmitted on this
- *            Aggregation port.
- * @tx_xgmii_char2_match: Maintains a count of the number of transmitted XGMII
- *            characters that match a pattern that is programmable through
- *            register XMAC_STATS_TX_XGMII_CHAR_PORTn. By default, the pattern
- *            is set to /T/ (i.e. the terminate character), thus the statistic
- *            tracks the number of transmitted Terminate characters.
- * @tx_xgmii_char1_match: Maintains a count of the number of transmitted XGMII
- *            characters that match a pattern that is programmable through
- *            register XMAC_STATS_TX_XGMII_CHAR_PORTn. By default, the pattern
- *            is set to /S/ (i.e. the start character),
- *            thus the statistic tracks
- *            the number of transmitted Start characters.
- * @tx_xgmii_column2_match: Maintains a count of the number of transmitted XGMII
- *            columns that match a pattern that is programmable through register
- *            XMAC_STATS_TX_XGMII_COLUMN2_PORTn. By default, the pattern is set
- *            to 4 x /E/ (i.e. a column containing all error characters), thus
- *            the statistic tracks the number of Error columns transmitted at
- *            any time. If XMAC_STATS_TX_XGMII_BEHAV_COLUMN2_PORTn.NEAR_COL1 is
- *            set to 1, then this stat increments when COLUMN2 is found within
- *            'n' clocks after COLUMN1. Here, 'n' is defined by
- *            XMAC_STATS_TX_XGMII_BEHAV_COLUMN2_PORTn.NUM_COL (if 'n' is set
- *            to 0, then it means to search anywhere for COLUMN2).
- * @tx_xgmii_column1_match: Maintains a count of the number of transmitted XGMII
- *            columns that match a pattern that is programmable through register
- *            XMAC_STATS_TX_XGMII_COLUMN1_PORTn. By default, the pattern is set
- *            to 4 x /I/ (i.e. a column containing all idle characters),
- *            thus the statistic tracks the number of transmitted Idle columns.
- * @tx_any_err_frms: Count of transmitted frames containing any error that
- *            prevents them from being passed to the network. Increments if
- *            there is an ECC error while reading the frame out of the transmit
- *            buffer. Also increments if the transmit protocol assist (TPA)
- *            block determines that the frame should not be sent.
- * @tx_drop_frms: Count of frames that could not be sent due solely to
- *            internal MAC processing. Increments once whenever the
- *            transmit buffer is flushed (due to an ECC error on a memory
- *            descriptor).
- * @rx_ttl_frms: Count of total received MAC frames, including frames received
- *            with frame-too-long, FCS, or length errors. This stat can be
- *            configured (see XMAC_STATS_GLOBAL_CFG.TTL_FRMS_HANDLING) to count
- *            everything, even "frames" as small one byte of preamble.
- * @rx_vld_frms: Count of successfully received MAC frames. Does not include
- *            frames received with frame-too-long, FCS, or length errors.
- * @rx_offload_frms: Count of offloaded received frames that are passed to
- *            the host.
- * @rx_ttl_octets: Count of total octets of received frames, not including
- *            framing characters (i.e. less framing bits). To determine the
- *            total octets of received frames, including framing characters,
- *            multiply PORTn_RX_TTL_FRMS by 8 and add it to this stat (unless
- *            otherwise configured, this stat only counts frames that have 8
- *            bytes of preamble for each frame). This stat can be configured
- *            (see XMAC_STATS_GLOBAL_CFG.TTL_FRMS_HANDLING) to count everything,
- *            even the preamble octets of "frames" as small as one byte of
- *            preamble.
- * @rx_data_octets: Count of data and padding octets of successfully received
- *            frames. Does not include frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_offload_octets: Count of total octets, not including framing
- *            characters, of offloaded received frames that are passed
- *            to the host.
- * @rx_vld_mcast_frms: Count of successfully received MAC frames containing a
- *            nonbroadcast group address. Does not include frames received
- *            with frame-too-long, FCS, or length errors.
- * @rx_vld_bcast_frms: Count of successfully received MAC frames containing
- *            the broadcast group address. Does not include frames received
- *            with frame-too-long, FCS, or length errors.
- * @rx_accepted_ucast_frms: Count of successfully received frames containing
- *            a unicast address. Only includes frames that are passed to
- *            the system.
- * @rx_accepted_nucast_frms: Count of successfully received frames containing
- *            a non-unicast (broadcast or multicast) address. Only includes
- *            frames that are passed to the system. Could include, for instance,
- *            non-unicast frames that contain FCS errors if the MAC_ERROR_CFG
- *            register is set to pass FCS-errored frames to the host.
- * @rx_tagged_frms: Count of received frames containing a VLAN tag.
- * @rx_long_frms: Count of received frames that are longer than RX_MAX_PYLD_LEN
- *            + 18 bytes (+ 22 bytes if VLAN-tagged).
- * @rx_usized_frms: Count of received frames of length (including FCS, but not
- *            framing bits) less than 64 octets, that are otherwise well-formed.
- *            In other words, counts runts.
- * @rx_osized_frms: Count of received frames of length (including FCS, but not
- *            framing bits) more than 1518 octets, that are otherwise
- *            well-formed. Note: If register XMAC_STATS_GLOBAL_CFG.VLAN_HANDLING
- *            is set to 1, then "more than 1518 octets" becomes "more than 1518
- *            (1522 if VLAN-tagged) octets".
- * @rx_frag_frms: Count of received frames of length (including FCS, but not
- *            framing bits) less than 64 octets that had bad FCS. In other
- *            words, counts fragments.
- * @rx_jabber_frms: Count of received frames of length (including FCS, but not
- *            framing bits) more than 1518 octets that had bad FCS. In other
- *            words, counts jabbers. Note: If register
- *            XMAC_STATS_GLOBAL_CFG.VLAN_HANDLING is set to 1, then "more than
- *            1518 octets" becomes "more than 1518 (1522 if VLAN-tagged)
- *            octets".
- * @rx_ttl_64_frms: Count of total received MAC frames with length (including
- *            FCS, but not framing bits) of exactly 64 octets. Includes frames
- *            received with frame-too-long, FCS, or length errors.
- * @rx_ttl_65_127_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 65 and 127
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_128_255_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 128 and 255
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_256_511_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 256 and 511
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_512_1023_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 512 and 1023
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_1024_1518_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 1024 and 1518
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_1519_4095_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 1519 and 4095
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_4096_8191_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 4096 and 8191
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_8192_max_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 8192 and
- *            RX_MAX_PYLD_LEN+18 octets inclusive. Includes frames received
- *            with frame-too-long, FCS, or length errors.
- * @rx_ttl_gt_max_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) exceeding
- *            RX_MAX_PYLD_LEN+18 (+22 bytes if VLAN-tagged) octets inclusive.
- *            Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ip: Count of received IP datagrams. Includes errored IP datagrams.
- * @rx_accepted_ip: Count of received IP datagrams that are passed to the
- *            system.
- * @rx_ip_octets: Count of the number of octets in received IP datagrams.
- *            Includes errored IP datagrams.
- * @rx_err_ip: Count of received IP datagrams containing errors. For example,
- *            bad IP checksum.
- * @rx_icmp: Count of received ICMP messages. Includes errored ICMP messages.
- * @rx_tcp: Count of received TCP segments. Includes errored TCP segments.
- *            Note: This stat contains a count of all received TCP segments,
- *            regardless of whether or not they pertain to an established
- *            connection.
- * @rx_udp: Count of received UDP datagrams.
- * @rx_err_tcp: Count of received TCP segments containing errors. For example,
- *            bad TCP checksum.
- * @rx_pause_count: Count of the number of pause quanta for which the MAC has
- *            been in the paused state. Recall, one pause quantum equates to
- *            512 bit times.
- * @rx_pause_ctrl_frms: Count of received MAC PAUSE control frames.
- * @rx_unsup_ctrl_frms: Count of received MAC control frames that do not
- *            contain the PAUSE opcode. The sum of RX_PAUSE_CTRL_FRMS and
- *            this register is a count of all received MAC control frames.
- *            Note: This stat may be configured to count all layer 2 errors
- *            (i.e. length errors and FCS errors).
- * @rx_fcs_err_frms: Count of received MAC frames that do not pass FCS. Does
- *            not include frames received with frame-too-long or
- *            frame-too-short error.
- * @rx_in_rng_len_err_frms: Count of received frames with a length/type field
- *            value between 46 (42 for VLAN-tagged frames) and 1500 (also 1500
- *            for VLAN-tagged frames), inclusive, that does not match the
- *            number of data octets (including pad) received. Also contains
- *            a count of received frames with a length/type field less than
- *            46 (42 for VLAN-tagged frames) where the number of data octets
- *            (including pad) received is greater than 46 (42 for VLAN-tagged
- *            frames).
- * @rx_out_rng_len_err_frms:  Count of received frames with length/type field
- *            between 1501 and 1535 decimal, inclusive.
- * @rx_drop_frms: Count of received frames that could not be passed to the host.
- *            See PORTn_RX_L2_MGMT_DISCARD, PORTn_RX_RPA_DISCARD,
- *            PORTn_RX_TRASH_DISCARD, PORTn_RX_RTS_DISCARD, PORTn_RX_RED_DISCARD
- *            for a list of reasons. Because the RMAC drops one frame at a time,
- *            this stat also indicates the number of drop events.
- * @rx_discarded_frms: Count of received frames containing any error that
- *            prevents them from being passed to the system. See
- *            PORTn_RX_FCS_DISCARD, PORTn_RX_LEN_DISCARD, and
- *            PORTn_RX_SWITCH_DISCARD for a list of reasons.
- * @rx_drop_ip: Count of received IP datagrams that could not be passed to the
- *            host. See PORTn_RX_DROP_FRMS for a list of reasons.
- * @rx_drop_udp: Count of received UDP datagrams that are not delivered to the
- *            host. See PORTn_RX_DROP_FRMS for a list of reasons.
- * @rx_marker_pdu_frms: Count of valid Marker PDUs received on this Aggregation
- *            port.
- * @rx_lacpdu_frms: Count of valid LACPDUs received on this Aggregation port.
- * @rx_unknown_pdu_frms: Count of received frames (on this Aggregation port)
- *            that carry the Slow Protocols EtherType, but contain an unknown
- *            PDU. Or frames that contain the Slow Protocols group MAC address,
- *            but do not carry the Slow Protocols EtherType.
- * @rx_marker_resp_pdu_frms: Count of valid Marker Response PDUs received on
- *            this Aggregation port.
- * @rx_fcs_discard: Count of received frames that are discarded because the
- *            FCS check failed.
- * @rx_illegal_pdu_frms: Count of received frames (on this Aggregation port)
- *            that carry the Slow Protocols EtherType, but contain a badly
- *            formed PDU. Or frames that carry the Slow Protocols EtherType,
- *            but contain an illegal value of Protocol Subtype.
- * @rx_switch_discard: Count of received frames that are discarded by the
- *            internal switch because they did not have an entry in the
- *            Filtering Database. This includes frames that had an invalid
- *            destination MAC address or VLAN ID. It also includes frames
- *            discarded because they did not satisfy the length requirements
- *            of the target VPATH.
- * @rx_len_discard: Count of received frames that are discarded because of an
- *            invalid frame length (includes fragments, oversized frames and
- *            mismatch between frame length and length/type field). This stat
- *            can be configured
- *            (see XMAC_STATS_GLOBAL_CFG.LEN_DISCARD_HANDLING).
- * @rx_rpa_discard: Count of received frames that were discarded because the
- *            receive protocol assist (RPA) discovered an error in the frame
- *            or was unable to parse the frame.
- * @rx_l2_mgmt_discard: Count of Layer 2 management frames (e.g. pause frames,
- *            Link Aggregation Control Protocol (LACP) frames, etc.) that are
- *            discarded.
- * @rx_rts_discard: Count of received frames that are discarded by the receive
- *            traffic steering (RTS) logic. Includes those frames discarded
- *            because the SSC response contradicted the switch table, because
- *            the SSC timed out, or because the target queue could not fit the
- *            frame.
- * @rx_trash_discard: Count of received frames that are discarded because
- *            receive traffic steering (RTS) steered the frame to the trash
- *            queue.
- * @rx_buff_full_discard: Count of received frames that are discarded because
- *            internal buffers are full. Includes frames discarded because the
- *            RTS logic is waiting for an SSC lookup that has no timeout bound.
- *            Also, includes frames that are dropped because the MAC2FAU buffer
- *            is nearly full -- this can happen if the external receive buffer
- *            is full and the receive path is backing up.
- * @rx_red_discard: Count of received frames that are discarded because of RED
- *            (Random Early Discard).
- * @rx_xgmii_ctrl_err_cnt: Maintains a count of unexpected or misplaced control
- *            characters occurring between times of normal data transmission
- *            (i.e. not included in RX_XGMII_DATA_ERR_CNT). This counter is
- *            incremented when either:
- *            1) The Reconciliation Sublayer (RS) is expecting one control
- *               character and gets another (i.e. is expecting a Start
- *               character, but gets another control character).
- *            2) The Start control character is not in lane 0.
- *            Only increments the count by one for each XGMII column.
- * @rx_xgmii_data_err_cnt: Maintains a count of unexpected control characters
- *            during normal data transmission. If the Reconciliation Sublayer
- *            (RS) receives a control character, other than a terminate control
- *            character, during receipt of data octets then this register is
- *            incremented. Also increments if the start frame delimiter is not
- *            found in the correct location. Only increments the count by one
- *            for each XGMII column.
- * @rx_xgmii_char1_match: Maintains a count of the number of XGMII characters
- *            that match a pattern that is programmable through register
- *            XMAC_STATS_RX_XGMII_CHAR_PORTn. By default, the pattern is set
- *            to /E/ (i.e. the error character), thus the statistic tracks the
- *            number of Error characters received at any time.
- * @rx_xgmii_err_sym: Count of the number of symbol errors in the received
- *            XGMII data (i.e. PHY indicates "Receive Error" on the XGMII).
- *            Only includes symbol errors that are observed between the XGMII
- *            Start Frame Delimiter and End Frame Delimiter, inclusive, and
- *            only increments the count by one for each frame.
- * @rx_xgmii_column1_match: Maintains a count of the number of XGMII columns
- *            that match a pattern that is programmable through register
- *            XMAC_STATS_RX_XGMII_COLUMN1_PORTn. By default, the pattern is set
- *            to 4 x /E/ (i.e. a column containing all error characters), thus
- *            the statistic tracks the number of Error columns received at any
- *            time.
- * @rx_xgmii_char2_match: Maintains a count of the number of XGMII characters
- *            that match a pattern that is programmable through register
- *            XMAC_STATS_RX_XGMII_CHAR_PORTn. By default, the pattern is set
- *            to /E/ (i.e. the error character), thus the statistic tracks the
- *            number of Error characters received at any time.
- * @rx_local_fault: Maintains a count of the number of times that link
- *            transitioned from "up" to "down" due to a local fault.
- * @rx_xgmii_column2_match: Maintains a count of the number of XGMII columns
- *            that match a pattern that is programmable through register
- *            XMAC_STATS_RX_XGMII_COLUMN2_PORTn. By default, the pattern is set
- *            to 4 x /E/ (i.e. a column containing all error characters), thus
- *            the statistic tracks the number of Error columns received at any
- *            time. If XMAC_STATS_RX_XGMII_BEHAV_COLUMN2_PORTn.NEAR_COL1 is set
- *            to 1, then this stat increments when COLUMN2 is found within 'n'
- *            clocks after COLUMN1. Here, 'n' is defined by
- *            XMAC_STATS_RX_XGMII_BEHAV_COLUMN2_PORTn.NUM_COL (if 'n' is set to
- *            0, then it means to search anywhere for COLUMN2).
- * @rx_jettison: Count of received frames that are jettisoned because internal
- *            buffers are full.
- * @rx_remote_fault: Maintains a count of the number of times that link
- *            transitioned from "up" to "down" due to a remote fault.
- *
- * XMAC Port Statistics.
- */
-struct vxge_hw_xmac_port_stats {
-/*0x000*/              u64     tx_ttl_frms;
-/*0x008*/              u64     tx_ttl_octets;
-/*0x010*/              u64     tx_data_octets;
-/*0x018*/              u64     tx_mcast_frms;
-/*0x020*/              u64     tx_bcast_frms;
-/*0x028*/              u64     tx_ucast_frms;
-/*0x030*/              u64     tx_tagged_frms;
-/*0x038*/              u64     tx_vld_ip;
-/*0x040*/              u64     tx_vld_ip_octets;
-/*0x048*/              u64     tx_icmp;
-/*0x050*/              u64     tx_tcp;
-/*0x058*/              u64     tx_rst_tcp;
-/*0x060*/              u64     tx_udp;
-/*0x068*/              u32     tx_parse_error;
-/*0x06c*/              u32     tx_unknown_protocol;
-/*0x070*/              u64     tx_pause_ctrl_frms;
-/*0x078*/              u32     tx_marker_pdu_frms;
-/*0x07c*/              u32     tx_lacpdu_frms;
-/*0x080*/              u32     tx_drop_ip;
-/*0x084*/              u32     tx_marker_resp_pdu_frms;
-/*0x088*/              u32     tx_xgmii_char2_match;
-/*0x08c*/              u32     tx_xgmii_char1_match;
-/*0x090*/              u32     tx_xgmii_column2_match;
-/*0x094*/              u32     tx_xgmii_column1_match;
-/*0x098*/              u32     unused1;
-/*0x09c*/              u16     tx_any_err_frms;
-/*0x09e*/              u16     tx_drop_frms;
-/*0x0a0*/              u64     rx_ttl_frms;
-/*0x0a8*/              u64     rx_vld_frms;
-/*0x0b0*/              u64     rx_offload_frms;
-/*0x0b8*/              u64     rx_ttl_octets;
-/*0x0c0*/              u64     rx_data_octets;
-/*0x0c8*/              u64     rx_offload_octets;
-/*0x0d0*/              u64     rx_vld_mcast_frms;
-/*0x0d8*/              u64     rx_vld_bcast_frms;
-/*0x0e0*/              u64     rx_accepted_ucast_frms;
-/*0x0e8*/              u64     rx_accepted_nucast_frms;
-/*0x0f0*/              u64     rx_tagged_frms;
-/*0x0f8*/              u64     rx_long_frms;
-/*0x100*/              u64     rx_usized_frms;
-/*0x108*/              u64     rx_osized_frms;
-/*0x110*/              u64     rx_frag_frms;
-/*0x118*/              u64     rx_jabber_frms;
-/*0x120*/              u64     rx_ttl_64_frms;
-/*0x128*/              u64     rx_ttl_65_127_frms;
-/*0x130*/              u64     rx_ttl_128_255_frms;
-/*0x138*/              u64     rx_ttl_256_511_frms;
-/*0x140*/              u64     rx_ttl_512_1023_frms;
-/*0x148*/              u64     rx_ttl_1024_1518_frms;
-/*0x150*/              u64     rx_ttl_1519_4095_frms;
-/*0x158*/              u64     rx_ttl_4096_8191_frms;
-/*0x160*/              u64     rx_ttl_8192_max_frms;
-/*0x168*/              u64     rx_ttl_gt_max_frms;
-/*0x170*/              u64     rx_ip;
-/*0x178*/              u64     rx_accepted_ip;
-/*0x180*/              u64     rx_ip_octets;
-/*0x188*/              u64     rx_err_ip;
-/*0x190*/              u64     rx_icmp;
-/*0x198*/              u64     rx_tcp;
-/*0x1a0*/              u64     rx_udp;
-/*0x1a8*/              u64     rx_err_tcp;
-/*0x1b0*/              u64     rx_pause_count;
-/*0x1b8*/              u64     rx_pause_ctrl_frms;
-/*0x1c0*/              u64     rx_unsup_ctrl_frms;
-/*0x1c8*/              u64     rx_fcs_err_frms;
-/*0x1d0*/              u64     rx_in_rng_len_err_frms;
-/*0x1d8*/              u64     rx_out_rng_len_err_frms;
-/*0x1e0*/              u64     rx_drop_frms;
-/*0x1e8*/              u64     rx_discarded_frms;
-/*0x1f0*/              u64     rx_drop_ip;
-/*0x1f8*/              u64     rx_drop_udp;
-/*0x200*/              u32     rx_marker_pdu_frms;
-/*0x204*/              u32     rx_lacpdu_frms;
-/*0x208*/              u32     rx_unknown_pdu_frms;
-/*0x20c*/              u32     rx_marker_resp_pdu_frms;
-/*0x210*/              u32     rx_fcs_discard;
-/*0x214*/              u32     rx_illegal_pdu_frms;
-/*0x218*/              u32     rx_switch_discard;
-/*0x21c*/              u32     rx_len_discard;
-/*0x220*/              u32     rx_rpa_discard;
-/*0x224*/              u32     rx_l2_mgmt_discard;
-/*0x228*/              u32     rx_rts_discard;
-/*0x22c*/              u32     rx_trash_discard;
-/*0x230*/              u32     rx_buff_full_discard;
-/*0x234*/              u32     rx_red_discard;
-/*0x238*/              u32     rx_xgmii_ctrl_err_cnt;
-/*0x23c*/              u32     rx_xgmii_data_err_cnt;
-/*0x240*/              u32     rx_xgmii_char1_match;
-/*0x244*/              u32     rx_xgmii_err_sym;
-/*0x248*/              u32     rx_xgmii_column1_match;
-/*0x24c*/              u32     rx_xgmii_char2_match;
-/*0x250*/              u32     rx_local_fault;
-/*0x254*/              u32     rx_xgmii_column2_match;
-/*0x258*/              u32     rx_jettison;
-/*0x25c*/              u32     rx_remote_fault;
-} __packed;
-
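Since @rx_pause_count above is denominated in pause quanta and one quantum is
512 bit times, converting the counter to wall-clock pause time needs only the
link rate: at 10 Gb/s, one quantum is 512 / 10 bits-per-ns = 51.2 ns. A
hypothetical helper (the function name and the megabit parameterization are
this sketch's own, not the driver's):

static inline u64 vxge_rx_pause_ns(u64 rx_pause_count, u64 link_mbps)
{
	/*
	 * 512 bit times per pause quantum; a link running at link_mbps
	 * moves link_mbps / 1000 bits per nanosecond, so one quantum
	 * lasts 512 * 1000 / link_mbps ns (51.2 ns at link_mbps=10000).
	 * Multiply before dividing to keep the fractional part.
	 */
	return rx_pause_count * 512ULL * 1000ULL / link_mbps;
}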
-/**
- * struct vxge_hw_xmac_vpath_tx_stats - XMAC Vpath Tx Statistics
- *
- * @tx_ttl_eth_frms: Count of successfully transmitted MAC frames.
- * @tx_ttl_eth_octets: Count of total octets of transmitted frames,
- *             not including framing characters (i.e. less framing bits).
- *             To determine the total octets of transmitted frames, including
- *             framing characters, multiply TX_TTL_ETH_FRMS by 8 and add it to
- *             this stat (the device always prepends 8 bytes of preamble for
- *             each frame).
- * @tx_data_octets: Count of data and padding octets of successfully transmitted
- *             frames.
- * @tx_mcast_frms: Count of successfully transmitted frames to a group address
- *             other than the broadcast address.
- * @tx_bcast_frms: Count of successfully transmitted frames to the broadcast
- *             group address.
- * @tx_ucast_frms: Count of transmitted frames containing a unicast address.
- *             Includes discarded frames that are not sent to the network.
- * @tx_tagged_frms: Count of transmitted frames containing a VLAN tag.
- * @tx_vld_ip: Count of transmitted IP datagrams that are passed to the network.
- * @tx_vld_ip_octets: Count of total octets of transmitted IP datagrams that
- *            are passed to the network.
- * @tx_icmp: Count of transmitted ICMP messages. Includes messages not sent due
- *            to problems within ICMP.
- * @tx_tcp: Count of transmitted TCP segments. Does not include segments
- *            containing retransmitted octets.
- * @tx_rst_tcp: Count of transmitted TCP segments containing the RST flag.
- * @tx_udp: Count of transmitted UDP datagrams.
- * @tx_unknown_protocol: Increments when the TPA encounters an unknown protocol,
- *            such as a new IPv6 extension header, or an unsupported Routing
- *            Type. The packet still has a checksum calculated but it may be
- *            incorrect.
- * @tx_lost_ip: Count of transmitted IP datagrams that could not be passed
- *            to the network. Increments because of: 1) An internal processing
- *            error (such as an uncorrectable ECC error). 2) A frame parsing
- *            error during IP checksum calculation.
- * @tx_parse_error: Increments when the TPA is unable to parse a packet. This
- *            generally occurs when a packet is corrupt somehow, including
- *            packets that have IP version mismatches, invalid Layer 2 control
- *            fields, etc. L3/L4 checksums are not offloaded, but the packet
- *            is still transmitted.
- * @tx_tcp_offload: For frames belonging to offloaded sessions only, a count
- *            of transmitted TCP segments. Does not include segments containing
- *            retransmitted octets.
- * @tx_retx_tcp_offload: For frames belonging to offloaded sessions only, the
- *            total number of segments retransmitted. Retransmitted segments
- *            that are sourced by the host are counted by the host.
- * @tx_lost_ip_offload: For frames belonging to offloaded sessions only, a count
- *            of transmitted IP datagrams that could not be passed to the
- *            network.
- *
- * XMAC Vpath TX Statistics.
- */
-struct vxge_hw_xmac_vpath_tx_stats {
-       u64     tx_ttl_eth_frms;
-       u64     tx_ttl_eth_octets;
-       u64     tx_data_octets;
-       u64     tx_mcast_frms;
-       u64     tx_bcast_frms;
-       u64     tx_ucast_frms;
-       u64     tx_tagged_frms;
-       u64     tx_vld_ip;
-       u64     tx_vld_ip_octets;
-       u64     tx_icmp;
-       u64     tx_tcp;
-       u64     tx_rst_tcp;
-       u64     tx_udp;
-       u32     tx_unknown_protocol;
-       u32     tx_lost_ip;
-       u32     unused1;
-       u32     tx_parse_error;
-       u64     tx_tcp_offload;
-       u64     tx_retx_tcp_offload;
-       u64     tx_lost_ip_offload;
-} __packed;
-
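Per the @tx_ttl_eth_octets description above, the device always prepends 8
bytes of preamble per frame but does not count them in the octet statistic,
so the on-wire total is recovered with one multiply-add. A hypothetical
helper illustrating that arithmetic:

static inline u64
vxge_tx_octets_incl_preamble(const struct vxge_hw_xmac_vpath_tx_stats *s)
{
	/* 8 preamble bytes per transmitted frame, as documented above. */
	return s->tx_ttl_eth_octets + 8ULL * s->tx_ttl_eth_frms;
}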
-/**
- * struct vxge_hw_xmac_vpath_rx_stats - XMAC Vpath RX Statistics
- *
- * @rx_ttl_eth_frms: Count of successfully received MAC frames.
- * @rx_vld_frms: Count of successfully received MAC frames. Does not include
- *            frames received with frame-too-long, FCS, or length errors.
- * @rx_offload_frms: Count of offloaded received frames that are passed to
- *            the host.
- * @rx_ttl_eth_octets: Count of total octets of received frames, not including
- *            framing characters (i.e. less framing bits). Only counts octets
- *            of frames that are at least 14 bytes (18 bytes for VLAN-tagged)
- *            before FCS. To determine the total octets of received frames,
- *            including framing characters, multiply RX_TTL_ETH_FRMS by 8 and
- *            add it to this stat (the stat RX_TTL_ETH_FRMS only counts frames
- *            that have the required 8 bytes of preamble).
- * @rx_data_octets: Count of data and padding octets of successfully received
- *            frames. Does not include frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_offload_octets: Count of total octets, not including framing characters,
- *            of offloaded received frames that are passed to the host.
- * @rx_vld_mcast_frms: Count of successfully received MAC frames containing a
- *            nonbroadcast group address. Does not include frames received with
- *            frame-too-long, FCS, or length errors.
- * @rx_vld_bcast_frms: Count of successfully received MAC frames containing the
- *            broadcast group address. Does not include frames received with
- *            frame-too-long, FCS, or length errors.
- * @rx_accepted_ucast_frms: Count of successfully received frames containing
- *            a unicast address. Only includes frames that are passed to the
- *            system.
- * @rx_accepted_nucast_frms: Count of successfully received frames containing
- *            a non-unicast (broadcast or multicast) address. Only includes
- *            frames that are passed to the system. Could include, for instance,
- *            non-unicast frames that contain FCS errors if the MAC_ERROR_CFG
- *            register is set to pass FCS-errored frames to the host.
- * @rx_tagged_frms: Count of received frames containing a VLAN tag.
- * @rx_long_frms: Count of received frames that are longer than RX_MAX_PYLD_LEN
- *            + 18 bytes (+ 22 bytes if VLAN-tagged).
- * @rx_usized_frms: Count of received frames of length (including FCS, but not
- *            framing bits) less than 64 octets, that are otherwise well-formed.
- *            In other words, counts runts.
- * @rx_osized_frms: Count of received frames of length (including FCS, but not
- *            framing bits) more than 1518 octets, that are otherwise
- *            well-formed.
- * @rx_frag_frms: Count of received frames of length (including FCS, but not
- *            framing bits) less than 64 octets that had bad FCS.
- *            In other words, counts fragments.
- * @rx_jabber_frms: Count of received frames of length (including FCS, but not
- *            framing bits) more than 1518 octets that had bad FCS. In other
- *            words, counts jabbers.
- * @rx_ttl_64_frms: Count of total received MAC frames with length (including
- *            FCS, but not framing bits) of exactly 64 octets. Includes frames
- *            received with frame-too-long, FCS, or length errors.
- * @rx_ttl_65_127_frms: Count of total received MAC frames
- *             with length (including
- *            FCS, but not framing bits) of between 65 and 127 octets inclusive.
- *            Includes frames received with frame-too-long, FCS,
- *            or length errors.
- * @rx_ttl_128_255_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits)
- *            of between 128 and 255 octets
- *            inclusive. Includes frames received with frame-too-long, FCS,
- *            or length errors.
- * @rx_ttl_256_511_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits)
- *            of between 256 and 511 octets
- *            inclusive. Includes frames received with frame-too-long, FCS, or
- *            length errors.
- * @rx_ttl_512_1023_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 512 and 1023
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_1024_1518_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 1024 and 1518
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_1519_4095_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 1519 and 4095
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_4096_8191_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 4096 and 8191
- *            octets inclusive. Includes frames received with frame-too-long,
- *            FCS, or length errors.
- * @rx_ttl_8192_max_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) of between 8192 and
- *            RX_MAX_PYLD_LEN+18 octets inclusive. Includes frames received
- *            with frame-too-long, FCS, or length errors.
- * @rx_ttl_gt_max_frms: Count of total received MAC frames with length
- *            (including FCS, but not framing bits) exceeding RX_MAX_PYLD_LEN+18
- *            (+22 bytes if VLAN-tagged) octets inclusive. Includes frames
- *            received with frame-too-long, FCS, or length errors.
- * @rx_ip: Count of received IP datagrams. Includes errored IP datagrams.
- * @rx_accepted_ip: Count of received IP datagrams that are passed to the
- *            system.
- * @rx_ip_octets: Count of the number of octets in received IP datagrams.
- *            Includes errored IP datagrams.
- * @rx_err_ip: Count of received IP datagrams containing errors. For example,
- *            bad IP checksum.
- * @rx_icmp: Count of received ICMP messages. Includes errored ICMP messages.
- * @rx_tcp: Count of received TCP segments. Includes errored TCP segments.
- *             Note: This stat contains a count of all received TCP segments,
- *             regardless of whether or not they pertain to an established
- *             connection.
- * @rx_udp: Count of received UDP datagrams.
- * @rx_err_tcp: Count of received TCP segments containing errors. For example,
- *             bad TCP checksum.
- * @rx_lost_frms: Count of received frames that could not be passed to the host.
- *             See RX_QUEUE_FULL_DISCARD and RX_RED_DISCARD
- *             for a list of reasons.
- * @rx_lost_ip: Count of received IP datagrams that could not be passed to
- *             the host. See RX_LOST_FRMS for a list of reasons.
- * @rx_lost_ip_offload: For frames belonging to offloaded sessions only, a count
- *             of received IP datagrams that could not be passed to the host.
- *             See RX_LOST_FRMS for a list of reasons.
- * @rx_various_discard: Count of received frames that are discarded because
- *             the target receive queue is full.
- * @rx_sleep_discard: Count of received frames that are discarded because the
- *            target VPATH is asleep (a Wake-on-LAN magic packet can be used
- *            to awaken the VPATH).
- * @rx_red_discard: Count of received frames that are discarded because of RED
- *            (Random Early Discard).
- * @rx_queue_full_discard: Count of received frames that are discarded because
- *             the target receive queue is full.
- * @rx_mpa_ok_frms: Count of received frames that pass the MPA checks.
- *
- * XMAC Vpath RX Statistics.
- */
-struct vxge_hw_xmac_vpath_rx_stats {
-       u64     rx_ttl_eth_frms;
-       u64     rx_vld_frms;
-       u64     rx_offload_frms;
-       u64     rx_ttl_eth_octets;
-       u64     rx_data_octets;
-       u64     rx_offload_octets;
-       u64     rx_vld_mcast_frms;
-       u64     rx_vld_bcast_frms;
-       u64     rx_accepted_ucast_frms;
-       u64     rx_accepted_nucast_frms;
-       u64     rx_tagged_frms;
-       u64     rx_long_frms;
-       u64     rx_usized_frms;
-       u64     rx_osized_frms;
-       u64     rx_frag_frms;
-       u64     rx_jabber_frms;
-       u64     rx_ttl_64_frms;
-       u64     rx_ttl_65_127_frms;
-       u64     rx_ttl_128_255_frms;
-       u64     rx_ttl_256_511_frms;
-       u64     rx_ttl_512_1023_frms;
-       u64     rx_ttl_1024_1518_frms;
-       u64     rx_ttl_1519_4095_frms;
-       u64     rx_ttl_4096_8191_frms;
-       u64     rx_ttl_8192_max_frms;
-       u64     rx_ttl_gt_max_frms;
-       u64     rx_ip;
-       u64     rx_accepted_ip;
-       u64     rx_ip_octets;
-       u64     rx_err_ip;
-       u64     rx_icmp;
-       u64     rx_tcp;
-       u64     rx_udp;
-       u64     rx_err_tcp;
-       u64     rx_lost_frms;
-       u64     rx_lost_ip;
-       u64     rx_lost_ip_offload;
-       u16     rx_various_discard;
-       u16     rx_sleep_discard;
-       u16     rx_red_discard;
-       u16     rx_queue_full_discard;
-       u64     rx_mpa_ok_frms;
-} __packed;
-
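The @rx_ttl_* buckets above partition every received frame by length (errored
frames included), so their sum gives an independent total that can be
cross-checked against the other receive counters when debugging a stats
snapshot. A hypothetical helper:

static inline u64
vxge_rx_histogram_total(const struct vxge_hw_xmac_vpath_rx_stats *s)
{
	/* All received frames, including frame-too-long/FCS/length errors. */
	return s->rx_ttl_64_frms + s->rx_ttl_65_127_frms +
	       s->rx_ttl_128_255_frms + s->rx_ttl_256_511_frms +
	       s->rx_ttl_512_1023_frms + s->rx_ttl_1024_1518_frms +
	       s->rx_ttl_1519_4095_frms + s->rx_ttl_4096_8191_frms +
	       s->rx_ttl_8192_max_frms + s->rx_ttl_gt_max_frms;
}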
-/**
- * struct vxge_hw_xmac_stats - XMAC Statistics
- *
- * @aggr_stats: Statistics on aggregate ports (port 0, port 1)
- * @port_stats: Statistics on ports (wire 0, wire 1, LAG)
- * @vpath_tx_stats: Per vpath XMAC TX stats
- * @vpath_rx_stats: Per vpath XMAC RX stats
- *
- * XMAC Statistics.
- */
-struct vxge_hw_xmac_stats {
-       struct vxge_hw_xmac_aggr_stats
-                               aggr_stats[VXGE_HW_MAC_MAX_MAC_PORT_ID];
-       struct vxge_hw_xmac_port_stats
-                               port_stats[VXGE_HW_MAC_MAX_MAC_PORT_ID+1];
-       struct vxge_hw_xmac_vpath_tx_stats
-                               vpath_tx_stats[VXGE_HW_MAX_VIRTUAL_PATHS];
-       struct vxge_hw_xmac_vpath_rx_stats
-                               vpath_rx_stats[VXGE_HW_MAX_VIRTUAL_PATHS];
-};
-
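Since struct vxge_hw_xmac_stats nests fixed-size arrays keyed by port and by
virtual path, a device-wide figure is just a walk over the relevant array. A
minimal sketch, assuming the VXGE_HW_MAX_VIRTUAL_PATHS constant defined
elsewhere in this header (the helper name is this sketch's own):

static inline u64 vxge_total_rx_eth_frms(const struct vxge_hw_xmac_stats *st)
{
	u64 total = 0;
	unsigned int vp;

	/* Sum successfully received frames across every virtual path. */
	for (vp = 0; vp < VXGE_HW_MAX_VIRTUAL_PATHS; vp++)
		total += st->vpath_rx_stats[vp].rx_ttl_eth_frms;
	return total;
}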
-/**
- * struct vxge_hw_vpath_stats_hw_info - Titan vpath hardware statistics.
- * @ini_num_mwr_sent: The number of PCI memory writes initiated by the PIC block
- *             for the given VPATH
- * @ini_num_mrd_sent: The number of PCI memory reads initiated by the PIC block
- * @ini_num_cpl_rcvd: The number of PCI read completions received by the
- *             PIC block
- * @ini_num_mwr_byte_sent: The number of PCI memory write bytes sent by the PIC
- *             block to the host
- * @ini_num_cpl_byte_rcvd: The number of PCI read completion bytes received by
- *             the PIC block
- * @wrcrdtarb_xoff: TBD
- * @rdcrdtarb_xoff: TBD
- * @vpath_genstats_count0: TBD
- * @vpath_genstats_count1: TBD
- * @vpath_genstats_count2: TBD
- * @vpath_genstats_count3: TBD
- * @vpath_genstats_count4: TBD
- * @vpath_genstats_count5: TBD
- * @tx_stats: Transmit stats
- * @rx_stats: Receive stats
- * @prog_event_vnum1: Programmable statistic. Increments when internal logic
- *             detects a certain event. See register
- *             XMAC_STATS_CFG.EVENT_VNUM1_CFG for more information.
- * @prog_event_vnum0: Programmable statistic. Increments when internal logic
- *             detects a certain event. See register
- *             XMAC_STATS_CFG.EVENT_VNUM0_CFG for more information.
- * @prog_event_vnum3: Programmable statistic. Increments when internal logic
- *             detects a certain event. See register
- *             XMAC_STATS_CFG.EVENT_VNUM3_CFG for more information.
- * @prog_event_vnum2: Programmable statistic. Increments when internal logic
- *             detects a certain event. See register
- *             XMAC_STATS_CFG.EVENT_VNUM2_CFG for more information.
- * @rx_multi_cast_frame_discard: TBD
- * @rx_frm_transferred: TBD
- * @rxd_returned: TBD
- * @rx_mpa_len_fail_frms: Count of received frames
- *             that fail the MPA length check
- * @rx_mpa_mrk_fail_frms: Count of received frames
- *             that fail the MPA marker check
- * @rx_mpa_crc_fail_frms: Count of received frames that fail the MPA CRC check
- * @rx_permitted_frms: Count of frames that pass through the FAU and on to the
- *             frame buffer (and subsequently to the host).
- * @rx_vp_reset_discarded_frms: Count of receive frames that are discarded
- *             because the VPATH is in reset
- * @rx_wol_frms: Count of received "magic packet" frames. Stat increments
- *             whenever the received frame matches the VPATH's Wake-on-LAN
- *             signature(s) CRC.
- * @tx_vp_reset_discarded_frms: Count of transmit frames that are discarded
- *             because the VPATH is in reset. Includes frames that are discarded
- *             because the current VPIN does not match the VPIN of the frame.
- *
- * Titan vpath hardware statistics.
- */
-struct vxge_hw_vpath_stats_hw_info {
-/*0x000*/      u32 ini_num_mwr_sent;
-/*0x004*/      u32 unused1;
-/*0x008*/      u32 ini_num_mrd_sent;
-/*0x00c*/      u32 unused2;
-/*0x010*/      u32 ini_num_cpl_rcvd;
-/*0x014*/      u32 unused3;
-/*0x018*/      u64 ini_num_mwr_byte_sent;
-/*0x020*/      u64 ini_num_cpl_byte_rcvd;
-/*0x028*/      u32 wrcrdtarb_xoff;
-/*0x02c*/      u32 unused4;
-/*0x030*/      u32 rdcrdtarb_xoff;
-/*0x034*/      u32 unused5;
-/*0x038*/      u32 vpath_genstats_count0;
-/*0x03c*/      u32 vpath_genstats_count1;
-/*0x040*/      u32 vpath_genstats_count2;
-/*0x044*/      u32 vpath_genstats_count3;
-/*0x048*/      u32 vpath_genstats_count4;
-/*0x04c*/      u32 unused6;
-/*0x050*/      u32 vpath_genstats_count5;
-/*0x054*/      u32 unused7;
-/*0x058*/      struct vxge_hw_xmac_vpath_tx_stats tx_stats;
-/*0x0e8*/      struct vxge_hw_xmac_vpath_rx_stats rx_stats;
-/*0x220*/      u64 unused9;
-/*0x228*/      u32 prog_event_vnum1;
-/*0x22c*/      u32 prog_event_vnum0;
-/*0x230*/      u32 prog_event_vnum3;
-/*0x234*/      u32 prog_event_vnum2;
-/*0x238*/      u16 rx_multi_cast_frame_discard;
-/*0x23a*/      u8 unused10[6];
-/*0x240*/      u32 rx_frm_transferred;
-/*0x244*/      u32 unused11;
-/*0x248*/      u16 rxd_returned;
-/*0x24a*/      u8 unused12[6];
-/*0x252*/      u16 rx_mpa_len_fail_frms;
-/*0x254*/      u16 rx_mpa_mrk_fail_frms;
-/*0x256*/      u16 rx_mpa_crc_fail_frms;
-/*0x258*/      u16 rx_permitted_frms;
-/*0x25c*/      u64 rx_vp_reset_discarded_frms;
-/*0x25e*/      u64 rx_wol_frms;
-/*0x260*/      u64 tx_vp_reset_discarded_frms;
-} __packed;
-
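Free-running hardware counters like the block above are normally consumed as
deltas between two snapshots; with unsigned u64 arithmetic the subtraction
stays correct even across counter wraparound. A hypothetical sketch of that
pattern:

static inline u64
vxge_rx_frms_delta(const struct vxge_hw_vpath_stats_hw_info *cur,
		   const struct vxge_hw_vpath_stats_hw_info *prev)
{
	/* Frames received since the previous snapshot; unsigned
	 * subtraction handles wraparound of the running counter. */
	return cur->rx_stats.rx_ttl_eth_frms - prev->rx_stats.rx_ttl_eth_frms;
}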
-
-/**
- * struct vxge_hw_device_stats_mrpcim_info - Titan mrpcim hardware statistics.
- * @pic.ini_rd_drop     0x0000          4       Number of DMA reads initiated
- *  by the adapter that were discarded because the VPATH is out of service
- * @pic.ini_wr_drop    0x0004  4       Number of DMA writes initiated by the
- *  adapter that were discarded because the VPATH is out of service
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane0]    0x0008  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane1]    0x0010  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane2]    0x0018  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane3]    0x0020  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane4]    0x0028  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane5]    0x0030  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane6]    0x0038  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane7]    0x0040  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane8]    0x0048  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane9]    0x0050  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane10]   0x0058  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane11]   0x0060  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane12]   0x0068  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane13]   0x0070  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane14]   0x0078  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane15]   0x0080  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_ph_crdt_depleted[vplane16]   0x0088  4       Number of times
- *  the posted header credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane0]    0x0090  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane1]    0x0098  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane2]    0x00a0  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane3]    0x00a8  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane4]    0x00b0  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane5]    0x00b8  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane6]    0x00c0  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane7]    0x00c8  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane8]    0x00d0  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane9]    0x00d8  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane10]   0x00e0  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane11]   0x00e8  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane12]   0x00f0  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane13]   0x00f8  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane14]   0x0100  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane15]   0x0108  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.wrcrdtarb_pd_crdt_depleted[vplane16]   0x0110  4       Number of times
- *  the posted data credits for upstream PCI writes were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane0]   0x0118  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane1]   0x0120  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane2]   0x0128  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane3]   0x0130  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane4]   0x0138  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane5]   0x0140  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane6]   0x0148  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane7]   0x0150  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane8]   0x0158  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane9]   0x0160  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane10]  0x0168  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane11]  0x0170  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane12]  0x0178  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane13]  0x0180  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane14]  0x0188  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane15]  0x0190  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.rdcrdtarb_nph_crdt_depleted[vplane16]  0x0198  4       Number of times
- *  the non-posted header credits for upstream PCI reads were depleted
- * @pic.ini_rd_vpin_drop       0x01a0  4       Number of DMA reads initiated by
- *  the adapter that were discarded because the VPATH instance number does
- *  not match
- * @pic.ini_wr_vpin_drop       0x01a4  4       Number of DMA writes initiated
- *  by the adapter that were discarded because the VPATH instance number
- *  does not match
- * @pic.genstats_count0        0x01a8  4       Configurable statistic #1. Refer
- *  to the GENSTATS0_CFG for information on configuring this statistic
- * @pic.genstats_count1        0x01ac  4       Configurable statistic #2. Refer
- *  to the GENSTATS1_CFG for information on configuring this statistic
- * @pic.genstats_count2        0x01b0  4       Configurable statistic #3. Refer
- *  to the GENSTATS2_CFG for information on configuring this statistic
- * @pic.genstats_count3        0x01b4  4       Configurable statistic #4. Refer
- *  to the GENSTATS3_CFG for information on configuring this statistic
- * @pic.genstats_count4        0x01b8  4       Configurable statistic #5. Refer
- *  to the GENSTATS4_CFG for information on configuring this statistic
- * @pic.genstats_count5        0x01c0  4       Configurable statistic #6. Refer
- *  to the GENSTATS5_CFG for information on configuring this statistic
- * @pci.rstdrop_cpl    0x01c8  4
- * @pci.rstdrop_msg    0x01cc  4
- * @pci.rstdrop_client1        0x01d0  4
- * @pci.rstdrop_client0        0x01d4  4
- * @pci.rstdrop_client2        0x01d8  4
- * @pci.depl_cplh[vplane0]     0x01e2  2       Number of times completion
- *  header credits were depleted
- * @pci.depl_nph[vplane0]      0x01e4  2       Number of times non-posted
- *  header credits were depleted
- * @pci.depl_ph[vplane0]       0x01e6  2       Number of times the posted
- *  header credits were depleted
- * @pci.depl_cplh[vplane1]     0x01ea  2
- * @pci.depl_nph[vplane1]      0x01ec  2
- * @pci.depl_ph[vplane1]       0x01ee  2
- * @pci.depl_cplh[vplane2]     0x01f2  2
- * @pci.depl_nph[vplane2]      0x01f4  2
- * @pci.depl_ph[vplane2]       0x01f6  2
- * @pci.depl_cplh[vplane3]     0x01fa  2
- * @pci.depl_nph[vplane3]      0x01fc  2
- * @pci.depl_ph[vplane3]       0x01fe  2
- * @pci.depl_cplh[vplane4]     0x0202  2
- * @pci.depl_nph[vplane4]      0x0204  2
- * @pci.depl_ph[vplane4]       0x0206  2
- * @pci.depl_cplh[vplane5]     0x020a  2
- * @pci.depl_nph[vplane5]      0x020c  2
- * @pci.depl_ph[vplane5]       0x020e  2
- * @pci.depl_cplh[vplane6]     0x0212  2
- * @pci.depl_nph[vplane6]      0x0214  2
- * @pci.depl_ph[vplane6]       0x0216  2
- * @pci.depl_cplh[vplane7]     0x021a  2
- * @pci.depl_nph[vplane7]      0x021c  2
- * @pci.depl_ph[vplane7]       0x021e  2
- * @pci.depl_cplh[vplane8]     0x0222  2
- * @pci.depl_nph[vplane8]      0x0224  2
- * @pci.depl_ph[vplane8]       0x0226  2
- * @pci.depl_cplh[vplane9]     0x022a  2
- * @pci.depl_nph[vplane9]      0x022c  2
- * @pci.depl_ph[vplane9]       0x022e  2
- * @pci.depl_cplh[vplane10]    0x0232  2
- * @pci.depl_nph[vplane10]     0x0234  2
- * @pci.depl_ph[vplane10]      0x0236  2
- * @pci.depl_cplh[vplane11]    0x023a  2
- * @pci.depl_nph[vplane11]     0x023c  2
- * @pci.depl_ph[vplane11]      0x023e  2
- * @pci.depl_cplh[vplane12]    0x0242  2
- * @pci.depl_nph[vplane12]     0x0244  2
- * @pci.depl_ph[vplane12]      0x0246  2
- * @pci.depl_cplh[vplane13]    0x024a  2
- * @pci.depl_nph[vplane13]     0x024c  2
- * @pci.depl_ph[vplane13]      0x024e  2
- * @pci.depl_cplh[vplane14]    0x0252  2
- * @pci.depl_nph[vplane14]     0x0254  2
- * @pci.depl_ph[vplane14]      0x0256  2
- * @pci.depl_cplh[vplane15]    0x025a  2
- * @pci.depl_nph[vplane15]     0x025c  2
- * @pci.depl_ph[vplane15]      0x025e  2
- * @pci.depl_cplh[vplane16]    0x0262  2
- * @pci.depl_nph[vplane16]     0x0264  2
- * @pci.depl_ph[vplane16]      0x0266  2
- * @pci.depl_cpld[vplane0]     0x026a  2       Number of times completion data
- *  credits were depleted
- * @pci.depl_npd[vplane0]      0x026c  2       Number of times non-posted data
- *  credits were depleted
- * @pci.depl_pd[vplane0]       0x026e  2       Number of times the posted data
- *  credits were depleted
- * @pci.depl_cpld[vplane1]     0x0272  2
- * @pci.depl_npd[vplane1]      0x0274  2
- * @pci.depl_pd[vplane1]       0x0276  2
- * @pci.depl_cpld[vplane2]     0x027a  2
- * @pci.depl_npd[vplane2]      0x027c  2
- * @pci.depl_pd[vplane2]       0x027e  2
- * @pci.depl_cpld[vplane3]     0x0282  2
- * @pci.depl_npd[vplane3]      0x0284  2
- * @pci.depl_pd[vplane3]       0x0286  2
- * @pci.depl_cpld[vplane4]     0x028a  2
- * @pci.depl_npd[vplane4]      0x028c  2
- * @pci.depl_pd[vplane4]       0x028e  2
- * @pci.depl_cpld[vplane5]     0x0292  2
- * @pci.depl_npd[vplane5]      0x0294  2
- * @pci.depl_pd[vplane5]       0x0296  2
- * @pci.depl_cpld[vplane6]     0x029a  2
- * @pci.depl_npd[vplane6]      0x029c  2
- * @pci.depl_pd[vplane6]       0x029e  2
- * @pci.depl_cpld[vplane7]     0x02a2  2
- * @pci.depl_npd[vplane7]      0x02a4  2
- * @pci.depl_pd[vplane7]       0x02a6  2
- * @pci.depl_cpld[vplane8]     0x02aa  2
- * @pci.depl_npd[vplane8]      0x02ac  2
- * @pci.depl_pd[vplane8]       0x02ae  2
- * @pci.depl_cpld[vplane9]     0x02b2  2
- * @pci.depl_npd[vplane9]      0x02b4  2
- * @pci.depl_pd[vplane9]       0x02b6  2
- * @pci.depl_cpld[vplane10]    0x02ba  2
- * @pci.depl_npd[vplane10]     0x02bc  2
- * @pci.depl_pd[vplane10]      0x02be  2
- * @pci.depl_cpld[vplane11]    0x02c2  2
- * @pci.depl_npd[vplane11]     0x02c4  2
- * @pci.depl_pd[vplane11]      0x02c6  2
- * @pci.depl_cpld[vplane12]    0x02ca  2
- * @pci.depl_npd[vplane12]     0x02cc  2
- * @pci.depl_pd[vplane12]      0x02ce  2
- * @pci.depl_cpld[vplane13]    0x02d2  2
- * @pci.depl_npd[vplane13]     0x02d4  2
- * @pci.depl_pd[vplane13]      0x02d6  2
- * @pci.depl_cpld[vplane14]    0x02da  2
- * @pci.depl_npd[vplane14]     0x02dc  2
- * @pci.depl_pd[vplane14]      0x02de  2
- * @pci.depl_cpld[vplane15]    0x02e2  2
- * @pci.depl_npd[vplane15]     0x02e4  2
- * @pci.depl_pd[vplane15]      0x02e6  2
- * @pci.depl_cpld[vplane16]    0x02ea  2
- * @pci.depl_npd[vplane16]     0x02ec  2
- * @pci.depl_pd[vplane16]      0x02ee  2
- * @xgmac_port[3];
- * @xgmac_aggr[2];
- * @xgmac.global_prog_event_gnum0      0x0ae0  8       Programmable statistic.
- *  Increments when internal logic detects a certain event. See register
- *  XMAC_STATS_GLOBAL_CFG.EVENT_GNUM0_CFG for more information.
- * @xgmac.global_prog_event_gnum1      0x0ae8  8       Programmable statistic.
- *  Increments when internal logic detects a certain event. See register
- *  XMAC_STATS_GLOBAL_CFG.EVENT_GNUM1_CFG for more information.
- * @xgmac.orp_lro_events       0x0af8  8
- * @xgmac.orp_bs_events        0x0b00  8
- * @xgmac.orp_iwarp_events     0x0b08  8
- * @xgmac.tx_permitted_frms    0x0b14  4
- * @xgmac.port2_tx_any_frms    0x0b1d  1
- * @xgmac.port1_tx_any_frms    0x0b1e  1
- * @xgmac.port0_tx_any_frms    0x0b1f  1
- * @xgmac.port2_rx_any_frms    0x0b25  1
- * @xgmac.port1_rx_any_frms    0x0b26  1
- * @xgmac.port0_rx_any_frms    0x0b27  1
- *
- * Titan mrpcim hardware statistics.
- */
-struct vxge_hw_device_stats_mrpcim_info {
-/*0x0000*/     u32     pic_ini_rd_drop;
-/*0x0004*/     u32     pic_ini_wr_drop;
-/*0x0008*/     struct {
-       /*0x0000*/      u32     pic_wrcrdtarb_ph_crdt_depleted;
-       /*0x0004*/      u32     unused1;
-               } pic_wrcrdtarb_ph_crdt_depleted_vplane[17];
-/*0x0090*/     struct {
-       /*0x0000*/      u32     pic_wrcrdtarb_pd_crdt_depleted;
-       /*0x0004*/      u32     unused2;
-               } pic_wrcrdtarb_pd_crdt_depleted_vplane[17];
-/*0x0118*/     struct {
-       /*0x0000*/      u32     pic_rdcrdtarb_nph_crdt_depleted;
-       /*0x0004*/      u32     unused3;
-               } pic_rdcrdtarb_nph_crdt_depleted_vplane[17];
-/*0x01a0*/     u32     pic_ini_rd_vpin_drop;
-/*0x01a4*/     u32     pic_ini_wr_vpin_drop;
-/*0x01a8*/     u32     pic_genstats_count0;
-/*0x01ac*/     u32     pic_genstats_count1;
-/*0x01b0*/     u32     pic_genstats_count2;
-/*0x01b4*/     u32     pic_genstats_count3;
-/*0x01b8*/     u32     pic_genstats_count4;
-/*0x01bc*/     u32     unused4;
-/*0x01c0*/     u32     pic_genstats_count5;
-/*0x01c4*/     u32     unused5;
-/*0x01c8*/     u32     pci_rstdrop_cpl;
-/*0x01cc*/     u32     pci_rstdrop_msg;
-/*0x01d0*/     u32     pci_rstdrop_client1;
-/*0x01d4*/     u32     pci_rstdrop_client0;
-/*0x01d8*/     u32     pci_rstdrop_client2;
-/*0x01dc*/     u32     unused6;
-/*0x01e0*/     struct {
-       /*0x0000*/      u16     unused7;
-       /*0x0002*/      u16     pci_depl_cplh;
-       /*0x0004*/      u16     pci_depl_nph;
-       /*0x0006*/      u16     pci_depl_ph;
-               } pci_depl_h_vplane[17];
-/*0x0268*/     struct {
-       /*0x0000*/      u16     unused8;
-       /*0x0002*/      u16     pci_depl_cpld;
-       /*0x0004*/      u16     pci_depl_npd;
-       /*0x0006*/      u16     pci_depl_pd;
-               } pci_depl_d_vplane[17];
-/*0x02f0*/     struct vxge_hw_xmac_port_stats xgmac_port[3];
-/*0x0a10*/     struct vxge_hw_xmac_aggr_stats xgmac_aggr[2];
-/*0x0ae0*/     u64     xgmac_global_prog_event_gnum0;
-/*0x0ae8*/     u64     xgmac_global_prog_event_gnum1;
-/*0x0af0*/     u64     unused7;
-/*0x0af8*/     u64     unused8;
-/*0x0b00*/     u64     unused9;
-/*0x0b08*/     u64     unused10;
-/*0x0b10*/     u32     unused11;
-/*0x0b14*/     u32     xgmac_tx_permitted_frms;
-/*0x0b18*/     u32     unused12;
-/*0x0b1c*/     u8      unused13;
-/*0x0b1d*/     u8      xgmac_port2_tx_any_frms;
-/*0x0b1e*/     u8      xgmac_port1_tx_any_frms;
-/*0x0b1f*/     u8      xgmac_port0_tx_any_frms;
-/*0x0b20*/     u32     unused14;
-/*0x0b24*/     u8      unused15;
-/*0x0b25*/     u8      xgmac_port2_rx_any_frms;
-/*0x0b26*/     u8      xgmac_port1_rx_any_frms;
-/*0x0b27*/     u8      xgmac_port0_rx_any_frms;
-} __packed;
-
-/**
- * struct vxge_hw_device_stats_hw_info - Titan hardware statistics.
- * @vpath_info: VPath statistics
- * @vpath_info_sav: Vpath statistics saved
- *
- * Titan hardware statistics.
- */
-struct vxge_hw_device_stats_hw_info {
-       struct vxge_hw_vpath_stats_hw_info
-               *vpath_info[VXGE_HW_MAX_VIRTUAL_PATHS];
-       struct vxge_hw_vpath_stats_hw_info
-               vpath_info_sav[VXGE_HW_MAX_VIRTUAL_PATHS];
-};
-
-/**
- * struct vxge_hw_vpath_stats_sw_common_info - HW common
- * statistics for queues.
- * @full_cnt: Number of times the queue was full
- * @usage_cnt: usage count.
- * @usage_max: Maximum usage
- * @reserve_free_swaps_cnt: Reserve/free swap counter. Internal usage.
- * @total_compl_cnt: Total descriptor completion count.
- *
- * Hw queue counters
- * See also: struct vxge_hw_vpath_stats_sw_fifo_info{},
- * struct vxge_hw_vpath_stats_sw_ring_info{},
- */
-struct vxge_hw_vpath_stats_sw_common_info {
-       u32     full_cnt;
-       u32     usage_cnt;
-       u32     usage_max;
-       u32     reserve_free_swaps_cnt;
-       u32 total_compl_cnt;
-};
-
-/**
- * struct vxge_hw_vpath_stats_sw_fifo_info - HW fifo statistics
- * @common_stats: Common counters for all queues
- * @total_posts: Total number of postings on the queue.
- * @total_buffers: Total number of buffers posted.
- * @txd_t_code_err_cnt: Array of transmit transfer codes. The position
- * (index) in this array reflects the transfer code type, for instance
- * 0xA - "loss of link".
- * Value txd_t_code_err_cnt[i] reflects the
- * number of times the corresponding transfer code was encountered.
- *
- * HW fifo counters
- * See also: struct vxge_hw_vpath_stats_sw_common_info{},
- * struct vxge_hw_vpath_stats_sw_ring_info{},
- */
-struct vxge_hw_vpath_stats_sw_fifo_info {
-       struct vxge_hw_vpath_stats_sw_common_info common_stats;
-       u32 total_posts;
-       u32 total_buffers;
-       u32 txd_t_code_err_cnt[VXGE_HW_DTR_MAX_T_CODE];
-};
-
-/**
- * struct vxge_hw_vpath_stats_sw_ring_info - HW ring statistics
- * @common_stats: Common counters for all queues
- * @rxd_t_code_err_cnt: Array of receive transfer codes. The position
- *             (index) in this array reflects the transfer code type,
- *             for instance
- *             0x7 - for "invalid receive buffer size", or 0x8 - for ECC.
- *             Value rxd_t_code_err_cnt[i] reflects the
- *             number of times the corresponding transfer code was encountered.
- *
- * HW ring counters
- * See also: struct vxge_hw_vpath_stats_sw_common_info{},
- * struct vxge_hw_vpath_stats_sw_fifo_info{},
- */
-struct vxge_hw_vpath_stats_sw_ring_info {
-       struct vxge_hw_vpath_stats_sw_common_info common_stats;
-       u32 rxd_t_code_err_cnt[VXGE_HW_DTR_MAX_T_CODE];
-
-};
-
-/**
- * struct vxge_hw_vpath_stats_sw_err - HW vpath error statistics
- * @unknown_alarms:
- * @network_sustained_fault:
- * @network_sustained_ok:
- * @kdfcctl_fifo0_overwrite:
- * @kdfcctl_fifo0_poison:
- * @kdfcctl_fifo0_dma_error:
- * @dblgen_fifo0_overflow:
- * @statsb_pif_chain_error:
- * @statsb_drop_timeout:
- * @target_illegal_access:
- * @ini_serr_det:
- * @prc_ring_bumps:
- * @prc_rxdcm_sc_err:
- * @prc_rxdcm_sc_abort:
- * @prc_quanta_size_err:
- *
- * HW vpath error statistics
- */
-struct vxge_hw_vpath_stats_sw_err {
-       u32     unknown_alarms;
-       u32     network_sustained_fault;
-       u32     network_sustained_ok;
-       u32     kdfcctl_fifo0_overwrite;
-       u32     kdfcctl_fifo0_poison;
-       u32     kdfcctl_fifo0_dma_error;
-       u32     dblgen_fifo0_overflow;
-       u32     statsb_pif_chain_error;
-       u32     statsb_drop_timeout;
-       u32     target_illegal_access;
-       u32     ini_serr_det;
-       u32     prc_ring_bumps;
-       u32     prc_rxdcm_sc_err;
-       u32     prc_rxdcm_sc_abort;
-       u32     prc_quanta_size_err;
-};
-
-/**
- * struct vxge_hw_vpath_stats_sw_info - HW vpath sw statistics
- * @soft_reset_cnt: Number of times soft reset is done on this vpath.
- * @error_stats: error counters for the vpath
- * @ring_stats: counters for ring belonging to the vpath
- * @fifo_stats: counters for fifo belonging to the vpath
- *
- * HW vpath sw statistics
- * See also: struct vxge_hw_device_info{}.
- */
-struct vxge_hw_vpath_stats_sw_info {
-       u32    soft_reset_cnt;
-       struct vxge_hw_vpath_stats_sw_err       error_stats;
-       struct vxge_hw_vpath_stats_sw_ring_info ring_stats;
-       struct vxge_hw_vpath_stats_sw_fifo_info fifo_stats;
-};
-
-/**
- * struct vxge_hw_device_stats_sw_info - HW own per-device statistics.
- *
- * @not_traffic_intr_cnt: Number of times the host was interrupted
- *                        without new completions.
- *                        "Non-traffic interrupt counter".
- * @traffic_intr_cnt: Number of traffic interrupts for the device.
- * @total_intr_cnt: Total number of traffic interrupts for the device.
- *                  @total_intr_cnt == @traffic_intr_cnt +
- *                              @not_traffic_intr_cnt
- * @soft_reset_cnt: Number of times soft reset is done on this device.
- * @vpath_info: please see struct vxge_hw_vpath_stats_sw_info{}
- * HW per-device statistics.
- */
-struct vxge_hw_device_stats_sw_info {
-       u32     not_traffic_intr_cnt;
-       u32     traffic_intr_cnt;
-       u32     total_intr_cnt;
-       u32     soft_reset_cnt;
-       struct vxge_hw_vpath_stats_sw_info
-               vpath_info[VXGE_HW_MAX_VIRTUAL_PATHS];
-};
-
-/**
- * struct vxge_hw_device_stats_sw_err - HW device error statistics.
- * @vpath_alarms: Number of vpath alarms
- *
- * HW Device error stats
- */
-struct vxge_hw_device_stats_sw_err {
-       u32     vpath_alarms;
-};
-
-/**
- * struct vxge_hw_device_stats - Contains HW per-device statistics,
- * both hardware- and software-maintained.
- * @devh: HW device handle.
- * @dma_addr: DMA address of the %hw_info. Given to device to fill-in the stats.
- * @hw_info_dmah: DMA handle used to map hw statistics onto the device memory
- *                space.
- * @hw_info_dma_acch: One more DMA handle used subsequently to free the
- *                    DMA object. Note that this and the previous handle have
- *                    physical meaning for Solaris; on Windows and Linux the
- *                    corresponding value is simply a pointer to the PCI device.
- *
- * @hw_dev_info_stats: Titan statistics maintained by the hardware.
- * @sw_dev_info_stats: HW's "soft" device informational statistics, e.g. number
- *                     of completions per interrupt.
- * @sw_dev_err_stats: HW's "soft" device error statistics.
- *
- * Structure-container of HW per-device statistics. Note that per-channel
- * statistics are kept in separate structures under HW's fifo and ring
- * channels.
- */
-struct vxge_hw_device_stats {
-       /* handles */
-       struct __vxge_hw_device *devh;
-
-       /* HW device hardware statistics */
-       struct vxge_hw_device_stats_hw_info     hw_dev_info_stats;
-
-       /* HW device "soft" stats */
-       struct vxge_hw_device_stats_sw_err   sw_dev_err_stats;
-       struct vxge_hw_device_stats_sw_info  sw_dev_info_stats;
-
-};
-
-enum vxge_hw_status vxge_hw_device_hw_stats_enable(
-                       struct __vxge_hw_device *devh);
-
-enum vxge_hw_status vxge_hw_device_stats_get(
-                       struct __vxge_hw_device *devh,
-                       struct vxge_hw_device_stats_hw_info *hw_stats);
-
-enum vxge_hw_status vxge_hw_driver_stats_get(
-                       struct __vxge_hw_device *devh,
-                       struct vxge_hw_device_stats_sw_info *sw_stats);
-
-enum vxge_hw_status vxge_hw_mrpcim_stats_enable(struct __vxge_hw_device *devh);
-
-enum vxge_hw_status vxge_hw_mrpcim_stats_disable(struct __vxge_hw_device *devh);
-
-enum vxge_hw_status
-vxge_hw_mrpcim_stats_access(
-       struct __vxge_hw_device *devh,
-       u32 operation,
-       u32 location,
-       u32 offset,
-       u64 *stat);
-
-enum vxge_hw_status
-vxge_hw_device_xmac_stats_get(struct __vxge_hw_device *devh,
-                             struct vxge_hw_xmac_stats *xmac_stats);
-
-/**
- * enum vxge_hw_mgmt_reg_type - Register types.
- *
- * @vxge_hw_mgmt_reg_type_legacy: Legacy registers
- * @vxge_hw_mgmt_reg_type_toc: TOC Registers
- * @vxge_hw_mgmt_reg_type_common: Common Registers
- * @vxge_hw_mgmt_reg_type_mrpcim: mrpcim registers
- * @vxge_hw_mgmt_reg_type_srpcim: srpcim registers
- * @vxge_hw_mgmt_reg_type_vpmgmt: vpath management registers
- * @vxge_hw_mgmt_reg_type_vpath: vpath registers
- *
- * Register type enumeration
- */
-enum vxge_hw_mgmt_reg_type {
-       vxge_hw_mgmt_reg_type_legacy = 0,
-       vxge_hw_mgmt_reg_type_toc = 1,
-       vxge_hw_mgmt_reg_type_common = 2,
-       vxge_hw_mgmt_reg_type_mrpcim = 3,
-       vxge_hw_mgmt_reg_type_srpcim = 4,
-       vxge_hw_mgmt_reg_type_vpmgmt = 5,
-       vxge_hw_mgmt_reg_type_vpath = 6
-};
-
-enum vxge_hw_status
-vxge_hw_mgmt_reg_read(struct __vxge_hw_device *devh,
-                     enum vxge_hw_mgmt_reg_type type,
-                     u32 index,
-                     u32 offset,
-                     u64 *value);
-
-enum vxge_hw_status
-vxge_hw_mgmt_reg_write(struct __vxge_hw_device *devh,
-                     enum vxge_hw_mgmt_reg_type type,
-                     u32 index,
-                     u32 offset,
-                     u64 value);
-
-/**
- * enum vxge_hw_rxd_state - Descriptor (RXD) state.
- * @VXGE_HW_RXD_STATE_NONE: Invalid state.
- * @VXGE_HW_RXD_STATE_AVAIL: Descriptor is available for reservation.
- * @VXGE_HW_RXD_STATE_POSTED: Descriptor is posted for processing by the
- * device.
- * @VXGE_HW_RXD_STATE_FREED: Descriptor is free and can be reused for
- * filling-in and posting later.
- *
- * Titan/HW descriptor states.
- *
- */
-enum vxge_hw_rxd_state {
-       VXGE_HW_RXD_STATE_NONE          = 0,
-       VXGE_HW_RXD_STATE_AVAIL         = 1,
-       VXGE_HW_RXD_STATE_POSTED        = 2,
-       VXGE_HW_RXD_STATE_FREED         = 3
-};
-
-/**
- * struct vxge_hw_ring_rxd_info - Extended information associated with a
- * completed ring descriptor.
- * @syn_flag: SYN flag
- * @is_icmp: Is ICMP
- * @fast_path_eligible: Fast Path Eligible flag
- * @l3_cksum_valid: Set if the L3 checksum is valid.
- * @l3_cksum: Result of IP checksum check (by Titan hardware).
- *            This field containing VXGE_HW_L3_CKSUM_OK would mean that
- *            the checksum is correct, otherwise - the datagram is
- *            corrupted.
- * @l4_cksum_valid: Set if the L4 checksum is valid.
- * @l4_cksum: Result of TCP/UDP checksum check (by Titan hardware).
- *            This field containing VXGE_HW_L4_CKSUM_OK would mean that
- *            the checksum is correct. Otherwise - the packet is
- *            corrupted.
- * @frame: Zero or more of enum vxge_hw_frame_type flags.
- *             See enum vxge_hw_frame_type{}.
- * @proto: Zero or more of enum vxge_hw_frame_proto flags. Reporting bits for
- *            various higher-layer protocols, including (but not restricted to)
- *            TCP and UDP. See enum vxge_hw_frame_proto{}.
- * @is_vlan: If vlan tag is valid
- * @vlan: VLAN tag extracted from the received frame.
- * @rth_bucket: RTH bucket
- * @rth_it_hit: Set if the RTH hash value calculated by the Titan hardware
- *             has a matching entry in the Indirection table.
- * @rth_spdm_hit: Set if the RTH hash value calculated by the Titan hardware
- *             has a matching entry in the Socket Pair Direct Match table.
- * @rth_hash_type: RTH hash code of the function used to calculate the hash.
- * @rth_value: Receive Traffic Hashing(RTH) hash value. Produced by Titan
- *             hardware if RTH is enabled.
- */
-struct vxge_hw_ring_rxd_info {
-       u32     syn_flag;
-       u32     is_icmp;
-       u32     fast_path_eligible;
-       u32     l3_cksum_valid;
-       u32     l3_cksum;
-       u32     l4_cksum_valid;
-       u32     l4_cksum;
-       u32     frame;
-       u32     proto;
-       u32     is_vlan;
-       u32     vlan;
-       u32     rth_bucket;
-       u32     rth_it_hit;
-       u32     rth_spdm_hit;
-       u32     rth_hash_type;
-       u32     rth_value;
-};
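
A minimal sketch of how a receive handler could consume the checksum fields
documented above. Only the struct and the VXGE_HW_L3_CKSUM_OK /
VXGE_HW_L4_CKSUM_OK values come from this header; the helper name and its use
of the generic skb API are illustrative assumptions.

	/* Trust the hardware verdict only when both checks are flagged valid. */
	static void example_rx_csum(struct sk_buff *skb,
				    const struct vxge_hw_ring_rxd_info *info)
	{
		if (info->l3_cksum_valid && info->l3_cksum == VXGE_HW_L3_CKSUM_OK &&
		    info->l4_cksum_valid && info->l4_cksum == VXGE_HW_L4_CKSUM_OK)
			skb->ip_summed = CHECKSUM_UNNECESSARY; /* hw validated */
		else
			skb_checksum_none_assert(skb); /* let the stack verify */
	}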
-/**
- * enum vxge_hw_ring_tcode - Transfer codes returned by adapter
- * @VXGE_HW_RING_T_CODE_OK: Transfer ok.
- * @VXGE_HW_RING_T_CODE_L3_CKSUM_MISMATCH: Layer 3 checksum presentation
- *             configuration mismatch.
- * @VXGE_HW_RING_T_CODE_L4_CKSUM_MISMATCH: Layer 4 checksum presentation
- *             configuration mismatch.
- * @VXGE_HW_RING_T_CODE_L3_L4_CKSUM_MISMATCH: Layer 3 and Layer 4 checksum
- *             presentation configuration mismatch.
- * @VXGE_HW_RING_T_CODE_L3_PKT_ERR: Layer 3 error - unparseable packet,
- *             such as unknown IPv6 header.
- * @VXGE_HW_RING_T_CODE_L2_FRM_ERR: Layer 2 error - frame integrity
- *             error (such as FCS or ECC).
- * @VXGE_HW_RING_T_CODE_BUF_SIZE_ERR: Buffer size error - the RxD
- *             buffer(s) were not appropriately sized and data loss occurred.
- * @VXGE_HW_RING_T_CODE_INT_ECC_ERR: Internal ECC error - RxD corrupted.
- * @VXGE_HW_RING_T_CODE_BENIGN_OVFLOW: Benign overflow - the contents of
- *             Segment1 exceeded the capacity of Buffer1 and the remainder
- *             was placed in Buffer2. Segment2 now starts in Buffer3.
- *             No data loss or errors occurred.
- * @VXGE_HW_RING_T_CODE_ZERO_LEN_BUFF: Buffer size 0 - one of the RxD's
- *             assigned buffers has a size of 0 bytes.
- * @VXGE_HW_RING_T_CODE_FRM_DROP: Frame dropped either due to
- *             VPath Reset or because of a VPIN mismatch.
- * @VXGE_HW_RING_T_CODE_UNUSED: Unused
- * @VXGE_HW_RING_T_CODE_MULTI_ERR: Multiple errors - more than one
- *             transfer code condition occurred.
- *
- * Transfer codes returned by adapter.
- */
-enum vxge_hw_ring_tcode {
-       VXGE_HW_RING_T_CODE_OK                          = 0x0,
-       VXGE_HW_RING_T_CODE_L3_CKSUM_MISMATCH           = 0x1,
-       VXGE_HW_RING_T_CODE_L4_CKSUM_MISMATCH           = 0x2,
-       VXGE_HW_RING_T_CODE_L3_L4_CKSUM_MISMATCH        = 0x3,
-       VXGE_HW_RING_T_CODE_L3_PKT_ERR                  = 0x5,
-       VXGE_HW_RING_T_CODE_L2_FRM_ERR                  = 0x6,
-       VXGE_HW_RING_T_CODE_BUF_SIZE_ERR                = 0x7,
-       VXGE_HW_RING_T_CODE_INT_ECC_ERR                 = 0x8,
-       VXGE_HW_RING_T_CODE_BENIGN_OVFLOW               = 0x9,
-       VXGE_HW_RING_T_CODE_ZERO_LEN_BUFF               = 0xA,
-       VXGE_HW_RING_T_CODE_FRM_DROP                    = 0xC,
-       VXGE_HW_RING_T_CODE_UNUSED                      = 0xE,
-       VXGE_HW_RING_T_CODE_MULTI_ERR                   = 0xF
-};
-
-enum vxge_hw_status vxge_hw_ring_rxd_reserve(
-       struct __vxge_hw_ring *ring_handle,
-       void **rxdh);
-
-void
-vxge_hw_ring_rxd_pre_post(
-       struct __vxge_hw_ring *ring_handle,
-       void *rxdh);
-
-void
-vxge_hw_ring_rxd_post_post(
-       struct __vxge_hw_ring *ring_handle,
-       void *rxdh);
-
-void
-vxge_hw_ring_rxd_post_post_wmb(
-       struct __vxge_hw_ring *ring_handle,
-       void *rxdh);
-
-void vxge_hw_ring_rxd_post(
-       struct __vxge_hw_ring *ring_handle,
-       void *rxdh);
-
-enum vxge_hw_status vxge_hw_ring_rxd_next_completed(
-       struct __vxge_hw_ring *ring_handle,
-       void **rxdh,
-       u8 *t_code);
-
-enum vxge_hw_status vxge_hw_ring_handle_tcode(
-       struct __vxge_hw_ring *ring_handle,
-       void *rxdh,
-       u8 t_code);
-
-void vxge_hw_ring_rxd_free(
-       struct __vxge_hw_ring *ring_handle,
-       void *rxdh);
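
A sketch of the receive completion flow these entry points imply: drain
completions in order, let the error path handle bad transfer codes, then
recycle the descriptor. VXGE_HW_OK as the success status is assumed, as
elsewhere in this API; the function name is illustrative.

	static void example_poll_ring(struct __vxge_hw_ring *ringh)
	{
		void *rxdh;
		u8 t_code;

		/* Drain completed descriptors in completion order. */
		while (vxge_hw_ring_rxd_next_completed(ringh, &rxdh,
						       &t_code) == VXGE_HW_OK) {
			if (t_code != VXGE_HW_RING_T_CODE_OK)
				vxge_hw_ring_handle_tcode(ringh, rxdh, t_code);
			/* ... pass the buffer up the stack here ... */
			vxge_hw_ring_rxd_free(ringh, rxdh);
		}
	}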
-
-/**
- * enum vxge_hw_frame_proto - Higher-layer ethernet protocols.
- * @VXGE_HW_FRAME_PROTO_VLAN_TAGGED: VLAN.
- * @VXGE_HW_FRAME_PROTO_IPV4: IPv4.
- * @VXGE_HW_FRAME_PROTO_IPV6: IPv6.
- * @VXGE_HW_FRAME_PROTO_IP_FRAG: IP fragmented.
- * @VXGE_HW_FRAME_PROTO_TCP: TCP.
- * @VXGE_HW_FRAME_PROTO_UDP: UDP.
- * @VXGE_HW_FRAME_PROTO_TCP_OR_UDP: TCP or UDP.
- *
- * Higher layer ethernet protocols and options.
- */
-enum vxge_hw_frame_proto {
-       VXGE_HW_FRAME_PROTO_VLAN_TAGGED = 0x80,
-       VXGE_HW_FRAME_PROTO_IPV4                = 0x10,
-       VXGE_HW_FRAME_PROTO_IPV6                = 0x08,
-       VXGE_HW_FRAME_PROTO_IP_FRAG             = 0x04,
-       VXGE_HW_FRAME_PROTO_TCP                 = 0x02,
-       VXGE_HW_FRAME_PROTO_UDP                 = 0x01,
-       VXGE_HW_FRAME_PROTO_TCP_OR_UDP  = (VXGE_HW_FRAME_PROTO_TCP | \
-                                                  VXGE_HW_FRAME_PROTO_UDP)
-};
-
-/**
- * enum vxge_hw_fifo_gather_code - Gather codes used in fifo TxD
- * @VXGE_HW_FIFO_GATHER_CODE_FIRST: First TxDL
- * @VXGE_HW_FIFO_GATHER_CODE_MIDDLE: Middle TxDL
- * @VXGE_HW_FIFO_GATHER_CODE_LAST: Last TxDL
- * @VXGE_HW_FIFO_GATHER_CODE_FIRST_LAST: First and Last TxDL.
- *
- * These gather codes are used to indicate the position of a TxD in a TxD list.
- */
-enum vxge_hw_fifo_gather_code {
-       VXGE_HW_FIFO_GATHER_CODE_FIRST          = 0x2,
-       VXGE_HW_FIFO_GATHER_CODE_MIDDLE         = 0x0,
-       VXGE_HW_FIFO_GATHER_CODE_LAST           = 0x1,
-       VXGE_HW_FIFO_GATHER_CODE_FIRST_LAST     = 0x3
-};
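
One way to read these codes: they encode a fragment's position in a
multi-buffer TxD list. An illustrative helper (not part of this API):

	static enum vxge_hw_fifo_gather_code
	example_gather_code(u32 frag_idx, u32 nr_frags)
	{
		if (nr_frags == 1)
			return VXGE_HW_FIFO_GATHER_CODE_FIRST_LAST;
		if (frag_idx == 0)
			return VXGE_HW_FIFO_GATHER_CODE_FIRST;
		if (frag_idx == nr_frags - 1)
			return VXGE_HW_FIFO_GATHER_CODE_LAST;
		return VXGE_HW_FIFO_GATHER_CODE_MIDDLE;
	}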
-
-/**
- * enum vxge_hw_fifo_tcode - tcodes used in fifo
- * @VXGE_HW_FIFO_T_CODE_OK: Transfer OK
- * @VXGE_HW_FIFO_T_CODE_PCI_READ_CORRUPT: PCI read transaction (either TxD or
- *             frame data) returned with corrupt data.
- * @VXGE_HW_FIFO_T_CODE_PCI_READ_FAIL: PCI read transaction was returned
- *             with no data.
- * @VXGE_HW_FIFO_T_CODE_INVALID_MSS: The host attempted to send either a
- *             frame or LSO MSS that was too long (>9800B).
- * @VXGE_HW_FIFO_T_CODE_LSO_ERROR: Error detected during TCP/UDP Large Send
- *             Offload operation, due to improper header template,
- *             unsupported protocol, etc.
- * @VXGE_HW_FIFO_T_CODE_UNUSED: Unused
- * @VXGE_HW_FIFO_T_CODE_MULTI_ERROR: Set to 1 by the adapter if multiple
- *             data buffer transfer errors are encountered (see below).
- *             Otherwise it is set to 0.
- *
- * These tcodes are returned by various APIs for TxD status
- */
-enum vxge_hw_fifo_tcode {
-       VXGE_HW_FIFO_T_CODE_OK                  = 0x0,
-       VXGE_HW_FIFO_T_CODE_PCI_READ_CORRUPT    = 0x1,
-       VXGE_HW_FIFO_T_CODE_PCI_READ_FAIL       = 0x2,
-       VXGE_HW_FIFO_T_CODE_INVALID_MSS         = 0x3,
-       VXGE_HW_FIFO_T_CODE_LSO_ERROR           = 0x4,
-       VXGE_HW_FIFO_T_CODE_UNUSED              = 0x7,
-       VXGE_HW_FIFO_T_CODE_MULTI_ERROR         = 0x8
-};
-
-enum vxge_hw_status vxge_hw_fifo_txdl_reserve(
-       struct __vxge_hw_fifo *fifoh,
-       void **txdlh,
-       void **txdl_priv);
-
-void vxge_hw_fifo_txdl_buffer_set(
-                       struct __vxge_hw_fifo *fifo_handle,
-                       void *txdlh,
-                       u32 frag_idx,
-                       dma_addr_t dma_pointer,
-                       u32 size);
-
-void vxge_hw_fifo_txdl_post(
-                       struct __vxge_hw_fifo *fifo_handle,
-                       void *txdlh);
-
-u32 vxge_hw_fifo_free_txdl_count_get(
-                       struct __vxge_hw_fifo *fifo_handle);
-
-enum vxge_hw_status vxge_hw_fifo_txdl_next_completed(
-       struct __vxge_hw_fifo *fifoh,
-       void **txdlh,
-       enum vxge_hw_fifo_tcode *t_code);
-
-enum vxge_hw_status vxge_hw_fifo_handle_tcode(
-       struct __vxge_hw_fifo *fifoh,
-       void *txdlh,
-       enum vxge_hw_fifo_tcode t_code);
-
-void vxge_hw_fifo_txdl_free(
-       struct __vxge_hw_fifo *fifoh,
-       void *txdlh);
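
A matching transmit-side sketch using the fifo entry points above: reserve a
TxD list, attach one DMA-mapped buffer, and post it. VXGE_HW_OK as the success
status is assumed; the wrapper name is illustrative.

	static enum vxge_hw_status
	example_xmit_one(struct __vxge_hw_fifo *fifoh, dma_addr_t dma, u32 len)
	{
		void *txdlh, *txdl_priv;
		enum vxge_hw_status status;

		status = vxge_hw_fifo_txdl_reserve(fifoh, &txdlh, &txdl_priv);
		if (status != VXGE_HW_OK)
			return status;

		/* Single-fragment send: fragment index 0 carries the buffer. */
		vxge_hw_fifo_txdl_buffer_set(fifoh, txdlh, 0, dma, len);
		vxge_hw_fifo_txdl_post(fifoh, txdlh);
		return VXGE_HW_OK;
	}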
-
-/*
- * Device
- */
-
-#define VXGE_HW_RING_NEXT_BLOCK_POINTER_OFFSET (VXGE_HW_BLOCK_SIZE-8)
-#define VXGE_HW_RING_MEMBLOCK_IDX_OFFSET               (VXGE_HW_BLOCK_SIZE-16)
-
-/*
- * struct __vxge_hw_ring_rxd_priv - Receive descriptor HW-private data.
- * @dma_addr: DMA (mapped) address of _this_ descriptor.
- * @dma_handle: DMA handle used to map the descriptor onto the device.
- * @dma_offset: Descriptor's offset in the memory block. HW allocates
- *              descriptors in memory blocks of %VXGE_HW_BLOCK_SIZE
- *              bytes. Each memblock is contiguous DMA-able memory. Each
- *              memblock contains 1 or more 4KB RxD blocks visible to the
- *              Titan hardware.
- * @dma_object: DMA address and handle of the memory block that contains
- *              the descriptor. This member is used only in the "checked"
- *              version of the HW (to enforce certain assertions);
- *              otherwise it gets compiled out.
- * @allocated: True if the descriptor is reserved, false otherwise. Internal usage.
- *
- * Per-receive descriptor HW-private data. HW uses the space to keep DMA
- * information associated with the descriptor. Note that the driver can ask HW
- * to allocate additional per-descriptor space for its own (driver-specific)
- * purposes.
- */
-struct __vxge_hw_ring_rxd_priv {
-       dma_addr_t      dma_addr;
-       struct pci_dev *dma_handle;
-       ptrdiff_t       dma_offset;
-#ifdef VXGE_DEBUG_ASSERT
-       struct vxge_hw_mempool_dma      *dma_object;
-#endif
-};
-
-struct vxge_hw_mempool_cbs {
-       void (*item_func_alloc)(
-                       struct vxge_hw_mempool *mempoolh,
-                       u32                     memblock_index,
-                       struct vxge_hw_mempool_dma      *dma_object,
-                       u32                     index,
-                       u32                     is_last);
-};
-
-#define VXGE_HW_VIRTUAL_PATH_HANDLE(vpath)                             \
-               ((struct __vxge_hw_vpath_handle *)(vpath)->vpath_handles.next)
-
-enum vxge_hw_status
-__vxge_hw_vpath_rts_table_get(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u32                     action,
-       u32                     rts_table,
-       u32                     offset,
-       u64                     *data1,
-       u64                     *data2);
-
-enum vxge_hw_status
-__vxge_hw_vpath_rts_table_set(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u32                     action,
-       u32                     rts_table,
-       u32                     offset,
-       u64                     data1,
-       u64                     data2);
-
-enum vxge_hw_status
-__vxge_hw_vpath_enable(
-       struct __vxge_hw_device *devh,
-       u32                     vp_id);
-
-void vxge_hw_device_intr_enable(
-       struct __vxge_hw_device *devh);
-
-u32 vxge_hw_device_set_intr_type(struct __vxge_hw_device *devh, u32 intr_mode);
-
-void vxge_hw_device_intr_disable(
-       struct __vxge_hw_device *devh);
-
-void vxge_hw_device_mask_all(
-       struct __vxge_hw_device *devh);
-
-void vxge_hw_device_unmask_all(
-       struct __vxge_hw_device *devh);
-
-enum vxge_hw_status vxge_hw_device_begin_irq(
-       struct __vxge_hw_device *devh,
-       u32 skip_alarms,
-       u64 *reason);
-
-void vxge_hw_device_clear_tx_rx(
-       struct __vxge_hw_device *devh);
-
-/*
- *  Virtual Paths
- */
-
-void vxge_hw_vpath_dynamic_rti_rtimer_set(struct __vxge_hw_ring *ring);
-
-void vxge_hw_vpath_dynamic_tti_rtimer_set(struct __vxge_hw_fifo *fifo);
-
-u32 vxge_hw_vpath_id(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_vpath_mac_addr_add_mode {
-       VXGE_HW_VPATH_MAC_ADDR_ADD_DUPLICATE = 0,
-       VXGE_HW_VPATH_MAC_ADDR_DISCARD_DUPLICATE = 1,
-       VXGE_HW_VPATH_MAC_ADDR_REPLACE_DUPLICATE = 2
-};
-
-enum vxge_hw_status
-vxge_hw_vpath_mac_addr_add(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u8 *macaddr,
-       u8 *macaddr_mask,
-       enum vxge_hw_vpath_mac_addr_add_mode duplicate_mode);
-
-enum vxge_hw_status
-vxge_hw_vpath_mac_addr_get(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u8 *macaddr,
-       u8 *macaddr_mask);
-
-enum vxge_hw_status
-vxge_hw_vpath_mac_addr_get_next(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u8 *macaddr,
-       u8 *macaddr_mask);
-
-enum vxge_hw_status
-vxge_hw_vpath_mac_addr_delete(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u8 *macaddr,
-       u8 *macaddr_mask);
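
The get/get_next pair gives a cursor-style walk over a vpath's MAC table. A
hedged sketch (the helper name and logging are illustrative; VXGE_HW_OK is
assumed as the success status):

	static void example_dump_macs(struct __vxge_hw_vpath_handle *vp)
	{
		u8 mac[ETH_ALEN], mask[ETH_ALEN];

		/* First entry, then iterate until the table is exhausted. */
		if (vxge_hw_vpath_mac_addr_get(vp, mac, mask) != VXGE_HW_OK)
			return;
		do {
			pr_info("vpath %u: mac %pM\n", vxge_hw_vpath_id(vp), mac);
		} while (vxge_hw_vpath_mac_addr_get_next(vp, mac, mask) ==
			 VXGE_HW_OK);
	}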
-
-enum vxge_hw_status
-vxge_hw_vpath_vid_add(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u64                     vid);
-
-enum vxge_hw_status
-vxge_hw_vpath_vid_delete(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u64                     vid);
-
-enum vxge_hw_status
-vxge_hw_vpath_etype_add(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u64                     etype);
-
-enum vxge_hw_status
-vxge_hw_vpath_etype_get(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u64                     *etype);
-
-enum vxge_hw_status
-vxge_hw_vpath_etype_get_next(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u64                     *etype);
-
-enum vxge_hw_status
-vxge_hw_vpath_etype_delete(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u64                     etype);
-
-enum vxge_hw_status vxge_hw_vpath_promisc_enable(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_status vxge_hw_vpath_promisc_disable(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_status vxge_hw_vpath_bcast_enable(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_status vxge_hw_vpath_mcast_enable(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_status vxge_hw_vpath_mcast_disable(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_status vxge_hw_vpath_poll_rx(
-       struct __vxge_hw_ring *ringh);
-
-enum vxge_hw_status vxge_hw_vpath_poll_tx(
-       struct __vxge_hw_fifo *fifoh,
-       struct sk_buff ***skb_ptr, int nr_skb, int *more);
-
-enum vxge_hw_status vxge_hw_vpath_alarm_process(
-       struct __vxge_hw_vpath_handle *vpath_handle,
-       u32 skip_alarms);
-
-void
-vxge_hw_vpath_msix_set(struct __vxge_hw_vpath_handle *vpath_handle,
-                      int *tim_msix_id, int alarm_msix_id);
-
-void
-vxge_hw_vpath_msix_mask(struct __vxge_hw_vpath_handle *vpath_handle,
-                       int msix_id);
-
-void vxge_hw_vpath_msix_clear(struct __vxge_hw_vpath_handle *vp, int msix_id);
-
-void vxge_hw_device_flush_io(struct __vxge_hw_device *devh);
-
-void
-vxge_hw_vpath_msix_unmask(struct __vxge_hw_vpath_handle *vpath_handle,
-                         int msix_id);
-
-enum vxge_hw_status vxge_hw_vpath_intr_enable(
-                               struct __vxge_hw_vpath_handle *vpath_handle);
-
-enum vxge_hw_status vxge_hw_vpath_intr_disable(
-                               struct __vxge_hw_vpath_handle *vpath_handle);
-
-void vxge_hw_vpath_inta_mask_tx_rx(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-void vxge_hw_vpath_inta_unmask_tx_rx(
-       struct __vxge_hw_vpath_handle *vpath_handle);
-
-void
-vxge_hw_channel_msix_mask(struct __vxge_hw_channel *channelh, int msix_id);
-
-void
-vxge_hw_channel_msix_unmask(struct __vxge_hw_channel *channelh, int msix_id);
-
-void
-vxge_hw_channel_msix_clear(struct __vxge_hw_channel *channelh, int msix_id);
-
-void
-vxge_hw_channel_dtr_try_complete(struct __vxge_hw_channel *channel,
-                                void **dtrh);
-
-void
-vxge_hw_channel_dtr_complete(struct __vxge_hw_channel *channel);
-
-void
-vxge_hw_channel_dtr_free(struct __vxge_hw_channel *channel, void *dtrh);
-
-int
-vxge_hw_channel_dtr_count(struct __vxge_hw_channel *channel);
-
-void vxge_hw_vpath_tti_ci_set(struct __vxge_hw_fifo *fifo);
-
-void vxge_hw_vpath_dynamic_rti_ci_set(struct __vxge_hw_ring *ring);
-
-#endif
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-version.h b/drivers/net/ethernet/neterion/vxge/vxge-version.h
deleted file mode 100644 (file)
index b9efa28..0000000
+++ /dev/null
@@ -1,49 +0,0 @@
-/******************************************************************************
- * This software may be used and distributed according to the terms of
- * the GNU General Public License (GPL), incorporated herein by reference.
- * Drivers based on or derived from this code fall under the GPL and must
- * retain the authorship, copyright and license notice.  This file is not
- * a complete program and may only be used when the entire operating
- * system is licensed under the GPL.
- * See the file COPYING in this distribution for more information.
- *
- * vxge-version.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
- *                 Virtualized Server Adapter.
- * Copyright(c) 2002-2010 Exar Corp.
- ******************************************************************************/
-#ifndef VXGE_VERSION_H
-#define VXGE_VERSION_H
-
-#define VXGE_VERSION_MAJOR     "2"
-#define VXGE_VERSION_MINOR     "5"
-#define VXGE_VERSION_FIX       "3"
-#define VXGE_VERSION_BUILD     "22640"
-#define VXGE_VERSION_FOR       "k"
-
-#define VXGE_FW_VER(maj, min, bld) (((maj) << 16) + ((min) << 8) + (bld))
-
-#define VXGE_DEAD_FW_VER_MAJOR 1
-#define VXGE_DEAD_FW_VER_MINOR 4
-#define VXGE_DEAD_FW_VER_BUILD 4
-
-#define VXGE_FW_DEAD_VER VXGE_FW_VER(VXGE_DEAD_FW_VER_MAJOR, \
-                                    VXGE_DEAD_FW_VER_MINOR, \
-                                    VXGE_DEAD_FW_VER_BUILD)
-
-#define VXGE_EPROM_FW_VER_MAJOR        1
-#define VXGE_EPROM_FW_VER_MINOR        6
-#define VXGE_EPROM_FW_VER_BUILD        1
-
-#define VXGE_EPROM_FW_VER VXGE_FW_VER(VXGE_EPROM_FW_VER_MAJOR, \
-                                     VXGE_EPROM_FW_VER_MINOR, \
-                                     VXGE_EPROM_FW_VER_BUILD)
-
-#define VXGE_CERT_FW_VER_MAJOR 1
-#define VXGE_CERT_FW_VER_MINOR 8
-#define VXGE_CERT_FW_VER_BUILD 1
-
-#define VXGE_CERT_FW_VER VXGE_FW_VER(VXGE_CERT_FW_VER_MAJOR, \
-                                    VXGE_CERT_FW_VER_MINOR, \
-                                    VXGE_CERT_FW_VER_BUILD)
-
-#endif
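
The VXGE_FW_VER packing keeps major/minor/build in separate bytes, so plain
integer comparison orders versions correctly (as long as minor and build stay
below 256). A standalone illustration of the arithmetic:

	#include <stdio.h>

	#define VXGE_FW_VER(maj, min, bld) (((maj) << 16) + ((min) << 8) + (bld))

	int main(void)
	{
		unsigned int dead = VXGE_FW_VER(1, 4, 4); /* 0x010404 */
		unsigned int cert = VXGE_FW_VER(1, 8, 1); /* 0x010801 */

		/* A 1.4.4 image compares below the certified 1.8.1 image. */
		printf("needs upgrade: %s\n", dead < cert ? "yes" : "no");
		return 0;
	}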
index b456e81..7453cc5 100644 (file)
@@ -149,7 +149,7 @@ nfp_fl_pre_lag(struct nfp_app *app, const struct flow_action_entry *act,
        }
 
        /* Pre_lag action must be first on action list.
-        * If other actions already exist they need pushed forward.
+        * If other actions already exist they need to be pushed forward.
         */
        if (act_len)
                memmove(nfp_flow->action_data + act_size,
index 7c31a46..b3b2a23 100644 (file)
@@ -182,7 +182,7 @@ static int nfp_ct_merge_check(struct nfp_fl_ct_flow_entry *entry1,
        u8 ip_proto = 0;
        /* Temporary buffer for mangling keys, 64 is enough to cover max
         * struct size of key in various fields that may be mangled.
-        * Supported fileds to mangle:
+        * Supported fields to mangle:
         * mac_src/mac_dst(struct flow_match_eth_addrs, 12B)
         * nw_tos/nw_ttl(struct flow_match_ip, 2B)
         * nw_src/nw_dst(struct flow_match_ipv4/6_addrs, 32B)
@@ -194,7 +194,7 @@ static int nfp_ct_merge_check(struct nfp_fl_ct_flow_entry *entry1,
            entry1->netdev != entry2->netdev)
                return -EINVAL;
 
-       /* check the overlapped fields one by one, the unmasked part
+       /* Check the overlapped fields one by one, the unmasked part
         * should not conflict with each other.
         */
        if (ovlp_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL)) {
@@ -563,7 +563,7 @@ static int nfp_fl_merge_actions_offload(struct flow_rule **rules,
                if (flow_rule_match_key(rules[j], FLOW_DISSECTOR_KEY_BASIC)) {
                        struct flow_match_basic match;
 
-                       /* ip_proto is the only field that needed in later compile_action,
+                       /* ip_proto is the only field that is needed in later compile_action,
                         * needed to set the correct checksum flags. It doesn't really matter
                         * which input rule's ip_proto field we take as the earlier merge checks
                         * would have made sure that they don't conflict. We do not know which
@@ -1013,7 +1013,7 @@ static int nfp_ct_do_nft_merge(struct nfp_fl_ct_zone_entry *zt,
        nft_m_entry->tc_m_parent = tc_m_entry;
        nft_m_entry->nft_parent = nft_entry;
        nft_m_entry->tc_flower_cookie = 0;
-       /* Copy the netdev from one the pre_ct entry. When the tc_m_entry was created
+       /* Copy the netdev from the pre_ct entry. When the tc_m_entry was created
         * it only combined them if the netdevs were the same, so can use any of them.
         */
        nft_m_entry->netdev = pre_ct_entry->netdev;
@@ -1143,7 +1143,7 @@ nfp_fl_ct_zone_entry *get_nfp_zone_entry(struct nfp_flower_priv *priv,
        zt->priv = priv;
        zt->nft = NULL;
 
-       /* init the various hash tables and lists*/
+       /* init the various hash tables and lists */
        INIT_LIST_HEAD(&zt->pre_ct_list);
        INIT_LIST_HEAD(&zt->post_ct_list);
        INIT_LIST_HEAD(&zt->nft_flows_list);
@@ -1346,7 +1346,7 @@ static void nfp_free_nft_merge_children(void *entry, bool is_nft_flow)
         */
 
        if (is_nft_flow) {
-               /* Need to iterate through list of nft_flow entries*/
+               /* Need to iterate through list of nft_flow entries */
                struct nfp_fl_ct_flow_entry *ct_entry = entry;
 
                list_for_each_entry_safe(m_entry, tmp, &ct_entry->children,
@@ -1354,7 +1354,7 @@ static void nfp_free_nft_merge_children(void *entry, bool is_nft_flow)
                        cleanup_nft_merge_entry(m_entry);
                }
        } else {
-               /* Need to iterate through list of tc_merged_flow entries*/
+               /* Need to iterate through list of tc_merged_flow entries */
                struct nfp_fl_ct_tc_merge *ct_entry = entry;
 
                list_for_each_entry_safe(m_entry, tmp, &ct_entry->children,
index ede90e0..e92860e 100644 (file)
@@ -234,7 +234,7 @@ nfp_fl_lag_config_group(struct nfp_fl_lag *lag, struct nfp_fl_lag_group *group,
        }
 
        /* To signal the end of a batch, both the switch and last flags are set
-        * and the the reserved SYNC group ID is used.
+        * and the reserved SYNC group ID is used.
         */
        if (*batch == NFP_FL_LAG_BATCH_FINISHED) {
                flags |= NFP_FL_LAG_SWITCH | NFP_FL_LAG_LAST;
@@ -576,7 +576,7 @@ nfp_fl_lag_changeupper_event(struct nfp_fl_lag *lag,
        group->dirty = true;
        group->slave_cnt = slave_count;
 
-       /* Group may have been on queue for removal but is now offloable. */
+       /* Group may have been on queue for removal but is now offloadable. */
        group->to_remove = false;
        mutex_unlock(&lag->lock);
 
index 74e1b27..0f06ef6 100644 (file)
@@ -339,7 +339,7 @@ int nfp_compile_flow_metadata(struct nfp_app *app, u32 cookie,
                goto err_free_ctx_entry;
        }
 
-       /* Do net allocate a mask-id for pre_tun_rules. These flows are used to
+       /* Do not allocate a mask-id for pre_tun_rules. These flows are used to
         * configure the pre_tun table and are never actually send to the
         * firmware as an add-flow message. This causes the mask-id allocation
         * on the firmware to get out of sync if allocated here.
index 9d65459..83c9715 100644 (file)
@@ -359,7 +359,7 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
                        flow_rule_match_enc_opts(rule, &enc_op);
 
                if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
-                       /* check if GRE, which has no enc_ports */
+                       /* Check if GRE, which has no enc_ports */
                        if (!netif_is_gretap(netdev) && !netif_is_ip6gretap(netdev)) {
                                NL_SET_ERR_MSG_MOD(extack, "unsupported offload: an exact match on L4 destination port is required for non-GRE tunnels");
                                return -EOPNOTSUPP;
@@ -1016,7 +1016,7 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
            nfp_flower_is_merge_flow(sub_flow2))
                return -EINVAL;
 
-       /* check if the two flows are already merged */
+       /* Check if the two flows are already merged */
        parent_ctx = (u64)(be32_to_cpu(sub_flow1->meta.host_ctx_id)) << 32;
        parent_ctx |= (u64)(be32_to_cpu(sub_flow2->meta.host_ctx_id));
        if (rhashtable_lookup_fast(&priv->merge_table,
index 3206ba8..4e5df9f 100644 (file)
@@ -534,7 +534,7 @@ int nfp_flower_setup_qos_offload(struct nfp_app *app, struct net_device *netdev,
        }
 }
 
-/* offload tc action, currently only for tc police */
+/* Offload tc action, currently only for tc police */
 
 static const struct rhashtable_params stats_meter_table_params = {
        .key_offset     = offsetof(struct nfp_meter_entry, meter_id),
@@ -690,7 +690,7 @@ nfp_act_install_actions(struct nfp_app *app, struct flow_offload_action *fl_act,
        pps_support = !!(fl_priv->flower_ext_feats & NFP_FL_FEATS_QOS_PPS);
 
        for (i = 0 ; i < action_num; i++) {
-               /*set qos associate data for this interface */
+               /* Set qos associate data for this interface */
                action = paction + i;
                if (action->id != FLOW_ACTION_POLICE) {
                        NL_SET_ERR_MSG_MOD(extack,
@@ -736,7 +736,7 @@ nfp_act_remove_actions(struct nfp_app *app, struct flow_offload_action *fl_act,
        u32 meter_id;
        bool pps;
 
-       /*delete qos associate data for this interface */
+       /* Delete qos associate data for this interface */
        if (fl_act->id != FLOW_ACTION_POLICE) {
                NL_SET_ERR_MSG_MOD(extack,
                                   "unsupported offload: qos rate limit offload requires police action");
index 6bf3ec4..0af5541 100644 (file)
@@ -1064,7 +1064,7 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
                return 0;
 
        entry->ref_count--;
-       /* If del is part of a mod then mac_list is still in use elsewheree. */
+       /* If del is part of a mod then mac_list is still in use elsewhere. */
        if (nfp_netdev_is_nfp_repr(netdev) && !mod) {
                repr = netdev_priv(netdev);
                repr_priv = repr->app_priv;
index f9410d5..448c1c1 100644 (file)
@@ -3,6 +3,7 @@
 
 #include <linux/bpf_trace.h>
 #include <linux/netdevice.h>
+#include <linux/bitfield.h>
 
 #include "../nfp_app.h"
 #include "../nfp_net.h"
@@ -81,12 +82,11 @@ nfp_nfd3_tx_tso(struct nfp_net_r_vector *r_vec, struct nfp_nfd3_tx_buf *txbuf,
        if (!skb->encapsulation) {
                l3_offset = skb_network_offset(skb);
                l4_offset = skb_transport_offset(skb);
-               hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdrlen = skb_tcp_all_headers(skb);
        } else {
                l3_offset = skb_inner_network_offset(skb);
                l4_offset = skb_inner_transport_offset(skb);
-               hdrlen = skb_inner_transport_header(skb) - skb->data +
-                       inner_tcp_hdrlen(skb);
+               hdrlen = skb_inner_tcp_all_headers(skb);
        }
 
        txbuf->pkt_cnt = skb_shinfo(skb)->gso_segs;
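
For reference, the new helpers fold the open-coded header math into one call;
as introduced in linux/tcp.h alongside this series, they are equivalent to
what the deleted lines computed:

	/* hdrlen for non-encapsulated TSO */
	hdrlen = skb_tcp_all_headers(skb);
	/*     == skb_transport_offset(skb) + tcp_hdrlen(skb) */

	/* hdrlen for encapsulated TSO */
	hdrlen = skb_inner_tcp_all_headers(skb);
	/*     == skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb) */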
@@ -167,30 +167,35 @@ nfp_nfd3_tx_csum(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
        u64_stats_update_end(&r_vec->tx_sync);
 }
 
-static int nfp_nfd3_prep_tx_meta(struct sk_buff *skb, u64 tls_handle)
+static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb, u64 tls_handle)
 {
        struct metadata_dst *md_dst = skb_metadata_dst(skb);
        unsigned char *data;
+       bool vlan_insert;
        u32 meta_id = 0;
        int md_bytes;
 
-       if (likely(!md_dst && !tls_handle))
-               return 0;
-       if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) {
-               if (!tls_handle)
-                       return 0;
-               md_dst = NULL;
+       if (unlikely(md_dst || tls_handle)) {
+               if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX))
+                       md_dst = NULL;
        }
 
-       md_bytes = 4 + !!md_dst * 4 + !!tls_handle * 8;
+       vlan_insert = skb_vlan_tag_present(skb) && (dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2);
+
+       if (!(md_dst || tls_handle || vlan_insert))
+               return 0;
+
+       md_bytes = sizeof(meta_id) +
+                  !!md_dst * NFP_NET_META_PORTID_SIZE +
+                  !!tls_handle * NFP_NET_META_CONN_HANDLE_SIZE +
+                  vlan_insert * NFP_NET_META_VLAN_SIZE;
 
        if (unlikely(skb_cow_head(skb, md_bytes)))
                return -ENOMEM;
 
-       meta_id = 0;
        data = skb_push(skb, md_bytes) + md_bytes;
        if (md_dst) {
-               data -= 4;
+               data -= NFP_NET_META_PORTID_SIZE;
                put_unaligned_be32(md_dst->u.port_info.port_id, data);
                meta_id = NFP_NET_META_PORTID;
        }
@@ -198,13 +203,23 @@ static int nfp_nfd3_prep_tx_meta(struct sk_buff *skb, u64 tls_handle)
                /* conn handle is opaque, we just use u64 to be able to quickly
                 * compare it to zero
                 */
-               data -= 8;
+               data -= NFP_NET_META_CONN_HANDLE_SIZE;
                memcpy(data, &tls_handle, sizeof(tls_handle));
                meta_id <<= NFP_NET_META_FIELD_SIZE;
                meta_id |= NFP_NET_META_CONN_HANDLE;
        }
+       if (vlan_insert) {
+               data -= NFP_NET_META_VLAN_SIZE;
+               /* skb->vlan_proto is already __be16, so it can be
+                * copied into the metadata without put_unaligned_be16
+                */
+               memcpy(data, &skb->vlan_proto, sizeof(skb->vlan_proto));
+               put_unaligned_be16(skb_vlan_tag_get(skb), data + sizeof(skb->vlan_proto));
+               meta_id <<= NFP_NET_META_FIELD_SIZE;
+               meta_id |= NFP_NET_META_VLAN;
+       }
 
-       data -= 4;
+       data -= sizeof(meta_id);
        put_unaligned_be32(meta_id, data);
 
        return md_bytes;
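
The metadata prepend built above is a reversed TLV: each field's type nibble is
shifted into meta_id as its payload is pushed in front of the packet (assuming
NFP_NET_META_FIELD_SIZE is four bits, as the shifts suggest), so the low
nibble always describes the word nearest the packet data. A hedged sketch of
the size math only (field widths inferred from the constants' names; the
helper is illustrative):

	/* 4-byte meta_id word + 4B port id + 8B TLS handle + 4B VLAN */
	static unsigned int example_meta_bytes(bool port, bool tls, bool vlan)
	{
		return 4 + port * 4 + tls * 8 + vlan * 4;
	}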
@@ -258,7 +273,7 @@ netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev)
                return NETDEV_TX_OK;
        }
 
-       md_bytes = nfp_nfd3_prep_tx_meta(skb, tls_handle);
+       md_bytes = nfp_nfd3_prep_tx_meta(dp, skb, tls_handle);
        if (unlikely(md_bytes < 0))
                goto err_flush;
 
@@ -704,7 +719,7 @@ bool
 nfp_nfd3_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta,
                    void *data, void *pkt, unsigned int pkt_len, int meta_len)
 {
-       u32 meta_info;
+       u32 meta_info, vlan_info;
 
        meta_info = get_unaligned_be32(data);
        data += 4;
@@ -722,6 +737,17 @@ nfp_nfd3_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta,
                        meta->mark = get_unaligned_be32(data);
                        data += 4;
                        break;
+               case NFP_NET_META_VLAN:
+                       vlan_info = get_unaligned_be32(data);
+                       if (FIELD_GET(NFP_NET_META_VLAN_STRIP, vlan_info)) {
+                               meta->vlan.stripped = true;
+                               meta->vlan.tpid = FIELD_GET(NFP_NET_META_VLAN_TPID_MASK,
+                                                           vlan_info);
+                               meta->vlan.tci = FIELD_GET(NFP_NET_META_VLAN_TCI_MASK,
+                                                          vlan_info);
+                       }
+                       data += 4;
+                       break;
                case NFP_NET_META_PORTID:
                        meta->portid = get_unaligned_be32(data);
                        data += 4;
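
Decoding the VLAN metadata word follows the usual FIELD_GET pattern. A sketch
with assumed bit positions (the real masks live in the driver headers; only
the strip-flag/TPID/TCI split is taken from the hunk above):

	#define EX_META_VLAN_STRIP      BIT(31)         /* assumption */
	#define EX_META_VLAN_TPID_MASK  GENMASK(19, 16) /* assumption */
	#define EX_META_VLAN_TCI_MASK   GENMASK(15, 0)  /* assumption */

	u32 vlan_info = get_unaligned_be32(data);

	if (FIELD_GET(EX_META_VLAN_STRIP, vlan_info)) {
		u8 tpid = FIELD_GET(EX_META_VLAN_TPID_MASK, vlan_info);
		u16 tci = FIELD_GET(EX_META_VLAN_TCI_MASK, vlan_info);
		/* tpid distinguishes NFP_NET_VLAN_CTAG from NFP_NET_VLAN_STAG */
	}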
@@ -1050,9 +1076,11 @@ static int nfp_nfd3_rx(struct nfp_net_rx_ring *rx_ring, int budget)
                }
 #endif
 
-               if (rxd->rxd.flags & PCIE_DESC_RX_VLAN)
-                       __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
-                                              le16_to_cpu(rxd->rxd.vlan));
+               if (unlikely(!nfp_net_vlan_strip(skb, rxd, &meta))) {
+                       nfp_nfd3_rx_drop(dp, r_vec, rx_ring, NULL, skb);
+                       continue;
+               }
+
                if (meta_len_xdp)
                        skb_metadata_set(skb, meta_len_xdp);
 
index f31eabd..a03190c 100644 (file)
@@ -247,10 +247,13 @@ nfp_nfd3_print_tx_descs(struct seq_file *file,
         NFP_NET_CFG_CTRL_L2BC | NFP_NET_CFG_CTRL_L2MC |                \
         NFP_NET_CFG_CTRL_RXCSUM | NFP_NET_CFG_CTRL_TXCSUM |            \
         NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_TXVLAN |            \
+        NFP_NET_CFG_CTRL_RXVLAN_V2 | NFP_NET_CFG_CTRL_RXQINQ |         \
+        NFP_NET_CFG_CTRL_TXVLAN_V2 |                                   \
         NFP_NET_CFG_CTRL_GATHER | NFP_NET_CFG_CTRL_LSO |               \
         NFP_NET_CFG_CTRL_CTAG_FILTER | NFP_NET_CFG_CTRL_CMSG_DATA |    \
         NFP_NET_CFG_CTRL_RINGCFG | NFP_NET_CFG_CTRL_RSS |              \
         NFP_NET_CFG_CTRL_IRQMOD | NFP_NET_CFG_CTRL_TXRWB |             \
+        NFP_NET_CFG_CTRL_VEPA |                                        \
         NFP_NET_CFG_CTRL_VXLAN | NFP_NET_CFG_CTRL_NVGRE |              \
         NFP_NET_CFG_CTRL_BPF | NFP_NET_CFG_CTRL_LSO2 |                 \
         NFP_NET_CFG_CTRL_RSS2 | NFP_NET_CFG_CTRL_CSUM_COMPLETE |       \
index 454fea4..65e2431 100644 (file)
@@ -94,9 +94,12 @@ static void nfp_nfd3_xsk_rx_skb(struct nfp_net_rx_ring *rx_ring,
 
        nfp_nfd3_rx_csum(dp, r_vec, rxd, meta, skb);
 
-       if (rxd->rxd.flags & PCIE_DESC_RX_VLAN)
-               __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
-                                      le16_to_cpu(rxd->rxd.vlan));
+       if (unlikely(!nfp_net_vlan_strip(skb, rxd, meta))) {
+               dev_kfree_skb_any(skb);
+               nfp_net_xsk_rx_drop(r_vec, xrxbuf);
+               return;
+       }
+
        if (meta_xdp)
                skb_metadata_set(skb,
                                 xrxbuf->xdp->data - xrxbuf->xdp->data_meta);
index 300637e..0b4f550 100644 (file)
@@ -46,28 +46,16 @@ nfp_nfdk_tx_tso(struct nfp_net_r_vector *r_vec, struct nfp_nfdk_tx_buf *txbuf,
        if (!skb->encapsulation) {
                l3_offset = skb_network_offset(skb);
                l4_offset = skb_transport_offset(skb);
-               hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdrlen = skb_tcp_all_headers(skb);
        } else {
                l3_offset = skb_inner_network_offset(skb);
                l4_offset = skb_inner_transport_offset(skb);
-               hdrlen = skb_inner_transport_header(skb) - skb->data +
-                       inner_tcp_hdrlen(skb);
+               hdrlen = skb_inner_tcp_all_headers(skb);
        }
 
        segs = skb_shinfo(skb)->gso_segs;
        mss = skb_shinfo(skb)->gso_size & NFDK_DESC_TX_MSS_MASK;
 
-       /* Note: TSO of the packet with metadata prepended to skb is not
-        * supported yet, in which case l3/l4_offset and lso_hdrlen need
-        * be correctly handled here.
-        * Concern:
-        * The driver doesn't have md_bytes easily available at this point.
-        * The PCI.IN PD ME won't have md_bytes bytes to add to lso_hdrlen,
-        * so it needs the full length there.  The app MEs might prefer
-        * l3_offset and l4_offset relative to the start of packet data,
-        * but could probably cope with it being relative to the CTM buf
-        * data offset.
-        */
        txd.l3_offset = l3_offset;
        txd.l4_offset = l4_offset;
        txd.lso_meta_res = 0;
@@ -191,12 +179,6 @@ static int nfp_nfdk_prep_port_id(struct sk_buff *skb)
        if (unlikely(md_dst->type != METADATA_HW_PORT_MUX))
                return 0;
 
-       /* Note: Unsupported case when TSO a skb with metedata prepended.
-        * See the comments in `nfp_nfdk_tx_tso` for details.
-        */
-       if (unlikely(md_dst && skb_is_gso(skb)))
-               return -EOPNOTSUPP;
-
        if (unlikely(skb_cow_head(skb, sizeof(md_dst->u.port_info.port_id))))
                return -ENOMEM;
 
@@ -717,7 +699,7 @@ static bool
 nfp_nfdk_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta,
                    void *data, void *pkt, unsigned int pkt_len, int meta_len)
 {
-       u32 meta_info;
+       u32 meta_info, vlan_info;
 
        meta_info = get_unaligned_be32(data);
        data += 4;
@@ -735,6 +717,17 @@ nfp_nfdk_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta,
                        meta->mark = get_unaligned_be32(data);
                        data += 4;
                        break;
+               case NFP_NET_META_VLAN:
+                       vlan_info = get_unaligned_be32(data);
+                       if (FIELD_GET(NFP_NET_META_VLAN_STRIP, vlan_info)) {
+                               meta->vlan.stripped = true;
+                               meta->vlan.tpid = FIELD_GET(NFP_NET_META_VLAN_TPID_MASK,
+                                                           vlan_info);
+                               meta->vlan.tci = FIELD_GET(NFP_NET_META_VLAN_TCI_MASK,
+                                                          vlan_info);
+                       }
+                       data += 4;
+                       break;
                case NFP_NET_META_PORTID:
                        meta->portid = get_unaligned_be32(data);
                        data += 4;
@@ -1170,9 +1163,11 @@ static int nfp_nfdk_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 
                nfp_nfdk_rx_csum(dp, r_vec, rxd, &meta, skb);
 
-               if (rxd->rxd.flags & PCIE_DESC_RX_VLAN)
-                       __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
-                                              le16_to_cpu(rxd->rxd.vlan));
+               if (unlikely(!nfp_net_vlan_strip(skb, rxd, &meta))) {
+                       nfp_nfdk_rx_drop(dp, r_vec, rx_ring, NULL, skb);
+                       continue;
+               }
+
                if (meta_len_xdp)
                        skb_metadata_set(skb, meta_len_xdp);
 
index f4d94ae..6cd895d 100644 (file)
@@ -168,10 +168,11 @@ nfp_nfdk_print_tx_descs(struct seq_file *file,
         NFP_NET_CFG_CTRL_L2BC | NFP_NET_CFG_CTRL_L2MC |                \
         NFP_NET_CFG_CTRL_RXCSUM | NFP_NET_CFG_CTRL_TXCSUM |            \
         NFP_NET_CFG_CTRL_RXVLAN |                                      \
+        NFP_NET_CFG_CTRL_RXVLAN_V2 | NFP_NET_CFG_CTRL_RXQINQ |         \
         NFP_NET_CFG_CTRL_GATHER | NFP_NET_CFG_CTRL_LSO |               \
         NFP_NET_CFG_CTRL_CTAG_FILTER | NFP_NET_CFG_CTRL_CMSG_DATA |    \
         NFP_NET_CFG_CTRL_RINGCFG | NFP_NET_CFG_CTRL_IRQMOD |           \
-        NFP_NET_CFG_CTRL_TXRWB |                                       \
+        NFP_NET_CFG_CTRL_TXRWB | NFP_NET_CFG_CTRL_VEPA |               \
         NFP_NET_CFG_CTRL_VXLAN | NFP_NET_CFG_CTRL_NVGRE |              \
         NFP_NET_CFG_CTRL_BPF | NFP_NET_CFG_CTRL_LSO2 |                 \
         NFP_NET_CFG_CTRL_RSS2 | NFP_NET_CFG_CTRL_CSUM_COMPLETE |       \
index 4f88d17..36b1730 100644 (file)
@@ -410,7 +410,9 @@ nfp_net_fw_find(struct pci_dev *pdev, struct nfp_pf *pf)
                return NULL;
        }
 
-       fw_model = nfp_hwinfo_lookup(pf->hwinfo, "assembly.partno");
+       fw_model = nfp_hwinfo_lookup(pf->hwinfo, "nffw.partno");
+       if (!fw_model)
+               fw_model = nfp_hwinfo_lookup(pf->hwinfo, "assembly.partno");
        if (!fw_model) {
                dev_err(&pdev->dev, "Error: can't read part number\n");
                return NULL;
index b07cea8..a101ff3 100644
@@ -248,6 +248,8 @@ struct nfp_net_rx_desc {
 };
 
 #define NFP_NET_META_FIELD_MASK GENMASK(NFP_NET_META_FIELD_SIZE - 1, 0)
+#define NFP_NET_VLAN_CTAG      0
+#define NFP_NET_VLAN_STAG      1
 
 struct nfp_meta_parsed {
        u8 hash_type;
@@ -256,6 +258,11 @@ struct nfp_meta_parsed {
        u32 mark;
        u32 portid;
        __wsum csum;
+       struct {
+               bool stripped;
+               u8 tpid;
+               u16 tci;
+       } vlan;
 };
 
 struct nfp_net_rx_hash {
index 57f284e..c5c3a4a 100644
@@ -31,6 +31,7 @@
 #include <linux/ethtool.h>
 #include <linux/log2.h>
 #include <linux/if_vlan.h>
+#include <linux/if_bridge.h>
 #include <linux/random.h>
 #include <linux/vmalloc.h>
 #include <linux/ktime.h>
@@ -597,7 +598,7 @@ nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
        if (!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk))
                return skb;
 
-       datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+       datalen = skb->len - skb_tcp_all_headers(skb);
        seq = ntohl(tcp_hdr(skb)->seq);
        ntls = tls_driver_ctx(skb->sk, TLS_OFFLOAD_CTX_DIR_TX);
        resync_pending = tls_offload_tx_resync_pending(skb->sk);
@@ -665,7 +666,7 @@ void nfp_net_tls_tx_undo(struct sk_buff *skb, u64 tls_handle)
        if (WARN_ON_ONCE(!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk)))
                return;
 
-       datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+       datalen = skb->len - skb_tcp_all_headers(skb);
        seq = ntohl(tcp_hdr(skb)->seq);
 
        ntls = tls_driver_ctx(skb->sk, TLS_OFFLOAD_CTX_DIR_TX);
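
Several drivers in this pull switch from open-coded header arithmetic to skb_tcp_all_headers()/skb_inner_tcp_all_headers(). Judging purely from the call sites converted here, the helpers are thin wrappers along these lines (a sketch, not the verbatim kernel definitions):

static inline int skb_tcp_all_headers(const struct sk_buff *skb)
{
	/* Everything up to and including the TCP header */
	return skb_transport_offset(skb) + tcp_hdrlen(skb);
}

static inline int skb_inner_tcp_all_headers(const struct sk_buff *skb)
{
	/* Same, but for the inner headers of an encapsulated packet */
	return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
}
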
@@ -1694,16 +1695,18 @@ static int nfp_net_set_features(struct net_device *netdev,
 
        if (changed & NETIF_F_HW_VLAN_CTAG_RX) {
                if (features & NETIF_F_HW_VLAN_CTAG_RX)
-                       new_ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
+                       new_ctrl |= nn->cap & NFP_NET_CFG_CTRL_RXVLAN_V2 ?:
+                                   NFP_NET_CFG_CTRL_RXVLAN;
                else
-                       new_ctrl &= ~NFP_NET_CFG_CTRL_RXVLAN;
+                       new_ctrl &= ~NFP_NET_CFG_CTRL_RXVLAN_ANY;
        }
 
        if (changed & NETIF_F_HW_VLAN_CTAG_TX) {
                if (features & NETIF_F_HW_VLAN_CTAG_TX)
-                       new_ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
+                       new_ctrl |= nn->cap & NFP_NET_CFG_CTRL_TXVLAN_V2 ?:
+                                   NFP_NET_CFG_CTRL_TXVLAN;
                else
-                       new_ctrl &= ~NFP_NET_CFG_CTRL_TXVLAN;
+                       new_ctrl &= ~NFP_NET_CFG_CTRL_TXVLAN_ANY;
        }
 
        if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) {
@@ -1713,6 +1716,13 @@ static int nfp_net_set_features(struct net_device *netdev,
                        new_ctrl &= ~NFP_NET_CFG_CTRL_CTAG_FILTER;
        }
 
+       if (changed & NETIF_F_HW_VLAN_STAG_RX) {
+               if (features & NETIF_F_HW_VLAN_STAG_RX)
+                       new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
+               else
+                       new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
+       }
+
        if (changed & NETIF_F_SG) {
                if (features & NETIF_F_SG)
                        new_ctrl |= NFP_NET_CFG_CTRL_GATHER;
@@ -1741,6 +1751,27 @@ static int nfp_net_set_features(struct net_device *netdev,
        return 0;
 }
 
+static netdev_features_t
+nfp_net_fix_features(struct net_device *netdev,
+                    netdev_features_t features)
+{
+       if ((features & NETIF_F_HW_VLAN_CTAG_RX) &&
+           (features & NETIF_F_HW_VLAN_STAG_RX)) {
+               if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
+                       features &= ~NETIF_F_HW_VLAN_CTAG_RX;
+                       netdev->wanted_features &= ~NETIF_F_HW_VLAN_CTAG_RX;
+                       netdev_warn(netdev,
+                                   "S-tag and C-tag stripping can't be enabled at the same time. Enabling S-tag stripping and disabling C-tag stripping\n");
+               } else if (netdev->features & NETIF_F_HW_VLAN_STAG_RX) {
+                       features &= ~NETIF_F_HW_VLAN_STAG_RX;
+                       netdev->wanted_features &= ~NETIF_F_HW_VLAN_STAG_RX;
+                       netdev_warn(netdev,
+                                   "S-tag and C-tag stripping can't be enabled at the same time. Enabling C-tag stripping and disabling S-tag stripping\n");
+               }
+       }
+       return features;
+}
+
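nfp_net_fix_features() resolves the C-tag/S-tag strip conflict in favour of whichever feature the user enabled last. A minimal standalone sketch of that precedence rule (flag values hypothetical, reduced from the netdev_features_t logic above):

#include <stdio.h>

#define F_CTAG_RX (1u << 0) /* stands in for NETIF_F_HW_VLAN_CTAG_RX */
#define F_STAG_RX (1u << 1) /* stands in for NETIF_F_HW_VLAN_STAG_RX */

/* If both strip features are requested, drop the one that was already
 * active so the most recently enabled one wins. */
static unsigned int fix_features(unsigned int active, unsigned int wanted)
{
	if ((wanted & F_CTAG_RX) && (wanted & F_STAG_RX)) {
		if (active & F_CTAG_RX)
			wanted &= ~F_CTAG_RX; /* new S-tag request wins */
		else if (active & F_STAG_RX)
			wanted &= ~F_STAG_RX; /* new C-tag request wins */
	}
	return wanted;
}

int main(void)
{
	/* C-tag strip already on, user now asks for S-tag strip too */
	printf("0x%x\n", fix_features(F_CTAG_RX, F_CTAG_RX | F_STAG_RX)); /* 0x2 */
	return 0;
}
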
 static netdev_features_t
 nfp_net_features_check(struct sk_buff *skb, struct net_device *dev,
                       netdev_features_t features)
@@ -1757,8 +1788,7 @@ nfp_net_features_check(struct sk_buff *skb, struct net_device *dev,
        if (skb_is_gso(skb)) {
                u32 hdrlen;
 
-               hdrlen = skb_inner_transport_header(skb) - skb->data +
-                       inner_tcp_hdrlen(skb);
+               hdrlen = skb_inner_tcp_all_headers(skb);
 
                /* Assume worst case scenario of having longest possible
                 * metadata prepend - 8B
@@ -1892,6 +1922,69 @@ static int nfp_net_set_mac_address(struct net_device *netdev, void *addr)
        return 0;
 }
 
+static int nfp_net_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
+                                 struct net_device *dev, u32 filter_mask,
+                                 int nlflags)
+{
+       struct nfp_net *nn = netdev_priv(dev);
+       u16 mode;
+
+       if (!(nn->cap & NFP_NET_CFG_CTRL_VEPA))
+               return -EOPNOTSUPP;
+
+       mode = (nn->dp.ctrl & NFP_NET_CFG_CTRL_VEPA) ?
+              BRIDGE_MODE_VEPA : BRIDGE_MODE_VEB;
+
+       return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode, 0, 0,
+                                      nlflags, filter_mask, NULL);
+}
+
+static int nfp_net_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh,
+                                 u16 flags, struct netlink_ext_ack *extack)
+{
+       struct nfp_net *nn = netdev_priv(dev);
+       struct nlattr *attr, *br_spec;
+       int rem, err;
+       u32 new_ctrl;
+       u16 mode;
+
+       if (!(nn->cap & NFP_NET_CFG_CTRL_VEPA))
+               return -EOPNOTSUPP;
+
+       br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
+       if (!br_spec)
+               return -EINVAL;
+
+       nla_for_each_nested(attr, br_spec, rem) {
+               if (nla_type(attr) != IFLA_BRIDGE_MODE)
+                       continue;
+
+               if (nla_len(attr) < sizeof(mode))
+                       return -EINVAL;
+
+               new_ctrl = nn->dp.ctrl;
+               mode = nla_get_u16(attr);
+               if (mode == BRIDGE_MODE_VEPA)
+                       new_ctrl |= NFP_NET_CFG_CTRL_VEPA;
+               else if (mode == BRIDGE_MODE_VEB)
+                       new_ctrl &= ~NFP_NET_CFG_CTRL_VEPA;
+               else
+                       return -EOPNOTSUPP;
+
+               if (new_ctrl == nn->dp.ctrl)
+                       return 0;
+
+               nn_writel(nn, NFP_NET_CFG_CTRL, new_ctrl);
+               err = nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_GEN);
+               if (!err)
+                       nn->dp.ctrl = new_ctrl;
+
+               return err;
+       }
+
+       return -EINVAL;
+}
+
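Usage note (assuming stock iproute2): the VEPA/VEB switching mode wired up here rides the standard IFLA_BRIDGE_MODE attribute, so it would normally be flipped with "bridge link set dev <ifname> hwmode vepa" (or "hwmode veb") and read back via "bridge -d link show".
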
 const struct net_device_ops nfp_nfd3_netdev_ops = {
        .ndo_init               = nfp_app_ndo_init,
        .ndo_uninit             = nfp_app_ndo_uninit,
@@ -1914,11 +2007,14 @@ const struct net_device_ops nfp_nfd3_netdev_ops = {
        .ndo_change_mtu         = nfp_net_change_mtu,
        .ndo_set_mac_address    = nfp_net_set_mac_address,
        .ndo_set_features       = nfp_net_set_features,
+       .ndo_fix_features       = nfp_net_fix_features,
        .ndo_features_check     = nfp_net_features_check,
        .ndo_get_phys_port_name = nfp_net_get_phys_port_name,
        .ndo_bpf                = nfp_net_xdp,
        .ndo_xsk_wakeup         = nfp_net_xsk_wakeup,
        .ndo_get_devlink_port   = nfp_devlink_get_devlink_port,
+       .ndo_bridge_getlink     = nfp_net_bridge_getlink,
+       .ndo_bridge_setlink     = nfp_net_bridge_setlink,
 };
 
 const struct net_device_ops nfp_nfdk_netdev_ops = {
@@ -1932,6 +2028,7 @@ const struct net_device_ops nfp_nfdk_netdev_ops = {
        .ndo_vlan_rx_kill_vid   = nfp_net_vlan_rx_kill_vid,
        .ndo_set_vf_mac         = nfp_app_set_vf_mac,
        .ndo_set_vf_vlan        = nfp_app_set_vf_vlan,
+       .ndo_set_vf_rate        = nfp_app_set_vf_rate,
        .ndo_set_vf_spoofchk    = nfp_app_set_vf_spoofchk,
        .ndo_set_vf_trust       = nfp_app_set_vf_trust,
        .ndo_get_vf_config      = nfp_app_get_vf_config,
@@ -1942,10 +2039,13 @@ const struct net_device_ops nfp_nfdk_netdev_ops = {
        .ndo_change_mtu         = nfp_net_change_mtu,
        .ndo_set_mac_address    = nfp_net_set_mac_address,
        .ndo_set_features       = nfp_net_set_features,
+       .ndo_fix_features       = nfp_net_fix_features,
        .ndo_features_check     = nfp_net_features_check,
        .ndo_get_phys_port_name = nfp_net_get_phys_port_name,
        .ndo_bpf                = nfp_net_xdp,
        .ndo_get_devlink_port   = nfp_devlink_get_devlink_port,
+       .ndo_bridge_getlink     = nfp_net_bridge_getlink,
+       .ndo_bridge_setlink     = nfp_net_bridge_setlink,
 };
 
 static int nfp_udp_tunnel_sync(struct net_device *netdev, unsigned int table)
@@ -1993,7 +2093,7 @@ void nfp_net_info(struct nfp_net *nn)
                nn->fw_ver.extend, nn->fw_ver.class,
                nn->fw_ver.major, nn->fw_ver.minor,
                nn->max_mtu);
-       nn_info(nn, "CAP: %#x %s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
+       nn_info(nn, "CAP: %#x %s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
                nn->cap,
                nn->cap & NFP_NET_CFG_CTRL_PROMISC  ? "PROMISC "  : "",
                nn->cap & NFP_NET_CFG_CTRL_L2BC     ? "L2BCFILT " : "",
@@ -2002,6 +2102,9 @@ void nfp_net_info(struct nfp_net *nn)
                nn->cap & NFP_NET_CFG_CTRL_TXCSUM   ? "TXCSUM "   : "",
                nn->cap & NFP_NET_CFG_CTRL_RXVLAN   ? "RXVLAN "   : "",
                nn->cap & NFP_NET_CFG_CTRL_TXVLAN   ? "TXVLAN "   : "",
+               nn->cap & NFP_NET_CFG_CTRL_RXQINQ   ? "RXQINQ "   : "",
+               nn->cap & NFP_NET_CFG_CTRL_RXVLAN_V2 ? "RXVLANv2 " : "",
+               nn->cap & NFP_NET_CFG_CTRL_TXVLAN_V2 ? "TXVLANv2 " : "",
                nn->cap & NFP_NET_CFG_CTRL_SCATTER  ? "SCATTER "  : "",
                nn->cap & NFP_NET_CFG_CTRL_GATHER   ? "GATHER "   : "",
                nn->cap & NFP_NET_CFG_CTRL_LSO      ? "TSO1 "     : "",
@@ -2012,6 +2115,7 @@ void nfp_net_info(struct nfp_net *nn)
                nn->cap & NFP_NET_CFG_CTRL_MSIXAUTO ? "AUTOMASK " : "",
                nn->cap & NFP_NET_CFG_CTRL_IRQMOD   ? "IRQMOD "   : "",
                nn->cap & NFP_NET_CFG_CTRL_TXRWB    ? "TXRWB "    : "",
+               nn->cap & NFP_NET_CFG_CTRL_VEPA     ? "VEPA "     : "",
                nn->cap & NFP_NET_CFG_CTRL_VXLAN    ? "VXLAN "    : "",
                nn->cap & NFP_NET_CFG_CTRL_NVGRE    ? "NVGRE "    : "",
                nn->cap & NFP_NET_CFG_CTRL_CSUM_COMPLETE ?
@@ -2288,31 +2392,39 @@ static void nfp_net_netdev_init(struct nfp_net *nn)
 
        netdev->vlan_features = netdev->hw_features;
 
-       if (nn->cap & NFP_NET_CFG_CTRL_RXVLAN) {
+       if (nn->cap & NFP_NET_CFG_CTRL_RXVLAN_ANY) {
                netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX;
-               nn->dp.ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
+               nn->dp.ctrl |= nn->cap & NFP_NET_CFG_CTRL_RXVLAN_V2 ?:
+                              NFP_NET_CFG_CTRL_RXVLAN;
        }
-       if (nn->cap & NFP_NET_CFG_CTRL_TXVLAN) {
+       if (nn->cap & NFP_NET_CFG_CTRL_TXVLAN_ANY) {
                if (nn->cap & NFP_NET_CFG_CTRL_LSO2) {
                        nn_warn(nn, "Device advertises both TSO2 and TXVLAN. Refusing to enable TXVLAN.\n");
                } else {
                        netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX;
-                       nn->dp.ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
+                       nn->dp.ctrl |= nn->cap & NFP_NET_CFG_CTRL_TXVLAN_V2 ?:
+                                      NFP_NET_CFG_CTRL_TXVLAN;
                }
        }
        if (nn->cap & NFP_NET_CFG_CTRL_CTAG_FILTER) {
                netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
                nn->dp.ctrl |= NFP_NET_CFG_CTRL_CTAG_FILTER;
        }
+       if (nn->cap & NFP_NET_CFG_CTRL_RXQINQ) {
+               netdev->hw_features |= NETIF_F_HW_VLAN_STAG_RX;
+               nn->dp.ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
+       }
 
        netdev->features = netdev->hw_features;
 
        if (nfp_app_has_tc(nn->app) && nn->port)
                netdev->hw_features |= NETIF_F_HW_TC;
 
-       /* Advertise but disable TSO by default. */
-       netdev->features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
-       nn->dp.ctrl &= ~NFP_NET_CFG_CTRL_LSO_ANY;
+       /* C-Tag strip and S-Tag strip can't be supported simultaneously,
+        * so enable C-Tag strip and disable S-Tag strip by default.
+        */
+       netdev->features &= ~NETIF_F_HW_VLAN_STAG_RX;
+       nn->dp.ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
 
        /* Finalise the netdev setup */
        switch (nn->dp.ops->version) {
index 8892a94..ac05ec3 100644
 #define NFP_NET_LSO_MAX_HDR_SZ         255
 #define NFP_NET_LSO_MAX_SEGS           64
 
+/* Working with the metadata VLAN API (NFD version >= 2.0) */
+#define NFP_NET_META_VLAN_STRIP                        BIT(31)
+#define NFP_NET_META_VLAN_TPID_MASK            GENMASK(19, 16)
+#define NFP_NET_META_VLAN_TCI_MASK             GENMASK(15, 0)
+
 /* Prepend field types */
 #define NFP_NET_META_FIELD_SIZE                4
 #define NFP_NET_META_HASH              1 /* next field carries hash type */
 #define NFP_NET_META_MARK              2
+#define NFP_NET_META_VLAN              4 /* C-tag or S-tag type */
 #define NFP_NET_META_PORTID            5
 #define NFP_NET_META_CSUM              6 /* checksum complete type */
 #define NFP_NET_META_CONN_HANDLE       7
 
 #define NFP_META_PORT_ID_CTRL          ~0U
 
+/* Prepend field sizes */
+#define NFP_NET_META_VLAN_SIZE                 4
+#define NFP_NET_META_PORTID_SIZE               4
+#define NFP_NET_META_CONN_HANDLE_SIZE          8
 /* Hash type pre-pended when a RSS hash was computed */
 #define NFP_NET_RSS_NONE               0
 #define NFP_NET_RSS_IPV4               1
 #define   NFP_NET_CFG_CTRL_LSO           (0x1 << 10) /* LSO/TSO (version 1) */
 #define   NFP_NET_CFG_CTRL_CTAG_FILTER   (0x1 << 11) /* VLAN CTAG filtering */
 #define   NFP_NET_CFG_CTRL_CMSG_DATA     (0x1 << 12) /* RX cmsgs on data Qs */
+#define   NFP_NET_CFG_CTRL_RXQINQ        (0x1 << 13) /* Enable S-tag strip */
+#define   NFP_NET_CFG_CTRL_RXVLAN_V2     (0x1 << 15) /* Enable C-tag strip */
 #define   NFP_NET_CFG_CTRL_RINGCFG       (0x1 << 16) /* Ring runtime changes */
 #define   NFP_NET_CFG_CTRL_RSS           (0x1 << 17) /* RSS (version 1) */
 #define   NFP_NET_CFG_CTRL_IRQMOD        (0x1 << 18) /* Interrupt moderation */
 #define   NFP_NET_CFG_CTRL_MSIXAUTO      (0x1 << 20) /* MSI-X auto-masking */
 #define   NFP_NET_CFG_CTRL_TXRWB         (0x1 << 21) /* Write-back of TX ring*/
+#define   NFP_NET_CFG_CTRL_VEPA                  (0x1 << 22) /* Enable VEPA mode */
+#define   NFP_NET_CFG_CTRL_TXVLAN_V2     (0x1 << 23) /* Enable VLAN C-tag insert*/
 #define   NFP_NET_CFG_CTRL_VXLAN         (0x1 << 24) /* VXLAN tunnel support */
 #define   NFP_NET_CFG_CTRL_NVGRE         (0x1 << 25) /* NVGRE tunnel support */
 #define   NFP_NET_CFG_CTRL_BPF           (0x1 << 27) /* BPF offload capable */
                                         NFP_NET_CFG_CTRL_CSUM_COMPLETE)
 #define NFP_NET_CFG_CTRL_CHAIN_META    (NFP_NET_CFG_CTRL_RSS2 | \
                                         NFP_NET_CFG_CTRL_CSUM_COMPLETE)
+#define NFP_NET_CFG_CTRL_RXVLAN_ANY    (NFP_NET_CFG_CTRL_RXVLAN | \
+                                        NFP_NET_CFG_CTRL_RXVLAN_V2)
+#define NFP_NET_CFG_CTRL_TXVLAN_ANY    (NFP_NET_CFG_CTRL_TXVLAN | \
+                                        NFP_NET_CFG_CTRL_TXVLAN_V2)
 
 #define NFP_NET_CFG_UPDATE             0x0004
 #define   NFP_NET_CFG_UPDATE_GEN         (0x1 <<  0) /* General update */
index 34dd948..550df83 100644
@@ -440,3 +440,27 @@ bool nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb)
 
        return ret;
 }
+
+bool nfp_net_vlan_strip(struct sk_buff *skb, const struct nfp_net_rx_desc *rxd,
+                       const struct nfp_meta_parsed *meta)
+{
+       u16 tpid = 0, tci = 0;
+
+       if (rxd->rxd.flags & PCIE_DESC_RX_VLAN) {
+               tpid = ETH_P_8021Q;
+               tci = le16_to_cpu(rxd->rxd.vlan);
+       } else if (meta->vlan.stripped) {
+               if (meta->vlan.tpid == NFP_NET_VLAN_CTAG)
+                       tpid = ETH_P_8021Q;
+               else if (meta->vlan.tpid == NFP_NET_VLAN_STAG)
+                       tpid = ETH_P_8021AD;
+               else
+                       return false;
+
+               tci = meta->vlan.tci;
+       }
+       if (tpid)
+               __vlan_hwaccel_put_tag(skb, htons(tpid), tci);
+
+       return true;
+}
index 83becb3..831c83c 100644
@@ -106,6 +106,8 @@ int nfp_net_tx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp);
 void nfp_net_rx_rings_free(struct nfp_net_dp *dp);
 void nfp_net_tx_rings_free(struct nfp_net_dp *dp);
 void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring);
+bool nfp_net_vlan_strip(struct sk_buff *skb, const struct nfp_net_rx_desc *rxd,
+                       const struct nfp_meta_parsed *meta);
 
 enum nfp_nfd_version {
        NFP_NFD_VER_NFD3,
index 15e9cf7..c922dfa 100644
@@ -29,6 +29,7 @@
 #include "nfp_net_dp.h"
 #include "nfp_net.h"
 #include "nfp_port.h"
+#include "nfpcore/nfp_cpp.h"
 
 struct nfp_et_stat {
        char name[ETH_GSTRING_LEN];
@@ -442,6 +443,160 @@ static int nfp_net_set_ringparam(struct net_device *netdev,
        return nfp_net_set_ring_size(nn, rxd_cnt, txd_cnt);
 }
 
+static int nfp_test_link(struct net_device *netdev)
+{
+       if (!netif_carrier_ok(netdev) || !(netdev->flags & IFF_UP))
+               return 1;
+
+       return 0;
+}
+
+static int nfp_test_nsp(struct net_device *netdev)
+{
+       struct nfp_app *app = nfp_app_from_netdev(netdev);
+       struct nfp_nsp_identify *nspi;
+       struct nfp_nsp *nsp;
+       int err;
+
+       nsp = nfp_nsp_open(app->cpp);
+       if (IS_ERR(nsp)) {
+               err = PTR_ERR(nsp);
+               netdev_info(netdev, "NSP Test: failed to access the NSP: %d\n", err);
+               goto exit;
+       }
+
+       if (nfp_nsp_get_abi_ver_minor(nsp) < 15) {
+               err = -EOPNOTSUPP;
+               goto exit_close_nsp;
+       }
+
+       nspi = kzalloc(sizeof(*nspi), GFP_KERNEL);
+       if (!nspi) {
+               err = -ENOMEM;
+               goto exit_close_nsp;
+       }
+
+       err = nfp_nsp_read_identify(nsp, nspi, sizeof(*nspi));
+       if (err < 0)
+               netdev_info(netdev, "NSP Test: reading bsp version failed %d\n", err);
+
+       kfree(nspi);
+exit_close_nsp:
+       nfp_nsp_close(nsp);
+exit:
+       return err;
+}
+
+static int nfp_test_fw(struct net_device *netdev)
+{
+       struct nfp_net *nn = netdev_priv(netdev);
+       int err;
+
+       err = nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_GEN);
+       if (err)
+               netdev_info(netdev, "FW Test: update failed %d\n", err);
+
+       return err;
+}
+
+static int nfp_test_reg(struct net_device *netdev)
+{
+       struct nfp_app *app = nfp_app_from_netdev(netdev);
+       struct nfp_cpp *cpp = app->cpp;
+       u32 model = nfp_cpp_model(cpp);
+       u32 value;
+       int err;
+
+       err = nfp_cpp_model_autodetect(cpp, &value);
+       if (err < 0) {
+               netdev_info(netdev, "REG Test: NFP model detection failed %d\n", err);
+               return err;
+       }
+
+       return (value == model) ? 0 : 1;
+}
+
+static bool link_test_supported(struct net_device *netdev)
+{
+       return true;
+}
+
+static bool nsp_test_supported(struct net_device *netdev)
+{
+       if (nfp_app_from_netdev(netdev))
+               return true;
+
+       return false;
+}
+
+static bool fw_test_supported(struct net_device *netdev)
+{
+       if (nfp_netdev_is_nfp_net(netdev))
+               return true;
+
+       return false;
+}
+
+static bool reg_test_supported(struct net_device *netdev)
+{
+       if (nfp_app_from_netdev(netdev))
+               return true;
+
+       return false;
+}
+
+static struct nfp_self_test_item {
+       char name[ETH_GSTRING_LEN];
+       bool (*is_supported)(struct net_device *dev);
+       int (*func)(struct net_device *dev);
+} nfp_self_test[] = {
+       {"Link Test", link_test_supported, nfp_test_link},
+       {"NSP Test", nsp_test_supported, nfp_test_nsp},
+       {"Firmware Test", fw_test_supported, nfp_test_fw},
+       {"Register Test", reg_test_supported, nfp_test_reg}
+};
+
+#define NFP_TEST_TOTAL_NUM ARRAY_SIZE(nfp_self_test)
+
+static void nfp_get_self_test_strings(struct net_device *netdev, u8 *data)
+{
+       int i;
+
+       for (i = 0; i < NFP_TEST_TOTAL_NUM; i++)
+               if (nfp_self_test[i].is_supported(netdev))
+                       ethtool_sprintf(&data, nfp_self_test[i].name);
+}
+
+static int nfp_get_self_test_count(struct net_device *netdev)
+{
+       int i, count = 0;
+
+       for (i = 0; i < NFP_TEST_TOTAL_NUM; i++)
+               if (nfp_self_test[i].is_supported(netdev))
+                       count++;
+
+       return count;
+}
+
+static void nfp_net_self_test(struct net_device *netdev, struct ethtool_test *eth_test,
+                             u64 *data)
+{
+       int i, ret, count = 0;
+
+       netdev_info(netdev, "Start self test\n");
+
+       for (i = 0; i < NFP_TEST_TOTAL_NUM; i++) {
+               if (nfp_self_test[i].is_supported(netdev)) {
+                       ret = nfp_self_test[i].func(netdev);
+                       if (ret)
+                               eth_test->flags |= ETH_TEST_FL_FAILED;
+                       data[count++] = ret;
+               }
+       }
+
+       netdev_info(netdev, "Test end\n");
+}
+
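These entries hook into the standard ethtool self-test flow: "ethtool -t <ifname>" invokes nfp_net_self_test(), one u64 result slot is filled per supported test (non-zero meaning failure), and the ETH_SS_TEST string/count callbacks below report the matching names.
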
 static unsigned int nfp_vnic_get_sw_stats_count(struct net_device *netdev)
 {
        struct nfp_net *nn = netdev_priv(netdev);
@@ -705,6 +860,9 @@ static void nfp_net_get_strings(struct net_device *netdev,
                data = nfp_mac_get_stats_strings(netdev, data);
                data = nfp_app_port_get_stats_strings(nn->port, data);
                break;
+       case ETH_SS_TEST:
+               nfp_get_self_test_strings(netdev, data);
+               break;
        }
 }
 
@@ -739,6 +897,8 @@ static int nfp_net_get_sset_count(struct net_device *netdev, int sset)
                cnt += nfp_mac_get_stats_count(netdev);
                cnt += nfp_app_port_get_stats_count(nn->port);
                return cnt;
+       case ETH_SS_TEST:
+               return nfp_get_self_test_count(netdev);
        default:
                return -EOPNOTSUPP;
        }
@@ -757,6 +917,9 @@ static void nfp_port_get_strings(struct net_device *netdev,
                        data = nfp_mac_get_stats_strings(netdev, data);
                data = nfp_app_port_get_stats_strings(port, data);
                break;
+       case ETH_SS_TEST:
+               nfp_get_self_test_strings(netdev, data);
+               break;
        }
 }
 
@@ -786,6 +949,8 @@ static int nfp_port_get_sset_count(struct net_device *netdev, int sset)
                        count = nfp_mac_get_stats_count(netdev);
                count += nfp_app_port_get_stats_count(port);
                return count;
+       case ETH_SS_TEST:
+               return nfp_get_self_test_count(netdev);
        default:
                return -EOPNOTSUPP;
        }
@@ -1477,6 +1642,38 @@ static void nfp_port_get_pauseparam(struct net_device *netdev,
        pause->tx_pause = 1;
 }
 
+static int nfp_net_set_phys_id(struct net_device *netdev,
+                              enum ethtool_phys_id_state state)
+{
+       struct nfp_eth_table_port *eth_port;
+       struct nfp_port *port;
+       int err;
+
+       port = nfp_port_from_netdev(netdev);
+       eth_port = __nfp_port_get_eth_port(port);
+       if (!eth_port)
+               return -EOPNOTSUPP;
+
+       switch (state) {
+       case ETHTOOL_ID_ACTIVE:
+               /* Control LED to blink */
+               err = nfp_eth_set_idmode(port->app->cpp, eth_port->index, 1);
+               break;
+
+       case ETHTOOL_ID_INACTIVE:
+               /* Control LED to normal mode */
+               err = nfp_eth_set_idmode(port->app->cpp, eth_port->index, 0);
+               break;
+
+       case ETHTOOL_ID_ON:
+       case ETHTOOL_ID_OFF:
+       default:
+               return -EOPNOTSUPP;
+       }
+
+       return err;
+}
+
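Per the ethtool contract, returning 0 from the ETHTOOL_ID_ACTIVE case tells the core that the device blinks autonomously until ETHTOOL_ID_INACTIVE, so the software-toggled ETHTOOL_ID_ON/OFF states are deliberately rejected here; in practice this is driven by "ethtool -p <ifname> [seconds]".
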
 static const struct ethtool_ops nfp_net_ethtool_ops = {
        .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
                                     ETHTOOL_COALESCE_MAX_FRAMES |
@@ -1485,6 +1682,7 @@ static const struct ethtool_ops nfp_net_ethtool_ops = {
        .get_link               = ethtool_op_get_link,
        .get_ringparam          = nfp_net_get_ringparam,
        .set_ringparam          = nfp_net_set_ringparam,
+       .self_test              = nfp_net_self_test,
        .get_strings            = nfp_net_get_strings,
        .get_ethtool_stats      = nfp_net_get_stats,
        .get_sset_count         = nfp_net_get_sset_count,
@@ -1510,6 +1708,7 @@ static const struct ethtool_ops nfp_net_ethtool_ops = {
        .get_fecparam           = nfp_port_get_fecparam,
        .set_fecparam           = nfp_port_set_fecparam,
        .get_pauseparam         = nfp_port_get_pauseparam,
+       .set_phys_id            = nfp_net_set_phys_id,
 };
 
 const struct ethtool_ops nfp_port_ethtool_ops = {
@@ -1517,6 +1716,7 @@ const struct ethtool_ops nfp_port_ethtool_ops = {
        .get_link               = ethtool_op_get_link,
        .get_strings            = nfp_port_get_strings,
        .get_ethtool_stats      = nfp_port_get_stats,
+       .self_test              = nfp_net_self_test,
        .get_sset_count         = nfp_port_get_sset_count,
        .set_dump               = nfp_app_set_dump,
        .get_dump_flag          = nfp_app_get_dump_flag,
@@ -1528,6 +1728,7 @@ const struct ethtool_ops nfp_port_ethtool_ops = {
        .get_fecparam           = nfp_port_get_fecparam,
        .set_fecparam           = nfp_port_set_fecparam,
        .get_pauseparam         = nfp_port_get_pauseparam,
+       .set_phys_id            = nfp_net_set_phys_id,
 };
 
 void nfp_net_set_ethtool_ops(struct net_device *netdev)
index 75b5018..8b77582 100644
@@ -365,9 +365,9 @@ int nfp_repr_init(struct nfp_app *app, struct net_device *netdev,
 
        netdev->vlan_features = netdev->hw_features;
 
-       if (repr_cap & NFP_NET_CFG_CTRL_RXVLAN)
+       if (repr_cap & NFP_NET_CFG_CTRL_RXVLAN_ANY)
                netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX;
-       if (repr_cap & NFP_NET_CFG_CTRL_TXVLAN) {
+       if (repr_cap & NFP_NET_CFG_CTRL_TXVLAN_ANY) {
                if (repr_cap & NFP_NET_CFG_CTRL_LSO2)
                        netdev_warn(netdev, "Device advertises both TSO2 and TXVLAN. Refusing to enable TXVLAN.\n");
                else
@@ -375,11 +375,15 @@ int nfp_repr_init(struct nfp_app *app, struct net_device *netdev,
        }
        if (repr_cap & NFP_NET_CFG_CTRL_CTAG_FILTER)
                netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+       if (repr_cap & NFP_NET_CFG_CTRL_RXQINQ)
+               netdev->hw_features |= NETIF_F_HW_VLAN_STAG_RX;
 
        netdev->features = netdev->hw_features;
 
-       /* Advertise but disable TSO by default. */
-       netdev->features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
+       /* C-Tag strip and S-Tag strip can't be supported simultaneously,
+        * so enable C-Tag strip and disable S-Tag strip by default.
+        */
+       netdev->features &= ~NETIF_F_HW_VLAN_STAG_RX;
        netif_set_tso_max_segs(netdev, NFP_NET_LSO_MAX_SEGS);
 
        netdev->priv_flags |= IFF_NO_QUEUE | IFF_DISABLE_NETPOLL;
index f5360ba..77d6685 100644
@@ -196,6 +196,8 @@ int nfp_eth_set_configured(struct nfp_cpp *cpp, unsigned int idx,
 int
 nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode);
 
+int nfp_eth_set_idmode(struct nfp_cpp *cpp, unsigned int idx, bool state);
+
 static inline bool nfp_eth_can_support_fec(struct nfp_eth_table_port *eth_port)
 {
        return !!eth_port->fec_modes_supported;
index 311a5be..edd3000 100644
@@ -49,6 +49,7 @@
 #define NSP_ETH_CTRL_SET_LANES         BIT_ULL(5)
 #define NSP_ETH_CTRL_SET_ANEG          BIT_ULL(6)
 #define NSP_ETH_CTRL_SET_FEC           BIT_ULL(7)
+#define NSP_ETH_CTRL_SET_IDMODE                BIT_ULL(8)
 
 enum nfp_eth_raw {
        NSP_ETH_RAW_PORT = 0,
@@ -492,6 +493,35 @@ nfp_eth_set_bit_config(struct nfp_nsp *nsp, unsigned int raw_idx,
        return 0;
 }
 
+int nfp_eth_set_idmode(struct nfp_cpp *cpp, unsigned int idx, bool state)
+{
+       union eth_table_entry *entries;
+       struct nfp_nsp *nsp;
+       u64 reg;
+
+       nsp = nfp_eth_config_start(cpp, idx);
+       if (IS_ERR(nsp))
+               return PTR_ERR(nsp);
+
+       /* The set-id-mode feature was added in NSP ABI 0.32 */
+       if (nfp_nsp_get_abi_ver_minor(nsp) < 32) {
+               nfp_err(nfp_nsp_cpp(nsp),
+                       "set id mode operation not supported, please update flash\n");
+               nfp_eth_config_cleanup_end(nsp);
+               return -EOPNOTSUPP;
+       }
+
+       entries = nfp_nsp_config_entries(nsp);
+
+       reg = le64_to_cpu(entries[idx].control);
+       reg &= ~NSP_ETH_CTRL_SET_IDMODE;
+       reg |= FIELD_PREP(NSP_ETH_CTRL_SET_IDMODE, state);
+       entries[idx].control = cpu_to_le64(reg);
+
+       nfp_nsp_config_set_modified(nsp, true);
+
+       return nfp_eth_config_commit_end(nsp);
+}
+
 #define NFP_ETH_SET_BIT_CONFIG(nsp, raw_idx, mask, val, ctrl_bit)      \
        ({                                                              \
                __BF_FIELD_CHECK(mask, 0ULL, val, "NFP_ETH_SET_BIT_CONFIG: "); \
index f540354..c03986b 100644
@@ -947,10 +947,9 @@ static int ionic_tx_tso(struct ionic_queue *q, struct sk_buff *skb)
        }
 
        if (encap)
-               hdrlen = skb_inner_transport_header(skb) - skb->data +
-                        inner_tcp_hdrlen(skb);
+               hdrlen = skb_inner_tcp_all_headers(skb);
        else
-               hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdrlen = skb_tcp_all_headers(skb);
 
        tso_rem = len;
        seg_rem = min(tso_rem, hdrlen + mss);
index 07dd3c3..4e6f00a 100644
@@ -1877,7 +1877,7 @@ netxen_tso_check(struct net_device *netdev,
        if ((netdev->features & (NETIF_F_TSO | NETIF_F_TSO6)) &&
                        skb_shinfo(skb)->gso_size > 0) {
 
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
 
                first_desc->mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
                first_desc->total_hdr_length = hdr_len;
index 82e74f6..d701ecd 100644
@@ -1110,7 +1110,7 @@ static int qed_int_deassertion(struct qed_hwfn  *p_hwfn,
                                                                 bit_len);
 
                                        /* Some bits represent more than a
-                                        * single interrupt.  Correctly print
+                                        * single interrupt. Correctly print
                                         * their name.
                                         */
                                        if (ATTENTION_LENGTH(flags) > 2 ||
index 69b0ede..5a5dbbb 100644
@@ -42,8 +42,7 @@ int qed_rdma_bmap_alloc(struct qed_hwfn *p_hwfn,
 
        bmap->max_count = max_count;
 
-       bmap->bitmap = kcalloc(BITS_TO_LONGS(max_count), sizeof(long),
-                              GFP_KERNEL);
+       bmap->bitmap = bitmap_zalloc(max_count, GFP_KERNEL);
        if (!bmap->bitmap)
                return -ENOMEM;
 
@@ -107,7 +106,7 @@ int qed_bmap_test_id(struct qed_hwfn *p_hwfn,
 
 static bool qed_bmap_is_empty(struct qed_bmap *bmap)
 {
-       return bmap->max_count == find_first_bit(bmap->bitmap, bmap->max_count);
+       return bitmap_empty(bmap->bitmap, bmap->max_count);
 }
 
 static u32 qed_rdma_get_sb_id(void *p_hwfn, u32 rel_sb_id)
@@ -343,7 +342,7 @@ void qed_rdma_bmap_free(struct qed_hwfn *p_hwfn,
        }
 
 end:
-       kfree(bmap->bitmap);
+       bitmap_free(bmap->bitmap);
        bmap->bitmap = NULL;
 }
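
The conversion above trades an open-coded kcalloc(BITS_TO_LONGS(...), sizeof(long)) allocation and find_first_bit() scan for the dedicated bitmap API. A user-space sketch of the resulting idiom (the *_us() stand-ins only approximate the kernel's bitmap_zalloc()/bitmap_empty(); in particular, bits past nbits in the last word are assumed to stay clear):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG    (CHAR_BIT * sizeof(long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long *bitmap_zalloc_us(unsigned int nbits)
{
	/* Same storage as kcalloc(BITS_TO_LONGS(nbits), sizeof(long)) */
	return calloc(BITS_TO_LONGS(nbits), sizeof(unsigned long));
}

static int bitmap_empty_us(const unsigned long *map, unsigned int nbits)
{
	for (unsigned int i = 0; i < BITS_TO_LONGS(nbits); i++)
		if (map[i])
			return 0;
	return 1;
}

int main(void)
{
	unsigned long *map = bitmap_zalloc_us(100);

	if (!map)
		return 1;
	printf("empty=%d\n", bitmap_empty_us(map, 100)); /* empty=1 */
	map[1] |= 1UL;                                   /* set one bit */
	printf("empty=%d\n", bitmap_empty_us(map, 100)); /* empty=0 */
	free(map);
	return 0;
}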
 
index b7cc365..7c2af48 100644
@@ -260,11 +260,9 @@ static int map_frag_to_bd(struct qede_tx_queue *txq,
 static u16 qede_get_skb_hlen(struct sk_buff *skb, bool is_encap_pkt)
 {
        if (is_encap_pkt)
-               return (skb_inner_transport_header(skb) +
-                       inner_tcp_hdrlen(skb) - skb->data);
-       else
-               return (skb_transport_header(skb) +
-                       tcp_hdrlen(skb) - skb->data);
+               return skb_inner_tcp_all_headers(skb);
+
+       return skb_tcp_all_headers(skb);
 }
 
 /* +2 for 1st BD for headers and 2nd BD for headlen (if required) */
index 8d43ca2..9da5e97 100644
@@ -497,7 +497,7 @@ set_flags:
        }
        opcode = QLCNIC_TX_ETHER_PKT;
        if (skb_is_gso(skb)) {
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
                first_desc->mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
                first_desc->hdr_length = hdr_len;
                opcode = (protocol == ETH_P_IPV6) ? QLCNIC_TX_TCP_LSO6 :
index 80c95c3..0d80447 100644
@@ -1264,7 +1264,7 @@ static int emac_tso_csum(struct emac_adapter *adpt,
                                pskb_trim(skb, pkt_len);
                }
 
-               hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               hdr_len = skb_tcp_all_headers(skb);
                if (unlikely(skb->len == hdr_len)) {
                        /* we only need to do csum */
                        netif_warn(adpt, tx_err, adpt->netdev,
@@ -1339,7 +1339,7 @@ static void emac_tx_fill_tpd(struct emac_adapter *adpt,
 
        /* if Large Segment Offload is (in TCP Segmentation Offload struct) */
        if (TPD_LSO(tpd)) {
-               mapped_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               mapped_len = skb_tcp_all_headers(skb);
 
                tpbuf = GET_TPD_BUFFER(tx_q, tx_q->tpd.produce_idx);
                tpbuf->length = mapped_len;
index 3098d66..1b7fdb4 100644
@@ -4190,7 +4190,6 @@ static void rtl8169_tso_csum_v1(struct sk_buff *skb, u32 *opts)
 static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
                                struct sk_buff *skb, u32 *opts)
 {
-       u32 transport_offset = (u32)skb_transport_offset(skb);
        struct skb_shared_info *shinfo = skb_shinfo(skb);
        u32 mss = shinfo->gso_size;
 
@@ -4207,7 +4206,7 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
                        WARN_ON_ONCE(1);
                }
 
-               opts[0] |= transport_offset << GTTCPHO_SHIFT;
+               opts[0] |= skb_transport_offset(skb) << GTTCPHO_SHIFT;
                opts[1] |= mss << TD1_MSS_SHIFT;
        } else if (skb->ip_summed == CHECKSUM_PARTIAL) {
                u8 ip_protocol;
@@ -4235,7 +4234,7 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
                else
                        WARN_ON_ONCE(1);
 
-               opts[1] |= transport_offset << TCPHO_SHIFT;
+               opts[1] |= skb_transport_offset(skb) << TCPHO_SHIFT;
        } else {
                unsigned int padto = rtl_quirk_packet_padto(tp, skb);
 
@@ -4402,14 +4401,13 @@ static netdev_features_t rtl8169_features_check(struct sk_buff *skb,
                                                struct net_device *dev,
                                                netdev_features_t features)
 {
-       int transport_offset = skb_transport_offset(skb);
        struct rtl8169_private *tp = netdev_priv(dev);
 
        if (skb_is_gso(skb)) {
                if (tp->mac_version == RTL_GIGA_MAC_VER_34)
                        features = rtl8168evl_fix_tso(skb, features);
 
-               if (transport_offset > GTTCPHO_MAX &&
+               if (skb_transport_offset(skb) > GTTCPHO_MAX &&
                    rtl_chip_supports_csum_v2(tp))
                        features &= ~NETIF_F_ALL_TSO;
        } else if (skb->ip_summed == CHECKSUM_PARTIAL) {
@@ -4420,7 +4418,7 @@ static netdev_features_t rtl8169_features_check(struct sk_buff *skb,
                if (rtl_quirk_packet_padto(tp, skb))
                        features &= ~NETIF_F_CSUM_MASK;
 
-               if (transport_offset > TCPHO_MAX &&
+               if (skb_transport_offset(skb) > TCPHO_MAX &&
                    rtl_chip_supports_csum_v2(tp))
                        features &= ~NETIF_F_CSUM_MASK;
        }
index 407a1f8..a1c10b6 100644
@@ -89,7 +89,7 @@ static void sxgbe_enable_eee_mode(const struct sxgbe_priv_data *priv)
 
 void sxgbe_disable_eee_mode(struct sxgbe_priv_data * const priv)
 {
-       /* Exit and disable EEE in case of we are are in LPI state. */
+       /* Exit and disable EEE in case we are in LPI state. */
        priv->hw->mac->reset_eee_mode(priv->ioaddr);
        del_timer_sync(&priv->eee_ctrl_timer);
        priv->tx_path_in_lpi_mode = false;
index 186cb28..a99c3a6 100644
@@ -3874,7 +3874,7 @@ static int efx_ef10_udp_tnl_set_port(struct net_device *dev,
                                     unsigned int table, unsigned int entry,
                                     struct udp_tunnel_info *ti)
 {
-       struct efx_nic *efx = netdev_priv(dev);
+       struct efx_nic *efx = efx_netdev_priv(dev);
        struct efx_ef10_nic_data *nic_data;
        int efx_tunnel_type, rc;
 
@@ -3934,7 +3934,7 @@ static int efx_ef10_udp_tnl_unset_port(struct net_device *dev,
                                       unsigned int table, unsigned int entry,
                                       struct udp_tunnel_info *ti)
 {
-       struct efx_nic *efx = netdev_priv(dev);
+       struct efx_nic *efx = efx_netdev_priv(dev);
        struct efx_ef10_nic_data *nic_data;
        int rc;
 
index 173f0ec..425017f 100644
@@ -423,65 +423,58 @@ static int ef100_pci_find_func_ctrl_window(struct efx_nic *efx,
  */
 static void ef100_pci_remove(struct pci_dev *pci_dev)
 {
-       struct efx_nic *efx;
+       struct efx_nic *efx = pci_get_drvdata(pci_dev);
+       struct efx_probe_data *probe_data;
 
-       efx = pci_get_drvdata(pci_dev);
        if (!efx)
                return;
 
-       rtnl_lock();
-       dev_close(efx->net_dev);
-       rtnl_unlock();
-
-       /* Unregistering our netdev notifier triggers unbinding of TC indirect
-        * blocks, so we have to do it before PCI removal.
-        */
-       unregister_netdevice_notifier(&efx->netdev_notifier);
-#if defined(CONFIG_SFC_SRIOV)
-       if (!efx->type->is_vf)
-               efx_ef100_pci_sriov_disable(efx);
-#endif
+       probe_data = container_of(efx, struct efx_probe_data, efx);
+       ef100_remove_netdev(probe_data);
+
        ef100_remove(efx);
        efx_fini_io(efx);
-       netif_dbg(efx, drv, efx->net_dev, "shutdown successful\n");
 
-       pci_set_drvdata(pci_dev, NULL);
-       efx_fini_struct(efx);
-       free_netdev(efx->net_dev);
+       pci_dbg(pci_dev, "shutdown successful\n");
 
        pci_disable_pcie_error_reporting(pci_dev);
+
+       pci_set_drvdata(pci_dev, NULL);
+       efx_fini_struct(efx);
+       kfree(probe_data);
 };
 
 static int ef100_pci_probe(struct pci_dev *pci_dev,
                           const struct pci_device_id *entry)
 {
        struct ef100_func_ctl_window fcw = { 0 };
-       struct net_device *net_dev;
+       struct efx_probe_data *probe_data;
        struct efx_nic *efx;
        int rc;
 
-       /* Allocate and initialise a struct net_device and struct efx_nic */
-       net_dev = alloc_etherdev_mq(sizeof(*efx), EFX_MAX_CORE_TX_QUEUES);
-       if (!net_dev)
+       /* Allocate probe data and struct efx_nic */
+       probe_data = kzalloc(sizeof(*probe_data), GFP_KERNEL);
+       if (!probe_data)
                return -ENOMEM;
-       efx = netdev_priv(net_dev);
+       probe_data->pci_dev = pci_dev;
+       efx = &probe_data->efx;
+
        efx->type = (const struct efx_nic_type *)entry->driver_data;
 
+       efx->pci_dev = pci_dev;
        pci_set_drvdata(pci_dev, efx);
-       SET_NETDEV_DEV(net_dev, &pci_dev->dev);
-       rc = efx_init_struct(efx, pci_dev, net_dev);
+       rc = efx_init_struct(efx, pci_dev);
        if (rc)
                goto fail;
 
        efx->vi_stride = EF100_DEFAULT_VI_STRIDE;
-       netif_info(efx, probe, efx->net_dev,
-                  "Solarflare EF100 NIC detected\n");
+       pci_info(pci_dev, "Solarflare EF100 NIC detected\n");
 
        rc = ef100_pci_find_func_ctrl_window(efx, &fcw);
        if (rc) {
-               netif_err(efx, probe, efx->net_dev,
-                         "Error looking for ef100 function control window, rc=%d\n",
-                         rc);
+               pci_err(pci_dev,
+                       "Error looking for ef100 function control window, rc=%d\n",
+                       rc);
                goto fail;
        }
 
@@ -493,8 +486,7 @@ static int ef100_pci_probe(struct pci_dev *pci_dev,
        }
 
        if (fcw.offset > pci_resource_len(efx->pci_dev, fcw.bar) - ESE_GZ_FCW_LEN) {
-               netif_err(efx, probe, efx->net_dev,
-                         "Func control window overruns BAR\n");
+               pci_err(pci_dev, "Func control window overruns BAR\n");
                rc = -EIO;
                goto fail;
        }
@@ -508,19 +500,16 @@ static int ef100_pci_probe(struct pci_dev *pci_dev,
 
        efx->reg_base = fcw.offset;
 
-       efx->netdev_notifier.notifier_call = ef100_netdev_event;
-       rc = register_netdevice_notifier(&efx->netdev_notifier);
-       if (rc) {
-               netif_err(efx, probe, efx->net_dev,
-                         "Failed to register netdevice notifier, rc=%d\n", rc);
+       rc = efx->type->probe(efx);
+       if (rc)
                goto fail;
-       }
 
-       rc = efx->type->probe(efx);
+       efx->state = STATE_PROBED;
+       rc = ef100_probe_netdev(probe_data);
        if (rc)
                goto fail;
 
-       netif_dbg(efx, probe, efx->net_dev, "initialisation successful\n");
+       pci_dbg(pci_dev, "initialisation successful\n");
 
        return 0;
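
The ef100 rework above inverts the old ownership: struct efx_nic is now embedded in a driver-allocated struct efx_probe_data instead of living in netdev private data, and the wrapper is recovered with container_of() as seen in ef100_pci_remove(). A minimal runnable sketch of that recovery pattern (struct contents abbreviated, not the real definitions):

#include <stddef.h>
#include <stdio.h>

struct efx_nic { int state; };

struct efx_probe_data {
	void *pci_dev;
	struct efx_nic efx; /* embedded, not pointed-to */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

int main(void)
{
	struct efx_probe_data pd = { 0 };
	struct efx_nic *efx = &pd.efx; /* what pci drvdata now stores */
	struct efx_probe_data *back =
		container_of(efx, struct efx_probe_data, efx);

	printf("%d\n", back == &pd); /* prints 1 */
	return 0;
}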
 
index 5dba412..702abbe 100644
@@ -26,7 +26,7 @@ ef100_ethtool_get_ringparam(struct net_device *net_dev,
                            struct kernel_ethtool_ringparam *kernel_ring,
                            struct netlink_ext_ack *extack)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        ring->rx_max_pending = EFX_EF100_MAX_DMAQ_SIZE;
        ring->tx_max_pending = EFX_EF100_MAX_DMAQ_SIZE;
index 67fe44d..060392d 100644
@@ -22,6 +22,7 @@
 #include "ef100_regs.h"
 #include "mcdi_filters.h"
 #include "rx_common.h"
+#include "ef100_sriov.h"
 
 static void ef100_update_name(struct efx_nic *efx)
 {
@@ -79,7 +80,7 @@ static int ef100_remap_bar(struct efx_nic *efx, int max_vis)
  */
 static int ef100_net_stop(struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        netif_dbg(efx, ifdown, efx->net_dev, "closing on CPU %d\n",
                  raw_smp_processor_id());
@@ -96,13 +97,15 @@ static int ef100_net_stop(struct net_device *net_dev)
        efx_mcdi_free_vis(efx);
        efx_remove_interrupts(efx);
 
+       efx->state = STATE_NET_DOWN;
+
        return 0;
 }
 
 /* Context: process, rtnl_lock() held. */
 static int ef100_net_open(struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        unsigned int allocated_vis;
        int rc;
 
@@ -172,6 +175,8 @@ static int ef100_net_open(struct net_device *net_dev)
                efx_link_status_changed(efx);
        mutex_unlock(&efx->mac_lock);
 
+       efx->state = STATE_NET_UP;
+
        return 0;
 
 fail:
@@ -189,7 +194,7 @@ fail:
 static netdev_tx_t ef100_hard_start_xmit(struct sk_buff *skb,
                                         struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct efx_tx_queue *tx_queue;
        struct efx_channel *channel;
        int rc;
@@ -239,13 +244,14 @@ int ef100_netdev_event(struct notifier_block *this,
        struct efx_nic *efx = container_of(this, struct efx_nic, netdev_notifier);
        struct net_device *net_dev = netdev_notifier_info_to_dev(ptr);
 
-       if (netdev_priv(net_dev) == efx && event == NETDEV_CHANGENAME)
+       if (efx->net_dev == net_dev &&
+           (event == NETDEV_CHANGENAME || event == NETDEV_REGISTER))
                ef100_update_name(efx);
 
        return NOTIFY_DONE;
 }
 
-int ef100_register_netdev(struct efx_nic *efx)
+static int ef100_register_netdev(struct efx_nic *efx)
 {
        struct net_device *net_dev = efx->net_dev;
        int rc;
@@ -271,7 +277,7 @@ int ef100_register_netdev(struct efx_nic *efx)
        /* Always start with carrier off; PHY events will detect the link */
        netif_carrier_off(net_dev);
 
-       efx->state = STATE_READY;
+       efx->state = STATE_NET_DOWN;
        rtnl_unlock();
        efx_init_mcdi_logging(efx);
 
@@ -283,11 +289,119 @@ fail_locked:
        return rc;
 }
 
-void ef100_unregister_netdev(struct efx_nic *efx)
+static void ef100_unregister_netdev(struct efx_nic *efx)
 {
        if (efx_dev_registered(efx)) {
                efx_fini_mcdi_logging(efx);
-               efx->state = STATE_UNINIT;
+               efx->state = STATE_PROBED;
                unregister_netdev(efx->net_dev);
        }
 }
+
+void ef100_remove_netdev(struct efx_probe_data *probe_data)
+{
+       struct efx_nic *efx = &probe_data->efx;
+
+       if (!efx->net_dev)
+               return;
+
+       rtnl_lock();
+       dev_close(efx->net_dev);
+       rtnl_unlock();
+
+       unregister_netdevice_notifier(&efx->netdev_notifier);
+#if defined(CONFIG_SFC_SRIOV)
+       if (!efx->type->is_vf)
+               efx_ef100_pci_sriov_disable(efx);
+#endif
+
+       ef100_unregister_netdev(efx);
+
+       down_write(&efx->filter_sem);
+       efx_mcdi_filter_table_remove(efx);
+       up_write(&efx->filter_sem);
+       efx_fini_channels(efx);
+       kfree(efx->phy_data);
+       efx->phy_data = NULL;
+
+       free_netdev(efx->net_dev);
+       efx->net_dev = NULL;
+       efx->state = STATE_PROBED;
+}
+
+int ef100_probe_netdev(struct efx_probe_data *probe_data)
+{
+       struct efx_nic *efx = &probe_data->efx;
+       struct efx_probe_data **probe_ptr;
+       struct net_device *net_dev;
+       int rc;
+
+       if (efx->mcdi->fn_flags &
+                       (1 << MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_NO_ACTIVE_PORT)) {
+               pci_info(efx->pci_dev, "No network port on this PCI function");
+               return 0;
+       }
+
+       /* Allocate and initialise a struct net_device */
+       net_dev = alloc_etherdev_mq(sizeof(probe_data), EFX_MAX_CORE_TX_QUEUES);
+       if (!net_dev)
+               return -ENOMEM;
+       probe_ptr = netdev_priv(net_dev);
+       *probe_ptr = probe_data;
+       efx->net_dev = net_dev;
+       SET_NETDEV_DEV(net_dev, &efx->pci_dev->dev);
+
+       net_dev->features |= efx->type->offload_features;
+       net_dev->hw_features |= efx->type->offload_features;
+       net_dev->hw_enc_features |= efx->type->offload_features;
+       net_dev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_SG |
+                                 NETIF_F_HIGHDMA | NETIF_F_ALL_TSO;
+       netif_set_tso_max_segs(net_dev,
+                              ESE_EF100_DP_GZ_TSO_MAX_HDR_NUM_SEGS_DEFAULT);
+       efx->mdio.dev = net_dev;
+
+       rc = efx_ef100_init_datapath_caps(efx);
+       if (rc < 0)
+               goto fail;
+
+       rc = ef100_phy_probe(efx);
+       if (rc)
+               goto fail;
+
+       rc = efx_init_channels(efx);
+       if (rc)
+               goto fail;
+
+       down_write(&efx->filter_sem);
+       rc = ef100_filter_table_probe(efx);
+       up_write(&efx->filter_sem);
+       if (rc)
+               goto fail;
+
+       netdev_rss_key_fill(efx->rss_context.rx_hash_key,
+                           sizeof(efx->rss_context.rx_hash_key));
+
+       /* Don't fail init if RSS setup doesn't work. */
+       efx_mcdi_push_default_indir_table(efx, efx->n_rx_channels);
+
+       rc = ef100_register_netdev(efx);
+       if (rc)
+               goto fail;
+
+       if (!efx->type->is_vf) {
+               rc = ef100_probe_netdev_pf(efx);
+               if (rc)
+                       goto fail;
+       }
+
+       efx->netdev_notifier.notifier_call = ef100_netdev_event;
+       rc = register_netdevice_notifier(&efx->netdev_notifier);
+       if (rc) {
+               netif_err(efx, probe, efx->net_dev,
+                         "Failed to register netdevice notifier, rc=%d\n", rc);
+               goto fail;
+       }
+
+fail:
+       return rc;
+}
index d40abb7..38b032b 100644
@@ -13,5 +13,5 @@
 
 int ef100_netdev_event(struct notifier_block *this,
                       unsigned long event, void *ptr);
-int ef100_register_netdev(struct efx_nic *efx);
-void ef100_unregister_netdev(struct efx_nic *efx);
+int ef100_probe_netdev(struct efx_probe_data *probe_data);
+void ef100_remove_netdev(struct efx_probe_data *probe_data);
index b2536d2..f89e695 100644
@@ -148,7 +148,7 @@ static int ef100_get_mac_address(struct efx_nic *efx, u8 *mac_address)
        return 0;
 }
 
-static int efx_ef100_init_datapath_caps(struct efx_nic *efx)
+int efx_ef100_init_datapath_caps(struct efx_nic *efx)
 {
        MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_CAPABILITIES_V7_OUT_LEN);
        struct ef100_nic_data *nic_data = efx->nic_data;
@@ -327,7 +327,7 @@ static irqreturn_t ef100_msi_interrupt(int irq, void *dev_id)
        return IRQ_HANDLED;
 }
 
-static int ef100_phy_probe(struct efx_nic *efx)
+int ef100_phy_probe(struct efx_nic *efx)
 {
        struct efx_mcdi_phy_data *phy_data;
        int rc;
@@ -365,7 +365,7 @@ static int ef100_phy_probe(struct efx_nic *efx)
        return 0;
 }
 
-static int ef100_filter_table_probe(struct efx_nic *efx)
+int ef100_filter_table_probe(struct efx_nic *efx)
 {
        return efx_mcdi_filter_table_probe(efx, true);
 }
@@ -704,178 +704,6 @@ static unsigned int efx_ef100_recycle_ring_size(const struct efx_nic *efx)
        return 10 * EFX_RECYCLE_RING_SIZE_10G;
 }
 
-/*     NIC level access functions
- */
-#define EF100_OFFLOAD_FEATURES (NETIF_F_HW_CSUM | NETIF_F_RXCSUM |     \
-       NETIF_F_HIGHDMA | NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_NTUPLE | \
-       NETIF_F_RXHASH | NETIF_F_RXFCS | NETIF_F_TSO_ECN | NETIF_F_RXALL | \
-       NETIF_F_HW_VLAN_CTAG_TX)
-
-const struct efx_nic_type ef100_pf_nic_type = {
-       .revision = EFX_REV_EF100,
-       .is_vf = false,
-       .probe = ef100_probe_pf,
-       .offload_features = EF100_OFFLOAD_FEATURES,
-       .mcdi_max_ver = 2,
-       .mcdi_request = ef100_mcdi_request,
-       .mcdi_poll_response = ef100_mcdi_poll_response,
-       .mcdi_read_response = ef100_mcdi_read_response,
-       .mcdi_poll_reboot = ef100_mcdi_poll_reboot,
-       .mcdi_reboot_detected = ef100_mcdi_reboot_detected,
-       .irq_enable_master = efx_port_dummy_op_void,
-       .irq_test_generate = efx_ef100_irq_test_generate,
-       .irq_disable_non_ev = efx_port_dummy_op_void,
-       .push_irq_moderation = efx_channel_dummy_op_void,
-       .min_interrupt_mode = EFX_INT_MODE_MSIX,
-       .map_reset_reason = ef100_map_reset_reason,
-       .map_reset_flags = ef100_map_reset_flags,
-       .reset = ef100_reset,
-
-       .check_caps = ef100_check_caps,
-
-       .ev_probe = ef100_ev_probe,
-       .ev_init = ef100_ev_init,
-       .ev_fini = efx_mcdi_ev_fini,
-       .ev_remove = efx_mcdi_ev_remove,
-       .irq_handle_msi = ef100_msi_interrupt,
-       .ev_process = ef100_ev_process,
-       .ev_read_ack = ef100_ev_read_ack,
-       .ev_test_generate = efx_ef100_ev_test_generate,
-       .tx_probe = ef100_tx_probe,
-       .tx_init = ef100_tx_init,
-       .tx_write = ef100_tx_write,
-       .tx_enqueue = ef100_enqueue_skb,
-       .rx_probe = efx_mcdi_rx_probe,
-       .rx_init = efx_mcdi_rx_init,
-       .rx_remove = efx_mcdi_rx_remove,
-       .rx_write = ef100_rx_write,
-       .rx_packet = __ef100_rx_packet,
-       .rx_buf_hash_valid = ef100_rx_buf_hash_valid,
-       .fini_dmaq = efx_fini_dmaq,
-       .max_rx_ip_filters = EFX_MCDI_FILTER_TBL_ROWS,
-       .filter_table_probe = ef100_filter_table_up,
-       .filter_table_restore = efx_mcdi_filter_table_restore,
-       .filter_table_remove = ef100_filter_table_down,
-       .filter_insert = efx_mcdi_filter_insert,
-       .filter_remove_safe = efx_mcdi_filter_remove_safe,
-       .filter_get_safe = efx_mcdi_filter_get_safe,
-       .filter_clear_rx = efx_mcdi_filter_clear_rx,
-       .filter_count_rx_used = efx_mcdi_filter_count_rx_used,
-       .filter_get_rx_id_limit = efx_mcdi_filter_get_rx_id_limit,
-       .filter_get_rx_ids = efx_mcdi_filter_get_rx_ids,
-#ifdef CONFIG_RFS_ACCEL
-       .filter_rfs_expire_one = efx_mcdi_filter_rfs_expire_one,
-#endif
-
-       .get_phys_port_id = efx_ef100_get_phys_port_id,
-
-       .rx_prefix_size = ESE_GZ_RX_PKT_PREFIX_LEN,
-       .rx_hash_offset = ESF_GZ_RX_PREFIX_RSS_HASH_LBN / 8,
-       .rx_ts_offset = ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_LBN / 8,
-       .rx_hash_key_size = 40,
-       .rx_pull_rss_config = efx_mcdi_rx_pull_rss_config,
-       .rx_push_rss_config = efx_mcdi_pf_rx_push_rss_config,
-       .rx_push_rss_context_config = efx_mcdi_rx_push_rss_context_config,
-       .rx_pull_rss_context_config = efx_mcdi_rx_pull_rss_context_config,
-       .rx_restore_rss_contexts = efx_mcdi_rx_restore_rss_contexts,
-       .rx_recycle_ring_size = efx_ef100_recycle_ring_size,
-
-       .reconfigure_mac = ef100_reconfigure_mac,
-       .reconfigure_port = efx_mcdi_port_reconfigure,
-       .test_nvram = efx_new_mcdi_nvram_test_all,
-       .describe_stats = ef100_describe_stats,
-       .start_stats = efx_mcdi_mac_start_stats,
-       .update_stats = ef100_update_stats,
-       .pull_stats = efx_mcdi_mac_pull_stats,
-       .stop_stats = efx_mcdi_mac_stop_stats,
-#ifdef CONFIG_SFC_SRIOV
-       .sriov_configure = efx_ef100_sriov_configure,
-#endif
-
-       /* Per-type bar/size configuration not used on ef100. Location of
-        * registers is defined by extended capabilities.
-        */
-       .mem_bar = NULL,
-       .mem_map_size = NULL,
-
-};
-
-const struct efx_nic_type ef100_vf_nic_type = {
-       .revision = EFX_REV_EF100,
-       .is_vf = true,
-       .probe = ef100_probe_vf,
-       .offload_features = EF100_OFFLOAD_FEATURES,
-       .mcdi_max_ver = 2,
-       .mcdi_request = ef100_mcdi_request,
-       .mcdi_poll_response = ef100_mcdi_poll_response,
-       .mcdi_read_response = ef100_mcdi_read_response,
-       .mcdi_poll_reboot = ef100_mcdi_poll_reboot,
-       .mcdi_reboot_detected = ef100_mcdi_reboot_detected,
-       .irq_enable_master = efx_port_dummy_op_void,
-       .irq_test_generate = efx_ef100_irq_test_generate,
-       .irq_disable_non_ev = efx_port_dummy_op_void,
-       .push_irq_moderation = efx_channel_dummy_op_void,
-       .min_interrupt_mode = EFX_INT_MODE_MSIX,
-       .map_reset_reason = ef100_map_reset_reason,
-       .map_reset_flags = ef100_map_reset_flags,
-       .reset = ef100_reset,
-       .check_caps = ef100_check_caps,
-       .ev_probe = ef100_ev_probe,
-       .ev_init = ef100_ev_init,
-       .ev_fini = efx_mcdi_ev_fini,
-       .ev_remove = efx_mcdi_ev_remove,
-       .irq_handle_msi = ef100_msi_interrupt,
-       .ev_process = ef100_ev_process,
-       .ev_read_ack = ef100_ev_read_ack,
-       .ev_test_generate = efx_ef100_ev_test_generate,
-       .tx_probe = ef100_tx_probe,
-       .tx_init = ef100_tx_init,
-       .tx_write = ef100_tx_write,
-       .tx_enqueue = ef100_enqueue_skb,
-       .rx_probe = efx_mcdi_rx_probe,
-       .rx_init = efx_mcdi_rx_init,
-       .rx_remove = efx_mcdi_rx_remove,
-       .rx_write = ef100_rx_write,
-       .rx_packet = __ef100_rx_packet,
-       .rx_buf_hash_valid = ef100_rx_buf_hash_valid,
-       .fini_dmaq = efx_fini_dmaq,
-       .max_rx_ip_filters = EFX_MCDI_FILTER_TBL_ROWS,
-       .filter_table_probe = ef100_filter_table_up,
-       .filter_table_restore = efx_mcdi_filter_table_restore,
-       .filter_table_remove = ef100_filter_table_down,
-       .filter_insert = efx_mcdi_filter_insert,
-       .filter_remove_safe = efx_mcdi_filter_remove_safe,
-       .filter_get_safe = efx_mcdi_filter_get_safe,
-       .filter_clear_rx = efx_mcdi_filter_clear_rx,
-       .filter_count_rx_used = efx_mcdi_filter_count_rx_used,
-       .filter_get_rx_id_limit = efx_mcdi_filter_get_rx_id_limit,
-       .filter_get_rx_ids = efx_mcdi_filter_get_rx_ids,
-#ifdef CONFIG_RFS_ACCEL
-       .filter_rfs_expire_one = efx_mcdi_filter_rfs_expire_one,
-#endif
-
-       .rx_prefix_size = ESE_GZ_RX_PKT_PREFIX_LEN,
-       .rx_hash_offset = ESF_GZ_RX_PREFIX_RSS_HASH_LBN / 8,
-       .rx_ts_offset = ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_LBN / 8,
-       .rx_hash_key_size = 40,
-       .rx_pull_rss_config = efx_mcdi_rx_pull_rss_config,
-       .rx_push_rss_config = efx_mcdi_pf_rx_push_rss_config,
-       .rx_restore_rss_contexts = efx_mcdi_rx_restore_rss_contexts,
-       .rx_recycle_ring_size = efx_ef100_recycle_ring_size,
-
-       .reconfigure_mac = ef100_reconfigure_mac,
-       .test_nvram = efx_new_mcdi_nvram_test_all,
-       .describe_stats = ef100_describe_stats,
-       .start_stats = efx_mcdi_mac_start_stats,
-       .update_stats = ef100_update_stats,
-       .pull_stats = efx_mcdi_mac_pull_stats,
-       .stop_stats = efx_mcdi_mac_stop_stats,
-
-       .mem_bar = NULL,
-       .mem_map_size = NULL,
-
-};
-
 static int compare_versions(const char *a, const char *b)
 {
        int a_major, a_minor, a_point, a_patch;
@@ -1077,8 +905,7 @@ static int ef100_check_design_params(struct efx_nic *efx)
 
        efx_readd(efx, &reg, ER_GZ_PARAMS_TLV_LEN);
        total_len = EFX_DWORD_FIELD(reg, EFX_DWORD_0);
-       netif_dbg(efx, probe, efx->net_dev, "%u bytes of design parameters\n",
-                 total_len);
+       pci_dbg(efx->pci_dev, "%u bytes of design parameters\n", total_len);
        while (offset < total_len) {
                efx_readd(efx, &reg, ER_GZ_PARAMS_TLV + offset);
                data = EFX_DWORD_FIELD(reg, EFX_DWORD_0);
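A recurring pattern in these hunks: probe-time messages move from the netif_* helpers, which take a live net_device, to the pci_* helpers keyed on the PCI device, because after this series the net_device may not be registered yet when early probe paths run. A minimal sketch of the switch, assuming only that efx->pci_dev is valid at this point:

        netif_dbg(efx, probe, efx->net_dev, "probe step\n"); /* old: needs net_dev */
        pci_dbg(efx->pci_dev, "probe step\n");                /* new: PCI device only */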
@@ -1117,7 +944,6 @@ out:
 static int ef100_probe_main(struct efx_nic *efx)
 {
        unsigned int bar_size = resource_size(&efx->pci_dev->resource[efx->mem_bar]);
-       struct net_device *net_dev = efx->net_dev;
        struct ef100_nic_data *nic_data;
        char fw_version[32];
        int i, rc;
@@ -1130,24 +956,18 @@ static int ef100_probe_main(struct efx_nic *efx)
                return -ENOMEM;
        efx->nic_data = nic_data;
        nic_data->efx = efx;
-       net_dev->features |= efx->type->offload_features;
-       net_dev->hw_features |= efx->type->offload_features;
-       net_dev->hw_enc_features |= efx->type->offload_features;
-       net_dev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_SG |
-                                 NETIF_F_HIGHDMA | NETIF_F_ALL_TSO;
+       efx->max_vis = EF100_MAX_VIS;
 
        /* Populate design-parameter defaults */
        nic_data->tso_max_hdr_len = ESE_EF100_DP_GZ_TSO_MAX_HDR_LEN_DEFAULT;
        nic_data->tso_max_frames = ESE_EF100_DP_GZ_TSO_MAX_NUM_FRAMES_DEFAULT;
        nic_data->tso_max_payload_num_segs = ESE_EF100_DP_GZ_TSO_MAX_PAYLOAD_NUM_SEGS_DEFAULT;
        nic_data->tso_max_payload_len = ESE_EF100_DP_GZ_TSO_MAX_PAYLOAD_LEN_DEFAULT;
-       netif_set_tso_max_segs(net_dev,
-                              ESE_EF100_DP_GZ_TSO_MAX_HDR_NUM_SEGS_DEFAULT);
+
        /* Read design parameters */
        rc = ef100_check_design_params(efx);
        if (rc) {
-               netif_err(efx, probe, efx->net_dev,
-                         "Unsupported design parameters\n");
+               pci_err(efx->pci_dev, "Unsupported design parameters\n");
                goto fail;
        }
 
@@ -1184,12 +1004,6 @@ static int ef100_probe_main(struct efx_nic *efx)
        /* Post-IO section. */
 
        rc = efx_mcdi_init(efx);
-       if (!rc && efx->mcdi->fn_flags &
-                  (1 << MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_NO_ACTIVE_PORT)) {
-               netif_info(efx, probe, efx->net_dev,
-                          "No network port on this PCI function");
-               rc = -ENODEV;
-       }
        if (rc)
                goto fail;
        /* Reset (most) configuration for this function */
@@ -1205,67 +1019,37 @@ static int ef100_probe_main(struct efx_nic *efx)
        if (rc)
                goto fail;
 
-       rc = efx_ef100_init_datapath_caps(efx);
-       if (rc < 0)
-               goto fail;
-
-       efx->max_vis = EF100_MAX_VIS;
-
        rc = efx_mcdi_port_get_number(efx);
        if (rc < 0)
                goto fail;
        efx->port_num = rc;
 
        efx_mcdi_print_fwver(efx, fw_version, sizeof(fw_version));
-       netif_dbg(efx, drv, efx->net_dev, "Firmware version %s\n", fw_version);
+       pci_dbg(efx->pci_dev, "Firmware version %s\n", fw_version);
 
        if (compare_versions(fw_version, "1.1.0.1000") < 0) {
-               netif_info(efx, drv, efx->net_dev, "Firmware uses old event descriptors\n");
+               pci_info(efx->pci_dev, "Firmware uses old event descriptors\n");
                rc = -EINVAL;
                goto fail;
        }
 
        if (efx_has_cap(efx, UNSOL_EV_CREDIT_SUPPORTED)) {
-               netif_info(efx, drv, efx->net_dev, "Firmware uses unsolicited-event credits\n");
+               pci_info(efx->pci_dev, "Firmware uses unsolicited-event credits\n");
                rc = -EINVAL;
                goto fail;
        }
 
-       rc = ef100_phy_probe(efx);
-       if (rc)
-               goto fail;
-
-       down_write(&efx->filter_sem);
-       rc = ef100_filter_table_probe(efx);
-       up_write(&efx->filter_sem);
-       if (rc)
-               goto fail;
-
-       netdev_rss_key_fill(efx->rss_context.rx_hash_key,
-                           sizeof(efx->rss_context.rx_hash_key));
-
-       /* Don't fail init if RSS setup doesn't work. */
-       efx_mcdi_push_default_indir_table(efx, efx->n_rx_channels);
-
-       rc = ef100_register_netdev(efx);
-       if (rc)
-               goto fail;
-
        return 0;
 fail:
        return rc;
 }
 
-int ef100_probe_pf(struct efx_nic *efx)
+int ef100_probe_netdev_pf(struct efx_nic *efx)
 {
+       struct ef100_nic_data *nic_data = efx->nic_data;
        struct net_device *net_dev = efx->net_dev;
-       struct ef100_nic_data *nic_data;
-       int rc = ef100_probe_main(efx);
-
-       if (rc)
-               goto fail;
+       int rc;
 
-       nic_data = efx->nic_data;
        rc = ef100_get_mac_address(efx, net_dev->perm_addr);
        if (rc)
                goto fail;
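The rename reflects a split of probing into a hardware stage and a netdev stage; the callers sit outside this diff, so the ordering below is only an assumption, sketched for orientation:

        rc = ef100_probe_main(efx);              /* bare hardware/MCDI bring-up */
        if (!rc)
                rc = ef100_probe_netdev_pf(efx); /* netdev-facing setup, e.g. MAC address */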
@@ -1288,14 +1072,6 @@ void ef100_remove(struct efx_nic *efx)
 {
        struct ef100_nic_data *nic_data = efx->nic_data;
 
-       ef100_unregister_netdev(efx);
-
-       down_write(&efx->filter_sem);
-       efx_mcdi_filter_table_remove(efx);
-       up_write(&efx->filter_sem);
-       efx_fini_channels(efx);
-       kfree(efx->phy_data);
-       efx->phy_data = NULL;
        efx_mcdi_detach(efx);
        efx_mcdi_fini(efx);
        if (nic_data)
@@ -1303,3 +1079,175 @@ void ef100_remove(struct efx_nic *efx)
        kfree(nic_data);
        efx->nic_data = NULL;
 }
+
+/*     NIC level access functions
+ */
+#define EF100_OFFLOAD_FEATURES (NETIF_F_HW_CSUM | NETIF_F_RXCSUM |     \
+       NETIF_F_HIGHDMA | NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_NTUPLE | \
+       NETIF_F_RXHASH | NETIF_F_RXFCS | NETIF_F_TSO_ECN | NETIF_F_RXALL | \
+       NETIF_F_HW_VLAN_CTAG_TX)
+
+const struct efx_nic_type ef100_pf_nic_type = {
+       .revision = EFX_REV_EF100,
+       .is_vf = false,
+       .probe = ef100_probe_main,
+       .offload_features = EF100_OFFLOAD_FEATURES,
+       .mcdi_max_ver = 2,
+       .mcdi_request = ef100_mcdi_request,
+       .mcdi_poll_response = ef100_mcdi_poll_response,
+       .mcdi_read_response = ef100_mcdi_read_response,
+       .mcdi_poll_reboot = ef100_mcdi_poll_reboot,
+       .mcdi_reboot_detected = ef100_mcdi_reboot_detected,
+       .irq_enable_master = efx_port_dummy_op_void,
+       .irq_test_generate = efx_ef100_irq_test_generate,
+       .irq_disable_non_ev = efx_port_dummy_op_void,
+       .push_irq_moderation = efx_channel_dummy_op_void,
+       .min_interrupt_mode = EFX_INT_MODE_MSIX,
+       .map_reset_reason = ef100_map_reset_reason,
+       .map_reset_flags = ef100_map_reset_flags,
+       .reset = ef100_reset,
+
+       .check_caps = ef100_check_caps,
+
+       .ev_probe = ef100_ev_probe,
+       .ev_init = ef100_ev_init,
+       .ev_fini = efx_mcdi_ev_fini,
+       .ev_remove = efx_mcdi_ev_remove,
+       .irq_handle_msi = ef100_msi_interrupt,
+       .ev_process = ef100_ev_process,
+       .ev_read_ack = ef100_ev_read_ack,
+       .ev_test_generate = efx_ef100_ev_test_generate,
+       .tx_probe = ef100_tx_probe,
+       .tx_init = ef100_tx_init,
+       .tx_write = ef100_tx_write,
+       .tx_enqueue = ef100_enqueue_skb,
+       .rx_probe = efx_mcdi_rx_probe,
+       .rx_init = efx_mcdi_rx_init,
+       .rx_remove = efx_mcdi_rx_remove,
+       .rx_write = ef100_rx_write,
+       .rx_packet = __ef100_rx_packet,
+       .rx_buf_hash_valid = ef100_rx_buf_hash_valid,
+       .fini_dmaq = efx_fini_dmaq,
+       .max_rx_ip_filters = EFX_MCDI_FILTER_TBL_ROWS,
+       .filter_table_probe = ef100_filter_table_up,
+       .filter_table_restore = efx_mcdi_filter_table_restore,
+       .filter_table_remove = ef100_filter_table_down,
+       .filter_insert = efx_mcdi_filter_insert,
+       .filter_remove_safe = efx_mcdi_filter_remove_safe,
+       .filter_get_safe = efx_mcdi_filter_get_safe,
+       .filter_clear_rx = efx_mcdi_filter_clear_rx,
+       .filter_count_rx_used = efx_mcdi_filter_count_rx_used,
+       .filter_get_rx_id_limit = efx_mcdi_filter_get_rx_id_limit,
+       .filter_get_rx_ids = efx_mcdi_filter_get_rx_ids,
+#ifdef CONFIG_RFS_ACCEL
+       .filter_rfs_expire_one = efx_mcdi_filter_rfs_expire_one,
+#endif
+
+       .get_phys_port_id = efx_ef100_get_phys_port_id,
+
+       .rx_prefix_size = ESE_GZ_RX_PKT_PREFIX_LEN,
+       .rx_hash_offset = ESF_GZ_RX_PREFIX_RSS_HASH_LBN / 8,
+       .rx_ts_offset = ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_LBN / 8,
+       .rx_hash_key_size = 40,
+       .rx_pull_rss_config = efx_mcdi_rx_pull_rss_config,
+       .rx_push_rss_config = efx_mcdi_pf_rx_push_rss_config,
+       .rx_push_rss_context_config = efx_mcdi_rx_push_rss_context_config,
+       .rx_pull_rss_context_config = efx_mcdi_rx_pull_rss_context_config,
+       .rx_restore_rss_contexts = efx_mcdi_rx_restore_rss_contexts,
+       .rx_recycle_ring_size = efx_ef100_recycle_ring_size,
+
+       .reconfigure_mac = ef100_reconfigure_mac,
+       .reconfigure_port = efx_mcdi_port_reconfigure,
+       .test_nvram = efx_new_mcdi_nvram_test_all,
+       .describe_stats = ef100_describe_stats,
+       .start_stats = efx_mcdi_mac_start_stats,
+       .update_stats = ef100_update_stats,
+       .pull_stats = efx_mcdi_mac_pull_stats,
+       .stop_stats = efx_mcdi_mac_stop_stats,
+#ifdef CONFIG_SFC_SRIOV
+       .sriov_configure = efx_ef100_sriov_configure,
+#endif
+
+       /* Per-type bar/size configuration not used on ef100. Location of
+        * registers is defined by extended capabilities.
+        */
+       .mem_bar = NULL,
+       .mem_map_size = NULL,
+
+};
+
+const struct efx_nic_type ef100_vf_nic_type = {
+       .revision = EFX_REV_EF100,
+       .is_vf = true,
+       .probe = ef100_probe_vf,
+       .offload_features = EF100_OFFLOAD_FEATURES,
+       .mcdi_max_ver = 2,
+       .mcdi_request = ef100_mcdi_request,
+       .mcdi_poll_response = ef100_mcdi_poll_response,
+       .mcdi_read_response = ef100_mcdi_read_response,
+       .mcdi_poll_reboot = ef100_mcdi_poll_reboot,
+       .mcdi_reboot_detected = ef100_mcdi_reboot_detected,
+       .irq_enable_master = efx_port_dummy_op_void,
+       .irq_test_generate = efx_ef100_irq_test_generate,
+       .irq_disable_non_ev = efx_port_dummy_op_void,
+       .push_irq_moderation = efx_channel_dummy_op_void,
+       .min_interrupt_mode = EFX_INT_MODE_MSIX,
+       .map_reset_reason = ef100_map_reset_reason,
+       .map_reset_flags = ef100_map_reset_flags,
+       .reset = ef100_reset,
+       .check_caps = ef100_check_caps,
+       .ev_probe = ef100_ev_probe,
+       .ev_init = ef100_ev_init,
+       .ev_fini = efx_mcdi_ev_fini,
+       .ev_remove = efx_mcdi_ev_remove,
+       .irq_handle_msi = ef100_msi_interrupt,
+       .ev_process = ef100_ev_process,
+       .ev_read_ack = ef100_ev_read_ack,
+       .ev_test_generate = efx_ef100_ev_test_generate,
+       .tx_probe = ef100_tx_probe,
+       .tx_init = ef100_tx_init,
+       .tx_write = ef100_tx_write,
+       .tx_enqueue = ef100_enqueue_skb,
+       .rx_probe = efx_mcdi_rx_probe,
+       .rx_init = efx_mcdi_rx_init,
+       .rx_remove = efx_mcdi_rx_remove,
+       .rx_write = ef100_rx_write,
+       .rx_packet = __ef100_rx_packet,
+       .rx_buf_hash_valid = ef100_rx_buf_hash_valid,
+       .fini_dmaq = efx_fini_dmaq,
+       .max_rx_ip_filters = EFX_MCDI_FILTER_TBL_ROWS,
+       .filter_table_probe = ef100_filter_table_up,
+       .filter_table_restore = efx_mcdi_filter_table_restore,
+       .filter_table_remove = ef100_filter_table_down,
+       .filter_insert = efx_mcdi_filter_insert,
+       .filter_remove_safe = efx_mcdi_filter_remove_safe,
+       .filter_get_safe = efx_mcdi_filter_get_safe,
+       .filter_clear_rx = efx_mcdi_filter_clear_rx,
+       .filter_count_rx_used = efx_mcdi_filter_count_rx_used,
+       .filter_get_rx_id_limit = efx_mcdi_filter_get_rx_id_limit,
+       .filter_get_rx_ids = efx_mcdi_filter_get_rx_ids,
+#ifdef CONFIG_RFS_ACCEL
+       .filter_rfs_expire_one = efx_mcdi_filter_rfs_expire_one,
+#endif
+
+       .rx_prefix_size = ESE_GZ_RX_PKT_PREFIX_LEN,
+       .rx_hash_offset = ESF_GZ_RX_PREFIX_RSS_HASH_LBN / 8,
+       .rx_ts_offset = ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_LBN / 8,
+       .rx_hash_key_size = 40,
+       .rx_pull_rss_config = efx_mcdi_rx_pull_rss_config,
+       .rx_push_rss_config = efx_mcdi_pf_rx_push_rss_config,
+       .rx_restore_rss_contexts = efx_mcdi_rx_restore_rss_contexts,
+       .rx_recycle_ring_size = efx_ef100_recycle_ring_size,
+
+       .reconfigure_mac = ef100_reconfigure_mac,
+       .test_nvram = efx_new_mcdi_nvram_test_all,
+       .describe_stats = ef100_describe_stats,
+       .start_stats = efx_mcdi_mac_start_stats,
+       .update_stats = ef100_update_stats,
+       .pull_stats = efx_mcdi_mac_pull_stats,
+       .stop_stats = efx_mcdi_mac_stop_stats,
+
+       .mem_bar = NULL,
+       .mem_map_size = NULL,
+
+};
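Both structures above are instances of struct efx_nic_type, the per-generation ops table through which the generic sfc core dispatches. A hedged sketch of that dispatch; the wrapper name efx_example_reset is hypothetical, while the ->reset hook comes from the tables above:

        static int efx_example_reset(struct efx_nic *efx, enum reset_type method)
        {
                return efx->type->reset(efx, method); /* resolves to ef100_reset() here */
        }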
index e799688..744dbbd 100644
@@ -8,6 +8,8 @@
  * under the terms of the GNU General Public License version 2 as published
  * by the Free Software Foundation, incorporated herein by reference.
  */
+#ifndef EFX_EF100_NIC_H
+#define EFX_EF100_NIC_H
 
 #include "net_driver.h"
 #include "nic_common.h"
@@ -15,7 +17,7 @@
 extern const struct efx_nic_type ef100_pf_nic_type;
 extern const struct efx_nic_type ef100_vf_nic_type;
 
-int ef100_probe_pf(struct efx_nic *efx);
+int ef100_probe_netdev_pf(struct efx_nic *efx);
 int ef100_probe_vf(struct efx_nic *efx);
 void ef100_remove(struct efx_nic *efx);
 
@@ -78,3 +80,9 @@ struct ef100_nic_data {
 
 #define efx_ef100_has_cap(caps, flag) \
        (!!((caps) & BIT_ULL(MC_CMD_GET_CAPABILITIES_V4_OUT_ ## flag ## _LBN)))
+
+int efx_ef100_init_datapath_caps(struct efx_nic *efx);
+int ef100_phy_probe(struct efx_nic *efx);
+int ef100_filter_table_probe(struct efx_nic *efx);
+
+#endif /* EFX_EF100_NIC_H */
index 5a77235..153d68e 100644
@@ -106,14 +106,6 @@ static int efx_xdp(struct net_device *dev, struct netdev_bpf *xdp);
 static int efx_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **xdpfs,
                        u32 flags);
 
-#define EFX_ASSERT_RESET_SERIALISED(efx)               \
-       do {                                            \
-               if ((efx->state == STATE_READY) ||      \
-                   (efx->state == STATE_RECOVERY) ||   \
-                   (efx->state == STATE_DISABLED))     \
-                       ASSERT_RTNL();                  \
-       } while (0)
-
 /**************************************************************************
  *
  * Port handling
@@ -378,6 +370,8 @@ static int efx_probe_all(struct efx_nic *efx)
        if (rc)
                goto fail5;
 
+       efx->state = STATE_NET_DOWN;
+
        return 0;
 
  fail5:
@@ -498,7 +492,7 @@ void efx_get_irq_moderation(struct efx_nic *efx, unsigned int *tx_usecs,
  */
 static int efx_ioctl(struct net_device *net_dev, struct ifreq *ifr, int cmd)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct mii_ioctl_data *data = if_mii(ifr);
 
        if (cmd == SIOCSHWTSTAMP)
@@ -523,7 +517,7 @@ static int efx_ioctl(struct net_device *net_dev, struct ifreq *ifr, int cmd)
 /* Context: process, rtnl_lock() held. */
 int efx_net_open(struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int rc;
 
        netif_dbg(efx, ifup, efx->net_dev, "opening device on CPU %d\n",
@@ -544,6 +538,9 @@ int efx_net_open(struct net_device *net_dev)
        efx_start_all(efx);
        if (efx->state == STATE_DISABLED || efx->reset_pending)
                netif_device_detach(efx->net_dev);
+       else
+               efx->state = STATE_NET_UP;
+
        efx_selftest_async_start(efx);
        return 0;
 }
@@ -554,7 +551,7 @@ int efx_net_open(struct net_device *net_dev)
  */
 int efx_net_stop(struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        netif_dbg(efx, ifdown, efx->net_dev, "closing on CPU %d\n",
                  raw_smp_processor_id());
@@ -567,7 +564,7 @@ int efx_net_stop(struct net_device *net_dev)
 
 static int efx_vlan_rx_add_vid(struct net_device *net_dev, __be16 proto, u16 vid)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->vlan_rx_add_vid)
                return efx->type->vlan_rx_add_vid(efx, proto, vid);
@@ -577,7 +574,7 @@ static int efx_vlan_rx_add_vid(struct net_device *net_dev, __be16 proto, u16 vid
 
 static int efx_vlan_rx_kill_vid(struct net_device *net_dev, __be16 proto, u16 vid)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->vlan_rx_kill_vid)
                return efx->type->vlan_rx_kill_vid(efx, proto, vid);
@@ -646,7 +643,7 @@ static int efx_xdp_setup_prog(struct efx_nic *efx, struct bpf_prog *prog)
 /* Context: process, rtnl_lock() held. */
 static int efx_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
-       struct efx_nic *efx = netdev_priv(dev);
+       struct efx_nic *efx = efx_netdev_priv(dev);
 
        switch (xdp->command) {
        case XDP_SETUP_PROG:
@@ -659,7 +656,7 @@ static int efx_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 static int efx_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **xdpfs,
                        u32 flags)
 {
-       struct efx_nic *efx = netdev_priv(dev);
+       struct efx_nic *efx = efx_netdev_priv(dev);
 
        if (!netif_running(dev))
                return -EINVAL;
@@ -681,7 +678,7 @@ static int efx_netdev_event(struct notifier_block *this,
 
        if ((net_dev->netdev_ops == &efx_netdev_ops) &&
            event == NETDEV_CHANGENAME)
-               efx_update_name(netdev_priv(net_dev));
+               efx_update_name(efx_netdev_priv(net_dev));
 
        return NOTIFY_DONE;
 }
@@ -720,8 +717,6 @@ static int efx_register_netdev(struct efx_nic *efx)
         * already requested.  If so, the NIC is probably hosed so we
         * abort.
         */
-       efx->state = STATE_READY;
-       smp_mb(); /* ensure we change state before checking reset_pending */
        if (efx->reset_pending) {
                pci_err(efx->pci_dev, "aborting probe due to scheduled reset\n");
                rc = -EIO;
@@ -748,6 +743,8 @@ static int efx_register_netdev(struct efx_nic *efx)
 
        efx_associate(efx);
 
+       efx->state = STATE_NET_DOWN;
+
        rtnl_unlock();
 
        rc = device_create_file(&efx->pci_dev->dev, &dev_attr_phy_type);
@@ -777,7 +774,8 @@ static void efx_unregister_netdev(struct efx_nic *efx)
        if (!efx->net_dev)
                return;
 
-       BUG_ON(netdev_priv(efx->net_dev) != efx);
+       if (WARN_ON(efx_netdev_priv(efx->net_dev) != efx))
+               return;
 
        if (efx_dev_registered(efx)) {
                strlcpy(efx->name, pci_name(efx->pci_dev), sizeof(efx->name));
@@ -845,7 +843,7 @@ static void efx_pci_remove_main(struct efx_nic *efx)
        /* Flush reset_work. It can no longer be scheduled since we
         * are not READY.
         */
-       BUG_ON(efx->state == STATE_READY);
+       WARN_ON(efx_net_active(efx->state));
        efx_flush_reset_workqueue(efx);
 
        efx_disable_interrupts(efx);
@@ -863,6 +861,7 @@ static void efx_pci_remove_main(struct efx_nic *efx)
  */
 static void efx_pci_remove(struct pci_dev *pci_dev)
 {
+       struct efx_probe_data *probe_data;
        struct efx_nic *efx;
 
        efx = pci_get_drvdata(pci_dev);
@@ -887,10 +886,12 @@ static void efx_pci_remove(struct pci_dev *pci_dev)
        efx_pci_remove_main(efx);
 
        efx_fini_io(efx);
-       netif_dbg(efx, drv, efx->net_dev, "shutdown successful\n");
+       pci_dbg(efx->pci_dev, "shutdown successful\n");
 
        efx_fini_struct(efx);
        free_netdev(efx->net_dev);
+       probe_data = container_of(efx, struct efx_probe_data, efx);
+       kfree(probe_data);
 
        pci_disable_pcie_error_reporting(pci_dev);
 };
@@ -1044,24 +1045,34 @@ static int efx_pci_probe_post_io(struct efx_nic *efx)
 static int efx_pci_probe(struct pci_dev *pci_dev,
                         const struct pci_device_id *entry)
 {
+       struct efx_probe_data *probe_data, **probe_ptr;
        struct net_device *net_dev;
        struct efx_nic *efx;
        int rc;
 
-       /* Allocate and initialise a struct net_device and struct efx_nic */
-       net_dev = alloc_etherdev_mqs(sizeof(*efx), EFX_MAX_CORE_TX_QUEUES,
-                                    EFX_MAX_RX_QUEUES);
+       /* Allocate probe data and struct efx_nic */
+       probe_data = kzalloc(sizeof(*probe_data), GFP_KERNEL);
+       if (!probe_data)
+               return -ENOMEM;
+       probe_data->pci_dev = pci_dev;
+       efx = &probe_data->efx;
+
+       /* Allocate and initialise a struct net_device */
+       net_dev = alloc_etherdev_mq(sizeof(probe_data), EFX_MAX_CORE_TX_QUEUES);
        if (!net_dev)
                return -ENOMEM;
-       efx = netdev_priv(net_dev);
+       probe_ptr = netdev_priv(net_dev);
+       *probe_ptr = probe_data;
+       efx->net_dev = net_dev;
        efx->type = (const struct efx_nic_type *) entry->driver_data;
        efx->fixed_features |= NETIF_F_HIGHDMA;
 
        pci_set_drvdata(pci_dev, efx);
        SET_NETDEV_DEV(net_dev, &pci_dev->dev);
-       rc = efx_init_struct(efx, pci_dev, net_dev);
+       rc = efx_init_struct(efx, pci_dev);
        if (rc)
                goto fail1;
+       efx->mdio.dev = net_dev;
 
        pci_info(pci_dev, "Solarflare NIC detected\n");
 
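Note that sizeof(probe_data), the size of a pointer, is deliberate: the netdev private area now stores only a pointer back to the separately kzalloc'd efx_probe_data, decoupling the efx_nic lifetime from the net_device. A minimal sketch of the round trip, with error handling omitted:

        struct efx_probe_data *pd = kzalloc(sizeof(*pd), GFP_KERNEL);
        struct net_device *nd = alloc_etherdev_mq(sizeof(pd), EFX_MAX_CORE_TX_QUEUES);

        *(struct efx_probe_data **)netdev_priv(nd) = pd; /* store the back-pointer */
        /* later, efx_netdev_priv(nd) dereferences it and returns &pd->efx */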
@@ -1150,13 +1161,13 @@ static int efx_pm_freeze(struct device *dev)
 
        rtnl_lock();
 
-       if (efx->state != STATE_DISABLED) {
-               efx->state = STATE_UNINIT;
-
+       if (efx_net_active(efx->state)) {
                efx_device_detach_sync(efx);
 
                efx_stop_all(efx);
                efx_disable_interrupts(efx);
+
+               efx->state = efx_freeze(efx->state);
        }
 
        rtnl_unlock();
@@ -1171,7 +1182,7 @@ static int efx_pm_thaw(struct device *dev)
 
        rtnl_lock();
 
-       if (efx->state != STATE_DISABLED) {
+       if (efx_frozen(efx->state)) {
                rc = efx_enable_interrupts(efx);
                if (rc)
                        goto fail;
@@ -1184,7 +1195,7 @@ static int efx_pm_thaw(struct device *dev)
 
                efx_device_attach_if_not_resetting(efx);
 
-               efx->state = STATE_READY;
+               efx->state = efx_thaw(efx->state);
 
                efx->type->resume_wol(efx);
        }
index f6577e7..56eb717 100644
@@ -167,7 +167,7 @@ static void efx_mac_work(struct work_struct *data)
 
 int efx_set_mac_address(struct net_device *net_dev, void *data)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct sockaddr *addr = data;
        u8 *new_addr = addr->sa_data;
        u8 old_addr[6];
@@ -202,7 +202,7 @@ int efx_set_mac_address(struct net_device *net_dev, void *data)
 /* Context: netif_addr_lock held, BHs disabled. */
 void efx_set_rx_mode(struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->port_enabled)
                queue_work(efx->workqueue, &efx->mac_work);
@@ -211,7 +211,7 @@ void efx_set_rx_mode(struct net_device *net_dev)
 
 int efx_set_features(struct net_device *net_dev, netdev_features_t data)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int rc;
 
        /* If disabling RX n-tuple filtering, clear existing filters */
@@ -285,7 +285,7 @@ unsigned int efx_xdp_max_mtu(struct efx_nic *efx)
 /* Context: process, rtnl_lock() held. */
 int efx_change_mtu(struct net_device *net_dev, int new_mtu)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int rc;
 
        rc = efx_check_disabled(efx);
@@ -600,7 +600,7 @@ void efx_stop_all(struct efx_nic *efx)
 /* Context: process, dev_base_lock or RTNL held, non-blocking. */
 void efx_net_stats(struct net_device *net_dev, struct rtnl_link_stats64 *stats)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        spin_lock_bh(&efx->stats_lock);
        efx_nic_update_stats_atomic(efx, NULL, stats);
@@ -723,7 +723,7 @@ void efx_reset_down(struct efx_nic *efx, enum reset_type method)
 /* Context: netif_tx_lock held, BHs disabled. */
 void efx_watchdog(struct net_device *net_dev, unsigned int txqueue)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        netif_err(efx, tx_err, efx->net_dev,
                  "TX stuck with port_enabled=%d: resetting channels\n",
@@ -898,7 +898,7 @@ static void efx_reset_work(struct work_struct *data)
         * have changed by now.  Now that we have the RTNL lock,
         * it cannot change again.
         */
-       if (efx->state == STATE_READY)
+       if (efx_net_active(efx->state))
                (void)efx_reset(efx, method);
 
        rtnl_unlock();
@@ -908,7 +908,7 @@ void efx_schedule_reset(struct efx_nic *efx, enum reset_type type)
 {
        enum reset_type method;
 
-       if (efx->state == STATE_RECOVERY) {
+       if (efx_recovering(efx->state)) {
                netif_dbg(efx, drv, efx->net_dev,
                          "recovering: skip scheduling %s reset\n",
                          RESET_TYPE(type));
@@ -943,7 +943,7 @@ void efx_schedule_reset(struct efx_nic *efx, enum reset_type type)
        /* If we're not READY then just leave the flags set as the cue
         * to abort probing or reschedule the reset later.
         */
-       if (READ_ONCE(efx->state) != STATE_READY)
+       if (!efx_net_active(READ_ONCE(efx->state)))
                return;
 
        /* efx_process_channel() will no longer read events once a
@@ -978,8 +978,7 @@ void efx_port_dummy_op_void(struct efx_nic *efx) {}
 /* This zeroes out and then fills in the invariants in a struct
  * efx_nic (including all sub-structures).
  */
-int efx_init_struct(struct efx_nic *efx,
-                   struct pci_dev *pci_dev, struct net_device *net_dev)
+int efx_init_struct(struct efx_nic *efx, struct pci_dev *pci_dev)
 {
        int rc = -ENOMEM;
 
@@ -998,7 +997,6 @@ int efx_init_struct(struct efx_nic *efx,
        efx->state = STATE_UNINIT;
        strlcpy(efx->name, pci_name(pci_dev), sizeof(efx->name));
 
-       efx->net_dev = net_dev;
        efx->rx_prefix_size = efx->type->rx_prefix_size;
        efx->rx_ip_align =
                NET_IP_ALIGN ? (efx->rx_prefix_size + NET_IP_ALIGN) % 4 : 0;
@@ -1023,7 +1021,6 @@ int efx_init_struct(struct efx_nic *efx,
        efx->rps_hash_table = kcalloc(EFX_ARFS_HASH_TABLE_SIZE,
                                      sizeof(*efx->rps_hash_table), GFP_KERNEL);
 #endif
-       efx->mdio.dev = net_dev;
        INIT_WORK(&efx->mac_work, efx_mac_work);
        init_waitqueue_head(&efx->flush_wq);
 
@@ -1077,13 +1074,11 @@ int efx_init_io(struct efx_nic *efx, int bar, dma_addr_t dma_mask,
        int rc;
 
        efx->mem_bar = UINT_MAX;
-
-       netif_dbg(efx, probe, efx->net_dev, "initialising I/O bar=%d\n", bar);
+       pci_dbg(pci_dev, "initialising I/O bar=%d\n", bar);
 
        rc = pci_enable_device(pci_dev);
        if (rc) {
-               netif_err(efx, probe, efx->net_dev,
-                         "failed to enable PCI device\n");
+               pci_err(pci_dev, "failed to enable PCI device\n");
                goto fail1;
        }
 
@@ -1091,42 +1086,40 @@ int efx_init_io(struct efx_nic *efx, int bar, dma_addr_t dma_mask,
 
        rc = dma_set_mask_and_coherent(&pci_dev->dev, dma_mask);
        if (rc) {
-               netif_err(efx, probe, efx->net_dev,
-                         "could not find a suitable DMA mask\n");
+               pci_err(efx->pci_dev, "could not find a suitable DMA mask\n");
                goto fail2;
        }
-       netif_dbg(efx, probe, efx->net_dev,
-                 "using DMA mask %llx\n", (unsigned long long)dma_mask);
+       pci_dbg(efx->pci_dev, "using DMA mask %llx\n", (unsigned long long)dma_mask);
 
        efx->membase_phys = pci_resource_start(efx->pci_dev, bar);
        if (!efx->membase_phys) {
-               netif_err(efx, probe, efx->net_dev,
-                         "ERROR: No BAR%d mapping from the BIOS. "
-                         "Try pci=realloc on the kernel command line\n", bar);
+               pci_err(efx->pci_dev,
+                       "ERROR: No BAR%d mapping from the BIOS. Try pci=realloc on the kernel command line\n",
+                       bar);
                rc = -ENODEV;
                goto fail3;
        }
 
        rc = pci_request_region(pci_dev, bar, "sfc");
        if (rc) {
-               netif_err(efx, probe, efx->net_dev,
-                         "request for memory BAR[%d] failed\n", bar);
+               pci_err(efx->pci_dev,
+                       "request for memory BAR[%d] failed\n", bar);
                rc = -EIO;
                goto fail3;
        }
        efx->mem_bar = bar;
        efx->membase = ioremap(efx->membase_phys, mem_map_size);
        if (!efx->membase) {
-               netif_err(efx, probe, efx->net_dev,
-                         "could not map memory BAR[%d] at %llx+%x\n", bar,
-                         (unsigned long long)efx->membase_phys, mem_map_size);
+               pci_err(efx->pci_dev,
+                       "could not map memory BAR[%d] at %llx+%x\n", bar,
+                       (unsigned long long)efx->membase_phys, mem_map_size);
                rc = -ENOMEM;
                goto fail4;
        }
-       netif_dbg(efx, probe, efx->net_dev,
-                 "memory BAR[%d] at %llx+%x (virtual %p)\n", bar,
-                 (unsigned long long)efx->membase_phys, mem_map_size,
-                 efx->membase);
+       pci_dbg(efx->pci_dev,
+               "memory BAR[%d] at %llx+%x (virtual %p)\n", bar,
+               (unsigned long long)efx->membase_phys, mem_map_size,
+               efx->membase);
 
        return 0;
 
@@ -1142,7 +1135,7 @@ fail1:
 
 void efx_fini_io(struct efx_nic *efx)
 {
-       netif_dbg(efx, drv, efx->net_dev, "shutting down I/O\n");
+       pci_dbg(efx->pci_dev, "shutting down I/O\n");
 
        if (efx->membase) {
                iounmap(efx->membase);
@@ -1217,13 +1210,15 @@ static pci_ers_result_t efx_io_error_detected(struct pci_dev *pdev,
        rtnl_lock();
 
        if (efx->state != STATE_DISABLED) {
-               efx->state = STATE_RECOVERY;
+               efx->state = efx_recover(efx->state);
                efx->reset_pending = 0;
 
                efx_device_detach_sync(efx);
 
-               efx_stop_all(efx);
-               efx_disable_interrupts(efx);
+               if (efx_net_active(efx->state)) {
+                       efx_stop_all(efx);
+                       efx_disable_interrupts(efx);
+               }
 
                status = PCI_ERS_RESULT_NEED_RESET;
        } else {
@@ -1271,7 +1266,7 @@ static void efx_io_resume(struct pci_dev *pdev)
                netif_err(efx, hw, efx->net_dev,
                          "efx_reset failed after PCI error (%d)\n", rc);
        } else {
-               efx->state = STATE_READY;
+               efx->state = efx_recovered(efx->state);
                netif_dbg(efx, hw, efx->net_dev,
                          "Done resetting and resuming IO after PCI error.\n");
        }
@@ -1357,7 +1352,7 @@ static bool efx_can_encap_offloads(struct efx_nic *efx, struct sk_buff *skb)
 netdev_features_t efx_features_check(struct sk_buff *skb, struct net_device *dev,
                                     netdev_features_t features)
 {
-       struct efx_nic *efx = netdev_priv(dev);
+       struct efx_nic *efx = efx_netdev_priv(dev);
 
        if (skb->encapsulation) {
                if (features & NETIF_F_GSO_MASK)
@@ -1378,7 +1373,7 @@ netdev_features_t efx_features_check(struct sk_buff *skb, struct net_device *dev
 int efx_get_phys_port_id(struct net_device *net_dev,
                         struct netdev_phys_item_id *ppid)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->get_phys_port_id)
                return efx->type->get_phys_port_id(efx, ppid);
@@ -1388,7 +1383,7 @@ int efx_get_phys_port_id(struct net_device *net_dev,
 
 int efx_get_phys_port_name(struct net_device *net_dev, char *name, size_t len)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (snprintf(name, len, "p%u", efx->port_num) >= len)
                return -EINVAL;
index 65513fd..93babc1 100644
@@ -14,8 +14,7 @@
 int efx_init_io(struct efx_nic *efx, int bar, dma_addr_t dma_mask,
                unsigned int mem_map_size);
 void efx_fini_io(struct efx_nic *efx);
-int efx_init_struct(struct efx_nic *efx, struct pci_dev *pci_dev,
-                   struct net_device *net_dev);
+int efx_init_struct(struct efx_nic *efx, struct pci_dev *pci_dev);
 void efx_fini_struct(struct efx_nic *efx);
 
 #define EFX_MAX_DMAQ_SIZE 4096UL
@@ -43,12 +42,11 @@ void efx_start_monitor(struct efx_nic *efx);
 int __efx_reconfigure_port(struct efx_nic *efx);
 int efx_reconfigure_port(struct efx_nic *efx);
 
-#define EFX_ASSERT_RESET_SERIALISED(efx)               \
-       do {                                            \
-               if ((efx->state == STATE_READY) ||      \
-                   (efx->state == STATE_RECOVERY) ||   \
-                   (efx->state == STATE_DISABLED))     \
-                       ASSERT_RTNL();                  \
+#define EFX_ASSERT_RESET_SERIALISED(efx)                               \
+       do {                                                            \
+               if ((efx)->state != STATE_UNINIT &&                     \
+                   (efx)->state != STATE_PROBED)                       \
+                       ASSERT_RTNL();                                  \
        } while (0)
 
 int efx_try_recovery(struct efx_nic *efx);
@@ -64,7 +62,7 @@ void efx_port_dummy_op_void(struct efx_nic *efx);
 
 static inline int efx_check_disabled(struct efx_nic *efx)
 {
-       if (efx->state == STATE_DISABLED || efx->state == STATE_RECOVERY) {
+       if (efx->state == STATE_DISABLED || efx_recovering(efx->state)) {
                netif_err(efx, drv, efx->net_dev,
                          "device is disabled due to earlier errors\n");
                return -EIO;
index 4850637..3643235 100644
@@ -33,7 +33,7 @@
 static int efx_ethtool_phys_id(struct net_device *net_dev,
                               enum ethtool_phys_id_state state)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        enum efx_led_mode mode = EFX_LED_DEFAULT;
 
        switch (state) {
@@ -55,13 +55,13 @@ static int efx_ethtool_phys_id(struct net_device *net_dev,
 
 static int efx_ethtool_get_regs_len(struct net_device *net_dev)
 {
-       return efx_nic_get_regs_len(netdev_priv(net_dev));
+       return efx_nic_get_regs_len(efx_netdev_priv(net_dev));
 }
 
 static void efx_ethtool_get_regs(struct net_device *net_dev,
                                 struct ethtool_regs *regs, void *buf)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        regs->version = efx->type->revision;
        efx_nic_get_regs(efx, buf);
@@ -101,7 +101,7 @@ static int efx_ethtool_get_coalesce(struct net_device *net_dev,
                                    struct kernel_ethtool_coalesce *kernel_coal,
                                    struct netlink_ext_ack *extack)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        unsigned int tx_usecs, rx_usecs;
        bool rx_adaptive;
 
@@ -121,7 +121,7 @@ static int efx_ethtool_set_coalesce(struct net_device *net_dev,
                                    struct kernel_ethtool_coalesce *kernel_coal,
                                    struct netlink_ext_ack *extack)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct efx_channel *channel;
        unsigned int tx_usecs, rx_usecs;
        bool adaptive, rx_may_override_tx;
@@ -163,7 +163,7 @@ efx_ethtool_get_ringparam(struct net_device *net_dev,
                          struct kernel_ethtool_ringparam *kernel_ring,
                          struct netlink_ext_ack *extack)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        ring->rx_max_pending = EFX_MAX_DMAQ_SIZE;
        ring->tx_max_pending = EFX_TXQ_MAX_ENT(efx);
@@ -177,7 +177,7 @@ efx_ethtool_set_ringparam(struct net_device *net_dev,
                          struct kernel_ethtool_ringparam *kernel_ring,
                          struct netlink_ext_ack *extack)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        u32 txq_entries;
 
        if (ring->rx_mini_pending || ring->rx_jumbo_pending ||
@@ -204,7 +204,7 @@ efx_ethtool_set_ringparam(struct net_device *net_dev,
 static void efx_ethtool_get_wol(struct net_device *net_dev,
                                struct ethtool_wolinfo *wol)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        return efx->type->get_wol(efx, wol);
 }
 
@@ -212,14 +212,14 @@ static void efx_ethtool_get_wol(struct net_device *net_dev,
 static int efx_ethtool_set_wol(struct net_device *net_dev,
                               struct ethtool_wolinfo *wol)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        return efx->type->set_wol(efx, wol->wolopts);
 }
 
 static void efx_ethtool_get_fec_stats(struct net_device *net_dev,
                                      struct ethtool_fec_stats *fec_stats)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->get_fec_stats)
                efx->type->get_fec_stats(efx, fec_stats);
@@ -228,7 +228,7 @@ static void efx_ethtool_get_fec_stats(struct net_device *net_dev,
 static int efx_ethtool_get_ts_info(struct net_device *net_dev,
                                   struct ethtool_ts_info *ts_info)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        /* Software capabilities */
        ts_info->so_timestamping = (SOF_TIMESTAMPING_RX_SOFTWARE |
index bd552c7..58ad9d6 100644
@@ -103,7 +103,7 @@ static const struct efx_sw_stat_desc efx_sw_stat_desc[] = {
 void efx_ethtool_get_drvinfo(struct net_device *net_dev,
                             struct ethtool_drvinfo *info)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        strlcpy(info->driver, KBUILD_MODNAME, sizeof(info->driver));
        efx_mcdi_print_fwver(efx, info->fw_version,
@@ -113,14 +113,14 @@ void efx_ethtool_get_drvinfo(struct net_device *net_dev,
 
 u32 efx_ethtool_get_msglevel(struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        return efx->msg_enable;
 }
 
 void efx_ethtool_set_msglevel(struct net_device *net_dev, u32 msg_enable)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        efx->msg_enable = msg_enable;
 }
@@ -128,7 +128,7 @@ void efx_ethtool_set_msglevel(struct net_device *net_dev, u32 msg_enable)
 void efx_ethtool_self_test(struct net_device *net_dev,
                           struct ethtool_test *test, u64 *data)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct efx_self_tests *efx_tests;
        bool already_up;
        int rc = -ENOMEM;
@@ -137,7 +137,7 @@ void efx_ethtool_self_test(struct net_device *net_dev,
        if (!efx_tests)
                goto fail;
 
-       if (efx->state != STATE_READY) {
+       if (!efx_net_active(efx->state)) {
                rc = -EBUSY;
                goto out;
        }
@@ -176,7 +176,7 @@ fail:
 void efx_ethtool_get_pauseparam(struct net_device *net_dev,
                                struct ethtool_pauseparam *pause)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        pause->rx_pause = !!(efx->wanted_fc & EFX_FC_RX);
        pause->tx_pause = !!(efx->wanted_fc & EFX_FC_TX);
@@ -186,7 +186,7 @@ void efx_ethtool_get_pauseparam(struct net_device *net_dev,
 int efx_ethtool_set_pauseparam(struct net_device *net_dev,
                               struct ethtool_pauseparam *pause)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        u8 wanted_fc, old_fc;
        u32 old_adv;
        int rc = 0;
@@ -441,7 +441,7 @@ static size_t efx_describe_per_queue_stats(struct efx_nic *efx, u8 *strings)
 
 int efx_ethtool_get_sset_count(struct net_device *net_dev, int string_set)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        switch (string_set) {
        case ETH_SS_STATS:
@@ -459,7 +459,7 @@ int efx_ethtool_get_sset_count(struct net_device *net_dev, int string_set)
 void efx_ethtool_get_strings(struct net_device *net_dev,
                             u32 string_set, u8 *strings)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int i;
 
        switch (string_set) {
@@ -487,7 +487,7 @@ void efx_ethtool_get_stats(struct net_device *net_dev,
                           struct ethtool_stats *stats,
                           u64 *data)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        const struct efx_sw_stat_desc *stat;
        struct efx_channel *channel;
        struct efx_tx_queue *tx_queue;
@@ -561,7 +561,7 @@ void efx_ethtool_get_stats(struct net_device *net_dev,
 int efx_ethtool_get_link_ksettings(struct net_device *net_dev,
                                   struct ethtool_link_ksettings *cmd)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct efx_link_state *link_state = &efx->link_state;
 
        mutex_lock(&efx->mac_lock);
@@ -584,7 +584,7 @@ int efx_ethtool_get_link_ksettings(struct net_device *net_dev,
 int efx_ethtool_set_link_ksettings(struct net_device *net_dev,
                                   const struct ethtool_link_ksettings *cmd)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int rc;
 
        /* GMAC does not support 1000Mbps HD */
@@ -604,7 +604,7 @@ int efx_ethtool_set_link_ksettings(struct net_device *net_dev,
 int efx_ethtool_get_fecparam(struct net_device *net_dev,
                             struct ethtool_fecparam *fecparam)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int rc;
 
        mutex_lock(&efx->mac_lock);
@@ -617,7 +617,7 @@ int efx_ethtool_get_fecparam(struct net_device *net_dev,
 int efx_ethtool_set_fecparam(struct net_device *net_dev,
                             struct ethtool_fecparam *fecparam)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int rc;
 
        mutex_lock(&efx->mac_lock);
@@ -809,7 +809,7 @@ static int efx_ethtool_get_class_rule(struct efx_nic *efx,
 int efx_ethtool_get_rxnfc(struct net_device *net_dev,
                          struct ethtool_rxnfc *info, u32 *rule_locs)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        u32 rss_context = 0;
        s32 rc = 0;
 
@@ -1127,7 +1127,7 @@ static int efx_ethtool_set_class_rule(struct efx_nic *efx,
 int efx_ethtool_set_rxnfc(struct net_device *net_dev,
                          struct ethtool_rxnfc *info)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx_filter_get_rx_id_limit(efx) == 0)
                return -EOPNOTSUPP;
@@ -1148,7 +1148,7 @@ int efx_ethtool_set_rxnfc(struct net_device *net_dev,
 
 u32 efx_ethtool_get_rxfh_indir_size(struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->n_rx_channels == 1)
                return 0;
@@ -1157,7 +1157,7 @@ u32 efx_ethtool_get_rxfh_indir_size(struct net_device *net_dev)
 
 u32 efx_ethtool_get_rxfh_key_size(struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        return efx->type->rx_hash_key_size;
 }
@@ -1165,7 +1165,7 @@ u32 efx_ethtool_get_rxfh_key_size(struct net_device *net_dev)
 int efx_ethtool_get_rxfh(struct net_device *net_dev, u32 *indir, u8 *key,
                         u8 *hfunc)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int rc;
 
        rc = efx->type->rx_pull_rss_config(efx);
@@ -1186,7 +1186,7 @@ int efx_ethtool_get_rxfh(struct net_device *net_dev, u32 *indir, u8 *key,
 int efx_ethtool_set_rxfh(struct net_device *net_dev, const u32 *indir,
                         const u8 *key, const u8 hfunc)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        /* Hash function is Toeplitz, cannot be changed */
        if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP)
@@ -1205,7 +1205,7 @@ int efx_ethtool_set_rxfh(struct net_device *net_dev, const u32 *indir,
 int efx_ethtool_get_rxfh_context(struct net_device *net_dev, u32 *indir,
                                 u8 *key, u8 *hfunc, u32 rss_context)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct efx_rss_context *ctx;
        int rc = 0;
 
@@ -1238,7 +1238,7 @@ int efx_ethtool_set_rxfh_context(struct net_device *net_dev,
                                 const u8 hfunc, u32 *rss_context,
                                 bool delete)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct efx_rss_context *ctx;
        bool allocated = false;
        int rc;
@@ -1300,7 +1300,7 @@ out_unlock:
 
 int efx_ethtool_reset(struct net_device *net_dev, u32 *flags)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int rc;
 
        rc = efx->type->map_reset_flags(flags);
@@ -1314,7 +1314,7 @@ int efx_ethtool_get_module_eeprom(struct net_device *net_dev,
                                  struct ethtool_eeprom *ee,
                                  u8 *data)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int ret;
 
        mutex_lock(&efx->mac_lock);
@@ -1327,7 +1327,7 @@ int efx_ethtool_get_module_eeprom(struct net_device *net_dev,
 int efx_ethtool_get_module_info(struct net_device *net_dev,
                                struct ethtool_modinfo *modinfo)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        int ret;
 
        mutex_lock(&efx->mac_lock);
index 5eb178d..78537a5 100644
@@ -117,7 +117,7 @@ typedef union ef4_oword {
  *
  *   ( element ) << 4
  *
- * The result will contain the relevant bits filled in in the range
+ * The result will contain the relevant bits filled in the range
  * [0,high-low), with garbage in bits [high-low+1,...).
  */
 #define EF4_EXTRACT_NATIVE(native_element, min, max, low, high)                \
index 2c91792..c64623c 100644
@@ -2711,7 +2711,7 @@ void ef4_farch_filter_table_remove(struct ef4_nic *efx)
        enum ef4_farch_filter_table_id table_id;
 
        for (table_id = 0; table_id < EF4_FARCH_FILTER_TABLE_COUNT; table_id++) {
-               kfree(state->table[table_id].used_bitmap);
+               bitmap_free(state->table[table_id].used_bitmap);
                vfree(state->table[table_id].spec);
        }
        kfree(state);
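bitmap_free() here pairs with the bitmap_zalloc() conversion in the next hunk; both replace the open-coded kcalloc(BITS_TO_LONGS(...)) idiom. A sketch with a made-up size of 1024 bits:

        unsigned long *bm;

        bm = kcalloc(BITS_TO_LONGS(1024), sizeof(unsigned long), GFP_KERNEL); /* old */
        kfree(bm);
        bm = bitmap_zalloc(1024, GFP_KERNEL); /* new: takes the bit count directly */
        bitmap_free(bm);                      /* must pair with bitmap_zalloc() */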
@@ -2740,9 +2740,7 @@ int ef4_farch_filter_table_probe(struct ef4_nic *efx)
                table = &state->table[table_id];
                if (table->size == 0)
                        continue;
-               table->used_bitmap = kcalloc(BITS_TO_LONGS(table->size),
-                                            sizeof(unsigned long),
-                                            GFP_KERNEL);
+               table->used_bitmap = bitmap_zalloc(table->size, GFP_KERNEL);
                if (!table->used_bitmap)
                        goto fail;
                table->spec = vzalloc(array_size(sizeof(*table->spec),
index a3425b6..3225fe6 100644
@@ -99,14 +99,12 @@ int efx_mcdi_init(struct efx_nic *efx)
         */
        rc = efx_mcdi_drv_attach(efx, true, &already_attached);
        if (rc) {
-               netif_err(efx, probe, efx->net_dev,
-                         "Unable to register driver with MCPU\n");
+               pci_err(efx->pci_dev, "Unable to register driver with MCPU\n");
                goto fail2;
        }
        if (already_attached)
                /* Not a fatal error */
-               netif_err(efx, probe, efx->net_dev,
-                         "Host already registered with MCPU\n");
+               pci_err(efx->pci_dev, "Host already registered with MCPU\n");
 
        if (efx->mcdi->fn_flags &
            (1 << MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_PRIMARY))
@@ -1447,7 +1445,7 @@ void efx_mcdi_print_fwver(struct efx_nic *efx, char *buf, size_t len)
        return;
 
 fail:
-       netif_err(efx, probe, efx->net_dev, "%s: failed rc=%d\n", __func__, rc);
+       pci_err(efx->pci_dev, "%s: failed rc=%d\n", __func__, rc);
        buf[0] = 0;
 }
 
@@ -1471,8 +1469,9 @@ static int efx_mcdi_drv_attach(struct efx_nic *efx, bool driver_operating,
         * care what firmware we get.
         */
        if (rc == -EPERM) {
-               netif_dbg(efx, probe, efx->net_dev,
-                         "efx_mcdi_drv_attach with fw-variant setting failed EPERM, trying without it\n");
+               pci_dbg(efx->pci_dev,
+                       "%s with fw-variant setting failed EPERM, trying without it\n",
+                       __func__);
                MCDI_SET_DWORD(inbuf, DRV_ATTACH_IN_FIRMWARE_ID,
                               MC_CMD_FW_DONT_CARE);
                rc = efx_mcdi_rpc_quiet(efx, MC_CMD_DRV_ATTACH, inbuf,
@@ -1514,7 +1513,7 @@ static int efx_mcdi_drv_attach(struct efx_nic *efx, bool driver_operating,
        return 0;
 
 fail:
-       netif_err(efx, probe, efx->net_dev, "%s: failed rc=%d\n", __func__, rc);
+       pci_err(efx->pci_dev, "%s: failed rc=%d\n", __func__, rc);
        return rc;
 }
 
index ff617b1..7984f6f 100644
  * MC_CMD_WORKAROUND_BUG26807.
  * May also be returned for other operations such as sub-variant switching. */
 #define MC_CMD_ERR_FILTERS_PRESENT 0x1014
-/* The clock whose frequency you've attempted to set set
+/* The clock whose frequency you've attempted to set
  * doesn't exist on this NIC */
 #define MC_CMD_ERR_NO_CLOCK 0x1015
 /* Returned by MC_CMD_TESTASSERT if the action that should
  * large number (253) it is not anticipated that this will be needed in the
  * near future, so can currently be ignored.
  *
- * On Riverhead this command is implemented as a wrapper for `list` in the
+ * On Riverhead this command is implemented as a wrapper for `list` in the
  * sensor_query SPHINX service.
  */
 #define MC_CMD_DYNAMIC_SENSORS_LIST 0x66
  * update is in progress, and effectively means the set of usable sensors is
  * the intersection between the sets of sensors known to the driver and the MC.
  *
- * On Riverhead this command is implemented as a wrapper for
+ * On Riverhead this command is implemented as a wrapper for
  * `get_descriptions` in the sensor_query SPHINX service.
  */
 #define MC_CMD_DYNAMIC_SENSORS_GET_DESCRIPTIONS 0x67
  * update is in progress, and effectively means the set of usable sensors is
  * the intersection between the sets of sensors known to the driver and the MC.
  *
- * On Riverhead this command is implemented as a wrapper for `get_readings`
+ * On Riverhead this command is implemented as a wrapper for `get_readings`
  * in the sensor_query SPHINX service.
  */
 #define MC_CMD_DYNAMIC_SENSORS_GET_READINGS 0x68
  * TLV_PORT_MODE_*). A superset of MC_CMD_GET_PORT_MODES_OUT/MODES that
  * contains all modes implemented in firmware for a particular board. Modes
  * listed in MODES are considered production modes and should be exposed in
- * userland tools. Modes listed in in ENGINEERING_MODES, but not in MODES
+ * userland tools. Modes listed in ENGINEERING_MODES, but not in MODES
  * should be considered hidden (not to be exposed in userland tools) and for
  * engineering use only. There are no other semantic differences and any mode
  * listed in either MODES or ENGINEERING_MODES can be set on the board.
index 94c6a34..ad4694f 100644
@@ -20,7 +20,7 @@
 static int efx_mcdi_mdio_read(struct net_device *net_dev,
                              int prtad, int devad, u16 addr)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        MCDI_DECLARE_BUF(inbuf, MC_CMD_MDIO_READ_IN_LEN);
        MCDI_DECLARE_BUF(outbuf, MC_CMD_MDIO_READ_OUT_LEN);
        size_t outlen;
@@ -46,7 +46,7 @@ static int efx_mcdi_mdio_read(struct net_device *net_dev,
 static int efx_mcdi_mdio_write(struct net_device *net_dev,
                               int prtad, int devad, u16 addr, u16 value)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        MCDI_DECLARE_BUF(inbuf, MC_CMD_MDIO_WRITE_IN_LEN);
        MCDI_DECLARE_BUF(outbuf, MC_CMD_MDIO_WRITE_OUT_LEN);
        size_t outlen;
index 723bbee..2228c88 100644
@@ -622,12 +622,55 @@ enum efx_int_mode {
 #define EFX_INT_MODE_USE_MSI(x) (((x)->interrupt_mode) <= EFX_INT_MODE_MSI)
 
 enum nic_state {
-       STATE_UNINIT = 0,       /* device being probed/removed or is frozen */
-       STATE_READY = 1,        /* hardware ready and netdev registered */
-       STATE_DISABLED = 2,     /* device disabled due to hardware errors */
-       STATE_RECOVERY = 3,     /* device recovering from PCI error */
+       STATE_UNINIT = 0,       /* device being probed/removed */
+       STATE_PROBED,           /* hardware probed */
+       STATE_NET_DOWN,         /* netdev registered */
+       STATE_NET_UP,           /* ready for traffic */
+       STATE_DISABLED,         /* device disabled due to hardware errors */
+
+       STATE_RECOVERY = 0x100, /* recovering from PCI error */
+       STATE_FROZEN = 0x200,   /* frozen by power management */
 };
 
+static inline bool efx_net_active(enum nic_state state)
+{
+       return state == STATE_NET_DOWN || state == STATE_NET_UP;
+}
+
+static inline bool efx_frozen(enum nic_state state)
+{
+       return state & STATE_FROZEN;
+}
+
+static inline bool efx_recovering(enum nic_state state)
+{
+       return state & STATE_RECOVERY;
+}
+
+static inline enum nic_state efx_freeze(enum nic_state state)
+{
+       WARN_ON(!efx_net_active(state));
+       return state | STATE_FROZEN;
+}
+
+static inline enum nic_state efx_thaw(enum nic_state state)
+{
+       WARN_ON(!efx_frozen(state));
+       return state & ~STATE_FROZEN;
+}
+
+static inline enum nic_state efx_recover(enum nic_state state)
+{
+       WARN_ON(!efx_net_active(state));
+       return state | STATE_RECOVERY;
+}
+
+static inline enum nic_state efx_recovered(enum nic_state state)
+{
+       WARN_ON(!efx_recovering(state));
+       return state & ~STATE_RECOVERY;
+}
+
 /* Forward declaration */
 struct efx_nic;
 
@@ -1123,6 +1166,24 @@ struct efx_nic {
        atomic_t n_rx_noskb_drops;
 };
 
+/**
+ * struct efx_probe_data - State after hardware probe
+ * @pci_dev: The PCI device
+ * @efx: Efx NIC details
+ */
+struct efx_probe_data {
+       struct pci_dev *pci_dev;
+       struct efx_nic efx;
+};
+
+static inline struct efx_nic *efx_netdev_priv(struct net_device *dev)
+{
+       struct efx_probe_data **probe_ptr = netdev_priv(dev);
+       struct efx_probe_data *probe_data = *probe_ptr;
+
+       return &probe_data->efx;
+}
+
 static inline int efx_dev_registered(struct efx_nic *efx)
 {
        return efx->net_dev->reg_state == NETREG_REGISTERED;
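
The reworked state machine above keeps the lifecycle state in the low bits and
treats STATE_RECOVERY/STATE_FROZEN as orthogonal flag bits, so a device can be
e.g. NET_UP and frozen at the same time; efx_netdev_priv() additionally adds
one level of indirection, since netdev_priv() now holds a pointer to struct
efx_probe_data rather than the efx_nic itself. A minimal standalone sketch
(userspace C, values copied from the hunk above) of how the freeze/thaw round
trip preserves the base state:

    #include <assert.h>

    enum nic_state {
            STATE_UNINIT = 0,
            STATE_PROBED,
            STATE_NET_DOWN,
            STATE_NET_UP,
            STATE_DISABLED,
            STATE_RECOVERY = 0x100, /* flag bit, OR-ed onto a base state */
            STATE_FROZEN = 0x200,   /* flag bit, OR-ed onto a base state */
    };

    int main(void)
    {
            unsigned int s = STATE_NET_UP;

            s |= STATE_FROZEN;         /* what efx_freeze() does */
            assert(s & STATE_FROZEN);  /* efx_frozen() reports true */
            s &= ~STATE_FROZEN;        /* what efx_thaw() does */
            assert(s == STATE_NET_UP); /* base state survived the round trip */
            return 0;
    }
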
index fa8b9aa..bd21d6a 100644 (file)
@@ -857,7 +857,7 @@ static void efx_filter_rfs_work(struct work_struct *data)
 {
        struct efx_async_filter_insertion *req = container_of(data, struct efx_async_filter_insertion,
                                                              work);
-       struct efx_nic *efx = netdev_priv(req->net_dev);
+       struct efx_nic *efx = efx_netdev_priv(req->net_dev);
        struct efx_channel *channel = efx_get_channel(efx, req->rxq_index);
        int slot_idx = req - efx->rps_slot;
        struct efx_arfs_rule *rule;
@@ -942,7 +942,7 @@ static void efx_filter_rfs_work(struct work_struct *data)
 int efx_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb,
                   u16 rxq_index, u32 flow_id)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct efx_async_filter_insertion *req;
        struct efx_arfs_rule *rule;
        struct flow_keys fk;
index cce2380..89ccd65 100644 (file)
@@ -2778,7 +2778,7 @@ void efx_farch_filter_table_remove(struct efx_nic *efx)
        enum efx_farch_filter_table_id table_id;
 
        for (table_id = 0; table_id < EFX_FARCH_FILTER_TABLE_COUNT; table_id++) {
-               kfree(state->table[table_id].used_bitmap);
+               bitmap_free(state->table[table_id].used_bitmap);
                vfree(state->table[table_id].spec);
        }
        kfree(state);
@@ -2822,9 +2822,7 @@ int efx_farch_filter_table_probe(struct efx_nic *efx)
                table = &state->table[table_id];
                if (table->size == 0)
                        continue;
-               table->used_bitmap = kcalloc(BITS_TO_LONGS(table->size),
-                                            sizeof(unsigned long),
-                                            GFP_KERNEL);
+               table->used_bitmap = bitmap_zalloc(table->size, GFP_KERNEL);
                if (!table->used_bitmap)
                        goto fail;
                table->spec = vzalloc(array_size(sizeof(*table->spec),
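
The conversion above replaces the open-coded
kcalloc(BITS_TO_LONGS(size), sizeof(unsigned long), GFP_KERNEL) with
bitmap_zalloc(), which must then be paired with bitmap_free() on teardown, as
the first hunk does. A hedged kernel-style sketch of the idiom; the function
names here are illustrative, not from the driver:

    #include <linux/bitmap.h>

    /* Allocate a zeroed bitmap able to hold nbits bits. */
    static unsigned long *filter_bitmap_alloc(unsigned int nbits)
    {
            /* Same storage as the old kcalloc() form, but states the
             * intent and keeps the alloc/free API symmetric.
             */
            return bitmap_zalloc(nbits, GFP_KERNEL);
    }

    static void filter_bitmap_release(unsigned long *map)
    {
            bitmap_free(map);       /* pair with bitmap_zalloc(), not kfree() */
    }
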
index 89a7fd4..a3cc8b7 100644 (file)
  * MC_CMD_WORKAROUND_BUG26807.
  * May also be returned for other operations such as sub-variant switching. */
 #define MC_CMD_ERR_FILTERS_PRESENT 0x1014
-/* The clock whose frequency you've attempted to set set
+/* The clock whose frequency you've attempted to set
  * doesn't exist on this NIC */
 #define MC_CMD_ERR_NO_CLOCK 0x1015
 /* Returned by MC_CMD_TESTASSERT if the action that should
  * large number (253) it is not anticipated that this will be needed in the
  * near future, so can currently be ignored.
  *
- * On Riverhead this command is implemented as a wrapper for `list` in the
+ * On Riverhead this command is implemented as a wrapper for `list` in the
  * sensor_query SPHINX service.
  */
 #define MC_CMD_DYNAMIC_SENSORS_LIST 0x66
  * update is in progress, and effectively means the set of usable sensors is
  * the intersection between the sets of sensors known to the driver and the MC.
  *
- * On Riverhead this command is implemented as a wrapper for
+ * On Riverhead this command is implemented as a wrapper for
  * `get_descriptions` in the sensor_query SPHINX service.
  */
 #define MC_CMD_DYNAMIC_SENSORS_GET_DESCRIPTIONS 0x67
  * update is in progress, and effectively means the set of usable sensors is
  * the intersection between the sets of sensors known to the driver and the MC.
  *
- * On Riverhead this command is implemented as a wrapper for `get_readings`
+ * On Riverhead this command is implemented as a wrapper for `get_readings`
  * in the sensor_query SPHINX service.
  */
 #define MC_CMD_DYNAMIC_SENSORS_GET_READINGS 0x68
  * TLV_PORT_MODE_*). A superset of MC_CMD_GET_PORT_MODES_OUT/MODES that
  * contains all modes implemented in firmware for a particular board. Modes
  * listed in MODES are considered production modes and should be exposed in
- * userland tools. Modes listed in in ENGINEERING_MODES, but not in MODES
+ * userland tools. Modes listed in ENGINEERING_MODES, but not in MODES
  * should be considered hidden (not to be exposed in userland tools) and for
  * engineering use only. There are no other semantic differences and any mode
  * listed in either MODES or ENGINEERING_MODES can be set on the board.
index 3f241e6..fc9f018 100644 (file)
@@ -10,7 +10,7 @@
 
 int efx_sriov_set_vf_mac(struct net_device *net_dev, int vf_i, u8 *mac)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->sriov_set_vf_mac)
                return efx->type->sriov_set_vf_mac(efx, vf_i, mac);
@@ -21,7 +21,7 @@ int efx_sriov_set_vf_mac(struct net_device *net_dev, int vf_i, u8 *mac)
 int efx_sriov_set_vf_vlan(struct net_device *net_dev, int vf_i, u16 vlan,
                          u8 qos, __be16 vlan_proto)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->sriov_set_vf_vlan) {
                if ((vlan & ~VLAN_VID_MASK) ||
@@ -40,7 +40,7 @@ int efx_sriov_set_vf_vlan(struct net_device *net_dev, int vf_i, u16 vlan,
 int efx_sriov_set_vf_spoofchk(struct net_device *net_dev, int vf_i,
                              bool spoofchk)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->sriov_set_vf_spoofchk)
                return efx->type->sriov_set_vf_spoofchk(efx, vf_i, spoofchk);
@@ -51,7 +51,7 @@ int efx_sriov_set_vf_spoofchk(struct net_device *net_dev, int vf_i,
 int efx_sriov_get_vf_config(struct net_device *net_dev, int vf_i,
                            struct ifla_vf_info *ivi)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->sriov_get_vf_config)
                return efx->type->sriov_get_vf_config(efx, vf_i, ivi);
@@ -62,7 +62,7 @@ int efx_sriov_get_vf_config(struct net_device *net_dev, int vf_i,
 int efx_sriov_set_vf_link_state(struct net_device *net_dev, int vf_i,
                                int link_state)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
 
        if (efx->type->sriov_set_vf_link_state)
                return efx->type->sriov_set_vf_link_state(efx, vf_i,
index 138bca6..79cc0bb 100644 (file)
@@ -512,7 +512,7 @@ unlock:
 netdev_tx_t efx_hard_start_xmit(struct sk_buff *skb,
                                struct net_device *net_dev)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct efx_tx_queue *tx_queue;
        unsigned index, type;
 
@@ -609,7 +609,7 @@ void efx_init_tx_queue_core_txq(struct efx_tx_queue *tx_queue)
 int efx_setup_tc(struct net_device *net_dev, enum tc_setup_type type,
                 void *type_data)
 {
-       struct efx_nic *efx = netdev_priv(net_dev);
+       struct efx_nic *efx = efx_netdev_priv(net_dev);
        struct tc_mqprio_qopt *mqprio = type_data;
        unsigned tc, num_tc;
 
index a0654e8..0329caf 100644 (file)
@@ -1515,14 +1515,14 @@ static void epic_remove_one(struct pci_dev *pdev)
        struct net_device *dev = pci_get_drvdata(pdev);
        struct epic_private *ep = netdev_priv(dev);
 
+       unregister_netdev(dev);
        dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, ep->tx_ring,
                          ep->tx_ring_dma);
        dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, ep->rx_ring,
                          ep->rx_ring_dma);
-       unregister_netdev(dev);
        pci_iounmap(pdev, ep->ioaddr);
-       pci_release_regions(pdev);
        free_netdev(dev);
+       pci_release_regions(pdev);
        pci_disable_device(pdev);
        /* pci_power_off(pdev, -1); */
 }
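
The epic100 reorder enforces the usual teardown rule: undo probe in reverse,
unregister the netdev before freeing anything the data path could still touch,
and release the PCI regions only after free_netdev(). A hedged skeleton of the
resulting ordering (names illustrative, not the driver's):

    #include <linux/netdevice.h>
    #include <linux/pci.h>

    static void example_remove(struct pci_dev *pdev)
    {
            struct net_device *dev = pci_get_drvdata(pdev);

            unregister_netdev(dev);         /* 1: stop the stack first */
            /* 2: free DMA rings while their mappings are still valid */
            /* 3: unmap the register BAR (pci_iounmap) */
            free_netdev(dev);               /* 4: drop netdev + priv data */
            pci_release_regions(pdev);      /* 5: only now give BARs back */
            pci_disable_device(pdev);
    }
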
index a57b0fa..ea4910a 100644 (file)
@@ -197,7 +197,7 @@ static void dwmac_mmc_ctrl(void __iomem *mmcaddr, unsigned int mode)
                 MMC_CNTRL, value);
 }
 
-/* To mask all all interrupts.*/
+/* To mask all interrupts.*/
 static void dwmac_mmc_intr_all_mask(void __iomem *mmcaddr)
 {
        writel(MMC_DEFAULT_MASK, mmcaddr + MMC_RX_INTR_MASK);
index fe263ca..6f14b00 100644 (file)
@@ -3961,7 +3961,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
                proto_hdr_len = skb_transport_offset(skb) + sizeof(struct udphdr);
                hdr = sizeof(struct udphdr);
        } else {
-               proto_hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+               proto_hdr_len = skb_tcp_all_headers(skb);
                hdr = tcp_hdrlen(skb);
        }
 
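
skb_tcp_all_headers() is the helper these call sites are converted to; to the
best of our reading it is just a named form of the sum it replaces,
approximately (kernel context, see include/linux/tcp.h in this series):

    static inline int skb_tcp_all_headers(const struct sk_buff *skb)
    {
            /* total bytes up to and including the TCP header */
            return skb_transport_offset(skb) + tcp_hdrlen(skb);
    }
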
index 435dc00..0b08b0e 100644 (file)
@@ -29,7 +29,7 @@
  *  -- on page reclamation, the driver swaps the page with a spare page.
  *     if that page is still in use, it frees its reference to that page,
  *     and allocates a new page for use. otherwise, it just recycles the
- *     the page.
+ *     page.
  *
  * NOTE: cassini can parse the header. however, it's not worth it
  *       as long as the network stack requires a header copy.
index ae5f05f..2d91f49 100644 (file)
  * PAUSE thresholds defined in terms of FIFO occupancy and may be translated
  * into FIFO vacancy using RX_FIFO_SIZE. setting ON will trigger XON frames
  * when FIFO reaches 0. OFF threshold should not be > size of RX FIFO. max
- * value is is 0x6F.
+ * value is 0x6F.
  * DEFAULT: 0x00078
  */
 #define  REG_RX_PAUSE_THRESH               0x4020  /* RX pause thresholds */
index 6b59b14..0cd8493 100644 (file)
@@ -335,7 +335,7 @@ static int vsw_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
        port->tsolen = 0;
 
        /* Mark the port as belonging to ldmvsw which directs the
-        * the common code to use the net_device in the vnet_port
+        * common code to use the net_device in the vnet_port
         * rather than the net_device in the vnet (which is used
         * by sunvnet). This bit is used by the VNET_PORT_TO_NET_DEVICE
         * macro.
index 45bd891..a14591b 100644 (file)
@@ -1088,7 +1088,7 @@ static netdev_tx_t gem_start_xmit(struct sk_buff *skb,
                netif_stop_queue(dev);
 
                /* netif_stop_queue() must be done before checking
-                * checking tx index in TX_BUFFS_AVAIL() below, because
+                * tx index in TX_BUFFS_AVAIL() below, because
                 * in gem_tx(), we update tx_old before checking for
                 * netif_queue_stopped().
                 */
index 3773ce5..5462066 100644 (file)
@@ -494,7 +494,7 @@ static int spl2sw_probe(struct platform_device *pdev)
        /* Add and enable napi. */
        netif_napi_add(ndev, &comm->rx_napi, spl2sw_rx_poll, NAPI_POLL_WEIGHT);
        napi_enable(&comm->rx_napi);
-       netif_napi_add(ndev, &comm->tx_napi, spl2sw_tx_poll, NAPI_POLL_WEIGHT);
+       netif_napi_add_tx(ndev, &comm->tx_napi, spl2sw_tx_poll);
        napi_enable(&comm->tx_napi);
        return 0;
 
index d435519..e54ce73 100644 (file)
@@ -81,7 +81,7 @@ static int xlgmac_prep_tso(struct sk_buff *skb,
        if (ret)
                return ret;
 
-       pkt_info->header_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       pkt_info->header_len = skb_tcp_all_headers(skb);
        pkt_info->tcp_header_len = tcp_hdrlen(skb);
        pkt_info->tcp_payload_len = skb->len - pkt_info->header_len;
        pkt_info->mss = skb_shinfo(skb)->gso_size;
diff --git a/drivers/net/ethernet/wangxun/Kconfig b/drivers/net/ethernet/wangxun/Kconfig
new file mode 100644 (file)
index 0000000..baa1f0a
--- /dev/null
@@ -0,0 +1,32 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Wangxun network device configuration
+#
+
+config NET_VENDOR_WANGXUN
+       bool "Wangxun devices"
+       default y
+       help
+         If you have a network (Ethernet) card belonging to this class, say Y.
+
+         Note that the answer to this question doesn't directly affect the
+         kernel: saying N will just cause the configurator to skip all
+         the questions about Wangxun cards. If you say Y, you will be asked for
+         your specific card in the following questions.
+
+if NET_VENDOR_WANGXUN
+
+config TXGBE
+       tristate "Wangxun(R) 10GbE PCI Express adapters support"
+       depends on PCI
+       help
+         This driver supports Wangxun(R) 10GbE PCI Express family of
+         adapters.
+
+         More specific information on configuring the driver is in
+         <file:Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst>.
+
+         To compile this driver as a module, choose M here. The module
+         will be called txgbe.
+
+endif # NET_VENDOR_WANGXUN
diff --git a/drivers/net/ethernet/wangxun/Makefile b/drivers/net/ethernet/wangxun/Makefile
new file mode 100644 (file)
index 0000000..c34db1b
--- /dev/null
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Wangxun network device drivers.
+#
+
+obj-$(CONFIG_TXGBE) += txgbe/
diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
new file mode 100644 (file)
index 0000000..431303c
--- /dev/null
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd.
+#
+# Makefile for the Wangxun(R) 10GbE PCI Express ethernet driver
+#
+
+obj-$(CONFIG_TXGBE) += txgbe.o
+
+txgbe-objs := txgbe_main.o
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
new file mode 100644 (file)
index 0000000..38ddbde
--- /dev/null
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _TXGBE_H_
+#define _TXGBE_H_
+
+#include "txgbe_type.h"
+
+#define TXGBE_MAX_FDIR_INDICES          63
+
+#define TXGBE_MAX_RX_QUEUES   (TXGBE_MAX_FDIR_INDICES + 1)
+#define TXGBE_MAX_TX_QUEUES   (TXGBE_MAX_FDIR_INDICES + 1)
+
+/* board specific private data structure */
+struct txgbe_adapter {
+       u8 __iomem *io_addr;    /* Mainly for iounmap use */
+       /* OS defined structs */
+       struct net_device *netdev;
+       struct pci_dev *pdev;
+};
+
+extern char txgbe_driver_name[];
+
+#endif /* _TXGBE_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
new file mode 100644 (file)
index 0000000..55c3c72
--- /dev/null
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/string.h>
+#include <linux/aer.h>
+#include <linux/etherdevice.h>
+
+#include "txgbe.h"
+
+char txgbe_driver_name[] = "txgbe";
+
+/* txgbe_pci_tbl - PCI Device ID Table
+ *
+ * Wildcard entries (PCI_ANY_ID) should come last
+ * Last entry must be all 0s
+ *
+ * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
+ *   Class, Class Mask, private data (not used) }
+ */
+static const struct pci_device_id txgbe_pci_tbl[] = {
+       { PCI_VDEVICE(WANGXUN, TXGBE_DEV_ID_SP1000), 0},
+       { PCI_VDEVICE(WANGXUN, TXGBE_DEV_ID_WX1820), 0},
+       /* required last entry */
+       { .device = 0 }
+};
+
+#define DEFAULT_DEBUG_LEVEL_SHIFT 3
+
+static void txgbe_dev_shutdown(struct pci_dev *pdev, bool *enable_wake)
+{
+       struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
+       struct net_device *netdev = adapter->netdev;
+
+       netif_device_detach(netdev);
+
+       pci_disable_device(pdev);
+}
+
+static void txgbe_shutdown(struct pci_dev *pdev)
+{
+       bool wake;
+
+       txgbe_dev_shutdown(pdev, &wake);
+
+       if (system_state == SYSTEM_POWER_OFF) {
+               pci_wake_from_d3(pdev, wake);
+               pci_set_power_state(pdev, PCI_D3hot);
+       }
+}
+
+/**
+ * txgbe_probe - Device Initialization Routine
+ * @pdev: PCI device information struct
+ * @ent: entry in txgbe_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ *
+ * txgbe_probe initializes an adapter identified by a pci_dev structure.
+ * The OS initialization, configuring of the adapter private structure,
+ * and a hardware reset occur.
+ **/
+static int txgbe_probe(struct pci_dev *pdev,
+                      const struct pci_device_id __always_unused *ent)
+{
+       struct txgbe_adapter *adapter = NULL;
+       struct net_device *netdev;
+       int err;
+
+       err = pci_enable_device_mem(pdev);
+       if (err)
+               return err;
+
+       err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+       if (err) {
+               dev_err(&pdev->dev,
+                       "No usable DMA configuration, aborting\n");
+               goto err_pci_disable_dev;
+       }
+
+       err = pci_request_selected_regions(pdev,
+                                          pci_select_bars(pdev, IORESOURCE_MEM),
+                                          txgbe_driver_name);
+       if (err) {
+               dev_err(&pdev->dev,
+                       "pci_request_selected_regions failed 0x%x\n", err);
+               goto err_pci_disable_dev;
+       }
+
+       pci_enable_pcie_error_reporting(pdev);
+       pci_set_master(pdev);
+
+       netdev = devm_alloc_etherdev_mqs(&pdev->dev,
+                                        sizeof(struct txgbe_adapter),
+                                        TXGBE_MAX_TX_QUEUES,
+                                        TXGBE_MAX_RX_QUEUES);
+       if (!netdev) {
+               err = -ENOMEM;
+               goto err_pci_release_regions;
+       }
+
+       SET_NETDEV_DEV(netdev, &pdev->dev);
+
+       adapter = netdev_priv(netdev);
+       adapter->netdev = netdev;
+       adapter->pdev = pdev;
+
+       adapter->io_addr = devm_ioremap(&pdev->dev,
+                                       pci_resource_start(pdev, 0),
+                                       pci_resource_len(pdev, 0));
+       if (!adapter->io_addr) {
+               err = -EIO;
+               goto err_pci_release_regions;
+       }
+
+       netdev->features |= NETIF_F_HIGHDMA;
+
+       pci_set_drvdata(pdev, adapter);
+
+       return 0;
+
+err_pci_release_regions:
+       pci_release_selected_regions(pdev,
+                                    pci_select_bars(pdev, IORESOURCE_MEM));
+err_pci_disable_dev:
+       pci_disable_device(pdev);
+       return err;
+}
+
+/**
+ * txgbe_remove - Device Removal Routine
+ * @pdev: PCI device information struct
+ *
+ * txgbe_remove is called by the PCI subsystem to alert the driver
+ * that it should release a PCI device.  This could be caused by a
+ * Hot-Plug event, or because the driver is going to be removed from
+ * memory.
+ **/
+static void txgbe_remove(struct pci_dev *pdev)
+{
+       pci_release_selected_regions(pdev,
+                                    pci_select_bars(pdev, IORESOURCE_MEM));
+
+       pci_disable_pcie_error_reporting(pdev);
+
+       pci_disable_device(pdev);
+}
+
+static struct pci_driver txgbe_driver = {
+       .name     = txgbe_driver_name,
+       .id_table = txgbe_pci_tbl,
+       .probe    = txgbe_probe,
+       .remove   = txgbe_remove,
+       .shutdown = txgbe_shutdown,
+};
+
+module_pci_driver(txgbe_driver);
+
+MODULE_DEVICE_TABLE(pci, txgbe_pci_tbl);
+MODULE_AUTHOR("Beijing WangXun Technology Co., Ltd, <software@trustnetic.com>");
+MODULE_DESCRIPTION("WangXun(R) 10 Gigabit PCI Express Network Driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
new file mode 100644 (file)
index 0000000..b2e329f
--- /dev/null
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _TXGBE_TYPE_H_
+#define _TXGBE_TYPE_H_
+
+#include <linux/types.h>
+#include <linux/netdevice.h>
+
+/************ txgbe_register.h ************/
+/* Vendor ID */
+#ifndef PCI_VENDOR_ID_WANGXUN
+#define PCI_VENDOR_ID_WANGXUN                   0x8088
+#endif
+
+/* Device IDs */
+#define TXGBE_DEV_ID_SP1000                     0x1001
+#define TXGBE_DEV_ID_WX1820                     0x2001
+
+/* Subsystem IDs */
+/* SFP */
+#define TXGBE_ID_SP1000_SFP                     0x0000
+#define TXGBE_ID_WX1820_SFP                     0x2000
+#define TXGBE_ID_SFP                            0x00
+
+/* copper */
+#define TXGBE_ID_SP1000_XAUI                    0x1010
+#define TXGBE_ID_WX1820_XAUI                    0x2010
+#define TXGBE_ID_XAUI                           0x10
+#define TXGBE_ID_SP1000_SGMII                   0x1020
+#define TXGBE_ID_WX1820_SGMII                   0x2020
+#define TXGBE_ID_SGMII                          0x20
+/* backplane */
+#define TXGBE_ID_SP1000_KR_KX_KX4               0x1030
+#define TXGBE_ID_WX1820_KR_KX_KX4               0x2030
+#define TXGBE_ID_KR_KX_KX4                      0x30
+/* MAC Interface */
+#define TXGBE_ID_SP1000_MAC_XAUI                0x1040
+#define TXGBE_ID_WX1820_MAC_XAUI                0x2040
+#define TXGBE_ID_MAC_XAUI                       0x40
+#define TXGBE_ID_SP1000_MAC_SGMII               0x1060
+#define TXGBE_ID_WX1820_MAC_SGMII               0x2060
+#define TXGBE_ID_MAC_SGMII                      0x60
+
+#define TXGBE_NCSI_SUP                          0x8000
+#define TXGBE_NCSI_MASK                         0x8000
+#define TXGBE_WOL_SUP                           0x4000
+#define TXGBE_WOL_MASK                          0x4000
+#define TXGBE_DEV_MASK                          0xf0
+
+/* Combined interface */
+#define TXGBE_ID_SFI_XAUI                      0x50
+
+/* Revision ID */
+#define TXGBE_SP_MPW  1
+
+#endif /* _TXGBE_TYPE_H_ */
index 48f544f..2772a79 100644 (file)
@@ -106,7 +106,7 @@ static int axienet_mdio_read(struct mii_bus *bus, int phy_id, int reg)
  * Return:     0 on success, -ETIMEDOUT on a timeout
  *
  * Writes the value to the requested register by first writing the value
- * into MWD register. The the MCR register is then appropriately setup
+ * into the MWD register. The MCR register is then appropriately set up
  * to finish the write operation.
  */
 static int axienet_mdio_write(struct mii_bus *bus, int phy_id, int reg,
index 89770c2..3591b9e 100644 (file)
@@ -29,6 +29,7 @@
 #include <linux/net_tstamp.h>
 #include <linux/of.h>
 #include <linux/of_mdio.h>
+#include <linux/of_net.h>
 #include <linux/phy.h>
 #include <linux/platform_device.h>
 #include <linux/ptp_classify.h>
@@ -156,7 +157,7 @@ struct eth_plat_info {
        u8 phy;         /* MII PHY ID, 0 - 31 */
        u8 rxq;         /* configurable, currently 0 - 31 only */
        u8 txreadyq;
-       u8 hwaddr[6];
+       u8 hwaddr[ETH_ALEN];
        u8 npe;         /* NPE instance used by this interface */
        bool has_mdio;  /* If this instance has an MDIO bus */
 };
@@ -1387,6 +1388,7 @@ static struct eth_plat_info *ixp4xx_of_get_platdata(struct device *dev)
        struct of_phandle_args npe_spec;
        struct device_node *mdio_np;
        struct eth_plat_info *plat;
+       u8 mac[ETH_ALEN];
        int ret;
 
        plat = devm_kzalloc(dev, sizeof(*plat), GFP_KERNEL);
@@ -1428,6 +1430,12 @@ static struct eth_plat_info *ixp4xx_of_get_platdata(struct device *dev)
        }
        plat->txreadyq = queue_spec.args[0];
 
+       ret = of_get_mac_address(np, mac);
+       if (!ret) {
+               dev_info(dev, "Setting macaddr from DT %pM\n", mac);
+               memcpy(plat->hwaddr, mac, ETH_ALEN);
+       }
+
        return plat;
 }
 
@@ -1487,7 +1495,10 @@ static int ixp4xx_eth_probe(struct platform_device *pdev)
 
        port->plat = plat;
        npe_port_tab[NPE_ID(port->id)] = port;
-       eth_hw_addr_set(ndev, plat->hwaddr);
+       if (is_valid_ether_addr(plat->hwaddr))
+               eth_hw_addr_set(ndev, plat->hwaddr);
+       else
+               eth_hw_addr_random(ndev);
 
        platform_set_drvdata(pdev, ndev);
 
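
The ixp4xx change implements the common device-tree MAC idiom: take the
address from DT when present and valid, otherwise fall back to a random
locally administered one. Condensed sketch of the pattern (kernel context,
helper name illustrative):

    #include <linux/etherdevice.h>
    #include <linux/of_net.h>

    static void example_set_mac(struct net_device *ndev, struct device_node *np)
    {
            u8 mac[ETH_ALEN];

            /* of_get_mac_address() returns 0 on success */
            if (!of_get_mac_address(np, mac) && is_valid_ether_addr(mac))
                    eth_hw_addr_set(ndev, mac);
            else
                    eth_hw_addr_random(ndev);
    }
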
index 45c3c4a..9fb5675 100644 (file)
@@ -99,6 +99,7 @@ struct sixpack {
 
        unsigned int            rx_count;
        unsigned int            rx_count_cooked;
+       spinlock_t              rxlock;
 
        int                     mtu;            /* Our mtu (to spot changes!) */
        int                     buffsize;       /* Max buffers sizes */
@@ -565,6 +566,7 @@ static int sixpack_open(struct tty_struct *tty)
        sp->dev = dev;
 
        spin_lock_init(&sp->lock);
+       spin_lock_init(&sp->rxlock);
        refcount_set(&sp->refcnt, 1);
        init_completion(&sp->dead);
 
@@ -913,6 +915,7 @@ static void decode_std_command(struct sixpack *sp, unsigned char cmd)
                        sp->led_state = 0x60;
                        /* fill trailing bytes with zeroes */
                        sp->tty->ops->write(sp->tty, &sp->led_state, 1);
+                       spin_lock_bh(&sp->rxlock);
                        rest = sp->rx_count;
                        if (rest != 0)
                                 for (i = rest; i <= 3; i++)
@@ -930,6 +933,7 @@ static void decode_std_command(struct sixpack *sp, unsigned char cmd)
                                sp_bump(sp, 0);
                        }
                        sp->rx_count_cooked = 0;
+                       spin_unlock_bh(&sp->rxlock);
                }
                break;
        case SIXP_TX_URUN: printk(KERN_DEBUG "6pack: TX underrun\n");
@@ -959,8 +963,11 @@ sixpack_decode(struct sixpack *sp, const unsigned char *pre_rbuff, int count)
                        decode_prio_command(sp, inbyte);
                else if ((inbyte & SIXP_STD_CMD_MASK) != 0)
                        decode_std_command(sp, inbyte);
-               else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK)
+               else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK) {
+                       spin_lock_bh(&sp->rxlock);
                        decode_data(sp, inbyte);
+                       spin_unlock_bh(&sp->rxlock);
+               }
        }
 }
 
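
The new rxlock serializes the two receive-side paths that mutate
rx_count/rx_count_cooked: the raw data decoder and the command handler that
flushes trailing bytes. Both can run in bottom-half context, hence the _bh
lock variants. The pattern in isolation (a sketch built from the names in the
hunk above):

    static void decode_data_serialized(struct sixpack *sp, unsigned char inbyte)
    {
            spin_lock_bh(&sp->rxlock);      /* exclude the command handler */
            decode_data(sp, inbyte);        /* touches the cooked counters */
            spin_unlock_bh(&sp->rxlock);
    }
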
index cf646dc..29496ca 100644 (file)
@@ -339,7 +339,7 @@ struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
        if (!gsi_trans_tre_reserve(trans_info, tre_count))
                return NULL;
 
-       /* Allocate and initialize non-zero fields in the the transaction */
+       /* Allocate and initialize non-zero fields in the transaction */
        trans = gsi_trans_pool_alloc(&trans_info->pool, 1);
        trans->gsi = gsi;
        trans->channel_id = channel_id;
@@ -669,7 +669,7 @@ int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
        if (!gsi_trans_tre_reserve(trans_info, 1))
                return -EBUSY;
 
-       /* Now fill the the reserved TRE and tell the hardware */
+       /* Now fill the reserved TRE and tell the hardware */
 
        dest_tre = gsi_ring_virt(tre_ring, tre_ring->index);
        gsi_trans_tre_fill(dest_tre, addr, 1, true, false, IPA_CMD_NONE);
index 22ba7b0..6289b7c 100644 (file)
@@ -6,8 +6,8 @@
 menu "PCS device drivers"
 
 config PCS_XPCS
-       tristate "Synopsys DesignWare XPCS controller"
-       depends on MDIO_DEVICE && MDIO_BUS
+       tristate
+       select PHYLINK
        help
          This module provides helper functions for Synopsys DesignWare XPCS
          controllers.
@@ -18,4 +18,12 @@ config PCS_LYNX
          This module provides helpers to phylink for managing the Lynx PCS
          which is part of the Layerscape and QorIQ Ethernet SERDES.
 
+config PCS_RZN1_MIIC
+       tristate "Renesas RZ/N1 MII converter"
+       depends on OF && (ARCH_RZN1 || COMPILE_TEST)
+       help
+         This module provides a driver for the MII converter that is available
+         on RZ/N1 SoCs. This PCS converts MII to RMII/RGMII or can be set in
+         pass-through mode for MII.
+
 endmenu
index 0603d46..0ff5388 100644 (file)
@@ -5,3 +5,4 @@ pcs_xpcs-$(CONFIG_PCS_XPCS)     := pcs-xpcs.o pcs-xpcs-nxp.o
 
 obj-$(CONFIG_PCS_XPCS)         += pcs_xpcs.o
 obj-$(CONFIG_PCS_LYNX)         += pcs-lynx.o
+obj-$(CONFIG_PCS_RZN1_MIIC)    += pcs-rzn1-miic.o
index fd34453..7d5fc7f 100644 (file)
@@ -71,12 +71,10 @@ static void lynx_pcs_get_state_usxgmii(struct mdio_device *pcs,
 static void lynx_pcs_get_state_2500basex(struct mdio_device *pcs,
                                         struct phylink_link_state *state)
 {
-       struct mii_bus *bus = pcs->bus;
-       int addr = pcs->addr;
        int bmsr, lpa;
 
-       bmsr = mdiobus_read(bus, addr, MII_BMSR);
-       lpa = mdiobus_read(bus, addr, MII_LPA);
+       bmsr = mdiodev_read(pcs, MII_BMSR);
+       lpa = mdiodev_read(pcs, MII_LPA);
        if (bmsr < 0 || lpa < 0) {
                state->link = false;
                return;
@@ -124,57 +122,39 @@ static void lynx_pcs_get_state(struct phylink_pcs *pcs,
                state->link, state->an_enabled, state->an_complete);
 }
 
-static int lynx_pcs_config_1000basex(struct mdio_device *pcs,
-                                    unsigned int mode,
-                                    const unsigned long *advertising)
+static int lynx_pcs_config_giga(struct mdio_device *pcs, unsigned int mode,
+                               phy_interface_t interface,
+                               const unsigned long *advertising)
 {
-       struct mii_bus *bus = pcs->bus;
-       int addr = pcs->addr;
        u32 link_timer;
-       int err;
-
-       link_timer = LINK_TIMER_VAL(IEEE8023_LINK_TIMER_NS);
-       mdiobus_write(bus, addr, LINK_TIMER_LO, link_timer & 0xffff);
-       mdiobus_write(bus, addr, LINK_TIMER_HI, link_timer >> 16);
-
-       err = mdiobus_modify(bus, addr, IF_MODE,
-                            IF_MODE_SGMII_EN | IF_MODE_USE_SGMII_AN,
-                            0);
-       if (err)
-               return err;
-
-       return phylink_mii_c22_pcs_config(pcs, mode,
-                                         PHY_INTERFACE_MODE_1000BASEX,
-                                         advertising);
-}
-
-static int lynx_pcs_config_sgmii(struct mdio_device *pcs, unsigned int mode,
-                                const unsigned long *advertising)
-{
-       struct mii_bus *bus = pcs->bus;
-       int addr = pcs->addr;
        u16 if_mode;
        int err;
 
-       if_mode = IF_MODE_SGMII_EN;
-       if (mode == MLO_AN_INBAND) {
-               u32 link_timer;
-
-               if_mode |= IF_MODE_USE_SGMII_AN;
-
-               /* Adjust link timer for SGMII */
-               link_timer = LINK_TIMER_VAL(SGMII_AN_LINK_TIMER_NS);
-               mdiobus_write(bus, addr, LINK_TIMER_LO, link_timer & 0xffff);
-               mdiobus_write(bus, addr, LINK_TIMER_HI, link_timer >> 16);
+       if (interface == PHY_INTERFACE_MODE_1000BASEX) {
+               link_timer = LINK_TIMER_VAL(IEEE8023_LINK_TIMER_NS);
+               mdiodev_write(pcs, LINK_TIMER_LO, link_timer & 0xffff);
+               mdiodev_write(pcs, LINK_TIMER_HI, link_timer >> 16);
+
+               if_mode = 0;
+       } else {
+               if_mode = IF_MODE_SGMII_EN;
+               if (mode == MLO_AN_INBAND) {
+                       if_mode |= IF_MODE_USE_SGMII_AN;
+
+                       /* Adjust link timer for SGMII */
+                       link_timer = LINK_TIMER_VAL(SGMII_AN_LINK_TIMER_NS);
+                       mdiodev_write(pcs, LINK_TIMER_LO, link_timer & 0xffff);
+                       mdiodev_write(pcs, LINK_TIMER_HI, link_timer >> 16);
+               }
        }
-       err = mdiobus_modify(bus, addr, IF_MODE,
+
+       err = mdiodev_modify(pcs, IF_MODE,
                             IF_MODE_SGMII_EN | IF_MODE_USE_SGMII_AN,
                             if_mode);
        if (err)
                return err;
 
-       return phylink_mii_c22_pcs_config(pcs, mode, PHY_INTERFACE_MODE_SGMII,
-                                        advertising);
+       return phylink_mii_c22_pcs_config(pcs, mode, interface, advertising);
 }
 
 static int lynx_pcs_config_usxgmii(struct mdio_device *pcs, unsigned int mode,
@@ -204,10 +184,10 @@ static int lynx_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
 
        switch (ifmode) {
        case PHY_INTERFACE_MODE_1000BASEX:
-               return lynx_pcs_config_1000basex(lynx->mdio, mode, advertising);
        case PHY_INTERFACE_MODE_SGMII:
        case PHY_INTERFACE_MODE_QSGMII:
-               return lynx_pcs_config_sgmii(lynx->mdio, mode, advertising);
+               return lynx_pcs_config_giga(lynx->mdio, mode, ifmode,
+                                           advertising);
        case PHY_INTERFACE_MODE_2500BASEX:
                if (phylink_autoneg_inband(mode)) {
                        dev_err(&lynx->mdio->dev,
@@ -237,9 +217,7 @@ static void lynx_pcs_an_restart(struct phylink_pcs *pcs)
 static void lynx_pcs_link_up_sgmii(struct mdio_device *pcs, unsigned int mode,
                                   int speed, int duplex)
 {
-       struct mii_bus *bus = pcs->bus;
        u16 if_mode = 0, sgmii_speed;
-       int addr = pcs->addr;
 
        /* The PCS needs to be configured manually only
         * when not operating on in-band mode
@@ -269,7 +247,7 @@ static void lynx_pcs_link_up_sgmii(struct mdio_device *pcs, unsigned int mode,
        }
        if_mode |= IF_MODE_SPEED(sgmii_speed);
 
-       mdiobus_modify(bus, addr, IF_MODE,
+       mdiodev_modify(pcs, IF_MODE,
                       IF_MODE_HALF_DUPLEX | IF_MODE_SPEED_MSK,
                       if_mode);
 }
@@ -294,8 +272,6 @@ static void lynx_pcs_link_up_2500basex(struct mdio_device *pcs,
                                       unsigned int mode,
                                       int speed, int duplex)
 {
-       struct mii_bus *bus = pcs->bus;
-       int addr = pcs->addr;
        u16 if_mode = 0;
 
        if (mode == MLO_AN_INBAND) {
@@ -307,7 +283,7 @@ static void lynx_pcs_link_up_2500basex(struct mdio_device *pcs,
                if_mode |= IF_MODE_HALF_DUPLEX;
        if_mode |= IF_MODE_SPEED(SGMII_SPEED_2500);
 
-       mdiobus_modify(bus, addr, IF_MODE,
+       mdiodev_modify(pcs, IF_MODE,
                       IF_MODE_HALF_DUPLEX | IF_MODE_SPEED_MSK,
                       if_mode);
 }
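
The lynx conversion swaps mdiobus_*(bus, addr, ...) calls for the mdiodev_*()
wrappers, which carry the bus and address inside struct mdio_device. Their
shape is approximately (kernel context, see include/linux/mdio.h):

    static inline int mdiodev_read(struct mdio_device *mdiodev, u32 regnum)
    {
            return mdiobus_read(mdiodev->bus, mdiodev->addr, regnum);
    }

    static inline int mdiodev_write(struct mdio_device *mdiodev, u32 regnum,
                                    u16 val)
    {
            return mdiobus_write(mdiodev->bus, mdiodev->addr, regnum, val);
    }
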
diff --git a/drivers/net/pcs/pcs-rzn1-miic.c b/drivers/net/pcs/pcs-rzn1-miic.c
new file mode 100644 (file)
index 0000000..c142411
--- /dev/null
@@ -0,0 +1,531 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Schneider Electric
+ *
+ * Clément Léger <clement.leger@bootlin.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/mdio.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/pcs-rzn1-miic.h>
+#include <linux/phylink.h>
+#include <linux/pm_runtime.h>
+#include <dt-bindings/net/pcs-rzn1-miic.h>
+
+#define MIIC_PRCMD                     0x0
+#define MIIC_ESID_CODE                 0x4
+
+#define MIIC_MODCTRL                   0x20
+#define MIIC_MODCTRL_SW_MODE           GENMASK(4, 0)
+
+#define MIIC_CONVCTRL(port)            (0x100 + (port) * 4)
+
+#define MIIC_CONVCTRL_CONV_SPEED       GENMASK(1, 0)
+#define CONV_MODE_10MBPS               0
+#define CONV_MODE_100MBPS              1
+#define CONV_MODE_1000MBPS             2
+
+#define MIIC_CONVCTRL_CONV_MODE                GENMASK(3, 2)
+#define CONV_MODE_MII                  0
+#define CONV_MODE_RMII                 1
+#define CONV_MODE_RGMII                        2
+
+#define MIIC_CONVCTRL_FULLD            BIT(8)
+#define MIIC_CONVCTRL_RGMII_LINK       BIT(12)
+#define MIIC_CONVCTRL_RGMII_DUPLEX     BIT(13)
+#define MIIC_CONVCTRL_RGMII_SPEED      GENMASK(15, 14)
+
+#define MIIC_CONVRST                   0x114
+#define MIIC_CONVRST_PHYIF_RST(port)   BIT(port)
+#define MIIC_CONVRST_PHYIF_RST_MASK    GENMASK(4, 0)
+
+#define MIIC_SWCTRL                    0x304
+#define MIIC_SWDUPC                    0x308
+
+#define MIIC_MAX_NR_PORTS              5
+
+#define MIIC_MODCTRL_CONF_CONV_NUM     6
+#define MIIC_MODCTRL_CONF_NONE         -1
+
+/**
+ * struct modctrl_match - Matching table entry for convctrl configuration
+ *                       See section 8.2.1 of the manual.
+ * @mode_cfg: Configuration value for convctrl
+ * @conv: Configuration of ethernet port muxes. First index is SWITCH_PORTIN,
+ *       then indexes 1 - 5 are CONV1 - CONV5.
+ */
+struct modctrl_match {
+       u32 mode_cfg;
+       u8 conv[MIIC_MODCTRL_CONF_CONV_NUM];
+};
+
+static struct modctrl_match modctrl_match_table[] = {
+       {0x0, {MIIC_RTOS_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+              MIIC_SWITCH_PORTC, MIIC_SERCOS_PORTB, MIIC_SERCOS_PORTA}},
+       {0x1, {MIIC_RTOS_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+              MIIC_SWITCH_PORTC, MIIC_ETHERCAT_PORTB, MIIC_ETHERCAT_PORTA}},
+       {0x2, {MIIC_RTOS_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+              MIIC_ETHERCAT_PORTC, MIIC_ETHERCAT_PORTB, MIIC_ETHERCAT_PORTA}},
+       {0x3, {MIIC_RTOS_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+              MIIC_SWITCH_PORTC, MIIC_SWITCH_PORTB, MIIC_SWITCH_PORTA}},
+
+       {0x8, {MIIC_RTOS_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+              MIIC_SWITCH_PORTC, MIIC_SERCOS_PORTB, MIIC_SERCOS_PORTA}},
+       {0x9, {MIIC_RTOS_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+              MIIC_SWITCH_PORTC, MIIC_ETHERCAT_PORTB, MIIC_ETHERCAT_PORTA}},
+       {0xA, {MIIC_RTOS_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+              MIIC_ETHERCAT_PORTC, MIIC_ETHERCAT_PORTB, MIIC_ETHERCAT_PORTA}},
+       {0xB, {MIIC_RTOS_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+              MIIC_SWITCH_PORTC, MIIC_SWITCH_PORTB, MIIC_SWITCH_PORTA}},
+
+       {0x10, {MIIC_GMAC2_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+               MIIC_SWITCH_PORTC, MIIC_SERCOS_PORTB, MIIC_SERCOS_PORTA}},
+       {0x11, {MIIC_GMAC2_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+               MIIC_SWITCH_PORTC, MIIC_ETHERCAT_PORTB, MIIC_ETHERCAT_PORTA}},
+       {0x12, {MIIC_GMAC2_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+               MIIC_ETHERCAT_PORTC, MIIC_ETHERCAT_PORTB, MIIC_ETHERCAT_PORTA}},
+       {0x13, {MIIC_GMAC2_PORT, MIIC_GMAC1_PORT, MIIC_SWITCH_PORTD,
+               MIIC_SWITCH_PORTC, MIIC_SWITCH_PORTB, MIIC_SWITCH_PORTA}}
+};
+
+static const char * const conf_to_string[] = {
+       [MIIC_GMAC1_PORT]       = "GMAC1_PORT",
+       [MIIC_GMAC2_PORT]       = "GMAC2_PORT",
+       [MIIC_RTOS_PORT]        = "RTOS_PORT",
+       [MIIC_SERCOS_PORTA]     = "SERCOS_PORTA",
+       [MIIC_SERCOS_PORTB]     = "SERCOS_PORTB",
+       [MIIC_ETHERCAT_PORTA]   = "ETHERCAT_PORTA",
+       [MIIC_ETHERCAT_PORTB]   = "ETHERCAT_PORTB",
+       [MIIC_ETHERCAT_PORTC]   = "ETHERCAT_PORTC",
+       [MIIC_SWITCH_PORTA]     = "SWITCH_PORTA",
+       [MIIC_SWITCH_PORTB]     = "SWITCH_PORTB",
+       [MIIC_SWITCH_PORTC]     = "SWITCH_PORTC",
+       [MIIC_SWITCH_PORTD]     = "SWITCH_PORTD",
+       [MIIC_HSR_PORTA]        = "HSR_PORTA",
+       [MIIC_HSR_PORTB]        = "HSR_PORTB",
+};
+
+static const char *index_to_string[MIIC_MODCTRL_CONF_CONV_NUM] = {
+       "SWITCH_PORTIN",
+       "CONV1",
+       "CONV2",
+       "CONV3",
+       "CONV4",
+       "CONV5",
+};
+
+/**
+ * struct miic - MII converter structure
+ * @base: base address of the MII converter
+ * @dev: Device associated to the MII converter
+ * @clks: Clocks used for this device
+ * @nclk: Number of clocks
+ * @lock: Lock used for read-modify-write access
+ */
+struct miic {
+       void __iomem *base;
+       struct device *dev;
+       struct clk_bulk_data *clks;
+       int nclk;
+       spinlock_t lock;
+};
+
+/**
+ * struct miic_port - Per port MII converter struct
+ * @miic: backlink to the MII converter structure
+ * @pcs: PCS structure associated to the port
+ * @port: port number
+ * @interface: interface mode of the port
+ */
+struct miic_port {
+       struct miic *miic;
+       struct phylink_pcs pcs;
+       int port;
+       phy_interface_t interface;
+};
+
+static struct miic_port *phylink_pcs_to_miic_port(struct phylink_pcs *pcs)
+{
+       return container_of(pcs, struct miic_port, pcs);
+}
+
+static void miic_reg_writel(struct miic *miic, int offset, u32 value)
+{
+       writel(value, miic->base + offset);
+}
+
+static u32 miic_reg_readl(struct miic *miic, int offset)
+{
+       return readl(miic->base + offset);
+}
+
+static void miic_reg_rmw(struct miic *miic, int offset, u32 mask, u32 val)
+{
+       u32 reg;
+
+       spin_lock(&miic->lock);
+
+       reg = miic_reg_readl(miic, offset);
+       reg &= ~mask;
+       reg |= val;
+       miic_reg_writel(miic, offset, reg);
+
+       spin_unlock(&miic->lock);
+}
+
+static void miic_converter_enable(struct miic *miic, int port, int enable)
+{
+       u32 val = 0;
+
+       if (enable)
+               val = MIIC_CONVRST_PHYIF_RST(port);
+
+       miic_reg_rmw(miic, MIIC_CONVRST, MIIC_CONVRST_PHYIF_RST(port), val);
+}
+
+static int miic_config(struct phylink_pcs *pcs, unsigned int mode,
+                      phy_interface_t interface,
+                      const unsigned long *advertising, bool permit)
+{
+       struct miic_port *miic_port = phylink_pcs_to_miic_port(pcs);
+       struct miic *miic = miic_port->miic;
+       u32 speed, conv_mode, val, mask;
+       int port = miic_port->port;
+
+       switch (interface) {
+       case PHY_INTERFACE_MODE_RMII:
+               conv_mode = CONV_MODE_RMII;
+               speed = CONV_MODE_100MBPS;
+               break;
+       case PHY_INTERFACE_MODE_RGMII:
+       case PHY_INTERFACE_MODE_RGMII_ID:
+       case PHY_INTERFACE_MODE_RGMII_TXID:
+       case PHY_INTERFACE_MODE_RGMII_RXID:
+               conv_mode = CONV_MODE_RGMII;
+               speed = CONV_MODE_1000MBPS;
+               break;
+       case PHY_INTERFACE_MODE_MII:
+               conv_mode = CONV_MODE_MII;
+               /* When in MII mode, speed should be set to 0 (which is actually
+                * CONV_MODE_10MBPS)
+                */
+               speed = CONV_MODE_10MBPS;
+               break;
+       default:
+               return -EOPNOTSUPP;
+       }
+
+       val = FIELD_PREP(MIIC_CONVCTRL_CONV_MODE, conv_mode);
+       mask = MIIC_CONVCTRL_CONV_MODE;
+
+       /* Update speed only if we are going to change the interface because
+        * the link might already be up and it would break it if the speed is
+        * changed.
+        */
+       if (interface != miic_port->interface) {
+               val |= FIELD_PREP(MIIC_CONVCTRL_CONV_SPEED, speed);
+               mask |= MIIC_CONVCTRL_CONV_SPEED;
+               miic_port->interface = interface;
+       }
+
+       miic_reg_rmw(miic, MIIC_CONVCTRL(port), mask, val);
+       miic_converter_enable(miic_port->miic, miic_port->port, 1);
+
+       return 0;
+}
+
+static void miic_link_up(struct phylink_pcs *pcs, unsigned int mode,
+                        phy_interface_t interface, int speed, int duplex)
+{
+       struct miic_port *miic_port = phylink_pcs_to_miic_port(pcs);
+       struct miic *miic = miic_port->miic;
+       u32 conv_speed = 0, val = 0;
+       int port = miic_port->port;
+
+       if (duplex == DUPLEX_FULL)
+               val |= MIIC_CONVCTRL_FULLD;
+
+       /* No speed in MII through-mode */
+       if (interface != PHY_INTERFACE_MODE_MII) {
+               switch (speed) {
+               case SPEED_1000:
+                       conv_speed = CONV_MODE_1000MBPS;
+                       break;
+               case SPEED_100:
+                       conv_speed = CONV_MODE_100MBPS;
+                       break;
+               case SPEED_10:
+                       conv_speed = CONV_MODE_10MBPS;
+                       break;
+               default:
+                       return;
+               }
+       }
+
+       val |= FIELD_PREP(MIIC_CONVCTRL_CONV_SPEED, conv_speed);
+
+       miic_reg_rmw(miic, MIIC_CONVCTRL(port),
+                    (MIIC_CONVCTRL_CONV_SPEED | MIIC_CONVCTRL_FULLD), val);
+}
+
+static int miic_validate(struct phylink_pcs *pcs, unsigned long *supported,
+                        const struct phylink_link_state *state)
+{
+       if (phy_interface_mode_is_rgmii(state->interface) ||
+           state->interface == PHY_INTERFACE_MODE_RMII ||
+           state->interface == PHY_INTERFACE_MODE_MII)
+               return 1;
+
+       return -EINVAL;
+}
+
+static const struct phylink_pcs_ops miic_phylink_ops = {
+       .pcs_validate = miic_validate,
+       .pcs_config = miic_config,
+       .pcs_link_up = miic_link_up,
+};
+
+struct phylink_pcs *miic_create(struct device *dev, struct device_node *np)
+{
+       struct platform_device *pdev;
+       struct miic_port *miic_port;
+       struct device_node *pcs_np;
+       struct miic *miic;
+       u32 port;
+
+       if (!of_device_is_available(np))
+               return ERR_PTR(-ENODEV);
+
+       if (of_property_read_u32(np, "reg", &port))
+               return ERR_PTR(-EINVAL);
+
+       if (port > MIIC_MAX_NR_PORTS || port < 1)
+               return ERR_PTR(-EINVAL);
+
+       /* The PCS pdev is attached to the parent node */
+       pcs_np = of_get_parent(np);
+       if (!pcs_np)
+               return ERR_PTR(-ENODEV);
+
+       if (!of_device_is_available(pcs_np)) {
+               of_node_put(pcs_np);
+               return ERR_PTR(-ENODEV);
+       }
+
+       pdev = of_find_device_by_node(pcs_np);
+       of_node_put(pcs_np);
+       if (!pdev || !platform_get_drvdata(pdev))
+               return ERR_PTR(-EPROBE_DEFER);
+
+       miic_port = kzalloc(sizeof(*miic_port), GFP_KERNEL);
+       if (!miic_port)
+               return ERR_PTR(-ENOMEM);
+
+       miic = platform_get_drvdata(pdev);
+       device_link_add(dev, miic->dev, DL_FLAG_AUTOREMOVE_CONSUMER);
+
+       miic_port->miic = miic;
+       miic_port->port = port - 1;
+       miic_port->pcs.ops = &miic_phylink_ops;
+
+       return &miic_port->pcs;
+}
+EXPORT_SYMBOL(miic_create);
+
+void miic_destroy(struct phylink_pcs *pcs)
+{
+       struct miic_port *miic_port = phylink_pcs_to_miic_port(pcs);
+
+       miic_converter_enable(miic_port->miic, miic_port->port, 0);
+       kfree(miic_port);
+}
+EXPORT_SYMBOL(miic_destroy);
+
+static int miic_init_hw(struct miic *miic, u32 cfg_mode)
+{
+       int port;
+
+       /* Unlock write access to accessory registers (cf datasheet). If this
+        * is going to be used in conjunction with the Cortex-M3, this sequence
+        * will have to be moved into the register write path.
+        */
+       miic_reg_writel(miic, MIIC_PRCMD, 0x00A5);
+       miic_reg_writel(miic, MIIC_PRCMD, 0x0001);
+       miic_reg_writel(miic, MIIC_PRCMD, 0xFFFE);
+       miic_reg_writel(miic, MIIC_PRCMD, 0x0001);
+
+       miic_reg_writel(miic, MIIC_MODCTRL,
+                       FIELD_PREP(MIIC_MODCTRL_SW_MODE, cfg_mode));
+
+       for (port = 0; port < MIIC_MAX_NR_PORTS; port++) {
+               miic_converter_enable(miic, port, 0);
+               /* Disable speed/duplex control from these registers, datasheet
+                * says switch registers should be used to setup switch port
+                * speed and duplex.
+                */
+               miic_reg_writel(miic, MIIC_SWCTRL, 0x0);
+               miic_reg_writel(miic, MIIC_SWDUPC, 0x0);
+       }
+
+       return 0;
+}
+
+static bool miic_modctrl_match(s8 table_val[MIIC_MODCTRL_CONF_CONV_NUM],
+                              s8 dt_val[MIIC_MODCTRL_CONF_CONV_NUM])
+{
+       int i;
+
+       for (i = 0; i < MIIC_MODCTRL_CONF_CONV_NUM; i++) {
+               if (dt_val[i] == MIIC_MODCTRL_CONF_NONE)
+                       continue;
+
+               if (dt_val[i] != table_val[i])
+                       return false;
+       }
+
+       return true;
+}
+
+static void miic_dump_conf(struct device *dev,
+                          s8 conf[MIIC_MODCTRL_CONF_CONV_NUM])
+{
+       const char *conf_name;
+       int i;
+
+       for (i = 0; i < MIIC_MODCTRL_CONF_CONV_NUM; i++) {
+               if (conf[i] != MIIC_MODCTRL_CONF_NONE)
+                       conf_name = conf_to_string[conf[i]];
+               else
+                       conf_name = "NONE";
+
+               dev_err(dev, "%s: %s\n", index_to_string[i], conf_name);
+       }
+}
+
+static int miic_match_dt_conf(struct device *dev,
+                             s8 dt_val[MIIC_MODCTRL_CONF_CONV_NUM],
+                             u32 *mode_cfg)
+{
+       struct modctrl_match *table_entry;
+       int i;
+
+       for (i = 0; i < ARRAY_SIZE(modctrl_match_table); i++) {
+               table_entry = &modctrl_match_table[i];
+
+               if (miic_modctrl_match(table_entry->conv, dt_val)) {
+                       *mode_cfg = table_entry->mode_cfg;
+                       return 0;
+               }
+       }
+
+       dev_err(dev, "Failed to apply requested configuration\n");
+       miic_dump_conf(dev, dt_val);
+
+       return -EINVAL;
+}
+
+static int miic_parse_dt(struct device *dev, u32 *mode_cfg)
+{
+       s8 dt_val[MIIC_MODCTRL_CONF_CONV_NUM];
+       struct device_node *np = dev->of_node;
+       struct device_node *conv;
+       u32 conf;
+       int port;
+
+       memset(dt_val, MIIC_MODCTRL_CONF_NONE, sizeof(dt_val));
+
+       if (of_property_read_u32(np, "renesas,miic-switch-portin", &conf) == 0)
+               dt_val[0] = conf;
+
+       for_each_child_of_node(np, conv) {
+               if (of_property_read_u32(conv, "reg", &port))
+                       continue;
+
+               if (!of_device_is_available(conv))
+                       continue;
+
+               if (of_property_read_u32(conv, "renesas,miic-input", &conf) == 0)
+                       dt_val[port] = conf;
+       }
+
+       return miic_match_dt_conf(dev, dt_val, mode_cfg);
+}
+
+static int miic_probe(struct platform_device *pdev)
+{
+       struct device *dev = &pdev->dev;
+       struct miic *miic;
+       u32 mode_cfg;
+       int ret;
+
+       ret = miic_parse_dt(dev, &mode_cfg);
+       if (ret < 0)
+               return ret;
+
+       miic = devm_kzalloc(dev, sizeof(*miic), GFP_KERNEL);
+       if (!miic)
+               return -ENOMEM;
+
+       spin_lock_init(&miic->lock);
+       miic->dev = dev;
+       miic->base = devm_platform_ioremap_resource(pdev, 0);
+       if (IS_ERR(miic->base))
+               return PTR_ERR(miic->base);
+
+       ret = devm_pm_runtime_enable(dev);
+       if (ret < 0)
+               return ret;
+
+       ret = pm_runtime_resume_and_get(dev);
+       if (ret < 0)
+               return ret;
+
+       ret = miic_init_hw(miic, mode_cfg);
+       if (ret)
+               goto disable_runtime_pm;
+
+       /* miic_create() relies on the fact that data are attached to the
+        * platform device to determine if the driver is ready so this needs to
+        * be the last thing to be done after everything is initialized
+        * properly.
+        */
+       platform_set_drvdata(pdev, miic);
+
+       return 0;
+
+disable_runtime_pm:
+       pm_runtime_put(dev);
+
+       return ret;
+}
+
+static int miic_remove(struct platform_device *pdev)
+{
+       pm_runtime_put(&pdev->dev);
+
+       return 0;
+}
+
+static const struct of_device_id miic_of_mtable[] = {
+       { .compatible = "renesas,rzn1-miic" },
+       { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, miic_of_mtable);
+
+static struct platform_driver miic_driver = {
+       .driver = {
+               .name    = "rzn1_miic",
+               .suppress_bind_attrs = true,
+               .of_match_table = miic_of_mtable,
+       },
+       .probe = miic_probe,
+       .remove = miic_remove,
+};
+module_platform_driver(miic_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Renesas MII converter PCS driver");
+MODULE_AUTHOR("Clément Léger <clement.leger@bootlin.com>");
index 9fee639..c57a026 100644 (file)
@@ -104,6 +104,8 @@ config AX88796B_PHY
 config BROADCOM_PHY
        tristate "Broadcom 54XX PHYs"
        select BCM_NET_PHYLIB
+       select BCM_NET_PHYPTP if NETWORK_PHY_TIMESTAMPING
+       depends on PTP_1588_CLOCK_OPTIONAL
        help
          Currently supports the BCM5411, BCM5421, BCM5461, BCM54616S, BCM5464,
          BCM5481, BCM54810 and BCM5482 PHYs.
@@ -160,6 +162,9 @@ config BCM_CYGNUS_PHY
 config BCM_NET_PHYLIB
        tristate
 
+config BCM_NET_PHYPTP
+       tristate
+
 config CICADA_PHY
        tristate "Cicada PHYs"
        help
@@ -216,6 +221,8 @@ config MARVELL_88X2222_PHY
 
 config MAXLINEAR_GPHY
        tristate "Maxlinear Ethernet PHYs"
+       select POLYNOMIAL if HWMON
+       depends on HWMON || HWMON=n
        help
          Support for the Maxlinear GPY115, GPY211, GPY212, GPY215,
          GPY241, GPY245 PHYs.
index b12b1d8..f7138d3 100644 (file)
@@ -47,6 +47,7 @@ obj-$(CONFIG_BCM84881_PHY)    += bcm84881.o
 obj-$(CONFIG_BCM87XX_PHY)      += bcm87xx.o
 obj-$(CONFIG_BCM_CYGNUS_PHY)   += bcm-cygnus.o
 obj-$(CONFIG_BCM_NET_PHYLIB)   += bcm-phy-lib.o
+obj-$(CONFIG_BCM_NET_PHYPTP)   += bcm-phy-ptp.o
 obj-$(CONFIG_BROADCOM_PHY)     += broadcom.o
 obj-$(CONFIG_CICADA_PHY)       += cicada.o
 obj-$(CONFIG_CORTINA_PHY)      += cortina.o
index a8db1a1..8b7a46d 100644 (file)
@@ -22,6 +22,7 @@
 #define PHY_ID_AQR107  0x03a1b4e0
 #define PHY_ID_AQCS109 0x03a1b5c2
 #define PHY_ID_AQR405  0x03a1b4b0
+#define PHY_ID_AQR113C 0x31c31c12
 
 #define MDIO_PHYXS_VEND_IF_STATUS              0xe812
 #define MDIO_PHYXS_VEND_IF_STATUS_TYPE_MASK    GENMASK(7, 3)
@@ -34,6 +35,8 @@
 #define MDIO_AN_VEND_PROV                      0xc400
 #define MDIO_AN_VEND_PROV_1000BASET_FULL       BIT(15)
 #define MDIO_AN_VEND_PROV_1000BASET_HALF       BIT(14)
+#define MDIO_AN_VEND_PROV_5000BASET_FULL       BIT(11)
+#define MDIO_AN_VEND_PROV_2500BASET_FULL       BIT(10)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_EN         BIT(4)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_MASK       GENMASK(3, 0)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT       4
@@ -231,9 +234,20 @@ static int aqr_config_aneg(struct phy_device *phydev)
                              phydev->advertising))
                reg |= MDIO_AN_VEND_PROV_1000BASET_HALF;
 
+       /* Handle the case when the 2.5G and 5G speeds are not advertised */
+       if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT,
+                             phydev->advertising))
+               reg |= MDIO_AN_VEND_PROV_2500BASET_FULL;
+
+       if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
+                             phydev->advertising))
+               reg |= MDIO_AN_VEND_PROV_5000BASET_FULL;
+
        ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_VEND_PROV,
                                     MDIO_AN_VEND_PROV_1000BASET_HALF |
-                                    MDIO_AN_VEND_PROV_1000BASET_FULL, reg);
+                                    MDIO_AN_VEND_PROV_1000BASET_FULL |
+                                    MDIO_AN_VEND_PROV_2500BASET_FULL |
+                                    MDIO_AN_VEND_PROV_5000BASET_FULL, reg);
        if (ret < 0)
                return ret;
        if (ret > 0)
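
For illustration, the advertisement/register math in this hunk can be checked
standalone. A minimal sketch in plain C (stdint types; the BIT() and field
names mirror the defines above and are redeclared purely for the example): it
builds the value/mask pair that phy_modify_mmd_changed() receives, so a speed
that is no longer advertised gets its bit cleared rather than left stale.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIT(n)                          (1U << (n))
#define VEND_PROV_1000BASET_FULL        BIT(15)
#define VEND_PROV_1000BASET_HALF        BIT(14)
#define VEND_PROV_5000BASET_FULL        BIT(11)
#define VEND_PROV_2500BASET_FULL        BIT(10)

/* Mirrors aqr_config_aneg(): one bit per advertised speed; the fixed mask
 * covers all four bits so deselected speeds are cleared by the modify call.
 */
static void build_vend_prov(bool adv_1000f, bool adv_1000h,
                            bool adv_2500f, bool adv_5000f,
                            uint16_t *val, uint16_t *mask)
{
        *val = 0;
        if (adv_1000f)
                *val |= VEND_PROV_1000BASET_FULL;
        if (adv_1000h)
                *val |= VEND_PROV_1000BASET_HALF;
        if (adv_2500f)
                *val |= VEND_PROV_2500BASET_FULL;
        if (adv_5000f)
                *val |= VEND_PROV_5000BASET_FULL;

        *mask = VEND_PROV_1000BASET_FULL | VEND_PROV_1000BASET_HALF |
                VEND_PROV_2500BASET_FULL | VEND_PROV_5000BASET_FULL;
}

int main(void)
{
        uint16_t val, mask;

        /* advertise 1000BASE-T full and 2.5GBASE-T only */
        build_vend_prov(true, false, true, false, &val, &mask);
        printf("val=0x%04x mask=0x%04x\n", val, mask);  /* 0x8400 / 0xcc00 */
        return 0;
}
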
@@ -684,6 +698,24 @@ static struct phy_driver aqr_driver[] = {
        .handle_interrupt = aqr_handle_interrupt,
        .read_status    = aqr_read_status,
 },
+{
+       PHY_ID_MATCH_MODEL(PHY_ID_AQR113C),
+       .name           = "Aquantia AQR113C",
+       .probe          = aqr107_probe,
+       .config_init    = aqr107_config_init,
+       .config_aneg    = aqr_config_aneg,
+       .config_intr    = aqr_config_intr,
+       .handle_interrupt       = aqr_handle_interrupt,
+       .read_status    = aqr107_read_status,
+       .get_tunable    = aqr107_get_tunable,
+       .set_tunable    = aqr107_set_tunable,
+       .suspend        = aqr107_suspend,
+       .resume         = aqr107_resume,
+       .get_sset_count = aqr107_get_sset_count,
+       .get_strings    = aqr107_get_strings,
+       .get_stats      = aqr107_get_stats,
+       .link_change_notify = aqr107_link_change_notify,
+},
 };
 
 module_phy_driver(aqr_driver);
@@ -696,6 +728,7 @@ static struct mdio_device_id __maybe_unused aqr_tbl[] = {
        { PHY_ID_MATCH_MODEL(PHY_ID_AQR107) },
        { PHY_ID_MATCH_MODEL(PHY_ID_AQCS109) },
        { PHY_ID_MATCH_MODEL(PHY_ID_AQR405) },
+       { PHY_ID_MATCH_MODEL(PHY_ID_AQR113C) },
        { }
 };
 
index 6a467e7..59fe356 100644
@@ -2072,6 +2072,8 @@ static struct phy_driver at803x_driver[] = {
        /* ATHEROS AR9331 */
        PHY_ID_MATCH_EXACT(ATH9331_PHY_ID),
        .name                   = "Qualcomm Atheros AR9331 built-in PHY",
+       .probe                  = at803x_probe,
+       .remove                 = at803x_remove,
        .suspend                = at803x_suspend,
        .resume                 = at803x_resume,
        .flags                  = PHY_POLL_CABLE_TEST,
@@ -2087,6 +2089,8 @@ static struct phy_driver at803x_driver[] = {
        /* Qualcomm Atheros QCA9561 */
        PHY_ID_MATCH_EXACT(QCA9561_PHY_ID),
        .name                   = "Qualcomm Atheros QCA9561 built-in PHY",
+       .probe                  = at803x_probe,
+       .remove                 = at803x_remove,
        .suspend                = at803x_suspend,
        .resume                 = at803x_resume,
        .flags                  = PHY_POLL_CABLE_TEST,
@@ -2151,6 +2155,8 @@ static struct phy_driver at803x_driver[] = {
        PHY_ID_MATCH_EXACT(QCA8081_PHY_ID),
        .name                   = "Qualcomm QCA8081",
        .flags                  = PHY_POLL_CABLE_TEST,
+       .probe                  = at803x_probe,
+       .remove                 = at803x_remove,
        .config_intr            = at803x_config_intr,
        .handle_interrupt       = at803x_handle_interrupt,
        .get_tunable            = at803x_get_tunable,
index 4578963..0f1e617 100644
@@ -88,8 +88,10 @@ static void asix_ax88772a_link_change_notify(struct phy_device *phydev)
        /* Reset PHY, otherwise MII_LPA will provide outdated information.
         * This issue is reproducible only with some link partner PHYs
         */
-       if (phydev->state == PHY_NOLINK && phydev->drv->soft_reset)
-               phydev->drv->soft_reset(phydev);
+       if (phydev->state == PHY_NOLINK) {
+               phy_init_hw(phydev);
+               phy_start_aneg(phydev);
+       }
 }
 
 static struct phy_driver asix_driver[] = {
index c3842f8..9902fb1 100644
@@ -87,4 +87,23 @@ int bcm_phy_cable_test_start_rdb(struct phy_device *phydev);
 int bcm_phy_cable_test_start(struct phy_device *phydev);
 int bcm_phy_cable_test_get_status(struct phy_device *phydev, bool *finished);
 
+#if IS_ENABLED(CONFIG_BCM_NET_PHYPTP)
+struct bcm_ptp_private *bcm_ptp_probe(struct phy_device *phydev);
+void bcm_ptp_config_init(struct phy_device *phydev);
+void bcm_ptp_stop(struct bcm_ptp_private *priv);
+#else
+static inline struct bcm_ptp_private *bcm_ptp_probe(struct phy_device *phydev)
+{
+       return NULL;
+}
+
+static inline void bcm_ptp_config_init(struct phy_device *phydev)
+{
+}
+
+static inline void bcm_ptp_stop(struct bcm_ptp_private *priv)
+{
+}
+#endif
+
 #endif /* _LINUX_BCM_PHY_LIB_H */
diff --git a/drivers/net/phy/bcm-phy-ptp.c b/drivers/net/phy/bcm-phy-ptp.c
new file mode 100644
index 0000000..ef00d61
--- /dev/null
+++ b/drivers/net/phy/bcm-phy-ptp.c
@@ -0,0 +1,944 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Meta Platforms Inc.
+ * Copyright (C) 2022 Jonathan Lemon <jonathan.lemon@gmail.com>
+ */
+
+#include <asm/unaligned.h>
+#include <linux/mii.h>
+#include <linux/phy.h>
+#include <linux/ptp_classify.h>
+#include <linux/ptp_clock_kernel.h>
+#include <linux/net_tstamp.h>
+#include <linux/netdevice.h>
+#include <linux/workqueue.h>
+
+#include "bcm-phy-lib.h"
+
+/* IEEE 1588 Expansion registers */
+#define SLICE_CTRL             0x0810
+#define  SLICE_TX_EN                   BIT(0)
+#define  SLICE_RX_EN                   BIT(8)
+#define TX_EVENT_MODE          0x0811
+#define  MODE_TX_UPDATE_CF             BIT(0)
+#define  MODE_TX_REPLACE_TS_CF         BIT(1)
+#define  MODE_TX_REPLACE_TS            GENMASK(1, 0)
+#define RX_EVENT_MODE          0x0819
+#define  MODE_RX_UPDATE_CF             BIT(0)
+#define  MODE_RX_INSERT_TS_48          BIT(1)
+#define  MODE_RX_INSERT_TS_64          GENMASK(1, 0)
+
+#define MODE_EVT_SHIFT_SYNC            0
+#define MODE_EVT_SHIFT_DELAY_REQ       2
+#define MODE_EVT_SHIFT_PDELAY_REQ      4
+#define MODE_EVT_SHIFT_PDELAY_RESP     6
+
+#define MODE_SEL_SHIFT_PORT            0
+#define MODE_SEL_SHIFT_CPU             8
+
+#define RX_MODE_SEL(sel, evt, act) \
+       (((MODE_RX_##act) << (MODE_EVT_SHIFT_##evt)) << (MODE_SEL_SHIFT_##sel))
+
+#define TX_MODE_SEL(sel, evt, act) \
+       (((MODE_TX_##act) << (MODE_EVT_SHIFT_##evt)) << (MODE_SEL_SHIFT_##sel))
+
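
For illustration, here is what those selector macros evaluate to, as a
standalone C program; GENMASK() is a simplified stand-in for the kernel macro,
and the constants are copied from the RX defines above purely for the example.
The two-bit action field is shifted by event type and then by the port/CPU
selector; all four event types with 64-bit timestamp insertion on the port
slice OR together to 0x00ff, the word bcm_ptp_hwtstamp() later writes to
RX_EVENT_MODE.

#include <stdio.h>

#define GENMASK(h, l)   (((1U << ((h) - (l) + 1)) - 1) << (l))

#define MODE_RX_INSERT_TS_64    GENMASK(1, 0)

#define MODE_EVT_SHIFT_SYNC            0
#define MODE_EVT_SHIFT_DELAY_REQ       2
#define MODE_EVT_SHIFT_PDELAY_REQ      4
#define MODE_EVT_SHIFT_PDELAY_RESP     6

#define MODE_SEL_SHIFT_PORT            0

#define RX_MODE_SEL(sel, evt, act) \
        (((MODE_RX_##act) << (MODE_EVT_SHIFT_##evt)) << (MODE_SEL_SHIFT_##sel))

int main(void)
{
        unsigned int mode = RX_MODE_SEL(PORT, SYNC, INSERT_TS_64) |
                            RX_MODE_SEL(PORT, DELAY_REQ, INSERT_TS_64) |
                            RX_MODE_SEL(PORT, PDELAY_REQ, INSERT_TS_64) |
                            RX_MODE_SEL(PORT, PDELAY_RESP, INSERT_TS_64);

        /* 0x00ff: two bits per event type, all four events on the port slice */
        printf("RX_EVENT_MODE word: 0x%04x\n", mode);
        return 0;
}
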
+/* needs global TS capture first */
+#define TX_TS_CAPTURE          0x0821
+#define  TX_TS_CAP_EN                  BIT(0)
+#define RX_TS_CAPTURE          0x0822
+#define  RX_TS_CAP_EN                  BIT(0)
+
+#define TIME_CODE_0            0x0854
+#define TIME_CODE_1            0x0855
+#define TIME_CODE_2            0x0856
+#define TIME_CODE_3            0x0857
+#define TIME_CODE_4            0x0858
+
+#define DPLL_SELECT            0x085b
+#define  DPLL_HB_MODE2                 BIT(6)
+
+#define SHADOW_CTRL            0x085c
+#define SHADOW_LOAD            0x085d
+#define  TIME_CODE_LOAD                        BIT(10)
+#define  SYNC_OUT_LOAD                 BIT(9)
+#define  NCO_TIME_LOAD                 BIT(7)
+#define  FREQ_LOAD                     BIT(6)
+#define INTR_MASK              0x085e
+#define INTR_STATUS            0x085f
+#define  INTC_FSYNC                    BIT(0)
+#define  INTC_SOP                      BIT(1)
+
+#define NCO_FREQ_LSB           0x0873
+#define NCO_FREQ_MSB           0x0874
+
+#define NCO_TIME_0             0x0875
+#define NCO_TIME_1             0x0876
+#define NCO_TIME_2_CTRL                0x0877
+#define  FREQ_MDIO_SEL                 BIT(14)
+
+#define SYNC_OUT_0             0x0878
+#define SYNC_OUT_1             0x0879
+#define SYNC_OUT_2             0x087a
+
+#define SYNC_IN_DIVIDER                0x087b
+
+#define SYNOUT_TS_0            0x087c
+#define SYNOUT_TS_1            0x087d
+#define SYNOUT_TS_2            0x087e
+
+#define NSE_CTRL               0x087f
+#define  NSE_GMODE_EN                  GENMASK(15, 14)
+#define  NSE_CAPTURE_EN                        BIT(13)
+#define  NSE_INIT                      BIT(12)
+#define  NSE_CPU_FRAMESYNC             BIT(5)
+#define  NSE_SYNC1_FRAMESYNC           BIT(3)
+#define  NSE_FRAMESYNC_MASK            GENMASK(5, 2)
+#define  NSE_PEROUT_EN                 BIT(1)
+#define  NSE_ONESHOT_EN                        BIT(0)
+#define  NSE_SYNC_OUT_MASK             GENMASK(1, 0)
+
+#define TS_READ_CTRL           0x0885
+#define  TS_READ_START                 BIT(0)
+#define  TS_READ_END                   BIT(1)
+
+#define HB_REG_0               0x0886
+#define HB_REG_1               0x0887
+#define HB_REG_2               0x0888
+#define HB_REG_3               0x08ec
+#define HB_REG_4               0x08ed
+#define HB_STAT_CTRL           0x088e
+#define  HB_READ_START                 BIT(10)
+#define  HB_READ_END                   BIT(11)
+#define  HB_READ_MASK                  GENMASK(11, 10)
+
+#define TS_REG_0               0x0889
+#define TS_REG_1               0x088a
+#define TS_REG_2               0x088b
+#define TS_REG_3               0x08c4
+
+#define TS_INFO_0              0x088c
+#define TS_INFO_1              0x088d
+
+#define TIMECODE_CTRL          0x08c3
+#define  TX_TIMECODE_SEL               GENMASK(7, 0)
+#define  RX_TIMECODE_SEL               GENMASK(15, 8)
+
+#define TIME_SYNC              0x0ff5
+#define  TIME_SYNC_EN                  BIT(0)
+
+struct bcm_ptp_private {
+       struct phy_device *phydev;
+       struct mii_timestamper mii_ts;
+       struct ptp_clock *ptp_clock;
+       struct ptp_clock_info ptp_info;
+       struct ptp_pin_desc pin;
+       struct mutex mutex;
+       struct sk_buff_head tx_queue;
+       int tx_type;
+       bool hwts_rx;
+       u16 nse_ctrl;
+       bool pin_active;
+       struct delayed_work pin_work;
+};
+
+struct bcm_ptp_skb_cb {
+       unsigned long timeout;
+       u16 seq_id;
+       u8 msgtype;
+       bool discard;
+};
+
+struct bcm_ptp_capture {
+       ktime_t hwtstamp;
+       u16 seq_id;
+       u8 msgtype;
+       bool tx_dir;
+};
+
+#define BCM_SKB_CB(skb)                ((struct bcm_ptp_skb_cb *)(skb)->cb)
+#define SKB_TS_TIMEOUT         10                      /* jiffies */
+
+#define BCM_MAX_PULSE_8NS      ((1U << 9) - 1)
+#define BCM_MAX_PERIOD_8NS     ((1U << 30) - 1)
+
+#define BRCM_PHY_MODEL(phydev) \
+       ((phydev)->drv->phy_id & (phydev)->drv->phy_id_mask)
+
+static struct bcm_ptp_private *mii2priv(struct mii_timestamper *mii_ts)
+{
+       return container_of(mii_ts, struct bcm_ptp_private, mii_ts);
+}
+
+static struct bcm_ptp_private *ptp2priv(struct ptp_clock_info *info)
+{
+       return container_of(info, struct bcm_ptp_private, ptp_info);
+}
+
+static void bcm_ptp_get_framesync_ts(struct phy_device *phydev,
+                                    struct timespec64 *ts)
+{
+       u16 hb[4];
+
+       bcm_phy_write_exp(phydev, HB_STAT_CTRL, HB_READ_START);
+
+       hb[0] = bcm_phy_read_exp(phydev, HB_REG_0);
+       hb[1] = bcm_phy_read_exp(phydev, HB_REG_1);
+       hb[2] = bcm_phy_read_exp(phydev, HB_REG_2);
+       hb[3] = bcm_phy_read_exp(phydev, HB_REG_3);
+
+       bcm_phy_write_exp(phydev, HB_STAT_CTRL, HB_READ_END);
+       bcm_phy_write_exp(phydev, HB_STAT_CTRL, 0);
+
+       ts->tv_sec = (hb[3] << 16) | hb[2];
+       ts->tv_nsec = (hb[1] << 16) | hb[0];
+}
+
+static u16 bcm_ptp_framesync_disable(struct phy_device *phydev, u16 orig_ctrl)
+{
+       u16 ctrl = orig_ctrl & ~(NSE_FRAMESYNC_MASK | NSE_CAPTURE_EN);
+
+       bcm_phy_write_exp(phydev, NSE_CTRL, ctrl);
+
+       return ctrl;
+}
+
+static void bcm_ptp_framesync_restore(struct phy_device *phydev, u16 orig_ctrl)
+{
+       if (orig_ctrl & NSE_FRAMESYNC_MASK)
+               bcm_phy_write_exp(phydev, NSE_CTRL, orig_ctrl);
+}
+
+static void bcm_ptp_framesync(struct phy_device *phydev, u16 ctrl)
+{
+       /* trigger framesync - must have 0->1 transition. */
+       bcm_phy_write_exp(phydev, NSE_CTRL, ctrl | NSE_CPU_FRAMESYNC);
+}
+
+static int bcm_ptp_framesync_ts(struct phy_device *phydev,
+                               struct ptp_system_timestamp *sts,
+                               struct timespec64 *ts,
+                               u16 orig_ctrl)
+{
+       u16 ctrl, reg;
+       int i;
+
+       ctrl = bcm_ptp_framesync_disable(phydev, orig_ctrl);
+
+       ptp_read_system_prets(sts);
+
+       /* trigger framesync + capture */
+       bcm_ptp_framesync(phydev, ctrl | NSE_CAPTURE_EN);
+
+       ptp_read_system_postts(sts);
+
+       /* poll for FSYNC interrupt from TS capture */
+       for (i = 0; i < 10; i++) {
+               reg = bcm_phy_read_exp(phydev, INTR_STATUS);
+               if (reg & INTC_FSYNC) {
+                       bcm_ptp_get_framesync_ts(phydev, ts);
+                       break;
+               }
+       }
+
+       bcm_ptp_framesync_restore(phydev, orig_ctrl);
+
+       return reg & INTC_FSYNC ? 0 : -ETIMEDOUT;
+}
+
+static int bcm_ptp_gettimex(struct ptp_clock_info *info,
+                           struct timespec64 *ts,
+                           struct ptp_system_timestamp *sts)
+{
+       struct bcm_ptp_private *priv = ptp2priv(info);
+       int err;
+
+       mutex_lock(&priv->mutex);
+       err = bcm_ptp_framesync_ts(priv->phydev, sts, ts, priv->nse_ctrl);
+       mutex_unlock(&priv->mutex);
+
+       return err;
+}
+
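
Once the clock is registered, this gettimex64 handler is what ends up
servicing clock_gettime() on the PHC character device. A minimal userspace
sketch, assuming the clock is exposed as /dev/ptp0 (the index to use is
whatever ethtool -T reports for the interface); FD_TO_CLOCKID is the standard
dynamic posix-clock encoding used by the kernel's testptp tool.

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define CLOCKFD                 3
#define FD_TO_CLOCKID(fd)       ((~(clockid_t)(fd) << 3) | CLOCKFD)

int main(void)
{
        struct timespec ts;
        int fd;

        /* device node is an example; pick the PHC your NIC reports */
        fd = open("/dev/ptp0", O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* ends up in the driver's .gettimex64 (bcm_ptp_gettimex here) */
        if (clock_gettime(FD_TO_CLOCKID(fd), &ts)) {
                perror("clock_gettime");
                close(fd);
                return 1;
        }

        printf("PHC time: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        close(fd);
        return 0;
}
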
+static int bcm_ptp_settime_locked(struct bcm_ptp_private *priv,
+                                 const struct timespec64 *ts)
+{
+       struct phy_device *phydev = priv->phydev;
+       u16 ctrl;
+       u64 ns;
+
+       ctrl = bcm_ptp_framesync_disable(phydev, priv->nse_ctrl);
+
+       /* set up time code */
+       bcm_phy_write_exp(phydev, TIME_CODE_0, ts->tv_nsec);
+       bcm_phy_write_exp(phydev, TIME_CODE_1, ts->tv_nsec >> 16);
+       bcm_phy_write_exp(phydev, TIME_CODE_2, ts->tv_sec);
+       bcm_phy_write_exp(phydev, TIME_CODE_3, ts->tv_sec >> 16);
+       bcm_phy_write_exp(phydev, TIME_CODE_4, ts->tv_sec >> 32);
+
+       /* set NCO counter to match */
+       ns = timespec64_to_ns(ts);
+       bcm_phy_write_exp(phydev, NCO_TIME_0, ns >> 4);
+       bcm_phy_write_exp(phydev, NCO_TIME_1, ns >> 20);
+       bcm_phy_write_exp(phydev, NCO_TIME_2_CTRL, (ns >> 36) & 0xfff);
+
+       /* set up load on next frame sync (auto-clears due to NSE_INIT) */
+       bcm_phy_write_exp(phydev, SHADOW_LOAD, TIME_CODE_LOAD | NCO_TIME_LOAD);
+
+       /* must have NSE_INIT in order to write time code */
+       bcm_ptp_framesync(phydev, ctrl | NSE_INIT);
+
+       bcm_ptp_framesync_restore(phydev, priv->nse_ctrl);
+
+       return 0;
+}
+
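
A standalone sketch of the register slicing above: the 48-bit seconds plus
32-bit nanoseconds are spread across five 16-bit TIME_CODE registers, while
the NCO counter takes the same instant as nanoseconds in 16 ns units (hence
the >> 4). The sample epoch value is arbitrary; the total in nanoseconds must
stay within the 64-bit ktime range.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        int64_t tv_sec = 1700000000LL;  /* fits in 48 bits of seconds */
        long tv_nsec = 999999999L;
        uint16_t tc[5];
        uint64_t ns;

        /* same slicing as bcm_ptp_settime_locked() */
        tc[0] = (uint16_t)tv_nsec;
        tc[1] = (uint16_t)(tv_nsec >> 16);
        tc[2] = (uint16_t)tv_sec;
        tc[3] = (uint16_t)(tv_sec >> 16);
        tc[4] = (uint16_t)(tv_sec >> 32);

        /* NCO counter counts in 16ns units: drop the low 4 bits */
        ns = (uint64_t)tv_sec * 1000000000ULL + (uint64_t)tv_nsec;

        printf("TIME_CODE: %04x %04x %04x %04x %04x\n",
               tc[0], tc[1], tc[2], tc[3], tc[4]);
        printf("NCO: lo=%04x mid=%04x hi=%03x\n",
               (uint16_t)(ns >> 4), (uint16_t)(ns >> 20),
               (unsigned int)((ns >> 36) & 0xfff));
        return 0;
}
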
+static int bcm_ptp_settime(struct ptp_clock_info *info,
+                          const struct timespec64 *ts)
+{
+       struct bcm_ptp_private *priv = ptp2priv(info);
+       int err;
+
+       mutex_lock(&priv->mutex);
+       err = bcm_ptp_settime_locked(priv, ts);
+       mutex_unlock(&priv->mutex);
+
+       return err;
+}
+
+static int bcm_ptp_adjtime_locked(struct bcm_ptp_private *priv,
+                                 s64 delta_ns)
+{
+       struct timespec64 ts;
+       int err;
+       s64 ns;
+
+       err = bcm_ptp_framesync_ts(priv->phydev, NULL, &ts, priv->nse_ctrl);
+       if (!err) {
+               ns = timespec64_to_ns(&ts) + delta_ns;
+               ts = ns_to_timespec64(ns);
+               err = bcm_ptp_settime_locked(priv, &ts);
+       }
+       return err;
+}
+
+static int bcm_ptp_adjtime(struct ptp_clock_info *info, s64 delta_ns)
+{
+       struct bcm_ptp_private *priv = ptp2priv(info);
+       int err;
+
+       mutex_lock(&priv->mutex);
+       err = bcm_ptp_adjtime_locked(priv, delta_ns);
+       mutex_unlock(&priv->mutex);
+
+       return err;
+}
+
+/* A 125MHz clock should adjust 8ns per pulse.
+ * The frequency adjustment base is 0x8000 0000, or 8*2^28.
+ *
+ * Frequency adjustment is
+ * adj = scaled_ppm * 8*2^28 / (10^6 * 2^16)
+ *   which simplifies to:
+ * adj = scaled_ppm * 2^9 / 5^6
+ */
+static int bcm_ptp_adjfine(struct ptp_clock_info *info, long scaled_ppm)
+{
+       struct bcm_ptp_private *priv = ptp2priv(info);
+       int neg_adj = 0;
+       u32 diff, freq;
+       u16 ctrl;
+       u64 adj;
+
+       if (scaled_ppm < 0) {
+               neg_adj = 1;
+               scaled_ppm = -scaled_ppm;
+       }
+
+       adj = scaled_ppm << 9;
+       diff = div_u64(adj, 15625);
+       freq = (8 << 28) + (neg_adj ? -diff : diff);
+
+       mutex_lock(&priv->mutex);
+
+       ctrl = bcm_ptp_framesync_disable(priv->phydev, priv->nse_ctrl);
+
+       bcm_phy_write_exp(priv->phydev, NCO_FREQ_LSB, freq);
+       bcm_phy_write_exp(priv->phydev, NCO_FREQ_MSB, freq >> 16);
+
+       bcm_phy_write_exp(priv->phydev, NCO_TIME_2_CTRL, FREQ_MDIO_SEL);
+
+       /* load on next framesync */
+       bcm_phy_write_exp(priv->phydev, SHADOW_LOAD, FREQ_LOAD);
+
+       bcm_ptp_framesync(priv->phydev, ctrl);
+
+       /* clear load */
+       bcm_phy_write_exp(priv->phydev, SHADOW_LOAD, 0);
+
+       bcm_ptp_framesync_restore(priv->phydev, priv->nse_ctrl);
+
+       mutex_unlock(&priv->mutex);
+
+       return 0;
+}
+
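
The derivation in the comment above can be checked standalone. A sketch with
stdint types in place of the kernel helpers (div_u64 becomes a plain 64-bit
division): +1 ppm, i.e. scaled_ppm = 65536, moves the 0x80000000 base by
2147, which is indeed about 1e-6 * 2^31.

#include <stdint.h>
#include <stdio.h>

/* From the derivation: adj = scaled_ppm * 2^9 / 5^6, with 5^6 = 15625 */
static uint32_t nco_freq_word(long scaled_ppm)
{
        int neg = scaled_ppm < 0;
        uint64_t adj;
        uint32_t diff;

        if (neg)
                scaled_ppm = -scaled_ppm;

        adj = (uint64_t)scaled_ppm << 9;
        diff = (uint32_t)(adj / 15625);

        return (8U << 28) + (neg ? -diff : diff);
}

int main(void)
{
        /* +1 ppm and -1 ppm around the 0x80000000 base */
        printf("0x%08x\n", nco_freq_word(65536));       /* 0x80000863 */
        printf("0x%08x\n", nco_freq_word(-65536));      /* 0x7ffff79d */
        return 0;
}
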
+static bool bcm_ptp_rxtstamp(struct mii_timestamper *mii_ts,
+                            struct sk_buff *skb, int type)
+{
+       struct bcm_ptp_private *priv = mii2priv(mii_ts);
+       struct skb_shared_hwtstamps *hwts;
+       struct ptp_header *header;
+       u32 sec, nsec;
+       u8 *data;
+       int off;
+
+       if (!priv->hwts_rx)
+               return false;
+
+       header = ptp_parse_header(skb, type);
+       if (!header)
+               return false;
+
+       data = (u8 *)(header + 1);
+       sec = get_unaligned_be32(data);
+       nsec = get_unaligned_be32(data + 4);
+
+       hwts = skb_hwtstamps(skb);
+       hwts->hwtstamp = ktime_set(sec, nsec);
+
+       off = data - skb->data + 8;
+       if (off < skb->len) {
+               memmove(data, data + 8, skb->len - off);
+               __pskb_trim(skb, skb->len - 8);
+       }
+
+       return false;
+}
+
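
What the RX path above does, shown on a flat buffer: with
MODE_RX_INSERT_TS_64 the PHY places big-endian seconds and nanoseconds right
behind the PTP header, and the driver lifts them into the hwtstamp and strips
the 8 bytes. A self-contained sketch with a made-up 4-byte stand-in header
(real PTP headers are 34 bytes; the offsets here are illustrative only):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t get_be32(const uint8_t *p)
{
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8) | p[3];
}

int main(void)
{
        /* stand-in header, then inserted sec/nsec, then 2 payload bytes */
        uint8_t pkt[] = { 0xde, 0xad, 0xbe, 0xef,
                          0x00, 0x00, 0x00, 0x2a,       /* sec = 42 */
                          0x3b, 0x9a, 0xc9, 0xff,       /* nsec = 999999999 */
                          0x11, 0x22 };
        size_t len = sizeof(pkt);
        uint8_t *data = pkt + 4;        /* first byte after the header */

        uint32_t sec = get_be32(data);
        uint32_t nsec = get_be32(data + 4);

        /* strip the 8 inserted bytes, like the memmove()/trim above */
        memmove(data, data + 8, len - (size_t)(data - pkt) - 8);
        len -= 8;

        printf("hwtstamp = %u.%09u, new len = %zu\n", sec, nsec, len);
        return 0;
}
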
+static bool bcm_ptp_get_tstamp(struct bcm_ptp_private *priv,
+                              struct bcm_ptp_capture *capts)
+{
+       struct phy_device *phydev = priv->phydev;
+       u16 ts[4], reg;
+       u32 sec, nsec;
+
+       mutex_lock(&priv->mutex);
+
+       reg = bcm_phy_read_exp(phydev, INTR_STATUS);
+       if ((reg & INTC_SOP) == 0) {
+               mutex_unlock(&priv->mutex);
+               return false;
+       }
+
+       bcm_phy_write_exp(phydev, TS_READ_CTRL, TS_READ_START);
+
+       ts[0] = bcm_phy_read_exp(phydev, TS_REG_0);
+       ts[1] = bcm_phy_read_exp(phydev, TS_REG_1);
+       ts[2] = bcm_phy_read_exp(phydev, TS_REG_2);
+       ts[3] = bcm_phy_read_exp(phydev, TS_REG_3);
+
+       /* not in be32 format for some reason */
+       capts->seq_id = bcm_phy_read_exp(priv->phydev, TS_INFO_0);
+
+       reg = bcm_phy_read_exp(phydev, TS_INFO_1);
+       capts->msgtype = reg >> 12;
+       capts->tx_dir = !!(reg & BIT(11));
+
+       bcm_phy_write_exp(phydev, TS_READ_CTRL, TS_READ_END);
+       bcm_phy_write_exp(phydev, TS_READ_CTRL, 0);
+
+       mutex_unlock(&priv->mutex);
+
+       sec = (ts[3] << 16) | ts[2];
+       nsec = (ts[1] << 16) | ts[0];
+       capts->hwtstamp = ktime_set(sec, nsec);
+
+       return true;
+}
+
+static void bcm_ptp_match_tstamp(struct bcm_ptp_private *priv,
+                                struct bcm_ptp_capture *capts)
+{
+       struct skb_shared_hwtstamps hwts;
+       struct sk_buff *skb, *ts_skb;
+       unsigned long flags;
+       bool first = false;
+
+       ts_skb = NULL;
+       spin_lock_irqsave(&priv->tx_queue.lock, flags);
+       skb_queue_walk(&priv->tx_queue, skb) {
+               if (BCM_SKB_CB(skb)->seq_id == capts->seq_id &&
+                   BCM_SKB_CB(skb)->msgtype == capts->msgtype) {
+                       first = skb_queue_is_first(&priv->tx_queue, skb);
+                       __skb_unlink(skb, &priv->tx_queue);
+                       ts_skb = skb;
+                       break;
+               }
+       }
+       spin_unlock_irqrestore(&priv->tx_queue.lock, flags);
+
+       /* TX captures one-step packets, discard them if needed. */
+       if (ts_skb) {
+               if (BCM_SKB_CB(ts_skb)->discard) {
+                       kfree_skb(ts_skb);
+               } else {
+                       memset(&hwts, 0, sizeof(hwts));
+                       hwts.hwtstamp = capts->hwtstamp;
+                       skb_complete_tx_timestamp(ts_skb, &hwts);
+               }
+       }
+
+       /* not the first match: try to expire stale entries */
+       if (!first) {
+               while ((skb = skb_dequeue(&priv->tx_queue))) {
+                       if (!time_after(jiffies, BCM_SKB_CB(skb)->timeout)) {
+                               skb_queue_head(&priv->tx_queue, skb);
+                               break;
+                       }
+                       kfree_skb(skb);
+               }
+       }
+}
+
+static long bcm_ptp_do_aux_work(struct ptp_clock_info *info)
+{
+       struct bcm_ptp_private *priv = ptp2priv(info);
+       struct bcm_ptp_capture capts;
+       bool reschedule = false;
+
+       while (!skb_queue_empty_lockless(&priv->tx_queue)) {
+               if (!bcm_ptp_get_tstamp(priv, &capts)) {
+                       reschedule = true;
+                       break;
+               }
+               bcm_ptp_match_tstamp(priv, &capts);
+       }
+
+       return reschedule ? 1 : -1;
+}
+
+static int bcm_ptp_cancel_func(struct bcm_ptp_private *priv)
+{
+       if (!priv->pin_active)
+               return 0;
+
+       priv->pin_active = false;
+
+       priv->nse_ctrl &= ~(NSE_SYNC_OUT_MASK | NSE_SYNC1_FRAMESYNC |
+                           NSE_CAPTURE_EN);
+       bcm_phy_write_exp(priv->phydev, NSE_CTRL, priv->nse_ctrl);
+
+       cancel_delayed_work_sync(&priv->pin_work);
+
+       return 0;
+}
+
+static void bcm_ptp_perout_work(struct work_struct *pin_work)
+{
+       struct bcm_ptp_private *priv =
+               container_of(pin_work, struct bcm_ptp_private, pin_work.work);
+       struct phy_device *phydev = priv->phydev;
+       struct timespec64 ts;
+       u64 ns, next;
+       u16 ctrl;
+
+       mutex_lock(&priv->mutex);
+
+       /* no longer running */
+       if (!priv->pin_active) {
+               mutex_unlock(&priv->mutex);
+               return;
+       }
+
+       bcm_ptp_framesync_ts(phydev, NULL, &ts, priv->nse_ctrl);
+
+       /* this is 1PPS only */
+       next = NSEC_PER_SEC - ts.tv_nsec;
+       ts.tv_sec += next < NSEC_PER_MSEC ? 2 : 1;
+       ts.tv_nsec = 0;
+
+       ns = timespec64_to_ns(&ts);
+
+       /* force 0->1 transition for ONESHOT */
+       ctrl = bcm_ptp_framesync_disable(phydev,
+                                        priv->nse_ctrl & ~NSE_ONESHOT_EN);
+
+       bcm_phy_write_exp(phydev, SYNOUT_TS_0, ns & 0xfff0);
+       bcm_phy_write_exp(phydev, SYNOUT_TS_1, ns >> 16);
+       bcm_phy_write_exp(phydev, SYNOUT_TS_2, ns >> 32);
+
+       /* load values on next framesync */
+       bcm_phy_write_exp(phydev, SHADOW_LOAD, SYNC_OUT_LOAD);
+
+       bcm_ptp_framesync(phydev, ctrl | NSE_ONESHOT_EN | NSE_INIT);
+
+       priv->nse_ctrl |= NSE_ONESHOT_EN;
+       bcm_ptp_framesync_restore(phydev, priv->nse_ctrl);
+
+       mutex_unlock(&priv->mutex);
+
+       next = next + NSEC_PER_MSEC;
+       schedule_delayed_work(&priv->pin_work, nsecs_to_jiffies(next));
+}
+
+static int bcm_ptp_perout_locked(struct bcm_ptp_private *priv,
+                                struct ptp_perout_request *req, int on)
+{
+       struct phy_device *phydev = priv->phydev;
+       u64 period, pulse;
+       u16 val;
+
+       if (!on)
+               return bcm_ptp_cancel_func(priv);
+
+       /* 1PPS */
+       if (req->period.sec != 1 || req->period.nsec != 0)
+               return -EINVAL;
+
+       period = BCM_MAX_PERIOD_8NS;    /* write nonzero value */
+
+       if (req->flags & PTP_PEROUT_PHASE)
+               return -EOPNOTSUPP;
+
+       if (req->flags & PTP_PEROUT_DUTY_CYCLE)
+               pulse = ktime_to_ns(ktime_set(req->on.sec, req->on.nsec));
+       else
+               pulse = (u64)BCM_MAX_PULSE_8NS << 3;
+
+       /* convert to 8ns units */
+       pulse >>= 3;
+
+       if (!pulse || pulse > period || pulse > BCM_MAX_PULSE_8NS)
+               return -EINVAL;
+
+       bcm_phy_write_exp(phydev, SYNC_OUT_0, period);
+
+       val = ((pulse & 0x3) << 14) | ((period >> 16) & 0x3fff);
+       bcm_phy_write_exp(phydev, SYNC_OUT_1, val);
+
+       val = ((pulse >> 2) & 0x7f) | (pulse << 7);
+       bcm_phy_write_exp(phydev, SYNC_OUT_2, val);
+
+       if (priv->pin_active)
+               cancel_delayed_work_sync(&priv->pin_work);
+
+       priv->pin_active = true;
+       INIT_DELAYED_WORK(&priv->pin_work, bcm_ptp_perout_work);
+       schedule_delayed_work(&priv->pin_work, 0);
+
+       return 0;
+}
+
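
A sketch of the SYNC_OUT packing above, under the same assumptions as the
code: pulse and period are in 8 ns units, the period spans SYNC_OUT_0 plus
the low 14 bits of SYNC_OUT_1, and the 9-bit pulse width is split across
SYNC_OUT_1/2. The 1024 ns example pulse is arbitrary; the period is the
nonzero placeholder the driver writes, since 1PPS timing is actually driven
by the framesync worker.

#include <stdint.h>
#include <stdio.h>

#define MAX_PULSE_8NS   ((1U << 9) - 1)
#define MAX_PERIOD_8NS  ((1U << 30) - 1)

int main(void)
{
        uint64_t period = MAX_PERIOD_8NS;       /* nonzero placeholder */
        uint64_t pulse = 1024 >> 3;             /* 1024ns -> 128 units */
        uint16_t out0, out1, out2;

        if (!pulse || pulse > period || pulse > MAX_PULSE_8NS)
                return 1;

        /* same bit layout as bcm_ptp_perout_locked() */
        out0 = (uint16_t)period;
        out1 = (uint16_t)(((pulse & 0x3) << 14) | ((period >> 16) & 0x3fff));
        out2 = (uint16_t)(((pulse >> 2) & 0x7f) | (pulse << 7));

        /* 0xffff 0x3fff 0x4020 for these inputs */
        printf("SYNC_OUT_0..2: 0x%04x 0x%04x 0x%04x\n", out0, out1, out2);
        return 0;
}
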
+static void bcm_ptp_extts_work(struct work_struct *pin_work)
+{
+       struct bcm_ptp_private *priv =
+               container_of(pin_work, struct bcm_ptp_private, pin_work.work);
+       struct phy_device *phydev = priv->phydev;
+       struct ptp_clock_event event;
+       struct timespec64 ts;
+       u16 reg;
+
+       mutex_lock(&priv->mutex);
+
+       /* no longer running */
+       if (!priv->pin_active) {
+               mutex_unlock(&priv->mutex);
+               return;
+       }
+
+       reg = bcm_phy_read_exp(phydev, INTR_STATUS);
+       if ((reg & INTC_FSYNC) == 0)
+               goto out;
+
+       bcm_ptp_get_framesync_ts(phydev, &ts);
+
+       event.index = 0;
+       event.type = PTP_CLOCK_EXTTS;
+       event.timestamp = timespec64_to_ns(&ts);
+       ptp_clock_event(priv->ptp_clock, &event);
+
+out:
+       mutex_unlock(&priv->mutex);
+       schedule_delayed_work(&priv->pin_work, HZ / 4);
+}
+
+static int bcm_ptp_extts_locked(struct bcm_ptp_private *priv, int on)
+{
+       struct phy_device *phydev = priv->phydev;
+
+       if (!on)
+               return bcm_ptp_cancel_func(priv);
+
+       if (priv->pin_active)
+               cancel_delayed_work_sync(&priv->pin_work);
+
+       bcm_ptp_framesync_disable(phydev, priv->nse_ctrl);
+
+       priv->nse_ctrl |= NSE_SYNC1_FRAMESYNC | NSE_CAPTURE_EN;
+
+       bcm_ptp_framesync_restore(phydev, priv->nse_ctrl);
+
+       priv->pin_active = true;
+       INIT_DELAYED_WORK(&priv->pin_work, bcm_ptp_extts_work);
+       schedule_delayed_work(&priv->pin_work, 0);
+
+       return 0;
+}
+
+static int bcm_ptp_enable(struct ptp_clock_info *info,
+                         struct ptp_clock_request *rq, int on)
+{
+       struct bcm_ptp_private *priv = ptp2priv(info);
+       int err = -EBUSY;
+
+       mutex_lock(&priv->mutex);
+
+       switch (rq->type) {
+       case PTP_CLK_REQ_PEROUT:
+               if (priv->pin.func == PTP_PF_PEROUT)
+                       err = bcm_ptp_perout_locked(priv, &rq->perout, on);
+               break;
+       case PTP_CLK_REQ_EXTTS:
+               if (priv->pin.func == PTP_PF_EXTTS)
+                       err = bcm_ptp_extts_locked(priv, on);
+               break;
+       default:
+               err = -EOPNOTSUPP;
+               break;
+       }
+
+       mutex_unlock(&priv->mutex);
+
+       return err;
+}
+
+static int bcm_ptp_verify(struct ptp_clock_info *info, unsigned int pin,
+                         enum ptp_pin_function func, unsigned int chan)
+{
+       switch (func) {
+       case PTP_PF_NONE:
+       case PTP_PF_EXTTS:
+       case PTP_PF_PEROUT:
+               break;
+       default:
+               return -EOPNOTSUPP;
+       }
+       return 0;
+}
+
+static const struct ptp_clock_info bcm_ptp_clock_info = {
+       .owner          = THIS_MODULE,
+       .name           = KBUILD_MODNAME,
+       .max_adj        = 100000000,
+       .gettimex64     = bcm_ptp_gettimex,
+       .settime64      = bcm_ptp_settime,
+       .adjtime        = bcm_ptp_adjtime,
+       .adjfine        = bcm_ptp_adjfine,
+       .enable         = bcm_ptp_enable,
+       .verify         = bcm_ptp_verify,
+       .do_aux_work    = bcm_ptp_do_aux_work,
+       .n_pins         = 1,
+       .n_per_out      = 1,
+       .n_ext_ts       = 1,
+};
+
+static void bcm_ptp_txtstamp(struct mii_timestamper *mii_ts,
+                            struct sk_buff *skb, int type)
+{
+       struct bcm_ptp_private *priv = mii2priv(mii_ts);
+       struct ptp_header *hdr;
+       bool discard = false;
+       int msgtype;
+
+       hdr = ptp_parse_header(skb, type);
+       if (!hdr)
+               goto out;
+       msgtype = ptp_get_msgtype(hdr, type);
+
+       switch (priv->tx_type) {
+       case HWTSTAMP_TX_ONESTEP_P2P:
+               if (msgtype == PTP_MSGTYPE_PDELAY_RESP)
+                       discard = true;
+               fallthrough;
+       case HWTSTAMP_TX_ONESTEP_SYNC:
+               if (msgtype == PTP_MSGTYPE_SYNC)
+                       discard = true;
+               fallthrough;
+       case HWTSTAMP_TX_ON:
+               BCM_SKB_CB(skb)->timeout = jiffies + SKB_TS_TIMEOUT;
+               BCM_SKB_CB(skb)->seq_id = be16_to_cpu(hdr->sequence_id);
+               BCM_SKB_CB(skb)->msgtype = msgtype;
+               BCM_SKB_CB(skb)->discard = discard;
+               skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+               skb_queue_tail(&priv->tx_queue, skb);
+               ptp_schedule_worker(priv->ptp_clock, 0);
+               return;
+       default:
+               break;
+       }
+
+out:
+       kfree_skb(skb);
+}
+
+static int bcm_ptp_hwtstamp(struct mii_timestamper *mii_ts,
+                           struct ifreq *ifr)
+{
+       struct bcm_ptp_private *priv = mii2priv(mii_ts);
+       struct hwtstamp_config cfg;
+       u16 mode, ctrl;
+
+       if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
+               return -EFAULT;
+
+       switch (cfg.rx_filter) {
+       case HWTSTAMP_FILTER_NONE:
+               priv->hwts_rx = false;
+               break;
+       case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+       case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+       case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+       case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+       case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+       case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+       case HWTSTAMP_FILTER_PTP_V2_EVENT:
+       case HWTSTAMP_FILTER_PTP_V2_SYNC:
+       case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+               cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+               priv->hwts_rx = true;
+               break;
+       default:
+               return -ERANGE;
+       }
+
+       priv->tx_type = cfg.tx_type;
+
+       ctrl  = priv->hwts_rx ? SLICE_RX_EN : 0;
+       ctrl |= priv->tx_type != HWTSTAMP_TX_OFF ? SLICE_TX_EN : 0;
+
+       mode = TX_MODE_SEL(PORT, SYNC, REPLACE_TS) |
+              TX_MODE_SEL(PORT, DELAY_REQ, REPLACE_TS) |
+              TX_MODE_SEL(PORT, PDELAY_REQ, REPLACE_TS) |
+              TX_MODE_SEL(PORT, PDELAY_RESP, REPLACE_TS);
+
+       bcm_phy_write_exp(priv->phydev, TX_EVENT_MODE, mode);
+
+       mode = RX_MODE_SEL(PORT, SYNC, INSERT_TS_64) |
+              RX_MODE_SEL(PORT, DELAY_REQ, INSERT_TS_64) |
+              RX_MODE_SEL(PORT, PDELAY_REQ, INSERT_TS_64) |
+              RX_MODE_SEL(PORT, PDELAY_RESP, INSERT_TS_64);
+
+       bcm_phy_write_exp(priv->phydev, RX_EVENT_MODE, mode);
+
+       bcm_phy_write_exp(priv->phydev, SLICE_CTRL, ctrl);
+
+       if (ctrl & SLICE_TX_EN)
+               bcm_phy_write_exp(priv->phydev, TX_TS_CAPTURE, TX_TS_CAP_EN);
+       else
+               ptp_cancel_worker_sync(priv->ptp_clock);
+
+       /* purge existing data */
+       skb_queue_purge(&priv->tx_queue);
+
+       return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
+}
+
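
This handler services the SIOCSHWTSTAMP ioctl from userspace, once the
request has been routed to the PHY's mii_ts->hwtstamp. A minimal sketch of
the corresponding request; the interface name eth0 is a placeholder:

#include <linux/net_tstamp.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct hwtstamp_config cfg = {
                .tx_type   = HWTSTAMP_TX_ON,
                .rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT,
        };
        struct ifreq ifr;
        int fd;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);    /* placeholder */
        ifr.ifr_data = (char *)&cfg;

        if (ioctl(fd, SIOCSHWTSTAMP, &ifr)) {
                perror("SIOCSHWTSTAMP");
                close(fd);
                return 1;
        }

        /* the driver may adjust rx_filter to what it actually enabled */
        printf("tx_type=%d rx_filter=%d\n", cfg.tx_type, cfg.rx_filter);
        close(fd);
        return 0;
}
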
+static int bcm_ptp_ts_info(struct mii_timestamper *mii_ts,
+                          struct ethtool_ts_info *ts_info)
+{
+       struct bcm_ptp_private *priv = mii2priv(mii_ts);
+
+       ts_info->phc_index = ptp_clock_index(priv->ptp_clock);
+       ts_info->so_timestamping =
+               SOF_TIMESTAMPING_TX_HARDWARE |
+               SOF_TIMESTAMPING_RX_HARDWARE |
+               SOF_TIMESTAMPING_RAW_HARDWARE;
+       ts_info->tx_types =
+               BIT(HWTSTAMP_TX_ON) |
+               BIT(HWTSTAMP_TX_OFF) |
+               BIT(HWTSTAMP_TX_ONESTEP_SYNC) |
+               BIT(HWTSTAMP_TX_ONESTEP_P2P);
+       ts_info->rx_filters =
+               BIT(HWTSTAMP_FILTER_NONE) |
+               BIT(HWTSTAMP_FILTER_PTP_V2_EVENT);
+
+       return 0;
+}
+
+void bcm_ptp_stop(struct bcm_ptp_private *priv)
+{
+       ptp_cancel_worker_sync(priv->ptp_clock);
+       bcm_ptp_cancel_func(priv);
+}
+EXPORT_SYMBOL_GPL(bcm_ptp_stop);
+
+void bcm_ptp_config_init(struct phy_device *phydev)
+{
+       /* init network sync engine */
+       bcm_phy_write_exp(phydev, NSE_CTRL, NSE_GMODE_EN | NSE_INIT);
+
+       /* enable time sync (TX/RX SOP capture) */
+       bcm_phy_write_exp(phydev, TIME_SYNC, TIME_SYNC_EN);
+
+       /* use sec.nsec heartbeat capture */
+       bcm_phy_write_exp(phydev, DPLL_SELECT, DPLL_HB_MODE2);
+
+       /* use 64 bit timecode for TX */
+       bcm_phy_write_exp(phydev, TIMECODE_CTRL, TX_TIMECODE_SEL);
+
+       /* always allow FREQ_LOAD on framesync */
+       bcm_phy_write_exp(phydev, SHADOW_CTRL, FREQ_LOAD);
+
+       bcm_phy_write_exp(phydev, SYNC_IN_DIVIDER, 1);
+}
+EXPORT_SYMBOL_GPL(bcm_ptp_config_init);
+
+static void bcm_ptp_init(struct bcm_ptp_private *priv)
+{
+       priv->nse_ctrl = NSE_GMODE_EN;
+
+       mutex_init(&priv->mutex);
+       skb_queue_head_init(&priv->tx_queue);
+
+       priv->mii_ts.rxtstamp = bcm_ptp_rxtstamp;
+       priv->mii_ts.txtstamp = bcm_ptp_txtstamp;
+       priv->mii_ts.hwtstamp = bcm_ptp_hwtstamp;
+       priv->mii_ts.ts_info = bcm_ptp_ts_info;
+
+       priv->phydev->mii_ts = &priv->mii_ts;
+}
+
+struct bcm_ptp_private *bcm_ptp_probe(struct phy_device *phydev)
+{
+       struct bcm_ptp_private *priv;
+       struct ptp_clock *clock;
+
+       switch (BRCM_PHY_MODEL(phydev)) {
+       case PHY_ID_BCM54210E:
+               break;
+       default:
+               return NULL;
+       }
+
+       priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
+       if (!priv)
+               return ERR_PTR(-ENOMEM);
+
+       priv->ptp_info = bcm_ptp_clock_info;
+
+       snprintf(priv->pin.name, sizeof(priv->pin.name), "SYNC_OUT");
+       priv->ptp_info.pin_config = &priv->pin;
+
+       clock = ptp_clock_register(&priv->ptp_info, &phydev->mdio.dev);
+       if (IS_ERR(clock))
+               return ERR_CAST(clock);
+       priv->ptp_clock = clock;
+
+       priv->phydev = phydev;
+       bcm_ptp_init(priv);
+
+       return priv;
+}
+EXPORT_SYMBOL_GPL(bcm_ptp_probe);
+
+MODULE_LICENSE("GPL");
index e36809a..31fbcdd 100644
@@ -27,6 +27,11 @@ MODULE_DESCRIPTION("Broadcom PHY driver");
 MODULE_AUTHOR("Maciej W. Rozycki");
 MODULE_LICENSE("GPL");
 
+struct bcm54xx_phy_priv {
+       u64     *stats;
+       struct bcm_ptp_private *ptp;
+};
+
 static int bcm54xx_config_clock_delay(struct phy_device *phydev)
 {
        int rc, val;
@@ -313,6 +318,22 @@ static void bcm54xx_adjust_rxrefclk(struct phy_device *phydev)
                bcm_phy_write_shadow(phydev, BCM54XX_SHD_APD, val);
 }
 
+static void bcm54xx_ptp_stop(struct phy_device *phydev)
+{
+       struct bcm54xx_phy_priv *priv = phydev->priv;
+
+       if (priv->ptp)
+               bcm_ptp_stop(priv->ptp);
+}
+
+static void bcm54xx_ptp_config_init(struct phy_device *phydev)
+{
+       struct bcm54xx_phy_priv *priv = phydev->priv;
+
+       if (priv->ptp)
+               bcm_ptp_config_init(phydev);
+}
+
 static int bcm54xx_config_init(struct phy_device *phydev)
 {
        int reg, err, val;
@@ -390,6 +411,8 @@ static int bcm54xx_config_init(struct phy_device *phydev)
                bcm_phy_write_exp(phydev, BCM_EXP_MULTICOLOR, val);
        }
 
+       bcm54xx_ptp_config_init(phydev);
+
        return 0;
 }
 
@@ -418,6 +441,8 @@ static int bcm54xx_suspend(struct phy_device *phydev)
 {
        int ret;
 
+       bcm54xx_ptp_stop(phydev);
+
        /* We cannot use a read/modify/write here otherwise the PHY gets into
         * a bad state where its LEDs keep flashing, thus defeating the purpose
         * of low power mode.
@@ -741,10 +766,6 @@ static irqreturn_t brcm_fet_handle_interrupt(struct phy_device *phydev)
        return IRQ_HANDLED;
 }
 
-struct bcm54xx_phy_priv {
-       u64     *stats;
-};
-
 static int bcm54xx_phy_probe(struct phy_device *phydev)
 {
        struct bcm54xx_phy_priv *priv;
@@ -761,6 +782,10 @@ static int bcm54xx_phy_probe(struct phy_device *phydev)
        if (!priv->stats)
                return -ENOMEM;
 
+       priv->ptp = bcm_ptp_probe(phydev);
+       if (IS_ERR(priv->ptp))
+               return PTR_ERR(priv->ptp);
+
        return 0;
 }
 
@@ -1041,6 +1066,20 @@ static struct phy_driver broadcom_drivers[] = {
        .config_intr    = bcm_phy_config_intr,
        .handle_interrupt = bcm_phy_handle_interrupt,
        .link_change_notify     = bcm54xx_link_change_notify,
+}, {
+       .phy_id         = PHY_ID_BCM53128,
+       .phy_id_mask    = 0xfffffff0,
+       .name           = "Broadcom BCM53128",
+       .flags          = PHY_IS_INTERNAL,
+       /* PHY_GBIT_FEATURES */
+       .get_sset_count = bcm_phy_get_sset_count,
+       .get_strings    = bcm_phy_get_strings,
+       .get_stats      = bcm54xx_get_stats,
+       .probe          = bcm54xx_phy_probe,
+       .config_init    = bcm54xx_config_init,
+       .config_intr    = bcm_phy_config_intr,
+       .handle_interrupt = bcm_phy_handle_interrupt,
+       .link_change_notify     = bcm54xx_link_change_notify,
 }, {
        .phy_id         = PHY_ID_BCM89610,
        .phy_id_mask    = 0xfffffff0,
@@ -1077,6 +1116,7 @@ static struct mdio_device_id __maybe_unused broadcom_tbl[] = {
        { PHY_ID_BCM5241, 0xfffffff0 },
        { PHY_ID_BCM5395, 0xfffffff0 },
        { PHY_ID_BCM53125, 0xfffffff0 },
+       { PHY_ID_BCM53128, 0xfffffff0 },
        { PHY_ID_BCM89610, 0xfffffff0 },
        { }
 };
index e6ad3a4..8549e0e 100644
@@ -229,9 +229,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
                if (misr_status < 0)
                        return misr_status;
 
-               misr_status |= (DP83822_RX_ERR_HF_INT_EN |
-                               DP83822_FALSE_CARRIER_HF_INT_EN |
-                               DP83822_LINK_STAT_INT_EN |
+               misr_status |= (DP83822_LINK_STAT_INT_EN |
                                DP83822_ENERGY_DET_INT_EN |
                                DP83822_LINK_QUAL_INT_EN);
 
index 1ae792b..3cd9a77 100644
 #define DP83TD510E_AN_STAT_1                   0x60c
 #define DP83TD510E_MASTER_SLAVE_RESOL_FAIL     BIT(15)
 
+#define DP83TD510E_MSE_DETECT                  0xa85
+
+#define DP83TD510_SQI_MAX      7
+
+/* Register values are converted to SNR(dB) as suggested by
+ * "Application Report - DP83TD510E Cable Diagnostics Toolkit":
+ * SNR(dB) = -10 * log10 (VAL/2^17) - 1.76 dB.
+ * SQI ranges are implemented according to "OPEN ALLIANCE - Advanced diagnostic
+ * features for 100BASE-T1 automotive Ethernet PHYs"
+ */
+static const u16 dp83td510_mse_sqi_map[] = {
+       0x0569, /* < 18dB */
+       0x044c, /* 18dB <= SNR < 19dB */
+       0x0369, /* 19dB <= SNR < 20dB */
+       0x02b6, /* 20dB <= SNR < 21dB */
+       0x0227, /* 21dB <= SNR < 22dB */
+       0x01b6, /* 22dB <= SNR < 23dB */
+       0x015b, /* 23dB <= SNR < 24dB */
+       0x0000  /* 24dB <= SNR */
+};
+
 static int dp83td510_config_intr(struct phy_device *phydev)
 {
        int ret;
@@ -164,6 +185,32 @@ static int dp83td510_config_aneg(struct phy_device *phydev)
        return genphy_c45_check_and_restart_aneg(phydev, changed);
 }
 
+static int dp83td510_get_sqi(struct phy_device *phydev)
+{
+       int sqi, ret;
+       u16 mse_val;
+
+       if (!phydev->link)
+               return 0;
+
+       ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, DP83TD510E_MSE_DETECT);
+       if (ret < 0)
+               return ret;
+
+       mse_val = 0xFFFF & ret;
+       for (sqi = 0; sqi < ARRAY_SIZE(dp83td510_mse_sqi_map); sqi++) {
+               if (mse_val >= dp83td510_mse_sqi_map[sqi])
+                       return sqi;
+       }
+
+       return -EINVAL;
+}
+
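
The lookup is a simple first-match scan: MSE falls as SNR rises, so the first
threshold the reading reaches gives the SQI bucket. A standalone sketch with
the same table:

#include <stdio.h>

/* MSE thresholds from the table above, highest (worst SNR) first */
static const unsigned short mse_sqi_map[] = {
        0x0569, 0x044c, 0x0369, 0x02b6, 0x0227, 0x01b6, 0x015b, 0x0000
};

/* First bucket whose threshold the MSE reading reaches is the SQI */
static int mse_to_sqi(unsigned short mse_val)
{
        unsigned int sqi;

        for (sqi = 0; sqi < sizeof(mse_sqi_map) / sizeof(mse_sqi_map[0]); sqi++)
                if (mse_val >= mse_sqi_map[sqi])
                        return (int)sqi;
        return -1;
}

int main(void)
{
        printf("%d\n", mse_to_sqi(0x0600));     /* very noisy link -> SQI 0 */
        printf("%d\n", mse_to_sqi(0x0200));     /* 22-23dB SNR     -> SQI 5 */
        printf("%d\n", mse_to_sqi(0x0000));     /* >= 24dB SNR     -> SQI 7 */
        return 0;
}
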
+static int dp83td510_get_sqi_max(struct phy_device *phydev)
+{
+       return DP83TD510_SQI_MAX;
+}
+
 static int dp83td510_get_features(struct phy_device *phydev)
 {
        /* This PHY can't respond on MDIO bus if no RMII clock is enabled.
@@ -192,6 +239,8 @@ static struct phy_driver dp83td510_driver[] = {
        .get_features   = dp83td510_get_features,
        .config_intr    = dp83td510_config_intr,
        .handle_interrupt = dp83td510_handle_interrupt,
+       .get_sqi        = dp83td510_get_sqi,
+       .get_sqi_max    = dp83td510_get_sqi_max,
 
        .suspend        = genphy_suspend,
        .resume         = genphy_resume,
index 2213990..e78d0bf 100644
 #define PTP_TSU_INT_STS_PTP_RX_TS_OVRFL_INT_   BIT(1)
 #define PTP_TSU_INT_STS_PTP_RX_TS_EN_          BIT(0)
 
+#define LAN8814_LED_CTRL_1                     0x0
+#define LAN8814_LED_CTRL_1_KSZ9031_LED_MODE_   BIT(6)
+
 /* PHY Control 1 */
 #define MII_KSZPHY_CTRL_1                      0x1e
 #define KSZ8081_CTRL1_MDIX_STAT                        BIT(4)
@@ -308,6 +311,10 @@ struct kszphy_priv {
        u64 stats[ARRAY_SIZE(kszphy_hw_stats)];
 };
 
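+/* LAN8814_LED_CTRL_1 is offset 0x0, which would fail the 'type->led_mode_reg'
+ * truthiness check in kszphy_parse_led_mode(); store its complement instead.
+ * lan8814_setup_led() addresses the register directly.
+ */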
+static const struct kszphy_type lan8814_type = {
+       .led_mode_reg           = ~LAN8814_LED_CTRL_1,
+};
+
 static const struct kszphy_type ksz8021_type = {
        .led_mode_reg           = MII_KSZPHY_CTRL_2,
        .has_broadcast_disable  = true,
@@ -1688,6 +1695,30 @@ static int kszphy_suspend(struct phy_device *phydev)
        return genphy_suspend(phydev);
 }
 
+static void kszphy_parse_led_mode(struct phy_device *phydev)
+{
+       const struct kszphy_type *type = phydev->drv->driver_data;
+       const struct device_node *np = phydev->mdio.dev.of_node;
+       struct kszphy_priv *priv = phydev->priv;
+       int ret;
+
+       if (type && type->led_mode_reg) {
+               ret = of_property_read_u32(np, "micrel,led-mode",
+                                          &priv->led_mode);
+
+               if (ret)
+                       priv->led_mode = -1;
+
+               if (priv->led_mode > 3) {
+                       phydev_err(phydev, "invalid led mode: 0x%02x\n",
+                                  priv->led_mode);
+                       priv->led_mode = -1;
+               }
+       } else {
+               priv->led_mode = -1;
+       }
+}
+
 static int kszphy_resume(struct phy_device *phydev)
 {
        int ret;
@@ -1720,7 +1751,6 @@ static int kszphy_probe(struct phy_device *phydev)
        const struct device_node *np = phydev->mdio.dev.of_node;
        struct kszphy_priv *priv;
        struct clk *clk;
-       int ret;
 
        priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
@@ -1730,20 +1760,7 @@ static int kszphy_probe(struct phy_device *phydev)
 
        priv->type = type;
 
-       if (type && type->led_mode_reg) {
-               ret = of_property_read_u32(np, "micrel,led-mode",
-                               &priv->led_mode);
-               if (ret)
-                       priv->led_mode = -1;
-
-               if (priv->led_mode > 3) {
-                       phydev_err(phydev, "invalid led mode: 0x%02x\n",
-                                  priv->led_mode);
-                       priv->led_mode = -1;
-               }
-       } else {
-               priv->led_mode = -1;
-       }
+       kszphy_parse_led_mode(phydev);
 
        clk = devm_clk_get(&phydev->mdio.dev, "rmii-ref");
        /* NOTE: clk may be NULL if building without CONFIG_HAVE_CLK */
@@ -2815,8 +2832,23 @@ static int lan8814_ptp_probe_once(struct phy_device *phydev)
        return 0;
 }
 
+static void lan8814_setup_led(struct phy_device *phydev, int val)
+{
+       int temp;
+
+       temp = lanphy_read_page_reg(phydev, 5, LAN8814_LED_CTRL_1);
+
+       if (val)
+               temp |= LAN8814_LED_CTRL_1_KSZ9031_LED_MODE_;
+       else
+               temp &= ~LAN8814_LED_CTRL_1_KSZ9031_LED_MODE_;
+
+       lanphy_write_page_reg(phydev, 5, LAN8814_LED_CTRL_1, temp);
+}
+
 static int lan8814_config_init(struct phy_device *phydev)
 {
+       struct kszphy_priv *lan8814 = phydev->priv;
        int val;
 
        /* Reset the PHY */
@@ -2835,6 +2867,9 @@ static int lan8814_config_init(struct phy_device *phydev)
        val |= LAN8814_ALIGN_TX_A_B_SWAP;
        lanphy_write_page_reg(phydev, 2, LAN8814_ALIGN_SWAP, val);
 
+       if (lan8814->led_mode >= 0)
+               lan8814_setup_led(phydev, lan8814->led_mode);
+
        return 0;
 }
 
@@ -2855,6 +2890,7 @@ static int lan8814_release_coma_mode(struct phy_device *phydev)
 
 static int lan8814_probe(struct phy_device *phydev)
 {
+       const struct kszphy_type *type = phydev->drv->driver_data;
        struct kszphy_priv *priv;
        u16 addr;
        int err;
@@ -2863,10 +2899,12 @@ static int lan8814_probe(struct phy_device *phydev)
        if (!priv)
                return -ENOMEM;
 
-       priv->led_mode = -1;
-
        phydev->priv = priv;
 
+       priv->type = type;
+
+       kszphy_parse_led_mode(phydev);
+
        /* Strap-in value for PHY address, below register read gives starting
         * phy address value
         */
@@ -3068,6 +3106,7 @@ static struct phy_driver ksphy_driver[] = {
        .phy_id_mask    = MICREL_PHY_ID_MASK,
        .name           = "Microchip INDY Gigabit Quad PHY",
        .config_init    = lan8814_config_init,
+       .driver_data    = &lan8814_type,
        .probe          = lan8814_probe,
        .soft_reset     = genphy_soft_reset,
        .read_status    = ksz9031_read_status,
index 6c4da2f..5b99acf 100644
@@ -8,7 +8,9 @@
 
 #include <linux/module.h>
 #include <linux/bitfield.h>
+#include <linux/hwmon.h>
 #include <linux/phy.h>
+#include <linux/polynomial.h>
 #include <linux/netdevice.h>
 
 /* PHY ID */
 #define VSPEC1_SGMII_ANEN_ANRS (VSPEC1_SGMII_CTRL_ANEN | \
                                 VSPEC1_SGMII_CTRL_ANRS)
 
+/* Temperature sensor */
+#define VPSPEC1_TEMP_STA       0x0E
+#define VPSPEC1_TEMP_STA_DATA  GENMASK(9, 0)
+
 /* WoL */
 #define VPSPEC2_WOL_CTL                0x0E06
 #define VPSPEC2_WOL_AD01       0x0E08
@@ -80,6 +86,102 @@ static const struct {
        {9, 0x73},
 };
 
+#if IS_ENABLED(CONFIG_HWMON)
+/* The original formula translating the sensor readout N to a temperature in
+ * degrees Celsius is as follows:
+ *
+ *   T = -2.5761e-11*(N^4) + 9.7332e-8*(N^3) + -1.9165e-4*(N^2) +
+ *       3.0762e-1*(N^1) + -5.2156e1
+ *
+ * where T = [-52.156, 137.961]C and N = [0, 1023].
+ *
+ * It must be altered accordingly to be suitable for integer arithmetic.
+ * The technique is called 'factor redistribution': the multiplications and
+ * divisions are arranged so that every intermediate result stays within the
+ * integer limits. In addition we need the formula to produce millidegrees
+ * Celsius. Here is what it looks like after the alterations:
+ *
+ *   T = -25761e-12*(N^4) + 97332e-9*(N^3) + -191650e-6*(N^2) +
+ *       307620e-3*(N^1) + -52156
+ *
+ * where T = [-52156, 137961]mC and N = [0, 1023].
+ */
+static const struct polynomial poly_N_to_temp = {
+       .terms = {
+               {4,  -25761, 1000, 1},
+               {3,   97332, 1000, 1},
+               {2, -191650, 1000, 1},
+               {1,  307620, 1000, 1},
+               {0,  -52156,    1, 1}
+       }
+};
+
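
The millidegree polynomial above can be evaluated directly with 64-bit
integers; this standalone sketch stands in for lib/polynomial's
polynomial_calc() (rounding differs slightly). Worst case N = 1023 keeps
coef * N^4 well inside s64, and the endpoints reproduce the stated
[-52156, 137961] mC range up to rounding.

#include <stdint.h>
#include <stdio.h>

/* T(N) in millidegrees C, per the redistributed-factor formula above */
static int64_t n_to_temp_mc(int64_t n)
{
        int64_t n2 = n * n, n3 = n2 * n, n4 = n3 * n;

        return -25761 * n4 / 1000000000000LL +
                97332 * n3 / 1000000000LL +
              -191650 * n2 / 1000000LL +
               307620 * n  / 1000LL +
               -52156;
}

int main(void)
{
        /* endpoints match the stated range; midpoint is a sanity check */
        printf("N=0    -> %lld mC\n", (long long)n_to_temp_mc(0));
        printf("N=512  -> %lld mC\n", (long long)n_to_temp_mc(512));
        printf("N=1023 -> %lld mC\n", (long long)n_to_temp_mc(1023));
        return 0;
}
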
+static int gpy_hwmon_read(struct device *dev,
+                         enum hwmon_sensor_types type,
+                         u32 attr, int channel, long *value)
+{
+       struct phy_device *phydev = dev_get_drvdata(dev);
+       int ret;
+
+       ret = phy_read_mmd(phydev, MDIO_MMD_VEND1, VPSPEC1_TEMP_STA);
+       if (ret < 0)
+               return ret;
+       if (!ret)
+               return -ENODATA;
+
+       *value = polynomial_calc(&poly_N_to_temp,
+                                FIELD_GET(VPSPEC1_TEMP_STA_DATA, ret));
+
+       return 0;
+}
+
+static umode_t gpy_hwmon_is_visible(const void *data,
+                                   enum hwmon_sensor_types type,
+                                   u32 attr, int channel)
+{
+       return 0444;
+}
+
+static const struct hwmon_channel_info *gpy_hwmon_info[] = {
+       HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
+       NULL
+};
+
+static const struct hwmon_ops gpy_hwmon_hwmon_ops = {
+       .is_visible     = gpy_hwmon_is_visible,
+       .read           = gpy_hwmon_read,
+};
+
+static const struct hwmon_chip_info gpy_hwmon_chip_info = {
+       .ops            = &gpy_hwmon_hwmon_ops,
+       .info           = gpy_hwmon_info,
+};
+
+static int gpy_hwmon_register(struct phy_device *phydev)
+{
+       struct device *dev = &phydev->mdio.dev;
+       struct device *hwmon_dev;
+       char *hwmon_name;
+
+       hwmon_name = devm_hwmon_sanitize_name(dev, dev_name(dev));
+       if (IS_ERR(hwmon_name))
+               return PTR_ERR(hwmon_name);
+
+       hwmon_dev = devm_hwmon_device_register_with_info(dev, hwmon_name,
+                                                        phydev,
+                                                        &gpy_hwmon_chip_info,
+                                                        NULL);
+
+       return PTR_ERR_OR_ZERO(hwmon_dev);
+}
+#else
+static int gpy_hwmon_register(struct phy_device *phydev)
+{
+       return 0;
+}
+#endif
+
 static int gpy_config_init(struct phy_device *phydev)
 {
        int ret;
@@ -109,6 +211,10 @@ static int gpy_probe(struct phy_device *phydev)
        if (ret < 0)
                return ret;
 
        phydev_info(phydev, "Firmware Version: 0x%04X (%s)\n", ret,
                    (ret & PHY_FWV_REL_MASK) ? "release" : "test");

+       /* Register hwmon only after the version print: gpy_hwmon_register()
+        * reuses 'ret' and would otherwise clobber the printed version.
+        */
+       ret = gpy_hwmon_register(phydev);
+       if (ret)
+               return ret;
 
index 9944cc5..2a8195c 100644
@@ -444,15 +444,10 @@ static int tja11xx_hwmon_register(struct phy_device *phydev,
                                  struct tja11xx_priv *priv)
 {
        struct device *dev = &phydev->mdio.dev;
-       int i;
-
-       priv->hwmon_name = devm_kstrdup(dev, dev_name(dev), GFP_KERNEL);
-       if (!priv->hwmon_name)
-               return -ENOMEM;
 
-       for (i = 0; priv->hwmon_name[i]; i++)
-               if (hwmon_is_bad_char(priv->hwmon_name[i]))
-                       priv->hwmon_name[i] = '_';
+       priv->hwmon_name = devm_hwmon_sanitize_name(dev, dev_name(dev));
+       if (IS_ERR(priv->hwmon_name))
+               return PTR_ERR(priv->hwmon_name);
 
        priv->hwmon_dev =
                devm_hwmon_device_register_with_info(dev, priv->hwmon_name,
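
devm_hwmon_sanitize_name() factors out exactly the loop being deleted here. A
plain-C sketch of the sanitizing rule, assuming hwmon's usual bad-character
set ('-', '*', space, tab, newline) and a hypothetical dev_name() string:

#include <stdio.h>

/* hwmon device names must not contain these characters */
static int is_bad_char(char ch)
{
        return ch == '-' || ch == '*' || ch == ' ' ||
               ch == '\t' || ch == '\n';
}

static void sanitize(char *name)
{
        for (; *name; name++)
                if (is_bad_char(*name))
                        *name = '_';
}

int main(void)
{
        char name[] = "tja11xx-mdio:01";        /* hypothetical dev_name() */

        sanitize(name);
        printf("%s\n", name);                   /* tja11xx_mdio:01 */
        return 0;
}
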
index ef62f35..8d3ee3a 100644
@@ -31,6 +31,7 @@
 #include <linux/io.h>
 #include <linux/uaccess.h>
 #include <linux/atomic.h>
+#include <linux/suspend.h>
 #include <net/netlink.h>
 #include <net/genetlink.h>
 #include <net/sock.h>
@@ -976,6 +977,28 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
        struct phy_driver *drv = phydev->drv;
        irqreturn_t ret;
 
+       /* Wakeup interrupts may occur during a system sleep transition.
+        * Postpone handling until the PHY has resumed.
+        */
+       if (IS_ENABLED(CONFIG_PM_SLEEP) && phydev->irq_suspended) {
+               struct net_device *netdev = phydev->attached_dev;
+
+               if (netdev) {
+                       struct device *parent = netdev->dev.parent;
+
+                       if (netdev->wol_enabled)
+                               pm_system_wakeup();
+                       else if (device_may_wakeup(&netdev->dev))
+                               pm_wakeup_dev_event(&netdev->dev, 0, true);
+                       else if (parent && device_may_wakeup(parent))
+                               pm_wakeup_dev_event(parent, 0, true);
+               }
+
+               phydev->irq_rerun = 1;
+               disable_irq_nosync(irq);
+               return IRQ_HANDLED;
+       }
+
        mutex_lock(&phydev->lock);
        ret = drv->handle_interrupt(phydev);
        mutex_unlock(&phydev->lock);
index 7885bce..a74b320 100644
@@ -278,6 +278,15 @@ static __maybe_unused int mdio_bus_phy_suspend(struct device *dev)
        if (phydev->mac_managed_pm)
                return 0;
 
+       /* Wakeup interrupts may occur during the system sleep transition when
+        * the PHY is inaccessible. Set flag to postpone handling until the PHY
+        * has resumed. Wait for concurrent interrupt handler to complete.
+        */
+       if (phy_interrupt_is_valid(phydev)) {
+               phydev->irq_suspended = 1;
+               synchronize_irq(phydev->irq);
+       }
+
        /* We must stop the state machine manually, otherwise it stops out of
         * control, possibly with the phydev->lock held. Upon resume, netdev
         * may call phy routines that try to grab the same lock, and that may
@@ -315,6 +324,20 @@ static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
        if (ret < 0)
                return ret;
 no_resume:
+       if (phy_interrupt_is_valid(phydev)) {
+               phydev->irq_suspended = 0;
+               synchronize_irq(phydev->irq);
+
+               /* Rerun interrupts which were postponed by phy_interrupt()
+                * because they occurred during the system sleep transition.
+                */
+               if (phydev->irq_rerun) {
+                       phydev->irq_rerun = 0;
+                       enable_irq(phydev->irq);
+                       irq_wake_thread(phydev->irq, phydev);
+               }
+       }
+
        if (phydev->attached_dev && phydev->adjust_link)
                phy_start_machine(phydev);
 
index 066684b..9bd6932 100644
@@ -43,7 +43,6 @@ struct phylink {
        /* private: */
        struct net_device *netdev;
        const struct phylink_mac_ops *mac_ops;
-       const struct phylink_pcs_ops *pcs_ops;
        struct phylink_config *config;
        struct phylink_pcs *pcs;
        struct device *dev;
@@ -759,6 +758,18 @@ static void phylink_resolve_flow(struct phylink_link_state *state)
        }
 }
 
+static void phylink_pcs_poll_stop(struct phylink *pl)
+{
+       if (pl->cfg_link_an_mode == MLO_AN_INBAND)
+               del_timer(&pl->link_poll);
+}
+
+static void phylink_pcs_poll_start(struct phylink *pl)
+{
+       if (pl->pcs && pl->pcs->poll && pl->cfg_link_an_mode == MLO_AN_INBAND)
+               mod_timer(&pl->link_poll, jiffies + HZ);
+}
+
 static void phylink_mac_config(struct phylink *pl,
                               const struct phylink_link_state *state)
 {
@@ -779,8 +790,8 @@ static void phylink_mac_pcs_an_restart(struct phylink *pl)
        if (pl->link_config.an_enabled &&
            phy_interface_mode_is_8023z(pl->link_config.interface) &&
            phylink_autoneg_inband(pl->cur_link_an_mode)) {
-               if (pl->pcs_ops)
-                       pl->pcs_ops->pcs_an_restart(pl->pcs);
+               if (pl->pcs)
+                       pl->pcs->ops->pcs_an_restart(pl->pcs);
                else if (pl->config->legacy_pre_march2020)
                        pl->mac_ops->mac_an_restart(pl->config);
        }
@@ -790,6 +801,7 @@ static void phylink_major_config(struct phylink *pl, bool restart,
                                  const struct phylink_link_state *state)
 {
        struct phylink_pcs *pcs = NULL;
+       bool pcs_changed = false;
        int err;
 
        phylink_dbg(pl, "major config %s\n", phy_modes(state->interface));
@@ -802,8 +814,12 @@ static void phylink_major_config(struct phylink *pl, bool restart,
                                    pcs);
                        return;
                }
+
+               pcs_changed = pcs && pl->pcs != pcs;
        }
 
+       phylink_pcs_poll_stop(pl);
+
        if (pl->mac_ops->mac_prepare) {
                err = pl->mac_ops->mac_prepare(pl->config, pl->cur_link_an_mode,
                                               state->interface);
@@ -817,27 +833,17 @@ static void phylink_major_config(struct phylink *pl, bool restart,
        /* If we have a new PCS, switch to the new PCS after preparing the MAC
         * for the change.
         */
-       if (pcs) {
+       if (pcs_changed)
                pl->pcs = pcs;
-               pl->pcs_ops = pcs->ops;
-
-               if (!pl->phylink_disable_state &&
-                   pl->cfg_link_an_mode == MLO_AN_INBAND) {
-                       if (pcs->poll)
-                               mod_timer(&pl->link_poll, jiffies + HZ);
-                       else
-                               del_timer(&pl->link_poll);
-               }
-       }
 
        phylink_mac_config(pl, state);
 
-       if (pl->pcs_ops) {
-               err = pl->pcs_ops->pcs_config(pl->pcs, pl->cur_link_an_mode,
-                                             state->interface,
-                                             state->advertising,
-                                             !!(pl->link_config.pause &
-                                                MLO_PAUSE_AN));
+       if (pl->pcs) {
+               err = pl->pcs->ops->pcs_config(pl->pcs, pl->cur_link_an_mode,
+                                              state->interface,
+                                              state->advertising,
+                                              !!(pl->link_config.pause &
+                                                 MLO_PAUSE_AN));
                if (err < 0)
                        phylink_err(pl, "pcs_config failed: %pe\n",
                                    ERR_PTR(err));
@@ -854,6 +860,8 @@ static void phylink_major_config(struct phylink *pl, bool restart,
                        phylink_err(pl, "mac_finish failed: %pe\n",
                                    ERR_PTR(err));
        }
+
+       phylink_pcs_poll_start(pl);
 }
 
 /*
@@ -869,7 +877,7 @@ static int phylink_change_inband_advert(struct phylink *pl)
        if (test_bit(PHYLINK_DISABLE_STOPPED, &pl->phylink_disable_state))
                return 0;
 
-       if (!pl->pcs_ops && pl->config->legacy_pre_march2020) {
+       if (!pl->pcs && pl->config->legacy_pre_march2020) {
                /* Legacy method */
                phylink_mac_config(pl, &pl->link_config);
                phylink_mac_pcs_an_restart(pl);
@@ -886,10 +894,11 @@ static int phylink_change_inband_advert(struct phylink *pl)
         * restart negotiation if the pcs_config() helper indicates that
         * the programmed advertisement has changed.
         */
-       ret = pl->pcs_ops->pcs_config(pl->pcs, pl->cur_link_an_mode,
-                                     pl->link_config.interface,
-                                     pl->link_config.advertising,
-                                     !!(pl->link_config.pause & MLO_PAUSE_AN));
+       ret = pl->pcs->ops->pcs_config(pl->pcs, pl->cur_link_an_mode,
+                                      pl->link_config.interface,
+                                      pl->link_config.advertising,
+                                      !!(pl->link_config.pause &
+                                         MLO_PAUSE_AN));
        if (ret < 0)
                return ret;
 
@@ -918,8 +927,8 @@ static void phylink_mac_pcs_get_state(struct phylink *pl,
        state->an_complete = 0;
        state->link = 1;
 
-       if (pl->pcs_ops)
-               pl->pcs_ops->pcs_get_state(pl->pcs, state);
+       if (pl->pcs)
+               pl->pcs->ops->pcs_get_state(pl->pcs, state);
        else if (pl->mac_ops->mac_pcs_get_state &&
                 pl->config->legacy_pre_march2020)
                pl->mac_ops->mac_pcs_get_state(pl->config, state);
@@ -992,8 +1001,8 @@ static void phylink_link_up(struct phylink *pl,
 
        pl->cur_interface = link_state.interface;
 
-       if (pl->pcs_ops && pl->pcs_ops->pcs_link_up)
-               pl->pcs_ops->pcs_link_up(pl->pcs, pl->cur_link_an_mode,
+       if (pl->pcs && pl->pcs->ops->pcs_link_up)
+               pl->pcs->ops->pcs_link_up(pl->pcs, pl->cur_link_an_mode,
                                         pl->cur_interface,
                                         link_state.speed, link_state.duplex);
 
@@ -1115,7 +1124,7 @@ static void phylink_resolve(struct work_struct *w)
                        }
                        phylink_major_config(pl, false, &link_state);
                        pl->link_config.interface = link_state.interface;
-               } else if (!pl->pcs_ops && pl->config->legacy_pre_march2020) {
+               } else if (!pl->pcs && pl->config->legacy_pre_march2020) {
                        /* The interface remains unchanged, only the speed,
                         * duplex or pause settings have changed. Call the
                         * old mac_config() method to configure the MAC/PCS
@@ -2991,6 +3000,7 @@ int phylink_mii_c22_pcs_encode_advertisement(phy_interface_t interface,
                        adv |= ADVERTISE_1000XPSE_ASYM;
                return adv;
        case PHY_INTERFACE_MODE_SGMII:
+       case PHY_INTERFACE_MODE_QSGMII:
                return 0x0001;
        default:
                /* Nothing to do for other modes */
@@ -3030,7 +3040,9 @@ int phylink_mii_c22_pcs_config(struct mdio_device *pcs, unsigned int mode,
 
        /* Ensure ISOLATE bit is disabled */
        if (mode == MLO_AN_INBAND &&
-           linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, advertising))
+           (interface == PHY_INTERFACE_MODE_SGMII ||
+            interface == PHY_INTERFACE_MODE_QSGMII ||
+            linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, advertising)))
                bmcr = BMCR_ANENABLE;
        else
                bmcr = 0;
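The hunk above hard-enables BMCR_ANENABLE for SGMII and QSGMII in in-band mode, since (Q)SGMII always carries in-band autoneg between PCS and PHY, while 802.3z modes still honour the advertised Autoneg bit. A hedged sketch of that decision as a standalone predicate (hypothetical helper name):

    static bool sketch_c22_pcs_an_enabled(unsigned int mode,
                                          phy_interface_t interface,
                                          const unsigned long *advertising)
    {
            if (mode != MLO_AN_INBAND)
                    return false;
            /* (Q)SGMII always autonegotiates in band */
            if (interface == PHY_INTERFACE_MODE_SGMII ||
                interface == PHY_INTERFACE_MODE_QSGMII)
                    return true;
            /* 802.3z and friends: follow the advertisement */
            return linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
                                     advertising);
    }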
index 9a5d5a1..63f90fe 100644 (file)
@@ -1290,7 +1290,7 @@ static const struct hwmon_chip_info sfp_hwmon_chip_info = {
 static void sfp_hwmon_probe(struct work_struct *work)
 {
        struct sfp *sfp = container_of(work, struct sfp, hwmon_probe.work);
-       int err, i;
+       int err;
 
        /* hwmon interface needs to access 16bit registers in atomic way to
         * guarantee coherency of the diagnostic monitoring data. If it is not
@@ -1318,16 +1318,12 @@ static void sfp_hwmon_probe(struct work_struct *work)
                return;
        }
 
-       sfp->hwmon_name = kstrdup(dev_name(sfp->dev), GFP_KERNEL);
-       if (!sfp->hwmon_name) {
+       sfp->hwmon_name = hwmon_sanitize_name(dev_name(sfp->dev));
+       if (IS_ERR(sfp->hwmon_name)) {
                dev_err(sfp->dev, "out of memory for hwmon name\n");
                return;
        }
 
-       for (i = 0; sfp->hwmon_name[i]; i++)
-               if (hwmon_is_bad_char(sfp->hwmon_name[i]))
-                       sfp->hwmon_name[i] = '_';
-
        sfp->hwmon_dev = hwmon_device_register_with_info(sfp->dev,
                                                         sfp->hwmon_name, sfp,
                                                         &sfp_hwmon_chip_info,
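The sfp hunk above replaces the open-coded kstrdup() plus bad-character loop with hwmon_sanitize_name(), which fails with an ERR_PTR instead of NULL. Roughly what the helper does for this caller, sketched from the removed code (not the hwmon implementation itself):

    static char *sketch_sanitize_name(const char *name)
    {
            char *n = kstrdup(name, GFP_KERNEL);
            int i;

            if (!n)
                    return ERR_PTR(-ENOMEM);
            for (i = 0; n[i]; i++)
                    if (hwmon_is_bad_char(n[i]))
                            n[i] = '_';     /* e.g. '-' or ' ' -> '_' */
            return n;
    }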
@@ -2516,7 +2512,7 @@ static int sfp_probe(struct platform_device *pdev)
 
        platform_set_drvdata(pdev, sfp);
 
-       err = devm_add_action(sfp->dev, sfp_cleanup, sfp);
+       err = devm_add_action_or_reset(sfp->dev, sfp_cleanup, sfp);
        if (err < 0)
                return err;
 
index 1b54684..69423b8 100644 (file)
@@ -110,7 +110,7 @@ static int smsc_phy_config_init(struct phy_device *phydev)
        struct smsc_phy_priv *priv = phydev->priv;
        int rc;
 
-       if (!priv->energy_enable)
+       if (!priv->energy_enable || phydev->irq != PHY_POLL)
                return 0;
 
        rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
@@ -121,10 +121,7 @@ static int smsc_phy_config_init(struct phy_device *phydev)
        /* Enable energy detect mode for this SMSC Transceivers */
        rc = phy_write(phydev, MII_LAN83C185_CTRL_STATUS,
                       rc | MII_LAN83C185_EDPWRDOWN);
-       if (rc < 0)
-               return rc;
-
-       return smsc_phy_ack_interrupt(phydev);
+       return rc;
 }
 
 static int smsc_phy_reset(struct phy_device *phydev)
@@ -146,11 +143,6 @@ static int smsc_phy_reset(struct phy_device *phydev)
        return genphy_soft_reset(phydev);
 }
 
-static int lan911x_config_init(struct phy_device *phydev)
-{
-       return smsc_phy_ack_interrupt(phydev);
-}
-
 static int lan87xx_config_aneg(struct phy_device *phydev)
 {
        int rc;
@@ -210,6 +202,8 @@ static int lan95xx_config_aneg_ext(struct phy_device *phydev)
  * response on link pulses to detect presence of plugged Ethernet cable.
  * The Energy Detect Power-Down mode is enabled again in the end of procedure to
  * save approximately 220 mW of power if cable is unplugged.
+ * The workaround is only applicable to poll mode. Energy Detect Power-Down may
+ * not be used in interrupt mode, lest link change detection become unreliable.
  */
 static int lan87xx_read_status(struct phy_device *phydev)
 {
@@ -217,7 +211,7 @@ static int lan87xx_read_status(struct phy_device *phydev)
 
        int err = genphy_read_status(phydev);
 
-       if (!phydev->link && priv->energy_enable) {
+       if (!phydev->link && priv->energy_enable && phydev->irq == PHY_POLL) {
                /* Disable EDPD to wake up PHY */
                int rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
                if (rc < 0)
@@ -418,9 +412,6 @@ static struct phy_driver smsc_phy_driver[] = {
 
        .probe          = smsc_phy_probe,
 
-       /* basic functions */
-       .config_init    = lan911x_config_init,
-
        /* IRQ related */
        .config_intr    = smsc_phy_config_intr,
        .handle_interrupt = smsc_phy_handle_interrupt,
index 87a635a..259b2b8 100644 (file)
@@ -273,6 +273,12 @@ static void tun_napi_init(struct tun_struct *tun, struct tun_file *tfile,
        }
 }
 
+static void tun_napi_enable(struct tun_file *tfile)
+{
+       if (tfile->napi_enabled)
+               napi_enable(&tfile->napi);
+}
+
 static void tun_napi_disable(struct tun_file *tfile)
 {
        if (tfile->napi_enabled)
@@ -634,7 +640,8 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
        tun = rtnl_dereference(tfile->tun);
 
        if (tun && clean) {
-               tun_napi_disable(tfile);
+               if (!tfile->detached)
+                       tun_napi_disable(tfile);
                tun_napi_del(tfile);
        }
 
@@ -653,8 +660,10 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
                if (clean) {
                        RCU_INIT_POINTER(tfile->tun, NULL);
                        sock_put(&tfile->sk);
-               } else
+               } else {
                        tun_disable_queue(tun, tfile);
+                       tun_napi_disable(tfile);
+               }
 
                synchronize_net();
                tun_flow_delete_by_queue(tun, tun->numqueues + 1);
@@ -727,6 +736,7 @@ static void tun_detach_all(struct net_device *dev)
                sock_put(&tfile->sk);
        }
        list_for_each_entry_safe(tfile, tmp, &tun->disabled, next) {
+               tun_napi_del(tfile);
                tun_enable_queue(tfile);
                tun_queue_purge(tfile);
                xdp_rxq_info_unreg(&tfile->xdp_rxq);
@@ -807,6 +817,7 @@ static int tun_attach(struct tun_struct *tun, struct file *file,
 
        if (tfile->detached) {
                tun_enable_queue(tfile);
+               tun_napi_enable(tfile);
        } else {
                sock_hold(&tfile->sk);
                tun_napi_init(tun, tfile, napi, napi_frags);
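The tun changes above keep NAPI disabled for as long as a queue sits on tun->disabled and re-enable it on re-attach. A hedged sketch of the pairing (hypothetical wrappers, using the helpers added above):

    static void sketch_park_queue(struct tun_struct *tun, struct tun_file *tfile)
    {
            tun_disable_queue(tun, tfile);  /* move onto tun->disabled */
            tun_napi_disable(tfile);        /* NAPI must not run while parked */
    }

    static void sketch_unpark_queue(struct tun_struct *tun, struct tun_file *tfile)
    {
            tun_enable_queue(tfile);
            tun_napi_enable(tfile);         /* no-op unless napi_enabled */
    }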
index 2c81236..21c1ca2 100644 (file)
         AX_MEDIUM_RE)
 
 #define AX88772_MEDIUM_DEFAULT \
-       (AX_MEDIUM_FD | AX_MEDIUM_RFC | \
-        AX_MEDIUM_TFC | AX_MEDIUM_PS | \
+       (AX_MEDIUM_FD | AX_MEDIUM_PS | \
         AX_MEDIUM_AC | AX_MEDIUM_RE)
 
 /* AX88772 & AX88178 RX_CTL values */
@@ -213,9 +212,6 @@ void asix_rx_fixup_common_free(struct asix_common_private *dp);
 struct sk_buff *asix_tx_fixup(struct usbnet *dev, struct sk_buff *skb,
                              gfp_t flags);
 
-int asix_set_sw_mii(struct usbnet *dev, int in_pm);
-int asix_set_hw_mii(struct usbnet *dev, int in_pm);
-
 int asix_read_phy_addr(struct usbnet *dev, bool internal);
 
 int asix_sw_reset(struct usbnet *dev, u8 flags, int in_pm);
index 632fa6c..9ea91c3 100644 (file)
@@ -68,6 +68,27 @@ void asix_write_cmd_async(struct usbnet *dev, u8 cmd, u16 value, u16 index,
                               value, index, data, size);
 }
 
+static int asix_set_sw_mii(struct usbnet *dev, int in_pm)
+{
+       int ret;
+
+       ret = asix_write_cmd(dev, AX_CMD_SET_SW_MII, 0x0000, 0, 0, NULL, in_pm);
+
+       if (ret < 0)
+               netdev_err(dev->net, "Failed to enable software MII access\n");
+       return ret;
+}
+
+static int asix_set_hw_mii(struct usbnet *dev, int in_pm)
+{
+       int ret;
+
+       ret = asix_write_cmd(dev, AX_CMD_SET_HW_MII, 0x0000, 0, 0, NULL, in_pm);
+       if (ret < 0)
+               netdev_err(dev->net, "Failed to enable hardware MII access\n");
+       return ret;
+}
+
 static int asix_check_host_enable(struct usbnet *dev, int in_pm)
 {
        int i, ret;
@@ -297,25 +318,6 @@ struct sk_buff *asix_tx_fixup(struct usbnet *dev, struct sk_buff *skb,
        return skb;
 }
 
-int asix_set_sw_mii(struct usbnet *dev, int in_pm)
-{
-       int ret;
-       ret = asix_write_cmd(dev, AX_CMD_SET_SW_MII, 0x0000, 0, 0, NULL, in_pm);
-
-       if (ret < 0)
-               netdev_err(dev->net, "Failed to enable software MII access\n");
-       return ret;
-}
-
-int asix_set_hw_mii(struct usbnet *dev, int in_pm)
-{
-       int ret;
-       ret = asix_write_cmd(dev, AX_CMD_SET_HW_MII, 0x0000, 0, 0, NULL, in_pm);
-       if (ret < 0)
-               netdev_err(dev->net, "Failed to enable hardware MII access\n");
-       return ret;
-}
-
 int asix_read_phy_addr(struct usbnet *dev, bool internal)
 {
        int ret, offset;
@@ -431,6 +433,7 @@ void asix_adjust_link(struct net_device *netdev)
 
        asix_write_medium_mode(dev, mode, 0);
        phy_print_status(phydev);
+       usbnet_link_change(dev, phydev->link, 0);
 }
 
 int asix_write_gpio(struct usbnet *dev, u16 value, int sleep, int in_pm)
index 4704ed6..ac2d400 100644 (file)
@@ -1472,6 +1472,42 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
         * are bundled into this buffer and where we can find an array of
         * per-packet metadata (which contains elements encoded into u16).
         */
+
+       /* SKB contents for current firmware:
+        *   <packet 1> <padding>
+        *   ...
+        *   <packet N> <padding>
+        *   <per-packet metadata entry 1> <dummy header>
+        *   ...
+        *   <per-packet metadata entry N> <dummy header>
+        *   <padding2> <rx_hdr>
+        *
+        * where:
+        *   <packet N> contains pkt_len bytes:
+        *              2 bytes of IP alignment pseudo header
+        *              packet received
+        *   <per-packet metadata entry N> contains 4 bytes:
+        *              pkt_len and fields AX_RXHDR_*
+        *   <padding>  0-7 bytes to terminate at an
+        *              8-byte boundary (64-bit).
+        *   <padding2> 4 bytes to make rx_hdr terminate at an
+        *              8-byte boundary (64-bit)
+        *   <dummy-header> contains 4 bytes:
+        *              pkt_len=0 and AX_RXHDR_DROP_ERR
+        *   <rx-hdr>   contains 4 bytes:
+        *              pkt_cnt and hdr_off (offset of
+        *                <per-packet metadata entry 1>)
+        *
+        * pkt_cnt is the number of entries in the per-packet metadata.
+        * In current firmware there are two entries per packet:
+        * the first points to the packet and the
+        * second is a dummy header.
+        * This was probably done to align fields to 64 bits and
+        * maintain compatibility with old firmware.
+        * This code assumes that <dummy header> and <padding2> are
+        * optional.
+        */
+
        if (skb->len < 4)
                return 0;
        skb_trim(skb, skb->len - 4);
@@ -1485,51 +1521,66 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
        /* Make sure that the bounds of the metadata array are inside the SKB
         * (and in front of the counter at the end).
         */
-       if (pkt_cnt * 2 + hdr_off > skb->len)
+       if (pkt_cnt * 4 + hdr_off > skb->len)
                return 0;
        pkt_hdr = (u32 *)(skb->data + hdr_off);
 
        /* Packets must not overlap the metadata array */
        skb_trim(skb, hdr_off);
 
-       for (; ; pkt_cnt--, pkt_hdr++) {
+       for (; pkt_cnt > 0; pkt_cnt--, pkt_hdr++) {
+               u16 pkt_len_plus_padd;
                u16 pkt_len;
 
                le32_to_cpus(pkt_hdr);
                pkt_len = (*pkt_hdr >> 16) & 0x1fff;
+               pkt_len_plus_padd = (pkt_len + 7) & 0xfff8;
 
-               if (pkt_len > skb->len)
+               /* Skip dummy header used for alignment */
+               if (pkt_len == 0)
+                       continue;
+
+               if (pkt_len_plus_padd > skb->len)
                        return 0;
 
                /* Check CRC or runt packet */
-               if (((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) == 0) &&
-                   pkt_len >= 2 + ETH_HLEN) {
-                       bool last = (pkt_cnt == 0);
-
-                       if (last) {
-                               ax_skb = skb;
-                       } else {
-                               ax_skb = skb_clone(skb, GFP_ATOMIC);
-                               if (!ax_skb)
-                                       return 0;
-                       }
-                       ax_skb->len = pkt_len;
-                       /* Skip IP alignment pseudo header */
-                       skb_pull(ax_skb, 2);
-                       skb_set_tail_pointer(ax_skb, ax_skb->len);
-                       ax_skb->truesize = pkt_len + sizeof(struct sk_buff);
-                       ax88179_rx_checksum(ax_skb, pkt_hdr);
+               if ((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) ||
+                   pkt_len < 2 + ETH_HLEN) {
+                       dev->net->stats.rx_errors++;
+                       skb_pull(skb, pkt_len_plus_padd);
+                       continue;
+               }
 
-                       if (last)
-                               return 1;
+               /* last packet */
+               if (pkt_len_plus_padd == skb->len) {
+                       skb_trim(skb, pkt_len);
 
-                       usbnet_skb_return(dev, ax_skb);
+                       /* Skip IP alignment pseudo header */
+                       skb_pull(skb, 2);
+
+                       skb->truesize = SKB_TRUESIZE(pkt_len_plus_padd);
+                       ax88179_rx_checksum(skb, pkt_hdr);
+                       return 1;
                }
 
-               /* Trim this packet away from the SKB */
-               if (!skb_pull(skb, (pkt_len + 7) & 0xFFF8))
+               ax_skb = skb_clone(skb, GFP_ATOMIC);
+               if (!ax_skb)
                        return 0;
+               skb_trim(ax_skb, pkt_len);
+
+               /* Skip IP alignment pseudo header */
+               skb_pull(ax_skb, 2);
+
+               skb->truesize = pkt_len_plus_padd +
+                               SKB_DATA_ALIGN(sizeof(struct sk_buff));
+               ax88179_rx_checksum(ax_skb, pkt_hdr);
+               usbnet_skb_return(dev, ax_skb);
+
+               skb_pull(skb, pkt_len_plus_padd);
        }
+
+       return 0;
 }
 
 static struct sk_buff *
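A stand-alone demo of the padded-length arithmetic used in the rewritten loop: (pkt_len + 7) & 0xfff8 rounds the 13-bit length up to the next 8-byte boundary (plain C, illustrative only):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            uint16_t pkt_len = 60;                          /* runt-sized frame */
            uint16_t padded = (pkt_len + 7) & 0xfff8;

            assert(padded == 64);                           /* 60 -> 64 */
            assert(((1514 + 7) & 0xfff8) == 1520);          /* full frame */
            assert(((64 + 7) & 0xfff8) == 64);              /* already aligned */
            return 0;
    }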
index e7fe9c0..1e5c153 100644 (file)
@@ -280,7 +280,7 @@ static void catc_irq_done(struct urb *urb)
        struct catc *catc = urb->context;
        u8 *data = urb->transfer_buffer;
        int status = urb->status;
-       unsigned int hasdata = 0, linksts = LinkNoChange;
+       unsigned int hasdata, linksts = LinkNoChange;
        int res;
 
        if (!catc->is_f5u011) {
@@ -781,7 +781,7 @@ static int catc_probe(struct usb_interface *intf, const struct usb_device_id *id
                        intf->altsetting->desc.bInterfaceNumber, 1)) {
                dev_err(dev, "Can't set altsetting 1.\n");
                ret = -EIO;
-               goto fail_mem;;
+               goto fail_mem;
        }
 
        netdev = alloc_etherdev(sizeof(struct catc));
index 359ea0d..baa9b14 100644 (file)
@@ -218,7 +218,7 @@ static int eem_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
                                if (unlikely(!skb2))
                                        goto next;
                                skb_trim(skb2, len);
-                               put_unaligned_le16(BIT(15) | (1 << 11) | len,
+                               put_unaligned_le16(BIT(15) | BIT(11) | len,
                                                skb_push(skb2, 2));
                                eem_linkcmd(dev, skb2);
                                break;
index 3511081..bfb58c9 100644 (file)
@@ -71,22 +71,22 @@ struct smsc95xx_priv {
        struct fwnode_handle *irqfwnode;
        struct mii_bus *mdiobus;
        struct phy_device *phydev;
+       struct task_struct *pm_task;
 };
 
 static bool turbo_mode = true;
 module_param(turbo_mode, bool, 0644);
 MODULE_PARM_DESC(turbo_mode, "Enable multiple frames per Rx transaction");
 
-static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
-                                           u32 *data, int in_pm)
+static int __must_check smsc95xx_read_reg(struct usbnet *dev, u32 index,
+                                         u32 *data)
 {
+       struct smsc95xx_priv *pdata = dev->driver_priv;
        u32 buf;
        int ret;
        int (*fn)(struct usbnet *, u8, u8, u16, u16, void *, u16);
 
-       BUG_ON(!dev);
-
-       if (!in_pm)
+       if (current != pdata->pm_task)
                fn = usbnet_read_cmd;
        else
                fn = usbnet_read_cmd_nopm;
@@ -107,16 +107,15 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
        return ret;
 }
 
-static int __must_check __smsc95xx_write_reg(struct usbnet *dev, u32 index,
-                                            u32 data, int in_pm)
+static int __must_check smsc95xx_write_reg(struct usbnet *dev, u32 index,
+                                          u32 data)
 {
+       struct smsc95xx_priv *pdata = dev->driver_priv;
        u32 buf;
        int ret;
        int (*fn)(struct usbnet *, u8, u8, u16, u16, const void *, u16);
 
-       BUG_ON(!dev);
-
-       if (!in_pm)
+       if (current != pdata->pm_task)
                fn = usbnet_write_cmd;
        else
                fn = usbnet_write_cmd_nopm;
@@ -134,41 +133,16 @@ static int __must_check __smsc95xx_write_reg(struct usbnet *dev, u32 index,
        return ret;
 }
 
-static int __must_check smsc95xx_read_reg_nopm(struct usbnet *dev, u32 index,
-                                              u32 *data)
-{
-       return __smsc95xx_read_reg(dev, index, data, 1);
-}
-
-static int __must_check smsc95xx_write_reg_nopm(struct usbnet *dev, u32 index,
-                                               u32 data)
-{
-       return __smsc95xx_write_reg(dev, index, data, 1);
-}
-
-static int __must_check smsc95xx_read_reg(struct usbnet *dev, u32 index,
-                                         u32 *data)
-{
-       return __smsc95xx_read_reg(dev, index, data, 0);
-}
-
-static int __must_check smsc95xx_write_reg(struct usbnet *dev, u32 index,
-                                          u32 data)
-{
-       return __smsc95xx_write_reg(dev, index, data, 0);
-}
-
 /* Loop until the read is completed with timeout
  * called with phy_mutex held */
-static int __must_check __smsc95xx_phy_wait_not_busy(struct usbnet *dev,
-                                                    int in_pm)
+static int __must_check smsc95xx_phy_wait_not_busy(struct usbnet *dev)
 {
        unsigned long start_time = jiffies;
        u32 val;
        int ret;
 
        do {
-               ret = __smsc95xx_read_reg(dev, MII_ADDR, &val, in_pm);
+               ret = smsc95xx_read_reg(dev, MII_ADDR, &val);
                if (ret < 0) {
                        /* Ignore -ENODEV error during disconnect() */
                        if (ret == -ENODEV)
@@ -189,8 +163,7 @@ static u32 mii_address_cmd(int phy_id, int idx, u16 op)
        return (phy_id & 0x1f) << 11 | (idx & 0x1f) << 6 | op;
 }
 
-static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
-                               int in_pm)
+static int smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx)
 {
        u32 val, addr;
        int ret;
@@ -198,7 +171,7 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
        mutex_lock(&dev->phy_mutex);
 
        /* confirm MII not busy */
-       ret = __smsc95xx_phy_wait_not_busy(dev, in_pm);
+       ret = smsc95xx_phy_wait_not_busy(dev);
        if (ret < 0) {
                netdev_warn(dev->net, "%s: MII is busy\n", __func__);
                goto done;
@@ -206,20 +179,20 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
 
        /* set the address, index & direction (read from PHY) */
        addr = mii_address_cmd(phy_id, idx, MII_READ_ | MII_BUSY_);
-       ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm);
+       ret = smsc95xx_write_reg(dev, MII_ADDR, addr);
        if (ret < 0) {
                if (ret != -ENODEV)
                        netdev_warn(dev->net, "Error writing MII_ADDR\n");
                goto done;
        }
 
-       ret = __smsc95xx_phy_wait_not_busy(dev, in_pm);
+       ret = smsc95xx_phy_wait_not_busy(dev);
        if (ret < 0) {
                netdev_warn(dev->net, "Timed out reading MII reg %02X\n", idx);
                goto done;
        }
 
-       ret = __smsc95xx_read_reg(dev, MII_DATA, &val, in_pm);
+       ret = smsc95xx_read_reg(dev, MII_DATA, &val);
        if (ret < 0) {
                if (ret != -ENODEV)
                        netdev_warn(dev->net, "Error reading MII_DATA\n");
@@ -237,8 +210,8 @@ done:
        return ret;
 }
 
-static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
-                                 int idx, int regval, int in_pm)
+static void smsc95xx_mdio_write(struct usbnet *dev, int phy_id, int idx,
+                               int regval)
 {
        u32 val, addr;
        int ret;
@@ -246,14 +219,14 @@ static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
        mutex_lock(&dev->phy_mutex);
 
        /* confirm MII not busy */
-       ret = __smsc95xx_phy_wait_not_busy(dev, in_pm);
+       ret = smsc95xx_phy_wait_not_busy(dev);
        if (ret < 0) {
                netdev_warn(dev->net, "%s: MII is busy\n", __func__);
                goto done;
        }
 
        val = regval;
-       ret = __smsc95xx_write_reg(dev, MII_DATA, val, in_pm);
+       ret = smsc95xx_write_reg(dev, MII_DATA, val);
        if (ret < 0) {
                if (ret != -ENODEV)
                        netdev_warn(dev->net, "Error writing MII_DATA\n");
@@ -262,14 +235,14 @@ static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
 
        /* set the address, index & direction (write to PHY) */
        addr = mii_address_cmd(phy_id, idx, MII_WRITE_ | MII_BUSY_);
-       ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm);
+       ret = smsc95xx_write_reg(dev, MII_ADDR, addr);
        if (ret < 0) {
                if (ret != -ENODEV)
                        netdev_warn(dev->net, "Error writing MII_ADDR\n");
                goto done;
        }
 
-       ret = __smsc95xx_phy_wait_not_busy(dev, in_pm);
+       ret = smsc95xx_phy_wait_not_busy(dev);
        if (ret < 0) {
                netdev_warn(dev->net, "Timed out writing MII reg %02X\n", idx);
                goto done;
@@ -279,25 +252,11 @@ done:
        mutex_unlock(&dev->phy_mutex);
 }
 
-static int smsc95xx_mdio_read_nopm(struct usbnet *dev, int idx)
-{
-       struct smsc95xx_priv *pdata = dev->driver_priv;
-
-       return __smsc95xx_mdio_read(dev, pdata->phydev->mdio.addr, idx, 1);
-}
-
-static void smsc95xx_mdio_write_nopm(struct usbnet *dev, int idx, int regval)
-{
-       struct smsc95xx_priv *pdata = dev->driver_priv;
-
-       __smsc95xx_mdio_write(dev, pdata->phydev->mdio.addr, idx, regval, 1);
-}
-
 static int smsc95xx_mdiobus_read(struct mii_bus *bus, int phy_id, int idx)
 {
        struct usbnet *dev = bus->priv;
 
-       return __smsc95xx_mdio_read(dev, phy_id, idx, 0);
+       return smsc95xx_mdio_read(dev, phy_id, idx);
 }
 
 static int smsc95xx_mdiobus_write(struct mii_bus *bus, int phy_id, int idx,
@@ -305,7 +264,7 @@ static int smsc95xx_mdiobus_write(struct mii_bus *bus, int phy_id, int idx,
 {
        struct usbnet *dev = bus->priv;
 
-       __smsc95xx_mdio_write(dev, phy_id, idx, regval, 0);
+       smsc95xx_mdio_write(dev, phy_id, idx, regval);
        return 0;
 }
 
@@ -865,7 +824,7 @@ static int smsc95xx_start_tx_path(struct usbnet *dev)
 }
 
 /* Starts the Receive path */
-static int smsc95xx_start_rx_path(struct usbnet *dev, int in_pm)
+static int smsc95xx_start_rx_path(struct usbnet *dev)
 {
        struct smsc95xx_priv *pdata = dev->driver_priv;
        unsigned long flags;
@@ -874,7 +833,7 @@ static int smsc95xx_start_rx_path(struct usbnet *dev, int in_pm)
        pdata->mac_cr |= MAC_CR_RXEN_;
        spin_unlock_irqrestore(&pdata->mac_cr_lock, flags);
 
-       return __smsc95xx_write_reg(dev, MAC_CR, pdata->mac_cr, in_pm);
+       return smsc95xx_write_reg(dev, MAC_CR, pdata->mac_cr);
 }
 
 static int smsc95xx_reset(struct usbnet *dev)
@@ -1057,7 +1016,7 @@ static int smsc95xx_reset(struct usbnet *dev)
                return ret;
        }
 
-       ret = smsc95xx_start_rx_path(dev, 0);
+       ret = smsc95xx_start_rx_path(dev);
        if (ret < 0) {
                netdev_warn(dev->net, "Failed to start RX path\n");
                return ret;
@@ -1291,16 +1250,17 @@ static u32 smsc_crc(const u8 *buffer, size_t len, int filter)
        return crc << ((filter % 2) * 16);
 }
 
-static int smsc95xx_link_ok_nopm(struct usbnet *dev)
+static int smsc95xx_link_ok(struct usbnet *dev)
 {
+       struct smsc95xx_priv *pdata = dev->driver_priv;
        int ret;
 
        /* first, a dummy read, needed to latch some MII phys */
-       ret = smsc95xx_mdio_read_nopm(dev, MII_BMSR);
+       ret = smsc95xx_mdio_read(dev, pdata->phydev->mdio.addr, MII_BMSR);
        if (ret < 0)
                return ret;
 
-       ret = smsc95xx_mdio_read_nopm(dev, MII_BMSR);
+       ret = smsc95xx_mdio_read(dev, pdata->phydev->mdio.addr, MII_BMSR);
        if (ret < 0)
                return ret;
 
@@ -1313,14 +1273,14 @@ static int smsc95xx_enter_suspend0(struct usbnet *dev)
        u32 val;
        int ret;
 
-       ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+       ret = smsc95xx_read_reg(dev, PM_CTRL, &val);
        if (ret < 0)
                return ret;
 
        val &= (~(PM_CTL_SUS_MODE_ | PM_CTL_WUPS_ | PM_CTL_PHY_RST_));
        val |= PM_CTL_SUS_MODE_0;
 
-       ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+       ret = smsc95xx_write_reg(dev, PM_CTRL, val);
        if (ret < 0)
                return ret;
 
@@ -1332,12 +1292,12 @@ static int smsc95xx_enter_suspend0(struct usbnet *dev)
        if (pdata->wolopts & WAKE_PHY)
                val |= PM_CTL_WUPS_ED_;
 
-       ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+       ret = smsc95xx_write_reg(dev, PM_CTRL, val);
        if (ret < 0)
                return ret;
 
        /* read back PM_CTRL */
-       ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+       ret = smsc95xx_read_reg(dev, PM_CTRL, &val);
        if (ret < 0)
                return ret;
 
@@ -1349,34 +1309,34 @@ static int smsc95xx_enter_suspend0(struct usbnet *dev)
 static int smsc95xx_enter_suspend1(struct usbnet *dev)
 {
        struct smsc95xx_priv *pdata = dev->driver_priv;
+       int ret, phy_id = pdata->phydev->mdio.addr;
        u32 val;
-       int ret;
 
        /* reconfigure link pulse detection timing for
         * compatibility with non-standard link partners
         */
        if (pdata->features & FEATURE_PHY_NLP_CROSSOVER)
-               smsc95xx_mdio_write_nopm(dev, PHY_EDPD_CONFIG,
-                                        PHY_EDPD_CONFIG_DEFAULT);
+               smsc95xx_mdio_write(dev, phy_id, PHY_EDPD_CONFIG,
+                                   PHY_EDPD_CONFIG_DEFAULT);
 
        /* enable energy detect power-down mode */
-       ret = smsc95xx_mdio_read_nopm(dev, PHY_MODE_CTRL_STS);
+       ret = smsc95xx_mdio_read(dev, phy_id, PHY_MODE_CTRL_STS);
        if (ret < 0)
                return ret;
 
        ret |= MODE_CTRL_STS_EDPWRDOWN_;
 
-       smsc95xx_mdio_write_nopm(dev, PHY_MODE_CTRL_STS, ret);
+       smsc95xx_mdio_write(dev, phy_id, PHY_MODE_CTRL_STS, ret);
 
        /* enter SUSPEND1 mode */
-       ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+       ret = smsc95xx_read_reg(dev, PM_CTRL, &val);
        if (ret < 0)
                return ret;
 
        val &= ~(PM_CTL_SUS_MODE_ | PM_CTL_WUPS_ | PM_CTL_PHY_RST_);
        val |= PM_CTL_SUS_MODE_1;
 
-       ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+       ret = smsc95xx_write_reg(dev, PM_CTRL, val);
        if (ret < 0)
                return ret;
 
@@ -1384,7 +1344,7 @@ static int smsc95xx_enter_suspend1(struct usbnet *dev)
        val &= ~PM_CTL_WUPS_;
        val |= (PM_CTL_WUPS_ED_ | PM_CTL_ED_EN_);
 
-       ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+       ret = smsc95xx_write_reg(dev, PM_CTRL, val);
        if (ret < 0)
                return ret;
 
@@ -1399,14 +1359,14 @@ static int smsc95xx_enter_suspend2(struct usbnet *dev)
        u32 val;
        int ret;
 
-       ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+       ret = smsc95xx_read_reg(dev, PM_CTRL, &val);
        if (ret < 0)
                return ret;
 
        val &= ~(PM_CTL_SUS_MODE_ | PM_CTL_WUPS_ | PM_CTL_PHY_RST_);
        val |= PM_CTL_SUS_MODE_2;
 
-       ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+       ret = smsc95xx_write_reg(dev, PM_CTRL, val);
        if (ret < 0)
                return ret;
 
@@ -1421,7 +1381,7 @@ static int smsc95xx_enter_suspend3(struct usbnet *dev)
        u32 val;
        int ret;
 
-       ret = smsc95xx_read_reg_nopm(dev, RX_FIFO_INF, &val);
+       ret = smsc95xx_read_reg(dev, RX_FIFO_INF, &val);
        if (ret < 0)
                return ret;
 
@@ -1430,14 +1390,14 @@ static int smsc95xx_enter_suspend3(struct usbnet *dev)
                return -EBUSY;
        }
 
-       ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+       ret = smsc95xx_read_reg(dev, PM_CTRL, &val);
        if (ret < 0)
                return ret;
 
        val &= ~(PM_CTL_SUS_MODE_ | PM_CTL_WUPS_ | PM_CTL_PHY_RST_);
        val |= PM_CTL_SUS_MODE_3 | PM_CTL_RES_CLR_WKP_STS;
 
-       ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+       ret = smsc95xx_write_reg(dev, PM_CTRL, val);
        if (ret < 0)
                return ret;
 
@@ -1445,7 +1405,7 @@ static int smsc95xx_enter_suspend3(struct usbnet *dev)
        val &= ~PM_CTL_WUPS_;
        val |= PM_CTL_WUPS_WOL_;
 
-       ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+       ret = smsc95xx_write_reg(dev, PM_CTRL, val);
        if (ret < 0)
                return ret;
 
@@ -1490,9 +1450,12 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
        u32 val, link_up;
        int ret;
 
+       pdata->pm_task = current;
+
        ret = usbnet_suspend(intf, message);
        if (ret < 0) {
                netdev_warn(dev->net, "usbnet_suspend error\n");
+               pdata->pm_task = NULL;
                return ret;
        }
 
@@ -1501,8 +1464,7 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
                pdata->suspend_flags = 0;
        }
 
-       /* determine if link is up using only _nopm functions */
-       link_up = smsc95xx_link_ok_nopm(dev);
+       link_up = smsc95xx_link_ok(dev);
 
        if (message.event == PM_EVENT_AUTO_SUSPEND &&
            (pdata->features & FEATURE_REMOTE_WAKEUP)) {
@@ -1519,23 +1481,23 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
                netdev_info(dev->net, "entering SUSPEND2 mode\n");
 
                /* disable energy detect (link up) & wake up events */
-               ret = smsc95xx_read_reg_nopm(dev, WUCSR, &val);
+               ret = smsc95xx_read_reg(dev, WUCSR, &val);
                if (ret < 0)
                        goto done;
 
                val &= ~(WUCSR_MPEN_ | WUCSR_WAKE_EN_);
 
-               ret = smsc95xx_write_reg_nopm(dev, WUCSR, val);
+               ret = smsc95xx_write_reg(dev, WUCSR, val);
                if (ret < 0)
                        goto done;
 
-               ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+               ret = smsc95xx_read_reg(dev, PM_CTRL, &val);
                if (ret < 0)
                        goto done;
 
                val &= ~(PM_CTL_ED_EN_ | PM_CTL_WOL_EN_);
 
-               ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+               ret = smsc95xx_write_reg(dev, PM_CTRL, val);
                if (ret < 0)
                        goto done;
 
@@ -1626,7 +1588,7 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
                }
 
                for (i = 0; i < (wuff_filter_count * 4); i++) {
-                       ret = smsc95xx_write_reg_nopm(dev, WUFF, filter_mask[i]);
+                       ret = smsc95xx_write_reg(dev, WUFF, filter_mask[i]);
                        if (ret < 0) {
                                kfree(filter_mask);
                                goto done;
@@ -1635,50 +1597,50 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
                kfree(filter_mask);
 
                for (i = 0; i < (wuff_filter_count / 4); i++) {
-                       ret = smsc95xx_write_reg_nopm(dev, WUFF, command[i]);
+                       ret = smsc95xx_write_reg(dev, WUFF, command[i]);
                        if (ret < 0)
                                goto done;
                }
 
                for (i = 0; i < (wuff_filter_count / 4); i++) {
-                       ret = smsc95xx_write_reg_nopm(dev, WUFF, offset[i]);
+                       ret = smsc95xx_write_reg(dev, WUFF, offset[i]);
                        if (ret < 0)
                                goto done;
                }
 
                for (i = 0; i < (wuff_filter_count / 2); i++) {
-                       ret = smsc95xx_write_reg_nopm(dev, WUFF, crc[i]);
+                       ret = smsc95xx_write_reg(dev, WUFF, crc[i]);
                        if (ret < 0)
                                goto done;
                }
 
                /* clear any pending pattern match packet status */
-               ret = smsc95xx_read_reg_nopm(dev, WUCSR, &val);
+               ret = smsc95xx_read_reg(dev, WUCSR, &val);
                if (ret < 0)
                        goto done;
 
                val |= WUCSR_WUFR_;
 
-               ret = smsc95xx_write_reg_nopm(dev, WUCSR, val);
+               ret = smsc95xx_write_reg(dev, WUCSR, val);
                if (ret < 0)
                        goto done;
        }
 
        if (pdata->wolopts & WAKE_MAGIC) {
                /* clear any pending magic packet status */
-               ret = smsc95xx_read_reg_nopm(dev, WUCSR, &val);
+               ret = smsc95xx_read_reg(dev, WUCSR, &val);
                if (ret < 0)
                        goto done;
 
                val |= WUCSR_MPR_;
 
-               ret = smsc95xx_write_reg_nopm(dev, WUCSR, val);
+               ret = smsc95xx_write_reg(dev, WUCSR, val);
                if (ret < 0)
                        goto done;
        }
 
        /* enable/disable wakeup sources */
-       ret = smsc95xx_read_reg_nopm(dev, WUCSR, &val);
+       ret = smsc95xx_read_reg(dev, WUCSR, &val);
        if (ret < 0)
                goto done;
 
@@ -1698,12 +1660,12 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
                val &= ~WUCSR_MPEN_;
        }
 
-       ret = smsc95xx_write_reg_nopm(dev, WUCSR, val);
+       ret = smsc95xx_write_reg(dev, WUCSR, val);
        if (ret < 0)
                goto done;
 
        /* enable wol wakeup source */
-       ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+       ret = smsc95xx_read_reg(dev, PM_CTRL, &val);
        if (ret < 0)
                goto done;
 
@@ -1713,12 +1675,12 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
        if (pdata->wolopts & WAKE_PHY)
                val |= PM_CTL_ED_EN_;
 
-       ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+       ret = smsc95xx_write_reg(dev, PM_CTRL, val);
        if (ret < 0)
                goto done;
 
        /* enable receiver to enable frame reception */
-       smsc95xx_start_rx_path(dev, 1);
+       smsc95xx_start_rx_path(dev);
 
        /* some wol options are enabled, so enter SUSPEND0 */
        netdev_info(dev->net, "entering SUSPEND0 mode\n");
@@ -1732,6 +1694,7 @@ done:
        if (ret && PMSG_IS_AUTO(message))
                usbnet_resume(intf);
 
+       pdata->pm_task = NULL;
        return ret;
 }
 
@@ -1752,29 +1715,31 @@ static int smsc95xx_resume(struct usb_interface *intf)
        /* do this first to ensure it's cleared even in error case */
        pdata->suspend_flags = 0;
 
+       pdata->pm_task = current;
+
        if (suspend_flags & SUSPEND_ALLMODES) {
                /* clear wake-up sources */
-               ret = smsc95xx_read_reg_nopm(dev, WUCSR, &val);
+               ret = smsc95xx_read_reg(dev, WUCSR, &val);
                if (ret < 0)
-                       return ret;
+                       goto done;
 
                val &= ~(WUCSR_WAKE_EN_ | WUCSR_MPEN_);
 
-               ret = smsc95xx_write_reg_nopm(dev, WUCSR, val);
+               ret = smsc95xx_write_reg(dev, WUCSR, val);
                if (ret < 0)
-                       return ret;
+                       goto done;
 
                /* clear wake-up status */
-               ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+               ret = smsc95xx_read_reg(dev, PM_CTRL, &val);
                if (ret < 0)
-                       return ret;
+                       goto done;
 
                val &= ~PM_CTL_WOL_EN_;
                val |= PM_CTL_WUPS_;
 
-               ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+               ret = smsc95xx_write_reg(dev, PM_CTRL, val);
                if (ret < 0)
-                       return ret;
+                       goto done;
        }
 
        phy_init_hw(pdata->phydev);
@@ -1783,15 +1748,20 @@ static int smsc95xx_resume(struct usb_interface *intf)
        if (ret < 0)
                netdev_warn(dev->net, "usbnet_resume error\n");
 
+done:
+       pdata->pm_task = NULL;
        return ret;
 }
 
 static int smsc95xx_reset_resume(struct usb_interface *intf)
 {
        struct usbnet *dev = usb_get_intfdata(intf);
+       struct smsc95xx_priv *pdata = dev->driver_priv;
        int ret;
 
+       pdata->pm_task = current;
        ret = smsc95xx_reset(dev);
+       pdata->pm_task = NULL;
        if (ret < 0)
                return ret;
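The smsc95xx rework above drops the in_pm parameter that was threaded through every register and MDIO accessor: suspend/resume record the acting task in pdata->pm_task, and the accessors compare it against current to pick the _nopm USB helpers. A hedged sketch of the shape of a PM handler under this scheme (simplified from the hunks above):

    static int sketch_suspend(struct usb_interface *intf, pm_message_t message)
    {
            struct usbnet *dev = usb_get_intfdata(intf);
            struct smsc95xx_priv *pdata = dev->driver_priv;
            int ret;

            pdata->pm_task = current;       /* accessors now use _nopm calls */
            ret = usbnet_suspend(intf, message);
            /* ... device-specific work via smsc95xx_read/write_reg() ... */
            pdata->pm_task = NULL;          /* back to autopm-aware helpers */
            return ret;
    }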
 
index dc79811..e415465 100644 (file)
@@ -17,9 +17,6 @@
  * issues can usefully be addressed by this framework.
  */
 
-// #define     DEBUG                   // error path messages, extra info
-// #define     VERBOSE                 // more; success messages
-
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/netdevice.h>
@@ -849,13 +846,11 @@ int usbnet_stop (struct net_device *net)
 
        mpn = !test_and_clear_bit(EVENT_NO_RUNTIME_PM, &dev->flags);
 
-       /* deferred work (task, timer, softirq) must also stop.
-        * can't flush_scheduled_work() until we drop rtnl (later),
-        * else workers could deadlock; so make workers a NOP.
-        */
+       /* deferred work (timer, softirq, task) must also stop */
        dev->flags = 0;
        del_timer_sync (&dev->delay);
        tasklet_kill (&dev->bh);
+       cancel_work_sync(&dev->kevent);
        if (!pm)
                usb_autopm_put_interface(dev->intf);
 
@@ -1619,8 +1614,6 @@ void usbnet_disconnect (struct usb_interface *intf)
        net = dev->net;
        unregister_netdev (net);
 
-       cancel_work_sync(&dev->kevent);
-
        usb_scuttle_anchored_urbs(&dev->deferred);
 
        if (dev->driver_info->unbind)
@@ -2004,7 +1997,7 @@ static int __usbnet_read_cmd(struct usbnet *dev, u8 cmd, u8 reqtype,
                   cmd, reqtype, value, index, size);
 
        if (size) {
-               buf = kmalloc(size, GFP_KERNEL);
+               buf = kmalloc(size, GFP_NOIO);
                if (!buf)
                        goto out;
        }
@@ -2036,7 +2029,7 @@ static int __usbnet_write_cmd(struct usbnet *dev, u8 cmd, u8 reqtype,
                   cmd, reqtype, value, index, size);
 
        if (data) {
-               buf = kmemdup(data, size, GFP_KERNEL);
+               buf = kmemdup(data, size, GFP_NOIO);
                if (!buf)
                        goto out;
        } else {
@@ -2137,7 +2130,7 @@ static void usbnet_async_cmd_cb(struct urb *urb)
 int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype,
                           u16 value, u16 index, const void *data, u16 size)
 {
-       struct usb_ctrlrequest *req = NULL;
+       struct usb_ctrlrequest *req;
        struct urb *urb;
        int err = -ENOMEM;
        void *buf = NULL;
@@ -2155,7 +2148,7 @@ int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype,
                if (!buf) {
                        netdev_err(dev->net, "Error allocating buffer"
                                   " in %s!\n", __func__);
-                       goto fail_free;
+                       goto fail_free_urb;
                }
        }
 
@@ -2179,14 +2172,21 @@ int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype,
        if (err < 0) {
                netdev_err(dev->net, "Error submitting the control"
                           " message: status=%d\n", err);
-               goto fail_free;
+               goto fail_free_all;
        }
        return 0;
 
+fail_free_all:
+       kfree(req);
 fail_free_buf:
        kfree(buf);
-fail_free:
-       kfree(req);
+       /*
+        * avoid a double free
+        * needed because the flag can be set only
+        * after filling the URB
+        */
+       urb->transfer_flags = 0;
+fail_free_urb:
        usb_free_urb(urb);
 fail:
        return err;
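The GFP_KERNEL to GFP_NOIO switch above matters because these control transfers can run on the suspend/resume path, where an allocation that triggers block I/O may depend on this very device. An alternative, sketched under the assumption that the scoped memalloc_noio API fits the call site, would be:

    #include <linux/sched/mm.h>

    unsigned int noio_flags;

    noio_flags = memalloc_noio_save();      /* allocations behave as NOIO */
    buf = kmalloc(size, GFP_KERNEL);        /* effectively GFP_NOIO here */
    memalloc_noio_restore(noio_flags);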
index 466da01..2cb833b 100644 (file)
@@ -312,6 +312,7 @@ static bool veth_skb_is_eligible_for_gro(const struct net_device *dev,
 static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
 {
        struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
+       struct netdev_queue *queue = NULL;
        struct veth_rq *rq = NULL;
        struct net_device *rcv;
        int length = skb->len;
@@ -329,6 +330,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
        rxq = skb_get_queue_mapping(skb);
        if (rxq < rcv->real_num_rx_queues) {
                rq = &rcv_priv->rq[rxq];
+               queue = netdev_get_tx_queue(dev, rxq);
 
                /* The napi pointer is available when an XDP program is
                 * attached or when GRO is enabled
@@ -340,6 +342,8 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
 
        skb_tx_timestamp(skb);
        if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) {
+               if (queue)
+                       txq_trans_cond_update(queue);
                if (!use_napi)
                        dev_lstats_add(dev, length);
        } else {
index db05b5e..356cf8d 100644 (file)
@@ -2768,7 +2768,6 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
 static void virtnet_freeze_down(struct virtio_device *vdev)
 {
        struct virtnet_info *vi = vdev->priv;
-       int i;
 
        /* Make sure no work handler is accessing the device */
        flush_work(&vi->config_work);
@@ -2776,14 +2775,8 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
        netif_tx_lock_bh(vi->dev);
        netif_device_detach(vi->dev);
        netif_tx_unlock_bh(vi->dev);
-       cancel_delayed_work_sync(&vi->refill);
-
-       if (netif_running(vi->dev)) {
-               for (i = 0; i < vi->max_queue_pairs; i++) {
-                       napi_disable(&vi->rq[i].napi);
-                       virtnet_napi_tx_disable(&vi->sq[i].napi);
-               }
-       }
+       if (netif_running(vi->dev))
+               virtnet_close(vi->dev);
 }
 
 static int init_vqs(struct virtnet_info *vi);
@@ -2791,7 +2784,7 @@ static int init_vqs(struct virtnet_info *vi);
 static int virtnet_restore_up(struct virtio_device *vdev)
 {
        struct virtnet_info *vi = vdev->priv;
-       int err, i;
+       int err;
 
        err = init_vqs(vi);
        if (err)
@@ -2800,15 +2793,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
        virtio_device_ready(vdev);
 
        if (netif_running(vi->dev)) {
-               for (i = 0; i < vi->curr_queue_pairs; i++)
-                       if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-                               schedule_delayed_work(&vi->refill, 0);
-
-               for (i = 0; i < vi->max_queue_pairs; i++) {
-                       virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
-                       virtnet_napi_tx_enable(vi, vi->sq[i].vq,
-                                              &vi->sq[i].napi);
-               }
+               err = virtnet_open(vi->dev);
+               if (err)
+                       return err;
        }
 
        netif_tx_lock_bh(vi->dev);
@@ -3655,14 +3642,20 @@ static int virtnet_probe(struct virtio_device *vdev)
        if (vi->has_rss || vi->has_rss_hash_report)
                virtnet_init_default_rss(vi);
 
-       err = register_netdev(dev);
+       /* serialize netdev register + virtio_device_ready() with ndo_open() */
+       rtnl_lock();
+
+       err = register_netdevice(dev);
        if (err) {
                pr_debug("virtio_net: registering device failed\n");
+               rtnl_unlock();
                goto free_failover;
        }
 
        virtio_device_ready(vdev);
 
+       rtnl_unlock();
+
        err = virtnet_cpu_notif_add(vi);
        if (err) {
                pr_debug("virtio_net: registering cpu notifier failed\n");
index 5704def..237cbd5 100644 (file)
@@ -1782,9 +1782,7 @@ static int __wil_tx_vring_tso(struct wil6210_priv *wil, struct wil6210_vif *vif,
        }
 
        /* Header Length = MAC header len + IP header len + TCP header len*/
-       hdrlen = ETH_HLEN +
-               (int)skb_network_header_len(skb) +
-               tcp_hdrlen(skb);
+       hdrlen = skb_tcp_all_headers(skb);
 
        gso_type = skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV6 | SKB_GSO_TCPV4);
        switch (gso_type) {
index fc61a44..a256695 100644 (file)
@@ -1201,9 +1201,7 @@ static int xenvif_tx_submit(struct xenvif_queue *queue)
                        }
 
                        mss = skb_shinfo(skb)->gso_size;
-                       hdrlen = skb_transport_header(skb) -
-                               skb_mac_header(skb) +
-                               tcp_hdrlen(skb);
+                       hdrlen = skb_tcp_all_headers(skb);
 
                        skb_shinfo(skb)->gso_segs =
                                DIV_ROUND_UP(skb->len - hdrlen, mss);
index 8c0b954..2409007 100644 (file)
@@ -66,6 +66,10 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
                 "Maximum number of queues per virtual interface");
 
+static bool __read_mostly xennet_trusted = true;
+module_param_named(trusted, xennet_trusted, bool, 0644);
+MODULE_PARM_DESC(trusted, "Is the backend trusted");
+
 #define XENNET_TIMEOUT  (5 * HZ)
 
 static const struct ethtool_ops xennet_ethtool_ops;
@@ -173,6 +177,9 @@ struct netfront_info {
        /* Is device behaving sane? */
        bool broken;
 
+       /* Should skbs be bounced into a zeroed buffer? */
+       bool bounce;
+
        atomic_t rx_gso_checksum_fixup;
 };
 
@@ -271,7 +278,8 @@ static struct sk_buff *xennet_alloc_one_rx_buffer(struct netfront_queue *queue)
        if (unlikely(!skb))
                return NULL;
 
-       page = page_pool_dev_alloc_pages(queue->page_pool);
+       page = page_pool_alloc_pages(queue->page_pool,
+                                    GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO);
        if (unlikely(!page)) {
                kfree_skb(skb);
                return NULL;
@@ -665,6 +673,33 @@ static int xennet_xdp_xmit(struct net_device *dev, int n,
        return nxmit;
 }
 
+struct sk_buff *bounce_skb(const struct sk_buff *skb)
+{
+       unsigned int headerlen = skb_headroom(skb);
+       /* Align size to allocate full pages and avoid contiguous data leaks */
+       unsigned int size = ALIGN(skb_end_offset(skb) + skb->data_len,
+                                 XEN_PAGE_SIZE);
+       struct sk_buff *n = alloc_skb(size, GFP_ATOMIC | __GFP_ZERO);
+
+       if (!n)
+               return NULL;
+
+       if (!IS_ALIGNED((uintptr_t)n->head, XEN_PAGE_SIZE)) {
+               WARN_ONCE(1, "misaligned skb allocated\n");
+               kfree_skb(n);
+               return NULL;
+       }
+
+       /* Set the data pointer */
+       skb_reserve(n, headerlen);
+       /* Set the tail pointer and length */
+       skb_put(n, skb->len);
+
+       BUG_ON(skb_copy_bits(skb, -headerlen, n->head, headerlen + skb->len));
+
+       skb_copy_header(n, skb);
+       return n;
+}
 
 #define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)
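A worked example of the bounce sizing above: padding the copy to whole Xen pages guarantees a granted page never exposes stale bytes from a neighbouring allocation (plain C, illustrative values):

    #include <assert.h>

    #define XEN_PAGE_SIZE 4096u
    #define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
            /* small linear skb: end offset 1728, no paged data */
            assert(ALIGN_UP(1728u + 0u, XEN_PAGE_SIZE) == 4096u);
            /* frame spilling 6000 bytes into paged data */
            assert(ALIGN_UP(1728u + 6000u, XEN_PAGE_SIZE) == 8192u);
            return 0;
    }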
 
@@ -718,9 +753,13 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
 
        /* The first req should be at least ETH_HLEN size or the packet will be
         * dropped by netback.
+        *
+        * If the backend is not trusted bounce all data to zeroed pages to
+        * avoid exposing contiguous data on the granted page not belonging to
+        * the skb.
         */
-       if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
-               nskb = skb_copy(skb, GFP_ATOMIC);
+       if (np->bounce || unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
+               nskb = bounce_skb(skb);
                if (!nskb)
                        goto drop;
                dev_consume_skb_any(skb);
@@ -1053,8 +1092,10 @@ static int xennet_get_responses(struct netfront_queue *queue,
                        }
                }
                rcu_read_unlock();
-next:
+
                __skb_queue_tail(list, skb);
+
+next:
                if (!(rx->flags & XEN_NETRXF_more_data))
                        break;
 
@@ -2214,6 +2255,10 @@ static int talk_to_netback(struct xenbus_device *dev,
 
        info->netdev->irq = 0;
 
+       /* Check if backend is trusted. */
+       info->bounce = !xennet_trusted ||
+                      !xenbus_read_unsigned(dev->nodename, "trusted", 1);
+
        /* Check if backend supports multiple queues */
        max_queues = xenbus_read_unsigned(info->xbdev->otherend,
                                          "multi-queue-max-queues", 1);
@@ -2381,6 +2426,9 @@ static int xennet_connect(struct net_device *dev)
                return err;
        if (np->netback_has_xdp_headroom)
                pr_info("backend supports XDP headroom\n");
+       if (np->bounce)
+               dev_info(&np->xbdev->dev,
+                        "bouncing transmitted data to zeroed pages\n");
 
        /* talk_to_netback() sets the correct number of queues */
        num_queues = dev->real_num_tx_queues;
index ceef81d..01329b9 100644 (file)
@@ -167,9 +167,9 @@ static int nfcmrvl_i2c_parse_dt(struct device_node *node,
                pdata->irq_polarity = IRQF_TRIGGER_RISING;
 
        ret = irq_of_parse_and_map(node, 0);
-       if (ret < 0) {
-               pr_err("Unable to get irq, error: %d\n", ret);
-               return ret;
+       if (!ret) {
+               pr_err("Unable to get irq\n");
+               return -EINVAL;
        }
        pdata->irq = ret;
 
index a38e2fc..ad3359a 100644 (file)
@@ -115,9 +115,9 @@ static int nfcmrvl_spi_parse_dt(struct device_node *node,
        }
 
        ret = irq_of_parse_and_map(node, 0);
-       if (ret < 0) {
-               pr_err("Unable to get irq, error: %d\n", ret);
-               return ret;
+       if (!ret) {
+               pr_err("Unable to get irq\n");
+               return -EINVAL;
        }
        pdata->irq = ret;
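Both nfcmrvl fixes above follow from the same convention: irq_of_parse_and_map() returns an unsigned virq and 0 on failure, so it can never be negative and the old ret < 0 check was dead code. A sketch of the correct pattern (hypothetical wrapper):

    #include <linux/of_irq.h>

    static int sketch_get_irq(struct device_node *node)
    {
            unsigned int virq = irq_of_parse_and_map(node, 0);

            if (!virq)              /* 0 means "no mapping", not an errno */
                    return -EINVAL;
            return virq;
    }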
 
index 7e451c1..ae2ba08 100644 (file)
@@ -122,7 +122,9 @@ static int nxp_nci_i2c_fw_read(struct nxp_nci_i2c_phy *phy,
        skb_put_data(*skb, &header, NXP_NCI_FW_HDR_LEN);
 
        r = i2c_master_recv(client, skb_put(*skb, frame_len), frame_len);
-       if (r != frame_len) {
+       if (r < 0) {
+               goto fw_read_exit_free_skb;
+       } else if (r != frame_len) {
                nfc_err(&client->dev,
                        "Invalid frame length: %u (expected %zu)\n",
                        r, frame_len);
@@ -162,8 +164,13 @@ static int nxp_nci_i2c_nci_read(struct nxp_nci_i2c_phy *phy,
 
        skb_put_data(*skb, (void *)&header, NCI_CTRL_HDR_SIZE);
 
+       if (!header.plen)
+               return 0;
+
        r = i2c_master_recv(client, skb_put(*skb, header.plen), header.plen);
-       if (r != header.plen) {
+       if (r < 0) {
+               goto nci_read_exit_free_skb;
+       } else if (r != header.plen) {
                nfc_err(&client->dev,
                        "Invalid frame payload length: %u (expected %u)\n",
                        r, header.plen);
index a4fc17d..b38d035 100644 (file)
@@ -176,8 +176,8 @@ static int nvdimm_clear_badblocks_region(struct device *dev, void *data)
        ndr_end = nd_region->ndr_start + nd_region->ndr_size - 1;
 
        /* make sure we are in the region */
-       if (ctx->phys < nd_region->ndr_start
-                       || (ctx->phys + ctx->cleared) > ndr_end)
+       if (ctx->phys < nd_region->ndr_start ||
+           (ctx->phys + ctx->cleared - 1) > ndr_end)
                return 0;
 
        sector = (ctx->phys - nd_region->ndr_start) / 512;
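A small demo of the off-by-one fixed above: ndr_end is the region's last byte (inclusive), so the last cleared byte phys + cleared - 1 is what must be compared against it (plain C, illustrative values):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t ndr_start = 0x1000, ndr_size = 0x1000;
            uint64_t ndr_end = ndr_start + ndr_size - 1;    /* inclusive: 0x1fff */
            uint64_t phys = 0x1800, cleared = 0x800;

            /* old check: phys + cleared (0x2000) > ndr_end -> wrongly skipped */
            assert(phys + cleared > ndr_end);
            /* fixed: last cleared byte is 0x1fff, inside the region */
            assert(phys + cleared - 1 <= ndr_end);
            return 0;
    }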
index 24165da..ec6ac29 100644 (file)
@@ -2546,6 +2546,20 @@ static const struct nvme_core_quirk_entry core_quirks[] = {
                .vid = 0x1e0f,
                .mn = "KCD6XVUL6T40",
                .quirks = NVME_QUIRK_NO_APST,
+       },
+       {
+               /*
+                * The external Samsung X5 SSD fails initialization without a
+                * delay before checking if it is ready and has a whole set of
+                * other problems.  To make this even more interesting, it
+                * shares the PCI ID with the internal Samsung 970 Evo Plus,
+                * which does not need or want these quirks.
+                */
+               .vid = 0x144d,
+               .mn = "Samsung Portable SSD X5",
+               .quirks = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
+                         NVME_QUIRK_NO_DEEPEST_PS |
+                         NVME_QUIRK_IGNORE_DEV_SUBNQN,
        }
 };
 
@@ -3285,8 +3299,8 @@ static ssize_t uuid_show(struct device *dev, struct device_attribute *attr,
         * we have no UUID set
         */
        if (uuid_is_null(&ids->uuid)) {
-               printk_ratelimited(KERN_WARNING
-                                  "No UUID available providing old NGUID\n");
+               dev_warn_ratelimited(dev,
+                       "No UUID available providing old NGUID\n");
                return sysfs_emit(buf, "%pU\n", ids->nguid);
        }
        return sysfs_emit(buf, "%pU\n", &ids->uuid);
@@ -3863,6 +3877,7 @@ static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
        if (ret) {
                dev_err(ctrl->device,
                        "globally duplicate IDs for nsid %d\n", nsid);
+               nvme_print_device_info(ctrl);
                return ret;
        }
 
@@ -4580,6 +4595,8 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
        nvme_stop_failfast_work(ctrl);
        flush_work(&ctrl->async_event_work);
        cancel_work_sync(&ctrl->fw_act_work);
+       if (ctrl->ops->stop_ctrl)
+               ctrl->ops->stop_ctrl(ctrl);
 }
 EXPORT_SYMBOL_GPL(nvme_stop_ctrl);
 
index 9b72b6e..5558f88 100644 (file)
@@ -502,7 +502,9 @@ struct nvme_ctrl_ops {
        void (*free_ctrl)(struct nvme_ctrl *ctrl);
        void (*submit_async_event)(struct nvme_ctrl *ctrl);
        void (*delete_ctrl)(struct nvme_ctrl *ctrl);
+       void (*stop_ctrl)(struct nvme_ctrl *ctrl);
        int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
+       void (*print_device_info)(struct nvme_ctrl *ctrl);
 };
 
 /*
@@ -548,6 +550,33 @@ static inline struct request *nvme_cid_to_rq(struct blk_mq_tags *tags,
        return blk_mq_tag_to_rq(tags, nvme_tag_from_cid(command_id));
 }
 
+/*
+ * Return the length of the string without the space padding
+ */
+static inline int nvme_strlen(char *s, int len)
+{
+       while (s[len - 1] == ' ')
+               len--;
+       return len;
+}
+
+static inline void nvme_print_device_info(struct nvme_ctrl *ctrl)
+{
+       struct nvme_subsystem *subsys = ctrl->subsys;
+
+       if (ctrl->ops->print_device_info) {
+               ctrl->ops->print_device_info(ctrl);
+               return;
+       }
+
+       dev_err(ctrl->device,
+               "VID:%04x model:%.*s firmware:%.*s\n", subsys->vendor_id,
+               nvme_strlen(subsys->model, sizeof(subsys->model)),
+               subsys->model, nvme_strlen(subsys->firmware_rev,
+                                          sizeof(subsys->firmware_rev)),
+               subsys->firmware_rev);
+}
+
 #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
 void nvme_fault_inject_init(struct nvme_fault_inject *fault_inj,
                            const char *dev_name);
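
nvme_strlen() trims the trailing space padding of NVMe's fixed-width, non-NUL-terminated Identify fields so that a %.*s format prints only the real characters. A compilable sketch of the same helper, assuming the field holds at least one non-space byte (otherwise the scan would run past the start):

#include <stdio.h>

/* same logic as nvme_strlen() above */
static int nvme_strlen(char *s, int len)
{
        while (s[len - 1] == ' ')
                len--;
        return len;
}

int main(void)
{
        char model[16] = "X5 SSD          "; /* fixed width, space padded, no NUL */

        printf("model:%.*s|\n", nvme_strlen(model, (int)sizeof(model)), model);
        return 0;                            /* prints model:X5 SSD| */
}
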
index 48f4f6e..e7af223 100644 (file)
@@ -1334,6 +1334,14 @@ static void nvme_warn_reset(struct nvme_dev *dev, u32 csts)
                dev_warn(dev->ctrl.device,
                         "controller is down; will reset: CSTS=0x%x, PCI_STATUS read failed (%d)\n",
                         csts, result);
+
+       if (csts != ~0)
+               return;
+
+       dev_warn(dev->ctrl.device,
+                "Does your device have a faulty power saving mode enabled?\n");
+       dev_warn(dev->ctrl.device,
+                "Try \"nvme_core.default_ps_max_latency_us=0 pcie_aspm=off\" and report a bug\n");
 }
 
 static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
@@ -2976,6 +2984,21 @@ static int nvme_pci_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
        return snprintf(buf, size, "%s\n", dev_name(&pdev->dev));
 }
 
+
+static void nvme_pci_print_device_info(struct nvme_ctrl *ctrl)
+{
+       struct pci_dev *pdev = to_pci_dev(to_nvme_dev(ctrl)->dev);
+       struct nvme_subsystem *subsys = ctrl->subsys;
+
+       dev_err(ctrl->device,
+               "VID:DID %04x:%04x model:%.*s firmware:%.*s\n",
+               pdev->vendor, pdev->device,
+               nvme_strlen(subsys->model, sizeof(subsys->model)),
+               subsys->model, nvme_strlen(subsys->firmware_rev,
+                                          sizeof(subsys->firmware_rev)),
+               subsys->firmware_rev);
+}
+
 static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
        .name                   = "pcie",
        .module                 = THIS_MODULE,
@@ -2987,6 +3010,7 @@ static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
        .free_ctrl              = nvme_pci_free_ctrl,
        .submit_async_event     = nvme_pci_submit_async_event,
        .get_address            = nvme_pci_get_address,
+       .print_device_info      = nvme_pci_print_device_info,
 };
 
 static int nvme_dev_map(struct nvme_dev *dev)
@@ -3421,7 +3445,8 @@ static const struct pci_device_id nvme_id_table[] = {
        { PCI_VDEVICE(REDHAT, 0x0010),  /* Qemu emulated controller */
                .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x126f, 0x2263),   /* Silicon Motion unidentified */
-               .driver_data = NVME_QUIRK_NO_NS_DESC_LIST, },
+               .driver_data = NVME_QUIRK_NO_NS_DESC_LIST |
+                               NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x1bb1, 0x0100),   /* Seagate Nytro Flash Storage */
                .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
                                NVME_QUIRK_NO_NS_DESC_LIST, },
@@ -3437,22 +3462,39 @@ static const struct pci_device_id nvme_id_table[] = {
                .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
                                NVME_QUIRK_DISABLE_WRITE_ZEROES|
                                NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+       { PCI_DEVICE(0x1987, 0x5012),   /* Phison E12 */
+               .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x1987, 0x5016),   /* Phison E16 */
                .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
        { PCI_DEVICE(0x1b4b, 0x1092),   /* Lexar 256 GB SSD */
                .driver_data = NVME_QUIRK_NO_NS_DESC_LIST |
                                NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+       { PCI_DEVICE(0x1cc1, 0x33f8),   /* ADATA IM2P33F8ABR1 1 TB */
+               .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x10ec, 0x5762),   /* ADATA SX6000LNP */
-               .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+               .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN |
+                               NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x1cc1, 0x8201),   /* ADATA SX8200PNP 512GB */
                .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
                                NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+        { PCI_DEVICE(0x1344, 0x5407), /* Micron Technology Inc NVMe SSD */
+               .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN },
        { PCI_DEVICE(0x1c5c, 0x1504),   /* SK Hynix PC400 */
                .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x1c5c, 0x174a),   /* SK Hynix P31 SSD */
+               .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x15b7, 0x2001),   /*  Sandisk Skyhawk */
                .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
        { PCI_DEVICE(0x1d97, 0x2263),   /* SPCC */
                .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x144d, 0xa80b),   /* Samsung PM9B1 256G and 512G */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x144d, 0xa809),   /* Samsung MZALQ256HBJD 256G */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x1cc4, 0x6303),   /* UMIS RPJTJ512MGE1QDY 512G */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x1cc4, 0x6302),   /* UMIS RPJTJ256MGE1QDY 256G */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
        { PCI_DEVICE(0x2646, 0x2262),   /* KINGSTON SKC2000 NVMe SSD */
                .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
        { PCI_DEVICE(0x2646, 0x2263),   /* KINGSTON A2000 NVMe SSD  */
@@ -3463,6 +3505,10 @@ static const struct pci_device_id nvme_id_table[] = {
                .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x1e4B, 0x1202),   /* MAXIO MAP1202 */
                .driver_data = NVME_QUIRK_BOGUS_NID, },
+       { PCI_DEVICE(0x1cc1, 0x5350),   /* ADATA XPG GAMMIX S50 */
+               .driver_data = NVME_QUIRK_BOGUS_NID, },
+       { PCI_DEVICE(0x1e49, 0x0041),   /* ZHITAI TiPro7000 NVMe SSD */
+               .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
        { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0061),
                .driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, },
        { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0065),
@@ -3483,10 +3529,6 @@ static const struct pci_device_id nvme_id_table[] = {
                                NVME_QUIRK_128_BYTES_SQES |
                                NVME_QUIRK_SHARED_TAGS |
                                NVME_QUIRK_SKIP_CID_GEN },
-       { PCI_DEVICE(0x144d, 0xa808),   /* Samsung X5 */
-               .driver_data =  NVME_QUIRK_DELAY_BEFORE_CHK_RDY|
-                               NVME_QUIRK_NO_DEEPEST_PS |
-                               NVME_QUIRK_IGNORE_DEV_SUBNQN, },
        { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
        { 0, }
 };
index f2a5e1e..46c2dcf 100644 (file)
@@ -1048,6 +1048,14 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
        }
 }
 
+static void nvme_rdma_stop_ctrl(struct nvme_ctrl *nctrl)
+{
+       struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
+
+       cancel_work_sync(&ctrl->err_work);
+       cancel_delayed_work_sync(&ctrl->reconnect_work);
+}
+
 static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl)
 {
        struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
@@ -2252,9 +2260,6 @@ static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
 
 static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 {
-       cancel_work_sync(&ctrl->err_work);
-       cancel_delayed_work_sync(&ctrl->reconnect_work);
-
        nvme_rdma_teardown_io_queues(ctrl, shutdown);
        nvme_stop_admin_queue(&ctrl->ctrl);
        if (shutdown)
@@ -2304,6 +2309,7 @@ static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
        .submit_async_event     = nvme_rdma_submit_async_event,
        .delete_ctrl            = nvme_rdma_delete_ctrl,
        .get_address            = nvmf_get_address,
+       .stop_ctrl              = nvme_rdma_stop_ctrl,
 };
 
 /*
index bb67538..7a9e6ff 100644 (file)
@@ -1180,8 +1180,7 @@ done:
        } else if (ret < 0) {
                dev_err(queue->ctrl->ctrl.device,
                        "failed to send request %d\n", ret);
-               if (ret != -EPIPE && ret != -ECONNRESET)
-                       nvme_tcp_fail_request(queue->request);
+               nvme_tcp_fail_request(queue->request);
                nvme_tcp_done_send_req(queue);
        }
        return ret;
@@ -2194,9 +2193,6 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
 
 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 {
-       cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
-       cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
-
        nvme_tcp_teardown_io_queues(ctrl, shutdown);
        nvme_stop_admin_queue(ctrl);
        if (shutdown)
@@ -2236,6 +2232,12 @@ out_fail:
        nvme_tcp_reconnect_or_remove(ctrl);
 }
 
+static void nvme_tcp_stop_ctrl(struct nvme_ctrl *ctrl)
+{
+       cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
+       cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
+}
+
 static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl)
 {
        struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
@@ -2557,6 +2559,7 @@ static const struct nvme_ctrl_ops nvme_tcp_ctrl_ops = {
        .submit_async_event     = nvme_tcp_submit_async_event,
        .delete_ctrl            = nvme_tcp_delete_ctrl,
        .get_address            = nvmf_get_address,
+       .stop_ctrl              = nvme_tcp_stop_ctrl,
 };
 
 static bool
index e44b298..ff77c3d 100644 (file)
@@ -773,11 +773,31 @@ static ssize_t nvmet_passthru_io_timeout_store(struct config_item *item,
 }
 CONFIGFS_ATTR(nvmet_passthru_, io_timeout);
 
+static ssize_t nvmet_passthru_clear_ids_show(struct config_item *item,
+               char *page)
+{
+       return sprintf(page, "%u\n", to_subsys(item->ci_parent)->clear_ids);
+}
+
+static ssize_t nvmet_passthru_clear_ids_store(struct config_item *item,
+               const char *page, size_t count)
+{
+       struct nvmet_subsys *subsys = to_subsys(item->ci_parent);
+       unsigned int clear_ids;
+
+       if (kstrtouint(page, 0, &clear_ids))
+               return -EINVAL;
+       subsys->clear_ids = clear_ids;
+       return count;
+}
+CONFIGFS_ATTR(nvmet_passthru_, clear_ids);
+
 static struct configfs_attribute *nvmet_passthru_attrs[] = {
        &nvmet_passthru_attr_device_path,
        &nvmet_passthru_attr_enable,
        &nvmet_passthru_attr_admin_timeout,
        &nvmet_passthru_attr_io_timeout,
+       &nvmet_passthru_attr_clear_ids,
        NULL,
 };
 
index 90e7532..c27660a 100644 (file)
@@ -1374,6 +1374,12 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
        ctrl->port = req->port;
        ctrl->ops = req->ops;
 
+#ifdef CONFIG_NVME_TARGET_PASSTHRU
+       /* Loop targets clear namespace identifiers by default */
+       if (ctrl->port->disc_addr.trtype == NVMF_TRTYPE_LOOP)
+               subsys->clear_ids = 1;
+#endif
+
        INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
        INIT_LIST_HEAD(&ctrl->async_events);
        INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
index 6981875..2b3e571 100644 (file)
@@ -249,6 +249,7 @@ struct nvmet_subsys {
        struct config_group     passthru_group;
        unsigned int            admin_timeout;
        unsigned int            io_timeout;
+       unsigned int            clear_ids;
 #endif /* CONFIG_NVME_TARGET_PASSTHRU */
 
 #ifdef CONFIG_BLK_DEV_ZONED
index b1f7efa..6f39a29 100644 (file)
@@ -30,6 +30,53 @@ void nvmet_passthrough_override_cap(struct nvmet_ctrl *ctrl)
                ctrl->cap &= ~(1ULL << 43);
 }
 
+static u16 nvmet_passthru_override_id_descs(struct nvmet_req *req)
+{
+       struct nvmet_ctrl *ctrl = req->sq->ctrl;
+       u16 status = NVME_SC_SUCCESS;
+       int pos, len;
+       bool csi_seen = false;
+       void *data;
+       u8 csi;
+
+       if (!ctrl->subsys->clear_ids)
+               return status;
+
+       data = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+       if (!data)
+               return NVME_SC_INTERNAL;
+
+       status = nvmet_copy_from_sgl(req, 0, data, NVME_IDENTIFY_DATA_SIZE);
+       if (status)
+               goto out_free;
+
+       for (pos = 0; pos < NVME_IDENTIFY_DATA_SIZE; pos += len) {
+               struct nvme_ns_id_desc *cur = data + pos;
+
+               if (cur->nidl == 0)
+                       break;
+               if (cur->nidt == NVME_NIDT_CSI) {
+                       memcpy(&csi, cur + 1, NVME_NIDT_CSI_LEN);
+                       csi_seen = true;
+                       break;
+               }
+               len = sizeof(struct nvme_ns_id_desc) + cur->nidl;
+       }
+
+       memset(data, 0, NVME_IDENTIFY_DATA_SIZE);
+       if (csi_seen) {
+               struct nvme_ns_id_desc *cur = data;
+
+               cur->nidt = NVME_NIDT_CSI;
+               cur->nidl = NVME_NIDT_CSI_LEN;
+               memcpy(cur + 1, &csi, NVME_NIDT_CSI_LEN);
+       }
+       status = nvmet_copy_to_sgl(req, 0, data, NVME_IDENTIFY_DATA_SIZE);
+out_free:
+       kfree(data);
+       return status;
+}
+
 static u16 nvmet_passthru_override_id_ctrl(struct nvmet_req *req)
 {
        struct nvmet_ctrl *ctrl = req->sq->ctrl;
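
The descriptor walk in nvmet_passthru_override_id_descs() relies on the Namespace Identification Descriptor list layout: a 4-byte header (type, length, reserved) followed by nidl payload bytes, with a zero nidl terminating the list; only the CSI descriptor survives when clear_ids is set. A standalone model of that walk, using a local stand-in for the kernel's struct nvme_ns_id_desc:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct ns_id_desc {            /* stand-in for struct nvme_ns_id_desc */
        uint8_t nidt;
        uint8_t nidl;
        uint8_t rsvd[2];
};

#define NIDT_EUI64      1
#define NIDT_CSI        4
#define NIDT_CSI_LEN    1

int main(void)
{
        uint8_t data[64] = {0};
        struct ns_id_desc *d = (void *)data;
        uint8_t csi = 0;
        int csi_seen = 0;
        size_t pos;

        /* build: one EUI-64 descriptor (8 payload bytes), then a CSI one */
        d->nidt = NIDT_EUI64;
        d->nidl = 8;
        d = (void *)(data + sizeof(*d) + 8);
        d->nidt = NIDT_CSI;
        d->nidl = NIDT_CSI_LEN;

        for (pos = 0; pos < sizeof(data); ) {
                struct ns_id_desc *cur = (void *)(data + pos);

                if (!cur->nidl)         /* zero length terminates the list */
                        break;
                if (cur->nidt == NIDT_CSI) {
                        memcpy(&csi, cur + 1, NIDT_CSI_LEN);
                        csi_seen = 1;
                        break;
                }
                pos += sizeof(*cur) + cur->nidl;
        }
        printf("csi_seen=%d csi=%u\n", csi_seen, csi);  /* csi_seen=1 csi=0 */
        return 0;
}
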
@@ -152,6 +199,11 @@ static u16 nvmet_passthru_override_id_ns(struct nvmet_req *req)
         */
        id->mc = 0;
 
+       if (req->sq->ctrl->subsys->clear_ids) {
+               memset(id->nguid, 0, NVME_NIDT_NGUID_LEN);
+               memset(id->eui64, 0, NVME_NIDT_EUI64_LEN);
+       }
+
        status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
 
 out_free:
@@ -176,6 +228,9 @@ static void nvmet_passthru_execute_cmd_work(struct work_struct *w)
                case NVME_ID_CNS_NS:
                        nvmet_passthru_override_id_ns(req);
                        break;
+               case NVME_ID_CNS_NS_DESC_LIST:
+                       nvmet_passthru_override_id_descs(req);
+                       break;
                }
        } else if (status < 0)
                status = NVME_SC_INTERNAL;
index 2793554..0a95425 100644 (file)
@@ -405,7 +405,7 @@ err:
        return NVME_SC_INTERNAL;
 }
 
-static void nvmet_tcp_send_ddgst(struct ahash_request *hash,
+static void nvmet_tcp_calc_ddgst(struct ahash_request *hash,
                struct nvmet_tcp_cmd *cmd)
 {
        ahash_request_set_crypt(hash, cmd->req.sg,
@@ -413,23 +413,6 @@ static void nvmet_tcp_send_ddgst(struct ahash_request *hash,
        crypto_ahash_digest(hash);
 }
 
-static void nvmet_tcp_recv_ddgst(struct ahash_request *hash,
-               struct nvmet_tcp_cmd *cmd)
-{
-       struct scatterlist sg;
-       struct kvec *iov;
-       int i;
-
-       crypto_ahash_init(hash);
-       for (i = 0, iov = cmd->iov; i < cmd->nr_mapped; i++, iov++) {
-               sg_init_one(&sg, iov->iov_base, iov->iov_len);
-               ahash_request_set_crypt(hash, &sg, NULL, iov->iov_len);
-               crypto_ahash_update(hash);
-       }
-       ahash_request_set_crypt(hash, NULL, (void *)&cmd->exp_ddgst, 0);
-       crypto_ahash_final(hash);
-}
-
 static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd)
 {
        struct nvme_tcp_data_pdu *pdu = cmd->data_pdu;
@@ -454,7 +437,7 @@ static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd)
 
        if (queue->data_digest) {
                pdu->hdr.flags |= NVME_TCP_F_DDGST;
-               nvmet_tcp_send_ddgst(queue->snd_hash, cmd);
+               nvmet_tcp_calc_ddgst(queue->snd_hash, cmd);
        }
 
        if (cmd->queue->hdr_digest) {
@@ -1137,7 +1120,7 @@ static void nvmet_tcp_prep_recv_ddgst(struct nvmet_tcp_cmd *cmd)
 {
        struct nvmet_tcp_queue *queue = cmd->queue;
 
-       nvmet_tcp_recv_ddgst(queue->rcv_hash, cmd);
+       nvmet_tcp_calc_ddgst(queue->rcv_hash, cmd);
        queue->offset = 0;
        queue->left = NVME_TCP_DIGEST_LENGTH;
        queue->rcv_state = NVMET_TCP_RECV_DDGST;
index c94e24a..83d47ff 100644 (file)
@@ -236,11 +236,11 @@ int aspeed_pinmux_set_mux(struct pinctrl_dev *pctldev, unsigned int function,
                const struct aspeed_sig_expr **funcs;
                const struct aspeed_sig_expr ***prios;
 
-               pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name);
-
                if (!pdesc)
                        return -EINVAL;
 
+               pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name);
+
                prios = pdesc->prios;
 
                if (!prios)
index c0630f6..417e41b 100644 (file)
@@ -239,6 +239,7 @@ static const struct pinctrl_pin_desc imx93_pinctrl_pads[] = {
 static const struct imx_pinctrl_soc_info imx93_pinctrl_info = {
        .pins = imx93_pinctrl_pads,
        .npins = ARRAY_SIZE(imx93_pinctrl_pads),
+       .flags = ZERO_OFFSET_VALID,
        .gpr_compatible = "fsl,imx93-iomuxc-gpr",
 };
 
index 57a33fb..14bcca7 100644 (file)
@@ -1338,16 +1338,18 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, struct fwnode
        bank->secure_control = pctl->match_data->secure_control;
        spin_lock_init(&bank->lock);
 
-       /* create irq hierarchical domain */
-       bank->fwnode = fwnode;
+       if (pctl->domain) {
+               /* create irq hierarchical domain */
+               bank->fwnode = fwnode;
 
-       bank->domain = irq_domain_create_hierarchy(pctl->domain, 0,
-                                       STM32_GPIO_IRQ_LINE, bank->fwnode,
-                                       &stm32_gpio_domain_ops, bank);
+               bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, STM32_GPIO_IRQ_LINE,
+                                                          bank->fwnode, &stm32_gpio_domain_ops,
+                                                          bank);
 
-       if (!bank->domain) {
-               err = -ENODEV;
-               goto err_clk;
+               if (!bank->domain) {
+                       err = -ENODEV;
+                       goto err_clk;
+               }
        }
 
        err = gpiochip_add_data(&bank->gpio_chip, bank);
@@ -1510,6 +1512,8 @@ int stm32_pctl_probe(struct platform_device *pdev)
        pctl->domain = stm32_pctrl_get_irq_domain(pdev);
        if (IS_ERR(pctl->domain))
                return PTR_ERR(pctl->domain);
+       if (!pctl->domain)
+               dev_warn(dev, "pinctrl without interrupt support\n");
 
        /* hwspinlock is optional */
        hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
index 4ada803..b5c1a8f 100644 (file)
@@ -158,26 +158,26 @@ static const struct sunxi_desc_pin sun8i_a83t_pins[] = {
        SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 14),
                  SUNXI_FUNCTION(0x0, "gpio_in"),
                  SUNXI_FUNCTION(0x1, "gpio_out"),
-                 SUNXI_FUNCTION(0x2, "nand"),          /* DQ6 */
+                 SUNXI_FUNCTION(0x2, "nand0"),         /* DQ6 */
                  SUNXI_FUNCTION(0x3, "mmc2")),         /* D6 */
        SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 15),
                  SUNXI_FUNCTION(0x0, "gpio_in"),
                  SUNXI_FUNCTION(0x1, "gpio_out"),
-                 SUNXI_FUNCTION(0x2, "nand"),          /* DQ7 */
+                 SUNXI_FUNCTION(0x2, "nand0"),         /* DQ7 */
                  SUNXI_FUNCTION(0x3, "mmc2")),         /* D7 */
        SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 16),
                  SUNXI_FUNCTION(0x0, "gpio_in"),
                  SUNXI_FUNCTION(0x1, "gpio_out"),
-                 SUNXI_FUNCTION(0x2, "nand"),          /* DQS */
+                 SUNXI_FUNCTION(0x2, "nand0"),         /* DQS */
                  SUNXI_FUNCTION(0x3, "mmc2")),         /* RST */
        SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 17),
                  SUNXI_FUNCTION(0x0, "gpio_in"),
                  SUNXI_FUNCTION(0x1, "gpio_out"),
-                 SUNXI_FUNCTION(0x2, "nand")),         /* CE2 */
+                 SUNXI_FUNCTION(0x2, "nand0")),        /* CE2 */
        SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 18),
                  SUNXI_FUNCTION(0x0, "gpio_in"),
                  SUNXI_FUNCTION(0x1, "gpio_out"),
-                 SUNXI_FUNCTION(0x2, "nand")),         /* CE3 */
+                 SUNXI_FUNCTION(0x2, "nand0")),        /* CE3 */
        /* Hole */
        SUNXI_PIN(SUNXI_PINCTRL_PIN(D, 2),
                  SUNXI_FUNCTION(0x0, "gpio_in"),
index d9327d7..dd92840 100644 (file)
@@ -544,6 +544,8 @@ static int sunxi_pconf_set(struct pinctrl_dev *pctldev, unsigned pin,
        struct sunxi_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev);
        int i;
 
+       pin -= pctl->desc->pin_base;
+
        for (i = 0; i < num_configs; i++) {
                enum pin_config_param param;
                unsigned long flags;
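
The one-line fix above converts the global pin number handed in by the pinctrl core into the bank-local index the sunxi register macros expect. A toy illustration with hypothetical numbers:

#include <stdio.h>

int main(void)
{
        unsigned int pin_base = 256;    /* hypothetical controller offset */
        unsigned int pin = 260;         /* global number from the pinctrl core */

        pin -= pin_base;                /* bank-local index used for registers */
        printf("local pin = %u\n", pin);        /* 4 */
        return 0;
}
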
index 2923daf..7b9c107 100644 (file)
@@ -890,6 +890,7 @@ nvsw_sn2201_create_static_devices(struct nvsw_sn2201 *nvsw_sn2201,
                                  int size)
 {
        struct mlxreg_hotplug_device *dev = devs;
+       int ret;
        int i;
 
        /* Create I2C static devices. */
@@ -901,6 +902,7 @@ nvsw_sn2201_create_static_devices(struct nvsw_sn2201 *nvsw_sn2201,
                                dev->nr, dev->brdinfo->addr);
 
                        dev->adapter = NULL;
+                       ret = PTR_ERR(dev->client);
                        goto fail_create_static_devices;
                }
        }
@@ -914,7 +916,7 @@ fail_create_static_devices:
                dev->client = NULL;
                dev->adapter = NULL;
        }
-       return IS_ERR(dev->client);
+       return ret;
 }
 
 static void nvsw_sn2201_destroy_static_devices(struct nvsw_sn2201 *nvsw_sn2201,
index f08ad85..bc4013e 100644 (file)
@@ -945,6 +945,8 @@ config PANASONIC_LAPTOP
        tristate "Panasonic Laptop Extras"
        depends on INPUT && ACPI
        depends on BACKLIGHT_CLASS_DEVICE
+       depends on ACPI_VIDEO=n || ACPI_VIDEO
+       depends on SERIO_I8042 || SERIO_I8042 = n
        select INPUT_SPARSEKMAP
        help
          This driver adds support for access to backlight control and hotkeys
index 0d8cb22..bc7020e 100644 (file)
@@ -89,6 +89,7 @@ enum hp_wmi_event_ids {
        HPWMI_BACKLIT_KB_BRIGHTNESS     = 0x0D,
        HPWMI_PEAKSHIFT_PERIOD          = 0x0F,
        HPWMI_BATTERY_CHARGE_PERIOD     = 0x10,
+       HPWMI_SANITIZATION_MODE         = 0x17,
 };
 
 /*
@@ -853,6 +854,8 @@ static void hp_wmi_notify(u32 value, void *context)
                break;
        case HPWMI_BATTERY_CHARGE_PERIOD:
                break;
+       case HPWMI_SANITIZATION_MODE:
+               break;
        default:
                pr_info("Unknown event_id - %d - 0x%x\n", event_id, event_data);
                break;
index 3ccb7b7..abd0c81 100644 (file)
@@ -152,6 +152,10 @@ static bool no_bt_rfkill;
 module_param(no_bt_rfkill, bool, 0444);
 MODULE_PARM_DESC(no_bt_rfkill, "No rfkill for bluetooth.");
 
+static bool allow_v4_dytc;
+module_param(allow_v4_dytc, bool, 0444);
+MODULE_PARM_DESC(allow_v4_dytc, "Enable DYTC version 4 platform-profile support.");
+
 /*
  * ACPI Helpers
  */
@@ -871,12 +875,18 @@ static void dytc_profile_refresh(struct ideapad_private *priv)
 static const struct dmi_system_id ideapad_dytc_v4_allow_table[] = {
        {
                /* Ideapad 5 Pro 16ACH6 */
-               .ident = "LENOVO 82L5",
                .matches = {
                        DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
                        DMI_MATCH(DMI_PRODUCT_NAME, "82L5")
                }
        },
+       {
+               /* Ideapad 5 15ITL05 */
+               .matches = {
+                       DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+                       DMI_MATCH(DMI_PRODUCT_VERSION, "IdeaPad 5 15ITL05")
+               }
+       },
        {}
 };
 
@@ -901,13 +911,16 @@ static int ideapad_dytc_profile_init(struct ideapad_private *priv)
 
        dytc_version = (output >> DYTC_QUERY_REV_BIT) & 0xF;
 
-       if (dytc_version < 5) {
-               if (dytc_version < 4 || !dmi_check_system(ideapad_dytc_v4_allow_table)) {
-                       dev_info(&priv->platform_device->dev,
-                                "DYTC_VERSION is less than 4 or is not allowed: %d\n",
-                                dytc_version);
-                       return -ENODEV;
-               }
+       if (dytc_version < 4) {
+               dev_info(&priv->platform_device->dev, "DYTC_VERSION < 4 is not supported\n");
+               return -ENODEV;
+       }
+
+       if (dytc_version < 5 &&
+           !(allow_v4_dytc || dmi_check_system(ideapad_dytc_v4_allow_table))) {
+               dev_info(&priv->platform_device->dev,
+                        "DYTC_VERSION 4 support may not work. Pass ideapad_laptop.allow_v4_dytc=Y on the kernel command line to enable\n");
+               return -ENODEV;
        }
 
        priv->dytc = kzalloc(sizeof(*priv->dytc), GFP_KERNEL);
index 40183bd..a1fe1e0 100644 (file)
@@ -1911,6 +1911,7 @@ static const struct x86_cpu_id intel_pmc_core_ids[] = {
        X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L,      &icl_reg_map),
        X86_MATCH_INTEL_FAM6_MODEL(ROCKETLAKE,          &tgl_reg_map),
        X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L,         &tgl_reg_map),
+       X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N,         &tgl_reg_map),
        X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE,           &adl_reg_map),
        X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P,        &tgl_reg_map),
        {}
index 37850d0..615e39c 100644 (file)
  *             - v0.1  start from toshiba_acpi driver written by John Belmonte
  */
 
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
+#include <linux/acpi.h>
 #include <linux/backlight.h>
 #include <linux/ctype.h>
-#include <linux/seq_file.h>
-#include <linux/uaccess.h>
-#include <linux/slab.h>
-#include <linux/acpi.h>
+#include <linux/i8042.h>
+#include <linux/init.h>
 #include <linux/input.h>
 #include <linux/input/sparse-keymap.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/platform_device.h>
-
+#include <linux/seq_file.h>
+#include <linux/serio.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+#include <acpi/video.h>
 
 MODULE_AUTHOR("Hiroshi Miura <miura@da-cha.org>");
 MODULE_AUTHOR("David Bronaugh <dbronaugh@linuxboxen.org>");
@@ -241,6 +243,42 @@ struct pcc_acpi {
        struct platform_device  *platform;
 };
 
+/*
+ * On some Panasonic models the volume up / down / mute keys send duplicate
+ * keypress events over the PS/2 kbd interface; filter these out.
+ */
+static bool panasonic_i8042_filter(unsigned char data, unsigned char str,
+                                  struct serio *port)
+{
+       static bool extended;
+
+       if (str & I8042_STR_AUXDATA)
+               return false;
+
+       if (data == 0xe0) {
+               extended = true;
+               return true;
+       } else if (extended) {
+               extended = false;
+
+               switch (data & 0x7f) {
+               case 0x20: /* e0 20 / e0 a0, Volume Mute press / release */
+               case 0x2e: /* e0 2e / e0 ae, Volume Down press / release */
+               case 0x30: /* e0 30 / e0 b0, Volume Up press / release */
+                       return true;
+               default:
+                       /*
+                        * Report the previously filtered e0 before continuing
+                        * with the next non-filtered byte.
+                        */
+                       serio_interrupt(port, 0xe0, 0);
+                       return false;
+               }
+       }
+
+       return false;
+}
+
 /* method access functions */
 static int acpi_pcc_write_sset(struct pcc_acpi *pcc, int func, int val)
 {
@@ -762,6 +800,8 @@ static void acpi_pcc_generate_keyinput(struct pcc_acpi *pcc)
        struct input_dev *hotk_input_dev = pcc->input_dev;
        int rc;
        unsigned long long result;
+       unsigned int key;
+       unsigned int updown;
 
        rc = acpi_evaluate_integer(pcc->handle, METHOD_HKEY_QUERY,
                                   NULL, &result);
@@ -770,20 +810,27 @@ static void acpi_pcc_generate_keyinput(struct pcc_acpi *pcc)
                return;
        }
 
+       key = result & 0xf;
+       updown = result & 0x80; /* 0x80 == key down; 0x00 == key up */
+
        /* hack: some firmware sends no key down for sleep / hibernate */
-       if ((result & 0xf) == 0x7 || (result & 0xf) == 0xa) {
-               if (result & 0x80)
+       if (key == 7 || key == 10) {
+               if (updown)
                        sleep_keydown_seen = 1;
                if (!sleep_keydown_seen)
                        sparse_keymap_report_event(hotk_input_dev,
-                                       result & 0xf, 0x80, false);
+                                       key, 0x80, false);
        }
 
-       if ((result & 0xf) == 0x7 || (result & 0xf) == 0x9 || (result & 0xf) == 0xa) {
-               if (!sparse_keymap_report_event(hotk_input_dev,
-                                               result & 0xf, result & 0x80, false))
-                       pr_err("Unknown hotkey event: 0x%04llx\n", result);
-       }
+       /*
+        * Don't report brightness key-presses if they are also reported
+        * by the ACPI video bus.
+        */
+       if ((key == 1 || key == 2) && acpi_video_handles_brightness_key_presses())
+               return;
+
+       if (!sparse_keymap_report_event(hotk_input_dev, key, updown, false))
+               pr_err("Unknown hotkey event: 0x%04llx\n", result);
 }
 
 static void acpi_pcc_hotkey_notify(struct acpi_device *device, u32 event)
@@ -997,6 +1044,7 @@ static int acpi_pcc_hotkey_add(struct acpi_device *device)
                pcc->platform = NULL;
        }
 
+       i8042_install_filter(panasonic_i8042_filter);
        return 0;
 
 out_platform:
@@ -1020,6 +1068,8 @@ static int acpi_pcc_hotkey_remove(struct acpi_device *device)
        if (!device || !pcc)
                return -EINVAL;
 
+       i8042_remove_filter(panasonic_i8042_filter);
+
        if (pcc->platform) {
                device_remove_file(&pcc->platform->dev, &dev_attr_cdpower);
                platform_device_unregister(pcc->platform);
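
The i8042 filter added above keys off the 0xe0 prefix: it swallows the prefix byte, drops the three volume scancodes in both press (e0 xx) and release (e0 xx|0x80) form, and replays the held prefix before any other extended key. A compilable model of that little state machine, with emit() as a stand-in for serio_interrupt():

#include <stdbool.h>
#include <stdio.h>

static void emit(unsigned char b) { printf("pass %02x\n", b); }

static bool filter(unsigned char data)
{
        static bool extended;

        if (data == 0xe0) {
                extended = true;
                return true;            /* hold the prefix */
        }
        if (extended) {
                extended = false;
                switch (data & 0x7f) {
                case 0x20:              /* volume mute */
                case 0x2e:              /* volume down */
                case 0x30:              /* volume up */
                        return true;    /* duplicate event, drop it */
                default:
                        emit(0xe0);     /* replay the held prefix */
                        return false;
                }
        }
        return false;
}

int main(void)
{
        /* vol-up press, vol-up release, cursor-up, plain 'a' */
        unsigned char stream[] = { 0xe0, 0x30, 0xe0, 0xb0, 0xe0, 0x48, 0x1e };

        for (unsigned int i = 0; i < sizeof(stream); i++)
                if (!filter(stream[i]))
                        emit(stream[i]);
        return 0;                       /* passes only e0 48 and 1e */
}
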
index e6cb4a1..a8b3830 100644 (file)
@@ -4529,6 +4529,7 @@ static void thinkpad_acpi_amd_s2idle_restore(void)
        iounmap(addr);
 cleanup_resource:
        release_resource(res);
+       kfree(res);
 }
 
 static struct acpi_s2idle_dev_ops thinkpad_acpi_s2idle_dev_ops = {
@@ -10299,21 +10300,15 @@ static struct ibm_struct proxsensor_driver_data = {
 #define DYTC_DISABLE_CQL DYTC_SET_COMMAND(DYTC_FUNCTION_CQL, DYTC_MODE_MMC_BALANCE, 0)
 #define DYTC_ENABLE_CQL DYTC_SET_COMMAND(DYTC_FUNCTION_CQL, DYTC_MODE_MMC_BALANCE, 1)
 
-enum dytc_profile_funcmode {
-       DYTC_FUNCMODE_NONE = 0,
-       DYTC_FUNCMODE_MMC,
-       DYTC_FUNCMODE_PSC,
-};
-
-static enum dytc_profile_funcmode dytc_profile_available;
 static enum platform_profile_option dytc_current_profile;
 static atomic_t dytc_ignore_event = ATOMIC_INIT(0);
 static DEFINE_MUTEX(dytc_mutex);
+static int dytc_capabilities;
 static bool dytc_mmc_get_available;
 
 static int convert_dytc_to_profile(int dytcmode, enum platform_profile_option *profile)
 {
-       if (dytc_profile_available == DYTC_FUNCMODE_MMC) {
+       if (dytc_capabilities & BIT(DYTC_FC_MMC)) {
                switch (dytcmode) {
                case DYTC_MODE_MMC_LOWPOWER:
                        *profile = PLATFORM_PROFILE_LOW_POWER;
@@ -10330,7 +10325,7 @@ static int convert_dytc_to_profile(int dytcmode, enum platform_profile_option *p
                }
                return 0;
        }
-       if (dytc_profile_available == DYTC_FUNCMODE_PSC) {
+       if (dytc_capabilities & BIT(DYTC_FC_PSC)) {
                switch (dytcmode) {
                case DYTC_MODE_PSC_LOWPOWER:
                        *profile = PLATFORM_PROFILE_LOW_POWER;
@@ -10352,21 +10347,21 @@ static int convert_profile_to_dytc(enum platform_profile_option profile, int *pe
 {
        switch (profile) {
        case PLATFORM_PROFILE_LOW_POWER:
-               if (dytc_profile_available == DYTC_FUNCMODE_MMC)
+               if (dytc_capabilities & BIT(DYTC_FC_MMC))
                        *perfmode = DYTC_MODE_MMC_LOWPOWER;
-               else if (dytc_profile_available == DYTC_FUNCMODE_PSC)
+               else if (dytc_capabilities & BIT(DYTC_FC_PSC))
                        *perfmode = DYTC_MODE_PSC_LOWPOWER;
                break;
        case PLATFORM_PROFILE_BALANCED:
-               if (dytc_profile_available == DYTC_FUNCMODE_MMC)
+               if (dytc_capabilities & BIT(DYTC_FC_MMC))
                        *perfmode = DYTC_MODE_MMC_BALANCE;
-               else if (dytc_profile_available == DYTC_FUNCMODE_PSC)
+               else if (dytc_capabilities & BIT(DYTC_FC_PSC))
                        *perfmode = DYTC_MODE_PSC_BALANCE;
                break;
        case PLATFORM_PROFILE_PERFORMANCE:
-               if (dytc_profile_available == DYTC_FUNCMODE_MMC)
+               if (dytc_capabilities & BIT(DYTC_FC_MMC))
                        *perfmode = DYTC_MODE_MMC_PERFORM;
-               else if (dytc_profile_available == DYTC_FUNCMODE_PSC)
+               else if (dytc_capabilities & BIT(DYTC_FC_PSC))
                        *perfmode = DYTC_MODE_PSC_PERFORM;
                break;
        default: /* Unknown profile */
@@ -10445,7 +10440,7 @@ static int dytc_profile_set(struct platform_profile_handler *pprof,
        if (err)
                goto unlock;
 
-       if (dytc_profile_available == DYTC_FUNCMODE_MMC) {
+       if (dytc_capabilities & BIT(DYTC_FC_MMC)) {
                if (profile == PLATFORM_PROFILE_BALANCED) {
                        /*
                         * To get back to balanced mode we need to issue a reset command.
@@ -10464,7 +10459,7 @@ static int dytc_profile_set(struct platform_profile_handler *pprof,
                                goto unlock;
                }
        }
-       if (dytc_profile_available == DYTC_FUNCMODE_PSC) {
+       if (dytc_capabilities & BIT(DYTC_FC_PSC)) {
                err = dytc_command(DYTC_SET_COMMAND(DYTC_FUNCTION_PSC, perfmode, 1), &output);
                if (err)
                        goto unlock;
@@ -10483,12 +10478,12 @@ static void dytc_profile_refresh(void)
        int perfmode;
 
        mutex_lock(&dytc_mutex);
-       if (dytc_profile_available == DYTC_FUNCMODE_MMC) {
+       if (dytc_capabilities & BIT(DYTC_FC_MMC)) {
                if (dytc_mmc_get_available)
                        err = dytc_command(DYTC_CMD_MMC_GET, &output);
                else
                        err = dytc_cql_command(DYTC_CMD_GET, &output);
-       } else if (dytc_profile_available == DYTC_FUNCMODE_PSC)
+       } else if (dytc_capabilities & BIT(DYTC_FC_PSC))
                err = dytc_command(DYTC_CMD_GET, &output);
 
        mutex_unlock(&dytc_mutex);
@@ -10517,7 +10512,6 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
        set_bit(PLATFORM_PROFILE_BALANCED, dytc_profile.choices);
        set_bit(PLATFORM_PROFILE_PERFORMANCE, dytc_profile.choices);
 
-       dytc_profile_available = DYTC_FUNCMODE_NONE;
        err = dytc_command(DYTC_CMD_QUERY, &output);
        if (err)
                return err;
@@ -10530,13 +10524,12 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
                return -ENODEV;
 
        /* Check what capabilities are supported */
-       err = dytc_command(DYTC_CMD_FUNC_CAP, &output);
+       err = dytc_command(DYTC_CMD_FUNC_CAP, &dytc_capabilities);
        if (err)
                return err;
 
-       if (output & BIT(DYTC_FC_MMC)) { /* MMC MODE */
-               dytc_profile_available = DYTC_FUNCMODE_MMC;
-
+       if (dytc_capabilities & BIT(DYTC_FC_MMC)) { /* MMC MODE */
+               pr_debug("MMC is supported\n");
                /*
                 * Check if MMC_GET functionality available
                 * Version > 6 and return success from MMC_GET command
@@ -10547,8 +10540,13 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
                        if (!err && ((output & DYTC_ERR_MASK) == DYTC_ERR_SUCCESS))
                                dytc_mmc_get_available = true;
                }
-       } else if (output & BIT(DYTC_FC_PSC)) { /* PSC MODE */
-               dytc_profile_available = DYTC_FUNCMODE_PSC;
+       } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */
+               /* Support for this only works on AMD platforms */
+               if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
+                       dbg_printk(TPACPI_DBG_INIT, "PSC is not supported on Intel platforms\n");
+                       return -ENODEV;
+               }
+               pr_debug("PSC is supported\n");
        } else {
                dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n");
                return -ENODEV;
@@ -10574,7 +10572,6 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
 
 static void dytc_profile_exit(void)
 {
-       dytc_profile_available = DYTC_FUNCMODE_NONE;
        platform_profile_remove();
 }
 
index 7dff94a..ef6e47d 100644 (file)
@@ -723,19 +723,19 @@ static const struct regulator_desc pms405_pldo600 = {
 
 static const struct regulator_desc mp5496_smpa2 = {
        .linear_ranges = (struct linear_range[]) {
-               REGULATOR_LINEAR_RANGE(725000, 0, 27, 12500),
+               REGULATOR_LINEAR_RANGE(600000, 0, 127, 12500),
        },
        .n_linear_ranges = 1,
-       .n_voltages = 28,
+       .n_voltages = 128,
        .ops = &rpm_mp5496_ops,
 };
 
 static const struct regulator_desc mp5496_ldoa2 = {
        .linear_ranges = (struct linear_range[]) {
-               REGULATOR_LINEAR_RANGE(1800000, 0, 60, 25000),
+               REGULATOR_LINEAR_RANGE(800000, 0, 127, 25000),
        },
        .n_linear_ranges = 1,
-       .n_voltages = 61,
+       .n_voltages = 128,
        .ops = &rpm_mp5496_ops,
 };
 
index cb24917..ae1d6ee 100644 (file)
@@ -60,7 +60,7 @@ static LIST_HEAD(sclp_reg_list);
 /* List of queued requests. */
 static LIST_HEAD(sclp_req_queue);
 
-/* Data for read and and init requests. */
+/* Data for read and init requests. */
 static struct sclp_req sclp_read_req;
 static struct sclp_req sclp_init_req;
 static void *sclp_read_sccb;
index 97e51c3..161d3b1 100644 (file)
@@ -1136,8 +1136,13 @@ static void virtio_ccw_int_handler(struct ccw_device *cdev,
                        vcdev->err = -EIO;
        }
        virtio_ccw_check_activity(vcdev, activity);
-       /* Interrupts are disabled here */
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
+       /*
+        * Paired with virtio_ccw_synchronize_cbs() and interrupts are
+        * disabled here.
+        */
        read_lock(&vcdev->irq_lock);
+#endif
        for_each_set_bit(i, indicators(vcdev),
                         sizeof(*indicators(vcdev)) * BITS_PER_BYTE) {
                /* The bit clear must happen before the vring kick. */
@@ -1146,7 +1151,9 @@ static void virtio_ccw_int_handler(struct ccw_device *cdev,
                vq = virtio_ccw_vq_by_ind(vcdev, i);
                vring_interrupt(0, vq);
        }
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
        read_unlock(&vcdev->irq_lock);
+#endif
        if (test_bit(0, indicators2(vcdev))) {
                virtio_config_changed(&vcdev->vdev);
                clear_bit(0, indicators2(vcdev));
index 7d819fc..eb86afb 100644 (file)
@@ -2782,6 +2782,7 @@ static int slave_configure_v3_hw(struct scsi_device *sdev)
        struct hisi_hba *hisi_hba = shost_priv(shost);
        struct device *dev = hisi_hba->dev;
        int ret = sas_slave_configure(sdev);
+       unsigned int max_sectors;
 
        if (ret)
                return ret;
@@ -2799,6 +2800,12 @@ static int slave_configure_v3_hw(struct scsi_device *sdev)
                }
        }
 
+       /* Set according to IOMMU IOVA caching limit */
+       max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
+                           (PAGE_SIZE * 32) >> SECTOR_SHIFT);
+
+       blk_queue_max_hw_sectors(sdev->request_queue, max_sectors);
+
        return 0;
 }
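
As a worked example of the cap above (assuming 4 KiB pages): 32 pages is 128 KiB, i.e. 256 sectors of 512 bytes, matching the largest IOVA size the IOMMU rcache serves. A sketch of the min_t() arithmetic with a hypothetical hardware limit:

#include <stdio.h>

#define PAGE_SIZE       4096UL  /* assumption: 4 KiB pages */
#define SECTOR_SHIFT    9

int main(void)
{
        unsigned long hw_max = 1024;    /* hypothetical queue_max_hw_sectors() */
        unsigned long cap = (PAGE_SIZE * 32) >> SECTOR_SHIFT;
        unsigned long max_sectors = hw_max < cap ? hw_max : cap;

        printf("cap=%lu sectors (%lu KiB), max_sectors=%lu\n",
               cap, cap / 2, max_sectors);      /* 256 (128 KiB), 256 */
        return 0;
}
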
 
index d0eab57..00684e1 100644 (file)
@@ -160,8 +160,8 @@ static void ibmvfc_npiv_logout(struct ibmvfc_host *);
 static void ibmvfc_tgt_implicit_logout_and_del(struct ibmvfc_target *);
 static void ibmvfc_tgt_move_login(struct ibmvfc_target *);
 
-static void ibmvfc_release_sub_crqs(struct ibmvfc_host *);
-static void ibmvfc_init_sub_crqs(struct ibmvfc_host *);
+static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *);
+static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *);
 
 static const char *unknown_error = "unknown error";
 
@@ -917,7 +917,7 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost)
        struct vio_dev *vdev = to_vio_dev(vhost->dev);
        unsigned long flags;
 
-       ibmvfc_release_sub_crqs(vhost);
+       ibmvfc_dereg_sub_crqs(vhost);
 
        /* Re-enable the CRQ */
        do {
@@ -936,7 +936,7 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost)
        spin_unlock(vhost->crq.q_lock);
        spin_unlock_irqrestore(vhost->host->host_lock, flags);
 
-       ibmvfc_init_sub_crqs(vhost);
+       ibmvfc_reg_sub_crqs(vhost);
 
        return rc;
 }
@@ -955,7 +955,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
        struct vio_dev *vdev = to_vio_dev(vhost->dev);
        struct ibmvfc_queue *crq = &vhost->crq;
 
-       ibmvfc_release_sub_crqs(vhost);
+       ibmvfc_dereg_sub_crqs(vhost);
 
        /* Close the CRQ */
        do {
@@ -988,7 +988,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
        spin_unlock(vhost->crq.q_lock);
        spin_unlock_irqrestore(vhost->host->host_lock, flags);
 
-       ibmvfc_init_sub_crqs(vhost);
+       ibmvfc_reg_sub_crqs(vhost);
 
        return rc;
 }
@@ -5682,6 +5682,8 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost,
        queue->cur = 0;
        queue->fmt = fmt;
        queue->size = PAGE_SIZE / fmt_size;
+
+       queue->vhost = vhost;
        return 0;
 }
 
@@ -5757,9 +5759,6 @@ static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
 
        ENTER;
 
-       if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT))
-               return -ENOMEM;
-
        rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE,
                           &scrq->cookie, &scrq->hw_irq);
 
@@ -5790,7 +5789,6 @@ static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
        }
 
        scrq->hwq_id = index;
-       scrq->vhost = vhost;
 
        LEAVE;
        return 0;
@@ -5800,7 +5798,6 @@ irq_failed:
                rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address, scrq->cookie);
        } while (rtas_busy_delay(rc));
 reg_failed:
-       ibmvfc_free_queue(vhost, scrq);
        LEAVE;
        return rc;
 }
@@ -5826,12 +5823,50 @@ static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
        if (rc)
                dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc);
 
-       ibmvfc_free_queue(vhost, scrq);
+       /* Clean out the queue */
+       memset(scrq->msgs.crq, 0, PAGE_SIZE);
+       scrq->cur = 0;
+
+       LEAVE;
+}
+
+static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost)
+{
+       int i, j;
+
+       ENTER;
+       if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs)
+       { PCI_DEVICE(0x1344, 0x5407), /* Micron Technology Inc NVMe SSD */
+               .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+       for (i = 0; i < nr_scsi_hw_queues; i++) {
+               if (ibmvfc_register_scsi_channel(vhost, i)) {
+                       for (j = i; j > 0; j--)
+                               ibmvfc_deregister_scsi_channel(vhost, j - 1);
+                       vhost->do_enquiry = 0;
+                       return;
+               }
+       }
+
+       LEAVE;
+}
+
+static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost)
+{
+       int i;
+
+       ENTER;
+       if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs)
+               return;
+
+       for (i = 0; i < nr_scsi_hw_queues; i++)
+               ibmvfc_deregister_scsi_channel(vhost, i);
+
        LEAVE;
 }
 
 static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
 {
+       struct ibmvfc_queue *scrq;
        int i, j;
 
        ENTER;
@@ -5847,30 +5882,41 @@ static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
        }
 
        for (i = 0; i < nr_scsi_hw_queues; i++) {
-               if (ibmvfc_register_scsi_channel(vhost, i)) {
-                       for (j = i; j > 0; j--)
-                               ibmvfc_deregister_scsi_channel(vhost, j - 1);
+               scrq = &vhost->scsi_scrqs.scrqs[i];
+               if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT)) {
+                       for (j = i; j > 0; j--) {
+                               scrq = &vhost->scsi_scrqs.scrqs[j - 1];
+                               ibmvfc_free_queue(vhost, scrq);
+                       }
                        kfree(vhost->scsi_scrqs.scrqs);
                        vhost->scsi_scrqs.scrqs = NULL;
                        vhost->scsi_scrqs.active_queues = 0;
                        vhost->do_enquiry = 0;
-                       break;
+                       vhost->mq_enabled = 0;
+                       return;
                }
        }
 
+       ibmvfc_reg_sub_crqs(vhost);
+
        LEAVE;
 }
 
 static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
 {
+       struct ibmvfc_queue *scrq;
        int i;
 
        ENTER;
        if (!vhost->scsi_scrqs.scrqs)
                return;
 
-       for (i = 0; i < nr_scsi_hw_queues; i++)
-               ibmvfc_deregister_scsi_channel(vhost, i);
+       ibmvfc_dereg_sub_crqs(vhost);
+
+       for (i = 0; i < nr_scsi_hw_queues; i++) {
+               scrq = &vhost->scsi_scrqs.scrqs[i];
+               ibmvfc_free_queue(vhost, scrq);
+       }
 
        kfree(vhost->scsi_scrqs.scrqs);
        vhost->scsi_scrqs.scrqs = NULL;
index 3718406..c39a245 100644 (file)
@@ -789,6 +789,7 @@ struct ibmvfc_queue {
        spinlock_t _lock;
        spinlock_t *q_lock;
 
+       struct ibmvfc_host *vhost;
        struct ibmvfc_event_pool evt_pool;
        struct list_head sent;
        struct list_head free;
@@ -797,7 +798,6 @@ struct ibmvfc_queue {
        union ibmvfc_iu cancel_rsp;
 
        /* Sub-CRQ fields */
-       struct ibmvfc_host *vhost;
        unsigned long cookie;
        unsigned long vios_cookie;
        unsigned long hw_irq;
index 1f423f7..b8a76b8 100644 (file)
@@ -2826,6 +2826,24 @@ static void zbc_open_zone(struct sdebug_dev_info *devip,
        }
 }
 
+static inline void zbc_set_zone_full(struct sdebug_dev_info *devip,
+                                    struct sdeb_zone_state *zsp)
+{
+       switch (zsp->z_cond) {
+       case ZC2_IMPLICIT_OPEN:
+               devip->nr_imp_open--;
+               break;
+       case ZC3_EXPLICIT_OPEN:
+               devip->nr_exp_open--;
+               break;
+       default:
+               WARN_ONCE(true, "Invalid zone %llu condition %x\n",
+                         zsp->z_start, zsp->z_cond);
+               break;
+       }
+       zsp->z_cond = ZC5_FULL;
+}
+
 static void zbc_inc_wp(struct sdebug_dev_info *devip,
                       unsigned long long lba, unsigned int num)
 {
@@ -2838,7 +2856,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip,
        if (zsp->z_type == ZBC_ZTYPE_SWR) {
                zsp->z_wp += num;
                if (zsp->z_wp >= zend)
-                       zsp->z_cond = ZC5_FULL;
+                       zbc_set_zone_full(devip, zsp);
                return;
        }
 
@@ -2857,7 +2875,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip,
                        n = num;
                }
                if (zsp->z_wp >= zend)
-                       zsp->z_cond = ZC5_FULL;
+                       zbc_set_zone_full(devip, zsp);
 
                num -= n;
                lba += n;
index 2c0dd64..5d21f07 100644 (file)
@@ -212,7 +212,12 @@ iscsi_create_endpoint(int dd_size)
                return NULL;
 
        mutex_lock(&iscsi_ep_idr_mutex);
-       id = idr_alloc(&iscsi_ep_idr, ep, 0, -1, GFP_NOIO);
+
+       /*
+        * First endpoint id should be 1 to comply with user space
+        * applications (iscsid).
+        */
+       id = idr_alloc(&iscsi_ep_idr, ep, 1, -1, GFP_NOIO);
        if (id < 0) {
                mutex_unlock(&iscsi_ep_idr_mutex);
                printk(KERN_ERR "Could not allocate endpoint ID. Error %d.\n",
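
idr_alloc() hands out the lowest free id not below its start argument, so raising start from 0 to 1 guarantees iscsid never sees endpoint id 0, which user space treats as invalid. A minimal model of that allocation policy:

#include <stdio.h>

static int used[8];

static int alloc_id(int start)
{
        for (int id = start; id < 8; id++) {
                if (!used[id]) {
                        used[id] = 1;
                        return id;
                }
        }
        return -1;      /* no space */
}

int main(void)
{
        printf("first ep id = %d\n", alloc_id(1));      /* 1, id 0 stays reserved */
        printf("next ep id  = %d\n", alloc_id(1));      /* 2 */
        return 0;
}
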
index ca35309..fe000da 100644 (file)
@@ -1844,7 +1844,7 @@ static struct scsi_host_template scsi_driver = {
        .cmd_per_lun =          2048,
        .this_id =              -1,
        /* Ensure there are no gaps in presented sgls */
-       .virt_boundary_mask =   PAGE_SIZE-1,
+       .virt_boundary_mask =   HV_HYP_PAGE_SIZE - 1,
        .no_write_same =        1,
        .track_queue_depth =    1,
        .change_queue_depth =   storvsc_change_queue_depth,
@@ -1895,6 +1895,7 @@ static int storvsc_probe(struct hv_device *device,
        int target = 0;
        struct storvsc_device *stor_device;
        int max_sub_channels = 0;
+       u32 max_xfer_bytes;
 
        /*
         * We support sub-channels for storage on SCSI and FC controllers.
@@ -1968,12 +1969,28 @@ static int storvsc_probe(struct hv_device *device,
        }
        /* max cmd length */
        host->max_cmd_len = STORVSC_MAX_CMD_LEN;
-
        /*
-        * set the table size based on the info we got
-        * from the host.
+        * Any reasonable Hyper-V configuration should provide a
+        * max_transfer_bytes value aligned to HV_HYP_PAGE_SIZE; round
+        * it down anyway to guard against any odd value.
+        */
+       max_xfer_bytes = round_down(stor_device->max_transfer_bytes, HV_HYP_PAGE_SIZE);
+       /* max_hw_sectors_kb */
+       host->max_sectors = max_xfer_bytes >> 9;
+       /*
+        * There are 2 requirements for Hyper-V storvsc sgl segments,
+        * based on which the below calculation for max segments is
+        * done:
+        *
+        * 1. Except for the first and last sgl segment, all sgl segments
+        *    should be aligned to HV_HYP_PAGE_SIZE; that also means the
+        *    maximum number of segments in an sgl can be calculated by
+        *    dividing the total max transfer length by HV_HYP_PAGE_SIZE.
+        *
+        * 2. Except for the first and last, each entry in the SGL must
+        *    have an offset that is a multiple of HV_HYP_PAGE_SIZE.
         */
-       host->sg_tablesize = (stor_device->max_transfer_bytes >> PAGE_SHIFT);
+       host->sg_tablesize = (max_xfer_bytes >> HV_HYP_PAGE_SHIFT) + 1;
        /*
         * For non-IDE disks, the host supports multiple channels.
         * Set the number of HW queues we are supporting.
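
The +1 in the sg_tablesize calculation accounts for an unaligned buffer: a transfer of max_xfer_bytes that starts mid-page touches one page more than max_xfer_bytes / HV_HYP_PAGE_SIZE. A worked example of the rounding and the segment count, with a made-up host-provided max_transfer_bytes:

#include <stdio.h>

#define HV_HYP_PAGE_SHIFT       12      /* assumption: 4 KiB Hyper-V page */
#define HV_HYP_PAGE_SIZE        (1UL << HV_HYP_PAGE_SHIFT)

int main(void)
{
        unsigned long max_transfer_bytes = (8UL << 20) + 123;   /* odd host value */
        unsigned long max_xfer_bytes =
                max_transfer_bytes & ~(HV_HYP_PAGE_SIZE - 1);   /* round_down() */

        printf("max_sectors  = %lu\n", max_xfer_bytes >> 9);
        printf("sg_tablesize = %lu\n",
               (max_xfer_bytes >> HV_HYP_PAGE_SHIFT) + 1);      /* +1 for unaligned start */
        return 0;
}
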
index b2d365a..dae8a2e 100644 (file)
@@ -91,14 +91,14 @@ static const struct at91_soc socs[] __initconst = {
        AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK,
                 AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH,
                 "sam9x60", "sam9x60"),
-       AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D5M_EXID_MATCH,
-                AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH,
+       AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK,
+                AT91_CIDR_VERSION_MASK, SAM9X60_D5M_EXID_MATCH,
                 "sam9x60 64MiB DDR2 SiP", "sam9x60"),
-       AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D1G_EXID_MATCH,
-                AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH,
+       AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK,
+                AT91_CIDR_VERSION_MASK, SAM9X60_D1G_EXID_MATCH,
                 "sam9x60 128MiB DDR2 SiP", "sam9x60"),
-       AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D6K_EXID_MATCH,
-                AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH,
+       AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK,
+                AT91_CIDR_VERSION_MASK, SAM9X60_D6K_EXID_MATCH,
                 "sam9x60 8MiB SDRAM SiP", "sam9x60"),
 #endif
 #ifdef CONFIG_SOC_SAMA5
index 3cbb165..70ad0f3 100644 (file)
@@ -783,6 +783,7 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
        }
 
        ret = brcmstb_init_sram(dn);
+       of_node_put(dn);
        if (ret) {
                pr_err("error setting up SRAM for PM\n");
                return ret;
index 7f49385..7ebc287 100644 (file)
@@ -667,7 +667,7 @@ static const struct imx8m_blk_ctrl_domain_data imx8mp_media_blk_ctl_domain_data[
        },
        [IMX8MP_MEDIABLK_PD_LCDIF_2] = {
                .name = "mediablk-lcdif-2",
-               .clk_names = (const char *[]){ "disp1", "apb", "axi", },
+               .clk_names = (const char *[]){ "disp2", "apb", "axi", },
                .num_clks = 3,
                .gpc_name = "lcdif2",
                .rst_mask = BIT(11) | BIT(12) | BIT(24),
index 613935c..58240e3 100644 (file)
@@ -758,7 +758,7 @@ static const struct of_device_id ixp4xx_npe_of_match[] = {
 static struct platform_driver ixp4xx_npe_driver = {
        .driver = {
                .name           = "ixp4xx-npe",
-               .of_match_table = of_match_ptr(ixp4xx_npe_of_match),
+               .of_match_table = ixp4xx_npe_of_match,
        },
        .probe = ixp4xx_npe_probe,
        .remove = ixp4xx_npe_remove,
index 3e95835..4f163d6 100644 (file)
@@ -926,7 +926,7 @@ qcom_smem_enumerate_partitions(struct qcom_smem *smem, u16 local_host)
        struct smem_partition_header *header;
        struct smem_ptable_entry *entry;
        struct smem_ptable *ptable;
-       unsigned int remote_host;
+       u16 remote_host;
        u16 host0, host1;
        int i;
 
@@ -951,12 +951,12 @@ qcom_smem_enumerate_partitions(struct qcom_smem *smem, u16 local_host)
                        continue;
 
                if (remote_host >= SMEM_HOST_COUNT) {
-                       dev_err(smem->dev, "bad host %hu\n", remote_host);
+                       dev_err(smem->dev, "bad host %u\n", remote_host);
                        return -EINVAL;
                }
 
                if (smem->partitions[remote_host].virt_base) {
-                       dev_err(smem->dev, "duplicate host %hu\n", remote_host);
+                       dev_err(smem->dev, "duplicate host %u\n", remote_host);
                        return -EINVAL;
                }
 
index a23d4f6..31d778e 100644 (file)
@@ -69,6 +69,7 @@
 #define CDNS_SPI_BAUD_DIV_SHIFT                3 /* Baud rate divisor shift in CR */
 #define CDNS_SPI_SS_SHIFT              10 /* Slave Select field shift in CR */
 #define CDNS_SPI_SS0                   0x1 /* Slave Select zero */
+#define CDNS_SPI_NOSS                  0x3C /* No Slave select */
 
 /*
  * SPI Interrupt Registers bit Masks
@@ -92,9 +93,6 @@
 #define CDNS_SPI_ER_ENABLE     0x00000001 /* SPI Enable Bit Mask */
 #define CDNS_SPI_ER_DISABLE    0x0 /* SPI Disable Bit Mask */
 
-/* SPI FIFO depth in bytes */
-#define CDNS_SPI_FIFO_DEPTH    128
-
 /* Default number of chip select lines */
 #define CDNS_SPI_DEFAULT_NUM_CS                4
 
  * @rx_bytes:          Number of bytes requested
  * @dev_busy:          Device busy flag
  * @is_decoded_cs:     Flag for decoder property set or not
+ * @tx_fifo_depth:     Depth of the TX FIFO
  */
 struct cdns_spi {
        void __iomem *regs;
@@ -123,6 +122,7 @@ struct cdns_spi {
        int rx_bytes;
        u8 dev_busy;
        u32 is_decoded_cs;
+       unsigned int tx_fifo_depth;
 };
 
 /* Macros for the SPI controller read/write */
@@ -304,7 +304,7 @@ static void cdns_spi_fill_tx_fifo(struct cdns_spi *xspi)
 {
        unsigned long trans_cnt = 0;
 
-       while ((trans_cnt < CDNS_SPI_FIFO_DEPTH) &&
+       while ((trans_cnt < xspi->tx_fifo_depth) &&
               (xspi->tx_bytes > 0)) {
 
                /* When xspi is in a busy condition, bytes may fail to send,
@@ -450,19 +450,42 @@ static int cdns_prepare_transfer_hardware(struct spi_master *master)
  * @master:    Pointer to the spi_master structure which provides
  *             information about the controller.
  *
- * This function disables the SPI master controller.
+ * This function disables the SPI master controller when no slave is selected.
  *
  * Return:     0 always
  */
 static int cdns_unprepare_transfer_hardware(struct spi_master *master)
 {
        struct cdns_spi *xspi = spi_master_get_devdata(master);
+       u32 ctrl_reg;
 
-       cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE);
+       /* Disable the SPI if slave is deselected */
+       ctrl_reg = cdns_spi_read(xspi, CDNS_SPI_CR);
+       ctrl_reg = (ctrl_reg & CDNS_SPI_CR_SSCTRL) >> CDNS_SPI_SS_SHIFT;
+       if (ctrl_reg == CDNS_SPI_NOSS)
+               cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE);
 
        return 0;
 }
 
+/**
+ * cdns_spi_detect_fifo_depth - Detect the FIFO depth of the hardware
+ * @xspi:      Pointer to the cdns_spi structure
+ *
+ * The depth of the TX FIFO is a synthesis configuration parameter of the SPI
+ * IP. The FIFO threshold register is sized so that its maximum value can be the
+ * FIFO size - 1. This is used to detect the size of the FIFO.
+ */
+static void cdns_spi_detect_fifo_depth(struct cdns_spi *xspi)
+{
+       /* The MSBs will get truncated giving us the size of the FIFO */
+       cdns_spi_write(xspi, CDNS_SPI_THLD, 0xffff);
+       xspi->tx_fifo_depth = cdns_spi_read(xspi, CDNS_SPI_THLD) + 1;
+
+       /* Reset to default */
+       cdns_spi_write(xspi, CDNS_SPI_THLD, 0x1);
+}
+
 /**
  * cdns_spi_probe - Probe method for the SPI driver
  * @pdev:      Pointer to the platform_device structure
@@ -535,6 +558,8 @@ static int cdns_spi_probe(struct platform_device *pdev)
        if (ret < 0)
                xspi->is_decoded_cs = 0;
 
+       cdns_spi_detect_fifo_depth(xspi);
+
        /* SPI controller initializations */
        cdns_spi_init_hw(xspi);
 
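The detection trick in cdns_spi_detect_fifo_depth() relies on the threshold
register implementing only enough bits to address the synthesized FIFO:
writing all-ones saturates at FIFO size - 1, so the read-back plus one is the
depth. A small userspace model, assuming a power-of-two depth so the
truncation is a simple mask:

    #include <stdio.h>

    static unsigned int thld;                     /* models CDNS_SPI_THLD */
    static const unsigned int fifo_depth = 128;   /* assumed hardware depth */

    static void thld_write(unsigned int v)
    {
            thld = v & (fifo_depth - 1);          /* MSBs truncate */
    }

    int main(void)
    {
            thld_write(0xffff);                   /* saturates at depth - 1 */
            printf("detected depth: %u\n", thld + 1);
            thld_write(0x1);                      /* restore default */
            return 0;
    }
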
index e8de4f5..0c79193 100644 (file)
@@ -808,7 +808,7 @@ int spi_mem_poll_status(struct spi_mem *mem,
            op->data.dir != SPI_MEM_DATA_IN)
                return -EINVAL;
 
-       if (ctlr->mem_ops && ctlr->mem_ops->poll_status) {
+       if (ctlr->mem_ops && ctlr->mem_ops->poll_status && !mem->spi->cs_gpiod) {
                ret = spi_mem_access_start(mem);
                if (ret)
                        return ret;
index a08215e..79242dc 100644 (file)
@@ -381,15 +381,18 @@ static int rockchip_spi_prepare_irq(struct rockchip_spi *rs,
        rs->tx_left = rs->tx ? xfer->len / rs->n_bytes : 0;
        rs->rx_left = xfer->len / rs->n_bytes;
 
-       if (rs->cs_inactive)
-               writel_relaxed(INT_RF_FULL | INT_CS_INACTIVE, rs->regs + ROCKCHIP_SPI_IMR);
-       else
-               writel_relaxed(INT_RF_FULL, rs->regs + ROCKCHIP_SPI_IMR);
+       writel_relaxed(0xffffffff, rs->regs + ROCKCHIP_SPI_ICR);
+
        spi_enable_chip(rs, true);
 
        if (rs->tx_left)
                rockchip_spi_pio_writer(rs);
 
+       if (rs->cs_inactive)
+               writel_relaxed(INT_RF_FULL | INT_CS_INACTIVE, rs->regs + ROCKCHIP_SPI_IMR);
+       else
+               writel_relaxed(INT_RF_FULL, rs->regs + ROCKCHIP_SPI_IMR);
+
        /* 1 means the transfer is in progress */
        return 1;
 }
index d1a0dea..d0ba34c 100644 (file)
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 config FB_OLPC_DCON
        tristate "One Laptop Per Child Display CONtroller support"
-       depends on OLPC && FB
+       depends on OLPC && FB && BROKEN
        depends on I2C
        depends on GPIO_CS5535 && ACPI
        select BACKLIGHT_CLASS_DEVICE
index 113a3ef..6cd7fc9 100644 (file)
@@ -2461,7 +2461,7 @@ static int qlge_tso(struct sk_buff *skb, struct qlge_ob_mac_tso_iocb_req *mac_io
                mac_iocb_ptr->flags3 |= OB_MAC_TSO_IOCB_IC;
                mac_iocb_ptr->frame_len = cpu_to_le32((u32)skb->len);
                mac_iocb_ptr->total_hdrs_len =
-                       cpu_to_le16(skb_transport_offset(skb) + tcp_hdrlen(skb));
+                       cpu_to_le16(skb_tcp_all_headers(skb));
                mac_iocb_ptr->net_trans_offset =
                        cpu_to_le16(skb_network_offset(skb) |
                                    skb_transport_offset(skb)
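
skb_tcp_all_headers() is the then-new helper this conversion targets; it
simply folds the open-coded sum of the transport offset and the TCP header
length into one call. Paraphrasing its definition from include/linux/tcp.h:

    static inline int skb_tcp_all_headers(const struct sk_buff *skb)
    {
            return skb_transport_offset(skb) + tcp_hdrlen(skb);
    }
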
index 3d8e9de..7135d89 100644 (file)
@@ -178,8 +178,7 @@ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter)
 
        pxmitpriv->free_xmit_extbuf_cnt = num_xmit_extbuf;
 
-       res = rtw_alloc_hwxmits(padapter);
-       if (res) {
+       if (rtw_alloc_hwxmits(padapter)) {
                res = _FAIL;
                goto exit;
        }
@@ -1483,19 +1482,10 @@ int rtw_alloc_hwxmits(struct adapter *padapter)
 
        hwxmits = pxmitpriv->hwxmits;
 
-       if (pxmitpriv->hwxmit_entry == 5) {
-               hwxmits[0] .sta_queue = &pxmitpriv->bm_pending;
-               hwxmits[1] .sta_queue = &pxmitpriv->vo_pending;
-               hwxmits[2] .sta_queue = &pxmitpriv->vi_pending;
-               hwxmits[3] .sta_queue = &pxmitpriv->bk_pending;
-               hwxmits[4] .sta_queue = &pxmitpriv->be_pending;
-       } else if (pxmitpriv->hwxmit_entry == 4) {
-               hwxmits[0] .sta_queue = &pxmitpriv->vo_pending;
-               hwxmits[1] .sta_queue = &pxmitpriv->vi_pending;
-               hwxmits[2] .sta_queue = &pxmitpriv->be_pending;
-               hwxmits[3] .sta_queue = &pxmitpriv->bk_pending;
-       } else {
-       }
+       hwxmits[0].sta_queue = &pxmitpriv->vo_pending;
+       hwxmits[1].sta_queue = &pxmitpriv->vi_pending;
+       hwxmits[2].sta_queue = &pxmitpriv->be_pending;
+       hwxmits[3].sta_queue = &pxmitpriv->bk_pending;
 
        return 0;
 }
index 1b09462..8dd280e 100644 (file)
@@ -403,7 +403,7 @@ static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param,
 
                if (wep_key_len > 0) {
                        wep_key_len = wep_key_len <= 5 ? 5 : 13;
-                       wep_total_len = wep_key_len + FIELD_OFFSET(struct ndis_802_11_wep, KeyMaterial);
+                       wep_total_len = wep_key_len + sizeof(*pwep);
                        pwep = kzalloc(wep_total_len, GFP_KERNEL);
                        if (!pwep)
                                goto exit;
index ece97e3..30374a8 100644 (file)
@@ -90,7 +90,8 @@ static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param,
                if (wep_key_len > 0) {
                        wep_key_len = wep_key_len <= 5 ? 5 : 13;
                        wep_total_len = wep_key_len + FIELD_OFFSET(struct ndis_802_11_wep, key_material);
-                       pwep = kzalloc(wep_total_len, GFP_KERNEL);
+                       /* Allocate a full structure to avoid potentially running off the end. */
+                       pwep = kzalloc(sizeof(*pwep), GFP_KERNEL);
                        if (!pwep) {
                                ret = -ENOMEM;
                                goto exit;
@@ -582,7 +583,8 @@ static int rtw_set_encryption(struct net_device *dev, struct ieee_param *param,
                if (wep_key_len > 0) {
                        wep_key_len = wep_key_len <= 5 ? 5 : 13;
                        wep_total_len = wep_key_len + FIELD_OFFSET(struct ndis_802_11_wep, key_material);
-                       pwep = kzalloc(wep_total_len, GFP_KERNEL);
+                       /* Allocate a full structure to avoid potentially running off the end. */
+                       pwep = kzalloc(sizeof(*pwep), GFP_KERNEL);
                        if (!pwep)
                                goto exit;
 
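Both staging drivers had the same bug shape: sizing the allocation as key
length plus the offset of the key material undercounts whenever the structure
carries members beyond that point, so later accesses through the full struct
can run off the end of the buffer. A userspace model with an illustrative
layout (not the driver's exact struct):

    #include <stdio.h>
    #include <stddef.h>

    struct ndis_802_11_wep {
            unsigned int length;
            unsigned int key_index;
            unsigned int key_length;
            unsigned char key_material[16];
    };

    int main(void)
    {
            size_t wep_key_len = 5;
            size_t off = offsetof(struct ndis_802_11_wep, key_material);

            printf("old size: %zu, safe size: %zu\n",
                   off + wep_key_len,               /* e.g. 17 bytes */
                   sizeof(struct ndis_802_11_wep)); /* e.g. 28 bytes */
            return 0;
    }
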
index cd80c7d..a9596e7 100644 (file)
@@ -81,6 +81,7 @@ static const struct x86_cpu_id tcc_ids[] __initconst = {
        X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, NULL),
        X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, NULL),
        X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, NULL),
+       X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, NULL),
        {}
 };
 
index c7968ae..d02de3f 100644 (file)
@@ -426,7 +426,7 @@ static int goldfish_tty_remove(struct platform_device *pdev)
        tty_unregister_device(goldfish_tty_driver, qtty->console.index);
        iounmap(qtty->base);
        qtty->base = NULL;
-       free_irq(qtty->irq, pdev);
+       free_irq(qtty->irq, qtty);
        tty_port_destroy(&qtty->port);
        goldfish_tty_current_line_count--;
        if (goldfish_tty_current_line_count == 0)
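
The goldfish fix restores the free_irq() contract: the dev_id cookie passed to
free_irq() must be the one given to request_irq(), both so the right handler
is found on shared lines and so the teardown actually releases the
registration. A sketch of the invariant (handler name illustrative):

    ret = request_irq(qtty->irq, goldfish_tty_interrupt, IRQF_SHARED,
                      "goldfish_tty", qtty);
    ...
    free_irq(qtty->irq, qtty);  /* same cookie as request_irq(), not pdev */
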
index 137eebd..fd4d24f 100644 (file)
@@ -455,7 +455,7 @@ static void gsm_hex_dump_bytes(const char *fname, const u8 *data,
                return;
        }
 
-       prefix = kasprintf(GFP_KERNEL, "%s: ", fname);
+       prefix = kasprintf(GFP_ATOMIC, "%s: ", fname);
        if (!prefix)
                return;
        print_hex_dump(KERN_INFO, prefix, DUMP_PREFIX_OFFSET, 16, 1, data, len,
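
The n_gsm switch to GFP_ATOMIC reflects that this hex-dump helper can run in
atomic context (IRQ paths or under spinlocks), where a GFP_KERNEL allocation
might sleep. The trade-off, roughly:

    /* may sleep (process context, no spinlocks held) -> GFP_KERNEL
     * must not sleep (IRQ, softirq, spinlock held)   -> GFP_ATOMIC */
    prefix = kasprintf(GFP_ATOMIC, "%s: ", fname);
    if (!prefix)
            return;         /* atomic allocations fail more readily */
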
index 78b6ded..8f32fe9 100644 (file)
@@ -1517,6 +1517,8 @@ static inline void __stop_tx(struct uart_8250_port *p)
                unsigned char lsr = serial_in(p, UART_LSR);
                u64 stop_delay = 0;
 
+               p->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+
                if (!(lsr & UART_LSR_THRE))
                        return;
                /*
index 4733a23..f8f9506 100644 (file)
@@ -1306,6 +1306,7 @@ static const struct uart_ops qcom_geni_console_pops = {
        .stop_tx = qcom_geni_serial_stop_tx,
        .start_tx = qcom_geni_serial_start_tx,
        .stop_rx = qcom_geni_serial_stop_rx,
+       .start_rx = qcom_geni_serial_start_rx,
        .set_termios = qcom_geni_serial_set_termios,
        .startup = qcom_geni_serial_startup,
        .request_port = qcom_geni_serial_request_port,
index 9a85b41..338ebad 100644 (file)
@@ -2214,11 +2214,12 @@ int uart_suspend_port(struct uart_driver *drv, struct uart_port *uport)
        /*
         * Nothing to do if the console is not suspending
         * except stop_rx to prevent any asynchronous data
-        * over RX line. Re-start_rx, when required, is
-        * done by set_termios in resume sequence
+        * over the RX line. However, ensure that we
+        * will be able to restart RX (start_rx) later.
         */
        if (!console_suspend_enabled && uart_console(uport)) {
-               uport->ops->stop_rx(uport);
+               if (uport->ops->start_rx)
+                       uport->ops->stop_rx(uport);
                goto unlock;
        }
 
@@ -2310,6 +2311,8 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)
                if (console_suspend_enabled)
                        uart_change_pm(state, UART_PM_STATE_ON);
                uport->ops->set_termios(uport, &termios, NULL);
+               if (!console_suspend_enabled && uport->ops->start_rx)
+                       uport->ops->start_rx(uport);
                if (console_suspend_enabled)
                        console_start(uport->cons);
        }
index 18e6233..d2b2720 100644 (file)
@@ -581,7 +581,6 @@ void __handle_sysrq(int key, bool check_mask)
 
        rcu_sysrq_start();
        rcu_read_lock();
-       printk_prefer_direct_enter();
        /*
         * Raise the apparent loglevel to maximum so that the sysrq header
         * is shown to provide the user with positive feedback.  We do not
@@ -623,7 +622,6 @@ void __handle_sysrq(int key, bool check_mask)
                pr_cont("\n");
                console_loglevel = orig_log_level;
        }
-       printk_prefer_direct_exit();
        rcu_read_unlock();
        rcu_sysrq_end();
 
index 01fb4ba..ce86d1b 100644 (file)
@@ -748,17 +748,28 @@ static enum utp_ocs ufshcd_get_tr_ocs(struct ufshcd_lrb *lrbp)
 }
 
 /**
- * ufshcd_utrl_clear - Clear a bit in UTRLCLR register
+ * ufshcd_utrl_clear() - Clear requests from the controller request list.
  * @hba: per adapter instance
- * @pos: position of the bit to be cleared
+ * @mask: mask with one bit set for each request to be cleared
  */
-static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 pos)
+static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 mask)
 {
        if (hba->quirks & UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR)
-               ufshcd_writel(hba, (1 << pos), REG_UTP_TRANSFER_REQ_LIST_CLEAR);
-       else
-               ufshcd_writel(hba, ~(1 << pos),
-                               REG_UTP_TRANSFER_REQ_LIST_CLEAR);
+               mask = ~mask;
+       /*
+        * From the UFSHCI specification: "UTP Transfer Request List CLear
+        * Register (UTRLCLR): This field is bit significant. Each bit
+        * corresponds to a slot in the UTP Transfer Request List, where bit 0
+        * corresponds to request slot 0. A bit in this field is set to ‘0’
+        * by host software to indicate to the host controller that a transfer
+        * request slot is cleared. The host controller
+        * shall free up any resources associated to the request slot
+        * immediately, and shall set the associated bit in UTRLDBR to ‘0’. The
+        * host software indicates no change to request slots by setting the
+        * associated bits in this field to ‘1’. Bits in this field shall only
+        * be set ‘1’ or ‘0’ by host software when UTRLRSR is set to ‘1’."
+        */
+       ufshcd_writel(hba, ~mask, REG_UTP_TRANSFER_REQ_LIST_CLEAR);
 }
 
 /**
@@ -2863,27 +2874,26 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
        return ufshcd_compose_devman_upiu(hba, lrbp);
 }
 
-static int
-ufshcd_clear_cmd(struct ufs_hba *hba, int tag)
+/*
+ * Clear all the requests from the controller for which a bit has been set in
+ * @mask and wait until the controller confirms that these requests have been
+ * cleared.
+ */
+static int ufshcd_clear_cmds(struct ufs_hba *hba, u32 mask)
 {
-       int err = 0;
        unsigned long flags;
-       u32 mask = 1 << tag;
 
        /* clear outstanding transaction before retry */
        spin_lock_irqsave(hba->host->host_lock, flags);
-       ufshcd_utrl_clear(hba, tag);
+       ufshcd_utrl_clear(hba, mask);
        spin_unlock_irqrestore(hba->host->host_lock, flags);
 
        /*
         * wait for h/w to clear corresponding bit in door-bell.
         * max. wait is 1 sec.
         */
-       err = ufshcd_wait_for_register(hba,
-                       REG_UTP_TRANSFER_REQ_DOOR_BELL,
-                       mask, ~mask, 1000, 1000);
-
-       return err;
+       return ufshcd_wait_for_register(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL,
+                                       mask, ~mask, 1000, 1000);
 }
 
 static int
@@ -2963,7 +2973,7 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba,
                err = -ETIMEDOUT;
                dev_dbg(hba->dev, "%s: dev_cmd request timedout, tag %d\n",
                        __func__, lrbp->task_tag);
-               if (!ufshcd_clear_cmd(hba, lrbp->task_tag))
+               if (!ufshcd_clear_cmds(hba, 1U << lrbp->task_tag))
                        /* successfully cleared the command, retry if needed */
                        err = -EAGAIN;
                /*
@@ -6958,14 +6968,14 @@ int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,
 }
 
 /**
- * ufshcd_eh_device_reset_handler - device reset handler registered to
- *                                    scsi layer.
+ * ufshcd_eh_device_reset_handler() - Reset a single logical unit.
  * @cmd: SCSI command pointer
  *
  * Returns SUCCESS/FAILED
  */
 static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
 {
+       unsigned long flags, pending_reqs = 0, not_cleared = 0;
        struct Scsi_Host *host;
        struct ufs_hba *hba;
        u32 pos;
@@ -6984,14 +6994,24 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
        }
 
        /* clear the commands that were pending for corresponding LUN */
-       for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) {
-               if (hba->lrb[pos].lun == lun) {
-                       err = ufshcd_clear_cmd(hba, pos);
-                       if (err)
-                               break;
-                       __ufshcd_transfer_req_compl(hba, 1U << pos);
-               }
+       spin_lock_irqsave(&hba->outstanding_lock, flags);
+       for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs)
+               if (hba->lrb[pos].lun == lun)
+                       __set_bit(pos, &pending_reqs);
+       hba->outstanding_reqs &= ~pending_reqs;
+       spin_unlock_irqrestore(&hba->outstanding_lock, flags);
+
+       if (ufshcd_clear_cmds(hba, pending_reqs) < 0) {
+               spin_lock_irqsave(&hba->outstanding_lock, flags);
+               not_cleared = pending_reqs &
+                       ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
+               hba->outstanding_reqs |= not_cleared;
+               spin_unlock_irqrestore(&hba->outstanding_lock, flags);
+
+               dev_err(hba->dev, "%s: failed to clear requests %#lx\n",
+                       __func__, not_cleared);
        }
+       __ufshcd_transfer_req_compl(hba, pending_reqs & ~not_cleared);
 
 out:
        hba->req_abort_count = 0;
@@ -7088,7 +7108,7 @@ static int ufshcd_try_to_abort_task(struct ufs_hba *hba, int tag)
                goto out;
        }
 
-       err = ufshcd_clear_cmd(hba, tag);
+       err = ufshcd_clear_cmds(hba, 1U << tag);
        if (err)
                dev_err(hba->dev, "%s: Failed clearing cmd at tag %d, err %d\n",
                        __func__, tag, err);
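
The ufshcd_utrl_clear() rework turns a per-tag interface into a bit-mask one,
so the device-reset handler can retire every outstanding request of a LUN with
a single doorbell-clear write. Note the double inversion: UTRLCLR wants a 0
bit per slot to clear, hence ~mask normally, while the BROKEN_REQ_LIST_CLR
quirk flips that. A userspace model of the register value:

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t utrlclr_value(uint32_t mask, int broken_clr_quirk)
    {
            if (broken_clr_quirk)
                    mask = ~mask;
            return ~mask;   /* normal: 0 bits mark slots to clear */
    }

    int main(void)
    {
            printf("normal: %#x\n", utrlclr_value(1u << 3, 0)); /* ~BIT(3) */
            printf("quirk:  %#x\n", utrlclr_value(1u << 3, 1)); /* BIT(3)  */
            return 0;
    }
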
index e45c3d6..794e413 100644 (file)
@@ -1941,13 +1941,16 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
                }
 
                if (enqd_len + trb_buff_len >= full_len) {
-                       if (need_zero_pkt)
-                               zero_len_trb = !zero_len_trb;
-
-                       field &= ~TRB_CHAIN;
-                       field |= TRB_IOC;
-                       more_trbs_coming = false;
-                       preq->td.last_trb = ring->enqueue;
+                       if (need_zero_pkt && !zero_len_trb) {
+                               zero_len_trb = true;
+                       } else {
+                               zero_len_trb = false;
+                               field &= ~TRB_CHAIN;
+                               field |= TRB_IOC;
+                               more_trbs_coming = false;
+                               need_zero_pkt = false;
+                               preq->td.last_trb = ring->enqueue;
+                       }
                }
 
                /* Only set interrupt on short packet for OUT endpoints. */
@@ -1962,7 +1965,7 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
                length_field = TRB_LEN(trb_buff_len) | TRB_TD_SIZE(remainder) |
                        TRB_INTR_TARGET(0);
 
-               cdnsp_queue_trb(pdev, ring, more_trbs_coming | zero_len_trb,
+               cdnsp_queue_trb(pdev, ring, more_trbs_coming,
                                lower_32_bits(send_addr),
                                upper_32_bits(send_addr),
                                length_field,
index dc6c96e..3b8bf6d 100644 (file)
@@ -1048,6 +1048,9 @@ isr_setup_status_complete(struct usb_ep *ep, struct usb_request *req)
        struct ci_hdrc *ci = req->context;
        unsigned long flags;
 
+       if (req->status < 0)
+               return;
+
        if (ci->setaddr) {
                hw_usb_set_address(ci, ci->address);
                ci->setaddr = false;
index f63a27d..3f107a0 100644 (file)
@@ -5190,7 +5190,7 @@ int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        if (!res) {
                retval = -EINVAL;
-               goto error1;
+               goto error2;
        }
        hcd->rsrc_start = res->start;
        hcd->rsrc_len = resource_size(res);
index e027c04..5734219 100644 (file)
@@ -1644,13 +1644,8 @@ static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)
         * This device property is for kernel internal use only and
         * is expected to be set by the glue code.
         */
-       if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) {
-               edev = extcon_get_extcon_dev(name);
-               if (!edev)
-                       return ERR_PTR(-EPROBE_DEFER);
-
-               return edev;
-       }
+       if (device_property_read_string(dev, "linux,extcon-name", &name) == 0)
+               return extcon_get_extcon_dev(name);
 
        /*
         * Try to get an extcon device from the USB PHY controller's "port"
index ba51de7..6b01804 100644 (file)
@@ -127,6 +127,7 @@ static const struct property_entry dwc3_pci_intel_phy_charger_detect_properties[
        PROPERTY_ENTRY_STRING("dr_mode", "peripheral"),
        PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
        PROPERTY_ENTRY_BOOL("linux,phy_charger_detect"),
+       PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
        {}
 };
 
index 00427d1..8716bec 100644 (file)
@@ -2976,6 +2976,7 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
        struct dwc3 *dwc = dep->dwc;
        u32 mdwidth;
        int size;
+       int maxpacket;
 
        mdwidth = dwc3_mdwidth(dwc);
 
@@ -2988,21 +2989,24 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
        else
                size = DWC31_GTXFIFOSIZ_TXFDEP(size);
 
-       /* FIFO Depth is in MDWDITH bytes. Multiply */
-       size *= mdwidth;
-
        /*
-        * To meet performance requirement, a minimum TxFIFO size of 3x
-        * MaxPacketSize is recommended for endpoints that support burst and a
-        * minimum TxFIFO size of 2x MaxPacketSize for endpoints that don't
-        * support burst. Use those numbers and we can calculate the max packet
-        * limit as below.
+        * The maxpacket size is derived from the formulas below, assuming
+        * a mult value of one maxpacket:
+        * DWC3 revision 280A and prior:
+        * fifo_size = mult * (max_packet / mdwidth) + 1;
+        * maxpacket = mdwidth * (fifo_size - 1);
+        *
+        * DWC3 revision 290A and onwards:
+        * fifo_size = mult * ((max_packet + mdwidth)/mdwidth + 1) + 1
+        * maxpacket = mdwidth * ((fifo_size - 1) - 1) - mdwidth;
         */
-       if (dwc->maximum_speed >= USB_SPEED_SUPER)
-               size /= 3;
+       if (DWC3_VER_IS_PRIOR(DWC3, 290A))
+               maxpacket = mdwidth * (size - 1);
        else
-               size /= 2;
+               maxpacket = mdwidth * ((size - 1) - 1) - mdwidth;
 
+       /* Functionally, space for one max packet is sufficient */
+       size = min_t(int, maxpacket, 1024);
        usb_ep_set_maxpacket_limit(&dep->endpoint, size);
 
        dep->endpoint.max_streams = 16;
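
Plugging example numbers into the two revision-dependent formulas above shows
why the min_t(int, maxpacket, 1024) cap still leaves room for one full packet.
A worked example, with an assumed 64-bit bus (mdwidth = 8 bytes) and an
assumed GTXFIFOSIZ depth of 131 words:

    #include <stdio.h>

    int main(void)
    {
            int mdwidth = 8;        /* assumed: 64-bit bus, in bytes */
            int size = 131;         /* assumed FIFO depth in mdwidth words */

            int pre_290a = mdwidth * (size - 1);                  /* 1040 */
            int post_290a = mdwidth * ((size - 1) - 1) - mdwidth; /* 1024 */

            printf("<= 280A maxpacket: %d\n", pre_290a);
            printf(">= 290A maxpacket: %d\n", post_290a);
            return 0;
    }
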
index 4585ee3..e0fa4b1 100644 (file)
@@ -122,8 +122,6 @@ struct ffs_ep {
        struct usb_endpoint_descriptor  *descs[3];
 
        u8                              num;
-
-       int                             status; /* P: epfile->mutex */
 };
 
 struct ffs_epfile {
@@ -227,6 +225,9 @@ struct ffs_io_data {
        bool use_sg;
 
        struct ffs_data *ffs;
+
+       int status;
+       struct completion done;
 };
 
 struct ffs_desc_helper {
@@ -707,12 +708,15 @@ static const struct file_operations ffs_ep0_operations = {
 
 static void ffs_epfile_io_complete(struct usb_ep *_ep, struct usb_request *req)
 {
+       struct ffs_io_data *io_data = req->context;
+
        ENTER();
-       if (req->context) {
-               struct ffs_ep *ep = _ep->driver_data;
-               ep->status = req->status ? req->status : req->actual;
-               complete(req->context);
-       }
+       if (req->status)
+               io_data->status = req->status;
+       else
+               io_data->status = req->actual;
+
+       complete(&io_data->done);
 }
 
 static ssize_t ffs_copy_to_iter(void *data, int data_len, struct iov_iter *iter)
@@ -1050,7 +1054,6 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
                WARN(1, "%s: data_len == -EINVAL\n", __func__);
                ret = -EINVAL;
        } else if (!io_data->aio) {
-               DECLARE_COMPLETION_ONSTACK(done);
                bool interrupted = false;
 
                req = ep->req;
@@ -1066,7 +1069,8 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
 
                io_data->buf = data;
 
-               req->context  = &done;
+               init_completion(&io_data->done);
+               req->context  = io_data;
                req->complete = ffs_epfile_io_complete;
 
                ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC);
@@ -1075,7 +1079,12 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
 
                spin_unlock_irq(&epfile->ffs->eps_lock);
 
-               if (wait_for_completion_interruptible(&done)) {
+               if (wait_for_completion_interruptible(&io_data->done)) {
+                       spin_lock_irq(&epfile->ffs->eps_lock);
+                       if (epfile->ep != ep) {
+                               ret = -ESHUTDOWN;
+                               goto error_lock;
+                       }
                        /*
                         * To avoid race condition with ffs_epfile_io_complete,
                         * dequeue the request first then check
@@ -1083,17 +1092,18 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
                         * condition with req->complete callback.
                         */
                        usb_ep_dequeue(ep->ep, req);
-                       wait_for_completion(&done);
-                       interrupted = ep->status < 0;
+                       spin_unlock_irq(&epfile->ffs->eps_lock);
+                       wait_for_completion(&io_data->done);
+                       interrupted = io_data->status < 0;
                }
 
                if (interrupted)
                        ret = -EINTR;
-               else if (io_data->read && ep->status > 0)
-                       ret = __ffs_epfile_read_data(epfile, data, ep->status,
+               else if (io_data->read && io_data->status > 0)
+                       ret = __ffs_epfile_read_data(epfile, data, io_data->status,
                                                     &io_data->data);
                else
-                       ret = ep->status;
+                       ret = io_data->status;
                goto error_mutex;
        } else if (!(req = usb_ep_alloc_request(ep->ep, GFP_ATOMIC))) {
                ret = -ENOMEM;
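
The f_fs rework above fixes a use-after-free window: the completion and status
used to live partly on the waiting task's stack (DECLARE_COMPLETION_ONSTACK)
and partly in the shared ffs_ep, so a late request completion could touch
memory that had gone out of scope or been reused. Moving both into the
per-request ffs_io_data gives the callback state that lives exactly as long as
the I/O. The shape of the fix, in brief:

    /* per-I/O state, owned by the request until complete() fires */
    init_completion(&io_data->done);
    req->context  = io_data;
    req->complete = ffs_epfile_io_complete; /* stores io_data->status,
                                             * then complete(&io_data->done) */
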
index 6f5d45e..f51694f 100644 (file)
@@ -775,9 +775,13 @@ struct eth_dev *gether_setup_name(struct usb_gadget *g,
        dev->qmult = qmult;
        snprintf(net->name, sizeof(net->name), "%s%%d", netname);
 
-       if (get_ether_addr(dev_addr, addr))
+       if (get_ether_addr(dev_addr, addr)) {
+               net->addr_assign_type = NET_ADDR_RANDOM;
                dev_warn(&g->dev,
                        "using random %s ethernet address\n", "self");
+       } else {
+               net->addr_assign_type = NET_ADDR_SET;
+       }
        eth_hw_addr_set(net, addr);
        if (get_ether_addr(host_addr, dev->host_mac))
                dev_warn(&g->dev,
@@ -844,6 +848,10 @@ struct net_device *gether_setup_name_default(const char *netname)
 
        eth_random_addr(dev->dev_mac);
        pr_warn("using random %s ethernet address\n", "self");
+
+       /* by default we always have a random MAC address */
+       net->addr_assign_type = NET_ADDR_RANDOM;
+
        eth_random_addr(dev->host_mac);
        pr_warn("using random %s ethernet address\n", "host");
 
@@ -871,7 +879,6 @@ int gether_register_netdev(struct net_device *net)
        dev = netdev_priv(net);
        g = dev->gadget;
 
-       net->addr_assign_type = NET_ADDR_RANDOM;
        eth_hw_addr_set(net, dev->dev_mac);
 
        status = register_netdev(net);
@@ -912,6 +919,7 @@ int gether_set_dev_addr(struct net_device *net, const char *dev_addr)
        if (get_ether_addr(dev_addr, new_addr))
                return -EINVAL;
        memcpy(dev->dev_mac, new_addr, ETH_ALEN);
+       net->addr_assign_type = NET_ADDR_SET;
        return 0;
 }
 EXPORT_SYMBOL_GPL(gether_set_dev_addr);
index a9bb455..d42bb33 100644 (file)
@@ -424,6 +424,9 @@ static void uvcg_video_pump(struct work_struct *work)
                        uvcg_queue_cancel(queue, 0);
                        break;
                }
+
+               /* Endpoint now owns the request */
+               req = NULL;
                video->req_int_count++;
        }
 
index 2417400..2acece1 100644 (file)
@@ -11,6 +11,7 @@
 #include <linux/ctype.h>
 #include <linux/debugfs.h>
 #include <linux/delay.h>
+#include <linux/idr.h>
 #include <linux/kref.h>
 #include <linux/miscdevice.h>
 #include <linux/module.h>
@@ -36,6 +37,9 @@ MODULE_LICENSE("GPL");
 
 /*----------------------------------------------------------------------*/
 
+static DEFINE_IDA(driver_id_numbers);
+#define DRIVER_DRIVER_NAME_LENGTH_MAX  32
+
 #define RAW_EVENT_QUEUE_SIZE   16
 
 struct raw_event_queue {
@@ -161,6 +165,9 @@ struct raw_dev {
        /* Reference to misc device: */
        struct device                   *dev;
 
+       /* Make driver names unique */
+       int                             driver_id_number;
+
        /* Protected by lock: */
        enum dev_state                  state;
        bool                            gadget_registered;
@@ -189,6 +196,7 @@ static struct raw_dev *dev_new(void)
        spin_lock_init(&dev->lock);
        init_completion(&dev->ep0_done);
        raw_event_queue_init(&dev->queue);
+       dev->driver_id_number = -1;
        return dev;
 }
 
@@ -199,6 +207,9 @@ static void dev_free(struct kref *kref)
 
        kfree(dev->udc_name);
        kfree(dev->driver.udc_name);
+       kfree(dev->driver.driver.name);
+       if (dev->driver_id_number >= 0)
+               ida_free(&driver_id_numbers, dev->driver_id_number);
        if (dev->req) {
                if (dev->ep0_urb_queued)
                        usb_ep_dequeue(dev->gadget->ep0, dev->req);
@@ -419,9 +430,11 @@ out_put:
 static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
 {
        int ret = 0;
+       int driver_id_number;
        struct usb_raw_init arg;
        char *udc_driver_name;
        char *udc_device_name;
+       char *driver_driver_name;
        unsigned long flags;
 
        if (copy_from_user(&arg, (void __user *)value, sizeof(arg)))
@@ -440,36 +453,43 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
                return -EINVAL;
        }
 
+       driver_id_number = ida_alloc(&driver_id_numbers, GFP_KERNEL);
+       if (driver_id_number < 0)
+               return driver_id_number;
+
+       driver_driver_name = kmalloc(DRIVER_DRIVER_NAME_LENGTH_MAX, GFP_KERNEL);
+       if (!driver_driver_name) {
+               ret = -ENOMEM;
+               goto out_free_driver_id_number;
+       }
+       snprintf(driver_driver_name, DRIVER_DRIVER_NAME_LENGTH_MAX,
+                               DRIVER_NAME ".%d", driver_id_number);
+
        udc_driver_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL);
-       if (!udc_driver_name)
-               return -ENOMEM;
+       if (!udc_driver_name) {
+               ret = -ENOMEM;
+               goto out_free_driver_driver_name;
+       }
        ret = strscpy(udc_driver_name, &arg.driver_name[0],
                                UDC_NAME_LENGTH_MAX);
-       if (ret < 0) {
-               kfree(udc_driver_name);
-               return ret;
-       }
+       if (ret < 0)
+               goto out_free_udc_driver_name;
        ret = 0;
 
        udc_device_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL);
        if (!udc_device_name) {
-               kfree(udc_driver_name);
-               return -ENOMEM;
+               ret = -ENOMEM;
+               goto out_free_udc_driver_name;
        }
        ret = strscpy(udc_device_name, &arg.device_name[0],
                                UDC_NAME_LENGTH_MAX);
-       if (ret < 0) {
-               kfree(udc_driver_name);
-               kfree(udc_device_name);
-               return ret;
-       }
+       if (ret < 0)
+               goto out_free_udc_device_name;
        ret = 0;
 
        spin_lock_irqsave(&dev->lock, flags);
        if (dev->state != STATE_DEV_OPENED) {
                dev_dbg(dev->dev, "fail, device is not opened\n");
-               kfree(udc_driver_name);
-               kfree(udc_device_name);
                ret = -EINVAL;
                goto out_unlock;
        }
@@ -484,14 +504,25 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
        dev->driver.suspend = gadget_suspend;
        dev->driver.resume = gadget_resume;
        dev->driver.reset = gadget_reset;
-       dev->driver.driver.name = DRIVER_NAME;
+       dev->driver.driver.name = driver_driver_name;
        dev->driver.udc_name = udc_device_name;
        dev->driver.match_existing_only = 1;
+       dev->driver_id_number = driver_id_number;
 
        dev->state = STATE_DEV_INITIALIZED;
+       spin_unlock_irqrestore(&dev->lock, flags);
+       return ret;
 
 out_unlock:
        spin_unlock_irqrestore(&dev->lock, flags);
+out_free_udc_device_name:
+       kfree(udc_device_name);
+out_free_udc_driver_name:
+       kfree(udc_driver_name);
+out_free_driver_driver_name:
+       kfree(driver_driver_name);
+out_free_driver_id_number:
+       ida_free(&driver_id_numbers, driver_id_number);
        return ret;
 }
 
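The raw gadget change gives every bound UDC driver instance a unique driver
name by suffixing an IDA-allocated id; previously all instances shared
DRIVER_NAME, which collides when several gadgets are created. A minimal sketch
of the IDA pattern (kasprintf used here for brevity; the patch itself uses
kmalloc plus snprintf):

    static DEFINE_IDA(driver_id_numbers);

    int id = ida_alloc(&driver_id_numbers, GFP_KERNEL);
    if (id < 0)
            return id;

    name = kasprintf(GFP_KERNEL, DRIVER_NAME ".%d", id);
    if (!name) {
            ida_free(&driver_id_numbers, id); /* release on every error path */
            return -ENOMEM;
    }
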
index 6117ae8..cea10cd 100644 (file)
@@ -3016,6 +3016,7 @@ static int lpc32xx_udc_probe(struct platform_device *pdev)
        }
 
        udc->isp1301_i2c_client = isp1301_get_client(isp1301_node);
+       of_node_put(isp1301_node);
        if (!udc->isp1301_i2c_client) {
                return -EPROBE_DEFER;
        }
index c54f2bc..0fdc014 100644 (file)
@@ -652,7 +652,7 @@ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd)
 * It will release and re-acquire the lock while calling ACPI
  * method.
  */
-static void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
+void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
                                u16 index, bool on, unsigned long *flags)
        __must_hold(&xhci->lock)
 {
index fac9492..dce6c0e 100644 (file)
@@ -61,6 +61,8 @@
 #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI            0x461e
 #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI          0x464e
 #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI        0x51ed
+#define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI           0xa71e
+#define PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI           0x7ec0
 
 #define PCI_DEVICE_ID_AMD_RENOIR_XHCI                  0x1639
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4                        0x43b9
@@ -269,7 +271,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
             pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
             pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
             pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI ||
-            pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI))
+            pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI ||
+            pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI ||
+            pdev->device == PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI))
                xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
 
        if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
index f0ab631..65858f6 100644 (file)
@@ -611,15 +611,37 @@ static int xhci_init(struct usb_hcd *hcd)
 
 static int xhci_run_finished(struct xhci_hcd *xhci)
 {
+       unsigned long   flags;
+       u32             temp;
+
+       /*
+        * Enable interrupts before starting the host (xhci 4.2 and 5.5.2).
+        * Protect the short window before the host is running with a lock.
+        */
+       spin_lock_irqsave(&xhci->lock, flags);
+
+       xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Enable interrupts");
+       temp = readl(&xhci->op_regs->command);
+       temp |= (CMD_EIE);
+       writel(temp, &xhci->op_regs->command);
+
+       xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Enable primary interrupter");
+       temp = readl(&xhci->ir_set->irq_pending);
+       writel(ER_IRQ_ENABLE(temp), &xhci->ir_set->irq_pending);
+
        if (xhci_start(xhci)) {
                xhci_halt(xhci);
+               spin_unlock_irqrestore(&xhci->lock, flags);
                return -ENODEV;
        }
+
        xhci->cmd_ring_state = CMD_RING_STATE_RUNNING;
 
        if (xhci->quirks & XHCI_NEC_HOST)
                xhci_ring_cmd_db(xhci);
 
+       spin_unlock_irqrestore(&xhci->lock, flags);
+
        return 0;
 }
 
@@ -668,19 +690,6 @@ int xhci_run(struct usb_hcd *hcd)
        temp |= (xhci->imod_interval / 250) & ER_IRQ_INTERVAL_MASK;
        writel(temp, &xhci->ir_set->irq_control);
 
-       /* Set the HCD state before we enable the irqs */
-       temp = readl(&xhci->op_regs->command);
-       temp |= (CMD_EIE);
-       xhci_dbg_trace(xhci, trace_xhci_dbg_init,
-                       "// Enable interrupts, cmd = 0x%x.", temp);
-       writel(temp, &xhci->op_regs->command);
-
-       temp = readl(&xhci->ir_set->irq_pending);
-       xhci_dbg_trace(xhci, trace_xhci_dbg_init,
-                       "// Enabling event ring interrupter %p by writing 0x%x to irq_pending",
-                       xhci->ir_set, (unsigned int) ER_IRQ_ENABLE(temp));
-       writel(ER_IRQ_ENABLE(temp), &xhci->ir_set->irq_pending);
-
        if (xhci->quirks & XHCI_NEC_HOST) {
                struct xhci_command *command;
 
@@ -782,6 +791,8 @@ static void xhci_stop(struct usb_hcd *hcd)
 void xhci_shutdown(struct usb_hcd *hcd)
 {
        struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+       unsigned long flags;
+       int i;
 
        if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
                usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev));
@@ -797,12 +808,21 @@ void xhci_shutdown(struct usb_hcd *hcd)
                del_timer_sync(&xhci->shared_hcd->rh_timer);
        }
 
-       spin_lock_irq(&xhci->lock);
+       spin_lock_irqsave(&xhci->lock, flags);
        xhci_halt(xhci);
+
+       /* Power off USB2 ports */
+       for (i = 0; i < xhci->usb2_rhub.num_ports; i++)
+               xhci_set_port_power(xhci, xhci->main_hcd, i, false, &flags);
+
+       /* Power off USB3 ports */
+       for (i = 0; i < xhci->usb3_rhub.num_ports; i++)
+               xhci_set_port_power(xhci, xhci->shared_hcd, i, false, &flags);
+
        /* Workaround for spurious wakeups at shutdown with HSW */
        if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
                xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
-       spin_unlock_irq(&xhci->lock);
+       spin_unlock_irqrestore(&xhci->lock, flags);
 
        xhci_cleanup_msix(xhci);
 
@@ -1107,7 +1127,6 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
 {
        u32                     command, temp = 0;
        struct usb_hcd          *hcd = xhci_to_hcd(xhci);
-       struct usb_hcd          *secondary_hcd;
        int                     retval = 0;
        bool                    comp_timer_running = false;
        bool                    pending_portevent = false;
@@ -1214,23 +1233,19 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
                 * first with the primary HCD, and then with the secondary HCD.
                 * If we don't do the same, the host will never be started.
                 */
-               if (!usb_hcd_is_primary_hcd(hcd))
-                       secondary_hcd = hcd;
-               else
-                       secondary_hcd = xhci->shared_hcd;
-
                xhci_dbg(xhci, "Initialize the xhci_hcd\n");
-               retval = xhci_init(hcd->primary_hcd);
+               retval = xhci_init(hcd);
                if (retval)
                        return retval;
                comp_timer_running = true;
 
                xhci_dbg(xhci, "Start the primary HCD\n");
-               retval = xhci_run(hcd->primary_hcd);
-               if (!retval && secondary_hcd) {
+               retval = xhci_run(hcd);
+               if (!retval && xhci->shared_hcd) {
                        xhci_dbg(xhci, "Start the secondary HCD\n");
-                       retval = xhci_run(secondary_hcd);
+                       retval = xhci_run(xhci->shared_hcd);
                }
+
                hcd->state = HC_STATE_SUSPENDED;
                if (xhci->shared_hcd)
                        xhci->shared_hcd->state = HC_STATE_SUSPENDED;
index 0bd76c9..28aaf03 100644 (file)
@@ -2196,6 +2196,8 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex,
 int xhci_hub_status_data(struct usb_hcd *hcd, char *buf);
 int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1);
 struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd);
+void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, u16 index,
+                        bool on, unsigned long *flags);
 
 void xhci_hc_died(struct xhci_hcd *xhci);
 
index a7b3c15..feba2a8 100644 (file)
@@ -166,6 +166,7 @@ static const struct usb_device_id edgeport_2port_id_table[] = {
        { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_8S) },
        { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416) },
        { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416B) },
+       { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_E5805A) },
        { }
 };
 
@@ -204,6 +205,7 @@ static const struct usb_device_id id_table_combined[] = {
        { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_8S) },
        { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416) },
        { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416B) },
+       { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_E5805A) },
        { }
 };
 
index 52cbc35..9a6f742 100644 (file)
 //
 // Definitions for other product IDs
 #define ION_DEVICE_ID_MT4X56USB                        0x1403  // OEM device
+#define ION_DEVICE_ID_E5805A                   0x1A01  // OEM device (rebranded Edgeport/4)
 
 
 #define        GENERATION_ID_FROM_USB_PRODUCT_ID(ProductId)                            \
index e60425b..de59fa9 100644 (file)
@@ -252,10 +252,12 @@ static void option_instat_callback(struct urb *urb);
 #define QUECTEL_PRODUCT_EG95                   0x0195
 #define QUECTEL_PRODUCT_BG96                   0x0296
 #define QUECTEL_PRODUCT_EP06                   0x0306
+#define QUECTEL_PRODUCT_EM05G                  0x030a
 #define QUECTEL_PRODUCT_EM12                   0x0512
 #define QUECTEL_PRODUCT_RM500Q                 0x0800
 #define QUECTEL_PRODUCT_EC200S_CN              0x6002
 #define QUECTEL_PRODUCT_EC200T                 0x6026
+#define QUECTEL_PRODUCT_RM500K                 0x7001
 
 #define CMOTECH_VENDOR_ID                      0x16d8
 #define CMOTECH_PRODUCT_6001                   0x6001
@@ -432,6 +434,8 @@ static void option_instat_callback(struct urb *urb);
 #define CINTERION_PRODUCT_CLS8                 0x00b0
 #define CINTERION_PRODUCT_MV31_MBIM            0x00b3
 #define CINTERION_PRODUCT_MV31_RMNET           0x00b7
+#define CINTERION_PRODUCT_MV31_2_MBIM          0x00b8
+#define CINTERION_PRODUCT_MV31_2_RMNET         0x00b9
 #define CINTERION_PRODUCT_MV32_WA              0x00f1
 #define CINTERION_PRODUCT_MV32_WB              0x00f2
 
@@ -1132,6 +1136,8 @@ static const struct usb_device_id option_ids[] = {
        { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
          .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
        { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+       { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
+         .driver_info = RSVD(6) | ZLP },
        { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
          .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
        { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
@@ -1145,6 +1151,7 @@ static const struct usb_device_id option_ids[] = {
          .driver_info = ZLP },
        { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
        { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+       { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
 
        { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
        { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
@@ -1277,6 +1284,7 @@ static const struct usb_device_id option_ids[] = {
          .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
        { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1231, 0xff),    /* Telit LE910Cx (RNDIS) */
          .driver_info = NCTRL(2) | RSVD(3) },
+       { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x1250, 0xff, 0x00, 0x00) },   /* Telit LE910Cx (rmnet) */
        { USB_DEVICE(TELIT_VENDOR_ID, 0x1260),
          .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
        { USB_DEVICE(TELIT_VENDOR_ID, 0x1261),
@@ -1979,6 +1987,10 @@ static const struct usb_device_id option_ids[] = {
          .driver_info = RSVD(3)},
        { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff),
          .driver_info = RSVD(0)},
+       { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_2_MBIM, 0xff),
+         .driver_info = RSVD(3)},
+       { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_2_RMNET, 0xff),
+         .driver_info = RSVD(0)},
        { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA, 0xff),
          .driver_info = RSVD(3)},
        { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB, 0xff),
index 3506c47..40b1ab3 100644 (file)
@@ -436,22 +436,27 @@ static int pl2303_detect_type(struct usb_serial *serial)
                break;
        case 0x200:
                switch (bcdDevice) {
-               case 0x100:
+               case 0x100:     /* GC */
                case 0x105:
+                       return TYPE_HXN;
+               case 0x300:     /* GT / TA */
+                       if (pl2303_supports_hx_status(serial))
+                               return TYPE_TA;
+                       fallthrough;
                case 0x305:
+               case 0x400:     /* GL */
                case 0x405:
+                       return TYPE_HXN;
+               case 0x500:     /* GE / TB */
+                       if (pl2303_supports_hx_status(serial))
+                               return TYPE_TB;
+                       fallthrough;
+               case 0x505:
+               case 0x600:     /* GS */
                case 0x605:
-                       /*
-                        * Assume it's an HXN-type if the device doesn't
-                        * support the old read request value.
-                        */
-                       if (!pl2303_supports_hx_status(serial))
-                               return TYPE_HXN;
-                       break;
-               case 0x300:
-                       return TYPE_TA;
-               case 0x500:
-                       return TYPE_TB;
+               case 0x700:     /* GR */
+               case 0x705:
+                       return TYPE_HXN;
                }
                break;
        }
index 557f392..073fd2e 100644 (file)
@@ -56,7 +56,6 @@ config TYPEC_WCOVE
        tristate "Intel WhiskeyCove PMIC USB Type-C PHY driver"
        depends on ACPI
        depends on MFD_INTEL_PMC_BXT
-       depends on INTEL_SOC_PMIC
        depends on BXT_WC_PMIC_OPREGION
        help
          This driver adds support for USB Type-C on Intel Broxton platforms
index 1b6d46b..e85c1d7 100644 (file)
@@ -1962,6 +1962,8 @@ static void mlx5_vdpa_set_vq_cb(struct vdpa_device *vdev, u16 idx, struct vdpa_c
        struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
 
        ndev->event_cbs[idx] = *cb;
+       if (is_ctrl_vq_idx(mvdev, idx))
+               mvdev->cvq.event_cb = *cb;
 }
 
 static void mlx5_cvq_notify(struct vringh *vring)
@@ -2174,7 +2176,6 @@ static int verify_driver_features(struct mlx5_vdpa_dev *mvdev, u64 features)
 static int setup_virtqueues(struct mlx5_vdpa_dev *mvdev)
 {
        struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
-       struct mlx5_control_vq *cvq = &mvdev->cvq;
        int err;
        int i;
 
@@ -2184,16 +2185,6 @@ static int setup_virtqueues(struct mlx5_vdpa_dev *mvdev)
                        goto err_vq;
        }
 
-       if (mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)) {
-               err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features,
-                                       MLX5_CVQ_MAX_ENT, false,
-                                       (struct vring_desc *)(uintptr_t)cvq->desc_addr,
-                                       (struct vring_avail *)(uintptr_t)cvq->driver_addr,
-                                       (struct vring_used *)(uintptr_t)cvq->device_addr);
-               if (err)
-                       goto err_vq;
-       }
-
        return 0;
 
 err_vq:
@@ -2466,6 +2457,21 @@ static void clear_vqs_ready(struct mlx5_vdpa_net *ndev)
        ndev->mvdev.cvq.ready = false;
 }
 
+static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev)
+{
+       struct mlx5_control_vq *cvq = &mvdev->cvq;
+       int err = 0;
+
+       if (mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ))
+               err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features,
+                                       MLX5_CVQ_MAX_ENT, false,
+                                       (struct vring_desc *)(uintptr_t)cvq->desc_addr,
+                                       (struct vring_avail *)(uintptr_t)cvq->driver_addr,
+                                       (struct vring_used *)(uintptr_t)cvq->device_addr);
+
+       return err;
+}
+
 static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
 {
        struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
@@ -2478,6 +2484,11 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
 
        if ((status ^ ndev->mvdev.status) & VIRTIO_CONFIG_S_DRIVER_OK) {
                if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
+                       err = setup_cvq_vring(mvdev);
+                       if (err) {
+                               mlx5_vdpa_warn(mvdev, "failed to setup control VQ vring\n");
+                               goto err_setup;
+                       }
                        err = setup_driver(mvdev);
                        if (err) {
                                mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
index 776ad74..3bc27de 100644 (file)
@@ -1476,16 +1476,12 @@ static char *vduse_devnode(struct device *dev, umode_t *mode)
        return kasprintf(GFP_KERNEL, "vduse/%s", dev_name(dev));
 }
 
-static void vduse_mgmtdev_release(struct device *dev)
-{
-}
-
-static struct device vduse_mgmtdev = {
-       .init_name = "vduse",
-       .release = vduse_mgmtdev_release,
+struct vduse_mgmt_dev {
+       struct vdpa_mgmt_dev mgmt_dev;
+       struct device dev;
 };
 
-static struct vdpa_mgmt_dev mgmt_dev;
+static struct vduse_mgmt_dev *vduse_mgmt;
 
 static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
 {
@@ -1510,7 +1506,7 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
        }
        set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops);
        vdev->vdpa.dma_dev = &vdev->vdpa.dev;
-       vdev->vdpa.mdev = &mgmt_dev;
+       vdev->vdpa.mdev = &vduse_mgmt->mgmt_dev;
 
        return 0;
 }
@@ -1556,34 +1552,52 @@ static struct virtio_device_id id_table[] = {
        { 0 },
 };
 
-static struct vdpa_mgmt_dev mgmt_dev = {
-       .device = &vduse_mgmtdev,
-       .id_table = id_table,
-       .ops = &vdpa_dev_mgmtdev_ops,
-};
+static void vduse_mgmtdev_release(struct device *dev)
+{
+       struct vduse_mgmt_dev *mgmt_dev;
+
+       mgmt_dev = container_of(dev, struct vduse_mgmt_dev, dev);
+       kfree(mgmt_dev);
+}
 
 static int vduse_mgmtdev_init(void)
 {
        int ret;
 
-       ret = device_register(&vduse_mgmtdev);
-       if (ret)
+       vduse_mgmt = kzalloc(sizeof(*vduse_mgmt), GFP_KERNEL);
+       if (!vduse_mgmt)
+               return -ENOMEM;
+
+       ret = dev_set_name(&vduse_mgmt->dev, "vduse");
+       if (ret) {
+               kfree(vduse_mgmt);
                return ret;
+       }
 
-       ret = vdpa_mgmtdev_register(&mgmt_dev);
+       vduse_mgmt->dev.release = vduse_mgmtdev_release;
+
+       ret = device_register(&vduse_mgmt->dev);
        if (ret)
-               goto err;
+               goto dev_reg_err;
 
-       return 0;
-err:
-       device_unregister(&vduse_mgmtdev);
+       vduse_mgmt->mgmt_dev.id_table = id_table;
+       vduse_mgmt->mgmt_dev.ops = &vdpa_dev_mgmtdev_ops;
+       vduse_mgmt->mgmt_dev.device = &vduse_mgmt->dev;
+       ret = vdpa_mgmtdev_register(&vduse_mgmt->mgmt_dev);
+       if (ret)
+               device_unregister(&vduse_mgmt->dev);
+
+       return ret;
+
+dev_reg_err:
+       put_device(&vduse_mgmt->dev);
        return ret;
 }
 
 static void vduse_mgmtdev_exit(void)
 {
-       vdpa_mgmtdev_unregister(&mgmt_dev);
-       device_unregister(&vduse_mgmtdev);
+       vdpa_mgmtdev_unregister(&vduse_mgmt->mgmt_dev);
+       device_unregister(&vduse_mgmt->dev);
 }
 
 static int vduse_init(void)
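
The vduse rework follows the driver-core lifetime rule: once a struct device
has been initialized, it may only be freed from its release() callback when
the last reference drops, never via a static definition or a direct kfree()
after unregister. Embedding the device in the dynamically allocated wrapper
lets container_of() recover the whole object in release; note also that a
failed device_register() is undone with put_device(), not kfree(), since the
refcount already owns the memory. The core of the pattern:

    struct vduse_mgmt_dev {
            struct vdpa_mgmt_dev mgmt_dev;
            struct device dev;
    };

    static void vduse_mgmtdev_release(struct device *dev)
    {
            /* the only safe place to free: runs at the last put_device() */
            kfree(container_of(dev, struct vduse_mgmt_dev, dev));
    }
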
index 5ad2596..23dcbfd 100644 (file)
@@ -1209,7 +1209,7 @@ static int vhost_vdpa_release(struct inode *inode, struct file *filep)
        vhost_dev_stop(&v->vdev);
        vhost_vdpa_free_domain(v);
        vhost_vdpa_config_put(v);
-       vhost_dev_cleanup(&v->vdev);
+       vhost_vdpa_cleanup(v);
        mutex_unlock(&d->mutex);
 
        atomic_dec(&v->opened);
index fa23bf0..bd4dc97 100644 (file)
@@ -1148,6 +1148,7 @@ int sti_call(const struct sti_struct *sti, unsigned long func,
        return ret;
 }
 
+#if defined(CONFIG_FB_STI)
 /* check if given fb_info is the primary device */
 int fb_is_primary_device(struct fb_info *info)
 {
@@ -1163,6 +1164,7 @@ int fb_is_primary_device(struct fb_info *info)
        return (sti->info == info);
 }
 EXPORT_SYMBOL(fb_is_primary_device);
+#endif
 
 MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer");
 MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines");
index 52f731a..519313b 100644 (file)
@@ -560,8 +560,7 @@ int au1100fb_drv_suspend(struct platform_device *dev, pm_message_t state)
        /* Blank the LCD */
        au1100fb_fb_blank(VESA_POWERDOWN, &fbdev->info);
 
-       if (fbdev->lcdclk)
-               clk_disable(fbdev->lcdclk);
+       clk_disable(fbdev->lcdclk);
 
        memcpy(&fbregs, fbdev->regs, sizeof(struct au1100fb_regs));
 
@@ -577,8 +576,7 @@ int au1100fb_drv_resume(struct platform_device *dev)
 
        memcpy(fbdev->regs, &fbregs, sizeof(struct au1100fb_regs));
 
-       if (fbdev->lcdclk)
-               clk_enable(fbdev->lcdclk);
+       clk_enable(fbdev->lcdclk);
 
        /* Unblank the LCD */
        au1100fb_fb_blank(VESA_NO_BLANKING, &fbdev->info);
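
The removed NULL checks are redundant because the clk API, as implemented by the common clock framework, already treats a NULL clock as a no-op: clk_disable() returns immediately for a NULL (or error-valued) clk, and clk_enable() returns 0. A driver holding an optional clock can therefore call them unconditionally; roughly:

    #include <linux/clk.h>

    /* Sketch: lcdclk may be NULL if the clock is optional or absent. */
    static void lcd_clock_off(struct clk *lcdclk)
    {
            clk_disable(lcdclk);    /* no-op when lcdclk is NULL */
    }
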
index 3d47c34..51e072c 100644 (file)
@@ -2184,12 +2184,6 @@ static struct pci_driver cirrusfb_pci_driver = {
        .id_table       = cirrusfb_pci_table,
        .probe          = cirrusfb_pci_register,
        .remove         = cirrusfb_pci_unregister,
-#ifdef CONFIG_PM
-#if 0
-       .suspend        = cirrusfb_pci_suspend,
-       .resume         = cirrusfb_pci_resume,
-#endif
-#endif
 };
 #endif /* CONFIG_PCI */
 
index afa2863..8afc453 100644 (file)
@@ -19,6 +19,7 @@
 #include <linux/kernel.h>
 #include <linux/major.h>
 #include <linux/slab.h>
+#include <linux/sysfb.h>
 #include <linux/mm.h>
 #include <linux/mman.h>
 #include <linux/vt.h>
@@ -1752,6 +1753,17 @@ int remove_conflicting_framebuffers(struct apertures_struct *a,
                do_free = true;
        }
 
+       /*
+        * If a driver asked to unregister a platform device registered by
+        * sysfb, then can be assumed that this is a driver for a display
+        * that is set up by the system firmware and has a generic driver.
+        *
+        * Drivers for devices that don't have a generic driver will never
+        * ask for this, so let's assume that a real driver for the display
+        * was already probed and prevent sysfb from registering devices later.
+        */
+       sysfb_disable();
+
        mutex_lock(&registration_lock);
        do_remove_conflicting_framebuffers(a, name, primary);
        mutex_unlock(&registration_lock);
index a957996..5647fca 100644 (file)
@@ -472,7 +472,7 @@ static int intelfb_pci_register(struct pci_dev *pdev,
        struct fb_info *info;
        struct intelfb_info *dinfo;
        int i, err, dvo;
-       int aperture_size, stolen_size;
+       int aperture_size, stolen_size = 0;
        struct agp_kern_info gtt_info;
        int agp_memtype;
        const char *s;
@@ -571,7 +571,7 @@ static int intelfb_pci_register(struct pci_dev *pdev,
                return -ENODEV;
        }
 
-       if (intelfbhw_get_memory(pdev, &aperture_size,&stolen_size)) {
+       if (intelfbhw_get_memory(pdev, &aperture_size, &stolen_size)) {
                cleanup(dinfo);
                return -ENODEV;
        }
index 57aff74..2086e06 100644 (file)
@@ -201,13 +201,11 @@ int intelfbhw_get_memory(struct pci_dev *pdev, int *aperture_size,
        case PCI_DEVICE_ID_INTEL_945GME:
        case PCI_DEVICE_ID_INTEL_965G:
        case PCI_DEVICE_ID_INTEL_965GM:
-               /* 915, 945 and 965 chipsets support a 256MB aperture.
-                  Aperture size is determined by inspected the
-                  base address of the aperture. */
-               if (pci_resource_start(pdev, 2) & 0x08000000)
-                       *aperture_size = MB(128);
-               else
-                       *aperture_size = MB(256);
+               /*
+                * 915, 945 and 965 chipsets support 64MB, 128MB or 256MB
+                * aperture. Determine size from PCI resource length.
+                */
+               *aperture_size = pci_resource_len(pdev, 2);
                break;
        default:
                if ((tmp & INTEL_GMCH_MEM_MASK) == INTEL_GMCH_MEM_64M)
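
The new code sizes the aperture from the BAR itself rather than guessing from its base address, which could only distinguish two of the supported sizes. pci_resource_len() returns the decoded length of a BAR, so one line covers the 64, 128 and 256 MB parts; as a hypothetical helper:

    #include <linux/pci.h>

    static resource_size_t gfx_aperture_bytes(struct pci_dev *pdev)
    {
            /* BAR 2 is the graphics aperture on these chipsets; its
             * resource length is the decoded aperture size. */
            return pci_resource_len(pdev, 2);
    }
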
index c90eb8c..66aff6c 100644 (file)
@@ -359,7 +359,7 @@ static void sossi_set_bits_per_cycle(int bpc)
        int bus_pick_count, bus_pick_width;
 
        /*
-        * We set explicitly the the bus_pick_count as well, although
+        * We set explicitly the bus_pick_count as well, although
         * with remapping/reordering disabled it will be calculated by HW
         * as (32 / bus_pick_width).
         */
index 6fbfeb0..170463a 100644 (file)
@@ -143,7 +143,7 @@ int hdmi_phy_configure(struct hdmi_phy_data *phy, unsigned long hfbitclk,
        /*
         * In OMAP5+, the HFBITCLK must be divided by 2 before issuing the
         * HDMI_PHYPWRCMD_LDOON command.
-       */
+        */
        if (phy_feat->bist_ctrl)
                REG_FLD_MOD(phy->base, HDMI_TXPHY_BIST_CONTROL, 1, 11, 11);
 
index 043cc8f..c3cd1e1 100644 (file)
@@ -381,7 +381,7 @@ pxa3xx_gcu_write(struct file *file, const char *buff,
        struct pxa3xx_gcu_batch *buffer;
        struct pxa3xx_gcu_priv *priv = to_pxa3xx_gcu_priv(file);
 
-       int words = count / 4;
+       size_t words = count / 4;
 
        /* Does not need to be atomic. There's a lock in user space,
         * but anyhow, this is just for statistics. */
index 2c19856..f96ce88 100644 (file)
@@ -237,8 +237,7 @@ static int simplefb_clocks_get(struct simplefb_par *par,
                if (IS_ERR(clock)) {
                        if (PTR_ERR(clock) == -EPROBE_DEFER) {
                                while (--i >= 0) {
-                                       if (par->clks[i])
-                                               clk_put(par->clks[i]);
+                                       clk_put(par->clks[i]);
                                }
                                kfree(par->clks);
                                return -EPROBE_DEFER;
index bcacfb6..d119b1d 100644 (file)
@@ -96,7 +96,7 @@ static const struct fb_fix_screeninfo xxxfb_fix = {
 
     /*
      *         Modern graphical hardware not only supports pipelines but some 
-     *  also support multiple monitors where each display can have its  
+     *  also support multiple monitors where each display can have
      *  its own unique data. In this case each display could be  
      *  represented by a separate framebuffer device thus a separate 
      *  struct fb_info. Now the struct xxx_par represents the graphics
@@ -838,9 +838,9 @@ static void xxxfb_remove(struct pci_dev *dev)
  *
  *      See Documentation/driver-api/pm/devices.rst for more information
  */
-static int xxxfb_suspend(struct pci_dev *dev, pm_message_t msg)
+static int xxxfb_suspend(struct device *dev)
 {
-       struct fb_info *info = pci_get_drvdata(dev);
+       struct fb_info *info = dev_get_drvdata(dev);
        struct xxxfb_par *par = info->par;
 
        /* suspend here */
@@ -853,9 +853,9 @@ static int xxxfb_suspend(struct pci_dev *dev, pm_message_t msg)
  *
  *      See Documentation/driver-api/pm/devices.rst for more information
  */
-static int xxxfb_resume(struct pci_dev *dev)
+static int xxxfb_resume(struct device *dev)
 {
-       struct fb_info *info = pci_get_drvdata(dev);
+       struct fb_info *info = dev_get_drvdata(dev);
        struct xxxfb_par *par = info->par;
 
        /* resume here */
@@ -873,14 +873,15 @@ static const struct pci_device_id xxxfb_id_table[] = {
        { 0, }
 };
 
+static SIMPLE_DEV_PM_OPS(xxxfb_pm_ops, xxxfb_suspend, xxxfb_resume);
+
 /* For PCI drivers */
 static struct pci_driver xxxfb_driver = {
        .name =         "xxxfb",
        .id_table =     xxxfb_id_table,
        .probe =        xxxfb_probe,
        .remove =       xxxfb_remove,
-       .suspend =      xxxfb_suspend, /* optional but recommended */
-       .resume =       xxxfb_resume,  /* optional but recommended */
+       .driver.pm =    xxxfb_pm_ops, /* optional but recommended */
 };
 
 MODULE_DEVICE_TABLE(pci, xxxfb_id_table);
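
The skeleton driver now models the standard conversion away from legacy PCI power-management hooks: the suspend/resume callbacks take a struct device instead of a pci_dev, are bundled into a dev_pm_ops with SIMPLE_DEV_PM_OPS(), and are attached through .driver.pm rather than the bus-specific .suspend/.resume fields. Condensed, for a hypothetical driver:

    #include <linux/pci.h>
    #include <linux/pm.h>

    static int mydrv_suspend(struct device *dev)
    {
            /* dev_get_drvdata(dev) replaces pci_get_drvdata(pdev) */
            return 0;
    }

    static int mydrv_resume(struct device *dev)
    {
            return 0;
    }

    static SIMPLE_DEV_PM_OPS(mydrv_pm_ops, mydrv_suspend, mydrv_resume);

    static struct pci_driver mydrv_driver = {
            .name      = "mydrv",
            /* .id_table, .probe, .remove as before */
            .driver.pm = &mydrv_pm_ops,     /* replaces .suspend/.resume */
    };
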
index a6dc8b5..e1556d2 100644 (file)
@@ -29,6 +29,19 @@ menuconfig VIRTIO_MENU
 
 if VIRTIO_MENU
 
+config VIRTIO_HARDEN_NOTIFICATION
+        bool "Harden virtio notification"
+        help
+          Enable this to harden the device notifications and suppress
+          those that happen at a time when notifications are illegal.
+
+          Experimental: Note that several drivers still have bugs that
+          may cause crashes or hangs when correct handling of
+          notifications is enforced; depending on the subset of
+          drivers and devices you use, this may or may not work.
+
+          If unsure, say N.
+
 config VIRTIO_PCI
        tristate "PCI driver for virtio devices"
        depends on PCI
index 6bace84..7deeed3 100644 (file)
@@ -219,6 +219,7 @@ static int virtio_features_ok(struct virtio_device *dev)
  * */
 void virtio_reset_device(struct virtio_device *dev)
 {
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
        /*
         * The below virtio_synchronize_cbs() guarantees that any
         * interrupt for this line arriving after
@@ -227,6 +228,7 @@ void virtio_reset_device(struct virtio_device *dev)
         */
        virtio_break_device(dev);
        virtio_synchronize_cbs(dev);
+#endif
 
        dev->config->reset(dev);
 }
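
Because the hardening is a compile-time option, the same guard could also be written with IS_ENABLED() instead of #ifdef, which keeps both branches visible to the compiler. This is only an alternative sketch, not the upstream style:

    #include <linux/virtio.h>

    static void my_reset_device(struct virtio_device *dev)
    {
            if (IS_ENABLED(CONFIG_VIRTIO_HARDEN_NOTIFICATION)) {
                    /* quiesce callbacks before reset, as above */
                    virtio_break_device(dev);
                    virtio_synchronize_cbs(dev);
            }
            dev->config->reset(dev);
    }
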
index c9bec38..083ff1e 100644 (file)
@@ -62,6 +62,7 @@
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
+#include <linux/pm.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/virtio.h>
@@ -556,6 +557,28 @@ static const struct virtio_config_ops virtio_mmio_config_ops = {
        .synchronize_cbs = vm_synchronize_cbs,
 };
 
+#ifdef CONFIG_PM_SLEEP
+static int virtio_mmio_freeze(struct device *dev)
+{
+       struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev);
+
+       return virtio_device_freeze(&vm_dev->vdev);
+}
+
+static int virtio_mmio_restore(struct device *dev)
+{
+       struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev);
+
+       if (vm_dev->version == 1)
+               writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE);
+
+       return virtio_device_restore(&vm_dev->vdev);
+}
+
+static const struct dev_pm_ops virtio_mmio_pm_ops = {
+       SET_SYSTEM_SLEEP_PM_OPS(virtio_mmio_freeze, virtio_mmio_restore)
+};
+#endif
 
 static void virtio_mmio_release_dev(struct device *_d)
 {
@@ -799,6 +822,9 @@ static struct platform_driver virtio_mmio_driver = {
                .name   = "virtio-mmio",
                .of_match_table = virtio_mmio_match,
                .acpi_match_table = ACPI_PTR(virtio_mmio_acpi_match),
+#ifdef CONFIG_PM_SLEEP
+               .pm     = &virtio_mmio_pm_ops,
+#endif
        },
 };
 
index b790f30..fa2a944 100644 (file)
@@ -220,8 +220,6 @@ int vp_modern_probe(struct virtio_pci_modern_device *mdev)
 
        check_offsets();
 
-       mdev->pci_dev = pci_dev;
-
        /* We only own devices >= 0x1000 and <= 0x107f: leave the rest. */
        if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f)
                return -ENODEV;
index 13a7348..643ca77 100644 (file)
@@ -111,7 +111,12 @@ struct vring_virtqueue {
        /* Number we've added since last sync. */
        unsigned int num_added;
 
-       /* Last used index we've seen. */
+       /* Last used index we've seen.
+        * For the split ring, it just contains the last used index.
+        * For the packed ring:
+        * bits up to VRING_PACKED_EVENT_F_WRAP_CTR contain the last used index;
+        * bits from VRING_PACKED_EVENT_F_WRAP_CTR contain the used wrap counter.
+        */
        u16 last_used_idx;
 
        /* Hint for event idx: already triggered no need to disable. */
@@ -154,9 +159,6 @@ struct vring_virtqueue {
                        /* Driver ring wrap counter. */
                        bool avail_wrap_counter;
 
-                       /* Device ring wrap counter. */
-                       bool used_wrap_counter;
-
                        /* Avail used flags. */
                        u16 avail_used_flags;
 
@@ -933,7 +935,7 @@ static struct virtqueue *vring_create_virtqueue_split(
        for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) {
                queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
                                          &dma_addr,
-                                         GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);
+                                         GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
                if (queue)
                        break;
                if (!may_reduce_num)
@@ -973,6 +975,15 @@ static struct virtqueue *vring_create_virtqueue_split(
 /*
  * Packed ring specific functions - *_packed().
  */
+static inline bool packed_used_wrap_counter(u16 last_used_idx)
+{
+       return !!(last_used_idx & (1 << VRING_PACKED_EVENT_F_WRAP_CTR));
+}
+
+static inline u16 packed_last_used(u16 last_used_idx)
+{
+       return last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR));
+}
 
 static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
                                     struct vring_desc_extra *extra)
@@ -1406,8 +1417,14 @@ static inline bool is_used_desc_packed(const struct vring_virtqueue *vq,
 
 static inline bool more_used_packed(const struct vring_virtqueue *vq)
 {
-       return is_used_desc_packed(vq, vq->last_used_idx,
-                       vq->packed.used_wrap_counter);
+       u16 last_used;
+       u16 last_used_idx;
+       bool used_wrap_counter;
+
+       last_used_idx = READ_ONCE(vq->last_used_idx);
+       last_used = packed_last_used(last_used_idx);
+       used_wrap_counter = packed_used_wrap_counter(last_used_idx);
+       return is_used_desc_packed(vq, last_used, used_wrap_counter);
 }
 
 static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
@@ -1415,7 +1432,8 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
                                          void **ctx)
 {
        struct vring_virtqueue *vq = to_vvq(_vq);
-       u16 last_used, id;
+       u16 last_used, id, last_used_idx;
+       bool used_wrap_counter;
        void *ret;
 
        START_USE(vq);
@@ -1434,7 +1452,9 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
        /* Only get used elements after they have been exposed by host. */
        virtio_rmb(vq->weak_barriers);
 
-       last_used = vq->last_used_idx;
+       last_used_idx = READ_ONCE(vq->last_used_idx);
+       used_wrap_counter = packed_used_wrap_counter(last_used_idx);
+       last_used = packed_last_used(last_used_idx);
        id = le16_to_cpu(vq->packed.vring.desc[last_used].id);
        *len = le32_to_cpu(vq->packed.vring.desc[last_used].len);
 
@@ -1451,12 +1471,15 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
        ret = vq->packed.desc_state[id].data;
        detach_buf_packed(vq, id, ctx);
 
-       vq->last_used_idx += vq->packed.desc_state[id].num;
-       if (unlikely(vq->last_used_idx >= vq->packed.vring.num)) {
-               vq->last_used_idx -= vq->packed.vring.num;
-               vq->packed.used_wrap_counter ^= 1;
+       last_used += vq->packed.desc_state[id].num;
+       if (unlikely(last_used >= vq->packed.vring.num)) {
+               last_used -= vq->packed.vring.num;
+               used_wrap_counter ^= 1;
        }
 
+       last_used = (last_used | (used_wrap_counter << VRING_PACKED_EVENT_F_WRAP_CTR));
+       WRITE_ONCE(vq->last_used_idx, last_used);
+
        /*
         * If we expect an interrupt for the next entry, tell host
         * by writing event index and flush out the write before
@@ -1465,9 +1488,7 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
        if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DESC)
                virtio_store_mb(vq->weak_barriers,
                                &vq->packed.vring.driver->off_wrap,
-                               cpu_to_le16(vq->last_used_idx |
-                                       (vq->packed.used_wrap_counter <<
-                                        VRING_PACKED_EVENT_F_WRAP_CTR)));
+                               cpu_to_le16(vq->last_used_idx));
 
        LAST_ADD_TIME_INVALID(vq);
 
@@ -1499,9 +1520,7 @@ static unsigned int virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
 
        if (vq->event) {
                vq->packed.vring.driver->off_wrap =
-                       cpu_to_le16(vq->last_used_idx |
-                               (vq->packed.used_wrap_counter <<
-                                VRING_PACKED_EVENT_F_WRAP_CTR));
+                       cpu_to_le16(vq->last_used_idx);
                /*
                 * We need to update event offset and event wrap
                 * counter first before updating event flags.
@@ -1518,8 +1537,7 @@ static unsigned int virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
        }
 
        END_USE(vq);
-       return vq->last_used_idx | ((u16)vq->packed.used_wrap_counter <<
-                       VRING_PACKED_EVENT_F_WRAP_CTR);
+       return vq->last_used_idx;
 }
 
 static bool virtqueue_poll_packed(struct virtqueue *_vq, u16 off_wrap)
@@ -1537,7 +1555,7 @@ static bool virtqueue_poll_packed(struct virtqueue *_vq, u16 off_wrap)
 static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
 {
        struct vring_virtqueue *vq = to_vvq(_vq);
-       u16 used_idx, wrap_counter;
+       u16 used_idx, wrap_counter, last_used_idx;
        u16 bufs;
 
        START_USE(vq);
@@ -1550,9 +1568,10 @@ static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
        if (vq->event) {
                /* TODO: tune this threshold */
                bufs = (vq->packed.vring.num - vq->vq.num_free) * 3 / 4;
-               wrap_counter = vq->packed.used_wrap_counter;
+               last_used_idx = READ_ONCE(vq->last_used_idx);
+               wrap_counter = packed_used_wrap_counter(last_used_idx);
 
-               used_idx = vq->last_used_idx + bufs;
+               used_idx = packed_last_used(last_used_idx) + bufs;
                if (used_idx >= vq->packed.vring.num) {
                        used_idx -= vq->packed.vring.num;
                        wrap_counter ^= 1;
@@ -1582,9 +1601,10 @@ static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
         */
        virtio_mb(vq->weak_barriers);
 
-       if (is_used_desc_packed(vq,
-                               vq->last_used_idx,
-                               vq->packed.used_wrap_counter)) {
+       last_used_idx = READ_ONCE(vq->last_used_idx);
+       wrap_counter = packed_used_wrap_counter(last_used_idx);
+       used_idx = packed_last_used(last_used_idx);
+       if (is_used_desc_packed(vq, used_idx, wrap_counter)) {
                END_USE(vq);
                return false;
        }
@@ -1688,8 +1708,12 @@ static struct virtqueue *vring_create_virtqueue_packed(
        vq->we_own_ring = true;
        vq->notify = notify;
        vq->weak_barriers = weak_barriers;
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
        vq->broken = true;
-       vq->last_used_idx = 0;
+#else
+       vq->broken = false;
+#endif
+       vq->last_used_idx = 0 | (1 << VRING_PACKED_EVENT_F_WRAP_CTR);
        vq->event_triggered = false;
        vq->num_added = 0;
        vq->packed_ring = true;
@@ -1720,7 +1744,6 @@ static struct virtqueue *vring_create_virtqueue_packed(
 
        vq->packed.next_avail_idx = 0;
        vq->packed.avail_wrap_counter = 1;
-       vq->packed.used_wrap_counter = 1;
        vq->packed.event_flags_shadow = 0;
        vq->packed.avail_used_flags = 1 << VRING_PACKED_DESC_F_AVAIL;
 
@@ -2135,9 +2158,13 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
        }
 
        if (unlikely(vq->broken)) {
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
                dev_warn_once(&vq->vq.vdev->dev,
                              "virtio vring IRQ raised before DRIVER_OK");
                return IRQ_NONE;
+#else
+               return IRQ_HANDLED;
+#endif
        }
 
        /* Just a hint for performance: so it's ok that this can be racy! */
@@ -2180,7 +2207,11 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
        vq->we_own_ring = false;
        vq->notify = notify;
        vq->weak_barriers = weak_barriers;
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
        vq->broken = true;
+#else
+       vq->broken = false;
+#endif
        vq->last_used_idx = 0;
        vq->event_triggered = false;
        vq->num_added = 0;
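
To make the new encoding concrete: VRING_PACKED_EVENT_F_WRAP_CTR is 15, so bit 15 of last_used_idx carries the used wrap counter and bits 0-14 carry the index, letting both be published atomically through a single READ_ONCE()/WRITE_ONCE() on one 16-bit value. A standalone sketch of the pack/unpack arithmetic mirroring the helpers above:

    #include <stdbool.h>
    #include <stdint.h>

    #define VRING_PACKED_EVENT_F_WRAP_CTR 15

    static inline bool used_wrap_counter(uint16_t last_used_idx)
    {
            return !!(last_used_idx & (1 << VRING_PACKED_EVENT_F_WRAP_CTR));
    }

    static inline uint16_t last_used(uint16_t last_used_idx)
    {
            /* mask off bit 15, keep bits 0-14 */
            return last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR));
    }

    static inline uint16_t pack(uint16_t idx, bool wrap)
    {
            return idx | ((uint16_t)wrap << VRING_PACKED_EVENT_F_WRAP_CTR);
    }

    /* pack(5, true) == 0x8005; last_used(0x8005) == 5;
     * used_wrap_counter(0x8005) == true */
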
index b0b2d7a..2fd85be 100644 (file)
@@ -172,3 +172,4 @@ module_platform_driver(gxp_wdt_driver);
 MODULE_AUTHOR("Nick Hawkins <nick.hawkins@hpe.com>");
 MODULE_AUTHOR("Jean-Marie Verdun <verdun@hpe.com>");
 MODULE_DESCRIPTION("Driver for GXP watchdog timer");
+MODULE_LICENSE("GPL");
index 7b59144..87f1828 100644 (file)
@@ -42,7 +42,7 @@ void xen_setup_features(void)
                if (HYPERVISOR_xen_version(XENVER_get_features, &fi) < 0)
                        break;
                for (j = 0; j < 32; j++)
-                       xen_features[i * 32 + j] = !!(fi.submap & 1<<j);
+                       xen_features[i * 32 + j] = !!(fi.submap & 1U << j);
        }
 
        if (xen_pv_domain()) {
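
The 1U in the fix is not cosmetic: with a plain signed 1, j == 31 shifts a one into the sign bit, which is undefined behaviour in C, whereas 1U << j is well-defined for j up to 31. In isolation:

    #include <stdint.h>

    static inline int feature_bit(uint32_t submap, unsigned int j)
    {
            /* 1U << 31 is fine; 1 << 31 would overflow a signed int (UB) */
            return !!(submap & (1U << j));
    }
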
index 20d7d05..40ef379 100644 (file)
@@ -16,6 +16,7 @@
 #include <linux/mmu_notifier.h>
 #include <linux/types.h>
 #include <xen/interface/event_channel.h>
+#include <xen/grant_table.h>
 
 struct gntdev_dmabuf_priv;
 
@@ -56,6 +57,7 @@ struct gntdev_grant_map {
        struct gnttab_unmap_grant_ref *unmap_ops;
        struct gnttab_map_grant_ref   *kmap_ops;
        struct gnttab_unmap_grant_ref *kunmap_ops;
+       bool *being_removed;
        struct page **pages;
        unsigned long pages_vm_start;
 
@@ -73,6 +75,11 @@ struct gntdev_grant_map {
        /* Needed to avoid allocation in gnttab_dma_free_pages(). */
        xen_pfn_t *frames;
 #endif
+
+       /* Number of live grants */
+       atomic_t live_grants;
+       /* Needed to avoid allocation in __unmap_grant_pages */
+       struct gntab_unmap_queue_data unmap_data;
 };
 
 struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
index 59ffea8..4b56c39 100644 (file)
@@ -35,6 +35,7 @@
 #include <linux/slab.h>
 #include <linux/highmem.h>
 #include <linux/refcount.h>
+#include <linux/workqueue.h>
 
 #include <xen/xen.h>
 #include <xen/grant_table.h>
@@ -60,10 +61,11 @@ module_param(limit, uint, 0644);
 MODULE_PARM_DESC(limit,
        "Maximum number of grants that may be mapped by one mapping request");
 
+/* True in PV mode, false otherwise */
 static int use_ptemod;
 
-static int unmap_grant_pages(struct gntdev_grant_map *map,
-                            int offset, int pages);
+static void unmap_grant_pages(struct gntdev_grant_map *map,
+                             int offset, int pages);
 
 static struct miscdevice gntdev_miscdev;
 
@@ -120,6 +122,7 @@ static void gntdev_free_map(struct gntdev_grant_map *map)
        kvfree(map->unmap_ops);
        kvfree(map->kmap_ops);
        kvfree(map->kunmap_ops);
+       kvfree(map->being_removed);
        kfree(map);
 }
 
@@ -140,10 +143,13 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
        add->unmap_ops = kvmalloc_array(count, sizeof(add->unmap_ops[0]),
                                        GFP_KERNEL);
        add->pages     = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+       add->being_removed =
+               kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
        if (NULL == add->grants    ||
            NULL == add->map_ops   ||
            NULL == add->unmap_ops ||
-           NULL == add->pages)
+           NULL == add->pages     ||
+           NULL == add->being_removed)
                goto err;
        if (use_ptemod) {
                add->kmap_ops   = kvmalloc_array(count, sizeof(add->kmap_ops[0]),
@@ -250,9 +256,36 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
        if (!refcount_dec_and_test(&map->users))
                return;
 
-       if (map->pages && !use_ptemod)
+       if (map->pages && !use_ptemod) {
+               /*
+                * Increment the reference count.  This ensures that the
+                * subsequent call to unmap_grant_pages() will not wind up
+                * re-entering itself.  It *can* wind up calling
+                * gntdev_put_map() recursively, but such calls will be with a
+                * reference count greater than 1, so they will return before
+                * this code is reached.  The recursion depth is thus limited to
+                * 1.  Do NOT use refcount_inc() here, as it will detect that
+                * the reference count is zero and WARN().
+                */
+               refcount_set(&map->users, 1);
+
+               /*
+                * Unmap the grants.  This may or may not be asynchronous, so it
+                * is possible that the reference count is 1 on return, but it
+                * could also be greater than 1.
+                */
                unmap_grant_pages(map, 0, map->count);
 
+               /* Check if the memory now needs to be freed */
+               if (!refcount_dec_and_test(&map->users))
+                       return;
+
+               /*
+                * All pages have been returned to the hypervisor, so free the
+                * map.
+                */
+       }
+
        if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
                notify_remote_via_evtchn(map->notify.event);
                evtchn_put(map->notify.event);
@@ -283,6 +316,7 @@ static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data)
 
 int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 {
+       size_t alloced = 0;
        int i, err = 0;
 
        if (!use_ptemod) {
@@ -331,97 +365,116 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
                        map->count);
 
        for (i = 0; i < map->count; i++) {
-               if (map->map_ops[i].status == GNTST_okay)
+               if (map->map_ops[i].status == GNTST_okay) {
                        map->unmap_ops[i].handle = map->map_ops[i].handle;
-               else if (!err)
+                       if (!use_ptemod)
+                               alloced++;
+               } else if (!err)
                        err = -EINVAL;
 
                if (map->flags & GNTMAP_device_map)
                        map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
 
                if (use_ptemod) {
-                       if (map->kmap_ops[i].status == GNTST_okay)
+                       if (map->kmap_ops[i].status == GNTST_okay) {
+                               if (map->map_ops[i].status == GNTST_okay)
+                                       alloced++;
                                map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
-                       else if (!err)
+                       } else if (!err)
                                err = -EINVAL;
                }
        }
+       atomic_add(alloced, &map->live_grants);
        return err;
 }
 
-static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-                              int pages)
+static void __unmap_grant_pages_done(int result,
+               struct gntab_unmap_queue_data *data)
 {
-       int i, err = 0;
-       struct gntab_unmap_queue_data unmap_data;
-
-       if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
-               int pgno = (map->notify.addr >> PAGE_SHIFT);
-               if (pgno >= offset && pgno < offset + pages) {
-                       /* No need for kmap, pages are in lowmem */
-                       uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
-                       tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
-                       map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
-               }
-       }
-
-       unmap_data.unmap_ops = map->unmap_ops + offset;
-       unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
-       unmap_data.pages = map->pages + offset;
-       unmap_data.count = pages;
-
-       err = gnttab_unmap_refs_sync(&unmap_data);
-       if (err)
-               return err;
+       unsigned int i;
+       struct gntdev_grant_map *map = data->data;
+       unsigned int offset = data->unmap_ops - map->unmap_ops;
 
-       for (i = 0; i < pages; i++) {
-               if (map->unmap_ops[offset+i].status)
-                       err = -EINVAL;
+       for (i = 0; i < data->count; i++) {
+               WARN_ON(map->unmap_ops[offset+i].status);
                pr_debug("unmap handle=%d st=%d\n",
                        map->unmap_ops[offset+i].handle,
                        map->unmap_ops[offset+i].status);
                map->unmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
                if (use_ptemod) {
-                       if (map->kunmap_ops[offset+i].status)
-                               err = -EINVAL;
+                       WARN_ON(map->kunmap_ops[offset+i].status);
                        pr_debug("kunmap handle=%u st=%d\n",
                                 map->kunmap_ops[offset+i].handle,
                                 map->kunmap_ops[offset+i].status);
                        map->kunmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
                }
        }
-       return err;
+       /*
+        * Decrease the live-grant counter.  This must happen after the loop to
+        * prevent premature reuse of the grants by gnttab_mmap().
+        */
+       atomic_sub(data->count, &map->live_grants);
+
+       /* Release reference taken by __unmap_grant_pages */
+       gntdev_put_map(NULL, map);
+}
+
+static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+                              int pages)
+{
+       if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
+               int pgno = (map->notify.addr >> PAGE_SHIFT);
+
+               if (pgno >= offset && pgno < offset + pages) {
+                       /* No need for kmap, pages are in lowmem */
+                       uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
+
+                       tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
+                       map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
+               }
+       }
+
+       map->unmap_data.unmap_ops = map->unmap_ops + offset;
+       map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+       map->unmap_data.pages = map->pages + offset;
+       map->unmap_data.count = pages;
+       map->unmap_data.done = __unmap_grant_pages_done;
+       map->unmap_data.data = map;
+       refcount_inc(&map->users); /* to keep map alive during async call below */
+
+       gnttab_unmap_refs_async(&map->unmap_data);
 }
 
-static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-                            int pages)
+static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+                             int pages)
 {
-       int range, err = 0;
+       int range;
+
+       if (atomic_read(&map->live_grants) == 0)
+               return; /* Nothing to do */
 
        pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
 
        /* It is possible the requested range will have a "hole" where we
         * already unmapped some of the grants. Only unmap valid ranges.
         */
-       while (pages && !err) {
-               while (pages &&
-                      map->unmap_ops[offset].handle == INVALID_GRANT_HANDLE) {
+       while (pages) {
+               while (pages && map->being_removed[offset]) {
                        offset++;
                        pages--;
                }
                range = 0;
                while (range < pages) {
-                       if (map->unmap_ops[offset + range].handle ==
-                           INVALID_GRANT_HANDLE)
+                       if (map->being_removed[offset + range])
                                break;
+                       map->being_removed[offset + range] = true;
                        range++;
                }
-               err = __unmap_grant_pages(map, offset, range);
+               if (range)
+                       __unmap_grant_pages(map, offset, range);
                offset += range;
                pages -= range;
        }
-
-       return err;
 }
 
 /* ------------------------------------------------------------------ */
@@ -473,7 +526,6 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
        struct gntdev_grant_map *map =
                container_of(mn, struct gntdev_grant_map, notifier);
        unsigned long mstart, mend;
-       int err;
 
        if (!mmu_notifier_range_blockable(range))
                return false;
@@ -494,10 +546,9 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
                        map->index, map->count,
                        map->vma->vm_start, map->vma->vm_end,
                        range->start, range->end, mstart, mend);
-       err = unmap_grant_pages(map,
+       unmap_grant_pages(map,
                                (mstart - map->vma->vm_start) >> PAGE_SHIFT,
                                (mend - mstart) >> PAGE_SHIFT);
-       WARN_ON(err);
 
        return true;
 }
@@ -985,6 +1036,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
                goto unlock_out;
        if (use_ptemod && map->vma)
                goto unlock_out;
+       if (atomic_read(&map->live_grants)) {
+               err = -EAGAIN;
+               goto unlock_out;
+       }
        refcount_inc(&map->users);
 
        vma->vm_ops = &gntdev_vmops;
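
The ownership dance in gntdev_put_map() is worth restating in isolation: once the refcount hits zero it is reset to 1 (refcount_inc() on zero would WARN), the asynchronous teardown takes and later drops its own reference, and whichever decrement reaches zero last frees the object. A generic sketch of this recursion-bounding pattern, with hypothetical names (struct obj, start_async_teardown):

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct obj {
            refcount_t users;
            /* resources needing asynchronous teardown */
    };

    static void start_async_teardown(struct obj *o)
    {
            /* hypothetical: kicks off async work that takes its own
             * reference and calls obj_put() again on completion */
            refcount_inc(&o->users);
    }

    static void obj_put(struct obj *o)
    {
            if (!refcount_dec_and_test(&o->users))
                    return;

            /*
             * Count is zero here; refcount_inc() would WARN, so reset to 1.
             * Any re-entrant obj_put() from the teardown path now sees a
             * nonzero count and returns early, bounding recursion to one.
             */
            refcount_set(&o->users, 1);
            start_async_teardown(o);

            if (!refcount_dec_and_test(&o->users))
                    return;         /* teardown still holds a reference */

            kfree(o);
    }
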
index 79df61f..baf2b15 100644 (file)
@@ -152,7 +152,7 @@ static struct p9_fid *v9fs_fid_lookup_with_uid(struct dentry *dentry,
        const unsigned char **wnames, *uname;
        int i, n, l, clone, access;
        struct v9fs_session_info *v9ses;
-       struct p9_fid *fid, *old_fid = NULL;
+       struct p9_fid *fid, *old_fid;
 
        v9ses = v9fs_dentry2v9ses(dentry);
        access = v9ses->flags & V9FS_ACCESS_MASK;
@@ -194,13 +194,12 @@ static struct p9_fid *v9fs_fid_lookup_with_uid(struct dentry *dentry,
                if (IS_ERR(fid))
                        return fid;
 
+               refcount_inc(&fid->count);
                v9fs_fid_add(dentry->d_sb->s_root, fid);
        }
        /* If we are root ourself just return that */
-       if (dentry->d_sb->s_root == dentry) {
-               refcount_inc(&fid->count);
+       if (dentry->d_sb->s_root == dentry)
                return fid;
-       }
        /*
         * Do a multipath walk with attached root.
         * When walking parent we need to make sure we
@@ -212,6 +211,7 @@ static struct p9_fid *v9fs_fid_lookup_with_uid(struct dentry *dentry,
                fid = ERR_PTR(n);
                goto err_out;
        }
+       old_fid = fid;
        clone = 1;
        i = 0;
        while (i < n) {
@@ -221,19 +221,15 @@ static struct p9_fid *v9fs_fid_lookup_with_uid(struct dentry *dentry,
                 * walk to ensure none of the path components change
                 */
                fid = p9_client_walk(fid, l, &wnames[i], clone);
+               /* non-cloning walk will return the same fid */
+               if (fid != old_fid) {
+                       p9_client_clunk(old_fid);
+                       old_fid = fid;
+               }
                if (IS_ERR(fid)) {
-                       if (old_fid) {
-                               /*
-                                * If we fail, clunk fid which are mapping
-                                * to path component and not the last component
-                                * of the path.
-                                */
-                               p9_client_clunk(old_fid);
-                       }
                        kfree(wnames);
                        goto err_out;
                }
-               old_fid = fid;
                i += l;
                clone = 0;
        }
index a8f512b..d0833fa 100644 (file)
@@ -58,8 +58,21 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
  */
 static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file)
 {
+       struct inode *inode = file_inode(file);
+       struct v9fs_inode *v9inode = V9FS_I(inode);
        struct p9_fid *fid = file->private_data;
 
+       BUG_ON(!fid);
+
+       /* We might need to read from a fid that was opened write-only
+        * for a read-modify-write of the page cache; use the writeback
+        * fid for that. */
+       if (rreq->origin == NETFS_READ_FOR_WRITE &&
+                       (fid->mode & O_ACCMODE) == O_WRONLY) {
+               fid = v9inode->writeback_fid;
+               BUG_ON(!fid);
+       }
+
        refcount_inc(&fid->count);
        rreq->netfs_priv = fid;
        return 0;
index 419d2f3..3d82977 100644 (file)
@@ -1251,15 +1251,15 @@ static const char *v9fs_vfs_get_link(struct dentry *dentry,
                return ERR_PTR(-ECHILD);
 
        v9ses = v9fs_dentry2v9ses(dentry);
-       fid = v9fs_fid_lookup(dentry);
+       if (!v9fs_proto_dotu(v9ses))
+               return ERR_PTR(-EBADF);
+
        p9_debug(P9_DEBUG_VFS, "%pd\n", dentry);
+       fid = v9fs_fid_lookup(dentry);
 
        if (IS_ERR(fid))
                return ERR_CAST(fid);
 
-       if (!v9fs_proto_dotu(v9ses))
-               return ERR_PTR(-EBADF);
-
        st = p9_client_stat(fid);
        p9_client_clunk(fid);
        if (IS_ERR(st))
index d17502a..b6eb116 100644 (file)
@@ -274,6 +274,7 @@ v9fs_vfs_atomic_open_dotl(struct inode *dir, struct dentry *dentry,
        if (IS_ERR(ofid)) {
                err = PTR_ERR(ofid);
                p9_debug(P9_DEBUG_VFS, "p9_client_walk failed %d\n", err);
+               p9_client_clunk(dfid);
                goto out;
        }
 
@@ -285,6 +286,7 @@ v9fs_vfs_atomic_open_dotl(struct inode *dir, struct dentry *dentry,
        if (err) {
                p9_debug(P9_DEBUG_VFS, "Failed to get acl values in creat %d\n",
                         err);
+               p9_client_clunk(dfid);
                goto error;
        }
        err = p9_client_create_dotl(ofid, name, v9fs_open_to_dotl_flags(flags),
@@ -292,6 +294,7 @@ v9fs_vfs_atomic_open_dotl(struct inode *dir, struct dentry *dentry,
        if (err < 0) {
                p9_debug(P9_DEBUG_VFS, "p9_client_open_dotl failed in creat %d\n",
                         err);
+               p9_client_clunk(dfid);
                goto error;
        }
        v9fs_invalidate_inode_attr(dir);
index 89630ac..64dab70 100644 (file)
@@ -745,7 +745,8 @@ int afs_getattr(struct user_namespace *mnt_userns, const struct path *path,
 
        _enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation);
 
-       if (!(query_flags & AT_STATX_DONT_SYNC) &&
+       if (vnode->volume &&
+           !(query_flags & AT_STATX_DONT_SYNC) &&
            !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) {
                key = afs_request_key(vnode->volume->cell);
                if (IS_ERR(key))
index 3ac668a..35e0e86 100644 (file)
@@ -104,6 +104,7 @@ struct btrfs_block_group {
        unsigned int relocating_repair:1;
        unsigned int chunk_item_inserted:1;
        unsigned int zone_is_active:1;
+       unsigned int zoned_data_reloc_ongoing:1;
 
        int disk_cache_state;
 
index 0e49b1a..415bf18 100644 (file)
@@ -1330,6 +1330,8 @@ struct btrfs_replace_extent_info {
         * existing extent into a file range.
         */
        bool is_new_extent;
+       /* Indicate if we should update the inode's mtime and ctime. */
+       bool update_times;
        /* Meaningful only if is_new_extent is true. */
        int qgroup_reserved;
        /*
index 89e94ea..4ba005c 100644 (file)
@@ -4632,6 +4632,17 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
        int ret;
 
        set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);
+
+       /*
+        * We may have the reclaim task running and relocating a data block group,
+        * in which case it may create delayed iputs. So stop it before we park
+        * the cleaner kthread otherwise we can get new delayed iputs after
+        * parking the cleaner, and that can make the async reclaim task hang
+        * if it's waiting for delayed iputs to complete, since the cleaner is
+        * parked and cannot run delayed iputs - this will make us hang when
+        * trying to stop the async reclaim task.
+        */
+       cancel_work_sync(&fs_info->reclaim_bgs_work);
        /*
         * We don't want the cleaner to start new transactions, add more delayed
         * iputs, etc. while we're closing. We can't use kthread_stop() yet
@@ -4672,8 +4683,6 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
        cancel_work_sync(&fs_info->async_data_reclaim_work);
        cancel_work_sync(&fs_info->preempt_reclaim_work);
 
-       cancel_work_sync(&fs_info->reclaim_bgs_work);
-
        /* Cancel or finish ongoing discard work */
        btrfs_discard_cleanup(fs_info);
 
index 0867c5c..4157ecc 100644 (file)
@@ -3832,7 +3832,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
               block_group->start == fs_info->data_reloc_bg ||
               fs_info->data_reloc_bg == 0);
 
-       if (block_group->ro) {
+       if (block_group->ro || block_group->zoned_data_reloc_ongoing) {
                ret = 1;
                goto out;
        }
@@ -3894,8 +3894,24 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 out:
        if (ret && ffe_ctl->for_treelog)
                fs_info->treelog_bg = 0;
-       if (ret && ffe_ctl->for_data_reloc)
+       if (ret && ffe_ctl->for_data_reloc &&
+           fs_info->data_reloc_bg == block_group->start) {
+               /*
+                * Do not allow further allocations from this block group.
+                * Compared to increasing the ->ro, setting the
+                * ->zoned_data_reloc_ongoing flag still allows nocow
+                * writers to come in. See btrfs_inc_nocow_writers().
+                *
+                * We need to disable allocations here to avoid allocating a
+                * regular (non-relocation data) extent. With a mix of
+                * relocation extents and regular extents, we can dispatch
+                * WRITE commands (for relocation extents) and ZONE APPEND
+                * commands (for regular extents) at the same time to the same
+                * zone, which would easily break the write pointer.
+                */
+               block_group->zoned_data_reloc_ongoing = 1;
                fs_info->data_reloc_bg = 0;
+       }
        spin_unlock(&fs_info->relocation_bg_lock);
        spin_unlock(&fs_info->treelog_bg_lock);
        spin_unlock(&block_group->lock);
index 8f6b544..04e3634 100644 (file)
@@ -5241,13 +5241,14 @@ int extent_writepages(struct address_space *mapping,
         */
        btrfs_zoned_data_reloc_lock(BTRFS_I(inode));
        ret = extent_write_cache_pages(mapping, wbc, &epd);
-       btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
        ASSERT(ret <= 0);
        if (ret < 0) {
+               btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
                end_write_bio(&epd, ret);
                return ret;
        }
        flush_write_bio(&epd);
+       btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
        return ret;
 }
 
index 1fd827b..9dfde1a 100644 (file)
@@ -2323,25 +2323,62 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
         */
        btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP);
 
-       if (ret != BTRFS_NO_LOG_SYNC) {
+       if (ret == BTRFS_NO_LOG_SYNC) {
+               ret = btrfs_end_transaction(trans);
+               goto out;
+       }
+
+       /* We successfully logged the inode, attempt to sync the log. */
+       if (!ret) {
+               ret = btrfs_sync_log(trans, root, &ctx);
                if (!ret) {
-                       ret = btrfs_sync_log(trans, root, &ctx);
-                       if (!ret) {
-                               ret = btrfs_end_transaction(trans);
-                               goto out;
-                       }
-               }
-               if (!full_sync) {
-                       ret = btrfs_wait_ordered_range(inode, start, len);
-                       if (ret) {
-                               btrfs_end_transaction(trans);
-                               goto out;
-                       }
+                       ret = btrfs_end_transaction(trans);
+                       goto out;
                }
-               ret = btrfs_commit_transaction(trans);
-       } else {
+       }
+
+       /*
+        * At this point we need to commit the transaction because we had
+        * btrfs_need_log_full_commit() or some other error.
+        *
+        * If we didn't do a full sync we have to stop the trans handle, wait on
+        * the ordered extents, start it again and commit the transaction.  If
+        * we attempt to wait on the ordered extents here we could deadlock with
+        * something like fallocate() that is holding the extent lock trying to
+        * start a transaction while some other thread is trying to commit the
+        * transaction while we (fsync) are currently holding the transaction
+        * open.
+        */
+       if (!full_sync) {
                ret = btrfs_end_transaction(trans);
+               if (ret)
+                       goto out;
+               ret = btrfs_wait_ordered_range(inode, start, len);
+               if (ret)
+                       goto out;
+
+               /*
+                * This is safe to use here because we're only interested in
+                * making sure the transaction that had the ordered extents is
+                * committed.  We aren't waiting on anything past this point,
+                * we're purely getting the transaction and committing it.
+                */
+               trans = btrfs_attach_transaction_barrier(root);
+               if (IS_ERR(trans)) {
+                       ret = PTR_ERR(trans);
+
+                       /*
+                        * We committed the transaction and there's no currently
+                        * running transaction, this means everything we care
+                        * about made it to disk and we are done.
+                        */
+                       if (ret == -ENOENT)
+                               ret = 0;
+                       goto out;
+               }
        }
+
+       ret = btrfs_commit_transaction(trans);
 out:
        ASSERT(list_empty(&ctx.list));
        err = file_check_and_advance_wb_err(file);
@@ -2719,7 +2756,8 @@ int btrfs_replace_file_extents(struct btrfs_inode *inode,
 
        ret = btrfs_block_rsv_migrate(&fs_info->trans_block_rsv, rsv,
                                      min_size, false);
-       BUG_ON(ret);
+       if (WARN_ON(ret))
+               goto out_trans;
        trans->block_rsv = rsv;
 
        cur_offset = start;
@@ -2803,6 +2841,25 @@ int btrfs_replace_file_extents(struct btrfs_inode *inode,
                        extent_info->file_offset += replace_len;
                }
 
+               /*
+                * We are releasing our handle on the transaction, balance the
+                * dirty pages of the btree inode and flush delayed items, and
+                * then get a new transaction handle, which may now point to a
+                * new transaction in case someone else may have committed the
+                * transaction we used to replace/drop file extent items. So
+                * bump the inode's iversion and update mtime and ctime except
+                * if we are called from a dedupe context. This is because a
+                * power failure/crash may happen after the transaction is
+                * committed and before we finish replacing/dropping all the
+                * file extent items we need.
+                */
+               inode_inc_iversion(&inode->vfs_inode);
+
+               if (!extent_info || extent_info->update_times) {
+                       inode->vfs_inode.i_mtime = current_time(&inode->vfs_inode);
+                       inode->vfs_inode.i_ctime = inode->vfs_inode.i_mtime;
+               }
+
                ret = btrfs_update_inode(trans, root, inode);
                if (ret)
                        break;
@@ -2819,7 +2876,8 @@ int btrfs_replace_file_extents(struct btrfs_inode *inode,
 
                ret = btrfs_block_rsv_migrate(&fs_info->trans_block_rsv,
                                              rsv, min_size, false);
-               BUG_ON(ret);    /* shouldn't happen */
+               if (WARN_ON(ret))
+                       break;
                trans->block_rsv = rsv;
 
                cur_offset = drop_args.drop_end;
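
Swapping BUG_ON() for WARN_ON() plus an error exit follows the usual rule that a recoverable failure should be loud but not fatal: WARN_ON() logs a backtrace and evaluates to the condition, so it can gate the unwind path directly. A tiny sketch (hypothetical helper name):

    #include <linux/bug.h>

    static int reserve_or_bail(int ret)     /* ret from a reservation call */
    {
            /* instead of BUG_ON(ret), which would panic the machine */
            if (WARN_ON(ret))               /* backtrace in dmesg, then unwind */
                    return ret;
            return 0;
    }
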
index 81737ef..05e0c4a 100644 (file)
@@ -3195,6 +3195,8 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
                                                ordered_extent->file_offset,
                                                ordered_extent->file_offset +
                                                logical_len);
+               btrfs_zoned_release_data_reloc_bg(fs_info, ordered_extent->disk_bytenr,
+                                                 ordered_extent->disk_num_bytes);
        } else {
                BUG_ON(root == fs_info->tree_root);
                ret = insert_ordered_extent_file_extent(trans, ordered_extent);
@@ -9897,6 +9899,7 @@ static struct btrfs_trans_handle *insert_prealloc_file_extent(
        extent_info.file_offset = file_offset;
        extent_info.extent_buf = (char *)&stack_fi;
        extent_info.is_new_extent = true;
+       extent_info.update_times = true;
        extent_info.qgroup_reserved = qgroup_released;
        extent_info.insertions = 0;
 
index 313d9d6..33461b4 100644 (file)
@@ -45,7 +45,6 @@ void __btrfs_tree_read_lock(struct extent_buffer *eb, enum btrfs_lock_nesting ne
                start_ns = ktime_get_ns();
 
        down_read_nested(&eb->lock, nest);
-       eb->lock_owner = current->pid;
        trace_btrfs_tree_read_lock(eb, start_ns);
 }
 
@@ -62,7 +61,6 @@ void btrfs_tree_read_lock(struct extent_buffer *eb)
 int btrfs_try_tree_read_lock(struct extent_buffer *eb)
 {
        if (down_read_trylock(&eb->lock)) {
-               eb->lock_owner = current->pid;
                trace_btrfs_try_tree_read_lock(eb);
                return 1;
        }
@@ -90,7 +88,6 @@ int btrfs_try_tree_write_lock(struct extent_buffer *eb)
 void btrfs_tree_read_unlock(struct extent_buffer *eb)
 {
        trace_btrfs_tree_read_unlock(eb);
-       eb->lock_owner = 0;
        up_read(&eb->lock);
 }
 
index c39f8b3..a3549d5 100644 (file)
@@ -344,6 +344,7 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
        int ret;
        const u64 len = olen_aligned;
        u64 last_dest_end = destoff;
+       u64 prev_extent_end = off;
 
        ret = -ENOMEM;
        buf = kvmalloc(fs_info->nodesize, GFP_KERNEL);
@@ -363,7 +364,6 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
        key.offset = off;
 
        while (1) {
-               u64 next_key_min_offset = key.offset + 1;
                struct btrfs_file_extent_item *extent;
                u64 extent_gen;
                int type;
@@ -431,14 +431,21 @@ process_slot:
                 * The first search might have left us at an extent item that
                 * ends before our target range's start, can happen if we have
                 * holes and NO_HOLES feature enabled.
+                *
+                * Subsequent searches may leave us on a file range we have
+                * processed before - this happens due to a race with ordered
+                * extent completion for a file range that is outside our source
+                * range, but that range was part of a file extent item that
+                * also covered a leading part of our source range.
                 */
-               if (key.offset + datal <= off) {
+               if (key.offset + datal <= prev_extent_end) {
                        path->slots[0]++;
                        goto process_slot;
                } else if (key.offset >= off + len) {
                        break;
                }
-               next_key_min_offset = key.offset + datal;
+
+               prev_extent_end = key.offset + datal;
                size = btrfs_item_size(leaf, slot);
                read_extent_buffer(leaf, buf, btrfs_item_ptr_offset(leaf, slot),
                                   size);
@@ -489,6 +496,7 @@ process_slot:
                        clone_info.file_offset = new_key.offset;
                        clone_info.extent_buf = buf;
                        clone_info.is_new_extent = false;
+                       clone_info.update_times = !no_time_update;
                        ret = btrfs_replace_file_extents(BTRFS_I(inode), path,
                                        drop_start, new_key.offset + datal - 1,
                                        &clone_info, &trans);
@@ -550,7 +558,7 @@ process_slot:
                        break;
 
                btrfs_release_path(path);
-               key.offset = next_key_min_offset;
+               key.offset = prev_extent_end;
 
                if (fatal_signal_pending(current)) {
                        ret = -EINTR;
index b1fdc6a..6627dd7 100644 (file)
@@ -763,6 +763,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                                compress_force = false;
                                no_compress++;
                        } else {
+                               btrfs_err(info, "unrecognized compression value %s",
+                                         args[0].from);
                                ret = -EINVAL;
                                goto out;
                        }
@@ -821,8 +823,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                case Opt_thread_pool:
                        ret = match_int(&args[0], &intarg);
                        if (ret) {
+                               btrfs_err(info, "unrecognized thread_pool value %s",
+                                         args[0].from);
                                goto out;
                        } else if (intarg == 0) {
+                               btrfs_err(info, "invalid value 0 for thread_pool");
                                ret = -EINVAL;
                                goto out;
                        }
@@ -883,8 +888,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                        break;
                case Opt_ratio:
                        ret = match_int(&args[0], &intarg);
-                       if (ret)
+                       if (ret) {
+                               btrfs_err(info, "unrecognized metadata_ratio value %s",
+                                         args[0].from);
                                goto out;
+                       }
                        info->metadata_ratio = intarg;
                        btrfs_info(info, "metadata ratio %u",
                                   info->metadata_ratio);
@@ -901,6 +909,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                                btrfs_set_and_info(info, DISCARD_ASYNC,
                                                   "turning on async discard");
                        } else {
+                               btrfs_err(info, "unrecognized discard mode value %s",
+                                         args[0].from);
                                ret = -EINVAL;
                                goto out;
                        }
@@ -933,6 +943,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                                btrfs_set_and_info(info, FREE_SPACE_TREE,
                                                   "enabling free space tree");
                        } else {
+                               btrfs_err(info, "unrecognized space_cache value %s",
+                                         args[0].from);
                                ret = -EINVAL;
                                goto out;
                        }
@@ -1014,8 +1026,12 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                        break;
                case Opt_check_integrity_print_mask:
                        ret = match_int(&args[0], &intarg);
-                       if (ret)
+                       if (ret) {
+                               btrfs_err(info,
+                               "unrecognized check_integrity_print_mask value %s",
+                                       args[0].from);
                                goto out;
+                       }
                        info->check_integrity_print_mask = intarg;
                        btrfs_info(info, "check_integrity_print_mask 0x%x",
                                   info->check_integrity_print_mask);
@@ -1030,13 +1046,15 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                        goto out;
 #endif
                case Opt_fatal_errors:
-                       if (strcmp(args[0].from, "panic") == 0)
+                       if (strcmp(args[0].from, "panic") == 0) {
                                btrfs_set_opt(info->mount_opt,
                                              PANIC_ON_FATAL_ERROR);
-                       else if (strcmp(args[0].from, "bug") == 0)
+                       } else if (strcmp(args[0].from, "bug") == 0) {
                                btrfs_clear_opt(info->mount_opt,
                                              PANIC_ON_FATAL_ERROR);
-                       else {
+                       } else {
+                               btrfs_err(info, "unrecognized fatal_errors value %s",
+                                         args[0].from);
                                ret = -EINVAL;
                                goto out;
                        }
@@ -1044,8 +1062,12 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                case Opt_commit_interval:
                        intarg = 0;
                        ret = match_int(&args[0], &intarg);
-                       if (ret)
+                       if (ret) {
+                               btrfs_err(info, "unrecognized commit_interval value %s",
+                                         args[0].from);
+                               ret = -EINVAL;
                                goto out;
+                       }
                        if (intarg == 0) {
                                btrfs_info(info,
                                           "using default commit interval %us",
@@ -1059,8 +1081,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
                        break;
                case Opt_rescue:
                        ret = parse_rescue_options(info, args[0].from);
-                       if (ret < 0)
+                       if (ret < 0) {
+                               btrfs_err(info, "unrecognized rescue value %s",
+                                         args[0].from);
                                goto out;
+                       }
                        break;
 #ifdef CONFIG_BTRFS_DEBUG
                case Opt_fragment_all:
@@ -1985,6 +2010,14 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data)
        if (ret)
                goto restore;
 
+       /* V1 cache is not supported for subpage mount. */
+       if (fs_info->sectorsize < PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) {
+               btrfs_warn(fs_info,
+       "v1 space cache is not supported for page size %lu with sectorsize %u",
+                          PAGE_SIZE, fs_info->sectorsize);
+               ret = -EINVAL;
+               goto restore;
+       }
        btrfs_remount_begin(fs_info, old_opts, *flags);
        btrfs_resize_thread_pool(fs_info,
                fs_info->thread_pool_size, old_thread_pool_size);
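
For illustration, a minimal userspace model of the new remount check (all values hypothetical): v1 space cache assumes sectorsize == PAGE_SIZE, so a subpage configuration such as 4K sectors on a 64K-page kernel has to be rejected.

#include <assert.h>

/* minimal model of the subpage v1-cache remount check; values hypothetical */
int main(void)
{
        unsigned long page_size = 65536; /* e.g. a 64K-page arm64 kernel */
        unsigned int sectorsize = 4096;  /* subpage mount */
        int space_cache_v1 = 1;          /* the SPACE_CACHE option is set */

        /* this is exactly the combination the kernel now rejects (-EINVAL) */
        assert(sectorsize < page_size && space_cache_v1);
        return 0;
}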
index 11237a9..79e8c8c 100644
@@ -2139,3 +2139,30 @@ bool btrfs_zoned_should_reclaim(struct btrfs_fs_info *fs_info)
        factor = div64_u64(used * 100, total);
        return factor >= fs_info->bg_reclaim_threshold;
 }
+
+void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical,
+                                      u64 length)
+{
+       struct btrfs_block_group *block_group;
+
+       if (!btrfs_is_zoned(fs_info))
+               return;
+
+       block_group = btrfs_lookup_block_group(fs_info, logical);
+       /* It should be called on a previous data relocation block group. */
+       ASSERT(block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA));
+
+       spin_lock(&block_group->lock);
+       if (!block_group->zoned_data_reloc_ongoing)
+               goto out;
+
+       /* All relocation extents are written. */
+       if (block_group->start + block_group->alloc_offset == logical + length) {
+               /* Now, release this block group for further allocations. */
+               block_group->zoned_data_reloc_ongoing = 0;
+       }
+
+out:
+       spin_unlock(&block_group->lock);
+       btrfs_put_block_group(block_group);
+}
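
For illustration, a minimal userspace model of the end-offset test above (all values hypothetical): the block group is released only when the finished extent ends exactly at the allocation pointer, meaning no relocation write is still outstanding.

#include <assert.h>
#include <stdint.h>

int main(void)
{
        /* hypothetical zoned block group: starts at 1 GiB, 8 MiB allocated */
        uint64_t bg_start = 1ULL << 30;
        uint64_t alloc_offset = 8ULL << 20;

        /* the just-finished relocation extent: the last 4 MiB of the group */
        uint64_t logical = bg_start + (4ULL << 20);
        uint64_t length = 4ULL << 20;

        /* all relocation extents written: clear zoned_data_reloc_ongoing */
        assert(bg_start + alloc_offset == logical + length);
        return 0;
}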
index bb1a189..6b2eec9 100644
@@ -77,6 +77,8 @@ void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
 void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg);
 void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info);
 bool btrfs_zoned_should_reclaim(struct btrfs_fs_info *fs_info);
+void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical,
+                                      u64 length);
 #else /* CONFIG_BLK_DEV_ZONED */
 static inline int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
                                     struct blk_zone *zone)
@@ -243,6 +245,9 @@ static inline bool btrfs_zoned_should_reclaim(struct btrfs_fs_info *fs_info)
 {
        return false;
 }
+
+static inline void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info,
+                                                    u64 logical, u64 length) { }
 #endif
 
 static inline bool btrfs_dev_is_sequential(struct btrfs_device *device, u64 pos)
index 38c9303..ac8fd5e 100644
@@ -4377,6 +4377,7 @@ static void flush_dirty_session_caps(struct ceph_mds_session *s)
                ihold(inode);
                dout("flush_dirty_caps %llx.%llx\n", ceph_vinop(inode));
                spin_unlock(&mdsc->cap_dirty_lock);
+               ceph_wait_on_async_create(inode);
                ceph_check_caps(ci, CHECK_CAPS_FLUSH, NULL);
                iput(inode);
                spin_lock(&mdsc->cap_dirty_lock);
index 1dd995e..2cfbac8 100644
@@ -162,6 +162,8 @@ cifs_dump_iface(struct seq_file *m, struct cifs_server_iface *iface)
                seq_printf(m, "\t\tIPv4: %pI4\n", &ipv4->sin_addr);
        else if (iface->sockaddr.ss_family == AF_INET6)
                seq_printf(m, "\t\tIPv6: %pI6\n", &ipv6->sin6_addr);
+       if (!iface->is_active)
+               seq_puts(m, "\t\t[for-cleanup]\n");
 }
 
 static int cifs_debug_files_proc_show(struct seq_file *m, void *v)
@@ -221,6 +223,7 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
        struct TCP_Server_Info *server;
        struct cifs_ses *ses;
        struct cifs_tcon *tcon;
+       struct cifs_server_iface *iface;
        int c, i, j;
 
        seq_puts(m,
@@ -456,11 +459,10 @@ skip_rdma:
                        if (ses->iface_count)
                                seq_printf(m, "\n\n\tServer interfaces: %zu",
                                           ses->iface_count);
-                       for (j = 0; j < ses->iface_count; j++) {
-                               struct cifs_server_iface *iface;
-
-                               iface = &ses->iface_list[j];
-                               seq_printf(m, "\n\t%d)", j+1);
+                       j = 0;
+                       list_for_each_entry(iface, &ses->iface_list,
+                                                iface_head) {
+                               seq_printf(m, "\n\t%d)", ++j);
                                cifs_dump_iface(m, iface);
                                if (is_ses_using_iface(ses, iface))
                                        seq_puts(m, "\t\t[CONNECTED]\n");
index e773716..a643c84 100644
@@ -80,6 +80,9 @@
 #define SMB_DNS_RESOLVE_INTERVAL_MIN     120
 #define SMB_DNS_RESOLVE_INTERVAL_DEFAULT 600
 
+/* smb multichannel query server interfaces interval in seconds */
+#define SMB_INTERFACE_POLL_INTERVAL    600
+
 /* maximum number of PDUs in one compound */
 #define MAX_COMPOUND 5
 
@@ -933,15 +936,67 @@ static inline void cifs_set_net_ns(struct TCP_Server_Info *srv, struct net *net)
 #endif
 
 struct cifs_server_iface {
+       struct list_head iface_head;
+       struct kref refcount;
        size_t speed;
        unsigned int rdma_capable : 1;
        unsigned int rss_capable : 1;
+       unsigned int is_active : 1; /* unset if iface is no longer on the server */
        struct sockaddr_storage sockaddr;
 };
 
+/* release iface when last ref is dropped */
+static inline void
+release_iface(struct kref *ref)
+{
+       struct cifs_server_iface *iface = container_of(ref,
+                                                      struct cifs_server_iface,
+                                                      refcount);
+       list_del_init(&iface->iface_head);
+       kfree(iface);
+}
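
For illustration, a minimal userspace sketch of the refcount lifecycle this enables, with a plain int standing in for struct kref: the list holds one reference from creation, each channel using the iface takes another, and the release callback runs only when the last reference is dropped.

#include <stdio.h>

struct iface { int refcount; };

/* stand-in for kref_put(&iface->refcount, release_iface) */
static void iface_put(struct iface *i)
{
        if (--i->refcount == 0)
                printf("release_iface: unlinked and freed\n");
}

int main(void)
{
        struct iface i = { .refcount = 1 }; /* kref_init() when parsed */

        i.refcount++;  /* kref_get(): a channel starts using this iface */
        iface_put(&i); /* server stops advertising it: list ref dropped */
        iface_put(&i); /* channel drops its ref: freed only now */
        return 0;
}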
+
+/*
+ * Compare two interfaces a and b.
+ * Return 0 if the two are identical.
+ * Return 1 if a ranks higher, comparing speed first, then rdma
+ * capability, then rss capability, then sockaddr.
+ * Return -1 otherwise.
+ */
+static inline int
+iface_cmp(struct cifs_server_iface *a, struct cifs_server_iface *b)
+{
+       int cmp_ret = 0;
+
+       WARN_ON(!a || !b);
+       if (a->speed == b->speed) {
+               if (a->rdma_capable == b->rdma_capable) {
+                       if (a->rss_capable == b->rss_capable) {
+                               cmp_ret = memcmp(&a->sockaddr, &b->sockaddr,
+                                                sizeof(a->sockaddr));
+                               if (!cmp_ret)
+                                       return 0;
+                               else if (cmp_ret > 0)
+                                       return 1;
+                               else
+                                       return -1;
+                       } else if (a->rss_capable > b->rss_capable)
+                               return 1;
+                       else
+                               return -1;
+               } else if (a->rdma_capable > b->rdma_capable)
+                       return 1;
+               else
+                       return -1;
+       } else if (a->speed > b->speed)
+               return 1;
+       else
+               return -1;
+}
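
For illustration, a compilable sketch of the resulting ordering (the sockaddr tie-break is omitted and the speeds are hypothetical): speed dominates, so a faster interface ranks ahead of a slower one even when the slower one is RDMA-capable.

#include <stdio.h>

struct iface { unsigned long long speed; int rdma, rss; };

/* simplified iface_cmp(): speed first, then rdma, then rss */
static int cmp(const struct iface *a, const struct iface *b)
{
        if (a->speed != b->speed)
                return a->speed > b->speed ? 1 : -1;
        if (a->rdma != b->rdma)
                return a->rdma > b->rdma ? 1 : -1;
        if (a->rss != b->rss)
                return a->rss > b->rss ? 1 : -1;
        return 0;
}

int main(void)
{
        struct iface a = { 10000000000ULL, 0, 1 }; /* 10 Gbps, RSS */
        struct iface b = { 1000000000ULL, 1, 0 };  /*  1 Gbps, RDMA */

        printf("%d\n", cmp(&a, &b)); /* prints 1: a sorts first */
        return 0;
}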
+
 struct cifs_chan {
        unsigned int in_reconnect : 1; /* if session setup in progress for this channel */
        struct TCP_Server_Info *server;
+       struct cifs_server_iface *iface; /* interface in use */
        __u8 signkey[SMB3_SIGN_KEY_SIZE];
 };
 
@@ -993,7 +1048,7 @@ struct cifs_ses {
         */
        spinlock_t iface_lock;
        /* ========= begin: protected by iface_lock ======== */
-       struct cifs_server_iface *iface_list;
+       struct list_head iface_list;
        size_t iface_count;
        unsigned long iface_last_update; /* jiffies */
        /* ========= end: protected by iface_lock ======== */
@@ -1203,6 +1258,7 @@ struct cifs_tcon {
 #ifdef CONFIG_CIFS_DFS_UPCALL
        struct list_head ulist; /* cache update list */
 #endif
+       struct delayed_work     query_interfaces; /* query interfaces workqueue job */
 };
 
 /*
index 3b7366e..d59aebe 100644
@@ -636,6 +636,13 @@ cifs_chan_clear_need_reconnect(struct cifs_ses *ses,
 bool
 cifs_chan_needs_reconnect(struct cifs_ses *ses,
                          struct TCP_Server_Info *server);
+bool
+cifs_chan_is_iface_active(struct cifs_ses *ses,
+                         struct TCP_Server_Info *server);
+int
+cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server);
+int
+SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon);
 
 void extract_unc_hostname(const char *unc, const char **h, size_t *len);
 int copy_path_name(char *dst, const char *src);
index 1849e34..fa29c9a 100644
@@ -145,6 +145,25 @@ requeue_resolve:
        return rc;
 }
 
+static void smb2_query_server_interfaces(struct work_struct *work)
+{
+       int rc;
+       struct cifs_tcon *tcon = container_of(work,
+                                       struct cifs_tcon,
+                                       query_interfaces.work);
+
+       /*
+        * query server network interfaces, in case they change
+        */
+       rc = SMB3_request_interfaces(0, tcon);
+       if (rc) {
+               cifs_dbg(FYI, "%s: failed to query server interfaces: %d\n",
+                               __func__, rc);
+       }
+
+       queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+                          (SMB_INTERFACE_POLL_INTERVAL * HZ));
+}
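
The function above is the usual self-rearming delayed-work pattern; a minimal sketch under kernel-context assumptions (not a standalone program, and system_wq stands in for the cifsiod_wq used here):

#include <linux/workqueue.h>

static struct delayed_work poll_work;

static void poll_fn(struct work_struct *work)
{
        /* ... do the periodic query ... */

        /* re-arm: the job reschedules itself until it is cancelled */
        queue_delayed_work(system_wq, &poll_work, 600 * HZ);
}

static void start_poll(void)
{
        INIT_DELAYED_WORK(&poll_work, poll_fn);
        queue_delayed_work(system_wq, &poll_work, 600 * HZ);
}

static void stop_poll(void)
{
        /* waits for a running instance and removes the pending re-arm;
         * this is why cifs_put_tcon() later in this patch cancels the
         * work before freeing the tcon */
        cancel_delayed_work_sync(&poll_work);
}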
 
 static void cifs_resolve_server(struct work_struct *work)
 {
@@ -217,7 +236,7 @@ cifs_mark_tcp_ses_conns_for_reconnect(struct TCP_Server_Info *server,
                                      bool mark_smb_session)
 {
        struct TCP_Server_Info *pserver;
-       struct cifs_ses *ses;
+       struct cifs_ses *ses, *nses;
        struct cifs_tcon *tcon;
 
        /*
@@ -231,7 +250,20 @@ cifs_mark_tcp_ses_conns_for_reconnect(struct TCP_Server_Info *server,
 
 
        spin_lock(&cifs_tcp_ses_lock);
-       list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
+       list_for_each_entry_safe(ses, nses, &pserver->smb_ses_list, smb_ses_list) {
+               /* check if iface is still active */
+               if (!cifs_chan_is_iface_active(ses, server)) {
+                       /*
+                        * HACK: drop the lock before calling
+                        * cifs_chan_update_iface to avoid deadlock
+                        */
+                       ses->ses_count++;
+                       spin_unlock(&cifs_tcp_ses_lock);
+                       cifs_chan_update_iface(ses, server);
+                       spin_lock(&cifs_tcp_ses_lock);
+                       ses->ses_count--;
+               }
+
                spin_lock(&ses->chan_lock);
                if (!mark_smb_session && cifs_chan_needs_reconnect(ses, server))
                        goto next_session;
@@ -1894,9 +1926,11 @@ void cifs_put_smb_ses(struct cifs_ses *ses)
                int i;
 
                for (i = 1; i < chan_count; i++) {
-                       spin_unlock(&ses->chan_lock);
+                       if (ses->chans[i].iface) {
+                               kref_put(&ses->chans[i].iface->refcount, release_iface);
+                               ses->chans[i].iface = NULL;
+                       }
                        cifs_put_tcp_session(ses->chans[i].server, 0);
-                       spin_lock(&ses->chan_lock);
                        ses->chans[i].server = NULL;
                }
        }
@@ -2270,6 +2304,9 @@ cifs_put_tcon(struct cifs_tcon *tcon)
        list_del_init(&tcon->tcon_list);
        spin_unlock(&cifs_tcp_ses_lock);
 
+       /* cancel polling of interfaces */
+       cancel_delayed_work_sync(&tcon->query_interfaces);
+
        if (tcon->use_witness) {
                int rc;
 
@@ -2507,6 +2544,12 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
        tcon->local_lease = ctx->local_lease;
        INIT_LIST_HEAD(&tcon->pending_opens);
 
+       /* schedule query interfaces poll */
+       INIT_DELAYED_WORK(&tcon->query_interfaces,
+                         smb2_query_server_interfaces);
+       queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+                          (SMB_INTERFACE_POLL_INTERVAL * HZ));
+
        spin_lock(&cifs_tcp_ses_lock);
        list_add(&tcon->tcon_list, &ses->tcon_list);
        spin_unlock(&cifs_tcp_ses_lock);
@@ -3982,10 +4025,16 @@ cifs_setup_session(const unsigned int xid, struct cifs_ses *ses,
                   struct nls_table *nls_info)
 {
        int rc = -ENOSYS;
+       struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *)&server->dstaddr;
+       struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
        bool is_binding = false;
 
-
        spin_lock(&cifs_tcp_ses_lock);
+       if (server->dstaddr.ss_family == AF_INET6)
+               scnprintf(ses->ip_addr, sizeof(ses->ip_addr), "%pI6", &addr6->sin6_addr);
+       else
+               scnprintf(ses->ip_addr, sizeof(ses->ip_addr), "%pI4", &addr->sin_addr);
+
        if (ses->ses_status != SES_GOOD &&
            ses->ses_status != SES_NEW &&
            ses->ses_status != SES_NEED_RECON) {
index c69e124..0e84e6f 100644
@@ -75,6 +75,7 @@ sesInfoAlloc(void)
                INIT_LIST_HEAD(&ret_buf->tcon_list);
                mutex_init(&ret_buf->session_mutex);
                spin_lock_init(&ret_buf->iface_lock);
+               INIT_LIST_HEAD(&ret_buf->iface_list);
                spin_lock_init(&ret_buf->chan_lock);
        }
        return ret_buf;
@@ -83,6 +84,8 @@ sesInfoAlloc(void)
 void
 sesInfoFree(struct cifs_ses *buf_to_free)
 {
+       struct cifs_server_iface *iface = NULL, *niface = NULL;
+
        if (buf_to_free == NULL) {
                cifs_dbg(FYI, "Null buffer passed to sesInfoFree\n");
                return;
@@ -96,7 +99,11 @@ sesInfoFree(struct cifs_ses *buf_to_free)
        kfree(buf_to_free->user_name);
        kfree(buf_to_free->domainName);
        kfree_sensitive(buf_to_free->auth_key.response);
-       kfree(buf_to_free->iface_list);
+       spin_lock(&buf_to_free->iface_lock);
+       list_for_each_entry_safe(iface, niface, &buf_to_free->iface_list,
+                                iface_head)
+               kref_put(&iface->refcount, release_iface);
+       spin_unlock(&buf_to_free->iface_lock);
        kfree_sensitive(buf_to_free);
 }
 
index 0bece97..b85718f 100644
@@ -58,7 +58,7 @@ bool is_ses_using_iface(struct cifs_ses *ses, struct cifs_server_iface *iface)
 
        spin_lock(&ses->chan_lock);
        for (i = 0; i < ses->chan_count; i++) {
-               if (is_server_using_iface(ses->chans[i].server, iface)) {
+               if (ses->chans[i].iface == iface) {
                        spin_unlock(&ses->chan_lock);
                        return true;
                }
@@ -81,6 +81,9 @@ cifs_ses_get_chan_index(struct cifs_ses *ses,
        }
 
        /* If we didn't find the channel, it is likely a bug */
+       if (server)
+               cifs_dbg(VFS, "unable to get chan index for server: 0x%llx",
+                        server->conn_id);
        WARN_ON(1);
        return 0;
 }
@@ -143,16 +146,24 @@ cifs_chan_needs_reconnect(struct cifs_ses *ses,
        return CIFS_CHAN_NEEDS_RECONNECT(ses, chan_index);
 }
 
+bool
+cifs_chan_is_iface_active(struct cifs_ses *ses,
+                         struct TCP_Server_Info *server)
+{
+       unsigned int chan_index = cifs_ses_get_chan_index(ses, server);
+
+       return ses->chans[chan_index].iface &&
+               ses->chans[chan_index].iface->is_active;
+}
+
 /* returns number of channels added */
 int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
 {
        int old_chan_count, new_chan_count;
        int left;
-       int i = 0;
        int rc = 0;
        int tries = 0;
-       struct cifs_server_iface *ifaces = NULL;
-       size_t iface_count;
+       struct cifs_server_iface *iface = NULL, *niface = NULL;
 
        spin_lock(&ses->chan_lock);
 
@@ -181,33 +192,17 @@ int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
        }
        spin_unlock(&ses->chan_lock);
 
-       /*
-        * Make a copy of the iface list at the time and use that
-        * instead so as to not hold the iface spinlock for opening
-        * channels
-        */
-       spin_lock(&ses->iface_lock);
-       iface_count = ses->iface_count;
-       if (iface_count <= 0) {
-               spin_unlock(&ses->iface_lock);
-               cifs_dbg(VFS, "no iface list available to open channels\n");
-               return 0;
-       }
-       ifaces = kmemdup(ses->iface_list, iface_count*sizeof(*ifaces),
-                        GFP_ATOMIC);
-       if (!ifaces) {
-               spin_unlock(&ses->iface_lock);
-               return 0;
-       }
-       spin_unlock(&ses->iface_lock);
-
        /*
         * Keep connecting to same, fastest, iface for all channels as
         * long as its RSS. Try next fastest one if not RSS or channel
         * creation fails.
         */
+       spin_lock(&ses->iface_lock);
+       iface = list_first_entry(&ses->iface_list, struct cifs_server_iface,
+                                iface_head);
+       spin_unlock(&ses->iface_lock);
+
        while (left > 0) {
-               struct cifs_server_iface *iface;
 
                tries++;
                if (tries > 3*ses->chan_max) {
@@ -216,30 +211,127 @@ int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
                        break;
                }
 
-               iface = &ifaces[i];
-               if (is_ses_using_iface(ses, iface) && !iface->rss_capable) {
-                       i = (i+1) % iface_count;
-                       continue;
+               spin_lock(&ses->iface_lock);
+               if (!ses->iface_count) {
+                       spin_unlock(&ses->iface_lock);
+                       break;
                }
 
-               rc = cifs_ses_add_channel(cifs_sb, ses, iface);
-               if (rc) {
-                       cifs_dbg(FYI, "failed to open extra channel on iface#%d rc=%d\n",
-                                i, rc);
-                       i = (i+1) % iface_count;
-                       continue;
+               list_for_each_entry_safe_from(iface, niface, &ses->iface_list,
+                                   iface_head) {
+                       /* skip ifaces that are unusable */
+                       if (!iface->is_active ||
+                           (is_ses_using_iface(ses, iface) &&
+                            !iface->rss_capable)) {
+                               continue;
+                       }
+
+                       /* take ref before unlock */
+                       kref_get(&iface->refcount);
+
+                       spin_unlock(&ses->iface_lock);
+                       rc = cifs_ses_add_channel(cifs_sb, ses, iface);
+                       spin_lock(&ses->iface_lock);
+
+                       if (rc) {
+                               cifs_dbg(VFS, "failed to open extra channel on iface:%pIS rc=%d\n",
+                                        &iface->sockaddr,
+                                        rc);
+                               kref_put(&iface->refcount, release_iface);
+                               continue;
+                       }
+
+                       cifs_dbg(FYI, "successfully opened new channel on iface:%pIS\n",
+                                &iface->sockaddr);
+                       break;
                }
+               spin_unlock(&ses->iface_lock);
 
-               cifs_dbg(FYI, "successfully opened new channel on iface#%d\n",
-                        i);
                left--;
                new_chan_count++;
        }
 
-       kfree(ifaces);
        return new_chan_count - old_chan_count;
 }
 
+/*
+ * Update the iface for the channel if necessary.
+ * Returns 0 when the iface is updated, 1 when no update was needed
+ * or no suitable replacement was found. Takes chan_lock and iface_lock
+ * internally, so call it without either lock held.
+ */
+int
+cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+{
+       unsigned int chan_index;
+       struct cifs_server_iface *iface = NULL;
+       struct cifs_server_iface *old_iface = NULL;
+       int rc = 0;
+
+       spin_lock(&ses->chan_lock);
+       chan_index = cifs_ses_get_chan_index(ses, server);
+       if (!chan_index) {
+               spin_unlock(&ses->chan_lock);
+               return 0;
+       }
+
+       if (ses->chans[chan_index].iface) {
+               old_iface = ses->chans[chan_index].iface;
+               if (old_iface->is_active) {
+                       spin_unlock(&ses->chan_lock);
+                       return 1;
+               }
+       }
+       spin_unlock(&ses->chan_lock);
+
+       spin_lock(&ses->iface_lock);
+       /* then look for a new one */
+       list_for_each_entry(iface, &ses->iface_list, iface_head) {
+               if (!iface->is_active ||
+                   (is_ses_using_iface(ses, iface) &&
+                    !iface->rss_capable)) {
+                       continue;
+               }
+               kref_get(&iface->refcount);
+               break;
+       }
+
+       if (list_entry_is_head(iface, &ses->iface_list, iface_head)) {
+               rc = 1;
+               iface = NULL;
+               cifs_dbg(FYI, "unable to find a suitable iface\n");
+       }
+
+       /* now drop the ref to the current iface */
+       if (old_iface && iface) {
+               kref_put(&old_iface->refcount, release_iface);
+               cifs_dbg(FYI, "replacing iface: %pIS with %pIS\n",
+                        &old_iface->sockaddr,
+                        &iface->sockaddr);
+       } else if (old_iface) {
+               kref_put(&old_iface->refcount, release_iface);
+               cifs_dbg(FYI, "releasing ref to iface: %pIS\n",
+                        &old_iface->sockaddr);
+       } else {
+               WARN_ON(!iface);
+               cifs_dbg(FYI, "adding new iface: %pIS\n", &iface->sockaddr);
+       }
+       spin_unlock(&ses->iface_lock);
+
+       spin_lock(&ses->chan_lock);
+       chan_index = cifs_ses_get_chan_index(ses, server);
+       ses->chans[chan_index].iface = iface;
+
+       /* No iface found. If this is a secondary chan, drop the connection */
+       if (!iface && CIFS_SERVER_IS_CHAN(server))
+               ses->chans[chan_index].server = NULL;
+
+       spin_unlock(&ses->chan_lock);
+
+       if (!iface && CIFS_SERVER_IS_CHAN(server))
+               cifs_put_tcp_session(server, false);
+
+       return rc;
+}
+
 /*
  * If server is a channel of ses, return the corresponding enclosing
  * cifs_chan otherwise return NULL.
@@ -352,6 +444,7 @@ cifs_ses_add_channel(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses,
                spin_unlock(&ses->chan_lock);
                goto out;
        }
+       chan->iface = iface;
        ses->chan_count++;
        atomic_set(&ses->chan_seq, 0);
 
index 8543caf..8802995 100644
@@ -512,73 +512,41 @@ smb3_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
 static int
 parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
                        size_t buf_len,
-                       struct cifs_server_iface **iface_list,
-                       size_t *iface_count)
+                       struct cifs_ses *ses)
 {
        struct network_interface_info_ioctl_rsp *p;
        struct sockaddr_in *addr4;
        struct sockaddr_in6 *addr6;
        struct iface_info_ipv4 *p4;
        struct iface_info_ipv6 *p6;
-       struct cifs_server_iface *info;
+       struct cifs_server_iface *info = NULL, *iface = NULL, *niface = NULL;
+       struct cifs_server_iface tmp_iface;
        ssize_t bytes_left;
        size_t next = 0;
        int nb_iface = 0;
-       int rc = 0;
-
-       *iface_list = NULL;
-       *iface_count = 0;
-
-       /*
-        * Fist pass: count and sanity check
-        */
+       int rc = 0, ret = 0;
 
        bytes_left = buf_len;
        p = buf;
-       while (bytes_left >= sizeof(*p)) {
-               nb_iface++;
-               next = le32_to_cpu(p->Next);
-               if (!next) {
-                       bytes_left -= sizeof(*p);
-                       break;
-               }
-               p = (struct network_interface_info_ioctl_rsp *)((u8 *)p+next);
-               bytes_left -= next;
-       }
-
-       if (!nb_iface) {
-               cifs_dbg(VFS, "%s: malformed interface info\n", __func__);
-               rc = -EINVAL;
-               goto out;
-       }
-
-       /* Azure rounds the buffer size up 8, to a 16 byte boundary */
-       if ((bytes_left > 8) || p->Next)
-               cifs_dbg(VFS, "%s: incomplete interface info\n", __func__);
-
 
+       spin_lock(&ses->iface_lock);
        /*
-        * Second pass: extract info to internal structure
+        * Go through iface_list and do kref_put to remove
+        * any unused ifaces. Ifaces in use will be removed
+        * when the last user calls kref_put on them.
         */
-
-       *iface_list = kcalloc(nb_iface, sizeof(**iface_list), GFP_KERNEL);
-       if (!*iface_list) {
-               rc = -ENOMEM;
-               goto out;
+       list_for_each_entry_safe(iface, niface, &ses->iface_list,
+                                iface_head) {
+               iface->is_active = 0;
+               kref_put(&iface->refcount, release_iface);
        }
+       spin_unlock(&ses->iface_lock);
 
-       info = *iface_list;
-       bytes_left = buf_len;
-       p = buf;
        while (bytes_left >= sizeof(*p)) {
-               info->speed = le64_to_cpu(p->LinkSpeed);
-               info->rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;
-               info->rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 1 : 0;
-
-               cifs_dbg(FYI, "%s: adding iface %zu\n", __func__, *iface_count);
-               cifs_dbg(FYI, "%s: speed %zu bps\n", __func__, info->speed);
-               cifs_dbg(FYI, "%s: capabilities 0x%08x\n", __func__,
-                        le32_to_cpu(p->Capability));
+               memset(&tmp_iface, 0, sizeof(tmp_iface));
+               tmp_iface.speed = le64_to_cpu(p->LinkSpeed);
+               tmp_iface.rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;
+               tmp_iface.rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 1 : 0;
 
                switch (p->Family) {
                /*
@@ -587,7 +555,7 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
                 * conversion explicit in case either one changes.
                 */
                case INTERNETWORK:
-                       addr4 = (struct sockaddr_in *)&info->sockaddr;
+                       addr4 = (struct sockaddr_in *)&tmp_iface.sockaddr;
                        p4 = (struct iface_info_ipv4 *)p->Buffer;
                        addr4->sin_family = AF_INET;
                        memcpy(&addr4->sin_addr, &p4->IPv4Address, 4);
@@ -599,7 +567,7 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
                                 &addr4->sin_addr);
                        break;
                case INTERNETWORKV6:
-                       addr6 = (struct sockaddr_in6 *)&info->sockaddr;
+                       addr6 = (struct sockaddr_in6 *)&tmp_iface.sockaddr;
                        p6 = (struct iface_info_ipv6 *)p->Buffer;
                        addr6->sin6_family = AF_INET6;
                        memcpy(&addr6->sin6_addr, &p6->IPv6Address, 16);
@@ -619,46 +587,96 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
                        goto next_iface;
                }
 
-               (*iface_count)++;
-               info++;
+               /*
+                * The iface_list is assumed to be sorted by speed.
+                * Check if the new interface exists in that list.
+                * NEVER change an existing iface; it could be in use.
+                * Add a new entry instead.
+                */
+               spin_lock(&ses->iface_lock);
+               iface = niface = NULL;
+               list_for_each_entry_safe(iface, niface, &ses->iface_list,
+                                        iface_head) {
+                       ret = iface_cmp(iface, &tmp_iface);
+                       if (!ret) {
+                               /* just get a ref so that it doesn't get picked/freed */
+                               iface->is_active = 1;
+                               kref_get(&iface->refcount);
+                               spin_unlock(&ses->iface_lock);
+                               goto next_iface;
+                       } else if (ret < 0) {
+                               /* all remaining ifaces are slower */
+                               kref_get(&iface->refcount);
+                               break;
+                       }
+               }
+               spin_unlock(&ses->iface_lock);
+
+               /* no match. insert the entry in the list */
+               info = kmalloc(sizeof(struct cifs_server_iface),
+                              GFP_KERNEL);
+               if (!info) {
+                       rc = -ENOMEM;
+                       goto out;
+               }
+               memcpy(info, &tmp_iface, sizeof(tmp_iface));
+
+               /* add this new entry to the list */
+               kref_init(&info->refcount);
+               info->is_active = 1;
+
+               cifs_dbg(FYI, "%s: adding iface %zu\n", __func__, ses->iface_count);
+               cifs_dbg(FYI, "%s: speed %zu bps\n", __func__, info->speed);
+               cifs_dbg(FYI, "%s: capabilities 0x%08x\n", __func__,
+                        le32_to_cpu(p->Capability));
+
+               spin_lock(&ses->iface_lock);
+               if (!list_entry_is_head(iface, &ses->iface_list, iface_head)) {
+                       list_add_tail(&info->iface_head, &iface->iface_head);
+                       kref_put(&iface->refcount, release_iface);
+               } else
+                       list_add_tail(&info->iface_head, &ses->iface_list);
+               spin_unlock(&ses->iface_lock);
+
+               ses->iface_count++;
+               ses->iface_last_update = jiffies;
 next_iface:
+               nb_iface++;
                next = le32_to_cpu(p->Next);
-               if (!next)
+               if (!next) {
+                       bytes_left -= sizeof(*p);
                        break;
+               }
                p = (struct network_interface_info_ioctl_rsp *)((u8 *)p+next);
                bytes_left -= next;
        }
 
-       if (!*iface_count) {
+       if (!nb_iface) {
+               cifs_dbg(VFS, "%s: malformed interface info\n", __func__);
                rc = -EINVAL;
                goto out;
        }
 
-out:
-       if (rc) {
-               kfree(*iface_list);
-               *iface_count = 0;
-               *iface_list = NULL;
-       }
-       return rc;
-}
+       /* Azure rounds the buffer size up by 8, to a 16-byte boundary */
+       if ((bytes_left > 8) || p->Next)
+               cifs_dbg(VFS, "%s: incomplete interface info\n", __func__);
 
-static int compare_iface(const void *ia, const void *ib)
-{
-       const struct cifs_server_iface *a = (struct cifs_server_iface *)ia;
-       const struct cifs_server_iface *b = (struct cifs_server_iface *)ib;
 
-       return a->speed == b->speed ? 0 : (a->speed > b->speed ? -1 : 1);
+       if (!ses->iface_count) {
+               rc = -EINVAL;
+               goto out;
+       }
+
+out:
+       return rc;
 }
 
-static int
+int
 SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
 {
        int rc;
        unsigned int ret_data_len = 0;
        struct network_interface_info_ioctl_rsp *out_buf = NULL;
-       struct cifs_server_iface *iface_list;
-       size_t iface_count;
        struct cifs_ses *ses = tcon->ses;
 
        rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
@@ -674,21 +692,10 @@ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
                goto out;
        }
 
-       rc = parse_server_interfaces(out_buf, ret_data_len,
-                                    &iface_list, &iface_count);
+       rc = parse_server_interfaces(out_buf, ret_data_len, ses);
        if (rc)
                goto out;
 
-       /* sort interfaces from fastest to slowest */
-       sort(iface_list, iface_count, sizeof(*iface_list), compare_iface, NULL);
-
-       spin_lock(&ses->iface_lock);
-       kfree(ses->iface_list);
-       ses->iface_list = iface_list;
-       ses->iface_count = iface_count;
-       ses->iface_last_update = jiffies;
-       spin_unlock(&ses->iface_lock);
-
 out:
        kfree(out_buf);
        return rc;
index eaf975f..12b4ddd 100644
@@ -543,6 +543,7 @@ assemble_neg_contexts(struct smb2_negotiate_req *req,
                      struct TCP_Server_Info *server, unsigned int *total_len)
 {
        char *pneg_ctxt;
+       char *hostname = NULL;
        unsigned int ctxt_len, neg_context_count;
 
        if (*total_len > 200) {
@@ -570,16 +571,24 @@ assemble_neg_contexts(struct smb2_negotiate_req *req,
        *total_len += ctxt_len;
        pneg_ctxt += ctxt_len;
 
-       ctxt_len = build_netname_ctxt((struct smb2_netname_neg_context *)pneg_ctxt,
-                                       server->hostname);
-       *total_len += ctxt_len;
-       pneg_ctxt += ctxt_len;
-
        build_posix_ctxt((struct smb2_posix_neg_context *)pneg_ctxt);
        *total_len += sizeof(struct smb2_posix_neg_context);
        pneg_ctxt += sizeof(struct smb2_posix_neg_context);
 
-       neg_context_count = 4;
+       /*
+        * Secondary channels don't have the hostname field populated;
+        * use the hostname field of the primary channel instead.
+        */
+       hostname = CIFS_SERVER_IS_CHAN(server) ?
+               server->primary_server->hostname : server->hostname;
+       if (hostname && (hostname[0] != 0)) {
+               ctxt_len = build_netname_ctxt((struct smb2_netname_neg_context *)pneg_ctxt,
+                                             hostname);
+               *total_len += ctxt_len;
+               pneg_ctxt += ctxt_len;
+               neg_context_count = 4;
+       } else /* secondary channels do not have a hostname */
+               neg_context_count = 3;
 
        if (server->compress_algorithm) {
                build_compression_ctxt((struct smb2_compression_capabilities_context *)
@@ -5154,6 +5163,8 @@ SMB2_set_eof(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
        data = &info;
        size = sizeof(struct smb2_file_eof_info);
 
+       trace_smb3_set_eof(xid, persistent_fid, tcon->tid, tcon->ses->Suid, le64_to_cpu(*eof));
+
        return send_set_info(xid, tcon, persistent_fid, volatile_fid,
                        pid, FILE_END_OF_FILE_INFORMATION, SMB2_O_INFO_FILE,
                        0, 1, &data, &size);
index 2be5e0c..6b88dc2 100644
@@ -121,6 +121,44 @@ DEFINE_SMB3_RW_DONE_EVENT(query_dir_done);
 DEFINE_SMB3_RW_DONE_EVENT(zero_done);
 DEFINE_SMB3_RW_DONE_EVENT(falloc_done);
 
+/* For logging successful set EOF (truncate) */
+DECLARE_EVENT_CLASS(smb3_eof_class,
+       TP_PROTO(unsigned int xid,
+               __u64   fid,
+               __u32   tid,
+               __u64   sesid,
+               __u64   offset),
+       TP_ARGS(xid, fid, tid, sesid, offset),
+       TP_STRUCT__entry(
+               __field(unsigned int, xid)
+               __field(__u64, fid)
+               __field(__u32, tid)
+               __field(__u64, sesid)
+               __field(__u64, offset)
+       ),
+       TP_fast_assign(
+               __entry->xid = xid;
+               __entry->fid = fid;
+               __entry->tid = tid;
+               __entry->sesid = sesid;
+               __entry->offset = offset;
+       ),
+       TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx",
+               __entry->xid, __entry->sesid, __entry->tid, __entry->fid,
+               __entry->offset)
+)
+
+#define DEFINE_SMB3_EOF_EVENT(name)         \
+DEFINE_EVENT(smb3_eof_class, smb3_##name,   \
+       TP_PROTO(unsigned int xid,              \
+               __u64   fid,                    \
+               __u32   tid,                    \
+               __u64   sesid,                  \
+               __u64   offset),                \
+       TP_ARGS(xid, fid, tid, sesid, offset))
+
+DEFINE_SMB3_EOF_EVENT(set_eof);
+
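
Once defined, the generated tracepoint is invoked like an ordinary function; the SMB2_set_eof() hunk earlier in this patch is its one caller:

        trace_smb3_set_eof(xid, persistent_fid, tcon->tid, tcon->ses->Suid,
                           le64_to_cpu(*eof));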
 /*
  * For handle based calls other than read and write, and get/set info
  */
index 76acc37..c6eaf7e 100644
@@ -1198,7 +1198,9 @@ static int __exfat_rename(struct inode *old_parent_inode,
                return -ENOENT;
        }
 
-       exfat_chain_dup(&olddir, &ei->dir);
+       exfat_chain_set(&olddir, EXFAT_I(old_parent_inode)->start_clu,
+               EXFAT_B_TO_CLU_ROUND_UP(i_size_read(old_parent_inode), sbi),
+               EXFAT_I(old_parent_inode)->flags);
        dentry = ei->entry;
 
        ep = exfat_get_dentry(sb, &olddir, dentry, &old_bh);
index 2c2f179..43de293 100644
@@ -672,17 +672,14 @@ int ext2_empty_dir (struct inode * inode)
        void *page_addr = NULL;
        struct page *page = NULL;
        unsigned long i, npages = dir_pages(inode);
-       int dir_has_error = 0;
 
        for (i = 0; i < npages; i++) {
                char *kaddr;
                ext2_dirent * de;
-               page = ext2_get_page(inode, i, dir_has_error, &page_addr);
+               page = ext2_get_page(inode, i, 0, &page_addr);
 
-               if (IS_ERR(page)) {
-                       dir_has_error = 1;
-                       continue;
-               }
+               if (IS_ERR(page))
+                       goto not_empty;
 
                kaddr = page_addr;
                de = (ext2_dirent *)kaddr;
index 3dce7d0..84c0eb5 100644
@@ -829,7 +829,7 @@ int ext4_get_block_unwritten(struct inode *inode, sector_t iblock,
        ext4_debug("ext4_get_block_unwritten: inode %lu, create flag %d\n",
                   inode->i_ino, create);
        return _ext4_get_block(inode, iblock, bh_result,
-                              EXT4_GET_BLOCKS_IO_CREATE_EXT);
+                              EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT);
 }
 
 /* Maximum number of blocks we map for direct IO at once. */
index 9f12f29..9e06334 100644
@@ -4104,6 +4104,15 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
        size = size >> bsbits;
        start = start_off >> bsbits;
 
+       /*
+        * For tiny groups (smaller than 8MB) the chosen allocation
+        * alignment may be larger than the group size. Make sure the
+        * alignment does not move the allocation to a different group,
+        * which would make mballoc fail assertions later.
+        */
+       start = max(start, rounddown(ac->ac_o_ex.fe_logical,
+                       (ext4_lblk_t)EXT4_BLOCKS_PER_GROUP(ac->ac_sb)));
+
        /* don't cover already allocated blocks in selected range */
        if (ar->pleft && start <= ar->lleft) {
                size -= ar->lleft + 1 - start;
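
A small userspace check of the new clamp (group size and goal block hypothetical): rounddown() pins the normalized start to the beginning of the goal's own block group, so the preallocation cannot drift into a neighbouring group.

#include <assert.h>

/* same semantics as the kernel's rounddown() for these operands */
static unsigned int rounddown_u(unsigned int x, unsigned int m)
{
        return x - (x % m);
}

int main(void)
{
        unsigned int blocks_per_group = 2048; /* hypothetical tiny group */
        unsigned int fe_logical = 5000;       /* goal in group 2 (4096..6143) */
        unsigned int start = 0;               /* huge alignment rounded to 0 */

        /* start = max(start, rounddown(fe_logical, blocks_per_group)) */
        if (start < rounddown_u(fe_logical, blocks_per_group))
                start = rounddown_u(fe_logical, blocks_per_group);

        assert(start == 4096); /* the allocation stays in the goal's group */
        return 0;
}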
@@ -4176,7 +4185,22 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
        }
        rcu_read_unlock();
 
-       if (start + size <= ac->ac_o_ex.fe_logical &&
+       /*
+        * In this function "start" and "size" are normalized for better
+        * alignment and length such that we could preallocate more blocks.
+        * This normalization is done so that the original request of
+        * ac->ac_o_ex.fe_logical & fe_len always lies within the "start" and
+        * "size" boundaries.
+        * (Note that fe_len can be relaxed, since the FS block allocation API
+        * does not guarantee the number of contiguous blocks allocated; that
+        * depends on the free space left, etc.)
+        * In the case of an inode pa, we later use the allocated blocks
+        * [pa_start + fe_logical - pa_lstart, fe_len/size] from the preallocated
+        * range of goal/best blocks [start, size] to place them at the
+        * ac_o_ex.fe_logical extent of this inode.
+        * (See ext4_mb_use_inode_pa() for more details.)
+        */
+       if (start + size <= ac->ac_o_ex.fe_logical ||
                        start > ac->ac_o_ex.fe_logical) {
                ext4_msg(ac->ac_sb, KERN_ERR,
                         "start %lu, size %lu, fe_logical %lu",
index 7a5353a..42f5905 100644
@@ -438,7 +438,7 @@ int ext4_ext_migrate(struct inode *inode)
 
        /*
         * Worst case we can touch the allocation bitmaps and a block
-        * group descriptor block.  We do need need to worry about
+        * group descriptor block.  We do need to worry about
         * credits for modifying the quota inode.
         */
        handle = ext4_journal_start(inode, EXT4_HT_MIGRATE,
index 47d0ca4..db4ba99 100644
@@ -1929,7 +1929,8 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
                        struct dx_hash_info *hinfo)
 {
        unsigned blocksize = dir->i_sb->s_blocksize;
-       unsigned count, continued;
+       unsigned continued;
+       int count;
        struct buffer_head *bh2;
        ext4_lblk_t newblock;
        u32 hash2;
index 14695e2..97fa7b4 100644
@@ -465,7 +465,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
        /*
         * In the first loop we prepare and mark buffers to submit. We have to
         * mark all buffers in the page before submitting so that
-        * end_page_writeback() cannot be called from ext4_bio_end_io() when IO
+        * end_page_writeback() cannot be called from ext4_end_bio() when IO
         * on the first buffer finishes and we are still working on submitting
         * the second buffer.
         */
index 90a941d..8b70a47 100644
@@ -53,6 +53,16 @@ int ext4_resize_begin(struct super_block *sb)
        if (!capable(CAP_SYS_RESOURCE))
                return -EPERM;
 
+       /*
+        * If the reserved GDT block count is non-zero, the resize_inode
+        * feature should always be set.
+        */
+       if (EXT4_SB(sb)->s_es->s_reserved_gdt_blocks &&
+           !ext4_has_feature_resize_inode(sb)) {
+               ext4_error(sb, "resize_inode disabled but reserved GDT blocks non-zero");
+               return -EFSCORRUPTED;
+       }
+
        /*
         * If we are not using the primary superblock/GDT copy don't resize,
          * because the user tools have no way of handling this.  Probably a
index 450c918..845f2f8 100644
@@ -87,7 +87,7 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb,
 static int ext4_validate_options(struct fs_context *fc);
 static int ext4_check_opt_consistency(struct fs_context *fc,
                                      struct super_block *sb);
-static int ext4_apply_options(struct fs_context *fc, struct super_block *sb);
+static void ext4_apply_options(struct fs_context *fc, struct super_block *sb);
 static int ext4_parse_param(struct fs_context *fc, struct fs_parameter *param);
 static int ext4_get_tree(struct fs_context *fc);
 static int ext4_reconfigure(struct fs_context *fc);
@@ -1870,31 +1870,12 @@ ext4_sb_read_encoding(const struct ext4_super_block *es)
 }
 #endif
 
-static int ext4_set_test_dummy_encryption(struct super_block *sb, char *arg)
-{
-#ifdef CONFIG_FS_ENCRYPTION
-       struct ext4_sb_info *sbi = EXT4_SB(sb);
-       int err;
-
-       err = fscrypt_set_test_dummy_encryption(sb, arg,
-                                               &sbi->s_dummy_enc_policy);
-       if (err) {
-               ext4_msg(sb, KERN_WARNING,
-                        "Error while setting test dummy encryption [%d]", err);
-               return err;
-       }
-       ext4_msg(sb, KERN_WARNING, "Test dummy encryption mode enabled");
-#endif
-       return 0;
-}
-
 #define EXT4_SPEC_JQUOTA                       (1 <<  0)
 #define EXT4_SPEC_JQFMT                                (1 <<  1)
 #define EXT4_SPEC_DATAJ                                (1 <<  2)
 #define EXT4_SPEC_SB_BLOCK                     (1 <<  3)
 #define EXT4_SPEC_JOURNAL_DEV                  (1 <<  4)
 #define EXT4_SPEC_JOURNAL_IOPRIO               (1 <<  5)
-#define EXT4_SPEC_DUMMY_ENCRYPTION             (1 <<  6)
 #define EXT4_SPEC_s_want_extra_isize           (1 <<  7)
 #define EXT4_SPEC_s_max_batch_time             (1 <<  8)
 #define EXT4_SPEC_s_min_batch_time             (1 <<  9)
@@ -1911,7 +1892,7 @@ static int ext4_set_test_dummy_encryption(struct super_block *sb, char *arg)
 
 struct ext4_fs_context {
        char            *s_qf_names[EXT4_MAXQUOTAS];
-       char            *test_dummy_enc_arg;
+       struct fscrypt_dummy_policy dummy_enc_policy;
        int             s_jquota_fmt;   /* Format of quota to use */
 #ifdef CONFIG_EXT4_DEBUG
        int s_fc_debug_max_replay;
@@ -1953,7 +1934,7 @@ static void ext4_fc_free(struct fs_context *fc)
        for (i = 0; i < EXT4_MAXQUOTAS; i++)
                kfree(ctx->s_qf_names[i]);
 
-       kfree(ctx->test_dummy_enc_arg);
+       fscrypt_free_dummy_policy(&ctx->dummy_enc_policy);
        kfree(ctx);
 }
 
@@ -2029,6 +2010,29 @@ static int unnote_qf_name(struct fs_context *fc, int qtype)
 }
 #endif
 
+static int ext4_parse_test_dummy_encryption(const struct fs_parameter *param,
+                                           struct ext4_fs_context *ctx)
+{
+       int err;
+
+       if (!IS_ENABLED(CONFIG_FS_ENCRYPTION)) {
+               ext4_msg(NULL, KERN_WARNING,
+                        "test_dummy_encryption option not supported");
+               return -EINVAL;
+       }
+       err = fscrypt_parse_test_dummy_encryption(param,
+                                                 &ctx->dummy_enc_policy);
+       if (err == -EINVAL) {
+               ext4_msg(NULL, KERN_WARNING,
+                        "Value of option \"%s\" is unrecognized", param->key);
+       } else if (err == -EEXIST) {
+               ext4_msg(NULL, KERN_WARNING,
+                        "Conflicting test_dummy_encryption options");
+               return -EINVAL;
+       }
+       return err;
+}
+
 #define EXT4_SET_CTX(name)                                             \
 static inline void ctx_set_##name(struct ext4_fs_context *ctx,         \
                                  unsigned long flag)                   \
@@ -2291,29 +2295,7 @@ static int ext4_parse_param(struct fs_context *fc, struct fs_parameter *param)
                ctx->spec |= EXT4_SPEC_JOURNAL_IOPRIO;
                return 0;
        case Opt_test_dummy_encryption:
-#ifdef CONFIG_FS_ENCRYPTION
-               if (param->type == fs_value_is_flag) {
-                       ctx->spec |= EXT4_SPEC_DUMMY_ENCRYPTION;
-                       ctx->test_dummy_enc_arg = NULL;
-                       return 0;
-               }
-               if (*param->string &&
-                   !(!strcmp(param->string, "v1") ||
-                     !strcmp(param->string, "v2"))) {
-                       ext4_msg(NULL, KERN_WARNING,
-                                "Value of option \"%s\" is unrecognized",
-                                param->key);
-                       return -EINVAL;
-               }
-               ctx->spec |= EXT4_SPEC_DUMMY_ENCRYPTION;
-               ctx->test_dummy_enc_arg = kmemdup_nul(param->string, param->size,
-                                                     GFP_KERNEL);
-               return 0;
-#else
-               ext4_msg(NULL, KERN_WARNING,
-                        "test_dummy_encryption option not supported");
-               return -EINVAL;
-#endif
+               return ext4_parse_test_dummy_encryption(param, ctx);
        case Opt_dax:
        case Opt_dax_type:
 #ifdef CONFIG_FS_DAX
@@ -2504,7 +2486,8 @@ parse_failed:
        if (s_ctx->spec & EXT4_SPEC_JOURNAL_IOPRIO)
                m_ctx->journal_ioprio = s_ctx->journal_ioprio;
 
-       ret = ext4_apply_options(fc, sb);
+       ext4_apply_options(fc, sb);
+       ret = 0;
 
 out_free:
        if (fc) {
@@ -2673,11 +2656,11 @@ err_jquota_specified:
 static int ext4_check_test_dummy_encryption(const struct fs_context *fc,
                                            struct super_block *sb)
 {
-#ifdef CONFIG_FS_ENCRYPTION
        const struct ext4_fs_context *ctx = fc->fs_private;
        const struct ext4_sb_info *sbi = EXT4_SB(sb);
+       int err;
 
-       if (!(ctx->spec & EXT4_SPEC_DUMMY_ENCRYPTION))
+       if (!fscrypt_is_dummy_policy_set(&ctx->dummy_enc_policy))
                return 0;
 
        if (!ext4_has_feature_encrypt(sb)) {
@@ -2691,14 +2674,46 @@ static int ext4_check_test_dummy_encryption(const struct fs_context *fc,
         * needed to allow it to be set or changed during remount.  We do allow
         * it to be specified during remount, but only if there is no change.
         */
-       if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE &&
-           !sbi->s_dummy_enc_policy.policy) {
+       if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE) {
+               if (fscrypt_dummy_policies_equal(&sbi->s_dummy_enc_policy,
+                                                &ctx->dummy_enc_policy))
+                       return 0;
                ext4_msg(NULL, KERN_WARNING,
-                        "Can't set test_dummy_encryption on remount");
+                        "Can't set or change test_dummy_encryption on remount");
                return -EINVAL;
        }
-#endif /* CONFIG_FS_ENCRYPTION */
-       return 0;
+       /* Also make sure s_mount_opts didn't contain a conflicting value. */
+       if (fscrypt_is_dummy_policy_set(&sbi->s_dummy_enc_policy)) {
+               if (fscrypt_dummy_policies_equal(&sbi->s_dummy_enc_policy,
+                                                &ctx->dummy_enc_policy))
+                       return 0;
+               ext4_msg(NULL, KERN_WARNING,
+                        "Conflicting test_dummy_encryption options");
+               return -EINVAL;
+       }
+       /*
+        * fscrypt_add_test_dummy_key() technically changes the super_block, so
+        * it should arguably be delayed until ext4_apply_options() like the
+        * other changes.  But since we never get here for remounts (see above),
+        * and this is the last chance to report errors, we do it here.
+        */
+       err = fscrypt_add_test_dummy_key(sb, &ctx->dummy_enc_policy);
+       if (err)
+               ext4_msg(NULL, KERN_WARNING,
+                        "Error adding test dummy encryption key [%d]", err);
+       return err;
+}
+
+static void ext4_apply_test_dummy_encryption(struct ext4_fs_context *ctx,
+                                            struct super_block *sb)
+{
+       if (!fscrypt_is_dummy_policy_set(&ctx->dummy_enc_policy) ||
+           /* if already set, it was already verified to be the same */
+           fscrypt_is_dummy_policy_set(&EXT4_SB(sb)->s_dummy_enc_policy))
+               return;
+       EXT4_SB(sb)->s_dummy_enc_policy = ctx->dummy_enc_policy;
+       memset(&ctx->dummy_enc_policy, 0, sizeof(ctx->dummy_enc_policy));
+       ext4_msg(sb, KERN_WARNING, "Test dummy encryption mode enabled");
 }
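
A userspace model of the apply step's ownership handoff (types hypothetical): the policy moves from the parse context into the superblock, and zeroing the context keeps the later ext4_fc_free() from freeing it a second time.

#include <stdlib.h>
#include <string.h>

struct policy { char *arg; }; /* stand-in for fscrypt_dummy_policy */

static void apply(struct policy *sb_pol, struct policy *ctx_pol)
{
        if (!ctx_pol->arg || sb_pol->arg)
                return;                       /* nothing new, or already set */
        *sb_pol = *ctx_pol;                   /* superblock takes ownership */
        memset(ctx_pol, 0, sizeof(*ctx_pol)); /* context no longer owns it */
}

int main(void)
{
        struct policy sb = { NULL };
        struct policy ctx = { strdup("v2") };

        apply(&sb, &ctx);
        free(ctx.arg); /* NULL after apply: the context teardown is harmless */
        free(sb.arg);  /* the superblock side owns the single allocation */
        return 0;
}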
 
 static int ext4_check_opt_consistency(struct fs_context *fc,
@@ -2785,11 +2800,10 @@ fail_dax_change_remount:
        return ext4_check_quota_consistency(fc, sb);
 }
 
-static int ext4_apply_options(struct fs_context *fc, struct super_block *sb)
+static void ext4_apply_options(struct fs_context *fc, struct super_block *sb)
 {
        struct ext4_fs_context *ctx = fc->fs_private;
        struct ext4_sb_info *sbi = fc->s_fs_info;
-       int ret = 0;
 
        sbi->s_mount_opt &= ~ctx->mask_s_mount_opt;
        sbi->s_mount_opt |= ctx->vals_s_mount_opt;
@@ -2825,11 +2839,7 @@ static int ext4_apply_options(struct fs_context *fc, struct super_block *sb)
 #endif
 
        ext4_apply_quota_options(fc, sb);
-
-       if (ctx->spec & EXT4_SPEC_DUMMY_ENCRYPTION)
-               ret = ext4_set_test_dummy_encryption(sb, ctx->test_dummy_enc_arg);
-
-       return ret;
+       ext4_apply_test_dummy_encryption(ctx, sb);
 }
 
 
@@ -4552,9 +4562,7 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
        if (err < 0)
                goto failed_mount;
 
-       err = ext4_apply_options(fc, sb);
-       if (err < 0)
-               goto failed_mount;
+       ext4_apply_options(fc, sb);
 
 #if IS_ENABLED(CONFIG_UNICODE)
        if (ext4_has_feature_casefold(sb) && !sb->s_encoding) {
@@ -5302,14 +5310,6 @@ no_journal:
                err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
                                          GFP_KERNEL);
        }
-       /*
-        * Update the checksum after updating free space/inode
-        * counters.  Otherwise the superblock can have an incorrect
-        * checksum in the buffer cache until it is written out and
-        * e2fsprogs programs trying to open a file system immediately
-        * after it is mounted can fail.
-        */
-       ext4_superblock_csum_set(sb);
        if (!err)
                err = percpu_counter_init(&sbi->s_dirs_counter,
                                          ext4_count_dirs(sb), GFP_KERNEL);
@@ -5367,6 +5367,14 @@ no_journal:
        EXT4_SB(sb)->s_mount_state |= EXT4_ORPHAN_FS;
        ext4_orphan_cleanup(sb, es);
        EXT4_SB(sb)->s_mount_state &= ~EXT4_ORPHAN_FS;
+       /*
+        * Update the checksum after updating free space/inode counters and
+        * ext4_orphan_cleanup. Otherwise the superblock can have an incorrect
+        * checksum in the buffer cache until it is written out and
+        * e2fsprogs programs trying to open a file system immediately
+        * after it is mounted can fail.
+        */
+       ext4_superblock_csum_set(sb);
        if (needs_recovery) {
                ext4_msg(sb, KERN_INFO, "recovery complete");
                err = ext4_mark_recovery_complete(sb, es);
@@ -5898,7 +5906,6 @@ static void ext4_update_super(struct super_block *sb)
 static int ext4_commit_super(struct super_block *sb)
 {
        struct buffer_head *sbh = EXT4_SB(sb)->s_sbh;
-       int error = 0;
 
        if (!sbh)
                return -EINVAL;
@@ -5907,6 +5914,13 @@ static int ext4_commit_super(struct super_block *sb)
 
        ext4_update_super(sb);
 
+       lock_buffer(sbh);
+       /* Buffer got discarded which means block device got invalidated */
+       if (!buffer_mapped(sbh)) {
+               unlock_buffer(sbh);
+               return -EIO;
+       }
+
        if (buffer_write_io_error(sbh) || !buffer_uptodate(sbh)) {
                /*
                 * Oh, dear.  A previous attempt to write the
@@ -5921,17 +5935,21 @@ static int ext4_commit_super(struct super_block *sb)
                clear_buffer_write_io_error(sbh);
                set_buffer_uptodate(sbh);
        }
-       BUFFER_TRACE(sbh, "marking dirty");
-       mark_buffer_dirty(sbh);
-       error = __sync_dirty_buffer(sbh,
-               REQ_SYNC | (test_opt(sb, BARRIER) ? REQ_FUA : 0));
+       get_bh(sbh);
+       /* Clear the potential dirty bit if it was a journalled update */
+       clear_buffer_dirty(sbh);
+       sbh->b_end_io = end_buffer_write_sync;
+       submit_bh(REQ_OP_WRITE,
+                 REQ_SYNC | (test_opt(sb, BARRIER) ? REQ_FUA : 0), sbh);
+       wait_on_buffer(sbh);
        if (buffer_write_io_error(sbh)) {
                ext4_msg(sb, KERN_ERR, "I/O error while writing "
                       "superblock");
                clear_buffer_write_io_error(sbh);
                set_buffer_uptodate(sbh);
+               return -EIO;
        }
-       return error;
+       return 0;
 }
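
The rewrite above replaces mark_buffer_dirty() plus __sync_dirty_buffer() with a
hand-rolled synchronous write: the dirty bit is cleared first so a journalled
superblock update is not written out twice, and an I/O error is now returned to
the caller instead of being swallowed. A minimal sketch of the same synchronous
buffer_head write pattern (helper name hypothetical):

    static int write_bh_sync(struct buffer_head *bh, int op_flags)
    {
            get_bh(bh);                     /* hold a reference across the I/O */
            clear_buffer_dirty(bh);         /* we submit the write ourselves */
            bh->b_end_io = end_buffer_write_sync;
            submit_bh(REQ_OP_WRITE, op_flags, bh);
            wait_on_buffer(bh);             /* block until completion */
            return buffer_write_io_error(bh) ? -EIO : 0;
    }
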
 
 /*
index 0423253..564e28a 100644 (file)
@@ -1895,11 +1895,10 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
 
                        unlock_buffer(bs->bh);
                        ea_bdebug(bs->bh, "cloning");
-                       s->base = kmalloc(bs->bh->b_size, GFP_NOFS);
+                       s->base = kmemdup(BHDR(bs->bh), bs->bh->b_size, GFP_NOFS);
                        error = -ENOMEM;
                        if (s->base == NULL)
                                goto cleanup;
-                       memcpy(s->base, BHDR(bs->bh), bs->bh->b_size);
                        s->first = ENTRY(header(s->base)+1);
                        header(s->base)->h_refcount = cpu_to_le32(1);
                        s->here = ENTRY(s->base + offset);
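
kmemdup() folds the kmalloc()/memcpy() pair into a single call and removes the
window in which s->base pointed at uninitialised memory. It is equivalent to:

    void *p = kmalloc(len, gfp);    /* kmemdup(src, len, gfp) */
    if (p)
            memcpy(p, src, len);
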
index be599f3..d84c5f6 100644 (file)
@@ -91,8 +91,9 @@ static inline void __record_iostat_latency(struct f2fs_sb_info *sbi)
        unsigned int cnt;
        struct f2fs_iostat_latency iostat_lat[MAX_IO_TYPE][NR_PAGE_TYPE];
        struct iostat_lat_info *io_lat = sbi->iostat_io_lat;
+       unsigned long flags;
 
-       spin_lock_bh(&sbi->iostat_lat_lock);
+       spin_lock_irqsave(&sbi->iostat_lat_lock, flags);
        for (idx = 0; idx < MAX_IO_TYPE; idx++) {
                for (io = 0; io < NR_PAGE_TYPE; io++) {
                        cnt = io_lat->bio_cnt[idx][io];
@@ -106,7 +107,7 @@ static inline void __record_iostat_latency(struct f2fs_sb_info *sbi)
                        io_lat->bio_cnt[idx][io] = 0;
                }
        }
-       spin_unlock_bh(&sbi->iostat_lat_lock);
+       spin_unlock_irqrestore(&sbi->iostat_lat_lock, flags);
 
        trace_f2fs_iostat_latency(sbi, iostat_lat);
 }
@@ -115,14 +116,15 @@ static inline void f2fs_record_iostat(struct f2fs_sb_info *sbi)
 {
        unsigned long long iostat_diff[NR_IO_TYPE];
        int i;
+       unsigned long flags;
 
        if (time_is_after_jiffies(sbi->iostat_next_period))
                return;
 
        /* Need double check under the lock */
-       spin_lock_bh(&sbi->iostat_lock);
+       spin_lock_irqsave(&sbi->iostat_lock, flags);
        if (time_is_after_jiffies(sbi->iostat_next_period)) {
-               spin_unlock_bh(&sbi->iostat_lock);
+               spin_unlock_irqrestore(&sbi->iostat_lock, flags);
                return;
        }
        sbi->iostat_next_period = jiffies +
@@ -133,7 +135,7 @@ static inline void f2fs_record_iostat(struct f2fs_sb_info *sbi)
                                sbi->prev_rw_iostat[i];
                sbi->prev_rw_iostat[i] = sbi->rw_iostat[i];
        }
-       spin_unlock_bh(&sbi->iostat_lock);
+       spin_unlock_irqrestore(&sbi->iostat_lock, flags);
 
        trace_f2fs_iostat(sbi, iostat_diff);
 
@@ -145,25 +147,27 @@ void f2fs_reset_iostat(struct f2fs_sb_info *sbi)
        struct iostat_lat_info *io_lat = sbi->iostat_io_lat;
        int i;
 
-       spin_lock_bh(&sbi->iostat_lock);
+       spin_lock_irq(&sbi->iostat_lock);
        for (i = 0; i < NR_IO_TYPE; i++) {
                sbi->rw_iostat[i] = 0;
                sbi->prev_rw_iostat[i] = 0;
        }
-       spin_unlock_bh(&sbi->iostat_lock);
+       spin_unlock_irq(&sbi->iostat_lock);
 
-       spin_lock_bh(&sbi->iostat_lat_lock);
+       spin_lock_irq(&sbi->iostat_lat_lock);
        memset(io_lat, 0, sizeof(struct iostat_lat_info));
-       spin_unlock_bh(&sbi->iostat_lat_lock);
+       spin_unlock_irq(&sbi->iostat_lat_lock);
 }
 
 void f2fs_update_iostat(struct f2fs_sb_info *sbi,
                        enum iostat_type type, unsigned long long io_bytes)
 {
+       unsigned long flags;
+
        if (!sbi->iostat_enable)
                return;
 
-       spin_lock_bh(&sbi->iostat_lock);
+       spin_lock_irqsave(&sbi->iostat_lock, flags);
        sbi->rw_iostat[type] += io_bytes;
 
        if (type == APP_BUFFERED_IO || type == APP_DIRECT_IO)
@@ -172,7 +176,7 @@ void f2fs_update_iostat(struct f2fs_sb_info *sbi,
        if (type == APP_BUFFERED_READ_IO || type == APP_DIRECT_READ_IO)
                sbi->rw_iostat[APP_READ_IO] += io_bytes;
 
-       spin_unlock_bh(&sbi->iostat_lock);
+       spin_unlock_irqrestore(&sbi->iostat_lock, flags);
 
        f2fs_record_iostat(sbi);
 }
@@ -185,6 +189,7 @@ static inline void __update_iostat_latency(struct bio_iostat_ctx *iostat_ctx,
        struct f2fs_sb_info *sbi = iostat_ctx->sbi;
        struct iostat_lat_info *io_lat = sbi->iostat_io_lat;
        int idx;
+       unsigned long flags;
 
        if (!sbi->iostat_enable)
                return;
@@ -202,12 +207,12 @@ static inline void __update_iostat_latency(struct bio_iostat_ctx *iostat_ctx,
                        idx = WRITE_ASYNC_IO;
        }
 
-       spin_lock_bh(&sbi->iostat_lat_lock);
+       spin_lock_irqsave(&sbi->iostat_lat_lock, flags);
        io_lat->sum_lat[idx][iotype] += ts_diff;
        io_lat->bio_cnt[idx][iotype]++;
        if (ts_diff > io_lat->peak_lat[idx][iotype])
                io_lat->peak_lat[idx][iotype] = ts_diff;
-       spin_unlock_bh(&sbi->iostat_lat_lock);
+       spin_unlock_irqrestore(&sbi->iostat_lat_lock, flags);
 }
 
 void iostat_update_and_unbind_ctx(struct bio *bio, int rw)
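
The _bh to _irqsave conversions are needed because these locks are now taken
from the bio completion path, which may run with interrupts disabled or in
interrupt context where the _bh variants are not safe; spin_unlock_bh() would
also unconditionally re-enable bottom halves. The pattern used throughout:

    unsigned long flags;

    spin_lock_irqsave(&lock, flags);      /* disable local IRQs, save state */
    /* ... update counters shared with the completion path ... */
    spin_unlock_irqrestore(&lock, flags); /* restore the saved IRQ state */
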
index c549acb..bf00d50 100644 (file)
@@ -89,8 +89,6 @@ static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns,
        if (test_opt(sbi, INLINE_XATTR))
                set_inode_flag(inode, FI_INLINE_XATTR);
 
-       if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode))
-               set_inode_flag(inode, FI_INLINE_DATA);
        if (f2fs_may_inline_dentry(inode))
                set_inode_flag(inode, FI_INLINE_DENTRY);
 
@@ -107,10 +105,6 @@ static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns,
 
        f2fs_init_extent_tree(inode, NULL);
 
-       stat_inc_inline_xattr(inode);
-       stat_inc_inline_inode(inode);
-       stat_inc_inline_dir(inode);
-
        F2FS_I(inode)->i_flags =
                f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED);
 
@@ -127,6 +121,14 @@ static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns,
                        set_compress_context(inode);
        }
 
+       /* Should enable inline_data only after compression is set */
+       if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode))
+               set_inode_flag(inode, FI_INLINE_DATA);
+
+       stat_inc_inline_xattr(inode);
+       stat_inc_inline_inode(inode);
+       stat_inc_inline_dir(inode);
+
        f2fs_set_inode_flags(inode);
 
        trace_f2fs_new_inode(inode, 0);
@@ -325,6 +327,9 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
                if (!is_extension_exist(name, ext[i], false))
                        continue;
 
+               /* Do not use inline_data with compression */
+               stat_dec_inline_inode(inode);
+               clear_inode_flag(inode, FI_INLINE_DATA);
                set_compress_context(inode);
                return;
        }
index 836c79a..cf6f7fc 100644 (file)
@@ -1450,7 +1450,9 @@ page_hit:
 out_err:
        ClearPageUptodate(page);
 out_put_err:
-       f2fs_handle_page_eio(sbi, page->index, NODE);
+       /* An ENOENT from read_node_page() is not an error. */
+       if (err != -ENOENT)
+               f2fs_handle_page_eio(sbi, page->index, NODE);
        f2fs_put_page(page, 1);
        return ERR_PTR(err);
 }
index 6240804..02eb723 100644 (file)
@@ -600,41 +600,79 @@ static void hugetlb_vmtruncate(struct inode *inode, loff_t offset)
        remove_inode_hugepages(inode, offset, LLONG_MAX);
 }
 
+static void hugetlbfs_zero_partial_page(struct hstate *h,
+                                       struct address_space *mapping,
+                                       loff_t start,
+                                       loff_t end)
+{
+       pgoff_t idx = start >> huge_page_shift(h);
+       struct folio *folio;
+
+       folio = filemap_lock_folio(mapping, idx);
+       if (!folio)
+               return;
+
+       start = start & ~huge_page_mask(h);
+       end = end & ~huge_page_mask(h);
+       if (!end)
+               end = huge_page_size(h);
+
+       folio_zero_segment(folio, (size_t)start, (size_t)end);
+
+       folio_unlock(folio);
+       folio_put(folio);
+}
+
 static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 {
+       struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
+       struct address_space *mapping = inode->i_mapping;
        struct hstate *h = hstate_inode(inode);
        loff_t hpage_size = huge_page_size(h);
        loff_t hole_start, hole_end;
 
        /*
-        * For hole punch round up the beginning offset of the hole and
-        * round down the end.
+        * hole_start and hole_end indicate the full pages within the hole.
         */
        hole_start = round_up(offset, hpage_size);
        hole_end = round_down(offset + len, hpage_size);
 
-       if (hole_end > hole_start) {
-               struct address_space *mapping = inode->i_mapping;
-               struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
+       inode_lock(inode);
 
-               inode_lock(inode);
+       /* protected by i_rwsem */
+       if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
+               inode_unlock(inode);
+               return -EPERM;
+       }
 
-               /* protected by i_rwsem */
-               if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
-                       inode_unlock(inode);
-                       return -EPERM;
-               }
+       i_mmap_lock_write(mapping);
+
+       /* If range starts before first full page, zero partial page. */
+       if (offset < hole_start)
+               hugetlbfs_zero_partial_page(h, mapping,
+                               offset, min(offset + len, hole_start));
 
-               i_mmap_lock_write(mapping);
+       /* Unmap users of full pages in the hole. */
+       if (hole_end > hole_start) {
                if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))
                        hugetlb_vmdelete_list(&mapping->i_mmap,
                                              hole_start >> PAGE_SHIFT,
                                              hole_end >> PAGE_SHIFT, 0);
-               i_mmap_unlock_write(mapping);
-               remove_inode_hugepages(inode, hole_start, hole_end);
-               inode_unlock(inode);
        }
 
+       /* If range extends beyond last full page, zero partial page. */
+       if ((offset + len) > hole_end && (offset + len) > hole_start)
+               hugetlbfs_zero_partial_page(h, mapping,
+                               hole_end, offset + len);
+
+       i_mmap_unlock_write(mapping);
+
+       /* Remove full pages from the file. */
+       if (hole_end > hole_start)
+               remove_inode_hugepages(inode, hole_start, hole_end);
+
+       inode_unlock(inode);
+
        return 0;
 }
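
With this rework a hole that starts or ends inside a huge page no longer leaves
those bytes untouched: hugetlbfs_zero_partial_page() zeroes the unaligned head
and tail of the range, while only whole pages are unmapped and removed. From
userspace this is an ordinary fallocate(2) punch; a sketch (descriptor and
range illustrative):

    /* Requires _GNU_SOURCE and <fcntl.h>. Sub-hugepage head/tail of the
     * range are now zeroed rather than ignored. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  offset, length) < 0)
            perror("fallocate");
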
 
index 3aab418..0d491ad 100644 (file)
@@ -298,8 +298,8 @@ struct io_buffer_list {
        /* below is for ring provided buffers */
        __u16 buf_nr_pages;
        __u16 nr_entries;
-       __u32 head;
-       __u32 mask;
+       __u16 head;
+       __u16 mask;
 };
 
 struct io_buffer {
@@ -576,7 +576,6 @@ struct io_close {
        struct file                     *file;
        int                             fd;
        u32                             file_slot;
-       u32                             flags;
 };
 
 struct io_timeout_data {
@@ -784,12 +783,6 @@ struct io_msg {
        u32 len;
 };
 
-struct io_nop {
-       struct file                     *file;
-       u64                             extra1;
-       u64                             extra2;
-};
-
 struct io_async_connect {
        struct sockaddr_storage         address;
 };
@@ -851,6 +844,7 @@ enum {
        REQ_F_SINGLE_POLL_BIT,
        REQ_F_DOUBLE_POLL_BIT,
        REQ_F_PARTIAL_IO_BIT,
+       REQ_F_CQE32_INIT_BIT,
        REQ_F_APOLL_MULTISHOT_BIT,
        /* keep async read/write and isreg together and in order */
        REQ_F_SUPPORT_NOWAIT_BIT,
@@ -920,6 +914,8 @@ enum {
        REQ_F_PARTIAL_IO        = BIT(REQ_F_PARTIAL_IO_BIT),
        /* fast poll multishot mode */
        REQ_F_APOLL_MULTISHOT   = BIT(REQ_F_APOLL_MULTISHOT_BIT),
+       /* ->extra1 and ->extra2 are initialised */
+       REQ_F_CQE32_INIT        = BIT(REQ_F_CQE32_INIT_BIT),
 };
 
 struct async_poll {
@@ -994,7 +990,6 @@ struct io_kiocb {
                struct io_msg           msg;
                struct io_xattr         xattr;
                struct io_socket        sock;
-               struct io_nop           nop;
                struct io_uring_cmd     uring_cmd;
        };
 
@@ -1121,7 +1116,6 @@ static const struct io_op_def io_op_defs[] = {
        [IORING_OP_NOP] = {
                .audit_skip             = 1,
                .iopoll                 = 1,
-               .buffer_select          = 1,
        },
        [IORING_OP_READV] = {
                .needs_file             = 1,
@@ -1189,6 +1183,7 @@ static const struct io_op_def io_op_defs[] = {
                .unbound_nonreg_file    = 1,
                .pollout                = 1,
                .needs_async_setup      = 1,
+               .ioprio                 = 1,
                .async_size             = sizeof(struct io_async_msghdr),
        },
        [IORING_OP_RECVMSG] = {
@@ -1197,6 +1192,7 @@ static const struct io_op_def io_op_defs[] = {
                .pollin                 = 1,
                .buffer_select          = 1,
                .needs_async_setup      = 1,
+               .ioprio                 = 1,
                .async_size             = sizeof(struct io_async_msghdr),
        },
        [IORING_OP_TIMEOUT] = {
@@ -1272,6 +1268,7 @@ static const struct io_op_def io_op_defs[] = {
                .unbound_nonreg_file    = 1,
                .pollout                = 1,
                .audit_skip             = 1,
+               .ioprio                 = 1,
        },
        [IORING_OP_RECV] = {
                .needs_file             = 1,
@@ -1279,6 +1276,7 @@ static const struct io_op_def io_op_defs[] = {
                .pollin                 = 1,
                .buffer_select          = 1,
                .audit_skip             = 1,
+               .ioprio                 = 1,
        },
        [IORING_OP_OPENAT2] = {
        },
@@ -1729,9 +1727,16 @@ static void io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
 
        if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
                return;
-       /* don't recycle if we already did IO to this buffer */
-       if (req->flags & REQ_F_PARTIAL_IO)
+       /*
+        * For legacy provided buffer mode, don't recycle if we already did
+        * IO to this buffer. For ring-mapped provided buffer mode, we should
+        * increment ring->head to explicitly consume the buffer, so it
+        * cannot be handed out more than once.
+        */
+       if ((req->flags & REQ_F_BUFFER_SELECTED) &&
+           (req->flags & REQ_F_PARTIAL_IO))
                return;
+
        /*
         * We don't need to recycle for REQ_F_BUFFER_RING, we can just clear
         * the flag and hence ensure that bl->head doesn't get incremented.
@@ -1739,8 +1744,13 @@ static void io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
         */
        if (req->flags & REQ_F_BUFFER_RING) {
                if (req->buf_list) {
-                       req->buf_index = req->buf_list->bgid;
-                       req->flags &= ~REQ_F_BUFFER_RING;
+                       if (req->flags & REQ_F_PARTIAL_IO) {
+                               req->buf_list->head++;
+                               req->buf_list = NULL;
+                       } else {
+                               req->buf_index = req->buf_list->bgid;
+                               req->flags &= ~REQ_F_BUFFER_RING;
+                       }
                }
                return;
        }
@@ -1969,7 +1979,7 @@ static inline void io_req_track_inflight(struct io_kiocb *req)
 {
        if (!(req->flags & REQ_F_INFLIGHT)) {
                req->flags |= REQ_F_INFLIGHT;
-               atomic_inc(&current->io_uring->inflight_tracked);
+               atomic_inc(&req->task->io_uring->inflight_tracked);
        }
 }
 
@@ -2441,94 +2451,66 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
        return true;
 }
 
-static inline bool __io_fill_cqe(struct io_ring_ctx *ctx, u64 user_data,
-                                s32 res, u32 cflags)
+static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
+                                    struct io_kiocb *req)
 {
        struct io_uring_cqe *cqe;
 
-       /*
-        * If we can't get a cq entry, userspace overflowed the
-        * submission (by quite a lot). Increment the overflow count in
-        * the ring.
-        */
-       cqe = io_get_cqe(ctx);
-       if (likely(cqe)) {
-               WRITE_ONCE(cqe->user_data, user_data);
-               WRITE_ONCE(cqe->res, res);
-               WRITE_ONCE(cqe->flags, cflags);
-               return true;
-       }
-       return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
-}
+       if (!(ctx->flags & IORING_SETUP_CQE32)) {
+               trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
+                                       req->cqe.res, req->cqe.flags, 0, 0);
 
-static inline bool __io_fill_cqe_req_filled(struct io_ring_ctx *ctx,
-                                           struct io_kiocb *req)
-{
-       struct io_uring_cqe *cqe;
+               /*
+                * If we can't get a cq entry, userspace overflowed the
+                * submission (by quite a lot). Increment the overflow count in
+                * the ring.
+                */
+               cqe = io_get_cqe(ctx);
+               if (likely(cqe)) {
+                       memcpy(cqe, &req->cqe, sizeof(*cqe));
+                       return true;
+               }
 
-       trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
-                               req->cqe.res, req->cqe.flags, 0, 0);
+               return io_cqring_event_overflow(ctx, req->cqe.user_data,
+                                               req->cqe.res, req->cqe.flags,
+                                               0, 0);
+       } else {
+               u64 extra1 = 0, extra2 = 0;
 
-       /*
-        * If we can't get a cq entry, userspace overflowed the
-        * submission (by quite a lot). Increment the overflow count in
-        * the ring.
-        */
-       cqe = io_get_cqe(ctx);
-       if (likely(cqe)) {
-               memcpy(cqe, &req->cqe, sizeof(*cqe));
-               return true;
-       }
-       return io_cqring_event_overflow(ctx, req->cqe.user_data,
-                                       req->cqe.res, req->cqe.flags, 0, 0);
-}
+               if (req->flags & REQ_F_CQE32_INIT) {
+                       extra1 = req->extra1;
+                       extra2 = req->extra2;
+               }
 
-static inline bool __io_fill_cqe32_req_filled(struct io_ring_ctx *ctx,
-                                             struct io_kiocb *req)
-{
-       struct io_uring_cqe *cqe;
-       u64 extra1 = req->extra1;
-       u64 extra2 = req->extra2;
+               trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
+                                       req->cqe.res, req->cqe.flags, extra1, extra2);
 
-       trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
-                               req->cqe.res, req->cqe.flags, extra1, extra2);
+               /*
+                * If we can't get a cq entry, userspace overflowed the
+                * submission (by quite a lot). Increment the overflow count in
+                * the ring.
+                */
+               cqe = io_get_cqe(ctx);
+               if (likely(cqe)) {
+                       memcpy(cqe, &req->cqe, sizeof(struct io_uring_cqe));
+                       WRITE_ONCE(cqe->big_cqe[0], extra1);
+                       WRITE_ONCE(cqe->big_cqe[1], extra2);
+                       return true;
+               }
 
-       /*
-        * If we can't get a cq entry, userspace overflowed the
-        * submission (by quite a lot). Increment the overflow count in
-        * the ring.
-        */
-       cqe = io_get_cqe(ctx);
-       if (likely(cqe)) {
-               memcpy(cqe, &req->cqe, sizeof(struct io_uring_cqe));
-               cqe->big_cqe[0] = extra1;
-               cqe->big_cqe[1] = extra2;
-               return true;
+               return io_cqring_event_overflow(ctx, req->cqe.user_data,
+                               req->cqe.res, req->cqe.flags,
+                               extra1, extra2);
        }
-
-       return io_cqring_event_overflow(ctx, req->cqe.user_data, req->cqe.res,
-                                       req->cqe.flags, extra1, extra2);
 }
 
-static inline bool __io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags)
-{
-       trace_io_uring_complete(req->ctx, req, req->cqe.user_data, res, cflags, 0, 0);
-       return __io_fill_cqe(req->ctx, req->cqe.user_data, res, cflags);
-}
-
-static inline void __io_fill_cqe32_req(struct io_kiocb *req, s32 res, u32 cflags,
-                               u64 extra1, u64 extra2)
+static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
+                                    s32 res, u32 cflags)
 {
-       struct io_ring_ctx *ctx = req->ctx;
        struct io_uring_cqe *cqe;
 
-       if (WARN_ON_ONCE(!(ctx->flags & IORING_SETUP_CQE32)))
-               return;
-       if (req->flags & REQ_F_CQE_SKIP)
-               return;
-
-       trace_io_uring_complete(ctx, req, req->cqe.user_data, res, cflags,
-                               extra1, extra2);
+       ctx->cq_extra++;
+       trace_io_uring_complete(ctx, NULL, user_data, res, cflags, 0, 0);
 
        /*
         * If we can't get a cq entry, userspace overflowed the
@@ -2537,23 +2519,17 @@ static inline void __io_fill_cqe32_req(struct io_kiocb *req, s32 res, u32 cflags
         */
        cqe = io_get_cqe(ctx);
        if (likely(cqe)) {
-               WRITE_ONCE(cqe->user_data, req->cqe.user_data);
+               WRITE_ONCE(cqe->user_data, user_data);
                WRITE_ONCE(cqe->res, res);
                WRITE_ONCE(cqe->flags, cflags);
-               WRITE_ONCE(cqe->big_cqe[0], extra1);
-               WRITE_ONCE(cqe->big_cqe[1], extra2);
-               return;
-       }
 
-       io_cqring_event_overflow(ctx, req->cqe.user_data, res, cflags, extra1, extra2);
-}
-
-static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
-                                    s32 res, u32 cflags)
-{
-       ctx->cq_extra++;
-       trace_io_uring_complete(ctx, NULL, user_data, res, cflags, 0, 0);
-       return __io_fill_cqe(ctx, user_data, res, cflags);
+               if (ctx->flags & IORING_SETUP_CQE32) {
+                       WRITE_ONCE(cqe->big_cqe[0], 0);
+                       WRITE_ONCE(cqe->big_cqe[1], 0);
+               }
+               return true;
+       }
+       return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
 }
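
After this consolidation a single helper serves both CQE formats: on
IORING_SETUP_CQE32 rings every completion slot is 32 bytes, and the two extra
words come from req->extra1/extra2 only when REQ_F_CQE32_INIT says they were
set, zero otherwise. A consumer-side sketch, assuming the uapi
struct io_uring_cqe layout with its trailing big_cqe[] array:

    static void handle_big_cqe(const struct io_uring_cqe *cqe)
    {
            __u64 extra1 = cqe->big_cqe[0]; /* e.g. res2 from io_uring_cmd_done() */
            __u64 extra2 = cqe->big_cqe[1];
            /* user_data, res and flags are read exactly as on 16-byte CQEs */
    }
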
 
 static void __io_req_complete_put(struct io_kiocb *req)
@@ -2590,16 +2566,11 @@ static void __io_req_complete_put(struct io_kiocb *req)
 static void __io_req_complete_post(struct io_kiocb *req, s32 res,
                                   u32 cflags)
 {
-       if (!(req->flags & REQ_F_CQE_SKIP))
-               __io_fill_cqe_req(req, res, cflags);
-       __io_req_complete_put(req);
-}
-
-static void __io_req_complete_post32(struct io_kiocb *req, s32 res,
-                                  u32 cflags, u64 extra1, u64 extra2)
-{
-       if (!(req->flags & REQ_F_CQE_SKIP))
-               __io_fill_cqe32_req(req, res, cflags, extra1, extra2);
+       if (!(req->flags & REQ_F_CQE_SKIP)) {
+               req->cqe.res = res;
+               req->cqe.flags = cflags;
+               __io_fill_cqe_req(req->ctx, req);
+       }
        __io_req_complete_put(req);
 }
 
@@ -2614,18 +2585,6 @@ static void io_req_complete_post(struct io_kiocb *req, s32 res, u32 cflags)
        io_cqring_ev_posted(ctx);
 }
 
-static void io_req_complete_post32(struct io_kiocb *req, s32 res,
-                                  u32 cflags, u64 extra1, u64 extra2)
-{
-       struct io_ring_ctx *ctx = req->ctx;
-
-       spin_lock(&ctx->completion_lock);
-       __io_req_complete_post32(req, res, cflags, extra1, extra2);
-       io_commit_cqring(ctx);
-       spin_unlock(&ctx->completion_lock);
-       io_cqring_ev_posted(ctx);
-}
-
 static inline void io_req_complete_state(struct io_kiocb *req, s32 res,
                                         u32 cflags)
 {
@@ -2643,19 +2602,6 @@ static inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags,
                io_req_complete_post(req, res, cflags);
 }
 
-static inline void __io_req_complete32(struct io_kiocb *req,
-                                      unsigned int issue_flags, s32 res,
-                                      u32 cflags, u64 extra1, u64 extra2)
-{
-       if (issue_flags & IO_URING_F_COMPLETE_DEFER) {
-               io_req_complete_state(req, res, cflags);
-               req->extra1 = extra1;
-               req->extra2 = extra2;
-       } else {
-               io_req_complete_post32(req, res, cflags, extra1, extra2);
-       }
-}
-
 static inline void io_req_complete(struct io_kiocb *req, s32 res)
 {
        if (res < 0)
@@ -3202,12 +3148,8 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
                        struct io_kiocb *req = container_of(node, struct io_kiocb,
                                                    comp_list);
 
-                       if (!(req->flags & REQ_F_CQE_SKIP)) {
-                               if (!(ctx->flags & IORING_SETUP_CQE32))
-                                       __io_fill_cqe_req_filled(ctx, req);
-                               else
-                                       __io_fill_cqe32_req_filled(ctx, req);
-                       }
+                       if (!(req->flags & REQ_F_CQE_SKIP))
+                               __io_fill_cqe_req(ctx, req);
                }
 
                io_commit_cqring(ctx);
@@ -3326,7 +3268,9 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
                nr_events++;
                if (unlikely(req->flags & REQ_F_CQE_SKIP))
                        continue;
-               __io_fill_cqe_req(req, req->cqe.res, io_put_kbuf(req, 0));
+
+               req->cqe.flags = io_put_kbuf(req, 0);
+               __io_fill_cqe_req(req->ctx, req);
        }
 
        if (unlikely(!nr_events))
@@ -3497,7 +3441,7 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
        if (unlikely(res != req->cqe.res)) {
                if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
                    io_rw_should_reissue(req)) {
-                       req->flags |= REQ_F_REISSUE;
+                       req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
                        return true;
                }
                req_set_fail(req);
@@ -3547,7 +3491,7 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res)
                kiocb_end_write(req);
        if (unlikely(res != req->cqe.res)) {
                if (res == -EAGAIN && io_rw_should_reissue(req)) {
-                       req->flags |= REQ_F_REISSUE;
+                       req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
                        return;
                }
                req->cqe.res = res;
@@ -3677,6 +3621,20 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
        int ret;
 
        kiocb->ki_pos = READ_ONCE(sqe->off);
+       /* used for fixed read/write too - just read unconditionally */
+       req->buf_index = READ_ONCE(sqe->buf_index);
+
+       if (req->opcode == IORING_OP_READ_FIXED ||
+           req->opcode == IORING_OP_WRITE_FIXED) {
+               struct io_ring_ctx *ctx = req->ctx;
+               u16 index;
+
+               if (unlikely(req->buf_index >= ctx->nr_user_bufs))
+                       return -EFAULT;
+               index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
+               req->imu = ctx->user_bufs[index];
+               io_req_set_rsrc_node(req, ctx, 0);
+       }
 
        ioprio = READ_ONCE(sqe->ioprio);
        if (ioprio) {
@@ -3689,12 +3647,9 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
                kiocb->ki_ioprio = get_current_ioprio();
        }
 
-       req->imu = NULL;
        req->rw.addr = READ_ONCE(sqe->addr);
        req->rw.len = READ_ONCE(sqe->len);
        req->rw.flags = READ_ONCE(sqe->rw_flags);
-       /* used for fixed read/write too - just read unconditionally */
-       req->buf_index = READ_ONCE(sqe->buf_index);
        return 0;
 }
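
Resolving the registered buffer at prep time means the request takes its rsrc
node reference before it can go async, and io_import_fixed() below collapses to
a NULL check. The array_index_nospec() call bounds the user-controlled index
even under speculative execution; the general pattern (from <linux/nospec.h>)
is:

    if (idx >= nr_entries)
            return -EFAULT;
    idx = array_index_nospec(idx, nr_entries); /* clamped even speculatively */
    elem = table[idx];
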
 
@@ -3826,20 +3781,9 @@ static int __io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter
 static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
                           unsigned int issue_flags)
 {
-       struct io_mapped_ubuf *imu = req->imu;
-       u16 index, buf_index = req->buf_index;
-
-       if (likely(!imu)) {
-               struct io_ring_ctx *ctx = req->ctx;
-
-               if (unlikely(buf_index >= ctx->nr_user_bufs))
-                       return -EFAULT;
-               io_req_set_rsrc_node(req, ctx, issue_flags);
-               index = array_index_nospec(buf_index, ctx->nr_user_bufs);
-               imu = READ_ONCE(ctx->user_bufs[index]);
-               req->imu = imu;
-       }
-       return __io_import_fixed(req, rw, iter, imu);
+       if (WARN_ON_ONCE(!req->imu))
+               return -EFAULT;
+       return __io_import_fixed(req, rw, iter, req->imu);
 }
 
 static int io_buffer_add_list(struct io_ring_ctx *ctx,
@@ -3876,19 +3820,17 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
 {
        struct io_uring_buf_ring *br = bl->buf_ring;
        struct io_uring_buf *buf;
-       __u32 head = bl->head;
+       __u16 head = bl->head;
 
-       if (unlikely(smp_load_acquire(&br->tail) == head)) {
-               io_ring_submit_unlock(req->ctx, issue_flags);
+       if (unlikely(smp_load_acquire(&br->tail) == head))
                return NULL;
-       }
 
        head &= bl->mask;
        if (head < IO_BUFFER_LIST_BUF_PER_PAGE) {
                buf = &br->bufs[head];
        } else {
                int off = head & (IO_BUFFER_LIST_BUF_PER_PAGE - 1);
-               int index = head / IO_BUFFER_LIST_BUF_PER_PAGE - 1;
+               int index = head / IO_BUFFER_LIST_BUF_PER_PAGE;
                buf = page_address(bl->buf_pages[index]);
                buf += off;
        }
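
The dropped "- 1" fixes the page lookup for any buffer ring larger than one
page: entry head lives on page head / IO_BUFFER_LIST_BUF_PER_PAGE at offset
head % IO_BUFFER_LIST_BUF_PER_PAGE. Worked example with PER_PAGE entries per
page:

    /* head == PER_PAGE must map to page 1, offset 0:
     *   old: index = PER_PAGE / PER_PAGE - 1 = 0   (wrong page)
     *   new: index = PER_PAGE / PER_PAGE     = 1   (correct)
     */
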
@@ -3898,7 +3840,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
        req->buf_list = bl;
        req->buf_index = buf->bid;
 
-       if (issue_flags & IO_URING_F_UNLOCKED) {
+       if (issue_flags & IO_URING_F_UNLOCKED || !file_can_poll(req->file)) {
                /*
                 * If we came in unlocked, we have no choice but to consume the
                 * buffer here. This does mean it'll be pinned until the IO
@@ -4376,18 +4318,19 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
                if (unlikely(ret < 0))
                        return ret;
        } else {
+               rw = req->async_data;
+               s = &rw->s;
+
                /*
                 * Safe and required to re-import if we're using provided
                 * buffers, as we dropped the selected one before retry.
                 */
-               if (req->flags & REQ_F_BUFFER_SELECT) {
+               if (io_do_buffer_select(req)) {
                        ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
                        if (unlikely(ret < 0))
                                return ret;
                }
 
-               rw = req->async_data;
-               s = &rw->s;
                /*
                 * We come here from an earlier attempt, restore our state to
                 * match in case it doesn't. It's cheap enough that we don't
@@ -5079,10 +5022,18 @@ void io_uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
 
        req->uring_cmd.task_work_cb = task_work_cb;
        req->io_task_work.func = io_uring_cmd_work;
-       io_req_task_prio_work_add(req);
+       io_req_task_work_add(req);
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_complete_in_task);
 
+static inline void io_req_set_cqe32_extra(struct io_kiocb *req,
+                                         u64 extra1, u64 extra2)
+{
+       req->extra1 = extra1;
+       req->extra2 = extra2;
+       req->flags |= REQ_F_CQE32_INIT;
+}
+
 /*
  * Called by consumers of io_uring_cmd, if they originally returned
  * -EIOCBQUEUED upon receiving the command.
@@ -5093,10 +5044,10 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2)
 
        if (ret < 0)
                req_set_fail(req);
+
        if (req->ctx->flags & IORING_SETUP_CQE32)
-               __io_req_complete32(req, 0, ret, 0, res2, 0);
-       else
-               io_req_complete(req, ret);
+               io_req_set_cqe32_extra(req, res2, 0);
+       io_req_complete(req, ret);
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_done);
 
@@ -5258,14 +5209,6 @@ done:
 
 static int io_nop_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
-       /*
-        * If the ring is setup with CQE32, relay back addr/addr
-        */
-       if (req->ctx->flags & IORING_SETUP_CQE32) {
-               req->nop.extra1 = READ_ONCE(sqe->addr);
-               req->nop.extra2 = READ_ONCE(sqe->addr2);
-       }
-
        return 0;
 }
 
@@ -5274,23 +5217,7 @@ static int io_nop_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
  */
 static int io_nop(struct io_kiocb *req, unsigned int issue_flags)
 {
-       unsigned int cflags;
-       void __user *buf;
-
-       if (req->flags & REQ_F_BUFFER_SELECT) {
-               size_t len = 1;
-
-               buf = io_buffer_select(req, &len, issue_flags);
-               if (!buf)
-                       return -ENOBUFS;
-       }
-
-       cflags = io_put_kbuf(req, issue_flags);
-       if (!(req->ctx->flags & IORING_SETUP_CQE32))
-               __io_req_complete(req, issue_flags, 0, cflags);
-       else
-               __io_req_complete32(req, issue_flags, 0, cflags,
-                                   req->nop.extra1, req->nop.extra2);
+       __io_req_complete(req, issue_flags, 0, 0);
        return 0;
 }
 
@@ -5988,18 +5915,14 @@ static int io_statx(struct io_kiocb *req, unsigned int issue_flags)
 
 static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
-       if (sqe->off || sqe->addr || sqe->len || sqe->buf_index)
+       if (sqe->off || sqe->addr || sqe->len || sqe->rw_flags || sqe->buf_index)
                return -EINVAL;
        if (req->flags & REQ_F_FIXED_FILE)
                return -EBADF;
 
        req->close.fd = READ_ONCE(sqe->fd);
        req->close.file_slot = READ_ONCE(sqe->file_index);
-       req->close.flags = READ_ONCE(sqe->close_flags);
-       if (req->close.flags & ~IORING_CLOSE_FD_AND_FILE_SLOT)
-               return -EINVAL;
-       if (!(req->close.flags & IORING_CLOSE_FD_AND_FILE_SLOT) &&
-           req->close.file_slot && req->close.fd)
+       if (req->close.file_slot && req->close.fd)
                return -EINVAL;
 
        return 0;
@@ -6015,8 +5938,7 @@ static int io_close(struct io_kiocb *req, unsigned int issue_flags)
 
        if (req->close.file_slot) {
                ret = io_close_fixed(req, issue_flags);
-               if (ret || !(req->close.flags & IORING_CLOSE_FD_AND_FILE_SLOT))
-                       goto err;
+               goto err;
        }
 
        spin_lock(&files->file_lock);
@@ -6158,14 +6080,12 @@ static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
        struct io_sr_msg *sr = &req->sr_msg;
 
-       if (unlikely(sqe->file_index))
-               return -EINVAL;
-       if (unlikely(sqe->addr2 || sqe->file_index))
+       if (unlikely(sqe->file_index || sqe->addr2))
                return -EINVAL;
 
        sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
        sr->len = READ_ONCE(sqe->len);
-       sr->flags = READ_ONCE(sqe->addr2);
+       sr->flags = READ_ONCE(sqe->ioprio);
        if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
                return -EINVAL;
        sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
@@ -6396,14 +6316,12 @@ static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
        struct io_sr_msg *sr = &req->sr_msg;
 
-       if (unlikely(sqe->file_index))
-               return -EINVAL;
-       if (unlikely(sqe->addr2 || sqe->file_index))
+       if (unlikely(sqe->file_index || sqe->addr2))
                return -EINVAL;
 
        sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
        sr->len = READ_ONCE(sqe->len);
-       sr->flags = READ_ONCE(sqe->addr2);
+       sr->flags = READ_ONCE(sqe->ioprio);
        if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
                return -EINVAL;
        sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
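
For both sendmsg and recvmsg the send/recv flags now live in sqe->ioprio (which
these opcodes explicitly permit via the .ioprio = 1 entries above) rather than
sqe->addr2. A hedged userspace sketch filling a raw SQE for a poll-first,
buffer-selected receive; with liburing one would call io_uring_prep_recv() and
then set sqe->ioprio:

    sqe->opcode    = IORING_OP_RECV;
    sqe->fd        = sockfd;
    sqe->addr      = 0;                          /* kernel picks the buffer */
    sqe->len       = 0;
    sqe->ioprio    = IORING_RECVSEND_POLL_FIRST; /* was sqe->addr2 */
    sqe->flags     = IOSQE_BUFFER_SELECT;
    sqe->buf_group = bgid;                       /* provided-buffer group id */
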
@@ -7037,7 +6955,8 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
                io_req_complete_failed(req, ret);
 }
 
-static void __io_poll_execute(struct io_kiocb *req, int mask, __poll_t events)
+static void __io_poll_execute(struct io_kiocb *req, int mask,
+                             __poll_t __maybe_unused events)
 {
        req->cqe.res = mask;
        /*
@@ -7046,7 +6965,6 @@ static void __io_poll_execute(struct io_kiocb *req, int mask, __poll_t events)
         * CPU. We want to avoid pulling in req->apoll->events for that
         * case.
         */
-       req->apoll_events = events;
        if (req->opcode == IORING_OP_POLL_ADD)
                req->io_task_work.func = io_poll_task_func;
        else
@@ -7197,6 +7115,8 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
        io_init_poll_iocb(poll, mask, io_poll_wake);
        poll->file = req->file;
 
+       req->apoll_events = poll->events;
+
        ipt->pt._key = mask;
        ipt->req = req;
        ipt->error = 0;
@@ -7227,8 +7147,11 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 
        if (mask) {
                /* can't multishot if failed, just queue the event we've got */
-               if (unlikely(ipt->error || !ipt->nr_entries))
+               if (unlikely(ipt->error || !ipt->nr_entries)) {
                        poll->events |= EPOLLONESHOT;
+                       req->apoll_events |= EPOLLONESHOT;
+                       ipt->error = 0;
+               }
                __io_poll_execute(req, mask, poll->events);
                return 0;
        }
@@ -7290,6 +7213,7 @@ static int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
                mask |= EPOLLEXCLUSIVE;
        if (req->flags & REQ_F_POLLED) {
                apoll = req->apoll;
+               kfree(apoll->double_poll);
        } else if (!(issue_flags & IO_URING_F_UNLOCKED) &&
                   !list_empty(&ctx->apoll_cache)) {
                apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,
@@ -7475,7 +7399,7 @@ static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
                return -EINVAL;
 
        io_req_set_refcount(req);
-       req->apoll_events = poll->events = io_poll_parse_events(sqe, flags);
+       poll->events = io_poll_parse_events(sqe, flags);
        return 0;
 }
 
@@ -7488,6 +7412,8 @@ static int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
        ipt.pt._qproc = io_poll_queue_proc;
 
        ret = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events);
+       if (!ret && ipt.error)
+               req_set_fail(req);
        ret = ret ?: ipt.error;
        if (ret)
                __io_req_complete(req, issue_flags, ret, 0);
@@ -8063,8 +7989,8 @@ static int io_files_update_with_index_alloc(struct io_kiocb *req,
                if (ret < 0)
                        break;
                if (copy_to_user(&fds[done], &ret, sizeof(ret))) {
-                       ret = -EFAULT;
                        __io_close_fixed(req, issue_flags, ret);
+                       ret = -EFAULT;
                        break;
                }
        }
@@ -8773,6 +8699,7 @@ static void io_queue_async(struct io_kiocb *req, int ret)
                 * Queued up for async execution, worker will release
                 * submit reference when the iocb is actually submitted.
                 */
+               io_kbuf_recycle(req, 0);
                io_queue_iowq(req, NULL);
                break;
        case IO_APOLL_OK:
@@ -9788,11 +9715,19 @@ static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
 
 static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
 {
+       unsigned nr = ctx->nr_user_files;
        int ret;
 
        if (!ctx->file_data)
                return -ENXIO;
+
+       /*
+        * Quiesce may drop ->uring_lock; while it is not held, prevent
+        * new requests from using the table.
+        */
+       ctx->nr_user_files = 0;
        ret = io_rsrc_ref_quiesce(ctx->file_data, ctx);
+       ctx->nr_user_files = nr;
        if (!ret)
                __io_sqe_files_unregister(ctx);
        return ret;
@@ -10690,12 +10625,19 @@ static void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
 
 static int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
 {
+       unsigned nr = ctx->nr_user_bufs;
        int ret;
 
        if (!ctx->buf_data)
                return -ENXIO;
 
+       /*
+        * Quiesce may drop ->uring_lock; while it is not held, prevent
+        * new requests from using the table.
+        */
+       ctx->nr_user_bufs = 0;
        ret = io_rsrc_ref_quiesce(ctx->buf_data, ctx);
+       ctx->nr_user_bufs = nr;
        if (!ret)
                __io_sqe_buffers_unregister(ctx);
        return ret;
@@ -13002,6 +12944,10 @@ static int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
        if (!is_power_of_2(reg.ring_entries))
                return -EINVAL;
 
+       /* cannot disambiguate full vs empty due to head/tail size */
+       if (reg.ring_entries >= 65536)
+               return -EINVAL;
+
        if (unlikely(reg.bgid < BGID_ARRAY && !ctx->io_bl)) {
                int ret = io_init_bl_list(ctx);
                if (ret)
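
The new cap follows from narrowing head and mask to __u16 in
struct io_buffer_list above: with free-running 16-bit indices, a completely
full ring of 65536 entries has tail - head == 65536, which truncates to 0 and
is indistinguishable from an empty ring:

    __u16 head = 0;
    __u16 tail = (__u16)(0 + 65536);  /* wraps back to 0 */
    /* tail == head now reads as "empty" although 65536 buffers were
     * provided -- hence ring_entries must stay below 65536 */
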
index e49bb09..e9c308a 100644 (file)
@@ -2114,7 +2114,7 @@ out:
 /**
  * jbd2_journal_try_to_free_buffers() - try to free page buffers.
  * @journal: journal for operation
- * @page: to try and free
+ * @folio: Folio to detach data from.
  *
  * For all the buffers on this page,
  * if they are fully written out ordered data, move them onto BUF_CLEAN
index e6f4ccc..353f047 100644 (file)
@@ -6490,6 +6490,7 @@ int smb2_write(struct ksmbd_work *work)
                goto out;
        }
 
+       ksmbd_debug(SMB, "flags %u\n", le32_to_cpu(req->Flags));
        if (le32_to_cpu(req->Flags) & SMB2_WRITEFLAG_WRITE_THROUGH)
                writethrough = true;
 
@@ -6505,10 +6506,6 @@ int smb2_write(struct ksmbd_work *work)
                data_buf = (char *)(((char *)&req->hdr.ProtocolId) +
                                    le16_to_cpu(req->DataOffset));
 
-               ksmbd_debug(SMB, "flags %u\n", le32_to_cpu(req->Flags));
-               if (le32_to_cpu(req->Flags) & SMB2_WRITEFLAG_WRITE_THROUGH)
-                       writethrough = true;
-
                ksmbd_debug(SMB, "filename %pd, offset %lld, len %zu\n",
                            fp->filp->f_path.dentry, offset, length);
                err = ksmbd_vfs_write(work, fp, data_buf, length, &offset,
@@ -7703,7 +7700,7 @@ int smb2_ioctl(struct ksmbd_work *work)
        {
                struct file_zero_data_information *zero_data;
                struct ksmbd_file *fp;
-               loff_t off, len;
+               loff_t off, len, bfz;
 
                if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
                        ksmbd_debug(SMB,
@@ -7720,19 +7717,26 @@ int smb2_ioctl(struct ksmbd_work *work)
                zero_data =
                        (struct file_zero_data_information *)&req->Buffer[0];
 
-               fp = ksmbd_lookup_fd_fast(work, id);
-               if (!fp) {
-                       ret = -ENOENT;
+               off = le64_to_cpu(zero_data->FileOffset);
+               bfz = le64_to_cpu(zero_data->BeyondFinalZero);
+               if (off > bfz) {
+                       ret = -EINVAL;
                        goto out;
                }
 
-               off = le64_to_cpu(zero_data->FileOffset);
-               len = le64_to_cpu(zero_data->BeyondFinalZero) - off;
+               len = bfz - off;
+               if (len) {
+                       fp = ksmbd_lookup_fd_fast(work, id);
+                       if (!fp) {
+                               ret = -ENOENT;
+                               goto out;
+                       }
 
-               ret = ksmbd_vfs_zero_data(work, fp, off, len);
-               ksmbd_fd_put(work, fp);
-               if (ret < 0)
-                       goto out;
+                       ret = ksmbd_vfs_zero_data(work, fp, off, len);
+                       ksmbd_fd_put(work, fp);
+                       if (ret < 0)
+                               goto out;
+               }
                break;
        }
        case FSCTL_QUERY_ALLOCATED_RANGES:
@@ -7806,14 +7810,24 @@ int smb2_ioctl(struct ksmbd_work *work)
                src_off = le64_to_cpu(dup_ext->SourceFileOffset);
                dst_off = le64_to_cpu(dup_ext->TargetFileOffset);
                length = le64_to_cpu(dup_ext->ByteCount);
-               cloned = vfs_clone_file_range(fp_in->filp, src_off, fp_out->filp,
-                                             dst_off, length, 0);
+               /*
+                * XXX: It is not clear if FSCTL_DUPLICATE_EXTENTS_TO_FILE
+                * should fall back to vfs_copy_file_range().  This could be
+                * beneficial when re-exporting an nfs/smb mount, but note that
+                * this can result in a partial copy that returns an error
+                * status. If/when FSCTL_DUPLICATE_EXTENTS_TO_FILE_EX is
+                * implemented, the fall back to vfs_copy_file_range() should
+                * be avoided when the flag
+                * DUPLICATE_EXTENTS_DATA_EX_SOURCE_ATOMIC is set.
+                */
+               cloned = vfs_clone_file_range(fp_in->filp, src_off,
+                                             fp_out->filp, dst_off, length, 0);
                if (cloned == -EXDEV || cloned == -EOPNOTSUPP) {
                        ret = -EOPNOTSUPP;
                        goto dup_ext_out;
                } else if (cloned != length) {
                        cloned = vfs_copy_file_range(fp_in->filp, src_off,
-                                                    fp_out->filp, dst_off, length, 0);
+                                                    fp_out->filp, dst_off,
+                                                    length, 0);
                        if (cloned != length) {
                                if (cloned < 0)
                                        ret = cloned;
index d035e06..35b55ee 100644 (file)
@@ -5,16 +5,6 @@
  *
  *   Author(s): Long Li <longli@microsoft.com>,
  *             Hyunchul Lee <hyc.lee@gmail.com>
- *
- *   This program is free software;  you can redistribute it and/or modify
- *   it under the terms of the GNU General Public License as published by
- *   the Free Software Foundation; either version 2 of the License, or
- *   (at your option) any later version.
- *
- *   This program is distributed in the hope that it will be useful,
- *   but WITHOUT ANY WARRANTY;  without even the implied warranty of
- *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See
- *   the GNU General Public License for more details.
  */
 
 #define SUBMOD_NAME    "smb_direct"
index 8fef9de..143bba4 100644 (file)
@@ -230,7 +230,7 @@ static int ksmbd_kthread_fn(void *p)
                        break;
                }
                ret = kernel_accept(iface->ksmbd_socket, &client_sk,
-                                   O_NONBLOCK);
+                                   SOCK_NONBLOCK);
                mutex_unlock(&iface->sock_release_lock);
                if (ret) {
                        if (ret == -EAGAIN)
index dcdd07c..05efcdf 100644 (file)
@@ -1015,7 +1015,9 @@ int ksmbd_vfs_zero_data(struct ksmbd_work *work, struct ksmbd_file *fp,
                                     FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                                     off, len);
 
-       return vfs_fallocate(fp->filp, FALLOC_FL_ZERO_RANGE, off, len);
+       return vfs_fallocate(fp->filp,
+                            FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE,
+                            off, len);
 }
 
 int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length,
@@ -1046,7 +1048,7 @@ int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length,
        *out_count = 0;
        end = start + length;
        while (start < end && *out_count < in_count) {
-               extent_start = f->f_op->llseek(f, start, SEEK_DATA);
+               extent_start = vfs_llseek(f, start, SEEK_DATA);
                if (extent_start < 0) {
                        if (extent_start != -ENXIO)
                                ret = (int)extent_start;
@@ -1056,7 +1058,7 @@ int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length,
                if (extent_start >= end)
                        break;
 
-               extent_end = f->f_op->llseek(f, extent_start, SEEK_HOLE);
+               extent_end = vfs_llseek(f, extent_start, SEEK_HOLE);
                if (extent_end < 0) {
                        if (extent_end != -ENXIO)
                                ret = (int)extent_end;
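
vfs_llseek() validates FMODE_LSEEK and a missing ->llseek instead of calling
the file_operations hook directly. The surrounding loop is the usual
SEEK_DATA/SEEK_HOLE extent walk, which from userspace looks like:

    /* Enumerate the allocated extents of [start, end) via lseek(2). */
    off_t data = lseek(fd, start, SEEK_DATA);
    while (data != (off_t)-1 && data < end) {
            off_t hole = lseek(fd, data, SEEK_HOLE);
            /* [data, hole) is an allocated extent */
            data = lseek(fd, hole, SEEK_DATA); /* ENXIO once past EOF */
    }
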
@@ -1777,6 +1779,10 @@ int ksmbd_vfs_copy_file_ranges(struct ksmbd_work *work,
 
                ret = vfs_copy_file_range(src_fp->filp, src_off,
                                          dst_fp->filp, dst_off, len, 0);
+               if (ret == -EOPNOTSUPP || ret == -EXDEV)
+                       ret = generic_copy_file_range(src_fp->filp, src_off,
+                                                     dst_fp->filp, dst_off,
+                                                     len, 0);
                if (ret < 0)
                        return ret;
 
index c852028..c1eda73 100644 (file)
@@ -288,6 +288,7 @@ static u32 initiate_file_draining(struct nfs_client *clp,
                rv = NFS4_OK;
                break;
        case -ENOENT:
+               set_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags);
                /* Embrace your forgetfulness! */
                rv = NFS4ERR_NOMATCHING_LAYOUT;
 
index a8ecdd5..0c4e8dd 100644 (file)
@@ -2124,6 +2124,7 @@ int nfs_atomic_open(struct inode *dir, struct dentry *dentry,
                }
                goto out;
        }
+       file->f_mode |= FMODE_CAN_ODIRECT;
 
        err = nfs_finish_open(ctx, ctx->dentry, file, open_flags);
        trace_nfs_atomic_open_exit(dir, ctx, open_flags, err);
index 03d3a27..e88f6b1 100644 (file)
@@ -93,6 +93,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
        nfs_file_set_open_context(filp, ctx);
        nfs_fscache_open_file(inode, filp);
        err = 0;
+       filp->f_mode |= FMODE_CAN_ODIRECT;
 
 out_put_ctx:
        put_nfs_open_context(ctx);
index c0fdcf8..bb0e84a 100644 (file)
@@ -4012,22 +4012,29 @@ static int _nfs4_discover_trunking(struct nfs_server *server,
        }
 
        page = alloc_page(GFP_KERNEL);
+       if (!page)
+               return -ENOMEM;
        locations = kmalloc(sizeof(struct nfs4_fs_locations), GFP_KERNEL);
-       if (page == NULL || locations == NULL)
-               goto out;
+       if (!locations)
+               goto out_free;
+       locations->fattr = nfs_alloc_fattr();
+       if (!locations->fattr)
+               goto out_free_2;
 
        status = nfs4_proc_get_locations(server, fhandle, locations, page,
                                         cred);
        if (status)
-               goto out;
+               goto out_free_3;
 
        for (i = 0; i < locations->nlocations; i++)
                test_fs_location_for_trunking(&locations->locations[i], clp,
                                              server);
-out:
-       if (page)
-               __free_page(page);
+out_free_3:
+       kfree(locations->fattr);
+out_free_2:
        kfree(locations);
+out_free:
+       __free_page(page);
        return status;
 }
 
index 2540b35..9bab3e9 100644 (file)
@@ -2753,5 +2753,6 @@ again:
                goto again;
 
        nfs_put_client(clp);
+       module_put_and_kthread_exit(0);
        return 0;
 }
index 68a87be..41a9b6b 100644 (file)
@@ -469,6 +469,7 @@ pnfs_mark_layout_stateid_invalid(struct pnfs_layout_hdr *lo,
                pnfs_clear_lseg_state(lseg, lseg_list);
        pnfs_clear_layoutreturn_info(lo);
        pnfs_free_returned_lsegs(lo, lseg_list, &range, 0);
+       set_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags);
        if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags) &&
            !test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags))
                pnfs_clear_layoutreturn_waitbit(lo);
@@ -1917,8 +1918,9 @@ static void nfs_layoutget_begin(struct pnfs_layout_hdr *lo)
 
 static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
 {
-       if (atomic_dec_and_test(&lo->plh_outstanding))
-               wake_up_var(&lo->plh_outstanding);
+       if (atomic_dec_and_test(&lo->plh_outstanding) &&
+           test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags))
+               wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN);
 }
 
 static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
@@ -2025,11 +2027,11 @@ lookup_again:
         * If the layout segment list is empty, but there are outstanding
         * layoutget calls, then they might be subject to a layoutrecall.
         */
-       if ((list_empty(&lo->plh_segs) || !pnfs_layout_is_valid(lo)) &&
+       if (test_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags) &&
            atomic_read(&lo->plh_outstanding) != 0) {
                spin_unlock(&ino->i_lock);
-               lseg = ERR_PTR(wait_var_event_killable(&lo->plh_outstanding,
-                                       !atomic_read(&lo->plh_outstanding)));
+               lseg = ERR_PTR(wait_on_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN,
+                                          TASK_KILLABLE));
                if (IS_ERR(lseg))
                        goto out_put_layout_hdr;
                pnfs_put_layout_hdr(lo);
@@ -2152,6 +2154,12 @@ lookup_again:
                case -ERECALLCONFLICT:
                case -EAGAIN:
                        break;
+               case -ENODATA:
+                       /* The server returned NFS4ERR_LAYOUTUNAVAILABLE */
+                       pnfs_layout_set_fail_bit(
+                               lo, pnfs_iomode_to_fail_bit(iomode));
+                       lseg = NULL;
+                       goto out_put_layout_hdr;
                default:
                        if (!nfs_error_is_fatal(PTR_ERR(lseg))) {
                                pnfs_layout_clear_fail_bit(lo, pnfs_iomode_to_fail_bit(iomode));
@@ -2407,7 +2415,8 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
                goto out_forget;
        }
 
-       if (!pnfs_layout_is_valid(lo) && !pnfs_is_first_layoutget(lo))
+       if (test_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags) &&
+           !pnfs_is_first_layoutget(lo))
                goto out_forget;
 
        if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
index 07f1148..f331f06 100644 (file)
@@ -105,6 +105,7 @@ enum {
        NFS_LAYOUT_FIRST_LAYOUTGET,     /* Serialize first layoutget */
        NFS_LAYOUT_INODE_FREEING,       /* The inode is being freed */
        NFS_LAYOUT_HASHED,              /* The layout visible */
+       NFS_LAYOUT_DRAIN,               /* Drain outstanding layoutgets */
 };
 
 enum layoutdriver_policy_flags {
index 840e3af..d79db56 100644 (file)
@@ -577,6 +577,7 @@ out_err:
 ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst,
                             u64 dst_pos, u64 count)
 {
+       ssize_t ret;
 
        /*
         * Limit copy to 4MB to prevent indefinitely blocking an nfsd
@@ -587,7 +588,12 @@ ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst,
         * limit like this and pipeline multiple COPY requests.
         */
        count = min_t(u64, count, 1 << 22);
-       return vfs_copy_file_range(src, src_pos, dst, dst_pos, count, 0);
+       ret = vfs_copy_file_range(src, src_pos, dst, dst_pos, count, 0);
+
+       if (ret == -EOPNOTSUPP || ret == -EXDEV)
+               ret = generic_copy_file_range(src, src_pos, dst, dst_pos,
+                                             count, 0);
+       return ret;
 }
 
 __be32 nfsd4_vfs_fallocate(struct svc_rqst *rqstp, struct svc_fh *fhp,
@@ -1173,6 +1179,7 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp, u64 offset,
                        nfsd_copy_write_verifier(verf, nn);
                        err2 = filemap_check_wb_err(nf->nf_file->f_mapping,
                                                    since);
+                       err = nfserrno(err2);
                        break;
                case -EINVAL:
                        err = nfserr_notsupp;
@@ -1180,8 +1187,8 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp, u64 offset,
                default:
                        nfsd_reset_write_verifier(nn);
                        trace_nfsd_writeverf_reset(nn, rqstp, err2);
+                       err = nfserrno(err2);
                }
-               err = nfserrno(err2);
        } else
                nfsd_copy_write_verifier(verf, nn);
 
index c2255b4..b08ce0d 100644 (file)
@@ -1513,8 +1513,15 @@ static int fanotify_test_fid(struct dentry *dentry)
        return 0;
 }
 
-static int fanotify_events_supported(struct path *path, __u64 mask)
+static int fanotify_events_supported(struct fsnotify_group *group,
+                                    struct path *path, __u64 mask,
+                                    unsigned int flags)
 {
+       unsigned int mark_type = flags & FANOTIFY_MARK_TYPE_BITS;
+       /* Strict validation of events in non-dir inode mask with v5.17+ APIs */
+       bool strict_dir_events = FAN_GROUP_FLAG(group, FAN_REPORT_TARGET_FID) ||
+                                (mask & FAN_RENAME);
+
        /*
         * Some filesystems such as 'proc' acquire unusual locks when opening
         * files. For them fanotify permission events have high chances of
@@ -1526,6 +1533,16 @@ static int fanotify_events_supported(struct path *path, __u64 mask)
        if (mask & FANOTIFY_PERM_EVENTS &&
            path->mnt->mnt_sb->s_type->fs_flags & FS_DISALLOW_NOTIFY_PERM)
                return -EINVAL;
+
+       /*
+        * We shouldn't have allowed setting dirent events and the directory
+        * flags FAN_ONDIR and FAN_EVENT_ON_CHILD in the mask of a non-dir
+        * inode, but because we always allowed it, error only when using
+        * the new APIs.
+        */
+       if (strict_dir_events && mark_type == FAN_MARK_INODE &&
+           !d_is_dir(path->dentry) && (mask & FANOTIFY_DIRONLY_EVENT_BITS))
+               return -ENOTDIR;
+
        return 0;
 }
 
@@ -1672,7 +1689,7 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
                goto fput_and_out;
 
        if (flags & FAN_MARK_ADD) {
-               ret = fanotify_events_supported(&path, mask);
+               ret = fanotify_events_supported(group, &path, mask, flags);
                if (ret)
                        goto path_put_and_out;
        }
@@ -1695,19 +1712,6 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
        else
                mnt = path.mnt;
 
-       /*
-        * FAN_RENAME is not allowed on non-dir (for now).
-        * We shouldn't have allowed setting any dirent events in mask of
-        * non-dir, but because we always allowed it, error only if group
-        * was initialized with the new flag FAN_REPORT_TARGET_FID.
-        */
-       ret = -ENOTDIR;
-       if (inode && !S_ISDIR(inode->i_mode) &&
-           ((mask & FAN_RENAME) ||
-            ((mask & FANOTIFY_DIRENT_EVENTS) &&
-             FAN_GROUP_FLAG(group, FAN_REPORT_TARGET_FID))))
-               goto path_put_and_out;
-
        /* Mask out FAN_EVENT_ON_CHILD flag for sb/mount/non-dir marks */
        if (mnt || !S_ISDIR(inode->i_mode)) {
                mask &= ~FAN_EVENT_ON_CHILD;
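
A hedged userspace illustration of the stricter check (the path /tmp/file is assumed to be a regular file): a group initialized with the v5.17+ FAN_REPORT_TARGET_FID mode now gets ENOTDIR when it puts a dirent event in the mask of an inode mark on a non-directory.

	#include <sys/fanotify.h>
	#include <fcntl.h>
	#include <stdio.h>

	int main(void)
	{
		int fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_FID |
				       FAN_REPORT_DFID_NAME | FAN_REPORT_TARGET_FID, 0);
		if (fd < 0) {
			perror("fanotify_init");
			return 1;
		}
		/* FAN_CREATE on a non-dir inode mark is now rejected. */
		if (fanotify_mark(fd, FAN_MARK_ADD, FAN_CREATE, AT_FDCWD,
				  "/tmp/file") < 0)
			perror("fanotify_mark");	/* expect: Not a directory */
		return 0;
	}
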
index b1b1cdf..e0777ee 100644 (file)
@@ -1397,28 +1397,6 @@ ssize_t generic_copy_file_range(struct file *file_in, loff_t pos_in,
 }
 EXPORT_SYMBOL(generic_copy_file_range);
 
-static ssize_t do_copy_file_range(struct file *file_in, loff_t pos_in,
-                                 struct file *file_out, loff_t pos_out,
-                                 size_t len, unsigned int flags)
-{
-       /*
-        * Although we now allow filesystems to handle cross sb copy, passing
-        * a file of the wrong filesystem type to filesystem driver can result
-        * in an attempt to dereference the wrong type of ->private_data, so
-        * avoid doing that until we really have a good reason.  NFS defines
-        * several different file_system_type structures, but they all end up
-        * using the same ->copy_file_range() function pointer.
-        */
-       if (file_out->f_op->copy_file_range &&
-           file_out->f_op->copy_file_range == file_in->f_op->copy_file_range)
-               return file_out->f_op->copy_file_range(file_in, pos_in,
-                                                      file_out, pos_out,
-                                                      len, flags);
-
-       return generic_copy_file_range(file_in, pos_in, file_out, pos_out, len,
-                                      flags);
-}
-
 /*
  * Performs necessary checks before doing a file copy
  *
@@ -1440,6 +1418,24 @@ static int generic_copy_file_checks(struct file *file_in, loff_t pos_in,
        if (ret)
                return ret;
 
+       /*
+        * We allow some filesystems to handle cross sb copy, but passing
+        * a file of the wrong filesystem type to filesystem driver can result
+        * in an attempt to dereference the wrong type of ->private_data, so
+        * avoid doing that until we really have a good reason.
+        *
+        * nfs and cifs define several different file_system_type structures
+        * and several different sets of file_operations, but they all end up
+        * using the same ->copy_file_range() function pointer.
+        */
+       if (file_out->f_op->copy_file_range) {
+               if (file_in->f_op->copy_file_range !=
+                   file_out->f_op->copy_file_range)
+                       return -EXDEV;
+       } else if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb) {
+               return -EXDEV;
+       }
+
        /* Don't touch certain kinds of inodes */
        if (IS_IMMUTABLE(inode_out))
                return -EPERM;
@@ -1505,26 +1501,41 @@ ssize_t vfs_copy_file_range(struct file *file_in, loff_t pos_in,
        file_start_write(file_out);
 
        /*
-        * Try cloning first, this is supported by more file systems, and
-        * more efficient if both clone and copy are supported (e.g. NFS).
+        * Cloning is supported by more file systems, so we implement copy on
+        * same sb using clone, but for filesystems where both clone and copy
+        * are supported (e.g. nfs, cifs), we only call the copy method.
         */
+       if (file_out->f_op->copy_file_range) {
+               ret = file_out->f_op->copy_file_range(file_in, pos_in,
+                                                     file_out, pos_out,
+                                                     len, flags);
+               goto done;
+       }
+
        if (file_in->f_op->remap_file_range &&
            file_inode(file_in)->i_sb == file_inode(file_out)->i_sb) {
-               loff_t cloned;
-
-               cloned = file_in->f_op->remap_file_range(file_in, pos_in,
+               ret = file_in->f_op->remap_file_range(file_in, pos_in,
                                file_out, pos_out,
                                min_t(loff_t, MAX_RW_COUNT, len),
                                REMAP_FILE_CAN_SHORTEN);
-               if (cloned > 0) {
-                       ret = cloned;
+               if (ret > 0)
                        goto done;
-               }
        }
 
-       ret = do_copy_file_range(file_in, pos_in, file_out, pos_out, len,
-                               flags);
-       WARN_ON_ONCE(ret == -EOPNOTSUPP);
+       /*
+        * We can get here for a same sb copy of filesystems that do not
+        * implement ->copy_file_range(), either because the filesystem does
+        * not support clone or because it supports clone but rejected this
+        * particular clone request (e.g. because it was not block aligned).
+        *
+        * In both cases, fall back to kernel copy so we are able to maintain
+        * a consistent story about which filesystems support copy_file_range()
+        * and which do not, which will allow userspace tools to make
+        * consistent decisions w.r.t. using copy_file_range().
+        */
+       ret = generic_copy_file_range(file_in, pos_in, file_out, pos_out, len,
+                                     flags);
+
 done:
        if (ret > 0) {
                fsnotify_access(file_in);
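
For userspace the net effect is that copy_file_range() now fails with EXDEV for file pairs the kernel will not copy, rather than behaving differently per filesystem. A hedged sketch of the caller-side pattern this enables (single chunk for brevity; a real copier would loop):

	#define _GNU_SOURCE
	#include <unistd.h>
	#include <errno.h>

	static ssize_t copy_chunk(int in_fd, int out_fd, size_t len)
	{
		char buf[65536];
		ssize_t n = copy_file_range(in_fd, NULL, out_fd, NULL, len, 0);

		if (n >= 0 || (errno != EXDEV && errno != EOPNOTSUPP))
			return n;
		/* The kernel refused this file pair: fall back to read/write. */
		n = read(in_fd, buf, len < sizeof(buf) ? len : sizeof(buf));
		return n <= 0 ? n : write(out_fd, buf, n);
	}
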
index de72527..81d26ab 100644 (file)
@@ -553,7 +553,7 @@ struct dentry *tracefs_create_dir(const char *name, struct dentry *parent)
  *
  * Only one instances directory is allowed.
  *
- * The instances directory is special as it allows for mkdir and rmdir to
+ * The instances directory is special as it allows for mkdir and rmdir
  * to be done by userspace. When a mkdir or rmdir is performed, the inode
  * locks are released and the methods passed in (@mkdir and @rmdir) are
  * called without locks and with the name of the directory being created
index 836ab1b..224649a 100644 (file)
@@ -50,7 +50,7 @@ STATIC int xfs_attr_shortform_addname(xfs_da_args_t *args);
 STATIC int xfs_attr_leaf_get(xfs_da_args_t *args);
 STATIC int xfs_attr_leaf_removename(xfs_da_args_t *args);
 STATIC int xfs_attr_leaf_hasname(struct xfs_da_args *args, struct xfs_buf **bp);
-STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args, struct xfs_buf *bp);
+STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args);
 
 /*
  * Internal routines when attribute list is more than one block.
@@ -393,16 +393,10 @@ xfs_attr_sf_addname(
         * It won't fit in the shortform, transform to a leaf block.  GROT:
         * another possible req'mt for a double-split btree op.
         */
-       error = xfs_attr_shortform_to_leaf(args, &attr->xattri_leaf_bp);
+       error = xfs_attr_shortform_to_leaf(args);
        if (error)
                return error;
 
-       /*
-        * Prevent the leaf buffer from being unlocked so that a concurrent AIL
-        * push cannot grab the half-baked leaf buffer and run into problems
-        * with the write verifier.
-        */
-       xfs_trans_bhold(args->trans, attr->xattri_leaf_bp);
        attr->xattri_dela_state = XFS_DAS_LEAF_ADD;
 out:
        trace_xfs_attr_sf_addname_return(attr->xattri_dela_state, args->dp);
@@ -447,11 +441,9 @@ xfs_attr_leaf_addname(
 
        /*
         * Use the leaf buffer we may already hold locked as a result of
-        * a sf-to-leaf conversion. The held buffer is no longer valid
-        * after this call, regardless of the result.
+        * a sf-to-leaf conversion.
         */
-       error = xfs_attr_leaf_try_add(args, attr->xattri_leaf_bp);
-       attr->xattri_leaf_bp = NULL;
+       error = xfs_attr_leaf_try_add(args);
 
        if (error == -ENOSPC) {
                error = xfs_attr3_leaf_to_node(args);
@@ -497,8 +489,6 @@ xfs_attr_node_addname(
        struct xfs_da_args      *args = attr->xattri_da_args;
        int                     error;
 
-       ASSERT(!attr->xattri_leaf_bp);
-
        error = xfs_attr_node_addname_find_attr(attr);
        if (error)
                return error;
@@ -997,9 +987,11 @@ xfs_attr_set(
        /*
         * We have no control over the attribute names that userspace passes us
         * to remove, so we have to allow the name lookup prior to attribute
-        * removal to fail as well.
+        * removal to fail as well.  Preserve the logged flag, since we need
+        * to pass that through to the logging code.
         */
-       args->op_flags = XFS_DA_OP_OKNOENT;
+       args->op_flags = XFS_DA_OP_OKNOENT |
+                                       (args->op_flags & XFS_DA_OP_LOGGED);
 
        if (args->value) {
                XFS_STATS_INC(mp, xs_attr_set);
@@ -1213,24 +1205,14 @@ xfs_attr_restore_rmt_blk(
  */
 STATIC int
 xfs_attr_leaf_try_add(
-       struct xfs_da_args      *args,
-       struct xfs_buf          *bp)
+       struct xfs_da_args      *args)
 {
+       struct xfs_buf          *bp;
        int                     error;
 
-       /*
-        * If the caller provided a buffer to us, it is locked and held in
-        * the transaction because it just did a shortform to leaf conversion.
-        * Hence we don't need to read it again. Otherwise read in the leaf
-        * buffer.
-        */
-       if (bp) {
-               xfs_trans_bhold_release(args->trans, bp);
-       } else {
-               error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp);
-               if (error)
-                       return error;
-       }
+       error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp);
+       if (error)
+               return error;
 
        /*
         * Look up the xattr name to set the insertion point for the new xattr.
@@ -1439,12 +1421,11 @@ static int
 xfs_attr_node_try_addname(
        struct xfs_attr_intent          *attr)
 {
-       struct xfs_da_args              *args = attr->xattri_da_args;
        struct xfs_da_state             *state = attr->xattri_da_state;
        struct xfs_da_state_blk         *blk;
        int                             error;
 
-       trace_xfs_attr_node_addname(args);
+       trace_xfs_attr_node_addname(state->args);
 
        blk = &state->path.blk[state->path.active-1];
        ASSERT(blk->magic == XFS_ATTR_LEAF_MAGIC);
index e329da3..dfb47fa 100644 (file)
@@ -28,16 +28,6 @@ struct xfs_attr_list_context;
  */
 #define        ATTR_MAX_VALUELEN       (64*1024)       /* max length of a value */
 
-static inline bool xfs_has_larp(struct xfs_mount *mp)
-{
-#ifdef DEBUG
-       /* Logged xattrs require a V5 super for log_incompat */
-       return xfs_has_crc(mp) && xfs_globals.larp;
-#else
-       return false;
-#endif
-}
-
 /*
  * Kernel-internal version of the attrlist cursor.
  */
@@ -525,11 +515,6 @@ struct xfs_attr_intent {
         */
        struct xfs_attri_log_nameval    *xattri_nameval;
 
-       /*
-        * Used by xfs_attr_set to hold a leaf buffer across a transaction roll
-        */
-       struct xfs_buf                  *xattri_leaf_bp;
-
        /* Used to keep track of current state of delayed operation */
        enum xfs_delattr_state          xattri_dela_state;
 
@@ -624,7 +609,7 @@ static inline enum xfs_delattr_state
 xfs_attr_init_replace_state(struct xfs_da_args *args)
 {
        args->op_flags |= XFS_DA_OP_ADDNAME | XFS_DA_OP_REPLACE;
-       if (xfs_has_larp(args->dp->i_mount))
+       if (args->op_flags & XFS_DA_OP_LOGGED)
                return xfs_attr_init_remove_state(args);
        return xfs_attr_init_add_state(args);
 }
index 15a9904..8f47396 100644 (file)
@@ -289,6 +289,23 @@ xfs_attr3_leaf_verify_entry(
        return NULL;
 }
 
+/*
+ * Validate an attribute leaf block.
+ *
+ * Empty leaf blocks can occur under the following circumstances:
+ *
+ * 1. setxattr adds a new extended attribute to a file;
+ * 2. The file has zero existing attributes;
+ * 3. The attribute is too large to fit in the attribute fork;
+ * 4. The attribute is small enough to fit in a leaf block;
+ * 5. A log flush occurs after committing the transaction that creates
+ *    the (empty) leaf block; and
+ * 6. The filesystem goes down after the log flush but before the new
+ *    attribute can be committed to the leaf block.
+ *
+ * Hence we need to ensure that we don't fail the validation purely
+ * because the leaf is empty.
+ */
 static xfs_failaddr_t
 xfs_attr3_leaf_verify(
        struct xfs_buf                  *bp)
@@ -310,15 +327,6 @@ xfs_attr3_leaf_verify(
        if (fa)
                return fa;
 
-       /*
-        * Empty leaf blocks should never occur;  they imply the existence of a
-        * software bug that needs fixing. xfs_repair also flags them as a
-        * corruption that needs fixing, so we should never let these go to
-        * disk.
-        */
-       if (ichdr.count == 0)
-               return __this_address;
-
        /*
         * firstused is the block offset of the first name info structure.
         * Make sure it doesn't go off the block or crash into the header.
@@ -922,14 +930,10 @@ xfs_attr_shortform_getvalue(
        return -ENOATTR;
 }
 
-/*
- * Convert from using the shortform to the leaf.  On success, return the
- * buffer so that we can keep it locked until we're totally done with it.
- */
+/* Convert from using the shortform to the leaf format. */
 int
 xfs_attr_shortform_to_leaf(
-       struct xfs_da_args              *args,
-       struct xfs_buf                  **leaf_bp)
+       struct xfs_da_args              *args)
 {
        struct xfs_inode                *dp;
        struct xfs_attr_shortform       *sf;
@@ -991,7 +995,6 @@ xfs_attr_shortform_to_leaf(
                sfe = xfs_attr_sf_nextentry(sfe);
        }
        error = 0;
-       *leaf_bp = bp;
 out:
        kmem_free(tmpbuffer);
        return error;
@@ -1530,7 +1533,7 @@ xfs_attr3_leaf_add_work(
        if (tmp)
                entry->flags |= XFS_ATTR_LOCAL;
        if (args->op_flags & XFS_DA_OP_REPLACE) {
-               if (!xfs_has_larp(mp))
+               if (!(args->op_flags & XFS_DA_OP_LOGGED))
                        entry->flags |= XFS_ATTR_INCOMPLETE;
                if ((args->blkno2 == args->blkno) &&
                    (args->index2 <= args->index)) {
index efa757f..368f4d9 100644 (file)
@@ -49,8 +49,7 @@ void  xfs_attr_shortform_create(struct xfs_da_args *args);
 void   xfs_attr_shortform_add(struct xfs_da_args *args, int forkoff);
 int    xfs_attr_shortform_lookup(struct xfs_da_args *args);
 int    xfs_attr_shortform_getvalue(struct xfs_da_args *args);
-int    xfs_attr_shortform_to_leaf(struct xfs_da_args *args,
-                       struct xfs_buf **leaf_bp);
+int    xfs_attr_shortform_to_leaf(struct xfs_da_args *args);
 int    xfs_attr_sf_removename(struct xfs_da_args *args);
 int    xfs_attr_sf_findname(struct xfs_da_args *args,
                             struct xfs_attr_sf_entry **sfep,
index d33b768..ffa3df5 100644 (file)
@@ -92,6 +92,7 @@ typedef struct xfs_da_args {
 #define XFS_DA_OP_NOTIME       (1u << 5) /* don't update inode timestamps */
 #define XFS_DA_OP_REMOVE       (1u << 6) /* this is a remove operation */
 #define XFS_DA_OP_RECOVERY     (1u << 7) /* Log recovery operation */
+#define XFS_DA_OP_LOGGED       (1u << 8) /* Use intent items to track op */
 
 #define XFS_DA_OP_FLAGS \
        { XFS_DA_OP_JUSTCHECK,  "JUSTCHECK" }, \
@@ -101,7 +102,8 @@ typedef struct xfs_da_args {
        { XFS_DA_OP_CILOOKUP,   "CILOOKUP" }, \
        { XFS_DA_OP_NOTIME,     "NOTIME" }, \
        { XFS_DA_OP_REMOVE,     "REMOVE" }, \
-       { XFS_DA_OP_RECOVERY,   "RECOVERY" }
+       { XFS_DA_OP_RECOVERY,   "RECOVERY" }, \
+       { XFS_DA_OP_LOGGED,     "LOGGED" }
 
 /*
  * Storage for holding state during Btree searches and split/join ops.
index 4a28c2d..5077a7a 100644 (file)
@@ -413,18 +413,20 @@ xfs_attr_create_intent(
        struct xfs_mount                *mp = tp->t_mountp;
        struct xfs_attri_log_item       *attrip;
        struct xfs_attr_intent          *attr;
+       struct xfs_da_args              *args;
 
        ASSERT(count == 1);
 
-       if (!xfs_sb_version_haslogxattrs(&mp->m_sb))
-               return NULL;
-
        /*
         * Each attr item only performs one attribute operation at a time, so
         * this is a list of one
         */
        attr = list_first_entry_or_null(items, struct xfs_attr_intent,
                        xattri_list);
+       args = attr->xattri_da_args;
+
+       if (!(args->op_flags & XFS_DA_OP_LOGGED))
+               return NULL;
 
        /*
         * Create a buffer to store the attribute name and value.  This buffer
@@ -432,8 +434,6 @@ xfs_attr_create_intent(
         * and the lower level xattr log items.
         */
        if (!attr->xattri_nameval) {
-               struct xfs_da_args      *args = attr->xattri_da_args;
-
                /*
                 * Transfer our reference to the name/value buffer to the
                 * deferred work state structure.
@@ -576,7 +576,7 @@ xfs_attri_item_recover(
        struct xfs_trans_res            tres;
        struct xfs_attri_log_format     *attrp;
        struct xfs_attri_log_nameval    *nv = attrip->attri_nameval;
-       int                             error, ret = 0;
+       int                             error;
        int                             total;
        int                             local;
        struct xfs_attrd_log_item       *done_item = NULL;
@@ -617,7 +617,10 @@ xfs_attri_item_recover(
        args->namelen = nv->name.i_len;
        args->hashval = xfs_da_hashname(args->name, args->namelen);
        args->attr_filter = attrp->alfi_attr_filter & XFS_ATTRI_FILTER_MASK;
-       args->op_flags = XFS_DA_OP_RECOVERY | XFS_DA_OP_OKNOENT;
+       args->op_flags = XFS_DA_OP_RECOVERY | XFS_DA_OP_OKNOENT |
+                        XFS_DA_OP_LOGGED;
+
+       ASSERT(xfs_sb_version_haslogxattrs(&mp->m_sb));
 
        switch (attr->xattri_op_flags) {
        case XFS_ATTRI_OP_FLAGS_SET:
@@ -652,29 +655,32 @@ xfs_attri_item_recover(
        xfs_ilock(ip, XFS_ILOCK_EXCL);
        xfs_trans_ijoin(tp, ip, 0);
 
-       ret = xfs_xattri_finish_update(attr, done_item);
-       if (ret == -EAGAIN) {
-               /* There's more work to do, so add it to this transaction */
+       error = xfs_xattri_finish_update(attr, done_item);
+       if (error == -EAGAIN) {
+               /*
+                * There's more work to do, so add the intent item to this
+                * transaction so that we can continue it later.
+                */
                xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_ATTR, &attr->xattri_list);
-       } else
-               error = ret;
+               error = xfs_defer_ops_capture_and_commit(tp, capture_list);
+               if (error)
+                       goto out_unlock;
 
+               xfs_iunlock(ip, XFS_ILOCK_EXCL);
+               xfs_irele(ip);
+               return 0;
+       }
        if (error) {
                xfs_trans_cancel(tp);
                goto out_unlock;
        }
 
        error = xfs_defer_ops_capture_and_commit(tp, capture_list);
-
 out_unlock:
-       if (attr->xattri_leaf_bp)
-               xfs_buf_relse(attr->xattri_leaf_bp);
-
        xfs_iunlock(ip, XFS_ILOCK_EXCL);
        xfs_irele(ip);
 out:
-       if (ret != -EAGAIN)
-               xfs_attr_free_item(attr);
+       xfs_attr_free_item(attr);
        return error;
 }
 
index 52be583..85e1a26 100644 (file)
@@ -686,6 +686,8 @@ xfs_can_free_eofblocks(
         * forever.
         */
        end_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)XFS_ISIZE(ip));
+       if (XFS_IS_REALTIME_INODE(ip) && mp->m_sb.sb_rextsize > 1)
+               end_fsb = roundup_64(end_fsb, mp->m_sb.sb_rextsize);
        last_fsb = XFS_B_TO_FSB(mp, mp->m_super->s_maxbytes);
        if (last_fsb <= end_fsb)
                return false;
index 5269354..2609825 100644 (file)
@@ -440,7 +440,7 @@ xfs_inodegc_queue_all(
        for_each_online_cpu(cpu) {
                gc = per_cpu_ptr(mp->m_inodegc, cpu);
                if (!llist_empty(&gc->list))
-                       queue_work_on(cpu, mp->m_inodegc_wq, &gc->work);
+                       mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
        }
 }
 
@@ -1841,8 +1841,8 @@ void
 xfs_inodegc_worker(
        struct work_struct      *work)
 {
-       struct xfs_inodegc      *gc = container_of(work, struct xfs_inodegc,
-                                                       work);
+       struct xfs_inodegc      *gc = container_of(to_delayed_work(work),
+                                               struct xfs_inodegc, work);
        struct llist_node       *node = llist_del_all(&gc->list);
        struct xfs_inode        *ip, *n;
 
@@ -1862,19 +1862,29 @@ xfs_inodegc_worker(
 }
 
 /*
- * Force all currently queued inode inactivation work to run immediately and
- * wait for the work to finish.
+ * Expedite all pending inodegc work to run immediately. This does not wait for
+ * completion of the work.
  */
 void
-xfs_inodegc_flush(
+xfs_inodegc_push(
        struct xfs_mount        *mp)
 {
        if (!xfs_is_inodegc_enabled(mp))
                return;
+       trace_xfs_inodegc_push(mp, __return_address);
+       xfs_inodegc_queue_all(mp);
+}
 
+/*
+ * Force all currently queued inode inactivation work to run immediately and
+ * wait for the work to finish.
+ */
+void
+xfs_inodegc_flush(
+       struct xfs_mount        *mp)
+{
+       xfs_inodegc_push(mp);
        trace_xfs_inodegc_flush(mp, __return_address);
-
-       xfs_inodegc_queue_all(mp);
        flush_workqueue(mp->m_inodegc_wq);
 }
 
@@ -2014,6 +2024,7 @@ xfs_inodegc_queue(
        struct xfs_inodegc      *gc;
        int                     items;
        unsigned int            shrinker_hits;
+       unsigned long           queue_delay = 1;
 
        trace_xfs_inode_set_need_inactive(ip);
        spin_lock(&ip->i_flags_lock);
@@ -2025,19 +2036,26 @@ xfs_inodegc_queue(
        items = READ_ONCE(gc->items);
        WRITE_ONCE(gc->items, items + 1);
        shrinker_hits = READ_ONCE(gc->shrinker_hits);
-       put_cpu_ptr(gc);
 
-       if (!xfs_is_inodegc_enabled(mp))
+       /*
+        * We queue the work while holding the current CPU so that the work
+        * is scheduled to run on this CPU.
+        */
+       if (!xfs_is_inodegc_enabled(mp)) {
+               put_cpu_ptr(gc);
                return;
-
-       if (xfs_inodegc_want_queue_work(ip, items)) {
-               trace_xfs_inodegc_queue(mp, __return_address);
-               queue_work(mp->m_inodegc_wq, &gc->work);
        }
 
+       if (xfs_inodegc_want_queue_work(ip, items))
+               queue_delay = 0;
+
+       trace_xfs_inodegc_queue(mp, __return_address);
+       mod_delayed_work(mp->m_inodegc_wq, &gc->work, queue_delay);
+       put_cpu_ptr(gc);
+
        if (xfs_inodegc_want_flush_work(ip, items, shrinker_hits)) {
                trace_xfs_inodegc_throttle(mp, __return_address);
-               flush_work(&gc->work);
+               flush_delayed_work(&gc->work);
        }
 }
 
@@ -2054,7 +2072,7 @@ xfs_inodegc_cpu_dead(
        unsigned int            count = 0;
 
        dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu);
-       cancel_work_sync(&dead_gc->work);
+       cancel_delayed_work_sync(&dead_gc->work);
 
        if (llist_empty(&dead_gc->list))
                return;
@@ -2073,12 +2091,12 @@ xfs_inodegc_cpu_dead(
        llist_add_batch(first, last, &gc->list);
        count += READ_ONCE(gc->items);
        WRITE_ONCE(gc->items, count);
-       put_cpu_ptr(gc);
 
        if (xfs_is_inodegc_enabled(mp)) {
                trace_xfs_inodegc_queue(mp, __return_address);
-               queue_work(mp->m_inodegc_wq, &gc->work);
+               mod_delayed_work(mp->m_inodegc_wq, &gc->work, 0);
        }
+       put_cpu_ptr(gc);
 }
 
 /*
@@ -2173,7 +2191,7 @@ xfs_inodegc_shrinker_scan(
                        unsigned int    h = READ_ONCE(gc->shrinker_hits);
 
                        WRITE_ONCE(gc->shrinker_hits, h + 1);
-                       queue_work_on(cpu, mp->m_inodegc_wq, &gc->work);
+                       mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
                        no_items = false;
                }
        }
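
The inodegc conversion swaps struct work_struct for struct delayed_work so callers can choose deferred or immediate (0-jiffy) queueing with one API. A generic sketch of the pairing, with hypothetical names:

	struct my_gc {
		struct llist_head	list;
		struct delayed_work	work;	/* was: struct work_struct */
	};

	static void my_worker(struct work_struct *work)
	{
		struct my_gc *gc = container_of(to_delayed_work(work),
						struct my_gc, work);
		/* ... drain gc->list ... */
	}

	INIT_DELAYED_WORK(&gc->work, my_worker);
	/* delay 0 behaves like queue_work(); nonzero defers the run */
	mod_delayed_work(system_wq, &gc->work, msecs_to_jiffies(10));
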
index 2e4cfdd..6cd1807 100644 (file)
@@ -76,6 +76,7 @@ void xfs_blockgc_stop(struct xfs_mount *mp);
 void xfs_blockgc_start(struct xfs_mount *mp);
 
 void xfs_inodegc_worker(struct work_struct *work);
+void xfs_inodegc_push(struct xfs_mount *mp);
 void xfs_inodegc_flush(struct xfs_mount *mp);
 void xfs_inodegc_stop(struct xfs_mount *mp);
 void xfs_inodegc_start(struct xfs_mount *mp);
index 52d6f2c..3e1c62f 100644 (file)
@@ -131,6 +131,26 @@ xfs_ilock_attr_map_shared(
        return lock_mode;
 }
 
+/*
+ * You can't set both SHARED and EXCL for the same lock,
+ * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_MMAPLOCK_SHARED,
+ * XFS_MMAPLOCK_EXCL, XFS_ILOCK_SHARED, XFS_ILOCK_EXCL are valid values
+ * to set in lock_flags.
+ */
+static inline void
+xfs_lock_flags_assert(
+       uint            lock_flags)
+{
+       ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
+               (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
+       ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
+               (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
+       ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
+               (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
+       ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+       ASSERT(lock_flags != 0);
+}
+
 /*
  * In addition to i_rwsem in the VFS inode, the xfs inode contains 2
  * multi-reader locks: invalidate_lock and the i_lock.  This routine allows
@@ -168,18 +188,7 @@ xfs_ilock(
 {
        trace_xfs_ilock(ip, lock_flags, _RET_IP_);
 
-       /*
-        * You can't set both SHARED and EXCL for the same lock,
-        * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-        * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-        */
-       ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-              (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-       ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-              (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-       ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-              (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-       ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+       xfs_lock_flags_assert(lock_flags);
 
        if (lock_flags & XFS_IOLOCK_EXCL) {
                down_write_nested(&VFS_I(ip)->i_rwsem,
@@ -222,18 +231,7 @@ xfs_ilock_nowait(
 {
        trace_xfs_ilock_nowait(ip, lock_flags, _RET_IP_);
 
-       /*
-        * You can't set both SHARED and EXCL for the same lock,
-        * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-        * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-        */
-       ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-              (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-       ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-              (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-       ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-              (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-       ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+       xfs_lock_flags_assert(lock_flags);
 
        if (lock_flags & XFS_IOLOCK_EXCL) {
                if (!down_write_trylock(&VFS_I(ip)->i_rwsem))
@@ -291,19 +289,7 @@ xfs_iunlock(
        xfs_inode_t             *ip,
        uint                    lock_flags)
 {
-       /*
-        * You can't set both SHARED and EXCL for the same lock,
-        * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-        * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-        */
-       ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-              (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-       ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-              (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-       ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-              (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-       ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
-       ASSERT(lock_flags != 0);
+       xfs_lock_flags_assert(lock_flags);
 
        if (lock_flags & XFS_IOLOCK_EXCL)
                up_write(&VFS_I(ip)->i_rwsem);
@@ -379,8 +365,8 @@ xfs_isilocked(
        }
 
        if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) {
-               return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem,
-                               (lock_flags & XFS_IOLOCK_SHARED));
+               return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock,
+                               (lock_flags & XFS_MMAPLOCK_SHARED));
        }
 
        if (lock_flags & (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED)) {
index 5a364a7..0d67ff8 100644 (file)
@@ -1096,7 +1096,8 @@ xfs_flags2diflags2(
 {
        uint64_t                di_flags2 =
                (ip->i_diflags2 & (XFS_DIFLAG2_REFLINK |
-                                  XFS_DIFLAG2_BIGTIME));
+                                  XFS_DIFLAG2_BIGTIME |
+                                  XFS_DIFLAG2_NREXT64));
 
        if (xflags & FS_XFLAG_DAX)
                di_flags2 |= XFS_DIFLAG2_DAX;
index 1e972f8..ae904b2 100644 (file)
@@ -2092,8 +2092,6 @@ xlog_dealloc_log(
        xlog_in_core_t  *iclog, *next_iclog;
        int             i;
 
-       xlog_cil_destroy(log);
-
        /*
         * Cycle all the iclogbuf locks to make sure all log IO completion
         * is done before we tear down these buffers.
@@ -2105,6 +2103,13 @@ xlog_dealloc_log(
                iclog = iclog->ic_next;
        }
 
+       /*
+        * Destroy the CIL after waiting for iclog IO completion because an
+        * iclog EIO error will try to shut down the log, which accesses the
+        * CIL to wake up the waiters.
+        */
+       xlog_cil_destroy(log);
+
        iclog = log->l_iclog;
        for (i = 0; i < log->l_iclog_bufs; i++) {
                next_iclog = iclog->ic_next;
index ba5d42a..d2eaebd 100644 (file)
@@ -61,7 +61,7 @@ struct xfs_error_cfg {
  */
 struct xfs_inodegc {
        struct llist_head       list;
-       struct work_struct      work;
+       struct delayed_work     work;
 
        /* approximate count of inodes in the list */
        unsigned int            items;
index 74ac9ca..392cb39 100644 (file)
@@ -454,9 +454,12 @@ xfs_qm_scall_getquota(
        struct xfs_dquot        *dqp;
        int                     error;
 
-       /* Flush inodegc work at the start of a quota reporting scan. */
+       /*
+        * Expedite pending inodegc work at the start of a quota reporting
+        * scan but don't block waiting for it to complete.
+        */
        if (id == 0)
-               xfs_inodegc_flush(mp);
+               xfs_inodegc_push(mp);
 
        /*
         * Try to get the dquot. We don't want it allocated on disk, so don't
@@ -498,7 +501,7 @@ xfs_qm_scall_getquota_next(
 
        /* Flush inodegc work at the start of a quota reporting scan. */
        if (*id == 0)
-               xfs_inodegc_flush(mp);
+               xfs_inodegc_push(mp);
 
        error = xfs_qm_dqget_next(mp, *id, type, &dqp);
        if (error)
index ed18160..aa977c7 100644 (file)
@@ -797,8 +797,11 @@ xfs_fs_statfs(
        xfs_extlen_t            lsize;
        int64_t                 ffree;
 
-       /* Wait for whatever inactivations are in progress. */
-       xfs_inodegc_flush(mp);
+       /*
+        * Expedite background inodegc but don't wait. We do not want to block
+        * here waiting hours for a billion-extent file to be truncated.
+        */
+       xfs_inodegc_push(mp);
 
        statp->f_type = XFS_SUPER_MAGIC;
        statp->f_namelen = MAXNAMELEN - 1;
@@ -1074,7 +1077,7 @@ xfs_inodegc_init_percpu(
                gc = per_cpu_ptr(mp->m_inodegc, cpu);
                init_llist_head(&gc->list);
                gc->items = 0;
-               INIT_WORK(&gc->work, xfs_inodegc_worker);
+               INIT_DELAYED_WORK(&gc->work, xfs_inodegc_worker);
        }
        return 0;
 }
index d320265..0fa1b7a 100644 (file)
@@ -240,6 +240,7 @@ DEFINE_EVENT(xfs_fs_class, name,                                    \
        TP_PROTO(struct xfs_mount *mp, void *caller_ip), \
        TP_ARGS(mp, caller_ip))
 DEFINE_FS_EVENT(xfs_inodegc_flush);
+DEFINE_FS_EVENT(xfs_inodegc_push);
 DEFINE_FS_EVENT(xfs_inodegc_start);
 DEFINE_FS_EVENT(xfs_inodegc_stop);
 DEFINE_FS_EVENT(xfs_inodegc_queue);
index 35e13e1..c325a28 100644 (file)
@@ -68,6 +68,18 @@ xfs_attr_rele_log_assist(
        xlog_drop_incompat_feat(mp->m_log);
 }
 
+static inline bool
+xfs_attr_want_log_assist(
+       struct xfs_mount        *mp)
+{
+#ifdef DEBUG
+       /* Logged xattrs require a V5 super for log_incompat */
+       return xfs_has_crc(mp) && xfs_globals.larp;
+#else
+       return false;
+#endif
+}
+
 /*
  * Set or remove an xattr, having grabbed the appropriate logging resources
  * prior to calling libxfs.
@@ -80,11 +92,14 @@ xfs_attr_change(
        bool                    use_logging = false;
        int                     error;
 
-       if (xfs_has_larp(mp)) {
+       ASSERT(!(args->op_flags & XFS_DA_OP_LOGGED));
+
+       if (xfs_attr_want_log_assist(mp)) {
                error = xfs_attr_grab_log_assist(mp);
                if (error)
                        return error;
 
+               args->op_flags |= XFS_DA_OP_LOGGED;
                use_logging = true;
        }
 
index 0777725..10b1990 100644 (file)
@@ -1022,6 +1022,7 @@ void drm_state_dump(struct drm_device *dev, struct drm_printer *p);
        for ((__i) = 0; \
             (__i) < (__state)->num_private_objs && \
                     ((obj) = (__state)->private_objs[__i].ptr, \
+                     (void)(obj) /* Only to avoid unused-but-set-variable warning */, \
                      (new_obj_state) = (__state)->private_objs[__i].new_state, 1); \
             (__i)++)
 
index 4416536..ca89a48 100644 (file)
@@ -311,12 +311,12 @@ ttm_resource_manager_cleanup(struct ttm_resource_manager *man)
 }
 
 void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk);
-void ttm_lru_bulk_move_add(struct ttm_lru_bulk_move *bulk,
-                          struct ttm_resource *res);
-void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk,
-                          struct ttm_resource *res);
 void ttm_lru_bulk_move_tail(struct ttm_lru_bulk_move *bulk);
 
+void ttm_resource_add_bulk_move(struct ttm_resource *res,
+                               struct ttm_buffer_object *bo);
+void ttm_resource_del_bulk_move(struct ttm_resource *res,
+                               struct ttm_buffer_object *bo);
 void ttm_resource_move_to_lru_tail(struct ttm_resource *res);
 
 void ttm_resource_init(struct ttm_buffer_object *bo,
diff --git a/include/dt-bindings/net/pcs-rzn1-miic.h b/include/dt-bindings/net/pcs-rzn1-miic.h
new file mode 100644 (file)
index 0000000..784782e
--- /dev/null
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+/*
+ * Copyright (C) 2022 Schneider-Electric
+ *
+ * Clément Léger <clement.leger@bootlin.com>
+ */
+
+#ifndef _DT_BINDINGS_PCS_RZN1_MIIC
+#define _DT_BINDINGS_PCS_RZN1_MIIC
+
+/*
+ * Refer to the datasheet [1] section 8.2.1, Internal Connection of Ethernet
+ * Ports, to check the available combinations
+ *
+ * [1] REN_r01uh0750ej0140-rzn1-introduction_MAT_20210228.pdf
+ */
+
+#define MIIC_GMAC1_PORT                        0
+#define MIIC_GMAC2_PORT                        1
+#define MIIC_RTOS_PORT                 2
+#define MIIC_SERCOS_PORTA              3
+#define MIIC_SERCOS_PORTB              4
+#define MIIC_ETHERCAT_PORTA            5
+#define MIIC_ETHERCAT_PORTB            6
+#define MIIC_ETHERCAT_PORTC            7
+#define MIIC_SWITCH_PORTA              8
+#define MIIC_SWITCH_PORTB              9
+#define MIIC_SWITCH_PORTC              10
+#define MIIC_SWITCH_PORTD              11
+#define MIIC_HSR_PORTA                 12
+#define MIIC_HSR_PORTB                 13
+
+#endif
index 6c5d496..69a13e1 100644 (file)
@@ -84,6 +84,9 @@ extern struct key *find_asymmetric_key(struct key *keyring,
                                       const struct asymmetric_key_id *id_2,
                                       bool partial);
 
+int x509_load_certificate_list(const u8 cert_list[], const unsigned long list_size,
+                              const struct key *keyring);
+
 /*
  * The payload is at the discretion of the subtype.
  */
index 2bd073f..d452071 100644 (file)
@@ -119,6 +119,8 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
 
 extern struct backing_dev_info noop_backing_dev_info;
 
+int bdi_init(struct backing_dev_info *bdi);
+
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @wb: bdi_writeback of interest
index 608d577..2f7b434 100644 (file)
@@ -342,7 +342,6 @@ static inline int blkdev_zone_mgmt_ioctl(struct block_device *bdev,
  */
 struct blk_independent_access_range {
        struct kobject          kobj;
-       struct request_queue    *queue;
        sector_t                sector;
        sector_t                nr_sectors;
 };
@@ -482,7 +481,6 @@ struct request_queue {
 #endif /* CONFIG_BLK_DEV_ZONED */
 
        int                     node;
-       struct mutex            debugfs_mutex;
 #ifdef CONFIG_BLK_DEV_IO_TRACE
        struct blk_trace __rcu  *blk_trace;
 #endif
@@ -526,11 +524,12 @@ struct request_queue {
        struct bio_set          bio_split;
 
        struct dentry           *debugfs_dir;
-
-#ifdef CONFIG_BLK_DEBUG_FS
        struct dentry           *sched_debugfs_dir;
        struct dentry           *rqos_debugfs_dir;
-#endif
+       /*
+        * Serializes all debugfs metadata operations using the above dentries.
+        */
+       struct mutex            debugfs_mutex;
 
        bool                    mq_sysfs_init_done;
 
@@ -575,6 +574,7 @@ struct request_queue {
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27    /* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE 28      /* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_NOWAIT       29     /* device supports NOWAIT */
+#define QUEUE_FLAG_SQ_SCHED     30     /* single queue style io dispatch */
 
 #define QUEUE_FLAG_MQ_DEFAULT  ((1 << QUEUE_FLAG_IO_STAT) |            \
                                 (1 << QUEUE_FLAG_SAME_COMP) |          \
@@ -616,6 +616,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_pm_only(q)   atomic_read(&(q)->pm_only)
 #define blk_queue_registered(q)        test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_nowait(q)    test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
+#define blk_queue_sq_sched(q)  test_bit(QUEUE_FLAG_SQ_SCHED, &(q)->queue_flags)
 
 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);
@@ -1006,8 +1007,6 @@ void disk_set_independent_access_ranges(struct gendisk *disk,
  */
 /* Supports zoned block devices sequential write constraint */
 #define ELEVATOR_F_ZBD_SEQ_WRITE       (1U << 0)
-/* Supports scheduling on multiple hardware queues */
-#define ELEVATOR_F_MQ_AWARE            (1U << 1)
 
 extern void blk_queue_required_elevator_features(struct request_queue *q,
                                                 unsigned int features);
index 747fad2..6ff567e 100644 (file)
@@ -16,6 +16,7 @@
 #define PHY_ID_BCM5481                 0x0143bca0
 #define PHY_ID_BCM5395                 0x0143bcf0
 #define PHY_ID_BCM53125                        0x03625f20
+#define PHY_ID_BCM53128                        0x03625e10
 #define PHY_ID_BCM54810                        0x03625d00
 #define PHY_ID_BCM54811                        0x03625cc0
 #define PHY_ID_BCM5482                 0x0143bcb0
index 7ae21c0..ef0a771 100644 (file)
@@ -11,6 +11,8 @@
 
 #define CAN_SYNC_SEG 1
 
+#define CAN_BITRATE_UNSET 0
+#define CAN_BITRATE_UNKNOWN (-1U)
 
 #define CAN_CTRLMODE_TDC_MASK                                  \
        (CAN_CTRLMODE_TDC_AUTO | CAN_CTRLMODE_TDC_MANUAL)
index fdb22b0..182749e 100644 (file)
@@ -31,6 +31,7 @@ struct sk_buff *alloc_canfd_skb(struct net_device *dev,
                                struct canfd_frame **cfd);
 struct sk_buff *alloc_can_err_skb(struct net_device *dev,
                                  struct can_frame **cf);
+bool can_dropped_invalid_skb(struct net_device *dev, struct sk_buff *skb);
 
 /*
  * The struct can_skb_priv is used to transport additional information along
@@ -96,64 +97,6 @@ static inline struct sk_buff *can_create_echo_skb(struct sk_buff *skb)
        return nskb;
 }
 
-/* Check for outgoing skbs that have not been created by the CAN subsystem */
-static inline bool can_skb_headroom_valid(struct net_device *dev,
-                                         struct sk_buff *skb)
-{
-       /* af_packet creates a headroom of HH_DATA_MOD bytes which is fine */
-       if (WARN_ON_ONCE(skb_headroom(skb) < sizeof(struct can_skb_priv)))
-               return false;
-
-       /* af_packet does not apply CAN skb specific settings */
-       if (skb->ip_summed == CHECKSUM_NONE) {
-               /* init headroom */
-               can_skb_prv(skb)->ifindex = dev->ifindex;
-               can_skb_prv(skb)->skbcnt = 0;
-
-               skb->ip_summed = CHECKSUM_UNNECESSARY;
-
-               /* perform proper loopback on capable devices */
-               if (dev->flags & IFF_ECHO)
-                       skb->pkt_type = PACKET_LOOPBACK;
-               else
-                       skb->pkt_type = PACKET_HOST;
-
-               skb_reset_mac_header(skb);
-               skb_reset_network_header(skb);
-               skb_reset_transport_header(skb);
-       }
-
-       return true;
-}
-
-/* Drop a given socketbuffer if it does not contain a valid CAN frame. */
-static inline bool can_dropped_invalid_skb(struct net_device *dev,
-                                         struct sk_buff *skb)
-{
-       const struct canfd_frame *cfd = (struct canfd_frame *)skb->data;
-
-       if (skb->protocol == htons(ETH_P_CAN)) {
-               if (unlikely(skb->len != CAN_MTU ||
-                            cfd->len > CAN_MAX_DLEN))
-                       goto inval_skb;
-       } else if (skb->protocol == htons(ETH_P_CANFD)) {
-               if (unlikely(skb->len != CANFD_MTU ||
-                            cfd->len > CANFD_MAX_DLEN))
-                       goto inval_skb;
-       } else
-               goto inval_skb;
-
-       if (!can_skb_headroom_valid(dev, skb))
-               goto inval_skb;
-
-       return false;
-
-inval_skb:
-       kfree_skb(skb);
-       dev->stats.tx_dropped++;
-       return true;
-}
-
 static inline bool can_is_canfd_skb(const struct sk_buff *skb)
 {
        /* the CAN specific type of skb is identified by its data length */
index d08dfcb..4f2a819 100644 (file)
@@ -24,6 +24,7 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 /* context/locking */
 # define __must_hold(x)        __attribute__((context(x,1,1)))
 # define __acquires(x) __attribute__((context(x,0,1)))
+# define __cond_acquires(x) __attribute__((context(x,0,-1)))
 # define __releases(x) __attribute__((context(x,1,0)))
 # define __acquire(x)  __context__(x,1)
 # define __release(x)  __context__(x,-1)
@@ -50,6 +51,7 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 /* context/locking */
 # define __must_hold(x)
 # define __acquires(x)
+# define __cond_acquires(x)
 # define __releases(x)
 # define __acquire(x)  (void)0
 # define __release(x)  (void)0
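
__cond_acquires(x) maps to sparse's context(x,0,-1): the function may or may not return with x held, which fits trylock-style helpers. A hedged declaration sketch with hypothetical names:

	/* Returns 0 with res->lock held, -EBUSY without it. */
	int resource_trylock(struct resource *res) __cond_acquires(&res->lock);
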
index 1436530..8c1686e 100644 (file)
@@ -16,7 +16,6 @@
 
 #include <linux/atomic.h>
 #include <linux/types.h>
-#include <linux/mutex.h>
 
 struct vc_data;
 struct console_font_op;
@@ -154,22 +153,6 @@ struct console {
        uint    ospeed;
        u64     seq;
        unsigned long dropped;
-       struct task_struct *thread;
-       bool    blocked;
-
-       /*
-        * The per-console lock is used by printing kthreads to synchronize
-        * this console with callers of console_lock(). This is necessary in
-        * order to allow printing kthreads to run in parallel to each other,
-        * while each safely accessing the @blocked field and synchronizing
-        * against direct printing via console_lock/console_unlock.
-        *
-        * Note: For synchronizing against direct printing via
-        *       console_trylock/console_unlock, see the static global
-        *       variable @console_kthreads_active.
-        */
-       struct mutex lock;
-
        void    *data;
        struct   console *next;
 };
index dc10bee..34aab4d 100644 (file)
@@ -148,6 +148,8 @@ struct devfreq_stats {
  *             reevaluate operable frequencies. Devfreq users may use
  *             devfreq.nb to the corresponding register notifier call chain.
  * @work:      delayed work for load monitoring.
+ * @freq_table:                current frequency table used by the devfreq driver.
+ * @max_state:         count of entries present in the frequency table.
  * @previous_freq:     previously configured frequency value.
  * @last_status:       devfreq user device info, performance statistics
  * @data:      Private data of the governor. The devfreq framework does not
@@ -185,6 +187,9 @@ struct devfreq {
        struct notifier_block nb;
        struct delayed_work work;
 
+       unsigned long *freq_table;
+       unsigned int max_state;
+
        unsigned long previous_freq;
        struct devfreq_dev_status last_status;
 
index b698266..6c57339 100644 (file)
@@ -21,7 +21,7 @@
  * We consider 10% difference as significant.
  */
 #define IS_SIGNIFICANT_DIFF(val, ref) \
-       (((100UL * abs((val) - (ref))) / (ref)) > 10)
+       ((ref) && (((100UL * abs((val) - (ref))) / (ref)) > 10))
 
 /*
  * Calculate the gap between two values.
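
The added (ref) && guard short-circuits the comparison when the reference value is zero, which previously divided by zero. Worked values for illustration:

	IS_SIGNIFICANT_DIFF(1500, 0);	 /* old: 100 * 1500 / 0; new: false, no division */
	IS_SIGNIFICANT_DIFF(1500, 1000); /* 100 * 500 / 1000 = 50 > 10 -> true */
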
index edc2855..e517dbc 100644 (file)
                                         FANOTIFY_PERM_EVENTS | \
                                         FAN_Q_OVERFLOW | FAN_ONDIR)
 
+/* Events and flags relevant only for directories */
+#define FANOTIFY_DIRONLY_EVENT_BITS    (FANOTIFY_DIRENT_EVENTS | \
+                                        FAN_EVENT_ON_CHILD | FAN_ONDIR)
+
 #define ALL_FANOTIFY_EVENT_BITS                (FANOTIFY_OUTGOING_EVENTS | \
                                         FANOTIFY_EVENT_FLAGS)
 
index b1e0f1f..54c3c65 100644 (file)
@@ -167,21 +167,24 @@ struct gpio_irq_chip {
         */
        irq_flow_handler_t parent_handler;
 
-       /**
-        * @parent_handler_data:
-        *
-        * If @per_parent_data is false, @parent_handler_data is a single
-        * pointer used as the data associated with every parent interrupt.
-        *
-        * @parent_handler_data_array:
-        *
-        * If @per_parent_data is true, @parent_handler_data_array is
-        * an array of @num_parents pointers, and is used to associate
-        * different data for each parent. This cannot be NULL if
-        * @per_parent_data is true.
-        */
        union {
+               /**
+                * @parent_handler_data:
+                *
+                * If @per_parent_data is false, @parent_handler_data is a
+                * single pointer used as the data associated with every
+                * parent interrupt.
+                */
                void *parent_handler_data;
+
+               /**
+                * @parent_handler_data_array:
+                *
+                * If @per_parent_data is true, @parent_handler_data_array is
+                * an array of @num_parents pointers, and is used to associate
+                * different data for each parent. This cannot be NULL if
+                * @per_parent_data is true.
+                */
                void **parent_handler_data_array;
        };
 
index 99f17cc..c3a1f78 100644 (file)
@@ -38,7 +38,6 @@ extern void lockref_get(struct lockref *);
 extern int lockref_put_return(struct lockref *);
 extern int lockref_get_not_zero(struct lockref *);
 extern int lockref_put_not_zero(struct lockref *);
-extern int lockref_get_or_lock(struct lockref *);
 extern int lockref_put_or_lock(struct lockref *);
 
 extern void lockref_mark_dead(struct lockref *);
index 8b18fe9..e2701ed 100644 (file)
@@ -12,7 +12,6 @@
 #define MLX5_ESWITCH_MANAGER(mdev) MLX5_CAP_GEN(mdev, eswitch_manager)
 
 enum {
-       MLX5_ESWITCH_NONE,
        MLX5_ESWITCH_LEGACY,
        MLX5_ESWITCH_OFFLOADS
 };
@@ -153,7 +152,7 @@ struct mlx5_core_dev *mlx5_eswitch_get_core_dev(struct mlx5_eswitch *esw);
 
 static inline u8 mlx5_eswitch_mode(const struct mlx5_core_dev *dev)
 {
-       return MLX5_ESWITCH_NONE;
+       return MLX5_ESWITCH_LEGACY;
 }
 
 static inline enum devlink_eswitch_encap_mode
@@ -198,6 +197,11 @@ static inline struct mlx5_core_dev *mlx5_eswitch_get_core_dev(struct mlx5_eswitc
 
 #endif /* CONFIG_MLX5_ESWITCH */
 
+static inline bool is_mdev_legacy_mode(struct mlx5_core_dev *dev)
+{
+       return mlx5_eswitch_mode(dev) == MLX5_ESWITCH_LEGACY;
+}
+
 static inline bool is_mdev_switchdev_mode(struct mlx5_core_dev *dev)
 {
        return mlx5_eswitch_mode(dev) == MLX5_ESWITCH_OFFLOADS;
index bc8f326..cf3d0d6 100644 (file)
@@ -1600,7 +1600,7 @@ static inline bool is_pinnable_page(struct page *page)
        if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
                return false;
 #endif
-       return !(is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page)));
+       return !is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page));
 }
 #else
 static inline bool is_pinnable_page(struct page *page)
@@ -3232,6 +3232,7 @@ enum mf_flags {
        MF_MUST_KILL = 1 << 2,
        MF_SOFT_OFFLINE = 1 << 3,
        MF_UNPOISON = 1 << 4,
+       MF_SW_SIMULATED = 1 << 5,
 };
 extern int memory_failure(unsigned long pfn, int flags);
 extern void memory_failure_queue(unsigned long pfn, int flags);
index e05ee9f..9dd4bf1 100644 (file)
@@ -26,7 +26,7 @@
  * @remote: Remote address for tunnels
  */
 struct vif_device {
-       struct net_device *dev;
+       struct net_device __rcu *dev;
        netdevice_tracker dev_tracker;
        unsigned long bytes_in, bytes_out;
        unsigned long pkt_in, pkt_out;
@@ -52,6 +52,7 @@ static inline int mr_call_vif_notifier(struct notifier_block *nb,
                                       unsigned short family,
                                       enum fib_event_type event_type,
                                       struct vif_device *vif,
+                                      struct net_device *vif_dev,
                                       unsigned short vif_index, u32 tb_id,
                                       struct netlink_ext_ack *extack)
 {
@@ -60,7 +61,7 @@ static inline int mr_call_vif_notifier(struct notifier_block *nb,
                        .family = family,
                        .extack = extack,
                },
-               .dev = vif->dev,
+               .dev = vif_dev,
                .vif_index = vif_index,
                .vif_flags = vif->flags,
                .tb_id = tb_id,
@@ -73,6 +74,7 @@ static inline int mr_call_vif_notifiers(struct net *net,
                                        unsigned short family,
                                        enum fib_event_type event_type,
                                        struct vif_device *vif,
+                                       struct net_device *vif_dev,
                                        unsigned short vif_index, u32 tb_id,
                                        unsigned int *ipmr_seq)
 {
@@ -80,7 +82,7 @@ static inline int mr_call_vif_notifiers(struct net *net,
                .info = {
                        .family = family,
                },
-               .dev = vif->dev,
+               .dev = vif_dev,
                .vif_index = vif_index,
                .vif_flags = vif->flags,
                .tb_id = tb_id,
@@ -98,7 +100,8 @@ static inline int mr_call_vif_notifiers(struct net *net,
 #define MAXVIFS        32
 #endif
 
-#define VIF_EXISTS(_mrt, _idx) (!!((_mrt)->vif_table[_idx].dev))
+/* Note: This helper is deprecated. */
+#define VIF_EXISTS(_mrt, _idx) (!!rcu_access_pointer((_mrt)->vif_table[_idx].dev))
 
 /* mfc_flags:
  * MFC_STATIC - the entry was added statically (not by a routing daemon)
@@ -305,7 +308,7 @@ int mr_dump(struct net *net, struct notifier_block *nb, unsigned short family,
                              struct netlink_ext_ack *extack),
            struct mr_table *(*mr_iter)(struct net *net,
                                        struct mr_table *mrt),
-           rwlock_t *mrt_lock, struct netlink_ext_ack *extack);
+           struct netlink_ext_ack *extack);
 #else
 static inline void vif_device_init(struct vif_device *v,
                                   struct net_device *dev,
@@ -360,7 +363,7 @@ static inline int mr_dump(struct net *net, struct notifier_block *nb,
                                            struct netlink_ext_ack *extack),
                          struct mr_table *(*mr_iter)(struct net *net,
                                                      struct mr_table *mrt),
-                         rwlock_t *mrt_lock, struct netlink_ext_ack *extack)
+                         struct netlink_ext_ack *extack)
 {
        return -EINVAL;
 }
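Since vif->dev is now RCU-annotated, readers are expected to dereference it under rcu_read_lock(); a minimal sketch of the reader side (helper name hypothetical):

	static bool example_vif_running(struct vif_device *vif)
	{
		struct net_device *dev;
		bool up = false;

		rcu_read_lock();
		dev = rcu_dereference(vif->dev);	/* vif->dev is now __rcu */
		if (dev)
			up = netif_running(dev);
		rcu_read_unlock();
		return up;
	}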
index 89afa4f..1a3cb93 100644 (file)
@@ -1671,7 +1671,7 @@ enum netdev_priv_flags {
        IFF_FAILOVER_SLAVE              = 1<<28,
        IFF_L3MDEV_RX_HANDLER           = 1<<29,
        IFF_LIVE_RENAME_OK              = 1<<30,
-       IFF_TX_SKB_NO_LINEAR            = 1<<31,
+       IFF_TX_SKB_NO_LINEAR            = BIT_ULL(31),
        IFF_CHANGE_PROTO_DOWN           = BIT_ULL(32),
 };
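The switch to BIT_ULL(31) is more than cosmetic:

	/* 1 << 31 is a signed-int shift into the sign bit, whereas
	 * BIT_ULL(31) expands to 1ULL << 31, an unsigned 64-bit constant
	 * consistent with IFF_CHANGE_PROTO_DOWN at bit 32 above.
	 */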
 
index 29ec3e3..e393400 100644 (file)
@@ -233,8 +233,8 @@ enum {
 };
 
 enum {
-       NVME_CAP_CRMS_CRIMS     = 1ULL << 59,
-       NVME_CAP_CRMS_CRWMS     = 1ULL << 60,
+       NVME_CAP_CRMS_CRWMS     = 1ULL << 59,
+       NVME_CAP_CRMS_CRIMS     = 1ULL << 60,
 };
 
 struct nvme_id_power_state {
index 6491fa8..15b940e 100644 (file)
@@ -143,6 +143,12 @@ struct unwind_hint {
        .popsection
 .endm
 
+.macro STACK_FRAME_NON_STANDARD_FP func:req
+#ifdef CONFIG_FRAME_POINTER
+       STACK_FRAME_NON_STANDARD \func
+#endif
+.endm
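A hedged sketch of how the new FP-only annotation might be used from an assembly file (function name hypothetical):

	SYM_FUNC_START(example_asm_func)
		/* code objtool cannot validate when frame pointers are on */
		RET
	SYM_FUNC_END(example_asm_func)
	STACK_FRAME_NON_STANDARD_FP example_asm_func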
+
 .macro ANNOTATE_NOENDBR
 .Lhere_\@:
        .pushsection .discard.noendbr
diff --git a/include/linux/pcs-rzn1-miic.h b/include/linux/pcs-rzn1-miic.h
new file mode 100644 (file)
index 0000000..56d12b2
--- /dev/null
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Schneider Electric
+ *
+ * Clément Léger <clement.leger@bootlin.com>
+ */
+
+#ifndef __LINUX_PCS_MIIC_H
+#define __LINUX_PCS_MIIC_H
+
+struct phylink;
+struct device_node;
+
+struct phylink_pcs *miic_create(struct device *dev, struct device_node *np);
+
+void miic_destroy(struct phylink_pcs *pcs);
+
+#endif /* __LINUX_PCS_MIIC_H */
index bed9a34..87638c5 100644 (file)
@@ -572,6 +572,10 @@ struct macsec_ops;
  * @mdix_ctrl: User setting of crossover
  * @pma_extable: Cached value of PMA/PMD Extended Abilities Register
  * @interrupts: Flag interrupts have been enabled
+ * @irq_suspended: Flag indicating PHY is suspended and therefore interrupt
+ *                 handling shall be postponed until PHY has resumed
+ * @irq_rerun: Flag indicating interrupts occurred while PHY was suspended,
+ *             requiring a rerun of the interrupt handler after resume
  * @interface: enum phy_interface_t value
  * @skb: Netlink message for cable diagnostics
  * @nest: Netlink nest used for cable diagnostics
@@ -626,6 +630,8 @@ struct phy_device {
 
        /* Interrupts are enabled */
        unsigned interrupts:1;
+       unsigned irq_suspended:1;
+       unsigned irq_rerun:1;
 
        enum phy_state state;
 
index 10ec29b..cf7d666 100644 (file)
@@ -169,9 +169,6 @@ extern void __printk_safe_exit(void);
 #define printk_deferred_enter __printk_safe_enter
 #define printk_deferred_exit __printk_safe_exit
 
-extern void printk_prefer_direct_enter(void);
-extern void printk_prefer_direct_exit(void);
-
 extern bool pr_flush(int timeout_ms, bool reset_on_progress);
 
 /*
@@ -224,14 +221,6 @@ static inline void printk_deferred_exit(void)
 {
 }
 
-static inline void printk_prefer_direct_enter(void)
-{
-}
-
-static inline void printk_prefer_direct_exit(void)
-{
-}
-
 static inline bool pr_flush(int timeout_ms, bool reset_on_progress)
 {
        return true;
index c21c7f8..0022666 100644 (file)
@@ -23,12 +23,16 @@ struct ratelimit_state {
        unsigned long   flags;
 };
 
-#define RATELIMIT_STATE_INIT(name, interval_init, burst_init) {                \
-               .lock           = __RAW_SPIN_LOCK_UNLOCKED(name.lock),  \
-               .interval       = interval_init,                        \
-               .burst          = burst_init,                           \
+#define RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, flags_init) { \
+               .lock           = __RAW_SPIN_LOCK_UNLOCKED(name.lock),            \
+               .interval       = interval_init,                                  \
+               .burst          = burst_init,                                     \
+               .flags          = flags_init,                                     \
        }
 
+#define RATELIMIT_STATE_INIT(name, interval_init, burst_init) \
+       RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, 0)
+
 #define RATELIMIT_STATE_INIT_DISABLED                                  \
        RATELIMIT_STATE_INIT(ratelimit_state, 0, DEFAULT_RATELIMIT_BURST)
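A minimal usage sketch of the new initializer (interval, burst, and flag choices are illustrative only):

	static struct ratelimit_state example_rs =
		RATELIMIT_STATE_INIT_FLAGS(example_rs, 5 * HZ, 10,
					   RATELIMIT_MSG_ON_RELEASE);

	static void example_event(void)
	{
		if (__ratelimit(&example_rs))
			pr_warn("example: rate-limited event\n");
	}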
 
index b8a6e38..a62fcca 100644 (file)
@@ -361,9 +361,9 @@ static inline void refcount_dec(refcount_t *r)
 
 extern __must_check bool refcount_dec_if_one(refcount_t *r);
 extern __must_check bool refcount_dec_not_one(refcount_t *r);
-extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock);
-extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock);
+extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock);
+extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock);
 extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
                                                       spinlock_t *lock,
-                                                      unsigned long *flags);
+                                                      unsigned long *flags) __cond_acquires(lock);
 #endif /* _LINUX_REFCOUNT_H */
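The __cond_acquires() annotations tell sparse that the lock is held only on the true branch; the canonical caller pattern looks like this (types and names hypothetical):

	struct example_obj {
		refcount_t refcnt;
		struct list_head node;
	};

	static void example_put(struct example_obj *obj, spinlock_t *list_lock)
	{
		if (refcount_dec_and_lock(&obj->refcnt, list_lock)) {
			/* lock held here, and this was the final reference */
			list_del(&obj->node);
			spin_unlock(list_lock);
			kfree(obj);
		}
		/* lock NOT held on the false path */
	}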
index 1c58646..704111f 100644 (file)
@@ -13,8 +13,9 @@
 #include <linux/notifier.h>
 #include <linux/types.h>
 
-#define SCMI_MAX_STR_SIZE      64
-#define SCMI_MAX_NUM_RATES     16
+#define SCMI_MAX_STR_SIZE              64
+#define SCMI_SHORT_NAME_MAX_SIZE       16
+#define SCMI_MAX_NUM_RATES             16
 
 /**
  * struct scmi_revision_info - version information structure
@@ -36,8 +37,8 @@ struct scmi_revision_info {
        u8 num_protocols;
        u8 num_agents;
        u32 impl_ver;
-       char vendor_id[SCMI_MAX_STR_SIZE];
-       char sub_vendor_id[SCMI_MAX_STR_SIZE];
+       char vendor_id[SCMI_SHORT_NAME_MAX_SIZE];
+       char sub_vendor_id[SCMI_SHORT_NAME_MAX_SIZE];
 };
 
 struct scmi_clock_info {
index cbd5070..657a0fc 100644 (file)
@@ -45,6 +45,7 @@ struct uart_ops {
        void            (*unthrottle)(struct uart_port *);
        void            (*send_xchar)(struct uart_port *, char ch);
        void            (*stop_rx)(struct uart_port *);
+       void            (*start_rx)(struct uart_port *);
        void            (*enable_ms)(struct uart_port *);
        void            (*break_ctl)(struct uart_port *, int ctl);
        int             (*startup)(struct uart_port *);
index 82edf03..f6a27ab 100644 (file)
@@ -2351,6 +2351,18 @@ static inline unsigned int skb_pagelen(const struct sk_buff *skb)
        return skb_headlen(skb) + __skb_pagelen(skb);
 }
 
+/**
+ * skb_len_add - adds a number to len fields of skb
+ * @skb: buffer to add len to
+ * @delta: number of bytes to add
+ */
+static inline void skb_len_add(struct sk_buff *skb, int delta)
+{
+       skb->len += delta;
+       skb->data_len += delta;
+       skb->truesize += delta;
+}
+
 /**
  * __skb_fill_page_desc - initialise a paged fragment in an skb
  * @skb: buffer containing fragment to be initialised
@@ -2763,8 +2775,14 @@ static inline void skb_set_network_header(struct sk_buff *skb, const int offset)
        skb->network_header += offset;
 }
 
+static inline int skb_mac_header_was_set(const struct sk_buff *skb)
+{
+       return skb->mac_header != (typeof(skb->mac_header))~0U;
+}
+
 static inline unsigned char *skb_mac_header(const struct sk_buff *skb)
 {
+       DEBUG_NET_WARN_ON_ONCE(!skb_mac_header_was_set(skb));
        return skb->head + skb->mac_header;
 }
 
@@ -2775,14 +2793,10 @@ static inline int skb_mac_offset(const struct sk_buff *skb)
 
 static inline u32 skb_mac_header_len(const struct sk_buff *skb)
 {
+       DEBUG_NET_WARN_ON_ONCE(!skb_mac_header_was_set(skb));
        return skb->network_header - skb->mac_header;
 }
 
-static inline int skb_mac_header_was_set(const struct sk_buff *skb)
-{
-       return skb->mac_header != (typeof(skb->mac_header))~0U;
-}
-
 static inline void skb_unset_mac_header(struct sk_buff *skb)
 {
        skb->mac_header = (typeof(skb->mac_header))~0U;
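The new skb_len_add() above centralizes the triple length update; call sites convert as in the sock.h hunk further down in this diff:

	-	skb->len      += copy;
	-	skb->data_len += copy;
	-	skb->truesize += copy;
	+	skb_len_add(skb, copy);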
index ea19341..d45902f 100644 (file)
@@ -102,4 +102,12 @@ static inline long strncpy_from_sockptr(char *dst, sockptr_t src, size_t count)
        return strncpy_from_user(dst, src.user, count);
 }
 
+static inline int check_zeroed_sockptr(sockptr_t src, size_t offset,
+                                      size_t size)
+{
+       if (!sockptr_is_kernel(src))
+               return check_zeroed_user(src.user + offset, size);
+       return memchr_inv(src.kernel + offset, 0, size) == NULL;
+}
+
 #endif /* _LINUX_SOCKPTR_H */
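A hedged sketch of the intended use: validating that the tail of a grown uapi struct is all zeroes before accepting it (struct layout and handler are hypothetical):

	static int example_set_opts(sockptr_t optval, unsigned int optlen)
	{
		struct example_opts { u32 flags; } opts;	/* hypothetical */
		int ret;

		if (optlen < sizeof(opts))
			return -EINVAL;
		if (optlen > sizeof(opts)) {
			ret = check_zeroed_sockptr(optval, sizeof(opts),
						   optlen - sizeof(opts));
			if (ret < 0)
				return ret;	/* fault */
			if (!ret)
				return -E2BIG;	/* unknown tail bytes set */
		}
		return copy_from_sockptr(&opts, optval, sizeof(opts));
	}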
index 80263f7..17b42ce 100644 (file)
@@ -75,6 +75,8 @@ int proc_douintvec_minmax(struct ctl_table *table, int write, void *buffer,
 int proc_dou8vec_minmax(struct ctl_table *table, int write, void *buffer,
                        size_t *lenp, loff_t *ppos);
 int proc_dointvec_jiffies(struct ctl_table *, int, void *, size_t *, loff_t *);
+int proc_dointvec_ms_jiffies_minmax(struct ctl_table *table, int write,
+               void *buffer, size_t *lenp, loff_t *ppos);
 int proc_dointvec_userhz_jiffies(struct ctl_table *, int, void *, size_t *,
                loff_t *);
 int proc_dointvec_ms_jiffies(struct ctl_table *, int, void *, size_t *,
index b0dcfa2..8ba8b5b 100644 (file)
@@ -55,6 +55,18 @@ struct efifb_dmi_info {
        int flags;
 };
 
+#ifdef CONFIG_SYSFB
+
+void sysfb_disable(void);
+
+#else /* CONFIG_SYSFB */
+
+static inline void sysfb_disable(void)
+{
+}
+
+#endif /* CONFIG_SYSFB */
+
 #ifdef CONFIG_EFI
 
 extern struct efifb_dmi_info efifb_dmi_list[];
@@ -72,8 +84,8 @@ static inline void sysfb_apply_efi_quirks(struct platform_device *pd)
 
 bool sysfb_parse_mode(const struct screen_info *si,
                      struct simplefb_platform_data *mode);
-int sysfb_create_simplefb(const struct screen_info *si,
-                         const struct simplefb_platform_data *mode);
+struct platform_device *sysfb_create_simplefb(const struct screen_info *si,
+                                             const struct simplefb_platform_data *mode);
 
 #else /* CONFIG_SYSFB_SIMPLE */
 
@@ -83,10 +95,10 @@ static inline bool sysfb_parse_mode(const struct screen_info *si,
        return false;
 }
 
-static inline int sysfb_create_simplefb(const struct screen_info *si,
-                                        const struct simplefb_platform_data *mode)
+static inline struct platform_device *sysfb_create_simplefb(const struct screen_info *si,
+                                                           const struct simplefb_platform_data *mode)
 {
-       return -EINVAL;
+       return ERR_PTR(-EINVAL);
 }
 
 #endif /* CONFIG_SYSFB_SIMPLE */
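With the return type changed from int to a platform_device pointer, callers move from an errno check to the ERR_PTR convention; a minimal sketch:

	static int example_register_simplefb(const struct screen_info *si,
					     const struct simplefb_platform_data *mode)
	{
		struct platform_device *pd = sysfb_create_simplefb(si, mode);

		return IS_ERR(pd) ? PTR_ERR(pd) : 0;
	}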
index 1168302..a9fbe22 100644 (file)
@@ -46,6 +46,36 @@ static inline unsigned int inner_tcp_hdrlen(const struct sk_buff *skb)
        return inner_tcp_hdr(skb)->doff * 4;
 }
 
+/**
+ * skb_tcp_all_headers - Returns size of all headers for a TCP packet
+ * @skb: buffer
+ *
+ * Used in TX path, for a packet known to be a TCP one.
+ *
+ * if (skb_is_gso(skb)) {
+ *         int hlen = skb_tcp_all_headers(skb);
+ *         ...
+ */
+static inline int skb_tcp_all_headers(const struct sk_buff *skb)
+{
+       return skb_transport_offset(skb) + tcp_hdrlen(skb);
+}
+
+/**
+ * skb_inner_tcp_all_headers - Returns size of all headers for an encap TCP packet
+ * @skb: buffer
+ *
+ * Used in TX path, for a packet known to be a TCP one.
+ *
+ * if (skb_is_gso(skb) && skb->encapsulation) {
+ *         int hlen = skb_inner_tcp_all_headers(skb);
+ *         ...
+ */
+static inline int skb_inner_tcp_all_headers(const struct sk_buff *skb)
+{
+       return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
+}
+
 static inline unsigned int tcp_optlen(const struct sk_buff *skb)
 {
        return (tcp_hdr(skb)->doff - 5) * 4;
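A slightly fuller TX-path sketch combining the two new helpers (wrapper name hypothetical):

	static int example_tx_hdr_len(const struct sk_buff *skb)
	{
		if (skb->encapsulation)
			return skb_inner_tcp_all_headers(skb);
		return skb_tcp_all_headers(skb);
	}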
index 81b9686..2fb8232 100644 (file)
@@ -20,6 +20,9 @@ struct itimerspec64 {
        struct timespec64 it_value;
 };
 
+/* Parameters used to convert the timespec values: */
+#define PSEC_PER_NSEC                  1000L
+
 /* Located here for timespec[64]_valid_strict */
 #define TIME64_MAX                     ((s64)~((u64)1 << 63))
 #define TIME64_MIN                     (-TIME64_MAX - 1)
index 49c7c32..b47c2e7 100644 (file)
@@ -257,6 +257,7 @@ void virtio_device_ready(struct virtio_device *dev)
 
        WARN_ON(status & VIRTIO_CONFIG_S_DRIVER_OK);
 
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
        /*
         * The virtio_synchronize_cbs() makes sure vring_interrupt()
         * will see the driver specific setup if it sees vq->broken
@@ -264,6 +265,7 @@ void virtio_device_ready(struct virtio_device *dev)
         */
        virtio_synchronize_cbs(dev);
        __virtio_unbreak_device(dev);
+#endif
        /*
         * The transport should ensure the visibility of vq->broken
         * before setting DRIVER_OK. See the comments for the transport
diff --git a/include/linux/visorbus.h b/include/linux/visorbus.h
deleted file mode 100644 (file)
index 0d8bd67..0000000
+++ /dev/null
@@ -1,344 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0+
-/*
- * Copyright (C) 2010 - 2013 UNISYS CORPORATION
- * All rights reserved.
- */
-
-/*
- *  This header file is to be included by other kernel mode components that
- *  implement a particular kind of visor_device.  Each of these other kernel
- *  mode components is called a visor device driver.  Refer to visortemplate
- *  for a minimal sample visor device driver.
- *
- *  There should be nothing in this file that is private to the visorbus
- *  bus implementation itself.
- */
-
-#ifndef __VISORBUS_H__
-#define __VISORBUS_H__
-
-#include <linux/device.h>
-
-#define VISOR_CHANNEL_SIGNATURE ('L' << 24 | 'N' << 16 | 'C' << 8 | 'E')
-
-/*
- * enum channel_serverstate
- * @CHANNELSRV_UNINITIALIZED: Channel is in an undefined state.
- * @CHANNELSRV_READY:        Channel has been initialized by server.
- */
-enum channel_serverstate {
-       CHANNELSRV_UNINITIALIZED = 0,
-       CHANNELSRV_READY = 1
-};
-
-/*
- * enum channel_clientstate
- * @CHANNELCLI_DETACHED:
- * @CHANNELCLI_DISABLED:  Client can see channel but is NOT allowed to use it
- *                       unless given TBD* explicit request
- *                       (should actually be < DETACHED).
- * @CHANNELCLI_ATTACHING: Legacy EFI client request for EFI server to attach.
- * @CHANNELCLI_ATTACHED:  Idle, but client may want to use channel any time.
- * @CHANNELCLI_BUSY:     Client either wants to use or is using channel.
- * @CHANNELCLI_OWNED:    "No worries" state - client can access channel
- *                       anytime.
- */
-enum channel_clientstate {
-       CHANNELCLI_DETACHED = 0,
-       CHANNELCLI_DISABLED = 1,
-       CHANNELCLI_ATTACHING = 2,
-       CHANNELCLI_ATTACHED = 3,
-       CHANNELCLI_BUSY = 4,
-       CHANNELCLI_OWNED = 5
-};
-
-/*
- * Values for VISOR_CHANNEL_PROTOCOL.Features: This define exists so that
- * a guest can look at the FeatureFlags in the io channel, and configure the
- * driver to use interrupts or not based on this setting. All feature bits for
- * all channels should be defined here. The io channel feature bits are defined
- * below.
- */
-#define VISOR_DRIVER_ENABLES_INTS (0x1ULL << 1)
-#define VISOR_CHANNEL_IS_POLLING (0x1ULL << 3)
-#define VISOR_IOVM_OK_DRIVER_DISABLING_INTS (0x1ULL << 4)
-#define VISOR_DRIVER_DISABLES_INTS (0x1ULL << 5)
-#define VISOR_DRIVER_ENHANCED_RCVBUF_CHECKING (0x1ULL << 6)
-
-/*
- * struct channel_header - Common Channel Header
- * @signature:        Signature.
- * @legacy_state:      DEPRECATED - being replaced by.
- * @header_size:       sizeof(struct channel_header).
- * @size:             Total size of this channel in bytes.
- * @features:         Flags to modify behavior.
- * @chtype:           Channel type: data, bus, control, etc..
- * @partition_handle:  ID of guest partition.
- * @handle:           Device number of this channel in client.
- * @ch_space_offset:   Offset in bytes to channel specific area.
- * @version_id:               Struct channel_header Version ID.
- * @partition_index:   Index of guest partition.
- * @zone_uuid:        Guid of Channel's zone.
- * @cli_str_offset:    Offset from channel header to null-terminated
- *                    ClientString (0 if ClientString not present).
- * @cli_state_boot:    CHANNEL_CLIENTSTATE of pre-boot EFI client of this
- *                    channel.
- * @cmd_state_cli:     CHANNEL_COMMANDSTATE (overloaded in Windows drivers, see
- *                    ServerStateUp, ServerStateDown, etc).
- * @cli_state_os:      CHANNEL_CLIENTSTATE of Guest OS client of this channel.
- * @ch_characteristic: CHANNEL_CHARACTERISTIC_<xxx>.
- * @cmd_state_srv:     CHANNEL_COMMANDSTATE (overloaded in Windows drivers, see
- *                    ServerStateUp, ServerStateDown, etc).
- * @srv_state:        CHANNEL_SERVERSTATE.
- * @cli_error_boot:    Bits to indicate err states for boot clients, so err
- *                    messages can be throttled.
- * @cli_error_os:      Bits to indicate err states for OS clients, so err
- *                    messages can be throttled.
- * @filler:           Pad out to 128 byte cacheline.
- * @recover_channel:   Please add all new single-byte values below here.
- */
-struct channel_header {
-       u64 signature;
-       u32 legacy_state;
-       /* SrvState, CliStateBoot, and CliStateOS below */
-       u32 header_size;
-       u64 size;
-       u64 features;
-       guid_t chtype;
-       u64 partition_handle;
-       u64 handle;
-       u64 ch_space_offset;
-       u32 version_id;
-       u32 partition_index;
-       guid_t zone_guid;
-       u32 cli_str_offset;
-       u32 cli_state_boot;
-       u32 cmd_state_cli;
-       u32 cli_state_os;
-       u32 ch_characteristic;
-       u32 cmd_state_srv;
-       u32 srv_state;
-       u8 cli_error_boot;
-       u8 cli_error_os;
-       u8 filler[1];
-       u8 recover_channel;
-} __packed;
-
-#define VISOR_CHANNEL_ENABLE_INTS (0x1ULL << 0)
-
-/*
- * struct signal_queue_header - Subheader for the Signal Type variation of the
- *                              Common Channel.
- * @version:         SIGNAL_QUEUE_HEADER Version ID.
- * @chtype:          Queue type: storage, network.
- * @size:            Total size of this queue in bytes.
- * @sig_base_offset:  Offset to signal queue area.
- * @features:        Flags to modify behavior.
- * @num_sent:        Total # of signals placed in this queue.
- * @num_overflows:    Total # of inserts failed due to full queue.
- * @signal_size:      Total size of a signal for this queue.
- * @max_slots:        Max # of slots in queue, 1 slot is always empty.
- * @max_signals:      Max # of signals in queue (MaxSignalSlots-1).
- * @head:            Queue head signal #.
- * @num_received:     Total # of signals removed from this queue.
- * @tail:            Queue tail signal.
- * @reserved1:       Reserved field.
- * @reserved2:       Reserved field.
- * @client_queue:
- * @num_irq_received: Total # of Interrupts received. This is incremented by the
- *                   ISR in the guest windows driver.
- * @num_empty:       Number of times that visor_signal_remove is called and
- *                   returned Empty Status.
- * @errorflags:              Error bits set during SignalReinit to denote trouble with
- *                   client's fields.
- * @filler:          Pad out to 64 byte cacheline.
- */
-struct signal_queue_header {
-       /* 1st cache line */
-       u32 version;
-       u32 chtype;
-       u64 size;
-       u64 sig_base_offset;
-       u64 features;
-       u64 num_sent;
-       u64 num_overflows;
-       u32 signal_size;
-       u32 max_slots;
-       u32 max_signals;
-       u32 head;
-       /* 2nd cache line */
-       u64 num_received;
-       u32 tail;
-       u32 reserved1;
-       u64 reserved2;
-       u64 client_queue;
-       u64 num_irq_received;
-       u64 num_empty;
-       u32 errorflags;
-       u8 filler[12];
-} __packed;
-
-/* VISORCHANNEL Guids */
-/* {414815ed-c58c-11da-95a9-00e08161165f} */
-#define VISOR_VHBA_CHANNEL_GUID \
-       GUID_INIT(0x414815ed, 0xc58c, 0x11da, \
-                 0x95, 0xa9, 0x0, 0xe0, 0x81, 0x61, 0x16, 0x5f)
-#define VISOR_VHBA_CHANNEL_GUID_STR \
-       "414815ed-c58c-11da-95a9-00e08161165f"
-struct visorchipset_state {
-       u32 created:1;
-       u32 attached:1;
-       u32 configured:1;
-       u32 running:1;
-       /* Remaining bits in this 32-bit word are reserved. */
-};
-
-/**
- * struct visor_device - A device type for things "plugged" into the visorbus
- *                       bus
- * @visorchannel:              Points to the channel that the device is
- *                             associated with.
- * @channel_type_guid:         Identifies the channel type to the bus driver.
- * @device:                    Device struct meant for use by the bus driver
- *                             only.
- * @list_all:                  Used by the bus driver to enumerate devices.
- * @timer:                     Timer fired periodically to do interrupt-type
- *                             activity.
- * @being_removed:             Indicates that the device is being removed from
- *                             the bus. Private bus driver use only.
- * @visordriver_callback_lock: Used by the bus driver to lock when adding and
- *                             removing devices.
- * @pausing:                   Indicates that a change towards a paused state
- *                             is in progress. Only modified by the bus driver.
- * @resuming:                  Indicates that a change towards a running state
- *                             is in progress. Only modified by the bus driver.
- * @chipset_bus_no:            Private field used by the bus driver.
- * @chipset_dev_no:            Private field used by the bus driver.
- * @state:                     Used to indicate the current state of the
- *                             device.
- * @inst:                      Unique GUID for this instance of the device.
- * @name:                      Name of the device.
- * @pending_msg_hdr:           For private use by bus driver to respond to
- *                             hypervisor requests.
- * @vbus_hdr_info:             A pointer to header info. Private use by bus
- *                             driver.
- * @partition_guid:            Indicates client partition id. This should be the
- *                             same across all visor_devices in the current
- *                             guest. Private use by bus driver only.
- */
-struct visor_device {
-       struct visorchannel *visorchannel;
-       guid_t channel_type_guid;
-       /* These fields are for private use by the bus driver only. */
-       struct device device;
-       struct list_head list_all;
-       struct timer_list timer;
-       bool timer_active;
-       bool being_removed;
-       struct mutex visordriver_callback_lock; /* synchronize probe/remove */
-       bool pausing;
-       bool resuming;
-       u32 chipset_bus_no;
-       u32 chipset_dev_no;
-       struct visorchipset_state state;
-       guid_t inst;
-       u8 *name;
-       struct controlvm_message_header *pending_msg_hdr;
-       void *vbus_hdr_info;
-       guid_t partition_guid;
-       struct dentry *debugfs_dir;
-       struct dentry *debugfs_bus_info;
-};
-
-#define to_visor_device(x) container_of(x, struct visor_device, device)
-
-typedef void (*visorbus_state_complete_func) (struct visor_device *dev,
-                                             int status);
-
-/*
- * This struct describes a specific visor channel, by providing its GUID, name,
- * and sizes.
- */
-struct visor_channeltype_descriptor {
-       const guid_t guid;
-       const char *name;
-       u64 min_bytes;
-       u32 version;
-};
-
-/**
- * struct visor_driver - Information provided by each visor driver when it
- *                       registers with the visorbus driver
- * @name:              Name of the visor driver.
- * @owner:             The module owner.
- * @channel_types:     Types of channels handled by this driver, ending with
- *                     a zero GUID. Our specialized BUS.match() method knows
- *                     about this list, and uses it to determine whether this
- *                     driver will in fact handle a new device that it has
- *                     detected.
- * @probe:             Called when a new device comes online, by our probe()
- *                     function specified by driver.probe() (triggered
- *                     ultimately by some call to driver_register(),
- *                     bus_add_driver(), or driver_attach()).
- * @remove:            Called when a new device is removed, by our remove()
- *                     function specified by driver.remove() (triggered
- *                     ultimately by some call to device_release_driver()).
- * @channel_interrupt: Called periodically, whenever there is a possibility
- *                     that "something interesting" may have happened to the
- *                     channel.
- * @pause:             Called to initiate a change of the device's state.  If
- *                     the return value is < 0, there was an error and the
- *                     state transition will NOT occur.  If the return value
- *                     is >= 0, then the state transition was INITIATED
- *                     successfully, and complete_func() will be called (or
- *                     was just called) with the final status when either the
- *                     state transition fails or completes successfully.
- * @resume:            Behaves similar to pause.
- * @driver:            Private reference to the device driver. For use by bus
- *                     driver only.
- */
-struct visor_driver {
-       const char *name;
-       struct module *owner;
-       struct visor_channeltype_descriptor *channel_types;
-       int (*probe)(struct visor_device *dev);
-       void (*remove)(struct visor_device *dev);
-       void (*channel_interrupt)(struct visor_device *dev);
-       int (*pause)(struct visor_device *dev,
-                    visorbus_state_complete_func complete_func);
-       int (*resume)(struct visor_device *dev,
-                     visorbus_state_complete_func complete_func);
-
-       /* These fields are for private use by the bus driver only. */
-       struct device_driver driver;
-};
-
-#define to_visor_driver(x) (container_of(x, struct visor_driver, driver))
-
-int visor_check_channel(struct channel_header *ch, struct device *dev,
-                       const guid_t *expected_uuid, char *chname,
-                       u64 expected_min_bytes, u32 expected_version,
-                       u64 expected_signature);
-
-int visorbus_register_visor_driver(struct visor_driver *drv);
-void visorbus_unregister_visor_driver(struct visor_driver *drv);
-int visorbus_read_channel(struct visor_device *dev,
-                         unsigned long offset, void *dest,
-                         unsigned long nbytes);
-int visorbus_write_channel(struct visor_device *dev,
-                          unsigned long offset, void *src,
-                          unsigned long nbytes);
-int visorbus_enable_channel_interrupts(struct visor_device *dev);
-void visorbus_disable_channel_interrupts(struct visor_device *dev);
-
-int visorchannel_signalremove(struct visorchannel *channel, u32 queue,
-                             void *msg);
-int visorchannel_signalinsert(struct visorchannel *channel, u32 queue,
-                             void *msg);
-bool visorchannel_signalempty(struct visorchannel *channel, u32 queue);
-const guid_t *visorchannel_get_guid(struct visorchannel *channel);
-
-#define BUS_ROOT_DEVICE UINT_MAX
-struct visor_device *visorbus_get_device_by_id(u32 bus_no, u32 dev_no,
-                                              struct visor_device *from);
-#endif
index a7ef624..480fa57 100644 (file)
@@ -16,12 +16,11 @@ void wait_for_unix_gc(void);
 struct sock *unix_get_socket(struct file *filp);
 struct sock *unix_peer_get(struct sock *sk);
 
-#define UNIX_HASH_SIZE 256
+#define UNIX_HASH_MOD  (256 - 1)
+#define UNIX_HASH_SIZE (256 * 2)
 #define UNIX_HASH_BITS 8
 
 extern unsigned int unix_tot_inflight;
-extern spinlock_t unix_table_locks[2 * UNIX_HASH_SIZE];
-extern struct hlist_head unix_socket_table[2 * UNIX_HASH_SIZE];
 
 struct unix_address {
        refcount_t      refcnt;
index 1618b76..d2aea5c 100644 (file)
@@ -67,6 +67,7 @@ enum {
        BOND_OPT_LACP_ACTIVE,
        BOND_OPT_MISSED_MAX,
        BOND_OPT_NS_TARGETS,
+       BOND_OPT_PRIO,
        BOND_OPT_LAST
 };
 
@@ -83,7 +84,10 @@ struct bond_opt_value {
        char *string;
        u64 value;
        u32 flags;
-       char extra[BOND_OPT_EXTRA_MAXLEN];
+       union {
+               char extra[BOND_OPT_EXTRA_MAXLEN];
+               struct net_device *slave_dev;
+       };
 };
 
 struct bonding;
@@ -133,13 +137,16 @@ static inline void __bond_opt_init(struct bond_opt_value *optval,
                optval->value = value;
        else if (string)
                optval->string = string;
-       else if (extra_len <= BOND_OPT_EXTRA_MAXLEN)
+
+       if (extra && extra_len <= BOND_OPT_EXTRA_MAXLEN)
                memcpy(optval->extra, extra, extra_len);
 }
 #define bond_opt_initval(optval, value) __bond_opt_init(optval, NULL, value, NULL, 0)
 #define bond_opt_initstr(optval, str) __bond_opt_init(optval, str, ULLONG_MAX, NULL, 0)
 #define bond_opt_initextra(optval, extra, extra_len) \
        __bond_opt_init(optval, NULL, ULLONG_MAX, extra, extra_len)
+#define bond_opt_slave_initval(optval, slave_dev, value) \
+       __bond_opt_init(optval, NULL, value, slave_dev, sizeof(struct net_device *))
 
 void bond_option_arp_ip_targets_clear(struct bonding *bond);
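A hedged sketch of the new per-slave initializer in use: the option value travels in @value and the target slave in the new union member (surrounding code hypothetical):

	static void example_init_prio(struct net_device *slave_dev, u64 prio)
	{
		struct bond_opt_value newval;

		bond_opt_slave_initval(&newval, slave_dev, prio);
		/* newval is then handed to the bond option machinery
		 * (BOND_OPT_PRIO) */
	}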
 #if IS_ENABLED(CONFIG_IPV6)
index cb904d3..6e78d65 100644 (file)
@@ -178,6 +178,7 @@ struct slave {
        u32    speed;
        u16    queue_id;
        u8     perm_hwaddr[MAX_ADDR_LEN];
+       int    prio;
        struct ad_slave_info *ad_info;
        struct tlb_slave_info tlb_info;
 #ifdef CONFIG_NET_POLL_CONTROLLER
index 14f0727..b902b31 100644 (file)
@@ -53,6 +53,8 @@ struct phylink_link_state;
 #define DSA_TAG_PROTO_SJA1110_VALUE            23
 #define DSA_TAG_PROTO_RTL8_4_VALUE             24
 #define DSA_TAG_PROTO_RTL8_4T_VALUE            25
+#define DSA_TAG_PROTO_RZN1_A5PSW_VALUE         26
+#define DSA_TAG_PROTO_LAN937X_VALUE            27
 
 enum dsa_tag_protocol {
        DSA_TAG_PROTO_NONE              = DSA_TAG_PROTO_NONE_VALUE,
@@ -81,6 +83,8 @@ enum dsa_tag_protocol {
        DSA_TAG_PROTO_SJA1110           = DSA_TAG_PROTO_SJA1110_VALUE,
        DSA_TAG_PROTO_RTL8_4            = DSA_TAG_PROTO_RTL8_4_VALUE,
        DSA_TAG_PROTO_RTL8_4T           = DSA_TAG_PROTO_RTL8_4T_VALUE,
+       DSA_TAG_PROTO_RZN1_A5PSW        = DSA_TAG_PROTO_RZN1_A5PSW_VALUE,
+       DSA_TAG_PROTO_LAN937X           = DSA_TAG_PROTO_LAN937X_VALUE,
 };
 
 struct dsa_switch;
@@ -888,8 +892,13 @@ struct dsa_switch_ops {
                                     struct ethtool_eth_mac_stats *mac_stats);
        void    (*get_eth_ctrl_stats)(struct dsa_switch *ds, int port,
                                      struct ethtool_eth_ctrl_stats *ctrl_stats);
+       void    (*get_rmon_stats)(struct dsa_switch *ds, int port,
+                                 struct ethtool_rmon_stats *rmon_stats,
+                                 const struct ethtool_rmon_hist_range **ranges);
        void    (*get_stats64)(struct dsa_switch *ds, int port,
                                   struct rtnl_link_stats64 *s);
+       void    (*get_pause_stats)(struct dsa_switch *ds, int port,
+                                  struct ethtool_pause_stats *pause_stats);
        void    (*self_test)(struct dsa_switch *ds, int port,
                             struct ethtool_test *etest, u64 *data);
 
index 6484095..7ac3138 100644 (file)
@@ -152,6 +152,7 @@ enum flow_action_id {
        FLOW_ACTION_PIPE,
        FLOW_ACTION_VLAN_PUSH_ETH,
        FLOW_ACTION_VLAN_POP_ETH,
+       FLOW_ACTION_CONTINUE,
        NUM_FLOW_ACTIONS,
 };
 
index c1b5dcd..daead5f 100644 (file)
@@ -253,6 +253,11 @@ struct inet_sock {
 #define IP_CMSG_CHECKSUM       BIT(7)
 #define IP_CMSG_RECVFRAGSIZE   BIT(8)
 
+static inline bool sk_is_inet(struct sock *sk)
+{
+       return sk->sk_family == AF_INET || sk->sk_family == AF_INET6;
+}
+
 /**
  * sk_to_full_sk - Access to a full socket
  * @sk: pointer to a socket
index 4d761ad..ac9cf72 100644 (file)
@@ -39,6 +39,7 @@ struct mptcp_ext {
                        infinite_map:1;
 };
 
+#define MPTCPOPT_HMAC_LEN      20
 #define MPTCP_RM_IDS_MAX       8
 
 struct mptcp_rm_list {
@@ -89,7 +90,7 @@ struct mptcp_out_options {
                        u32 nonce;
                        u32 token;
                        u64 thmac;
-                       u8 hmac[20];
+                       u8 hmac[MPTCPOPT_HMAC_LEN];
                };
        };
 #endif
index 87419f7..9f0bab0 100644 (file)
@@ -48,6 +48,7 @@ enum {
        NEIGH_VAR_RETRANS_TIME,
        NEIGH_VAR_BASE_REACHABLE_TIME,
        NEIGH_VAR_DELAY_PROBE_TIME,
+       NEIGH_VAR_INTERVAL_PROBE_TIME_MS,
        NEIGH_VAR_GC_STALETIME,
        NEIGH_VAR_QUEUE_LEN_BYTES,
        NEIGH_VAR_PROXY_QLEN,
index c4f5601..20a2992 100644 (file)
@@ -120,7 +120,9 @@ struct net {
        struct netns_core       core;
        struct netns_mib        mib;
        struct netns_packet     packet;
+#if IS_ENABLED(CONFIG_UNIX)
        struct netns_unix       unx;
+#endif
        struct netns_nexthop    nexthop;
        struct netns_ipv4       ipv4;
 #if IS_ENABLED(CONFIG_IPV6)
index 279ae0f..5c4e5a9 100644 (file)
@@ -1338,24 +1338,28 @@ void nft_unregister_flowtable_type(struct nf_flowtable_type *type);
 /**
  *     struct nft_traceinfo - nft tracing information and state
  *
+ *     @trace: other struct members are initialised
+ *     @nf_trace: copy of skb->nf_trace before rule evaluation
+ *     @type: event type (enum nft_trace_types)
+ *     @skbid: hash of skb to be used as trace id
+ *     @packet_dumped: packet headers sent in a previous traceinfo message
  *     @pkt: pktinfo currently processed
  *     @basechain: base chain currently processed
  *     @chain: chain currently processed
  *     @rule:  rule that was evaluated
  *     @verdict: verdict given by rule
- *     @type: event type (enum nft_trace_types)
- *     @packet_dumped: packet headers sent in a previous traceinfo message
- *     @trace: other struct members are initialised
  */
 struct nft_traceinfo {
+       bool                            trace;
+       bool                            nf_trace;
+       bool                            packet_dumped;
+       enum nft_trace_types            type:8;
+       u32                             skbid;
        const struct nft_pktinfo        *pkt;
        const struct nft_base_chain     *basechain;
        const struct nft_chain          *chain;
        const struct nft_rule_dp        *rule;
        const struct nft_verdict        *verdict;
-       enum nft_trace_types            type;
-       bool                            packet_dumped;
-       bool                            trace;
 };
 
 void nft_trace_init(struct nft_traceinfo *info, const struct nft_pktinfo *pkt,
index 91a3d7e..6f1a33d 100644 (file)
@@ -5,8 +5,14 @@
 #ifndef __NETNS_UNIX_H__
 #define __NETNS_UNIX_H__
 
+struct unix_table {
+       spinlock_t              *locks;
+       struct hlist_head       *buckets;
+};
+
 struct ctl_table_header;
 struct netns_unix {
+       struct unix_table       table;
        int                     sysctl_max_dgram_qlen;
        struct ctl_table_header *ctl;
 };
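With the hash table moved into struct netns_unix, lookups index per-netns buckets and locks; a hedged access sketch (helper names hypothetical):

	static inline struct hlist_head *example_unix_bucket(struct net *net,
							     unsigned int hash)
	{
		return &net->unx.table.buckets[hash];
	}

	static inline void example_unix_lock(struct net *net, unsigned int hash)
	{
		spin_lock(&net->unx.table.locks[hash]);
	}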
index 44a3553..3372a1f 100644 (file)
@@ -173,11 +173,28 @@ struct tc_taprio_qopt_offload {
        struct tc_taprio_sched_entry entries[];
 };
 
+#if IS_ENABLED(CONFIG_NET_SCH_TAPRIO)
+
 /* Reference counting */
 struct tc_taprio_qopt_offload *taprio_offload_get(struct tc_taprio_qopt_offload
                                                  *offload);
 void taprio_offload_free(struct tc_taprio_qopt_offload *offload);
 
+#else
+
+/* Reference counting */
+static inline struct tc_taprio_qopt_offload *
+taprio_offload_get(struct tc_taprio_qopt_offload *offload)
+{
+       return NULL;
+}
+
+static inline void taprio_offload_free(struct tc_taprio_qopt_offload *offload)
+{
+}
+
+#endif
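With the stubs in place, callers can use the reference-counting helpers unconditionally; a sketch (caller hypothetical):

	/* Compiles with or without CONFIG_NET_SCH_TAPRIO; the =n stubs
	 * return NULL and do nothing on free. */
	static void example_take_and_drop(struct tc_taprio_qopt_offload *offload)
	{
		struct tc_taprio_qopt_offload *ref = taprio_offload_get(offload);

		/* ... use ref ... */
		taprio_offload_free(ref);
	}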
+
 /* Ensure skb_mstamp_ns, which might have been populated with the txtime, is
  * not mistaken for a software timestamp, because this will otherwise prevent
  * the dispatch of hardware timestamps to the socket.
index d81eeeb..d224376 100644 (file)
@@ -32,7 +32,7 @@ int raw_rcv(struct sock *, struct sk_buff *);
 #define RAW_HTABLE_SIZE        MAX_INET_PROTOS
 
 struct raw_hashinfo {
-       rwlock_t lock;
+       spinlock_t lock;
        struct hlist_nulls_head ht[RAW_HTABLE_SIZE];
 };
 
@@ -40,7 +40,7 @@ static inline void raw_hashinfo_init(struct raw_hashinfo *hashinfo)
 {
        int i;
 
-       rwlock_init(&hashinfo->lock);
+       spin_lock_init(&hashinfo->lock);
        for (i = 0; i < RAW_HTABLE_SIZE; i++)
                INIT_HLIST_NULLS_HEAD(&hashinfo->ht[i], i);
 }
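The rwlock-to-spinlock conversion pairs with RCU-protected readers; a hedged writer-side sketch (bucket selection elided, function hypothetical):

	static void example_raw_hash_sk(struct raw_hashinfo *h, struct sock *sk,
					unsigned int bucket)
	{
		spin_lock(&h->lock);
		hlist_nulls_add_head_rcu(&sk->sk_nulls_node, &h->ht[bucket]);
		spin_unlock(&h->lock);
	}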
index 5bed1ea..0dd43c3 100644 (file)
@@ -1619,11 +1619,6 @@ static inline void sk_mem_charge(struct sock *sk, int size)
        sk->sk_forward_alloc -= size;
 }
 
-/* the following macros control memory reclaiming in mptcp_rmem_uncharge()
- */
-#define SK_RECLAIM_THRESHOLD   (1 << 21)
-#define SK_RECLAIM_CHUNK       (1 << 20)
-
 static inline void sk_mem_uncharge(struct sock *sk, int size)
 {
        if (!sk_has_account(sk))
@@ -2219,9 +2214,7 @@ static inline int skb_copy_to_page_nocache(struct sock *sk, struct iov_iter *fro
        if (err)
                return err;
 
-       skb->len             += copy;
-       skb->data_len        += copy;
-       skb->truesize        += copy;
+       skb_len_add(skb, copy);
        sk_wmem_queued_add(sk, copy);
        sk_mem_charge(sk, copy);
        return 0;
index a191486..88900b0 100644 (file)
@@ -65,15 +65,19 @@ struct _strp_msg {
 struct sk_skb_cb {
 #define SK_SKB_CB_PRIV_LEN 20
        unsigned char data[SK_SKB_CB_PRIV_LEN];
+       /* align strp on cache line boundary within skb->cb[] */
+       unsigned char pad[4];
        struct _strp_msg strp;
-       /* temp_reg is a temporary register used for bpf_convert_data_end_access
-        * when dst_reg == src_reg.
-        */
-       u64 temp_reg;
+
+       /* strp users' data follows */
        struct tls_msg {
                u8 control;
                u8 decrypted;
        } tls;
+       /* temp_reg is a temporary register used for bpf_convert_data_end_access
+        * when dst_reg == src_reg.
+        */
+       u64 temp_reg;
 };
 
 static inline struct strp_msg *strp_msg(struct sk_buff *skb)
index aa0171d..7dcdc97 100644 (file)
@@ -239,6 +239,9 @@ struct switchdev_notifier_info {
        const void *ctx;
 };
 
+/* Remember to update br_switchdev_fdb_populate() when adding
+ * new members to this structure
+ */
 struct switchdev_notifier_fdb_info {
        struct switchdev_notifier_info info; /* must be first */
        const unsigned char *addr;
index 8017f17..8742e13 100644 (file)
@@ -39,7 +39,6 @@
 #include <linux/crypto.h>
 #include <linux/socket.h>
 #include <linux/tcp.h>
-#include <linux/skmsg.h>
 #include <linux/mutex.h>
 #include <linux/netdevice.h>
 #include <linux/rcupdate.h>
@@ -50,6 +49,7 @@
 #include <crypto/aead.h>
 #include <uapi/linux/tls.h>
 
+struct tls_rec;
 
 /* Maximum data size carried in a TLS record */
 #define TLS_MAX_PAYLOAD_SIZE           ((size_t)1 << 14)
@@ -66,6 +66,7 @@
 #define MAX_IV_SIZE                    16
 #define TLS_TAG_SIZE                   16
 #define TLS_MAX_REC_SEQ_SIZE           8
+#define TLS_MAX_AAD_SIZE               TLS_AAD_SPACE_SIZE
 
 /* For CCM mode, the full 16-bytes of IV is made of '4' fields of given sizes.
  *
 #define TLS_AES_CCM_IV_B0_BYTE         2
 #define TLS_SM4_CCM_IV_B0_BYTE         2
 
-#define __TLS_INC_STATS(net, field)                            \
-       __SNMP_INC_STATS((net)->mib.tls_statistics, field)
-#define TLS_INC_STATS(net, field)                              \
-       SNMP_INC_STATS((net)->mib.tls_statistics, field)
-#define TLS_DEC_STATS(net, field)                              \
-       SNMP_DEC_STATS((net)->mib.tls_statistics, field)
-
 enum {
        TLS_BASE,
        TLS_SW,
@@ -92,32 +86,6 @@ enum {
        TLS_NUM_CONFIG,
 };
 
-/* TLS records are maintained in 'struct tls_rec'. It stores the memory pages
- * allocated or mapped for each TLS record. After encryption, the records are
- * stores in a linked list.
- */
-struct tls_rec {
-       struct list_head list;
-       int tx_ready;
-       int tx_flags;
-
-       struct sk_msg msg_plaintext;
-       struct sk_msg msg_encrypted;
-
-       /* AAD | msg_plaintext.sg.data | sg_tag */
-       struct scatterlist sg_aead_in[2];
-       /* AAD | msg_encrypted.sg.data (data contains overhead for hdr & iv & tag) */
-       struct scatterlist sg_aead_out[2];
-
-       char content_type;
-       struct scatterlist sg_content_type;
-
-       char aad_space[TLS_AAD_SPACE_SIZE];
-       u8 iv_data[MAX_IV_SIZE];
-       struct aead_request aead_req;
-       u8 aead_req_ctx[];
-};
-
 struct tx_work {
        struct delayed_work work;
        struct sock *sk;
@@ -149,6 +117,7 @@ struct tls_sw_context_rx {
 
        struct sk_buff *recv_pkt;
        u8 async_capable:1;
+       u8 zc_capable:1;
        atomic_t decrypt_pending;
        /* protect crypto_wait with decrypt_pending*/
        spinlock_t decrypt_compl_lock;
@@ -239,6 +208,7 @@ struct tls_context {
        u8 tx_conf:3;
        u8 rx_conf:3;
        u8 zerocopy_sendfile:1;
+       u8 rx_no_pad:1;
 
        int (*push_pending_record)(struct sock *sk, int flags);
        void (*sk_write_space)(struct sock *sk);
@@ -346,43 +316,6 @@ struct tls_offload_context_rx {
 #define TLS_OFFLOAD_CONTEXT_SIZE_RX                                    \
        (sizeof(struct tls_offload_context_rx) + TLS_DRIVER_STATE_SIZE_RX)
 
-struct tls_context *tls_ctx_create(struct sock *sk);
-void tls_ctx_free(struct sock *sk, struct tls_context *ctx);
-void update_sk_prot(struct sock *sk, struct tls_context *ctx);
-
-int wait_on_pending_writer(struct sock *sk, long *timeo);
-int tls_sk_query(struct sock *sk, int optname, char __user *optval,
-               int __user *optlen);
-int tls_sk_attach(struct sock *sk, int optname, char __user *optval,
-                 unsigned int optlen);
-void tls_err_abort(struct sock *sk, int err);
-
-int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
-void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
-void tls_sw_strparser_done(struct tls_context *tls_ctx);
-int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
-int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
-                          int offset, size_t size, int flags);
-int tls_sw_sendpage(struct sock *sk, struct page *page,
-                   int offset, size_t size, int flags);
-void tls_sw_cancel_work_tx(struct tls_context *tls_ctx);
-void tls_sw_release_resources_tx(struct sock *sk);
-void tls_sw_free_ctx_tx(struct tls_context *tls_ctx);
-void tls_sw_free_resources_rx(struct sock *sk);
-void tls_sw_release_resources_rx(struct sock *sk);
-void tls_sw_free_ctx_rx(struct tls_context *tls_ctx);
-int tls_sw_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
-                  int flags, int *addr_len);
-bool tls_sw_sock_is_readable(struct sock *sk);
-ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
-                          struct pipe_inode_info *pipe,
-                          size_t len, unsigned int flags);
-
-int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
-int tls_device_sendpage(struct sock *sk, struct page *page,
-                       int offset, size_t size, int flags);
-int tls_tx_records(struct sock *sk, int flags);
-
 struct tls_record_info *tls_get_record(struct tls_offload_context_tx *context,
                                       u32 seq, u64 *p_record_sn);
 
@@ -396,58 +329,6 @@ static inline u32 tls_record_start_seq(struct tls_record_info *rec)
        return rec->end_seq - rec->len;
 }
 
-int tls_push_sg(struct sock *sk, struct tls_context *ctx,
-               struct scatterlist *sg, u16 first_offset,
-               int flags);
-int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
-                           int flags);
-void tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
-
-static inline struct tls_msg *tls_msg(struct sk_buff *skb)
-{
-       struct sk_skb_cb *scb = (struct sk_skb_cb *)skb->cb;
-
-       return &scb->tls;
-}
-
-static inline bool tls_is_partially_sent_record(struct tls_context *ctx)
-{
-       return !!ctx->partially_sent_record;
-}
-
-static inline bool tls_is_pending_open_record(struct tls_context *tls_ctx)
-{
-       return tls_ctx->pending_open_record_frags;
-}
-
-static inline bool is_tx_ready(struct tls_sw_context_tx *ctx)
-{
-       struct tls_rec *rec;
-
-       rec = list_first_entry(&ctx->tx_list, struct tls_rec, list);
-       if (!rec)
-               return false;
-
-       return READ_ONCE(rec->tx_ready);
-}
-
-static inline u16 tls_user_config(struct tls_context *ctx, bool tx)
-{
-       u16 config = tx ? ctx->tx_conf : ctx->rx_conf;
-
-       switch (config) {
-       case TLS_BASE:
-               return TLS_CONF_BASE;
-       case TLS_SW:
-               return TLS_CONF_SW;
-       case TLS_HW:
-               return TLS_CONF_HW;
-       case TLS_HW_RECORD:
-               return TLS_CONF_HW_RECORD;
-       }
-       return 0;
-}
-
 struct sk_buff *
 tls_validate_xmit_skb(struct sock *sk, struct net_device *dev,
                      struct sk_buff *skb);
@@ -466,31 +347,6 @@ static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk)
 #endif
 }
 
-static inline bool tls_bigint_increment(unsigned char *seq, int len)
-{
-       int i;
-
-       for (i = len - 1; i >= 0; i--) {
-               ++seq[i];
-               if (seq[i] != 0)
-                       break;
-       }
-
-       return (i == -1);
-}
-
-static inline void tls_bigint_subtract(unsigned char *seq, int  n)
-{
-       u64 rcd_sn;
-       __be64 *p;
-
-       BUILD_BUG_ON(TLS_MAX_REC_SEQ_SIZE != 8);
-
-       p = (__be64 *)seq;
-       rcd_sn = be64_to_cpu(*p);
-       *p = cpu_to_be64(rcd_sn - n);
-}
-
 static inline struct tls_context *tls_get_ctx(const struct sock *sk)
 {
        struct inet_connection_sock *icsk = inet_csk(sk);
@@ -501,82 +357,6 @@ static inline struct tls_context *tls_get_ctx(const struct sock *sk)
        return (__force void *)icsk->icsk_ulp_data;
 }
 
-static inline void tls_advance_record_sn(struct sock *sk,
-                                        struct tls_prot_info *prot,
-                                        struct cipher_context *ctx)
-{
-       if (tls_bigint_increment(ctx->rec_seq, prot->rec_seq_size))
-               tls_err_abort(sk, -EBADMSG);
-
-       if (prot->version != TLS_1_3_VERSION &&
-           prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305)
-               tls_bigint_increment(ctx->iv + prot->salt_size,
-                                    prot->iv_size);
-}
-
-static inline void tls_fill_prepend(struct tls_context *ctx,
-                            char *buf,
-                            size_t plaintext_len,
-                            unsigned char record_type)
-{
-       struct tls_prot_info *prot = &ctx->prot_info;
-       size_t pkt_len, iv_size = prot->iv_size;
-
-       pkt_len = plaintext_len + prot->tag_size;
-       if (prot->version != TLS_1_3_VERSION &&
-           prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305) {
-               pkt_len += iv_size;
-
-               memcpy(buf + TLS_NONCE_OFFSET,
-                      ctx->tx.iv + prot->salt_size, iv_size);
-       }
-
-       /* we cover nonce explicit here as well, so buf should be of
-        * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE
-        */
-       buf[0] = prot->version == TLS_1_3_VERSION ?
-                  TLS_RECORD_TYPE_DATA : record_type;
-       /* Note that VERSION must be TLS_1_2 for both TLS1.2 and TLS1.3 */
-       buf[1] = TLS_1_2_VERSION_MINOR;
-       buf[2] = TLS_1_2_VERSION_MAJOR;
-       /* we can use IV for nonce explicit according to spec */
-       buf[3] = pkt_len >> 8;
-       buf[4] = pkt_len & 0xFF;
-}
-
-static inline void tls_make_aad(char *buf,
-                               size_t size,
-                               char *record_sequence,
-                               unsigned char record_type,
-                               struct tls_prot_info *prot)
-{
-       if (prot->version != TLS_1_3_VERSION) {
-               memcpy(buf, record_sequence, prot->rec_seq_size);
-               buf += 8;
-       } else {
-               size += prot->tag_size;
-       }
-
-       buf[0] = prot->version == TLS_1_3_VERSION ?
-                 TLS_RECORD_TYPE_DATA : record_type;
-       buf[1] = TLS_1_2_VERSION_MAJOR;
-       buf[2] = TLS_1_2_VERSION_MINOR;
-       buf[3] = size >> 8;
-       buf[4] = size & 0xFF;
-}
-
-static inline void xor_iv_with_seq(struct tls_prot_info *prot, char *iv, char *seq)
-{
-       int i;
-
-       if (prot->version == TLS_1_3_VERSION ||
-           prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305) {
-               for (i = 0; i < 8; i++)
-                       iv[i + 4] ^= seq[i];
-       }
-}
-
-
 static inline struct tls_sw_context_rx *tls_sw_ctx_rx(
                const struct tls_context *tls_ctx)
 {
@@ -613,9 +393,6 @@ static inline bool tls_sw_has_ctx_rx(const struct sock *sk)
        return !!tls_sw_ctx_rx(ctx);
 }
 
-void tls_sw_write_space(struct sock *sk, struct tls_context *ctx);
-void tls_device_write_space(struct sock *sk, struct tls_context *ctx);
-
 static inline struct tls_offload_context_rx *
 tls_offload_ctx_rx(const struct tls_context *tls_ctx)
 {
@@ -690,31 +467,11 @@ static inline bool tls_offload_tx_resync_pending(struct sock *sk)
        return ret;
 }
 
-int __net_init tls_proc_init(struct net *net);
-void __net_exit tls_proc_fini(struct net *net);
-
-int tls_proccess_cmsg(struct sock *sk, struct msghdr *msg,
-                     unsigned char *record_type);
-int decrypt_skb(struct sock *sk, struct sk_buff *skb,
-               struct scatterlist *sgout);
 struct sk_buff *tls_encrypt_skb(struct sk_buff *skb);
 
-int tls_sw_fallback_init(struct sock *sk,
-                        struct tls_offload_context_tx *offload_ctx,
-                        struct tls_crypto_info *crypto_info);
-
 #ifdef CONFIG_TLS_DEVICE
-void tls_device_init(void);
-void tls_device_cleanup(void);
 void tls_device_sk_destruct(struct sock *sk);
-int tls_set_device_offload(struct sock *sk, struct tls_context *ctx);
-void tls_device_free_resources_tx(struct sock *sk);
-int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx);
-void tls_device_offload_cleanup_rx(struct sock *sk);
-void tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq);
 void tls_offload_tx_resync_request(struct sock *sk, u32 got_seq, u32 exp_seq);
-int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
-                        struct sk_buff *skb, struct strp_msg *rxm);
 
 static inline bool tls_is_sk_rx_device_offloaded(struct sock *sk)
 {
@@ -723,33 +480,5 @@ static inline bool tls_is_sk_rx_device_offloaded(struct sock *sk)
                return false;
        return tls_get_ctx(sk)->rx_conf == TLS_HW;
 }
-#else
-static inline void tls_device_init(void) {}
-static inline void tls_device_cleanup(void) {}
-
-static inline int
-tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
-{
-       return -EOPNOTSUPP;
-}
-
-static inline void tls_device_free_resources_tx(struct sock *sk) {}
-
-static inline int
-tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
-{
-       return -EOPNOTSUPP;
-}
-
-static inline void tls_device_offload_cleanup_rx(struct sock *sk) {}
-static inline void
-tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq) {}
-
-static inline int
-tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
-                    struct sk_buff *skb, struct strp_msg *rxm)
-{
-       return 0;
-}
 #endif
 #endif /* _TLS_OFFLOAD_H */
index 3737570..ac151ec 100644 (file)
@@ -670,6 +670,8 @@ struct ocelot_port {
        /* VLAN that untagged frames are classified to, on ingress */
        const struct ocelot_bridge_vlan *pvid_vlan;
 
+       struct tc_taprio_qopt_offload   *taprio;
+
        phy_interface_t                 phy_mode;
 
        unsigned int                    ptp_skbs_in_flight;
@@ -692,9 +694,6 @@ struct ocelot_port {
        int                             bridge_num;
 
        int                             speed;
-
-       /* Store the AdminBaseTime of EST fetched from userspace. */
-       s64                             base_time;
 };
 
 struct ocelot {
index f20f5f8..b276dcb 100644 (file)
@@ -408,8 +408,6 @@ struct snd_soc_jack_pin;
 
 struct snd_soc_jack_gpio;
 
-typedef int (*hw_write_t)(void *,const char* ,int);
-
 enum snd_soc_pcm_subclass {
        SND_SOC_PCM_CLASS_PCM   = 0,
        SND_SOC_PCM_CLASS_BE    = 1,
index 66fcc5a..aa2f951 100644 (file)
@@ -158,6 +158,8 @@ TRACE_EVENT(io_uring_queue_async_work,
                __field(  unsigned int,                 flags           )
                __field(  struct io_wq_work *,          work            )
                __field(  int,                          rw              )
+
+               __string( op_str, io_uring_get_opcode(opcode)   )
        ),
 
        TP_fast_assign(
@@ -168,11 +170,13 @@ TRACE_EVENT(io_uring_queue_async_work,
                __entry->opcode         = opcode;
                __entry->work           = work;
                __entry->rw             = rw;
+
+               __assign_str(op_str, io_uring_get_opcode(opcode));
        ),
 
        TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s, flags 0x%x, %s queue, work %p",
                __entry->ctx, __entry->req, __entry->user_data,
-               io_uring_get_opcode(__entry->opcode),
+               __get_str(op_str),
                __entry->flags, __entry->rw ? "hashed" : "normal", __entry->work)
 );
 
@@ -198,6 +202,8 @@ TRACE_EVENT(io_uring_defer,
                __field(  void *,               req     )
                __field(  unsigned long long,   data    )
                __field(  u8,                   opcode  )
+
+               __string( op_str, io_uring_get_opcode(opcode) )
        ),
 
        TP_fast_assign(
@@ -205,11 +211,13 @@ TRACE_EVENT(io_uring_defer,
                __entry->req    = req;
                __entry->data   = user_data;
                __entry->opcode = opcode;
+
+               __assign_str(op_str, io_uring_get_opcode(opcode));
        ),
 
        TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s",
                __entry->ctx, __entry->req, __entry->data,
-               io_uring_get_opcode(__entry->opcode))
+               __get_str(op_str))
 );
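The recurring change across these trace events resolves the opcode name with __string()/__assign_str() when the event fires, rather than calling io_uring_get_opcode() at print time; the generic shape of the pattern (event-specific fields omitted):

	TP_STRUCT__entry(
		__string( op_str, io_uring_get_opcode(opcode) )
	),
	TP_fast_assign(
		__assign_str(op_str, io_uring_get_opcode(opcode));
	),
	TP_printk("opcode %s", __get_str(op_str))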
 
 /**
@@ -298,6 +306,8 @@ TRACE_EVENT(io_uring_fail_link,
                __field(  unsigned long long,   user_data       )
                __field(  u8,                   opcode          )
                __field(  void *,               link            )
+
+               __string( op_str, io_uring_get_opcode(opcode) )
        ),
 
        TP_fast_assign(
@@ -306,11 +316,13 @@ TRACE_EVENT(io_uring_fail_link,
                __entry->user_data      = user_data;
                __entry->opcode         = opcode;
                __entry->link           = link;
+
+               __assign_str(op_str, io_uring_get_opcode(opcode));
        ),
 
        TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s, link %p",
                __entry->ctx, __entry->req, __entry->user_data,
-               io_uring_get_opcode(__entry->opcode), __entry->link)
+               __get_str(op_str), __entry->link)
 );
 
 /**
@@ -390,6 +402,8 @@ TRACE_EVENT(io_uring_submit_sqe,
                __field(  u32,                  flags           )
                __field(  bool,                 force_nonblock  )
                __field(  bool,                 sq_thread       )
+
+               __string( op_str, io_uring_get_opcode(opcode) )
        ),
 
        TP_fast_assign(
@@ -400,11 +414,13 @@ TRACE_EVENT(io_uring_submit_sqe,
                __entry->flags          = flags;
                __entry->force_nonblock = force_nonblock;
                __entry->sq_thread      = sq_thread;
+
+               __assign_str(op_str, io_uring_get_opcode(opcode));
        ),
 
        TP_printk("ring %p, req %p, user_data 0x%llx, opcode %s, flags 0x%x, "
                  "non block %d, sq_thread %d", __entry->ctx, __entry->req,
-                 __entry->user_data, io_uring_get_opcode(__entry->opcode),
+                 __entry->user_data, __get_str(op_str),
                  __entry->flags, __entry->force_nonblock, __entry->sq_thread)
 );
 
@@ -435,6 +451,8 @@ TRACE_EVENT(io_uring_poll_arm,
                __field(  u8,                   opcode          )
                __field(  int,                  mask            )
                __field(  int,                  events          )
+
+               __string( op_str, io_uring_get_opcode(opcode) )
        ),
 
        TP_fast_assign(
@@ -444,11 +462,13 @@ TRACE_EVENT(io_uring_poll_arm,
                __entry->opcode         = opcode;
                __entry->mask           = mask;
                __entry->events         = events;
+
+               __assign_str(op_str, io_uring_get_opcode(opcode));
        ),
 
        TP_printk("ring %p, req %p, user_data 0x%llx, opcode %s, mask 0x%x, events 0x%x",
                  __entry->ctx, __entry->req, __entry->user_data,
-                 io_uring_get_opcode(__entry->opcode),
+                 __get_str(op_str),
                  __entry->mask, __entry->events)
 );
 
@@ -474,6 +494,8 @@ TRACE_EVENT(io_uring_task_add,
                __field(  unsigned long long,   user_data       )
                __field(  u8,                   opcode          )
                __field(  int,                  mask            )
+
+               __string( op_str, io_uring_get_opcode(opcode) )
        ),
 
        TP_fast_assign(
@@ -482,11 +504,13 @@ TRACE_EVENT(io_uring_task_add,
                __entry->user_data      = user_data;
                __entry->opcode         = opcode;
                __entry->mask           = mask;
+
+               __assign_str(op_str, io_uring_get_opcode(opcode));
        ),
 
        TP_printk("ring %p, req %p, user_data 0x%llx, opcode %s, mask %x",
                __entry->ctx, __entry->req, __entry->user_data,
-               io_uring_get_opcode(__entry->opcode),
+               __get_str(op_str),
                __entry->mask)
 );
 
@@ -523,6 +547,8 @@ TRACE_EVENT(io_uring_req_failed,
                __field( u64,                   pad1            )
                __field( u64,                   addr3           )
                __field( int,                   error           )
+
+               __string( op_str, io_uring_get_opcode(sqe->opcode) )
        ),
 
        TP_fast_assign(
@@ -542,6 +568,8 @@ TRACE_EVENT(io_uring_req_failed,
                __entry->pad1           = sqe->__pad2[0];
                __entry->addr3          = sqe->addr3;
                __entry->error          = error;
+
+               __assign_str(op_str, io_uring_get_opcode(sqe->opcode));
        ),
 
        TP_printk("ring %p, req %p, user_data 0x%llx, "
@@ -550,7 +578,7 @@ TRACE_EVENT(io_uring_req_failed,
                  "personality=%d, file_index=%d, pad=0x%llx, addr3=%llx, "
                  "error=%d",
                  __entry->ctx, __entry->req, __entry->user_data,
-                 io_uring_get_opcode(__entry->opcode),
+                 __get_str(op_str),
                  __entry->flags, __entry->ioprio,
                  (unsigned long long)__entry->off,
                  (unsigned long long) __entry->addr, __entry->len,
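
All of the io_uring trace hunks above make the same change: the opcode name is resolved once at trace time into a __string() field rather than by calling io_uring_get_opcode() from TP_printk(). That way the record is decodable from the ring buffer alone, which matters for userspace parsers that cannot evaluate function calls embedded in the print format. A minimal sketch of the pattern, using a hypothetical event rather than one from this series:

TRACE_EVENT(demo_opcode,
	TP_PROTO(u8 opcode),
	TP_ARGS(opcode),

	TP_STRUCT__entry(
		__field(  u8, opcode )
		/* reserves ring-buffer space for the resolved name */
		__string( op_str, io_uring_get_opcode(opcode) )
	),

	TP_fast_assign(
		__entry->opcode = opcode;
		/* copy the name now, at trace time */
		__assign_str(op_str, io_uring_get_opcode(opcode));
	),

	/* the read side only touches data already in the record */
	TP_printk("opcode %s", __get_str(op_str))
);
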
index d4e631a..6025dd8 100644 (file)
@@ -288,6 +288,7 @@ DECLARE_EVENT_CLASS(ata_qc_complete_template,
                __entry->hob_feature    = qc->result_tf.hob_feature;
                __entry->nsect          = qc->result_tf.nsect;
                __entry->hob_nsect      = qc->result_tf.hob_nsect;
+               __entry->flags          = qc->flags;
        ),
 
        TP_printk("ata_port=%u ata_dev=%u tag=%d flags=%s status=%s " \
index 032b431..da611a7 100644 (file)
@@ -136,7 +136,7 @@ DECLARE_EVENT_CLASS(net_dev_template,
                __assign_str(name, skb->dev->name);
        ),
 
-       TP_printk("dev=%s skbaddr=%px len=%u",
+       TP_printk("dev=%s skbaddr=%p len=%u",
                __get_str(name), __entry->skbaddr, __entry->len)
 )
 
index 59c945b..a399592 100644 (file)
@@ -41,7 +41,7 @@ TRACE_EVENT(qdisc_dequeue,
                __entry->txq_state      = txq->state;
        ),
 
-       TP_printk("dequeue ifindex=%d qdisc handle=0x%X parent=0x%X txq_state=0x%lX packets=%d skbaddr=%px",
+       TP_printk("dequeue ifindex=%d qdisc handle=0x%X parent=0x%X txq_state=0x%lX packets=%d skbaddr=%p",
                  __entry->ifindex, __entry->handle, __entry->parent,
                  __entry->txq_state, __entry->packets, __entry->skbaddr )
 );
@@ -70,7 +70,7 @@ TRACE_EVENT(qdisc_enqueue,
                __entry->parent  = qdisc->parent;
        ),
 
-       TP_printk("enqueue ifindex=%d qdisc handle=0x%X parent=0x%X skbaddr=%px",
+       TP_printk("enqueue ifindex=%d qdisc handle=0x%X parent=0x%X skbaddr=%p",
                  __entry->ifindex, __entry->handle, __entry->parent, __entry->skbaddr)
 );
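
The net_dev and qdisc hunks above switch skbaddr from %px to %p. For reference, as a fragment that assumes a struct sk_buff *skb in scope (the printed value is illustrative):

/* %p prints a hashed value, so trace output no longer leaks raw
 * kernel addresses; %px prints the real pointer and is reserved
 * for cases where the actual address is genuinely needed. */
pr_info("skbaddr=%p\n", skb);	/* e.g. skbaddr=00000000c3a1f05e */
pr_info("skbaddr=%px\n", skb);	/* raw kernel address, avoid here */
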
 
index f197215..0980678 100644 (file)
@@ -1444,11 +1444,11 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
 #define AMD_FMT_MOD_PIPE_MASK 0x7
 
 #define AMD_FMT_MOD_SET(field, value) \
-       ((uint64_t)(value) << AMD_FMT_MOD_##field##_SHIFT)
+       ((__u64)(value) << AMD_FMT_MOD_##field##_SHIFT)
 #define AMD_FMT_MOD_GET(field, value) \
        (((value) >> AMD_FMT_MOD_##field##_SHIFT) & AMD_FMT_MOD_##field##_MASK)
 #define AMD_FMT_MOD_CLEAR(field) \
-       (~((uint64_t)AMD_FMT_MOD_##field##_MASK << AMD_FMT_MOD_##field##_SHIFT))
+       (~((__u64)AMD_FMT_MOD_##field##_MASK << AMD_FMT_MOD_##field##_SHIFT))
 
 #if defined(__cplusplus)
 }
index 1d0bccc..d370165 100644 (file)
 #define ETH_P_QINQ3    0x9300          /* deprecated QinQ VLAN [ NOT AN OFFICIALLY REGISTERED ID ] */
 #define ETH_P_EDSA     0xDADA          /* Ethertype DSA [ NOT AN OFFICIALLY REGISTERED ID ] */
 #define ETH_P_DSA_8021Q        0xDADB          /* Fake VLAN Header for DSA [ NOT AN OFFICIALLY REGISTERED ID ] */
+#define ETH_P_DSA_A5PSW        0xE001          /* A5PSW Tag Value [ NOT AN OFFICIALLY REGISTERED ID ] */
 #define ETH_P_IFE      0xED3E          /* ForCES inter-FE LFB type */
 #define ETH_P_AF_IUCV   0xFBFB         /* IBM af_iucv [ NOT AN OFFICIALLY REGISTERED ID ] */
 
index 5f58dcf..e36d9d2 100644 (file)
@@ -963,6 +963,7 @@ enum {
        IFLA_BOND_SLAVE_AD_AGGREGATOR_ID,
        IFLA_BOND_SLAVE_AD_ACTOR_OPER_PORT_STATE,
        IFLA_BOND_SLAVE_AD_PARTNER_OPER_PORT_STATE,
+       IFLA_BOND_SLAVE_PRIO,
        __IFLA_BOND_SLAVE_MAX,
 };
 
index 776e027..f10b59d 100644 (file)
@@ -47,7 +47,6 @@ struct io_uring_sqe {
                __u32           unlink_flags;
                __u32           hardlink_flags;
                __u32           xattr_flags;
-               __u32           close_flags;
        };
        __u64   user_data;      /* data to be passed back at completion time */
        /* pack this to avoid bogus arm OABI complaints */
@@ -245,7 +244,7 @@ enum io_uring_op {
 #define IORING_ASYNC_CANCEL_ANY        (1U << 2)
 
 /*
- * send/sendmsg and recv/recvmsg flags (sqe->addr2)
+ * send/sendmsg and recv/recvmsg flags (sqe->ioprio)
  *
  * IORING_RECVSEND_POLL_FIRST  If set, instead of first attempting to send
  *                             or receive and arm poll if that yields an
@@ -259,11 +258,6 @@ enum io_uring_op {
  */
 #define IORING_ACCEPT_MULTISHOT        (1U << 0)
 
-/*
- * close flags, store in sqe->close_flags
- */
-#define IORING_CLOSE_FD_AND_FILE_SLOT  (1U << 0)
-
 /*
  * IO completion data structure (Completion Queue Entry)
  */
index 9219635..dfe19bf 100644 (file)
@@ -2,16 +2,17 @@
 #ifndef _UAPI_MPTCP_H
 #define _UAPI_MPTCP_H
 
+#ifndef __KERNEL__
+#include <netinet/in.h>                /* for sockaddr_in and sockaddr_in6     */
+#include <sys/socket.h>                /* for struct sockaddr                  */
+#endif
+
 #include <linux/const.h>
 #include <linux/types.h>
 #include <linux/in.h>          /* for sockaddr_in                      */
 #include <linux/in6.h>         /* for sockaddr_in6                     */
 #include <linux/socket.h>      /* for sockaddr_storage and sa_family   */
 
-#ifndef __KERNEL__
-#include <sys/socket.h>                /* for struct sockaddr                  */
-#endif
-
 #define MPTCP_SUBFLOW_FLAG_MCAP_REM            _BITUL(0)
 #define MPTCP_SUBFLOW_FLAG_MCAP_LOC            _BITUL(1)
 #define MPTCP_SUBFLOW_FLAG_JOIN_REM            _BITUL(2)
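
The reordering above makes <linux/mptcp.h> self-contained for userspace: pulling in <netinet/in.h> and <sys/socket.h> before the <linux/in*.h> headers lets the uapi libc-compat guards see the libc definitions and suppress the kernel's duplicates. A compile check of the kind the change enables, assuming nothing beyond the uapi header:

#include <linux/mptcp.h>
#include <stdio.h>

int main(void)
{
	/* previously this needed the libc socket headers included
	 * first by the application itself */
	printf("MCAP flags: %#lx %#lx\n",
	       (unsigned long)MPTCP_SUBFLOW_FLAG_MCAP_REM,
	       (unsigned long)MPTCP_SUBFLOW_FLAG_MCAP_LOC);
	return 0;
}
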
index 39c565e..a998bf7 100644 (file)
@@ -154,6 +154,7 @@ enum {
        NDTPA_QUEUE_LENBYTES,           /* u32 */
        NDTPA_MCAST_REPROBES,           /* u32 */
        NDTPA_PAD,
+       NDTPA_INTERVAL_PROBE_TIME_MS,   /* u64, msecs */
        __NDTPA_MAX
 };
 #define NDTPA_MAX (__NDTPA_MAX - 1)
index 904909d..1c9152a 100644 (file)
@@ -344,6 +344,7 @@ enum
        LINUX_MIB_TLSRXDEVICE,                  /* TlsRxDevice */
        LINUX_MIB_TLSDECRYPTERROR,              /* TlsDecryptError */
        LINUX_MIB_TLSRXDEVICERESYNC,            /* TlsRxDeviceResync */
+       LINUX_MIB_TLSDECRYPTRETRY,              /* TlsDecryptRetry */
        __LINUX_MIB_TLSMAX
 };
 
index 6a3b194..8981f00 100644 (file)
@@ -584,24 +584,25 @@ enum {
 
 /* /proc/sys/net/<protocol>/neigh/<dev> */
 enum {
-       NET_NEIGH_MCAST_SOLICIT=1,
-       NET_NEIGH_UCAST_SOLICIT=2,
-       NET_NEIGH_APP_SOLICIT=3,
-       NET_NEIGH_RETRANS_TIME=4,
-       NET_NEIGH_REACHABLE_TIME=5,
-       NET_NEIGH_DELAY_PROBE_TIME=6,
-       NET_NEIGH_GC_STALE_TIME=7,
-       NET_NEIGH_UNRES_QLEN=8,
-       NET_NEIGH_PROXY_QLEN=9,
-       NET_NEIGH_ANYCAST_DELAY=10,
-       NET_NEIGH_PROXY_DELAY=11,
-       NET_NEIGH_LOCKTIME=12,
-       NET_NEIGH_GC_INTERVAL=13,
-       NET_NEIGH_GC_THRESH1=14,
-       NET_NEIGH_GC_THRESH2=15,
-       NET_NEIGH_GC_THRESH3=16,
-       NET_NEIGH_RETRANS_TIME_MS=17,
-       NET_NEIGH_REACHABLE_TIME_MS=18,
+       NET_NEIGH_MCAST_SOLICIT = 1,
+       NET_NEIGH_UCAST_SOLICIT = 2,
+       NET_NEIGH_APP_SOLICIT = 3,
+       NET_NEIGH_RETRANS_TIME = 4,
+       NET_NEIGH_REACHABLE_TIME = 5,
+       NET_NEIGH_DELAY_PROBE_TIME = 6,
+       NET_NEIGH_GC_STALE_TIME = 7,
+       NET_NEIGH_UNRES_QLEN = 8,
+       NET_NEIGH_PROXY_QLEN = 9,
+       NET_NEIGH_ANYCAST_DELAY = 10,
+       NET_NEIGH_PROXY_DELAY = 11,
+       NET_NEIGH_LOCKTIME = 12,
+       NET_NEIGH_GC_INTERVAL = 13,
+       NET_NEIGH_GC_THRESH1 = 14,
+       NET_NEIGH_GC_THRESH2 = 15,
+       NET_NEIGH_GC_THRESH3 = 16,
+       NET_NEIGH_RETRANS_TIME_MS = 17,
+       NET_NEIGH_REACHABLE_TIME_MS = 18,
+       NET_NEIGH_INTERVAL_PROBE_TIME_MS = 19,
 };
 
 /* /proc/sys/net/dccp */
index bb8f808..f1157d8 100644 (file)
@@ -40,6 +40,7 @@
 #define TLS_TX                 1       /* Set transmit parameters */
 #define TLS_RX                 2       /* Set receive parameters */
 #define TLS_TX_ZEROCOPY_RO     3       /* TX zerocopy (only sendfile now) */
+#define TLS_RX_EXPECT_NO_PAD   4       /* Attempt opportunistic zero-copy */
 
 /* Supported versions */
 #define TLS_VERSION_MINOR(ver) ((ver) & 0xFF)
@@ -162,6 +163,7 @@ enum {
        TLS_INFO_TXCONF,
        TLS_INFO_RXCONF,
        TLS_INFO_ZC_RO_TX,
+       TLS_INFO_RX_NO_PAD,
        __TLS_INFO_MAX,
 };
 #define TLS_INFO_MAX (__TLS_INFO_MAX - 1)
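
TLS_RX_EXPECT_NO_PAD lets an application promise that the peer sends unpadded TLS 1.3 records, so the receive path may decrypt optimistically; a padded record still decrypts correctly, it merely costs a retry (counted by the TlsDecryptRetry MIB above). A hedged userspace sketch; SOL_TLS is assumed to be 282 as in the kernel headers, and sk is assumed to be a kTLS socket with RX crypto state installed:

#include <sys/socket.h>
#include <linux/tls.h>

#ifndef SOL_TLS
#define SOL_TLS 282
#endif

static int tls_rx_expect_no_pad(int sk)
{
	int one = 1;

	/* opt in to opportunistic zero-copy decrypt on RX */
	return setsockopt(sk, SOL_TLS, TLS_RX_EXPECT_NO_PAD,
			  &one, sizeof(one));
}
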
index 9d0f06b..68aeae2 100644 (file)
@@ -38,8 +38,9 @@
 #define N_NULL         27      /* Null ldisc used for error handling */
 #define N_MCTP         28      /* MCTP-over-serial */
 #define N_DEVELOPMENT  29      /* Manual out-of-tree testing */
+#define N_CAN327       30      /* ELM327 based OBD-II interfaces */
 
 /* Always the newest line discipline + 1 */
-#define NR_LDISCS      30
+#define NR_LDISCS      31
 
 #endif /* _UAPI_LINUX_TTY_H */
index f3a2abd..3a8c9d7 100644 (file)
@@ -1014,10 +1014,10 @@ static void audit_reset_context(struct audit_context *ctx)
        ctx->target_comm[0] = '\0';
        unroll_tree_refs(ctx, NULL, 0);
        WARN_ON(!list_empty(&ctx->killed_trees));
-       ctx->type = 0;
        audit_free_module(ctx);
        ctx->fds[0] = -1;
        audit_proctitle_free(ctx);
+       ctx->type = 0; /* reset last for audit_free_*() */
 }
 
 static inline struct audit_context *audit_alloc_context(enum audit_state state)
index 4f2408a..4423045 100644 (file)
@@ -4928,6 +4928,7 @@ static int btf_check_type_tags(struct btf_verifier_env *env,
        n = btf_nr_types(btf);
        for (i = start_id; i < n; i++) {
                const struct btf_type *t;
+               int chain_limit = 32;
                u32 cur_id = i;
 
                t = btf_type_by_id(btf, i);
@@ -4940,6 +4941,10 @@ static int btf_check_type_tags(struct btf_verifier_env *env,
 
                in_tags = btf_type_is_type_tag(t);
                while (btf_type_is_modifier(t)) {
+                       if (!chain_limit--) {
+                               btf_verifier_log(env, "Max chain length or cycle detected");
+                               return -ELOOP;
+                       }
                        if (btf_type_is_type_tag(t)) {
                                if (!in_tags) {
                                        btf_verifier_log(env, "Type tags don't precede modifiers");
index e3cf619..328cfab 100644 (file)
@@ -1562,6 +1562,21 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
        reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
 }
 
+static void reg_bounds_sync(struct bpf_reg_state *reg)
+{
+       /* We might have learned new bounds from the var_off. */
+       __update_reg_bounds(reg);
+       /* We might have learned something about the sign bit. */
+       __reg_deduce_bounds(reg);
+       /* We might have learned some bits from the bounds. */
+       __reg_bound_offset(reg);
+       /* Intersecting with the old var_off might have improved our bounds
+        * slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+        * then new var_off is (0; 0x7f...fc) which improves our umax.
+        */
+       __update_reg_bounds(reg);
+}
+
 static bool __reg32_bound_s64(s32 a)
 {
        return a >= 0 && a <= S32_MAX;
@@ -1603,16 +1618,8 @@ static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
                 * so they do not impact tnum bounds calculation.
                 */
                __mark_reg64_unbounded(reg);
-               __update_reg_bounds(reg);
        }
-
-       /* Intersecting with the old var_off might have improved our bounds
-        * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
-        * then new var_off is (0; 0x7f...fc) which improves our umax.
-        */
-       __reg_deduce_bounds(reg);
-       __reg_bound_offset(reg);
-       __update_reg_bounds(reg);
+       reg_bounds_sync(reg);
 }
 
 static bool __reg64_bound_s32(s64 a)
@@ -1628,7 +1635,6 @@ static bool __reg64_bound_u32(u64 a)
 static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
 {
        __mark_reg32_unbounded(reg);
-
        if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) {
                reg->s32_min_value = (s32)reg->smin_value;
                reg->s32_max_value = (s32)reg->smax_value;
@@ -1637,14 +1643,7 @@ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
                reg->u32_min_value = (u32)reg->umin_value;
                reg->u32_max_value = (u32)reg->umax_value;
        }
-
-       /* Intersecting with the old var_off might have improved our bounds
-        * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
-        * then new var_off is (0; 0x7f...fc) which improves our umax.
-        */
-       __reg_deduce_bounds(reg);
-       __reg_bound_offset(reg);
-       __update_reg_bounds(reg);
+       reg_bounds_sync(reg);
 }
 
 /* Mark a register as having a completely unknown (scalar) value. */
@@ -6965,9 +6964,7 @@ static void do_refine_retval_range(struct bpf_reg_state *regs, int ret_type,
        ret_reg->s32_max_value = meta->msize_max_value;
        ret_reg->smin_value = -MAX_ERRNO;
        ret_reg->s32_min_value = -MAX_ERRNO;
-       __reg_deduce_bounds(ret_reg);
-       __reg_bound_offset(ret_reg);
-       __update_reg_bounds(ret_reg);
+       reg_bounds_sync(ret_reg);
 }
 
 static int
@@ -8267,11 +8264,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 
        if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
                return -EINVAL;
-
-       __update_reg_bounds(dst_reg);
-       __reg_deduce_bounds(dst_reg);
-       __reg_bound_offset(dst_reg);
-
+       reg_bounds_sync(dst_reg);
        if (sanitize_check_bounds(env, insn, dst_reg) < 0)
                return -EACCES;
        if (sanitize_needed(opcode)) {
@@ -9009,10 +9002,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
        /* ALU32 ops are zero extended into 64bit register */
        if (alu32)
                zext_32_to_64(dst_reg);
-
-       __update_reg_bounds(dst_reg);
-       __reg_deduce_bounds(dst_reg);
-       __reg_bound_offset(dst_reg);
+       reg_bounds_sync(dst_reg);
        return 0;
 }
 
@@ -9201,10 +9191,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
                                                         insn->dst_reg);
                                }
                                zext_32_to_64(dst_reg);
-
-                               __update_reg_bounds(dst_reg);
-                               __reg_deduce_bounds(dst_reg);
-                               __reg_bound_offset(dst_reg);
+                               reg_bounds_sync(dst_reg);
                        }
                } else {
                        /* case: R = imm
@@ -9642,26 +9629,33 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
                return;
 
        switch (opcode) {
+       /* JEQ/JNE comparison doesn't change the register equivalence.
+        *
+        * r1 = r2;
+        * if (r1 == 42) goto label;
+        * ...
+        * label: // here both r1 and r2 are known to be 42.
+        *
+        * Hence when marking register as known preserve its ID.
+        */
        case BPF_JEQ:
+               if (is_jmp32) {
+                       __mark_reg32_known(true_reg, val32);
+                       true_32off = tnum_subreg(true_reg->var_off);
+               } else {
+                       ___mark_reg_known(true_reg, val);
+                       true_64off = true_reg->var_off;
+               }
+               break;
        case BPF_JNE:
-       {
-               struct bpf_reg_state *reg =
-                       opcode == BPF_JEQ ? true_reg : false_reg;
-
-               /* JEQ/JNE comparison doesn't change the register equivalence.
-                * r1 = r2;
-                * if (r1 == 42) goto label;
-                * ...
-                * label: // here both r1 and r2 are known to be 42.
-                *
-                * Hence when marking register as known preserve it's ID.
-                */
-               if (is_jmp32)
-                       __mark_reg32_known(reg, val32);
-               else
-                       ___mark_reg_known(reg, val);
+               if (is_jmp32) {
+                       __mark_reg32_known(false_reg, val32);
+                       false_32off = tnum_subreg(false_reg->var_off);
+               } else {
+                       ___mark_reg_known(false_reg, val);
+                       false_64off = false_reg->var_off;
+               }
                break;
-       }
        case BPF_JSET:
                if (is_jmp32) {
                        false_32off = tnum_and(false_32off, tnum_const(~val32));
@@ -9800,21 +9794,8 @@ static void __reg_combine_min_max(struct bpf_reg_state *src_reg,
                                                        dst_reg->smax_value);
        src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off,
                                                             dst_reg->var_off);
-       /* We might have learned new bounds from the var_off. */
-       __update_reg_bounds(src_reg);
-       __update_reg_bounds(dst_reg);
-       /* We might have learned something about the sign bit. */
-       __reg_deduce_bounds(src_reg);
-       __reg_deduce_bounds(dst_reg);
-       /* We might have learned some bits from the bounds. */
-       __reg_bound_offset(src_reg);
-       __reg_bound_offset(dst_reg);
-       /* Intersecting with the old var_off might have improved our bounds
-        * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
-        * then new var_off is (0; 0x7f...fc) which improves our umax.
-        */
-       __update_reg_bounds(src_reg);
-       __update_reg_bounds(dst_reg);
+       reg_bounds_sync(src_reg);
+       reg_bounds_sync(dst_reg);
 }
 
 static void reg_combine_min_max(struct bpf_reg_state *true_src,
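
reg_bounds_sync() above folds four copies of the update/deduce/offset sequence into one helper, and its trailing __update_reg_bounds() call is what feeds the refined var_off back into the numeric range. The unsigned half of that feedback loop as a self-contained userspace toy; tnum_intersect() and tnum_range() follow their kernel definitions, the driver code around them is illustrative:

#include <stdint.h>
#include <stdio.h>

struct tnum { uint64_t value, mask; };	/* mask bits are unknown */

static struct tnum tnum_intersect(struct tnum a, struct tnum b)
{
	uint64_t v = a.value | b.value, mu = a.mask & b.mask;

	return (struct tnum){ v & ~mu, mu };
}

static struct tnum tnum_range(uint64_t min, uint64_t max)
{
	uint64_t chi = min ^ max, delta;
	int bits = chi ? 64 - __builtin_clzll(chi) : 0;	/* fls64(chi) */

	if (bits > 63)
		return (struct tnum){ 0, ~0ULL };	/* unknown */
	delta = (1ULL << bits) - 1;
	return (struct tnum){ min & ~delta, delta };
}

int main(void)
{
	uint64_t umin = 0, umax = 0x7fffffffffffffffULL;
	/* low two bits known zero, everything else unknown */
	struct tnum var_off = { 0, 0xfffffffffffffffcULL };

	/* the range proves bit 63 is zero, refining var_off ... */
	var_off = tnum_intersect(var_off, tnum_range(umin, umax));
	/* ... and the refined var_off in turn tightens umax */
	if ((var_off.value | var_off.mask) < umax)
		umax = var_off.value | var_off.mask;

	/* prints 0x7ffffffffffffffc, the case from the comment above */
	printf("umax = 0x%llx\n", (unsigned long long)umax);
	return 0;
}
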
index e978f36..8d0b68a 100644 (file)
@@ -357,7 +357,7 @@ void dma_direct_free(struct device *dev, size_t size,
        } else {
                if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
                        arch_dma_clear_uncached(cpu_addr, size);
-               if (dma_set_encrypted(dev, cpu_addr, 1 << page_order))
+               if (dma_set_encrypted(dev, cpu_addr, size))
                        return;
        }
 
@@ -392,7 +392,6 @@ void dma_direct_free_pages(struct device *dev, size_t size,
                struct page *page, dma_addr_t dma_addr,
                enum dma_data_direction dir)
 {
-       unsigned int page_order = get_order(size);
        void *vaddr = page_address(page);
 
        /* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */
@@ -400,7 +399,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
            dma_free_from_pool(dev, vaddr, size))
                return;
 
-       if (dma_set_encrypted(dev, vaddr, 1 << page_order))
+       if (dma_set_encrypted(dev, vaddr, size))
                return;
        __dma_direct_free_pages(dev, page, size);
 }
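
The dma-direct fix above corrects a unit bug: get_order() yields a page order, so "1 << page_order" is a page count, while dma_set_encrypted() expects a byte size and does its own rounding. Passing the page count as a size re-encrypted far too little memory on free. A toy illustration of the mismatch, assuming 4 KiB pages:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	size_t size = 16384;		/* a 4-page allocation */
	unsigned int page_order = 2;	/* get_order(16384) == 2 */

	printf("passed %u as the size of a %zu-byte allocation\n",
	       1u << page_order, size);	/* 4 vs 16384 */
	return 0;
}
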
index 80bfea5..cff3ae8 100644 (file)
@@ -127,8 +127,6 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
         * complain:
         */
        if (sysctl_hung_task_warnings) {
-               printk_prefer_direct_enter();
-
                if (sysctl_hung_task_warnings > 0)
                        sysctl_hung_task_warnings--;
                pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n",
@@ -144,8 +142,6 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 
                if (sysctl_hung_task_all_cpu_backtrace)
                        hung_task_show_all_bt = true;
-
-               printk_prefer_direct_exit();
        }
 
        touch_nmi_watchdog();
@@ -208,17 +204,12 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
        }
  unlock:
        rcu_read_unlock();
-       if (hung_task_show_lock) {
-               printk_prefer_direct_enter();
+       if (hung_task_show_lock)
                debug_show_all_locks();
-               printk_prefer_direct_exit();
-       }
 
        if (hung_task_show_all_bt) {
                hung_task_show_all_bt = false;
-               printk_prefer_direct_enter();
                trigger_all_cpu_backtrace();
-               printk_prefer_direct_exit();
        }
 
        if (hung_task_call_panic)
index e6b8e56..886789d 100644 (file)
@@ -1006,8 +1006,10 @@ __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle,
                if (desc->irq_data.chip != &no_irq_chip)
                        mask_ack_irq(desc);
                irq_state_set_disabled(desc);
-               if (is_chained)
+               if (is_chained) {
                        desc->action = NULL;
+                       WARN_ON(irq_chip_pm_put(irq_desc_get_irq_data(desc)));
+               }
                desc->depth = 1;
        }
        desc->handle_irq = handle;
@@ -1033,6 +1035,7 @@ __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle,
                irq_settings_set_norequest(desc);
                irq_settings_set_nothread(desc);
                desc->action = &chained_action;
+               WARN_ON(irq_chip_pm_get(irq_desc_get_irq_data(desc)));
                irq_activate_and_startup(desc, IRQ_RESEND);
        }
 }
index 544fd40..3c67791 100644 (file)
@@ -340,7 +340,7 @@ static int kthread(void *_create)
 
        self = to_kthread(current);
 
-       /* If user was SIGKILLed, I release the structure. */
+       /* Release the structure when caller killed by a fatal signal. */
        done = xchg(&create->done, NULL);
        if (!done) {
                kfree(create);
@@ -398,7 +398,7 @@ static void create_kthread(struct kthread_create_info *create)
        /* We want our own signal handler (we take no signals by default). */
        pid = kernel_thread(kthread, create, CLONE_FS | CLONE_FILES | SIGCHLD);
        if (pid < 0) {
-               /* If user was SIGKILLed, I release the structure. */
+               /* Release the structure when caller killed by a fatal signal. */
                struct completion *done = xchg(&create->done, NULL);
 
                if (!done) {
@@ -440,9 +440,9 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
         */
        if (unlikely(wait_for_completion_killable(&done))) {
                /*
-                * If I was SIGKILLed before kthreadd (or new kernel thread)
-                * calls complete(), leave the cleanup of this structure to
-                * that thread.
+                * If I was killed by a fatal signal before kthreadd (or new
+                * kernel thread) calls complete(), leave the cleanup of this
+                * structure to that thread.
                 */
                if (xchg(&create->done, NULL))
                        return ERR_PTR(-EINTR);
@@ -876,7 +876,7 @@ fail_task:
  *
  * Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM)
  * when the needed structures could not get allocated, and ERR_PTR(-EINTR)
- * when the worker was SIGKILLed.
+ * when the caller was killed by a fatal signal.
  */
 struct kthread_worker *
 kthread_create_worker(unsigned int flags, const char namefmt[], ...)
@@ -925,7 +925,7 @@ EXPORT_SYMBOL(kthread_create_worker);
  * Return:
  * The pointer to the allocated worker on success, ERR_PTR(-ENOMEM)
  * when the needed structures could not get allocated, and ERR_PTR(-EINTR)
- * when the worker was SIGKILLed.
+ * when the caller was killed by a fatal signal.
  */
 struct kthread_worker *
 kthread_create_worker_on_cpu(int cpu, unsigned int flags,
index 81e8728..f06b91c 100644 (file)
@@ -5432,7 +5432,7 @@ static struct pin_cookie __lock_pin_lock(struct lockdep_map *lock)
                         * be guessable and still allows some pin nesting in
                         * our u32 pin_count.
                         */
-                       cookie.val = 1 + (prandom_u32() >> 16);
+                       cookie.val = 1 + (sched_clock() & 0xffff);
                        hlock->pin_count += cookie.val;
                        return cookie;
                }
index a3c758d..a3308af 100644 (file)
@@ -603,8 +603,6 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
 {
        disable_trace_on_warning();
 
-       printk_prefer_direct_enter();
-
        if (file)
                pr_warn("WARNING: CPU: %d PID: %d at %s:%d %pS\n",
                        raw_smp_processor_id(), current->pid, file, line,
@@ -634,8 +632,6 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
 
        /* Just a warning, don't kill lockdep. */
        add_taint(taint, LOCKDEP_STILL_OK);
-
-       printk_prefer_direct_exit();
 }
 
 #ifndef __WARN_FLAGS
index 20a66bf..89c71fc 100644 (file)
@@ -665,7 +665,7 @@ static void power_down(void)
                hibernation_platform_enter();
                fallthrough;
        case HIBERNATION_SHUTDOWN:
-               if (pm_power_off)
+               if (kernel_can_power_off())
                        kernel_power_off();
                break;
        }
index ea3dd55..b49c6ff 100644 (file)
@@ -223,33 +223,6 @@ int devkmsg_sysctl_set_loglvl(struct ctl_table *table, int write,
 /* Number of registered extended console drivers. */
 static int nr_ext_console_drivers;
 
-/*
- * Used to synchronize printing kthreads against direct printing via
- * console_trylock/console_unlock.
- *
- * Values:
- * -1 = console kthreads atomically blocked (via global trylock)
- *  0 = no kthread printing, console not locked (via trylock)
- * >0 = kthread(s) actively printing
- *
- * Note: For synchronizing against direct printing via
- *       console_lock/console_unlock, see the @lock variable in
- *       struct console.
- */
-static atomic_t console_kthreads_active = ATOMIC_INIT(0);
-
-#define console_kthreads_atomic_tryblock() \
-       (atomic_cmpxchg(&console_kthreads_active, 0, -1) == 0)
-#define console_kthreads_atomic_unblock() \
-       atomic_cmpxchg(&console_kthreads_active, -1, 0)
-#define console_kthreads_atomically_blocked() \
-       (atomic_read(&console_kthreads_active) == -1)
-
-#define console_kthread_printing_tryenter() \
-       atomic_inc_unless_negative(&console_kthreads_active)
-#define console_kthread_printing_exit() \
-       atomic_dec(&console_kthreads_active)
-
 /*
  * Helper macros to handle lockdep when locking/unlocking console_sem. We use
  * macros instead of functions so that _RET_IP_ contains useful information.
@@ -298,49 +271,14 @@ static bool panic_in_progress(void)
 }
 
 /*
- * Tracks whether kthread printers are all blocked. A value of true implies
- * that the console is locked via console_lock() or the console is suspended.
- * Writing to this variable requires holding @console_sem.
+ * This is used for debugging the mess that is the VT code by
+ * keeping track if we have the console semaphore held. It's
+ * definitely not the perfect debug tool (we don't know if _WE_
+ * hold it and are racing, but it helps tracking those weird code
+ * paths in the console code where we end up in places I want
+ * locked without the console semaphore held).
  */
-static bool console_kthreads_blocked;
-
-/*
- * Block all kthread printers from a schedulable context.
- *
- * Requires holding @console_sem.
- */
-static void console_kthreads_block(void)
-{
-       struct console *con;
-
-       for_each_console(con) {
-               mutex_lock(&con->lock);
-               con->blocked = true;
-               mutex_unlock(&con->lock);
-       }
-
-       console_kthreads_blocked = true;
-}
-
-/*
- * Unblock all kthread printers from a schedulable context.
- *
- * Requires holding @console_sem.
- */
-static void console_kthreads_unblock(void)
-{
-       struct console *con;
-
-       for_each_console(con) {
-               mutex_lock(&con->lock);
-               con->blocked = false;
-               mutex_unlock(&con->lock);
-       }
-
-       console_kthreads_blocked = false;
-}
-
-static int console_suspended;
+static int console_locked, console_suspended;
 
 /*
  *     Array of consoles built from command line options (console=)
@@ -423,75 +361,7 @@ static int console_msg_format = MSG_FORMAT_DEFAULT;
 /* syslog_lock protects syslog_* variables and write access to clear_seq. */
 static DEFINE_MUTEX(syslog_lock);
 
-/*
- * A flag to signify if printk_activate_kthreads() has already started the
- * kthread printers. If true, any later registered consoles must start their
- * own kthread directly. The flag is write protected by the console_lock.
- */
-static bool printk_kthreads_available;
-
 #ifdef CONFIG_PRINTK
-static atomic_t printk_prefer_direct = ATOMIC_INIT(0);
-
-/**
- * printk_prefer_direct_enter - cause printk() calls to attempt direct
- *                              printing to all enabled consoles
- *
- * Since it is not possible to call into the console printing code from any
- * context, there is no guarantee that direct printing will occur.
- *
- * This globally effects all printk() callers.
- *
- * Context: Any context.
- */
-void printk_prefer_direct_enter(void)
-{
-       atomic_inc(&printk_prefer_direct);
-}
-
-/**
- * printk_prefer_direct_exit - restore printk() behavior
- *
- * Context: Any context.
- */
-void printk_prefer_direct_exit(void)
-{
-       WARN_ON(atomic_dec_if_positive(&printk_prefer_direct) < 0);
-}
-
-/*
- * Calling printk() always wakes kthread printers so that they can
- * flush the new message to their respective consoles. Also, if direct
- * printing is allowed, printk() tries to flush the messages directly.
- *
- * Direct printing is allowed in situations when the kthreads
- * are not available or the system is in a problematic state.
- *
- * See the implementation about possible races.
- */
-static inline bool allow_direct_printing(void)
-{
-       /*
-        * Checking kthread availability is a possible race because the
-        * kthread printers can become permanently disabled during runtime.
-        * However, doing that requires holding the console_lock, so any
-        * pending messages will be direct printed by console_unlock().
-        */
-       if (!printk_kthreads_available)
-               return true;
-
-       /*
-        * Prefer direct printing when the system is in a problematic state.
-        * The context that sets this state will always see the updated value.
-        * The other contexts do not care. Anyway, direct printing is just a
-        * best effort. The direct output is only possible when console_lock
-        * is not already taken and no kthread printers are actively printing.
-        */
-       return (system_state > SYSTEM_RUNNING ||
-               oops_in_progress ||
-               atomic_read(&printk_prefer_direct));
-}
-
 DECLARE_WAIT_QUEUE_HEAD(log_wait);
 /* All 3 protected by @syslog_lock. */
 /* the next printk record to read by syslog(READ) or /proc/kmsg */
@@ -2382,10 +2252,10 @@ asmlinkage int vprintk_emit(int facility, int level,
        printed_len = vprintk_store(facility, level, dev_info, fmt, args);
 
        /* If called from the scheduler, we can not call up(). */
-       if (!in_sched && allow_direct_printing()) {
+       if (!in_sched) {
                /*
                 * The caller may be holding system-critical or
-                * timing-sensitive locks. Disable preemption during direct
+                * timing-sensitive locks. Disable preemption during
                 * printing of all remaining records to all consoles so that
                 * this context can return as soon as possible. Hopefully
                 * another printk() caller will take over the printing.
@@ -2428,8 +2298,6 @@ EXPORT_SYMBOL(_printk);
 
 static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress);
 
-static void printk_start_kthread(struct console *con);
-
 #else /* CONFIG_PRINTK */
 
 #define CONSOLE_LOG_MAX                0
@@ -2463,8 +2331,6 @@ static void call_console_driver(struct console *con, const char *text, size_t le
 }
 static bool suppress_message_printing(int level) { return false; }
 static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) { return true; }
-static void printk_start_kthread(struct console *con) { }
-static bool allow_direct_printing(void) { return true; }
 
 #endif /* CONFIG_PRINTK */
 
@@ -2683,14 +2549,6 @@ static int console_cpu_notify(unsigned int cpu)
                /* If trylock fails, someone else is doing the printing */
                if (console_trylock())
                        console_unlock();
-               else {
-                       /*
-                        * If a new CPU comes online, the conditions for
-                        * printer_should_wake() may have changed for some
-                        * kthread printer with !CON_ANYTIME.
-                        */
-                       wake_up_klogd();
-               }
        }
        return 0;
 }
@@ -2710,7 +2568,7 @@ void console_lock(void)
        down_console_sem();
        if (console_suspended)
                return;
-       console_kthreads_block();
+       console_locked = 1;
        console_may_schedule = 1;
 }
 EXPORT_SYMBOL(console_lock);
@@ -2731,30 +2589,15 @@ int console_trylock(void)
                up_console_sem();
                return 0;
        }
-       if (!console_kthreads_atomic_tryblock()) {
-               up_console_sem();
-               return 0;
-       }
+       console_locked = 1;
        console_may_schedule = 0;
        return 1;
 }
 EXPORT_SYMBOL(console_trylock);
 
-/*
- * This is used to help to make sure that certain paths within the VT code are
- * running with the console lock held. It is definitely not the perfect debug
- * tool (it is not known if the VT code is the task holding the console lock),
- * but it helps tracking those weird code paths in the console code such as
- * when the console is suspended: where the console is not locked but no
- * console printing may occur.
- *
- * Note: This returns true when the console is suspended but is not locked.
- *       This is intentional because the VT code must consider that situation
- *       the same as if the console was locked.
- */
 int is_console_locked(void)
 {
-       return (console_kthreads_blocked || atomic_read(&console_kthreads_active));
+       return console_locked;
 }
 EXPORT_SYMBOL(is_console_locked);
 
@@ -2777,9 +2620,18 @@ static bool abandon_console_lock_in_panic(void)
        return atomic_read(&panic_cpu) != raw_smp_processor_id();
 }
 
-static inline bool __console_is_usable(short flags)
+/*
+ * Check if the given console is currently capable and allowed to print
+ * records.
+ *
+ * Requires the console_lock.
+ */
+static inline bool console_is_usable(struct console *con)
 {
-       if (!(flags & CON_ENABLED))
+       if (!(con->flags & CON_ENABLED))
+               return false;
+
+       if (!con->write)
                return false;
 
        /*
@@ -2788,43 +2640,15 @@ static inline bool __console_is_usable(short flags)
         * cope (CON_ANYTIME) don't call them until this CPU is officially up.
         */
        if (!cpu_online(raw_smp_processor_id()) &&
-           !(flags & CON_ANYTIME))
+           !(con->flags & CON_ANYTIME))
                return false;
 
        return true;
 }
 
-/*
- * Check if the given console is currently capable and allowed to print
- * records.
- *
- * Requires holding the console_lock.
- */
-static inline bool console_is_usable(struct console *con)
-{
-       if (!con->write)
-               return false;
-
-       return __console_is_usable(con->flags);
-}
-
 static void __console_unlock(void)
 {
-       /*
-        * Depending on whether console_lock() or console_trylock() was used,
-        * appropriately allow the kthread printers to continue.
-        */
-       if (console_kthreads_blocked)
-               console_kthreads_unblock();
-       else
-               console_kthreads_atomic_unblock();
-
-       /*
-        * New records may have arrived while the console was locked.
-        * Wake the kthread printers to print them.
-        */
-       wake_up_klogd();
-
+       console_locked = 0;
        up_console_sem();
 }
 
@@ -2842,19 +2666,17 @@ static void __console_unlock(void)
  *
  * @handover will be set to true if a printk waiter has taken over the
  * console_lock, in which case the caller is no longer holding the
- * console_lock. Otherwise it is set to false. A NULL pointer may be provided
- * to disable allowing the console_lock to be taken over by a printk waiter.
+ * console_lock. Otherwise it is set to false.
  *
  * Returns false if the given console has no next record to print, otherwise
  * true.
  *
- * Requires the console_lock if @handover is non-NULL.
- * Requires con->lock otherwise.
+ * Requires the console_lock.
  */
-static bool __console_emit_next_record(struct console *con, char *text, char *ext_text,
-                                      char *dropped_text, bool *handover)
+static bool console_emit_next_record(struct console *con, char *text, char *ext_text,
+                                    char *dropped_text, bool *handover)
 {
-       static atomic_t panic_console_dropped = ATOMIC_INIT(0);
+       static int panic_console_dropped;
        struct printk_info info;
        struct printk_record r;
        unsigned long flags;
@@ -2863,8 +2685,7 @@ static bool __console_emit_next_record(struct console *con, char *text, char *ex
 
        prb_rec_init_rd(&r, &info, text, CONSOLE_LOG_MAX);
 
-       if (handover)
-               *handover = false;
+       *handover = false;
 
        if (!prb_read_valid(prb, con->seq, &r))
                return false;
@@ -2872,8 +2693,7 @@ static bool __console_emit_next_record(struct console *con, char *text, char *ex
        if (con->seq != r.info->seq) {
                con->dropped += r.info->seq - con->seq;
                con->seq = r.info->seq;
-               if (panic_in_progress() &&
-                   atomic_fetch_inc_relaxed(&panic_console_dropped) > 10) {
+               if (panic_in_progress() && panic_console_dropped++ > 10) {
                        suppress_panic_printk = 1;
                        pr_warn_once("Too many dropped messages. Suppress messages on non-panic CPUs to prevent livelock.\n");
                }
@@ -2895,61 +2715,31 @@ static bool __console_emit_next_record(struct console *con, char *text, char *ex
                len = record_print_text(&r, console_msg_format & MSG_FORMAT_SYSLOG, printk_time);
        }
 
-       if (handover) {
-               /*
-                * While actively printing out messages, if another printk()
-                * were to occur on another CPU, it may wait for this one to
-                * finish. This task can not be preempted if there is a
-                * waiter waiting to take over.
-                *
-                * Interrupts are disabled because the hand over to a waiter
-                * must not be interrupted until the hand over is completed
-                * (@console_waiter is cleared).
-                */
-               printk_safe_enter_irqsave(flags);
-               console_lock_spinning_enable();
-
-               /* don't trace irqsoff print latency */
-               stop_critical_timings();
-       }
+       /*
+        * While actively printing out messages, if another printk()
+        * were to occur on another CPU, it may wait for this one to
+        * finish. This task can not be preempted if there is a
+        * waiter waiting to take over.
+        *
+        * Interrupts are disabled because the hand over to a waiter
+        * must not be interrupted until the hand over is completed
+        * (@console_waiter is cleared).
+        */
+       printk_safe_enter_irqsave(flags);
+       console_lock_spinning_enable();
 
+       stop_critical_timings();        /* don't trace print latency */
        call_console_driver(con, write_text, len, dropped_text);
+       start_critical_timings();
 
        con->seq++;
 
-       if (handover) {
-               start_critical_timings();
-               *handover = console_lock_spinning_disable_and_check();
-               printk_safe_exit_irqrestore(flags);
-       }
+       *handover = console_lock_spinning_disable_and_check();
+       printk_safe_exit_irqrestore(flags);
 skip:
        return true;
 }
 
-/*
- * Print a record for a given console, but allow another printk() caller to
- * take over the console_lock and continue printing.
- *
- * Requires the console_lock, but depending on @handover after the call, the
- * caller may no longer have the console_lock.
- *
- * See __console_emit_next_record() for argument and return details.
- */
-static bool console_emit_next_record_transferable(struct console *con, char *text, char *ext_text,
-                                                 char *dropped_text, bool *handover)
-{
-       /*
-        * Handovers are only supported if threaded printers are atomically
-        * blocked. The context taking over the console_lock may be atomic.
-        */
-       if (!console_kthreads_atomically_blocked()) {
-               *handover = false;
-               handover = NULL;
-       }
-
-       return __console_emit_next_record(con, text, ext_text, dropped_text, handover);
-}
-
 /*
  * Print out all remaining records to all consoles.
  *
@@ -2968,8 +2758,8 @@ static bool console_emit_next_record_transferable(struct console *con, char *tex
  * were flushed to all usable consoles. A returned false informs the caller
  * that everything was not flushed (either there were no usable consoles or
  * another context has taken over printing or it is a panic situation and this
- * is not the panic CPU or direct printing is not preferred). Regardless the
- * reason, the caller should assume it is not useful to immediately try again.
+ * is not the panic CPU). Regardless the reason, the caller should assume it
+ * is not useful to immediately try again.
  *
  * Requires the console_lock.
  */
@@ -2986,10 +2776,6 @@ static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handove
        *handover = false;
 
        do {
-               /* Let the kthread printers do the work if they can. */
-               if (!allow_direct_printing())
-                       return false;
-
                any_progress = false;
 
                for_each_console(con) {
@@ -3001,11 +2787,13 @@ static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handove
 
                        if (con->flags & CON_EXTENDED) {
                                /* Extended consoles do not print "dropped messages". */
-                               progress = console_emit_next_record_transferable(con, &text[0],
-                                                               &ext_text[0], NULL, handover);
+                               progress = console_emit_next_record(con, &text[0],
+                                                                   &ext_text[0], NULL,
+                                                                   handover);
                        } else {
-                               progress = console_emit_next_record_transferable(con, &text[0],
-                                                               NULL, &dropped_text[0], handover);
+                               progress = console_emit_next_record(con, &text[0],
+                                                                   NULL, &dropped_text[0],
+                                                                   handover);
                        }
                        if (*handover)
                                return false;
@@ -3120,13 +2908,10 @@ void console_unblank(void)
        if (oops_in_progress) {
                if (down_trylock_console_sem() != 0)
                        return;
-               if (!console_kthreads_atomic_tryblock()) {
-                       up_console_sem();
-                       return;
-               }
        } else
                console_lock();
 
+       console_locked = 1;
        console_may_schedule = 0;
        for_each_console(c)
                if ((c->flags & CON_ENABLED) && c->unblank)
@@ -3405,10 +3190,6 @@ void register_console(struct console *newcon)
                nr_ext_console_drivers++;
 
        newcon->dropped = 0;
-       newcon->thread = NULL;
-       newcon->blocked = true;
-       mutex_init(&newcon->lock);
-
        if (newcon->flags & CON_PRINTBUFFER) {
                /* Get a consistent copy of @syslog_seq. */
                mutex_lock(&syslog_lock);
@@ -3418,10 +3199,6 @@ void register_console(struct console *newcon)
                /* Begin with next message. */
                newcon->seq = prb_next_seq(prb);
        }
-
-       if (printk_kthreads_available)
-               printk_start_kthread(newcon);
-
        console_unlock();
        console_sysfs_notify();
 
@@ -3448,7 +3225,6 @@ EXPORT_SYMBOL(register_console);
 
 int unregister_console(struct console *console)
 {
-       struct task_struct *thd;
        struct console *con;
        int res;
 
@@ -3489,20 +3265,7 @@ int unregister_console(struct console *console)
                console_drivers->flags |= CON_CONSDEV;
 
        console->flags &= ~CON_ENABLED;
-
-       /*
-        * console->thread can only be cleared under the console lock. But
-        * stopping the thread must be done without the console lock. The
-        * task that clears @thread is the task that stops the kthread.
-        */
-       thd = console->thread;
-       console->thread = NULL;
-
        console_unlock();
-
-       if (thd)
-               kthread_stop(thd);
-
        console_sysfs_notify();
 
        if (console->exit)
@@ -3598,20 +3361,6 @@ static int __init printk_late_init(void)
 }
 late_initcall(printk_late_init);
 
-static int __init printk_activate_kthreads(void)
-{
-       struct console *con;
-
-       console_lock();
-       printk_kthreads_available = true;
-       for_each_console(con)
-               printk_start_kthread(con);
-       console_unlock();
-
-       return 0;
-}
-early_initcall(printk_activate_kthreads);
-
 #if defined CONFIG_PRINTK
 /* If @con is specified, only wait for that console. Otherwise wait for all. */
 static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress)
@@ -3686,206 +3435,11 @@ bool pr_flush(int timeout_ms, bool reset_on_progress)
 }
 EXPORT_SYMBOL(pr_flush);
 
-static void __printk_fallback_preferred_direct(void)
-{
-       printk_prefer_direct_enter();
-       pr_err("falling back to preferred direct printing\n");
-       printk_kthreads_available = false;
-}
-
-/*
- * Enter preferred direct printing, but never exit. Mark console threads as
- * unavailable. The system is then forever in preferred direct printing and
- * any printing threads will exit.
- *
- * Must *not* be called under console_lock. Use
- * __printk_fallback_preferred_direct() if already holding console_lock.
- */
-static void printk_fallback_preferred_direct(void)
-{
-       console_lock();
-       __printk_fallback_preferred_direct();
-       console_unlock();
-}
-
-/*
- * Print a record for a given console, not allowing another printk() caller
- * to take over. This is appropriate for contexts that do not have the
- * console_lock.
- *
- * See __console_emit_next_record() for argument and return details.
- */
-static bool console_emit_next_record(struct console *con, char *text, char *ext_text,
-                                    char *dropped_text)
-{
-       return __console_emit_next_record(con, text, ext_text, dropped_text, NULL);
-}
-
-static bool printer_should_wake(struct console *con, u64 seq)
-{
-       short flags;
-
-       if (kthread_should_stop() || !printk_kthreads_available)
-               return true;
-
-       if (con->blocked ||
-           console_kthreads_atomically_blocked()) {
-               return false;
-       }
-
-       /*
-        * This is an unsafe read from con->flags, but a false positive is
-        * not a problem. Worst case it would allow the printer to wake up
-        * although it is disabled. But the printer will notice that when
-        * attempting to print and instead go back to sleep.
-        */
-       flags = data_race(READ_ONCE(con->flags));
-
-       if (!__console_is_usable(flags))
-               return false;
-
-       return prb_read_valid(prb, seq, NULL);
-}
-
-static int printk_kthread_func(void *data)
-{
-       struct console *con = data;
-       char *dropped_text = NULL;
-       char *ext_text = NULL;
-       u64 seq = 0;
-       char *text;
-       int error;
-
-       text = kmalloc(CONSOLE_LOG_MAX, GFP_KERNEL);
-       if (!text) {
-               con_printk(KERN_ERR, con, "failed to allocate text buffer\n");
-               printk_fallback_preferred_direct();
-               goto out;
-       }
-
-       if (con->flags & CON_EXTENDED) {
-               ext_text = kmalloc(CONSOLE_EXT_LOG_MAX, GFP_KERNEL);
-               if (!ext_text) {
-                       con_printk(KERN_ERR, con, "failed to allocate ext_text buffer\n");
-                       printk_fallback_preferred_direct();
-                       goto out;
-               }
-       } else {
-               dropped_text = kmalloc(DROPPED_TEXT_MAX, GFP_KERNEL);
-               if (!dropped_text) {
-                       con_printk(KERN_ERR, con, "failed to allocate dropped_text buffer\n");
-                       printk_fallback_preferred_direct();
-                       goto out;
-               }
-       }
-
-       con_printk(KERN_INFO, con, "printing thread started\n");
-
-       for (;;) {
-               /*
-                * Guarantee this task is visible on the waitqueue before
-                * checking the wake condition.
-                *
-                * The full memory barrier within set_current_state() of
-                * prepare_to_wait_event() pairs with the full memory barrier
-                * within wq_has_sleeper().
-                *
-                * This pairs with __wake_up_klogd:A.
-                */
-               error = wait_event_interruptible(log_wait,
-                               printer_should_wake(con, seq)); /* LMM(printk_kthread_func:A) */
-
-               if (kthread_should_stop() || !printk_kthreads_available)
-                       break;
-
-               if (error)
-                       continue;
-
-               error = mutex_lock_interruptible(&con->lock);
-               if (error)
-                       continue;
-
-               if (con->blocked ||
-                   !console_kthread_printing_tryenter()) {
-                       /* Another context has locked the console_lock. */
-                       mutex_unlock(&con->lock);
-                       continue;
-               }
-
-               /*
-                * Although this context has not locked the console_lock, it
-                * is known that the console_lock is not locked and it is not
-                * possible for any other context to lock the console_lock.
-                * Therefore it is safe to read con->flags.
-                */
-
-               if (!__console_is_usable(con->flags)) {
-                       console_kthread_printing_exit();
-                       mutex_unlock(&con->lock);
-                       continue;
-               }
-
-               /*
-                * Even though the printk kthread is always preemptible, it is
-                * still not allowed to call cond_resched() from within
-                * console drivers. The task may become non-preemptible in the
-                * console driver call chain. For example, vt_console_print()
-                * takes a spinlock and then can call into fbcon_redraw(),
-                * which can conditionally invoke cond_resched().
-                */
-               console_may_schedule = 0;
-               console_emit_next_record(con, text, ext_text, dropped_text);
-
-               seq = con->seq;
-
-               console_kthread_printing_exit();
-
-               mutex_unlock(&con->lock);
-       }
-
-       con_printk(KERN_INFO, con, "printing thread stopped\n");
-out:
-       kfree(dropped_text);
-       kfree(ext_text);
-       kfree(text);
-
-       console_lock();
-       /*
-        * If this kthread is being stopped by another task, con->thread will
-        * already be NULL. That is fine. The important thing is that it is
-        * NULL after the kthread exits.
-        */
-       con->thread = NULL;
-       console_unlock();
-
-       return 0;
-}
-
-/* Must be called under console_lock. */
-static void printk_start_kthread(struct console *con)
-{
-       /*
-        * Do not start a kthread if there is no write() callback. The
-        * kthreads assume the write() callback exists.
-        */
-       if (!con->write)
-               return;
-
-       con->thread = kthread_run(printk_kthread_func, con,
-                                 "pr/%s%d", con->name, con->index);
-       if (IS_ERR(con->thread)) {
-               con->thread = NULL;
-               con_printk(KERN_ERR, con, "unable to start printing thread\n");
-               __printk_fallback_preferred_direct();
-               return;
-       }
-}
-
 /*
  * Delayed printk version, for scheduler-internal messages:
  */
-#define PRINTK_PENDING_WAKEUP          0x01
-#define PRINTK_PENDING_DIRECT_OUTPUT   0x02
+#define PRINTK_PENDING_WAKEUP  0x01
+#define PRINTK_PENDING_OUTPUT  0x02
 
 static DEFINE_PER_CPU(int, printk_pending);
 
@@ -3893,14 +3447,10 @@ static void wake_up_klogd_work_func(struct irq_work *irq_work)
 {
        int pending = this_cpu_xchg(printk_pending, 0);
 
-       if (pending & PRINTK_PENDING_DIRECT_OUTPUT) {
-               printk_prefer_direct_enter();
-
+       if (pending & PRINTK_PENDING_OUTPUT) {
                /* If trylock fails, someone else is doing the printing */
                if (console_trylock())
                        console_unlock();
-
-               printk_prefer_direct_exit();
        }
 
        if (pending & PRINTK_PENDING_WAKEUP)
@@ -3925,11 +3475,10 @@ static void __wake_up_klogd(int val)
         * prepare_to_wait_event(), which is called after ___wait_event() adds
         * the waiter but before it has checked the wait condition.
         *
-        * This pairs with devkmsg_read:A, syslog_print:A, and
-        * printk_kthread_func:A.
+        * This pairs with devkmsg_read:A and syslog_print:A.
         */
        if (wq_has_sleeper(&log_wait) || /* LMM(__wake_up_klogd:A) */
-           (val & PRINTK_PENDING_DIRECT_OUTPUT)) {
+           (val & PRINTK_PENDING_OUTPUT)) {
                this_cpu_or(printk_pending, val);
                irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
        }
@@ -3947,17 +3496,7 @@ void defer_console_output(void)
         * New messages may have been added directly to the ringbuffer
         * using vprintk_store(), so wake any waiters as well.
         */
-       int val = PRINTK_PENDING_WAKEUP;
-
-       /*
-        * Make sure that some context will print the messages when direct
-        * printing is allowed. This happens in situations when the kthreads
-        * may not be as reliable or perhaps unusable.
-        */
-       if (allow_direct_printing())
-               val |= PRINTK_PENDING_DIRECT_OUTPUT;
-
-       __wake_up_klogd(val);
+       __wake_up_klogd(PRINTK_PENDING_WAKEUP | PRINTK_PENDING_OUTPUT);
 }
 
 void printk_trigger_flush(void)
index 4995c07..a001e1e 100644 (file)
@@ -647,7 +647,6 @@ static void print_cpu_stall(unsigned long gps)
         * See Documentation/RCU/stallwarn.rst for info on how to debug
         * RCU CPU stall warnings.
         */
-       printk_prefer_direct_enter();
        trace_rcu_stall_warning(rcu_state.name, TPS("SelfDetected"));
        pr_err("INFO: %s self-detected stall on CPU\n", rcu_state.name);
        raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags);
@@ -685,7 +684,6 @@ static void print_cpu_stall(unsigned long gps)
         */
        set_tsk_need_resched(current);
        set_preempt_need_resched();
-       printk_prefer_direct_exit();
 }
 
 static void check_cpu_stall(struct rcu_data *rdp)
index b5a71d1..3c35445 100644 (file)
@@ -819,11 +819,9 @@ static int __orderly_reboot(void)
        ret = run_cmd(reboot_cmd);
 
        if (ret) {
-               printk_prefer_direct_enter();
                pr_warn("Failed to start orderly reboot: forcing the issue\n");
                emergency_sync();
                kernel_restart(NULL);
-               printk_prefer_direct_exit();
        }
 
        return ret;
@@ -836,7 +834,6 @@ static int __orderly_poweroff(bool force)
        ret = run_cmd(poweroff_cmd);
 
        if (ret && force) {
-               printk_prefer_direct_enter();
                pr_warn("Failed to start orderly shutdown: forcing the issue\n");
 
                /*
@@ -846,7 +843,6 @@ static int __orderly_poweroff(bool force)
                 */
                emergency_sync();
                kernel_power_off();
-               printk_prefer_direct_exit();
        }
 
        return ret;
@@ -904,8 +900,6 @@ EXPORT_SYMBOL_GPL(orderly_reboot);
  */
 static void hw_failure_emergency_poweroff_func(struct work_struct *work)
 {
-       printk_prefer_direct_enter();
-
        /*
         * We have reached here after the emergency shutdown waiting period has
         * expired. This means orderly_poweroff has not been able to shut off
@@ -922,8 +916,6 @@ static void hw_failure_emergency_poweroff_func(struct work_struct *work)
         */
        pr_emerg("Hardware protection shutdown failed. Trying emergency restart\n");
        emergency_restart();
-
-       printk_prefer_direct_exit();
 }
 
 static DECLARE_DELAYED_WORK(hw_failure_emergency_poweroff_work,
@@ -962,13 +954,11 @@ void hw_protection_shutdown(const char *reason, int ms_until_forced)
 {
        static atomic_t allow_proceed = ATOMIC_INIT(1);
 
-       printk_prefer_direct_enter();
-
        pr_emerg("HARDWARE PROTECTION shutdown (%s)\n", reason);
 
        /* Shutdown should be initiated only once. */
        if (!atomic_dec_and_test(&allow_proceed))
-               goto out;
+               return;
 
        /*
         * Queue a backup emergency shutdown in the event of
@@ -976,8 +966,6 @@ void hw_protection_shutdown(const char *reason, int ms_until_forced)
         */
        hw_failure_emergency_poweroff(ms_until_forced);
        orderly_poweroff(true);
-out:
-       printk_prefer_direct_exit();
 }
 EXPORT_SYMBOL_GPL(hw_protection_shutdown);
 
index bfa7452..da0bf6f 100644 (file)
@@ -4798,25 +4798,55 @@ static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
 
 static void balance_push(struct rq *rq);
 
+/*
+ * balance_push_callback is a right abuse of the callback interface and plays
+ * by significantly different rules.
+ *
+ * Where the normal balance_callback's purpose is to be run in the same context
+ * that queued it (only later, when it's safe to drop rq->lock again),
+ * balance_push_callback is specifically targeted at __schedule().
+ *
+ * This abuse is tolerated because it places all the unlikely/odd cases behind
+ * a single test, namely: rq->balance_callback == NULL.
+ */
 struct callback_head balance_push_callback = {
        .next = NULL,
        .func = (void (*)(struct callback_head *))balance_push,
 };
 
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct callback_head *
+__splice_balance_callbacks(struct rq *rq, bool split)
 {
        struct callback_head *head = rq->balance_callback;
 
+       if (likely(!head))
+               return NULL;
+
        lockdep_assert_rq_held(rq);
-       if (head)
+       /*
+        * Must not take balance_push_callback off the list when
+        * splice_balance_callbacks() and balance_callbacks() are not
+        * in the same rq->lock section.
+        *
+        * In that case it would be possible for __schedule() to interleave
+        * and observe the list empty.
+        */
+       if (split && head == &balance_push_callback)
+               head = NULL;
+       else
                rq->balance_callback = NULL;
 
        return head;
 }
 
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+       return __splice_balance_callbacks(rq, true);
+}
+
 static void __balance_callbacks(struct rq *rq)
 {
-       do_balance_callbacks(rq, splice_balance_callbacks(rq));
+       do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
 }
 
 static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
index 0125961..47b89a0 100644 (file)
@@ -1693,6 +1693,11 @@ queue_balance_callback(struct rq *rq,
 {
        lockdep_assert_rq_held(rq);
 
+       /*
+        * Don't (re)queue an already queued item; nor queue anything when
+        * balance_push() is active, see the comment with
+        * balance_push_callback.
+        */
        if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
                return;
 
index edb1dc9..6f86fda 100644 (file)
@@ -2029,12 +2029,12 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
        bool autoreap = false;
        u64 utime, stime;
 
-       BUG_ON(sig == -1);
+       WARN_ON_ONCE(sig == -1);
 
-       /* do_notify_parent_cldstop should have been called instead.  */
-       BUG_ON(task_is_stopped_or_traced(tsk));
+       /* do_notify_parent_cldstop should have been called instead.  */
+       WARN_ON_ONCE(task_is_stopped_or_traced(tsk));
 
-       BUG_ON(!tsk->ptrace &&
+       WARN_ON_ONCE(!tsk->ptrace &&
               (tsk->group_leader != tsk || !thread_group_empty(tsk)));
 
        /* Wake up all pidfd waiters */
index e52b6e3..85c92e2 100644 (file)
@@ -1237,6 +1237,30 @@ static int do_proc_dointvec_ms_jiffies_conv(bool *negp, unsigned long *lvalp,
        return 0;
 }
 
+static int do_proc_dointvec_ms_jiffies_minmax_conv(bool *negp, unsigned long *lvalp,
+                                               int *valp, int write, void *data)
+{
+       int tmp, ret;
+       struct do_proc_dointvec_minmax_conv_param *param = data;
+       /*
+        * If writing, first do so via a temporary local int so we can
+        * bounds-check it before touching *valp.
+        */
+       int *ip = write ? &tmp : valp;
+
+       ret = do_proc_dointvec_ms_jiffies_conv(negp, lvalp, ip, write, data);
+       if (ret)
+               return ret;
+
+       if (write) {
+               if ((param->min && *param->min > tmp) ||
+                               (param->max && *param->max < tmp))
+                       return -EINVAL;
+               *valp = tmp;
+       }
+       return 0;
+}
+
 /**
  * proc_dointvec_jiffies - read a vector of integers as seconds
  * @table: the sysctl table
@@ -1259,6 +1283,17 @@ int proc_dointvec_jiffies(struct ctl_table *table, int write,
                            do_proc_dointvec_jiffies_conv,NULL);
 }
 
+int proc_dointvec_ms_jiffies_minmax(struct ctl_table *table, int write,
+                         void *buffer, size_t *lenp, loff_t *ppos)
+{
+       struct do_proc_dointvec_minmax_conv_param param = {
+               .min = (int *) table->extra1,
+               .max = (int *) table->extra2,
+       };
+       return do_proc_dointvec(table, write, buffer, lenp, ppos,
+                       do_proc_dointvec_ms_jiffies_minmax_conv, &param);
+}
+
 /**
  * proc_dointvec_userhz_jiffies - read a vector of integers as 1/USER_HZ seconds
  * @table: the sysctl table
@@ -1523,6 +1558,12 @@ int proc_dointvec_jiffies(struct ctl_table *table, int write,
        return -ENOSYS;
 }
 
+int proc_dointvec_ms_jiffies_minmax(struct ctl_table *table, int write,
+                                   void *buffer, size_t *lenp, loff_t *ppos)
+{
+       return -ENOSYS;
+}
+
 int proc_dointvec_userhz_jiffies(struct ctl_table *table, int write,
                    void *buffer, size_t *lenp, loff_t *ppos)
 {
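
The new handler follows the existing minmax-conv pattern: on writes, the
ms-to-jiffies conversion goes through a temporary local int that is checked
against the optional extra1/extra2 bounds before *valp is updated, so an
out-of-range write fails with -EINVAL without clobbering the stored value.
A minimal sketch of how a caller would wire it up (the table entry and names
here are illustrative, not part of this series):

static int example_jiffies;	/* stored in jiffies, exposed in milliseconds */

static struct ctl_table example_table[] = {
	{
		.procname	= "example_interval_ms",
		.data		= &example_jiffies,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_ms_jiffies_minmax,
		/* .extra1/.extra2 bound the converted jiffies value, so a
		 * 1 ms floor is a pointer holding msecs_to_jiffies(1), as
		 * the neighbour code later in this series does via a
		 * stack-local ctl_table copy.
		 */
	},
	{ }
};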
index 58a11f8..3004958 100644 (file)
@@ -526,7 +526,6 @@ void __init tick_nohz_full_setup(cpumask_var_t cpumask)
        cpumask_copy(tick_nohz_full_mask, cpumask);
        tick_nohz_full_running = true;
 }
-EXPORT_SYMBOL_GPL(tick_nohz_full_setup);
 
 static int tick_nohz_cpu_down(unsigned int cpu)
 {
index 10a32b0..fe04c6f 100644 (file)
@@ -770,14 +770,11 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
  **/
 void blk_trace_shutdown(struct request_queue *q)
 {
-       mutex_lock(&q->debugfs_mutex);
        if (rcu_dereference_protected(q->blk_trace,
                                      lockdep_is_held(&q->debugfs_mutex))) {
                __blk_trace_startstop(q, 0);
                __blk_trace_remove(q);
        }
-
-       mutex_unlock(&q->debugfs_mutex);
 }
 
 #ifdef CONFIG_BLK_CGROUP
index 4be976c..68e5cdd 100644 (file)
@@ -2423,7 +2423,7 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long entry_ip,
        kprobe_multi_link_prog_run(link, entry_ip, regs);
 }
 
-static int symbols_cmp(const void *a, const void *b)
+static int symbols_cmp_r(const void *a, const void *b, const void *priv)
 {
        const char **str_a = (const char **) a;
        const char **str_b = (const char **) b;
@@ -2431,6 +2431,28 @@ static int symbols_cmp(const void *a, const void *b)
        return strcmp(*str_a, *str_b);
 }
 
+struct multi_symbols_sort {
+       const char **funcs;
+       u64 *cookies;
+};
+
+static void symbols_swap_r(void *a, void *b, int size, const void *priv)
+{
+       const struct multi_symbols_sort *data = priv;
+       const char **name_a = a, **name_b = b;
+
+       swap(*name_a, *name_b);
+
+       /* If defined, swap also related cookies. */
+       if (data->cookies) {
+               u64 *cookie_a, *cookie_b;
+
+               cookie_a = data->cookies + (name_a - data->funcs);
+               cookie_b = data->cookies + (name_b - data->funcs);
+               swap(*cookie_a, *cookie_b);
+       }
+}
+
 int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 {
        struct bpf_kprobe_multi_link *link = NULL;
@@ -2468,38 +2490,46 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
        if (!addrs)
                return -ENOMEM;
 
+       ucookies = u64_to_user_ptr(attr->link_create.kprobe_multi.cookies);
+       if (ucookies) {
+               cookies = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
+               if (!cookies) {
+                       err = -ENOMEM;
+                       goto error;
+               }
+               if (copy_from_user(cookies, ucookies, size)) {
+                       err = -EFAULT;
+                       goto error;
+               }
+       }
+
        if (uaddrs) {
                if (copy_from_user(addrs, uaddrs, size)) {
                        err = -EFAULT;
                        goto error;
                }
        } else {
+               struct multi_symbols_sort data = {
+                       .cookies = cookies,
+               };
                struct user_syms us;
 
                err = copy_user_syms(&us, usyms, cnt);
                if (err)
                        goto error;
 
-               sort(us.syms, cnt, sizeof(*us.syms), symbols_cmp, NULL);
+               if (cookies)
+                       data.funcs = us.syms;
+
+               sort_r(us.syms, cnt, sizeof(*us.syms), symbols_cmp_r,
+                      symbols_swap_r, &data);
+
                err = ftrace_lookup_symbols(us.syms, cnt, addrs);
                free_user_syms(&us);
                if (err)
                        goto error;
        }
 
-       ucookies = u64_to_user_ptr(attr->link_create.kprobe_multi.cookies);
-       if (ucookies) {
-               cookies = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
-               if (!cookies) {
-                       err = -ENOMEM;
-                       goto error;
-               }
-               if (copy_from_user(cookies, ucookies, size)) {
-                       err = -EFAULT;
-                       goto error;
-               }
-       }
-
        link = kzalloc(sizeof(*link), GFP_KERNEL);
        if (!link) {
                err = -ENOMEM;
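
The reason for moving from sort() to sort_r() above is that both callbacks
receive a priv pointer, which lets the swap callback mirror every element
move into the parallel cookies array: after sorting, cookies[i] still
belongs to the symbol at syms[i]. The same pattern in a self-contained
userspace sketch (illustrative names, not the kernel's sort_r() machinery):

#include <stdio.h>
#include <string.h>

/* Insertion sort that swaps the cookie whenever it swaps the name,
 * keeping the two parallel arrays index-aligned.
 */
static void sort_paired(const char **names, unsigned long long *cookies, int n)
{
	for (int i = 1; i < n; i++) {
		for (int j = i; j > 0 && strcmp(names[j - 1], names[j]) > 0; j--) {
			const char *tn = names[j - 1];
			unsigned long long tc = cookies[j - 1];

			names[j - 1] = names[j];	/* swap the key... */
			cookies[j - 1] = cookies[j];	/* ...and its cookie */
			names[j] = tn;
			cookies[j] = tc;
		}
	}
}

int main(void)
{
	const char *names[] = { "schedule", "do_exit", "kthread_stop" };
	unsigned long long cookies[] = { 1, 2, 3 };

	sort_paired(names, cookies, 3);
	for (int i = 0; i < 3; i++)
		printf("%s -> %llu\n", names[i], cookies[i]);
	return 0;
}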
index e750fe1..601ccf1 100644 (file)
@@ -8029,15 +8029,23 @@ static int kallsyms_callback(void *data, const char *name,
                             struct module *mod, unsigned long addr)
 {
        struct kallsyms_data *args = data;
+       const char **sym;
+       int idx;
 
-       if (!bsearch(&name, args->syms, args->cnt, sizeof(*args->syms), symbols_cmp))
+       sym = bsearch(&name, args->syms, args->cnt, sizeof(*args->syms), symbols_cmp);
+       if (!sym)
+               return 0;
+
+       idx = sym - args->syms;
+       if (args->addrs[idx])
                return 0;
 
        addr = ftrace_location(addr);
        if (!addr)
                return 0;
 
-       args->addrs[args->found++] = addr;
+       args->addrs[idx] = addr;
+       args->found++;
        return args->found == args->cnt ? 1 : 0;
 }
 
@@ -8062,6 +8070,7 @@ int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt, unsigned long *a
        struct kallsyms_data args;
        int err;
 
+       memset(addrs, 0, sizeof(*addrs) * cnt);
        args.addrs = addrs;
        args.syms = sorted_syms;
        args.cnt = cnt;
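
kallsyms_callback() now derives a stable slot for each symbol from the
bsearch() hit itself: the returned element pointer minus the array base is
the index, and because ftrace_lookup_symbols() pre-zeroes addrs[], a
non-zero slot doubles as an "already resolved" marker for duplicate
kallsyms entries. The pointer-arithmetic trick in isolation (illustrative
userspace sketch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp_str(const void *a, const void *b)
{
	return strcmp(*(const char * const *)a, *(const char * const *)b);
}

int main(void)
{
	const char *syms[] = { "bar", "baz", "foo" };	/* must be sorted */
	const char *key = "baz";
	const char **hit = bsearch(&key, syms, 3, sizeof(syms[0]), cmp_str);

	if (hit)
		printf("index %td\n", hit - syms);	/* prints "index 1" */
	return 0;
}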
index b568337..c69d822 100644 (file)
@@ -154,6 +154,15 @@ struct rethook_node *rethook_try_get(struct rethook *rh)
        if (unlikely(!handler))
                return NULL;
 
+       /*
+        * This expects that the caller will set up a rethook on a function
+        * entry. When the function returns, the rethook will eventually be
+        * reclaimed or released in rethook_recycle() with call_rcu().
+        * This means the caller must run in an RCU-available context.
+        */
+       if (unlikely(!rcu_is_watching()))
+               return NULL;
+
        fn = freelist_try_get(&rh->pool);
        if (!fn)
                return NULL;
index 2c95992..a8cfac0 100644 (file)
@@ -6424,9 +6424,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
                synchronize_rcu();
                free_snapshot(tr);
        }
-#endif
 
-#ifdef CONFIG_TRACER_MAX_TRACE
        if (t->use_max_tr && !had_max_tr) {
                ret = tracing_alloc_snapshot_instance(tr);
                if (ret < 0)
index 9350733..a245ea6 100644 (file)
@@ -1718,8 +1718,17 @@ static int
 kretprobe_dispatcher(struct kretprobe_instance *ri, struct pt_regs *regs)
 {
        struct kretprobe *rp = get_kretprobe(ri);
-       struct trace_kprobe *tk = container_of(rp, struct trace_kprobe, rp);
+       struct trace_kprobe *tk;
+
+       /*
+        * There is a small chance that get_kretprobe(ri) returns NULL when
+        * the kretprobe is unregistered on another CPU between kretprobe's
+        * trampoline_handler and this function.
+        */
+       if (unlikely(!rp))
+               return 0;
 
+       tk = container_of(rp, struct trace_kprobe, rp);
        raw_cpu_inc(*tk->nhit);
 
        if (trace_probe_test_flag(&tk->tp, TP_FLAG_TRACE))
index 326235f..88ba5b4 100644 (file)
@@ -547,7 +547,6 @@ static int __trace_uprobe_create(int argc, const char **argv)
        bool is_return = false;
        int i, ret;
 
-       ret = 0;
        ref_ctr_offset = 0;
 
        switch (argv[0][0]) {
index 20a7a55..ecb0e83 100644 (file)
@@ -424,8 +424,6 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
                /* Start period for the next softlockup warning. */
                update_report_ts();
 
-               printk_prefer_direct_enter();
-
                pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
                        smp_processor_id(), duration,
                        current->comm, task_pid_nr(current));
@@ -444,8 +442,6 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
                add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK);
                if (softlockup_panic)
                        panic("softlockup: hung tasks");
-
-               printk_prefer_direct_exit();
        }
 
        return HRTIMER_RESTART;
index 701f35f..247bf0b 100644 (file)
@@ -135,8 +135,6 @@ static void watchdog_overflow_callback(struct perf_event *event,
                if (__this_cpu_read(hard_watchdog_warn) == true)
                        return;
 
-               printk_prefer_direct_enter();
-
                pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n",
                         this_cpu);
                print_modules();
@@ -157,8 +155,6 @@ static void watchdog_overflow_callback(struct perf_event *event,
                if (hardlockup_panic)
                        nmi_panic(regs, "Hard LOCKUP");
 
-               printk_prefer_direct_exit();
-
                __this_cpu_write(hard_watchdog_warn, true);
                return;
        }
index 6a84363..eaaad4d 100644 (file)
@@ -120,6 +120,9 @@ config INDIRECT_IOMEM_FALLBACK
 
 source "lib/crypto/Kconfig"
 
+config LIB_MEMNEQ
+       bool
+
 config CRC_CCITT
        tristate "CRC-CCITT functions"
        help
index c4fe15d..a9f7eb0 100644 (file)
@@ -94,7 +94,7 @@ config UBSAN_UNREACHABLE
        bool "Perform checking for unreachable code"
        # objtool already handles unreachable checking and gets angry about
        # seeing UBSan instrumentation located in unreachable places.
-       depends on !(OBJTOOL && (STACK_VALIDATION || UNWINDER_ORC || X86_SMAP))
+       depends on !(OBJTOOL && (STACK_VALIDATION || UNWINDER_ORC || HAVE_UACCESS_VALIDATION))
        depends on $(cc-option,-fsanitize=unreachable)
        help
          This option enables -fsanitize=unreachable which checks for control
index ea54294..f99bf61 100644 (file)
@@ -251,6 +251,7 @@ obj-$(CONFIG_DIMLIB) += dim/
 obj-$(CONFIG_SIGNATURE) += digsig.o
 
 lib-$(CONFIG_CLZ_TAB) += clz_tab.o
+lib-$(CONFIG_LIB_MEMNEQ) += memneq.o
 
 obj-$(CONFIG_GENERIC_STRNCPY_FROM_USER) += strncpy_from_user.o
 obj-$(CONFIG_GENERIC_STRNLEN_USER) += strnlen_user.o
index 9856e29..2082af4 100644 (file)
@@ -71,6 +71,7 @@ config CRYPTO_LIB_CURVE25519
        tristate "Curve25519 scalar multiplication library"
        depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519
        select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n
+       select LIB_MEMNEQ
        help
          Enable the Curve25519 library interface. This interface may be
          fulfilled by either the generic implementation or an arch-specific
index c6f0b18..45e93ec 100644 (file)
@@ -110,31 +110,6 @@ int lockref_put_not_zero(struct lockref *lockref)
 }
 EXPORT_SYMBOL(lockref_put_not_zero);
 
-/**
- * lockref_get_or_lock - Increments count unless the count is 0 or dead
- * @lockref: pointer to lockref structure
- * Return: 1 if count updated successfully or 0 if count was zero
- * and we got the lock instead.
- */
-int lockref_get_or_lock(struct lockref *lockref)
-{
-       CMPXCHG_LOOP(
-               new.count++;
-               if (old.count <= 0)
-                       break;
-       ,
-               return 1;
-       );
-
-       spin_lock(&lockref->lock);
-       if (lockref->count <= 0)
-               return 0;
-       lockref->count++;
-       spin_unlock(&lockref->lock);
-       return 1;
-}
-EXPORT_SYMBOL(lockref_get_or_lock);
-
 /**
  * lockref_put_return - Decrement reference count if possible
  * @lockref: pointer to lockref structure
diff --git a/lib/memneq.c b/lib/memneq.c
new file mode 100644 (file)
index 0000000..fb11608
--- /dev/null
@@ -0,0 +1,176 @@
+/*
+ * Constant-time equality testing of memory regions.
+ *
+ * Authors:
+ *
+ *   James Yonan <james@openvpn.net>
+ *   Daniel Borkmann <dborkman@redhat.com>
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ * The full GNU General Public License is included in this distribution
+ * in the file called LICENSE.GPL.
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of OpenVPN Technologies nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <crypto/algapi.h>
+#include <asm/unaligned.h>
+
+#ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
+
+/* Generic path for arbitrary size */
+static inline unsigned long
+__crypto_memneq_generic(const void *a, const void *b, size_t size)
+{
+       unsigned long neq = 0;
+
+#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+       while (size >= sizeof(unsigned long)) {
+               neq |= get_unaligned((unsigned long *)a) ^
+                      get_unaligned((unsigned long *)b);
+               OPTIMIZER_HIDE_VAR(neq);
+               a += sizeof(unsigned long);
+               b += sizeof(unsigned long);
+               size -= sizeof(unsigned long);
+       }
+#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+       while (size > 0) {
+               neq |= *(unsigned char *)a ^ *(unsigned char *)b;
+               OPTIMIZER_HIDE_VAR(neq);
+               a += 1;
+               b += 1;
+               size -= 1;
+       }
+       return neq;
+}
+
+/* Loop-free fast-path for frequently used 16-byte size */
+static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
+{
+       unsigned long neq = 0;
+
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+       if (sizeof(unsigned long) == 8) {
+               neq |= get_unaligned((unsigned long *)a) ^
+                      get_unaligned((unsigned long *)b);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= get_unaligned((unsigned long *)(a + 8)) ^
+                      get_unaligned((unsigned long *)(b + 8));
+               OPTIMIZER_HIDE_VAR(neq);
+       } else if (sizeof(unsigned int) == 4) {
+               neq |= get_unaligned((unsigned int *)a) ^
+                      get_unaligned((unsigned int *)b);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= get_unaligned((unsigned int *)(a + 4)) ^
+                      get_unaligned((unsigned int *)(b + 4));
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= get_unaligned((unsigned int *)(a + 8)) ^
+                      get_unaligned((unsigned int *)(b + 8));
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= get_unaligned((unsigned int *)(a + 12)) ^
+                      get_unaligned((unsigned int *)(b + 12));
+               OPTIMIZER_HIDE_VAR(neq);
+       } else
+#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+       {
+               neq |= *(unsigned char *)(a)    ^ *(unsigned char *)(b);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+1)  ^ *(unsigned char *)(b+1);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+2)  ^ *(unsigned char *)(b+2);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+3)  ^ *(unsigned char *)(b+3);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+4)  ^ *(unsigned char *)(b+4);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+5)  ^ *(unsigned char *)(b+5);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+6)  ^ *(unsigned char *)(b+6);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+7)  ^ *(unsigned char *)(b+7);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+8)  ^ *(unsigned char *)(b+8);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+9)  ^ *(unsigned char *)(b+9);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+10) ^ *(unsigned char *)(b+10);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+11) ^ *(unsigned char *)(b+11);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+12) ^ *(unsigned char *)(b+12);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+13) ^ *(unsigned char *)(b+13);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+14) ^ *(unsigned char *)(b+14);
+               OPTIMIZER_HIDE_VAR(neq);
+               neq |= *(unsigned char *)(a+15) ^ *(unsigned char *)(b+15);
+               OPTIMIZER_HIDE_VAR(neq);
+       }
+
+       return neq;
+}
+
+/* Compare two areas of memory without leaking timing information,
+ * and with special optimizations for common sizes.  Users should
+ * not call this function directly, but should instead use
+ * crypto_memneq defined in crypto/algapi.h.
+ */
+noinline unsigned long __crypto_memneq(const void *a, const void *b,
+                                      size_t size)
+{
+       switch (size) {
+       case 16:
+               return __crypto_memneq_16(a, b);
+       default:
+               return __crypto_memneq_generic(a, b, size);
+       }
+}
+EXPORT_SYMBOL(__crypto_memneq);
+
+#endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
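
As the comment above notes, callers go through the crypto_memneq() wrapper
in crypto/algapi.h rather than __crypto_memneq() directly; the typical use
is replacing a timing-leaky memcmp() when verifying a MAC or digest. A
hedged sketch of such a call site (function name and context are
illustrative):

#include <crypto/algapi.h>

static bool mac_matches(const u8 *expected, const u8 *received, size_t len)
{
	/*
	 * crypto_memneq() returns nonzero iff the buffers differ, and takes
	 * the same time whether they differ in the first byte or the last,
	 * so an attacker cannot learn a prefix match from the latency.
	 */
	return crypto_memneq(expected, received, len) == 0;
}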
index ae4fd4d..29eb048 100644 (file)
@@ -528,7 +528,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
 
                sbitmap_deferred_clear(map);
                if (map->word == (1UL << (map_depth - 1)) - 1)
-                       continue;
+                       goto next;
 
                nr = find_first_zero_bit(&map->word, map_depth);
                if (nr + nr_tags <= map_depth) {
@@ -539,6 +539,8 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
                        get_mask = ((1UL << map_tags) - 1) << nr;
                        do {
                                val = READ_ONCE(map->word);
+                               if ((val & ~get_mask) != val)
+                                       goto next;
                                ret = atomic_long_cmpxchg(ptr, val, get_mask | val);
                        } while (ret != val);
                        get_mask = (get_mask & ~ret) >> nr;
@@ -549,6 +551,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
                                return get_mask;
                        }
                }
+next:
                /* Jump to next index. */
                if (++index >= sb->map_nr)
                        index = 0;
index ff60bd7..95550b8 100644 (file)
@@ -231,20 +231,13 @@ static __init int bdi_class_init(void)
 }
 postcore_initcall(bdi_class_init);
 
-static int bdi_init(struct backing_dev_info *bdi);
-
 static int __init default_bdi_init(void)
 {
-       int err;
-
        bdi_wq = alloc_workqueue("writeback", WQ_MEM_RECLAIM | WQ_UNBOUND |
                                 WQ_SYSFS, 0);
        if (!bdi_wq)
                return -ENOMEM;
-
-       err = bdi_init(&noop_backing_dev_info);
-
-       return err;
+       return 0;
 }
 subsys_initcall(default_bdi_init);
 
@@ -781,7 +774,7 @@ static void cgwb_remove_from_bdi_list(struct bdi_writeback *wb)
 
 #endif /* CONFIG_CGROUP_WRITEBACK */
 
-static int bdi_init(struct backing_dev_info *bdi)
+int bdi_init(struct backing_dev_info *bdi)
 {
        int ret;
 
index 8efbfb2..4b07c29 100644 (file)
@@ -374,6 +374,8 @@ static void damon_reclaim_timer_fn(struct work_struct *work)
 }
 static DECLARE_DELAYED_WORK(damon_reclaim_timer, damon_reclaim_timer_fn);
 
+static bool damon_reclaim_initialized;
+
 static int enabled_store(const char *val,
                const struct kernel_param *kp)
 {
@@ -382,6 +384,10 @@ static int enabled_store(const char *val,
        if (rc < 0)
                return rc;
 
+       /* system_wq might not be initialized yet */
+       if (!damon_reclaim_initialized)
+               return rc;
+
        if (enabled)
                schedule_delayed_work(&damon_reclaim_timer, 0);
 
@@ -449,6 +455,8 @@ static int __init damon_reclaim_init(void)
        damon_add_target(ctx, target);
 
        schedule_delayed_work(&damon_reclaim_timer, 0);
+
+       damon_reclaim_initialized = true;
        return 0;
 }
 
index ac3775c..ffdfbc8 100644 (file)
@@ -2385,6 +2385,8 @@ static void filemap_get_read_batch(struct address_space *mapping,
                        continue;
                if (xas.xa_index > max || xa_is_value(folio))
                        break;
+               if (xa_is_sibling(folio))
+                       break;
                if (!folio_try_get_rcu(folio))
                        goto retry;
 
@@ -2629,6 +2631,13 @@ err:
        return err;
 }
 
+static inline bool pos_same_folio(loff_t pos1, loff_t pos2, struct folio *folio)
+{
+       unsigned int shift = folio_shift(folio);
+
+       return (pos1 >> shift == pos2 >> shift);
+}
+
 /**
  * filemap_read - Read data from the page cache.
  * @iocb: The iocb to read.
@@ -2700,11 +2709,11 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
                writably_mapped = mapping_writably_mapped(mapping);
 
                /*
-                * When a sequential read accesses a page several times, only
+                * When a read accesses the same folio several times, only
                 * mark it as accessed the first time.
                 */
-               if (iocb->ki_pos >> PAGE_SHIFT !=
-                   ra->prev_pos >> PAGE_SHIFT)
+               if (!pos_same_folio(iocb->ki_pos, ra->prev_pos - 1,
+                                                       fbatch.folios[0]))
                        folio_mark_accessed(fbatch.folios[0]);
 
                for (i = 0; i < folio_batch_count(&fbatch); i++) {
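
pos_same_folio() generalizes the old PAGE_SHIFT comparison to whatever the
folio's size actually is. For an order-2 folio with 4 KiB pages,
folio_shift() is 12 + 2 = 14, so two positions are "the same" iff they fall
in the same 16 KiB window; a quick worked example of the arithmetic
(illustrative userspace code, not the kernel helper itself):

#include <stdio.h>

int main(void)
{
	unsigned int shift = 12 + 2;	/* PAGE_SHIFT + folio order */
	long long a = 0x3000, b = 0x3fff, c = 0x4000;

	printf("%d\n", (a >> shift) == (b >> shift));	/* 1: same folio */
	printf("%d\n", (a >> shift) == (c >> shift));	/* 0: next folio */
	return 0;
}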
index f724800..834f288 100644 (file)
@@ -2377,6 +2377,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
                        page_tail);
        page_tail->mapping = head->mapping;
        page_tail->index = head->index + tail;
+       page_tail->private = 0;
 
        /* Page flags must be visible before we make the page non-compound. */
        smp_wmb();
index 5c0cddd..65e242b 100644 (file)
@@ -48,7 +48,7 @@ static int hwpoison_inject(void *data, u64 val)
 
 inject:
        pr_info("Injecting memory failure at pfn %#lx\n", pfn);
-       err = memory_failure(pfn, 0);
+       err = memory_failure(pfn, MF_SW_SIMULATED);
        return (err == -EOPNOTSUPP) ? 0 : err;
 }
 
index 4e7cd4c..4b5e5a3 100644 (file)
@@ -360,6 +360,9 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
        unsigned long flags;
        struct slab *slab;
        void *addr;
+       const bool random_right_allocate = prandom_u32_max(2);
+       const bool random_fault = CONFIG_KFENCE_STRESS_TEST_FAULTS &&
+                                 !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS);
 
        /* Try to obtain a free object. */
        raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
@@ -404,7 +407,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
         * is that the out-of-bounds accesses detected are deterministic for
         * such allocations.
         */
-       if (prandom_u32_max(2)) {
+       if (random_right_allocate) {
                /* Allocate on the "right" side, re-calculate address. */
                meta->addr += PAGE_SIZE - size;
                meta->addr = ALIGN_DOWN(meta->addr, cache->align);
@@ -444,7 +447,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
        if (cache->ctor)
                cache->ctor(addr);
 
-       if (CONFIG_KFENCE_STRESS_TEST_FAULTS && !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS))
+       if (random_fault)
                kfence_protect(meta->addr); /* Random "faults" by protecting the object. */
 
        atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCATED]);
index d7b4f26..0316bbc 100644 (file)
@@ -1112,7 +1112,7 @@ static int madvise_inject_error(int behavior,
                } else {
                        pr_info("Injecting memory failure for pfn %#lx at process virtual address %#lx\n",
                                 pfn, start);
-                       ret = memory_failure(pfn, MF_COUNT_INCREASED);
+                       ret = memory_failure(pfn, MF_COUNT_INCREASED | MF_SW_SIMULATED);
                        if (ret == -EOPNOTSUPP)
                                ret = 0;
                }
index abec50f..618c366 100644 (file)
@@ -4859,7 +4859,7 @@ static int mem_cgroup_slab_show(struct seq_file *m, void *p)
 {
        /*
         * Deprecated.
-        * Please, take a look at tools/cgroup/slabinfo.py .
+        * Please, take a look at tools/cgroup/memcg_slabinfo.py .
         */
        return 0;
 }
index b85661c..da39ec8 100644 (file)
@@ -69,6 +69,8 @@ int sysctl_memory_failure_recovery __read_mostly = 1;
 
 atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
 
+static bool hw_memory_failure __read_mostly = false;
+
 static bool __page_handle_poison(struct page *page)
 {
        int ret;
@@ -1768,6 +1770,9 @@ int memory_failure(unsigned long pfn, int flags)
 
        mutex_lock(&mf_mutex);
 
+       if (!(flags & MF_SW_SIMULATED))
+               hw_memory_failure = true;
+
        p = pfn_to_online_page(pfn);
        if (!p) {
                res = arch_memory_failure(pfn, flags);
@@ -2103,6 +2108,13 @@ int unpoison_memory(unsigned long pfn)
 
        mutex_lock(&mf_mutex);
 
+       if (hw_memory_failure) {
+               unpoison_pr_info("Unpoison: Disabled after HW memory failure %#lx\n",
+                                pfn, &unpoison_rs);
+               ret = -EOPNOTSUPP;
+               goto unlock_mutex;
+       }
+
        if (!PageHWPoison(p)) {
                unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n",
                                 pfn, &unpoison_rs);
index e51588e..6c1ea61 100644 (file)
@@ -1106,6 +1106,7 @@ static int unmap_and_move(new_page_t get_new_page,
        if (!newpage)
                return -ENOMEM;
 
+       newpage->private = 0;
        rc = __unmap_and_move(page, newpage, force, mode);
        if (rc == MIGRATEPAGE_SUCCESS)
                set_page_owner_migrate_reason(newpage, reason);
index d200d41..9d73dc3 100644 (file)
@@ -286,6 +286,8 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * @flags:                     isolation flags
  * @gfp_flags:                 GFP flags used for migrating pages
  * @isolate_before:    isolate the pageblock before the boundary_pfn
+ * @skip_isolation:    the flag to skip the pageblock isolation in the
+ *                     second isolate_single_pageblock() call
  *
  * Free and in-use pages can be as big as MAX_ORDER-1 and contain more than one
  * pageblock. When not all pageblocks within a page are isolated at the same
index 57a0151..fdcd28c 100644 (file)
@@ -510,6 +510,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
                        new_order--;
        }
 
+       filemap_invalidate_lock_shared(mapping);
        while (index <= limit) {
                unsigned int order = new_order;
 
@@ -536,6 +537,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
        }
 
        read_pages(ractl);
+       filemap_invalidate_unlock_shared(mapping);
 
        /*
         * If there were already pages in the page cache, then we may have
index e553502..b1281b8 100644 (file)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -726,25 +726,48 @@ static struct track *get_track(struct kmem_cache *s, void *object,
        return kasan_reset_tag(p + alloc);
 }
 
-static void noinline set_track(struct kmem_cache *s, void *object,
-                       enum track_item alloc, unsigned long addr)
-{
-       struct track *p = get_track(s, object, alloc);
-
 #ifdef CONFIG_STACKDEPOT
+static noinline depot_stack_handle_t set_track_prepare(void)
+{
+       depot_stack_handle_t handle;
        unsigned long entries[TRACK_ADDRS_COUNT];
        unsigned int nr_entries;
 
        nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
-       p->handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);
+       handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);
+
+       return handle;
+}
+#else
+static inline depot_stack_handle_t set_track_prepare(void)
+{
+       return 0;
+}
 #endif
 
+static void set_track_update(struct kmem_cache *s, void *object,
+                            enum track_item alloc, unsigned long addr,
+                            depot_stack_handle_t handle)
+{
+       struct track *p = get_track(s, object, alloc);
+
+#ifdef CONFIG_STACKDEPOT
+       p->handle = handle;
+#endif
        p->addr = addr;
        p->cpu = smp_processor_id();
        p->pid = current->pid;
        p->when = jiffies;
 }
 
+static __always_inline void set_track(struct kmem_cache *s, void *object,
+                                     enum track_item alloc, unsigned long addr)
+{
+       depot_stack_handle_t handle = set_track_prepare();
+
+       set_track_update(s, object, alloc, addr, handle);
+}
+
 static void init_tracking(struct kmem_cache *s, void *object)
 {
        struct track *p;
@@ -1373,6 +1396,10 @@ static noinline int free_debug_processing(
        int cnt = 0;
        unsigned long flags, flags2;
        int ret = 0;
+       depot_stack_handle_t handle = 0;
+
+       if (s->flags & SLAB_STORE_USER)
+               handle = set_track_prepare();
 
        spin_lock_irqsave(&n->list_lock, flags);
        slab_lock(slab, &flags2);
@@ -1391,7 +1418,7 @@ next_object:
        }
 
        if (s->flags & SLAB_STORE_USER)
-               set_track(s, object, TRACK_FREE, addr);
+               set_track_update(s, object, TRACK_FREE, addr, handle);
        trace(s, slab, object, 0);
        /* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
        init_object(s, object, SLUB_RED_INACTIVE);
@@ -2936,6 +2963,7 @@ redo:
 
        if (!freelist) {
                c->slab = NULL;
+               c->tid = next_tid(c->tid);
                local_unlock_irqrestore(&s->cpu_slab->lock, flags);
                stat(s, DEACTIVATE_BYPASS);
                goto new_slab;
@@ -2968,6 +2996,7 @@ deactivate_slab:
        freelist = c->freelist;
        c->slab = NULL;
        c->freelist = NULL;
+       c->tid = next_tid(c->tid);
        local_unlock_irqrestore(&s->cpu_slab->lock, flags);
        deactivate_slab(s, slab, freelist);
 
index f3922a9..034bb24 100644 (file)
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -881,7 +881,7 @@ void lru_cache_disable(void)
         * lru_disable_count = 0 will have exited the critical
         * section when synchronize_rcu() returns.
         */
-       synchronize_rcu();
+       synchronize_rcu_expedited();
 #ifdef CONFIG_SMP
        __lru_add_drain_all(true);
 #else
index 59a5c13..a0f99ba 100644 (file)
@@ -571,6 +571,7 @@ int hci_dev_close(__u16 dev)
                goto done;
        }
 
+       cancel_work_sync(&hdev->power_on);
        if (hci_dev_test_and_clear_flag(hdev, HCI_AUTO_OFF))
                cancel_delayed_work(&hdev->power_off);
 
@@ -2675,6 +2676,8 @@ void hci_unregister_dev(struct hci_dev *hdev)
        list_del(&hdev->list);
        write_unlock(&hci_dev_list_lock);
 
+       cancel_work_sync(&hdev->power_on);
+
        hci_cmd_sync_clear(hdev);
 
        if (!test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks))
index 286d676..1739e8c 100644 (file)
@@ -4088,7 +4088,6 @@ int hci_dev_close_sync(struct hci_dev *hdev)
 
        bt_dev_dbg(hdev, "");
 
-       cancel_work_sync(&hdev->power_on);
        cancel_delayed_work(&hdev->power_off);
        cancel_delayed_work(&hdev->ncmd_timer);
 
index 4fd8826..ff47790 100644 (file)
@@ -1012,9 +1012,24 @@ int br_nf_hook_thresh(unsigned int hook, struct net *net,
                return okfn(net, sk, skb);
 
        ops = nf_hook_entries_get_hook_ops(e);
-       for (i = 0; i < e->num_hook_entries &&
-             ops[i]->priority <= NF_BR_PRI_BRNF; i++)
-               ;
+       for (i = 0; i < e->num_hook_entries; i++) {
+               /* These hooks have already been called */
+               if (ops[i]->priority < NF_BR_PRI_BRNF)
+                       continue;
+
+               /* These hooks have not been called yet, run them. */
+               if (ops[i]->priority > NF_BR_PRI_BRNF)
+                       break;
+
+               /* take a closer look at NF_BR_PRI_BRNF. */
+               if (ops[i]->hook == br_nf_pre_routing) {
+                       /* This hook diverted the skb to this function,
+                        * hooks after this have not been run yet.
+                        */
+                       i++;
+                       break;
+               }
+       }
 
        nf_hook_state_init(&state, hook, NFPROTO_BRIDGE, indev, outdev,
                           sk, net, okfn);
index a9ac5ff..cb56be8 100644 (file)
@@ -15,7 +15,8 @@ menuconfig CAN
          PF_CAN is contained in <Documentation/networking/can.rst>.
 
          If you want CAN support you should say Y here and also to the
-         specific driver for your controller(s) below.
+         specific driver for your controller(s) under the Network device
+         support section.
 
 if CAN
 
@@ -69,6 +70,4 @@ config CAN_ISOTP
          If you want to perform automotive vehicle diagnostic services (UDS),
          say 'y'.
 
-source "drivers/net/can/Kconfig"
-
 endif
index 65ee1b7..e60161b 100644 (file)
@@ -100,6 +100,7 @@ static inline u64 get_u64(const struct canfd_frame *cp, int offset)
 
 struct bcm_op {
        struct list_head list;
+       struct rcu_head rcu;
        int ifindex;
        canid_t can_id;
        u32 flags;
@@ -718,10 +719,9 @@ static struct bcm_op *bcm_find_op(struct list_head *ops,
        return NULL;
 }
 
-static void bcm_remove_op(struct bcm_op *op)
+static void bcm_free_op_rcu(struct rcu_head *rcu_head)
 {
-       hrtimer_cancel(&op->timer);
-       hrtimer_cancel(&op->thrtimer);
+       struct bcm_op *op = container_of(rcu_head, struct bcm_op, rcu);
 
        if ((op->frames) && (op->frames != &op->sframe))
                kfree(op->frames);
@@ -732,6 +732,14 @@ static void bcm_remove_op(struct bcm_op *op)
        kfree(op);
 }
 
+static void bcm_remove_op(struct bcm_op *op)
+{
+       hrtimer_cancel(&op->timer);
+       hrtimer_cancel(&op->thrtimer);
+
+       call_rcu(&op->rcu, bcm_free_op_rcu);
+}
+
 static void bcm_rx_unreg(struct net_device *dev, struct bcm_op *op)
 {
        if (op->rx_reg_dev == dev) {
@@ -757,6 +765,9 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh,
                if ((op->can_id == mh->can_id) && (op->ifindex == ifindex) &&
                    (op->flags & CAN_FD_FRAME) == (mh->flags & CAN_FD_FRAME)) {
 
+                       /* disable automatic timer on frame reception */
+                       op->flags |= RX_NO_AUTOTIMER;
+
                        /*
                         * Don't care if we're bound or not (due to netdev
                         * problems) can_rx_unregister() is always a safe
@@ -785,7 +796,6 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh,
                                                  bcm_rx_handler, op);
 
                        list_del(&op->list);
-                       synchronize_rcu();
                        bcm_remove_op(op);
                        return 1; /* done */
                }
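
Replacing the synchronize_rcu() on the delete path with call_rcu() is the
standard embed-an-rcu_head deferred-free pattern: the op is unlinked
immediately and the kfree() runs from the RCU callback once all
pre-existing readers are done, so the deleting task never blocks for a
grace period. In generic form (struct and names illustrative, not the
bcm_op definitions themselves):

struct example_entry {
	struct list_head list;
	struct rcu_head rcu;
	/* ... payload ... */
};

static void example_free_rcu(struct rcu_head *rcu_head)
{
	struct example_entry *e = container_of(rcu_head, struct example_entry, rcu);

	kfree(e);	/* safe: all prior RCU read-side critical sections ended */
}

static void example_remove(struct example_entry *e)
{
	list_del_rcu(&e->list);			/* unlink without blocking */
	call_rcu(&e->rcu, example_free_rcu);	/* free after a grace period */
}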
index 8958c42..978ed06 100644 (file)
@@ -397,16 +397,18 @@ static void list_netdevice(struct net_device *dev)
 /* Device list removal
  * caller must respect a RCU grace period before freeing/reusing dev
  */
-static void unlist_netdevice(struct net_device *dev)
+static void unlist_netdevice(struct net_device *dev, bool lock)
 {
        ASSERT_RTNL();
 
        /* Unlink dev from the device chain */
-       write_lock(&dev_base_lock);
+       if (lock)
+               write_lock(&dev_base_lock);
        list_del_rcu(&dev->dev_list);
        netdev_name_node_del(dev->name_node);
        hlist_del_rcu(&dev->index_hlist);
-       write_unlock(&dev_base_lock);
+       if (lock)
+               write_unlock(&dev_base_lock);
 
        dev_base_seq_inc(dev_net(dev));
 }
@@ -10061,11 +10063,11 @@ int register_netdevice(struct net_device *dev)
                goto err_uninit;
 
        ret = netdev_register_kobject(dev);
-       if (ret) {
-               dev->reg_state = NETREG_UNREGISTERED;
+       write_lock(&dev_base_lock);
+       dev->reg_state = ret ? NETREG_UNREGISTERED : NETREG_REGISTERED;
+       write_unlock(&dev_base_lock);
+       if (ret)
                goto err_uninit;
-       }
-       dev->reg_state = NETREG_REGISTERED;
 
        __netdev_update_features(dev);
 
@@ -10347,7 +10349,9 @@ void netdev_run_todo(void)
                        continue;
                }
 
+               write_lock(&dev_base_lock);
                dev->reg_state = NETREG_UNREGISTERED;
+               write_unlock(&dev_base_lock);
                linkwatch_forget_dev(dev);
        }
 
@@ -10828,9 +10832,10 @@ void unregister_netdevice_many(struct list_head *head)
 
        list_for_each_entry(dev, head, unreg_list) {
                /* And unlink it from device chain. */
-               unlist_netdevice(dev);
-
+               write_lock(&dev_base_lock);
+               unlist_netdevice(dev, false);
                dev->reg_state = NETREG_UNREGISTERING;
+               write_unlock(&dev_base_lock);
        }
        flush_all_backlogs();
 
@@ -10977,7 +10982,7 @@ int __dev_change_net_namespace(struct net_device *dev, struct net *net,
        dev_close(dev);
 
        /* And unlink it from device chain */
-       unlist_netdevice(dev);
+       unlist_netdevice(dev, true);
 
        synchronize_net();
 
index 4fae919..4ef77ec 100644 (file)
@@ -6559,10 +6559,21 @@ __bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
                                           ifindex, proto, netns_id, flags);
 
        if (sk) {
-               sk = sk_to_full_sk(sk);
-               if (!sk_fullsock(sk)) {
+               struct sock *sk2 = sk_to_full_sk(sk);
+
+               /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
+                * sock refcnt is decremented to prevent a request_sock leak.
+                */
+               if (!sk_fullsock(sk2))
+                       sk2 = NULL;
+               if (sk2 != sk) {
                        sock_gen_put(sk);
-                       return NULL;
+                       /* Ensure there is no need to bump sk2 refcnt */
+                       if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
+                               WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
+                               return NULL;
+                       }
+                       sk = sk2;
                }
        }
 
@@ -6596,10 +6607,21 @@ bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
                                         flags);
 
        if (sk) {
-               sk = sk_to_full_sk(sk);
-               if (!sk_fullsock(sk)) {
+               struct sock *sk2 = sk_to_full_sk(sk);
+
+               /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
+                * sock refcnt is decremented to prevent a request_sock leak.
+                */
+               if (!sk_fullsock(sk2))
+                       sk2 = NULL;
+               if (sk2 != sk) {
                        sock_gen_put(sk);
-                       return NULL;
+                       /* Ensure there is no need to bump sk2 refcnt */
+                       if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
+                               WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
+                               return NULL;
+                       }
+                       sk = sk2;
                }
        }
 
index d8ec706..6a8c259 100644 (file)
@@ -1579,7 +1579,7 @@ static void neigh_managed_work(struct work_struct *work)
        list_for_each_entry(neigh, &tbl->managed_list, managed_list)
                neigh_event_send_probe(neigh, NULL, false);
        queue_delayed_work(system_power_efficient_wq, &tbl->managed_work,
-                          max(NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME), HZ));
+                          NEIGH_VAR(&tbl->parms, INTERVAL_PROBE_TIME_MS));
        write_unlock_bh(&tbl->lock);
 }
 
@@ -2100,7 +2100,9 @@ static int neightbl_fill_parms(struct sk_buff *skb, struct neigh_parms *parms)
            nla_put_msecs(skb, NDTPA_PROXY_DELAY,
                          NEIGH_VAR(parms, PROXY_DELAY), NDTPA_PAD) ||
            nla_put_msecs(skb, NDTPA_LOCKTIME,
-                         NEIGH_VAR(parms, LOCKTIME), NDTPA_PAD))
+                         NEIGH_VAR(parms, LOCKTIME), NDTPA_PAD) ||
+           nla_put_msecs(skb, NDTPA_INTERVAL_PROBE_TIME_MS,
+                         NEIGH_VAR(parms, INTERVAL_PROBE_TIME_MS), NDTPA_PAD))
                goto nla_put_failure;
        return nla_nest_end(skb, nest);
 
@@ -2255,6 +2257,7 @@ static const struct nla_policy nl_ntbl_parm_policy[NDTPA_MAX+1] = {
        [NDTPA_ANYCAST_DELAY]           = { .type = NLA_U64 },
        [NDTPA_PROXY_DELAY]             = { .type = NLA_U64 },
        [NDTPA_LOCKTIME]                = { .type = NLA_U64 },
+       [NDTPA_INTERVAL_PROBE_TIME_MS]  = { .type = NLA_U64, .min = 1 },
 };
 
 static int neightbl_set(struct sk_buff *skb, struct nlmsghdr *nlh,
@@ -2373,6 +2376,10 @@ static int neightbl_set(struct sk_buff *skb, struct nlmsghdr *nlh,
                                              nla_get_msecs(tbp[i]));
                                call_netevent_notifiers(NETEVENT_DELAY_PROBE_TIME_UPDATE, p);
                                break;
+                       case NDTPA_INTERVAL_PROBE_TIME_MS:
+                               NEIGH_VAR_SET(p, INTERVAL_PROBE_TIME_MS,
+                                             nla_get_msecs(tbp[i]));
+                               break;
                        case NDTPA_RETRANS_TIME:
                                NEIGH_VAR_SET(p, RETRANS_TIME,
                                              nla_get_msecs(tbp[i]));
@@ -3562,6 +3569,22 @@ static int neigh_proc_dointvec_zero_intmax(struct ctl_table *ctl, int write,
        return ret;
 }
 
+static int neigh_proc_dointvec_ms_jiffies_positive(struct ctl_table *ctl, int write,
+                                                  void *buffer, size_t *lenp, loff_t *ppos)
+{
+       struct ctl_table tmp = *ctl;
+       int ret;
+
+       int min = msecs_to_jiffies(1);
+
+       tmp.extra1 = &min;
+       tmp.extra2 = NULL;
+
+       ret = proc_dointvec_ms_jiffies_minmax(&tmp, write, buffer, lenp, ppos);
+       neigh_proc_update(ctl, write);
+       return ret;
+}
+
 int neigh_proc_dointvec(struct ctl_table *ctl, int write, void *buffer,
                        size_t *lenp, loff_t *ppos)
 {
@@ -3658,6 +3681,9 @@ static int neigh_proc_base_reachable_time(struct ctl_table *ctl, int write,
 #define NEIGH_SYSCTL_USERHZ_JIFFIES_ENTRY(attr, name) \
        NEIGH_SYSCTL_ENTRY(attr, attr, name, 0644, neigh_proc_dointvec_userhz_jiffies)
 
+#define NEIGH_SYSCTL_MS_JIFFIES_POSITIVE_ENTRY(attr, name) \
+       NEIGH_SYSCTL_ENTRY(attr, attr, name, 0644, neigh_proc_dointvec_ms_jiffies_positive)
+
 #define NEIGH_SYSCTL_MS_JIFFIES_REUSED_ENTRY(attr, data_attr, name) \
        NEIGH_SYSCTL_ENTRY(attr, data_attr, name, 0644, neigh_proc_dointvec_ms_jiffies)
 
@@ -3676,6 +3702,8 @@ static struct neigh_sysctl_table {
                NEIGH_SYSCTL_USERHZ_JIFFIES_ENTRY(RETRANS_TIME, "retrans_time"),
                NEIGH_SYSCTL_JIFFIES_ENTRY(BASE_REACHABLE_TIME, "base_reachable_time"),
                NEIGH_SYSCTL_JIFFIES_ENTRY(DELAY_PROBE_TIME, "delay_first_probe_time"),
+               NEIGH_SYSCTL_MS_JIFFIES_POSITIVE_ENTRY(INTERVAL_PROBE_TIME_MS,
+                                                      "interval_probe_time_ms"),
                NEIGH_SYSCTL_JIFFIES_ENTRY(GC_STALETIME, "gc_stale_time"),
                NEIGH_SYSCTL_ZERO_INTMAX_ENTRY(QUEUE_LEN_BYTES, "unres_qlen_bytes"),
                NEIGH_SYSCTL_ZERO_INTMAX_ENTRY(PROXY_QLEN, "proxy_qlen"),
index d49fc97..d61afd2 100644 (file)
@@ -33,6 +33,7 @@ static const char fmt_dec[] = "%d\n";
 static const char fmt_ulong[] = "%lu\n";
 static const char fmt_u64[] = "%llu\n";
 
+/* Caller holds RTNL or dev_base_lock */
 static inline int dev_isalive(const struct net_device *dev)
 {
        return dev->reg_state <= NETREG_REGISTERED;
index f18e6e7..b74905f 100644 (file)
@@ -389,7 +389,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
        /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
        memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
 
-       nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
+       nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
+                                              pool->alloc.cache);
        if (unlikely(!nr_pages))
                return NULL;
 
index 00bf35e..c4a7517 100644 (file)
@@ -454,8 +454,6 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 
                skb->fclone = SKB_FCLONE_ORIG;
                refcount_set(&fclones->fclone_ref, 1);
-
-               fclones->skb2.fclone = SKB_FCLONE_CLONE;
        }
 
        return skb;
@@ -1513,6 +1511,7 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
            refcount_read(&fclones->fclone_ref) == 1) {
                n = &fclones->skb2;
                refcount_set(&fclones->fclone_ref, 2);
+               n->fclone = SKB_FCLONE_CLONE;
        } else {
                if (skb_pfmemalloc(skb))
                        gfp_mask |= __GFP_MEMALLOC;
@@ -3195,9 +3194,7 @@ skb_zerocopy(struct sk_buff *to, struct sk_buff *from, int len, int hlen)
                }
        }
 
-       to->truesize += len + plen;
-       to->len += len + plen;
-       to->data_len += len + plen;
+       skb_len_add(to, len + plen);
 
        if (unlikely(skb_orphan_frags(from, GFP_ATOMIC))) {
                skb_tx_error(from);
@@ -3634,13 +3631,8 @@ onlymerged:
        tgt->ip_summed = CHECKSUM_PARTIAL;
        skb->ip_summed = CHECKSUM_PARTIAL;
 
-       /* Yak, is it really working this way? Some helper please? */
-       skb->len -= shiftlen;
-       skb->data_len -= shiftlen;
-       skb->truesize -= shiftlen;
-       tgt->len += shiftlen;
-       tgt->data_len += shiftlen;
-       tgt->truesize += shiftlen;
+       skb_len_add(skb, -shiftlen);
+       skb_len_add(tgt, shiftlen);
 
        return shiftlen;
 }
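
skb_len_add() is a new helper that folds the three open-coded length updates (len, data_len, truesize) into a single call and accepts negative deltas, which is what lets the shift path above retire its "some helper please?" comment. A self-contained sketch of the idea, using a simplified stand-in struct rather than the real sk_buff:

    #include <stdio.h>

    struct skb {    /* stand-in for the three sk_buff length fields */
            unsigned int len, data_len, truesize;
    };

    static void skb_len_add(struct skb *skb, int delta)
    {
            skb->len      += delta;
            skb->data_len += delta;
            skb->truesize += delta;
    }

    int main(void)
    {
            struct skb src = { 100, 60, 512 }, tgt = { 80, 40, 256 };
            int shiftlen = 20;

            skb_len_add(&src, -shiftlen);   /* negative delta shrinks */
            skb_len_add(&tgt, shiftlen);
            printf("src.len=%u tgt.len=%u\n", src.len, tgt.len);
            return 0;
    }
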
index 4b297d6..266d3b7 100644 (file)
@@ -702,6 +702,11 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 
        write_lock_bh(&sk->sk_callback_lock);
 
+       if (sk_is_inet(sk) && inet_csk_has_ulp(sk)) {
+               psock = ERR_PTR(-EINVAL);
+               goto out;
+       }
+
        if (sk->sk_user_data) {
                psock = ERR_PTR(-EBUSY);
                goto out;
index 92a0296..4cb957d 100644 (file)
@@ -2870,6 +2870,7 @@ void __sk_flush_backlog(struct sock *sk)
        __release_sock(sk);
        spin_unlock_bh(&sk->sk_lock.slock);
 }
+EXPORT_SYMBOL_GPL(__sk_flush_backlog);
 
 /**
  * sk_wait_data - wait for data to arrive at sk_receive_queue
index fbd98ac..7c569bc 100644 (file)
@@ -94,6 +94,7 @@ struct neigh_table dn_neigh_table = {
                        [NEIGH_VAR_RETRANS_TIME] = 1 * HZ,
                        [NEIGH_VAR_BASE_REACHABLE_TIME] = 30 * HZ,
                        [NEIGH_VAR_DELAY_PROBE_TIME] = 5 * HZ,
+                       [NEIGH_VAR_INTERVAL_PROBE_TIME_MS] = 5 * HZ,
                        [NEIGH_VAR_GC_STALETIME] = 60 * HZ,
                        [NEIGH_VAR_QUEUE_LEN_BYTES] = SK_WMEM_MAX,
                        [NEIGH_VAR_PROXY_QLEN] = 0,
index 8cb87b5..3eef72c 100644 (file)
@@ -87,10 +87,10 @@ config NET_DSA_TAG_MTK
          Mediatek switches.
 
 config NET_DSA_TAG_KSZ
-       tristate "Tag driver for Microchip 8795/9477/9893 families of switches"
+       tristate "Tag driver for Microchip 8795/937x/9477/9893 families of switches"
        help
          Say Y if you want to enable support for tagging frames for the
-         Microchip 8795/9477/9893 families of switches.
+         Microchip 8795/937x/9477/9893 families of switches.
 
 config NET_DSA_TAG_OCELOT
        tristate "Tag driver for Ocelot family of switches, using NPI port"
@@ -132,6 +132,13 @@ config NET_DSA_TAG_RTL8_4
          Say Y or M if you want to enable support for tagging frames for Realtek
          switches with 8 byte protocol 4 tags, such as the Realtek RTL8365MB-VC.
 
+config NET_DSA_TAG_RZN1_A5PSW
+       tristate "Tag driver for Renesas RZ/N1 A5PSW switch"
+       help
+         Say Y or M if you want to enable support for tagging frames for
+         Renesas RZ/N1 embedded switch that uses an 8 byte tag located after
+         destination MAC address.
+
 config NET_DSA_TAG_LAN9303
        tristate "Tag driver for SMSC/Microchip LAN9303 family of switches"
        help
index 9f75820..af28c24 100644 (file)
@@ -17,6 +17,7 @@ obj-$(CONFIG_NET_DSA_TAG_OCELOT_8021Q) += tag_ocelot_8021q.o
 obj-$(CONFIG_NET_DSA_TAG_QCA) += tag_qca.o
 obj-$(CONFIG_NET_DSA_TAG_RTL4_A) += tag_rtl4_a.o
 obj-$(CONFIG_NET_DSA_TAG_RTL8_4) += tag_rtl8_4.o
+obj-$(CONFIG_NET_DSA_TAG_RZN1_A5PSW) += tag_rzn1_a5psw.o
 obj-$(CONFIG_NET_DSA_TAG_SJA1105) += tag_sja1105.o
 obj-$(CONFIG_NET_DSA_TAG_TRAILER) += tag_trailer.o
 obj-$(CONFIG_NET_DSA_TAG_XRS700X) += tag_xrs700x.o
index 2e1ac63..ad6a666 100644 (file)
@@ -1002,6 +1002,18 @@ dsa_slave_get_eth_ctrl_stats(struct net_device *dev,
                ds->ops->get_eth_ctrl_stats(ds, dp->index, ctrl_stats);
 }
 
+static void
+dsa_slave_get_rmon_stats(struct net_device *dev,
+                        struct ethtool_rmon_stats *rmon_stats,
+                        const struct ethtool_rmon_hist_range **ranges)
+{
+       struct dsa_port *dp = dsa_slave_to_port(dev);
+       struct dsa_switch *ds = dp->ds;
+
+       if (ds->ops->get_rmon_stats)
+               ds->ops->get_rmon_stats(ds, dp->index, rmon_stats, ranges);
+}
+
 static void dsa_slave_net_selftest(struct net_device *ndev,
                                   struct ethtool_test *etest, u64 *buf)
 {
@@ -1097,6 +1109,16 @@ static int dsa_slave_set_link_ksettings(struct net_device *dev,
        return phylink_ethtool_ksettings_set(dp->pl, cmd);
 }
 
+static void dsa_slave_get_pause_stats(struct net_device *dev,
+                                 struct ethtool_pause_stats *pause_stats)
+{
+       struct dsa_port *dp = dsa_slave_to_port(dev);
+       struct dsa_switch *ds = dp->ds;
+
+       if (ds->ops->get_pause_stats)
+               ds->ops->get_pause_stats(ds, dp->index, pause_stats);
+}
+
 static void dsa_slave_get_pauseparam(struct net_device *dev,
                                     struct ethtool_pauseparam *pause)
 {
@@ -2081,12 +2103,14 @@ static const struct ethtool_ops dsa_slave_ethtool_ops = {
        .get_eth_phy_stats      = dsa_slave_get_eth_phy_stats,
        .get_eth_mac_stats      = dsa_slave_get_eth_mac_stats,
        .get_eth_ctrl_stats     = dsa_slave_get_eth_ctrl_stats,
+       .get_rmon_stats         = dsa_slave_get_rmon_stats,
        .set_wol                = dsa_slave_set_wol,
        .get_wol                = dsa_slave_get_wol,
        .set_eee                = dsa_slave_set_eee,
        .get_eee                = dsa_slave_get_eee,
        .get_link_ksettings     = dsa_slave_get_link_ksettings,
        .set_link_ksettings     = dsa_slave_set_link_ksettings,
+       .get_pause_stats        = dsa_slave_get_pause_stats,
        .get_pauseparam         = dsa_slave_get_pauseparam,
        .set_pauseparam         = dsa_slave_set_pauseparam,
        .get_rxnfc              = dsa_slave_get_rxnfc,
@@ -2460,8 +2484,9 @@ static int dsa_slave_changeupper(struct net_device *dev,
                        if (!err)
                                dsa_bridge_mtu_normalization(dp);
                        if (err == -EOPNOTSUPP) {
-                               NL_SET_ERR_MSG_MOD(extack,
-                                                  "Offloading not supported");
+                               if (!extack->_msg)
+                                       NL_SET_ERR_MSG_MOD(extack,
+                                                          "Offloading not supported");
                                err = 0;
                        }
                        err = notifier_from_errno(err);
index 3509fc9..38fa19c 100644 (file)
@@ -193,10 +193,69 @@ static const struct dsa_device_ops ksz9893_netdev_ops = {
 DSA_TAG_DRIVER(ksz9893_netdev_ops);
 MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_KSZ9893);
 
+/* For xmit, 2 bytes are added before FCS.
+ * ---------------------------------------------------------------------------
+ * DA(6bytes)|SA(6bytes)|....|Data(nbytes)|tag0(1byte)|tag1(1byte)|FCS(4bytes)
+ * ---------------------------------------------------------------------------
+ * tag0 : represents tag override, lookup and valid
+ * tag1 : each bit represents port (e.g., 0x01=port1, 0x02=port2, 0x80=port8)
+ *
+ * For rcv, 1 byte is added before FCS.
+ * ---------------------------------------------------------------------------
+ * DA(6bytes)|SA(6bytes)|....|Data(nbytes)|tag0(1byte)|FCS(4bytes)
+ * ---------------------------------------------------------------------------
+ * tag0 : zero-based value represents port
+ *       (e.g., 0x00=port1, 0x02=port3, 0x07=port8)
+ */
+#define LAN937X_EGRESS_TAG_LEN         2
+
+#define LAN937X_TAIL_TAG_BLOCKING_OVERRIDE     BIT(11)
+#define LAN937X_TAIL_TAG_LOOKUP                        BIT(12)
+#define LAN937X_TAIL_TAG_VALID                 BIT(13)
+#define LAN937X_TAIL_TAG_PORT_MASK             7
+
+static struct sk_buff *lan937x_xmit(struct sk_buff *skb,
+                                   struct net_device *dev)
+{
+       struct dsa_port *dp = dsa_slave_to_port(dev);
+       const struct ethhdr *hdr = eth_hdr(skb);
+       __be16 *tag;
+       u16 val;
+
+       if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb))
+               return NULL;
+
+       tag = skb_put(skb, LAN937X_EGRESS_TAG_LEN);
+
+       val = BIT(dp->index);
+
+       if (is_link_local_ether_addr(hdr->h_dest))
+               val |= LAN937X_TAIL_TAG_BLOCKING_OVERRIDE;
+
+       /* Tail tag valid bit - This bit should always be set by the CPU */
+       val |= LAN937X_TAIL_TAG_VALID;
+
+       put_unaligned_be16(val, tag);
+
+       return skb;
+}
+
+static const struct dsa_device_ops lan937x_netdev_ops = {
+       .name   = "lan937x",
+       .proto  = DSA_TAG_PROTO_LAN937X,
+       .xmit   = lan937x_xmit,
+       .rcv    = ksz9477_rcv,
+       .needed_tailroom = LAN937X_EGRESS_TAG_LEN,
+};
+
+DSA_TAG_DRIVER(lan937x_netdev_ops);
+MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_LAN937X);
+
 static struct dsa_tag_driver *dsa_tag_driver_array[] = {
        &DSA_TAG_DRIVER_NAME(ksz8795_netdev_ops),
        &DSA_TAG_DRIVER_NAME(ksz9477_netdev_ops),
        &DSA_TAG_DRIVER_NAME(ksz9893_netdev_ops),
+       &DSA_TAG_DRIVER_NAME(lan937x_netdev_ops),
 };
 
 module_dsa_tag_drivers(dsa_tag_driver_array);
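
Per the layout comment above, tag1 is a one-hot port bitmap and tag0 carries the override/lookup/valid control bits, written big-endian just before the FCS. A userspace sketch of the xmit-side encoding, reusing the bit positions from the defines above (illustration only):

    #include <stdint.h>
    #include <stdio.h>

    #define TAIL_TAG_BLOCKING_OVERRIDE  (1u << 11)
    #define TAIL_TAG_VALID              (1u << 13)

    int main(void)
    {
            int port = 2;                               /* zero-based port index */
            uint16_t val = (1u << port) | TAIL_TAG_VALID;
            uint8_t tag[2] = { val >> 8, val & 0xff };  /* put_unaligned_be16() */

            printf("tag0=0x%02x tag1=0x%02x\n", tag[0], tag[1]); /* 0x20 0x04 */
            return 0;
    }
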
diff --git a/net/dsa/tag_rzn1_a5psw.c b/net/dsa/tag_rzn1_a5psw.c
new file mode 100644 (file)
index 0000000..e2a5ee6
--- /dev/null
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Schneider Electric
+ *
+ * Clément Léger <clement.leger@bootlin.com>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
+#include <net/dsa.h>
+
+#include "dsa_priv.h"
+
+/* To define the outgoing port and to discover the incoming port a TAG is
+ * inserted after Src MAC :
+ *
+ *       Dest MAC       Src MAC           TAG         Type
+ * ...| 1 2 3 4 5 6 | 1 2 3 4 5 6 | 1 2 3 4 5 6 7 8 | 1 2 |...
+ *                                |<--------------->|
+ *
+ * See struct a5psw_tag for layout
+ */
+
+#define ETH_P_DSA_A5PSW                        0xE001
+#define A5PSW_TAG_LEN                  8
+#define A5PSW_CTRL_DATA_FORCE_FORWARD  BIT(0)
+/* This is both used for xmit tag and rcv tagging */
+#define A5PSW_CTRL_DATA_PORT           GENMASK(3, 0)
+
+struct a5psw_tag {
+       __be16 ctrl_tag;
+       __be16 ctrl_data;
+       __be16 ctrl_data2_hi;
+       __be16 ctrl_data2_lo;
+};
+
+static struct sk_buff *a5psw_tag_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+       struct dsa_port *dp = dsa_slave_to_port(dev);
+       struct a5psw_tag *ptag;
+       u32 data2_val;
+
+       BUILD_BUG_ON(sizeof(*ptag) != A5PSW_TAG_LEN);
+
+       /* The Ethernet switch we are interfaced with needs packets to be at
+        * least 60 bytes otherwise they will be discarded when they enter the
+        * switch port logic.
+        */
+       if (__skb_put_padto(skb, ETH_ZLEN, false))
+               return NULL;
+
+       /* provide 'A5PSW_TAG_LEN' bytes additional space */
+       skb_push(skb, A5PSW_TAG_LEN);
+
+       /* make room between MACs and Ether-Type to insert tag */
+       dsa_alloc_etype_header(skb, A5PSW_TAG_LEN);
+
+       ptag = dsa_etype_header_pos_tx(skb);
+
+       data2_val = FIELD_PREP(A5PSW_CTRL_DATA_PORT, BIT(dp->index));
+       ptag->ctrl_tag = htons(ETH_P_DSA_A5PSW);
+       ptag->ctrl_data = htons(A5PSW_CTRL_DATA_FORCE_FORWARD);
+       ptag->ctrl_data2_lo = htons(data2_val);
+       ptag->ctrl_data2_hi = 0;
+
+       return skb;
+}
+
+static struct sk_buff *a5psw_tag_rcv(struct sk_buff *skb,
+                                    struct net_device *dev)
+{
+       struct a5psw_tag *tag;
+       int port;
+
+       if (unlikely(!pskb_may_pull(skb, A5PSW_TAG_LEN))) {
+               dev_warn_ratelimited(&dev->dev,
+                                    "Dropping packet, cannot pull\n");
+               return NULL;
+       }
+
+       tag = dsa_etype_header_pos_rx(skb);
+
+       if (tag->ctrl_tag != htons(ETH_P_DSA_A5PSW)) {
+               dev_warn_ratelimited(&dev->dev, "Dropping packet due to invalid TAG marker\n");
+               return NULL;
+       }
+
+       port = FIELD_GET(A5PSW_CTRL_DATA_PORT, ntohs(tag->ctrl_data));
+
+       skb->dev = dsa_master_find_slave(dev, 0, port);
+       if (!skb->dev)
+               return NULL;
+
+       skb_pull_rcsum(skb, A5PSW_TAG_LEN);
+       dsa_strip_etype_header(skb, A5PSW_TAG_LEN);
+
+       dsa_default_offload_fwd_mark(skb);
+
+       return skb;
+}
+
+static const struct dsa_device_ops a5psw_netdev_ops = {
+       .name   = "a5psw",
+       .proto  = DSA_TAG_PROTO_RZN1_A5PSW,
+       .xmit   = a5psw_tag_xmit,
+       .rcv    = a5psw_tag_rcv,
+       .needed_headroom = A5PSW_TAG_LEN,
+};
+
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_A5PSW);
+module_dsa_tag_driver(a5psw_netdev_ops);
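
On receive, a5psw_tag_rcv() checks the 0xE001 marker and pulls the ingress port from the low nibble of ctrl_data. A hedged userspace sketch of that decode, with the byte swapping done by hand instead of the kernel's FIELD_GET()/ntohs() helpers:

    #include <stdint.h>
    #include <stdio.h>

    #define DSA_A5PSW_ETHERTYPE  0xE001
    #define CTRL_DATA_PORT_MASK  0x000f   /* GENMASK(3, 0) */

    int main(void)
    {
            /* 8-byte tag as seen on the wire (big-endian fields) */
            uint8_t tag[8] = { 0xE0, 0x01, 0x00, 0x02, 0, 0, 0, 0 };
            uint16_t ctrl_tag  = (tag[0] << 8) | tag[1];
            uint16_t ctrl_data = (tag[2] << 8) | tag[3];

            if (ctrl_tag != DSA_A5PSW_ETHERTYPE) {
                    puts("invalid TAG marker, drop");
                    return 1;
            }
            printf("ingress port = %u\n", ctrl_data & CTRL_DATA_PORT_MASK);
            return 0;
    }
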
index 7e6b37a..1c94bb8 100644 (file)
@@ -36,7 +36,7 @@ static int fallback_set_params(struct eeprom_req_info *request,
        if (request->page)
                offset = request->page * ETH_MODULE_EEPROM_PAGE_LEN + offset;
 
-       if (modinfo->type == ETH_MODULE_SFF_8079 &&
+       if (modinfo->type == ETH_MODULE_SFF_8472 &&
            request->i2c_address == 0x51)
                offset += ETH_MODULE_EEPROM_PAGE_LEN * 2;
 
index ab4a560..af2f12f 100644 (file)
@@ -168,6 +168,7 @@ struct neigh_table arp_tbl = {
                        [NEIGH_VAR_RETRANS_TIME] = 1 * HZ,
                        [NEIGH_VAR_BASE_REACHABLE_TIME] = 30 * HZ,
                        [NEIGH_VAR_DELAY_PROBE_TIME] = 5 * HZ,
+                       [NEIGH_VAR_INTERVAL_PROBE_TIME_MS] = 5 * HZ,
                        [NEIGH_VAR_GC_STALETIME] = 60 * HZ,
                        [NEIGH_VAR_QUEUE_LEN_BYTES] = SK_WMEM_MAX,
                        [NEIGH_VAR_PROXY_QLEN] = 64,
index b21238d..7eae8d6 100644 (file)
@@ -502,9 +502,7 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
 
                        nfrags++;
 
-                       skb->len += tailen;
-                       skb->data_len += tailen;
-                       skb->truesize += tailen;
+                       skb_len_add(skb, tailen);
                        if (sk && sk_fullsock(sk))
                                refcount_add(tailen, &sk->sk_wmem_alloc);
 
index 3b9cd48..5c58e21 100644 (file)
@@ -524,7 +524,6 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
        int tunnel_hlen;
        int version;
        int nhoff;
-       int thoff;
 
        tun_info = skb_tunnel_info(skb);
        if (unlikely(!tun_info || !(tun_info->mode & IP_TUNNEL_INFO_TX) ||
@@ -558,10 +557,16 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
            (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
                truncate = true;
 
-       thoff = skb_transport_header(skb) - skb_mac_header(skb);
-       if (skb->protocol == htons(ETH_P_IPV6) &&
-           (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff))
-               truncate = true;
+       if (skb->protocol == htons(ETH_P_IPV6)) {
+               int thoff;
+
+               if (skb_transport_header_was_set(skb))
+                       thoff = skb_transport_header(skb) - skb_mac_header(skb);
+               else
+                       thoff = nhoff + sizeof(struct ipv6hdr);
+               if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
+                       truncate = true;
+       }
 
        if (version == 1) {
                erspan_build_header(skb, ntohl(tunnel_id_to_key32(key->tun_id)),
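
This guard (repeated for ip6_gre.c later in the series) avoids reading an unset transport header: when it was never set, the offset falls back to the network offset plus the fixed IPv6 header size. A tiny sketch of that fallback, with the header length assumed:

    #include <stdbool.h>
    #include <stdio.h>

    #define IPV6_HDR_LEN 40   /* sizeof(struct ipv6hdr) */

    static int thoff(bool transport_set, int set_off, int nhoff)
    {
            return transport_set ? set_off : nhoff + IPV6_HDR_LEN;
    }

    int main(void)
    {
            /* untagged Ethernet frame: network header starts at offset 14 */
            printf("fallback thoff = %d\n", thoff(false, 0, 14));
            return 0;
    }
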
index 00b4bf2..5e32a2f 100644 (file)
@@ -1214,9 +1214,7 @@ alloc_new_skb:
 
                        pfrag->offset += copy;
                        skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
-                       skb->len += copy;
-                       skb->data_len += copy;
-                       skb->truesize += copy;
+                       skb_len_add(skb, copy);
                        wmem_alloc_delta += copy;
                } else {
                        err = skb_zerocopy_iter_dgram(skb, from, copy);
@@ -1443,9 +1441,7 @@ ssize_t   ip_append_page(struct sock *sk, struct flowi4 *fl4, struct page *page,
                        skb->csum = csum_block_add(skb->csum, csum, skb->len);
                }
 
-               skb->len += len;
-               skb->data_len += len;
-               skb->truesize += len;
+               skb_len_add(skb, len);
                refcount_add(len, &sk->sk_wmem_alloc);
                offset += len;
                size -= len;
index 6b2dc7b..cc1caab 100644 (file)
@@ -410,7 +410,7 @@ int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst,
        u32 mtu = dst_mtu(encap_dst) - headroom;
 
        if ((skb_is_gso(skb) && skb_gso_validate_network_len(skb, mtu)) ||
-           (!skb_is_gso(skb) && (skb->len - skb_mac_header_len(skb)) <= mtu))
+           (!skb_is_gso(skb) && (skb->len - skb_network_offset(skb)) <= mtu))
                return 0;
 
        skb_dst_update_pmtu_no_confirm(skb, mtu);
index 9d41d5d..f53a0f2 100644 (file)
@@ -1759,15 +1759,15 @@ static int __init ip_auto_config_setup(char *addrs)
                        case 4:
                                if ((dp = strchr(ip, '.'))) {
                                        *dp++ = '\0';
-                                       strlcpy(utsname()->domainname, dp,
+                                       strscpy(utsname()->domainname, dp,
                                                sizeof(utsname()->domainname));
                                }
-                               strlcpy(utsname()->nodename, ip,
+                               strscpy(utsname()->nodename, ip,
                                        sizeof(utsname()->nodename));
                                ic_host_name_set = 1;
                                break;
                        case 5:
-                               strlcpy(user_dev_name, ip, sizeof(user_dev_name));
+                               strscpy(user_dev_name, ip, sizeof(user_dev_name));
                                break;
                        case 6:
                                if (ic_proto_name(ip) == 0 &&
@@ -1814,7 +1814,7 @@ __setup("nfsaddrs=", nfsaddrs_config_setup);
 
 static int __init vendor_class_identifier_setup(char *addrs)
 {
-       if (strlcpy(vendor_class_identifier, addrs,
+       if (strscpy(vendor_class_identifier, addrs,
                    sizeof(vendor_class_identifier))
            >= sizeof(vendor_class_identifier))
                pr_warn("DHCP: vendorclass too long, truncated to \"%s\"\n",
index 8324e54..73651d1 100644 (file)
@@ -77,7 +77,12 @@ struct ipmr_result {
  * Note that the changes are semaphored via rtnl_lock.
  */
 
-static DEFINE_RWLOCK(mrt_lock);
+static DEFINE_SPINLOCK(mrt_lock);
+
+static struct net_device *vif_dev_read(const struct vif_device *vif)
+{
+       return rcu_dereference(vif->dev);
+}
 
 /* Multicast router control variables */
 
@@ -100,11 +105,11 @@ static void ipmr_free_table(struct mr_table *mrt);
 static void ip_mr_forward(struct net *net, struct mr_table *mrt,
                          struct net_device *dev, struct sk_buff *skb,
                          struct mfc_cache *cache, int local);
-static int ipmr_cache_report(struct mr_table *mrt,
+static int ipmr_cache_report(const struct mr_table *mrt,
                             struct sk_buff *pkt, vifi_t vifi, int assert);
 static void mroute_netlink_event(struct mr_table *mrt, struct mfc_cache *mfc,
                                 int cmd);
-static void igmpmsg_netlink_event(struct mr_table *mrt, struct sk_buff *pkt);
+static void igmpmsg_netlink_event(const struct mr_table *mrt, struct sk_buff *pkt);
 static void mroute_clean_tables(struct mr_table *mrt, int flags);
 static void ipmr_expire_process(struct timer_list *t);
 
@@ -501,11 +506,15 @@ static netdev_tx_t reg_vif_xmit(struct sk_buff *skb, struct net_device *dev)
                return err;
        }
 
-       read_lock(&mrt_lock);
        dev->stats.tx_bytes += skb->len;
        dev->stats.tx_packets++;
-       ipmr_cache_report(mrt, skb, mrt->mroute_reg_vif_num, IGMPMSG_WHOLEPKT);
-       read_unlock(&mrt_lock);
+       rcu_read_lock();
+
+       /* Pairs with WRITE_ONCE() in vif_add() and vif_delete() */
+       ipmr_cache_report(mrt, skb, READ_ONCE(mrt->mroute_reg_vif_num),
+                         IGMPMSG_WHOLEPKT);
+
+       rcu_read_unlock();
        kfree_skb(skb);
        return NETDEV_TX_OK;
 }
@@ -572,6 +581,7 @@ static int __pim_rcv(struct mr_table *mrt, struct sk_buff *skb,
 {
        struct net_device *reg_dev = NULL;
        struct iphdr *encap;
+       int vif_num;
 
        encap = (struct iphdr *)(skb_transport_header(skb) + pimlen);
        /* Check that:
@@ -584,11 +594,10 @@ static int __pim_rcv(struct mr_table *mrt, struct sk_buff *skb,
            ntohs(encap->tot_len) + pimlen > skb->len)
                return 1;
 
-       read_lock(&mrt_lock);
-       if (mrt->mroute_reg_vif_num >= 0)
-               reg_dev = mrt->vif_table[mrt->mroute_reg_vif_num].dev;
-       read_unlock(&mrt_lock);
-
+       /* Pairs with WRITE_ONCE() in vif_add()/vif_delete() */
+       vif_num = READ_ONCE(mrt->mroute_reg_vif_num);
+       if (vif_num >= 0)
+               reg_dev = vif_dev_read(&mrt->vif_table[vif_num]);
        if (!reg_dev)
                return 1;
 
@@ -614,10 +623,11 @@ static struct net_device *ipmr_reg_vif(struct net *net, struct mr_table *mrt)
 static int call_ipmr_vif_entry_notifiers(struct net *net,
                                         enum fib_event_type event_type,
                                         struct vif_device *vif,
+                                        struct net_device *vif_dev,
                                         vifi_t vif_index, u32 tb_id)
 {
        return mr_call_vif_notifiers(net, RTNL_FAMILY_IPMR, event_type,
-                                    vif, vif_index, tb_id,
+                                    vif, vif_dev, vif_index, tb_id,
                                     &net->ipv4.ipmr_seq);
 }
 
@@ -649,22 +659,19 @@ static int vif_delete(struct mr_table *mrt, int vifi, int notify,
 
        v = &mrt->vif_table[vifi];
 
-       if (VIF_EXISTS(mrt, vifi))
-               call_ipmr_vif_entry_notifiers(net, FIB_EVENT_VIF_DEL, v, vifi,
-                                             mrt->id);
-
-       write_lock_bh(&mrt_lock);
-       dev = v->dev;
-       v->dev = NULL;
-
-       if (!dev) {
-               write_unlock_bh(&mrt_lock);
+       dev = rtnl_dereference(v->dev);
+       if (!dev)
                return -EADDRNOTAVAIL;
-       }
 
-       if (vifi == mrt->mroute_reg_vif_num)
-               mrt->mroute_reg_vif_num = -1;
+       spin_lock(&mrt_lock);
+       call_ipmr_vif_entry_notifiers(net, FIB_EVENT_VIF_DEL, v, dev,
+                                     vifi, mrt->id);
+       RCU_INIT_POINTER(v->dev, NULL);
 
+       if (vifi == mrt->mroute_reg_vif_num) {
+               /* Pairs with READ_ONCE() in ipmr_cache_report() and reg_vif_xmit() */
+               WRITE_ONCE(mrt->mroute_reg_vif_num, -1);
+       }
        if (vifi + 1 == mrt->maxvif) {
                int tmp;
 
@@ -672,10 +679,10 @@ static int vif_delete(struct mr_table *mrt, int vifi, int notify,
                        if (VIF_EXISTS(mrt, tmp))
                                break;
                }
-               mrt->maxvif = tmp+1;
+               WRITE_ONCE(mrt->maxvif, tmp + 1);
        }
 
-       write_unlock_bh(&mrt_lock);
+       spin_unlock(&mrt_lock);
 
        dev_set_allmulti(dev, -1);
 
@@ -777,7 +784,7 @@ out:
        spin_unlock(&mfc_unres_lock);
 }
 
-/* Fill oifs list. It is called under write locked mrt_lock. */
+/* Fill oifs list. It is called with mrt_lock held. */
 static void ipmr_update_thresholds(struct mr_table *mrt, struct mr_mfc *cache,
                                   unsigned char *ttls)
 {
@@ -889,15 +896,18 @@ static int vif_add(struct net *net, struct mr_table *mrt,
        v->remote = vifc->vifc_rmt_addr.s_addr;
 
        /* And finish update writing critical data */
-       write_lock_bh(&mrt_lock);
-       v->dev = dev;
+       spin_lock(&mrt_lock);
+       rcu_assign_pointer(v->dev, dev);
        netdev_tracker_alloc(dev, &v->dev_tracker, GFP_ATOMIC);
-       if (v->flags & VIFF_REGISTER)
-               mrt->mroute_reg_vif_num = vifi;
+       if (v->flags & VIFF_REGISTER) {
+               /* Pairs with READ_ONCE() in ipmr_cache_report() and reg_vif_xmit() */
+               WRITE_ONCE(mrt->mroute_reg_vif_num, vifi);
+       }
        if (vifi+1 > mrt->maxvif)
-               mrt->maxvif = vifi+1;
-       write_unlock_bh(&mrt_lock);
-       call_ipmr_vif_entry_notifiers(net, FIB_EVENT_VIF_ADD, v, vifi, mrt->id);
+               WRITE_ONCE(mrt->maxvif, vifi + 1);
+       spin_unlock(&mrt_lock);
+       call_ipmr_vif_entry_notifiers(net, FIB_EVENT_VIF_ADD, v, dev,
+                                     vifi, mrt->id);
        return 0;
 }
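
Across this series the vif table moves from a reader-writer lock to a writer-side spinlock plus RCU readers; fields like mroute_reg_vif_num and maxvif are published with WRITE_ONCE() and consumed with READ_ONCE() so lockless readers never see a torn value. A rough userspace analogue using relaxed C11 atomics (the kernel macros are volatile accesses rather than C11 atomics; sketch only):

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic int reg_vif_num = -1;   /* stands in for mrt->mroute_reg_vif_num */

    static void publish_vif(int vifi)      /* writer side, lock held */
    {
            atomic_store_explicit(&reg_vif_num, vifi, memory_order_relaxed);
    }

    static int read_vif(void)              /* lockless reader side */
    {
            return atomic_load_explicit(&reg_vif_num, memory_order_relaxed);
    }

    int main(void)
    {
            publish_vif(3);
            printf("reg vif = %d\n", read_vif());
            return 0;
    }
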
 
@@ -1001,9 +1011,9 @@ static void ipmr_cache_resolve(struct net *net, struct mr_table *mrt,
 
 /* Bounce a cache query up to mrouted and netlink.
  *
- * Called under mrt_lock.
+ * Called under rcu_read_lock().
  */
-static int ipmr_cache_report(struct mr_table *mrt,
+static int ipmr_cache_report(const struct mr_table *mrt,
                             struct sk_buff *pkt, vifi_t vifi, int assert)
 {
        const int ihl = ip_hdrlen(pkt);
@@ -1038,8 +1048,11 @@ static int ipmr_cache_report(struct mr_table *mrt,
                        msg->im_vif = vifi;
                        msg->im_vif_hi = vifi >> 8;
                } else {
-                       msg->im_vif = mrt->mroute_reg_vif_num;
-                       msg->im_vif_hi = mrt->mroute_reg_vif_num >> 8;
+                       /* Pairs with WRITE_ONCE() in vif_add() and vif_delete() */
+                       int vif_num = READ_ONCE(mrt->mroute_reg_vif_num);
+
+                       msg->im_vif = vif_num;
+                       msg->im_vif_hi = vif_num >> 8;
                }
                ip_hdr(skb)->ihl = sizeof(struct iphdr) >> 2;
                ip_hdr(skb)->tot_len = htons(ntohs(ip_hdr(pkt)->tot_len) +
@@ -1064,10 +1077,8 @@ static int ipmr_cache_report(struct mr_table *mrt,
                skb->transport_header = skb->network_header;
        }
 
-       rcu_read_lock();
        mroute_sk = rcu_dereference(mrt->mroute_sk);
        if (!mroute_sk) {
-               rcu_read_unlock();
                kfree_skb(skb);
                return -EINVAL;
        }
@@ -1076,7 +1087,7 @@ static int ipmr_cache_report(struct mr_table *mrt,
 
        /* Deliver to mrouted */
        ret = sock_queue_rcv_skb(mroute_sk, skb);
-       rcu_read_unlock();
+
        if (ret < 0) {
                net_warn_ratelimited("mroute: pending queue full, dropping entries\n");
                kfree_skb(skb);
@@ -1086,6 +1097,7 @@ static int ipmr_cache_report(struct mr_table *mrt,
 }
 
 /* Queue a packet for resolution. It gets a locked cache entry! */
+/* Called under rcu_read_lock() */
 static int ipmr_cache_unresolved(struct mr_table *mrt, vifi_t vifi,
                                 struct sk_buff *skb, struct net_device *dev)
 {
@@ -1198,12 +1210,12 @@ static int ipmr_mfc_add(struct net *net, struct mr_table *mrt,
                                   mfc->mfcc_mcastgrp.s_addr, parent);
        rcu_read_unlock();
        if (c) {
-               write_lock_bh(&mrt_lock);
+               spin_lock(&mrt_lock);
                c->_c.mfc_parent = mfc->mfcc_parent;
                ipmr_update_thresholds(mrt, &c->_c, mfc->mfcc_ttls);
                if (!mrtsock)
                        c->_c.mfc_flags |= MFC_STATIC;
-               write_unlock_bh(&mrt_lock);
+               spin_unlock(&mrt_lock);
                call_ipmr_mfc_entry_notifiers(net, FIB_EVENT_ENTRY_REPLACE, c,
                                              mrt->id);
                mroute_netlink_event(mrt, c, RTM_NEWROUTE);
@@ -1598,20 +1610,20 @@ int ipmr_ioctl(struct sock *sk, int cmd, void __user *arg)
                if (vr.vifi >= mrt->maxvif)
                        return -EINVAL;
                vr.vifi = array_index_nospec(vr.vifi, mrt->maxvif);
-               read_lock(&mrt_lock);
+               rcu_read_lock();
                vif = &mrt->vif_table[vr.vifi];
                if (VIF_EXISTS(mrt, vr.vifi)) {
-                       vr.icount = vif->pkt_in;
-                       vr.ocount = vif->pkt_out;
-                       vr.ibytes = vif->bytes_in;
-                       vr.obytes = vif->bytes_out;
-                       read_unlock(&mrt_lock);
+                       vr.icount = READ_ONCE(vif->pkt_in);
+                       vr.ocount = READ_ONCE(vif->pkt_out);
+                       vr.ibytes = READ_ONCE(vif->bytes_in);
+                       vr.obytes = READ_ONCE(vif->bytes_out);
+                       rcu_read_unlock();
 
                        if (copy_to_user(arg, &vr, sizeof(vr)))
                                return -EFAULT;
                        return 0;
                }
-               read_unlock(&mrt_lock);
+               rcu_read_unlock();
                return -EADDRNOTAVAIL;
        case SIOCGETSGCNT:
                if (copy_from_user(&sr, arg, sizeof(sr)))
@@ -1673,20 +1685,20 @@ int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
                if (vr.vifi >= mrt->maxvif)
                        return -EINVAL;
                vr.vifi = array_index_nospec(vr.vifi, mrt->maxvif);
-               read_lock(&mrt_lock);
+               rcu_read_lock();
                vif = &mrt->vif_table[vr.vifi];
                if (VIF_EXISTS(mrt, vr.vifi)) {
-                       vr.icount = vif->pkt_in;
-                       vr.ocount = vif->pkt_out;
-                       vr.ibytes = vif->bytes_in;
-                       vr.obytes = vif->bytes_out;
-                       read_unlock(&mrt_lock);
+                       vr.icount = READ_ONCE(vif->pkt_in);
+                       vr.ocount = READ_ONCE(vif->pkt_out);
+                       vr.ibytes = READ_ONCE(vif->bytes_in);
+                       vr.obytes = READ_ONCE(vif->bytes_out);
+                       rcu_read_unlock();
 
                        if (copy_to_user(arg, &vr, sizeof(vr)))
                                return -EFAULT;
                        return 0;
                }
-               read_unlock(&mrt_lock);
+               rcu_read_unlock();
                return -EADDRNOTAVAIL;
        case SIOCGETSGCNT:
                if (copy_from_user(&sr, arg, sizeof(sr)))
@@ -1726,7 +1738,7 @@ static int ipmr_device_event(struct notifier_block *this, unsigned long event, v
        ipmr_for_each_table(mrt, net) {
                v = &mrt->vif_table[0];
                for (ct = 0; ct < mrt->maxvif; ct++, v++) {
-                       if (v->dev == dev)
+                       if (rcu_access_pointer(v->dev) == dev)
                                vif_delete(mrt, ct, 1, NULL);
                }
        }
@@ -1804,26 +1816,28 @@ static bool ipmr_forward_offloaded(struct sk_buff *skb, struct mr_table *mrt,
 }
 #endif
 
-/* Processing handlers for ipmr_forward */
+/* Processing handlers for ipmr_forward, under rcu_read_lock() */
 
 static void ipmr_queue_xmit(struct net *net, struct mr_table *mrt,
                            int in_vifi, struct sk_buff *skb, int vifi)
 {
        const struct iphdr *iph = ip_hdr(skb);
        struct vif_device *vif = &mrt->vif_table[vifi];
+       struct net_device *vif_dev;
        struct net_device *dev;
        struct rtable *rt;
        struct flowi4 fl4;
        int    encap = 0;
 
-       if (!vif->dev)
+       vif_dev = vif_dev_read(vif);
+       if (!vif_dev)
                goto out_free;
 
        if (vif->flags & VIFF_REGISTER) {
-               vif->pkt_out++;
-               vif->bytes_out += skb->len;
-               vif->dev->stats.tx_bytes += skb->len;
-               vif->dev->stats.tx_packets++;
+               WRITE_ONCE(vif->pkt_out, vif->pkt_out + 1);
+               WRITE_ONCE(vif->bytes_out, vif->bytes_out + skb->len);
+               vif_dev->stats.tx_bytes += skb->len;
+               vif_dev->stats.tx_packets++;
                ipmr_cache_report(mrt, skb, vifi, IGMPMSG_WHOLEPKT);
                goto out_free;
        }
@@ -1868,8 +1882,8 @@ static void ipmr_queue_xmit(struct net *net, struct mr_table *mrt,
                goto out_free;
        }
 
-       vif->pkt_out++;
-       vif->bytes_out += skb->len;
+       WRITE_ONCE(vif->pkt_out, vif->pkt_out + 1);
+       WRITE_ONCE(vif->bytes_out, vif->bytes_out + skb->len);
 
        skb_dst_drop(skb);
        skb_dst_set(skb, &rt->dst);
@@ -1881,8 +1895,8 @@ static void ipmr_queue_xmit(struct net *net, struct mr_table *mrt,
        if (vif->flags & VIFF_TUNNEL) {
                ip_encap(net, skb, vif->local, vif->remote);
                /* FIXME: extra output firewall step used to be here. --RR */
-               vif->dev->stats.tx_packets++;
-               vif->dev->stats.tx_bytes += skb->len;
+               vif_dev->stats.tx_packets++;
+               vif_dev->stats.tx_bytes += skb->len;
        }
 
        IPCB(skb)->flags |= IPSKB_FORWARDED;
@@ -1906,18 +1920,20 @@ out_free:
        kfree_skb(skb);
 }
 
-static int ipmr_find_vif(struct mr_table *mrt, struct net_device *dev)
+/* Called with mrt_lock or rcu_read_lock() */
+static int ipmr_find_vif(const struct mr_table *mrt, struct net_device *dev)
 {
        int ct;
-
-       for (ct = mrt->maxvif-1; ct >= 0; ct--) {
-               if (mrt->vif_table[ct].dev == dev)
+       /* Pairs with WRITE_ONCE() in vif_delete()/vif_add() */
+       for (ct = READ_ONCE(mrt->maxvif) - 1; ct >= 0; ct--) {
+               if (rcu_access_pointer(mrt->vif_table[ct].dev) == dev)
                        break;
        }
        return ct;
 }
 
 /* "local" means that we should preserve one skb (for local delivery) */
+/* Called under rcu_read_lock() */
 static void ip_mr_forward(struct net *net, struct mr_table *mrt,
                          struct net_device *dev, struct sk_buff *skb,
                          struct mfc_cache *c, int local)
@@ -1944,7 +1960,7 @@ static void ip_mr_forward(struct net *net, struct mr_table *mrt,
        }
 
        /* Wrong interface: drop packet and (maybe) send PIM assert. */
-       if (mrt->vif_table[vif].dev != dev) {
+       if (rcu_access_pointer(mrt->vif_table[vif].dev) != dev) {
                if (rt_is_output_route(skb_rtable(skb))) {
                        /* It is our own packet, looped back.
                         * Very complicated situation...
@@ -1983,8 +1999,10 @@ static void ip_mr_forward(struct net *net, struct mr_table *mrt,
        }
 
 forward:
-       mrt->vif_table[vif].pkt_in++;
-       mrt->vif_table[vif].bytes_in += skb->len;
+       WRITE_ONCE(mrt->vif_table[vif].pkt_in,
+                  mrt->vif_table[vif].pkt_in + 1);
+       WRITE_ONCE(mrt->vif_table[vif].bytes_in,
+                  mrt->vif_table[vif].bytes_in + skb->len);
 
        /* Forward the frame */
        if (c->mfc_origin == htonl(INADDR_ANY) &&
@@ -2140,22 +2158,14 @@ int ip_mr_input(struct sk_buff *skb)
                        skb = skb2;
                }
 
-               read_lock(&mrt_lock);
                vif = ipmr_find_vif(mrt, dev);
-               if (vif >= 0) {
-                       int err2 = ipmr_cache_unresolved(mrt, vif, skb, dev);
-                       read_unlock(&mrt_lock);
-
-                       return err2;
-               }
-               read_unlock(&mrt_lock);
+               if (vif >= 0)
+                       return ipmr_cache_unresolved(mrt, vif, skb, dev);
                kfree_skb(skb);
                return -ENODEV;
        }
 
-       read_lock(&mrt_lock);
        ip_mr_forward(net, mrt, dev, skb, cache, local);
-       read_unlock(&mrt_lock);
 
        if (local)
                return ip_local_deliver(skb);
@@ -2252,18 +2262,15 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb,
                int vif = -1;
 
                dev = skb->dev;
-               read_lock(&mrt_lock);
                if (dev)
                        vif = ipmr_find_vif(mrt, dev);
                if (vif < 0) {
-                       read_unlock(&mrt_lock);
                        rcu_read_unlock();
                        return -ENODEV;
                }
 
                skb2 = skb_realloc_headroom(skb, sizeof(struct iphdr));
                if (!skb2) {
-                       read_unlock(&mrt_lock);
                        rcu_read_unlock();
                        return -ENOMEM;
                }
@@ -2277,14 +2284,11 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb,
                iph->daddr = daddr;
                iph->version = 0;
                err = ipmr_cache_unresolved(mrt, vif, skb2, dev);
-               read_unlock(&mrt_lock);
                rcu_read_unlock();
                return err;
        }
 
-       read_lock(&mrt_lock);
        err = mr_fill_mroute(mrt, skb, &cache->_c, rtm);
-       read_unlock(&mrt_lock);
        rcu_read_unlock();
        return err;
 }
@@ -2404,7 +2408,7 @@ static size_t igmpmsg_netlink_msgsize(size_t payloadlen)
        return len;
 }
 
-static void igmpmsg_netlink_event(struct mr_table *mrt, struct sk_buff *pkt)
+static void igmpmsg_netlink_event(const struct mr_table *mrt, struct sk_buff *pkt)
 {
        struct net *net = read_pnet(&mrt->net);
        struct nlmsghdr *nlh;
@@ -2744,18 +2748,21 @@ static bool ipmr_fill_table(struct mr_table *mrt, struct sk_buff *skb)
 
 static bool ipmr_fill_vif(struct mr_table *mrt, u32 vifid, struct sk_buff *skb)
 {
+       struct net_device *vif_dev;
        struct nlattr *vif_nest;
        struct vif_device *vif;
 
+       vif = &mrt->vif_table[vifid];
+       vif_dev = rtnl_dereference(vif->dev);
        /* if the VIF doesn't exist just continue */
-       if (!VIF_EXISTS(mrt, vifid))
+       if (!vif_dev)
                return true;
 
-       vif = &mrt->vif_table[vifid];
        vif_nest = nla_nest_start_noflag(skb, IPMRA_VIF);
        if (!vif_nest)
                return false;
-       if (nla_put_u32(skb, IPMRA_VIFA_IFINDEX, vif->dev->ifindex) ||
+
+       if (nla_put_u32(skb, IPMRA_VIFA_IFINDEX, vif_dev->ifindex) ||
            nla_put_u32(skb, IPMRA_VIFA_VIF_ID, vifid) ||
            nla_put_u16(skb, IPMRA_VIFA_FLAGS, vif->flags) ||
            nla_put_u64_64bit(skb, IPMRA_VIFA_BYTES_IN, vif->bytes_in,
@@ -2887,7 +2894,7 @@ out:
  */
 
 static void *ipmr_vif_seq_start(struct seq_file *seq, loff_t *pos)
-       __acquires(mrt_lock)
+       __acquires(RCU)
 {
        struct mr_vif_iter *iter = seq->private;
        struct net *net = seq_file_net(seq);
@@ -2899,14 +2906,14 @@ static void *ipmr_vif_seq_start(struct seq_file *seq, loff_t *pos)
 
        iter->mrt = mrt;
 
-       read_lock(&mrt_lock);
+       rcu_read_lock();
        return mr_vif_seq_start(seq, pos);
 }
 
 static void ipmr_vif_seq_stop(struct seq_file *seq, void *v)
-       __releases(mrt_lock)
+       __releases(RCU)
 {
-       read_unlock(&mrt_lock);
+       rcu_read_unlock();
 }
 
 static int ipmr_vif_seq_show(struct seq_file *seq, void *v)
@@ -2919,9 +2926,11 @@ static int ipmr_vif_seq_show(struct seq_file *seq, void *v)
                         "Interface      BytesIn  PktsIn  BytesOut PktsOut Flags Local    Remote\n");
        } else {
                const struct vif_device *vif = v;
-               const char *name =  vif->dev ?
-                                   vif->dev->name : "none";
+               const struct net_device *vif_dev;
+               const char *name;
 
+               vif_dev = vif_dev_read(vif);
+               name = vif_dev ? vif_dev->name : "none";
                seq_printf(seq,
                           "%2td %-10s %8ld %7ld  %8ld %7ld %05X %08X %08X\n",
                           vif - mrt->vif_table,
@@ -3017,7 +3026,7 @@ static int ipmr_dump(struct net *net, struct notifier_block *nb,
                     struct netlink_ext_ack *extack)
 {
        return mr_dump(net, nb, RTNL_FAMILY_IPMR, ipmr_rules_dump,
-                      ipmr_mr_table_iter, &mrt_lock, extack);
+                      ipmr_mr_table_iter, extack);
 }
 
 static const struct fib_notifier_ops ipmr_notifier_ops_template = {
index aa8738a..271dc03 100644 (file)
@@ -13,7 +13,7 @@ void vif_device_init(struct vif_device *v,
                     unsigned short flags,
                     unsigned short get_iflink_mask)
 {
-       v->dev = NULL;
+       RCU_INIT_POINTER(v->dev, NULL);
        v->bytes_in = 0;
        v->bytes_out = 0;
        v->pkt_in = 0;
@@ -208,6 +208,7 @@ EXPORT_SYMBOL(mr_mfc_seq_next);
 int mr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,
                   struct mr_mfc *c, struct rtmsg *rtm)
 {
+       struct net_device *vif_dev;
        struct rta_mfc_stats mfcs;
        struct nlattr *mp_attr;
        struct rtnexthop *nhp;
@@ -220,10 +221,13 @@ int mr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,
                return -ENOENT;
        }
 
-       if (VIF_EXISTS(mrt, c->mfc_parent) &&
-           nla_put_u32(skb, RTA_IIF,
-                       mrt->vif_table[c->mfc_parent].dev->ifindex) < 0)
+       rcu_read_lock();
+       vif_dev = rcu_dereference(mrt->vif_table[c->mfc_parent].dev);
+       if (vif_dev && nla_put_u32(skb, RTA_IIF, vif_dev->ifindex) < 0) {
+               rcu_read_unlock();
                return -EMSGSIZE;
+       }
+       rcu_read_unlock();
 
        if (c->mfc_flags & MFC_OFFLOAD)
                rtm->rtm_flags |= RTNH_F_OFFLOAD;
@@ -232,23 +236,27 @@ int mr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,
        if (!mp_attr)
                return -EMSGSIZE;
 
+       rcu_read_lock();
        for (ct = c->mfc_un.res.minvif; ct < c->mfc_un.res.maxvif; ct++) {
-               if (VIF_EXISTS(mrt, ct) && c->mfc_un.res.ttls[ct] < 255) {
-                       struct vif_device *vif;
+               struct vif_device *vif = &mrt->vif_table[ct];
+
+               vif_dev = rcu_dereference(vif->dev);
+               if (vif_dev && c->mfc_un.res.ttls[ct] < 255) {
 
                        nhp = nla_reserve_nohdr(skb, sizeof(*nhp));
                        if (!nhp) {
+                               rcu_read_unlock();
                                nla_nest_cancel(skb, mp_attr);
                                return -EMSGSIZE;
                        }
 
                        nhp->rtnh_flags = 0;
                        nhp->rtnh_hops = c->mfc_un.res.ttls[ct];
-                       vif = &mrt->vif_table[ct];
-                       nhp->rtnh_ifindex = vif->dev->ifindex;
+                       nhp->rtnh_ifindex = vif_dev->ifindex;
                        nhp->rtnh_len = sizeof(*nhp);
                }
        }
+       rcu_read_unlock();
 
        nla_nest_end(skb, mp_attr);
 
@@ -275,13 +283,14 @@ static bool mr_mfc_uses_dev(const struct mr_table *mrt,
        int ct;
 
        for (ct = c->mfc_un.res.minvif; ct < c->mfc_un.res.maxvif; ct++) {
-               if (VIF_EXISTS(mrt, ct) && c->mfc_un.res.ttls[ct] < 255) {
-                       const struct vif_device *vif;
-
-                       vif = &mrt->vif_table[ct];
-                       if (vif->dev == dev)
-                               return true;
-               }
+               const struct net_device *vif_dev;
+               const struct vif_device *vif;
+
+               vif = &mrt->vif_table[ct];
+               vif_dev = rcu_access_pointer(vif->dev);
+               if (vif_dev && c->mfc_un.res.ttls[ct] < 255 &&
+                   vif_dev == dev)
+                       return true;
        }
        return false;
 }
@@ -390,7 +399,6 @@ int mr_dump(struct net *net, struct notifier_block *nb, unsigned short family,
                              struct netlink_ext_ack *extack),
            struct mr_table *(*mr_iter)(struct net *net,
                                        struct mr_table *mrt),
-           rwlock_t *mrt_lock,
            struct netlink_ext_ack *extack)
 {
        struct mr_table *mrt;
@@ -402,22 +410,25 @@ int mr_dump(struct net *net, struct notifier_block *nb, unsigned short family,
 
        for (mrt = mr_iter(net, NULL); mrt; mrt = mr_iter(net, mrt)) {
                struct vif_device *v = &mrt->vif_table[0];
+               struct net_device *vif_dev;
                struct mr_mfc *mfc;
                int vifi;
 
                /* Notify on table VIF entries */
-               read_lock(mrt_lock);
+               rcu_read_lock();
                for (vifi = 0; vifi < mrt->maxvif; vifi++, v++) {
-                       if (!v->dev)
+                       vif_dev = rcu_dereference(v->dev);
+                       if (!vif_dev)
                                continue;
 
                        err = mr_call_vif_notifier(nb, family,
-                                                  FIB_EVENT_VIF_ADD,
-                                                  v, vifi, mrt->id, extack);
+                                                  FIB_EVENT_VIF_ADD, v,
+                                                  vif_dev, vifi,
+                                                  mrt->id, extack);
                        if (err)
                                break;
                }
-               read_unlock(mrt_lock);
+               rcu_read_unlock();
 
                if (err)
                        return err;
index 3d6fc6d..b83c2bd 100644 (file)
@@ -316,12 +316,16 @@ static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
                pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n",
                         sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port));
 
+               if (addr->sin_addr.s_addr == htonl(INADDR_ANY))
+                       return 0;
+
                tb_id = l3mdev_fib_table_by_index(net, sk->sk_bound_dev_if) ? : tb_id;
                chk_addr_ret = inet_addr_type_table(net, addr->sin_addr.s_addr, tb_id);
 
-               if (!inet_addr_valid_or_nonlocal(net, inet_sk(sk),
-                                                addr->sin_addr.s_addr,
-                                                chk_addr_ret))
+               if (chk_addr_ret == RTN_MULTICAST ||
+                   chk_addr_ret == RTN_BROADCAST ||
+                   (chk_addr_ret != RTN_LOCAL &&
+                    !inet_can_nonlocal_bind(net, isk)))
                        return -EADDRNOTAVAIL;
 
 #if IS_ENABLED(CONFIG_IPV6)
index 959bea1..006c1f0 100644 (file)
@@ -95,10 +95,10 @@ int raw_hash_sk(struct sock *sk)
 
        hlist = &h->ht[inet_sk(sk)->inet_num & (RAW_HTABLE_SIZE - 1)];
 
-       write_lock_bh(&h->lock);
+       spin_lock(&h->lock);
        __sk_nulls_add_node_rcu(sk, hlist);
        sock_set_flag(sk, SOCK_RCU_FREE);
-       write_unlock_bh(&h->lock);
+       spin_unlock(&h->lock);
        sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
 
        return 0;
@@ -109,10 +109,10 @@ void raw_unhash_sk(struct sock *sk)
 {
        struct raw_hashinfo *h = sk->sk_prot->h.raw_hash;
 
-       write_lock_bh(&h->lock);
+       spin_lock(&h->lock);
        if (__sk_nulls_del_node_init_rcu(sk))
                sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
-       write_unlock_bh(&h->lock);
+       spin_unlock(&h->lock);
 }
 EXPORT_SYMBOL_GPL(raw_unhash_sk);
 
@@ -278,7 +278,7 @@ void raw_icmp_error(struct sk_buff *skb, int protocol, u32 info)
        sk_nulls_for_each(sk, hnode, hlist) {
                iph = (const struct iphdr *)skb->data;
                if (!raw_v4_match(net, sk, iph->protocol,
-                                 iph->saddr, iph->daddr, dif, sdif))
+                                 iph->daddr, iph->saddr, dif, sdif))
                        continue;
                raw_err(sk, skb, info);
        }
index ac4b652..9993218 100644 (file)
@@ -156,7 +156,7 @@ static void raw_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
        s_slot = cb->args[0];
        num = s_num = cb->args[1];
 
-       read_lock(&hashinfo->lock);
+       rcu_read_lock();
        for (slot = s_slot; slot < RAW_HTABLE_SIZE; s_num = 0, slot++) {
                num = 0;
 
@@ -184,7 +184,7 @@ next:
        }
 
 out_unlock:
-       read_unlock(&hashinfo->lock);
+       rcu_read_unlock();
 
        cb->args[0] = slot;
        cb->args[1] = num;
index 9d2fd3c..21bdee8 100644 (file)
@@ -4575,16 +4575,24 @@ EXPORT_SYMBOL_GPL(tcp_done);
 
 int tcp_abort(struct sock *sk, int err)
 {
-       if (!sk_fullsock(sk)) {
-               if (sk->sk_state == TCP_NEW_SYN_RECV) {
-                       struct request_sock *req = inet_reqsk(sk);
+       int state = inet_sk_state_load(sk);
 
-                       local_bh_disable();
-                       inet_csk_reqsk_queue_drop(req->rsk_listener, req);
-                       local_bh_enable();
-                       return 0;
-               }
-               return -EOPNOTSUPP;
+       if (state == TCP_NEW_SYN_RECV) {
+               struct request_sock *req = inet_reqsk(sk);
+
+               local_bh_disable();
+               inet_csk_reqsk_queue_drop(req->rsk_listener, req);
+               local_bh_enable();
+               return 0;
+       }
+       if (state == TCP_TIME_WAIT) {
+               struct inet_timewait_sock *tw = inet_twsk(sk);
+
+               refcount_inc(&tw->tw_refcnt);
+               local_bh_disable();
+               inet_twsk_deschedule_put(tw);
+               local_bh_enable();
+               return 0;
        }
 
        /* Don't race with userspace socket closes such as tcp_close. */
index 38550bb..a1626af 100644 (file)
@@ -612,9 +612,6 @@ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
                return 0;
        }
 
-       if (inet_csk_has_ulp(sk))
-               return -EINVAL;
-
        if (sk->sk_family == AF_INET6) {
                if (tcp_bpf_assert_proto_ops(psock->sk_proto))
                        return -EINVAL;
index fda811a..68d0d8a 100644 (file)
@@ -1964,7 +1964,10 @@ process:
                struct sock *nsk;
 
                sk = req->rsk_listener;
-               drop_reason = tcp_inbound_md5_hash(sk, skb,
+               if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb))
+                       drop_reason = SKB_DROP_REASON_XFRM_POLICY;
+               else
+                       drop_reason = tcp_inbound_md5_hash(sk, skb,
                                                   &iph->saddr, &iph->daddr,
                                                   AF_INET, dif, sdif);
                if (unlikely(drop_reason)) {
@@ -2016,6 +2019,7 @@ process:
                        }
                        goto discard_and_relse;
                }
+               nf_reset_ct(skb);
                if (nsk == sk) {
                        reqsk_put(req);
                        tcp_v4_restore_cb(skb);
index 3497ad1..88becb0 100644 (file)
@@ -1109,10 +1109,6 @@ ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg,
                goto out;
        }
 
-       if (net->ipv6.devconf_all->disable_policy ||
-           idev->cnf.disable_policy)
-               f6i->dst_nopolicy = true;
-
        neigh_parms_data_state_setall(idev->nd_parms);
 
        ifa->addr = *cfg->pfx;
@@ -4524,6 +4520,39 @@ restart:
                        /* We try to batch several events at once. */
                        age = (now - ifp->tstamp + ADDRCONF_TIMER_FUZZ_MINUS) / HZ;
 
+                       if ((ifp->flags&IFA_F_TEMPORARY) &&
+                           !(ifp->flags&IFA_F_TENTATIVE) &&
+                           ifp->prefered_lft != INFINITY_LIFE_TIME &&
+                           !ifp->regen_count && ifp->ifpub) {
+                               /* This is a non-regenerated temporary addr. */
+
+                               unsigned long regen_advance = ifp->idev->cnf.regen_max_retry *
+                                       ifp->idev->cnf.dad_transmits *
+                                       max(NEIGH_VAR(ifp->idev->nd_parms, RETRANS_TIME), HZ/100) / HZ;
+
+                               if (age + regen_advance >= ifp->prefered_lft) {
+                                       struct inet6_ifaddr *ifpub = ifp->ifpub;
+                                       if (time_before(ifp->tstamp + ifp->prefered_lft * HZ, next))
+                                               next = ifp->tstamp + ifp->prefered_lft * HZ;
+
+                                       ifp->regen_count++;
+                                       in6_ifa_hold(ifp);
+                                       in6_ifa_hold(ifpub);
+                                       spin_unlock(&ifp->lock);
+
+                                       spin_lock(&ifpub->lock);
+                                       ifpub->regen_count = 0;
+                                       spin_unlock(&ifpub->lock);
+                                       rcu_read_unlock_bh();
+                                       ipv6_create_tempaddr(ifpub, true);
+                                       in6_ifa_put(ifpub);
+                                       in6_ifa_put(ifp);
+                                       rcu_read_lock_bh();
+                                       goto restart;
+                               } else if (time_before(ifp->tstamp + ifp->prefered_lft * HZ - regen_advance * HZ, next))
+                                       next = ifp->tstamp + ifp->prefered_lft * HZ - regen_advance * HZ;
+                       }
+
                        if (ifp->valid_lft != INFINITY_LIFE_TIME &&
                            age >= ifp->valid_lft) {
                                spin_unlock(&ifp->lock);
@@ -4557,35 +4586,6 @@ restart:
                                        in6_ifa_put(ifp);
                                        goto restart;
                                }
-                       } else if ((ifp->flags&IFA_F_TEMPORARY) &&
-                                  !(ifp->flags&IFA_F_TENTATIVE)) {
-                               unsigned long regen_advance = ifp->idev->cnf.regen_max_retry *
-                                       ifp->idev->cnf.dad_transmits *
-                                       max(NEIGH_VAR(ifp->idev->nd_parms, RETRANS_TIME), HZ/100) / HZ;
-
-                               if (age >= ifp->prefered_lft - regen_advance) {
-                                       struct inet6_ifaddr *ifpub = ifp->ifpub;
-                                       if (time_before(ifp->tstamp + ifp->prefered_lft * HZ, next))
-                                               next = ifp->tstamp + ifp->prefered_lft * HZ;
-                                       if (!ifp->regen_count && ifpub) {
-                                               ifp->regen_count++;
-                                               in6_ifa_hold(ifp);
-                                               in6_ifa_hold(ifpub);
-                                               spin_unlock(&ifp->lock);
-
-                                               spin_lock(&ifpub->lock);
-                                               ifpub->regen_count = 0;
-                                               spin_unlock(&ifpub->lock);
-                                               rcu_read_unlock_bh();
-                                               ipv6_create_tempaddr(ifpub, true);
-                                               in6_ifa_put(ifpub);
-                                               in6_ifa_put(ifp);
-                                               rcu_read_lock_bh();
-                                               goto restart;
-                                       }
-                               } else if (time_before(ifp->tstamp + ifp->prefered_lft * HZ - regen_advance * HZ, next))
-                                       next = ifp->tstamp + ifp->prefered_lft * HZ - regen_advance * HZ;
-                               spin_unlock(&ifp->lock);
                        } else {
                                /* ifp->prefered_lft <= ifp->valid_lft */
                                if (time_before(ifp->tstamp + ifp->prefered_lft * HZ, next))
@@ -5172,9 +5172,9 @@ next:
                fillargs->event = RTM_GETMULTICAST;
 
                /* multicast address */
-               for (ifmca = rcu_dereference(idev->mc_list);
+               for (ifmca = rtnl_dereference(idev->mc_list);
                     ifmca;
-                    ifmca = rcu_dereference(ifmca->next), ip_idx++) {
+                    ifmca = rtnl_dereference(ifmca->next), ip_idx++) {
                        if (ip_idx < s_ip_idx)
                                continue;
                        err = inet6_fill_ifmcaddr(skb, ifmca, fillargs);
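
The relocated hunk above makes the temporary-address regeneration check run before the valid_lft expiry handling. regen_advance is the lead time, in seconds, needed to complete duplicate address detection before the preferred lifetime ends: regen_max_retry attempts of dad_transmits probes, each waiting at least max(RETRANS_TIME, HZ/100) jiffies. A minimal userspace sketch of that arithmetic, with purely illustrative values, HZ assumed to be 1000, and the prefered_lft spelling kept to mirror the kernel field name:

    #include <stdio.h>

    #define HZ 1000 /* assumed tick rate, for this example only */

    static unsigned long max_ul(unsigned long a, unsigned long b)
    {
            return a > b ? a : b;
    }

    int main(void)
    {
            unsigned long regen_max_retry = 3;   /* regeneration attempts */
            unsigned long dad_transmits = 1;     /* DAD probes per attempt */
            unsigned long retrans_time = 1 * HZ; /* NS retransmit interval, jiffies */
            unsigned long prefered_lft = 86400;  /* preferred lifetime, seconds */
            unsigned long age = 86398;           /* seconds since ifp->tstamp */

            unsigned long regen_advance = regen_max_retry * dad_transmits *
                    max_ul(retrans_time, HZ / 100) / HZ; /* seconds */

            if (age + regen_advance >= prefered_lft)
                    printf("regenerate now (advance=%lu s)\n", regen_advance);
            else
                    printf("recheck in %lu s\n", prefered_lft - age - regen_advance);
            return 0;
    }
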
index 3e22cbe..1bd10ae 100644
@@ -939,7 +939,6 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
        __be16 proto;
        __u32 mtu;
        int nhoff;
-       int thoff;
 
        if (!pskb_inet_may_pull(skb))
                goto tx_err;
@@ -960,10 +959,16 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
            (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
                truncate = true;
 
-       thoff = skb_transport_header(skb) - skb_mac_header(skb);
-       if (skb->protocol == htons(ETH_P_IPV6) &&
-           (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff))
-               truncate = true;
+       if (skb->protocol == htons(ETH_P_IPV6)) {
+               int thoff;
+
+               if (skb_transport_header_was_set(skb))
+                       thoff = skb_transport_header(skb) - skb_mac_header(skb);
+               else
+                       thoff = nhoff + sizeof(struct ipv6hdr);
+               if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
+                       truncate = true;
+       }
 
        if (skb_cow_head(skb, dev->needed_headroom ?: t->hlen))
                goto tx_err;
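
The ip6erspan fix above stops reading the transport header offset unconditionally: when skb_transport_header_was_set() is false, the offset falls back to nhoff plus a plain IPv6 header, and only then is payload_len compared against the remaining bytes. The decision in isolation, as a compilable sketch with made-up lengths:

    #include <stdbool.h>
    #include <stdio.h>

    #define IPV6_HDRLEN 40 /* sizeof(struct ipv6hdr) */

    int main(void)
    {
            bool transport_set = false; /* skb_transport_header_was_set() */
            int nhoff = 14;             /* network header offset (Ethernet) */
            int recorded_thoff = 0;     /* meaningful only when transport_set */
            int skb_len = 60;           /* bytes counted from the mac header */
            int payload_len = 20;       /* IPv6 payload_len field */

            int thoff = transport_set ? recorded_thoff : nhoff + IPV6_HDRLEN;
            bool truncate = payload_len > skb_len - thoff;

            printf("thoff=%d truncate=%s\n", thoff, truncate ? "yes" : "no");
            return 0;
    }
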
index d4aad41..ec6e150 100644
@@ -62,7 +62,12 @@ struct ip6mr_result {
    Note that the changes are semaphored via rtnl_lock.
  */
 
-static DEFINE_RWLOCK(mrt_lock);
+static DEFINE_SPINLOCK(mrt_lock);
+
+static struct net_device *vif_dev_read(const struct vif_device *vif)
+{
+       return rcu_dereference(vif->dev);
+}
 
 /* Multicast router control variables */
 
@@ -85,11 +90,11 @@ static void ip6mr_free_table(struct mr_table *mrt);
 static void ip6_mr_forward(struct net *net, struct mr_table *mrt,
                           struct net_device *dev, struct sk_buff *skb,
                           struct mfc6_cache *cache);
-static int ip6mr_cache_report(struct mr_table *mrt, struct sk_buff *pkt,
+static int ip6mr_cache_report(const struct mr_table *mrt, struct sk_buff *pkt,
                              mifi_t mifi, int assert);
 static void mr6_netlink_event(struct mr_table *mrt, struct mfc6_cache *mfc,
                              int cmd);
-static void mrt6msg_netlink_event(struct mr_table *mrt, struct sk_buff *pkt);
+static void mrt6msg_netlink_event(const struct mr_table *mrt, struct sk_buff *pkt);
 static int ip6mr_rtm_dumproute(struct sk_buff *skb,
                               struct netlink_callback *cb);
 static void mroute_clean_tables(struct mr_table *mrt, int flags);
@@ -398,7 +403,7 @@ static void ip6mr_free_table(struct mr_table *mrt)
  */
 
 static void *ip6mr_vif_seq_start(struct seq_file *seq, loff_t *pos)
-       __acquires(mrt_lock)
+       __acquires(RCU)
 {
        struct mr_vif_iter *iter = seq->private;
        struct net *net = seq_file_net(seq);
@@ -410,14 +415,14 @@ static void *ip6mr_vif_seq_start(struct seq_file *seq, loff_t *pos)
 
        iter->mrt = mrt;
 
-       read_lock(&mrt_lock);
+       rcu_read_lock();
        return mr_vif_seq_start(seq, pos);
 }
 
 static void ip6mr_vif_seq_stop(struct seq_file *seq, void *v)
-       __releases(mrt_lock)
+       __releases(RCU)
 {
-       read_unlock(&mrt_lock);
+       rcu_read_unlock();
 }
 
 static int ip6mr_vif_seq_show(struct seq_file *seq, void *v)
@@ -430,7 +435,11 @@ static int ip6mr_vif_seq_show(struct seq_file *seq, void *v)
                         "Interface      BytesIn  PktsIn  BytesOut PktsOut Flags\n");
        } else {
                const struct vif_device *vif = v;
-               const char *name = vif->dev ? vif->dev->name : "none";
+               const struct net_device *vif_dev;
+               const char *name;
+
+               vif_dev = vif_dev_read(vif);
+               name = vif_dev ? vif_dev->name : "none";
 
                seq_printf(seq,
                           "%2td %-10s %8ld %7ld  %8ld %7ld %05X\n",
@@ -549,13 +558,11 @@ static int pim6_rcv(struct sk_buff *skb)
 
        if (ip6mr_fib_lookup(net, &fl6, &mrt) < 0)
                goto drop;
-       reg_vif_num = mrt->mroute_reg_vif_num;
 
-       read_lock(&mrt_lock);
+       /* Pairs with WRITE_ONCE() in mif6_add()/mif6_delete() */
+       reg_vif_num = READ_ONCE(mrt->mroute_reg_vif_num);
        if (reg_vif_num >= 0)
-               reg_dev = mrt->vif_table[reg_vif_num].dev;
-       dev_hold(reg_dev);
-       read_unlock(&mrt_lock);
+               reg_dev = vif_dev_read(&mrt->vif_table[reg_vif_num]);
 
        if (!reg_dev)
                goto drop;
@@ -570,7 +577,6 @@ static int pim6_rcv(struct sk_buff *skb)
 
        netif_rx(skb);
 
-       dev_put(reg_dev);
        return 0;
  drop:
        kfree_skb(skb);
@@ -600,11 +606,12 @@ static netdev_tx_t reg_vif_xmit(struct sk_buff *skb,
        if (ip6mr_fib_lookup(net, &fl6, &mrt) < 0)
                goto tx_err;
 
-       read_lock(&mrt_lock);
        dev->stats.tx_bytes += skb->len;
        dev->stats.tx_packets++;
-       ip6mr_cache_report(mrt, skb, mrt->mroute_reg_vif_num, MRT6MSG_WHOLEPKT);
-       read_unlock(&mrt_lock);
+       rcu_read_lock();
+       ip6mr_cache_report(mrt, skb, READ_ONCE(mrt->mroute_reg_vif_num),
+                          MRT6MSG_WHOLEPKT);
+       rcu_read_unlock();
        kfree_skb(skb);
        return NETDEV_TX_OK;
 
@@ -670,10 +677,11 @@ failure:
 static int call_ip6mr_vif_entry_notifiers(struct net *net,
                                          enum fib_event_type event_type,
                                          struct vif_device *vif,
+                                         struct net_device *vif_dev,
                                          mifi_t vif_index, u32 tb_id)
 {
        return mr_call_vif_notifiers(net, RTNL_FAMILY_IP6MR, event_type,
-                                    vif, vif_index, tb_id,
+                                    vif, vif_dev, vif_index, tb_id,
                                     &net->ipv6.ipmr_seq);
 }
 
@@ -698,23 +706,21 @@ static int mif6_delete(struct mr_table *mrt, int vifi, int notify,
 
        v = &mrt->vif_table[vifi];
 
-       if (VIF_EXISTS(mrt, vifi))
-               call_ip6mr_vif_entry_notifiers(read_pnet(&mrt->net),
-                                              FIB_EVENT_VIF_DEL, v, vifi,
-                                              mrt->id);
-
-       write_lock_bh(&mrt_lock);
-       dev = v->dev;
-       v->dev = NULL;
-
-       if (!dev) {
-               write_unlock_bh(&mrt_lock);
+       dev = rtnl_dereference(v->dev);
+       if (!dev)
                return -EADDRNOTAVAIL;
-       }
+
+       call_ip6mr_vif_entry_notifiers(read_pnet(&mrt->net),
+                                      FIB_EVENT_VIF_DEL, v, dev,
+                                      vifi, mrt->id);
+       spin_lock(&mrt_lock);
+       RCU_INIT_POINTER(v->dev, NULL);
 
 #ifdef CONFIG_IPV6_PIMSM_V2
-       if (vifi == mrt->mroute_reg_vif_num)
-               mrt->mroute_reg_vif_num = -1;
+       if (vifi == mrt->mroute_reg_vif_num) {
+               /* Pairs with READ_ONCE() in ip6mr_cache_report() and reg_vif_xmit() */
+               WRITE_ONCE(mrt->mroute_reg_vif_num, -1);
+       }
 #endif
 
        if (vifi + 1 == mrt->maxvif) {
@@ -723,10 +729,10 @@ static int mif6_delete(struct mr_table *mrt, int vifi, int notify,
                        if (VIF_EXISTS(mrt, tmp))
                                break;
                }
-               mrt->maxvif = tmp + 1;
+               WRITE_ONCE(mrt->maxvif, tmp + 1);
        }
 
-       write_unlock_bh(&mrt_lock);
+       spin_unlock(&mrt_lock);
 
        dev_set_allmulti(dev, -1);
 
@@ -826,7 +832,7 @@ static void ipmr_expire_process(struct timer_list *t)
        spin_unlock(&mfc_unres_lock);
 }
 
-/* Fill oifs list. It is called under write locked mrt_lock. */
+/* Fill oifs list. It is called with mrt_lock held. */
 
 static void ip6mr_update_thresholds(struct mr_table *mrt,
                                    struct mr_mfc *cache,
@@ -912,18 +918,18 @@ static int mif6_add(struct net *net, struct mr_table *mrt,
                        MIFF_REGISTER);
 
        /* And finish update writing critical data */
-       write_lock_bh(&mrt_lock);
-       v->dev = dev;
+       spin_lock(&mrt_lock);
+       rcu_assign_pointer(v->dev, dev);
        netdev_tracker_alloc(dev, &v->dev_tracker, GFP_ATOMIC);
 #ifdef CONFIG_IPV6_PIMSM_V2
        if (v->flags & MIFF_REGISTER)
-               mrt->mroute_reg_vif_num = vifi;
+               WRITE_ONCE(mrt->mroute_reg_vif_num, vifi);
 #endif
        if (vifi + 1 > mrt->maxvif)
-               mrt->maxvif = vifi + 1;
-       write_unlock_bh(&mrt_lock);
+               WRITE_ONCE(mrt->maxvif, vifi + 1);
+       spin_unlock(&mrt_lock);
        call_ip6mr_vif_entry_notifiers(net, FIB_EVENT_VIF_ADD,
-                                      v, vifi, mrt->id);
+                                      v, dev, vifi, mrt->id);
        return 0;
 }
 
@@ -1028,10 +1034,10 @@ static void ip6mr_cache_resolve(struct net *net, struct mr_table *mrt,
 /*
  *     Bounce a cache query up to pim6sd and netlink.
  *
- *     Called under mrt_lock.
+ *     Called under rcu_read_lock().
  */
 
-static int ip6mr_cache_report(struct mr_table *mrt, struct sk_buff *pkt,
+static int ip6mr_cache_report(const struct mr_table *mrt, struct sk_buff *pkt,
                              mifi_t mifi, int assert)
 {
        struct sock *mroute6_sk;
@@ -1072,7 +1078,7 @@ static int ip6mr_cache_report(struct mr_table *mrt, struct sk_buff *pkt,
                if (assert == MRT6MSG_WRMIFWHOLE)
                        msg->im6_mif = mifi;
                else
-                       msg->im6_mif = mrt->mroute_reg_vif_num;
+                       msg->im6_mif = READ_ONCE(mrt->mroute_reg_vif_num);
                msg->im6_pad = 0;
                msg->im6_src = ipv6_hdr(pkt)->saddr;
                msg->im6_dst = ipv6_hdr(pkt)->daddr;
@@ -1107,10 +1113,8 @@ static int ip6mr_cache_report(struct mr_table *mrt, struct sk_buff *pkt,
        skb->ip_summed = CHECKSUM_UNNECESSARY;
        }
 
-       rcu_read_lock();
        mroute6_sk = rcu_dereference(mrt->mroute_sk);
        if (!mroute6_sk) {
-               rcu_read_unlock();
                kfree_skb(skb);
                return -EINVAL;
        }
@@ -1119,7 +1123,7 @@ static int ip6mr_cache_report(struct mr_table *mrt, struct sk_buff *pkt,
 
        /* Deliver to user space multicast routing algorithms */
        ret = sock_queue_rcv_skb(mroute6_sk, skb);
-       rcu_read_unlock();
+
        if (ret < 0) {
                net_warn_ratelimited("mroute6: pending queue full, dropping entries\n");
                kfree_skb(skb);
@@ -1243,7 +1247,7 @@ static int ip6mr_device_event(struct notifier_block *this,
        ip6mr_for_each_table(mrt, net) {
                v = &mrt->vif_table[0];
                for (ct = 0; ct < mrt->maxvif; ct++, v++) {
-                       if (v->dev == dev)
+                       if (rcu_access_pointer(v->dev) == dev)
                                mif6_delete(mrt, ct, 1, NULL);
                }
        }
@@ -1262,7 +1266,7 @@ static int ip6mr_dump(struct net *net, struct notifier_block *nb,
                      struct netlink_ext_ack *extack)
 {
        return mr_dump(net, nb, RTNL_FAMILY_IP6MR, ip6mr_rules_dump,
-                      ip6mr_mr_table_iter, &mrt_lock, extack);
+                      ip6mr_mr_table_iter, extack);
 }
 
 static struct notifier_block ip6_mr_notifier = {
@@ -1437,12 +1441,12 @@ static int ip6mr_mfc_add(struct net *net, struct mr_table *mrt,
                                    &mfc->mf6cc_mcastgrp.sin6_addr, parent);
        rcu_read_unlock();
        if (c) {
-               write_lock_bh(&mrt_lock);
+               spin_lock(&mrt_lock);
                c->_c.mfc_parent = mfc->mf6cc_parent;
                ip6mr_update_thresholds(mrt, &c->_c, ttls);
                if (!mrtsock)
                        c->_c.mfc_flags |= MFC_STATIC;
-               write_unlock_bh(&mrt_lock);
+               spin_unlock(&mrt_lock);
                call_ip6mr_mfc_entry_notifiers(net, FIB_EVENT_ENTRY_REPLACE,
                                               c, mrt->id);
                mr6_netlink_event(mrt, c, RTM_NEWROUTE);
@@ -1560,7 +1564,7 @@ static int ip6mr_sk_init(struct mr_table *mrt, struct sock *sk)
        struct net *net = sock_net(sk);
 
        rtnl_lock();
-       write_lock_bh(&mrt_lock);
+       spin_lock(&mrt_lock);
        if (rtnl_dereference(mrt->mroute_sk)) {
                err = -EADDRINUSE;
        } else {
@@ -1568,7 +1572,7 @@ static int ip6mr_sk_init(struct mr_table *mrt, struct sock *sk)
                sock_set_flag(sk, SOCK_RCU_FREE);
                atomic_inc(&net->ipv6.devconf_all->mc_forwarding);
        }
-       write_unlock_bh(&mrt_lock);
+       spin_unlock(&mrt_lock);
 
        if (!err)
                inet6_netconf_notify_devconf(net, RTM_NEWNETCONF,
@@ -1598,14 +1602,14 @@ int ip6mr_sk_done(struct sock *sk)
        rtnl_lock();
        ip6mr_for_each_table(mrt, net) {
                if (sk == rtnl_dereference(mrt->mroute_sk)) {
-                       write_lock_bh(&mrt_lock);
+                       spin_lock(&mrt_lock);
                        RCU_INIT_POINTER(mrt->mroute_sk, NULL);
                        /* Note that mroute_sk had SOCK_RCU_FREE set,
                         * so the RCU grace period before sk freeing
                         * is guaranteed by sk_destruct()
                         */
                        atomic_dec(&devconf->mc_forwarding);
-                       write_unlock_bh(&mrt_lock);
+                       spin_unlock(&mrt_lock);
                        inet6_netconf_notify_devconf(net, RTM_NEWNETCONF,
                                                     NETCONFA_MC_FORWARDING,
                                                     NETCONFA_IFINDEX_ALL,
@@ -1891,20 +1895,20 @@ int ip6mr_ioctl(struct sock *sk, int cmd, void __user *arg)
                if (vr.mifi >= mrt->maxvif)
                        return -EINVAL;
                vr.mifi = array_index_nospec(vr.mifi, mrt->maxvif);
-               read_lock(&mrt_lock);
+               rcu_read_lock();
                vif = &mrt->vif_table[vr.mifi];
                if (VIF_EXISTS(mrt, vr.mifi)) {
-                       vr.icount = vif->pkt_in;
-                       vr.ocount = vif->pkt_out;
-                       vr.ibytes = vif->bytes_in;
-                       vr.obytes = vif->bytes_out;
-                       read_unlock(&mrt_lock);
+                       vr.icount = READ_ONCE(vif->pkt_in);
+                       vr.ocount = READ_ONCE(vif->pkt_out);
+                       vr.ibytes = READ_ONCE(vif->bytes_in);
+                       vr.obytes = READ_ONCE(vif->bytes_out);
+                       rcu_read_unlock();
 
                        if (copy_to_user(arg, &vr, sizeof(vr)))
                                return -EFAULT;
                        return 0;
                }
-               read_unlock(&mrt_lock);
+               rcu_read_unlock();
                return -EADDRNOTAVAIL;
        case SIOCGETSGCNT_IN6:
                if (copy_from_user(&sr, arg, sizeof(sr)))
@@ -1966,20 +1970,20 @@ int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
                if (vr.mifi >= mrt->maxvif)
                        return -EINVAL;
                vr.mifi = array_index_nospec(vr.mifi, mrt->maxvif);
-               read_lock(&mrt_lock);
+               rcu_read_lock();
                vif = &mrt->vif_table[vr.mifi];
                if (VIF_EXISTS(mrt, vr.mifi)) {
-                       vr.icount = vif->pkt_in;
-                       vr.ocount = vif->pkt_out;
-                       vr.ibytes = vif->bytes_in;
-                       vr.obytes = vif->bytes_out;
-                       read_unlock(&mrt_lock);
+                       vr.icount = READ_ONCE(vif->pkt_in);
+                       vr.ocount = READ_ONCE(vif->pkt_out);
+                       vr.ibytes = READ_ONCE(vif->bytes_in);
+                       vr.obytes = READ_ONCE(vif->bytes_out);
+                       rcu_read_unlock();
 
                        if (copy_to_user(arg, &vr, sizeof(vr)))
                                return -EFAULT;
                        return 0;
                }
-               read_unlock(&mrt_lock);
+               rcu_read_unlock();
                return -EADDRNOTAVAIL;
        case SIOCGETSGCNT_IN6:
                if (copy_from_user(&sr, arg, sizeof(sr)))
@@ -2021,21 +2025,22 @@ static inline int ip6mr_forward2_finish(struct net *net, struct sock *sk, struct
 static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
                          struct sk_buff *skb, int vifi)
 {
-       struct ipv6hdr *ipv6h;
        struct vif_device *vif = &mrt->vif_table[vifi];
-       struct net_device *dev;
+       struct net_device *vif_dev;
+       struct ipv6hdr *ipv6h;
        struct dst_entry *dst;
        struct flowi6 fl6;
 
-       if (!vif->dev)
+       vif_dev = vif_dev_read(vif);
+       if (!vif_dev)
                goto out_free;
 
 #ifdef CONFIG_IPV6_PIMSM_V2
        if (vif->flags & MIFF_REGISTER) {
-               vif->pkt_out++;
-               vif->bytes_out += skb->len;
-               vif->dev->stats.tx_bytes += skb->len;
-               vif->dev->stats.tx_packets++;
+               WRITE_ONCE(vif->pkt_out, vif->pkt_out + 1);
+               WRITE_ONCE(vif->bytes_out, vif->bytes_out + skb->len);
+               vif_dev->stats.tx_bytes += skb->len;
+               vif_dev->stats.tx_packets++;
                ip6mr_cache_report(mrt, skb, vifi, MRT6MSG_WHOLEPKT);
                goto out_free;
        }
@@ -2068,14 +2073,13 @@ static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
         * not mrouter) cannot join to more than one interface - it will
         * result in receiving multiple packets.
         */
-       dev = vif->dev;
-       skb->dev = dev;
-       vif->pkt_out++;
-       vif->bytes_out += skb->len;
+       skb->dev = vif_dev;
+       WRITE_ONCE(vif->pkt_out, vif->pkt_out + 1);
+       WRITE_ONCE(vif->bytes_out, vif->bytes_out + skb->len);
 
        /* We are about to write */
        /* XXX: extension headers? */
-       if (skb_cow(skb, sizeof(*ipv6h) + LL_RESERVED_SPACE(dev)))
+       if (skb_cow(skb, sizeof(*ipv6h) + LL_RESERVED_SPACE(vif_dev)))
                goto out_free;
 
        ipv6h = ipv6_hdr(skb);
@@ -2084,7 +2088,7 @@ static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
        IP6CB(skb)->flags |= IP6SKB_FORWARDED;
 
        return NF_HOOK(NFPROTO_IPV6, NF_INET_FORWARD,
-                      net, NULL, skb, skb->dev, dev,
+                      net, NULL, skb, skb->dev, vif_dev,
                       ip6mr_forward2_finish);
 
 out_free:
@@ -2092,17 +2096,20 @@ out_free:
        return 0;
 }
 
+/* Called with rcu_read_lock() */
 static int ip6mr_find_vif(struct mr_table *mrt, struct net_device *dev)
 {
        int ct;
 
-       for (ct = mrt->maxvif - 1; ct >= 0; ct--) {
-               if (mrt->vif_table[ct].dev == dev)
+       /* Pairs with WRITE_ONCE() in mif6_delete()/mif6_add() */
+       for (ct = READ_ONCE(mrt->maxvif) - 1; ct >= 0; ct--) {
+               if (rcu_access_pointer(mrt->vif_table[ct].dev) == dev)
                        break;
        }
        return ct;
 }
 
+/* Called under rcu_read_lock() */
 static void ip6_mr_forward(struct net *net, struct mr_table *mrt,
                           struct net_device *dev, struct sk_buff *skb,
                           struct mfc6_cache *c)
@@ -2122,20 +2129,18 @@ static void ip6_mr_forward(struct net *net, struct mr_table *mrt,
                /* For an (*,G) entry, we only check that the incoming
                 * interface is part of the static tree.
                 */
-               rcu_read_lock();
                cache_proxy = mr_mfc_find_any_parent(mrt, vif);
                if (cache_proxy &&
                    cache_proxy->_c.mfc_un.res.ttls[true_vifi] < 255) {
                        rcu_read_unlock();
                        goto forward;
                }
-               rcu_read_unlock();
        }
 
        /*
         * Wrong interface: drop packet and (maybe) send PIM assert.
         */
-       if (mrt->vif_table[vif].dev != dev) {
+       if (rcu_access_pointer(mrt->vif_table[vif].dev) != dev) {
                c->_c.mfc_un.res.wrong_if++;
 
                if (true_vifi >= 0 && mrt->mroute_do_assert &&
@@ -2159,8 +2164,10 @@ static void ip6_mr_forward(struct net *net, struct mr_table *mrt,
        }
 
 forward:
-       mrt->vif_table[vif].pkt_in++;
-       mrt->vif_table[vif].bytes_in += skb->len;
+       WRITE_ONCE(mrt->vif_table[vif].pkt_in,
+                  mrt->vif_table[vif].pkt_in + 1);
+       WRITE_ONCE(mrt->vif_table[vif].bytes_in,
+                  mrt->vif_table[vif].bytes_in + skb->len);
 
        /*
         *      Forward the frame
@@ -2238,7 +2245,6 @@ int ip6_mr_input(struct sk_buff *skb)
                return err;
        }
 
-       read_lock(&mrt_lock);
        cache = ip6mr_cache_find(mrt,
                                 &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr);
        if (!cache) {
@@ -2259,19 +2265,15 @@ int ip6_mr_input(struct sk_buff *skb)
                vif = ip6mr_find_vif(mrt, dev);
                if (vif >= 0) {
                        int err = ip6mr_cache_unresolved(mrt, vif, skb, dev);
-                       read_unlock(&mrt_lock);
 
                        return err;
                }
-               read_unlock(&mrt_lock);
                kfree_skb(skb);
                return -ENODEV;
        }
 
        ip6_mr_forward(net, mrt, dev, skb, cache);
 
-       read_unlock(&mrt_lock);
-
        return 0;
 }
 
@@ -2287,7 +2289,7 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm,
        if (!mrt)
                return -ENOENT;
 
-       read_lock(&mrt_lock);
+       rcu_read_lock();
        cache = ip6mr_cache_find(mrt, &rt->rt6i_src.addr, &rt->rt6i_dst.addr);
        if (!cache && skb->dev) {
                int vif = ip6mr_find_vif(mrt, skb->dev);
@@ -2305,14 +2307,14 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm,
 
                dev = skb->dev;
                if (!dev || (vif = ip6mr_find_vif(mrt, dev)) < 0) {
-                       read_unlock(&mrt_lock);
+                       rcu_read_unlock();
                        return -ENODEV;
                }
 
                /* really correct? */
                skb2 = alloc_skb(sizeof(struct ipv6hdr), GFP_ATOMIC);
                if (!skb2) {
-                       read_unlock(&mrt_lock);
+                       rcu_read_unlock();
                        return -ENOMEM;
                }
 
@@ -2335,13 +2337,13 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm,
                iph->daddr = rt->rt6i_dst.addr;
 
                err = ip6mr_cache_unresolved(mrt, vif, skb2, dev);
-               read_unlock(&mrt_lock);
+               rcu_read_unlock();
 
                return err;
        }
 
        err = mr_fill_mroute(mrt, skb, &cache->_c, rtm);
-       read_unlock(&mrt_lock);
+       rcu_read_unlock();
        return err;
 }
 
@@ -2460,7 +2462,7 @@ static size_t mrt6msg_netlink_msgsize(size_t payloadlen)
        return len;
 }
 
-static void mrt6msg_netlink_event(struct mr_table *mrt, struct sk_buff *pkt)
+static void mrt6msg_netlink_event(const struct mr_table *mrt, struct sk_buff *pkt)
 {
        struct net *net = read_pnet(&mrt->net);
        struct nlmsghdr *nlh;
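
The ip6mr conversion above applies one pattern throughout the file: mrt_lock shrinks to a writer-only spinlock, readers run under rcu_read_lock(), vif->dev is published with rcu_assign_pointer() and fetched through vif_dev_read()/rcu_dereference(), and integers shared with lockless readers (maxvif, mroute_reg_vif_num, the per-vif counters) move to paired READ_ONCE()/WRITE_ONCE() accesses, which also lets pim6_rcv() drop its dev_hold()/dev_put() churn. A userspace model of the pointer half using C11 atomics; release/acquire ordering stands in for the RCU publish/read primitives, so treat this as an analogy rather than the kernel API:

    #include <stdatomic.h>
    #include <stdio.h>

    struct net_device { const char *name; };

    struct vif_device {
            _Atomic(struct net_device *) dev; /* models the RCU-protected pointer */
            _Atomic long pkt_in;              /* models a READ_ONCE/WRITE_ONCE field */
    };

    /* writer side: publish while holding the (elided) writer spinlock */
    static void vif_publish(struct vif_device *vif, struct net_device *dev)
    {
            atomic_store_explicit(&vif->dev, dev, memory_order_release);
    }

    /* reader side: analogous to vif_dev_read() under rcu_read_lock() */
    static struct net_device *vif_dev_read(struct vif_device *vif)
    {
            return atomic_load_explicit(&vif->dev, memory_order_acquire);
    }

    int main(void)
    {
            static struct net_device eth = { "eth0" };
            struct vif_device vif = { 0 };

            vif_publish(&vif, &eth);
            atomic_fetch_add_explicit(&vif.pkt_in, 1, memory_order_relaxed);

            struct net_device *dev = vif_dev_read(&vif);
            printf("%s pkt_in=%ld\n", dev ? dev->name : "none",
                   atomic_load_explicit(&vif.pkt_in, memory_order_relaxed));
            return 0;
    }
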
index b0dfe97..cd84cbd 100644
@@ -128,6 +128,7 @@ struct neigh_table nd_tbl = {
                        [NEIGH_VAR_RETRANS_TIME] = ND_RETRANS_TIMER,
                        [NEIGH_VAR_BASE_REACHABLE_TIME] = ND_REACHABLE_TIME,
                        [NEIGH_VAR_DELAY_PROBE_TIME] = 5 * HZ,
+                       [NEIGH_VAR_INTERVAL_PROBE_TIME_MS] = 5 * HZ,
                        [NEIGH_VAR_GC_STALETIME] = 60 * HZ,
                        [NEIGH_VAR_QUEUE_LEN_BYTES] = SK_WMEM_MAX,
                        [NEIGH_VAR_PROXY_QLEN] = 64,
index 46b560a..722de9d 100644
@@ -332,7 +332,6 @@ static void rawv6_err(struct sock *sk, struct sk_buff *skb,
 void raw6_icmp_error(struct sk_buff *skb, int nexthdr,
                u8 type, u8 code, int inner_offset, __be32 info)
 {
-       const struct in6_addr *saddr, *daddr;
        struct net *net = dev_net(skb->dev);
        struct hlist_nulls_head *hlist;
        struct hlist_nulls_node *hnode;
@@ -345,8 +344,6 @@ void raw6_icmp_error(struct sk_buff *skb, int nexthdr,
        sk_nulls_for_each(sk, hnode, hlist) {
                /* Note: ipv6_hdr(skb) != skb->data */
                const struct ipv6hdr *ip6h = (const struct ipv6hdr *)skb->data;
-               saddr = &ip6h->saddr;
-               daddr = &ip6h->daddr;
 
                if (!raw_v6_match(net, sk, nexthdr, &ip6h->saddr, &ip6h->daddr,
                                  inet6_iif(skb), inet6_iif(skb)))
index 0be01a4..70cd50c 100644
@@ -4569,8 +4569,15 @@ struct fib6_info *addrconf_f6i_alloc(struct net *net,
        }
 
        f6i = ip6_route_info_create(&cfg, gfp_flags, NULL);
-       if (!IS_ERR(f6i))
+       if (!IS_ERR(f6i)) {
                f6i->dst_nocount = true;
+
+               if (!anycast &&
+                   (net->ipv6.devconf_all->disable_policy ||
+                    idev->cnf.disable_policy))
+                       f6i->dst_nopolicy = true;
+       }
+
        return f6i;
 }
 
@@ -5934,7 +5941,7 @@ int rt6_dump_route(struct fib6_info *rt, void *p_arg, unsigned int skip)
                rcu_read_unlock();
 
                if (err)
-                       return count += w.count;
+                       return count + w.count;
        }
 
        return -1;
index 6de0118..d43c50a 100644
@@ -406,7 +406,6 @@ int __net_init seg6_hmac_net_init(struct net *net)
 
        return rhashtable_init(&sdata->hmac_infos, &rht_params);
 }
-EXPORT_SYMBOL(seg6_hmac_net_init);
 
 void seg6_hmac_exit(void)
 {
index fab89fd..6b73b7a 100644
@@ -323,8 +323,6 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ip_tunnel_prl __u
                kcalloc(cmax, sizeof(*kp), GFP_KERNEL_ACCOUNT | __GFP_NOWARN) :
                NULL;
 
-       rcu_read_lock();
-
        ca = min(t->prl_count, cmax);
 
        if (!kp) {
@@ -341,7 +339,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ip_tunnel_prl __u
                }
        }
 
-       c = 0;
+       rcu_read_lock();
        for_each_prl_rcu(t->prl) {
                if (c >= cmax)
                        break;
@@ -353,7 +351,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ip_tunnel_prl __u
                if (kprl.addr != htonl(INADDR_ANY))
                        break;
        }
-out:
+
        rcu_read_unlock();
 
        len = sizeof(*kp) * c;
@@ -362,7 +360,7 @@ out:
                ret = -EFAULT;
 
        kfree(kp);
-
+out:
        return ret;
 }
 
index 9d1aafe..4595b56 100644
@@ -184,7 +184,7 @@ static void l2tp_dfs_seq_session_show(struct seq_file *m, void *v)
                   session->pwtype == L2TP_PWTYPE_PPP ? "PPP" :
                   "");
        if (session->send_seq || session->recv_seq)
-               seq_printf(m, "   nr %hu, ns %hu\n", session->nr, session->ns);
+               seq_printf(m, "   nr %u, ns %u\n", session->nr, session->ns);
        seq_printf(m, "   refcnt %d\n", refcount_read(&session->ref_count));
        seq_printf(m, "   config 0/0/%c/%c/-/%s %08x %u\n",
                   session->recv_seq ? 'R' : '-',
@@ -192,7 +192,7 @@ static void l2tp_dfs_seq_session_show(struct seq_file *m, void *v)
                   session->lns_mode ? "LNS" : "LAC",
                   0,
                   jiffies_to_msecs(session->reorder_timeout));
-       seq_printf(m, "   offset 0 l2specific %hu/%hu\n",
+       seq_printf(m, "   offset 0 l2specific %hu/%d\n",
                   session->l2specific_type, l2tp_get_l2specific_len(session));
        if (session->cookie_len) {
                seq_printf(m, "   cookie %02x%02x%02x%02x",
@@ -215,7 +215,7 @@ static void l2tp_dfs_seq_session_show(struct seq_file *m, void *v)
                seq_puts(m, "\n");
        }
 
-       seq_printf(m, "   %hu/%hu tx %ld/%ld/%ld rx %ld/%ld/%ld\n",
+       seq_printf(m, "   %u/%u tx %ld/%ld/%ld rx %ld/%ld/%ld\n",
                   session->nr, session->ns,
                   atomic_long_read(&session->stats.tx_packets),
                   atomic_long_read(&session->stats.tx_bytes),
index 8be1fdc..db2e584 100644
@@ -1553,7 +1553,7 @@ static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
                   session->lns_mode ? "LNS" : "LAC",
                   0,
                   jiffies_to_msecs(session->reorder_timeout));
-       seq_printf(m, "   %hu/%hu %ld/%ld/%ld %ld/%ld/%ld\n",
+       seq_printf(m, "   %u/%u %ld/%ld/%ld %ld/%ld/%ld\n",
                   session->nr, session->ns,
                   atomic_long_read(&session->stats.tx_packets),
                   atomic_long_read(&session->stats.tx_bytes),
index be3b918..bd8f0f4 100644
@@ -765,6 +765,7 @@ static noinline bool mptcp_established_options_rst(struct sock *sk, struct sk_bu
        opts->suboptions |= OPTION_MPTCP_RST;
        opts->reset_transient = subflow->reset_transient;
        opts->reset_reason = subflow->reset_reason;
+       MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPRSTTX);
 
        return true;
 }
@@ -788,6 +789,7 @@ static bool mptcp_established_options_fastclose(struct sock *sk,
        opts->rcvr_key = msk->remote_key;
 
        pr_debug("FASTCLOSE key=%llu", opts->rcvr_key);
+       MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFASTCLOSETX);
        return true;
 }
 
@@ -809,6 +811,7 @@ static bool mptcp_established_options_mp_fail(struct sock *sk,
        opts->fail_seq = subflow->map_seq;
 
        pr_debug("MP_FAIL fail_seq=%llu", opts->fail_seq);
+       MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX);
 
        return true;
 }
@@ -833,13 +836,11 @@ bool mptcp_established_options(struct sock *sk, struct sk_buff *skb,
                    mptcp_established_options_mp_fail(sk, &opt_size, remaining, opts)) {
                        *size += opt_size;
                        remaining -= opt_size;
-                       MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFASTCLOSETX);
                }
                /* MP_RST can be used with MP_FASTCLOSE and MP_FAIL if there is room */
                if (mptcp_established_options_rst(sk, skb, &opt_size, remaining, opts)) {
                        *size += opt_size;
                        remaining -= opt_size;
-                       MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPRSTTX);
                }
                return true;
        }
@@ -966,7 +967,7 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
                        goto reset;
                subflow->mp_capable = 0;
                pr_fallback(msk);
-               __mptcp_do_fallback(msk);
+               mptcp_do_fallback(ssk);
                return false;
        }
 
@@ -1583,6 +1584,9 @@ mp_rst:
                *ptr++ = mptcp_option(MPTCPOPT_MP_PRIO,
                                      TCPOLEN_MPTCP_PRIO,
                                      opts->backup, TCPOPT_NOP);
+
+               MPTCP_INC_STATS(sock_net((const struct sock *)tp),
+                               MPTCP_MIB_MPPRIOTX);
        }
 
 mp_capable_done:
index 59a8522..45e2a48 100644
@@ -299,23 +299,21 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
 {
        struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
        struct mptcp_sock *msk = mptcp_sk(subflow->conn);
-       struct sock *s = (struct sock *)msk;
 
        pr_debug("fail_seq=%llu", fail_seq);
 
        if (!READ_ONCE(msk->allow_infinite_fallback))
                return;
 
-       if (!READ_ONCE(subflow->mp_fail_response_expect)) {
+       if (!subflow->fail_tout) {
                pr_debug("send MP_FAIL response and infinite map");
 
                subflow->send_mp_fail = 1;
-               MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX);
                subflow->send_infinite_map = 1;
-       } else if (!sock_flag(sk, SOCK_DEAD)) {
+               tcp_send_ack(sk);
+       } else {
                pr_debug("MP_FAIL response received");
-
-               sk_stop_timer(s, &s->sk_timer);
+               WRITE_ONCE(subflow->fail_tout, 0);
        }
 }
 
index e099f2a..5bdb559 100644
@@ -717,9 +717,10 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
        }
 }
 
-static int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
-                                       struct mptcp_addr_info *addr,
-                                       u8 bkup)
+int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+                                struct mptcp_addr_info *addr,
+                                struct mptcp_addr_info *rem,
+                                u8 bkup)
 {
        struct mptcp_subflow_context *subflow;
 
@@ -727,24 +728,29 @@ static int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
 
        mptcp_for_each_subflow(msk, subflow) {
                struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-               struct sock *sk = (struct sock *)msk;
-               struct mptcp_addr_info local;
+               struct mptcp_addr_info local, remote;
+               bool slow;
 
                local_address((struct sock_common *)ssk, &local);
                if (!mptcp_addresses_equal(&local, addr, addr->port))
                        continue;
 
+               if (rem && rem->family != AF_UNSPEC) {
+                       remote_address((struct sock_common *)ssk, &remote);
+                       if (!mptcp_addresses_equal(&remote, rem, rem->port))
+                               continue;
+               }
+
+               slow = lock_sock_fast(ssk);
                if (subflow->backup != bkup)
                        msk->last_snd = NULL;
                subflow->backup = bkup;
                subflow->send_mp_prio = 1;
                subflow->request_bkup = bkup;
-               __MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPPRIOTX);
 
-               spin_unlock_bh(&msk->pm.lock);
                pr_debug("send ack for mp_prio");
-               mptcp_subflow_send_ack(ssk);
-               spin_lock_bh(&msk->pm.lock);
+               __mptcp_subflow_send_ack(ssk);
+               unlock_sock_fast(ssk, slow);
 
                return 0;
        }
@@ -801,7 +807,8 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
                        removed = true;
                        __MPTCP_INC_STATS(sock_net(sk), rm_type);
                }
-               __set_bit(rm_list->ids[i], msk->pm.id_avail_bitmap);
+               if (rm_type == MPTCP_MIB_RMSUBFLOW)
+                       __set_bit(rm_list->ids[i], msk->pm.id_avail_bitmap);
                if (!removed)
                        continue;
 
@@ -1127,7 +1134,7 @@ void mptcp_pm_nl_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ss
                        }
                        unlock_sock_fast(ssk, slow);
 
-                       /* always try to push the pending data regarless of re-injections:
+                       /* always try to push the pending data regardless of re-injections:
                         * we can possibly use backup subflows now, and subflow selection
                         * is cheap under the msk socket lock
                         */
@@ -1816,8 +1823,10 @@ static void mptcp_pm_nl_fullmesh(struct mptcp_sock *msk,
 
        list.ids[list.nr++] = addr->id;
 
+       spin_lock_bh(&msk->pm.lock);
        mptcp_pm_nl_rm_subflow_received(msk, &list);
        mptcp_pm_create_subflow_or_signal_addr(msk);
+       spin_unlock_bh(&msk->pm.lock);
 }
 
 static int mptcp_nl_set_flags(struct net *net,
@@ -1835,12 +1844,10 @@ static int mptcp_nl_set_flags(struct net *net,
                        goto next;
 
                lock_sock(sk);
-               spin_lock_bh(&msk->pm.lock);
                if (changed & MPTCP_PM_ADDR_FLAG_BACKUP)
-                       ret = mptcp_pm_nl_mp_prio_send_ack(msk, addr, bkup);
+                       ret = mptcp_pm_nl_mp_prio_send_ack(msk, addr, NULL, bkup);
                if (changed & MPTCP_PM_ADDR_FLAG_FULLMESH)
                        mptcp_pm_nl_fullmesh(msk, addr);
-               spin_unlock_bh(&msk->pm.lock);
                release_sock(sk);
 
 next:
@@ -1854,6 +1861,9 @@ next:
 static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info)
 {
        struct mptcp_pm_addr_entry addr = { .addr = { .family = AF_UNSPEC }, }, *entry;
+       struct mptcp_pm_addr_entry remote = { .addr = { .family = AF_UNSPEC }, };
+       struct nlattr *attr_rem = info->attrs[MPTCP_PM_ATTR_ADDR_REMOTE];
+       struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
        struct nlattr *attr = info->attrs[MPTCP_PM_ATTR_ADDR];
        struct pm_nl_pernet *pernet = genl_info_pm_nl(info);
        u8 changed, mask = MPTCP_PM_ADDR_FLAG_BACKUP |
@@ -1866,6 +1876,12 @@ static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info)
        if (ret < 0)
                return ret;
 
+       if (attr_rem) {
+               ret = mptcp_pm_parse_entry(attr_rem, info, false, &remote);
+               if (ret < 0)
+                       return ret;
+       }
+
        if (addr.flags & MPTCP_PM_ADDR_FLAG_BACKUP)
                bkup = 1;
        if (addr.addr.family == AF_UNSPEC) {
@@ -1874,6 +1890,10 @@ static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info)
                        return -EOPNOTSUPP;
        }
 
+       if (token)
+               return mptcp_userspace_pm_set_flags(sock_net(skb->sk),
+                                                   token, &addr, &remote, bkup);
+
        spin_lock_bh(&pernet->lock);
        entry = __lookup_addr(pernet, &addr.addr, lookup_by_id);
        if (!entry) {
index f56378e..9e82250 100644
@@ -5,6 +5,7 @@
  */
 
 #include "protocol.h"
+#include "mib.h"
 
 void mptcp_free_local_addr_list(struct mptcp_sock *msk)
 {
@@ -306,15 +307,11 @@ static struct sock *mptcp_nl_find_ssk(struct mptcp_sock *msk,
                                      const struct mptcp_addr_info *local,
                                      const struct mptcp_addr_info *remote)
 {
-       struct sock *sk = &msk->sk.icsk_inet.sk;
        struct mptcp_subflow_context *subflow;
-       struct sock *found = NULL;
 
        if (local->family != remote->family)
                return NULL;
 
-       lock_sock(sk);
-
        mptcp_for_each_subflow(msk, subflow) {
                const struct inet_sock *issk;
                struct sock *ssk;
@@ -347,16 +344,11 @@ static struct sock *mptcp_nl_find_ssk(struct mptcp_sock *msk,
                }
 
                if (issk->inet_sport == local->port &&
-                   issk->inet_dport == remote->port) {
-                       found = ssk;
-                       goto found;
-               }
+                   issk->inet_dport == remote->port)
+                       return ssk;
        }
 
-found:
-       release_sock(sk);
-
-       return found;
+       return NULL;
 }
 
 int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
@@ -412,18 +404,51 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
        }
 
        sk = &msk->sk.icsk_inet.sk;
+       lock_sock(sk);
        ssk = mptcp_nl_find_ssk(msk, &addr_l, &addr_r);
        if (ssk) {
                struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 
                mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN);
                mptcp_close_ssk(sk, ssk, subflow);
+               MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW);
                err = 0;
        } else {
                err = -ESRCH;
        }
+       release_sock(sk);
 
- destroy_err:
+destroy_err:
        sock_put((struct sock *)msk);
        return err;
 }
+
+int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token,
+                                struct mptcp_pm_addr_entry *loc,
+                                struct mptcp_pm_addr_entry *rem, u8 bkup)
+{
+       struct mptcp_sock *msk;
+       int ret = -EINVAL;
+       u32 token_val;
+
+       token_val = nla_get_u32(token);
+
+       msk = mptcp_token_get_sock(net, token_val);
+       if (!msk)
+               return ret;
+
+       if (!mptcp_pm_is_userspace(msk))
+               goto set_flags_err;
+
+       if (loc->addr.family == AF_UNSPEC ||
+           rem->addr.family == AF_UNSPEC)
+               goto set_flags_err;
+
+       lock_sock((struct sock *)msk);
+       ret = mptcp_pm_nl_mp_prio_send_ack(msk, &loc->addr, &rem->addr, bkup);
+       release_sock((struct sock *)msk);
+
+set_flags_err:
+       sock_put((struct sock *)msk);
+       return ret;
+}
index e0fb9f9..2caad4a 100644
@@ -181,8 +181,8 @@ static void mptcp_rmem_uncharge(struct sock *sk, int size)
        reclaimable = msk->rmem_fwd_alloc - sk_unused_reserved_mem(sk);
 
        /* see sk_mem_uncharge() for the rationale behind the following schema */
-       if (unlikely(reclaimable >= SK_RECLAIM_THRESHOLD))
-               __mptcp_rmem_reclaim(sk, SK_RECLAIM_CHUNK);
+       if (unlikely(reclaimable >= PAGE_SIZE))
+               __mptcp_rmem_reclaim(sk, reclaimable);
 }
 
 static void mptcp_rfree(struct sk_buff *skb)
@@ -323,20 +323,16 @@ static bool mptcp_rmem_schedule(struct sock *sk, struct sock *ssk, int size)
        struct mptcp_sock *msk = mptcp_sk(sk);
        int amt, amount;
 
-       if (size < msk->rmem_fwd_alloc)
+       if (size <= msk->rmem_fwd_alloc)
                return true;
 
+       size -= msk->rmem_fwd_alloc;
        amt = sk_mem_pages(size);
        amount = amt << PAGE_SHIFT;
-       msk->rmem_fwd_alloc += amount;
-       if (!__sk_mem_raise_allocated(sk, size, amt, SK_MEM_RECV)) {
-               if (ssk->sk_forward_alloc < amount) {
-                       msk->rmem_fwd_alloc -= amount;
-                       return false;
-               }
+       if (!__sk_mem_raise_allocated(sk, size, amt, SK_MEM_RECV))
+               return false;
 
-               ssk->sk_forward_alloc -= amount;
-       }
+       msk->rmem_fwd_alloc += amount;
        return true;
 }
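
The rewritten mptcp_rmem_schedule() above charges only the shortfall: when the cached msk->rmem_fwd_alloc already covers the request it returns immediately, otherwise the remainder is rounded up to whole pages and added to the cache once the accounting layer accepts it. A sketch of just that arithmetic, where PAGE_SHIFT and the always-succeeding accounting step are assumptions of the example:

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1L << PAGE_SHIFT)

    static bool rmem_schedule(long *fwd_alloc, long size)
    {
            if (size <= *fwd_alloc)
                    return true;

            size -= *fwd_alloc;
            /* modeled on sk_mem_pages(): round the shortfall up to pages */
            long pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;

            /* a real implementation would call __sk_mem_raise_allocated()
             * here and bail out on failure */
            *fwd_alloc += pages << PAGE_SHIFT;
            return true;
    }

    int main(void)
    {
            long fwd = 1000;
            rmem_schedule(&fwd, 5000); /* shortfall 4000 -> one 4096-byte page */
            printf("fwd_alloc=%ld\n", fwd); /* prints 5096 */
            return 0;
    }
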
 
@@ -500,19 +496,24 @@ static void mptcp_set_timeout(struct sock *sk)
        __mptcp_set_timeout(sk, tout);
 }
 
-static bool tcp_can_send_ack(const struct sock *ssk)
+static inline bool tcp_can_send_ack(const struct sock *ssk)
 {
        return !((1 << inet_sk_state_load(ssk)) &
               (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN));
 }
 
+void __mptcp_subflow_send_ack(struct sock *ssk)
+{
+       if (tcp_can_send_ack(ssk))
+               tcp_send_ack(ssk);
+}
+
 void mptcp_subflow_send_ack(struct sock *ssk)
 {
        bool slow;
 
        slow = lock_sock_fast(ssk);
-       if (tcp_can_send_ack(ssk))
-               tcp_send_ack(ssk);
+       __mptcp_subflow_send_ack(ssk);
        unlock_sock_fast(ssk, slow);
 }
 
@@ -966,25 +967,6 @@ static bool mptcp_frag_can_collapse_to(const struct mptcp_sock *msk,
                df->data_seq + df->data_len == msk->write_seq;
 }
 
-static void __mptcp_mem_reclaim_partial(struct sock *sk)
-{
-       int reclaimable = mptcp_sk(sk)->rmem_fwd_alloc - sk_unused_reserved_mem(sk);
-
-       lockdep_assert_held_once(&sk->sk_lock.slock);
-
-       if (reclaimable > (int)PAGE_SIZE)
-               __mptcp_rmem_reclaim(sk, reclaimable - 1);
-
-       sk_mem_reclaim(sk);
-}
-
-static void mptcp_mem_reclaim_partial(struct sock *sk)
-{
-       mptcp_data_lock(sk);
-       __mptcp_mem_reclaim_partial(sk);
-       mptcp_data_unlock(sk);
-}
-
 static void dfrag_uncharge(struct sock *sk, int len)
 {
        sk_mem_uncharge(sk, len);
@@ -1004,7 +986,6 @@ static void __mptcp_clean_una(struct sock *sk)
 {
        struct mptcp_sock *msk = mptcp_sk(sk);
        struct mptcp_data_frag *dtmp, *dfrag;
-       bool cleaned = false;
        u64 snd_una;
 
        /* on fallback we just need to ignore snd_una, as this is really
@@ -1027,7 +1008,6 @@ static void __mptcp_clean_una(struct sock *sk)
                }
 
                dfrag_clear(sk, dfrag);
-               cleaned = true;
        }
 
        dfrag = mptcp_rtx_head(sk);
@@ -1049,7 +1029,6 @@ static void __mptcp_clean_una(struct sock *sk)
                dfrag->already_sent -= delta;
 
                dfrag_uncharge(sk, delta);
-               cleaned = true;
        }
 
        /* all retransmitted data acked, recovery completed */
@@ -1057,9 +1036,6 @@ static void __mptcp_clean_una(struct sock *sk)
                msk->recovery = false;
 
 out:
-       if (cleaned && tcp_under_memory_pressure(sk))
-               __mptcp_mem_reclaim_partial(sk);
-
        if (snd_una == READ_ONCE(msk->snd_nxt) &&
            snd_una == READ_ONCE(msk->write_seq)) {
                if (mptcp_timer_pending(sk) && !mptcp_data_fin_enabled(msk))
@@ -1211,12 +1187,6 @@ static struct sk_buff *mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, boo
 {
        gfp_t gfp = data_lock_held ? GFP_ATOMIC : sk->sk_allocation;
 
-       if (unlikely(tcp_under_memory_pressure(sk))) {
-               if (data_lock_held)
-                       __mptcp_mem_reclaim_partial(sk);
-               else
-                       mptcp_mem_reclaim_partial(sk);
-       }
        return __mptcp_alloc_tx_skb(sk, ssk, gfp);
 }
 
@@ -1245,7 +1215,7 @@ static void mptcp_update_infinite_map(struct mptcp_sock *msk,
        MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPTX);
        mptcp_subflow_ctx(ssk)->send_infinite_map = 0;
        pr_fallback(msk);
-       __mptcp_do_fallback(msk);
+       mptcp_do_fallback(ssk);
 }
 
 static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
@@ -2175,21 +2145,6 @@ static void mptcp_retransmit_timer(struct timer_list *t)
        sock_put(sk);
 }
 
-static struct mptcp_subflow_context *
-mp_fail_response_expect_subflow(struct mptcp_sock *msk)
-{
-       struct mptcp_subflow_context *subflow, *ret = NULL;
-
-       mptcp_for_each_subflow(msk, subflow) {
-               if (READ_ONCE(subflow->mp_fail_response_expect)) {
-                       ret = subflow;
-                       break;
-               }
-       }
-
-       return ret;
-}
-
 static void mptcp_timeout_timer(struct timer_list *t)
 {
        struct sock *sk = from_timer(sk, t, sk_timer);
@@ -2346,6 +2301,11 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
                kfree_rcu(subflow, rcu);
        } else {
                /* otherwise tcp will dispose of the ssk and subflow ctx */
+               if (ssk->sk_state == TCP_LISTEN) {
+                       tcp_set_state(ssk, TCP_CLOSE);
+                       mptcp_subflow_queue_clean(ssk);
+                       inet_csk_listen_stop(ssk);
+               }
                __tcp_close(ssk, 0);
 
                /* close acquired an extra ref */
@@ -2518,27 +2478,50 @@ reset_timer:
                mptcp_reset_timer(sk);
 }
 
+/* schedule the timeout timer for the relevant event: either close timeout
+ * or mp_fail timeout. The close timeout takes precedence over the mp_fail one
+ */
+void mptcp_reset_timeout(struct mptcp_sock *msk, unsigned long fail_tout)
+{
+       struct sock *sk = (struct sock *)msk;
+       unsigned long timeout, close_timeout;
+
+       if (!fail_tout && !sock_flag(sk, SOCK_DEAD))
+               return;
+
+       close_timeout = inet_csk(sk)->icsk_mtup.probe_timestamp - tcp_jiffies32 + jiffies + TCP_TIMEWAIT_LEN;
+
+       /* the close timeout takes precedence over the fail one, and here at least one of
+        * them is active
+        */
+       timeout = sock_flag(sk, SOCK_DEAD) ? close_timeout : fail_tout;
+
+       sk_reset_timer(sk, &sk->sk_timer, timeout);
+}
+
 static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
 {
-       struct mptcp_subflow_context *subflow;
-       struct sock *ssk;
+       struct sock *ssk = msk->first;
        bool slow;
 
-       subflow = mp_fail_response_expect_subflow(msk);
-       if (subflow) {
-               pr_debug("MP_FAIL doesn't respond, reset the subflow");
+       if (!ssk)
+               return;
+
+       pr_debug("MP_FAIL doesn't respond, reset the subflow");
 
-               ssk = mptcp_subflow_tcp_sock(subflow);
-               slow = lock_sock_fast(ssk);
-               mptcp_subflow_reset(ssk);
-               unlock_sock_fast(ssk, slow);
-       }
+       slow = lock_sock_fast(ssk);
+       mptcp_subflow_reset(ssk);
+       WRITE_ONCE(mptcp_subflow_ctx(ssk)->fail_tout, 0);
+       unlock_sock_fast(ssk, slow);
+
+       mptcp_reset_timeout(msk, 0);
 }
 
 static void mptcp_worker(struct work_struct *work)
 {
        struct mptcp_sock *msk = container_of(work, struct mptcp_sock, work);
        struct sock *sk = &msk->sk.icsk_inet.sk;
+       unsigned long fail_tout;
        int state;
 
        lock_sock(sk);
@@ -2575,7 +2558,9 @@ static void mptcp_worker(struct work_struct *work)
        if (test_and_clear_bit(MPTCP_WORK_RTX, &msk->flags))
                __mptcp_retrans(sk);
 
-       mptcp_mp_fail_no_response(msk);
+       fail_tout = msk->first ? READ_ONCE(mptcp_subflow_ctx(msk->first)->fail_tout) : 0;
+       if (fail_tout && time_after(jiffies, fail_tout))
+               mptcp_mp_fail_no_response(msk);
 
 unlock:
        release_sock(sk);
@@ -2822,6 +2807,7 @@ static void __mptcp_destroy_sock(struct sock *sk)
 static void mptcp_close(struct sock *sk, long timeout)
 {
        struct mptcp_subflow_context *subflow;
+       struct mptcp_sock *msk = mptcp_sk(sk);
        bool do_cancel_work = false;
 
        lock_sock(sk);
@@ -2840,10 +2826,16 @@ static void mptcp_close(struct sock *sk, long timeout)
 cleanup:
        /* orphan all the subflows */
        inet_csk(sk)->icsk_mtup.probe_timestamp = tcp_jiffies32;
-       mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
+       mptcp_for_each_subflow(msk, subflow) {
                struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
                bool slow = lock_sock_fast_nested(ssk);
 
+               /* since the close timeout takes precedence over the fail one,
+                * cancel the latter
+                */
+               if (ssk == msk->first)
+                       subflow->fail_tout = 0;
+
                sock_orphan(ssk);
                unlock_sock_fast(ssk, slow);
        }
@@ -2852,13 +2844,13 @@ cleanup:
        sock_hold(sk);
        pr_debug("msk=%p state=%d", sk, sk->sk_state);
        if (mptcp_sk(sk)->token)
-               mptcp_event(MPTCP_EVENT_CLOSED, mptcp_sk(sk), NULL, GFP_KERNEL);
+               mptcp_event(MPTCP_EVENT_CLOSED, msk, NULL, GFP_KERNEL);
 
        if (sk->sk_state == TCP_CLOSE) {
                __mptcp_destroy_sock(sk);
                do_cancel_work = true;
        } else {
-               sk_reset_timer(sk, &sk->sk_timer, jiffies + TCP_TIMEWAIT_LEN);
+               mptcp_reset_timeout(msk, 0);
        }
        release_sock(sk);
        if (do_cancel_work)
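
mptcp_reset_timeout() in the hunks above encodes a small precedence rule: a dead socket arms the close timeout, otherwise the MP_FAIL timeout is armed, with fail_tout == 0 reserved to mean "not armed"; if neither applies, no timer is scheduled. The rule in isolation, as a compilable sketch using plain numbers instead of jiffies:

    #include <stdbool.h>
    #include <stdio.h>

    /* returns the deadline to arm, or -1 when no timer is needed */
    static long pick_timeout(bool sock_dead, long close_timeout, long fail_tout)
    {
            if (!fail_tout && !sock_dead)
                    return -1;
            /* the close timeout takes precedence over the fail one */
            return sock_dead ? close_timeout : fail_tout;
    }

    int main(void)
    {
            printf("%ld\n", pick_timeout(false, 0, 300));   /* 300: fail timer */
            printf("%ld\n", pick_timeout(true, 6000, 300)); /* 6000: close wins */
            printf("%ld\n", pick_timeout(false, 0, 0));     /* -1: nothing */
            return 0;
    }
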
index 200f89f..07871e1 100644
@@ -83,7 +83,6 @@
 
 /* MPTCP MP_JOIN flags */
 #define MPTCPOPT_BACKUP                BIT(0)
-#define MPTCPOPT_HMAC_LEN      20
 #define MPTCPOPT_THMAC_LEN     8
 
 /* MPTCP MP_CAPABLE flags */
@@ -306,6 +305,7 @@ struct mptcp_sock {
 
        u32 setsockopt_seq;
        char            ca_name[TCP_CA_NAME_MAX];
+       struct mptcp_sock       *dl_next;
 };
 
 #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
@@ -468,7 +468,6 @@ struct mptcp_subflow_context {
                local_id_valid : 1, /* local_id is correctly initialized */
                valid_csum_seen : 1;        /* at least one csum validated */
        enum mptcp_data_avail data_avail;
-       bool    mp_fail_response_expect;
        u32     remote_nonce;
        u64     thmac;
        u32     local_nonce;
@@ -482,6 +481,7 @@ struct mptcp_subflow_context {
        u8      stale_count;
 
        long    delegated_status;
+       unsigned long   fail_tout;
 
        );
 
@@ -606,8 +606,10 @@ void __init mptcp_subflow_init(void);
 void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how);
 void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
                     struct mptcp_subflow_context *subflow);
+void __mptcp_subflow_send_ack(struct sock *ssk);
 void mptcp_subflow_send_ack(struct sock *ssk);
 void mptcp_subflow_reset(struct sock *ssk);
+void mptcp_subflow_queue_clean(struct sock *ssk);
 void mptcp_sock_graft(struct sock *sk, struct socket *parent);
 struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk);
 
@@ -662,6 +664,7 @@ void mptcp_get_options(const struct sk_buff *skb,
 
 void mptcp_finish_connect(struct sock *sk);
 void __mptcp_set_connected(struct sock *sk);
+void mptcp_reset_timeout(struct mptcp_sock *msk, unsigned long fail_tout);
 static inline bool mptcp_is_fully_established(struct sock *sk)
 {
        return inet_sk_state_load(sk) == TCP_ESTABLISHED &&
@@ -768,6 +771,10 @@ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
                               const struct mptcp_rm_list *rm_list);
 void mptcp_pm_mp_prio_received(struct sock *sk, u8 bkup);
 void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq);
+int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+                                struct mptcp_addr_info *addr,
+                                struct mptcp_addr_info *rem,
+                                u8 bkup);
 bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
                              const struct mptcp_pm_addr_entry *entry);
 void mptcp_pm_free_anno_list(struct mptcp_sock *msk);
@@ -784,7 +791,9 @@ int mptcp_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk,
 int mptcp_userspace_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk,
                                                   unsigned int id,
                                                   u8 *flags, int *ifindex);
-
+int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token,
+                                struct mptcp_pm_addr_entry *loc,
+                                struct mptcp_pm_addr_entry *rem, u8 bkup);
 int mptcp_pm_announce_addr(struct mptcp_sock *msk,
                           const struct mptcp_addr_info *addr,
                           bool echo);
@@ -926,12 +935,25 @@ static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
        set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
 }
 
-static inline void mptcp_do_fallback(struct sock *sk)
+static inline void mptcp_do_fallback(struct sock *ssk)
 {
-       struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
-       struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+       struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+       struct sock *sk = subflow->conn;
+       struct mptcp_sock *msk;
 
+       msk = mptcp_sk(sk);
        __mptcp_do_fallback(msk);
+       if (READ_ONCE(msk->snd_data_fin_enable) && !(ssk->sk_shutdown & SEND_SHUTDOWN)) {
+               gfp_t saved_allocation = ssk->sk_allocation;
+
+               /* we are in an atomic (BH) scope, override ssk default for data
+                * fin allocation
+                */
+               ssk->sk_allocation = GFP_ATOMIC;
+               ssk->sk_shutdown |= SEND_SHUTDOWN;
+               tcp_shutdown(ssk, SEND_SHUTDOWN);
+               ssk->sk_allocation = saved_allocation;
+       }
 }
 
 #define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)", __func__, a)
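
The hunk above forces GFP_ATOMIC around tcp_shutdown() because the fallback path can run in BH context, where sleeping allocations are not allowed. A minimal userspace sketch of the save/override/restore idiom; the types and names below are illustrative stand-ins for the kernel's gfp_t and struct sock, not its real machinery:

    #include <stdio.h>

    enum gfp { GFP_KERNEL, GFP_ATOMIC };

    struct sock_stub { enum gfp sk_allocation; };

    static void tcp_shutdown_stub(struct sock_stub *sk)
    {
        /* any allocation in here must honour sk->sk_allocation */
        printf("shutdown with %s\n",
               sk->sk_allocation == GFP_ATOMIC ? "GFP_ATOMIC" : "GFP_KERNEL");
    }

    int main(void)
    {
        struct sock_stub ssk = { .sk_allocation = GFP_KERNEL };
        enum gfp saved = ssk.sk_allocation;

        ssk.sk_allocation = GFP_ATOMIC; /* cannot sleep in BH context */
        tcp_shutdown_stub(&ssk);
        ssk.sk_allocation = saved;      /* restore the process-context default */
        return 0;
    }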
index 8841e8c..d4b16d0 100644 (file)
@@ -843,7 +843,8 @@ enum mapping_status {
        MAPPING_INVALID,
        MAPPING_EMPTY,
        MAPPING_DATA_FIN,
-       MAPPING_DUMMY
+       MAPPING_DUMMY,
+       MAPPING_BAD_CSUM
 };
 
 static void dbg_bad_map(struct mptcp_subflow_context *subflow, u32 ssn)
@@ -958,11 +959,7 @@ static enum mapping_status validate_data_csum(struct sock *ssk, struct sk_buff *
                                 subflow->map_data_csum);
        if (unlikely(csum)) {
                MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DATACSUMERR);
-               if (subflow->mp_join || subflow->valid_csum_seen) {
-                       subflow->send_mp_fail = 1;
-                       MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPFAILTX);
-               }
-               return subflow->mp_join ? MAPPING_INVALID : MAPPING_DUMMY;
+               return MAPPING_BAD_CSUM;
        }
 
        subflow->valid_csum_seen = 1;
@@ -974,7 +971,6 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
 {
        struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
        bool csum_reqd = READ_ONCE(msk->csum_enabled);
-       struct sock *sk = (struct sock *)msk;
        struct mptcp_ext *mpext;
        struct sk_buff *skb;
        u16 data_len;
@@ -1016,9 +1012,6 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
                pr_debug("infinite mapping received");
                MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
                subflow->map_data_len = 0;
-               if (!sock_flag(ssk, SOCK_DEAD))
-                       sk_stop_timer(sk, &sk->sk_timer);
-
                return MAPPING_INVALID;
        }
 
@@ -1165,6 +1158,33 @@ static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
                return !subflow->fully_established;
 }
 
+static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+{
+       struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+       unsigned long fail_tout;
+
+       /* graceful failure can happen only on the MPC subflow */
+       if (WARN_ON_ONCE(ssk != READ_ONCE(msk->first)))
+               return;
+
+       /* since the close timeout takes precedence over the fail one,
+        * there is no need to start the latter when the former is
+        * already set
+        */
+       if (sock_flag((struct sock *)msk, SOCK_DEAD))
+               return;
+
+       /* we don't need extreme accuracy here; a zero fail_tout is used
+        * as a special value meaning no fail timeout at all
+        */
+       fail_tout = jiffies + TCP_RTO_MAX;
+       if (!fail_tout)
+               fail_tout = 1;
+       WRITE_ONCE(subflow->fail_tout, fail_tout);
+       tcp_send_ack(ssk);
+
+       mptcp_reset_timeout(msk, subflow->fail_tout);
+}
+
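
The new mptcp_subflow_fail() reserves 0 as the "no fail timeout" value, so a deadline that happens to wrap to exactly 0 is nudged to 1. A small standalone sketch of that sentinel-avoidance pattern; compute_fail_tout() is illustrative, not a kernel function:

    #include <stdio.h>

    static unsigned long compute_fail_tout(unsigned long now, unsigned long rto)
    {
        unsigned long fail_tout = now + rto;

        if (!fail_tout)                 /* wrapped exactly onto the sentinel */
            fail_tout = 1;
        return fail_tout;
    }

    int main(void)
    {
        printf("%lu\n", compute_fail_tout(-60UL, 60)); /* wraps to 0 -> 1 */
        printf("%lu\n", compute_fail_tout(1000, 60));  /* 1060 */
        return 0;
    }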
 static bool subflow_check_data_avail(struct sock *ssk)
 {
        struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
@@ -1184,10 +1204,8 @@ static bool subflow_check_data_avail(struct sock *ssk)
 
                status = get_mapping_status(ssk, msk);
                trace_subflow_check_data_avail(status, skb_peek(&ssk->sk_receive_queue));
-               if (unlikely(status == MAPPING_INVALID))
-                       goto fallback;
-
-               if (unlikely(status == MAPPING_DUMMY))
+               if (unlikely(status == MAPPING_INVALID || status == MAPPING_DUMMY ||
+                            status == MAPPING_BAD_CSUM))
                        goto fallback;
 
                if (status != MAPPING_OK)
@@ -1229,22 +1247,17 @@ no_data:
 fallback:
        if (!__mptcp_check_fallback(msk)) {
                /* RFC 8684 section 3.7. */
-               if (subflow->send_mp_fail) {
+               if (status == MAPPING_BAD_CSUM &&
+                   (subflow->mp_join || subflow->valid_csum_seen)) {
+                       subflow->send_mp_fail = 1;
+
                        if (!READ_ONCE(msk->allow_infinite_fallback)) {
-                               ssk->sk_err = EBADMSG;
-                               tcp_set_state(ssk, TCP_CLOSE);
                                subflow->reset_transient = 0;
                                subflow->reset_reason = MPTCP_RST_EMIDDLEBOX;
-                               tcp_send_active_reset(ssk, GFP_ATOMIC);
-                               while ((skb = skb_peek(&ssk->sk_receive_queue)))
-                                       sk_eat_skb(ssk, skb);
-                       } else if (!sock_flag(ssk, SOCK_DEAD)) {
-                               WRITE_ONCE(subflow->mp_fail_response_expect, true);
-                               sk_reset_timer((struct sock *)msk,
-                                              &((struct sock *)msk)->sk_timer,
-                                              jiffies + TCP_RTO_MAX);
+                               goto reset;
                        }
-                       WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
+                       mptcp_subflow_fail(msk, ssk);
+                       WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_DATA_AVAIL);
                        return true;
                }
 
@@ -1252,16 +1265,20 @@ fallback:
                        /* fatal protocol error, close the socket.
                         * subflow_error_report() will introduce the appropriate barriers
                         */
-                       ssk->sk_err = EBADMSG;
-                       tcp_set_state(ssk, TCP_CLOSE);
                        subflow->reset_transient = 0;
                        subflow->reset_reason = MPTCP_RST_EMPTCP;
+
+reset:
+                       ssk->sk_err = EBADMSG;
+                       tcp_set_state(ssk, TCP_CLOSE);
+                       while ((skb = skb_peek(&ssk->sk_receive_queue)))
+                               sk_eat_skb(ssk, skb);
                        tcp_send_active_reset(ssk, GFP_ATOMIC);
                        WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
                        return false;
                }
 
-               __mptcp_do_fallback(msk);
+               mptcp_do_fallback(ssk);
        }
 
        skb = skb_peek(&ssk->sk_receive_queue);
@@ -1617,7 +1634,7 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock)
        /* the newly created socket really belongs to the owning MPTCP master
         * socket, even if for additional subflows the allocation is performed
         * by a kernel workqueue. Adjust inode references, so that the
-        * procfs/diag interaces really show this one belonging to the correct
+        * procfs/diag interfaces really show this one belonging to the correct
         * user.
         */
        SOCK_INODE(sf)->i_ino = SOCK_INODE(sk->sk_socket)->i_ino;
@@ -1706,6 +1723,58 @@ static void subflow_state_change(struct sock *sk)
        }
 }
 
+void mptcp_subflow_queue_clean(struct sock *listener_ssk)
+{
+       struct request_sock_queue *queue = &inet_csk(listener_ssk)->icsk_accept_queue;
+       struct mptcp_sock *msk, *next, *head = NULL;
+       struct request_sock *req;
+
+       /* build a list of all unaccepted mptcp sockets */
+       spin_lock_bh(&queue->rskq_lock);
+       for (req = queue->rskq_accept_head; req; req = req->dl_next) {
+               struct mptcp_subflow_context *subflow;
+               struct sock *ssk = req->sk;
+               struct mptcp_sock *msk;
+
+               if (!sk_is_mptcp(ssk))
+                       continue;
+
+               subflow = mptcp_subflow_ctx(ssk);
+               if (!subflow || !subflow->conn)
+                       continue;
+
+               /* skip if already in list */
+               msk = mptcp_sk(subflow->conn);
+               if (msk->dl_next || msk == head)
+                       continue;
+
+               msk->dl_next = head;
+               head = msk;
+       }
+       spin_unlock_bh(&queue->rskq_lock);
+       if (!head)
+               return;
+
+       /* can't acquire the msk socket lock under the subflow one,
+        * or it will cause an ABBA deadlock
+        */
+       release_sock(listener_ssk);
+
+       for (msk = head; msk; msk = next) {
+               struct sock *sk = (struct sock *)msk;
+               bool slow;
+
+               slow = lock_sock_fast_nested(sk);
+               next = msk->dl_next;
+               msk->first = NULL;
+               msk->dl_next = NULL;
+               unlock_sock_fast(sk, slow);
+       }
+
+       /* we are still under the listener msk socket lock */
+       lock_sock_nested(listener_ssk, SINGLE_DEPTH_NESTING);
+}
+
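
mptcp_subflow_queue_clean() above collects the unaccepted msks into a private list while holding only the listener lock, drops that lock, and only then takes each msk lock; nesting the msk lock under the listener lock would invert the order used elsewhere and risk the ABBA deadlock the comment mentions. A compressed userspace sketch of this two-phase pattern, with pthread mutexes standing in for socket locks (build with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    #define NSOCKS 3

    static pthread_mutex_t listener_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t sock_lock[NSOCKS];

    int main(void)
    {
        int collected[NSOCKS], n = 0, i;

        for (i = 0; i < NSOCKS; i++)
            pthread_mutex_init(&sock_lock[i], NULL);

        /* phase 1: snapshot the queue under the listener lock only */
        pthread_mutex_lock(&listener_lock);
        for (i = 0; i < NSOCKS; i++)
            collected[n++] = i;
        pthread_mutex_unlock(&listener_lock);

        /* phase 2: per-socket work, listener lock no longer held */
        for (i = 0; i < n; i++) {
            pthread_mutex_lock(&sock_lock[collected[i]]);
            printf("cleaning socket %d\n", collected[i]);
            pthread_mutex_unlock(&sock_lock[collected[i]]);
        }
        return 0;
    }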
 static int subflow_ulp_init(struct sock *sk)
 {
        struct inet_connection_sock *icsk = inet_csk(sk);
index 7881441..80713fe 100644 (file)
@@ -1803,7 +1803,8 @@ struct ncsi_dev *ncsi_register_dev(struct net_device *dev,
        pdev = to_platform_device(dev->dev.parent);
        if (pdev) {
                np = pdev->dev.of_node;
-               if (np && of_get_property(np, "mlx,multi-host", NULL))
+               if (np && (of_get_property(np, "mellanox,multi-host", NULL) ||
+                          of_get_property(np, "mlx,multi-host", NULL)))
                        ndp->mlx_multi_host = true;
        }
 
index 7873bd1..a8e2425 100644 (file)
 #include <net/netfilter/nf_tables_offload.h>
 #include <net/netfilter/nf_dup_netdev.h>
 
-static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev)
+#define NF_RECURSION_LIMIT     2
+
+static DEFINE_PER_CPU(u8, nf_dup_skb_recursion);
+
+static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev,
+                               enum nf_dev_hooks hook)
 {
-       if (skb_mac_header_was_set(skb))
+       if (__this_cpu_read(nf_dup_skb_recursion) > NF_RECURSION_LIMIT)
+               goto err;
+
+       if (hook == NF_NETDEV_INGRESS && skb_mac_header_was_set(skb)) {
+               if (skb_cow_head(skb, skb->mac_len))
+                       goto err;
+
                skb_push(skb, skb->mac_len);
+       }
 
        skb->dev = dev;
        skb_clear_tstamp(skb);
+       __this_cpu_inc(nf_dup_skb_recursion);
        dev_queue_xmit(skb);
+       __this_cpu_dec(nf_dup_skb_recursion);
+       return;
+err:
+       kfree_skb(skb);
 }
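
The per-CPU nf_dup_skb_recursion counter above bounds how deeply dev_queue_xmit() may re-enter the duplication hook through further dup/fwd rules. A userspace sketch of the guard, using a plain static in place of the per-CPU variable:

    #include <stdio.h>

    #define RECURSION_LIMIT 2

    static int recursion;               /* stand-in for the per-CPU counter */

    static void dup_egress(int depth)
    {
        if (recursion > RECURSION_LIMIT) {
            printf("depth %d: dropped\n", depth);   /* kfree_skb() in the kernel */
            return;
        }
        recursion++;
        printf("depth %d: transmitted\n", depth);
        dup_egress(depth + 1);          /* dev_queue_xmit() re-entering the hook */
        recursion--;
    }

    int main(void)
    {
        dup_egress(0);                  /* transmits depths 0..2, drops depth 3 */
        return 0;
    }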
 
 void nf_fwd_netdev_egress(const struct nft_pktinfo *pkt, int oif)
@@ -33,7 +50,7 @@ void nf_fwd_netdev_egress(const struct nft_pktinfo *pkt, int oif)
                return;
        }
 
-       nf_do_netdev_egress(pkt->skb, dev);
+       nf_do_netdev_egress(pkt->skb, dev, nft_hook(pkt));
 }
 EXPORT_SYMBOL_GPL(nf_fwd_netdev_egress);
 
@@ -48,7 +65,7 @@ void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif)
 
        skb = skb_clone(pkt->skb, GFP_ATOMIC);
        if (skb)
-               nf_do_netdev_egress(skb, dev);
+               nf_do_netdev_egress(skb, dev, nft_hook(pkt));
 }
 EXPORT_SYMBOL_GPL(nf_dup_netdev_egress);
 
index 51144fc..d6b59be 100644 (file)
@@ -5213,13 +5213,20 @@ static int nft_setelem_parse_data(struct nft_ctx *ctx, struct nft_set *set,
                                  struct nft_data *data,
                                  struct nlattr *attr)
 {
+       u32 dtype;
        int err;
 
        err = nft_data_init(ctx, data, NFT_DATA_VALUE_MAXLEN, desc, attr);
        if (err < 0)
                return err;
 
-       if (desc->type != NFT_DATA_VERDICT && desc->len != set->dlen) {
+       if (set->dtype == NFT_DATA_VERDICT)
+               dtype = NFT_DATA_VERDICT;
+       else
+               dtype = NFT_DATA_VALUE;
+
+       if (dtype != desc->type ||
+           set->dlen != desc->len) {
                nft_data_release(data, desc->type);
                return -EINVAL;
        }
index 53f40e4..3ddce24 100644 (file)
@@ -25,9 +25,7 @@ static noinline void __nft_trace_packet(struct nft_traceinfo *info,
                                        const struct nft_chain *chain,
                                        enum nft_trace_types type)
 {
-       const struct nft_pktinfo *pkt = info->pkt;
-
-       if (!info->trace || !pkt->skb->nf_trace)
+       if (!info->trace || !info->nf_trace)
                return;
 
        info->chain = chain;
@@ -42,11 +40,24 @@ static inline void nft_trace_packet(struct nft_traceinfo *info,
                                    enum nft_trace_types type)
 {
        if (static_branch_unlikely(&nft_trace_enabled)) {
+               const struct nft_pktinfo *pkt = info->pkt;
+
+               info->nf_trace = pkt->skb->nf_trace;
                info->rule = rule;
                __nft_trace_packet(info, chain, type);
        }
 }
 
+static inline void nft_trace_copy_nftrace(struct nft_traceinfo *info)
+{
+       if (static_branch_unlikely(&nft_trace_enabled)) {
+               const struct nft_pktinfo *pkt = info->pkt;
+
+               if (info->trace)
+                       info->nf_trace = pkt->skb->nf_trace;
+       }
+}
+
 static void nft_bitwise_fast_eval(const struct nft_expr *expr,
                                  struct nft_regs *regs)
 {
@@ -85,6 +96,7 @@ static noinline void __nft_trace_verdict(struct nft_traceinfo *info,
                                         const struct nft_chain *chain,
                                         const struct nft_regs *regs)
 {
+       const struct nft_pktinfo *pkt = info->pkt;
        enum nft_trace_types type;
 
        switch (regs->verdict.code) {
@@ -92,8 +104,13 @@ static noinline void __nft_trace_verdict(struct nft_traceinfo *info,
        case NFT_RETURN:
                type = NFT_TRACETYPE_RETURN;
                break;
+       case NF_STOLEN:
+               type = NFT_TRACETYPE_RULE;
+               /* can't access skb->nf_trace; use copy */
+               break;
        default:
                type = NFT_TRACETYPE_RULE;
+               info->nf_trace = pkt->skb->nf_trace;
                break;
        }
 
@@ -254,6 +271,7 @@ next_rule:
                switch (regs.verdict.code) {
                case NFT_BREAK:
                        regs.verdict.code = NFT_CONTINUE;
+                       nft_trace_copy_nftrace(&info);
                        continue;
                case NFT_CONTINUE:
                        nft_trace_packet(&info, chain, rule,
index 5041725..1163ba9 100644 (file)
@@ -7,7 +7,7 @@
 #include <linux/module.h>
 #include <linux/static_key.h>
 #include <linux/hash.h>
-#include <linux/jhash.h>
+#include <linux/siphash.h>
 #include <linux/if_vlan.h>
 #include <linux/init.h>
 #include <linux/skbuff.h>
 DEFINE_STATIC_KEY_FALSE(nft_trace_enabled);
 EXPORT_SYMBOL_GPL(nft_trace_enabled);
 
-static int trace_fill_id(struct sk_buff *nlskb, struct sk_buff *skb)
-{
-       __be32 id;
-
-       /* using skb address as ID results in a limited number of
-        * values (and quick reuse).
-        *
-        * So we attempt to use as many skb members that will not
-        * change while skb is with netfilter.
-        */
-       id = (__be32)jhash_2words(hash32_ptr(skb), skb_get_hash(skb),
-                                 skb->skb_iif);
-
-       return nla_put_be32(nlskb, NFTA_TRACE_ID, id);
-}
-
 static int trace_fill_header(struct sk_buff *nlskb, u16 type,
                             const struct sk_buff *skb,
                             int off, unsigned int len)
@@ -186,6 +170,7 @@ void nft_trace_notify(struct nft_traceinfo *info)
        struct nlmsghdr *nlh;
        struct sk_buff *skb;
        unsigned int size;
+       u32 mark = 0;
        u16 event;
 
        if (!nfnetlink_has_listeners(nft_net(pkt), NFNLGRP_NFTRACE))
@@ -229,7 +214,7 @@ void nft_trace_notify(struct nft_traceinfo *info)
        if (nla_put_be32(skb, NFTA_TRACE_TYPE, htonl(info->type)))
                goto nla_put_failure;
 
-       if (trace_fill_id(skb, pkt->skb))
+       if (nla_put_u32(skb, NFTA_TRACE_ID, info->skbid))
                goto nla_put_failure;
 
        if (nla_put_string(skb, NFTA_TRACE_CHAIN, info->chain->name))
@@ -249,16 +234,24 @@ void nft_trace_notify(struct nft_traceinfo *info)
        case NFT_TRACETYPE_RULE:
                if (nft_verdict_dump(skb, NFTA_TRACE_VERDICT, info->verdict))
                        goto nla_put_failure;
+
+               /* pkt->skb undefined iff NF_STOLEN, disable dump */
+               if (info->verdict->code == NF_STOLEN)
+                       info->packet_dumped = true;
+               else
+                       mark = pkt->skb->mark;
+
                break;
        case NFT_TRACETYPE_POLICY:
+               mark = pkt->skb->mark;
+
                if (nla_put_be32(skb, NFTA_TRACE_POLICY,
                                 htonl(info->basechain->policy)))
                        goto nla_put_failure;
                break;
        }
 
-       if (pkt->skb->mark &&
-           nla_put_be32(skb, NFTA_TRACE_MARK, htonl(pkt->skb->mark)))
+       if (mark && nla_put_be32(skb, NFTA_TRACE_MARK, htonl(mark)))
                goto nla_put_failure;
 
        if (!info->packet_dumped) {
@@ -283,9 +276,20 @@ void nft_trace_init(struct nft_traceinfo *info, const struct nft_pktinfo *pkt,
                    const struct nft_verdict *verdict,
                    const struct nft_chain *chain)
 {
+       static siphash_key_t trace_key __read_mostly;
+       struct sk_buff *skb = pkt->skb;
+
        info->basechain = nft_base_chain(chain);
        info->trace = true;
+       info->nf_trace = pkt->skb->nf_trace;
        info->packet_dumped = false;
        info->pkt = pkt;
        info->verdict = verdict;
+
+       net_get_random_once(&trace_key, sizeof(trace_key));
+
+       info->skbid = (u32)siphash_3u32(hash32_ptr(skb),
+                                       skb_get_hash(skb),
+                                       skb->skb_iif,
+                                       &trace_key);
 }
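
The trace ID is now derived from skb fields that stay stable while the packet traverses netfilter, mixed under a boot-time random key, so IDs remain stable per packet without exposing kernel pointers or being predictable across hosts. A toy sketch of the shape of the scheme; mix64() is an illustrative stand-in for siphash_3u32(), not the real primitive:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint64_t key;                /* kernel: net_get_random_once() */

    static uint64_t mix64(uint64_t x)
    {
        x ^= key;
        x *= 0x9e3779b97f4a7c15ULL;     /* splitmix64-style multiplier */
        x ^= x >> 32;
        return x;
    }

    static uint32_t trace_id(uint32_t skb_hash, uint32_t flow_hash, uint32_t iif)
    {
        return (uint32_t)mix64(((uint64_t)skb_hash << 32) | flow_hash) ^ iif;
    }

    int main(void)
    {
        srandom(42);                    /* stand-in for real entropy */
        key = ((uint64_t)random() << 32) | (uint64_t)random();
        printf("%08x\n", (unsigned)trace_id(0xdeadbeef, 0x12345678, 2));
        return 0;
    }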
index af15102..f466af4 100644 (file)
@@ -614,7 +614,7 @@ static void __net_exit cttimeout_net_exit(struct net *net)
 
        nf_ct_untimeout(net, NULL);
 
-       list_for_each_entry_safe(cur, tmp, &pernet->nfct_timeout_freelist, head) {
+       list_for_each_entry_safe(cur, tmp, &pernet->nfct_timeout_freelist, free_head) {
                list_del(&cur->free_head);
 
                if (refcount_dec_and_test(&cur->refcnt))
index ac48592..55d2d49 100644 (file)
@@ -14,6 +14,7 @@
 #include <linux/in.h>
 #include <linux/ip.h>
 #include <linux/ipv6.h>
+#include <linux/random.h>
 #include <linux/smp.h>
 #include <linux/static_key.h>
 #include <net/dst.h>
@@ -32,8 +33,6 @@
 #define NFT_META_SECS_PER_DAY          86400
 #define NFT_META_DAYS_PER_WEEK         7
 
-static DEFINE_PER_CPU(struct rnd_state, nft_prandom_state);
-
 static u8 nft_meta_weekday(void)
 {
        time64_t secs = ktime_get_real_seconds();
@@ -271,13 +270,6 @@ static bool nft_meta_get_eval_ifname(enum nft_meta_keys key, u32 *dest,
        return true;
 }
 
-static noinline u32 nft_prandom_u32(void)
-{
-       struct rnd_state *state = this_cpu_ptr(&nft_prandom_state);
-
-       return prandom_u32_state(state);
-}
-
 #ifdef CONFIG_IP_ROUTE_CLASSID
 static noinline bool
 nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest)
@@ -389,7 +381,7 @@ void nft_meta_get_eval(const struct nft_expr *expr,
                break;
 #endif
        case NFT_META_PRANDOM:
-               *dest = nft_prandom_u32();
+               *dest = get_random_u32();
                break;
 #ifdef CONFIG_XFRM
        case NFT_META_SECPATH:
@@ -518,7 +510,6 @@ int nft_meta_get_init(const struct nft_ctx *ctx,
                len = IFNAMSIZ;
                break;
        case NFT_META_PRANDOM:
-               prandom_init_once(&nft_prandom_state);
                len = sizeof(u32);
                break;
 #ifdef CONFIG_XFRM
index 81b40c6..45d3dc9 100644 (file)
@@ -9,12 +9,11 @@
 #include <linux/netlink.h>
 #include <linux/netfilter.h>
 #include <linux/netfilter/nf_tables.h>
+#include <linux/random.h>
 #include <linux/static_key.h>
 #include <net/netfilter/nf_tables.h>
 #include <net/netfilter/nf_tables_core.h>
 
-static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state);
-
 struct nft_ng_inc {
        u8                      dreg;
        u32                     modulus;
@@ -135,12 +134,9 @@ struct nft_ng_random {
        u32                     offset;
 };
 
-static u32 nft_ng_random_gen(struct nft_ng_random *priv)
+static u32 nft_ng_random_gen(const struct nft_ng_random *priv)
 {
-       struct rnd_state *state = this_cpu_ptr(&nft_numgen_prandom_state);
-
-       return reciprocal_scale(prandom_u32_state(state), priv->modulus) +
-              priv->offset;
+       return reciprocal_scale(get_random_u32(), priv->modulus) + priv->offset;
 }
 
 static void nft_ng_random_eval(const struct nft_expr *expr,
@@ -168,8 +164,6 @@ static int nft_ng_random_init(const struct nft_ctx *ctx,
        if (priv->offset + priv->modulus - 1 < priv->offset)
                return -EOVERFLOW;
 
-       prandom_init_once(&nft_numgen_prandom_state);
-
        return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg,
                                        NULL, NFT_DATA_VALUE, sizeof(u32));
 }
index df40314..76de6c8 100644 (file)
@@ -143,6 +143,7 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
        /* Another cpu may race to insert the element with the same key */
        if (prev) {
                nft_set_elem_destroy(set, he, true);
+               atomic_dec(&set->nelems);
                he = prev;
        }
 
@@ -152,6 +153,7 @@ out:
 
 err2:
        nft_set_elem_destroy(set, he, true);
+       atomic_dec(&set->nelems);
 err1:
        return false;
 }
index 2c8051d..4f9299b 100644 (file)
@@ -2124,6 +2124,32 @@ out_scratch:
        return err;
 }
 
+/**
+ * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array
+ * @set:       nftables API set representation
+ * @m:         matching data pointing to key mapping array
+ */
+static void nft_set_pipapo_match_destroy(const struct nft_set *set,
+                                        struct nft_pipapo_match *m)
+{
+       struct nft_pipapo_field *f;
+       int i, r;
+
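+       /* skip to the last field; only its mapping table points to elements */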
+       for (i = 0, f = m->f; i < m->field_count - 1; i++, f++)
+               ;
+
+       for (r = 0; r < f->rules; r++) {
+               struct nft_pipapo_elem *e;
+
+               if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)
+                       continue;
+
+               e = f->mt[r].e;
+
+               nft_set_elem_destroy(set, e, true);
+       }
+}
+
 /**
  * nft_pipapo_destroy() - Free private data for set and all committed elements
  * @set:       nftables API set representation
@@ -2132,26 +2158,13 @@ static void nft_pipapo_destroy(const struct nft_set *set)
 {
        struct nft_pipapo *priv = nft_set_priv(set);
        struct nft_pipapo_match *m;
-       struct nft_pipapo_field *f;
-       int i, r, cpu;
+       int cpu;
 
        m = rcu_dereference_protected(priv->match, true);
        if (m) {
                rcu_barrier();
 
-               for (i = 0, f = m->f; i < m->field_count - 1; i++, f++)
-                       ;
-
-               for (r = 0; r < f->rules; r++) {
-                       struct nft_pipapo_elem *e;
-
-                       if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)
-                               continue;
-
-                       e = f->mt[r].e;
-
-                       nft_set_elem_destroy(set, e, true);
-               }
+               nft_set_pipapo_match_destroy(set, m);
 
 #ifdef NFT_PIPAPO_ALIGN
                free_percpu(m->scratch_aligned);
@@ -2165,6 +2178,11 @@ static void nft_pipapo_destroy(const struct nft_set *set)
        }
 
        if (priv->clone) {
+               m = priv->clone;
+
+               if (priv->dirty)
+                       nft_set_pipapo_match_destroy(set, m);
+
 #ifdef NFT_PIPAPO_ALIGN
                free_percpu(priv->clone->scratch_aligned);
 #endif
index 372bf54..e20d1a9 100644 (file)
@@ -407,7 +407,7 @@ static int parse_ipv6hdr(struct sk_buff *skb, struct sw_flow_key *key)
        if (flags & IP6_FH_F_FRAG) {
                if (frag_off) {
                        key->ip.frag = OVS_FRAG_TYPE_LATER;
-                       key->ip.proto = nexthdr;
+                       key->ip.proto = NEXTHDR_FRAGMENT;
                        return 0;
                }
                key->ip.frag = OVS_FRAG_TYPE_FIRST;
index fee6409..eb0b819 100644 (file)
@@ -227,8 +227,8 @@ static void rose_remove_neigh(struct rose_neigh *rose_neigh)
 {
        struct rose_neigh *s;
 
-       rose_stop_ftimer(rose_neigh);
-       rose_stop_t0timer(rose_neigh);
+       del_timer_sync(&rose_neigh->ftimer);
+       del_timer_sync(&rose_neigh->t0timer);
 
        skb_queue_purge(&rose_neigh->queue);
 
index b3138fc..f06ddbe 100644 (file)
@@ -31,89 +31,89 @@ static void rose_idletimer_expiry(struct timer_list *);
 
 void rose_start_heartbeat(struct sock *sk)
 {
-       del_timer(&sk->sk_timer);
+       sk_stop_timer(sk, &sk->sk_timer);
 
        sk->sk_timer.function = rose_heartbeat_expiry;
        sk->sk_timer.expires  = jiffies + 5 * HZ;
 
-       add_timer(&sk->sk_timer);
+       sk_reset_timer(sk, &sk->sk_timer, sk->sk_timer.expires);
 }
 
 void rose_start_t1timer(struct sock *sk)
 {
        struct rose_sock *rose = rose_sk(sk);
 
-       del_timer(&rose->timer);
+       sk_stop_timer(sk, &rose->timer);
 
        rose->timer.function = rose_timer_expiry;
        rose->timer.expires  = jiffies + rose->t1;
 
-       add_timer(&rose->timer);
+       sk_reset_timer(sk, &rose->timer, rose->timer.expires);
 }
 
 void rose_start_t2timer(struct sock *sk)
 {
        struct rose_sock *rose = rose_sk(sk);
 
-       del_timer(&rose->timer);
+       sk_stop_timer(sk, &rose->timer);
 
        rose->timer.function = rose_timer_expiry;
        rose->timer.expires  = jiffies + rose->t2;
 
-       add_timer(&rose->timer);
+       sk_reset_timer(sk, &rose->timer, rose->timer.expires);
 }
 
 void rose_start_t3timer(struct sock *sk)
 {
        struct rose_sock *rose = rose_sk(sk);
 
-       del_timer(&rose->timer);
+       sk_stop_timer(sk, &rose->timer);
 
        rose->timer.function = rose_timer_expiry;
        rose->timer.expires  = jiffies + rose->t3;
 
-       add_timer(&rose->timer);
+       sk_reset_timer(sk, &rose->timer, rose->timer.expires);
 }
 
 void rose_start_hbtimer(struct sock *sk)
 {
        struct rose_sock *rose = rose_sk(sk);
 
-       del_timer(&rose->timer);
+       sk_stop_timer(sk, &rose->timer);
 
        rose->timer.function = rose_timer_expiry;
        rose->timer.expires  = jiffies + rose->hb;
 
-       add_timer(&rose->timer);
+       sk_reset_timer(sk, &rose->timer, rose->timer.expires);
 }
 
 void rose_start_idletimer(struct sock *sk)
 {
        struct rose_sock *rose = rose_sk(sk);
 
-       del_timer(&rose->idletimer);
+       sk_stop_timer(sk, &rose->idletimer);
 
        if (rose->idle > 0) {
                rose->idletimer.function = rose_idletimer_expiry;
                rose->idletimer.expires  = jiffies + rose->idle;
 
-               add_timer(&rose->idletimer);
+               sk_reset_timer(sk, &rose->idletimer, rose->idletimer.expires);
        }
 }
 
 void rose_stop_heartbeat(struct sock *sk)
 {
-       del_timer(&sk->sk_timer);
+       sk_stop_timer(sk, &sk->sk_timer);
 }
 
 void rose_stop_timer(struct sock *sk)
 {
-       del_timer(&rose_sk(sk)->timer);
+       sk_stop_timer(sk, &rose_sk(sk)->timer);
 }
 
 void rose_stop_idletimer(struct sock *sk)
 {
-       del_timer(&rose_sk(sk)->idletimer);
+       sk_stop_timer(sk, &rose_sk(sk)->idletimer);
 }
 
 static void rose_heartbeat_expiry(struct timer_list *t)
@@ -130,6 +130,7 @@ static void rose_heartbeat_expiry(struct timer_list *t)
                    (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) {
                        bh_unlock_sock(sk);
                        rose_destroy_socket(sk);
+                       sock_put(sk);
                        return;
                }
                break;
@@ -152,6 +153,7 @@ static void rose_heartbeat_expiry(struct timer_list *t)
 
        rose_start_heartbeat(sk);
        bh_unlock_sock(sk);
+       sock_put(sk);
 }
 
 static void rose_timer_expiry(struct timer_list *t)
@@ -181,6 +183,7 @@ static void rose_timer_expiry(struct timer_list *t)
                break;
        }
        bh_unlock_sock(sk);
+       sock_put(sk);
 }
 
 static void rose_idletimer_expiry(struct timer_list *t)
@@ -205,4 +208,5 @@ static void rose_idletimer_expiry(struct timer_list *t)
                sock_set_flag(sk, SOCK_DEAD);
        }
        bh_unlock_sock(sk);
+       sock_put(sk);
 }
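
The rose timer helpers are switched to sk_reset_timer()/sk_stop_timer(), which hold a socket reference while a timer is pending; the sock_put() calls added to the expiry handlers drop that reference once the timer has fired. A minimal userspace model of the refcounting contract (all names are stand-ins):

    #include <stdio.h>

    struct sock_stub {
        int refcnt;
    };

    static void sock_hold(struct sock_stub *sk) { sk->refcnt++; }

    static void sock_put(struct sock_stub *sk)
    {
        if (--sk->refcnt == 0)
            printf("socket freed\n");
    }

    static void reset_timer(struct sock_stub *sk)
    {
        sock_hold(sk);                  /* sk_reset_timer(): pin the socket */
    }

    static void timer_expiry(struct sock_stub *sk)
    {
        /* ... timer work under bh_lock_sock() ... */
        sock_put(sk);                   /* the sock_put() added to each handler */
    }

    int main(void)
    {
        struct sock_stub sk = { .refcnt = 1 };

        reset_timer(&sk);               /* refcnt 2: pending timer pins sk */
        timer_expiry(&sk);              /* refcnt back to 1 */
        sock_put(&sk);                  /* last ref: prints "socket freed" */
        return 0;
    }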
index 08aab5c..258917a 100644 (file)
@@ -431,7 +431,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
                break;
        }
 
-       _leave(" = %d [set %hx]", ret, y);
+       _leave(" = %d [set %x]", ret, y);
        return ret;
 }
 
index da9733d..817065a 100644 (file)
@@ -588,7 +588,8 @@ static int tcf_idr_release_unsafe(struct tc_action *p)
 }
 
 static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
-                         const struct tc_action_ops *ops)
+                         const struct tc_action_ops *ops,
+                         struct netlink_ext_ack *extack)
 {
        struct nlattr *nest;
        int n_i = 0;
@@ -604,20 +605,25 @@ static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
        if (nla_put_string(skb, TCA_KIND, ops->kind))
                goto nla_put_failure;
 
+       ret = 0;
        mutex_lock(&idrinfo->lock);
        idr_for_each_entry_ul(idr, p, tmp, id) {
                if (IS_ERR(p))
                        continue;
                ret = tcf_idr_release_unsafe(p);
-               if (ret == ACT_P_DELETED) {
+               if (ret == ACT_P_DELETED)
                        module_put(ops->owner);
-                       n_i++;
-               } else if (ret < 0) {
-                       mutex_unlock(&idrinfo->lock);
-                       goto nla_put_failure;
-               }
+               else if (ret < 0)
+                       break;
+               n_i++;
        }
        mutex_unlock(&idrinfo->lock);
+       if (ret < 0) {
+               if (n_i)
+                       NL_SET_ERR_MSG(extack, "Unable to flush all TC actions");
+               else
+                       goto nla_put_failure;
+       }
 
        ret = nla_put_u32(skb, TCA_FCNT, n_i);
        if (ret)
@@ -638,7 +644,7 @@ int tcf_generic_walker(struct tc_action_net *tn, struct sk_buff *skb,
        struct tcf_idrinfo *idrinfo = tn->idrinfo;
 
        if (type == RTM_DELACTION) {
-               return tcf_del_walker(idrinfo, skb, ops);
+               return tcf_del_walker(idrinfo, skb, ops, extack);
        } else if (type == RTM_GETACTION) {
                return tcf_dump_walker(idrinfo, skb, cb);
        } else {
index 79c8901..b759628 100644 (file)
@@ -442,7 +442,7 @@ static int tcf_police_act_to_flow_act(int tc_act, u32 *extval,
                act_id = FLOW_ACTION_JUMP;
                *extval = tc_act & TC_ACT_EXT_VAL_MASK;
        } else if (tc_act == TC_ACT_UNSPEC) {
-               NL_SET_ERR_MSG_MOD(extack, "Offload not supported when conform/exceed action is \"continue\"");
+               act_id = FLOW_ACTION_CONTINUE;
        } else {
                NL_SET_ERR_MSG_MOD(extack, "Unsupported conform/exceed action offload");
        }
index ed4ccef..5449ed1 100644 (file)
@@ -1146,9 +1146,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
        struct tc_netem_rate rate;
        struct tc_netem_slot slot;
 
-       qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency),
+       qopt.latency = min_t(psched_time_t, PSCHED_NS2TICKS(q->latency),
                             UINT_MAX);
-       qopt.jitter = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->jitter),
+       qopt.jitter = min_t(psched_time_t, PSCHED_NS2TICKS(q->jitter),
                            UINT_MAX);
        qopt.limit = q->limit;
        qopt.loss = q->loss;
index b9c71a3..0b941dd 100644 (file)
@@ -18,6 +18,7 @@
 #include <linux/module.h>
 #include <linux/spinlock.h>
 #include <linux/rcupdate.h>
+#include <linux/time.h>
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
 #include <net/pkt_cls.h>
@@ -176,7 +177,7 @@ static ktime_t get_interval_end_time(struct sched_gate_list *sched,
 
 static int length_to_duration(struct taprio_sched *q, int len)
 {
-       return div_u64(len * atomic64_read(&q->picos_per_byte), 1000);
+       return div_u64(len * atomic64_read(&q->picos_per_byte), PSEC_PER_NSEC);
 }
 
 /* Returns the entry corresponding to next available interval. If
@@ -551,7 +552,7 @@ static struct sk_buff *taprio_peek(struct Qdisc *sch)
 static void taprio_set_budget(struct taprio_sched *q, struct sched_entry *entry)
 {
        atomic_set(&entry->budget,
-                  div64_u64((u64)entry->interval * 1000,
+                  div64_u64((u64)entry->interval * PSEC_PER_NSEC,
                             atomic64_read(&q->picos_per_byte)));
 }
 
index 1b6f5e2..3d7eb2a 100644 (file)
@@ -2146,10 +2146,13 @@ SYSCALL_DEFINE4(send, int, fd, void __user *, buff, size_t, len,
 int __sys_recvfrom(int fd, void __user *ubuf, size_t size, unsigned int flags,
                   struct sockaddr __user *addr, int __user *addr_len)
 {
+       struct sockaddr_storage address;
+       struct msghdr msg = {
+               /* Save some cycles and don't copy the address if not needed */
+               .msg_name = addr ? (struct sockaddr *)&address : NULL,
+       };
        struct socket *sock;
        struct iovec iov;
-       struct msghdr msg;
-       struct sockaddr_storage address;
        int err, err2;
        int fput_needed;
 
@@ -2160,14 +2163,6 @@ int __sys_recvfrom(int fd, void __user *ubuf, size_t size, unsigned int flags,
        if (!sock)
                goto out;
 
-       msg.msg_control = NULL;
-       msg.msg_controllen = 0;
-       /* Save some cycles and don't copy the address if not needed */
-       msg.msg_name = addr ? (struct sockaddr *)&address : NULL;
-       /* We assume all kernel code knows the size of sockaddr_storage */
-       msg.msg_namelen = 0;
-       msg.msg_iocb = NULL;
-       msg.msg_flags = 0;
        if (sock->file->f_flags & O_NONBLOCK)
                flags |= MSG_DONTWAIT;
        err = sock_recvmsg(sock, &msg, flags);
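
The rewrite above leans on a C guarantee: members not named in a designated initializer are zero-initialized, so the explicit clearing of msg_control, msg_namelen, msg_flags and friends becomes redundant. A tiny standalone demonstration with a stub struct:

    #include <stdio.h>

    struct msghdr_stub {
        void *msg_name;
        int   msg_namelen;
        void *msg_control;
        int   msg_flags;
    };

    int main(void)
    {
        struct msghdr_stub msg = {
            .msg_name = NULL,           /* only member named explicitly */
        };

        /* the rest is guaranteed zero: prints "0 (nil) 0" (or "0 0x0 0") */
        printf("%d %p %d\n", msg.msg_namelen, msg.msg_control, msg.msg_flags);
        return 0;
    }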
@@ -2372,6 +2367,7 @@ int __copy_msghdr_from_user(struct msghdr *kmsg,
                return -EFAULT;
 
        kmsg->msg_control_is_user = true;
+       kmsg->msg_get_inq = 0;
        kmsg->msg_control_user = msg.msg_control;
        kmsg->msg_controllen = msg.msg_controllen;
        kmsg->msg_flags = msg.msg_flags;
index 1a72c67..8299ceb 100644 (file)
@@ -533,6 +533,9 @@ EXPORT_SYMBOL_GPL(strp_check_rcv);
 
 static int __init strp_dev_init(void)
 {
+       BUILD_BUG_ON(sizeof(struct sk_skb_cb) >
+                    sizeof_field(struct sk_buff, cb));
+
        strp_wq = create_singlethread_workqueue("kstrp");
        if (unlikely(!strp_wq))
                return -ENOMEM;
index e2c6eca..b6781ad 100644 (file)
@@ -651,6 +651,7 @@ static struct rpc_clnt *__rpc_clone_client(struct rpc_create_args *args,
        new->cl_discrtry = clnt->cl_discrtry;
        new->cl_chatty = clnt->cl_chatty;
        new->cl_principal = clnt->cl_principal;
+       new->cl_max_connect = clnt->cl_max_connect;
        return new;
 
 out_err:
index f87a2d8..5d2b3e6 100644 (file)
@@ -984,7 +984,7 @@ static noinline __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr,
        p = page_address(*xdr->page_ptr);
        xdr->p = p + frag2bytes;
        space_left = xdr->buf->buflen - xdr->buf->len;
-       if (space_left - nbytes >= PAGE_SIZE)
+       if (space_left - frag1bytes >= PAGE_SIZE)
                xdr->end = p + PAGE_SIZE;
        else
                xdr->end = p + space_left - frag1bytes;
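
The fix checks the remaining stream space against frag1bytes (what stayed behind in the previous buffer) rather than nbytes (the caller's request); otherwise xdr->end could be set a full page out even when less room actually remains. Worked numbers, chosen so the two tests disagree (values are illustrative, not from a real trace):

    #include <stdio.h>

    #define PAGE_SIZE 4096

    int main(void)
    {
        int buflen = 8192, len = 4042;  /* hypothetical stream state */
        int frag1bytes = 96, nbytes = 30;
        int space_left = buflen - len;  /* 4150 bytes truly remaining */

        /* old test: 4150 - 30 >= 4096 -> grants a full page (too much) */
        printf("old: %s\n",
               space_left - nbytes >= PAGE_SIZE ? "full page" : "clamped");
        /* new test: 4150 - 96 < 4096 -> end clamped to the real space */
        printf("new: %s\n",
               space_left - frag1bytes >= PAGE_SIZE ? "full page" : "clamped");
        return 0;
    }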
index 3f4542e..434e70e 100644 (file)
@@ -109,10 +109,9 @@ static void __net_exit tipc_exit_net(struct net *net)
        struct tipc_net *tn = tipc_net(net);
 
        tipc_detach_loopback(net);
+       tipc_net_stop(net);
        /* Make sure the tipc_net_finalize_work() finished */
        cancel_work_sync(&tn->work);
-       tipc_net_stop(net);
-
        tipc_bcast_stop(net);
        tipc_nametbl_stop(net);
        tipc_sk_rht_destroy(net);
index 6ef95ce..b48d97c 100644 (file)
@@ -472,8 +472,8 @@ struct tipc_node *tipc_node_create(struct net *net, u32 addr, u8 *peer_id,
                                   bool preliminary)
 {
        struct tipc_net *tn = net_generic(net, tipc_net_id);
+       struct tipc_link *l, *snd_l = tipc_bc_sndlink(net);
        struct tipc_node *n, *temp_node;
-       struct tipc_link *l;
        unsigned long intv;
        int bearer_id;
        int i;
@@ -488,6 +488,16 @@ struct tipc_node *tipc_node_create(struct net *net, u32 addr, u8 *peer_id,
                        goto exit;
                /* A preliminary node becomes "real" now, refresh its data */
                tipc_node_write_lock(n);
+               if (!tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX,
+                                        tipc_link_min_win(snd_l), tipc_link_max_win(snd_l),
+                                        n->capabilities, &n->bc_entry.inputq1,
+                                        &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) {
+                       pr_warn("Broadcast rcv link refresh failed, no memory\n");
+                       tipc_node_write_unlock_fast(n);
+                       tipc_node_put(n);
+                       n = NULL;
+                       goto exit;
+               }
                n->preliminary = false;
                n->addr = addr;
                hlist_del_rcu(&n->hash);
@@ -567,7 +577,16 @@ update:
        n->signature = INVALID_NODE_SIG;
        n->active_links[0] = INVALID_BEARER_ID;
        n->active_links[1] = INVALID_BEARER_ID;
-       n->bc_entry.link = NULL;
+       if (!preliminary &&
+           !tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX,
+                                tipc_link_min_win(snd_l), tipc_link_max_win(snd_l),
+                                n->capabilities, &n->bc_entry.inputq1,
+                                &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) {
+               pr_warn("Broadcast rcv link creation failed, no memory\n");
+               kfree(n);
+               n = NULL;
+               goto exit;
+       }
        tipc_node_get(n);
        timer_setup(&n->timer, tipc_node_timeout, 0);
        /* Start a slow timer anyway, crypto needs it */
@@ -1155,7 +1174,7 @@ void tipc_node_check_dest(struct net *net, u32 addr,
                          bool *respond, bool *dupl_addr)
 {
        struct tipc_node *n;
-       struct tipc_link *l, *snd_l;
+       struct tipc_link *l;
        struct tipc_link_entry *le;
        bool addr_match = false;
        bool sign_match = false;
@@ -1175,22 +1194,6 @@ void tipc_node_check_dest(struct net *net, u32 addr,
                return;
 
        tipc_node_write_lock(n);
-       if (unlikely(!n->bc_entry.link)) {
-               snd_l = tipc_bc_sndlink(net);
-               if (!tipc_link_bc_create(net, tipc_own_addr(net),
-                                        addr, peer_id, U16_MAX,
-                                        tipc_link_min_win(snd_l),
-                                        tipc_link_max_win(snd_l),
-                                        n->capabilities,
-                                        &n->bc_entry.inputq1,
-                                        &n->bc_entry.namedq, snd_l,
-                                        &n->bc_entry.link)) {
-                       pr_warn("Broadcast rcv link creation failed, no mem\n");
-                       tipc_node_write_unlock_fast(n);
-                       tipc_node_put(n);
-                       return;
-               }
-       }
 
        le = &n->links[b->identity];
 
index 17f8c52..43509c7 100644 (file)
@@ -502,6 +502,7 @@ static int tipc_sk_create(struct net *net, struct socket *sock,
        sock_init_data(sock, sk);
        tipc_set_sk_state(sk, TIPC_OPEN);
        if (tipc_sk_insert(tsk)) {
+               sk_free(sk);
                pr_warn("Socket create failed; port number exhausted\n");
                return -EINVAL;
        }
diff --git a/net/tls/tls.h b/net/tls/tls.h
new file mode 100644 (file)
index 0000000..8005ee2
--- /dev/null
@@ -0,0 +1,290 @@
+/*
+ * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2016-2017, Dave Watson <davejwatson@fb.com>. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _TLS_INT_H
+#define _TLS_INT_H
+
+#include <asm/byteorder.h>
+#include <linux/types.h>
+#include <linux/skmsg.h>
+#include <net/tls.h>
+
+#define __TLS_INC_STATS(net, field)                            \
+       __SNMP_INC_STATS((net)->mib.tls_statistics, field)
+#define TLS_INC_STATS(net, field)                              \
+       SNMP_INC_STATS((net)->mib.tls_statistics, field)
+#define TLS_DEC_STATS(net, field)                              \
+       SNMP_DEC_STATS((net)->mib.tls_statistics, field)
+
+/* TLS records are maintained in 'struct tls_rec'. It stores the memory pages
+ * allocated or mapped for each TLS record. After encryption, the records are
+ * stores in a linked list.
+ */
+struct tls_rec {
+       struct list_head list;
+       int tx_ready;
+       int tx_flags;
+
+       struct sk_msg msg_plaintext;
+       struct sk_msg msg_encrypted;
+
+       /* AAD | msg_plaintext.sg.data | sg_tag */
+       struct scatterlist sg_aead_in[2];
+       /* AAD | msg_encrypted.sg.data (data contains overhead for hdr & iv & tag) */
+       struct scatterlist sg_aead_out[2];
+
+       char content_type;
+       struct scatterlist sg_content_type;
+
+       char aad_space[TLS_AAD_SPACE_SIZE];
+       u8 iv_data[MAX_IV_SIZE];
+       struct aead_request aead_req;
+       u8 aead_req_ctx[];
+};
+
+int __net_init tls_proc_init(struct net *net);
+void __net_exit tls_proc_fini(struct net *net);
+
+struct tls_context *tls_ctx_create(struct sock *sk);
+void tls_ctx_free(struct sock *sk, struct tls_context *ctx);
+void update_sk_prot(struct sock *sk, struct tls_context *ctx);
+
+int wait_on_pending_writer(struct sock *sk, long *timeo);
+int tls_sk_query(struct sock *sk, int optname, char __user *optval,
+                int __user *optlen);
+int tls_sk_attach(struct sock *sk, int optname, char __user *optval,
+                 unsigned int optlen);
+void tls_err_abort(struct sock *sk, int err);
+
+int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
+void tls_update_rx_zc_capable(struct tls_context *tls_ctx);
+void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
+void tls_sw_strparser_done(struct tls_context *tls_ctx);
+int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
+                          int offset, size_t size, int flags);
+int tls_sw_sendpage(struct sock *sk, struct page *page,
+                   int offset, size_t size, int flags);
+void tls_sw_cancel_work_tx(struct tls_context *tls_ctx);
+void tls_sw_release_resources_tx(struct sock *sk);
+void tls_sw_free_ctx_tx(struct tls_context *tls_ctx);
+void tls_sw_free_resources_rx(struct sock *sk);
+void tls_sw_release_resources_rx(struct sock *sk);
+void tls_sw_free_ctx_rx(struct tls_context *tls_ctx);
+int tls_sw_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+                  int flags, int *addr_len);
+bool tls_sw_sock_is_readable(struct sock *sk);
+ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
+                          struct pipe_inode_info *pipe,
+                          size_t len, unsigned int flags);
+
+int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+int tls_device_sendpage(struct sock *sk, struct page *page,
+                       int offset, size_t size, int flags);
+int tls_tx_records(struct sock *sk, int flags);
+
+void tls_sw_write_space(struct sock *sk, struct tls_context *ctx);
+void tls_device_write_space(struct sock *sk, struct tls_context *ctx);
+
+int tls_process_cmsg(struct sock *sk, struct msghdr *msg,
+                    unsigned char *record_type);
+int decrypt_skb(struct sock *sk, struct sk_buff *skb,
+               struct scatterlist *sgout);
+
+int tls_sw_fallback_init(struct sock *sk,
+                        struct tls_offload_context_tx *offload_ctx,
+                        struct tls_crypto_info *crypto_info);
+
+static inline struct tls_msg *tls_msg(struct sk_buff *skb)
+{
+       struct sk_skb_cb *scb = (struct sk_skb_cb *)skb->cb;
+
+       return &scb->tls;
+}
+
+#ifdef CONFIG_TLS_DEVICE
+void tls_device_init(void);
+void tls_device_cleanup(void);
+int tls_set_device_offload(struct sock *sk, struct tls_context *ctx);
+void tls_device_free_resources_tx(struct sock *sk);
+int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx);
+void tls_device_offload_cleanup_rx(struct sock *sk);
+void tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq);
+int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
+                        struct sk_buff *skb, struct strp_msg *rxm);
+#else
+static inline void tls_device_init(void) {}
+static inline void tls_device_cleanup(void) {}
+
+static inline int
+tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+{
+       return -EOPNOTSUPP;
+}
+
+static inline void tls_device_free_resources_tx(struct sock *sk) {}
+
+static inline int
+tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
+{
+       return -EOPNOTSUPP;
+}
+
+static inline void tls_device_offload_cleanup_rx(struct sock *sk) {}
+static inline void
+tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq) {}
+
+static inline int
+tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
+                    struct sk_buff *skb, struct strp_msg *rxm)
+{
+       return 0;
+}
+#endif
+
+int tls_push_sg(struct sock *sk, struct tls_context *ctx,
+               struct scatterlist *sg, u16 first_offset,
+               int flags);
+int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
+                           int flags);
+void tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
+
+static inline bool tls_is_partially_sent_record(struct tls_context *ctx)
+{
+       return !!ctx->partially_sent_record;
+}
+
+static inline bool tls_is_pending_open_record(struct tls_context *tls_ctx)
+{
+       return tls_ctx->pending_open_record_frags;
+}
+
+static inline bool tls_bigint_increment(unsigned char *seq, int len)
+{
+       int i;
+
+       for (i = len - 1; i >= 0; i--) {
+               ++seq[i];
+               if (seq[i] != 0)
+                       break;
+       }
+
+       return (i == -1);
+}
+
+static inline void tls_bigint_subtract(unsigned char *seq, int  n)
+{
+       u64 rcd_sn;
+       __be64 *p;
+
+       BUILD_BUG_ON(TLS_MAX_REC_SEQ_SIZE != 8);
+
+       p = (__be64 *)seq;
+       rcd_sn = be64_to_cpu(*p);
+       *p = cpu_to_be64(rcd_sn - n);
+}
+
+static inline void
+tls_advance_record_sn(struct sock *sk, struct tls_prot_info *prot,
+                     struct cipher_context *ctx)
+{
+       if (tls_bigint_increment(ctx->rec_seq, prot->rec_seq_size))
+               tls_err_abort(sk, -EBADMSG);
+
+       if (prot->version != TLS_1_3_VERSION &&
+           prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305)
+               tls_bigint_increment(ctx->iv + prot->salt_size,
+                                    prot->iv_size);
+}
+
+static inline void
+tls_xor_iv_with_seq(struct tls_prot_info *prot, char *iv, char *seq)
+{
+       int i;
+
+       if (prot->version == TLS_1_3_VERSION ||
+           prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305) {
+               for (i = 0; i < 8; i++)
+                       iv[i + 4] ^= seq[i];
+       }
+}
+
+static inline void
+tls_fill_prepend(struct tls_context *ctx, char *buf, size_t plaintext_len,
+                unsigned char record_type)
+{
+       struct tls_prot_info *prot = &ctx->prot_info;
+       size_t pkt_len, iv_size = prot->iv_size;
+
+       pkt_len = plaintext_len + prot->tag_size;
+       if (prot->version != TLS_1_3_VERSION &&
+           prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305) {
+               pkt_len += iv_size;
+
+               memcpy(buf + TLS_NONCE_OFFSET,
+                      ctx->tx.iv + prot->salt_size, iv_size);
+       }
+
+       /* we cover the explicit nonce here as well, so buf should be of
+        * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE
+        */
+       buf[0] = prot->version == TLS_1_3_VERSION ?
+                  TLS_RECORD_TYPE_DATA : record_type;
+       /* Note that VERSION must be TLS_1_2 for both TLS1.2 and TLS1.3 */
+       buf[1] = TLS_1_2_VERSION_MINOR;
+       buf[2] = TLS_1_2_VERSION_MAJOR;
+       /* we can use the IV for the explicit nonce, per the spec */
+       buf[3] = pkt_len >> 8;
+       buf[4] = pkt_len & 0xFF;
+}
+
+static inline
+void tls_make_aad(char *buf, size_t size, char *record_sequence,
+                 unsigned char record_type, struct tls_prot_info *prot)
+{
+       if (prot->version != TLS_1_3_VERSION) {
+               memcpy(buf, record_sequence, prot->rec_seq_size);
+               buf += 8;
+       } else {
+               size += prot->tag_size;
+       }
+
+       buf[0] = prot->version == TLS_1_3_VERSION ?
+                 TLS_RECORD_TYPE_DATA : record_type;
+       buf[1] = TLS_1_2_VERSION_MAJOR;
+       buf[2] = TLS_1_2_VERSION_MINOR;
+       buf[3] = size >> 8;
+       buf[4] = size & 0xFF;
+}
+
+#endif
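
The sequence-number helpers in the new tls.h are pure byte-array arithmetic and can be exercised standalone; below, tls_bigint_increment() is copied verbatim minus kernel types, showing big-endian carry propagation and the wrap-around return value:

    #include <stdio.h>
    #include <string.h>

    static int tls_bigint_increment(unsigned char *seq, int len)
    {
        int i;

        for (i = len - 1; i >= 0; i--) {
            ++seq[i];
            if (seq[i] != 0)
                break;
        }
        return (i == -1);               /* non-zero only on full wrap-around */
    }

    int main(void)
    {
        unsigned char seq[8] = { 0, 0, 0, 0, 0, 0, 0, 0xff };

        tls_bigint_increment(seq, sizeof(seq));
        printf("%02x %02x\n", seq[6], seq[7]);  /* carry: prints "01 00" */

        memset(seq, 0xff, sizeof(seq));
        printf("wrapped: %d\n", tls_bigint_increment(seq, sizeof(seq)));
        return 0;
    }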
index ec6f4b6..227b92a 100644 (file)
@@ -38,6 +38,7 @@
 #include <net/tcp.h>
 #include <net/tls.h>
 
+#include "tls.h"
 #include "trace.h"
 
 /* device_offload_lock is used to synchronize tls_dev_add
@@ -562,7 +563,7 @@ int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
        lock_sock(sk);
 
        if (unlikely(msg->msg_controllen)) {
-               rc = tls_proccess_cmsg(sk, msg, &record_type);
+               rc = tls_process_cmsg(sk, msg, &record_type);
                if (rc)
                        goto out;
        }
index e40bedd..618cee7 100644 (file)
@@ -34,6 +34,8 @@
 #include <crypto/scatterwalk.h>
 #include <net/ip6_checksum.h>
 
+#include "tls.h"
+
 static void chain_to_walk(struct scatterlist *sg, struct scatter_walk *walk)
 {
        struct scatterlist *src = walk->sg;
@@ -232,7 +234,7 @@ static int fill_sg_in(struct scatterlist *sg_in,
                      s32 *sync_size,
                      int *resync_sgs)
 {
-       int tcp_payload_offset = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       int tcp_payload_offset = skb_tcp_all_headers(skb);
        int payload_len = skb->len - tcp_payload_offset;
        u32 tcp_seq = ntohl(tcp_hdr(skb)->seq);
        struct tls_record_info *record;
@@ -310,8 +312,8 @@ static struct sk_buff *tls_enc_skb(struct tls_context *tls_ctx,
                                   struct sk_buff *skb,
                                   s32 sync_size, u64 rcd_sn)
 {
-       int tcp_payload_offset = skb_transport_offset(skb) + tcp_hdrlen(skb);
        struct tls_offload_context_tx *ctx = tls_offload_ctx_tx(tls_ctx);
+       int tcp_payload_offset = skb_tcp_all_headers(skb);
        int payload_len = skb->len - tcp_payload_offset;
        void *buf, *iv, *aad, *dummy_buf;
        struct aead_request *aead_req;
@@ -372,7 +374,7 @@ free_nskb:
 
 static struct sk_buff *tls_sw_fallback(struct sock *sk, struct sk_buff *skb)
 {
-       int tcp_payload_offset = skb_transport_offset(skb) + tcp_hdrlen(skb);
+       int tcp_payload_offset = skb_tcp_all_headers(skb);
        struct tls_context *tls_ctx = tls_get_ctx(sk);
        struct tls_offload_context_tx *ctx = tls_offload_ctx_tx(tls_ctx);
        int payload_len = skb->len - tcp_payload_offset;
index da17641..f3d9dbf 100644 (file)
@@ -45,6 +45,8 @@
 #include <net/tls.h>
 #include <net/tls_toe.h>
 
+#include "tls.h"
+
 MODULE_AUTHOR("Mellanox Technologies");
 MODULE_DESCRIPTION("Transport Layer Security Support");
 MODULE_LICENSE("Dual BSD/GPL");
@@ -164,8 +166,8 @@ static int tls_handle_open_record(struct sock *sk, int flags)
        return 0;
 }
 
-int tls_proccess_cmsg(struct sock *sk, struct msghdr *msg,
-                     unsigned char *record_type)
+int tls_process_cmsg(struct sock *sk, struct msghdr *msg,
+                    unsigned char *record_type)
 {
        struct cmsghdr *cmsg;
        int rc = -EINVAL;
@@ -533,6 +535,37 @@ static int do_tls_getsockopt_tx_zc(struct sock *sk, char __user *optval,
        return 0;
 }
 
+static int do_tls_getsockopt_no_pad(struct sock *sk, char __user *optval,
+                                   int __user *optlen)
+{
+       struct tls_context *ctx = tls_get_ctx(sk);
+       unsigned int value;
+       int err, len;
+
+       if (ctx->prot_info.version != TLS_1_3_VERSION)
+               return -EINVAL;
+
+       if (get_user(len, optlen))
+               return -EFAULT;
+       if (len < sizeof(value))
+               return -EINVAL;
+
+       lock_sock(sk);
+       err = -EINVAL;
+       if (ctx->rx_conf == TLS_SW || ctx->rx_conf == TLS_HW) {
+               value = ctx->rx_no_pad;
+               err = 0;
+       }
+       release_sock(sk);
+       if (err)
+               return err;
+
+       if (put_user(sizeof(value), optlen))
+               return -EFAULT;
+       if (copy_to_user(optval, &value, sizeof(value)))
+               return -EFAULT;
+
+       return 0;
+}
+
 static int do_tls_getsockopt(struct sock *sk, int optname,
                             char __user *optval, int __user *optlen)
 {
@@ -547,6 +580,9 @@ static int do_tls_getsockopt(struct sock *sk, int optname,
        case TLS_TX_ZEROCOPY_RO:
                rc = do_tls_getsockopt_tx_zc(sk, optval, optlen);
                break;
+       case TLS_RX_EXPECT_NO_PAD:
+               rc = do_tls_getsockopt_no_pad(sk, optval, optlen);
+               break;
        default:
                rc = -ENOPROTOOPT;
                break;
@@ -718,6 +754,38 @@ static int do_tls_setsockopt_tx_zc(struct sock *sk, sockptr_t optval,
        return 0;
 }
 
+static int do_tls_setsockopt_no_pad(struct sock *sk, sockptr_t optval,
+                                   unsigned int optlen)
+{
+       struct tls_context *ctx = tls_get_ctx(sk);
+       u32 val;
+       int rc;
+
+       if (ctx->prot_info.version != TLS_1_3_VERSION ||
+           sockptr_is_null(optval) || optlen < sizeof(val))
+               return -EINVAL;
+
+       rc = copy_from_sockptr(&val, optval, sizeof(val));
+       if (rc)
+               return -EFAULT;
+       if (val > 1)
+               return -EINVAL;
+       rc = check_zeroed_sockptr(optval, sizeof(val), optlen - sizeof(val));
+       if (rc < 1)
+               return rc == 0 ? -EINVAL : rc;
+
+       lock_sock(sk);
+       rc = -EINVAL;
+       if (ctx->rx_conf == TLS_SW || ctx->rx_conf == TLS_HW) {
+               ctx->rx_no_pad = val;
+               tls_update_rx_zc_capable(ctx);
+               rc = 0;
+       }
+       release_sock(sk);
+
+       return rc;
+}
+
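
A hypothetical userspace caller of the option wired up above. This assumes kernel and uapi headers new enough to define TLS_RX_EXPECT_NO_PAD, plus a socket that already has kTLS RX configured for TLS 1.3; otherwise the kernel returns -EINVAL per the code above. Error handling and the preceding handshake/kTLS setup are omitted:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/tls.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282
    #endif

    static int enable_rx_no_pad(int fd)
    {
        int val = 1;

        return setsockopt(fd, SOL_TLS, TLS_RX_EXPECT_NO_PAD,
                          &val, sizeof(val));
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        /* expected to fail here: no kTLS session established on fd yet */
        if (enable_rx_no_pad(fd))
            perror("TLS_RX_EXPECT_NO_PAD");
        return 0;
    }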
 static int do_tls_setsockopt(struct sock *sk, int optname, sockptr_t optval,
                             unsigned int optlen)
 {
@@ -736,6 +804,9 @@ static int do_tls_setsockopt(struct sock *sk, int optname, sockptr_t optval,
                rc = do_tls_setsockopt_tx_zc(sk, optval, optlen);
                release_sock(sk);
                break;
+       case TLS_RX_EXPECT_NO_PAD:
+               rc = do_tls_setsockopt_no_pad(sk, optval, optlen);
+               break;
        default:
                rc = -ENOPROTOOPT;
                break;
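
For completeness, driving the new option from user space looks roughly like
this (a sketch, not part of the series: the helper name is made up, SOL_TLS
is defined locally in case libc headers lack it, and kTLS RX with a TLS 1.3
cipher is assumed to already be configured on the socket):

#include <sys/socket.h>
#include <linux/tls.h>

#ifndef SOL_TLS
#define SOL_TLS 282
#endif

static int tls_rx_expect_no_pad(int fd)
{
	unsigned int val = 1;
	socklen_t len = sizeof(val);

	/* Promise the kernel that the peer never pads TLS 1.3 records,
	 * which lets the RX path attempt opportunistic zero-copy decrypt.
	 */
	if (setsockopt(fd, SOL_TLS, TLS_RX_EXPECT_NO_PAD, &val, sizeof(val)))
		return -1;

	/* Read the flag back; this fails with EINVAL unless the socket
	 * runs TLS 1.3 with a software or hardware RX context.
	 */
	return getsockopt(fd, SOL_TLS, TLS_RX_EXPECT_NO_PAD, &val, &len);
}
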
@@ -921,6 +992,8 @@ static void tls_update(struct sock *sk, struct proto *p,
 {
        struct tls_context *ctx;
 
+       WARN_ON_ONCE(sk->sk_prot == p);
+
        ctx = tls_get_ctx(sk);
        if (likely(ctx)) {
                ctx->sk_write_space = write_space;
@@ -932,6 +1005,23 @@ static void tls_update(struct sock *sk, struct proto *p,
        }
 }
 
+static u16 tls_user_config(struct tls_context *ctx, bool tx)
+{
+       u16 config = tx ? ctx->tx_conf : ctx->rx_conf;
+
+       switch (config) {
+       case TLS_BASE:
+               return TLS_CONF_BASE;
+       case TLS_SW:
+               return TLS_CONF_SW;
+       case TLS_HW:
+               return TLS_CONF_HW;
+       case TLS_HW_RECORD:
+               return TLS_CONF_HW_RECORD;
+       }
+       return 0;
+}
+
 static int tls_get_info(const struct sock *sk, struct sk_buff *skb)
 {
        u16 version, cipher_type;
@@ -974,6 +1064,11 @@ static int tls_get_info(const struct sock *sk, struct sk_buff *skb)
                if (err)
                        goto nla_failure;
        }
+       if (ctx->rx_no_pad) {
+               err = nla_put_flag(skb, TLS_INFO_RX_NO_PAD);
+               if (err)
+                       goto nla_failure;
+       }
 
        rcu_read_unlock();
        nla_nest_end(skb, start);
@@ -995,6 +1090,7 @@ static size_t tls_get_info_size(const struct sock *sk)
                nla_total_size(sizeof(u16)) +   /* TLS_INFO_RXCONF */
                nla_total_size(sizeof(u16)) +   /* TLS_INFO_TXCONF */
                nla_total_size(0) +             /* TLS_INFO_ZC_RO_TX */
+               nla_total_size(0) +             /* TLS_INFO_RX_NO_PAD */
                0;
 
        return size;
index feeceb0..1246e52 100644 (file)
@@ -6,6 +6,8 @@
 #include <net/snmp.h>
 #include <net/tls.h>
 
+#include "tls.h"
+
 #ifdef CONFIG_PROC_FS
 static const struct snmp_mib tls_mib_list[] = {
        SNMP_MIB_ITEM("TlsCurrTxSw", LINUX_MIB_TLSCURRTXSW),
@@ -18,6 +20,7 @@ static const struct snmp_mib tls_mib_list[] = {
        SNMP_MIB_ITEM("TlsRxDevice", LINUX_MIB_TLSRXDEVICE),
        SNMP_MIB_ITEM("TlsDecryptError", LINUX_MIB_TLSDECRYPTERROR),
        SNMP_MIB_ITEM("TlsRxDeviceResync", LINUX_MIB_TLSRXDEVICERESYNC),
+       SNMP_MIB_ITEM("TlsDecryptRetry", LINUX_MIN_TLSDECRYPTRETRY),
        SNMP_MIB_SENTINEL
 };
 
index 0513f82..09370f8 100644 (file)
 #include <net/strparser.h>
 #include <net/tls.h>
 
+#include "tls.h"
+
 struct tls_decrypt_arg {
        bool zc;
        bool async;
+       u8 tail;
+};
+
+struct tls_decrypt_ctx {
+       u8 iv[MAX_IV_SIZE];
+       u8 aad[TLS_MAX_AAD_SIZE];
+       u8 tail;
+       struct scatterlist sg[];
 };
 
 noinline void tls_err_abort(struct sock *sk, int err)
@@ -133,7 +143,8 @@ static int skb_nsg(struct sk_buff *skb, int offset, int len)
         return __skb_nsg(skb, offset, len, 0);
 }
 
-static int padding_length(struct tls_prot_info *prot, struct sk_buff *skb)
+static int tls_padding_length(struct tls_prot_info *prot, struct sk_buff *skb,
+                             struct tls_decrypt_arg *darg)
 {
        struct strp_msg *rxm = strp_msg(skb);
        struct tls_msg *tlm = tls_msg(skb);
@@ -142,7 +153,7 @@ static int padding_length(struct tls_prot_info *prot, struct sk_buff *skb)
        /* Determine zero-padding length */
        if (prot->version == TLS_1_3_VERSION) {
                int offset = rxm->full_len - TLS_TAG_SIZE - 1;
-               char content_type = 0;
+               char content_type = darg->zc ? darg->tail : 0;
                int err;
 
                while (content_type == 0) {
@@ -267,9 +278,6 @@ static int tls_do_decryption(struct sock *sk,
        }
        darg->async = false;
 
-       if (ret == -EBADMSG)
-               TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR);
-
        return ret;
 }
 
@@ -518,7 +526,8 @@ static int tls_do_encryption(struct sock *sk,
        memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
               prot->iv_size + prot->salt_size);
 
-       xor_iv_with_seq(prot, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
+       tls_xor_iv_with_seq(prot, rec->iv_data + iv_offset,
+                           tls_ctx->tx.rec_seq);
 
        sge->offset += prot->prepend_size;
        sge->length -= prot->prepend_size;
@@ -955,7 +964,7 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
        lock_sock(sk);
 
        if (unlikely(msg->msg_controllen)) {
-               ret = tls_proccess_cmsg(sk, msg, &record_type);
+               ret = tls_process_cmsg(sk, msg, &record_type);
                if (ret) {
                        if (ret == -EINPROGRESS)
                                num_async++;
@@ -1293,54 +1302,50 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
        return ret;
 }
 
-static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock,
-                                    bool nonblock, long timeo, int *err)
+static int
+tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
+               long timeo)
 {
        struct tls_context *tls_ctx = tls_get_ctx(sk);
        struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
-       struct sk_buff *skb;
        DEFINE_WAIT_FUNC(wait, woken_wake_function);
 
-       while (!(skb = ctx->recv_pkt) && sk_psock_queue_empty(psock)) {
-               if (sk->sk_err) {
-                       *err = sock_error(sk);
-                       return NULL;
-               }
+       while (!ctx->recv_pkt) {
+               if (!sk_psock_queue_empty(psock))
+                       return 0;
+
+               if (sk->sk_err)
+                       return sock_error(sk);
 
                if (!skb_queue_empty(&sk->sk_receive_queue)) {
                        __strp_unpause(&ctx->strp);
                        if (ctx->recv_pkt)
-                               return ctx->recv_pkt;
+                               break;
                }
 
                if (sk->sk_shutdown & RCV_SHUTDOWN)
-                       return NULL;
+                       return 0;
 
                if (sock_flag(sk, SOCK_DONE))
-                       return NULL;
+                       return 0;
 
-               if (nonblock || !timeo) {
-                       *err = -EAGAIN;
-                       return NULL;
-               }
+               if (nonblock || !timeo)
+                       return -EAGAIN;
 
                add_wait_queue(sk_sleep(sk), &wait);
                sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
                sk_wait_event(sk, &timeo,
-                             ctx->recv_pkt != skb ||
-                             !sk_psock_queue_empty(psock),
+                             ctx->recv_pkt || !sk_psock_queue_empty(psock),
                              &wait);
                sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
                remove_wait_queue(sk_sleep(sk), &wait);
 
                /* Handle signals */
-               if (signal_pending(current)) {
-                       *err = sock_intr_errno(timeo);
-                       return NULL;
-               }
+               if (signal_pending(current))
+                       return sock_intr_errno(timeo);
        }
 
-       return skb;
+       return 1;
 }
 
 static int tls_setup_from_iter(struct iov_iter *from,
@@ -1415,21 +1420,22 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
        struct tls_context *tls_ctx = tls_get_ctx(sk);
        struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
        struct tls_prot_info *prot = &tls_ctx->prot_info;
+       int n_sgin, n_sgout, aead_size, err, pages = 0;
        struct strp_msg *rxm = strp_msg(skb);
        struct tls_msg *tlm = tls_msg(skb);
-       int n_sgin, n_sgout, nsg, mem_size, aead_size, err, pages = 0;
        struct aead_request *aead_req;
        struct sk_buff *unused;
-       u8 *aad, *iv, *mem = NULL;
        struct scatterlist *sgin = NULL;
        struct scatterlist *sgout = NULL;
-       const int data_len = rxm->full_len - prot->overhead_size +
-                            prot->tail_size;
+       const int data_len = rxm->full_len - prot->overhead_size;
+       int tail_pages = !!prot->tail_size;
+       struct tls_decrypt_ctx *dctx;
        int iv_offset = 0;
+       u8 *mem;
 
        if (darg->zc && (out_iov || out_sg)) {
                if (out_iov)
-                       n_sgout = 1 +
+                       n_sgout = 1 + tail_pages +
                                iov_iter_npages_cap(out_iov, INT_MAX, data_len);
                else
                        n_sgout = sg_nents(out_sg);
@@ -1447,36 +1453,30 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
        /* Increment to accommodate AAD */
        n_sgin = n_sgin + 1;
 
-       nsg = n_sgin + n_sgout;
-
-       aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
-       mem_size = aead_size + (nsg * sizeof(struct scatterlist));
-       mem_size = mem_size + prot->aad_size;
-       mem_size = mem_size + MAX_IV_SIZE;
-
        /* Allocate a single block of memory which contains
-        * aead_req || sgin[] || sgout[] || aad || iv.
-        * This order achieves correct alignment for aead_req, sgin, sgout.
+        *   aead_req || tls_decrypt_ctx.
+        * Both structs are variable length.
         */
-       mem = kmalloc(mem_size, sk->sk_allocation);
+       aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
+       mem = kmalloc(aead_size + struct_size(dctx, sg, n_sgin + n_sgout),
+                     sk->sk_allocation);
        if (!mem)
                return -ENOMEM;
 
        /* Segment the allocated memory */
        aead_req = (struct aead_request *)mem;
-       sgin = (struct scatterlist *)(mem + aead_size);
-       sgout = sgin + n_sgin;
-       aad = (u8 *)(sgout + n_sgout);
-       iv = aad + prot->aad_size;
+       dctx = (struct tls_decrypt_ctx *)(mem + aead_size);
+       sgin = &dctx->sg[0];
+       sgout = &dctx->sg[n_sgin];
 
        /* For CCM based ciphers, first byte of nonce+iv is a constant */
        switch (prot->cipher_type) {
        case TLS_CIPHER_AES_CCM_128:
-               iv[0] = TLS_AES_CCM_IV_B0_BYTE;
+               dctx->iv[0] = TLS_AES_CCM_IV_B0_BYTE;
                iv_offset = 1;
                break;
        case TLS_CIPHER_SM4_CCM:
-               iv[0] = TLS_SM4_CCM_IV_B0_BYTE;
+               dctx->iv[0] = TLS_SM4_CCM_IV_B0_BYTE;
                iv_offset = 1;
                break;
        }
@@ -1484,46 +1484,49 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
        /* Prepare IV */
        if (prot->version == TLS_1_3_VERSION ||
            prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305) {
-               memcpy(iv + iv_offset, tls_ctx->rx.iv,
+               memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv,
                       prot->iv_size + prot->salt_size);
        } else {
                err = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
-                                   iv + iv_offset + prot->salt_size,
+                                   &dctx->iv[iv_offset] + prot->salt_size,
                                    prot->iv_size);
-               if (err < 0) {
-                       kfree(mem);
-                       return err;
-               }
-               memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
+               if (err < 0)
+                       goto exit_free;
+               memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv, prot->salt_size);
        }
-       xor_iv_with_seq(prot, iv + iv_offset, tls_ctx->rx.rec_seq);
+       tls_xor_iv_with_seq(prot, &dctx->iv[iv_offset], tls_ctx->rx.rec_seq);
 
        /* Prepare AAD */
-       tls_make_aad(aad, rxm->full_len - prot->overhead_size +
+       tls_make_aad(dctx->aad, rxm->full_len - prot->overhead_size +
                     prot->tail_size,
                     tls_ctx->rx.rec_seq, tlm->control, prot);
 
        /* Prepare sgin */
        sg_init_table(sgin, n_sgin);
-       sg_set_buf(&sgin[0], aad, prot->aad_size);
+       sg_set_buf(&sgin[0], dctx->aad, prot->aad_size);
        err = skb_to_sgvec(skb, &sgin[1],
                           rxm->offset + prot->prepend_size,
                           rxm->full_len - prot->prepend_size);
-       if (err < 0) {
-               kfree(mem);
-               return err;
-       }
+       if (err < 0)
+               goto exit_free;
 
        if (n_sgout) {
                if (out_iov) {
                        sg_init_table(sgout, n_sgout);
-                       sg_set_buf(&sgout[0], aad, prot->aad_size);
+                       sg_set_buf(&sgout[0], dctx->aad, prot->aad_size);
 
                        err = tls_setup_from_iter(out_iov, data_len,
                                                  &pages, &sgout[1],
-                                                 (n_sgout - 1));
+                                                 (n_sgout - 1 - tail_pages));
                        if (err < 0)
                                goto fallback_to_reg_recv;
+
+                       if (prot->tail_size) {
+                               sg_unmark_end(&sgout[pages]);
+                               sg_set_buf(&sgout[pages + 1], &dctx->tail,
+                                          prot->tail_size);
+                               sg_mark_end(&sgout[pages + 1]);
+                       }
                } else if (out_sg) {
                        memcpy(sgout, out_sg, n_sgout * sizeof(*sgout));
                } else {
@@ -1537,15 +1540,18 @@ fallback_to_reg_recv:
        }
 
        /* Prepare and submit AEAD request */
-       err = tls_do_decryption(sk, skb, sgin, sgout, iv,
-                               data_len, aead_req, darg);
+       err = tls_do_decryption(sk, skb, sgin, sgout, dctx->iv,
+                               data_len + prot->tail_size, aead_req, darg);
        if (darg->async)
                return 0;
 
+       if (prot->tail_size)
+               darg->tail = dctx->tail;
+
        /* Release the pages in case iov was mapped to pages */
        for (; pages > 0; pages--)
                put_page(sg_page(&sgout[pages]));
-
+exit_free:
        kfree(mem);
        return err;
 }
@@ -1579,13 +1585,23 @@ static int decrypt_skb_update(struct sock *sk, struct sk_buff *skb,
        }
 
        err = decrypt_internal(sk, skb, dest, NULL, darg);
-       if (err < 0)
+       if (err < 0) {
+               if (err == -EBADMSG)
+                       TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR);
                return err;
+       }
        if (darg->async)
                goto decrypt_next;
+       /* If opportunistic TLS 1.3 ZC failed, retry without ZC */
+       if (unlikely(darg->zc && prot->version == TLS_1_3_VERSION &&
+                    darg->tail != TLS_RECORD_TYPE_DATA)) {
+               darg->zc = false;
+               TLS_INC_STATS(sock_net(sk), LINUX_MIN_TLSDECRYPTRETRY);
+               return decrypt_skb_update(sk, skb, dest, darg);
+       }
 
 decrypt_done:
-       pad = padding_length(prot, skb);
+       pad = tls_padding_length(prot, skb, darg);
        if (pad < 0)
                return pad;
 
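The retry above leans on TLS 1.3 record framing: the real content type is
the last non-zero byte of the decrypted plaintext (TLSInnerPlaintext in
RFC 8446), so a record that honors TLS_RX_EXPECT_NO_PAD ends in exactly one
type byte. Illustratively (values per RFC 8446 and the kernel uapi):

/* TLSInnerPlaintext, RFC 8446 section 5.2:
 *
 *     content || content_type (1 byte) || zeros (optional padding)
 *
 * With no padding, the final byte is the type itself, e.g. for
 * application data:
 *
 *     | payload .......................... | 0x17 |
 *                                             ^ lands in darg->tail;
 *                                               0x17 == TLS_RECORD_TYPE_DATA
 *
 * Any other tail value means the no-pad assumption did not hold for
 * this record (padding or a control record), so the zero-copy result
 * is thrown away and the record is decrypted again without ZC.
 */
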
@@ -1717,6 +1733,24 @@ out:
        return copied ? : err;
 }
 
+static void
+tls_read_flush_backlog(struct sock *sk, struct tls_prot_info *prot,
+                      size_t len_left, size_t decrypted, ssize_t done,
+                      size_t *flushed_at)
+{
+       size_t max_rec;
+
+       if (len_left <= decrypted)
+               return;
+
+       max_rec = prot->overhead_size - prot->tail_size + TLS_MAX_PAYLOAD_SIZE;
+       if (done - *flushed_at < SZ_128K && tcp_inq(sk) > max_rec)
+               return;
+
+       *flushed_at = done;
+       sk_flush_backlog(sk);
+}
+
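Restating the helper's policy as a table (derived from the code above;
max_rec is one maximally sized record, header and tag plus payload):

/* tls_read_flush_backlog() decision:
 *
 *   copied since last flush    bytes queued in TCP    action
 *   -----------------------    -------------------    ------
 *   < 128K                     > max_rec              skip
 *   >= 128K                    any                    flush
 *   any                        <= max_rec             flush
 *
 * i.e. flush roughly every 128K of copied data, or once the receive
 * queue runs low enough that the backlog may be needed for progress;
 * never bother when the current record already covers what is left
 * of the request (len_left <= decrypted).
 */
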
 int tls_sw_recvmsg(struct sock *sk,
                   struct msghdr *msg,
                   size_t len,
@@ -1729,6 +1763,7 @@ int tls_sw_recvmsg(struct sock *sk,
        struct sk_psock *psock;
        unsigned char control = 0;
        ssize_t decrypted = 0;
+       size_t flushed_at = 0;
        struct strp_msg *rxm;
        struct tls_msg *tlm;
        struct sk_buff *skb;
@@ -1767,14 +1802,14 @@ int tls_sw_recvmsg(struct sock *sk,
        timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
 
        zc_capable = !bpf_strp_enabled && !is_kvec && !is_peek &&
-                    prot->version != TLS_1_3_VERSION;
+               ctx->zc_capable;
        decrypted = 0;
        while (len && (decrypted + copied < target || ctx->recv_pkt)) {
                struct tls_decrypt_arg darg = {};
                int to_decrypt, chunk;
 
-               skb = tls_wait_data(sk, psock, flags & MSG_DONTWAIT, timeo, &err);
-               if (!skb) {
+               err = tls_rx_rec_wait(sk, psock, flags & MSG_DONTWAIT, timeo);
+               if (err <= 0) {
                        if (psock) {
                                chunk = sk_msg_recvmsg(sk, psock, msg, len,
                                                       flags);
@@ -1784,6 +1819,7 @@ int tls_sw_recvmsg(struct sock *sk,
                        goto recv_end;
                }
 
+               skb = ctx->recv_pkt;
                rxm = strp_msg(skb);
                tlm = tls_msg(skb);
 
@@ -1818,6 +1854,10 @@ int tls_sw_recvmsg(struct sock *sk,
                if (err <= 0)
                        goto recv_end;
 
+               /* periodically flush backlog, and feed strparser */
+               tls_read_flush_backlog(sk, prot, len, to_decrypt,
+                                      decrypted + copied, &flushed_at);
+
                ctx->recv_pkt = NULL;
                __strp_unpause(&ctx->strp);
                __skb_queue_tail(&ctx->rx_list, skb);
@@ -1946,11 +1986,13 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
        } else {
                struct tls_decrypt_arg darg = {};
 
-               skb = tls_wait_data(sk, NULL, flags & SPLICE_F_NONBLOCK, timeo,
-                                   &err);
-               if (!skb)
+               err = tls_rx_rec_wait(sk, NULL, flags & SPLICE_F_NONBLOCK,
+                                     timeo);
+               if (err <= 0)
                        goto splice_read_end;
 
+               skb = ctx->recv_pkt;
+
                err = decrypt_skb_update(sk, skb, NULL, &darg);
                if (err < 0) {
                        tls_err_abort(sk, -EBADMSG);
@@ -2227,12 +2269,23 @@ static void tx_work_handler(struct work_struct *work)
        mutex_unlock(&tls_ctx->tx_lock);
 }
 
+static bool tls_is_tx_ready(struct tls_sw_context_tx *ctx)
+{
+       struct tls_rec *rec;
+
+       rec = list_first_entry_or_null(&ctx->tx_list, struct tls_rec, list);
+       if (!rec)
+               return false;
+
+       return READ_ONCE(rec->tx_ready);
+}
+
 void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
 {
        struct tls_sw_context_tx *tx_ctx = tls_sw_ctx_tx(ctx);
 
        /* Schedule the transmission if tx list is ready */
-       if (is_tx_ready(tx_ctx) &&
+       if (tls_is_tx_ready(tx_ctx) &&
            !test_and_set_bit(BIT_TX_SCHEDULED, &tx_ctx->tx_bitmask))
                schedule_delayed_work(&tx_ctx->tx_work.work, 0);
 }
@@ -2249,6 +2302,14 @@ void tls_sw_strparser_arm(struct sock *sk, struct tls_context *tls_ctx)
        strp_check_rcv(&rx_ctx->strp);
 }
 
+void tls_update_rx_zc_capable(struct tls_context *tls_ctx)
+{
+       struct tls_sw_context_rx *rx_ctx = tls_sw_ctx_rx(tls_ctx);
+
+       rx_ctx->zc_capable = tls_ctx->rx_no_pad ||
+               tls_ctx->prot_info.version != TLS_1_3_VERSION;
+}
+
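The resulting RX zero-copy capability, combining this helper with the
recvmsg change above:

/* ctx->zc_capable:
 *
 *   protocol       rx_no_pad    zc_capable
 *   -----------    ---------    -------------------------------------
 *   <= TLS 1.2     n/a          true  (record type visible in header)
 *   TLS 1.3        0            false (type hidden until decrypted)
 *   TLS 1.3        1            true  (opportunistic, with ZC retry)
 *
 * tls_sw_recvmsg() additionally requires !bpf_strp_enabled, !is_kvec
 * and !is_peek before taking the zero-copy path.
 */
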
 int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
 {
        struct tls_context *tls_ctx = tls_get_ctx(sk);
@@ -2422,13 +2483,6 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
                goto free_priv;
        }
 
-       /* Sanity-check the sizes for stack allocations. */
-       if (iv_size > MAX_IV_SIZE || nonce_size > MAX_IV_SIZE ||
-           rec_seq_size > TLS_MAX_REC_SEQ_SIZE || tag_size != TLS_TAG_SIZE) {
-               rc = -EINVAL;
-               goto free_priv;
-       }
-
        if (crypto_info->version == TLS_1_3_VERSION) {
                nonce_size = 0;
                prot->aad_size = TLS_HEADER_SIZE;
@@ -2438,6 +2492,14 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
                prot->tail_size = 0;
        }
 
+       /* Sanity-check the sizes for stack allocations. */
+       if (iv_size > MAX_IV_SIZE || nonce_size > MAX_IV_SIZE ||
+           rec_seq_size > TLS_MAX_REC_SEQ_SIZE || tag_size != TLS_TAG_SIZE ||
+           prot->aad_size > TLS_MAX_AAD_SIZE) {
+               rc = -EINVAL;
+               goto free_priv;
+       }
+
        prot->version = crypto_info->version;
        prot->cipher_type = crypto_info->cipher_type;
        prot->prepend_size = TLS_HEADER_SIZE + nonce_size;
@@ -2484,12 +2546,10 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
        if (sw_ctx_rx) {
                tfm = crypto_aead_tfm(sw_ctx_rx->aead_recv);
 
-               if (crypto_info->version == TLS_1_3_VERSION)
-                       sw_ctx_rx->async_capable = 0;
-               else
-                       sw_ctx_rx->async_capable =
-                               !!(tfm->__crt_alg->cra_flags &
-                                  CRYPTO_ALG_ASYNC);
+               tls_update_rx_zc_capable(ctx);
+               sw_ctx_rx->async_capable =
+                       crypto_info->version != TLS_1_3_VERSION &&
+                       !!(tfm->__crt_alg->cra_flags & CRYPTO_ALG_ASYNC);
 
                /* Set up strparser */
                memset(&cb, 0, sizeof(cb));
index 7e1330f..825669e 100644 (file)
@@ -38,6 +38,8 @@
 #include <net/tls.h>
 #include <net/tls_toe.h>
 
+#include "tls.h"
+
 static LIST_HEAD(device_list);
 static DEFINE_SPINLOCK(device_spinlock);
 
index 1bed373..bf338b7 100644 (file)
 
 #include "scm.h"
 
-spinlock_t unix_table_locks[2 * UNIX_HASH_SIZE];
-EXPORT_SYMBOL_GPL(unix_table_locks);
-struct hlist_head unix_socket_table[2 * UNIX_HASH_SIZE];
-EXPORT_SYMBOL_GPL(unix_socket_table);
 static atomic_long_t unix_nr_socks;
+static struct hlist_head bsd_socket_buckets[UNIX_HASH_SIZE / 2];
+static spinlock_t bsd_socket_locks[UNIX_HASH_SIZE / 2];
 
 /* SMP locking strategy:
- *    hash table is protected with spinlock unix_table_locks
- *    each socket state is protected by separate spin lock.
+ *    hash table is protected with spinlock.
+ *    each socket state is protected by separate spinlock.
  */
 
 static unsigned int unix_unbound_hash(struct sock *sk)
@@ -137,12 +135,12 @@ static unsigned int unix_unbound_hash(struct sock *sk)
        hash ^= hash >> 8;
        hash ^= sk->sk_type;
 
-       return UNIX_HASH_SIZE + (hash & (UNIX_HASH_SIZE - 1));
+       return hash & UNIX_HASH_MOD;
 }
 
 static unsigned int unix_bsd_hash(struct inode *i)
 {
-       return i->i_ino & (UNIX_HASH_SIZE - 1);
+       return i->i_ino & UNIX_HASH_MOD;
 }
 
 static unsigned int unix_abstract_hash(struct sockaddr_un *sunaddr,
@@ -155,26 +153,34 @@ static unsigned int unix_abstract_hash(struct sockaddr_un *sunaddr,
        hash ^= hash >> 8;
        hash ^= type;
 
-       return hash & (UNIX_HASH_SIZE - 1);
+       return UNIX_HASH_MOD + 1 + (hash & UNIX_HASH_MOD);
 }
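
Taken together with the BSD-socket buckets above, the layout is now
(assuming UNIX_HASH_MOD == UNIX_HASH_SIZE / 2 - 1, as the rest of the
series defines it):

/* per-netns net->unx.table buckets:
 *
 *   [0 .. UNIX_HASH_MOD]                      unbound and pathname sockets
 *   [UNIX_HASH_MOD + 1 .. UNIX_HASH_SIZE - 1] abstract sockets
 *
 * Pathname (BSD) sockets are additionally chained by inode into the
 * global bsd_socket_buckets[UNIX_HASH_SIZE / 2], which stays global
 * because a socket bound to a path must be reachable from any netns
 * that can see the inode.
 */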
 
-static void unix_table_double_lock(unsigned int hash1, unsigned int hash2)
+static void unix_table_double_lock(struct net *net,
+                                  unsigned int hash1, unsigned int hash2)
 {
-       /* hash1 and hash2 is never the same because
-        * one is between 0 and UNIX_HASH_SIZE - 1, and
-        * another is between UNIX_HASH_SIZE and UNIX_HASH_SIZE * 2.
-        */
+       if (hash1 == hash2) {
+               spin_lock(&net->unx.table.locks[hash1]);
+               return;
+       }
+
        if (hash1 > hash2)
                swap(hash1, hash2);
 
-       spin_lock(&unix_table_locks[hash1]);
-       spin_lock_nested(&unix_table_locks[hash2], SINGLE_DEPTH_NESTING);
+       spin_lock(&net->unx.table.locks[hash1]);
+       spin_lock_nested(&net->unx.table.locks[hash2], SINGLE_DEPTH_NESTING);
 }
 
-static void unix_table_double_unlock(unsigned int hash1, unsigned int hash2)
+static void unix_table_double_unlock(struct net *net,
+                                    unsigned int hash1, unsigned int hash2)
 {
-       spin_unlock(&unix_table_locks[hash1]);
-       spin_unlock(&unix_table_locks[hash2]);
+       if (hash1 == hash2) {
+               spin_unlock(&net->unx.table.locks[hash1]);
+               return;
+       }
+
+       spin_unlock(&net->unx.table.locks[hash1]);
+       spin_unlock(&net->unx.table.locks[hash2]);
 }
 
 #ifdef CONFIG_SECURITY_NETWORK
@@ -300,34 +306,52 @@ static void __unix_remove_socket(struct sock *sk)
        sk_del_node_init(sk);
 }
 
-static void __unix_insert_socket(struct sock *sk)
+static void __unix_insert_socket(struct net *net, struct sock *sk)
 {
        DEBUG_NET_WARN_ON_ONCE(!sk_unhashed(sk));
-       sk_add_node(sk, &unix_socket_table[sk->sk_hash]);
+       sk_add_node(sk, &net->unx.table.buckets[sk->sk_hash]);
 }
 
-static void __unix_set_addr_hash(struct sock *sk, struct unix_address *addr,
-                                unsigned int hash)
+static void __unix_set_addr_hash(struct net *net, struct sock *sk,
+                                struct unix_address *addr, unsigned int hash)
 {
        __unix_remove_socket(sk);
        smp_store_release(&unix_sk(sk)->addr, addr);
 
        sk->sk_hash = hash;
-       __unix_insert_socket(sk);
+       __unix_insert_socket(net, sk);
 }
 
-static void unix_remove_socket(struct sock *sk)
+static void unix_remove_socket(struct net *net, struct sock *sk)
 {
-       spin_lock(&unix_table_locks[sk->sk_hash]);
+       spin_lock(&net->unx.table.locks[sk->sk_hash]);
        __unix_remove_socket(sk);
-       spin_unlock(&unix_table_locks[sk->sk_hash]);
+       spin_unlock(&net->unx.table.locks[sk->sk_hash]);
+}
+
+static void unix_insert_unbound_socket(struct net *net, struct sock *sk)
+{
+       spin_lock(&net->unx.table.locks[sk->sk_hash]);
+       __unix_insert_socket(net, sk);
+       spin_unlock(&net->unx.table.locks[sk->sk_hash]);
+}
+
+static void unix_insert_bsd_socket(struct sock *sk)
+{
+       spin_lock(&bsd_socket_locks[sk->sk_hash]);
+       sk_add_bind_node(sk, &bsd_socket_buckets[sk->sk_hash]);
+       spin_unlock(&bsd_socket_locks[sk->sk_hash]);
 }
 
-static void unix_insert_unbound_socket(struct sock *sk)
+static void unix_remove_bsd_socket(struct sock *sk)
 {
-       spin_lock(&unix_table_locks[sk->sk_hash]);
-       __unix_insert_socket(sk);
-       spin_unlock(&unix_table_locks[sk->sk_hash]);
+       if (!hlist_unhashed(&sk->sk_bind_node)) {
+               spin_lock(&bsd_socket_locks[sk->sk_hash]);
+               __sk_del_bind_node(sk);
+               spin_unlock(&bsd_socket_locks[sk->sk_hash]);
+
+               sk_node_init(&sk->sk_bind_node);
+       }
 }
 
 static struct sock *__unix_find_socket_byname(struct net *net,
@@ -336,12 +360,9 @@ static struct sock *__unix_find_socket_byname(struct net *net,
 {
        struct sock *s;
 
-       sk_for_each(s, &unix_socket_table[hash]) {
+       sk_for_each(s, &net->unx.table.buckets[hash]) {
                struct unix_sock *u = unix_sk(s);
 
-               if (!net_eq(sock_net(s), net))
-                       continue;
-
                if (u->addr->len == len &&
                    !memcmp(u->addr->name, sunname, len))
                        return s;
@@ -355,11 +376,11 @@ static inline struct sock *unix_find_socket_byname(struct net *net,
 {
        struct sock *s;
 
-       spin_lock(&unix_table_locks[hash]);
+       spin_lock(&net->unx.table.locks[hash]);
        s = __unix_find_socket_byname(net, sunname, len, hash);
        if (s)
                sock_hold(s);
-       spin_unlock(&unix_table_locks[hash]);
+       spin_unlock(&net->unx.table.locks[hash]);
        return s;
 }
 
@@ -368,17 +389,17 @@ static struct sock *unix_find_socket_byinode(struct inode *i)
        unsigned int hash = unix_bsd_hash(i);
        struct sock *s;
 
-       spin_lock(&unix_table_locks[hash]);
-       sk_for_each(s, &unix_socket_table[hash]) {
+       spin_lock(&bsd_socket_locks[hash]);
+       sk_for_each_bound(s, &bsd_socket_buckets[hash]) {
                struct dentry *dentry = unix_sk(s)->path.dentry;
 
                if (dentry && d_backing_inode(dentry) == i) {
                        sock_hold(s);
-                       spin_unlock(&unix_table_locks[hash]);
+                       spin_unlock(&bsd_socket_locks[hash]);
                        return s;
                }
        }
-       spin_unlock(&unix_table_locks[hash]);
+       spin_unlock(&bsd_socket_locks[hash]);
        return NULL;
 }
 
@@ -576,12 +597,13 @@ static void unix_sock_destructor(struct sock *sk)
 static void unix_release_sock(struct sock *sk, int embrion)
 {
        struct unix_sock *u = unix_sk(sk);
-       struct path path;
        struct sock *skpair;
        struct sk_buff *skb;
+       struct path path;
        int state;
 
-       unix_remove_socket(sk);
+       unix_remove_socket(sock_net(sk), sk);
+       unix_remove_bsd_socket(sk);
 
        /* Clear state */
        unix_state_lock(sk);
@@ -928,9 +950,9 @@ static struct sock *unix_create1(struct net *net, struct socket *sock, int kern,
        init_waitqueue_head(&u->peer_wait);
        init_waitqueue_func_entry(&u->peer_wake, unix_dgram_peer_wake_relay);
        memset(&u->scm_stat, 0, sizeof(struct scm_stat));
-       unix_insert_unbound_socket(sk);
+       unix_insert_unbound_socket(net, sk);
 
-       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+       sock_prot_inuse_add(net, sk->sk_prot, 1);
 
        return sk;
 
@@ -991,8 +1013,8 @@ static int unix_release(struct socket *sock)
        return 0;
 }
 
-static struct sock *unix_find_bsd(struct net *net, struct sockaddr_un *sunaddr,
-                                 int addr_len, int type)
+static struct sock *unix_find_bsd(struct sockaddr_un *sunaddr, int addr_len,
+                                 int type)
 {
        struct inode *inode;
        struct path path;
@@ -1061,7 +1083,7 @@ static struct sock *unix_find_other(struct net *net,
        struct sock *sk;
 
        if (sunaddr->sun_path[0])
-               sk = unix_find_bsd(net, sunaddr, addr_len, type);
+               sk = unix_find_bsd(sunaddr, addr_len, type);
        else
                sk = unix_find_abstract(net, sunaddr, addr_len, type);
 
@@ -1072,6 +1094,7 @@ static int unix_autobind(struct sock *sk)
 {
        unsigned int new_hash, old_hash = sk->sk_hash;
        struct unix_sock *u = unix_sk(sk);
+       struct net *net = sock_net(sk);
        struct unix_address *addr;
        u32 lastnum, ordernum;
        int err;
@@ -1100,11 +1123,10 @@ retry:
        sprintf(addr->name->sun_path + 1, "%05x", ordernum);
 
        new_hash = unix_abstract_hash(addr->name, addr->len, sk->sk_type);
-       unix_table_double_lock(old_hash, new_hash);
+       unix_table_double_lock(net, old_hash, new_hash);
 
-       if (__unix_find_socket_byname(sock_net(sk), addr->name, addr->len,
-                                     new_hash)) {
-               unix_table_double_unlock(old_hash, new_hash);
+       if (__unix_find_socket_byname(net, addr->name, addr->len, new_hash)) {
+               unix_table_double_unlock(net, old_hash, new_hash);
 
                /* __unix_find_socket_byname() may take a long time if many names
                 * are already in use.
@@ -1121,8 +1143,8 @@ retry:
                goto retry;
        }
 
-       __unix_set_addr_hash(sk, addr, new_hash);
-       unix_table_double_unlock(old_hash, new_hash);
+       __unix_set_addr_hash(net, sk, addr, new_hash);
+       unix_table_double_unlock(net, old_hash, new_hash);
        err = 0;
 
 out:   mutex_unlock(&u->bindlock);
@@ -1136,6 +1158,7 @@ static int unix_bind_bsd(struct sock *sk, struct sockaddr_un *sunaddr,
               (SOCK_INODE(sk->sk_socket)->i_mode & ~current_umask());
        unsigned int new_hash, old_hash = sk->sk_hash;
        struct unix_sock *u = unix_sk(sk);
+       struct net *net = sock_net(sk);
        struct user_namespace *ns; // barf...
        struct unix_address *addr;
        struct dentry *dentry;
@@ -1176,11 +1199,12 @@ static int unix_bind_bsd(struct sock *sk, struct sockaddr_un *sunaddr,
                goto out_unlock;
 
        new_hash = unix_bsd_hash(d_backing_inode(dentry));
-       unix_table_double_lock(old_hash, new_hash);
+       unix_table_double_lock(net, old_hash, new_hash);
        u->path.mnt = mntget(parent.mnt);
        u->path.dentry = dget(dentry);
-       __unix_set_addr_hash(sk, addr, new_hash);
-       unix_table_double_unlock(old_hash, new_hash);
+       __unix_set_addr_hash(net, sk, addr, new_hash);
+       unix_table_double_unlock(net, old_hash, new_hash);
+       unix_insert_bsd_socket(sk);
        mutex_unlock(&u->bindlock);
        done_path_create(&parent, dentry);
        return 0;
@@ -1203,6 +1227,7 @@ static int unix_bind_abstract(struct sock *sk, struct sockaddr_un *sunaddr,
 {
        unsigned int new_hash, old_hash = sk->sk_hash;
        struct unix_sock *u = unix_sk(sk);
+       struct net *net = sock_net(sk);
        struct unix_address *addr;
        int err;
 
@@ -1220,19 +1245,18 @@ static int unix_bind_abstract(struct sock *sk, struct sockaddr_un *sunaddr,
        }
 
        new_hash = unix_abstract_hash(addr->name, addr->len, sk->sk_type);
-       unix_table_double_lock(old_hash, new_hash);
+       unix_table_double_lock(net, old_hash, new_hash);
 
-       if (__unix_find_socket_byname(sock_net(sk), addr->name, addr->len,
-                                     new_hash))
+       if (__unix_find_socket_byname(net, addr->name, addr->len, new_hash))
                goto out_spin;
 
-       __unix_set_addr_hash(sk, addr, new_hash);
-       unix_table_double_unlock(old_hash, new_hash);
+       __unix_set_addr_hash(net, sk, addr, new_hash);
+       unix_table_double_unlock(net, old_hash, new_hash);
        mutex_unlock(&u->bindlock);
        return 0;
 
 out_spin:
-       unix_table_double_unlock(old_hash, new_hash);
+       unix_table_double_unlock(net, old_hash, new_hash);
        err = -EADDRINUSE;
 out_mutex:
        mutex_unlock(&u->bindlock);
@@ -1291,9 +1315,8 @@ static void unix_state_double_unlock(struct sock *sk1, struct sock *sk2)
 static int unix_dgram_connect(struct socket *sock, struct sockaddr *addr,
                              int alen, int flags)
 {
-       struct sock *sk = sock->sk;
-       struct net *net = sock_net(sk);
        struct sockaddr_un *sunaddr = (struct sockaddr_un *)addr;
+       struct sock *sk = sock->sk;
        struct sock *other;
        int err;
 
@@ -1314,7 +1337,7 @@ static int unix_dgram_connect(struct socket *sock, struct sockaddr *addr,
                }
 
 restart:
-               other = unix_find_other(net, sunaddr, alen, sock->type);
+               other = unix_find_other(sock_net(sk), sunaddr, alen, sock->type);
                if (IS_ERR(other)) {
                        err = PTR_ERR(other);
                        goto out;
@@ -1402,15 +1425,13 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
                               int addr_len, int flags)
 {
        struct sockaddr_un *sunaddr = (struct sockaddr_un *)uaddr;
-       struct sock *sk = sock->sk;
-       struct net *net = sock_net(sk);
+       struct sock *sk = sock->sk, *newsk = NULL, *other = NULL;
        struct unix_sock *u = unix_sk(sk), *newu, *otheru;
-       struct sock *newsk = NULL;
-       struct sock *other = NULL;
+       struct net *net = sock_net(sk);
        struct sk_buff *skb = NULL;
-       int st;
-       int err;
        long timeo;
+       int err;
+       int st;
 
        err = unix_validate_addr(sunaddr, addr_len);
        if (err)
@@ -1430,7 +1451,7 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
         */
 
        /* create new sock for complete connection */
-       newsk = unix_create1(sock_net(sk), NULL, 0, sock->type);
+       newsk = unix_create1(net, NULL, 0, sock->type);
        if (IS_ERR(newsk)) {
                err = PTR_ERR(newsk);
                newsk = NULL;
@@ -1539,9 +1560,9 @@ restart:
         *
         * The contents of *(otheru->addr) and otheru->path
         * are seen fully set up here, since we have found
-        * otheru in hash under unix_table_locks.  Insertion
-        * into the hash chain we'd found it in had been done
-        * in an earlier critical area protected by unix_table_locks,
+        * otheru in hash under its lock.  Insertion into the
+        * hash chain we'd found it in had been done in an
+        * earlier critical area protected by the chain's lock,
         * the same one where we'd set *(otheru->addr) contents,
         * as well as otheru->path and otheru->addr itself.
         *
@@ -1838,17 +1859,15 @@ static void scm_stat_del(struct sock *sk, struct sk_buff *skb)
 static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
                              size_t len)
 {
-       struct sock *sk = sock->sk;
-       struct net *net = sock_net(sk);
-       struct unix_sock *u = unix_sk(sk);
        DECLARE_SOCKADDR(struct sockaddr_un *, sunaddr, msg->msg_name);
-       struct sock *other = NULL;
-       int err;
-       struct sk_buff *skb;
-       long timeo;
+       struct sock *sk = sock->sk, *other = NULL;
+       struct unix_sock *u = unix_sk(sk);
        struct scm_cookie scm;
+       struct sk_buff *skb;
        int data_len = 0;
        int sk_locked;
+       long timeo;
+       int err;
 
        wait_for_unix_gc();
        err = scm_send(sock, msg, &scm, false);
@@ -1915,7 +1934,7 @@ restart:
                if (sunaddr == NULL)
                        goto out_free;
 
-               other = unix_find_other(net, sunaddr, msg->msg_namelen,
+               other = unix_find_other(sock_net(sk), sunaddr, msg->msg_namelen,
                                        sk->sk_type);
                if (IS_ERR(other)) {
                        err = PTR_ERR(other);
@@ -3221,12 +3240,11 @@ static struct sock *unix_from_bucket(struct seq_file *seq, loff_t *pos)
 {
        unsigned long offset = get_offset(*pos);
        unsigned long bucket = get_bucket(*pos);
-       struct sock *sk;
        unsigned long count = 0;
+       struct sock *sk;
 
-       for (sk = sk_head(&unix_socket_table[bucket]); sk; sk = sk_next(sk)) {
-               if (sock_net(sk) != seq_file_net(seq))
-                       continue;
+       for (sk = sk_head(&seq_file_net(seq)->unx.table.buckets[bucket]);
+            sk; sk = sk_next(sk)) {
                if (++count == offset)
                        break;
        }
@@ -3237,16 +3255,17 @@ static struct sock *unix_from_bucket(struct seq_file *seq, loff_t *pos)
 static struct sock *unix_get_first(struct seq_file *seq, loff_t *pos)
 {
        unsigned long bucket = get_bucket(*pos);
+       struct net *net = seq_file_net(seq);
        struct sock *sk;
 
-       while (bucket < ARRAY_SIZE(unix_socket_table)) {
-               spin_lock(&unix_table_locks[bucket]);
+       while (bucket < UNIX_HASH_SIZE) {
+               spin_lock(&net->unx.table.locks[bucket]);
 
                sk = unix_from_bucket(seq, pos);
                if (sk)
                        return sk;
 
-               spin_unlock(&unix_table_locks[bucket]);
+               spin_unlock(&net->unx.table.locks[bucket]);
 
                *pos = set_bucket_offset(++bucket, 1);
        }
@@ -3259,11 +3278,12 @@ static struct sock *unix_get_next(struct seq_file *seq, struct sock *sk,
 {
        unsigned long bucket = get_bucket(*pos);
 
-       for (sk = sk_next(sk); sk; sk = sk_next(sk))
-               if (sock_net(sk) == seq_file_net(seq))
-                       return sk;
+       sk = sk_next(sk);
+       if (sk)
+               return sk;
 
-       spin_unlock(&unix_table_locks[bucket]);
+       spin_unlock(&seq_file_net(seq)->unx.table.locks[bucket]);
 
        *pos = set_bucket_offset(++bucket, 1);
 
@@ -3293,7 +3313,7 @@ static void unix_seq_stop(struct seq_file *seq, void *v)
        struct sock *sk = v;
 
        if (sk)
-               spin_unlock(&unix_table_locks[sk->sk_hash]);
+               spin_unlock(&seq_file_net(seq)->unx.table.locks[sk->sk_hash]);
 }
 
 static int unix_seq_show(struct seq_file *seq, void *v)
@@ -3318,7 +3338,7 @@ static int unix_seq_show(struct seq_file *seq, void *v)
                        (s->sk_state == TCP_ESTABLISHED ? SS_CONNECTING : SS_DISCONNECTING),
                        sock_i_ino(s));
 
-               if (u->addr) {  // under unix_table_locks here
+               if (u->addr) {  // under a hash table lock here
                        int i, len;
                        seq_putc(seq, ' ');
 
@@ -3388,9 +3408,6 @@ static int bpf_iter_unix_hold_batch(struct seq_file *seq, struct sock *start_sk)
        iter->batch[iter->end_sk++] = start_sk;
 
        for (sk = sk_next(start_sk); sk; sk = sk_next(sk)) {
-               if (sock_net(sk) != seq_file_net(seq))
-                       continue;
-
                if (iter->end_sk < iter->max_sk) {
                        sock_hold(sk);
                        iter->batch[iter->end_sk++] = sk;
@@ -3399,7 +3416,7 @@ static int bpf_iter_unix_hold_batch(struct seq_file *seq, struct sock *start_sk)
                expected++;
        }
 
-       spin_unlock(&unix_table_locks[start_sk->sk_hash]);
+       spin_unlock(&seq_file_net(seq)->unx.table.locks[start_sk->sk_hash]);
 
        return expected;
 }
@@ -3559,7 +3576,7 @@ static const struct net_proto_family unix_family_ops = {
 
 static int __net_init unix_net_init(struct net *net)
 {
-       int error = -ENOMEM;
+       int i;
 
        net->unx.sysctl_max_dgram_qlen = 10;
        if (unix_sysctl_register(net))
@@ -3567,18 +3584,44 @@ static int __net_init unix_net_init(struct net *net)
 
 #ifdef CONFIG_PROC_FS
        if (!proc_create_net("unix", 0, net->proc_net, &unix_seq_ops,
-                       sizeof(struct seq_net_private))) {
-               unix_sysctl_unregister(net);
-               goto out;
+                            sizeof(struct seq_net_private)))
+               goto err_sysctl;
+#endif
+
+       net->unx.table.locks = kvmalloc_array(UNIX_HASH_SIZE,
+                                             sizeof(spinlock_t), GFP_KERNEL);
+       if (!net->unx.table.locks)
+               goto err_proc;
+
+       net->unx.table.buckets = kvmalloc_array(UNIX_HASH_SIZE,
+                                               sizeof(struct hlist_head),
+                                               GFP_KERNEL);
+       if (!net->unx.table.buckets)
+               goto free_locks;
+
+       for (i = 0; i < UNIX_HASH_SIZE; i++) {
+               spin_lock_init(&net->unx.table.locks[i]);
+               INIT_HLIST_HEAD(&net->unx.table.buckets[i]);
        }
+
+       return 0;
+
+free_locks:
+       kvfree(net->unx.table.locks);
+err_proc:
+#ifdef CONFIG_PROC_FS
+       remove_proc_entry("unix", net->proc_net);
+err_sysctl:
 #endif
-       error = 0;
+       unix_sysctl_unregister(net);
 out:
-       return error;
+       return -ENOMEM;
 }
 
 static void __net_exit unix_net_exit(struct net *net)
 {
+       kvfree(net->unx.table.buckets);
+       kvfree(net->unx.table.locks);
        unix_sysctl_unregister(net);
        remove_proc_entry("unix", net->proc_net);
 }
@@ -3666,8 +3709,10 @@ static int __init af_unix_init(void)
 
        BUILD_BUG_ON(sizeof(struct unix_skb_parms) > sizeof_field(struct sk_buff, cb));
 
-       for (i = 0; i < 2 * UNIX_HASH_SIZE; i++)
-               spin_lock_init(&unix_table_locks[i]);
+       for (i = 0; i < UNIX_HASH_SIZE / 2; i++) {
+               spin_lock_init(&bsd_socket_locks[i]);
+               INIT_HLIST_HEAD(&bsd_socket_buckets[i]);
+       }
 
        rc = proto_register(&unix_dgram_proto, 1);
        if (rc != 0) {
index bb0b5ea..105f522 100644 (file)
@@ -13,7 +13,7 @@
 
 static int sk_diag_dump_name(struct sock *sk, struct sk_buff *nlskb)
 {
-       /* might or might not have unix_table_locks */
+       /* might or might not have a hash table lock */
        struct unix_address *addr = smp_load_acquire(&unix_sk(sk)->addr);
 
        if (!addr)
@@ -195,25 +195,21 @@ static int sk_diag_dump(struct sock *sk, struct sk_buff *skb, struct unix_diag_r
 
 static int unix_diag_dump(struct sk_buff *skb, struct netlink_callback *cb)
 {
-       struct unix_diag_req *req;
-       int num, s_num, slot, s_slot;
        struct net *net = sock_net(skb->sk);
+       int num, s_num, slot, s_slot;
+       struct unix_diag_req *req;
 
        req = nlmsg_data(cb->nlh);
 
        s_slot = cb->args[0];
        num = s_num = cb->args[1];
 
-       for (slot = s_slot;
-            slot < ARRAY_SIZE(unix_socket_table);
-            s_num = 0, slot++) {
+       for (slot = s_slot; slot < UNIX_HASH_SIZE; s_num = 0, slot++) {
                struct sock *sk;
 
                num = 0;
-               spin_lock(&unix_table_locks[slot]);
-               sk_for_each(sk, &unix_socket_table[slot]) {
-                       if (!net_eq(sock_net(sk), net))
-                               continue;
+               spin_lock(&net->unx.table.locks[slot]);
+               sk_for_each(sk, &net->unx.table.buckets[slot]) {
                        if (num < s_num)
                                goto next;
                        if (!(req->udiag_states & (1 << sk->sk_state)))
@@ -222,13 +218,13 @@ static int unix_diag_dump(struct sk_buff *skb, struct netlink_callback *cb)
                                         NETLINK_CB(cb->skb).portid,
                                         cb->nlh->nlmsg_seq,
                                         NLM_F_MULTI) < 0) {
-                               spin_unlock(&unix_table_locks[slot]);
+                               spin_unlock(&net->unx.table.locks[slot]);
                                goto done;
                        }
 next:
                        num++;
                }
-               spin_unlock(&unix_table_locks[slot]);
+               spin_unlock(&net->unx.table.locks[slot]);
        }
 done:
        cb->args[0] = slot;
@@ -237,20 +233,21 @@ done:
        return skb->len;
 }
 
-static struct sock *unix_lookup_by_ino(unsigned int ino)
+static struct sock *unix_lookup_by_ino(struct net *net, unsigned int ino)
 {
        struct sock *sk;
        int i;
 
-       for (i = 0; i < ARRAY_SIZE(unix_socket_table); i++) {
-               spin_lock(&unix_table_locks[i]);
-               sk_for_each(sk, &unix_socket_table[i])
+       for (i = 0; i < UNIX_HASH_SIZE; i++) {
+               spin_lock(&net->unx.table.locks[i]);
+               sk_for_each(sk, &net->unx.table.buckets[i]) {
                        if (ino == sock_i_ino(sk)) {
                                sock_hold(sk);
-                               spin_unlock(&unix_table_locks[i]);
+                               spin_unlock(&net->unx.table.locks[i]);
                                return sk;
                        }
-               spin_unlock(&unix_table_locks[i]);
+               }
+               spin_unlock(&net->unx.table.locks[i]);
        }
        return NULL;
 }
@@ -259,21 +256,20 @@ static int unix_diag_get_exact(struct sk_buff *in_skb,
                               const struct nlmsghdr *nlh,
                               struct unix_diag_req *req)
 {
-       int err = -EINVAL;
-       struct sock *sk;
-       struct sk_buff *rep;
-       unsigned int extra_len;
        struct net *net = sock_net(in_skb->sk);
+       unsigned int extra_len;
+       struct sk_buff *rep;
+       struct sock *sk;
+       int err;
 
+       err = -EINVAL;
        if (req->udiag_ino == 0)
                goto out_nosk;
 
-       sk = unix_lookup_by_ino(req->udiag_ino);
+       sk = unix_lookup_by_ino(net, req->udiag_ino);
        err = -ENOENT;
        if (sk == NULL)
                goto out_nosk;
-       if (!net_eq(sock_net(sk), net))
-               goto out;
 
        err = sock_diag_check_cookie(sk, req->udiag_cookie);
        if (err)
@@ -308,7 +304,6 @@ out_nosk:
 static int unix_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h)
 {
        int hdrlen = sizeof(struct unix_diag_req);
-       struct net *net = sock_net(skb->sk);
 
        if (nlmsg_len(h) < hdrlen)
                return -EINVAL;
@@ -317,7 +312,7 @@ static int unix_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h)
                struct netlink_dump_control c = {
                        .dump = unix_diag_dump,
                };
-               return netlink_dump_start(net->diag_nlsk, skb, h, &c);
+               return netlink_dump_start(sock_net(skb->sk)->diag_nlsk, skb, h, &c);
        } else
                return unix_diag_get_exact(skb, h, nlmsg_data(h));
 }
index 01d44e2..500129a 100644 (file)
@@ -26,11 +26,16 @@ int __net_init unix_sysctl_register(struct net *net)
 {
        struct ctl_table *table;
 
-       table = kmemdup(unix_table, sizeof(unix_table), GFP_KERNEL);
-       if (table == NULL)
-               goto err_alloc;
+       if (net_eq(net, &init_net)) {
+               table = unix_table;
+       } else {
+               table = kmemdup(unix_table, sizeof(unix_table), GFP_KERNEL);
+               if (!table)
+                       goto err_alloc;
+
+               table[0].data = &net->unx.sysctl_max_dgram_qlen;
+       }
 
-       table[0].data = &net->unx.sysctl_max_dgram_qlen;
        net->unx.ctl = register_net_sysctl(net, "net/unix", table);
        if (net->unx.ctl == NULL)
                goto err_reg;
@@ -38,7 +43,8 @@ int __net_init unix_sysctl_register(struct net *net)
        return 0;
 
 err_reg:
-       kfree(table);
+       if (!net_eq(net, &init_net))
+               kfree(table);
 err_alloc:
        return -ENOMEM;
 }
@@ -49,5 +55,6 @@ void unix_sysctl_unregister(struct net *net)
 
        table = net->unx.ctl->ctl_table_arg;
        unregister_net_sysctl_table(net->unx.ctl);
-       kfree(table);
+       if (!net_eq(net, &init_net))
+               kfree(table);
 }
index 19ac872..0900238 100644 (file)
@@ -538,12 +538,6 @@ static int xsk_generic_xmit(struct sock *sk)
                        goto out;
                }
 
-               skb = xsk_build_skb(xs, &desc);
-               if (IS_ERR(skb)) {
-                       err = PTR_ERR(skb);
-                       goto out;
-               }
-
                /* This is the backpressure mechanism for the Tx path.
                 * Reserve space in the completion queue and only proceed
                 * if there is space in it. This avoids having to implement
@@ -552,11 +546,19 @@ static int xsk_generic_xmit(struct sock *sk)
                spin_lock_irqsave(&xs->pool->cq_lock, flags);
                if (xskq_prod_reserve(xs->pool->cq)) {
                        spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
-                       kfree_skb(skb);
                        goto out;
                }
                spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
 
+               skb = xsk_build_skb(xs, &desc);
+               if (IS_ERR(skb)) {
+                       err = PTR_ERR(skb);
+                       spin_lock_irqsave(&xs->pool->cq_lock, flags);
+                       xskq_prod_cancel(xs->pool->cq);
+                       spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+                       goto out;
+               }
+
                err = __dev_direct_xmit(skb, xs->queue_id);
                if  (err == NETDEV_TX_BUSY) {
                        /* Tell user-space to retry the send */
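
The reorder matters for completion-ring accounting: the queue slot is now
reserved before the skb exists, so the xsk_build_skb() error path has to
hand the reservation back. In outline:

/* Tx backpressure ordering after this change:
 *
 *   1. xskq_prod_reserve(cq)   may fail: no completion space left,
 *                              stop and let user space retry
 *   2. xsk_build_skb()         may fail: undo step 1 via
 *                              xskq_prod_cancel(cq)
 *   3. __dev_direct_xmit()     NETDEV_TX_BUSY tells user space to retry
 *
 * The old order (build, then reserve) leaked nothing, but threw away a
 * fully built skb whenever the completion queue was full; reserving
 * first avoids that wasted work.
 */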
index 87bdd71..f701121 100644 (file)
@@ -332,6 +332,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
        for (i = 0; i < dma_map->dma_pages_cnt; i++) {
                dma = &dma_map->dma_pages[i];
                if (*dma) {
+                       *dma &= ~XSK_NEXT_PG_CONTIG_MASK;
                        dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
                                             DMA_BIDIRECTIONAL, attrs);
                        *dma = 0;
index 24d3cf1..18b1e5c 100644 (file)
 #define BACKTRACE_DEPTH 16
 #define MAX_SYMBOL_LEN 4096
 struct fprobe sample_probe;
+static unsigned long nhit;
 
 static char symbol[MAX_SYMBOL_LEN] = "kernel_clone";
 module_param_string(symbol, symbol, sizeof(symbol), 0644);
+MODULE_PARM_DESC(symbol, "Probed symbol(s), given by comma separated symbols or a wildcard pattern.");
+
 static char nosymbol[MAX_SYMBOL_LEN] = "";
 module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644);
+MODULE_PARM_DESC(nosymbol, "Not-probed symbols, given by a wildcard pattern.");
+
 static bool stackdump = true;
 module_param(stackdump, bool, 0644);
+MODULE_PARM_DESC(stackdump, "Enable stackdump.");
+
+static bool use_trace = false;
+module_param(use_trace, bool, 0644);
+MODULE_PARM_DESC(use_trace, "Use trace_printk instead of printk. This is only for debugging.");
 
 static void show_backtrace(void)
 {
@@ -40,7 +50,15 @@ static void show_backtrace(void)
 
 static void sample_entry_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
 {
-       pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+       if (use_trace)
+               /*
+                * This is just an example, no kernel code should call
+                * trace_printk() except when actively debugging.
+                */
+               trace_printk("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+       else
+               pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+       nhit++;
        if (stackdump)
                show_backtrace();
 }
@@ -49,8 +67,17 @@ static void sample_exit_handler(struct fprobe *fp, unsigned long ip, struct pt_r
 {
        unsigned long rip = instruction_pointer(regs);
 
-       pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
-               (void *)ip, (void *)ip, (void *)rip, (void *)rip);
+       if (use_trace)
+               /*
+                * This is just an example, no kernel code should call
+                * trace_printk() except when actively debugging.
+                */
+               trace_printk("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
+                       (void *)ip, (void *)ip, (void *)rip, (void *)rip);
+       else
+               pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
+                       (void *)ip, (void *)ip, (void *)rip, (void *)rip);
+       nhit++;
        if (stackdump)
                show_backtrace();
 }
@@ -112,7 +139,8 @@ static void __exit fprobe_exit(void)
 {
        unregister_fprobe(&sample_probe);
 
-       pr_info("fprobe at %s unregistered\n", symbol);
+       pr_info("fprobe at %s unregistered. %ld times hit, %ld times missed\n",
+               symbol, nhit, sample_probe.nmissed);
 }
 
 module_init(fprobe_init)
index 0e6268d..94ed98d 100755 (executable)
@@ -95,17 +95,25 @@ __faddr2line() {
        local print_warnings=$4
 
        local sym_name=${func_addr%+*}
-       local offset=${func_addr#*+}
-       offset=${offset%/*}
+       local func_offset=${func_addr#*+}
+       func_offset=${func_offset%/*}
        local user_size=
+       local file_type
+       local is_vmlinux=0
        [[ $func_addr =~ "/" ]] && user_size=${func_addr#*/}
 
-       if [[ -z $sym_name ]] || [[ -z $offset ]] || [[ $sym_name = $func_addr ]]; then
+       if [[ -z $sym_name ]] || [[ -z $func_offset ]] || [[ $sym_name = $func_addr ]]; then
                warn "bad func+offset $func_addr"
                DONE=1
                return
        fi
 
+       # vmlinux uses absolute addresses in the section table rather than
+       # section offsets.
+       file_type=$(${READELF} --file-header $objfile |
+               ${AWK} '$1 == "Type:" { print $2; exit }')
+       [[ $file_type = "EXEC" ]] && is_vmlinux=1
+
        # Go through each of the object's symbols which match the func name.
        # In rare cases there might be duplicates, in which case we print all
        # matches.
@@ -114,9 +122,11 @@ __faddr2line() {
                local sym_addr=0x${fields[1]}
                local sym_elf_size=${fields[2]}
                local sym_sec=${fields[6]}
+               local sec_size
+               local sec_name
 
                # Get the section size:
-               local sec_size=$(${READELF} --section-headers --wide $objfile |
+               sec_size=$(${READELF} --section-headers --wide $objfile |
                        sed 's/\[ /\[/' |
                        ${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print "0x" $6; exit }')
 
@@ -126,6 +136,17 @@ __faddr2line() {
                        return
                fi
 
+               # Get the section name:
+               sec_name=$(${READELF} --section-headers --wide $objfile |
+                       sed 's/\[ /\[/' |
+                       ${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print $2; exit }')
+
+               if [[ -z $sec_name ]]; then
+                       warn "bad section name: section: $sym_sec"
+                       DONE=1
+                       return
+               fi
+
                # Calculate the symbol size.
                #
                # Unfortunately we can't use the ELF size, because kallsyms
@@ -174,10 +195,10 @@ __faddr2line() {
 
                sym_size=0x$(printf %x $sym_size)
 
-               # Calculate the section address from user-supplied offset:
-               local addr=$(($sym_addr + $offset))
+               # Calculate the address from user-supplied offset:
+               local addr=$(($sym_addr + $func_offset))
                if [[ -z $addr ]] || [[ $addr = 0 ]]; then
-                       warn "bad address: $sym_addr + $offset"
+                       warn "bad address: $sym_addr + $func_offset"
                        DONE=1
                        return
                fi
@@ -191,9 +212,9 @@ __faddr2line() {
                fi
 
                # Make sure the provided offset is within the symbol's range:
-               if [[ $offset -gt $sym_size ]]; then
+               if [[ $func_offset -gt $sym_size ]]; then
                        [[ $print_warnings = 1 ]] &&
-                               echo "skipping $sym_name address at $addr due to size mismatch ($offset > $sym_size)"
+                               echo "skipping $sym_name address at $addr due to size mismatch ($func_offset > $sym_size)"
                        continue
                fi
 
@@ -202,11 +223,13 @@ __faddr2line() {
                [[ $FIRST = 0 ]] && echo
                FIRST=0
 
-               echo "$sym_name+$offset/$sym_size:"
+               echo "$sym_name+$func_offset/$sym_size:"
 
                # Pass section address to addr2line and strip absolute paths
                # from the output:
-               local output=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;")
+               local args="--functions --pretty-print --inlines --exe=$objfile"
+               [[ $is_vmlinux = 0 ]] && args="$args --section=$sec_name"
+               local output=$(${ADDR2LINE} $args $addr | sed "s; $dir_prefix\(\./\)*; ;")
                [[ -z $output ]] && continue
 
                # Default output (non --list):
index faacf70..653fadb 100755 (executable)
@@ -56,4 +56,7 @@ EOT
 # point addresses.
 sed -e 's/^\.//' |
 sort -u |
+# Ignore __this_module. It's not an exported symbol, and will be resolved
+# when the final .ko files are linked.
+grep -v '^__this_module$' |
 sed -e 's/\(.*\)/#define __KSYM_\1 1/' >> "$output_file"
index 29d5a84..620dc8c 100644 (file)
@@ -980,7 +980,7 @@ static const struct sectioncheck sectioncheck[] = {
 },
 /* Do not export init/exit functions or data */
 {
-       .fromsec = { "__ksymtab*", NULL },
+       .fromsec = { "___ksymtab*", NULL },
        .bad_tosec = { INIT_SECTIONS, EXIT_SECTIONS, NULL },
        .mismatch = EXPORT_TO_INIT_EXIT,
        .symbol_white_list = { DEFAULT_SYMBOL_WHITE_LIST, NULL },
index beceb89..1bbd533 100644 (file)
@@ -2600,8 +2600,9 @@ static int selinux_sb_eat_lsm_opts(char *options, void **mnt_opts)
                                }
                        }
                        rc = selinux_add_opt(token, arg, mnt_opts);
+                       kfree(arg);
+                       arg = NULL;
                        if (unlikely(rc)) {
-                               kfree(arg);
                                goto free_opt;
                        }
                } else {
@@ -2792,17 +2793,13 @@ static int selinux_fs_context_parse_param(struct fs_context *fc,
                                          struct fs_parameter *param)
 {
        struct fs_parse_result result;
-       int opt, rc;
+       int opt;
 
        opt = fs_parse(fc, selinux_fs_parameters, param, &result);
        if (opt < 0)
                return opt;
 
-       rc = selinux_add_opt(opt, param->string, &fc->security);
-       if (!rc)
-               param->string = NULL;
-
-       return rc;
+       return selinux_add_opt(opt, param->string, &fc->security);
 }
 
 /* inode security operations */
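
Taken together, the two hunks above leave one consistent ownership rule: selinux_add_opt() keeps its own copy of the option string, so the caller frees the buffer it passed in regardless of the return code. A generic sketch of that convention (the names here are illustrative, not the SELinux internals):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

/* the callee duplicates what it needs and never takes ownership */
static int add_opt(const char *arg, char **slot)
{
	char *copy = kstrdup(arg, GFP_KERNEL);

	if (!copy)
		return -ENOMEM;
	*slot = copy;
	return 0;
}

static int caller(char *arg)
{
	char *slot = NULL;
	int rc;

	rc = add_opt(arg, &slot);
	kfree(arg);		/* freed on success and failure alike */
	arg = NULL;
	return rc;
}
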
index 15dc716..8cfdaee 100644 (file)
@@ -431,33 +431,17 @@ static const struct snd_malloc_ops snd_dma_iram_ops = {
  */
 static void *snd_dma_dev_alloc(struct snd_dma_buffer *dmab, size_t size)
 {
-       void *p;
-
-       p = dma_alloc_coherent(dmab->dev.dev, size, &dmab->addr, DEFAULT_GFP);
-#ifdef CONFIG_X86
-       if (p && dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC)
-               set_memory_wc((unsigned long)p, PAGE_ALIGN(size) >> PAGE_SHIFT);
-#endif
-       return p;
+       return dma_alloc_coherent(dmab->dev.dev, size, &dmab->addr, DEFAULT_GFP);
 }
 
 static void snd_dma_dev_free(struct snd_dma_buffer *dmab)
 {
-#ifdef CONFIG_X86
-       if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC)
-               set_memory_wb((unsigned long)dmab->area,
-                             PAGE_ALIGN(dmab->bytes) >> PAGE_SHIFT);
-#endif
        dma_free_coherent(dmab->dev.dev, dmab->bytes, dmab->area, dmab->addr);
 }
 
 static int snd_dma_dev_mmap(struct snd_dma_buffer *dmab,
                            struct vm_area_struct *area)
 {
-#ifdef CONFIG_X86
-       if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC)
-               area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
-#endif
        return dma_mmap_coherent(dmab->dev.dev, area,
                                 dmab->area, dmab->addr, dmab->bytes);
 }
@@ -471,10 +455,6 @@ static const struct snd_malloc_ops snd_dma_dev_ops = {
 /*
  * Write-combined pages
  */
-#ifdef CONFIG_X86
-/* On x86, share the same ops as the standard dev ops */
-#define snd_dma_wc_ops snd_dma_dev_ops
-#else /* CONFIG_X86 */
 static void *snd_dma_wc_alloc(struct snd_dma_buffer *dmab, size_t size)
 {
        return dma_alloc_wc(dmab->dev.dev, size, &dmab->addr, DEFAULT_GFP);
@@ -497,7 +477,6 @@ static const struct snd_malloc_ops snd_dma_wc_ops = {
        .free = snd_dma_wc_free,
        .mmap = snd_dma_wc_mmap,
 };
-#endif /* CONFIG_X86 */
 
 #ifdef CONFIG_SND_DMA_SGBUF
 static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size);
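
With the x86 ifdeffery removed, write-combined buffers go through the generic DMA helpers, which handle per-architecture cacheability themselves. Roughly (a sketch under that assumption, not the driver code):

#include <linux/dma-mapping.h>

static void *alloc_wc_buffer(struct device *dev, size_t size, dma_addr_t *addr)
{
	/* no manual set_memory_wc()/set_memory_wb() fixups needed */
	return dma_alloc_wc(dev, size, addr, GFP_KERNEL);
}

static void free_wc_buffer(struct device *dev, size_t size,
			   void *buf, dma_addr_t addr)
{
	dma_free_wc(dev, size, buf, addr);
}
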
index 3f35972..161a971 100644 (file)
@@ -119,21 +119,18 @@ static int i915_component_master_match(struct device *dev, int subcomponent,
 /* check whether Intel graphics is present and reachable */
 static int i915_gfx_present(struct pci_dev *hdac_pci)
 {
-       unsigned int class = PCI_BASE_CLASS_DISPLAY << 16;
        struct pci_dev *display_dev = NULL;
-       bool match = false;
 
-       do {
-               display_dev = pci_get_class(class, display_dev);
-
-               if (display_dev && display_dev->vendor == PCI_VENDOR_ID_INTEL &&
+       for_each_pci_dev(display_dev) {
+               if (display_dev->vendor == PCI_VENDOR_ID_INTEL &&
+                   (display_dev->class >> 16) == PCI_BASE_CLASS_DISPLAY &&
                    connectivity_check(display_dev, hdac_pci)) {
                        pci_dev_put(display_dev);
-                       match = true;
+                       return true;
                }
-       } while (!match && display_dev);
+       }
 
-       return match;
+       return false;
 }
 
 /**
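
One subtlety in the rewrite above: for_each_pci_dev() holds a reference on the device for the current iteration, so bailing out of the loop early must drop that reference explicitly. A minimal sketch of the pattern (the match condition is simplified):

#include <linux/pci.h>

static bool find_display_device(void)
{
	struct pci_dev *pdev = NULL;

	for_each_pci_dev(pdev) {
		if ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY) {
			pci_dev_put(pdev);	/* drop the iteration's ref */
			return true;
		}
	}
	return false;	/* loop ran to completion: no ref left to drop */
}
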
index a8fe017..ec9cbb2 100644 (file)
@@ -196,6 +196,12 @@ static const struct config_entry config_table[] = {
                                        DMI_MATCH(DMI_SYS_VENDOR, "Google"),
                                }
                        },
+                       {
+                               .ident = "UP-WHL",
+                               .matches = {
+                                       DMI_MATCH(DMI_SYS_VENDOR, "AAEON"),
+                               }
+                       },
                        {}
                }
        },
@@ -358,6 +364,12 @@ static const struct config_entry config_table[] = {
                                        DMI_MATCH(DMI_SYS_VENDOR, "Google"),
                                }
                        },
+                       {
+                               .ident = "UPX-TGL",
+                               .matches = {
+                                       DMI_MATCH(DMI_SYS_VENDOR, "AAEON"),
+                               }
+                       },
                        {}
                }
        },
index 4063da3..9db5ccd 100644 (file)
@@ -55,8 +55,8 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
 
                /* find max number of channels based on format_configuration */
                if (fmt_configs->fmt_count) {
-                       dev_dbg(dev, "%s: found %d format definitions\n",
-                               __func__, fmt_configs->fmt_count);
+                       dev_dbg(dev, "found %d format definitions\n",
+                               fmt_configs->fmt_count);
 
                        for (i = 0; i < fmt_configs->fmt_count; i++) {
                                struct wav_fmt_ext *fmt_ext;
@@ -66,9 +66,9 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
                                if (fmt_ext->fmt.channels > max_ch)
                                        max_ch = fmt_ext->fmt.channels;
                        }
-                       dev_dbg(dev, "%s: max channels found %d\n", __func__, max_ch);
+                       dev_dbg(dev, "max channels found %d\n", max_ch);
                } else {
-                       dev_dbg(dev, "%s: No format information found\n", __func__);
+                       dev_dbg(dev, "No format information found\n");
                }
 
                if (cfg->device_config.config_type != NHLT_CONFIG_TYPE_MIC_ARRAY) {
@@ -95,17 +95,16 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
                        }
 
                        if (dmic_geo > 0) {
-                               dev_dbg(dev, "%s: Array with %d dmics\n", __func__, dmic_geo);
+                               dev_dbg(dev, "Array with %d dmics\n", dmic_geo);
                        }
                        if (max_ch > dmic_geo) {
-                               dev_dbg(dev, "%s: max channels %d exceed dmic number %d\n",
-                                       __func__, max_ch, dmic_geo);
+                               dev_dbg(dev, "max channels %d exceed dmic number %d\n",
+                                       max_ch, dmic_geo);
                        }
                }
        }
 
-       dev_dbg(dev, "%s: dmic number %d max_ch %d\n",
-               __func__, dmic_geo, max_ch);
+       dev_dbg(dev, "dmic number %d max_ch %d\n", dmic_geo, max_ch);
 
        return dmic_geo;
 }
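
Dropping __func__ from these messages loses no information: with CONFIG_DYNAMIC_DEBUG the function name can be prepended at runtime through the 'f' decorator flag, so hard-coding it in every format string is redundant. A sketch of the same call with the runtime knob noted (not part of the patch):

#include <linux/device.h>

static void show_fmt_count(struct device *dev, int count)
{
	/* dynamic debug can prepend the function name at runtime:
	 *   echo 'file intel-nhlt.c +pf' > /sys/kernel/debug/dynamic_debug/control
	 * so embedding __func__ in the format string is redundant
	 */
	dev_dbg(dev, "found %d format definitions\n", count);
}
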
index bd60308..8634004 100644 (file)
@@ -74,36 +74,36 @@ static int snd_card_cs46xx_probe(struct pci_dev *pci,
        err = snd_cs46xx_create(card, pci,
                                external_amp[dev], thinkpad[dev]);
        if (err < 0)
-               return err;
+               goto error;
        card->private_data = chip;
        chip->accept_valid = mmap_valid[dev];
        err = snd_cs46xx_pcm(chip, 0);
        if (err < 0)
-               return err;
+               goto error;
 #ifdef CONFIG_SND_CS46XX_NEW_DSP
        err = snd_cs46xx_pcm_rear(chip, 1);
        if (err < 0)
-               return err;
+               goto error;
        err = snd_cs46xx_pcm_iec958(chip, 2);
        if (err < 0)
-               return err;
+               goto error;
 #endif
        err = snd_cs46xx_mixer(chip, 2);
        if (err < 0)
-               return err;
+               goto error;
 #ifdef CONFIG_SND_CS46XX_NEW_DSP
        if (chip->nr_ac97_codecs == 2) {
                err = snd_cs46xx_pcm_center_lfe(chip, 3);
                if (err < 0)
-                       return err;
+                       goto error;
        }
 #endif
        err = snd_cs46xx_midi(chip, 0);
        if (err < 0)
-               return err;
+               goto error;
        err = snd_cs46xx_start_dsp(chip);
        if (err < 0)
-               return err;
+               goto error;
 
        snd_cs46xx_gameport(chip);
 
@@ -117,11 +117,15 @@ static int snd_card_cs46xx_probe(struct pci_dev *pci,
 
        err = snd_card_register(card);
        if (err < 0)
-               return err;
+               goto error;
 
        pci_set_drvdata(pci, card);
        dev++;
        return 0;
+
+ error:
+       snd_card_free(card);
+       return err;
 }
 
 static struct pci_driver cs46xx_driver = {
index cd1db94..7c6b1fe 100644 (file)
@@ -819,7 +819,7 @@ static void set_pin_targets(struct hda_codec *codec,
                snd_hda_set_pin_ctl_cache(codec, cfg->nid, cfg->val);
 }
 
-static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
+void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth)
 {
        const char *modelname = codec->fixup_name;
 
@@ -829,7 +829,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
                if (++depth > 10)
                        break;
                if (fix->chained_before)
-                       apply_fixup(codec, fix->chain_id, action, depth + 1);
+                       __snd_hda_apply_fixup(codec, fix->chain_id, action, depth + 1);
 
                switch (fix->type) {
                case HDA_FIXUP_PINS:
@@ -870,6 +870,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
                id = fix->chain_id;
        }
 }
+EXPORT_SYMBOL_GPL(__snd_hda_apply_fixup);
 
 /**
  * snd_hda_apply_fixup - Apply the fixup chain with the given action
@@ -879,7 +880,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
 void snd_hda_apply_fixup(struct hda_codec *codec, int action)
 {
        if (codec->fixup_list)
-               apply_fixup(codec, codec->fixup_id, action, 0);
+               __snd_hda_apply_fixup(codec, codec->fixup_id, action, 0);
 }
 EXPORT_SYMBOL_GPL(snd_hda_apply_fixup);
 
index aca5926..682dca2 100644 (file)
@@ -348,6 +348,7 @@ void snd_hda_apply_verbs(struct hda_codec *codec);
 void snd_hda_apply_pincfgs(struct hda_codec *codec,
                           const struct hda_pintbl *cfg);
 void snd_hda_apply_fixup(struct hda_codec *codec, int action);
+void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth);
 void snd_hda_pick_fixup(struct hda_codec *codec,
                        const struct hda_model_fixup *models,
                        const struct snd_pci_quirk *quirk,
index 1248d1a..3e541a4 100644 (file)
@@ -1079,11 +1079,11 @@ static int patch_conexant_auto(struct hda_codec *codec)
        if (err < 0)
                goto error;
 
-       err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);
+       err = cx_auto_parse_beep(codec);
        if (err < 0)
                goto error;
 
-       err = cx_auto_parse_beep(codec);
+       err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);
        if (err < 0)
                goto error;
 
index b0f9541..007dd8b 100644 (file)
@@ -2634,6 +2634,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
        SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
        SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
        SND_PCI_QUIRK(0x1558, 0x67f1, "Clevo PC70H[PRS]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+       SND_PCI_QUIRK(0x1558, 0x67f5, "Clevo PD70PN[NRT]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
        SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
        SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
        SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED),
@@ -7004,6 +7005,7 @@ enum {
        ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS,
        ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
        ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+       ALC298_FIXUP_LENOVO_C940_DUET7,
        ALC287_FIXUP_13S_GEN2_SPEAKERS,
        ALC256_FIXUP_SET_COEF_DEFAULTS,
        ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
@@ -7022,6 +7024,23 @@ enum {
        ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE,
 };
 
+/* A special fixup for Lenovo C940 and Yoga Duet 7;
+ * both share the same PCI SSID, so we need to apply different fixups
+ * depending on the codec ID.
+ */
+static void alc298_fixup_lenovo_c940_duet7(struct hda_codec *codec,
+                                          const struct hda_fixup *fix,
+                                          int action)
+{
+       int id;
+
+       if (codec->core.vendor_id == 0x10ec0298)
+               id = ALC298_FIXUP_LENOVO_SPK_VOLUME; /* C940 */
+       else
+               id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* Duet 7 */
+       __snd_hda_apply_fixup(codec, id, action, 0);
+}
+
 static const struct hda_fixup alc269_fixups[] = {
        [ALC269_FIXUP_GPIO2] = {
                .type = HDA_FIXUP_FUNC,
@@ -8721,6 +8740,10 @@ static const struct hda_fixup alc269_fixups[] = {
                .chained = true,
                .chain_id = ALC269_FIXUP_HEADSET_MODE,
        },
+       [ALC298_FIXUP_LENOVO_C940_DUET7] = {
+               .type = HDA_FIXUP_FUNC,
+               .v.func = alc298_fixup_lenovo_c940_duet7,
+       },
        [ALC287_FIXUP_13S_GEN2_SPEAKERS] = {
                .type = HDA_FIXUP_VERBS,
                .v.verbs = (const struct hda_verb[]) {
@@ -9022,6 +9045,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
                      ALC285_FIXUP_HP_GPIO_AMP_INIT),
        SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation",
                      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+       SND_PCI_QUIRK(0x103c, 0x8787, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
        SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
        SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
        SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
@@ -9187,6 +9211,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x1558, 0x70f3, "Clevo NH77DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+       SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+       SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
@@ -9273,7 +9299,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
        SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
        SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
-       SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+       SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7),
        SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
        SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
        SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
@@ -10737,6 +10763,7 @@ enum {
        ALC668_FIXUP_MIC_DET_COEF,
        ALC897_FIXUP_LENOVO_HEADSET_MIC,
        ALC897_FIXUP_HEADSET_MIC_PIN,
+       ALC897_FIXUP_HP_HSMIC_VERB,
 };
 
 static const struct hda_fixup alc662_fixups[] = {
@@ -11156,6 +11183,13 @@ static const struct hda_fixup alc662_fixups[] = {
                .chained = true,
                .chain_id = ALC897_FIXUP_LENOVO_HEADSET_MIC
        },
+       [ALC897_FIXUP_HP_HSMIC_VERB] = {
+               .type = HDA_FIXUP_PINS,
+               .v.pins = (const struct hda_pintbl[]) {
+                       { 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
+                       { }
+               },
+       },
 };
 
 static const struct snd_pci_quirk alc662_fixup_tbl[] = {
@@ -11181,6 +11215,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
        SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+       SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
        SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
        SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
        SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
index a05304f..aea7fae 100644 (file)
@@ -518,11 +518,11 @@ static int via_parse_auto_config(struct hda_codec *codec)
        if (err < 0)
                return err;
 
-       err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);
+       err = auto_parse_beep(codec);
        if (err < 0)
                return err;
 
-       err = auto_parse_beep(codec);
+       err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);
        if (err < 0)
                return err;
 
index 55e773f..93606e5 100644 (file)
@@ -868,10 +868,12 @@ static void ak4613_parse_of(struct ak4613_priv *priv,
 
        /*
         * connected STDI
+        * TDM support here assumes the device was probed via Audio-Graph-Card.
+        * For now, default to SDTIx1 if it was probed via Simple-Audio-Card.
         */
        sdti_num = of_graph_get_endpoint_count(np);
-       if (WARN_ON((sdti_num > 3) || (sdti_num < 1)))
-               return;
+       if ((sdti_num >= SDTx_MAX) || (sdti_num < 1))
+               sdti_num = 1;
 
        AK4613_CONFIG_SDTI_set(priv, sdti_num);
 }
index 6d3070e..198cfe5 100644 (file)
@@ -37,8 +37,8 @@ static const struct reg_default cs35l41_reg[] = {
        { CS35L41_DAC_PCM1_SRC,                 0x00000008 },
        { CS35L41_ASP_TX1_SRC,                  0x00000018 },
        { CS35L41_ASP_TX2_SRC,                  0x00000019 },
-       { CS35L41_ASP_TX3_SRC,                  0x00000020 },
-       { CS35L41_ASP_TX4_SRC,                  0x00000021 },
+       { CS35L41_ASP_TX3_SRC,                  0x00000000 },
+       { CS35L41_ASP_TX4_SRC,                  0x00000000 },
        { CS35L41_DSP1_RX1_SRC,                 0x00000008 },
        { CS35L41_DSP1_RX2_SRC,                 0x00000009 },
        { CS35L41_DSP1_RX3_SRC,                 0x00000018 },
@@ -644,6 +644,8 @@ static const struct reg_sequence cs35l41_reva0_errata_patch[] = {
        { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 },
        { CS35L41_PWR_CTRL2,             0x00000000 },
        { CS35L41_AMP_GAIN_CTRL,         0x00000000 },
+       { CS35L41_ASP_TX3_SRC,           0x00000000 },
+       { CS35L41_ASP_TX4_SRC,           0x00000000 },
 };
 
 static const struct reg_sequence cs35l41_revb0_errata_patch[] = {
@@ -655,6 +657,8 @@ static const struct reg_sequence cs35l41_revb0_errata_patch[] = {
        { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 },
        { CS35L41_PWR_CTRL2,             0x00000000 },
        { CS35L41_AMP_GAIN_CTRL,         0x00000000 },
+       { CS35L41_ASP_TX3_SRC,           0x00000000 },
+       { CS35L41_ASP_TX4_SRC,           0x00000000 },
 };
 
 static const struct reg_sequence cs35l41_revb2_errata_patch[] = {
@@ -666,6 +670,8 @@ static const struct reg_sequence cs35l41_revb2_errata_patch[] = {
        { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 },
        { CS35L41_PWR_CTRL2,             0x00000000 },
        { CS35L41_AMP_GAIN_CTRL,         0x00000000 },
+       { CS35L41_ASP_TX3_SRC,           0x00000000 },
+       { CS35L41_ASP_TX4_SRC,           0x00000000 },
 };
 
 static const struct reg_sequence cs35l41_fs_errata_patch[] = {
index 3e68a07..71ab2a5 100644 (file)
@@ -333,7 +333,7 @@ static const struct snd_kcontrol_new cs35l41_aud_controls[] = {
        SOC_SINGLE("HW Noise Gate Enable", CS35L41_NG_CFG, 8, 63, 0),
        SOC_SINGLE("HW Noise Gate Delay", CS35L41_NG_CFG, 4, 7, 0),
        SOC_SINGLE("HW Noise Gate Threshold", CS35L41_NG_CFG, 0, 7, 0),
-       SOC_SINGLE("Aux Noise Gate CH1 Enable",
+       SOC_SINGLE("Aux Noise Gate CH1 Switch",
                   CS35L41_MIXER_NGATE_CH1_CFG, 16, 1, 0),
        SOC_SINGLE("Aux Noise Gate CH1 Entry Delay",
                   CS35L41_MIXER_NGATE_CH1_CFG, 8, 15, 0),
@@ -341,15 +341,15 @@ static const struct snd_kcontrol_new cs35l41_aud_controls[] = {
                   CS35L41_MIXER_NGATE_CH1_CFG, 0, 7, 0),
        SOC_SINGLE("Aux Noise Gate CH2 Entry Delay",
                   CS35L41_MIXER_NGATE_CH2_CFG, 8, 15, 0),
-       SOC_SINGLE("Aux Noise Gate CH2 Enable",
+       SOC_SINGLE("Aux Noise Gate CH2 Switch",
                   CS35L41_MIXER_NGATE_CH2_CFG, 16, 1, 0),
        SOC_SINGLE("Aux Noise Gate CH2 Threshold",
                   CS35L41_MIXER_NGATE_CH2_CFG, 0, 7, 0),
-       SOC_SINGLE("SCLK Force", CS35L41_SP_FORMAT, CS35L41_SCLK_FRC_SHIFT, 1, 0),
-       SOC_SINGLE("LRCLK Force", CS35L41_SP_FORMAT, CS35L41_LRCLK_FRC_SHIFT, 1, 0),
-       SOC_SINGLE("Invert Class D", CS35L41_AMP_DIG_VOL_CTRL,
+       SOC_SINGLE("SCLK Force Switch", CS35L41_SP_FORMAT, CS35L41_SCLK_FRC_SHIFT, 1, 0),
+       SOC_SINGLE("LRCLK Force Switch", CS35L41_SP_FORMAT, CS35L41_LRCLK_FRC_SHIFT, 1, 0),
+       SOC_SINGLE("Invert Class D Switch", CS35L41_AMP_DIG_VOL_CTRL,
                   CS35L41_AMP_INV_PCM_SHIFT, 1, 0),
-       SOC_SINGLE("Amp Gain ZC", CS35L41_AMP_GAIN_CTRL,
+       SOC_SINGLE("Amp Gain ZC Switch", CS35L41_AMP_GAIN_CTRL,
                   CS35L41_AMP_GAIN_ZC_SHIFT, 1, 0),
        WM_ADSP2_PRELOAD_SWITCH("DSP1", 1),
        WM_ADSP_FW_CONTROL("DSP1", 0),
index 391fd7d..1c7d52b 100644 (file)
@@ -122,6 +122,9 @@ static int cs47l15_in1_adc_put(struct snd_kcontrol *kcontrol,
                snd_soc_kcontrol_component(kcontrol);
        struct cs47l15 *cs47l15 = snd_soc_component_get_drvdata(component);
 
+       if (!!ucontrol->value.integer.value[0] == cs47l15->in1_lp_mode)
+               return 0;
+
        switch (ucontrol->value.integer.value[0]) {
        case 0:
                /* Set IN1 to normal mode */
@@ -150,7 +153,7 @@ static int cs47l15_in1_adc_put(struct snd_kcontrol *kcontrol,
                break;
        }
 
-       return 0;
+       return 1;
 }
 
 static const struct snd_kcontrol_new cs47l15_snd_controls[] = {
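
This is one instance of a convention fixed across several codec drivers below: a control's put() handler should return 1 only when the value actually changed and 0 for a no-op, so ALSA emits change notifications only when warranted. The shape of a compliant handler, with a placeholder private struct:

#include <sound/soc.h>

struct my_priv {
	int mode;	/* illustrative placeholder state */
};

static int my_mode_put(struct snd_kcontrol *kcontrol,
		       struct snd_ctl_elem_value *ucontrol)
{
	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
	struct my_priv *priv = snd_soc_component_get_drvdata(comp);

	if (priv->mode == ucontrol->value.integer.value[0])
		return 0;	/* unchanged: suppress the notification */

	priv->mode = ucontrol->value.integer.value[0];
	return 1;		/* changed: generate a value notification */
}
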
index 272041c..b9f19fb 100644 (file)
@@ -618,7 +618,13 @@ int madera_out1_demux_put(struct snd_kcontrol *kcontrol,
 end:
        snd_soc_dapm_mutex_unlock(dapm);
 
-       return snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+       ret = snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+       if (ret < 0) {
+               dev_err(madera->dev, "Failed to update demux power state: %d\n", ret);
+               return ret;
+       }
+
+       return change;
 }
 EXPORT_SYMBOL_GPL(madera_out1_demux_put);
 
@@ -893,7 +899,7 @@ static int madera_adsp_rate_put(struct snd_kcontrol *kcontrol,
        struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
        const int adsp_num = e->shift_l;
        const unsigned int item = ucontrol->value.enumerated.item[0];
-       int ret;
+       int ret = 0;
 
        if (item >= e->items)
                return -EINVAL;
@@ -910,10 +916,10 @@ static int madera_adsp_rate_put(struct snd_kcontrol *kcontrol,
                         "Cannot change '%s' while in use by active audio paths\n",
                         kcontrol->id.name);
                ret = -EBUSY;
-       } else {
+       } else if (priv->adsp_rate_cache[adsp_num] != e->values[item]) {
                /* Volatile register so defer until the codec is powered up */
                priv->adsp_rate_cache[adsp_num] = e->values[item];
-               ret = 0;
+               ret = 1;
        }
 
        mutex_unlock(&priv->rate_lock);
index f47e956..97b6447 100644 (file)
@@ -862,6 +862,16 @@ static int max98373_sdw_probe(struct sdw_slave *slave,
        return max98373_init(slave, regmap);
 }
 
+static int max98373_sdw_remove(struct sdw_slave *slave)
+{
+       struct max98373_priv *max98373 = dev_get_drvdata(&slave->dev);
+
+       if (max98373->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
+       return 0;
+}
+
 #if defined(CONFIG_OF)
 static const struct of_device_id max98373_of_match[] = {
        { .compatible = "maxim,max98373", },
@@ -893,7 +903,7 @@ static struct sdw_driver max98373_sdw_driver = {
                .pm = &max98373_pm,
        },
        .probe = max98373_sdw_probe,
-       .remove = NULL,
+       .remove = max98373_sdw_remove,
        .ops = &max98373_slave_ops,
        .id_table = max98373_id,
 };
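
The same remove() hook is added to the rt1308, rt1316, rt5682, rt700, rt711 and rt715 drivers below. These drivers enable runtime PM only after the first hardware init completes, so remove() must disable it under the same condition or the enable count is left unbalanced across unbind/bind cycles. A sketch with a placeholder private struct:

#include <linux/pm_runtime.h>
#include <linux/soundwire/sdw.h>

struct my_priv {
	bool first_hw_init;
};

static int my_sdw_remove(struct sdw_slave *slave)
{
	struct my_priv *priv = dev_get_drvdata(&slave->dev);

	/* mirror the conditional pm_runtime_enable() done at init time */
	if (priv->first_hw_init)
		pm_runtime_disable(&slave->dev);

	return 0;
}
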
index 1c11b42..72f673f 100644 (file)
@@ -691,6 +691,16 @@ static int rt1308_sdw_probe(struct sdw_slave *slave,
        return 0;
 }
 
+static int rt1308_sdw_remove(struct sdw_slave *slave)
+{
+       struct rt1308_sdw_priv *rt1308 = dev_get_drvdata(&slave->dev);
+
+       if (rt1308->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
+       return 0;
+}
+
 static const struct sdw_device_id rt1308_id[] = {
        SDW_SLAVE_ENTRY_EXT(0x025d, 0x1308, 0x2, 0, 0),
        {},
@@ -750,6 +760,7 @@ static struct sdw_driver rt1308_sdw_driver = {
                .pm = &rt1308_pm,
        },
        .probe = rt1308_sdw_probe,
+       .remove = rt1308_sdw_remove,
        .ops = &rt1308_slave_ops,
        .id_table = rt1308_id,
 };
index 60baa9f..2d6b5f9 100644 (file)
@@ -676,6 +676,16 @@ static int rt1316_sdw_probe(struct sdw_slave *slave,
        return rt1316_sdw_init(&slave->dev, regmap, slave);
 }
 
+static int rt1316_sdw_remove(struct sdw_slave *slave)
+{
+       struct rt1316_sdw_priv *rt1316 = dev_get_drvdata(&slave->dev);
+
+       if (rt1316->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
+       return 0;
+}
+
 static const struct sdw_device_id rt1316_id[] = {
        SDW_SLAVE_ENTRY_EXT(0x025d, 0x1316, 0x3, 0x1, 0),
        {},
@@ -735,6 +745,7 @@ static struct sdw_driver rt1316_sdw_driver = {
                .pm = &rt1316_pm,
        },
        .probe = rt1316_sdw_probe,
+       .remove = rt1316_sdw_remove,
        .ops = &rt1316_slave_ops,
        .id_table = rt1316_id,
 };
index 248257a..f04e18c 100644 (file)
@@ -719,9 +719,12 @@ static int rt5682_sdw_remove(struct sdw_slave *slave)
 {
        struct rt5682_priv *rt5682 = dev_get_drvdata(&slave->dev);
 
-       if (rt5682 && rt5682->hw_init)
+       if (rt5682->hw_init)
                cancel_delayed_work_sync(&rt5682->jack_detect_work);
 
+       if (rt5682->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
        return 0;
 }
 
index bda5948..f7439e4 100644 (file)
@@ -13,6 +13,7 @@
 #include <linux/soundwire/sdw_type.h>
 #include <linux/soundwire/sdw_registers.h>
 #include <linux/module.h>
+#include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <sound/soc.h>
 #include "rt700.h"
@@ -463,11 +464,14 @@ static int rt700_sdw_remove(struct sdw_slave *slave)
 {
        struct rt700_priv *rt700 = dev_get_drvdata(&slave->dev);
 
-       if (rt700 && rt700->hw_init) {
+       if (rt700->hw_init) {
                cancel_delayed_work_sync(&rt700->jack_detect_work);
                cancel_delayed_work_sync(&rt700->jack_btn_check_work);
        }
 
+       if (rt700->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
        return 0;
 }
 
index af32295..9bceeeb 100644 (file)
@@ -162,7 +162,7 @@ static void rt700_jack_detect_handler(struct work_struct *work)
        if (!rt700->hs_jack)
                return;
 
-       if (!rt700->component->card->instantiated)
+       if (!rt700->component->card || !rt700->component->card->instantiated)
                return;
 
        reg = RT700_VERB_GET_PIN_SENSE | RT700_HP_OUT;
@@ -315,17 +315,27 @@ static int rt700_set_jack_detect(struct snd_soc_component *component,
        struct snd_soc_jack *hs_jack, void *data)
 {
        struct rt700_priv *rt700 = snd_soc_component_get_drvdata(component);
+       int ret;
 
        rt700->hs_jack = hs_jack;
 
-       if (!rt700->hw_init) {
-               dev_dbg(&rt700->slave->dev,
-                       "%s hw_init not ready yet\n", __func__);
+       ret = pm_runtime_resume_and_get(component->dev);
+       if (ret < 0) {
+               if (ret != -EACCES) {
+                       dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret);
+                       return ret;
+               }
+
+               /* pm_runtime not enabled yet */
+               dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__);
                return 0;
        }
 
        rt700_jack_init(rt700);
 
+       pm_runtime_mark_last_busy(component->dev);
+       pm_runtime_put_autosuspend(component->dev);
+
        return 0;
 }
 
@@ -1115,6 +1125,11 @@ int rt700_init(struct device *dev, struct regmap *sdw_regmap,
 
        mutex_init(&rt700->disable_irq_lock);
 
+       INIT_DELAYED_WORK(&rt700->jack_detect_work,
+                         rt700_jack_detect_handler);
+       INIT_DELAYED_WORK(&rt700->jack_btn_check_work,
+                         rt700_btn_check_handler);
+
        /*
         * Mark hw_init as false;
         * HW init will be performed when device reports present
@@ -1209,13 +1224,6 @@ int rt700_io_init(struct device *dev, struct sdw_slave *slave)
        /* Finish Initial Settings, set power to D3 */
        regmap_write(rt700->regmap, RT700_SET_AUDIO_POWER_STATE, AC_PWRST_D3);
 
-       if (!rt700->first_hw_init) {
-               INIT_DELAYED_WORK(&rt700->jack_detect_work,
-                       rt700_jack_detect_handler);
-               INIT_DELAYED_WORK(&rt700->jack_btn_check_work,
-                       rt700_btn_check_handler);
-       }
-
        /*
         * if the set_jack callback occurred earlier than io_init,
         * we set up the jack detection function now
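
The new set_jack flow relies on a documented quirk of pm_runtime_resume_and_get(): it returns -EACCES while runtime PM is still disabled for the device, which here just means the hardware is not initialized yet and jack setup can be deferred. The generic shape of that pattern (do_hw_setup is a placeholder name):

#include <linux/pm_runtime.h>

static int do_hw_setup(struct device *dev)
{
	int ret = pm_runtime_resume_and_get(dev);

	if (ret < 0) {
		if (ret != -EACCES)
			return ret;	/* a real resume failure */
		return 0;		/* runtime PM not enabled yet: defer */
	}

	/* ... touch the hardware while it is guaranteed awake ... */

	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
	return 0;
}
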
index aaf5af1..a085b2f 100644 (file)
@@ -11,6 +11,7 @@
 #include <linux/mod_devicetable.h>
 #include <linux/soundwire/sdw_registers.h>
 #include <linux/module.h>
+#include <linux/pm_runtime.h>
 
 #include "rt711-sdca.h"
 #include "rt711-sdca-sdw.h"
@@ -364,11 +365,17 @@ static int rt711_sdca_sdw_remove(struct sdw_slave *slave)
 {
        struct rt711_sdca_priv *rt711 = dev_get_drvdata(&slave->dev);
 
-       if (rt711 && rt711->hw_init) {
+       if (rt711->hw_init) {
                cancel_delayed_work_sync(&rt711->jack_detect_work);
                cancel_delayed_work_sync(&rt711->jack_btn_check_work);
        }
 
+       if (rt711->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
+       mutex_destroy(&rt711->calibrate_mutex);
+       mutex_destroy(&rt711->disable_irq_lock);
+
        return 0;
 }
 
index 57629c1..5ad53bb 100644 (file)
@@ -34,7 +34,7 @@ static int rt711_sdca_index_write(struct rt711_sdca_priv *rt711,
 
        ret = regmap_write(regmap, addr, value);
        if (ret < 0)
-               dev_err(rt711->component->dev,
+               dev_err(&rt711->slave->dev,
                        "Failed to set private value: %06x <= %04x ret=%d\n",
                        addr, value, ret);
 
@@ -50,7 +50,7 @@ static int rt711_sdca_index_read(struct rt711_sdca_priv *rt711,
 
        ret = regmap_read(regmap, addr, value);
        if (ret < 0)
-               dev_err(rt711->component->dev,
+               dev_err(&rt711->slave->dev,
                        "Failed to get private value: %06x => %04x ret=%d\n",
                        addr, *value, ret);
 
@@ -294,7 +294,7 @@ static void rt711_sdca_jack_detect_handler(struct work_struct *work)
        if (!rt711->hs_jack)
                return;
 
-       if (!rt711->component->card->instantiated)
+       if (!rt711->component->card || !rt711->component->card->instantiated)
                return;
 
        /* SDW_SCP_SDCA_INT_SDCA_0 is used for jack detection */
@@ -487,16 +487,27 @@ static int rt711_sdca_set_jack_detect(struct snd_soc_component *component,
        struct snd_soc_jack *hs_jack, void *data)
 {
        struct rt711_sdca_priv *rt711 = snd_soc_component_get_drvdata(component);
+       int ret;
 
        rt711->hs_jack = hs_jack;
 
-       if (!rt711->hw_init) {
-               dev_dbg(&rt711->slave->dev,
-                       "%s hw_init not ready yet\n", __func__);
+       ret = pm_runtime_resume_and_get(component->dev);
+       if (ret < 0) {
+               if (ret != -EACCES) {
+                       dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret);
+                       return ret;
+               }
+
+               /* pm_runtime not enabled yet */
+               dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__);
                return 0;
        }
 
        rt711_sdca_jack_init(rt711);
+
+       pm_runtime_mark_last_busy(component->dev);
+       pm_runtime_put_autosuspend(component->dev);
+
        return 0;
 }
 
@@ -1190,14 +1201,6 @@ static int rt711_sdca_probe(struct snd_soc_component *component)
        return 0;
 }
 
-static void rt711_sdca_remove(struct snd_soc_component *component)
-{
-       struct rt711_sdca_priv *rt711 = snd_soc_component_get_drvdata(component);
-
-       regcache_cache_only(rt711->regmap, true);
-       regcache_cache_only(rt711->mbq_regmap, true);
-}
-
 static const struct snd_soc_component_driver soc_sdca_dev_rt711 = {
        .probe = rt711_sdca_probe,
        .controls = rt711_sdca_snd_controls,
@@ -1207,7 +1210,6 @@ static const struct snd_soc_component_driver soc_sdca_dev_rt711 = {
        .dapm_routes = rt711_sdca_audio_map,
        .num_dapm_routes = ARRAY_SIZE(rt711_sdca_audio_map),
        .set_jack = rt711_sdca_set_jack_detect,
-       .remove = rt711_sdca_remove,
        .endianness = 1,
 };
 
@@ -1412,8 +1414,12 @@ int rt711_sdca_init(struct device *dev, struct regmap *regmap,
        rt711->regmap = regmap;
        rt711->mbq_regmap = mbq_regmap;
 
+       mutex_init(&rt711->calibrate_mutex);
        mutex_init(&rt711->disable_irq_lock);
 
+       INIT_DELAYED_WORK(&rt711->jack_detect_work, rt711_sdca_jack_detect_handler);
+       INIT_DELAYED_WORK(&rt711->jack_btn_check_work, rt711_sdca_btn_check_handler);
+
        /*
         * Mark hw_init as false;
         * HW init will be performed when device reports present
@@ -1545,14 +1551,6 @@ int rt711_sdca_io_init(struct device *dev, struct sdw_slave *slave)
        rt711_sdca_index_update_bits(rt711, RT711_VENDOR_HDA_CTL,
                RT711_PUSH_BTN_INT_CTL0, 0x20, 0x00);
 
-       if (!rt711->first_hw_init) {
-               INIT_DELAYED_WORK(&rt711->jack_detect_work,
-                       rt711_sdca_jack_detect_handler);
-               INIT_DELAYED_WORK(&rt711->jack_btn_check_work,
-                       rt711_sdca_btn_check_handler);
-               mutex_init(&rt711->calibrate_mutex);
-       }
-
        /* calibration */
        ret = rt711_sdca_calibration(rt711);
        if (ret < 0)
index bda2cc9..4fe68bc 100644 (file)
@@ -13,6 +13,7 @@
 #include <linux/soundwire/sdw_type.h>
 #include <linux/soundwire/sdw_registers.h>
 #include <linux/module.h>
+#include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <sound/soc.h>
 #include "rt711.h"
@@ -464,12 +465,18 @@ static int rt711_sdw_remove(struct sdw_slave *slave)
 {
        struct rt711_priv *rt711 = dev_get_drvdata(&slave->dev);
 
-       if (rt711 && rt711->hw_init) {
+       if (rt711->hw_init) {
                cancel_delayed_work_sync(&rt711->jack_detect_work);
                cancel_delayed_work_sync(&rt711->jack_btn_check_work);
                cancel_work_sync(&rt711->calibration_work);
        }
 
+       if (rt711->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
+       mutex_destroy(&rt711->calibrate_mutex);
+       mutex_destroy(&rt711->disable_irq_lock);
+
        return 0;
 }
 
index 9838fb4..9df800a 100644 (file)
@@ -242,7 +242,7 @@ static void rt711_jack_detect_handler(struct work_struct *work)
        if (!rt711->hs_jack)
                return;
 
-       if (!rt711->component->card->instantiated)
+       if (!rt711->component->card || !rt711->component->card->instantiated)
                return;
 
        if (pm_runtime_status_suspended(rt711->slave->dev.parent)) {
@@ -457,17 +457,27 @@ static int rt711_set_jack_detect(struct snd_soc_component *component,
        struct snd_soc_jack *hs_jack, void *data)
 {
        struct rt711_priv *rt711 = snd_soc_component_get_drvdata(component);
+       int ret;
 
        rt711->hs_jack = hs_jack;
 
-       if (!rt711->hw_init) {
-               dev_dbg(&rt711->slave->dev,
-                       "%s hw_init not ready yet\n", __func__);
+       ret = pm_runtime_resume_and_get(component->dev);
+       if (ret < 0) {
+               if (ret != -EACCES) {
+                       dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret);
+                       return ret;
+               }
+
+               /* pm_runtime not enabled yet */
+               dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__);
                return 0;
        }
 
        rt711_jack_init(rt711);
 
+       pm_runtime_mark_last_busy(component->dev);
+       pm_runtime_put_autosuspend(component->dev);
+
        return 0;
 }
 
@@ -932,13 +942,6 @@ static int rt711_probe(struct snd_soc_component *component)
        return 0;
 }
 
-static void rt711_remove(struct snd_soc_component *component)
-{
-       struct rt711_priv *rt711 = snd_soc_component_get_drvdata(component);
-
-       regcache_cache_only(rt711->regmap, true);
-}
-
 static const struct snd_soc_component_driver soc_codec_dev_rt711 = {
        .probe = rt711_probe,
        .set_bias_level = rt711_set_bias_level,
@@ -949,7 +952,6 @@ static const struct snd_soc_component_driver soc_codec_dev_rt711 = {
        .dapm_routes = rt711_audio_map,
        .num_dapm_routes = ARRAY_SIZE(rt711_audio_map),
        .set_jack = rt711_set_jack_detect,
-       .remove = rt711_remove,
        .endianness = 1,
 };
 
@@ -1204,8 +1206,13 @@ int rt711_init(struct device *dev, struct regmap *sdw_regmap,
        rt711->sdw_regmap = sdw_regmap;
        rt711->regmap = regmap;
 
+       mutex_init(&rt711->calibrate_mutex);
        mutex_init(&rt711->disable_irq_lock);
 
+       INIT_DELAYED_WORK(&rt711->jack_detect_work, rt711_jack_detect_handler);
+       INIT_DELAYED_WORK(&rt711->jack_btn_check_work, rt711_btn_check_handler);
+       INIT_WORK(&rt711->calibration_work, rt711_calibration_work);
+
        /*
         * Mark hw_init as false;
         * HW init will be performed when device reports present
@@ -1313,15 +1320,8 @@ int rt711_io_init(struct device *dev, struct sdw_slave *slave)
 
        if (rt711->first_hw_init)
                rt711_calibration(rt711);
-       else {
-               INIT_DELAYED_WORK(&rt711->jack_detect_work,
-                       rt711_jack_detect_handler);
-               INIT_DELAYED_WORK(&rt711->jack_btn_check_work,
-                       rt711_btn_check_handler);
-               mutex_init(&rt711->calibrate_mutex);
-               INIT_WORK(&rt711->calibration_work, rt711_calibration_work);
+       else
                schedule_work(&rt711->calibration_work);
-       }
 
        /*
         * if the set_jack callback occurred earlier than io_init,
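
Moving INIT_DELAYED_WORK(), INIT_WORK() and mutex_init() out of io_init and into the early init function closes a race: the work items and locks now exist before the bus can report the device and fire the handlers that use them. The general rule, sketched with placeholder names:

#include <linux/mutex.h>
#include <linux/workqueue.h>

struct my_priv {
	struct mutex lock;
	struct delayed_work detect_work;
};

static void my_detect_handler(struct work_struct *work)
{
	/* placeholder handler body */
}

static void my_init(struct my_priv *priv)
{
	/* set up everything the handlers need before any of them can run */
	mutex_init(&priv->lock);
	INIT_DELAYED_WORK(&priv->detect_work, my_detect_handler);
}

static void my_remove(struct my_priv *priv)
{
	cancel_delayed_work_sync(&priv->detect_work);
	mutex_destroy(&priv->lock);
}
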
index 0ecd294..13e731d 100644 (file)
@@ -13,6 +13,7 @@
 #include <linux/soundwire/sdw_type.h>
 #include <linux/soundwire/sdw_registers.h>
 #include <linux/module.h>
+#include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <sound/soc.h>
 #include "rt715-sdca.h"
@@ -193,6 +194,16 @@ static int rt715_sdca_sdw_probe(struct sdw_slave *slave,
        return rt715_sdca_init(&slave->dev, mbq_regmap, regmap, slave);
 }
 
+static int rt715_sdca_sdw_remove(struct sdw_slave *slave)
+{
+       struct rt715_sdca_priv *rt715 = dev_get_drvdata(&slave->dev);
+
+       if (rt715->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
+       return 0;
+}
+
 static const struct sdw_device_id rt715_sdca_id[] = {
        SDW_SLAVE_ENTRY_EXT(0x025d, 0x715, 0x3, 0x1, 0),
        SDW_SLAVE_ENTRY_EXT(0x025d, 0x714, 0x3, 0x1, 0),
@@ -267,6 +278,7 @@ static struct sdw_driver rt715_sdw_driver = {
                .pm = &rt715_pm,
        },
        .probe = rt715_sdca_sdw_probe,
+       .remove = rt715_sdca_sdw_remove,
        .ops = &rt715_sdca_slave_ops,
        .id_table = rt715_sdca_id,
 };
index a7b21b0..b047bf8 100644 (file)
@@ -14,6 +14,7 @@
 #include <linux/soundwire/sdw_type.h>
 #include <linux/soundwire/sdw_registers.h>
 #include <linux/module.h>
+#include <linux/pm_runtime.h>
 #include <linux/of.h>
 #include <linux/regmap.h>
 #include <sound/soc.h>
@@ -514,6 +515,16 @@ static int rt715_sdw_probe(struct sdw_slave *slave,
        return 0;
 }
 
+static int rt715_sdw_remove(struct sdw_slave *slave)
+{
+       struct rt715_priv *rt715 = dev_get_drvdata(&slave->dev);
+
+       if (rt715->first_hw_init)
+               pm_runtime_disable(&slave->dev);
+
+       return 0;
+}
+
 static const struct sdw_device_id rt715_id[] = {
        SDW_SLAVE_ENTRY_EXT(0x025d, 0x714, 0x2, 0, 0),
        SDW_SLAVE_ENTRY_EXT(0x025d, 0x715, 0x2, 0, 0),
@@ -575,6 +586,7 @@ static struct sdw_driver rt715_sdw_driver = {
                   .pm = &rt715_pm,
                   },
        .probe = rt715_sdw_probe,
+       .remove = rt715_sdw_remove,
        .ops = &rt715_slave_ops,
        .id_table = rt715_id,
 };
index 617a36a..d9f1352 100644 (file)
@@ -1287,11 +1287,17 @@ static int slim_rx_mux_put(struct snd_kcontrol *kc,
        struct snd_soc_dapm_update *update = NULL;
        u32 port_id = w->shift;
 
+       if (wcd->rx_port_value[port_id] == ucontrol->value.enumerated.item[0])
+               return 0;
+
        wcd->rx_port_value[port_id] = ucontrol->value.enumerated.item[0];
 
+       /* Remove channel from any list it's in before adding it to a new one */
+       list_del_init(&wcd->rx_chs[port_id].list);
+
        switch (wcd->rx_port_value[port_id]) {
        case 0:
-               list_del_init(&wcd->rx_chs[port_id].list);
+               /* Channel already removed from lists. Nothing to do here */
                break;
        case 1:
                list_add_tail(&wcd->rx_chs[port_id].list,
index c1b61b9..781ae56 100644 (file)
@@ -2519,6 +2519,9 @@ static int wcd938x_tx_mode_put(struct snd_kcontrol *kcontrol,
        struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
        int path = e->shift_l;
 
+       if (wcd938x->tx_mode[path] == ucontrol->value.enumerated.item[0])
+               return 0;
+
        wcd938x->tx_mode[path] = ucontrol->value.enumerated.item[0];
 
        return 1;
@@ -2541,6 +2544,9 @@ static int wcd938x_rx_hph_mode_put(struct snd_kcontrol *kcontrol,
        struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
        struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
 
+       if (wcd938x->hph_mode == ucontrol->value.enumerated.item[0])
+               return 0;
+
        wcd938x->hph_mode = ucontrol->value.enumerated.item[0];
 
        return 1;
@@ -2632,6 +2638,9 @@ static int wcd938x_ldoh_put(struct snd_kcontrol *kcontrol,
        struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
        struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
 
+       if (wcd938x->ldoh == ucontrol->value.integer.value[0])
+               return 0;
+
        wcd938x->ldoh = ucontrol->value.integer.value[0];
 
        return 1;
@@ -2654,6 +2663,9 @@ static int wcd938x_bcs_put(struct snd_kcontrol *kcontrol,
        struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
        struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
 
+       if (wcd938x->bcs_dis == ucontrol->value.integer.value[0])
+               return 0;
+
        wcd938x->bcs_dis = ucontrol->value.integer.value[0];
 
        return 1;
index 4973ba1..4ab7a67 100644 (file)
@@ -413,6 +413,7 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol,
        unsigned int rnew = (!!ucontrol->value.integer.value[1]) << mc->rshift;
        unsigned int lold, rold;
        unsigned int lena, rena;
+       bool change = false;
        int ret;
 
        snd_soc_dapm_mutex_lock(dapm);
@@ -440,8 +441,8 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol,
                goto err;
        }
 
-       ret = regmap_update_bits(arizona->regmap, ARIZONA_DRE_ENABLE,
-                                mask, lnew | rnew);
+       ret = regmap_update_bits_check(arizona->regmap, ARIZONA_DRE_ENABLE,
+                                      mask, lnew | rnew, &change);
        if (ret) {
                dev_err(arizona->dev, "Failed to set DRE: %d\n", ret);
                goto err;
@@ -454,6 +455,9 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol,
        if (!rnew && rold)
                wm5110_clear_pga_volume(arizona, mc->rshift);
 
+       if (change)
+               ret = 1;
+
 err:
        snd_soc_dapm_mutex_unlock(dapm);
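
regmap_update_bits_check() used above is the natural primitive for the put()-return convention seen throughout these codec fixes: it performs the same read-modify-write as regmap_update_bits() but additionally reports whether any bits changed. Condensed (a sketch, not the wm5110 code):

#include <linux/regmap.h>

static int put_bits(struct regmap *map, unsigned int reg,
		    unsigned int mask, unsigned int val)
{
	bool change = false;
	int ret;

	ret = regmap_update_bits_check(map, reg, mask, val, &change);
	if (ret)
		return ret;

	return change ? 1 : 0;	/* 1 = value changed, 0 = no-op */
}
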
 
index 6d7fd88..a7784ac 100644 (file)
@@ -997,7 +997,7 @@ int wm_adsp2_preloader_put(struct snd_kcontrol *kcontrol,
                snd_soc_dapm_sync(dapm);
        }
 
-       return 0;
+       return 1;
 }
 EXPORT_SYMBOL_GPL(wm_adsp2_preloader_put);
 
index 0d11cc8..6a06fe3 100644 (file)
@@ -128,10 +128,10 @@ struct avs_tplg_token_parser {
 static int
 avs_parse_uuid_token(struct snd_soc_component *comp, void *elem, void *object, u32 offset)
 {
-       struct snd_soc_tplg_vendor_value_elem *tuple = elem;
+       struct snd_soc_tplg_vendor_uuid_elem *tuple = elem;
        guid_t *val = (guid_t *)((u8 *)object + offset);
 
-       guid_copy((guid_t *)val, (const guid_t *)&tuple->value);
+       guid_copy((guid_t *)val, (const guid_t *)&tuple->uuid);
 
        return 0;
 }
index 00384c6..330c0ac 100644 (file)
@@ -421,8 +421,17 @@ static int snd_byt_wm5102_mc_probe(struct platform_device *pdev)
        priv->spkvdd_en_gpio = gpiod_get(codec_dev, "wlf,spkvdd-ena", GPIOD_OUT_LOW);
        put_device(codec_dev);
 
-       if (IS_ERR(priv->spkvdd_en_gpio))
-               return dev_err_probe(dev, PTR_ERR(priv->spkvdd_en_gpio), "getting spkvdd-GPIO\n");
+       if (IS_ERR(priv->spkvdd_en_gpio)) {
+               ret = PTR_ERR(priv->spkvdd_en_gpio);
+               /*
+                * The spkvdd gpio-lookup is registered by drivers/mfd/arizona-spi.c,
+                * so -ENOENT means that arizona-spi hasn't probed yet.
+                */
+               if (ret == -ENOENT)
+                       ret = -EPROBE_DEFER;
+
+               return dev_err_probe(dev, ret, "getting spkvdd-GPIO\n");
+       }
 
        /* override platform name, if required */
        byt_wm5102_card.dev = dev;
index 1f00679..ad826ad 100644 (file)
@@ -1398,6 +1398,33 @@ static struct snd_soc_card card_sof_sdw = {
        .late_probe = sof_sdw_card_late_probe,
 };
 
+static void mc_dailink_exit_loop(struct snd_soc_card *card)
+{
+       struct snd_soc_dai_link *link;
+       int ret;
+       int i, j;
+
+       for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) {
+               if (!codec_info_list[i].exit)
+                       continue;
+               /*
+                * We don't need to call the .exit function if no matching
+                * dai link is found.
+                */
+               for_each_card_prelinks(card, j, link) {
+                       if (!strcmp(link->codecs[0].dai_name,
+                                   codec_info_list[i].dai_name)) {
+                               ret = codec_info_list[i].exit(card, link);
+                               if (ret)
+                                       dev_warn(card->dev,
+                                                "codec exit failed %d\n",
+                                                ret);
+                               break;
+                       }
+               }
+       }
+}
+
 static int mc_probe(struct platform_device *pdev)
 {
        struct snd_soc_card *card = &card_sof_sdw;
@@ -1462,6 +1489,7 @@ static int mc_probe(struct platform_device *pdev)
        ret = devm_snd_soc_register_card(&pdev->dev, card);
        if (ret) {
                dev_err(card->dev, "snd_soc_register_card failed %d\n", ret);
+               mc_dailink_exit_loop(card);
                return ret;
        }
 
@@ -1473,29 +1501,8 @@ static int mc_probe(struct platform_device *pdev)
 static int mc_remove(struct platform_device *pdev)
 {
        struct snd_soc_card *card = platform_get_drvdata(pdev);
-       struct snd_soc_dai_link *link;
-       int ret;
-       int i, j;
 
-       for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) {
-               if (!codec_info_list[i].exit)
-                       continue;
-               /*
-                * We don't need to call .exit function if there is no matched
-                * dai link found.
-                */
-               for_each_card_prelinks(card, j, link) {
-                       if (!strcmp(link->codecs[0].dai_name,
-                                   codec_info_list[i].dai_name)) {
-                               ret = codec_info_list[i].exit(card, link);
-                               if (ret)
-                                       dev_warn(&pdev->dev,
-                                                "codec exit failed %d\n",
-                                                ret);
-                               break;
-                       }
-               }
-       }
+       mc_dailink_exit_loop(card);
 
        return 0;
 }
index 19c4a90..ee59ef3 100644 (file)
@@ -147,6 +147,12 @@ static int q6apm_dai_prepare(struct snd_soc_component *component,
        cfg.num_channels = runtime->channels;
        cfg.bit_width = prtd->bits_per_sample;
 
+       if (prtd->state) {
+               /* clear the previous setup, if any */
+               q6apm_graph_stop(prtd->graph);
+               q6apm_unmap_memory_regions(prtd->graph, substream->stream);
+       }
+
        prtd->pcm_count = snd_pcm_lib_period_bytes(substream);
        prtd->pos = 0;
        /* rate and channels are sent to audio driver */
index 4ce5d25..99a128a 100644 (file)
@@ -13,6 +13,7 @@
 #include <linux/of_gpio.h>
 #include <linux/of_device.h>
 #include <linux/clk.h>
+#include <linux/pinctrl/consumer.h>
 #include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <linux/spinlock.h>
@@ -54,8 +55,40 @@ struct rk_i2s_dev {
        const struct rk_i2s_pins *pins;
        unsigned int bclk_ratio;
        spinlock_t lock; /* tx/rx lock */
+       struct pinctrl *pinctrl;
+       struct pinctrl_state *bclk_on;
+       struct pinctrl_state *bclk_off;
 };
 
+static int i2s_pinctrl_select_bclk_on(struct rk_i2s_dev *i2s)
+{
+       int ret = 0;
+
+       if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_on))
+               ret = pinctrl_select_state(i2s->pinctrl,
+                                    i2s->bclk_on);
+
+       if (ret)
+               dev_err(i2s->dev, "bclk enable failed %d\n", ret);
+
+       return ret;
+}
+
+static int i2s_pinctrl_select_bclk_off(struct rk_i2s_dev *i2s)
+{
+       int ret = 0;
+
+       if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_off))
+               ret = pinctrl_select_state(i2s->pinctrl,
+                                    i2s->bclk_off);
+
+       if (ret)
+               dev_err(i2s->dev, "bclk disable failed %d\n", ret);
+
+       return ret;
+}
+
 static int i2s_runtime_suspend(struct device *dev)
 {
        struct rk_i2s_dev *i2s = dev_get_drvdata(dev);
@@ -92,38 +125,49 @@ static inline struct rk_i2s_dev *to_info(struct snd_soc_dai *dai)
        return snd_soc_dai_get_drvdata(dai);
 }
 
-static void rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on)
+static int rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on)
 {
        unsigned int val = 0;
        int retry = 10;
+       int ret = 0;
 
        spin_lock(&i2s->lock);
        if (on) {
-               regmap_update_bits(i2s->regmap, I2S_DMACR,
-                                  I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE);
+               ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
+                               I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE);
+               if (ret < 0)
+                       goto end;
 
-               regmap_update_bits(i2s->regmap, I2S_XFER,
-                                  I2S_XFER_TXS_START | I2S_XFER_RXS_START,
-                                  I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+               ret = regmap_update_bits(i2s->regmap, I2S_XFER,
+                               I2S_XFER_TXS_START | I2S_XFER_RXS_START,
+                               I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+               if (ret < 0)
+                       goto end;
 
                i2s->tx_start = true;
        } else {
                i2s->tx_start = false;
 
-               regmap_update_bits(i2s->regmap, I2S_DMACR,
-                                  I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE);
+               ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
+                               I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE);
+               if (ret < 0)
+                       goto end;
 
                if (!i2s->rx_start) {
-                       regmap_update_bits(i2s->regmap, I2S_XFER,
-                                          I2S_XFER_TXS_START |
-                                          I2S_XFER_RXS_START,
-                                          I2S_XFER_TXS_STOP |
-                                          I2S_XFER_RXS_STOP);
+                       ret = regmap_update_bits(i2s->regmap, I2S_XFER,
+                                       I2S_XFER_TXS_START |
+                                       I2S_XFER_RXS_START,
+                                       I2S_XFER_TXS_STOP |
+                                       I2S_XFER_RXS_STOP);
+                       if (ret < 0)
+                               goto end;
 
                        udelay(150);
-                       regmap_update_bits(i2s->regmap, I2S_CLR,
-                                          I2S_CLR_TXC | I2S_CLR_RXC,
-                                          I2S_CLR_TXC | I2S_CLR_RXC);
+                       ret = regmap_update_bits(i2s->regmap, I2S_CLR,
+                                       I2S_CLR_TXC | I2S_CLR_RXC,
+                                       I2S_CLR_TXC | I2S_CLR_RXC);
+                       if (ret < 0)
+                               goto end;
 
                        regmap_read(i2s->regmap, I2S_CLR, &val);
 
@@ -138,44 +182,57 @@ static void rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on)
                        }
                }
        }
+end:
        spin_unlock(&i2s->lock);
+       if (ret < 0)
+               dev_err(i2s->dev, "lrclk update failed\n");
+
+       return ret;
 }
 
-static void rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on)
+static int rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on)
 {
        unsigned int val = 0;
        int retry = 10;
+       int ret = 0;
 
        spin_lock(&i2s->lock);
        if (on) {
-               regmap_update_bits(i2s->regmap, I2S_DMACR,
+               ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
                                   I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_ENABLE);
+               if (ret < 0)
+                       goto end;
 
-               regmap_update_bits(i2s->regmap, I2S_XFER,
+               ret = regmap_update_bits(i2s->regmap, I2S_XFER,
                                   I2S_XFER_TXS_START | I2S_XFER_RXS_START,
                                   I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+               if (ret < 0)
+                       goto end;
 
                i2s->rx_start = true;
        } else {
                i2s->rx_start = false;
 
-               regmap_update_bits(i2s->regmap, I2S_DMACR,
+               ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
                                   I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_DISABLE);
+               if (ret < 0)
+                       goto end;
 
                if (!i2s->tx_start) {
-                       regmap_update_bits(i2s->regmap, I2S_XFER,
+                       ret = regmap_update_bits(i2s->regmap, I2S_XFER,
                                           I2S_XFER_TXS_START |
                                           I2S_XFER_RXS_START,
                                           I2S_XFER_TXS_STOP |
                                           I2S_XFER_RXS_STOP);
-
+                       if (ret < 0)
+                               goto end;
                        udelay(150);
-                       regmap_update_bits(i2s->regmap, I2S_CLR,
+                       ret = regmap_update_bits(i2s->regmap, I2S_CLR,
                                           I2S_CLR_TXC | I2S_CLR_RXC,
                                           I2S_CLR_TXC | I2S_CLR_RXC);
-
+                       if (ret < 0)
+                               goto end;
                        regmap_read(i2s->regmap, I2S_CLR, &val);
-
                        /* Should wait for clear operation to finish */
                        while (val) {
                                regmap_read(i2s->regmap, I2S_CLR, &val);
@@ -187,7 +244,12 @@ static void rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on)
                        }
                }
        }
+end:
        spin_unlock(&i2s->lock);
+       if (ret < 0)
+               dev_err(i2s->dev, "lrclk update failed\n");
+
+       return ret;
 }
 
 static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
@@ -425,17 +487,26 @@ static int rockchip_i2s_trigger(struct snd_pcm_substream *substream,
        case SNDRV_PCM_TRIGGER_RESUME:
        case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
                if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
-                       rockchip_snd_rxctrl(i2s, 1);
+                       ret = rockchip_snd_rxctrl(i2s, 1);
                else
-                       rockchip_snd_txctrl(i2s, 1);
+                       ret = rockchip_snd_txctrl(i2s, 1);
+               /* Do not turn on bclk if lrclk open fails. */
+               if (ret < 0)
+                       return ret;
+               i2s_pinctrl_select_bclk_on(i2s);
                break;
        case SNDRV_PCM_TRIGGER_SUSPEND:
        case SNDRV_PCM_TRIGGER_STOP:
        case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
-               if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
-                       rockchip_snd_rxctrl(i2s, 0);
-               else
-                       rockchip_snd_txctrl(i2s, 0);
+               if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) {
+                       if (!i2s->tx_start)
+                               i2s_pinctrl_select_bclk_off(i2s);
+                       ret = rockchip_snd_rxctrl(i2s, 0);
+               } else {
+                       if (!i2s->rx_start)
+                               i2s_pinctrl_select_bclk_off(i2s);
+                       ret = rockchip_snd_txctrl(i2s, 0);
+               }
                break;
        default:
                ret = -EINVAL;
@@ -736,6 +807,33 @@ static int rockchip_i2s_probe(struct platform_device *pdev)
        }
 
        i2s->bclk_ratio = 64;
+       i2s->pinctrl = devm_pinctrl_get(&pdev->dev);
+       if (IS_ERR(i2s->pinctrl))
+               dev_err(&pdev->dev, "failed to find i2s pinctrl\n");
+
+       i2s->bclk_on = pinctrl_lookup_state(i2s->pinctrl,
+                                  "bclk_on");
+       if (IS_ERR_OR_NULL(i2s->bclk_on))
+               dev_err(&pdev->dev, "failed to find i2s bclk_on state\n");
+       else
+               dev_dbg(&pdev->dev, "found i2s bclk_on state\n");
+
+       i2s->bclk_off = pinctrl_lookup_state(i2s->pinctrl,
+                                 "bclk_off");
+       if (IS_ERR_OR_NULL(i2s->bclk_off))
+               dev_err(&pdev->dev, "failed to find i2s bclk_off state\n");
+       else
+               dev_dbg(&pdev->dev, "found i2s bclk_off state\n");
+
+       i2s_pinctrl_select_bclk_off(i2s);
+
+       i2s->playback_dma_data.addr = res->start + I2S_TXDR;
+       i2s->playback_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+       i2s->playback_dma_data.maxburst = 4;
+
+       i2s->capture_dma_data.addr = res->start + I2S_RXDR;
+       i2s->capture_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+       i2s->capture_dma_data.maxburst = 4;
 
        dev_set_drvdata(&pdev->dev, i2s);
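
The bclk_on/bclk_off plumbing added above is the generic pinctrl consumer pattern: take the device's pinctrl handle, look up named states, and switch between them at runtime. A minimal sketch of that pattern (the helper name is hypothetical; the "bclk_on"/"bclk_off" state names are this driver's convention, not a framework requirement):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/pinctrl/consumer.h>

static int claim_bclk_states(struct device *dev)
{
        struct pinctrl *p;
        struct pinctrl_state *on, *off;

        p = devm_pinctrl_get(dev);      /* device-managed pinctrl handle */
        if (IS_ERR(p))
                return PTR_ERR(p);

        on = pinctrl_lookup_state(p, "bclk_on");
        off = pinctrl_lookup_state(p, "bclk_off");
        if (IS_ERR_OR_NULL(on) || IS_ERR_OR_NULL(off))
                return -ENODEV;

        /* Keep the bit clock gated until the first trigger. */
        return pinctrl_select_state(p, off);
}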
 
index 869c765..a8e842e 100644 (file)
@@ -62,6 +62,8 @@ struct snd_soc_dapm_widget *
 snd_soc_dapm_new_control_unlocked(struct snd_soc_dapm_context *dapm,
                         const struct snd_soc_dapm_widget *widget);
 
+static unsigned int soc_dapm_read(struct snd_soc_dapm_context *dapm, int reg);
+
 /* dapm power sequences - make this per codec in the future */
 static int dapm_up_seq[] = {
        [snd_soc_dapm_pre] = 1,
@@ -442,6 +444,9 @@ static int dapm_kcontrol_data_alloc(struct snd_soc_dapm_widget *widget,
 
                        snd_soc_dapm_add_path(widget->dapm, data->widget,
                                              widget, NULL, NULL);
+               } else if (e->reg != SND_SOC_NOPM) {
+                       data->value = soc_dapm_read(widget->dapm, e->reg) &
+                                     (e->mask << e->shift_l);
                }
                break;
        default:
index e693070..d867f44 100644 (file)
@@ -526,7 +526,7 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
                return -EINVAL;
        if (mc->platform_max && tmp > mc->platform_max)
                return -EINVAL;
-       if (tmp > mc->max - mc->min + 1)
+       if (tmp > mc->max - mc->min)
                return -EINVAL;
 
        if (invert)
@@ -547,7 +547,7 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
                        return -EINVAL;
                if (mc->platform_max && tmp > mc->platform_max)
                        return -EINVAL;
-               if (tmp > mc->max - mc->min + 1)
+               if (tmp > mc->max - mc->min)
                        return -EINVAL;
 
                if (invert)
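
Both hunks above fix the same off-by-one: after subtracting mc->min, the largest valid value is exactly mc->max - mc->min, so the old `+ 1` bound let one out-of-range value through to the register write. A standalone illustration with made-up range values:

#include <stdio.h>

int main(void)
{
        int min = -50, max = 50;        /* hypothetical control range */
        long user = 51;                 /* one step past the allowed maximum */
        long tmp = user - min;          /* 101; valid offsets are 0..100 */

        printf("old check rejects: %d\n", tmp > max - min + 1); /* 0: accepted */
        printf("new check rejects: %d\n", tmp > max - min);     /* 1: rejected */
        return 0;
}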
index 000ea90..e24eea7 100644 (file)
@@ -181,12 +181,20 @@ int hda_dsp_core_run(struct snd_sof_dev *sdev, unsigned int core_mask)
  * Power Management.
  */
 
-static int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask)
+int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask)
 {
+       struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata;
+       const struct sof_intel_dsp_desc *chip = hda->desc;
        unsigned int cpa;
        u32 adspcs;
        int ret;
 
+       /* restrict core_mask to host managed cores mask */
+       core_mask &= chip->host_managed_cores_mask;
+       /* return if core_mask is not valid */
+       if (!core_mask)
+               return 0;
+
        /* update bits */
        snd_sof_dsp_update_bits(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS,
                                HDA_DSP_ADSPCS_SPA_MASK(core_mask),
index 6429012..145d483 100644 (file)
@@ -95,9 +95,9 @@ out_put:
 }
 
 /*
- * first boot sequence has some extra steps. core 0 waits for power
- * status on core 1, so power up core 1 also momentarily, keep it in
- * reset/stall and then turn it off
+ * first boot sequence has some extra steps.
+ * power on all host-managed cores, unstall/run only the boot core to boot
+ * the DSP, then turn off any non-boot cores that were powered on.
  */
 static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_boot)
 {
@@ -110,7 +110,7 @@ static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_boot)
        int ret;
 
        /* step 1: power up corex */
-       ret = hda_dsp_enable_core(sdev, chip->host_managed_cores_mask);
+       ret = hda_dsp_core_power_up(sdev, chip->host_managed_cores_mask);
        if (ret < 0) {
                if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS)
                        dev_err(sdev->dev, "error: dsp core 0/1 power up failed\n");
@@ -127,7 +127,7 @@ static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_boot)
        snd_sof_dsp_write(sdev, HDA_DSP_BAR, chip->ipc_req, ipc_hdr);
 
        /* step 3: unset core 0 reset state & unstall/run core 0 */
-       ret = hda_dsp_core_run(sdev, BIT(0));
+       ret = hda_dsp_core_run(sdev, chip->init_core_mask);
        if (ret < 0) {
                if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS)
                        dev_err(sdev->dev,
@@ -389,7 +389,8 @@ int hda_dsp_cl_boot_firmware(struct snd_sof_dev *sdev)
        struct snd_dma_buffer dmab;
        int ret, ret1, i;
 
-       if (hda->imrboot_supported && !sdev->first_boot) {
+       if (sdev->system_suspend_target < SOF_SUSPEND_S4 &&
+           hda->imrboot_supported && !sdev->first_boot) {
                dev_dbg(sdev->dev, "IMR restore supported, booting from IMR directly\n");
                hda->boot_iteration = 0;
                ret = hda_dsp_boot_imr(sdev);
index dc1f743..6888e0a 100644 (file)
@@ -192,79 +192,7 @@ snd_pcm_uframes_t hda_dsp_pcm_pointer(struct snd_sof_dev *sdev,
                goto found;
        }
 
-       switch (sof_hda_position_quirk) {
-       case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY:
-               /*
-                * This legacy code, inherited from the Skylake driver,
-                * mixes DPIB registers and DPIB DDR updates and
-                * does not seem to follow any known hardware recommendations.
-                * It's not clear e.g. why there is a different flow
-                * for capture and playback, the only information that matters is
-                * what traffic class is used, and on all SOF-enabled platforms
-                * only VC0 is supported so the work-around was likely not necessary
-                * and quite possibly wrong.
-                */
-
-               /* DPIB/posbuf position mode:
-                * For Playback, Use DPIB register from HDA space which
-                * reflects the actual data transferred.
-                * For Capture, Use the position buffer for pointer, as DPIB
-                * is not accurate enough, its update may be completed
-                * earlier than the data written to DDR.
-                */
-               if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
-                       pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-                                              AZX_REG_VS_SDXDPIB_XBASE +
-                                              (AZX_REG_VS_SDXDPIB_XINTERVAL *
-                                               hstream->index));
-               } else {
-                       /*
-                        * For capture stream, we need more workaround to fix the
-                        * position incorrect issue:
-                        *
-                        * 1. Wait at least 20us before reading position buffer after
-                        * the interrupt generated(IOC), to make sure position update
-                        * happens on frame boundary i.e. 20.833uSec for 48KHz.
-                        * 2. Perform a dummy Read to DPIB register to flush DMA
-                        * position value.
-                        * 3. Read the DMA Position from posbuf. Now the readback
-                        * value should be >= period boundary.
-                        */
-                       usleep_range(20, 21);
-                       snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-                                        AZX_REG_VS_SDXDPIB_XBASE +
-                                        (AZX_REG_VS_SDXDPIB_XINTERVAL *
-                                         hstream->index));
-                       pos = snd_hdac_stream_get_pos_posbuf(hstream);
-               }
-               break;
-       case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS:
-               /*
-                * In case VC1 traffic is disabled this is the recommended option
-                */
-               pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-                                      AZX_REG_VS_SDXDPIB_XBASE +
-                                      (AZX_REG_VS_SDXDPIB_XINTERVAL *
-                                       hstream->index));
-               break;
-       case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE:
-               /*
-                * This is the recommended option when VC1 is enabled.
-                * While this isn't needed for SOF platforms it's added for
-                * consistency and debug.
-                */
-               pos = snd_hdac_stream_get_pos_posbuf(hstream);
-               break;
-       default:
-               dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n",
-                            sof_hda_position_quirk);
-               pos = 0;
-               break;
-       }
-
-       if (pos >= hstream->bufsize)
-               pos = 0;
-
+       pos = hda_dsp_stream_get_position(hstream, substream->stream, true);
 found:
        pos = bytes_to_frames(substream->runtime, pos);
 
index daeb64c..d95ae17 100644 (file)
@@ -707,12 +707,13 @@ bool hda_dsp_check_stream_irq(struct snd_sof_dev *sdev)
 }
 
 static void
-hda_dsp_set_bytes_transferred(struct hdac_stream *hstream, u64 buffer_size)
+hda_dsp_compr_bytes_transferred(struct hdac_stream *hstream, int direction)
 {
+       u64 buffer_size = hstream->bufsize;
        u64 prev_pos, pos, num_bytes;
 
        div64_u64_rem(hstream->curr_pos, buffer_size, &prev_pos);
-       pos = snd_hdac_stream_get_pos_posbuf(hstream);
+       pos = hda_dsp_stream_get_position(hstream, direction, false);
 
        if (pos < prev_pos)
                num_bytes = (buffer_size - prev_pos) +  pos;
@@ -748,8 +749,7 @@ static bool hda_dsp_stream_check(struct hdac_bus *bus, u32 status)
                        if (s->substream && sof_hda->no_ipc_position) {
                                snd_sof_pcm_period_elapsed(s->substream);
                        } else if (s->cstream) {
-                               hda_dsp_set_bytes_transferred(s,
-                                       s->cstream->runtime->buffer_size);
+                               hda_dsp_compr_bytes_transferred(s, s->cstream->direction);
                                snd_compr_fragment_elapsed(s->cstream);
                        }
                }
@@ -1009,3 +1009,89 @@ void hda_dsp_stream_free(struct snd_sof_dev *sdev)
                devm_kfree(sdev->dev, hda_stream);
        }
 }
+
+snd_pcm_uframes_t hda_dsp_stream_get_position(struct hdac_stream *hstream,
+                                             int direction, bool can_sleep)
+{
+       struct hdac_ext_stream *hext_stream = stream_to_hdac_ext_stream(hstream);
+       struct sof_intel_hda_stream *hda_stream = hstream_to_sof_hda_stream(hext_stream);
+       struct snd_sof_dev *sdev = hda_stream->sdev;
+       snd_pcm_uframes_t pos;
+
+       switch (sof_hda_position_quirk) {
+       case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY:
+               /*
+                * This legacy code, inherited from the Skylake driver,
+                * mixes DPIB registers and DPIB DDR updates and
+                * does not seem to follow any known hardware recommendations.
+                * It's not clear, e.g., why there is a different flow for
+                * capture and playback; the only information that matters is
+                * which traffic class is used, and since all SOF-enabled
+                * platforms support only VC0, the workaround was likely
+                * unnecessary and quite possibly wrong.
+                */
+
+               /* DPIB/posbuf position mode:
+                * For playback, use the DPIB register from HDA space, which
+                * reflects the actual data transferred.
+                * For capture, use the position buffer for the pointer, as
+                * DPIB is not accurate enough; its update may complete
+                * earlier than the data is written to DDR.
+                */
+               if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
+                       pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+                                              AZX_REG_VS_SDXDPIB_XBASE +
+                                              (AZX_REG_VS_SDXDPIB_XINTERVAL *
+                                               hstream->index));
+               } else {
+                       /*
+                        * For the capture stream, an extra workaround is
+                        * needed to fix the incorrect position issue:
+                        *
+                        * 1. Wait at least 20us after the interrupt (IOC) is
+                        * generated before reading the position buffer, to
+                        * make sure the position update happens on a frame
+                        * boundary, i.e. 20.833us for 48kHz.
+                        * 2. Perform a dummy read of the DPIB register to
+                        * flush the DMA position value.
+                        * 3. Read the DMA position from posbuf. Now the
+                        * readback value should be >= the period boundary.
+                        */
+                       if (can_sleep)
+                               usleep_range(20, 21);
+
+                       snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+                                        AZX_REG_VS_SDXDPIB_XBASE +
+                                        (AZX_REG_VS_SDXDPIB_XINTERVAL *
+                                         hstream->index));
+                       pos = snd_hdac_stream_get_pos_posbuf(hstream);
+               }
+               break;
+       case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS:
+               /*
+                * In case VC1 traffic is disabled this is the recommended option
+                */
+               pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+                                      AZX_REG_VS_SDXDPIB_XBASE +
+                                      (AZX_REG_VS_SDXDPIB_XINTERVAL *
+                                       hstream->index));
+               break;
+       case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE:
+               /*
+                * This is the recommended option when VC1 is enabled.
+                * While this isn't needed for SOF platforms it's added for
+                * consistency and debug.
+                */
+               pos = snd_hdac_stream_get_pos_posbuf(hstream);
+               break;
+       default:
+               dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n",
+                            sof_hda_position_quirk);
+               pos = 0;
+               break;
+       }
+
+       if (pos >= hstream->bufsize)
+               pos = 0;
+
+       return pos;
+}
index 3e0f7b0..06476ff 100644 (file)
@@ -497,6 +497,7 @@ struct sof_intel_hda_stream {
  */
 int hda_dsp_probe(struct snd_sof_dev *sdev);
 int hda_dsp_remove(struct snd_sof_dev *sdev);
+int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask);
 int hda_dsp_core_run(struct snd_sof_dev *sdev, unsigned int core_mask);
 int hda_dsp_enable_core(struct snd_sof_dev *sdev, unsigned int core_mask);
 int hda_dsp_core_reset_power_down(struct snd_sof_dev *sdev,
@@ -564,6 +565,9 @@ int hda_dsp_stream_setup_bdl(struct snd_sof_dev *sdev,
 bool hda_dsp_check_ipc_irq(struct snd_sof_dev *sdev);
 bool hda_dsp_check_stream_irq(struct snd_sof_dev *sdev);
 
+snd_pcm_uframes_t hda_dsp_stream_get_position(struct hdac_stream *hstream,
+                                             int direction, bool can_sleep);
+
 struct hdac_ext_stream *
        hda_dsp_stream_get(struct snd_sof_dev *sdev, int direction, u32 flags);
 int hda_dsp_stream_put(struct snd_sof_dev *sdev, int direction, int stream_tag);
index 043554d..10740c5 100644 (file)
@@ -1577,24 +1577,23 @@ static int sof_ipc3_control_load_bytes(struct snd_sof_dev *sdev, struct snd_sof_
        struct sof_ipc_ctrl_data *cdata;
        int ret;
 
-       scontrol->ipc_control_data = kzalloc(scontrol->max_size, GFP_KERNEL);
-       if (!scontrol->ipc_control_data)
-               return -ENOMEM;
-
-       if (scontrol->max_size < sizeof(*cdata) ||
-           scontrol->max_size < sizeof(struct sof_abi_hdr)) {
-               ret = -EINVAL;
-               goto err;
+       if (scontrol->max_size < (sizeof(*cdata) + sizeof(struct sof_abi_hdr))) {
+               dev_err(sdev->dev, "%s: insufficient size for a bytes control: %zu.\n",
+                       __func__, scontrol->max_size);
+               return -EINVAL;
        }
 
-       /* init the get/put bytes data */
        if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) {
-               dev_err(sdev->dev, "err: bytes data size %zu exceeds max %zu.\n",
+               dev_err(sdev->dev,
+                       "%s: bytes data size %zu exceeds max %zu.\n", __func__,
                        scontrol->priv_size, scontrol->max_size - sizeof(*cdata));
-               ret = -EINVAL;
-               goto err;
+               return -EINVAL;
        }
 
+       scontrol->ipc_control_data = kzalloc(scontrol->max_size, GFP_KERNEL);
+       if (!scontrol->ipc_control_data)
+               return -ENOMEM;
+
        scontrol->size = sizeof(struct sof_ipc_ctrl_data) + scontrol->priv_size;
 
        cdata = scontrol->ipc_control_data;
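
The reordering above gives the function the usual validate-then-allocate shape: every size check runs before kzalloc(), so each error path is a bare return with nothing to unwind (the old version needed a goto and a kfree). The same shape as a standalone userspace sketch, with calloc() standing in for kzalloc():

#include <stdlib.h>

/* Returns a zeroed buffer of max_size bytes, or NULL for bogus sizes. */
void *load_bytes(size_t max_size, size_t hdr_size, size_t priv_size)
{
        if (max_size < hdr_size)                /* validate first ... */
                return NULL;
        if (priv_size > max_size - hdr_size)
                return NULL;

        return calloc(1, max_size);             /* ... allocate last */
}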
index 3333a06..e006532 100644 (file)
@@ -392,7 +392,7 @@ static int mt8186_dsp_probe(struct snd_sof_dev *sdev)
                                                      PLATFORM_DEVID_NONE,
                                                      pdev, sizeof(*pdev));
        if (IS_ERR(priv->ipc_dev)) {
-               ret = IS_ERR(priv->ipc_dev);
+               ret = PTR_ERR(priv->ipc_dev);
                dev_err(sdev->dev, "failed to create mtk-adsp-ipc device\n");
                goto err_adsp_off;
        }
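
The one-line fix above is a classic slip: IS_ERR() only answers whether a pointer encodes an error (0 or 1), while PTR_ERR() recovers the negative errno it carries, so the old code made probe fail with ret == 1 instead of, say, -ENOMEM. A simplified standalone model of the ERR_PTR convention:

#include <stdio.h>

#define MAX_ERRNO 4095  /* errors live in the top 4095 addresses */

static int my_is_err(const void *ptr)
{
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static long my_ptr_err(const void *ptr)
{
        return (long)ptr;
}

int main(void)
{
        const void *p = (const void *)(long)-12;        /* like ERR_PTR(-ENOMEM) */

        printf("IS_ERR-style:  %d\n", my_is_err(p));    /* 1: just a flag */
        printf("PTR_ERR-style: %ld\n", my_ptr_err(p));  /* -12: the errno */
        return 0;
}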
index 18eb327..df740be 100644 (file)
@@ -23,6 +23,9 @@ static u32 snd_sof_dsp_power_target(struct snd_sof_dev *sdev)
        u32 target_dsp_state;
 
        switch (sdev->system_suspend_target) {
+       case SOF_SUSPEND_S5:
+       case SOF_SUSPEND_S4:
+               /* DSP should be in D3 if the system is suspending to S3+ */
        case SOF_SUSPEND_S3:
                /* DSP should be in D3 if the system is suspending to S3 */
                target_dsp_state = SOF_DSP_PM_D3;
@@ -335,8 +338,24 @@ int snd_sof_prepare(struct device *dev)
                return 0;
 
 #if defined(CONFIG_ACPI)
-       if (acpi_target_system_state() == ACPI_STATE_S0)
+       switch (acpi_target_system_state()) {
+       case ACPI_STATE_S0:
                sdev->system_suspend_target = SOF_SUSPEND_S0IX;
+               break;
+       case ACPI_STATE_S1:
+       case ACPI_STATE_S2:
+       case ACPI_STATE_S3:
+               sdev->system_suspend_target = SOF_SUSPEND_S3;
+               break;
+       case ACPI_STATE_S4:
+               sdev->system_suspend_target = SOF_SUSPEND_S4;
+               break;
+       case ACPI_STATE_S5:
+               sdev->system_suspend_target = SOF_SUSPEND_S5;
+               break;
+       default:
+               break;
+       }
 #endif
 
        return 0;
index 9d7f53f..f0f3d72 100644 (file)
@@ -85,6 +85,8 @@ enum sof_system_suspend_state {
        SOF_SUSPEND_NONE = 0,
        SOF_SUSPEND_S0IX,
        SOF_SUSPEND_S3,
+       SOF_SUSPEND_S4,
+       SOF_SUSPEND_S5,
 };
 
 enum sof_dfsentry_type {
index b7b6f38..6eb7d93 100644 (file)
@@ -637,10 +637,10 @@ static int snd_get_meter_comp_index(struct snd_us16x08_meter_store *store)
                }
        } else {
                /* skip channels with no compressor active */
-               while (!store->comp_store->val[
+               while (store->comp_index <= SND_US16X08_MAX_CHANNELS
+                       && !store->comp_store->val[
                        COMP_STORE_IDX(SND_US16X08_ID_COMP_SWITCH)]
-                       [store->comp_index - 1]
-                       && store->comp_index <= SND_US16X08_MAX_CHANNELS) {
+                       [store->comp_index - 1]) {
                        store->comp_index++;
                }
                ret = store->comp_index++;
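
The reordered condition above relies on &&'s guaranteed left-to-right, short-circuit evaluation: with the bounds test first, the val[...][comp_index - 1] access can no longer be reached once comp_index runs past the last channel. A standalone demonstration of the idiom:

#include <stdio.h>

#define CHANNELS 16

int main(void)
{
        int active[CHANNELS] = { 0 };   /* no compressor enabled anywhere */
        int i = 1;                      /* 1-based index, as in the driver */

        /* Bounds check first: active[i - 1] is skipped once i > CHANNELS. */
        while (i <= CHANNELS && !active[i - 1])
                i++;

        printf("stopped at %d\n", i);   /* 17, without ever reading active[16] */
        return 0;
}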
index 4f56e17..f93201a 100644 (file)
@@ -3802,6 +3802,54 @@ YAMAHA_DEVICE(0x7010, "UB99"),
        }
 },
 
+/*
+ * MacroSilicon MS2100/MS2106 based AV capture cards
+ *
+ * These claim 96kHz 1ch in the descriptors, but are actually 48kHz 2ch.
+ * They also need QUIRK_FLAG_ALIGN_TRANSFER, which makes one wonder if
+ * they pretend to be 96kHz mono as a workaround for stereo being broken
+ * by that...
+ *
+ * They also have an issue with initial stream alignment that causes the
+ * channels to be swapped and out of phase, which is dealt with in quirks.c.
+ */
+{
+       USB_AUDIO_DEVICE(0x534d, 0x0021),
+       .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+               .vendor_name = "MacroSilicon",
+               .product_name = "MS210x",
+               .ifnum = QUIRK_ANY_INTERFACE,
+               .type = QUIRK_COMPOSITE,
+               .data = &(const struct snd_usb_audio_quirk[]) {
+                       {
+                               .ifnum = 2,
+                               .type = QUIRK_AUDIO_STANDARD_MIXER,
+                       },
+                       {
+                               .ifnum = 3,
+                               .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+                               .data = &(const struct audioformat) {
+                                       .formats = SNDRV_PCM_FMTBIT_S16_LE,
+                                       .channels = 2,
+                                       .iface = 3,
+                                       .altsetting = 1,
+                                       .altset_idx = 1,
+                                       .attributes = 0,
+                                       .endpoint = 0x82,
+                                       .ep_attr = USB_ENDPOINT_XFER_ISOC |
+                                               USB_ENDPOINT_SYNC_ASYNC,
+                                       .rates = SNDRV_PCM_RATE_CONTINUOUS,
+                                       .rate_min = 48000,
+                                       .rate_max = 48000,
+                               }
+                       },
+                       {
+                               .ifnum = -1
+                       }
+               }
+       }
+},
+
 /*
  * MacroSilicon MS2109 based HDMI capture cards
  *
@@ -4119,6 +4167,206 @@ YAMAHA_DEVICE(0x7010, "UB99"),
                }
        }
 },
+{
+       /*
+        * Fiero SC-01 (firmware v1.0.0 @ 48 kHz)
+        */
+       USB_DEVICE(0x2b53, 0x0023),
+       .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+               .vendor_name = "Fiero",
+               .product_name = "SC-01",
+               .ifnum = QUIRK_ANY_INTERFACE,
+               .type = QUIRK_COMPOSITE,
+               .data = &(const struct snd_usb_audio_quirk[]) {
+                       {
+                               .ifnum = 0,
+                               .type = QUIRK_AUDIO_STANDARD_INTERFACE
+                       },
+                       /* Playback */
+                       {
+                               .ifnum = 1,
+                               .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+                               .data = &(const struct audioformat) {
+                                       .formats = SNDRV_PCM_FMTBIT_S32_LE,
+                                       .channels = 2,
+                                       .fmt_bits = 24,
+                                       .iface = 1,
+                                       .altsetting = 1,
+                                       .altset_idx = 1,
+                                       .endpoint = 0x01,
+                                       .ep_attr = USB_ENDPOINT_XFER_ISOC |
+                                                  USB_ENDPOINT_SYNC_ASYNC,
+                                       .rates = SNDRV_PCM_RATE_48000,
+                                       .rate_min = 48000,
+                                       .rate_max = 48000,
+                                       .nr_rates = 1,
+                                       .rate_table = (unsigned int[]) { 48000 },
+                                       .clock = 0x29
+                               }
+                       },
+                       /* Capture */
+                       {
+                               .ifnum = 2,
+                               .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+                               .data = &(const struct audioformat) {
+                                       .formats = SNDRV_PCM_FMTBIT_S32_LE,
+                                       .channels = 2,
+                                       .fmt_bits = 24,
+                                       .iface = 2,
+                                       .altsetting = 1,
+                                       .altset_idx = 1,
+                                       .endpoint = 0x82,
+                                       .ep_attr = USB_ENDPOINT_XFER_ISOC |
+                                                  USB_ENDPOINT_SYNC_ASYNC |
+                                                  USB_ENDPOINT_USAGE_IMPLICIT_FB,
+                                       .rates = SNDRV_PCM_RATE_48000,
+                                       .rate_min = 48000,
+                                       .rate_max = 48000,
+                                       .nr_rates = 1,
+                                       .rate_table = (unsigned int[]) { 48000 },
+                                       .clock = 0x29
+                               }
+                       },
+                       {
+                               .ifnum = -1
+                       }
+               }
+       }
+},
+{
+       /*
+        * Fiero SC-01 (firmware v1.0.0 @ 96 kHz)
+        */
+       USB_DEVICE(0x2b53, 0x0024),
+       .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+               .vendor_name = "Fiero",
+               .product_name = "SC-01",
+               .ifnum = QUIRK_ANY_INTERFACE,
+               .type = QUIRK_COMPOSITE,
+               .data = &(const struct snd_usb_audio_quirk[]) {
+                       {
+                               .ifnum = 0,
+                               .type = QUIRK_AUDIO_STANDARD_INTERFACE
+                       },
+                       /* Playback */
+                       {
+                               .ifnum = 1,
+                               .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+                               .data = &(const struct audioformat) {
+                                       .formats = SNDRV_PCM_FMTBIT_S32_LE,
+                                       .channels = 2,
+                                       .fmt_bits = 24,
+                                       .iface = 1,
+                                       .altsetting = 1,
+                                       .altset_idx = 1,
+                                       .endpoint = 0x01,
+                                       .ep_attr = USB_ENDPOINT_XFER_ISOC |
+                                                  USB_ENDPOINT_SYNC_ASYNC,
+                                       .rates = SNDRV_PCM_RATE_96000,
+                                       .rate_min = 96000,
+                                       .rate_max = 96000,
+                                       .nr_rates = 1,
+                                       .rate_table = (unsigned int[]) { 96000 },
+                                       .clock = 0x29
+                               }
+                       },
+                       /* Capture */
+                       {
+                               .ifnum = 2,
+                               .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+                               .data = &(const struct audioformat) {
+                                       .formats = SNDRV_PCM_FMTBIT_S32_LE,
+                                       .channels = 2,
+                                       .fmt_bits = 24,
+                                       .iface = 2,
+                                       .altsetting = 1,
+                                       .altset_idx = 1,
+                                       .endpoint = 0x82,
+                                       .ep_attr = USB_ENDPOINT_XFER_ISOC |
+                                                  USB_ENDPOINT_SYNC_ASYNC |
+                                                  USB_ENDPOINT_USAGE_IMPLICIT_FB,
+                                       .rates = SNDRV_PCM_RATE_96000,
+                                       .rate_min = 96000,
+                                       .rate_max = 96000,
+                                       .nr_rates = 1,
+                                       .rate_table = (unsigned int[]) { 96000 },
+                                       .clock = 0x29
+                               }
+                       },
+                       {
+                               .ifnum = -1
+                       }
+               }
+       }
+},
+{
+       /*
+        * Fiero SC-01 (firmware v1.1.0)
+        */
+       USB_DEVICE(0x2b53, 0x0031),
+       .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+               .vendor_name = "Fiero",
+               .product_name = "SC-01",
+               .ifnum = QUIRK_ANY_INTERFACE,
+               .type = QUIRK_COMPOSITE,
+               .data = &(const struct snd_usb_audio_quirk[]) {
+                       {
+                               .ifnum = 0,
+                               .type = QUIRK_AUDIO_STANDARD_INTERFACE
+                       },
+                       /* Playback */
+                       {
+                               .ifnum = 1,
+                               .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+                               .data = &(const struct audioformat) {
+                                       .formats = SNDRV_PCM_FMTBIT_S32_LE,
+                                       .channels = 2,
+                                       .fmt_bits = 24,
+                                       .iface = 1,
+                                       .altsetting = 1,
+                                       .altset_idx = 1,
+                                       .endpoint = 0x01,
+                                       .ep_attr = USB_ENDPOINT_XFER_ISOC |
+                                                  USB_ENDPOINT_SYNC_ASYNC,
+                                       .rates = SNDRV_PCM_RATE_48000 |
+                                                SNDRV_PCM_RATE_96000,
+                                       .rate_min = 48000,
+                                       .rate_max = 96000,
+                                       .nr_rates = 2,
+                                       .rate_table = (unsigned int[]) { 48000, 96000 },
+                                       .clock = 0x29
+                               }
+                       },
+                       /* Capture */
+                       {
+                               .ifnum = 2,
+                               .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+                               .data = &(const struct audioformat) {
+                                       .formats = SNDRV_PCM_FMTBIT_S32_LE,
+                                       .channels = 2,
+                                       .fmt_bits = 24,
+                                       .iface = 2,
+                                       .altsetting = 1,
+                                       .altset_idx = 1,
+                                       .endpoint = 0x82,
+                                       .ep_attr = USB_ENDPOINT_XFER_ISOC |
+                                                  USB_ENDPOINT_SYNC_ASYNC |
+                                                  USB_ENDPOINT_USAGE_IMPLICIT_FB,
+                                       .rates = SNDRV_PCM_RATE_48000 |
+                                                SNDRV_PCM_RATE_96000,
+                                       .rate_min = 48000,
+                                       .rate_max = 96000,
+                                       .nr_rates = 2,
+                                       .rate_table = (unsigned int[]) { 48000, 96000 },
+                                       .clock = 0x29
+                               }
+                       },
+                       {
+                               .ifnum = -1
+                       }
+               }
+       }
+},
 
 #undef USB_DEVICE_VENDOR_SPEC
 #undef USB_AUDIO_DEVICE
index e8468f9..968d90c 100644 (file)
@@ -1478,6 +1478,7 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
        case USB_ID(0x041e, 0x3f19): /* E-Mu 0204 USB */
                set_format_emu_quirk(subs, fmt);
                break;
+       case USB_ID(0x534d, 0x0021): /* MacroSilicon MS2100/MS2106 */
        case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
                subs->stream_offset_adj = 2;
                break;
@@ -1842,6 +1843,10 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
                   QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
        DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
                   QUIRK_FLAG_GET_SAMPLE_RATE),
+       DEVICE_FLG(0x1397, 0x0508, /* Behringer UMC204HD */
+                  QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+       DEVICE_FLG(0x1397, 0x0509, /* Behringer UMC404HD */
+                  QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB),
        DEVICE_FLG(0x13e5, 0x0001, /* Serato Phono */
                   QUIRK_FLAG_IGNORE_CTL_ERROR),
        DEVICE_FLG(0x154e, 0x1002, /* Denon DCD-1500RE */
@@ -1904,10 +1909,18 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
                   QUIRK_FLAG_IGNORE_CTL_ERROR),
        DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
                   QUIRK_FLAG_GET_SAMPLE_RATE),
+       DEVICE_FLG(0x534d, 0x0021, /* MacroSilicon MS2100/MS2106 */
+                  QUIRK_FLAG_ALIGN_TRANSFER),
        DEVICE_FLG(0x534d, 0x2109, /* MacroSilicon MS2109 */
                   QUIRK_FLAG_ALIGN_TRANSFER),
        DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
                   QUIRK_FLAG_GET_SAMPLE_RATE),
+       DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */
+                  QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+       DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */
+                  QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+       DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */
+                  QUIRK_FLAG_GENERIC_IMPLICIT_FB),
 
        /* Vendor matches */
        VENDOR_FLG(0x045e, /* MS Lifecam */
index 0d828e3..ab95fb3 100644 (file)
@@ -33,6 +33,8 @@
 #include <drm/intel_lpe_audio.h>
 #include "intel_hdmi_audio.h"
 
+#define INTEL_HDMI_AUDIO_SUSPEND_DELAY_MS  5000
+
 #define for_each_pipe(card_ctx, pipe) \
        for ((pipe) = 0; (pipe) < (card_ctx)->num_pipes; (pipe)++)
 #define for_each_port(card_ctx, port) \
@@ -1066,7 +1068,9 @@ static int had_pcm_open(struct snd_pcm_substream *substream)
        intelhaddata = snd_pcm_substream_chip(substream);
        runtime = substream->runtime;
 
-       pm_runtime_get_sync(intelhaddata->dev);
+       retval = pm_runtime_resume_and_get(intelhaddata->dev);
+       if (retval < 0)
+               return retval;
 
        /* set the runtime hw parameter with local snd_pcm_hardware struct */
        runtime->hw = had_pcm_hardware;
@@ -1534,8 +1538,12 @@ static void had_audio_wq(struct work_struct *work)
                container_of(work, struct snd_intelhad, hdmi_audio_wq);
        struct intel_hdmi_lpe_audio_pdata *pdata = ctx->dev->platform_data;
        struct intel_hdmi_lpe_audio_port_pdata *ppdata = &pdata->port[ctx->port];
+       int ret;
+
+       ret = pm_runtime_resume_and_get(ctx->dev);
+       if (ret < 0)
+               return;
 
-       pm_runtime_get_sync(ctx->dev);
        mutex_lock(&ctx->mutex);
        if (ppdata->pipe < 0) {
                dev_dbg(ctx->dev, "%s: Event: HAD_NOTIFY_HOT_UNPLUG : port = %d\n",
@@ -1802,8 +1810,11 @@ static int __hdmi_lpe_audio_probe(struct platform_device *pdev)
        pdata->notify_audio_lpe = notify_audio_lpe;
        spin_unlock_irq(&pdata->lpe_audio_slock);
 
+       pm_runtime_set_autosuspend_delay(&pdev->dev, INTEL_HDMI_AUDIO_SUSPEND_DELAY_MS);
        pm_runtime_use_autosuspend(&pdev->dev);
+       pm_runtime_enable(&pdev->dev);
        pm_runtime_mark_last_busy(&pdev->dev);
+       pm_runtime_idle(&pdev->dev);
 
        dev_dbg(&pdev->dev, "%s: handle pending notification\n", __func__);
        for_each_port(card_ctx, port) {
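
The pm_runtime_get_sync() to pm_runtime_resume_and_get() conversions above matter on the error path: get_sync() raises the usage count even when the resume fails, leaving the caller to drop it, while resume_and_get() drops the reference itself on failure. The resulting idiom, sketched for an arbitrary device (the function name is hypothetical):

#include <linux/pm_runtime.h>

static int do_hw_work(struct device *dev)
{
        int ret;

        ret = pm_runtime_resume_and_get(dev);
        if (ret < 0)
                return ret;     /* usage count already dropped for us */

        /* ... touch the hardware ... */

        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);
        return 0;
}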
index e09d690..8aa0d27 100644 (file)
@@ -36,7 +36,7 @@
 #define MIDR_VARIANT(midr)     \
        (((midr) & MIDR_VARIANT_MASK) >> MIDR_VARIANT_SHIFT)
 #define MIDR_IMPLEMENTOR_SHIFT 24
-#define MIDR_IMPLEMENTOR_MASK  (0xff << MIDR_IMPLEMENTOR_SHIFT)
+#define MIDR_IMPLEMENTOR_MASK  (0xffU << MIDR_IMPLEMENTOR_SHIFT)
 #define MIDR_IMPLEMENTOR(midr) \
        (((midr) & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT)
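
The new U suffix matters because a plain 0xff is a signed int: shifting it into bit 31 overflows the signed type, and widening the resulting negative value to a 64-bit type sign-extends the mask from 0x00000000ff000000 into 0xffffffffff000000. A standalone illustration (the signed overflow is reproduced via a cast, since writing 0xff << 24 directly is undefined behaviour):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        int32_t as_int = (int32_t)(0xffU << 24);        /* 0xff000000: negative */
        uint64_t bad   = (uint64_t)as_int;              /* sign-extends */
        uint64_t good  = (uint64_t)(0xffU << 24);       /* zero-extends */

        printf("signed mask widened:   %#llx\n", (unsigned long long)bad);
        printf("unsigned mask widened: %#llx\n", (unsigned long long)good);
        return 0;
}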
 
 
 #define APPLE_CPU_PART_M1_ICESTORM     0x022
 #define APPLE_CPU_PART_M1_FIRESTORM    0x023
+#define APPLE_CPU_PART_M1_ICESTORM_PRO 0x024
+#define APPLE_CPU_PART_M1_FIRESTORM_PRO        0x025
+#define APPLE_CPU_PART_M1_ICESTORM_MAX 0x028
+#define APPLE_CPU_PART_M1_FIRESTORM_MAX        0x029
 
 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
 #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM)
 #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM)
+#define MIDR_APPLE_M1_ICESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_PRO)
+#define MIDR_APPLE_M1_FIRESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_PRO)
+#define MIDR_APPLE_M1_ICESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_MAX)
+#define MIDR_APPLE_M1_FIRESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_MAX)
 
 /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
 #define MIDR_FUJITSU_ERRATUM_010001            MIDR_FUJITSU_A64FX
 
 #ifndef __ASSEMBLY__
 
-#include "sysreg.h"
+#include <asm/sysreg.h>
 
 #define read_cpuid(reg)                        read_sysreg_s(SYS_ ## reg)
 
index c1b6ddc..3bb1343 100644 (file)
@@ -139,8 +139,10 @@ struct kvm_guest_debug_arch {
        __u64 dbg_wvr[KVM_ARM_MAX_DBG_REGS];
 };
 
+#define KVM_DEBUG_ARCH_HSR_HIGH_VALID  (1 << 0)
 struct kvm_debug_exit_arch {
        __u32 hsr;
+       __u32 hsr_high; /* ESR_EL2[61:32] */
        __u64 far;      /* used for watchpoints */
 };
 
@@ -332,6 +334,40 @@ struct kvm_arm_copy_mte_tags {
 #define KVM_ARM64_SVE_VLS_WORDS        \
        ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
 
+/* Bitmap feature firmware registers */
+#define KVM_REG_ARM_FW_FEAT_BMAP               (0x0016 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_FW_FEAT_BMAP_REG(r)                (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
+                                               KVM_REG_ARM_FW_FEAT_BMAP |      \
+                                               ((r) & 0xffff))
+
+#define KVM_REG_ARM_STD_BMAP                   KVM_REG_ARM_FW_FEAT_BMAP_REG(0)
+
+enum {
+       KVM_REG_ARM_STD_BIT_TRNG_V1_0   = 0,
+#ifdef __KERNEL__
+       KVM_REG_ARM_STD_BMAP_BIT_COUNT,
+#endif
+};
+
+#define KVM_REG_ARM_STD_HYP_BMAP               KVM_REG_ARM_FW_FEAT_BMAP_REG(1)
+
+enum {
+       KVM_REG_ARM_STD_HYP_BIT_PV_TIME = 0,
+#ifdef __KERNEL__
+       KVM_REG_ARM_STD_HYP_BMAP_BIT_COUNT,
+#endif
+};
+
+#define KVM_REG_ARM_VENDOR_HYP_BMAP            KVM_REG_ARM_FW_FEAT_BMAP_REG(2)
+
+enum {
+       KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT    = 0,
+       KVM_REG_ARM_VENDOR_HYP_BIT_PTP          = 1,
+#ifdef __KERNEL__
+       KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
+#endif
+};
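
These bitmap firmware registers are driven through the regular one-reg interface. A hedged userspace sketch (vCPU setup and error handling omitted; KVM_REG_ARM_STD_BMAP and the bit index come from the uapi header above, assuming the arm64 headers are in the include path) exposing TRNG v1.0 to the guest:

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>

int enable_std_trng(int vcpu_fd)
{
        uint64_t bmap = 1ULL << KVM_REG_ARM_STD_BIT_TRNG_V1_0;
        struct kvm_one_reg reg = {
                .id   = KVM_REG_ARM_STD_BMAP,
                .addr = (uint64_t)(uintptr_t)&bmap,
        };

        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}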
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR      0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
index e17de69..03acc82 100644 (file)
 #define X86_FEATURE_INVPCID_SINGLE     ( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */
 #define X86_FEATURE_HW_PSTATE          ( 7*32+ 8) /* AMD HW-PState */
 #define X86_FEATURE_PROC_FEEDBACK      ( 7*32+ 9) /* AMD ProcFeedbackInterface */
-/* FREE!                                ( 7*32+10) */
+#define X86_FEATURE_XCOMPACTED         ( 7*32+10) /* "" Use compacted XSTATE (XSAVES or XSAVEC) */
 #define X86_FEATURE_PTI                        ( 7*32+11) /* Kernel Page Table Isolation enabled */
 #define X86_FEATURE_RETPOLINE          ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
 #define X86_FEATURE_RETPOLINE_LFENCE   ( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
 #define X86_FEATURE_SSBD               ( 7*32+17) /* Speculative Store Bypass Disable */
 #define X86_FEATURE_MBA                        ( 7*32+18) /* Memory Bandwidth Allocation */
 #define X86_FEATURE_RSB_CTXSW          ( 7*32+19) /* "" Fill RSB on context switches */
-/* FREE!                                ( 7*32+20) */
+#define X86_FEATURE_PERFMON_V2         ( 7*32+20) /* AMD Performance Monitoring Version 2 */
 #define X86_FEATURE_USE_IBPB           ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
 #define X86_FEATURE_USE_IBRS_FW                ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
 #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE  ( 7*32+23) /* "" Disable Speculative Store Bypass. */
 #define X86_FEATURE_VMW_VMMCALL                ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
 #define X86_FEATURE_PVUNLOCK           ( 8*32+20) /* "" PV unlock function */
 #define X86_FEATURE_VCPUPREEMPT                ( 8*32+21) /* "" PV vcpu_is_preempted function */
+#define X86_FEATURE_TDX_GUEST          ( 8*32+22) /* Intel Trust Domain Extensions Guest */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
 #define X86_FEATURE_FSGSBASE           ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
 #define X86_FEATURE_VIRT_SSBD          (13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO         (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
 #define X86_FEATURE_CPPC               (13*32+27) /* Collaborative Processor Performance Control */
+#define X86_FEATURE_BRS                        (13*32+31) /* Branch Sampling available */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM             (14*32+ 0) /* Digital Thermal Sensor */
 #define X86_FEATURE_SEV                        (19*32+ 1) /* AMD Secure Encrypted Virtualization */
 #define X86_FEATURE_VM_PAGE_FLUSH      (19*32+ 2) /* "" VM Page Flush MSR is supported */
 #define X86_FEATURE_SEV_ES             (19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
+#define X86_FEATURE_V_TSC_AUX          (19*32+ 9) /* "" Virtual TSC_AUX */
 #define X86_FEATURE_SME_COHERENT       (19*32+10) /* "" AMD hardware-enforced cache coherency */
 
 /*
index 1ae0fab..36369e7 100644 (file)
 # define DISABLE_SGX   (1 << (X86_FEATURE_SGX & 31))
 #endif
 
+#ifdef CONFIG_INTEL_TDX_GUEST
+# define DISABLE_TDX_GUEST     0
+#else
+# define DISABLE_TDX_GUEST     (1 << (X86_FEATURE_TDX_GUEST & 31))
+#endif
+
 /*
  * Make sure to add features to the correct mask
  */
@@ -73,7 +79,7 @@
 #define DISABLED_MASK5 0
 #define DISABLED_MASK6 0
 #define DISABLED_MASK7 (DISABLE_PTI)
-#define DISABLED_MASK8 0
+#define DISABLED_MASK8 (DISABLE_TDX_GUEST)
 #define DISABLED_MASK9 (DISABLE_SGX)
 #define DISABLED_MASK10        0
 #define DISABLED_MASK11        0
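
Each x86 feature is encoded as (word * 32 + bit), and each DISABLED_MASKn covers one 32-bit word, which is why DISABLE_TDX_GUEST keeps only the bit-within-word part via `& 31`. A quick standalone check of the arithmetic for the new bit:

#include <stdio.h>

#define X86_FEATURE_TDX_GUEST (8 * 32 + 22)     /* word 8, bit 22 */

int main(void)
{
        printf("word: %d\n", X86_FEATURE_TDX_GUEST / 32);          /* 8 */
        printf("mask: %#x\n", 1u << (X86_FEATURE_TDX_GUEST & 31)); /* 0x400000 */
        return 0;
}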
index bf6e960..2161480 100644 (file)
@@ -428,11 +428,12 @@ struct kvm_sync_regs {
        struct kvm_vcpu_events events;
 };
 
-#define KVM_X86_QUIRK_LINT0_REENABLED     (1 << 0)
-#define KVM_X86_QUIRK_CD_NW_CLEARED       (1 << 1)
-#define KVM_X86_QUIRK_LAPIC_MMIO_HOLE     (1 << 2)
-#define KVM_X86_QUIRK_OUT_7E_INC_RIP      (1 << 3)
-#define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4)
+#define KVM_X86_QUIRK_LINT0_REENABLED          (1 << 0)
+#define KVM_X86_QUIRK_CD_NW_CLEARED            (1 << 1)
+#define KVM_X86_QUIRK_LAPIC_MMIO_HOLE          (1 << 2)
+#define KVM_X86_QUIRK_OUT_7E_INC_RIP           (1 << 3)
+#define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT     (1 << 4)
+#define KVM_X86_QUIRK_FIX_HYPERCALL_INSN       (1 << 5)
 
 #define KVM_STATE_NESTED_FORMAT_VMX    0
 #define KVM_STATE_NESTED_FORMAT_SVM    1
index efa9693..f69c168 100644 (file)
 #define SVM_VMGEXIT_AP_JUMP_TABLE              0x80000005
 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE          0
 #define SVM_VMGEXIT_GET_AP_JUMP_TABLE          1
+#define SVM_VMGEXIT_PSC                                0x80000010
+#define SVM_VMGEXIT_GUEST_REQUEST              0x80000011
+#define SVM_VMGEXIT_EXT_GUEST_REQUEST          0x80000012
+#define SVM_VMGEXIT_AP_CREATION                        0x80000013
+#define SVM_VMGEXIT_AP_CREATE_ON_INIT          0
+#define SVM_VMGEXIT_AP_CREATE                  1
+#define SVM_VMGEXIT_AP_DESTROY                 2
+#define SVM_VMGEXIT_HV_FEATURES                        0x8000fffd
 #define SVM_VMGEXIT_UNSUPPORTED_EVENT          0x8000ffff
 
 /* Exit code reserved for hypervisor/software use */
        { SVM_VMGEXIT_NMI_COMPLETE,     "vmgexit_nmi_complete" }, \
        { SVM_VMGEXIT_AP_HLT_LOOP,      "vmgexit_ap_hlt_loop" }, \
        { SVM_VMGEXIT_AP_JUMP_TABLE,    "vmgexit_ap_jump_table" }, \
+       { SVM_VMGEXIT_PSC,              "vmgexit_page_state_change" }, \
+       { SVM_VMGEXIT_GUEST_REQUEST,    "vmgexit_guest_request" }, \
+       { SVM_VMGEXIT_EXT_GUEST_REQUEST, "vmgexit_ext_guest_request" }, \
+       { SVM_VMGEXIT_AP_CREATION,      "vmgexit_ap_creation" }, \
+       { SVM_VMGEXIT_HV_FEATURES,      "vmgexit_hypervisor_feature" }, \
        { SVM_EXIT_ERR,         "invalid_guest_state" }
 
 
index 6491fa8..15b940e 100644 (file)
@@ -143,6 +143,12 @@ struct unwind_hint {
        .popsection
 .endm
 
+.macro STACK_FRAME_NON_STANDARD_FP func:req
+#ifdef CONFIG_FRAME_POINTER
+       STACK_FRAME_NON_STANDARD \func
+#endif
+.endm
+
 .macro ANNOTATE_NOENDBR
 .Lhere_\@:
        .pushsection .discard.noendbr
index 05c3642..a2def7b 100644 (file)
@@ -154,25 +154,77 @@ enum i915_mocs_table_index {
        I915_MOCS_CACHED,
 };
 
-/*
+/**
+ * enum drm_i915_gem_engine_class - uapi engine type enumeration
+ *
  * Different engines serve different roles, and there may be more than one
- * engine serving each role. enum drm_i915_gem_engine_class provides a
- * classification of the role of the engine, which may be used when requesting
- * operations to be performed on a certain subset of engines, or for providing
- * information about that group.
+ * engine serving each role.  This enum provides a classification of the role
+ * of the engine, which may be used when requesting operations to be performed
+ * on a certain subset of engines, or for providing information about that
+ * group.
  */
 enum drm_i915_gem_engine_class {
+       /**
+        * @I915_ENGINE_CLASS_RENDER:
+        *
+        * Render engines support instructions used for 3D, Compute (GPGPU),
+        * and programmable media workloads.  These instructions fetch data and
+        * dispatch individual work items to threads that operate in parallel.
+        * The threads run small programs (called "kernels" or "shaders") on
+        * the GPU's execution units (EUs).
+        */
        I915_ENGINE_CLASS_RENDER        = 0,
+
+       /**
+        * @I915_ENGINE_CLASS_COPY:
+        *
+        * Copy engines (also referred to as "blitters") support instructions
+        * that move blocks of data from one location in memory to another,
+        * or that fill a specified location of memory with fixed data.
+        * Copy engines can perform pre-defined logical or bitwise operations
+        * on the source, destination, or pattern data.
+        */
        I915_ENGINE_CLASS_COPY          = 1,
+
+       /**
+        * @I915_ENGINE_CLASS_VIDEO:
+        *
+        * Video engines (also referred to as "bit stream decode" (BSD) or
+        * "vdbox") support instructions that perform fixed-function media
+        * decode and encode.
+        */
        I915_ENGINE_CLASS_VIDEO         = 2,
+
+       /**
+        * @I915_ENGINE_CLASS_VIDEO_ENHANCE:
+        *
+        * Video enhancement engines (also referred to as "vebox") support
+        * instructions related to image enhancement.
+        */
        I915_ENGINE_CLASS_VIDEO_ENHANCE = 3,
 
-       /* should be kept compact */
+       /**
+        * @I915_ENGINE_CLASS_COMPUTE:
+        *
+        * Compute engines support a subset of the instructions available
+        * on render engines:  compute engines support Compute (GPGPU) and
+        * programmable media workloads, but do not support the 3D pipeline.
+        */
+       I915_ENGINE_CLASS_COMPUTE       = 4,
+
+       /* Values in this enum should be kept compact. */
 
+       /**
+        * @I915_ENGINE_CLASS_INVALID:
+        *
+        * Placeholder value to represent an invalid engine class assignment.
+        */
        I915_ENGINE_CLASS_INVALID       = -1
 };
 
-/*
+/**
+ * struct i915_engine_class_instance - Engine class/instance identifier
+ *
  * There may be more than one engine fulfilling any role within the system.
  * Each engine of a class is given a unique instance number and therefore
  * any engine can be specified by its class:instance tuple. APIs that allow
@@ -180,10 +232,21 @@ enum drm_i915_gem_engine_class {
  * for this identification.
  */
 struct i915_engine_class_instance {
-       __u16 engine_class; /* see enum drm_i915_gem_engine_class */
-       __u16 engine_instance;
+       /**
+        * @engine_class:
+        *
+        * Engine class from enum drm_i915_gem_engine_class
+        */
+       __u16 engine_class;
 #define I915_ENGINE_CLASS_INVALID_NONE -1
 #define I915_ENGINE_CLASS_INVALID_VIRTUAL -2
+
+       /**
+        * @engine_instance:
+        *
+        * Engine instance.
+        */
+       __u16 engine_instance;
 };
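
A minimal userspace sketch of how such a class:instance pair is typically
filled in; the choice of the first compute engine here is only an
illustration, not something this patch prescribes:

    #include <drm/i915_drm.h>

    /* Address the first compute engine when an API takes a
     * class:instance pair. */
    struct i915_engine_class_instance ci = {
            .engine_class    = I915_ENGINE_CLASS_COMPUTE,
            .engine_instance = 0,
    };
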
 
 /**
@@ -2657,24 +2720,65 @@ enum drm_i915_perf_record_type {
        DRM_I915_PERF_RECORD_MAX /* non-ABI */
 };
 
-/*
+/**
+ * struct drm_i915_perf_oa_config
+ *
  * Structure to upload perf dynamic configuration into the kernel.
  */
 struct drm_i915_perf_oa_config {
-       /** String formatted like "%08x-%04x-%04x-%04x-%012x" */
+       /**
+        * @uuid:
+        *
+        * String formatted like "%\08x-%\04x-%\04x-%\04x-%\012x"
+        */
        char uuid[36];
 
+       /**
+        * @n_mux_regs:
+        *
+        * Number of mux regs in &mux_regs_ptr.
+        */
        __u32 n_mux_regs;
+
+       /**
+        * @n_boolean_regs:
+        *
+        * Number of boolean regs in &boolean_regs_ptr.
+        */
        __u32 n_boolean_regs;
+
+       /**
+        * @n_flex_regs:
+        *
+        * Number of flex regs in &flex_regs_ptr.
+        */
        __u32 n_flex_regs;
 
-       /*
-        * These fields are pointers to tuples of u32 values (register address,
-        * value). For example the expected length of the buffer pointed by
-        * mux_regs_ptr is (2 * sizeof(u32) * n_mux_regs).
+       /**
+        * @mux_regs_ptr:
+        *
+        * Pointer to tuples of u32 values (register address, value) for mux
+        * registers.  Expected length of buffer is (2 * sizeof(u32) *
+        * &n_mux_regs).
         */
        __u64 mux_regs_ptr;
+
+       /**
+        * @boolean_regs_ptr:
+        *
+        * Pointer to tuples of u32 values (register address, value) for
+        * boolean registers.  Expected length of buffer is (2 * sizeof(u32) *
+        * &n_boolean_regs).
+        */
        __u64 boolean_regs_ptr;
+
+       /**
+        * @flex_regs_ptr:
+        *
+        * Pointer to tuples of u32 values (register address, value) for
+        * flex registers.  Expected length of buffer is (2 * sizeof(u32) *
+        * &n_flex_regs).
+        */
        __u64 flex_regs_ptr;
 };
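
A hedged sketch of how userspace might populate this structure; the
register addresses, values, and UUID below are placeholders, and the
upload via DRM_IOCTL_I915_PERF_ADD_CONFIG uses the pre-existing perf
uAPI rather than anything added in this diff:

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Each entry is an (address, value) pair of u32s, so the buffer is
     * 2 * sizeof(__u32) * n_mux_regs bytes, matching the layout above. */
    static uint32_t mux_regs[2 * 2] = {
            0x9888, 0x00000000,     /* placeholder address, value */
            0x9888, 0x00000003,     /* placeholder address, value */
    };

    static int add_oa_config(int drm_fd)
    {
            struct drm_i915_perf_oa_config cfg;

            memset(&cfg, 0, sizeof(cfg));
            /* uuid is exactly 36 chars and not NUL-terminated */
            memcpy(cfg.uuid, "01234567-0123-0123-0123-0123456789ab",
                   sizeof(cfg.uuid));
            cfg.n_mux_regs = 2;
            cfg.mux_regs_ptr = (uintptr_t)mux_regs;

            /* On success the ioctl returns the new config id. */
            return ioctl(drm_fd, DRM_IOCTL_I915_PERF_ADD_CONFIG, &cfg);
    }
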
 
@@ -2685,12 +2789,24 @@ struct drm_i915_perf_oa_config {
  * @data_ptr also depends on the specific @query_id.
  */
 struct drm_i915_query_item {
-       /** @query_id: The id for this query */
+       /**
+        * @query_id:
+        *
+        * The id for this query.  Currently accepted query IDs are:
+        *  - %DRM_I915_QUERY_TOPOLOGY_INFO (see struct drm_i915_query_topology_info)
+        *  - %DRM_I915_QUERY_ENGINE_INFO (see struct drm_i915_engine_info)
+        *  - %DRM_I915_QUERY_PERF_CONFIG (see struct drm_i915_query_perf_config)
+        *  - %DRM_I915_QUERY_MEMORY_REGIONS (see struct drm_i915_query_memory_regions)
+        *  - %DRM_I915_QUERY_HWCONFIG_BLOB (see `GuC HWCONFIG blob uAPI`)
+        *  - %DRM_I915_QUERY_GEOMETRY_SUBSLICES (see struct drm_i915_query_topology_info)
+        */
        __u64 query_id;
-#define DRM_I915_QUERY_TOPOLOGY_INFO    1
-#define DRM_I915_QUERY_ENGINE_INFO     2
-#define DRM_I915_QUERY_PERF_CONFIG      3
-#define DRM_I915_QUERY_MEMORY_REGIONS   4
+#define DRM_I915_QUERY_TOPOLOGY_INFO           1
+#define DRM_I915_QUERY_ENGINE_INFO             2
+#define DRM_I915_QUERY_PERF_CONFIG             3
+#define DRM_I915_QUERY_MEMORY_REGIONS          4
+#define DRM_I915_QUERY_HWCONFIG_BLOB           5
+#define DRM_I915_QUERY_GEOMETRY_SUBSLICES      6
 /* Must be kept compact -- no holes and well documented */
 
        /**
@@ -2706,14 +2822,17 @@ struct drm_i915_query_item {
        /**
         * @flags:
         *
-        * When query_id == DRM_I915_QUERY_TOPOLOGY_INFO, must be 0.
+        * When &query_id == %DRM_I915_QUERY_TOPOLOGY_INFO, must be 0.
         *
-        * When query_id == DRM_I915_QUERY_PERF_CONFIG, must be one of the
+        * When &query_id == %DRM_I915_QUERY_PERF_CONFIG, must be one of the
         * following:
         *
-        *      - DRM_I915_QUERY_PERF_CONFIG_LIST
-        *      - DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID
-        *      - DRM_I915_QUERY_PERF_CONFIG_FOR_UUID
+        *      - %DRM_I915_QUERY_PERF_CONFIG_LIST
+        *      - %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID
+        *      - %DRM_I915_QUERY_PERF_CONFIG_FOR_UUID
+        *
+        * When &query_id == %DRM_I915_QUERY_GEOMETRY_SUBSLICES, must contain
+        * a struct i915_engine_class_instance that references a render engine.
         */
        __u32 flags;
 #define DRM_I915_QUERY_PERF_CONFIG_LIST          1
@@ -2771,66 +2890,112 @@ struct drm_i915_query {
        __u64 items_ptr;
 };
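
The query uAPI is usually driven with a two-call pattern: a first call
with length 0 asks i915 for the required buffer size, then a second call
fills the buffer. A hedged sketch (query_blob() is a hypothetical helper
name; error handling is abbreviated):

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    static void *query_blob(int drm_fd, __u64 query_id, __s32 *len_out)
    {
            struct drm_i915_query_item item = { .query_id = query_id };
            struct drm_i915_query q = {
                    .num_items = 1,
                    .items_ptr = (uintptr_t)&item,
            };
            void *buf;

            /* First pass: item.length == 0, i915 fills in the size. */
            if (ioctl(drm_fd, DRM_IOCTL_I915_QUERY, &q) || item.length <= 0)
                    return NULL;

            buf = calloc(1, item.length);
            if (!buf)
                    return NULL;

            /* Second pass: i915 writes the data into our buffer. */
            item.data_ptr = (uintptr_t)buf;
            if (ioctl(drm_fd, DRM_IOCTL_I915_QUERY, &q)) {
                    free(buf);
                    return NULL;
            }
            *len_out = item.length;
            return buf;
    }
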
 
-/*
- * Data written by the kernel with query DRM_I915_QUERY_TOPOLOGY_INFO :
- *
- * data: contains the 3 pieces of information :
- *
- * - the slice mask with one bit per slice telling whether a slice is
- *   available. The availability of slice X can be queried with the following
- *   formula :
- *
- *           (data[X / 8] >> (X % 8)) & 1
- *
- * - the subslice mask for each slice with one bit per subslice telling
- *   whether a subslice is available. Gen12 has dual-subslices, which are
- *   similar to two gen11 subslices. For gen12, this array represents dual-
- *   subslices. The availability of subslice Y in slice X can be queried
- *   with the following formula :
- *
- *           (data[subslice_offset +
- *                 X * subslice_stride +
- *                 Y / 8] >> (Y % 8)) & 1
- *
- * - the EU mask for each subslice in each slice with one bit per EU telling
- *   whether an EU is available. The availability of EU Z in subslice Y in
- *   slice X can be queried with the following formula :
+/**
+ * struct drm_i915_query_topology_info
  *
- *           (data[eu_offset +
- *                 (X * max_subslices + Y) * eu_stride +
- *                 Z / 8] >> (Z % 8)) & 1
+ * Describes slice/subslice/EU information queried by
+ * %DRM_I915_QUERY_TOPOLOGY_INFO
  */
 struct drm_i915_query_topology_info {
-       /*
+       /**
+        * @flags:
+        *
         * Unused for now. Must be cleared to zero.
         */
        __u16 flags;
 
+       /**
+        * @max_slices:
+        *
+        * The number of bits used to express the slice mask.
+        */
        __u16 max_slices;
+
+       /**
+        * @max_subslices:
+        *
+        * The number of bits used to express the subslice mask.
+        */
        __u16 max_subslices;
+
+       /**
+        * @max_eus_per_subslice:
+        *
+        * The number of bits in the EU mask that correspond to a single
+        * subslice's EUs.
+        */
        __u16 max_eus_per_subslice;
 
-       /*
+       /**
+        * @subslice_offset:
+        *
         * Offset in data[] at which the subslice masks are stored.
         */
        __u16 subslice_offset;
 
-       /*
+       /**
+        * @subslice_stride:
+        *
         * Stride at which each of the subslice masks for each slice are
         * stored.
         */
        __u16 subslice_stride;
 
-       /*
+       /**
+        * @eu_offset:
+        *
         * Offset in data[] at which the EU masks are stored.
         */
        __u16 eu_offset;
 
-       /*
+       /**
+        * @eu_stride:
+        *
         * Stride at which each of the EU masks for each subslice are stored.
         */
        __u16 eu_stride;
 
+       /**
+        * @data:
+        *
+        * Contains three pieces of information:
+        *
+        * - The slice mask with one bit per slice telling whether a slice is
+        *   available. The availability of slice X can be queried with the
+        *   following formula:
+        *
+        *   .. code:: c
+        *
+        *      (data[X / 8] >> (X % 8)) & 1
+        *
+        *   Starting with Xe_HP platforms, Intel hardware no longer has
+        *   traditional slices so i915 will always report a single slice
+        *   (hardcoded slicemask = 0x1) which contains all of the platform's
+        *   subslices.  I.e., the mask here does not reflect any of the newer
+        *   hardware concepts such as "gslices" or "cslices" since userspace
+        *   is capable of inferring those from the subslice mask.
+        *
+        * - The subslice mask for each slice with one bit per subslice telling
+        *   whether a subslice is available.  Starting with Gen12 we use the
+        *   term "subslice" to refer to what the hardware documentation
+        *   describes as a "dual-subslice".  The availability of subslice Y
+        *   in slice X can be queried with the following formula:
+        *
+        *   .. code:: c
+        *
+        *      (data[subslice_offset + X * subslice_stride + Y / 8] >> (Y % 8)) & 1
+        *
+        * - The EU mask for each subslice in each slice, with one bit per EU
+        *   telling whether an EU is available. The availability of EU Z in
+        *   subslice Y in slice X can be queried with the following formula:
+        *
+        *   .. code:: c
+        *
+        *      (data[eu_offset +
+        *            (X * max_subslices + Y) * eu_stride +
+        *            Z / 8
+        *       ] >> (Z % 8)) & 1
+        */
        __u8 data[];
 };
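
A hedged sketch applying the three documented formulas to a blob
returned for %DRM_I915_QUERY_TOPOLOGY_INFO (the helper names are made up
for illustration):

    #include <stdbool.h>
    #include <drm/i915_drm.h>

    static bool slice_available(const struct drm_i915_query_topology_info *ti,
                                int x)
    {
            return (ti->data[x / 8] >> (x % 8)) & 1;
    }

    static bool subslice_available(const struct drm_i915_query_topology_info *ti,
                                   int x, int y)
    {
            return (ti->data[ti->subslice_offset + x * ti->subslice_stride +
                             y / 8] >> (y % 8)) & 1;
    }

    static bool eu_available(const struct drm_i915_query_topology_info *ti,
                             int x, int y, int z)
    {
            return (ti->data[ti->eu_offset +
                             (x * ti->max_subslices + y) * ti->eu_stride +
                             z / 8] >> (z % 8)) & 1;
    }
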
 
@@ -2951,52 +3116,68 @@ struct drm_i915_query_engine_info {
        struct drm_i915_engine_info engines[];
 };
 
-/*
- * Data written by the kernel with query DRM_I915_QUERY_PERF_CONFIG.
+/**
+ * struct drm_i915_query_perf_config
+ *
+ * Data written by the kernel with query %DRM_I915_QUERY_PERF_CONFIG and
+ * %DRM_I915_QUERY_GEOMETRY_SUBSLICES.
  */
 struct drm_i915_query_perf_config {
        union {
-               /*
-                * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_LIST, i915 sets
-                * this fields to the number of configurations available.
+               /**
+                * @n_configs:
+                *
+                * When &drm_i915_query_item.flags ==
+                * %DRM_I915_QUERY_PERF_CONFIG_LIST, i915 sets this field to
+                * the number of configurations available.
                 */
                __u64 n_configs;
 
-               /*
-                * When query_id == DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_ID,
-                * i915 will use the value in this field as configuration
-                * identifier to decide what data to write into config_ptr.
+               /**
+                * @config:
+                *
+                * When &drm_i915_query_item.flags ==
+                * %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_ID, i915 will use the
+                * value in this field as a configuration identifier to decide
+                * what data to write into config_ptr.
                 */
                __u64 config;
 
-               /*
-                * When query_id == DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID,
-                * i915 will use the value in this field as configuration
-                * identifier to decide what data to write into config_ptr.
+               /**
+                * @uuid:
+                *
+                * When &drm_i915_query_item.flags ==
+                * %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID, i915 will use the
+                * value in this field as a configuration identifier to decide
+                * what data to write into config_ptr.
                 *
                 * String formatted like "%08x-%04x-%04x-%04x-%012x"
                 */
                char uuid[36];
        };
 
-       /*
+       /**
+        * @flags:
+        *
         * Unused for now. Must be cleared to zero.
         */
        __u32 flags;
 
-       /*
-        * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_LIST, i915 will
-        * write an array of __u64 of configuration identifiers.
+       /**
+        * @data:
         *
-        * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_DATA, i915 will
-        * write a struct drm_i915_perf_oa_config. If the following fields of
-        * drm_i915_perf_oa_config are set not set to 0, i915 will write into
-        * the associated pointers the values of submitted when the
+        * When &drm_i915_query_item.flags == %DRM_I915_QUERY_PERF_CONFIG_LIST,
+        * i915 will write an array of __u64 configuration identifiers.
+        *
+        * When &drm_i915_query_item.flags == %DRM_I915_QUERY_PERF_CONFIG_DATA,
+        * i915 will write a struct drm_i915_perf_oa_config. If the following
+        * fields of struct drm_i915_perf_oa_config are not set to 0, i915 will
+        * write into the associated pointers the values submitted when the
         * configuration was created:
         *
-        *         - n_mux_regs
-        *         - n_boolean_regs
-        *         - n_flex_regs
+        *  - &drm_i915_perf_oa_config.n_mux_regs
+        *  - &drm_i915_perf_oa_config.n_boolean_regs
+        *  - &drm_i915_perf_oa_config.n_flex_regs
         */
        __u8 data[];
 };
@@ -3134,6 +3315,16 @@ struct drm_i915_query_memory_regions {
        struct drm_i915_memory_region_info regions[];
 };
 
+/**
+ * DOC: GuC HWCONFIG blob uAPI
+ *
+ * The GuC produces a blob with information about the current device.
+ * i915 reads this blob from GuC and makes it available via this uAPI.
+ *
+ * The format and meaning of the blob content are documented in the
+ * Programmer's Reference Manual.
+ */
+
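Hypothetical usage, reusing the query_blob() sketch shown earlier with
the new query id:

    __s32 len;
    void *hwconfig = query_blob(drm_fd, DRM_I915_QUERY_HWCONFIG_BLOB, &len);
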
 /**
  * struct drm_i915_gem_create_ext - Existing gem_create behaviour, with added
  * extension support using struct i915_user_extension.
index b339bf2..0242f31 100644 (file)
@@ -890,6 +890,7 @@ enum {
        IFLA_BOND_SLAVE_AD_AGGREGATOR_ID,
        IFLA_BOND_SLAVE_AD_ACTOR_OPER_PORT_STATE,
        IFLA_BOND_SLAVE_AD_PARTNER_OPER_PORT_STATE,
+       IFLA_BOND_SLAVE_PRIO,
        __IFLA_BOND_SLAVE_MAX,
 };
 
index 6a184d2..5088bd9 100644 (file)
@@ -444,6 +444,9 @@ struct kvm_run {
 #define KVM_SYSTEM_EVENT_SHUTDOWN       1
 #define KVM_SYSTEM_EVENT_RESET          2
 #define KVM_SYSTEM_EVENT_CRASH          3
+#define KVM_SYSTEM_EVENT_WAKEUP         4
+#define KVM_SYSTEM_EVENT_SUSPEND        5
+#define KVM_SYSTEM_EVENT_SEV_TERM       6
                        __u32 type;
                        __u32 ndata;
                        union {
@@ -646,6 +649,7 @@ struct kvm_vapic_addr {
 #define KVM_MP_STATE_OPERATING         7
 #define KVM_MP_STATE_LOAD              8
 #define KVM_MP_STATE_AP_RESET_HOLD     9
+#define KVM_MP_STATE_SUSPENDED         10
 
 struct kvm_mp_state {
        __u32 mp_state;
@@ -1150,8 +1154,9 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_MEM_OP_EXTENSION 211
 #define KVM_CAP_PMU_CAPABILITY 212
 #define KVM_CAP_DISABLE_QUIRKS2 213
-/* #define KVM_CAP_VM_TSC_CONTROL 214 */
+#define KVM_CAP_VM_TSC_CONTROL 214
 #define KVM_CAP_SYSTEM_EVENT_DATA 215
+#define KVM_CAP_ARM_SYSTEM_SUSPEND 216
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1240,6 +1245,7 @@ struct kvm_x86_mce {
 #define KVM_XEN_HVM_CONFIG_SHARED_INFO         (1 << 2)
 #define KVM_XEN_HVM_CONFIG_RUNSTATE            (1 << 3)
 #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL       (1 << 4)
+#define KVM_XEN_HVM_CONFIG_EVTCHN_SEND         (1 << 5)
 
 struct kvm_xen_hvm_config {
        __u32 flags;
@@ -1478,7 +1484,8 @@ struct kvm_s390_ucas_mapping {
 #define KVM_SET_PIT2              _IOW(KVMIO,  0xa0, struct kvm_pit_state2)
 /* Available with KVM_CAP_PPC_GET_PVINFO */
 #define KVM_PPC_GET_PVINFO       _IOW(KVMIO,  0xa1, struct kvm_ppc_pvinfo)
-/* Available with KVM_CAP_TSC_CONTROL */
+/* Available with KVM_CAP_TSC_CONTROL for a vCPU, or with
+ * KVM_CAP_VM_TSC_CONTROL to set defaults for a VM */
 #define KVM_SET_TSC_KHZ           _IO(KVMIO,  0xa2)
 #define KVM_GET_TSC_KHZ           _IO(KVMIO,  0xa3)
 /* Available with KVM_CAP_PCI_2_3 */
@@ -1694,6 +1701,32 @@ struct kvm_xen_hvm_attr {
                struct {
                        __u64 gfn;
                } shared_info;
+               struct {
+                       __u32 send_port;
+                       __u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */
+                       __u32 flags;
+#define KVM_XEN_EVTCHN_DEASSIGN                (1 << 0)
+#define KVM_XEN_EVTCHN_UPDATE          (1 << 1)
+#define KVM_XEN_EVTCHN_RESET           (1 << 2)
+                       /*
+                        * Events sent by the guest are either looped back to
+                        * the guest itself (potentially on a different port#)
+                        * or signalled via an eventfd.
+                        */
+                       union {
+                               struct {
+                                       __u32 port;
+                                       __u32 vcpu;
+                                       __u32 priority;
+                               } port;
+                               struct {
+                                       __u32 port; /* Zero for eventfd */
+                                       __s32 fd;
+                               } eventfd;
+                               __u32 padding[4];
+                       } deliver;
+               } evtchn;
+               __u32 xen_version;
                __u64 pad[8];
        } u;
 };
@@ -1702,11 +1735,17 @@ struct kvm_xen_hvm_attr {
 #define KVM_XEN_ATTR_TYPE_LONG_MODE            0x0
 #define KVM_XEN_ATTR_TYPE_SHARED_INFO          0x1
 #define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR                0x2
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
+#define KVM_XEN_ATTR_TYPE_EVTCHN               0x3
+#define KVM_XEN_ATTR_TYPE_XEN_VERSION          0x4
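
A hedged sketch of routing a guest event channel to an eventfd with the
new attribute; the port number is a placeholder, EVTCHNSTAT_interdomain
is defined locally here since it comes from the Xen event-channel ABI
rather than <linux/kvm.h>, and error handling is elided:

    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    #define EVTCHNSTAT_interdomain 2    /* from the Xen event-channel ABI */

    static int route_evtchn_to_eventfd(int vm_fd)
    {
            struct kvm_xen_hvm_attr attr = {
                    .type = KVM_XEN_ATTR_TYPE_EVTCHN,
                    .u.evtchn = {
                            .send_port = 3,     /* placeholder guest port */
                            .type = EVTCHNSTAT_interdomain,
                            .deliver.eventfd = {
                                    .port = 0,  /* zero selects eventfd delivery */
                                    .fd = eventfd(0, EFD_CLOEXEC),
                            },
                    },
            };

            return ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &attr);
    }
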
 
 /* Per-vCPU Xen attributes */
 #define KVM_XEN_VCPU_GET_ATTR  _IOWR(KVMIO, 0xca, struct kvm_xen_vcpu_attr)
 #define KVM_XEN_VCPU_SET_ATTR  _IOW(KVMIO,  0xcb, struct kvm_xen_vcpu_attr)
 
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
+#define KVM_XEN_HVM_EVTCHN_SEND        _IOW(KVMIO,  0xd0, struct kvm_irq_routing_xen_evtchn)
+
 #define KVM_GET_SREGS2             _IOR(KVMIO,  0xcc, struct kvm_sregs2)
 #define KVM_SET_SREGS2             _IOW(KVMIO,  0xcd, struct kvm_sregs2)
 
@@ -1724,6 +1763,13 @@ struct kvm_xen_vcpu_attr {
                        __u64 time_blocked;
                        __u64 time_offline;
                } runstate;
+               __u32 vcpu_id;
+               struct {
+                       __u32 port;
+                       __u32 priority;
+                       __u64 expires_ns;
+               } timer;
+               __u8 vector;
        } u;
 };
 
@@ -1734,6 +1780,10 @@ struct kvm_xen_vcpu_attr {
 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT        0x3
 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA   0x4
 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST 0x5
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID         0x6
+#define KVM_XEN_VCPU_ATTR_TYPE_TIMER           0x7
+#define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR   0x8
 
 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {
index e998764..a5e06dc 100644 (file)
@@ -272,6 +272,15 @@ struct prctl_mm_map {
 # define PR_SCHED_CORE_SCOPE_THREAD_GROUP      1
 # define PR_SCHED_CORE_SCOPE_PROCESS_GROUP     2
 
+/* arm64 Scalable Matrix Extension controls */
+/* Flag values must be in sync with SVE versions */
+#define PR_SME_SET_VL                  63      /* set task vector length */
+# define PR_SME_SET_VL_ONEXEC          (1 << 18) /* defer effect until exec */
+#define PR_SME_GET_VL                  64      /* get task vector length */
+/* Bits common to PR_SME_SET_VL and PR_SME_GET_VL */
+# define PR_SME_VL_LEN_MASK            0xffff
+# define PR_SME_VL_INHERIT             (1 << 17) /* inherit across exec */
+
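A hedged userspace sketch, assuming the PR_SME_* calls follow the same
return convention as the existing PR_SVE_* controls (the call returns
the resulting vector-length configuration) and using a placeholder
vector length:

    #include <stdio.h>
    #include <sys/prctl.h>

    static int set_sme_vl(void)
    {
            /* Request a 32-byte SME vector length, deferred until exec. */
            int ret = prctl(PR_SME_SET_VL, 32 | PR_SME_SET_VL_ONEXEC, 0, 0, 0);

            if (ret < 0)
                    return ret;
            /* The low bits of the result encode the configured VL. */
            printf("SME VL: %d bytes\n", ret & PR_SME_VL_LEN_MASK);
            return 0;
    }
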
 #define PR_SET_VMA             0x53564d41
 # define PR_SET_VMA_ANON_NAME          0
 
index 5d99e7c..cab645d 100644 (file)
 
 /* Set or get vhost backend capability */
 
-/* Use message type V2 */
-#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1
-/* IOTLB can accept batching hints */
-#define VHOST_BACKEND_F_IOTLB_BATCH  0x2
-
 #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
 #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
 
 /* Get the valid iova range */
 #define VHOST_VDPA_GET_IOVA_RANGE      _IOR(VHOST_VIRTIO, 0x78, \
                                             struct vhost_vdpa_iova_range)
-
 /* Get the config size */
 #define VHOST_VDPA_GET_CONFIG_SIZE     _IOR(VHOST_VIRTIO, 0x79, __u32)
 
 /* Get the count of all virtqueues */
 #define VHOST_VDPA_GET_VQS_COUNT       _IOR(VHOST_VIRTIO, 0x80, __u32)
 
+/* Get the number of virtqueue groups. */
+#define VHOST_VDPA_GET_GROUP_NUM       _IOR(VHOST_VIRTIO, 0x81, __u32)
+
+/* Get the number of address spaces. */
+#define VHOST_VDPA_GET_AS_NUM          _IOR(VHOST_VIRTIO, 0x7A, unsigned int)
+
+/* Get the group for a virtqueue: read index, write group in num.
+ * The virtqueue index is stored in the index field of
+ * vhost_vring_state. The group for this specific virtqueue is
+ * returned via the num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_GET_VRING_GROUP     _IOWR(VHOST_VIRTIO, 0x7B,       \
+                                             struct vhost_vring_state)
+/* Set the ASID for a virtqueue group. The group index is stored in
+ * the index field of vhost_vring_state, and the ASID associated with
+ * this group is stored in the num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_SET_GROUP_ASID      _IOW(VHOST_VIRTIO, 0x7C, \
+                                            struct vhost_vring_state)
+
 #endif
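
A hedged sketch tying the two new ioctls together: query which group a
virtqueue belongs to, then bind that group to an address space (the
ASID value is a placeholder and error handling is minimal):

    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    static int move_vq0_group_to_asid(int vdpa_fd, unsigned int asid)
    {
            struct vhost_vring_state state = { .index = 0 }; /* vq index */

            if (ioctl(vdpa_fd, VHOST_VDPA_GET_VRING_GROUP, &state))
                    return -1;

            /* state.num now holds the group; reuse the struct for the ASID. */
            state.index = state.num;    /* group index */
            state.num = asid;           /* ASID to associate */
            return ioctl(vdpa_fd, VHOST_VDPA_SET_GROUP_ASID, &state);
    }
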
index 5a5bd74..9c366b3 100755 (executable)
@@ -1646,7 +1646,8 @@ Press any other key to refresh statistics immediately.
                          .format(values))
             if len(pids) > 1:
                 sys.exit('Error: Multiple processes found (pids: {}). Use "-p"'
-                         ' to specify the desired pid'.format(" ".join(pids)))
+                         ' to specify the desired pid'
+                         .format(" ".join(map(str, pids))))
             namespace.pid = pids[0]
 
     argparser = argparse.ArgumentParser(description=description_text,
index c1d5867..952f352 100644 (file)
@@ -149,23 +149,30 @@ int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
                        int fd, group_fd, *evsel_fd;
 
                        evsel_fd = FD(evsel, idx, thread);
-                       if (evsel_fd == NULL)
-                               return -EINVAL;
+                       if (evsel_fd == NULL) {
+                               err = -EINVAL;
+                               goto out;
+                       }
 
                        err = get_group_fd(evsel, idx, thread, &group_fd);
                        if (err < 0)
-                               return err;
+                               goto out;
 
                        fd = sys_perf_event_open(&evsel->attr,
                                                 threads->map[thread].pid,
                                                 cpu, group_fd, 0);
 
-                       if (fd < 0)
-                               return -errno;
+                       if (fd < 0) {
+                               err = -errno;
+                               goto out;
+                       }
 
                        *evsel_fd = fd;
                }
        }
+out:
+       if (err)
+               perf_evsel__close(evsel);
 
        return err;
 }
index a75bf11..54d4e50 100644 (file)
@@ -891,7 +891,9 @@ static int copy_kcore_dir(struct perf_inject *inject)
        if (ret < 0)
                return ret;
        pr_debug("%s\n", cmd);
-       return system(cmd);
+       ret = system(cmd);
+       free(cmd);
+       return ret;
 }
 
 static int output_fd(struct perf_inject *inject)
@@ -916,7 +918,7 @@ static int __cmd_inject(struct perf_inject *inject)
                inject->tool.tracing_data = perf_event__repipe_tracing_data;
        }
 
-       output_data_offset = session->header.data_offset;
+       output_data_offset = perf_session__data_offset(session->evlist);
 
        if (inject->build_id_all) {
                inject->tool.mmap         = perf_event__repipe_buildid_mmap;
index 4ce87a8..d2ecd4d 100644 (file)
@@ -2586,6 +2586,8 @@ int cmd_stat(int argc, const char **argv)
        if (evlist__initialize_ctlfd(evsel_list, stat_config.ctl_fd, stat_config.ctl_fd_ack))
                goto out;
 
+       /* Enable ignoring missing threads when -p option is defined. */
+       evlist__first(evsel_list)->ignore_missing_thread = target.pid;
        status = 0;
        for (run_idx = 0; forever || run_idx < stat_config.run_count; run_idx++) {
                if (stat_config.run_count != 1 && verbose > 0)
index d1ebb55..6f921db 100644 (file)
@@ -151,11 +151,21 @@ static int detect_ioctl(void)
 static int detect_share(int wp_cnt, int bp_cnt)
 {
        struct perf_event_attr attr;
-       int i, fd[wp_cnt + bp_cnt], ret;
+       int i, *fd = NULL, ret = -1;
+
+       if (wp_cnt + bp_cnt == 0)
+               return 0;
+
+       fd = malloc(sizeof(int) * (wp_cnt + bp_cnt));
+       if (!fd)
+               return -1;
 
        for (i = 0; i < wp_cnt; i++) {
                fd[i] = wp_event((void *)&the_var, &attr);
-               TEST_ASSERT_VAL("failed to create wp\n", fd[i] != -1);
+               if (fd[i] == -1) {
+                       pr_err("failed to create wp\n");
+                       goto out;
+               }
        }
 
        for (; i < (bp_cnt + wp_cnt); i++) {
@@ -166,9 +176,11 @@ static int detect_share(int wp_cnt, int bp_cnt)
 
        ret = i != (bp_cnt + wp_cnt);
 
+out:
        while (i--)
                close(fd[i]);
 
+       free(fd);
        return ret;
 }
 
index d54c537..5c0032f 100644 (file)
@@ -97,6 +97,8 @@ static int test__expr(struct test_suite *t __maybe_unused, int subtest __maybe_u
        ret |= test(ctx, "2.2 > 2.2", 0);
        ret |= test(ctx, "2.2 < 1.1", 0);
        ret |= test(ctx, "1.1 > 2.2", 0);
+       ret |= test(ctx, "1.1e10 < 1.1e100", 1);
+       ret |= test(ctx, "1.1e2 > 1.1e-2", 1);
 
        if (ret) {
                expr__ctx_free(ctx);
diff --git a/tools/perf/tests/shell/lib/perf_csv_output_lint.py b/tools/perf/tests/shell/lib/perf_csv_output_lint.py
deleted file mode 100644 (file)
index 714f283..0000000
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/usr/bin/python
-# SPDX-License-Identifier: GPL-2.0
-
-import argparse
-import sys
-
-# Basic sanity check of perf CSV output as specified in the man page.
-# Currently just checks the number of fields per line in output.
-
-ap = argparse.ArgumentParser()
-ap.add_argument('--no-args', action='store_true')
-ap.add_argument('--interval', action='store_true')
-ap.add_argument('--system-wide-no-aggr', action='store_true')
-ap.add_argument('--system-wide', action='store_true')
-ap.add_argument('--event', action='store_true')
-ap.add_argument('--per-core', action='store_true')
-ap.add_argument('--per-thread', action='store_true')
-ap.add_argument('--per-die', action='store_true')
-ap.add_argument('--per-node', action='store_true')
-ap.add_argument('--per-socket', action='store_true')
-ap.add_argument('--separator', default=',', nargs='?')
-args = ap.parse_args()
-
-Lines = sys.stdin.readlines()
-
-def check_csv_output(exp):
-  for line in Lines:
-    if 'failed' not in line:
-      count = line.count(args.separator)
-      if count != exp:
-        sys.stdout.write(''.join(Lines))
-        raise RuntimeError(f'wrong number of fields. expected {exp} in {line}')
-
-try:
-  if args.no_args or args.system_wide or args.event:
-    expected_items = 6
-  elif args.interval or args.per_thread or args.system_wide_no_aggr:
-    expected_items = 7
-  elif args.per_core or args.per_socket or args.per_node or args.per_die:
-    expected_items = 8
-  else:
-    ap.print_help()
-    raise RuntimeError('No checking option specified')
-  check_csv_output(expected_items)
-
-except:
-  sys.stdout.write('Test failed for input: ' + ''.join(Lines))
-  raise
index 983220e..38c26f3 100755 (executable)
@@ -6,20 +6,41 @@
 
 set -e
 
-pythonchecker=$(dirname $0)/lib/perf_csv_output_lint.py
-if [ "x$PYTHON" == "x" ]
-then
-       if which python3 > /dev/null
-       then
-               PYTHON=python3
-       elif which python > /dev/null
-       then
-               PYTHON=python
-       else
-               echo Skipping test, python not detected please set environment variable PYTHON.
-               exit 2
-       fi
-fi
+function commachecker()
+{
+       local -i cnt=0 exp=0
+
+       case "$1"
+       in "--no-args")         exp=6
+       ;; "--system-wide")     exp=6
+       ;; "--event")           exp=6
+       ;; "--interval")        exp=7
+       ;; "--per-thread")      exp=7
+       ;; "--system-wide-no-aggr")     exp=7
+                               [ $(uname -m) = "s390x" ] && exp=6
+       ;; "--per-core")        exp=8
+       ;; "--per-socket")      exp=8
+       ;; "--per-node")        exp=8
+       ;; "--per-die")         exp=8
+       esac
+
+       while read line
+       do
+               # Check for lines beginning with Failed
+               x=${line:0:6}
+               [ "$x" = "Failed" ] && continue
+
+               # Count the number of commas
+               x=$(echo $line | tr -d -c ',')
+               cnt="${#x}"
+               # echo $line $cnt
+               [ "$cnt" -ne "$exp" ] && {
+                       echo "wrong number of fields. expected $exp in $line" 1>&2
+                       exit 1;
+               }
+       done
+       return 0
+}
 
 # Return true if perf_event_paranoid is > $1 and not running as root.
 function ParanoidAndNotRoot()
@@ -30,7 +51,7 @@ function ParanoidAndNotRoot()
 check_no_args()
 {
        echo -n "Checking CSV output: no args "
-       perf stat -x, true 2>&1 | $PYTHON $pythonchecker --no-args
+       perf stat -x, true 2>&1 | commachecker --no-args
        echo "[Success]"
 }
 
@@ -42,7 +63,7 @@ check_system_wide()
                echo "[Skip] paranoid and not root"
                return
        fi
-       perf stat -x, -a true 2>&1 | $PYTHON $pythonchecker --system-wide
+       perf stat -x, -a true 2>&1 | commachecker --system-wide
        echo "[Success]"
 }
 
@@ -55,14 +76,14 @@ check_system_wide_no_aggr()
                return
        fi
        echo -n "Checking CSV output: system wide no aggregation "
-       perf stat -x, -A -a --no-merge true 2>&1 | $PYTHON $pythonchecker --system-wide-no-aggr
+       perf stat -x, -A -a --no-merge true 2>&1 | commachecker --system-wide-no-aggr
        echo "[Success]"
 }
 
 check_interval()
 {
        echo -n "Checking CSV output: interval "
-       perf stat -x, -I 1000 true 2>&1 | $PYTHON $pythonchecker --interval
+       perf stat -x, -I 1000 true 2>&1 | commachecker --interval
        echo "[Success]"
 }
 
@@ -70,7 +91,7 @@ check_interval()
 check_event()
 {
        echo -n "Checking CSV output: event "
-       perf stat -x, -e cpu-clock true 2>&1 | $PYTHON $pythonchecker --event
+       perf stat -x, -e cpu-clock true 2>&1 | commachecker --event
        echo "[Success]"
 }
 
@@ -82,7 +103,7 @@ check_per_core()
                echo "[Skip] paranoid and not root"
                return
        fi
-       perf stat -x, --per-core -a true 2>&1 | $PYTHON $pythonchecker --per-core
+       perf stat -x, --per-core -a true 2>&1 | commachecker --per-core
        echo "[Success]"
 }
 
@@ -94,7 +115,7 @@ check_per_thread()
                echo "[Skip] paranoid and not root"
                return
        fi
-       perf stat -x, --per-thread -a true 2>&1 | $PYTHON $pythonchecker --per-thread
+       perf stat -x, --per-thread -a true 2>&1 | commachecker --per-thread
        echo "[Success]"
 }
 
@@ -106,7 +127,7 @@ check_per_die()
                echo "[Skip] paranoid and not root"
                return
        fi
-       perf stat -x, --per-die -a true 2>&1 | $PYTHON $pythonchecker --per-die
+       perf stat -x, --per-die -a true 2>&1 | commachecker --per-die
        echo "[Success]"
 }
 
@@ -118,7 +139,7 @@ check_per_node()
                echo "[Skip] paranoid and not root"
                return
        fi
-       perf stat -x, --per-node -a true 2>&1 | $PYTHON $pythonchecker --per-node
+       perf stat -x, --per-node -a true 2>&1 | commachecker --per-node
        echo "[Success]"
 }
 
@@ -130,7 +151,7 @@ check_per_socket()
                echo "[Skip] paranoid and not root"
                return
        fi
-       perf stat -x, --per-socket -a true 2>&1 | $PYTHON $pythonchecker --per-socket
+       perf stat -x, --per-socket -a true 2>&1 | commachecker --per-socket
        echo "[Success]"
 }
 
index 6ffbb27..ec108d4 100755 (executable)
@@ -43,7 +43,7 @@ CFLAGS="-g -O0 -fno-inline -fno-omit-frame-pointer"
 cc $CFLAGS $TEST_PROGRAM_SOURCE -o $TEST_PROGRAM || exit 1
 
 # Add a 1 second delay to skip samples that are not in the leaf() function
-perf record -o $PERF_DATA --call-graph fp -e cycles//u -D 1000 -- $TEST_PROGRAM 2> /dev/null &
+perf record -o $PERF_DATA --call-graph fp -e cycles//u -D 1000 --user-callchains -- $TEST_PROGRAM 2> /dev/null &
 PID=$!
 
 echo " + Recording (PID=$PID)..."
index d23a9e3..0b4f61b 100644 (file)
@@ -115,7 +115,7 @@ static int check_cpu_topology(char *path, struct perf_cpu_map *map)
         * physical_package_id will be set to -1. Hence skip this
         * test if physical_package_id returns -1 for cpu from perf_cpu_map.
         */
-       if (strncmp(session->header.env.arch, "powerpc", 7)) {
+       if (!strncmp(session->header.env.arch, "ppc64le", 7)) {
                if (cpu__get_socket_id(perf_cpu_map__cpu(map, 0)) == -1)
                        return TEST_SKIP;
        }
index 2c5f72f..37c53ba 100755 (executable)
@@ -33,23 +33,13 @@ create_errno_lookup_func()
        local arch=$(arch_string "$1")
        local nr name
 
-       cat <<EoFuncBegin
-static const char *errno_to_name__$arch(int err)
-{
-       switch (err) {
-EoFuncBegin
+       printf "static const char *errno_to_name__%s(int err)\n{\n\tswitch (err) {\n" $arch
 
        while read name nr; do
                printf '\tcase %d: return "%s";\n' $nr $name
        done
 
-       cat <<EoFuncEnd
-       default:
-               return "(unknown)";
-       }
-}
-
-EoFuncEnd
+       printf '\tdefault: return "(unknown)";\n\t}\n}\n'
 }
 
 process_arch()
index 6f85f5d..17311ad 100644 (file)
@@ -50,6 +50,9 @@ struct linger {
 struct msghdr {
        void            *msg_name;      /* ptr to socket address structure */
        int             msg_namelen;    /* size of socket address structure */
+
+       int             msg_inq;        /* output, data left in socket */
+
        struct iov_iter msg_iter;       /* data */
 
        /*
@@ -62,8 +65,9 @@ struct msghdr {
                void __user     *msg_control_user;
        };
        bool            msg_control_is_user : 1;
-       __kernel_size_t msg_controllen; /* ancillary data buffer length */
+       bool            msg_get_inq : 1;/* return INQ after receive */
        unsigned int    msg_flags;      /* flags on received message */
+       __kernel_size_t msg_controllen; /* ancillary data buffer length */
        struct kiocb    *msg_iocb;      /* ptr to iocb for async requests */
 };
 
@@ -434,6 +438,7 @@ extern struct file *do_accept(struct file *file, unsigned file_flags,
 extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr,
                         int __user *upeer_addrlen, int flags);
 extern int __sys_socket(int family, int type, int protocol);
+extern struct file *__sys_socket_file(int family, int type, int protocol);
 extern int __sys_bind(int fd, struct sockaddr __user *umyaddr, int addrlen);
 extern int __sys_connect_file(struct file *file, struct sockaddr_storage *addr,
                              int addrlen, int file_flags);
index 1a80151..d040406 100644 (file)
@@ -387,26 +387,16 @@ static int arm_spe__synth_instruction_sample(struct arm_spe_queue *speq,
        return arm_spe_deliver_synth_event(spe, speq, event, &sample);
 }
 
-#define SPE_MEM_TYPE   (ARM_SPE_L1D_ACCESS | ARM_SPE_L1D_MISS | \
-                        ARM_SPE_LLC_ACCESS | ARM_SPE_LLC_MISS | \
-                        ARM_SPE_REMOTE_ACCESS)
-
-static bool arm_spe__is_memory_event(enum arm_spe_sample_type type)
-{
-       if (type & SPE_MEM_TYPE)
-               return true;
-
-       return false;
-}
-
 static u64 arm_spe__synth_data_source(const struct arm_spe_record *record)
 {
        union perf_mem_data_src data_src = { 0 };
 
        if (record->op == ARM_SPE_LD)
                data_src.mem_op = PERF_MEM_OP_LOAD;
-       else
+       else if (record->op == ARM_SPE_ST)
                data_src.mem_op = PERF_MEM_OP_STORE;
+       else
+               return 0;
 
        if (record->type & (ARM_SPE_LLC_ACCESS | ARM_SPE_LLC_MISS)) {
                data_src.mem_lvl = PERF_MEM_LVL_L3;
@@ -510,7 +500,11 @@ static int arm_spe_sample(struct arm_spe_queue *speq)
                        return err;
        }
 
-       if (spe->sample_memory && arm_spe__is_memory_event(record->type)) {
+       /*
+        * When data_src is zero, the record is not a memory operation, so
+        * skip synthesizing a memory sample for it.
+        */
+       if (spe->sample_memory && data_src) {
                err = arm_spe__synth_mem_sample(speq, spe->memory_id, data_src);
                if (err)
                        return err;
index e271e05..80b1d2b 100644 (file)
@@ -149,11 +149,10 @@ get_bpf_prog_info_linear(int fd, __u64 arrays)
                count = bpf_prog_info_read_offset_u32(&info, desc->count_offset);
                size  = bpf_prog_info_read_offset_u32(&info, desc->size_offset);
 
-               data_len += count * size;
+               data_len += roundup(count * size, sizeof(__u64));
        }
 
        /* step 3: allocate continuous memory */
-       data_len = roundup(data_len, sizeof(__u64));
        info_linear = malloc(sizeof(struct perf_bpil) + data_len);
        if (!info_linear)
                return ERR_PTR(-ENOMEM);
@@ -180,7 +179,7 @@ get_bpf_prog_info_linear(int fd, __u64 arrays)
                bpf_prog_info_set_offset_u64(&info_linear->info,
                                             desc->array_offset,
                                             ptr_to_u64(ptr));
-               ptr += count * size;
+               ptr += roundup(count * size, sizeof(__u64));
        }
 
        /* step 5: call syscall again to get required arrays */
index b73e84a..f289b77 100644 (file)
@@ -265,6 +265,12 @@ int off_cpu_write(struct perf_session *session)
 
        sample_type = evsel->core.attr.sample_type;
 
+       if (sample_type & ~OFFCPU_SAMPLE_TYPES) {
+               pr_err("not supported sample type: %llx\n",
+                      (unsigned long long)sample_type);
+               return -1;
+       }
+
        if (sample_type & (PERF_SAMPLE_ID | PERF_SAMPLE_IDENTIFIER)) {
                if (evsel->core.id)
                        sid = evsel->core.id[0];
@@ -319,7 +325,6 @@ int off_cpu_write(struct perf_session *session)
                }
                if (sample_type & PERF_SAMPLE_CGROUP)
                        data.array[n++] = key.cgroup_id;
-               /* TODO: handle more sample types */
 
                size = n * sizeof(u64);
                data.hdr.size = size;
index 792ae28..cc6d7fd 100644 (file)
@@ -71,6 +71,11 @@ struct {
        __uint(max_entries, 1);
 } cgroup_filter SEC(".maps");
 
+/* new kernel task_struct definition */
+struct task_struct___new {
+       long __state;
+} __attribute__((preserve_access_index));
+
 /* old kernel task_struct definition */
 struct task_struct___old {
        long state;
@@ -93,14 +98,17 @@ const volatile bool uses_cgroup_v1 = false;
  */
 static inline int get_task_state(struct task_struct *t)
 {
-       if (bpf_core_field_exists(t->__state))
-               return BPF_CORE_READ(t, __state);
+       /* recast pointer to capture new type for compiler */
+       struct task_struct___new *t_new = (void *)t;
 
-       /* recast pointer to capture task_struct___old type for compiler */
-       struct task_struct___old *t_old = (void *)t;
+       if (bpf_core_field_exists(t_new->__state)) {
+               return BPF_CORE_READ(t_new, __state);
+       } else {
+               /* recast pointer to capture old type for compiler */
+               struct task_struct___old *t_old = (void *)t;
 
-       /* now use old "state" name of the field */
-       return BPF_CORE_READ(t_old, state);
+               return BPF_CORE_READ(t_old, state);
+       }
 }
 
 static inline __u64 get_cgroup_id(struct task_struct *t)
index 82f3d46..328668f 100644 (file)
@@ -872,6 +872,30 @@ out_free:
        return err;
 }
 
+static int filename__read_build_id_ns(const char *filename,
+                                     struct build_id *bid,
+                                     struct nsinfo *nsi)
+{
+       struct nscookie nsc;
+       int ret;
+
+       nsinfo__mountns_enter(nsi, &nsc);
+       ret = filename__read_build_id(filename, bid);
+       nsinfo__mountns_exit(&nsc);
+
+       return ret;
+}
+
+static bool dso__build_id_mismatch(struct dso *dso, const char *name)
+{
+       struct build_id bid;
+
+       if (filename__read_build_id_ns(name, &bid, dso->nsinfo) < 0)
+               return false;
+
+       return !dso__build_id_equal(dso, &bid);
+}
+
 static int dso__cache_build_id(struct dso *dso, struct machine *machine,
                               void *priv __maybe_unused)
 {
@@ -886,6 +910,10 @@ static int dso__cache_build_id(struct dso *dso, struct machine *machine,
                is_kallsyms = true;
                name = machine->mmap_name;
        }
+
+       if (!is_kallsyms && dso__build_id_mismatch(dso, name))
+               return 0;
+
        return build_id_cache__add_b(&dso->bid, name, dso->nsinfo,
                                     is_kallsyms, is_vdso);
 }
index ce499c5..094b0a9 100644 (file)
@@ -48,6 +48,7 @@
 #include "util.h"
 #include "hashmap.h"
 #include "pmu-hybrid.h"
+#include "off_cpu.h"
 #include "../perf-sys.h"
 #include "util/parse-branch-options.h"
 #include <internal/xyarray.h>
@@ -1102,6 +1103,11 @@ static void evsel__set_default_freq_period(struct record_opts *opts,
        }
 }
 
+static bool evsel__is_offcpu_event(struct evsel *evsel)
+{
+       return evsel__is_bpf_output(evsel) && !strcmp(evsel->name, OFFCPU_EVENT);
+}
+
 /*
  * The enable_on_exec/disabled value strategy:
  *
@@ -1366,6 +1372,9 @@ void evsel__config(struct evsel *evsel, struct record_opts *opts,
         */
        if (evsel__is_dummy_event(evsel))
                evsel__reset_sample_bit(evsel, BRANCH_STACK);
+
+       if (evsel__is_offcpu_event(evsel))
+               evsel->core.attr.sample_type &= OFFCPU_SAMPLE_TYPES;
 }
 
 int evsel__set_filter(struct evsel *evsel, const char *filter)
index 0a13eb2..4dc8edb 100644 (file)
@@ -91,7 +91,7 @@ static int literal(yyscan_t scanner)
 }
 %}
 
-number         ([0-9]+\.?[0-9]*|[0-9]*\.?[0-9]+)
+number         ([0-9]+\.?[0-9]*|[0-9]*\.?[0-9]+)(e-?[0-9]+)?
 
 sch            [-,=]
 spec           \\{sch}
index 53332da..6ad629d 100644 (file)
@@ -3686,6 +3686,20 @@ int perf_session__write_header(struct perf_session *session,
        return perf_session__do_write_header(session, evlist, fd, at_exit, NULL);
 }
 
+size_t perf_session__data_offset(const struct evlist *evlist)
+{
+       struct evsel *evsel;
+       size_t data_offset;
+
+       data_offset = sizeof(struct perf_file_header);
+       evlist__for_each_entry(evlist, evsel) {
+               data_offset += evsel->core.ids * sizeof(u64);
+       }
+       data_offset += evlist->core.nr_entries * sizeof(struct perf_file_attr);
+
+       return data_offset;
+}
+
 int perf_session__inject_header(struct perf_session *session,
                                struct evlist *evlist,
                                int fd,
index 08563c1..56916da 100644 (file)
@@ -136,6 +136,8 @@ int perf_session__inject_header(struct perf_session *session,
                                int fd,
                                struct feat_copier *fc);
 
+size_t perf_session__data_offset(const struct evlist *evlist);
+
 void perf_header__set_feat(struct perf_header *header, int feat);
 void perf_header__clear_feat(struct perf_header *header, int feat);
 bool perf_header__has_feat(const struct perf_header *header, int feat);
index ee8fcfa..8f7baea 100644 (file)
@@ -1372,6 +1372,7 @@ static int parse_ids(bool metric_no_merge, struct perf_pmu *fake_pmu,
 
        *out_evlist = NULL;
        if (!metric_no_merge || hashmap__size(ids->ids) == 0) {
+               bool added_event = false;
                int i;
                /*
                 * We may fail to share events between metrics because a tool
@@ -1393,8 +1394,16 @@ static int parse_ids(bool metric_no_merge, struct perf_pmu *fake_pmu,
                                if (!tmp)
                                        return -ENOMEM;
                                ids__insert(ids->ids, tmp);
+                               added_event = true;
                        }
                }
+               if (!added_event && hashmap__size(ids->ids) == 0) {
+                       char *tmp = strdup("duration_time");
+
+                       if (!tmp)
+                               return -ENOMEM;
+                       ids__insert(ids->ids, tmp);
+               }
        }
        ret = metricgroup__build_event_string(&events, ids, modifier,
                                              has_constraint);
index 548008f..2dd67c6 100644 (file)
@@ -1,6 +1,8 @@
 #ifndef PERF_UTIL_OFF_CPU_H
 #define PERF_UTIL_OFF_CPU_H
 
+#include <linux/perf_event.h>
+
 struct evlist;
 struct target;
 struct perf_session;
@@ -8,6 +10,13 @@ struct record_opts;
 
 #define OFFCPU_EVENT  "offcpu-time"
 
+#define OFFCPU_SAMPLE_TYPES  (PERF_SAMPLE_IDENTIFIER | PERF_SAMPLE_IP | \
+                             PERF_SAMPLE_TID | PERF_SAMPLE_TIME | \
+                             PERF_SAMPLE_ID | PERF_SAMPLE_CPU | \
+                             PERF_SAMPLE_PERIOD | PERF_SAMPLE_CALLCHAIN | \
+                             PERF_SAMPLE_CGROUP)
+
+
 #ifdef HAVE_BPF_SKEL
 int off_cpu_prepare(struct evlist *evlist, struct target *target,
                    struct record_opts *opts);
index 27acdc5..84d17bd 100644 (file)
@@ -754,7 +754,7 @@ static int __event__synthesize_thread(union perf_event *comm_event,
        snprintf(filename, sizeof(filename), "%s/proc/%d/task",
                 machine->root_dir, pid);
 
-       n = scandir(filename, &dirent, filter_task, alphasort);
+       n = scandir(filename, &dirent, filter_task, NULL);
        if (n < 0)
                return n;
 
@@ -767,11 +767,12 @@ static int __event__synthesize_thread(union perf_event *comm_event,
                if (*end)
                        continue;
 
-               rc = -1;
+               /* some threads may exit just after the scan; ignore them */
                if (perf_event__prepare_comm(comm_event, pid, _pid, machine,
                                             &tgid, &ppid, &kernel_thread) != 0)
-                       break;
+                       continue;
 
+               rc = -1;
                if (perf_event__synthesize_fork(tool, fork_event, _pid, tgid,
                                                ppid, process, machine) < 0)
                        break;
@@ -987,7 +988,7 @@ int perf_event__synthesize_threads(struct perf_tool *tool,
                return 0;
 
        snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir);
-       n = scandir(proc_path, &dirent, filter_task, alphasort);
+       n = scandir(proc_path, &dirent, filter_task, NULL);
        if (n < 0)
                return err;
 
index 3762269..81b6bd6 100644 (file)
@@ -174,7 +174,7 @@ static int elf_section_address_and_offset(int fd, const char *name, u64 *address
        Elf *elf;
        GElf_Ehdr ehdr;
        GElf_Shdr shdr;
-       int ret;
+       int ret = -1;
 
        elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL);
        if (elf == NULL)
@@ -197,7 +197,7 @@ out_err:
 #ifndef NO_LIBUNWIND_DEBUG_FRAME
 static u64 elf_section_offset(int fd, const char *name)
 {
-       u64 address, offset;
+       u64 address, offset = 0;
 
        if (elf_section_address_and_offset(fd, name, &address, &offset))
                return 0;
index 83ef55e..2974b44 100644 (file)
@@ -121,24 +121,24 @@ static void kprobe_multi_link_api_subtest(void)
 })
 
        GET_ADDR("bpf_fentry_test1", addrs[0]);
-       GET_ADDR("bpf_fentry_test2", addrs[1]);
-       GET_ADDR("bpf_fentry_test3", addrs[2]);
-       GET_ADDR("bpf_fentry_test4", addrs[3]);
-       GET_ADDR("bpf_fentry_test5", addrs[4]);
-       GET_ADDR("bpf_fentry_test6", addrs[5]);
-       GET_ADDR("bpf_fentry_test7", addrs[6]);
+       GET_ADDR("bpf_fentry_test3", addrs[1]);
+       GET_ADDR("bpf_fentry_test4", addrs[2]);
+       GET_ADDR("bpf_fentry_test5", addrs[3]);
+       GET_ADDR("bpf_fentry_test6", addrs[4]);
+       GET_ADDR("bpf_fentry_test7", addrs[5]);
+       GET_ADDR("bpf_fentry_test2", addrs[6]);
        GET_ADDR("bpf_fentry_test8", addrs[7]);
 
 #undef GET_ADDR
 
-       cookies[0] = 1;
-       cookies[1] = 2;
-       cookies[2] = 3;
-       cookies[3] = 4;
-       cookies[4] = 5;
-       cookies[5] = 6;
-       cookies[6] = 7;
-       cookies[7] = 8;
+       cookies[0] = 1; /* bpf_fentry_test1 */
+       cookies[1] = 2; /* bpf_fentry_test3 */
+       cookies[2] = 3; /* bpf_fentry_test4 */
+       cookies[3] = 4; /* bpf_fentry_test5 */
+       cookies[4] = 5; /* bpf_fentry_test6 */
+       cookies[5] = 6; /* bpf_fentry_test7 */
+       cookies[6] = 7; /* bpf_fentry_test2 */
+       cookies[7] = 8; /* bpf_fentry_test8 */
 
        opts.kprobe_multi.addrs = (const unsigned long *) &addrs;
        opts.kprobe_multi.cnt = ARRAY_SIZE(addrs);
@@ -149,14 +149,14 @@ static void kprobe_multi_link_api_subtest(void)
        if (!ASSERT_GE(link1_fd, 0, "link1_fd"))
                goto cleanup;
 
-       cookies[0] = 8;
-       cookies[1] = 7;
-       cookies[2] = 6;
-       cookies[3] = 5;
-       cookies[4] = 4;
-       cookies[5] = 3;
-       cookies[6] = 2;
-       cookies[7] = 1;
+       cookies[0] = 8; /* bpf_fentry_test1 */
+       cookies[1] = 7; /* bpf_fentry_test3 */
+       cookies[2] = 6; /* bpf_fentry_test4 */
+       cookies[3] = 5; /* bpf_fentry_test5 */
+       cookies[4] = 4; /* bpf_fentry_test6 */
+       cookies[5] = 3; /* bpf_fentry_test7 */
+       cookies[6] = 2; /* bpf_fentry_test2 */
+       cookies[7] = 1; /* bpf_fentry_test8 */
 
        opts.kprobe_multi.flags = BPF_F_KPROBE_MULTI_RETURN;
        prog_fd = bpf_program__fd(skel->progs.test_kretprobe);
@@ -181,12 +181,12 @@ static void kprobe_multi_attach_api_subtest(void)
        struct kprobe_multi *skel = NULL;
        const char *syms[8] = {
                "bpf_fentry_test1",
-               "bpf_fentry_test2",
                "bpf_fentry_test3",
                "bpf_fentry_test4",
                "bpf_fentry_test5",
                "bpf_fentry_test6",
                "bpf_fentry_test7",
+               "bpf_fentry_test2",
                "bpf_fentry_test8",
        };
        __u64 cookies[8];
@@ -198,14 +198,14 @@ static void kprobe_multi_attach_api_subtest(void)
        skel->bss->pid = getpid();
        skel->bss->test_cookie = true;
 
-       cookies[0] = 1;
-       cookies[1] = 2;
-       cookies[2] = 3;
-       cookies[3] = 4;
-       cookies[4] = 5;
-       cookies[5] = 6;
-       cookies[6] = 7;
-       cookies[7] = 8;
+       cookies[0] = 1; /* bpf_fentry_test1 */
+       cookies[1] = 2; /* bpf_fentry_test3 */
+       cookies[2] = 3; /* bpf_fentry_test4 */
+       cookies[3] = 4; /* bpf_fentry_test5 */
+       cookies[4] = 5; /* bpf_fentry_test6 */
+       cookies[5] = 6; /* bpf_fentry_test7 */
+       cookies[6] = 7; /* bpf_fentry_test2 */
+       cookies[7] = 8; /* bpf_fentry_test8 */
 
        opts.syms = syms;
        opts.cnt = ARRAY_SIZE(syms);
@@ -216,14 +216,14 @@ static void kprobe_multi_attach_api_subtest(void)
        if (!ASSERT_OK_PTR(link1, "bpf_program__attach_kprobe_multi_opts"))
                goto cleanup;
 
-       cookies[0] = 8;
-       cookies[1] = 7;
-       cookies[2] = 6;
-       cookies[3] = 5;
-       cookies[4] = 4;
-       cookies[5] = 3;
-       cookies[6] = 2;
-       cookies[7] = 1;
+       cookies[0] = 8; /* bpf_fentry_test1 */
+       cookies[1] = 7; /* bpf_fentry_test3 */
+       cookies[2] = 6; /* bpf_fentry_test4 */
+       cookies[3] = 5; /* bpf_fentry_test5 */
+       cookies[4] = 4; /* bpf_fentry_test6 */
+       cookies[5] = 3; /* bpf_fentry_test7 */
+       cookies[6] = 2; /* bpf_fentry_test2 */
+       cookies[7] = 1; /* bpf_fentry_test8 */
 
        opts.retprobe = true;
 
index a8cb8a9..335917d 100644 (file)
@@ -364,6 +364,9 @@ static int get_syms(char ***symsp, size_t *cntp)
                        continue;
                if (!strncmp(name, "rcu_", 4))
                        continue;
+               if (!strncmp(name, "__ftrace_invalid_address__",
+                            sizeof("__ftrace_invalid_address__") - 1))
+                       continue;
                err = hashmap__add(map, name, NULL);
                if (err) {
                        free(name);
index af293ea..e172d89 100644 (file)
@@ -4,6 +4,7 @@
  * Tests for sockmap/sockhash holding kTLS sockets.
  */
 
+#include <netinet/tcp.h>
 #include "test_progs.h"
 
 #define MAX_TEST_NAME 80
@@ -92,9 +93,78 @@ close_srv:
        close(srv);
 }
 
+static void test_sockmap_ktls_update_fails_when_sock_has_ulp(int family, int map)
+{
+       struct sockaddr_storage addr = {};
+       socklen_t len = sizeof(addr);
+       struct sockaddr_in6 *v6;
+       struct sockaddr_in *v4;
+       int err, s, zero = 0;
+
+       switch (family) {
+       case AF_INET:
+               v4 = (struct sockaddr_in *)&addr;
+               v4->sin_family = AF_INET;
+               break;
+       case AF_INET6:
+               v6 = (struct sockaddr_in6 *)&addr;
+               v6->sin6_family = AF_INET6;
+               break;
+       default:
+               PRINT_FAIL("unsupported socket family %d", family);
+               return;
+       }
+
+       s = socket(family, SOCK_STREAM, 0);
+       if (!ASSERT_GE(s, 0, "socket"))
+               return;
+
+       err = bind(s, (struct sockaddr *)&addr, len);
+       if (!ASSERT_OK(err, "bind"))
+               goto close;
+
+       err = getsockname(s, (struct sockaddr *)&addr, &len);
+       if (!ASSERT_OK(err, "getsockname"))
+               goto close;
+
+       err = connect(s, (struct sockaddr *)&addr, len);
+       if (!ASSERT_OK(err, "connect"))
+               goto close;
+
+       /* save sk->sk_prot and set it to tls_prots */
+       err = setsockopt(s, IPPROTO_TCP, TCP_ULP, "tls", strlen("tls"));
+       if (!ASSERT_OK(err, "setsockopt(TCP_ULP)"))
+               goto close;
+
+       /* sockmap update should not affect saved sk_prot */
+       err = bpf_map_update_elem(map, &zero, &s, BPF_ANY);
+       if (!ASSERT_ERR(err, "sockmap update elem"))
+               goto close;
+
+       /* call sk->sk_prot->setsockopt to dispatch to saved sk_prot */
+       err = setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &zero, sizeof(zero));
+       ASSERT_OK(err, "setsockopt(TCP_NODELAY)");
+
+close:
+       close(s);
+}
+
+static const char *fmt_test_name(const char *subtest_name, int family,
+                                enum bpf_map_type map_type)
+{
+       const char *map_type_str = map_type == BPF_MAP_TYPE_SOCKMAP ? "SOCKMAP" : "SOCKHASH";
+       const char *family_str = family == AF_INET ? "IPv4" : "IPv6";
+       static char test_name[MAX_TEST_NAME];
+
+       snprintf(test_name, MAX_TEST_NAME,
+                "sockmap_ktls %s %s %s",
+                subtest_name, family_str, map_type_str);
+
+       return test_name;
+}
+
 static void run_tests(int family, enum bpf_map_type map_type)
 {
-       char test_name[MAX_TEST_NAME];
        int map;
 
        map = bpf_map_create(map_type, NULL, sizeof(int), sizeof(int), 1, NULL);
@@ -103,14 +173,10 @@ static void run_tests(int family, enum bpf_map_type map_type)
                return;
        }
 
-       snprintf(test_name, MAX_TEST_NAME,
-                "sockmap_ktls disconnect_after_delete %s %s",
-                family == AF_INET ? "IPv4" : "IPv6",
-                map_type == BPF_MAP_TYPE_SOCKMAP ? "SOCKMAP" : "SOCKHASH");
-       if (!test__start_subtest(test_name))
-               return;
-
-       test_sockmap_ktls_disconnect_after_delete(family, map);
+       if (test__start_subtest(fmt_test_name("disconnect_after_delete", family, map_type)))
+               test_sockmap_ktls_disconnect_after_delete(family, map);
+       if (test__start_subtest(fmt_test_name("update_fails_when_sock_has_ulp", family, map_type)))
+               test_sockmap_ktls_update_fails_when_sock_has_ulp(family, map);
 
        close(map);
 }
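The ULP subtest above leans on the TCP self-connect trick to get an established socket without a listener: bind to port 0 so the kernel assigns one, read it back with getsockname(), then connect the socket to its own address (TCP simultaneous open). A condensed sketch, assuming IPv4 and omitting error checks:

	#include <netinet/in.h>
	#include <sys/socket.h>

	static int tcp_self_connect(void)
	{
		struct sockaddr_in a = { .sin_family = AF_INET }; /* port 0 */
		socklen_t alen = sizeof(a);
		int s = socket(AF_INET, SOCK_STREAM, 0);

		bind(s, (struct sockaddr *)&a, alen);         /* kernel picks a port */
		getsockname(s, (struct sockaddr *)&a, &alen); /* learn that port */
		connect(s, (struct sockaddr *)&a, alen);      /* simultaneous open */
		return s;
	}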
index c4da87e..19c7088 100644 (file)
@@ -831,6 +831,59 @@ out:
        bpf_object__close(obj);
 }
 
+#include "tailcall_bpf2bpf6.skel.h"
+
+/* Test that tail call counting works even when there is data on the
+ * stack that is not aligned to 8 bytes.
+ */
+static void test_tailcall_bpf2bpf_6(void)
+{
+       struct tailcall_bpf2bpf6 *obj;
+       int err, map_fd, prog_fd, main_fd, data_fd, i, val;
+       LIBBPF_OPTS(bpf_test_run_opts, topts,
+               .data_in = &pkt_v4,
+               .data_size_in = sizeof(pkt_v4),
+               .repeat = 1,
+       );
+
+       obj = tailcall_bpf2bpf6__open_and_load();
+       if (!ASSERT_OK_PTR(obj, "open and load"))
+               return;
+
+       main_fd = bpf_program__fd(obj->progs.entry);
+       if (!ASSERT_GE(main_fd, 0, "entry prog fd"))
+               goto out;
+
+       map_fd = bpf_map__fd(obj->maps.jmp_table);
+       if (!ASSERT_GE(map_fd, 0, "jmp_table map fd"))
+               goto out;
+
+       prog_fd = bpf_program__fd(obj->progs.classifier_0);
+       if (!ASSERT_GE(prog_fd, 0, "classifier_0 prog fd"))
+               goto out;
+
+       i = 0;
+       err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+       if (!ASSERT_OK(err, "jmp_table map update"))
+               goto out;
+
+       err = bpf_prog_test_run_opts(main_fd, &topts);
+       ASSERT_OK(err, "entry prog test run");
+       ASSERT_EQ(topts.retval, 0, "tailcall retval");
+
+       data_fd = bpf_map__fd(obj->maps.bss);
+       if (!ASSERT_GE(map_fd, 0, "bss map fd"))
+               goto out;
+
+       i = 0;
+       err = bpf_map_lookup_elem(data_fd, &i, &val);
+       ASSERT_OK(err, "bss map lookup");
+       ASSERT_EQ(val, 1, "done flag is set");
+
+out:
+       tailcall_bpf2bpf6__destroy(obj);
+}
+
 void test_tailcalls(void)
 {
        if (test__start_subtest("tailcall_1"))
@@ -855,4 +908,6 @@ void test_tailcalls(void)
                test_tailcall_bpf2bpf_4(false);
        if (test__start_subtest("tailcall_bpf2bpf_5"))
                test_tailcall_bpf2bpf_4(true);
+       if (test__start_subtest("tailcall_bpf2bpf_6"))
+               test_tailcall_bpf2bpf_6();
 }
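Reading the 'done' flag through bpf_map__fd(obj->maps.bss) plus bpf_map_lookup_elem() exercises the map view of the program's .bss. With skeletons the same global is normally also visible through the mmap'd view, which would shorten the check to something like this (a sketch, assuming the default mmap-able .bss):

	ASSERT_EQ(obj->bss->done, 1, "done flag is set");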
index 93510f4..08f95a8 100644 (file)
@@ -54,21 +54,21 @@ static void kprobe_multi_check(void *ctx, bool is_return)
 
        if (is_return) {
                SET(kretprobe_test1_result, &bpf_fentry_test1, 8);
-               SET(kretprobe_test2_result, &bpf_fentry_test2, 7);
-               SET(kretprobe_test3_result, &bpf_fentry_test3, 6);
-               SET(kretprobe_test4_result, &bpf_fentry_test4, 5);
-               SET(kretprobe_test5_result, &bpf_fentry_test5, 4);
-               SET(kretprobe_test6_result, &bpf_fentry_test6, 3);
-               SET(kretprobe_test7_result, &bpf_fentry_test7, 2);
+               SET(kretprobe_test2_result, &bpf_fentry_test2, 2);
+               SET(kretprobe_test3_result, &bpf_fentry_test3, 7);
+               SET(kretprobe_test4_result, &bpf_fentry_test4, 6);
+               SET(kretprobe_test5_result, &bpf_fentry_test5, 5);
+               SET(kretprobe_test6_result, &bpf_fentry_test6, 4);
+               SET(kretprobe_test7_result, &bpf_fentry_test7, 3);
                SET(kretprobe_test8_result, &bpf_fentry_test8, 1);
        } else {
                SET(kprobe_test1_result, &bpf_fentry_test1, 1);
-               SET(kprobe_test2_result, &bpf_fentry_test2, 2);
-               SET(kprobe_test3_result, &bpf_fentry_test3, 3);
-               SET(kprobe_test4_result, &bpf_fentry_test4, 4);
-               SET(kprobe_test5_result, &bpf_fentry_test5, 5);
-               SET(kprobe_test6_result, &bpf_fentry_test6, 6);
-               SET(kprobe_test7_result, &bpf_fentry_test7, 7);
+               SET(kprobe_test2_result, &bpf_fentry_test2, 7);
+               SET(kprobe_test3_result, &bpf_fentry_test3, 2);
+               SET(kprobe_test4_result, &bpf_fentry_test4, 3);
+               SET(kprobe_test5_result, &bpf_fentry_test5, 4);
+               SET(kprobe_test6_result, &bpf_fentry_test6, 5);
+               SET(kprobe_test7_result, &bpf_fentry_test7, 6);
                SET(kprobe_test8_result, &bpf_fentry_test8, 8);
        }
 
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c
new file mode 100644 (file)
index 0000000..41ce83d
--- /dev/null
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+#define __unused __attribute__((unused))
+
+struct {
+       __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+       __uint(max_entries, 1);
+       __uint(key_size, sizeof(__u32));
+       __uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int done = 0;
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb __unused)
+{
+       done = 1;
+       return 0;
+}
+
+static __noinline
+int subprog_tail(struct __sk_buff *skb)
+{
+       /* Don't propagate the constant to the caller */
+       volatile int ret = 1;
+
+       bpf_tail_call_static(skb, &jmp_table, 0);
+       return ret;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+       /* Have data on the stack whose size is not a multiple of 8 */
+       volatile char arr[1] = {};
+
+       return subprog_tail(skb);
+}
+
+char __license[] SEC("license") = "GPL";
index 6ddc418..1a27a62 100644 (file)
        .result = ACCEPT,
        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
+{
+       "jeq32/jne32: bounds checking",
+       .insns = {
+       BPF_MOV64_IMM(BPF_REG_6, 563),
+       BPF_MOV64_IMM(BPF_REG_2, 0),
+       BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0),
+       BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0),
+       BPF_ALU32_REG(BPF_OR, BPF_REG_2, BPF_REG_6),
+       BPF_JMP32_IMM(BPF_JNE, BPF_REG_2, 8, 5),
+       BPF_JMP_IMM(BPF_JSGE, BPF_REG_2, 500, 2),
+       BPF_MOV64_IMM(BPF_REG_0, 2),
+       BPF_EXIT_INSN(),
+       BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
+       BPF_EXIT_INSN(),
+       BPF_MOV64_IMM(BPF_REG_0, 1),
+       BPF_EXIT_INSN(),
+       },
+       .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+       .result = ACCEPT,
+       .retval = 1,
+},
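What this case appears to exercise: the double BPF_NEG hides the constant from the verifier's tracking, the 32-bit OR re-establishes that every bit of 563 is known set (so w2 can never be 8), and the verifier must use those jeq32/jne32 bounds to prune the fall-through path, which reads the uninitialized R4 and would otherwise be rejected. A plain-C mirror of the reachable logic (illustrative only):

	#include <stdint.h>

	int jne32_mirror(void)
	{
		volatile uint64_t hide = 0;          /* like BPF_NEG: value opaque */
		uint32_t w2 = (uint32_t)hide | 563;  /* bits of 563 known set */

		if (w2 != 8)        /* provable: w2 >= 563 */
			return 1;   /* the only reachable exit; retval 1 */
		return 2;           /* dead branch, pruned by the verifier */
	}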
index 6f951d1..497fe17 100644 (file)
        .result = ACCEPT,
        .retval = 3,
 },
+{
+       "jump & dead code elimination",
+       .insns = {
+       BPF_MOV64_IMM(BPF_REG_0, 1),
+       BPF_MOV64_IMM(BPF_REG_3, 0),
+       BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0),
+       BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0),
+       BPF_ALU64_IMM(BPF_OR, BPF_REG_3, 32767),
+       BPF_JMP_IMM(BPF_JSGE, BPF_REG_3, 0, 1),
+       BPF_EXIT_INSN(),
+       BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 0x8000, 1),
+       BPF_EXIT_INSN(),
+       BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -32767),
+       BPF_MOV64_IMM(BPF_REG_0, 2),
+       BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 0, 1),
+       BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
+       BPF_EXIT_INSN(),
+       },
+       .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+       .result = ACCEPT,
+       .retval = 2,
+},
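A similar idea here: the OR makes the low 15 bits of R3 known set, and the two signed-range checks then leave 32767 as the only value consistent with both the bounds and the set bits, so after the subtraction the JLE is provably always taken and the 'r0 = r4' instruction is dead code the verifier must eliminate rather than reject. A rough C mirror:

	#include <stdint.h>

	int deadcode_mirror(void)
	{
		volatile int64_t hide = 0;
		int64_t r3 = hide | 32767;  /* low 15 bits known set */

		if (!(r3 >= 0))
			return 1;
		if (!(r3 <= 0x8000))
			return 1;
		/* bounds [0, 0x8000] plus the set bits leave only 32767 */
		r3 += -32767;               /* r3 == 0 */
		if (r3 <= 0)                /* provably taken */
			return 2;           /* expected retval */
		return -1;                  /* dead code */
	}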
index aa8e8b5..cd8c5ec 100644 (file)
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 CFLAGS += -I../../../../usr/include/
+CFLAGS += -I../../../../include/
 
 TEST_GEN_PROGS := dma_map_benchmark
 
index c3b3c09..5c997f1 100644 (file)
@@ -10,8 +10,8 @@
 #include <unistd.h>
 #include <sys/ioctl.h>
 #include <sys/mman.h>
-#include <linux/map_benchmark.h>
 #include <linux/types.h>
+#include <linux/map_benchmark.h>
 
 #define NSEC_PER_MSEC  1000000L
 
index e0b0164..be1d972 100644 (file)
@@ -73,20 +73,19 @@ void ucall_uninit(struct kvm_vm *vm)
 
 void ucall(uint64_t cmd, int nargs, ...)
 {
-       struct ucall uc = {
-               .cmd = cmd,
-       };
+       struct ucall uc = {};
        va_list va;
        int i;
 
+       WRITE_ONCE(uc.cmd, cmd);
        nargs = nargs <= UCALL_MAX_ARGS ? nargs : UCALL_MAX_ARGS;
 
        va_start(va, nargs);
        for (i = 0; i < nargs; ++i)
-               uc.args[i] = va_arg(va, uint64_t);
+               WRITE_ONCE(uc.args[i], va_arg(va, uint64_t));
        va_end(va);
 
-       *ucall_exit_mmio_addr = (vm_vaddr_t)&uc;
+       WRITE_ONCE(*ucall_exit_mmio_addr, (vm_vaddr_t)&uc);
 }
 
 uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc)
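The WRITE_ONCE() conversion matters because the guest publishes the ucall struct to the host through plain memory that the host reads asynchronously: WRITE_ONCE() forces a single, untorn volatile store, so the compiler cannot reorder, split, or elide the writes. Roughly (the tools-headers macro is more elaborate):

	#define WRITE_ONCE(x, val) (*(volatile typeof(x) *)&(x) = (val))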
index 2a2d240..1a5cc3c 100644 (file)
@@ -7,10 +7,31 @@ else ifneq ($(filter -%,$(LLVM)),)
 LLVM_SUFFIX := $(LLVM)
 endif
 
-CC := $(LLVM_PREFIX)clang$(LLVM_SUFFIX)
+CLANG_TARGET_FLAGS_arm          := arm-linux-gnueabi
+CLANG_TARGET_FLAGS_arm64        := aarch64-linux-gnu
+CLANG_TARGET_FLAGS_hexagon      := hexagon-linux-musl
+CLANG_TARGET_FLAGS_m68k         := m68k-linux-gnu
+CLANG_TARGET_FLAGS_mips         := mipsel-linux-gnu
+CLANG_TARGET_FLAGS_powerpc      := powerpc64le-linux-gnu
+CLANG_TARGET_FLAGS_riscv        := riscv64-linux-gnu
+CLANG_TARGET_FLAGS_s390         := s390x-linux-gnu
+CLANG_TARGET_FLAGS_x86          := x86_64-linux-gnu
+CLANG_TARGET_FLAGS              := $(CLANG_TARGET_FLAGS_$(ARCH))
+
+ifeq ($(CROSS_COMPILE),)
+ifeq ($(CLANG_TARGET_FLAGS),)
+$(error Specify CROSS_COMPILE or add '--target=' option to lib.mk)
+else
+CLANG_FLAGS     += --target=$(CLANG_TARGET_FLAGS)
+endif # CLANG_TARGET_FLAGS
+else
+CLANG_FLAGS     += --target=$(notdir $(CROSS_COMPILE:%-=%))
+endif # CROSS_COMPILE
+
+CC := $(LLVM_PREFIX)clang$(LLVM_SUFFIX) $(CLANG_FLAGS) -fintegrated-as
 else
 CC := $(CROSS_COMPILE)gcc
-endif
+endif # LLVM
 
 ifeq (0,$(MAKELEVEL))
     ifeq ($(OUTPUT),)
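With this in place, selftests can be cross-built using clang alone, e.g. 'make LLVM=1 ARCH=arm64' at the selftests level; when CROSS_COMPILE is set, its prefix is reused to derive the --target triple instead.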
index a29f796..1257baa 100644 (file)
@@ -37,3 +37,4 @@ gro
 ioam6_parser
 toeplitz
 cmsg_sender
+unix_connect
\ No newline at end of file
index 7ea54af..ddad703 100644 (file)
@@ -54,7 +54,7 @@ TEST_GEN_FILES += ipsec
 TEST_GEN_FILES += ioam6_parser
 TEST_GEN_FILES += gro
 TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
-TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls
+TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls tun
 TEST_GEN_FILES += toeplitz
 TEST_GEN_FILES += cmsg_sender
 TEST_GEN_FILES += stress_reuseport_listen
index df34164..969620a 100644 (file)
@@ -1,2 +1,3 @@
-TEST_GEN_PROGS := test_unix_oob
+TEST_GEN_PROGS := test_unix_oob unix_connect
+
 include ../../lib.mk
diff --git a/tools/testing/selftests/net/af_unix/unix_connect.c b/tools/testing/selftests/net/af_unix/unix_connect.c
new file mode 100644 (file)
index 0000000..157e44e
--- /dev/null
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define _GNU_SOURCE
+#include <sched.h>
+
+#include <stdio.h>
+#include <unistd.h>
+
+#include <sys/socket.h>
+#include <sys/un.h>
+
+#include "../../kselftest_harness.h"
+
+FIXTURE(unix_connect)
+{
+       int server, client;
+       int family;
+};
+
+FIXTURE_VARIANT(unix_connect)
+{
+       int type;
+       char sun_path[8];
+       int len;
+       int flags;
+       int err;
+};
+
+FIXTURE_VARIANT_ADD(unix_connect, stream_pathname)
+{
+       .type = SOCK_STREAM,
+       .sun_path = "test",
+       .len = 4 + 1,
+       .flags = 0,
+       .err = 0,
+};
+
+FIXTURE_VARIANT_ADD(unix_connect, stream_abstract)
+{
+       .type = SOCK_STREAM,
+       .sun_path = "\0test",
+       .len = 5,
+       .flags = 0,
+       .err = 0,
+};
+
+FIXTURE_VARIANT_ADD(unix_connect, stream_pathname_netns)
+{
+       .type = SOCK_STREAM,
+       .sun_path = "test",
+       .len = 4 + 1,
+       .flags = CLONE_NEWNET,
+       .err = 0,
+};
+
+FIXTURE_VARIANT_ADD(unix_connect, stream_abstract_netns)
+{
+       .type = SOCK_STREAM,
+       .sun_path = "\0test",
+       .len = 5,
+       .flags = CLONE_NEWNET,
+       .err = ECONNREFUSED,
+};
+
+FIXTURE_VARIANT_ADD(unix_connect, dgram_pathname)
+{
+       .type = SOCK_DGRAM,
+       .sun_path = "test",
+       .len = 4 + 1,
+       .flags = 0,
+       .err = 0,
+};
+
+FIXTURE_VARIANT_ADD(unix_connect, dgram_abstract)
+{
+       .type = SOCK_DGRAM,
+       .sun_path = "\0test",
+       .len = 5,
+       .flags = 0,
+       .err = 0,
+};
+
+FIXTURE_VARIANT_ADD(unix_connect, dgram_pathname_netns)
+{
+       .type = SOCK_DGRAM,
+       .sun_path = "test",
+       .len = 4 + 1,
+       .flags = CLONE_NEWNET,
+       .err = 0,
+};
+
+FIXTURE_VARIANT_ADD(unix_connect, dgram_abstract_netns)
+{
+       .type = SOCK_DGRAM,
+       .sun_path = "\0test",
+       .len = 5,
+       .flags = CLONE_NEWNET,
+       .err = ECONNREFUSED,
+};
+
+FIXTURE_SETUP(unix_connect)
+{
+       self->family = AF_UNIX;
+}
+
+FIXTURE_TEARDOWN(unix_connect)
+{
+       close(self->server);
+       close(self->client);
+
+       if (variant->sun_path[0])
+               remove("test");
+}
+
+#define offsetof(type, member) ((size_t)&((type *)0)->member)
+
+TEST_F(unix_connect, test)
+{
+       socklen_t addrlen;
+       struct sockaddr_un addr = {
+               .sun_family = self->family,
+       };
+       int err;
+
+       self->server = socket(self->family, variant->type, 0);
+       ASSERT_NE(-1, self->server);
+
+       addrlen = offsetof(struct sockaddr_un, sun_path) + variant->len;
+       memcpy(&addr.sun_path, variant->sun_path, variant->len);
+
+       err = bind(self->server, (struct sockaddr *)&addr, addrlen);
+       ASSERT_EQ(0, err);
+
+       if (variant->type == SOCK_STREAM) {
+               err = listen(self->server, 32);
+               ASSERT_EQ(0, err);
+       }
+
+       err = unshare(variant->flags);
+       ASSERT_EQ(0, err);
+
+       self->client = socket(self->family, variant->type, 0);
+       ASSERT_LT(0, self->client);
+
+       err = connect(self->client, (struct sockaddr *)&addr, addrlen);
+       ASSERT_EQ(variant->err, err == -1 ? errno : 0);
+}
+
+TEST_HARNESS_MAIN
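The variants rely on a subtle sockaddr_un detail: abstract names start with a NUL byte and the address length counts exactly the bytes used (no terminator), while pathname addresses include the trailing NUL; abstract names are also scoped to a network namespace, which is why the *_abstract_netns variants expect ECONNREFUSED after unshare(CLONE_NEWNET). Building both forms, as a sketch:

	#include <stddef.h>
	#include <string.h>
	#include <sys/un.h>

	static socklen_t fill_addr(struct sockaddr_un *un, int abstract)
	{
		un->sun_family = AF_UNIX;
		if (abstract) {
			memcpy(un->sun_path, "\0test", 5);  /* 5 bytes, no NUL */
			return offsetof(struct sockaddr_un, sun_path) + 5;
		}
		memcpy(un->sun_path, "test", 5);            /* "test" + NUL */
		return offsetof(struct sockaddr_un, sun_path) + 4 + 1;
	}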
index 8a69c91..8ccaf87 100644 (file)
@@ -2,7 +2,7 @@
 
 CLANG ?= clang
 CCINCLUDE += -I../../bpf
-CCINCLUDE += -I../../../lib
+CCINCLUDE += -I../../../../lib
 CCINCLUDE += -I../../../../../usr/include/
 
 TEST_CUSTOM_PROGS = $(OUTPUT)/bpf/nat6to4.o
index bc21629..75dd83e 100644 (file)
@@ -456,7 +456,7 @@ int main(int argc, char *argv[])
                buf[1] = 0;
        } else if (opt.sock.type == SOCK_RAW) {
                struct udphdr hdr = { 1, 2, htons(opt.size), 0 };
-               struct sockaddr_in6 *sin6 = (void *)ai->ai_addr;;
+               struct sockaddr_in6 *sin6 = (void *)ai->ai_addr;
 
                memcpy(buf, &hdr, sizeof(hdr));
                sin6->sin6_port = htons(opt.sock.proto);
index 54701c8..03b5867 100755 (executable)
@@ -70,6 +70,10 @@ NSB_LO_IP6=2001:db8:2::2
 NL_IP=172.17.1.1
 NL_IP6=2001:db8:4::1
 
+# multicast and broadcast addresses
+MCAST_IP=224.0.0.1
+BCAST_IP=255.255.255.255
+
 MD5_PW=abc123
 MD5_WRONG_PW=abc1234
 
@@ -308,6 +312,9 @@ addr2str()
        127.0.0.1) echo "loopback";;
        ::1) echo "IPv6 loopback";;
 
+       ${BCAST_IP}) echo "broadcast";;
+       ${MCAST_IP}) echo "multicast";;
+
        ${NSA_IP})      echo "ns-A IP";;
        ${NSA_IP6})     echo "ns-A IPv6";;
        ${NSA_LO_IP})   echo "ns-A loopback IP";;
@@ -1793,12 +1800,33 @@ ipv4_addr_bind_novrf()
        done
 
        #
-       # raw socket with nonlocal bind
+       # tests for nonlocal bind
        #
        a=${NL_IP}
        log_start
-       run_cmd nettest -s -R -P icmp -f -l ${a} -I ${NSA_DEV} -b
-       log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address after device bind"
+       run_cmd nettest -s -R -f -l ${a} -b
+       log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address"
+
+       log_start
+       run_cmd nettest -s -f -l ${a} -b
+       log_test_addr ${a} $? 0 "TCP socket bind to nonlocal address"
+
+       log_start
+       run_cmd nettest -s -D -P icmp -f -l ${a} -b
+       log_test_addr ${a} $? 0 "ICMP socket bind to nonlocal address"
+
+       #
+       # check that ICMP sockets cannot bind to broadcast and multicast addresses
+       #
+       a=${BCAST_IP}
+       log_start
+       run_cmd nettest -s -D -P icmp -l ${a} -b
+       log_test_addr ${a} $? 1 "ICMP socket bind to broadcast address"
+
+       a=${MCAST_IP}
+       log_start
+       run_cmd nettest -s -D -P icmp -l ${a} -b
+       log_test_addr ${a} $? 1 "ICMP socket bind to multicast address"
 
        #
        # tcp sockets
@@ -1850,13 +1878,34 @@ ipv4_addr_bind_vrf()
        log_test_addr ${a} $? 1 "Raw socket bind to out of scope address after VRF bind"
 
        #
-       # raw socket with nonlocal bind
+       # tests for nonlocal bind
        #
        a=${NL_IP}
        log_start
-       run_cmd nettest -s -R -P icmp -f -l ${a} -I ${VRF} -b
+       run_cmd nettest -s -R -f -l ${a} -I ${VRF} -b
        log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address after VRF bind"
 
+       log_start
+       run_cmd nettest -s -f -l ${a} -I ${VRF} -b
+       log_test_addr ${a} $? 0 "TCP socket bind to nonlocal address after VRF bind"
+
+       log_start
+       run_cmd nettest -s -D -P icmp -f -l ${a} -I ${VRF} -b
+       log_test_addr ${a} $? 0 "ICMP socket bind to nonlocal address after VRF bind"
+
+       #
+       # check that ICMP sockets cannot bind to broadcast and multicast addresses
+       #
+       a=${BCAST_IP}
+       log_start
+       run_cmd nettest -s -D -P icmp -l ${a} -I ${VRF} -b
+       log_test_addr ${a} $? 1 "ICMP socket bind to broadcast address after VRF bind"
+
+       a=${MCAST_IP}
+       log_start
+       run_cmd nettest -s -D -P icmp -l ${a} -I ${VRF} -b
+       log_test_addr ${a} $? 1 "ICMP socket bind to multicast address after VRF bind"
+
        #
        # tcp sockets
        #
@@ -1889,10 +1938,12 @@ ipv4_addr_bind()
 
        log_subsection "No VRF"
        setup
+       set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
        ipv4_addr_bind_novrf
 
        log_subsection "With VRF"
        setup "yes"
+       set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
        ipv4_addr_bind_vrf
 }
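Note: net.ipv4.ping_group_range is what authorizes unprivileged ICMP (ping) sockets; widening it to '0 2147483647' lets nettest open the IPPROTO_ICMP datagram sockets used by the '-D -P icmp' cases without CAP_NET_RAW.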
 
index bbe3b37..c245476 100755 (executable)
@@ -303,6 +303,29 @@ run_fibrule_tests()
        log_section "IPv6 fib rule"
        fib_rule6_test
 }
+################################################################################
+# usage
+
+usage()
+{
+       cat <<EOF
+usage: ${0##*/} OPTS
+
+        -t <test>   Test(s) to run (default: all)
+                    (options: $TESTS)
+EOF
+}
+
+################################################################################
+# main
+
+while getopts ":t:h" opt; do
+       case $opt in
+               t) TESTS=$OPTARG;;
+               h) usage; exit 0;;
+               *) usage; exit 1;;
+       esac
+done
 
 if [ "$(id -u)" -ne 0 ];then
        echo "SKIP: Need root privileges"
index 8f48121..669ffd6 100644 (file)
@@ -3,6 +3,7 @@
 TEST_PROGS = bridge_igmp.sh \
        bridge_locked_port.sh \
        bridge_mdb.sh \
+       bridge_mdb_port_down.sh \
        bridge_mld.sh \
        bridge_port_isolation.sh \
        bridge_sticky_fdb.sh \
diff --git a/tools/testing/selftests/net/forwarding/bridge_mdb_port_down.sh b/tools/testing/selftests/net/forwarding/bridge_mdb_port_down.sh
new file mode 100755 (executable)
index 0000000..1a0480e
--- /dev/null
@@ -0,0 +1,118 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Verify that permanent mdb entries can be added to and deleted from bridge
+# ports that are down, and that forwarding works correctly once the port is
+# brought back up.
+
+ALL_TESTS="add_del_to_port_down"
+NUM_NETIFS=4
+
+TEST_GROUP="239.10.10.10"
+TEST_GROUP_MAC="01:00:5e:0a:0a:0a"
+
+source lib.sh
+
+
+add_del_to_port_down() {
+       RET=0
+
+       ip link set dev $swp2 down
+       bridge mdb add dev br0 port "$swp2" grp $TEST_GROUP permanent 2>/dev/null
+       check_err $? "Failed adding mdb entry"
+
+       ip link set dev $swp2 up
+       setup_wait_dev $swp2
+       mcast_packet_test $TEST_GROUP_MAC 192.0.2.1 $TEST_GROUP $h1 $h2
+       check_fail $? "Traffic to $TEST_GROUP wasn't forwarded"
+
+       ip link set dev $swp2 down
+       bridge mdb show dev br0 | grep -q "$TEST_GROUP permanent" 2>/dev/null
+       check_err $? "MDB entry did not persist after link up/down"
+
+       bridge mdb del dev br0 port "$swp2" grp $TEST_GROUP 2>/dev/null
+       check_err $? "Failed deleting mdb entry"
+
+       ip link set dev $swp2 up
+       setup_wait_dev $swp2
+       mcast_packet_test $TEST_GROUP_MAC 192.0.2.1 $TEST_GROUP $h1 $h2
+       check_err $? "Traffic to $TEST_GROUP was forwarded after entry removed"
+
+       log_test "MDB add/del entry to port with state down "
+}
+
+h1_create()
+{
+       simple_if_init $h1 192.0.2.1/24 2001:db8:1::1/64
+}
+
+h1_destroy()
+{
+       simple_if_fini $h1 192.0.2.1/24 2001:db8:1::1/64
+}
+
+h2_create()
+{
+       simple_if_init $h2 192.0.2.2/24 2001:db8:1::2/64
+}
+
+h2_destroy()
+{
+       simple_if_fini $h2 192.0.2.2/24 2001:db8:1::2/64
+}
+
+switch_create()
+{
+       # Enable multicast filtering
+       ip link add dev br0 type bridge mcast_snooping 1 mcast_querier 1
+
+       ip link set dev $swp1 master br0
+       ip link set dev $swp2 master br0
+
+       ip link set dev br0 up
+       ip link set dev $swp1 up
+
+       bridge link set dev $swp2 mcast_flood off
+       # The bridge currently has a "grace time" after creation before it
+       # forwards multicast according to the mdb. Since we disable the
+       # per-port mcast_flood setting, wait out that grace period.
+       sleep 10
+}
+
+switch_destroy()
+{
+       ip link set dev $swp1 down
+       ip link set dev $swp2 down
+       ip link del dev br0
+}
+
+setup_prepare()
+{
+       h1=${NETIFS[p1]}
+       swp1=${NETIFS[p2]}
+
+       swp2=${NETIFS[p3]}
+       h2=${NETIFS[p4]}
+
+       vrf_prepare
+
+       h1_create
+       h2_create
+       switch_create
+}
+
+cleanup()
+{
+       pre_cleanup
+
+       switch_destroy
+       h1_destroy
+       h2_destroy
+
+       vrf_cleanup
+}
+
+trap cleanup EXIT
+
+setup_prepare
+tests_run
+exit $EXIT_STATUS
index 4b42dfd..072faa7 100755 (executable)
@@ -11,6 +11,8 @@ NUM_NETIFS=2
 source lib.sh
 source ethtool_lib.sh
 
+TIMEOUT=$((WAIT_TIMEOUT * 1000)) # ms
+
 setup_prepare()
 {
        swp1=${NETIFS[p1]}
@@ -18,7 +20,7 @@ setup_prepare()
        swp3=$NETIF_NO_CABLE
 }
 
-ethtool_extended_state_check()
+ethtool_ext_state()
 {
        local dev=$1; shift
        local expected_ext_state=$1; shift
@@ -30,21 +32,27 @@ ethtool_extended_state_check()
                | sed -e 's/^[[:space:]]*//')
        ext_state=$(echo $ext_state | cut -d "," -f1)
 
-       [[ $ext_state == $expected_ext_state ]]
-       check_err $? "Expected \"$expected_ext_state\", got \"$ext_state\""
-
-       [[ $ext_substate == $expected_ext_substate ]]
-       check_err $? "Expected \"$expected_ext_substate\", got \"$ext_substate\""
+       if [[ $ext_state != $expected_ext_state ]]; then
+               echo "Expected \"$expected_ext_state\", got \"$ext_state\""
+               return 1
+       fi
+       if [[ $ext_substate != $expected_ext_substate ]]; then
+               echo "Expected \"$expected_ext_substate\", got \"$ext_substate\""
+               return 1
+       fi
 }
 
 autoneg()
 {
+       local msg
+
        RET=0
 
        ip link set dev $swp1 up
 
-       sleep 4
-       ethtool_extended_state_check $swp1 "Autoneg" "No partner detected"
+       msg=$(busywait $TIMEOUT ethtool_ext_state $swp1 \
+                       "Autoneg" "No partner detected")
+       check_err $? "$msg"
 
        log_test "Autoneg, No partner detected"
 
@@ -53,6 +61,8 @@ autoneg()
 
 autoneg_force_mode()
 {
+       local msg
+
        RET=0
 
        ip link set dev $swp1 up
@@ -65,12 +75,13 @@ autoneg_force_mode()
        ethtool_set $swp1 speed $speed1 autoneg off
        ethtool_set $swp2 speed $speed2 autoneg off
 
-       sleep 4
-       ethtool_extended_state_check $swp1 "Autoneg" \
-               "No partner detected during force mode"
+       msg=$(busywait $TIMEOUT ethtool_ext_state $swp1 \
+                       "Autoneg" "No partner detected during force mode")
+       check_err $? "$msg"
 
-       ethtool_extended_state_check $swp2 "Autoneg" \
-               "No partner detected during force mode"
+       msg=$(busywait $TIMEOUT ethtool_ext_state $swp2 \
+                       "Autoneg" "No partner detected during force mode")
+       check_err $? "$msg"
 
        log_test "Autoneg, No partner detected during force mode"
 
@@ -83,12 +94,14 @@ autoneg_force_mode()
 
 no_cable()
 {
+       local msg
+
        RET=0
 
        ip link set dev $swp3 up
 
-       sleep 1
-       ethtool_extended_state_check $swp3 "No cable"
+       msg=$(busywait $TIMEOUT ethtool_ext_state $swp3 "No cable")
+       check_err $? "$msg"
 
        log_test "No cable"
 
index 37ae49d..3ffb9d6 100755 (executable)
@@ -1240,6 +1240,7 @@ learning_test()
        # FDB entry was installed.
        bridge link set dev $br_port1 flood off
 
+       ip link set $host1_if promisc on
        tc qdisc add dev $host1_if ingress
        tc filter add dev $host1_if ingress protocol ip pref 1 handle 101 \
                flower dst_mac $mac action drop
@@ -1250,7 +1251,7 @@ learning_test()
        tc -j -s filter show dev $host1_if ingress \
                | jq -e ".[] | select(.options.handle == 101) \
                | select(.options.actions[0].stats.packets == 1)" &> /dev/null
-       check_fail $? "Packet reached second host when should not"
+       check_fail $? "Packet reached first host when should not"
 
        $MZ $host1_if -c 1 -p 64 -a $mac -t ip -q
        sleep 1
@@ -1289,6 +1290,7 @@ learning_test()
 
        tc filter del dev $host1_if ingress protocol ip pref 1 handle 101 flower
        tc qdisc del dev $host1_if ingress
+       ip link set $host1_if promisc off
 
        bridge link set dev $br_port1 flood on
 
@@ -1306,6 +1308,7 @@ flood_test_do()
 
        # Add an ACL on `host2_if` which will tell us whether the packet
        # was flooded to it or not.
+       ip link set $host2_if promisc on
        tc qdisc add dev $host2_if ingress
        tc filter add dev $host2_if ingress protocol ip pref 1 handle 101 \
                flower dst_mac $mac action drop
@@ -1323,6 +1326,7 @@ flood_test_do()
 
        tc filter del dev $host2_if ingress protocol ip pref 1 handle 101 flower
        tc qdisc del dev $host2_if ingress
+       ip link set $host2_if promisc off
 
        return $err
 }
index 9dd43d7..515859a 100755 (executable)
@@ -61,6 +61,39 @@ chk_msk_nr()
        __chk_nr "grep -c token:" $*
 }
 
+wait_msk_nr()
+{
+       local condition="grep -c token:"
+       local expected=$1
+       local timeout=20
+       local msg nr
+       local max=0
+       local i=0
+
+       shift 1
+       msg=$*
+
+       while [ $i -lt $timeout ]; do
+               nr=$(ss -inmHMN $ns | $condition)
+               [ $nr == $expected ] && break;
+               [ $nr -gt $max ] && max=$nr
+               i=$((i + 1))
+               sleep 1
+       done
+
+       printf "%-50s" "$msg"
+       if [ $i -ge $timeout ]; then
+               echo "[ fail ] timeout while expecting $expected max $max last $nr"
+               ret=$test_cnt
+       elif [ $nr != $expected ]; then
+               echo "[ fail ] expected $expected found $nr"
+               ret=$test_cnt
+       else
+               echo "[  ok  ]"
+       fi
+       test_cnt=$((test_cnt+1))
+}
+
 chk_msk_fallback_nr()
 {
                __chk_nr "grep -c fallback" $*
@@ -146,7 +179,7 @@ ip -n $ns link set dev lo up
 echo "a" | \
        timeout ${timeout_test} \
                ip netns exec $ns \
-                       ./mptcp_connect -p 10000 -l -t ${timeout_poll} \
+                       ./mptcp_connect -p 10000 -l -t ${timeout_poll} -w 20 \
                                0.0.0.0 >/dev/null &
 wait_local_port_listen $ns 10000
 chk_msk_nr 0 "no msk on netns creation"
@@ -155,7 +188,7 @@ chk_msk_listen 10000
 echo "b" | \
        timeout ${timeout_test} \
                ip netns exec $ns \
-                       ./mptcp_connect -p 10000 -r 0 -t ${timeout_poll} \
+                       ./mptcp_connect -p 10000 -r 0 -t ${timeout_poll} -w 20 \
                                127.0.0.1 >/dev/null &
 wait_connected $ns 10000
 chk_msk_nr 2 "after MPC handshake "
@@ -167,13 +200,13 @@ flush_pids
 echo "a" | \
        timeout ${timeout_test} \
                ip netns exec $ns \
-                       ./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} \
+                       ./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} -w 20 \
                                0.0.0.0 >/dev/null &
 wait_local_port_listen $ns 10001
 echo "b" | \
        timeout ${timeout_test} \
                ip netns exec $ns \
-                       ./mptcp_connect -p 10001 -r 0 -t ${timeout_poll} \
+                       ./mptcp_connect -p 10001 -r 0 -t ${timeout_poll} -w 20 \
                                127.0.0.1 >/dev/null &
 wait_connected $ns 10001
 chk_msk_fallback_nr 1 "check fallback"
@@ -184,7 +217,7 @@ for I in `seq 1 $NR_CLIENTS`; do
        echo "a" | \
                timeout ${timeout_test} \
                        ip netns exec $ns \
-                               ./mptcp_connect -p $((I+10001)) -l -w 10 \
+                               ./mptcp_connect -p $((I+10001)) -l -w 20 \
                                        -t ${timeout_poll} 0.0.0.0 >/dev/null &
 done
 wait_local_port_listen $ns $((NR_CLIENTS + 10001))
@@ -193,12 +226,11 @@ for I in `seq 1 $NR_CLIENTS`; do
        echo "b" | \
                timeout ${timeout_test} \
                        ip netns exec $ns \
-                               ./mptcp_connect -p $((I+10001)) -w 10 \
+                               ./mptcp_connect -p $((I+10001)) -w 20 \
                                        -t ${timeout_poll} 127.0.0.1 >/dev/null &
 done
-sleep 1.5
 
-chk_msk_nr $((NR_CLIENTS*2)) "many msk socket present"
+wait_msk_nr $((NR_CLIENTS*2)) "many msk socket present"
 flush_pids
 
 exit $ret
index 8628aa6..e2ea6c1 100644 (file)
@@ -265,7 +265,7 @@ static void sock_test_tcpulp(int sock, int proto, unsigned int line)
 static int sock_listen_mptcp(const char * const listenaddr,
                             const char * const port)
 {
-       int sock;
+       int sock = -1;
        struct addrinfo hints = {
                .ai_protocol = IPPROTO_TCP,
                .ai_socktype = SOCK_STREAM,
index 29f75e2..8672d89 100644 (file)
@@ -88,7 +88,7 @@ static void xgetaddrinfo(const char *node, const char *service,
 static int sock_listen_mptcp(const char * const listenaddr,
                             const char * const port)
 {
-       int sock;
+       int sock = -1;
        struct addrinfo hints = {
                .ai_protocol = IPPROTO_TCP,
                .ai_socktype = SOCK_STREAM,
index a4406b7..55efe2a 100755 (executable)
@@ -455,6 +455,12 @@ wait_mpj()
        done
 }
 
+kill_wait()
+{
+       kill $1 > /dev/null 2>&1
+       wait $1 2>/dev/null
+}
+
 pm_nl_set_limits()
 {
        local ns=$1
@@ -654,6 +660,11 @@ do_transfer()
 
        local port=$((10000 + TEST_COUNT - 1))
        local cappid
+       local userspace_pm=0
+       local evts_ns1
+       local evts_ns1_pid
+       local evts_ns2
+       local evts_ns2_pid
 
        :> "$cout"
        :> "$sout"
@@ -690,10 +701,29 @@ do_transfer()
                extra_args="-r ${speed:6}"
        fi
 
+       if [[ "${addr_nr_ns1}" = "userspace_"* ]]; then
+               userspace_pm=1
+               addr_nr_ns1=${addr_nr_ns1:10}
+       fi
+
        if [[ "${addr_nr_ns2}" = "fastclose_"* ]]; then
                # disconnect
                extra_args="$extra_args -I ${addr_nr_ns2:10}"
                addr_nr_ns2=0
+       elif [[ "${addr_nr_ns2}" = "userspace_"* ]]; then
+               userspace_pm=1
+               addr_nr_ns2=${addr_nr_ns2:10}
+       fi
+
+       if [ $userspace_pm -eq 1 ]; then
+               evts_ns1=$(mktemp)
+               evts_ns2=$(mktemp)
+               :> "$evts_ns1"
+               :> "$evts_ns2"
+               ip netns exec ${listener_ns} ./pm_nl_ctl events >> "$evts_ns1" 2>&1 &
+               evts_ns1_pid=$!
+               ip netns exec ${connector_ns} ./pm_nl_ctl events >> "$evts_ns2" 2>&1 &
+               evts_ns2_pid=$!
        fi
 
        local local_addr
@@ -748,6 +778,8 @@ do_transfer()
        if [ $addr_nr_ns1 -gt 0 ]; then
                local counter=2
                local add_nr_ns1=${addr_nr_ns1}
+               local id=10
+               local tk
                while [ $add_nr_ns1 -gt 0 ]; do
                        local addr
                        if is_v6 "${connect_addr}"; then
@@ -755,9 +787,18 @@ do_transfer()
                        else
                                addr="10.0.$counter.1"
                        fi
-                       pm_nl_add_endpoint $ns1 $addr flags signal
+                       if [ $userspace_pm -eq 0 ]; then
+                               pm_nl_add_endpoint $ns1 $addr flags signal
+                       else
+                               tk=$(sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns1")
+                               ip netns exec ${listener_ns} ./pm_nl_ctl ann $addr token $tk id $id
+                               sleep 1
+                               ip netns exec ${listener_ns} ./pm_nl_ctl rem token $tk id $id
+                       fi
+
                        counter=$((counter + 1))
                        add_nr_ns1=$((add_nr_ns1 - 1))
+                       id=$((id + 1))
                done
        elif [ $addr_nr_ns1 -lt 0 ]; then
                local rm_nr_ns1=$((-addr_nr_ns1))
@@ -804,6 +845,8 @@ do_transfer()
        if [ $addr_nr_ns2 -gt 0 ]; then
                local add_nr_ns2=${addr_nr_ns2}
                local counter=3
+               local id=20
+               local tk da dp sp
                while [ $add_nr_ns2 -gt 0 ]; do
                        local addr
                        if is_v6 "${connect_addr}"; then
@@ -811,9 +854,23 @@ do_transfer()
                        else
                                addr="10.0.$counter.2"
                        fi
-                       pm_nl_add_endpoint $ns2 $addr flags $flags
+                       if [ $userspace_pm -eq 0 ]; then
+                               pm_nl_add_endpoint $ns2 $addr flags $flags
+                       else
+                               tk=$(sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns2")
+                               da=$(sed -n 's/.*\(daddr4:\)\([0-9.]*\).*$/\2/p;q' "$evts_ns2")
+                               dp=$(sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns2")
+                               ip netns exec ${connector_ns} ./pm_nl_ctl csf lip $addr lid $id \
+                                                                       rip $da rport $dp token $tk
+                               sleep 1
+                               sp=$(grep "type:10" "$evts_ns2" |
+                                    sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
+                               ip netns exec ${connector_ns} ./pm_nl_ctl dsf lip $addr lport $sp \
+                                                                       rip $da rport $dp token $tk
+                       fi
                        counter=$((counter + 1))
                        add_nr_ns2=$((add_nr_ns2 - 1))
+                       id=$((id + 1))
                done
        elif [ $addr_nr_ns2 -lt 0 ]; then
                local rm_nr_ns2=$((-addr_nr_ns2))
@@ -890,6 +947,12 @@ do_transfer()
            kill $cappid
        fi
 
+       if [ $userspace_pm -eq 1 ]; then
+               kill_wait $evts_ns1_pid
+               kill_wait $evts_ns2_pid
+               rm -rf $evts_ns1 $evts_ns2
+       fi
+
        NSTAT_HISTORY=/tmp/${listener_ns}.nstat ip netns exec ${listener_ns} \
                nstat | grep Tcp > /tmp/${listener_ns}.out
        NSTAT_HISTORY=/tmp/${connector_ns}.nstat ip netns exec ${connector_ns} \
@@ -2810,6 +2873,25 @@ userspace_tests()
                chk_join_nr 0 0 0
                chk_rm_nr 0 0
        fi
+
+       # userspace pm add & remove address
+       if reset "userspace pm add & remove address"; then
+               set_userspace_pm $ns1
+               pm_nl_set_limits $ns2 1 1
+               run_tests $ns1 $ns2 10.0.1.1 0 userspace_1 0 slow
+               chk_join_nr 1 1 1
+               chk_add_nr 1 1
+               chk_rm_nr 1 1 invert
+       fi
+
+       # userspace pm create destroy subflow
+       if reset "userspace pm create destroy subflow"; then
+               set_userspace_pm $ns2
+               pm_nl_set_limits $ns1 0 1
+               run_tests $ns1 $ns2 10.0.1.1 0 0 userspace_1 slow
+               chk_join_nr 1 1 1
+               chk_rm_nr 0 1
+       fi
 }
 
 endpoint_tests()
index ac9a4d9..ae61f39 100644 (file)
@@ -136,7 +136,7 @@ static void xgetaddrinfo(const char *node, const char *service,
 static int sock_listen_mptcp(const char * const listenaddr,
                             const char * const port)
 {
-       int sock;
+       int sock = -1;
        struct addrinfo hints = {
                .ai_protocol = IPPROTO_TCP,
                .ai_socktype = SOCK_STREAM,
index 6a2f4b9..abddf4c 100644 (file)
@@ -31,7 +31,7 @@
 
 static void syntax(char *argv[])
 {
-       fprintf(stderr, "%s add|get|set|del|flush|dump|accept [<args>]\n", argv[0]);
+       fprintf(stderr, "%s add|ann|rem|csf|dsf|get|set|del|flush|dump|events|listen|accept [<args>]\n", argv[0]);
        fprintf(stderr, "\tadd [flags signal|subflow|backup|fullmesh] [id <nr>] [dev <name>] <ip>\n");
        fprintf(stderr, "\tann <local-ip> id <local-id> token <token> [port <local-port>] [dev <name>]\n");
        fprintf(stderr, "\trem id <local-id> token <token>\n");
@@ -39,7 +39,7 @@ static void syntax(char *argv[])
        fprintf(stderr, "\tdsf lip <local-ip> lport <local-port> rip <remote-ip> rport <remote-port> token <token>\n");
        fprintf(stderr, "\tdel <id> [<ip>]\n");
        fprintf(stderr, "\tget <id>\n");
-       fprintf(stderr, "\tset [<ip>] [id <nr>] flags [no]backup|[no]fullmesh [port <nr>]\n");
+       fprintf(stderr, "\tset [<ip>] [id <nr>] flags [no]backup|[no]fullmesh [port <nr>] [token <token>] [rip <ip>] [rport <port>]\n");
        fprintf(stderr, "\tflush\n");
        fprintf(stderr, "\tdump\n");
        fprintf(stderr, "\tlimits [<rcv addr max> <subflow max>]\n");
@@ -1279,7 +1279,10 @@ int set_flags(int fd, int pm_family, int argc, char *argv[])
        struct rtattr *rta, *nest;
        struct nlmsghdr *nh;
        u_int32_t flags = 0;
+       u_int32_t token = 0;
+       u_int16_t rport = 0;
        u_int16_t family;
+       void *rip = NULL;
        int nest_start;
        int use_id = 0;
        u_int8_t id;
@@ -1339,7 +1342,13 @@ int set_flags(int fd, int pm_family, int argc, char *argv[])
                error(1, 0, " missing flags keyword");
 
        for (; arg < argc; arg++) {
-               if (!strcmp(argv[arg], "flags")) {
+               if (!strcmp(argv[arg], "token")) {
+                       if (++arg >= argc)
+                               error(1, 0, " missing token value");
+
+                       /* token */
+                       token = atoi(argv[arg]);
+               } else if (!strcmp(argv[arg], "flags")) {
                        char *tok, *str;
 
                        /* flags */
@@ -1378,12 +1387,72 @@ int set_flags(int fd, int pm_family, int argc, char *argv[])
                        rta->rta_len = RTA_LENGTH(2);
                        memcpy(RTA_DATA(rta), &port, 2);
                        off += NLMSG_ALIGN(rta->rta_len);
+               } else if (!strcmp(argv[arg], "rport")) {
+                       if (++arg >= argc)
+                               error(1, 0, " missing remote port");
+
+                       rport = atoi(argv[arg]);
+               } else if (!strcmp(argv[arg], "rip")) {
+                       if (++arg >= argc)
+                               error(1, 0, " missing remote ip");
+
+                       rip = argv[arg];
                } else {
                        error(1, 0, "unknown keyword %s", argv[arg]);
                }
        }
        nest->rta_len = off - nest_start;
 
+       /* token */
+       if (token) {
+               rta = (void *)(data + off);
+               rta->rta_type = MPTCP_PM_ATTR_TOKEN;
+               rta->rta_len = RTA_LENGTH(4);
+               memcpy(RTA_DATA(rta), &token, 4);
+               off += NLMSG_ALIGN(rta->rta_len);
+       }
+
+       /* remote addr/port */
+       if (rip) {
+               nest_start = off;
+               nest = (void *)(data + off);
+               nest->rta_type = NLA_F_NESTED | MPTCP_PM_ATTR_ADDR_REMOTE;
+               nest->rta_len = RTA_LENGTH(0);
+               off += NLMSG_ALIGN(nest->rta_len);
+
+               /* addr data */
+               rta = (void *)(data + off);
+               if (inet_pton(AF_INET, rip, RTA_DATA(rta))) {
+                       family = AF_INET;
+                       rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR4;
+                       rta->rta_len = RTA_LENGTH(4);
+               } else if (inet_pton(AF_INET6, rip, RTA_DATA(rta))) {
+                       family = AF_INET6;
+                       rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR6;
+                       rta->rta_len = RTA_LENGTH(16);
+               } else {
+                       error(1, errno, "can't parse ip %s", (char *)rip);
+               }
+               off += NLMSG_ALIGN(rta->rta_len);
+
+               /* family */
+               rta = (void *)(data + off);
+               rta->rta_type = MPTCP_PM_ADDR_ATTR_FAMILY;
+               rta->rta_len = RTA_LENGTH(2);
+               memcpy(RTA_DATA(rta), &family, 2);
+               off += NLMSG_ALIGN(rta->rta_len);
+
+               if (rport) {
+                       rta = (void *)(data + off);
+                       rta->rta_type = MPTCP_PM_ADDR_ATTR_PORT;
+                       rta->rta_len = RTA_LENGTH(2);
+                       memcpy(RTA_DATA(rta), &rport, 2);
+                       off += NLMSG_ALIGN(rta->rta_len);
+               }
+
+               nest->rta_len = off - nest_start;
+       }
+
        do_nl_req(fd, nh, off, 0);
        return 0;
 }
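The attribute-packing pattern repeated above (place an rtattr at the current offset, set its type and RTA_LENGTH(), copy the payload, advance by the NLMSG_ALIGN'ed length) could be factored into a helper; a sketch of the idiom (hypothetical helper, not part of pm_nl_ctl):

	#include <string.h>
	#include <linux/rtnetlink.h>

	static int rta_put(char *buf, int off, unsigned short type,
			   const void *payload, unsigned short len)
	{
		struct rtattr *rta = (struct rtattr *)(buf + off);

		rta->rta_type = type;
		rta->rta_len = RTA_LENGTH(len);
		memcpy(RTA_DATA(rta), payload, len);
		return off + NLMSG_ALIGN(rta->rta_len);
	}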
index f441ff7..ffa13a9 100755 (executable)
@@ -12,6 +12,7 @@ timeout_test=$((timeout_poll * 2 + 1))
 test_cnt=1
 ret=0
 bail=0
+slack=50
 
 usage() {
        echo "Usage: $0 [ -b ] [ -c ] [ -d ]"
@@ -52,6 +53,7 @@ setup()
        cout=$(mktemp)
        capout=$(mktemp)
        size=$((2 * 2048 * 4096))
+
        dd if=/dev/zero of=$small bs=4096 count=20 >/dev/null 2>&1
        dd if=/dev/zero of=$large bs=4096 count=$((size / 4096)) >/dev/null 2>&1
 
@@ -104,6 +106,16 @@ setup()
        ip -net "$ns3" route add default via dead:beef:3::2
 
        ip netns exec "$ns3" ./pm_nl_ctl limits 1 1
+
+       # A debug build can measurably slow down the test program. We use a
+       # quite tight time limit on the run-time to ensure maximum B/W
+       # usage.
+       # Use kmemleak/lockdep/kasan/prove_locking presence as a rough
+       # estimate for this being a debug kernel and increase the
+       # maximum run-time accordingly. Observed run times for CI builds
+       # running selftests, including kbuild, were used to determine the
+       # amount of time to add.
+       grep -q ' kmemleak_init$\| lockdep_init$\| kasan_init$\| prove_locking$' /proc/kallsyms && slack=$((slack+550))
 }
 
 # $1: ns, $2: port
@@ -241,7 +253,7 @@ run_test()
 
        # mptcp_connect will do some sleeps to allow the mp_join handshake
        # completion (see mptcp_connect): 200ms on each side, add some slack
-       time=$((time + 450))
+       time=$((time + 400 + slack))
 
        printf "%-60s" "$msg"
        do_transfer $small $large $time
index 78d0bb6..3229725 100755 (executable)
@@ -37,6 +37,12 @@ rndh=$(stdbuf -o0 -e0 printf %x "$sec")-$(mktemp -u XXXXXX)
 ns1="ns1-$rndh"
 ns2="ns2-$rndh"
 
+kill_wait()
+{
+       kill $1 > /dev/null 2>&1
+       wait $1 2>/dev/null
+}
+
 cleanup()
 {
        echo "cleanup"
@@ -48,16 +54,16 @@ cleanup()
                kill -SIGUSR1 $client4_pid > /dev/null 2>&1
        fi
        if [ $server4_pid -ne 0 ]; then
-               kill $server4_pid > /dev/null 2>&1
+               kill_wait $server4_pid
        fi
        if [ $client6_pid -ne 0 ]; then
                kill -SIGUSR1 $client6_pid > /dev/null 2>&1
        fi
        if [ $server6_pid -ne 0 ]; then
-               kill $server6_pid > /dev/null 2>&1
+               kill_wait $server6_pid
        fi
        if [ $evts_pid -ne 0 ]; then
-               kill $evts_pid > /dev/null 2>&1
+               kill_wait $evts_pid
        fi
        local netns
        for netns in "$ns1" "$ns2" ;do
@@ -153,7 +159,7 @@ make_connection()
        sleep 1
 
        # Capture client/server attributes from MPTCP connection netlink events
-       kill $client_evts_pid
+       kill_wait $client_evts_pid
 
        local client_token
        local client_port
@@ -165,7 +171,7 @@ make_connection()
        client_port=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
        client_serverside=$(sed --unbuffered -n 's/.*\(server_side:\)\([[:digit:]]*\).*$/\2/p;q'\
                                      "$client_evts")
-       kill $server_evts_pid
+       kill_wait $server_evts_pid
        server_token=$(sed --unbuffered -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
        server_serverside=$(sed --unbuffered -n 's/.*\(server_side:\)\([[:digit:]]*\).*$/\2/p;q'\
                                      "$server_evts")
@@ -286,7 +292,7 @@ test_announce()
        verify_announce_event "$evts" "$ANNOUNCED" "$server4_token" "10.0.2.2"\
                              "$client_addr_id" "$new4_port"
 
-       kill $evts_pid
+       kill_wait $evts_pid
 
        # Capture events on the network namespace running the client
        :>"$evts"
@@ -321,7 +327,7 @@ test_announce()
        verify_announce_event "$evts" "$ANNOUNCED" "$client4_token" "10.0.2.1"\
                              "$server_addr_id" "$new4_port"
 
-       kill $evts_pid
+       kill_wait $evts_pid
        rm -f "$evts"
 }
 
@@ -416,7 +422,7 @@ test_remove()
        sleep 0.5
        verify_remove_event "$evts" "$REMOVED" "$server6_token" "$client_addr_id"
 
-       kill $evts_pid
+       kill_wait $evts_pid
 
        # Capture events on the network namespace running the client
        :>"$evts"
@@ -449,7 +455,7 @@ test_remove()
        sleep 0.5
        verify_remove_event "$evts" "$REMOVED" "$client6_token" "$server_addr_id"
 
-       kill $evts_pid
+       kill_wait $evts_pid
        rm -f "$evts"
 }
 
@@ -553,7 +559,7 @@ test_subflows()
                              "10.0.2.2" "$client4_port" "23" "$client_addr_id" "ns1" "ns2"
 
        # Delete the listener from the client ns, if one was created
-       kill $listener_pid > /dev/null 2>&1
+       kill_wait $listener_pid
 
        local sport
        sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts")
@@ -592,7 +598,7 @@ test_subflows()
                              "$client_addr_id" "ns1" "ns2"
 
        # Delete the listener from the client ns, if one was created
-       kill $listener_pid > /dev/null 2>&1
+       kill_wait $listener_pid
 
        sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts")
 
@@ -631,7 +637,7 @@ test_subflows()
                              "$client_addr_id" "ns1" "ns2"
 
        # Delete the listener from the client ns, if one was created
-       kill $listener_pid > /dev/null 2>&1
+       kill_wait $listener_pid
 
        sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts")
 
@@ -647,7 +653,7 @@ test_subflows()
        ip netns exec "$ns2" ./pm_nl_ctl rem id $client_addr_id token\
           "$client4_token" > /dev/null 2>&1
 
-       kill $evts_pid
+       kill_wait $evts_pid
 
        # Capture events on the network namespace running the client
        :>"$evts"
@@ -674,7 +680,7 @@ test_subflows()
                              "10.0.2.1" "$app4_port" "23" "$server_addr_id" "ns2" "ns1"
 
        # Delete the listener from the server ns, if one was created
-       kill $listener_pid> /dev/null 2>&1
+       kill_wait $listener_pid
 
        sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts")
 
@@ -713,7 +719,7 @@ test_subflows()
                              "$server_addr_id" "ns2" "ns1"
 
        # Delete the listener from the server ns, if one was created
-       kill $listener_pid > /dev/null 2>&1
+       kill_wait $listener_pid
 
        sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts")
 
@@ -750,7 +756,7 @@ test_subflows()
                              "10.0.2.2" "10.0.2.1" "$new4_port" "23" "$server_addr_id" "ns2" "ns1"
 
        # Delete the listener from the server ns, if one was created
-       kill $listener_pid > /dev/null 2>&1
+       kill_wait $listener_pid
 
        sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts")
 
@@ -766,14 +772,46 @@ test_subflows()
        ip netns exec "$ns1" ./pm_nl_ctl rem id $server_addr_id token\
           "$server4_token" > /dev/null 2>&1
 
-       kill $evts_pid
+       kill_wait $evts_pid
        rm -f "$evts"
 }
 
+test_prio()
+{
+       local count
+
+       # Send MP_PRIO signal from client to server machine
+       ip netns exec "$ns2" ./pm_nl_ctl set 10.0.1.2 port "$client4_port" flags backup token "$client4_token" rip 10.0.1.1 rport "$server4_port"
+       sleep 0.5
+
+       # Check TX
+       stdbuf -o0 -e0 printf "MP_PRIO TX                                                 \t"
+       count=$(ip netns exec "$ns2" nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}')
+       [ -z "$count" ] && count=0
+       if [ $count != 1 ]; then
+               stdbuf -o0 -e0 printf "[FAIL]\n"
+               exit 1
+       else
+               stdbuf -o0 -e0 printf "[OK]\n"
+       fi
+
+       # Check RX
+       stdbuf -o0 -e0 printf "MP_PRIO RX                                                 \t"
+       count=$(ip netns exec "$ns1" nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}')
+       [ -z "$count" ] && count=0
+       if [ $count != 1 ]; then
+               stdbuf -o0 -e0 printf "[FAIL]\n"
+               exit 1
+       else
+               stdbuf -o0 -e0 printf "[OK]\n"
+       fi
+}
+
 make_connection
 make_connection "v6"
 test_announce
 test_remove
 test_subflows
+test_prio
 
 exit 0
index 5d70b04..e71ec58 100644 (file)
@@ -235,6 +235,7 @@ FIXTURE_VARIANT(tls)
 {
        uint16_t tls_version;
        uint16_t cipher_type;
+       bool nopad;
 };
 
 FIXTURE_VARIANT_ADD(tls, 12_aes_gcm)
@@ -297,9 +298,17 @@ FIXTURE_VARIANT_ADD(tls, 13_aes_gcm_256)
        .cipher_type = TLS_CIPHER_AES_GCM_256,
 };
 
+FIXTURE_VARIANT_ADD(tls, 13_nopad)
+{
+       .tls_version = TLS_1_3_VERSION,
+       .cipher_type = TLS_CIPHER_AES_GCM_128,
+       .nopad = true,
+};
+
 FIXTURE_SETUP(tls)
 {
        struct tls_crypto_info_keys tls12;
+       int one = 1;
        int ret;
 
        tls_crypto_info_init(variant->tls_version, variant->cipher_type,
@@ -315,6 +324,12 @@ FIXTURE_SETUP(tls)
 
        ret = setsockopt(self->cfd, SOL_TLS, TLS_RX, &tls12, tls12.len);
        ASSERT_EQ(ret, 0);
+
+       if (variant->nopad) {
+               ret = setsockopt(self->cfd, SOL_TLS, TLS_RX_EXPECT_NO_PAD,
+                                (void *)&one, sizeof(one));
+               ASSERT_EQ(ret, 0);
+       }
 }
 
 FIXTURE_TEARDOWN(tls)
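TLS_RX_EXPECT_NO_PAD lets the receive side assume TLS 1.3 records carry no padding, sparing it the extra pass otherwise needed to locate the real record type behind possible pad bytes; if a padded record does arrive, the kernel is expected to fall back, just more slowly. Enabling it is the one-liner used in the setup above (sketch; needs <linux/tls.h>):

	int one = 1;

	if (setsockopt(cfd, SOL_TLS, TLS_RX_EXPECT_NO_PAD, &one, sizeof(one)))
		perror("setsockopt(TLS_RX_EXPECT_NO_PAD)");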
diff --git a/tools/testing/selftests/net/tun.c b/tools/testing/selftests/net/tun.c
new file mode 100644 (file)
index 0000000..fa83918
--- /dev/null
@@ -0,0 +1,162 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <linux/if.h>
+#include <linux/if_tun.h>
+#include <linux/netlink.h>
+#include <linux/rtnetlink.h>
+#include <sys/ioctl.h>
+#include <sys/socket.h>
+
+#include "../kselftest_harness.h"
+
+static int tun_attach(int fd, char *dev)
+{
+       struct ifreq ifr;
+
+       memset(&ifr, 0, sizeof(ifr));
+       strcpy(ifr.ifr_name, dev);
+       ifr.ifr_flags = IFF_ATTACH_QUEUE;
+
+       return ioctl(fd, TUNSETQUEUE, (void *) &ifr);
+}
+
+static int tun_detach(int fd, char *dev)
+{
+       struct ifreq ifr;
+
+       memset(&ifr, 0, sizeof(ifr));
+       strcpy(ifr.ifr_name, dev);
+       ifr.ifr_flags = IFF_DETACH_QUEUE;
+
+       return ioctl(fd, TUNSETQUEUE, (void *) &ifr);
+}
+
+static int tun_alloc(char *dev)
+{
+       struct ifreq ifr;
+       int fd, err;
+
+       fd = open("/dev/net/tun", O_RDWR);
+       if (fd < 0) {
+               fprintf(stderr, "can't open tun: %s\n", strerror(errno));
+               return fd;
+       }
+
+       memset(&ifr, 0, sizeof(ifr));
+       strcpy(ifr.ifr_name, dev);
+       ifr.ifr_flags = IFF_TAP | IFF_NAPI | IFF_MULTI_QUEUE;
+
+       err = ioctl(fd, TUNSETIFF, (void *) &ifr);
+       if (err < 0) {
+               fprintf(stderr, "can't TUNSETIFF: %s\n", strerror(errno));
+               close(fd);
+               return err;
+       }
+       strcpy(dev, ifr.ifr_name);
+       return fd;
+}
+
+static int tun_delete(char *dev)
+{
+       struct {
+               struct nlmsghdr  nh;
+               struct ifinfomsg ifm;
+               unsigned char    data[64];
+       } req;
+       struct rtattr *rta;
+       int ret, rtnl;
+
+       rtnl = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_ROUTE);
+       if (rtnl < 0) {
+               fprintf(stderr, "can't open rtnl: %s\n", strerror(errno));
+               return 1;
+       }
+
+       memset(&req, 0, sizeof(req));
+       req.nh.nlmsg_len = NLMSG_ALIGN(NLMSG_LENGTH(sizeof(req.ifm)));
+       req.nh.nlmsg_flags = NLM_F_REQUEST;
+       req.nh.nlmsg_type = RTM_DELLINK;
+
+       req.ifm.ifi_family = AF_UNSPEC;
+
+       rta = (struct rtattr *)(((char *)&req) + NLMSG_ALIGN(req.nh.nlmsg_len));
+       rta->rta_type = IFLA_IFNAME;
+       rta->rta_len = RTA_LENGTH(IFNAMSIZ);
+       req.nh.nlmsg_len += rta->rta_len;
+       memcpy(RTA_DATA(rta), dev, IFNAMSIZ);
+
+       ret = send(rtnl, &req, req.nh.nlmsg_len, 0);
+       if (ret < 0)
+               fprintf(stderr, "can't send: %s\n", strerror(errno));
+       ret = (unsigned int)ret != req.nh.nlmsg_len;
+
+       close(rtnl);
+       return ret;
+}
+
+FIXTURE(tun)
+{
+       char ifname[IFNAMSIZ];
+       int fd, fd2;
+};
+
+FIXTURE_SETUP(tun)
+{
+       memset(self->ifname, 0, sizeof(self->ifname));
+
+       self->fd = tun_alloc(self->ifname);
+       ASSERT_GE(self->fd, 0);
+
+       self->fd2 = tun_alloc(self->ifname);
+       ASSERT_GE(self->fd2, 0);
+}
+
+FIXTURE_TEARDOWN(tun)
+{
+       if (self->fd >= 0)
+               close(self->fd);
+       if (self->fd2 >= 0)
+               close(self->fd2);
+}
+
+TEST_F(tun, delete_detach_close) {
+       EXPECT_EQ(tun_delete(self->ifname), 0);
+       EXPECT_EQ(tun_detach(self->fd, self->ifname), -1);
+       EXPECT_EQ(errno, EINVAL);
+}
+
+TEST_F(tun, detach_delete_close) {
+       EXPECT_EQ(tun_detach(self->fd, self->ifname), 0);
+       EXPECT_EQ(tun_delete(self->ifname), 0);
+}
+
+TEST_F(tun, detach_close_delete) {
+       EXPECT_EQ(tun_detach(self->fd, self->ifname), 0);
+       close(self->fd);
+       self->fd = -1;
+       EXPECT_EQ(tun_delete(self->ifname), 0);
+}
+
+TEST_F(tun, reattach_delete_close) {
+       EXPECT_EQ(tun_detach(self->fd, self->ifname), 0);
+       EXPECT_EQ(tun_attach(self->fd, self->ifname), 0);
+       EXPECT_EQ(tun_delete(self->ifname), 0);
+}
+
+TEST_F(tun, reattach_close_delete) {
+       EXPECT_EQ(tun_detach(self->fd, self->ifname), 0);
+       EXPECT_EQ(tun_attach(self->fd, self->ifname), 0);
+       close(self->fd);
+       self->fd = -1;
+       EXPECT_EQ(tun_delete(self->ifname), 0);
+}
+
+TEST_HARNESS_MAIN
index f8a19f5..ebbd0b2 100755 (executable)
@@ -34,7 +34,7 @@ cfg_veth() {
        ip -netns "${PEER_NS}" addr add dev veth1 192.168.1.1/24
        ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad
        ip -netns "${PEER_NS}" link set dev veth1 up
-       ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+       ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
 }
 
 run_one() {
index 820bc50..fad2d1a 100755 (executable)
@@ -34,7 +34,7 @@ run_one() {
        ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad
        ip -netns "${PEER_NS}" link set dev veth1 up
 
-       ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+       ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
        ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${rx_args} -r &
        ip netns exec "${PEER_NS}" ./udpgso_bench_rx -t ${rx_args} -r &
 
index 807b74c..832c738 100755 (executable)
@@ -36,7 +36,7 @@ run_one() {
        ip netns exec "${PEER_NS}" ethtool -K veth1 rx-gro-list on
 
 
-       ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+       ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
        tc -n "${PEER_NS}" qdisc add dev veth1 clsact
        tc -n "${PEER_NS}" filter add dev veth1 ingress prio 4 protocol ipv6 bpf object-file ../bpf/nat6to4.o section schedcls/ingress6/nat_6  direct-action
        tc -n "${PEER_NS}" filter add dev veth1 egress prio 4 protocol ip bpf object-file ../bpf/nat6to4.o section schedcls/egress4/snat4 direct-action
index 6f05e06..1bcd82e 100755 (executable)
@@ -46,7 +46,7 @@ create_ns() {
                ip -n $BASE$ns addr add dev veth$ns $BM_NET_V4$ns/24
                ip -n $BASE$ns addr add dev veth$ns $BM_NET_V6$ns/64 nodad
        done
-       ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp_dummy 2>/dev/null
+       ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null
 }
 
 create_vxlan_endpoint() {
index 80b5d35..dc932fd 100755 (executable)
@@ -120,7 +120,7 @@ run_all() {
        run_udp "${ipv4_args}"
 
        echo "ipv6"
-       run_tcp "${ipv4_args}"
+       run_tcp "${ipv6_args}"
        run_udp "${ipv6_args}"
 }
 
index 19eac3e..430895d 100755 (executable)
@@ -289,14 +289,14 @@ if [ $CPUS -gt 1 ]; then
        ip netns exec $NS_SRC ethtool -L veth$SRC rx 1 tx 2 2>/dev/null
        printf "%-60s" "bad setting: XDP with RX nr less than TX"
        ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o \
-               section xdp_dummy 2>/dev/null &&\
+               section xdp 2>/dev/null &&\
                echo "fail - set operation successful ?!?" || echo " ok "
 
        # the following tests will run with multiple channels active
        ip netns exec $NS_SRC ethtool -L veth$SRC rx 2
        ip netns exec $NS_DST ethtool -L veth$DST rx 2
        ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o \
-               section xdp_dummy 2>/dev/null
+               section xdp 2>/dev/null
        printf "%-60s" "bad setting: reducing RX nr below peer TX with XDP set"
        ip netns exec $NS_DST ethtool -L veth$DST rx 1 2>/dev/null &&\
                echo "fail - set operation successful ?!?" || echo " ok "
@@ -311,7 +311,7 @@ if [ $CPUS -gt 2 ]; then
        chk_channels "setting invalid channels nr" $DST 2 2
 fi
 
-ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o section xdp_dummy 2>/dev/null
+ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null
 chk_gro_flag "with xdp attached - gro flag" $DST on
 chk_gro_flag "        - peer gro flag" $SRC off
 chk_tso_flag "        - tso flag" $SRC off
index b35010c..a699187 100755 (executable)
@@ -31,7 +31,7 @@ BUGS="flush_remove_add reload"
 
 # List of possible paths to pktgen script from kernel tree for performance tests
 PKTGEN_SCRIPT_PATHS="
-       ../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
+       ../../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
        pktgen/pktgen_bench_xmit_mode_netif_receive.sh"
 
 # Definition of set types:
index d52f65d..9fe1cef 100644 (file)
@@ -1,7 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 __pycache__/
 *.pyc
-plugins/
 *.xml
 *.tap
 tdc_config_local.py
index b24494c..c652e8c 100644 (file)
         "teardown": [
             "$TC actions flush action gact"
         ]
+    },
+    {
+        "id": "7f52",
+        "name": "Try to flush action which is referenced by filter",
+        "category": [
+            "actions",
+            "gact"
+        ],
+        "plugins": {
+            "requires": "nsPlugin"
+        },
+        "setup": [
+            [
+                "$TC actions flush action gact",
+                0,
+                1,
+                255
+            ],
+            "$TC qdisc add dev $DEV1 ingress",
+            "$TC actions add action pass index 1",
+            "$TC filter add dev $DEV1 protocol all ingress prio 1 handle 0x1234 matchall action gact index 1"
+        ],
+        "cmdUnderTest": "$TC actions flush action gact",
+        "expExitCode": "1",
+        "verifyCmd": "$TC actions ls action gact",
+        "matchPattern": "total acts 1.*action order [0-9]*: gact action pass.*index 1 ref 2 bind 1",
+        "matchCount": "1",
+        "teardown": [
+            "$TC qdisc del dev $DEV1 ingress",
+            [
+                "sleep 1; $TC actions flush action gact",
+                0,
+                1
+            ]
+        ]
+    },
+    {
+        "id": "ae1e",
+        "name": "Try to flush actions when last one is referenced by filter",
+        "category": [
+            "actions",
+            "gact"
+        ],
+        "plugins": {
+            "requires": "nsPlugin"
+        },
+        "setup": [
+            [
+                "$TC actions flush action gact",
+                0,
+                1,
+                255
+            ],
+            "$TC qdisc add dev $DEV1 ingress",
+            [
+                "$TC actions add action pass index 1",
+                0,
+                1,
+                255
+            ],
+            "$TC actions add action reclassify index 2",
+            "$TC actions add action drop index 3",
+            "$TC filter add dev $DEV1 protocol all ingress prio 1 handle 0x1234 matchall action gact index 3"
+        ],
+        "cmdUnderTest": "$TC actions flush action gact",
+        "expExitCode": "0",
+        "verifyCmd": "$TC actions ls action gact",
+        "matchPattern": "total acts 1.*action order [0-9]*: gact action drop.*index 3 ref 2 bind 1",
+        "matchCount": "1",
+        "teardown": [
+            "$TC qdisc del dev $DEV1 ingress",
+            [
+                "sleep 1; $TC actions flush action gact",
+                0,
+                1
+            ]
+        ]
     }
 ]
index 6bb36ca..a309876 100644 (file)
@@ -209,7 +209,7 @@ int main(int argc, char **argv)
        if (write)
                gup.gup_flags |= FOLL_WRITE;
 
-       gup_fd = open("/sys/kernel/debug/gup_test", O_RDWR);
+       gup_fd = open(GUP_TEST_FILE, O_RDWR);
        if (gup_fd == -1) {
                switch (errno) {
                case EACCES:
@@ -224,7 +224,7 @@ int main(int argc, char **argv)
                        printf("check if CONFIG_GUP_TEST is enabled in kernel config\n");
                        break;
                default:
-                       perror("failed to open /sys/kernel/debug/gup_test");
+                       perror("failed to open " GUP_TEST_FILE);
                        break;
                }
                exit(KSFT_SKIP);
index 2fcf243..f5e4e0b 100644 (file)
@@ -54,6 +54,7 @@ static int ksm_write_sysfs(const char *file_path, unsigned long val)
        }
        if (fprintf(f, "%lu", val) < 0) {
                perror("fprintf");
+               fclose(f);
                return 1;
        }
        fclose(f);
@@ -72,6 +73,7 @@ static int ksm_read_sysfs(const char *file_path, unsigned long *val)
        }
        if (fscanf(f, "%lu", val) != 1) {
                perror("fscanf");
+               fclose(f);
                return 1;
        }
        fclose(f);
index 7d1b809..9700358 100644 (file)
@@ -19,8 +19,6 @@ endif
 MIRROR := https://download.wireguard.com/qemu-test/distfiles/
 
 KERNEL_BUILD_PATH := $(BUILD_PATH)/kernel$(if $(findstring yes,$(DEBUG_KERNEL)),-debug)
-rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
-WIREGUARD_SOURCES := $(call rwildcard,$(KERNEL_PATH)/drivers/net/wireguard/,*)
 
 default: qemu
 
@@ -109,20 +107,22 @@ CHOST := x86_64-linux-musl
 QEMU_ARCH := x86_64
 KERNEL_ARCH := x86_64
 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/arch/x86/boot/bzImage
+QEMU_VPORT_RESULT := virtio-serial-device
 ifeq ($(HOST_ARCH),$(ARCH))
-QEMU_MACHINE := -cpu host -machine q35,accel=kvm
+QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off -no-acpi
 else
-QEMU_MACHINE := -cpu max -machine q35
+QEMU_MACHINE := -cpu max -machine microvm -no-acpi
 endif
 else ifeq ($(ARCH),i686)
 CHOST := i686-linux-musl
 QEMU_ARCH := i386
 KERNEL_ARCH := x86
 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/arch/x86/boot/bzImage
+QEMU_VPORT_RESULT := virtio-serial-device
 ifeq ($(subst x86_64,i686,$(HOST_ARCH)),$(ARCH))
-QEMU_MACHINE := -cpu host -machine q35,accel=kvm
+QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off -no-acpi
 else
-QEMU_MACHINE := -cpu max -machine q35
+QEMU_MACHINE := -cpu coreduo -machine microvm -no-acpi
 endif
 else ifeq ($(ARCH),mips64)
 CHOST := mips64-linux-musl
@@ -208,10 +208,11 @@ QEMU_ARCH := m68k
 KERNEL_ARCH := m68k
 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/vmlinux
 KERNEL_CMDLINE := $(shell sed -n 's/CONFIG_CMDLINE=\(.*\)/\1/p' arch/m68k.config)
+QEMU_VPORT_RESULT := virtio-serial-device
 ifeq ($(HOST_ARCH),$(ARCH))
-QEMU_MACHINE := -cpu host,accel=kvm -machine q800 -append $(KERNEL_CMDLINE)
+QEMU_MACHINE := -cpu host,accel=kvm -machine virt -append $(KERNEL_CMDLINE)
 else
-QEMU_MACHINE := -machine q800 -smp 1 -append $(KERNEL_CMDLINE)
+QEMU_MACHINE := -machine virt -smp 1 -append $(KERNEL_CMDLINE)
 endif
 else ifeq ($(ARCH),riscv64)
 CHOST := riscv64-linux-musl
@@ -322,8 +323,9 @@ $(KERNEL_BUILD_PATH)/.config: $(TOOLCHAIN_PATH)/.installed kernel.config arch/$(
        cd $(KERNEL_BUILD_PATH) && ARCH=$(KERNEL_ARCH) $(KERNEL_PATH)/scripts/kconfig/merge_config.sh -n $(KERNEL_BUILD_PATH)/.config $(KERNEL_BUILD_PATH)/minimal.config
        $(if $(findstring yes,$(DEBUG_KERNEL)),cp debug.config $(KERNEL_BUILD_PATH) && cd $(KERNEL_BUILD_PATH) && ARCH=$(KERNEL_ARCH) $(KERNEL_PATH)/scripts/kconfig/merge_config.sh -n $(KERNEL_BUILD_PATH)/.config debug.config,)
 
-$(KERNEL_BZIMAGE): $(TOOLCHAIN_PATH)/.installed $(KERNEL_BUILD_PATH)/.config $(BUILD_PATH)/init-cpio-spec.txt $(IPERF_PATH)/src/iperf3 $(IPUTILS_PATH)/ping $(BASH_PATH)/bash $(IPROUTE2_PATH)/misc/ss $(IPROUTE2_PATH)/ip/ip $(IPTABLES_PATH)/iptables/xtables-legacy-multi $(NMAP_PATH)/ncat/ncat $(WIREGUARD_TOOLS_PATH)/src/wg $(BUILD_PATH)/init ../netns.sh $(WIREGUARD_SOURCES)
+$(KERNEL_BZIMAGE): $(TOOLCHAIN_PATH)/.installed $(KERNEL_BUILD_PATH)/.config $(BUILD_PATH)/init-cpio-spec.txt $(IPERF_PATH)/src/iperf3 $(IPUTILS_PATH)/ping $(BASH_PATH)/bash $(IPROUTE2_PATH)/misc/ss $(IPROUTE2_PATH)/ip/ip $(IPTABLES_PATH)/iptables/xtables-legacy-multi $(NMAP_PATH)/ncat/ncat $(WIREGUARD_TOOLS_PATH)/src/wg $(BUILD_PATH)/init
        $(MAKE) -C $(KERNEL_PATH) O=$(KERNEL_BUILD_PATH) ARCH=$(KERNEL_ARCH) CROSS_COMPILE=$(CROSS_COMPILE)
+.PHONY: $(KERNEL_BZIMAGE)
 
 $(TOOLCHAIN_PATH)/$(CHOST)/include/linux/.installed: | $(KERNEL_BUILD_PATH)/.config $(TOOLCHAIN_PATH)/.installed
        rm -rf $(TOOLCHAIN_PATH)/$(CHOST)/include/linux
index fc7959b..0579c66 100644 (file)
@@ -7,6 +7,7 @@ CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
 CONFIG_VIRTIO_MENU=y
 CONFIG_VIRTIO_MMIO=y
 CONFIG_VIRTIO_CONSOLE=y
+CONFIG_COMPAT_32BIT_TIME=y
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1"
 CONFIG_FRAME_WARN=1024
index f3066be..2a3307b 100644 (file)
@@ -7,6 +7,7 @@ CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
 CONFIG_VIRTIO_MENU=y
 CONFIG_VIRTIO_MMIO=y
 CONFIG_VIRTIO_CONSOLE=y
+CONFIG_COMPAT_32BIT_TIME=y
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1"
 CONFIG_CPU_BIG_ENDIAN=y
index 6d90892..35b0650 100644 (file)
@@ -1,6 +1,10 @@
-CONFIG_ACPI=y
 CONFIG_SERIAL_8250=y
 CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_VIRTIO_MENU=y
+CONFIG_VIRTIO_MMIO=y
+CONFIG_VIRTIO_CONSOLE=y
+CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
+CONFIG_COMPAT_32BIT_TIME=y
 CONFIG_CMDLINE_BOOL=y
-CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1"
+CONFIG_CMDLINE="console=ttyS0 wg.success=vport0p1 panic_on_warn=1 reboot=t"
 CONFIG_FRAME_WARN=1024
index 82c925e..39c48cb 100644 (file)
@@ -1,9 +1,7 @@
 CONFIG_MMU=y
+CONFIG_VIRT=y
 CONFIG_M68KCLASSIC=y
-CONFIG_M68040=y
-CONFIG_MAC=y
-CONFIG_SERIAL_PMACZILOG=y
-CONFIG_SERIAL_PMACZILOG_TTYS=y
-CONFIG_SERIAL_PMACZILOG_CONSOLE=y
-CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1"
+CONFIG_VIRTIO_CONSOLE=y
+CONFIG_COMPAT_32BIT_TIME=y
+CONFIG_CMDLINE="console=ttyGF0 wg.success=vport0p1 panic_on_warn=1"
 CONFIG_FRAME_WARN=1024
index d7ec63c..2a84402 100644 (file)
@@ -6,6 +6,7 @@ CONFIG_POWER_RESET=y
 CONFIG_POWER_RESET_SYSCON=y
 CONFIG_SERIAL_8250=y
 CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_COMPAT_32BIT_TIME=y
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1"
 CONFIG_FRAME_WARN=1024
index 18a4982..56146a1 100644 (file)
@@ -7,6 +7,7 @@ CONFIG_POWER_RESET=y
 CONFIG_POWER_RESET_SYSCON=y
 CONFIG_SERIAL_8250=y
 CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_COMPAT_32BIT_TIME=y
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1"
 CONFIG_FRAME_WARN=1024
index 5e04882..174a9ff 100644 (file)
@@ -4,6 +4,7 @@ CONFIG_PPC_85xx=y
 CONFIG_PHYS_64BIT=y
 CONFIG_SERIAL_8250=y
 CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_COMPAT_32BIT_TIME=y
 CONFIG_MATH_EMULATION=y
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1"
index efa0069..cf2d137 100644 (file)
@@ -1,6 +1,9 @@
-CONFIG_ACPI=y
 CONFIG_SERIAL_8250=y
 CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_VIRTIO_MENU=y
+CONFIG_VIRTIO_MMIO=y
+CONFIG_VIRTIO_CONSOLE=y
+CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
 CONFIG_CMDLINE_BOOL=y
-CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1"
+CONFIG_CMDLINE="console=ttyS0 wg.success=vport0p1 panic_on_warn=1 reboot=t"
 CONFIG_FRAME_WARN=1280
index c9e1284..3e49924 100644 (file)
@@ -11,6 +11,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <fcntl.h>
+#include <time.h>
 #include <sys/wait.h>
 #include <sys/mount.h>
 #include <sys/stat.h>
@@ -70,6 +71,16 @@ static void seed_rng(void)
        close(fd);
 }
 
+static void set_time(void)
+{
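+       /* A clock reading zero means no RTC set the time; fake a fixed date. */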
+       if (time(NULL))
+               return;
+       pretty_message("[+] Setting fake time...");
+       if (stime(&(time_t){1433512680}) < 0)
+               panic("stime()");
+}
+
 static void mount_filesystems(void)
 {
        pretty_message("[+] Mounting filesystems...");
@@ -259,6 +269,7 @@ int main(int argc, char *argv[])
        print_banner();
        mount_filesystems();
        seed_rng();
+       set_time();
        kmod_selftests();
        enable_logging();
        clear_leaks();