Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox...
author Jakub Kicinski <kuba@kernel.org>
Wed, 15 Jun 2022 02:09:39 +0000 (19:09 -0700)
committer Jakub Kicinski <kuba@kernel.org>
Wed, 15 Jun 2022 02:09:39 +0000 (19:09 -0700)
Saeed Mahameed says:

====================
mlx5-next: updates 2022-06-14

1) Updated HW bits and definitions for upcoming features
 1.1) vport debug counters
 1.2) flow meter
 1.3) Execute ASO action for flow entry
 1.4) enhanced CQE compression

2) Add ICM header-modify-pattern RDMA API

Leon says:
=========

SW steering manipulates a packet's headers using "modify header" actions.
Many of these actions perform the same operations but use different data
each time. Currently we create and keep every one of these actions, which
consumes expensive and limited resources.

Now we introduce a new mechanism, pattern and argument, which splits
a modify-header action into two parts:
1. Action pattern: contains the operations to be applied to the packet's
header, mainly set/add/copy of fields in the packet.
2. Action data/argument: contains the data to be used by each operation
in the pattern.

This way we reuse the same patterns with different arguments to create new
modify actions, and since many actions share the same operations, we end
up creating a small number of patterns that we keep in a dedicated cache.

These modify-header patterns are implemented as a new type of ICM memory,
so the following kernel patch series adds support for this new ICM type.
==========

* 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux:
  net/mlx5: Add bits and fields to support enhanced CQE compression
  net/mlx5: Remove not used MLX5_CAP_BITS_RW_MASK
  net/mlx5: group fdb cleanup to single function
  net/mlx5: Add support EXECUTE_ASO action for flow entry
  net/mlx5: Add HW definitions of vport debug counters
  net/mlx5: Add IFC bits and enums for flow meter
  RDMA/mlx5: Support handling of modify-header pattern ICM area
  net/mlx5: Manage ICM of type modify-header pattern
  net/mlx5: Introduce header-modify-pattern ICM properties
====================

Link: https://lore.kernel.org/r/20220614184028.51548-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
598 files changed:
Documentation/ABI/testing/sysfs-ata
Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
Documentation/arm/tcm.rst
Documentation/arm64/sme.rst
Documentation/devicetree/bindings/clock/idt,versaclock5.yaml
Documentation/devicetree/bindings/cpufreq/brcm,stb-avs-cpu-freq.txt
Documentation/devicetree/bindings/display/arm,malidp.yaml
Documentation/devicetree/bindings/display/msm/dpu-sc7180.yaml
Documentation/devicetree/bindings/display/msm/dpu-sc7280.yaml
Documentation/devicetree/bindings/display/msm/dpu-sdm845.yaml
Documentation/devicetree/bindings/display/msm/dsi-controller-main.yaml
Documentation/devicetree/bindings/display/msm/dsi-phy-10nm.yaml
Documentation/devicetree/bindings/display/msm/dsi-phy-14nm.yaml
Documentation/devicetree/bindings/display/msm/dsi-phy-20nm.yaml
Documentation/devicetree/bindings/display/msm/dsi-phy-28nm.yaml
Documentation/devicetree/bindings/display/msm/dsi-phy-common.yaml
Documentation/devicetree/bindings/hwmon/vexpress.txt
Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra186-mc.yaml
Documentation/devicetree/bindings/mfd/maxim,max77714.yaml
Documentation/devicetree/bindings/mmc/brcm,sdhci-brcmstb.yaml
Documentation/devicetree/bindings/mmc/marvell,xenon-sdhci.yaml
Documentation/devicetree/bindings/nvme/apple,nvme-ans.yaml
Documentation/devicetree/bindings/phy/phy-stih407-usb.txt
Documentation/devicetree/bindings/phy/qcom,qmp-usb3-dp-phy.yaml
Documentation/devicetree/bindings/phy/qcom,qusb2-phy.yaml
Documentation/devicetree/bindings/phy/qcom,usb-snps-femto-v2.yaml
Documentation/devicetree/bindings/pinctrl/pinctrl-rk805.txt
Documentation/devicetree/bindings/pinctrl/ralink,mt7620-pinctrl.yaml
Documentation/devicetree/bindings/pinctrl/ralink,rt305x-pinctrl.yaml
Documentation/devicetree/bindings/power/supply/maxim,max77976.yaml
Documentation/devicetree/bindings/regulator/qcom,usb-vbus-regulator.yaml
Documentation/devicetree/bindings/regulator/vexpress.txt
Documentation/devicetree/bindings/usb/dwc3-st.txt
Documentation/devicetree/bindings/usb/ehci-st.txt
Documentation/devicetree/bindings/usb/ohci-st.txt
Documentation/devicetree/bindings/usb/qcom,dwc3.yaml
Documentation/devicetree/bindings/vendor-prefixes.yaml
Documentation/devicetree/bindings/watchdog/allwinner,sun4i-a10-wdt.yaml
Documentation/driver-api/hte/hte.rst [new file with mode: 0644]
Documentation/driver-api/hte/index.rst [new file with mode: 0644]
Documentation/driver-api/hte/tegra194-hte.rst [new file with mode: 0644]
Documentation/driver-api/index.rst
Documentation/features/core/cBPF-JIT/arch-support.txt
Documentation/features/core/eBPF-JIT/arch-support.txt
Documentation/features/core/generic-idle-thread/arch-support.txt
Documentation/features/core/jump-labels/arch-support.txt
Documentation/features/core/thread-info-in-task/arch-support.txt
Documentation/features/core/tracehook/arch-support.txt
Documentation/features/debug/KASAN/arch-support.txt
Documentation/features/debug/debug-vm-pgtable/arch-support.txt
Documentation/features/debug/gcov-profile-all/arch-support.txt
Documentation/features/debug/kcov/arch-support.txt
Documentation/features/debug/kgdb/arch-support.txt
Documentation/features/debug/kmemleak/arch-support.txt
Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
Documentation/features/debug/kprobes/arch-support.txt
Documentation/features/debug/kretprobes/arch-support.txt
Documentation/features/debug/optprobes/arch-support.txt
Documentation/features/debug/stackprotector/arch-support.txt
Documentation/features/debug/uprobes/arch-support.txt
Documentation/features/debug/user-ret-profiler/arch-support.txt
Documentation/features/io/dma-contiguous/arch-support.txt
Documentation/features/locking/cmpxchg-local/arch-support.txt
Documentation/features/locking/lockdep/arch-support.txt
Documentation/features/locking/queued-rwlocks/arch-support.txt
Documentation/features/locking/queued-spinlocks/arch-support.txt
Documentation/features/perf/kprobes-event/arch-support.txt
Documentation/features/perf/perf-regs/arch-support.txt
Documentation/features/perf/perf-stackdump/arch-support.txt
Documentation/features/sched/membarrier-sync-core/arch-support.txt
Documentation/features/sched/numa-balancing/arch-support.txt
Documentation/features/seccomp/seccomp-filter/arch-support.txt
Documentation/features/time/arch-tick-broadcast/arch-support.txt
Documentation/features/time/clockevents/arch-support.txt
Documentation/features/time/context-tracking/arch-support.txt
Documentation/features/time/irq-time-acct/arch-support.txt
Documentation/features/time/virt-cpuacct/arch-support.txt
Documentation/features/vm/ELF-ASLR/arch-support.txt
Documentation/features/vm/PG_uncached/arch-support.txt
Documentation/features/vm/THP/arch-support.txt
Documentation/features/vm/TLB/arch-support.txt
Documentation/features/vm/huge-vmap/arch-support.txt
Documentation/features/vm/ioremap_prot/arch-support.txt
Documentation/features/vm/pte_special/arch-support.txt
Documentation/filesystems/netfs_library.rst
Documentation/hte/hte.rst [deleted file]
Documentation/hte/index.rst [deleted file]
Documentation/hte/tegra194-hte.rst [deleted file]
Documentation/index.rst
Documentation/networking/tls.rst
Documentation/usb/usbmon.rst
MAINTAINERS
Makefile
arch/arm/include/asm/xen/xen-ops.h [new file with mode: 0644]
arch/arm/mm/dma-mapping.c
arch/arm/xen/enlighten.c
arch/arm64/include/asm/sysreg.h
arch/arm64/include/asm/xen/xen-ops.h [new file with mode: 0644]
arch/arm64/kernel/fpsimd.c
arch/arm64/kernel/mte.c
arch/arm64/mm/dma-mapping.c
arch/arm64/net/bpf_jit_comp.c
arch/arm64/tools/gen-sysreg.awk
arch/powerpc/Kconfig
arch/powerpc/include/asm/thread_info.h
arch/powerpc/kernel/Makefile
arch/powerpc/kernel/process.c
arch/powerpc/kernel/ptrace/ptrace-fpu.c
arch/powerpc/kernel/ptrace/ptrace.c
arch/powerpc/kernel/rtas.c
arch/powerpc/kexec/crash.c
arch/powerpc/mm/nohash/kaslr_booke.c
arch/powerpc/platforms/powernv/Makefile
arch/powerpc/platforms/pseries/papr_scm.c
arch/s390/Kconfig
arch/s390/Makefile
arch/s390/mm/init.c
arch/x86/Kconfig
arch/x86/include/asm/kvm_host.h
arch/x86/include/asm/uaccess.h
arch/x86/kvm/mmu/mmu.c
arch/x86/kvm/mmu/tdp_iter.c
arch/x86/kvm/mmu/tdp_iter.h
arch/x86/kvm/mmu/tdp_mmu.c
arch/x86/kvm/svm/nested.c
arch/x86/kvm/svm/svm.c
arch/x86/kvm/svm/svm.h
arch/x86/kvm/vmx/vmx.c
arch/x86/kvm/x86.c
arch/x86/kvm/xen.h
arch/x86/mm/mem_encrypt.c
arch/x86/mm/mem_encrypt_amd.c
arch/x86/xen/enlighten_hvm.c
arch/x86/xen/enlighten_pv.c
certs/Makefile
certs/extract-cert.c
drivers/ata/libata-core.c
drivers/ata/libata-scsi.c
drivers/ata/libata-transport.c
drivers/ata/pata_octeon_cf.c
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
drivers/gpu/drm/amd/amdgpu/nv.c
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
drivers/gpu/drm/amd/amdkfd/kfd_crat.c
drivers/gpu/drm/amd/amdkfd/kfd_device.c
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.h
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
drivers/gpu/drm/amd/display/dc/dc.h
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c
drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c
drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
drivers/gpu/drm/amd/display/dc/link/link_hwss_hpo_dp.c
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h
drivers/gpu/drm/amd/display/include/ddc_service_types.h
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_pptable.h
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_pptable.h
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_7_pptable.h
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_pptable.h
drivers/gpu/drm/ast/ast_dp.c
drivers/gpu/drm/ast/ast_dp501.c
drivers/gpu/drm/ast/ast_drv.h
drivers/gpu/drm/ast/ast_main.c
drivers/gpu/drm/ast/ast_mode.c
drivers/gpu/drm/ast/ast_post.c
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
drivers/gpu/drm/bridge/ti-sn65dsi83.c
drivers/gpu/drm/drm_atomic_helper.c
drivers/gpu/drm/imx/ipuv3-crtc.c
drivers/gpu/drm/panfrost/panfrost_drv.c
drivers/gpu/drm/panfrost/panfrost_job.c
drivers/gpu/drm/panfrost/panfrost_job.h
drivers/idle/intel_idle.c
drivers/input/joystick/Kconfig
drivers/input/misc/soc_button_array.c
drivers/input/mouse/bcm5974.c
drivers/mmc/core/block.c
drivers/mmc/host/sdhci-pci-gli.c
drivers/net/amt.c
drivers/net/bonding/bond_main.c
drivers/net/bonding/bond_netlink.c
drivers/net/bonding/bond_options.c
drivers/net/dsa/lantiq_gswip.c
drivers/net/dsa/microchip/ksz8.h
drivers/net/dsa/mv88e6xxx/serdes.c
drivers/net/dsa/realtek/rtl8365mb.c
drivers/net/eql.c
drivers/net/ethernet/altera/altera_tse_main.c
drivers/net/ethernet/altera/altera_utils.h
drivers/net/ethernet/amd/au1000_eth.c
drivers/net/ethernet/amd/au1000_eth.h
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
drivers/net/ethernet/broadcom/bgmac-bcma-mdio.c
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
drivers/net/ethernet/cadence/macb_ptp.c
drivers/net/ethernet/huawei/hinic/hinic_sriov.c
drivers/net/ethernet/intel/e1000/e1000_hw.c
drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
drivers/net/ethernet/intel/i40e/i40e.h
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
drivers/net/ethernet/intel/i40e/i40e_main.c
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
drivers/net/ethernet/intel/iavf/iavf.h
drivers/net/ethernet/intel/iavf/iavf_main.c
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
drivers/net/ethernet/intel/ice/ice_gnss.c
drivers/net/ethernet/intel/ice/ice_lib.c
drivers/net/ethernet/intel/ice/ice_sriov.c
drivers/net/ethernet/intel/ice/ice_virtchnl.c
drivers/net/ethernet/intel/igb/e1000_defines.h
drivers/net/ethernet/intel/igb/e1000_regs.h
drivers/net/ethernet/intel/ixgb/ixgb_hw.c
drivers/net/ethernet/intel/ixgbe/ixgbe.h
drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
drivers/net/ethernet/mediatek/mtk_eth_soc.c
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
drivers/net/ethernet/mellanox/mlx5/core/dev.c
drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
drivers/net/ethernet/mellanox/mlx5/core/en/params.c
drivers/net/ethernet/mellanox/mlx5/core/en_common.c
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
drivers/net/ethernet/netronome/nfp/flower/action.c
drivers/net/ethernet/netronome/nfp/flower/cmsg.h
drivers/net/ethernet/netronome/nfp/flower/conntrack.c
drivers/net/ethernet/netronome/nfp/flower/match.c
drivers/net/ethernet/netronome/nfp/nfd3/dp.c
drivers/net/ethernet/netronome/nfp/nfd3/rings.c
drivers/net/ethernet/netronome/nfp/nfd3/xsk.c
drivers/net/ethernet/netronome/nfp/nfdk/dp.c
drivers/net/ethernet/netronome/nfp/nfdk/rings.c
drivers/net/ethernet/netronome/nfp/nfp_net.h
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
drivers/net/ethernet/netronome/nfp/nfp_net_dp.h
drivers/net/ethernet/netronome/nfp/nfp_net_sriov.c
drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c
drivers/net/ethernet/netronome/nfp/nfpcore/crc32.h
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_dev.c
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
drivers/net/ipa/gsi.c
drivers/net/ipa/gsi.h
drivers/net/ipa/gsi_private.h
drivers/net/ipa/gsi_trans.c
drivers/net/ipa/ipa_cmd.c
drivers/net/ipa/ipa_endpoint.c
drivers/net/ipa/ipa_endpoint.h
drivers/net/ipvlan/ipvlan.h
drivers/net/ipvlan/ipvlan_core.c
drivers/net/ipvlan/ipvlan_main.c
drivers/net/macsec.c
drivers/net/macvlan.c
drivers/net/netconsole.c
drivers/net/phy/dp83867.c
drivers/net/phy/fixed_phy.c
drivers/net/phy/mdio_bus.c
drivers/net/team/team.c
drivers/net/usb/smsc95xx.c
drivers/net/usb/usbnet.c
drivers/net/vmxnet3/Makefile
drivers/net/vmxnet3/upt1_defs.h
drivers/net/vmxnet3/vmxnet3_defs.h
drivers/net/vmxnet3/vmxnet3_drv.c
drivers/net/vmxnet3/vmxnet3_ethtool.c
drivers/net/vmxnet3/vmxnet3_int.h
drivers/net/vrf.c
drivers/net/vxlan/vxlan_core.c
drivers/net/wan/farsync.h
drivers/net/wireguard/receive.c
drivers/net/wireless/mac80211_hwsim.c
drivers/net/wireless/microchip/wilc1000/cfg80211.c
drivers/net/wireless/microchip/wilc1000/fw.h
drivers/net/wireless/microchip/wilc1000/hif.c
drivers/net/wireless/microchip/wilc1000/hif.h
drivers/net/wireless/microchip/wilc1000/netdev.c
drivers/net/wireless/microchip/wilc1000/netdev.h
drivers/net/wireless/microchip/wilc1000/spi.c
drivers/net/wireless/microchip/wilc1000/wlan.c
drivers/net/wireless/microchip/wilc1000/wlan.h
drivers/net/wireless/microchip/wilc1000/wlan_if.h
drivers/net/wireless/ray_cs.c
drivers/net/wireless/realtek/rtlwifi/debug.c
drivers/net/wireless/realtek/rtw88/debug.c
drivers/net/wireless/realtek/rtw88/main.c
drivers/net/wireless/realtek/rtw88/rtw8723d.c
drivers/net/wireless/realtek/rtw88/rtw8723d.h
drivers/net/wireless/realtek/rtw88/rtw8723de.c
drivers/net/wireless/realtek/rtw88/rtw8723de.h [deleted file]
drivers/net/wireless/realtek/rtw88/rtw8821c.c
drivers/net/wireless/realtek/rtw88/rtw8821c.h
drivers/net/wireless/realtek/rtw88/rtw8821ce.c
drivers/net/wireless/realtek/rtw88/rtw8821ce.h [deleted file]
drivers/net/wireless/realtek/rtw88/rtw8822b.c
drivers/net/wireless/realtek/rtw88/rtw8822b.h
drivers/net/wireless/realtek/rtw88/rtw8822be.c
drivers/net/wireless/realtek/rtw88/rtw8822be.h [deleted file]
drivers/net/wireless/realtek/rtw88/rtw8822c.c
drivers/net/wireless/realtek/rtw88/rtw8822c.h
drivers/net/wireless/realtek/rtw88/rtw8822ce.c
drivers/net/wireless/realtek/rtw88/rtw8822ce.h [deleted file]
drivers/net/wireless/realtek/rtw89/cam.c
drivers/net/wireless/realtek/rtw89/cam.h
drivers/net/wireless/realtek/rtw89/core.c
drivers/net/wireless/realtek/rtw89/core.h
drivers/net/wireless/realtek/rtw89/debug.c
drivers/net/wireless/realtek/rtw89/debug.h
drivers/net/wireless/realtek/rtw89/fw.c
drivers/net/wireless/realtek/rtw89/fw.h
drivers/net/wireless/realtek/rtw89/mac.c
drivers/net/wireless/realtek/rtw89/mac.h
drivers/net/wireless/realtek/rtw89/mac80211.c
drivers/net/wireless/realtek/rtw89/pci.c
drivers/net/wireless/realtek/rtw89/pci.h
drivers/net/wireless/realtek/rtw89/phy.c
drivers/net/wireless/realtek/rtw89/phy.h
drivers/net/wireless/realtek/rtw89/rtw8852c.c
drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.h
drivers/net/wireless/realtek/rtw89/sar.c
drivers/net/wireless/silabs/wfx/fwio.c
drivers/net/wireless/st/cw1200/bh.c
drivers/net/wireless/virt_wifi.c
drivers/net/xen-netback/common.h
drivers/net/xen-netback/interface.c
drivers/net/xen-netback/netback.c
drivers/net/xen-netback/rx.c
drivers/nfc/nfcmrvl/usb.c
drivers/nfc/st21nfca/se.c
drivers/platform/mips/Kconfig
drivers/ptp/ptp_ocp.c
drivers/virtio/Kconfig
drivers/virtio/virtio.c
drivers/xen/Kconfig
drivers/xen/Makefile
drivers/xen/grant-dma-iommu.c [new file with mode: 0644]
drivers/xen/grant-dma-ops.c [new file with mode: 0644]
drivers/xen/grant-table.c
drivers/xen/xlate_mmu.c
fs/9p/cache.c
fs/9p/v9fs.c
fs/9p/v9fs.h
fs/9p/vfs_addr.c
fs/9p/vfs_inode.c
fs/afs/callback.c
fs/afs/dir.c
fs/afs/dir_edit.c
fs/afs/dir_silly.c
fs/afs/dynroot.c
fs/afs/file.c
fs/afs/fs_operation.c
fs/afs/inode.c
fs/afs/internal.h
fs/afs/super.c
fs/afs/write.c
fs/ceph/addr.c
fs/ceph/cache.c
fs/ceph/cache.h
fs/ceph/caps.c
fs/ceph/file.c
fs/ceph/inode.c
fs/ceph/mds_client.c
fs/ceph/snap.c
fs/ceph/super.c
fs/ceph/super.h
fs/ceph/xattr.c
fs/cifs/cifsfs.c
fs/cifs/cifsglob.h
fs/cifs/file.c
fs/cifs/fscache.c
fs/cifs/fscache.h
fs/cifs/inode.c
fs/cifs/misc.c
fs/cifs/smb2ops.c
fs/ext2/inode.c
fs/fs-writeback.c
fs/inode.c
fs/netfs/buffered_read.c
fs/netfs/internal.h
fs/netfs/objects.c
fs/quota/dquot.c
fs/zonefs/super.c
include/asm-generic/Kbuild
include/asm-generic/platform-feature.h [new file with mode: 0644]
include/linux/ieee80211.h
include/linux/if_macvlan.h
include/linux/if_team.h
include/linux/if_vlan.h
include/linux/libata.h
include/linux/netdevice.h
include/linux/netfs.h
include/linux/platform-feature.h [new file with mode: 0644]
include/linux/skbuff.h
include/linux/socket.h
include/linux/virtio_config.h
include/net/bond_options.h
include/net/dropreason.h [new file with mode: 0644]
include/net/flow_offload.h
include/net/ip_tunnels.h
include/net/ipv6.h
include/net/mac80211.h
include/net/netfilter/nf_tables.h
include/net/netfilter/nf_tables_offload.h
include/net/sock.h
include/net/tcp.h
include/net/udp.h
include/net/xfrm.h
include/trace/events/skb.h
include/uapi/linux/nl80211.h
include/uapi/linux/tls.h
include/xen/arm/xen-ops.h [new file with mode: 0644]
include/xen/grant_table.h
include/xen/xen-ops.h
include/xen/xen.h
init/Kconfig
kernel/Makefile
kernel/bpf/btf.c
kernel/dma/debug.c
kernel/dma/swiotlb.c
kernel/entry/kvm.c
kernel/platform-feature.c [new file with mode: 0644]
kernel/reboot.c
kernel/trace/bpf_trace.c
net/6lowpan/nhc.c
net/6lowpan/nhc.h
net/6lowpan/nhc_dest.c
net/6lowpan/nhc_fragment.c
net/6lowpan/nhc_ghc_ext_dest.c
net/6lowpan/nhc_ghc_ext_frag.c
net/6lowpan/nhc_ghc_ext_hop.c
net/6lowpan/nhc_ghc_ext_route.c
net/6lowpan/nhc_ghc_icmpv6.c
net/6lowpan/nhc_ghc_udp.c
net/6lowpan/nhc_hop.c
net/6lowpan/nhc_ipv6.c
net/6lowpan/nhc_mobility.c
net/6lowpan/nhc_routing.c
net/6lowpan/nhc_udp.c
net/8021q/vlan_core.c
net/8021q/vlan_dev.c
net/ax25/af_ax25.c
net/ax25/ax25_dev.c
net/bridge/br_if.c
net/bridge/br_netlink.c
net/bridge/br_vlan.c
net/core/.gitignore [new file with mode: 0644]
net/core/Makefile
net/core/datagram.c
net/core/dev.c
net/core/dev_ioctl.c
net/core/devlink.c
net/core/drop_monitor.c
net/core/dst.c
net/core/failover.c
net/core/flow_offload.c
net/core/link_watch.c
net/core/neighbour.c
net/core/net-sysfs.c
net/core/netpoll.c
net/core/pktgen.c
net/core/skbuff.c
net/core/sock.c
net/core/stream.c
net/decnet/af_decnet.c
net/dsa/slave.c
net/ethtool/ioctl.c
net/ethtool/netlink.c
net/ethtool/netlink.h
net/ipv4/af_inet.c
net/ipv4/devinet.c
net/ipv4/fib_semantics.c
net/ipv4/inet_hashtables.c
net/ipv4/ip_gre.c
net/ipv4/ipmr.c
net/ipv4/route.c
net/ipv4/tcp.c
net/ipv4/tcp_input.c
net/ipv4/tcp_ipv4.c
net/ipv4/tcp_output.c
net/ipv4/tcp_timer.c
net/ipv4/udp.c
net/ipv4/udplite.c
net/ipv4/xfrm4_policy.c
net/ipv4/xfrm4_protocol.c
net/ipv6/addrconf.c
net/ipv6/addrconf_core.c
net/ipv6/ip6_gre.c
net/ipv6/ip6_output.c
net/ipv6/ip6_tunnel.c
net/ipv6/ip6_vti.c
net/ipv6/ip6mr.c
net/ipv6/route.c
net/ipv6/seg6_hmac.c
net/ipv6/seg6_local.c
net/ipv6/sit.c
net/ipv6/tcp_ipv6.c
net/ipv6/udp.c
net/ipv6/udplite.c
net/ipv6/xfrm6_policy.c
net/iucv/af_iucv.c
net/l2tp/l2tp_ip6.c
net/llc/af_llc.c
net/mac80211/cfg.c
net/mac80211/ieee80211_i.h
net/mac80211/iface.c
net/mac80211/key.c
net/mac80211/key.h
net/mac80211/main.c
net/mac80211/mesh_hwmp.c
net/mac80211/mlme.c
net/mac80211/rx.c
net/mac80211/sta_info.h
net/mac80211/tx.c
net/mac80211/util.c
net/mac80211/wpa.c
net/mac80211/wpa.h
net/mptcp/protocol.c
net/netfilter/nf_tables_api.c
net/netfilter/nf_tables_offload.c
net/netfilter/nft_nat.c
net/openvswitch/actions.c
net/openvswitch/conntrack.c
net/openvswitch/vport-netdev.c
net/packet/af_packet.c
net/sched/act_mirred.c
net/sched/sch_api.c
net/sched/sch_generic.c
net/sctp/protocol.c
net/sctp/sm_statefuns.c
net/sctp/socket.c
net/sctp/stream_interleave.c
net/sctp/ulpqueue.c
net/smc/smc_pnet.c
net/socket.c
net/switchdev/switchdev.c
net/tipc/bearer.c
net/tls/tls_main.c
net/unix/af_unix.c
net/xdp/xsk.c
net/xdp/xsk_queue.h
net/xfrm/xfrm_device.c
scripts/sign-file.c
security/keys/trusted-keys/trusted_tpm2.c
sound/hda/hdac_device.c
sound/pci/hda/hda_intel.c
sound/pci/hda/patch_conexant.c
sound/pci/hda/patch_hdmi.c
sound/pci/hda/patch_realtek.c
sound/soc/codecs/cs35l36.c
sound/soc/codecs/cs42l51.c
sound/soc/codecs/cs42l52.c
sound/soc/codecs/cs42l56.c
sound/soc/codecs/cs53l30.c
sound/soc/codecs/es8328.c
sound/soc/codecs/nau8822.c
sound/soc/codecs/nau8822.h
sound/soc/codecs/wm8962.c
sound/soc/codecs/wm_adsp.c
sound/soc/fsl/fsl_sai.c
sound/soc/intel/boards/sof_cirrus_common.c
sound/soc/qcom/lpass-platform.c
sound/soc/sof/sof-audio.c
sound/soc/sof/sof-client-ipc-msg-injector.c
sound/usb/pcm.c
sound/usb/quirks-table.h
tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
tools/testing/selftests/bpf/progs/freplace_global_func.c [new file with mode: 0644]
tools/testing/selftests/kvm/x86_64/hyperv_clock.c
tools/testing/selftests/net/bpf/Makefile
tools/testing/selftests/netfilter/nft_nat.sh
virt/kvm/kvm_main.c

index 2f726c9..3daecac 100644 (file)
@@ -107,13 +107,14 @@ Description:
                                described in ATA8 7.16 and 7.17. Only valid if
                                the device is not a PM.
 
-               pio_mode:       (RO) Transfer modes supported by the device when
-                               in PIO mode. Mostly used by PATA device.
+               pio_mode:       (RO) PIO transfer mode used by the device.
+                               Mostly used by PATA devices.
 
-               xfer_mode:      (RO) Current transfer mode
+               xfer_mode:      (RO) Current transfer mode. Mostly used by
+                               PATA devices.
 
-               dma_mode:       (RO) Transfer modes supported by the device when
-                               in DMA mode. Mostly used by PATA device.
+               dma_mode:       (RO) DMA transfer mode used by the device.
+                               Mostly used by PATA devices.
 
                class:          (RO) Device class. Can be "ata" for disk,
                                "atapi" for packet device, "pmp" for PM, or
index 42214b4..90596d8 100644 (file)
@@ -26,6 +26,6 @@ Description:  Read/write the current state of DDR Backup Mode, which controls
                     DDR Backup Mode must be explicitly enabled by the user,
                     to invoke step 1.
 
-               See also Documentation/devicetree/bindings/mfd/bd9571mwv.txt.
+               See also Documentation/devicetree/bindings/mfd/rohm,bd9571mwv.yaml.
 Users:         User space applications for embedded boards equipped with a
                BD9571MWV PMIC.
index b256f97..1dc6c39 100644 (file)
@@ -34,7 +34,7 @@ CPU so it is usually wise not to overlap any physical RAM with
 the TCM.
 
 The TCM memory can then be remapped to another address again using
-the MMU, but notice that the TCM if often used in situations where
+the MMU, but notice that the TCM is often used in situations where
 the MMU is turned off. To avoid confusion the current Linux
 implementation will map the TCM 1 to 1 from physical to virtual
 memory in the location specified by the kernel. Currently Linux
index 8ba677b..937147f 100644 (file)
@@ -371,7 +371,7 @@ The regset data starts with struct user_za_header, containing:
 Appendix A.  SME programmer's model (informative)
 =================================================
 
-This section provides a minimal description of the additions made by SVE to the
+This section provides a minimal description of the additions made by SME to the
 ARMv8-A programmer's model that are relevant to this document.
 
 Note: This section is for information only and not intended to be complete or
index be66f1e..7c331bf 100644 (file)
@@ -45,7 +45,7 @@ description: |
   The case where SH and SP are both 1 is likely not very interesting.
 
 maintainers:
-  - Luca Ceresoli <luca@lucaceresoli.net>
+  - Luca Ceresoli <luca.ceresoli@bootlin.com>
 
 properties:
   compatible:
index 73470ec..ce91a91 100644 (file)
@@ -16,7 +16,7 @@ has been processed. See [2] for more information on the brcm,l2-intc node.
 firmware. On some SoCs, this firmware supports DFS and DVFS in addition to
 Adaptive Voltage Scaling.
 
-[2] Documentation/devicetree/bindings/interrupt-controller/brcm,l2-intc.txt
+[2] Documentation/devicetree/bindings/interrupt-controller/brcm,l2-intc.yaml
 
 
 Node brcm,avs-cpu-data-mem
index 795a08a..2a17ec6 100644 (file)
@@ -71,11 +71,6 @@ properties:
       - description: number of output lines for the green channel (G)
       - description: number of output lines for the blue channel (B)
 
-  arm,malidp-arqos-high-level:
-    $ref: /schemas/types.yaml#/definitions/uint32
-    description:
-      integer describing the ARQoS levels of DP500's QoS signaling
-
   arm,malidp-arqos-value:
     $ref: /schemas/types.yaml#/definitions/uint32
     description:
@@ -113,7 +108,7 @@ examples:
         clocks = <&oscclk2>, <&fpgaosc0>, <&fpgaosc1>, <&fpgaosc1>;
         clock-names = "pxlclk", "mclk", "aclk", "pclk";
         arm,malidp-output-port-lines = /bits/ 8 <8 8 8>;
-        arm,malidp-arqos-high-level = <0xd000d000>;
+        arm,malidp-arqos-value = <0xd000d000>;
 
         port {
             dp0_output: endpoint {
index b41991e..d3c3e4b 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display DPU dt properties for SC7180 target
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 description: |
   Device tree bindings for MSM Mobile Display Subsystem(MDSS) that encapsulates
index 6e417d0..f427eec 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display DPU dt properties for SC7280
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 description: |
   Device tree bindings for MSM Mobile Display Subsystem (MDSS) that encapsulates
index 1a42491..2bb8896 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display DPU dt properties for SDM845 target
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 description: |
   Device tree bindings for MSM Mobile Display Subsystem(MDSS) that encapsulates
index 7095ec3..880bfe9 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display DSI controller
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 allOf:
   - $ref: "../dsi-controller.yaml#"
index 2d5a766..716f921 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display DSI 10nm PHY
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 allOf:
   - $ref: dsi-phy-common.yaml#
index 81dbee4..1342d74 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display DSI 14nm PHY
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 allOf:
   - $ref: dsi-phy-common.yaml#
index b8de785..9c1f914 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display DSI 20nm PHY
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 allOf:
   - $ref: dsi-phy-common.yaml#
index 69eecaa..3d8540a 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display DSI 28nm PHY
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 allOf:
   - $ref: dsi-phy-common.yaml#
index 502bdda..76d40f7 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Description of Qualcomm Display DSI PHY common dt properties
 
 maintainers:
-  - Krishna Manikandan <mkrishn@codeaurora.org>
+  - Krishna Manikandan <quic_mkrishn@quicinc.com>
 
 description: |
   This defines the DSI PHY dt properties which are common for all
index 9c27ed6..4a4df4f 100644 (file)
@@ -9,7 +9,7 @@ Requires node properties:
        "arm,vexpress-power"
        "arm,vexpress-energy"
 - "arm,vexpress-sysreg,func" when controlled via vexpress-sysreg
-  (see Documentation/devicetree/bindings/arm/vexpress-sysreg.txt
+  (see Documentation/devicetree/bindings/arm/vexpress-config.yaml
   for more details)
 
 Optional node properties:
diff --git a/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
new file mode 100644 (file)
index 0000000..be1539d
--- /dev/null
@@ -0,0 +1,39 @@
+# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/iommu/xen,grant-dma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xen specific IOMMU for virtualized devices (e.g. virtio)
+
+maintainers:
+  - Stefano Stabellini <sstabellini@kernel.org>
+
+description:
+  The Xen IOMMU represents the Xen grant table interface. Grant mappings
+  are to be used with devices connected to the Xen IOMMU using the "iommus"
+  property, which also specifies the ID of the backend domain.
+  The binding is required to restrict memory access using Xen grant mappings.
+
+properties:
+  compatible:
+    const: xen,grant-dma
+
+  '#iommu-cells':
+    const: 1
+    description:
+      The single cell is the domid (domain ID) of the domain where the backend
+      is running.
+
+required:
+  - compatible
+  - "#iommu-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    iommu {
+        compatible = "xen,grant-dma";
+        #iommu-cells = <1>;
+    };
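[Editor's aside] The binding above only shows the IOMMU node itself. A consumer device references it through the ``iommus`` property, whose single cell carries the backend domid. A hypothetical virtio-mmio consumer (node name, addresses, and the domid 1 are illustrative, not taken from the binding) might look like:

```dts
/* Hypothetical consumer: a virtio-mmio device whose backend runs in domain 1 */
virtio@2000000 {
    compatible = "virtio,mmio";
    reg = <0x2000000 0x200>;
    interrupts = <0x1>;
    iommus = <&xen_iommu 1>;   /* &xen_iommu labels the xen,grant-dma node */
};
```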
index c7cfa6c..935d63d 100644 (file)
@@ -150,7 +150,6 @@ allOf:
           description: 5 memory controller channels and 1 for stream-id registers
 
         reg-names:
-          maxItems: 6
           items:
             - const: sid
             - const: broadcast
@@ -170,7 +169,6 @@ allOf:
           description: 17 memory controller channels and 1 for stream-id registers
 
         reg-names:
-          minItems: 18
           items:
             - const: sid
             - const: broadcast
@@ -202,7 +200,6 @@ allOf:
           description: 17 memory controller channels and 1 for stream-id registers
 
         reg-names:
-          minItems: 18
           items:
             - const: sid
             - const: broadcast
index 74a6867..edac14a 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: MAX77714 PMIC with GPIO, RTC and watchdog from Maxim Integrated.
 
 maintainers:
-  - Luca Ceresoli <luca@lucaceresoli.net>
+  - Luca Ceresoli <luca.ceresoli@bootlin.com>
 
 description: |
   MAX77714 is a Power Management IC with 4 buck regulators, 9
index b672202..5ecdac9 100644 (file)
@@ -75,7 +75,6 @@ examples:
       sd-uhs-sdr104;
       sdhci,auto-cmd12;
       interrupts = <0x0 0x26 0x4>;
-      interrupt-names = "sdio0_0";
       clocks = <&scmi_clk 245>;
       clock-names = "sw_sdio";
     };
@@ -94,7 +93,6 @@ examples:
       non-removable;
       bus-width = <0x8>;
       interrupts = <0x0 0x27 0x4>;
-      interrupt-names = "sdio1_0";
       clocks = <&scmi_clk 245>;
       clock-names = "sw_sdio";
     };
index c79639e..3ee7588 100644 (file)
@@ -56,6 +56,9 @@ properties:
       - const: core
       - const: axi
 
+  interrupts:
+    maxItems: 1
+
   marvell,xenon-sdhc-id:
     $ref: /schemas/types.yaml#/definitions/uint32
     minimum: 0
@@ -145,7 +148,6 @@ allOf:
           items:
             - description: Xenon IP registers
             - description: Armada 3700 SoC PHY PAD Voltage Control register
-          minItems: 2
 
         marvell,pad-type:
           $ref: /schemas/types.yaml#/definitions/string
index ddff923..34dd1cc 100644 (file)
@@ -55,7 +55,6 @@ properties:
     maxItems: 1
 
   apple,sart:
-    maxItems: 1
     $ref: /schemas/types.yaml#/definitions/phandle
     description: |
       Reference to the SART address filter.
index de6a706..35f03df 100644 (file)
@@ -9,7 +9,7 @@ Required properties:
 - resets               : list of phandle and reset specifier pairs. There should be two entries, one
                          for the whole phy and one for the port
 - reset-names          : list of reset signal names. Should be "global" and "port"
-See: Documentation/devicetree/bindings/reset/st,sti-powerdown.txt
+See: Documentation/devicetree/bindings/reset/st,stih407-powerdown.yaml
 See: Documentation/devicetree/bindings/reset/reset.txt
 
 Example:
index 60dc278..b078009 100644 (file)
@@ -8,7 +8,7 @@ $schema: "http://devicetree.org/meta-schemas/core.yaml#"
 title: Qualcomm QMP USB3 DP PHY controller
 
 maintainers:
-  - Manu Gautam <mgautam@codeaurora.org>
+  - Wesley Cheng <quic_wcheng@quicinc.com>
 
 properties:
   compatible:
index 0ab3dad..d68ab49 100644 (file)
@@ -8,7 +8,7 @@ $schema: "http://devicetree.org/meta-schemas/core.yaml#"
 title: Qualcomm QUSB2 phy controller
 
 maintainers:
-  - Manu Gautam <mgautam@codeaurora.org>
+  - Wesley Cheng <quic_wcheng@quicinc.com>
 
 description:
   QUSB2 controller supports LS/FS/HS usb connectivity on Qualcomm chipsets.
index 1ce251d..7a0e6a9 100644 (file)
@@ -7,7 +7,7 @@ $schema: "http://devicetree.org/meta-schemas/core.yaml#"
 title: Qualcomm Synopsys Femto High-Speed USB PHY V2
 
 maintainers:
-  - Wesley Cheng <wcheng@codeaurora.org>
+  - Wesley Cheng <quic_wcheng@quicinc.com>
 
 description: |
   Qualcomm High-Speed USB PHY
index cbcbd31..939cb5b 100644 (file)
@@ -27,7 +27,7 @@ Required properties:
 - pins: List of pins. Valid values of pins properties are: gpio0, gpio1.
 
 First 2 properties must be added in the RK805 PMIC node, documented in
-Documentation/devicetree/bindings/mfd/rk808.txt
+Documentation/devicetree/bindings/mfd/rockchip,rk808.yaml
 
 Optional properties:
 -------------------
index 4d820df..6f17f39 100644 (file)
@@ -32,31 +32,37 @@ patternProperties:
           groups:
             description: The pin group to select.
             enum: [
+              # common
+              i2c, spi, wdt,
+
               # For MT7620 SoC
-              ephy, i2c, mdio, nd_sd, pa, pcie, rgmii1, rgmii2, spi, spi refclk,
-              uartf, uartlite, wdt, wled,
+              ephy, mdio, nd_sd, pa, pcie, rgmii1, rgmii2, spi refclk,
+              uartf, uartlite, wled,
 
               # For MT7628 and MT7688 SoCs
-              gpio, i2c, i2s, p0led_an, p0led_kn, p1led_an, p1led_kn, p2led_an,
+              gpio, i2s, p0led_an, p0led_kn, p1led_an, p1led_kn, p2led_an,
               p2led_kn, p3led_an, p3led_kn, p4led_an, p4led_kn, perst, pwm0,
-              pwm1, refclk, sdmode, spi, spi cs1, spis, uart0, uart1, uart2,
-              wdt, wled_an, wled_kn,
+              pwm1, refclk, sdmode, spi cs1, spis, uart0, uart1, uart2,
+              wled_an, wled_kn,
             ]
 
           function:
             description: The mux function to select.
             enum: [
+              # common
+              gpio, i2c, refclk, spi,
+
               # For MT7620 SoC
-              ephy, gpio, gpio i2s, gpio uartf, i2c, i2s uartf, mdio, nand, pa,
-              pcie refclk, pcie rst, pcm gpio, pcm i2s, pcm uartf, refclk,
-              rgmii1, rgmii2, sd, spi, spi refclk, uartf, uartlite, wdt refclk,
+              ephy, gpio i2s, gpio uartf, i2s uartf, mdio, nand, pa,
+              pcie refclk, pcie rst, pcm gpio, pcm i2s, pcm uartf,
+              rgmii1, rgmii2, sd, spi refclk, uartf, uartlite, wdt refclk,
               wdt rst, wled,
 
               # For MT7628 and MT7688 SoCs
-              antenna, debug, gpio, i2c, i2s, jtag, p0led_an, p0led_kn,
+              antenna, debug, i2s, jtag, p0led_an, p0led_kn,
               p1led_an, p1led_kn, p2led_an, p2led_kn, p3led_an, p3led_kn,
               p4led_an, p4led_kn, pcie, pcm, perst, pwm, pwm0, pwm1, pwm_uart2,
-              refclk, rsvd, sdxc, sdxc d5 d4, sdxc d6, sdxc d7, spi, spi cs1,
+              rsvd, sdxc, sdxc d5 d4, sdxc d6, sdxc d7, spi cs1,
               spis, sw_r, uart0, uart1, uart2, utif, wdt, wled_an, wled_kn, -,
             ]
 
index 425401c..f602a5d 100644 (file)
@@ -33,32 +33,29 @@ patternProperties:
           groups:
             description: The pin group to select.
             enum: [
+              # common
+              i2c, jtag, led, mdio, rgmii, spi, spi_cs1, uartf, uartlite,
+
               # For RT3050, RT3052 and RT3350 SoCs
-              i2c, jtag, mdio, rgmii, sdram, spi, uartf, uartlite,
+              sdram,
 
               # For RT3352 SoC
-              i2c, jtag, led, lna, mdio, pa, rgmii, spi, spi_cs1, uartf,
-              uartlite,
-
-              # For RT5350 SoC
-              i2c, jtag, led, spi, spi_cs1, uartf, uartlite,
+              lna, pa
             ]
 
           function:
             description: The mux function to select.
             enum: [
+              # common
+              gpio, gpio i2s, gpio uartf, i2c, i2s uartf, jtag, led, mdio,
+              pcm gpio, pcm i2s, pcm uartf, rgmii, spi, spi_cs1, uartf,
+              uartlite, wdg_cs1,
+
               # For RT3050, RT3052 and RT3350 SoCs
-              gpio, gpio i2s, gpio uartf, i2c, i2s uartf, jtag, mdio, pcm gpio,
-              pcm i2s, pcm uartf, rgmii, sdram, spi, uartf, uartlite,
+              sdram,
 
               # For RT3352 SoC
-              gpio, gpio i2s, gpio uartf, i2c, i2s uartf, jtag, led, lna, mdio,
-              pa, pcm gpio, pcm i2s, pcm uartf, rgmii, spi, spi_cs1, uartf,
-              uartlite, wdg_cs1,
-
-              # For RT5350 SoC
-              gpio, gpio i2s, gpio uartf, i2c, i2s uartf, jtag, led, pcm gpio,
-              pcm i2s, pcm uartf, spi, spi_cs1, uartf, uartlite, wdg_cs1,
+              lna, pa
             ]
 
         required:
index 675b9b2..f23dcc5 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Maxim Integrated MAX77976 Battery charger
 
 maintainers:
-  - Luca Ceresoli <luca@lucaceresoli.net>
+  - Luca Ceresoli <luca.ceresoli@bootlin.com>
 
 description: |
   The Maxim MAX77976 is a 19Vin / 5.5A, 1-Cell Li+ battery charger
index 12ed98c..dbe78cd 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: The Qualcomm PMIC VBUS output regulator driver
 
 maintainers:
-  - Wesley Cheng <wcheng@codeaurora.org>
+  - Wesley Cheng <quic_wcheng@quicinc.com>
 
 description: |
   This regulator driver controls the VBUS output by the Qualcomm PMIC.  This
index d775f72..1c2e92c 100644 (file)
@@ -4,7 +4,7 @@ Versatile Express voltage regulators
 Requires node properties:
 - "compatible" value: "arm,vexpress-volt"
 - "arm,vexpress-sysreg,func" when controlled via vexpress-sysreg
-  (see Documentation/devicetree/bindings/arm/vexpress-sysreg.txt
+  (see Documentation/devicetree/bindings/arm/vexpress-config.yaml
   for more details)
 
 Required regulator properties:
index bf73de0..4aa3684 100644 (file)
@@ -13,7 +13,7 @@ Required properties:
  - resets      : list of phandle and reset specifier pairs. There should be two entries, one
                  for the powerdown and softreset lines of the usb3 IP
  - reset-names : list of reset signal names. Names should be "powerdown" and "softreset"
-See: Documentation/devicetree/bindings/reset/st,sti-powerdown.txt
+See: Documentation/devicetree/bindings/reset/st,stih407-powerdown.yaml
 See: Documentation/devicetree/bindings/reset/reset.txt
 
  - #address-cells, #size-cells : should be '1' if the device has sub-nodes
index 065c91d..d6f2bde 100644 (file)
@@ -17,7 +17,7 @@ See: Documentation/devicetree/bindings/clock/clock-bindings.txt
  - resets              : phandle + reset specifier pairs to the powerdown and softreset lines
                          of the USB IP
  - reset-names         : should be "power" and "softreset"
-See: Documentation/devicetree/bindings/reset/st,sti-powerdown.txt
+See: Documentation/devicetree/bindings/reset/st,stih407-powerdown.yaml
 See: Documentation/devicetree/bindings/reset/reset.txt
 
 Example:
index 44c998c..1c73557 100644 (file)
@@ -15,7 +15,7 @@ See: Documentation/devicetree/bindings/clock/clock-bindings.txt
 
  - resets              : phandle to the powerdown and reset controller for the USB IP
  - reset-names         : should be "power" and "softreset".
-See: Documentation/devicetree/bindings/reset/st,sti-powerdown.txt
+See: Documentation/devicetree/bindings/reset/st,stih407-powerdown.yaml
 See: Documentation/devicetree/bindings/reset/reset.txt
 
 Example:
index e336fe2..749e196 100644 (file)
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm SuperSpeed DWC3 USB SoC controller
 
 maintainers:
-  - Manu Gautam <mgautam@codeaurora.org>
+  - Wesley Cheng <quic_wcheng@quicinc.com>
 
 properties:
   compatible:
index 6bb20b4..0496773 100644 (file)
@@ -143,6 +143,9 @@ patternProperties:
     description: ASPEED Technology Inc.
   "^asus,.*":
     description: AsusTek Computer Inc.
+  "^atheros,.*":
+    description: Qualcomm Atheros, Inc. (deprecated, use qca)
+    deprecated: true
   "^atlas,.*":
     description: Atlas Scientific LLC
   "^atmel,.*":
index cbcf19f..ed6c1ca 100644 (file)
@@ -64,7 +64,6 @@ if:
 then:
   properties:
     clocks:
-      minItems: 2
       items:
         - description: High-frequency oscillator input, divided internally
         - description: Low-frequency oscillator input
diff --git a/Documentation/driver-api/hte/hte.rst b/Documentation/driver-api/hte/hte.rst
new file mode 100644 (file)
index 0000000..153f323
--- /dev/null
@@ -0,0 +1,79 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+============================================
+The Linux Hardware Timestamping Engine (HTE)
+============================================
+
+:Author: Dipen Patel
+
+Introduction
+------------
+
+Certain devices have built-in hardware timestamping engines which can
+monitor sets of system signals, lines, buses, etc. in real time for state
+changes; upon detecting a change they can automatically store the timestamp
+at the moment of occurrence. Such functionality may achieve better
+timestamping accuracy than software counterparts such as ktime and friends.
+
+This document describes the API that can be used by hardware timestamping
+engine provider and consumer drivers that want to use the hardware timestamping
+engine (HTE) framework. Both consumers and providers must include
+``#include <linux/hte.h>``.
+
+The HTE framework APIs for the providers
+----------------------------------------
+
+.. kernel-doc:: drivers/hte/hte.c
+   :functions: devm_hte_register_chip hte_push_ts_ns
+
+The HTE framework APIs for the consumers
+----------------------------------------
+
+.. kernel-doc:: drivers/hte/hte.c
+   :functions: hte_init_line_attr hte_ts_get hte_ts_put devm_hte_request_ts_ns hte_request_ts_ns hte_enable_ts hte_disable_ts of_hte_req_count hte_get_clk_src_info
+
+The HTE framework public structures
+-----------------------------------
+.. kernel-doc:: include/linux/hte.h
+
+More on the HTE timestamp data
+------------------------------
+The ``struct hte_ts_data`` is used to pass timestamp details between the
+consumers and the providers. It expresses the timestamp as a u64 value in
+nanoseconds. A typical timestamp data life cycle for a GPIO line is as
+follows::
+
+ - Monitors GPIO line change.
+ - Detects the state change on GPIO line.
+ - Converts timestamps in nanoseconds.
+ - Stores GPIO raw level in raw_level variable if the provider has that
+   hardware capability.
+ - Pushes this hte_ts_data object to HTE subsystem.
+ - HTE subsystem increments seq counter and invokes consumer provided callback.
+   Based on callback return value, the HTE core invokes secondary callback in
+   the thread context.
+
+HTE subsystem debugfs attributes
+--------------------------------
+HTE subsystem creates debugfs attributes at ``/sys/kernel/debug/hte/``.
+It also creates line/signal-related debugfs attributes at
+``/sys/kernel/debug/hte/<provider>/<label or line id>/``. Note that these
+attributes are read-only.
+
+`ts_requested`
+               The total number of entities requested from the given provider,
+               where an entity is specified by the provider and could
+               represent lines, GPIOs, chip signals, buses, etc. The attribute
+               is available at ``/sys/kernel/debug/hte/<provider>/``.
+
+`total_ts`
+               The total number of entities supported by the provider. The
+               attribute is available at
+               ``/sys/kernel/debug/hte/<provider>/``.
+
+`dropped_timestamps`
+               The number of dropped timestamps for a given line. The
+               attribute is available at
+               ``/sys/kernel/debug/hte/<provider>/<label or line id>/``.
diff --git a/Documentation/driver-api/hte/index.rst b/Documentation/driver-api/hte/index.rst
new file mode 100644 (file)
index 0000000..9f43301
--- /dev/null
@@ -0,0 +1,22 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============================================
+The Linux Hardware Timestamping Engine (HTE)
+============================================
+
+The HTE Subsystem
+=================
+
+.. toctree::
+   :maxdepth: 1
+
+   hte
+
+HTE Tegra Provider
+==================
+
+.. toctree::
+   :maxdepth: 1
+
+   tegra194-hte
+
diff --git a/Documentation/driver-api/hte/tegra194-hte.rst b/Documentation/driver-api/hte/tegra194-hte.rst
new file mode 100644 (file)
index 0000000..41983e0
--- /dev/null
@@ -0,0 +1,49 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+HTE Kernel provider driver
+==========================
+
+Description
+-----------
+The Nvidia tegra194 HTE provider driver implements two GTE
+(Generic Timestamping Engine) instances: 1) GPIO GTE and 2) LIC
+(Legacy Interrupt Controller) IRQ GTE. Both GTE instances take their
+timestamps from the TSC system counter, which runs at 31.25 MHz; the driver
+converts counter ticks to nanoseconds before storing the timestamp value.
+
+GPIO GTE
+--------
+
+This GTE instance timestamps GPIO lines in real time. For that to happen, the
+GPIO needs to be configured as an input. The always-on (AON) GPIO controller
+instance supports real-time timestamping and has 39 GPIO lines. The GPIO GTE
+and the AON GPIO controller are tightly coupled: specific bits must be set in
+the GPIO config register before the GPIO GTE can be used, for which GPIOLIB
+adds the two optional APIs below. The GPIO GTE code supports both kernel and
+userspace consumers. Kernel consumers can talk to the HTE subsystem directly,
+while userspace timestamp requests go through the GPIOLIB CDEV framework to
+the HTE subsystem.
+
+.. kernel-doc:: drivers/gpio/gpiolib.c
+   :functions: gpiod_enable_hw_timestamp_ns gpiod_disable_hw_timestamp_ns
+
+For userspace consumers, GPIO_V2_LINE_FLAG_EVENT_CLOCK_HTE flag must be
+specified during IOCTL calls. Refer to ``tools/gpio/gpio-event-mon.c``, which
+returns the timestamp in nanoseconds.
+
+LIC (Legacy Interrupt Controller) IRQ GTE
+-----------------------------------------
+
+This GTE instance timestamps LIC IRQ lines in real time; it can add
+timestamps to 352 IRQ lines. The hte devicetree binding described at
+``Documentation/devicetree/bindings/hte/`` provides an example of how a
+consumer can request an IRQ line. Since the mapping to the IRQ GTE provider
+is one-to-one, consumers can simply specify the IRQ number they are
+interested in. There is no userspace consumer support for this GTE instance
+in the HTE framework.
+
+The provider source code of both IRQ and GPIO GTE instances is located at
+``drivers/hte/hte-tegra194.c``. The test driver
+``drivers/hte/hte-tegra194-test.c`` demonstrates HTE API usage for both IRQ
+and GPIO GTE.
index d76a60d..a6d525c 100644 (file)
@@ -108,6 +108,7 @@ available subsections can be seen below.
    xilinx/index
    xillybus
    zorro
+   hte/index
 
 .. only::  subproject and html
 
index 10482de..a053667 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index bcefb5a..c0bb9c9 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index d80d994..c9bfff2 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: |  ok  |
     |        ia64: |  ok  |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 53eab15..35e2a44 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 9492645..9b3e2ce 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index b4274b8..9c7ffec 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: |  ok  |
     |        ia64: |  ok  |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index c15bb4b..2fd5fb6 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index 4c31fc9..c45711e 100644 (file)
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
     |       nios2: | TODO |
     |    openrisc: | TODO |
-    |      parisc: | TODO |
+    |      parisc: |  ok  |
     |     powerpc: |  ok  |
     |       riscv: |  ok  |
     |        s390: |  ok  |
index d7a5ac4..502c1d4 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: |  ok  |
     |        mips: |  ok  |
@@ -24,7 +25,7 @@
     |        s390: |  ok  |
     |          sh: |  ok  |
     |       sparc: | TODO |
-    |          um: | TODO |
+    |          um: |  ok  |
     |         x86: |  ok  |
     |      xtensa: | TODO |
     -----------------------
index 136e14c..afb90be 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 5b3f3d8..04120d2 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: |  ok  |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: |  ok  |
     |        mips: |  ok  |
index 7a2eab4..e487c35 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: |  ok  |
     |        mips: |  ok  |
index db02ab1..b3697f4 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index ec186e7..452385a 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: |  ok  |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 4b7865e..daecf04 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: |  ok  |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 5d9befa..adb1bd0 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index d97fd38..ddcd716 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index d30e347..2512120 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 9ae1fa2..f2fcff8 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index 9e09988..95e485c 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: |  ok  |
     |        mips: |  ok  |
index 5c4ec31..8b1a8d9 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index 65007c1..ab69e8f 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: |  ok  |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: |  ok  |
     |        mips: |  ok  |
index 2005667..0bfb72a 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
@@ -20,7 +21,7 @@
     |    openrisc: |  ok  |
     |      parisc: | TODO |
     |     powerpc: |  ok  |
-    |       riscv: | TODO |
+    |       riscv: |  ok  |
     |        s390: | TODO |
     |          sh: | TODO |
     |       sparc: |  ok  |
index 707514f..d2f2201 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 9f31ce9..0d0647b 100644 (file)
@@ -7,12 +7,13 @@
     |         arch |status|
     -----------------------
     |       alpha: | TODO |
-    |         arc: | TODO |
+    |         arc: |  ok  |
     |         arm: |  ok  |
     |       arm64: |  ok  |
     |        csky: |  ok  |
     |     hexagon: |  ok  |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index f148c43..13c297b 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 32c88b6..931687e 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index d82a1f0..336d728 100644 (file)
@@ -36,6 +36,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index 2687564..76d0121 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ..  |
     |     hexagon: |  ..  |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: |  ..  |
     |  microblaze: |  ..  |
     |        mips: | TODO |
index 1b41091..a86b8b1 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 2732725..364169f 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index b9a4bda..6ea2747 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: |  ok  |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: |  ok  |
     |        mips: |  ok  |
index 4aa51c9..c9e0a16 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 0306ece..fd17d8d 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: |  ..  |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 5d64e40..1a859ac 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ok  |
     |     hexagon: | TODO |
     |        ia64: |  ok  |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 92c9db2..b122995 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 7424fea..02f325f 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: |  ok  |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index 6098506..9bfff97 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: |  ..  |
     |     hexagon: |  ..  |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: |  ..  |
     |  microblaze: |  ..  |
     |        mips: |  ok  |
index f2dcbec..039e4e9 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: |  ..  |
     |  microblaze: |  ..  |
     |        mips: | TODO |
index 680090d..13b4940 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: | TODO |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: | TODO |
index 205a90e..b01bf7b 100644 (file)
@@ -13,6 +13,7 @@
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
index 9f16d6e..fc3687b 100644 (file)
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
+    |       loong: |  ok  |
     |        m68k: | TODO |
     |  microblaze: | TODO |
     |        mips: |  ok  |
     |       nios2: | TODO |
     |    openrisc: | TODO |
-    |      parisc: | TODO |
+    |      parisc: |  ok  |
     |     powerpc: |  ok  |
     |       riscv: |  ok  |
     |        s390: |  ok  |
index a80a599..3276c3d 100644 (file)
@@ -37,30 +37,31 @@ The network filesystem helper library needs a place to store a bit of state for
 its use on each netfs inode it is helping to manage.  To this end, a context
 structure is defined::
 
-       struct netfs_i_context {
+       struct netfs_inode {
+               struct inode inode;
                const struct netfs_request_ops *ops;
-               struct fscache_cookie   *cache;
+               struct fscache_cookie *cache;
        };
 
-A network filesystem that wants to use netfs lib must place one of these
-directly after the VFS ``struct inode`` it allocates, usually as part of its
-own struct.  This can be done in a way similar to the following::
+A network filesystem that wants to use netfs lib must place one of these in its
+inode wrapper struct instead of the VFS ``struct inode``.  This can be done in
+a way similar to the following::
 
        struct my_inode {
-               struct {
-                       /* These must be contiguous */
-                       struct inode            vfs_inode;
-                       struct netfs_i_context  netfs_ctx;
-               };
+               struct netfs_inode netfs; /* Netfslib context and vfs inode */
                ...
        };
 
-This allows netfslib to find its state by simple offset from the inode pointer,
-thereby allowing the netfslib helper functions to be pointed to directly by the
-VFS/VM operation tables.
+This allows netfslib to find its state by using ``container_of()`` from the
+inode pointer, thereby allowing the netfslib helper functions to be pointed to
+directly by the VFS/VM operation tables.
 
 The structure contains the following fields:
 
+ * ``inode``
+
+   The VFS inode structure.
+
  * ``ops``
 
    The set of operations provided by the network filesystem to netfslib.
@@ -78,14 +79,12 @@ To help deal with the per-inode context, a number of helper functions are
 provided.  Firstly, a function to perform basic initialisation on a context and
 set the operations table pointer::
 
-       void netfs_i_context_init(struct inode *inode,
-                                 const struct netfs_request_ops *ops);
+       void netfs_inode_init(struct inode *inode,
+                             const struct netfs_request_ops *ops);
 
-then two functions to cast between the VFS inode structure and the netfs
-context::
+then a function to cast from the VFS inode structure to the netfs context::
 
-       struct netfs_i_context *netfs_i_context(struct inode *inode);
-       struct inode *netfs_inode(struct netfs_i_context *ctx);
+       struct netfs_inode *netfs_inode(struct inode *inode);
 
 and finally, a function to get the cache cookie pointer from the context
 attached to an inode (or NULL if fscache is disabled)::
diff --git a/Documentation/hte/hte.rst b/Documentation/hte/hte.rst
deleted file mode 100644 (file)
index 153f323..0000000
+++ /dev/null
@@ -1,79 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0+
-
-============================================
-The Linux Hardware Timestamping Engine (HTE)
-============================================
-
-:Author: Dipen Patel
-
-Introduction
-------------
-
-Certain devices have built in hardware timestamping engines which can
-monitor sets of system signals, lines, buses etc... in realtime for state
-change; upon detecting the change they can automatically store the timestamp at
-the moment of occurrence. Such functionality may help achieve better accuracy
-in obtaining timestamps than using software counterparts i.e. ktime and
-friends.
-
-This document describes the API that can be used by hardware timestamping
-engine provider and consumer drivers that want to use the hardware timestamping
-engine (HTE) framework. Both consumers and providers must include
-``#include <linux/hte.h>``.
-
-The HTE framework APIs for the providers
-----------------------------------------
-
-.. kernel-doc:: drivers/hte/hte.c
-   :functions: devm_hte_register_chip hte_push_ts_ns
-
-The HTE framework APIs for the consumers
-----------------------------------------
-
-.. kernel-doc:: drivers/hte/hte.c
-   :functions: hte_init_line_attr hte_ts_get hte_ts_put devm_hte_request_ts_ns hte_request_ts_ns hte_enable_ts hte_disable_ts of_hte_req_count hte_get_clk_src_info
-
-The HTE framework public structures
------------------------------------
-.. kernel-doc:: include/linux/hte.h
-
-More on the HTE timestamp data
-------------------------------
-The ``struct hte_ts_data`` is used to pass timestamp details between the
-consumers and the providers. It expresses timestamp data in nanoseconds in
-u64. An example of the typical timestamp data life cycle, for the GPIO line is
-as follows::
-
- - Monitors GPIO line change.
- - Detects the state change on GPIO line.
- - Converts timestamps in nanoseconds.
- - Stores GPIO raw level in raw_level variable if the provider has that
- hardware capability.
- - Pushes this hte_ts_data object to HTE subsystem.
- - HTE subsystem increments seq counter and invokes consumer provided callback.
- Based on callback return value, the HTE core invokes secondary callback in
- the thread context.
-
-HTE subsystem debugfs attributes
---------------------------------
-HTE subsystem creates debugfs attributes at ``/sys/kernel/debug/hte/``.
-It also creates line/signal-related debugfs attributes at
-``/sys/kernel/debug/hte/<provider>/<label or line id>/``. Note that these
-attributes are read-only.
-
-`ts_requested`
-               The total number of entities requested from the given provider,
-               where entity is specified by the provider and could represent
-               lines, GPIO, chip signals, buses etc...
-                The attribute will be available at
-               ``/sys/kernel/debug/hte/<provider>/``.
-
-`total_ts`
-               The total number of entities supported by the provider.
-                The attribute will be available at
-               ``/sys/kernel/debug/hte/<provider>/``.
-
-`dropped_timestamps`
-               The dropped timestamps for a given line.
-                The attribute will be available at
-               ``/sys/kernel/debug/hte/<provider>/<label or line id>/``.
diff --git a/Documentation/hte/index.rst b/Documentation/hte/index.rst
deleted file mode 100644 (file)
index 9f43301..0000000
+++ /dev/null
@@ -1,22 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0
-
-============================================
-The Linux Hardware Timestamping Engine (HTE)
-============================================
-
-The HTE Subsystem
-=================
-
-.. toctree::
-   :maxdepth: 1
-
-   hte
-
-HTE Tegra Provider
-==================
-
-.. toctree::
-   :maxdepth: 1
-
-   tegra194-hte
-
diff --git a/Documentation/hte/tegra194-hte.rst b/Documentation/hte/tegra194-hte.rst
deleted file mode 100644 (file)
index 41983e0..0000000
+++ /dev/null
@@ -1,49 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0+
-
-HTE Kernel provider driver
-==========================
-
-Description
------------
-The Nvidia tegra194 HTE provider driver implements two GTE
-(Generic Timestamping Engine) instances: 1) GPIO GTE and 2) LIC
-(Legacy Interrupt Controller) IRQ GTE. Both GTE instances get the
-timestamp from the system counter TSC which has 31.25MHz clock rate, and the
-driver converts clock tick rate to nanoseconds before storing it as timestamp
-value.
-
-GPIO GTE
---------
-
-This GTE instance timestamps GPIO in real time. For that to happen GPIO
-needs to be configured as input. The always on (AON) GPIO controller instance
-supports timestamping GPIOs in real time and it has 39 GPIO lines. The GPIO GTE
-and AON GPIO controller are tightly coupled as it requires very specific bits
-to be set in GPIO config register before GPIO GTE can be used, for that GPIOLIB
-adds two optional APIs as below. The GPIO GTE code supports both kernel
-and userspace consumers. The kernel space consumers can directly talk to HTE
-subsystem while userspace consumers timestamp requests go through GPIOLIB CDEV
-framework to HTE subsystem.
-
-.. kernel-doc:: drivers/gpio/gpiolib.c
-   :functions: gpiod_enable_hw_timestamp_ns gpiod_disable_hw_timestamp_ns
-
-For userspace consumers, GPIO_V2_LINE_FLAG_EVENT_CLOCK_HTE flag must be
-specified during IOCTL calls. Refer to ``tools/gpio/gpio-event-mon.c``, which
-returns the timestamp in nanoseconds.
-
-LIC (Legacy Interrupt Controller) IRQ GTE
------------------------------------------
-
-This GTE instance timestamps LIC IRQ lines in real time. There are 352 IRQ
-lines which this instance can add timestamps to in real time. The hte
-devicetree binding described at ``Documentation/devicetree/bindings/hte/``
-provides an example of how a consumer can request an IRQ line. Since it is a
-one-to-one mapping with IRQ GTE provider, consumers can simply specify the IRQ
-number that they are interested in. There is no userspace consumer support for
-this GTE instance in the HTE framework.
-
-The provider source code of both IRQ and GPIO GTE instances is located at
-``drivers/hte/hte-tegra194.c``. The test driver
-``drivers/hte/hte-tegra194-test.c`` demonstrates HTE API usage for both IRQ
-and GPIO GTE.
index 8f9be0e..67036a0 100644 (file)
@@ -137,7 +137,6 @@ needed).
    scheduler/index
    mhi/index
    peci/index
-   hte/index
 
 Architecture-agnostic documentation
 -----------------------------------
index 8cb2cd4..be8e10c 100644 (file)
@@ -214,6 +214,31 @@ of calling send directly after a handshake using gnutls.
 Since it doesn't implement a full record layer, control
 messages are not supported.
 
+Optional optimizations
+----------------------
+
+There are certain condition-specific optimizations the TLS ULP can make,
+if requested. Those optimizations are either not universally beneficial
+or may impact correctness, hence they require an opt-in.
+All options are set per-socket using setsockopt(), and their
+state can be checked using getsockopt() and via socket diag (``ss``).
+
+TLS_TX_ZEROCOPY_RO
+~~~~~~~~~~~~~~~~~~
+
+For device offload only. Allow sendfile() data to be transmitted directly
+to the NIC without making an in-kernel copy. This allows true zero-copy
+behavior when device offload is enabled.
+
+The application must make sure that the data is not modified between being
+submitted and transmission completing. In other words this is mostly
+applicable if the data sent on a socket via sendfile() is read-only.
+
+Modifying the data may result in different versions of the data being used
+for the original TCP transmission and TCP retransmissions. To the receiver
+this will look like TLS records had been tampered with and will result
+in record authentication failures.
+
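+A minimal sketch of the opt-in call shape follows. The constant values
+mirror ``<linux/tls.h>`` as of this series but are defined locally so the
+snippet is self-contained; a real caller would have attached the TLS ULP
+and installed TX keys on a device-offload-capable socket first::
+
+	#include <stdio.h>
+	#include <sys/socket.h>
+
+	#ifndef SOL_TLS
+	#define SOL_TLS 282
+	#endif
+	#ifndef TLS_TX_ZEROCOPY_RO
+	#define TLS_TX_ZEROCOPY_RO 3
+	#endif
+
+	/* Opt in to zero-copy sendfile() transmission on an offloaded TLS
+	 * socket; returns 0 on success, -1 with errno set otherwise. */
+	static int enable_tls_tx_zerocopy_ro(int sock)
+	{
+		int one = 1;
+
+		return setsockopt(sock, SOL_TLS, TLS_TX_ZEROCOPY_RO,
+				  &one, sizeof(one));
+	}
+
+	int main(void)
+	{
+		int tcp = socket(AF_INET, SOCK_STREAM, 0);
+
+		/* Expected to fail here: plain TCP, no TLS ULP attached. */
+		if (tcp >= 0 && enable_tls_tx_zerocopy_ro(tcp) != 0)
+			perror("TLS_TX_ZEROCOPY_RO");
+		return 0;
+	}
+
+As noted above, the option state can afterwards be read back with
+getsockopt() or inspected via socket diag (``ss``).
+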
 Statistics
 ==========
 
index b0bd510..6d5ec1e 100644 (file)
@@ -42,7 +42,7 @@ if usbmon is built into the kernel::
        # modprobe usbmon
        #
 
-Verify that bus sockets are present:
+Verify that bus sockets are present::
 
        # ls /sys/kernel/debug/usb/usbmon
        0s  0u  1s  1t  1u  2s  2t  2u  3s  3t  3u  4s  4t  4u
index a6d3bd9..05fcbea 100644 (file)
@@ -171,7 +171,6 @@ F:  drivers/scsi/53c700*
 
 6LOWPAN GENERIC (BTLE/IEEE 802.15.4)
 M:     Alexander Aring <alex.aring@gmail.com>
-M:     Jukka Rissanen <jukka.rissanen@linux.intel.com>
 L:     linux-bluetooth@vger.kernel.org
 L:     linux-wpan@vger.kernel.org
 S:     Maintained
@@ -1507,7 +1506,7 @@ F:        drivers/clocksource/arm_arch_timer.c
 ARM HDLCD DRM DRIVER
 M:     Liviu Dudau <liviu.dudau@arm.com>
 S:     Supported
-F:     Documentation/devicetree/bindings/display/arm,hdlcd.txt
+F:     Documentation/devicetree/bindings/display/arm,hdlcd.yaml
 F:     drivers/gpu/drm/arm/hdlcd_*
 
 ARM INTEGRATOR, VERSATILE AND REALVIEW SUPPORT
@@ -1542,7 +1541,7 @@ M:        Mihail Atanassov <mihail.atanassov@arm.com>
 L:     Mali DP Maintainers <malidp@foss.arm.com>
 S:     Supported
 T:     git git://anongit.freedesktop.org/drm/drm-misc
-F:     Documentation/devicetree/bindings/display/arm,komeda.txt
+F:     Documentation/devicetree/bindings/display/arm,komeda.yaml
 F:     Documentation/gpu/komeda-kms.rst
 F:     drivers/gpu/drm/arm/display/include/
 F:     drivers/gpu/drm/arm/display/komeda/
@@ -1564,7 +1563,7 @@ M:        Brian Starkey <brian.starkey@arm.com>
 L:     Mali DP Maintainers <malidp@foss.arm.com>
 S:     Supported
 T:     git git://anongit.freedesktop.org/drm/drm-misc
-F:     Documentation/devicetree/bindings/display/arm,malidp.txt
+F:     Documentation/devicetree/bindings/display/arm,malidp.yaml
 F:     Documentation/gpu/afbc.rst
 F:     drivers/gpu/drm/arm/
 
@@ -2009,7 +2008,7 @@ L:        linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:     Maintained
 T:     git git://github.com/ulli-kroll/linux.git
 F:     Documentation/devicetree/bindings/arm/gemini.yaml
-F:     Documentation/devicetree/bindings/net/cortina,gemini-ethernet.txt
+F:     Documentation/devicetree/bindings/net/cortina,gemini-ethernet.yaml
 F:     Documentation/devicetree/bindings/pinctrl/cortina,gemini-pinctrl.txt
 F:     Documentation/devicetree/bindings/rtc/faraday,ftrtc010.yaml
 F:     arch/arm/boot/dts/gemini*
@@ -3757,6 +3756,13 @@ F:       include/linux/bpf_lsm.h
 F:     kernel/bpf/bpf_lsm.c
 F:     security/bpf/
 
+BPFTOOL
+M:     Quentin Monnet <quentin@isovalent.com>
+L:     bpf@vger.kernel.org
+S:     Maintained
+F:     kernel/bpf/disasm.*
+F:     tools/bpf/bpftool/
+
 BROADCOM B44 10/100 ETHERNET DRIVER
 M:     Michael Chan <michael.chan@broadcom.com>
 L:     netdev@vger.kernel.org
@@ -6078,7 +6084,7 @@ M:        Sakari Ailus <sakari.ailus@linux.intel.com>
 L:     linux-media@vger.kernel.org
 S:     Maintained
 T:     git git://linuxtv.org/media_tree.git
-F:     Documentation/devicetree/bindings/media/i2c/dongwoon,dw9807-vcm.txt
+F:     Documentation/devicetree/bindings/media/i2c/dongwoon,dw9807-vcm.yaml
 F:     drivers/media/i2c/dw9807-vcm.c
 
 DOUBLETALK DRIVER
@@ -9081,7 +9087,7 @@ HTE SUBSYSTEM
 M:     Dipen Patel <dipenp@nvidia.com>
 S:     Maintained
 F:     Documentation/devicetree/bindings/timestamp/
-F:     Documentation/hte/
+F:     Documentation/driver-api/hte/
 F:     drivers/hte/
 F:     include/linux/hte.h
 
@@ -11257,6 +11263,7 @@ M:      Damien Le Moal <damien.lemoal@opensource.wdc.com>
 L:     linux-ide@vger.kernel.org
 S:     Maintained
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata.git
+F:     Documentation/ABI/testing/sysfs-ata
 F:     Documentation/devicetree/bindings/ata/
 F:     drivers/ata/
 F:     include/linux/ata.h
@@ -12696,7 +12703,6 @@ L:      netdev@vger.kernel.org
 S:     Supported
 W:     http://www.mellanox.com
 Q:     https://patchwork.kernel.org/project/netdevbpf/list/
-F:     drivers/net/ethernet/mellanox/mlx5/core/accel/*
 F:     drivers/net/ethernet/mellanox/mlx5/core/en_accel/*
 F:     drivers/net/ethernet/mellanox/mlx5/core/fpga/*
 F:     include/linux/mlx5/mlx5_ifc_fpga.h
@@ -15824,6 +15830,14 @@ S:     Maintained
 F:     Documentation/devicetree/bindings/iio/chemical/plantower,pms7003.yaml
 F:     drivers/iio/chemical/pms7003.c
 
+PLATFORM FEATURE INFRASTRUCTURE
+M:     Juergen Gross <jgross@suse.com>
+S:     Maintained
+F:     arch/*/include/asm/platform-feature.h
+F:     include/asm-generic/platform-feature.h
+F:     include/linux/platform-feature.h
+F:     kernel/platform-feature.c
+
 PLDMFW LIBRARY
 M:     Jacob Keller <jacob.e.keller@intel.com>
 S:     Maintained
@@ -19220,7 +19234,7 @@ F:      arch/arc/plat-axs10x
 SYNOPSYS AXS10x RESET CONTROLLER DRIVER
 M:     Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
 S:     Supported
-F:     Documentation/devicetree/bindings/reset/snps,axs10x-reset.txt
+F:     Documentation/devicetree/bindings/reset/snps,axs10x-reset.yaml
 F:     drivers/reset/reset-axs10x.c
 
 SYNOPSYS CREG GPIO DRIVER
index c43d825..b2e93c1 100644 (file)
--- a/Makefile
+++ b/Makefile
@@ -788,6 +788,7 @@ stackp-flags-$(CONFIG_STACKPROTECTOR_STRONG)      := -fstack-protector-strong
 KBUILD_CFLAGS += $(stackp-flags-y)
 
 KBUILD_CFLAGS-$(CONFIG_WERROR) += -Werror
+KBUILD_CFLAGS-$(CONFIG_CC_NO_ARRAY_BOUNDS) += -Wno-array-bounds
 KBUILD_CFLAGS += $(KBUILD_CFLAGS-y) $(CONFIG_CC_IMPLICIT_FALLTHROUGH)
 
 ifdef CONFIG_CC_IS_CLANG
@@ -805,6 +806,9 @@ endif
 KBUILD_CFLAGS += $(call cc-disable-warning, unused-but-set-variable)
 KBUILD_CFLAGS += $(call cc-disable-warning, unused-const-variable)
 
+# These result in bogus false positives
+KBUILD_CFLAGS += $(call cc-disable-warning, dangling-pointer)
+
 ifdef CONFIG_FRAME_POINTER
 KBUILD_CFLAGS  += -fno-omit-frame-pointer -fno-optimize-sibling-calls
 else
diff --git a/arch/arm/include/asm/xen/xen-ops.h b/arch/arm/include/asm/xen/xen-ops.h
new file mode 100644 (file)
index 0000000..7ebb7eb
--- /dev/null
@@ -0,0 +1,2 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <xen/arm/xen-ops.h>
index 82ffac6..059cce0 100644 (file)
@@ -33,7 +33,7 @@
 #include <asm/dma-iommu.h>
 #include <asm/mach/map.h>
 #include <asm/system_info.h>
-#include <xen/swiotlb-xen.h>
+#include <asm/xen/xen-ops.h>
 
 #include "dma.h"
 #include "mm.h"
@@ -2287,10 +2287,7 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 
        set_dma_ops(dev, dma_ops);
 
-#ifdef CONFIG_XEN
-       if (xen_initial_domain())
-               dev->dma_ops = &xen_swiotlb_dma_ops;
-#endif
+       xen_setup_dma_ops(dev);
        dev->archdata.dma_ops_setup = true;
 }
 
index 07eb69f..1f9c3ba 100644 (file)
@@ -443,6 +443,8 @@ static int __init xen_guest_init(void)
        if (!xen_domain())
                return 0;
 
+       xen_set_restricted_virtio_memory_access();
+
        if (!acpi_disabled)
                xen_acpi_guest_init();
        else
index 55f998c..42ff95d 100644 (file)
 #define ID_AA64SMFR0_F32F32_SHIFT      32
 
 #define ID_AA64SMFR0_FA64              0x1
-#define ID_AA64SMFR0_I16I64            0x4
+#define ID_AA64SMFR0_I16I64            0xf
 #define ID_AA64SMFR0_F64F64            0x1
-#define ID_AA64SMFR0_I8I32             0x4
+#define ID_AA64SMFR0_I8I32             0xf
 #define ID_AA64SMFR0_F16F32            0x1
 #define ID_AA64SMFR0_B16F32            0x1
 #define ID_AA64SMFR0_F32F32            0x1
diff --git a/arch/arm64/include/asm/xen/xen-ops.h b/arch/arm64/include/asm/xen/xen-ops.h
new file mode 100644 (file)
index 0000000..7ebb7eb
--- /dev/null
@@ -0,0 +1,2 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <xen/arm/xen-ops.h>
index 8199793..aecf307 100644 (file)
@@ -331,7 +331,7 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type,
  *    trapping to the kernel.
  *
  *    When stored, Z0-Z31 (incorporating Vn in bits[127:0] or the
- *    corresponding Zn), P0-P15 and FFR are encoded in in
+ *    corresponding Zn), P0-P15 and FFR are encoded in
  *    task->thread.sve_state, formatted appropriately for vector
  *    length task->thread.sve_vl or, if SVCR.SM is set,
  *    task->thread.sme_vl.
@@ -1916,10 +1916,15 @@ void __efi_fpsimd_begin(void)
                        if (system_supports_sme()) {
                                svcr = read_sysreg_s(SYS_SVCR);
 
-                               if (!system_supports_fa64())
-                                       ffr = svcr & SVCR_SM_MASK;
+                               __this_cpu_write(efi_sm_state,
+                                                svcr & SVCR_SM_MASK);
 
-                               __this_cpu_write(efi_sm_state, ffr);
+                               /*
+                                * Unless we have FA64 FFR does not
+                                * exist in streaming mode.
+                                */
+                               if (!system_supports_fa64())
+                                       ffr = !(svcr & SVCR_SM_MASK);
                        }
 
                        sve_save_state(sve_state + sve_ffr_offset(sve_max_vl()),
@@ -1964,8 +1969,13 @@ void __efi_fpsimd_end(void)
                                        sysreg_clear_set_s(SYS_SVCR,
                                                           0,
                                                           SVCR_SM_MASK);
+
+                                       /*
+                                        * Unless we have FA64 FFR does not
+                                        * exist in streaming mode.
+                                        */
                                        if (!system_supports_fa64())
-                                               ffr = efi_sm_state;
+                                               ffr = false;
                                }
                        }
 
index 57b30bc..f6b0074 100644 (file)
@@ -244,6 +244,11 @@ static void mte_update_gcr_excl(struct task_struct *task)
                SYS_GCR_EL1);
 }
 
+#ifdef CONFIG_KASAN_HW_TAGS
+/* Only called from assembly, silence sparse */
+void __init kasan_hw_tags_enable(struct alt_instr *alt, __le32 *origptr,
+                                __le32 *updptr, int nr_inst);
+
 void __init kasan_hw_tags_enable(struct alt_instr *alt, __le32 *origptr,
                                 __le32 *updptr, int nr_inst)
 {
@@ -252,6 +257,7 @@ void __init kasan_hw_tags_enable(struct alt_instr *alt, __le32 *origptr,
        if (kasan_hw_tags_enabled())
                *updptr = cpu_to_le32(aarch64_insn_gen_nop());
 }
+#endif
 
 void mte_thread_init_user(void)
 {
index 6719f9e..6099c81 100644 (file)
@@ -9,9 +9,9 @@
 #include <linux/dma-map-ops.h>
 #include <linux/dma-iommu.h>
 #include <xen/xen.h>
-#include <xen/swiotlb-xen.h>
 
 #include <asm/cacheflush.h>
+#include <asm/xen/xen-ops.h>
 
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                enum dma_data_direction dir)
@@ -52,8 +52,5 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
        if (iommu)
                iommu_setup_dma_ops(dev, dma_base, dma_base + size - 1);
 
-#ifdef CONFIG_XEN
-       if (xen_swiotlb_detect())
-               dev->dma_ops = &xen_swiotlb_dma_ops;
-#endif
+       xen_setup_dma_ops(dev);
 }
index 8ab4035..42f2e9a 100644 (file)
@@ -1478,6 +1478,7 @@ skip_init_ctx:
                        bpf_jit_binary_free(header);
                        prog->bpf_func = NULL;
                        prog->jited = 0;
+                       prog->jited_len = 0;
                        goto out_off;
                }
                bpf_jit_binary_lock_ro(header);
index 89bfb74..5c55509 100755 (executable)
@@ -253,7 +253,7 @@ END {
        next
 }
 
-/0b[01]+/ && block = "Enum" {
+/0b[01]+/ && block == "Enum" {
        expect_fields(2)
        val = $1
        name = $2
index be68c1f..c2ce2e6 100644 (file)
@@ -223,7 +223,6 @@ config PPC
        select HAVE_HARDLOCKUP_DETECTOR_PERF    if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
        select HAVE_HW_BREAKPOINT               if PERF_EVENTS && (PPC_BOOK3S || PPC_8xx)
        select HAVE_IOREMAP_PROT
-       select HAVE_IRQ_EXIT_ON_IRQ_STACK
        select HAVE_IRQ_TIME_ACCOUNTING
        select HAVE_KERNEL_GZIP
        select HAVE_KERNEL_LZMA                 if DEFAULT_UIMAGE
@@ -786,7 +785,6 @@ config THREAD_SHIFT
        range 13 15
        default "15" if PPC_256K_PAGES
        default "14" if PPC64
-       default "14" if KASAN
        default "13"
        help
          Used to define the stack size. The default is almost always what you
index 125328d..af58f1e 100644 (file)
 
 #ifdef __KERNEL__
 
-#if defined(CONFIG_VMAP_STACK) && CONFIG_THREAD_SHIFT < PAGE_SHIFT
+#ifdef CONFIG_KASAN
+#define MIN_THREAD_SHIFT       (CONFIG_THREAD_SHIFT + 1)
+#else
+#define MIN_THREAD_SHIFT       CONFIG_THREAD_SHIFT
+#endif
+
+#if defined(CONFIG_VMAP_STACK) && MIN_THREAD_SHIFT < PAGE_SHIFT
 #define THREAD_SHIFT           PAGE_SHIFT
 #else
-#define THREAD_SHIFT           CONFIG_THREAD_SHIFT
+#define THREAD_SHIFT           MIN_THREAD_SHIFT
 #endif
 
 #define THREAD_SIZE            (1 << THREAD_SHIFT)
index 2e2a2a9..f91f0f2 100644 (file)
@@ -37,6 +37,8 @@ KASAN_SANITIZE_paca.o := n
 KASAN_SANITIZE_setup_64.o := n
 KASAN_SANITIZE_mce.o := n
 KASAN_SANITIZE_mce_power.o := n
+KASAN_SANITIZE_udbg.o := n
+KASAN_SANITIZE_udbg_16550.o := n
 
 # we have to be particularly careful in ppc64 to exclude code that
 # runs with translations off, as we cannot access the shadow with
index b62046b..ee04338 100644 (file)
@@ -2158,12 +2158,12 @@ static unsigned long ___get_wchan(struct task_struct *p)
                return 0;
 
        do {
-               sp = *(unsigned long *)sp;
+               sp = READ_ONCE_NOCHECK(*(unsigned long *)sp);
                if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD) ||
                    task_is_running(p))
                        return 0;
                if (count > 0) {
-                       ip = ((unsigned long *)sp)[STACK_FRAME_LR_SAVE];
+                       ip = READ_ONCE_NOCHECK(((unsigned long *)sp)[STACK_FRAME_LR_SAVE]);
                        if (!in_sched_functions(ip))
                                return ip;
                }
index 5dca193..09c4963 100644 (file)
@@ -17,9 +17,13 @@ int ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data)
 
 #ifdef CONFIG_PPC_FPU_REGS
        flush_fp_to_thread(child);
-       if (fpidx < (PT_FPSCR - PT_FPR0))
-               memcpy(data, &child->thread.TS_FPR(fpidx), sizeof(long));
-       else
+       if (fpidx < (PT_FPSCR - PT_FPR0)) {
+               if (IS_ENABLED(CONFIG_PPC32))
+                       // On 32-bit the index we are passed refers to 32-bit words
+                       *data = ((u32 *)child->thread.fp_state.fpr)[fpidx];
+               else
+                       memcpy(data, &child->thread.TS_FPR(fpidx), sizeof(long));
+       } else
                *data = child->thread.fp_state.fpscr;
 #else
        *data = 0;
@@ -39,9 +43,13 @@ int ptrace_put_fpr(struct task_struct *child, int index, unsigned long data)
 
 #ifdef CONFIG_PPC_FPU_REGS
        flush_fp_to_thread(child);
-       if (fpidx < (PT_FPSCR - PT_FPR0))
-               memcpy(&child->thread.TS_FPR(fpidx), &data, sizeof(long));
-       else
+       if (fpidx < (PT_FPSCR - PT_FPR0)) {
+               if (IS_ENABLED(CONFIG_PPC32))
+                       // On 32-bit the index we are passed refers to 32-bit words
+                       ((u32 *)child->thread.fp_state.fpr)[fpidx] = data;
+               else
+                       memcpy(&child->thread.TS_FPR(fpidx), &data, sizeof(long));
+       } else
                child->thread.fp_state.fpscr = data;
 #endif
 
index 4d2dc22..5d7a72b 100644 (file)
@@ -444,4 +444,7 @@ void __init pt_regs_check(void)
         * real registers.
         */
        BUILD_BUG_ON(PT_DSCR < sizeof(struct user_pt_regs) / sizeof(unsigned long));
+
+       // ptrace_get/put_fpr() rely on PPC32 and VSX being incompatible
+       BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC32) && IS_ENABLED(CONFIG_VSX));
 }
index 9bb43aa..a6fce31 100644 (file)
@@ -993,8 +993,8 @@ int rtas_call_reentrant(int token, int nargs, int nret, int *outputs, ...)
  *
  * Return: A pointer to the specified errorlog or NULL if not found.
  */
-struct pseries_errorlog *get_pseries_errorlog(struct rtas_error_log *log,
-                                             uint16_t section_id)
+noinstr struct pseries_errorlog *get_pseries_errorlog(struct rtas_error_log *log,
+                                                     uint16_t section_id)
 {
        struct rtas_ext_event_log_v6 *ext_log =
                (struct rtas_ext_event_log_v6 *)log->buffer;
index d85fa9f..80f5472 100644 (file)
@@ -224,7 +224,7 @@ void crash_kexec_secondary(struct pt_regs *regs)
 
 /* wait for all the CPUs to hit real mode but timeout if they don't come in */
 #if defined(CONFIG_SMP) && defined(CONFIG_PPC64)
-static void __maybe_unused crash_kexec_wait_realmode(int cpu)
+noinstr static void __maybe_unused crash_kexec_wait_realmode(int cpu)
 {
        unsigned int msecs;
        int i;
index 1f3f9fe..0d04f9d 100644 (file)
@@ -19,7 +19,6 @@
 #include <asm/cacheflush.h>
 #include <asm/kdump.h>
 #include <mm/mmu_decl.h>
-#include <generated/compile.h>
 #include <generated/utsrelease.h>
 
 struct regions {
@@ -37,10 +36,6 @@ struct regions {
        int reserved_mem_size_cells;
 };
 
-/* Simplified build-specific string for starting entropy. */
-static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
-               LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
-
 struct regions __initdata regions;
 
 static __init void kaslr_get_cmdline(void *fdt)
@@ -71,7 +66,8 @@ static unsigned long __init get_boot_seed(void *fdt)
 {
        unsigned long hash = 0;
 
-       hash = rotate_xor(hash, build_str, sizeof(build_str));
+       /* build-specific string for starting entropy. */
+       hash = rotate_xor(hash, linux_banner, strlen(linux_banner));
        hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
 
        return hash;
index 6488b38..19f0fc5 100644 (file)
@@ -4,6 +4,7 @@
 # in particular, idle code runs a bunch of things in real mode
 KASAN_SANITIZE_idle.o := n
 KASAN_SANITIZE_pci-ioda.o := n
+KASAN_SANITIZE_pci-ioda-tce.o := n
 # pnv_machine_check_early
 KASAN_SANITIZE_setup.o := n
 
index 181b855..82cae08 100644 (file)
@@ -465,6 +465,9 @@ static int papr_scm_pmu_check_events(struct papr_scm_priv *p, struct nvdimm_pmu
        u32 available_events;
        int index, rc = 0;
 
+       if (!p->stat_buffer_len)
+               return -ENOENT;
+
        available_events = (p->stat_buffer_len  - sizeof(struct papr_scm_perf_stats))
                        / sizeof(struct papr_scm_perf_stat);
        if (available_events == 0)
index b1a88f6..91c0b80 100644 (file)
@@ -125,6 +125,7 @@ config S390
        select CLONE_BACKWARDS2
        select DMA_OPS if PCI
        select DYNAMIC_FTRACE if FUNCTION_TRACER
+       select GCC12_NO_ARRAY_BOUNDS
        select GENERIC_ALLOCATOR
        select GENERIC_CPU_AUTOPROBE
        select GENERIC_CPU_VULNERABILITIES
@@ -768,7 +769,6 @@ menu "Virtualization"
 config PROTECTED_VIRTUALIZATION_GUEST
        def_bool n
        prompt "Protected virtualization guest support"
-       select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
        help
          Select this option, if you want to be able to run this
          kernel as a protected virtualization KVM guest.
index d73611b..495c68a 100644 (file)
@@ -32,15 +32,7 @@ KBUILD_CFLAGS_DECOMPRESSOR += -fno-stack-protector
 KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member)
 KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g)
 KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,))
-
-ifdef CONFIG_CC_IS_GCC
-       ifeq ($(call cc-ifversion, -ge, 1200, y), y)
-               ifeq ($(call cc-ifversion, -lt, 1300, y), y)
-                       KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds)
-                       KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, array-bounds)
-               endif
-       endif
-endif
+KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_CC_NO_ARRAY_BOUNDS),-Wno-array-bounds)
 
 UTS_MACHINE    := s390x
 STACK_SIZE     := $(if $(CONFIG_KASAN),65536,16384)
index 6fb6bf6..6a0ac00 100644 (file)
@@ -31,6 +31,7 @@
 #include <linux/cma.h>
 #include <linux/gfp.h>
 #include <linux/dma-direct.h>
+#include <linux/platform-feature.h>
 #include <asm/processor.h>
 #include <linux/uaccess.h>
 #include <asm/pgalloc.h>
@@ -168,22 +169,14 @@ bool force_dma_unencrypted(struct device *dev)
        return is_prot_virt_guest();
 }
 
-#ifdef CONFIG_ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
-
-int arch_has_restricted_virtio_memory_access(void)
-{
-       return is_prot_virt_guest();
-}
-EXPORT_SYMBOL(arch_has_restricted_virtio_memory_access);
-
-#endif
-
 /* protected virtualization */
 static void pv_init(void)
 {
        if (!is_prot_virt_guest())
                return;
 
+       platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+
        /* make sure bounce buffers are shared */
        swiotlb_init(true, SWIOTLB_FORCE | SWIOTLB_VERBOSE);
        swiotlb_update_mem_attributes();
index 9783ebc..be0b95e 100644 (file)
@@ -1542,7 +1542,6 @@ config X86_CPA_STATISTICS
 config X86_MEM_ENCRYPT
        select ARCH_HAS_FORCE_DMA_UNENCRYPTED
        select DYNAMIC_PHYSICAL_MASK
-       select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
        def_bool n
 
 config AMD_MEM_ENCRYPT
index 959d66b..3a240a6 100644 (file)
@@ -653,6 +653,7 @@ struct kvm_vcpu_arch {
        u64 ia32_misc_enable_msr;
        u64 smbase;
        u64 smi_count;
+       bool at_instruction_boundary;
        bool tpr_access_reporting;
        bool xsaves_enabled;
        bool xfd_no_write_intercept;
@@ -1300,6 +1301,8 @@ struct kvm_vcpu_stat {
        u64 nested_run;
        u64 directed_yield_attempted;
        u64 directed_yield_successful;
+       u64 preemption_reported;
+       u64 preemption_other;
        u64 guest_mode;
 };
 
index 35f222a..913e593 100644 (file)
@@ -439,7 +439,7 @@ do {                                                                        \
                       [ptr] "+m" (*_ptr),                              \
                       [old] "+a" (__old)                               \
                     : [new] ltype (__new)                              \
-                    : "memory", "cc");                                 \
+                    : "memory");                                       \
        if (unlikely(__err))                                            \
                goto label;                                             \
        if (unlikely(!success))                                         \
index f465368..e826ee9 100644 (file)
@@ -5179,7 +5179,7 @@ static void __kvm_mmu_free_obsolete_roots(struct kvm *kvm, struct kvm_mmu *mmu)
                roots_to_free |= KVM_MMU_ROOT_CURRENT;
 
        for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
-               if (is_obsolete_root(kvm, mmu->root.hpa))
+               if (is_obsolete_root(kvm, mmu->prev_roots[i].hpa))
                        roots_to_free |= KVM_MMU_ROOT_PREVIOUS(i);
        }
 
index 6d3b3e5..ee4802d 100644 (file)
@@ -145,6 +145,15 @@ static bool try_step_up(struct tdp_iter *iter)
        return true;
 }
 
+/*
+ * Step the iterator back up a level in the paging structure. Should only be
+ * used when the iterator is below the root level.
+ */
+void tdp_iter_step_up(struct tdp_iter *iter)
+{
+       WARN_ON(!try_step_up(iter));
+}
+
 /*
  * Step to the next SPTE in a pre-order traversal of the paging structure.
  * To get to the next SPTE, the iterator either steps down towards the goal
index f0af385..adfca0c 100644 (file)
@@ -114,5 +114,6 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
                    int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
 void tdp_iter_restart(struct tdp_iter *iter);
+void tdp_iter_step_up(struct tdp_iter *iter);
 
 #endif /* __KVM_X86_MMU_TDP_ITER_H */
index 841feaa..7b9265d 100644 (file)
@@ -1742,12 +1742,12 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
        gfn_t start = slot->base_gfn;
        gfn_t end = start + slot->npages;
        struct tdp_iter iter;
+       int max_mapping_level;
        kvm_pfn_t pfn;
 
        rcu_read_lock();
 
        tdp_root_for_each_pte(iter, root, start, end) {
-retry:
                if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
                        continue;
 
@@ -1755,15 +1755,41 @@ retry:
                    !is_last_spte(iter.old_spte, iter.level))
                        continue;
 
+               /*
+                * This is a leaf SPTE. Check if the PFN it maps can
+                * be mapped at a higher level.
+                */
                pfn = spte_to_pfn(iter.old_spte);
-               if (kvm_is_reserved_pfn(pfn) ||
-                   iter.level >= kvm_mmu_max_mapping_level(kvm, slot, iter.gfn,
-                                                           pfn, PG_LEVEL_NUM))
+
+               if (kvm_is_reserved_pfn(pfn))
                        continue;
 
+               max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
+                               iter.gfn, pfn, PG_LEVEL_NUM);
+
+               WARN_ON(max_mapping_level < iter.level);
+
+               /*
+                * If this page is already mapped at the highest
+                * viable level, there's nothing more to do.
+                */
+               if (max_mapping_level == iter.level)
+                       continue;
+
+               /*
+                * The page can be remapped at a higher level, so step
+                * up to zap the parent SPTE.
+                */
+               while (max_mapping_level > iter.level)
+                       tdp_iter_step_up(&iter);
+
                /* Note, a successful atomic zap also does a remote TLB flush. */
-               if (tdp_mmu_zap_spte_atomic(kvm, &iter))
-                       goto retry;
+               tdp_mmu_zap_spte_atomic(kvm, &iter);
+
+               /*
+                * If the atomic zap fails, the iter will recurse back into
+                * the same subtree to retry.
+                */
        }
 
        rcu_read_unlock();
index bed5e16..3361258 100644 (file)
@@ -982,7 +982,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
        if (svm->tsc_ratio_msr != kvm_default_tsc_scaling_ratio) {
                WARN_ON(!svm->tsc_scaling_enabled);
                vcpu->arch.tsc_scaling_ratio = vcpu->arch.l1_tsc_scaling_ratio;
-               svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio);
+               __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
        }
 
        svm->nested.ctl.nested_cr3 = 0;
@@ -1387,7 +1387,7 @@ void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu)
        vcpu->arch.tsc_scaling_ratio =
                kvm_calc_nested_tsc_multiplier(vcpu->arch.l1_tsc_scaling_ratio,
                                               svm->tsc_ratio_msr);
-       svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio);
+       __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
 }
 
 /* Inverse operation of nested_copy_vmcb_control_to_cache(). asid is copied too. */
index 200045f..1dc02cd 100644 (file)
@@ -465,11 +465,24 @@ static int has_svm(void)
        return 1;
 }
 
+void __svm_write_tsc_multiplier(u64 multiplier)
+{
+       preempt_disable();
+
+       if (multiplier == __this_cpu_read(current_tsc_ratio))
+               goto out;
+
+       wrmsrl(MSR_AMD64_TSC_RATIO, multiplier);
+       __this_cpu_write(current_tsc_ratio, multiplier);
+out:
+       preempt_enable();
+}
+
 static void svm_hardware_disable(void)
 {
        /* Make sure we clean up behind us */
        if (tsc_scaling)
-               wrmsrl(MSR_AMD64_TSC_RATIO, SVM_TSC_RATIO_DEFAULT);
+               __svm_write_tsc_multiplier(SVM_TSC_RATIO_DEFAULT);
 
        cpu_svm_disable();
 
@@ -515,8 +528,7 @@ static int svm_hardware_enable(void)
                 * Set the default value, even if we don't use TSC scaling,
                 * to avoid leaving a stale value in the MSR
                 */
-               wrmsrl(MSR_AMD64_TSC_RATIO, SVM_TSC_RATIO_DEFAULT);
-               __this_cpu_write(current_tsc_ratio, SVM_TSC_RATIO_DEFAULT);
+               __svm_write_tsc_multiplier(SVM_TSC_RATIO_DEFAULT);
        }
 
 
@@ -999,11 +1011,12 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
        vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
 }
 
-void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
+static void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
 {
-       wrmsrl(MSR_AMD64_TSC_RATIO, multiplier);
+       __svm_write_tsc_multiplier(multiplier);
 }
 
+
 /* Evaluate instruction intercepts that depend on guest CPUID features. */
 static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu,
                                              struct vcpu_svm *svm)
@@ -1363,13 +1376,8 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
                sev_es_prepare_switch_to_guest(hostsa);
        }
 
-       if (tsc_scaling) {
-               u64 tsc_ratio = vcpu->arch.tsc_scaling_ratio;
-               if (tsc_ratio != __this_cpu_read(current_tsc_ratio)) {
-                       __this_cpu_write(current_tsc_ratio, tsc_ratio);
-                       wrmsrl(MSR_AMD64_TSC_RATIO, tsc_ratio);
-               }
-       }
+       if (tsc_scaling)
+               __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
 
        if (likely(tsc_aux_uret_slot >= 0))
                kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull);
@@ -4255,6 +4263,8 @@ out:
 
 static void svm_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 {
+       if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_INTR)
+               vcpu->arch.at_instruction_boundary = true;
 }
 
 static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
index 21c5460..500348c 100644 (file)
@@ -590,7 +590,7 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
                               bool has_error_code, u32 error_code);
 int nested_svm_exit_special(struct vcpu_svm *svm);
 void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu);
-void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier);
+void __svm_write_tsc_multiplier(u64 multiplier);
 void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm,
                                       struct vmcb_control_area *control);
 void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
index a07e8cd..9bd86ec 100644 (file)
@@ -6547,6 +6547,7 @@ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
                return;
 
        handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc));
+       vcpu->arch.at_instruction_boundary = true;
 }
 
 static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
index e9473c7..03fbfbb 100644 (file)
@@ -296,6 +296,8 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
        STATS_DESC_COUNTER(VCPU, nested_run),
        STATS_DESC_COUNTER(VCPU, directed_yield_attempted),
        STATS_DESC_COUNTER(VCPU, directed_yield_successful),
+       STATS_DESC_COUNTER(VCPU, preemption_reported),
+       STATS_DESC_COUNTER(VCPU, preemption_other),
        STATS_DESC_ICOUNTER(VCPU, guest_mode)
 };
 
@@ -4625,6 +4627,19 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
        struct kvm_memslots *slots;
        static const u8 preempted = KVM_VCPU_PREEMPTED;
 
+       /*
+        * The vCPU can be marked preempted if and only if the VM-Exit was on
+        * an instruction boundary and will not trigger guest emulation of any
+        * kind (see vcpu_run).  Vendor specific code controls (conservatively)
+        * when this is true, for example allowing the vCPU to be marked
+        * preempted if and only if the VM-Exit was due to a host interrupt.
+        */
+       if (!vcpu->arch.at_instruction_boundary) {
+               vcpu->stat.preemption_other++;
+               return;
+       }
+
+       vcpu->stat.preemption_reported++;
        if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
                return;
 
@@ -4654,19 +4669,21 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
        int idx;
 
-       if (vcpu->preempted && !vcpu->arch.guest_state_protected)
-               vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
+       if (vcpu->preempted) {
+               if (!vcpu->arch.guest_state_protected)
+                       vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
 
-       /*
-        * Take the srcu lock as memslots will be accessed to check the gfn
-        * cache generation against the memslots generation.
-        */
-       idx = srcu_read_lock(&vcpu->kvm->srcu);
-       if (kvm_xen_msr_enabled(vcpu->kvm))
-               kvm_xen_runstate_set_preempted(vcpu);
-       else
-               kvm_steal_time_set_preempted(vcpu);
-       srcu_read_unlock(&vcpu->kvm->srcu, idx);
+               /*
+                * Take the srcu lock as memslots will be accessed to check the gfn
+                * cache generation against the memslots generation.
+                */
+               idx = srcu_read_lock(&vcpu->kvm->srcu);
+               if (kvm_xen_msr_enabled(vcpu->kvm))
+                       kvm_xen_runstate_set_preempted(vcpu);
+               else
+                       kvm_steal_time_set_preempted(vcpu);
+               srcu_read_unlock(&vcpu->kvm->srcu, idx);
+       }
 
        static_call(kvm_x86_vcpu_put)(vcpu);
        vcpu->arch.last_host_tsc = rdtsc();
@@ -10422,6 +10439,13 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
        vcpu->arch.l1tf_flush_l1d = true;
 
        for (;;) {
+               /*
+                * If another guest vCPU requests a PV TLB flush in the middle
+                * of instruction emulation, the rest of the emulation could
+                * use a stale page translation. Assume that any code after
+                * this point can start executing an instruction.
+                */
+               vcpu->arch.at_instruction_boundary = false;
                if (kvm_vcpu_running(vcpu)) {
                        r = vcpu_enter_guest(vcpu);
                } else {
index ee5c4ae..532a535 100644 (file)
@@ -159,8 +159,10 @@ static inline void kvm_xen_runstate_set_preempted(struct kvm_vcpu *vcpu)
         * behalf of the vCPU. Only if the VMM does actually block
         * does it need to enter RUNSTATE_blocked.
         */
-       if (vcpu->preempted)
-               kvm_xen_update_runstate_guest(vcpu, RUNSTATE_runnable);
+       if (WARN_ON_ONCE(!vcpu->preempted))
+               return;
+
+       kvm_xen_update_runstate_guest(vcpu, RUNSTATE_runnable);
 }
 
 /* 32-bit compatibility definitions, also used natively in 32-bit build */
index 11350e2..9f27e14 100644 (file)
@@ -12,7 +12,6 @@
 #include <linux/swiotlb.h>
 #include <linux/cc_platform.h>
 #include <linux/mem_encrypt.h>
-#include <linux/virtio_config.h>
 
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
@@ -87,9 +86,3 @@ void __init mem_encrypt_init(void)
 
        print_mem_encrypt_feature_info();
 }
-
-int arch_has_restricted_virtio_memory_access(void)
-{
-       return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);
-}
-EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);
index e8f7953..f6d038e 100644 (file)
@@ -21,6 +21,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/virtio_config.h>
 #include <linux/cc_platform.h>
+#include <linux/platform-feature.h>
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
@@ -242,6 +243,9 @@ void __init sev_setup_arch(void)
        size = total_mem * 6 / 100;
        size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
        swiotlb_adjust_size(size);
+
+       /* Set restricted memory access for virtio. */
+       platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
 }
 
 static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
index 517a9d8..8b71b1d 100644 (file)
@@ -195,6 +195,8 @@ static void __init xen_hvm_guest_init(void)
        if (xen_pv_domain())
                return;
 
+       xen_set_restricted_virtio_memory_access();
+
        init_hvm_pv_info();
 
        reserve_shared_info();
index f33a442..e3297b1 100644 (file)
@@ -109,6 +109,8 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
 
 static void __init xen_pv_init_platform(void)
 {
+       xen_set_restricted_virtio_memory_access();
+
        populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
 
        set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
index bb904f9..cb1a9da 100644 (file)
@@ -18,7 +18,7 @@ CFLAGS_blacklist_hashes.o += -I$(srctree)
 
 targets += blacklist_hashes_checked
 $(obj)/blacklist_hashes_checked: $(SYSTEM_BLACKLIST_HASH_LIST_SRCPREFIX)$(SYSTEM_BLACKLIST_HASH_LIST_FILENAME) scripts/check-blacklist-hashes.awk FORCE
-       $(call if_changed,check_blacklist_hashes,$(SYSTEM_BLACKLIST_HASH_LIST_SRCPREFIX)$(CONFIG_SYSTEM_BLACKLIST_HASH_LIST))
+       $(call if_changed,check_blacklist_hashes,$(SYSTEM_BLACKLIST_HASH_LIST_SRCPREFIX)$(CONFIG_SYSTEM_BLACKLIST_HASH_LIST))
 obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_hashes.o
 else
 obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_nohashes.o
index f7ef786..8c1fb9a 100644 (file)
 #include <openssl/err.h>
 #include <openssl/engine.h>
 
+/*
+ * OpenSSL 3.0 deprecates the OpenSSL ENGINE API.
+ *
+ * Remove this if/when that API is no longer used.
+ */
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
+
 #define PKEY_ID_PKCS7 2
 
 static __attribute__((noreturn))
index 40e8164..9601fa9 100644 (file)
@@ -2010,16 +2010,16 @@ retry:
        return err_mask;
 }
 
-static bool ata_log_supported(struct ata_device *dev, u8 log)
+static int ata_log_supported(struct ata_device *dev, u8 log)
 {
        struct ata_port *ap = dev->link->ap;
 
        if (dev->horkage & ATA_HORKAGE_NO_LOG_DIR)
-               return false;
+               return 0;
 
        if (ata_read_log_page(dev, ATA_LOG_DIRECTORY, 0, ap->sector_buf, 1))
-               return false;
-       return get_unaligned_le16(&ap->sector_buf[log * 2]) ? true : false;
+               return 0;
+       return get_unaligned_le16(&ap->sector_buf[log * 2]);
 }
 
 static bool ata_identify_page_supported(struct ata_device *dev, u8 page)
@@ -2455,15 +2455,20 @@ static void ata_dev_config_cpr(struct ata_device *dev)
        struct ata_cpr_log *cpr_log = NULL;
        u8 *desc, *buf = NULL;
 
-       if (ata_id_major_version(dev->id) < 11 ||
-           !ata_log_supported(dev, ATA_LOG_CONCURRENT_POSITIONING_RANGES))
+       if (ata_id_major_version(dev->id) < 11)
+               goto out;
+
+       buf_len = ata_log_supported(dev, ATA_LOG_CONCURRENT_POSITIONING_RANGES);
+       if (buf_len == 0)
                goto out;
 
        /*
         * Read the concurrent positioning ranges log (0x47). We can have at
-        * most 255 32B range descriptors plus a 64B header.
+        * most 255 32B range descriptors plus a 64B header. This log varies in
+        * size, so use the size reported in the GPL directory. Reading beyond
+        * the supported length will result in an error.
         */
-       buf_len = (64 + 255 * 32 + 511) & ~511;
+       buf_len <<= 9;
        buf = kzalloc(buf_len, GFP_KERNEL);
        if (!buf)
                goto out;
@@ -5462,7 +5467,7 @@ struct ata_host *ata_host_alloc_pinfo(struct device *dev,
                                      const struct ata_port_info * const * ppi,
                                      int n_ports)
 {
-       const struct ata_port_info *pi;
+       const struct ata_port_info *pi = &ata_dummy_port_info;
        struct ata_host *host;
        int i, j;
 
@@ -5470,7 +5475,7 @@ struct ata_host *ata_host_alloc_pinfo(struct device *dev,
        if (!host)
                return NULL;
 
-       for (i = 0, j = 0, pi = NULL; i < host->n_ports; i++) {
+       for (i = 0, j = 0; i < host->n_ports; i++) {
                struct ata_port *ap = host->ports[i];
 
                if (ppi[j])
index 42cecf9..86dbb1c 100644 (file)
@@ -2125,7 +2125,7 @@ static unsigned int ata_scsiop_inq_b9(struct ata_scsi_args *args, u8 *rbuf)
 
        /* SCSI Concurrent Positioning Ranges VPD page: SBC-5 rev 1 or later */
        rbuf[1] = 0xb9;
-       put_unaligned_be16(64 + (int)cpr_log->nr_cpr * 32 - 4, &rbuf[3]);
+       put_unaligned_be16(64 + (int)cpr_log->nr_cpr * 32 - 4, &rbuf[2]);
 
        for (i = 0; i < cpr_log->nr_cpr; i++, desc += 32) {
                desc[0] = cpr_log->cpr[i].num;
index ca12985..c380278 100644 (file)
@@ -196,7 +196,7 @@ static struct {
        { XFER_PIO_0,                   "XFER_PIO_0" },
        { XFER_PIO_SLOW,                "XFER_PIO_SLOW" }
 };
-ata_bitfield_name_match(xfer,ata_xfer_names)
+ata_bitfield_name_search(xfer, ata_xfer_names)
 
 /*
  * ATA Port attributes
index 6b5ed30..35608a0 100644 (file)
@@ -856,12 +856,14 @@ static int octeon_cf_probe(struct platform_device *pdev)
                                int i;
                                res_dma = platform_get_resource(dma_dev, IORESOURCE_MEM, 0);
                                if (!res_dma) {
+                                       put_device(&dma_dev->dev);
                                        of_node_put(dma_node);
                                        return -EINVAL;
                                }
                                cf_port->dma_base = (u64)devm_ioremap(&pdev->dev, res_dma->start,
                                                                         resource_size(res_dma));
                                if (!cf_port->dma_base) {
+                                       put_device(&dma_dev->dev);
                                        of_node_put(dma_node);
                                        return -EINVAL;
                                }
@@ -871,6 +873,7 @@ static int octeon_cf_probe(struct platform_device *pdev)
                                        irq = i;
                                        irq_handler = octeon_cf_interrupt;
                                }
+                               put_device(&dma_dev->dev);
                        }
                        of_node_put(dma_node);
                }
index 67abf8d..6b6d46e 100644 (file)
@@ -1918,9 +1918,6 @@ int amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(struct amdgpu_device *adev,
                return -EINVAL;
        }
 
-       /* delete kgd_mem from kfd_bo_list to avoid re-validating
-        * this BO in BO's restoring after eviction.
-        */
        mutex_lock(&mem->process_info->lock);
 
        ret = amdgpu_bo_reserve(bo, true);
@@ -1943,7 +1940,6 @@ int amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(struct amdgpu_device *adev,
 
        amdgpu_amdkfd_remove_eviction_fence(
                bo, mem->process_info->eviction_fence);
-       list_del_init(&mem->validate_list.head);
 
        if (size)
                *size = amdgpu_bo_size(bo);
@@ -2512,12 +2508,15 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef)
        process_info->eviction_fence = new_fence;
        *ef = dma_fence_get(&new_fence->base);
 
-       /* Attach new eviction fence to all BOs */
+       /* Attach new eviction fence to all BOs except pinned ones */
        list_for_each_entry(mem, &process_info->kfd_bo_list,
-               validate_list.head)
+               validate_list.head) {
+               if (mem->bo->tbo.pin_count)
+                       continue;
+
                amdgpu_bo_fence(mem->bo,
                        &process_info->eviction_fence->base, true);
-
+       }
        /* Attach eviction fence to PD / PT BOs */
        list_for_each_entry(peer_vm, &process_info->vm_list_head,
                            vm_list_node) {
index ede2fa5..1669915 100644 (file)
@@ -594,17 +594,20 @@ int amdgpu_get_gfx_off_status(struct amdgpu_device *adev, uint32_t *value)
 int amdgpu_gfx_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *ras_block)
 {
        int r;
-       r = amdgpu_ras_block_late_init(adev, ras_block);
-       if (r)
-               return r;
 
        if (amdgpu_ras_is_supported(adev, ras_block->block)) {
                if (!amdgpu_persistent_edc_harvesting_supported(adev))
                        amdgpu_ras_reset_error_status(adev, AMDGPU_RAS_BLOCK__GFX);
 
+               r = amdgpu_ras_block_late_init(adev, ras_block);
+               if (r)
+                       return r;
+
                r = amdgpu_irq_get(adev, &adev->gfx.cp_ecc_error_irq, 0);
                if (r)
                        goto late_fini;
+       } else {
+               amdgpu_ras_feature_enable_on_boot(adev, ras_block, 0);
        }
 
        return 0;
index 798c562..aebc384 100644 (file)
@@ -518,6 +518,8 @@ void amdgpu_gmc_tmz_set(struct amdgpu_device *adev)
        case IP_VERSION(9, 1, 0):
        /* RENOIR looks like RAVEN */
        case IP_VERSION(9, 3, 0):
+       /* GC 10.3.7 */
+       case IP_VERSION(10, 3, 7):
                if (amdgpu_tmz == 0) {
                        adev->gmc.tmz_enabled = false;
                        dev_info(adev->dev,
@@ -540,8 +542,6 @@ void amdgpu_gmc_tmz_set(struct amdgpu_device *adev)
        case IP_VERSION(10, 3, 1):
        /* YELLOW_CARP*/
        case IP_VERSION(10, 3, 3):
-       /* GC 10.3.7 */
-       case IP_VERSION(10, 3, 7):
                /* Don't enable it by default yet.
                 */
                if (amdgpu_tmz < 1) {
index 2de9309..dac202a 100644 (file)
@@ -197,6 +197,13 @@ static ssize_t amdgpu_ras_debugfs_read(struct file *f, char __user *buf,
        if (amdgpu_ras_query_error_status(obj->adev, &info))
                return -EINVAL;
 
+       /* Hardware counters are reset automatically after the query on Vega20 and Arcturus */
+       if (obj->adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
+           obj->adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4)) {
+               if (amdgpu_ras_reset_error_status(obj->adev, info.head.block))
+                       dev_warn(obj->adev->dev, "Failed to reset error counter and error status");
+       }
+
        s = snprintf(val, sizeof(val), "%s: %lu\n%s: %lu\n",
                        "ue", info.ue_count,
                        "ce", info.ce_count);
@@ -550,9 +557,10 @@ static ssize_t amdgpu_ras_sysfs_read(struct device *dev,
        if (amdgpu_ras_query_error_status(obj->adev, &info))
                return -EINVAL;
 
-       if (obj->adev->asic_type == CHIP_ALDEBARAN) {
+       if (obj->adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
+           obj->adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4)) {
                if (amdgpu_ras_reset_error_status(obj->adev, info.head.block))
-                       DRM_WARN("Failed to reset error counter and error status");
+                       dev_warn(obj->adev->dev, "Failed to reset error counter and error status");
        }
 
        return sysfs_emit(buf, "%s: %lu\n%s: %lu\n", "ue", info.ue_count,
@@ -1027,9 +1035,6 @@ int amdgpu_ras_query_error_status(struct amdgpu_device *adev,
                }
        }
 
-       if (!amdgpu_persistent_edc_harvesting_supported(adev))
-               amdgpu_ras_reset_error_status(adev, info->head.block);
-
        return 0;
 }
 
@@ -1149,6 +1154,12 @@ int amdgpu_ras_query_error_count(struct amdgpu_device *adev,
                if (res)
                        return res;
 
+               if (adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
+                   adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4)) {
+                       if (amdgpu_ras_reset_error_status(adev, info.head.block))
+                               dev_warn(adev->dev, "Failed to reset error counter and error status");
+               }
+
                ce += info.ce_count;
                ue += info.ue_count;
        }
@@ -1792,6 +1803,12 @@ static void amdgpu_ras_log_on_err_counter(struct amdgpu_device *adev)
                        continue;
 
                amdgpu_ras_query_error_status(adev, &info);
+
+               if (adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
+                   adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4)) {
+                       if (amdgpu_ras_reset_error_status(adev, info.head.block))
+                               dev_warn(adev->dev, "Failed to reset error counter and error status");
+               }
        }
 }
 
@@ -2278,8 +2295,9 @@ static void amdgpu_ras_check_supported(struct amdgpu_device *adev)
            !amdgpu_ras_asic_supported(adev))
                return;
 
-       if (!(amdgpu_sriov_vf(adev) &&
-               (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2))))
+       /* If the driver runs on the SR-IOV guest side, only enable RAS for Aldebaran */
+       if (amdgpu_sriov_vf(adev) &&
+               adev->ip_versions[MP1_HWIP][0] != IP_VERSION(13, 0, 2))
                return;
 
        if (!adev->gmc.xgmi.connected_to_cpu) {
index 2ceeaa4..dc76d2b 100644 (file)
@@ -679,6 +679,7 @@ int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
 {
        struct amdgpu_vm_update_params params;
        struct amdgpu_vm_bo_base *entry;
+       bool flush_tlb_needed = false;
        int r, idx;
 
        if (list_empty(&vm->relocated))
@@ -697,6 +698,9 @@ int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
                goto error;
 
        list_for_each_entry(entry, &vm->relocated, vm_status) {
+               /* vm_flush_needed after updating moved PDEs */
+               flush_tlb_needed |= entry->moved;
+
                r = amdgpu_vm_pde_update(&params, entry);
                if (r)
                        goto error;
@@ -706,8 +710,8 @@ int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
        if (r)
                goto error;
 
-       /* vm_flush_needed after updating PDEs */
-       atomic64_inc(&vm->tlb_seq);
+       if (flush_tlb_needed)
+               atomic64_inc(&vm->tlb_seq);
 
        while (!list_empty(&vm->relocated)) {
                entry = list_first_entry(&vm->relocated,
@@ -789,6 +793,11 @@ int amdgpu_vm_update_range(struct amdgpu_device *adev, struct amdgpu_vm *vm,
        flush_tlb |= adev->gmc.xgmi.num_physical_nodes &&
                     adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 0);
 
+       /*
+        * On GFX8 and older any 8 PTE block with a valid bit set enters the TLB
+        */
+       flush_tlb |= adev->ip_versions[GC_HWIP][0] < IP_VERSION(9, 0, 0);
+
        memset(&params, 0, sizeof(params));
        params.adev = adev;
        params.vm = vm;
index 8c0a3fc..a4a6751 100644 (file)
@@ -1096,6 +1096,7 @@ static void gfx_v11_0_read_wave_data(struct amdgpu_device *adev, uint32_t simd,
        dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_STS2);
        dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_DBG1);
        dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_M0);
+       dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_MODE);
 }
 
 static void gfx_v11_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t simd,
@@ -1316,7 +1317,7 @@ static void gfx_v11_0_rlc_backdoor_autoload_copy_ucode(struct amdgpu_device *ade
                memset(ptr + toc_offset + fw_size, 0, toc_fw_size - fw_size);
 
        if ((id != SOC21_FIRMWARE_ID_RS64_PFP) && (id != SOC21_FIRMWARE_ID_RS64_ME))
-               *(uint64_t *)fw_autoload_mask |= 1 << id;
+               *(uint64_t *)fw_autoload_mask |= 1ULL << id;
 }
 
 static void gfx_v11_0_rlc_backdoor_autoload_copy_toc_ucode(struct amdgpu_device *adev,
@@ -1983,7 +1984,7 @@ static int gfx_v11_0_init_csb(struct amdgpu_device *adev)
        return 0;
 }
 
-void gfx_v11_0_rlc_stop(struct amdgpu_device *adev)
+static void gfx_v11_0_rlc_stop(struct amdgpu_device *adev)
 {
        u32 tmp = RREG32_SOC15(GC, 0, regRLC_CNTL);
 
@@ -6028,6 +6029,7 @@ static void gfx_v11_0_handle_priv_fault(struct amdgpu_device *adev,
                break;
        default:
                BUG();
+               break;
        }
 }
 
index a0c0b7d..7f4b480 100644 (file)
@@ -638,6 +638,12 @@ static int gmc_v11_0_mc_init(struct amdgpu_device *adev)
        adev->gmc.aper_base = pci_resource_start(adev->pdev, 0);
        adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
 
+#ifdef CONFIG_X86_64
+       if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) {
+               adev->gmc.aper_base = adev->mmhub.funcs->get_mc_fb_offset(adev);
+               adev->gmc.aper_size = adev->gmc.real_vram_size;
+       }
+#endif
        /* In case the PCI BAR is larger than the actual amount of vram */
        adev->gmc.visible_vram_size = adev->gmc.aper_size;
        if (adev->gmc.visible_vram_size > adev->gmc.real_vram_size)
index 5d2dfef..d63d3f2 100644 (file)
@@ -299,7 +299,7 @@ static const struct imu_rlc_ram_golden imu_rlc_ram_golden_11_0_2[] =
        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regCPG_PSP_DEBUG, CPG_PSP_DEBUG__GPA_OVERRIDE_MASK, 0)
 };
 
-void program_imu_rlc_ram(struct amdgpu_device *adev,
+static void program_imu_rlc_ram(struct amdgpu_device *adev,
                                const struct imu_rlc_ram_golden *regs,
                                const u32 array_size)
 {
index d2722ad..f3c1af5 100644 (file)
@@ -535,6 +535,10 @@ void jpeg_v2_0_dec_ring_emit_ib(struct amdgpu_ring *ring,
 {
        unsigned vmid = AMDGPU_JOB_GET_VMID(job);
 
+       amdgpu_ring_write(ring, PACKETJ(mmUVD_JPEG_IH_CTRL_INTERNAL_OFFSET,
+               0, 0, PACKETJ_TYPE0));
+       amdgpu_ring_write(ring, (vmid << JPEG_IH_CTRL__IH_VMID__SHIFT));
+
        amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET,
                0, 0, PACKETJ_TYPE0));
        amdgpu_ring_write(ring, (vmid | (vmid << 4)));
@@ -768,7 +772,7 @@ static const struct amdgpu_ring_funcs jpeg_v2_0_dec_ring_vm_funcs = {
                8 + /* jpeg_v2_0_dec_ring_emit_vm_flush */
                18 + 18 + /* jpeg_v2_0_dec_ring_emit_fence x2 vm fence */
                8 + 16,
-       .emit_ib_size = 22, /* jpeg_v2_0_dec_ring_emit_ib */
+       .emit_ib_size = 24, /* jpeg_v2_0_dec_ring_emit_ib */
        .emit_ib = jpeg_v2_0_dec_ring_emit_ib,
        .emit_fence = jpeg_v2_0_dec_ring_emit_fence,
        .emit_vm_flush = jpeg_v2_0_dec_ring_emit_vm_flush,
index 1a03baa..654e43e 100644 (file)
@@ -41,6 +41,7 @@
 #define mmUVD_JRBC_RB_REF_DATA_INTERNAL_OFFSET                         0x4084
 #define mmUVD_JRBC_STATUS_INTERNAL_OFFSET                              0x4089
 #define mmUVD_JPEG_PITCH_INTERNAL_OFFSET                               0x401f
+#define mmUVD_JPEG_IH_CTRL_INTERNAL_OFFSET                             0x4149
 
 #define JRBC_DEC_EXTERNAL_REG_WRITE_ADDR                               0x18000
 
index fcf5194..7eee004 100644
@@ -541,7 +541,7 @@ static void mes_v11_0_enable(struct amdgpu_device *adev, bool enable)
 
 /* This function is for backdoor MES firmware */
 static int mes_v11_0_load_microcode(struct amdgpu_device *adev,
-                                   enum admgpu_mes_pipe pipe)
+                                   enum admgpu_mes_pipe pipe, bool prime_icache)
 {
        int r;
        uint32_t data;
@@ -593,16 +593,18 @@ static int mes_v11_0_load_microcode(struct amdgpu_device *adev,
        /* Set 0x3FFFF (256K-1) to CP_MES_MDBOUND_LO */
        WREG32_SOC15(GC, 0, regCP_MES_MDBOUND_LO, 0x3FFFF);
 
-       /* invalidate ICACHE */
-       data = RREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL);
-       data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, PRIME_ICACHE, 0);
-       data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, INVALIDATE_CACHE, 1);
-       WREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL, data);
-
-       /* prime the ICACHE. */
-       data = RREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL);
-       data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, PRIME_ICACHE, 1);
-       WREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL, data);
+       if (prime_icache) {
+               /* invalidate ICACHE */
+               data = RREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL);
+               data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, PRIME_ICACHE, 0);
+               data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+               WREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL, data);
+
+               /* prime the ICACHE. */
+               data = RREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL);
+               data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, PRIME_ICACHE, 1);
+               WREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL, data);
+       }
 
        soc21_grbm_select(adev, 0, 0, 0, 0);
        mutex_unlock(&adev->srbm_mutex);
@@ -1044,17 +1046,19 @@ static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev)
        int r = 0;
 
        if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
-               r = mes_v11_0_load_microcode(adev, AMDGPU_MES_KIQ_PIPE);
+
+               r = mes_v11_0_load_microcode(adev, AMDGPU_MES_SCHED_PIPE, false);
                if (r) {
-                       DRM_ERROR("failed to load MES kiq fw, r=%d\n", r);
+                       DRM_ERROR("failed to load MES fw, r=%d\n", r);
                        return r;
                }
 
-               r = mes_v11_0_load_microcode(adev, AMDGPU_MES_SCHED_PIPE);
+               r = mes_v11_0_load_microcode(adev, AMDGPU_MES_KIQ_PIPE, true);
                if (r) {
-                       DRM_ERROR("failed to load MES fw, r=%d\n", r);
+                       DRM_ERROR("failed to load MES kiq fw, r=%d\n", r);
                        return r;
                }
+
        }
 
        mes_v11_0_enable(adev, true);
@@ -1086,7 +1090,7 @@ static int mes_v11_0_hw_init(void *handle)
        if (!adev->enable_mes_kiq) {
                if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
                        r = mes_v11_0_load_microcode(adev,
-                                            AMDGPU_MES_SCHED_PIPE);
+                                            AMDGPU_MES_SCHED_PIPE, true);
                        if (r) {
                                DRM_ERROR("failed to MES fw, r=%d\n", r);
                                return r;
index d016e3c..b3fba8d 100644
@@ -170,6 +170,7 @@ static const struct amdgpu_video_codec_info yc_video_codecs_decode_array[] = {
        {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
        {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
        {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
+       {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
 };
 
 static const struct amdgpu_video_codecs yc_video_codecs_decode = {
index 06b2635..83c6cca 100644
@@ -469,6 +469,7 @@ static void sdma_v5_2_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 se
        }
 }
 
+
 /**
  * sdma_v5_2_gfx_stop - stop the gfx async dma engines
  *
@@ -514,21 +515,17 @@ static void sdma_v5_2_rlc_stop(struct amdgpu_device *adev)
 }
 
 /**
- * sdma_v5_2_ctx_switch_enable_for_instance - start the async dma engines
- * context switch for an instance
+ * sdma_v5_2_ctx_switch_enable - enable/disable the async dma engines context switch
  *
  * @adev: amdgpu_device pointer
- * @instance_idx: the index of the SDMA instance
+ * @enable: enable/disable the DMA MEs context switch.
  *
- * Unhalt the async dma engines context switch.
+ * Halt or unhalt the async dma engines context switch.
  */
-static void sdma_v5_2_ctx_switch_enable_for_instance(struct amdgpu_device *adev, int instance_idx)
+static void sdma_v5_2_ctx_switch_enable(struct amdgpu_device *adev, bool enable)
 {
        u32 f32_cntl, phase_quantum = 0;
-
-       if (WARN_ON(instance_idx >= adev->sdma.num_instances)) {
-               return;
-       }
+       int i;
 
        if (amdgpu_sdma_phase_quantum) {
                unsigned value = amdgpu_sdma_phase_quantum;
@@ -552,68 +549,50 @@ static void sdma_v5_2_ctx_switch_enable_for_instance(struct amdgpu_device *adev,
                phase_quantum =
                        value << SDMA0_PHASE0_QUANTUM__VALUE__SHIFT |
                        unit  << SDMA0_PHASE0_QUANTUM__UNIT__SHIFT;
-
-               WREG32_SOC15_IP(GC,
-                       sdma_v5_2_get_reg_offset(adev, instance_idx, mmSDMA0_PHASE0_QUANTUM),
-                       phase_quantum);
-               WREG32_SOC15_IP(GC,
-                       sdma_v5_2_get_reg_offset(adev, instance_idx, mmSDMA0_PHASE1_QUANTUM),
-                   phase_quantum);
-               WREG32_SOC15_IP(GC,
-                       sdma_v5_2_get_reg_offset(adev, instance_idx, mmSDMA0_PHASE2_QUANTUM),
-                   phase_quantum);
        }
 
-       if (!amdgpu_sriov_vf(adev)) {
-               f32_cntl = RREG32(sdma_v5_2_get_reg_offset(adev, instance_idx, mmSDMA0_CNTL));
-               f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_CNTL,
-                               AUTO_CTXSW_ENABLE, 1);
-               WREG32(sdma_v5_2_get_reg_offset(adev, instance_idx, mmSDMA0_CNTL), f32_cntl);
+       for (i = 0; i < adev->sdma.num_instances; i++) {
+               if (enable && amdgpu_sdma_phase_quantum) {
+                       WREG32_SOC15_IP(GC, sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_PHASE0_QUANTUM),
+                              phase_quantum);
+                       WREG32_SOC15_IP(GC, sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_PHASE1_QUANTUM),
+                              phase_quantum);
+                       WREG32_SOC15_IP(GC, sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_PHASE2_QUANTUM),
+                              phase_quantum);
+               }
+
+               if (!amdgpu_sriov_vf(adev)) {
+                       f32_cntl = RREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_CNTL));
+                       f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_CNTL,
+                                       AUTO_CTXSW_ENABLE, enable ? 1 : 0);
+                       WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_CNTL), f32_cntl);
+               }
        }
+
 }
 
 /**
- * sdma_v5_2_ctx_switch_disable_all - stop the async dma engines context switch
+ * sdma_v5_2_enable - enable/disable the async dma engines
  *
  * @adev: amdgpu_device pointer
+ * @enable: enable/disable the DMA MEs.
  *
- * Halt the async dma engines context switch.
+ * Halt or unhalt the async dma engines.
  */
-static void sdma_v5_2_ctx_switch_disable_all(struct amdgpu_device *adev)
+static void sdma_v5_2_enable(struct amdgpu_device *adev, bool enable)
 {
        u32 f32_cntl;
        int i;
 
-       if (amdgpu_sriov_vf(adev))
-               return;
-
-       for (i = 0; i < adev->sdma.num_instances; i++) {
-               f32_cntl = RREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_CNTL));
-               f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_CNTL,
-                               AUTO_CTXSW_ENABLE, 0);
-               WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_CNTL), f32_cntl);
+       if (!enable) {
+               sdma_v5_2_gfx_stop(adev);
+               sdma_v5_2_rlc_stop(adev);
        }
-}
-
-/**
- * sdma_v5_2_halt - stop the async dma engines
- *
- * @adev: amdgpu_device pointer
- *
- * Halt the async dma engines.
- */
-static void sdma_v5_2_halt(struct amdgpu_device *adev)
-{
-       int i;
-       u32 f32_cntl;
-
-       sdma_v5_2_gfx_stop(adev);
-       sdma_v5_2_rlc_stop(adev);
 
        if (!amdgpu_sriov_vf(adev)) {
                for (i = 0; i < adev->sdma.num_instances; i++) {
                        f32_cntl = RREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_F32_CNTL));
-                       f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_F32_CNTL, HALT, 1);
+                       f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_F32_CNTL, HALT, enable ? 0 : 1);
                        WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_F32_CNTL), f32_cntl);
                }
        }
@@ -625,9 +604,6 @@ static void sdma_v5_2_halt(struct amdgpu_device *adev)
  * @adev: amdgpu_device pointer
  *
  * Set up the gfx DMA ring buffers and enable them.
- * It assumes that the dma engine is stopped for each instance.
- * The function enables the engine and preemptions sequentially for each instance.
- *
  * Returns 0 for success, error for failure.
  */
 static int sdma_v5_2_gfx_resume(struct amdgpu_device *adev)
@@ -769,7 +745,10 @@ static int sdma_v5_2_gfx_resume(struct amdgpu_device *adev)
 
                ring->sched.ready = true;
 
-               sdma_v5_2_ctx_switch_enable_for_instance(adev, i);
+               if (amdgpu_sriov_vf(adev)) { /* bare-metal sequence doesn't need below two lines */
+                       sdma_v5_2_ctx_switch_enable(adev, true);
+                       sdma_v5_2_enable(adev, true);
+               }
 
                r = amdgpu_ring_test_ring(ring);
                if (r) {
@@ -813,7 +792,7 @@ static int sdma_v5_2_load_microcode(struct amdgpu_device *adev)
        int i, j;
 
        /* halt the MEs */
-       sdma_v5_2_halt(adev);
+       sdma_v5_2_enable(adev, false);
 
        for (i = 0; i < adev->sdma.num_instances; i++) {
                if (!adev->sdma.instance[i].fw)
@@ -885,8 +864,8 @@ static int sdma_v5_2_start(struct amdgpu_device *adev)
        int r = 0;
 
        if (amdgpu_sriov_vf(adev)) {
-               sdma_v5_2_ctx_switch_disable_all(adev);
-               sdma_v5_2_halt(adev);
+               sdma_v5_2_ctx_switch_enable(adev, false);
+               sdma_v5_2_enable(adev, false);
 
                /* set RB registers */
                r = sdma_v5_2_gfx_resume(adev);
@@ -910,10 +889,12 @@ static int sdma_v5_2_start(struct amdgpu_device *adev)
                amdgpu_gfx_off_ctrl(adev, false);
 
        sdma_v5_2_soft_reset(adev);
+       /* unhalt the MEs */
+       sdma_v5_2_enable(adev, true);
+       /* enable sdma ring preemption */
+       sdma_v5_2_ctx_switch_enable(adev, true);
 
-       /* Soft reset supposes to disable the dma engine and preemption.
-        * Now start the gfx rings and rlc compute queues.
-        */
+       /* start the gfx rings and rlc compute queues */
        r = sdma_v5_2_gfx_resume(adev);
        if (adev->in_s0ix)
                amdgpu_gfx_off_ctrl(adev, true);
@@ -1447,8 +1428,8 @@ static int sdma_v5_2_hw_fini(void *handle)
        if (amdgpu_sriov_vf(adev))
                return 0;
 
-       sdma_v5_2_ctx_switch_disable_all(adev);
-       sdma_v5_2_halt(adev);
+       sdma_v5_2_ctx_switch_enable(adev, false);
+       sdma_v5_2_enable(adev, false);
 
        return 0;
 }
index 3cabcee..39405f0 100644
@@ -1761,23 +1761,21 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
        .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
 };
 
-static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
-                               struct amdgpu_job *job)
+static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p)
 {
        struct drm_gpu_scheduler **scheds;
 
        /* The create msg must be in the first IB submitted */
-       if (atomic_read(&job->base.entity->fence_seq))
+       if (atomic_read(&p->entity->fence_seq))
                return -EINVAL;
 
        scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC]
                [AMDGPU_RING_PRIO_DEFAULT].sched;
-       drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
+       drm_sched_entity_modify_sched(p->entity, scheds, 1);
        return 0;
 }
 
-static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
-                           uint64_t addr)
+static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
 {
        struct ttm_operation_ctx ctx = { false, false };
        struct amdgpu_bo_va_mapping *map;
@@ -1848,7 +1846,7 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
                if (create[0] == 0x7 || create[0] == 0x10 || create[0] == 0x11)
                        continue;
 
-               r = vcn_v3_0_limit_sched(p, job);
+               r = vcn_v3_0_limit_sched(p);
                if (r)
                        goto out;
        }
@@ -1862,7 +1860,7 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
                                           struct amdgpu_job *job,
                                           struct amdgpu_ib *ib)
 {
-       struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
+       struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
        uint32_t msg_lo = 0, msg_hi = 0;
        unsigned i;
        int r;
@@ -1881,8 +1879,7 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
                        msg_hi = val;
                } else if (reg == PACKET0(p->adev->vcn.internal.cmd, 0) &&
                           val == 0) {
-                       r = vcn_v3_0_dec_msg(p, job,
-                                            ((u64)msg_hi) << 32 | msg_lo);
+                       r = vcn_v3_0_dec_msg(p, ((u64)msg_hi) << 32 | msg_lo);
                        if (r)
                                return r;
                }
index 5e9adbc..cbfb32b 100644
@@ -1516,6 +1516,8 @@ static int kfd_fill_gpu_cache_info(struct kfd_dev *kdev,
                        num_of_cache_types = ARRAY_SIZE(beige_goby_cache_info);
                        break;
                case IP_VERSION(10, 3, 3):
+               case IP_VERSION(10, 3, 6): /* TODO: Double check these on production silicon */
+               case IP_VERSION(10, 3, 7): /* TODO: Double check these on production silicon */
                        pcache_info = yellow_carp_cache_info;
                        num_of_cache_types = ARRAY_SIZE(yellow_carp_cache_info);
                        break;
index 8667e3d..bf42004 100644
@@ -73,6 +73,8 @@ static void kfd_device_info_set_sdma_info(struct kfd_dev *kfd)
        case IP_VERSION(4, 1, 2):/* RENOIR */
        case IP_VERSION(5, 2, 1):/* VANGOGH */
        case IP_VERSION(5, 2, 3):/* YELLOW_CARP */
+       case IP_VERSION(5, 2, 6):/* GC 10.3.6 */
+       case IP_VERSION(5, 2, 7):/* GC 10.3.7 */
        case IP_VERSION(6, 0, 1):
                kfd->device_info.num_sdma_queues_per_engine = 2;
                break;
@@ -127,6 +129,8 @@ static void kfd_device_info_set_event_interrupt_class(struct kfd_dev *kfd)
        case IP_VERSION(9, 4, 2): /* ALDEBARAN */
        case IP_VERSION(10, 3, 1): /* VANGOGH */
        case IP_VERSION(10, 3, 3): /* YELLOW_CARP */
+       case IP_VERSION(10, 3, 6): /* GC 10.3.6 */
+       case IP_VERSION(10, 3, 7): /* GC 10.3.7 */
        case IP_VERSION(10, 1, 3): /* CYAN_SKILLFISH */
        case IP_VERSION(10, 1, 4):
        case IP_VERSION(10, 1, 10): /* NAVI10 */
@@ -178,7 +182,9 @@ static void kfd_device_info_init(struct kfd_dev *kfd,
 
                if (gc_version < IP_VERSION(11, 0, 0)) {
                        /* Navi2x+, Navi1x+ */
-                       if (gc_version >= IP_VERSION(10, 3, 0))
+                       if (gc_version == IP_VERSION(10, 3, 6))
+                               kfd->device_info.no_atomic_fw_version = 14;
+                       else if (gc_version >= IP_VERSION(10, 3, 0))
                                kfd->device_info.no_atomic_fw_version = 92;
                        else if (gc_version >= IP_VERSION(10, 1, 1))
                                kfd->device_info.no_atomic_fw_version = 145;
@@ -368,6 +374,16 @@ struct kfd_dev *kgd2kfd_probe(struct amdgpu_device *adev, bool vf)
                        if (!vf)
                                f2g = &gfx_v10_3_kfd2kgd;
                        break;
+               case IP_VERSION(10, 3, 6):
+                       gfx_target_version = 100306;
+                       if (!vf)
+                               f2g = &gfx_v10_3_kfd2kgd;
+                       break;
+               case IP_VERSION(10, 3, 7):
+                       gfx_target_version = 100307;
+                       if (!vf)
+                               f2g = &gfx_v10_3_kfd2kgd;
+                       break;
                case IP_VERSION(11, 0, 0):
                        gfx_target_version = 110000;
                        f2g = &gfx_v11_kfd2kgd;
index 997650d..e44376c 100644
@@ -296,7 +296,7 @@ svm_migrate_copy_to_vram(struct amdgpu_device *adev, struct svm_range *prange,
                         struct migrate_vma *migrate, struct dma_fence **mfence,
                         dma_addr_t *scratch)
 {
-       uint64_t npages = migrate->cpages;
+       uint64_t npages = migrate->npages;
        struct device *dev = adev->dev;
        struct amdgpu_res_cursor cursor;
        dma_addr_t *src;
@@ -344,7 +344,7 @@ svm_migrate_copy_to_vram(struct amdgpu_device *adev, struct svm_range *prange,
                                                mfence);
                                if (r)
                                        goto out_free_vram_pages;
-                               amdgpu_res_next(&cursor, j << PAGE_SHIFT);
+                               amdgpu_res_next(&cursor, (j + 1) << PAGE_SHIFT);
                                j = 0;
                        } else {
                                amdgpu_res_next(&cursor, PAGE_SIZE);
@@ -590,7 +590,7 @@ svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
                        continue;
                }
                src[i] = svm_migrate_addr(adev, spage);
-               if (i > 0 && src[i] != src[i - 1] + PAGE_SIZE) {
+               if (j > 0 && src[i] != src[i - 1] + PAGE_SIZE) {
                        r = svm_migrate_copy_memory_gart(adev, dst + i - j,
                                                         src + i - j, j,
                                                         FROM_VRAM_TO_RAM,
index 2ebf013..7b33224 100644
@@ -1295,7 +1295,7 @@ svm_range_map_to_gpu(struct kfd_process_device *pdd, struct svm_range *prange,
                r = amdgpu_vm_update_range(adev, vm, false, false, flush_tlb, NULL,
                                           last_start, prange->start + i,
                                           pte_flags,
-                                          last_start - prange->start,
+                                          (last_start - prange->start) << PAGE_SHIFT,
                                           bo_adev ? bo_adev->vm_manager.vram_base_offset : 0,
                                           NULL, dma_addr, &vm->last_update);
 
@@ -2307,6 +2307,8 @@ svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,
 
        if (range->event == MMU_NOTIFY_RELEASE)
                return true;
+       if (!mmget_not_zero(mni->mm))
+               return true;
 
        start = mni->interval_tree.start;
        last = mni->interval_tree.last;
@@ -2333,6 +2335,7 @@ svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,
        }
 
        svm_range_unlock(prange);
+       mmput(mni->mm);
 
        return true;
 }
index ceb3437..bca5f01 100644
@@ -287,8 +287,11 @@ static void dcn31_enable_pme_wa(struct clk_mgr *clk_mgr_base)
 
 void dcn31_init_clocks(struct clk_mgr *clk_mgr)
 {
+       uint32_t ref_dtbclk = clk_mgr->clks.ref_dtbclk_khz;
+
        memset(&(clk_mgr->clks), 0, sizeof(struct dc_clocks));
        // Assumption is that boot state always supports pstate
+       clk_mgr->clks.ref_dtbclk_khz = ref_dtbclk;      // restore ref_dtbclk
        clk_mgr->clks.p_state_change_support = true;
        clk_mgr->clks.prev_p_state_change_support = true;
        clk_mgr->clks.pwr_state = DCN_PWR_STATE_UNKNOWN;
@@ -638,8 +641,14 @@ static void dcn31_set_low_power_state(struct clk_mgr *clk_mgr_base)
        }
 }
 
+int dcn31_get_dtb_ref_freq_khz(struct clk_mgr *clk_mgr_base)
+{
+       return clk_mgr_base->clks.ref_dtbclk_khz;
+}
+
 static struct clk_mgr_funcs dcn31_funcs = {
        .get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+       .get_dtb_ref_clk_frequency = dcn31_get_dtb_ref_freq_khz,
        .update_clocks = dcn31_update_clocks,
        .init_clocks = dcn31_init_clocks,
        .enable_pme_wa = dcn31_enable_pme_wa,
@@ -719,7 +728,7 @@ void dcn31_clk_mgr_construct(
        }
 
        clk_mgr->base.base.dprefclk_khz = 600000;
-       clk_mgr->base.dccg->ref_dtbclk_khz = 600000;
+       clk_mgr->base.base.clks.ref_dtbclk_khz = 600000;
        dce_clock_read_ss_info(&clk_mgr->base);
        /*if bios enabled SS, driver needs to adjust dtb clock, only enable with correct bios*/
        //clk_mgr->base.dccg->ref_dtbclk_khz = dce_adjust_dp_ref_freq_for_ss(clk_mgr_internal, clk_mgr->base.base.dprefclk_khz);
index 961b10a..be06fdb 100644
@@ -51,6 +51,8 @@ void dcn31_clk_mgr_construct(struct dc_context *ctx,
                struct pp_smu_funcs *pp_smu,
                struct dccg *dccg);
 
+int dcn31_get_dtb_ref_freq_khz(struct clk_mgr *clk_mgr_base);
+
 void dcn31_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr_int);
 
 #endif //__DCN31_CLK_MGR_H__
index a2ade6e..fb4ae80 100644
@@ -41,9 +41,7 @@
 
 #include "dc_dmub_srv.h"
 
-#if defined (CONFIG_DRM_AMD_DC_DP2_0)
 #include "dc_link_dp.h"
-#endif
 
 #define TO_CLK_MGR_DCN315(clk_mgr)\
        container_of(clk_mgr, struct clk_mgr_dcn315, base)
@@ -580,6 +578,7 @@ static void dcn315_enable_pme_wa(struct clk_mgr *clk_mgr_base)
 
 static struct clk_mgr_funcs dcn315_funcs = {
        .get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+       .get_dtb_ref_clk_frequency = dcn31_get_dtb_ref_freq_khz,
        .update_clocks = dcn315_update_clocks,
        .init_clocks = dcn31_init_clocks,
        .enable_pme_wa = dcn315_enable_pme_wa,
@@ -656,9 +655,9 @@ void dcn315_clk_mgr_construct(
 
        clk_mgr->base.base.dprefclk_khz = 600000;
        clk_mgr->base.base.dprefclk_khz = dcn315_smu_get_dpref_clk(&clk_mgr->base);
-       clk_mgr->base.dccg->ref_dtbclk_khz = clk_mgr->base.base.dprefclk_khz;
+       clk_mgr->base.base.clks.ref_dtbclk_khz = clk_mgr->base.base.dprefclk_khz;
        dce_clock_read_ss_info(&clk_mgr->base);
-       clk_mgr->base.dccg->ref_dtbclk_khz = dce_adjust_dp_ref_freq_for_ss(&clk_mgr->base, clk_mgr->base.base.dprefclk_khz);
+       clk_mgr->base.base.clks.ref_dtbclk_khz = dce_adjust_dp_ref_freq_for_ss(&clk_mgr->base, clk_mgr->base.base.dprefclk_khz);
 
        clk_mgr->base.base.bw_params = &dcn315_bw_params;
 
index fc3af81..e4bb9c6 100644
@@ -571,6 +571,7 @@ static void dcn316_clk_mgr_helper_populate_bw_params(
 static struct clk_mgr_funcs dcn316_funcs = {
        .enable_pme_wa = dcn316_enable_pme_wa,
        .get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+       .get_dtb_ref_clk_frequency = dcn31_get_dtb_ref_freq_khz,
        .update_clocks = dcn316_update_clocks,
        .init_clocks = dcn31_init_clocks,
        .are_clock_states_equal = dcn31_are_clock_states_equal,
@@ -685,7 +686,7 @@ void dcn316_clk_mgr_construct(
 
        clk_mgr->base.base.dprefclk_khz = 600000;
        clk_mgr->base.base.dprefclk_khz = dcn316_smu_get_dpref_clk(&clk_mgr->base);
-       clk_mgr->base.dccg->ref_dtbclk_khz = clk_mgr->base.base.dprefclk_khz;
+       clk_mgr->base.base.clks.ref_dtbclk_khz = clk_mgr->base.base.dprefclk_khz;
        dce_clock_read_ss_info(&clk_mgr->base);
        /*clk_mgr->base.dccg->ref_dtbclk_khz =
        dce_adjust_dp_ref_freq_for_ss(&clk_mgr->base, clk_mgr->base.base.dprefclk_khz);*/
index dc30ac3..cbc47ae 100644
@@ -114,8 +114,8 @@ static const struct dc_link_settings fail_safe_link_settings = {
 
 static bool decide_fallback_link_setting(
                struct dc_link *link,
-               struct dc_link_settings initial_link_settings,
-               struct dc_link_settings *current_link_setting,
+               struct dc_link_settings *max,
+               struct dc_link_settings *cur,
                enum link_training_result training_result);
 static void maximize_lane_settings(const struct link_training_settings *lt_settings,
                struct dc_lane_settings lane_settings[LANE_COUNT_DP_MAX]);
@@ -2784,6 +2784,7 @@ bool perform_link_training_with_retries(
        enum dp_panel_mode panel_mode = dp_get_panel_mode(link);
        enum link_training_result status = LINK_TRAINING_CR_FAIL_LANE0;
        struct dc_link_settings cur_link_settings = *link_setting;
+       struct dc_link_settings max_link_settings = *link_setting;
        const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res);
        int fail_count = 0;
        bool is_link_bw_low = false; /* link bandwidth < stream bandwidth */
@@ -2793,7 +2794,6 @@ bool perform_link_training_with_retries(
 
        dp_trace_commit_lt_init(link);
 
-
        if (dp_get_link_encoding_format(&cur_link_settings) == DP_8b_10b_ENCODING)
                /* We need to do this before the link training to ensure the idle
                 * pattern in SST mode will be sent right after the link training
@@ -2909,19 +2909,15 @@ bool perform_link_training_with_retries(
                        uint32_t req_bw;
                        uint32_t link_bw;
 
-                       decide_fallback_link_setting(link, *link_setting, &cur_link_settings, status);
-                       /* Flag if reduced link bandwidth no longer meets stream requirements or fallen back to
-                        * minimum link bandwidth.
+                       decide_fallback_link_setting(link, &max_link_settings,
+                                       &cur_link_settings, status);
+                       /* Fail link training if reduced link bandwidth no longer meets
+                        * stream requirements.
                         */
                        req_bw = dc_bandwidth_in_kbps_from_timing(&stream->timing);
                        link_bw = dc_link_bandwidth_kbps(link, &cur_link_settings);
-                       is_link_bw_low = (req_bw > link_bw);
-                       is_link_bw_min = ((cur_link_settings.link_rate <= LINK_RATE_LOW) &&
-                               (cur_link_settings.lane_count <= LANE_COUNT_ONE));
-
-                       if (is_link_bw_low)
-                               DC_LOG_WARNING("%s: Link bandwidth too low after fallback req_bw(%d) > link_bw(%d)\n",
-                                       __func__, req_bw, link_bw);
+                       if (req_bw > link_bw)
+                               break;
                }
 
                msleep(delay_between_attempts);
@@ -3309,7 +3305,7 @@ static bool dp_verify_link_cap(
        int *fail_count)
 {
        struct dc_link_settings cur_link_settings = {0};
-       struct dc_link_settings initial_link_settings = *known_limit_link_setting;
+       struct dc_link_settings max_link_settings = *known_limit_link_setting;
        bool success = false;
        bool skip_video_pattern;
        enum clock_source_id dp_cs_id = get_clock_source_id(link);
@@ -3318,7 +3314,7 @@ static bool dp_verify_link_cap(
        struct link_resource link_res;
 
        memset(&irq_data, 0, sizeof(irq_data));
-       cur_link_settings = initial_link_settings;
+       cur_link_settings = max_link_settings;
 
        /* Grant extended timeout request */
        if ((link->lttpr_mode == LTTPR_MODE_NON_TRANSPARENT) && (link->dpcd_caps.lttpr_caps.max_ext_timeout > 0)) {
@@ -3361,7 +3357,7 @@ static bool dp_verify_link_cap(
                dp_trace_lt_result_update(link, status, true);
                dp_disable_link_phy(link, &link_res, link->connector_signal);
        } while (!success && decide_fallback_link_setting(link,
-                       initial_link_settings, &cur_link_settings, status));
+                       &max_link_settings, &cur_link_settings, status));
 
        link->verified_link_cap = success ?
                        cur_link_settings : fail_safe_link_settings;
@@ -3596,16 +3592,19 @@ static bool decide_fallback_link_setting_max_bw_policy(
  */
 static bool decide_fallback_link_setting(
                struct dc_link *link,
-               struct dc_link_settings initial_link_settings,
-               struct dc_link_settings *current_link_setting,
+               struct dc_link_settings *max,
+               struct dc_link_settings *cur,
                enum link_training_result training_result)
 {
-       if (!current_link_setting)
+       if (!cur)
                return false;
-       if (dp_get_link_encoding_format(&initial_link_settings) == DP_128b_132b_ENCODING ||
+       if (!max)
+               return false;
+
+       if (dp_get_link_encoding_format(max) == DP_128b_132b_ENCODING ||
                        link->dc->debug.force_dp2_lt_fallback_method)
-               return decide_fallback_link_setting_max_bw_policy(link, &initial_link_settings,
-                               current_link_setting, training_result);
+               return decide_fallback_link_setting_max_bw_policy(link, max, cur,
+                               training_result);
 
        switch (training_result) {
        case LINK_TRAINING_CR_FAIL_LANE0:
@@ -3613,28 +3612,18 @@ static bool decide_fallback_link_setting(
        case LINK_TRAINING_CR_FAIL_LANE23:
        case LINK_TRAINING_LQA_FAIL:
        {
-               if (!reached_minimum_link_rate
-                               (current_link_setting->link_rate)) {
-                       current_link_setting->link_rate =
-                               reduce_link_rate(
-                                       current_link_setting->link_rate);
-               } else if (!reached_minimum_lane_count
-                               (current_link_setting->lane_count)) {
-                       current_link_setting->link_rate =
-                               initial_link_settings.link_rate;
+               if (!reached_minimum_link_rate(cur->link_rate)) {
+                       cur->link_rate = reduce_link_rate(cur->link_rate);
+               } else if (!reached_minimum_lane_count(cur->lane_count)) {
+                       cur->link_rate = max->link_rate;
                        if (training_result == LINK_TRAINING_CR_FAIL_LANE0)
                                return false;
                        else if (training_result == LINK_TRAINING_CR_FAIL_LANE1)
-                               current_link_setting->lane_count =
-                                               LANE_COUNT_ONE;
-                       else if (training_result ==
-                                       LINK_TRAINING_CR_FAIL_LANE23)
-                               current_link_setting->lane_count =
-                                               LANE_COUNT_TWO;
+                               cur->lane_count = LANE_COUNT_ONE;
+                       else if (training_result == LINK_TRAINING_CR_FAIL_LANE23)
+                               cur->lane_count = LANE_COUNT_TWO;
                        else
-                               current_link_setting->lane_count =
-                                       reduce_lane_count(
-                                       current_link_setting->lane_count);
+                               cur->lane_count = reduce_lane_count(cur->lane_count);
                } else {
                        return false;
                }
@@ -3642,17 +3631,17 @@ static bool decide_fallback_link_setting(
        }
        case LINK_TRAINING_EQ_FAIL_EQ:
        {
-               if (!reached_minimum_lane_count
-                               (current_link_setting->lane_count)) {
-                       current_link_setting->lane_count =
-                               reduce_lane_count(
-                                       current_link_setting->lane_count);
-               } else if (!reached_minimum_link_rate
-                               (current_link_setting->link_rate)) {
-                       current_link_setting->link_rate =
-                               reduce_link_rate(
-                                       current_link_setting->link_rate);
-                       current_link_setting->lane_count = initial_link_settings.lane_count;
+               if (!reached_minimum_lane_count(cur->lane_count)) {
+                       cur->lane_count = reduce_lane_count(cur->lane_count);
+               } else if (!reached_minimum_link_rate(cur->link_rate)) {
+                       cur->link_rate = reduce_link_rate(cur->link_rate);
+                       /* Reduce max link rate to avoid potential infinite loop.
+                        * Needed so that any subsequent CR_FAIL fallback can't
+                        * re-set the link rate higher than the link rate from
+                        * the latest EQ_FAIL fallback.
+                        */
+                       max->link_rate = cur->link_rate;
+                       cur->lane_count = max->lane_count;
                } else {
                        return false;
                }
@@ -3660,12 +3649,15 @@ static bool decide_fallback_link_setting(
        }
        case LINK_TRAINING_EQ_FAIL_CR:
        {
-               if (!reached_minimum_link_rate
-                               (current_link_setting->link_rate)) {
-                       current_link_setting->link_rate =
-                               reduce_link_rate(
-                                       current_link_setting->link_rate);
-                       current_link_setting->lane_count = initial_link_settings.lane_count;
+               if (!reached_minimum_link_rate(cur->link_rate)) {
+                       cur->link_rate = reduce_link_rate(cur->link_rate);
+                       /* Reduce max link rate to avoid potential infinite loop.
+                        * Needed so that any subsequent CR_FAIL fallback can't
+                        * re-set the link rate higher than the link rate from
+                        * the latest EQ_FAIL fallback.
+                        */
+                       max->link_rate = cur->link_rate;
+                       cur->lane_count = max->lane_count;
                } else {
                        return false;
                }
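The fallback hunks above reduce link rate first on CR failures and lane count first on EQ failures, and clamp `max->link_rate` so a later CR fallback cannot climb back above the rate reached by the latest EQ fallback. The CR-failure ordering can be sketched standalone like this; the rate table and units are hypothetical, not DC's link-rate enums:

```c
#include <stdbool.h>

/* Hypothetical link rates (RBR..HBR3, in 10 Mbps/lane units). */
static const int rates[] = { 162, 270, 540, 810 };

static int reduce_link_rate(int rate)
{
    for (int i = 3; i > 0; i--)
        if (rates[i] == rate)
            return rates[i - 1];
    return rate; /* already at minimum */
}

static int reduce_lane_count(int count)
{
    return count > 1 ? count / 2 : count;
}

/* CR-style fallback: drop rate first; once at minimum rate, retry the
 * (possibly clamped) max rate with fewer lanes. Returns false when both
 * axes are exhausted and link training must be reported as failed. */
static bool fallback_cr(int *rate, int *lane_count, int max_rate)
{
    if (*rate > rates[0]) {
        *rate = reduce_link_rate(*rate);
        return true;
    }
    if (*lane_count > 1) {
        *rate = max_rate;
        *lane_count = reduce_lane_count(*lane_count);
        return true;
    }
    return false;
}
```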
index 3960c74..817028d 100644
@@ -47,7 +47,7 @@ struct aux_payload;
 struct set_config_cmd_payload;
 struct dmub_notification;
 
-#define DC_VER "3.2.186"
+#define DC_VER "3.2.187"
 
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
@@ -416,6 +416,7 @@ struct dc_clocks {
        bool p_state_change_support;
        enum dcn_zstate_support_state zstate_support;
        bool dtbclk_en;
+       int ref_dtbclk_khz;
        enum dcn_pwr_state pwr_state;
        /*
         * Elements below are not compared for the purposes of
@@ -719,6 +720,8 @@ struct dc_debug_options {
        bool apply_vendor_specific_lttpr_wa;
        bool extended_blank_optimization;
        union aux_wake_wa_options aux_wake_wa;
+       /* uses value at boot and disables switch */
+       bool disable_dtb_ref_clk_switch;
        uint8_t psr_power_use_phy_fsm;
        enum dml_hostvm_override_opts dml_hostvm_override;
 };
index 287a106..bbc58d1 100644
@@ -513,12 +513,10 @@ void dccg31_set_physymclk(
 /* Controls the generation of pixel valid for OTG in (OTG -> HPO case) */
 static void dccg31_set_dtbclk_dto(
                struct dccg *dccg,
-               int dtbclk_inst,
-               int req_dtbclk_khz,
-               int num_odm_segments,
-               const struct dc_crtc_timing *timing)
+               struct dtbclk_dto_params *params)
 {
        struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+       int req_dtbclk_khz = params->pixclk_khz;
        uint32_t dtbdto_div;
 
        /* Mode                 DTBDTO Rate       DTBCLK_DTO<x>_DIV Register
@@ -529,57 +527,53 @@ static void dccg31_set_dtbclk_dto(
         * DSC native 4:2:2     pixel rate/2      4
         * Other modes          pixel rate        8
         */
-       if (num_odm_segments == 4) {
+       if (params->num_odm_segments == 4) {
                dtbdto_div = 2;
-               req_dtbclk_khz = req_dtbclk_khz / 4;
-       } else if ((num_odm_segments == 2) ||
-                       (timing->pixel_encoding == PIXEL_ENCODING_YCBCR420) ||
-                       (timing->flags.DSC && timing->pixel_encoding == PIXEL_ENCODING_YCBCR422
-                                       && !timing->dsc_cfg.ycbcr422_simple)) {
+               req_dtbclk_khz = params->pixclk_khz / 4;
+       } else if ((params->num_odm_segments == 2) ||
+                       (params->timing->pixel_encoding == PIXEL_ENCODING_YCBCR420) ||
+                       (params->timing->flags.DSC && params->timing->pixel_encoding == PIXEL_ENCODING_YCBCR422
+                                       && !params->timing->dsc_cfg.ycbcr422_simple)) {
                dtbdto_div = 4;
-               req_dtbclk_khz = req_dtbclk_khz / 2;
+               req_dtbclk_khz = params->pixclk_khz / 2;
        } else
                dtbdto_div = 8;
 
-       if (dccg->ref_dtbclk_khz && req_dtbclk_khz) {
+       if (params->ref_dtbclk_khz && req_dtbclk_khz) {
                uint32_t modulo, phase;
 
                // phase / modulo = dtbclk / dtbclk ref
-               modulo = dccg->ref_dtbclk_khz * 1000;
-               phase = div_u64((((unsigned long long)modulo * req_dtbclk_khz) + dccg->ref_dtbclk_khz - 1),
-                       dccg->ref_dtbclk_khz);
+               modulo = params->ref_dtbclk_khz * 1000;
+               phase = div_u64((((unsigned long long)modulo * req_dtbclk_khz) + params->ref_dtbclk_khz - 1),
+                               params->ref_dtbclk_khz);
 
-               REG_UPDATE(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-                               DTBCLK_DTO_DIV[dtbclk_inst], dtbdto_div);
+               REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+                               DTBCLK_DTO_DIV[params->otg_inst], dtbdto_div);
 
-               REG_WRITE(DTBCLK_DTO_MODULO[dtbclk_inst], modulo);
-               REG_WRITE(DTBCLK_DTO_PHASE[dtbclk_inst], phase);
+               REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], modulo);
+               REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], phase);
 
-               REG_UPDATE(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-                               DTBCLK_DTO_ENABLE[dtbclk_inst], 1);
+               REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+                               DTBCLK_DTO_ENABLE[params->otg_inst], 1);
 
-               REG_WAIT(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-                               DTBCLKDTO_ENABLE_STATUS[dtbclk_inst], 1,
+               REG_WAIT(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+                               DTBCLKDTO_ENABLE_STATUS[params->otg_inst], 1,
                                1, 100);
 
                /* The recommended programming sequence to enable DTBCLK DTO to generate
                 * valid pixel HPO DPSTREAM ENCODER, specifies that DTO source select should
                 * be set only after DTO is enabled
                 */
-               REG_UPDATE(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-                               PIPE_DTO_SRC_SEL[dtbclk_inst], 1);
-
-               dccg->dtbclk_khz[dtbclk_inst] = req_dtbclk_khz;
+               REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+                               PIPE_DTO_SRC_SEL[params->otg_inst], 1);
        } else {
-               REG_UPDATE_3(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-                               DTBCLK_DTO_ENABLE[dtbclk_inst], 0,
-                               PIPE_DTO_SRC_SEL[dtbclk_inst], 0,
-                               DTBCLK_DTO_DIV[dtbclk_inst], dtbdto_div);
+               REG_UPDATE_3(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+                               DTBCLK_DTO_ENABLE[params->otg_inst], 0,
+                               PIPE_DTO_SRC_SEL[params->otg_inst], 0,
+                               DTBCLK_DTO_DIV[params->otg_inst], dtbdto_div);
 
-               REG_WRITE(DTBCLK_DTO_MODULO[dtbclk_inst], 0);
-               REG_WRITE(DTBCLK_DTO_PHASE[dtbclk_inst], 0);
-
-               dccg->dtbclk_khz[dtbclk_inst] = 0;
+               REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], 0);
+               REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], 0);
        }
 }
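The DTO programming above solves `phase / modulo = req_dtbclk / ref_dtbclk`, rounding `phase` up so the generated clock never undershoots the requested rate. A plain-C stand-in for the kernel's `div_u64`-based computation:

```c
#include <stdint.h>

/* Mirrors the phase/modulo arithmetic in dccg31_set_dtbclk_dto:
 * modulo is the reference clock in Hz, phase is the ceiling of
 * modulo * req / ref, computed in 64 bits to avoid overflow. */
static void compute_dto(uint32_t ref_khz, uint32_t req_khz,
                        uint32_t *modulo, uint32_t *phase)
{
    *modulo = ref_khz * 1000u;
    *phase = (uint32_t)(((uint64_t)*modulo * req_khz + ref_khz - 1) / ref_khz);
}
```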
 
@@ -606,16 +600,12 @@ void dccg31_set_audio_dtbclk_dto(
 
                REG_UPDATE(DCCG_AUDIO_DTO_SOURCE,
                                DCCG_AUDIO_DTO_SEL, 4);  //  04 - DCCG_AUDIO_DTO_SEL_AUDIO_DTO_DTBCLK
-
-               dccg->audio_dtbclk_khz = req_audio_dtbclk_khz;
        } else {
                REG_WRITE(DCCG_AUDIO_DTBCLK_DTO_PHASE, 0);
                REG_WRITE(DCCG_AUDIO_DTBCLK_DTO_MODULO, 0);
 
                REG_UPDATE(DCCG_AUDIO_DTO_SOURCE,
                                DCCG_AUDIO_DTO_SEL, 3);  //  03 - DCCG_AUDIO_DTO_SEL_NO_AUDIO_DTO
-
-               dccg->audio_dtbclk_khz = 0;
        }
 }
 
index d94fd10..8b12b41 100644
@@ -230,9 +230,7 @@ static void enc31_hw_init(struct link_encoder *enc)
        AUX_RX_PHASE_DETECT_LEN,  [21,20] = 0x3 default is 3
        AUX_RX_DETECTION_THRESHOLD [30:28] = 1
 */
-       AUX_REG_WRITE(AUX_DPHY_RX_CONTROL0, 0x103d1110);
-
-       AUX_REG_WRITE(AUX_DPHY_TX_CONTROL, 0x21c7a);
+       // dmub will read AUX_DPHY_RX_CONTROL0/AUX_DPHY_TX_CONTROL from vbios table in dp_aux_init
 
        //AUX_DPHY_TX_REF_CONTROL'AUX_TX_REF_DIV HW default is 0x32;
        // Set AUX_TX_REF_DIV Divider to generate 2 MHz reference from refclk
index 789f756..d227367 100644
@@ -1284,10 +1284,8 @@ static bool is_dtbclk_required(struct dc *dc, struct dc_state *context)
        for (i = 0; i < dc->res_pool->pipe_count; i++) {
                if (!context->res_ctx.pipe_ctx[i].stream)
                        continue;
-#if defined (CONFIG_DRM_AMD_DC_DP2_0)
                if (is_dp_128b_132b_signal(&context->res_ctx.pipe_ctx[i]))
                        return true;
-#endif
        }
        return false;
 }
index 46ce5a0..b5570aa 100644
@@ -237,6 +237,7 @@ struct clk_mgr_funcs {
                        bool safe_to_lower);
 
        int (*get_dp_ref_clk_frequency)(struct clk_mgr *clk_mgr);
+       int (*get_dtb_ref_clk_frequency)(struct clk_mgr *clk_mgr);
 
        void (*set_low_power_state)(struct clk_mgr *clk_mgr);
 
index b2fa4de..c702191 100644
@@ -60,8 +60,17 @@ struct dccg {
        const struct dccg_funcs *funcs;
        int pipe_dppclk_khz[MAX_PIPES];
        int ref_dppclk;
-       int dtbclk_khz[MAX_PIPES];
-       int audio_dtbclk_khz;
+       //int dtbclk_khz[MAX_PIPES];/* TODO needs to be removed */
+       //int audio_dtbclk_khz;/* TODO needs to be removed */
+       int ref_dtbclk_khz;/* TODO needs to be removed */
+};
+
+struct dtbclk_dto_params {
+       const struct dc_crtc_timing *timing;
+       int otg_inst;
+       int pixclk_khz;
+       int req_audio_dtbclk_khz;
+       int num_odm_segments;
        int ref_dtbclk_khz;
 };
 
@@ -111,10 +120,7 @@ struct dccg_funcs {
 
        void (*set_dtbclk_dto)(
                        struct dccg *dccg,
-                       int dtbclk_inst,
-                       int req_dtbclk_khz,
-                       int num_odm_segments,
-                       const struct dc_crtc_timing *timing);
+                       struct dtbclk_dto_params *dto_params);
 
        void (*set_audio_dtbclk_dto)(
                        struct dccg *dccg,
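The `set_dtbclk_dto` refactor above folds a growing argument list into `dtbclk_dto_params`, so adding a field like `ref_dtbclk_khz` later does not churn every caller or vtable signature. A sketch of the pattern, reusing the divider table's ODM cases only (the encoding-specific rows and struct field names here are a reduced illustration):

```c
/* Hypothetical reduced parameter object, in the spirit of
 * dtbclk_dto_params above. */
struct dto_params_sketch {
    int otg_inst;
    int pixclk_khz;
    int num_odm_segments;
    int ref_dtbclk_khz;
};

/* ODM rows of the DTBCLK_DTO divider table from dccg31_set_dtbclk_dto;
 * pixel-encoding rows omitted for brevity. */
static int dtbdto_div(const struct dto_params_sketch *p)
{
    if (p->num_odm_segments == 4)
        return 2;
    if (p->num_odm_segments == 2)
        return 4;
    return 8; /* "other modes" row */
}
```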
index 87972dc..ea6cf8b 100644
@@ -27,6 +27,7 @@
 #include "core_types.h"
 #include "dccg.h"
 #include "dc_link_dp.h"
+#include "clk_mgr.h"
 
 static enum phyd32clk_clock_source get_phyd32clk_src(struct dc_link *link)
 {
@@ -106,14 +107,18 @@ static void setup_hpo_dp_stream_encoder(struct pipe_ctx *pipe_ctx)
        struct hpo_dp_link_encoder *link_enc = pipe_ctx->link_res.hpo_dp_link_enc;
        struct dccg *dccg = dc->res_pool->dccg;
        struct timing_generator *tg = pipe_ctx->stream_res.tg;
-       int odm_segment_count = get_odm_segment_count(pipe_ctx);
+       struct dtbclk_dto_params dto_params = {0};
        enum phyd32clk_clock_source phyd32clk = get_phyd32clk_src(pipe_ctx->stream->link);
 
+       dto_params.otg_inst = tg->inst;
+       dto_params.pixclk_khz = pipe_ctx->stream->phy_pix_clk;
+       dto_params.num_odm_segments = get_odm_segment_count(pipe_ctx);
+       dto_params.timing = &pipe_ctx->stream->timing;
+       dto_params.ref_dtbclk_khz = dc->clk_mgr->funcs->get_dtb_ref_clk_frequency(dc->clk_mgr);
+
        dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst);
        dccg->funcs->enable_symclk32_se(dccg, stream_enc->inst, phyd32clk);
-       dccg->funcs->set_dtbclk_dto(dccg, tg->inst, pipe_ctx->stream->phy_pix_clk,
-                       odm_segment_count,
-                       &pipe_ctx->stream->timing);
+       dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
        stream_enc->funcs->enable_stream(stream_enc);
        stream_enc->funcs->map_stream_to_link(stream_enc, stream_enc->inst, link_enc->inst);
 }
@@ -124,9 +129,13 @@ static void reset_hpo_dp_stream_encoder(struct pipe_ctx *pipe_ctx)
        struct hpo_dp_stream_encoder *stream_enc = pipe_ctx->stream_res.hpo_dp_stream_enc;
        struct dccg *dccg = dc->res_pool->dccg;
        struct timing_generator *tg = pipe_ctx->stream_res.tg;
+       struct dtbclk_dto_params dto_params = {0};
+
+       dto_params.otg_inst = tg->inst;
+       dto_params.timing = &pipe_ctx->stream->timing;
 
        stream_enc->funcs->disable(stream_enc);
-       dccg->funcs->set_dtbclk_dto(dccg, tg->inst, 0, 0, &pipe_ctx->stream->timing);
+       dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
        dccg->funcs->disable_symclk32_se(dccg, stream_enc->inst);
        dccg->funcs->set_dpstreamclk(dccg, REFCLK, tg->inst);
 }
index 7c9330a..c7bd7e2 100644
@@ -84,7 +84,7 @@ void dmub_dcn31_reset(struct dmub_srv *dmub)
 {
        union dmub_gpint_data_register cmd;
        const uint32_t timeout = 100;
-       uint32_t in_reset, scratch, i;
+       uint32_t in_reset, scratch, i, pwait_mode;
 
        REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &in_reset);
 
@@ -115,6 +115,13 @@ void dmub_dcn31_reset(struct dmub_srv *dmub)
                        udelay(1);
                }
 
+               for (i = 0; i < timeout; ++i) {
+                       REG_GET(DMCUB_CNTL, DMCUB_PWAIT_MODE_STATUS, &pwait_mode);
+                       if (pwait_mode & (1 << 0))
+                               break;
+
+                       udelay(1);
+               }
                /* Force reset in case we timed out, DMCUB is likely hung. */
        }
 
@@ -125,6 +132,8 @@ void dmub_dcn31_reset(struct dmub_srv *dmub)
        REG_WRITE(DMCUB_INBOX1_WPTR, 0);
        REG_WRITE(DMCUB_OUTBOX1_RPTR, 0);
        REG_WRITE(DMCUB_OUTBOX1_WPTR, 0);
+       REG_WRITE(DMCUB_OUTBOX0_RPTR, 0);
+       REG_WRITE(DMCUB_OUTBOX0_WPTR, 0);
        REG_WRITE(DMCUB_SCRATCH0, 0);
 
        /* Clear the GPINT command manually so we don't send anything during boot. */
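The new PWAIT wait above is the usual bounded register poll: re-read until the status bit is set or the iteration budget runs out, then let the caller fall back to a forced reset. A standalone sketch, with `fake_reg`/`read_fake` standing in for `REG_GET` on real hardware:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simulated register for a hosted build; on hardware this would be a
 * memory-mapped read through REG_GET. */
static uint32_t fake_reg;

static uint32_t read_fake(void)
{
    return fake_reg;
}

/* Poll bit 0 up to timeout_iters times; the driver inserts udelay(1)
 * between reads. Returns false on timeout so the caller can force reset. */
static bool poll_bit0(uint32_t (*read_reg)(void), int timeout_iters)
{
    for (int i = 0; i < timeout_iters; ++i) {
        if (read_reg() & 1u)
            return true;
    }
    return false;
}
```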
index 59ddc81..f6db6f8 100644
@@ -151,7 +151,8 @@ struct dmub_srv;
        DMUB_SF(DCN_VM_FB_OFFSET, FB_OFFSET) \
        DMUB_SF(DMCUB_INBOX0_WPTR, DMCUB_INBOX0_WPTR) \
        DMUB_SF(DMCUB_INTERRUPT_ENABLE, DMCUB_GPINT_IH_INT_EN) \
-       DMUB_SF(DMCUB_INTERRUPT_ACK, DMCUB_GPINT_IH_INT_ACK)
+       DMUB_SF(DMCUB_INTERRUPT_ACK, DMCUB_GPINT_IH_INT_ACK) \
+       DMUB_SF(DMCUB_CNTL, DMCUB_PWAIT_MODE_STATUS)
 
 struct dmub_srv_dcn31_reg_offset {
 #define DMUB_SR(reg) uint32_t reg;
index 73b9e0a..20a3d4e 100644
@@ -127,6 +127,8 @@ struct av_sync_data {
 static const uint8_t DP_SINK_DEVICE_STR_ID_1[] = {7, 1, 8, 7, 3, 0};
 static const uint8_t DP_SINK_DEVICE_STR_ID_2[] = {7, 1, 8, 7, 5, 0};
 
+static const u8 DP_SINK_BRANCH_DEV_NAME_7580[] = "7580\x80u";
+
 /*MST Dock*/
 static const uint8_t SYNAPTICS_DEVICE_ID[] = "SYNA";
 
index 247c6e9..1cb399d 100644
@@ -22,6 +22,7 @@
 #ifndef SMU_11_0_7_PPTABLE_H
 #define SMU_11_0_7_PPTABLE_H
 
+#pragma pack(push, 1)
 
 #define SMU_11_0_7_TABLE_FORMAT_REVISION                  15
 
@@ -139,7 +140,7 @@ struct smu_11_0_7_overdrive_table
     uint32_t max[SMU_11_0_7_MAX_ODSETTING];                   //default maximum settings
     uint32_t min[SMU_11_0_7_MAX_ODSETTING];                   //default minimum settings
     int16_t  pm_setting[SMU_11_0_7_MAX_PMSETTING];            //Optimized power mode feature settings
-} __attribute__((packed));
+};
 
 enum SMU_11_0_7_PPCLOCK_ID {
     SMU_11_0_7_PPCLOCK_GFXCLK = 0,
@@ -166,7 +167,7 @@ struct smu_11_0_7_power_saving_clock_table
     uint32_t count;                                           //power_saving_clock_count = SMU_11_0_7_PPCLOCK_COUNT
     uint32_t max[SMU_11_0_7_MAX_PPCLOCK];                       //PowerSavingClock Mode Clock Maximum array In MHz
     uint32_t min[SMU_11_0_7_MAX_PPCLOCK];                       //PowerSavingClock Mode Clock Minimum array In MHz
-} __attribute__((packed));
+};
 
 struct smu_11_0_7_powerplay_table
 {
@@ -191,6 +192,8 @@ struct smu_11_0_7_powerplay_table
       struct smu_11_0_7_overdrive_table               overdrive_table;
 
       PPTable_t smc_pptable;                        //PPTable_t in smu11_driver_if.h
-} __attribute__((packed));
+};
+
+#pragma pack(pop)
 
 #endif
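The pptable hunks above (and the three that follow) swap per-struct `__attribute__((packed))` for a `#pragma pack(push, 1)` region. The pragma covers every type defined while it is active, including `PPTable_t` pulled in from `smu11_driver_if.h`, which a per-struct attribute cannot reach. A minimal illustration of the layout difference, assuming a typical x86-64 ABI:

```c
#include <stdint.h>

/* Natural alignment: the uint32_t forces 3 padding bytes after a. */
struct with_padding { uint8_t a; uint32_t b; };

#pragma pack(push, 1)
/* Inside the pack(1) region, members are byte-aligned and no padding is
 * inserted, matching the firmware's on-wire table layout. */
struct no_padding { uint8_t a; uint32_t b; };
#pragma pack(pop)
```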
index 7a63cf8..0116e3d 100644
@@ -22,6 +22,7 @@
 #ifndef SMU_11_0_PPTABLE_H
 #define SMU_11_0_PPTABLE_H
 
+#pragma pack(push, 1)
 
 #define SMU_11_0_TABLE_FORMAT_REVISION                  12
 
@@ -109,7 +110,7 @@ struct smu_11_0_overdrive_table
     uint8_t  cap[SMU_11_0_MAX_ODFEATURE];                     //OD feature support flags
     uint32_t max[SMU_11_0_MAX_ODSETTING];                     //default maximum settings
     uint32_t min[SMU_11_0_MAX_ODSETTING];                     //default minimum settings
-} __attribute__((packed));
+};
 
 enum SMU_11_0_PPCLOCK_ID {
     SMU_11_0_PPCLOCK_GFXCLK = 0,
@@ -133,7 +134,7 @@ struct smu_11_0_power_saving_clock_table
     uint32_t count;                                           //power_saving_clock_count = SMU_11_0_PPCLOCK_COUNT
     uint32_t max[SMU_11_0_MAX_PPCLOCK];                       //PowerSavingClock Mode Clock Maximum array In MHz
     uint32_t min[SMU_11_0_MAX_PPCLOCK];                       //PowerSavingClock Mode Clock Minimum array In MHz
-} __attribute__((packed));
+};
 
 struct smu_11_0_powerplay_table
 {
@@ -162,6 +163,8 @@ struct smu_11_0_powerplay_table
 #ifndef SMU_11_0_PARTIAL_PPTABLE
       PPTable_t smc_pptable;                        //PPTable_t in smu11_driver_if.h
 #endif
-} __attribute__((packed));
+};
+
+#pragma pack(pop)
 
 #endif
index 3f29f43..478862d 100644
@@ -22,6 +22,8 @@
 #ifndef SMU_13_0_7_PPTABLE_H
 #define SMU_13_0_7_PPTABLE_H
 
+#pragma pack(push, 1)
+
 #define SMU_13_0_7_TABLE_FORMAT_REVISION 15
 
 //// POWERPLAYTABLE::ulPlatformCaps
@@ -194,7 +196,8 @@ struct smu_13_0_7_powerplay_table
     struct smu_13_0_7_overdrive_table overdrive_table;
     uint8_t padding1;
     PPTable_t smc_pptable; //PPTable_t in driver_if.h
-} __attribute__((packed));
+};
 
+#pragma pack(pop)
 
 #endif
index 1f31139..0433074 100644
@@ -22,6 +22,8 @@
 #ifndef SMU_13_0_PPTABLE_H
 #define SMU_13_0_PPTABLE_H
 
+#pragma pack(push, 1)
+
 #define SMU_13_0_TABLE_FORMAT_REVISION                  1
 
 //// POWERPLAYTABLE::ulPlatformCaps
@@ -109,7 +111,7 @@ struct smu_13_0_overdrive_table {
        uint8_t  cap[SMU_13_0_MAX_ODFEATURE];                     //OD feature support flags
        uint32_t max[SMU_13_0_MAX_ODSETTING];                     //default maximum settings
        uint32_t min[SMU_13_0_MAX_ODSETTING];                     //default minimum settings
-} __attribute__((packed));
+};
 
 enum SMU_13_0_PPCLOCK_ID {
        SMU_13_0_PPCLOCK_GFXCLK = 0,
@@ -132,7 +134,7 @@ struct smu_13_0_power_saving_clock_table {
        uint32_t count;                                           //power_saving_clock_count = SMU_11_0_PPCLOCK_COUNT
        uint32_t max[SMU_13_0_MAX_PPCLOCK];                       //PowerSavingClock Mode Clock Maximum array In MHz
        uint32_t min[SMU_13_0_MAX_PPCLOCK];                       //PowerSavingClock Mode Clock Minimum array In MHz
-} __attribute__((packed));
+};
 
 struct smu_13_0_powerplay_table {
        struct atom_common_table_header header;
@@ -160,6 +162,8 @@ struct smu_13_0_powerplay_table {
 #ifndef SMU_13_0_PARTIAL_PPTABLE
        PPTable_t smc_pptable;                        //PPTable_t in driver_if.h
 #endif
-} __attribute__((packed));
+};
+
+#pragma pack(pop)
 
 #endif
index 4551bc8..f573d58 100644
@@ -160,13 +160,12 @@ void ast_dp_launch(struct drm_device *dev, u8 bPower)
                }
 
                if (bDPExecute)
-                       ast->tx_chip_type = AST_TX_ASTDP;
+                       ast->tx_chip_types |= BIT(AST_TX_ASTDP);
 
                ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xE5,
                                                        (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK,
                                                        ASTDP_HOST_EDID_READ_DONE);
-       } else
-               ast->tx_chip_type = AST_TX_NONE;
+       }
 }
 
 
index 204c926..4f75a9e 100644
@@ -450,7 +450,7 @@ void ast_init_3rdtx(struct drm_device *dev)
                        ast_init_dvo(dev);
                        break;
                default:
-                       if (ast->tx_chip_type == AST_TX_SIL164)
+                       if (ast->tx_chip_types & BIT(AST_TX_SIL164))
                                ast_init_dvo(dev);
                        else
                                ast_init_analog(dev);
index afebe35..a34db43 100644
@@ -73,6 +73,11 @@ enum ast_tx_chip {
        AST_TX_ASTDP,
 };
 
+#define AST_TX_NONE_BIT                BIT(AST_TX_NONE)
+#define AST_TX_SIL164_BIT      BIT(AST_TX_SIL164)
+#define AST_TX_DP501_BIT       BIT(AST_TX_DP501)
+#define AST_TX_ASTDP_BIT       BIT(AST_TX_ASTDP)
+
 #define AST_DRAM_512Mx16 0
 #define AST_DRAM_1Gx16   1
 #define AST_DRAM_512Mx32 2
@@ -173,7 +178,7 @@ struct ast_private {
        struct drm_plane primary_plane;
        struct ast_cursor_plane cursor_plane;
        struct drm_crtc crtc;
-       union {
+       struct {
                struct {
                        struct drm_encoder encoder;
                        struct ast_vga_connector vga_connector;
@@ -199,7 +204,7 @@ struct ast_private {
                ast_use_defaults
        } config_mode;
 
-       enum ast_tx_chip tx_chip_type;
+       unsigned long tx_chip_types;            /* bitfield of enum ast_chip_type */
        u8 *dp501_fw_addr;
        const struct firmware *dp501_fw;        /* dp501 fw */
 };
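The change above turns the single `tx_chip_type` enum value into a `tx_chip_types` bitmask so one board can advertise several transmitters at once (e.g. SIL164 and ASTDP). The conversion pattern, with `BIT()` reproduced for a standalone build:

```c
/* Kernel's BIT() macro, reproduced here so the sketch compiles alone. */
#define BIT(n) (1ul << (n))

/* Same shape as enum ast_tx_chip: each enumerator names a bit position. */
enum tx_chip { TX_NONE, TX_SIL164, TX_DP501, TX_ASTDP };

static int has_tx(unsigned long types, enum tx_chip chip)
{
    return !!(types & BIT(chip));
}
```

Querying with `&` instead of a `switch` is what lets the diagnostic prints and output-init paths in the later hunks handle multiple transmitters independently.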
index d770d5a..0674532 100644
@@ -216,7 +216,7 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
        }
 
        /* Check 3rd Tx option (digital output afaik) */
-       ast->tx_chip_type = AST_TX_NONE;
+       ast->tx_chip_types |= AST_TX_NONE_BIT;
 
        /*
         * VGACRA3 Enhanced Color Mode Register, check if DVO is already
@@ -229,7 +229,7 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
        if (!*need_post) {
                jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xff);
                if (jreg & 0x80)
-                       ast->tx_chip_type = AST_TX_SIL164;
+                       ast->tx_chip_types = AST_TX_SIL164_BIT;
        }
 
        if ((ast->chip == AST2300) || (ast->chip == AST2400) || (ast->chip == AST2500)) {
@@ -241,7 +241,7 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
                jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
                switch (jreg) {
                case 0x04:
-                       ast->tx_chip_type = AST_TX_SIL164;
+                       ast->tx_chip_types = AST_TX_SIL164_BIT;
                        break;
                case 0x08:
                        ast->dp501_fw_addr = drmm_kzalloc(dev, 32*1024, GFP_KERNEL);
@@ -254,22 +254,19 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
                        }
                        fallthrough;
                case 0x0c:
-                       ast->tx_chip_type = AST_TX_DP501;
+                       ast->tx_chip_types = AST_TX_DP501_BIT;
                }
        } else if (ast->chip == AST2600)
                ast_dp_launch(&ast->base, 0);
 
        /* Print stuff for diagnostic purposes */
-       switch(ast->tx_chip_type) {
-       case AST_TX_SIL164:
+       if (ast->tx_chip_types & AST_TX_NONE_BIT)
+               drm_info(dev, "Using analog VGA\n");
+       if (ast->tx_chip_types & AST_TX_SIL164_BIT)
                drm_info(dev, "Using Sil164 TMDS transmitter\n");
-               break;
-       case AST_TX_DP501:
+       if (ast->tx_chip_types & AST_TX_DP501_BIT)
                drm_info(dev, "Using DP501 DisplayPort transmitter\n");
-               break;
-       default:
-               drm_info(dev, "Analog VGA only\n");
-       }
+
        return 0;
 }
 
index 323af27..db2010a 100644
@@ -997,10 +997,10 @@ static void ast_crtc_dpms(struct drm_crtc *crtc, int mode)
        case DRM_MODE_DPMS_ON:
                ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT,  0x01, 0xdf, 0);
                ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb6, 0xfc, 0);
-               if (ast->tx_chip_type == AST_TX_DP501)
+               if (ast->tx_chip_types & AST_TX_DP501_BIT)
                        ast_set_dp501_video_output(crtc->dev, 1);
 
-               if (ast->tx_chip_type == AST_TX_ASTDP) {
+               if (ast->tx_chip_types & AST_TX_ASTDP_BIT) {
                        ast_dp_power_on_off(crtc->dev, AST_DP_POWER_ON);
                        ast_wait_for_vretrace(ast);
                        ast_dp_set_on_off(crtc->dev, 1);
@@ -1012,17 +1012,17 @@ static void ast_crtc_dpms(struct drm_crtc *crtc, int mode)
        case DRM_MODE_DPMS_SUSPEND:
        case DRM_MODE_DPMS_OFF:
                ch = mode;
-               if (ast->tx_chip_type == AST_TX_DP501)
+               if (ast->tx_chip_types & AST_TX_DP501_BIT)
                        ast_set_dp501_video_output(crtc->dev, 0);
-               break;
 
-               if (ast->tx_chip_type == AST_TX_ASTDP) {
+               if (ast->tx_chip_types & AST_TX_ASTDP_BIT) {
                        ast_dp_set_on_off(crtc->dev, 0);
                        ast_dp_power_on_off(crtc->dev, AST_DP_POWER_OFF);
                }
 
                ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT,  0x01, 0xdf, 0x20);
                ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb6, 0xfc, ch);
+               break;
        }
 }
 
@@ -1155,7 +1155,7 @@ ast_crtc_helper_atomic_flush(struct drm_crtc *crtc,
                ast_crtc_load_lut(ast, crtc);
 
        //Set Aspeed Display-Port
-       if (ast->tx_chip_type == AST_TX_ASTDP)
+       if (ast->tx_chip_types & AST_TX_ASTDP_BIT)
                ast_dp_set_mode(crtc, vbios_mode_info);
 
        mutex_unlock(&ast->ioregs_lock);
@@ -1739,22 +1739,26 @@ int ast_mode_config_init(struct ast_private *ast)
 
        ast_crtc_init(dev);
 
-       switch (ast->tx_chip_type) {
-       case AST_TX_NONE:
+       if (ast->tx_chip_types & AST_TX_NONE_BIT) {
                ret = ast_vga_output_init(ast);
-               break;
-       case AST_TX_SIL164:
+               if (ret)
+                       return ret;
+       }
+       if (ast->tx_chip_types & AST_TX_SIL164_BIT) {
                ret = ast_sil164_output_init(ast);
-               break;
-       case AST_TX_DP501:
+               if (ret)
+                       return ret;
+       }
+       if (ast->tx_chip_types & AST_TX_DP501_BIT) {
                ret = ast_dp501_output_init(ast);
-               break;
-       case AST_TX_ASTDP:
+               if (ret)
+                       return ret;
+       }
+       if (ast->tx_chip_types & AST_TX_ASTDP_BIT) {
                ret = ast_astdp_output_init(ast);
-               break;
+               if (ret)
+                       return ret;
        }
-       if (ret)
-               return ret;
 
        drm_mode_config_reset(dev);
 
index 0aa9cf0..82fd3c8 100644
@@ -391,7 +391,7 @@ void ast_post_gpu(struct drm_device *dev)
 
                ast_init_3rdtx(dev);
        } else {
-               if (ast->tx_chip_type != AST_TX_NONE)
+               if (ast->tx_chip_types & AST_TX_SIL164_BIT)
                        ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xcf, 0x80);        /* Enable DVO */
        }
 }
index b97f6e8..01c8b80 100644
@@ -1266,6 +1266,25 @@ static int analogix_dp_bridge_attach(struct drm_bridge *bridge,
        return 0;
 }
 
+static
+struct drm_crtc *analogix_dp_get_old_crtc(struct analogix_dp_device *dp,
+                                         struct drm_atomic_state *state)
+{
+       struct drm_encoder *encoder = dp->encoder;
+       struct drm_connector *connector;
+       struct drm_connector_state *conn_state;
+
+       connector = drm_atomic_get_old_connector_for_encoder(state, encoder);
+       if (!connector)
+               return NULL;
+
+       conn_state = drm_atomic_get_old_connector_state(state, connector);
+       if (!conn_state)
+               return NULL;
+
+       return conn_state->crtc;
+}
+
 static
 struct drm_crtc *analogix_dp_get_new_crtc(struct analogix_dp_device *dp,
                                          struct drm_atomic_state *state)
@@ -1446,14 +1465,16 @@ analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge,
 {
        struct drm_atomic_state *old_state = old_bridge_state->base.state;
        struct analogix_dp_device *dp = bridge->driver_private;
-       struct drm_crtc *crtc;
+       struct drm_crtc *old_crtc, *new_crtc;
+       struct drm_crtc_state *old_crtc_state = NULL;
        struct drm_crtc_state *new_crtc_state = NULL;
+       int ret;
 
-       crtc = analogix_dp_get_new_crtc(dp, old_state);
-       if (!crtc)
+       new_crtc = analogix_dp_get_new_crtc(dp, old_state);
+       if (!new_crtc)
                goto out;
 
-       new_crtc_state = drm_atomic_get_new_crtc_state(old_state, crtc);
+       new_crtc_state = drm_atomic_get_new_crtc_state(old_state, new_crtc);
        if (!new_crtc_state)
                goto out;
 
@@ -1462,6 +1483,19 @@ analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge,
                return;
 
 out:
+       old_crtc = analogix_dp_get_old_crtc(dp, old_state);
+       if (old_crtc) {
+               old_crtc_state = drm_atomic_get_old_crtc_state(old_state,
+                                                              old_crtc);
+
+               /* When moving from PSR to fully disabled, exit PSR first. */
+               if (old_crtc_state && old_crtc_state->self_refresh_active) {
+                       ret = analogix_dp_disable_psr(dp);
+                       if (ret)
+                               DRM_ERROR("Failed to disable psr (%d)\n", ret);
+               }
+       }
+
        analogix_dp_bridge_disable(bridge);
 }
 
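The hunk above makes the bridge exit PSR (panel self-refresh) before running the full disable when the old CRTC state shows self-refresh active. A minimal userspace sketch of that ordering, with the driver calls replaced by stand-ins that record the event sequence (all names here are illustrative, not the analogix API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical mirror of the ordering the hunk enforces: if the old
 * CRTC state shows self-refresh active, exit PSR first, then run the
 * full bridge disable. Events are recorded in call order. */
enum { EV_NONE, EV_PSR_EXIT, EV_BRIDGE_DISABLE };

struct crtc_state { bool self_refresh_active; };

static void bridge_atomic_disable(const struct crtc_state *old_state,
                                  int events[2])
{
    int n = 0;

    if (old_state && old_state->self_refresh_active)
        events[n++] = EV_PSR_EXIT;    /* leave PSR before power-down */
    events[n++] = EV_BRIDGE_DISABLE;  /* then fully disable the link */
    if (n < 2)
        events[n] = EV_NONE;
}
```

The point of the fix is that the PSR exit keys off the *old* CRTC state, since the new state may no longer reference any CRTC at all.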
index 2831f08..ac66f40 100644
@@ -577,7 +577,7 @@ static int sn65dsi83_parse_dt(struct sn65dsi83 *ctx, enum sn65dsi83_model model)
        ctx->host_node = of_graph_get_remote_port_parent(endpoint);
        of_node_put(endpoint);
 
-       if (ctx->dsi_lanes < 0 || ctx->dsi_lanes > 4) {
+       if (ctx->dsi_lanes <= 0 || ctx->dsi_lanes > 4) {
                ret = -EINVAL;
                goto err_put_node;
        }
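The one-character fix above tightens `< 0` to `<= 0`: a DT property that was never parsed leaves `dsi_lanes` at its zero default, which the old check accepted. The predicate, extracted as a plain function:

```c
#include <stdbool.h>

/* Range check matching the fixed condition: a DSI link needs between
 * 1 and 4 data lanes, so 0 (e.g. an absent device-tree property left
 * at its zero-initialized default) must be rejected along with
 * negative values. */
static bool dsi_lanes_valid(int lanes)
{
    return lanes > 0 && lanes <= 4;
}
```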
index 9603193..987e4b2 100644
@@ -1011,9 +1011,19 @@ crtc_needs_disable(struct drm_crtc_state *old_state,
                return drm_atomic_crtc_effectively_active(old_state);
 
        /*
-        * We need to run through the crtc_funcs->disable() function if the CRTC
-        * is currently on, if it's transitioning to self refresh mode, or if
-        * it's in self refresh mode and needs to be fully disabled.
+        * We need to disable bridge(s) and CRTC if we're transitioning out of
+        * self-refresh and changing CRTCs at the same time, because the
+        * bridge tracks self-refresh status via CRTC state.
+        */
+       if (old_state->self_refresh_active &&
+           old_state->crtc != new_state->crtc)
+               return true;
+
+       /*
+        * We also need to run through the crtc_funcs->disable() function if
+        * the CRTC is currently on, if it's transitioning to self refresh
+        * mode, or if it's in self refresh mode and needs to be fully
+        * disabled.
         */
        return old_state->active ||
               (old_state->self_refresh_active && !new_state->active) ||
index 9c8829f..f7863d6 100644
@@ -69,7 +69,7 @@ static void ipu_crtc_disable_planes(struct ipu_crtc *ipu_crtc,
        drm_atomic_crtc_state_for_each_plane(plane, old_crtc_state) {
                if (plane == &ipu_crtc->plane[0]->base)
                        disable_full = true;
-               if (&ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base)
+               if (ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base)
                        disable_partial = true;
        }
 
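The ipu fix above corrects a classic always-true test: `&ipu_crtc->plane[1]` takes the *address of the array slot*, which can never be NULL, whereas the intent was to test the *pointer stored in* the slot. A toy reproduction of the difference (names are illustrative):

```c
#include <stddef.h>

/* With an array of pointers, `&arr[1]` is the address of the slot and
 * is never NULL for a valid array, so the old test was always true.
 * The fix tests the stored pointer `arr[1]` instead. */
struct plane { int id; };

static int second_plane_present_buggy(struct plane *planes[2])
{
    return &planes[1] != NULL;  /* address of slot: always true */
}

static int second_plane_present_fixed(struct plane *planes[2])
{
    return planes[1] != NULL;   /* value in slot: what was meant */
}
```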
index 7fcbc2a..087e69b 100644
@@ -233,6 +233,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
                struct drm_file *file)
 {
        struct panfrost_device *pfdev = dev->dev_private;
+       struct panfrost_file_priv *file_priv = file->driver_priv;
        struct drm_panfrost_submit *args = data;
        struct drm_syncobj *sync_out = NULL;
        struct panfrost_job *job;
@@ -262,12 +263,12 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
        job->jc = args->jc;
        job->requirements = args->requirements;
        job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
-       job->file_priv = file->driver_priv;
+       job->mmu = file_priv->mmu;
 
        slot = panfrost_job_get_slot(job);
 
        ret = drm_sched_job_init(&job->base,
-                                &job->file_priv->sched_entity[slot],
+                                &file_priv->sched_entity[slot],
                                 NULL);
        if (ret)
                goto out_put_job;
index fda5871..7c42084 100644
@@ -201,7 +201,7 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
                return;
        }
 
-       cfg = panfrost_mmu_as_get(pfdev, job->file_priv->mmu);
+       cfg = panfrost_mmu_as_get(pfdev, job->mmu);
 
        job_write(pfdev, JS_HEAD_NEXT_LO(js), lower_32_bits(jc_head));
        job_write(pfdev, JS_HEAD_NEXT_HI(js), upper_32_bits(jc_head));
@@ -435,7 +435,7 @@ static void panfrost_job_handle_err(struct panfrost_device *pfdev,
                job->jc = 0;
        }
 
-       panfrost_mmu_as_put(pfdev, job->file_priv->mmu);
+       panfrost_mmu_as_put(pfdev, job->mmu);
        panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
 
        if (signal_fence)
@@ -456,7 +456,7 @@ static void panfrost_job_handle_done(struct panfrost_device *pfdev,
         * happen when we receive the DONE interrupt while doing a GPU reset).
         */
        job->jc = 0;
-       panfrost_mmu_as_put(pfdev, job->file_priv->mmu);
+       panfrost_mmu_as_put(pfdev, job->mmu);
        panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
 
        dma_fence_signal_locked(job->done_fence);
index 77e6d0e..8becc1b 100644
@@ -17,7 +17,7 @@ struct panfrost_job {
        struct kref refcount;
 
        struct panfrost_device *pfdev;
-       struct panfrost_file_priv *file_priv;
+       struct panfrost_mmu *mmu;
 
        /* Fence to be signaled by IRQ handler when the job is complete. */
        struct dma_fence *done_fence;
index b9bb94b..424ef47 100644
@@ -115,6 +115,18 @@ static unsigned int mwait_substates __initdata;
 #define flg2MWAIT(flags) (((flags) >> 24) & 0xFF)
 #define MWAIT2flg(eax) ((eax & 0xFF) << 24)
 
+static __always_inline int __intel_idle(struct cpuidle_device *dev,
+                                       struct cpuidle_driver *drv, int index)
+{
+       struct cpuidle_state *state = &drv->states[index];
+       unsigned long eax = flg2MWAIT(state->flags);
+       unsigned long ecx = 1; /* break on interrupt flag */
+
+       mwait_idle_with_hints(eax, ecx);
+
+       return index;
+}
+
 /**
  * intel_idle - Ask the processor to enter the given idle state.
  * @dev: cpuidle device of the target CPU.
@@ -132,16 +144,19 @@ static unsigned int mwait_substates __initdata;
 static __cpuidle int intel_idle(struct cpuidle_device *dev,
                                struct cpuidle_driver *drv, int index)
 {
-       struct cpuidle_state *state = &drv->states[index];
-       unsigned long eax = flg2MWAIT(state->flags);
-       unsigned long ecx = 1; /* break on interrupt flag */
+       return __intel_idle(dev, drv, index);
+}
 
-       if (state->flags & CPUIDLE_FLAG_IRQ_ENABLE)
-               local_irq_enable();
+static __cpuidle int intel_idle_irq(struct cpuidle_device *dev,
+                                   struct cpuidle_driver *drv, int index)
+{
+       int ret;
 
-       mwait_idle_with_hints(eax, ecx);
+       raw_local_irq_enable();
+       ret = __intel_idle(dev, drv, index);
+       raw_local_irq_disable();
 
-       return index;
+       return ret;
 }
 
 /**
@@ -1801,6 +1816,9 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
                /* Structure copy. */
                drv->states[drv->state_count] = cpuidle_state_table[cstate];
 
+               if (cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IRQ_ENABLE)
+                       drv->states[drv->state_count].enter = intel_idle_irq;
+
                if ((disabled_states_mask & BIT(drv->state_count)) ||
                    ((icpu->use_acpi || force_use_acpi) &&
                     intel_idle_off_by_default(mwait_hint) &&
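The intel_idle change above splits the common MWAIT body into `__intel_idle()` and moves the IRQ handling out of the hot path: instead of branching on `CPUIDLE_FLAG_IRQ_ENABLE` inside every idle entry, states with that flag get a dedicated `intel_idle_irq` callback that brackets the wait with raw IRQ enable/disable. A userspace sketch of the wrapper pattern, with the hardware and IRQ primitives replaced by stand-ins:

```c
#include <stdbool.h>

/* Stand-ins for the raw IRQ primitives; a flag tracks the state so
 * the bracketing can be observed. */
static bool irqs_enabled;

static void raw_local_irq_enable(void)  { irqs_enabled = true;  }
static void raw_local_irq_disable(void) { irqs_enabled = false; }

/* Common body: the real code computes the MWAIT hint from the state
 * flags and calls mwait_idle_with_hints(eax, ecx) here. */
static int __idle(int index)
{
    return index;
}

/* Variant installed only for states flagged CPUIDLE_FLAG_IRQ_ENABLE:
 * interrupts are enabled around the wait and restored afterwards, so
 * the generic path stays branch-free. */
static int idle_irq(int index)
{
    int ret;

    raw_local_irq_enable();
    ret = __idle(index);
    raw_local_irq_disable();

    return ret;
}
```

Selecting the callback once at driver init (as the second hunk does) trades one runtime branch per idle entry for a per-state function pointer.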
index 505a032..9dcf3f5 100644
@@ -402,6 +402,7 @@ config JOYSTICK_N64
 config JOYSTICK_SENSEHAT
        tristate "Raspberry Pi Sense HAT joystick"
        depends on INPUT && I2C
+       depends on HAS_IOMEM
        select MFD_SIMPLE_MFD_I2C
        help
          Say Y here if you want to enable the driver for the
index cbb1599..4804761 100644
@@ -85,13 +85,13 @@ static const struct dmi_system_id dmi_use_low_level_irq[] = {
        },
        {
                /*
-                * Lenovo Yoga Tab2 1051L, something messes with the home-button
+                * Lenovo Yoga Tab2 1051F/1051L, something messes with the home-button
                 * IRQ settings, leading to a non working home-button.
                 */
                .matches = {
                        DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
                        DMI_MATCH(DMI_PRODUCT_NAME, "60073"),
-                       DMI_MATCH(DMI_PRODUCT_VERSION, "1051L"),
+                       DMI_MATCH(DMI_PRODUCT_VERSION, "1051"),
                },
        },
        {} /* Terminating entry */
index 59a1450..ca15061 100644
@@ -942,17 +942,22 @@ static int bcm5974_probe(struct usb_interface *iface,
        if (!dev->tp_data)
                goto err_free_bt_buffer;
 
-       if (dev->bt_urb)
+       if (dev->bt_urb) {
                usb_fill_int_urb(dev->bt_urb, udev,
                                 usb_rcvintpipe(udev, cfg->bt_ep),
                                 dev->bt_data, dev->cfg.bt_datalen,
                                 bcm5974_irq_button, dev, 1);
 
+               dev->bt_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+       }
+
        usb_fill_int_urb(dev->tp_urb, udev,
                         usb_rcvintpipe(udev, cfg->tp_ep),
                         dev->tp_data, dev->cfg.tp_datalen,
                         bcm5974_irq_trackpad, dev, 1);
 
+       dev->tp_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+
        /* create bcm5974 device */
        usb_make_path(udev, dev->phys, sizeof(dev->phys));
        strlcat(dev->phys, "/input0", sizeof(dev->phys));
index 1259ca2..f4a1281 100644
@@ -1499,8 +1499,7 @@ void mmc_blk_cqe_recovery(struct mmc_queue *mq)
        err = mmc_cqe_recovery(host);
        if (err)
                mmc_blk_reset(mq->blkdata, host, MMC_BLK_CQE_RECOVERY);
-       else
-               mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY);
+       mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY);
 
        pr_debug("%s: CQE recovery done\n", mmc_hostname(host));
 }
index 1499a64..f13c08d 100644
@@ -982,6 +982,9 @@ static int gl9763e_runtime_resume(struct sdhci_pci_chip *chip)
        struct sdhci_host *host = slot->host;
        u16 clock;
 
+       if (host->mmc->ios.power_mode != MMC_POWER_ON)
+               return 0;
+
        clock = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
 
        clock |= SDHCI_CLOCK_PLL_EN;
index ebee5f0..be2719a 100644
@@ -51,6 +51,7 @@ static char *status_str[] = {
 };
 
 static char *type_str[] = {
+       "", /* Type 0 is not defined */
        "AMT_MSG_DISCOVERY",
        "AMT_MSG_ADVERTISEMENT",
        "AMT_MSG_REQUEST",
@@ -2220,8 +2221,7 @@ static bool amt_advertisement_handler(struct amt_dev *amt, struct sk_buff *skb)
        struct amt_header_advertisement *amta;
        int hdr_size;
 
-       hdr_size = sizeof(*amta) - sizeof(struct amt_header);
-
+       hdr_size = sizeof(*amta) + sizeof(struct udphdr);
        if (!pskb_may_pull(skb, hdr_size))
                return true;
 
@@ -2251,19 +2251,27 @@ static bool amt_multicast_data_handler(struct amt_dev *amt, struct sk_buff *skb)
        struct ethhdr *eth;
        struct iphdr *iph;
 
+       hdr_size = sizeof(*amtmd) + sizeof(struct udphdr);
+       if (!pskb_may_pull(skb, hdr_size))
+               return true;
+
        amtmd = (struct amt_header_mcast_data *)(udp_hdr(skb) + 1);
        if (amtmd->reserved || amtmd->version)
                return true;
 
-       hdr_size = sizeof(*amtmd) + sizeof(struct udphdr);
        if (iptunnel_pull_header(skb, hdr_size, htons(ETH_P_IP), false))
                return true;
+
        skb_reset_network_header(skb);
        skb_push(skb, sizeof(*eth));
        skb_reset_mac_header(skb);
        skb_pull(skb, sizeof(*eth));
        eth = eth_hdr(skb);
+
+       if (!pskb_may_pull(skb, sizeof(*iph)))
+               return true;
        iph = ip_hdr(skb);
+
        if (iph->version == 4) {
                if (!ipv4_is_multicast(iph->daddr))
                        return true;
@@ -2274,6 +2282,9 @@ static bool amt_multicast_data_handler(struct amt_dev *amt, struct sk_buff *skb)
        } else if (iph->version == 6) {
                struct ipv6hdr *ip6h;
 
+               if (!pskb_may_pull(skb, sizeof(*ip6h)))
+                       return true;
+
                ip6h = ipv6_hdr(skb);
                if (!ipv6_addr_is_multicast(&ip6h->daddr))
                        return true;
@@ -2306,8 +2317,7 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
        struct iphdr *iph;
        int hdr_size, len;
 
-       hdr_size = sizeof(*amtmq) - sizeof(struct amt_header);
-
+       hdr_size = sizeof(*amtmq) + sizeof(struct udphdr);
        if (!pskb_may_pull(skb, hdr_size))
                return true;
 
@@ -2315,22 +2325,27 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
        if (amtmq->reserved || amtmq->version)
                return true;
 
-       hdr_size = sizeof(*amtmq) + sizeof(struct udphdr) - sizeof(*eth);
+       hdr_size -= sizeof(*eth);
        if (iptunnel_pull_header(skb, hdr_size, htons(ETH_P_TEB), false))
                return true;
+
        oeth = eth_hdr(skb);
        skb_reset_mac_header(skb);
        skb_pull(skb, sizeof(*eth));
        skb_reset_network_header(skb);
        eth = eth_hdr(skb);
+       if (!pskb_may_pull(skb, sizeof(*iph)))
+               return true;
+
        iph = ip_hdr(skb);
        if (iph->version == 4) {
-               if (!ipv4_is_multicast(iph->daddr))
-                       return true;
                if (!pskb_may_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS +
                                   sizeof(*ihv3)))
                        return true;
 
+               if (!ipv4_is_multicast(iph->daddr))
+                       return true;
+
                ihv3 = skb_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS);
                skb_reset_transport_header(skb);
                skb_push(skb, sizeof(*iph) + AMT_IPHDR_OPTS);
@@ -2345,15 +2360,17 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
                ip_eth_mc_map(iph->daddr, eth->h_dest);
 #if IS_ENABLED(CONFIG_IPV6)
        } else if (iph->version == 6) {
-               struct ipv6hdr *ip6h = ipv6_hdr(skb);
                struct mld2_query *mld2q;
+               struct ipv6hdr *ip6h;
 
-               if (!ipv6_addr_is_multicast(&ip6h->daddr))
-                       return true;
                if (!pskb_may_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS +
                                   sizeof(*mld2q)))
                        return true;
 
+               ip6h = ipv6_hdr(skb);
+               if (!ipv6_addr_is_multicast(&ip6h->daddr))
+                       return true;
+
                mld2q = skb_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS);
                skb_reset_transport_header(skb);
                skb_push(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS);
@@ -2389,23 +2406,23 @@ static bool amt_update_handler(struct amt_dev *amt, struct sk_buff *skb)
 {
        struct amt_header_membership_update *amtmu;
        struct amt_tunnel_list *tunnel;
-       struct udphdr *udph;
        struct ethhdr *eth;
        struct iphdr *iph;
-       int len;
+       int len, hdr_size;
 
        iph = ip_hdr(skb);
-       udph = udp_hdr(skb);
 
-       if (__iptunnel_pull_header(skb, sizeof(*udph), skb->protocol,
-                                  false, false))
+       hdr_size = sizeof(*amtmu) + sizeof(struct udphdr);
+       if (!pskb_may_pull(skb, hdr_size))
                return true;
 
-       amtmu = (struct amt_header_membership_update *)skb->data;
+       amtmu = (struct amt_header_membership_update *)(udp_hdr(skb) + 1);
        if (amtmu->reserved || amtmu->version)
                return true;
 
-       skb_pull(skb, sizeof(*amtmu));
+       if (iptunnel_pull_header(skb, hdr_size, skb->protocol, false))
+               return true;
+
        skb_reset_network_header(skb);
 
        list_for_each_entry_rcu(tunnel, &amt->tunnel_list, list) {
@@ -2426,6 +2443,9 @@ static bool amt_update_handler(struct amt_dev *amt, struct sk_buff *skb)
        return true;
 
 report:
+       if (!pskb_may_pull(skb, sizeof(*iph)))
+               return true;
+
        iph = ip_hdr(skb);
        if (iph->version == 4) {
                if (ip_mc_check_igmp(skb)) {
@@ -2679,7 +2699,8 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
        amt = rcu_dereference_sk_user_data(sk);
        if (!amt) {
                err = true;
-               goto drop;
+               kfree_skb(skb);
+               goto out;
        }
 
        skb->dev = amt->dev;
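The common thread in the amt hunks above is ordering: every header must be confirmed linear with `pskb_may_pull()` *before* it is cast and dereferenced, and each subsequent header (inner IPv4/IPv6) needs its own check after the outer headers are pulled. A toy version of the rule with a plain buffer standing in for the skb (the header layout and names are invented for illustration):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct buf { const uint8_t *data; size_t len; };

/* 4-byte toy header: version and reserved must both be zero. */
struct toy_hdr { uint8_t version; uint8_t reserved; uint16_t payload; };

/* Analog of pskb_may_pull(): is the requested span available? */
static bool may_pull(const struct buf *b, size_t need)
{
    return b->len >= need;
}

/* Returns false (drop) on truncated or malformed input. The length
 * check comes first; only then is the header read. */
static bool parse_toy_hdr(const struct buf *b, struct toy_hdr *out)
{
    if (!may_pull(b, sizeof(*out)))
        return false;
    memcpy(out, b->data, sizeof(*out));
    if (out->version != 0 || out->reserved != 0)
        return false;
    return true;
}
```

Reversing the order, as the old code did in several handlers, reads fields from memory that may lie past the linear data.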
index f85372a..3d42718 100644
@@ -6218,45 +6218,33 @@ int bond_create(struct net *net, const char *name)
 {
        struct net_device *bond_dev;
        struct bonding *bond;
-       struct alb_bond_info *bond_info;
-       int res;
+       int res = -ENOMEM;
 
        rtnl_lock();
 
        bond_dev = alloc_netdev_mq(sizeof(struct bonding),
                                   name ? name : "bond%d", NET_NAME_UNKNOWN,
                                   bond_setup, tx_queues);
-       if (!bond_dev) {
-               pr_err("%s: eek! can't alloc netdev!\n", name);
-               rtnl_unlock();
-               return -ENOMEM;
-       }
+       if (!bond_dev)
+               goto out;
 
-       /*
-        * Initialize rx_hashtbl_used_head to RLB_NULL_INDEX.
-        * It is set to 0 by default which is wrong.
-        */
        bond = netdev_priv(bond_dev);
-       bond_info = &(BOND_ALB_INFO(bond));
-       bond_info->rx_hashtbl_used_head = RLB_NULL_INDEX;
-
        dev_net_set(bond_dev, net);
        bond_dev->rtnl_link_ops = &bond_link_ops;
 
        res = register_netdevice(bond_dev);
        if (res < 0) {
                free_netdev(bond_dev);
-               rtnl_unlock();
-
-               return res;
+               goto out;
        }
 
        netif_carrier_off(bond_dev);
 
        bond_work_init_all(bond);
 
+out:
        rtnl_unlock();
-       return 0;
+       return res;
 }
 
 static int __net_init bond_net_init(struct net *net)
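The `bond_create()` rework above converts three separate unlock-and-return paths into a single `out:` label, with `res` pre-seeded to `-ENOMEM` and overwritten at each later step. A compact stand-alone illustration of that single-exit shape (lock and allocator are stand-ins, not the bonding API):

```c
#include <errno.h>
#include <stdlib.h>

/* Depth counter stands in for rtnl_lock()/rtnl_unlock(); it lets the
 * balance be checked across every return path. */
static int lock_depth;

static void lock(void)   { lock_depth++; }
static void unlock(void) { lock_depth--; }

static int create(int alloc_ok, int register_ok)
{
    void *dev;
    int res = -ENOMEM;       /* default: allocation failure */

    lock();
    dev = alloc_ok ? malloc(1) : NULL;
    if (!dev)
        goto out;

    res = register_ok ? 0 : -EBUSY;
    if (res < 0) {
        free(dev);
        goto out;
    }
    free(dev);               /* toy cleanup; real code keeps the netdev */
out:
    unlock();                /* single exit: lock released on every path */
    return res;
}
```

The same hunk also drops the manual `rx_hashtbl_used_head` initialization from this path, which the removed comment had flagged as a workaround.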
index 6f404f9..5a6f444 100644
@@ -151,7 +151,8 @@ static int bond_slave_changelink(struct net_device *bond_dev,
                snprintf(queue_id_str, sizeof(queue_id_str), "%s:%u\n",
                         slave_dev->name, queue_id);
                bond_opt_initstr(&newval, queue_id_str);
-               err = __bond_opt_set(bond, BOND_OPT_QUEUE_ID, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_QUEUE_ID, &newval,
+                                    data[IFLA_BOND_SLAVE_QUEUE_ID], extack);
                if (err)
                        return err;
        }
@@ -175,7 +176,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int mode = nla_get_u8(data[IFLA_BOND_MODE]);
 
                bond_opt_initval(&newval, mode);
-               err = __bond_opt_set(bond, BOND_OPT_MODE, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_MODE, &newval,
+                                    data[IFLA_BOND_MODE], extack);
                if (err)
                        return err;
        }
@@ -192,7 +194,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        active_slave = slave_dev->name;
                }
                bond_opt_initstr(&newval, active_slave);
-               err = __bond_opt_set(bond, BOND_OPT_ACTIVE_SLAVE, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_ACTIVE_SLAVE, &newval,
+                                    data[IFLA_BOND_ACTIVE_SLAVE], extack);
                if (err)
                        return err;
        }
@@ -200,7 +203,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                miimon = nla_get_u32(data[IFLA_BOND_MIIMON]);
 
                bond_opt_initval(&newval, miimon);
-               err = __bond_opt_set(bond, BOND_OPT_MIIMON, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_MIIMON, &newval,
+                                    data[IFLA_BOND_MIIMON], extack);
                if (err)
                        return err;
        }
@@ -208,7 +212,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int updelay = nla_get_u32(data[IFLA_BOND_UPDELAY]);
 
                bond_opt_initval(&newval, updelay);
-               err = __bond_opt_set(bond, BOND_OPT_UPDELAY, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_UPDELAY, &newval,
+                                    data[IFLA_BOND_UPDELAY], extack);
                if (err)
                        return err;
        }
@@ -216,7 +221,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int downdelay = nla_get_u32(data[IFLA_BOND_DOWNDELAY]);
 
                bond_opt_initval(&newval, downdelay);
-               err = __bond_opt_set(bond, BOND_OPT_DOWNDELAY, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_DOWNDELAY, &newval,
+                                    data[IFLA_BOND_DOWNDELAY], extack);
                if (err)
                        return err;
        }
@@ -224,7 +230,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int delay = nla_get_u32(data[IFLA_BOND_PEER_NOTIF_DELAY]);
 
                bond_opt_initval(&newval, delay);
-               err = __bond_opt_set(bond, BOND_OPT_PEER_NOTIF_DELAY, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_PEER_NOTIF_DELAY, &newval,
+                                    data[IFLA_BOND_PEER_NOTIF_DELAY], extack);
                if (err)
                        return err;
        }
@@ -232,7 +239,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int use_carrier = nla_get_u8(data[IFLA_BOND_USE_CARRIER]);
 
                bond_opt_initval(&newval, use_carrier);
-               err = __bond_opt_set(bond, BOND_OPT_USE_CARRIER, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_USE_CARRIER, &newval,
+                                    data[IFLA_BOND_USE_CARRIER], extack);
                if (err)
                        return err;
        }
@@ -240,12 +248,14 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int arp_interval = nla_get_u32(data[IFLA_BOND_ARP_INTERVAL]);
 
                if (arp_interval && miimon) {
-                       netdev_err(bond->dev, "ARP monitoring cannot be used with MII monitoring\n");
+                       NL_SET_ERR_MSG_ATTR(extack, data[IFLA_BOND_ARP_INTERVAL],
+                                           "ARP monitoring cannot be used with MII monitoring");
                        return -EINVAL;
                }
 
                bond_opt_initval(&newval, arp_interval);
-               err = __bond_opt_set(bond, BOND_OPT_ARP_INTERVAL, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_ARP_INTERVAL, &newval,
+                                    data[IFLA_BOND_ARP_INTERVAL], extack);
                if (err)
                        return err;
        }
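The bonding netlink hunks all follow one pattern: `__bond_opt_set()` now receives the originating attribute and the extended-ACK struct, so a validation failure can point userspace at the exact attribute instead of emitting a bare errno plus a kernel log line. A userspace analog of threading that error context through a setter (the types here are invented for illustration, not the netlink API):

```c
#include <stddef.h>

/* Toy extended-ACK: a message plus the id of the offending attribute,
 * filled in by the setter on failure. */
struct extack { const char *msg; int bad_attr; };

static int opt_set(int attr_id, int value, struct extack *ext)
{
    if (value < 0) {                  /* toy validation failure */
        ext->msg = "value must be non-negative";
        ext->bad_attr = attr_id;      /* names the culprit attribute */
        return -1;
    }
    return 0;
}
```

With the real `NL_SET_ERR_MSG_ATTR()`, the message and attribute offset travel back in the netlink ACK, which is why the MII/ARP conflict errors above switch from `netdev_err()` to that macro.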
@@ -264,7 +274,9 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
 
                        bond_opt_initval(&newval, (__force u64)target);
                        err = __bond_opt_set(bond, BOND_OPT_ARP_TARGETS,
-                                            &newval);
+                                            &newval,
+                                            data[IFLA_BOND_ARP_IP_TARGET],
+                                            extack);
                        if (err)
                                break;
                        i++;
@@ -292,7 +304,9 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
 
                        bond_opt_initextra(&newval, &addr6, sizeof(addr6));
                        err = __bond_opt_set(bond, BOND_OPT_NS_TARGETS,
-                                            &newval);
+                                            &newval,
+                                            data[IFLA_BOND_NS_IP6_TARGET],
+                                            extack);
                        if (err)
                                break;
                        i++;
@@ -307,12 +321,14 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int arp_validate = nla_get_u32(data[IFLA_BOND_ARP_VALIDATE]);
 
                if (arp_validate && miimon) {
-                       netdev_err(bond->dev, "ARP validating cannot be used with MII monitoring\n");
+                       NL_SET_ERR_MSG_ATTR(extack, data[IFLA_BOND_ARP_INTERVAL],
+                                           "ARP validating cannot be used with MII monitoring");
                        return -EINVAL;
                }
 
                bond_opt_initval(&newval, arp_validate);
-               err = __bond_opt_set(bond, BOND_OPT_ARP_VALIDATE, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_ARP_VALIDATE, &newval,
+                                    data[IFLA_BOND_ARP_VALIDATE], extack);
                if (err)
                        return err;
        }
@@ -321,7 +337,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u32(data[IFLA_BOND_ARP_ALL_TARGETS]);
 
                bond_opt_initval(&newval, arp_all_targets);
-               err = __bond_opt_set(bond, BOND_OPT_ARP_ALL_TARGETS, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_ARP_ALL_TARGETS, &newval,
+                                    data[IFLA_BOND_ARP_ALL_TARGETS], extack);
                if (err)
                        return err;
        }
@@ -335,7 +352,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        primary = dev->name;
 
                bond_opt_initstr(&newval, primary);
-               err = __bond_opt_set(bond, BOND_OPT_PRIMARY, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_PRIMARY, &newval,
+                                    data[IFLA_BOND_PRIMARY], extack);
                if (err)
                        return err;
        }
@@ -344,7 +362,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u8(data[IFLA_BOND_PRIMARY_RESELECT]);
 
                bond_opt_initval(&newval, primary_reselect);
-               err = __bond_opt_set(bond, BOND_OPT_PRIMARY_RESELECT, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_PRIMARY_RESELECT, &newval,
+                                    data[IFLA_BOND_PRIMARY_RESELECT], extack);
                if (err)
                        return err;
        }
@@ -353,7 +372,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u8(data[IFLA_BOND_FAIL_OVER_MAC]);
 
                bond_opt_initval(&newval, fail_over_mac);
-               err = __bond_opt_set(bond, BOND_OPT_FAIL_OVER_MAC, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_FAIL_OVER_MAC, &newval,
+                                    data[IFLA_BOND_FAIL_OVER_MAC], extack);
                if (err)
                        return err;
        }
@@ -362,7 +382,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u8(data[IFLA_BOND_XMIT_HASH_POLICY]);
 
                bond_opt_initval(&newval, xmit_hash_policy);
-               err = __bond_opt_set(bond, BOND_OPT_XMIT_HASH, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_XMIT_HASH, &newval,
+                                    data[IFLA_BOND_XMIT_HASH_POLICY], extack);
                if (err)
                        return err;
        }
@@ -371,7 +392,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u32(data[IFLA_BOND_RESEND_IGMP]);
 
                bond_opt_initval(&newval, resend_igmp);
-               err = __bond_opt_set(bond, BOND_OPT_RESEND_IGMP, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_RESEND_IGMP, &newval,
+                                    data[IFLA_BOND_RESEND_IGMP], extack);
                if (err)
                        return err;
        }
@@ -380,7 +402,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u8(data[IFLA_BOND_NUM_PEER_NOTIF]);
 
                bond_opt_initval(&newval, num_peer_notif);
-               err = __bond_opt_set(bond, BOND_OPT_NUM_PEER_NOTIF, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_NUM_PEER_NOTIF, &newval,
+                                    data[IFLA_BOND_NUM_PEER_NOTIF], extack);
                if (err)
                        return err;
        }
@@ -389,7 +412,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u8(data[IFLA_BOND_ALL_SLAVES_ACTIVE]);
 
                bond_opt_initval(&newval, all_slaves_active);
-               err = __bond_opt_set(bond, BOND_OPT_ALL_SLAVES_ACTIVE, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_ALL_SLAVES_ACTIVE, &newval,
+                                    data[IFLA_BOND_ALL_SLAVES_ACTIVE], extack);
                if (err)
                        return err;
        }
@@ -398,7 +422,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u32(data[IFLA_BOND_MIN_LINKS]);
 
                bond_opt_initval(&newval, min_links);
-               err = __bond_opt_set(bond, BOND_OPT_MINLINKS, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_MINLINKS, &newval,
+                                    data[IFLA_BOND_MIN_LINKS], extack);
                if (err)
                        return err;
        }
@@ -407,7 +432,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u32(data[IFLA_BOND_LP_INTERVAL]);
 
                bond_opt_initval(&newval, lp_interval);
-               err = __bond_opt_set(bond, BOND_OPT_LP_INTERVAL, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_LP_INTERVAL, &newval,
+                                    data[IFLA_BOND_LP_INTERVAL], extack);
                if (err)
                        return err;
        }
@@ -416,7 +442,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u32(data[IFLA_BOND_PACKETS_PER_SLAVE]);
 
                bond_opt_initval(&newval, packets_per_slave);
-               err = __bond_opt_set(bond, BOND_OPT_PACKETS_PER_SLAVE, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_PACKETS_PER_SLAVE, &newval,
+                                    data[IFLA_BOND_PACKETS_PER_SLAVE], extack);
                if (err)
                        return err;
        }
@@ -425,7 +452,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int lacp_active = nla_get_u8(data[IFLA_BOND_AD_LACP_ACTIVE]);
 
                bond_opt_initval(&newval, lacp_active);
-               err = __bond_opt_set(bond, BOND_OPT_LACP_ACTIVE, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_LACP_ACTIVE, &newval,
+                                    data[IFLA_BOND_AD_LACP_ACTIVE], extack);
                if (err)
                        return err;
        }
@@ -435,7 +463,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u8(data[IFLA_BOND_AD_LACP_RATE]);
 
                bond_opt_initval(&newval, lacp_rate);
-               err = __bond_opt_set(bond, BOND_OPT_LACP_RATE, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_LACP_RATE, &newval,
+                                    data[IFLA_BOND_AD_LACP_RATE], extack);
                if (err)
                        return err;
        }
@@ -444,7 +473,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u8(data[IFLA_BOND_AD_SELECT]);
 
                bond_opt_initval(&newval, ad_select);
-               err = __bond_opt_set(bond, BOND_OPT_AD_SELECT, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_AD_SELECT, &newval,
+                                    data[IFLA_BOND_AD_SELECT], extack);
                if (err)
                        return err;
        }
@@ -453,7 +483,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u16(data[IFLA_BOND_AD_ACTOR_SYS_PRIO]);
 
                bond_opt_initval(&newval, actor_sys_prio);
-               err = __bond_opt_set(bond, BOND_OPT_AD_ACTOR_SYS_PRIO, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_AD_ACTOR_SYS_PRIO, &newval,
+                                    data[IFLA_BOND_AD_ACTOR_SYS_PRIO], extack);
                if (err)
                        return err;
        }
@@ -462,7 +493,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                        nla_get_u16(data[IFLA_BOND_AD_USER_PORT_KEY]);
 
                bond_opt_initval(&newval, port_key);
-               err = __bond_opt_set(bond, BOND_OPT_AD_USER_PORT_KEY, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_AD_USER_PORT_KEY, &newval,
+                                    data[IFLA_BOND_AD_USER_PORT_KEY], extack);
                if (err)
                        return err;
        }
@@ -472,7 +504,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
 
                bond_opt_initval(&newval,
                                 nla_get_u64(data[IFLA_BOND_AD_ACTOR_SYSTEM]));
-               err = __bond_opt_set(bond, BOND_OPT_AD_ACTOR_SYSTEM, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_AD_ACTOR_SYSTEM, &newval,
+                                    data[IFLA_BOND_AD_ACTOR_SYSTEM], extack);
                if (err)
                        return err;
        }
@@ -480,7 +513,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int dynamic_lb = nla_get_u8(data[IFLA_BOND_TLB_DYNAMIC_LB]);
 
                bond_opt_initval(&newval, dynamic_lb);
-               err = __bond_opt_set(bond, BOND_OPT_TLB_DYNAMIC_LB, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_TLB_DYNAMIC_LB, &newval,
+                                    data[IFLA_BOND_TLB_DYNAMIC_LB], extack);
                if (err)
                        return err;
        }
@@ -489,7 +523,8 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
                int missed_max = nla_get_u8(data[IFLA_BOND_MISSED_MAX]);
 
                bond_opt_initval(&newval, missed_max);
-               err = __bond_opt_set(bond, BOND_OPT_MISSED_MAX, &newval);
+               err = __bond_opt_set(bond, BOND_OPT_MISSED_MAX, &newval,
+                                    data[IFLA_BOND_MISSED_MAX], extack);
                if (err)
                        return err;
        }
index 1f8323a..96eef19 100644 (file)
@@ -632,27 +632,35 @@ static int bond_opt_check_deps(struct bonding *bond,
 }
 
 static void bond_opt_dep_print(struct bonding *bond,
-                              const struct bond_option *opt)
+                              const struct bond_option *opt,
+                              struct nlattr *bad_attr,
+                              struct netlink_ext_ack *extack)
 {
        const struct bond_opt_value *modeval;
        struct bond_params *params;
 
        params = &bond->params;
        modeval = bond_opt_get_val(BOND_OPT_MODE, params->mode);
-       if (test_bit(params->mode, &opt->unsuppmodes))
+       if (test_bit(params->mode, &opt->unsuppmodes)) {
                netdev_err(bond->dev, "option %s: mode dependency failed, not supported in mode %s(%llu)\n",
                           opt->name, modeval->string, modeval->value);
+               NL_SET_ERR_MSG_ATTR(extack, bad_attr,
+                                   "option not supported in mode");
+       }
 }
 
 static void bond_opt_error_interpret(struct bonding *bond,
                                     const struct bond_option *opt,
-                                    int error, const struct bond_opt_value *val)
+                                    int error, const struct bond_opt_value *val,
+                                    struct nlattr *bad_attr,
+                                    struct netlink_ext_ack *extack)
 {
        const struct bond_opt_value *minval, *maxval;
        char *p;
 
        switch (error) {
        case -EINVAL:
+               NL_SET_ERR_MSG_ATTR(extack, bad_attr, "invalid option value");
                if (val) {
                        if (val->string) {
                                /* sometimes RAWVAL opts may have new lines */
@@ -674,13 +682,17 @@ static void bond_opt_error_interpret(struct bonding *bond,
                           opt->name, minval ? minval->value : 0, maxval->value);
                break;
        case -EACCES:
-               bond_opt_dep_print(bond, opt);
+               bond_opt_dep_print(bond, opt, bad_attr, extack);
                break;
        case -ENOTEMPTY:
+               NL_SET_ERR_MSG_ATTR(extack, bad_attr,
+                                   "unable to set option because the bond device has slaves");
                netdev_err(bond->dev, "option %s: unable to set because the bond device has slaves\n",
                           opt->name);
                break;
        case -EBUSY:
+               NL_SET_ERR_MSG_ATTR(extack, bad_attr,
+                                   "unable to set option because the bond is up");
                netdev_err(bond->dev, "option %s: unable to set because the bond device is up\n",
                           opt->name);
                break;
@@ -691,6 +703,8 @@ static void bond_opt_error_interpret(struct bonding *bond,
                                *p = '\0';
                        netdev_err(bond->dev, "option %s: interface %s does not exist!\n",
                                   opt->name, val->string);
+                       NL_SET_ERR_MSG_ATTR(extack, bad_attr,
+                                           "interface does not exist");
                }
                break;
        default:
@@ -703,13 +717,17 @@ static void bond_opt_error_interpret(struct bonding *bond,
  * @bond: target bond device
  * @option: option to set
  * @val: value to set it to
+ * @bad_attr: netlink attribute that caused the error
+ * @extack: extended netlink error structure, used when an error message
+ *          needs to be returned to the caller via netlink
  *
  * This function is used to change the bond's option value, it can be
  * used for both enabling/changing an option and for disabling it. RTNL lock
  * must be obtained before calling this function.
  */
 int __bond_opt_set(struct bonding *bond,
-                  unsigned int option, struct bond_opt_value *val)
+                  unsigned int option, struct bond_opt_value *val,
+                  struct nlattr *bad_attr, struct netlink_ext_ack *extack)
 {
        const struct bond_opt_value *retval = NULL;
        const struct bond_option *opt;
@@ -731,7 +749,7 @@ int __bond_opt_set(struct bonding *bond,
        ret = opt->set(bond, retval);
 out:
        if (ret)
-               bond_opt_error_interpret(bond, opt, ret, val);
+               bond_opt_error_interpret(bond, opt, ret, val, bad_attr, extack);
 
        return ret;
 }
@@ -753,7 +771,7 @@ int __bond_opt_set_notify(struct bonding *bond,
 
        ASSERT_RTNL();
 
-       ret = __bond_opt_set(bond, option, val);
+       ret = __bond_opt_set(bond, option, val, NULL, NULL);
 
        if (!ret && (bond->dev->reg_state == NETREG_REGISTERED))
                call_netdevice_notifiers(NETDEV_CHANGEINFODATA, bond->dev);
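The hunks above are all the same mechanical change: every caller of __bond_opt_set() now forwards the offending netlink attribute and the extack so bond_opt_error_interpret() can attach a human-readable message to it, while callers with no netlink context (like __bond_opt_set_notify()) pass NULL for both. A minimal userspace sketch of that error-propagation pattern; the struct names and the set_option() helper are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-ins for struct nlattr / struct netlink_ext_ack. */
struct attr { int type; };
struct ext_ack {
	const char *msg;
	const struct attr *bad_attr;
};

/* Mirrors what NL_SET_ERR_MSG_ATTR() does: record both the message and
 * the attribute, tolerating a NULL extack like the kernel macro. */
static void set_err_msg_attr(struct ext_ack *extack,
			     const struct attr *attr, const char *msg)
{
	if (!extack)
		return;
	extack->msg = msg;
	extack->bad_attr = attr;
}

/* Option setter that reports failures through the extack, in the same
 * shape as the reworked __bond_opt_set(). */
static int set_option(int val, const struct attr *attr,
		      struct ext_ack *extack)
{
	if (val < 0) {
		set_err_msg_attr(extack, attr, "invalid option value");
		return -22; /* -EINVAL */
	}
	return 0;
}
```

Because set_err_msg_attr() tolerates a NULL extack, the same setter serves both netlink and non-netlink callers, which is exactly why __bond_opt_set_notify() can pass NULL, NULL above.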
index 8af4def..e531b93 100644 (file)
@@ -2070,8 +2070,10 @@ static int gswip_gphy_fw_list(struct gswip_priv *priv,
        for_each_available_child_of_node(gphy_fw_list_np, gphy_fw_np) {
                err = gswip_gphy_fw_probe(priv, &priv->gphy_fw[i],
                                          gphy_fw_np, i);
-               if (err)
+               if (err) {
+                       of_node_put(gphy_fw_np);
                        goto remove_gphy;
+               }
                i++;
        }
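for_each_available_child_of_node() takes a reference on each child it yields and drops it when advancing to the next one, so any early exit from the loop body leaks the current node's reference unless the body drops it explicitly; that is what the added of_node_put() does. A toy refcount model of the rule, assuming simplified illustrative names rather than the real OF API:

```c
#include <assert.h>

/* Toy refcounted node standing in for struct device_node. */
struct node { int refcount; };

static void node_get(struct node *n) { n->refcount++; }
static void node_put(struct node *n) { n->refcount--; }

/* Iterate like for_each_available_child_of_node(): each yielded node
 * holds one reference, dropped when moving to the next child. On an
 * error path we must drop the current reference ourselves. */
static int probe_children(struct node *children, int n, int fail_at)
{
	for (int i = 0; i < n; i++) {
		node_get(&children[i]);          /* iterator takes ref */
		if (i == fail_at) {
			node_put(&children[i]);  /* the fix: drop it   */
			return -1;
		}
		node_put(&children[i]);          /* iterator drops ref */
	}
	return 0;
}
```

Without the put on the error path, the node at fail_at would end with refcount 1 and could never be freed, which is the leak the gswip hunk closes.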
 
index 03da369..cae76f5 100644 (file)
@@ -7,7 +7,8 @@
 
 #ifndef __KSZ8XXX_H
 #define __KSZ8XXX_H
-#include <linux/kernel.h>
+
+#include <linux/types.h>
 
 enum ksz_regs {
        REG_IND_CTRL_0,
index 7b37d45..d94150d 100644 (file)
@@ -50,22 +50,25 @@ static int mv88e6390_serdes_write(struct mv88e6xxx_chip *chip,
 }
 
 static int mv88e6xxx_serdes_pcs_get_state(struct mv88e6xxx_chip *chip,
-                                         u16 ctrl, u16 status, u16 lpa,
+                                         u16 bmsr, u16 lpa, u16 status,
                                          struct phylink_link_state *state)
 {
+       state->link = false;
+
+       /* If the BMSR reports that the link had failed, report this to
+        * phylink.
+        */
+       if (!(bmsr & BMSR_LSTATUS))
+               return 0;
+
        state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK);
+       state->an_complete = !!(bmsr & BMSR_ANEGCOMPLETE);
 
        if (status & MV88E6390_SGMII_PHY_STATUS_SPD_DPL_VALID) {
                /* The Speed and Duplex Resolved register is 1 if AN is enabled
                 * and complete, or if AN is disabled. So with disabled AN we
-                * still get here on link up. But we want to set an_complete
-                * only if AN was enabled, thus we look at BMCR_ANENABLE.
-                * (According to 802.3-2008 section 22.2.4.2.10, we should be
-                *  able to get this same value from BMSR_ANEGCAPABLE, but tests
-                *  show that these Marvell PHYs don't conform to this part of
-                *  the specificaion - BMSR_ANEGCAPABLE is simply always 1.)
+                * still get here on link up.
                 */
-               state->an_complete = !!(ctrl & BMCR_ANENABLE);
                state->duplex = status &
                                MV88E6390_SGMII_PHY_STATUS_DUPLEX_FULL ?
                                                 DUPLEX_FULL : DUPLEX_HALF;
@@ -191,12 +194,12 @@ int mv88e6352_serdes_pcs_config(struct mv88e6xxx_chip *chip, int port,
 int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port,
                                   int lane, struct phylink_link_state *state)
 {
-       u16 lpa, status, ctrl;
+       u16 bmsr, lpa, status;
        int err;
 
-       err = mv88e6352_serdes_read(chip, MII_BMCR, &ctrl);
+       err = mv88e6352_serdes_read(chip, MII_BMSR, &bmsr);
        if (err) {
-               dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);
+               dev_err(chip->dev, "can't read Serdes PHY BMSR: %d\n", err);
                return err;
        }
 
@@ -212,7 +215,7 @@ int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port,
                return err;
        }
 
-       return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);
+       return mv88e6xxx_serdes_pcs_get_state(chip, bmsr, lpa, status, state);
 }
 
 int mv88e6352_serdes_pcs_an_restart(struct mv88e6xxx_chip *chip, int port,
@@ -918,13 +921,13 @@ int mv88e6390_serdes_pcs_config(struct mv88e6xxx_chip *chip, int port,
 static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip,
        int port, int lane, struct phylink_link_state *state)
 {
-       u16 lpa, status, ctrl;
+       u16 bmsr, lpa, status;
        int err;
 
        err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
-                                   MV88E6390_SGMII_BMCR, &ctrl);
+                                   MV88E6390_SGMII_BMSR, &bmsr);
        if (err) {
-               dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);
+               dev_err(chip->dev, "can't read Serdes PHY BMSR: %d\n", err);
                return err;
        }
 
@@ -942,7 +945,7 @@ static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip,
                return err;
        }
 
-       return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);
+       return mv88e6xxx_serdes_pcs_get_state(chip, bmsr, lpa, status, state);
 }
 
 static int mv88e6390_serdes_pcs_get_state_10g(struct mv88e6xxx_chip *chip,
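The serdes change stops inferring an_complete from BMCR_ANENABLE and reads the BMSR instead: a clear BMSR_LSTATUS (a latched link-fail indication) makes the state callback report link down immediately, and an_complete now tracks BMSR_ANEGCOMPLETE. A standalone sketch of that decode logic; the bit values match linux/mii.h, but the state struct is a simplified stand-in for struct phylink_link_state:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BMSR_LSTATUS      0x0004 /* link status (latched low)   */
#define BMSR_ANEGCOMPLETE 0x0020 /* auto-negotiation complete   */

struct link_state {
	bool link;
	bool an_complete;
};

/* Mirrors the entry logic of the new mv88e6xxx_serdes_pcs_get_state():
 * a clear BMSR_LSTATUS short-circuits everything else. */
static void decode_bmsr(uint16_t bmsr, bool phy_link,
			struct link_state *state)
{
	state->link = false;
	state->an_complete = false;

	if (!(bmsr & BMSR_LSTATUS))
		return; /* link failed since the last read: report down */

	state->link = phy_link;
	state->an_complete = (bmsr & BMSR_ANEGCOMPLETE) != 0;
}
```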
index 3bb42a9..769f672 100644 (file)
@@ -955,35 +955,21 @@ static int rtl8365mb_ext_config_forcemode(struct realtek_priv *priv, int port,
        return 0;
 }
 
-static bool rtl8365mb_phy_mode_supported(struct dsa_switch *ds, int port,
-                                        phy_interface_t interface)
-{
-       int ext_int;
-
-       ext_int = rtl8365mb_extint_port_map[port];
-
-       if (ext_int < 0 &&
-           (interface == PHY_INTERFACE_MODE_NA ||
-            interface == PHY_INTERFACE_MODE_INTERNAL ||
-            interface == PHY_INTERFACE_MODE_GMII))
-               /* Internal PHY */
-               return true;
-       else if ((ext_int >= 1) &&
-                phy_interface_mode_is_rgmii(interface))
-               /* Extension MAC */
-               return true;
-
-       return false;
-}
-
 static void rtl8365mb_phylink_get_caps(struct dsa_switch *ds, int port,
                                       struct phylink_config *config)
 {
-       if (dsa_is_user_port(ds, port))
+       if (dsa_is_user_port(ds, port)) {
                __set_bit(PHY_INTERFACE_MODE_INTERNAL,
                          config->supported_interfaces);
-       else if (dsa_is_cpu_port(ds, port))
+
+               /* GMII is the default interface mode for phylib, so
+                * we have to support it for ports with integrated PHY.
+                */
+               __set_bit(PHY_INTERFACE_MODE_GMII,
+                         config->supported_interfaces);
+       } else if (dsa_is_cpu_port(ds, port)) {
                phy_interface_set_rgmii(config->supported_interfaces);
+       }
 
        config->mac_capabilities = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
                                   MAC_10 | MAC_100 | MAC_1000FD;
@@ -996,12 +982,6 @@ static void rtl8365mb_phylink_mac_config(struct dsa_switch *ds, int port,
        struct realtek_priv *priv = ds->priv;
        int ret;
 
-       if (!rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
-               dev_err(priv->dev, "phy mode %s is unsupported on port %d\n",
-                       phy_modes(state->interface), port);
-               return;
-       }
-
        if (mode != MLO_AN_PHY && mode != MLO_AN_FIXED) {
                dev_err(priv->dev,
                        "port %d supports only conventional PHY or fixed-link\n",
index 557ca8f..ca3e470 100644 (file)
@@ -225,7 +225,7 @@ static void eql_kill_one_slave(slave_queue_t *queue, slave_t *slave)
        list_del(&slave->list);
        queue->num_slaves--;
        slave->dev->flags &= ~IFF_SLAVE;
-       dev_put_track(slave->dev, &slave->dev_tracker);
+       netdev_put(slave->dev, &slave->dev_tracker);
        kfree(slave);
 }
 
@@ -399,7 +399,7 @@ static int __eql_insert_slave(slave_queue_t *queue, slave_t *slave)
                if (duplicate_slave)
                        eql_kill_one_slave(queue, duplicate_slave);
 
-               dev_hold_track(slave->dev, &slave->dev_tracker, GFP_ATOMIC);
+               netdev_hold(slave->dev, &slave->dev_tracker, GFP_ATOMIC);
                list_add(&slave->list, &queue->all_slaves);
                queue->num_slaves++;
                slave->dev->flags |= IFF_SLAVE;
index a381626..8c58285 100644 (file)
@@ -163,7 +163,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
        mdio = mdiobus_alloc();
        if (mdio == NULL) {
                netdev_err(dev, "Error allocating MDIO bus\n");
-               return -ENOMEM;
+               ret = -ENOMEM;
+               goto put_node;
        }
 
        mdio->name = ALTERA_TSE_RESOURCE_NAME;
@@ -180,6 +181,7 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
                           mdio->id);
                goto out_free_mdio;
        }
+       of_node_put(mdio_node);
 
        if (netif_msg_drv(priv))
                netdev_info(dev, "MDIO bus %s: created\n", mdio->id);
@@ -189,6 +191,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
 out_free_mdio:
        mdiobus_free(mdio);
        mdio = NULL;
+put_node:
+       of_node_put(mdio_node);
        return ret;
 }
 
index b7d772f..3c2e32f 100644 (file)
@@ -3,11 +3,12 @@
  * Copyright (C) 2014 Altera Corporation. All rights reserved
  */
 
-#include <linux/kernel.h>
-
 #ifndef __ALTERA_UTILS_H__
 #define __ALTERA_UTILS_H__
 
+#include <linux/compiler.h>
+#include <linux/types.h>
+
 void tse_set_bit(void __iomem *ioaddr, size_t offs, u32 bit_mask);
 void tse_clear_bit(void __iomem *ioaddr, size_t offs, u32 bit_mask);
 int tse_bit_is_set(void __iomem *ioaddr, size_t offs, u32 bit_mask);
index c6f0039..d5f2c69 100644 (file)
@@ -820,7 +820,7 @@ static int au1000_rx(struct net_device *dev)
                                pr_cont("\n");
                        }
                }
-               prxd->buff_stat = (u32)(pDB->dma_addr | RX_DMA_ENABLE);
+               prxd->buff_stat = lower_32_bits(pDB->dma_addr) | RX_DMA_ENABLE;
                aup->rx_head = (aup->rx_head + 1) & (NUM_RX_DMA - 1);
                wmb(); /* drain writebuffer */
 
@@ -996,7 +996,7 @@ static netdev_tx_t au1000_tx(struct sk_buff *skb, struct net_device *dev)
        ps->tx_packets++;
        ps->tx_bytes += ptxd->len;
 
-       ptxd->buff_stat = pDB->dma_addr | TX_DMA_ENABLE;
+       ptxd->buff_stat = lower_32_bits(pDB->dma_addr) | TX_DMA_ENABLE;
        wmb(); /* drain writebuffer */
        dev_kfree_skb(skb);
        aup->tx_head = (aup->tx_head + 1) & (NUM_TX_DMA - 1);
@@ -1131,9 +1131,9 @@ static int au1000_probe(struct platform_device *pdev)
        /* Allocate the data buffers
         * Snooping works fine with eth on all au1xxx
         */
-       aup->vaddr = (u32)dma_alloc_coherent(&pdev->dev, MAX_BUF_SIZE *
-                                         (NUM_TX_BUFFS + NUM_RX_BUFFS),
-                                         &aup->dma_addr, 0);
+       aup->vaddr = dma_alloc_coherent(&pdev->dev, MAX_BUF_SIZE *
+                                       (NUM_TX_BUFFS + NUM_RX_BUFFS),
+                                       &aup->dma_addr, 0);
        if (!aup->vaddr) {
                dev_err(&pdev->dev, "failed to allocate data buffers\n");
                err = -ENOMEM;
@@ -1234,8 +1234,8 @@ static int au1000_probe(struct platform_device *pdev)
        for (i = 0; i < (NUM_TX_BUFFS+NUM_RX_BUFFS); i++) {
                pDB->pnext = pDBfree;
                pDBfree = pDB;
-               pDB->vaddr = (u32 *)((unsigned)aup->vaddr + MAX_BUF_SIZE*i);
-               pDB->dma_addr = (dma_addr_t)virt_to_bus(pDB->vaddr);
+               pDB->vaddr = aup->vaddr + MAX_BUF_SIZE * i;
+               pDB->dma_addr = aup->dma_addr + MAX_BUF_SIZE * i;
                pDB++;
        }
        aup->pDBfree = pDBfree;
@@ -1246,7 +1246,7 @@ static int au1000_probe(struct platform_device *pdev)
                if (!pDB)
                        goto err_out;
 
-               aup->rx_dma_ring[i]->buff_stat = (unsigned)pDB->dma_addr;
+               aup->rx_dma_ring[i]->buff_stat = lower_32_bits(pDB->dma_addr);
                aup->rx_db_inuse[i] = pDB;
        }
 
@@ -1255,7 +1255,7 @@ static int au1000_probe(struct platform_device *pdev)
                if (!pDB)
                        goto err_out;
 
-               aup->tx_dma_ring[i]->buff_stat = (unsigned)pDB->dma_addr;
+               aup->tx_dma_ring[i]->buff_stat = lower_32_bits(pDB->dma_addr);
                aup->tx_dma_ring[i]->len = 0;
                aup->tx_db_inuse[i] = pDB;
        }
@@ -1310,7 +1310,7 @@ err_remap2:
        iounmap(aup->mac);
 err_remap1:
        dma_free_coherent(&pdev->dev, MAX_BUF_SIZE * (NUM_TX_BUFFS + NUM_RX_BUFFS),
-                       (void *)aup->vaddr, aup->dma_addr);
+                         aup->vaddr, aup->dma_addr);
 err_vaddr:
        free_netdev(dev);
 err_alloc:
@@ -1343,7 +1343,7 @@ static int au1000_remove(struct platform_device *pdev)
                        au1000_ReleaseDB(aup, aup->tx_db_inuse[i]);
 
        dma_free_coherent(&pdev->dev, MAX_BUF_SIZE * (NUM_TX_BUFFS + NUM_RX_BUFFS),
-                       (void *)aup->vaddr, aup->dma_addr);
+                         aup->vaddr, aup->dma_addr);
 
        iounmap(aup->macdma);
        iounmap(aup->mac);
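The au1000 hunks replace bare (u32) casts of dma_addr_t with lower_32_bits(), which truncates explicitly and works the same whether dma_addr_t is 32 or 64 bits wide. A sketch of the helper and the truncation it performs; the definition follows the kernel's shape, and RX_DMA_ENABLE here is an illustrative flag bit, not the driver's real value:

```c
#include <assert.h>
#include <stdint.h>

#define RX_DMA_ENABLE 0x1u /* illustrative flag bit, not the real value */

/* Same shape as the kernel helper: mask rather than cast, so the
 * intent to keep only the low 32 bits is explicit at each call site. */
static inline uint32_t lower_32_bits(uint64_t n)
{
	return (uint32_t)(n & 0xffffffffu);
}
```

A descriptor write like the one in the hunk then becomes lower_32_bits(dma_addr) | RX_DMA_ENABLE, with no implementation-defined narrowing hiding in a cast.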
index e3a3ed2..2489c2f 100644 (file)
@@ -106,8 +106,8 @@ struct au1000_private {
        struct mac_reg *mac;  /* mac registers                      */
        u32 *enable;     /* address of MAC Enable Register     */
        void __iomem *macdma;   /* base of MAC DMA port */
-       u32 vaddr;                /* virtual address of rx/tx buffers   */
-       dma_addr_t dma_addr;      /* dma address of rx/tx buffers       */
+       void *vaddr;            /* virtual address of rx/tx buffers   */
+       dma_addr_t dma_addr;    /* dma address of rx/tx buffers       */
 
        spinlock_t lock;       /* Serialise access to device */
 
index a359329..4d46780 100644 (file)
@@ -2784,7 +2784,7 @@ void xgbe_print_pkt(struct net_device *netdev, struct sk_buff *skb, bool tx_rx)
 
        netdev_dbg(netdev, "Dst MAC addr: %pM\n", eth->h_dest);
        netdev_dbg(netdev, "Src MAC addr: %pM\n", eth->h_source);
-       netdev_dbg(netdev, "Protocol: %#06hx\n", ntohs(eth->h_proto));
+       netdev_dbg(netdev, "Protocol: %#06x\n", ntohs(eth->h_proto));
 
        for (i = 0; i < skb->len; i += 32) {
                unsigned int len = min(skb->len - i, 32U);
index 086739e..9b83d53 100644 (file)
@@ -234,6 +234,7 @@ struct mii_bus *bcma_mdio_mii_register(struct bgmac *bgmac)
        np = of_get_child_by_name(core->dev.of_node, "mdio");
 
        err = of_mdiobus_register(mii_bus, np);
+       of_node_put(np);
        if (err) {
                dev_err(&core->dev, "Registration of mii bus failed\n");
                goto err_free_bus;
index ddf2f39..c4ed436 100644 (file)
@@ -307,7 +307,7 @@ int bnxt_set_vf_bw(struct net_device *dev, int vf_id, int min_tx_rate,
                return -EINVAL;
        }
 
-       if (min_tx_rate > pf_link_speed || min_tx_rate > max_tx_rate) {
+       if (min_tx_rate > pf_link_speed) {
                netdev_info(bp->dev, "min tx rate %d is invalid for VF %d\n",
                            min_tx_rate, vf_id);
                return -EINVAL;
index 9559c16..e6cb20a 100644 (file)
@@ -434,7 +434,7 @@ int gem_get_hwtst(struct net_device *dev, struct ifreq *rq)
                return 0;
 }
 
-static int gem_ptp_set_one_step_sync(struct macb *bp, u8 enable)
+static void gem_ptp_set_one_step_sync(struct macb *bp, u8 enable)
 {
        u32 reg_val;
 
@@ -444,8 +444,6 @@ static int gem_ptp_set_one_step_sync(struct macb *bp, u8 enable)
                macb_writel(bp, NCR, reg_val | MACB_BIT(OSSMODE));
        else
                macb_writel(bp, NCR, reg_val & ~MACB_BIT(OSSMODE));
-
-       return 0;
 }
 
 int gem_set_hwtst(struct net_device *dev, struct ifreq *ifr, int cmd)
@@ -468,8 +466,7 @@ int gem_set_hwtst(struct net_device *dev, struct ifreq *ifr, int cmd)
        case HWTSTAMP_TX_OFF:
                break;
        case HWTSTAMP_TX_ONESTEP_SYNC:
-               if (gem_ptp_set_one_step_sync(bp, 1) != 0)
-                       return -ERANGE;
+               gem_ptp_set_one_step_sync(bp, 1);
                tx_bd_control = TSTAMP_ALL_FRAMES;
                break;
        case HWTSTAMP_TX_ON:
index 01e7d3c..df55584 100644 (file)
@@ -852,12 +852,6 @@ int hinic_ndo_set_vf_bw(struct net_device *netdev,
                return -EINVAL;
        }
 
-       if (max_tx_rate < min_tx_rate) {
-               netif_err(nic_dev, drv, netdev, "Max rate %d must be greater than or equal to min rate %d\n",
-                         max_tx_rate, min_tx_rate);
-               return -EINVAL;
-       }
-
        err = hinic_port_link_state(nic_dev, &link_state);
        if (err) {
                netif_err(nic_dev, drv, netdev,
index 1042e79..f8860f2 100644 (file)
@@ -4376,7 +4376,7 @@ void e1000_rar_set(struct e1000_hw *hw, u8 *addr, u32 index)
 /**
  * e1000_write_vfta - Writes a value to the specified offset in the VLAN filter table.
  * @hw: Struct containing variables accessed by shared code
- * @offset: Offset in VLAN filer table to write
+ * @offset: Offset in VLAN filter table to write
  * @value: Value to write into VLAN filter table
  */
 void e1000_write_vfta(struct e1000_hw *hw, u32 offset, u32 value)
@@ -4396,7 +4396,7 @@ void e1000_write_vfta(struct e1000_hw *hw, u32 offset, u32 value)
 }
 
 /**
- * e1000_clear_vfta - Clears the VLAN filer table
+ * e1000_clear_vfta - Clears the VLAN filter table
  * @hw: Struct containing variables accessed by shared code
  */
 static void e1000_clear_vfta(struct e1000_hw *hw)
index 30ca9ee..f2fba6e 100644 (file)
@@ -1825,7 +1825,7 @@ static void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx)
                fm10k_sm_mbx_connect_reset(mbx);
                break;
        case FM10K_STATE_CONNECT:
-               /* try connnecting at lower version */
+               /* try connecting at lower version */
                if (mbx->remote) {
                        while (mbx->local > 1)
                                mbx->local--;
index 18558a0..57f4ec4 100644 (file)
@@ -565,6 +565,7 @@ struct i40e_pf {
 #define I40E_FLAG_DISABLE_FW_LLDP              BIT(24)
 #define I40E_FLAG_RS_FEC                       BIT(25)
 #define I40E_FLAG_BASE_R_FEC                   BIT(26)
+#define I40E_FLAG_VF_VLAN_PRUNING              BIT(27)
 /* TOTAL_PORT_SHUTDOWN
  * Allows to physically disable the link on the NIC's port.
  * If enabled, (after link down request from the OS)
index 610f00c..c65e9e2 100644 (file)
@@ -457,6 +457,8 @@ static const struct i40e_priv_flags i40e_gstrings_priv_flags[] = {
        I40E_PRIV_FLAG("disable-fw-lldp", I40E_FLAG_DISABLE_FW_LLDP, 0),
        I40E_PRIV_FLAG("rs-fec", I40E_FLAG_RS_FEC, 0),
        I40E_PRIV_FLAG("base-r-fec", I40E_FLAG_BASE_R_FEC, 0),
+       I40E_PRIV_FLAG("vf-vlan-pruning",
+                      I40E_FLAG_VF_VLAN_PRUNING, 0),
 };
 
 #define I40E_PRIV_FLAGS_STR_LEN ARRAY_SIZE(i40e_gstrings_priv_flags)
@@ -5285,6 +5287,13 @@ flags_complete:
                return -EOPNOTSUPP;
        }
 
+       if ((changed_flags & I40E_FLAG_VF_VLAN_PRUNING) &&
+           pf->num_alloc_vfs) {
+               dev_warn(&pf->pdev->dev,
+                        "Changing vf-vlan-pruning flag while VF(s) are active is not supported\n");
+               return -EOPNOTSUPP;
+       }
+
        if ((changed_flags & new_flags &
             I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED) &&
            (new_flags & I40E_FLAG_MFP_ENABLED))
index 332a608..1599ac5 100644 (file)
@@ -1368,6 +1368,114 @@ static int i40e_correct_mac_vlan_filters(struct i40e_vsi *vsi,
        return 0;
 }
 
+/**
+ * i40e_get_vf_new_vlan - Get new vlan id on a vf
+ * @vsi: the vsi to configure
+ * @new_mac: new mac filter to be added
+ * @f: existing mac filter, replaced with new_mac->f if new_mac is not NULL
+ * @vlan_filters: the number of active VLAN filters
+ * @trusted: flag if the VF is trusted
+ *
+ * Get new VLAN id based on current VLAN filters, trust, PVID
+ * and the vf-vlan-pruning flag.
+ *
+ * Returns the value of the new vlan filter or
+ * the old value if no new filter is needed.
+ */
+static s16 i40e_get_vf_new_vlan(struct i40e_vsi *vsi,
+                               struct i40e_new_mac_filter *new_mac,
+                               struct i40e_mac_filter *f,
+                               int vlan_filters,
+                               bool trusted)
+{
+       s16 pvid = le16_to_cpu(vsi->info.pvid);
+       struct i40e_pf *pf = vsi->back;
+       bool is_any;
+
+       if (new_mac)
+               f = new_mac->f;
+
+       if (pvid && f->vlan != pvid)
+               return pvid;
+
+       is_any = (trusted ||
+                 !(pf->flags & I40E_FLAG_VF_VLAN_PRUNING));
+
+       if ((vlan_filters && f->vlan == I40E_VLAN_ANY) ||
+           (!is_any && !vlan_filters && f->vlan == I40E_VLAN_ANY) ||
+           (is_any && !vlan_filters && f->vlan == 0)) {
+               if (is_any)
+                       return I40E_VLAN_ANY;
+               else
+                       return 0;
+       }
+
+       return f->vlan;
+}
+
+/**
+ * i40e_correct_vf_mac_vlan_filters - Correct non-VLAN VF filters if necessary
+ * @vsi: the vsi to configure
+ * @tmp_add_list: list of filters ready to be added
+ * @tmp_del_list: list of filters ready to be deleted
+ * @vlan_filters: the number of active VLAN filters
+ * @trusted: flag if the VF is trusted
+ *
+ * Correct VF VLAN filters based on current VLAN filters, trust, PVID
+ * and the vf-vlan-pruning flag.
+ *
+ * In case of memory allocation failure return -ENOMEM. Otherwise, return 0.
+ *
+ * This function is only expected to be called from within
+ * i40e_sync_vsi_filters.
+ *
+ * NOTE: This function expects to be called while under the
+ * mac_filter_hash_lock
+ */
+static int i40e_correct_vf_mac_vlan_filters(struct i40e_vsi *vsi,
+                                           struct hlist_head *tmp_add_list,
+                                           struct hlist_head *tmp_del_list,
+                                           int vlan_filters,
+                                           bool trusted)
+{
+       struct i40e_mac_filter *f, *add_head;
+       struct i40e_new_mac_filter *new_mac;
+       struct hlist_node *h;
+       int bkt, new_vlan;
+
+       hlist_for_each_entry(new_mac, tmp_add_list, hlist) {
+               new_mac->f->vlan = i40e_get_vf_new_vlan(vsi, new_mac, NULL,
+                                                       vlan_filters, trusted);
+       }
+
+       hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) {
+               new_vlan = i40e_get_vf_new_vlan(vsi, NULL, f, vlan_filters,
+                                               trusted);
+               if (new_vlan != f->vlan) {
+                       add_head = i40e_add_filter(vsi, f->macaddr, new_vlan);
+                       if (!add_head)
+                               return -ENOMEM;
+                       /* Create a temporary i40e_new_mac_filter */
+                       new_mac = kzalloc(sizeof(*new_mac), GFP_ATOMIC);
+                       if (!new_mac)
+                               return -ENOMEM;
+                       new_mac->f = add_head;
+                       new_mac->state = add_head->state;
+
+                       /* Add the new filter to the tmp list */
+                       hlist_add_head(&new_mac->hlist, tmp_add_list);
+
+                       /* Put the original filter into the delete list */
+                       f->state = I40E_FILTER_REMOVE;
+                       hash_del(&f->hlist);
+                       hlist_add_head(&f->hlist, tmp_del_list);
+               }
+       }
+
+       vsi->has_vlan_filter = !!vlan_filters;
+       return 0;
+}
+
 /**
  * i40e_rm_default_mac_filter - Remove the default MAC filter set by NVM
  * @vsi: the PF Main VSI - inappropriate for any other VSI
@@ -2423,10 +2531,14 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
                                vlan_filters++;
                }
 
-               retval = i40e_correct_mac_vlan_filters(vsi,
-                                                      &tmp_add_list,
-                                                      &tmp_del_list,
-                                                      vlan_filters);
+               if (vsi->type != I40E_VSI_SRIOV)
+                       retval = i40e_correct_mac_vlan_filters
+                               (vsi, &tmp_add_list, &tmp_del_list,
+                                vlan_filters);
+               else
+                       retval = i40e_correct_vf_mac_vlan_filters
+                               (vsi, &tmp_add_list, &tmp_del_list,
+                                vlan_filters, pf->vf[vsi->vf_id].trusted);
 
                hlist_for_each_entry(new, &tmp_add_list, hlist)
                        netdev_hw_addr_refcnt(new->f, vsi->netdev, 1);
@@ -2855,8 +2967,21 @@ int i40e_add_vlan_all_mac(struct i40e_vsi *vsi, s16 vid)
        int bkt;
 
        hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) {
-               if (f->state == I40E_FILTER_REMOVE)
+               /* If we're asked to add a filter that has been marked for
+                * removal, it is safe to simply restore it to active state.
+                * __i40e_del_filter will have simply deleted any filters which
+                * were previously marked NEW or FAILED, so if it is currently
+                * marked REMOVE it must have previously been ACTIVE. Since we
+                * haven't yet run the sync filters task, just restore this
+                * filter to the ACTIVE state so that the sync task leaves it
+                * in place.
+                */
+               if (f->state == I40E_FILTER_REMOVE && f->vlan == vid) {
+                       f->state = I40E_FILTER_ACTIVE;
+                       continue;
+               } else if (f->state == I40E_FILTER_REMOVE) {
                        continue;
+               }
                add_f = i40e_add_filter(vsi, f->macaddr, vid);
                if (!add_f) {
                        dev_info(&vsi->back->pdev->dev,
index 2606e8f..9949469 100644 (file)
@@ -4349,6 +4349,7 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
                /* duplicate request, so just return success */
                goto error_pvid;
 
+       i40e_vlan_stripping_enable(vsi);
        i40e_vc_reset_vf(vf, true);
        /* During reset the VF got a new VSI, so refresh a pointer. */
        vsi = pf->vsi[vf->lan_vsi_idx];
@@ -4364,7 +4365,7 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
         * MAC addresses deleted.
         */
        if ((!(vlan_id || qos) ||
-           vlanprio != le16_to_cpu(vsi->info.pvid)) &&
+            vlanprio != le16_to_cpu(vsi->info.pvid)) &&
            vsi->info.pvid) {
                ret = i40e_add_vlan_all_mac(vsi, I40E_VLAN_ANY);
                if (ret) {
@@ -4727,6 +4728,11 @@ int i40e_ndo_set_vf_trust(struct net_device *netdev, int vf_id, bool setting)
                goto out;
 
        vf->trusted = setting;
+
+       /* request PF to sync mac/vlan filters for the VF */
+       set_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state);
+       pf->vsi[vf->lan_vsi_idx]->flags |= I40E_VSI_FLAG_FILTER_CHANGED;
+
        i40e_vc_reset_vf(vf, true);
        dev_info(&pf->pdev->dev, "VF %u is now %strusted\n",
                 vf_id, setting ? "" : "un");
index 49aed3e..fda1198 100644 (file)
@@ -146,7 +146,8 @@ struct iavf_mac_filter {
                u8 remove:1;        /* filter needs to be removed */
                u8 add:1;           /* filter needs to be added */
                u8 is_primary:1;    /* filter is a default VF MAC */
-               u8 padding:4;
+               u8 add_handled:1;   /* received response for filter add */
+               u8 padding:3;
        };
 };
 
@@ -248,6 +249,7 @@ struct iavf_adapter {
        struct work_struct adminq_task;
        struct delayed_work client_task;
        wait_queue_head_t down_waitqueue;
+       wait_queue_head_t vc_waitqueue;
        struct iavf_q_vector *q_vectors;
        struct list_head vlan_filter_list;
        struct list_head mac_filter_list;
@@ -292,6 +294,7 @@ struct iavf_adapter {
 #define IAVF_FLAG_QUEUES_DISABLED              BIT(17)
 #define IAVF_FLAG_SETUP_NETDEV_FEATURES                BIT(18)
 #define IAVF_FLAG_REINIT_MSIX_NEEDED           BIT(20)
+#define IAVF_FLAG_INITIAL_MAC_SET              BIT(23)
 /* duplicates for common code */
 #define IAVF_FLAG_DCB_ENABLED                  0
        /* flags for admin queue service task */
@@ -559,6 +562,8 @@ void iavf_enable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid);
 void iavf_disable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid);
 void iavf_enable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid);
 void iavf_disable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid);
+int iavf_replace_primary_mac(struct iavf_adapter *adapter,
+                            const u8 *new_mac);
 void
 iavf_set_vlan_offload_features(struct iavf_adapter *adapter,
                               netdev_features_t prev_features,
index 7dfcf78..95772e1 100644 (file)
@@ -983,6 +983,7 @@ struct iavf_mac_filter *iavf_add_filter(struct iavf_adapter *adapter,
 
                list_add_tail(&f->list, &adapter->mac_filter_list);
                f->add = true;
+               f->add_handled = false;
                f->is_new_mac = true;
                f->is_primary = false;
                adapter->aq_required |= IAVF_FLAG_AQ_ADD_MAC_FILTER;
@@ -994,47 +995,132 @@ struct iavf_mac_filter *iavf_add_filter(struct iavf_adapter *adapter,
 }
 
 /**
- * iavf_set_mac - NDO callback to set port mac address
- * @netdev: network interface device structure
- * @p: pointer to an address structure
+ * iavf_replace_primary_mac - Replace current primary address
+ * @adapter: board private structure
+ * @new_mac: new MAC address to be applied
  *
- * Returns 0 on success, negative on failure
+ * Replace current dev_addr and send request to PF for removal of previous
+ * primary MAC address filter and addition of new primary MAC filter.
+ * Return 0 for success, -ENOMEM for failure.
+ *
+ * Do not call this while holding mac_vlan_list_lock!
  **/
-static int iavf_set_mac(struct net_device *netdev, void *p)
+int iavf_replace_primary_mac(struct iavf_adapter *adapter,
+                            const u8 *new_mac)
 {
-       struct iavf_adapter *adapter = netdev_priv(netdev);
        struct iavf_hw *hw = &adapter->hw;
        struct iavf_mac_filter *f;
-       struct sockaddr *addr = p;
-
-       if (!is_valid_ether_addr(addr->sa_data))
-               return -EADDRNOTAVAIL;
-
-       if (ether_addr_equal(netdev->dev_addr, addr->sa_data))
-               return 0;
 
        spin_lock_bh(&adapter->mac_vlan_list_lock);
 
+       list_for_each_entry(f, &adapter->mac_filter_list, list) {
+               f->is_primary = false;
+       }
+
        f = iavf_find_filter(adapter, hw->mac.addr);
        if (f) {
                f->remove = true;
-               f->is_primary = true;
                adapter->aq_required |= IAVF_FLAG_AQ_DEL_MAC_FILTER;
        }
 
-       f = iavf_add_filter(adapter, addr->sa_data);
+       f = iavf_add_filter(adapter, new_mac);
+
        if (f) {
+               /* Always send the request to add if changing primary MAC
+                * even if filter is already present on the list
+                */
                f->is_primary = true;
-               ether_addr_copy(hw->mac.addr, addr->sa_data);
+               f->add = true;
+               adapter->aq_required |= IAVF_FLAG_AQ_ADD_MAC_FILTER;
+               ether_addr_copy(hw->mac.addr, new_mac);
        }
 
        spin_unlock_bh(&adapter->mac_vlan_list_lock);
 
        /* schedule the watchdog task to immediately process the request */
-       if (f)
+       if (f) {
                queue_work(iavf_wq, &adapter->watchdog_task.work);
+               return 0;
+       }
+       return -ENOMEM;
+}
+
+/**
+ * iavf_is_mac_set_handled - check if the set MAC request was handled by PF
+ * @netdev: network interface device structure
+ * @macaddr: MAC address to set
+ *
+ * Returns true if the request was already handled, false otherwise
+ */
+static bool iavf_is_mac_set_handled(struct net_device *netdev,
+                                   const u8 *macaddr)
+{
+       struct iavf_adapter *adapter = netdev_priv(netdev);
+       struct iavf_mac_filter *f;
+       bool ret = false;
+
+       spin_lock_bh(&adapter->mac_vlan_list_lock);
+
+       f = iavf_find_filter(adapter, macaddr);
+
+       if (!f || (!f->add && f->add_handled))
+               ret = true;
+
+       spin_unlock_bh(&adapter->mac_vlan_list_lock);
+
+       return ret;
+}
+
+/**
+ * iavf_set_mac - NDO callback to set port MAC address
+ * @netdev: network interface device structure
+ * @p: pointer to an address structure
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iavf_set_mac(struct net_device *netdev, void *p)
+{
+       struct iavf_adapter *adapter = netdev_priv(netdev);
+       struct sockaddr *addr = p;
+       bool handle_mac = iavf_is_mac_set_handled(netdev, addr->sa_data);
+       int ret;
 
-       return (f == NULL) ? -ENOMEM : 0;
+       if (!is_valid_ether_addr(addr->sa_data))
+               return -EADDRNOTAVAIL;
+
+       ret = iavf_replace_primary_mac(adapter, addr->sa_data);
+
+       if (ret)
+               return ret;
+
+       /* If this is an initial set MAC during VF spawn do not wait */
+       if (adapter->flags & IAVF_FLAG_INITIAL_MAC_SET) {
+               adapter->flags &= ~IAVF_FLAG_INITIAL_MAC_SET;
+               return 0;
+       }
+
+       if (handle_mac)
+               goto done;
+
+       ret = wait_event_interruptible_timeout(adapter->vc_waitqueue, false,
+                                              msecs_to_jiffies(2500));
+
+       /* If ret < 0, the wait was interrupted.
+        * If ret == 0, the wait timed out.
+        * Otherwise we got a response for set MAC from the PF;
+        * check whether the netdev MAC was updated to the requested MAC:
+        * if yes, set MAC succeeded, otherwise return -EACCES.
+        */
+       if (ret < 0)
+               return ret;
+
+       if (!ret)
+               return -EAGAIN;
+
+done:
+       if (!ether_addr_equal(netdev->dev_addr, addr->sa_data))
+               return -EACCES;
+
+       return 0;
 }
 
 /**
@@ -2451,6 +2537,8 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
                ether_addr_copy(netdev->perm_addr, adapter->hw.mac.addr);
        }
 
+       adapter->flags |= IAVF_FLAG_INITIAL_MAC_SET;
+
        adapter->tx_desc_count = IAVF_DEFAULT_TXD;
        adapter->rx_desc_count = IAVF_DEFAULT_RXD;
        err = iavf_init_interrupt_scheme(adapter);
@@ -4681,6 +4769,9 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
        /* Setup the wait queue for indicating transition to down status */
        init_waitqueue_head(&adapter->down_waitqueue);
 
+       /* Setup the wait queue for indicating virtchannel events */
+       init_waitqueue_head(&adapter->vc_waitqueue);
+
        return 0;
 
 err_ioremap:
index 782450d..e2b4ba9 100644 (file)
@@ -598,6 +598,8 @@ static void iavf_mac_add_ok(struct iavf_adapter *adapter)
        spin_lock_bh(&adapter->mac_vlan_list_lock);
        list_for_each_entry_safe(f, ftmp, &adapter->mac_filter_list, list) {
                f->is_new_mac = false;
+               if (!f->add && !f->add_handled)
+                       f->add_handled = true;
        }
        spin_unlock_bh(&adapter->mac_vlan_list_lock);
 }
@@ -618,6 +620,9 @@ static void iavf_mac_add_reject(struct iavf_adapter *adapter)
                if (f->remove && ether_addr_equal(f->macaddr, netdev->dev_addr))
                        f->remove = false;
 
+               if (!f->add && !f->add_handled)
+                       f->add_handled = true;
+
                if (f->is_new_mac) {
                        list_del(&f->list);
                        kfree(f);
@@ -1932,6 +1937,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
                        iavf_mac_add_reject(adapter);
                        /* restore administratively set MAC address */
                        ether_addr_copy(adapter->hw.mac.addr, netdev->dev_addr);
+                       wake_up(&adapter->vc_waitqueue);
                        break;
                case VIRTCHNL_OP_DEL_VLAN:
                        dev_err(&adapter->pdev->dev, "Failed to delete VLAN filter, error %s\n",
@@ -2091,7 +2097,13 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
                if (!v_retval)
                        iavf_mac_add_ok(adapter);
-               if (!ether_addr_equal(netdev->dev_addr, adapter->hw.mac.addr))
-                       eth_hw_addr_set(netdev, adapter->hw.mac.addr);
+                       if (!ether_addr_equal(netdev->dev_addr,
+                                             adapter->hw.mac.addr)) {
+                               netif_addr_lock_bh(netdev);
+                               eth_hw_addr_set(netdev, adapter->hw.mac.addr);
+                               netif_addr_unlock_bh(netdev);
+                       }
+               wake_up(&adapter->vc_waitqueue);
                break;
        case VIRTCHNL_OP_GET_STATS: {
                struct iavf_eth_stats *stats =
@@ -2121,10 +2133,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
                        /* restore current mac address */
                        ether_addr_copy(adapter->hw.mac.addr, netdev->dev_addr);
                } else {
+                       netif_addr_lock_bh(netdev);
                        /* refresh current mac address if changed */
-                       eth_hw_addr_set(netdev, adapter->hw.mac.addr);
                        ether_addr_copy(netdev->perm_addr,
                                        adapter->hw.mac.addr);
+                       netif_addr_unlock_bh(netdev);
                }
                spin_lock_bh(&adapter->mac_vlan_list_lock);
                iavf_add_filter(adapter, adapter->hw.mac.addr);
@@ -2160,6 +2173,10 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
                }
                fallthrough;
        case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: {
+               struct iavf_mac_filter *f;
+               bool was_mac_changed;
+               u64 aq_required = 0;
+
                if (v_opcode == VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS)
                        memcpy(&adapter->vlan_v2_caps, msg,
                               min_t(u16, msglen,
@@ -2167,6 +2184,46 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
 
                iavf_process_config(adapter);
                adapter->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES;
+               was_mac_changed = !ether_addr_equal(netdev->dev_addr,
+                                                   adapter->hw.mac.addr);
+
+               spin_lock_bh(&adapter->mac_vlan_list_lock);
+
+               /* re-add all MAC filters */
+               list_for_each_entry(f, &adapter->mac_filter_list, list) {
+                       if (was_mac_changed &&
+                           ether_addr_equal(netdev->dev_addr, f->macaddr))
+                               ether_addr_copy(f->macaddr,
+                                               adapter->hw.mac.addr);
+
+                       f->is_new_mac = true;
+                       f->add = true;
+                       f->add_handled = false;
+                       f->remove = false;
+               }
+
+               /* re-add all VLAN filters */
+               if (VLAN_FILTERING_ALLOWED(adapter)) {
+                       struct iavf_vlan_filter *vlf;
+
+                       if (!list_empty(&adapter->vlan_filter_list)) {
+                               list_for_each_entry(vlf,
+                                                   &adapter->vlan_filter_list,
+                                                   list)
+                                       vlf->add = true;
+
+                               aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
+                       }
+               }
+
+               spin_unlock_bh(&adapter->mac_vlan_list_lock);
+
+               netif_addr_lock_bh(netdev);
+               eth_hw_addr_set(netdev, adapter->hw.mac.addr);
+               netif_addr_unlock_bh(netdev);
+
+               adapter->aq_required |= IAVF_FLAG_AQ_ADD_MAC_FILTER |
+                       aq_required;
                }
                break;
        case VIRTCHNL_OP_ENABLE_QUEUES:
index 5d10c4f..ead6d50 100644 (file)
@@ -852,7 +852,7 @@ ice_create_init_fdir_rule(struct ice_pf *pf, enum ice_fltr_ptype flow)
        if (!seg)
                return -ENOMEM;
 
-       tun_seg = devm_kcalloc(dev, sizeof(*seg), ICE_FD_HW_SEG_MAX,
+       tun_seg = devm_kcalloc(dev, ICE_FD_HW_SEG_MAX, sizeof(*tun_seg),
                               GFP_KERNEL);
        if (!tun_seg) {
                devm_kfree(dev, seg);
@@ -1214,7 +1214,7 @@ ice_cfg_fdir_xtrct_seq(struct ice_pf *pf, struct ethtool_rx_flow_spec *fsp,
        if (!seg)
                return -ENOMEM;
 
-       tun_seg = devm_kcalloc(dev, sizeof(*seg), ICE_FD_HW_SEG_MAX,
+       tun_seg = devm_kcalloc(dev, ICE_FD_HW_SEG_MAX, sizeof(*tun_seg),
                               GFP_KERNEL);
        if (!tun_seg) {
                devm_kfree(dev, seg);
index 57586a2..c6d755f 100644 (file)
@@ -17,13 +17,13 @@ static void ice_gnss_read(struct kthread_work *work)
        struct gnss_serial *gnss = container_of(work, struct gnss_serial,
                                                read_work.work);
        struct ice_aqc_link_topo_addr link_topo;
-       u8 i2c_params, bytes_read;
+       unsigned int i, bytes_read, data_len;
        struct tty_port *port;
        struct ice_pf *pf;
        struct ice_hw *hw;
        __be16 data_len_b;
        char *buf = NULL;
-       u16 i, data_len;
+       u8 i2c_params;
        int err = 0;
 
        pf = gnss->back;
@@ -65,7 +65,7 @@ static void ice_gnss_read(struct kthread_work *work)
                mdelay(10);
        }
 
-       data_len = min(data_len, (u16)PAGE_SIZE);
+       data_len = min_t(typeof(data_len), data_len, PAGE_SIZE);
        data_len = tty_buffer_request_room(port, data_len);
        if (!data_len) {
                err = -ENOMEM;
@@ -74,9 +74,10 @@ static void ice_gnss_read(struct kthread_work *work)
 
        /* Read received data */
        for (i = 0; i < data_len; i += bytes_read) {
-               u16 bytes_left = data_len - i;
+               unsigned int bytes_left = data_len - i;
 
-               bytes_read = min_t(typeof(bytes_left), bytes_left, ICE_MAX_I2C_DATA_SIZE);
+               bytes_read = min_t(typeof(bytes_left), bytes_left,
+                                  ICE_MAX_I2C_DATA_SIZE);
 
                err = ice_aq_read_i2c(hw, link_topo, ICE_GNSS_UBX_I2C_BUS_ADDR,
                                      cpu_to_le16(ICE_GNSS_UBX_EMPTY_DATA),
index 454e01a..b28fb8e 100644 (file)
@@ -887,6 +887,9 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt)
                        (ICE_AQ_VSI_OUTER_TAG_VLAN_8100 <<
                         ICE_AQ_VSI_OUTER_TAG_TYPE_S) &
                        ICE_AQ_VSI_OUTER_TAG_TYPE_M;
+               ctxt->info.outer_vlan_flags |=
+                       FIELD_PREP(ICE_AQ_VSI_OUTER_VLAN_EMODE_M,
+                                  ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING);
        }
        /* Have 1:1 UP mapping for both ingress/egress tables */
        table |= ICE_UP_TABLE_TRANSLATE(0, 0);
@@ -2403,7 +2406,7 @@ static void ice_set_agg_vsi(struct ice_vsi *vsi)
                                agg_id);
                        return;
                }
-               /* aggregator node is created, store the neeeded info */
+               /* aggregator node is created, store the needed info */
                agg_node->valid = true;
                agg_node->agg_id = agg_id;
        }
index bb1721f..86093b2 100644 (file)
@@ -1593,16 +1593,6 @@ ice_set_vf_bw(struct net_device *netdev, int vf_id, int min_tx_rate,
                goto out_put_vf;
        }
 
-       /* when max_tx_rate is zero that means no max Tx rate limiting, so only
-        * check if max_tx_rate is non-zero
-        */
-       if (max_tx_rate && min_tx_rate > max_tx_rate) {
-               dev_err(dev, "Cannot set min Tx rate %d Mbps greater than max Tx rate %d Mbps\n",
-                       min_tx_rate, max_tx_rate);
-               ret = -EINVAL;
-               goto out_put_vf;
-       }
-
        if (min_tx_rate && ice_is_dcb_active(pf)) {
                dev_err(dev, "DCB on PF is currently enabled. VF min Tx rate limiting not allowed on this PF.\n");
                ret = -EOPNOTSUPP;
index 1d9b84c..99cb382 100644 (file)
@@ -359,6 +359,54 @@ static u16 ice_vc_get_max_frame_size(struct ice_vf *vf)
        return max_frame_size;
 }
 
+/**
+ * ice_vc_get_vlan_caps
+ * @hw: pointer to the hw
+ * @vf: pointer to the VF info
+ * @vsi: pointer to the VSI
+ * @driver_caps: current driver caps
+ *
+ * Return 0 if no VLAN caps are supported, otherwise return the VLAN caps value
+ */
+static u32
+ice_vc_get_vlan_caps(struct ice_hw *hw, struct ice_vf *vf, struct ice_vsi *vsi,
+                    u32 driver_caps)
+{
+       if (ice_is_eswitch_mode_switchdev(vf->pf))
+               /* In switchdev setting VLAN from VF isn't supported */
+               return 0;
+
+       if (driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
+               /* VLAN offloads based on current device configuration */
+               return VIRTCHNL_VF_OFFLOAD_VLAN_V2;
+       } else if (driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN) {
+               /* allow VF to negotiate VIRTCHNL_VF_OFFLOAD explicitly for
+                * these two conditions, which amounts to guest VLAN filtering
+                * and offloads being based on the inner VLAN or the
+                * inner/single VLAN respectively and don't allow VF to
+                * negotiate VIRTCHNL_VF_OFFLOAD in any other cases
+                */
+               if (ice_is_dvm_ena(hw) && ice_vf_is_port_vlan_ena(vf)) {
+                       return VIRTCHNL_VF_OFFLOAD_VLAN;
+               } else if (!ice_is_dvm_ena(hw) &&
+                          !ice_vf_is_port_vlan_ena(vf)) {
+                       /* configure backward compatible support for VFs that
+                        * only support VIRTCHNL_VF_OFFLOAD_VLAN, the PF is
+                        * configured in SVM, and no port VLAN is configured
+                        */
+                       ice_vf_vsi_cfg_svm_legacy_vlan_mode(vsi);
+                       return VIRTCHNL_VF_OFFLOAD_VLAN;
+               } else if (ice_is_dvm_ena(hw)) {
+                       /* configure software offloaded VLAN support when DVM
+                        * is enabled, but no port VLAN is enabled
+                        */
+                       ice_vf_vsi_cfg_dvm_legacy_vlan_mode(vsi);
+               }
+       }
+
+       return 0;
+}
+
 /**
  * ice_vc_get_vf_res_msg
  * @vf: pointer to the VF info
@@ -402,33 +450,8 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
                goto err;
        }
 
-       if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
-               /* VLAN offloads based on current device configuration */
-               vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN_V2;
-       } else if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN) {
-               /* allow VF to negotiate VIRTCHNL_VF_OFFLOAD explicitly for
-                * these two conditions, which amounts to guest VLAN filtering
-                * and offloads being based on the inner VLAN or the
-                * inner/single VLAN respectively and don't allow VF to
-                * negotiate VIRTCHNL_VF_OFFLOAD in any other cases
-                */
-               if (ice_is_dvm_ena(hw) && ice_vf_is_port_vlan_ena(vf)) {
-                       vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN;
-               } else if (!ice_is_dvm_ena(hw) &&
-                          !ice_vf_is_port_vlan_ena(vf)) {
-                       vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN;
-                       /* configure backward compatible support for VFs that
-                        * only support VIRTCHNL_VF_OFFLOAD_VLAN, the PF is
-                        * configured in SVM, and no port VLAN is configured
-                        */
-                       ice_vf_vsi_cfg_svm_legacy_vlan_mode(vsi);
-               } else if (ice_is_dvm_ena(hw)) {
-                       /* configure software offloaded VLAN support when DVM
-                        * is enabled, but no port VLAN is enabled
-                        */
-                       ice_vf_vsi_cfg_dvm_legacy_vlan_mode(vsi);
-               }
-       }
+       vfres->vf_cap_flags |= ice_vc_get_vlan_caps(hw, vf, vsi,
+                                                   vf->driver_caps);
 
        if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
                vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_PF;
@@ -3529,42 +3552,6 @@ ice_vc_repr_del_mac(struct ice_vf __always_unused *vf, u8 __always_unused *msg)
                                     VIRTCHNL_STATUS_SUCCESS, NULL, 0);
 }
 
-static int ice_vc_repr_add_vlan(struct ice_vf *vf, u8 __always_unused *msg)
-{
-       dev_dbg(ice_pf_to_dev(vf->pf),
-               "Can't add VLAN in switchdev mode for VF %d\n", vf->vf_id);
-       return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_VLAN,
-                                    VIRTCHNL_STATUS_SUCCESS, NULL, 0);
-}
-
-static int ice_vc_repr_del_vlan(struct ice_vf *vf, u8 __always_unused *msg)
-{
-       dev_dbg(ice_pf_to_dev(vf->pf),
-               "Can't delete VLAN in switchdev mode for VF %d\n", vf->vf_id);
-       return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DEL_VLAN,
-                                    VIRTCHNL_STATUS_SUCCESS, NULL, 0);
-}
-
-static int ice_vc_repr_ena_vlan_stripping(struct ice_vf *vf)
-{
-       dev_dbg(ice_pf_to_dev(vf->pf),
-               "Can't enable VLAN stripping in switchdev mode for VF %d\n",
-               vf->vf_id);
-       return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING,
-                                    VIRTCHNL_STATUS_ERR_NOT_SUPPORTED,
-                                    NULL, 0);
-}
-
-static int ice_vc_repr_dis_vlan_stripping(struct ice_vf *vf)
-{
-       dev_dbg(ice_pf_to_dev(vf->pf),
-               "Can't disable VLAN stripping in switchdev mode for VF %d\n",
-               vf->vf_id);
-       return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING,
-                                    VIRTCHNL_STATUS_ERR_NOT_SUPPORTED,
-                                    NULL, 0);
-}
-
 static int
 ice_vc_repr_cfg_promiscuous_mode(struct ice_vf *vf, u8 __always_unused *msg)
 {
@@ -3591,10 +3578,10 @@ static const struct ice_virtchnl_ops ice_virtchnl_repr_ops = {
        .config_rss_lut = ice_vc_config_rss_lut,
        .get_stats_msg = ice_vc_get_stats_msg,
        .cfg_promiscuous_mode_msg = ice_vc_repr_cfg_promiscuous_mode,
-       .add_vlan_msg = ice_vc_repr_add_vlan,
-       .remove_vlan_msg = ice_vc_repr_del_vlan,
-       .ena_vlan_stripping = ice_vc_repr_ena_vlan_stripping,
-       .dis_vlan_stripping = ice_vc_repr_dis_vlan_stripping,
+       .add_vlan_msg = ice_vc_add_vlan_msg,
+       .remove_vlan_msg = ice_vc_remove_vlan_msg,
+       .ena_vlan_stripping = ice_vc_ena_vlan_stripping,
+       .dis_vlan_stripping = ice_vc_dis_vlan_stripping,
        .handle_rss_cfg_msg = ice_vc_handle_rss_cfg,
        .add_fdir_fltr_msg = ice_vc_add_fdir_fltr,
        .del_fdir_fltr_msg = ice_vc_del_fdir_fltr,
index ca54297..fa02892 100644 (file)
 #define E1000_VFTA_ENTRY_MASK                0x7F
 #define E1000_VFTA_ENTRY_BIT_SHIFT_MASK      0x1F
 
-/* DMA Coalescing register fields */
-#define E1000_PCIEMISC_LX_DECISION      0x00000080 /* Lx power on DMA coal */
-
 /* Tx Rate-Scheduler Config fields */
 #define E1000_RTTBCNRC_RS_ENA          0x80000000
 #define E1000_RTTBCNRC_RF_DEC_MASK     0x00003FFF
index 9cb4998..eb9f6da 100644 (file)
 #define E1000_DMCRTRH  0x05DD0 /* Receive Packet Rate Threshold */
 #define E1000_DMCCNT   0x05DD4 /* Current Rx Count */
 #define E1000_FCRTC    0x02170 /* Flow Control Rx high watermark */
-#define E1000_PCIEMISC 0x05BB8 /* PCIE misc config register */
 
 /* TX Rate Limit Registers */
 #define E1000_RTTDQSEL 0x3604 /* Tx Desc Plane Queue Select - WO */
index c8d1e81..98bd326 100644 (file)
@@ -576,7 +576,7 @@ ixgb_rar_set(struct ixgb_hw *hw,
  * Writes a value to the specified offset in the VLAN filter table.
  *
  * hw - Struct containing variables accessed by shared code
- * offset - Offset in VLAN filer table to write
+ * offset - Offset in VLAN filter table to write
  * value - Value to write into VLAN filter table
  *****************************************************************************/
 void
@@ -588,7 +588,7 @@ ixgb_write_vfta(struct ixgb_hw *hw,
 }
 
 /******************************************************************************
- * Clears the VLAN filer table
+ * Clears the VLAN filter table
  *
  * hw - Struct containing variables accessed by shared code
  *****************************************************************************/
index 921a4d9..48444ab 100644 (file)
@@ -167,12 +167,46 @@ enum ixgbe_tx_flags {
 #define IXGBE_82599_VF_DEVICE_ID        0x10ED
 #define IXGBE_X540_VF_DEVICE_ID         0x1515
 
+#define UPDATE_VF_COUNTER_32bit(reg, last_counter, counter)    \
+       {                                                       \
+               u32 current_counter = IXGBE_READ_REG(hw, reg);  \
+               if (current_counter < last_counter)             \
+                       counter += 0x100000000LL;               \
+               last_counter = current_counter;                 \
+               counter &= 0xFFFFFFFF00000000LL;                \
+               counter |= current_counter;                     \
+       }
+
+#define UPDATE_VF_COUNTER_36bit(reg_lsb, reg_msb, last_counter, counter) \
+       {                                                                \
+               u64 current_counter_lsb = IXGBE_READ_REG(hw, reg_lsb);   \
+               u64 current_counter_msb = IXGBE_READ_REG(hw, reg_msb);   \
+               u64 current_counter = (current_counter_msb << 32) |      \
+                       current_counter_lsb;                             \
+               if (current_counter < last_counter)                      \
+                       counter += 0x1000000000LL;                       \
+               last_counter = current_counter;                          \
+               counter &= 0xFFFFFFF000000000LL;                         \
+               counter |= current_counter;                              \
+       }
+
+struct vf_stats {
+       u64 gprc;
+       u64 gorc;
+       u64 gptc;
+       u64 gotc;
+       u64 mprc;
+};
+
 struct vf_data_storage {
        struct pci_dev *vfdev;
        unsigned char vf_mac_addresses[ETH_ALEN];
        u16 vf_mc_hashes[IXGBE_MAX_VF_MC_ENTRIES];
        u16 num_vf_mc_hashes;
        bool clear_to_send;
+       struct vf_stats vfstats;
+       struct vf_stats last_vfstats;
+       struct vf_stats saved_rst_vfstats;
        bool pf_set_mac;
        u16 pf_vlan; /* When set, guest VLAN config not allowed. */
        u16 pf_qos;
index 95c92fe..1003889 100644 (file)
@@ -879,7 +879,7 @@ static s32 ixgbe_set_vfta_82598(struct ixgbe_hw *hw, u32 vlan, u32 vind,
  *  ixgbe_clear_vfta_82598 - Clear VLAN filter table
  *  @hw: pointer to hardware structure
  *
- *  Clears the VLAN filer table, and the VMDq index associated with the filter
+ *  Clears the VLAN filter table, and the VMDq index associated with the filter
  **/
 static s32 ixgbe_clear_vfta_82598(struct ixgbe_hw *hw)
 {
index 4c26c4b..38c4609 100644 (file)
@@ -3237,7 +3237,7 @@ vfta_update:
  *  ixgbe_clear_vfta_generic - Clear VLAN filter table
  *  @hw: pointer to hardware structure
  *
- *  Clears the VLAN filer table, and the VMDq index associated with the filter
+ *  Clears the VLAN filter table, and the VMDq index associated with the filter
  **/
 s32 ixgbe_clear_vfta_generic(struct ixgbe_hw *hw)
 {
index 77c2e70..5c62e99 100644 (file)
@@ -5549,6 +5549,47 @@ static int ixgbe_non_sfp_link_config(struct ixgbe_hw *hw)
        return ret;
 }
 
+/**
+ * ixgbe_clear_vf_stats_counters - Clear out VF stats after reset
+ * @adapter: board private structure
+ *
+ * On a reset we need to clear out the VF stats or accounting gets
+ * messed up because they're not clear on read.
+ **/
+static void ixgbe_clear_vf_stats_counters(struct ixgbe_adapter *adapter)
+{
+       struct ixgbe_hw *hw = &adapter->hw;
+       int i;
+
+       for (i = 0; i < adapter->num_vfs; i++) {
+               adapter->vfinfo[i].last_vfstats.gprc =
+                       IXGBE_READ_REG(hw, IXGBE_PVFGPRC(i));
+               adapter->vfinfo[i].saved_rst_vfstats.gprc +=
+                       adapter->vfinfo[i].vfstats.gprc;
+               adapter->vfinfo[i].vfstats.gprc = 0;
+               adapter->vfinfo[i].last_vfstats.gptc =
+                       IXGBE_READ_REG(hw, IXGBE_PVFGPTC(i));
+               adapter->vfinfo[i].saved_rst_vfstats.gptc +=
+                       adapter->vfinfo[i].vfstats.gptc;
+               adapter->vfinfo[i].vfstats.gptc = 0;
+               adapter->vfinfo[i].last_vfstats.gorc =
+                       IXGBE_READ_REG(hw, IXGBE_PVFGORC_LSB(i));
+               adapter->vfinfo[i].saved_rst_vfstats.gorc +=
+                       adapter->vfinfo[i].vfstats.gorc;
+               adapter->vfinfo[i].vfstats.gorc = 0;
+               adapter->vfinfo[i].last_vfstats.gotc =
+                       IXGBE_READ_REG(hw, IXGBE_PVFGOTC_LSB(i));
+               adapter->vfinfo[i].saved_rst_vfstats.gotc +=
+                       adapter->vfinfo[i].vfstats.gotc;
+               adapter->vfinfo[i].vfstats.gotc = 0;
+               adapter->vfinfo[i].last_vfstats.mprc =
+                       IXGBE_READ_REG(hw, IXGBE_PVFMPRC(i));
+               adapter->vfinfo[i].saved_rst_vfstats.mprc +=
+                       adapter->vfinfo[i].vfstats.mprc;
+               adapter->vfinfo[i].vfstats.mprc = 0;
+       }
+}
+
 static void ixgbe_setup_gpie(struct ixgbe_adapter *adapter)
 {
        struct ixgbe_hw *hw = &adapter->hw;
@@ -5684,6 +5725,7 @@ static void ixgbe_up_complete(struct ixgbe_adapter *adapter)
        adapter->link_check_timeout = jiffies;
        mod_timer(&adapter->service_timer, jiffies);
 
+       ixgbe_clear_vf_stats_counters(adapter);
        /* Set PF Reset Done bit so PF/VF Mail Ops can work */
        ctrl_ext = IXGBE_READ_REG(hw, IXGBE_CTRL_EXT);
        ctrl_ext |= IXGBE_CTRL_EXT_PFRSTD;
@@ -7271,6 +7313,32 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
        netdev->stats.rx_length_errors = hwstats->rlec;
        netdev->stats.rx_crc_errors = hwstats->crcerrs;
        netdev->stats.rx_missed_errors = total_mpc;
+
+       /* VF Stats Collection - skip while resetting because these
+        * are not clear on read and otherwise you'll sometimes get
+        * crazy values.
+        */
+       if (!test_bit(__IXGBE_RESETTING, &adapter->state)) {
+               for (i = 0; i < adapter->num_vfs; i++) {
+                       UPDATE_VF_COUNTER_32bit(IXGBE_PVFGPRC(i),
+                                               adapter->vfinfo[i].last_vfstats.gprc,
+                                               adapter->vfinfo[i].vfstats.gprc);
+                       UPDATE_VF_COUNTER_32bit(IXGBE_PVFGPTC(i),
+                                               adapter->vfinfo[i].last_vfstats.gptc,
+                                               adapter->vfinfo[i].vfstats.gptc);
+                       UPDATE_VF_COUNTER_36bit(IXGBE_PVFGORC_LSB(i),
+                                               IXGBE_PVFGORC_MSB(i),
+                                               adapter->vfinfo[i].last_vfstats.gorc,
+                                               adapter->vfinfo[i].vfstats.gorc);
+                       UPDATE_VF_COUNTER_36bit(IXGBE_PVFGOTC_LSB(i),
+                                               IXGBE_PVFGOTC_MSB(i),
+                                               adapter->vfinfo[i].last_vfstats.gotc,
+                                               adapter->vfinfo[i].vfstats.gotc);
+                       UPDATE_VF_COUNTER_32bit(IXGBE_PVFMPRC(i),
+                                               adapter->vfinfo[i].last_vfstats.mprc,
+                                               adapter->vfinfo[i].vfstats.mprc);
+               }
+       }
 }
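The `UPDATE_VF_COUNTER_36bit` macro used above is not shown in this hunk; conceptually it reads a counter split across LSB/MSB registers and accumulates the delta since the last read, with wrap handling at 36 bits. A minimal userspace sketch of that accumulation (function and field names here are illustrative, not the actual ixgbe macro):

```c
#include <assert.h>
#include <stdint.h>

/* Accumulate a 36-bit hardware counter split across a 32-bit LSB register
 * and the low 4 bits of an MSB register. Only the delta since the last
 * snapshot is added, and the subtraction is masked to 36 bits so a counter
 * wrap between reads is still counted correctly. */
static uint64_t accumulate_36bit(uint32_t lsb, uint32_t msb,
				 uint64_t *last, uint64_t *total)
{
	uint64_t now = ((uint64_t)(msb & 0xF) << 32) | lsb; /* 36-bit value */
	uint64_t diff = (now - *last) & 0xFFFFFFFFFULL;     /* wrap-safe */

	*last = now;
	*total += diff;
	return *total;
}
```

With this shape, a reset only needs to fold the running total into a saved counter and resnapshot `last`, which is what `ixgbe_clear_vf_stats_counters()` above does register by register.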
 
 /**
@@ -9022,6 +9090,23 @@ static void ixgbe_get_stats64(struct net_device *netdev,
        stats->rx_missed_errors = netdev->stats.rx_missed_errors;
 }
 
+static int ixgbe_ndo_get_vf_stats(struct net_device *netdev, int vf,
+                                 struct ifla_vf_stats *vf_stats)
+{
+       struct ixgbe_adapter *adapter = netdev_priv(netdev);
+
+       if (vf < 0 || vf >= adapter->num_vfs)
+               return -EINVAL;
+
+       vf_stats->rx_packets = adapter->vfinfo[vf].vfstats.gprc;
+       vf_stats->rx_bytes   = adapter->vfinfo[vf].vfstats.gorc;
+       vf_stats->tx_packets = adapter->vfinfo[vf].vfstats.gptc;
+       vf_stats->tx_bytes   = adapter->vfinfo[vf].vfstats.gotc;
+       vf_stats->multicast  = adapter->vfinfo[vf].vfstats.mprc;
+
+       return 0;
+}
+
 #ifdef CONFIG_IXGBE_DCB
 /**
  * ixgbe_validate_rtr - verify 802.1Qp to Rx packet buffer mapping is valid.
@@ -10338,6 +10423,7 @@ static const struct net_device_ops ixgbe_netdev_ops = {
        .ndo_set_vf_rss_query_en = ixgbe_ndo_set_vf_rss_query_en,
        .ndo_set_vf_trust       = ixgbe_ndo_set_vf_trust,
        .ndo_get_vf_config      = ixgbe_ndo_get_vf_config,
+       .ndo_get_vf_stats       = ixgbe_ndo_get_vf_stats,
        .ndo_get_stats64        = ixgbe_get_stats64,
        .ndo_setup_tc           = __ixgbe_setup_tc,
 #ifdef IXGBE_FCOE
index 7f11c0a..67e49aa 100644 (file)
@@ -77,7 +77,7 @@ static int __ixgbe_enable_sriov(struct ixgbe_adapter *adapter,
        IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, IXGBE_PFDTXGSWC_VT_LBEN);
        adapter->bridge_mode = BRIDGE_MODE_VEB;
 
-       /* limit trafffic classes based on VFs enabled */
+       /* limit traffic classes based on VFs enabled */
        if ((adapter->hw.mac.type == ixgbe_mac_82599EB) && (num_vfs < 16)) {
                adapter->dcb_cfg.num_tcs.pg_tcs = MAX_TRAFFIC_CLASS;
                adapter->dcb_cfg.num_tcs.pfc_tcs = MAX_TRAFFIC_CLASS;
@@ -1184,9 +1184,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter,
 
        switch (xcast_mode) {
        case IXGBEVF_XCAST_MODE_NONE:
-               disable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |
+               disable = IXGBE_VMOLR_ROMPE |
                          IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
-               enable = 0;
+               enable = IXGBE_VMOLR_BAM;
                break;
        case IXGBEVF_XCAST_MODE_MULTI:
                disable = IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
@@ -1208,9 +1208,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter,
                        return -EPERM;
                }
 
-               disable = 0;
+               disable = IXGBE_VMOLR_VPE;
                enable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |
-                        IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
+                        IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE;
                break;
        default:
                return -EOPNOTSUPP;
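The xcast-mode change above works by computing per-mode `disable` and `enable` masks and applying them to the VMOLR register; moving `IXGBE_VMOLR_BAM` from `disable` to `enable` in the NONE case (and VPE the other way in promiscuous mode) changes which bits survive the read-modify-write. A sketch of that pattern, with illustrative flag values rather than the real register layout:

```c
#include <assert.h>
#include <stdint.h>

/* Read-modify-write pattern used when switching xcast modes: clear the
 * bits the mode forbids, then set the bits it requires. */
static uint32_t apply_vmolr(uint32_t vmolr, uint32_t disable, uint32_t enable)
{
	vmolr &= ~disable;
	vmolr |= enable;
	return vmolr;
}
```

Because `enable` is applied after `~disable`, a bit listed in both masks ends up set, so moving a flag between the two masks is sufficient to flip the resulting policy.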
index 6da9880..7f7ea46 100644 (file)
@@ -2533,6 +2533,13 @@ enum {
 #define IXGBE_PVFTXDCTL(P)     (0x06028 + (0x40 * (P)))
 #define IXGBE_PVFTDWBAL(P)     (0x06038 + (0x40 * (P)))
 #define IXGBE_PVFTDWBAH(P)     (0x0603C + (0x40 * (P)))
+#define IXGBE_PVFGPRC(x)       (0x0101C + (0x40 * (x)))
+#define IXGBE_PVFGPTC(x)       (0x08300 + (0x04 * (x)))
+#define IXGBE_PVFGORC_LSB(x)   (0x01020 + (0x40 * (x)))
+#define IXGBE_PVFGORC_MSB(x)   (0x0D020 + (0x40 * (x)))
+#define IXGBE_PVFGOTC_LSB(x)   (0x08400 + (0x08 * (x)))
+#define IXGBE_PVFGOTC_MSB(x)   (0x08404 + (0x08 * (x)))
+#define IXGBE_PVFMPRC(x)       (0x0D01C + (0x40 * (x)))
 
 #define IXGBE_PVFTDWBALn(q_per_pool, vf_number, vf_q_index) \
                (IXGBE_PVFTDWBAL((q_per_pool)*(vf_number) + (vf_q_index)))
index b3b3c07..6beb3d4 100644 (file)
@@ -899,6 +899,17 @@ static bool mtk_rx_get_desc(struct mtk_eth *eth, struct mtk_rx_dma_v2 *rxd,
        return true;
 }
 
+static void *mtk_max_lro_buf_alloc(gfp_t gfp_mask)
+{
+       unsigned int size = mtk_max_frag_size(MTK_MAX_LRO_RX_LENGTH);
+       unsigned long data;
+
+       data = __get_free_pages(gfp_mask | __GFP_COMP | __GFP_NOWARN,
+                               get_order(size));
+
+       return (void *)data;
+}
+
 /* the qdma core needs scratch memory to be setup */
 static int mtk_init_fq_dma(struct mtk_eth *eth)
 {
@@ -1433,8 +1444,8 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
        int done = 0, bytes = 0;
 
        while (done < budget) {
+               unsigned int pktlen, *rxdcsum;
                struct net_device *netdev;
-               unsigned int pktlen;
                dma_addr_t dma_addr;
                u32 hash, reason;
                int mac = 0;
@@ -1467,7 +1478,10 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
                        goto release_desc;
 
                /* alloc new buffer */
-               new_data = napi_alloc_frag(ring->frag_size);
+               if (ring->frag_size <= PAGE_SIZE)
+                       new_data = napi_alloc_frag(ring->frag_size);
+               else
+                       new_data = mtk_max_lro_buf_alloc(GFP_ATOMIC);
                if (unlikely(!new_data)) {
                        netdev->stats.rx_dropped++;
                        goto release_desc;
@@ -1498,7 +1512,13 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
                pktlen = RX_DMA_GET_PLEN0(trxd.rxd2);
                skb->dev = netdev;
                skb_put(skb, pktlen);
-               if (trxd.rxd4 & eth->soc->txrx.rx_dma_l4_valid)
+
+               if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
+                       rxdcsum = &trxd.rxd3;
+               else
+                       rxdcsum = &trxd.rxd4;
+
+               if (*rxdcsum & eth->soc->txrx.rx_dma_l4_valid)
                        skb->ip_summed = CHECKSUM_UNNECESSARY;
                else
                        skb_checksum_none_assert(skb);
@@ -1914,7 +1934,10 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag)
                return -ENOMEM;
 
        for (i = 0; i < rx_dma_size; i++) {
-               ring->data[i] = netdev_alloc_frag(ring->frag_size);
+               if (ring->frag_size <= PAGE_SIZE)
+                       ring->data[i] = netdev_alloc_frag(ring->frag_size);
+               else
+                       ring->data[i] = mtk_max_lro_buf_alloc(GFP_KERNEL);
                if (!ring->data[i])
                        return -ENOMEM;
        }
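The fallback above kicks in when `ring->frag_size` exceeds one page: `napi_alloc_frag()`/`netdev_alloc_frag()` cannot serve it, so whole pages of the smallest sufficient order are requested instead. A userspace sketch of the order computation that `get_order()` performs (names illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Smallest power-of-two number of pages (expressed as an order) whose
 * combined size covers 'size'; order 0 is a single page. */
static int order_for_size(size_t size, size_t page_size)
{
	size_t span = page_size;
	int order = 0;

	while (span < size) {
		span <<= 1;
		order++;
	}
	return order;
}
```

So an LRO buffer just over one page costs an order-1 (two-page) allocation, which is why the driver only takes this path when the frag allocator cannot be used.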
@@ -3744,6 +3767,7 @@ static const struct mtk_soc_data mt7986_data = {
                .txd_size = sizeof(struct mtk_tx_dma_v2),
                .rxd_size = sizeof(struct mtk_rx_dma_v2),
                .rx_irq_done_mask = MTK_RX_DONE_INT_V2,
+               .rx_dma_l4_valid = RX_DMA_L4_VALID_V2,
                .dma_max_len = MTK_TX_DMA_BUF_LEN_V2,
                .dma_len_offset = 8,
        },
index ed5038d..6400a82 100644 (file)
@@ -2110,7 +2110,7 @@ static int mlx4_en_get_module_eeprom(struct net_device *dev,
                        en_err(priv,
                               "mlx4_get_module_info i(%d) offset(%d) bytes_to_read(%d) - FAILED (0x%x)\n",
                               i, offset, ee->len - i, ret);
-                       return 0;
+                       return ret;
                }
 
                i += ret;
index 0eb9d74..50422b5 100644 (file)
@@ -579,17 +579,6 @@ static void *pci_get_other_drvdata(struct device *this, struct device *other)
        return pci_get_drvdata(to_pci_dev(other));
 }
 
-static int next_phys_dev(struct device *dev, const void *data)
-{
-       struct mlx5_core_dev *mdev, *this = (struct mlx5_core_dev *)data;
-
-       mdev = pci_get_other_drvdata(this->device, dev);
-       if (!mdev)
-               return 0;
-
-       return _next_phys_dev(mdev, data);
-}
-
 static int next_phys_dev_lag(struct device *dev, const void *data)
 {
        struct mlx5_core_dev *mdev, *this = (struct mlx5_core_dev *)data;
@@ -623,13 +612,6 @@ static struct mlx5_core_dev *mlx5_get_next_dev(struct mlx5_core_dev *dev,
        return pci_get_drvdata(to_pci_dev(next));
 }
 
-/* Must be called with intf_mutex held */
-struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev)
-{
-       lockdep_assert_held(&mlx5_intf_mutex);
-       return mlx5_get_next_dev(dev, &next_phys_dev);
-}
-
 /* Must be called with intf_mutex held */
 struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev)
 {
index eae9aa9..978a2bb 100644 (file)
@@ -675,6 +675,9 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
        if (!tracer->owner)
                return;
 
+       if (unlikely(!tracer->str_db.loaded))
+               goto arm;
+
        block_count = tracer->buff.size / TRACER_BLOCK_SIZE_BYTE;
        start_offset = tracer->buff.consumer_index * TRACER_BLOCK_SIZE_BYTE;
 
@@ -732,6 +735,7 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
                                                      &tmp_trace_block[TRACES_PER_BLOCK - 1]);
        }
 
+arm:
        mlx5_fw_tracer_arm(dev);
 }
 
@@ -1136,8 +1140,7 @@ static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void
                queue_work(tracer->work_queue, &tracer->ownership_change_work);
                break;
        case MLX5_TRACER_SUBTYPE_TRACES_AVAILABLE:
-               if (likely(tracer->str_db.loaded))
-                       queue_work(tracer->work_queue, &tracer->handle_traces_work);
+               queue_work(tracer->work_queue, &tracer->handle_traces_work);
                break;
        default:
                mlx5_core_dbg(dev, "FWTracer: Event with unrecognized subtype: sub_type %d\n",
index 6836448..3c1edfa 100644 (file)
@@ -565,7 +565,8 @@ static void mlx5e_build_rx_cq_param(struct mlx5_core_dev *mdev,
 static u8 rq_end_pad_mode(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
 {
        bool lro_en = params->packet_merge.type == MLX5E_PACKET_MERGE_LRO;
-       bool ro = MLX5_CAP_GEN(mdev, relaxed_ordering_write);
+       bool ro = pcie_relaxed_ordering_enabled(mdev->pdev) &&
+               MLX5_CAP_GEN(mdev, relaxed_ordering_write);
 
        return ro && lro_en ?
                MLX5_WQ_END_PAD_MODE_NONE : MLX5_WQ_END_PAD_MODE_ALIGN;
index 43a536c..c0f409c 100644 (file)
 
 void mlx5e_mkey_set_relaxed_ordering(struct mlx5_core_dev *mdev, void *mkc)
 {
+       bool ro_pci_enable = pcie_relaxed_ordering_enabled(mdev->pdev);
        bool ro_write = MLX5_CAP_GEN(mdev, relaxed_ordering_write);
        bool ro_read = MLX5_CAP_GEN(mdev, relaxed_ordering_read);
 
-       MLX5_SET(mkc, mkc, relaxed_ordering_read, ro_read);
-       MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_write);
+       MLX5_SET(mkc, mkc, relaxed_ordering_read, ro_pci_enable && ro_read);
+       MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_pci_enable && ro_write);
 }
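Both relaxed-ordering hunks above apply the same rule: a device capability bit only takes effect when relaxed ordering is also enabled on the PCI link, so each mkey bit becomes the AND of the two conditions. A tiny sketch of that gating (names illustrative, not the mlx5 API):

```c
#include <assert.h>

/* Each relaxed-ordering bit is honored only when both the device
 * capability and the PCI-level setting allow it. */
static void set_ro_bits(int pci_ro, int cap_read, int cap_write,
			int *mkc_read, int *mkc_write)
{
	*mkc_read  = pci_ro && cap_read;
	*mkc_write = pci_ro && cap_write;
}
```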
 
 static int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn,
index eb90e79..f797fd9 100644 (file)
@@ -950,6 +950,13 @@ err_event_reg:
        return err;
 }
 
+static void mlx5e_cleanup_uplink_rep_tx(struct mlx5e_rep_priv *rpriv)
+{
+       mlx5e_rep_tc_netdevice_event_unregister(rpriv);
+       mlx5e_rep_bond_cleanup(rpriv);
+       mlx5e_rep_tc_cleanup(rpriv);
+}
+
 static int mlx5e_init_rep_tx(struct mlx5e_priv *priv)
 {
        struct mlx5e_rep_priv *rpriv = priv->ppriv;
@@ -961,42 +968,36 @@ static int mlx5e_init_rep_tx(struct mlx5e_priv *priv)
                return err;
        }
 
-       err = mlx5e_tc_ht_init(&rpriv->tc_ht);
-       if (err)
-               goto err_ht_init;
-
        if (rpriv->rep->vport == MLX5_VPORT_UPLINK) {
                err = mlx5e_init_uplink_rep_tx(rpriv);
                if (err)
                        goto err_init_tx;
        }
 
+       err = mlx5e_tc_ht_init(&rpriv->tc_ht);
+       if (err)
+               goto err_ht_init;
+
        return 0;
 
-err_init_tx:
-       mlx5e_tc_ht_cleanup(&rpriv->tc_ht);
 err_ht_init:
+       if (rpriv->rep->vport == MLX5_VPORT_UPLINK)
+               mlx5e_cleanup_uplink_rep_tx(rpriv);
+err_init_tx:
        mlx5e_destroy_tises(priv);
        return err;
 }
 
-static void mlx5e_cleanup_uplink_rep_tx(struct mlx5e_rep_priv *rpriv)
-{
-       mlx5e_rep_tc_netdevice_event_unregister(rpriv);
-       mlx5e_rep_bond_cleanup(rpriv);
-       mlx5e_rep_tc_cleanup(rpriv);
-}
-
 static void mlx5e_cleanup_rep_tx(struct mlx5e_priv *priv)
 {
        struct mlx5e_rep_priv *rpriv = priv->ppriv;
 
-       mlx5e_destroy_tises(priv);
+       mlx5e_tc_ht_cleanup(&rpriv->tc_ht);
 
        if (rpriv->rep->vport == MLX5_VPORT_UPLINK)
                mlx5e_cleanup_uplink_rep_tx(rpriv);
 
-       mlx5e_tc_ht_cleanup(&rpriv->tc_ht);
+       mlx5e_destroy_tises(priv);
 }
 
 static void mlx5e_rep_enable(struct mlx5e_priv *priv)
index 217cac2..2ce3728 100644 (file)
@@ -2690,9 +2690,6 @@ static int mlx5_esw_offloads_devcom_event(int event,
 
        switch (event) {
        case ESW_OFFLOADS_DEVCOM_PAIR:
-               if (mlx5_get_next_phys_dev(esw->dev) != peer_esw->dev)
-                       break;
-
                if (mlx5_eswitch_vport_match_metadata_enabled(esw) !=
                    mlx5_eswitch_vport_match_metadata_enabled(peer_esw))
                        break;
@@ -2744,6 +2741,9 @@ static void esw_offloads_devcom_init(struct mlx5_eswitch *esw)
        if (!MLX5_CAP_ESW(esw->dev, merged_eswitch))
                return;
 
+       if (!mlx5_is_lag_supported(esw->dev))
+               return;
+
        mlx5_devcom_register_component(devcom,
                                       MLX5_DEVCOM_ESW_OFFLOADS,
                                       mlx5_esw_offloads_devcom_event,
@@ -2761,6 +2761,9 @@ static void esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
        if (!MLX5_CAP_ESW(esw->dev, merged_eswitch))
                return;
 
+       if (!mlx5_is_lag_supported(esw->dev))
+               return;
+
        mlx5_devcom_send_event(devcom, MLX5_DEVCOM_ESW_OFFLOADS,
                               ESW_OFFLOADS_DEVCOM_UNPAIR, esw);
 
index 14187e5..f1b908d 100644 (file)
@@ -1574,9 +1574,22 @@ static struct mlx5_flow_rule *find_flow_rule(struct fs_fte *fte,
        return NULL;
 }
 
-static bool check_conflicting_actions(u32 action1, u32 action2)
+static bool check_conflicting_actions_vlan(const struct mlx5_fs_vlan *vlan0,
+                                          const struct mlx5_fs_vlan *vlan1)
 {
-       u32 xored_actions = action1 ^ action2;
+       return vlan0->ethtype != vlan1->ethtype ||
+              vlan0->vid != vlan1->vid ||
+              vlan0->prio != vlan1->prio;
+}
+
+static bool check_conflicting_actions(const struct mlx5_flow_act *act1,
+                                     const struct mlx5_flow_act *act2)
+{
+       u32 action1 = act1->action;
+       u32 action2 = act2->action;
+       u32 xored_actions;
+
+       xored_actions = action1 ^ action2;
 
        /* if one rule only wants to count, it's ok */
        if (action1 == MLX5_FLOW_CONTEXT_ACTION_COUNT ||
@@ -1593,6 +1606,22 @@ static bool check_conflicting_actions(u32 action1, u32 action2)
                             MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2))
                return true;
 
+       if (action1 & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT &&
+           act1->pkt_reformat != act2->pkt_reformat)
+               return true;
+
+       if (action1 & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
+           act1->modify_hdr != act2->modify_hdr)
+               return true;
+
+       if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH &&
+           check_conflicting_actions_vlan(&act1->vlan[0], &act2->vlan[0]))
+               return true;
+
+       if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2 &&
+           check_conflicting_actions_vlan(&act1->vlan[1], &act2->vlan[1]))
+               return true;
+
        return false;
 }
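The conflict test above XORs the two action bitmasks and flags a conflict when a forwarding-relevant bit differs, with a carve-out for rules that only count; the patch extends it to also compare the attached reformat/modify-header/VLAN data. A minimal sketch of the bitmask part, with illustrative flag values rather than the real `MLX5_FLOW_CONTEXT_ACTION_*` encoding:

```c
#include <assert.h>
#include <stdint.h>

#define ACT_COUNT 0x1
#define ACT_FWD   0x2
#define ACT_DROP  0x4

/* Two entries conflict when their action masks differ in a
 * forwarding-relevant bit, unless one of them only counts. */
static int actions_conflict(uint32_t a1, uint32_t a2)
{
	uint32_t xored = a1 ^ a2;

	if (a1 == ACT_COUNT || a2 == ACT_COUNT)
		return 0; /* count-only rules never conflict */
	return (xored & (ACT_FWD | ACT_DROP)) != 0;
}
```

The patch's point is that an identical action *mask* is no longer enough: two entries can both carry `MOD_HDR` yet point at different `modify_hdr` objects, so the attached data must be compared too.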
 
@@ -1600,7 +1629,7 @@ static int check_conflicting_ftes(struct fs_fte *fte,
                                  const struct mlx5_flow_context *flow_context,
                                  const struct mlx5_flow_act *flow_act)
 {
-       if (check_conflicting_actions(flow_act->action, fte->action.action)) {
+       if (check_conflicting_actions(flow_act, &fte->action)) {
                mlx5_core_warn(get_dev(&fte->node),
                               "Found two FTEs with conflicting actions\n");
                return -EEXIST;
index 552b6e2..2a8fc54 100644 (file)
@@ -783,7 +783,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
 {
        struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
        struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev;
-       struct lag_tracker tracker;
+       struct lag_tracker tracker = { };
        bool do_bond, roce_lag;
        int err;
        int i;
index 72f70fa..c81b173 100644 (file)
@@ -74,6 +74,16 @@ struct mlx5_lag {
        struct lag_mpesw          lag_mpesw;
 };
 
+static inline bool mlx5_is_lag_supported(struct mlx5_core_dev *dev)
+{
+       if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
+           !MLX5_CAP_GEN(dev, lag_master) ||
+           MLX5_CAP_GEN(dev, num_lag_ports) < 2 ||
+           MLX5_CAP_GEN(dev, num_lag_ports) > MLX5_MAX_PORTS)
+               return false;
+       return true;
+}
+
 static inline struct mlx5_lag *
 mlx5_lag_dev(struct mlx5_core_dev *dev)
 {
index 484cb1e..9cc7afe 100644 (file)
@@ -209,7 +209,6 @@ int mlx5_attach_device(struct mlx5_core_dev *dev);
 void mlx5_detach_device(struct mlx5_core_dev *dev);
 int mlx5_register_device(struct mlx5_core_dev *dev);
 void mlx5_unregister_device(struct mlx5_core_dev *dev);
-struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev);
 struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev);
 void mlx5_dev_list_lock(void);
 void mlx5_dev_list_unlock(void);
index 0147de4..b456e81 100644 (file)
@@ -674,9 +674,9 @@ nfp_fl_set_ip6_hop_limit_flow_label(u32 off, __be32 exact, __be32 mask,
                                            fl_hl_mask->hop_limit;
                break;
        case round_down(offsetof(struct ipv6hdr, flow_lbl), 4):
-               if (mask & ~IPV6_FLOW_LABEL_MASK ||
-                   exact & ~IPV6_FLOW_LABEL_MASK) {
-                       NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit IPv6 flow label action");
+               if (mask & ~IPV6_FLOWINFO_MASK ||
+                   exact & ~IPV6_FLOWINFO_MASK) {
+                       NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit IPv6 flow info action");
                        return -EOPNOTSUPP;
                }
 
index 68e8a2f..2df2af1 100644 (file)
@@ -96,8 +96,6 @@
 #define NFP_FL_PUSH_VLAN_PRIO          GENMASK(15, 13)
 #define NFP_FL_PUSH_VLAN_VID           GENMASK(11, 0)
 
-#define IPV6_FLOW_LABEL_MASK           cpu_to_be32(0x000fffff)
-
 /* LAG ports */
 #define NFP_FL_LAG_OUT                 0xC0DE0000
 
index 443a5d6..7c31a46 100644 (file)
@@ -507,6 +507,11 @@ nfp_fl_calc_key_layers_sz(struct nfp_fl_key_ls in_key_ls, uint16_t *map)
                key_size += sizeof(struct nfp_flower_ipv6);
        }
 
+       if (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_QINQ) {
+               map[FLOW_PAY_QINQ] = key_size;
+               key_size += sizeof(struct nfp_flower_vlan);
+       }
+
        if (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_GRE) {
                map[FLOW_PAY_GRE] = key_size;
                if (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_TUN_IPV6)
@@ -515,11 +520,6 @@ nfp_fl_calc_key_layers_sz(struct nfp_fl_key_ls in_key_ls, uint16_t *map)
                        key_size += sizeof(struct nfp_flower_ipv4_gre_tun);
        }
 
-       if (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_QINQ) {
-               map[FLOW_PAY_QINQ] = key_size;
-               key_size += sizeof(struct nfp_flower_vlan);
-       }
-
        if ((in_key_ls.key_layer & NFP_FLOWER_LAYER_VXLAN) ||
            (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_GENEVE)) {
                map[FLOW_PAY_UDP_TUN] = key_size;
@@ -758,6 +758,17 @@ static int nfp_fl_ct_add_offload(struct nfp_fl_nft_tc_merge *m_entry)
                }
        }
 
+       if (NFP_FLOWER_LAYER2_QINQ & key_layer.key_layer_two) {
+               offset = key_map[FLOW_PAY_QINQ];
+               key = kdata + offset;
+               msk = mdata + offset;
+               for (i = 0; i < _CT_TYPE_MAX; i++) {
+                       nfp_flower_compile_vlan((struct nfp_flower_vlan *)key,
+                                               (struct nfp_flower_vlan *)msk,
+                                               rules[i]);
+               }
+       }
+
        if (key_layer.key_layer_two & NFP_FLOWER_LAYER2_GRE) {
                offset = key_map[FLOW_PAY_GRE];
                key = kdata + offset;
@@ -798,17 +809,6 @@ static int nfp_fl_ct_add_offload(struct nfp_fl_nft_tc_merge *m_entry)
                }
        }
 
-       if (NFP_FLOWER_LAYER2_QINQ & key_layer.key_layer_two) {
-               offset = key_map[FLOW_PAY_QINQ];
-               key = kdata + offset;
-               msk = mdata + offset;
-               for (i = 0; i < _CT_TYPE_MAX; i++) {
-                       nfp_flower_compile_vlan((struct nfp_flower_vlan *)key,
-                                               (struct nfp_flower_vlan *)msk,
-                                               rules[i]);
-               }
-       }
-
        if (key_layer.key_layer & NFP_FLOWER_LAYER_VXLAN ||
            key_layer.key_layer_two & NFP_FLOWER_LAYER2_GENEVE) {
                offset = key_map[FLOW_PAY_UDP_TUN];
index 193a167..e014301 100644 (file)
@@ -625,6 +625,14 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
                msk += sizeof(struct nfp_flower_ipv6);
        }
 
+       if (NFP_FLOWER_LAYER2_QINQ & key_ls->key_layer_two) {
+               nfp_flower_compile_vlan((struct nfp_flower_vlan *)ext,
+                                       (struct nfp_flower_vlan *)msk,
+                                       rule);
+               ext += sizeof(struct nfp_flower_vlan);
+               msk += sizeof(struct nfp_flower_vlan);
+       }
+
        if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_GRE) {
                if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_TUN_IPV6) {
                        struct nfp_flower_ipv6_gre_tun *gre_match;
@@ -660,14 +668,6 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
                }
        }
 
-       if (NFP_FLOWER_LAYER2_QINQ & key_ls->key_layer_two) {
-               nfp_flower_compile_vlan((struct nfp_flower_vlan *)ext,
-                                       (struct nfp_flower_vlan *)msk,
-                                       rule);
-               ext += sizeof(struct nfp_flower_vlan);
-               msk += sizeof(struct nfp_flower_vlan);
-       }
-
        if (key_ls->key_layer & NFP_FLOWER_LAYER_VXLAN ||
            key_ls->key_layer_two & NFP_FLOWER_LAYER2_GENEVE) {
                if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_TUN_IPV6) {
index 7db56ab..f9410d5 100644 (file)
@@ -282,7 +282,7 @@ netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev)
        txd = &tx_ring->txds[wr_idx];
        txd->offset_eop = (nr_frags ? 0 : NFD3_DESC_TX_EOP) | md_bytes;
        txd->dma_len = cpu_to_le16(skb_headlen(skb));
-       nfp_desc_set_dma_addr(txd, dma_addr);
+       nfp_desc_set_dma_addr_40b(txd, dma_addr);
        txd->data_len = cpu_to_le16(skb->len);
 
        txd->flags = 0;
@@ -320,7 +320,7 @@ netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev)
 
                        txd = &tx_ring->txds[wr_idx];
                        txd->dma_len = cpu_to_le16(fsize);
-                       nfp_desc_set_dma_addr(txd, dma_addr);
+                       nfp_desc_set_dma_addr_40b(txd, dma_addr);
                        txd->offset_eop = md_bytes |
                                ((f == nr_frags - 1) ? NFD3_DESC_TX_EOP : 0);
                        txd->vals8[1] = second_half;
@@ -562,8 +562,12 @@ nfp_nfd3_rx_give_one(const struct nfp_net_dp *dp,
        /* Fill freelist descriptor */
        rx_ring->rxds[wr_idx].fld.reserved = 0;
        rx_ring->rxds[wr_idx].fld.meta_len_dd = 0;
-       nfp_desc_set_dma_addr(&rx_ring->rxds[wr_idx].fld,
-                             dma_addr + dp->rx_dma_off);
+       /* DMA address is expanded to 48-bit width in freelist for NFP3800,
+        * so the *_48b macro is used accordingly; it's also OK to fill
+        * a 40-bit address since the top 8 bits are set to 0.
+        */
+       nfp_desc_set_dma_addr_48b(&rx_ring->rxds[wr_idx].fld,
+                                 dma_addr + dp->rx_dma_off);
 
        rx_ring->wr_p++;
        if (!(rx_ring->wr_p % NFP_NET_FL_BATCH)) {
@@ -817,7 +821,7 @@ nfp_nfd3_tx_xdp_buf(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring,
        txd = &tx_ring->txds[wr_idx];
        txd->offset_eop = NFD3_DESC_TX_EOP;
        txd->dma_len = cpu_to_le16(pkt_len);
-       nfp_desc_set_dma_addr(txd, rxbuf->dma_addr + dma_off);
+       nfp_desc_set_dma_addr_40b(txd, rxbuf->dma_addr + dma_off);
        txd->data_len = cpu_to_le16(pkt_len);
 
        txd->flags = 0;
@@ -1193,7 +1197,7 @@ nfp_nfd3_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
        txd = &tx_ring->txds[wr_idx];
        txd->offset_eop = meta_len | NFD3_DESC_TX_EOP;
        txd->dma_len = cpu_to_le16(skb_headlen(skb));
-       nfp_desc_set_dma_addr(txd, dma_addr);
+       nfp_desc_set_dma_addr_40b(txd, dma_addr);
        txd->data_len = cpu_to_le16(skb->len);
 
        txd->flags = 0;
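The `_40b`/`_48b` split above exists because NFD3 descriptors carry a 40-bit DMA address while NFDK (NFP3800) descriptors carry 48 bits, matching the per-datapath `dma_mask`. A hedged sketch of how a 48-bit address might be split into low/high descriptor words (the struct layout here is hypothetical, not the actual nfp descriptor format):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative descriptor carrying a 48-bit DMA address as a 32-bit low
 * word plus a 16-bit high word. */
struct desc48 {
	uint32_t addr_lo;
	uint16_t addr_hi;
};

static void desc_set_48b(struct desc48 *d, uint64_t dma)
{
	d->addr_lo = (uint32_t)dma;
	d->addr_hi = (uint16_t)(dma >> 32);
}
```

A 40-bit address written through the same helper simply leaves the top 8 bits of `addr_hi` zero, which is why the freelist path can use the 48-bit setter for both generations.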
index 47604d5..f31eabd 100644 (file)
@@ -260,6 +260,7 @@ const struct nfp_dp_ops nfp_nfd3_ops = {
        .version                = NFP_NFD_VER_NFD3,
        .tx_min_desc_per_pkt    = 1,
        .cap_mask               = NFP_NFD3_CFG_CTRL_SUPPORTED,
+       .dma_mask               = DMA_BIT_MASK(40),
        .poll                   = nfp_nfd3_poll,
        .xsk_poll               = nfp_nfd3_xsk_poll,
        .ctrl_poll              = nfp_nfd3_ctrl_poll,
index c16c4b4..454fea4 100644 (file)
@@ -40,7 +40,7 @@ nfp_nfd3_xsk_tx_xdp(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
        txd = &tx_ring->txds[wr_idx];
        txd->offset_eop = NFD3_DESC_TX_EOP;
        txd->dma_len = cpu_to_le16(pkt_len);
-       nfp_desc_set_dma_addr(txd, xrxbuf->dma_addr + pkt_off);
+       nfp_desc_set_dma_addr_40b(txd, xrxbuf->dma_addr + pkt_off);
        txd->data_len = cpu_to_le16(pkt_len);
 
        txd->flags = 0;
@@ -361,10 +361,8 @@ static void nfp_nfd3_xsk_tx(struct nfp_net_tx_ring *tx_ring)
 
                        /* Build TX descriptor. */
                        txd = &tx_ring->txds[wr_idx];
-                       nfp_desc_set_dma_addr(txd,
-                                             xsk_buff_raw_get_dma(xsk_pool,
-                                                                  desc[i].addr
-                                                                  ));
+                       nfp_desc_set_dma_addr_40b(txd,
+                                                 xsk_buff_raw_get_dma(xsk_pool, desc[i].addr));
                        txd->offset_eop = NFD3_DESC_TX_EOP;
                        txd->dma_len = cpu_to_le16(desc[i].len);
                        txd->data_len = cpu_to_le16(desc[i].len);
index e509d6d..300637e 100644 (file)
@@ -314,7 +314,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev)
                    FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
        txd->dma_len_type = cpu_to_le16(dlen_type);
-       nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
+       nfp_desc_set_dma_addr_48b(txd, dma_addr);
 
        /* starts at bit 0 */
        BUILD_BUG_ON(!(NFDK_DESC_TX_DMA_LEN_HEAD & 1));
@@ -339,7 +339,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev)
                        dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
 
                        txd->dma_len_type = cpu_to_le16(dlen_type);
-                       nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
+                       nfp_desc_set_dma_addr_48b(txd, dma_addr);
 
                        dma_len -= dlen_type;
                        dma_addr += dlen_type + 1;
@@ -595,8 +595,8 @@ nfp_nfdk_rx_give_one(const struct nfp_net_dp *dp,
        /* Fill freelist descriptor */
        rx_ring->rxds[wr_idx].fld.reserved = 0;
        rx_ring->rxds[wr_idx].fld.meta_len_dd = 0;
-       nfp_desc_set_dma_addr(&rx_ring->rxds[wr_idx].fld,
-                             dma_addr + dp->rx_dma_off);
+       nfp_desc_set_dma_addr_48b(&rx_ring->rxds[wr_idx].fld,
+                                 dma_addr + dp->rx_dma_off);
 
        rx_ring->wr_p++;
        if (!(rx_ring->wr_p % NFP_NET_FL_BATCH)) {
@@ -929,7 +929,7 @@ nfp_nfdk_tx_xdp_buf(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring,
                    FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
        txd->dma_len_type = cpu_to_le16(dlen_type);
-       nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
+       nfp_desc_set_dma_addr_48b(txd, dma_addr);
 
        tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
        dma_len -= tmp_dlen;
@@ -940,7 +940,7 @@ nfp_nfdk_tx_xdp_buf(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring,
                dma_len -= 1;
                dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
                txd->dma_len_type = cpu_to_le16(dlen_type);
-               nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
+               nfp_desc_set_dma_addr_48b(txd, dma_addr);
 
                dlen_type &= NFDK_DESC_TX_DMA_LEN;
                dma_len -= dlen_type;
@@ -1332,7 +1332,7 @@ nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
                    FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
        txd->dma_len_type = cpu_to_le16(dlen_type);
-       nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
+       nfp_desc_set_dma_addr_48b(txd, dma_addr);
 
        tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
        dma_len -= tmp_dlen;
@@ -1343,7 +1343,7 @@ nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
                dma_len -= 1;
                dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
                txd->dma_len_type = cpu_to_le16(dlen_type);
-               nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
+               nfp_desc_set_dma_addr_48b(txd, dma_addr);
 
                dlen_type &= NFDK_DESC_TX_DMA_LEN;
                dma_len -= dlen_type;
index 301f111..f4d94ae 100644
@@ -181,6 +181,7 @@ const struct nfp_dp_ops nfp_nfdk_ops = {
        .version                = NFP_NFD_VER_NFDK,
        .tx_min_desc_per_pkt    = NFDK_TX_DESC_PER_SIMPLE_PKT,
        .cap_mask               = NFP_NFDK_CFG_CTRL_SUPPORTED,
+       .dma_mask               = DMA_BIT_MASK(48),
        .poll                   = nfp_nfdk_poll,
        .ctrl_poll              = nfp_nfdk_ctrl_poll,
        .xmit                   = nfp_nfdk_tx,
index 3dd3a92..b07cea8 100644
@@ -115,7 +115,7 @@ struct nfp_nfdk_tx_buf;
 #define D_IDX(ring, idx)       ((idx) & ((ring)->cnt - 1))
 
 /* Convenience macro for writing dma address into RX/TX descriptors */
-#define nfp_desc_set_dma_addr(desc, dma_addr)                          \
+#define nfp_desc_set_dma_addr_40b(desc, dma_addr)                      \
        do {                                                            \
                __typeof__(desc) __d = (desc);                          \
                dma_addr_t __addr = (dma_addr);                         \
@@ -124,13 +124,13 @@ struct nfp_nfdk_tx_buf;
                __d->dma_addr_hi = upper_32_bits(__addr) & 0xff;        \
        } while (0)
 
-#define nfp_nfdk_tx_desc_set_dma_addr(desc, dma_addr)                         \
-       do {                                                                   \
-               __typeof__(desc) __d = (desc);                                 \
-               dma_addr_t __addr = (dma_addr);                                \
-                                                                              \
-               __d->dma_addr_hi = cpu_to_le16(upper_32_bits(__addr) & 0xff);  \
-               __d->dma_addr_lo = cpu_to_le32(lower_32_bits(__addr));         \
+#define nfp_desc_set_dma_addr_48b(desc, dma_addr)                      \
+       do {                                                            \
+               __typeof__(desc) __d = (desc);                          \
+               dma_addr_t __addr = (dma_addr);                         \
+                                                                       \
+               __d->dma_addr_hi = cpu_to_le16(upper_32_bits(__addr));  \
+               __d->dma_addr_lo = cpu_to_le32(lower_32_bits(__addr));  \
        } while (0)
 
 /**
@@ -225,8 +225,8 @@ struct nfp_net_tx_ring {
 struct nfp_net_rx_desc {
        union {
                struct {
-                       u8 dma_addr_hi; /* High bits of the buf address */
-                       __le16 reserved; /* Must be zero */
+                       __le16 dma_addr_hi; /* High bits of the buf address */
+                       u8 reserved; /* Must be zero */
                        u8 meta_len_dd; /* Must be zero */
 
                        __le32 dma_addr_lo; /* Low bits of the buffer address */
index 4e56a99..57f284e 100644
@@ -2040,6 +2040,7 @@ nfp_net_alloc(struct pci_dev *pdev, const struct nfp_dev_info *dev_info,
              void __iomem *ctrl_bar, bool needs_netdev,
              unsigned int max_tx_rings, unsigned int max_rx_rings)
 {
+       u64 dma_mask = dma_get_mask(&pdev->dev);
        struct nfp_net *nn;
        int err;
 
@@ -2085,6 +2086,14 @@ nfp_net_alloc(struct pci_dev *pdev, const struct nfp_dev_info *dev_info,
                goto err_free_nn;
        }
 
+       if ((dma_mask & nn->dp.ops->dma_mask) != dma_mask) {
+               dev_err(&pdev->dev,
+                       "DMA mask of loaded firmware: %llx, required DMA mask: %llx\n",
+                       nn->dp.ops->dma_mask, dma_mask);
+               err = -EINVAL;
+               goto err_free_nn;
+       }
+
        nn->max_tx_rings = max_tx_rings;
        nn->max_rx_rings = max_rx_rings;
 
index c934cc2..83becb3 100644
@@ -117,6 +117,7 @@ enum nfp_nfd_version {
  * @version:                   Indicate dp type
  * @tx_min_desc_per_pkt:       Minimal TX descs needed for each packet
  * @cap_mask:                  Mask of supported features
+ * @dma_mask:                  DMA addressing capability
  * @poll:                      Napi poll for normal rx/tx
  * @xsk_poll:                  Napi poll when xsk is enabled
  * @ctrl_poll:                 Tasklet poll for ctrl rx/tx
@@ -134,6 +135,7 @@ struct nfp_dp_ops {
        enum nfp_nfd_version version;
        unsigned int tx_min_desc_per_pkt;
        u32 cap_mask;
+       u64 dma_mask;
 
        int (*poll)(struct napi_struct *napi, int budget);
        int (*xsk_poll)(struct napi_struct *napi, int budget);
index 54af309..6eeeb0f 100644
@@ -15,7 +15,7 @@
 #include "nfp_net_sriov.h"
 
 static int
-nfp_net_sriov_check(struct nfp_app *app, int vf, u16 cap, const char *msg)
+nfp_net_sriov_check(struct nfp_app *app, int vf, u16 cap, const char *msg, bool warn)
 {
        u16 cap_vf;
 
@@ -24,12 +24,14 @@ nfp_net_sriov_check(struct nfp_app *app, int vf, u16 cap, const char *msg)
 
        cap_vf = readw(app->pf->vfcfg_tbl2 + NFP_NET_VF_CFG_MB_CAP);
        if ((cap_vf & cap) != cap) {
-               nfp_warn(app->pf->cpp, "ndo_set_vf_%s not supported\n", msg);
+               if (warn)
+                       nfp_warn(app->pf->cpp, "ndo_set_vf_%s not supported\n", msg);
                return -EOPNOTSUPP;
        }
 
        if (vf < 0 || vf >= app->pf->num_vfs) {
-               nfp_warn(app->pf->cpp, "invalid VF id %d\n", vf);
+               if (warn)
+                       nfp_warn(app->pf->cpp, "invalid VF id %d\n", vf);
                return -EINVAL;
        }
 
@@ -65,7 +67,7 @@ int nfp_app_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
        unsigned int vf_offset;
        int err;
 
-       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_MAC, "mac");
+       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_MAC, "mac", true);
        if (err)
                return err;
 
@@ -101,7 +103,7 @@ int nfp_app_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos,
        u32 vlan_tag;
        int err;
 
-       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_VLAN, "vlan");
+       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_VLAN, "vlan", true);
        if (err)
                return err;
 
@@ -115,7 +117,7 @@ int nfp_app_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos,
        }
 
        /* Check if fw supports or not */
-       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_VLAN_PROTO, "vlan_proto");
+       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_VLAN_PROTO, "vlan_proto", true);
        if (err)
                is_proto_sup = false;
 
@@ -149,7 +151,7 @@ int nfp_app_set_vf_rate(struct net_device *netdev, int vf,
        u32 vf_offset, ratevalue;
        int err;
 
-       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_RATE, "rate");
+       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_RATE, "rate", true);
        if (err)
                return err;
 
@@ -181,7 +183,7 @@ int nfp_app_set_vf_spoofchk(struct net_device *netdev, int vf, bool enable)
        int err;
 
        err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_SPOOF,
-                                 "spoofchk");
+                                 "spoofchk", true);
        if (err)
                return err;
 
@@ -205,7 +207,7 @@ int nfp_app_set_vf_trust(struct net_device *netdev, int vf, bool enable)
        int err;
 
        err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_TRUST,
-                                 "trust");
+                                 "trust", true);
        if (err)
                return err;
 
@@ -230,7 +232,7 @@ int nfp_app_set_vf_link_state(struct net_device *netdev, int vf,
        int err;
 
        err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_LINK_STATE,
-                                 "link_state");
+                                 "link_state", true);
        if (err)
                return err;
 
@@ -265,7 +267,7 @@ int nfp_app_get_vf_config(struct net_device *netdev, int vf,
        u8 flags;
        int err;
 
-       err = nfp_net_sriov_check(app, vf, 0, "");
+       err = nfp_net_sriov_check(app, vf, 0, "", true);
        if (err)
                return err;
 
@@ -285,13 +287,13 @@ int nfp_app_get_vf_config(struct net_device *netdev, int vf,
 
        ivi->vlan = FIELD_GET(NFP_NET_VF_CFG_VLAN_VID, vlan_tag);
        ivi->qos = FIELD_GET(NFP_NET_VF_CFG_VLAN_QOS, vlan_tag);
-       if (!nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_VLAN_PROTO, "vlan_proto"))
+       if (!nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_VLAN_PROTO, "vlan_proto", false))
                ivi->vlan_proto = htons(FIELD_GET(NFP_NET_VF_CFG_VLAN_PROT, vlan_tag));
        ivi->spoofchk = FIELD_GET(NFP_NET_VF_CFG_CTRL_SPOOF, flags);
        ivi->trusted = FIELD_GET(NFP_NET_VF_CFG_CTRL_TRUST, flags);
        ivi->linkstate = FIELD_GET(NFP_NET_VF_CFG_CTRL_LINK_STATE, flags);
 
-       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_RATE, "rate");
+       err = nfp_net_sriov_check(app, vf, NFP_NET_VF_CFG_MB_CAP_RATE, "rate", false);
        if (!err) {
                rate = readl(app->pf->vfcfg_tbl2 + vf_offset +
                             NFP_NET_VF_CFG_RATE);
index 8682944..aea507a 100644
@@ -70,8 +70,12 @@ void nfp_net_xsk_rx_ring_fill_freelist(struct nfp_net_rx_ring *rx_ring)
 
                nfp_net_xsk_rx_bufs_stash(rx_ring, wr_idx, xdp);
 
-               nfp_desc_set_dma_addr(&rx_ring->rxds[wr_idx].fld,
-                                     rx_ring->xsk_rxbufs[wr_idx].dma_addr);
+               /* The freelist DMA address is expanded to 48-bit width on
+                * NFP3800, so the *_48b macro is used here. It is also fine
+                * to fill in a 40-bit address, since the top 8 bits are set
+                * to 0.
+                */
+               nfp_desc_set_dma_addr_48b(&rx_ring->rxds[wr_idx].fld,
+                                         rx_ring->xsk_rxbufs[wr_idx].dma_addr);
 
                rx_ring->wr_p++;
                wr_ptr_add++;
index afab6f0..6ad43c7 100644
@@ -4,7 +4,6 @@
 #ifndef NFP_CRC32_H
 #define NFP_CRC32_H
 
-#include <linux/kernel.h>
 #include <linux/crc32.h>
 
 /**
index 28384d6..0725b51 100644
@@ -9,7 +9,7 @@
 
 const struct nfp_dev_info nfp_dev_info[NFP_DEV_CNT] = {
        [NFP_DEV_NFP3800] = {
-               .dma_mask               = DMA_BIT_MASK(40),
+               .dma_mask               = DMA_BIT_MASK(48),
                .qc_idx_mask            = GENMASK(8, 0),
                .qc_addr_offset         = 0x400000,
                .min_qc_size            = 512,
@@ -21,7 +21,7 @@ const struct nfp_dev_info nfp_dev_info[NFP_DEV_CNT] = {
                .qc_area_sz             = 0x100000,
        },
        [NFP_DEV_NFP3800_VF] = {
-               .dma_mask               = DMA_BIT_MASK(40),
+               .dma_mask               = DMA_BIT_MASK(48),
                .qc_idx_mask            = GENMASK(8, 0),
                .qc_addr_offset         = 0,
                .min_qc_size            = 512,
index e90fa97..8dd7aa0 100644
@@ -1869,8 +1869,7 @@ int qlcnic_sriov_set_vf_tx_rate(struct net_device *netdev, int vf,
        if (!min_tx_rate)
                min_tx_rate = QLC_VF_MIN_TX_RATE;
 
-       if (max_tx_rate &&
-           (max_tx_rate >= 10000 || max_tx_rate < min_tx_rate)) {
+       if (max_tx_rate && max_tx_rate >= 10000) {
                netdev_err(netdev,
                           "Invalid max Tx rate, allowed range is [%d - %d]",
                           min_tx_rate, QLC_VF_MAX_TX_RATE);
@@ -1880,8 +1879,7 @@ int qlcnic_sriov_set_vf_tx_rate(struct net_device *netdev, int vf,
        if (!max_tx_rate)
                max_tx_rate = 10000;
 
-       if (min_tx_rate &&
-           (min_tx_rate > max_tx_rate || min_tx_rate < QLC_VF_MIN_TX_RATE)) {
+       if (min_tx_rate && min_tx_rate < QLC_VF_MIN_TX_RATE) {
                netdev_err(netdev,
                           "Invalid min Tx rate, allowed range is [%d - %d]",
                           QLC_VF_MIN_TX_RATE, max_tx_rate);
index f9f8093..38fe77d 100644
@@ -1072,13 +1072,11 @@ static int intel_eth_pci_probe(struct pci_dev *pdev,
 
        ret = stmmac_dvr_probe(&pdev->dev, plat, &res);
        if (ret) {
-               goto err_dvr_probe;
+               goto err_alloc_irq;
        }
 
        return 0;
 
-err_dvr_probe:
-       pci_free_irq_vectors(pdev);
 err_alloc_irq:
        clk_disable_unprepare(plat->stmmac_clk);
        clk_unregister_fixed_rate(plat->stmmac_clk);
index 9cfe843..5b446d2 100644
@@ -823,7 +823,7 @@ static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
 
        /* Now update the scratch registers for GPI protocol */
        gpi = &scr.gpi;
-       gpi->max_outstanding_tre = gsi_channel_trans_tre_max(gsi, channel_id) *
+       gpi->max_outstanding_tre = channel->trans_tre_max *
                                        GSI_RING_ELEMENT_SIZE;
        gpi->outstanding_threshold = 2 * GSI_RING_ELEMENT_SIZE;
 
@@ -991,36 +991,22 @@ void gsi_resume(struct gsi *gsi)
        enable_irq(gsi->irq);
 }
 
-/**
- * gsi_channel_tx_queued() - Report queued TX transfers for a channel
- * @channel:   Channel for which to report
- *
- * Report to the network stack the number of bytes and transactions that
- * have been queued to hardware since last call.  This and the next function
- * supply information used by the network stack for throttling.
- *
- * For each channel we track the number of transactions used and bytes of
- * data those transactions represent.  We also track what those values are
- * each time this function is called.  Subtracting the two tells us
- * the number of bytes and transactions that have been added between
- * successive calls.
- *
- * Calling this each time we ring the channel doorbell allows us to
- * provide accurate information to the network stack about how much
- * work we've given the hardware at any point in time.
- */
-void gsi_channel_tx_queued(struct gsi_channel *channel)
+void gsi_trans_tx_queued(struct gsi_trans *trans)
 {
+       u32 channel_id = trans->channel_id;
+       struct gsi *gsi = trans->gsi;
+       struct gsi_channel *channel;
        u32 trans_count;
        u32 byte_count;
 
+       channel = &gsi->channel[channel_id];
+
        byte_count = channel->byte_count - channel->queued_byte_count;
        trans_count = channel->trans_count - channel->queued_trans_count;
        channel->queued_byte_count = channel->byte_count;
        channel->queued_trans_count = channel->trans_count;
 
-       ipa_gsi_channel_tx_queued(channel->gsi, gsi_channel_id(channel),
-                                 trans_count, byte_count);
+       ipa_gsi_channel_tx_queued(gsi, channel_id, trans_count, byte_count);
 }
 
 /**
@@ -1327,17 +1313,29 @@ static int gsi_irq_init(struct gsi *gsi, struct platform_device *pdev)
 }
 
 /* Return the transaction associated with a transfer completion event */
-static struct gsi_trans *gsi_event_trans(struct gsi_channel *channel,
-                                        struct gsi_event *event)
+static struct gsi_trans *
+gsi_event_trans(struct gsi *gsi, struct gsi_event *event)
 {
+       u32 channel_id = event->chid;
+       struct gsi_channel *channel;
+       struct gsi_trans *trans;
        u32 tre_offset;
        u32 tre_index;
 
+       channel = &gsi->channel[channel_id];
+       if (WARN(!channel->gsi, "event has bad channel %u\n", channel_id))
+               return NULL;
+
        /* Event xfer_ptr records the TRE it's associated with */
        tre_offset = lower_32_bits(le64_to_cpu(event->xfer_ptr));
        tre_index = gsi_ring_index(&channel->tre_ring, tre_offset);
 
-       return gsi_channel_trans_mapped(channel, tre_index);
+       trans = gsi_channel_trans_mapped(channel, tre_index);
+
+       if (WARN(!trans, "channel %u event with no transaction\n", channel_id))
+               return NULL;
+
+       return trans;
 }
 
 /**
@@ -1381,7 +1379,9 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
         */
        old_index = ring->index;
        event = gsi_ring_virt(ring, old_index);
-       trans = gsi_event_trans(channel, event);
+       trans = gsi_event_trans(channel->gsi, event);
+       if (!trans)
+               return;
 
        /* Compute the number of events to process before we wrap,
         * and determine when we'll be done processing events.
@@ -1493,7 +1493,9 @@ static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
                return NULL;
 
        /* Get the transaction for the latest completed event. */
-       trans = gsi_event_trans(channel, gsi_ring_virt(ring, index - 1));
+       trans = gsi_event_trans(gsi, gsi_ring_virt(ring, index - 1));
+       if (!trans)
+               return NULL;
 
        /* For RX channels, update each completed transaction with the number
         * of bytes that were actually received.  For TX channels, report
@@ -2001,9 +2003,10 @@ static void gsi_channel_evt_ring_exit(struct gsi_channel *channel)
        gsi_evt_ring_id_free(gsi, evt_ring_id);
 }
 
-static bool gsi_channel_data_valid(struct gsi *gsi,
+static bool gsi_channel_data_valid(struct gsi *gsi, bool command,
                                   const struct ipa_gsi_endpoint_data *data)
 {
+       const struct gsi_channel_data *channel_data;
        u32 channel_id = data->channel_id;
        struct device *dev = gsi->dev;
 
@@ -2019,10 +2022,24 @@ static bool gsi_channel_data_valid(struct gsi *gsi,
                return false;
        }
 
-       if (!data->channel.tlv_count ||
-           data->channel.tlv_count > GSI_TLV_MAX) {
+       if (command && !data->toward_ipa) {
+               dev_err(dev, "command channel %u is not TX\n", channel_id);
+               return false;
+       }
+
+       channel_data = &data->channel;
+
+       if (!channel_data->tlv_count ||
+           channel_data->tlv_count > GSI_TLV_MAX) {
                dev_err(dev, "channel %u bad tlv_count %u; must be 1..%u\n",
-                       channel_id, data->channel.tlv_count, GSI_TLV_MAX);
+                       channel_id, channel_data->tlv_count, GSI_TLV_MAX);
+               return false;
+       }
+
+       if (command && IPA_COMMAND_TRANS_TRE_MAX > channel_data->tlv_count) {
+               dev_err(dev, "command TRE max too big for channel %u (%u > %u)\n",
+                       channel_id, IPA_COMMAND_TRANS_TRE_MAX,
+                       channel_data->tlv_count);
                return false;
        }
 
@@ -2031,22 +2048,22 @@ static bool gsi_channel_data_valid(struct gsi *gsi,
         * gsi_channel_tre_max() is computed, tre_count has to be almost
         * twice the TLV FIFO size to satisfy this requirement.
         */
-       if (data->channel.tre_count < 2 * data->channel.tlv_count - 1) {
+       if (channel_data->tre_count < 2 * channel_data->tlv_count - 1) {
                dev_err(dev, "channel %u TLV count %u exceeds TRE count %u\n",
-                       channel_id, data->channel.tlv_count,
-                       data->channel.tre_count);
+                       channel_id, channel_data->tlv_count,
+                       channel_data->tre_count);
                return false;
        }
 
-       if (!is_power_of_2(data->channel.tre_count)) {
+       if (!is_power_of_2(channel_data->tre_count)) {
                dev_err(dev, "channel %u bad tre_count %u; not power of 2\n",
-                       channel_id, data->channel.tre_count);
+                       channel_id, channel_data->tre_count);
                return false;
        }
 
-       if (!is_power_of_2(data->channel.event_count)) {
+       if (!is_power_of_2(channel_data->event_count)) {
                dev_err(dev, "channel %u bad event_count %u; not power of 2\n",
-                       channel_id, data->channel.event_count);
+                       channel_id, channel_data->event_count);
                return false;
        }
 
@@ -2062,7 +2079,7 @@ static int gsi_channel_init_one(struct gsi *gsi,
        u32 tre_count;
        int ret;
 
-       if (!gsi_channel_data_valid(gsi, data))
+       if (!gsi_channel_data_valid(gsi, command, data))
                return -EINVAL;
 
        /* Worst case we need an event for every outstanding TRE */
@@ -2080,7 +2097,7 @@ static int gsi_channel_init_one(struct gsi *gsi,
        channel->gsi = gsi;
        channel->toward_ipa = data->toward_ipa;
        channel->command = command;
-       channel->tlv_count = data->channel.tlv_count;
+       channel->trans_tre_max = data->channel.tlv_count;
        channel->tre_count = tre_count;
        channel->event_count = data->channel.event_count;
 
@@ -2295,13 +2312,5 @@ u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id)
        struct gsi_channel *channel = &gsi->channel[channel_id];
 
        /* Hardware limit is channel->tre_count - 1 */
-       return channel->tre_count - (channel->tlv_count - 1);
-}
-
-/* Returns the maximum number of TREs in a single transaction for a channel */
-u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id)
-{
-       struct gsi_channel *channel = &gsi->channel[channel_id];
-
-       return channel->tlv_count;
+       return channel->tre_count - (channel->trans_tre_max - 1);
 }
index 5d66116..89dac7f 100644
@@ -110,7 +110,7 @@ struct gsi_channel {
        bool toward_ipa;
        bool command;                   /* AP command TX channel or not */
 
-       u8 tlv_count;                   /* # entries in TLV FIFO */
+       u8 trans_tre_max;               /* max TREs in a transaction */
        u16 tre_count;
        u16 event_count;
 
@@ -188,15 +188,6 @@ void gsi_teardown(struct gsi *gsi);
  */
 u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id);
 
-/**
- * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
- * @gsi:       GSI pointer
- * @channel_id:        Channel whose limit is to be returned
- *
- * Return:      The maximum TRE count per transaction on the channel
- */
-u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id);
-
 /**
  * gsi_channel_start() - Start an allocated GSI channel
  * @gsi:       GSI pointer
index ea333a2..56450a1 100644
@@ -105,14 +105,12 @@ void gsi_channel_doorbell(struct gsi_channel *channel);
 void *gsi_ring_virt(struct gsi_ring *ring, u32 index);
 
 /**
- * gsi_channel_tx_queued() - Report the number of bytes queued to hardware
- * @channel:   Channel whose bytes have been queued
+ * gsi_trans_tx_queued() - Report a queued TX channel transaction
+ * @trans:     Transaction being passed to hardware
  *
- * This arranges for the the number of transactions and bytes for
- * transfer that have been queued to hardware to be reported.  It
- * passes this information up the network stack so it can be used to
- * throttle transmissions.
+ * Report to the network stack that a TX transaction is being supplied
+ * to the hardware.
  */
-void gsi_channel_tx_queued(struct gsi_channel *channel);
+void gsi_trans_tx_queued(struct gsi_trans *trans);
 
 #endif /* _GSI_PRIVATE_H_ */
index 55f8fe7..278e467 100644
@@ -340,7 +340,7 @@ struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
        struct gsi_trans_info *trans_info;
        struct gsi_trans *trans;
 
-       if (WARN_ON(tre_count > gsi_channel_trans_tre_max(gsi, channel_id)))
+       if (WARN_ON(tre_count > channel->trans_tre_max))
                return NULL;
 
        trans_info = &channel->trans_info;
@@ -603,7 +603,7 @@ static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
        if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) {
                /* Report what we're handing off to hardware for TX channels */
                if (channel->toward_ipa)
-                       gsi_channel_tx_queued(channel);
+                       gsi_trans_tx_queued(trans);
                gsi_channel_doorbell(channel);
        }
 }
@@ -745,14 +745,10 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
         * element is used to fill a single TRE when the transaction is
         * committed.  So we need as many scatterlist elements as the
         * maximum number of TREs that can be outstanding.
-        *
-        * All TREs in a transaction must fit within the channel's TLV FIFO.
-        * A transaction on a channel can allocate as many TREs as that but
-        * no more.
         */
        ret = gsi_trans_pool_init(&trans_info->sg_pool,
                                  sizeof(struct scatterlist),
-                                 tre_max, channel->tlv_count);
+                                 tre_max, channel->trans_tre_max);
        if (ret)
                goto err_trans_pool_exit;
 
index e58cd44..6dea402 100644
@@ -353,13 +353,13 @@ int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
        /* This is as good a place as any to validate build constants */
        ipa_cmd_validate_build();
 
-       /* Even though command payloads are allocated one at a time,
-        * a single transaction can require up to tlv_count of them,
-        * so we treat them as if that many can be allocated at once.
+       /* Command payloads are allocated one at a time, but a single
+        * transaction can require up to the maximum supported by the
+        * channel; treat them as if they were allocated all at once.
         */
        return gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool,
                                       sizeof(union ipa_cmd_payload),
-                                      tre_max, channel->tlv_count);
+                                      tre_max, channel->trans_tre_max);
 }
 
 void ipa_cmd_pool_exit(struct gsi_channel *channel)
index d3b3255..66d2bfd 100644
@@ -1020,7 +1020,7 @@ int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb)
         * If not, see if we can linearize it before giving up.
         */
        nr_frags = skb_shinfo(skb)->nr_frags;
-       if (1 + nr_frags > endpoint->trans_tre_max) {
+       if (nr_frags > endpoint->skb_frag_max) {
                if (skb_linearize(skb))
                        return -E2BIG;
                nr_frags = 0;
@@ -1368,18 +1368,14 @@ static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
        }
 }
 
-/* Complete a TX transaction, command or from ipa_endpoint_skb_tx() */
-static void ipa_endpoint_tx_complete(struct ipa_endpoint *endpoint,
-                                    struct gsi_trans *trans)
-{
-}
-
-/* Complete transaction initiated in ipa_endpoint_replenish_one() */
-static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
-                                    struct gsi_trans *trans)
+void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
+                                struct gsi_trans *trans)
 {
        struct page *page;
 
+       if (endpoint->toward_ipa)
+               return;
+
        if (trans->cancelled)
                goto done;
 
@@ -1393,15 +1389,6 @@ done:
        ipa_endpoint_replenish(endpoint);
 }
 
-void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
-                                struct gsi_trans *trans)
-{
-       if (endpoint->toward_ipa)
-               ipa_endpoint_tx_complete(endpoint, trans);
-       else
-               ipa_endpoint_rx_complete(endpoint, trans);
-}
-
 void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
                                struct gsi_trans *trans)
 {
@@ -1721,7 +1708,7 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
        if (endpoint->ee_id != GSI_EE_AP)
                return;
 
-       endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id);
+       endpoint->skb_frag_max = gsi->channel[channel_id].trans_tre_max - 1;
        if (!endpoint->toward_ipa) {
                /* RX transactions require a single TRE, so the maximum
                 * backlog is the same as the maximum outstanding TREs.
index 01790c6..28e0a73 100644
@@ -142,7 +142,7 @@ enum ipa_replenish_flag {
  * @endpoint_id:       IPA endpoint number
  * @toward_ipa:                Endpoint direction (true = TX, false = RX)
  * @config:            Default endpoint configuration
- * @trans_tre_max:     Maximum number of TRE descriptors per transaction
+ * @skb_frag_max:      Maximum allowed number of TX SKB fragments
  * @evt_ring_id:       GSI event ring used by the endpoint
  * @netdev:            Network device pointer, if endpoint uses one
  * @replenish_flags:   Replenishing state flags
@@ -157,7 +157,7 @@ struct ipa_endpoint {
        bool toward_ipa;
        struct ipa_endpoint_config config;
 
-       u32 trans_tre_max;
+       u32 skb_frag_max;       /* Used for netdev TX only */
        u32 evt_ring_id;
 
        /* Net device this endpoint is associated with, if any */
index 3837c89..de94921 100644
@@ -47,11 +47,11 @@ typedef enum {
 } ipvl_hdr_type;
 
 struct ipvl_pcpu_stats {
-       u64                     rx_pkts;
-       u64                     rx_bytes;
-       u64                     rx_mcast;
-       u64                     tx_pkts;
-       u64                     tx_bytes;
+       u64_stats_t             rx_pkts;
+       u64_stats_t             rx_bytes;
+       u64_stats_t             rx_mcast;
+       u64_stats_t             tx_pkts;
+       u64_stats_t             tx_bytes;
        struct u64_stats_sync   syncp;
        u32                     rx_errs;
        u32                     tx_drps;
index 6ffb274..dfeb5b3 100644
@@ -19,10 +19,10 @@ void ipvlan_count_rx(const struct ipvl_dev *ipvlan,
 
                pcptr = this_cpu_ptr(ipvlan->pcpu_stats);
                u64_stats_update_begin(&pcptr->syncp);
-               pcptr->rx_pkts++;
-               pcptr->rx_bytes += len;
+               u64_stats_inc(&pcptr->rx_pkts);
+               u64_stats_add(&pcptr->rx_bytes, len);
                if (mcast)
-                       pcptr->rx_mcast++;
+                       u64_stats_inc(&pcptr->rx_mcast);
                u64_stats_update_end(&pcptr->syncp);
        } else {
                this_cpu_inc(ipvlan->pcpu_stats->rx_errs);
index aa28a29..49ba8a5 100644 (file)
@@ -224,8 +224,8 @@ static netdev_tx_t ipvlan_start_xmit(struct sk_buff *skb,
                pcptr = this_cpu_ptr(ipvlan->pcpu_stats);
 
                u64_stats_update_begin(&pcptr->syncp);
-               pcptr->tx_pkts++;
-               pcptr->tx_bytes += skblen;
+               u64_stats_inc(&pcptr->tx_pkts);
+               u64_stats_add(&pcptr->tx_bytes, skblen);
                u64_stats_update_end(&pcptr->syncp);
        } else {
                this_cpu_inc(ipvlan->pcpu_stats->tx_drps);
@@ -300,11 +300,11 @@ static void ipvlan_get_stats64(struct net_device *dev,
                        pcptr = per_cpu_ptr(ipvlan->pcpu_stats, idx);
                        do {
                                strt= u64_stats_fetch_begin_irq(&pcptr->syncp);
-                               rx_pkts = pcptr->rx_pkts;
-                               rx_bytes = pcptr->rx_bytes;
-                               rx_mcast = pcptr->rx_mcast;
-                               tx_pkts = pcptr->tx_pkts;
-                               tx_bytes = pcptr->tx_bytes;
+                               rx_pkts = u64_stats_read(&pcptr->rx_pkts);
+                               rx_bytes = u64_stats_read(&pcptr->rx_bytes);
+                               rx_mcast = u64_stats_read(&pcptr->rx_mcast);
+                               tx_pkts = u64_stats_read(&pcptr->tx_pkts);
+                               tx_bytes = u64_stats_read(&pcptr->tx_bytes);
                        } while (u64_stats_fetch_retry_irq(&pcptr->syncp,
                                                           strt));
 
@@ -315,8 +315,8 @@ static void ipvlan_get_stats64(struct net_device *dev,
                        s->tx_bytes += tx_bytes;
 
                        /* u32 values are updated without syncp protection. */
-                       rx_errs += pcptr->rx_errs;
-                       tx_drps += pcptr->tx_drps;
+                       rx_errs += READ_ONCE(pcptr->rx_errs);
+                       tx_drps += READ_ONCE(pcptr->tx_drps);
                }
                s->rx_errors = rx_errs;
                s->rx_dropped = rx_errs;
index 817577e..c881e1b 100644 (file)
@@ -523,8 +523,8 @@ static void count_tx(struct net_device *dev, int ret, int len)
                struct pcpu_sw_netstats *stats = this_cpu_ptr(dev->tstats);
 
                u64_stats_update_begin(&stats->syncp);
-               stats->tx_packets++;
-               stats->tx_bytes += len;
+               u64_stats_inc(&stats->tx_packets);
+               u64_stats_add(&stats->tx_bytes, len);
                u64_stats_update_end(&stats->syncp);
        }
 }
@@ -825,8 +825,8 @@ static void count_rx(struct net_device *dev, int len)
        struct pcpu_sw_netstats *stats = this_cpu_ptr(dev->tstats);
 
        u64_stats_update_begin(&stats->syncp);
-       stats->rx_packets++;
-       stats->rx_bytes += len;
+       u64_stats_inc(&stats->rx_packets);
+       u64_stats_add(&stats->rx_bytes, len);
        u64_stats_update_end(&stats->syncp);
 }
 
@@ -3462,7 +3462,7 @@ static int macsec_dev_init(struct net_device *dev)
                memcpy(dev->broadcast, real_dev->broadcast, dev->addr_len);
 
        /* Get macsec's reference to real_dev */
-       dev_hold_track(real_dev, &macsec->dev_tracker, GFP_KERNEL);
+       netdev_hold(real_dev, &macsec->dev_tracker, GFP_KERNEL);
 
        return 0;
 }
@@ -3710,7 +3710,7 @@ static void macsec_free_netdev(struct net_device *dev)
        free_percpu(macsec->secy.tx_sc.stats);
 
        /* Get rid of the macsec's reference to real_dev */
-       dev_put_track(macsec->real_dev, &macsec->dev_tracker);
+       netdev_put(macsec->real_dev, &macsec->dev_tracker);
 }
 
 static void macsec_setup(struct net_device *dev)
index eff75be..1080d6e 100644 (file)
@@ -575,8 +575,8 @@ static netdev_tx_t macvlan_start_xmit(struct sk_buff *skb,
 
                pcpu_stats = this_cpu_ptr(vlan->pcpu_stats);
                u64_stats_update_begin(&pcpu_stats->syncp);
-               pcpu_stats->tx_packets++;
-               pcpu_stats->tx_bytes += len;
+               u64_stats_inc(&pcpu_stats->tx_packets);
+               u64_stats_add(&pcpu_stats->tx_bytes, len);
                u64_stats_update_end(&pcpu_stats->syncp);
        } else {
                this_cpu_inc(vlan->pcpu_stats->tx_dropped);
@@ -915,7 +915,7 @@ static int macvlan_init(struct net_device *dev)
        port->count += 1;
 
        /* Get macvlan's reference to lowerdev */
-       dev_hold_track(lowerdev, &vlan->dev_tracker, GFP_KERNEL);
+       netdev_hold(lowerdev, &vlan->dev_tracker, GFP_KERNEL);
 
        return 0;
 }
@@ -949,11 +949,11 @@ static void macvlan_dev_get_stats64(struct net_device *dev,
                        p = per_cpu_ptr(vlan->pcpu_stats, i);
                        do {
                                start = u64_stats_fetch_begin_irq(&p->syncp);
-                               rx_packets      = p->rx_packets;
-                               rx_bytes        = p->rx_bytes;
-                               rx_multicast    = p->rx_multicast;
-                               tx_packets      = p->tx_packets;
-                               tx_bytes        = p->tx_bytes;
+                               rx_packets      = u64_stats_read(&p->rx_packets);
+                               rx_bytes        = u64_stats_read(&p->rx_bytes);
+                               rx_multicast    = u64_stats_read(&p->rx_multicast);
+                               tx_packets      = u64_stats_read(&p->tx_packets);
+                               tx_bytes        = u64_stats_read(&p->tx_bytes);
                        } while (u64_stats_fetch_retry_irq(&p->syncp, start));
 
                        stats->rx_packets       += rx_packets;
@@ -964,8 +964,8 @@ static void macvlan_dev_get_stats64(struct net_device *dev,
                        /* rx_errors & tx_dropped are u32, updated
                         * without syncp protection.
                         */
-                       rx_errors       += p->rx_errors;
-                       tx_dropped      += p->tx_dropped;
+                       rx_errors       += READ_ONCE(p->rx_errors);
+                       tx_dropped      += READ_ONCE(p->tx_dropped);
                }
                stats->rx_errors        = rx_errors;
                stats->rx_dropped       = rx_errors;
@@ -1185,7 +1185,7 @@ static void macvlan_dev_free(struct net_device *dev)
        struct macvlan_dev *vlan = netdev_priv(dev);
 
        /* Get rid of the macvlan's reference to lowerdev */
-       dev_put_track(vlan->lowerdev, &vlan->dev_tracker);
+       netdev_put(vlan->lowerdev, &vlan->dev_tracker);
 }
 
 void macvlan_common_setup(struct net_device *dev)
index ab8cd55..ddac61d 100644 (file)
@@ -721,7 +721,7 @@ restart:
                                __netpoll_cleanup(&nt->np);
 
                                spin_lock_irqsave(&target_list_lock, flags);
-                               dev_put_track(nt->np.dev, &nt->np.dev_tracker);
+                               netdev_put(nt->np.dev, &nt->np.dev_tracker);
                                nt->np.dev = NULL;
                                nt->enabled = false;
                                stopped = true;
index 8561f2d..13dafe7 100644 (file)
 #define DP83867_DOWNSHIFT_2_COUNT      2
 #define DP83867_DOWNSHIFT_4_COUNT      4
 #define DP83867_DOWNSHIFT_8_COUNT      8
+#define DP83867_SGMII_AUTONEG_EN       BIT(7)
 
 /* CFG3 bits */
 #define DP83867_CFG3_INT_OE                    BIT(7)
@@ -855,6 +856,32 @@ static int dp83867_phy_reset(struct phy_device *phydev)
                         DP83867_PHYCR_FORCE_LINK_GOOD, 0);
 }
 
+static void dp83867_link_change_notify(struct phy_device *phydev)
+{
+       /* There is a limitation in DP83867 PHY device where SGMII AN is
+        * only triggered once after the device is booted up. Even after the
+        * PHY TPI is down and up again, SGMII AN is not triggered and
+        * hence no new in-band message from PHY to MAC side SGMII.
+        * This could cause an issue during power up, when PHY is up prior
+        * to MAC. At this condition, once MAC side SGMII is up, MAC side
+        * SGMII wouldn't receive new in-band message from TI PHY with
+        * correct link status, speed and duplex info.
+        * Thus, a SW solution is implemented here to retrigger SGMII
+        * Auto-Neg whenever there is a link change.
+        */
+       if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
+               int val = 0;
+
+               val = phy_clear_bits(phydev, DP83867_CFG2,
+                                    DP83867_SGMII_AUTONEG_EN);
+               if (val < 0)
+                       return;
+
+               phy_set_bits(phydev, DP83867_CFG2,
+                            DP83867_SGMII_AUTONEG_EN);
+       }
+}
+
 static struct phy_driver dp83867_driver[] = {
        {
                .phy_id         = DP83867_PHY_ID,
@@ -879,6 +906,8 @@ static struct phy_driver dp83867_driver[] = {
 
                .suspend        = genphy_suspend,
                .resume         = genphy_resume,
+
+               .link_change_notify = dp83867_link_change_notify,
        },
 };
 module_phy_driver(dp83867_driver);
index 03abe62..aef739c 100644 (file)
@@ -353,6 +353,7 @@ static int __init fixed_mdio_bus_init(void)
        fmb->mii_bus->parent = &pdev->dev;
        fmb->mii_bus->read = &fixed_mdio_read;
        fmb->mii_bus->write = &fixed_mdio_write;
+       fmb->mii_bus->phy_mask = ~0;
 
        ret = mdiobus_register(fmb->mii_bus);
        if (ret)
index 58d6029..8a2dbe8 100644 (file)
@@ -1046,7 +1046,6 @@ int __init mdio_bus_init(void)
 
        return ret;
 }
-EXPORT_SYMBOL_GPL(mdio_bus_init);
 
 #if IS_ENABLED(CONFIG_PHYLIB)
 void mdio_bus_exit(void)
index b07dde6..aac133a 100644 (file)
@@ -749,10 +749,10 @@ static rx_handler_result_t team_handle_frame(struct sk_buff **pskb)
 
                pcpu_stats = this_cpu_ptr(team->pcpu_stats);
                u64_stats_update_begin(&pcpu_stats->syncp);
-               pcpu_stats->rx_packets++;
-               pcpu_stats->rx_bytes += skb->len;
+               u64_stats_inc(&pcpu_stats->rx_packets);
+               u64_stats_add(&pcpu_stats->rx_bytes, skb->len);
                if (skb->pkt_type == PACKET_MULTICAST)
-                       pcpu_stats->rx_multicast++;
+                       u64_stats_inc(&pcpu_stats->rx_multicast);
                u64_stats_update_end(&pcpu_stats->syncp);
 
                skb->dev = team->dev;
@@ -1720,8 +1720,8 @@ static netdev_tx_t team_xmit(struct sk_buff *skb, struct net_device *dev)
 
                pcpu_stats = this_cpu_ptr(team->pcpu_stats);
                u64_stats_update_begin(&pcpu_stats->syncp);
-               pcpu_stats->tx_packets++;
-               pcpu_stats->tx_bytes += len;
+               u64_stats_inc(&pcpu_stats->tx_packets);
+               u64_stats_add(&pcpu_stats->tx_bytes, len);
                u64_stats_update_end(&pcpu_stats->syncp);
        } else {
                this_cpu_inc(team->pcpu_stats->tx_dropped);
@@ -1854,11 +1854,11 @@ team_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
                p = per_cpu_ptr(team->pcpu_stats, i);
                do {
                        start = u64_stats_fetch_begin_irq(&p->syncp);
-                       rx_packets      = p->rx_packets;
-                       rx_bytes        = p->rx_bytes;
-                       rx_multicast    = p->rx_multicast;
-                       tx_packets      = p->tx_packets;
-                       tx_bytes        = p->tx_bytes;
+                       rx_packets      = u64_stats_read(&p->rx_packets);
+                       rx_bytes        = u64_stats_read(&p->rx_bytes);
+                       rx_multicast    = u64_stats_read(&p->rx_multicast);
+                       tx_packets      = u64_stats_read(&p->tx_packets);
+                       tx_bytes        = u64_stats_read(&p->tx_bytes);
                } while (u64_stats_fetch_retry_irq(&p->syncp, start));
 
                stats->rx_packets       += rx_packets;
@@ -1870,9 +1870,9 @@ team_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
                 * rx_dropped, tx_dropped & rx_nohandler are u32,
                 * updated without syncp protection.
                 */
-               rx_dropped      += p->rx_dropped;
-               tx_dropped      += p->tx_dropped;
-               rx_nohandler    += p->rx_nohandler;
+               rx_dropped      += READ_ONCE(p->rx_dropped);
+               tx_dropped      += READ_ONCE(p->tx_dropped);
+               rx_nohandler    += READ_ONCE(p->rx_nohandler);
        }
        stats->rx_dropped       = rx_dropped;
        stats->tx_dropped       = tx_dropped;
index bd03e16..3511081 100644 (file)
@@ -2088,6 +2088,11 @@ static const struct usb_device_id products[] = {
                USB_DEVICE(0x0424, 0x9E08),
                .driver_info = (unsigned long) &smsc95xx_info,
        },
+       {
+               /* Microchip's EVB-LAN8670-USB 10BASE-T1S Ethernet Device */
+               USB_DEVICE(0x184F, 0x0051),
+               .driver_info = (unsigned long)&smsc95xx_info,
+       },
        { },            /* END */
 };
 MODULE_DEVICE_TABLE(usb, products);
index 1cb6dab..dc79811 100644 (file)
@@ -337,8 +337,8 @@ void usbnet_skb_return (struct usbnet *dev, struct sk_buff *skb)
                skb->protocol = eth_type_trans (skb, dev->net);
 
        flags = u64_stats_update_begin_irqsave(&stats64->syncp);
-       stats64->rx_packets++;
-       stats64->rx_bytes += skb->len;
+       u64_stats_inc(&stats64->rx_packets);
+       u64_stats_add(&stats64->rx_bytes, skb->len);
        u64_stats_update_end_irqrestore(&stats64->syncp, flags);
 
        netif_dbg(dev, rx_status, dev->net, "< rx, len %zu, type 0x%x\n",
@@ -1258,8 +1258,8 @@ static void tx_complete (struct urb *urb)
                unsigned long flags;
 
                flags = u64_stats_update_begin_irqsave(&stats64->syncp);
-               stats64->tx_packets += entry->packets;
-               stats64->tx_bytes += entry->length;
+               u64_stats_add(&stats64->tx_packets, entry->packets);
+               u64_stats_add(&stats64->tx_bytes, entry->length);
                u64_stats_update_end_irqrestore(&stats64->syncp, flags);
        } else {
                dev->net->stats.tx_errors++;
index 7a38925..a666a88 100644 (file)
@@ -2,7 +2,7 @@
 #
 # Linux driver for VMware's vmxnet3 ethernet NIC.
 #
-# Copyright (C) 2007-2021, VMware, Inc. All Rights Reserved.
+# Copyright (C) 2007-2022, VMware, Inc. All Rights Reserved.
 #
 # This program is free software; you can redistribute it and/or modify it
 # under the terms of the GNU General Public License as published by the
index f9f3a23..41c0660 100644 (file)
@@ -1,7 +1,7 @@
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
index 74d4e8b..41d6767 100644 (file)
@@ -1,7 +1,7 @@
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
@@ -40,7 +40,13 @@ enum {
        VMXNET3_REG_MACL        = 0x28, /* MAC Address Low */
        VMXNET3_REG_MACH        = 0x30, /* MAC Address High */
        VMXNET3_REG_ICR         = 0x38, /* Interrupt Cause Register */
-       VMXNET3_REG_ECR         = 0x40  /* Event Cause Register */
+       VMXNET3_REG_ECR         = 0x40, /* Event Cause Register */
+       VMXNET3_REG_DCR         = 0x48, /* Device capability register,
+                                        * from 0x48 to 0x80
+                                        */
+       VMXNET3_REG_PTCR        = 0x88, /* Passthru capability register
+                                        * from 0x88 to 0xb0
+                                        */
 };
 
 /* BAR 0 */
@@ -51,8 +57,18 @@ enum {
        VMXNET3_REG_RXPROD2     = 0xA00  /* Rx Producer Index for ring 2 */
 };
 
-#define VMXNET3_PT_REG_SIZE     4096   /* BAR 0 */
-#define VMXNET3_VD_REG_SIZE     4096   /* BAR 1 */
+/* For Large PT BAR, the following offsets to the DB registers */
+enum {
+       VMXNET3_REG_LB_TXPROD   = 0x1000, /* Tx Producer Index */
+       VMXNET3_REG_LB_RXPROD   = 0x1400, /* Rx Producer Index for ring 1 */
+       VMXNET3_REG_LB_RXPROD2  = 0x1800, /* Rx Producer Index for ring 2 */
+};
+
+#define VMXNET3_PT_REG_SIZE         4096               /* BAR 0 */
+#define VMXNET3_LARGE_PT_REG_SIZE   8192               /* large PT pages */
+#define VMXNET3_VD_REG_SIZE         4096               /* BAR 1 */
+#define VMXNET3_LARGE_BAR0_REG_SIZE (4096 * 4096)      /* LARGE BAR 0 */
+#define VMXNET3_OOB_REG_SIZE        (4094 * 4096)      /* OOB pages */
 
 #define VMXNET3_REG_ALIGN       8      /* All registers are 8-byte aligned. */
 #define VMXNET3_REG_ALIGN_MASK  0x7
@@ -83,6 +99,9 @@ enum {
        VMXNET3_CMD_SET_COALESCE,
        VMXNET3_CMD_REGISTER_MEMREGS,
        VMXNET3_CMD_SET_RSS_FIELDS,
+       VMXNET3_CMD_RESERVED4,
+       VMXNET3_CMD_RESERVED5,
+       VMXNET3_CMD_SET_RING_BUFFER_SIZE,
 
        VMXNET3_CMD_FIRST_GET = 0xF00D0000,
        VMXNET3_CMD_GET_QUEUE_STATUS = VMXNET3_CMD_FIRST_GET,
@@ -101,6 +120,9 @@ enum {
        VMXNET3_CMD_GET_RESERVED2,
        VMXNET3_CMD_GET_RESERVED3,
        VMXNET3_CMD_GET_MAX_QUEUES_CONF,
+       VMXNET3_CMD_GET_RESERVED4,
+       VMXNET3_CMD_GET_MAX_CAPABILITIES,
+       VMXNET3_CMD_GET_DCR0_REG,
 };
 
 /*
@@ -126,17 +148,17 @@ struct Vmxnet3_TxDesc {
 
 #ifdef __BIG_ENDIAN_BITFIELD
        u32 msscof:14;  /* MSS, checksum offset, flags */
-       u32 ext1:1;
+       u32 ext1:1;     /* set to 1 to indicate inner csum/tso, vmxnet3 v7 */
        u32 dtype:1;    /* descriptor type */
-       u32 oco:1;
+       u32 oco:1;      /* Outer csum offload */
        u32 gen:1;      /* generation bit */
        u32 len:14;
 #else
        u32 len:14;
        u32 gen:1;      /* generation bit */
-       u32 oco:1;
+       u32 oco:1;      /* Outer csum offload */
        u32 dtype:1;    /* descriptor type */
-       u32 ext1:1;
+       u32 ext1:1;     /* set to 1 to indicate inner csum/tso, vmxnet3 v7 */
        u32 msscof:14;  /* MSS, checksum offset, flags */
 #endif  /* __BIG_ENDIAN_BITFIELD */
 
@@ -240,11 +262,13 @@ struct Vmxnet3_RxCompDesc {
        u32             rqID:10;      /* rx queue/ring ID */
        u32             sop:1;        /* Start of Packet */
        u32             eop:1;        /* End of Packet */
-       u32             ext1:2;
+       u32             ext1:2;       /* bit 0: indicating v4/v6/.. is for inner header */
+                                     /* bit 1: indicating rssType is based on inner header */
        u32             rxdIdx:12;    /* Index of the RxDesc */
 #else
        u32             rxdIdx:12;    /* Index of the RxDesc */
-       u32             ext1:2;
+       u32             ext1:2;       /* bit 0: indicating v4/v6/.. is for inner header */
+                                     /* bit 1: indicating rssType is based on inner header */
        u32             eop:1;        /* End of Packet */
        u32             sop:1;        /* Start of Packet */
        u32             rqID:10;      /* rx queue/ring ID */
@@ -378,6 +402,8 @@ union Vmxnet3_GenericDesc {
 
 /* max # of tx descs for a non-tso pkt */
 #define VMXNET3_MAX_TXD_PER_PKT 16
+/* max # of tx descs for a tso pkt */
+#define VMXNET3_MAX_TSO_TXD_PER_PKT 24
 
 /* Max size of a single rx buffer */
 #define VMXNET3_MAX_RX_BUF_SIZE  ((1 << 14) - 1)
@@ -724,6 +750,13 @@ enum Vmxnet3_RSSField {
        VMXNET3_RSS_FIELDS_ESPIP6 = 0x0020,
 };
 
+struct Vmxnet3_RingBufferSize {
+       __le16             ring1BufSizeType0;
+       __le16             ring1BufSizeType1;
+       __le16             ring2BufSizeType1;
+       __le16             pad;
+};
+
 /* If the command data <= 16 bytes, use the shared memory directly.
  * otherwise, use variable length configuration descriptor.
  */
@@ -731,6 +764,7 @@ union Vmxnet3_CmdInfo {
        struct Vmxnet3_VariableLenConfDesc      varConf;
        struct Vmxnet3_SetPolling               setPolling;
        enum   Vmxnet3_RSSField                 setRssFields;
+       struct Vmxnet3_RingBufferSize           ringBufSize;
        __le64                                  data[2];
 };
 
@@ -801,4 +835,30 @@ struct Vmxnet3_DriverShared {
 #define VMXNET3_LINK_UP         (10000 << 16 | 1)    /* 10 Gbps, up */
 #define VMXNET3_LINK_DOWN       0
 
+#define VMXNET3_DCR_ERROR                          31   /* error when bit 31 of DCR is set */
+#define VMXNET3_CAP_UDP_RSS                        0    /* bit 0 of DCR 0 */
+#define VMXNET3_CAP_ESP_RSS_IPV4                   1    /* bit 1 of DCR 0 */
+#define VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD        2    /* bit 2 of DCR 0 */
+#define VMXNET3_CAP_GENEVE_TSO                     3    /* bit 3 of DCR 0 */
+#define VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD         4    /* bit 4 of DCR 0 */
+#define VMXNET3_CAP_VXLAN_TSO                      5    /* bit 5 of DCR 0 */
+#define VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD  6    /* bit 6 of DCR 0 */
+#define VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD   7    /* bit 7 of DCR 0 */
+#define VMXNET3_CAP_PKT_STEERING_IPV4              8    /* bit 8 of DCR 0 */
+#define VMXNET3_CAP_VERSION_4_MAX                  VMXNET3_CAP_PKT_STEERING_IPV4
+#define VMXNET3_CAP_ESP_RSS_IPV6                   9    /* bit 9 of DCR 0 */
+#define VMXNET3_CAP_VERSION_5_MAX                  VMXNET3_CAP_ESP_RSS_IPV6
+#define VMXNET3_CAP_ESP_OVER_UDP_RSS               10   /* bit 10 of DCR 0 */
+#define VMXNET3_CAP_INNER_RSS                      11   /* bit 11 of DCR 0 */
+#define VMXNET3_CAP_INNER_ESP_RSS                  12   /* bit 12 of DCR 0 */
+#define VMXNET3_CAP_CRC32_HASH_FUNC                13   /* bit 13 of DCR 0 */
+#define VMXNET3_CAP_VERSION_6_MAX                  VMXNET3_CAP_CRC32_HASH_FUNC
+#define VMXNET3_CAP_OAM_FILTER                     14   /* bit 14 of DCR 0 */
+#define VMXNET3_CAP_ESP_QS                         15   /* bit 15 of DCR 0 */
+#define VMXNET3_CAP_LARGE_BAR                      16   /* bit 16 of DCR 0 */
+#define VMXNET3_CAP_OOORX_COMP                     17   /* bit 17 of DCR 0 */
+#define VMXNET3_CAP_VERSION_7_MAX                  18
+/* when new capability is introduced, update VMXNET3_CAP_MAX */
+#define VMXNET3_CAP_MAX                            VMXNET3_CAP_VERSION_7_MAX
+
 #endif /* _VMXNET3_DEFS_H_ */
index 93e8d11..1565e18 100644 (file)
@@ -1,7 +1,7 @@
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
@@ -130,6 +130,20 @@ vmxnet3_tq_stop(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter)
        netif_stop_subqueue(adapter->netdev, (tq - adapter->tx_queue));
 }
 
+/* Check whether the capability is supported by the UPT device,
+ * or whether UPT was even requested
+ */
+bool
+vmxnet3_check_ptcapability(u32 cap_supported, u32 cap)
+{
+       if (cap_supported & (1UL << VMXNET3_DCR_ERROR) ||
+           cap_supported & (1UL << cap)) {
+               return true;
+       }
+
+       return false;
+}
+
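The helper above treats a capability as usable either when its DCR0 bit is set or when the DCR error bit (bit 31) is reported, in which case the check is effectively bypassed. A standalone model of that test (names are illustrative, not the driver's symbols):

```c
#include <stdint.h>
#include <assert.h>

/* Model of the passthrough-capability check: true when the requested
 * capability bit is set in the DCR0 word, or when the DCR error bit
 * (bit 31) is set, signalling that capability reporting failed. */
#define DCR_ERROR_BIT 31

static int check_ptcapability(uint32_t cap_supported, uint32_t cap)
{
	return (cap_supported & (UINT32_C(1) << DCR_ERROR_BIT)) ||
	       (cap_supported & (UINT32_C(1) << cap));
}
```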
 
 /*
  * Check the link state. This may start or stop the tx queue.
@@ -571,6 +585,7 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx,
 
                rbi = rbi_base + ring->next2fill;
                gd = ring->base + ring->next2fill;
+               rbi->comp_state = VMXNET3_RXD_COMP_PENDING;
 
                if (rbi->buf_type == VMXNET3_RX_BUF_SKB) {
                        if (rbi->skb == NULL) {
@@ -630,8 +645,10 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx,
 
                /* Fill the last buffer but dont mark it ready, or else the
                 * device will think that the queue is full */
-               if (num_allocated == num_to_alloc)
+               if (num_allocated == num_to_alloc) {
+                       rbi->comp_state = VMXNET3_RXD_COMP_DONE;
                        break;
+               }
 
                gd->dword[2] |= cpu_to_le32(ring->gen << VMXNET3_RXD_GEN_SHIFT);
                num_allocated++;
@@ -1044,6 +1061,23 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
                        }
                        tq->stats.copy_skb_header++;
                }
+               if (unlikely(count > VMXNET3_MAX_TSO_TXD_PER_PKT)) {
+                       /* tso pkts must not use more than
+                        * VMXNET3_MAX_TSO_TXD_PER_PKT entries
+                        */
+                       if (skb_linearize(skb) != 0) {
+                               tq->stats.drop_too_many_frags++;
+                               goto drop_pkt;
+                       }
+                       tq->stats.linearized++;
+
+                       /* recalculate the # of descriptors to use */
+                       count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) + 1;
+                       if (unlikely(count > VMXNET3_MAX_TSO_TXD_PER_PKT)) {
+                               tq->stats.drop_too_many_frags++;
+                               goto drop_pkt;
+                       }
+               }
                if (skb->encapsulation) {
                        vmxnet3_prepare_inner_tso(skb, &ctx);
                } else {
@@ -1127,7 +1161,12 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
        if (ctx.mss) {
                if (VMXNET3_VERSION_GE_4(adapter) && skb->encapsulation) {
                        gdesc->txd.hlen = ctx.l4_offset + ctx.l4_hdr_size;
-                       gdesc->txd.om = VMXNET3_OM_ENCAP;
+                       if (VMXNET3_VERSION_GE_7(adapter)) {
+                               gdesc->txd.om = VMXNET3_OM_TSO;
+                               gdesc->txd.ext1 = 1;
+                       } else {
+                               gdesc->txd.om = VMXNET3_OM_ENCAP;
+                       }
                        gdesc->txd.msscof = ctx.mss;
 
                        if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM)
@@ -1144,8 +1183,15 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
                            skb->encapsulation) {
                                gdesc->txd.hlen = ctx.l4_offset +
                                                  ctx.l4_hdr_size;
-                               gdesc->txd.om = VMXNET3_OM_ENCAP;
-                               gdesc->txd.msscof = 0;          /* Reserved */
+                               if (VMXNET3_VERSION_GE_7(adapter)) {
+                                       gdesc->txd.om = VMXNET3_OM_CSUM;
+                                       gdesc->txd.msscof = ctx.l4_offset +
+                                                           skb->csum_offset;
+                                       gdesc->txd.ext1 = 1;
+                               } else {
+                                       gdesc->txd.om = VMXNET3_OM_ENCAP;
+                                       gdesc->txd.msscof = 0;          /* Reserved */
+                               }
                        } else {
                                gdesc->txd.hlen = ctx.l4_offset;
                                gdesc->txd.om = VMXNET3_OM_CSUM;
@@ -1193,7 +1239,7 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
        if (tx_num_deferred >= le32_to_cpu(tq->shared->txThreshold)) {
                tq->shared->txNumDeferred = 0;
                VMXNET3_WRITE_BAR0_REG(adapter,
-                                      VMXNET3_REG_TXPROD + tq->qid * 8,
+                                      adapter->tx_prod_offset + tq->qid * 8,
                                       tq->tx_ring.next2fill);
        }
 
@@ -1345,14 +1391,15 @@ static int
 vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
                       struct vmxnet3_adapter *adapter, int quota)
 {
-       static const u32 rxprod_reg[2] = {
-               VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2
+       u32 rxprod_reg[2] = {
+               adapter->rx_prod_offset, adapter->rx_prod2_offset
        };
        u32 num_pkts = 0;
        bool skip_page_frags = false;
        struct Vmxnet3_RxCompDesc *rcd;
        struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
        u16 segCnt = 0, mss = 0;
+       int comp_offset, fill_offset;
 #ifdef __BIG_ENDIAN_BITFIELD
        struct Vmxnet3_RxDesc rxCmdDesc;
        struct Vmxnet3_RxCompDesc rxComp;
@@ -1625,9 +1672,15 @@ not_lro:
 
 rcd_done:
                /* device may have skipped some rx descs */
-               ring->next2comp = idx;
-               num_to_alloc = vmxnet3_cmd_ring_desc_avail(ring);
                ring = rq->rx_ring + ring_idx;
+               rbi->comp_state = VMXNET3_RXD_COMP_DONE;
+
+               comp_offset = vmxnet3_cmd_ring_desc_avail(ring);
+               fill_offset = (idx > ring->next2fill ? 0 : ring->size) +
+                             idx - ring->next2fill - 1;
+               if (!ring->isOutOfOrder || fill_offset >= comp_offset)
+                       ring->next2comp = idx;
+               num_to_alloc = vmxnet3_cmd_ring_desc_avail(ring);
 
                /* Ensure that the writes to rxd->gen bits will be observed
                 * after all other writes to rxd objects.
@@ -1635,18 +1688,38 @@ rcd_done:
                dma_wmb();
 
                while (num_to_alloc) {
-                       vmxnet3_getRxDesc(rxd, &ring->base[ring->next2fill].rxd,
-                                         &rxCmdDesc);
-                       BUG_ON(!rxd->addr);
+                       rbi = rq->buf_info[ring_idx] + ring->next2fill;
+                       if (!(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_OOORX_COMP)))
+                               goto refill_buf;
+                       if (ring_idx == 0) {
+                               /* ring0 Type1 buffers can get skipped; re-fill them */
+                               if (rbi->buf_type != VMXNET3_RX_BUF_SKB)
+                                       goto refill_buf;
+                       }
+                       if (rbi->comp_state == VMXNET3_RXD_COMP_DONE) {
+refill_buf:
+                               vmxnet3_getRxDesc(rxd, &ring->base[ring->next2fill].rxd,
+                                                 &rxCmdDesc);
+                               WARN_ON(!rxd->addr);
+
+                               /* Recv desc is ready to be used by the device */
+                               rxd->gen = ring->gen;
+                               vmxnet3_cmd_ring_adv_next2fill(ring);
+                               rbi->comp_state = VMXNET3_RXD_COMP_PENDING;
+                               num_to_alloc--;
+                       } else {
+                               /* rx completion hasn't occurred */
+                               ring->isOutOfOrder = 1;
+                               break;
+                       }
+               }
 
-                       /* Recv desc is ready to be used by the device */
-                       rxd->gen = ring->gen;
-                       vmxnet3_cmd_ring_adv_next2fill(ring);
-                       num_to_alloc--;
+               if (num_to_alloc == 0) {
+                       ring->isOutOfOrder = 0;
                }
 
                /* if needed, update the register */
-               if (unlikely(rq->shared->updateRxProd)) {
+               if (unlikely(rq->shared->updateRxProd) && (ring->next2fill & 0xf) == 0) {
                        VMXNET3_WRITE_BAR0_REG(adapter,
                                               rxprod_reg[ring_idx] + rq->qid * 8,
                                               ring->next2fill);
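The refill loop in the hunk above only hands a descriptor back to the device once its completion has actually been seen (`comp_state == VMXNET3_RXD_COMP_DONE`); the first still-pending buffer stops the loop and marks the ring out of order. A minimal userspace sketch of that gate, using simplified hypothetical types rather than the driver's real structures:

```c
#include <stddef.h>

enum { RXD_COMP_PENDING = 0, RXD_COMP_DONE = 1 };

struct buf_info {
        int comp_state;
};

/* Refill as many descriptors as possible; stop at the first buffer
 * whose completion has not arrived yet and flag the ring as out of
 * order. Returns the number of descriptors actually refilled. */
static int refill_ring(struct buf_info *bufs, size_t n_bufs,
                       size_t *next2fill, int *out_of_order,
                       int num_to_alloc)
{
        int refilled = 0;

        while (num_to_alloc) {
                struct buf_info *rbi = &bufs[*next2fill];

                if (rbi->comp_state != RXD_COMP_DONE) {
                        /* rx completion hasn't occurred for this buffer */
                        *out_of_order = 1;
                        return refilled;
                }
                /* descriptor is ready to be reused by the device */
                rbi->comp_state = RXD_COMP_PENDING;
                *next2fill = (*next2fill + 1) % n_bufs;
                refilled++;
                num_to_alloc--;
        }
        *out_of_order = 0;      /* everything refilled: back in order */
        return refilled;
}
```

On the next pass, refilling resumes at the same `next2fill` slot, so once the straggling completion arrives the ring drains in order again.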
@@ -1810,6 +1883,7 @@ vmxnet3_rq_init(struct vmxnet3_rx_queue *rq,
                memset(rq->rx_ring[i].base, 0, rq->rx_ring[i].size *
                       sizeof(struct Vmxnet3_RxDesc));
                rq->rx_ring[i].gen = VMXNET3_INIT_GEN;
+               rq->rx_ring[i].isOutOfOrder = 0;
        }
        if (vmxnet3_rq_alloc_rx_buf(rq, 0, rq->rx_ring[0].size - 1,
                                    adapter) == 0) {
@@ -2000,8 +2074,17 @@ vmxnet3_poll_rx_only(struct napi_struct *napi, int budget)
        rxd_done = vmxnet3_rq_rx_complete(rq, adapter, budget);
 
        if (rxd_done < budget) {
+               struct Vmxnet3_RxCompDesc *rcd;
+#ifdef __BIG_ENDIAN_BITFIELD
+               struct Vmxnet3_RxCompDesc rxComp;
+#endif
                napi_complete_done(napi, rxd_done);
                vmxnet3_enable_intr(adapter, rq->comp_ring.intr_idx);
+               /* after unmasking the interrupt, check if any descriptors were completed */
+               vmxnet3_getRxComp(rcd, &rq->comp_ring.base[rq->comp_ring.next2proc].rcd,
+                                 &rxComp);
+               if (rcd->gen == rq->comp_ring.gen && napi_reschedule(napi))
+                       vmxnet3_disable_intr(adapter, rq->comp_ring.intr_idx);
        }
        return rxd_done;
 }
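The `vmxnet3_poll_rx_only()` change above re-reads the completion ring after re-enabling the interrupt: a completion that lands between `napi_complete_done()` and the unmask would otherwise be missed until the next interrupt. The control flow of that "check once more after unmask" pattern can be sketched as a plain function (simplified; the real code re-reads the descriptor's generation bit and calls `napi_reschedule()`):

```c
/* Returns 1 if polling must continue, 0 if it is safe to stay in
 * interrupt mode. completion_pending stands in for the re-read of
 * the completion descriptor after the interrupt is unmasked. */
static int complete_or_reschedule(int work_done, int budget,
                                  int completion_pending)
{
        if (work_done >= budget)
                return 1;       /* budget exhausted: stay in poll mode */
        /* napi_complete_done() + interrupt unmask happen here; a
         * completion can land in this window, so check once more. */
        if (completion_pending)
                return 1;       /* missed work: reschedule the poll */
        return 0;               /* truly idle: wait for the next irq */
}
```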
@@ -2626,6 +2709,23 @@ vmxnet3_setup_driver_shared(struct vmxnet3_adapter *adapter)
        /* the rest are already zeroed */
 }
 
+static void
+vmxnet3_init_bufsize(struct vmxnet3_adapter *adapter)
+{
+       struct Vmxnet3_DriverShared *shared = adapter->shared;
+       union Vmxnet3_CmdInfo *cmdInfo = &shared->cu.cmdInfo;
+       unsigned long flags;
+
+       if (!VMXNET3_VERSION_GE_7(adapter))
+               return;
+
+       cmdInfo->ringBufSize = adapter->ringBufSize;
+       spin_lock_irqsave(&adapter->cmd_lock, flags);
+       VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+                              VMXNET3_CMD_SET_RING_BUFFER_SIZE);
+       spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+}
+
 static void
 vmxnet3_init_coalesce(struct vmxnet3_adapter *adapter)
 {
@@ -2671,6 +2771,36 @@ vmxnet3_init_rssfields(struct vmxnet3_adapter *adapter)
                adapter->rss_fields =
                        VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
        } else {
+               if (VMXNET3_VERSION_GE_7(adapter)) {
+                       if ((adapter->rss_fields & VMXNET3_RSS_FIELDS_UDPIP4 ||
+                            adapter->rss_fields & VMXNET3_RSS_FIELDS_UDPIP6) &&
+                           vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                                      VMXNET3_CAP_UDP_RSS)) {
+                               adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_UDP_RSS;
+                       } else {
+                               adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_UDP_RSS);
+                       }
+
+                       if ((adapter->rss_fields & VMXNET3_RSS_FIELDS_ESPIP4) &&
+                           vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                                      VMXNET3_CAP_ESP_RSS_IPV4)) {
+                               adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_ESP_RSS_IPV4;
+                       } else {
+                               adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_ESP_RSS_IPV4);
+                       }
+
+                       if ((adapter->rss_fields & VMXNET3_RSS_FIELDS_ESPIP6) &&
+                           vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                                      VMXNET3_CAP_ESP_RSS_IPV6)) {
+                               adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_ESP_RSS_IPV6;
+                       } else {
+                               adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_ESP_RSS_IPV6);
+                       }
+
+                       VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]);
+                       VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG);
+                       adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+               }
                cmdInfo->setRssFields = adapter->rss_fields;
                VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
                                       VMXNET3_CMD_SET_RSS_FIELDS);
@@ -2734,14 +2864,15 @@ vmxnet3_activate_dev(struct vmxnet3_adapter *adapter)
                goto activate_err;
        }
 
+       vmxnet3_init_bufsize(adapter);
        vmxnet3_init_coalesce(adapter);
        vmxnet3_init_rssfields(adapter);
 
        for (i = 0; i < adapter->num_rx_queues; i++) {
                VMXNET3_WRITE_BAR0_REG(adapter,
-                               VMXNET3_REG_RXPROD + i * VMXNET3_REG_ALIGN,
+                               adapter->rx_prod_offset + i * VMXNET3_REG_ALIGN,
                                adapter->rx_queue[i].rx_ring[0].next2fill);
-               VMXNET3_WRITE_BAR0_REG(adapter, (VMXNET3_REG_RXPROD2 +
+               VMXNET3_WRITE_BAR0_REG(adapter, (adapter->rx_prod2_offset +
                                (i * VMXNET3_REG_ALIGN)),
                                adapter->rx_queue[i].rx_ring[1].next2fill);
        }
@@ -2907,19 +3038,29 @@ static void
 vmxnet3_adjust_rx_ring_size(struct vmxnet3_adapter *adapter)
 {
        size_t sz, i, ring0_size, ring1_size, comp_size;
-       if (adapter->netdev->mtu <= VMXNET3_MAX_SKB_BUF_SIZE -
-                                   VMXNET3_MAX_ETH_HDR_SIZE) {
-               adapter->skb_buf_size = adapter->netdev->mtu +
-                                       VMXNET3_MAX_ETH_HDR_SIZE;
-               if (adapter->skb_buf_size < VMXNET3_MIN_T0_BUF_SIZE)
-                       adapter->skb_buf_size = VMXNET3_MIN_T0_BUF_SIZE;
-
-               adapter->rx_buf_per_pkt = 1;
+       /* With version7 ring1 will have only T0 buffers */
+       if (!VMXNET3_VERSION_GE_7(adapter)) {
+               if (adapter->netdev->mtu <= VMXNET3_MAX_SKB_BUF_SIZE -
+                                           VMXNET3_MAX_ETH_HDR_SIZE) {
+                       adapter->skb_buf_size = adapter->netdev->mtu +
+                                               VMXNET3_MAX_ETH_HDR_SIZE;
+                       if (adapter->skb_buf_size < VMXNET3_MIN_T0_BUF_SIZE)
+                               adapter->skb_buf_size = VMXNET3_MIN_T0_BUF_SIZE;
+
+                       adapter->rx_buf_per_pkt = 1;
+               } else {
+                       adapter->skb_buf_size = VMXNET3_MAX_SKB_BUF_SIZE;
+                       sz = adapter->netdev->mtu - VMXNET3_MAX_SKB_BUF_SIZE +
+                                                   VMXNET3_MAX_ETH_HDR_SIZE;
+                       adapter->rx_buf_per_pkt = 1 + (sz + PAGE_SIZE - 1) / PAGE_SIZE;
+               }
        } else {
-               adapter->skb_buf_size = VMXNET3_MAX_SKB_BUF_SIZE;
-               sz = adapter->netdev->mtu - VMXNET3_MAX_SKB_BUF_SIZE +
-                                           VMXNET3_MAX_ETH_HDR_SIZE;
-               adapter->rx_buf_per_pkt = 1 + (sz + PAGE_SIZE - 1) / PAGE_SIZE;
+               adapter->skb_buf_size = min((int)adapter->netdev->mtu + VMXNET3_MAX_ETH_HDR_SIZE,
+                                           VMXNET3_MAX_SKB_BUF_SIZE);
+               adapter->rx_buf_per_pkt = 1;
+               adapter->ringBufSize.ring1BufSizeType0 = cpu_to_le16(adapter->skb_buf_size);
+               adapter->ringBufSize.ring1BufSizeType1 = 0;
+               adapter->ringBufSize.ring2BufSizeType1 = cpu_to_le16(PAGE_SIZE);
        }
 
        /*
@@ -2935,6 +3076,11 @@ vmxnet3_adjust_rx_ring_size(struct vmxnet3_adapter *adapter)
        ring1_size = (ring1_size + sz - 1) / sz * sz;
        ring1_size = min_t(u32, ring1_size, VMXNET3_RX_RING2_MAX_SIZE /
                           sz * sz);
+       /* For v7 and later, keep ring size power of 2 for UPT */
+       if (VMXNET3_VERSION_GE_7(adapter)) {
+               ring0_size = rounddown_pow_of_two(ring0_size);
+               ring1_size = rounddown_pow_of_two(ring1_size);
+       }
        comp_size = ring0_size + ring1_size;
 
        for (i = 0; i < adapter->num_rx_queues; i++) {
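For v7 devices the ring sizes computed above are additionally rounded down to a power of two so UPT can use them. The kernel provides `rounddown_pow_of_two()` for this; a self-contained 32-bit equivalent of that rounding (a sketch, not the kernel's implementation) looks like:

```c
#include <stdint.h>

/* Round v (> 0) down to the nearest power of two, mirroring the
 * effect of the kernel's rounddown_pow_of_two() for 32-bit values. */
static uint32_t rounddown_pow2_u32(uint32_t v)
{
        /* smear the highest set bit into every lower position... */
        v |= v >> 1;
        v |= v >> 2;
        v |= v >> 4;
        v |= v >> 8;
        v |= v >> 16;
        /* ...then keep only the highest bit */
        return v - (v >> 1);
}
```

So a requested ring size of 1000 descriptors becomes 512, while an already power-of-two size such as 1024 is left unchanged.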
@@ -3185,6 +3331,47 @@ vmxnet3_declare_features(struct vmxnet3_adapter *adapter)
                        NETIF_F_GSO_UDP_TUNNEL_CSUM;
        }
 
+       if (VMXNET3_VERSION_GE_7(adapter)) {
+               unsigned long flags;
+
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_GENEVE_TSO)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_TSO;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_VXLAN_TSO)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_TSO;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD;
+               }
+
+               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]);
+               spin_lock_irqsave(&adapter->cmd_lock, flags);
+               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG);
+               adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+               spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+
+               if (!(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD)) &&
+                   !(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD))) {
+                       netdev->hw_enc_features &= ~NETIF_F_GSO_UDP_TUNNEL_CSUM;
+                       netdev->features &= ~NETIF_F_GSO_UDP_TUNNEL_CSUM;
+               }
+       }
+
        netdev->vlan_features = netdev->hw_features &
                                ~(NETIF_F_HW_VLAN_CTAG_TX |
                                  NETIF_F_HW_VLAN_CTAG_RX);
@@ -3472,7 +3659,12 @@ vmxnet3_probe_device(struct pci_dev *pdev,
                goto err_alloc_pci;
 
        ver = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_VRRS);
-       if (ver & (1 << VMXNET3_REV_6)) {
+       if (ver & (1 << VMXNET3_REV_7)) {
+               VMXNET3_WRITE_BAR1_REG(adapter,
+                                      VMXNET3_REG_VRRS,
+                                      1 << VMXNET3_REV_7);
+               adapter->version = VMXNET3_REV_7 + 1;
+       } else if (ver & (1 << VMXNET3_REV_6)) {
                VMXNET3_WRITE_BAR1_REG(adapter,
                                       VMXNET3_REG_VRRS,
                                       1 << VMXNET3_REV_6);
@@ -3520,6 +3712,39 @@ vmxnet3_probe_device(struct pci_dev *pdev,
                goto err_ver;
        }
 
+       if (VMXNET3_VERSION_GE_7(adapter)) {
+               adapter->devcap_supported[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_DCR);
+               adapter->ptcap_supported[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_PTCR);
+               if (adapter->devcap_supported[0] & (1UL << VMXNET3_CAP_LARGE_BAR)) {
+                       adapter->dev_caps[0] = adapter->devcap_supported[0] &
+                                                       (1UL << VMXNET3_CAP_LARGE_BAR);
+               }
+               if (!(adapter->ptcap_supported[0] & (1UL << VMXNET3_DCR_ERROR)) &&
+                   adapter->ptcap_supported[0] & (1UL << VMXNET3_CAP_OOORX_COMP) &&
+                   adapter->devcap_supported[0] & (1UL << VMXNET3_CAP_OOORX_COMP)) {
+                       adapter->dev_caps[0] |= adapter->devcap_supported[0] &
+                                               (1UL << VMXNET3_CAP_OOORX_COMP);
+               }
+               if (adapter->dev_caps[0])
+                       VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]);
+
+               spin_lock_irqsave(&adapter->cmd_lock, flags);
+               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG);
+               adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+               spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+       }
+
+       if (VMXNET3_VERSION_GE_7(adapter) &&
+           adapter->dev_caps[0] & (1UL << VMXNET3_CAP_LARGE_BAR)) {
+               adapter->tx_prod_offset = VMXNET3_REG_LB_TXPROD;
+               adapter->rx_prod_offset = VMXNET3_REG_LB_RXPROD;
+               adapter->rx_prod2_offset = VMXNET3_REG_LB_RXPROD2;
+       } else {
+               adapter->tx_prod_offset = VMXNET3_REG_TXPROD;
+               adapter->rx_prod_offset = VMXNET3_REG_RXPROD;
+               adapter->rx_prod2_offset = VMXNET3_REG_RXPROD2;
+       }
+
        if (VMXNET3_VERSION_GE_6(adapter)) {
                spin_lock_irqsave(&adapter->cmd_lock, flags);
                VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
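The probe hunk above negotiates device capabilities: the driver proposes a `dev_caps` bitmap via the DCR register, then reads back what the device actually granted with `VMXNET3_CMD_GET_DCR0_REG`, and OOORX_COMP is requested only when both the passthrough and device capability words advertise it and no DCR error is flagged. A sketch of that gating logic, using hypothetical bit indices (the real `VMXNET3_CAP_*` values live in the driver headers and are not reproduced here):

```c
/* Hypothetical bit positions for illustration only */
#define CAP_LARGE_BAR    0
#define CAP_OOORX_COMP   1
#define DCR_ERROR        31

/* Build the capability bitmap the driver would request at probe
 * time from the device (devcap) and passthrough (ptcap) words. */
static unsigned int probe_dev_caps(unsigned int devcap, unsigned int ptcap)
{
        unsigned int caps = 0;

        if (devcap & (1u << CAP_LARGE_BAR))
                caps |= 1u << CAP_LARGE_BAR;

        /* out-of-order rx completion needs both sides to agree,
         * and the error bit must be clear */
        if (!(ptcap & (1u << DCR_ERROR)) &&
            (ptcap & (1u << CAP_OOORX_COMP)) &&
            (devcap & (1u << CAP_OOORX_COMP)))
                caps |= 1u << CAP_OOORX_COMP;

        return caps;
}
```

The device may still grant a subset of this request, which is why the driver overwrites `dev_caps[0]` with the value read back from the command register.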
index 3172d46..ce39932 100644

@@ -1,7 +1,7 @@
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
@@ -298,7 +298,7 @@ netdev_features_t vmxnet3_features_check(struct sk_buff *skb,
        return features;
 }
 
-static void vmxnet3_enable_encap_offloads(struct net_device *netdev)
+static void vmxnet3_enable_encap_offloads(struct net_device *netdev, netdev_features_t features)
 {
        struct vmxnet3_adapter *adapter = netdev_priv(netdev);
 
@@ -306,8 +306,50 @@ static void vmxnet3_enable_encap_offloads(struct net_device *netdev)
                netdev->hw_enc_features |= NETIF_F_SG | NETIF_F_RXCSUM |
                        NETIF_F_HW_CSUM | NETIF_F_HW_VLAN_CTAG_TX |
                        NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_TSO | NETIF_F_TSO6 |
-                       NETIF_F_LRO | NETIF_F_GSO_UDP_TUNNEL |
-                       NETIF_F_GSO_UDP_TUNNEL_CSUM;
+                       NETIF_F_LRO;
+               if (features & NETIF_F_GSO_UDP_TUNNEL)
+                       netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL;
+               if (features & NETIF_F_GSO_UDP_TUNNEL_CSUM)
+                       netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
+       }
+       if (VMXNET3_VERSION_GE_7(adapter)) {
+               unsigned long flags;
+
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_GENEVE_TSO)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_TSO;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_VXLAN_TSO)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_TSO;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD;
+               }
+               if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                              VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD)) {
+                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD;
+               }
+
+               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]);
+               spin_lock_irqsave(&adapter->cmd_lock, flags);
+               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG);
+               adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+               spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+
+               if (!(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD)) &&
+                   !(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD))) {
+                       netdev->hw_enc_features &= ~NETIF_F_GSO_UDP_TUNNEL_CSUM;
+               }
        }
 }
 
@@ -322,6 +364,22 @@ static void vmxnet3_disable_encap_offloads(struct net_device *netdev)
                        NETIF_F_LRO | NETIF_F_GSO_UDP_TUNNEL |
                        NETIF_F_GSO_UDP_TUNNEL_CSUM);
        }
+       if (VMXNET3_VERSION_GE_7(adapter)) {
+               unsigned long flags;
+
+               adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD |
+                                         1UL << VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD  |
+                                         1UL << VMXNET3_CAP_GENEVE_TSO |
+                                         1UL << VMXNET3_CAP_VXLAN_TSO  |
+                                         1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD |
+                                         1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD);
+
+               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]);
+               spin_lock_irqsave(&adapter->cmd_lock, flags);
+               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG);
+               adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+               spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+       }
 }
 
 int vmxnet3_set_features(struct net_device *netdev, netdev_features_t features)
@@ -357,8 +415,8 @@ int vmxnet3_set_features(struct net_device *netdev, netdev_features_t features)
                        adapter->shared->devRead.misc.uptFeatures &=
                        ~UPT1_F_RXVLAN;
 
-               if ((features & tun_offload_mask) != 0 && !udp_tun_enabled) {
-                       vmxnet3_enable_encap_offloads(netdev);
+               if ((features & tun_offload_mask) != 0) {
+                       vmxnet3_enable_encap_offloads(netdev, features);
                        adapter->shared->devRead.misc.uptFeatures |=
                        UPT1_F_RXINNEROFLD;
                } else if ((features & tun_offload_mask) == 0 &&
@@ -462,7 +520,7 @@ vmxnet3_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *p)
        for (i = 0; i < adapter->num_tx_queues; i++) {
                struct vmxnet3_tx_queue *tq = &adapter->tx_queue[i];
 
-               buf[j++] = VMXNET3_READ_BAR0_REG(adapter, VMXNET3_REG_TXPROD +
+               buf[j++] = VMXNET3_READ_BAR0_REG(adapter, adapter->tx_prod_offset +
                                                 i * VMXNET3_REG_ALIGN);
 
                buf[j++] = VMXNET3_GET_ADDR_LO(tq->tx_ring.basePA);
@@ -490,9 +548,9 @@ vmxnet3_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *p)
        for (i = 0; i < adapter->num_rx_queues; i++) {
                struct vmxnet3_rx_queue *rq = &adapter->rx_queue[i];
 
-               buf[j++] =  VMXNET3_READ_BAR0_REG(adapter, VMXNET3_REG_RXPROD +
+               buf[j++] =  VMXNET3_READ_BAR0_REG(adapter, adapter->rx_prod_offset +
                                                  i * VMXNET3_REG_ALIGN);
-               buf[j++] =  VMXNET3_READ_BAR0_REG(adapter, VMXNET3_REG_RXPROD2 +
+               buf[j++] =  VMXNET3_READ_BAR0_REG(adapter, adapter->rx_prod2_offset +
                                                  i * VMXNET3_REG_ALIGN);
 
                buf[j++] = VMXNET3_GET_ADDR_LO(rq->rx_ring[0].basePA);
@@ -660,6 +718,13 @@ vmxnet3_set_ringparam(struct net_device *netdev,
        new_rx_ring2_size = min_t(u32, new_rx_ring2_size,
                                  VMXNET3_RX_RING2_MAX_SIZE);
 
+       /* For v7 and later, keep ring size power of 2 for UPT */
+       if (VMXNET3_VERSION_GE_7(adapter)) {
+               new_tx_ring_size = rounddown_pow_of_two(new_tx_ring_size);
+               new_rx_ring_size = rounddown_pow_of_two(new_rx_ring_size);
+               new_rx_ring2_size = rounddown_pow_of_two(new_rx_ring2_size);
+       }
+
        /* rx data ring buffer size has to be a multiple of
         * VMXNET3_RXDATA_DESC_SIZE_ALIGN
         */
@@ -913,6 +978,39 @@ vmxnet3_set_rss_hash_opt(struct net_device *netdev,
                        union Vmxnet3_CmdInfo *cmdInfo = &shared->cu.cmdInfo;
                        unsigned long flags;
 
+                       if (VMXNET3_VERSION_GE_7(adapter)) {
+                               if ((rss_fields & VMXNET3_RSS_FIELDS_UDPIP4 ||
+                                    rss_fields & VMXNET3_RSS_FIELDS_UDPIP6) &&
+                                   vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                                              VMXNET3_CAP_UDP_RSS)) {
+                                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_UDP_RSS;
+                               } else {
+                                       adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_UDP_RSS);
+                               }
+                               if ((rss_fields & VMXNET3_RSS_FIELDS_ESPIP4) &&
+                                   vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                                              VMXNET3_CAP_ESP_RSS_IPV4)) {
+                                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_ESP_RSS_IPV4;
+                               } else {
+                                       adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_ESP_RSS_IPV4);
+                               }
+                               if ((rss_fields & VMXNET3_RSS_FIELDS_ESPIP6) &&
+                                   vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
+                                                              VMXNET3_CAP_ESP_RSS_IPV6)) {
+                                       adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_ESP_RSS_IPV6;
+                               } else {
+                                       adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_ESP_RSS_IPV6);
+                               }
+
+                               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR,
+                                                      adapter->dev_caps[0]);
+                               spin_lock_irqsave(&adapter->cmd_lock, flags);
+                               VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+                                                      VMXNET3_CMD_GET_DCR0_REG);
+                               adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter,
+                                                                            VMXNET3_REG_CMD);
+                               spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+                       }
                        spin_lock_irqsave(&adapter->cmd_lock, flags);
                        cmdInfo->setRssFields = rss_fields;
                        VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
index 7027ff4..3367db2 100644
@@ -1,7 +1,7 @@
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
 /*
  * Version numbers
  */
-#define VMXNET3_DRIVER_VERSION_STRING   "1.6.0.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING   "1.7.0.0-k"
 
 /* Each byte of this 32-bit integer encodes a version number in
  * VMXNET3_DRIVER_VERSION_STRING.
  */
-#define VMXNET3_DRIVER_VERSION_NUM      0x01060000
+#define VMXNET3_DRIVER_VERSION_NUM      0x01070000
 
 #if defined(CONFIG_PCI_MSI)
        /* RSS only makes sense if MSI-X is supported. */
        #define VMXNET3_RSS
 #endif
 
+#define VMXNET3_REV_7          6       /* Vmxnet3 Rev. 7 */
 #define VMXNET3_REV_6          5       /* Vmxnet3 Rev. 6 */
 #define VMXNET3_REV_5          4       /* Vmxnet3 Rev. 5 */
 #define VMXNET3_REV_4          3       /* Vmxnet3 Rev. 4 */
@@ -135,6 +136,7 @@ struct vmxnet3_cmd_ring {
        u32             next2fill;
        u32             next2comp;
        u8              gen;
+       u8              isOutOfOrder;
        dma_addr_t      basePA;
 };
 
@@ -259,9 +261,13 @@ enum vmxnet3_rx_buf_type {
        VMXNET3_RX_BUF_PAGE = 2
 };
 
+#define VMXNET3_RXD_COMP_PENDING        0
+#define VMXNET3_RXD_COMP_DONE           1
+
 struct vmxnet3_rx_buf_info {
        enum vmxnet3_rx_buf_type buf_type;
        u16     len;
+       u8      comp_state;
        union {
                struct sk_buff *skb;
                struct page    *page;
@@ -402,6 +408,13 @@ struct vmxnet3_adapter {
        dma_addr_t pm_conf_pa;
        dma_addr_t rss_conf_pa;
        bool   queuesExtEnabled;
+       struct Vmxnet3_RingBufferSize     ringBufSize;
+       u32    devcap_supported[8];
+       u32    ptcap_supported[8];
+       u32    dev_caps[8];
+       u16    tx_prod_offset;
+       u16    rx_prod_offset;
+       u16    rx_prod2_offset;
 };
 
 #define VMXNET3_WRITE_BAR0_REG(adapter, reg, val)  \
@@ -431,11 +444,13 @@ struct vmxnet3_adapter {
        (adapter->version >= VMXNET3_REV_5 + 1)
 #define VMXNET3_VERSION_GE_6(adapter) \
        (adapter->version >= VMXNET3_REV_6 + 1)
+#define VMXNET3_VERSION_GE_7(adapter) \
+       (adapter->version >= VMXNET3_REV_7 + 1)
 
 /* must be a multiple of VMXNET3_RING_SIZE_ALIGN */
 #define VMXNET3_DEF_TX_RING_SIZE    512
 #define VMXNET3_DEF_RX_RING_SIZE    1024
-#define VMXNET3_DEF_RX_RING2_SIZE   256
+#define VMXNET3_DEF_RX_RING2_SIZE   512
 
 #define VMXNET3_DEF_RXDATA_DESC_SIZE 128
 
@@ -494,6 +509,7 @@ void vmxnet3_set_ethtool_ops(struct net_device *netdev);
 
 void vmxnet3_get_stats64(struct net_device *dev,
                         struct rtnl_link_stats64 *stats);
+bool vmxnet3_check_ptcapability(u32 cap_supported, u32 cap);
 
 extern char vmxnet3_driver_name[];
 #endif
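The header hunks above show the revision encoding: `VMXNET3_REV_N` is the zero-based VRRS bit index `N - 1`, while `adapter->version` stores the one-based revision, so `VMXNET3_VERSION_GE_7()` compares against `VMXNET3_REV_7 + 1`. A sketch of how the probe path maps the VRRS bitmap to a version number under that scheme:

```c
/* Zero-based revision bit indices, as in the header above */
#define REV_7   6
#define REV_6   5

/* Select the highest advertised revision bit and store it
 * one-based, as vmxnet3_probe_device() does. */
static int select_version(unsigned int vrrs)
{
        if (vrrs & (1u << REV_7))
                return REV_7 + 1;       /* version 7 */
        if (vrrs & (1u << REV_6))
                return REV_6 + 1;       /* version 6 */
        return 0;                       /* unsupported (sketch only) */
}

#define VERSION_GE_7(version)   ((version) >= REV_7 + 1)
```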
index cfc30ce..40445a1 100644
@@ -814,8 +814,8 @@ static void vrf_rt6_release(struct net_device *dev, struct net_vrf *vrf)
         */
        if (rt6) {
                dst = &rt6->dst;
-               dev_replace_track(dst->dev, net->loopback_dev,
-                                 &dst->dev_tracker, GFP_KERNEL);
+               netdev_ref_replace(dst->dev, net->loopback_dev,
+                                  &dst->dev_tracker, GFP_KERNEL);
                dst->dev = net->loopback_dev;
                dst_release(dst);
        }
@@ -1061,8 +1061,8 @@ static void vrf_rtable_release(struct net_device *dev, struct net_vrf *vrf)
         */
        if (rth) {
                dst = &rth->dst;
-               dev_replace_track(dst->dev, net->loopback_dev,
-                                 &dst->dev_tracker, GFP_KERNEL);
+               netdev_ref_replace(dst->dev, net->loopback_dev,
+                                  &dst->dev_tracker, GFP_KERNEL);
                dst->dev = net->loopback_dev;
                dst_release(dst);
        }
index 265d4a0..8b0710b 100644
@@ -2385,15 +2385,15 @@ static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
                vxlan_snoop(dev, &loopback, eth_hdr(skb)->h_source, 0, vni);
 
        u64_stats_update_begin(&tx_stats->syncp);
-       tx_stats->tx_packets++;
-       tx_stats->tx_bytes += len;
+       u64_stats_inc(&tx_stats->tx_packets);
+       u64_stats_add(&tx_stats->tx_bytes, len);
        u64_stats_update_end(&tx_stats->syncp);
        vxlan_vnifilter_count(src_vxlan, vni, NULL, VXLAN_VNI_STATS_TX, len);
 
        if (__netif_rx(skb) == NET_RX_SUCCESS) {
                u64_stats_update_begin(&rx_stats->syncp);
-               rx_stats->rx_packets++;
-               rx_stats->rx_bytes += len;
+               u64_stats_inc(&rx_stats->rx_packets);
+               u64_stats_add(&rx_stats->rx_bytes, len);
                u64_stats_update_end(&rx_stats->syncp);
                vxlan_vnifilter_count(dst_vxlan, vni, NULL, VXLAN_VNI_STATS_RX,
                                      len);
index 5f43568..63908db 100644
@@ -43,7 +43,7 @@
  *      This version number is incremented with each official release of the
  *      package and is a simplified number for normal user reference.
  *      Individual files are tracked by the version control system and may
- *      have individual versions (or IDs) that move much faster than the
+ *      have individual versions (or IDs) that move much faster than
  *      the release version as individual updates are tracked.
  */
 #define FST_USER_VERSION        "1.04"
index 7b8df40..7135d51 100644
 /* Must be called with bh disabled. */
 static void update_rx_stats(struct wg_peer *peer, size_t len)
 {
-       struct pcpu_sw_netstats *tstats =
-               get_cpu_ptr(peer->device->dev->tstats);
-
-       u64_stats_update_begin(&tstats->syncp);
-       ++tstats->rx_packets;
-       tstats->rx_bytes += len;
+       dev_sw_netstats_rx_add(peer->device->dev, len);
        peer->rx_bytes += len;
-       u64_stats_update_end(&tstats->syncp);
-       put_cpu_ptr(tstats);
 }
 
 #define SKB_TYPE_LE32(skb) (((struct message_header *)(skb)->data)->type)
index 2f746eb..bd408d2 100644
@@ -290,8 +290,7 @@ static inline int hwsim_net_set_netgroup(struct net *net)
 {
        struct hwsim_net *hwsim_net = net_generic(net, hwsim_net_id);
 
-       hwsim_net->netgroup = ida_simple_get(&hwsim_netgroup_ida,
-                                            0, 0, GFP_KERNEL);
+       hwsim_net->netgroup = ida_alloc(&hwsim_netgroup_ida, GFP_KERNEL);
        return hwsim_net->netgroup >= 0 ? 0 : -ENOMEM;
 }
 
@@ -4733,7 +4732,7 @@ static void __net_exit hwsim_exit_net(struct net *net)
                                         NULL);
        }
 
-       ida_simple_remove(&hwsim_netgroup_ida, hwsim_net_get_netgroup(net));
+       ida_free(&hwsim_netgroup_ida, hwsim_net_get_netgroup(net));
 }
 
 static struct pernet_operations hwsim_net_ops = {
index 8d8378b..1ac4684 100644
 static const struct ieee80211_txrx_stypes
        wilc_wfi_cfg80211_mgmt_types[NUM_NL80211_IFTYPES] = {
        [NL80211_IFTYPE_STATION] = {
-               .tx = 0xffff,
+               .tx = BIT(IEEE80211_STYPE_ACTION >> 4) |
+                       BIT(IEEE80211_STYPE_AUTH >> 4),
                .rx = BIT(IEEE80211_STYPE_ACTION >> 4) |
-                       BIT(IEEE80211_STYPE_PROBE_REQ >> 4)
+                       BIT(IEEE80211_STYPE_PROBE_REQ >> 4) |
+                       BIT(IEEE80211_STYPE_AUTH >> 4)
        },
        [NL80211_IFTYPE_AP] = {
                .tx = 0xffff,
@@ -305,6 +307,7 @@ static int connect(struct wiphy *wiphy, struct net_device *dev,
        int ret;
        u32 i;
        u8 security = WILC_FW_SEC_NO;
+       enum mfptype mfp_type = WILC_FW_MFP_NONE;
        enum authtype auth_type = WILC_FW_AUTH_ANY;
        u32 cipher_group;
        struct cfg80211_bss *bss;
@@ -313,32 +316,9 @@ static int connect(struct wiphy *wiphy, struct net_device *dev,
 
        vif->connecting = true;
 
-       memset(priv->wep_key, 0, sizeof(priv->wep_key));
-       memset(priv->wep_key_len, 0, sizeof(priv->wep_key_len));
-
        cipher_group = sme->crypto.cipher_group;
        if (cipher_group != 0) {
-               if (cipher_group == WLAN_CIPHER_SUITE_WEP40) {
-                       security = WILC_FW_SEC_WEP;
-
-                       priv->wep_key_len[sme->key_idx] = sme->key_len;
-                       memcpy(priv->wep_key[sme->key_idx], sme->key,
-                              sme->key_len);
-
-                       wilc_set_wep_default_keyid(vif, sme->key_idx);
-                       wilc_add_wep_key_bss_sta(vif, sme->key, sme->key_len,
-                                                sme->key_idx);
-               } else if (cipher_group == WLAN_CIPHER_SUITE_WEP104) {
-                       security = WILC_FW_SEC_WEP_EXTENDED;
-
-                       priv->wep_key_len[sme->key_idx] = sme->key_len;
-                       memcpy(priv->wep_key[sme->key_idx], sme->key,
-                              sme->key_len);
-
-                       wilc_set_wep_default_keyid(vif, sme->key_idx);
-                       wilc_add_wep_key_bss_sta(vif, sme->key, sme->key_len,
-                                                sme->key_idx);
-               } else if (sme->crypto.wpa_versions & NL80211_WPA_VERSION_2) {
+               if (sme->crypto.wpa_versions & NL80211_WPA_VERSION_2) {
                        if (cipher_group == WLAN_CIPHER_SUITE_TKIP)
                                security = WILC_FW_SEC_WPA2_TKIP;
                        else
@@ -373,8 +353,14 @@ static int connect(struct wiphy *wiphy, struct net_device *dev,
                auth_type = WILC_FW_AUTH_OPEN_SYSTEM;
                break;
 
-       case NL80211_AUTHTYPE_SHARED_KEY:
-               auth_type = WILC_FW_AUTH_SHARED_KEY;
+       case NL80211_AUTHTYPE_SAE:
+               auth_type = WILC_FW_AUTH_SAE;
+               if (sme->ssid_len) {
+                       memcpy(vif->auth.ssid.ssid, sme->ssid, sme->ssid_len);
+                       vif->auth.ssid.ssid_len = sme->ssid_len;
+               }
+               vif->auth.key_mgmt_suite = cpu_to_be32(sme->crypto.akm_suites[0]);
+               ether_addr_copy(vif->auth.bssid, sme->bssid);
                break;
 
        default:
@@ -384,6 +370,10 @@ static int connect(struct wiphy *wiphy, struct net_device *dev,
        if (sme->crypto.n_akm_suites) {
                if (sme->crypto.akm_suites[0] == WLAN_AKM_SUITE_8021X)
                        auth_type = WILC_FW_AUTH_IEEE8021;
+               else if (sme->crypto.akm_suites[0] == WLAN_AKM_SUITE_PSK_SHA256)
+                       auth_type = WILC_FW_AUTH_OPEN_SYSTEM_SHA256;
+               else if (sme->crypto.akm_suites[0] == WLAN_AKM_SUITE_8021X_SHA256)
+                       auth_type = WILC_FW_AUTH_IEE8021X_SHA256;
        }
 
        if (wfi_drv->usr_scan_req.scan_result) {
@@ -427,6 +417,13 @@ static int connect(struct wiphy *wiphy, struct net_device *dev,
        wfi_drv->conn_info.arg = priv;
        wfi_drv->conn_info.param = join_params;
 
+       if (sme->mfp == NL80211_MFP_OPTIONAL)
+               mfp_type = WILC_FW_MFP_OPTIONAL;
+       else if (sme->mfp == NL80211_MFP_REQUIRED)
+               mfp_type = WILC_FW_MFP_REQUIRED;
+
+       wfi_drv->conn_info.mfp_type = mfp_type;
+
        ret = wilc_set_join_req(vif, bss->bssid, sme->ie, sme->ie_len);
        if (ret) {
                netdev_err(dev, "wilc_set_join_req(): Error\n");
@@ -487,14 +484,6 @@ static int disconnect(struct wiphy *wiphy, struct net_device *dev,
        return ret;
 }
 
-static inline void wilc_wfi_cfg_copy_wep_info(struct wilc_priv *priv,
-                                             u8 key_index,
-                                             struct key_params *params)
-{
-       priv->wep_key_len[key_index] = params->key_len;
-       memcpy(priv->wep_key[key_index], params->key, params->key_len);
-}
-
 static int wilc_wfi_cfg_allocate_wpa_entry(struct wilc_priv *priv, u8 idx)
 {
        if (!priv->wilc_gtk[idx]) {
@@ -514,6 +503,18 @@ static int wilc_wfi_cfg_allocate_wpa_entry(struct wilc_priv *priv, u8 idx)
        return 0;
 }
 
+static int wilc_wfi_cfg_allocate_wpa_igtk_entry(struct wilc_priv *priv, u8 idx)
+{
+       idx -= 4;
+       if (!priv->wilc_igtk[idx]) {
+               priv->wilc_igtk[idx] = kzalloc(sizeof(*priv->wilc_igtk[idx]),
+                                              GFP_KERNEL);
+               if (!priv->wilc_igtk[idx])
+                       return -ENOMEM;
+       }
+       return 0;
+}
+
 static int wilc_wfi_cfg_copy_wpa_info(struct wilc_wfi_key *key_info,
                                      struct key_params *params)
 {
@@ -550,35 +551,9 @@ static int add_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index,
        u8 op_mode;
        struct wilc_vif *vif = netdev_priv(netdev);
        struct wilc_priv *priv = &vif->priv;
+       struct wilc_wfi_key *key;
 
        switch (params->cipher) {
-       case WLAN_CIPHER_SUITE_WEP40:
-       case WLAN_CIPHER_SUITE_WEP104:
-               if (priv->wdev.iftype == NL80211_IFTYPE_AP) {
-                       wilc_wfi_cfg_copy_wep_info(priv, key_index, params);
-
-                       if (params->cipher == WLAN_CIPHER_SUITE_WEP40)
-                               mode = WILC_FW_SEC_WEP;
-                       else
-                               mode = WILC_FW_SEC_WEP_EXTENDED;
-
-                       ret = wilc_add_wep_key_bss_ap(vif, params->key,
-                                                     params->key_len,
-                                                     key_index, mode,
-                                                     WILC_FW_AUTH_OPEN_SYSTEM);
-                       break;
-               }
-               if (memcmp(params->key, priv->wep_key[key_index],
-                          params->key_len)) {
-                       wilc_wfi_cfg_copy_wep_info(priv, key_index, params);
-
-                       ret = wilc_add_wep_key_bss_sta(vif, params->key,
-                                                      params->key_len,
-                                                      key_index);
-               }
-
-               break;
-
        case WLAN_CIPHER_SUITE_TKIP:
        case WLAN_CIPHER_SUITE_CCMP:
                if (priv->wdev.iftype == NL80211_IFTYPE_AP ||
@@ -640,6 +615,26 @@ static int add_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index,
                                           key_index);
 
                break;
+       case WLAN_CIPHER_SUITE_AES_CMAC:
+               ret = wilc_wfi_cfg_allocate_wpa_igtk_entry(priv, key_index);
+               if (ret)
+                       return -ENOMEM;
+
+               key = priv->wilc_igtk[key_index - 4];
+               ret = wilc_wfi_cfg_copy_wpa_info(key, params);
+               if (ret)
+                       return -ENOMEM;
+
+               if (priv->wdev.iftype == NL80211_IFTYPE_AP ||
+                   priv->wdev.iftype == NL80211_IFTYPE_P2P_GO)
+                       op_mode = WILC_AP_MODE;
+               else
+                       op_mode = WILC_STATION_MODE;
+
+               ret = wilc_add_igtk(vif, params->key, keylen, params->seq,
+                                   params->seq_len, mac_addr, op_mode,
+                                   key_index);
+               break;
 
        default:
                netdev_err(netdev, "%s: Unsupported cipher\n", __func__);
@@ -657,30 +652,34 @@ static int del_key(struct wiphy *wiphy, struct net_device *netdev,
        struct wilc_vif *vif = netdev_priv(netdev);
        struct wilc_priv *priv = &vif->priv;
 
-       if (priv->wilc_gtk[key_index]) {
-               kfree(priv->wilc_gtk[key_index]->key);
-               priv->wilc_gtk[key_index]->key = NULL;
-               kfree(priv->wilc_gtk[key_index]->seq);
-               priv->wilc_gtk[key_index]->seq = NULL;
-
-               kfree(priv->wilc_gtk[key_index]);
-               priv->wilc_gtk[key_index] = NULL;
-       }
-
-       if (priv->wilc_ptk[key_index]) {
-               kfree(priv->wilc_ptk[key_index]->key);
-               priv->wilc_ptk[key_index]->key = NULL;
-               kfree(priv->wilc_ptk[key_index]->seq);
-               priv->wilc_ptk[key_index]->seq = NULL;
-               kfree(priv->wilc_ptk[key_index]);
-               priv->wilc_ptk[key_index] = NULL;
-       }
-
-       if (key_index <= 3 && priv->wep_key_len[key_index]) {
-               memset(priv->wep_key[key_index], 0,
-                      priv->wep_key_len[key_index]);
-               priv->wep_key_len[key_index] = 0;
-               wilc_remove_wep_key(vif, key_index);
+       if (!pairwise && (key_index == 4 || key_index == 5)) {
+               key_index -= 4;
+               if (priv->wilc_igtk[key_index]) {
+                       kfree(priv->wilc_igtk[key_index]->key);
+                       priv->wilc_igtk[key_index]->key = NULL;
+                       kfree(priv->wilc_igtk[key_index]->seq);
+                       priv->wilc_igtk[key_index]->seq = NULL;
+                       kfree(priv->wilc_igtk[key_index]);
+                       priv->wilc_igtk[key_index] = NULL;
+               }
+       } else {
+               if (priv->wilc_gtk[key_index]) {
+                       kfree(priv->wilc_gtk[key_index]->key);
+                       priv->wilc_gtk[key_index]->key = NULL;
+                       kfree(priv->wilc_gtk[key_index]->seq);
+                       priv->wilc_gtk[key_index]->seq = NULL;
+
+                       kfree(priv->wilc_gtk[key_index]);
+                       priv->wilc_gtk[key_index] = NULL;
+               }
+               if (priv->wilc_ptk[key_index]) {
+                       kfree(priv->wilc_ptk[key_index]->key);
+                       priv->wilc_ptk[key_index]->key = NULL;
+                       kfree(priv->wilc_ptk[key_index]->seq);
+                       priv->wilc_ptk[key_index]->seq = NULL;
+                       kfree(priv->wilc_ptk[key_index]);
+                       priv->wilc_ptk[key_index] = NULL;
+               }
        }
 
        return 0;
@@ -695,11 +694,20 @@ static int get_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index,
        struct  key_params key_params;
 
        if (!pairwise) {
-               key_params.key = priv->wilc_gtk[key_index]->key;
-               key_params.cipher = priv->wilc_gtk[key_index]->cipher;
-               key_params.key_len = priv->wilc_gtk[key_index]->key_len;
-               key_params.seq = priv->wilc_gtk[key_index]->seq;
-               key_params.seq_len = priv->wilc_gtk[key_index]->seq_len;
+               if (key_index == 4 || key_index == 5) {
+                       key_index -= 4;
+                       key_params.key = priv->wilc_igtk[key_index]->key;
+                       key_params.cipher = priv->wilc_igtk[key_index]->cipher;
+                       key_params.key_len = priv->wilc_igtk[key_index]->key_len;
+                       key_params.seq = priv->wilc_igtk[key_index]->seq;
+                       key_params.seq_len = priv->wilc_igtk[key_index]->seq_len;
+               } else {
+                       key_params.key = priv->wilc_gtk[key_index]->key;
+                       key_params.cipher = priv->wilc_gtk[key_index]->cipher;
+                       key_params.key_len = priv->wilc_gtk[key_index]->key_len;
+                       key_params.seq = priv->wilc_gtk[key_index]->seq;
+                       key_params.seq_len = priv->wilc_gtk[key_index]->seq_len;
+               }
        } else {
                key_params.key = priv->wilc_ptk[key_index]->key;
                key_params.cipher = priv->wilc_ptk[key_index]->cipher;
@@ -713,14 +721,19 @@ static int get_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index,
        return 0;
 }
 
+/* wiphy_new_nm() will WARN_ON() if this callback is not present */
 static int set_default_key(struct wiphy *wiphy, struct net_device *netdev,
                           u8 key_index, bool unicast, bool multicast)
 {
-       struct wilc_vif *vif = netdev_priv(netdev);
+       return 0;
+}
 
-       wilc_set_wep_default_keyid(vif, key_index);
+static int set_default_mgmt_key(struct wiphy *wiphy, struct net_device *netdev,
+                               u8 key_index)
+{
+       struct wilc_vif *vif = netdev_priv(netdev);
 
-       return 0;
+       return wilc_set_default_mgmt_key_index(vif, key_index);
 }
 
 static int get_station(struct wiphy *wiphy, struct net_device *dev,
@@ -977,6 +990,18 @@ static inline void wilc_wfi_cfg_parse_ch_attr(u8 *buf, u32 len, u8 sta_ch)
        }
 }
 
+bool wilc_wfi_mgmt_frame_rx(struct wilc_vif *vif, u8 *buff, u32 size)
+{
+       struct wilc *wl = vif->wilc;
+       struct wilc_priv *priv = &vif->priv;
+       int freq, ret;
+
+       freq = ieee80211_channel_to_frequency(wl->op_ch, NL80211_BAND_2GHZ);
+       ret = cfg80211_rx_mgmt(&priv->wdev, freq, 0, buff, size, 0);
+
+       return ret;
+}
+
 void wilc_wfi_p2p_rx(struct wilc_vif *vif, u8 *buff, u32 size)
 {
        struct wilc *wl = vif->wilc;
@@ -1162,8 +1187,14 @@ static int mgmt_tx(struct wiphy *wiphy,
                goto out_txq_add_pkt;
        }
 
-       if (!ieee80211_is_public_action((struct ieee80211_hdr *)buf, len))
+       if (!ieee80211_is_public_action((struct ieee80211_hdr *)buf, len)) {
+               if (chan)
+                       wilc_set_mac_chnl_num(vif, chan->hw_value);
+               else
+                       wilc_set_mac_chnl_num(vif, vif->wilc->op_ch);
+
                goto out_set_timeout;
+       }
 
        d = (struct wilc_p2p_pub_act_frame *)(&mgmt->u.action);
        if (d->oui_type != WLAN_OUI_TYPE_WFA_P2P ||
@@ -1230,6 +1261,7 @@ void wilc_update_mgmt_frame_registrations(struct wiphy *wiphy,
        struct wilc_vif *vif = netdev_priv(wdev->netdev);
        u32 presp_bit = BIT(IEEE80211_STYPE_PROBE_REQ >> 4);
        u32 action_bit = BIT(IEEE80211_STYPE_ACTION >> 4);
+       u32 pauth_bit = BIT(IEEE80211_STYPE_AUTH >> 4);
 
        if (wl->initialized) {
                bool prev = vif->mgmt_reg_stypes & presp_bit;
@@ -1243,10 +1275,26 @@ void wilc_update_mgmt_frame_registrations(struct wiphy *wiphy,
 
                if (now != prev)
                        wilc_frame_register(vif, IEEE80211_STYPE_ACTION, now);
+
+               prev = vif->mgmt_reg_stypes & pauth_bit;
+               now = upd->interface_stypes & pauth_bit;
+               if (now != prev)
+                       wilc_frame_register(vif, IEEE80211_STYPE_AUTH, now);
        }
 
        vif->mgmt_reg_stypes =
-               upd->interface_stypes & (presp_bit | action_bit);
+               upd->interface_stypes & (presp_bit | action_bit | pauth_bit);
+}
+
+static int external_auth(struct wiphy *wiphy, struct net_device *dev,
+                        struct cfg80211_external_auth_params *auth)
+{
+       struct wilc_vif *vif = netdev_priv(dev);
+
+       if (auth->status == WLAN_STATUS_SUCCESS)
+               wilc_set_external_auth_param(vif, auth);
+
+       return 0;
 }
 
 static int set_cqm_rssi_config(struct wiphy *wiphy, struct net_device *dev,
@@ -1647,6 +1695,7 @@ static const struct cfg80211_ops wilc_cfg80211_ops = {
        .del_key = del_key,
        .get_key = get_key,
        .set_default_key = set_default_key,
+       .set_default_mgmt_key = set_default_mgmt_key,
        .add_virtual_intf = add_virtual_intf,
        .del_virtual_intf = del_virtual_intf,
        .change_virtual_intf = change_virtual_intf,
@@ -1662,6 +1711,7 @@ static const struct cfg80211_ops wilc_cfg80211_ops = {
        .change_bss = change_bss,
        .set_wiphy_params = set_wiphy_params,
 
+       .external_auth = external_auth,
        .set_pmksa = set_pmksa,
        .del_pmksa = del_pmksa,
        .flush_pmksa = flush_pmksa,
@@ -1804,7 +1854,7 @@ struct wilc *wilc_create_wiphy(struct device *dev)
                                BIT(NL80211_IFTYPE_P2P_GO) |
                                BIT(NL80211_IFTYPE_P2P_CLIENT);
        wiphy->flags |= WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL;
-
+       wiphy->features |= NL80211_FEATURE_SAE;
        set_wiphy_dev(wiphy, dev);
        wl->wiphy = wiphy;
        ret = wiphy_register(wiphy);
index 1114530..5c5cac4 100644
@@ -41,21 +41,23 @@ struct wilc_drv_handler {
        u8 mode;
 } __packed;
 
-struct wilc_wep_key {
-       u8 index;
+struct wilc_sta_wpa_ptk {
+       u8 mac_addr[ETH_ALEN];
        u8 key_len;
        u8 key[];
 } __packed;
 
-struct wilc_sta_wpa_ptk {
+struct wilc_ap_wpa_ptk {
        u8 mac_addr[ETH_ALEN];
+       u8 index;
        u8 key_len;
        u8 key[];
 } __packed;
 
-struct wilc_ap_wpa_ptk {
-       u8 mac_addr[ETH_ALEN];
+struct wilc_wpa_igtk {
        u8 index;
+       u8 pn_len;
+       u8 pn[6];
        u8 key_len;
        u8 key[];
 } __packed;
@@ -116,4 +118,13 @@ struct wilc_join_bss_param {
                struct wilc_noa_opp_enable opp_en;
        };
 } __packed;
+
+struct wilc_external_auth_param {
+       u8 action;
+       u8 bssid[ETH_ALEN];
+       u8 ssid[IEEE80211_MAX_SSID_LEN];
+       u8 ssid_len;
+       __le32 key_mgmt_suites;
+       __le16 status;
+} __packed;
 #endif
index 71b44cf..4038a25 100644
@@ -271,12 +271,19 @@ error:
 static int wilc_send_connect_wid(struct wilc_vif *vif)
 {
        int result = 0;
-       struct wid wid_list[4];
+       struct wid wid_list[5];
        u32 wid_cnt = 0;
        struct host_if_drv *hif_drv = vif->hif_drv;
        struct wilc_conn_info *conn_attr = &hif_drv->conn_info;
        struct wilc_join_bss_param *bss_param = conn_attr->param;
 
+
+	wid_list[wid_cnt].id = WID_SET_MFP;
+	wid_list[wid_cnt].type = WID_CHAR;
+	wid_list[wid_cnt].size = sizeof(char);
+	wid_list[wid_cnt].val = (s8 *)&conn_attr->mfp_type;
+	wid_cnt++;
+
        wid_list[wid_cnt].id = WID_INFO_ELEMENT_ASSOCIATE;
        wid_list[wid_cnt].type = WID_BIN_DATA;
        wid_list[wid_cnt].val = conn_attr->req_ies;
@@ -306,7 +313,10 @@ static int wilc_send_connect_wid(struct wilc_vif *vif)
                netdev_err(vif->ndev, "failed to send config packet\n");
                goto error;
        } else {
-               hif_drv->hif_state = HOST_IF_WAITING_CONN_RESP;
+		if (conn_attr->auth_type == WILC_FW_AUTH_SAE)
+			hif_drv->hif_state = HOST_IF_EXTERNAL_AUTH;
+		else
+			hif_drv->hif_state = HOST_IF_WAITING_CONN_RESP;
        }
 
        return 0;
@@ -665,7 +675,12 @@ static void handle_rcvd_gnrl_async_info(struct work_struct *work)
                goto free_msg;
        }
 
-       if (hif_drv->hif_state == HOST_IF_WAITING_CONN_RESP) {
+
+	if (hif_drv->hif_state == HOST_IF_EXTERNAL_AUTH) {
+		cfg80211_external_auth_request(vif->ndev, &vif->auth,
+					       GFP_KERNEL);
+		hif_drv->hif_state = HOST_IF_WAITING_CONN_RESP;
+	} else if (hif_drv->hif_state == HOST_IF_WAITING_CONN_RESP) {
                host_int_parse_assoc_resp_info(vif, mac_info->status);
        } else if (mac_info->status == WILC_MAC_STATUS_DISCONNECTED) {
                if (hif_drv->hif_state == HOST_IF_CONNECTED) {
@@ -710,7 +725,8 @@ int wilc_disconnect(struct wilc_vif *vif)
        }
 
        if (conn_info->conn_result) {
-               if (hif_drv->hif_state == HOST_IF_WAITING_CONN_RESP)
+               if (hif_drv->hif_state == HOST_IF_WAITING_CONN_RESP ||
+                   hif_drv->hif_state == HOST_IF_EXTERNAL_AUTH)
                        del_timer(&hif_drv->connect_timer);
 
                conn_info->conn_result(CONN_DISCONN_EVENT_DISCONN_NOTIF, 0,
@@ -986,6 +1002,31 @@ void wilc_set_wowlan_trigger(struct wilc_vif *vif, bool enabled)
                pr_err("Failed to send wowlan trigger config packet\n");
 }
 
+int wilc_set_external_auth_param(struct wilc_vif *vif,
+                                struct cfg80211_external_auth_params *auth)
+{
+       int ret;
+       struct wid wid;
+       struct wilc_external_auth_param *param;
+
+       wid.id = WID_EXTERNAL_AUTH_PARAM;
+       wid.type = WID_BIN_DATA;
+       wid.size = sizeof(*param);
+       param = kzalloc(sizeof(*param), GFP_KERNEL);
+       if (!param)
+               return -ENOMEM;
+
+       wid.val = (u8 *)param;
+       param->action = auth->action;
+       ether_addr_copy(param->bssid, auth->bssid);
+       memcpy(param->ssid, auth->ssid.ssid, auth->ssid.ssid_len);
+       param->ssid_len = auth->ssid.ssid_len;
+       ret = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1);
+
+       kfree(param);
+       return ret;
+}
+
 static void handle_scan_timer(struct work_struct *work)
 {
        struct host_if_msg *msg = container_of(work, struct host_if_msg, work);
@@ -1038,108 +1079,6 @@ static void timer_connect_cb(struct timer_list *t)
                kfree(msg);
 }
 
-int wilc_remove_wep_key(struct wilc_vif *vif, u8 index)
-{
-       struct wid wid;
-       int result;
-
-       wid.id = WID_REMOVE_WEP_KEY;
-       wid.type = WID_STR;
-       wid.size = sizeof(char);
-       wid.val = &index;
-
-       result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1);
-       if (result)
-               netdev_err(vif->ndev,
-                          "Failed to send remove wep key config packet\n");
-       return result;
-}
-
-int wilc_set_wep_default_keyid(struct wilc_vif *vif, u8 index)
-{
-       struct wid wid;
-       int result;
-
-       wid.id = WID_KEY_ID;
-       wid.type = WID_CHAR;
-       wid.size = sizeof(char);
-       wid.val = &index;
-       result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1);
-       if (result)
-               netdev_err(vif->ndev,
-                          "Failed to send wep default key config packet\n");
-
-       return result;
-}
-
-int wilc_add_wep_key_bss_sta(struct wilc_vif *vif, const u8 *key, u8 len,
-                            u8 index)
-{
-       struct wid wid;
-       int result;
-       struct wilc_wep_key *wep_key;
-
-       wid.id = WID_ADD_WEP_KEY;
-       wid.type = WID_STR;
-       wid.size = sizeof(*wep_key) + len;
-       wep_key = kzalloc(wid.size, GFP_KERNEL);
-       if (!wep_key)
-               return -ENOMEM;
-
-       wid.val = (u8 *)wep_key;
-
-       wep_key->index = index;
-       wep_key->key_len = len;
-       memcpy(wep_key->key, key, len);
-
-       result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1);
-       if (result)
-               netdev_err(vif->ndev,
-                          "Failed to add wep key config packet\n");
-
-       kfree(wep_key);
-       return result;
-}
-
-int wilc_add_wep_key_bss_ap(struct wilc_vif *vif, const u8 *key, u8 len,
-                           u8 index, u8 mode, enum authtype auth_type)
-{
-       struct wid wid_list[3];
-       int result;
-       struct wilc_wep_key *wep_key;
-
-       wid_list[0].id = WID_11I_MODE;
-       wid_list[0].type = WID_CHAR;
-       wid_list[0].size = sizeof(char);
-       wid_list[0].val = &mode;
-
-       wid_list[1].id = WID_AUTH_TYPE;
-       wid_list[1].type = WID_CHAR;
-       wid_list[1].size = sizeof(char);
-       wid_list[1].val = (s8 *)&auth_type;
-
-       wid_list[2].id = WID_WEP_KEY_VALUE;
-       wid_list[2].type = WID_STR;
-       wid_list[2].size = sizeof(*wep_key) + len;
-       wep_key = kzalloc(wid_list[2].size, GFP_KERNEL);
-       if (!wep_key)
-               return -ENOMEM;
-
-       wid_list[2].val = (u8 *)wep_key;
-
-       wep_key->index = index;
-       wep_key->key_len = len;
-       memcpy(wep_key->key, key, len);
-       result = wilc_send_config_pkt(vif, WILC_SET_CFG, wid_list,
-                                     ARRAY_SIZE(wid_list));
-       if (result)
-               netdev_err(vif->ndev,
-                          "Failed to add wep ap key config packet\n");
-
-       kfree(wep_key);
-       return result;
-}
-
 int wilc_add_ptk(struct wilc_vif *vif, const u8 *ptk, u8 ptk_key_len,
                 const u8 *mac_addr, const u8 *rx_mic, const u8 *tx_mic,
                 u8 mode, u8 cipher_mode, u8 index)
@@ -1211,6 +1150,36 @@ int wilc_add_ptk(struct wilc_vif *vif, const u8 *ptk, u8 ptk_key_len,
        return result;
 }
 
+int wilc_add_igtk(struct wilc_vif *vif, const u8 *igtk, u8 igtk_key_len,
+                 const u8 *pn, u8 pn_len, const u8 *mac_addr, u8 mode, u8 index)
+{
+       int result = 0;
+       u8 t_key_len = igtk_key_len;
+       struct wid wid;
+       struct wilc_wpa_igtk *key_buf;
+
+       key_buf = kzalloc(sizeof(*key_buf) + t_key_len, GFP_KERNEL);
+       if (!key_buf)
+               return -ENOMEM;
+
+       key_buf->index = index;
+
+       memcpy(&key_buf->pn[0], pn, pn_len);
+       key_buf->pn_len = pn_len;
+
+       memcpy(&key_buf->key[0], igtk, igtk_key_len);
+       key_buf->key_len = t_key_len;
+
+       wid.id = WID_ADD_IGTK;
+       wid.type = WID_STR;
+       wid.size = sizeof(*key_buf) + t_key_len;
+       wid.val = (s8 *)key_buf;
+       result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1);
+       kfree(key_buf);
+
+       return result;
+}
+
 int wilc_add_rx_gtk(struct wilc_vif *vif, const u8 *rx_gtk, u8 gtk_key_len,
                    u8 index, u32 key_rsc_len, const u8 *key_rsc,
                    const u8 *rx_mic, const u8 *tx_mic, u8 mode,
@@ -1749,6 +1718,10 @@ void wilc_frame_register(struct wilc_vif *vif, u16 frame_type, bool reg)
                reg_frame.reg_id = WILC_FW_PROBE_REQ_IDX;
                break;
 
+	case IEEE80211_STYPE_AUTH:
+		reg_frame.reg_id = WILC_FW_AUTH_REQ_IDX;
+		break;
+
        default:
                break;
        }
@@ -1996,3 +1969,20 @@ int wilc_get_tx_power(struct wilc_vif *vif, u8 *tx_power)
 
        return wilc_send_config_pkt(vif, WILC_GET_CFG, &wid, 1);
 }
+
+int wilc_set_default_mgmt_key_index(struct wilc_vif *vif, u8 index)
+{
+	struct wid wid;
+	int result;
+
+	wid.id = WID_DEFAULT_MGMT_KEY_ID;
+	wid.type = WID_CHAR;
+	wid.size = sizeof(char);
+	wid.val = &index;
+	result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1);
+	if (result)
+		netdev_err(vif->ndev,
+			   "Failed to send default mgmt key index\n");
+
+	return result;
+}
index 77616fc..d8dd94d 100644
@@ -47,6 +47,7 @@ enum host_if_state {
        HOST_IF_WAITING_CONN_RESP       = 3,
        HOST_IF_CONNECTED               = 4,
        HOST_IF_P2P_LISTEN              = 5,
+       HOST_IF_EXTERNAL_AUTH           = 6,
        HOST_IF_FORCE_32BIT             = 0xFFFFFFFF
 };
 
@@ -107,6 +108,7 @@ struct wilc_conn_info {
        u8 bssid[ETH_ALEN];
        u8 security;
        enum authtype auth_type;
+       enum mfptype mfp_type;
        u8 ch;
        u8 *req_ies;
        size_t req_ies_len;
@@ -151,15 +153,12 @@ struct host_if_drv {
 };
 
 struct wilc_vif;
-int wilc_remove_wep_key(struct wilc_vif *vif, u8 index);
-int wilc_set_wep_default_keyid(struct wilc_vif *vif, u8 index);
-int wilc_add_wep_key_bss_sta(struct wilc_vif *vif, const u8 *key, u8 len,
-                            u8 index);
-int wilc_add_wep_key_bss_ap(struct wilc_vif *vif, const u8 *key, u8 len,
-                           u8 index, u8 mode, enum authtype auth_type);
 int wilc_add_ptk(struct wilc_vif *vif, const u8 *ptk, u8 ptk_key_len,
                 const u8 *mac_addr, const u8 *rx_mic, const u8 *tx_mic,
                 u8 mode, u8 cipher_mode, u8 index);
+int wilc_add_igtk(struct wilc_vif *vif, const u8 *igtk, u8 igtk_key_len,
+                 const u8 *pn, u8 pn_len, const u8 *mac_addr, u8 mode,
+                 u8 index);
 s32 wilc_get_inactive_time(struct wilc_vif *vif, const u8 *mac,
                           u32 *out_val);
 int wilc_add_rx_gtk(struct wilc_vif *vif, const u8 *rx_gtk, u8 gtk_key_len,
@@ -208,9 +207,12 @@ int wilc_get_vif_idx(struct wilc_vif *vif);
 int wilc_set_tx_power(struct wilc_vif *vif, u8 tx_power);
 int wilc_get_tx_power(struct wilc_vif *vif, u8 *tx_power);
 void wilc_set_wowlan_trigger(struct wilc_vif *vif, bool enabled);
+int wilc_set_external_auth_param(struct wilc_vif *vif,
+                                struct cfg80211_external_auth_params *param);
 void wilc_scan_complete_received(struct wilc *wilc, u8 *buffer, u32 length);
 void wilc_network_info_received(struct wilc *wilc, u8 *buffer, u32 length);
 void wilc_gnrl_async_info_received(struct wilc *wilc, u8 *buffer, u32 length);
 void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
                                struct cfg80211_crypto_settings *crypto);
+int wilc_set_default_mgmt_key_index(struct wilc_vif *vif, u8 index);
 #endif
index 3c292e3..fcc4e61 100644
@@ -835,15 +835,24 @@ void wilc_frmw_to_host(struct wilc *wilc, u8 *buff, u32 size,
        }
 }
 
-void wilc_wfi_mgmt_rx(struct wilc *wilc, u8 *buff, u32 size)
+void wilc_wfi_mgmt_rx(struct wilc *wilc, u8 *buff, u32 size, bool is_auth)
 {
        int srcu_idx;
        struct wilc_vif *vif;
 
        srcu_idx = srcu_read_lock(&wilc->srcu);
        list_for_each_entry_rcu(vif, &wilc->vif_list, list) {
+               struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buff;
                u16 type = le16_to_cpup((__le16 *)buff);
                u32 type_bit = BIT(type >> 4);
+               u32 auth_bit = BIT(IEEE80211_STYPE_AUTH >> 4);
+
+               if ((vif->mgmt_reg_stypes & auth_bit &&
+                    ieee80211_is_auth(mgmt->frame_control)) &&
+                   vif->iftype == WILC_STATION_MODE && is_auth) {
+                       wilc_wfi_mgmt_frame_rx(vif, buff, size);
+                       break;
+               }
 
                if (vif->priv.p2p_listen_state &&
                    vif->mgmt_reg_stypes & type_bit)
index a067274..822e65d 100644
@@ -45,12 +45,6 @@ struct wilc_wfi_key {
        u32 cipher;
 };
 
-struct wilc_wfi_wep_key {
-       u8 *key;
-       u8 key_len;
-       u8 key_idx;
-};
-
 struct sta_info {
        u8 sta_associated_bss[WILC_MAX_NUM_STA][ETH_ALEN];
 };
@@ -63,8 +57,6 @@ struct wilc_wfi_p2p_listen_params {
 };
 
 static const u32 wilc_cipher_suites[] = {
-       WLAN_CIPHER_SUITE_WEP40,
-       WLAN_CIPHER_SUITE_WEP104,
        WLAN_CIPHER_SUITE_TKIP,
        WLAN_CIPHER_SUITE_CCMP,
        WLAN_CIPHER_SUITE_AES_CMAC
@@ -132,13 +124,12 @@ struct wilc_priv {
        struct net_device *dev;
        struct host_if_drv *hif_drv;
        struct wilc_pmkid_attr pmkid_list;
-       u8 wep_key[4][WLAN_KEY_LEN_WEP104];
-       u8 wep_key_len[4];
 
        /* The real interface that the monitor is on */
        struct net_device *real_ndev;
        struct wilc_wfi_key *wilc_gtk[WILC_MAX_NUM_STA];
        struct wilc_wfi_key *wilc_ptk[WILC_MAX_NUM_STA];
+       struct wilc_wfi_key *wilc_igtk[2];
        u8 wilc_groupkey;
 
        /* mutexes */
@@ -195,6 +186,7 @@ struct wilc_vif {
        struct wilc_priv priv;
        struct list_head list;
        struct cfg80211_bss *bss;
+       struct cfg80211_external_auth_params auth;
 };
 
 struct wilc_tx_queue_status {
@@ -288,7 +280,7 @@ struct wilc_wfi_mon_priv {
 void wilc_frmw_to_host(struct wilc *wilc, u8 *buff, u32 size, u32 pkt_offset);
 void wilc_mac_indicate(struct wilc *wilc);
 void wilc_netdev_cleanup(struct wilc *wilc);
-void wilc_wfi_mgmt_rx(struct wilc *wilc, u8 *buff, u32 size);
+void wilc_wfi_mgmt_rx(struct wilc *wilc, u8 *buff, u32 size, bool is_auth);
 void wilc_wlan_set_bssid(struct net_device *wilc_netdev, const u8 *bssid,
                         u8 mode);
 struct wilc_vif *wilc_netdev_ifc_init(struct wilc *wl, const char *name,
index 18420e9..2ae8dd3 100644 (file)
@@ -191,11 +191,11 @@ static void wilc_wlan_power(struct wilc *wilc, bool on)
                /* assert ENABLE: */
                gpiod_set_value(gpios->enable, 1);
                mdelay(5);
-               /* deassert RESET: */
-               gpiod_set_value(gpios->reset, 0);
-       } else {
                /* assert RESET: */
                gpiod_set_value(gpios->reset, 1);
+       } else {
+               /* deassert RESET: */
+               gpiod_set_value(gpios->reset, 0);
                /* deassert ENABLE: */
                gpiod_set_value(gpios->enable, 0);
        }
index 48441f0..f3f504d 100644 (file)
@@ -968,7 +968,8 @@ static void wilc_wlan_handle_rx_buff(struct wilc *wilc, u8 *buffer, int size)
 
                if (pkt_offset & IS_MANAGMEMENT) {
                        buff_ptr += HOST_HDR_OFFSET;
-                       wilc_wfi_mgmt_rx(wilc, buff_ptr, pkt_len);
+                       wilc_wfi_mgmt_rx(wilc, buff_ptr, pkt_len,
+                                        pkt_offset & IS_MGMT_AUTH_PKT);
                } else {
                        if (!is_cfg_packet) {
                                wilc_frmw_to_host(wilc, buff_ptr, pkt_len,
index eb79781..b45e727 100644 (file)
 #define IS_MANAGMEMENT         0x100
 #define IS_MANAGMEMENT_CALLBACK        0x080
 #define IS_MGMT_STATUS_SUCCES  0x040
+#define IS_MGMT_AUTH_PKT       0x010
 
 #define WILC_WID_TYPE          GENMASK(15, 12)
 #define WILC_VMM_ENTRY_FULL_RETRY      1
@@ -423,6 +424,7 @@ int wilc_wlan_get_num_conn_ifcs(struct wilc *wilc);
 netdev_tx_t wilc_mac_xmit(struct sk_buff *skb, struct net_device *dev);
 
 void wilc_wfi_p2p_rx(struct wilc_vif *vif, u8 *buff, u32 size);
+bool wilc_wfi_mgmt_frame_rx(struct wilc_vif *vif, u8 *buff, u32 size);
 void host_wakeup_notify(struct wilc *wilc);
 void host_sleep_notify(struct wilc *wilc);
 void chip_allow_sleep(struct wilc *wilc);
index 6eb7eb4..df2f5a6 100644 (file)
@@ -85,7 +85,16 @@ enum authtype {
        WILC_FW_AUTH_OPEN_SYSTEM = 1,
        WILC_FW_AUTH_SHARED_KEY = 2,
        WILC_FW_AUTH_ANY = 3,
-       WILC_FW_AUTH_IEEE8021 = 5
+       WILC_FW_AUTH_IEEE8021 = 5,
+       WILC_FW_AUTH_SAE = 7,
+       WILC_FW_AUTH_IEE8021X_SHA256 = 9,
+       WILC_FW_AUTH_OPEN_SYSTEM_SHA256 = 13
+};
+
+enum mfptype {
+       WILC_FW_MFP_NONE = 0x0,
+       WILC_FW_MFP_OPTIONAL = 0x1,
+       WILC_FW_MFP_REQUIRED = 0x2
 };
 
 enum site_survey {
@@ -176,7 +185,8 @@ enum {
 
 enum {
        WILC_FW_ACTION_FRM_IDX = 0,
-       WILC_FW_PROBE_REQ_IDX = 1
+       WILC_FW_PROBE_REQ_IDX = 1,
+       WILC_FW_AUTH_REQ_IDX = 2
 };
 
 enum wid_type {
@@ -657,6 +667,9 @@ enum {
        WID_LOG_TERMINAL_SWITCH         = 0x00CD,
        WID_TX_POWER                    = 0x00CE,
        WID_WOWLAN_TRIGGER              = 0X00CF,
+       WID_SET_MFP                     = 0x00D0,
+
+       WID_DEFAULT_MGMT_KEY_ID         = 0x00D2,
        /*  EMAC Short WID list */
        /*  RTS Threshold */
        /*
@@ -746,6 +759,7 @@ enum {
        WID_REMOVE_KEY                  = 0x301E,
        WID_ASSOC_REQ_INFO              = 0x301F,
        WID_ASSOC_RES_INFO              = 0x3020,
+       WID_ADD_IGTK                    = 0x3022,
        WID_MANUFACTURER                = 0x3026, /* Added for CAPI tool */
        WID_MODEL_NAME                  = 0x3027, /* Added for CAPI tool */
        WID_MODEL_NUM                   = 0x3028, /* Added for CAPI tool */
@@ -789,7 +803,7 @@ enum {
        WID_ADD_BEACON                  = 0x408a,
 
        WID_SETUP_MULTICAST_FILTER      = 0x408b,
-
+       WID_EXTERNAL_AUTH_PARAM         = 0x408d,
        /* Miscellaneous WIDs */
        WID_ALL                         = 0x7FFE,
        WID_MAX                         = 0xFFFF
index 87e98ab..1f57a00 100644 (file)
@@ -1643,38 +1643,34 @@ static void authenticate_timeout(struct timer_list *t)
 /*===========================================================================*/
 static int parse_addr(char *in_str, UCHAR *out)
 {
+       int i, k;
        int len;
-       int i, j, k;
-       int status;
 
        if (in_str == NULL)
                return 0;
-       if ((len = strlen(in_str)) < 2)
+       len = strnlen(in_str, ADDRLEN * 2 + 1) - 1;
+       if (len < 1)
                return 0;
        memset(out, 0, ADDRLEN);
 
-       status = 1;
-       j = len - 1;
-       if (j > 12)
-               j = 12;
        i = 5;
 
-       while (j > 0) {
-               if ((k = hex_to_bin(in_str[j--])) != -1)
+       while (len > 0) {
+               if ((k = hex_to_bin(in_str[len--])) != -1)
                        out[i] = k;
                else
                        return 0;
 
-               if (j == 0)
+               if (len == 0)
                        break;
-               if ((k = hex_to_bin(in_str[j--])) != -1)
+               if ((k = hex_to_bin(in_str[len--])) != -1)
                        out[i] += k << 4;
                else
                        return 0;
                if (!i--)
                        break;
        }
-       return status;
+       return 1;
 }
 
 /*===========================================================================*/
index 901cdfe..0b1bc04 100644 (file)
@@ -329,8 +329,8 @@ static ssize_t rtl_debugfs_set_write_h2c(struct file *filp,
 
        tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
 
-       if (!buffer || copy_from_user(tmp, buffer, tmp_len))
-               return count;
+       if (copy_from_user(tmp, buffer, tmp_len))
+               return -EFAULT;
 
        tmp[tmp_len] = '\0';
 
@@ -340,8 +340,8 @@ static ssize_t rtl_debugfs_set_write_h2c(struct file *filp,
                         &h2c_data[4], &h2c_data[5],
                         &h2c_data[6], &h2c_data[7]);
 
-       if (h2c_len <= 0)
-               return count;
+       if (h2c_len == 0)
+               return -EINVAL;
 
        for (i = 0; i < h2c_len; i++)
                h2c_data_packed[i] = (u8)h2c_data[i];
index 1a52ff5..7cde6bc 100644 (file)
@@ -269,11 +269,7 @@ static int rtw_debugfs_get_rsvd_page(struct seq_file *m, void *v)
        for (i = 0 ; i < buf_size ; i += 8) {
                if (i % page_size == 0)
                        seq_printf(m, "PAGE %d\n", (i + offset) / page_size);
-               seq_printf(m, "%2.2x %2.2x %2.2x %2.2x %2.2x %2.2x %2.2x %2.2x\n",
-                          *(buf + i), *(buf + i + 1),
-                          *(buf + i + 2), *(buf + i + 3),
-                          *(buf + i + 4), *(buf + i + 5),
-                          *(buf + i + 6), *(buf + i + 7));
+               seq_printf(m, "%8ph\n", buf + i);
        }
        vfree(buf);
 
index efabd5b..a44b181 100644 (file)
@@ -1383,9 +1383,12 @@ void rtw_core_scan_start(struct rtw_dev *rtwdev, struct rtw_vif *rtwvif,
 void rtw_core_scan_complete(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
                            bool hw_scan)
 {
-       struct rtw_vif *rtwvif = (struct rtw_vif *)vif->drv_priv;
+       struct rtw_vif *rtwvif = vif ? (struct rtw_vif *)vif->drv_priv : NULL;
        u32 config = 0;
 
+       if (!rtwvif)
+               return;
+
        clear_bit(RTW_FLAG_SCANNING, rtwdev->flags);
        clear_bit(RTW_FLAG_DIG_DISABLE, rtwdev->flags);
 
index 93cce44..993bd6b 100644 (file)
@@ -2701,7 +2701,7 @@ static const struct rtw_reg_domain coex_info_hw_regs_8723d[] = {
        {0x953, BIT(1), RTW_REG_DOMAIN_MAC8},
 };
 
-struct rtw_chip_info rtw8723d_hw_spec = {
+const struct rtw_chip_info rtw8723d_hw_spec = {
        .ops = &rtw8723d_ops,
        .id = RTW_CHIP_TYPE_8723D,
        .fw_name = "rtw88/rtw8723d_fw.bin",
index 41d3517..4641f6e 100644 (file)
@@ -72,6 +72,8 @@ struct rtw8723d_efuse {
        struct rtw8723de_efuse e;
 };
 
+extern const struct rtw_chip_info rtw8723d_hw_spec;
+
 /* phy status page0 */
 #define GET_PHY_STAT_P0_PWDB(phy_stat)                                         \
        le32_get_bits(*((__le32 *)(phy_stat) + 0x00), GENMASK(15, 8))
index 2dd6894..abbaafa 100644 (file)
@@ -5,7 +5,7 @@
 #include <linux/module.h>
 #include <linux/pci.h>
 #include "pci.h"
-#include "rtw8723de.h"
+#include "rtw8723d.h"
 
 static const struct pci_device_id rtw_8723de_id_table[] = {
        {
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8723de.h b/drivers/net/wireless/realtek/rtw88/rtw8723de.h
deleted file mode 100644 (file)
index 2b48948..0000000
+++ /dev/null
@@ -1,10 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
-/* Copyright(c) 2018-2019  Realtek Corporation
- */
-
-#ifndef __RTW_8723DE_H_
-#define __RTW_8723DE_H_
-
-extern struct rtw_chip_info rtw8723d_hw_spec;
-
-#endif
index ffee39e..42841f5 100644 (file)
@@ -1877,7 +1877,7 @@ static const struct rtw_reg_domain coex_info_hw_regs_8821c[] = {
        {0x60A, MASKBYTE0, RTW_REG_DOMAIN_MAC8},
 };
 
-struct rtw_chip_info rtw8821c_hw_spec = {
+const struct rtw_chip_info rtw8821c_hw_spec = {
        .ops = &rtw8821c_ops,
        .id = RTW_CHIP_TYPE_8821C,
        .fw_name = "rtw88/rtw8821c_fw.bin",
index d9fbddd..2698801 100644 (file)
@@ -84,6 +84,8 @@ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
        rtw_write32_mask(rtwdev, addr + 0x200, mask, data);
 }
 
+extern const struct rtw_chip_info rtw8821c_hw_spec;
+
 #define rtw_write32s_mask(rtwdev, addr, mask, data)                           \
        do {                                                                   \
                BUILD_BUG_ON((addr) < 0xC00 || (addr) >= 0xD00);               \
index 56d22f9..f3d971f 100644 (file)
@@ -5,7 +5,7 @@
 #include <linux/module.h>
 #include <linux/pci.h>
 #include "pci.h"
-#include "rtw8821ce.h"
+#include "rtw8821c.h"
 
 static const struct pci_device_id rtw_8821ce_id_table[] = {
        {
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821ce.h b/drivers/net/wireless/realtek/rtw88/rtw8821ce.h
deleted file mode 100644 (file)
index 54142ac..0000000
+++ /dev/null
@@ -1,10 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
-/* Copyright(c) 2018-2019  Realtek Corporation
- */
-
-#ifndef __RTW_8821CE_H_
-#define __RTW_8821CE_H_
-
-extern struct rtw_chip_info rtw8821c_hw_spec;
-
-#endif
index dccd722..3218488 100644 (file)
@@ -2497,7 +2497,7 @@ static struct rtw_hw_reg_offset rtw8822b_edcca_th[] = {
        [EDCCA_TH_H2L_IDX] = {{.addr = 0x8a4, .mask = MASKBYTE1}, .offset = 0},
 };
 
-struct rtw_chip_info rtw8822b_hw_spec = {
+const struct rtw_chip_info rtw8822b_hw_spec = {
        .ops = &rtw8822b_ops,
        .id = RTW_CHIP_TYPE_8822B,
        .fw_name = "rtw88/rtw8822b_fw.bin",
index 3fff8b8..01d3644 100644 (file)
@@ -187,4 +187,6 @@ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
 #define REG_ANTWT      0x1904
 #define REG_IQKFAILMSK 0x1bf0
 
+extern const struct rtw_chip_info rtw8822b_hw_spec;
+
 #endif
index 62ee7e6..4994950 100644 (file)
@@ -5,7 +5,7 @@
 #include <linux/module.h>
 #include <linux/pci.h>
 #include "pci.h"
-#include "rtw8822be.h"
+#include "rtw8822b.h"
 
 static const struct pci_device_id rtw_8822be_id_table[] = {
        {
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822be.h b/drivers/net/wireless/realtek/rtw88/rtw8822be.h
deleted file mode 100644 (file)
index 6668460..0000000
+++ /dev/null
@@ -1,10 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
-/* Copyright(c) 2018-2019  Realtek Corporation
- */
-
-#ifndef __RTW_8822BE_H_
-#define __RTW_8822BE_H_
-
-extern struct rtw_chip_info rtw8822b_hw_spec;
-
-#endif
index c043b5c..09f9e4a 100644 (file)
@@ -5310,7 +5310,7 @@ static const struct rtw_reg_domain coex_info_hw_regs_8822c[] = {
        {0xc50, MASKBYTE0, RTW_REG_DOMAIN_MAC8},
 };
 
-struct rtw_chip_info rtw8822c_hw_spec = {
+const struct rtw_chip_info rtw8822c_hw_spec = {
        .ops = &rtw8822c_ops,
        .id = RTW_CHIP_TYPE_8822C,
        .fw_name = "rtw88/rtw8822c_fw.bin",
index 8201955..479d5d7 100644 (file)
@@ -118,6 +118,8 @@ enum rtw8822c_dpk_one_shot_action {
 void rtw8822c_parse_tbl_dpk(struct rtw_dev *rtwdev,
                            const struct rtw_table *tbl);
 
+extern const struct rtw_chip_info rtw8822c_hw_spec;
+
 #define RTW_DECL_TABLE_DPK(name)                       \
 const struct rtw_table name ## _tbl = {                        \
        .data = name,                                   \
index 3845b13..e26c6bc 100644 (file)
@@ -5,7 +5,7 @@
 #include <linux/module.h>
 #include <linux/pci.h>
 #include "pci.h"
-#include "rtw8822ce.h"
+#include "rtw8822c.h"
 
 static const struct pci_device_id rtw_8822ce_id_table[] = {
        {
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822ce.h b/drivers/net/wireless/realtek/rtw88/rtw8822ce.h
deleted file mode 100644 (file)
index fee32d7..0000000
+++ /dev/null
@@ -1,10 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
-/* Copyright(c) 2018-2019  Realtek Corporation
- */
-
-#ifndef __RTW_8822CE_H_
-#define __RTW_8822CE_H_
-
-extern struct rtw_chip_info rtw8822c_hw_spec;
-
-#endif
index 8a26ade..db3c55f 100644 (file)
@@ -602,11 +602,18 @@ int rtw89_cam_fill_bssid_cam_info(struct rtw89_dev *rtwdev,
        struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
        struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
        u8 bss_color = vif->bss_conf.he_bss_color.color;
+       u8 bss_mask;
+
+       if (vif->bss_conf.nontransmitted)
+               bss_mask = RTW89_BSSID_MATCH_5_BYTES;
+       else
+               bss_mask = RTW89_BSSID_MATCH_ALL;
 
        FWCMD_SET_ADDR_BSSID_IDX(cmd, bssid_cam->bssid_cam_idx);
        FWCMD_SET_ADDR_BSSID_OFFSET(cmd, bssid_cam->offset);
        FWCMD_SET_ADDR_BSSID_LEN(cmd, bssid_cam->len);
        FWCMD_SET_ADDR_BSSID_VALID(cmd, bssid_cam->valid);
+       FWCMD_SET_ADDR_BSSID_MASK(cmd, bss_mask);
        FWCMD_SET_ADDR_BSSID_BB_SEL(cmd, bssid_cam->phy_idx);
        FWCMD_SET_ADDR_BSSID_BSS_COLOR(cmd, bss_color);
 
index a3931d3..74a6c47 100644 (file)
@@ -9,6 +9,9 @@
 
 #define RTW89_SEC_CAM_LEN      20
 
+#define RTW89_BSSID_MATCH_ALL GENMASK(5, 0)
+#define RTW89_BSSID_MATCH_5_BYTES GENMASK(4, 0)
+
 static inline void FWCMD_SET_ADDR_IDX(void *cmd, u32 value)
 {
        le32p_replace_bits((__le32 *)(cmd) + 1, value, GENMASK(7, 0));
@@ -309,6 +312,11 @@ static inline void FWCMD_SET_ADDR_BSSID_BB_SEL(void *cmd, u32 value)
        le32p_replace_bits((__le32 *)(cmd) + 13, value, BIT(1));
 }
 
+static inline void FWCMD_SET_ADDR_BSSID_MASK(void *cmd, u32 value)
+{
+       le32p_replace_bits((__le32 *)(cmd) + 13, value, GENMASK(7, 2));
+}
+
 static inline void FWCMD_SET_ADDR_BSSID_BSS_COLOR(void *cmd, u32 value)
 {
        le32p_replace_bits((__le32 *)(cmd) + 13, value, GENMASK(13, 8));
index a6a9057..d2f2a3d 100644 (file)
@@ -1343,6 +1343,47 @@ struct rtw89_vif_rx_stats_iter_data {
        const u8 *bssid;
 };
 
+static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev,
+                                     struct ieee80211_vif *vif,
+                                     struct sk_buff *skb)
+{
+       struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+       struct ieee80211_trigger *tf = (struct ieee80211_trigger *)skb->data;
+       u8 *pos, *end, type;
+       u16 aid;
+
+       if (!ether_addr_equal(vif->bss_conf.bssid, tf->ta) ||
+           rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION ||
+           rtwvif->net_type == RTW89_NET_TYPE_NO_LINK)
+               return;
+
+       type = le64_get_bits(tf->common_info, IEEE80211_TRIGGER_TYPE_MASK);
+       if (type != IEEE80211_TRIGGER_TYPE_BASIC)
+               return;
+
+       end = (u8 *)tf + skb->len;
+       pos = tf->variable;
+
+       while (end - pos >= RTW89_TF_BASIC_USER_INFO_SZ) {
+               aid = RTW89_GET_TF_USER_INFO_AID12(pos);
+               rtw89_debug(rtwdev, RTW89_DBG_TXRX,
+                           "[TF] aid: %d, ul_mcs: %d, rua: %d\n",
+                           aid, RTW89_GET_TF_USER_INFO_UL_MCS(pos),
+                           RTW89_GET_TF_USER_INFO_RUA(pos));
+
+               if (aid == RTW89_TF_PAD)
+                       break;
+
+               if (aid == vif->bss_conf.aid) {
+                       rtwvif->stats.rx_tf_acc++;
+                       rtwdev->stats.rx_tf_acc++;
+                       break;
+               }
+
+               pos += RTW89_TF_BASIC_USER_INFO_SZ;
+       }
+}
+
 static void rtw89_vif_rx_stats_iter(void *data, u8 *mac,
                                    struct ieee80211_vif *vif)
 {
@@ -1355,6 +1396,11 @@ static void rtw89_vif_rx_stats_iter(void *data, u8 *mac,
        struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
        const u8 *bssid = iter_data->bssid;
 
+       if (ieee80211_is_trigger(hdr->frame_control)) {
+               rtw89_stats_trigger_frame(rtwdev, vif, skb);
+               return;
+       }
+
        if (!ether_addr_equal(vif->bss_conf.bssid, bssid))
                return;
 
@@ -1608,7 +1654,7 @@ static void rtw89_core_update_rx_status(struct rtw89_dev *rtwdev,
 
        if (rtwdev->scanning &&
            RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw)) {
-               u8 chan = hal->current_channel;
+               u8 chan = hal->current_primary_channel;
                u8 band = hal->current_band_type;
                enum nl80211_band nl_band;
 
@@ -2023,6 +2069,8 @@ static bool rtw89_traffic_stats_calc(struct rtw89_dev *rtwdev,
        stats->rx_unicast = 0;
        stats->tx_cnt = 0;
        stats->rx_cnt = 0;
+       stats->rx_tf_periodic = stats->rx_tf_acc;
+       stats->rx_tf_acc = 0;
 
        if (tx_tfc_lv != stats->tx_tfc_lv || rx_tfc_lv != stats->rx_tfc_lv)
                return true;
@@ -2875,7 +2923,10 @@ void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
 void rtw89_core_scan_complete(struct rtw89_dev *rtwdev,
                              struct ieee80211_vif *vif, bool hw_scan)
 {
-       struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+       struct rtw89_vif *rtwvif = vif ? (struct rtw89_vif *)vif->drv_priv : NULL;
+
+       if (!rtwvif)
+               return;
 
        ether_addr_copy(rtwvif->mac_addr, vif->addr);
        rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
@@ -3008,6 +3059,7 @@ static int rtw89_core_register_hw(struct rtw89_dev *rtwdev)
        ieee80211_hw_set(hw, SUPPORTS_PS);
        ieee80211_hw_set(hw, SUPPORTS_DYNAMIC_PS);
        ieee80211_hw_set(hw, SINGLE_SCAN_ON_ALL_BANDS);
+       ieee80211_hw_set(hw, SUPPORTS_MULTI_BSSID);
 
        hw->wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
                                     BIT(NL80211_IFTYPE_AP);
index e8a7722..239d47d 100644 (file)
@@ -55,6 +55,16 @@ enum htc_om_channel_width {
 #define RTW89_HTC_MASK_HTC_OM_DL_MU_MIMO_RR BIT(16)
 #define RTW89_HTC_MASK_HTC_OM_UL_MU_DATA_DIS BIT(17)
 
+#define RTW89_TF_PAD GENMASK(11, 0)
+#define RTW89_TF_BASIC_USER_INFO_SZ 6
+
+#define RTW89_GET_TF_USER_INFO_AID12(data)     \
+       le32_get_bits(*((const __le32 *)(data)), GENMASK(11, 0))
+#define RTW89_GET_TF_USER_INFO_RUA(data)       \
+       le32_get_bits(*((const __le32 *)(data)), GENMASK(19, 12))
+#define RTW89_GET_TF_USER_INFO_UL_MCS(data)    \
+       le32_get_bits(*((const __le32 *)(data)), GENMASK(24, 21))
+
 enum rtw89_subband {
        RTW89_CH_2G = 0,
        RTW89_CH_5G_BAND_1 = 1,
@@ -943,6 +953,10 @@ struct rtw89_traffic_stats {
        u32 rx_throughput;
        u32 tx_throughput_raw;
        u32 rx_throughput_raw;
+
+       u32 rx_tf_acc;
+       u32 rx_tf_periodic;
+
        enum rtw89_tfc_lv tx_tfc_lv;
        enum rtw89_tfc_lv rx_tfc_lv;
        struct ewma_tp tx_ewma_tp;
@@ -2550,9 +2564,24 @@ enum rtw89_sar_sources {
        RTW89_SAR_SOURCE_NR,
 };
 
+enum rtw89_sar_subband {
+       RTW89_SAR_2GHZ_SUBBAND,
+       RTW89_SAR_5GHZ_SUBBAND_1_2, /* U-NII-1 and U-NII-2 */
+       RTW89_SAR_5GHZ_SUBBAND_2_E, /* U-NII-2-Extended */
+       RTW89_SAR_5GHZ_SUBBAND_3,   /* U-NII-3 */
+       RTW89_SAR_6GHZ_SUBBAND_5_L, /* U-NII-5 lower part */
+       RTW89_SAR_6GHZ_SUBBAND_5_H, /* U-NII-5 higher part */
+       RTW89_SAR_6GHZ_SUBBAND_6,   /* U-NII-6 */
+       RTW89_SAR_6GHZ_SUBBAND_7_L, /* U-NII-7 lower part */
+       RTW89_SAR_6GHZ_SUBBAND_7_H, /* U-NII-7 higher part */
+       RTW89_SAR_6GHZ_SUBBAND_8,   /* U-NII-8 */
+
+       RTW89_SAR_SUBBAND_NR,
+};
+
 struct rtw89_sar_cfg_common {
-       bool set[RTW89_SUBBAND_NR];
-       s32 cfg[RTW89_SUBBAND_NR];
+       bool set[RTW89_SAR_SUBBAND_NR];
+       s32 cfg[RTW89_SAR_SUBBAND_NR];
 };
 
 struct rtw89_sar_info {
@@ -2646,6 +2675,10 @@ struct rtw89_lck_info {
        u8 thermal[RF_PATH_MAX];
 };
 
+struct rtw89_rx_dck_info {
+       u8 thermal[RF_PATH_MAX];
+};
+
 struct rtw89_iqk_info {
        bool lok_cor_fail[RTW89_IQK_CHS_NR][RTW89_IQK_PATH_NR];
        bool lok_fin_fail[RTW89_IQK_CHS_NR][RTW89_IQK_PATH_NR];
@@ -2776,13 +2809,20 @@ enum rtw89_multi_cfo_mode {
 enum rtw89_phy_cfo_status {
        RTW89_PHY_DCFO_STATE_NORMAL = 0,
        RTW89_PHY_DCFO_STATE_ENHANCE = 1,
+       RTW89_PHY_DCFO_STATE_HOLD = 2,
        RTW89_PHY_DCFO_STATE_MAX
 };
 
+enum rtw89_phy_cfo_ul_ofdma_acc_mode {
+       RTW89_CFO_UL_OFDMA_ACC_DISABLE = 0,
+       RTW89_CFO_UL_OFDMA_ACC_ENABLE = 1
+};
+
 struct rtw89_cfo_tracking_info {
        u16 cfo_timer_ms;
        bool cfo_trig_by_timer_en;
        enum rtw89_phy_cfo_status phy_cfo_status;
+       enum rtw89_phy_cfo_ul_ofdma_acc_mode cfo_ul_ofdma_acc_mode;
        u8 phy_cfo_trk_cnt;
        bool is_adjust;
        enum rtw89_multi_cfo_mode rtw89_multi_cfo_mode;
@@ -3125,6 +3165,7 @@ struct rtw89_dev {
        struct rtw89_dpk_info dpk;
        struct rtw89_mcc_info mcc;
        struct rtw89_lck_info lck;
+       struct rtw89_rx_dck_info rx_dck;
        bool is_tssi_mode[RF_PATH_MAX];
        bool is_bt_iqk_timeout;
 
index 7820bc3..f00f819 100644 (file)
@@ -2376,7 +2376,8 @@ static int rtw89_debug_priv_phy_info_get(struct seq_file *m, void *v)
        seq_printf(m, "TP TX: %u [%u] Mbps (lv: %d), RX: %u [%u] Mbps (lv: %d)\n",
                   stats->tx_throughput, stats->tx_throughput_raw, stats->tx_tfc_lv,
                   stats->rx_throughput, stats->rx_throughput_raw, stats->rx_tfc_lv);
-       seq_printf(m, "Beacon: %u\n", pkt_stat->beacon_nr);
+       seq_printf(m, "Beacon: %u, TF: %u\n", pkt_stat->beacon_nr,
+                  stats->rx_tf_periodic);
        seq_printf(m, "Avg packet length: TX=%u, RX=%u\n", stats->tx_avg_len,
                   stats->rx_avg_len);
 
index de72155..561b04f 100644 (file)
@@ -24,6 +24,7 @@ enum rtw89_debug_mask {
        RTW89_DBG_BTC = BIT(13),
        RTW89_DBG_BF = BIT(14),
        RTW89_DBG_HW_SCAN = BIT(15),
+       RTW89_DBG_SAR = BIT(16),
 };
 
 enum rtw89_debug_mac_reg_sel {
index 4718ace..2d9c315 100644 (file)
@@ -2257,7 +2257,7 @@ static int rtw89_hw_scan_add_chan_list(struct rtw89_dev *rtwdev,
                list_add_tail(&ch_info->list, &chan_list);
                off_chan_time += ch_info->period;
        }
-       rtw89_fw_h2c_scan_list_offload(rtwdev, list_len, &chan_list);
+       ret = rtw89_fw_h2c_scan_list_offload(rtwdev, list_len, &chan_list);
 
 out:
        list_for_each_entry_safe(ch_info, tmp, &chan_list, list) {
@@ -2339,6 +2339,9 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
        rtwvif->scan_req = NULL;
        rtwvif->scan_ies = NULL;
        rtwdev->scan_info.scanning_vif = NULL;
+
+       if (rtwvif->net_type != RTW89_NET_TYPE_NO_LINK)
+               rtw89_store_op_chan(rtwdev, false);
 }
 
 void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
@@ -2365,20 +2368,27 @@ int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
                if (ret)
                        goto out;
        }
-       rtw89_fw_h2c_scan_offload(rtwdev, &opt, rtwvif);
+       ret = rtw89_fw_h2c_scan_offload(rtwdev, &opt, rtwvif);
 out:
        return ret;
 }
 
-void rtw89_store_op_chan(struct rtw89_dev *rtwdev)
+void rtw89_store_op_chan(struct rtw89_dev *rtwdev, bool backup)
 {
        struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
        struct rtw89_hal *hal = &rtwdev->hal;
 
-       scan_info->op_pri_ch = hal->current_primary_channel;
-       scan_info->op_chan = hal->current_channel;
-       scan_info->op_bw = hal->current_band_width;
-       scan_info->op_band = hal->current_band_type;
+       if (backup) {
+               scan_info->op_pri_ch = hal->current_primary_channel;
+               scan_info->op_chan = hal->current_channel;
+               scan_info->op_bw = hal->current_band_width;
+               scan_info->op_band = hal->current_band_type;
+       } else {
+               hal->current_primary_channel = scan_info->op_pri_ch;
+               hal->current_channel = scan_info->op_chan;
+               hal->current_band_width = scan_info->op_bw;
+               hal->current_band_type = scan_info->op_band;
+       }
 }
 
 #define H2C_FW_CPU_EXCEPTION_LEN 4
index 95a55c4..e75ad22 100644 (file)
@@ -2633,17 +2633,14 @@ int rtw89_fw_msg_reg(struct rtw89_dev *rtwdev,
                     struct rtw89_mac_c2h_info *c2h_info);
 int rtw89_fw_h2c_fw_log(struct rtw89_dev *rtwdev, bool enable);
 void rtw89_fw_st_dbg_dump(struct rtw89_dev *rtwdev);
-void rtw89_store_op_chan(struct rtw89_dev *rtwdev);
+void rtw89_store_op_chan(struct rtw89_dev *rtwdev, bool backup);
 void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
                         struct ieee80211_scan_request *req);
 void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
                            bool aborted);
 int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
                          bool enable);
-void rtw89_hw_scan_status_report(struct rtw89_dev *rtwdev, struct sk_buff *skb);
-void rtw89_hw_scan_chan_switch(struct rtw89_dev *rtwdev, struct sk_buff *skb);
 void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
-void rtw89_store_op_chan(struct rtw89_dev *rtwdev);
 int rtw89_fw_h2c_trigger_cpu_exception(struct rtw89_dev *rtwdev);
 
 #endif
index 3cf8929..93124b8 100644 (file)
@@ -3681,17 +3681,20 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *c2h,
                rtw89_hw_scan_complete(rtwdev, vif, false);
                break;
        case RTW89_SCAN_ENTER_CH_NOTIFY:
-               if (rtw89_is_op_chan(rtwdev, band, chan))
+               hal->prev_band_type = hal->current_band_type;
+               hal->current_band_type = band;
+               hal->prev_primary_channel = hal->current_primary_channel;
+               hal->current_primary_channel = chan;
+               hal->current_channel = chan;
+               hal->current_band_width = RTW89_CHANNEL_WIDTH_20;
+               if (rtw89_is_op_chan(rtwdev, band, chan)) {
+                       rtw89_store_op_chan(rtwdev, false);
                        ieee80211_wake_queues(rtwdev->hw);
+               }
                break;
        default:
                return;
        }
-
-       hal->prev_band_type = hal->current_band_type;
-       hal->prev_primary_channel = hal->current_channel;
-       hal->current_channel = chan;
-       hal->current_band_type = band;
 }
 
 static void
index 9f511c8..f666193 100644 (file)
@@ -666,6 +666,7 @@ enum mac_ax_err_info {
        MAC_AX_ERR_L2_ERR_APB_BBRF_TO_RX4281 = 0x2360,
        MAC_AX_ERR_L2_ERR_APB_BBRF_TO_OTHERS = 0x2370,
        MAC_AX_ERR_L2_RESET_DONE = 0x2400,
+       MAC_AX_ERR_L2_ERR_WDT_TIMEOUT_INT = 0x2599,
        MAC_AX_ERR_CPU_EXCEPTION = 0x3000,
        MAC_AX_ERR_ASSERTION = 0x4000,
        MAC_AX_GET_ERR_MAX,
index f24e4a2..6d0c62c 100644 (file)
@@ -350,7 +350,7 @@ static void rtw89_ops_bss_info_changed(struct ieee80211_hw *hw,
                        rtw89_phy_set_bss_color(rtwdev, vif);
                        rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, vif);
                        rtw89_mac_port_update(rtwdev, rtwvif);
-                       rtw89_store_op_chan(rtwdev);
+                       rtw89_store_op_chan(rtwdev, true);
                } else {
                        /* Abort ongoing scan if cancel_scan isn't issued
                         * when disconnected by peer
index 0ef7821..25872df 100644 (file)
@@ -738,6 +738,9 @@ static irqreturn_t rtw89_pci_interrupt_threadfn(int irq, void *dev)
        if (unlikely(isrs.halt_c2h_isrs & B_AX_HALT_C2H_INT_EN))
                rtw89_ser_notify(rtwdev, rtw89_mac_get_err_status(rtwdev));
 
+       if (unlikely(isrs.halt_c2h_isrs & B_AX_WDT_TIMEOUT_INT_EN))
+               rtw89_ser_notify(rtwdev, MAC_AX_ERR_L2_ERR_WDT_TIMEOUT_INT);
+
        if (unlikely(rtwpci->under_recovery))
                goto enable_intr;
 
@@ -3126,7 +3129,7 @@ static void rtw89_pci_recovery_intr_mask_v1(struct rtw89_dev *rtwdev)
        struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
 
        rtwpci->ind_intrs = B_AX_HS0ISR_IND_INT_EN;
-       rtwpci->halt_c2h_intrs = B_AX_HALT_C2H_INT_EN;
+       rtwpci->halt_c2h_intrs = B_AX_HALT_C2H_INT_EN | B_AX_WDT_TIMEOUT_INT_EN;
        rtwpci->intrs[0] = 0;
        rtwpci->intrs[1] = 0;
 }
@@ -3138,7 +3141,7 @@ static void rtw89_pci_default_intr_mask_v1(struct rtw89_dev *rtwdev)
        rtwpci->ind_intrs = B_AX_HCI_AXIDMA_INT_EN |
                            B_AX_HS1ISR_IND_INT_EN |
                            B_AX_HS0ISR_IND_INT_EN;
-       rtwpci->halt_c2h_intrs = B_AX_HALT_C2H_INT_EN;
+       rtwpci->halt_c2h_intrs = B_AX_HALT_C2H_INT_EN | B_AX_WDT_TIMEOUT_INT_EN;
        rtwpci->intrs[0] = B_AX_TXDMA_STUCK_INT_EN |
                           B_AX_RXDMA_INT_EN |
                           B_AX_RXP1DMA_INT_EN |
@@ -3155,7 +3158,7 @@ static void rtw89_pci_low_power_intr_mask_v1(struct rtw89_dev *rtwdev)
 
        rtwpci->ind_intrs = B_AX_HS1ISR_IND_INT_EN |
                            B_AX_HS0ISR_IND_INT_EN;
-       rtwpci->halt_c2h_intrs = B_AX_HALT_C2H_INT_EN;
+       rtwpci->halt_c2h_intrs = B_AX_HALT_C2H_INT_EN | B_AX_WDT_TIMEOUT_INT_EN;
        rtwpci->intrs[0] = 0;
        rtwpci->intrs[1] = B_AX_GPIO18_INT_EN;
 }
index bb585ed..a118647 100644 (file)
@@ -94,6 +94,7 @@
 
 /* Interrupts */
 #define R_AX_HIMR0 0x01A0
+#define B_AX_WDT_TIMEOUT_INT_EN BIT(22)
 #define B_AX_HALT_C2H_INT_EN BIT(21)
 #define R_AX_HISR0 0x01A4
 
index 762cdba..217aacb 100644 (file)
@@ -2151,6 +2151,7 @@ static void rtw89_phy_cfo_init(struct rtw89_dev *rtwdev)
        cfo->cfo_trig_by_timer_en = false;
        cfo->phy_cfo_trk_cnt = 0;
        cfo->phy_cfo_status = RTW89_PHY_DCFO_STATE_NORMAL;
+       cfo->cfo_ul_ofdma_acc_mode = RTW89_CFO_UL_OFDMA_ACC_ENABLE;
 }
 
 static void rtw89_phy_cfo_crystal_cap_adjust(struct rtw89_dev *rtwdev,
@@ -2419,6 +2420,13 @@ void rtw89_phy_cfo_track(struct rtw89_dev *rtwdev)
 {
        struct rtw89_cfo_tracking_info *cfo = &rtwdev->cfo_tracking;
        struct rtw89_traffic_stats *stats = &rtwdev->stats;
+       bool is_ul_ofdma = false, ofdma_acc_en = false;
+
+       if (stats->rx_tf_periodic > CFO_TF_CNT_TH)
+               is_ul_ofdma = true;
+       if (cfo->cfo_ul_ofdma_acc_mode == RTW89_CFO_UL_OFDMA_ACC_ENABLE &&
+           is_ul_ofdma)
+               ofdma_acc_en = true;
 
        switch (cfo->phy_cfo_status) {
        case RTW89_PHY_DCFO_STATE_NORMAL:
@@ -2430,16 +2438,26 @@ void rtw89_phy_cfo_track(struct rtw89_dev *rtwdev)
                }
                break;
        case RTW89_PHY_DCFO_STATE_ENHANCE:
-               if (cfo->phy_cfo_trk_cnt >= CFO_PERIOD_CNT) {
+               if (stats->tx_throughput <= CFO_TP_LOWER)
+                       cfo->phy_cfo_status = RTW89_PHY_DCFO_STATE_NORMAL;
+               else if (ofdma_acc_en &&
+                        cfo->phy_cfo_trk_cnt >= CFO_PERIOD_CNT)
+                       cfo->phy_cfo_status = RTW89_PHY_DCFO_STATE_HOLD;
+               else
+                       cfo->phy_cfo_trk_cnt++;
+
+               if (cfo->phy_cfo_status == RTW89_PHY_DCFO_STATE_NORMAL) {
                        cfo->phy_cfo_trk_cnt = 0;
                        cfo->cfo_trig_by_timer_en = false;
                }
-               if (cfo->cfo_trig_by_timer_en == 1)
-                       cfo->phy_cfo_trk_cnt++;
+               break;
+       case RTW89_PHY_DCFO_STATE_HOLD:
                if (stats->tx_throughput <= CFO_TP_LOWER) {
                        cfo->phy_cfo_status = RTW89_PHY_DCFO_STATE_NORMAL;
                        cfo->phy_cfo_trk_cnt = 0;
                        cfo->cfo_trig_by_timer_en = false;
+               } else {
+                       cfo->phy_cfo_trk_cnt++;
                }
                break;
        default:
index 2916601..e20636f 100644 (file)
@@ -62,6 +62,7 @@
 #define CFO_COMP_PERIOD 250
 #define CFO_COMP_WEIGHT 8
 #define MAX_CFO_TOLERANCE 30
+#define CFO_TF_CNT_TH 300
 
 #define CCX_MAX_PERIOD 2097
 #define CCX_MAX_PERIOD_UNIT 32
index 64840c8..b697aef 100644 (file)
@@ -1861,6 +1861,7 @@ static void rtw8852c_rfk_track(struct rtw89_dev *rtwdev)
 {
        rtw8852c_dpk_track(rtwdev);
        rtw8852c_lck_track(rtwdev);
+       rtw8852c_rx_dck_track(rtwdev);
 }
 
 static u32 rtw8852c_bb_cal_txpwr_ref(struct rtw89_dev *rtwdev,
index dfb9cab..4186d82 100644 (file)
@@ -3864,6 +3864,7 @@ void rtw8852c_iqk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
 
 void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy, bool is_afe)
 {
+       struct rtw89_rx_dck_info *rx_dck = &rtwdev->rx_dck;
        u8 path, kpath;
        u32 rf_reg5;
 
@@ -3883,6 +3884,7 @@ void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy, bool is_a
                rtw89_write_rf(rtwdev, path, RR_RSV1, RR_RSV1_RST, 0x0);
                rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_MASK, RR_MOD_V_RX);
                _set_rx_dck(rtwdev, phy, path, is_afe);
+               rx_dck->thermal[path] = ewma_thermal_read(&rtwdev->phystat.avg_thermal[path]);
                rtw89_write_rf(rtwdev, path, RR_RSV1, RFREG_MASK, rf_reg5);
 
                if (rtwdev->is_tssi_mode[path])
@@ -3891,6 +3893,31 @@ void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy, bool is_a
        }
 }
 
+#define RTW8852C_RX_DCK_TH 8
+
+void rtw8852c_rx_dck_track(struct rtw89_dev *rtwdev)
+{
+       struct rtw89_rx_dck_info *rx_dck = &rtwdev->rx_dck;
+       u8 cur_thermal;
+       int delta;
+       int path;
+
+       for (path = 0; path < RF_PATH_NUM_8852C; path++) {
+               cur_thermal =
+                       ewma_thermal_read(&rtwdev->phystat.avg_thermal[path]);
+               delta = abs((int)cur_thermal - rx_dck->thermal[path]);
+
+               rtw89_debug(rtwdev, RTW89_DBG_RFK_TRACK,
+                           "[RX_DCK] path=%d current thermal=0x%x delta=0x%x\n",
+                           path, cur_thermal, delta);
+
+               if (delta >= RTW8852C_RX_DCK_TH) {
+                       rtw8852c_rx_dck(rtwdev, RTW89_PHY_0, false);
+                       return;
+               }
+       }
+}
+
 void rtw8852c_dpk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
 {
        u32 tx_en;
index c32756f..5118a49 100644 (file)
@@ -12,6 +12,7 @@ void rtw8852c_rck(struct rtw89_dev *rtwdev);
 void rtw8852c_dack(struct rtw89_dev *rtwdev);
 void rtw8852c_iqk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx);
 void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx, bool is_afe);
+void rtw8852c_rx_dck_track(struct rtw89_dev *rtwdev);
 void rtw8852c_dpk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy);
 void rtw8852c_dpk_track(struct rtw89_dev *rtwdev);
 void rtw8852c_tssi(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy);
index 097c878..eb2d3ec 100644 (file)
 #include "debug.h"
 #include "sar.h"
 
+static enum rtw89_sar_subband rtw89_sar_get_subband(struct rtw89_dev *rtwdev,
+                                                   u32 center_freq)
+{
+       switch (center_freq) {
+       default:
+               rtw89_debug(rtwdev, RTW89_DBG_SAR,
+                           "center freq: %u to SAR subband is unhandled\n",
+                           center_freq);
+               fallthrough;
+       case 2412 ... 2484:
+               return RTW89_SAR_2GHZ_SUBBAND;
+       case 5180 ... 5320:
+               return RTW89_SAR_5GHZ_SUBBAND_1_2;
+       case 5500 ... 5720:
+               return RTW89_SAR_5GHZ_SUBBAND_2_E;
+       case 5745 ... 5825:
+               return RTW89_SAR_5GHZ_SUBBAND_3;
+       case 5955 ... 6155:
+               return RTW89_SAR_6GHZ_SUBBAND_5_L;
+       case 6175 ... 6415:
+               return RTW89_SAR_6GHZ_SUBBAND_5_H;
+       case 6435 ... 6515:
+               return RTW89_SAR_6GHZ_SUBBAND_6;
+       case 6535 ... 6695:
+               return RTW89_SAR_6GHZ_SUBBAND_7_L;
+       case 6715 ... 6855:
+               return RTW89_SAR_6GHZ_SUBBAND_7_H;
+
+       /* freq 6875 (ch 185, 20MHz) spans RTW89_SAR_6GHZ_SUBBAND_7_H
+        * and RTW89_SAR_6GHZ_SUBBAND_8, so it is described directly with
+        * struct rtw89_sar_span below.
+        */
+
+       case 6895 ... 7115:
+               return RTW89_SAR_6GHZ_SUBBAND_8;
+       }
+}
+
+struct rtw89_sar_span {
+       enum rtw89_sar_subband subband_low;
+       enum rtw89_sar_subband subband_high;
+};
+
+#define RTW89_SAR_SPAN_VALID(span) ((span)->subband_high)
+
+#define RTW89_SAR_6GHZ_SPAN_HEAD 6145
+#define RTW89_SAR_6GHZ_SPAN_IDX(center_freq) \
+       ((((int)(center_freq) - RTW89_SAR_6GHZ_SPAN_HEAD) / 5) / 2)
+
+#define RTW89_DECL_SAR_6GHZ_SPAN(center_freq, subband_l, subband_h) \
+       [RTW89_SAR_6GHZ_SPAN_IDX(center_freq)] = { \
+               .subband_low = RTW89_SAR_6GHZ_ ## subband_l, \
+               .subband_high = RTW89_SAR_6GHZ_ ## subband_h, \
+       }
+
+/* Since 6GHz SAR subbands are not edge aligned, some cases span two SAR
+ * subbands. In the following, we describe each of them with rtw89_sar_span.
+ */
+static const struct rtw89_sar_span rtw89_sar_overlapping_6ghz[] = {
+       RTW89_DECL_SAR_6GHZ_SPAN(6145, SUBBAND_5_L, SUBBAND_5_H),
+       RTW89_DECL_SAR_6GHZ_SPAN(6165, SUBBAND_5_L, SUBBAND_5_H),
+       RTW89_DECL_SAR_6GHZ_SPAN(6185, SUBBAND_5_L, SUBBAND_5_H),
+       RTW89_DECL_SAR_6GHZ_SPAN(6505, SUBBAND_6, SUBBAND_7_L),
+       RTW89_DECL_SAR_6GHZ_SPAN(6525, SUBBAND_6, SUBBAND_7_L),
+       RTW89_DECL_SAR_6GHZ_SPAN(6545, SUBBAND_6, SUBBAND_7_L),
+       RTW89_DECL_SAR_6GHZ_SPAN(6665, SUBBAND_7_L, SUBBAND_7_H),
+       RTW89_DECL_SAR_6GHZ_SPAN(6705, SUBBAND_7_L, SUBBAND_7_H),
+       RTW89_DECL_SAR_6GHZ_SPAN(6825, SUBBAND_7_H, SUBBAND_8),
+       RTW89_DECL_SAR_6GHZ_SPAN(6865, SUBBAND_7_H, SUBBAND_8),
+       RTW89_DECL_SAR_6GHZ_SPAN(6875, SUBBAND_7_H, SUBBAND_8),
+       RTW89_DECL_SAR_6GHZ_SPAN(6885, SUBBAND_7_H, SUBBAND_8),
+};
+
 static int rtw89_query_sar_config_common(struct rtw89_dev *rtwdev, s32 *cfg)
 {
        struct rtw89_sar_cfg_common *rtwsar = &rtwdev->sar.cfg_common;
-       enum rtw89_subband subband = rtwdev->hal.current_subband;
+       struct rtw89_hal *hal = &rtwdev->hal;
+       enum rtw89_band band = hal->current_band_type;
+       u32 center_freq = hal->current_freq;
+       const struct rtw89_sar_span *span = NULL;
+       enum rtw89_sar_subband subband_l, subband_h;
+       int idx;
+
+       if (band == RTW89_BAND_6G) {
+               idx = RTW89_SAR_6GHZ_SPAN_IDX(center_freq);
+               /* To decrease the size of rtw89_sar_overlapping_6ghz[],
+                * RTW89_SAR_6GHZ_SPAN_IDX() drops the leading unused entries
+                * so that the first span is index 0 of the table. Hence, if
+                * the center frequency is below the first one, the index
+                * will be negative.
+                */
+               if (idx >= 0 && idx < ARRAY_SIZE(rtw89_sar_overlapping_6ghz))
+                       span = &rtw89_sar_overlapping_6ghz[idx];
+       }
+
+       if (span && RTW89_SAR_SPAN_VALID(span)) {
+               subband_l = span->subband_low;
+               subband_h = span->subband_high;
+       } else {
+               subband_l = rtw89_sar_get_subband(rtwdev, center_freq);
+               subband_h = subband_l;
+       }
+
+       rtw89_debug(rtwdev, RTW89_DBG_SAR,
+                   "for {band %u, center_freq %u}, SAR subband: {%u, %u}\n",
+                   band, center_freq, subband_l, subband_h);
 
-       if (!rtwsar->set[subband])
+       if (!rtwsar->set[subband_l] && !rtwsar->set[subband_h])
                return -ENODATA;
 
-       *cfg = rtwsar->cfg[subband];
+       if (!rtwsar->set[subband_l])
+               *cfg = rtwsar->cfg[subband_h];
+       else if (!rtwsar->set[subband_h])
+               *cfg = rtwsar->cfg[subband_l];
+       else
+               *cfg = min(rtwsar->cfg[subband_l], rtwsar->cfg[subband_h]);
+
        return 0;
 }
 
@@ -128,21 +235,20 @@ exit:
        return ret;
 }
 
-static const u8 rtw89_common_sar_subband_map[] = {
-       RTW89_CH_2G,
-       RTW89_CH_5G_BAND_1,
-       RTW89_CH_5G_BAND_3,
-       RTW89_CH_5G_BAND_4,
-};
-
 static const struct cfg80211_sar_freq_ranges rtw89_common_sar_freq_ranges[] = {
        { .start_freq = 2412, .end_freq = 2484, },
        { .start_freq = 5180, .end_freq = 5320, },
        { .start_freq = 5500, .end_freq = 5720, },
        { .start_freq = 5745, .end_freq = 5825, },
+       { .start_freq = 5955, .end_freq = 6155, },
+       { .start_freq = 6175, .end_freq = 6415, },
+       { .start_freq = 6435, .end_freq = 6515, },
+       { .start_freq = 6535, .end_freq = 6695, },
+       { .start_freq = 6715, .end_freq = 6875, },
+       { .start_freq = 6875, .end_freq = 7115, },
 };
 
-static_assert(ARRAY_SIZE(rtw89_common_sar_subband_map) ==
+static_assert(RTW89_SAR_SUBBAND_NR ==
              ARRAY_SIZE(rtw89_common_sar_freq_ranges));
 
 const struct cfg80211_sar_capa rtw89_sar_capa = {
@@ -159,7 +265,6 @@ int rtw89_ops_set_sar_specs(struct ieee80211_hw *hw,
        u8 fct;
        u32 freq_start;
        u32 freq_end;
-       u32 band;
        s32 power;
        u32 i, idx;
 
@@ -175,15 +280,14 @@ int rtw89_ops_set_sar_specs(struct ieee80211_hw *hw,
 
                freq_start = rtw89_common_sar_freq_ranges[idx].start_freq;
                freq_end = rtw89_common_sar_freq_ranges[idx].end_freq;
-               band = rtw89_common_sar_subband_map[idx];
                power = sar->sub_specs[i].power;
 
-               rtw89_info(rtwdev, "On freq %u to %u, ", freq_start, freq_end);
-               rtw89_info(rtwdev, "set SAR power limit %d (unit: 1/%lu dBm)\n",
-                          power, BIT(fct));
+               rtw89_debug(rtwdev, RTW89_DBG_SAR,
+                           "On freq %u to %u, set SAR limit %d (unit: 1/%lu dBm)\n",
+                           freq_start, freq_end, power, BIT(fct));
 
-               sar_common.set[band] = true;
-               sar_common.cfg[band] = power;
+               sar_common.set[idx] = true;
+               sar_common.cfg[idx] = power;
        }
 
        return rtw89_apply_sar_common(rtwdev, &sar_common);
index 3d1b8a1..52c7f56 100644 (file)
@@ -286,8 +286,7 @@ static int load_firmware_secure(struct wfx_dev *wdev)
 
 error:
        kfree(buf);
-       if (fw)
-               release_firmware(fw);
+       release_firmware(fw);
        if (ret)
                print_boot_status(wdev);
        return ret;
index 10e019c..3b4ded2 100644 (file)
@@ -327,18 +327,12 @@ static int cw1200_bh_rx_helper(struct cw1200_common *priv,
        if (WARN_ON(wsm_handle_rx(priv, wsm_id, wsm, &skb_rx)))
                goto err;
 
-       if (skb_rx) {
-               dev_kfree_skb(skb_rx);
-               skb_rx = NULL;
-       }
+       dev_kfree_skb(skb_rx);
 
        return 0;
 
 err:
-       if (skb_rx) {
-               dev_kfree_skb(skb_rx);
-               skb_rx = NULL;
-       }
+       dev_kfree_skb(skb_rx);
        return -1;
 }
 
index 514f2c1..ba14d83 100644 (file)
@@ -654,7 +654,7 @@ static int __init virt_wifi_init_module(void)
 {
        int err;
 
-       /* Guaranteed to be locallly-administered and not multicast. */
+       /* Guaranteed to be locally-administered and not multicast. */
        eth_random_addr(fake_router_bssid);
 
        err = register_netdevice_notifier(&virt_wifi_notifier);
index d9dea48..8174d7b 100644 (file)
@@ -48,7 +48,6 @@
 #include <linux/debugfs.h>
 
 typedef unsigned int pending_ring_idx_t;
-#define INVALID_PENDING_RING_IDX (~0U)
 
 struct pending_tx_info {
        struct xen_netif_tx_request req; /* tx request */
@@ -82,8 +81,6 @@ struct xenvif_rx_meta {
 /* Discriminate from any valid pending_idx value. */
 #define INVALID_PENDING_IDX 0xFFFF
 
-#define MAX_BUFFER_OFFSET XEN_PAGE_SIZE
-
 #define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
 
 /* The maximum number of frags is derived from the size of a grant (same
@@ -367,11 +364,6 @@ void xenvif_free(struct xenvif *vif);
 int xenvif_xenbus_init(void);
 void xenvif_xenbus_fini(void);
 
-int xenvif_schedulable(struct xenvif *vif);
-
-int xenvif_queue_stopped(struct xenvif_queue *queue);
-void xenvif_wake_queue(struct xenvif_queue *queue);
-
 /* (Un)Map communication rings. */
 void xenvif_unmap_frontend_data_rings(struct xenvif_queue *queue);
 int xenvif_map_frontend_data_rings(struct xenvif_queue *queue,
@@ -394,7 +386,6 @@ int xenvif_dealloc_kthread(void *data);
 irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data);
 
 bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread);
-void xenvif_rx_action(struct xenvif_queue *queue);
 void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
 
 void xenvif_carrier_on(struct xenvif *vif);
@@ -403,9 +394,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 void xenvif_zerocopy_callback(struct sk_buff *skb, struct ubuf_info *ubuf,
                              bool zerocopy_success);
 
-/* Unmap a pending page and release it back to the guest */
-void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
-
 static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
        return MAX_PENDING_REQS -
index 8e03537..fb32ae8 100644 (file)
@@ -69,7 +69,7 @@ void xenvif_skb_zerocopy_complete(struct xenvif_queue *queue)
        wake_up(&queue->dealloc_wq);
 }
 
-int xenvif_schedulable(struct xenvif *vif)
+static int xenvif_schedulable(struct xenvif *vif)
 {
        return netif_running(vif->dev) &&
                test_bit(VIF_STATUS_CONNECTED, &vif->status) &&
@@ -177,20 +177,6 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
        return IRQ_HANDLED;
 }
 
-int xenvif_queue_stopped(struct xenvif_queue *queue)
-{
-       struct net_device *dev = queue->vif->dev;
-       unsigned int id = queue->id;
-       return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
-}
-
-void xenvif_wake_queue(struct xenvif_queue *queue)
-{
-       struct net_device *dev = queue->vif->dev;
-       unsigned int id = queue->id;
-       netif_tx_wake_queue(netdev_get_tx_queue(dev, id));
-}
-
 static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
                               struct net_device *sb_dev)
 {
index d93814c..fc61a44 100644 (file)
@@ -112,6 +112,8 @@ static void make_tx_response(struct xenvif_queue *queue,
                             s8       st);
 static void push_tx_responses(struct xenvif_queue *queue);
 
+static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
+
 static inline int tx_work_todo(struct xenvif_queue *queue);
 
 static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
@@ -1418,7 +1420,7 @@ static void push_tx_responses(struct xenvif_queue *queue)
                notify_remote_via_irq(queue->tx_irq);
 }
 
-void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
+static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
 {
        int ret;
        struct gnttab_unmap_grant_ref tx_unmap_op;
index dbac4c0..8df2c73 100644 (file)
@@ -486,7 +486,7 @@ static void xenvif_rx_skb(struct xenvif_queue *queue)
 
 #define RX_BATCH_SIZE 64
 
-void xenvif_rx_action(struct xenvif_queue *queue)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
        struct sk_buff_head completed_skbs;
        unsigned int work_done = 0;
index a99aedf..ea73094 100644 (file)
@@ -388,13 +388,25 @@ static void nfcmrvl_play_deferred(struct nfcmrvl_usb_drv_data *drv_data)
        int err;
 
        while ((urb = usb_get_from_anchor(&drv_data->deferred))) {
+               usb_anchor_urb(urb, &drv_data->tx_anchor);
+
                err = usb_submit_urb(urb, GFP_ATOMIC);
-               if (err)
+               if (err) {
+                       kfree(urb->setup_packet);
+                       usb_unanchor_urb(urb);
+                       usb_free_urb(urb);
                        break;
+               }
 
                drv_data->tx_in_flight++;
+               usb_free_urb(urb);
+       }
+
+       /* Clean up the remaining deferred urbs. */
+       while ((urb = usb_get_from_anchor(&drv_data->deferred))) {
+               kfree(urb->setup_packet);
+               usb_free_urb(urb);
        }
-       usb_scuttle_anchored_urbs(&drv_data->deferred);
 }
 
 static int nfcmrvl_resume(struct usb_interface *intf)
index 7e213f8..df8d27c 100644 (file)
@@ -300,6 +300,8 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
        int r = 0;
        struct device *dev = &hdev->ndev->dev;
        struct nfc_evt_transaction *transaction;
+       u32 aid_len;
+       u8 params_len;
 
        pr_debug("connectivity gate event: %x\n", event);
 
@@ -308,43 +310,48 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
                r = nfc_se_connectivity(hdev->ndev, host);
        break;
        case ST21NFCA_EVT_TRANSACTION:
-               /*
-                * According to specification etsi 102 622
+               /* According to specification etsi 102 622
                 * 11.2.2.4 EVT_TRANSACTION Table 52
                 * Description  Tag     Length
                 * AID          81      5 to 16
                 * PARAMETERS   82      0 to 255
+                *
+                * The key differences: the AID is variable-length in the
+                * packet but fixed-size in nfc_evt_transaction, aid_len is a
+                * u8 in the packet but a u32 in the structure, and the tags
+                * in the packet are not stored in nfc_evt_transaction.
+                *
+                * size in bytes: 1          1       5-16 1             1           0-255
+                * offset:        0          1       2    aid_len + 2   aid_len + 3 aid_len + 4
+                * member name:   aid_tag(M) aid_len aid  params_tag(M) params_len  params
+                * example:       0x81       5-16    X    0x82 0-255    X
                 */
-               if (skb->len < NFC_MIN_AID_LENGTH + 2 &&
-                   skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG)
+               if (skb->len < 2 || skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG)
                        return -EPROTO;
 
-               transaction = devm_kzalloc(dev, skb->len - 2, GFP_KERNEL);
-               if (!transaction)
-                       return -ENOMEM;
-
-               transaction->aid_len = skb->data[1];
+               aid_len = skb->data[1];
 
-               /* Checking if the length of the AID is valid */
-               if (transaction->aid_len > sizeof(transaction->aid))
-                       return -EINVAL;
+               if (skb->len < aid_len + 4 || aid_len > sizeof(transaction->aid))
+                       return -EPROTO;
 
-               memcpy(transaction->aid, &skb->data[2],
-                      transaction->aid_len);
+               params_len = skb->data[aid_len + 3];
 
-               /* Check next byte is PARAMETERS tag (82) */
-               if (skb->data[transaction->aid_len + 2] !=
-                   NFC_EVT_TRANSACTION_PARAMS_TAG)
+               /* Verify that the PARAMETERS tag is (82), and finally check
+                * that there is enough space in the packet to read everything.
+                */
+               if ((skb->data[aid_len + 2] != NFC_EVT_TRANSACTION_PARAMS_TAG) ||
+                   (skb->len < aid_len + 4 + params_len))
                        return -EPROTO;
 
-               transaction->params_len = skb->data[transaction->aid_len + 3];
+               transaction = devm_kzalloc(dev, sizeof(*transaction) + params_len, GFP_KERNEL);
+               if (!transaction)
+                       return -ENOMEM;
 
-               /* Total size is allocated (skb->len - 2) minus fixed array members */
-               if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction)))
-                       return -EINVAL;
+               transaction->aid_len = aid_len;
+               transaction->params_len = params_len;
 
-               memcpy(transaction->params, skb->data +
-                      transaction->aid_len + 4, transaction->params_len);
+               memcpy(transaction->aid, &skb->data[2], aid_len);
+               memcpy(transaction->params, &skb->data[aid_len + 4], params_len);
 
                r = nfc_se_transaction(hdev->ndev, host, transaction);
        break;
index d421e14..6b51ad0 100644 (file)
@@ -17,7 +17,7 @@ menuconfig MIPS_PLATFORM_DEVICES
 if MIPS_PLATFORM_DEVICES
 
 config CPU_HWMON
-       tristate "Loongson-3 CPU HWMon Driver"
+       bool "Loongson-3 CPU HWMon Driver"
        depends on MACH_LOONGSON64
        select HWMON
        default y
index 4519ef4..e59ea21 100644 (file)
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright (c) 2020 Facebook */
 
+#include <linux/bits.h>
 #include <linux/err.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -88,10 +89,10 @@ struct tod_reg {
 #define TOD_CTRL_DISABLE_FMT_A BIT(17)
 #define TOD_CTRL_DISABLE_FMT_B BIT(16)
 #define TOD_CTRL_ENABLE                BIT(0)
-#define TOD_CTRL_GNSS_MASK     ((1U << 4) - 1)
+#define TOD_CTRL_GNSS_MASK     GENMASK(3, 0)
 #define TOD_CTRL_GNSS_SHIFT    24
 
-#define TOD_STATUS_UTC_MASK            0xff
+#define TOD_STATUS_UTC_MASK            GENMASK(7, 0)
 #define TOD_STATUS_UTC_VALID           BIT(8)
 #define TOD_STATUS_LEAP_ANNOUNCE       BIT(12)
 #define TOD_STATUS_LEAP_VALID          BIT(16)
@@ -205,7 +206,7 @@ struct frequency_reg {
 #define FREQ_STATUS_VALID      BIT(31)
 #define FREQ_STATUS_ERROR      BIT(30)
 #define FREQ_STATUS_OVERRUN    BIT(29)
-#define FREQ_STATUS_MASK       (BIT(24) - 1)
+#define FREQ_STATUS_MASK       GENMASK(23, 0)
 
 struct ptp_ocp_flash_info {
        const char *name;
@@ -674,9 +675,9 @@ static const struct ocp_selector ptp_ocp_clock[] = {
        { }
 };
 
+#define SMA_DISABLE            BIT(16)
 #define SMA_ENABLE             BIT(15)
-#define SMA_SELECT_MASK                ((1U << 15) - 1)
-#define SMA_DISABLE            0x10000
+#define SMA_SELECT_MASK                GENMASK(14, 0)
 
 static const struct ocp_selector ptp_ocp_sma_in[] = {
        { .name = "10Mhz",      .value = 0x0000 },
@@ -2154,7 +2155,7 @@ ptp_ocp_fb_set_pins(struct ptp_ocp *bp)
        struct ptp_pin_desc *config;
        int i;
 
-       config = kzalloc(sizeof(*config) * 4, GFP_KERNEL);
+       config = kcalloc(4, sizeof(*config), GFP_KERNEL);
        if (!config)
                return -ENOMEM;
 
@@ -3440,7 +3441,7 @@ ptp_ocp_tod_status_show(struct seq_file *s, void *data)
 
        val = ioread32(&bp->tod->utc_status);
        seq_printf(s, "UTC status register: 0x%08X\n", val);
-       seq_printf(s, "UTC offset: %d  valid:%d\n",
+       seq_printf(s, "UTC offset: %ld  valid:%d\n",
                val & TOD_STATUS_UTC_MASK, val & TOD_STATUS_UTC_VALID ? 1 : 0);
        seq_printf(s, "Leap second info valid:%d, Leap second announce %d\n",
                val & TOD_STATUS_LEAP_VALID ? 1 : 0,
@@ -3700,10 +3701,8 @@ ptp_ocp_detach(struct ptp_ocp *bp)
                serial8250_unregister_port(bp->mac_port);
        if (bp->nmea_port != -1)
                serial8250_unregister_port(bp->nmea_port);
-       if (bp->spi_flash)
-               platform_device_unregister(bp->spi_flash);
-       if (bp->i2c_ctrl)
-               platform_device_unregister(bp->i2c_ctrl);
+       platform_device_unregister(bp->spi_flash);
+       platform_device_unregister(bp->i2c_ctrl);
        if (bp->i2c_clk)
                clk_hw_unregister_fixed_rate(bp->i2c_clk);
        if (bp->n_irqs)
@@ -3773,7 +3772,6 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 out:
        ptp_ocp_detach(bp);
-       pci_set_drvdata(pdev, NULL);
 out_disable:
        pci_disable_device(pdev);
 out_free:
@@ -3789,7 +3787,6 @@ ptp_ocp_remove(struct pci_dev *pdev)
 
        devlink_unregister(devlink);
        ptp_ocp_detach(bp);
-       pci_set_drvdata(pdev, NULL);
        pci_disable_device(pdev);
 
        devlink_free(devlink);
index b5adf6a..a6dc8b5 100644 (file)
@@ -6,12 +6,6 @@ config VIRTIO
          bus, such as CONFIG_VIRTIO_PCI, CONFIG_VIRTIO_MMIO, CONFIG_RPMSG
          or CONFIG_S390_GUEST.
 
-config ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
-       bool
-       help
-         This option is selected if the architecture may need to enforce
-         VIRTIO_F_ACCESS_PLATFORM
-
 config VIRTIO_PCI_LIB
        tristate
        help
index ef04a96..6bace84 100644 (file)
@@ -5,6 +5,7 @@
 #include <linux/module.h>
 #include <linux/idr.h>
 #include <linux/of.h>
+#include <linux/platform-feature.h>
 #include <uapi/linux/virtio_ids.h>
 
 /* Unique numbering for virtio devices. */
@@ -170,12 +171,10 @@ EXPORT_SYMBOL_GPL(virtio_add_status);
 static int virtio_features_ok(struct virtio_device *dev)
 {
        unsigned int status;
-       int ret;
 
        might_sleep();
 
-       ret = arch_has_restricted_virtio_memory_access();
-       if (ret) {
+       if (platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS)) {
                if (!virtio_has_feature(dev, VIRTIO_F_VERSION_1)) {
                        dev_warn(&dev->dev,
                                 "device must provide VIRTIO_F_VERSION_1\n");
index 120d32f..bfd5f4f 100644 (file)
@@ -335,4 +335,24 @@ config XEN_UNPOPULATED_ALLOC
          having to balloon out RAM regions in order to obtain physical memory
          space to create such mappings.
 
+config XEN_GRANT_DMA_IOMMU
+       bool
+       select IOMMU_API
+
+config XEN_GRANT_DMA_OPS
+       bool
+       select DMA_OPS
+
+config XEN_VIRTIO
+       bool "Xen virtio support"
+       depends on VIRTIO
+       select XEN_GRANT_DMA_OPS
+       select XEN_GRANT_DMA_IOMMU if OF
+       help
+         Enable virtio support for running as a Xen guest. Depending on the
+         guest type, this will require special support on the backend side
+         (qemu or kernel, depending on the virtio device types used).
+
+         If in doubt, say n.
+
 endmenu
index 5aae66e..c0503f1 100644 (file)
@@ -39,3 +39,5 @@ xen-gntalloc-y                                := gntalloc.o
 xen-privcmd-y                          := privcmd.o privcmd-buf.o
 obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)    += xen-front-pgdir-shbuf.o
 obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)    += unpopulated-alloc.o
+obj-$(CONFIG_XEN_GRANT_DMA_OPS)                += grant-dma-ops.o
+obj-$(CONFIG_XEN_GRANT_DMA_IOMMU)      += grant-dma-iommu.o
diff --git a/drivers/xen/grant-dma-iommu.c b/drivers/xen/grant-dma-iommu.c
new file mode 100644 (file)
index 0000000..16b8bc0
--- /dev/null
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Stub IOMMU driver which does nothing.
+ * Its main purpose is to let the Xen grant DMA-mapping layer reuse the
+ * generic IOMMU device tree bindings.
+ *
+ * Copyright (C) 2022 EPAM Systems Inc.
+ */
+
+#include <linux/iommu.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+struct grant_dma_iommu_device {
+       struct device *dev;
+       struct iommu_device iommu;
+};
+
+/* Nothing is really needed here */
+static const struct iommu_ops grant_dma_iommu_ops;
+
+static const struct of_device_id grant_dma_iommu_of_match[] = {
+       { .compatible = "xen,grant-dma" },
+       { },
+};
+
+static int grant_dma_iommu_probe(struct platform_device *pdev)
+{
+       struct grant_dma_iommu_device *mmu;
+       int ret;
+
+       mmu = devm_kzalloc(&pdev->dev, sizeof(*mmu), GFP_KERNEL);
+       if (!mmu)
+               return -ENOMEM;
+
+       mmu->dev = &pdev->dev;
+
+       ret = iommu_device_register(&mmu->iommu, &grant_dma_iommu_ops, &pdev->dev);
+       if (ret)
+               return ret;
+
+       platform_set_drvdata(pdev, mmu);
+
+       return 0;
+}
+
+static int grant_dma_iommu_remove(struct platform_device *pdev)
+{
+       struct grant_dma_iommu_device *mmu = platform_get_drvdata(pdev);
+
+       platform_set_drvdata(pdev, NULL);
+       iommu_device_unregister(&mmu->iommu);
+
+       return 0;
+}
+
+static struct platform_driver grant_dma_iommu_driver = {
+       .driver = {
+               .name = "grant-dma-iommu",
+               .of_match_table = grant_dma_iommu_of_match,
+       },
+       .probe = grant_dma_iommu_probe,
+       .remove = grant_dma_iommu_remove,
+};
+
+static int __init grant_dma_iommu_init(void)
+{
+       struct device_node *iommu_np;
+
+       iommu_np = of_find_matching_node(NULL, grant_dma_iommu_of_match);
+       if (!iommu_np)
+               return 0;
+
+       of_node_put(iommu_np);
+
+       return platform_driver_register(&grant_dma_iommu_driver);
+}
+subsys_initcall(grant_dma_iommu_init);
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
new file mode 100644 (file)
index 0000000..fc01424
--- /dev/null
@@ -0,0 +1,346 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Xen grant DMA-mapping layer - contains special DMA-mapping routines
+ * for providing grant references as DMA addresses to be used by frontends
+ * (e.g. virtio) in Xen guests
+ *
+ * Copyright (c) 2021, Juergen Gross <jgross@suse.com>
+ */
+
+#include <linux/module.h>
+#include <linux/dma-map-ops.h>
+#include <linux/of.h>
+#include <linux/pfn.h>
+#include <linux/xarray.h>
+#include <xen/xen.h>
+#include <xen/xen-ops.h>
+#include <xen/grant_table.h>
+
+struct xen_grant_dma_data {
+       /* The ID of the backend domain */
+       domid_t backend_domid;
+       /* Is the device behaving sanely? */
+       bool broken;
+};
+
+static DEFINE_XARRAY(xen_grant_dma_devices);
+
+#define XEN_GRANT_DMA_ADDR_OFF (1ULL << 63)
+
+static inline dma_addr_t grant_to_dma(grant_ref_t grant)
+{
+       return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT);
+}
+
+static inline grant_ref_t dma_to_grant(dma_addr_t dma)
+{
+       return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> PAGE_SHIFT);
+}
+
+static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
+{
+       struct xen_grant_dma_data *data;
+
+       xa_lock(&xen_grant_dma_devices);
+       data = xa_load(&xen_grant_dma_devices, (unsigned long)dev);
+       xa_unlock(&xen_grant_dma_devices);
+
+       return data;
+}
+
+/*
+ * DMA ops for Xen frontends (e.g. virtio).
+ *
+ * Used to act as a kind of software IOMMU for Xen guests by using grants as
+ * DMA addresses.
+ * Such a DMA address is formed by using the grant reference as a frame
+ * number and setting the highest address bit (this bit lets the backend
+ * distinguish it from e.g. an MMIO address).
+ */
+static void *xen_grant_dma_alloc(struct device *dev, size_t size,
+                                dma_addr_t *dma_handle, gfp_t gfp,
+                                unsigned long attrs)
+{
+       struct xen_grant_dma_data *data;
+       unsigned int i, n_pages = PFN_UP(size);
+       unsigned long pfn;
+       grant_ref_t grant;
+       void *ret;
+
+       data = find_xen_grant_dma_data(dev);
+       if (!data)
+               return NULL;
+
+       if (unlikely(data->broken))
+               return NULL;
+
+       ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp);
+       if (!ret)
+               return NULL;
+
+       pfn = virt_to_pfn(ret);
+
+       if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) {
+               free_pages_exact(ret, n_pages * PAGE_SIZE);
+               return NULL;
+       }
+
+       for (i = 0; i < n_pages; i++) {
+               gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
+                               pfn_to_gfn(pfn + i), 0);
+       }
+
+       *dma_handle = grant_to_dma(grant);
+
+       return ret;
+}
+
+static void xen_grant_dma_free(struct device *dev, size_t size, void *vaddr,
+                              dma_addr_t dma_handle, unsigned long attrs)
+{
+       struct xen_grant_dma_data *data;
+       unsigned int i, n_pages = PFN_UP(size);
+       grant_ref_t grant;
+
+       data = find_xen_grant_dma_data(dev);
+       if (!data)
+               return;
+
+       if (unlikely(data->broken))
+               return;
+
+       grant = dma_to_grant(dma_handle);
+
+       for (i = 0; i < n_pages; i++) {
+               if (unlikely(!gnttab_end_foreign_access_ref(grant + i))) {
+                       dev_alert(dev, "Grant still in use by backend domain, disabled for further use\n");
+                       data->broken = true;
+                       return;
+               }
+       }
+
+       gnttab_free_grant_reference_seq(grant, n_pages);
+
+       free_pages_exact(vaddr, n_pages * PAGE_SIZE);
+}
+
+static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
+                                             dma_addr_t *dma_handle,
+                                             enum dma_data_direction dir,
+                                             gfp_t gfp)
+{
+       void *vaddr;
+
+       vaddr = xen_grant_dma_alloc(dev, size, dma_handle, gfp, 0);
+       if (!vaddr)
+               return NULL;
+
+       return virt_to_page(vaddr);
+}
+
+static void xen_grant_dma_free_pages(struct device *dev, size_t size,
+                                    struct page *vaddr, dma_addr_t dma_handle,
+                                    enum dma_data_direction dir)
+{
+       xen_grant_dma_free(dev, size, page_to_virt(vaddr), dma_handle, 0);
+}
+
+static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
+                                        unsigned long offset, size_t size,
+                                        enum dma_data_direction dir,
+                                        unsigned long attrs)
+{
+       struct xen_grant_dma_data *data;
+       unsigned int i, n_pages = PFN_UP(size);
+       grant_ref_t grant;
+       dma_addr_t dma_handle;
+
+       if (WARN_ON(dir == DMA_NONE))
+               return DMA_MAPPING_ERROR;
+
+       data = find_xen_grant_dma_data(dev);
+       if (!data)
+               return DMA_MAPPING_ERROR;
+
+       if (unlikely(data->broken))
+               return DMA_MAPPING_ERROR;
+
+       if (gnttab_alloc_grant_reference_seq(n_pages, &grant))
+               return DMA_MAPPING_ERROR;
+
+       for (i = 0; i < n_pages; i++) {
+               gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
+                               xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
+       }
+
+       dma_handle = grant_to_dma(grant) + offset;
+
+       return dma_handle;
+}
+
+static void xen_grant_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+                                    size_t size, enum dma_data_direction dir,
+                                    unsigned long attrs)
+{
+       struct xen_grant_dma_data *data;
+       unsigned int i, n_pages = PFN_UP(size);
+       grant_ref_t grant;
+
+       if (WARN_ON(dir == DMA_NONE))
+               return;
+
+       data = find_xen_grant_dma_data(dev);
+       if (!data)
+               return;
+
+       if (unlikely(data->broken))
+               return;
+
+       grant = dma_to_grant(dma_handle);
+
+       for (i = 0; i < n_pages; i++) {
+               if (unlikely(!gnttab_end_foreign_access_ref(grant + i))) {
+                       dev_alert(dev, "Grant still in use by backend domain, disabled for further use\n");
+                       data->broken = true;
+                       return;
+               }
+       }
+
+       gnttab_free_grant_reference_seq(grant, n_pages);
+}
+
+static void xen_grant_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+                                  int nents, enum dma_data_direction dir,
+                                  unsigned long attrs)
+{
+       struct scatterlist *s;
+       unsigned int i;
+
+       if (WARN_ON(dir == DMA_NONE))
+               return;
+
+       for_each_sg(sg, s, nents, i)
+               xen_grant_dma_unmap_page(dev, s->dma_address, sg_dma_len(s), dir,
+                               attrs);
+}
+
+static int xen_grant_dma_map_sg(struct device *dev, struct scatterlist *sg,
+                               int nents, enum dma_data_direction dir,
+                               unsigned long attrs)
+{
+       struct scatterlist *s;
+       unsigned int i;
+
+       if (WARN_ON(dir == DMA_NONE))
+               return -EINVAL;
+
+       for_each_sg(sg, s, nents, i) {
+               s->dma_address = xen_grant_dma_map_page(dev, sg_page(s), s->offset,
+                               s->length, dir, attrs);
+               if (s->dma_address == DMA_MAPPING_ERROR)
+                       goto out;
+
+               sg_dma_len(s) = s->length;
+       }
+
+       return nents;
+
+out:
+       xen_grant_dma_unmap_sg(dev, sg, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
+       sg_dma_len(sg) = 0;
+
+       return -EIO;
+}
+
+static int xen_grant_dma_supported(struct device *dev, u64 mask)
+{
+       return mask == DMA_BIT_MASK(64);
+}
+
+static const struct dma_map_ops xen_grant_dma_ops = {
+       .alloc = xen_grant_dma_alloc,
+       .free = xen_grant_dma_free,
+       .alloc_pages = xen_grant_dma_alloc_pages,
+       .free_pages = xen_grant_dma_free_pages,
+       .mmap = dma_common_mmap,
+       .get_sgtable = dma_common_get_sgtable,
+       .map_page = xen_grant_dma_map_page,
+       .unmap_page = xen_grant_dma_unmap_page,
+       .map_sg = xen_grant_dma_map_sg,
+       .unmap_sg = xen_grant_dma_unmap_sg,
+       .dma_supported = xen_grant_dma_supported,
+};
+
+bool xen_is_grant_dma_device(struct device *dev)
+{
+       struct device_node *iommu_np;
+       bool has_iommu;
+
+       /* XXX Handle only DT devices for now */
+       if (!dev->of_node)
+               return false;
+
+       iommu_np = of_parse_phandle(dev->of_node, "iommus", 0);
+       has_iommu = iommu_np && of_device_is_compatible(iommu_np, "xen,grant-dma");
+       of_node_put(iommu_np);
+
+       return has_iommu;
+}
+
+void xen_grant_setup_dma_ops(struct device *dev)
+{
+       struct xen_grant_dma_data *data;
+       struct of_phandle_args iommu_spec;
+
+       data = find_xen_grant_dma_data(dev);
+       if (data) {
+               dev_err(dev, "Xen grant DMA data is already created\n");
+               return;
+       }
+
+       /* XXX ACPI device unsupported for now */
+       if (!dev->of_node)
+               goto err;
+
+       if (of_parse_phandle_with_args(dev->of_node, "iommus", "#iommu-cells",
+                       0, &iommu_spec)) {
+               dev_err(dev, "Cannot parse iommus property\n");
+               goto err;
+       }
+
+       if (!of_device_is_compatible(iommu_spec.np, "xen,grant-dma") ||
+                       iommu_spec.args_count != 1) {
+               dev_err(dev, "Incompatible IOMMU node\n");
+               of_node_put(iommu_spec.np);
+               goto err;
+       }
+
+       of_node_put(iommu_spec.np);
+
+       data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+       if (!data)
+               goto err;
+
+       /*
+        * The endpoint ID here means the ID of the domain where the corresponding
+        * backend is running
+        */
+       data->backend_domid = iommu_spec.args[0];
+
+       if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
+                       GFP_KERNEL))) {
+               dev_err(dev, "Cannot store Xen grant DMA data\n");
+               goto err;
+       }
+
+       dev->dma_ops = &xen_grant_dma_ops;
+
+       return;
+
+err:
+       dev_err(dev, "Cannot set up Xen grant DMA ops, retain platform DMA ops\n");
+}
+
+MODULE_DESCRIPTION("Xen grant DMA-mapping layer");
+MODULE_AUTHOR("Juergen Gross <jgross@suse.com>");
+MODULE_LICENSE("GPL");
index 7a18292..738029d 100644 (file)
@@ -33,6 +33,7 @@
 
 #define pr_fmt(fmt) "xen:" KBUILD_MODNAME ": " fmt
 
+#include <linux/bitmap.h>
 #include <linux/memblock.h>
 #include <linux/sched.h>
 #include <linux/mm.h>
 
 static grant_ref_t **gnttab_list;
 static unsigned int nr_grant_frames;
+
+/*
+ * Handling of free grants:
+ *
+ * Free grants are in a simple list anchored in gnttab_free_head. They are
+ * linked by grant ref, the last element contains GNTTAB_LIST_END. The number
+ * of free entries is stored in gnttab_free_count.
+ * Additionally there is a bitmap of free entries anchored in
+ * gnttab_free_bitmap. This is being used for simplifying allocation of
+ * multiple consecutive grants, which is needed e.g. for support of virtio.
+ * gnttab_last_free is used to add free entries of new frames at the end of
+ * the free list.
+ * gnttab_free_tail_ptr specifies the variable which references the start
+ * of consecutive free grants ending with gnttab_last_free. This pointer is
+ * updated in a rather defensive way, in order to avoid performance hits in
+ * hot paths.
+ * All those variables are protected by gnttab_list_lock.
+ */
 static int gnttab_free_count;
-static grant_ref_t gnttab_free_head;
+static unsigned int gnttab_size;
+static grant_ref_t gnttab_free_head = GNTTAB_LIST_END;
+static grant_ref_t gnttab_last_free = GNTTAB_LIST_END;
+static grant_ref_t *gnttab_free_tail_ptr;
+static unsigned long *gnttab_free_bitmap;
 static DEFINE_SPINLOCK(gnttab_list_lock);
+
 struct grant_frames xen_auto_xlat_grant_frames;
 static unsigned int xen_gnttab_version;
 module_param_named(version, xen_gnttab_version, uint, 0);
@@ -168,16 +192,116 @@ static int get_free_entries(unsigned count)
 
        ref = head = gnttab_free_head;
        gnttab_free_count -= count;
-       while (count-- > 1)
-               head = gnttab_entry(head);
+       while (count--) {
+               bitmap_clear(gnttab_free_bitmap, head, 1);
+               if (gnttab_free_tail_ptr == __gnttab_entry(head))
+                       gnttab_free_tail_ptr = &gnttab_free_head;
+               if (count)
+                       head = gnttab_entry(head);
+       }
        gnttab_free_head = gnttab_entry(head);
        gnttab_entry(head) = GNTTAB_LIST_END;
 
+       if (!gnttab_free_count) {
+               gnttab_last_free = GNTTAB_LIST_END;
+               gnttab_free_tail_ptr = NULL;
+       }
+
        spin_unlock_irqrestore(&gnttab_list_lock, flags);
 
        return ref;
 }
 
+static int get_seq_entry_count(void)
+{
+       if (gnttab_last_free == GNTTAB_LIST_END || !gnttab_free_tail_ptr ||
+           *gnttab_free_tail_ptr == GNTTAB_LIST_END)
+               return 0;
+
+       return gnttab_last_free - *gnttab_free_tail_ptr + 1;
+}
+
+/* Rebuilds the free grant list and tries to find count consecutive entries. */
+static int get_free_seq(unsigned int count)
+{
+       int ret = -ENOSPC;
+       unsigned int from, to;
+       grant_ref_t *last;
+
+       gnttab_free_tail_ptr = &gnttab_free_head;
+       last = &gnttab_free_head;
+
+       for (from = find_first_bit(gnttab_free_bitmap, gnttab_size);
+            from < gnttab_size;
+            from = find_next_bit(gnttab_free_bitmap, gnttab_size, to + 1)) {
+               to = find_next_zero_bit(gnttab_free_bitmap, gnttab_size,
+                                       from + 1);
+               if (ret < 0 && to - from >= count) {
+                       ret = from;
+                       bitmap_clear(gnttab_free_bitmap, ret, count);
+                       from += count;
+                       gnttab_free_count -= count;
+                       if (from == to)
+                               continue;
+               }
+
+               /*
+                * Recreate the free list in order to have it properly sorted.
+                * This is needed to make sure that the free tail has the maximum
+                * possible size.
+                */
+               while (from < to) {
+                       *last = from;
+                       last = __gnttab_entry(from);
+                       gnttab_last_free = from;
+                       from++;
+               }
+               if (to < gnttab_size)
+                       gnttab_free_tail_ptr = __gnttab_entry(to - 1);
+       }
+
+       *last = GNTTAB_LIST_END;
+       if (gnttab_last_free != gnttab_size - 1)
+               gnttab_free_tail_ptr = NULL;
+
+       return ret;
+}
+
+static int get_free_entries_seq(unsigned int count)
+{
+       unsigned long flags;
+       int ret = 0;
+
+       spin_lock_irqsave(&gnttab_list_lock, flags);
+
+       if (gnttab_free_count < count) {
+               ret = gnttab_expand(count - gnttab_free_count);
+               if (ret < 0)
+                       goto out;
+       }
+
+       if (get_seq_entry_count() < count) {
+               ret = get_free_seq(count);
+               if (ret >= 0)
+                       goto out;
+               ret = gnttab_expand(count - get_seq_entry_count());
+               if (ret < 0)
+                       goto out;
+       }
+
+       ret = *gnttab_free_tail_ptr;
+       *gnttab_free_tail_ptr = gnttab_entry(ret + count - 1);
+       gnttab_free_count -= count;
+       if (!gnttab_free_count)
+               gnttab_free_tail_ptr = NULL;
+       bitmap_clear(gnttab_free_bitmap, ret, count);
+
+ out:
+       spin_unlock_irqrestore(&gnttab_list_lock, flags);
+
+       return ret;
+}
+
 static void do_free_callbacks(void)
 {
        struct gnttab_free_callback *callback, *next;
@@ -204,21 +328,51 @@ static inline void check_free_callbacks(void)
                do_free_callbacks();
 }
 
-static void put_free_entry(grant_ref_t ref)
+static void put_free_entry_locked(grant_ref_t ref)
 {
-       unsigned long flags;
-
        if (unlikely(ref < GNTTAB_NR_RESERVED_ENTRIES))
                return;
 
-       spin_lock_irqsave(&gnttab_list_lock, flags);
        gnttab_entry(ref) = gnttab_free_head;
        gnttab_free_head = ref;
+       if (!gnttab_free_count)
+               gnttab_last_free = ref;
+       if (gnttab_free_tail_ptr == &gnttab_free_head)
+               gnttab_free_tail_ptr = __gnttab_entry(ref);
        gnttab_free_count++;
+       bitmap_set(gnttab_free_bitmap, ref, 1);
+}
+
+static void put_free_entry(grant_ref_t ref)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&gnttab_list_lock, flags);
+       put_free_entry_locked(ref);
        check_free_callbacks();
        spin_unlock_irqrestore(&gnttab_list_lock, flags);
 }
 
+static void gnttab_set_free(unsigned int start, unsigned int n)
+{
+       unsigned int i;
+
+       for (i = start; i < start + n - 1; i++)
+               gnttab_entry(i) = i + 1;
+
+       gnttab_entry(i) = GNTTAB_LIST_END;
+       if (!gnttab_free_count) {
+               gnttab_free_head = start;
+               gnttab_free_tail_ptr = &gnttab_free_head;
+       } else {
+               gnttab_entry(gnttab_last_free) = start;
+       }
+       gnttab_free_count += n;
+       gnttab_last_free = i;
+
+       bitmap_set(gnttab_free_bitmap, start, n);
+}
+
 /*
  * Following applies to gnttab_update_entry_v1 and gnttab_update_entry_v2.
  * Introducing a valid entry into the grant table:
@@ -450,23 +604,31 @@ void gnttab_free_grant_references(grant_ref_t head)
 {
        grant_ref_t ref;
        unsigned long flags;
-       int count = 1;
-       if (head == GNTTAB_LIST_END)
-               return;
+
        spin_lock_irqsave(&gnttab_list_lock, flags);
-       ref = head;
-       while (gnttab_entry(ref) != GNTTAB_LIST_END) {
-               ref = gnttab_entry(ref);
-               count++;
+       while (head != GNTTAB_LIST_END) {
+               ref = gnttab_entry(head);
+               put_free_entry_locked(head);
+               head = ref;
        }
-       gnttab_entry(ref) = gnttab_free_head;
-       gnttab_free_head = head;
-       gnttab_free_count += count;
        check_free_callbacks();
        spin_unlock_irqrestore(&gnttab_list_lock, flags);
 }
 EXPORT_SYMBOL_GPL(gnttab_free_grant_references);
 
+void gnttab_free_grant_reference_seq(grant_ref_t head, unsigned int count)
+{
+       unsigned long flags;
+       unsigned int i;
+
+       spin_lock_irqsave(&gnttab_list_lock, flags);
+       for (i = count; i > 0; i--)
+               put_free_entry_locked(head + i - 1);
+       check_free_callbacks();
+       spin_unlock_irqrestore(&gnttab_list_lock, flags);
+}
+EXPORT_SYMBOL_GPL(gnttab_free_grant_reference_seq);
+
 int gnttab_alloc_grant_references(u16 count, grant_ref_t *head)
 {
        int h = get_free_entries(count);
@@ -480,6 +642,24 @@ int gnttab_alloc_grant_references(u16 count, grant_ref_t *head)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_grant_references);
 
+int gnttab_alloc_grant_reference_seq(unsigned int count, grant_ref_t *first)
+{
+       int h;
+
+       if (count == 1)
+               h = get_free_entries(1);
+       else
+               h = get_free_entries_seq(count);
+
+       if (h < 0)
+               return -ENOSPC;
+
+       *first = h;
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_alloc_grant_reference_seq);
+
 int gnttab_empty_grant_references(const grant_ref_t *private_head)
 {
        return (*private_head == GNTTAB_LIST_END);
@@ -572,16 +752,13 @@ static int grow_gnttab_list(unsigned int more_frames)
                        goto grow_nomem;
        }
 
+       gnttab_set_free(gnttab_size, extra_entries);
 
-       for (i = grefs_per_frame * nr_grant_frames;
-            i < grefs_per_frame * new_nr_grant_frames - 1; i++)
-               gnttab_entry(i) = i + 1;
-
-       gnttab_entry(i) = gnttab_free_head;
-       gnttab_free_head = grefs_per_frame * nr_grant_frames;
-       gnttab_free_count += extra_entries;
+       if (!gnttab_free_tail_ptr)
+               gnttab_free_tail_ptr = __gnttab_entry(gnttab_size);
 
        nr_grant_frames = new_nr_grant_frames;
+       gnttab_size += extra_entries;
 
        check_free_callbacks();
 
@@ -1424,20 +1601,20 @@ static int gnttab_expand(unsigned int req_entries)
 int gnttab_init(void)
 {
        int i;
-       unsigned long max_nr_grant_frames;
+       unsigned long max_nr_grant_frames, max_nr_grefs;
        unsigned int max_nr_glist_frames, nr_glist_frames;
-       unsigned int nr_init_grefs;
        int ret;
 
        gnttab_request_version();
        max_nr_grant_frames = gnttab_max_grant_frames();
+       max_nr_grefs = max_nr_grant_frames *
+                       gnttab_interface->grefs_per_grant_frame;
        nr_grant_frames = 1;
 
        /* Determine the maximum number of frames required for the
         * grant reference free list on the current hypervisor.
         */
-       max_nr_glist_frames = (max_nr_grant_frames *
-                              gnttab_interface->grefs_per_grant_frame / RPP);
+       max_nr_glist_frames = max_nr_grefs / RPP;
 
        gnttab_list = kmalloc_array(max_nr_glist_frames,
                                    sizeof(grant_ref_t *),
@@ -1454,6 +1631,12 @@ int gnttab_init(void)
                }
        }
 
+       gnttab_free_bitmap = bitmap_zalloc(max_nr_grefs, GFP_KERNEL);
+       if (!gnttab_free_bitmap) {
+               ret = -ENOMEM;
+               goto ini_nomem;
+       }
+
        ret = arch_gnttab_init(max_nr_grant_frames,
                               nr_status_frames(max_nr_grant_frames));
        if (ret < 0)
@@ -1464,15 +1647,10 @@ int gnttab_init(void)
                goto ini_nomem;
        }
 
-       nr_init_grefs = nr_grant_frames *
-                       gnttab_interface->grefs_per_grant_frame;
-
-       for (i = GNTTAB_NR_RESERVED_ENTRIES; i < nr_init_grefs - 1; i++)
-               gnttab_entry(i) = i + 1;
+       gnttab_size = nr_grant_frames * gnttab_interface->grefs_per_grant_frame;
 
-       gnttab_entry(nr_init_grefs - 1) = GNTTAB_LIST_END;
-       gnttab_free_count = nr_init_grefs - GNTTAB_NR_RESERVED_ENTRIES;
-       gnttab_free_head  = GNTTAB_NR_RESERVED_ENTRIES;
+       gnttab_set_free(GNTTAB_NR_RESERVED_ENTRIES,
+                       gnttab_size - GNTTAB_NR_RESERVED_ENTRIES);
 
        printk("Grant table initialized\n");
        return 0;
@@ -1481,6 +1659,7 @@ int gnttab_init(void)
        for (i--; i >= 0; i--)
                free_page((unsigned long)gnttab_list[i]);
        kfree(gnttab_list);
+       bitmap_free(gnttab_free_bitmap);
        return ret;
 }
 EXPORT_SYMBOL_GPL(gnttab_init);
index 34742c6..f17c4c0 100644 (file)
@@ -261,7 +261,6 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 
        return 0;
 }
-EXPORT_SYMBOL_GPL(xen_xlate_map_ballooned_pages);
 
 struct remap_pfn {
        struct mm_struct *mm;
index 1c8dc69..cebba4e 100644 (file)
@@ -62,12 +62,12 @@ void v9fs_cache_inode_get_cookie(struct inode *inode)
        version = cpu_to_le32(v9inode->qid.version);
        path = cpu_to_le64(v9inode->qid.path);
        v9ses = v9fs_inode2v9ses(inode);
-       v9inode->netfs_ctx.cache =
+       v9inode->netfs.cache =
                fscache_acquire_cookie(v9fs_session_cache(v9ses),
                                       0,
                                       &path, sizeof(path),
                                       &version, sizeof(version),
-                                      i_size_read(&v9inode->vfs_inode));
+                                      i_size_read(&v9inode->netfs.inode));
 
        p9_debug(P9_DEBUG_FSC, "inode %p get cookie %p\n",
                 inode, v9fs_inode_cookie(v9inode));
index e28ddf7..0129de2 100644 (file)
@@ -625,7 +625,7 @@ static void v9fs_inode_init_once(void *foo)
        struct v9fs_inode *v9inode = (struct v9fs_inode *)foo;
 
        memset(&v9inode->qid, 0, sizeof(v9inode->qid));
-       inode_init_once(&v9inode->vfs_inode);
+       inode_init_once(&v9inode->netfs.inode);
 }
 
 /**
index ec0e8df..1b219c2 100644 (file)
@@ -109,11 +109,7 @@ struct v9fs_session_info {
 #define V9FS_INO_INVALID_ATTR 0x01
 
 struct v9fs_inode {
-       struct {
-               /* These must be contiguous */
-               struct inode    vfs_inode;      /* the VFS's inode record */
-               struct netfs_i_context netfs_ctx; /* Netfslib context */
-       };
+       struct netfs_inode netfs; /* Netfslib context and vfs inode */
        struct p9_qid qid;
        unsigned int cache_validity;
        struct p9_fid *writeback_fid;
@@ -122,13 +118,13 @@ struct v9fs_inode {
 
 static inline struct v9fs_inode *V9FS_I(const struct inode *inode)
 {
-       return container_of(inode, struct v9fs_inode, vfs_inode);
+       return container_of(inode, struct v9fs_inode, netfs.inode);
 }
 
 static inline struct fscache_cookie *v9fs_inode_cookie(struct v9fs_inode *v9inode)
 {
 #ifdef CONFIG_9P_FSCACHE
-       return netfs_i_cookie(&v9inode->vfs_inode);
+       return netfs_i_cookie(&v9inode->netfs.inode);
 #else
        return NULL;
 #endif
index 8ce82ff..90c6c1b 100644 (file)
@@ -140,7 +140,7 @@ static void v9fs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
            transferred_or_error != -ENOBUFS) {
                version = cpu_to_le32(v9inode->qid.version);
                fscache_invalidate(v9fs_inode_cookie(v9inode), &version,
-                                  i_size_read(&v9inode->vfs_inode), 0);
+                                  i_size_read(&v9inode->netfs.inode), 0);
        }
 }
 
index 55367ec..e660c63 100644 (file)
@@ -234,7 +234,7 @@ struct inode *v9fs_alloc_inode(struct super_block *sb)
        v9inode->writeback_fid = NULL;
        v9inode->cache_validity = 0;
        mutex_init(&v9inode->v_mutex);
-       return &v9inode->vfs_inode;
+       return &v9inode->netfs.inode;
 }
 
 /**
@@ -252,7 +252,7 @@ void v9fs_free_inode(struct inode *inode)
  */
 static void v9fs_set_netfs_context(struct inode *inode)
 {
-       netfs_i_context_init(inode, &v9fs_req_ops);
+       netfs_inode_init(inode, &v9fs_req_ops);
 }
 
 int v9fs_init_inode(struct v9fs_session_info *v9ses,
index 1b4d580..a484fa6 100644 (file)
@@ -30,7 +30,7 @@ void afs_invalidate_mmap_work(struct work_struct *work)
 {
        struct afs_vnode *vnode = container_of(work, struct afs_vnode, cb_work);
 
-       unmap_mapping_pages(vnode->vfs_inode.i_mapping, 0, 0, false);
+       unmap_mapping_pages(vnode->netfs.inode.i_mapping, 0, 0, false);
 }
 
 void afs_server_init_callback_work(struct work_struct *work)
index 79f6b74..56ae5cd 100644 (file)
@@ -109,7 +109,7 @@ struct afs_lookup_cookie {
  */
 static void afs_dir_read_cleanup(struct afs_read *req)
 {
-       struct address_space *mapping = req->vnode->vfs_inode.i_mapping;
+       struct address_space *mapping = req->vnode->netfs.inode.i_mapping;
        struct folio *folio;
        pgoff_t last = req->nr_pages - 1;
 
@@ -153,7 +153,7 @@ static bool afs_dir_check_folio(struct afs_vnode *dvnode, struct folio *folio,
                block = kmap_local_folio(folio, offset);
                if (block->hdr.magic != AFS_DIR_MAGIC) {
                        printk("kAFS: %s(%lx): [%llx] bad magic %zx/%zx is %04hx\n",
-                              __func__, dvnode->vfs_inode.i_ino,
+                              __func__, dvnode->netfs.inode.i_ino,
                               pos, offset, size, ntohs(block->hdr.magic));
                        trace_afs_dir_check_failed(dvnode, pos + offset, i_size);
                        kunmap_local(block);
@@ -183,7 +183,7 @@ error:
 static void afs_dir_dump(struct afs_vnode *dvnode, struct afs_read *req)
 {
        union afs_xdr_dir_block *block;
-       struct address_space *mapping = dvnode->vfs_inode.i_mapping;
+       struct address_space *mapping = dvnode->netfs.inode.i_mapping;
        struct folio *folio;
        pgoff_t last = req->nr_pages - 1;
        size_t offset, size;
@@ -217,7 +217,7 @@ static void afs_dir_dump(struct afs_vnode *dvnode, struct afs_read *req)
  */
 static int afs_dir_check(struct afs_vnode *dvnode, struct afs_read *req)
 {
-       struct address_space *mapping = dvnode->vfs_inode.i_mapping;
+       struct address_space *mapping = dvnode->netfs.inode.i_mapping;
        struct folio *folio;
        pgoff_t last = req->nr_pages - 1;
        int ret = 0;
@@ -269,7 +269,7 @@ static int afs_dir_open(struct inode *inode, struct file *file)
 static struct afs_read *afs_read_dir(struct afs_vnode *dvnode, struct key *key)
        __acquires(&dvnode->validate_lock)
 {
-       struct address_space *mapping = dvnode->vfs_inode.i_mapping;
+       struct address_space *mapping = dvnode->netfs.inode.i_mapping;
        struct afs_read *req;
        loff_t i_size;
        int nr_pages, i;
@@ -287,7 +287,7 @@ static struct afs_read *afs_read_dir(struct afs_vnode *dvnode, struct key *key)
        req->cleanup = afs_dir_read_cleanup;
 
 expand:
-       i_size = i_size_read(&dvnode->vfs_inode);
+       i_size = i_size_read(&dvnode->netfs.inode);
        if (i_size < 2048) {
                ret = afs_bad(dvnode, afs_file_error_dir_small);
                goto error;
@@ -305,7 +305,7 @@ expand:
        req->actual_len = i_size; /* May change */
        req->len = nr_pages * PAGE_SIZE; /* We can ask for more than there is */
        req->data_version = dvnode->status.data_version; /* May change */
-       iov_iter_xarray(&req->def_iter, READ, &dvnode->vfs_inode.i_mapping->i_pages,
+       iov_iter_xarray(&req->def_iter, READ, &dvnode->netfs.inode.i_mapping->i_pages,
                        0, i_size);
        req->iter = &req->def_iter;
 
@@ -897,7 +897,7 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
 
 out_op:
        if (op->error == 0) {
-               inode = &op->file[1].vnode->vfs_inode;
+               inode = &op->file[1].vnode->netfs.inode;
                op->file[1].vnode = NULL;
        }
 
@@ -1139,7 +1139,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
        afs_stat_v(dir, n_reval);
 
        /* search the directory for this vnode */
-       ret = afs_do_lookup_one(&dir->vfs_inode, dentry, &fid, key, &dir_version);
+       ret = afs_do_lookup_one(&dir->netfs.inode, dentry, &fid, key, &dir_version);
        switch (ret) {
        case 0:
                /* the filename maps to something */
@@ -1170,7 +1170,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
                        _debug("%pd: file deleted (uq %u -> %u I:%u)",
                               dentry, fid.unique,
                               vnode->fid.unique,
-                              vnode->vfs_inode.i_generation);
+                              vnode->netfs.inode.i_generation);
                        goto not_found;
                }
                goto out_valid;
@@ -1368,7 +1368,7 @@ static void afs_dir_remove_subdir(struct dentry *dentry)
        if (d_really_is_positive(dentry)) {
                struct afs_vnode *vnode = AFS_FS_I(d_inode(dentry));
 
-               clear_nlink(&vnode->vfs_inode);
+               clear_nlink(&vnode->netfs.inode);
                set_bit(AFS_VNODE_DELETED, &vnode->flags);
                clear_bit(AFS_VNODE_CB_PROMISED, &vnode->flags);
                clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
@@ -1487,8 +1487,8 @@ static void afs_dir_remove_link(struct afs_operation *op)
                /* Already done */
        } else if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) {
                write_seqlock(&vnode->cb_lock);
-               drop_nlink(&vnode->vfs_inode);
-               if (vnode->vfs_inode.i_nlink == 0) {
+               drop_nlink(&vnode->netfs.inode);
+               if (vnode->netfs.inode.i_nlink == 0) {
                        set_bit(AFS_VNODE_DELETED, &vnode->flags);
                        __afs_break_callback(vnode, afs_cb_break_for_unlink);
                }
@@ -1504,7 +1504,7 @@ static void afs_dir_remove_link(struct afs_operation *op)
                        op->error = ret;
        }
 
-       _debug("nlink %d [val %d]", vnode->vfs_inode.i_nlink, op->error);
+       _debug("nlink %d [val %d]", vnode->netfs.inode.i_nlink, op->error);
 }
 
 static void afs_unlink_success(struct afs_operation *op)
@@ -1680,8 +1680,8 @@ static void afs_link_success(struct afs_operation *op)
        afs_update_dentry_version(op, dvp, op->dentry);
        if (op->dentry_2->d_parent == op->dentry->d_parent)
                afs_update_dentry_version(op, dvp, op->dentry_2);
-       ihold(&vp->vnode->vfs_inode);
-       d_instantiate(op->dentry, &vp->vnode->vfs_inode);
+       ihold(&vp->vnode->netfs.inode);
+       d_instantiate(op->dentry, &vp->vnode->netfs.inode);
 }
 
 static void afs_link_put(struct afs_operation *op)
index d98e109..0ab7752 100644 (file)
@@ -109,7 +109,7 @@ static void afs_clear_contig_bits(union afs_xdr_dir_block *block,
  */
 static struct folio *afs_dir_get_folio(struct afs_vnode *vnode, pgoff_t index)
 {
-       struct address_space *mapping = vnode->vfs_inode.i_mapping;
+       struct address_space *mapping = vnode->netfs.inode.i_mapping;
        struct folio *folio;
 
        folio = __filemap_get_folio(mapping, index,
@@ -216,7 +216,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 
        _enter(",,{%d,%s},", name->len, name->name);
 
-       i_size = i_size_read(&vnode->vfs_inode);
+       i_size = i_size_read(&vnode->netfs.inode);
        if (i_size > AFS_DIR_BLOCK_SIZE * AFS_DIR_MAX_BLOCKS ||
            (i_size & (AFS_DIR_BLOCK_SIZE - 1))) {
                clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
@@ -336,7 +336,7 @@ found_space:
        if (b < AFS_DIR_BLOCKS_WITH_CTR)
                meta->meta.alloc_ctrs[b] -= need_slots;
 
-       inode_inc_iversion_raw(&vnode->vfs_inode);
+       inode_inc_iversion_raw(&vnode->netfs.inode);
        afs_stat_v(vnode, n_dir_cr);
        _debug("Insert %s in %u[%u]", name->name, b, slot);
 
@@ -383,7 +383,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 
        _enter(",,{%d,%s},", name->len, name->name);
 
-       i_size = i_size_read(&vnode->vfs_inode);
+       i_size = i_size_read(&vnode->netfs.inode);
        if (i_size < AFS_DIR_BLOCK_SIZE ||
            i_size > AFS_DIR_BLOCK_SIZE * AFS_DIR_MAX_BLOCKS ||
            (i_size & (AFS_DIR_BLOCK_SIZE - 1))) {
@@ -463,7 +463,7 @@ found_dirent:
        if (b < AFS_DIR_BLOCKS_WITH_CTR)
                meta->meta.alloc_ctrs[b] += need_slots;
 
-       inode_set_iversion_raw(&vnode->vfs_inode, vnode->status.data_version);
+       inode_set_iversion_raw(&vnode->netfs.inode, vnode->status.data_version);
        afs_stat_v(vnode, n_dir_rm);
        _debug("Remove %s from %u[%u]", name->name, b, slot);
 
index 45cfd50..bb5807e 100644
@@ -131,7 +131,7 @@ int afs_sillyrename(struct afs_vnode *dvnode, struct afs_vnode *vnode,
                        goto out;
        } while (!d_is_negative(sdentry));
 
-       ihold(&vnode->vfs_inode);
+       ihold(&vnode->netfs.inode);
 
        ret = afs_do_silly_rename(dvnode, vnode, dentry, sdentry, key);
        switch (ret) {
@@ -148,7 +148,7 @@ int afs_sillyrename(struct afs_vnode *dvnode, struct afs_vnode *vnode,
                d_drop(sdentry);
        }
 
-       iput(&vnode->vfs_inode);
+       iput(&vnode->netfs.inode);
        dput(sdentry);
 out:
        _leave(" = %d", ret);
index f120bcb..3a5bbff 100644
@@ -76,7 +76,7 @@ struct inode *afs_iget_pseudo_dir(struct super_block *sb, bool root)
        /* there shouldn't be an existing inode */
        BUG_ON(!(inode->i_state & I_NEW));
 
-       netfs_i_context_init(inode, NULL);
+       netfs_inode_init(inode, NULL);
        inode->i_size           = 0;
        inode->i_mode           = S_IFDIR | S_IRUGO | S_IXUGO;
        if (root) {
index a8e8832..4de7af7 100644
@@ -194,7 +194,7 @@ int afs_release(struct inode *inode, struct file *file)
                afs_put_wb_key(af->wb);
 
        if ((file->f_mode & FMODE_WRITE)) {
-               i_size = i_size_read(&vnode->vfs_inode);
+               i_size = i_size_read(&vnode->netfs.inode);
                afs_set_cache_aux(vnode, &aux);
                fscache_unuse_cookie(afs_vnode_cache(vnode), &aux, &i_size);
        } else {
@@ -325,7 +325,7 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq)
        fsreq->iter     = &fsreq->def_iter;
 
        iov_iter_xarray(&fsreq->def_iter, READ,
-                       &fsreq->vnode->vfs_inode.i_mapping->i_pages,
+                       &fsreq->vnode->netfs.inode.i_mapping->i_pages,
                        fsreq->pos, fsreq->len);
 
        afs_fetch_data(fsreq->vnode, fsreq);
index d222dfb..7a3803c 100644
@@ -232,14 +232,14 @@ int afs_put_operation(struct afs_operation *op)
        if (op->file[1].modification && op->file[1].vnode != op->file[0].vnode)
                clear_bit(AFS_VNODE_MODIFYING, &op->file[1].vnode->flags);
        if (op->file[0].put_vnode)
-               iput(&op->file[0].vnode->vfs_inode);
+               iput(&op->file[0].vnode->netfs.inode);
        if (op->file[1].put_vnode)
-               iput(&op->file[1].vnode->vfs_inode);
+               iput(&op->file[1].vnode->netfs.inode);
 
        if (op->more_files) {
                for (i = 0; i < op->nr_files - 2; i++)
                        if (op->more_files[i].put_vnode)
-                               iput(&op->more_files[i].vnode->vfs_inode);
+                               iput(&op->more_files[i].vnode->netfs.inode);
                kfree(op->more_files);
        }
 
index 30b0662..22811e9 100644
@@ -58,7 +58,7 @@ static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode *paren
  */
 static void afs_set_netfs_context(struct afs_vnode *vnode)
 {
-       netfs_i_context_init(&vnode->vfs_inode, &afs_req_ops);
+       netfs_inode_init(&vnode->netfs.inode, &afs_req_ops);
 }
 
 /*
@@ -96,7 +96,7 @@ static int afs_inode_init_from_status(struct afs_operation *op,
        inode->i_flags |= S_NOATIME;
        inode->i_uid = make_kuid(&init_user_ns, status->owner);
        inode->i_gid = make_kgid(&init_user_ns, status->group);
-       set_nlink(&vnode->vfs_inode, status->nlink);
+       set_nlink(&vnode->netfs.inode, status->nlink);
 
        switch (status->type) {
        case AFS_FTYPE_FILE:
@@ -139,7 +139,7 @@ static int afs_inode_init_from_status(struct afs_operation *op,
        afs_set_netfs_context(vnode);
 
        vnode->invalid_before   = status->data_version;
-       inode_set_iversion_raw(&vnode->vfs_inode, status->data_version);
+       inode_set_iversion_raw(&vnode->netfs.inode, status->data_version);
 
        if (!vp->scb.have_cb) {
                /* it's a symlink we just created (the fileserver
@@ -163,7 +163,7 @@ static void afs_apply_status(struct afs_operation *op,
 {
        struct afs_file_status *status = &vp->scb.status;
        struct afs_vnode *vnode = vp->vnode;
-       struct inode *inode = &vnode->vfs_inode;
+       struct inode *inode = &vnode->netfs.inode;
        struct timespec64 t;
        umode_t mode;
        bool data_changed = false;
@@ -246,7 +246,7 @@ static void afs_apply_status(struct afs_operation *op,
                 * idea of what the size should be that's not the same as
                 * what's on the server.
                 */
-               vnode->netfs_ctx.remote_i_size = status->size;
+               vnode->netfs.remote_i_size = status->size;
                if (change_size) {
                        afs_set_i_size(vnode, status->size);
                        inode->i_ctime = t;
@@ -289,7 +289,7 @@ void afs_vnode_commit_status(struct afs_operation *op, struct afs_vnode_param *v
                 */
                if (vp->scb.status.abort_code == VNOVNODE) {
                        set_bit(AFS_VNODE_DELETED, &vnode->flags);
-                       clear_nlink(&vnode->vfs_inode);
+                       clear_nlink(&vnode->netfs.inode);
                        __afs_break_callback(vnode, afs_cb_break_for_deleted);
                        op->flags &= ~AFS_OPERATION_DIR_CONFLICT;
                }
@@ -306,8 +306,8 @@ void afs_vnode_commit_status(struct afs_operation *op, struct afs_vnode_param *v
                if (vp->scb.have_cb)
                        afs_apply_callback(op, vp);
        } else if (vp->op_unlinked && !(op->flags & AFS_OPERATION_DIR_CONFLICT)) {
-               drop_nlink(&vnode->vfs_inode);
-               if (vnode->vfs_inode.i_nlink == 0) {
+               drop_nlink(&vnode->netfs.inode);
+               if (vnode->netfs.inode.i_nlink == 0) {
                        set_bit(AFS_VNODE_DELETED, &vnode->flags);
                        __afs_break_callback(vnode, afs_cb_break_for_deleted);
                }
@@ -326,7 +326,7 @@ static void afs_fetch_status_success(struct afs_operation *op)
        struct afs_vnode *vnode = vp->vnode;
        int ret;
 
-       if (vnode->vfs_inode.i_state & I_NEW) {
+       if (vnode->netfs.inode.i_state & I_NEW) {
                ret = afs_inode_init_from_status(op, vp, vnode);
                op->error = ret;
                if (ret == 0)
@@ -430,7 +430,7 @@ static void afs_get_inode_cache(struct afs_vnode *vnode)
        struct afs_vnode_cache_aux aux;
 
        if (vnode->status.type != AFS_FTYPE_FILE) {
-               vnode->netfs_ctx.cache = NULL;
+               vnode->netfs.cache = NULL;
                return;
        }
 
@@ -457,7 +457,7 @@ static void afs_get_inode_cache(struct afs_vnode *vnode)
 struct inode *afs_iget(struct afs_operation *op, struct afs_vnode_param *vp)
 {
        struct afs_vnode_param *dvp = &op->file[0];
-       struct super_block *sb = dvp->vnode->vfs_inode.i_sb;
+       struct super_block *sb = dvp->vnode->netfs.inode.i_sb;
        struct afs_vnode *vnode;
        struct inode *inode;
        int ret;
@@ -582,10 +582,10 @@ static void afs_zap_data(struct afs_vnode *vnode)
        /* nuke all the non-dirty pages that aren't locked, mapped or being
         * written back in a regular file and completely discard the pages in a
         * directory or symlink */
-       if (S_ISREG(vnode->vfs_inode.i_mode))
-               invalidate_remote_inode(&vnode->vfs_inode);
+       if (S_ISREG(vnode->netfs.inode.i_mode))
+               invalidate_remote_inode(&vnode->netfs.inode);
        else
-               invalidate_inode_pages2(vnode->vfs_inode.i_mapping);
+               invalidate_inode_pages2(vnode->netfs.inode.i_mapping);
 }
 
 /*
@@ -683,8 +683,8 @@ int afs_validate(struct afs_vnode *vnode, struct key *key)
               key_serial(key));
 
        if (unlikely(test_bit(AFS_VNODE_DELETED, &vnode->flags))) {
-               if (vnode->vfs_inode.i_nlink)
-                       clear_nlink(&vnode->vfs_inode);
+               if (vnode->netfs.inode.i_nlink)
+                       clear_nlink(&vnode->netfs.inode);
                goto valid;
        }
 
@@ -826,7 +826,7 @@ void afs_evict_inode(struct inode *inode)
 static void afs_setattr_success(struct afs_operation *op)
 {
        struct afs_vnode_param *vp = &op->file[0];
-       struct inode *inode = &vp->vnode->vfs_inode;
+       struct inode *inode = &vp->vnode->netfs.inode;
        loff_t old_i_size = i_size_read(inode);
 
        op->setattr.old_i_size = old_i_size;
@@ -843,7 +843,7 @@ static void afs_setattr_success(struct afs_operation *op)
 static void afs_setattr_edit_file(struct afs_operation *op)
 {
        struct afs_vnode_param *vp = &op->file[0];
-       struct inode *inode = &vp->vnode->vfs_inode;
+       struct inode *inode = &vp->vnode->netfs.inode;
 
        if (op->setattr.attr->ia_valid & ATTR_SIZE) {
                loff_t size = op->setattr.attr->ia_size;
@@ -875,7 +875,7 @@ int afs_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
                ATTR_MTIME | ATTR_MTIME_SET | ATTR_TIMES_SET | ATTR_TOUCH;
        struct afs_operation *op;
        struct afs_vnode *vnode = AFS_FS_I(d_inode(dentry));
-       struct inode *inode = &vnode->vfs_inode;
+       struct inode *inode = &vnode->netfs.inode;
        loff_t i_size;
        int ret;
 
index a309959..984b113 100644
@@ -619,12 +619,7 @@ enum afs_lock_state {
  * leak from one inode to another.
  */
 struct afs_vnode {
-       struct {
-               /* These must be contiguous */
-               struct inode    vfs_inode;      /* the VFS's inode record */
-               struct netfs_i_context netfs_ctx; /* Netfslib context */
-       };
-
+       struct netfs_inode      netfs;          /* Netfslib context and vfs inode */
        struct afs_volume       *volume;        /* volume on which vnode resides */
        struct afs_fid          fid;            /* the file identifier for this inode */
        struct afs_file_status  status;         /* AFS status info for this file */
@@ -675,7 +670,7 @@ struct afs_vnode {
 static inline struct fscache_cookie *afs_vnode_cache(struct afs_vnode *vnode)
 {
 #ifdef CONFIG_AFS_FSCACHE
-       return netfs_i_cookie(&vnode->vfs_inode);
+       return netfs_i_cookie(&vnode->netfs.inode);
 #else
        return NULL;
 #endif
@@ -685,7 +680,7 @@ static inline void afs_vnode_set_cache(struct afs_vnode *vnode,
                                       struct fscache_cookie *cookie)
 {
 #ifdef CONFIG_AFS_FSCACHE
-       vnode->netfs_ctx.cache = cookie;
+       vnode->netfs.cache = cookie;
 #endif
 }
 
@@ -892,7 +887,7 @@ static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int fl
 
        afs_set_cache_aux(vnode, &aux);
        fscache_invalidate(afs_vnode_cache(vnode), &aux,
-                          i_size_read(&vnode->vfs_inode), flags);
+                          i_size_read(&vnode->netfs.inode), flags);
 }
 
 /*
@@ -1217,7 +1212,7 @@ static inline struct afs_net *afs_i2net(struct inode *inode)
 
 static inline struct afs_net *afs_v2net(struct afs_vnode *vnode)
 {
-       return afs_i2net(&vnode->vfs_inode);
+       return afs_i2net(&vnode->netfs.inode);
 }
 
 static inline struct afs_net *afs_sock2net(struct sock *sk)
@@ -1593,12 +1588,12 @@ extern void yfs_fs_store_opaque_acl2(struct afs_operation *);
  */
 static inline struct afs_vnode *AFS_FS_I(struct inode *inode)
 {
-       return container_of(inode, struct afs_vnode, vfs_inode);
+       return container_of(inode, struct afs_vnode, netfs.inode);
 }
 
 static inline struct inode *AFS_VNODE_TO_I(struct afs_vnode *vnode)
 {
-       return &vnode->vfs_inode;
+       return &vnode->netfs.inode;
 }
 
 /*
@@ -1621,8 +1616,8 @@ static inline void afs_update_dentry_version(struct afs_operation *op,
  */
 static inline void afs_set_i_size(struct afs_vnode *vnode, u64 size)
 {
-       i_size_write(&vnode->vfs_inode, size);
-       vnode->vfs_inode.i_blocks = ((size + 1023) >> 10) << 1;
+       i_size_write(&vnode->netfs.inode, size);
+       vnode->netfs.inode.i_blocks = ((size + 1023) >> 10) << 1;
 }
 
 /*
index 1fea195..95d7130 100644
@@ -659,7 +659,7 @@ static void afs_i_init_once(void *_vnode)
        struct afs_vnode *vnode = _vnode;
 
        memset(vnode, 0, sizeof(*vnode));
-       inode_init_once(&vnode->vfs_inode);
+       inode_init_once(&vnode->netfs.inode);
        mutex_init(&vnode->io_lock);
        init_rwsem(&vnode->validate_lock);
        spin_lock_init(&vnode->wb_lock);
@@ -700,8 +700,8 @@ static struct inode *afs_alloc_inode(struct super_block *sb)
        init_rwsem(&vnode->rmdir_lock);
        INIT_WORK(&vnode->cb_work, afs_invalidate_mmap_work);
 
-       _leave(" = %p", &vnode->vfs_inode);
-       return &vnode->vfs_inode;
+       _leave(" = %p", &vnode->netfs.inode);
+       return &vnode->netfs.inode;
 }
 
 static void afs_free_inode(struct inode *inode)
index 2236b21..f80a609 100644
@@ -146,10 +146,10 @@ int afs_write_end(struct file *file, struct address_space *mapping,
 
        write_end_pos = pos + copied;
 
-       i_size = i_size_read(&vnode->vfs_inode);
+       i_size = i_size_read(&vnode->netfs.inode);
        if (write_end_pos > i_size) {
                write_seqlock(&vnode->cb_lock);
-               i_size = i_size_read(&vnode->vfs_inode);
+               i_size = i_size_read(&vnode->netfs.inode);
                if (write_end_pos > i_size)
                        afs_set_i_size(vnode, write_end_pos);
                write_sequnlock(&vnode->cb_lock);
@@ -257,7 +257,7 @@ static void afs_redirty_pages(struct writeback_control *wbc,
  */
 static void afs_pages_written_back(struct afs_vnode *vnode, loff_t start, unsigned int len)
 {
-       struct address_space *mapping = vnode->vfs_inode.i_mapping;
+       struct address_space *mapping = vnode->netfs.inode.i_mapping;
        struct folio *folio;
        pgoff_t end;
 
@@ -354,7 +354,6 @@ static const struct afs_operation_ops afs_store_data_operation = {
 static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t pos,
                          bool laundering)
 {
-       struct netfs_i_context *ictx = &vnode->netfs_ctx;
        struct afs_operation *op;
        struct afs_wb_key *wbk = NULL;
        loff_t size = iov_iter_count(iter);
@@ -385,9 +384,9 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t
        op->store.write_iter = iter;
        op->store.pos = pos;
        op->store.size = size;
-       op->store.i_size = max(pos + size, ictx->remote_i_size);
+       op->store.i_size = max(pos + size, vnode->netfs.remote_i_size);
        op->store.laundering = laundering;
-       op->mtime = vnode->vfs_inode.i_mtime;
+       op->mtime = vnode->netfs.inode.i_mtime;
        op->flags |= AFS_OPERATION_UNINTR;
        op->ops = &afs_store_data_operation;
 
@@ -554,7 +553,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
        struct iov_iter iter;
        unsigned long priv;
        unsigned int offset, to, len, max_len;
-       loff_t i_size = i_size_read(&vnode->vfs_inode);
+       loff_t i_size = i_size_read(&vnode->netfs.inode);
        bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
        bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
        long count = wbc->nr_to_write;
@@ -845,7 +844,7 @@ ssize_t afs_file_write(struct kiocb *iocb, struct iov_iter *from)
        _enter("{%llx:%llu},{%zu},",
               vnode->fid.vid, vnode->fid.vnode, count);
 
-       if (IS_SWAPFILE(&vnode->vfs_inode)) {
+       if (IS_SWAPFILE(&vnode->netfs.inode)) {
                printk(KERN_INFO
                       "AFS: Attempt to write to active swap file!\n");
                return -EBUSY;
@@ -958,8 +957,8 @@ void afs_prune_wb_keys(struct afs_vnode *vnode)
        /* Discard unused keys */
        spin_lock(&vnode->wb_lock);
 
-       if (!mapping_tagged(&vnode->vfs_inode.i_data, PAGECACHE_TAG_WRITEBACK) &&
-           !mapping_tagged(&vnode->vfs_inode.i_data, PAGECACHE_TAG_DIRTY)) {
+       if (!mapping_tagged(&vnode->netfs.inode.i_data, PAGECACHE_TAG_WRITEBACK) &&
+           !mapping_tagged(&vnode->netfs.inode.i_data, PAGECACHE_TAG_DIRTY)) {
                list_for_each_entry_safe(wbk, tmp, &vnode->wb_keys, vnode_link) {
                        if (refcount_read(&wbk->usage) == 1)
                                list_move(&wbk->vnode_link, &graveyard);
@@ -1034,6 +1033,6 @@ static void afs_write_to_cache(struct afs_vnode *vnode,
                               bool caching)
 {
        fscache_write_to_cache(afs_vnode_cache(vnode),
-                              vnode->vfs_inode.i_mapping, start, len, i_size,
+                              vnode->netfs.inode.i_mapping, start, len, i_size,
                               afs_write_to_cache_done, vnode, caching);
 }
index e5221be..f5f116e 100644
@@ -1798,7 +1798,7 @@ enum {
 static int __ceph_pool_perm_get(struct ceph_inode_info *ci,
                                s64 pool, struct ceph_string *pool_ns)
 {
-       struct ceph_fs_client *fsc = ceph_inode_to_client(&ci->vfs_inode);
+       struct ceph_fs_client *fsc = ceph_inode_to_client(&ci->netfs.inode);
        struct ceph_mds_client *mdsc = fsc->mdsc;
        struct ceph_osd_request *rd_req = NULL, *wr_req = NULL;
        struct rb_node **p, *parent;
@@ -1913,7 +1913,7 @@ static int __ceph_pool_perm_get(struct ceph_inode_info *ci,
                                     0, false, true);
        err = ceph_osdc_start_request(&fsc->client->osdc, rd_req, false);
 
-       wr_req->r_mtime = ci->vfs_inode.i_mtime;
+       wr_req->r_mtime = ci->netfs.inode.i_mtime;
        err2 = ceph_osdc_start_request(&fsc->client->osdc, wr_req, false);
 
        if (!err)
index ddea999..177d8e8 100644
@@ -29,9 +29,9 @@ void ceph_fscache_register_inode_cookie(struct inode *inode)
        if (!(inode->i_state & I_NEW))
                return;
 
-       WARN_ON_ONCE(ci->netfs_ctx.cache);
+       WARN_ON_ONCE(ci->netfs.cache);
 
-       ci->netfs_ctx.cache =
+       ci->netfs.cache =
                fscache_acquire_cookie(fsc->fscache, 0,
                                       &ci->i_vino, sizeof(ci->i_vino),
                                       &ci->i_version, sizeof(ci->i_version),
index 7255b79..26c6ae0 100644
@@ -28,7 +28,7 @@ void ceph_fscache_invalidate(struct inode *inode, bool dio_write);
 
 static inline struct fscache_cookie *ceph_fscache_cookie(struct ceph_inode_info *ci)
 {
-       return netfs_i_cookie(&ci->vfs_inode);
+       return netfs_i_cookie(&ci->netfs.inode);
 }
 
 static inline void ceph_fscache_resize(struct inode *inode, loff_t to)
index bf2e940..38c9303 100644
@@ -492,7 +492,7 @@ static void __cap_set_timeouts(struct ceph_mds_client *mdsc,
        struct ceph_mount_options *opt = mdsc->fsc->mount_options;
        ci->i_hold_caps_max = round_jiffies(jiffies +
                                            opt->caps_wanted_delay_max * HZ);
-       dout("__cap_set_timeouts %p %lu\n", &ci->vfs_inode,
+       dout("__cap_set_timeouts %p %lu\n", &ci->netfs.inode,
             ci->i_hold_caps_max - jiffies);
 }
 
@@ -507,7 +507,7 @@ static void __cap_set_timeouts(struct ceph_mds_client *mdsc,
 static void __cap_delay_requeue(struct ceph_mds_client *mdsc,
                                struct ceph_inode_info *ci)
 {
-       dout("__cap_delay_requeue %p flags 0x%lx at %lu\n", &ci->vfs_inode,
+       dout("__cap_delay_requeue %p flags 0x%lx at %lu\n", &ci->netfs.inode,
             ci->i_ceph_flags, ci->i_hold_caps_max);
        if (!mdsc->stopping) {
                spin_lock(&mdsc->cap_delay_lock);
@@ -531,7 +531,7 @@ no_change:
 static void __cap_delay_requeue_front(struct ceph_mds_client *mdsc,
                                      struct ceph_inode_info *ci)
 {
-       dout("__cap_delay_requeue_front %p\n", &ci->vfs_inode);
+       dout("__cap_delay_requeue_front %p\n", &ci->netfs.inode);
        spin_lock(&mdsc->cap_delay_lock);
        ci->i_ceph_flags |= CEPH_I_FLUSH;
        if (!list_empty(&ci->i_cap_delay_list))
@@ -548,7 +548,7 @@ static void __cap_delay_requeue_front(struct ceph_mds_client *mdsc,
 static void __cap_delay_cancel(struct ceph_mds_client *mdsc,
                               struct ceph_inode_info *ci)
 {
-       dout("__cap_delay_cancel %p\n", &ci->vfs_inode);
+       dout("__cap_delay_cancel %p\n", &ci->netfs.inode);
        if (list_empty(&ci->i_cap_delay_list))
                return;
        spin_lock(&mdsc->cap_delay_lock);
@@ -568,7 +568,7 @@ static void __check_cap_issue(struct ceph_inode_info *ci, struct ceph_cap *cap,
         * Each time we receive FILE_CACHE anew, we increment
         * i_rdcache_gen.
         */
-       if (S_ISREG(ci->vfs_inode.i_mode) &&
+       if (S_ISREG(ci->netfs.inode.i_mode) &&
            (issued & (CEPH_CAP_FILE_CACHE|CEPH_CAP_FILE_LAZYIO)) &&
            (had & (CEPH_CAP_FILE_CACHE|CEPH_CAP_FILE_LAZYIO)) == 0) {
                ci->i_rdcache_gen++;
@@ -583,14 +583,14 @@ static void __check_cap_issue(struct ceph_inode_info *ci, struct ceph_cap *cap,
        if ((issued & CEPH_CAP_FILE_SHARED) != (had & CEPH_CAP_FILE_SHARED)) {
                if (issued & CEPH_CAP_FILE_SHARED)
                        atomic_inc(&ci->i_shared_gen);
-               if (S_ISDIR(ci->vfs_inode.i_mode)) {
-                       dout(" marking %p NOT complete\n", &ci->vfs_inode);
+               if (S_ISDIR(ci->netfs.inode.i_mode)) {
+                       dout(" marking %p NOT complete\n", &ci->netfs.inode);
                        __ceph_dir_clear_complete(ci);
                }
        }
 
        /* Wipe saved layout if we're losing DIR_CREATE caps */
-       if (S_ISDIR(ci->vfs_inode.i_mode) && (had & CEPH_CAP_DIR_CREATE) &&
+       if (S_ISDIR(ci->netfs.inode.i_mode) && (had & CEPH_CAP_DIR_CREATE) &&
                !(issued & CEPH_CAP_DIR_CREATE)) {
             ceph_put_string(rcu_dereference_raw(ci->i_cached_layout.pool_ns));
             memset(&ci->i_cached_layout, 0, sizeof(ci->i_cached_layout));
@@ -771,7 +771,7 @@ static int __cap_is_valid(struct ceph_cap *cap)
 
        if (cap->cap_gen < gen || time_after_eq(jiffies, ttl)) {
                dout("__cap_is_valid %p cap %p issued %s "
-                    "but STALE (gen %u vs %u)\n", &cap->ci->vfs_inode,
+                    "but STALE (gen %u vs %u)\n", &cap->ci->netfs.inode,
                     cap, ceph_cap_string(cap->issued), cap->cap_gen, gen);
                return 0;
        }
@@ -797,7 +797,7 @@ int __ceph_caps_issued(struct ceph_inode_info *ci, int *implemented)
                if (!__cap_is_valid(cap))
                        continue;
                dout("__ceph_caps_issued %p cap %p issued %s\n",
-                    &ci->vfs_inode, cap, ceph_cap_string(cap->issued));
+                    &ci->netfs.inode, cap, ceph_cap_string(cap->issued));
                have |= cap->issued;
                if (implemented)
                        *implemented |= cap->implemented;
@@ -844,12 +844,12 @@ static void __touch_cap(struct ceph_cap *cap)
 
        spin_lock(&s->s_cap_lock);
        if (!s->s_cap_iterator) {
-               dout("__touch_cap %p cap %p mds%d\n", &cap->ci->vfs_inode, cap,
+               dout("__touch_cap %p cap %p mds%d\n", &cap->ci->netfs.inode, cap,
                     s->s_mds);
                list_move_tail(&cap->session_caps, &s->s_caps);
        } else {
                dout("__touch_cap %p cap %p mds%d NOP, iterating over caps\n",
-                    &cap->ci->vfs_inode, cap, s->s_mds);
+                    &cap->ci->netfs.inode, cap, s->s_mds);
        }
        spin_unlock(&s->s_cap_lock);
 }
@@ -867,7 +867,7 @@ int __ceph_caps_issued_mask(struct ceph_inode_info *ci, int mask, int touch)
 
        if ((have & mask) == mask) {
                dout("__ceph_caps_issued_mask ino 0x%llx snap issued %s"
-                    " (mask %s)\n", ceph_ino(&ci->vfs_inode),
+                    " (mask %s)\n", ceph_ino(&ci->netfs.inode),
                     ceph_cap_string(have),
                     ceph_cap_string(mask));
                return 1;
@@ -879,7 +879,7 @@ int __ceph_caps_issued_mask(struct ceph_inode_info *ci, int mask, int touch)
                        continue;
                if ((cap->issued & mask) == mask) {
                        dout("__ceph_caps_issued_mask ino 0x%llx cap %p issued %s"
-                            " (mask %s)\n", ceph_ino(&ci->vfs_inode), cap,
+                            " (mask %s)\n", ceph_ino(&ci->netfs.inode), cap,
                             ceph_cap_string(cap->issued),
                             ceph_cap_string(mask));
                        if (touch)
@@ -891,7 +891,7 @@ int __ceph_caps_issued_mask(struct ceph_inode_info *ci, int mask, int touch)
                have |= cap->issued;
                if ((have & mask) == mask) {
                        dout("__ceph_caps_issued_mask ino 0x%llx combo issued %s"
-                            " (mask %s)\n", ceph_ino(&ci->vfs_inode),
+                            " (mask %s)\n", ceph_ino(&ci->netfs.inode),
                             ceph_cap_string(cap->issued),
                             ceph_cap_string(mask));
                        if (touch) {
@@ -919,7 +919,7 @@ int __ceph_caps_issued_mask(struct ceph_inode_info *ci, int mask, int touch)
 int __ceph_caps_issued_mask_metric(struct ceph_inode_info *ci, int mask,
                                   int touch)
 {
-       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->vfs_inode.i_sb);
+       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->netfs.inode.i_sb);
        int r;
 
        r = __ceph_caps_issued_mask(ci, mask, touch);
@@ -950,7 +950,7 @@ int __ceph_caps_revoking_other(struct ceph_inode_info *ci,
 
 int ceph_caps_revoking(struct ceph_inode_info *ci, int mask)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        int ret;
 
        spin_lock(&ci->i_ceph_lock);
@@ -969,8 +969,8 @@ int __ceph_caps_used(struct ceph_inode_info *ci)
        if (ci->i_rd_ref)
                used |= CEPH_CAP_FILE_RD;
        if (ci->i_rdcache_ref ||
-           (S_ISREG(ci->vfs_inode.i_mode) &&
-            ci->vfs_inode.i_data.nrpages))
+           (S_ISREG(ci->netfs.inode.i_mode) &&
+            ci->netfs.inode.i_data.nrpages))
                used |= CEPH_CAP_FILE_CACHE;
        if (ci->i_wr_ref)
                used |= CEPH_CAP_FILE_WR;
@@ -993,11 +993,11 @@ int __ceph_caps_file_wanted(struct ceph_inode_info *ci)
        const int WR_SHIFT = ffs(CEPH_FILE_MODE_WR);
        const int LAZY_SHIFT = ffs(CEPH_FILE_MODE_LAZY);
        struct ceph_mount_options *opt =
-               ceph_inode_to_client(&ci->vfs_inode)->mount_options;
+               ceph_inode_to_client(&ci->netfs.inode)->mount_options;
        unsigned long used_cutoff = jiffies - opt->caps_wanted_delay_max * HZ;
        unsigned long idle_cutoff = jiffies - opt->caps_wanted_delay_min * HZ;
 
-       if (S_ISDIR(ci->vfs_inode.i_mode)) {
+       if (S_ISDIR(ci->netfs.inode.i_mode)) {
                int want = 0;
 
                /* use used_cutoff here, to keep dir's wanted caps longer */
@@ -1050,7 +1050,7 @@ int __ceph_caps_file_wanted(struct ceph_inode_info *ci)
 int __ceph_caps_wanted(struct ceph_inode_info *ci)
 {
        int w = __ceph_caps_file_wanted(ci) | __ceph_caps_used(ci);
-       if (S_ISDIR(ci->vfs_inode.i_mode)) {
+       if (S_ISDIR(ci->netfs.inode.i_mode)) {
                /* we want EXCL if holding caps of dir ops */
                if (w & CEPH_CAP_ANY_DIR_OPS)
                        w |= CEPH_CAP_FILE_EXCL;
@@ -1116,9 +1116,9 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
 
        lockdep_assert_held(&ci->i_ceph_lock);
 
-       dout("__ceph_remove_cap %p from %p\n", cap, &ci->vfs_inode);
+       dout("__ceph_remove_cap %p from %p\n", cap, &ci->netfs.inode);
 
-       mdsc = ceph_inode_to_client(&ci->vfs_inode)->mdsc;
+       mdsc = ceph_inode_to_client(&ci->netfs.inode)->mdsc;
 
        /* remove from inode's cap rbtree, and clear auth cap */
        rb_erase(&cap->ci_node, &ci->i_caps);
@@ -1169,7 +1169,7 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
                 * keep i_snap_realm.
                 */
                if (ci->i_wr_ref == 0 && ci->i_snap_realm)
-                       ceph_change_snap_realm(&ci->vfs_inode, NULL);
+                       ceph_change_snap_realm(&ci->netfs.inode, NULL);
 
                __cap_delay_cancel(mdsc, ci);
        }
@@ -1188,11 +1188,11 @@ void ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
 
        lockdep_assert_held(&ci->i_ceph_lock);
 
-       fsc = ceph_inode_to_client(&ci->vfs_inode);
+       fsc = ceph_inode_to_client(&ci->netfs.inode);
        WARN_ON_ONCE(ci->i_auth_cap == cap &&
                     !list_empty(&ci->i_dirty_item) &&
                     !fsc->blocklisted &&
-                    !ceph_inode_is_shutdown(&ci->vfs_inode));
+                    !ceph_inode_is_shutdown(&ci->netfs.inode));
 
        __ceph_remove_cap(cap, queue_release);
 }
@@ -1343,7 +1343,7 @@ static void __prep_cap(struct cap_msg_args *arg, struct ceph_cap *cap,
                       int flushing, u64 flush_tid, u64 oldest_flush_tid)
 {
        struct ceph_inode_info *ci = cap->ci;
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        int held, revoking;
 
        lockdep_assert_held(&ci->i_ceph_lock);
@@ -1440,7 +1440,7 @@ static void __prep_cap(struct cap_msg_args *arg, struct ceph_cap *cap,
 static void __send_cap(struct cap_msg_args *arg, struct ceph_inode_info *ci)
 {
        struct ceph_msg *msg;
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
 
        msg = ceph_msg_new(CEPH_MSG_CLIENT_CAPS, CAP_MSG_SIZE, GFP_NOFS, false);
        if (!msg) {
@@ -1528,7 +1528,7 @@ static void __ceph_flush_snaps(struct ceph_inode_info *ci,
                __releases(ci->i_ceph_lock)
                __acquires(ci->i_ceph_lock)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        struct ceph_mds_client *mdsc = session->s_mdsc;
        struct ceph_cap_snap *capsnap;
        u64 oldest_flush_tid = 0;
@@ -1622,7 +1622,7 @@ static void __ceph_flush_snaps(struct ceph_inode_info *ci,
 void ceph_flush_snaps(struct ceph_inode_info *ci,
                      struct ceph_mds_session **psession)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        struct ceph_mds_client *mdsc = ceph_inode_to_client(inode)->mdsc;
        struct ceph_mds_session *session = NULL;
        int mds;
@@ -1682,8 +1682,8 @@ int __ceph_mark_dirty_caps(struct ceph_inode_info *ci, int mask,
                           struct ceph_cap_flush **pcf)
 {
        struct ceph_mds_client *mdsc =
-               ceph_sb_to_client(ci->vfs_inode.i_sb)->mdsc;
-       struct inode *inode = &ci->vfs_inode;
+               ceph_sb_to_client(ci->netfs.inode.i_sb)->mdsc;
+       struct inode *inode = &ci->netfs.inode;
        int was = ci->i_dirty_caps;
        int dirty = 0;
 
@@ -1696,7 +1696,7 @@ int __ceph_mark_dirty_caps(struct ceph_inode_info *ci, int mask,
                return 0;
        }
 
-       dout("__mark_dirty_caps %p %s dirty %s -> %s\n", &ci->vfs_inode,
+       dout("__mark_dirty_caps %p %s dirty %s -> %s\n", &ci->netfs.inode,
             ceph_cap_string(mask), ceph_cap_string(was),
             ceph_cap_string(was | mask));
        ci->i_dirty_caps |= mask;
@@ -1712,7 +1712,7 @@ int __ceph_mark_dirty_caps(struct ceph_inode_info *ci, int mask,
                                ci->i_snap_realm->cached_context);
                }
                dout(" inode %p now dirty snapc %p auth cap %p\n",
-                    &ci->vfs_inode, ci->i_head_snapc, ci->i_auth_cap);
+                    &ci->netfs.inode, ci->i_head_snapc, ci->i_auth_cap);
                BUG_ON(!list_empty(&ci->i_dirty_item));
                spin_lock(&mdsc->cap_dirty_lock);
                list_add(&ci->i_dirty_item, &session->s_cap_dirty);
@@ -1875,7 +1875,7 @@ static int try_nonblocking_invalidate(struct inode *inode)
 
 bool __ceph_should_report_size(struct ceph_inode_info *ci)
 {
-       loff_t size = i_size_read(&ci->vfs_inode);
+       loff_t size = i_size_read(&ci->netfs.inode);
        /* mds will adjust max size according to the reported size */
        if (ci->i_flushing_caps & CEPH_CAP_FILE_WR)
                return false;
@@ -1900,7 +1900,7 @@ bool __ceph_should_report_size(struct ceph_inode_info *ci)
 void ceph_check_caps(struct ceph_inode_info *ci, int flags,
                     struct ceph_mds_session *session)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
        struct ceph_cap *cap;
        u64 flush_tid, oldest_flush_tid;
@@ -2467,7 +2467,7 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
        __releases(ci->i_ceph_lock)
        __acquires(ci->i_ceph_lock)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        struct ceph_cap *cap;
        struct ceph_cap_flush *cf;
        int ret;
@@ -2560,7 +2560,7 @@ void ceph_early_kick_flushing_caps(struct ceph_mds_client *mdsc,
                cap = ci->i_auth_cap;
                if (!(cap && cap->session == session)) {
                        pr_err("%p auth cap %p not mds%d ???\n",
-                               &ci->vfs_inode, cap, session->s_mds);
+                               &ci->netfs.inode, cap, session->s_mds);
                        spin_unlock(&ci->i_ceph_lock);
                        continue;
                }
@@ -2610,7 +2610,7 @@ void ceph_kick_flushing_caps(struct ceph_mds_client *mdsc,
                cap = ci->i_auth_cap;
                if (!(cap && cap->session == session)) {
                        pr_err("%p auth cap %p not mds%d ???\n",
-                               &ci->vfs_inode, cap, session->s_mds);
+                               &ci->netfs.inode, cap, session->s_mds);
                        spin_unlock(&ci->i_ceph_lock);
                        continue;
                }
@@ -2630,7 +2630,7 @@ void ceph_kick_flushing_inode_caps(struct ceph_mds_session *session,
 
        lockdep_assert_held(&ci->i_ceph_lock);
 
-       dout("%s %p flushing %s\n", __func__, &ci->vfs_inode,
+       dout("%s %p flushing %s\n", __func__, &ci->netfs.inode,
             ceph_cap_string(ci->i_flushing_caps));
 
        if (!list_empty(&ci->i_cap_flush_list)) {
@@ -2673,10 +2673,10 @@ void ceph_take_cap_refs(struct ceph_inode_info *ci, int got,
        }
        if (got & CEPH_CAP_FILE_BUFFER) {
                if (ci->i_wb_ref == 0)
-                       ihold(&ci->vfs_inode);
+                       ihold(&ci->netfs.inode);
                ci->i_wb_ref++;
                dout("%s %p wb %d -> %d (?)\n", __func__,
-                    &ci->vfs_inode, ci->i_wb_ref-1, ci->i_wb_ref);
+                    &ci->netfs.inode, ci->i_wb_ref-1, ci->i_wb_ref);
        }
 }
 
@@ -3004,7 +3004,7 @@ int ceph_get_caps(struct file *filp, int need, int want, loff_t endoff, int *got
                        return ret;
                }
 
-               if (S_ISREG(ci->vfs_inode.i_mode) &&
+               if (S_ISREG(ci->netfs.inode.i_mode) &&
                    ci->i_inline_version != CEPH_INLINE_NONE &&
                    (_got & (CEPH_CAP_FILE_CACHE|CEPH_CAP_FILE_LAZYIO)) &&
                    i_size_read(inode) > 0) {
@@ -3094,7 +3094,7 @@ enum put_cap_refs_mode {
 static void __ceph_put_cap_refs(struct ceph_inode_info *ci, int had,
                                enum put_cap_refs_mode mode)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        int last = 0, put = 0, flushsnaps = 0, wake = 0;
        bool check_flushsnaps = false;
 
@@ -3202,7 +3202,7 @@ void ceph_put_cap_refs_no_check_caps(struct ceph_inode_info *ci, int had)
 void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
                                struct ceph_snap_context *snapc)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        struct ceph_cap_snap *capsnap = NULL, *iter;
        int put = 0;
        bool last = false;
@@ -3698,7 +3698,7 @@ static void handle_cap_flush_ack(struct inode *inode, u64 flush_tid,
                                     session->s_mds,
                                     &list_first_entry(&session->s_cap_flushing,
                                                struct ceph_inode_info,
-                                               i_flushing_item)->vfs_inode);
+                                               i_flushing_item)->netfs.inode);
                        }
                }
                mdsc->num_cap_flushing--;
@@ -4345,7 +4345,7 @@ unsigned long ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
                        break;
                list_del_init(&ci->i_cap_delay_list);
 
-               inode = igrab(&ci->vfs_inode);
+               inode = igrab(&ci->netfs.inode);
                if (inode) {
                        spin_unlock(&mdsc->cap_delay_lock);
                        dout("check_delayed_caps on %p\n", inode);
@@ -4373,7 +4373,7 @@ static void flush_dirty_session_caps(struct ceph_mds_session *s)
        while (!list_empty(&s->s_cap_dirty)) {
                ci = list_first_entry(&s->s_cap_dirty, struct ceph_inode_info,
                                      i_dirty_item);
-               inode = &ci->vfs_inode;
+               inode = &ci->netfs.inode;
                ihold(inode);
                dout("flush_dirty_caps %llx.%llx\n", ceph_vinop(inode));
                spin_unlock(&mdsc->cap_dirty_lock);
@@ -4407,7 +4407,7 @@ void __ceph_touch_fmode(struct ceph_inode_info *ci,
 
 void ceph_get_fmode(struct ceph_inode_info *ci, int fmode, int count)
 {
-       struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb);
+       struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->netfs.inode.i_sb);
        int bits = (fmode << 1) | 1;
        bool already_opened = false;
        int i;
@@ -4441,7 +4441,7 @@ void ceph_get_fmode(struct ceph_inode_info *ci, int fmode, int count)
  */
 void ceph_put_fmode(struct ceph_inode_info *ci, int fmode, int count)
 {
-       struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb);
+       struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->netfs.inode.i_sb);
        int bits = (fmode << 1) | 1;
        bool is_closed = true;
        int i;
@@ -4656,7 +4656,7 @@ int ceph_purge_inode_cap(struct inode *inode, struct ceph_cap *cap, bool *invali
        lockdep_assert_held(&ci->i_ceph_lock);
 
        dout("removing cap %p, ci is %p, inode is %p\n",
-            cap, ci, &ci->vfs_inode);
+            cap, ci, &ci->netfs.inode);
 
        is_auth = (cap == ci->i_auth_cap);
        __ceph_remove_cap(cap, false);
index 8c8226c..da59e83 100644
@@ -205,7 +205,7 @@ static int ceph_init_file_info(struct inode *inode, struct file *file,
 {
        struct ceph_inode_info *ci = ceph_inode(inode);
        struct ceph_mount_options *opt =
-               ceph_inode_to_client(&ci->vfs_inode)->mount_options;
+               ceph_inode_to_client(&ci->netfs.inode)->mount_options;
        struct ceph_file_info *fi;
        int ret;
 
index b7e9cac..650746b 100644
@@ -176,7 +176,7 @@ static struct ceph_inode_frag *__get_or_create_frag(struct ceph_inode_info *ci,
        rb_insert_color(&frag->node, &ci->i_fragtree);
 
        dout("get_or_create_frag added %llx.%llx frag %x\n",
-            ceph_vinop(&ci->vfs_inode), f);
+            ceph_vinop(&ci->netfs.inode), f);
        return frag;
 }
 
@@ -457,10 +457,10 @@ struct inode *ceph_alloc_inode(struct super_block *sb)
        if (!ci)
                return NULL;
 
-       dout("alloc_inode %p\n", &ci->vfs_inode);
+       dout("alloc_inode %p\n", &ci->netfs.inode);
 
        /* Set parameters for the netfs library */
-       netfs_i_context_init(&ci->vfs_inode, &ceph_netfs_ops);
+       netfs_inode_init(&ci->netfs.inode, &ceph_netfs_ops);
 
        spin_lock_init(&ci->i_ceph_lock);
 
@@ -547,7 +547,7 @@ struct inode *ceph_alloc_inode(struct super_block *sb)
        INIT_WORK(&ci->i_work, ceph_inode_work);
        ci->i_work_mask = 0;
        memset(&ci->i_btime, '\0', sizeof(ci->i_btime));
-       return &ci->vfs_inode;
+       return &ci->netfs.inode;
 }
 
 void ceph_free_inode(struct inode *inode)
@@ -1978,7 +1978,7 @@ static void ceph_inode_work(struct work_struct *work)
 {
        struct ceph_inode_info *ci = container_of(work, struct ceph_inode_info,
                                                 i_work);
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
 
        if (test_and_clear_bit(CEPH_I_WORK_WRITEBACK, &ci->i_work_mask)) {
                dout("writeback %p\n", inode);
index f5d110d..33f517d 100644
@@ -1564,7 +1564,7 @@ int ceph_iterate_session_caps(struct ceph_mds_session *session,
        p = session->s_caps.next;
        while (p != &session->s_caps) {
                cap = list_entry(p, struct ceph_cap, session_caps);
-               inode = igrab(&cap->ci->vfs_inode);
+               inode = igrab(&cap->ci->netfs.inode);
                if (!inode) {
                        p = p->next;
                        continue;
@@ -1622,7 +1622,7 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
        int iputs;
 
        dout("removing cap %p, ci is %p, inode is %p\n",
-            cap, ci, &ci->vfs_inode);
+            cap, ci, &ci->netfs.inode);
        spin_lock(&ci->i_ceph_lock);
        iputs = ceph_purge_inode_cap(inode, cap, &invalidate);
        spin_unlock(&ci->i_ceph_lock);
index 322ee5a..864cdaa 100644
@@ -521,7 +521,7 @@ static bool has_new_snaps(struct ceph_snap_context *o,
 static void ceph_queue_cap_snap(struct ceph_inode_info *ci,
                                struct ceph_cap_snap **pcapsnap)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        struct ceph_snap_context *old_snapc, *new_snapc;
        struct ceph_cap_snap *capsnap = *pcapsnap;
        struct ceph_buffer *old_blob = NULL;
@@ -652,7 +652,7 @@ update_snapc:
 int __ceph_finish_cap_snap(struct ceph_inode_info *ci,
                            struct ceph_cap_snap *capsnap)
 {
-       struct inode *inode = &ci->vfs_inode;
+       struct inode *inode = &ci->netfs.inode;
        struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
 
        BUG_ON(capsnap->writing);
@@ -712,7 +712,7 @@ static void queue_realm_cap_snaps(struct ceph_snap_realm *realm)
 
        spin_lock(&realm->inodes_with_caps_lock);
        list_for_each_entry(ci, &realm->inodes_with_caps, i_snap_realm_item) {
-               struct inode *inode = igrab(&ci->vfs_inode);
+               struct inode *inode = igrab(&ci->netfs.inode);
                if (!inode)
                        continue;
                spin_unlock(&realm->inodes_with_caps_lock);
@@ -904,7 +904,7 @@ static void flush_snaps(struct ceph_mds_client *mdsc)
        while (!list_empty(&mdsc->snap_flush_list)) {
                ci = list_first_entry(&mdsc->snap_flush_list,
                                struct ceph_inode_info, i_snap_flush_item);
-               inode = &ci->vfs_inode;
+               inode = &ci->netfs.inode;
                ihold(inode);
                spin_unlock(&mdsc->snap_flush_lock);
                ceph_flush_snaps(ci, &session);
index b73b4f7..4014080 100644
@@ -876,7 +876,7 @@ mempool_t *ceph_wb_pagevec_pool;
 static void ceph_inode_init_once(void *foo)
 {
        struct ceph_inode_info *ci = foo;
-       inode_init_once(&ci->vfs_inode);
+       inode_init_once(&ci->netfs.inode);
 }
 
 static int __init init_caches(void)
index dd7dac0..f59dac6 100644
@@ -316,11 +316,7 @@ struct ceph_inode_xattrs_info {
  * Ceph inode.
  */
 struct ceph_inode_info {
-       struct {
-               /* These must be contiguous */
-               struct inode vfs_inode;
-               struct netfs_i_context netfs_ctx; /* Netfslib context */
-       };
+       struct netfs_inode netfs; /* Netfslib context and vfs inode */
        struct ceph_vino i_vino;   /* ceph ino + snap */
 
        spinlock_t i_ceph_lock;
@@ -436,7 +432,7 @@ struct ceph_inode_info {
 static inline struct ceph_inode_info *
 ceph_inode(const struct inode *inode)
 {
-       return container_of(inode, struct ceph_inode_info, vfs_inode);
+       return container_of(inode, struct ceph_inode_info, netfs.inode);
 }
 
 static inline struct ceph_fs_client *
@@ -1316,7 +1312,7 @@ static inline void __ceph_update_quota(struct ceph_inode_info *ci,
        has_quota = __ceph_has_quota(ci, QUOTA_GET_ANY);
 
        if (had_quota != has_quota)
-               ceph_adjust_quota_realms_count(&ci->vfs_inode, has_quota);
+               ceph_adjust_quota_realms_count(&ci->netfs.inode, has_quota);
 }
 
 extern void ceph_handle_quota(struct ceph_mds_client *mdsc,
index 8c2dc2c..f141f52 100644
@@ -57,7 +57,7 @@ static bool ceph_vxattrcb_layout_exists(struct ceph_inode_info *ci)
 static ssize_t ceph_vxattrcb_layout(struct ceph_inode_info *ci, char *val,
                                    size_t size)
 {
-       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->vfs_inode.i_sb);
+       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->netfs.inode.i_sb);
        struct ceph_osd_client *osdc = &fsc->client->osdc;
        struct ceph_string *pool_ns;
        s64 pool = ci->i_layout.pool_id;
@@ -69,7 +69,7 @@ static ssize_t ceph_vxattrcb_layout(struct ceph_inode_info *ci, char *val,
 
        pool_ns = ceph_try_get_string(ci->i_layout.pool_ns);
 
-       dout("ceph_vxattrcb_layout %p\n", &ci->vfs_inode);
+       dout("ceph_vxattrcb_layout %p\n", &ci->netfs.inode);
        down_read(&osdc->lock);
        pool_name = ceph_pg_pool_name_by_id(osdc->osdmap, pool);
        if (pool_name) {
@@ -161,7 +161,7 @@ static ssize_t ceph_vxattrcb_layout_pool(struct ceph_inode_info *ci,
                                         char *val, size_t size)
 {
        ssize_t ret;
-       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->vfs_inode.i_sb);
+       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->netfs.inode.i_sb);
        struct ceph_osd_client *osdc = &fsc->client->osdc;
        s64 pool = ci->i_layout.pool_id;
        const char *pool_name;
@@ -313,7 +313,7 @@ static ssize_t ceph_vxattrcb_snap_btime(struct ceph_inode_info *ci, char *val,
 static ssize_t ceph_vxattrcb_cluster_fsid(struct ceph_inode_info *ci,
                                          char *val, size_t size)
 {
-       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->vfs_inode.i_sb);
+       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->netfs.inode.i_sb);
 
        return ceph_fmt_xattr(val, size, "%pU", &fsc->client->fsid);
 }
@@ -321,7 +321,7 @@ static ssize_t ceph_vxattrcb_cluster_fsid(struct ceph_inode_info *ci,
 static ssize_t ceph_vxattrcb_client_id(struct ceph_inode_info *ci,
                                       char *val, size_t size)
 {
-       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->vfs_inode.i_sb);
+       struct ceph_fs_client *fsc = ceph_sb_to_client(ci->netfs.inode.i_sb);
 
        return ceph_fmt_xattr(val, size, "client%lld",
                              ceph_client_gid(fsc->client));
@@ -629,7 +629,7 @@ static int __set_xattr(struct ceph_inode_info *ci,
        }
 
        dout("__set_xattr_val added %llx.%llx xattr %p %.*s=%.*s\n",
-            ceph_vinop(&ci->vfs_inode), xattr, name_len, name, val_len, val);
+            ceph_vinop(&ci->netfs.inode), xattr, name_len, name, val_len, val);
 
        return 0;
 }
@@ -871,7 +871,7 @@ struct ceph_buffer *__ceph_build_xattrs_blob(struct ceph_inode_info *ci)
        struct ceph_buffer *old_blob = NULL;
        void *dest;
 
-       dout("__build_xattrs_blob %p\n", &ci->vfs_inode);
+       dout("__build_xattrs_blob %p\n", &ci->netfs.inode);
        if (ci->i_xattrs.dirty) {
                int need = __get_required_blob_size(ci, 0, 0);
 
index 12c8728..c85d9a3 100644
@@ -377,7 +377,7 @@ cifs_alloc_inode(struct super_block *sb)
        cifs_inode->flags = 0;
        spin_lock_init(&cifs_inode->writers_lock);
        cifs_inode->writers = 0;
-       cifs_inode->vfs_inode.i_blkbits = 14;  /* 2**14 = CIFS_MAX_MSGSIZE */
+       cifs_inode->netfs.inode.i_blkbits = 14;  /* 2**14 = CIFS_MAX_MSGSIZE */
        cifs_inode->server_eof = 0;
        cifs_inode->uniqueid = 0;
        cifs_inode->createtime = 0;
@@ -389,12 +389,12 @@ cifs_alloc_inode(struct super_block *sb)
         * Can not set i_flags here - they get immediately overwritten to zero
         * by the VFS.
         */
-       /* cifs_inode->vfs_inode.i_flags = S_NOATIME | S_NOCMTIME; */
+       /* cifs_inode->netfs.inode.i_flags = S_NOATIME | S_NOCMTIME; */
        INIT_LIST_HEAD(&cifs_inode->openFileList);
        INIT_LIST_HEAD(&cifs_inode->llist);
        INIT_LIST_HEAD(&cifs_inode->deferred_closes);
        spin_lock_init(&cifs_inode->deferred_lock);
-       return &cifs_inode->vfs_inode;
+       return &cifs_inode->netfs.inode;
 }
 
 static void
@@ -1418,7 +1418,7 @@ cifs_init_once(void *inode)
 {
        struct cifsInodeInfo *cifsi = inode;
 
-       inode_init_once(&cifsi->vfs_inode);
+       inode_init_once(&cifsi->netfs.inode);
        init_rwsem(&cifsi->lock_sem);
 }
 
index f873379..e773716 100644
@@ -1479,20 +1479,16 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file);
 #define CIFS_CACHE_RW_FLG      (CIFS_CACHE_READ_FLG | CIFS_CACHE_WRITE_FLG)
 #define CIFS_CACHE_RHW_FLG     (CIFS_CACHE_RW_FLG | CIFS_CACHE_HANDLE_FLG)
 
-#define CIFS_CACHE_READ(cinode) ((cinode->oplock & CIFS_CACHE_READ_FLG) || (CIFS_SB(cinode->vfs_inode.i_sb)->mnt_cifs_flags & CIFS_MOUNT_RO_CACHE))
+#define CIFS_CACHE_READ(cinode) ((cinode->oplock & CIFS_CACHE_READ_FLG) || (CIFS_SB(cinode->netfs.inode.i_sb)->mnt_cifs_flags & CIFS_MOUNT_RO_CACHE))
 #define CIFS_CACHE_HANDLE(cinode) (cinode->oplock & CIFS_CACHE_HANDLE_FLG)
-#define CIFS_CACHE_WRITE(cinode) ((cinode->oplock & CIFS_CACHE_WRITE_FLG) || (CIFS_SB(cinode->vfs_inode.i_sb)->mnt_cifs_flags & CIFS_MOUNT_RW_CACHE))
+#define CIFS_CACHE_WRITE(cinode) ((cinode->oplock & CIFS_CACHE_WRITE_FLG) || (CIFS_SB(cinode->netfs.inode.i_sb)->mnt_cifs_flags & CIFS_MOUNT_RW_CACHE))
 
 /*
  * One of these for each file inode
  */
 
 struct cifsInodeInfo {
-       struct {
-               /* These must be contiguous */
-               struct inode    vfs_inode;      /* the VFS's inode record */
-               struct netfs_i_context netfs_ctx; /* Netfslib context */
-       };
+       struct netfs_inode netfs; /* Netfslib context and vfs inode */
        bool can_cache_brlcks;
        struct list_head llist; /* locks helb by this inode */
        /*
@@ -1531,7 +1527,7 @@ struct cifsInodeInfo {
 static inline struct cifsInodeInfo *
 CIFS_I(struct inode *inode)
 {
-       return container_of(inode, struct cifsInodeInfo, vfs_inode);
+       return container_of(inode, struct cifsInodeInfo, netfs.inode);
 }
 
 static inline struct cifs_sb_info *
index 1618e05..e64cda7 100644
@@ -2004,7 +2004,7 @@ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
                                        bool fsuid_only)
 {
        struct cifsFileInfo *open_file = NULL;
-       struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_inode->vfs_inode.i_sb);
+       struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_inode->netfs.inode.i_sb);
 
        /* only filter by fsuid on multiuser mounts */
        if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))
@@ -2060,7 +2060,7 @@ cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, int flags,
                return rc;
        }
 
-       cifs_sb = CIFS_SB(cifs_inode->vfs_inode.i_sb);
+       cifs_sb = CIFS_SB(cifs_inode->netfs.inode.i_sb);
 
        /* only filter by fsuid on multiuser mounts */
        if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))
@@ -4669,14 +4669,14 @@ bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file)
                /* This inode is open for write at least once */
                struct cifs_sb_info *cifs_sb;
 
-               cifs_sb = CIFS_SB(cifsInode->vfs_inode.i_sb);
+               cifs_sb = CIFS_SB(cifsInode->netfs.inode.i_sb);
                if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DIRECT_IO) {
                        /* since no page cache to corrupt on directio
                        we can change size safely */
                        return true;
                }
 
-               if (i_size_read(&cifsInode->vfs_inode) < end_of_file)
+               if (i_size_read(&cifsInode->netfs.inode) < end_of_file)
                        return true;
 
                return false;
index a638b29..23ef56f 100644
@@ -101,13 +101,13 @@ void cifs_fscache_get_inode_cookie(struct inode *inode)
        struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
        struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
 
-       cifs_fscache_fill_coherency(&cifsi->vfs_inode, &cd);
+       cifs_fscache_fill_coherency(&cifsi->netfs.inode, &cd);
 
-       cifsi->netfs_ctx.cache =
+       cifsi->netfs.cache =
                fscache_acquire_cookie(tcon->fscache, 0,
                                       &cifsi->uniqueid, sizeof(cifsi->uniqueid),
                                       &cd, sizeof(cd),
-                                      i_size_read(&cifsi->vfs_inode));
+                                      i_size_read(&cifsi->netfs.inode));
 }
 
 void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool update)
@@ -131,7 +131,7 @@ void cifs_fscache_release_inode_cookie(struct inode *inode)
        if (cookie) {
                cifs_dbg(FYI, "%s: (0x%p)\n", __func__, cookie);
                fscache_relinquish_cookie(cookie, false);
-               cifsi->netfs_ctx.cache = NULL;
+               cifsi->netfs.cache = NULL;
        }
 }
 
index 52355c0..ab9a51d 100644
@@ -52,10 +52,10 @@ void cifs_fscache_fill_coherency(struct inode *inode,
        struct cifsInodeInfo *cifsi = CIFS_I(inode);
 
        memset(cd, 0, sizeof(*cd));
-       cd->last_write_time_sec   = cpu_to_le64(cifsi->vfs_inode.i_mtime.tv_sec);
-       cd->last_write_time_nsec  = cpu_to_le32(cifsi->vfs_inode.i_mtime.tv_nsec);
-       cd->last_change_time_sec  = cpu_to_le64(cifsi->vfs_inode.i_ctime.tv_sec);
-       cd->last_change_time_nsec = cpu_to_le32(cifsi->vfs_inode.i_ctime.tv_nsec);
+       cd->last_write_time_sec   = cpu_to_le64(cifsi->netfs.inode.i_mtime.tv_sec);
+       cd->last_write_time_nsec  = cpu_to_le32(cifsi->netfs.inode.i_mtime.tv_nsec);
+       cd->last_change_time_sec  = cpu_to_le64(cifsi->netfs.inode.i_ctime.tv_sec);
+       cd->last_change_time_nsec = cpu_to_le32(cifsi->netfs.inode.i_ctime.tv_nsec);
 }
 
 
index 2f9e7d2..81da81e 100644
@@ -115,7 +115,7 @@ cifs_revalidate_cache(struct inode *inode, struct cifs_fattr *fattr)
                 __func__, cifs_i->uniqueid);
        set_bit(CIFS_INO_INVALID_MAPPING, &cifs_i->flags);
        /* Invalidate fscache cookie */
-       cifs_fscache_fill_coherency(&cifs_i->vfs_inode, &cd);
+       cifs_fscache_fill_coherency(&cifs_i->netfs.inode, &cd);
        fscache_invalidate(cifs_inode_cookie(inode), &cd, i_size_read(inode), 0);
 }
 
@@ -2499,7 +2499,7 @@ int cifs_fiemap(struct inode *inode, struct fiemap_extent_info *fei, u64 start,
                u64 len)
 {
        struct cifsInodeInfo *cifs_i = CIFS_I(inode);
-       struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_i->vfs_inode.i_sb);
+       struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_i->netfs.inode.i_sb);
        struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
        struct TCP_Server_Info *server = tcon->ses->server;
        struct cifsFileInfo *cfile;
index 35962a1..cbc3b43 100644
@@ -537,11 +537,11 @@ void cifs_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock)
        if (oplock == OPLOCK_EXCLUSIVE) {
                cinode->oplock = CIFS_CACHE_WRITE_FLG | CIFS_CACHE_READ_FLG;
                cifs_dbg(FYI, "Exclusive Oplock granted on inode %p\n",
-                        &cinode->vfs_inode);
+                        &cinode->netfs.inode);
        } else if (oplock == OPLOCK_READ) {
                cinode->oplock = CIFS_CACHE_READ_FLG;
                cifs_dbg(FYI, "Level II Oplock granted on inode %p\n",
-                        &cinode->vfs_inode);
+                        &cinode->netfs.inode);
        } else
                cinode->oplock = 0;
 }
index 98a76fa..8543caf 100644
@@ -4260,15 +4260,15 @@ smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
        if (oplock == SMB2_OPLOCK_LEVEL_BATCH) {
                cinode->oplock = CIFS_CACHE_RHW_FLG;
                cifs_dbg(FYI, "Batch Oplock granted on inode %p\n",
-                        &cinode->vfs_inode);
+                        &cinode->netfs.inode);
        } else if (oplock == SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
                cinode->oplock = CIFS_CACHE_RW_FLG;
                cifs_dbg(FYI, "Exclusive Oplock granted on inode %p\n",
-                        &cinode->vfs_inode);
+                        &cinode->netfs.inode);
        } else if (oplock == SMB2_OPLOCK_LEVEL_II) {
                cinode->oplock = CIFS_CACHE_READ_FLG;
                cifs_dbg(FYI, "Level II Oplock granted on inode %p\n",
-                        &cinode->vfs_inode);
+                        &cinode->netfs.inode);
        } else
                cinode->oplock = 0;
 }
@@ -4307,7 +4307,7 @@ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
 
        cinode->oplock = new_oplock;
        cifs_dbg(FYI, "%s Lease granted on inode %p\n", message,
-                &cinode->vfs_inode);
+                &cinode->netfs.inode);
 }
 
 static void
index 360ce36..e6b9322 100644
@@ -1549,7 +1549,7 @@ static int __ext2_write_inode(struct inode *inode, int do_sync)
        if (IS_ERR(raw_inode))
                return -EIO;
 
-       /* For fields not not tracking in the in-memory inode,
+       /* For fields not tracked in the in-memory inode,
         * initialise them to zero for new inodes. */
        if (ei->i_state & EXT2_STATE_NEW)
                memset(raw_inode, 0, EXT2_SB(sb)->s_inode_size);
index a21d8f1..0522136 100644
@@ -120,6 +120,7 @@ static bool inode_io_list_move_locked(struct inode *inode,
                                      struct list_head *head)
 {
        assert_spin_locked(&wb->list_lock);
+       assert_spin_locked(&inode->i_lock);
 
        list_move(&inode->i_io_list, head);
 
@@ -1365,9 +1366,9 @@ static int move_expired_inodes(struct list_head *delaying_queue,
                inode = wb_inode(delaying_queue->prev);
                if (inode_dirtied_after(inode, dirtied_before))
                        break;
+               spin_lock(&inode->i_lock);
                list_move(&inode->i_io_list, &tmp);
                moved++;
-               spin_lock(&inode->i_lock);
                inode->i_state |= I_SYNC_QUEUED;
                spin_unlock(&inode->i_lock);
                if (sb_is_blkdev_sb(inode->i_sb))
@@ -1383,7 +1384,12 @@ static int move_expired_inodes(struct list_head *delaying_queue,
                goto out;
        }
 
-       /* Move inodes from one superblock together */
+       /*
+        * Although inode's i_io_list is moved from 'tmp' to 'dispatch_queue',
+        * we don't take inode->i_lock here because it is just a pointless overhead.
+        * Inode is already marked as I_SYNC_QUEUED so writeback list handling is
+        * fully under our control.
+        */
        while (!list_empty(&tmp)) {
                sb = wb_inode(tmp.prev)->i_sb;
                list_for_each_prev_safe(pos, node, &tmp) {
@@ -1826,8 +1832,8 @@ static long writeback_sb_inodes(struct super_block *sb,
                         * We'll have another go at writing back this inode
                         * when we completed a full scan of b_io.
                         */
-                       spin_unlock(&inode->i_lock);
                        requeue_io(inode, wb);
+                       spin_unlock(&inode->i_lock);
                        trace_writeback_sb_inodes_requeue(inode);
                        continue;
                }
@@ -2358,6 +2364,7 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 {
        struct super_block *sb = inode->i_sb;
        int dirtytime = 0;
+       struct bdi_writeback *wb = NULL;
 
        trace_writeback_mark_inode_dirty(inode, flags);
 
@@ -2409,6 +2416,17 @@ void __mark_inode_dirty(struct inode *inode, int flags)
                        inode->i_state &= ~I_DIRTY_TIME;
                inode->i_state |= flags;
 
+               /*
+                * Grab inode's wb early because it requires dropping i_lock and we
+                * need to make sure following checks happen atomically with dirty
+                * list handling so that we don't move inodes under flush worker's
+                * hands.
+                */
+               if (!was_dirty) {
+                       wb = locked_inode_to_wb_and_lock_list(inode);
+                       spin_lock(&inode->i_lock);
+               }
+
                /*
                 * If the inode is queued for writeback by flush worker, just
                 * update its dirty state. Once the flush worker is done with
@@ -2416,7 +2434,7 @@ void __mark_inode_dirty(struct inode *inode, int flags)
                 * list, based upon its state.
                 */
                if (inode->i_state & I_SYNC_QUEUED)
-                       goto out_unlock_inode;
+                       goto out_unlock;
 
                /*
                 * Only add valid (hashed) inodes to the superblock's
@@ -2424,22 +2442,19 @@ void __mark_inode_dirty(struct inode *inode, int flags)
                 */
                if (!S_ISBLK(inode->i_mode)) {
                        if (inode_unhashed(inode))
-                               goto out_unlock_inode;
+                               goto out_unlock;
                }
                if (inode->i_state & I_FREEING)
-                       goto out_unlock_inode;
+                       goto out_unlock;
 
                /*
                 * If the inode was already on b_dirty/b_io/b_more_io, don't
                 * reposition it (that would break b_dirty time-ordering).
                 */
                if (!was_dirty) {
-                       struct bdi_writeback *wb;
                        struct list_head *dirty_list;
                        bool wakeup_bdi = false;
 
-                       wb = locked_inode_to_wb_and_lock_list(inode);
-
                        inode->dirtied_when = jiffies;
                        if (dirtytime)
                                inode->dirtied_time_when = jiffies;
@@ -2453,6 +2468,7 @@ void __mark_inode_dirty(struct inode *inode, int flags)
                                                               dirty_list);
 
                        spin_unlock(&wb->list_lock);
+                       spin_unlock(&inode->i_lock);
                        trace_writeback_dirty_inode_enqueue(inode);
 
                        /*
@@ -2467,6 +2483,9 @@ void __mark_inode_dirty(struct inode *inode, int flags)
                        return;
                }
        }
+out_unlock:
+       if (wb)
+               spin_unlock(&wb->list_lock);
 out_unlock_inode:
        spin_unlock(&inode->i_lock);
 }
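The hunk above changes the lock ordering in __mark_inode_dirty(): the wb->list_lock is taken early (dropping and retaking i_lock), so the I_SYNC_QUEUED/unhashed/I_FREEING checks run atomically with the dirty-list handling. A minimal userspace sketch of that "drop inner, take outer, retake inner" pattern with pthreads — the names (`lock_list_from_inode`, `run_checks`) are hypothetical stand-ins, not kernel API:

```c
#include <pthread.h>

/* Hypothetical stand-ins for inode->i_lock and wb->list_lock. */
static pthread_mutex_t i_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mimics locked_inode_to_wb_and_lock_list(): entered with i_lock held,
 * returns with list_lock held and i_lock dropped (the lock order is
 * list_lock -> i_lock, so the inner lock must be released first). */
static void lock_list_from_inode(void)
{
	pthread_mutex_unlock(&i_lock);
	pthread_mutex_lock(&list_lock);
}

/* The pattern the hunk introduces: take the outer lock early, retake
 * the inner lock, and only then run the state checks, so nothing can
 * move the inode between the checks and the list manipulation. */
static int run_checks(void)
{
	int checked_under_both_locks;

	pthread_mutex_lock(&i_lock);
	lock_list_from_inode();
	pthread_mutex_lock(&i_lock);

	checked_under_both_locks = 1;	/* both locks held here */

	pthread_mutex_unlock(&i_lock);
	pthread_mutex_unlock(&list_lock);
	return checked_under_both_locks;
}
```

The cost is one extra lock/unlock cycle on i_lock, which the patch accepts in exchange for closing the window where the flush worker could requeue the inode between the checks and the enqueue.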
index 9d9b422..bd4da9c 100644 (file)
@@ -27,7 +27,7 @@
  * Inode locking rules:
  *
  * inode->i_lock protects:
- *   inode->i_state, inode->i_hash, __iget()
+ *   inode->i_state, inode->i_hash, __iget(), inode->i_io_list
  * Inode LRU list locks protect:
  *   inode->i_sb->s_inode_lru, inode->i_lru
  * inode->i_sb->s_inode_list_lock protects:
index 8742d22..d37e012 100644 (file)
@@ -155,7 +155,7 @@ static void netfs_rreq_expand(struct netfs_io_request *rreq,
 void netfs_readahead(struct readahead_control *ractl)
 {
        struct netfs_io_request *rreq;
-       struct netfs_i_context *ctx = netfs_i_context(ractl->mapping->host);
+       struct netfs_inode *ctx = netfs_inode(ractl->mapping->host);
        int ret;
 
        _enter("%lx,%x", readahead_index(ractl), readahead_count(ractl));
@@ -215,7 +215,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
 {
        struct address_space *mapping = folio_file_mapping(folio);
        struct netfs_io_request *rreq;
-       struct netfs_i_context *ctx = netfs_i_context(mapping->host);
+       struct netfs_inode *ctx = netfs_inode(mapping->host);
        int ret;
 
        _enter("%lx", folio_index(folio));
@@ -331,7 +331,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
                      void **_fsdata)
 {
        struct netfs_io_request *rreq;
-       struct netfs_i_context *ctx = netfs_i_context(file_inode(file ));
+       struct netfs_inode *ctx = netfs_inode(file_inode(file));
        struct folio *folio;
        unsigned int fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE;
        pgoff_t index = pos >> PAGE_SHIFT;
index b7b0e3d..43fac1b 100644 (file)
@@ -91,7 +91,7 @@ static inline void netfs_stat_d(atomic_t *stat)
 /*
  * Miscellaneous functions.
  */
-static inline bool netfs_is_cache_enabled(struct netfs_i_context *ctx)
+static inline bool netfs_is_cache_enabled(struct netfs_inode *ctx)
 {
 #if IS_ENABLED(CONFIG_FSCACHE)
        struct fscache_cookie *cookie = ctx->cache;
index e86107b..c6afa60 100644 (file)
@@ -18,7 +18,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 {
        static atomic_t debug_ids;
        struct inode *inode = file ? file_inode(file) : mapping->host;
-       struct netfs_i_context *ctx = netfs_i_context(inode);
+       struct netfs_inode *ctx = netfs_inode(inode);
        struct netfs_io_request *rreq;
        int ret;
 
index a74aef9..09d1307 100644 (file)
@@ -79,6 +79,7 @@
 #include <linux/capability.h>
 #include <linux/quotaops.h>
 #include <linux/blkdev.h>
+#include <linux/sched/mm.h>
 #include "../internal.h" /* ugh */
 
 #include <linux/uaccess.h>
@@ -425,9 +426,11 @@ EXPORT_SYMBOL(mark_info_dirty);
 int dquot_acquire(struct dquot *dquot)
 {
        int ret = 0, ret2 = 0;
+       unsigned int memalloc;
        struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
 
        mutex_lock(&dquot->dq_lock);
+       memalloc = memalloc_nofs_save();
        if (!test_bit(DQ_READ_B, &dquot->dq_flags)) {
                ret = dqopt->ops[dquot->dq_id.type]->read_dqblk(dquot);
                if (ret < 0)
@@ -458,6 +461,7 @@ int dquot_acquire(struct dquot *dquot)
        smp_mb__before_atomic();
        set_bit(DQ_ACTIVE_B, &dquot->dq_flags);
 out_iolock:
+       memalloc_nofs_restore(memalloc);
        mutex_unlock(&dquot->dq_lock);
        return ret;
 }
@@ -469,9 +473,11 @@ EXPORT_SYMBOL(dquot_acquire);
 int dquot_commit(struct dquot *dquot)
 {
        int ret = 0;
+       unsigned int memalloc;
        struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
 
        mutex_lock(&dquot->dq_lock);
+       memalloc = memalloc_nofs_save();
        if (!clear_dquot_dirty(dquot))
                goto out_lock;
        /* Inactive dquot can be only if there was error during read/init
@@ -481,6 +487,7 @@ int dquot_commit(struct dquot *dquot)
        else
                ret = -EIO;
 out_lock:
+       memalloc_nofs_restore(memalloc);
        mutex_unlock(&dquot->dq_lock);
        return ret;
 }
@@ -492,9 +499,11 @@ EXPORT_SYMBOL(dquot_commit);
 int dquot_release(struct dquot *dquot)
 {
        int ret = 0, ret2 = 0;
+       unsigned int memalloc;
        struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
 
        mutex_lock(&dquot->dq_lock);
+       memalloc = memalloc_nofs_save();
        /* Check whether we are not racing with some other dqget() */
        if (dquot_is_busy(dquot))
                goto out_dqlock;
@@ -510,6 +519,7 @@ int dquot_release(struct dquot *dquot)
        }
        clear_bit(DQ_ACTIVE_B, &dquot->dq_flags);
 out_dqlock:
+       memalloc_nofs_restore(memalloc);
        mutex_unlock(&dquot->dq_lock);
        return ret;
 }
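The three quota hunks wrap the dquot I/O under dq_lock in memalloc_nofs_save()/memalloc_nofs_restore(), so any allocation done while holding the lock cannot recurse back into the filesystem. The key property is that save returns the caller's previous state, so pairs nest correctly. A userspace sketch of that idiom — `task_flags`, `PF_NOFS`, and the function names are hypothetical stand-ins for `current->flags` and `PF_MEMALLOC_NOFS`:

```c
/* Hypothetical stand-in for the PF_MEMALLOC_NOFS bit in current->flags. */
static unsigned int task_flags;
#define PF_NOFS 0x1u

/* Like memalloc_nofs_save(): remember the caller's state, then mark
 * the task as NOFS until the matching restore. */
static unsigned int nofs_save(void)
{
	unsigned int old = task_flags & PF_NOFS;

	task_flags |= PF_NOFS;
	return old;
}

/* Like memalloc_nofs_restore(): put back exactly what the caller had,
 * so nested save/restore scopes unwind in order. */
static void nofs_restore(unsigned int old)
{
	task_flags = (task_flags & ~PF_NOFS) | old;
}
```

Because restore re-installs the saved bit rather than unconditionally clearing it, an inner save/restore inside an outer NOFS scope leaves the outer scope intact.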
index bcb21ae..0532997 100644 (file)
@@ -110,15 +110,51 @@ static inline void zonefs_i_size_write(struct inode *inode, loff_t isize)
        }
 }
 
-static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
-                             unsigned int flags, struct iomap *iomap,
-                             struct iomap *srcmap)
+static int zonefs_read_iomap_begin(struct inode *inode, loff_t offset,
+                                  loff_t length, unsigned int flags,
+                                  struct iomap *iomap, struct iomap *srcmap)
 {
        struct zonefs_inode_info *zi = ZONEFS_I(inode);
        struct super_block *sb = inode->i_sb;
        loff_t isize;
 
-       /* All I/Os should always be within the file maximum size */
+       /*
+        * All blocks are always mapped below EOF. If reading past EOF,
+        * act as if there is a hole up to the file maximum size.
+        */
+       mutex_lock(&zi->i_truncate_mutex);
+       iomap->bdev = inode->i_sb->s_bdev;
+       iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize);
+       isize = i_size_read(inode);
+       if (iomap->offset >= isize) {
+               iomap->type = IOMAP_HOLE;
+               iomap->addr = IOMAP_NULL_ADDR;
+               iomap->length = length;
+       } else {
+               iomap->type = IOMAP_MAPPED;
+               iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
+               iomap->length = isize - iomap->offset;
+       }
+       mutex_unlock(&zi->i_truncate_mutex);
+
+       trace_zonefs_iomap_begin(inode, iomap);
+
+       return 0;
+}
+
+static const struct iomap_ops zonefs_read_iomap_ops = {
+       .iomap_begin    = zonefs_read_iomap_begin,
+};
+
+static int zonefs_write_iomap_begin(struct inode *inode, loff_t offset,
+                                   loff_t length, unsigned int flags,
+                                   struct iomap *iomap, struct iomap *srcmap)
+{
+       struct zonefs_inode_info *zi = ZONEFS_I(inode);
+       struct super_block *sb = inode->i_sb;
+       loff_t isize;
+
+       /* All write I/Os should always be within the file maximum size */
        if (WARN_ON_ONCE(offset + length > zi->i_max_size))
                return -EIO;
 
@@ -128,7 +164,7 @@ static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
         * operation.
         */
        if (WARN_ON_ONCE(zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
-                        (flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)))
+                        !(flags & IOMAP_DIRECT)))
                return -EIO;
 
        /*
@@ -137,47 +173,44 @@ static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
         * write pointer) and unwritten beyond.
         */
        mutex_lock(&zi->i_truncate_mutex);
+       iomap->bdev = inode->i_sb->s_bdev;
+       iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize);
+       iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
        isize = i_size_read(inode);
-       if (offset >= isize)
+       if (iomap->offset >= isize) {
                iomap->type = IOMAP_UNWRITTEN;
-       else
+               iomap->length = zi->i_max_size - iomap->offset;
+       } else {
                iomap->type = IOMAP_MAPPED;
-       if (flags & IOMAP_WRITE)
-               length = zi->i_max_size - offset;
-       else
-               length = min(length, isize - offset);
+               iomap->length = isize - iomap->offset;
+       }
        mutex_unlock(&zi->i_truncate_mutex);
 
-       iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize);
-       iomap->length = ALIGN(offset + length, sb->s_blocksize) - iomap->offset;
-       iomap->bdev = inode->i_sb->s_bdev;
-       iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
-
        trace_zonefs_iomap_begin(inode, iomap);
 
        return 0;
 }
 
-static const struct iomap_ops zonefs_iomap_ops = {
-       .iomap_begin    = zonefs_iomap_begin,
+static const struct iomap_ops zonefs_write_iomap_ops = {
+       .iomap_begin    = zonefs_write_iomap_begin,
 };
 
 static int zonefs_read_folio(struct file *unused, struct folio *folio)
 {
-       return iomap_read_folio(folio, &zonefs_iomap_ops);
+       return iomap_read_folio(folio, &zonefs_read_iomap_ops);
 }
 
 static void zonefs_readahead(struct readahead_control *rac)
 {
-       iomap_readahead(rac, &zonefs_iomap_ops);
+       iomap_readahead(rac, &zonefs_read_iomap_ops);
 }
 
 /*
  * Map blocks for page writeback. This is used only on conventional zone files,
  * which implies that the page range can only be within the fixed inode size.
  */
-static int zonefs_map_blocks(struct iomap_writepage_ctx *wpc,
-                            struct inode *inode, loff_t offset)
+static int zonefs_write_map_blocks(struct iomap_writepage_ctx *wpc,
+                                  struct inode *inode, loff_t offset)
 {
        struct zonefs_inode_info *zi = ZONEFS_I(inode);
 
@@ -191,12 +224,12 @@ static int zonefs_map_blocks(struct iomap_writepage_ctx *wpc,
            offset < wpc->iomap.offset + wpc->iomap.length)
                return 0;
 
-       return zonefs_iomap_begin(inode, offset, zi->i_max_size - offset,
-                                 IOMAP_WRITE, &wpc->iomap, NULL);
+       return zonefs_write_iomap_begin(inode, offset, zi->i_max_size - offset,
+                                       IOMAP_WRITE, &wpc->iomap, NULL);
 }
 
 static const struct iomap_writeback_ops zonefs_writeback_ops = {
-       .map_blocks             = zonefs_map_blocks,
+       .map_blocks             = zonefs_write_map_blocks,
 };
 
 static int zonefs_writepage(struct page *page, struct writeback_control *wbc)
@@ -226,7 +259,8 @@ static int zonefs_swap_activate(struct swap_info_struct *sis,
                return -EINVAL;
        }
 
-       return iomap_swapfile_activate(sis, swap_file, span, &zonefs_iomap_ops);
+       return iomap_swapfile_activate(sis, swap_file, span,
+                                      &zonefs_read_iomap_ops);
 }
 
 static const struct address_space_operations zonefs_file_aops = {
@@ -647,7 +681,7 @@ static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
 
        /* Serialize against truncates */
        filemap_invalidate_lock_shared(inode->i_mapping);
-       ret = iomap_page_mkwrite(vmf, &zonefs_iomap_ops);
+       ret = iomap_page_mkwrite(vmf, &zonefs_write_iomap_ops);
        filemap_invalidate_unlock_shared(inode->i_mapping);
 
        sb_end_pagefault(inode->i_sb);
@@ -899,7 +933,7 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
        if (append)
                ret = zonefs_file_dio_append(iocb, from);
        else
-               ret = iomap_dio_rw(iocb, from, &zonefs_iomap_ops,
+               ret = iomap_dio_rw(iocb, from, &zonefs_write_iomap_ops,
                                   &zonefs_write_dio_ops, 0, NULL, 0);
        if (zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
            (ret > 0 || ret == -EIOCBQUEUED)) {
@@ -948,7 +982,7 @@ static ssize_t zonefs_file_buffered_write(struct kiocb *iocb,
        if (ret <= 0)
                goto inode_unlock;
 
-       ret = iomap_file_buffered_write(iocb, from, &zonefs_iomap_ops);
+       ret = iomap_file_buffered_write(iocb, from, &zonefs_write_iomap_ops);
        if (ret > 0)
                iocb->ki_pos += ret;
        else if (ret == -EIO)
@@ -1041,7 +1075,7 @@ static ssize_t zonefs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
                        goto inode_unlock;
                }
                file_accessed(iocb->ki_filp);
-               ret = iomap_dio_rw(iocb, to, &zonefs_iomap_ops,
+               ret = iomap_dio_rw(iocb, to, &zonefs_read_iomap_ops,
                                   &zonefs_read_dio_ops, 0, NULL, 0);
        } else {
                ret = generic_file_read_iter(iocb, to);
@@ -1085,7 +1119,8 @@ static int zonefs_seq_file_write_open(struct inode *inode)
 
                if (sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) {
 
-                       if (wro > sbi->s_max_wro_seq_files) {
+                       if (sbi->s_max_wro_seq_files
+                           && wro > sbi->s_max_wro_seq_files) {
                                atomic_dec(&sbi->s_wro_seq_files);
                                ret = -EBUSY;
                                goto unlock;
@@ -1760,12 +1795,6 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
 
        atomic_set(&sbi->s_wro_seq_files, 0);
        sbi->s_max_wro_seq_files = bdev_max_open_zones(sb->s_bdev);
-       if (!sbi->s_max_wro_seq_files &&
-           sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) {
-               zonefs_info(sb, "No open zones limit. Ignoring explicit_open mount option\n");
-               sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN;
-       }
-
        atomic_set(&sbi->s_active_seq_files, 0);
        sbi->s_max_active_seq_files = bdev_max_active_zones(sb->s_bdev);
 
@@ -1790,6 +1819,14 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
        zonefs_info(sb, "Mounting %u zones",
                    blkdev_nr_zones(sb->s_bdev->bd_disk));
 
+       if (!sbi->s_max_wro_seq_files &&
+           !sbi->s_max_active_seq_files &&
+           sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) {
+               zonefs_info(sb,
+                       "No open and active zone limits. Ignoring explicit_open mount option\n");
+               sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN;
+       }
+
        /* Create root directory inode */
        ret = -ENOMEM;
        inode = new_inode(sb);
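The zonefs split above gives reads and writes separate iomap_ops: the read side maps everything below EOF and reports a hole past it, while the write side reports unwritten space up to i_max_size. A small sketch of the read-side decision — the enum and function are hypothetical simplifications of `zonefs_read_iomap_begin()`, ignoring block alignment and the zone start sector:

```c
enum map_type { MAP_HOLE, MAP_MAPPED };

/* Mirrors the read-side decision: every block below EOF is mapped,
 * and a read past EOF sees a hole covering the requested range. */
static enum map_type read_map(long long offset, long long isize,
			      long long requested, long long *length)
{
	if (offset >= isize) {
		*length = requested;		/* hole up to the max size */
		return MAP_HOLE;
	}
	*length = isize - offset;		/* mapped extent ends at EOF */
	return MAP_MAPPED;
}
```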
index 302506b..8e47d48 100644 (file)
@@ -44,6 +44,7 @@ mandatory-y += msi.h
 mandatory-y += pci.h
 mandatory-y += percpu.h
 mandatory-y += pgalloc.h
+mandatory-y += platform-feature.h
 mandatory-y += preempt.h
 mandatory-y += rwonce.h
 mandatory-y += sections.h
diff --git a/include/asm-generic/platform-feature.h b/include/asm-generic/platform-feature.h
new file mode 100644 (file)
index 0000000..4b0af3d
--- /dev/null
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_GENERIC_PLATFORM_FEATURE_H
+#define _ASM_GENERIC_PLATFORM_FEATURE_H
+
+/* Number of arch specific feature flags. */
+#define PLATFORM_ARCH_FEAT_N   0
+
+#endif /* _ASM_GENERIC_PLATFORM_FEATURE_H */
index 75d40ac..5c65ae6 100644 (file)
@@ -76,6 +76,7 @@
 #define IEEE80211_STYPE_ACTION         0x00D0
 
 /* control */
+#define IEEE80211_STYPE_TRIGGER                0x0020
 #define IEEE80211_STYPE_CTL_EXT                0x0060
 #define IEEE80211_STYPE_BACK_REQ       0x0080
 #define IEEE80211_STYPE_BACK           0x0090
@@ -295,6 +296,17 @@ static inline u16 ieee80211_sn_sub(u16 sn1, u16 sn2)
 
 #define IEEE80211_HT_CTL_LEN           4
 
+/* trigger type within common_info of trigger frame */
+#define IEEE80211_TRIGGER_TYPE_MASK            0xf
+#define IEEE80211_TRIGGER_TYPE_BASIC           0x0
+#define IEEE80211_TRIGGER_TYPE_BFRP            0x1
+#define IEEE80211_TRIGGER_TYPE_MU_BAR          0x2
+#define IEEE80211_TRIGGER_TYPE_MU_RTS          0x3
+#define IEEE80211_TRIGGER_TYPE_BSRP            0x4
+#define IEEE80211_TRIGGER_TYPE_GCR_MU_BAR      0x5
+#define IEEE80211_TRIGGER_TYPE_BQRP            0x6
+#define IEEE80211_TRIGGER_TYPE_NFRP            0x7
+
 struct ieee80211_hdr {
        __le16 frame_control;
        __le16 duration_id;
@@ -324,6 +336,15 @@ struct ieee80211_qos_hdr {
        __le16 qos_ctrl;
 } __packed __aligned(2);
 
+struct ieee80211_trigger {
+       __le16 frame_control;
+       __le16 duration;
+       u8 ra[ETH_ALEN];
+       u8 ta[ETH_ALEN];
+       __le64 common_info;
+       u8 variable[];
+} __packed __aligned(2);
+
 /**
  * ieee80211_has_tods - check if IEEE80211_FCTL_TODS is set
  * @fc: frame control bytes in little-endian byteorder
@@ -729,6 +750,16 @@ static inline bool ieee80211_is_qos_nullfunc(__le16 fc)
               cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_NULLFUNC);
 }
 
+/**
+ * ieee80211_is_trigger - check if frame is trigger frame
+ * @fc: frame control field in little-endian byteorder
+ */
+static inline bool ieee80211_is_trigger(__le16 fc)
+{
+       return (fc & cpu_to_le16(IEEE80211_FCTL_FTYPE | IEEE80211_FCTL_STYPE)) ==
+              cpu_to_le16(IEEE80211_FTYPE_CTL | IEEE80211_STYPE_TRIGGER);
+}
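The new helper follows the same shape as the other frame-type predicates: mask off everything but the type and subtype bits and compare against CTL | TRIGGER. A host-endian sketch using the standard frame-control masks (the kernel helper operates on `__le16` via cpu_to_le16, elided here):

```c
#include <stdint.h>

/* Standard 802.11 frame-control type/subtype masks. */
#define IEEE80211_FCTL_FTYPE	0x000c
#define IEEE80211_FCTL_STYPE	0x00f0
#define IEEE80211_FTYPE_CTL	0x0004
#define IEEE80211_STYPE_TRIGGER	0x0020

/* Host-endian sketch of ieee80211_is_trigger(): flag bits outside the
 * type/subtype fields are ignored by the mask. */
static int is_trigger(uint16_t fc)
{
	return (fc & (IEEE80211_FCTL_FTYPE | IEEE80211_FCTL_STYPE)) ==
	       (IEEE80211_FTYPE_CTL | IEEE80211_STYPE_TRIGGER);
}
```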
+
 /**
  * ieee80211_is_any_nullfunc - check if frame is regular or QoS nullfunc frame
  * @fc: frame control bytes in little-endian byteorder
index b422947..5230251 100644 (file)
@@ -46,10 +46,10 @@ static inline void macvlan_count_rx(const struct macvlan_dev *vlan,
 
                pcpu_stats = get_cpu_ptr(vlan->pcpu_stats);
                u64_stats_update_begin(&pcpu_stats->syncp);
-               pcpu_stats->rx_packets++;
-               pcpu_stats->rx_bytes += len;
+               u64_stats_inc(&pcpu_stats->rx_packets);
+               u64_stats_add(&pcpu_stats->rx_bytes, len);
                if (multicast)
-                       pcpu_stats->rx_multicast++;
+                       u64_stats_inc(&pcpu_stats->rx_multicast);
                u64_stats_update_end(&pcpu_stats->syncp);
                put_cpu_ptr(vlan->pcpu_stats);
        } else {
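This hunk (and the struct conversions below) moves the per-CPU counters from plain u64 to u64_stats_t, updated via u64_stats_inc()/u64_stats_add() inside the existing syncp section. On 64-bit these are plain operations; on 32-bit the seqcount in syncp lets readers detect torn 64-bit reads and retry. A toy single-threaded sketch of that writer/reader protocol — `toy_stats`, `stats_add`, and `stats_read` are hypothetical, and the real u64_stats API also handles preemption and memory ordering:

```c
#include <stdint.h>

/* Toy seqcount mimicking u64_stats_sync on 32-bit: writers bump the
 * sequence around updates; readers retry if it changed or is odd. */
struct toy_stats {
	unsigned int seq;
	uint64_t rx_packets;
};

static void stats_add(struct toy_stats *s, uint64_t n)
{
	s->seq++;			/* begin: sequence becomes odd */
	s->rx_packets += n;
	s->seq++;			/* end: sequence is even again */
}

static uint64_t stats_read(const struct toy_stats *s)
{
	unsigned int start;
	uint64_t v;

	do {
		start = s->seq;
		v = s->rx_packets;
	} while ((start & 1) || start != s->seq);	/* torn read: retry */
	return v;
}
```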
index add6079..fc985e5 100644 (file)
 #include <uapi/linux/if_team.h>
 
 struct team_pcpu_stats {
-       u64                     rx_packets;
-       u64                     rx_bytes;
-       u64                     rx_multicast;
-       u64                     tx_packets;
-       u64                     tx_bytes;
+       u64_stats_t             rx_packets;
+       u64_stats_t             rx_bytes;
+       u64_stats_t             rx_multicast;
+       u64_stats_t             tx_packets;
+       u64_stats_t             tx_bytes;
        struct u64_stats_sync   syncp;
        u32                     rx_dropped;
        u32                     tx_dropped;
index 2be4dd7..e00c4ee 100644 (file)
@@ -118,11 +118,11 @@ static inline void vlan_drop_rx_stag_filter_info(struct net_device *dev)
  *     @tx_dropped: number of tx drops
  */
 struct vlan_pcpu_stats {
-       u64                     rx_packets;
-       u64                     rx_bytes;
-       u64                     rx_multicast;
-       u64                     tx_packets;
-       u64                     tx_bytes;
+       u64_stats_t             rx_packets;
+       u64_stats_t             rx_bytes;
+       u64_stats_t             rx_multicast;
+       u64_stats_t             tx_packets;
+       u64_stats_t             tx_bytes;
        struct u64_stats_sync   syncp;
        u32                     rx_errors;
        u32                     tx_dropped;
index 732de90..0f2a59c 100644 (file)
@@ -822,7 +822,6 @@ struct ata_port {
        struct ata_queued_cmd   qcmd[ATA_MAX_QUEUE + 1];
        u64                     qc_active;
        int                     nr_active_links; /* #links with active qcs */
-       unsigned int            sas_last_tag;   /* track next tag hw expects */
 
        struct ata_link         link;           /* host default link */
        struct ata_link         *slave_link;    /* see ata_slave_link_init() */
index f615a66..89afa4f 100644 (file)
@@ -2636,10 +2636,10 @@ struct packet_offload {
 
 /* often modified stats are per-CPU, other are shared (netdev->stats) */
 struct pcpu_sw_netstats {
-       u64     rx_packets;
-       u64     rx_bytes;
-       u64     tx_packets;
-       u64     tx_bytes;
+       u64_stats_t             rx_packets;
+       u64_stats_t             rx_bytes;
+       u64_stats_t             tx_packets;
+       u64_stats_t             tx_bytes;
        struct u64_stats_sync   syncp;
 } __aligned(4 * sizeof(u64));
 
@@ -2656,8 +2656,8 @@ static inline void dev_sw_netstats_rx_add(struct net_device *dev, unsigned int l
        struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);
 
        u64_stats_update_begin(&tstats->syncp);
-       tstats->rx_bytes += len;
-       tstats->rx_packets++;
+       u64_stats_add(&tstats->rx_bytes, len);
+       u64_stats_inc(&tstats->rx_packets);
        u64_stats_update_end(&tstats->syncp);
 }
 
@@ -2668,8 +2668,8 @@ static inline void dev_sw_netstats_tx_add(struct net_device *dev,
        struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);
 
        u64_stats_update_begin(&tstats->syncp);
-       tstats->tx_bytes += len;
-       tstats->tx_packets += packets;
+       u64_stats_add(&tstats->tx_bytes, len);
+       u64_stats_add(&tstats->tx_packets, packets);
        u64_stats_update_end(&tstats->syncp);
 }
 
@@ -3981,8 +3981,8 @@ static inline void netdev_tracker_free(struct net_device *dev,
 #endif
 }
 
-static inline void dev_hold_track(struct net_device *dev,
-                                 netdevice_tracker *tracker, gfp_t gfp)
+static inline void netdev_hold(struct net_device *dev,
+                              netdevice_tracker *tracker, gfp_t gfp)
 {
        if (dev) {
                __dev_hold(dev);
@@ -3990,8 +3990,8 @@ static inline void dev_hold_track(struct net_device *dev,
        }
 }
 
-static inline void dev_put_track(struct net_device *dev,
-                                netdevice_tracker *tracker)
+static inline void netdev_put(struct net_device *dev,
+                             netdevice_tracker *tracker)
 {
        if (dev) {
                netdev_tracker_free(dev, tracker);
@@ -4004,11 +4004,11 @@ static inline void dev_put_track(struct net_device *dev,
  *     @dev: network device
  *
  * Hold reference to device to keep it from being freed.
- * Try using dev_hold_track() instead.
+ * Try using netdev_hold() instead.
  */
 static inline void dev_hold(struct net_device *dev)
 {
-       dev_hold_track(dev, NULL, GFP_ATOMIC);
+       netdev_hold(dev, NULL, GFP_ATOMIC);
 }
 
 /**
@@ -4016,17 +4016,17 @@ static inline void dev_hold(struct net_device *dev)
  *     @dev: network device
  *
  * Release reference to device to allow it to be freed.
- * Try using dev_put_track() instead.
+ * Try using netdev_put() instead.
  */
 static inline void dev_put(struct net_device *dev)
 {
-       dev_put_track(dev, NULL);
+       netdev_put(dev, NULL);
 }
 
-static inline void dev_replace_track(struct net_device *odev,
-                                    struct net_device *ndev,
-                                    netdevice_tracker *tracker,
-                                    gfp_t gfp)
+static inline void netdev_ref_replace(struct net_device *odev,
+                                     struct net_device *ndev,
+                                     netdevice_tracker *tracker,
+                                     gfp_t gfp)
 {
        if (odev)
                netdev_tracker_free(odev, tracker);
index 77fa6a6..6dbb4c9 100644 (file)
@@ -119,9 +119,10 @@ typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error,
                                      bool was_async);
 
 /*
- * Per-inode description.  This must be directly after the inode struct.
+ * Per-inode context.  This wraps the VFS inode.
  */
-struct netfs_i_context {
+struct netfs_inode {
+       struct inode            inode;          /* The VFS inode */
        const struct netfs_request_ops *ops;
 #if IS_ENABLED(CONFIG_FSCACHE)
        struct fscache_cookie   *cache;
@@ -256,7 +257,7 @@ struct netfs_cache_ops {
         * boundary as appropriate.
         */
        enum netfs_io_source (*prepare_read)(struct netfs_io_subrequest *subreq,
-                                              loff_t i_size);
+                                            loff_t i_size);
 
        /* Prepare a write operation, working out what part of the write we can
         * actually do.
@@ -288,45 +289,35 @@ extern void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
 extern void netfs_stats_show(struct seq_file *);
 
 /**
- * netfs_i_context - Get the netfs inode context from the inode
+ * netfs_inode - Get the netfs inode context from the inode
  * @inode: The inode to query
  *
  * Get the netfs lib inode context from the network filesystem's inode.  The
  * context struct is expected to directly follow on from the VFS inode struct.
  */
-static inline struct netfs_i_context *netfs_i_context(struct inode *inode)
+static inline struct netfs_inode *netfs_inode(struct inode *inode)
 {
-       return (void *)inode + sizeof(*inode);
+       return container_of(inode, struct netfs_inode, inode);
 }
 
 /**
- * netfs_inode - Get the netfs inode from the inode context
- * @ctx: The context to query
- *
- * Get the netfs inode from the netfs library's inode context.  The VFS inode
- * is expected to directly precede the context struct.
- */
-static inline struct inode *netfs_inode(struct netfs_i_context *ctx)
-{
-       return (void *)ctx - sizeof(struct inode);
-}
-
-/**
- * netfs_i_context_init - Initialise a netfs lib context
+ * netfs_inode_init - Initialise a netfslib inode context
  * @inode: The inode with which the context is associated
  * @ops: The netfs's operations list
  *
  * Initialise the netfs library context struct.  This is expected to follow on
  * directly from the VFS inode struct.
  */
-static inline void netfs_i_context_init(struct inode *inode,
-                                       const struct netfs_request_ops *ops)
+static inline void netfs_inode_init(struct inode *inode,
+                                   const struct netfs_request_ops *ops)
 {
-       struct netfs_i_context *ctx = netfs_i_context(inode);
+       struct netfs_inode *ctx = netfs_inode(inode);
 
-       memset(ctx, 0, sizeof(*ctx));
        ctx->ops = ops;
        ctx->remote_i_size = i_size_read(inode);
+#if IS_ENABLED(CONFIG_FSCACHE)
+       ctx->cache = NULL;
+#endif
 }
 
 /**
@@ -338,7 +329,7 @@ static inline void netfs_i_context_init(struct inode *inode,
  */
 static inline void netfs_resize_file(struct inode *inode, loff_t new_i_size)
 {
-       struct netfs_i_context *ctx = netfs_i_context(inode);
+       struct netfs_inode *ctx = netfs_inode(inode);
 
        ctx->remote_i_size = new_i_size;
 }
@@ -352,7 +343,7 @@ static inline void netfs_resize_file(struct inode *inode, loff_t new_i_size)
 static inline struct fscache_cookie *netfs_i_cookie(struct inode *inode)
 {
 #if IS_ENABLED(CONFIG_FSCACHE)
-       struct netfs_i_context *ctx = netfs_i_context(inode);
+       struct netfs_inode *ctx = netfs_inode(inode);
        return ctx->cache;
 #else
        return NULL;
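The netfs hunks replace the fragile pointer arithmetic (`(void *)inode + sizeof(*inode)`) with an embedded inode and container_of(), which works regardless of where the member sits in the wrapping struct. A self-contained sketch of the pattern — `toy_netfs_inode` is a hypothetical miniature of the new `struct netfs_inode`:

```c
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct inode { unsigned long i_ino; };

/* Like the new struct netfs_inode: the VFS inode is embedded, so the
 * conversion no longer depends on the context directly following it. */
struct toy_netfs_inode {
	struct inode inode;
	long long remote_i_size;
};

static struct toy_netfs_inode *toy_netfs_inode(struct inode *inode)
{
	return container_of(inode, struct toy_netfs_inode, inode);
}
```

A side effect visible in the diff: netfs_inode_init() can drop its memset(), because the filesystem now allocates and zeroes the whole wrapping struct itself.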
diff --git a/include/linux/platform-feature.h b/include/linux/platform-feature.h
new file mode 100644 (file)
index 0000000..b2f48be
--- /dev/null
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _PLATFORM_FEATURE_H
+#define _PLATFORM_FEATURE_H
+
+#include <linux/bitops.h>
+#include <asm/platform-feature.h>
+
+/* The platform features start with the architecture-specific ones. */
+
+/* Used to enable platform specific DMA handling for virtio devices. */
+#define PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS  (0 + PLATFORM_ARCH_FEAT_N)
+
+#define PLATFORM_FEAT_N                                (1 + PLATFORM_ARCH_FEAT_N)
+
+void platform_set(unsigned int feature);
+void platform_clear(unsigned int feature);
+bool platform_has(unsigned int feature);
+
+#endif /* _PLATFORM_FEATURE_H */
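The two new headers number features as one flat space: arch-specific flags occupy indices 0..PLATFORM_ARCH_FEAT_N-1, and the generic ones follow, so PLATFORM_FEAT_N sizes the whole bitmap. A sketch of how a one-word bitmap could back platform_set()/platform_has() — the `toy_` functions are hypothetical; the real implementation is added elsewhere in the series and is not shown in this diff:

```c
/* Values from the two new headers: with no arch-specific flags,
 * PLATFORM_ARCH_FEAT_N is 0 and the generic features follow it. */
#define PLATFORM_ARCH_FEAT_N			0
#define PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS	(0 + PLATFORM_ARCH_FEAT_N)
#define PLATFORM_FEAT_N				(1 + PLATFORM_ARCH_FEAT_N)

/* Hypothetical one-word bitmap; enough while PLATFORM_FEAT_N is small. */
static unsigned long platform_features;

static void toy_platform_set(unsigned int feature)
{
	platform_features |= 1UL << feature;
}

static int toy_platform_has(unsigned int feature)
{
	return !!(platform_features & (1UL << feature));
}
```

Offsetting the generic indices by PLATFORM_ARCH_FEAT_N means an architecture can grow its private flag block without renumbering the shared ones.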
index d3d1055..82edf03 100644 (file)
@@ -43,6 +43,7 @@
 #include <linux/netfilter/nf_conntrack_common.h>
 #endif
 #include <net/net_debug.h>
+#include <net/dropreason.h>
 
 /**
  * DOC: skb checksums
@@ -337,184 +338,6 @@ struct sk_buff_head {
 
 struct sk_buff;
 
-/* The reason of skb drop, which is used in kfree_skb_reason().
- * en...maybe they should be splited by group?
- *
- * Each item here should also be in 'TRACE_SKB_DROP_REASON', which is
- * used to translate the reason to string.
- */
-enum skb_drop_reason {
-       SKB_NOT_DROPPED_YET = 0,
-       SKB_DROP_REASON_NOT_SPECIFIED,  /* drop reason is not specified */
-       SKB_DROP_REASON_NO_SOCKET,      /* socket not found */
-       SKB_DROP_REASON_PKT_TOO_SMALL,  /* packet size is too small */
-       SKB_DROP_REASON_TCP_CSUM,       /* TCP checksum error */
-       SKB_DROP_REASON_SOCKET_FILTER,  /* dropped by socket filter */
-       SKB_DROP_REASON_UDP_CSUM,       /* UDP checksum error */
-       SKB_DROP_REASON_NETFILTER_DROP, /* dropped by netfilter */
-       SKB_DROP_REASON_OTHERHOST,      /* packet don't belong to current
-                                        * host (interface is in promisc
-                                        * mode)
-                                        */
-       SKB_DROP_REASON_IP_CSUM,        /* IP checksum error */
-       SKB_DROP_REASON_IP_INHDR,       /* there is something wrong with
-                                        * IP header (see
-                                        * IPSTATS_MIB_INHDRERRORS)
-                                        */
-       SKB_DROP_REASON_IP_RPFILTER,    /* IP rpfilter validate failed.
-                                        * see the document for rp_filter
-                                        * in ip-sysctl.rst for more
-                                        * information
-                                        */
-       SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST, /* destination address of L2
-                                                 * is multicast, but L3 is
-                                                 * unicast.
-                                                 */
-       SKB_DROP_REASON_XFRM_POLICY,    /* xfrm policy check failed */
-       SKB_DROP_REASON_IP_NOPROTO,     /* no support for IP protocol */
-       SKB_DROP_REASON_SOCKET_RCVBUFF, /* socket receive buff is full */
-       SKB_DROP_REASON_PROTO_MEM,      /* proto memory limition, such as
-                                        * udp packet drop out of
-                                        * udp_memory_allocated.
-                                        */
-       SKB_DROP_REASON_TCP_MD5NOTFOUND,        /* no MD5 hash and one
-                                                * expected, corresponding
-                                                * to LINUX_MIB_TCPMD5NOTFOUND
-                                                */
-       SKB_DROP_REASON_TCP_MD5UNEXPECTED,      /* MD5 hash and we're not
-                                                * expecting one, corresponding
-                                                * to LINUX_MIB_TCPMD5UNEXPECTED
-                                                */
-       SKB_DROP_REASON_TCP_MD5FAILURE, /* MD5 hash and its wrong,
-                                        * corresponding to
-                                        * LINUX_MIB_TCPMD5FAILURE
-                                        */
-       SKB_DROP_REASON_SOCKET_BACKLOG, /* failed to add skb to socket
-                                        * backlog (see
-                                        * LINUX_MIB_TCPBACKLOGDROP)
-                                        */
-       SKB_DROP_REASON_TCP_FLAGS,      /* TCP flags invalid */
-       SKB_DROP_REASON_TCP_ZEROWINDOW, /* TCP receive window size is zero,
-                                        * see LINUX_MIB_TCPZEROWINDOWDROP
-                                        */
-       SKB_DROP_REASON_TCP_OLD_DATA,   /* the TCP data reveived is already
-                                        * received before (spurious retrans
-                                        * may happened), see
-                                        * LINUX_MIB_DELAYEDACKLOST
-                                        */
-       SKB_DROP_REASON_TCP_OVERWINDOW, /* the TCP data is out of window,
-                                        * the seq of the first byte exceed
-                                        * the right edges of receive
-                                        * window
-                                        */
-       SKB_DROP_REASON_TCP_OFOMERGE,   /* the data of skb is already in
-                                        * the ofo queue, corresponding to
-                                        * LINUX_MIB_TCPOFOMERGE
-                                        */
-       SKB_DROP_REASON_TCP_RFC7323_PAWS, /* PAWS check, corresponding to
-                                          * LINUX_MIB_PAWSESTABREJECTED
-                                          */
-       SKB_DROP_REASON_TCP_INVALID_SEQUENCE, /* Not acceptable SEQ field */
-       SKB_DROP_REASON_TCP_RESET,      /* Invalid RST packet */
-       SKB_DROP_REASON_TCP_INVALID_SYN, /* Incoming packet has unexpected SYN flag */
-       SKB_DROP_REASON_TCP_CLOSE,      /* TCP socket in CLOSE state */
-       SKB_DROP_REASON_TCP_FASTOPEN,   /* dropped by FASTOPEN request socket */
-       SKB_DROP_REASON_TCP_OLD_ACK,    /* TCP ACK is old, but in window */
-       SKB_DROP_REASON_TCP_TOO_OLD_ACK, /* TCP ACK is too old */
-       SKB_DROP_REASON_TCP_ACK_UNSENT_DATA, /* TCP ACK for data we haven't sent yet */
-       SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE, /* pruned from TCP OFO queue */
-       SKB_DROP_REASON_TCP_OFO_DROP,   /* data already in receive queue */
-       SKB_DROP_REASON_IP_OUTNOROUTES, /* route lookup failed */
-       SKB_DROP_REASON_BPF_CGROUP_EGRESS,      /* dropped by
-                                                * BPF_PROG_TYPE_CGROUP_SKB
-                                                * eBPF program
-                                                */
-       SKB_DROP_REASON_IPV6DISABLED,   /* IPv6 is disabled on the device */
-       SKB_DROP_REASON_NEIGH_CREATEFAIL,       /* failed to create neigh
-                                                * entry
-                                                */
-       SKB_DROP_REASON_NEIGH_FAILED,   /* neigh entry in failed state */
-       SKB_DROP_REASON_NEIGH_QUEUEFULL,        /* arp_queue for neigh
-                                                * entry is full
-                                                */
-       SKB_DROP_REASON_NEIGH_DEAD,     /* neigh entry is dead */
-       SKB_DROP_REASON_TC_EGRESS,      /* dropped in TC egress HOOK */
-       SKB_DROP_REASON_QDISC_DROP,     /* dropped by qdisc when packet
-                                        * outputting (failed to enqueue to
-                                        * current qdisc)
-                                        */
-       SKB_DROP_REASON_CPU_BACKLOG,    /* failed to enqueue the skb to
-                                        * the per CPU backlog queue. This
-                                        * can be caused by backlog queue
-                                        * full (see netdev_max_backlog in
-                                        * net.rst) or RPS flow limit
-                                        */
-       SKB_DROP_REASON_XDP,            /* dropped by XDP in input path */
-       SKB_DROP_REASON_TC_INGRESS,     /* dropped in TC ingress HOOK */
-       SKB_DROP_REASON_UNHANDLED_PROTO,        /* protocol not implemented
-                                                * or not supported
-                                                */
-       SKB_DROP_REASON_SKB_CSUM,       /* sk_buff checksum computation
-                                        * error
-                                        */
-       SKB_DROP_REASON_SKB_GSO_SEG,    /* gso segmentation error */
-       SKB_DROP_REASON_SKB_UCOPY_FAULT,        /* failed to copy data from
-                                                * user space, e.g., via
-                                                * zerocopy_sg_from_iter()
-                                                * or skb_orphan_frags_rx()
-                                                */
-       SKB_DROP_REASON_DEV_HDR,        /* device driver specific
-                                        * header/metadata is invalid
-                                        */
-       /* the device is not ready to xmit/recv due to any of its data
-        * structure that is not up/ready/initialized, e.g., the IFF_UP is
-        * not set, or driver specific tun->tfiles[txq] is not initialized
-        */
-       SKB_DROP_REASON_DEV_READY,
-       SKB_DROP_REASON_FULL_RING,      /* ring buffer is full */
-       SKB_DROP_REASON_NOMEM,          /* error due to OOM */
-       SKB_DROP_REASON_HDR_TRUNC,      /* failed to trunc/extract the header
-                                        * from networking data, e.g., failed
-                                        * to pull the protocol header from
-                                        * frags via pskb_may_pull()
-                                        */
-       SKB_DROP_REASON_TAP_FILTER,     /* dropped by (ebpf) filter directly
-                                        * attached to tun/tap, e.g., via
-                                        * TUNSETFILTEREBPF
-                                        */
-       SKB_DROP_REASON_TAP_TXFILTER,   /* dropped by tx filter implemented
-                                        * at tun/tap, e.g., check_filter()
-                                        */
-       SKB_DROP_REASON_ICMP_CSUM,      /* ICMP checksum error */
-       SKB_DROP_REASON_INVALID_PROTO,  /* the packet doesn't follow RFC
-                                        * 2211, such as a broadcasts
-                                        * ICMP_TIMESTAMP
-                                        */
-       SKB_DROP_REASON_IP_INADDRERRORS,        /* host unreachable, corresponding
-                                                * to IPSTATS_MIB_INADDRERRORS
-                                                */
-       SKB_DROP_REASON_IP_INNOROUTES,  /* network unreachable, corresponding
-                                        * to IPSTATS_MIB_INADDRERRORS
-                                        */
-       SKB_DROP_REASON_PKT_TOO_BIG,    /* packet size is too big (maybe exceed
-                                        * the MTU)
-                                        */
-       SKB_DROP_REASON_MAX,
-};
-
-#define SKB_DR_INIT(name, reason)                              \
-       enum skb_drop_reason name = SKB_DROP_REASON_##reason
-#define SKB_DR(name)                                           \
-       SKB_DR_INIT(name, NOT_SPECIFIED)
-#define SKB_DR_SET(name, reason)                               \
-       (name = SKB_DROP_REASON_##reason)
-#define SKB_DR_OR(name, reason)                                        \
-       do {                                                    \
-               if (name == SKB_DROP_REASON_NOT_SPECIFIED ||    \
-                   name == SKB_NOT_DROPPED_YET)                \
-                       SKB_DR_SET(name, reason);               \
-       } while (0)
-
 /* To allow 64K frame to be packed as single skb without frag_list we
  * require 64K/PAGE_SIZE pages plus 1 additional page to allow for
  * buffers which do not start on a page boundary.
index 17311ad..414b8c7 100644 (file)
@@ -428,10 +428,6 @@ extern int __sys_recvfrom(int fd, void __user *ubuf, size_t size,
 extern int __sys_sendto(int fd, void __user *buff, size_t len,
                        unsigned int flags, struct sockaddr __user *addr,
                        int addr_len);
-extern int __sys_accept4_file(struct file *file, unsigned file_flags,
-                       struct sockaddr __user *upeer_sockaddr,
-                        int __user *upeer_addrlen, int flags,
-                        unsigned long nofile);
 extern struct file *do_accept(struct file *file, unsigned file_flags,
                              struct sockaddr __user *upeer_sockaddr,
                              int __user *upeer_addrlen, int flags);
index 9a36051..49c7c32 100644 (file)
@@ -604,13 +604,4 @@ static inline void virtio_cwrite64(struct virtio_device *vdev,
                _r;                                                     \
        })
 
-#ifdef CONFIG_ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
-int arch_has_restricted_virtio_memory_access(void);
-#else
-static inline int arch_has_restricted_virtio_memory_access(void)
-{
-       return 0;
-}
-#endif /* CONFIG_ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS */
-
 #endif /* _LINUX_VIRTIO_CONFIG_H */
index 61b4906..1618b76 100644 (file)
@@ -107,7 +107,8 @@ struct bond_option {
 };
 
 int __bond_opt_set(struct bonding *bond, unsigned int option,
-                  struct bond_opt_value *val);
+                  struct bond_opt_value *val,
+                  struct nlattr *bad_attr, struct netlink_ext_ack *extack);
 int __bond_opt_set_notify(struct bonding *bond, unsigned int option,
                          struct bond_opt_value *val);
 int bond_opt_tryset_rtnl(struct bonding *bond, unsigned int option, char *buf);
diff --git a/include/net/dropreason.h b/include/net/dropreason.h
new file mode 100644 (file)
index 0000000..fae9b40
--- /dev/null
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef _LINUX_DROPREASON_H
+#define _LINUX_DROPREASON_H
+
+/**
+ * enum skb_drop_reason - the reasons of skb drops
+ *
+ * The reason of skb drop, which is used in kfree_skb_reason().
+ */
+enum skb_drop_reason {
+       /**
+        * @SKB_NOT_DROPPED_YET: skb is not dropped yet (used for no-drop case)
+        */
+       SKB_NOT_DROPPED_YET = 0,
+       /** @SKB_DROP_REASON_NOT_SPECIFIED: drop reason is not specified */
+       SKB_DROP_REASON_NOT_SPECIFIED,
+       /** @SKB_DROP_REASON_NO_SOCKET: socket not found */
+       SKB_DROP_REASON_NO_SOCKET,
+       /** @SKB_DROP_REASON_PKT_TOO_SMALL: packet size is too small */
+       SKB_DROP_REASON_PKT_TOO_SMALL,
+       /** @SKB_DROP_REASON_TCP_CSUM: TCP checksum error */
+       SKB_DROP_REASON_TCP_CSUM,
+       /** @SKB_DROP_REASON_SOCKET_FILTER: dropped by socket filter */
+       SKB_DROP_REASON_SOCKET_FILTER,
+       /** @SKB_DROP_REASON_UDP_CSUM: UDP checksum error */
+       SKB_DROP_REASON_UDP_CSUM,
+       /** @SKB_DROP_REASON_NETFILTER_DROP: dropped by netfilter */
+       SKB_DROP_REASON_NETFILTER_DROP,
+       /**
+        * @SKB_DROP_REASON_OTHERHOST: packet don't belong to current host
+        * (interface is in promisc mode)
+        */
+       SKB_DROP_REASON_OTHERHOST,
+       /** @SKB_DROP_REASON_IP_CSUM: IP checksum error */
+       SKB_DROP_REASON_IP_CSUM,
+       /**
+        * @SKB_DROP_REASON_IP_INHDR: there is something wrong with IP header (see
+        * IPSTATS_MIB_INHDRERRORS)
+        */
+       SKB_DROP_REASON_IP_INHDR,
+       /**
+        * @SKB_DROP_REASON_IP_RPFILTER: IP rpfilter validate failed. see the
+        * document for rp_filter in ip-sysctl.rst for more information
+        */
+       SKB_DROP_REASON_IP_RPFILTER,
+       /**
+        * @SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST: destination address of L2 is
+        * multicast, but L3 is unicast.
+        */
+       SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST,
+       /** @SKB_DROP_REASON_XFRM_POLICY: xfrm policy check failed */
+       SKB_DROP_REASON_XFRM_POLICY,
+       /** @SKB_DROP_REASON_IP_NOPROTO: no support for IP protocol */
+       SKB_DROP_REASON_IP_NOPROTO,
+       /** @SKB_DROP_REASON_SOCKET_RCVBUFF: socket receive buff is full */
+       SKB_DROP_REASON_SOCKET_RCVBUFF,
+       /**
+        * @SKB_DROP_REASON_PROTO_MEM: proto memory limition, such as udp packet
+        * drop out of udp_memory_allocated.
+        */
+       SKB_DROP_REASON_PROTO_MEM,
+       /**
+        * @SKB_DROP_REASON_TCP_MD5NOTFOUND: no MD5 hash and one expected,
+        * corresponding to LINUX_MIB_TCPMD5NOTFOUND
+        */
+       SKB_DROP_REASON_TCP_MD5NOTFOUND,
+       /**
+        * @SKB_DROP_REASON_TCP_MD5UNEXPECTED: MD5 hash and we're not expecting
+        * one, corresponding to LINUX_MIB_TCPMD5UNEXPECTED
+        */
+       SKB_DROP_REASON_TCP_MD5UNEXPECTED,
+       /**
+        * @SKB_DROP_REASON_TCP_MD5FAILURE: MD5 hash and its wrong, corresponding
+        * to LINUX_MIB_TCPMD5FAILURE
+        */
+       SKB_DROP_REASON_TCP_MD5FAILURE,
+       /**
+        * @SKB_DROP_REASON_SOCKET_BACKLOG: failed to add skb to socket backlog (
+        * see LINUX_MIB_TCPBACKLOGDROP)
+        */
+       SKB_DROP_REASON_SOCKET_BACKLOG,
+       /** @SKB_DROP_REASON_TCP_FLAGS: TCP flags invalid */
+       SKB_DROP_REASON_TCP_FLAGS,
+       /**
+        * @SKB_DROP_REASON_TCP_ZEROWINDOW: TCP receive window size is zero,
+        * see LINUX_MIB_TCPZEROWINDOWDROP
+        */
+       SKB_DROP_REASON_TCP_ZEROWINDOW,
+       /**
+        * @SKB_DROP_REASON_TCP_OLD_DATA: the TCP data reveived is already
+        * received before (spurious retrans may happened), see
+        * LINUX_MIB_DELAYEDACKLOST
+        */
+       SKB_DROP_REASON_TCP_OLD_DATA,
+       /**
+        * @SKB_DROP_REASON_TCP_OVERWINDOW: the TCP data is out of window,
+        * the seq of the first byte exceed the right edges of receive
+        * window
+        */
+       SKB_DROP_REASON_TCP_OVERWINDOW,
+       /**
+        * @SKB_DROP_REASON_TCP_OFOMERGE: the data of skb is already in the ofo
+        * queue, corresponding to LINUX_MIB_TCPOFOMERGE
+        */
+       SKB_DROP_REASON_TCP_OFOMERGE,
+       /**
+        * @SKB_DROP_REASON_TCP_RFC7323_PAWS: PAWS check, corresponding to
+        * LINUX_MIB_PAWSESTABREJECTED
+        */
+       SKB_DROP_REASON_TCP_RFC7323_PAWS,
+       /** @SKB_DROP_REASON_TCP_INVALID_SEQUENCE: Not acceptable SEQ field */
+       SKB_DROP_REASON_TCP_INVALID_SEQUENCE,
+       /** @SKB_DROP_REASON_TCP_RESET: Invalid RST packet */
+       SKB_DROP_REASON_TCP_RESET,
+       /**
+        * @SKB_DROP_REASON_TCP_INVALID_SYN: Incoming packet has unexpected
+        * SYN flag
+        */
+       SKB_DROP_REASON_TCP_INVALID_SYN,
+       /** @SKB_DROP_REASON_TCP_CLOSE: TCP socket in CLOSE state */
+       SKB_DROP_REASON_TCP_CLOSE,
+       /** @SKB_DROP_REASON_TCP_FASTOPEN: dropped by FASTOPEN request socket */
+       SKB_DROP_REASON_TCP_FASTOPEN,
+       /** @SKB_DROP_REASON_TCP_OLD_ACK: TCP ACK is old, but in window */
+       SKB_DROP_REASON_TCP_OLD_ACK,
+       /** @SKB_DROP_REASON_TCP_TOO_OLD_ACK: TCP ACK is too old */
+       SKB_DROP_REASON_TCP_TOO_OLD_ACK,
+       /**
+        * @SKB_DROP_REASON_TCP_ACK_UNSENT_DATA: TCP ACK for data we haven't
+        * sent yet
+        */
+       SKB_DROP_REASON_TCP_ACK_UNSENT_DATA,
+       /** @SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE: pruned from TCP OFO queue */
+       SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE,
+       /** @SKB_DROP_REASON_TCP_OFO_DROP: data already in receive queue */
+       SKB_DROP_REASON_TCP_OFO_DROP,
+       /** @SKB_DROP_REASON_IP_OUTNOROUTES: route lookup failed */
+       SKB_DROP_REASON_IP_OUTNOROUTES,
+       /**
+        * @SKB_DROP_REASON_BPF_CGROUP_EGRESS: dropped by BPF_PROG_TYPE_CGROUP_SKB
+        * eBPF program
+        */
+       SKB_DROP_REASON_BPF_CGROUP_EGRESS,
+       /** @SKB_DROP_REASON_IPV6DISABLED: IPv6 is disabled on the device */
+       SKB_DROP_REASON_IPV6DISABLED,
+       /** @SKB_DROP_REASON_NEIGH_CREATEFAIL: failed to create neigh entry */
+       SKB_DROP_REASON_NEIGH_CREATEFAIL,
+       /** @SKB_DROP_REASON_NEIGH_FAILED: neigh entry in failed state */
+       SKB_DROP_REASON_NEIGH_FAILED,
+       /** @SKB_DROP_REASON_NEIGH_QUEUEFULL: arp_queue for neigh entry is full */
+       SKB_DROP_REASON_NEIGH_QUEUEFULL,
+       /** @SKB_DROP_REASON_NEIGH_DEAD: neigh entry is dead */
+       SKB_DROP_REASON_NEIGH_DEAD,
+       /** @SKB_DROP_REASON_TC_EGRESS: dropped in TC egress HOOK */
+       SKB_DROP_REASON_TC_EGRESS,
+       /**
+        * @SKB_DROP_REASON_QDISC_DROP: dropped by qdisc when packet outputting (
+        * failed to enqueue to current qdisc)
+        */
+       SKB_DROP_REASON_QDISC_DROP,
+       /**
+        * @SKB_DROP_REASON_CPU_BACKLOG: failed to enqueue the skb to the per CPU
+        * backlog queue. This can be caused by backlog queue full (see
+        * netdev_max_backlog in net.rst) or RPS flow limit
+        */
+       SKB_DROP_REASON_CPU_BACKLOG,
+       /** @SKB_DROP_REASON_XDP: dropped by XDP in input path */
+       SKB_DROP_REASON_XDP,
+       /** @SKB_DROP_REASON_TC_INGRESS: dropped in TC ingress HOOK */
+       SKB_DROP_REASON_TC_INGRESS,
+       /** @SKB_DROP_REASON_UNHANDLED_PROTO: protocol not implemented or not supported */
+       SKB_DROP_REASON_UNHANDLED_PROTO,
+       /** @SKB_DROP_REASON_SKB_CSUM: sk_buff checksum computation error */
+       SKB_DROP_REASON_SKB_CSUM,
+       /** @SKB_DROP_REASON_SKB_GSO_SEG: gso segmentation error */
+       SKB_DROP_REASON_SKB_GSO_SEG,
+       /**
+        * @SKB_DROP_REASON_SKB_UCOPY_FAULT: failed to copy data from user space,
+        * e.g., via zerocopy_sg_from_iter() or skb_orphan_frags_rx()
+        */
+       SKB_DROP_REASON_SKB_UCOPY_FAULT,
+       /** @SKB_DROP_REASON_DEV_HDR: device driver specific header/metadata is invalid */
+       SKB_DROP_REASON_DEV_HDR,
+       /**
+        * @SKB_DROP_REASON_DEV_READY: the device is not ready to xmit/recv due to
+        * any of its data structure that is not up/ready/initialized,
+        * e.g., the IFF_UP is not set, or driver specific tun->tfiles[txq]
+        * is not initialized
+        */
+       SKB_DROP_REASON_DEV_READY,
+       /** @SKB_DROP_REASON_FULL_RING: ring buffer is full */
+       SKB_DROP_REASON_FULL_RING,
+       /** @SKB_DROP_REASON_NOMEM: error due to OOM */
+       SKB_DROP_REASON_NOMEM,
+       /**
+        * @SKB_DROP_REASON_HDR_TRUNC: failed to trunc/extract the header from
+        * networking data, e.g., failed to pull the protocol header from
+        * frags via pskb_may_pull()
+        */
+       SKB_DROP_REASON_HDR_TRUNC,
+       /**
+        * @SKB_DROP_REASON_TAP_FILTER: dropped by (ebpf) filter directly attached
+        * to tun/tap, e.g., via TUNSETFILTEREBPF
+        */
+       SKB_DROP_REASON_TAP_FILTER,
+       /**
+        * @SKB_DROP_REASON_TAP_TXFILTER: dropped by tx filter implemented at
+        * tun/tap, e.g., check_filter()
+        */
+       SKB_DROP_REASON_TAP_TXFILTER,
+       /** @SKB_DROP_REASON_ICMP_CSUM: ICMP checksum error */
+       SKB_DROP_REASON_ICMP_CSUM,
+       /**
+        * @SKB_DROP_REASON_INVALID_PROTO: the packet doesn't follow RFC 2211,
+        * such as a broadcasts ICMP_TIMESTAMP
+        */
+       SKB_DROP_REASON_INVALID_PROTO,
+       /**
+        * @SKB_DROP_REASON_IP_INADDRERRORS: host unreachable, corresponding to
+        * IPSTATS_MIB_INADDRERRORS
+        */
+       SKB_DROP_REASON_IP_INADDRERRORS,
+       /**
+        * @SKB_DROP_REASON_IP_INNOROUTES: network unreachable, corresponding to
+        * IPSTATS_MIB_INADDRERRORS
+        */
+       SKB_DROP_REASON_IP_INNOROUTES,
+       /**
+        * @SKB_DROP_REASON_PKT_TOO_BIG: packet size is too big (maybe exceed the
+        * MTU)
+        */
+       SKB_DROP_REASON_PKT_TOO_BIG,
+       /**
+        * @SKB_DROP_REASON_MAX: the maximum of drop reason, which shouldn't be
+        * used as a real 'reason'
+        */
+       SKB_DROP_REASON_MAX,
+};
+
+#define SKB_DR_INIT(name, reason)                              \
+       enum skb_drop_reason name = SKB_DROP_REASON_##reason
+#define SKB_DR(name)                                           \
+       SKB_DR_INIT(name, NOT_SPECIFIED)
+#define SKB_DR_SET(name, reason)                               \
+       (name = SKB_DROP_REASON_##reason)
+#define SKB_DR_OR(name, reason)                                        \
+       do {                                                    \
+               if (name == SKB_DROP_REASON_NOT_SPECIFIED ||    \
+                   name == SKB_NOT_DROPPED_YET)                \
+                       SKB_DR_SET(name, reason);               \
+       } while (0)
+
+extern const char * const drop_reasons[];
+
+#endif
index 021778a..6484095 100644 (file)
@@ -612,5 +612,6 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
                                enum tc_setup_type type, void *data,
                                struct flow_block_offload *bo,
                                void (*cleanup)(struct flow_block_cb *block_cb));
+bool flow_indr_dev_exists(void);
 
 #endif /* _NET_FLOW_OFFLOAD_H */
index c24fa93..70cbc4a 100644 (file)
@@ -456,8 +456,8 @@ static inline void iptunnel_xmit_stats(struct net_device *dev, int pkt_len)
                struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats);
 
                u64_stats_update_begin(&tstats->syncp);
-               tstats->tx_bytes += pkt_len;
-               tstats->tx_packets++;
+               u64_stats_add(&tstats->tx_bytes, pkt_len);
+               u64_stats_inc(&tstats->tx_packets);
                u64_stats_update_end(&tstats->syncp);
                put_cpu_ptr(tstats);
        } else {
index 5b38bf1..de9dcc5 100644 (file)
@@ -1063,7 +1063,7 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr);
 int ip6_append_data(struct sock *sk,
                    int getfrag(void *from, char *to, int offset, int len,
                                int odd, struct sk_buff *skb),
-                   void *from, int length, int transhdrlen,
+                   void *from, size_t length, int transhdrlen,
                    struct ipcm6_cookie *ipc6, struct flowi6 *fl6,
                    struct rt6_info *rt, unsigned int flags);
 
@@ -1079,7 +1079,7 @@ struct sk_buff *__ip6_make_skb(struct sock *sk, struct sk_buff_head *queue,
 struct sk_buff *ip6_make_skb(struct sock *sk,
                             int getfrag(void *from, char *to, int offset,
                                         int len, int odd, struct sk_buff *skb),
-                            void *from, int length, int transhdrlen,
+                            void *from, size_t length, int transhdrlen,
                             struct ipcm6_cookie *ipc6,
                             struct rt6_info *rt, unsigned int flags,
                             struct inet_cork_full *cork);
index ebadb21..5c9e97e 100644 (file)
@@ -1958,36 +1958,6 @@ struct ieee80211_key_seq {
        };
 };
 
-/**
- * struct ieee80211_cipher_scheme - cipher scheme
- *
- * This structure contains a cipher scheme information defining
- * the secure packet crypto handling.
- *
- * @cipher: a cipher suite selector
- * @iftype: a cipher iftype bit mask indicating an allowed cipher usage
- * @hdr_len: a length of a security header used the cipher
- * @pn_len: a length of a packet number in the security header
- * @pn_off: an offset of pn from the beginning of the security header
- * @key_idx_off: an offset of key index byte in the security header
- * @key_idx_mask: a bit mask of key_idx bits
- * @key_idx_shift: a bit shift needed to get key_idx
- *     key_idx value calculation:
- *      (sec_header_base[key_idx_off] & key_idx_mask) >> key_idx_shift
- * @mic_len: a mic length in bytes
- */
-struct ieee80211_cipher_scheme {
-       u32 cipher;
-       u16 iftype;
-       u8 hdr_len;
-       u8 pn_len;
-       u8 pn_off;
-       u8 key_idx_off;
-       u8 key_idx_mask;
-       u8 key_idx_shift;
-       u8 mic_len;
-};
-
 /**
  * enum set_key_cmd - key command
  *
@@ -2664,9 +2634,6 @@ enum ieee80211_hw_flags {
  *     deliver to a WMM STA during any Service Period triggered by the WMM STA.
  *     Use IEEE80211_WMM_IE_STA_QOSINFO_SP_* for correct values.
  *
- * @n_cipher_schemes: a size of an array of cipher schemes definitions.
- * @cipher_schemes: a pointer to an array of cipher scheme definitions
- *     supported by HW.
  * @max_nan_de_entries: maximum number of NAN DE functions supported by the
  *     device.
  *
@@ -2716,8 +2683,6 @@ struct ieee80211_hw {
        netdev_features_t netdev_features;
        u8 uapsd_queues;
        u8 uapsd_max_sp_len;
-       u8 n_cipher_schemes;
-       const struct ieee80211_cipher_scheme *cipher_schemes;
        u8 max_nan_de_entries;
        u8 tx_sk_pacing_shift;
        u8 weight_multiplier;
index 20af9d3..279ae0f 100644 (file)
@@ -1090,7 +1090,6 @@ struct nft_stats {
 
 struct nft_hook {
        struct list_head        list;
-       bool                    inactive;
        struct nf_hook_ops      ops;
        struct rcu_head         rcu;
 };
index 7971478..3568b6a 100644 (file)
@@ -92,7 +92,7 @@ int nft_flow_rule_offload_commit(struct net *net);
        NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)         \
        memset(&(__reg)->mask, 0xff, (__reg)->len);
 
-int nft_chain_offload_priority(struct nft_base_chain *basechain);
+bool nft_chain_offload_support(const struct nft_base_chain *basechain);
 
 int nft_offload_init(void);
 void nft_offload_exit(void);
index c585ef6..304a5e3 100644 (file)
@@ -611,7 +611,7 @@ void sock_net_set(struct sock *sk, struct net *net)
 
 int sk_set_peek_off(struct sock *sk, int val);
 
-static inline int sk_peek_offset(struct sock *sk, int flags)
+static inline int sk_peek_offset(const struct sock *sk, int flags)
 {
        if (unlikely(flags & MSG_PEEK)) {
                return READ_ONCE(sk->sk_peek_off);
@@ -863,7 +863,7 @@ static inline void sk_add_bind2_node(struct sock *sk, struct hlist_head *list)
                ({ tpos = (typeof(*tpos) *)((void *)pos - offset); 1;});       \
             pos = rcu_dereference(hlist_next_rcu(pos)))
 
-static inline struct user_namespace *sk_user_ns(struct sock *sk)
+static inline struct user_namespace *sk_user_ns(const struct sock *sk)
 {
        /* Careful only use this in a context where these parameters
         * can not change and must all be valid, such as recvmsg from
@@ -909,7 +909,7 @@ enum sock_flags {
 
 #define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))
 
-static inline void sock_copy_flags(struct sock *nsk, struct sock *osk)
+static inline void sock_copy_flags(struct sock *nsk, const struct sock *osk)
 {
        nsk->sk_flags = osk->sk_flags;
 }
@@ -1254,6 +1254,7 @@ struct proto {
        void                    (*enter_memory_pressure)(struct sock *sk);
        void                    (*leave_memory_pressure)(struct sock *sk);
        atomic_long_t           *memory_allocated;      /* Current allocated memory. */
+       int  __percpu           *per_cpu_fw_alloc;
        struct percpu_counter   *sockets_allocated;     /* Current number of sockets. */
 
        /*
@@ -1397,21 +1398,46 @@ static inline bool sk_under_memory_pressure(const struct sock *sk)
 }
 
 static inline long
-sk_memory_allocated(const struct sock *sk)
+proto_memory_allocated(const struct proto *prot)
 {
-       return atomic_long_read(sk->sk_prot->memory_allocated);
+       return max(0L, atomic_long_read(prot->memory_allocated));
 }
 
 static inline long
+sk_memory_allocated(const struct sock *sk)
+{
+       return proto_memory_allocated(sk->sk_prot);
+}
+
+/* 1 MB per cpu, in page units */
+#define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))
+
+static inline void
 sk_memory_allocated_add(struct sock *sk, int amt)
 {
-       return atomic_long_add_return(amt, sk->sk_prot->memory_allocated);
+       int local_reserve;
+
+       preempt_disable();
+       local_reserve = __this_cpu_add_return(*sk->sk_prot->per_cpu_fw_alloc, amt);
+       if (local_reserve >= SK_MEMORY_PCPU_RESERVE) {
+               __this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);
+               atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
+       }
+       preempt_enable();
 }
 
 static inline void
 sk_memory_allocated_sub(struct sock *sk, int amt)
 {
-       atomic_long_sub(amt, sk->sk_prot->memory_allocated);
+       int local_reserve;
+
+       preempt_disable();
+       local_reserve = __this_cpu_sub_return(*sk->sk_prot->per_cpu_fw_alloc, amt);
+       if (local_reserve <= -SK_MEMORY_PCPU_RESERVE) {
+               __this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);
+               atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
+       }
+       preempt_enable();
 }
 
 #define SK_ALLOC_PERCPU_COUNTER_BATCH 16
@@ -1440,12 +1466,6 @@ proto_sockets_allocated_sum_positive(struct proto *prot)
        return percpu_counter_sum_positive(prot->sockets_allocated);
 }
 
-static inline long
-proto_memory_allocated(struct proto *prot)
-{
-       return atomic_long_read(prot->memory_allocated);
-}
-
 static inline bool
 proto_memory_pressure(struct proto *prot)
 {
@@ -1532,30 +1552,18 @@ int __sk_mem_schedule(struct sock *sk, int size, int kind);
 void __sk_mem_reduce_allocated(struct sock *sk, int amount);
 void __sk_mem_reclaim(struct sock *sk, int amount);
 
-/* We used to have PAGE_SIZE here, but systems with 64KB pages
- * do not necessarily have 16x time more memory than 4KB ones.
- */
-#define SK_MEM_QUANTUM 4096
-#define SK_MEM_QUANTUM_SHIFT ilog2(SK_MEM_QUANTUM)
 #define SK_MEM_SEND    0
 #define SK_MEM_RECV    1
 
-/* sysctl_mem values are in pages, we convert them in SK_MEM_QUANTUM units */
+/* sysctl_mem values are in pages */
 static inline long sk_prot_mem_limits(const struct sock *sk, int index)
 {
-       long val = sk->sk_prot->sysctl_mem[index];
-
-#if PAGE_SIZE > SK_MEM_QUANTUM
-       val <<= PAGE_SHIFT - SK_MEM_QUANTUM_SHIFT;
-#elif PAGE_SIZE < SK_MEM_QUANTUM
-       val >>= SK_MEM_QUANTUM_SHIFT - PAGE_SHIFT;
-#endif
-       return val;
+       return sk->sk_prot->sysctl_mem[index];
 }
 
 static inline int sk_mem_pages(int amt)
 {
-       return (amt + SK_MEM_QUANTUM - 1) >> SK_MEM_QUANTUM_SHIFT;
+       return (amt + PAGE_SIZE - 1) >> PAGE_SHIFT;
 }
 
 static inline bool sk_has_account(struct sock *sk)
@@ -1566,19 +1574,23 @@ static inline bool sk_has_account(struct sock *sk)
 
 static inline bool sk_wmem_schedule(struct sock *sk, int size)
 {
+       int delta;
+
        if (!sk_has_account(sk))
                return true;
-       return size <= sk->sk_forward_alloc ||
-               __sk_mem_schedule(sk, size, SK_MEM_SEND);
+       delta = size - sk->sk_forward_alloc;
+       return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_SEND);
 }
 
 static inline bool
 sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
 {
+       int delta;
+
        if (!sk_has_account(sk))
                return true;
-       return size <= sk->sk_forward_alloc ||
-               __sk_mem_schedule(sk, size, SK_MEM_RECV) ||
+       delta = size - sk->sk_forward_alloc;
+       return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_RECV) ||
                skb_pfmemalloc(skb);
 }
 
@@ -1604,7 +1616,7 @@ static inline void sk_mem_reclaim(struct sock *sk)
 
        reclaimable = sk->sk_forward_alloc - sk_unused_reserved_mem(sk);
 
-       if (reclaimable >= SK_MEM_QUANTUM)
+       if (reclaimable >= (int)PAGE_SIZE)
                __sk_mem_reclaim(sk, reclaimable);
 }
 
@@ -1614,19 +1626,6 @@ static inline void sk_mem_reclaim_final(struct sock *sk)
        sk_mem_reclaim(sk);
 }
 
-static inline void sk_mem_reclaim_partial(struct sock *sk)
-{
-       int reclaimable;
-
-       if (!sk_has_account(sk))
-               return;
-
-       reclaimable = sk->sk_forward_alloc - sk_unused_reserved_mem(sk);
-
-       if (reclaimable > SK_MEM_QUANTUM)
-               __sk_mem_reclaim(sk, reclaimable - 1);
-}
-
 static inline void sk_mem_charge(struct sock *sk, int size)
 {
        if (!sk_has_account(sk))
@@ -1634,29 +1633,17 @@ static inline void sk_mem_charge(struct sock *sk, int size)
        sk->sk_forward_alloc -= size;
 }
 
-/* the following macros control memory reclaiming in sk_mem_uncharge()
+/* the following macros control memory reclaiming in mptcp_rmem_uncharge()
  */
 #define SK_RECLAIM_THRESHOLD   (1 << 21)
 #define SK_RECLAIM_CHUNK       (1 << 20)
 
 static inline void sk_mem_uncharge(struct sock *sk, int size)
 {
-       int reclaimable;
-
        if (!sk_has_account(sk))
                return;
        sk->sk_forward_alloc += size;
-       reclaimable = sk->sk_forward_alloc - sk_unused_reserved_mem(sk);
-
-       /* Avoid a possible overflow.
-        * TCP send queues can make this happen, if sk_mem_reclaim()
-        * is not called and more than 2 GBytes are released at once.
-        *
-        * If we reach 2 MBytes, reclaim 1 MBytes right now, there is
-        * no need to hold that much forward allocation anyway.
-        */
-       if (unlikely(reclaimable >= SK_RECLAIM_THRESHOLD))
-               __sk_mem_reclaim(sk, SK_RECLAIM_CHUNK);
+       sk_mem_reclaim(sk);
 }
 
 /*
index 1e99f5c..4794cae 100644 (file)
@@ -253,6 +253,8 @@ extern long sysctl_tcp_mem[3];
 #define TCP_RACK_NO_DUPTHRESH    0x4 /* Do not use DUPACK threshold in RACK */
 
 extern atomic_long_t tcp_memory_allocated;
+DECLARE_PER_CPU(int, tcp_memory_per_cpu_fw_alloc);
+
 extern struct percpu_counter tcp_sockets_allocated;
 extern unsigned long tcp_memory_pressure;
 
index b83a003..b60eea2 100644 (file)
@@ -95,6 +95,7 @@ static inline struct udp_hslot *udp_hashslot2(struct udp_table *table,
 extern struct proto udp_prot;
 
 extern atomic_long_t udp_memory_allocated;
+DECLARE_PER_CPU(int, udp_memory_per_cpu_fw_alloc);
 
 /* sysctl variables for udp */
 extern long sysctl_udp_mem[3];
index c39d910..9287712 100644 (file)
@@ -1923,7 +1923,7 @@ static inline void xfrm_dev_state_free(struct xfrm_state *x)
                if (dev->xfrmdev_ops->xdo_dev_state_free)
                        dev->xfrmdev_ops->xdo_dev_state_free(x);
                xso->dev = NULL;
-               dev_put_track(dev, &xso->dev_tracker);
+               netdev_put(dev, &xso->dev_tracker);
        }
 }
 #else
index a477bf9..45264e4 100644 (file)
@@ -9,92 +9,6 @@
 #include <linux/netdevice.h>
 #include <linux/tracepoint.h>
 
-#define TRACE_SKB_DROP_REASON                                  \
-       EM(SKB_DROP_REASON_NOT_SPECIFIED, NOT_SPECIFIED)        \
-       EM(SKB_DROP_REASON_NO_SOCKET, NO_SOCKET)                \
-       EM(SKB_DROP_REASON_PKT_TOO_SMALL, PKT_TOO_SMALL)        \
-       EM(SKB_DROP_REASON_TCP_CSUM, TCP_CSUM)                  \
-       EM(SKB_DROP_REASON_SOCKET_FILTER, SOCKET_FILTER)        \
-       EM(SKB_DROP_REASON_UDP_CSUM, UDP_CSUM)                  \
-       EM(SKB_DROP_REASON_NETFILTER_DROP, NETFILTER_DROP)      \
-       EM(SKB_DROP_REASON_OTHERHOST, OTHERHOST)                \
-       EM(SKB_DROP_REASON_IP_CSUM, IP_CSUM)                    \
-       EM(SKB_DROP_REASON_IP_INHDR, IP_INHDR)                  \
-       EM(SKB_DROP_REASON_IP_RPFILTER, IP_RPFILTER)            \
-       EM(SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST,             \
-          UNICAST_IN_L2_MULTICAST)                             \
-       EM(SKB_DROP_REASON_XFRM_POLICY, XFRM_POLICY)            \
-       EM(SKB_DROP_REASON_IP_NOPROTO, IP_NOPROTO)              \
-       EM(SKB_DROP_REASON_SOCKET_RCVBUFF, SOCKET_RCVBUFF)      \
-       EM(SKB_DROP_REASON_PROTO_MEM, PROTO_MEM)                \
-       EM(SKB_DROP_REASON_TCP_MD5NOTFOUND, TCP_MD5NOTFOUND)    \
-       EM(SKB_DROP_REASON_TCP_MD5UNEXPECTED,                   \
-          TCP_MD5UNEXPECTED)                                   \
-       EM(SKB_DROP_REASON_TCP_MD5FAILURE, TCP_MD5FAILURE)      \
-       EM(SKB_DROP_REASON_SOCKET_BACKLOG, SOCKET_BACKLOG)      \
-       EM(SKB_DROP_REASON_TCP_FLAGS, TCP_FLAGS)                \
-       EM(SKB_DROP_REASON_TCP_ZEROWINDOW, TCP_ZEROWINDOW)      \
-       EM(SKB_DROP_REASON_TCP_OLD_DATA, TCP_OLD_DATA)          \
-       EM(SKB_DROP_REASON_TCP_OVERWINDOW, TCP_OVERWINDOW)      \
-       EM(SKB_DROP_REASON_TCP_OFOMERGE, TCP_OFOMERGE)          \
-       EM(SKB_DROP_REASON_TCP_OFO_DROP, TCP_OFO_DROP)          \
-       EM(SKB_DROP_REASON_TCP_RFC7323_PAWS, TCP_RFC7323_PAWS)  \
-       EM(SKB_DROP_REASON_TCP_INVALID_SEQUENCE,                \
-          TCP_INVALID_SEQUENCE)                                \
-       EM(SKB_DROP_REASON_TCP_RESET, TCP_RESET)                \
-       EM(SKB_DROP_REASON_TCP_INVALID_SYN, TCP_INVALID_SYN)    \
-       EM(SKB_DROP_REASON_TCP_CLOSE, TCP_CLOSE)                \
-       EM(SKB_DROP_REASON_TCP_FASTOPEN, TCP_FASTOPEN)          \
-       EM(SKB_DROP_REASON_TCP_OLD_ACK, TCP_OLD_ACK)            \
-       EM(SKB_DROP_REASON_TCP_TOO_OLD_ACK, TCP_TOO_OLD_ACK)    \
-       EM(SKB_DROP_REASON_TCP_ACK_UNSENT_DATA,                 \
-          TCP_ACK_UNSENT_DATA)                                 \
-       EM(SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE,                 \
-         TCP_OFO_QUEUE_PRUNE)                                  \
-       EM(SKB_DROP_REASON_IP_OUTNOROUTES, IP_OUTNOROUTES)      \
-       EM(SKB_DROP_REASON_BPF_CGROUP_EGRESS,                   \
-          BPF_CGROUP_EGRESS)                                   \
-       EM(SKB_DROP_REASON_IPV6DISABLED, IPV6DISABLED)          \
-       EM(SKB_DROP_REASON_NEIGH_CREATEFAIL, NEIGH_CREATEFAIL)  \
-       EM(SKB_DROP_REASON_NEIGH_FAILED, NEIGH_FAILED)          \
-       EM(SKB_DROP_REASON_NEIGH_QUEUEFULL, NEIGH_QUEUEFULL)    \
-       EM(SKB_DROP_REASON_NEIGH_DEAD, NEIGH_DEAD)              \
-       EM(SKB_DROP_REASON_TC_EGRESS, TC_EGRESS)                \
-       EM(SKB_DROP_REASON_QDISC_DROP, QDISC_DROP)              \
-       EM(SKB_DROP_REASON_CPU_BACKLOG, CPU_BACKLOG)            \
-       EM(SKB_DROP_REASON_XDP, XDP)                            \
-       EM(SKB_DROP_REASON_TC_INGRESS, TC_INGRESS)              \
-       EM(SKB_DROP_REASON_UNHANDLED_PROTO, UNHANDLED_PROTO)    \
-       EM(SKB_DROP_REASON_SKB_CSUM, SKB_CSUM)                  \
-       EM(SKB_DROP_REASON_SKB_GSO_SEG, SKB_GSO_SEG)            \
-       EM(SKB_DROP_REASON_SKB_UCOPY_FAULT, SKB_UCOPY_FAULT)    \
-       EM(SKB_DROP_REASON_DEV_HDR, DEV_HDR)                    \
-       EM(SKB_DROP_REASON_DEV_READY, DEV_READY)                \
-       EM(SKB_DROP_REASON_FULL_RING, FULL_RING)                \
-       EM(SKB_DROP_REASON_NOMEM, NOMEM)                        \
-       EM(SKB_DROP_REASON_HDR_TRUNC, HDR_TRUNC)                \
-       EM(SKB_DROP_REASON_TAP_FILTER, TAP_FILTER)              \
-       EM(SKB_DROP_REASON_TAP_TXFILTER, TAP_TXFILTER)          \
-       EM(SKB_DROP_REASON_ICMP_CSUM, ICMP_CSUM)                \
-       EM(SKB_DROP_REASON_INVALID_PROTO, INVALID_PROTO)        \
-       EM(SKB_DROP_REASON_IP_INADDRERRORS, IP_INADDRERRORS)    \
-       EM(SKB_DROP_REASON_IP_INNOROUTES, IP_INNOROUTES)        \
-       EM(SKB_DROP_REASON_PKT_TOO_BIG, PKT_TOO_BIG)            \
-       EMe(SKB_DROP_REASON_MAX, MAX)
-
-#undef EM
-#undef EMe
-
-#define EM(a, b)       TRACE_DEFINE_ENUM(a);
-#define EMe(a, b)      TRACE_DEFINE_ENUM(a);
-
-TRACE_SKB_DROP_REASON
-
-#undef EM
-#undef EMe
-#define EM(a, b)       { a, #b },
-#define EMe(a, b)      { a, #b }
-
 /*
  * Tracepoint for free an sk_buff:
  */
@@ -121,8 +35,7 @@ TRACE_EVENT(kfree_skb,
 
        TP_printk("skbaddr=%p protocol=%u location=%p reason: %s",
                  __entry->skbaddr, __entry->protocol, __entry->location,
-                 __print_symbolic(__entry->reason,
-                                  TRACE_SKB_DROP_REASON))
+                 drop_reasons[__entry->reason])
 );
 
 TRACE_EVENT(consume_skb,
index d9490e3..98f905f 100644 (file)
@@ -5874,7 +5874,7 @@ enum nl80211_ap_sme_features {
  * @NL80211_FEATURE_INACTIVITY_TIMER: This driver takes care of freeing up
  *     the connected inactive stations in AP mode.
  * @NL80211_FEATURE_CELL_BASE_REG_HINTS: This driver has been tested
- *     to work properly to suppport receiving regulatory hints from
+ *     to work properly to support receiving regulatory hints from
  *     cellular base stations.
  * @NL80211_FEATURE_P2P_DEVICE_NEEDS_CHANNEL: (no longer available, only
  *     here to reserve the value for API/ABI compatibility)
index ac39328..bb8f808 100644 (file)
@@ -39,7 +39,7 @@
 /* TLS socket options */
 #define TLS_TX                 1       /* Set transmit parameters */
 #define TLS_RX                 2       /* Set receive parameters */
-#define TLS_TX_ZEROCOPY_SENDFILE       3       /* transmit zerocopy sendfile */
+#define TLS_TX_ZEROCOPY_RO     3       /* TX zerocopy (only sendfile now) */
 
 /* Supported versions */
 #define TLS_VERSION_MINOR(ver) ((ver) & 0xFF)
@@ -161,7 +161,7 @@ enum {
        TLS_INFO_CIPHER,
        TLS_INFO_TXCONF,
        TLS_INFO_RXCONF,
-       TLS_INFO_ZC_SENDFILE,
+       TLS_INFO_ZC_RO_TX,
        __TLS_INFO_MAX,
 };
 #define TLS_INFO_MAX (__TLS_INFO_MAX - 1)
diff --git a/include/xen/arm/xen-ops.h b/include/xen/arm/xen-ops.h
new file mode 100644 (file)
index 0000000..b0766a6
--- /dev/null
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM_XEN_OPS_H
+#define _ASM_ARM_XEN_OPS_H
+
+#include <xen/swiotlb-xen.h>
+#include <xen/xen-ops.h>
+
+static inline void xen_setup_dma_ops(struct device *dev)
+{
+#ifdef CONFIG_XEN
+       if (xen_is_grant_dma_device(dev))
+               xen_grant_setup_dma_ops(dev);
+       else if (xen_swiotlb_detect())
+               dev->dma_ops = &xen_swiotlb_dma_ops;
+#endif
+}
+
+#endif /* _ASM_ARM_XEN_OPS_H */
index 527c990..e279be3 100644 (file)
@@ -127,10 +127,14 @@ int gnttab_try_end_foreign_access(grant_ref_t ref);
  */
 int gnttab_alloc_grant_references(u16 count, grant_ref_t *pprivate_head);
 
+int gnttab_alloc_grant_reference_seq(unsigned int count, grant_ref_t *first);
+
 void gnttab_free_grant_reference(grant_ref_t ref);
 
 void gnttab_free_grant_references(grant_ref_t head);
 
+void gnttab_free_grant_reference_seq(grant_ref_t head, unsigned int count);
+
 int gnttab_empty_grant_references(const grant_ref_t *pprivate_head);
 
 int gnttab_claim_grant_reference(grant_ref_t *pprivate_head);
index c7c1b46..8054696 100644 (file)
@@ -214,4 +214,17 @@ static inline void xen_preemptible_hcall_end(void) { }
 
 #endif /* CONFIG_XEN_PV && !CONFIG_PREEMPTION */
 
+#ifdef CONFIG_XEN_GRANT_DMA_OPS
+void xen_grant_setup_dma_ops(struct device *dev);
+bool xen_is_grant_dma_device(struct device *dev);
+#else
+static inline void xen_grant_setup_dma_ops(struct device *dev)
+{
+}
+static inline bool xen_is_grant_dma_device(struct device *dev)
+{
+       return false;
+}
+#endif /* CONFIG_XEN_GRANT_DMA_OPS */
+
 #endif /* INCLUDE_XEN_OPS_H */
index a99bab8..0780a81 100644 (file)
@@ -52,6 +52,14 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
+#include <linux/platform-feature.h>
+
+static inline void xen_set_restricted_virtio_memory_access(void)
+{
+       if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
+               platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+}
+
 #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
 int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
 void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
index c984afc..c7900e8 100644 (file)
@@ -885,6 +885,15 @@ config CC_IMPLICIT_FALLTHROUGH
        default "-Wimplicit-fallthrough=5" if CC_IS_GCC && $(cc-option,-Wimplicit-fallthrough=5)
        default "-Wimplicit-fallthrough" if CC_IS_CLANG && $(cc-option,-Wunreachable-code-fallthrough)
 
+# Currently, disable gcc-12 array-bounds globally.
+# We may want to target only particular configurations some day.
+config GCC12_NO_ARRAY_BOUNDS
+       def_bool y
+
+config CC_NO_ARRAY_BOUNDS
+       bool
+       default y if CC_IS_GCC && GCC_VERSION >= 120000 && GCC_VERSION < 130000 && GCC12_NO_ARRAY_BOUNDS
+
 #
 # For architectures that know their GCC __int128 support is sound
 #
index 318789c..a7e1f49 100644 (file)
@@ -7,7 +7,7 @@ obj-y     = fork.o exec_domain.o panic.o \
            cpu.o exit.o softirq.o resource.o \
            sysctl.o capability.o ptrace.o user.o \
            signal.o sys.o umh.o workqueue.o pid.o task_work.o \
-           extable.o params.o \
+           extable.o params.o platform-feature.o \
            kthread.o sys_ni.o nsproxy.o \
            notifier.o ksysfs.o cred.o reboot.o \
            async.o range.o smpboot.o ucount.o regset.o
index 7bccaa4..63d0ac7 100644 (file)
@@ -6054,6 +6054,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
                                    struct bpf_reg_state *regs,
                                    bool ptr_to_mem_ok)
 {
+       enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
        struct bpf_verifier_log *log = &env->log;
        u32 i, nargs, ref_id, ref_obj_id = 0;
        bool is_kfunc = btf_is_kernel(btf);
@@ -6171,7 +6172,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
                                return -EINVAL;
                        }
                        /* rest of the arguments can be anything, like normal kfunc */
-               } else if (btf_get_prog_ctx_type(log, btf, t, env->prog->type, i)) {
+               } else if (btf_get_prog_ctx_type(log, btf, t, prog_type, i)) {
                        /* If function expects ctx type in BTF check that caller
                         * is passing PTR_TO_CTX.
                         */
index ac74063..2caafd1 100644 (file)
@@ -564,7 +564,7 @@ static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)
 
        rc = active_cacheline_insert(entry);
        if (rc == -ENOMEM) {
-               pr_err("cacheline tracking ENOMEM, dma-debug disabled\n");
+               pr_err_once("cacheline tracking ENOMEM, dma-debug disabled\n");
                global_disable = true;
        } else if (rc == -EEXIST && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
                err_printk(entry->dev, entry,
index dfa1de8..cb50f8d 100644 (file)
@@ -192,7 +192,7 @@ void __init swiotlb_update_mem_attributes(void)
 }
 
 static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
-                                   unsigned long nslabs, bool late_alloc)
+               unsigned long nslabs, unsigned int flags, bool late_alloc)
 {
        void *vaddr = phys_to_virt(start);
        unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
@@ -203,8 +203,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
        mem->index = 0;
        mem->late_alloc = late_alloc;
 
-       if (swiotlb_force_bounce)
-               mem->force_bounce = true;
+       mem->force_bounce = swiotlb_force_bounce || (flags & SWIOTLB_FORCE);
 
        spin_lock_init(&mem->lock);
        for (i = 0; i < mem->nslabs; i++) {
@@ -275,8 +274,7 @@ retry:
                panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
                      __func__, alloc_size, PAGE_SIZE);
 
-       swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
-       mem->force_bounce = flags & SWIOTLB_FORCE;
+       swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, flags, false);
 
        if (flags & SWIOTLB_VERBOSE)
                swiotlb_print_info();
@@ -348,7 +346,7 @@ retry:
 
        set_memory_decrypted((unsigned long)vstart,
                             (nslabs << IO_TLB_SHIFT) >> PAGE_SHIFT);
-       swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, true);
+       swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, 0, true);
 
        swiotlb_print_info();
        return 0;
@@ -835,8 +833,8 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 
                set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
                                     rmem->size >> PAGE_SHIFT);
-               swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
-               mem->force_bounce = true;
+               swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, SWIOTLB_FORCE,
+                               false);
                mem->for_alloc = true;
 
                rmem->priv = mem;
index 9d09f48..2e0f75b 100644 (file)
@@ -9,12 +9,6 @@ static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
                int ret;
 
                if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) {
-                       clear_notify_signal();
-                       if (task_work_pending(current))
-                               task_work_run();
-               }
-
-               if (ti_work & _TIF_SIGPENDING) {
                        kvm_handle_signal_exit(vcpu);
                        return -EINTR;
                }
diff --git a/kernel/platform-feature.c b/kernel/platform-feature.c
new file mode 100644 (file)
index 0000000..cb6a6c3
--- /dev/null
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bitops.h>
+#include <linux/cache.h>
+#include <linux/export.h>
+#include <linux/platform-feature.h>
+
+#define PLATFORM_FEAT_ARRAY_SZ  BITS_TO_LONGS(PLATFORM_FEAT_N)
+static unsigned long __read_mostly platform_features[PLATFORM_FEAT_ARRAY_SZ];
+
+void platform_set(unsigned int feature)
+{
+       set_bit(feature, platform_features);
+}
+EXPORT_SYMBOL_GPL(platform_set);
+
+void platform_clear(unsigned int feature)
+{
+       clear_bit(feature, platform_features);
+}
+EXPORT_SYMBOL_GPL(platform_clear);
+
+bool platform_has(unsigned int feature)
+{
+       return test_bit(feature, platform_features);
+}
+EXPORT_SYMBOL_GPL(platform_has);
index a091145..b5a71d1 100644 (file)
@@ -315,6 +315,43 @@ static int sys_off_notify(struct notifier_block *nb,
        return handler->sys_off_cb(&data);
 }
 
+static struct sys_off_handler platform_sys_off_handler;
+
+static struct sys_off_handler *alloc_sys_off_handler(int priority)
+{
+       struct sys_off_handler *handler;
+       gfp_t flags;
+
+       /*
+        * Platforms like m68k can't allocate the sys_off handler dynamically
+        * at early boot time because the memory allocator isn't available yet.
+        */
+       if (priority == SYS_OFF_PRIO_PLATFORM) {
+               handler = &platform_sys_off_handler;
+               if (handler->cb_data)
+                       return ERR_PTR(-EBUSY);
+       } else {
+               if (system_state > SYSTEM_RUNNING)
+                       flags = GFP_ATOMIC;
+               else
+                       flags = GFP_KERNEL;
+
+               handler = kzalloc(sizeof(*handler), flags);
+               if (!handler)
+                       return ERR_PTR(-ENOMEM);
+       }
+
+       return handler;
+}
+
+static void free_sys_off_handler(struct sys_off_handler *handler)
+{
+       if (handler == &platform_sys_off_handler)
+               memset(handler, 0, sizeof(*handler));
+       else
+               kfree(handler);
+}
+
 /**
  *     register_sys_off_handler - Register sys-off handler
  *     @mode: Sys-off mode
@@ -345,9 +382,9 @@ register_sys_off_handler(enum sys_off_mode mode,
        struct sys_off_handler *handler;
        int err;
 
-       handler = kzalloc(sizeof(*handler), GFP_KERNEL);
-       if (!handler)
-               return ERR_PTR(-ENOMEM);
+       handler = alloc_sys_off_handler(priority);
+       if (IS_ERR(handler))
+               return handler;
 
        switch (mode) {
        case SYS_OFF_MODE_POWER_OFF_PREPARE:
@@ -364,7 +401,7 @@ register_sys_off_handler(enum sys_off_mode mode,
                break;
 
        default:
-               kfree(handler);
+               free_sys_off_handler(handler);
                return ERR_PTR(-EINVAL);
        }
 
@@ -391,7 +428,7 @@ register_sys_off_handler(enum sys_off_mode mode,
        }
 
        if (err) {
-               kfree(handler);
+               free_sys_off_handler(handler);
                return ERR_PTR(err);
        }
 
@@ -409,7 +446,7 @@ void unregister_sys_off_handler(struct sys_off_handler *handler)
 {
        int err;
 
-       if (!handler)
+       if (IS_ERR_OR_NULL(handler))
                return;
 
        if (handler->blocking)
@@ -422,7 +459,7 @@ void unregister_sys_off_handler(struct sys_off_handler *handler)
        /* sanity check, shall never happen */
        WARN_ON(err);
 
-       kfree(handler);
+       free_sys_off_handler(handler);
 }
 EXPORT_SYMBOL_GPL(unregister_sys_off_handler);
 
@@ -584,7 +621,23 @@ static void do_kernel_power_off_prepare(void)
  */
 void do_kernel_power_off(void)
 {
+       struct sys_off_handler *sys_off = NULL;
+
+       /*
+        * Register a sys-off handler for the legacy PM callback. This allows
+        * legacy PM callbacks to temporarily coexist with the new sys-off API.
+        *
+        * TODO: Remove legacy handlers once all legacy PM users are switched
+        *       to the sys-off based APIs.
+        */
+       if (pm_power_off)
+               sys_off = register_sys_off_handler(SYS_OFF_MODE_POWER_OFF,
+                                                  SYS_OFF_PRIO_DEFAULT,
+                                                  legacy_pm_power_off, NULL);
+
        atomic_notifier_call_chain(&power_off_handler_list, 0, NULL);
+
+       unregister_sys_off_handler(sys_off);
 }
 
 /**
@@ -595,7 +648,8 @@ void do_kernel_power_off(void)
  */
 bool kernel_can_power_off(void)
 {
-       return !atomic_notifier_call_chain_is_empty(&power_off_handler_list);
+       return !atomic_notifier_call_chain_is_empty(&power_off_handler_list) ||
+               pm_power_off;
 }
 EXPORT_SYMBOL_GPL(kernel_can_power_off);
 
@@ -630,7 +684,6 @@ SYSCALL_DEFINE4(reboot, int, magic1, int, magic2, unsigned int, cmd,
                void __user *, arg)
 {
        struct pid_namespace *pid_ns = task_active_pid_ns(current);
-       struct sys_off_handler *sys_off = NULL;
        char buffer[256];
        int ret = 0;
 
@@ -655,21 +708,6 @@ SYSCALL_DEFINE4(reboot, int, magic1, int, magic2, unsigned int, cmd,
        if (ret)
                return ret;
 
-       /*
-        * Register sys-off handlers for legacy PM callback. This allows
-        * legacy PM callbacks temporary co-exist with the new sys-off API.
-        *
-        * TODO: Remove legacy handlers once all legacy PM users will be
-        *       switched to the sys-off based APIs.
-        */
-       if (pm_power_off) {
-               sys_off = register_sys_off_handler(SYS_OFF_MODE_POWER_OFF,
-                                                  SYS_OFF_PRIO_DEFAULT,
-                                                  legacy_pm_power_off, NULL);
-               if (IS_ERR(sys_off))
-                       return PTR_ERR(sys_off);
-       }
-
        /* Instead of trying to make the power_off code look like
         * halt when pm_power_off is not set do it the easy way.
         */
@@ -727,7 +765,6 @@ SYSCALL_DEFINE4(reboot, int, magic1, int, magic2, unsigned int, cmd,
                break;
        }
        mutex_unlock(&system_transition_mutex);
-       unregister_sys_off_handler(sys_off);
        return ret;
 }
 
index 10b157a..7a13e6a 100644 (file)
@@ -2263,11 +2263,11 @@ static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32
        int err = -ENOMEM;
        unsigned int i;
 
-       syms = kvmalloc(cnt * sizeof(*syms), GFP_KERNEL);
+       syms = kvmalloc_array(cnt, sizeof(*syms), GFP_KERNEL);
        if (!syms)
                goto error;
 
-       buf = kvmalloc(cnt * KSYM_NAME_LEN, GFP_KERNEL);
+       buf = kvmalloc_array(cnt, KSYM_NAME_LEN, GFP_KERNEL);
        if (!buf)
                goto error;
 
@@ -2464,7 +2464,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
                return -EINVAL;
 
        size = cnt * sizeof(*addrs);
-       addrs = kvmalloc(size, GFP_KERNEL);
+       addrs = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
        if (!addrs)
                return -ENOMEM;
 
@@ -2489,7 +2489,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
 
        ucookies = u64_to_user_ptr(attr->link_create.kprobe_multi.cookies);
        if (ucookies) {
-               cookies = kvmalloc(size, GFP_KERNEL);
+               cookies = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
                if (!cookies) {
                        err = -ENOMEM;
                        goto error;
index d6bbbd4..7b37459 100644 (file)
 
 #include "nhc.h"
 
-static struct rb_root rb_root = RB_ROOT;
-static struct lowpan_nhc *lowpan_nexthdr_nhcs[NEXTHDR_MAX + 1];
+static const struct lowpan_nhc *lowpan_nexthdr_nhcs[NEXTHDR_MAX + 1];
 static DEFINE_SPINLOCK(lowpan_nhc_lock);
 
-static int lowpan_nhc_insert(struct lowpan_nhc *nhc)
+static const struct lowpan_nhc *lowpan_nhc_by_nhcid(struct sk_buff *skb)
 {
-       struct rb_node **new = &rb_root.rb_node, *parent = NULL;
-
-       /* Figure out where to put new node */
-       while (*new) {
-               struct lowpan_nhc *this = rb_entry(*new, struct lowpan_nhc,
-                                                  node);
-               int result, len_dif, len;
-
-               len_dif = nhc->idlen - this->idlen;
-
-               if (nhc->idlen < this->idlen)
-                       len = nhc->idlen;
-               else
-                       len = this->idlen;
-
-               result = memcmp(nhc->id, this->id, len);
-               if (!result)
-                       result = len_dif;
-
-               parent = *new;
-               if (result < 0)
-                       new = &((*new)->rb_left);
-               else if (result > 0)
-                       new = &((*new)->rb_right);
-               else
-                       return -EEXIST;
-       }
+       const struct lowpan_nhc *nhc;
+       int i;
+       u8 id;
 
-       /* Add new node and rebalance tree. */
-       rb_link_node(&nhc->node, parent, new);
-       rb_insert_color(&nhc->node, &rb_root);
+       if (!pskb_may_pull(skb, 1))
+               return NULL;
 
-       return 0;
-}
+       id = *skb->data;
 
-static void lowpan_nhc_remove(struct lowpan_nhc *nhc)
-{
-       rb_erase(&nhc->node, &rb_root);
-}
+       for (i = 0; i < NEXTHDR_MAX + 1; i++) {
+               nhc = lowpan_nexthdr_nhcs[i];
+               if (!nhc)
+                       continue;
 
-static struct lowpan_nhc *lowpan_nhc_by_nhcid(const struct sk_buff *skb)
-{
-       struct rb_node *node = rb_root.rb_node;
-       const u8 *nhcid_skb_ptr = skb->data;
-
-       while (node) {
-               struct lowpan_nhc *nhc = rb_entry(node, struct lowpan_nhc,
-                                                 node);
-               u8 nhcid_skb_ptr_masked[LOWPAN_NHC_MAX_ID_LEN];
-               int result, i;
-
-               if (nhcid_skb_ptr + nhc->idlen > skb->data + skb->len)
-                       return NULL;
-
-               /* copy and mask afterwards the nhid value from skb */
-               memcpy(nhcid_skb_ptr_masked, nhcid_skb_ptr, nhc->idlen);
-               for (i = 0; i < nhc->idlen; i++)
-                       nhcid_skb_ptr_masked[i] &= nhc->idmask[i];
-
-               result = memcmp(nhcid_skb_ptr_masked, nhc->id, nhc->idlen);
-               if (result < 0)
-                       node = node->rb_left;
-               else if (result > 0)
-                       node = node->rb_right;
-               else
+               if ((id & nhc->idmask) == nhc->id)
                        return nhc;
        }
 
@@ -92,7 +41,7 @@ static struct lowpan_nhc *lowpan_nhc_by_nhcid(const struct sk_buff *skb)
 int lowpan_nhc_check_compression(struct sk_buff *skb,
                                 const struct ipv6hdr *hdr, u8 **hc_ptr)
 {
-       struct lowpan_nhc *nhc;
+       const struct lowpan_nhc *nhc;
        int ret = 0;
 
        spin_lock_bh(&lowpan_nhc_lock);
@@ -110,7 +59,7 @@ int lowpan_nhc_do_compression(struct sk_buff *skb, const struct ipv6hdr *hdr,
                              u8 **hc_ptr)
 {
        int ret;
-       struct lowpan_nhc *nhc;
+       const struct lowpan_nhc *nhc;
 
        spin_lock_bh(&lowpan_nhc_lock);
 
@@ -153,7 +102,7 @@ int lowpan_nhc_do_uncompression(struct sk_buff *skb,
                                const struct net_device *dev,
                                struct ipv6hdr *hdr)
 {
-       struct lowpan_nhc *nhc;
+       const struct lowpan_nhc *nhc;
        int ret;
 
        spin_lock_bh(&lowpan_nhc_lock);
@@ -189,18 +138,9 @@ int lowpan_nhc_do_uncompression(struct sk_buff *skb,
        return 0;
 }
 
-int lowpan_nhc_add(struct lowpan_nhc *nhc)
+int lowpan_nhc_add(const struct lowpan_nhc *nhc)
 {
-       int ret;
-
-       if (!nhc->idlen || !nhc->idsetup)
-               return -EINVAL;
-
-       WARN_ONCE(nhc->idlen > LOWPAN_NHC_MAX_ID_LEN,
-                 "LOWPAN_NHC_MAX_ID_LEN should be updated to %zd.\n",
-                 nhc->idlen);
-
-       nhc->idsetup(nhc);
+       int ret = 0;
 
        spin_lock_bh(&lowpan_nhc_lock);
 
@@ -209,10 +149,6 @@ int lowpan_nhc_add(struct lowpan_nhc *nhc)
                goto out;
        }
 
-       ret = lowpan_nhc_insert(nhc);
-       if (ret < 0)
-               goto out;
-
        lowpan_nexthdr_nhcs[nhc->nexthdr] = nhc;
 out:
        spin_unlock_bh(&lowpan_nhc_lock);
@@ -220,11 +156,10 @@ out:
 }
 EXPORT_SYMBOL(lowpan_nhc_add);
 
-void lowpan_nhc_del(struct lowpan_nhc *nhc)
+void lowpan_nhc_del(const struct lowpan_nhc *nhc)
 {
        spin_lock_bh(&lowpan_nhc_lock);
 
-       lowpan_nhc_remove(nhc);
        lowpan_nexthdr_nhcs[nhc->nexthdr] = NULL;
 
        spin_unlock_bh(&lowpan_nhc_lock);
index 67951c4..ab7b497 100644 (file)
  * @_name: const char * of common header compression name.
  * @_nexthdr: ipv6 nexthdr field for the header compression.
  * @_nexthdrlen: ipv6 nexthdr len for the reserved space.
- * @_idsetup: callback to setup id and mask values.
- * @_idlen: len for the next header id and mask, should be always the same.
+ * @_id: one byte nhc id value.
+ * @_idmask: one byte nhc id mask value.
  * @_uncompress: callback for uncompression call.
  * @_compress: callback for compression call.
  */
 #define LOWPAN_NHC(__nhc, _name, _nexthdr,     \
-                  _hdrlen, _idsetup, _idlen,   \
+                  _hdrlen, _id, _idmask,       \
                   _uncompress, _compress)      \
-static u8 __nhc##_val[_idlen];                 \
-static u8 __nhc##_mask[_idlen];                        \
-static struct lowpan_nhc __nhc = {             \
+static const struct lowpan_nhc __nhc = {       \
        .name           = _name,                \
        .nexthdr        = _nexthdr,             \
        .nexthdrlen     = _hdrlen,              \
-       .id             = __nhc##_val,          \
-       .idmask         = __nhc##_mask,         \
-       .idlen          = _idlen,               \
-       .idsetup        = _idsetup,             \
+       .id             = _id,                  \
+       .idmask         = _idmask,              \
        .uncompress     = _uncompress,          \
        .compress       = _compress,            \
 }
@@ -53,27 +49,21 @@ module_exit(__nhc##_exit);
 /**
 * struct lowpan_nhc - hold 6lowpan next hdr compression information
  *
- * @node: holder for the rbtree.
  * @name: name of the specific next header compression
  * @nexthdr: next header value of the protocol which should be compressed.
  * @nexthdrlen: ipv6 nexthdr len for the reserved space.
- * @id: array for nhc id. Note this need to be in network byteorder.
- * @mask: array for nhc id mask. Note this need to be in network byteorder.
- * @len: the length of the next header id and mask.
- * @setup: callback to setup fill the next header id value and mask.
+ * @id: one byte nhc id value.
+ * @idmask: one byte nhc id mask value.
  * @compress: callback to do the header compression.
  * @uncompress: callback to do the header uncompression.
  */
 struct lowpan_nhc {
-       struct rb_node  node;
        const char      *name;
-       const u8        nexthdr;
-       const size_t    nexthdrlen;
-       u8              *id;
-       u8              *idmask;
-       const size_t    idlen;
+       u8              nexthdr;
+       size_t          nexthdrlen;
+       u8              id;
+       u8              idmask;
 
-       void            (*idsetup)(struct lowpan_nhc *nhc);
        int             (*uncompress)(struct sk_buff *skb, size_t needed);
        int             (*compress)(struct sk_buff *skb, u8 **hc_ptr);
 };
@@ -126,14 +116,14 @@ int lowpan_nhc_do_uncompression(struct sk_buff *skb,
  *
  * @nhc: nhc which should be added.
  */
-int lowpan_nhc_add(struct lowpan_nhc *nhc);
+int lowpan_nhc_add(const struct lowpan_nhc *nhc);
 
 /**
  * lowpan_nhc_del - delete a next header compression from framework
  *
  * @nhc: nhc which should be deleted.
  */
-void lowpan_nhc_del(struct lowpan_nhc *nhc);
+void lowpan_nhc_del(const struct lowpan_nhc *nhc);
 
 /**
  * lowpan_nhc_init - adding all default nhcs
index 4768a94..0cbcc78 100644 (file)
@@ -6,18 +6,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_NHC_DEST_IDLEN  1
 #define LOWPAN_NHC_DEST_ID_0   0xe6
 #define LOWPAN_NHC_DEST_MASK_0 0xfe
 
-static void dest_nhid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_NHC_DEST_ID_0;
-       nhc->idmask[0] = LOWPAN_NHC_DEST_MASK_0;
-}
-
 LOWPAN_NHC(nhc_dest, "RFC6282 Destination Options", NEXTHDR_DEST, 0,
-          dest_nhid_setup, LOWPAN_NHC_DEST_IDLEN, NULL, NULL);
+          LOWPAN_NHC_DEST_ID_0, LOWPAN_NHC_DEST_MASK_0,  NULL, NULL);
 
 module_lowpan_nhc(nhc_dest);
 MODULE_DESCRIPTION("6LoWPAN next header RFC6282 Destination Options compression");
index be85f07..9414552 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_NHC_FRAGMENT_IDLEN      1
 #define LOWPAN_NHC_FRAGMENT_ID_0       0xe4
 #define LOWPAN_NHC_FRAGMENT_MASK_0     0xfe
 
-static void fragment_nhid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_NHC_FRAGMENT_ID_0;
-       nhc->idmask[0] = LOWPAN_NHC_FRAGMENT_MASK_0;
-}
-
 LOWPAN_NHC(nhc_fragment, "RFC6282 Fragment", NEXTHDR_FRAGMENT, 0,
-          fragment_nhid_setup, LOWPAN_NHC_FRAGMENT_IDLEN, NULL, NULL);
+          LOWPAN_NHC_FRAGMENT_ID_0, LOWPAN_NHC_FRAGMENT_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(nhc_fragment);
 MODULE_DESCRIPTION("6LoWPAN next header RFC6282 Fragment compression");
index a9137f1..e4745dd 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_GHC_EXT_DEST_IDLEN      1
 #define LOWPAN_GHC_EXT_DEST_ID_0       0xb6
 #define LOWPAN_GHC_EXT_DEST_MASK_0     0xfe
 
-static void dest_ghid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_GHC_EXT_DEST_ID_0;
-       nhc->idmask[0] = LOWPAN_GHC_EXT_DEST_MASK_0;
-}
-
 LOWPAN_NHC(ghc_ext_dest, "RFC7400 Destination Extension Header", NEXTHDR_DEST,
-          0, dest_ghid_setup, LOWPAN_GHC_EXT_DEST_IDLEN, NULL, NULL);
+          0, LOWPAN_GHC_EXT_DEST_ID_0, LOWPAN_GHC_EXT_DEST_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(ghc_ext_dest);
 MODULE_DESCRIPTION("6LoWPAN generic header destination extension compression");
index d49b745..220e5ab 100644 (file)
@@ -5,19 +5,12 @@
 
 #include "nhc.h"
 
-#define LOWPAN_GHC_EXT_FRAG_IDLEN      1
 #define LOWPAN_GHC_EXT_FRAG_ID_0       0xb4
 #define LOWPAN_GHC_EXT_FRAG_MASK_0     0xfe
 
-static void frag_ghid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_GHC_EXT_FRAG_ID_0;
-       nhc->idmask[0] = LOWPAN_GHC_EXT_FRAG_MASK_0;
-}
-
 LOWPAN_NHC(ghc_ext_frag, "RFC7400 Fragmentation Extension Header",
-          NEXTHDR_FRAGMENT, 0, frag_ghid_setup,
-          LOWPAN_GHC_EXT_FRAG_IDLEN, NULL, NULL);
+          NEXTHDR_FRAGMENT, 0, LOWPAN_GHC_EXT_FRAG_ID_0,
+          LOWPAN_GHC_EXT_FRAG_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(ghc_ext_frag);
 MODULE_DESCRIPTION("6LoWPAN generic header fragmentation extension compression");
index 3beedf5..9b0de4d 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_GHC_EXT_HOP_IDLEN       1
 #define LOWPAN_GHC_EXT_HOP_ID_0                0xb0
 #define LOWPAN_GHC_EXT_HOP_MASK_0      0xfe
 
-static void hop_ghid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_GHC_EXT_HOP_ID_0;
-       nhc->idmask[0] = LOWPAN_GHC_EXT_HOP_MASK_0;
-}
-
 LOWPAN_NHC(ghc_ext_hop, "RFC7400 Hop-by-Hop Extension Header", NEXTHDR_HOP, 0,
-          hop_ghid_setup, LOWPAN_GHC_EXT_HOP_IDLEN, NULL, NULL);
+          LOWPAN_GHC_EXT_HOP_ID_0, LOWPAN_GHC_EXT_HOP_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(ghc_ext_hop);
 MODULE_DESCRIPTION("6LoWPAN generic header hop-by-hop extension compression");
index 70dc0ea..3e86fae 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_GHC_EXT_ROUTE_IDLEN     1
 #define LOWPAN_GHC_EXT_ROUTE_ID_0      0xb2
 #define LOWPAN_GHC_EXT_ROUTE_MASK_0    0xfe
 
-static void route_ghid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_GHC_EXT_ROUTE_ID_0;
-       nhc->idmask[0] = LOWPAN_GHC_EXT_ROUTE_MASK_0;
-}
-
 LOWPAN_NHC(ghc_ext_route, "RFC7400 Routing Extension Header", NEXTHDR_ROUTING,
-          0, route_ghid_setup, LOWPAN_GHC_EXT_ROUTE_IDLEN, NULL, NULL);
+          0, LOWPAN_GHC_EXT_ROUTE_ID_0, LOWPAN_GHC_EXT_ROUTE_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(ghc_ext_route);
 MODULE_DESCRIPTION("6LoWPAN generic header routing extension compression");
index 339ceff..1634f3e 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_GHC_ICMPV6_IDLEN                1
 #define LOWPAN_GHC_ICMPV6_ID_0         0xdf
 #define LOWPAN_GHC_ICMPV6_MASK_0       0xff
 
-static void icmpv6_ghid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_GHC_ICMPV6_ID_0;
-       nhc->idmask[0] = LOWPAN_GHC_ICMPV6_MASK_0;
-}
-
 LOWPAN_NHC(ghc_icmpv6, "RFC7400 ICMPv6", NEXTHDR_ICMP, 0,
-          icmpv6_ghid_setup, LOWPAN_GHC_ICMPV6_IDLEN, NULL, NULL);
+          LOWPAN_GHC_ICMPV6_ID_0, LOWPAN_GHC_ICMPV6_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(ghc_icmpv6);
 MODULE_DESCRIPTION("6LoWPAN generic header ICMPv6 compression");
index f47fec6..4ac4813 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_GHC_UDP_IDLEN   1
 #define LOWPAN_GHC_UDP_ID_0    0xd0
 #define LOWPAN_GHC_UDP_MASK_0  0xf8
 
-static void udp_ghid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_GHC_UDP_ID_0;
-       nhc->idmask[0] = LOWPAN_GHC_UDP_MASK_0;
-}
-
 LOWPAN_NHC(ghc_udp, "RFC7400 UDP", NEXTHDR_UDP, 0,
-          udp_ghid_setup, LOWPAN_GHC_UDP_IDLEN, NULL, NULL);
+          LOWPAN_GHC_UDP_ID_0, LOWPAN_GHC_UDP_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(ghc_udp);
 MODULE_DESCRIPTION("6LoWPAN generic header UDP compression");
index 158fc19..182087d 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_NHC_HOP_IDLEN   1
 #define LOWPAN_NHC_HOP_ID_0    0xe0
 #define LOWPAN_NHC_HOP_MASK_0  0xfe
 
-static void hop_nhid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_NHC_HOP_ID_0;
-       nhc->idmask[0] = LOWPAN_NHC_HOP_MASK_0;
-}
-
 LOWPAN_NHC(nhc_hop, "RFC6282 Hop-by-Hop Options", NEXTHDR_HOP, 0,
-          hop_nhid_setup, LOWPAN_NHC_HOP_IDLEN, NULL, NULL);
+          LOWPAN_NHC_HOP_ID_0, LOWPAN_NHC_HOP_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(nhc_hop);
 MODULE_DESCRIPTION("6LoWPAN next header RFC6282 Hop-by-Hop Options compression");
index 08b7589..2024236 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_NHC_IPV6_IDLEN  1
 #define LOWPAN_NHC_IPV6_ID_0   0xee
 #define LOWPAN_NHC_IPV6_MASK_0 0xfe
 
-static void ipv6_nhid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_NHC_IPV6_ID_0;
-       nhc->idmask[0] = LOWPAN_NHC_IPV6_MASK_0;
-}
-
-LOWPAN_NHC(nhc_ipv6, "RFC6282 IPv6", NEXTHDR_IPV6, 0, ipv6_nhid_setup,
-          LOWPAN_NHC_IPV6_IDLEN, NULL, NULL);
+LOWPAN_NHC(nhc_ipv6, "RFC6282 IPv6", NEXTHDR_IPV6, 0, LOWPAN_NHC_IPV6_ID_0,
+          LOWPAN_NHC_IPV6_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(nhc_ipv6);
 MODULE_DESCRIPTION("6LoWPAN next header RFC6282 IPv6 compression");
index ac8fca6..1c31d87 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_NHC_MOBILITY_IDLEN      1
 #define LOWPAN_NHC_MOBILITY_ID_0       0xe8
 #define LOWPAN_NHC_MOBILITY_MASK_0     0xfe
 
-static void mobility_nhid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_NHC_MOBILITY_ID_0;
-       nhc->idmask[0] = LOWPAN_NHC_MOBILITY_MASK_0;
-}
-
 LOWPAN_NHC(nhc_mobility, "RFC6282 Mobility", NEXTHDR_MOBILITY, 0,
-          mobility_nhid_setup, LOWPAN_NHC_MOBILITY_IDLEN, NULL, NULL);
+          LOWPAN_NHC_MOBILITY_ID_0, LOWPAN_NHC_MOBILITY_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(nhc_mobility);
 MODULE_DESCRIPTION("6LoWPAN next header RFC6282 Mobility compression");
index 1c17402..dae03eb 100644 (file)
@@ -5,18 +5,11 @@
 
 #include "nhc.h"
 
-#define LOWPAN_NHC_ROUTING_IDLEN       1
 #define LOWPAN_NHC_ROUTING_ID_0                0xe2
 #define LOWPAN_NHC_ROUTING_MASK_0      0xfe
 
-static void routing_nhid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_NHC_ROUTING_ID_0;
-       nhc->idmask[0] = LOWPAN_NHC_ROUTING_MASK_0;
-}
-
 LOWPAN_NHC(nhc_routing, "RFC6282 Routing", NEXTHDR_ROUTING, 0,
-          routing_nhid_setup, LOWPAN_NHC_ROUTING_IDLEN, NULL, NULL);
+          LOWPAN_NHC_ROUTING_ID_0, LOWPAN_NHC_ROUTING_MASK_0, NULL, NULL);
 
 module_lowpan_nhc(nhc_routing);
 MODULE_DESCRIPTION("6LoWPAN next header RFC6282 Routing compression");
index 33f17bd..0a506c7 100644 (file)
@@ -14,7 +14,6 @@
 
 #define LOWPAN_NHC_UDP_MASK            0xF8
 #define LOWPAN_NHC_UDP_ID              0xF0
-#define LOWPAN_NHC_UDP_IDLEN           1
 
 #define LOWPAN_NHC_UDP_4BIT_PORT       0xF0B0
 #define LOWPAN_NHC_UDP_4BIT_MASK       0xFFF0
@@ -169,14 +168,8 @@ static int udp_compress(struct sk_buff *skb, u8 **hc_ptr)
        return 0;
 }
 
-static void udp_nhid_setup(struct lowpan_nhc *nhc)
-{
-       nhc->id[0] = LOWPAN_NHC_UDP_ID;
-       nhc->idmask[0] = LOWPAN_NHC_UDP_MASK;
-}
-
 LOWPAN_NHC(nhc_udp, "RFC6282 UDP", NEXTHDR_UDP, sizeof(struct udphdr),
-          udp_nhid_setup, LOWPAN_NHC_UDP_IDLEN, udp_uncompress, udp_compress);
+          LOWPAN_NHC_UDP_ID, LOWPAN_NHC_UDP_MASK, udp_uncompress, udp_compress);
 
 module_lowpan_nhc(nhc_udp);
 MODULE_DESCRIPTION("6LoWPAN next header RFC6282 UDP compression");
index acf8c79..5aa8144 100644 (file)
@@ -63,10 +63,10 @@ bool vlan_do_receive(struct sk_buff **skbp)
        rx_stats = this_cpu_ptr(vlan_dev_priv(vlan_dev)->vlan_pcpu_stats);
 
        u64_stats_update_begin(&rx_stats->syncp);
-       rx_stats->rx_packets++;
-       rx_stats->rx_bytes += skb->len;
+       u64_stats_inc(&rx_stats->rx_packets);
+       u64_stats_add(&rx_stats->rx_bytes, skb->len);
        if (skb->pkt_type == PACKET_MULTICAST)
-               rx_stats->rx_multicast++;
+               u64_stats_inc(&rx_stats->rx_multicast);
        u64_stats_update_end(&rx_stats->syncp);
 
        return true;
index 839f202..035812b 100644 (file)
@@ -128,8 +128,8 @@ static netdev_tx_t vlan_dev_hard_start_xmit(struct sk_buff *skb,
 
                stats = this_cpu_ptr(vlan->vlan_pcpu_stats);
                u64_stats_update_begin(&stats->syncp);
-               stats->tx_packets++;
-               stats->tx_bytes += len;
+               u64_stats_inc(&stats->tx_packets);
+               u64_stats_add(&stats->tx_bytes, len);
                u64_stats_update_end(&stats->syncp);
        } else {
                this_cpu_inc(vlan->vlan_pcpu_stats->tx_dropped);
@@ -615,7 +615,7 @@ static int vlan_dev_init(struct net_device *dev)
                return -ENOMEM;
 
        /* Get vlan's reference to real_dev */
-       dev_hold_track(real_dev, &vlan->dev_tracker, GFP_KERNEL);
+       netdev_hold(real_dev, &vlan->dev_tracker, GFP_KERNEL);
 
        return 0;
 }
@@ -713,11 +713,11 @@ static void vlan_dev_get_stats64(struct net_device *dev,
                p = per_cpu_ptr(vlan_dev_priv(dev)->vlan_pcpu_stats, i);
                do {
                        start = u64_stats_fetch_begin_irq(&p->syncp);
-                       rxpackets       = p->rx_packets;
-                       rxbytes         = p->rx_bytes;
-                       rxmulticast     = p->rx_multicast;
-                       txpackets       = p->tx_packets;
-                       txbytes         = p->tx_bytes;
+                       rxpackets       = u64_stats_read(&p->rx_packets);
+                       rxbytes         = u64_stats_read(&p->rx_bytes);
+                       rxmulticast     = u64_stats_read(&p->rx_multicast);
+                       txpackets       = u64_stats_read(&p->tx_packets);
+                       txbytes         = u64_stats_read(&p->tx_bytes);
                } while (u64_stats_fetch_retry_irq(&p->syncp, start));
 
                stats->rx_packets       += rxpackets;
@@ -726,8 +726,8 @@ static void vlan_dev_get_stats64(struct net_device *dev,
                stats->tx_packets       += txpackets;
                stats->tx_bytes         += txbytes;
                /* rx_errors & tx_dropped are u32 */
-               rx_errors       += p->rx_errors;
-               tx_dropped      += p->tx_dropped;
+               rx_errors       += READ_ONCE(p->rx_errors);
+               tx_dropped      += READ_ONCE(p->tx_dropped);
        }
        stats->rx_errors  = rx_errors;
        stats->tx_dropped = tx_dropped;
@@ -852,7 +852,7 @@ static void vlan_dev_free(struct net_device *dev)
        vlan->vlan_pcpu_stats = NULL;
 
        /* Get rid of the vlan's reference to real_dev */
-       dev_put_track(vlan->real_dev, &vlan->dev_tracker);
+       netdev_put(vlan->real_dev, &vlan->dev_tracker);
 }
 
 void vlan_setup(struct net_device *dev)
index 95393bb..1a5c0b0 100644 (file)
@@ -102,7 +102,8 @@ again:
                        ax25_disconnect(s, ENETUNREACH);
                        s->ax25_dev = NULL;
                        if (sk->sk_socket) {
-                               dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
+                               netdev_put(ax25_dev->dev,
+                                          &ax25_dev->dev_tracker);
                                ax25_dev_put(ax25_dev);
                        }
                        ax25_cb_del(s);
@@ -1065,7 +1066,7 @@ static int ax25_release(struct socket *sock)
                        del_timer_sync(&ax25->t3timer);
                        del_timer_sync(&ax25->idletimer);
                }
-               dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
+               netdev_put(ax25_dev->dev, &ax25_dev->dev_tracker);
                ax25_dev_put(ax25_dev);
        }
 
@@ -1146,7 +1147,7 @@ static int ax25_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 
        if (ax25_dev) {
                ax25_fillin_cb(ax25, ax25_dev);
-               dev_hold_track(ax25_dev->dev, &ax25_dev->dev_tracker, GFP_ATOMIC);
+               netdev_hold(ax25_dev->dev, &ax25_dev->dev_tracker, GFP_ATOMIC);
        }
 
 done:
index 95a76d5..ab88b6a 100644 (file)
@@ -60,7 +60,7 @@ void ax25_dev_device_up(struct net_device *dev)
        refcount_set(&ax25_dev->refcount, 1);
        dev->ax25_ptr     = ax25_dev;
        ax25_dev->dev     = dev;
-       dev_hold_track(dev, &ax25_dev->dev_tracker, GFP_ATOMIC);
+       netdev_hold(dev, &ax25_dev->dev_tracker, GFP_ATOMIC);
        ax25_dev->forward = NULL;
        ax25_dev->device_up = true;
 
@@ -136,7 +136,7 @@ unlock_put:
        spin_unlock_bh(&ax25_dev_lock);
        ax25_dev_put(ax25_dev);
        dev->ax25_ptr = NULL;
-       dev_put_track(dev, &ax25_dev->dev_tracker);
+       netdev_put(dev, &ax25_dev->dev_tracker);
        ax25_dev_put(ax25_dev);
 }
 
@@ -205,7 +205,7 @@ void __exit ax25_dev_free(void)
        ax25_dev = ax25_dev_list;
        while (ax25_dev != NULL) {
                s        = ax25_dev;
-               dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
+               netdev_put(ax25_dev->dev, &ax25_dev->dev_tracker);
                ax25_dev = ax25_dev->next;
                kfree(s);
        }
index 47fcbad..a84a7cf 100644 (file)
@@ -274,7 +274,7 @@ static void destroy_nbp(struct net_bridge_port *p)
 
        p->br = NULL;
        p->dev = NULL;
-       dev_put_track(dev, &p->dev_tracker);
+       netdev_put(dev, &p->dev_tracker);
 
        kobject_put(&p->kobj);
 }
@@ -423,7 +423,7 @@ static struct net_bridge_port *new_nbp(struct net_bridge *br,
                return ERR_PTR(-ENOMEM);
 
        p->br = br;
-       dev_hold_track(dev, &p->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &p->dev_tracker, GFP_KERNEL);
        p->dev = dev;
        p->path_cost = port_cost(dev);
        p->priority = 0x8000 >> BR_PORT_BITS;
@@ -434,7 +434,7 @@ static struct net_bridge_port *new_nbp(struct net_bridge *br,
        br_stp_port_timer_init(p);
        err = br_multicast_add_port(p);
        if (err) {
-               dev_put_track(dev, &p->dev_tracker);
+               netdev_put(dev, &p->dev_tracker);
                kfree(p);
                p = ERR_PTR(err);
        }
@@ -615,7 +615,7 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
        err = dev_set_allmulti(dev, 1);
        if (err) {
                br_multicast_del_port(p);
-               dev_put_track(dev, &p->dev_tracker);
+               netdev_put(dev, &p->dev_tracker);
                kfree(p);       /* kobject not yet init'd, manually free */
                goto err1;
        }
@@ -725,7 +725,7 @@ err3:
        sysfs_remove_link(br->ifobj, p->dev->name);
 err2:
        br_multicast_del_port(p);
-       dev_put_track(dev, &p->dev_tracker);
+       netdev_put(dev, &p->dev_tracker);
        kobject_put(&p->kobj);
        dev_set_allmulti(dev, -1);
 err1:
index bb01776..1ef14a0 100644 (file)
@@ -1770,10 +1770,10 @@ static int br_fill_linkxstats(struct sk_buff *skb,
                        if (v->vid == pvid)
                                vxi.flags |= BRIDGE_VLAN_INFO_PVID;
                        br_vlan_get_stats(v, &stats);
-                       vxi.rx_bytes = stats.rx_bytes;
-                       vxi.rx_packets = stats.rx_packets;
-                       vxi.tx_bytes = stats.tx_bytes;
-                       vxi.tx_packets = stats.tx_packets;
+                       vxi.rx_bytes = u64_stats_read(&stats.rx_bytes);
+                       vxi.rx_packets = u64_stats_read(&stats.rx_packets);
+                       vxi.tx_bytes = u64_stats_read(&stats.tx_bytes);
+                       vxi.tx_packets = u64_stats_read(&stats.tx_packets);
 
                        if (nla_put(skb, BRIDGE_XSTATS_VLAN, sizeof(vxi), &vxi))
                                goto nla_put_failure;
index 0f5e75c..6e53dc9 100644 (file)
@@ -505,8 +505,8 @@ struct sk_buff *br_handle_vlan(struct net_bridge *br,
        if (br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) {
                stats = this_cpu_ptr(v->stats);
                u64_stats_update_begin(&stats->syncp);
-               stats->tx_bytes += skb->len;
-               stats->tx_packets++;
+               u64_stats_add(&stats->tx_bytes, skb->len);
+               u64_stats_inc(&stats->tx_packets);
                u64_stats_update_end(&stats->syncp);
        }
 
@@ -624,8 +624,8 @@ static bool __allowed_ingress(const struct net_bridge *br,
        if (br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) {
                stats = this_cpu_ptr(v->stats);
                u64_stats_update_begin(&stats->syncp);
-               stats->rx_bytes += skb->len;
-               stats->rx_packets++;
+               u64_stats_add(&stats->rx_bytes, skb->len);
+               u64_stats_inc(&stats->rx_packets);
                u64_stats_update_end(&stats->syncp);
        }
 
@@ -1379,16 +1379,16 @@ void br_vlan_get_stats(const struct net_bridge_vlan *v,
                cpu_stats = per_cpu_ptr(v->stats, i);
                do {
                        start = u64_stats_fetch_begin_irq(&cpu_stats->syncp);
-                       rxpackets = cpu_stats->rx_packets;
-                       rxbytes = cpu_stats->rx_bytes;
-                       txbytes = cpu_stats->tx_bytes;
-                       txpackets = cpu_stats->tx_packets;
+                       rxpackets = u64_stats_read(&cpu_stats->rx_packets);
+                       rxbytes = u64_stats_read(&cpu_stats->rx_bytes);
+                       txbytes = u64_stats_read(&cpu_stats->tx_bytes);
+                       txpackets = u64_stats_read(&cpu_stats->tx_packets);
                } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, start));
 
-               stats->rx_packets += rxpackets;
-               stats->rx_bytes += rxbytes;
-               stats->tx_bytes += txbytes;
-               stats->tx_packets += txpackets;
+               u64_stats_add(&stats->rx_packets, rxpackets);
+               u64_stats_add(&stats->rx_bytes, rxbytes);
+               u64_stats_add(&stats->tx_bytes, txbytes);
+               u64_stats_add(&stats->tx_packets, txpackets);
        }
 }
 
@@ -1779,14 +1779,18 @@ static bool br_vlan_stats_fill(struct sk_buff *skb,
                return false;
 
        br_vlan_get_stats(v, &stats);
-       if (nla_put_u64_64bit(skb, BRIDGE_VLANDB_STATS_RX_BYTES, stats.rx_bytes,
+       if (nla_put_u64_64bit(skb, BRIDGE_VLANDB_STATS_RX_BYTES,
+                             u64_stats_read(&stats.rx_bytes),
                              BRIDGE_VLANDB_STATS_PAD) ||
            nla_put_u64_64bit(skb, BRIDGE_VLANDB_STATS_RX_PACKETS,
-                             stats.rx_packets, BRIDGE_VLANDB_STATS_PAD) ||
-           nla_put_u64_64bit(skb, BRIDGE_VLANDB_STATS_TX_BYTES, stats.tx_bytes,
+                             u64_stats_read(&stats.rx_packets),
+                             BRIDGE_VLANDB_STATS_PAD) ||
+           nla_put_u64_64bit(skb, BRIDGE_VLANDB_STATS_TX_BYTES,
+                             u64_stats_read(&stats.tx_bytes),
                              BRIDGE_VLANDB_STATS_PAD) ||
            nla_put_u64_64bit(skb, BRIDGE_VLANDB_STATS_TX_PACKETS,
-                             stats.tx_packets, BRIDGE_VLANDB_STATS_PAD))
+                             u64_stats_read(&stats.tx_packets),
+                             BRIDGE_VLANDB_STATS_PAD))
                goto out_err;
 
        nla_nest_end(skb, nest);
diff --git a/net/core/.gitignore b/net/core/.gitignore
new file mode 100644 (file)
index 0000000..df1e743
--- /dev/null
@@ -0,0 +1 @@
+dropreason_str.c
index a8e4f73..e8ce3bd 100644 (file)
@@ -4,7 +4,8 @@
 #
 
 obj-y := sock.o request_sock.o skbuff.o datagram.o stream.o scm.o \
-        gen_stats.o gen_estimator.o net_namespace.o secure_seq.o flow_dissector.o
+        gen_stats.o gen_estimator.o net_namespace.o secure_seq.o \
+        flow_dissector.o dropreason_str.o
 
 obj-$(CONFIG_SYSCTL) += sysctl_net_core.o
 
@@ -39,3 +40,23 @@ obj-$(CONFIG_NET_SOCK_MSG) += skmsg.o
 obj-$(CONFIG_BPF_SYSCALL) += sock_map.o
 obj-$(CONFIG_BPF_SYSCALL) += bpf_sk_storage.o
 obj-$(CONFIG_OF)       += of_net.o
+
+clean-files := dropreason_str.c
+
+quiet_cmd_dropreason_str = GEN     $@
+cmd_dropreason_str = awk -F ',' 'BEGIN{ print "\#include <net/dropreason.h>\n"; \
+       print "const char * const drop_reasons[] = {" }\
+       /^enum skb_drop/ { dr=1; }\
+       /^\};/ { dr=0; }\
+       /^\tSKB_DROP_REASON_/ {\
+               if (dr) {\
+                       sub(/\tSKB_DROP_REASON_/, "", $$1);\
+                       printf "\t[SKB_DROP_REASON_%s] = \"%s\",\n", $$1, $$1;\
+               }\
+       }\
+       END{ print "};" }' $< > $@
+
+$(obj)/dropreason_str.c: $(srctree)/include/net/dropreason.h
+       $(call cmd,dropreason_str)
+
+$(obj)/dropreason_str.o: $(obj)/dropreason_str.c
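The new Makefile rule above generates `dropreason_str.c` from the `SKB_DROP_REASON_*` enumerators in `dropreason.h` with an awk one-liner (note the `$$1` in the Makefile is make-escaping for awk's `$1`). A standalone demonstration using a tiny sample header (the two-entry enum below is made up for illustration, not the real `include/net/dropreason.h`):

```shell
# Create a minimal sample of the enum layout the rule parses; real
# entries are tab-indented, which the /^\t.../ pattern relies on.
printf 'enum skb_drop_reason {\n\tSKB_DROP_REASON_NOT_SPECIFIED,\n\tSKB_DROP_REASON_NO_SOCKET,\n};\n' \
	> /tmp/dropreason_sample.h

# Same script as the Makefile rule (with $1 unescaped): only lines
# between "enum skb_drop" and "};" are considered, the enumerator name
# is stripped of its prefix, and a designated-initializer entry is
# emitted for each reason.
awk -F ',' 'BEGIN{ print "const char * const drop_reasons[] = {" }
	/^enum skb_drop/ { dr=1; }
	/^\};/ { dr=0; }
	/^\tSKB_DROP_REASON_/ {
		if (dr) {
			sub(/\tSKB_DROP_REASON_/, "", $1);
			printf "\t[SKB_DROP_REASON_%s] = \"%s\",\n", $1, $1;
		}
	}
	END{ print "};" }' /tmp/dropreason_sample.h
```

Generating the table at build time is what lets the hand-maintained `EM()`/`EMe()` macro table in `drop_monitor.c` be deleted at the end of this series: the string array can no longer drift out of sync with the enum.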
index 50f4fae..35791f8 100644 (file)
@@ -320,7 +320,6 @@ EXPORT_SYMBOL(skb_recv_datagram);
 void skb_free_datagram(struct sock *sk, struct sk_buff *skb)
 {
        consume_skb(skb);
-       sk_mem_reclaim_partial(sk);
 }
 EXPORT_SYMBOL(skb_free_datagram);
 
@@ -336,7 +335,6 @@ void __skb_free_datagram_locked(struct sock *sk, struct sk_buff *skb, int len)
        slow = lock_sock_fast(sk);
        sk_peek_offset_bwd(sk, len);
        skb_orphan(skb);
-       sk_mem_reclaim_partial(sk);
        unlock_sock_fast(sk, slow);
 
        /* skb is now orphaned, can be freed outside of locked section */
@@ -396,7 +394,6 @@ int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags)
                                      NULL);
 
        kfree_skb(skb);
-       sk_mem_reclaim_partial(sk);
        return err;
 }
 EXPORT_SYMBOL(skb_kill_datagram);
index 08ce317..8958c42 100644 (file)
@@ -3925,7 +3925,7 @@ int dev_loopback_xmit(struct net *net, struct sock *sk, struct sk_buff *skb)
        skb->pkt_type = PACKET_LOOPBACK;
        if (skb->ip_summed == CHECKSUM_NONE)
                skb->ip_summed = CHECKSUM_UNNECESSARY;
-       WARN_ON(!skb_dst(skb));
+       DEBUG_NET_WARN_ON_ONCE(!skb_dst(skb));
        skb_dst_force(skb);
        netif_rx(skb);
        return 0;
@@ -6351,6 +6351,23 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
 }
 EXPORT_SYMBOL(dev_set_threaded);
 
+/* Double check that napi_get_frags() allocates skbs with
+ * skb->head being backed by slab, not a page fragment.
+ * This is to make sure bug fixed in 3226b158e67c
+ * ("net: avoid 32 x truesize under-estimation for tiny skbs")
+ * does not accidentally come back.
+ */
+static void napi_get_frags_check(struct napi_struct *napi)
+{
+       struct sk_buff *skb;
+
+       local_bh_disable();
+       skb = napi_get_frags(napi);
+       WARN_ON_ONCE(skb && skb->head_frag);
+       napi_free_frags(napi);
+       local_bh_enable();
+}
+
 void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
                           int (*poll)(struct napi_struct *, int), int weight)
 {
@@ -6378,6 +6395,7 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
        set_bit(NAPI_STATE_NPSVC, &napi->state);
        list_add_rcu(&napi->dev_list, &dev->napi_list);
        napi_hash_add(napi);
+       napi_get_frags_check(napi);
        /* Create kthread for this napi if dev->threaded is set.
         * Clear dev->threaded if kthread creation failed so that
         * threaded mode will not be enabled in napi_enable().
@@ -7463,7 +7481,7 @@ static int __netdev_adjacent_dev_insert(struct net_device *dev,
        adj->ref_nr = 1;
        adj->private = private;
        adj->ignore = false;
-       dev_hold_track(adj_dev, &adj->dev_tracker, GFP_KERNEL);
+       netdev_hold(adj_dev, &adj->dev_tracker, GFP_KERNEL);
 
        pr_debug("Insert adjacency: dev %s adj_dev %s adj->ref_nr %d; dev_hold on %s\n",
                 dev->name, adj_dev->name, adj->ref_nr, adj_dev->name);
@@ -7492,7 +7510,7 @@ remove_symlinks:
        if (netdev_adjacent_is_neigh_list(dev, adj_dev, dev_list))
                netdev_adjacent_sysfs_del(dev, adj_dev->name, dev_list);
 free_adj:
-       dev_put_track(adj_dev, &adj->dev_tracker);
+       netdev_put(adj_dev, &adj->dev_tracker);
        kfree(adj);
 
        return ret;
@@ -7534,7 +7552,7 @@ static void __netdev_adjacent_dev_remove(struct net_device *dev,
        list_del_rcu(&adj->list);
        pr_debug("adjacency: dev_put for %s, because link removed from %s to %s\n",
                 adj_dev->name, dev->name, adj_dev->name);
-       dev_put_track(adj_dev, &adj->dev_tracker);
+       netdev_put(adj_dev, &adj->dev_tracker);
        kfree_rcu(adj, rcu);
 }
 
@@ -10062,7 +10080,7 @@ int register_netdevice(struct net_device *dev)
 
        dev_init_scheduler(dev);
 
-       dev_hold_track(dev, &dev->dev_registered_tracker, GFP_KERNEL);
+       netdev_hold(dev, &dev->dev_registered_tracker, GFP_KERNEL);
        list_netdevice(dev);
 
        add_device_randomness(dev->dev_addr, dev->addr_len);
@@ -10459,23 +10477,23 @@ void dev_fetch_sw_netstats(struct rtnl_link_stats64 *s,
        int cpu;
 
        for_each_possible_cpu(cpu) {
+               u64 rx_packets, rx_bytes, tx_packets, tx_bytes;
                const struct pcpu_sw_netstats *stats;
-               struct pcpu_sw_netstats tmp;
                unsigned int start;
 
                stats = per_cpu_ptr(netstats, cpu);
                do {
                        start = u64_stats_fetch_begin_irq(&stats->syncp);
-                       tmp.rx_packets = stats->rx_packets;
-                       tmp.rx_bytes   = stats->rx_bytes;
-                       tmp.tx_packets = stats->tx_packets;
-                       tmp.tx_bytes   = stats->tx_bytes;
+                       rx_packets = u64_stats_read(&stats->rx_packets);
+                       rx_bytes   = u64_stats_read(&stats->rx_bytes);
+                       tx_packets = u64_stats_read(&stats->tx_packets);
+                       tx_bytes   = u64_stats_read(&stats->tx_bytes);
                } while (u64_stats_fetch_retry_irq(&stats->syncp, start));
 
-               s->rx_packets += tmp.rx_packets;
-               s->rx_bytes   += tmp.rx_bytes;
-               s->tx_packets += tmp.tx_packets;
-               s->tx_bytes   += tmp.tx_bytes;
+               s->rx_packets += rx_packets;
+               s->rx_bytes   += rx_bytes;
+               s->tx_packets += tx_packets;
+               s->tx_bytes   += tx_bytes;
        }
 }
 EXPORT_SYMBOL_GPL(dev_fetch_sw_netstats);
@@ -10868,7 +10886,7 @@ void unregister_netdevice_many(struct list_head *head)
        synchronize_net();
 
        list_for_each_entry(dev, head, unreg_list) {
-               dev_put_track(dev, &dev->dev_registered_tracker);
+               netdev_put(dev, &dev->dev_registered_tracker);
                net_set_todo(dev);
        }
 
index 4f6be44..7674bb9 100644 (file)
@@ -384,10 +384,10 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, void __user *data,
                        return -ENODEV;
                if (!netif_is_bridge_master(dev))
                        return -EOPNOTSUPP;
-               dev_hold_track(dev, &dev_tracker, GFP_KERNEL);
+               netdev_hold(dev, &dev_tracker, GFP_KERNEL);
                rtnl_unlock();
                err = br_ioctl_call(net, netdev_priv(dev), cmd, ifr, NULL);
-               dev_put_track(dev, &dev_tracker);
+               netdev_put(dev, &dev_tracker);
                rtnl_lock();
                return err;
 
index 5cc8849..db61f3a 100644 (file)
@@ -7946,8 +7946,8 @@ static int devlink_nl_cmd_health_reporter_test_doit(struct sk_buff *skb,
 }
 
 struct devlink_stats {
-       u64 rx_bytes;
-       u64 rx_packets;
+       u64_stats_t rx_bytes;
+       u64_stats_t rx_packets;
        struct u64_stats_sync syncp;
 };
 
@@ -8104,12 +8104,12 @@ static void devlink_trap_stats_read(struct devlink_stats __percpu *trap_stats,
                cpu_stats = per_cpu_ptr(trap_stats, i);
                do {
                        start = u64_stats_fetch_begin_irq(&cpu_stats->syncp);
-                       rx_packets = cpu_stats->rx_packets;
-                       rx_bytes = cpu_stats->rx_bytes;
+                       rx_packets = u64_stats_read(&cpu_stats->rx_packets);
+                       rx_bytes = u64_stats_read(&cpu_stats->rx_bytes);
                } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, start));
 
-               stats->rx_packets += rx_packets;
-               stats->rx_bytes += rx_bytes;
+               u64_stats_add(&stats->rx_packets, rx_packets);
+               u64_stats_add(&stats->rx_bytes, rx_bytes);
        }
 }
 
@@ -8127,11 +8127,13 @@ devlink_trap_group_stats_put(struct sk_buff *msg,
                return -EMSGSIZE;
 
        if (nla_put_u64_64bit(msg, DEVLINK_ATTR_STATS_RX_PACKETS,
-                             stats.rx_packets, DEVLINK_ATTR_PAD))
+                             u64_stats_read(&stats.rx_packets),
+                             DEVLINK_ATTR_PAD))
                goto nla_put_failure;
 
        if (nla_put_u64_64bit(msg, DEVLINK_ATTR_STATS_RX_BYTES,
-                             stats.rx_bytes, DEVLINK_ATTR_PAD))
+                             u64_stats_read(&stats.rx_bytes),
+                             DEVLINK_ATTR_PAD))
                goto nla_put_failure;
 
        nla_nest_end(msg, attr);
@@ -8171,11 +8173,13 @@ static int devlink_trap_stats_put(struct sk_buff *msg, struct devlink *devlink,
                goto nla_put_failure;
 
        if (nla_put_u64_64bit(msg, DEVLINK_ATTR_STATS_RX_PACKETS,
-                             stats.rx_packets, DEVLINK_ATTR_PAD))
+                             u64_stats_read(&stats.rx_packets),
+                             DEVLINK_ATTR_PAD))
                goto nla_put_failure;
 
        if (nla_put_u64_64bit(msg, DEVLINK_ATTR_STATS_RX_BYTES,
-                             stats.rx_bytes, DEVLINK_ATTR_PAD))
+                             u64_stats_read(&stats.rx_bytes),
+                             DEVLINK_ATTR_PAD))
                goto nla_put_failure;
 
        nla_nest_end(msg, attr);
@@ -11641,8 +11645,8 @@ devlink_trap_stats_update(struct devlink_stats __percpu *trap_stats,
 
        stats = this_cpu_ptr(trap_stats);
        u64_stats_update_begin(&stats->syncp);
-       stats->rx_bytes += skb_len;
-       stats->rx_packets++;
+       u64_stats_add(&stats->rx_bytes, skb_len);
+       u64_stats_inc(&stats->rx_packets);
        u64_stats_update_end(&stats->syncp);
 }
 
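The devlink hunks above convert plain `u64` counters to `u64_stats_t`, written between `u64_stats_update_begin()/end()` and read under a fetch/retry loop. A minimal userspace sketch of that discipline, using a sequence counter that is odd while a writer is mid-update — the `toy_` names are illustrative stand-ins, not the kernel's `u64_stats_sync` API:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Toy analogue of u64_stats_sync: seq is odd while a writer is
 * mid-update, so a reader that sees seq change must retry. */
struct toy_stats {
	atomic_uint seq;
	uint64_t rx_packets;
	uint64_t rx_bytes;
};

static void toy_update_begin(struct toy_stats *s)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);
}

static void toy_update_end(struct toy_stats *s)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);
}

static unsigned int toy_fetch_begin(const struct toy_stats *s)
{
	unsigned int v;

	/* spin while a writer is in progress (seq odd) */
	while ((v = atomic_load_explicit(&s->seq, memory_order_acquire)) & 1)
		;
	return v;
}

static int toy_fetch_retry(const struct toy_stats *s, unsigned int start)
{
	return atomic_load_explicit(&s->seq, memory_order_acquire) != start;
}

/* Reader: snapshot both counters consistently, mirroring the shape of
 * the per-CPU read loops in the devlink hunks above. */
static void toy_read(struct toy_stats *s, uint64_t *pkts, uint64_t *bytes)
{
	unsigned int start;

	do {
		start = toy_fetch_begin(s);
		*pkts = s->rx_packets;
		*bytes = s->rx_bytes;
	} while (toy_fetch_retry(s, start));
}
```

The point of the conversion is that reads and writes of the counters go through accessors (`u64_stats_read()`, `u64_stats_add()`, `u64_stats_inc()`) rather than plain loads and stores, so 32-bit builds get tear-free 64-bit counters.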
index 41cac0e..75501e1 100644
 static int trace_state = TRACE_OFF;
 static bool monitor_hw;
 
-#undef EM
-#undef EMe
-
-#define EM(a, b)       [a] = #b,
-#define EMe(a, b)      [a] = #b
-
-/* drop_reasons is used to translate 'enum skb_drop_reason' to string,
- * which is reported to user space.
- */
-static const char * const drop_reasons[] = {
-       TRACE_SKB_DROP_REASON
-};
-
 /* net_dm_mutex
  *
  * An overall lock guarding every operation coming from userspace.
@@ -68,7 +55,7 @@ static const char * const drop_reasons[] = {
 static DEFINE_MUTEX(net_dm_mutex);
 
 struct net_dm_stats {
-       u64 dropped;
+       u64_stats_t dropped;
        struct u64_stats_sync syncp;
 };
 
@@ -543,7 +530,7 @@ static void net_dm_packet_trace_kfree_skb_hit(void *ignore,
 unlock_free:
        spin_unlock_irqrestore(&data->drop_queue.lock, flags);
        u64_stats_update_begin(&data->stats.syncp);
-       data->stats.dropped++;
+       u64_stats_inc(&data->stats.dropped);
        u64_stats_update_end(&data->stats.syncp);
        consume_skb(nskb);
 }
@@ -877,7 +864,8 @@ net_dm_hw_metadata_copy(const struct devlink_trap_metadata *metadata)
        }
 
        hw_metadata->input_dev = metadata->input_dev;
-       dev_hold_track(hw_metadata->input_dev, &hw_metadata->dev_tracker, GFP_ATOMIC);
+       netdev_hold(hw_metadata->input_dev, &hw_metadata->dev_tracker,
+                   GFP_ATOMIC);
 
        return hw_metadata;
 
@@ -893,7 +881,7 @@ free_hw_metadata:
 static void
 net_dm_hw_metadata_free(struct devlink_trap_metadata *hw_metadata)
 {
-       dev_put_track(hw_metadata->input_dev, &hw_metadata->dev_tracker);
+       netdev_put(hw_metadata->input_dev, &hw_metadata->dev_tracker);
        kfree(hw_metadata->fa_cookie);
        kfree(hw_metadata->trap_name);
        kfree(hw_metadata->trap_group_name);
@@ -998,7 +986,7 @@ net_dm_hw_trap_packet_probe(void *ignore, const struct devlink *devlink,
 unlock_free:
        spin_unlock_irqrestore(&hw_data->drop_queue.lock, flags);
        u64_stats_update_begin(&hw_data->stats.syncp);
-       hw_data->stats.dropped++;
+       u64_stats_inc(&hw_data->stats.dropped);
        u64_stats_update_end(&hw_data->stats.syncp);
        net_dm_hw_metadata_free(n_hw_metadata);
 free:
@@ -1445,10 +1433,10 @@ static void net_dm_stats_read(struct net_dm_stats *stats)
 
                do {
                        start = u64_stats_fetch_begin_irq(&cpu_stats->syncp);
-                       dropped = cpu_stats->dropped;
+                       dropped = u64_stats_read(&cpu_stats->dropped);
                } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, start));
 
-               stats->dropped += dropped;
+               u64_stats_add(&stats->dropped, dropped);
        }
 }
 
@@ -1464,7 +1452,7 @@ static int net_dm_stats_put(struct sk_buff *msg)
                return -EMSGSIZE;
 
        if (nla_put_u64_64bit(msg, NET_DM_ATTR_STATS_DROPPED,
-                             stats.dropped, NET_DM_ATTR_PAD))
+                             u64_stats_read(&stats.dropped), NET_DM_ATTR_PAD))
                goto nla_put_failure;
 
        nla_nest_end(msg, attr);
@@ -1489,10 +1477,10 @@ static void net_dm_hw_stats_read(struct net_dm_stats *stats)
 
                do {
                        start = u64_stats_fetch_begin_irq(&cpu_stats->syncp);
-                       dropped = cpu_stats->dropped;
+                       dropped = u64_stats_read(&cpu_stats->dropped);
                } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, start));
 
-               stats->dropped += dropped;
+               u64_stats_add(&stats->dropped, dropped);
        }
 }
 
@@ -1508,7 +1496,7 @@ static int net_dm_hw_stats_put(struct sk_buff *msg)
                return -EMSGSIZE;
 
        if (nla_put_u64_64bit(msg, NET_DM_ATTR_STATS_DROPPED,
-                             stats.dropped, NET_DM_ATTR_PAD))
+                             u64_stats_read(&stats.dropped), NET_DM_ATTR_PAD))
                goto nla_put_failure;
 
        nla_nest_end(msg, attr);
index d16c2c9..bc9c9be 100644
@@ -49,7 +49,7 @@ void dst_init(struct dst_entry *dst, struct dst_ops *ops,
              unsigned short flags)
 {
        dst->dev = dev;
-       dev_hold_track(dev, &dst->dev_tracker, GFP_ATOMIC);
+       netdev_hold(dev, &dst->dev_tracker, GFP_ATOMIC);
        dst->ops = ops;
        dst_init_metrics(dst, dst_default_metrics.metrics, true);
        dst->expires = 0UL;
@@ -117,7 +117,7 @@ struct dst_entry *dst_destroy(struct dst_entry * dst)
 
        if (dst->ops->destroy)
                dst->ops->destroy(dst);
-       dev_put_track(dst->dev, &dst->dev_tracker);
+       netdev_put(dst->dev, &dst->dev_tracker);
 
        lwtstate_put(dst->lwtstate);
 
@@ -159,8 +159,8 @@ void dst_dev_put(struct dst_entry *dst)
        dst->input = dst_discard;
        dst->output = dst_discard_out;
        dst->dev = blackhole_netdev;
-       dev_replace_track(dev, blackhole_netdev, &dst->dev_tracker,
-                         GFP_ATOMIC);
+       netdev_ref_replace(dev, blackhole_netdev, &dst->dev_tracker,
+                          GFP_ATOMIC);
 }
 EXPORT_SYMBOL(dst_dev_put);
 
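The `dev_hold_track()`/`dev_put_track()` to `netdev_hold()`/`netdev_put()` renames throughout this series keep the same tracked-reference semantics: each hold records a tracker cookie, and the matching put must present the same cookie, so a mismatched or double put can be pinpointed. A toy model of that idea — these names and the single-owner cookie are illustrative, not the kernel's ref_tracker implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Toy tracked reference: the tracker remembers which object it
 * pinned, and the put side checks it gets the same tracker back. */
struct toy_tracker {
	void *owner;
};

struct toy_dev {
	int refcnt;
};

static void toy_hold(struct toy_dev *d, struct toy_tracker *t)
{
	d->refcnt++;
	t->owner = d;		/* remember which object we pinned */
}

static int toy_put(struct toy_dev *d, struct toy_tracker *t)
{
	if (t->owner != d)
		return -1;	/* mismatched tracker: caught */
	t->owner = NULL;
	d->refcnt--;
	return 0;
}
```

With plain refcounting, a leaked or doubled `dev_put()` is anonymous; the tracker cookie is what lets the kernel attribute the imbalance to a specific hold site.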
index dcaa92a..864d2d8 100644
@@ -252,7 +252,7 @@ struct failover *failover_register(struct net_device *dev,
                return ERR_PTR(-ENOMEM);
 
        rcu_assign_pointer(failover->ops, ops);
-       dev_hold_track(dev, &failover->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &failover->dev_tracker, GFP_KERNEL);
        dev->priv_flags |= IFF_FAILOVER;
        rcu_assign_pointer(failover->failover_dev, dev);
 
@@ -285,7 +285,7 @@ void failover_unregister(struct failover *failover)
                    failover_dev->name);
 
        failover_dev->priv_flags &= ~IFF_FAILOVER;
-       dev_put_track(failover_dev, &failover->dev_tracker);
+       netdev_put(failover_dev, &failover->dev_tracker);
 
        spin_lock(&failover_lock);
        list_del(&failover->list);
index 73f68d4..929f637 100644
@@ -595,3 +595,9 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
        return (bo && list_empty(&bo->cb_list)) ? -EOPNOTSUPP : count;
 }
 EXPORT_SYMBOL(flow_indr_dev_setup_offload);
+
+bool flow_indr_dev_exists(void)
+{
+       return !list_empty(&flow_block_indr_dev_list);
+}
+EXPORT_SYMBOL(flow_indr_dev_exists);
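The new `flow_indr_dev_exists()` helper is just a `list_empty()` check on the indirect-device registration list. For readers unfamiliar with the kernel's intrusive lists, a minimal circular doubly-linked list with the same emptiness convention (an empty head points to itself) — names here are illustrative, not `<linux/list.h>`:

```c
#include <assert.h>

/* Circular doubly-linked list head; empty when it points to itself,
 * which makes the "does anything exist?" check a single comparison. */
struct toy_list {
	struct toy_list *prev, *next;
};

static void toy_list_init(struct toy_list *h)
{
	h->prev = h->next = h;
}

static int toy_list_empty(const struct toy_list *h)
{
	return h->next == h;
}

static void toy_list_add_tail(struct toy_list *n, struct toy_list *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}
```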
index a244d3b..aa6cb1f 100644
@@ -110,7 +110,7 @@ static void linkwatch_add_event(struct net_device *dev)
        spin_lock_irqsave(&lweventlist_lock, flags);
        if (list_empty(&dev->link_watch_list)) {
                list_add_tail(&dev->link_watch_list, &lweventlist);
-               dev_hold_track(dev, &dev->linkwatch_dev_tracker, GFP_ATOMIC);
+               netdev_hold(dev, &dev->linkwatch_dev_tracker, GFP_ATOMIC);
        }
        spin_unlock_irqrestore(&lweventlist_lock, flags);
 }
index 5462528..d8ec706 100644
@@ -624,7 +624,7 @@ ___neigh_create(struct neigh_table *tbl, const void *pkey,
 
        memcpy(n->primary_key, pkey, key_len);
        n->dev = dev;
-       dev_hold_track(dev, &n->dev_tracker, GFP_ATOMIC);
+       netdev_hold(dev, &n->dev_tracker, GFP_ATOMIC);
 
        /* Protocol specific setup. */
        if (tbl->constructor && (error = tbl->constructor(n)) < 0) {
@@ -770,10 +770,10 @@ struct pneigh_entry * pneigh_lookup(struct neigh_table *tbl,
        write_pnet(&n->net, net);
        memcpy(n->key, pkey, key_len);
        n->dev = dev;
-       dev_hold_track(dev, &n->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &n->dev_tracker, GFP_KERNEL);
 
        if (tbl->pconstructor && tbl->pconstructor(n)) {
-               dev_put_track(dev, &n->dev_tracker);
+               netdev_put(dev, &n->dev_tracker);
                kfree(n);
                n = NULL;
                goto out;
@@ -805,7 +805,7 @@ int pneigh_delete(struct neigh_table *tbl, struct net *net, const void *pkey,
                        write_unlock_bh(&tbl->lock);
                        if (tbl->pdestructor)
                                tbl->pdestructor(n);
-                       dev_put_track(n->dev, &n->dev_tracker);
+                       netdev_put(n->dev, &n->dev_tracker);
                        kfree(n);
                        return 0;
                }
@@ -838,7 +838,7 @@ static int pneigh_ifdown_and_unlock(struct neigh_table *tbl,
                n->next = NULL;
                if (tbl->pdestructor)
                        tbl->pdestructor(n);
-               dev_put_track(n->dev, &n->dev_tracker);
+               netdev_put(n->dev, &n->dev_tracker);
                kfree(n);
        }
        return -ENOENT;
@@ -879,7 +879,7 @@ void neigh_destroy(struct neighbour *neigh)
        if (dev->netdev_ops->ndo_neigh_destroy)
                dev->netdev_ops->ndo_neigh_destroy(dev, neigh);
 
-       dev_put_track(dev, &neigh->dev_tracker);
+       netdev_put(dev, &neigh->dev_tracker);
        neigh_parms_put(neigh->parms);
 
        neigh_dbg(2, "neigh %p is destroyed\n", neigh);
@@ -1671,13 +1671,13 @@ struct neigh_parms *neigh_parms_alloc(struct net_device *dev,
                refcount_set(&p->refcnt, 1);
                p->reachable_time =
                                neigh_rand_reach_time(NEIGH_VAR(p, BASE_REACHABLE_TIME));
-               dev_hold_track(dev, &p->dev_tracker, GFP_KERNEL);
+               netdev_hold(dev, &p->dev_tracker, GFP_KERNEL);
                p->dev = dev;
                write_pnet(&p->net, net);
                p->sysctl_table = NULL;
 
                if (ops->ndo_neigh_setup && ops->ndo_neigh_setup(dev, p)) {
-                       dev_put_track(dev, &p->dev_tracker);
+                       netdev_put(dev, &p->dev_tracker);
                        kfree(p);
                        return NULL;
                }
@@ -1708,7 +1708,7 @@ void neigh_parms_release(struct neigh_table *tbl, struct neigh_parms *parms)
        list_del(&parms->list);
        parms->dead = 1;
        write_unlock_bh(&tbl->lock);
-       dev_put_track(parms->dev, &parms->dev_tracker);
+       netdev_put(parms->dev, &parms->dev_tracker);
        call_rcu(&parms->rcu_head, neigh_rcu_free_parms);
 }
 EXPORT_SYMBOL(neigh_parms_release);
index e319e24..d49fc97 100644
@@ -1016,7 +1016,7 @@ static void rx_queue_release(struct kobject *kobj)
 #endif
 
        memset(kobj, 0, sizeof(*kobj));
-       dev_put_track(queue->dev, &queue->dev_tracker);
+       netdev_put(queue->dev, &queue->dev_tracker);
 }
 
 static const void *rx_queue_namespace(struct kobject *kobj)
@@ -1056,7 +1056,7 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
        /* Kobject_put later will trigger rx_queue_release call which
         * decreases dev refcount: Take that reference here
         */
-       dev_hold_track(queue->dev, &queue->dev_tracker, GFP_KERNEL);
+       netdev_hold(queue->dev, &queue->dev_tracker, GFP_KERNEL);
 
        kobj->kset = dev->queues_kset;
        error = kobject_init_and_add(kobj, &rx_queue_ktype, NULL,
@@ -1619,7 +1619,7 @@ static void netdev_queue_release(struct kobject *kobj)
        struct netdev_queue *queue = to_netdev_queue(kobj);
 
        memset(kobj, 0, sizeof(*kobj));
-       dev_put_track(queue->dev, &queue->dev_tracker);
+       netdev_put(queue->dev, &queue->dev_tracker);
 }
 
 static const void *netdev_queue_namespace(struct kobject *kobj)
@@ -1659,7 +1659,7 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
        /* Kobject_put later will trigger netdev_queue_release call
         * which decreases dev refcount: Take that reference here
         */
-       dev_hold_track(queue->dev, &queue->dev_tracker, GFP_KERNEL);
+       netdev_hold(queue->dev, &queue->dev_tracker, GFP_KERNEL);
 
        kobj->kset = dev->queues_kset;
        error = kobject_init_and_add(kobj, &netdev_queue_ktype, NULL,
index db72446..5d27067 100644
@@ -853,7 +853,7 @@ void netpoll_cleanup(struct netpoll *np)
        if (!np->dev)
                goto out;
        __netpoll_cleanup(np);
-       dev_put_track(np->dev, &np->dev_tracker);
+       netdev_put(np->dev, &np->dev_tracker);
        np->dev = NULL;
 out:
        rtnl_unlock();
index 84b62cd..88906ba 100644
@@ -2100,7 +2100,7 @@ static int pktgen_setup_dev(const struct pktgen_net *pn,
 
        /* Clean old setups */
        if (pkt_dev->odev) {
-               dev_put_track(pkt_dev->odev, &pkt_dev->dev_tracker);
+               netdev_put(pkt_dev->odev, &pkt_dev->dev_tracker);
                pkt_dev->odev = NULL;
        }
 
@@ -3807,7 +3807,7 @@ static int pktgen_add_device(struct pktgen_thread *t, const char *ifname)
 
        return add_dev_to_thread(t, pkt_dev);
 out2:
-       dev_put_track(pkt_dev->odev, &pkt_dev->dev_tracker);
+       netdev_put(pkt_dev->odev, &pkt_dev->dev_tracker);
 out1:
 #ifdef CONFIG_XFRM
        free_SAs(pkt_dev);
@@ -3901,7 +3901,7 @@ static int pktgen_remove_device(struct pktgen_thread *t,
        /* Dis-associate from the interface */
 
        if (pkt_dev->odev) {
-               dev_put_track(pkt_dev->odev, &pkt_dev->dev_tracker);
+               netdev_put(pkt_dev->odev, &pkt_dev->dev_tracker);
                pkt_dev->odev = NULL;
        }
 
index 5b3559c..fec75f8 100644
@@ -91,6 +91,9 @@ static struct kmem_cache *skbuff_ext_cache __ro_after_init;
 int sysctl_max_skb_frags __read_mostly = MAX_SKB_FRAGS;
 EXPORT_SYMBOL(sysctl_max_skb_frags);
 
+/* The array 'drop_reasons' is auto-generated in dropreason_str.c */
+EXPORT_SYMBOL(drop_reasons);
+
 /**
  *     skb_panic - private function for out-of-line support
  *     @skb:   buffer
@@ -557,6 +560,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
        struct sk_buff *skb;
        void *data;
 
+       DEBUG_NET_WARN_ON_ONCE(!in_softirq());
        len += NET_SKB_PAD + NET_IP_ALIGN;
 
        /* If requested length is either too small or too big,
@@ -725,7 +729,7 @@ void skb_release_head_state(struct sk_buff *skb)
 {
        skb_dst_drop(skb);
        if (skb->destructor) {
-               WARN_ON(in_hardirq());
+               DEBUG_NET_WARN_ON_ONCE(in_hardirq());
                skb->destructor(skb);
        }
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
@@ -978,7 +982,7 @@ void napi_consume_skb(struct sk_buff *skb, int budget)
                return;
        }
 
-       lockdep_assert_in_softirq();
+       DEBUG_NET_WARN_ON_ONCE(!in_softirq());
 
        if (!skb_unref(skb))
                return;
index 2ff40dd..92a0296 100644
@@ -991,7 +991,7 @@ EXPORT_SYMBOL(sock_set_mark);
 static void sock_release_reserved_memory(struct sock *sk, int bytes)
 {
        /* Round down bytes to multiple of pages */
-       bytes &= ~(SK_MEM_QUANTUM - 1);
+       bytes = round_down(bytes, PAGE_SIZE);
 
        WARN_ON(bytes > sk->sk_reserved_mem);
        sk->sk_reserved_mem -= bytes;
@@ -1019,7 +1019,8 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
                return -ENOMEM;
 
        /* pre-charge to forward_alloc */
-       allocated = sk_memory_allocated_add(sk, pages);
+       sk_memory_allocated_add(sk, pages);
+       allocated = sk_memory_allocated(sk);
        /* If the system goes into memory pressure with this
         * precharge, give up and return error.
         */
@@ -1028,9 +1029,9 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
                mem_cgroup_uncharge_skmem(sk->sk_memcg, pages);
                return -ENOMEM;
        }
-       sk->sk_forward_alloc += pages << SK_MEM_QUANTUM_SHIFT;
+       sk->sk_forward_alloc += pages << PAGE_SHIFT;
 
-       sk->sk_reserved_mem += pages << SK_MEM_QUANTUM_SHIFT;
+       sk->sk_reserved_mem += pages << PAGE_SHIFT;
 
        return 0;
 }
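The sock.c hunks replace the `SK_MEM_QUANTUM` constants with `PAGE_SIZE`/`PAGE_SHIFT`, and replace the open-coded mask with `round_down()`. For a power-of-two page size the mask form and the shift round trip are equivalent; a quick self-contained check of that identity (using the common 4 KiB page as a stand-in value):

```c
#include <assert.h>

/* Stand-in values for the common 4 KiB page; the identity below holds
 * for any power-of-two page size. */
#define TOY_PAGE_SHIFT	12
#define TOY_PAGE_SIZE	(1UL << TOY_PAGE_SHIFT)

/* Same definition shape as the kernel's round_down() for
 * power-of-two alignment. */
#define toy_round_down(x, y)	((x) & ~((y) - 1))

/* Bytes -> whole pages, as sk_reserved_mem / sk_forward_alloc now
 * count in the hunks above. */
static unsigned long toy_bytes_to_pages(unsigned long bytes)
{
	return bytes >> TOY_PAGE_SHIFT;
}
```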
@@ -2844,7 +2845,7 @@ void __release_sock(struct sock *sk)
                do {
                        next = skb->next;
                        prefetch(next);
-                       WARN_ON_ONCE(skb_dst_is_noref(skb));
+                       DEBUG_NET_WARN_ON_ONCE(skb_dst_is_noref(skb));
                        skb_mark_not_on_list(skb);
                        sk_backlog_rcv(sk, skb);
 
@@ -2906,11 +2907,13 @@ EXPORT_SYMBOL(sk_wait_data);
  */
 int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 {
-       struct proto *prot = sk->sk_prot;
-       long allocated = sk_memory_allocated_add(sk, amt);
        bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;
+       struct proto *prot = sk->sk_prot;
        bool charged = true;
+       long allocated;
 
+       sk_memory_allocated_add(sk, amt);
+       allocated = sk_memory_allocated(sk);
        if (memcg_charge &&
            !(charged = mem_cgroup_charge_skmem(sk->sk_memcg, amt,
                                                gfp_memcg_charge())))
@@ -2987,7 +2990,6 @@ suppress_allocation:
 
        return 0;
 }
-EXPORT_SYMBOL(__sk_mem_raise_allocated);
 
 /**
  *     __sk_mem_schedule - increase sk_forward_alloc and memory_allocated
@@ -3003,10 +3005,10 @@ int __sk_mem_schedule(struct sock *sk, int size, int kind)
 {
        int ret, amt = sk_mem_pages(size);
 
-       sk->sk_forward_alloc += amt << SK_MEM_QUANTUM_SHIFT;
+       sk->sk_forward_alloc += amt << PAGE_SHIFT;
        ret = __sk_mem_raise_allocated(sk, size, amt, kind);
        if (!ret)
-               sk->sk_forward_alloc -= amt << SK_MEM_QUANTUM_SHIFT;
+               sk->sk_forward_alloc -= amt << PAGE_SHIFT;
        return ret;
 }
 EXPORT_SYMBOL(__sk_mem_schedule);
@@ -3029,17 +3031,16 @@ void __sk_mem_reduce_allocated(struct sock *sk, int amount)
            (sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0)))
                sk_leave_memory_pressure(sk);
 }
-EXPORT_SYMBOL(__sk_mem_reduce_allocated);
 
 /**
  *     __sk_mem_reclaim - reclaim sk_forward_alloc and memory_allocated
  *     @sk: socket
- *     @amount: number of bytes (rounded down to a SK_MEM_QUANTUM multiple)
+ *     @amount: number of bytes (rounded down to a PAGE_SIZE multiple)
  */
 void __sk_mem_reclaim(struct sock *sk, int amount)
 {
-       amount >>= SK_MEM_QUANTUM_SHIFT;
-       sk->sk_forward_alloc -= amount << SK_MEM_QUANTUM_SHIFT;
+       amount >>= PAGE_SHIFT;
+       sk->sk_forward_alloc -= amount << PAGE_SHIFT;
        __sk_mem_reduce_allocated(sk, amount);
 }
 EXPORT_SYMBOL(__sk_mem_reclaim);
@@ -3798,6 +3799,10 @@ int proto_register(struct proto *prot, int alloc_slab)
                pr_err("%s: missing sysctl_mem\n", prot->name);
                return -EINVAL;
        }
+       if (prot->memory_allocated && !prot->per_cpu_fw_alloc) {
+               pr_err("%s: missing per_cpu_fw_alloc\n", prot->name);
+               return -EINVAL;
+       }
        if (alloc_slab) {
                prot->slab = kmem_cache_create_usercopy(prot->name,
                                        prot->obj_size, 0,
index 06b36c7..ccc083c 100644
@@ -196,13 +196,13 @@ void sk_stream_kill_queues(struct sock *sk)
        __skb_queue_purge(&sk->sk_receive_queue);
 
        /* Next, the write queue. */
-       WARN_ON(!skb_queue_empty(&sk->sk_write_queue));
+       WARN_ON_ONCE(!skb_queue_empty(&sk->sk_write_queue));
 
        /* Account for returned memory. */
        sk_mem_reclaim_final(sk);
 
-       WARN_ON(sk->sk_wmem_queued);
-       WARN_ON(sk->sk_forward_alloc);
+       WARN_ON_ONCE(sk->sk_wmem_queued);
+       WARN_ON_ONCE(sk->sk_forward_alloc);
 
        /* It is _impossible_ for the backlog to contain anything
         * when we get here.  All user references to this socket
index dc92a67..aa4f43f 100644
@@ -149,6 +149,7 @@ static DEFINE_RWLOCK(dn_hash_lock);
 static struct hlist_head dn_sk_hash[DN_SK_HASH_SIZE];
 static struct hlist_head dn_wild_sk;
 static atomic_long_t decnet_memory_allocated;
+static DEFINE_PER_CPU(int, decnet_memory_per_cpu_fw_alloc);
 
 static int __dn_setsockopt(struct socket *sock, int level, int optname,
                sockptr_t optval, unsigned int optlen, int flags);
@@ -454,7 +455,10 @@ static struct proto dn_proto = {
        .owner                  = THIS_MODULE,
        .enter_memory_pressure  = dn_enter_memory_pressure,
        .memory_pressure        = &dn_memory_pressure,
+
        .memory_allocated       = &decnet_memory_allocated,
+       .per_cpu_fw_alloc       = &decnet_memory_per_cpu_fw_alloc,
+
        .sysctl_mem             = sysctl_decnet_mem,
        .sysctl_wmem            = sysctl_decnet_wmem,
        .sysctl_rmem            = sysctl_decnet_rmem,
index 801a5d4..2e1ac63 100644
@@ -935,10 +935,10 @@ static void dsa_slave_get_ethtool_stats(struct net_device *dev,
                s = per_cpu_ptr(dev->tstats, i);
                do {
                        start = u64_stats_fetch_begin_irq(&s->syncp);
-                       tx_packets = s->tx_packets;
-                       tx_bytes = s->tx_bytes;
-                       rx_packets = s->rx_packets;
-                       rx_bytes = s->rx_bytes;
+                       tx_packets = u64_stats_read(&s->tx_packets);
+                       tx_bytes = u64_stats_read(&s->tx_bytes);
+                       rx_packets = u64_stats_read(&s->rx_packets);
+                       rx_bytes = u64_stats_read(&s->rx_bytes);
                } while (u64_stats_fetch_retry_irq(&s->syncp, start));
                data[0] += tx_packets;
                data[1] += tx_bytes;
index 326e14e..6a7308d 100644
@@ -369,22 +369,9 @@ EXPORT_SYMBOL(ethtool_convert_legacy_u32_to_link_mode);
 bool ethtool_convert_link_mode_to_legacy_u32(u32 *legacy_u32,
                                             const unsigned long *src)
 {
-       bool retval = true;
-
-       /* TODO: following test will soon always be true */
-       if (__ETHTOOL_LINK_MODE_MASK_NBITS > 32) {
-               __ETHTOOL_DECLARE_LINK_MODE_MASK(ext);
-
-               linkmode_zero(ext);
-               bitmap_fill(ext, 32);
-               bitmap_complement(ext, ext, __ETHTOOL_LINK_MODE_MASK_NBITS);
-               if (linkmode_intersects(ext, src)) {
-                       /* src mask goes beyond bit 31 */
-                       retval = false;
-               }
-       }
        *legacy_u32 = src[0];
-       return retval;
+       return find_next_bit(src, __ETHTOOL_LINK_MODE_MASK_NBITS, 32) ==
+               __ETHTOOL_LINK_MODE_MASK_NBITS;
 }
 EXPORT_SYMBOL(ethtool_convert_link_mode_to_legacy_u32);
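The ethtool hunk above replaces the complement-and-intersect bitmap test with a single `find_next_bit()` scan: the conversion to a legacy `u32` is lossless iff no bit at position 32 or above is set. A self-contained sketch over a 64-bit "link mode mask" (the kernel's mask is wider and its `find_next_bit()` operates on `unsigned long` arrays; this version is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_NBITS 64

/* Returns the index of the first set bit at or after 'start', or
 * 'nbits' if none — the same not-found convention as the kernel's
 * find_next_bit(). */
static unsigned int toy_find_next_bit(uint64_t mask, unsigned int nbits,
				      unsigned int start)
{
	unsigned int i;

	for (i = start; i < nbits; i++)
		if (mask & (1ULL << i))
			return i;
	return nbits;
}

/* Mirrors the rewritten conversion: copy the low word, report whether
 * the copy was lossless. */
static int toy_to_legacy_u32(uint32_t *legacy, uint64_t src)
{
	*legacy = (uint32_t)src;
	return toy_find_next_bit(src, TOY_NBITS, 32) == TOY_NBITS;
}
```

The old code built a complement mask of bits 32..N-1 and intersected it with the source; scanning from bit 32 expresses the same "nothing set past bit 31" condition directly and drops the temporary bitmap.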
 
@@ -2010,7 +1997,7 @@ static int ethtool_phys_id(struct net_device *dev, void __user *useraddr)
         * removal of the device.
         */
        busy = true;
-       dev_hold_track(dev, &dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &dev_tracker, GFP_KERNEL);
        rtnl_unlock();
 
        if (rc == 0) {
@@ -2034,7 +2021,7 @@ static int ethtool_phys_id(struct net_device *dev, void __user *useraddr)
        }
 
        rtnl_lock();
-       dev_put_track(dev, &dev_tracker);
+       netdev_put(dev, &dev_tracker);
        busy = false;
 
        (void) ops->set_phys_id(dev, ETHTOOL_ID_INACTIVE);
index 5fe8f4a..e26079e 100644
@@ -402,7 +402,7 @@ static int ethnl_default_doit(struct sk_buff *skb, struct genl_info *info)
                ops->cleanup_data(reply_data);
 
        genlmsg_end(rskb, reply_payload);
-       dev_put_track(req_info->dev, &req_info->dev_tracker);
+       netdev_put(req_info->dev, &req_info->dev_tracker);
        kfree(reply_data);
        kfree(req_info);
        return genlmsg_reply(rskb, info);
@@ -414,7 +414,7 @@ err_cleanup:
        if (ops->cleanup_data)
                ops->cleanup_data(reply_data);
 err_dev:
-       dev_put_track(req_info->dev, &req_info->dev_tracker);
+       netdev_put(req_info->dev, &req_info->dev_tracker);
        kfree(reply_data);
        kfree(req_info);
        return ret;
@@ -550,7 +550,7 @@ static int ethnl_default_start(struct netlink_callback *cb)
                 * same parser as for non-dump (doit) requests is used, it
                 * would take reference to the device if it finds one
                 */
-               dev_put_track(req_info->dev, &req_info->dev_tracker);
+               netdev_put(req_info->dev, &req_info->dev_tracker);
                req_info->dev = NULL;
        }
        if (ret < 0)
index 7919ddb..c0d5876 100644
@@ -237,7 +237,7 @@ struct ethnl_req_info {
 
 static inline void ethnl_parse_header_dev_put(struct ethnl_req_info *req_info)
 {
-       dev_put_track(req_info->dev, &req_info->dev_tracker);
+       netdev_put(req_info->dev, &req_info->dev_tracker);
 }
 
 /**
index 93da9f7..30e0e89 100644
@@ -148,10 +148,10 @@ void inet_sock_destruct(struct sock *sk)
                return;
        }
 
-       WARN_ON(atomic_read(&sk->sk_rmem_alloc));
-       WARN_ON(refcount_read(&sk->sk_wmem_alloc));
-       WARN_ON(sk->sk_wmem_queued);
-       WARN_ON(sk_forward_alloc_get(sk));
+       WARN_ON_ONCE(atomic_read(&sk->sk_rmem_alloc));
+       WARN_ON_ONCE(refcount_read(&sk->sk_wmem_alloc));
+       WARN_ON_ONCE(sk->sk_wmem_queued);
+       WARN_ON_ONCE(sk_forward_alloc_get(sk));
 
        kfree(rcu_dereference_protected(inet->inet_opt, 1));
        dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
index b2366ad..92b778e 100644
@@ -244,7 +244,7 @@ void in_dev_finish_destroy(struct in_device *idev)
 #ifdef NET_REFCNT_DEBUG
        pr_debug("%s: %p=%s\n", __func__, idev, dev ? dev->name : "NIL");
 #endif
-       dev_put_track(dev, &idev->dev_tracker);
+       netdev_put(dev, &idev->dev_tracker);
        if (!idev->dead)
                pr_err("Freeing alive in_device %p\n", idev);
        else
@@ -272,7 +272,7 @@ static struct in_device *inetdev_init(struct net_device *dev)
        if (IPV4_DEVCONF(in_dev->cnf, FORWARDING))
                dev_disable_lro(dev);
        /* Reference in_dev->dev */
-       dev_hold_track(dev, &in_dev->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &in_dev->dev_tracker, GFP_KERNEL);
        /* Account for reference dev->ip_ptr (below) */
        refcount_set(&in_dev->refcnt, 1);
 
index a57ba23..a5439a8 100644
@@ -211,7 +211,7 @@ static void rt_fibinfo_free_cpus(struct rtable __rcu * __percpu *rtp)
 
 void fib_nh_common_release(struct fib_nh_common *nhc)
 {
-       dev_put_track(nhc->nhc_dev, &nhc->nhc_dev_tracker);
+       netdev_put(nhc->nhc_dev, &nhc->nhc_dev_tracker);
        lwtstate_put(nhc->nhc_lwtstate);
        rt_fibinfo_free_cpus(nhc->nhc_pcpu_rth_output);
        rt_fibinfo_free(&nhc->nhc_rth_input);
@@ -1057,7 +1057,8 @@ static int fib_check_nh_v6_gw(struct net *net, struct fib_nh *nh,
        err = ipv6_stub->fib6_nh_init(net, &fib6_nh, &cfg, GFP_KERNEL, extack);
        if (!err) {
                nh->fib_nh_dev = fib6_nh.fib_nh_dev;
-               dev_hold_track(nh->fib_nh_dev, &nh->fib_nh_dev_tracker, GFP_KERNEL);
+               netdev_hold(nh->fib_nh_dev, &nh->fib_nh_dev_tracker,
+                           GFP_KERNEL);
                nh->fib_nh_oif = nh->fib_nh_dev->ifindex;
                nh->fib_nh_scope = RT_SCOPE_LINK;
 
@@ -1141,7 +1142,7 @@ static int fib_check_nh_v4_gw(struct net *net, struct fib_nh *nh, u32 table,
                if (!netif_carrier_ok(dev))
                        nh->fib_nh_flags |= RTNH_F_LINKDOWN;
                nh->fib_nh_dev = dev;
-               dev_hold_track(dev, &nh->fib_nh_dev_tracker, GFP_ATOMIC);
+               netdev_hold(dev, &nh->fib_nh_dev_tracker, GFP_ATOMIC);
                nh->fib_nh_scope = RT_SCOPE_LINK;
                return 0;
        }
@@ -1195,7 +1196,7 @@ static int fib_check_nh_v4_gw(struct net *net, struct fib_nh *nh, u32 table,
                               "No egress device for nexthop gateway");
                goto out;
        }
-       dev_hold_track(dev, &nh->fib_nh_dev_tracker, GFP_ATOMIC);
+       netdev_hold(dev, &nh->fib_nh_dev_tracker, GFP_ATOMIC);
        if (!netif_carrier_ok(dev))
                nh->fib_nh_flags |= RTNH_F_LINKDOWN;
        err = (dev->flags & IFF_UP) ? 0 : -ENETDOWN;
@@ -1229,7 +1230,7 @@ static int fib_check_nh_nongw(struct net *net, struct fib_nh *nh,
        }
 
        nh->fib_nh_dev = in_dev->dev;
-       dev_hold_track(nh->fib_nh_dev, &nh->fib_nh_dev_tracker, GFP_ATOMIC);
+       netdev_hold(nh->fib_nh_dev, &nh->fib_nh_dev_tracker, GFP_ATOMIC);
        nh->fib_nh_scope = RT_SCOPE_HOST;
        if (!netif_carrier_ok(nh->fib_nh_dev))
                nh->fib_nh_flags |= RTNH_F_LINKDOWN;
index e8de5e6..545f91b 100644
@@ -1026,10 +1026,12 @@ void __init inet_hashinfo2_init(struct inet_hashinfo *h, const char *name,
        init_hashinfo_lhash2(h);
 
        /* this one is used for source ports of outgoing connections */
-       table_perturb = kmalloc_array(INET_TABLE_PERTURB_SIZE,
-                                     sizeof(*table_perturb), GFP_KERNEL);
-       if (!table_perturb)
-               panic("TCP: failed to alloc table_perturb");
+       table_perturb = alloc_large_system_hash("Table-perturb",
+                                               sizeof(*table_perturb),
+                                               INET_TABLE_PERTURB_SIZE,
+                                               0, 0, NULL, NULL,
+                                               INET_TABLE_PERTURB_SIZE,
+                                               INET_TABLE_PERTURB_SIZE);
 }
 
 int inet_hashinfo2_init_mod(struct inet_hashinfo *h)
index 7e474a8..3b9cd48 100644
@@ -629,21 +629,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
        }
 
        if (dev->header_ops) {
-               const int pull_len = tunnel->hlen + sizeof(struct iphdr);
-
                if (skb_cow_head(skb, 0))
                        goto free_skb;
 
                tnl_params = (const struct iphdr *)skb->data;
 
-               if (pull_len > skb_transport_offset(skb))
-                       goto free_skb;
-
                /* Pull skb since ip_tunnel_xmit() needs skb->data pointing
                 * to gre header.
                 */
-               skb_pull(skb, pull_len);
+               skb_pull(skb, tunnel->hlen + sizeof(struct iphdr));
                skb_reset_mac_header(skb);
+
+               if (skb->ip_summed == CHECKSUM_PARTIAL &&
+                   skb_checksum_start(skb) < skb->data)
+                       goto free_skb;
        } else {
                if (skb_cow_head(skb, dev->needed_headroom))
                        goto free_skb;
index 13e6329..8324e54 100644
@@ -691,7 +691,7 @@ static int vif_delete(struct mr_table *mrt, int vifi, int notify,
        if (v->flags & (VIFF_TUNNEL | VIFF_REGISTER) && !notify)
                unregister_netdevice_queue(dev, head);
 
-       dev_put_track(dev, &v->dev_tracker);
+       netdev_put(dev, &v->dev_tracker);
        return 0;
 }
 
index 356f535..2d16bcc 100644
@@ -1550,9 +1550,8 @@ void rt_flush_dev(struct net_device *dev)
                        if (rt->dst.dev != dev)
                                continue;
                        rt->dst.dev = blackhole_netdev;
-                       dev_replace_track(dev, blackhole_netdev,
-                                         &rt->dst.dev_tracker,
-                                         GFP_ATOMIC);
+                       netdev_ref_replace(dev, blackhole_netdev,
+                                          &rt->dst.dev_tracker, GFP_ATOMIC);
                        list_move(&rt->rt_uncached, &ul->quarantine);
                }
                spin_unlock_bh(&ul->lock);
@@ -2851,7 +2850,7 @@ struct dst_entry *ipv4_blackhole_route(struct net *net, struct dst_entry *dst_or
                new->output = dst_discard_out;
 
                new->dev = net->loopback_dev;
-               dev_hold_track(new->dev, &new->dev_tracker, GFP_ATOMIC);
+               netdev_hold(new->dev, &new->dev_tracker, GFP_ATOMIC);
 
                rt->rt_is_input = ort->rt_is_input;
                rt->rt_iif = ort->rt_iif;
index 9984d23..14ebb4e 100644
@@ -294,6 +294,8 @@ EXPORT_SYMBOL(sysctl_tcp_mem);
 
 atomic_long_t tcp_memory_allocated ____cacheline_aligned_in_smp;       /* Current allocated memory. */
 EXPORT_SYMBOL(tcp_memory_allocated);
+DEFINE_PER_CPU(int, tcp_memory_per_cpu_fw_alloc);
+EXPORT_PER_CPU_SYMBOL_GPL(tcp_memory_per_cpu_fw_alloc);
 
 #if IS_ENABLED(CONFIG_SMC)
 DEFINE_STATIC_KEY_FALSE(tcp_have_smc);
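The hunk above introduces `tcp_memory_per_cpu_fw_alloc`, a per-CPU cache that sits in front of the shared `tcp_memory_allocated` counter so the hot path rarely touches the contended atomic. A minimal single-CPU sketch of that batching idea — the `BATCH` size and all names below are illustrative, not the kernel's:

```c
#include <stdatomic.h>

/* Illustrative sketch of per-CPU forward-alloc batching: keep a small
 * local reserve and only touch the shared atomic counter when the
 * reserve runs dry, refilling it in one bulk update. */
#define BATCH 16

static atomic_long memory_allocated;   /* shared, contended counter */
static long cpu_reserve;               /* stand-in for this CPU's cache */

static void charge_pages(long pages)
{
    cpu_reserve -= pages;
    if (cpu_reserve < 0) {
        /* refill the local reserve back up to BATCH in one update */
        long take = BATCH - cpu_reserve;
        atomic_fetch_add(&memory_allocated, take);
        cpu_reserve += take;
    }
}
```

Only the refill path touches the shared counter, so most charges are plain per-CPU arithmetic; the trade-off is that the global counter can over-report by up to the sum of the per-CPU reserves.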
@@ -856,9 +858,6 @@ struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp,
 {
        struct sk_buff *skb;
 
-       if (unlikely(tcp_under_memory_pressure(sk)))
-               sk_mem_reclaim_partial(sk);
-
        skb = alloc_skb_fclone(size + MAX_TCP_HEADER, gfp);
        if (likely(skb)) {
                bool mem_scheduled;
@@ -2762,8 +2761,6 @@ void __tcp_close(struct sock *sk, long timeout)
                __kfree_skb(skb);
        }
 
-       sk_mem_reclaim(sk);
-
        /* If socket has been already reset (e.g. in tcp_reset()) - kill it. */
        if (sk->sk_state == TCP_CLOSE)
                goto adjudge_to_death;
@@ -2871,7 +2868,6 @@ adjudge_to_death:
                }
        }
        if (sk->sk_state != TCP_CLOSE) {
-               sk_mem_reclaim(sk);
                if (tcp_check_oom(sk, 0)) {
                        tcp_set_state(sk, TCP_CLOSE);
                        tcp_send_active_reset(sk, GFP_ATOMIC);
@@ -2949,7 +2945,6 @@ void tcp_write_queue_purge(struct sock *sk)
        }
        tcp_rtx_queue_purge(sk);
        INIT_LIST_HEAD(&tcp_sk(sk)->tsorted_sent_queue);
-       sk_mem_reclaim(sk);
        tcp_clear_all_retrans_hints(tcp_sk(sk));
        tcp_sk(sk)->packets_out = 0;
        inet_csk(sk)->icsk_backoff = 0;
@@ -4661,11 +4656,11 @@ void __init tcp_init(void)
        max_wshare = min(4UL*1024*1024, limit);
        max_rshare = min(6UL*1024*1024, limit);
 
-       init_net.ipv4.sysctl_tcp_wmem[0] = SK_MEM_QUANTUM;
+       init_net.ipv4.sysctl_tcp_wmem[0] = PAGE_SIZE;
        init_net.ipv4.sysctl_tcp_wmem[1] = 16*1024;
        init_net.ipv4.sysctl_tcp_wmem[2] = max(64*1024, max_wshare);
 
-       init_net.ipv4.sysctl_tcp_rmem[0] = SK_MEM_QUANTUM;
+       init_net.ipv4.sysctl_tcp_rmem[0] = PAGE_SIZE;
        init_net.ipv4.sysctl_tcp_rmem[1] = 131072;
        init_net.ipv4.sysctl_tcp_rmem[2] = max(131072, max_rshare);
 
index 2e2a9ec..fdc7beb 100644
@@ -805,7 +805,6 @@ static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
                         * restart window, so that we send ACKs quickly.
                         */
                        tcp_incr_quickack(sk, TCP_MAX_QUICKACKS);
-                       sk_mem_reclaim(sk);
                }
        }
        icsk->icsk_ack.lrcvtime = now;
@@ -4390,7 +4389,6 @@ void tcp_fin(struct sock *sk)
        skb_rbtree_purge(&tp->out_of_order_queue);
        if (tcp_is_sack(tp))
                tcp_sack_reset(&tp->rx_opt);
-       sk_mem_reclaim(sk);
 
        if (!sock_flag(sk, SOCK_DEAD)) {
                sk->sk_state_change(sk);
@@ -5287,7 +5285,7 @@ new_range:
                    before(TCP_SKB_CB(skb)->end_seq, start)) {
                        /* Do not attempt collapsing tiny skbs */
                        if (range_truesize != head->truesize ||
-                           end - start >= SKB_WITH_OVERHEAD(SK_MEM_QUANTUM)) {
+                           end - start >= SKB_WITH_OVERHEAD(PAGE_SIZE)) {
                                tcp_collapse(sk, NULL, &tp->out_of_order_queue,
                                             head, skb, start, end);
                        } else {
@@ -5336,7 +5334,6 @@ static bool tcp_prune_ofo_queue(struct sock *sk)
                tcp_drop_reason(sk, rb_to_skb(node),
                                SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE);
                if (!prev || goal <= 0) {
-                       sk_mem_reclaim(sk);
                        if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf &&
                            !tcp_under_memory_pressure(sk))
                                break;
@@ -5383,7 +5380,6 @@ static int tcp_prune_queue(struct sock *sk)
                             skb_peek(&sk->sk_receive_queue),
                             NULL,
                             tp->copied_seq, tp->rcv_nxt);
-       sk_mem_reclaim(sk);
 
        if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf)
                return 0;
index fe8f23b..fda811a 100644
@@ -3045,7 +3045,10 @@ struct proto tcp_prot = {
        .stream_memory_free     = tcp_stream_memory_free,
        .sockets_allocated      = &tcp_sockets_allocated,
        .orphan_count           = &tcp_orphan_count,
+
        .memory_allocated       = &tcp_memory_allocated,
+       .per_cpu_fw_alloc       = &tcp_memory_per_cpu_fw_alloc,
+
        .memory_pressure        = &tcp_memory_pressure,
        .sysctl_mem             = sysctl_tcp_mem,
        .sysctl_wmem_offset     = offsetof(struct net, ipv4.sysctl_tcp_wmem),
index 1c05443..8ab98e1 100644
@@ -3367,7 +3367,7 @@ void sk_forced_mem_schedule(struct sock *sk, int size)
        if (size <= sk->sk_forward_alloc)
                return;
        amt = sk_mem_pages(size);
-       sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
+       sk->sk_forward_alloc += amt << PAGE_SHIFT;
        sk_memory_allocated_add(sk, amt);
 
        if (mem_cgroup_sockets_enabled && sk->sk_memcg)
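With `SK_MEM_QUANTUM` gone, forward-alloc accounting works in whole pages: `sk_mem_pages()` rounds a byte count up to pages, and `amt << PAGE_SHIFT` converts pages back to bytes. A small sketch of the arithmetic, with `PAGE_SHIFT` hard-coded at the common 4 KiB value for illustration:

```c
/* Page-based size accounting: round a byte count up to whole pages,
 * then convert back with a shift (the patch's amt << PAGE_SHIFT).
 * PAGE_SHIFT is fixed at 12 (4 KiB pages) for this sketch only. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

static unsigned long sk_mem_pages(unsigned long size)
{
    /* round up: any partial page still costs a full page */
    return (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```

So a 6000-byte charge becomes 2 pages and is accounted as 8192 bytes of forward allocation.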
index 20cf4a9..2208755 100644
@@ -290,15 +290,13 @@ void tcp_delack_timer_handler(struct sock *sk)
 {
        struct inet_connection_sock *icsk = inet_csk(sk);
 
-       sk_mem_reclaim_partial(sk);
-
        if (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) ||
            !(icsk->icsk_ack.pending & ICSK_ACK_TIMER))
-               goto out;
+               return;
 
        if (time_after(icsk->icsk_ack.timeout, jiffies)) {
                sk_reset_timer(sk, &icsk->icsk_delack_timer, icsk->icsk_ack.timeout);
-               goto out;
+               return;
        }
        icsk->icsk_ack.pending &= ~ICSK_ACK_TIMER;
 
@@ -317,10 +315,6 @@ void tcp_delack_timer_handler(struct sock *sk)
                tcp_send_ack(sk);
                __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS);
        }
-
-out:
-       if (tcp_under_memory_pressure(sk))
-               sk_mem_reclaim(sk);
 }
 
 
@@ -600,11 +594,11 @@ void tcp_write_timer_handler(struct sock *sk)
 
        if (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) ||
            !icsk->icsk_pending)
-               goto out;
+               return;
 
        if (time_after(icsk->icsk_timeout, jiffies)) {
                sk_reset_timer(sk, &icsk->icsk_retransmit_timer, icsk->icsk_timeout);
-               goto out;
+               return;
        }
 
        tcp_mstamp_refresh(tcp_sk(sk));
@@ -626,9 +620,6 @@ void tcp_write_timer_handler(struct sock *sk)
                tcp_probe_timer(sk);
                break;
        }
-
-out:
-       sk_mem_reclaim(sk);
 }
 
 static void tcp_write_timer(struct timer_list *t)
@@ -743,8 +734,6 @@ static void tcp_keepalive_timer (struct timer_list *t)
                elapsed = keepalive_time_when(tp) - elapsed;
        }
 
-       sk_mem_reclaim(sk);
-
 resched:
        inet_csk_reset_keepalive_timer (sk, elapsed);
        goto out;
index aa9f2ec..6172b47 100644
@@ -125,6 +125,8 @@ EXPORT_SYMBOL(sysctl_udp_mem);
 
 atomic_long_t udp_memory_allocated ____cacheline_aligned_in_smp;
 EXPORT_SYMBOL(udp_memory_allocated);
+DEFINE_PER_CPU(int, udp_memory_per_cpu_fw_alloc);
+EXPORT_PER_CPU_SYMBOL_GPL(udp_memory_per_cpu_fw_alloc);
 
 #define MAX_UDP_PORTS 65536
 #define PORTS_PER_CHAIN (MAX_UDP_PORTS / UDP_HTABLE_SIZE_MIN)
@@ -1461,11 +1463,11 @@ static void udp_rmem_release(struct sock *sk, int size, int partial,
 
 
        sk->sk_forward_alloc += size;
-       amt = (sk->sk_forward_alloc - partial) & ~(SK_MEM_QUANTUM - 1);
+       amt = (sk->sk_forward_alloc - partial) & ~(PAGE_SIZE - 1);
        sk->sk_forward_alloc -= amt;
 
        if (amt)
-               __sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
+               __sk_mem_reduce_allocated(sk, amt >> PAGE_SHIFT);
 
        atomic_sub(size, &sk->sk_rmem_alloc);
 
@@ -1558,7 +1560,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
        spin_lock(&list->lock);
        if (size >= sk->sk_forward_alloc) {
                amt = sk_mem_pages(size);
-               delta = amt << SK_MEM_QUANTUM_SHIFT;
+               delta = amt << PAGE_SHIFT;
                if (!__sk_mem_raise_allocated(sk, delta, amt, SK_MEM_RECV)) {
                        err = -ENOBUFS;
                        spin_unlock(&list->lock);
@@ -2946,6 +2948,8 @@ struct proto udp_prot = {
        .psock_update_sk_prot   = udp_bpf_update_proto,
 #endif
        .memory_allocated       = &udp_memory_allocated,
+       .per_cpu_fw_alloc       = &udp_memory_per_cpu_fw_alloc,
+
        .sysctl_mem             = sysctl_udp_mem,
        .sysctl_wmem_offset     = offsetof(struct net, ipv4.sysctl_udp_wmem_min),
        .sysctl_rmem_offset     = offsetof(struct net, ipv4.sysctl_udp_rmem_min),
@@ -3263,8 +3267,8 @@ EXPORT_SYMBOL(udp_flow_hashrnd);
 
 static void __udp_sysctl_init(struct net *net)
 {
-       net->ipv4.sysctl_udp_rmem_min = SK_MEM_QUANTUM;
-       net->ipv4.sysctl_udp_wmem_min = SK_MEM_QUANTUM;
+       net->ipv4.sysctl_udp_rmem_min = PAGE_SIZE;
+       net->ipv4.sysctl_udp_wmem_min = PAGE_SIZE;
 
 #ifdef CONFIG_NET_L3_MASTER_DEV
        net->ipv4.sysctl_udp_l3mdev_accept = 0;
index cd1cd68..6e08a76 100644
@@ -51,7 +51,10 @@ struct proto         udplite_prot = {
        .unhash            = udp_lib_unhash,
        .rehash            = udp_v4_rehash,
        .get_port          = udp_v4_get_port,
+
        .memory_allocated  = &udp_memory_allocated,
+       .per_cpu_fw_alloc  = &udp_memory_per_cpu_fw_alloc,
+
        .sysctl_mem        = sysctl_udp_mem,
        .obj_size          = sizeof(struct udp_sock),
        .h.udp_table       = &udplite_table,
index 6fde0b1..3d0dfa6 100644
@@ -75,7 +75,7 @@ static int xfrm4_fill_dst(struct xfrm_dst *xdst, struct net_device *dev,
        xdst->u.rt.rt_iif = fl4->flowi4_iif;
 
        xdst->u.dst.dev = dev;
-       dev_hold_track(dev, &xdst->u.dst.dev_tracker, GFP_ATOMIC);
+       netdev_hold(dev, &xdst->u.dst.dev_tracker, GFP_ATOMIC);
 
        /* Sheit... I remember I did this right. Apparently,
         * it was magically lost, so this code needs audit */
index 2fe5860..b146ce8 100644
@@ -304,4 +304,3 @@ void __init xfrm4_protocol_init(void)
 {
        xfrm_input_register_afinfo(&xfrm4_input_afinfo);
 }
-EXPORT_SYMBOL(xfrm4_protocol_init);
index 1b19325..3497ad1 100644
@@ -398,13 +398,13 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
        if (ndev->cnf.forwarding)
                dev_disable_lro(dev);
        /* We refer to the device */
-       dev_hold_track(dev, &ndev->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &ndev->dev_tracker, GFP_KERNEL);
 
        if (snmp6_alloc_dev(ndev) < 0) {
                netdev_dbg(dev, "%s: cannot allocate memory for statistics\n",
                           __func__);
                neigh_parms_release(&nd_tbl, ndev->nd_parms);
-               dev_put_track(dev, &ndev->dev_tracker);
+               netdev_put(dev, &ndev->dev_tracker);
                kfree(ndev);
                return ERR_PTR(err);
        }
index 881d147..507a835 100644
@@ -263,7 +263,7 @@ void in6_dev_finish_destroy(struct inet6_dev *idev)
 #ifdef NET_REFCNT_DEBUG
        pr_debug("%s: %s\n", __func__, dev ? dev->name : "NIL");
 #endif
-       dev_put_track(dev, &idev->dev_tracker);
+       netdev_put(dev, &idev->dev_tracker);
        if (!idev->dead) {
                pr_warn("Freeing alive inet6 device %p\n", idev);
                return;
index 4e37f7c..3e22cbe 100644
@@ -398,7 +398,7 @@ static void ip6erspan_tunnel_uninit(struct net_device *dev)
        ip6erspan_tunnel_unlink_md(ign, t);
        ip6gre_tunnel_unlink(ign, t);
        dst_cache_reset(&t->dst_cache);
-       dev_put_track(dev, &t->dev_tracker);
+       netdev_put(dev, &t->dev_tracker);
 }
 
 static void ip6gre_tunnel_uninit(struct net_device *dev)
@@ -411,7 +411,7 @@ static void ip6gre_tunnel_uninit(struct net_device *dev)
        if (ign->fb_tunnel_dev == dev)
                WRITE_ONCE(ign->fb_tunnel_dev, NULL);
        dst_cache_reset(&t->dst_cache);
-       dev_put_track(dev, &t->dev_tracker);
+       netdev_put(dev, &t->dev_tracker);
 }
 
 
@@ -1495,7 +1495,7 @@ static int ip6gre_tunnel_init_common(struct net_device *dev)
        }
        ip6gre_tnl_init_features(dev);
 
-       dev_hold_track(dev, &tunnel->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &tunnel->dev_tracker, GFP_KERNEL);
        return 0;
 
 cleanup_dst_cache_init:
@@ -1887,7 +1887,7 @@ static int ip6erspan_tap_init(struct net_device *dev)
        dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
        ip6erspan_tnl_link_config(tunnel, 1);
 
-       dev_hold_track(dev, &tunnel->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &tunnel->dev_tracker, GFP_KERNEL);
        return 0;
 
 cleanup_dst_cache_init:
index 4081b12..77e3f59 100644
@@ -1450,7 +1450,7 @@ static int __ip6_append_data(struct sock *sk,
                             struct page_frag *pfrag,
                             int getfrag(void *from, char *to, int offset,
                                         int len, int odd, struct sk_buff *skb),
-                            void *from, int length, int transhdrlen,
+                            void *from, size_t length, int transhdrlen,
                             unsigned int flags, struct ipcm6_cookie *ipc6)
 {
        struct sk_buff *skb, *skb_prev = NULL;
@@ -1798,7 +1798,7 @@ error:
 int ip6_append_data(struct sock *sk,
                    int getfrag(void *from, char *to, int offset, int len,
                                int odd, struct sk_buff *skb),
-                   void *from, int length, int transhdrlen,
+                   void *from, size_t length, int transhdrlen,
                    struct ipcm6_cookie *ipc6, struct flowi6 *fl6,
                    struct rt6_info *rt, unsigned int flags)
 {
@@ -1995,7 +1995,7 @@ EXPORT_SYMBOL_GPL(ip6_flush_pending_frames);
 struct sk_buff *ip6_make_skb(struct sock *sk,
                             int getfrag(void *from, char *to, int offset,
                                         int len, int odd, struct sk_buff *skb),
-                            void *from, int length, int transhdrlen,
+                            void *from, size_t length, int transhdrlen,
                             struct ipcm6_cookie *ipc6, struct rt6_info *rt,
                             unsigned int flags, struct inet_cork_full *cork)
 {
index 19325b7..c7279f2 100644
@@ -381,7 +381,7 @@ ip6_tnl_dev_uninit(struct net_device *dev)
        else
                ip6_tnl_unlink(ip6n, t);
        dst_cache_reset(&t->dst_cache);
-       dev_put_track(dev, &t->dev_tracker);
+       netdev_put(dev, &t->dev_tracker);
 }
 
 /**
@@ -796,7 +796,6 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
                                                struct sk_buff *skb),
                         bool log_ecn_err)
 {
-       struct pcpu_sw_netstats *tstats;
        const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
        int err;
 
@@ -856,11 +855,7 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
                }
        }
 
-       tstats = this_cpu_ptr(tunnel->dev->tstats);
-       u64_stats_update_begin(&tstats->syncp);
-       tstats->rx_packets++;
-       tstats->rx_bytes += skb->len;
-       u64_stats_update_end(&tstats->syncp);
+       dev_sw_netstats_rx_add(tunnel->dev, skb->len);
 
        skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(tunnel->dev)));
 
@@ -1889,7 +1884,7 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
        dev->min_mtu = ETH_MIN_MTU;
        dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len;
 
-       dev_hold_track(dev, &t->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &t->dev_tracker, GFP_KERNEL);
        return 0;
 
 destroy_dst:
index 3a434d7..8fe59a7 100644
@@ -293,7 +293,7 @@ static void vti6_dev_uninit(struct net_device *dev)
                RCU_INIT_POINTER(ip6n->tnls_wc[0], NULL);
        else
                vti6_tnl_unlink(ip6n, t);
-       dev_put_track(dev, &t->dev_tracker);
+       netdev_put(dev, &t->dev_tracker);
 }
 
 static int vti6_input_proto(struct sk_buff *skb, int nexthdr, __be32 spi,
@@ -936,7 +936,7 @@ static inline int vti6_dev_init_gen(struct net_device *dev)
        dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
        if (!dev->tstats)
                return -ENOMEM;
-       dev_hold_track(dev, &t->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &t->dev_tracker, GFP_KERNEL);
        return 0;
 }
 
index 4e74bc6..d4aad41 100644
@@ -741,7 +741,7 @@ static int mif6_delete(struct mr_table *mrt, int vifi, int notify,
        if ((v->flags & MIFF_REGISTER) && !notify)
                unregister_netdevice_queue(dev, head);
 
-       dev_put_track(dev, &v->dev_tracker);
+       netdev_put(dev, &v->dev_tracker);
        return 0;
 }
 
index d25dc83..0be01a4 100644
@@ -182,9 +182,9 @@ static void rt6_uncached_list_flush_dev(struct net_device *dev)
 
                        if (rt_dev == dev) {
                                rt->dst.dev = blackhole_netdev;
-                               dev_replace_track(rt_dev, blackhole_netdev,
-                                                 &rt->dst.dev_tracker,
-                                                 GFP_ATOMIC);
+                               netdev_ref_replace(rt_dev, blackhole_netdev,
+                                                  &rt->dst.dev_tracker,
+                                                  GFP_ATOMIC);
                                handled = true;
                        }
                        if (handled)
@@ -607,7 +607,7 @@ static void rt6_probe_deferred(struct work_struct *w)
 
        addrconf_addr_solict_mult(&work->target, &mcaddr);
        ndisc_send_ns(work->dev, &work->target, &mcaddr, NULL, 0);
-       dev_put_track(work->dev, &work->dev_tracker);
+       netdev_put(work->dev, &work->dev_tracker);
        kfree(work);
 }
 
@@ -661,7 +661,7 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
        } else {
                INIT_WORK(&work->work, rt6_probe_deferred);
                work->target = *nh_gw;
-               dev_hold_track(dev, &work->dev_tracker, GFP_ATOMIC);
+               netdev_hold(dev, &work->dev_tracker, GFP_ATOMIC);
                work->dev = dev;
                schedule_work(&work->work);
        }
index 29bc4e7..6de0118 100644
@@ -399,7 +399,6 @@ int __init seg6_hmac_init(void)
 {
        return seg6_hmac_init_algo();
 }
-EXPORT_SYMBOL(seg6_hmac_init);
 
 int __net_init seg6_hmac_net_init(struct net *net)
 {
index 9fbe243..98a3428 100644
@@ -218,6 +218,7 @@ seg6_lookup_any_nexthop(struct sk_buff *skb, struct in6_addr *nhaddr,
        struct flowi6 fl6;
        int dev_flags = 0;
 
+       memset(&fl6, 0, sizeof(fl6));
        fl6.flowi6_iif = skb->dev->ifindex;
        fl6.daddr = nhaddr ? *nhaddr : hdr->daddr;
        fl6.saddr = hdr->saddr;
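The `memset(&fl6, 0, sizeof(fl6))` fix matters because `fl6` lives on the stack: fields the function never assigns (e.g. `flowi6_mark`) would otherwise feed stack garbage into the route lookup. The pattern, shown with a hypothetical flow-key struct rather than the real `struct flowi6`:

```c
#include <string.h>

/* Hypothetical stand-in for a flow key: the function fills only some
 * fields, so it must zero the whole struct first or the rest is
 * indeterminate stack data. */
struct flow_key {
    int iif;
    int oif;
    unsigned int mark;
};

static struct flow_key make_key(int iif)
{
    struct flow_key k;

    memset(&k, 0, sizeof(k));  /* the seg6 fix: zero before filling */
    k.iif = iif;               /* only this field is set explicitly */
    return k;
}
```

After the memset, every field the caller inspects has a defined value, matching what `seg6_lookup_any_nexthop()` now guarantees for `fl6`.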
index c0b138c..fab89fd 100644
@@ -521,7 +521,7 @@ static void ipip6_tunnel_uninit(struct net_device *dev)
                ipip6_tunnel_del_prl(tunnel, NULL);
        }
        dst_cache_reset(&tunnel->dst_cache);
-       dev_put_track(dev, &tunnel->dev_tracker);
+       netdev_put(dev, &tunnel->dev_tracker);
 }
 
 static int ipip6_err(struct sk_buff *skb, u32 info)
@@ -686,8 +686,6 @@ static int ipip6_rcv(struct sk_buff *skb)
        tunnel = ipip6_tunnel_lookup(dev_net(skb->dev), skb->dev,
                                     iph->saddr, iph->daddr, sifindex);
        if (tunnel) {
-               struct pcpu_sw_netstats *tstats;
-
                if (tunnel->parms.iph.protocol != IPPROTO_IPV6 &&
                    tunnel->parms.iph.protocol != 0)
                        goto out;
@@ -724,11 +722,7 @@ static int ipip6_rcv(struct sk_buff *skb)
                        }
                }
 
-               tstats = this_cpu_ptr(tunnel->dev->tstats);
-               u64_stats_update_begin(&tstats->syncp);
-               tstats->rx_packets++;
-               tstats->rx_bytes += skb->len;
-               u64_stats_update_end(&tstats->syncp);
+               dev_sw_netstats_rx_add(tunnel->dev, skb->len);
 
                netif_rx(skb);
 
@@ -1463,7 +1457,7 @@ static int ipip6_tunnel_init(struct net_device *dev)
                dev->tstats = NULL;
                return err;
        }
-       dev_hold_track(dev, &tunnel->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &tunnel->dev_tracker, GFP_KERNEL);
        return 0;
 }
 
index f37dd4a..c72448b 100644
@@ -2159,7 +2159,10 @@ struct proto tcpv6_prot = {
        .leave_memory_pressure  = tcp_leave_memory_pressure,
        .stream_memory_free     = tcp_stream_memory_free,
        .sockets_allocated      = &tcp_sockets_allocated,
+
        .memory_allocated       = &tcp_memory_allocated,
+       .per_cpu_fw_alloc       = &tcp_memory_per_cpu_fw_alloc,
+
        .memory_pressure        = &tcp_memory_pressure,
        .orphan_count           = &tcp_orphan_count,
        .sysctl_mem             = sysctl_tcp_mem,
index 55afd7f..be074f0 100644
@@ -1740,7 +1740,10 @@ struct proto udpv6_prot = {
 #ifdef CONFIG_BPF_SYSCALL
        .psock_update_sk_prot   = udp_bpf_update_proto,
 #endif
+
        .memory_allocated       = &udp_memory_allocated,
+       .per_cpu_fw_alloc       = &udp_memory_per_cpu_fw_alloc,
+
        .sysctl_mem             = sysctl_udp_mem,
        .sysctl_wmem_offset     = offsetof(struct net, ipv4.sysctl_udp_wmem_min),
        .sysctl_rmem_offset     = offsetof(struct net, ipv4.sysctl_udp_rmem_min),
index fbb700d..b707258 100644
@@ -48,7 +48,10 @@ struct proto udplitev6_prot = {
        .unhash            = udp_lib_unhash,
        .rehash            = udp_v6_rehash,
        .get_port          = udp_v6_get_port,
+
        .memory_allocated  = &udp_memory_allocated,
+       .per_cpu_fw_alloc  = &udp_memory_per_cpu_fw_alloc,
+
        .sysctl_mem        = sysctl_udp_mem,
        .obj_size          = sizeof(struct udp6_sock),
        .h.udp_table       = &udplite_table,
index e64e427..4a4b0e4 100644
@@ -73,11 +73,11 @@ static int xfrm6_fill_dst(struct xfrm_dst *xdst, struct net_device *dev,
        struct rt6_info *rt = (struct rt6_info *)xdst->route;
 
        xdst->u.dst.dev = dev;
-       dev_hold_track(dev, &xdst->u.dst.dev_tracker, GFP_ATOMIC);
+       netdev_hold(dev, &xdst->u.dst.dev_tracker, GFP_ATOMIC);
 
        xdst->u.rt6.rt6i_idev = in6_dev_get(dev);
        if (!xdst->u.rt6.rt6i_idev) {
-               dev_put_track(dev, &xdst->u.dst.dev_tracker);
+               netdev_put(dev, &xdst->u.dst.dev_tracker);
                return -ENODEV;
        }
 
index a0385dd..498a0c3 100644
@@ -278,8 +278,6 @@ static void iucv_sock_destruct(struct sock *sk)
        skb_queue_purge(&sk->sk_receive_queue);
        skb_queue_purge(&sk->sk_error_queue);
 
-       sk_mem_reclaim(sk);
-
        if (!sock_flag(sk, SOCK_DEAD)) {
                pr_err("Attempt to release alive iucv socket %p\n", sk);
                return;
index c6ff8bf..9dbd801 100644
@@ -504,14 +504,15 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        struct ipcm6_cookie ipc6;
        int addr_len = msg->msg_namelen;
        int transhdrlen = 4; /* zero session-id */
-       int ulen = len + transhdrlen;
+       int ulen;
        int err;
 
        /* Rough check on arithmetic overflow,
         * better check is made in ip6_append_data().
         */
-       if (len > INT_MAX)
+       if (len > INT_MAX - transhdrlen)
                return -EMSGSIZE;
+       ulen = len + transhdrlen;
 
        /* Mirror BSD error message compatibility */
        if (msg->msg_flags & MSG_OOB)
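The reordered l2tp check avoids computing `len + transhdrlen` before it has been validated: written as `len > INT_MAX - transhdrlen`, the comparison itself cannot overflow, and `ulen` is only assigned afterwards. A standalone sketch of the same guard (hypothetical helper name):

```c
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

/* Overflow-safe bound check: rearrange "len + transhdrlen > INT_MAX"
 * (whose addition could itself overflow) into a subtraction on the
 * constant side. Assumes transhdrlen is small and non-negative, as in
 * l2tp_ip6_sendmsg() where it is 4. */
static bool add_would_overflow(size_t len, int transhdrlen)
{
    return len > (size_t)(INT_MAX - transhdrlen);
}
```

Only when this returns false is it safe to form `ulen = len + transhdrlen`, which is exactly the ordering the patch enforces.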
index 7f555d2..da7fe94 100644
@@ -224,7 +224,7 @@ static int llc_ui_release(struct socket *sock)
        } else {
                release_sock(sk);
        }
-       dev_put_track(llc->dev, &llc->dev_tracker);
+       netdev_put(llc->dev, &llc->dev_tracker);
        sock_put(sk);
        llc_sk_free(sk);
 out:
index f7896f2..881efbf 100644
@@ -5,7 +5,7 @@
  * Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2015  Intel Mobile Communications GmbH
  * Copyright (C) 2015-2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
  */
 
 #include <linux/ieee80211.h>
@@ -438,7 +438,6 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
        struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
        struct ieee80211_local *local = sdata->local;
        struct sta_info *sta = NULL;
-       const struct ieee80211_cipher_scheme *cs = NULL;
        struct ieee80211_key *key;
        int err;
 
@@ -456,23 +455,12 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
                if (WARN_ON_ONCE(fips_enabled))
                        return -EINVAL;
                break;
-       case WLAN_CIPHER_SUITE_CCMP:
-       case WLAN_CIPHER_SUITE_CCMP_256:
-       case WLAN_CIPHER_SUITE_AES_CMAC:
-       case WLAN_CIPHER_SUITE_BIP_CMAC_256:
-       case WLAN_CIPHER_SUITE_BIP_GMAC_128:
-       case WLAN_CIPHER_SUITE_BIP_GMAC_256:
-       case WLAN_CIPHER_SUITE_GCMP:
-       case WLAN_CIPHER_SUITE_GCMP_256:
-               break;
        default:
-               cs = ieee80211_cs_get(local, params->cipher, sdata->vif.type);
                break;
        }
 
        key = ieee80211_key_alloc(params->cipher, key_idx, params->key_len,
-                                 params->key, params->seq_len, params->seq,
-                                 cs);
+                                 params->key, params->seq_len, params->seq);
        if (IS_ERR(key))
                return PTR_ERR(key);
 
@@ -537,9 +525,6 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
                break;
        }
 
-       if (sta)
-               sta->cipher_scheme = cs;
-
        err = ieee80211_key_link(key, sdata, sta);
 
  out_unlock:
@@ -548,33 +533,53 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
        return err;
 }
 
+static struct ieee80211_key *
+ieee80211_lookup_key(struct ieee80211_sub_if_data *sdata,
+                    u8 key_idx, bool pairwise, const u8 *mac_addr)
+{
+       struct ieee80211_local *local = sdata->local;
+       struct sta_info *sta;
+
+       if (mac_addr) {
+               sta = sta_info_get_bss(sdata, mac_addr);
+               if (!sta)
+                       return NULL;
+
+               if (pairwise && key_idx < NUM_DEFAULT_KEYS)
+                       return rcu_dereference_check_key_mtx(local,
+                                                            sta->ptk[key_idx]);
+
+               if (!pairwise &&
+                   key_idx < NUM_DEFAULT_KEYS +
+                             NUM_DEFAULT_MGMT_KEYS +
+                             NUM_DEFAULT_BEACON_KEYS)
+                       return rcu_dereference_check_key_mtx(local,
+                                                            sta->deflink.gtk[key_idx]);
+
+               return NULL;
+       }
+
+       if (key_idx < NUM_DEFAULT_KEYS +
+                     NUM_DEFAULT_MGMT_KEYS +
+                     NUM_DEFAULT_BEACON_KEYS)
+               return rcu_dereference_check_key_mtx(local,
+                                                    sdata->keys[key_idx]);
+
+       return NULL;
+}
+
 static int ieee80211_del_key(struct wiphy *wiphy, struct net_device *dev,
                             u8 key_idx, bool pairwise, const u8 *mac_addr)
 {
        struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
        struct ieee80211_local *local = sdata->local;
-       struct sta_info *sta;
-       struct ieee80211_key *key = NULL;
+       struct ieee80211_key *key;
        int ret;
 
        mutex_lock(&local->sta_mtx);
        mutex_lock(&local->key_mtx);
 
-       if (mac_addr) {
-               ret = -ENOENT;
-
-               sta = sta_info_get_bss(sdata, mac_addr);
-               if (!sta)
-                       goto out_unlock;
-
-               if (pairwise)
-                       key = key_mtx_dereference(local, sta->ptk[key_idx]);
-               else
-                       key = key_mtx_dereference(local,
-                                                 sta->deflink.gtk[key_idx]);
-       } else
-               key = key_mtx_dereference(local, sdata->keys[key_idx]);
-
+       key = ieee80211_lookup_key(sdata, key_idx, pairwise, mac_addr);
        if (!key) {
                ret = -ENOENT;
                goto out_unlock;
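`ieee80211_lookup_key()` folds the index bounds checks that `ieee80211_del_key()` and `ieee80211_get_key()` previously open-coded (and that the delete path skipped) into a single helper. The shape of that refactor can be sketched with a bounds-checked table lookup — the table sizes and names below are illustrative, not the mac80211 constants:

```c
#include <stddef.h>

/* Illustrative sizes only: pairwise keys live in the first
 * NUM_DEFAULT_KEYS slots, group keys may use the full table. */
#define NUM_DEFAULT_KEYS 4
#define NUM_EXTRA_KEYS   4

static const char *keys[NUM_DEFAULT_KEYS + NUM_EXTRA_KEYS] = {
    "k0", "k1", "k2", "k3", "k4", "k5", "k6", "k7",
};

static const char *lookup_key(size_t idx, int pairwise)
{
    /* one central bounds check instead of per-caller copies */
    size_t limit = pairwise ? NUM_DEFAULT_KEYS
                            : NUM_DEFAULT_KEYS + NUM_EXTRA_KEYS;

    return idx < limit ? keys[idx] : NULL;
}
```

Centralizing the check means an out-of-range `key_idx` now fails uniformly with `NULL` (`-ENOENT` in the callers) rather than depending on which caller validated it.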
@@ -597,10 +602,9 @@ static int ieee80211_get_key(struct wiphy *wiphy, struct net_device *dev,
                                              struct key_params *params))
 {
        struct ieee80211_sub_if_data *sdata;
-       struct sta_info *sta = NULL;
        u8 seq[6] = {0};
        struct key_params params;
-       struct ieee80211_key *key = NULL;
+       struct ieee80211_key *key;
        u64 pn64;
        u32 iv32;
        u16 iv16;
@@ -611,20 +615,7 @@ static int ieee80211_get_key(struct wiphy *wiphy, struct net_device *dev,
 
        rcu_read_lock();
 
-       if (mac_addr) {
-               sta = sta_info_get_bss(sdata, mac_addr);
-               if (!sta)
-                       goto out;
-
-               if (pairwise && key_idx < NUM_DEFAULT_KEYS)
-                       key = rcu_dereference(sta->ptk[key_idx]);
-               else if (!pairwise &&
-                        key_idx < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS +
-                        NUM_DEFAULT_BEACON_KEYS)
-                       key = rcu_dereference(sta->deflink.gtk[key_idx]);
-       } else
-               key = rcu_dereference(sdata->keys[key_idx]);
-
+       key = ieee80211_lookup_key(sdata, key_idx, pairwise, mac_addr);
        if (!key)
                goto out;
 
@@ -1207,9 +1198,6 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
                                params->crypto.control_port_over_nl80211;
        sdata->control_port_no_preauth =
                                params->crypto.control_port_no_preauth;
-       sdata->encrypt_headroom = ieee80211_cs_headroom(sdata->local,
-                                                       &params->crypto,
-                                                       sdata->vif.type);
 
        list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list) {
                vlan->control_port_protocol =
@@ -1220,10 +1208,6 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
                        params->crypto.control_port_over_nl80211;
                vlan->control_port_no_preauth =
                        params->crypto.control_port_no_preauth;
-               vlan->encrypt_headroom =
-                       ieee80211_cs_headroom(sdata->local,
-                                             &params->crypto,
-                                             vlan->vif.type);
        }
 
        sdata->vif.bss_conf.dtim_period = params->dtim_period;
index 86ef0a4..1cf3315 100644
@@ -5,7 +5,7 @@
  * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
  * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2015  Intel Mobile Communications GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
  */
 
 #ifndef IEEE80211_I_H
@@ -944,7 +944,6 @@ struct ieee80211_sub_if_data {
        bool control_port_no_encrypt;
        bool control_port_no_preauth;
        bool control_port_over_nl80211;
-       int encrypt_headroom;
 
        atomic_t num_tx_queued;
        struct ieee80211_tx_queue_params tx_conf[IEEE80211_NUM_ACS];
@@ -2483,14 +2482,6 @@ void ieee80211_dfs_radar_detected_work(struct work_struct *work);
 int ieee80211_send_action_csa(struct ieee80211_sub_if_data *sdata,
                              struct cfg80211_csa_settings *csa_settings);
 
-bool ieee80211_cs_valid(const struct ieee80211_cipher_scheme *cs);
-bool ieee80211_cs_list_valid(const struct ieee80211_cipher_scheme *cs, int n);
-const struct ieee80211_cipher_scheme *
-ieee80211_cs_get(struct ieee80211_local *local, u32 cipher,
-                enum nl80211_iftype iftype);
-int ieee80211_cs_headroom(struct ieee80211_local *local,
-                         struct cfg80211_crypto_settings *crypto,
-                         enum nl80211_iftype iftype);
 void ieee80211_recalc_dtim(struct ieee80211_local *local,
                           struct ieee80211_sub_if_data *sdata);
 int ieee80211_check_combinations(struct ieee80211_sub_if_data *sdata,
index 4153147..fb8d102 100644
@@ -8,7 +8,7 @@
  * Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright (c) 2016        Intel Deutschland GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
  */
 #include <linux/slab.h>
 #include <linux/kernel.h>
@@ -1036,8 +1036,6 @@ int ieee80211_add_virtual_monitor(struct ieee80211_local *local)
                 wiphy_name(local->hw.wiphy));
        sdata->wdev.iftype = NL80211_IFTYPE_MONITOR;
 
-       sdata->encrypt_headroom = IEEE80211_ENCRYPT_HEADROOM;
-
        ieee80211_set_default_queues(sdata);
 
        ret = drv_add_interface(local, sdata);
@@ -1644,7 +1642,6 @@ static void ieee80211_setup_sdata(struct ieee80211_sub_if_data *sdata,
        sdata->control_port_no_encrypt = false;
        sdata->control_port_over_nl80211 = false;
        sdata->control_port_no_preauth = false;
-       sdata->encrypt_headroom = IEEE80211_ENCRYPT_HEADROOM;
        sdata->vif.bss_conf.idle = true;
        sdata->vif.bss_conf.txpower = INT_MIN; /* unset */
 
@@ -2116,8 +2113,6 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
        sdata->ap_power_level = IEEE80211_UNSET_POWER_LEVEL;
        sdata->user_power_level = local->user_power_level;
 
-       sdata->encrypt_headroom = IEEE80211_ENCRYPT_HEADROOM;
-
        /* setup type-dependent data */
        ieee80211_setup_sdata(sdata, type);
 
index 0fcf8ae..c3476de 100644
@@ -6,7 +6,7 @@
  * Copyright 2007-2008 Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright 2015-2017 Intel Deutschland GmbH
- * Copyright 2018-2020  Intel Corporation
+ * Copyright 2018-2020, 2022  Intel Corporation
  */
 
 #include <linux/if_ether.h>
@@ -531,8 +531,7 @@ static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
 struct ieee80211_key *
 ieee80211_key_alloc(u32 cipher, int idx, size_t key_len,
                    const u8 *key_data,
-                   size_t seq_len, const u8 *seq,
-                   const struct ieee80211_cipher_scheme *cs)
+                   size_t seq_len, const u8 *seq)
 {
        struct ieee80211_key *key;
        int i, j, err;
@@ -675,21 +674,6 @@ ieee80211_key_alloc(u32 cipher, int idx, size_t key_len,
                        return ERR_PTR(err);
                }
                break;
-       default:
-               if (cs) {
-                       if (seq_len && seq_len != cs->pn_len) {
-                               kfree(key);
-                               return ERR_PTR(-EINVAL);
-                       }
-
-                       key->conf.iv_len = cs->hdr_len;
-                       key->conf.icv_len = cs->mic_len;
-                       for (i = 0; i < IEEE80211_NUM_TIDS + 1; i++)
-                               for (j = 0; j < seq_len; j++)
-                                       key->u.gen.rx_pn[i][j] =
-                                                       seq[seq_len - j - 1];
-                       key->flags |= KEY_FLAG_CIPHER_SCHEME;
-               }
        }
        memcpy(key->conf.key, key_data, key_len);
        INIT_LIST_HEAD(&key->list);
@@ -1294,7 +1278,7 @@ ieee80211_gtk_rekey_add(struct ieee80211_vif *vif,
 
        key = ieee80211_key_alloc(keyconf->cipher, keyconf->keyidx,
                                  keyconf->keylen, keyconf->key,
-                                 0, NULL, NULL);
+                                 0, NULL);
        if (IS_ERR(key))
                return ERR_CAST(key);
 
index 1e326c8..e994dce 100644
@@ -2,7 +2,7 @@
 /*
  * Copyright 2002-2004, Instant802 Networks, Inc.
  * Copyright 2005, Devicescape Software, Inc.
- * Copyright (C) 2019 Intel Corporation
+ * Copyright (C) 2019, 2022 Intel Corporation
  */
 
 #ifndef IEEE80211_KEY_H
@@ -30,12 +30,10 @@ struct sta_info;
  * @KEY_FLAG_UPLOADED_TO_HARDWARE: Indicates that this key is present
  *     in the hardware for TX crypto hardware acceleration.
  * @KEY_FLAG_TAINTED: Key is tainted and packets should be dropped.
- * @KEY_FLAG_CIPHER_SCHEME: This key is for a hardware cipher scheme
  */
 enum ieee80211_internal_key_flags {
        KEY_FLAG_UPLOADED_TO_HARDWARE   = BIT(0),
        KEY_FLAG_TAINTED                = BIT(1),
-       KEY_FLAG_CIPHER_SCHEME          = BIT(2),
 };
 
 enum ieee80211_internal_tkip_state {
@@ -140,8 +138,7 @@ struct ieee80211_key {
 struct ieee80211_key *
 ieee80211_key_alloc(u32 cipher, int idx, size_t key_len,
                    const u8 *key_data,
-                   size_t seq_len, const u8 *seq,
-                   const struct ieee80211_cipher_scheme *cs);
+                   size_t seq_len, const u8 *seq);
 /*
  * Insert a key into data structures (sdata, sta if necessary)
  * to make it used, free old key. On failure, also free the new key.
@@ -166,6 +163,8 @@ void ieee80211_reenable_keys(struct ieee80211_sub_if_data *sdata);
 
 #define key_mtx_dereference(local, ref) \
        rcu_dereference_protected(ref, lockdep_is_held(&((local)->key_mtx)))
+#define rcu_dereference_check_key_mtx(local, ref) \
+       rcu_dereference_check(ref, lockdep_is_held(&((local)->key_mtx)))
 
 void ieee80211_delayed_tailroom_dec(struct work_struct *wk);
 
index 5a385d4..4f3e93c 100644
@@ -5,7 +5,7 @@
  * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright (C) 2017     Intel Deutschland GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
  */
 
 #include <net/mac80211.h>
@@ -778,7 +778,7 @@ static int ieee80211_init_cipher_suites(struct ieee80211_local *local)
 {
        bool have_wep = !fips_enabled; /* FIPS does not permit the use of RC4 */
        bool have_mfp = ieee80211_hw_check(&local->hw, MFP_CAPABLE);
-       int n_suites = 0, r = 0, w = 0;
+       int r = 0, w = 0;
        u32 *suites;
        static const u32 cipher_suites[] = {
                /* keep WEP first, it may be removed below */
@@ -824,10 +824,9 @@ static int ieee80211_init_cipher_suites(struct ieee80211_local *local)
                                continue;
                        suites[w++] = suite;
                }
-       } else if (!local->hw.cipher_schemes) {
-               /* If the driver doesn't have cipher schemes, there's nothing
-                * else to do other than assign the (software supported and
-                * perhaps offloaded) cipher suites.
+       } else {
+               /* assign the (software supported and perhaps offloaded)
+                * cipher suites
                 */
                local->hw.wiphy->cipher_suites = cipher_suites;
                local->hw.wiphy->n_cipher_suites = ARRAY_SIZE(cipher_suites);
@@ -842,58 +841,6 @@ static int ieee80211_init_cipher_suites(struct ieee80211_local *local)
 
                /* not dynamically allocated, so just return */
                return 0;
-       } else {
-               const struct ieee80211_cipher_scheme *cs;
-
-               cs = local->hw.cipher_schemes;
-
-               /* Driver specifies cipher schemes only (but not cipher suites
-                * including the schemes)
-                *
-                * We start counting ciphers defined by schemes, TKIP, CCMP,
-                * CCMP-256, GCMP, and GCMP-256
-                */
-               n_suites = local->hw.n_cipher_schemes + 5;
-
-               /* check if we have WEP40 and WEP104 */
-               if (have_wep)
-                       n_suites += 2;
-
-               /* check if we have AES_CMAC, BIP-CMAC-256, BIP-GMAC-128,
-                * BIP-GMAC-256
-                */
-               if (have_mfp)
-                       n_suites += 4;
-
-               suites = kmalloc_array(n_suites, sizeof(u32), GFP_KERNEL);
-               if (!suites)
-                       return -ENOMEM;
-
-               suites[w++] = WLAN_CIPHER_SUITE_CCMP;
-               suites[w++] = WLAN_CIPHER_SUITE_CCMP_256;
-               suites[w++] = WLAN_CIPHER_SUITE_TKIP;
-               suites[w++] = WLAN_CIPHER_SUITE_GCMP;
-               suites[w++] = WLAN_CIPHER_SUITE_GCMP_256;
-
-               if (have_wep) {
-                       suites[w++] = WLAN_CIPHER_SUITE_WEP40;
-                       suites[w++] = WLAN_CIPHER_SUITE_WEP104;
-               }
-
-               if (have_mfp) {
-                       suites[w++] = WLAN_CIPHER_SUITE_AES_CMAC;
-                       suites[w++] = WLAN_CIPHER_SUITE_BIP_CMAC_256;
-                       suites[w++] = WLAN_CIPHER_SUITE_BIP_GMAC_128;
-                       suites[w++] = WLAN_CIPHER_SUITE_BIP_GMAC_256;
-               }
-
-               for (r = 0; r < local->hw.n_cipher_schemes; r++) {
-                       suites[w++] = cs[r].cipher;
-                       if (WARN_ON(cs[r].pn_len > IEEE80211_MAX_PN_LEN)) {
-                               kfree(suites);
-                               return -EINVAL;
-                       }
-               }
        }
 
        local->hw.wiphy->cipher_suites = suites;
@@ -1168,12 +1115,6 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
        if (local->hw.wiphy->max_scan_ie_len)
                local->hw.wiphy->max_scan_ie_len -= local->scan_ies_len;
 
-       if (WARN_ON(!ieee80211_cs_list_valid(local->hw.cipher_schemes,
-                                            local->hw.n_cipher_schemes))) {
-               result = -EINVAL;
-               goto fail_workqueue;
-       }
-
        result = ieee80211_init_cipher_suites(local);
        if (result < 0)
                goto fail_workqueue;
index 58ebdcd..45e7c1b 100644
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
  * Copyright (c) 2008, 2009 open80211s Ltd.
- * Copyright (C) 2019, 2021 Intel Corporation
+ * Copyright (C) 2019, 2021-2022 Intel Corporation
  * Author:     Luis Carlos Cobo <luisca@cozybit.com>
  */
 
@@ -247,13 +247,13 @@ int mesh_path_error_tx(struct ieee80211_sub_if_data *sdata,
                return -EAGAIN;
 
        skb = dev_alloc_skb(local->tx_headroom +
-                           sdata->encrypt_headroom +
+                           IEEE80211_ENCRYPT_HEADROOM +
                            IEEE80211_ENCRYPT_TAILROOM +
                            hdr_len +
                            2 + 15 /* PERR IE */);
        if (!skb)
                return -1;
-       skb_reserve(skb, local->tx_headroom + sdata->encrypt_headroom);
+       skb_reserve(skb, local->tx_headroom + IEEE80211_ENCRYPT_HEADROOM);
        mgmt = skb_put_zero(skb, hdr_len);
        mgmt->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT |
                                          IEEE80211_STYPE_ACTION);
index 58d48dc..6d5ad71 100644
@@ -8,7 +8,7 @@
  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright (C) 2015 - 2017 Intel Deutschland GmbH
- * Copyright (C) 2018 - 2021 Intel Corporation
+ * Copyright (C) 2018 - 2022 Intel Corporation
  */
 
 #include <linux/delay.h>
@@ -2496,8 +2496,6 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
        memset(ifmgd->tx_tspec, 0, sizeof(ifmgd->tx_tspec));
        cancel_delayed_work_sync(&ifmgd->tx_tspec_wk);
 
-       sdata->encrypt_headroom = IEEE80211_ENCRYPT_HEADROOM;
-
        bss_conf->pwr_reduction = 0;
        bss_conf->tx_pwr_env_num = 0;
        memset(bss_conf->tx_pwr_env, 0, sizeof(bss_conf->tx_pwr_env));
@@ -6071,8 +6069,6 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
        sdata->control_port_over_nl80211 =
                                        req->crypto.control_port_over_nl80211;
        sdata->control_port_no_preauth = req->crypto.control_port_no_preauth;
-       sdata->encrypt_headroom = ieee80211_cs_headroom(local, &req->crypto,
-                                                       sdata->vif.type);
 
        /* kick off associate process */
 
index 3c08ae0..a9f4e90 100644
@@ -6,7 +6,7 @@
  * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
  */
 
 #include <linux/jiffies.h>
@@ -1009,43 +1009,20 @@ static int ieee80211_get_mmie_keyidx(struct sk_buff *skb)
        return -1;
 }
 
-static int ieee80211_get_keyid(struct sk_buff *skb,
-                              const struct ieee80211_cipher_scheme *cs)
+static int ieee80211_get_keyid(struct sk_buff *skb)
 {
        struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
-       __le16 fc;
-       int hdrlen;
-       int minlen;
-       u8 key_idx_off;
-       u8 key_idx_shift;
+       __le16 fc = hdr->frame_control;
+       int hdrlen = ieee80211_hdrlen(fc);
        u8 keyid;
 
-       fc = hdr->frame_control;
-       hdrlen = ieee80211_hdrlen(fc);
-
-       if (cs) {
-               minlen = hdrlen + cs->hdr_len;
-               key_idx_off = hdrlen + cs->key_idx_off;
-               key_idx_shift = cs->key_idx_shift;
-       } else {
-               /* WEP, TKIP, CCMP and GCMP */
-               minlen = hdrlen + IEEE80211_WEP_IV_LEN;
-               key_idx_off = hdrlen + 3;
-               key_idx_shift = 6;
-       }
-
-       if (unlikely(skb->len < minlen))
+       /* WEP, TKIP, CCMP and GCMP */
+       if (unlikely(skb->len < hdrlen + IEEE80211_WEP_IV_LEN))
                return -EINVAL;
 
-       skb_copy_bits(skb, key_idx_off, &keyid, 1);
+       skb_copy_bits(skb, hdrlen + 3, &keyid, 1);
 
-       if (cs)
-               keyid &= cs->key_idx_mask;
-       keyid >>= key_idx_shift;
-
-       /* cs could use more than the usual two bits for the keyid */
-       if (unlikely(keyid >= NUM_DEFAULT_KEYS))
-               return -EINVAL;
+       keyid >>= 6;
 
        return keyid;
 }
@@ -1916,7 +1893,6 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
        struct ieee80211_key *ptk_idx = NULL;
        int mmie_keyidx = -1;
        __le16 fc;
-       const struct ieee80211_cipher_scheme *cs = NULL;
 
        if (ieee80211_is_ext(hdr->frame_control))
                return RX_CONTINUE;
@@ -1959,8 +1935,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
 
                if (ieee80211_has_protected(fc) &&
                    !(status->flag & RX_FLAG_IV_STRIPPED)) {
-                       cs = rx->sta->cipher_scheme;
-                       keyid = ieee80211_get_keyid(rx->skb, cs);
+                       keyid = ieee80211_get_keyid(rx->skb);
 
                        if (unlikely(keyid < 0))
                                return RX_DROP_UNUSABLE;
@@ -2065,7 +2040,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
                    (status->flag & RX_FLAG_IV_STRIPPED))
                        return RX_CONTINUE;
 
-               keyidx = ieee80211_get_keyid(rx->skb, cs);
+               keyidx = ieee80211_get_keyid(rx->skb);
 
                if (unlikely(keyidx < 0))
                        return RX_DROP_UNUSABLE;
@@ -2131,7 +2106,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
                result = ieee80211_crypto_gcmp_decrypt(rx);
                break;
        default:
-               result = ieee80211_crypto_hw_decrypt(rx);
+               result = RX_DROP_UNUSABLE;
        }
 
        /* the hdr variable is invalid after the decrypt handlers */
@@ -2945,7 +2920,7 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx)
                tailroom = IEEE80211_ENCRYPT_TAILROOM;
 
        fwd_skb = skb_copy_expand(skb, local->tx_headroom +
-                                      sdata->encrypt_headroom,
+                                      IEEE80211_ENCRYPT_HEADROOM,
                                  tailroom, GFP_ATOMIC);
        if (!fwd_skb)
                goto out;
index 35c390b..aa6950a 100644
@@ -3,7 +3,7 @@
  * Copyright 2002-2005, Devicescape Software, Inc.
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright(c) 2015-2017 Intel Deutschland GmbH
- * Copyright(c) 2020-2021 Intel Corporation
+ * Copyright(c) 2020-2022 Intel Corporation
  */
 
 #ifndef STA_INFO_H
@@ -616,7 +616,6 @@ struct link_sta_info {
  *     taken from HT/VHT capabilities or VHT operating mode notification
  * @known_smps_mode: the smps_mode the client thinks we are in. Relevant for
  *     AP only.
- * @cipher_scheme: optional cipher scheme for this station
  * @cparams: CoDel parameters for this station.
  * @reserved_tid: reserved TID (if any, otherwise IEEE80211_TID_UNRESERVED)
  * @fast_tx: TX fastpath information
@@ -700,7 +699,6 @@ struct sta_info {
 #endif
 
        enum ieee80211_smps_mode known_smps_mode;
-       const struct ieee80211_cipher_scheme *cipher_scheme;
 
        struct codel_params cparams;
 
index 0e4efc0..37fe72b 100644
@@ -5,7 +5,7 @@
  * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
  * Copyright 2007      Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
  *
  * Transmit and frame generation functions.
  */
@@ -882,7 +882,7 @@ static int ieee80211_fragment(struct ieee80211_tx_data *tx,
                rem -= fraglen;
                tmp = dev_alloc_skb(local->tx_headroom +
                                    frag_threshold +
-                                   tx->sdata->encrypt_headroom +
+                                   IEEE80211_ENCRYPT_HEADROOM +
                                    IEEE80211_ENCRYPT_TAILROOM);
                if (!tmp)
                        return -ENOMEM;
@@ -890,7 +890,7 @@ static int ieee80211_fragment(struct ieee80211_tx_data *tx,
                __skb_queue_tail(&tx->skbs, tmp);
 
                skb_reserve(tmp,
-                           local->tx_headroom + tx->sdata->encrypt_headroom);
+                           local->tx_headroom + IEEE80211_ENCRYPT_HEADROOM);
 
                /* copy control information */
                memcpy(tmp->cb, skb->cb, sizeof(tmp->cb));
@@ -1040,8 +1040,6 @@ ieee80211_tx_h_encrypt(struct ieee80211_tx_data *tx)
        case WLAN_CIPHER_SUITE_GCMP:
        case WLAN_CIPHER_SUITE_GCMP_256:
                return ieee80211_crypto_gcmp_encrypt(tx);
-       default:
-               return ieee80211_crypto_hw_encrypt(tx);
        }
 
        return TX_DROP;
@@ -2013,7 +2011,7 @@ void ieee80211_xmit(struct ieee80211_sub_if_data *sdata,
 
        headroom = local->tx_headroom;
        if (encrypt != ENCRYPT_NO)
-               headroom += sdata->encrypt_headroom;
+               headroom += IEEE80211_ENCRYPT_HEADROOM;
        headroom -= skb_headroom(skb);
        headroom = max_t(int, 0, headroom);
 
@@ -2867,7 +2865,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
         */
 
        if (head_need > 0 || skb_cloned(skb)) {
-               head_need += sdata->encrypt_headroom;
+               head_need += IEEE80211_ENCRYPT_HEADROOM;
                head_need += local->tx_headroom;
                head_need = max_t(int, 0, head_need);
                if (ieee80211_skb_resize(sdata, skb, head_need, ENCRYPT_DATA)) {
@@ -3128,15 +3126,6 @@ void ieee80211_check_fast_xmit(struct sta_info *sta)
                        /* we don't know how to generate IVs for this at all */
                        if (WARN_ON(gen_iv))
                                goto out;
-                       /* pure hardware keys are OK, of course */
-                       if (!(build.key->flags & KEY_FLAG_CIPHER_SCHEME))
-                               break;
-                       /* cipher scheme might require space allocation */
-                       if (iv_spc &&
-                           build.key->conf.iv_len > IEEE80211_FAST_XMIT_MAX_IV)
-                               goto out;
-                       if (iv_spc)
-                               build.hdr_len += build.key->conf.iv_len;
                }
 
                fc |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
index 1e26b52..9e6c4dc 100644
@@ -6,7 +6,7 @@
  * Copyright 2007      Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright (C) 2015-2017     Intel Deutschland GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
  *
  * utilities for mac80211
  */
@@ -4212,74 +4212,6 @@ int ieee80211_send_action_csa(struct ieee80211_sub_if_data *sdata,
        return 0;
 }
 
-bool ieee80211_cs_valid(const struct ieee80211_cipher_scheme *cs)
-{
-       return !(cs == NULL || cs->cipher == 0 ||
-                cs->hdr_len < cs->pn_len + cs->pn_off ||
-                cs->hdr_len <= cs->key_idx_off ||
-                cs->key_idx_shift > 7 ||
-                cs->key_idx_mask == 0);
-}
-
-bool ieee80211_cs_list_valid(const struct ieee80211_cipher_scheme *cs, int n)
-{
-       int i;
-
-       /* Ensure we have enough iftype bitmap space for all iftype values */
-       WARN_ON((NUM_NL80211_IFTYPES / 8 + 1) > sizeof(cs[0].iftype));
-
-       for (i = 0; i < n; i++)
-               if (!ieee80211_cs_valid(&cs[i]))
-                       return false;
-
-       return true;
-}
-
-const struct ieee80211_cipher_scheme *
-ieee80211_cs_get(struct ieee80211_local *local, u32 cipher,
-                enum nl80211_iftype iftype)
-{
-       const struct ieee80211_cipher_scheme *l = local->hw.cipher_schemes;
-       int n = local->hw.n_cipher_schemes;
-       int i;
-       const struct ieee80211_cipher_scheme *cs = NULL;
-
-       for (i = 0; i < n; i++) {
-               if (l[i].cipher == cipher) {
-                       cs = &l[i];
-                       break;
-               }
-       }
-
-       if (!cs || !(cs->iftype & BIT(iftype)))
-               return NULL;
-
-       return cs;
-}
-
-int ieee80211_cs_headroom(struct ieee80211_local *local,
-                         struct cfg80211_crypto_settings *crypto,
-                         enum nl80211_iftype iftype)
-{
-       const struct ieee80211_cipher_scheme *cs;
-       int headroom = IEEE80211_ENCRYPT_HEADROOM;
-       int i;
-
-       for (i = 0; i < crypto->n_ciphers_pairwise; i++) {
-               cs = ieee80211_cs_get(local, crypto->ciphers_pairwise[i],
-                                     iftype);
-
-               if (cs && headroom < cs->hdr_len)
-                       headroom = cs->hdr_len;
-       }
-
-       cs = ieee80211_cs_get(local, crypto->cipher_group, iftype);
-       if (cs && headroom < cs->hdr_len)
-               headroom = cs->hdr_len;
-
-       return headroom;
-}
-
 static bool
 ieee80211_extend_noa_desc(struct ieee80211_noa_data *data, u32 tsf, int i)
 {
index 5fd8a3e..93ec2f3 100644
@@ -3,7 +3,7 @@
  * Copyright 2002-2004, Instant802 Networks, Inc.
  * Copyright 2008, Jouni Malinen <j@w1.fi>
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
- * Copyright (C) 2020-2021 Intel Corporation
+ * Copyright (C) 2020-2022 Intel Corporation
  */
 
 #include <linux/netdevice.h>
@@ -778,102 +778,6 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx)
        return RX_CONTINUE;
 }
 
-static ieee80211_tx_result
-ieee80211_crypto_cs_encrypt(struct ieee80211_tx_data *tx,
-                           struct sk_buff *skb)
-{
-       struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
-       struct ieee80211_key *key = tx->key;
-       struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
-       int hdrlen;
-       u8 *pos, iv_len = key->conf.iv_len;
-
-       if (info->control.hw_key &&
-           !(info->control.hw_key->flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE)) {
-               /* hwaccel has no need for preallocated head room */
-               return TX_CONTINUE;
-       }
-
-       if (unlikely(skb_headroom(skb) < iv_len &&
-                    pskb_expand_head(skb, iv_len, 0, GFP_ATOMIC)))
-               return TX_DROP;
-
-       hdrlen = ieee80211_hdrlen(hdr->frame_control);
-
-       pos = skb_push(skb, iv_len);
-       memmove(pos, pos + iv_len, hdrlen);
-
-       return TX_CONTINUE;
-}
-
-static inline int ieee80211_crypto_cs_pn_compare(u8 *pn1, u8 *pn2, int len)
-{
-       int i;
-
-       /* pn is little endian */
-       for (i = len - 1; i >= 0; i--) {
-               if (pn1[i] < pn2[i])
-                       return -1;
-               else if (pn1[i] > pn2[i])
-                       return 1;
-       }
-
-       return 0;
-}
-
-static ieee80211_rx_result
-ieee80211_crypto_cs_decrypt(struct ieee80211_rx_data *rx)
-{
-       struct ieee80211_key *key = rx->key;
-       struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
-       const struct ieee80211_cipher_scheme *cs = NULL;
-       int hdrlen = ieee80211_hdrlen(hdr->frame_control);
-       struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
-       int data_len;
-       u8 *rx_pn;
-       u8 *skb_pn;
-       u8 qos_tid;
-
-       if (!rx->sta || !rx->sta->cipher_scheme ||
-           !(status->flag & RX_FLAG_DECRYPTED))
-               return RX_DROP_UNUSABLE;
-
-       if (!ieee80211_is_data(hdr->frame_control))
-               return RX_CONTINUE;
-
-       cs = rx->sta->cipher_scheme;
-
-       data_len = rx->skb->len - hdrlen - cs->hdr_len;
-
-       if (data_len < 0)
-               return RX_DROP_UNUSABLE;
-
-       if (ieee80211_is_data_qos(hdr->frame_control))
-               qos_tid = ieee80211_get_tid(hdr);
-       else
-               qos_tid = 0;
-
-       if (skb_linearize(rx->skb))
-               return RX_DROP_UNUSABLE;
-
-       rx_pn = key->u.gen.rx_pn[qos_tid];
-       skb_pn = rx->skb->data + hdrlen + cs->pn_off;
-
-       if (ieee80211_crypto_cs_pn_compare(skb_pn, rx_pn, cs->pn_len) <= 0)
-               return RX_DROP_UNUSABLE;
-
-       memcpy(rx_pn, skb_pn, cs->pn_len);
-
-       /* remove security header and MIC */
-       if (pskb_trim(rx->skb, rx->skb->len - cs->mic_len))
-               return RX_DROP_UNUSABLE;
-
-       memmove(rx->skb->data + cs->hdr_len, rx->skb->data, hdrlen);
-       skb_pull(rx->skb, cs->hdr_len);
-
-       return RX_CONTINUE;
-}
-
 static void bip_aad(struct sk_buff *skb, u8 *aad)
 {
        __le16 mask_fc;
@@ -1212,38 +1116,3 @@ ieee80211_crypto_aes_gmac_decrypt(struct ieee80211_rx_data *rx)
 
        return RX_CONTINUE;
 }
-
-ieee80211_tx_result
-ieee80211_crypto_hw_encrypt(struct ieee80211_tx_data *tx)
-{
-       struct sk_buff *skb;
-       struct ieee80211_tx_info *info = NULL;
-       ieee80211_tx_result res;
-
-       skb_queue_walk(&tx->skbs, skb) {
-               info  = IEEE80211_SKB_CB(skb);
-
-               /* handle hw-only algorithm */
-               if (!info->control.hw_key)
-                       return TX_DROP;
-
-               if (tx->key->flags & KEY_FLAG_CIPHER_SCHEME) {
-                       res = ieee80211_crypto_cs_encrypt(tx, skb);
-                       if (res != TX_CONTINUE)
-                               return res;
-               }
-       }
-
-       ieee80211_tx_set_protected(tx);
-
-       return TX_CONTINUE;
-}
-
-ieee80211_rx_result
-ieee80211_crypto_hw_decrypt(struct ieee80211_rx_data *rx)
-{
-       if (rx->sta && rx->sta->cipher_scheme)
-               return ieee80211_crypto_cs_decrypt(rx);
-
-       return RX_DROP_UNUSABLE;
-}
index af32722..a9a81ab 100644
@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * Copyright 2002-2004, Instant802 Networks, Inc.
+ * Copyright (C) 2022 Intel Corporation
  */
 
 #ifndef WPA_H
@@ -39,10 +40,6 @@ ieee80211_tx_result
 ieee80211_crypto_aes_gmac_encrypt(struct ieee80211_tx_data *tx);
 ieee80211_rx_result
 ieee80211_crypto_aes_gmac_decrypt(struct ieee80211_rx_data *rx);
-ieee80211_tx_result
-ieee80211_crypto_hw_encrypt(struct ieee80211_tx_data *tx);
-ieee80211_rx_result
-ieee80211_crypto_hw_decrypt(struct ieee80211_rx_data *rx);
 
 ieee80211_tx_result
 ieee80211_crypto_gcmp_encrypt(struct ieee80211_tx_data *tx);
index 17e1339..e0fb9f9 100644
@@ -167,8 +167,8 @@ static bool mptcp_ooo_try_coalesce(struct mptcp_sock *msk, struct sk_buff *to,
 
 static void __mptcp_rmem_reclaim(struct sock *sk, int amount)
 {
-       amount >>= SK_MEM_QUANTUM_SHIFT;
-       mptcp_sk(sk)->rmem_fwd_alloc -= amount << SK_MEM_QUANTUM_SHIFT;
+       amount >>= PAGE_SHIFT;
+       mptcp_sk(sk)->rmem_fwd_alloc -= amount << PAGE_SHIFT;
        __sk_mem_reduce_allocated(sk, amount);
 }
 
@@ -327,7 +327,7 @@ static bool mptcp_rmem_schedule(struct sock *sk, struct sock *ssk, int size)
                return true;
 
        amt = sk_mem_pages(size);
-       amount = amt << SK_MEM_QUANTUM_SHIFT;
+       amount = amt << PAGE_SHIFT;
        msk->rmem_fwd_alloc += amount;
        if (!__sk_mem_raise_allocated(sk, size, amt, SK_MEM_RECV)) {
                if (ssk->sk_forward_alloc < amount) {
@@ -972,10 +972,10 @@ static void __mptcp_mem_reclaim_partial(struct sock *sk)
 
        lockdep_assert_held_once(&sk->sk_lock.slock);
 
-       if (reclaimable > SK_MEM_QUANTUM)
+       if (reclaimable > (int)PAGE_SIZE)
                __mptcp_rmem_reclaim(sk, reclaimable - 1);
 
-       sk_mem_reclaim_partial(sk);
+       sk_mem_reclaim(sk);
 }
 
 static void mptcp_mem_reclaim_partial(struct sock *sk)
@@ -3437,7 +3437,10 @@ static struct proto mptcp_prot = {
        .get_port       = mptcp_get_port,
        .forward_alloc_get      = mptcp_forward_alloc_get,
        .sockets_allocated      = &mptcp_sockets_allocated,
+
        .memory_allocated       = &tcp_memory_allocated,
+       .per_cpu_fw_alloc       = &tcp_memory_per_cpu_fw_alloc,
+
        .memory_pressure        = &tcp_memory_pressure,
        .sysctl_wmem_offset     = offsetof(struct net, ipv4.sysctl_tcp_wmem),
        .sysctl_rmem_offset     = offsetof(struct net, ipv4.sysctl_tcp_rmem),
index 746be13..51144fc 100644
@@ -544,6 +544,7 @@ static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type,
        if (msg_type == NFT_MSG_NEWFLOWTABLE)
                nft_activate_next(ctx->net, flowtable);
 
+       INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
        nft_trans_flowtable(trans) = flowtable;
        nft_trans_commit_list_add_tail(ctx->net, trans);
 
@@ -1914,7 +1915,6 @@ static struct nft_hook *nft_netdev_hook_alloc(struct net *net,
                goto err_hook_dev;
        }
        hook->ops.dev = dev;
-       hook->inactive = false;
 
        return hook;
 
@@ -2166,7 +2166,7 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family,
        chain->flags |= NFT_CHAIN_BASE | flags;
        basechain->policy = NF_ACCEPT;
        if (chain->flags & NFT_CHAIN_HW_OFFLOAD &&
-           nft_chain_offload_priority(basechain) < 0)
+           !nft_chain_offload_support(basechain))
                return -EOPNOTSUPP;
 
        flow_block_init(&basechain->flow_block);
@@ -7332,7 +7332,7 @@ static void __nft_unregister_flowtable_net_hooks(struct net *net,
                nf_unregister_net_hook(net, &hook->ops);
                if (release_netdev) {
                        list_del(&hook->list);
-                       kfree_rcu(hook);
+                       kfree_rcu(hook, rcu);
                }
        }
 }
@@ -7433,11 +7433,15 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
 
        if (nla[NFTA_FLOWTABLE_FLAGS]) {
                flags = ntohl(nla_get_be32(nla[NFTA_FLOWTABLE_FLAGS]));
-               if (flags & ~NFT_FLOWTABLE_MASK)
-                       return -EOPNOTSUPP;
+               if (flags & ~NFT_FLOWTABLE_MASK) {
+                       err = -EOPNOTSUPP;
+                       goto err_flowtable_update_hook;
+               }
                if ((flowtable->data.flags & NFT_FLOWTABLE_HW_OFFLOAD) ^
-                   (flags & NFT_FLOWTABLE_HW_OFFLOAD))
-                       return -EOPNOTSUPP;
+                   (flags & NFT_FLOWTABLE_HW_OFFLOAD)) {
+                       err = -EOPNOTSUPP;
+                       goto err_flowtable_update_hook;
+               }
        } else {
                flags = flowtable->data.flags;
        }
@@ -7618,6 +7622,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
 {
        const struct nlattr * const *nla = ctx->nla;
        struct nft_flowtable_hook flowtable_hook;
+       LIST_HEAD(flowtable_del_list);
        struct nft_hook *this, *hook;
        struct nft_trans *trans;
        int err;
@@ -7633,7 +7638,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
                        err = -ENOENT;
                        goto err_flowtable_del_hook;
                }
-               hook->inactive = true;
+               list_move(&hook->list, &flowtable_del_list);
        }
 
        trans = nft_trans_alloc(ctx, NFT_MSG_DELFLOWTABLE,
@@ -7646,6 +7651,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
        nft_trans_flowtable(trans) = flowtable;
        nft_trans_flowtable_update(trans) = true;
        INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
+       list_splice(&flowtable_del_list, &nft_trans_flowtable_hooks(trans));
        nft_flowtable_hook_release(&flowtable_hook);
 
        nft_trans_commit_list_add_tail(ctx->net, trans);
@@ -7653,13 +7659,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
        return 0;
 
 err_flowtable_del_hook:
-       list_for_each_entry(this, &flowtable_hook.list, list) {
-               hook = nft_hook_list_find(&flowtable->hook_list, this);
-               if (!hook)
-                       break;
-
-               hook->inactive = false;
-       }
+       list_splice(&flowtable_del_list, &flowtable->hook_list);
        nft_flowtable_hook_release(&flowtable_hook);
 
        return err;
@@ -8329,6 +8329,9 @@ static void nft_commit_release(struct nft_trans *trans)
                nf_tables_chain_destroy(&trans->ctx);
                break;
        case NFT_MSG_DELRULE:
+               if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
+                       nft_flow_rule_destroy(nft_trans_flow_rule(trans));
+
                nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
                break;
        case NFT_MSG_DELSET:
@@ -8563,17 +8566,6 @@ void nft_chain_del(struct nft_chain *chain)
        list_del_rcu(&chain->list);
 }
 
-static void nft_flowtable_hooks_del(struct nft_flowtable *flowtable,
-                                   struct list_head *hook_list)
-{
-       struct nft_hook *hook, *next;
-
-       list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
-               if (hook->inactive)
-                       list_move(&hook->list, hook_list);
-       }
-}
-
 static void nf_tables_module_autoload_cleanup(struct net *net)
 {
        struct nftables_pernet *nft_net = nft_pernet(net);
@@ -8828,6 +8820,9 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
                        nf_tables_rule_notify(&trans->ctx,
                                              nft_trans_rule(trans),
                                              NFT_MSG_NEWRULE);
+                       if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
+                               nft_flow_rule_destroy(nft_trans_flow_rule(trans));
+
                        nft_trans_destroy(trans);
                        break;
                case NFT_MSG_DELRULE:
@@ -8918,8 +8913,6 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
                        break;
                case NFT_MSG_DELFLOWTABLE:
                        if (nft_trans_flowtable_update(trans)) {
-                               nft_flowtable_hooks_del(nft_trans_flowtable(trans),
-                                                       &nft_trans_flowtable_hooks(trans));
                                nf_tables_flowtable_notify(&trans->ctx,
                                                           nft_trans_flowtable(trans),
                                                           &nft_trans_flowtable_hooks(trans),
@@ -9000,7 +8993,6 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
        struct nftables_pernet *nft_net = nft_pernet(net);
        struct nft_trans *trans, *next;
        struct nft_trans_elem *te;
-       struct nft_hook *hook;
 
        if (action == NFNL_ABORT_VALIDATE &&
            nf_tables_validate(net) < 0)
@@ -9131,8 +9123,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
                        break;
                case NFT_MSG_DELFLOWTABLE:
                        if (nft_trans_flowtable_update(trans)) {
-                               list_for_each_entry(hook, &nft_trans_flowtable(trans)->hook_list, list)
-                                       hook->inactive = false;
+                               list_splice(&nft_trans_flowtable_hooks(trans),
+                                           &nft_trans_flowtable(trans)->hook_list);
                        } else {
                                trans->ctx.table->use++;
                                nft_clear(trans->ctx.net, nft_trans_flowtable(trans));
index 2d36952..910ef88 100644
@@ -208,7 +208,7 @@ static int nft_setup_cb_call(enum tc_setup_type type, void *type_data,
        return 0;
 }
 
-int nft_chain_offload_priority(struct nft_base_chain *basechain)
+static int nft_chain_offload_priority(const struct nft_base_chain *basechain)
 {
        if (basechain->ops.priority <= 0 ||
            basechain->ops.priority > USHRT_MAX)
@@ -217,6 +217,27 @@ int nft_chain_offload_priority(struct nft_base_chain *basechain)
        return 0;
 }
 
+bool nft_chain_offload_support(const struct nft_base_chain *basechain)
+{
+       struct net_device *dev;
+       struct nft_hook *hook;
+
+       if (nft_chain_offload_priority(basechain) < 0)
+               return false;
+
+       list_for_each_entry(hook, &basechain->hook_list, list) {
+               if (hook->ops.pf != NFPROTO_NETDEV ||
+                   hook->ops.hooknum != NF_NETDEV_INGRESS)
+                       return false;
+
+               dev = hook->ops.dev;
+               if (!dev->netdev_ops->ndo_setup_tc && !flow_indr_dev_exists())
+                       return false;
+       }
+
+       return true;
+}
+
 static void nft_flow_cls_offload_setup(struct flow_cls_offload *cls_flow,
                                       const struct nft_base_chain *basechain,
                                       const struct nft_rule *rule,
index 4394df4..e5fd699 100644
@@ -335,7 +335,8 @@ static void nft_nat_inet_eval(const struct nft_expr *expr,
 {
        const struct nft_nat *priv = nft_expr_priv(expr);
 
-       if (priv->family == nft_pf(pkt))
+       if (priv->family == nft_pf(pkt) ||
+           priv->family == NFPROTO_INET)
                nft_nat_eval(expr, regs, pkt);
 }
 
index 1b5d730..868db46 100644
@@ -373,6 +373,7 @@ static void set_ip_addr(struct sk_buff *skb, struct iphdr *nh,
        update_ip_l4_checksum(skb, nh, *addr, new_addr);
        csum_replace4(&nh->check, *addr, new_addr);
        skb_clear_hash(skb);
+       ovs_ct_clear(skb, NULL);
        *addr = new_addr;
 }
 
@@ -420,6 +421,7 @@ static void set_ipv6_addr(struct sk_buff *skb, u8 l4_proto,
                update_ipv6_checksum(skb, l4_proto, addr, new_addr);
 
        skb_clear_hash(skb);
+       ovs_ct_clear(skb, NULL);
        memcpy(addr, new_addr, sizeof(__be32[4]));
 }
 
@@ -660,6 +662,7 @@ static int set_nsh(struct sk_buff *skb, struct sw_flow_key *flow_key,
 static void set_tp_port(struct sk_buff *skb, __be16 *port,
                        __be16 new_port, __sum16 *check)
 {
+       ovs_ct_clear(skb, NULL);
        inet_proto_csum_replace2(check, skb, *port, new_port, false);
        *port = new_port;
 }
@@ -699,6 +702,7 @@ static int set_udp(struct sk_buff *skb, struct sw_flow_key *flow_key,
                uh->dest = dst;
                flow_key->tp.src = src;
                flow_key->tp.dst = dst;
+               ovs_ct_clear(skb, NULL);
        }
 
        skb_clear_hash(skb);
@@ -761,6 +765,8 @@ static int set_sctp(struct sk_buff *skb, struct sw_flow_key *flow_key,
        sh->checksum = old_csum ^ old_correct_csum ^ new_csum;
 
        skb_clear_hash(skb);
+       ovs_ct_clear(skb, NULL);
+
        flow_key->tp.src = sh->source;
        flow_key->tp.dst = sh->dest;
 
index 4a947c1..4e70df9 100644
@@ -1342,7 +1342,9 @@ int ovs_ct_clear(struct sk_buff *skb, struct sw_flow_key *key)
 
        nf_ct_put(ct);
        nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
-       ovs_ct_fill_key(skb, key, false);
+
+       if (key)
+               ovs_ct_fill_key(skb, key, false);
 
        return 0;
 }
index b498dac..2f61d5b 100644
@@ -115,7 +115,7 @@ error_master_upper_dev_unlink:
 error_unlock:
        rtnl_unlock();
 error_put:
-       dev_put_track(vport->dev, &vport->dev_tracker);
+       netdev_put(vport->dev, &vport->dev_tracker);
 error_free_vport:
        ovs_vport_free(vport);
        return ERR_PTR(err);
@@ -137,7 +137,7 @@ static void vport_netdev_free(struct rcu_head *rcu)
 {
        struct vport *vport = container_of(rcu, struct vport, rcu);
 
-       dev_put_track(vport->dev, &vport->dev_tracker);
+       netdev_put(vport->dev, &vport->dev_tracker);
        ovs_vport_free(vport);
 }
 
@@ -173,7 +173,7 @@ void ovs_netdev_tunnel_destroy(struct vport *vport)
         */
        if (vport->dev->reg_state == NETREG_REGISTERED)
                rtnl_delete_link(vport->dev);
-       dev_put_track(vport->dev, &vport->dev_tracker);
+       netdev_put(vport->dev, &vport->dev_tracker);
        vport->dev = NULL;
        rtnl_unlock();
 
index ca6e92a..d08c472 100644
@@ -3134,7 +3134,7 @@ static int packet_release(struct socket *sock)
        packet_cached_dev_reset(po);
 
        if (po->prot_hook.dev) {
-               dev_put_track(po->prot_hook.dev, &po->prot_hook.dev_tracker);
+               netdev_put(po->prot_hook.dev, &po->prot_hook.dev_tracker);
                po->prot_hook.dev = NULL;
        }
        spin_unlock(&po->bind_lock);
@@ -3235,15 +3235,15 @@ static int packet_do_bind(struct sock *sk, const char *name, int ifindex,
                WRITE_ONCE(po->num, proto);
                po->prot_hook.type = proto;
 
-               dev_put_track(po->prot_hook.dev, &po->prot_hook.dev_tracker);
+               netdev_put(po->prot_hook.dev, &po->prot_hook.dev_tracker);
 
                if (unlikely(unlisted)) {
                        po->prot_hook.dev = NULL;
                        WRITE_ONCE(po->ifindex, -1);
                        packet_cached_dev_reset(po);
                } else {
-                       dev_hold_track(dev, &po->prot_hook.dev_tracker,
-                                      GFP_ATOMIC);
+                       netdev_hold(dev, &po->prot_hook.dev_tracker,
+                                   GFP_ATOMIC);
                        po->prot_hook.dev = dev;
                        WRITE_ONCE(po->ifindex, dev ? dev->ifindex : 0);
                        packet_cached_dev_assign(po, dev);
@@ -4167,8 +4167,8 @@ static int packet_notifier(struct notifier_block *this,
                                if (msg == NETDEV_UNREGISTER) {
                                        packet_cached_dev_reset(po);
                                        WRITE_ONCE(po->ifindex, -1);
-                                       dev_put_track(po->prot_hook.dev,
-                                                     &po->prot_hook.dev_tracker);
+                                       netdev_put(po->prot_hook.dev,
+                                                  &po->prot_hook.dev_tracker);
                                        po->prot_hook.dev = NULL;
                                }
                                spin_unlock(&po->bind_lock);
index ebb92fb..a1d70cf 100644
@@ -79,7 +79,7 @@ static void tcf_mirred_release(struct tc_action *a)
 
        /* last reference to action, no need to lock */
        dev = rcu_dereference_protected(m->tcfm_dev, 1);
-       dev_put_track(dev, &m->tcfm_dev_tracker);
+       netdev_put(dev, &m->tcfm_dev_tracker);
 }
 
 static const struct nla_policy mirred_policy[TCA_MIRRED_MAX + 1] = {
@@ -181,7 +181,7 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
                mac_header_xmit = dev_is_mac_header_xmit(ndev);
                odev = rcu_replace_pointer(m->tcfm_dev, ndev,
                                          lockdep_is_held(&m->tcf_lock));
-               dev_put_track(odev, &m->tcfm_dev_tracker);
+               netdev_put(odev, &m->tcfm_dev_tracker);
                netdev_tracker_alloc(ndev, &m->tcfm_dev_tracker, GFP_ATOMIC);
                m->tcfm_mac_header_xmit = mac_header_xmit;
        }
@@ -402,7 +402,7 @@ static int mirred_device_event(struct notifier_block *unused,
                list_for_each_entry(m, &mirred_list, tcfm_list) {
                        spin_lock_bh(&m->tcf_lock);
                        if (tcf_mirred_dev_dereference(m) == dev) {
-                               dev_put_track(dev, &m->tcfm_dev_tracker);
+                               netdev_put(dev, &m->tcfm_dev_tracker);
                                /* Note : no rcu grace period necessary, as
                                 * net_device are already rcu protected.
                                 */
index e3c0e8e..bf87b50 100644
@@ -1292,7 +1292,7 @@ err_out5:
        if (ops->destroy)
                ops->destroy(sch);
 err_out3:
-       dev_put_track(dev, &sch->dev_tracker);
+       netdev_put(dev, &sch->dev_tracker);
        qdisc_free(sch);
 err_out2:
        module_put(ops->owner);
index dba0b3e..cc6eabe 100644
@@ -541,7 +541,7 @@ static void dev_watchdog(struct timer_list *t)
        spin_unlock(&dev->tx_global_lock);
 
        if (release)
-               dev_put_track(dev, &dev->watchdog_dev_tracker);
+               netdev_put(dev, &dev->watchdog_dev_tracker);
 }
 
 void __netdev_watchdog_up(struct net_device *dev)
@@ -551,7 +551,8 @@ void __netdev_watchdog_up(struct net_device *dev)
                        dev->watchdog_timeo = 5*HZ;
                if (!mod_timer(&dev->watchdog_timer,
                               round_jiffies(jiffies + dev->watchdog_timeo)))
-                       dev_hold_track(dev, &dev->watchdog_dev_tracker, GFP_ATOMIC);
+                       netdev_hold(dev, &dev->watchdog_dev_tracker,
+                                   GFP_ATOMIC);
        }
 }
 EXPORT_SYMBOL_GPL(__netdev_watchdog_up);
@@ -565,7 +566,7 @@ static void dev_watchdog_down(struct net_device *dev)
 {
        netif_tx_lock_bh(dev);
        if (del_timer(&dev->watchdog_timer))
-               dev_put_track(dev, &dev->watchdog_dev_tracker);
+               netdev_put(dev, &dev->watchdog_dev_tracker);
        netif_tx_unlock_bh(dev);
 }
 
@@ -975,7 +976,7 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
        sch->enqueue = ops->enqueue;
        sch->dequeue = ops->dequeue;
        sch->dev_queue = dev_queue;
-       dev_hold_track(dev, &sch->dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &sch->dev_tracker, GFP_KERNEL);
        refcount_set(&sch->refcnt, 1);
 
        return sch;
@@ -1067,7 +1068,7 @@ static void qdisc_destroy(struct Qdisc *qdisc)
                ops->destroy(qdisc);
 
        module_put(ops->owner);
-       dev_put_track(qdisc_dev(qdisc), &qdisc->dev_tracker);
+       netdev_put(qdisc_dev(qdisc), &qdisc->dev_tracker);
 
        trace_qdisc_destroy(qdisc);
 
index 35928fe..fa500ea 100644
@@ -1523,11 +1523,11 @@ static __init int sctp_init(void)
        limit = (sysctl_sctp_mem[1]) << (PAGE_SHIFT - 7);
        max_share = min(4UL*1024*1024, limit);
 
-       sysctl_sctp_rmem[0] = SK_MEM_QUANTUM; /* give each asoc 1 page min */
+       sysctl_sctp_rmem[0] = PAGE_SIZE; /* give each asoc 1 page min */
        sysctl_sctp_rmem[1] = 1500 * SKB_TRUESIZE(1);
        sysctl_sctp_rmem[2] = max(sysctl_sctp_rmem[1], max_share);
 
-       sysctl_sctp_wmem[0] = SK_MEM_QUANTUM;
+       sysctl_sctp_wmem[0] = PAGE_SIZE;
        sysctl_sctp_wmem[1] = 16*1024;
        sysctl_sctp_wmem[2] = max(64*1024, max_share);
 
index 52edee1..f6ee7f4 100644
@@ -6590,8 +6590,6 @@ static int sctp_eat_data(const struct sctp_association *asoc,
                        pr_debug("%s: under pressure, reneging for tsn:%u\n",
                                 __func__, tsn);
                        deliver = SCTP_CMD_RENEGE;
-               } else {
-                       sk_mem_reclaim(sk);
                }
        }
 
index 6d37d2d..171f1a3 100644
@@ -93,6 +93,7 @@ static int sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
 
 static unsigned long sctp_memory_pressure;
 static atomic_long_t sctp_memory_allocated;
+static DEFINE_PER_CPU(int, sctp_memory_per_cpu_fw_alloc);
 struct percpu_counter sctp_sockets_allocated;
 
 static void sctp_enter_memory_pressure(struct sock *sk)
@@ -1823,9 +1824,6 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
        if (sctp_wspace(asoc) < (int)msg_len)
                sctp_prsctp_prune(asoc, sinfo, msg_len - sctp_wspace(asoc));
 
-       if (sk_under_memory_pressure(sk))
-               sk_mem_reclaim(sk);
-
        if (sctp_wspace(asoc) <= 0 || !sk_wmem_schedule(sk, msg_len)) {
                timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
                err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
@@ -9194,8 +9192,6 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
                        goto do_error;
                if (signal_pending(current))
                        goto do_interrupted;
-               if (sk_under_memory_pressure(sk))
-                       sk_mem_reclaim(sk);
                if ((int)msg_len <= sctp_wspace(asoc) &&
                    sk_wmem_schedule(sk, msg_len))
                        break;
@@ -9657,7 +9653,10 @@ struct proto sctp_prot = {
        .sysctl_wmem =  sysctl_sctp_wmem,
        .memory_pressure = &sctp_memory_pressure,
        .enter_memory_pressure = sctp_enter_memory_pressure,
+
        .memory_allocated = &sctp_memory_allocated,
+       .per_cpu_fw_alloc = &sctp_memory_per_cpu_fw_alloc,
+
        .sockets_allocated = &sctp_sockets_allocated,
 };
 
@@ -9700,7 +9699,10 @@ struct proto sctpv6_prot = {
        .sysctl_wmem    = sysctl_sctp_wmem,
        .memory_pressure = &sctp_memory_pressure,
        .enter_memory_pressure = sctp_enter_memory_pressure,
+
        .memory_allocated = &sctp_memory_allocated,
+       .per_cpu_fw_alloc = &sctp_memory_per_cpu_fw_alloc,
+
        .sockets_allocated = &sctp_sockets_allocated,
 };
 #endif /* IS_ENABLED(CONFIG_IPV6) */
index 6b13f73..bb22b71 100644
@@ -979,8 +979,6 @@ static void sctp_renege_events(struct sctp_ulpq *ulpq, struct sctp_chunk *chunk,
 
        if (freed >= needed && sctp_ulpevent_idata(ulpq, chunk, gfp) <= 0)
                sctp_intl_start_pd(ulpq, gfp);
-
-       sk_mem_reclaim(asoc->base.sk);
 }
 
 static void sctp_intl_stream_abort_pd(struct sctp_ulpq *ulpq, __u16 sid,
index 407fed4..0a8510a 100644
@@ -1100,12 +1100,8 @@ void sctp_ulpq_renege(struct sctp_ulpq *ulpq, struct sctp_chunk *chunk,
                else if (retval == 1)
                        sctp_ulpq_reasm_drain(ulpq);
        }
-
-       sk_mem_reclaim(asoc->base.sk);
 }
 
-
-
 /* Notify the application if an association is aborted and in
  * partial delivery mode.  Send up any pending received messages.
  */
index 7055ed1..4c3bf6d 100644
@@ -120,7 +120,8 @@ static int smc_pnet_remove_by_pnetid(struct net *net, char *pnet_name)
                    smc_pnet_match(pnetelem->pnet_name, pnet_name)) {
                        list_del(&pnetelem->list);
                        if (pnetelem->type == SMC_PNET_ETH && pnetelem->ndev) {
-                               dev_put_track(pnetelem->ndev, &pnetelem->dev_tracker);
+                               netdev_put(pnetelem->ndev,
+                                          &pnetelem->dev_tracker);
                                pr_warn_ratelimited("smc: net device %s "
                                                    "erased user defined "
                                                    "pnetid %.16s\n",
@@ -196,7 +197,7 @@ static int smc_pnet_add_by_ndev(struct net_device *ndev)
        list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) {
                if (pnetelem->type == SMC_PNET_ETH && !pnetelem->ndev &&
                    !strncmp(pnetelem->eth_name, ndev->name, IFNAMSIZ)) {
-                       dev_hold_track(ndev, &pnetelem->dev_tracker, GFP_ATOMIC);
+                       netdev_hold(ndev, &pnetelem->dev_tracker, GFP_ATOMIC);
                        pnetelem->ndev = ndev;
                        rc = 0;
                        pr_warn_ratelimited("smc: adding net device %s with "
@@ -227,7 +228,7 @@ static int smc_pnet_remove_by_ndev(struct net_device *ndev)
        mutex_lock(&pnettable->lock);
        list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) {
                if (pnetelem->type == SMC_PNET_ETH && pnetelem->ndev == ndev) {
-                       dev_put_track(pnetelem->ndev, &pnetelem->dev_tracker);
+                       netdev_put(pnetelem->ndev, &pnetelem->dev_tracker);
                        pnetelem->ndev = NULL;
                        rc = 0;
                        pr_warn_ratelimited("smc: removing net device %s with "
index 2bc8773..1b6f5e2 100644
@@ -1878,10 +1878,8 @@ out_fd:
        return ERR_PTR(err);
 }
 
-int __sys_accept4_file(struct file *file, unsigned file_flags,
-                      struct sockaddr __user *upeer_sockaddr,
-                      int __user *upeer_addrlen, int flags,
-                      unsigned long nofile)
+static int __sys_accept4_file(struct file *file, struct sockaddr __user *upeer_sockaddr,
+                             int __user *upeer_addrlen, int flags)
 {
        struct file *newfile;
        int newfd;
@@ -1892,11 +1890,11 @@ int __sys_accept4_file(struct file *file, unsigned file_flags,
        if (SOCK_NONBLOCK != O_NONBLOCK && (flags & SOCK_NONBLOCK))
                flags = (flags & ~SOCK_NONBLOCK) | O_NONBLOCK;
 
-       newfd = __get_unused_fd_flags(flags, nofile);
+       newfd = get_unused_fd_flags(flags);
        if (unlikely(newfd < 0))
                return newfd;
 
-       newfile = do_accept(file, file_flags, upeer_sockaddr, upeer_addrlen,
+       newfile = do_accept(file, 0, upeer_sockaddr, upeer_addrlen,
                            flags);
        if (IS_ERR(newfile)) {
                put_unused_fd(newfd);
@@ -1926,9 +1924,8 @@ int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr,
 
        f = fdget(fd);
        if (f.file) {
-               ret = __sys_accept4_file(f.file, 0, upeer_sockaddr,
-                                               upeer_addrlen, flags,
-                                               rlimit(RLIMIT_NOFILE));
+               ret = __sys_accept4_file(f.file, upeer_sockaddr,
+                                        upeer_addrlen, flags);
                fdput(f);
        }
 
index 474f763..8cc42ae 100644
@@ -64,7 +64,7 @@ void switchdev_deferred_process(void)
 
        while ((dfitem = switchdev_deferred_dequeue())) {
                dfitem->func(dfitem->dev, dfitem->data);
-               dev_put_track(dfitem->dev, &dfitem->dev_tracker);
+               netdev_put(dfitem->dev, &dfitem->dev_tracker);
                kfree(dfitem);
        }
 }
@@ -91,7 +91,7 @@ static int switchdev_deferred_enqueue(struct net_device *dev,
        dfitem->dev = dev;
        dfitem->func = func;
        memcpy(dfitem->data, data, data_len);
-       dev_hold_track(dev, &dfitem->dev_tracker, GFP_ATOMIC);
+       netdev_hold(dev, &dfitem->dev_tracker, GFP_ATOMIC);
        spin_lock_bh(&deferred_lock);
        list_add_tail(&dfitem->list, &deferred);
        spin_unlock_bh(&deferred_lock);
index 932c87b..35cac77 100644
@@ -788,7 +788,7 @@ int tipc_attach_loopback(struct net *net)
        if (!dev)
                return -ENODEV;
 
-       dev_hold_track(dev, &tn->loopback_pt.dev_tracker, GFP_KERNEL);
+       netdev_hold(dev, &tn->loopback_pt.dev_tracker, GFP_KERNEL);
        tn->loopback_pt.dev = dev;
        tn->loopback_pt.type = htons(ETH_P_TIPC);
        tn->loopback_pt.func = tipc_loopback_rcv_pkt;
@@ -801,7 +801,7 @@ void tipc_detach_loopback(struct net *net)
        struct tipc_net *tn = tipc_net(net);
 
        dev_remove_pack(&tn->loopback_pt);
-       dev_put_track(net->loopback_dev, &tn->loopback_pt.dev_tracker);
+       netdev_put(net->loopback_dev, &tn->loopback_pt.dev_tracker);
 }
 
 /* Caller should hold rtnl_lock to protect the bearer */
index b91ddc1..da17641 100644
@@ -544,7 +544,7 @@ static int do_tls_getsockopt(struct sock *sk, int optname,
                rc = do_tls_getsockopt_conf(sk, optval, optlen,
                                            optname == TLS_TX);
                break;
-       case TLS_TX_ZEROCOPY_SENDFILE:
+       case TLS_TX_ZEROCOPY_RO:
                rc = do_tls_getsockopt_tx_zc(sk, optval, optlen);
                break;
        default:
@@ -731,7 +731,7 @@ static int do_tls_setsockopt(struct sock *sk, int optname, sockptr_t optval,
                                            optname == TLS_TX);
                release_sock(sk);
                break;
-       case TLS_TX_ZEROCOPY_SENDFILE:
+       case TLS_TX_ZEROCOPY_RO:
                lock_sock(sk);
                rc = do_tls_setsockopt_tx_zc(sk, optval, optlen);
                release_sock(sk);
@@ -970,7 +970,7 @@ static int tls_get_info(const struct sock *sk, struct sk_buff *skb)
                goto nla_failure;
 
        if (ctx->tx_conf == TLS_HW && ctx->zerocopy_sendfile) {
-               err = nla_put_flag(skb, TLS_INFO_ZC_SENDFILE);
+               err = nla_put_flag(skb, TLS_INFO_ZC_RO_TX);
                if (err)
                        goto nla_failure;
        }
@@ -994,7 +994,7 @@ static size_t tls_get_info_size(const struct sock *sk)
                nla_total_size(sizeof(u16)) +   /* TLS_INFO_CIPHER */
                nla_total_size(sizeof(u16)) +   /* TLS_INFO_RXCONF */
                nla_total_size(sizeof(u16)) +   /* TLS_INFO_TXCONF */
-               nla_total_size(0) +             /* TLS_INFO_ZC_SENDFILE */
+               nla_total_size(0) +             /* TLS_INFO_ZC_RO_TX */
                0;
 
        return size;
index 654dcef..3453e00 100644
@@ -302,7 +302,7 @@ static void __unix_remove_socket(struct sock *sk)
 
 static void __unix_insert_socket(struct sock *sk)
 {
-       WARN_ON(!sk_unhashed(sk));
+       DEBUG_NET_WARN_ON_ONCE(!sk_unhashed(sk));
        sk_add_node(sk, &unix_socket_table[sk->sk_hash]);
 }
 
@@ -490,7 +490,7 @@ static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other)
         * -ECONNREFUSED. Otherwise, if we haven't queued any skbs
         * to other and its full, we will hang waiting for POLLOUT.
         */
-       if (unix_recvq_full(other) && !sock_flag(other, SOCK_DEAD))
+       if (unix_recvq_full_lockless(other) && !sock_flag(other, SOCK_DEAD))
                return 1;
 
        if (connected)
@@ -554,9 +554,9 @@ static void unix_sock_destructor(struct sock *sk)
                u->oob_skb = NULL;
        }
 #endif
-       WARN_ON(refcount_read(&sk->sk_wmem_alloc));
-       WARN_ON(!sk_unhashed(sk));
-       WARN_ON(sk->sk_socket);
+       DEBUG_NET_WARN_ON_ONCE(refcount_read(&sk->sk_wmem_alloc));
+       DEBUG_NET_WARN_ON_ONCE(!sk_unhashed(sk));
+       DEBUG_NET_WARN_ON_ONCE(sk->sk_socket);
        if (!sock_flag(sk, SOCK_DEAD)) {
                pr_info("Attempt to release alive unix socket: %p\n", sk);
                return;
index e0a4526..19ac872 100644
@@ -373,7 +373,8 @@ u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max_entries)
                goto out;
        }
 
-       nb_pkts = xskq_cons_peek_desc_batch(xs->tx, pool, max_entries);
+       max_entries = xskq_cons_nb_entries(xs->tx, max_entries);
+       nb_pkts = xskq_cons_read_desc_batch(xs->tx, pool, max_entries);
        if (!nb_pkts) {
                xs->tx->queue_empty_descs++;
                goto out;
@@ -389,7 +390,7 @@ u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max_entries)
        if (!nb_pkts)
                goto out;
 
-       xskq_cons_release_n(xs->tx, nb_pkts);
+       xskq_cons_release_n(xs->tx, max_entries);
        __xskq_cons_release(xs->tx);
        xs->sk.sk_write_space(&xs->sk);
 
index a794410..fb20bf7 100644
@@ -282,14 +282,6 @@ static inline bool xskq_cons_peek_desc(struct xsk_queue *q,
        return xskq_cons_read_desc(q, desc, pool);
 }
 
-static inline u32 xskq_cons_peek_desc_batch(struct xsk_queue *q, struct xsk_buff_pool *pool,
-                                           u32 max)
-{
-       u32 entries = xskq_cons_nb_entries(q, max);
-
-       return xskq_cons_read_desc_batch(q, pool, entries);
-}
-
 /* To improve performance in the xskq_cons_release functions, only update local state here.
  * Reflect this to global state when we get new entries from the ring in
  * xskq_cons_get_entries() and whenever Rx or Tx processing are completed in the NAPI loop.
index 35c7e89..637ca88 100644
@@ -275,7 +275,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
                xso->dev = NULL;
                xso->dir = 0;
                xso->real_dev = NULL;
-               dev_put_track(dev, &xso->dev_tracker);
+               netdev_put(dev, &xso->dev_tracker);
 
                if (err != -EOPNOTSUPP)
                        return err;
index fbd34b8..7434e9e 100644
 #include <openssl/err.h>
 #include <openssl/engine.h>
 
+/*
+ * OpenSSL 3.0 deprecates the OpenSSL's ENGINE API.
+ *
+ * Remove this if/when that API is no longer used
+ */
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
+
 /*
  * Use CMS if we have openssl-1.0.0 or newer available - otherwise we have to
  * assume that it's not available and its header file is missing and that we
index 0165da3..2b2c8eb 100644 (file)
@@ -283,8 +283,8 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
        /* key properties */
        flags = 0;
        flags |= options->policydigest_len ? 0 : TPM2_OA_USER_WITH_AUTH;
-       flags |= payload->migratable ? (TPM2_OA_FIXED_TPM |
-                                       TPM2_OA_FIXED_PARENT) : 0;
+       flags |= payload->migratable ? 0 : (TPM2_OA_FIXED_TPM |
+                                           TPM2_OA_FIXED_PARENT);
        tpm_buf_append_u32(&buf, flags);
 
        /* policy */
index 3e9e9ac..b7e5032 100644 (file)
@@ -660,6 +660,7 @@ static const struct hda_vendor_id hda_vendor_ids[] = {
        { 0x14f1, "Conexant" },
        { 0x17e8, "Chrontel" },
        { 0x1854, "LG" },
+       { 0x19e5, "Huawei" },
        { 0x1aec, "Wolfson Microelectronics" },
        { 0x1af4, "QEMU" },
        { 0x434d, "C-Media" },
index 0a83eb6..a77165b 100644 (file)
@@ -2525,6 +2525,9 @@ static const struct pci_device_id azx_ids[] = {
          .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
        { PCI_DEVICE(0x8086, 0x51cf),
          .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+       /* Meteorlake-P */
+       { PCI_DEVICE(0x8086, 0x7e28),
+         .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
        /* Broxton-P(Apollolake) */
        { PCI_DEVICE(0x8086, 0x5a98),
          .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_BROXTON },
index aa360a0..1248d1a 100644 (file)
@@ -1052,6 +1052,13 @@ static int patch_conexant_auto(struct hda_codec *codec)
                snd_hda_pick_fixup(codec, cxt5051_fixup_models,
                                   cxt5051_fixups, cxt_fixups);
                break;
+       case 0x14f15098:
+               codec->pin_amp_workaround = 1;
+               spec->gen.mixer_nid = 0x22;
+               spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO;
+               snd_hda_pick_fixup(codec, cxt5066_fixup_models,
+                                  cxt5066_fixups, cxt_fixups);
+               break;
        case 0x14f150f2:
                codec->power_save_node = 1;
                fallthrough;
index 31fe417..6c209cd 100644 (file)
@@ -4554,6 +4554,7 @@ HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI",    patch_i915_icl_hdmi),
 HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI",        patch_i915_icl_hdmi),
 HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_adlp_hdmi),
 HDA_CODEC_ENTRY(0x8086281f, "Raptorlake-P HDMI",       patch_i915_adlp_hdmi),
+HDA_CODEC_ENTRY(0x8086281d, "Meteorlake HDMI", patch_i915_adlp_hdmi),
 HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI", patch_generic_hdmi),
 HDA_CODEC_ENTRY(0x80862882, "Valleyview2 HDMI",        patch_i915_byt_hdmi),
 HDA_CODEC_ENTRY(0x80862883, "Braswell HDMI",   patch_i915_byt_hdmi),
index f3ad454..b0f9541 100644 (file)
@@ -443,6 +443,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
        case 0x10ec0245:
        case 0x10ec0255:
        case 0x10ec0256:
+       case 0x19e58326:
        case 0x10ec0257:
        case 0x10ec0282:
        case 0x10ec0283:
@@ -580,6 +581,7 @@ static void alc_shutup_pins(struct hda_codec *codec)
        switch (codec->core.vendor_id) {
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
        case 0x10ec0283:
        case 0x10ec0286:
        case 0x10ec0288:
@@ -3247,6 +3249,7 @@ static void alc_disable_headset_jack_key(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_write_coef_idx(codec, 0x48, 0x0);
                alc_update_coef_idx(codec, 0x49, 0x0045, 0x0);
                break;
@@ -3275,6 +3278,7 @@ static void alc_enable_headset_jack_key(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_write_coef_idx(codec, 0x48, 0xd011);
                alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
                break;
@@ -4910,6 +4914,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_process_coef_fw(codec, coef0256);
                break;
        case 0x10ec0234:
@@ -5025,6 +5030,7 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin,
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_write_coef_idx(codec, 0x45, 0xc489);
                snd_hda_set_pin_ctl_cache(codec, hp_pin, 0);
                alc_process_coef_fw(codec, coef0256);
@@ -5175,6 +5181,7 @@ static void alc_headset_mode_default(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_write_coef_idx(codec, 0x1b, 0x0e4b);
                alc_write_coef_idx(codec, 0x45, 0xc089);
                msleep(50);
@@ -5274,6 +5281,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_process_coef_fw(codec, coef0256);
                break;
        case 0x10ec0234:
@@ -5388,6 +5396,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_process_coef_fw(codec, coef0256);
                break;
        case 0x10ec0234:
@@ -5489,6 +5498,7 @@ static void alc_determine_headset_type(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_write_coef_idx(codec, 0x1b, 0x0e4b);
                alc_write_coef_idx(codec, 0x06, 0x6104);
                alc_write_coefex_idx(codec, 0x57, 0x3, 0x09a3);
@@ -5783,6 +5793,7 @@ static void alc255_set_default_jack_type(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_process_coef_fw(codec, alc256fw);
                break;
        }
@@ -6385,6 +6396,7 @@ static void alc_combo_jack_hp_jd_restart(struct hda_codec *codec)
        case 0x10ec0236:
        case 0x10ec0255:
        case 0x10ec0256:
+       case 0x19e58326:
                alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
                alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
                break;
@@ -9059,6 +9071,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x103c, 0x89c3, "Zbook Studio G9", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
        SND_PCI_QUIRK(0x103c, 0x89c6, "Zbook Fury 17 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
        SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+       SND_PCI_QUIRK(0x103c, 0x8a78, "HP Dev One", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
        SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
        SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
        SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
@@ -9258,6 +9271,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
        SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
        SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+       SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
        SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
        SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
        SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
@@ -10095,6 +10109,7 @@ static int patch_alc269(struct hda_codec *codec)
        case 0x10ec0230:
        case 0x10ec0236:
        case 0x10ec0256:
+       case 0x19e58326:
                spec->codec_variant = ALC269_TYPE_ALC256;
                spec->shutup = alc256_shutup;
                spec->init_hook = alc256_init;
@@ -11545,6 +11560,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
        HDA_CODEC_ENTRY(0x10ec0b00, "ALCS1200A", patch_alc882),
        HDA_CODEC_ENTRY(0x10ec1168, "ALC1220", patch_alc882),
        HDA_CODEC_ENTRY(0x10ec1220, "ALC1220", patch_alc882),
+       HDA_CODEC_ENTRY(0x19e58326, "HW8326", patch_alc269),
        {} /* terminator */
 };
 MODULE_DEVICE_TABLE(hdaudio, snd_hda_id_realtek);
index 920190d..dfe85dc 100644 (file)
@@ -444,7 +444,8 @@ static bool cs35l36_volatile_reg(struct device *dev, unsigned int reg)
        }
 }
 
-static DECLARE_TLV_DB_SCALE(dig_vol_tlv, -10200, 25, 0);
+static const DECLARE_TLV_DB_RANGE(dig_vol_tlv, 0, 912,
+                                 TLV_DB_MINMAX_ITEM(-10200, 1200));
 static DECLARE_TLV_DB_SCALE(amp_gain_tlv, 0, 1, 1);
 
 static const char * const cs35l36_pcm_sftramp_text[] =  {
index aff6185..0e93318 100644 (file)
@@ -143,7 +143,7 @@ static const struct snd_kcontrol_new cs42l51_snd_controls[] = {
                        0, 0xA0, 96, adc_att_tlv),
        SOC_DOUBLE_R_SX_TLV("PGA Volume",
                        CS42L51_ALC_PGA_CTL, CS42L51_ALC_PGB_CTL,
-                       0, 0x1A, 30, pga_tlv),
+                       0, 0x19, 30, pga_tlv),
        SOC_SINGLE("Playback Deemphasis Switch", CS42L51_DAC_CTL, 3, 1, 0),
        SOC_SINGLE("Auto-Mute Switch", CS42L51_DAC_CTL, 2, 1, 0),
        SOC_SINGLE("Soft Ramp Switch", CS42L51_DAC_CTL, 1, 1, 0),
index 9b182b5..10e6964 100644 (file)
@@ -137,7 +137,9 @@ static DECLARE_TLV_DB_SCALE(mic_tlv, 1600, 100, 0);
 
 static DECLARE_TLV_DB_SCALE(pga_tlv, -600, 50, 0);
 
-static DECLARE_TLV_DB_SCALE(mix_tlv, -50, 50, 0);
+static DECLARE_TLV_DB_SCALE(pass_tlv, -6000, 50, 0);
+
+static DECLARE_TLV_DB_SCALE(mix_tlv, -5150, 50, 0);
 
 static DECLARE_TLV_DB_SCALE(beep_tlv, -56, 200, 0);
 
@@ -351,7 +353,7 @@ static const struct snd_kcontrol_new cs42l52_snd_controls[] = {
                              CS42L52_SPKB_VOL, 0, 0x40, 0xC0, hl_tlv),
 
        SOC_DOUBLE_R_SX_TLV("Bypass Volume", CS42L52_PASSTHRUA_VOL,
-                             CS42L52_PASSTHRUB_VOL, 0, 0x88, 0x90, pga_tlv),
+                             CS42L52_PASSTHRUB_VOL, 0, 0x88, 0x90, pass_tlv),
 
        SOC_DOUBLE("Bypass Mute", CS42L52_MISC_CTL, 4, 5, 1, 0),
 
@@ -364,7 +366,7 @@ static const struct snd_kcontrol_new cs42l52_snd_controls[] = {
                              CS42L52_ADCB_VOL, 0, 0xA0, 0x78, ipd_tlv),
        SOC_DOUBLE_R_SX_TLV("ADC Mixer Volume",
                             CS42L52_ADCA_MIXER_VOL, CS42L52_ADCB_MIXER_VOL,
-                               0, 0x19, 0x7F, ipd_tlv),
+                               0, 0x19, 0x7F, mix_tlv),
 
        SOC_DOUBLE("ADC Switch", CS42L52_ADC_MISC_CTL, 0, 1, 1, 0),
 
index dc23007..510c942 100644 (file)
@@ -391,9 +391,9 @@ static const struct snd_kcontrol_new cs42l56_snd_controls[] = {
        SOC_DOUBLE("ADC Boost Switch", CS42L56_GAIN_BIAS_CTL, 3, 2, 1, 1),
 
        SOC_DOUBLE_R_SX_TLV("Headphone Volume", CS42L56_HPA_VOLUME,
-                             CS42L56_HPB_VOLUME, 0, 0x84, 0x48, hl_tlv),
+                             CS42L56_HPB_VOLUME, 0, 0x44, 0x48, hl_tlv),
        SOC_DOUBLE_R_SX_TLV("LineOut Volume", CS42L56_LOA_VOLUME,
-                             CS42L56_LOB_VOLUME, 0, 0x84, 0x48, hl_tlv),
+                             CS42L56_LOB_VOLUME, 0, 0x44, 0x48, hl_tlv),
 
        SOC_SINGLE_TLV("Bass Shelving Volume", CS42L56_TONE_CTL,
                        0, 0x00, 1, tone_tlv),
index 7035452..360ca2f 100644 (file)
@@ -348,22 +348,22 @@ static const struct snd_kcontrol_new cs53l30_snd_controls[] = {
        SOC_ENUM("ADC2 NG Delay", adc2_ng_delay_enum),
 
        SOC_SINGLE_SX_TLV("ADC1A PGA Volume",
-                   CS53L30_ADC1A_AFE_CTL, 0, 0x34, 0x18, pga_tlv),
+                   CS53L30_ADC1A_AFE_CTL, 0, 0x34, 0x24, pga_tlv),
        SOC_SINGLE_SX_TLV("ADC1B PGA Volume",
-                   CS53L30_ADC1B_AFE_CTL, 0, 0x34, 0x18, pga_tlv),
+                   CS53L30_ADC1B_AFE_CTL, 0, 0x34, 0x24, pga_tlv),
        SOC_SINGLE_SX_TLV("ADC2A PGA Volume",
-                   CS53L30_ADC2A_AFE_CTL, 0, 0x34, 0x18, pga_tlv),
+                   CS53L30_ADC2A_AFE_CTL, 0, 0x34, 0x24, pga_tlv),
        SOC_SINGLE_SX_TLV("ADC2B PGA Volume",
-                   CS53L30_ADC2B_AFE_CTL, 0, 0x34, 0x18, pga_tlv),
+                   CS53L30_ADC2B_AFE_CTL, 0, 0x34, 0x24, pga_tlv),
 
        SOC_SINGLE_SX_TLV("ADC1A Digital Volume",
-                   CS53L30_ADC1A_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv),
+                   CS53L30_ADC1A_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv),
        SOC_SINGLE_SX_TLV("ADC1B Digital Volume",
-                   CS53L30_ADC1B_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv),
+                   CS53L30_ADC1B_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv),
        SOC_SINGLE_SX_TLV("ADC2A Digital Volume",
-                   CS53L30_ADC2A_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv),
+                   CS53L30_ADC2A_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv),
        SOC_SINGLE_SX_TLV("ADC2B Digital Volume",
-                   CS53L30_ADC2B_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv),
+                   CS53L30_ADC2B_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv),
 };
 
 static const struct snd_soc_dapm_widget cs53l30_dapm_widgets[] = {
index 3f00ead..dd53dfd 100644 (file)
@@ -161,13 +161,16 @@ static int es8328_put_deemph(struct snd_kcontrol *kcontrol,
        if (deemph > 1)
                return -EINVAL;
 
+       if (es8328->deemph == deemph)
+               return 0;
+
        ret = es8328_set_deemph(component);
        if (ret < 0)
                return ret;
 
        es8328->deemph = deemph;
 
-       return 0;
+       return 1;
 }
 
 
index 66bbd8f..08f6c56 100644 (file)
@@ -740,6 +740,8 @@ static int nau8822_set_pll(struct snd_soc_dai *dai, int pll_id, int source,
                pll_param->pll_int, pll_param->pll_frac,
                pll_param->mclk_scaler, pll_param->pre_factor);
 
+       snd_soc_component_update_bits(component,
+               NAU8822_REG_POWER_MANAGEMENT_1, NAU8822_PLL_EN_MASK, NAU8822_PLL_OFF);
        snd_soc_component_update_bits(component,
                NAU8822_REG_PLL_N, NAU8822_PLLMCLK_DIV2 | NAU8822_PLLN_MASK,
                (pll_param->pre_factor ? NAU8822_PLLMCLK_DIV2 : 0) |
@@ -757,6 +759,8 @@ static int nau8822_set_pll(struct snd_soc_dai *dai, int pll_id, int source,
                pll_param->mclk_scaler << NAU8822_MCLKSEL_SFT);
        snd_soc_component_update_bits(component,
                NAU8822_REG_CLOCKING, NAU8822_CLKM_MASK, NAU8822_CLKM_PLL);
+       snd_soc_component_update_bits(component,
+               NAU8822_REG_POWER_MANAGEMENT_1, NAU8822_PLL_EN_MASK, NAU8822_PLL_ON);
 
        return 0;
 }
index 489191f..b45d42c 100644 (file)
@@ -90,6 +90,9 @@
 #define NAU8822_REFIMP_3K                      0x3
 #define NAU8822_IOBUF_EN                       (0x1 << 2)
 #define NAU8822_ABIAS_EN                       (0x1 << 3)
+#define NAU8822_PLL_EN_MASK                    (0x1 << 5)
+#define NAU8822_PLL_ON                         (0x1 << 5)
+#define NAU8822_PLL_OFF                                (0x0 << 5)
 
 /* NAU8822_REG_AUDIO_INTERFACE (0x4) */
 #define NAU8822_AIFMT_MASK                     (0x3 << 3)
index 34cd5a2..5cca893 100644 (file)
@@ -3868,6 +3868,7 @@ static int wm8962_runtime_suspend(struct device *dev)
 #endif
 
 static const struct dev_pm_ops wm8962_pm = {
+       SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
        SET_RUNTIME_PM_OPS(wm8962_runtime_suspend, wm8962_runtime_resume, NULL)
 };
 
index 7973a75..6d7fd88 100644 (file)
@@ -333,7 +333,7 @@ int wm_adsp_fw_put(struct snd_kcontrol *kcontrol,
        struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
        struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
        struct wm_adsp *dsp = snd_soc_component_get_drvdata(component);
-       int ret = 0;
+       int ret = 1;
 
        if (ucontrol->value.enumerated.item[0] == dsp[e->shift_l].fw)
                return 0;
index fa950dd..e765da9 100644 (file)
@@ -1293,6 +1293,7 @@ static const struct of_device_id fsl_sai_ids[] = {
        { .compatible = "fsl,imx8mm-sai", .data = &fsl_sai_imx8mm_data },
        { .compatible = "fsl,imx8mp-sai", .data = &fsl_sai_imx8mp_data },
        { .compatible = "fsl,imx8ulp-sai", .data = &fsl_sai_imx8ulp_data },
+       { .compatible = "fsl,imx8mn-sai", .data = &fsl_sai_imx8mp_data },
        { /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, fsl_sai_ids);
index e71d74e..f4192df 100644 (file)
@@ -54,22 +54,29 @@ static struct snd_soc_dai_link_component cs35l41_components[] = {
        },
 };
 
+/*
+ * Mapping between ACPI instance id and speaker position.
+ *
+ * Four speakers:
+ *         0: Tweeter left, 1: Woofer left
+ *         2: Tweeter right, 3: Woofer right
+ */
 static struct snd_soc_codec_conf cs35l41_codec_conf[] = {
        {
                .dlc = COMP_CODEC_CONF(CS35L41_DEV0_NAME),
-               .name_prefix = "WL",
+               .name_prefix = "TL",
        },
        {
                .dlc = COMP_CODEC_CONF(CS35L41_DEV1_NAME),
-               .name_prefix = "WR",
+               .name_prefix = "WL",
        },
        {
                .dlc = COMP_CODEC_CONF(CS35L41_DEV2_NAME),
-               .name_prefix = "TL",
+               .name_prefix = "TR",
        },
        {
                .dlc = COMP_CODEC_CONF(CS35L41_DEV3_NAME),
-               .name_prefix = "TR",
+               .name_prefix = "WR",
        },
 };
 
@@ -101,6 +108,21 @@ static int cs35l41_init(struct snd_soc_pcm_runtime *rtd)
        return ret;
 }
 
+/*
+ * Channel map:
+ *
+ * TL/WL: ASPRX1 on slot 0, ASPRX2 on slot 1 (default)
+ * TR/WR: ASPRX1 on slot 1, ASPRX2 on slot 0
+ */
+static const struct {
+       unsigned int rx[2];
+} cs35l41_channel_map[] = {
+       {.rx = {0, 1}}, /* TL */
+       {.rx = {0, 1}}, /* WL */
+       {.rx = {1, 0}}, /* TR */
+       {.rx = {1, 0}}, /* WR */
+};
+
 static int cs35l41_hw_params(struct snd_pcm_substream *substream,
                             struct snd_pcm_hw_params *params)
 {
@@ -134,6 +156,16 @@ static int cs35l41_hw_params(struct snd_pcm_substream *substream,
                                ret);
                        return ret;
                }
+
+               /* setup channel map */
+               ret = snd_soc_dai_set_channel_map(codec_dai, 0, NULL,
+                                                 ARRAY_SIZE(cs35l41_channel_map[i].rx),
+                                                 (unsigned int *)cs35l41_channel_map[i].rx);
+               if (ret < 0) {
+                       dev_err(codec_dai->dev, "fail to set channel map, ret %d\n",
+                               ret);
+                       return ret;
+               }
        }
 
        return 0;
index f03a7ae..b41ab7a 100644 (file)
@@ -898,7 +898,7 @@ static int lpass_platform_cdc_dma_mmap(struct snd_pcm_substream *substream,
        struct snd_pcm_runtime *runtime = substream->runtime;
        unsigned long size, offset;
 
-       vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+       vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
        size = vma->vm_end - vma->vm_start;
        offset = vma->vm_pgoff << PAGE_SHIFT;
        return io_remap_pfn_range(vma, vma->vm_start,
index 8d74063..2897609 100644 (file)
@@ -318,7 +318,7 @@ sink_prepare:
                        p->walking = false;
                        if (ret < 0) {
                                /* unprepare the source widget */
-                               if (!widget_ops[widget->id].ipc_unprepare && swidget->prepared) {
+                               if (widget_ops[widget->id].ipc_unprepare && swidget->prepared) {
                                        widget_ops[widget->id].ipc_unprepare(swidget);
                                        swidget->prepared = false;
                                }
index 03490a4..6bdfa52 100644 (file)
@@ -150,7 +150,7 @@ static ssize_t sof_msg_inject_dfs_write(struct file *file, const char __user *bu
 {
        struct sof_client_dev *cdev = file->private_data;
        struct sof_msg_inject_priv *priv = cdev->data;
-       size_t size;
+       ssize_t size;
        int ret;
 
        if (*ppos)
@@ -158,8 +158,10 @@ static ssize_t sof_msg_inject_dfs_write(struct file *file, const char __user *bu
 
        size = simple_write_to_buffer(priv->tx_buffer, priv->max_msg_size,
                                      ppos, buffer, count);
+       if (size < 0)
+               return size;
        if (size != count)
-               return size > 0 ? -EFAULT : size;
+               return -EFAULT;
 
        memset(priv->rx_buffer, 0, priv->max_msg_size);
 
@@ -179,7 +181,7 @@ static ssize_t sof_msg_inject_ipc4_dfs_write(struct file *file,
        struct sof_client_dev *cdev = file->private_data;
        struct sof_msg_inject_priv *priv = cdev->data;
        struct sof_ipc4_msg *ipc4_msg = priv->tx_buffer;
-       size_t size;
+       ssize_t size;
        int ret;
 
        if (*ppos)
@@ -192,18 +194,20 @@ static ssize_t sof_msg_inject_ipc4_dfs_write(struct file *file,
        size = simple_write_to_buffer(&ipc4_msg->header_u64,
                                      sizeof(ipc4_msg->header_u64),
                                      ppos, buffer, count);
+       if (size < 0)
+               return size;
        if (size != sizeof(ipc4_msg->header_u64))
-               return size > 0 ? -EFAULT : size;
+               return -EFAULT;
 
        count -= size;
-       if (!count) {
-               /* Copy the payload */
-               size = simple_write_to_buffer(ipc4_msg->data_ptr,
-                                             priv->max_msg_size, ppos, buffer,
-                                             count);
-               if (size != count)
-                       return size > 0 ? -EFAULT : size;
-       }
+       /* Copy the payload */
+       size = simple_write_to_buffer(ipc4_msg->data_ptr,
+                                     priv->max_msg_size, ppos, buffer,
+                                     count);
+       if (size < 0)
+               return size;
+       if (size != count)
+               return -EFAULT;
 
        ipc4_msg->data_size = count;
 
index b470404..e692ae0 100644 (file)
@@ -291,6 +291,9 @@ int snd_usb_audioformat_set_sync_ep(struct snd_usb_audio *chip,
        bool is_playback;
        int err;
 
+       if (fmt->sync_ep)
+               return 0; /* already set up */
+
        alts = snd_usb_get_host_interface(chip, fmt->iface, fmt->altsetting);
        if (!alts)
                return 0;
@@ -304,7 +307,7 @@ int snd_usb_audioformat_set_sync_ep(struct snd_usb_audio *chip,
         * Generic sync EP handling
         */
 
-       if (altsd->bNumEndpoints < 2)
+       if (fmt->ep_idx > 0 || altsd->bNumEndpoints < 2)
                return 0;
 
        is_playback = !(get_endpoint(alts, 0)->bEndpointAddress & USB_DIR_IN);
index 78eb41b..4f56e17 100644 (file)
@@ -2658,7 +2658,12 @@ YAMAHA_DEVICE(0x7010, "UB99"),
                                        .nr_rates = 2,
                                        .rate_table = (unsigned int[]) {
                                                44100, 48000
-                                       }
+                                       },
+                                       .sync_ep = 0x82,
+                                       .sync_iface = 0,
+                                       .sync_altsetting = 1,
+                                       .sync_ep_idx = 1,
+                                       .implicit_fb = 1,
                                }
                        },
                        {
index d9aad15..02bb8cb 100644 (file)
@@ -395,6 +395,18 @@ static void test_func_map_prog_compatibility(void)
                                     "./test_attach_probe.o");
 }
 
+static void test_func_replace_global_func(void)
+{
+       const char *prog_name[] = {
+               "freplace/test_pkt_access",
+       };
+
+       test_fexit_bpf2bpf_common("./freplace_global_func.o",
+                                 "./test_pkt_access.o",
+                                 ARRAY_SIZE(prog_name),
+                                 prog_name, false, NULL);
+}
+
 /* NOTE: affect other tests, must run in serial mode */
 void serial_test_fexit_bpf2bpf(void)
 {
@@ -416,4 +428,6 @@ void serial_test_fexit_bpf2bpf(void)
                test_func_replace_multi();
        if (test__start_subtest("fmod_ret_freplace"))
                test_fmod_ret_freplace();
+       if (test__start_subtest("func_replace_global_func"))
+               test_func_replace_global_func();
 }
diff --git a/tools/testing/selftests/bpf/progs/freplace_global_func.c b/tools/testing/selftests/bpf/progs/freplace_global_func.c
new file mode 100644 (file)
index 0000000..96cb61a
--- /dev/null
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+__noinline
+int test_ctx_global_func(struct __sk_buff *skb)
+{
+       volatile int retval = 1;
+       return retval;
+}
+
+SEC("freplace/test_pkt_access")
+int new_test_pkt_access(struct __sk_buff *skb)
+{
+       return test_ctx_global_func(skb);
+}
+
+char _license[] SEC("license") = "GPL";
index e0b2bb1..3330fb1 100644 (file)
@@ -44,7 +44,7 @@ static inline void nop_loop(void)
 {
        int i;
 
-       for (i = 0; i < 1000000; i++)
+       for (i = 0; i < 100000000; i++)
                asm volatile("nop");
 }
 
@@ -56,12 +56,14 @@ static inline void check_tsc_msr_rdtsc(void)
        tsc_freq = rdmsr(HV_X64_MSR_TSC_FREQUENCY);
        GUEST_ASSERT(tsc_freq > 0);
 
-       /* First, check MSR-based clocksource */
+       /* For increased accuracy, take mean rdtsc() before and after rdmsr() */
        r1 = rdtsc();
        t1 = rdmsr(HV_X64_MSR_TIME_REF_COUNT);
+       r1 = (r1 + rdtsc()) / 2;
        nop_loop();
        r2 = rdtsc();
        t2 = rdmsr(HV_X64_MSR_TIME_REF_COUNT);
+       r2 = (r2 + rdtsc()) / 2;
 
        GUEST_ASSERT(r2 > r1 && t2 > t1);
 
@@ -181,12 +183,14 @@ static void host_check_tsc_msr_rdtsc(struct kvm_vm *vm)
        tsc_freq = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TSC_FREQUENCY);
        TEST_ASSERT(tsc_freq > 0, "TSC frequency must be nonzero");
 
-       /* First, check MSR-based clocksource */
+       /* For increased accuracy, take mean rdtsc() before and after ioctl */
        r1 = rdtsc();
        t1 = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TIME_REF_COUNT);
+       r1 = (r1 + rdtsc()) / 2;
        nop_loop();
        r2 = rdtsc();
        t2 = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TIME_REF_COUNT);
+       r2 = (r2 + rdtsc()) / 2;
 
        TEST_ASSERT(t2 > t1, "Time reference MSR is not monotonic (%ld <= %ld)", t1, t2);
 
index f91bf14..8a69c91 100644 (file)
@@ -2,6 +2,7 @@
 
 CLANG ?= clang
 CCINCLUDE += -I../../bpf
+CCINCLUDE += -I../../../lib
 CCINCLUDE += -I../../../../../usr/include/
 
 TEST_CUSTOM_PROGS = $(OUTPUT)/bpf/nat6to4.o
@@ -10,5 +11,4 @@ all: $(TEST_CUSTOM_PROGS)
 $(OUTPUT)/%.o: %.c
        $(CLANG) -O2 -target bpf -c $< $(CCINCLUDE) -o $@
 
-clean:
-       rm -f $(TEST_CUSTOM_PROGS)
+EXTRA_CLEAN := $(TEST_CUSTOM_PROGS)
index eb8543b..924ecb3 100755 (executable)
@@ -374,6 +374,45 @@ EOF
        return $lret
 }
 
+test_local_dnat_portonly()
+{
+       local family=$1
+       local daddr=$2
+       local lret=0
+       local sr_s
+       local sr_r
+
+ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family nat {
+       chain output {
+               type nat hook output priority 0; policy accept;
+               meta l4proto tcp dnat to :2000
+
+       }
+}
+EOF
+       if [ $? -ne 0 ]; then
+               if [ $family = "inet" ];then
+                       echo "SKIP: inet port test"
+                       test_inet_nat=false
+                       return
+               fi
+               echo "SKIP: Could not add $family dnat hook"
+               return
+       fi
+
+       echo SERVER-$family | ip netns exec "$ns1" timeout 5 socat -u STDIN TCP-LISTEN:2000 &
+       sr_s=$!
+
+       result=$(ip netns exec "$ns0" timeout 1 socat TCP:$daddr:2000 STDOUT)
+
+       if [ "$result" = "SERVER-inet" ];then
+               echo "PASS: inet port rewrite without l3 address"
+       else
+               echo "ERROR: inet port rewrite"
+               ret=1
+       fi
+}
 
 test_masquerade6()
 {
@@ -1148,6 +1187,10 @@ fi
 reset_counters
 test_local_dnat ip
 test_local_dnat6 ip6
+
+reset_counters
+test_local_dnat_portonly inet 10.0.1.99
+
 reset_counters
 $test_inet_nat && test_local_dnat inet
 $test_inet_nat && test_local_dnat6 inet
index 64ec222..44c4767 100644 (file)
@@ -4300,8 +4300,11 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
                kvm_put_kvm_no_destroy(kvm);
                mutex_lock(&kvm->lock);
                list_del(&dev->vm_node);
+               if (ops->release)
+                       ops->release(dev);
                mutex_unlock(&kvm->lock);
-               ops->destroy(dev);
+               if (ops->destroy)
+                       ops->destroy(dev);
                return ret;
        }