Back-merge the 5.11 devel branch for further patching.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
S: Las Heras, Mendoza CP 5539
S: Argentina
+N: Jay Cliburn
+E: jcliburn@gmail.com
+D: ATLX Ethernet drivers
+
N: Steven P. Cole
E: scole@lanl.gov
E: elenstev@mesatop.com
D: ISDN Maintainer
S: USA
+N: Gerrit Renker
+E: gerrit@erg.abdn.ac.uk
+D: DCCP protocol support.
+
N: Philip Gladstone
E: philip@gladstonefamily.net
D: Kernel / timekeeping stuff
E: seasons@makosteszta.sote.hu
D: Original author of software suspend
+N: Alexey Kuznetsov
+E: kuznet@ms2.inr.ac.ru
+D: Author and maintainer of large parts of the networking stack
+
N: Jaroslav Kysela
E: perex@perex.cz
W: https://www.perex.cz
E: wolfgang@iksw-muees.de
D: Auerswald USB driver
+N: Shrijeet Mukherjee
+E: shrijeet@gmail.com
+D: Network routing domains (VRF).
+
N: Paul Mundt
E: paul.mundt@gmail.com
D: SuperH maintainer
S: 16 Baliqiao Nanjie, Beijing 101100
S: People's Republic of China
+N: Aviad Yehezkel
+E: aviadye@nvidia.com
+D: Kernel TLS implementation and offload support.
+
N: Victor Yodaiken
E: yodaiken@fsmlabs.com
D: RTLinux (RealTime Linux)
S: Bellevue, Washington 98007
S: USA
+N: Wensong Zhang
+E: wensong@linux-vs.org
+D: IP virtual server (IPVS).
+
N: Haojian Zhuang
E: haojian.zhuang@gmail.com
D: MMP support
This option is obsoleted by the "nopv" option, which
			has an equivalent effect for the XEN platform.
+ xen_no_vector_callback
+ [KNL,X86,XEN] Disable the vector callback for Xen
+ event channel interrupts.
+
xen_scrub_pages= [XEN]
Boolean option to control scrubbing pages before giving them back
			to Xen, for use by other domains. Can also be changed at runtime
enum:
- renesas,etheravb-r8a774a1
- renesas,etheravb-r8a774b1
+ - renesas,etheravb-r8a774e1
- renesas,etheravb-r8a7795
- renesas,etheravb-r8a7796
- renesas,etheravb-r8a77961
* snps,route-dcbcp, DCB Control Packets
* snps,route-up, Untagged Packets
* snps,route-multi-broad, Multicast & Broadcast Packets
- * snps,priority, RX queue priority (Range 0x0 to 0xF)
+ * snps,priority, bitmask of the tagged frames priorities assigned to
+ the queue
snps,mtl-tx-config:
$ref: /schemas/types.yaml#/definitions/phandle
* snps,idle_slope, unlock on WoL
* snps,high_credit, max write outstanding req. limit
* snps,low_credit, max read outstanding req. limit
- * snps,priority, TX queue priority (Range 0x0 to 0xF)
+ * snps,priority, bitmask of the priorities assigned to the queue.
+ When a PFC frame is received with priorities matching the bitmask,
+ the queue is blocked from transmitting for the pause time specified
+ in the PFC frame.
snps,reset-gpio:
deprecated: true
0x00000010 Memory Uncorrectable non-fatal
0x00000020 Memory Uncorrectable fatal
0x00000040 PCI Express Correctable
- 0x00000080 PCI Express Uncorrectable fatal
- 0x00000100 PCI Express Uncorrectable non-fatal
+ 0x00000080 PCI Express Uncorrectable non-fatal
+ 0x00000100 PCI Express Uncorrectable fatal
0x00000200 Platform Correctable
0x00000400 Platform Uncorrectable non-fatal
0x00000800 Platform Uncorrectable fatal
The following is a random collection of documentation regarding
network devices.
-struct net_device allocation rules
-==================================
+struct net_device lifetime rules
+================================
Network device structures need to persist even after the module is unloaded and
must be allocated with alloc_netdev_mqs() and friends.
If the device has been registered successfully, it will be freed on last use
-by free_netdev(). This is required to handle the pathologic case cleanly
-(example: rmmod mydriver </sys/class/net/myeth/mtu )
+by free_netdev(). This is required to handle the pathological case cleanly
+(example: ``rmmod mydriver </sys/class/net/myeth/mtu``)
-alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver
+alloc_netdev_mqs() / alloc_netdev() reserve extra space for driver
private data which gets freed when the network device is freed. If
separately allocated data is attached to the network device
-(netdev_priv(dev)) then it is up to the module exit handler to free that.
+(netdev_priv()) then it is up to the module exit handler to free that.
+
+There are two groups of APIs for registering struct net_device.
+The first group can be used in normal contexts where ``rtnl_lock`` is not
+already held: register_netdev(), unregister_netdev().
+The second group can be used when ``rtnl_lock`` is already held:
+register_netdevice(), unregister_netdevice(), free_netdev().
+
+Simple drivers
+--------------
+
+Most drivers (especially device drivers) handle the lifetime of struct
+net_device in contexts where ``rtnl_lock`` is not held (e.g. driver probe and
+remove paths).
+
+In that case the struct net_device registration is done using
+the register_netdev() and unregister_netdev() functions:
+
+.. code-block:: c
+
+ int probe()
+ {
+ struct my_device_priv *priv;
+ int err;
+
+ dev = alloc_netdev_mqs(...);
+ if (!dev)
+ return -ENOMEM;
+ priv = netdev_priv(dev);
+
+ /* ... do all device setup before calling register_netdev() ...
+ */
+
+ err = register_netdev(dev);
+ if (err)
+ goto err_undo;
+
+ /* net_device is visible to the user! */
+
+ return 0;
+
+ err_undo:
+ /* ... undo the device setup ... */
+ free_netdev(dev);
+ return err;
+ }
+
+ void remove()
+ {
+ unregister_netdev(dev);
+ free_netdev(dev);
+ }
+
+Note that after calling register_netdev() the device is visible in the system.
+Users can open it and start sending / receiving traffic immediately,
+or run any other callback, so all initialization must be done prior to
+registration.
+
+unregister_netdev() closes the device and waits for all users to be done
+with it. The memory of struct net_device itself may still be referenced
+by sysfs but all operations on that device will fail.
+
+free_netdev() can be called after unregister_netdev() returns, or when
+register_netdev() failed.
+
+Device management under RTNL
+----------------------------
+
+Registering struct net_device while in a context which already holds
+the ``rtnl_lock`` requires extra care. In those scenarios most drivers
+will want to make use of struct net_device's ``needs_free_netdev``
+and ``priv_destructor`` members for freeing state.
+
+Example flow of netdev handling under ``rtnl_lock``:
+
+.. code-block:: c
+
+ static void my_setup(struct net_device *dev)
+ {
+ dev->needs_free_netdev = true;
+ }
+
+ static void my_destructor(struct net_device *dev)
+ {
+ struct my_device_priv *priv = netdev_priv(dev);
+
+ some_obj_destroy(priv->obj);
+ some_uninit(priv);
+ }
+
+ int create_link()
+ {
+ struct my_device_priv *priv;
+ int err;
+
+ ASSERT_RTNL();
+
+ dev = alloc_netdev(sizeof(*priv), "net%d", NET_NAME_UNKNOWN, my_setup);
+ if (!dev)
+ return -ENOMEM;
+ priv = netdev_priv(dev);
+
+ /* Implicit constructor */
+ err = some_init(priv);
+ if (err)
+ goto err_free_dev;
+
+ priv->obj = some_obj_create();
+ if (!priv->obj) {
+ err = -ENOMEM;
+ goto err_some_uninit;
+ }
+ /* End of constructor, set the destructor: */
+ dev->priv_destructor = my_destructor;
+
+ err = register_netdevice(dev);
+ if (err)
+ /* register_netdevice() calls destructor on failure */
+ goto err_free_dev;
+
+ /* If anything fails now unregister_netdevice() (or unregister_netdev())
+ * will take care of calling my_destructor and free_netdev().
+ */
+
+ return 0;
+
+ err_some_uninit:
+ some_uninit(priv);
+ err_free_dev:
+ free_netdev(dev);
+ return err;
+ }
+
+If struct net_device.priv_destructor is set, it will be called by the core
+some time after unregister_netdevice(); it will also be called if
+register_netdevice() fails. The callback may be invoked with or without
+``rtnl_lock`` held.
+
+There is no explicit constructor callback; the driver "constructs" the private
+netdev state after allocating it and before registration.
+
+Setting struct net_device.needs_free_netdev makes the core call free_netdev()
+automatically after unregister_netdevice() once all references to the device
+are gone. It only takes effect after a successful call to register_netdevice(),
+so if register_netdevice() fails the driver is responsible for calling
+free_netdev().
+
+free_netdev() is safe to call on error paths right after unregister_netdevice()
+or when register_netdevice() fails. Parts of the netdev (de)registration
+process happen after ``rtnl_lock`` is released; in those cases free_netdev()
+will defer some of the processing until ``rtnl_lock`` is released.
+
+Devices spawned from struct rtnl_link_ops should never free the
+struct net_device directly.
+
+.ndo_init and .ndo_uninit
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``.ndo_init`` and ``.ndo_uninit`` callbacks are called during net_device
+registration and de-registration, under ``rtnl_lock``. Drivers can use
+them, e.g., when parts of their init process need to run under ``rtnl_lock``.
+
+``.ndo_init`` runs before the device is visible in the system; ``.ndo_uninit``
+runs during de-registration, after the device has been closed but while other
+subsystems may still hold outstanding references to the netdevice.
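+
+A minimal sketch (hypothetical driver, reusing the some_obj_create() /
+some_obj_destroy() helpers from the example above) of wiring up both
+callbacks:
+
+.. code-block:: c
+
+ static int my_ndo_init(struct net_device *dev)
+ {
+ struct my_device_priv *priv = netdev_priv(dev);
+
+ /* Runs under rtnl_lock, before the device is visible. */
+ priv->obj = some_obj_create();
+ return priv->obj ? 0 : -ENOMEM;
+ }
+
+ static void my_ndo_uninit(struct net_device *dev)
+ {
+ struct my_device_priv *priv = netdev_priv(dev);
+
+ /* Runs under rtnl_lock, after the device has been closed. */
+ some_obj_destroy(priv->obj);
+ }
+
+ static const struct net_device_ops my_netdev_ops = {
+ .ndo_init = my_ndo_init,
+ .ndo_uninit = my_ndo_uninit,
+ };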
MTU
===
offloads, old connections will remain active after flags are cleared.
TLS encryption cannot be offloaded to devices without checksum calculation
-offload. Hence, TLS TX device feature flag requires NETIF_F_HW_CSUM being set.
+offload. Hence, the TLS TX device feature flag requires TX csum offload to be set.
Disabling the latter implies clearing the former. Disabling TX checksum offload
should not affect old connections, and drivers should make sure checksum
calculation does not break for them.
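+
+A sketch of the dependency between the feature flags (hypothetical helper;
+the core enforces an equivalent rule during netdev feature fixup, where TX
+csum offload means NETIF_F_HW_CSUM, or NETIF_F_IP_CSUM together with
+NETIF_F_IPV6_CSUM):
+
+.. code-block:: c
+
+ static netdev_features_t tls_fix_features(struct net_device *dev,
+ netdev_features_t features)
+ {
+ netdev_features_t ip_csum = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+ bool tx_csum = (features & NETIF_F_HW_CSUM) ||
+ (features & ip_csum) == ip_csum;
+
+ /* No TX checksum offload implies no TLS TX offload. */
+ if (!tx_csum)
+ features &= ~NETIF_F_HW_TLS_TX;
+
+ return features;
+ }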
powersave
oss-emulation
seq-oss
+ jack-injection
--- /dev/null
+============================
+ALSA Jack Software Injection
+============================
+
+Simple Introduction On Jack Injection
+=====================================
+
+Here jack injection means users can inject plugin or plugout events
+into the audio jacks through the debugfs interface, which is helpful
+for validating ALSA userspace changes. For example, suppose we change
+the audio profile switching code in pulseaudio and want to verify that
+the change works as expected and does not introduce a regression; in
+this case, we can inject plugin or plugout events into one or more
+audio jacks, without needing physical access to the machine to plug
+or unplug devices.
+
+In this design, an audio jack is not the same as a physical audio
+jack. A physical audio jack sometimes contains multiple functions, in
+which case the ALSA driver creates multiple ``jack_kctl`` objects for
+one ``snd_jack``: the ``snd_jack`` represents the physical audio jack
+and each ``jack_kctl`` represents one function. For example, for a
+physical jack with the two functions headphone and mic_in, the ALSA
+ASoC driver will build 2 ``jack_kctl`` objects. The jack injection is
+implemented based on the ``jack_kctl`` instead of the ``snd_jack``.
+
+To inject events into audio jacks, we first need to enable the jack
+injection via ``sw_inject_enable``. Once it is enabled, the jack will
+no longer change its state in response to hardware events; we can then
+inject plugin or plugout events via ``jackin_inject`` and check the
+jack state via ``status``. After we finish the test, we need to
+disable the jack injection via ``sw_inject_enable`` again. Once it is
+disabled, the jack state will be restored according to the last
+reported hardware events and will change with future hardware events.
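+
+For example, a typical session with the debugfs nodes described below
+(assuming debugfs is mounted at /sys/kernel/debug) could look like::
+
+  cd /sys/kernel/debug/sound/card1/Headphone_Jack
+  echo 1 > sw_inject_enable   # take the jack over from hardware events
+  echo 1 > jackin_inject      # inject a plugin event
+  cat status                  # now reports "Plugged"
+  echo 0 > jackin_inject      # inject a plugout event
+  echo 0 > sw_inject_enable   # restore control to hardware events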
+
+The Layout of Jack Injection Interface
+======================================
+
+If SND_JACK_INJECTION_DEBUG is enabled in the kernel, the audio
+jack injection interface is created as below:
+::
+
+ $debugfs_mount_dir/sound
+ |-- card0
+ |-- |-- HDMI_DP_pcm_10_Jack
+ |-- |-- |-- jackin_inject
+ |-- |-- |-- kctl_id
+ |-- |-- |-- mask_bits
+ |-- |-- |-- status
+ |-- |-- |-- sw_inject_enable
+ |-- |-- |-- type
+ ...
+ |-- |-- HDMI_DP_pcm_9_Jack
+ |-- |-- jackin_inject
+ |-- |-- kctl_id
+ |-- |-- mask_bits
+ |-- |-- status
+ |-- |-- sw_inject_enable
+ |-- |-- type
+ |-- card1
+ |-- HDMI_DP_pcm_5_Jack
+ |-- |-- jackin_inject
+ |-- |-- kctl_id
+ |-- |-- mask_bits
+ |-- |-- status
+ |-- |-- sw_inject_enable
+ |-- |-- type
+ ...
+ |-- Headphone_Jack
+ |-- |-- jackin_inject
+ |-- |-- kctl_id
+ |-- |-- mask_bits
+ |-- |-- status
+ |-- |-- sw_inject_enable
+ |-- |-- type
+ |-- Headset_Mic_Jack
+ |-- jackin_inject
+ |-- kctl_id
+ |-- mask_bits
+ |-- status
+ |-- sw_inject_enable
+ |-- type
+
+The Explanation Of The Nodes
+============================
+
+kctl_id
+ read-only, get jack_kctl->kctl's id
+ ::
+
+ sound/card1/Headphone_Jack# cat kctl_id
+ Headphone Jack
+
+mask_bits
+ read-only, get jack_kctl's supported events mask_bits
+ ::
+
+ sound/card1/Headphone_Jack# cat mask_bits
+ 0x0001 HEADPHONE(0x0001)
+
+status
+ read-only, get jack_kctl's current status
+
+- headphone unplugged:
+
+ ::
+
+ sound/card1/Headphone_Jack# cat status
+ Unplugged
+
+- headphone plugged:
+
+ ::
+
+ sound/card1/Headphone_Jack# cat status
+ Plugged
+
+type
+ read-only, get snd_jack's supported events from type (all supported events on the physical audio jack)
+ ::
+
+ sound/card1/Headphone_Jack# cat type
+ 0x7803 HEADPHONE(0x0001) MICROPHONE(0x0002) BTN_3(0x0800) BTN_2(0x1000) BTN_1(0x2000) BTN_0(0x4000)
+
+sw_inject_enable
+ read-write, enable or disable injection
+
+- injection disabled:
+
+ ::
+
+ sound/card1/Headphone_Jack# cat sw_inject_enable
+ Jack: Headphone Jack Inject Enabled: 0
+
+- injection enabled:
+
+ ::
+
+ sound/card1/Headphone_Jack# cat sw_inject_enable
+ Jack: Headphone Jack Inject Enabled: 1
+
+- to enable jack injection:
+
+ ::
+
+ sound/card1/Headphone_Jack# echo 1 > sw_inject_enable
+
+- to disable jack injection:
+
+ ::
+
+ sound/card1/Headphone_Jack# echo 0 > sw_inject_enable
+
+jackin_inject
+ write-only, inject plugin or plugout
+
+- to inject plugin:
+
+ ::
+
+ sound/card1/Headphone_Jack# echo 1 > jackin_inject
+
+- to inject plugout:
+
+ ::
+
+ sound/card1/Headphone_Jack# echo 0 > jackin_inject
M: Arthur Kiyanovski <akiyano@amazon.com>
R: Guy Tzalik <gtzalik@amazon.com>
R: Saeed Bishara <saeedb@amazon.com>
-R: Zorik Machulsky <zorik@amazon.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst
M: Felix Kuehling <Felix.Kuehling@amd.com>
L: amd-gfx@lists.freedesktop.org
S: Supported
-T: git git://people.freedesktop.org/~agd5f/linux
+T: git https://gitlab.freedesktop.org/agd5f/linux.git
F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd*.[ch]
F: drivers/gpu/drm/amd/amdkfd/
F: drivers/gpu/drm/amd/include/cik_structs.h
F: drivers/hwmon/asus_atk0110.c
ATLX ETHERNET DRIVERS
-M: Jay Cliburn <jcliburn@gmail.com>
M: Chris Snook <chris.snook@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/scsi/dc395x.*
DCCP PROTOCOL
-M: Gerrit Renker <gerrit@erg.abdn.ac.uk>
L: dccp@vger.kernel.org
-S: Maintained
+S: Orphan
W: http://www.linuxfoundation.org/collaborate/workgroups/networking/dccp
F: include/linux/dccp.h
F: include/linux/tfrc.h
F: drivers/scsi/ips*
IPVS
-M: Wensong Zhang <wensong@linux-vs.org>
M: Simon Horman <horms@verge.net.au>
M: Julian Anastasov <ja@ssi.bg>
L: netdev@vger.kernel.org
NETWORKING [IPv4/IPv6]
M: "David S. Miller" <davem@davemloft.net>
-M: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
M: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
L: netdev@vger.kernel.org
S: Maintained
NETWORKING [TLS]
M: Boris Pismenny <borisp@nvidia.com>
-M: Aviad Yehezkel <aviadye@nvidia.com>
M: John Fastabend <john.fastabend@gmail.com>
M: Daniel Borkmann <daniel@iogearbox.net>
M: Jakub Kicinski <kuba@kernel.org>
M: Christian König <christian.koenig@amd.com>
L: amd-gfx@lists.freedesktop.org
S: Supported
-T: git git://people.freedesktop.org/~agd5f/linux
+T: git https://gitlab.freedesktop.org/agd5f/linux.git
F: drivers/gpu/drm/amd/
F: drivers/gpu/drm/radeon/
F: include/uapi/drm/amdgpu_drm.h
M: David Rientjes <rientjes@google.com>
M: Joonsoo Kim <iamjoonsoo.kim@lge.com>
M: Andrew Morton <akpm@linux-foundation.org>
+M: Vlastimil Babka <vbabka@suse.cz>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/sl?b*.h
VRF
M: David Ahern <dsahern@kernel.org>
-M: Shrijeet Mukherjee <shrijeet@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/vrf.rst
VERSION = 5
PATCHLEVEL = 11
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
NAME = Kleptomaniac Octopus
# *DOCUMENTATION*
}
gnttab_init();
if (!xen_initial_domain())
- xenbus_probe(NULL);
+ xenbus_probe();
/*
* Making sure board specific code will not set up ops for
select HAVE_NMI
select HAVE_PATA_PLATFORM
select HAVE_PERF_EVENTS
- select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI && HW_PERF_EVENTS
- select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select HAVE_REGS_AND_STACK_ACCESS_API
#include <asm/lse.h>
#define ATOMIC_OP(op) \
-static inline void arch_##op(int i, atomic_t *v) \
+static __always_inline void arch_##op(int i, atomic_t *v) \
{ \
__lse_ll_sc_body(op, i, v); \
}
#undef ATOMIC_OP
#define ATOMIC_FETCH_OP(name, op) \
-static inline int arch_##op##name(int i, atomic_t *v) \
+static __always_inline int arch_##op##name(int i, atomic_t *v) \
{ \
return __lse_ll_sc_body(op##name, i, v); \
}
#undef ATOMIC_FETCH_OPS
#define ATOMIC64_OP(op) \
-static inline void arch_##op(long i, atomic64_t *v) \
+static __always_inline void arch_##op(long i, atomic64_t *v) \
{ \
__lse_ll_sc_body(op, i, v); \
}
#undef ATOMIC64_OP
#define ATOMIC64_FETCH_OP(name, op) \
-static inline long arch_##op##name(long i, atomic64_t *v) \
+static __always_inline long arch_##op##name(long i, atomic64_t *v) \
{ \
return __lse_ll_sc_body(op##name, i, v); \
}
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_FETCH_OPS
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
{
return __lse_ll_sc_body(atomic64_dec_if_positive, v);
}
#endif /* CONFIG_ARM64_FORCE_52BIT */
extern phys_addr_t arm64_dma_phys_limit;
-extern phys_addr_t arm64_dma32_phys_limit;
-#define ARCH_LOW_ADDRESS_LIMIT ((arm64_dma_phys_limit ? : arm64_dma32_phys_limit) - 1)
+#define ARCH_LOW_ADDRESS_LIMIT (arm64_dma_phys_limit - 1)
struct debug_info {
#ifdef CONFIG_HAVE_HW_BREAKPOINT
DEFINE(S_SDEI_TTBR1, offsetof(struct pt_regs, sdei_ttbr1));
DEFINE(S_PMR_SAVE, offsetof(struct pt_regs, pmr_save));
DEFINE(S_STACKFRAME, offsetof(struct pt_regs, stackframe));
- DEFINE(S_FRAME_SIZE, sizeof(struct pt_regs));
+ DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));
BLANK();
#ifdef CONFIG_COMPAT
DEFINE(COMPAT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_sigframe, uc.uc_mcontext.arm_r0));
*/
.macro ftrace_regs_entry, allregs=0
/* Make room for pt_regs, plus a callee frame */
- sub sp, sp, #(S_FRAME_SIZE + 16)
+ sub sp, sp, #(PT_REGS_SIZE + 16)
/* Save function arguments (and x9 for simplicity) */
stp x0, x1, [sp, #S_X0]
.endif
/* Save the callsite's SP and LR */
- add x10, sp, #(S_FRAME_SIZE + 16)
+ add x10, sp, #(PT_REGS_SIZE + 16)
stp x9, x10, [sp, #S_LR]
/* Save the PC after the ftrace callsite */
str x30, [sp, #S_PC]
/* Create a frame record for the callsite above pt_regs */
- stp x29, x9, [sp, #S_FRAME_SIZE]
- add x29, sp, #S_FRAME_SIZE
+ stp x29, x9, [sp, #PT_REGS_SIZE]
+ add x29, sp, #PT_REGS_SIZE
/* Create our frame record within pt_regs. */
stp x29, x30, [sp, #S_STACKFRAME]
ldr x9, [sp, #S_PC]
/* Restore the callsite's SP */
- add sp, sp, #S_FRAME_SIZE + 16
+ add sp, sp, #PT_REGS_SIZE + 16
ret x9
SYM_CODE_END(ftrace_common)
ldr x0, [sp, #S_PC]
sub x0, x0, #AARCH64_INSN_SIZE // ip (callsite's BL insn)
add x1, sp, #S_LR // parent_ip (callsite's LR)
- ldr x2, [sp, #S_FRAME_SIZE] // parent fp (callsite's FP)
+ ldr x2, [sp, #PT_REGS_SIZE] // parent fp (callsite's FP)
bl prepare_ftrace_return
b ftrace_common_return
SYM_CODE_END(ftrace_graph_caller)
.endif
#endif
- sub sp, sp, #S_FRAME_SIZE
+ sub sp, sp, #PT_REGS_SIZE
#ifdef CONFIG_VMAP_STACK
/*
* Test whether the SP has overflowed, without corrupting a GPR.
* userspace, and can clobber EL0 registers to free up GPRs.
*/
- /* Stash the original SP (minus S_FRAME_SIZE) in tpidr_el0. */
+ /* Stash the original SP (minus PT_REGS_SIZE) in tpidr_el0. */
msr tpidr_el0, x0
/* Recover the original x0 value and stash it in tpidrro_el0 */
scs_load tsk, x20
.else
- add x21, sp, #S_FRAME_SIZE
+ add x21, sp, #PT_REGS_SIZE
get_current_task tsk
.endif /* \el == 0 */
mrs x22, elr_el1
ldp x26, x27, [sp, #16 * 13]
ldp x28, x29, [sp, #16 * 14]
ldr lr, [sp, #S_LR]
- add sp, sp, #S_FRAME_SIZE // restore sp
+ add sp, sp, #PT_REGS_SIZE // restore sp
.if \el == 0
alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
/*
 * Store the original GPRs to the new stack. The original SP (minus
- * S_FRAME_SIZE) was stashed in tpidr_el0 by kernel_ventry.
+ * PT_REGS_SIZE) was stashed in tpidr_el0 by kernel_ventry.
*/
- sub sp, sp, #S_FRAME_SIZE
+ sub sp, sp, #PT_REGS_SIZE
kernel_entry 1
mrs x0, tpidr_el0
- add x0, x0, #S_FRAME_SIZE
+ add x0, x0, #PT_REGS_SIZE
str x0, [sp, #S_SP]
/* Stash the regs for handle_bad_stack */
#include <linux/platform_device.h>
#include <linux/sched_clock.h>
#include <linux/smp.h>
-#include <linux/nmi.h>
-#include <linux/cpufreq.h>
/* ARMv8 Cortex-A53 specific event types. */
#define ARMV8_A53_PERFCTR_PREF_LINEFILL 0xC2
static int __init armv8_pmu_driver_init(void)
{
- int ret;
-
if (acpi_disabled)
- ret = platform_driver_register(&armv8_pmu_driver);
+ return platform_driver_register(&armv8_pmu_driver);
else
- ret = arm_pmu_acpi_probe(armv8_pmuv3_init);
-
- /*
- * Try to re-initialize lockup detector after PMU init in
- * case PMU events are triggered via NMIs.
- */
- if (ret == 0 && arm_pmu_irq_is_nmi())
- lockup_detector_init();
-
- return ret;
+ return arm_pmu_acpi_probe(armv8_pmuv3_init);
}
device_initcall(armv8_pmu_driver_init)
userpg->cap_user_time_zero = 1;
userpg->cap_user_time_short = 1;
}
-
-#ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF
-/*
- * Safe maximum CPU frequency in case a particular platform doesn't implement
- * cpufreq driver. Although, architecture doesn't put any restrictions on
- * maximum frequency but 5 GHz seems to be safe maximum given the available
- * Arm CPUs in the market which are clocked much less than 5 GHz. On the other
- * hand, we can't make it much higher as it would lead to a large hard-lockup
- * detection timeout on parts which are running slower (eg. 1GHz on
- * Developerbox) and doesn't possess a cpufreq driver.
- */
-#define SAFE_MAX_CPU_FREQ 5000000000UL // 5 GHz
-u64 hw_nmi_get_sample_period(int watchdog_thresh)
-{
- unsigned int cpu = smp_processor_id();
- unsigned long max_cpu_freq;
-
- max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
- if (!max_cpu_freq)
- max_cpu_freq = SAFE_MAX_CPU_FREQ;
-
- return (u64)max_cpu_freq * watchdog_thresh;
-}
-#endif
stp x24, x25, [sp, #S_X24]
stp x26, x27, [sp, #S_X26]
stp x28, x29, [sp, #S_X28]
- add x0, sp, #S_FRAME_SIZE
+ add x0, sp, #PT_REGS_SIZE
stp lr, x0, [sp, #S_LR]
/*
* Construct a useful saved PSTATE
.endm
SYM_CODE_START(kretprobe_trampoline)
- sub sp, sp, #S_FRAME_SIZE
+ sub sp, sp, #PT_REGS_SIZE
save_all_base_regs
restore_all_base_regs
- add sp, sp, #S_FRAME_SIZE
+ add sp, sp, #PT_REGS_SIZE
ret
SYM_CODE_END(kretprobe_trampoline)
asmlinkage void do_notify_resume(struct pt_regs *regs,
unsigned long thread_flags)
{
- /*
- * The assembly code enters us with IRQs off, but it hasn't
- * informed the tracing code of that for efficiency reasons.
- * Update the trace code with the current status.
- */
- trace_hardirqs_off();
-
do {
if (thread_flags & _TIF_NEED_RESCHED) {
/* Unmask Debug and SError for the next task */
#include <asm/daifflags.h>
#include <asm/debug-monitors.h>
+#include <asm/exception.h>
#include <asm/fpsimd.h>
#include <asm/syscall.h>
#include <asm/thread_info.h>
if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
local_daif_mask();
flags = current_thread_info()->flags;
- if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) {
- /*
- * We're off to userspace, where interrupts are
- * always enabled after we restore the flags from
- * the SPSR.
- */
- trace_hardirqs_on();
+ if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
return;
- }
local_daif_restore(DAIF_PROCCTX);
}
EXPORT_SYMBOL(memstart_addr);
/*
- * We create both ZONE_DMA and ZONE_DMA32. ZONE_DMA covers the first 1G of
- * memory as some devices, namely the Raspberry Pi 4, have peripherals with
- * this limited view of the memory. ZONE_DMA32 will cover the rest of the 32
- * bit addressable memory area.
+ * If the corresponding config options are enabled, we create both ZONE_DMA
+ * and ZONE_DMA32. By default ZONE_DMA covers the 32-bit addressable memory
+ * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4).
+ * In that case, ZONE_DMA32 covers the rest of the 32-bit addressable memory,
+ * otherwise it is empty.
*/
phys_addr_t arm64_dma_phys_limit __ro_after_init;
-phys_addr_t arm64_dma32_phys_limit __ro_after_init;
#ifdef CONFIG_KEXEC_CORE
/*
if (crash_base == 0) {
/* Current arm64 boot protocol requires 2MB alignment */
- crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
+ crash_base = memblock_find_in_range(0, arm64_dma_phys_limit,
crash_size, SZ_2M);
if (crash_base == 0) {
pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
unsigned int __maybe_unused acpi_zone_dma_bits;
unsigned int __maybe_unused dt_zone_dma_bits;
+ phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32);
#ifdef CONFIG_ZONE_DMA
acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
#endif
#ifdef CONFIG_ZONE_DMA32
- max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
+ max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
+ if (!arm64_dma_phys_limit)
+ arm64_dma_phys_limit = dma32_phys_limit;
#endif
+ if (!arm64_dma_phys_limit)
+ arm64_dma_phys_limit = PHYS_MASK + 1;
max_zone_pfns[ZONE_NORMAL] = max;
free_area_init(max_zone_pfns);
early_init_fdt_scan_reserved_mem();
- if (IS_ENABLED(CONFIG_ZONE_DMA32))
- arm64_dma32_phys_limit = max_zone_phys(32);
- else
- arm64_dma32_phys_limit = PHYS_MASK + 1;
-
reserve_elfcorehdr();
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
-
- dma_contiguous_reserve(arm64_dma32_phys_limit);
}
void __init bootmem_init(void)
sparse_init();
zone_sizes_init(min, max);
+ /*
+ * Reserve the CMA area after arm64_dma_phys_limit was initialised.
+ */
+ dma_contiguous_reserve(arm64_dma_phys_limit);
+
/*
* request_standard_resources() depends on crashkernel's memory being
* reserved, so do it here.
void __init mem_init(void)
{
if (swiotlb_force == SWIOTLB_FORCE ||
- max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))
+ max_pfn > PFN_DOWN(arm64_dma_phys_limit))
swiotlb_init(1);
else
swiotlb_force = SWIOTLB_NO_FORCE;
#include <linux/libfdt.h>
#include <asm/addrspace.h>
+#include <asm/unaligned.h>
/*
* These two variables specify the free mem region
dtb_size = fdt_totalsize((void *)&__appended_dtb);
/* last four bytes is always image size in little endian */
- image_size = le32_to_cpup((void *)&__image_end - 4);
+ image_size = get_unaligned_le32((void *)&__image_end - 4);
/* copy dtb to where the booted kernel will expect it */
memcpy((void *)VMLINUX_LOAD_ADDRESS_ULL + image_size,
static int __init octeon_irq_init_ciu(
struct device_node *ciu_node, struct device_node *parent)
{
- unsigned int i, r;
+ int i, r;
struct irq_chip *chip;
struct irq_chip *chip_edge;
struct irq_chip *chip_mbox;
#undef ns_to_kernel_old_timeval
#define ns_to_kernel_old_timeval ns_to_old_timeval32
+/*
+ * Some data types as stored in coredump.
+ */
+#define user_long_t compat_long_t
+#define user_siginfo_t compat_siginfo_t
+#define copy_siginfo_to_external copy_siginfo_to_external32
+
#include "../../../fs/binfmt_elf.c"
#undef ns_to_kernel_old_timeval
#define ns_to_kernel_old_timeval ns_to_old_timeval32
+/*
+ * Some data types as stored in coredump.
+ */
+#define user_long_t compat_long_t
+#define user_siginfo_t compat_siginfo_t
+#define copy_siginfo_to_external copy_siginfo_to_external32
+
#include "../../../fs/binfmt_elf.c"
static inline __init unsigned long rotate_xor(unsigned long hash,
const void *area, size_t size)
{
- size_t i;
- unsigned long *ptr = (unsigned long *)area;
+ const typeof(hash) *ptr = PTR_ALIGN(area, sizeof(hash));
+ size_t diff, i;
+
+ diff = (void *)ptr - area;
+ if (unlikely(size < diff + sizeof(hash)))
+ return hash;
+
+ size = ALIGN_DOWN(size - diff, sizeof(hash));
for (i = 0; i < size / sizeof(hash); i++) {
/* Rotate by odd number of bits and XOR. */
return do_syscall_2(__NR_gettimeofday, (unsigned long)_tv, (unsigned long)_tz);
}
+#ifdef __powerpc64__
+
static __always_inline
int clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
return do_syscall_2(__NR_clock_getres, _clkid, (unsigned long)_ts);
}
-#ifdef CONFIG_VDSO32
+#else
#define BUILD_VDSO32 1
+static __always_inline
+int clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
+{
+ return do_syscall_2(__NR_clock_gettime64, _clkid, (unsigned long)_ts);
+}
+
+static __always_inline
+int clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
+{
+ return do_syscall_2(__NR_clock_getres_time64, _clkid, (unsigned long)_ts);
+}
+
static __always_inline
int clock_gettime32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
_sinittext = .;
INIT_TEXT
+
+ /*
+ * .init.text might be RO so we must ensure this section ends on
+ * a page boundary.
+ */
+ . = ALIGN(PAGE_SIZE);
_einittext = .;
#ifdef CONFIG_PPC64
*(.tramp.ftrace.init);
EXIT_TEXT
}
+ . = ALIGN(PAGE_SIZE);
+
INIT_DATA_SECTION(16)
. = ALIGN(8);
config PAGE_OFFSET
hex
- default 0xC0000000 if 32BIT && MAXPHYSMEM_2GB
+ default 0xC0000000 if 32BIT && MAXPHYSMEM_1GB
default 0x80000000 if 64BIT && !MMU
default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB
default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB
choice
prompt "Maximum Physical Memory"
- default MAXPHYSMEM_2GB if 32BIT
+ default MAXPHYSMEM_1GB if 32BIT
default MAXPHYSMEM_2GB if 64BIT && CMODEL_MEDLOW
default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY
+ config MAXPHYSMEM_1GB
+ bool "1GiB"
config MAXPHYSMEM_2GB
bool "2GiB"
config MAXPHYSMEM_128GB
phy-mode = "gmii";
phy-handle = <&phy0>;
phy0: ethernet-phy@0 {
+ compatible = "ethernet-phy-id0007.0771";
reg = <0>;
+ reset-gpios = <&gpio 12 GPIO_ACTIVE_LOW>;
};
};
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_SPI=y
CONFIG_SPI_SIFIVE=y
+CONFIG_GPIOLIB=y
+CONFIG_GPIO_SIFIVE=y
# CONFIG_PTP_1588_CLOCK is not set
CONFIG_POWER_RESET=y
CONFIG_DRM=y
| _PAGE_DIRTY)
#define PAGE_KERNEL __pgprot(_PAGE_KERNEL)
-#define PAGE_KERNEL_EXEC __pgprot(_PAGE_KERNEL | _PAGE_EXEC)
#define PAGE_KERNEL_READ __pgprot(_PAGE_KERNEL & ~_PAGE_WRITE)
#define PAGE_KERNEL_EXEC __pgprot(_PAGE_KERNEL | _PAGE_EXEC)
#define PAGE_KERNEL_READ_EXEC __pgprot((_PAGE_KERNEL & ~_PAGE_WRITE) \
#include <linux/types.h>
-#ifndef GENERIC_TIME_VSYSCALL
+#ifndef CONFIG_GENERIC_TIME_VSYSCALL
struct vdso_data {
};
#endif
static struct cacheinfo *get_cacheinfo(u32 level, enum cache_type type)
{
- struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(smp_processor_id());
+ /*
+ * Using raw_smp_processor_id() elides a preemptibility check, but this
+ * is really indicative of a larger problem: the cacheinfo UABI assumes
+ * that cores have a homogeneous view of the cache hierarchy. That
+ * happens to be the case for the current set of RISC-V systems, but
+ * likely won't be true in general. Since there's no way to provide
+ * correct information for these systems via the current UABI we're
+ * just eliding the check for now.
+ */
+ struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(raw_smp_processor_id());
struct cacheinfo *this_leaf;
int index;
REG_L a1, (a1)
jr a1
1:
-#ifdef CONFIG_TRACE_IRQFLAGS
- call trace_hardirqs_on
-#endif
/*
* Exceptions run with interrupts enabled or disabled depending on the
* state of SR_PIE in m/sstatus.
*/
andi t0, s1, SR_PIE
beqz t0, 1f
+#ifdef CONFIG_TRACE_IRQFLAGS
+ call trace_hardirqs_on
+#endif
csrs CSR_STATUS, SR_IE
1:
tail do_trap_unknown
handle_syscall:
+#ifdef CONFIG_RISCV_M_MODE
+ /*
+ * When running in M-Mode (no MMU config), MPIE does not get set.
+ * As a result, we need to force-enable interrupts here because
+ * handle_exception did not set SR_IE as it always sees SR_PIE
+ * being cleared.
+ */
+ csrs CSR_STATUS, SR_IE
+#endif
#if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING)
/* Recover a0 - a7 for system calls */
REG_L a0, PT_A0(sp)
* Syscall number held in a7.
* If syscall number is above allowed value, redirect to ni_syscall.
*/
- bge a7, t0, 1f
- /*
- * Check if syscall is rejected by tracer, i.e., a7 == -1.
- * If yes, we pretend it was executed.
- */
- li t1, -1
- beq a7, t1, ret_from_syscall_rejected
- blt a7, t1, 1f
+ bgeu a7, t0, 1f
/* Call syscall */
la s0, sys_call_table
slli t0, a7, RISCV_LGPTR
{
struct memblock_region *region = NULL;
struct resource *res = NULL;
- int ret = 0;
+ struct resource *mem_res = NULL;
+ size_t mem_res_sz = 0;
+ int ret = 0, i = 0;
code_res.start = __pa_symbol(_text);
code_res.end = __pa_symbol(_etext) - 1;
bss_res.end = __pa_symbol(__bss_stop) - 1;
bss_res.flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+ mem_res_sz = (memblock.memory.cnt + memblock.reserved.cnt) * sizeof(*mem_res);
+ mem_res = memblock_alloc(mem_res_sz, SMP_CACHE_BYTES);
+ if (!mem_res)
+ panic("%s: Failed to allocate %zu bytes\n", __func__, mem_res_sz);
/*
* Start by adding the reserved regions, if they overlap
* with /memory regions, insert_resource later on will take
* care of it.
*/
for_each_reserved_mem_region(region) {
- res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
- if (!res)
- panic("%s: Failed to allocate %zu bytes\n", __func__,
- sizeof(struct resource));
+ res = &mem_res[i++];
res->name = "Reserved";
res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
* Ignore any other reserved regions within
* system memory.
*/
- if (memblock_is_memory(res->start))
+ if (memblock_is_memory(res->start)) {
+ memblock_free((phys_addr_t) res, sizeof(struct resource));
continue;
+ }
ret = add_resource(&iomem_resource, res);
if (ret < 0)
/* Add /memory regions to the resource tree */
for_each_mem_region(region) {
- res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
- if (!res)
- panic("%s: Failed to allocate %zu bytes\n", __func__,
- sizeof(struct resource));
+ res = &mem_res[i++];
if (unlikely(memblock_is_nomap(region))) {
res->name = "Reserved";
return;
error:
- memblock_free((phys_addr_t) res, sizeof(struct resource));
/* Better an empty resource tree than an inconsistent one */
release_child_resources(&iomem_resource);
+ memblock_free((phys_addr_t) mem_res, mem_res_sz);
}
#include <asm/stacktrace.h>
-register unsigned long sp_in_global __asm__("sp");
+register const unsigned long sp_in_global __asm__("sp");
#ifdef CONFIG_FRAME_POINTER
sp = user_stack_pointer(regs);
pc = instruction_pointer(regs);
} else if (task == NULL || task == current) {
- const register unsigned long current_sp = sp_in_global;
fp = (unsigned long)__builtin_frame_address(0);
- sp = current_sp;
+ sp = sp_in_global;
pc = (unsigned long)walk_stackframe;
} else {
/* task blocked in __switch_to */
* Copyright (C) 2017 SiFive
*/
+#include <linux/of_clk.h>
#include <linux/clocksource.h>
#include <linux/delay.h>
#include <asm/sbi.h>
riscv_timebase = prop;
lpj_fine = riscv_timebase / HZ;
+
+ of_clk_init(NULL);
timer_probe();
}
#include <linux/binfmts.h>
#include <linux/err.h>
#include <asm/page.h>
-#ifdef GENERIC_TIME_VSYSCALL
+#ifdef CONFIG_GENERIC_TIME_VSYSCALL
#include <vdso/datapage.h>
#else
#include <asm/vdso.h>
void __init setup_bootmem(void)
{
phys_addr_t mem_start = 0;
- phys_addr_t start, end = 0;
+ phys_addr_t start, dram_end, end = 0;
phys_addr_t vmlinux_end = __pa_symbol(&_end);
phys_addr_t vmlinux_start = __pa_symbol(&_start);
+ phys_addr_t max_mapped_addr = __pa(~(ulong)0);
u64 i;
/* Find the memory region containing the kernel */
/* Reserve from the start of the kernel to the end of the kernel */
memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
- max_pfn = PFN_DOWN(memblock_end_of_DRAM());
+ dram_end = memblock_end_of_DRAM();
+
+ /*
+ * The memblock allocator is not aware that the last 4K bytes of
+ * addressable memory cannot be mapped because of the IS_ERR_VALUE
+ * macro. Make sure that the last 4K bytes are not usable by memblock
+ * if the end of DRAM is equal to the maximum addressable memory.
+ */
+ if (max_mapped_addr == (dram_end - 1))
+ memblock_set_current_limit(max_mapped_addr - 4096);
+
+ max_pfn = PFN_DOWN(dram_end);
max_low_pfn = max_pfn;
dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn));
set_max_mapnr(max_low_pfn);
VMALLOC_END));
for_each_mem_range(i, &_start, &_end) {
- void *start = (void *)_start;
- void *end = (void *)_end;
+ void *start = (void *)__va(_start);
+ void *end = (void *)__va(_end);
if (start >= end)
break;
#include <asm/hyperv-tlfs.h>
#include <asm/mshyperv.h>
#include <asm/idtentry.h>
+#include <linux/kexec.h>
#include <linux/version.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
#include <linux/syscore_ops.h>
#include <clocksource/hyperv_timer.h>
+int hyperv_init_cpuhp;
+
void *hv_hypercall_pg;
EXPORT_SYMBOL_GPL(hv_hypercall_pg);
register_syscore_ops(&hv_syscore_ops);
+ hyperv_init_cpuhp = cpuhp;
return;
remove_cpuhp_state:
if (!hv_hypercall_pg)
goto do_native;
- if (cpumask_empty(cpus))
- return;
-
local_irq_save(flags);
+ /*
+ * Only check the mask _after_ interrupts have been disabled to avoid the
+ * mask changing under our feet.
+ */
+ if (cpumask_empty(cpus)) {
+ local_irq_restore(flags);
+ return;
+ }
+
flush_pcpu = (struct hv_tlb_flush **)
this_cpu_ptr(hyperv_pcpu_input_arg);
#if IS_ENABLED(CONFIG_HYPERV)
+extern int hyperv_init_cpuhp;
+
extern void *hv_hypercall_pg;
extern void __percpu **hyperv_pcpu_input_arg;
{
if (kexec_in_progress && hv_kexec_handler)
hv_kexec_handler();
+
+ /*
+ * Call hv_cpu_die() on all the CPUs, otherwise the hypervisor may later
+ * corrupt the old VP Assist Pages and crash the kexec kernel.
+ */
+ if (kexec_in_progress && hyperv_init_cpuhp > 0)
+ cpuhp_remove_state(hyperv_init_cpuhp);
+
+ /* The function calls stop_other_cpus(). */
native_machine_shutdown();
+
+ /* Disable the hypercall page when there is only 1 active CPU. */
+ if (kexec_in_progress)
+ hyperv_cleanup();
}
static void hv_machine_crash_shutdown(struct pt_regs *regs)
{
if (hv_crash_handler)
hv_crash_handler(regs);
+
+ /* The function calls crash_smp_send_stop(). */
native_machine_crash_shutdown(regs);
+
+ /* Disable the hypercall page when there is only 1 active CPU. */
+ hyperv_cleanup();
}
#endif /* CONFIG_KEXEC_CORE */
#endif /* CONFIG_HYPERV */
else
per_cpu(xen_vcpu_id, cpu) = cpu;
rc = xen_vcpu_setup(cpu);
- if (rc)
+ if (rc || !xen_have_vector_callback)
return rc;
- if (xen_have_vector_callback && xen_feature(XENFEAT_hvm_safe_pvclock))
+ if (xen_feature(XENFEAT_hvm_safe_pvclock))
xen_setup_timer(cpu);
rc = xen_smp_intr_init(cpu);
return 0;
}
+static bool no_vector_callback __initdata;
+
static void __init xen_hvm_guest_init(void)
{
if (xen_pv_domain())
xen_panic_handler_init();
- if (xen_feature(XENFEAT_hvm_callback_vector))
+ if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
xen_have_vector_callback = 1;
xen_hvm_smp_init();
}
early_param("xen_nopv", xen_parse_nopv);
+static __init int xen_parse_no_vector_callback(char *arg)
+{
+ no_vector_callback = true;
+ return 0;
+}
+early_param("xen_no_vector_callback", xen_parse_no_vector_callback);
+
bool __init xen_hvm_need_lapic(void)
{
if (xen_pv_domain())
int cpu;
native_smp_prepare_cpus(max_cpus);
- WARN_ON(xen_smp_intr_init(0));
- xen_init_lock_cpu(0);
+ if (xen_have_vector_callback) {
+ WARN_ON(xen_smp_intr_init(0));
+ xen_init_lock_cpu(0);
+ }
for_each_possible_cpu(cpu) {
if (cpu == 0)
static void xen_hvm_cpu_die(unsigned int cpu)
{
if (common_cpu_die(cpu) == 0) {
- xen_smp_intr_free(cpu);
- xen_uninit_lock_cpu(cpu);
- xen_teardown_timer(cpu);
+ if (xen_have_vector_callback) {
+ xen_smp_intr_free(cpu);
+ xen_uninit_lock_cpu(cpu);
+ xen_teardown_timer(cpu);
+ }
}
}
#else
void __init xen_hvm_smp_init(void)
{
- if (!xen_have_vector_callback)
+ smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
+ smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
+ smp_ops.smp_cpus_done = xen_smp_cpus_done;
+ smp_ops.cpu_die = xen_hvm_cpu_die;
+
+ if (!xen_have_vector_callback) {
+ nopvspin = true;
return;
+ }
- smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
- smp_ops.cpu_die = xen_hvm_cpu_die;
smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
- smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
- smp_ops.smp_cpus_done = xen_smp_cpus_done;
}
extern struct list_head acpi_bus_id_list;
struct acpi_device_bus_id {
- char bus_id[15];
+ const char *bus_id;
unsigned int instance_no;
struct list_head node;
};
acpi_device_bus_id->instance_no--;
else {
list_del(&acpi_device_bus_id->node);
+ kfree_const(acpi_device_bus_id->bus_id);
kfree(acpi_device_bus_id);
}
break;
}
if (!found) {
acpi_device_bus_id = new_bus_id;
- strcpy(acpi_device_bus_id->bus_id, acpi_device_hid(device));
+ acpi_device_bus_id->bus_id =
+ kstrdup_const(acpi_device_hid(device), GFP_KERNEL);
+ if (!acpi_device_bus_id->bus_id) {
+ pr_err(PREFIX "Memory allocation error for bus id\n");
+ result = -ENOMEM;
+ goto err_free_new_bus_id;
+ }
+
acpi_device_bus_id->instance_no = 0;
list_add_tail(&acpi_device_bus_id->node, &acpi_bus_id_list);
}
if (device->parent)
list_del(&device->node);
list_del(&device->wakeup_list);
+
+ err_free_new_bus_id:
+ if (!found)
+ kfree(new_bus_id);
+
mutex_unlock(&acpi_device_lock);
err_detach:
struct isa_driver *isa_driver = dev->platform_data;
if (isa_driver && isa_driver->remove)
- return isa_driver->remove(dev, to_isa_dev(dev)->id);
+ isa_driver->remove(dev, to_isa_dev(dev)->id);
return 0;
}
buffer->vaddr = NULL;
}
+ /* free page list */
+ kfree(buffer->pages);
+ /* release memory */
cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
kfree(buffer);
}
union igp_info {
struct atom_integrated_system_info_v1_11 v11;
struct atom_integrated_system_info_v1_12 v12;
+ struct atom_integrated_system_info_v2_1 v21;
};
union umc_info {
if (adev->flags & AMD_IS_APU) {
igp_info = (union igp_info *)
(mode_info->atom_context->bios + data_offset);
- switch (crev) {
- case 11:
- mem_channel_number = igp_info->v11.umachannelnumber;
- /* channel width is 64 */
- if (vram_width)
- *vram_width = mem_channel_number * 64;
- mem_type = igp_info->v11.memorytype;
- if (vram_type)
- *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type);
+ switch (frev) {
+ case 1:
+ switch (crev) {
+ case 11:
+ case 12:
+ mem_channel_number = igp_info->v11.umachannelnumber;
+ if (!mem_channel_number)
+ mem_channel_number = 1;
+ /* channel width is 64 */
+ if (vram_width)
+ *vram_width = mem_channel_number * 64;
+ mem_type = igp_info->v11.memorytype;
+ if (vram_type)
+ *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type);
+ break;
+ default:
+ return -EINVAL;
+ }
break;
- case 12:
- mem_channel_number = igp_info->v12.umachannelnumber;
- /* channel width is 64 */
- if (vram_width)
- *vram_width = mem_channel_number * 64;
- mem_type = igp_info->v12.memorytype;
- if (vram_type)
- *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type);
+ case 2:
+ switch (crev) {
+ case 1:
+ case 2:
+ mem_channel_number = igp_info->v21.umachannelnumber;
+ if (!mem_channel_number)
+ mem_channel_number = 1;
+ /* channel width is 64 */
+ if (vram_width)
+ *vram_width = mem_channel_number * 64;
+ mem_type = igp_info->v21.memorytype;
+ if (vram_type)
+ *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type);
+ break;
+ default:
+ return -EINVAL;
+ }
break;
default:
return -EINVAL;
#endif
default:
if (amdgpu_dc > 0)
- DRM_INFO("Display Core has been requested via kernel parameter "
+ DRM_INFO_ONCE("Display Core has been requested via kernel parameter "
"but isn't supported by ASIC, ignoring\n");
return false;
}
/* Renoir */
{0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
+ {0x1002, 0x1638, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
+ {0x1002, 0x164C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
/* Navi12 */
{0x1002, 0x7360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12},
#define mmGCR_GENERAL_CNTL_Sienna_Cichlid 0x1580
#define mmGCR_GENERAL_CNTL_Sienna_Cichlid_BASE_IDX 0
+#define mmGOLDEN_TSC_COUNT_UPPER_Vangogh 0x0025
+#define mmGOLDEN_TSC_COUNT_UPPER_Vangogh_BASE_IDX 1
+#define mmGOLDEN_TSC_COUNT_LOWER_Vangogh 0x0026
+#define mmGOLDEN_TSC_COUNT_LOWER_Vangogh_BASE_IDX 1
#define mmSPI_CONFIG_CNTL_1_Vangogh 0x2441
#define mmSPI_CONFIG_CNTL_1_Vangogh_BASE_IDX 1
#define mmVGT_TF_MEMORY_BASE_HI_Vangogh 0x2261
#define mmGCVM_L2_CGTT_CLK_CTRL_Sienna_Cichlid 0x15db
#define mmGCVM_L2_CGTT_CLK_CTRL_Sienna_Cichlid_BASE_IDX 0
+#define mmGC_THROTTLE_CTRL_Sienna_Cichlid 0x2030
+#define mmGC_THROTTLE_CTRL_Sienna_Cichlid_BASE_IDX 0
+
MODULE_FIRMWARE("amdgpu/navi10_ce.bin");
MODULE_FIRMWARE("amdgpu/navi10_pfp.bin");
MODULE_FIRMWARE("amdgpu/navi10_me.bin");
static void gfx_v10_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure);
static u32 gfx_v10_3_get_disabled_sa(struct amdgpu_device *adev);
static void gfx_v10_3_program_pbb_mode(struct amdgpu_device *adev);
+static void gfx_v10_3_set_power_brake_sequence(struct amdgpu_device *adev);
static void gfx10_kiq_set_resources(struct amdgpu_ring *kiq_ring, uint64_t queue_mask)
{
if (adev->asic_type == CHIP_SIENNA_CICHLID)
gfx_v10_3_program_pbb_mode(adev);
+ if (adev->asic_type >= CHIP_SIENNA_CICHLID)
+ gfx_v10_3_set_power_brake_sequence(adev);
+
return r;
}
amdgpu_gfx_off_ctrl(adev, false);
mutex_lock(&adev->gfx.gpu_clock_mutex);
- clock = (uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER) |
- ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER) << 32ULL);
+ switch (adev->asic_type) {
+ case CHIP_VANGOGH:
+ clock = (uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Vangogh) |
+ ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
+ break;
+ default:
+ clock = (uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER) |
+ ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER) << 32ULL);
+ break;
+ }
mutex_unlock(&adev->gfx.gpu_clock_mutex);
amdgpu_gfx_off_ctrl(adev, true);
return clock;
}
}
+static void gfx_v10_3_set_power_brake_sequence(struct amdgpu_device *adev)
+{
+ WREG32_SOC15(GC, 0, mmGRBM_GFX_INDEX,
+ (0x1 << GRBM_GFX_INDEX__SA_BROADCAST_WRITES__SHIFT) |
+ (0x1 << GRBM_GFX_INDEX__INSTANCE_BROADCAST_WRITES__SHIFT) |
+ (0x1 << GRBM_GFX_INDEX__SE_BROADCAST_WRITES__SHIFT));
+
+ WREG32_SOC15(GC, 0, mmGC_CAC_IND_INDEX, ixPWRBRK_STALL_PATTERN_CTRL);
+ WREG32_SOC15(GC, 0, mmGC_CAC_IND_DATA,
+ (0x1 << PWRBRK_STALL_PATTERN_CTRL__PWRBRK_STEP_INTERVAL__SHIFT) |
+ (0x12 << PWRBRK_STALL_PATTERN_CTRL__PWRBRK_BEGIN_STEP__SHIFT) |
+ (0x13 << PWRBRK_STALL_PATTERN_CTRL__PWRBRK_END_STEP__SHIFT) |
+ (0xf << PWRBRK_STALL_PATTERN_CTRL__PWRBRK_THROTTLE_PATTERN_BIT_NUMS__SHIFT));
+
+ WREG32_SOC15(GC, 0, mmGC_THROTTLE_CTRL_Sienna_Cichlid,
+ (0x1 << GC_THROTTLE_CTRL__PWRBRK_STALL_EN__SHIFT) |
+ (0x1 << GC_THROTTLE_CTRL__PATTERN_MODE__SHIFT) |
+ (0x5 << GC_THROTTLE_CTRL__RELEASE_STEP_INTERVAL__SHIFT));
+
+ WREG32_SOC15(GC, 0, mmDIDT_IND_INDEX, ixDIDT_SQ_THROTTLE_CTRL);
+
+ WREG32_SOC15(GC, 0, mmDIDT_IND_DATA,
+ (0x1 << DIDT_SQ_THROTTLE_CTRL__PWRBRK_STALL_EN__SHIFT));
+}
+
const struct amdgpu_ip_block_version gfx_v10_0_ip_block =
{
.type = AMD_IP_BLOCK_TYPE_GFX,
GFX_CTRL_CMD_ID_DISABLE_INT = 0x00060000, /* disable PSP-to-Gfx interrupt */
GFX_CTRL_CMD_ID_MODE1_RST = 0x00070000, /* trigger the Mode 1 reset */
GFX_CTRL_CMD_ID_GBR_IH_SET = 0x00080000, /* set Gbr IH_RB_CNTL registers */
- GFX_CTRL_CMD_ID_CONSUME_CMD = 0x000A0000, /* send interrupt to psp for updating write pointer of vf */
+ GFX_CTRL_CMD_ID_CONSUME_CMD = 0x00090000, /* send interrupt to psp for updating write pointer of vf */
GFX_CTRL_CMD_ID_DESTROY_GPCOM_RING = 0x000C0000, /* destroy GPCOM ring */
GFX_CTRL_CMD_ID_MAX = 0x000F0000, /* max command ID */
break;
case CHIP_RENOIR:
adev->asic_funcs = &soc15_asic_funcs;
- if (adev->pdev->device == 0x1636)
+ if ((adev->pdev->device == 0x1636) ||
+ (adev->pdev->device == 0x164c))
adev->apu_flags |= AMD_APU_IS_RENOIR;
else
adev->apu_flags |= AMD_APU_IS_GREEN_SARDINE;
(struct crat_subtype_iolink *)sub_type_hdr);
if (ret < 0)
return ret;
- crat_table->length += (sub_type_hdr->length * entries);
- crat_table->total_entries += entries;
- sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
- sub_type_hdr->length * entries);
+ if (entries) {
+ crat_table->length += (sub_type_hdr->length * entries);
+ crat_table->total_entries += entries;
+
+ sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
+ sub_type_hdr->length * entries);
+ }
#else
pr_info("IO link not available for non x86 platforms\n");
#endif
}
#endif
-#ifdef CONFIG_DEBUG_FS
-static int create_crtc_crc_properties(struct amdgpu_display_manager *dm)
-{
- dm->crc_win_x_start_property =
- drm_property_create_range(adev_to_drm(dm->adev),
- DRM_MODE_PROP_ATOMIC,
- "AMD_CRC_WIN_X_START", 0, U16_MAX);
- if (!dm->crc_win_x_start_property)
- return -ENOMEM;
-
- dm->crc_win_y_start_property =
- drm_property_create_range(adev_to_drm(dm->adev),
- DRM_MODE_PROP_ATOMIC,
- "AMD_CRC_WIN_Y_START", 0, U16_MAX);
- if (!dm->crc_win_y_start_property)
- return -ENOMEM;
-
- dm->crc_win_x_end_property =
- drm_property_create_range(adev_to_drm(dm->adev),
- DRM_MODE_PROP_ATOMIC,
- "AMD_CRC_WIN_X_END", 0, U16_MAX);
- if (!dm->crc_win_x_end_property)
- return -ENOMEM;
-
- dm->crc_win_y_end_property =
- drm_property_create_range(adev_to_drm(dm->adev),
- DRM_MODE_PROP_ATOMIC,
- "AMD_CRC_WIN_Y_END", 0, U16_MAX);
- if (!dm->crc_win_y_end_property)
- return -ENOMEM;
-
- return 0;
-}
-#endif
-
static int amdgpu_dm_init(struct amdgpu_device *adev)
{
struct dc_init_data init_data;
dc_init_callbacks(adev->dm.dc, &init_params);
}
-#endif
-#ifdef CONFIG_DEBUG_FS
- if (create_crtc_crc_properties(&adev->dm))
- DRM_ERROR("amdgpu: failed to create crc property.\n");
#endif
if (amdgpu_dm_initialize_drm_device(adev)) {
DRM_ERROR(
state->crc_src = cur->crc_src;
state->cm_has_degamma = cur->cm_has_degamma;
state->cm_is_degamma_srgb = cur->cm_is_degamma_srgb;
-#ifdef CONFIG_DEBUG_FS
- state->crc_window = cur->crc_window;
-#endif
+
/* TODO Duplicate dc_stream after objects are stream object is flattened */
return &state->base;
}
-#ifdef CONFIG_DEBUG_FS
-static int amdgpu_dm_crtc_atomic_set_property(struct drm_crtc *crtc,
- struct drm_crtc_state *crtc_state,
- struct drm_property *property,
- uint64_t val)
-{
- struct drm_device *dev = crtc->dev;
- struct amdgpu_device *adev = drm_to_adev(dev);
- struct dm_crtc_state *dm_new_state =
- to_dm_crtc_state(crtc_state);
-
- if (property == adev->dm.crc_win_x_start_property)
- dm_new_state->crc_window.x_start = val;
- else if (property == adev->dm.crc_win_y_start_property)
- dm_new_state->crc_window.y_start = val;
- else if (property == adev->dm.crc_win_x_end_property)
- dm_new_state->crc_window.x_end = val;
- else if (property == adev->dm.crc_win_y_end_property)
- dm_new_state->crc_window.y_end = val;
- else
- return -EINVAL;
-
- return 0;
-}
-
-static int amdgpu_dm_crtc_atomic_get_property(struct drm_crtc *crtc,
- const struct drm_crtc_state *state,
- struct drm_property *property,
- uint64_t *val)
-{
- struct drm_device *dev = crtc->dev;
- struct amdgpu_device *adev = drm_to_adev(dev);
- struct dm_crtc_state *dm_state =
- to_dm_crtc_state(state);
-
- if (property == adev->dm.crc_win_x_start_property)
- *val = dm_state->crc_window.x_start;
- else if (property == adev->dm.crc_win_y_start_property)
- *val = dm_state->crc_window.y_start;
- else if (property == adev->dm.crc_win_x_end_property)
- *val = dm_state->crc_window.x_end;
- else if (property == adev->dm.crc_win_y_end_property)
- *val = dm_state->crc_window.y_end;
- else
- return -EINVAL;
-
- return 0;
-}
-#endif
-
static inline int dm_set_vupdate_irq(struct drm_crtc *crtc, bool enable)
{
enum dc_irq_source irq_source;
.enable_vblank = dm_enable_vblank,
.disable_vblank = dm_disable_vblank,
.get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
-#ifdef CONFIG_DEBUG_FS
- .atomic_set_property = amdgpu_dm_crtc_atomic_set_property,
- .atomic_get_property = amdgpu_dm_crtc_atomic_get_property,
-#endif
};
static enum drm_connector_status
return 0;
}
-#ifdef CONFIG_DEBUG_FS
-static void attach_crtc_crc_properties(struct amdgpu_display_manager *dm,
- struct amdgpu_crtc *acrtc)
-{
- drm_object_attach_property(&acrtc->base.base,
- dm->crc_win_x_start_property,
- 0);
- drm_object_attach_property(&acrtc->base.base,
- dm->crc_win_y_start_property,
- 0);
- drm_object_attach_property(&acrtc->base.base,
- dm->crc_win_x_end_property,
- 0);
- drm_object_attach_property(&acrtc->base.base,
- dm->crc_win_y_end_property,
- 0);
-}
-#endif
-
static int amdgpu_dm_crtc_init(struct amdgpu_display_manager *dm,
struct drm_plane *plane,
uint32_t crtc_index)
drm_crtc_enable_color_mgmt(&acrtc->base, MAX_COLOR_LUT_ENTRIES,
true, MAX_COLOR_LUT_ENTRIES);
drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES);
-#ifdef CONFIG_DEBUG_FS
- attach_crtc_crc_properties(dm, acrtc);
-#endif
+
return 0;
fail:
*/
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
- bool configure_crc = false;
dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
dc_stream_retain(dm_new_crtc_state->stream);
acrtc->dm_irq_params.stream = dm_new_crtc_state->stream;
manage_dm_interrupts(adev, acrtc, true);
- }
- if (IS_ENABLED(CONFIG_DEBUG_FS) && new_crtc_state->active &&
- amdgpu_dm_is_valid_crc_source(dm_new_crtc_state->crc_src)) {
+
+#ifdef CONFIG_DEBUG_FS
/**
* Frontend may have changed so reapply the CRC capture
* settings for the stream.
*/
dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
- dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
- if (amdgpu_dm_crc_window_is_default(dm_new_crtc_state)) {
- if (!old_crtc_state->active || drm_atomic_crtc_needs_modeset(new_crtc_state))
- configure_crc = true;
- } else {
- if (amdgpu_dm_crc_window_changed(dm_new_crtc_state, dm_old_crtc_state))
- configure_crc = true;
- }
-
- if (configure_crc)
+ if (amdgpu_dm_is_valid_crc_source(dm_new_crtc_state->crc_src)) {
amdgpu_dm_crtc_configure_crc_source(
- crtc, dm_new_crtc_state, dm_new_crtc_state->crc_src);
+ crtc, dm_new_crtc_state,
+ dm_new_crtc_state->crc_src);
+ }
+#endif
}
}
*/
const struct gpu_info_soc_bounding_box_v1_0 *soc_bounding_box;
-#ifdef CONFIG_DEBUG_FS
- /**
- * @crc_win_x_start_property:
- *
- * X start of the crc calculation window
- */
- struct drm_property *crc_win_x_start_property;
- /**
- * @crc_win_y_start_property:
- *
- * Y start of the crc calculation window
- */
- struct drm_property *crc_win_y_start_property;
- /**
- * @crc_win_x_end_property:
- *
- * X end of the crc calculation window
- */
- struct drm_property *crc_win_x_end_property;
- /**
- * @crc_win_y_end_property:
- *
- * Y end of the crc calculation window
- */
- struct drm_property *crc_win_y_end_property;
-#endif
/**
* @mst_encoders:
*
struct dc_plane_state *dc_state;
};
-#ifdef CONFIG_DEBUG_FS
-struct crc_rec {
- uint16_t x_start;
- uint16_t y_start;
- uint16_t x_end;
- uint16_t y_end;
- };
-#endif
-
struct dm_crtc_state {
struct drm_crtc_state base;
struct dc_stream_state *stream;
struct dc_info_packet vrr_infopacket;
int abm_level;
-#ifdef CONFIG_DEBUG_FS
- struct crc_rec crc_window;
-#endif
};
#define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
return pipe_crc_sources;
}
-static void amdgpu_dm_set_crc_window_default(struct dm_crtc_state *dm_crtc_state)
-{
- dm_crtc_state->crc_window.x_start = 0;
- dm_crtc_state->crc_window.y_start = 0;
- dm_crtc_state->crc_window.x_end = 0;
- dm_crtc_state->crc_window.y_end = 0;
-}
-
-bool amdgpu_dm_crc_window_is_default(struct dm_crtc_state *dm_crtc_state)
-{
- bool ret = true;
-
- if ((dm_crtc_state->crc_window.x_start != 0) ||
- (dm_crtc_state->crc_window.y_start != 0) ||
- (dm_crtc_state->crc_window.x_end != 0) ||
- (dm_crtc_state->crc_window.y_end != 0))
- ret = false;
-
- return ret;
-}
-
-bool amdgpu_dm_crc_window_changed(struct dm_crtc_state *dm_new_crtc_state,
- struct dm_crtc_state *dm_old_crtc_state)
-{
- bool ret = false;
-
- if ((dm_new_crtc_state->crc_window.x_start != dm_old_crtc_state->crc_window.x_start) ||
- (dm_new_crtc_state->crc_window.y_start != dm_old_crtc_state->crc_window.y_start) ||
- (dm_new_crtc_state->crc_window.x_end != dm_old_crtc_state->crc_window.x_end) ||
- (dm_new_crtc_state->crc_window.y_end != dm_old_crtc_state->crc_window.y_end))
- ret = true;
-
- return ret;
-}
-
int
amdgpu_dm_crtc_verify_crc_source(struct drm_crtc *crtc, const char *src_name,
size_t *values_cnt)
struct dc_stream_state *stream_state = dm_crtc_state->stream;
bool enable = amdgpu_dm_is_valid_crc_source(source);
int ret = 0;
- struct crc_params *crc_window = NULL, tmp_window;
/* Configuration will be deferred to stream enable. */
if (!stream_state)
/* Enable CRTC CRC generation if necessary. */
if (dm_is_crc_source_crtc(source) || source == AMDGPU_DM_PIPE_CRC_SOURCE_NONE) {
- if (!enable)
- amdgpu_dm_set_crc_window_default(dm_crtc_state);
-
- if (!amdgpu_dm_crc_window_is_default(dm_crtc_state)) {
- crc_window = &tmp_window;
-
- tmp_window.windowa_x_start = dm_crtc_state->crc_window.x_start;
- tmp_window.windowa_y_start = dm_crtc_state->crc_window.y_start;
- tmp_window.windowa_x_end = dm_crtc_state->crc_window.x_end;
- tmp_window.windowa_y_end = dm_crtc_state->crc_window.y_end;
- tmp_window.windowb_x_start = dm_crtc_state->crc_window.x_start;
- tmp_window.windowb_y_start = dm_crtc_state->crc_window.y_start;
- tmp_window.windowb_x_end = dm_crtc_state->crc_window.x_end;
- tmp_window.windowb_y_end = dm_crtc_state->crc_window.y_end;
- }
-
if (!dc_stream_configure_crc(stream_state->ctx->dc,
- stream_state, crc_window, enable, enable)) {
+ stream_state, NULL, enable, enable)) {
ret = -EINVAL;
goto unlock;
}
}
/* amdgpu_dm_crc.c */
-bool amdgpu_dm_crc_window_is_default(struct dm_crtc_state *dm_crtc_state);
-bool amdgpu_dm_crc_window_changed(struct dm_crtc_state *dm_new_crtc_state,
- struct dm_crtc_state *dm_old_crtc_state);
+#ifdef CONFIG_DEBUG_FS
int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
struct dm_crtc_state *dm_crtc_state,
enum amdgpu_dm_pipe_crc_source source);
-#ifdef CONFIG_DEBUG_FS
int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name);
int amdgpu_dm_crtc_verify_crc_source(struct drm_crtc *crtc,
const char *src_name,
unsigned int cust_pattern_size)
{
struct pipe_ctx *pipes = link->dc->current_state->res_ctx.pipe_ctx;
- struct pipe_ctx *pipe_ctx = &pipes[0];
+ struct pipe_ctx *pipe_ctx = NULL;
unsigned int lane;
unsigned int i;
unsigned char link_qual_pattern[LANE_COUNT_DP_MAX] = {0};
memset(&training_pattern, 0, sizeof(training_pattern));
for (i = 0; i < MAX_PIPES; i++) {
+ if (pipes[i].stream == NULL)
+ continue;
+
if (pipes[i].stream->link == link && !pipes[i].top_pipe && !pipes[i].prev_odm_pipe) {
pipe_ctx = &pipes[i];
break;
}
}
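+	/* Bail out if no pipe on this link has a stream to drive the
+	 * test pattern.
+	 */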
+ if (pipe_ctx == NULL)
+ return false;
+
/* Reset CRTC Test Pattern if it is currently running and request is VideoMode */
if (link->test_pattern_enabled && test_pattern ==
DP_TEST_PATTERN_VIDEO_MODE) {
unsigned int mpc1_get_mpc_out_mux(struct mpc *mpc, int opp_id)
{
struct dcn10_mpc *mpc10 = TO_DCN10_MPC(mpc);
- uint32_t val = 0;
+ uint32_t val = 0xf;
if (opp_id < MAX_OPP && REG(MUX[opp_id]))
REG_GET(MUX[opp_id], MPC_OUT_MUX, &val);
.disable_pplib_clock_request = false,
.disable_pplib_wm_range = false,
.pplib_wm_report_mode = WM_REPORT_DEFAULT,
- .pipe_split_policy = MPC_SPLIT_DYNAMIC,
- .force_single_disp_pipe_split = true,
+ .pipe_split_policy = MPC_SPLIT_AVOID,
+ .force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.voltage_align_fclk = true,
.disable_stereo_support = true,
.populate_dml_pipes = dcn30_populate_dml_pipes_from_context,
.acquire_idle_pipe_for_layer = dcn20_acquire_idle_pipe_for_layer,
.add_stream_to_ctx = dcn30_add_stream_to_ctx,
+ .add_dsc_to_stream_resource = dcn20_add_dsc_to_stream_resource,
.remove_stream_from_ctx = dcn20_remove_stream_from_ctx,
.populate_dml_writeback_from_context = dcn30_populate_dml_writeback_from_context,
.set_mcif_arb_params = dcn30_set_mcif_arb_params,
}
if (mode_lib->vba.DRAMClockChangeSupportsVActive &&
- mode_lib->vba.MinActiveDRAMClockChangeMargin > 60 &&
- mode_lib->vba.PrefetchMode[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] == 0) {
+ mode_lib->vba.MinActiveDRAMClockChangeMargin > 60) {
mode_lib->vba.DRAMClockChangeWatermark += 25;
for (k = 0; k < mode_lib->vba.NumberOfActivePlanes; ++k) {
- if (mode_lib->vba.DRAMClockChangeWatermark >
- dml_max(mode_lib->vba.StutterEnterPlusExitWatermark, mode_lib->vba.UrgentWatermark))
- mode_lib->vba.MinTTUVBlank[k] += 25;
+ if (mode_lib->vba.PrefetchMode[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] == 0) {
+ if (mode_lib->vba.DRAMClockChangeWatermark >
+ dml_max(mode_lib->vba.StutterEnterPlusExitWatermark, mode_lib->vba.UrgentWatermark))
+ mode_lib->vba.MinTTUVBlank[k] += 25;
+ }
}
mode_lib->vba.DRAMClockChangeSupport[0][0] = dm_dram_clock_change_vactive;
if (ret)
goto out;
- if (old_fb->format != fb->format) {
+ /*
+ * Only check the FOURCC format code, excluding modifiers. This is
+ * enough for all legacy drivers. Atomic drivers have their own
+ * checks in their ->atomic_check implementation, which will
+ * return -EINVAL if any hw or driver constraint is violated due
+ * to modifier changes.
+ */
+ if (old_fb->format->format != fb->format->format) {
DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n");
ret = -EINVAL;
goto out;
i915_config.o \
i915_irq.o \
i915_getparam.o \
+ i915_mitigations.o \
i915_params.o \
i915_pci.o \
i915_scatterlist.o \
get_dsi_io_power_domains(i915,
enc_to_intel_dsi(encoder));
-
- if (crtc_state->dsc.compression_enable)
- intel_display_power_get(i915,
- intel_dsc_power_domain(crtc_state));
}
static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
val = pch_get_backlight(connector);
else
val = lpt_get_backlight(connector);
- val = intel_panel_compute_brightness(connector, val);
- panel->backlight.level = clamp(val, panel->backlight.min,
- panel->backlight.max);
if (cpu_mode) {
drm_dbg_kms(&dev_priv->drm,
"CPU backlight register was enabled, switching to PCH override\n");
/* Write converted CPU PWM value to PCH override register */
- lpt_set_backlight(connector->base.state, panel->backlight.level);
+ lpt_set_backlight(connector->base.state, val);
intel_de_write(dev_priv, BLC_PWM_PCH_CTL1,
pch_ctl1 | BLM_PCH_OVERRIDE_ENABLE);
cpu_ctl2 & ~BLM_PWM_ENABLE);
}
+ val = intel_panel_compute_brightness(connector, val);
+ panel->backlight.level = clamp(val, panel->backlight.min,
+ panel->backlight.max);
+
return 0;
}
intel_dsi_prepare(encoder, pipe_config);
intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON);
- intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay);
- /* Deassert reset */
- intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
+	/*
+	 * Give the panel time to power on, then deassert its reset.
+	 * Depending on the VBT MIPI sequences version, the deassert-seq
+	 * may contain the necessary delay; intel_dsi_msleep() will skip
+	 * the delay in that case. If there is no deassert-seq, an
+	 * unconditional msleep is used to give the panel time to power on.
+	 */
+ if (dev_priv->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) {
+ intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay);
+ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
+ } else {
+ msleep(intel_dsi->panel_on_delay);
+ }
if (IS_GEMINILAKE(dev_priv)) {
glk_cold_boot = glk_dsi_enable_io(encoder);
#include "i915_drv.h"
#include "intel_gpu_commands.h"
-#define MAX_URB_ENTRIES 64
-#define STATE_SIZE (4 * 1024)
#define GT3_INLINE_DATA_DELAYS 0x1E00
#define batch_advance(Y, CS) GEM_BUG_ON((Y)->end != (CS))
};
struct batch_vals {
- u32 max_primitives;
- u32 max_urb_entries;
- u32 cmd_size;
- u32 state_size;
+ u32 max_threads;
u32 state_start;
- u32 batch_size;
+ u32 surface_start;
u32 surface_height;
u32 surface_width;
- u32 scratch_size;
- u32 max_size;
+ u32 size;
};
+static inline int num_primitives(const struct batch_vals *bv)
+{
+ /*
+ * We need to saturate the GPU with work in order to dispatch
+ * a shader on every HW thread, and clear the thread-local registers.
+ * In short, we have to dispatch work faster than the shaders can
+ * run in order to fill the EU and occupy each HW thread.
+ */
+ return bv->max_threads;
+}
+
static void
batch_get_defaults(struct drm_i915_private *i915, struct batch_vals *bv)
{
if (IS_HASWELL(i915)) {
- bv->max_primitives = 280;
- bv->max_urb_entries = MAX_URB_ENTRIES;
+ switch (INTEL_INFO(i915)->gt) {
+ default:
+ case 1:
+ bv->max_threads = 70;
+ break;
+ case 2:
+ bv->max_threads = 140;
+ break;
+ case 3:
+ bv->max_threads = 280;
+ break;
+ }
bv->surface_height = 16 * 16;
bv->surface_width = 32 * 2 * 16;
} else {
- bv->max_primitives = 128;
- bv->max_urb_entries = MAX_URB_ENTRIES / 2;
+ switch (INTEL_INFO(i915)->gt) {
+ default:
+ case 1: /* including vlv */
+ bv->max_threads = 36;
+ break;
+ case 2:
+ bv->max_threads = 128;
+ break;
+ }
bv->surface_height = 16 * 8;
bv->surface_width = 32 * 16;
}
- bv->cmd_size = bv->max_primitives * 4096;
- bv->state_size = STATE_SIZE;
- bv->state_start = bv->cmd_size;
- bv->batch_size = bv->cmd_size + bv->state_size;
- bv->scratch_size = bv->surface_height * bv->surface_width;
- bv->max_size = bv->batch_size + bv->scratch_size;
+ bv->state_start = round_up(SZ_1K + num_primitives(bv) * 64, SZ_4K);
+ bv->surface_start = bv->state_start + SZ_4K;
+ bv->size = bv->surface_start + bv->surface_height * bv->surface_width;
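+	/*
+	 * Resulting buffer layout:
+	 *   [0, state_start)             command stream
+	 *   [state_start, surface_start) one page of indirect state
+	 *   [surface_start, size)        scratch surface
+	 */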
}
static void batch_init(struct batch_chunk *bc,
gen7_fill_binding_table(struct batch_chunk *state,
const struct batch_vals *bv)
{
- u32 surface_start = gen7_fill_surface_state(state, bv->batch_size, bv);
+ u32 surface_start =
+ gen7_fill_surface_state(state, bv->surface_start, bv);
u32 *cs = batch_alloc_items(state, 32, 8);
u32 offset = batch_offset(state, cs);
gen7_emit_state_base_address(struct batch_chunk *batch,
u32 surface_state_base)
{
- u32 *cs = batch_alloc_items(batch, 0, 12);
+ u32 *cs = batch_alloc_items(batch, 0, 10);
- *cs++ = STATE_BASE_ADDRESS | (12 - 2);
+ *cs++ = STATE_BASE_ADDRESS | (10 - 2);
/* general */
*cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY;
/* surface */
*cs++ = BASE_ADDRESS_MODIFY;
*cs++ = 0;
*cs++ = BASE_ADDRESS_MODIFY;
- *cs++ = 0;
- *cs++ = 0;
batch_advance(batch, cs);
}
u32 urb_size, u32 curbe_size,
u32 mode)
{
- u32 urb_entries = bv->max_urb_entries;
- u32 threads = bv->max_primitives - 1;
+ u32 threads = bv->max_threads - 1;
u32 *cs = batch_alloc_items(batch, 32, 8);
*cs++ = MEDIA_VFE_STATE | (8 - 2);
*cs++ = 0;
/* number of threads & urb entries for GPGPU vs Media Mode */
- *cs++ = threads << 16 | urb_entries << 8 | mode << 2;
+ *cs++ = threads << 16 | 1 << 8 | mode << 2;
*cs++ = 0;
{
unsigned int x_offset = (media_object_index % 16) * 64;
unsigned int y_offset = (media_object_index / 16) * 16;
- unsigned int inline_data_size;
- unsigned int media_batch_size;
- unsigned int i;
+ unsigned int pkt = 6 + 3;
u32 *cs;
- inline_data_size = 112 * 8;
- media_batch_size = inline_data_size + 6;
-
- cs = batch_alloc_items(batch, 8, media_batch_size);
+ cs = batch_alloc_items(batch, 8, pkt);
- *cs++ = MEDIA_OBJECT | (media_batch_size - 2);
+ *cs++ = MEDIA_OBJECT | (pkt - 2);
/* interface descriptor offset */
*cs++ = 0;
*cs++ = 0;
/* inline */
- *cs++ = (y_offset << 16) | (x_offset);
+ *cs++ = y_offset << 16 | x_offset;
*cs++ = 0;
*cs++ = GT3_INLINE_DATA_DELAYS;
- for (i = 3; i < inline_data_size; i++)
- *cs++ = 0;
batch_advance(batch, cs);
}
static void gen7_emit_pipeline_flush(struct batch_chunk *batch)
{
- u32 *cs = batch_alloc_items(batch, 0, 5);
+ u32 *cs = batch_alloc_items(batch, 0, 4);
- *cs++ = GFX_OP_PIPE_CONTROL(5);
- *cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE |
- PIPE_CONTROL_GLOBAL_GTT_IVB;
+ *cs++ = GFX_OP_PIPE_CONTROL(4);
+ *cs++ = PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
+ PIPE_CONTROL_DEPTH_CACHE_FLUSH |
+ PIPE_CONTROL_DC_FLUSH_ENABLE |
+ PIPE_CONTROL_CS_STALL;
*cs++ = 0;
*cs++ = 0;
+
+ batch_advance(batch, cs);
+}
+
+static void gen7_emit_pipeline_invalidate(struct batch_chunk *batch)
+{
+ u32 *cs = batch_alloc_items(batch, 0, 8);
+
+ /* ivb: Stall before STATE_CACHE_INVALIDATE */
+ *cs++ = GFX_OP_PIPE_CONTROL(4);
+ *cs++ = PIPE_CONTROL_STALL_AT_SCOREBOARD |
+ PIPE_CONTROL_CS_STALL;
+ *cs++ = 0;
+ *cs++ = 0;
+
+ *cs++ = GFX_OP_PIPE_CONTROL(4);
+ *cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE;
*cs++ = 0;
+ *cs++ = 0;
+
batch_advance(batch, cs);
}
const struct batch_vals *bv)
{
struct drm_i915_private *i915 = vma->vm->i915;
- unsigned int desc_count = 64;
- const u32 urb_size = 112;
+ const unsigned int desc_count = 1;
+ const unsigned int urb_size = 1;
struct batch_chunk cmds, state;
- u32 interface_descriptor;
+ u32 descriptors;
unsigned int i;
- batch_init(&cmds, vma, start, 0, bv->cmd_size);
- batch_init(&state, vma, start, bv->state_start, bv->state_size);
+ batch_init(&cmds, vma, start, 0, bv->state_start);
+ batch_init(&state, vma, start, bv->state_start, SZ_4K);
- interface_descriptor =
- gen7_fill_interface_descriptor(&state, bv,
- IS_HASWELL(i915) ?
- &cb_kernel_hsw :
- &cb_kernel_ivb,
- desc_count);
- gen7_emit_pipeline_flush(&cmds);
+ descriptors = gen7_fill_interface_descriptor(&state, bv,
+ IS_HASWELL(i915) ?
+ &cb_kernel_hsw :
+ &cb_kernel_ivb,
+ desc_count);
+
+ gen7_emit_pipeline_invalidate(&cmds);
batch_add(&cmds, PIPELINE_SELECT | PIPELINE_SELECT_MEDIA);
batch_add(&cmds, MI_NOOP);
- gen7_emit_state_base_address(&cmds, interface_descriptor);
+ gen7_emit_pipeline_invalidate(&cmds);
+
gen7_emit_pipeline_flush(&cmds);
+ gen7_emit_state_base_address(&cmds, descriptors);
+ gen7_emit_pipeline_invalidate(&cmds);
gen7_emit_vfe_state(&cmds, bv, urb_size - 1, 0, 0);
+ gen7_emit_interface_descriptor_load(&cmds, descriptors, desc_count);
- gen7_emit_interface_descriptor_load(&cmds,
- interface_descriptor,
- desc_count);
-
- for (i = 0; i < bv->max_primitives; i++)
+ for (i = 0; i < num_primitives(bv); i++)
gen7_emit_media_object(&cmds, i);
batch_add(&cmds, MI_BATCH_BUFFER_END);
batch_get_defaults(engine->i915, &bv);
if (!vma)
- return bv.max_size;
+ return bv.size;
- GEM_BUG_ON(vma->obj->base.size < bv.max_size);
+ GEM_BUG_ON(vma->obj->base.size < bv.size);
batch = i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
if (IS_ERR(batch))
return PTR_ERR(batch);
- emit_batch(vma, memset(batch, 0, bv.max_size), &bv);
+ emit_batch(vma, memset(batch, 0, bv.size), &bv);
i915_gem_object_flush_map(vma->obj);
__i915_gem_object_release_map(vma->obj);
#include "gen6_ppgtt.h"
#include "gen7_renderclear.h"
#include "i915_drv.h"
+#include "i915_mitigations.h"
#include "intel_breadcrumbs.h"
#include "intel_context.h"
#include "intel_gt.h"
GEM_BUG_ON(HAS_EXECLISTS(engine->i915));
if (engine->wa_ctx.vma && ce != engine->kernel_context) {
- if (engine->wa_ctx.vma->private != ce) {
+ if (engine->wa_ctx.vma->private != ce &&
+ i915_mitigate_clear_residuals()) {
ret = clear_residuals(rq);
if (ret)
return ret;
GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma);
- if (IS_HASWELL(engine->i915) && engine->class == RENDER_CLASS) {
+ if (IS_GEN(engine->i915, 7) && engine->class == RENDER_CLASS) {
err = gen7_ctx_switch_bb_init(engine);
if (err)
goto err_ring_unpin;
DDI_BUF_CTL_ENABLE);
vgpu_vreg_t(vgpu, DDI_BUF_CTL(port)) |= DDI_BUF_IS_IDLE;
}
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~(PORTA_HOTPLUG_ENABLE | PORTA_HOTPLUG_STATUS_MASK);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~(PORTB_HOTPLUG_ENABLE | PORTB_HOTPLUG_STATUS_MASK);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~(PORTC_HOTPLUG_ENABLE | PORTC_HOTPLUG_STATUS_MASK);
+	/* No hpd_invert is set in the vGPU VBT, so clear the invert mask */
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= ~BXT_DDI_HPD_INVERT_MASK;
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~BXT_DE_PORT_HOTPLUG_MASK;
vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) &= ~(BIT(0) | BIT(1));
vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) &=
vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_EDP)) |=
(TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
TRANS_DDI_FUNC_ENABLE);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTA_HOTPLUG_ENABLE;
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
GEN8_DE_PORT_HOTPLUG(HPD_PORT_A);
}
(TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
(PORT_B << TRANS_DDI_PORT_SHIFT) |
TRANS_DDI_FUNC_ENABLE);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTB_HOTPLUG_ENABLE;
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
GEN8_DE_PORT_HOTPLUG(HPD_PORT_B);
}
(TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
(PORT_B << TRANS_DDI_PORT_SHIFT) |
TRANS_DDI_FUNC_ENABLE);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTC_HOTPLUG_ENABLE;
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
GEN8_DE_PORT_HOTPLUG(HPD_PORT_C);
}
PORTD_HOTPLUG_STATUS_MASK;
intel_vgpu_trigger_virtual_event(vgpu, DP_D_HOTPLUG);
} else if (IS_BROXTON(i915)) {
- if (connected) {
- if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
+ if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
+ if (connected) {
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
GEN8_DE_PORT_HOTPLUG(HPD_PORT_A);
+ } else {
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
+ ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_A);
}
- if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
- vgpu_vreg_t(vgpu, SFUSE_STRAP) |=
- SFUSE_STRAP_DDIB_DETECTED;
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
+ GEN8_DE_PORT_HOTPLUG(HPD_PORT_A);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~PORTA_HOTPLUG_STATUS_MASK;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTA_HOTPLUG_LONG_DETECT;
+ intel_vgpu_trigger_virtual_event(vgpu, DP_A_HOTPLUG);
+ }
+ if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
+ if (connected) {
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
GEN8_DE_PORT_HOTPLUG(HPD_PORT_B);
- }
- if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
vgpu_vreg_t(vgpu, SFUSE_STRAP) |=
- SFUSE_STRAP_DDIC_DETECTED;
- vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
- GEN8_DE_PORT_HOTPLUG(HPD_PORT_C);
- }
- } else {
- if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
+ SFUSE_STRAP_DDIB_DETECTED;
+ } else {
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
- ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_A);
- }
- if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
+ ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_B);
vgpu_vreg_t(vgpu, SFUSE_STRAP) &=
~SFUSE_STRAP_DDIB_DETECTED;
- vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
- ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_B);
}
- if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
- vgpu_vreg_t(vgpu, SFUSE_STRAP) &=
- ~SFUSE_STRAP_DDIC_DETECTED;
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
+ GEN8_DE_PORT_HOTPLUG(HPD_PORT_B);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~PORTB_HOTPLUG_STATUS_MASK;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTB_HOTPLUG_LONG_DETECT;
+ intel_vgpu_trigger_virtual_event(vgpu, DP_B_HOTPLUG);
+ }
+ if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
+ if (connected) {
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+ GEN8_DE_PORT_HOTPLUG(HPD_PORT_C);
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) |=
+ SFUSE_STRAP_DDIC_DETECTED;
+ } else {
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
~GEN8_DE_PORT_HOTPLUG(HPD_PORT_C);
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) &=
+ ~SFUSE_STRAP_DDIC_DETECTED;
}
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
+ GEN8_DE_PORT_HOTPLUG(HPD_PORT_C);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~PORTC_HOTPLUG_STATUS_MASK;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTC_HOTPLUG_LONG_DETECT;
+ intel_vgpu_trigger_virtual_event(vgpu, DP_C_HOTPLUG);
}
- vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
- PORTB_HOTPLUG_STATUS_MASK;
- intel_vgpu_trigger_virtual_event(vgpu, DP_B_HOTPLUG);
}
}
if (ret)
goto out_clean_sched_policy;
- if (IS_BROADWELL(dev_priv))
+ if (IS_BROADWELL(dev_priv) || IS_BROXTON(dev_priv))
ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B);
- /* FixMe: Re-enable APL/BXT once vfio_edid enabled */
- else if (!IS_BROXTON(dev_priv))
+ else
ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
if (ret)
goto out_clean_sched_policy;
void i915_driver_shutdown(struct drm_i915_private *i915)
{
+ disable_rpm_wakeref_asserts(&i915->runtime_pm);
+
i915_gem_suspend(i915);
drm_kms_helper_poll_disable(&i915->drm);
intel_suspend_encoders(i915);
intel_shutdown_encoders(i915);
+
+ enable_rpm_wakeref_asserts(&i915->runtime_pm);
}
static bool suspend_to_idle(struct drm_i915_private *dev_priv)
--- /dev/null
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <linux/kernel.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+
+#include "i915_drv.h"
+#include "i915_mitigations.h"
+
+static unsigned long mitigations __read_mostly = ~0UL;
+
+enum {
+ CLEAR_RESIDUALS = 0,
+};
+
+static const char * const names[] = {
+ [CLEAR_RESIDUALS] = "residuals",
+};
+
+bool i915_mitigate_clear_residuals(void)
+{
+ return READ_ONCE(mitigations) & BIT(CLEAR_RESIDUALS);
+}
+
+static int mitigations_set(const char *val, const struct kernel_param *kp)
+{
+ unsigned long new = ~0UL;
+ char *str, *sep, *tok;
+ bool first = true;
+ int err = 0;
+
+ BUILD_BUG_ON(ARRAY_SIZE(names) >= BITS_PER_TYPE(mitigations));
+
+ str = kstrdup(val, GFP_KERNEL);
+ if (!str)
+ return -ENOMEM;
+
+ for (sep = str; (tok = strsep(&sep, ","));) {
+ bool enable = true;
+ int i;
+
+ /* Be tolerant of leading/trailing whitespace */
+ tok = strim(tok);
+
+ if (first) {
+ first = false;
+
+ if (!strcmp(tok, "auto"))
+ continue;
+
+ new = 0;
+ if (!strcmp(tok, "off"))
+ continue;
+ }
+
+ if (*tok == '!') {
+ enable = !enable;
+ tok++;
+ }
+
+ if (!strncmp(tok, "no", 2)) {
+ enable = !enable;
+ tok += 2;
+ }
+
+ if (*tok == '\0')
+ continue;
+
+ for (i = 0; i < ARRAY_SIZE(names); i++) {
+ if (!strcmp(tok, names[i])) {
+ if (enable)
+ new |= BIT(i);
+ else
+ new &= ~BIT(i);
+ break;
+ }
+ }
+ if (i == ARRAY_SIZE(names)) {
+ pr_err("Bad \"%s.mitigations=%s\", '%s' is unknown\n",
+ DRIVER_NAME, val, tok);
+ err = -EINVAL;
+ break;
+ }
+ }
+ kfree(str);
+ if (err)
+ return err;
+
+ WRITE_ONCE(mitigations, new);
+ return 0;
+}
+
+static int mitigations_get(char *buffer, const struct kernel_param *kp)
+{
+ unsigned long local = READ_ONCE(mitigations);
+ int count, i;
+ bool enable;
+
+ if (!local)
+ return scnprintf(buffer, PAGE_SIZE, "%s\n", "off");
+
+ if (local & BIT(BITS_PER_LONG - 1)) {
+ count = scnprintf(buffer, PAGE_SIZE, "%s,", "auto");
+ enable = false;
+ } else {
+ enable = true;
+ count = 0;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(names); i++) {
+ if ((local & BIT(i)) != enable)
+ continue;
+
+ count += scnprintf(buffer + count, PAGE_SIZE - count,
+ "%s%s,", enable ? "" : "!", names[i]);
+ }
+
+ buffer[count - 1] = '\n';
+ return count;
+}
+
+static const struct kernel_param_ops ops = {
+ .set = mitigations_set,
+ .get = mitigations_get,
+};
+
+module_param_cb_unsafe(mitigations, &ops, NULL, 0600);
+MODULE_PARM_DESC(mitigations,
+"Selectively enable security mitigations for all Intel® GPUs in the system.\n"
+"\n"
+" auto -- enables all mitigations required for the platform [default]\n"
+" off -- disables all mitigations\n"
+"\n"
+"Individual mitigations can be enabled by passing a comma-separated string,\n"
+"e.g. mitigations=residuals to enable only clearing residuals or\n"
+"mitigations=auto,noresiduals to disable only the clear residual mitigation.\n"
+"Either '!' or 'no' may be used to switch from enabling the mitigation to\n"
+"disabling it.\n"
+"\n"
+"Active mitigations for Ivybridge, Baytrail, Haswell:\n"
+" residuals -- clear all thread-local registers between contexts"
+);
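+
+/*
+ * Example invocations accepted by the grammar in mitigations_set() above
+ * (illustrative only):
+ *   i915.mitigations=auto              all platform-required mitigations
+ *   i915.mitigations=off               no mitigations at all
+ *   i915.mitigations=auto,!residuals   defaults, minus residual clearing
+ */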
--- /dev/null
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __I915_MITIGATIONS_H__
+#define __I915_MITIGATIONS_H__
+
+#include <linux/types.h>
+
+bool i915_mitigate_clear_residuals(void);
+
+#endif /* __I915_MITIGATIONS_H__ */
nouveau-y += dispnv50/wndw.o
nouveau-y += dispnv50/wndwc37e.o
nouveau-y += dispnv50/wndwc57e.o
+nouveau-y += dispnv50/wndwc67e.o
nouveau-y += dispnv50/base.o
nouveau-y += dispnv50/base507c.o
int version;
int (*new)(struct nouveau_drm *, s32, struct nv50_core **);
} cores[] = {
+ { GA102_DISP_CORE_CHANNEL_DMA, 0, corec57d_new },
{ TU102_DISP_CORE_CHANNEL_DMA, 0, corec57d_new },
{ GV100_DISP_CORE_CHANNEL_DMA, 0, corec37d_new },
{ GP102_DISP_CORE_CHANNEL_DMA, 0, core917d_new },
int version;
int (*new)(struct nouveau_drm *, int, s32, struct nv50_wndw **);
} curses[] = {
+ { GA102_DISP_CURSOR, 0, cursc37a_new },
{ TU102_DISP_CURSOR, 0, cursc37a_new },
{ GV100_DISP_CURSOR, 0, cursc37a_new },
{ GK104_DISP_CURSOR, 0, curs907a_new },
int
nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
- const s32 *oclass, u8 head, void *data, u32 size, u64 syncbuf,
+ const s32 *oclass, u8 head, void *data, u32 size, s64 syncbuf,
struct nv50_dmac *dmac)
{
struct nouveau_cli *cli = (void *)device->object.client;
if (ret)
return ret;
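+	/* A negative syncbuf handle means the caller (e.g. wimm) has no
+	 * sync buffer, so skip creating the sync ctxdma.
+	 */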
- if (!syncbuf)
+ if (syncbuf < 0)
return 0;
ret = nvif_object_ctor(&dmac->base.user, "kmsSyncCtxDma", NV50_DISP_HANDLE_SYNCBUF,
int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
const s32 *oclass, u8 head, void *data, u32 size,
- u64 syncbuf, struct nv50_dmac *dmac);
+ s64 syncbuf, struct nv50_dmac *dmac);
void nv50_dmac_destroy(struct nv50_dmac *);
/*
int version;
int (*init)(struct nouveau_drm *, s32, struct nv50_wndw *);
} wimms[] = {
+ { GA102_DISP_WINDOW_IMM_CHANNEL_DMA, 0, wimmc37b_init },
{ TU102_DISP_WINDOW_IMM_CHANNEL_DMA, 0, wimmc37b_init },
{ GV100_DISP_WINDOW_IMM_CHANNEL_DMA, 0, wimmc37b_init },
{}
int ret;
ret = nv50_dmac_create(&drm->client.device, &disp->disp->object,
- &oclass, 0, &args, sizeof(args), 0,
+ &oclass, 0, &args, sizeof(args), -1,
&wndw->wimm);
if (ret) {
NV_ERROR(drm, "wimm%04x allocation failed: %d\n", oclass, ret);
int (*new)(struct nouveau_drm *, enum drm_plane_type,
int, s32, struct nv50_wndw **);
} wndws[] = {
+ { GA102_DISP_WINDOW_CHANNEL_DMA, 0, wndwc67e_new },
{ TU102_DISP_WINDOW_CHANNEL_DMA, 0, wndwc57e_new },
{ GV100_DISP_WINDOW_CHANNEL_DMA, 0, wndwc37e_new },
{}
int wndwc57e_new(struct nouveau_drm *, enum drm_plane_type, int, s32,
struct nv50_wndw **);
+bool wndwc57e_ilut(struct nv50_wndw *, struct nv50_wndw_atom *, int);
+int wndwc57e_ilut_set(struct nv50_wndw *, struct nv50_wndw_atom *);
+int wndwc57e_ilut_clr(struct nv50_wndw *);
+int wndwc57e_csc_set(struct nv50_wndw *, struct nv50_wndw_atom *);
+int wndwc57e_csc_clr(struct nv50_wndw *);
+
+int wndwc67e_new(struct nouveau_drm *, enum drm_plane_type, int, s32,
+ struct nv50_wndw **);
int nv50_wndw_new(struct nouveau_drm *, enum drm_plane_type, int index,
struct nv50_wndw **);
return 0;
}
-static int
+int
wndwc57e_csc_clr(struct nv50_wndw *wndw)
{
struct nvif_push *push = wndw->wndw.push;
return 0;
}
-static int
+int
wndwc57e_csc_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
{
struct nvif_push *push = wndw->wndw.push;
return 0;
}
-static int
+int
wndwc57e_ilut_clr(struct nv50_wndw *wndw)
{
struct nvif_push *push = wndw->wndw.push;
return 0;
}
-static int
+int
wndwc57e_ilut_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
{
struct nvif_push *push = wndw->wndw.push;
writew(readw(mem - 4), mem + 4);
}
-static bool
+bool
wndwc57e_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw, int size)
{
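	/* Treat size == 0 as the default 1024-entry LUT; only 256- and
	 * 1024-entry ILUTs are supported.
	 */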
if (size = size ? size : 1024, size != 256 && size != 1024)
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "wndw.h"
+#include "atom.h"
+
+#include <nvif/pushc37b.h>
+
+#include <nvhw/class/clc57e.h>
+
+static int
+wndwc67e_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
+{
+ struct nvif_push *push = wndw->wndw.push;
+ int ret;
+
+ if ((ret = PUSH_WAIT(push, 17)))
+ return ret;
+
+ PUSH_MTHD(push, NVC57E, SET_PRESENT_CONTROL,
+ NVVAL(NVC57E, SET_PRESENT_CONTROL, MIN_PRESENT_INTERVAL, asyw->image.interval) |
+ NVVAL(NVC57E, SET_PRESENT_CONTROL, BEGIN_MODE, asyw->image.mode) |
+ NVDEF(NVC57E, SET_PRESENT_CONTROL, TIMESTAMP_MODE, DISABLE));
+
+ PUSH_MTHD(push, NVC57E, SET_SIZE,
+ NVVAL(NVC57E, SET_SIZE, WIDTH, asyw->image.w) |
+ NVVAL(NVC57E, SET_SIZE, HEIGHT, asyw->image.h),
+
+ SET_STORAGE,
+ NVVAL(NVC57E, SET_STORAGE, BLOCK_HEIGHT, asyw->image.blockh),
+
+ SET_PARAMS,
+ NVVAL(NVC57E, SET_PARAMS, FORMAT, asyw->image.format) |
+ NVDEF(NVC57E, SET_PARAMS, CLAMP_BEFORE_BLEND, DISABLE) |
+ NVDEF(NVC57E, SET_PARAMS, SWAP_UV, DISABLE) |
+ NVDEF(NVC57E, SET_PARAMS, FMT_ROUNDING_MODE, ROUND_TO_NEAREST),
+
+ SET_PLANAR_STORAGE(0),
+ NVVAL(NVC57E, SET_PLANAR_STORAGE, PITCH, asyw->image.blocks[0]) |
+ NVVAL(NVC57E, SET_PLANAR_STORAGE, PITCH, asyw->image.pitch[0] >> 6));
+
+ PUSH_MTHD(push, NVC57E, SET_CONTEXT_DMA_ISO(0), asyw->image.handle, 1);
+ PUSH_MTHD(push, NVC57E, SET_OFFSET(0), asyw->image.offset[0] >> 8);
+
+ PUSH_MTHD(push, NVC57E, SET_POINT_IN(0),
+ NVVAL(NVC57E, SET_POINT_IN, X, asyw->state.src_x >> 16) |
+ NVVAL(NVC57E, SET_POINT_IN, Y, asyw->state.src_y >> 16));
+
+ PUSH_MTHD(push, NVC57E, SET_SIZE_IN,
+ NVVAL(NVC57E, SET_SIZE_IN, WIDTH, asyw->state.src_w >> 16) |
+ NVVAL(NVC57E, SET_SIZE_IN, HEIGHT, asyw->state.src_h >> 16));
+
+ PUSH_MTHD(push, NVC57E, SET_SIZE_OUT,
+ NVVAL(NVC57E, SET_SIZE_OUT, WIDTH, asyw->state.crtc_w) |
+ NVVAL(NVC57E, SET_SIZE_OUT, HEIGHT, asyw->state.crtc_h));
+ return 0;
+}
+
+static const struct nv50_wndw_func
+wndwc67e = {
+ .acquire = wndwc37e_acquire,
+ .release = wndwc37e_release,
+ .sema_set = wndwc37e_sema_set,
+ .sema_clr = wndwc37e_sema_clr,
+ .ntfy_set = wndwc37e_ntfy_set,
+ .ntfy_clr = wndwc37e_ntfy_clr,
+ .ntfy_reset = corec37d_ntfy_init,
+ .ntfy_wait_begun = base507c_ntfy_wait_begun,
+ .ilut = wndwc57e_ilut,
+ .ilut_identity = true,
+ .ilut_size = 1024,
+ .xlut_set = wndwc57e_ilut_set,
+ .xlut_clr = wndwc57e_ilut_clr,
+ .csc = base907c_csc,
+ .csc_set = wndwc57e_csc_set,
+ .csc_clr = wndwc57e_csc_clr,
+ .image_set = wndwc67e_image_set,
+ .image_clr = wndwc37e_image_clr,
+ .blend_set = wndwc37e_blend_set,
+ .update = wndwc37e_update,
+};
+
+int
+wndwc67e_new(struct nouveau_drm *drm, enum drm_plane_type type, int index,
+ s32 oclass, struct nv50_wndw **pwndw)
+{
+ return wndwc37e_new_(&wndwc67e, drm, type, index, oclass, BIT(index >> 1), pwndw);
+}
#define NV_DEVICE_INFO_V0_PASCAL 0x0a
#define NV_DEVICE_INFO_V0_VOLTA 0x0b
#define NV_DEVICE_INFO_V0_TURING 0x0c
+#define NV_DEVICE_INFO_V0_AMPERE 0x0d
__u8 family;
__u8 pad06[2];
__u64 ram_size;
#define GP102_DISP /* cl5070.h */ 0x00009870
#define GV100_DISP /* cl5070.h */ 0x0000c370
#define TU102_DISP /* cl5070.h */ 0x0000c570
+#define GA102_DISP /* cl5070.h */ 0x0000c670
#define GV100_DISP_CAPS 0x0000c373
#define GK104_DISP_CURSOR /* cl507a.h */ 0x0000917a
#define GV100_DISP_CURSOR /* cl507a.h */ 0x0000c37a
#define TU102_DISP_CURSOR /* cl507a.h */ 0x0000c57a
+#define GA102_DISP_CURSOR /* cl507a.h */ 0x0000c67a
#define NV50_DISP_OVERLAY /* cl507b.h */ 0x0000507b
#define G82_DISP_OVERLAY /* cl507b.h */ 0x0000827b
#define GV100_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c37b
#define TU102_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c57b
+#define GA102_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c67b
#define NV50_DISP_BASE_CHANNEL_DMA /* cl507c.h */ 0x0000507c
#define G82_DISP_BASE_CHANNEL_DMA /* cl507c.h */ 0x0000827c
#define GP102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000987d
#define GV100_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c37d
#define TU102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c57d
+#define GA102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c67d
#define NV50_DISP_OVERLAY_CHANNEL_DMA /* cl507e.h */ 0x0000507e
#define G82_DISP_OVERLAY_CHANNEL_DMA /* cl507e.h */ 0x0000827e
#define GV100_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c37e
#define TU102_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c57e
+#define GA102_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c67e
#define NV50_TESLA 0x00005097
#define G82_TESLA 0x00008297
GP100 = 0x130,
GV100 = 0x140,
TU100 = 0x160,
+ GA100 = 0x170,
} card_type;
u32 chipset;
u8 chiprev;
int gp102_disp_new(struct nvkm_device *, int, struct nvkm_disp **);
int gv100_disp_new(struct nvkm_device *, int, struct nvkm_disp **);
int tu102_disp_new(struct nvkm_device *, int, struct nvkm_disp **);
+int ga102_disp_new(struct nvkm_device *, int, struct nvkm_disp **);
#endif
int gm200_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **);
int gv100_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **);
int tu102_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **);
+int ga100_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **);
#endif
int gp102_fb_new(struct nvkm_device *, int, struct nvkm_fb **);
int gp10b_fb_new(struct nvkm_device *, int, struct nvkm_fb **);
int gv100_fb_new(struct nvkm_device *, int, struct nvkm_fb **);
+int ga100_fb_new(struct nvkm_device *, int, struct nvkm_fb **);
+int ga102_fb_new(struct nvkm_device *, int, struct nvkm_fb **);
#include <subdev/bios.h>
#include <subdev/bios/ramcfg.h>
int g94_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **);
int gf119_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **);
int gk104_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **);
+int ga102_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **);
#endif
int gf117_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **);
int gf119_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **);
int gk104_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **);
+int gk110_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **);
int gm200_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **);
static inline int
int gp100_mc_new(struct nvkm_device *, int, struct nvkm_mc **);
int gp10b_mc_new(struct nvkm_device *, int, struct nvkm_mc **);
int tu102_mc_new(struct nvkm_device *, int, struct nvkm_mc **);
+int ga100_mc_new(struct nvkm_device *, int, struct nvkm_mc **);
#endif
case NV_DEVICE_INFO_V0_PASCAL:
case NV_DEVICE_INFO_V0_VOLTA:
case NV_DEVICE_INFO_V0_TURING:
+ case NV_DEVICE_INFO_V0_AMPERE: //XXX: not confirmed
ret = nv50_backlight_init(nv_encoder, &props, &ops);
break;
default:
struct nvif_disp *disp)
{
static const struct nvif_mclass disps[] = {
+ { GA102_DISP, -1 },
{ TU102_DISP, -1 },
{ GV100_DISP, -1 },
{ GP102_DISP, -1 },
.fb = gk110_fb_new,
.fuse = gf100_fuse_new,
.gpio = gk104_gpio_new,
- .i2c = gk104_i2c_new,
+ .i2c = gk110_i2c_new,
.ibus = gk104_ibus_new,
.iccsense = gf100_iccsense_new,
.imem = nv50_instmem_new,
.fb = gk110_fb_new,
.fuse = gf100_fuse_new,
.gpio = gk104_gpio_new,
- .i2c = gk104_i2c_new,
+ .i2c = gk110_i2c_new,
.ibus = gk104_ibus_new,
.iccsense = gf100_iccsense_new,
.imem = nv50_instmem_new,
.fb = gk110_fb_new,
.fuse = gf100_fuse_new,
.gpio = gk104_gpio_new,
- .i2c = gk104_i2c_new,
+ .i2c = gk110_i2c_new,
.ibus = gk104_ibus_new,
.iccsense = gf100_iccsense_new,
.imem = nv50_instmem_new,
.fb = gk110_fb_new,
.fuse = gf100_fuse_new,
.gpio = gk104_gpio_new,
- .i2c = gk104_i2c_new,
+ .i2c = gk110_i2c_new,
.ibus = gk104_ibus_new,
.iccsense = gf100_iccsense_new,
.imem = nv50_instmem_new,
.fb = gm107_fb_new,
.fuse = gm107_fuse_new,
.gpio = gk104_gpio_new,
- .i2c = gk104_i2c_new,
+ .i2c = gk110_i2c_new,
.ibus = gk104_ibus_new,
.iccsense = gf100_iccsense_new,
.imem = nv50_instmem_new,
.fb = gm107_fb_new,
.fuse = gm107_fuse_new,
.gpio = gk104_gpio_new,
- .i2c = gk104_i2c_new,
+ .i2c = gk110_i2c_new,
.ibus = gk104_ibus_new,
.iccsense = gf100_iccsense_new,
.imem = nv50_instmem_new,
.sec2 = tu102_sec2_new,
};
+static const struct nvkm_device_chip
+nv170_chipset = {
+ .name = "GA100",
+ .bar = tu102_bar_new,
+ .bios = nvkm_bios_new,
+ .devinit = ga100_devinit_new,
+ .fb = ga100_fb_new,
+ .gpio = gk104_gpio_new,
+ .i2c = gm200_i2c_new,
+ .ibus = gm200_ibus_new,
+ .imem = nv50_instmem_new,
+ .mc = ga100_mc_new,
+ .mmu = tu102_mmu_new,
+ .pci = gp100_pci_new,
+ .timer = gk20a_timer_new,
+};
+
+static const struct nvkm_device_chip
+nv172_chipset = {
+ .name = "GA102",
+ .bar = tu102_bar_new,
+ .bios = nvkm_bios_new,
+ .devinit = ga100_devinit_new,
+ .fb = ga102_fb_new,
+ .gpio = ga102_gpio_new,
+ .i2c = gm200_i2c_new,
+ .ibus = gm200_ibus_new,
+ .imem = nv50_instmem_new,
+ .mc = ga100_mc_new,
+ .mmu = tu102_mmu_new,
+ .pci = gp100_pci_new,
+ .timer = gk20a_timer_new,
+ .disp = ga102_disp_new,
+ .dma = gv100_dma_new,
+};
+
+static const struct nvkm_device_chip
+nv174_chipset = {
+ .name = "GA104",
+ .bar = tu102_bar_new,
+ .bios = nvkm_bios_new,
+ .devinit = ga100_devinit_new,
+ .fb = ga102_fb_new,
+ .gpio = ga102_gpio_new,
+ .i2c = gm200_i2c_new,
+ .ibus = gm200_ibus_new,
+ .imem = nv50_instmem_new,
+ .mc = ga100_mc_new,
+ .mmu = tu102_mmu_new,
+ .pci = gp100_pci_new,
+ .timer = gk20a_timer_new,
+ .disp = ga102_disp_new,
+ .dma = gv100_dma_new,
+};
+
static int
nvkm_device_event_ctor(struct nvkm_object *object, void *data, u32 size,
struct nvkm_notify *notify)
case 0x130: device->card_type = GP100; break;
case 0x140: device->card_type = GV100; break;
case 0x160: device->card_type = TU100; break;
+ case 0x170: device->card_type = GA100; break;
default:
break;
}
case 0x166: device->chip = &nv166_chipset; break;
case 0x167: device->chip = &nv167_chipset; break;
case 0x168: device->chip = &nv168_chipset; break;
+ case 0x172: device->chip = &nv172_chipset; break;
+ case 0x174: device->chip = &nv174_chipset; break;
default:
- nvdev_error(device, "unknown chipset (%08x)\n", boot0);
- ret = -ENODEV;
- goto done;
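+		/* Unknown chipsets can be force-enabled for bring-up with
+		 * nouveau.config=NvEnableUnsupportedChipsets=1.
+		 */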
+ if (nvkm_boolopt(device->cfgopt, "NvEnableUnsupportedChipsets", false)) {
+ switch (device->chipset) {
+ case 0x170: device->chip = &nv170_chipset; break;
+ default:
+ break;
+ }
+ }
+
+ if (!device->chip) {
+ nvdev_error(device, "unknown chipset (%08x)\n", boot0);
+ ret = -ENODEV;
+ goto done;
+ }
+ break;
}
nvdev_info(device, "NVIDIA %s (%08x)\n",
case GP100: args->v0.family = NV_DEVICE_INFO_V0_PASCAL; break;
case GV100: args->v0.family = NV_DEVICE_INFO_V0_VOLTA; break;
case TU100: args->v0.family = NV_DEVICE_INFO_V0_TURING; break;
+ case GA100: args->v0.family = NV_DEVICE_INFO_V0_AMPERE; break;
default:
args->v0.family = 0;
break;
nvkm-y += nvkm/engine/disp/gp102.o
nvkm-y += nvkm/engine/disp/gv100.o
nvkm-y += nvkm/engine/disp/tu102.o
+nvkm-y += nvkm/engine/disp/ga102.o
nvkm-y += nvkm/engine/disp/vga.o
nvkm-y += nvkm/engine/disp/head.o
nvkm-y += nvkm/engine/disp/sorgp100.o
nvkm-y += nvkm/engine/disp/sorgv100.o
nvkm-y += nvkm/engine/disp/sortu102.o
+nvkm-y += nvkm/engine/disp/sorga102.o
nvkm-y += nvkm/engine/disp/outp.o
nvkm-y += nvkm/engine/disp/dp.o
nvkm-y += nvkm/engine/disp/rootgp102.o
nvkm-y += nvkm/engine/disp/rootgv100.o
nvkm-y += nvkm/engine/disp/roottu102.o
+nvkm-y += nvkm/engine/disp/rootga102.o
nvkm-y += nvkm/engine/disp/capsgv100.o
#include <nvif/event.h>
+/* IED scripts are no longer used by UEFI/RM starting with Ampere, but they
+ * have been updated for the x86 option ROM. However, the relevant VBIOS
+ * table versions weren't changed, so we're unable to detect this cleanly.
+ */
+#define AMPERE_IED_HACK(disp) ((disp)->engine.subdev.device->card_type >= GA100)
+
struct lt_state {
struct nvkm_dp *dp;
u8 stat[6];
dp->dpcd[DPCD_RC02] &= ~DPCD_RC02_TPS3_SUPPORTED;
lt.pc2 = dp->dpcd[DPCD_RC02] & DPCD_RC02_TPS3_SUPPORTED;
+ if (AMPERE_IED_HACK(disp) && (lnkcmp = lt.dp->info.script[0])) {
+ /* Execute BeforeLinkTraining script from DP Info table. */
+ while (ior->dp.bw < nvbios_rd08(bios, lnkcmp))
+ lnkcmp += 3;
+ lnkcmp = nvbios_rd16(bios, lnkcmp + 1);
+
+ nvbios_init(&dp->outp.disp->engine.subdev, lnkcmp,
+ init.outp = &dp->outp.info;
+ init.or = ior->id;
+ init.link = ior->asy.link;
+ );
+ }
+
/* Set desired link configuration on the source. */
if ((lnkcmp = lt.dp->info.lnkcmp)) {
if (dp->version < 0x30) {
);
}
- /* Execute BeforeLinkTraining script from DP Info table. */
- nvbios_init(&dp->outp.disp->engine.subdev, dp->info.script[0],
- init.outp = &dp->outp.info;
- init.or = dp->outp.ior->id;
- init.link = dp->outp.ior->asy.link;
- );
+ if (!AMPERE_IED_HACK(dp->outp.disp)) {
+ /* Execute BeforeLinkTraining script from DP Info table. */
+ nvbios_init(&dp->outp.disp->engine.subdev, dp->info.script[0],
+ init.outp = &dp->outp.info;
+ init.or = dp->outp.ior->id;
+ init.link = dp->outp.ior->asy.link;
+ );
+ }
}
static const struct dp_rates {
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "nv50.h"
+#include "head.h"
+#include "ior.h"
+#include "channv50.h"
+#include "rootnv50.h"
+
+static const struct nv50_disp_func
+ga102_disp = {
+ .init = tu102_disp_init,
+ .fini = gv100_disp_fini,
+ .intr = gv100_disp_intr,
+ .uevent = &gv100_disp_chan_uevent,
+ .super = gv100_disp_super,
+ .root = &ga102_disp_root_oclass,
+ .wndw = { .cnt = gv100_disp_wndw_cnt },
+ .head = { .cnt = gv100_head_cnt, .new = gv100_head_new },
+ .sor = { .cnt = gv100_sor_cnt, .new = ga102_sor_new },
+ .ramht_size = 0x2000,
+};
+
+int
+ga102_disp_new(struct nvkm_device *device, int index, struct nvkm_disp **pdisp)
+{
+ return nv50_disp_new_(&ga102_disp, device, index, pdisp);
+}
void gv100_sor_dp_audio_sym(struct nvkm_ior *, int, u16, u32);
void gv100_sor_dp_watermark(struct nvkm_ior *, int, u8);
+void tu102_sor_dp_vcpi(struct nvkm_ior *, int, u8, u8, u16, u16);
+
void g84_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8, u8 *, u8);
void gt215_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8, u8 *, u8);
void gf119_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8, u8 *, u8);
int gv100_sor_new(struct nvkm_disp *, int);
int tu102_sor_new(struct nvkm_disp *, int);
+
+int ga102_sor_new(struct nvkm_disp *, int);
#endif
void gv100_disp_super(struct work_struct *);
int gv100_disp_wndw_cnt(struct nvkm_disp *, unsigned long *);
+int tu102_disp_init(struct nv50_disp *);
+
void nv50_disp_dptmds_war_2(struct nv50_disp *, struct dcb_output *);
void nv50_disp_dptmds_war_3(struct nv50_disp *, struct dcb_output *);
void nv50_disp_update_sppll1(struct nv50_disp *);
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "rootnv50.h"
+#include "channv50.h"
+
+#include <nvif/class.h>
+
+static const struct nv50_disp_root_func
+ga102_disp_root = {
+ .user = {
+ {{-1,-1,GV100_DISP_CAPS }, gv100_disp_caps_new },
+ {{0,0,GA102_DISP_CURSOR }, gv100_disp_curs_new },
+ {{0,0,GA102_DISP_WINDOW_IMM_CHANNEL_DMA}, gv100_disp_wimm_new },
+ {{0,0,GA102_DISP_CORE_CHANNEL_DMA }, gv100_disp_core_new },
+ {{0,0,GA102_DISP_WINDOW_CHANNEL_DMA }, gv100_disp_wndw_new },
+ {}
+ },
+};
+
+static int
+ga102_disp_root_new(struct nvkm_disp *disp, const struct nvkm_oclass *oclass,
+ void *data, u32 size, struct nvkm_object **pobject)
+{
+ return nv50_disp_root_new_(&ga102_disp_root, disp, oclass, data, size, pobject);
+}
+
+const struct nvkm_disp_oclass
+ga102_disp_root_oclass = {
+ .base.oclass = GA102_DISP,
+ .base.minver = -1,
+ .base.maxver = -1,
+ .ctor = ga102_disp_root_new,
+};
extern const struct nvkm_disp_oclass gp102_disp_root_oclass;
extern const struct nvkm_disp_oclass gv100_disp_root_oclass;
extern const struct nvkm_disp_oclass tu102_disp_root_oclass;
+extern const struct nvkm_disp_oclass ga102_disp_root_oclass;
#endif
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "ior.h"
+
+#include <subdev/timer.h>
+
+static int
+ga102_sor_dp_links(struct nvkm_ior *sor, struct nvkm_i2c_aux *aux)
+{
+ struct nvkm_device *device = sor->disp->engine.subdev.device;
+ const u32 soff = nv50_ior_base(sor);
+ const u32 loff = nv50_sor_link(sor);
+ u32 dpctrl = 0x00000000;
+ u32 clksor = 0x00000000;
+
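+	/* sor->dp.bw is the DPCD link-bandwidth code (link rate in 270MHz
+	 * units): 0x06 = 1.62Gbps, 0x0a = 2.7Gbps, 0x14 = 5.4Gbps,
+	 * 0x1e = 8.1Gbps.
+	 */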
+ switch (sor->dp.bw) {
+ case 0x06: clksor |= 0x00000000; break;
+ case 0x0a: clksor |= 0x00040000; break;
+ case 0x14: clksor |= 0x00080000; break;
+ case 0x1e: clksor |= 0x000c0000; break;
+ default:
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ dpctrl |= ((1 << sor->dp.nr) - 1) << 16;
+ if (sor->dp.mst)
+ dpctrl |= 0x40000000;
+ if (sor->dp.ef)
+ dpctrl |= 0x00004000;
+
+ nvkm_mask(device, 0x612300 + soff, 0x007c0000, clksor);
+
+ /*XXX*/
+ nvkm_msec(device, 40, NVKM_DELAY);
+ nvkm_mask(device, 0x612300 + soff, 0x00030000, 0x00010000);
+ nvkm_mask(device, 0x61c10c + loff, 0x00000003, 0x00000001);
+
+ nvkm_mask(device, 0x61c10c + loff, 0x401f4000, dpctrl);
+ return 0;
+}
+
+static void
+ga102_sor_clock(struct nvkm_ior *sor)
+{
+ struct nvkm_device *device = sor->disp->engine.subdev.device;
+ u32 div2 = 0;
+ if (sor->asy.proto == TMDS) {
+ if (sor->tmds.high_speed)
+ div2 = 1;
+ }
+ nvkm_wr32(device, 0x00ec08 + (sor->id * 0x10), 0x00000000);
+ nvkm_wr32(device, 0x00ec04 + (sor->id * 0x10), div2);
+}
+
+static const struct nvkm_ior_func
+ga102_sor_hda = {
+ .route = {
+ .get = gm200_sor_route_get,
+ .set = gm200_sor_route_set,
+ },
+ .state = gv100_sor_state,
+ .power = nv50_sor_power,
+ .clock = ga102_sor_clock,
+ .hdmi = {
+ .ctrl = gv100_hdmi_ctrl,
+ .scdc = gm200_hdmi_scdc,
+ },
+ .dp = {
+ .lanes = { 0, 1, 2, 3 },
+ .links = ga102_sor_dp_links,
+ .power = g94_sor_dp_power,
+ .pattern = gm107_sor_dp_pattern,
+ .drive = gm200_sor_dp_drive,
+ .vcpi = tu102_sor_dp_vcpi,
+ .audio = gv100_sor_dp_audio,
+ .audio_sym = gv100_sor_dp_audio_sym,
+ .watermark = gv100_sor_dp_watermark,
+ },
+ .hda = {
+ .hpd = gf119_hda_hpd,
+ .eld = gf119_hda_eld,
+ .device_entry = gv100_hda_device_entry,
+ },
+};
+
+static const struct nvkm_ior_func
+ga102_sor = {
+ .route = {
+ .get = gm200_sor_route_get,
+ .set = gm200_sor_route_set,
+ },
+ .state = gv100_sor_state,
+ .power = nv50_sor_power,
+ .clock = ga102_sor_clock,
+ .hdmi = {
+ .ctrl = gv100_hdmi_ctrl,
+ .scdc = gm200_hdmi_scdc,
+ },
+ .dp = {
+ .lanes = { 0, 1, 2, 3 },
+ .links = ga102_sor_dp_links,
+ .power = g94_sor_dp_power,
+ .pattern = gm107_sor_dp_pattern,
+ .drive = gm200_sor_dp_drive,
+ .vcpi = tu102_sor_dp_vcpi,
+ .audio = gv100_sor_dp_audio,
+ .audio_sym = gv100_sor_dp_audio_sym,
+ .watermark = gv100_sor_dp_watermark,
+ },
+};
+
+int
+ga102_sor_new(struct nvkm_disp *disp, int id)
+{
+ struct nvkm_device *device = disp->engine.subdev.device;
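+	/* Bit N of register 0x08a15c flags SOR N as HDA-capable. */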
+ u32 hda = nvkm_rd32(device, 0x08a15c);
+ if (hda & BIT(id))
+ return nvkm_ior_new_(&ga102_sor_hda, disp, SOR, id);
+ return nvkm_ior_new_(&ga102_sor, disp, SOR, id);
+}
#include <subdev/timer.h>
-static void
+void
tu102_sor_dp_vcpi(struct nvkm_ior *sor, int head,
u8 slot, u8 slot_nr, u16 pbn, u16 aligned)
{
#include <core/gpuobj.h>
#include <subdev/timer.h>
-static int
+int
tu102_disp_init(struct nv50_disp *disp)
{
struct nvkm_device *device = disp->base.engine.subdev.device;
nvkm_debug(subdev, "%08x: type %02x, %d bytes\n",
image.base, image.type, image.size);
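+		/* Fetch up to the end of this image (base + size), not just
+		 * its length, so images beyond the first are shadowed fully.
+		 */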
- if (!shadow_fetch(bios, mthd, image.size)) {
+ if (!shadow_fetch(bios, mthd, image.base + image.size)) {
nvkm_debug(subdev, "%08x: fetch failed\n", image.base);
return 0;
}
return NULL;
/* we can't get the bios image pointer without PDISP */
+ if (device->card_type >= GA100)
+ addr = device->chipset == 0x170; /*XXX: find the fuse reg for this */
+ else
if (device->card_type >= GM100)
addr = nvkm_rd32(device, 0x021c04);
else
nvkm-y += nvkm/subdev/devinit/gm200.o
nvkm-y += nvkm/subdev/devinit/gv100.o
nvkm-y += nvkm/subdev/devinit/tu102.o
+nvkm-y += nvkm/subdev/devinit/ga100.o
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "nv50.h"
+
+#include <subdev/bios.h>
+#include <subdev/bios/pll.h>
+#include <subdev/clk/pll.h>
+
+static int
+ga100_devinit_pll_set(struct nvkm_devinit *init, u32 type, u32 freq)
+{
+ struct nvkm_subdev *subdev = &init->subdev;
+ struct nvkm_device *device = subdev->device;
+ struct nvbios_pll info;
+ int head = type - PLL_VPLL0;
+ int N, fN, M, P;
+ int ret;
+
+ ret = nvbios_pll_parse(device->bios, type, &info);
+ if (ret)
+ return ret;
+
+ ret = gt215_pll_calc(subdev, &info, freq, &N, &fN, &M, &P);
+ if (ret < 0)
+ return ret;
+
+ switch (info.type) {
+ case PLL_VPLL0:
+ case PLL_VPLL1:
+ case PLL_VPLL2:
+ case PLL_VPLL3:
+ nvkm_wr32(device, 0x00ef00 + (head * 0x40), 0x02080004);
+ nvkm_wr32(device, 0x00ef18 + (head * 0x40), (N << 16) | fN);
+ nvkm_wr32(device, 0x00ef04 + (head * 0x40), (P << 16) | M);
+ nvkm_wr32(device, 0x00e9c0 + (head * 0x04), 0x00000001);
+ break;
+ default:
+ nvkm_warn(subdev, "%08x/%dKhz unimplemented\n", type, freq);
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
+static const struct nvkm_devinit_func
+ga100_devinit = {
+ .init = nv50_devinit_init,
+ .post = tu102_devinit_post,
+ .pll_set = ga100_devinit_pll_set,
+};
+
+int
+ga100_devinit_new(struct nvkm_device *device, int index, struct nvkm_devinit **pinit)
+{
+ return nv50_devinit_new_(&ga100_devinit, device, index, pinit);
+}
int index, struct nvkm_devinit *);
int nv04_devinit_post(struct nvkm_devinit *, bool);
+int tu102_devinit_post(struct nvkm_devinit *, bool);
#endif
return ret;
}
-static int
+int
tu102_devinit_post(struct nvkm_devinit *base, bool post)
{
struct nv50_devinit *init = nv50_devinit(base);
nvkm-y += nvkm/subdev/fb/gp102.o
nvkm-y += nvkm/subdev/fb/gp10b.o
nvkm-y += nvkm/subdev/fb/gv100.o
+nvkm-y += nvkm/subdev/fb/ga100.o
+nvkm-y += nvkm/subdev/fb/ga102.o
nvkm-y += nvkm/subdev/fb/ram.o
nvkm-y += nvkm/subdev/fb/ramnv04.o
nvkm-y += nvkm/subdev/fb/ramgm107.o
nvkm-y += nvkm/subdev/fb/ramgm200.o
nvkm-y += nvkm/subdev/fb/ramgp100.o
+nvkm-y += nvkm/subdev/fb/ramga102.o
nvkm-y += nvkm/subdev/fb/sddr2.o
nvkm-y += nvkm/subdev/fb/sddr3.o
nvkm-y += nvkm/subdev/fb/gddr3.o
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "gf100.h"
+#include "ram.h"
+
+static const struct nvkm_fb_func
+ga100_fb = {
+ .dtor = gf100_fb_dtor,
+ .oneinit = gf100_fb_oneinit,
+ .init = gp100_fb_init,
+ .init_page = gv100_fb_init_page,
+ .init_unkn = gp100_fb_init_unkn,
+ .ram_new = gp100_ram_new,
+ .default_bigpage = 16,
+};
+
+int
+ga100_fb_new(struct nvkm_device *device, int index, struct nvkm_fb **pfb)
+{
+ return gp102_fb_new_(&ga100_fb, device, index, pfb);
+}
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "gf100.h"
+#include "ram.h"
+
+static const struct nvkm_fb_func
+ga102_fb = {
+ .dtor = gf100_fb_dtor,
+ .oneinit = gf100_fb_oneinit,
+ .init = gp100_fb_init,
+ .init_page = gv100_fb_init_page,
+ .init_unkn = gp100_fb_init_unkn,
+ .ram_new = ga102_ram_new,
+ .default_bigpage = 16,
+};
+
+int
+ga102_fb_new(struct nvkm_device *device, int index, struct nvkm_fb **pfb)
+{
+ return gp102_fb_new_(&ga102_fb, device, index, pfb);
+}
#include "gf100.h"
#include "ram.h"
-static int
+int
gv100_fb_init_page(struct nvkm_fb *fb)
{
return (fb->page == 16) ? 0 : -EINVAL;
struct nvkm_fb **);
bool gp102_fb_vpr_scrub_required(struct nvkm_fb *);
int gp102_fb_vpr_scrub(struct nvkm_fb *);
+
+int gv100_fb_init_page(struct nvkm_fb *);
#endif
int gm107_ram_new(struct nvkm_fb *, struct nvkm_ram **);
int gm200_ram_new(struct nvkm_fb *, struct nvkm_ram **);
int gp100_ram_new(struct nvkm_fb *, struct nvkm_ram **);
+int ga102_ram_new(struct nvkm_fb *, struct nvkm_ram **);
#endif
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "ram.h"
+
+#include <subdev/bios.h>
+#include <subdev/bios/init.h>
+#include <subdev/bios/rammap.h>
+
+static const struct nvkm_ram_func
+ga102_ram = {
+};
+
+int
+ga102_ram_new(struct nvkm_fb *fb, struct nvkm_ram **pram)
+{
+ struct nvkm_device *device = fb->subdev.device;
+ enum nvkm_ram_type type = nvkm_fb_bios_memtype(device->bios);
+ u32 size = nvkm_rd32(device, 0x1183a4);
+
+ return nvkm_ram_new_(&ga102_ram, fb, type, (u64)size << 20, pram);
+}
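
The one-line size probe in ga102_ram_new() above is easy to misread: the value read from 0x1183a4 is shifted left by 20 before being handed to nvkm_ram_new_(), i.e. the register is treated as a count of MiB (an inference from the shift, not a documented fact). The cast to u64 has to happen before the shift; a standalone C sketch of the hazard:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    	uint32_t size_mib = 24576;	/* hypothetical 24 GiB board */

    	/* widening before the shift keeps all 35 bits of the result */
    	uint64_t ok = (uint64_t)size_mib << 20;

    	/* shifting in 32 bits first silently drops the high bits */
    	uint64_t bad = (uint64_t)(size_mib << 20);

    	printf("correct:   %llu bytes\n", (unsigned long long)ok);
    	printf("truncated: %llu bytes\n", (unsigned long long)bad);
    	return 0;
    }
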
nvkm-y += nvkm/subdev/gpio/g94.o
nvkm-y += nvkm/subdev/gpio/gf119.o
nvkm-y += nvkm/subdev/gpio/gk104.o
+nvkm-y += nvkm/subdev/gpio/ga102.o
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+static void
+ga102_gpio_reset(struct nvkm_gpio *gpio, u8 match)
+{
+ struct nvkm_device *device = gpio->subdev.device;
+ struct nvkm_bios *bios = device->bios;
+ u8 ver, len;
+ u16 entry;
+ int ent = -1;
+
+ while ((entry = dcb_gpio_entry(bios, 0, ++ent, &ver, &len))) {
+ u32 data = nvbios_rd32(bios, entry);
+ u8 line = (data & 0x0000003f);
+ u8 defs = !!(data & 0x00000080);
+ u8 func = (data & 0x0000ff00) >> 8;
+ u8 unk0 = (data & 0x00ff0000) >> 16;
+ u8 unk1 = (data & 0x1f000000) >> 24;
+
+ if ( func == DCB_GPIO_UNUSED ||
+ (match != DCB_GPIO_UNUSED && match != func))
+ continue;
+
+ nvkm_gpio_set(gpio, 0, func, line, defs);
+
+ nvkm_mask(device, 0x021200 + (line * 4), 0xff, unk0);
+ if (unk1--)
+ nvkm_mask(device, 0x00d740 + (unk1 * 4), 0xff, line);
+ }
+}
+
+static int
+ga102_gpio_drive(struct nvkm_gpio *gpio, int line, int dir, int out)
+{
+ struct nvkm_device *device = gpio->subdev.device;
+ u32 data = ((dir ^ 1) << 13) | (out << 12);
+ nvkm_mask(device, 0x021200 + (line * 4), 0x00003000, data);
+ nvkm_mask(device, 0x00d604, 0x00000001, 0x00000001); /* update? */
+ return 0;
+}
+
+static int
+ga102_gpio_sense(struct nvkm_gpio *gpio, int line)
+{
+ struct nvkm_device *device = gpio->subdev.device;
+ return !!(nvkm_rd32(device, 0x021200 + (line * 4)) & 0x00004000);
+}
+
+static void
+ga102_gpio_intr_stat(struct nvkm_gpio *gpio, u32 *hi, u32 *lo)
+{
+ struct nvkm_device *device = gpio->subdev.device;
+ u32 intr0 = nvkm_rd32(device, 0x021640);
+ u32 intr1 = nvkm_rd32(device, 0x02164c);
+ u32 stat0 = nvkm_rd32(device, 0x021648) & intr0;
+ u32 stat1 = nvkm_rd32(device, 0x021654) & intr1;
+ *lo = (stat1 & 0xffff0000) | (stat0 >> 16);
+ *hi = (stat1 << 16) | (stat0 & 0x0000ffff);
+ nvkm_wr32(device, 0x021640, intr0);
+ nvkm_wr32(device, 0x02164c, intr1);
+}
+
+static void
+ga102_gpio_intr_mask(struct nvkm_gpio *gpio, u32 type, u32 mask, u32 data)
+{
+ struct nvkm_device *device = gpio->subdev.device;
+ u32 inte0 = nvkm_rd32(device, 0x021648);
+ u32 inte1 = nvkm_rd32(device, 0x021654);
+ if (type & NVKM_GPIO_LO)
+ inte0 = (inte0 & ~(mask << 16)) | (data << 16);
+ if (type & NVKM_GPIO_HI)
+ inte0 = (inte0 & ~(mask & 0xffff)) | (data & 0xffff);
+ mask >>= 16;
+ data >>= 16;
+ if (type & NVKM_GPIO_LO)
+ inte1 = (inte1 & ~(mask << 16)) | (data << 16);
+ if (type & NVKM_GPIO_HI)
+ inte1 = (inte1 & ~mask) | data;
+ nvkm_wr32(device, 0x021648, inte0);
+ nvkm_wr32(device, 0x021654, inte1);
+}
+
+static const struct nvkm_gpio_func
+ga102_gpio = {
+ .lines = 32,
+ .intr_stat = ga102_gpio_intr_stat,
+ .intr_mask = ga102_gpio_intr_mask,
+ .drive = ga102_gpio_drive,
+ .sense = ga102_gpio_sense,
+ .reset = ga102_gpio_reset,
+};
+
+int
+ga102_gpio_new(struct nvkm_device *device, int index, struct nvkm_gpio **pgpio)
+{
+ return nvkm_gpio_new_(&ga102_gpio, device, index, pgpio);
+}
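
ga102_gpio_intr_stat() above folds two per-16-line status registers into a single 32-bit mask per edge: each register appears to carry its lines' rising ("hi") bits in the low half and falling ("lo") bits in the high half (a layout inferred from the masks, so treat it as an assumption). The recombination is pure bit surgery and can be checked in isolation:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    	/* lines 0-15: line 0 hi (bit 0), line 1 lo (bit 16 + 1) */
    	uint32_t stat0 = 0x00020001;
    	/* lines 16-31: line 18 hi (bit 2), line 19 lo (bit 16 + 3) */
    	uint32_t stat1 = 0x00080004;

    	uint32_t lo = (stat1 & 0xffff0000) | (stat0 >> 16);
    	uint32_t hi = (stat1 << 16) | (stat0 & 0x0000ffff);

    	printf("lo %08x (expect bits 1 and 19)\n", lo);	/* 00080002 */
    	printf("hi %08x (expect bits 0 and 18)\n", hi);	/* 00040001 */
    	return 0;
    }
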
nvkm-y += nvkm/subdev/i2c/gf117.o
nvkm-y += nvkm/subdev/i2c/gf119.o
nvkm-y += nvkm/subdev/i2c/gk104.o
+nvkm-y += nvkm/subdev/i2c/gk110.o
nvkm-y += nvkm/subdev/i2c/gm200.o
nvkm-y += nvkm/subdev/i2c/pad.o
#define __NVKM_I2C_AUX_H__
#include "pad.h"
+static inline void
+nvkm_i2c_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable)
+{
+ if (i2c->func->aux_autodpcd)
+		i2c->func->aux_autodpcd(i2c, aux, enable);
+}
+
struct nvkm_i2c_aux_func {
bool address_only;
int (*xfer)(struct nvkm_i2c_aux *, bool retry, u8 type,
u8 type, u32 addr, u8 *data, u8 *size)
{
struct g94_i2c_aux *aux = g94_i2c_aux(obj);
- struct nvkm_device *device = aux->base.pad->i2c->subdev.device;
+ struct nvkm_i2c *i2c = aux->base.pad->i2c;
+ struct nvkm_device *device = i2c->subdev.device;
const u32 base = aux->ch * 0x50;
u32 ctrl, stat, timeout, retries = 0;
u32 xbuf[4] = {};
goto out;
}
+ nvkm_i2c_aux_autodpcd(i2c, aux->ch, false);
+
if (!(type & 1)) {
memcpy(xbuf, data, *size);
for (i = 0; i < 16; i += 4) {
if (!timeout--) {
AUX_ERR(&aux->base, "timeout %08x", ctrl);
ret = -EIO;
- goto out;
+ goto out_err;
}
} while (ctrl & 0x00010000);
ret = 0;
memcpy(data, xbuf, *size);
*size = stat & 0x0000001f;
}
-
+out_err:
+ nvkm_i2c_aux_autodpcd(i2c, aux->ch, true);
out:
g94_i2c_aux_fini(aux);
return ret < 0 ? ret : (stat & 0x000f0000) >> 16;
gm200_i2c_aux_fini(struct gm200_i2c_aux *aux)
{
struct nvkm_device *device = aux->base.pad->i2c->subdev.device;
- nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00310000, 0x00000000);
+ nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00710000, 0x00000000);
}
static int
AUX_ERR(&aux->base, "begin idle timeout %08x", ctrl);
return -EBUSY;
}
- } while (ctrl & 0x03010000);
+ } while (ctrl & 0x07010000);
/* set some magic, and wait up to 1ms for it to appear */
- nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00300000, ureq);
+ nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00700000, ureq);
timeout = 1000;
do {
ctrl = nvkm_rd32(device, 0x00d954 + (aux->ch * 0x50));
gm200_i2c_aux_fini(aux);
return -EBUSY;
}
- } while ((ctrl & 0x03000000) != urep);
+ } while ((ctrl & 0x07000000) != urep);
return 0;
}
u8 type, u32 addr, u8 *data, u8 *size)
{
struct gm200_i2c_aux *aux = gm200_i2c_aux(obj);
- struct nvkm_device *device = aux->base.pad->i2c->subdev.device;
+ struct nvkm_i2c *i2c = aux->base.pad->i2c;
+ struct nvkm_device *device = i2c->subdev.device;
const u32 base = aux->ch * 0x50;
u32 ctrl, stat, timeout, retries = 0;
u32 xbuf[4] = {};
goto out;
}
+ nvkm_i2c_aux_autodpcd(i2c, aux->ch, false);
+
if (!(type & 1)) {
memcpy(xbuf, data, *size);
for (i = 0; i < 16; i += 4) {
if (!timeout--) {
AUX_ERR(&aux->base, "timeout %08x", ctrl);
ret = -EIO;
- goto out;
+ goto out_err;
}
} while (ctrl & 0x00010000);
ret = 0;
*size = stat & 0x0000001f;
}
+out_err:
+ nvkm_i2c_aux_autodpcd(i2c, aux->ch, true);
out:
gm200_i2c_aux_fini(aux);
return ret < 0 ? ret : (stat & 0x000f0000) >> 16;
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+#include "pad.h"
+
+static void
+gk110_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable)
+{
+ nvkm_mask(i2c->subdev.device, 0x00e4f8 + (aux * 0x50), 0x00010000, enable << 16);
+}
+
+static const struct nvkm_i2c_func
+gk110_i2c = {
+ .pad_x_new = gf119_i2c_pad_x_new,
+ .pad_s_new = gf119_i2c_pad_s_new,
+ .aux = 4,
+ .aux_stat = gk104_aux_stat,
+ .aux_mask = gk104_aux_mask,
+ .aux_autodpcd = gk110_aux_autodpcd,
+};
+
+int
+gk110_i2c_new(struct nvkm_device *device, int index, struct nvkm_i2c **pi2c)
+{
+ return nvkm_i2c_new_(&gk110_i2c, device, index, pi2c);
+}
#include "priv.h"
#include "pad.h"
+static void
+gm200_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable)
+{
+ nvkm_mask(i2c->subdev.device, 0x00d968 + (aux * 0x50), 0x00010000, enable << 16);
+}
+
static const struct nvkm_i2c_func
gm200_i2c = {
.pad_x_new = gf119_i2c_pad_x_new,
.aux = 8,
.aux_stat = gk104_aux_stat,
.aux_mask = gk104_aux_mask,
+ .aux_autodpcd = gm200_aux_autodpcd,
};
int
/* SPDX-License-Identifier: MIT */
#ifndef __NVKM_I2C_PAD_H__
#define __NVKM_I2C_PAD_H__
-#include <subdev/i2c.h>
+#include "priv.h"
struct nvkm_i2c_pad {
const struct nvkm_i2c_pad_func *func;
/* mask on/off interrupt types for a given set of auxch
*/
void (*aux_mask)(struct nvkm_i2c *, u32, u32, u32);
+
+ /* enable/disable HW-initiated DPCD reads
+ */
+ void (*aux_autodpcd)(struct nvkm_i2c *, int aux, bool enable);
};
void g94_aux_stat(struct nvkm_i2c *, u32 *, u32 *, u32 *, u32 *);
* Authors: Ben Skeggs
*/
#include "priv.h"
+#include <subdev/timer.h>
static void
gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0400));
u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0400));
nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat);
- nvkm_mask(device, 0x122128 + (i * 0x0400), 0x00000200, 0x00000000);
}
static void
u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0400));
u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0400));
nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat);
- nvkm_mask(device, 0x124128 + (i * 0x0400), 0x00000200, 0x00000000);
}
static void
u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0400));
u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0400));
nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat);
- nvkm_mask(device, 0x128128 + (i * 0x0400), 0x00000200, 0x00000000);
}
void
intr1 &= ~stat;
}
}
+
+ nvkm_mask(device, 0x121c4c, 0x0000003f, 0x00000002);
+ nvkm_msec(device, 2000,
+ if (!(nvkm_rd32(device, 0x121c4c) & 0x0000003f))
+ break;
+ );
}
static int
* Authors: Ben Skeggs
*/
#include "priv.h"
+#include <subdev/timer.h>
static void
gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0800));
u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0800));
nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat);
- nvkm_mask(device, 0x122128 + (i * 0x0800), 0x00000200, 0x00000000);
}
static void
u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0800));
u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0800));
nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat);
- nvkm_mask(device, 0x124128 + (i * 0x0800), 0x00000200, 0x00000000);
}
static void
u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0800));
u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0800));
nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat);
- nvkm_mask(device, 0x128128 + (i * 0x0800), 0x00000200, 0x00000000);
}
void
intr1 &= ~stat;
}
}
+
+ nvkm_mask(device, 0x12004c, 0x0000003f, 0x00000002);
+ nvkm_msec(device, 2000,
+ if (!(nvkm_rd32(device, 0x12004c) & 0x0000003f))
+ break;
+ );
}
static int
nvkm-y += nvkm/subdev/mc/gp100.o
nvkm-y += nvkm/subdev/mc/gp10b.o
nvkm-y += nvkm/subdev/mc/tu102.o
+nvkm-y += nvkm/subdev/mc/ga100.o
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+static void
+ga100_mc_intr_unarm(struct nvkm_mc *mc)
+{
+ nvkm_wr32(mc->subdev.device, 0xb81610, 0x00000004);
+}
+
+static void
+ga100_mc_intr_rearm(struct nvkm_mc *mc)
+{
+ nvkm_wr32(mc->subdev.device, 0xb81608, 0x00000004);
+}
+
+static void
+ga100_mc_intr_mask(struct nvkm_mc *mc, u32 mask, u32 intr)
+{
+	nvkm_wr32(mc->subdev.device, 0xb81210, mask & intr);
+ nvkm_wr32(mc->subdev.device, 0xb81410, mask & ~(mask & intr));
+}
+
+static u32
+ga100_mc_intr_stat(struct nvkm_mc *mc)
+{
+ u32 intr_top = nvkm_rd32(mc->subdev.device, 0xb81600), intr = 0x00000000;
+ if (intr_top & 0x00000004)
+ intr = nvkm_mask(mc->subdev.device, 0xb81010, 0x00000000, 0x00000000);
+ return intr;
+}
+
+static void
+ga100_mc_init(struct nvkm_mc *mc)
+{
+ nv50_mc_init(mc);
+ nvkm_wr32(mc->subdev.device, 0xb81210, 0xffffffff);
+}
+
+static const struct nvkm_mc_func
+ga100_mc = {
+ .init = ga100_mc_init,
+ .intr = gp100_mc_intr,
+ .intr_unarm = ga100_mc_intr_unarm,
+ .intr_rearm = ga100_mc_intr_rearm,
+ .intr_mask = ga100_mc_intr_mask,
+ .intr_stat = ga100_mc_intr_stat,
+ .reset = gk104_mc_reset,
+};
+
+int
+ga100_mc_new(struct nvkm_device *device, int index, struct nvkm_mc **pmc)
+{
+ return nvkm_mc_new_(&ga100_mc, device, index, pmc);
+}
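
ga100_mc_intr_mask() above uses a set-register/clear-register pair: the first write asserts the bits in mask & intr, the second deasserts the rest of the mask. Since mask & ~(mask & intr) is identical to mask & ~intr, the two writes partition the mask exactly; a quick brute-force check of the identity:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
    	/* sweeping 6-bit values is enough: the identity is bitwise */
    	for (uint32_t mask = 0; mask < 64; mask++)
    		for (uint32_t intr = 0; intr < 64; intr++)
    			assert((mask & ~(mask & intr)) == (mask & ~intr));
    	return 0;
    }
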
{
struct nvkm_device *device = mmu->subdev.device;
struct nvkm_mm *mm = &device->fb->ram->vram;
- const u32 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL);
- const u32 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP);
- const u32 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED);
+ const u64 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL);
+ const u64 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP);
+ const u64 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED);
u8 type = NVKM_MEM_KIND * !!mmu->func->kind;
u8 heap = NVKM_MEM_VRAM;
int heapM, heapN, heapU;
static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
-static spinlock_t shrinker_lock;
+static struct mutex shrinker_lock;
static struct list_head shrinker_list;
static struct shrinker mm_shrinker;
size_t size = (1ULL << order) * PAGE_SIZE;
addr = dma_map_page(pool->dev, p, 0, size, DMA_BIDIRECTIONAL);
- if (dma_mapping_error(pool->dev, **dma_addr))
+ if (dma_mapping_error(pool->dev, addr))
return -EFAULT;
}
spin_lock_init(&pt->lock);
INIT_LIST_HEAD(&pt->pages);
- spin_lock(&shrinker_lock);
+ mutex_lock(&shrinker_lock);
list_add_tail(&pt->shrinker_list, &shrinker_list);
- spin_unlock(&shrinker_lock);
+ mutex_unlock(&shrinker_lock);
}
/* Remove a pool_type from the global shrinker list and free all pages */
{
struct page *p, *tmp;
- spin_lock(&shrinker_lock);
+ mutex_lock(&shrinker_lock);
list_del(&pt->shrinker_list);
- spin_unlock(&shrinker_lock);
+ mutex_unlock(&shrinker_lock);
list_for_each_entry_safe(p, tmp, &pt->pages, lru)
ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
unsigned int num_freed;
struct page *p;
- spin_lock(&shrinker_lock);
+ mutex_lock(&shrinker_lock);
pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
p = ttm_pool_type_take(pt);
}
list_move_tail(&pt->shrinker_list, &shrinker_list);
- spin_unlock(&shrinker_lock);
+ mutex_unlock(&shrinker_lock);
return num_freed;
}
{
unsigned int i;
- spin_lock(&shrinker_lock);
+ mutex_lock(&shrinker_lock);
seq_puts(m, "\t ");
for (i = 0; i < MAX_ORDER; ++i)
seq_printf(m, "\ntotal\t: %8lu of %8lu\n",
atomic_long_read(&allocated_pages), page_pool_size);
- spin_unlock(&shrinker_lock);
+ mutex_unlock(&shrinker_lock);
return 0;
}
if (!page_pool_size)
page_pool_size = num_pages;
- spin_lock_init(&shrinker_lock);
+ mutex_init(&shrinker_lock);
INIT_LIST_HEAD(&shrinker_list);
for (i = 0; i < MAX_ORDER; ++i) {
depends on NEW_LEDS
depends on LEDS_CLASS
select POWER_SUPPLY
+ select CRC32
help
Support for
for (i = 0; i < cl_data->num_hid_devices; i++) {
cl_data->sensor_virt_addr[i] = dma_alloc_coherent(dev, sizeof(int) * 8,
- &cl_data->sensor_phys_addr[i],
+ &cl_data->sensor_dma_addr[i],
GFP_KERNEL);
cl_data->sensor_sts[i] = 0;
cl_data->sensor_requested_cnt[i] = 0;
}
info.period = msecs_to_jiffies(AMD_SFH_IDLE_LOOP);
info.sensor_idx = cl_idx;
- info.phys_address = cl_data->sensor_phys_addr[i];
+ info.dma_address = cl_data->sensor_dma_addr[i];
cl_data->report_descr[i] = kzalloc(cl_data->report_descr_sz[i], GFP_KERNEL);
if (!cl_data->report_descr[i]) {
if (cl_data->sensor_virt_addr[i]) {
dma_free_coherent(&privdata->pdev->dev, 8 * sizeof(int),
cl_data->sensor_virt_addr[i],
- cl_data->sensor_phys_addr[i]);
+ cl_data->sensor_dma_addr[i]);
}
kfree(cl_data->feature_report[i]);
kfree(cl_data->input_report[i]);
if (cl_data->sensor_virt_addr[i]) {
dma_free_coherent(&privdata->pdev->dev, 8 * sizeof(int),
cl_data->sensor_virt_addr[i],
- cl_data->sensor_phys_addr[i]);
+ cl_data->sensor_dma_addr[i]);
}
}
kfree(cl_data);
int hid_descr_size[MAX_HID_DEVICES];
phys_addr_t phys_addr_base;
u32 *sensor_virt_addr[MAX_HID_DEVICES];
- phys_addr_t sensor_phys_addr[MAX_HID_DEVICES];
+ dma_addr_t sensor_dma_addr[MAX_HID_DEVICES];
u32 sensor_sts[MAX_HID_DEVICES];
u32 sensor_requested_cnt[MAX_HID_DEVICES];
u8 report_type[MAX_HID_DEVICES];
cmd_param.s.buf_layout = 1;
cmd_param.s.buf_length = 16;
- writeq(info.phys_address, privdata->mmio + AMD_C2P_MSG2);
+ writeq(info.dma_address, privdata->mmio + AMD_C2P_MSG2);
writel(cmd_param.ul, privdata->mmio + AMD_C2P_MSG1);
writel(cmd_base.ul, privdata->mmio + AMD_C2P_MSG0);
}
struct amd_mp2_sensor_info {
u8 sensor_idx;
u32 period;
- phys_addr_t phys_address;
+ dma_addr_t dma_address;
};
void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor_info info);
#define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401
#define USB_DEVICE_ID_HP_X2 0x074d
#define USB_DEVICE_ID_HP_X2_10_COVER 0x0755
+#define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706
#define USB_VENDOR_ID_ELECOM 0x056e
#define USB_DEVICE_ID_ELECOM_BM084 0x0061
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD),
HID_BATTERY_QUIRK_IGNORE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
+ HID_BATTERY_QUIRK_IGNORE },
{}
};
HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
0xc531),
.driver_data = recvr_type_gaming_hidpp},
+ { /* Logitech G602 receiver (0xc537) */
+ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 0xc537),
+ .driver_data = recvr_type_gaming_hidpp},
{ /* Logitech lightspeed receiver (0xc539) */
HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1),
{ /* MX Master mouse over Bluetooth */
HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb012),
.driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
+ { /* MX Ergo trackball over Bluetooth */
+ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01d) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01e),
.driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ /* MX Master 3 mouse over Bluetooth */
HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
USB_VENDOR_ID_SYNAPTICS, 0xce08) },
+ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ USB_VENDOR_ID_SYNAPTICS, 0xce09) },
+
/* TopSeed panels */
{ .driver_data = MT_CLS_TOPSEED,
MT_USB_DEVICE(USB_VENDOR_ID_TOPSEED2,
goto cleanup;
} else if (rc < 0) {
hid_err(hdev,
- "failed retrieving string descriptor #%hhu: %d\n",
+ "failed retrieving string descriptor #%u: %d\n",
idx, rc);
goto cleanup;
}
wdata->state.cmd_err = err;
wiimote_cmd_complete(wdata);
} else if (err) {
- hid_warn(wdata->hdev, "Remote error %hhu on req %hhu\n", err,
+ hid_warn(wdata->hdev, "Remote error %u on req %u\n", err,
cmd);
}
}
group);
}
+static void wacom_devm_kfifo_release(struct device *dev, void *res)
+{
+ struct kfifo_rec_ptr_2 *devres = res;
+
+ kfifo_free(devres);
+}
+
+static int wacom_devm_kfifo_alloc(struct wacom *wacom)
+{
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct kfifo_rec_ptr_2 *pen_fifo = &wacom_wac->pen_fifo;
+ int error;
+
+ pen_fifo = devres_alloc(wacom_devm_kfifo_release,
+ sizeof(struct kfifo_rec_ptr_2),
+ GFP_KERNEL);
+
+ if (!pen_fifo)
+ return -ENOMEM;
+
+ error = kfifo_alloc(pen_fifo, WACOM_PKGLEN_MAX, GFP_KERNEL);
+ if (error) {
+ devres_free(pen_fifo);
+ return error;
+ }
+
+ devres_add(&wacom->hdev->dev, pen_fifo);
+
+ return 0;
+}
+
enum led_brightness wacom_leds_brightness_get(struct wacom_led *led)
{
struct wacom *wacom = led->wacom;
if (features->check_for_hid_type && features->hid_type != hdev->type)
return -ENODEV;
- error = kfifo_alloc(&wacom_wac->pen_fifo, WACOM_PKGLEN_MAX, GFP_KERNEL);
+ error = wacom_devm_kfifo_alloc(wacom);
if (error)
return error;
if (wacom->wacom_wac.features.type != REMOTE)
wacom_release_resources(wacom);
-
- kfifo_free(&wacom_wac->pen_fifo);
}
#ifdef CONFIG_PM
/* Make sure conn_state is set as hv_synic_cleanup checks for it */
mb();
cpuhp_remove_state(hyperv_cpuhp_online);
- hyperv_cleanup();
};
static void hv_crash_handler(struct pt_regs *regs)
cpu = smp_processor_id();
hv_stimer_cleanup(cpu);
hv_synic_disable_regs(cpu);
- hyperv_cleanup();
};
static int hv_synic_suspend(void)
return -ENODEV;
}
-static int elektor_remove(struct device *dev, unsigned int id)
+static void elektor_remove(struct device *dev, unsigned int id)
{
i2c_del_adapter(&pcf_isa_ops);
iounmap(base_iomem);
release_mem_region(base, 2);
}
-
- return 0;
}
static struct isa_driver i2c_elektor_driver = {
return -ENODEV;
}
-static int pca_isa_remove(struct device *dev, unsigned int id)
+static void pca_isa_remove(struct device *dev, unsigned int id)
{
i2c_del_adapter(&pca_isa_ops);
free_irq(irq, &pca_isa_ops);
}
release_region(base, IO_SIZE);
-
- return 0;
}
static struct isa_driver pca_isa_driver = {
return ret;
gid_type = ib_cache_gid_parse_type_str(buf);
- if (gid_type < 0)
+ if (gid_type < 0) {
+ cma_configfs_params_put(cma_dev);
return -EINVAL;
+ }
ret = cma_set_default_gid_type(cma_dev, group->port_num, gid_type);
} else {
ret = xa_alloc_cyclic(&rt->xa, &res->id, res, xa_limit_32b,
&rt->next_id, GFP_KERNEL);
+ ret = (ret < 0) ? ret : 0;
}
out:
u64 uid;
struct list_head list;
- /* sync between removal event and id destroy, protected by file mut */
- int destroying;
struct work_struct close_work;
};
static DEFINE_XARRAY_ALLOC(multicast_table);
static const struct file_operations ucma_fops;
-static int __destroy_id(struct ucma_context *ctx);
+static int ucma_destroy_private_ctx(struct ucma_context *ctx);
static inline struct ucma_context *_ucma_find_context(int id,
struct ucma_file *file)
/* once all inflight tasks are finished, we close all underlying
	 * resources. The context is still alive until its explicit destruction
- * by its creator.
+ * by its creator. This puts back the xarray's reference.
*/
ucma_put_ctx(ctx);
wait_for_completion(&ctx->comp);
/* No new events will be generated after destroying the id. */
rdma_destroy_id(ctx->cm_id);
- /*
- * At this point ctx->ref is zero so the only place the ctx can be is in
- * a uevent or in __destroy_id(). Since the former doesn't touch
- * ctx->cm_id and the latter sync cancels this, there is no races with
- * this store.
- */
+ /* Reading the cm_id without holding a positive ref is not allowed */
ctx->cm_id = NULL;
}
return NULL;
INIT_WORK(&ctx->close_work, ucma_close_id);
- refcount_set(&ctx->ref, 1);
init_completion(&ctx->comp);
/* So list_del() will work if we don't do ucma_finish_ctx() */
INIT_LIST_HEAD(&ctx->list);
return ctx;
}
+static void ucma_set_ctx_cm_id(struct ucma_context *ctx,
+ struct rdma_cm_id *cm_id)
+{
+ refcount_set(&ctx->ref, 1);
+ ctx->cm_id = cm_id;
+}
+
static void ucma_finish_ctx(struct ucma_context *ctx)
{
lockdep_assert_held(&ctx->file->mut);
ctx = ucma_alloc_ctx(listen_ctx->file);
if (!ctx)
goto err_backlog;
- ctx->cm_id = cm_id;
+ ucma_set_ctx_cm_id(ctx, cm_id);
uevent = ucma_create_uevent(listen_ctx, event);
if (!uevent)
return 0;
err_alloc:
- xa_erase(&ctx_table, ctx->id);
- kfree(ctx);
+ ucma_destroy_private_ctx(ctx);
err_backlog:
atomic_inc(&listen_ctx->backlog);
/* Returning error causes the new ID to be destroyed */
wake_up_interruptible(&ctx->file->poll_wait);
}
- if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL && !ctx->destroying)
- queue_work(system_unbound_wq, &ctx->close_work);
+ if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
+ xa_lock(&ctx_table);
+ if (xa_load(&ctx_table, ctx->id) == ctx)
+ queue_work(system_unbound_wq, &ctx->close_work);
+ xa_unlock(&ctx_table);
+ }
return 0;
}
ret = PTR_ERR(cm_id);
goto err1;
}
- ctx->cm_id = cm_id;
+ ucma_set_ctx_cm_id(ctx, cm_id);
resp.id = ctx->id;
if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) {
- xa_erase(&ctx_table, ctx->id);
- __destroy_id(ctx);
+ ucma_destroy_private_ctx(ctx);
return -EFAULT;
}
return 0;
err1:
- xa_erase(&ctx_table, ctx->id);
- kfree(ctx);
+ ucma_destroy_private_ctx(ctx);
return ret;
}
rdma_unlock_handler(mc->ctx->cm_id);
}
-/*
- * ucma_free_ctx is called after the underlying rdma CM-ID is destroyed. At
- * this point, no new events will be reported from the hardware. However, we
- * still need to cleanup the UCMA context for this ID. Specifically, there
- * might be events that have not yet been consumed by the user space software.
- * mutex. After that we release them as needed.
- */
-static int ucma_free_ctx(struct ucma_context *ctx)
+static int ucma_cleanup_ctx_events(struct ucma_context *ctx)
{
int events_reported;
struct ucma_event *uevent, *tmp;
LIST_HEAD(list);
- ucma_cleanup_multicast(ctx);
-
- /* Cleanup events not yet reported to the user. */
+	/* Cleanup events not yet reported to the user. */
mutex_lock(&ctx->file->mut);
list_for_each_entry_safe(uevent, tmp, &ctx->file->event_list, list) {
- if (uevent->ctx == ctx || uevent->conn_req_ctx == ctx)
+ if (uevent->ctx != ctx)
+ continue;
+
+ if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST &&
+ xa_cmpxchg(&ctx_table, uevent->conn_req_ctx->id,
+ uevent->conn_req_ctx, XA_ZERO_ENTRY,
+ GFP_KERNEL) == uevent->conn_req_ctx) {
list_move_tail(&uevent->list, &list);
+ continue;
+ }
+ list_del(&uevent->list);
+ kfree(uevent);
}
list_del(&ctx->list);
events_reported = ctx->events_reported;
mutex_unlock(&ctx->file->mut);
/*
- * If this was a listening ID then any connections spawned from it
- * that have not been delivered to userspace are cleaned up too.
- * Must be done outside any locks.
+ * If this was a listening ID then any connections spawned from it that
+ * have not been delivered to userspace are cleaned up too. Must be done
+ * outside any locks.
*/
list_for_each_entry_safe(uevent, tmp, &list, list) {
- list_del(&uevent->list);
- if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST &&
- uevent->conn_req_ctx != ctx)
- __destroy_id(uevent->conn_req_ctx);
+ ucma_destroy_private_ctx(uevent->conn_req_ctx);
kfree(uevent);
}
-
- mutex_destroy(&ctx->mutex);
- kfree(ctx);
return events_reported;
}
-static int __destroy_id(struct ucma_context *ctx)
+/*
+ * When this is called the xarray must have a XA_ZERO_ENTRY in the ctx->id (i.e.
+ * the ctx is not public to the user). This is either because:
+ * - ucma_finish_ctx() hasn't been called
+ * - xa_cmpxchg() succeeded in removing the entry (only one thread can succeed)
+ */
+static int ucma_destroy_private_ctx(struct ucma_context *ctx)
{
+ int events_reported;
+
/*
- * If the refcount is already 0 then ucma_close_id() has already
- * destroyed the cm_id, otherwise holding the refcount keeps cm_id
- * valid. Prevent queue_work() from being called.
+ * Destroy the underlying cm_id. New work queuing is prevented now by
+	 * the removal from the xarray. Once the work is canceled, ref will
+	 * either be 0 because the work ran to completion and consumed the ref
+	 * from the xarray, or it will be positive because we still have the
+	 * ref from the xarray. It can also be 0 if cm_id was never set.
*/
- if (refcount_inc_not_zero(&ctx->ref)) {
- rdma_lock_handler(ctx->cm_id);
- ctx->destroying = 1;
- rdma_unlock_handler(ctx->cm_id);
- ucma_put_ctx(ctx);
- }
-
cancel_work_sync(&ctx->close_work);
- /* At this point it's guaranteed that there is no inflight closing task */
- if (ctx->cm_id)
+ if (refcount_read(&ctx->ref))
ucma_close_id(&ctx->close_work);
- return ucma_free_ctx(ctx);
+
+ events_reported = ucma_cleanup_ctx_events(ctx);
+ ucma_cleanup_multicast(ctx);
+
+ WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, XA_ZERO_ENTRY, NULL,
+ GFP_KERNEL) != NULL);
+ mutex_destroy(&ctx->mutex);
+ kfree(ctx);
+ return events_reported;
}
static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf,
xa_lock(&ctx_table);
ctx = _ucma_find_context(cmd.id, file);
- if (!IS_ERR(ctx))
- __xa_erase(&ctx_table, ctx->id);
+ if (!IS_ERR(ctx)) {
+ if (__xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY,
+ GFP_KERNEL) != ctx)
+ ctx = ERR_PTR(-ENOENT);
+ }
xa_unlock(&ctx_table);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
- resp.events_reported = __destroy_id(ctx);
+ resp.events_reported = ucma_destroy_private_ctx(ctx);
if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp)))
ret = -EFAULT;
* prevented by this being a FD release function. The list_add_tail() in
* ucma_connect_event_handler() can run concurrently, however it only
* adds to the list *after* a listening ID. By only reading the first of
- * the list, and relying on __destroy_id() to block
+ * the list, and relying on ucma_destroy_private_ctx() to block
* ucma_connect_event_handler(), no additional locking is needed.
*/
while (!list_empty(&file->ctx_list)) {
struct ucma_context *ctx = list_first_entry(
&file->ctx_list, struct ucma_context, list);
- xa_erase(&ctx_table, ctx->id);
- __destroy_id(ctx);
+ WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY,
+ GFP_KERNEL) != ctx);
+ ucma_destroy_private_ctx(ctx);
}
kfree(file);
return 0;
*/
if (mask)
pgsz_bitmap &= GENMASK(count_trailing_zeros(mask), 0);
- return rounddown_pow_of_two(pgsz_bitmap);
+ return pgsz_bitmap ? rounddown_pow_of_two(pgsz_bitmap) : 0;
}
EXPORT_SYMBOL(ib_umem_find_best_pgsz);
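
The guard added above matters because the kernel's rounddown_pow_of_two() is built on fls_long(n) - 1, which makes it undefined for n == 0; when the mask filtering empties pgsz_bitmap the function must return 0 itself rather than reach the helper. A userspace sketch with a local stand-in (assuming a 64-bit long):

    #include <stdio.h>

    /* local stand-in for the kernel helper: keep only the highest set
     * bit; like the real one, undefined for n == 0 */
    static unsigned long rounddown_pow_of_two(unsigned long n)
    {
    	return 1UL << (63 - __builtin_clzl(n));
    }

    int main(void)
    {
    	unsigned long pgsz_bitmap = 0x3000;	/* hypothetical: 4K and 8K */

    	printf("best: %#lx\n",
    	       pgsz_bitmap ? rounddown_pow_of_two(pgsz_bitmap) : 0UL);

    	pgsz_bitmap = 0;	/* no usable page size: skip the helper */
    	printf("best: %#lx\n",
    	       pgsz_bitmap ? rounddown_pow_of_two(pgsz_bitmap) : 0UL);
    	return 0;
    }
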
err = set_has_smi_cap(dev);
if (err)
- return err;
+ goto err_mp;
if (!mlx5_core_mp_enabled(mdev)) {
for (i = 1; i <= dev->num_ports; i++) {
err = mlx5_alloc_bfreg(dev->mdev, &dev->fp_bfreg, false, true);
if (err)
- mlx5_free_bfreg(dev->mdev, &dev->fp_bfreg);
+ mlx5_free_bfreg(dev->mdev, &dev->bfreg);
return err;
}
pr_err("%s(%d) Freeing in use pdid=0x%x.\n",
__func__, dev->id, pd->id);
}
- kfree(uctx->cntxt_pd);
uctx->cntxt_pd = NULL;
_ocrdma_dealloc_pd(dev, pd);
+ kfree(pd);
}
static struct ocrdma_pd *ocrdma_get_ucontext_pd(struct ocrdma_ucontext *uctx)
}
usnic_uiom_free_dev_list(dev_list);
+ dev_list = NULL;
}
/* Try to find resources on an unused vf */
qp_grp_check:
if (IS_ERR_OR_NULL(qp_grp)) {
usnic_err("Failed to allocate qp_grp\n");
+ if (usnic_ib_share_vf)
+ usnic_uiom_free_dev_list(dev_list);
return ERR_PTR(qp_grp ? PTR_ERR(qp_grp) : -ENOMEM);
}
return qp_grp;
return err;
}
-static int htcpen_isa_remove(struct device *dev, unsigned int id)
+static void htcpen_isa_remove(struct device *dev, unsigned int id)
{
struct input_dev *htcpen_dev = dev_get_drvdata(dev);
release_region(HTCPEN_PORT_INDEX, 2);
release_region(HTCPEN_PORT_INIT, 1);
release_region(HTCPEN_PORT_IRQ_CLEAR, 1);
-
- return 0;
}
#ifdef CONFIG_PM
}
static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
+ { .compatible = "qcom,msm8998-smmu-v2" },
{ .compatible = "qcom,sc7180-smmu-500" },
+ { .compatible = "qcom,sdm630-smmu-v2" },
{ .compatible = "qcom,sdm845-smmu-500" },
{ .compatible = "qcom,sm8150-smmu-500" },
{ .compatible = "qcom,sm8250-smmu-500" },
#include <linux/dmi.h>
#include <linux/pci-ats.h>
#include <linux/memblock.h>
-#include <linux/dma-map-ops.h>
#include <linux/dma-direct.h>
#include <linux/crash_dump.h>
#include <linux/numa.h>
iommu->flags |= VTD_FLAG_SVM_CAPABLE;
}
-static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_dev *sdev,
- unsigned long address, unsigned long pages, int ih)
+static void __flush_svm_range_dev(struct intel_svm *svm,
+ struct intel_svm_dev *sdev,
+ unsigned long address,
+ unsigned long pages, int ih)
{
struct qi_desc desc;
}
}
+static void intel_flush_svm_range_dev(struct intel_svm *svm,
+ struct intel_svm_dev *sdev,
+ unsigned long address,
+ unsigned long pages, int ih)
+{
+ unsigned long shift = ilog2(__roundup_pow_of_two(pages));
+ unsigned long align = (1ULL << (VTD_PAGE_SHIFT + shift));
+ unsigned long start = ALIGN_DOWN(address, align);
+ unsigned long end = ALIGN(address + (pages << VTD_PAGE_SHIFT), align);
+
+ while (start < end) {
+ __flush_svm_range_dev(svm, sdev, start, align >> VTD_PAGE_SHIFT, ih);
+ start += align;
+ }
+}
+
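The intel_flush_svm_range_dev() wrapper added above turns an arbitrary (address, pages) span into a series of naturally aligned, power-of-two-sized flushes, which is the shape the invalidation descriptors can encode. The arithmetic in userspace form (helpers are local stand-ins, page shift assumed to be 12 like VTD_PAGE_SHIFT):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    static uint64_t roundup_pow_of_two(uint64_t x)
    {
    	uint64_t r = 1;

    	while (r < x)
    		r <<= 1;
    	return r;
    }

    int main(void)
    {
    	uint64_t address = 0x7f00007000, pages = 5;	/* hypothetical range */
    	uint64_t align = roundup_pow_of_two(pages) << PAGE_SHIFT;
    	uint64_t start = address & ~(align - 1);	/* ALIGN_DOWN */
    	uint64_t end = (address + (pages << PAGE_SHIFT) + align - 1) &
    		       ~(align - 1);			/* ALIGN */

    	for (; start < end; start += align)
    		printf("flush %#llx, %llu pages\n",
    		       (unsigned long long)start,
    		       (unsigned long long)(align >> PAGE_SHIFT));
    	return 0;
    }
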
static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
unsigned long pages, int ih)
{
select BLK_DEV_INTEGRITY
select DM_BUFIO
select CRYPTO
+ select CRYPTO_SKCIPHER
select ASYNC_XOR
help
This device-mapper target emulates a block device that has
tristate "Drive-managed zoned block device target support"
depends on BLK_DEV_DM
depends on BLK_DEV_ZONED
+ select CRC32
help
This device-mapper target takes a host-managed or host-aware zoned
block device and exposes most of its capacity as a regular block
}
EXPORT_SYMBOL_GPL(dm_bufio_get_device_size);
+struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c)
+{
+ return c->dm_io;
+}
+EXPORT_SYMBOL_GPL(dm_bufio_get_dm_io_client);
+
sector_t dm_bufio_get_block_number(struct dm_buffer *b)
{
return b->block;
static void kcryptd_async_done(struct crypto_async_request *async_req,
int error);
-static void crypt_alloc_req_skcipher(struct crypt_config *cc,
+static int crypt_alloc_req_skcipher(struct crypt_config *cc,
struct convert_context *ctx)
{
unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1);
- if (!ctx->r.req)
- ctx->r.req = mempool_alloc(&cc->req_pool, GFP_NOIO);
+ if (!ctx->r.req) {
+ ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO);
+ if (!ctx->r.req)
+ return -ENOMEM;
+ }
skcipher_request_set_tfm(ctx->r.req, cc->cipher_tfm.tfms[key_index]);
skcipher_request_set_callback(ctx->r.req,
CRYPTO_TFM_REQ_MAY_BACKLOG,
kcryptd_async_done, dmreq_of_req(cc, ctx->r.req));
+
+ return 0;
}
-static void crypt_alloc_req_aead(struct crypt_config *cc,
+static int crypt_alloc_req_aead(struct crypt_config *cc,
struct convert_context *ctx)
{
- if (!ctx->r.req_aead)
- ctx->r.req_aead = mempool_alloc(&cc->req_pool, GFP_NOIO);
+	if (!ctx->r.req_aead) {
+		ctx->r.req_aead = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO);
+		if (!ctx->r.req_aead)
+ return -ENOMEM;
+ }
aead_request_set_tfm(ctx->r.req_aead, cc->cipher_tfm.tfms_aead[0]);
aead_request_set_callback(ctx->r.req_aead,
CRYPTO_TFM_REQ_MAY_BACKLOG,
kcryptd_async_done, dmreq_of_req(cc, ctx->r.req_aead));
+
+ return 0;
}
-static void crypt_alloc_req(struct crypt_config *cc,
+static int crypt_alloc_req(struct crypt_config *cc,
struct convert_context *ctx)
{
if (crypt_integrity_aead(cc))
- crypt_alloc_req_aead(cc, ctx);
+ return crypt_alloc_req_aead(cc, ctx);
else
- crypt_alloc_req_skcipher(cc, ctx);
+ return crypt_alloc_req_skcipher(cc, ctx);
}
static void crypt_free_req_skcipher(struct crypt_config *cc,
* Encrypt / decrypt data from one bio to another one (can be the same one)
*/
static blk_status_t crypt_convert(struct crypt_config *cc,
- struct convert_context *ctx, bool atomic)
+ struct convert_context *ctx, bool atomic, bool reset_pending)
{
unsigned int tag_offset = 0;
unsigned int sector_step = cc->sector_size >> SECTOR_SHIFT;
int r;
- atomic_set(&ctx->cc_pending, 1);
+ /*
+ * if reset_pending is set we are dealing with the bio for the first time,
+ * else we're continuing to work on the previous bio, so don't mess with
+ * the cc_pending counter
+ */
+ if (reset_pending)
+ atomic_set(&ctx->cc_pending, 1);
while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
- crypt_alloc_req(cc, ctx);
+ r = crypt_alloc_req(cc, ctx);
+ if (r) {
+ complete(&ctx->restart);
+ return BLK_STS_DEV_RESOURCE;
+ }
+
atomic_inc(&ctx->cc_pending);
if (crypt_integrity_aead(cc))
* but the driver request queue is full, let's wait.
*/
case -EBUSY:
- wait_for_completion(&ctx->restart);
+ if (in_interrupt()) {
+ if (try_wait_for_completion(&ctx->restart)) {
+ /*
+ * we don't have to block to wait for completion,
+ * so proceed
+ */
+ } else {
+ /*
+ * we can't wait for completion without blocking
+ * exit and continue processing in a workqueue
+ */
+ ctx->r.req = NULL;
+ ctx->cc_sector += sector_step;
+ tag_offset++;
+ return BLK_STS_DEV_RESOURCE;
+ }
+ } else {
+ wait_for_completion(&ctx->restart);
+ }
reinit_completion(&ctx->restart);
fallthrough;
/*
atomic_inc(&io->io_pending);
}
+static void kcryptd_io_bio_endio(struct work_struct *work)
+{
+ struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
+ bio_endio(io->base_bio);
+}
+
/*
* One of the bios was finished. Check for completion of
* the whole request and correctly clean up the buffer.
kfree(io->integrity_metadata);
base_bio->bi_status = error;
- bio_endio(base_bio);
+
+ /*
+ * If we are running this function from our tasklet,
+ * we can't call bio_endio() here, because it will call
+ * clone_endio() from dm.c, which in turn will
+ * free the current struct dm_crypt_io structure with
+ * our tasklet. In this case we need to delay bio_endio()
+ * execution to after the tasklet is done and dequeued.
+ */
+ if (tasklet_trylock(&io->tasklet)) {
+ tasklet_unlock(&io->tasklet);
+ bio_endio(base_bio);
+ return;
+ }
+
+ INIT_WORK(&io->work, kcryptd_io_bio_endio);
+ queue_work(cc->io_queue, &io->work);
}
/*
}
}
+static void kcryptd_crypt_write_continue(struct work_struct *work)
+{
+ struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
+ struct crypt_config *cc = io->cc;
+ struct convert_context *ctx = &io->ctx;
+ int crypt_finished;
+ sector_t sector = io->sector;
+ blk_status_t r;
+
+ wait_for_completion(&ctx->restart);
+ reinit_completion(&ctx->restart);
+
+ r = crypt_convert(cc, &io->ctx, true, false);
+ if (r)
+ io->error = r;
+ crypt_finished = atomic_dec_and_test(&ctx->cc_pending);
+ if (!crypt_finished && kcryptd_crypt_write_inline(cc, ctx)) {
+ /* Wait for completion signaled by kcryptd_async_done() */
+ wait_for_completion(&ctx->restart);
+ crypt_finished = 1;
+ }
+
+ /* Encryption was already finished, submit io now */
+ if (crypt_finished) {
+ kcryptd_crypt_write_io_submit(io, 0);
+ io->sector = sector;
+ }
+
+ crypt_dec_pending(io);
+}
+
static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
{
struct crypt_config *cc = io->cc;
crypt_inc_pending(io);
r = crypt_convert(cc, ctx,
- test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags));
+ test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags), true);
+ /*
+ * Crypto API backlogged the request, because its queue was full
+ * and we're in softirq context, so continue from a workqueue
+ * (TODO: is it actually possible to be in softirq in the write path?)
+ */
+ if (r == BLK_STS_DEV_RESOURCE) {
+ INIT_WORK(&io->work, kcryptd_crypt_write_continue);
+ queue_work(cc->crypt_queue, &io->work);
+ return;
+ }
if (r)
io->error = r;
crypt_finished = atomic_dec_and_test(&ctx->cc_pending);
crypt_dec_pending(io);
}
+static void kcryptd_crypt_read_continue(struct work_struct *work)
+{
+ struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
+ struct crypt_config *cc = io->cc;
+ blk_status_t r;
+
+ wait_for_completion(&io->ctx.restart);
+ reinit_completion(&io->ctx.restart);
+
+ r = crypt_convert(cc, &io->ctx, true, false);
+ if (r)
+ io->error = r;
+
+ if (atomic_dec_and_test(&io->ctx.cc_pending))
+ kcryptd_crypt_read_done(io);
+
+ crypt_dec_pending(io);
+}
+
static void kcryptd_crypt_read_convert(struct dm_crypt_io *io)
{
struct crypt_config *cc = io->cc;
io->sector);
r = crypt_convert(cc, &io->ctx,
- test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags));
+ test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags), true);
+ /*
+ * Crypto API backlogged the request, because its queue was full
+ * and we're in softirq context, so continue from a workqueue
+ */
+ if (r == BLK_STS_DEV_RESOURCE) {
+ INIT_WORK(&io->work, kcryptd_crypt_read_continue);
+ queue_work(cc->crypt_queue, &io->work);
+ return;
+ }
if (r)
io->error = r;
if ((bio_data_dir(io->base_bio) == READ && test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags)) ||
(bio_data_dir(io->base_bio) == WRITE && test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags))) {
- if (in_irq()) {
- /* Crypto API's "skcipher_walk_first() refuses to work in hard IRQ context */
+ /*
+ * in_irq(): Crypto API's skcipher_walk_first() refuses to work in hard IRQ context.
+ * irqs_disabled(): the kernel may run some IO completion from the idle thread, but
+ * it is being executed with irqs disabled.
+ */
+ if (in_irq() || irqs_disabled()) {
tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work);
tasklet_schedule(&io->tasklet);
return;
#undef MAY_BE_HASH
}
-static void dm_integrity_flush_buffers(struct dm_integrity_c *ic)
+struct flush_request {
+ struct dm_io_request io_req;
+ struct dm_io_region io_reg;
+ struct dm_integrity_c *ic;
+ struct completion comp;
+};
+
+static void flush_notify(unsigned long error, void *fr_)
+{
+ struct flush_request *fr = fr_;
+ if (unlikely(error != 0))
+		dm_integrity_io_error(fr->ic, "flushing disk cache", -EIO);
+ complete(&fr->comp);
+}
+
+static void dm_integrity_flush_buffers(struct dm_integrity_c *ic, bool flush_data)
{
int r;
+
+ struct flush_request fr;
+
+ if (!ic->meta_dev)
+ flush_data = false;
+ if (flush_data) {
+		fr.io_req.bi_op = REQ_OP_WRITE;
+		fr.io_req.bi_op_flags = REQ_PREFLUSH | REQ_SYNC;
+		fr.io_req.mem.type = DM_IO_KMEM;
+		fr.io_req.mem.ptr.addr = NULL;
+		fr.io_req.notify.fn = flush_notify;
+		fr.io_req.notify.context = &fr;
+		fr.io_req.client = dm_bufio_get_dm_io_client(ic->bufio);
+		fr.io_reg.bdev = ic->dev->bdev;
+		fr.io_reg.sector = 0;
+		fr.io_reg.count = 0;
+ fr.ic = ic;
+ init_completion(&fr.comp);
+ r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL);
+ BUG_ON(r);
+ }
+
r = dm_bufio_write_dirty_buffers(ic->bufio);
if (unlikely(r))
dm_integrity_io_error(ic, "writing tags", r);
+
+ if (flush_data)
+ wait_for_completion(&fr.comp);
}
static void sleep_on_endio_wait(struct dm_integrity_c *ic)
if (unlikely(dio->op == REQ_OP_DISCARD) && likely(ic->mode != 'D')) {
integrity_metadata(&dio->work);
- dm_integrity_flush_buffers(ic);
+ dm_integrity_flush_buffers(ic, false);
dio->in_flight = (atomic_t)ATOMIC_INIT(1);
dio->completion = NULL;
flushes = bio_list_get(&ic->flush_bio_list);
if (unlikely(ic->mode != 'J')) {
spin_unlock_irq(&ic->endio_wait.lock);
- dm_integrity_flush_buffers(ic);
+ dm_integrity_flush_buffers(ic, true);
goto release_flush_bios;
}
complete_journal_op(&comp);
wait_for_completion_io(&comp.comp);
- dm_integrity_flush_buffers(ic);
+ dm_integrity_flush_buffers(ic, true);
}
static void integrity_writer(struct work_struct *w)
{
int r;
- dm_integrity_flush_buffers(ic);
+ dm_integrity_flush_buffers(ic, false);
if (dm_integrity_failed(ic))
return;
unsigned long limit;
struct bio *bio;
- dm_integrity_flush_buffers(ic);
+ dm_integrity_flush_buffers(ic, false);
range.logical_sector = 0;
range.n_sectors = ic->provided_data_sectors;
add_new_range_and_wait(ic, &range);
spin_unlock_irq(&ic->endio_wait.lock);
- dm_integrity_flush_buffers(ic);
- if (ic->meta_dev)
- blkdev_issue_flush(ic->dev->bdev, GFP_NOIO);
+ dm_integrity_flush_buffers(ic, true);
limit = ic->provided_data_sectors;
if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)) {
if (ic->meta_dev)
queue_work(ic->writer_wq, &ic->writer_work);
drain_workqueue(ic->writer_wq);
- dm_integrity_flush_buffers(ic);
+ dm_integrity_flush_buffers(ic, true);
}
if (ic->mode == 'B') {
- dm_integrity_flush_buffers(ic);
+ dm_integrity_flush_buffers(ic, true);
#if 1
/* set to 0 to test bitmap replay code */
init_journal(ic, 0, ic->journal_sections, 0);
unsigned extra_args;
struct dm_arg_set as;
static const struct dm_arg _args[] = {
- {0, 9, "Invalid number of feature args"},
+ {0, 15, "Invalid number of feature args"},
};
unsigned journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec;
bool should_write_sb;
blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
/*
- * RAID1 and RAID10 personalities require bio splitting,
- * RAID0/4/5/6 don't and process large discard bios properly.
+ * RAID0 and RAID10 personalities require bio splitting,
+ * RAID1/4/5/6 don't and process large discard bios properly.
*/
- if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
+ if (rs_is_raid0(rs) || rs_is_raid10(rs)) {
limits->discard_granularity = chunk_size_bytes;
limits->max_discard_sectors = rs->md.chunk_sectors;
}
* for them to be committed.
*/
struct bio_list bios_queued_during_merge;
+
+ /*
+ * Flush data after merge.
+ */
+ struct bio flush_bio;
};
/*
static void error_bios(struct bio *bio);
+static int flush_data(struct dm_snapshot *s)
+{
+ struct bio *flush_bio = &s->flush_bio;
+
+ bio_reset(flush_bio);
+ bio_set_dev(flush_bio, s->origin->bdev);
+ flush_bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
+
+ return submit_bio_wait(flush_bio);
+}
+
static void merge_callback(int read_err, unsigned long write_err, void *context)
{
struct dm_snapshot *s = context;
goto shut;
}
+ if (flush_data(s) < 0) {
+ DMERR("Flush after merge failed: shutting down merge");
+ goto shut;
+ }
+
if (s->store->type->commit_merge(s->store,
s->num_merging_chunks) < 0) {
DMERR("Write error in exception store: shutting down merge");
s->first_merging_chunk = 0;
s->num_merging_chunks = 0;
bio_list_init(&s->bios_queued_during_merge);
+ bio_init(&s->flush_bio, NULL, 0);
/* Allocate hash table for COW data */
if (init_hash_tables(s)) {
dm_exception_store_destroy(s->store);
+ bio_uninit(&s->flush_bio);
+
dm_put_device(ti, s->cow);
dm_put_device(ti, s->origin);
* subset of the parent bdev; require extra privileges.
*/
if (!capable(CAP_SYS_RAWIO)) {
- DMWARN_LIMIT(
+ DMDEBUG_LIMIT(
"%s: sending ioctl %x to DM device without required privilege.",
current->comm, cmd);
r = -ENOIOCTLCMD;
return res;
}
-static int radio_isa_common_remove(struct radio_isa_card *isa,
- unsigned region_size)
+static void radio_isa_common_remove(struct radio_isa_card *isa,
+ unsigned region_size)
{
const struct radio_isa_ops *ops = isa->drv->ops;
release_region(isa->io, region_size);
v4l2_info(&isa->v4l2_dev, "Removed radio card %s\n", isa->drv->card);
kfree(isa);
- return 0;
}
int radio_isa_probe(struct device *pdev, unsigned int dev)
}
EXPORT_SYMBOL_GPL(radio_isa_probe);
-int radio_isa_remove(struct device *pdev, unsigned int dev)
+void radio_isa_remove(struct device *pdev, unsigned int dev)
{
struct radio_isa_card *isa = dev_get_drvdata(pdev);
- return radio_isa_common_remove(isa, isa->drv->region_size);
+ radio_isa_common_remove(isa, isa->drv->region_size);
}
EXPORT_SYMBOL_GPL(radio_isa_remove);
int radio_isa_match(struct device *pdev, unsigned int dev);
int radio_isa_probe(struct device *pdev, unsigned int dev);
-int radio_isa_remove(struct device *pdev, unsigned int dev);
+void radio_isa_remove(struct device *pdev, unsigned int dev);
#ifdef CONFIG_PNP
int radio_isa_pnp_probe(struct pnp_dev *dev,
const struct pnp_device_id *dev_id);
kfree(fmr2);
}
-static int fmr2_isa_remove(struct device *pdev, unsigned int ndev)
+static void fmr2_isa_remove(struct device *pdev, unsigned int ndev)
{
fmr2_remove(dev_get_drvdata(pdev));
-
- return 0;
}
static void fmr2_pnp_remove(struct pnp_dev *pdev)
return -ENXIO;
}
-static int tscan1_remove(struct device *dev, unsigned id /*unused*/)
+static void tscan1_remove(struct device *dev, unsigned id /*unused*/)
{
struct net_device *netdev;
struct sja1000_priv *priv;
release_region(pld_base, TSCAN1_PLD_SIZE);
free_sja1000dev(netdev);
-
- return 0;
}
static struct isa_driver tscan1_isa_driver = {
else
skb = alloc_can_skb(priv->ndev, (struct can_frame **)&cfd);
- if (!cfd) {
+ if (!skb) {
stats->rx_dropped++;
return 0;
}
return 1;
}
-static int el3_isa_remove(struct device *pdev,
+static void el3_isa_remove(struct device *pdev,
unsigned int ndev)
{
el3_device_remove(pdev);
dev_set_drvdata(pdev, NULL);
- return 0;
}
#ifdef CONFIG_PM
if (rc && ((struct hwrm_err_output *)&resp)->cmd_err ==
NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) {
- install.flags |=
+ install.flags =
cpu_to_le16(NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG);
rc = _hwrm_send_message_silent(bp, &install,
* UPDATE directory and try the flash again
*/
defrag_attempted = true;
+ install.flags = 0;
rc = __bnxt_flash_nvram(bp->dev,
BNX_DIR_TYPE_UPDATE,
BNX_DIR_ORDINAL_FIRST,
int bnxt_get_ulp_stat_ctxs(struct bnxt *bp)
{
- if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP))
- return BNXT_MIN_ROCE_STAT_CTXS;
+ if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
+ struct bnxt_en_dev *edev = bp->edev;
+
+ if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested)
+ return BNXT_MIN_ROCE_STAT_CTXS;
+ }
return 0;
}
#define TCB_L2T_IX_M 0xfffULL
#define TCB_L2T_IX_V(x) ((x) << TCB_L2T_IX_S)
+#define TCB_T_FLAGS_W 1
+#define TCB_T_FLAGS_S 0
+#define TCB_T_FLAGS_M 0xffffffffffffffffULL
+#define TCB_T_FLAGS_V(x) ((__u64)(x) << TCB_T_FLAGS_S)
+
+#define TCB_FIELD_COOKIE_TFLAG 1
+
#define TCB_SMAC_SEL_W 0
#define TCB_SMAC_SEL_S 24
#define TCB_SMAC_SEL_M 0xffULL
void chtls_tcp_push(struct sock *sk, int flags);
int chtls_push_frames(struct chtls_sock *csk, int comp);
int chtls_set_tcb_tflag(struct sock *sk, unsigned int bit_pos, int val);
+void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word,
+ u64 mask, u64 val, u8 cookie,
+ int through_l2t);
int chtls_setkey(struct chtls_sock *csk, u32 keylen, u32 mode, int cipher_type);
+void chtls_set_quiesce_ctrl(struct sock *sk, int val);
void skb_entail(struct sock *sk, struct sk_buff *skb, int flags);
unsigned int keyid_to_addr(int start_addr, int keyid);
void free_tls_keyid(struct sock *sk);
#include "chtls.h"
#include "chtls_cm.h"
#include "clip_tbl.h"
+#include "t4_tcb.h"
/*
* State transitions and actions for close. Note that if we are in SYN_SENT
if (sk->sk_state != TCP_SYN_RECV)
chtls_send_abort(sk, mode, skb);
else
- goto out;
+ chtls_set_tcb_field_rpl_skb(sk, TCB_T_FLAGS_W,
+ TCB_T_FLAGS_V(TCB_T_FLAGS_M), 0,
+ TCB_FIELD_COOKIE_TFLAG, 1);
return;
out:
else if (tcp_sk(sk)->linger2 < 0 &&
!csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN))
chtls_abort_conn(sk, skb);
+ else if (csk_flag_nochk(csk, CSK_TX_DATA_SENT))
+ chtls_set_quiesce_ctrl(sk, 0);
break;
default:
pr_info("close_con_rpl in bad state %d\n", sk->sk_state);
return 0;
}
+static int chtls_set_tcb_rpl(struct chtls_dev *cdev, struct sk_buff *skb)
+{
+ struct cpl_set_tcb_rpl *rpl = cplhdr(skb) + RSS_HDR;
+ unsigned int hwtid = GET_TID(rpl);
+ struct sock *sk;
+
+ sk = lookup_tid(cdev->tids, hwtid);
+
+ /* return EINVAL if socket doesn't exist */
+ if (!sk)
+ return -EINVAL;
+
+	/* Reusing the skb, as the cpl_set_tcb_field structure
+	 * is larger than cpl_abort_req.
+ */
+ if (TCB_COOKIE_G(rpl->cookie) == TCB_FIELD_COOKIE_TFLAG)
+ chtls_send_abort(sk, CPL_ABORT_SEND_RST, NULL);
+
+ kfree_skb(skb);
+ return 0;
+}
+
chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = {
[CPL_PASS_OPEN_RPL] = chtls_pass_open_rpl,
[CPL_CLOSE_LISTSRV_RPL] = chtls_close_listsrv_rpl,
[CPL_CLOSE_CON_RPL] = chtls_conn_cpl,
[CPL_ABORT_REQ_RSS] = chtls_conn_cpl,
[CPL_ABORT_RPL_RSS] = chtls_conn_cpl,
- [CPL_FW4_ACK] = chtls_wr_ack,
+ [CPL_FW4_ACK] = chtls_wr_ack,
+ [CPL_SET_TCB_RPL] = chtls_set_tcb_rpl,
};
return ret < 0 ? ret : 0;
}
+void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word,
+ u64 mask, u64 val, u8 cookie,
+ int through_l2t)
+{
+ struct sk_buff *skb;
+ unsigned int wrlen;
+
+ wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata);
+ wrlen = roundup(wrlen, 16);
+
+ skb = alloc_skb(wrlen, GFP_KERNEL | __GFP_NOFAIL);
+ if (!skb)
+ return;
+
+ __set_tcb_field(sk, skb, word, mask, val, cookie, 0);
+ send_or_defer(sk, tcp_sk(sk), skb, through_l2t);
+}
+
/*
* Set one of the t_flags bits in the TCB.
*/
TF_RX_QUIESCE_V(val));
}
+void chtls_set_quiesce_ctrl(struct sock *sk, int val)
+{
+ struct chtls_sock *csk;
+ struct sk_buff *skb;
+ unsigned int wrlen;
+ int ret;
+
+ wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata);
+ wrlen = roundup(wrlen, 16);
+
+ skb = alloc_skb(wrlen, GFP_ATOMIC);
+ if (!skb)
+ return;
+
+ csk = rcu_dereference_sk_user_data(sk);
+
+ __set_tcb_field(sk, skb, 1, TF_RX_QUIESCE_V(1), 0, 0, 1);
+ set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->port_id);
+ ret = cxgb4_ofld_send(csk->egress_dev, skb);
+ if (ret < 0)
+ kfree_skb(skb);
+}
+
/* TLS Key bitmap processing */
int chtls_init_kmap(struct chtls_dev *cdev, struct cxgb4_lld_info *lldi)
{
* SBP is *not* set in PRT_SBPVSI (default not set).
*/
skb = i40e_construct_skb_zc(rx_ring, *bi);
- *bi = NULL;
if (!skb) {
rx_ring->rx_stats.alloc_buff_failed++;
break;
}
+ *bi = NULL;
cleaned_count++;
i40e_inc_ntc(rx_ring);
phylink_set(mask, Autoneg);
phylink_set_port_modes(mask);
- phylink_set(mask, Pause);
- phylink_set(mask, Asym_Pause);
switch (state->interface) {
case PHY_INTERFACE_MODE_10GBASER:
#define MLXSW_THERMAL_ASIC_TEMP_NORM 75000 /* 75C */
#define MLXSW_THERMAL_ASIC_TEMP_HIGH 85000 /* 85C */
#define MLXSW_THERMAL_ASIC_TEMP_HOT 105000 /* 105C */
-#define MLXSW_THERMAL_ASIC_TEMP_CRIT 110000 /* 110C */
+#define MLXSW_THERMAL_ASIC_TEMP_CRIT 140000 /* 140C */
#define MLXSW_THERMAL_HYSTERESIS_TEMP 5000 /* 5C */
#define MLXSW_THERMAL_MODULE_TEMP_SHIFT (MLXSW_THERMAL_HYSTERESIS_TEMP * 2)
#define MLXSW_THERMAL_ZONE_MAX_NAME 16
if (err)
return err;
+ if (crit_temp > emerg_temp) {
+		dev_warn(dev, "%s: Critical threshold %d is above emergency threshold %d\n",
+ tz->tzdev->type, crit_temp, emerg_temp);
+ return 0;
+ }
+
/* According to the system thermal requirements, the thermal zones are
* defined with four trip points. The critical and emergency
* temperature thresholds, provided by QSFP module are set as "active"
tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = crit_temp;
tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp = crit_temp;
tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temp = emerg_temp;
- if (emerg_temp > crit_temp)
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
+ tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
MLXSW_THERMAL_MODULE_TEMP_SHIFT;
- else
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp;
return 0;
}
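
For reference, the resulting trip layout can be sketched stand-alone (plain C, made-up module thresholds; the macro values mirror the constants above): with the crit > emerg case now rejected early, the critical trip is always the emergency threshold plus twice the 5C hysteresis.

#include <stdio.h>

#define HYSTERESIS_TEMP		5000			/* 5C in millidegrees */
#define MODULE_TEMP_SHIFT	(HYSTERESIS_TEMP * 2)	/* 10C */

int main(void)
{
	/* Example thresholds as a QSFP module might report them (mC). */
	int crit_temp = 75000, emerg_temp = 85000;

	printf("norm/high trips: %d\n", crit_temp);		/* 75000 */
	printf("hot trip:        %d\n", emerg_temp);		/* 85000 */
	printf("crit trip:       %d\n", emerg_temp + MODULE_TEMP_SHIFT); /* 95000 */
	return 0;
}
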
.ndo_set_features = netxen_set_features,
};
-static inline bool netxen_function_zero(struct pci_dev *pdev)
-{
- return (PCI_FUNC(pdev->devfn) == 0) ? true : false;
-}
-
static inline void netxen_set_interrupt_mode(struct netxen_adapter *adapter,
u32 mode)
{
netxen_initialize_interrupt_registers(adapter);
netxen_set_msix_bit(pdev, 0);
- if (netxen_function_zero(pdev)) {
+ if (adapter->portnum == 0) {
if (!netxen_setup_msi_interrupts(adapter, num_msix))
netxen_set_interrupt_mode(adapter, NETXEN_MSI_MODE);
else
int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg,
unsigned int ptp_rate)
{
- u32 speed, total_offset, offset, ctrl, ctr_low;
- u32 extcfg = readl(ioaddr + GMAC_EXT_CONFIG);
- u32 mac_cfg = readl(ioaddr + GMAC_CONFIG);
int i, ret = 0x0;
- u64 total_ctr;
-
- if (extcfg & GMAC_CONFIG_EIPG_EN) {
- offset = (extcfg & GMAC_CONFIG_EIPG) >> GMAC_CONFIG_EIPG_SHIFT;
- offset = 104 + (offset * 8);
- } else {
- offset = (mac_cfg & GMAC_CONFIG_IPG) >> GMAC_CONFIG_IPG_SHIFT;
- offset = 96 - (offset * 8);
- }
-
- speed = mac_cfg & (GMAC_CONFIG_PS | GMAC_CONFIG_FES);
- speed = speed >> GMAC_CONFIG_FES_SHIFT;
-
- switch (speed) {
- case 0x0:
- offset = offset * 1000; /* 1G */
- break;
- case 0x1:
- offset = offset * 400; /* 2.5G */
- break;
- case 0x2:
- offset = offset * 100000; /* 10M */
- break;
- case 0x3:
- offset = offset * 10000; /* 100M */
- break;
- default:
- return -EINVAL;
- }
-
- offset = offset / 1000;
+ u32 ctrl;
ret |= dwmac5_est_write(ioaddr, BTR_LOW, cfg->btr[0], false);
ret |= dwmac5_est_write(ioaddr, BTR_HIGH, cfg->btr[1], false);
ret |= dwmac5_est_write(ioaddr, TER, cfg->ter, false);
ret |= dwmac5_est_write(ioaddr, LLR, cfg->gcl_size, false);
+ ret |= dwmac5_est_write(ioaddr, CTR_LOW, cfg->ctr[0], false);
+ ret |= dwmac5_est_write(ioaddr, CTR_HIGH, cfg->ctr[1], false);
if (ret)
return ret;
- total_offset = 0;
for (i = 0; i < cfg->gcl_size; i++) {
- ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i] + offset, true);
+ ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i], true);
if (ret)
return ret;
-
- total_offset += offset;
}
- total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000ULL;
- total_ctr += total_offset;
-
- ctr_low = do_div(total_ctr, 1000000000);
-
- ret |= dwmac5_est_write(ioaddr, CTR_LOW, ctr_low, false);
- ret |= dwmac5_est_write(ioaddr, CTR_HIGH, total_ctr, false);
- if (ret)
- return ret;
-
ctrl = readl(ioaddr + MTL_EST_CONTROL);
ctrl &= ~PTOV;
ctrl |= ((1000000000 / ptp_rate) * 6) << PTOV_SHIFT;
spin_lock_irqsave(&ch->lock, flags);
stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 0);
spin_unlock_irqrestore(&ch->lock, flags);
- __napi_schedule_irqoff(&ch->rx_napi);
+ __napi_schedule(&ch->rx_napi);
}
}
spin_lock_irqsave(&ch->lock, flags);
stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 0, 1);
spin_unlock_irqrestore(&ch->lock, flags);
- __napi_schedule_irqoff(&ch->tx_napi);
+ __napi_schedule(&ch->tx_napi);
}
}
{
struct stmmac_priv *priv = netdev_priv(dev);
int txfifosz = priv->plat->tx_fifo_size;
+ const int mtu = new_mtu;
if (txfifosz == 0)
txfifosz = priv->dma_cap.tx_fifo_size;
if ((txfifosz < new_mtu) || (new_mtu > BUF_SIZE_16KiB))
return -EINVAL;
- dev->mtu = new_mtu;
+ dev->mtu = mtu;
netdev_update_features(dev);
{
u32 size, wid = priv->dma_cap.estwid, dep = priv->dma_cap.estdep;
struct plat_stmmacenet_data *plat = priv->plat;
- struct timespec64 time;
+ struct timespec64 time, current_time;
+ ktime_t current_time_ns;
bool fpe = false;
int i, ret = 0;
u64 ctr;
}
/* Adjust for real system time */
- time = ktime_to_timespec64(qopt->base_time);
+	priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time);
+ current_time_ns = timespec64_to_ktime(current_time);
+ if (ktime_after(qopt->base_time, current_time_ns)) {
+ time = ktime_to_timespec64(qopt->base_time);
+ } else {
+ ktime_t base_time;
+ s64 n;
+
+ n = div64_s64(ktime_sub_ns(current_time_ns, qopt->base_time),
+ qopt->cycle_time);
+ base_time = ktime_add_ns(qopt->base_time,
+ (n + 1) * qopt->cycle_time);
+
+ time = ktime_to_timespec64(base_time);
+ }
+
priv->plat->est->btr[0] = (u32)time.tv_nsec;
priv->plat->est->btr[1] = (u32)time.tv_sec;
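
The rounding above can be modelled in isolation; a minimal sketch with illustrative names and nanosecond values, assuming a positive cycle_time: a base time in the past is advanced by whole cycles so the schedule starts at the first cycle boundary after the current PTP time.

#include <stdint.h>
#include <stdio.h>

static int64_t next_base_time(int64_t base_ns, int64_t cycle_ns, int64_t now_ns)
{
	int64_t n;

	if (base_ns > now_ns)		/* already in the future: keep it */
		return base_ns;

	n = (now_ns - base_ns) / cycle_ns;	/* whole cycles elapsed */
	return base_ns + (n + 1) * cycle_ns;	/* first boundary after now */
}

int main(void)
{
	/* base 1000ns, 300ns cycle, now 1950ns -> 1000 + 4 * 300 = 2200ns */
	printf("%lld\n", (long long)next_base_time(1000, 300, 1950));
	return 0;
}
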
ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]->netdev = netdev;
ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->netdev = netdev;
+ SET_NETDEV_DEV(netdev, &ipa->pdev->dev);
priv = netdev_priv(netdev);
priv->ipa = ipa;
/* Make clk optional to keep DTB backward compatibility. */
priv->refclk = clk_get_optional(dev, NULL);
if (IS_ERR(priv->refclk))
- dev_err_probe(dev, PTR_ERR(priv->refclk), "Failed to request clock\n");
+ return dev_err_probe(dev, PTR_ERR(priv->refclk),
+ "Failed to request clock\n");
ret = clk_prepare_enable(priv->refclk);
if (ret)
write_unlock_bh(&pch->upl);
return -EALREADY;
}
+ refcount_inc(&pchb->file.refcnt);
rcu_assign_pointer(pch->bridge, pchb);
write_unlock_bh(&pch->upl);
write_unlock_bh(&pchb->upl);
goto err_unset;
}
+ refcount_inc(&pch->file.refcnt);
rcu_assign_pointer(pchb->bridge, pch);
write_unlock_bh(&pchb->upl);
- refcount_inc(&pch->file.refcnt);
- refcount_inc(&pchb->file.refcnt);
-
return 0;
err_unset:
write_lock_bh(&pch->upl);
+ /* Re-read pch->bridge with upl held in case it was modified concurrently */
+ pchb = rcu_dereference_protected(pch->bridge, lockdep_is_held(&pch->upl));
RCU_INIT_POINTER(pch->bridge, NULL);
write_unlock_bh(&pch->upl);
synchronize_rcu();
+
+ if (pchb)
+ if (refcount_dec_and_test(&pchb->file.refcnt))
+ ppp_destroy_channel(pchb);
+
return -EALREADY;
}
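
The point of moving the refcount_inc() before rcu_assign_pointer() can be shown with a minimal C11 model (an illustration, not the kernel's RCU/refcount API): a reader that sees the published pointer must already be covered by the reference count.

#include <stdatomic.h>
#include <stdio.h>

struct chan {
	atomic_long refcnt;
};

static _Atomic(struct chan *) bridge;

static void publish_bridge(struct chan *pchb)
{
	/* Pin the target first... */
	atomic_fetch_add(&pchb->refcnt, 1);
	/* ...then publish; any reader that observes the pointer is
	 * guaranteed the refcount already accounts for it. */
	atomic_store_explicit(&bridge, pchb, memory_order_release);
}

int main(void)
{
	struct chan c = { .refcnt = 1 };

	publish_bridge(&c);
	printf("refcnt after publish: %ld\n", atomic_load(&c.refcnt));
	return 0;
}
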
config USB_RTL8153_ECM
tristate "RTL8153 ECM support"
depends on USB_NET_CDCETHER && (USB_RTL8152 || USB_RTL8152=n)
- default y
help
This option supports ECM mode for RTL8153 ethernet adapter, when
CONFIG_USB_RTL8152 is not set, or the RTL8153 device is not
.driver_info = 0,
},
+/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */
+{
+ USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x721e, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+ .driver_info = 0,
+},
+
/* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */
{
USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x721e)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387)},
{REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
{REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)},
};
static const struct usb_device_id products[] = {
+/* Realtek RTL8153 Based USB 3.0 Ethernet Adapters */
{
USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_REALTEK, 0x8153, USB_CLASS_COMM,
USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&r8153_info,
},
+/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */
+{
+ USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_LENOVO, 0x721e, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&r8153_info,
+},
+
{ }, /* END */
};
MODULE_DEVICE_TABLE(usb, products);
reply_len = sizeof *phym;
retval = rndis_query(dev, intf, u.buf,
RNDIS_OID_GEN_PHYSICAL_MEDIUM,
- 0, (void **) &phym, &reply_len);
+ reply_len, (void **)&phym, &reply_len);
if (retval != 0 || !phym) {
/* OID is optional so don't fail here. */
phym_unspec = cpu_to_le32(RNDIS_PHYSICAL_MEDIUM_UNSPECIFIED);
NULL,
};
+static inline bool nvme_discovery_ctrl(struct nvme_ctrl *ctrl)
+{
+ return ctrl->opts && ctrl->opts->discovery_nqn;
+}
+
static bool nvme_validate_cntlid(struct nvme_subsystem *subsys,
struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
{
}
if ((id->cmic & NVME_CTRL_CMIC_MULTI_CTRL) ||
- (ctrl->opts && ctrl->opts->discovery_nqn))
+ nvme_discovery_ctrl(ctrl))
continue;
dev_err(ctrl->device,
goto out_free;
}
- if (!ctrl->opts->discovery_nqn && !ctrl->kas) {
+ if (!nvme_discovery_ctrl(ctrl) && !ctrl->kas) {
dev_err(ctrl->device,
"keep-alive support is mandatory for fabrics\n");
ret = -EINVAL;
if (ret < 0)
return ret;
- if (!ctrl->identified) {
+ if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) {
ret = nvme_hwmon_init(ctrl);
if (ret < 0)
return ret;
static inline size_t nvme_tcp_req_cur_length(struct nvme_tcp_request *req)
{
- return min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset,
+ return min_t(size_t, iov_iter_single_seg_count(&req->iter),
req->pdu_len - req->pdu_sent);
}
* directly, otherwise queue io_work. Also, only do that if we
* are on the same cpu, so we don't introduce contention.
*/
- if (queue->io_cpu == smp_processor_id() &&
+ if (queue->io_cpu == __smp_processor_id() &&
sync && empty && mutex_trylock(&queue->send_mutex)) {
queue->more_requests = !last;
nvme_tcp_send_all(queue);
}
ndev->inline_data_size = nport->inline_data_size;
ndev->inline_page_count = inline_page_count;
+
+ if (nport->pi_enable && !(cm_id->device->attrs.device_cap_flags &
+ IB_DEVICE_INTEGRITY_HANDOVER)) {
+ pr_warn("T10-PI is not supported by device %s. Disabling it\n",
+ cm_id->device->name);
+ nport->pi_enable = false;
+ }
+
ndev->device = cm_id->device;
kref_init(&ndev->ref);
goto out_destroy_id;
}
- if (port->nport->pi_enable &&
- !(cm_id->device->attrs.device_cap_flags &
- IB_DEVICE_INTEGRITY_HANDOVER)) {
- pr_err("T10-PI is not supported for %pISpcs\n", addr);
- ret = -EINVAL;
- goto out_destroy_id;
- }
-
port->cm_id = cm_id;
return 0;
return per_cpu(hw_events->irq, cpu);
}
-bool arm_pmu_irq_is_nmi(void)
-{
- return has_nmi;
-}
-
/*
* PMU hardware loses all context when a CPU goes offline.
* When a CPU is hotplugged back in, since some hardware registers are
return err;
}
-static int advansys_isa_remove(struct device *dev, unsigned int id)
+static void advansys_isa_remove(struct device *dev, unsigned int id)
{
int ioport = _asc_def_iop_base[id];
advansys_release(dev_get_drvdata(dev));
release_region(ioport, ASC_IOADR_GAP);
- return 0;
}
static struct isa_driver advansys_isa_driver = {
return 1;
}
-static int aha1542_isa_remove(struct device *pdev,
+static void aha1542_isa_remove(struct device *pdev,
unsigned int ndev)
{
aha1542_release(dev_get_drvdata(pdev));
dev_set_drvdata(pdev, NULL);
- return 0;
}
static struct isa_driver aha1542_isa_driver = {
return 1;
}
-static int fdomain_isa_remove(struct device *dev, unsigned int ndev)
+static void fdomain_isa_remove(struct device *dev, unsigned int ndev)
{
struct Scsi_Host *sh = dev_get_drvdata(dev);
int base = sh->io_port;
fdomain_destroy(sh);
release_region(base, FDOMAIN_REGION_SIZE);
dev_set_drvdata(dev, NULL);
- return 0;
}
static struct isa_driver fdomain_isa_driver = {
return 1;
}
-static int generic_NCR5380_isa_remove(struct device *pdev,
- unsigned int ndev)
+static void generic_NCR5380_isa_remove(struct device *pdev,
+ unsigned int ndev)
{
generic_NCR5380_release_resources(dev_get_drvdata(pdev));
dev_set_drvdata(pdev, NULL);
- return 0;
}
static struct isa_driver generic_NCR5380_isa_driver = {
select SCSI_MPT3SAS
depends on PCI && SCSI
help
- Dummy config option for backwards compatiblity: configure the MPT3SAS
+ Dummy config option for backwards compatibility: configure the MPT3SAS
driver instead.
chap_name);
break;
case ISCSI_BOOT_TGT_CHAP_SECRET:
- rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
+ rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN,
chap_secret);
break;
case ISCSI_BOOT_TGT_REV_CHAP_NAME:
mchap_name);
break;
case ISCSI_BOOT_TGT_REV_CHAP_SECRET:
- rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
+ rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN,
mchap_secret);
break;
case ISCSI_BOOT_TGT_FLAGS:
k = sdeb_zbc_model_str(sdeb_zbc_model_s);
if (k < 0) {
ret = k;
- goto free_vm;
+ goto free_q_arr;
}
sdeb_zbc_model = k;
switch (sdeb_zbc_model) {
break;
default:
pr_err("Invalid ZBC model\n");
- return -EINVAL;
+ ret = -EINVAL;
+ goto free_q_arr;
}
}
if (sdeb_zbc_model != BLK_ZONED_NONE) {
}
}
- if (sdp->no_write_same)
+ if (sdp->no_write_same) {
+ rq->rq_flags |= RQF_QUIET;
return BLK_STS_TARGET;
+ }
if (sdkp->ws16 || lba > 0xffffffff || nr_blocks > 0xffff)
return sd_setup_write_same16_cmnd(cmd, false);
static int sd_remove(struct device *dev)
{
struct scsi_disk *sdkp;
- dev_t devt;
sdkp = dev_get_drvdata(dev);
- devt = disk_devt(sdkp->disk);
scsi_autopm_get_device(sdkp->device);
async_synchronize_full_domain(&scsi_sd_pm_domain);
if (ret)
dev_err(hba->dev, "%s: En WB flush during H8: failed: %d\n",
__func__, ret);
- ufshcd_wb_toggle_flush(hba, true);
+ if (!(hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL))
+ ufshcd_wb_toggle_flush(hba, true);
}
static void ufshcd_scsi_unblock_requests(struct ufs_hba *hba)
static inline void ufshcd_wb_toggle_flush(struct ufs_hba *hba, bool enable)
{
- if (hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL)
- return;
-
if (enable)
ufshcd_wb_buf_flush_enable(hba);
else
{
struct Scsi_Host *host;
struct ufs_hba *hba;
- unsigned int tag;
u32 pos;
int err;
- u8 resp = 0xF;
- struct ufshcd_lrb *lrbp;
+ u8 resp = 0xF, lun;
unsigned long flags;
host = cmd->device->host;
hba = shost_priv(host);
- tag = cmd->request->tag;
- lrbp = &hba->lrb[tag];
- err = ufshcd_issue_tm_cmd(hba, lrbp->lun, 0, UFS_LOGICAL_RESET, &resp);
+ lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
+ err = ufshcd_issue_tm_cmd(hba, lun, 0, UFS_LOGICAL_RESET, &resp);
if (err || resp != UPIU_TASK_MANAGEMENT_FUNC_COMPL) {
if (!err)
err = resp;
/* clear the commands that were pending for corresponding LUN */
for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) {
- if (hba->lrb[pos].lun == lrbp->lun) {
+ if (hba->lrb[pos].lun == lun) {
err = ufshcd_clear_cmd(hba, pos);
if (err)
break;
ufshcd_wb_need_flush(hba));
}
+ flush_work(&hba->eeh_work);
+
if (req_dev_pwr_mode != hba->curr_dev_pwr_mode) {
if (!ufshcd_is_runtime_pm(pm_op))
/* ensure that bkops is disabled */
}
}
- flush_work(&hba->eeh_work);
-
/*
* In the case of DeepSleep, the device is expected to remain powered
* with the link off, so do not check for bkops.
if ((ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl) ==
hba->curr_dev_pwr_mode) &&
(ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl) ==
- hba->uic_link_state))
+ hba->uic_link_state) &&
+ !hba->dev_info.b_rpm_dev_flush_capable)
goto out;
if (pm_runtime_suspended(hba->dev)) {
return 0;
}
-struct xcopy_dev_search_info {
- const unsigned char *dev_wwn;
- struct se_device *found_dev;
-};
-
+/**
+ * target_xcopy_locate_se_dev_e4_iter - compare XCOPY NAA device identifiers
+ *
+ * @se_dev: device being considered for match
+ * @dev_wwn: XCOPY requested NAA dev_wwn
+ * Return: 1 on match, 0 on no-match
+ */
static int target_xcopy_locate_se_dev_e4_iter(struct se_device *se_dev,
- void *data)
+ const unsigned char *dev_wwn)
{
- struct xcopy_dev_search_info *info = data;
unsigned char tmp_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
int rc;
- if (!se_dev->dev_attrib.emulate_3pc)
+ if (!se_dev->dev_attrib.emulate_3pc) {
+ pr_debug("XCOPY: emulate_3pc disabled on se_dev %p\n", se_dev);
return 0;
+ }
memset(&tmp_dev_wwn[0], 0, XCOPY_NAA_IEEE_REGEX_LEN);
target_xcopy_gen_naa_ieee(se_dev, &tmp_dev_wwn[0]);
- rc = memcmp(&tmp_dev_wwn[0], info->dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN);
- if (rc != 0)
- return 0;
-
- info->found_dev = se_dev;
- pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev);
-
- rc = target_depend_item(&se_dev->dev_group.cg_item);
+ rc = memcmp(&tmp_dev_wwn[0], dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN);
if (rc != 0) {
- pr_err("configfs_depend_item attempt failed: %d for se_dev: %p\n",
- rc, se_dev);
- return rc;
+ pr_debug("XCOPY: skip non-matching: %*ph\n",
+ XCOPY_NAA_IEEE_REGEX_LEN, tmp_dev_wwn);
+ return 0;
}
+ pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev);
- pr_debug("Called configfs_depend_item for se_dev: %p se_dev->se_dev_group: %p\n",
- se_dev, &se_dev->dev_group);
return 1;
}
-static int target_xcopy_locate_se_dev_e4(const unsigned char *dev_wwn,
- struct se_device **found_dev)
+static int target_xcopy_locate_se_dev_e4(struct se_session *sess,
+ const unsigned char *dev_wwn,
+ struct se_device **_found_dev,
+ struct percpu_ref **_found_lun_ref)
{
- struct xcopy_dev_search_info info;
- int ret;
-
- memset(&info, 0, sizeof(info));
- info.dev_wwn = dev_wwn;
-
- ret = target_for_each_device(target_xcopy_locate_se_dev_e4_iter, &info);
- if (ret == 1) {
- *found_dev = info.found_dev;
- return 0;
- } else {
- pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n");
- return -EINVAL;
+ struct se_dev_entry *deve;
+ struct se_node_acl *nacl;
+ struct se_lun *this_lun = NULL;
+ struct se_device *found_dev = NULL;
+
+ /* cmd with NULL sess indicates no associated $FABRIC_MOD */
+ if (!sess)
+ goto err_out;
+
+ pr_debug("XCOPY 0xe4: searching for: %*ph\n",
+ XCOPY_NAA_IEEE_REGEX_LEN, dev_wwn);
+
+ nacl = sess->se_node_acl;
+ rcu_read_lock();
+ hlist_for_each_entry_rcu(deve, &nacl->lun_entry_hlist, link) {
+ struct se_device *this_dev;
+ int rc;
+
+ this_lun = rcu_dereference(deve->se_lun);
+ this_dev = rcu_dereference_raw(this_lun->lun_se_dev);
+
+ rc = target_xcopy_locate_se_dev_e4_iter(this_dev, dev_wwn);
+ if (rc) {
+ if (percpu_ref_tryget_live(&this_lun->lun_ref))
+ found_dev = this_dev;
+ break;
+ }
}
+ rcu_read_unlock();
+ if (found_dev == NULL)
+ goto err_out;
+
+	pr_debug("lun_ref held for se_dev: %p se_dev->dev_group: %p\n",
+ found_dev, &found_dev->dev_group);
+ *_found_dev = found_dev;
+ *_found_lun_ref = &this_lun->lun_ref;
+ return 0;
+err_out:
+ pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n");
+ return -EINVAL;
}
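
A rough C11 model of the tryget-or-skip step (an assumption-level sketch, not percpu_ref itself): the reference is granted only while the object is still live, so a device found under RCU is returned pinned or not at all.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct ref {
	atomic_long count;	/* > 0 means live, 0 means dying */
};

static bool ref_tryget_live(struct ref *r)
{
	long v = atomic_load(&r->count);

	while (v > 0) {
		/* Only bump the count while the object is still live. */
		if (atomic_compare_exchange_weak(&r->count, &v, v + 1))
			return true;
	}
	return false;	/* teardown already started: skip this device */
}

int main(void)
{
	struct ref live = { .count = 1 }, dying = { .count = 0 };

	printf("live:  %d\n", ref_tryget_live(&live));	/* 1 */
	printf("dying: %d\n", ref_tryget_live(&dying));	/* 0 */
	return 0;
}
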
static int target_xcopy_parse_tiddesc_e4(struct se_cmd *se_cmd, struct xcopy_op *xop,
switch (xop->op_origin) {
case XCOL_SOURCE_RECV_OP:
- rc = target_xcopy_locate_se_dev_e4(xop->dst_tid_wwn,
- &xop->dst_dev);
+ rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess,
+ xop->dst_tid_wwn,
+ &xop->dst_dev,
+ &xop->remote_lun_ref);
break;
case XCOL_DEST_RECV_OP:
- rc = target_xcopy_locate_se_dev_e4(xop->src_tid_wwn,
- &xop->src_dev);
+ rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess,
+ xop->src_tid_wwn,
+ &xop->src_dev,
+ &xop->remote_lun_ref);
break;
default:
pr_err("XCOPY CSCD descriptor IDs not found in CSCD list - "
static void xcopy_pt_undepend_remotedev(struct xcopy_op *xop)
{
- struct se_device *remote_dev;
-
if (xop->op_origin == XCOL_SOURCE_RECV_OP)
- remote_dev = xop->dst_dev;
+ pr_debug("putting dst lun_ref for %p\n", xop->dst_dev);
else
- remote_dev = xop->src_dev;
-
- pr_debug("Calling configfs_undepend_item for"
- " remote_dev: %p remote_dev->dev_group: %p\n",
- remote_dev, &remote_dev->dev_group.cg_item);
+ pr_debug("putting src lun_ref for %p\n", xop->src_dev);
- target_undepend_item(&remote_dev->dev_group.cg_item);
+ percpu_ref_put(xop->remote_lun_ref);
}
static void xcopy_pt_release_cmd(struct se_cmd *se_cmd)
struct se_device *dst_dev;
unsigned char dst_tid_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
unsigned char local_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
+ struct percpu_ref *remote_lun_ref;
sector_t src_lba;
sector_t dst_lba;
/* Set up clock divider */
ssp->clkin_rate = clk_get_rate(ssp->clk);
ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE;
+ ssp->port.uartclk = ssp->baud_rate * 16;
__ssp_update_div(ssp);
platform_set_drvdata(pdev, ssp);
return ret;
}
-static int pcwd_isa_remove(struct device *dev, unsigned int id)
+static void pcwd_isa_remove(struct device *dev, unsigned int id)
{
if (debug >= DEBUG)
pr_debug("pcwd_isa_remove id=%d\n", id);
- if (!pcwd_private.io_addr)
- return 1;
-
/* Disable the board */
if (!nowayout)
pcwd_stop();
(pcwd_private.revision == PCWD_REVISION_A) ? 2 : 4);
pcwd_private.io_addr = 0x0000;
cards_found--;
-
- return 0;
}
static void pcwd_isa_shutdown(struct device *dev, unsigned int id)
.irq_ack = ack_dynirq,
};
-int xen_set_callback_via(uint64_t via)
-{
- struct xen_hvm_param a;
- a.domid = DOMID_SELF;
- a.index = HVM_PARAM_CALLBACK_IRQ;
- a.value = via;
- return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
-}
-EXPORT_SYMBOL_GPL(xen_set_callback_via);
-
#ifdef CONFIG_XEN_PVHVM
/* Vector callbacks are better than PCI interrupts to receive event
* channel notifications because we can receive vector callbacks on any
dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret);
goto out;
}
+ /*
+ * It doesn't strictly *have* to run on CPU0 but it sure
+ * as hell better process the event channel ports delivered
+ * to CPU0.
+ */
+ irq_set_affinity(pdev->irq, cpumask_of(0));
+
callback_via = get_callback_via(pdev);
ret = xen_set_callback_via(callback_via);
if (ret) {
ret = gnttab_init();
if (ret)
goto grant_out;
- xenbus_probe(NULL);
return 0;
grant_out:
gnttab_free_auto_xlat_frames();
return 0;
}
-static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata)
+static long privcmd_ioctl_mmap_resource(struct file *file,
+ struct privcmd_mmap_resource __user *udata)
{
struct privcmd_data *data = file->private_data;
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
struct privcmd_mmap_resource kdata;
xen_pfn_t *pfns = NULL;
- struct xen_mem_acquire_resource xdata;
+ struct xen_mem_acquire_resource xdata = { };
int rc;
if (copy_from_user(&kdata, udata, sizeof(kdata)))
if (data->domid != DOMID_INVALID && data->domid != kdata.dom)
return -EPERM;
+ /* Both fields must be set or unset */
+ if (!!kdata.addr != !!kdata.num)
+ return -EINVAL;
+
+ xdata.domid = kdata.dom;
+ xdata.type = kdata.type;
+ xdata.id = kdata.id;
+
+ if (!kdata.addr && !kdata.num) {
+ /* Query the size of the resource. */
+ rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xdata);
+ if (rc)
+ return rc;
+ return __put_user(xdata.nr_frames, &udata->num);
+ }
+
mmap_write_lock(mm);
vma = find_vma(mm, kdata.addr);
} else
vma->vm_private_data = PRIV_VMA_LOCKED;
- memset(&xdata, 0, sizeof(xdata));
- xdata.domid = kdata.dom;
- xdata.type = kdata.type;
- xdata.id = kdata.id;
xdata.frame = kdata.idx;
xdata.nr_frames = kdata.num;
set_xen_guest_handle(xdata.frame_list, pfns);
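
For illustration, a hedged userspace sketch of the two-call convention this enables (the header path, domid and resource type are assumptions, and it needs a Xen host to do anything): a call with addr == 0 and num == 0 queries the resource size, which the kernel writes back into num.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xen/privcmd.h>	/* assumed install path of the uapi header */

static int query_resource_size(int fd, unsigned int dom, unsigned int type,
			       unsigned int id, unsigned long long *nr_frames)
{
	struct privcmd_mmap_resource req;

	memset(&req, 0, sizeof(req));
	req.dom = dom;
	req.type = type;
	req.id = id;
	/* addr == 0 && num == 0 selects the new "query size" path. */
	if (ioctl(fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &req) < 0)
		return -1;
	*nr_frames = req.num;	/* written back via __put_user() above */
	return 0;
}

int main(void)
{
	int fd = open("/dev/xen/privcmd", O_RDWR);
	unsigned long long n;

	if (fd >= 0 && !query_resource_size(fd, 1 /* domid: illustrative */,
					    2 /* resource type: illustrative */,
					    0, &n))
		printf("resource has %llu frames\n", n);
	if (fd >= 0)
		close(fd);
	return 0;
}
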
const char *type,
const char *nodename);
int xenbus_probe_devices(struct xen_bus_type *bus);
+void xenbus_probe(void);
void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
static int xenbus_irq;
static struct task_struct *xenbus_task;
-static DECLARE_WORK(probe_work, xenbus_probe);
-
-
static irqreturn_t wake_waiting(int irq, void *unused)
{
- if (unlikely(xenstored_ready == 0)) {
- xenstored_ready = 1;
- schedule_work(&probe_work);
- }
-
wake_up(&xb_waitq);
return IRQ_HANDLED;
}
}
EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
-void xenbus_probe(struct work_struct *unused)
+void xenbus_probe(void)
{
xenstored_ready = 1;
+ /*
+ * In the HVM case, xenbus_init() deferred its call to
+ * xs_init() in case callbacks were not operational yet.
+ * So do it now.
+ */
+ if (xen_store_domain_type == XS_HVM)
+ xs_init();
+
/* Notify others that xenstore is up */
blocking_notifier_call_chain(&xenstore_chain, 0, NULL);
}
-EXPORT_SYMBOL_GPL(xenbus_probe);
-static int __init xenbus_probe_initcall(void)
+/*
+ * Returns true when XenStore init must be deferred in order to
+ * allow the PCI platform device to be initialised before we can
+ * actually have event channel interrupts working.
+ */
+static bool xs_hvm_defer_init_for_callback(void)
{
- if (!xen_domain())
- return -ENODEV;
+#ifdef CONFIG_XEN_PVHVM
+ return xen_store_domain_type == XS_HVM &&
+ !xen_have_vector_callback;
+#else
+ return false;
+#endif
+}
- if (xen_initial_domain() || xen_hvm_domain())
- return 0;
+static int __init xenbus_probe_initcall(void)
+{
+ /*
+ * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
+ * need to wait for the platform PCI device to come up.
+ */
+ if (xen_store_domain_type == XS_PV ||
+ (xen_store_domain_type == XS_HVM &&
+ !xs_hvm_defer_init_for_callback()))
+ xenbus_probe();
- xenbus_probe(NULL);
return 0;
}
-
device_initcall(xenbus_probe_initcall);
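
The resulting handshake can be reduced to a boolean model (a sketch of the control flow, not the kernel code): the initcall probes immediately unless the HVM deferral condition holds, and xen_set_callback_via() performs the deferred probe once the callback is wired up.

#include <stdbool.h>
#include <stdio.h>

static bool xenstored_ready;
static bool hvm_defer = true;	/* assumption: XS_HVM without vector callback */

static void probe(void)
{
	xenstored_ready = true;
	printf("xenbus probed\n");
}

static void initcall(void)
{
	if (!hvm_defer)
		probe();	/* XS_PV, or HVM with a working callback */
}

static void set_callback_via(void)
{
	if (!xenstored_ready && hvm_defer)
		probe();	/* the deferred probe happens here */
}

int main(void)
{
	initcall();		/* no-op: deferred */
	set_callback_via();	/* probes now */
	return 0;
}
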
+int xen_set_callback_via(uint64_t via)
+{
+ struct xen_hvm_param a;
+ int ret;
+
+ a.domid = DOMID_SELF;
+ a.index = HVM_PARAM_CALLBACK_IRQ;
+ a.value = via;
+
+ ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
+ if (ret)
+ return ret;
+
+ /*
+ * If xenbus_probe_initcall() deferred the xenbus_probe()
+ * due to the callback not functioning yet, we can do it now.
+ */
+ if (!xenstored_ready && xs_hvm_defer_init_for_callback())
+ xenbus_probe();
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xen_set_callback_via);
+
/* Set up event channel for xenstored which is run as a local process
* (this is normally used only in dom0)
*/
break;
}
- /* Initialize the interface to xenstore. */
- err = xs_init();
- if (err) {
- pr_warn("Error initializing xenstore comms: %i\n", err);
- goto out_error;
+ /*
+ * HVM domains may not have a functional callback yet. In that
+ * case let xs_init() be called from xenbus_probe(), which will
+ * get invoked at an appropriate time.
+ */
+ if (xen_store_domain_type != XS_HVM) {
+ err = xs_init();
+ if (err) {
+ pr_warn("Error initializing xenstore comms: %i\n", err);
+ goto out_error;
+ }
}
if ((xen_store_domain_type != XS_LOCAL) &&
root = list_first_entry(&fs_info->allocated_roots,
struct btrfs_root, leak_list);
btrfs_err(fs_info, "leaked root %s refcount %d",
- btrfs_root_name(root->root_key.objectid, buf),
+ btrfs_root_name(&root->root_key, buf),
refcount_read(&root->refs));
while (refcount_read(&root->refs) > 1)
btrfs_put_root(root);
static void extent_io_tree_panic(struct extent_io_tree *tree, int err)
{
- struct inode *inode = tree->private_data;
-
- btrfs_panic(btrfs_sb(inode->i_sb), err,
+ btrfs_panic(tree->fs_info, err,
"locking error: extent tree was modified by another thread while locked");
}
* some fairly slow code that needs optimization. This walks the list
* of all the inodes with pending delalloc and forces them to disk.
*/
-static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot,
+static int start_delalloc_inodes(struct btrfs_root *root,
+ struct writeback_control *wbc, bool snapshot,
bool in_reclaim_context)
{
struct btrfs_inode *binode;
struct list_head works;
struct list_head splice;
int ret = 0;
+ bool full_flush = wbc->nr_to_write == LONG_MAX;
INIT_LIST_HEAD(&works);
INIT_LIST_HEAD(&splice);
if (snapshot)
set_bit(BTRFS_INODE_SNAPSHOT_FLUSH,
&binode->runtime_flags);
- work = btrfs_alloc_delalloc_work(inode);
- if (!work) {
- iput(inode);
- ret = -ENOMEM;
- goto out;
- }
- list_add_tail(&work->list, &works);
- btrfs_queue_work(root->fs_info->flush_workers,
- &work->work);
- if (*nr != U64_MAX) {
- (*nr)--;
- if (*nr == 0)
+ if (full_flush) {
+ work = btrfs_alloc_delalloc_work(inode);
+ if (!work) {
+ iput(inode);
+ ret = -ENOMEM;
+ goto out;
+ }
+ list_add_tail(&work->list, &works);
+ btrfs_queue_work(root->fs_info->flush_workers,
+ &work->work);
+ } else {
+ ret = sync_inode(inode, wbc);
+ if (!ret &&
+ test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
+ &BTRFS_I(inode)->runtime_flags))
+ ret = sync_inode(inode, wbc);
+ btrfs_add_delayed_iput(inode);
+ if (ret || wbc->nr_to_write <= 0)
goto out;
}
cond_resched();
int btrfs_start_delalloc_snapshot(struct btrfs_root *root)
{
+ struct writeback_control wbc = {
+ .nr_to_write = LONG_MAX,
+ .sync_mode = WB_SYNC_NONE,
+ .range_start = 0,
+ .range_end = LLONG_MAX,
+ };
struct btrfs_fs_info *fs_info = root->fs_info;
- u64 nr = U64_MAX;
if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state))
return -EROFS;
- return start_delalloc_inodes(root, &nr, true, false);
+ return start_delalloc_inodes(root, &wbc, true, false);
}
int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr,
bool in_reclaim_context)
{
+ struct writeback_control wbc = {
+ .nr_to_write = (nr == U64_MAX) ? LONG_MAX : (unsigned long)nr,
+ .sync_mode = WB_SYNC_NONE,
+ .range_start = 0,
+ .range_end = LLONG_MAX,
+ };
struct btrfs_root *root;
struct list_head splice;
int ret;
spin_lock(&fs_info->delalloc_root_lock);
list_splice_init(&fs_info->delalloc_roots, &splice);
while (!list_empty(&splice) && nr) {
+ /*
+ * Reset nr_to_write here so we know that we're doing a full
+ * flush.
+ */
+ if (nr == U64_MAX)
+ wbc.nr_to_write = LONG_MAX;
+
root = list_first_entry(&splice, struct btrfs_root,
delalloc_root);
root = btrfs_grab_root(root);
&fs_info->delalloc_roots);
spin_unlock(&fs_info->delalloc_root_lock);
- ret = start_delalloc_inodes(root, &nr, false, in_reclaim_context);
+ ret = start_delalloc_inodes(root, &wbc, false, in_reclaim_context);
btrfs_put_root(root);
- if (ret < 0)
+ if (ret < 0 || wbc.nr_to_write <= 0)
goto out;
spin_lock(&fs_info->delalloc_root_lock);
}
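
A stand-alone model of the shared writeback budget introduced above (plain C, made-up page counts): each root draws from wbc.nr_to_write and the walk stops early once the budget is spent, instead of fully flushing every root.

#include <stdio.h>

struct wbc_model {
	long nr_to_write;	/* shared page budget */
};

static void flush_root(struct wbc_model *wbc, long dirty_pages)
{
	long n = dirty_pages < wbc->nr_to_write ? dirty_pages
						: wbc->nr_to_write;

	wbc->nr_to_write -= n;
	printf("flushed %ld pages, budget left %ld\n", n, wbc->nr_to_write);
}

int main(void)
{
	struct wbc_model wbc = { .nr_to_write = 64 };
	long roots[] = { 40, 40, 40 };
	int i;

	for (i = 0; i < 3 && wbc.nr_to_write > 0; i++)
		flush_root(&wbc, roots[i]);	/* stops after two roots */
	return 0;
}
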
{ BTRFS_DATA_RELOC_TREE_OBJECTID, "DATA_RELOC_TREE" },
};
-const char *btrfs_root_name(u64 objectid, char *buf)
+const char *btrfs_root_name(const struct btrfs_key *key, char *buf)
{
int i;
- if (objectid == BTRFS_TREE_RELOC_OBJECTID) {
+ if (key->objectid == BTRFS_TREE_RELOC_OBJECTID) {
snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN,
- "TREE_RELOC offset=%llu", objectid);
+ "TREE_RELOC offset=%llu", key->offset);
return buf;
}
for (i = 0; i < ARRAY_SIZE(root_map); i++) {
- if (root_map[i].id == objectid)
+ if (root_map[i].id == key->objectid)
return root_map[i].name;
}
- snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", objectid);
+ snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", key->objectid);
return buf;
}
void btrfs_print_leaf(struct extent_buffer *l);
void btrfs_print_tree(struct extent_buffer *c, bool follow);
-const char *btrfs_root_name(u64 objectid, char *buf);
+const char *btrfs_root_name(const struct btrfs_key *key, char *buf);
#endif
return 0;
for (i = 0; i < btrfs_header_nritems(leaf); i++) {
+ u8 type;
+
btrfs_item_key_to_cpu(leaf, &key, i);
if (key.type != BTRFS_EXTENT_DATA_KEY)
continue;
ei = btrfs_item_ptr(leaf, i, struct btrfs_file_extent_item);
- if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_REG &&
+ type = btrfs_file_extent_type(leaf, ei);
+
+ if ((type == BTRFS_FILE_EXTENT_REG ||
+ type == BTRFS_FILE_EXTENT_PREALLOC) &&
btrfs_file_extent_disk_bytenr(leaf, ei) == data_bytenr) {
found = true;
space_cache_ino = key.objectid;
loops = 0;
while ((delalloc_bytes || dio_bytes) && loops < 3) {
- btrfs_start_delalloc_roots(fs_info, items, true);
+ u64 nr_pages = min(delalloc_bytes, to_reclaim) >> PAGE_SHIFT;
+
+ btrfs_start_delalloc_roots(fs_info, nr_pages, true);
loops++;
if (wait_ordered && !trans) {
{
struct btrfs_fs_info *fs_info = leaf->fs_info;
u64 length;
+ u64 chunk_end;
u64 stripe_len;
u16 num_stripes;
u16 sub_stripes;
"invalid chunk length, have %llu", length);
return -EUCLEAN;
}
+ if (unlikely(check_add_overflow(logical, length, &chunk_end))) {
+ chunk_err(leaf, chunk, logical,
+"invalid chunk logical start and length, have logical start %llu length %llu",
+ logical, length);
+ return -EUCLEAN;
+ }
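
As an aside, check_add_overflow() boils down to the compiler's add-overflow builtin on recent GCC/Clang, so the same test can be exercised stand-alone:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t logical = UINT64_MAX - 4096, length = 8192, chunk_end;

	/* Same test as above: detect u64 wrap-around on logical + length. */
	if (__builtin_add_overflow(logical, length, &chunk_end))
		printf("invalid chunk: %llu + %llu overflows\n",
		       (unsigned long long)logical,
		       (unsigned long long)length);
	else
		printf("chunk end: %llu\n", (unsigned long long)chunk_end);
	return 0;
}
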
if (unlikely(!is_power_of_2(stripe_len) || stripe_len != BTRFS_STRIPE_LEN)) {
chunk_err(leaf, chunk, logical,
"invalid chunk stripe length: %llu",
if (!ses->binding) {
ses->capabilities = server->capabilities;
- if (linuxExtEnabled == 0)
+ if (!linuxExtEnabled)
ses->capabilities &= (~server->vals->cap_unix);
if (ses->auth_key.response) {
vi = find_vol(fullpath);
spin_unlock(&vol_list_lock);
- kref_put(&vi->refcnt, vol_release);
+ if (!IS_ERR(vi))
+ kref_put(&vi->refcnt, vol_release);
}
/**
int
smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx)
{
- int rc = 0;
-
memcpy(new_ctx, ctx, sizeof(*ctx));
new_ctx->prepath = NULL;
new_ctx->mount_options = NULL;
DUP_CTX_STR(nodename);
DUP_CTX_STR(iocharset);
- return rc;
+ return 0;
}
static int
free_rsp_buf(resp_buftype, rsp);
/* retry close in a worker thread if this one is interrupted */
- if (rc == -EINTR) {
+ if (is_interrupt_error(rc)) {
int tmp_rc;
tmp_rc = smb2_handle_cancelled_close(tcon, persistent_fid,
__le16 TransformCount;
__u16 Reserved1;
__u32 Reserved2;
- __le16 RDMATransformIds[1];
+ __le16 RDMATransformIds[];
} __packed;
/* Signing algorithms */
}
return err;
}
-
-int __ext4_handle_dirty_super(const char *where, unsigned int line,
- handle_t *handle, struct super_block *sb)
-{
- struct buffer_head *bh = EXT4_SB(sb)->s_sbh;
- int err = 0;
-
- ext4_superblock_csum_set(sb);
- if (ext4_handle_valid(handle)) {
- err = jbd2_journal_dirty_metadata(handle, bh);
- if (err)
- ext4_journal_abort_handle(where, line, __func__,
- bh, handle, err);
- } else
- mark_buffer_dirty(bh);
- return err;
-}
handle_t *handle, struct inode *inode,
struct buffer_head *bh);
-int __ext4_handle_dirty_super(const char *where, unsigned int line,
- handle_t *handle, struct super_block *sb);
-
#define ext4_journal_get_write_access(handle, bh) \
__ext4_journal_get_write_access(__func__, __LINE__, (handle), (bh))
#define ext4_forget(handle, is_metadata, inode, bh, block_nr) \
#define ext4_handle_dirty_metadata(handle, inode, bh) \
__ext4_handle_dirty_metadata(__func__, __LINE__, (handle), (inode), \
(bh))
-#define ext4_handle_dirty_super(handle, sb) \
- __ext4_handle_dirty_super(__func__, __LINE__, (handle), (sb))
handle_t *__ext4_journal_start_sb(struct super_block *sb, unsigned int line,
int type, int blocks, int rsv_blocks,
trace_ext4_fc_track_range(inode, start, end, ret);
}
-static void ext4_fc_submit_bh(struct super_block *sb)
+static void ext4_fc_submit_bh(struct super_block *sb, bool is_tail)
{
int write_flags = REQ_SYNC;
struct buffer_head *bh = EXT4_SB(sb)->s_fc_bh;
- /* TODO: REQ_FUA | REQ_PREFLUSH is unnecessarily expensive. */
- if (test_opt(sb, BARRIER))
+	/* Add REQ_FUA | REQ_PREFLUSH only for the tail block */
+ if (test_opt(sb, BARRIER) && is_tail)
write_flags |= REQ_FUA | REQ_PREFLUSH;
lock_buffer(bh);
set_buffer_dirty(bh);
*crc = ext4_chksum(sbi, *crc, tl, sizeof(*tl));
if (pad_len > 0)
ext4_fc_memzero(sb, tl + 1, pad_len, crc);
- ext4_fc_submit_bh(sb);
+ ext4_fc_submit_bh(sb, false);
ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh);
if (ret)
tail.fc_crc = cpu_to_le32(crc);
ext4_fc_memcpy(sb, dst, &tail.fc_crc, sizeof(tail.fc_crc), NULL);
- ext4_fc_submit_bh(sb);
+ ext4_fc_submit_bh(sb, true);
return 0;
}
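
A tiny model of the flag selection this introduces (the flag values are placeholders, not the block layer's): every fast-commit block is written REQ_SYNC, but only the tail pays for the cache flush and FUA.

#include <stdbool.h>
#include <stdio.h>

#define REQ_SYNC	(1 << 0)	/* placeholder values */
#define REQ_PREFLUSH	(1 << 1)
#define REQ_FUA		(1 << 2)

static int fc_write_flags(bool barrier, bool is_tail)
{
	int flags = REQ_SYNC;

	/* Only the commit's tail needs the (expensive) flush + FUA. */
	if (barrier && is_tail)
		flags |= REQ_FUA | REQ_PREFLUSH;
	return flags;
}

int main(void)
{
	printf("body: %#x\n", fc_write_flags(true, false));	/* 0x1 */
	printf("tail: %#x\n", fc_write_flags(true, true));	/* 0x7 */
	return 0;
}
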
list_splice_init(&sbi->s_fc_dentry_q[FC_Q_STAGING],
&sbi->s_fc_dentry_q[FC_Q_MAIN]);
list_splice_init(&sbi->s_fc_q[FC_Q_STAGING],
- &sbi->s_fc_q[FC_Q_STAGING]);
+ &sbi->s_fc_q[FC_Q_MAIN]);
ext4_clear_mount_flag(sb, EXT4_MF_FC_COMMITTING);
ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
entry.len = darg.dname_len;
inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(inode)) {
+ if (IS_ERR(inode)) {
jbd_debug(1, "Inode %d not found", darg.ino);
return 0;
}
old_parent = ext4_iget(sb, darg.parent_ino,
EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(old_parent)) {
+ if (IS_ERR(old_parent)) {
jbd_debug(1, "Dir with inode %d not found", darg.parent_ino);
iput(inode);
return 0;
darg.parent_ino, darg.dname_len);
inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(inode)) {
+ if (IS_ERR(inode)) {
jbd_debug(1, "Inode not found.");
return 0;
}
trace_ext4_fc_replay(sb, tag, ino, 0, 0);
inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
- if (!IS_ERR_OR_NULL(inode)) {
+ if (!IS_ERR(inode)) {
ext4_ext_clear_bb(inode);
iput(inode);
}
+ inode = NULL;
ext4_fc_record_modified_inode(sb, ino);
/* Given that we just wrote the inode on disk, this SHOULD succeed. */
inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(inode)) {
+ if (IS_ERR(inode)) {
jbd_debug(1, "Inode not found.");
return -EFSCORRUPTED;
}
goto out;
inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(inode)) {
+ if (IS_ERR(inode)) {
jbd_debug(1, "inode %d not found.", darg.ino);
inode = NULL;
ret = -EINVAL;
* dot and dot dot dirents are setup properly.
*/
dir = ext4_iget(sb, darg.parent_ino, EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(dir)) {
+ if (IS_ERR(dir)) {
jbd_debug(1, "Dir %d not found.", darg.ino);
goto out;
}
inode = ext4_iget(sb, le32_to_cpu(fc_add_ex->fc_ino),
EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(inode)) {
+ if (IS_ERR(inode)) {
jbd_debug(1, "Inode not found.");
return 0;
}
le32_to_cpu(lrange->fc_ino), cur, remaining);
inode = ext4_iget(sb, le32_to_cpu(lrange->fc_ino), EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(inode)) {
+ if (IS_ERR(inode)) {
jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange->fc_ino));
return 0;
}
for (i = 0; i < state->fc_modified_inodes_used; i++) {
inode = ext4_iget(sb, state->fc_modified_inodes[i],
EXT4_IGET_NORMAL);
- if (IS_ERR_OR_NULL(inode)) {
+ if (IS_ERR(inode)) {
jbd_debug(1, "Inode %d not found.",
state->fc_modified_inodes[i]);
continue;
if (ret > 0) {
path = ext4_find_extent(inode, map.m_lblk, NULL, 0);
- if (!IS_ERR_OR_NULL(path)) {
+ if (!IS_ERR(path)) {
for (j = 0; j < path->p_depth; j++)
ext4_mb_mark_bb(inode->i_sb,
path[j].p_block, 1, 1);
err = ext4_journal_get_write_access(handle, sbi->s_sbh);
if (err)
goto out_journal;
- strlcpy(sbi->s_es->s_last_mounted, cp,
+ lock_buffer(sbi->s_sbh);
+ strncpy(sbi->s_es->s_last_mounted, cp,
sizeof(sbi->s_es->s_last_mounted));
- ext4_handle_dirty_super(handle, sb);
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(sbi->s_sbh);
+ ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh);
out_journal:
ext4_journal_stop(handle);
out:
err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh);
if (err)
goto out_brelse;
+ lock_buffer(EXT4_SB(sb)->s_sbh);
ext4_set_feature_large_file(sb);
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(EXT4_SB(sb)->s_sbh);
ext4_handle_sync(handle);
- err = ext4_handle_dirty_super(handle, sb);
+ err = ext4_handle_dirty_metadata(handle, NULL,
+ EXT4_SB(sb)->s_sbh);
}
ext4_update_inode_fsync_trans(handle, inode, need_datasync);
out_brelse:
err = ext4_journal_get_write_access(handle, sbi->s_sbh);
if (err)
goto pwsalt_err_journal;
+ lock_buffer(sbi->s_sbh);
generate_random_uuid(sbi->s_es->s_encrypt_pw_salt);
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(sbi->s_sbh);
err = ext4_handle_dirty_metadata(handle, NULL,
sbi->s_sbh);
pwsalt_err_journal:
(le32_to_cpu(sbi->s_es->s_inodes_count))) {
/* Insert this inode at the head of the on-disk orphan list */
NEXT_ORPHAN(inode) = le32_to_cpu(sbi->s_es->s_last_orphan);
+ lock_buffer(sbi->s_sbh);
sbi->s_es->s_last_orphan = cpu_to_le32(inode->i_ino);
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(sbi->s_sbh);
dirty = true;
}
list_add(&EXT4_I(inode)->i_orphan, &sbi->s_orphan);
mutex_unlock(&sbi->s_orphan_lock);
if (dirty) {
- err = ext4_handle_dirty_super(handle, sb);
+ err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh);
rc = ext4_mark_iloc_dirty(handle, inode, &iloc);
if (!err)
err = rc;
mutex_unlock(&sbi->s_orphan_lock);
goto out_brelse;
}
+ lock_buffer(sbi->s_sbh);
sbi->s_es->s_last_orphan = cpu_to_le32(ino_next);
+ ext4_superblock_csum_set(inode->i_sb);
+ unlock_buffer(sbi->s_sbh);
mutex_unlock(&sbi->s_orphan_lock);
- err = ext4_handle_dirty_super(handle, inode->i_sb);
+ err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh);
} else {
struct ext4_iloc iloc2;
struct inode *i_prev =
return retval2;
}
}
- brelse(ent->bh);
- ent->bh = NULL;
-
return retval;
}
}
}
+ old_file_type = old.de->file_type;
if (IS_DIRSYNC(old.dir) || IS_DIRSYNC(new.dir))
ext4_handle_sync(handle);
force_reread = (new.dir->i_ino == old.dir->i_ino &&
ext4_test_inode_flag(new.dir, EXT4_INODE_INLINE_DATA));
- old_file_type = old.de->file_type;
if (whiteout) {
/*
* Do this before adding a new entry, so the old entry is sure
retval = 0;
end_rename:
- brelse(old.dir_bh);
- brelse(old.bh);
- brelse(new.bh);
if (whiteout) {
- if (retval)
+ if (retval) {
+ ext4_setent(handle, &old,
+ old.inode->i_ino, old_file_type);
drop_nlink(whiteout);
+ }
unlock_new_inode(whiteout);
iput(whiteout);
}
+ brelse(old.dir_bh);
+ brelse(old.bh);
+ brelse(new.bh);
if (handle)
ext4_journal_stop(handle);
return retval;
EXT4_SB(sb)->s_gdb_count++;
ext4_kvfree_array_rcu(o_group_desc);
+ lock_buffer(EXT4_SB(sb)->s_sbh);
le16_add_cpu(&es->s_reserved_gdt_blocks, -1);
- err = ext4_handle_dirty_super(handle, sb);
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(EXT4_SB(sb)->s_sbh);
+ err = ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh);
if (err)
ext4_std_error(sb, err);
return err;
reserved_blocks *= blocks_count;
do_div(reserved_blocks, 100);
+ lock_buffer(sbi->s_sbh);
ext4_blocks_count_set(es, ext4_blocks_count(es) + blocks_count);
ext4_free_blocks_count_set(es, ext4_free_blocks_count(es) + free_blocks);
le32_add_cpu(&es->s_inodes_count, EXT4_INODES_PER_GROUP(sb) *
* active. */
ext4_r_blocks_count_set(es, ext4_r_blocks_count(es) +
reserved_blocks);
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(sbi->s_sbh);
/* Update the free space counts */
percpu_counter_add(&sbi->s_freeclusters_counter,
ext4_update_super(sb, flex_gd);
- err = ext4_handle_dirty_super(handle, sb);
+ err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh);
exit_journal:
err2 = ext4_journal_stop(handle);
goto errout;
}
+ lock_buffer(EXT4_SB(sb)->s_sbh);
ext4_blocks_count_set(es, o_blocks_count + add);
ext4_free_blocks_count_set(es, ext4_free_blocks_count(es) + add);
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(EXT4_SB(sb)->s_sbh);
ext4_debug("freeing blocks %llu through %llu\n", o_blocks_count,
o_blocks_count + add);
/* We add the blocks to the bitmap and set the group need init bit */
err = ext4_group_add_blocks(handle, sb, o_blocks_count, add);
if (err)
goto errout;
- ext4_handle_dirty_super(handle, sb);
+ ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh);
ext4_debug("freed blocks %llu through %llu\n", o_blocks_count,
o_blocks_count + add);
errout:
if (err)
goto errout;
+ lock_buffer(sbi->s_sbh);
ext4_clear_feature_resize_inode(sb);
ext4_set_feature_meta_bg(sb);
sbi->s_es->s_first_meta_bg =
cpu_to_le32(num_desc_blocks(sb, sbi->s_groups_count));
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(sbi->s_sbh);
- err = ext4_handle_dirty_super(handle, sb);
+ err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh);
if (err) {
ext4_std_error(sb, err);
goto errout;
static int ext4_load_journal(struct super_block *, struct ext4_super_block *,
unsigned long journal_devnum);
static int ext4_show_options(struct seq_file *seq, struct dentry *root);
-static int ext4_commit_super(struct super_block *sb, int sync);
+static void ext4_update_super(struct super_block *sb);
+static int ext4_commit_super(struct super_block *sb);
static int ext4_mark_recovery_complete(struct super_block *sb,
struct ext4_super_block *es);
static int ext4_clear_journal_err(struct super_block *sb,
return EXT4_ERR_UNKNOWN;
}
-static void __save_error_info(struct super_block *sb, int error,
- __u32 ino, __u64 block,
- const char *func, unsigned int line)
+static void save_error_info(struct super_block *sb, int error,
+ __u32 ino, __u64 block,
+ const char *func, unsigned int line)
{
struct ext4_sb_info *sbi = EXT4_SB(sb);
- EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS;
- if (bdev_read_only(sb->s_bdev))
- return;
/* We default to EFSCORRUPTED error... */
if (error == 0)
error = EFSCORRUPTED;
spin_unlock(&sbi->s_error_lock);
}
-static void save_error_info(struct super_block *sb, int error,
- __u32 ino, __u64 block,
- const char *func, unsigned int line)
-{
- __save_error_info(sb, error, ino, block, func, line);
- if (!bdev_read_only(sb->s_bdev))
- ext4_commit_super(sb, 1);
-}
-
/* Deal with the reporting of failure conditions on a filesystem such as
* inconsistencies detected or read IO failures.
*
* used to deal with unrecoverable failures such as journal IO errors or ENOMEM
* at a critical moment in log management.
*/
-static void ext4_handle_error(struct super_block *sb, bool force_ro)
+static void ext4_handle_error(struct super_block *sb, bool force_ro, int error,
+ __u32 ino, __u64 block,
+ const char *func, unsigned int line)
{
journal_t *journal = EXT4_SB(sb)->s_journal;
+ bool continue_fs = !force_ro && test_opt(sb, ERRORS_CONT);
+ EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS;
if (test_opt(sb, WARN_ON_ERROR))
WARN_ON_ONCE(1);
- if (sb_rdonly(sb) || (!force_ro && test_opt(sb, ERRORS_CONT)))
+ if (!continue_fs && !sb_rdonly(sb)) {
+ ext4_set_mount_flag(sb, EXT4_MF_FS_ABORTED);
+ if (journal)
+ jbd2_journal_abort(journal, -EIO);
+ }
+
+ if (!bdev_read_only(sb->s_bdev)) {
+ save_error_info(sb, error, ino, block, func, line);
+ /*
+		 * If the fs should keep running, we need to write out the
+		 * superblock through the journal. Due to lock ordering
+		 * constraints, it may not be safe to do so right here, so we
+ * defer superblock flushing to a workqueue.
+ */
+ if (continue_fs)
+ schedule_work(&EXT4_SB(sb)->s_error_work);
+ else
+ ext4_commit_super(sb);
+ }
+
+ if (sb_rdonly(sb) || continue_fs)
return;
- ext4_set_mount_flag(sb, EXT4_MF_FS_ABORTED);
- if (journal)
- jbd2_journal_abort(journal, -EIO);
/*
* We force ERRORS_RO behavior when system is rebooting. Otherwise we
* could panic during 'reboot -f' as the underlying device got already
{
struct ext4_sb_info *sbi = container_of(work, struct ext4_sb_info,
s_error_work);
+ journal_t *journal = sbi->s_journal;
+ handle_t *handle;
- ext4_commit_super(sbi->s_sb, 1);
+ /*
+	 * If the journal is still running, we have to write out the
+	 * superblock through the journal to avoid colliding with other
+	 * journalled sb updates.
+	 *
+	 * We use jbd2 functions directly here to avoid recursing back
+	 * into the ext4 error handling code while handling previous errors.
+ */
+ if (!sb_rdonly(sbi->s_sb) && journal) {
+ handle = jbd2_journal_start(journal, 1);
+ if (IS_ERR(handle))
+ goto write_directly;
+ if (jbd2_journal_get_write_access(handle, sbi->s_sbh)) {
+ jbd2_journal_stop(handle);
+ goto write_directly;
+ }
+ ext4_update_super(sbi->s_sb);
+ if (jbd2_journal_dirty_metadata(handle, sbi->s_sbh)) {
+ jbd2_journal_stop(handle);
+ goto write_directly;
+ }
+ jbd2_journal_stop(handle);
+ return;
+ }
+write_directly:
+ /*
+	 * Writing through the journal failed. Write the sb directly to get
+	 * the error info out and hope for the best.
+ */
+ ext4_commit_super(sbi->s_sb);
}
#define ext4_error_ratelimit(sb) \
sb->s_id, function, line, current->comm, &vaf);
va_end(args);
}
- save_error_info(sb, error, 0, block, function, line);
- ext4_handle_error(sb, force_ro);
+ ext4_handle_error(sb, force_ro, error, 0, block, function, line);
}
void __ext4_error_inode(struct inode *inode, const char *function,
current->comm, &vaf);
va_end(args);
}
- save_error_info(inode->i_sb, error, inode->i_ino, block,
- function, line);
- ext4_handle_error(inode->i_sb, false);
+ ext4_handle_error(inode->i_sb, false, error, inode->i_ino, block,
+ function, line);
}
void __ext4_error_file(struct file *file, const char *function,
current->comm, path, &vaf);
va_end(args);
}
- save_error_info(inode->i_sb, EFSCORRUPTED, inode->i_ino, block,
- function, line);
- ext4_handle_error(inode->i_sb, false);
+ ext4_handle_error(inode->i_sb, false, EFSCORRUPTED, inode->i_ino, block,
+ function, line);
}
const char *ext4_decode_error(struct super_block *sb, int errno,
sb->s_id, function, line, errstr);
}
- save_error_info(sb, -errno, 0, 0, function, line);
- ext4_handle_error(sb, false);
+ ext4_handle_error(sb, false, -errno, 0, 0, function, line);
}
void __ext4_msg(struct super_block *sb,
if (test_opt(sb, ERRORS_CONT)) {
if (test_opt(sb, WARN_ON_ERROR))
WARN_ON_ONCE(1);
- __save_error_info(sb, EFSCORRUPTED, ino, block, function, line);
- schedule_work(&EXT4_SB(sb)->s_error_work);
+ EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS;
+ if (!bdev_read_only(sb->s_bdev)) {
+ save_error_info(sb, EFSCORRUPTED, ino, block, function,
+ line);
+ schedule_work(&EXT4_SB(sb)->s_error_work);
+ }
return;
}
ext4_unlock_group(sb, grp);
- save_error_info(sb, EFSCORRUPTED, ino, block, function, line);
- ext4_handle_error(sb, false);
+ ext4_handle_error(sb, false, EFSCORRUPTED, ino, block, function, line);
/*
* We only get here in the ERRORS_RO case; relocking the group
* may be dangerous, but nothing bad will happen since the
es->s_state = cpu_to_le16(sbi->s_mount_state);
}
if (!sb_rdonly(sb))
- ext4_commit_super(sb, 1);
+ ext4_commit_super(sb);
rcu_read_lock();
group_desc = rcu_dereference(sbi->s_group_desc);
if (sbi->s_journal)
ext4_set_feature_journal_needs_recovery(sb);
- err = ext4_commit_super(sb, 1);
+ err = ext4_commit_super(sb);
done:
if (test_opt(sb, DEBUG))
printk(KERN_INFO "[EXT4 FS bs=%lu, gc=%u, "
if (DUMMY_ENCRYPTION_ENABLED(sbi) && !sb_rdonly(sb) &&
!ext4_has_feature_encrypt(sb)) {
ext4_set_feature_encrypt(sb);
- ext4_commit_super(sb, 1);
+ ext4_commit_super(sb);
}
/*
es->s_journal_dev = cpu_to_le32(journal_devnum);
/* Make sure we flush the recovery flag to disk. */
- ext4_commit_super(sb, 1);
+ ext4_commit_super(sb);
}
return 0;
return err;
}
-static int ext4_commit_super(struct super_block *sb, int sync)
+/* Copy state of EXT4_SB(sb) into buffer for on-disk superblock */
+static void ext4_update_super(struct super_block *sb)
{
struct ext4_sb_info *sbi = EXT4_SB(sb);
- struct ext4_super_block *es = EXT4_SB(sb)->s_es;
- struct buffer_head *sbh = EXT4_SB(sb)->s_sbh;
- int error = 0;
-
- if (!sbh || block_device_ejected(sb))
- return error;
+ struct ext4_super_block *es = sbi->s_es;
+ struct buffer_head *sbh = sbi->s_sbh;
+ lock_buffer(sbh);
/*
* If the file system is mounted read-only, don't update the
* superblock write time. This avoids updating the superblock
if (!(sb->s_flags & SB_RDONLY))
ext4_update_tstamp(es, s_wtime);
es->s_kbytes_written =
- cpu_to_le64(EXT4_SB(sb)->s_kbytes_written +
+ cpu_to_le64(sbi->s_kbytes_written +
((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
- EXT4_SB(sb)->s_sectors_written_start) >> 1));
- if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeclusters_counter))
+ sbi->s_sectors_written_start) >> 1));
+ if (percpu_counter_initialized(&sbi->s_freeclusters_counter))
ext4_free_blocks_count_set(es,
- EXT4_C2B(EXT4_SB(sb), percpu_counter_sum_positive(
- &EXT4_SB(sb)->s_freeclusters_counter)));
- if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeinodes_counter))
+ EXT4_C2B(sbi, percpu_counter_sum_positive(
+ &sbi->s_freeclusters_counter)));
+ if (percpu_counter_initialized(&sbi->s_freeinodes_counter))
es->s_free_inodes_count =
cpu_to_le32(percpu_counter_sum_positive(
- &EXT4_SB(sb)->s_freeinodes_counter));
+ &sbi->s_freeinodes_counter));
/* Copy error information to the on-disk superblock */
spin_lock(&sbi->s_error_lock);
if (sbi->s_add_error_count > 0) {
}
spin_unlock(&sbi->s_error_lock);
- BUFFER_TRACE(sbh, "marking dirty");
ext4_superblock_csum_set(sb);
- if (sync)
- lock_buffer(sbh);
+ unlock_buffer(sbh);
+}
+
+static int ext4_commit_super(struct super_block *sb)
+{
+ struct buffer_head *sbh = EXT4_SB(sb)->s_sbh;
+ int error = 0;
+
+ if (!sbh || block_device_ejected(sb))
+ return error;
+
+ ext4_update_super(sb);
+
if (buffer_write_io_error(sbh) || !buffer_uptodate(sbh)) {
/*
* Oh, dear. A previous attempt to write the
clear_buffer_write_io_error(sbh);
set_buffer_uptodate(sbh);
}
+ BUFFER_TRACE(sbh, "marking dirty");
mark_buffer_dirty(sbh);
- if (sync) {
- unlock_buffer(sbh);
- error = __sync_dirty_buffer(sbh,
- REQ_SYNC | (test_opt(sb, BARRIER) ? REQ_FUA : 0));
- if (buffer_write_io_error(sbh)) {
- ext4_msg(sb, KERN_ERR, "I/O error while writing "
- "superblock");
- clear_buffer_write_io_error(sbh);
- set_buffer_uptodate(sbh);
- }
+ error = __sync_dirty_buffer(sbh,
+ REQ_SYNC | (test_opt(sb, BARRIER) ? REQ_FUA : 0));
+ if (buffer_write_io_error(sbh)) {
+ ext4_msg(sb, KERN_ERR, "I/O error while writing "
+ "superblock");
+ clear_buffer_write_io_error(sbh);
+ set_buffer_uptodate(sbh);
}
return error;
}
if (ext4_has_feature_journal_needs_recovery(sb) && sb_rdonly(sb)) {
ext4_clear_feature_journal_needs_recovery(sb);
- ext4_commit_super(sb, 1);
+ ext4_commit_super(sb);
}
out:
jbd2_journal_unlock_updates(journal);
EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS;
es->s_state |= cpu_to_le16(EXT4_ERROR_FS);
- ext4_commit_super(sb, 1);
+ ext4_commit_super(sb);
jbd2_journal_clear_err(journal);
jbd2_journal_update_sb_errno(journal);
ext4_clear_feature_journal_needs_recovery(sb);
}
- error = ext4_commit_super(sb, 1);
+ error = ext4_commit_super(sb);
out:
if (journal)
/* we rely on upper layer to stop further updates */
ext4_set_feature_journal_needs_recovery(sb);
}
- ext4_commit_super(sb, 1);
+ ext4_commit_super(sb);
return 0;
}
}
if (sbi->s_journal == NULL && !(old_sb_flags & SB_RDONLY)) {
- err = ext4_commit_super(sb, 1);
+ err = ext4_commit_super(sb);
if (err)
goto restore_opts;
}
BUFFER_TRACE(EXT4_SB(sb)->s_sbh, "get_write_access");
if (ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh) == 0) {
+ lock_buffer(EXT4_SB(sb)->s_sbh);
ext4_set_feature_xattr(sb);
- ext4_handle_dirty_super(handle, sb);
+ ext4_superblock_csum_set(sb);
+ unlock_buffer(EXT4_SB(sb)->s_sbh);
+ ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh);
}
}
unsigned cq_entries;
unsigned cq_mask;
atomic_t cq_timeouts;
+ unsigned cq_last_tm_flush;
unsigned long cq_check_overflow;
struct wait_queue_head cq_wait;
struct fasync_struct *cq_fasync;
static int __io_sq_thread_acquire_files(struct io_ring_ctx *ctx)
{
+ if (current->flags & PF_EXITING)
+ return -EFAULT;
+
if (!current->files) {
struct files_struct *files;
struct nsproxy *nsproxy;
{
struct mm_struct *mm;
+ if (current->flags & PF_EXITING)
+ return -EFAULT;
if (current->mm)
return 0;
static void io_flush_timeouts(struct io_ring_ctx *ctx)
{
- while (!list_empty(&ctx->timeout_list)) {
+ u32 seq;
+
+ if (list_empty(&ctx->timeout_list))
+ return;
+
+ seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+
+ do {
+ u32 events_needed, events_got;
struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
struct io_kiocb, timeout.list);
if (io_is_timeout_noseq(req))
break;
- if (req->timeout.target_seq != ctx->cached_cq_tail
- - atomic_read(&ctx->cq_timeouts))
+
+ /*
+ * Since seq can easily wrap around over time, subtract
+ * the last seq at which timeouts were flushed before comparing.
+ * Assuming not more than 2^31-1 events have happened since,
+ * these subtractions won't have wrapped, so we can check if
+ * target is in [last_seq, current_seq] by comparing the two.
+ */
+ events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
+ events_got = seq - ctx->cq_last_tm_flush;
+ if (events_got < events_needed)
break;
list_del_init(&req->timeout.list);
io_kill_timeout(req);
- }
+ } while (!list_empty(&ctx->timeout_list));
+
+ ctx->cq_last_tm_flush = seq;
}
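
The wrap-safe window test from the comment above can be checked in isolation; a minimal sketch assuming fewer than 2^31 events between flushes:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* With u32 sequence numbers that may wrap, "target lies within
 * (last_flush, current]" is tested by comparing the two unsigned
 * differences, which stay correct as long as fewer than 2^31 events
 * happened in between. */
static bool timeout_due(uint32_t target, uint32_t last_flush, uint32_t cur)
{
	uint32_t events_needed = target - last_flush;
	uint32_t events_got = cur - last_flush;

	return events_got >= events_needed;
}

int main(void)
{
	/* Works across the 2^32 wrap: last flush just before the wrap,
	 * current just after, one target in between and one beyond. */
	printf("%d\n", timeout_due(0xfffffff0u, 0xffffffe0u, 0x00000010u)); /* 1 */
	printf("%d\n", timeout_due(0x00000020u, 0xffffffe0u, 0x00000010u)); /* 0 */
	return 0;
}
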
static void io_commit_cqring(struct io_ring_ctx *ctx)
tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
req->timeout.target_seq = tail + off;
+ /* Update the last seq here in case io_flush_timeouts() hasn't.
+ * This is safe because ->completion_lock is held, and submissions
+ * and completions are never mixed in the same ->completion_lock section.
+ */
+ ctx->cq_last_tm_flush = tail;
+
/*
* Insertion sort, ensuring the first entry in the list is always
* the one we need first.
if (sqt_spin || !time_after(jiffies, timeout)) {
io_run_task_work();
+ io_sq_thread_drop_mm_files();
cond_resched();
if (sqt_spin)
timeout = jiffies + sqd->sq_thread_idle;
}
io_run_task_work();
+ io_sq_thread_drop_mm_files();
if (cur_css)
io_sq_thread_unassociate_blkcg();
mutex_unlock(&ctx->uring_lock);
/* make sure callers enter the ring to get error */
- io_ring_set_wakeup_flag(ctx);
+ if (ctx->rings)
+ io_ring_set_wakeup_flag(ctx);
}
/*
finish_wait(&tctx->wait, &wait);
} while (1);
+ finish_wait(&tctx->wait, &wait);
atomic_dec(&tctx->in_idle);
io_uring_remove_task_files(tctx);
*/
ret = io_uring_install_fd(ctx, file);
if (ret < 0) {
+ io_disable_sqo_submit(ctx);
/* fput will clean it up */
fput(file);
return ret;
{
struct mount *mnt = real_mount(path->mnt);
- if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
- return -EINVAL;
if (!may_mount())
return -EPERM;
if (path->dentry != path->mnt->mnt_root)
return 0;
}
+// caller is responsible for flags being sane
int path_umount(struct path *path, int flags)
{
struct mount *mnt = real_mount(path->mnt);
struct path path;
int ret;
+ // basic validity checks done first
+ if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
+ return -EINVAL;
+
if (!(flags & UMOUNT_NOFOLLOW))
lookup_flags |= LOOKUP_FOLLOW;
ret = user_path_at(AT_FDCWD, name, lookup_flags, &path);
const struct nfs_fh *fhandle)
{
struct nfs_delegation *delegation;
- struct inode *freeme, *res = NULL;
+ struct super_block *freeme = NULL;
+ struct inode *res = NULL;
list_for_each_entry_rcu(delegation, &server->delegations, super_list) {
spin_lock(&delegation->lock);
if (delegation->inode != NULL &&
!test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) &&
nfs_compare_fh(fhandle, &NFS_I(delegation->inode)->fh) == 0) {
- freeme = igrab(delegation->inode);
- if (freeme && nfs_sb_active(freeme->i_sb))
- res = freeme;
+ if (nfs_sb_active(server->super)) {
+ freeme = server->super;
+ res = igrab(delegation->inode);
+ }
spin_unlock(&delegation->lock);
if (res != NULL)
return res;
if (freeme) {
rcu_read_unlock();
- iput(freeme);
+ nfs_sb_deactive(freeme);
rcu_read_lock();
}
return ERR_PTR(-EAGAIN);
} clone_data;
};
-#define nfs_errorf(fc, fmt, ...) errorf(fc, fmt, ## __VA_ARGS__)
-#define nfs_invalf(fc, fmt, ...) invalf(fc, fmt, ## __VA_ARGS__)
-#define nfs_warnf(fc, fmt, ...) warnf(fc, fmt, ## __VA_ARGS__)
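+/*
+ * Log through the fs_context log when one is attached (the new mount API
+ * case); otherwise fall back to the dprintk()/dfprintk() debug macros.
+ */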
+#define nfs_errorf(fc, fmt, ...) ((fc)->log.log ? \
+ errorf(fc, fmt, ## __VA_ARGS__) : \
+ ({ dprintk(fmt "\n", ## __VA_ARGS__); }))
+
+#define nfs_ferrorf(fc, fac, fmt, ...) ((fc)->log.log ? \
+ errorf(fc, fmt, ## __VA_ARGS__) : \
+ ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); }))
+
+#define nfs_invalf(fc, fmt, ...) ((fc)->log.log ? \
+ invalf(fc, fmt, ## __VA_ARGS__) : \
+ ({ dprintk(fmt "\n", ## __VA_ARGS__); -EINVAL; }))
+
+#define nfs_finvalf(fc, fac, fmt, ...) ((fc)->log.log ? \
+ invalf(fc, fmt, ## __VA_ARGS__) : \
+ ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); -EINVAL; }))
+
+#define nfs_warnf(fc, fmt, ...) ((fc)->log.log ? \
+ warnf(fc, fmt, ## __VA_ARGS__) : \
+ ({ dprintk(fmt "\n", ## __VA_ARGS__); }))
+
+#define nfs_fwarnf(fc, fac, fmt, ...) ((fc)->log.log ? \
+ warnf(fc, fmt, ## __VA_ARGS__) : \
+ ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); }))
static inline struct nfs_fs_context *nfs_fc2context(const struct fs_context *fc)
{
static inline struct inode *nfs_igrab_and_active(struct inode *inode)
{
- inode = igrab(inode);
- if (inode != NULL && !nfs_sb_active(inode->i_sb)) {
- iput(inode);
- inode = NULL;
+ struct super_block *sb = inode->i_sb;
+
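+	/* take the superblock reference before grabbing the inode; if igrab() fails, only the sb reference needs dropping */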
+ if (sb && nfs_sb_active(sb)) {
+ if (igrab(inode))
+ return inode;
+ nfs_sb_deactive(sb);
}
- return inode;
+ return NULL;
}
static inline void nfs_iput_and_deactive(struct inode *inode)
trace_nfs4_close(state, &calldata->arg, &calldata->res, task->tk_status);
/* Handle Layoutreturn errors */
- if (pnfs_roc_done(task, calldata->inode,
- &calldata->arg.lr_args,
- &calldata->res.lr_res,
- &calldata->res.lr_ret) == -EAGAIN)
+ if (pnfs_roc_done(task, &calldata->arg.lr_args, &calldata->res.lr_res,
+ &calldata->res.lr_ret) == -EAGAIN)
goto out_restart;
/* hmm. we are done with the inode, and in the process of freeing
trace_nfs4_delegreturn_exit(&data->args, &data->res, task->tk_status);
/* Handle Layoutreturn errors */
- if (pnfs_roc_done(task, data->inode,
- &data->args.lr_args,
- &data->res.lr_res,
- &data->res.lr_ret) == -EAGAIN)
+ if (pnfs_roc_done(task, &data->args.lr_args, &data->res.lr_res,
+ &data->res.lr_ret) == -EAGAIN)
goto out_restart;
switch (task->tk_status) {
struct nfs4_delegreturndata *data = calldata;
struct inode *inode = data->inode;
+ if (data->lr.roc)
+ pnfs_roc_release(&data->lr.arg, &data->lr.res,
+ data->res.lr_ret);
if (inode) {
- if (data->lr.roc)
- pnfs_roc_release(&data->lr.arg, &data->lr.res,
- data->res.lr_ret);
nfs_post_op_update_inode_force_wcc(inode, &data->fattr);
nfs_iput_and_deactive(inode);
}
nfs_fattr_init(data->res.fattr);
data->timestamp = jiffies;
data->rpc_status = 0;
- data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res, cred);
data->inode = nfs_igrab_and_active(inode);
- if (data->inode) {
+ if (data->inode || issync) {
+ data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res,
+ cred);
if (data->lr.roc) {
data->args.lr_args = &data->lr.arg;
data->res.lr_res = &data->lr.res;
}
- } else if (data->lr.roc) {
- pnfs_roc_release(&data->lr.arg, &data->lr.res, 0);
- data->lr.roc = false;
}
task_setup_data.callback_data = data;
data->arg.new_lock_owner, ret);
} else
data->cancelled = true;
+ trace_nfs4_set_lock(fl, state, &data->res.stateid, cmd, ret);
rpc_put_task(task);
dprintk("%s: done, ret = %d!\n", __func__, ret);
- trace_nfs4_set_lock(fl, state, &data->res.stateid, cmd, ret);
return ret;
}
fc, ctx->nfs_server.hostname,
ctx->nfs_server.export_path);
if (err) {
- nfs_errorf(fc, "NFS4: Couldn't follow remote path");
+ nfs_ferrorf(fc, MOUNT, "NFS4: Couldn't follow remote path");
dfprintk(MOUNT, "<-- nfs4_try_get_tree() = %d [error]\n", err);
} else {
dfprintk(MOUNT, "<-- nfs4_try_get_tree() = 0\n");
fc, ctx->nfs_server.hostname,
ctx->nfs_server.export_path);
if (err) {
- nfs_errorf(fc, "NFS4: Couldn't follow remote path");
+ nfs_ferrorf(fc, MOUNT, "NFS4: Couldn't follow remote path");
dfprintk(MOUNT, "<-- nfs4_get_referral_tree() = %d [error]\n", err);
} else {
dfprintk(MOUNT, "<-- nfs4_get_referral_tree() = 0\n");
LIST_HEAD(freeme);
spin_lock(&inode->i_lock);
- if (!pnfs_layout_is_valid(lo) || !arg_stateid ||
+ if (!pnfs_layout_is_valid(lo) ||
!nfs4_stateid_match_other(&lo->plh_stateid, arg_stateid))
goto out_unlock;
if (stateid) {
return false;
}
-int pnfs_roc_done(struct rpc_task *task, struct inode *inode,
- struct nfs4_layoutreturn_args **argpp,
- struct nfs4_layoutreturn_res **respp,
- int *ret)
+int pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp,
+ struct nfs4_layoutreturn_res **respp, int *ret)
{
struct nfs4_layoutreturn_args *arg = *argpp;
int retval = -EAGAIN;
return 0;
case -NFS4ERR_OLD_STATEID:
if (!nfs4_layout_refresh_old_stateid(&arg->stateid,
- &arg->range, inode))
+ &arg->range, arg->inode))
break;
*ret = -NFS4ERR_NOMATCHING_LAYOUT;
return -EAGAIN;
int ret)
{
struct pnfs_layout_hdr *lo = args->layout;
- const nfs4_stateid *arg_stateid = NULL;
+ struct inode *inode = args->inode;
const nfs4_stateid *res_stateid = NULL;
struct nfs4_xdr_opaque_data *ld_private = args->ld_private;
switch (ret) {
case -NFS4ERR_NOMATCHING_LAYOUT:
+ spin_lock(&inode->i_lock);
+ if (pnfs_layout_is_valid(lo) &&
+ nfs4_stateid_match_other(&args->stateid, &lo->plh_stateid))
+ pnfs_set_plh_return_info(lo, args->range.iomode, 0);
+ pnfs_clear_layoutreturn_waitbit(lo);
+ spin_unlock(&inode->i_lock);
break;
case 0:
if (res->lrs_present)
res_stateid = &res->stateid;
fallthrough;
default:
- arg_stateid = &args->stateid;
+ pnfs_layoutreturn_free_lsegs(lo, &args->stateid, &args->range,
+ res_stateid);
}
trace_nfs4_layoutreturn_on_close(args->inode, &args->stateid, ret);
- pnfs_layoutreturn_free_lsegs(lo, arg_stateid, &args->range,
- res_stateid);
if (ld_private && ld_private->ops && ld_private->ops->free)
ld_private->ops->free(ld_private);
pnfs_put_layout_hdr(lo);
goto lookup_again;
}
+ /*
+ * Because we free lsegs when sending LAYOUTRETURN, we need to wait
+ * for LAYOUTRETURN.
+ */
+ if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) {
+ spin_unlock(&ino->i_lock);
+ dprintk("%s wait for layoutreturn\n", __func__);
+ lseg = ERR_PTR(pnfs_prepare_to_retry_layoutget(lo));
+ if (!IS_ERR(lseg)) {
+ pnfs_put_layout_hdr(lo);
+ dprintk("%s retrying\n", __func__);
+ trace_pnfs_update_layout(ino, pos, count, iomode, lo,
+ lseg,
+ PNFS_UPDATE_LAYOUT_RETRY);
+ goto lookup_again;
+ }
+ trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
+ PNFS_UPDATE_LAYOUT_RETURN);
+ goto out_put_layout_hdr;
+ }
+
lseg = pnfs_find_lseg(lo, &arg, strict_iomode);
if (lseg) {
trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
nfs4_stateid_copy(&stateid, &lo->plh_stateid);
}
- /*
- * Because we free lsegs before sending LAYOUTRETURN, we need to wait
- * for LAYOUTRETURN even if first is true.
- */
- if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) {
- spin_unlock(&ino->i_lock);
- dprintk("%s wait for layoutreturn\n", __func__);
- lseg = ERR_PTR(pnfs_prepare_to_retry_layoutget(lo));
- if (!IS_ERR(lseg)) {
- if (first)
- pnfs_clear_first_layoutget(lo);
- pnfs_put_layout_hdr(lo);
- dprintk("%s retrying\n", __func__);
- trace_pnfs_update_layout(ino, pos, count, iomode, lo,
- lseg, PNFS_UPDATE_LAYOUT_RETRY);
- goto lookup_again;
- }
- trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
- PNFS_UPDATE_LAYOUT_RETURN);
- goto out_put_layout_hdr;
- }
-
if (pnfs_layoutgets_blocked(lo)) {
trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
PNFS_UPDATE_LAYOUT_BLOCKED);
&rng, GFP_KERNEL);
if (!lgp) {
pnfs_clear_first_layoutget(lo);
+ nfs_layoutget_end(lo);
pnfs_put_layout_hdr(lo);
return;
}
struct nfs4_layoutreturn_args *args,
struct nfs4_layoutreturn_res *res,
const struct cred *cred);
-int pnfs_roc_done(struct rpc_task *task, struct inode *inode,
- struct nfs4_layoutreturn_args **argpp,
- struct nfs4_layoutreturn_res **respp,
- int *ret);
+int pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp,
+ struct nfs4_layoutreturn_res **respp, int *ret);
void pnfs_roc_release(struct nfs4_layoutreturn_args *args,
struct nfs4_layoutreturn_res *res,
int ret);
}
static inline int
-pnfs_roc_done(struct rpc_task *task, struct inode *inode,
+pnfs_roc_done(struct rpc_task *task,
struct nfs4_layoutreturn_args **argpp,
struct nfs4_layoutreturn_res **respp,
int *ret)
pnfs_generic_clear_request_commit(struct nfs_page *req,
struct nfs_commit_info *cinfo)
{
- struct pnfs_layout_segment *freeme = NULL;
+ struct pnfs_commit_bucket *bucket = NULL;
if (!test_and_clear_bit(PG_COMMIT_TO_DS, &req->wb_flags))
goto out;
cinfo->ds->nwritten--;
- if (list_is_singular(&req->wb_list)) {
- struct pnfs_commit_bucket *bucket;
-
+ if (list_is_singular(&req->wb_list))
bucket = list_first_entry(&req->wb_list,
- struct pnfs_commit_bucket,
- written);
- freeme = pnfs_free_bucket_lseg(bucket);
- }
+ struct pnfs_commit_bucket, written);
out:
nfs_request_remove_commit_list(req, cinfo);
- pnfs_put_lseg(freeme);
+ if (bucket)
+ pnfs_put_lseg(pnfs_free_bucket_lseg(bucket));
}
EXPORT_SYMBOL_GPL(pnfs_generic_clear_request_commit);
struct pnfs_commit_bucket *bucket,
struct nfs_commit_info *cinfo)
{
+ struct pnfs_layout_segment *lseg;
struct list_head *pos;
list_for_each(pos, &bucket->committing)
cinfo->ds->ncommitting--;
list_splice_init(&bucket->committing, head);
- return pnfs_free_bucket_lseg(bucket);
+ lseg = pnfs_free_bucket_lseg(bucket);
+ if (!lseg)
+ lseg = pnfs_get_lseg(bucket->lseg);
+ return lseg;
}
static struct nfs_commit_data *
if (!data)
return NULL;
data->lseg = pnfs_bucket_get_committing(&data->pages, bucket, cinfo);
- if (!data->lseg)
- data->lseg = pnfs_get_lseg(bucket->lseg);
return data;
}
#include "pnfs.h"
#include "trace.h"
+static bool inter_copy_offload_enable;
+module_param(inter_copy_offload_enable, bool, 0644);
+MODULE_PARM_DESC(inter_copy_offload_enable,
+ "Enable inter server to server copy offload. Default: false");
+
#ifdef CONFIG_NFSD_V4_SECURITY_LABEL
#include <linux/security.h>
return p;
}
+static void *
+svcxdr_savemem(struct nfsd4_compoundargs *argp, __be32 *p, u32 len)
+{
+ __be32 *tmp;
+
+ /*
+ * The location of the decoded data item is stable,
+ * so @p is OK to use. This is the common case.
+ */
+ if (p != argp->xdr->scratch.iov_base)
+ return p;
+
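+	/* here @p points into the xdr scratch buffer, which a later decode will overwrite; preserve the item in tmpalloc'd memory */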
+ tmp = svcxdr_tmpalloc(argp, len);
+ if (!tmp)
+ return NULL;
+ memcpy(tmp, p, len);
+ return tmp;
+}
+
/*
* NFSv4 basic data type decoders
*/
p = xdr_inline_decode(argp->xdr, len);
if (!p)
return nfserr_bad_xdr;
- o->data = svcxdr_tmpalloc(argp, len);
+ o->data = svcxdr_savemem(argp, p, len);
if (!o->data)
return nfserr_jukebox;
o->len = len;
- memcpy(o->data, p, len);
return nfs_ok;
}
status = check_filename((char *)p, *lenp);
if (status)
return status;
- *namp = svcxdr_tmpalloc(argp, *lenp);
+ *namp = svcxdr_savemem(argp, p, *lenp);
if (!*namp)
return nfserr_jukebox;
- memcpy(*namp, p, *lenp);
return nfs_ok;
}
p = xdr_inline_decode(argp->xdr, putfh->pf_fhlen);
if (!p)
return nfserr_bad_xdr;
- putfh->pf_fhval = svcxdr_tmpalloc(argp, putfh->pf_fhlen);
+ putfh->pf_fhval = svcxdr_savemem(argp, p, putfh->pf_fhlen);
if (!putfh->pf_fhval)
return nfserr_jukebox;
- memcpy(putfh->pf_fhval, p, putfh->pf_fhlen);
return nfs_ok;
}
p = xdr_inline_decode(argp->xdr, setclientid->se_callback_netid_len);
if (!p)
return nfserr_bad_xdr;
- setclientid->se_callback_netid_val = svcxdr_tmpalloc(argp,
+ setclientid->se_callback_netid_val = svcxdr_savemem(argp, p,
setclientid->se_callback_netid_len);
if (!setclientid->se_callback_netid_val)
return nfserr_jukebox;
- memcpy(setclientid->se_callback_netid_val, p,
- setclientid->se_callback_netid_len);
if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_addr_len) < 0)
return nfserr_bad_xdr;
p = xdr_inline_decode(argp->xdr, setclientid->se_callback_addr_len);
if (!p)
return nfserr_bad_xdr;
- setclientid->se_callback_addr_val = svcxdr_tmpalloc(argp,
+ setclientid->se_callback_addr_val = svcxdr_savemem(argp, p,
setclientid->se_callback_addr_len);
if (!setclientid->se_callback_addr_val)
return nfserr_jukebox;
- memcpy(setclientid->se_callback_addr_val, p,
- setclientid->se_callback_addr_len);
if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_ident) < 0)
return nfserr_bad_xdr;
p = xdr_inline_decode(argp->xdr, verify->ve_attrlen);
if (!p)
return nfserr_bad_xdr;
- verify->ve_attrval = svcxdr_tmpalloc(argp, verify->ve_attrlen);
+ verify->ve_attrval = svcxdr_savemem(argp, p, verify->ve_attrlen);
if (!verify->ve_attrval)
return nfserr_jukebox;
- memcpy(verify->ve_attrval, p, verify->ve_attrlen);
return nfs_ok;
}
p = xdr_inline_decode(argp->xdr, argp->taglen);
if (!p)
return 0;
- argp->tag = svcxdr_tmpalloc(argp, argp->taglen);
+ argp->tag = svcxdr_savemem(argp, p, argp->taglen);
if (!argp->tag)
return 0;
- memcpy(argp->tag, p, argp->taglen);
max_reply += xdr_align_size(argp->taglen);
}
resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof);
if (nfserr)
return nfserr;
+ xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount));
tmp = htonl(NFS4_CONTENT_DATA);
write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4);
write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp64, 8);
tmp = htonl(*maxcount);
write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp, 4);
+
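+	/* zero the XDR pad bytes that round the content up to a 4-byte boundary */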
+ tmp = xdr_zero;
+ write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp,
+ xdr_pad_size(*maxcount));
return nfs_ok;
}
if (nfserr && segments == 0)
xdr_truncate_encode(xdr, starting_len);
else {
- tmp = htonl(eof);
- write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4);
- tmp = htonl(segments);
- write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
if (nfserr) {
xdr_truncate_encode(xdr, last_segment);
nfserr = nfs_ok;
+ eof = 0;
}
+ tmp = htonl(eof);
+ write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4);
+ tmp = htonl(segments);
+ write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
}
return nfserr;
#define NFSDDBG_FACILITY NFSDDBG_SVC
-bool inter_copy_offload_enable;
-EXPORT_SYMBOL_GPL(inter_copy_offload_enable);
-module_param(inter_copy_offload_enable, bool, 0644);
-MODULE_PARM_DESC(inter_copy_offload_enable,
- "Enable inter server to server copy offload. Default: false");
-
extern struct svc_program nfsd_program;
static int nfsd(void *vrqstp);
#if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
struct nfs_fh c_fh;
nfs4_stateid stateid;
};
-extern bool inter_copy_offload_enable;
struct nfsd4_seek {
/* request */
};
#ifdef CONFIG_MEM_SOFT_DIRTY
+
+#define is_cow_mapping(flags) (((flags) & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE)
+
+static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
+{
+ struct page *page;
+
+ if (!pte_write(pte))
+ return false;
+ if (!is_cow_mapping(vma->vm_flags))
+ return false;
+ if (likely(!atomic_read(&vma->vm_mm->has_pinned)))
+ return false;
+ page = vm_normal_page(vma, addr, pte);
+ if (!page)
+ return false;
+ return page_maybe_dma_pinned(page);
+}
+
static inline void clear_soft_dirty(struct vm_area_struct *vma,
unsigned long addr, pte_t *pte)
{
if (pte_present(ptent)) {
pte_t old_pte;
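+		/* clearing soft-dirty write-protects the PTE; skip pages that may be DMA-pinned, where a later COW could leave the pin pointing at the old page */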
+ if (pte_is_pinned(vma, addr, ptent))
+ return;
old_pte = ptep_modify_prot_start(vma, addr, pte);
ptent = pte_wrprotect(old_pte);
ptent = pte_clear_soft_dirty(ptent);
.type = type,
};
+ if (mmap_write_lock_killable(mm)) {
+ count = -EINTR;
+ goto out_mm;
+ }
if (type == CLEAR_REFS_MM_HIWATER_RSS) {
- if (mmap_write_lock_killable(mm)) {
- count = -EINTR;
- goto out_mm;
- }
-
/*
* Writing 5 to /proc/pid/clear_refs resets the peak
* resident set size to this mm's current rss value.
*/
reset_mm_hiwater_rss(mm);
- mmap_write_unlock(mm);
- goto out_mm;
+ goto out_unlock;
}
- if (mmap_read_lock_killable(mm)) {
- count = -EINTR;
- goto out_mm;
- }
tlb_gather_mmu(&tlb, mm, 0, -1);
if (type == CLEAR_REFS_SOFT_DIRTY) {
for (vma = mm->mmap; vma; vma = vma->vm_next) {
if (!(vma->vm_flags & VM_SOFTDIRTY))
continue;
- mmap_read_unlock(mm);
- if (mmap_write_lock_killable(mm)) {
- count = -EINTR;
- goto out_mm;
- }
- for (vma = mm->mmap; vma; vma = vma->vm_next) {
- vma->vm_flags &= ~VM_SOFTDIRTY;
- vma_set_page_prot(vma);
- }
- mmap_write_downgrade(mm);
- break;
+ vma->vm_flags &= ~VM_SOFTDIRTY;
+ vma_set_page_prot(vma);
}
mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY,
if (type == CLEAR_REFS_SOFT_DIRTY)
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb, 0, -1);
- mmap_read_unlock(mm);
+out_unlock:
+ mmap_write_unlock(mm);
out_mm:
mmput(mm);
}
* See Documentation/atomic_bitops.txt for details.
*/
-static inline void set_bit(unsigned int nr, volatile unsigned long *p)
+static __always_inline void set_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
}
-static inline void clear_bit(unsigned int nr, volatile unsigned long *p)
+static __always_inline void clear_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
}
-static inline void change_bit(unsigned int nr, volatile unsigned long *p)
+static __always_inline void change_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
/* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 */
#if GCC_VERSION < 40900
# error Sorry, your version of GCC is too old - please use 4.9 or newer.
+#elif defined(CONFIG_ARM64) && GCC_VERSION < 50100
+/*
+ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63293
+ * https://lore.kernel.org/r/20210107111841.GN1551@shell.armlinux.org.uk
+ */
+# error Sorry, your version of GCC is too old - please use 5.1 or newer.
#endif
/*
unsigned dm_bufio_get_block_size(struct dm_bufio_client *c);
sector_t dm_bufio_get_device_size(struct dm_bufio_client *c);
+struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c);
sector_t dm_bufio_get_block_number(struct dm_buffer *b);
void *dm_bufio_get_block_data(struct dm_buffer *b);
void *dm_bufio_get_aux_data(struct dm_buffer *b);
struct isa_driver {
int (*match)(struct device *, unsigned int);
int (*probe)(struct device *, unsigned int);
- int (*remove)(struct device *, unsigned int);
+ void (*remove)(struct device *, unsigned int);
void (*shutdown)(struct device *, unsigned int);
int (*suspend)(struct device *, unsigned int, pm_message_t);
int (*resume)(struct device *, unsigned int);
#define KASAN_SHADOW_INIT 0
#endif
+#ifndef PTE_HWTABLE_PTRS
+#define PTE_HWTABLE_PTRS 0
+#endif
+
extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
-extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
+extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS];
extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
{
struct mem_cgroup *memcg = page_memcg(page);
- VM_WARN_ON_ONCE_PAGE(!memcg, page);
+ VM_WARN_ON_ONCE_PAGE(!memcg && !mem_cgroup_disabled(), page);
return mem_cgroup_lruvec(memcg, pgdat);
}
static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
#endif
-bool arm_pmu_irq_is_nmi(void);
-
/* Internal functions only for core arm_pmu code */
struct arm_pmu *armpmu_alloc(void);
struct arm_pmu *armpmu_alloc_atomic(void);
static inline bool skb_frag_must_loop(struct page *p)
{
#if defined(CONFIG_HIGHMEM)
- if (PageHighMem(p))
+ if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) || PageHighMem(p))
return true;
#endif
return false;
struct sk_buff *root_skb;
struct sk_buff *cur_skb;
__u8 *frag_data;
+ __u32 frag_off;
};
void skb_prepare_seq_read(struct sk_buff *skb, unsigned int from,
size_t total_pcm_alloc_bytes; /* total amount of allocated buffers */
struct mutex memory_mutex; /* protection for the above */
+#ifdef CONFIG_SND_DEBUG
+ struct dentry *debugfs_root; /* debugfs root for card */
+#endif
#ifdef CONFIG_PM
unsigned int power_state; /* power state */
extern int snd_major;
extern int snd_ecards_limit;
extern struct class *sound_class;
+#ifdef CONFIG_SND_DEBUG
+extern struct dentry *sound_debugfs_root;
+#endif
void snd_request_card(int card);
char name[100];
unsigned int key[6]; /* Keep in sync with definitions above */
#endif /* CONFIG_SND_JACK_INPUT_DEV */
+ int hw_status_cache; /* last hardware status reported; restored when injection is disabled */
void *private_data;
void (*private_free)(struct snd_jack *);
};
struct work_struct;
-void xenbus_probe(struct work_struct *);
+void xenbus_probe(void);
#define XENBUS_IS_ERR_READ(str) ({ \
if (!IS_ERR(str) && strlen(str) == 0) { \
config KPROBE_EVENTS_ON_NOTRACE
bool "Do NOT protect notrace function from kprobe events"
depends on KPROBE_EVENTS
- depends on KPROBES_ON_FTRACE
+ depends on DYNAMIC_FTRACE
default n
help
This is only for the developers who want to debug ftrace itself
return 0;
}
-#if defined(CONFIG_KPROBES_ON_FTRACE) && \
+#if defined(CONFIG_DYNAMIC_FTRACE) && \
!defined(CONFIG_KPROBE_EVENTS_ON_NOTRACE)
static bool __within_notrace_func(unsigned long addr)
{
(const struct compat_iovec __user *)uvec;
int ret = -EFAULT, i;
- if (!user_access_begin(uvec, nr_segs * sizeof(*uvec)))
+ if (!user_access_begin(uiov, nr_segs * sizeof(*uiov)))
return -EFAULT;
for (i = 0; i < nr_segs; i++) {
* So we need to block hugepage fault by PG_hwpoison bit check.
*/
if (unlikely(PageHWPoison(page))) {
- ret = VM_FAULT_HWPOISON |
+ ret = VM_FAULT_HWPOISON_LARGE |
VM_FAULT_SET_HINDEX(hstate_index(h));
goto backout_unlocked;
}
return false;
}
#endif
-pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss;
+pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS]
+ __page_aligned_bss;
static inline bool kasan_pte_table(pmd_t pmd)
{
goto retry;
}
} else if (ret == -EIO) {
- pr_info("%s: %#lx: unknown page type: %lx (%pGP)\n",
+ pr_info("%s: %#lx: unknown page type: %lx (%pGp)\n",
__func__, pfn, page->flags, &page->flags);
}
const nodemask_t *to, int flags)
{
int busy = 0;
- int err;
+ int err = 0;
nodemask_t tmp;
migrate_prep();
{
struct page *page;
-#ifdef CONFIG_CMA
- /*
- * Balance movable allocations between regular and CMA areas by
- * allocating from CMA when over half of the zone's free memory
- * is in the CMA area.
- */
- if (alloc_flags & ALLOC_CMA &&
- zone_page_state(zone, NR_FREE_CMA_PAGES) >
- zone_page_state(zone, NR_FREE_PAGES) / 2) {
- page = __rmqueue_cma_fallback(zone, order);
- if (page)
- return page;
+ if (IS_ENABLED(CONFIG_CMA)) {
+ /*
+ * Balance movable allocations between regular and CMA areas by
+ * allocating from CMA when over half of the zone's free memory
+ * is in the CMA area.
+ */
+ if (alloc_flags & ALLOC_CMA &&
+ zone_page_state(zone, NR_FREE_CMA_PAGES) >
+ zone_page_state(zone, NR_FREE_PAGES) / 2) {
+ page = __rmqueue_cma_fallback(zone, order);
+ if (page)
+ goto out;
+ }
}
-#endif
retry:
page = __rmqueue_smallest(zone, order, migratetype);
if (unlikely(!page)) {
alloc_flags))
goto retry;
}
-
- trace_mm_page_alloc_zone_locked(page, order, migratetype);
+out:
+ if (page)
+ trace_mm_page_alloc_zone_locked(page, order, migratetype);
return page;
}
#include <linux/mm.h>
#include <linux/uio.h>
#include <linux/sched.h>
+#include <linux/compat.h>
#include <linux/sched/mm.h>
#include <linux/highmem.h>
#include <linux/ptrace.h>
t = acquire_slab(s, n, page, object == NULL, &objects);
if (!t)
- break;
+ continue; /* cmpxchg raced */
available += objects;
if (!object) {
return NULL;
}
- if (flags & VM_MAP_PUT_PAGES)
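+	/* with VM_MAP_PUT_PAGES, vfree() releases the pages via area->pages, so nr_pages must be filled in as well */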
+ if (flags & VM_MAP_PUT_PAGES) {
area->pages = pages;
+ area->nr_pages = count;
+ }
return area->addr;
}
EXPORT_SYMBOL(vmap);
if (!PageSwapCache(page)) {
if (!(sc->gfp_mask & __GFP_IO))
goto keep_locked;
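+		/* pages that may be DMA-pinned can still be written through the pin; keep them out of the swap cache */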
+ if (page_maybe_dma_pinned(page))
+ goto keep_locked;
if (PageTransHuge(page)) {
/* cannot split THP, skip it */
if (!can_split_huge_page(page, NULL))
return 0;
out_free_newdev:
- if (new_dev->reg_state == NETREG_UNINITIALIZED ||
- new_dev->reg_state == NETREG_UNREGISTERED)
- free_netdev(new_dev);
+ free_netdev(new_dev);
return err;
}
if (peer)
return -EOPNOTSUPP;
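+	/* zero the whole sockaddr so uninitialized padding cannot leak kernel stack to userspace */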
+ memset(addr, 0, sizeof(*addr));
addr->can_family = AF_CAN;
addr->can_ifindex = so->ifindex;
addr->can_addr.tp.rx_id = so->rxid;
}
}
- if ((features & NETIF_F_HW_TLS_TX) && !(features & NETIF_F_HW_CSUM)) {
- netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
- features &= ~NETIF_F_HW_TLS_TX;
+ if (features & NETIF_F_HW_TLS_TX) {
+ bool ip_csum = (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) ==
+ (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
+ bool hw_csum = features & NETIF_F_HW_CSUM;
+
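+		/* TLS TX offload requires the device to checksum the encrypted frames: either full NETIF_F_HW_CSUM or both protocol checksum offloads */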
+ if (!ip_csum && !hw_csum) {
+ netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
+ features &= ~NETIF_F_HW_TLS_TX;
+ }
}
return features;
ret = call_netdevice_notifiers(NETDEV_REGISTER, dev);
ret = notifier_to_errno(ret);
if (ret) {
+ /* Expect explicit free_netdev() on failure */
+ dev->needs_free_netdev = false;
rollback_registered(dev);
- rcu_barrier();
-
- dev->reg_state = NETREG_UNREGISTERED;
- /* We should put the kobject that hold in
- * netdev_unregister_kobject(), otherwise
- * the net device cannot be freed when
- * driver calls free_netdev(), because the
- * kobject is being hold.
- */
- kobject_put(&dev->dev.kobj);
+ net_set_todo(dev);
+ goto out;
}
/*
* Prevent userspace races by waiting until the network
struct napi_struct *p, *n;
might_sleep();
+
+ /* When called immediately after register_netdevice() failed, the unwind
+ * handling may still be dismantling the device. Handle that case by
+ * deferring the free.
+ */
+ if (dev->reg_state == NETREG_UNREGISTERING) {
+ ASSERT_RTNL();
+ dev->needs_free_netdev = true;
+ return;
+ }
+
netif_free_tx_queues(dev);
netif_free_rx_queues(dev);
dev->ifindex = ifm->ifi_index;
- if (ops->newlink) {
+ if (ops->newlink)
err = ops->newlink(link_net ? : net, dev, tb, data, extack);
- /* Drivers should call free_netdev() in ->destructor
- * and unregister it on failure after registration
- * so that device could be finally freed in rtnl_unlock.
- */
- if (err < 0) {
- /* If device is not registered at all, free it now */
- if (dev->reg_state == NETREG_UNINITIALIZED ||
- dev->reg_state == NETREG_UNREGISTERED)
- free_netdev(dev);
- goto out;
- }
- } else {
+ else
err = register_netdevice(dev);
- if (err < 0) {
- free_netdev(dev);
- goto out;
- }
+ if (err < 0) {
+ free_netdev(dev);
+ goto out;
}
+
err = rtnl_configure_link(dev, ifm);
if (err < 0)
goto out_unregister;
struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
gfp_t gfp_mask)
{
- struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+ struct napi_alloc_cache *nc;
struct sk_buff *skb;
void *data;
len += NET_SKB_PAD + NET_IP_ALIGN;
- if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
+ /* If requested length is either too small or too big,
+ * we use kmalloc() for skb->head allocation.
+ */
+ if (len <= SKB_WITH_OVERHEAD(1024) ||
+ len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
(gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
if (!skb)
goto skb_success;
}
+ nc = this_cpu_ptr(&napi_alloc_cache);
len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
len = SKB_DATA_ALIGN(len);
st->root_skb = st->cur_skb = skb;
st->frag_idx = st->stepped_offset = 0;
st->frag_data = NULL;
+ st->frag_off = 0;
}
EXPORT_SYMBOL(skb_prepare_seq_read);
st->stepped_offset += skb_headlen(st->cur_skb);
while (st->frag_idx < skb_shinfo(st->cur_skb)->nr_frags) {
+ unsigned int pg_idx, pg_off, pg_sz;
+
frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx];
- block_limit = skb_frag_size(frag) + st->stepped_offset;
+ pg_idx = 0;
+ pg_off = skb_frag_off(frag);
+ pg_sz = skb_frag_size(frag);
+
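+		/* frags that cannot stay mapped (see skb_frag_must_loop()) are walked one page at a time: clamp this step to the current page */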
+ if (skb_frag_must_loop(skb_frag_page(frag))) {
+ pg_idx = (pg_off + st->frag_off) >> PAGE_SHIFT;
+ pg_off = offset_in_page(pg_off + st->frag_off);
+ pg_sz = min_t(unsigned int, pg_sz - st->frag_off,
+ PAGE_SIZE - pg_off);
+ }
+
+ block_limit = pg_sz + st->stepped_offset;
if (abs_offset < block_limit) {
if (!st->frag_data)
- st->frag_data = kmap_atomic(skb_frag_page(frag));
+ st->frag_data = kmap_atomic(skb_frag_page(frag) + pg_idx);
- *data = (u8 *) st->frag_data + skb_frag_off(frag) +
+ *data = (u8 *)st->frag_data + pg_off +
(abs_offset - st->stepped_offset);
return block_limit - abs_offset;
st->frag_data = NULL;
}
- st->frag_idx++;
- st->stepped_offset += skb_frag_size(frag);
+ st->stepped_offset += pg_sz;
+ st->frag_off += pg_sz;
+ if (st->frag_off == skb_frag_size(frag)) {
+ st->frag_off = 0;
+ st->frag_idx++;
+ }
}
if (st->frag_data) {
unsigned int delta_truesize = 0;
unsigned int delta_len = 0;
struct sk_buff *tail = NULL;
- struct sk_buff *nskb;
+ struct sk_buff *nskb, *tmp;
+ int err;
skb_push(skb, -skb_network_offset(skb) + offset);
nskb = list_skb;
list_skb = list_skb->next;
+ err = 0;
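+	/* a shared skb must not be modified in place; work on a private (uncloned) copy before linking it into the segment list */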
+ if (skb_shared(nskb)) {
+ tmp = skb_clone(nskb, GFP_ATOMIC);
+ if (tmp) {
+ consume_skb(nskb);
+ nskb = tmp;
+ err = skb_unclone(nskb, GFP_ATOMIC);
+ } else {
+ err = -ENOMEM;
+ }
+ }
+
if (!tail)
skb->next = nskb;
else
tail->next = nskb;
+ if (unlikely(err)) {
+ nskb->next = list_skb;
+ goto err_linearize;
+ }
+
tail = nskb;
delta_len += nskb->len;
i = j = reciprocal_scale(hash, socks);
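+	/* wrap at 'socks', the snapshot used to scale the hash; reuse->num_socks may have changed in the meantime */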
while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
i++;
- if (i >= reuse->num_socks)
+ if (i >= socks)
i = 0;
if (i == j)
goto out;
fn = &reply_funcs[dcb->cmd];
if (!fn->cb)
return -EOPNOTSUPP;
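+	/* RTM_SETDCB messages modify device state and thus require CAP_NET_ADMIN */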
- if (fn->type != nlh->nlmsg_type)
+ if (fn->type == RTM_SETDCB && !netlink_capable(skb, CAP_NET_ADMIN))
return -EPERM;
if (!tb[DCB_ATTR_IFNAME])
static void dsa_port_teardown(struct dsa_port *dp)
{
+ struct devlink_port *dlp = &dp->devlink_port;
+
if (!dp->setup)
return;
+ devlink_port_type_clear(dlp);
+
switch (dp->type) {
case DSA_PORT_TYPE_UNUSED:
break;
int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
{
int mtu = ETH_DATA_LEN + cpu_dp->tag_ops->overhead;
+ struct dsa_switch *ds = cpu_dp->ds;
+ struct device_link *consumer_link;
int ret;
+ /* The DSA master must use SET_NETDEV_DEV for this to work. */
+ consumer_link = device_link_add(ds->dev, dev->dev.parent,
+ DL_FLAG_AUTOREMOVE_CONSUMER);
+ if (!consumer_link)
+ netdev_err(dev,
+ "Failed to create a device link to DSA switch %s\n",
+ dev_name(ds->dev));
+
rtnl_lock();
ret = dev_set_mtu(dev, mtu);
rtnl_unlock();
int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
{
u8 *tail;
- u8 *vaddr;
int nfrags;
int esph_offset;
struct page *page;
page = pfrag->page;
get_page(page);
- vaddr = kmap_atomic(page);
-
- tail = vaddr + pfrag->offset;
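+	/* socket page frags are assumed not to come from highmem, so page_address() suffices and the kmap_atomic() pair can go */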
+ tail = page_address(page) + pfrag->offset;
esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
- kunmap_atomic(vaddr);
-
nfrags = skb_shinfo(skb)->nr_frags;
__skb_fill_page_desc(skb, nfrags, page, pfrag->offset,
int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
{
u8 *tail;
- u8 *vaddr;
int nfrags;
int esph_offset;
struct page *page;
page = pfrag->page;
get_page(page);
- vaddr = kmap_atomic(page);
-
- tail = vaddr + pfrag->offset;
+ tail = page_address(page) + pfrag->offset;
esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
- kunmap_atomic(vaddr);
-
nfrags = skb_shinfo(skb)->nr_frags;
__skb_fill_page_desc(skb, nfrags, page, pfrag->offset,
return -EINVAL;
}
+static int
+ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk,
+ struct sk_buff *skb, unsigned int mtu)
+{
+ struct sk_buff *segs, *nskb;
+ netdev_features_t features;
+ int ret = 0;
+
+ /* Please see the corresponding comment in ip_finish_output_gso
+ * describing the cases where GSO segment length exceeds the
+ * egress MTU.
+ */
+ features = netif_skb_features(skb);
+ segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+ if (IS_ERR_OR_NULL(segs)) {
+ kfree_skb(skb);
+ return -ENOMEM;
+ }
+
+ consume_skb(skb);
+
+ skb_list_walk_safe(segs, segs, nskb) {
+ int err;
+
+ skb_mark_not_on_list(segs);
+ err = ip6_fragment(net, sk, segs, ip6_finish_output2);
+ if (err && ret == 0)
+ ret = err;
+ }
+
+ return ret;
+}
+
static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb)
{
+ unsigned int mtu;
+
#if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
/* Policy lookup after SNAT yielded a new policy */
if (skb_dst(skb)->xfrm) {
}
#endif
- if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) ||
+ mtu = ip6_skb_dst_mtu(skb);
+ if (skb_is_gso(skb) && !skb_gso_validate_network_len(skb, mtu))
+ return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu);
+
+ if ((skb->len > mtu && !skb_is_gso(skb)) ||
dst_allfrag(skb_dst(skb)) ||
(IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size))
return ip6_fragment(net, sk, skb, ip6_finish_output2);
}
#ifdef CONFIG_IPV6_SIT_6RD
- if (ipip6_netlink_6rd_parms(data, &ip6rd))
+ if (ipip6_netlink_6rd_parms(data, &ip6rd)) {
err = ipip6_tunnel_update_6rd(nt, &ip6rd);
+ if (err < 0)
+ unregister_netdevice_queue(dev, NULL);
+ }
#endif
return err;
static bool tcp_can_send_ack(const struct sock *ssk)
{
return !((1 << inet_sk_state_load(ssk)) &
- (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE));
+ (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN));
}
static void mptcp_send_ack(struct mptcp_sock *msk)
static int mptcp_disconnect(struct sock *sk, int flags)
{
- /* Should never be called.
- * inet_stream_connect() calls ->disconnect, but that
- * refers to the subflow socket, not the mptcp one.
- */
- WARN_ON_ONCE(1);
+ struct mptcp_subflow_context *subflow;
+ struct mptcp_sock *msk = mptcp_sk(sk);
+
+ __mptcp_flush_join_list(msk);
+ mptcp_for_each_subflow(msk, subflow) {
+ struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+
+ lock_sock(ssk);
+ tcp_disconnect(ssk, flags);
+ release_sock(ssk);
+ }
return 0;
}
return true;
}
+static void mptcp_shutdown(struct sock *sk, int how)
+{
+ pr_debug("sk=%p, how=%d", sk, how);
+
+ if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk))
+ __mptcp_wr_shutdown(sk);
+}
+
static struct proto mptcp_prot = {
.name = "MPTCP",
.owner = THIS_MODULE,
.accept = mptcp_accept,
.setsockopt = mptcp_setsockopt,
.getsockopt = mptcp_getsockopt,
- .shutdown = tcp_shutdown,
+ .shutdown = mptcp_shutdown,
.destroy = mptcp_destroy,
.sendmsg = mptcp_sendmsg,
.recvmsg = mptcp_recvmsg,
return mask;
}
-static int mptcp_shutdown(struct socket *sock, int how)
-{
- struct mptcp_sock *msk = mptcp_sk(sock->sk);
- struct sock *sk = sock->sk;
- int ret = 0;
-
- pr_debug("sk=%p, how=%d", msk, how);
-
- lock_sock(sk);
-
- how++;
- if ((how & ~SHUTDOWN_MASK) || !how) {
- ret = -EINVAL;
- goto out_unlock;
- }
-
- if (sock->state == SS_CONNECTING) {
- if ((1 << sk->sk_state) &
- (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_CLOSE))
- sock->state = SS_DISCONNECTING;
- else
- sock->state = SS_CONNECTED;
- }
-
- sk->sk_shutdown |= how;
- if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk))
- __mptcp_wr_shutdown(sk);
-
- /* Wake up anyone sleeping in poll. */
- sk->sk_state_change(sk);
-
-out_unlock:
- release_sock(sk);
-
- return ret;
-}
-
static const struct proto_ops mptcp_stream_ops = {
.family = PF_INET,
.owner = THIS_MODULE,
.ioctl = inet_ioctl,
.gettstamp = sock_gettstamp,
.listen = mptcp_listen,
- .shutdown = mptcp_shutdown,
+ .shutdown = inet_shutdown,
.setsockopt = sock_common_setsockopt,
.getsockopt = sock_common_getsockopt,
.sendmsg = inet_sendmsg,
.ioctl = inet6_ioctl,
.gettstamp = sock_gettstamp,
.listen = mptcp_listen,
- .shutdown = mptcp_shutdown,
+ .shutdown = inet_shutdown,
.setsockopt = sock_common_setsockopt,
.getsockopt = sock_common_getsockopt,
.sendmsg = inet6_sendmsg,
{
int ret;
+ /* module_param hashsize could have changed value */
+ nf_conntrack_htable_size_user = nf_conntrack_htable_size;
+
ret = proc_dointvec(table, write, buffer, lenp, ppos);
if (ret < 0 || !write)
return ret;
ret = register_pernet_subsys(&nat_net_ops);
if (ret < 0) {
nf_ct_extend_unregister(&nat_extend);
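+		/* also undo the nf_nat_bysource hashtable allocated earlier on this error path */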
+ kvfree(nf_nat_bysource);
return ret;
}
return;
}
- if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST) {
+ if (state == RXRPC_CALL_SERVER_RECV_REQUEST) {
unsigned long timo = READ_ONCE(call->next_req_timo);
unsigned long now, expect_req_by;
default: /* we have a ticket we can't encode */
pr_err("Unsupported key token type (%u)\n",
token->security_index);
- continue;
+ return -ENOPKG;
}
_debug("token[%u]: toksize=%u", ntoks, toksize);
break;
default:
- break;
+ pr_err("Unsupported key token type (%u)\n",
+ token->security_index);
+ return -ENOPKG;
}
ASSERTCMP((unsigned long)xdr - (unsigned long)oldxdr, ==,
goto errattr;
smc_clc_get_hostname(&host);
if (host) {
- snprintf(hostname, sizeof(hostname), "%s", host);
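+		/* the source buffer is not guaranteed to be NUL-terminated: copy the fixed-size field and terminate explicitly instead of using snprintf("%s") */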
+ memcpy(hostname, host, SMC_MAX_HOSTNAME_LEN);
+ hostname[SMC_MAX_HOSTNAME_LEN] = 0;
if (nla_put_string(skb, SMC_NLA_SYS_LOCAL_HOST, hostname))
goto errattr;
}
smc_ism_get_system_eid(smcd_dev, &seid);
mutex_unlock(&smcd_dev_list.mutex);
if (seid && smc_ism_is_v2_capable()) {
- snprintf(smc_seid, sizeof(smc_seid), "%s", seid);
+ memcpy(smc_seid, seid, SMC_MAX_EID_LEN);
+ smc_seid[SMC_MAX_EID_LEN] = 0;
if (nla_put_string(skb, SMC_NLA_SYS_SEID, smc_seid))
goto errattr;
}
goto errattr;
if (nla_put_u8(skb, SMC_NLA_LGR_R_VLAN_ID, lgr->vlan_id))
goto errattr;
- snprintf(smc_target, sizeof(smc_target), "%s", lgr->pnet_id);
+ memcpy(smc_target, lgr->pnet_id, SMC_MAX_PNETID_LEN);
+ smc_target[SMC_MAX_PNETID_LEN] = 0;
if (nla_put_string(skb, SMC_NLA_LGR_R_PNETID, smc_target))
goto errattr;
struct sk_buff *skb,
struct netlink_callback *cb)
{
- char smc_ibname[IB_DEVICE_NAME_MAX + 1];
+ char smc_ibname[IB_DEVICE_NAME_MAX];
u8 smc_gid_target[41];
struct nlattr *attrs;
u32 link_uid = 0;
goto errattr;
if (nla_put_u32(skb, SMC_NLA_LGR_D_CHID, smc_ism_get_chid(lgr->smcd)))
goto errattr;
- snprintf(smc_pnet, sizeof(smc_pnet), "%s", lgr->smcd->pnetid);
+ memcpy(smc_pnet, lgr->smcd->pnetid, SMC_MAX_PNETID_LEN);
+ smc_pnet[SMC_MAX_PNETID_LEN] = 0;
if (nla_put_string(skb, SMC_NLA_LGR_D_PNETID, smc_pnet))
goto errattr;
goto errv2attr;
if (nla_put_u8(skb, SMC_NLA_LGR_V2_OS, lgr->peer_os))
goto errv2attr;
- snprintf(smc_host, sizeof(smc_host), "%s", lgr->peer_hostname);
+ memcpy(smc_host, lgr->peer_hostname, SMC_MAX_HOSTNAME_LEN);
+ smc_host[SMC_MAX_HOSTNAME_LEN] = 0;
if (nla_put_string(skb, SMC_NLA_LGR_V2_PEER_HOST, smc_host))
goto errv2attr;
- snprintf(smc_eid, sizeof(smc_eid), "%s", lgr->negotiated_eid);
+ memcpy(smc_eid, lgr->negotiated_eid, SMC_MAX_EID_LEN);
+ smc_eid[SMC_MAX_EID_LEN] = 0;
if (nla_put_string(skb, SMC_NLA_LGR_V2_NEG_EID, smc_eid))
goto errv2attr;
if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR,
smcibdev->pnetid_by_user[port]))
goto errattr;
- snprintf(smc_pnet, sizeof(smc_pnet), "%s",
- (char *)&smcibdev->pnetid[port]);
+ memcpy(smc_pnet, &smcibdev->pnetid[port], SMC_MAX_PNETID_LEN);
+ smc_pnet[SMC_MAX_PNETID_LEN] = 0;
if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet))
goto errattr;
if (nla_put_u32(skb, SMC_NLA_DEV_PORT_NETDEV,
struct sk_buff *skb,
struct netlink_callback *cb)
{
- char smc_ibname[IB_DEVICE_NAME_MAX + 1];
+ char smc_ibname[IB_DEVICE_NAME_MAX];
struct smc_pci_dev smc_pci_dev;
struct pci_dev *pci_dev;
unsigned char is_crit;
goto errattr;
if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR, smcd->pnetid_by_user))
goto errportattr;
- snprintf(smc_pnet, sizeof(smc_pnet), "%s", smcd->pnetid);
+ memcpy(smc_pnet, smcd->pnetid, SMC_MAX_PNETID_LEN);
+ smc_pnet[SMC_MAX_PNETID_LEN] = 0;
if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet))
goto errportattr;
scope_id = dev->ifindex;
dev_put(dev);
} else {
- if (kstrtou32(p, 10, &scope_id) == 0) {
+ if (kstrtou32(p, 10, &scope_id) != 0) {
kfree(p);
return 0;
}
return 0; /* record not complete */
}
+static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
+ int flags)
+{
+ return kernel_sendpage(sock, virt_to_page(vec->iov_base),
+ offset_in_page(vec->iov_base),
+ vec->iov_len, flags);
+}
+
+/*
+ * kernel_sendpage() is used exclusively to reduce the number of
+ * copy operations in this path. Therefore the caller must ensure
+ * that the pages backing @xdr are unchanging.
+ *
+ * In addition, the logic assumes that .bv_len is never larger
+ * than PAGE_SIZE.
+ */
+static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg,
+ struct xdr_buf *xdr, rpc_fraghdr marker,
+ unsigned int *sentp)
+{
+ const struct kvec *head = xdr->head;
+ const struct kvec *tail = xdr->tail;
+ struct kvec rm = {
+ .iov_base = &marker,
+ .iov_len = sizeof(marker),
+ };
+ int flags, ret;
+
+ *sentp = 0;
+ xdr_alloc_bvec(xdr, GFP_KERNEL);
+
+ msg->msg_flags = MSG_MORE;
+ ret = kernel_sendmsg(sock, msg, &rm, 1, rm.iov_len);
+ if (ret < 0)
+ return ret;
+ *sentp += ret;
+ if (ret != rm.iov_len)
+ return -EAGAIN;
+
+ flags = head->iov_len < xdr->len ? MSG_MORE | MSG_SENDPAGE_NOTLAST : 0;
+ ret = svc_tcp_send_kvec(sock, head, flags);
+ if (ret < 0)
+ return ret;
+ *sentp += ret;
+ if (ret != head->iov_len)
+ goto out;
+
+ if (xdr->page_len) {
+ unsigned int offset, len, remaining;
+ struct bio_vec *bvec;
+
+ bvec = xdr->bvec;
+ offset = xdr->page_base;
+ remaining = xdr->page_len;
+ flags = MSG_MORE | MSG_SENDPAGE_NOTLAST;
+ while (remaining > 0) {
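+			/* on the final chunk of a reply without a tail, drop MSG_MORE so the record is pushed to the wire */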
+ if (remaining <= PAGE_SIZE && tail->iov_len == 0)
+ flags = 0;
+ len = min(remaining, bvec->bv_len);
+ ret = kernel_sendpage(sock, bvec->bv_page,
+ bvec->bv_offset + offset,
+ len, flags);
+ if (ret < 0)
+ return ret;
+ *sentp += ret;
+ if (ret != len)
+ goto out;
+ remaining -= len;
+ offset = 0;
+ bvec++;
+ }
+ }
+
+ if (tail->iov_len) {
+ ret = svc_tcp_send_kvec(sock, tail, 0);
+ if (ret < 0)
+ return ret;
+ *sentp += ret;
+ }
+
+out:
+ return 0;
+}
+
/**
* svc_tcp_sendto - Send out a reply on a TCP socket
* @rqstp: completed svc_rqst
mutex_lock(&xprt->xpt_mutex);
if (svc_xprt_is_dead(xprt))
goto out_notconn;
- err = xprt_sock_sendmsg(svsk->sk_sock, &msg, xdr, 0, marker, &sent);
+ err = svc_tcp_sendmsg(svsk->sk_sock, &msg, xdr, marker, &sent);
xdr_free_bvec(xdr);
trace_svcsock_tcp_send(xprt, err < 0 ? err : sent);
if (err < 0 || sent != (xdr->len + sizeof(marker)))
int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
struct sk_buff_head *xmitq)
{
- struct tipc_msg *hdr = buf_msg(skb_peek(list));
struct sk_buff_head *backlogq = &l->backlogq;
struct sk_buff_head *transmq = &l->transmq;
struct sk_buff *skb, *_skb;
u16 ack = l->rcv_nxt - 1;
u16 seqno = l->snd_nxt;
int pkt_cnt = skb_queue_len(list);
- int imp = msg_importance(hdr);
unsigned int mss = tipc_link_mss(l);
unsigned int cwin = l->window;
unsigned int mtu = l->mtu;
+ struct tipc_msg *hdr;
bool new_bundle;
int rc = 0;
+ int imp;
+
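+	/* an empty xmit list would make the skb_peek() below return NULL */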
+ if (pkt_cnt <= 0)
+ return 0;
+ hdr = buf_msg(skb_peek(list));
if (unlikely(msg_size(hdr) > mtu)) {
pr_warn("Too large msg, purging xmit list %d %d %d %d %d!\n",
skb_queue_len(list), msg_user(hdr),
return -EMSGSIZE;
}
+ imp = msg_importance(hdr);
/* Allow oversubscription of one data msg per source at congestion */
if (unlikely(l->backlog[imp].len >= l->backlog[imp].limit)) {
if (imp == TIPC_SYSTEM_IMPORTANCE) {
}
/**
- * link_reset_stats - reset link statistics
+ * tipc_link_reset_stats - reset link statistics
* @l: pointer to link
*/
void tipc_link_reset_stats(struct tipc_link *l)
}
/**
- * tipc_node_xmit() is the general link level function for message sending
+ * tipc_node_xmit() - general link level function for message sending
* @net: the applicable net namespace
* @list: chain of buffers containing message
* @dnode: address of destination node
struct inode *inode;
audit_log_format(ab, " name=");
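+	/* hold d_lock so a concurrent rename cannot change or free d_name while it is logged */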
+ spin_lock(&a->u.dentry->d_lock);
audit_log_untrustedstring(ab, a->u.dentry->d_name.name);
+ spin_unlock(&a->u.dentry->d_lock);
inode = d_backing_inode(a->u.dentry);
if (inode) {
dentry = d_find_alias(inode);
if (dentry) {
audit_log_format(ab, " name=");
- audit_log_untrustedstring(ab,
- dentry->d_name.name);
+ spin_lock(&dentry->d_lock);
+ audit_log_untrustedstring(ab, dentry->d_name.name);
+ spin_unlock(&dentry->d_lock);
dput(dentry);
}
audit_log_format(ab, " dev=");
NULL
};
-static struct attribute_group ac97_adapter_attr_group = {
+static const struct attribute_group ac97_adapter_attr_group = {
.name = "ac97_operations",
.attrs = ac97_controller_device_attrs,
};
goto fail;
}
- strlcpy(onyx->codec.name, "onyx", MAX_CODEC_NAME_LEN);
+ strscpy(onyx->codec.name, "onyx", MAX_CODEC_NAME_LEN);
onyx->codec.owner = THIS_MODULE;
onyx->codec.init = onyx_init_codec;
onyx->codec.exit = onyx_exit_codec;
/* seems that half is a saner default */
tas->drc_range = TAS3004_DRC_MAX / 2;
- strlcpy(tas->codec.name, "tas", MAX_CODEC_NAME_LEN);
+ strscpy(tas->codec.name, "tas", MAX_CODEC_NAME_LEN);
tas->codec.owner = THIS_MODULE;
tas->codec.init = tas_init_codec;
tas->codec.exit = tas_exit_codec;
if (!toonie)
return -ENOMEM;
- strlcpy(toonie->codec.name, "toonie", sizeof(toonie->codec.name));
+ strscpy(toonie->codec.name, "toonie", sizeof(toonie->codec.name));
toonie->codec.owner = THIS_MODULE;
toonie->codec.init = toonie_init_codec;
toonie->codec.exit = toonie_exit_codec;
return err;
aoa_card = alsa_card->private_data;
aoa_card->alsa_card = alsa_card;
- strlcpy(alsa_card->driver, "AppleOnbdAudio", sizeof(alsa_card->driver));
- strlcpy(alsa_card->shortname, name, sizeof(alsa_card->shortname));
- strlcpy(alsa_card->longname, name, sizeof(alsa_card->longname));
- strlcpy(alsa_card->mixername, name, sizeof(alsa_card->mixername));
+ strscpy(alsa_card->driver, "AppleOnbdAudio", sizeof(alsa_card->driver));
+ strscpy(alsa_card->shortname, name, sizeof(alsa_card->shortname));
+ strscpy(alsa_card->longname, name, sizeof(alsa_card->longname));
+ strscpy(alsa_card->mixername, name, sizeof(alsa_card->mixername));
err = snd_card_register(aoa_card->alsa_card);
if (err < 0) {
printk(KERN_ERR "snd-aoa: couldn't register alsa card\n");
ldev->gpio.methods->set_lineout(codec->gpio, 1);
ctl = snd_ctl_new1(&lineout_ctl, codec->gpio);
if (cc->connected & CC_LINEOUT_LABELLED_HEADPHONE)
- strlcpy(ctl->id.name,
+ strscpy(ctl->id.name,
"Headphone Switch", sizeof(ctl->id.name));
ldev->lineout_ctrl = ctl;
aoa_snd_ctl_add(ctl);
ctl = snd_ctl_new1(&lineout_detect_choice,
ldev);
if (cc->connected & CC_LINEOUT_LABELLED_HEADPHONE)
- strlcpy(ctl->id.name,
+ strscpy(ctl->id.name,
"Headphone Detect Autoswitch",
sizeof(ctl->id.name));
aoa_snd_ctl_add(ctl);
ctl = snd_ctl_new1(&lineout_detected,
ldev);
if (cc->connected & CC_LINEOUT_LABELLED_HEADPHONE)
- strlcpy(ctl->id.name,
+ strscpy(ctl->id.name,
"Headphone Detected",
sizeof(ctl->id.name));
ldev->lineout_detected_ctrl = ctl;
int length;
if (*sdev->modalias) {
- strlcpy(buf, sdev->modalias, sizeof(sdev->modalias) + 1);
+ strscpy(buf, sdev->modalias, sizeof(sdev->modalias) + 1);
strcat(buf, "\n");
length = strlen(buf);
} else {
card->private_free = aaci_free_card;
- strlcpy(card->driver, DRIVER_NAME, sizeof(card->driver));
- strlcpy(card->shortname, "ARM AC'97 Interface", sizeof(card->shortname));
+ strscpy(card->driver, DRIVER_NAME, sizeof(card->driver));
+ strscpy(card->shortname, "ARM AC'97 Interface", sizeof(card->shortname));
snprintf(card->longname, sizeof(card->longname),
"%s PL%03x rev%u at 0x%08llx, irq %d",
card->shortname, amba_part(dev), amba_rev(dev),
pcm->private_data = aaci;
pcm->info_flags = 0;
- strlcpy(pcm->name, DRIVER_NAME, sizeof(pcm->name));
+ strscpy(pcm->name, DRIVER_NAME, sizeof(pcm->name));
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &aaci_playback_ops);
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &aaci_capture_ops);
if (ret < 0)
goto err;
- strlcpy(card->driver, dev->dev.driver->name, sizeof(card->driver));
+ strscpy(card->driver, dev->dev.driver->name, sizeof(card->driver));
ret = pxa2xx_ac97_pcm_new(card);
if (ret)
from the driver are in the proper ranges or the check of the invalid
access at out-of-array areas.
+config SND_JACK_INJECTION_DEBUG
+ bool "Sound jack injection interface via debugfs"
+ depends on SND_JACK && SND_DEBUG && DEBUG_FS
+ help
+ This option can be used to enable or disable sound jack
+ software injection.
+ Say Y if you are debugging via the jack injection interface.
+ If unsure, select "N".
+
config SND_VMASTER
bool
static inline void snd_compress_set_id(struct snd_compr *compr, const char *id)
{
- strlcpy(compr->id, id, sizeof(compr->id));
+ strscpy(compr->id, id, sizeof(compr->id));
}
#else
static inline int snd_compress_proc_init(struct snd_compr *compr)
kctl->id.device = ncontrol->device;
kctl->id.subdevice = ncontrol->subdevice;
if (ncontrol->name) {
- strlcpy(kctl->id.name, ncontrol->name, sizeof(kctl->id.name));
+ strscpy(kctl->id.name, ncontrol->name, sizeof(kctl->id.name));
if (strcmp(ncontrol->name, kctl->id.name) != 0)
pr_warn("ALSA: Control name '%s' truncated to '%s'\n",
ncontrol->name, kctl->id.name);
return -ENOMEM;
down_read(&snd_ioctl_rwsem);
info->card = card->number;
- strlcpy(info->id, card->id, sizeof(info->id));
- strlcpy(info->driver, card->driver, sizeof(info->driver));
- strlcpy(info->name, card->shortname, sizeof(info->name));
- strlcpy(info->longname, card->longname, sizeof(info->longname));
- strlcpy(info->mixername, card->mixername, sizeof(info->mixername));
- strlcpy(info->components, card->components, sizeof(info->components));
+ strscpy(info->id, card->id, sizeof(info->id));
+ strscpy(info->driver, card->driver, sizeof(info->driver));
+ strscpy(info->name, card->shortname, sizeof(info->name));
+ strscpy(info->longname, card->longname, sizeof(info->longname));
+ strscpy(info->mixername, card->mixername, sizeof(info->mixername));
+ strscpy(info->components, card->components, sizeof(info->components));
up_read(&snd_ioctl_rwsem);
if (copy_to_user(arg, info, sizeof(struct snd_ctl_card_info))) {
kfree(info);
{
size_t offset = value_sizes[info->type] * info->count;
- offset = (offset + sizeof(u32) - 1) / sizeof(u32);
+ offset = DIV_ROUND_UP(offset, sizeof(u32));
memset32((u32 *)control->value.bytes.data + offset, pattern,
sizeof(control->value) / sizeof(u32) - offset);
}
/* check whether the remaining area kept untouched */
offset = value_sizes[info->type] * info->count;
- offset = (offset + sizeof(u32) - 1) / sizeof(u32);
+ offset = DIV_ROUND_UP(offset, sizeof(u32));
p = (u32 *)control->value.bytes.data + offset;
for (; offset < sizeof(control->value) / sizeof(u32); offset++, p++) {
if (*p != pattern) {
WARN(strlen(names[info->value.enumerated.item]) >= sizeof(info->value.enumerated.name),
"ALSA: too long item name '%s'\n",
names[info->value.enumerated.item]);
- strlcpy(info->value.enumerated.name,
+ strscpy(info->value.enumerated.name,
names[info->value.enumerated.item],
sizeof(info->value.enumerated.name));
return 0;
sid.index = 0;
sid.iface = SNDRV_CTL_ELEM_IFACE_CARD;
- strlcpy(sid.name, name, sizeof(sid.name));
+ strscpy(sid.name, name, sizeof(sid.name));
while (snd_ctl_find_id(card, &sid)) {
sid.index++;
memset(&info, 0, sizeof(info));
info.card = hw->card->number;
- strlcpy(info.id, hw->id, sizeof(info.id));
- strlcpy(info.name, hw->name, sizeof(info.name));
+ strscpy(info.id, hw->id, sizeof(info.id));
+ strscpy(info.name, hw->name, sizeof(info.name));
info.iface = hw->iface;
if (copy_to_user(_info, &info, sizeof(info)))
return -EFAULT;
hwdep->card = card;
hwdep->device = device;
if (id)
- strlcpy(hwdep->id, id, sizeof(hwdep->id));
+ strscpy(hwdep->id, id, sizeof(hwdep->id));
snd_device_initialize(&hwdep->dev, card);
hwdep->dev.release = release_hwdep_device;
#include <linux/time.h>
#include <linux/ctype.h>
#include <linux/pm.h>
+#include <linux/debugfs.h>
#include <linux/completion.h>
#include <sound/core.h>
{
struct snd_card *card;
int err;
+#ifdef CONFIG_SND_DEBUG
+ char name[8];
+#endif
if (snd_BUG_ON(!card_ret))
return -EINVAL;
if (extra_size > 0)
card->private_data = (char *)card + sizeof(struct snd_card);
if (xid)
- strlcpy(card->id, xid, sizeof(card->id));
+ strscpy(card->id, xid, sizeof(card->id));
err = 0;
mutex_lock(&snd_card_mutex);
if (idx < 0) /* first check the matching module-name slot */
dev_err(parent, "unable to create card info\n");
goto __error_ctl;
}
+
+#ifdef CONFIG_SND_DEBUG
+ sprintf(name, "card%d", idx);
+ card->debugfs_root = debugfs_create_dir(name, sound_debugfs_root);
+#endif
+
*card_ret = card;
return 0;
return ret;
/* wait, until all devices are ready for the free operation */
wait_for_completion(&released);
+
+#ifdef CONFIG_SND_DEBUG
+ debugfs_remove(card->debugfs_root);
+ card->debugfs_root = NULL;
+#endif
+
return 0;
}
EXPORT_SYMBOL(snd_card_free);
/* last resort... */
dev_err(card->dev, "unable to set card id (%s)\n", id);
if (card->proc_root->name)
- strlcpy(card->id, card->proc_root->name, sizeof(card->id));
+ strscpy(card->id, card->proc_root->name, sizeof(card->id));
}
/**
#include <linux/input.h>
#include <linux/slab.h>
#include <linux/module.h>
+#include <linux/ctype.h>
+#include <linux/mm.h>
+#include <linux/debugfs.h>
#include <sound/jack.h>
#include <sound/core.h>
#include <sound/control.h>
struct snd_kcontrol *kctl;
struct list_head list; /* list of controls belong to the same jack */
unsigned int mask_bits; /* only masked status bits are reported via kctl */
+ struct snd_jack *jack; /* back-pointer to the owning jack */
+ bool sw_inject_enable; /* allow injecting plug events via debugfs */
+#ifdef CONFIG_SND_JACK_INJECTION_DEBUG
+ struct dentry *jack_debugfs_root; /* jack_kctl debugfs root */
+#endif
};
#ifdef CONFIG_SND_JACK_INPUT_DEV
}
#endif /* CONFIG_SND_JACK_INPUT_DEV */
+#ifdef CONFIG_SND_JACK_INJECTION_DEBUG
+static void snd_jack_inject_report(struct snd_jack_kctl *jack_kctl, int status)
+{
+ struct snd_jack *jack;
+#ifdef CONFIG_SND_JACK_INPUT_DEV
+ int i;
+#endif
+ if (!jack_kctl)
+ return;
+
+ jack = jack_kctl->jack;
+
+ if (jack_kctl->sw_inject_enable)
+ snd_kctl_jack_report(jack->card, jack_kctl->kctl,
+ status & jack_kctl->mask_bits);
+
+#ifdef CONFIG_SND_JACK_INPUT_DEV
+ if (!jack->input_dev)
+ return;
+
+ for (i = 0; i < ARRAY_SIZE(jack->key); i++) {
+ int testbit = ((SND_JACK_BTN_0 >> i) & jack_kctl->mask_bits);
+
+ if (jack->type & testbit)
+ input_report_key(jack->input_dev, jack->key[i],
+ status & testbit);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(jack_switch_types); i++) {
+ int testbit = ((1 << i) & jack_kctl->mask_bits);
+
+ if (jack->type & testbit)
+ input_report_switch(jack->input_dev,
+ jack_switch_types[i],
+ status & testbit);
+ }
+
+ input_sync(jack->input_dev);
+#endif /* CONFIG_SND_JACK_INPUT_DEV */
+}
+
+static ssize_t sw_inject_enable_read(struct file *file,
+ char __user *to, size_t count, loff_t *ppos)
+{
+ struct snd_jack_kctl *jack_kctl = file->private_data;
+ int len, ret;
+ char buf[128];
+
+ len = scnprintf(buf, sizeof(buf), "%s: %s\t\t%s: %i\n", "Jack", jack_kctl->kctl->id.name,
+ "Inject Enabled", jack_kctl->sw_inject_enable);
+ ret = simple_read_from_buffer(to, count, ppos, buf, len);
+
+ return ret;
+}
+
+static ssize_t sw_inject_enable_write(struct file *file,
+ const char __user *from, size_t count, loff_t *ppos)
+{
+ struct snd_jack_kctl *jack_kctl = file->private_data;
+ int ret, err;
+ unsigned long enable;
+ char buf[8] = { 0 };
+
+ ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, from, count);
+ err = kstrtoul(buf, 0, &enable);
+ if (err)
+ return err;
+
+ if (jack_kctl->sw_inject_enable == (!!enable))
+ return ret;
+
+ jack_kctl->sw_inject_enable = !!enable;
+
+ if (!jack_kctl->sw_inject_enable)
+ snd_jack_report(jack_kctl->jack, jack_kctl->jack->hw_status_cache);
+
+ return ret;
+}
+
+static ssize_t jackin_inject_write(struct file *file,
+ const char __user *from, size_t count, loff_t *ppos)
+{
+ struct snd_jack_kctl *jack_kctl = file->private_data;
+ int ret, err;
+ unsigned long enable;
+ char buf[8] = { 0 };
+
+ if (!jack_kctl->sw_inject_enable)
+ return -EINVAL;
+
+ ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, from, count);
+ err = kstrtoul(buf, 0, &enable);
+ if (err)
+ return err;
+
+ snd_jack_inject_report(jack_kctl, !!enable ? jack_kctl->mask_bits : 0);
+
+ return ret;
+}
+
+static ssize_t jack_kctl_id_read(struct file *file,
+ char __user *to, size_t count, loff_t *ppos)
+{
+ struct snd_jack_kctl *jack_kctl = file->private_data;
+ char buf[64];
+ int len, ret;
+
+ len = scnprintf(buf, sizeof(buf), "%s\n", jack_kctl->kctl->id.name);
+ ret = simple_read_from_buffer(to, count, ppos, buf, len);
+
+ return ret;
+}
+
+/* the bit definitions are aligned with snd_jack_types in jack.h */
+static const char * const jack_events_name[] = {
+ "HEADPHONE(0x0001)", "MICROPHONE(0x0002)", "LINEOUT(0x0004)",
+ "MECHANICAL(0x0008)", "VIDEOOUT(0x0010)", "LINEIN(0x0020)",
+ "", "", "", "BTN_5(0x0200)", "BTN_4(0x0400)", "BTN_3(0x0800)",
+ "BTN_2(0x1000)", "BTN_1(0x2000)", "BTN_0(0x4000)", "",
+};
+
+/* a 256-byte buffer is large enough for all event names; smaller buffers are truncated safely */
+static int parse_mask_bits(unsigned int mask_bits, char *buf, size_t buf_size)
+{
+ int i;
+
+ scnprintf(buf, buf_size, "0x%04x", mask_bits);
+
+ for (i = 0; i < ARRAY_SIZE(jack_events_name); i++)
+ if (mask_bits & (1 << i)) {
+ strlcat(buf, " ", buf_size);
+ strlcat(buf, jack_events_name[i], buf_size);
+ }
+ strlcat(buf, "\n", buf_size);
+
+ return strlen(buf);
+}
+
+static ssize_t jack_kctl_mask_bits_read(struct file *file,
+ char __user *to, size_t count, loff_t *ppos)
+{
+ struct snd_jack_kctl *jack_kctl = file->private_data;
+ char buf[256];
+ int len, ret;
+
+ len = parse_mask_bits(jack_kctl->mask_bits, buf, sizeof(buf));
+ ret = simple_read_from_buffer(to, count, ppos, buf, len);
+
+ return ret;
+}
+
+static ssize_t jack_kctl_status_read(struct file *file,
+ char __user *to, size_t count, loff_t *ppos)
+{
+ struct snd_jack_kctl *jack_kctl = file->private_data;
+ char buf[16];
+ int len, ret;
+
+ len = scnprintf(buf, sizeof(buf), "%s\n", jack_kctl->kctl->private_value ?
+ "Plugged" : "Unplugged");
+ ret = simple_read_from_buffer(to, count, ppos, buf, len);
+
+ return ret;
+}
+
+#ifdef CONFIG_SND_JACK_INPUT_DEV
+static ssize_t jack_type_read(struct file *file,
+ char __user *to, size_t count, loff_t *ppos)
+{
+ struct snd_jack_kctl *jack_kctl = file->private_data;
+ char buf[256];
+ int len, ret;
+
+ len = parse_mask_bits(jack_kctl->jack->type, buf, sizeof(buf));
+ ret = simple_read_from_buffer(to, count, ppos, buf, len);
+
+ return ret;
+}
+
+static const struct file_operations jack_type_fops = {
+ .open = simple_open,
+ .read = jack_type_read,
+ .llseek = default_llseek,
+};
+#endif
+
+static const struct file_operations sw_inject_enable_fops = {
+ .open = simple_open,
+ .read = sw_inject_enable_read,
+ .write = sw_inject_enable_write,
+ .llseek = default_llseek,
+};
+
+static const struct file_operations jackin_inject_fops = {
+ .open = simple_open,
+ .write = jackin_inject_write,
+ .llseek = default_llseek,
+};
+
+static const struct file_operations jack_kctl_id_fops = {
+ .open = simple_open,
+ .read = jack_kctl_id_read,
+ .llseek = default_llseek,
+};
+
+static const struct file_operations jack_kctl_mask_bits_fops = {
+ .open = simple_open,
+ .read = jack_kctl_mask_bits_read,
+ .llseek = default_llseek,
+};
+
+static const struct file_operations jack_kctl_status_fops = {
+ .open = simple_open,
+ .read = jack_kctl_status_read,
+ .llseek = default_llseek,
+};
+
+static int snd_jack_debugfs_add_inject_node(struct snd_jack *jack,
+ struct snd_jack_kctl *jack_kctl)
+{
+ char *tname;
+ int i;
+
+ /* Don't create injection interface for Phantom jacks */
+ if (strstr(jack_kctl->kctl->id.name, "Phantom"))
+ return 0;
+
+ tname = kstrdup(jack_kctl->kctl->id.name, GFP_KERNEL);
+ if (!tname)
+ return -ENOMEM;
+
+	/* replace chars that are not suitable for a directory name with '_' */
+ for (i = 0; tname[i]; i++)
+ if (!isalnum(tname[i]))
+ tname[i] = '_';
+
+ jack_kctl->jack_debugfs_root = debugfs_create_dir(tname, jack->card->debugfs_root);
+ kfree(tname);
+
+ debugfs_create_file("sw_inject_enable", 0644, jack_kctl->jack_debugfs_root, jack_kctl,
+ &sw_inject_enable_fops);
+
+ debugfs_create_file("jackin_inject", 0200, jack_kctl->jack_debugfs_root, jack_kctl,
+ &jackin_inject_fops);
+
+ debugfs_create_file("kctl_id", 0444, jack_kctl->jack_debugfs_root, jack_kctl,
+ &jack_kctl_id_fops);
+
+ debugfs_create_file("mask_bits", 0444, jack_kctl->jack_debugfs_root, jack_kctl,
+ &jack_kctl_mask_bits_fops);
+
+ debugfs_create_file("status", 0444, jack_kctl->jack_debugfs_root, jack_kctl,
+ &jack_kctl_status_fops);
+
+#ifdef CONFIG_SND_JACK_INPUT_DEV
+ debugfs_create_file("type", 0444, jack_kctl->jack_debugfs_root, jack_kctl,
+ &jack_type_fops);
+#endif
+ return 0;
+}
+
+static void snd_jack_debugfs_clear_inject_node(struct snd_jack_kctl *jack_kctl)
+{
+ debugfs_remove(jack_kctl->jack_debugfs_root);
+ jack_kctl->jack_debugfs_root = NULL;
+}
+#else /* CONFIG_SND_JACK_INJECTION_DEBUG */
+static int snd_jack_debugfs_add_inject_node(struct snd_jack *jack,
+ struct snd_jack_kctl *jack_kctl)
+{
+ return 0;
+}
+
+static void snd_jack_debugfs_clear_inject_node(struct snd_jack_kctl *jack_kctl)
+{
+}
+#endif /* CONFIG_SND_JACK_INJECTION_DEBUG */
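
Taken together, the nodes above let jack events be faked from userspace for testing. A hypothetical consumer (the debugfs path is illustrative; the directory name is derived from the kctl name with non-alphanumerics mapped to '_'):

#include <stdio.h>

int main(void)
{
	const char *dir = "/sys/kernel/debug/sound/card0/Headphone_Jack";
	char path[256];
	FILE *f;

	/* detach this jack kctl from hardware reporting */
	snprintf(path, sizeof(path), "%s/sw_inject_enable", dir);
	f = fopen(path, "w");
	if (!f)
		return 1;
	fputs("1", f);
	fclose(f);

	/* inject a plug event; "0" would inject an unplug. Writing
	 * sw_inject_enable back to 0 later restores the cached hardware
	 * status via snd_jack_report(). */
	snprintf(path, sizeof(path), "%s/jackin_inject", dir);
	f = fopen(path, "w");
	if (!f)
		return 1;
	fputs("1", f);
	fclose(f);
	return 0;
}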
+
static void snd_jack_kctl_private_free(struct snd_kcontrol *kctl)
{
struct snd_jack_kctl *jack_kctl;
jack_kctl = kctl->private_data;
if (jack_kctl) {
+ snd_jack_debugfs_clear_inject_node(jack_kctl);
list_del(&jack_kctl->list);
kfree(jack_kctl);
}
static void snd_jack_kctl_add(struct snd_jack *jack, struct snd_jack_kctl *jack_kctl)
{
+ jack_kctl->jack = jack;
list_add_tail(&jack_kctl->list, &jack->kctl_list);
+ snd_jack_debugfs_add_inject_node(jack, jack_kctl);
}
static struct snd_jack_kctl * snd_jack_kctl_new(struct snd_card *card, const char *name, unsigned int mask)
void snd_jack_report(struct snd_jack *jack, int status)
{
struct snd_jack_kctl *jack_kctl;
+ unsigned int mask_bits = 0;
#ifdef CONFIG_SND_JACK_INPUT_DEV
int i;
#endif
if (!jack)
return;
+ jack->hw_status_cache = status;
+
list_for_each_entry(jack_kctl, &jack->kctl_list, list)
- snd_kctl_jack_report(jack->card, jack_kctl->kctl,
- status & jack_kctl->mask_bits);
+ if (jack_kctl->sw_inject_enable)
+ mask_bits |= jack_kctl->mask_bits;
+ else
+ snd_kctl_jack_report(jack->card, jack_kctl->kctl,
+ status & jack_kctl->mask_bits);
#ifdef CONFIG_SND_JACK_INPUT_DEV
if (!jack->input_dev)
return;
for (i = 0; i < ARRAY_SIZE(jack->key); i++) {
- int testbit = SND_JACK_BTN_0 >> i;
+ int testbit = ((SND_JACK_BTN_0 >> i) & ~mask_bits);
if (jack->type & testbit)
input_report_key(jack->input_dev, jack->key[i],
}
for (i = 0; i < ARRAY_SIZE(jack_switch_types); i++) {
- int testbit = 1 << i;
+ int testbit = ((1 << i) & ~mask_bits);
+
if (jack->type & testbit)
input_report_switch(jack->input_dev,
jack_switch_types[i],
struct mixer_info info;
memset(&info, 0, sizeof(info));
- strlcpy(info.id, mixer && mixer->id[0] ? mixer->id : card->driver, sizeof(info.id));
- strlcpy(info.name, mixer && mixer->name[0] ? mixer->name : card->mixername, sizeof(info.name));
+ strscpy(info.id, mixer && mixer->id[0] ? mixer->id : card->driver, sizeof(info.id));
+ strscpy(info.name, mixer && mixer->name[0] ? mixer->name : card->mixername, sizeof(info.name));
info.modify_counter = card->mixer_oss_change_count;
if (copy_to_user(_info, &info, sizeof(info)))
return -EFAULT;
_old_mixer_info info;
memset(&info, 0, sizeof(info));
- strlcpy(info.id, mixer && mixer->id[0] ? mixer->id : card->driver, sizeof(info.id));
- strlcpy(info.name, mixer && mixer->name[0] ? mixer->name : card->mixername, sizeof(info.name));
+ strscpy(info.id, mixer && mixer->id[0] ? mixer->id : card->driver, sizeof(info.id));
+ strscpy(info.name, mixer && mixer->name[0] ? mixer->name : card->mixername, sizeof(info.name));
if (copy_to_user(_info, &info, sizeof(info)))
return -EFAULT;
return 0;
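
The strlcpy()→strscpy() swaps repeated throughout this series change the return contract: strlcpy() returns strlen(src) and always walks the whole source even when it overruns the destination size, while strscpy() returns the number of bytes copied or -E2BIG on truncation and never reads past what it copies. A standalone sketch with a local reimplementation (semantics only; the real, word-at-a-time version lives in lib/string.c):

#include <stdio.h>

#define E2BIG 7

static long my_strscpy(char *dst, const char *src, size_t size)
{
	size_t i;

	if (!size)
		return -E2BIG;
	for (i = 0; i < size - 1 && src[i]; i++)
		dst[i] = src[i];
	dst[i] = '\0';
	return src[i] ? -E2BIG : (long)i;
}

int main(void)
{
	char id[8];

	printf("%ld\n", my_strscpy(id, "AD1848", sizeof(id)));		/* 6 */
	printf("%ld\n", my_strscpy(id, "a long driver name", sizeof(id))); /* -7, truncated */
	return 0;
}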
if (orange == 0)
return 0;
- return ((nrange * (val - omin)) + (orange / 2)) / orange + nmin;
+ return DIV_ROUND_CLOSEST(nrange * (val - omin), orange) + nmin;
}
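
This and the later rounding conversions replace open-coded arithmetic with the self-describing helpers from <linux/kernel.h>/<linux/math.h>. A quick standalone check of what each computes (simplified forms assuming non-negative operands, which is all these call sites use; the kernel's DIV_ROUND_CLOSEST also handles negative dividends):

#include <assert.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))
#define roundup(x, y)		((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	assert(DIV_ROUND_UP(10, 4) == 3);	/* ceil(10 / 4) */
	assert(DIV_ROUND_CLOSEST(10, 4) == 3);	/* 2.5 rounds up */
	assert(DIV_ROUND_CLOSEST(9, 4) == 2);	/* 2.25 rounds down */
	assert(roundup(10, 4) == 12);		/* next multiple of 4 */
	return 0;
}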
/* convert from alsa native to oss values (0-100) */
memset(&id, 0, sizeof(id));
id.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
- strlcpy(id.name, name, sizeof(id.name));
+ strscpy(id.name, name, sizeof(id.name));
id.index = index;
return snd_ctl_find_id(card, &id);
}
mixer->oss_dev_alloc = 1;
mixer->card = card;
if (*card->mixername)
- strlcpy(mixer->name, card->mixername, sizeof(mixer->name));
+ strscpy(mixer->name, card->mixername, sizeof(mixer->name));
else
snprintf(mixer->name, sizeof(mixer->name),
"mixer%i", card->number);
if (plugin->src_format.rate < plugin->dst_format.rate) {
res = (((frames * data->pitch) + (BITS/2)) >> SHIFT);
} else {
- res = (((frames << SHIFT) + (data->pitch / 2)) / data->pitch);
+ res = DIV_ROUND_CLOSEST(frames << SHIFT, data->pitch);
}
if (data->old_src_frames > 0) {
snd_pcm_sframes_t frames1 = frames, res1 = data->old_dst_frames;
return 0;
data = (struct rate_priv *)plugin->extra_data;
if (plugin->src_format.rate < plugin->dst_format.rate) {
- res = (((frames << SHIFT) + (data->pitch / 2)) / data->pitch);
+ res = DIV_ROUND_CLOSEST(frames << SHIFT, data->pitch);
} else {
res = (((frames * data->pitch) + (BITS/2)) >> SHIFT);
}
init_waitqueue_head(&pcm->open_wait);
INIT_LIST_HEAD(&pcm->list);
if (id)
- strlcpy(pcm->id, id, sizeof(pcm->id));
+ strscpy(pcm->id, id, sizeof(pcm->id));
err = snd_pcm_new_stream(pcm, SNDRV_PCM_STREAM_PLAYBACK,
playback_count);
info->device = pcm->device;
info->stream = substream->stream;
info->subdevice = substream->number;
- strlcpy(info->id, pcm->id, sizeof(info->id));
- strlcpy(info->name, pcm->name, sizeof(info->name));
+ strscpy(info->id, pcm->id, sizeof(info->id));
+ strscpy(info->name, pcm->name, sizeof(info->name));
info->dev_class = pcm->dev_class;
info->dev_subclass = pcm->dev_subclass;
info->subdevices_count = pstr->substream_count;
info->subdevices_avail = pstr->substream_count - pstr->substream_opened;
- strlcpy(info->subname, substream->name, sizeof(info->subname));
+ strscpy(info->subname, substream->name, sizeof(info->subname));
return 0;
}
INIT_LIST_HEAD(&rmidi->streams[SNDRV_RAWMIDI_STREAM_OUTPUT].substreams);
if (id != NULL)
- strlcpy(rmidi->id, id, sizeof(rmidi->id));
+ strscpy(rmidi->id, id, sizeof(rmidi->id));
snd_device_initialize(&rmidi->dev, card);
rmidi->dev.release = release_rawmidi_device;
snd_use_lock_init(&mdev->use_lock);
/* copy and truncate the name of synth device */
- strlcpy(mdev->name, pinfo->name, sizeof(mdev->name));
+ strscpy(mdev->name, pinfo->name, sizeof(mdev->name));
/* create MIDI coder */
if (snd_midi_event_new(MAX_MIDI_EVENT_BUF, &mdev->coder) < 0) {
inf->device = dev;
inf->dev_type = 0; /* FIXME: ?? */
inf->capabilities = 0; /* FIXME: ?? */
- strlcpy(inf->name, mdev->name, sizeof(inf->name));
+ strscpy(inf->name, mdev->name, sizeof(inf->name));
snd_use_lock_free(&mdev->use_lock);
return 0;
}
snd_use_lock_init(&rec->use_lock);
/* copy and truncate the name of synth device */
- strlcpy(rec->name, dev->name, sizeof(rec->name));
+ strscpy(rec->name, dev->name, sizeof(rec->name));
/* registration */
spin_lock_irqsave(&register_lock, flags);
inf->synth_subtype = 0;
inf->nr_voices = 16;
inf->device = dev;
- strlcpy(inf->name, minf.name, sizeof(inf->name));
+ strscpy(inf->name, minf.name, sizeof(inf->name));
} else {
if ((rec = get_synthdev(dp, dev)) == NULL)
return -ENXIO;
inf->synth_subtype = rec->synth_subtype;
inf->nr_voices = rec->nr_voices;
inf->device = dev;
- strlcpy(inf->name, rec->name, sizeof(inf->name));
+ strscpy(inf->name, rec->name, sizeof(inf->name));
snd_use_lock_free(&rec->use_lock);
}
return 0;
info->queue = q->queue;
info->owner = q->owner;
info->locked = q->locked;
- strlcpy(info->name, q->name, sizeof(info->name));
+ strscpy(info->name, q->name, sizeof(info->name));
queuefree(q);
return 0;
extlen = 0;
if (snd_seq_ev_is_variable(event)) {
extlen = event->data.ext.len & ~SNDRV_SEQ_EXT_MASK;
- ncells = (extlen + sizeof(struct snd_seq_event) - 1) / sizeof(struct snd_seq_event);
+ ncells = DIV_ROUND_UP(extlen, sizeof(struct snd_seq_event));
}
if (ncells >= pool->total_elements)
return -ENOMEM;
/* set port name */
if (info->name[0])
- strlcpy(port->name, info->name, sizeof(port->name));
+ strscpy(port->name, info->name, sizeof(port->name));
/* set capabilities */
port->capability = info->capability;
return -EINVAL;
/* get port name */
- strlcpy(info->name, port->name, sizeof(info->name));
+ strscpy(info->name, port->name, sizeof(info->name));
/* get capabilities */
info->capability = port->capability;
/* Set up the port */
memset(&portinfo, 0, sizeof(portinfo));
portinfo.addr.client = client;
- strlcpy(portinfo.name, portname ? portname : "Unnamed port",
+ strscpy(portinfo.name, portname ? portname : "Unnamed port",
sizeof(portinfo.name));
portinfo.capability = cap;
#include <linux/time.h>
#include <linux/device.h>
#include <linux/module.h>
+#include <linux/debugfs.h>
#include <sound/core.h>
#include <sound/minors.h>
#include <sound/info.h>
int snd_ecards_limit;
EXPORT_SYMBOL(snd_ecards_limit);
+#ifdef CONFIG_SND_DEBUG
+struct dentry *sound_debugfs_root;
+EXPORT_SYMBOL_GPL(sound_debugfs_root);
+#endif
+
static struct snd_minor *snd_minors[SNDRV_OS_MINORS];
static DEFINE_MUTEX(sound_mutex);
unregister_chrdev(major, "alsa");
return -ENOMEM;
}
+
+#ifdef CONFIG_SND_DEBUG
+ sound_debugfs_root = debugfs_create_dir("sound", NULL);
+#endif
#ifndef MODULE
pr_info("Advanced Linux Sound Architecture Driver Initialized.\n");
#endif
static void __exit alsa_sound_exit(void)
{
+#ifdef CONFIG_SND_DEBUG
+ debugfs_remove(sound_debugfs_root);
+#endif
snd_info_done();
unregister_chrdev(major, "alsa");
}
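
The create/remove pairing above is all the lifecycle management debugfs needs; the API is deliberately tolerant of failure. A module-shaped sketch of the same pattern (example_* names are invented):

#include <linux/debugfs.h>
#include <linux/module.h>

static struct dentry *example_root;
static u32 example_status;

static int __init example_init(void)
{
	/* no error check needed: on failure this returns an ERR_PTR,
	 * which the other debugfs_* calls silently accept and ignore */
	example_root = debugfs_create_dir("example", NULL);
	debugfs_create_u32("status", 0444, example_root, &example_status);
	return 0;
}

static void __exit example_exit(void)
{
	/* removes the whole subtree; NULL/ERR_PTR are tolerated too */
	debugfs_remove(example_root);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");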
timer->tmr_device = tid->device;
timer->tmr_subdevice = tid->subdevice;
if (id)
- strlcpy(timer->id, id, sizeof(timer->id));
+ strscpy(timer->id, id, sizeof(timer->id));
timer->sticks = 1;
INIT_LIST_HEAD(&timer->device_list);
INIT_LIST_HEAD(&timer->open_list_head);
ginfo->card = t->card ? t->card->number : -1;
if (t->hw.flags & SNDRV_TIMER_HW_SLAVE)
ginfo->flags |= SNDRV_TIMER_FLG_SLAVE;
- strlcpy(ginfo->id, t->id, sizeof(ginfo->id));
- strlcpy(ginfo->name, t->name, sizeof(ginfo->name));
+ strscpy(ginfo->id, t->id, sizeof(ginfo->id));
+ strscpy(ginfo->name, t->name, sizeof(ginfo->name));
ginfo->resolution = t->hw.resolution;
if (t->hw.resolution_min > 0) {
ginfo->resolution_min = t->hw.resolution_min;
info->card = t->card ? t->card->number : -1;
if (t->hw.flags & SNDRV_TIMER_HW_SLAVE)
info->flags |= SNDRV_TIMER_FLG_SLAVE;
- strlcpy(info->id, t->id, sizeof(info->id));
- strlcpy(info->name, t->name, sizeof(info->name));
+ strscpy(info->id, t->id, sizeof(info->id));
+ strscpy(info->name, t->name, sizeof(info->name));
info->resolution = t->hw.resolution;
if (copy_to_user(_info, info, sizeof(*_info)))
err = -EFAULT;
info.card = t->card ? t->card->number : -1;
if (t->hw.flags & SNDRV_TIMER_HW_SLAVE)
info.flags |= SNDRV_TIMER_FLG_SLAVE;
- strlcpy(info.id, t->id, sizeof(info.id));
- strlcpy(info.name, t->name, sizeof(info.name));
+ strscpy(info.id, t->id, sizeof(info.id));
+ strscpy(info.name, t->name, sizeof(info.name));
info.resolution = t->hw.resolution;
if (copy_to_user(_info, &info, sizeof(*_info)))
return -EFAULT;
dpcm->period_update_pending = 1;
}
tick = dpcm->period_size_frac - dpcm->irq_pos;
- tick = (tick + dpcm->pcm_bps - 1) / dpcm->pcm_bps;
+ tick = DIV_ROUND_UP(tick, dpcm->pcm_bps);
mod_timer(&dpcm->timer, jiffies + tick);
return 0;
static void dummy_systimer_rearm(struct dummy_systimer_pcm *dpcm)
{
mod_timer(&dpcm->timer, jiffies +
- (dpcm->frac_period_rest + dpcm->rate - 1) / dpcm->rate);
+ DIV_ROUND_UP(dpcm->frac_period_rest, dpcm->rate));
}
static void dummy_systimer_update(struct dummy_systimer_pcm *dpcm)
return;
opl3->oss_seq_dev = dev;
- strlcpy(dev->name, name, sizeof(dev->name));
+ strscpy(dev->name, name, sizeof(dev->name));
arg = SNDRV_SEQ_DEVICE_ARGPTR(dev);
arg->type = SYNTH_TYPE_FM;
if (opl3->hardware < OPL3_HW_OPL3) {
}
if (name)
- strlcpy(patch->name, name, sizeof(patch->name));
+ strscpy(patch->name, name, sizeof(patch->name));
return 0;
}
chip->ibl.size = 0;
vx_set_ibl(chip, &chip->ibl); /* query the info */
if (preferred > 0) {
- chip->ibl.size = ((preferred + chip->ibl.granularity - 1) /
- chip->ibl.granularity) * chip->ibl.granularity;
+ chip->ibl.size = roundup(preferred, chip->ibl.granularity);
if (chip->ibl.size > chip->ibl.max_size)
chip->ibl.size = chip->ibl.max_size;
} else
memset(&event, 0, sizeof(event));
count = min_t(long, count, sizeof(event.lock_status));
- if (bebob->dev_lock_changed) {
- event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS;
- event.lock_status.status = (bebob->dev_lock_count > 0);
- bebob->dev_lock_changed = false;
- }
+ event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS;
+ event.lock_status.status = (bebob->dev_lock_count > 0);
+ bebob->dev_lock_changed = false;
spin_unlock_irq(&bebob->lock);
info.card = dev->card->index;
*(__be32 *)&info.guid[0] = cpu_to_be32(dev->config_rom[3]);
*(__be32 *)&info.guid[4] = cpu_to_be32(dev->config_rom[4]);
- strlcpy(info.device_name, dev_name(&dev->device),
+ strscpy(info.device_name, dev_name(&dev->device),
sizeof(info.device_name));
if (copy_to_user(arg, &info, sizeof(info)))
# SPDX-License-Identifier: GPL-2.0-only
snd-dice-objs := dice-transaction.o dice-stream.o dice-proc.o dice-midi.o \
dice-pcm.o dice-hwdep.o dice.o dice-tcelectronic.o \
- dice-alesis.o dice-extension.o dice-mytek.o dice-presonus.o
+ dice-alesis.o dice-extension.o dice-mytek.o dice-presonus.o \
+ dice-harman.o
obj-$(CONFIG_SND_DICE) += snd-dice.o
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+// dice-harman.c - a part of driver for DICE based devices
+//
+// Copyright (c) 2021 Takashi Sakamoto
+//
+// Licensed under the terms of the GNU General Public License, version 2.
+
+#include "dice.h"
+
+int snd_dice_detect_harman_formats(struct snd_dice *dice)
+{
+ int i;
+
+	// Lexicon I-ONYX FW810s supports sampling transfer frequencies up to
+	// 96.0 kHz. It has 12 PCM channels and 1 MIDI channel in its first tx
+	// stream, and 10 PCM channels and 1 MIDI channel in its first rx
+	// stream, at all of the supported frequencies.
+ for (i = 0; i < 2; ++i) {
+ dice->tx_pcm_chs[0][i] = 12;
+ dice->tx_midi_ports[0] = 1;
+ dice->rx_pcm_chs[0][i] = 10;
+ dice->rx_midi_ports[0] = 1;
+ }
+
+ return 0;
+}
info.card = dev->card->index;
*(__be32 *)&info.guid[0] = cpu_to_be32(dev->config_rom[3]);
*(__be32 *)&info.guid[4] = cpu_to_be32(dev->config_rom[4]);
- strlcpy(info.device_name, dev_name(&dev->device),
+ strscpy(info.device_name, dev_name(&dev->device),
sizeof(info.device_name));
if (copy_to_user(arg, &info, sizeof(info)))
#define OUI_MYTEK 0x001ee8
#define OUI_SSL 0x0050c2 // Actually ID reserved by IEEE.
#define OUI_PRESONUS 0x000a92
+#define OUI_HARMAN 0x000fd7
#define DICE_CATEGORY_ID 0x04
#define WEISS_CATEGORY_ID 0x00
#define LOUD_CATEGORY_ID 0x10
+#define HARMAN_CATEGORY_ID 0x20
#define MODEL_ALESIS_IO_BOTH 0x000001
category = WEISS_CATEGORY_ID;
else if (vendor == OUI_LOUD)
category = LOUD_CATEGORY_ID;
+ else if (vendor == OUI_HARMAN)
+ category = HARMAN_CATEGORY_ID;
else
category = DICE_CATEGORY_ID;
if (device->config_rom[3] != ((vendor << 8) | category) ||
.model_id = 0x000008,
.driver_data = (kernel_ulong_t)snd_dice_detect_presonus_formats,
},
+ // Lexicon I-ONYX FW810S.
+ {
+ .match_flags = IEEE1394_MATCH_VENDOR_ID |
+ IEEE1394_MATCH_MODEL_ID,
+ .vendor_id = OUI_HARMAN,
+ .model_id = 0x000001,
+ .driver_data = (kernel_ulong_t)snd_dice_detect_harman_formats,
+ },
{
.match_flags = IEEE1394_MATCH_VERSION,
.version = DICE_INTERFACE,
int snd_dice_detect_extension_formats(struct snd_dice *dice);
int snd_dice_detect_mytek_formats(struct snd_dice *dice);
int snd_dice_detect_presonus_formats(struct snd_dice *dice);
+int snd_dice_detect_harman_formats(struct snd_dice *dice);
#endif
info.card = dev->card->index;
*(__be32 *)&info.guid[0] = cpu_to_be32(dev->config_rom[3]);
*(__be32 *)&info.guid[4] = cpu_to_be32(dev->config_rom[4]);
- strlcpy(info.device_name, dev_name(&dev->device),
+ strscpy(info.device_name, dev_name(&dev->device),
sizeof(info.device_name));
if (copy_to_user(arg, &info, sizeof(info)))
}
memset(&event, 0, sizeof(event));
- if (ff->dev_lock_changed) {
- event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS;
- event.lock_status.status = (ff->dev_lock_count > 0);
- ff->dev_lock_changed = false;
+ event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS;
+ event.lock_status.status = (ff->dev_lock_count > 0);
+ ff->dev_lock_changed = false;
- count = min_t(long, count, sizeof(event.lock_status));
- }
+ count = min_t(long, count, sizeof(event.lock_status));
spin_unlock_irq(&ff->lock);
info.card = dev->card->index;
*(__be32 *)&info.guid[0] = cpu_to_be32(dev->config_rom[3]);
*(__be32 *)&info.guid[4] = cpu_to_be32(dev->config_rom[4]);
- strlcpy(info.device_name, dev_name(&dev->device),
+ strscpy(info.device_name, dev_name(&dev->device),
sizeof(info.device_name));
if (copy_to_user(arg, &info, sizeof(info)))
info.card = dev->card->index;
*(__be32 *)&info.guid[0] = cpu_to_be32(dev->config_rom[3]);
*(__be32 *)&info.guid[4] = cpu_to_be32(dev->config_rom[4]);
- strlcpy(info.device_name, dev_name(&dev->device),
+ strscpy(info.device_name, dev_name(&dev->device),
sizeof(info.device_name));
if (copy_to_user(arg, &info, sizeof(info)))
info.card = dev->card->index;
*(__be32 *)&info.guid[0] = cpu_to_be32(dev->config_rom[3]);
*(__be32 *)&info.guid[4] = cpu_to_be32(dev->config_rom[4]);
- strlcpy(info.device_name, dev_name(&dev->device),
+ strscpy(info.device_name, dev_name(&dev->device),
sizeof(info.device_name));
if (copy_to_user(arg, &info, sizeof(info)))
}
memset(&event, 0, sizeof(event));
- if (oxfw->dev_lock_changed) {
- event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS;
- event.lock_status.status = (oxfw->dev_lock_count > 0);
- oxfw->dev_lock_changed = false;
+ event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS;
+ event.lock_status.status = (oxfw->dev_lock_count > 0);
+ oxfw->dev_lock_changed = false;
- count = min_t(long, count, sizeof(event.lock_status));
- }
+ count = min_t(long, count, sizeof(event.lock_status));
spin_unlock_irq(&oxfw->lock);
info.card = dev->card->index;
*(__be32 *)&info.guid[0] = cpu_to_be32(dev->config_rom[3]);
*(__be32 *)&info.guid[4] = cpu_to_be32(dev->config_rom[4]);
- strlcpy(info.device_name, dev_name(&dev->device),
+ strscpy(info.device_name, dev_name(&dev->device),
sizeof(info.device_name));
if (copy_to_user(arg, &info, sizeof(info)))
info.card = dev->card->index;
*(__be32 *)&info.guid[0] = cpu_to_be32(dev->config_rom[3]);
*(__be32 *)&info.guid[4] = cpu_to_be32(dev->config_rom[4]);
- strlcpy(info.device_name, dev_name(&dev->device),
+ strscpy(info.device_name, dev_name(&dev->device),
sizeof(info.device_name));
if (copy_to_user(arg, &info, sizeof(info)))
pos_adj = bus->bdl_pos_adj;
if (!azx_dev->no_period_wakeup && pos_adj > 0) {
pos_align = pos_adj;
- pos_adj = (pos_adj * runtime->rate + 47999) / 48000;
+ pos_adj = DIV_ROUND_UP(pos_adj * runtime->rate, 48000);
if (!pos_adj)
pos_adj = pos_align;
else
- pos_adj = ((pos_adj + pos_align - 1) / pos_align) *
- pos_align;
+ pos_adj = roundup(pos_adj, pos_align);
pos_adj = frames_to_bytes(runtime, pos_adj);
if (pos_adj >= period_bytes) {
dev_warn(bus->dev, "Too big adjustment %d\n",
NULL
};
-static struct attribute_group hdac_dev_attr_group = {
+static const struct attribute_group hdac_dev_attr_group = {
.attrs = hdac_dev_attrs,
};
list_add_tail(&bus->buses, &master->buses);
bus->master = master;
}
- strlcpy(bus->name, name, sizeof(bus->name));
+ strscpy(bus->name, name, sizeof(bus->name));
err = snd_device_new(card, SNDRV_DEV_BUS, bus, &ops);
if (err < 0) {
snd_i2c_bus_free(bus);
if (device == NULL)
return -ENOMEM;
device->addr = addr;
- strlcpy(device->name, name, sizeof(device->name));
+ strscpy(device->name, name, sizeof(device->name));
list_add_tail(&device->list, &bus->devices);
device->bus = bus;
*rdevice = device;
if (error < 0)
goto out;
- strlcpy(card->driver, "AD1848", sizeof(card->driver));
- strlcpy(card->shortname, chip->pcm->name, sizeof(card->shortname));
+ strscpy(card->driver, "AD1848", sizeof(card->driver));
+ strscpy(card->shortname, chip->pcm->name, sizeof(card->shortname));
if (!thinkpad[n])
snprintf(card->longname, sizeof(card->longname),
return error;
}
-static int snd_ad1848_remove(struct device *dev, unsigned int n)
+static void snd_ad1848_remove(struct device *dev, unsigned int n)
{
snd_card_free(dev_get_drvdata(dev));
- return 0;
}
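
This is the first of many identical conversions below: struct isa_driver's .remove callback now returns void, since the ISA bus code never had anything useful to do with an error from it. A skeleton under the new prototype (a sketch; example_* names are invented and the probe body is elided):

#include <linux/errno.h>
#include <linux/isa.h>
#include <sound/core.h>

static int example_match(struct device *dev, unsigned int n)
{
	return 1;	/* claim every requested card index */
}

static int example_probe(struct device *dev, unsigned int n)
{
	return -ENODEV;	/* card setup would go here */
}

static void example_remove(struct device *dev, unsigned int n)
{
	/* no way (and no need) to report failure on the remove path */
	snd_card_free(dev_get_drvdata(dev));
}

static struct isa_driver example_isa_driver = {
	.match	= example_match,
	.probe	= example_probe,
	.remove	= example_remove,
	.driver	= {
		.name	= "example",
	},
};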
#ifdef CONFIG_PM
return error;
}
-static int snd_adlib_remove(struct device *dev, unsigned int n)
+static void snd_adlib_remove(struct device *dev, unsigned int n)
{
snd_card_free(dev_get_drvdata(dev));
- return 0;
}
static struct isa_driver snd_adlib_driver = {
return err;
}
-static int snd_cmi8328_remove(struct device *pdev, unsigned int dev)
+static void snd_cmi8328_remove(struct device *pdev, unsigned int dev)
{
struct snd_card *card = dev_get_drvdata(pdev);
struct snd_cmi8328 *cmi = card->private_data;
snd_cmi8328_cfg_write(cmi->port, CFG2, 0);
snd_cmi8328_cfg_write(cmi->port, CFG3, 0);
snd_card_free(card);
- return 0;
}
#ifdef CONFIG_PM
return 0;
}
-static int snd_cmi8330_isa_remove(struct device *devptr,
+static void snd_cmi8330_isa_remove(struct device *devptr,
unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
#ifdef CONFIG_PM
if (error < 0)
goto out;
- strlcpy(card->driver, "CS4231", sizeof(card->driver));
- strlcpy(card->shortname, chip->pcm->name, sizeof(card->shortname));
+ strscpy(card->driver, "CS4231", sizeof(card->driver));
+ strscpy(card->shortname, chip->pcm->name, sizeof(card->shortname));
if (dma2[n] < 0)
snprintf(card->longname, sizeof(card->longname),
return error;
}
-static int snd_cs4231_remove(struct device *dev, unsigned int n)
+static void snd_cs4231_remove(struct device *dev, unsigned int n)
{
snd_card_free(dev_get_drvdata(dev));
- return 0;
}
#ifdef CONFIG_PM
if (err < 0)
return err;
}
- strlcpy(card->driver, chip->pcm->name, sizeof(card->driver));
- strlcpy(card->shortname, chip->pcm->name, sizeof(card->shortname));
+ strscpy(card->driver, chip->pcm->name, sizeof(card->driver));
+ strscpy(card->shortname, chip->pcm->name, sizeof(card->shortname));
if (dma2[dev] < 0)
snprintf(card->longname, sizeof(card->longname),
"%s at 0x%lx, irq %i, dma %i",
return 0;
}
-static int snd_cs423x_isa_remove(struct device *pdev,
+static void snd_cs423x_isa_remove(struct device *pdev,
unsigned int dev)
{
snd_card_free(dev_get_drvdata(pdev));
- return 0;
}
#ifdef CONFIG_PM
if (error < 0)
return error;
- strlcpy(card->driver, "ES1688", sizeof(card->driver));
- strlcpy(card->shortname, chip->pcm->name, sizeof(card->shortname));
+ strscpy(card->driver, "ES1688", sizeof(card->driver));
+ strscpy(card->shortname, chip->pcm->name, sizeof(card->shortname));
snprintf(card->longname, sizeof(card->longname),
"%s at 0x%lx, irq %i, dma %i", chip->pcm->name, chip->port,
chip->irq, chip->dma8);
return error;
}
-static int snd_es1688_isa_remove(struct device *dev, unsigned int n)
+static void snd_es1688_isa_remove(struct device *dev, unsigned int n)
{
snd_card_free(dev_get_drvdata(dev));
- return 0;
}
static struct isa_driver snd_es1688_driver = {
}
}
-static int snd_es18xx_isa_remove(struct device *devptr,
- unsigned int dev)
+static void snd_es18xx_isa_remove(struct device *devptr,
+ unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
#ifdef CONFIG_PM
return err;
}
-static int snd_galaxy_remove(struct device *dev, unsigned int n)
+static void snd_galaxy_remove(struct device *dev, unsigned int n)
{
snd_card_free(dev_get_drvdata(dev));
- return 0;
}
static struct isa_driver snd_galaxy_driver = {
return error;
}
-static int snd_gusclassic_remove(struct device *dev, unsigned int n)
+static void snd_gusclassic_remove(struct device *dev, unsigned int n)
{
snd_card_free(dev_get_drvdata(dev));
- return 0;
}
static struct isa_driver snd_gusclassic_driver = {
return error;
}
-static int snd_gusextreme_remove(struct device *dev, unsigned int n)
+static void snd_gusextreme_remove(struct device *dev, unsigned int n)
{
snd_card_free(dev_get_drvdata(dev));
- return 0;
}
static struct isa_driver snd_gusextreme_driver = {
return err;
}
-static int snd_gusmax_remove(struct device *devptr, unsigned int dev)
+static void snd_gusmax_remove(struct device *devptr, unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
#define DEV_NAME "gusmax"
}
}
-static int snd_interwave_isa_remove(struct device *devptr, unsigned int dev)
+static void snd_interwave_isa_remove(struct device *devptr, unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
static struct isa_driver snd_interwave_driver = {
#endif
}
-static int snd_msnd_isa_remove(struct device *pdev, unsigned int dev)
+static void snd_msnd_isa_remove(struct device *pdev, unsigned int dev)
{
snd_msnd_unload(dev_get_drvdata(pdev));
- return 0;
}
static struct isa_driver snd_msnd_driver = {
return 0;
}
-static int snd_opl3sa2_isa_remove(struct device *devptr,
+static void snd_opl3sa2_isa_remove(struct device *devptr,
unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
#ifdef CONFIG_PM
return 0;
}
-static int snd_miro_isa_remove(struct device *devptr,
+static void snd_miro_isa_remove(struct device *devptr,
unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
#define DEV_NAME "miro"
return 0;
}
-static int snd_opti9xx_isa_remove(struct device *devptr,
- unsigned int dev)
+static void snd_opti9xx_isa_remove(struct device *devptr,
+ unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
#ifdef CONFIG_PM
return err;
}
-static int snd_jazz16_remove(struct device *devptr, unsigned int dev)
+static void snd_jazz16_remove(struct device *devptr, unsigned int dev)
{
struct snd_card *card = dev_get_drvdata(devptr);
snd_card_free(card);
- return 0;
}
#ifdef CONFIG_PM
}
}
-static int snd_sb16_isa_remove(struct device *pdev, unsigned int dev)
+static void snd_sb16_isa_remove(struct device *pdev, unsigned int dev)
{
snd_card_free(dev_get_drvdata(pdev));
- return 0;
}
#ifdef CONFIG_PM
return err;
/* fill in codec header */
- strlcpy(p->codec_name, info.codec_name, sizeof(p->codec_name));
+ strscpy(p->codec_name, info.codec_name, sizeof(p->codec_name));
p->func_nr = func_nr;
p->mode = le16_to_cpu(funcdesc_h.flags_play_rec);
switch (le16_to_cpu(funcdesc_h.VOC_type)) {
return err;
}
-static int snd_sb8_remove(struct device *pdev, unsigned int dev)
+static void snd_sb8_remove(struct device *pdev, unsigned int dev)
{
snd_card_free(dev_get_drvdata(pdev));
- return 0;
}
#ifdef CONFIG_PM
ctl = snd_ctl_new1(&newctls[type], chip);
if (! ctl)
return -ENOMEM;
- strlcpy(ctl->id.name, name, sizeof(ctl->id.name));
+ strscpy(ctl->id.name, name, sizeof(ctl->id.name));
ctl->id.index = index;
ctl->private_value = value;
if ((err = snd_ctl_add(chip->card, ctl)) < 0)
return err;
}
-static int snd_sc6000_remove(struct device *devptr, unsigned int dev)
+static void snd_sc6000_remove(struct device *devptr, unsigned int dev)
{
struct snd_card *card = dev_get_drvdata(devptr);
char __iomem **vport = card->private_data;
release_region(mss_port[dev], 4);
snd_card_free(card);
- return 0;
}
static struct isa_driver snd_sc6000_driver = {
return ret;
}
-static int snd_sscape_remove(struct device *devptr, unsigned int dev)
+static void snd_sscape_remove(struct device *devptr, unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
#define DEV_NAME "sscape"
return 0;
}
-static int snd_wavefront_isa_remove(struct device *devptr,
+static void snd_wavefront_isa_remove(struct device *devptr,
unsigned int dev)
{
snd_card_free(dev_get_drvdata(devptr));
- return 0;
}
#define DEV_NAME "wavefront"
{
mixer_info info;
memset(&info, 0, sizeof(info));
- strlcpy(info.id, dmasound.mach.name2, sizeof(info.id));
- strlcpy(info.name, dmasound.mach.name2, sizeof(info.name));
+ strscpy(info.id, dmasound.mach.name2, sizeof(info.id));
+ strscpy(info.name, dmasound.mach.name2, sizeof(info.name));
info.modify_counter = mixer.modify_counter;
if (copy_to_user((void __user *)arg, &info, sizeof(info)))
return -EFAULT;
return err;
/* check PCI availability (32bit DMA) */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(32)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32))) {
dev_err(card->dev, "error setting 32-bit DMA mask.\n");
pci_disable_device(pci);
return -ENXIO;
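
dma_set_mask_and_coherent(), used for this and the analogous conversions below, folds the two-step mask setup into a single call; per the DMA API it behaves roughly like this (a sketch, not the verbatim include/linux/dma-mapping.h source):

#include <linux/dma-mapping.h>

static inline int sketch_dma_set_mask_and_coherent(struct device *dev,
						   u64 mask)
{
	int rc = dma_set_mask(dev, mask);

	/* if the streaming mask was accepted, the same coherent mask is
	 * guaranteed to be accepted too, so its result can be ignored */
	if (rc == 0)
		dma_set_coherent_mask(dev, mask);
	return rc;
}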
if (err < 0)
return err;
/* check, if we can restrict PCI DMA transfers to 31 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(31)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(31)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(31))) {
dev_err(card->dev,
"architecture does not support 31bit PCI busmaster DMA\n");
pci_disable_device(pci);
if ((err = pci_enable_device(pci)) < 0)
return err;
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(28)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(28)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(28))) {
dev_err(card->dev, "error setting 28bit DMA mask\n");
pci_disable_device(pci);
return -ENXIO;
return err;
}
/* check, if we can restrict PCI DMA transfers to 24 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(24)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(24)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(24))) {
dev_err(&pci->dev, "architecture does not support 24bit PCI busmaster DMA\n");
pci_disable_device(pci);
return -ENXIO;
int lines;
int cols = 8;
- lines = (len + cols - 1) / cols;
+ lines = DIV_ROUND_UP(len, cols);
if (lines > 8)
lines = 8;
// check PCI availability (DMA).
if ((err = pci_enable_device(pci)) < 0)
return err;
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(32)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32))) {
dev_err(card->dev, "error to set DMA mask\n");
pci_disable_device(pci);
return -ENXIO;
pci_set_master(pci);
/* check PCI availability (32bit DMA) */
- if ((dma_set_mask(&pci->dev, DMA_BIT_MASK(32)) < 0) ||
- (dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32)) < 0)) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32))) {
dev_err(card->dev, "Impossible to set 32bit mask DMA\n");
pci_disable_device(pci);
return -ENXIO;
chip->irq = -1;
/* check if we can restrict PCI DMA transfers to 24 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(24)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(24)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(24))) {
dev_err(card->dev,
"architecture does not support 24bit PCI busmaster DMA\n"
);
current_block = chip->current_line * 16 / chip->lines;
irq_block = status >> INT_RISCS_SHIFT;
if (current_block != irq_block)
- chip->current_line = (irq_block * chip->lines + 15) / 16;
+ chip->current_line = DIV_ROUND_UP(irq_block * chip->lines,
+ 16);
snd_pcm_period_elapsed(chip->substream);
}
err = pci_enable_device(pci);
if (err < 0)
return err;
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(32)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32))) {
dev_err(card->dev, "error to set 32bit mask DMA\n");
pci_disable_device(pci);
return -ENXIO;
correctionPerGOF = tmp1 / GOF_PER_SEC;
tmp1 -= correctionPerGOF * GOF_PER_SEC;
correctionPerSec = tmp1;
- initialDelay = ((48000 * 24) + rate - 1) / rate;
+ initialDelay = DIV_ROUND_UP(48000 * 24, rate);
/*
* Fill in the VariDecimate control block.
if ((err = pci_enable_device(pci)) < 0)
return err;
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(32)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32))) {
dev_warn(card->dev, "unable to get 32bit dma\n");
err = -ENXIO;
goto pcifail;
/* drop the original AD1888 HPF control */
memset(&elem, 0, sizeof(elem));
elem.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
- strlcpy(elem.name, "High Pass Filter Enable", sizeof(elem.name));
+ strscpy(elem.name, "High Pass Filter Enable", sizeof(elem.name));
snd_ctl_remove_id(card, &elem);
/* drop the original V_REFOUT control */
memset(&elem, 0, sizeof(elem));
elem.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
- strlcpy(elem.name, "V_REFOUT Enable", sizeof(elem.name));
+ strscpy(elem.name, "V_REFOUT Enable", sizeof(elem.name));
snd_ctl_remove_id(card, &elem);
/* add the OLPC-specific controls */
return err;
/* Set DMA transfer mask */
- if (!dma_set_mask(&pci->dev, DMA_BIT_MASK(dma_bits))) {
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(dma_bits));
- } else {
- dma_set_mask(&pci->dev, DMA_BIT_MASK(32));
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32));
- }
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(dma_bits)))
+ dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32));
if (!hw->io_base) {
err = pci_request_regions(pci, "XFi");
return err;
/* Set DMA transfer mask */
- if (!dma_set_mask(&pci->dev, DMA_BIT_MASK(dma_bits))) {
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(dma_bits));
- } else {
- dma_set_mask(&pci->dev, DMA_BIT_MASK(32));
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32));
- }
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(dma_bits)))
+ dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32));
if (!hw->io_base) {
err = pci_request_regions(pci, "XFi");
pcm->private_data = atc;
pcm->info_flags = 0;
pcm->dev_subclass = SNDRV_PCM_SUBCLASS_GENERIC_MIX;
- strlcpy(pcm->name, device_name, sizeof(pcm->name));
+ strscpy(pcm->name, device_name, sizeof(pcm->name));
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &ct_pcm_playback_ops);
mgr->type = NUM_RSCTYP;
- mgr->rscs = kzalloc(((amount + 8 - 1) / 8), GFP_KERNEL);
+ mgr->rscs = kzalloc(DIV_ROUND_UP(amount, 8), GFP_KERNEL);
if (!mgr->rscs)
return -ENOMEM;
}
#endif
- strlcpy(card->driver, emu->card_capabilities->driver,
+ strscpy(card->driver, emu->card_capabilities->driver,
sizeof(card->driver));
- strlcpy(card->shortname, emu->card_capabilities->name,
+ strscpy(card->shortname, emu->card_capabilities->name,
sizeof(card->shortname));
snprintf(card->longname, sizeof(card->longname),
"%s (rev.%d, serial:0x%x) at 0x%lx, irq %i",
emu->serial);
if (!*card->id && c->id)
- strlcpy(card->id, c->id, sizeof(card->id));
+ strscpy(card->id, c->id, sizeof(card->id));
is_audigy = emu->audigy = c->emu10k2_chip;
memset(gctl, 0, sizeof(*gctl));
id = &ctl->kcontrol->id;
gctl->id.iface = (__force int)id->iface;
- strlcpy(gctl->id.name, id->name, sizeof(gctl->id.name));
+ strscpy(gctl->id.name, id->name, sizeof(gctl->id.name));
gctl->id.index = id->index;
gctl->id.device = id->device;
gctl->id.subdevice = id->subdevice;
err = snd_emu10k1_verify_controls(emu, icode, in_kernel);
if (err < 0)
goto __error;
- strlcpy(emu->fx8010.name, icode->name, sizeof(emu->fx8010.name));
+ strscpy(emu->fx8010.name, icode->name, sizeof(emu->fx8010.name));
/* stop FX processor - this may be dangerous, but it's better to miss
some samples than generate wrong ones - [jk] */
if (emu->audigy)
int err;
mutex_lock(&emu->fx8010.lock);
- strlcpy(icode->name, emu->fx8010.name, sizeof(icode->name));
+ strscpy(icode->name, emu->fx8010.name, sizeof(icode->name));
/* ok, do the main job */
err = snd_emu10k1_gpr_peek(emu, icode);
if (err >= 0)
struct snd_dma_buffer *dmab)
{
if (emu->iommu_workaround) {
- size_t npages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+ size_t npages = DIV_ROUND_UP(size, PAGE_SIZE);
size_t size_real = npages * PAGE_SIZE;
/*
unsigned int freq, r;
mutex_lock(&ensoniq->src_mutex);
- freq = ((rate << 15) + 1500) / 3000;
+ freq = DIV_ROUND_CLOSEST(rate << 15, 3000);
r = (snd_es1371_wait_src_ready(ensoniq) & (ES_1371_SRC_DISABLE |
ES_1371_DIS_P2 | ES_1371_DIS_R1)) |
ES_1371_DIS_P1;
unsigned int freq, r;
mutex_lock(&ensoniq->src_mutex);
- freq = ((rate << 15) + 1500) / 3000;
+ freq = DIV_ROUND_CLOSEST(rate << 15, 3000);
r = (snd_es1371_wait_src_ready(ensoniq) & (ES_1371_SRC_DISABLE |
ES_1371_DIS_P1 | ES_1371_DIS_R1)) |
ES_1371_DIS_P2;
if ((err = pci_enable_device(pci)) < 0)
return err;
/* check, if we can restrict PCI DMA transfers to 24 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(24)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(24)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(24))) {
dev_err(card->dev,
"architecture does not support 24bit PCI busmaster DMA\n");
pci_disable_device(pci);
if ((err = pci_enable_device(pci)) < 0)
return err;
/* check, if we can restrict PCI DMA transfers to 28 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(28)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(28)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(28))) {
dev_err(card->dev,
"architecture does not support 28bit PCI busmaster DMA\n");
pci_disable_device(pci);
if (!snd_tea575x_init(&chip->tea, THIS_MODULE)) {
dev_info(card->dev, "detected TEA575x radio type %s\n",
get_tea575x_gpio(chip)->name);
- strlcpy(chip->tea.card, get_tea575x_gpio(chip)->name,
+ strscpy(chip->tea.card, get_tea575x_gpio(chip)->name,
sizeof(chip->tea.card));
break;
}
chip->tea575x_tuner |= tuner_only;
}
if (!(chip->tea575x_tuner & TUNER_DISABLED)) {
- strlcpy(chip->tea.card, get_tea575x_gpio(chip)->name,
+ strscpy(chip->tea.card, get_tea575x_gpio(chip)->name,
sizeof(chip->tea.card));
}
#endif
}
if (!name)
return 0;
- strlcpy(label, name, maxlen);
+ strscpy(label, name, maxlen);
return 1;
}
EXPORT_SYMBOL_GPL(snd_hda_get_pin_label);
sizeof(imux->items[imux->num_items].label),
"%s %d", label, label_idx);
else
- strlcpy(imux->items[imux->num_items].label, label,
+ strscpy(imux->items[imux->num_items].label, label,
sizeof(imux->items[imux->num_items].label));
imux->items[imux->num_items].index = index;
imux->num_items++;
&pcm);
if (err < 0)
return err;
- strlcpy(pcm->name, cpcm->name, sizeof(pcm->name));
+ strscpy(pcm->name, cpcm->name, sizeof(pcm->name));
apcm = kzalloc(sizeof(*apcm), GFP_KERNEL);
if (apcm == NULL) {
snd_device_free(chip->card, pcm);
codec_info(codec, "HDMI: out of range MNL %d\n", mnl);
goto out_fail;
} else
- strlcpy(e->monitor_name, buf + ELD_FIXED_BYTES, mnl + 1);
+ strscpy(e->monitor_name, buf + ELD_FIXED_BYTES, mnl + 1);
for (i = 0; i < e->sad_count; i++) {
if (ELD_FIXED_BYTES + mnl + 3 * (i + 1) > size) {
if (*str)
return;
- strlcpy(str, chip_name, len);
+ strscpy(str, chip_name, len);
/* drop non-alnum chars after a space */
for (p = strchr(str, ' '); p; p = strchr(p + 1, ' ')) {
/* allow 64bit DMA address if supported by H/W */
if (!(gcap & AZX_GCAP_64OK))
dma_bits = 32;
- if (!dma_set_mask(&pci->dev, DMA_BIT_MASK(dma_bits))) {
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(dma_bits));
- } else {
- dma_set_mask(&pci->dev, DMA_BIT_MASK(32));
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32));
- }
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(dma_bits)))
+ dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32));
/* read number of streams from GCAP register instead of using
* hardcoded value
return -EBUSY;
strcpy(card->driver, "HDA-Intel");
- strlcpy(card->shortname, driver_short_names[chip->driver_type],
+ strscpy(card->shortname, driver_short_names[chip->driver_type],
sizeof(card->shortname));
snprintf(card->longname, sizeof(card->longname),
"%s at 0x%lx irq %i",
/* HSW/BDW controllers need this power */
if (CONTROLLER_IN_GPU(pci))
- hda->need_i915_power = 1;
+ hda->need_i915_power = true;
}
/* Request display power well for the HDA controller or codec. For
!is_jack_detectable(codec, nid);
if (base_name)
- strlcpy(name, base_name, sizeof(name));
+ strscpy(name, base_name, sizeof(name));
else
snd_hda_get_pin_label(codec, nid, cfg, name, sizeof(name), NULL);
if (phantom_jack)
#include <linux/moduleparam.h>
#include <linux/mutex.h>
#include <linux/of_device.h>
+#include <linux/reset.h>
#include <linux/slab.h>
#include <linux/time.h>
#include <linux/string.h>
struct hda_tegra {
struct azx chip;
struct device *dev;
- struct clk *hda_clk;
- struct clk *hda2codec_2x_clk;
- struct clk *hda2hdmi_clk;
+ struct reset_control *reset;
+ struct clk_bulk_data clocks[3];
+ unsigned int nclocks;
void __iomem *regs;
struct work_struct probe_work;
};
writel(v, hda->regs + HDA_IPFS_INTR_MASK);
}
-static int hda_tegra_enable_clocks(struct hda_tegra *data)
-{
- int rc;
-
- rc = clk_prepare_enable(data->hda_clk);
- if (rc)
- return rc;
- rc = clk_prepare_enable(data->hda2codec_2x_clk);
- if (rc)
- goto disable_hda;
- rc = clk_prepare_enable(data->hda2hdmi_clk);
- if (rc)
- goto disable_codec_2x;
-
- return 0;
-
-disable_codec_2x:
- clk_disable_unprepare(data->hda2codec_2x_clk);
-disable_hda:
- clk_disable_unprepare(data->hda_clk);
- return rc;
-}
-
-static void hda_tegra_disable_clocks(struct hda_tegra *data)
-{
- clk_disable_unprepare(data->hda2hdmi_clk);
- clk_disable_unprepare(data->hda2codec_2x_clk);
- clk_disable_unprepare(data->hda_clk);
-}
-
/*
* power management
*/
azx_stop_chip(chip);
azx_enter_link_reset(chip);
}
- hda_tegra_disable_clocks(hda);
+ clk_bulk_disable_unprepare(hda->nclocks, hda->clocks);
return 0;
}
struct hda_tegra *hda = container_of(chip, struct hda_tegra, chip);
int rc;
- rc = hda_tegra_enable_clocks(hda);
+ if (!chip->running) {
+ rc = reset_control_assert(hda->reset);
+ if (rc)
+ return rc;
+ }
+
+ rc = clk_bulk_prepare_enable(hda->nclocks, hda->clocks);
if (rc != 0)
return rc;
- if (chip && chip->running) {
+ if (chip->running) {
hda_tegra_init(hda);
azx_init_chip(chip, 1);
/* disable controller wake up event*/
azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
~STATESTS_INT_MASK);
+ } else {
+ usleep_range(10, 100);
+
+ rc = reset_control_deassert(hda->reset);
+ if (rc)
+ return rc;
}
return 0;
return 0;
}
-static int hda_tegra_init_clk(struct hda_tegra *hda)
-{
- struct device *dev = hda->dev;
-
- hda->hda_clk = devm_clk_get(dev, "hda");
- if (IS_ERR(hda->hda_clk)) {
- dev_err(dev, "failed to get hda clock\n");
- return PTR_ERR(hda->hda_clk);
- }
- hda->hda2codec_2x_clk = devm_clk_get(dev, "hda2codec_2x");
- if (IS_ERR(hda->hda2codec_2x_clk)) {
- dev_err(dev, "failed to get hda2codec_2x clock\n");
- return PTR_ERR(hda->hda2codec_2x_clk);
- }
- hda->hda2hdmi_clk = devm_clk_get(dev, "hda2hdmi");
- if (IS_ERR(hda->hda2hdmi_clk)) {
- dev_err(dev, "failed to get hda2hdmi clock\n");
- return PTR_ERR(hda->hda2hdmi_clk);
- }
-
- return 0;
-}
-
static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
{
struct hda_tegra *hda = container_of(chip, struct hda_tegra, chip);
return err;
}
- err = hda_tegra_init_clk(hda);
+ hda->reset = devm_reset_control_array_get_exclusive(&pdev->dev);
+ if (IS_ERR(hda->reset)) {
+ err = PTR_ERR(hda->reset);
+ goto out_free;
+ }
+
+ hda->clocks[hda->nclocks++].id = "hda";
+ hda->clocks[hda->nclocks++].id = "hda2hdmi";
+ hda->clocks[hda->nclocks++].id = "hda2codec_2x";
+
+ err = devm_clk_bulk_get(&pdev->dev, hda->nclocks, hda->clocks);
if (err < 0)
goto out_free;
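
The per-clock devm_clk_get()/clk_prepare_enable() chains are replaced by the bulk clock API, which acquires and enables a whole array in one call each, unwinding itself on partial failure. The pattern in isolation (a sketch using the same clock names; example_clocks_on is invented):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/kernel.h>

static int example_clocks_on(struct device *dev)
{
	static struct clk_bulk_data clocks[] = {
		{ .id = "hda" },
		{ .id = "hda2hdmi" },
		{ .id = "hda2codec_2x" },
	};
	int err;

	err = devm_clk_bulk_get(dev, ARRAY_SIZE(clocks), clocks);
	if (err < 0)
		return err;

	/* prepares and enables all three; disables already-enabled
	 * clocks itself if one of them fails */
	return clk_bulk_prepare_enable(ARRAY_SIZE(clocks), clocks);
}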
if (action == HDA_FIXUP_ACT_PRE_PROBE) {
spec->mute_led_eapd = 0x1b;
- spec->dynamic_eapd = 1;
+ spec->dynamic_eapd = true;
snd_hda_gen_add_mute_led_cdev(codec, cx_auto_vmaster_mute_led);
}
}
if (err < 0)
return err;
/* check, if we can restrict PCI DMA transfers to 28 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(28)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(28)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(28))) {
dev_err(card->dev,
"architecture does not support 28bit PCI busmaster DMA\n");
pci_disable_device(pci);
{
struct snd_ctl_elem_id sid = {0};
- strlcpy(sid.name, name, sizeof(sid.name));
+ strscpy(sid.name, name, sizeof(sid.name));
sid.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
return snd_ctl_find_id(card, &sid);
}
/* notify about master speaker mute change */
memset(&elem_id, 0, sizeof(elem_id));
elem_id.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
- strlcpy(elem_id.name, "Master Speakers Playback Switch",
+ strscpy(elem_id.name, "Master Speakers Playback Switch",
sizeof(elem_id.name));
kctl = snd_ctl_find_id(ice->card, &elem_id);
snd_ctl_notify(ice->card, SNDRV_CTL_EVENT_MASK_VALUE, &kctl->id);
/* and headphone mute change */
- strlcpy(elem_id.name, spec->wm8776.ctl[WM8776_CTL_HP_SW].name,
+ strscpy(elem_id.name, spec->wm8776.ctl[WM8776_CTL_HP_SW].name,
sizeof(elem_id.name));
kctl = snd_ctl_find_id(ice->card, &elem_id);
snd_ctl_notify(ice->card, SNDRV_CTL_EVENT_MASK_VALUE, &kctl->id);
{
struct snd_ctl_elem_id sid = {0};
- strlcpy(sid.name, name, sizeof(sid.name));
+ strscpy(sid.name, name, sizeof(sid.name));
sid.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
return snd_ctl_find_id(card, &sid);
}
unsigned int index_offset;
memset(&elem_id, 0, sizeof(elem_id));
- strlcpy(elem_id.name, ctl_name, sizeof(elem_id.name));
+ strscpy(elem_id.name, ctl_name, sizeof(elem_id.name));
elem_id.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
kctl = snd_ctl_find_id(card, &elem_id);
if (!kctl)
chip->bmaddr = pci_iomap(pci, 3, 0);
else
chip->bmaddr = pci_iomap(pci, 1, 0);
+
+port_inited:
if (!chip->bmaddr) {
dev_err(card->dev, "Controller space ioremap problem\n");
snd_intel8x0m_free(chip);
return -EIO;
}
- port_inited:
/* initialize offsets */
chip->bdbars_count = 2;
tbl = intel_regs;
}
strcpy(card->driver, "Lola");
- strlcpy(card->shortname, "Digigram Lola", sizeof(card->shortname));
+ strscpy(card->shortname, "Digigram Lola", sizeof(card->shortname));
snprintf(card->longname, sizeof(card->longname),
"%s at 0x%lx irq %i",
card->shortname, chip->bar[0].addr, chip->irq);
}
nitems = chip->clock.items;
- nb_verbs = (nitems + 3) / 4;
+ nb_verbs = DIV_ROUND_UP(nitems, 4);
idx = 0;
idx_list = 0;
for (i = 0; i < nb_verbs; i++) {
&pcm);
if (err < 0)
return err;
- strlcpy(pcm->name, "Digigram Lola", sizeof(pcm->name));
+ strscpy(pcm->name, "Digigram Lola", sizeof(pcm->name));
pcm->private_data = chip;
for (i = 0; i < 2; i++) {
if (chip->pcm[i].num_streams)
snd_pcm_format_width(runtime->format) == 16 ? 0 : 1);
/* set up dac/adc rate */
- freq = ((runtime->rate << 15) + 24000 ) / 48000;
+ freq = DIV_ROUND_CLOSEST(runtime->rate << 15, 48000);
if (freq)
freq--;
return -EIO;
/* check, if we can restrict PCI DMA transfers to 28 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(28)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(28)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(28))) {
dev_err(card->dev,
"architecture does not support 28bit PCI busmaster DMA\n");
pci_disable_device(pci);
unsigned char qs_out_channels;
unsigned char ds_out_channels;
unsigned char ss_out_channels;
+	u32 io_loopback; /* output loopback channel states */
struct snd_dma_buffer capture_dma_buf;
struct snd_dma_buffer playback_dma_buf;
HDSP_AnalogExtensionBoard);
static struct snd_kcontrol_new snd_hdsp_adat_sync_check = HDSP_ADAT_SYNC_CHECK;
+
+static bool hdsp_loopback_get(struct hdsp *const hdsp, const u8 channel)
+{
+ return hdsp->io_loopback & (1 << channel);
+}
+
+static int hdsp_loopback_set(struct hdsp *const hdsp, const u8 channel, const bool enable)
+{
+ if (hdsp_loopback_get(hdsp, channel) == enable)
+ return 0;
+
+ hdsp->io_loopback ^= (1 << channel);
+
+ hdsp_write(hdsp, HDSP_inputEnable + (4 * (hdsp->max_channels + channel)), enable);
+
+ return 1;
+}
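
The io_loopback word shadows the hardware state so that hdsp_loopback_set() touches the register only on an actual change and returns 1 to signal it. A standalone userspace check of that bookkeeping (a sketch; no hardware access):

#include <assert.h>
#include <stdbool.h>

static unsigned int io_loopback;	/* mirrors hdsp->io_loopback */

static bool loopback_get(unsigned char channel)
{
	return io_loopback & (1u << channel);
}

static int loopback_set(unsigned char channel, bool enable)
{
	if (loopback_get(channel) == enable)
		return 0;
	io_loopback ^= (1u << channel);
	return 1;	/* the hdsp_write() would happen here */
}

int main(void)
{
	assert(loopback_set(3, true) == 1);	/* real change */
	assert(loopback_get(3) == true);
	assert(loopback_set(3, true) == 0);	/* no-op, no register write */
	assert(loopback_set(3, false) == 1);
	return 0;
}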
+
+static int snd_hdsp_loopback_get(struct snd_kcontrol *const kcontrol,
+ struct snd_ctl_elem_value *const ucontrol)
+{
+ struct hdsp *const hdsp = snd_kcontrol_chip(kcontrol);
+ const u8 channel = snd_ctl_get_ioff(kcontrol, &ucontrol->id);
+
+ if (channel >= hdsp->max_channels)
+ return -ENOENT;
+
+ ucontrol->value.integer.value[0] = hdsp_loopback_get(hdsp, channel);
+
+ return 0;
+}
+
+static int snd_hdsp_loopback_put(struct snd_kcontrol *const kcontrol,
+ struct snd_ctl_elem_value *const ucontrol)
+{
+ struct hdsp *const hdsp = snd_kcontrol_chip(kcontrol);
+ const u8 channel = snd_ctl_get_ioff(kcontrol, &ucontrol->id);
+ const bool enable = ucontrol->value.integer.value[0] & 1;
+
+ if (channel >= hdsp->max_channels)
+ return -ENOENT;
+
+ return hdsp_loopback_set(hdsp, channel, enable);
+}
+
+static struct snd_kcontrol_new snd_hdsp_loopback_control = {
+ .iface = SNDRV_CTL_ELEM_IFACE_HWDEP,
+ .name = "Output Loopback",
+ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
+ .info = snd_ctl_boolean_mono_info,
+ .get = snd_hdsp_loopback_get,
+ .put = snd_hdsp_loopback_put
+};
+
static int snd_hdsp_create_controls(struct snd_card *card, struct hdsp *hdsp)
{
unsigned int idx;
}
}
+ /* Output loopback controls for H9632 cards */
+ if (hdsp->io_type == H9632) {
+ snd_hdsp_loopback_control.count = hdsp->max_channels;
+ kctl = snd_ctl_new1(&snd_hdsp_loopback_control, hdsp);
+ if (kctl == NULL)
+ return -ENOMEM;
+ err = snd_ctl_add(card, kctl);
+ if (err < 0)
+ return err;
+ }
+
/* AEB control for H96xx card */
if (hdsp->io_type == H9632 || hdsp->io_type == H9652) {
if ((err = snd_ctl_add(card, kctl = snd_ctl_new1(&snd_hdsp_96xx_aeb, hdsp))) < 0)
static void snd_hdsp_initialize_channels(struct hdsp *hdsp)
{
- int status, aebi_channels, aebo_channels;
+ int status, aebi_channels, aebo_channels, i;
switch (hdsp->io_type) {
case Digiface:
hdsp->ss_out_channels = H9632_SS_CHANNELS+aebo_channels;
hdsp->ds_out_channels = H9632_DS_CHANNELS+aebo_channels;
hdsp->qs_out_channels = H9632_QS_CHANNELS+aebo_channels;
+		/* Disable loopback of all output channels. Since the set
+		 * function only writes on a change, first fake all bits
+		 * (channels) as enabled.
+		 */
+ hdsp->io_loopback = 0xffffffff;
+ for (i = 0; i < hdsp->max_channels; ++i)
+ hdsp_loopback_set(hdsp, i, false);
break;
case Multiface:
memset(&hdspm_version, 0, sizeof(hdspm_version));
hdspm_version.card_type = hdspm->io_type;
- strlcpy(hdspm_version.cardname, hdspm->card_name,
+ strscpy(hdspm_version.cardname, hdspm->card_name,
sizeof(hdspm_version.cardname));
hdspm_version.serial = hdspm->serial;
hdspm_version.firmware_rev = hdspm->firmware_rev;
else if (rate == 48000)
delta = 0x1000;
else
- delta = (((rate << 12) + 24000) / 48000) & 0x0000ffff;
+ delta = DIV_ROUND_CLOSEST(rate << 12, 48000) & 0x0000ffff;
return delta;
}
unsigned int div;
unsigned long flags;
- div = (rate * 65536 + SV_FULLRATE / 2) / SV_FULLRATE;
+ div = DIV_ROUND_CLOSEST(rate * 65536, SV_FULLRATE);
if (div > 65535)
div = 65535;
spin_lock_irqsave(&sonic->reg_lock, flags);
if ((err = pci_enable_device(pci)) < 0)
return err;
/* check, if we can restrict PCI DMA transfers to 24 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(24)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(24)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(24))) {
dev_err(card->dev,
"architecture does not support 24bit PCI busmaster DMA\n");
pci_disable_device(pci);
else if (rate == 48000)
delta = 0x1000;
else
- delta = (((rate << 12) + 24000) / 48000) & 0x0000ffff;
+ delta = DIV_ROUND_CLOSEST(rate << 12, 48000) & 0x0000ffff;
return delta;
}
ESO_bytes++;
// Set channel sample rate, 4.12 format
- val = (((unsigned int) 48000L << 12) + (runtime->rate/2)) / runtime->rate;
+ val = DIV_ROUND_CLOSEST(48000U << 12, runtime->rate);
outw(val, TRID_REG(trident, T4D_SBDELTA_DELTA_R));
// Set channel interrupt blk length
if ((err = pci_enable_device(pci)) < 0)
return err;
/* check, if we can restrict PCI DMA transfers to 30 bits */
- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(30)) < 0 ||
- dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(30)) < 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(30))) {
dev_err(card->dev,
"architecture does not support 30bit PCI busmaster DMA\n");
pci_disable_device(pci);
return -EINVAL; /* ignored */
memset(&info, 0, sizeof(struct i2c_board_info));
- strlcpy(info.type, "keywest", I2C_NAME_SIZE);
+ strscpy(info.type, "keywest", I2C_NAME_SIZE);
info.addr = keywest_ctx->addr;
client = i2c_new_client_device(adapter, &info);
if (IS_ERR(client))
pkt->hdr.token = hw_block_id;
pkt->hdr.opcode = AFE_CMD_REMOTE_LPASS_CORE_HW_VOTE_REQUEST;
vote_cfg->hw_block_id = hw_block_id;
- strlcpy(vote_cfg->client_name, client_name,
+ strscpy(vote_cfg->client_name, client_name,
sizeof(vote_cfg->client_name));
ret = afe_apr_send_pkt(afe, pkt, NULL,
uinfo->value.enumerated.items = cfg->max;
if (uinfo->value.enumerated.item >= cfg->max)
uinfo->value.enumerated.item = cfg->max - 1;
- strlcpy(uinfo->value.enumerated.name,
+ strscpy(uinfo->value.enumerated.name,
cfg->texts[uinfo->value.enumerated.item],
sizeof(uinfo->value.enumerated.name));
} else {
if (ret < 0)
return ret;
- strlcpy(rmidi->name, bcd2k->card->shortname, sizeof(rmidi->name));
+ strscpy(rmidi->name, bcd2k->card->shortname, sizeof(rmidi->name));
rmidi->info_flags = SNDRV_RAWMIDI_INFO_DUPLEX;
rmidi->private_data = bcd2k;
}
cdev->pcm->private_data = cdev;
- strlcpy(cdev->pcm->name, cdev->product_name, sizeof(cdev->pcm->name));
+ strscpy(cdev->pcm->name, cdev->product_name, sizeof(cdev->pcm->name));
memset(cdev->sub_playback, 0, sizeof(cdev->sub_playback));
memset(cdev->sub_capture, 0, sizeof(cdev->sub_capture));
usb_string(usb_dev, usb_dev->descriptor.iProduct,
cdev->product_name, CAIAQ_USB_STR_LEN);
- strlcpy(card->driver, MODNAME, sizeof(card->driver));
- strlcpy(card->shortname, cdev->product_name, sizeof(card->shortname));
- strlcpy(card->mixername, cdev->product_name, sizeof(card->mixername));
+ strscpy(card->driver, MODNAME, sizeof(card->driver));
+ strscpy(card->shortname, cdev->product_name, sizeof(card->shortname));
+ strscpy(card->mixername, cdev->product_name, sizeof(card->mixername));
/* if the id was not passed as module option, fill it with a shortened
* version of the product string which does not contain any
if (ret < 0)
return ret;
- strlcpy(rmidi->name, device->product_name, sizeof(rmidi->name));
+ strscpy(rmidi->name, device->product_name, sizeof(rmidi->name));
rmidi->info_flags = SNDRV_RAWMIDI_INFO_DUPLEX;
rmidi->private_data = device;
else if (quirk && quirk->product_name)
s = quirk->product_name;
if (s && *s) {
- strlcpy(card->shortname, s, sizeof(card->shortname));
+ strscpy(card->shortname, s, sizeof(card->shortname));
return;
}
if (preset && preset->profile_name)
s = preset->profile_name;
if (s && *s) {
- strlcpy(card->longname, s, sizeof(card->longname));
+ strscpy(card->longname, s, sizeof(card->longname));
return;
}
s = preset->vendor_name;
else if (quirk && quirk->vendor_name)
s = quirk->vendor_name;
+ *card->longname = 0;
if (s && *s) {
- len = strlcpy(card->longname, s, sizeof(card->longname));
+ strscpy(card->longname, s, sizeof(card->longname));
} else {
/* retrieve the vendor and device strings as longname */
if (dev->descriptor.iManufacturer)
- len = usb_string(dev, dev->descriptor.iManufacturer,
- card->longname, sizeof(card->longname));
- else
- len = 0;
+ usb_string(dev, dev->descriptor.iManufacturer,
+ card->longname, sizeof(card->longname));
/* we don't really care if there isn't any vendor string */
}
- if (len > 0) {
+ if (*card->longname) {
strim(card->longname);
if (*card->longname)
strlcat(card->longname, " ", sizeof(card->longname));
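The longname rework above replaces return-length bookkeeping with an empty-string sentinel: the buffer is cleared once, any of the writers may fill it, and later code simply tests *card->longname. A small userspace sketch of the idiom (names are illustrative only):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char longname[64];
            const char *vendor = NULL;      /* pretend no quirk string matched */

            *longname = 0;                  /* empty until somebody writes it */
            if (vendor && *vendor)
                    snprintf(longname, sizeof(longname), "%s", vendor);

            if (*longname)                  /* replaces the old "if (len > 0)" */
                    printf("vendor: %s\n", longname);
            else
                    printf("no vendor string\n");
            return 0;
    }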
if (selector) {
int ret, i, cur;
+ if (selector->bNrInPins == 1) {
+ ret = 1;
+ goto find_source;
+ }
+
/* the entity ID we are looking for is a selector.
* find out what it currently selects */
ret = uac_clock_selector_get_val(chip, selector->bClockID);
return -EINVAL;
}
+ find_source:
cur = ret;
ret = __uac_clock_find_source(chip, fmt,
selector->baCSourceID[ret - 1],
return ret;
}
- strlcpy(card->driver, DRIVER_NAME, sizeof(card->driver));
+ strscpy(card->driver, DRIVER_NAME, sizeof(card->driver));
if (quirk && quirk->device_name)
- strlcpy(card->shortname, quirk->device_name, sizeof(card->shortname));
+ strscpy(card->shortname, quirk->device_name, sizeof(card->shortname));
else
- strlcpy(card->shortname, "M2Tech generic audio", sizeof(card->shortname));
+ strscpy(card->shortname, "M2Tech generic audio", sizeof(card->shortname));
strlcat(card->longname, card->shortname, sizeof(card->longname));
len = strlcat(card->longname, " at ", sizeof(card->longname));
pcm->private_data = rt;
pcm->private_free = hiface_pcm_free;
- strlcpy(pcm->name, "USB-SPDIF Audio", sizeof(pcm->name));
+ strscpy(pcm->name, "USB-SPDIF Audio", sizeof(pcm->name));
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &pcm_ops);
snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_VMALLOC,
NULL, 0, 0);
/* Pioneer devices with vendor spec class */
if (attr == USB_ENDPOINT_SYNC_ASYNC &&
alts->desc.bInterfaceClass == USB_CLASS_VENDOR_SPEC &&
- USB_ID_VENDOR(chip->usb_id) == 0x2b73 /* Pioneer */) {
+ (USB_ID_VENDOR(chip->usb_id) == 0x2b73 || /* Pioneer */
+ USB_ID_VENDOR(chip->usb_id) == 0x08e4 /* Pioneer */)) {
if (skip_pioneer_sync_ep(chip, fmt, alts))
return 1;
}
static int
check_mapped_name(const struct usbmix_name_map *p, char *buf, int buflen)
{
+ int len;
+
if (!p || !p->name)
return 0;
buflen--;
- return strlcpy(buf, p->name, buflen);
+ len = strscpy(buf, p->name, buflen);
+ return len < 0 ? buflen : len;
}
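The extra bookkeeping here exists because strscpy() and strlcpy() disagree about the return value: strlcpy() returns strlen(src), which may exceed the buffer, while strscpy() returns the number of characters copied or -E2BIG on truncation, so callers that consumed the old return value must translate the error case back into a length. A userspace sketch with a stand-in my_strscpy() that only mimics the kernel contract for demonstration:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* demonstration stand-in for the kernel's strscpy() return contract */
    static long my_strscpy(char *dst, const char *src, size_t size)
    {
            size_t len = strlen(src);

            if (!size)
                    return -E2BIG;
            if (len >= size) {              /* would truncate */
                    memcpy(dst, src, size - 1);
                    dst[size - 1] = '\0';
                    return -E2BIG;
            }
            memcpy(dst, src, len + 1);
            return (long)len;
    }

    int main(void)
    {
            char buf[8];
            long len = my_strscpy(buf, "a longer name", sizeof(buf));

            /* same translation as the hunk: truncation reports buflen */
            printf("copied=%ld reported=%ld buf=\"%s\"\n",
                   len, len < 0 ? (long)sizeof(buf) : len, buf);
            return 0;
    }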
/* ignore the error value if ignore_ctl_error flag is set */
int index, char *buf, int buflen)
{
const struct usbmix_selector_map *p;
+ int len;
if (!state->selector_map)
return 0;
for (p = state->selector_map; p->id; p++) {
- if (p->id == unitid && index < p->count)
- return strlcpy(buf, p->names[index], buflen);
+ if (p->id == unitid && index < p->count) {
+ len = strscpy(buf, p->names[index], buflen);
+ return len < 0 ? buflen : len;
+ }
}
return 0;
}
if (val < cval->min)
return 0;
else if (val >= cval->max)
- return (cval->max - cval->min + cval->res - 1) / cval->res;
+ return DIV_ROUND_UP(cval->max - cval->min, cval->res);
else
return (val - cval->min) / cval->res;
}
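DIV_ROUND_UP() is plain rounding-up integer division, so the replacement above is purely cosmetic. A quick userspace check of the equivalence with the open-coded expression it replaces:

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

    int main(void)
    {
            int min = 0, max = 100, res = 3;

            /* both print 34 */
            printf("open-coded: %d\n", (max - min + res - 1) / res);
            printf("macro:      %d\n", DIV_ROUND_UP(max - min, res));
            return 0;
    }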
(cval->control << 8) | minchn,
&cval->res) < 0) {
cval->res = 1;
- } else {
+ } else if (cval->head.mixer->protocol == UAC_VERSION_1) {
int last_valid_res = cval->res;
while (cval->res > 1) {
}
uinfo->value.integer.min = 0;
uinfo->value.integer.max =
- (cval->max - cval->min + cval->res - 1) / cval->res;
+ DIV_ROUND_UP(cval->max - cval->min, cval->res);
}
return 0;
}
if (!found)
return;
- strlcpy(kctl->id.name, "Headphone", sizeof(kctl->id.name));
+ strscpy(kctl->id.name, "Headphone", sizeof(kctl->id.name));
}
static const struct usb_feature_control_info *get_feature_control_info(int control)
break;
default:
if (!len)
- strlcpy(kctl->id.name, audio_feature_info[control-1].name,
+ strscpy(kctl->id.name, audio_feature_info[control-1].name,
sizeof(kctl->id.name));
break;
}
int name_len = get_term_name(mixer->chip, term, name, name_size, 0);
if (name_len == 0)
- strlcpy(name, "Unknown", name_size);
+ strscpy(name, "Unknown", name_size);
/*
* sound/core/ctljack.c has a convention of naming jack controls
if (check_mapped_name(map, kctl->id.name, sizeof(kctl->id.name))) {
/* nothing */ ;
} else if (info->name) {
- strlcpy(kctl->id.name, info->name, sizeof(kctl->id.name));
+ strscpy(kctl->id.name, info->name, sizeof(kctl->id.name));
} else {
if (extension_unit)
nameid = uac_extension_unit_iExtension(desc, state->mixer->protocol);
kctl->id.name,
sizeof(kctl->id.name));
if (!len)
- strlcpy(kctl->id.name, name, sizeof(kctl->id.name));
+ strscpy(kctl->id.name, name, sizeof(kctl->id.name));
}
append_ctl_name(kctl, " ");
append_ctl_name(kctl, valinfo->suffix);
kctl->id.name, sizeof(kctl->id.name), 0);
/* ... or use the fixed string "USB" as the last resort */
if (!len)
- strlcpy(kctl->id.name, "USB", sizeof(kctl->id.name));
+ strscpy(kctl->id.name, "USB", sizeof(kctl->id.name));
/* and add the proper suffix */
if (desc->bDescriptorSubtype == UAC2_CLOCK_SELECTOR ||
if (info->value.enumerated.item >= count)
info->value.enumerated.item = count - 1;
name = group->options[info->value.enumerated.item].name;
- strlcpy(info->value.enumerated.name, name, sizeof(info->value.enumerated.name));
+ strscpy(info->value.enumerated.name, name, sizeof(info->value.enumerated.name));
info->type = SNDRV_CTL_ELEM_TYPE_ENUMERATED;
info->count = 1;
info->value.enumerated.items = count;
}
kctl->private_free = snd_usb_mixer_elem_free;
- strlcpy(kctl->id.name, name, sizeof(kctl->id.name));
+ strscpy(kctl->id.name, name, sizeof(kctl->id.name));
err = snd_usb_mixer_add_control(&elem->head, kctl);
if (err < 0)
}
kctl->private_free = snd_usb_mixer_elem_free;
- strlcpy(kctl->id.name, name, sizeof(kctl->id.name));
+ strscpy(kctl->id.name, name, sizeof(kctl->id.name));
err = snd_usb_mixer_add_control(&elem->head, kctl);
if (err < 0)
else
kctl->private_free = snd_usb_mixer_elem_free;
- strlcpy(kctl->id.name, name, sizeof(kctl->id.name));
+ strscpy(kctl->id.name, name, sizeof(kctl->id.name));
err = snd_usb_mixer_add_control(&elem->head, kctl);
if (err < 0)
}
}
},
+{
+ /*
+ * Pioneer DJ DJM-750
+ * 8 channels playback & 8 channels capture @ 44.1/48/96kHz S24LE
+ */
+ USB_DEVICE_VENDOR_SPEC(0x08e4, 0x017f),
+ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+ .ifnum = QUIRK_ANY_INTERFACE,
+ .type = QUIRK_COMPOSITE,
+ .data = (const struct snd_usb_audio_quirk[]) {
+ {
+ .ifnum = 0,
+ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+ .data = &(const struct audioformat) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 0,
+ .altsetting = 1,
+ .altset_idx = 1,
+ .endpoint = 0x05,
+ .ep_attr = USB_ENDPOINT_XFER_ISOC|
+ USB_ENDPOINT_SYNC_ASYNC,
+ .rates = SNDRV_PCM_RATE_44100|
+ SNDRV_PCM_RATE_48000|
+ SNDRV_PCM_RATE_96000,
+ .rate_min = 44100,
+ .rate_max = 96000,
+ .nr_rates = 3,
+ .rate_table = (unsigned int[]) { 44100, 48000, 96000 }
+ }
+ },
+ {
+ .ifnum = 0,
+ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+ .data = &(const struct audioformat) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 0,
+ .altsetting = 1,
+ .altset_idx = 1,
+ .endpoint = 0x86,
+ .ep_idx = 1,
+ .ep_attr = USB_ENDPOINT_XFER_ISOC|
+ USB_ENDPOINT_SYNC_ASYNC|
+ USB_ENDPOINT_USAGE_IMPLICIT_FB,
+ .rates = SNDRV_PCM_RATE_44100|
+ SNDRV_PCM_RATE_48000|
+ SNDRV_PCM_RATE_96000,
+ .rate_min = 44100,
+ .rate_max = 96000,
+ .nr_rates = 3,
+ .rate_table = (unsigned int[]) { 44100, 48000, 96000 }
+ }
+ },
+ {
+ .ifnum = -1
+ }
+ }
+ }
+},
+{
+ /*
+ * Pioneer DJ DJM-450
+ * PCM is 8 channels out @ 48kHz fixed (endpoint 0x01)
+ * and 8 channels in @ 48kHz fixed (endpoint 0x82).
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0013),
+ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+ .ifnum = QUIRK_ANY_INTERFACE,
+ .type = QUIRK_COMPOSITE,
+ .data = (const struct snd_usb_audio_quirk[]) {
+ {
+ .ifnum = 0,
+ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+ .data = &(const struct audioformat) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8, // outputs
+ .iface = 0,
+ .altsetting = 1,
+ .altset_idx = 1,
+ .endpoint = 0x01,
+ .ep_attr = USB_ENDPOINT_XFER_ISOC|
+ USB_ENDPOINT_SYNC_ASYNC,
+ .rates = SNDRV_PCM_RATE_48000,
+ .rate_min = 48000,
+ .rate_max = 48000,
+ .nr_rates = 1,
+ .rate_table = (unsigned int[]) { 48000 }
+ }
+ },
+ {
+ .ifnum = 0,
+ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+ .data = &(const struct audioformat) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8, // inputs
+ .iface = 0,
+ .altsetting = 1,
+ .altset_idx = 1,
+ .endpoint = 0x82,
+ .ep_idx = 1,
+ .ep_attr = USB_ENDPOINT_XFER_ISOC|
+ USB_ENDPOINT_SYNC_ASYNC|
+ USB_ENDPOINT_USAGE_IMPLICIT_FB,
+ .rates = SNDRV_PCM_RATE_48000,
+ .rate_min = 48000,
+ .rate_max = 48000,
+ .nr_rates = 1,
+ .rate_table = (unsigned int[]) { 48000 }
+ }
+ },
+ {
+ .ifnum = -1
+ }
+ }
+ }
+},
#undef USB_DEVICE_VENDOR_SPEC
#undef USB_AUDIO_DEVICE
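The quirk entries above lean on C99 compound literals: "(unsigned int[]) { ... }" and "&(const struct audioformat) { ... }" create unnamed objects whose addresses can be stored directly in table fields, so each entry carries its own small rate table without a separate named definition. A self-contained userspace sketch (struct fmt is illustrative, not the kernel type):

    #include <stdio.h>

    struct fmt {
            unsigned int nr_rates;
            const unsigned int *rate_table;
    };

    /* at file scope a compound literal has static storage duration */
    static const struct fmt f = {
            .nr_rates = 3,
            .rate_table = (const unsigned int[]) { 44100, 48000, 96000 },
    };

    int main(void)
    {
            for (unsigned int i = 0; i < f.nr_rates; i++)
                    printf("%u\n", f.rate_table[i]);
            return 0;
    }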
subs->pkt_offset_adj = (emu_samplerate_id >= EMU_QUIRK_SR_176400HZ) ? 4 : 0;
}
+static int pioneer_djm_set_format_quirk(struct snd_usb_substream *subs,
+ u16 windex)
+{
+ unsigned int cur_rate = subs->data_endpoint->cur_rate;
+ u8 sr[3];
+ // pack the current rate as a 3-byte little-endian value
+ sr[0] = cur_rate & 0xff;
+ sr[1] = (cur_rate >> 8) & 0xff;
+ sr[2] = (cur_rate >> 16) & 0xff;
+ usb_set_interface(subs->dev, 0, 1);
+ // windex should be derived from fmt->sync_ep, but it is not set here
+ snd_usb_ctl_msg(subs->stream->chip->dev,
+ usb_rcvctrlpipe(subs->stream->chip->dev, 0),
+ 0x01, 0x22, 0x0100, windex, &sr, 0x0003);
+ return 0;
+}
+
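The shift-and-mask packing above is endian-independent: sr[] ends up little-endian regardless of host byte order. A runnable userspace check of the byte layout for one of the supported rates:

    #include <stdio.h>

    int main(void)
    {
            unsigned int cur_rate = 96000;  /* 0x017700 */
            unsigned char sr[3];

            sr[0] = cur_rate & 0xff;
            sr[1] = (cur_rate >> 8) & 0xff;
            sr[2] = (cur_rate >> 16) & 0xff;
            printf("%02x %02x %02x\n", sr[0], sr[1], sr[2]);  /* 00 77 01 */
            return 0;
    }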
void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
const struct audioformat *fmt)
{
case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
subs->stream_offset_adj = 2;
break;
+ case USB_ID(0x2b73, 0x0013): /* Pioneer DJM-450 */
+ pioneer_djm_set_format_quirk(subs, 0x0082);
+ break;
}
}
card_ctx->irq = irq;
/* only 32bit addressable */
- dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
- dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+ dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
init_channel_allocations();
/* setup private data which can be retrieved when required */
pcm->private_data = ctx;
pcm->info_flags = 0;
- strlcpy(pcm->name, card->shortname, strlen(card->shortname));
+ strscpy(pcm->name, card->shortname, sizeof(pcm->name));
/* setup the ops for playback */
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &had_pcm_ops);
str = xenbus_read(XBT_NIL, device_path, XENSND_FIELD_DEVICE_NAME, NULL);
if (!IS_ERR(str)) {
- strlcpy(pcm_instance->name, str, sizeof(pcm_instance->name));
+ strscpy(pcm_instance->name, str, sizeof(pcm_instance->name));
kfree(str);
}
set_array_of ${instance}.options ${instancedir}/trace_options
set_value_of ${instance}.trace_clock ${instancedir}/trace_clock
set_value_of ${instance}.cpumask ${instancedir}/tracing_cpumask
+ set_value_of ${instance}.tracing_on ${instancedir}/tracing_on
set_value_of ${instance}.tracer ${instancedir}/current_tracer
set_array_of ${instance}.ftrace.filters \
${instancedir}/set_ftrace_filter
if [ `echo $val | sed -e s/f//g`x != x ]; then
emit_kv $PREFIX.cpumask = $val
fi
+ val=`cat $INSTANCE/tracing_on`
+ if [ `echo $val | sed -e s/f//g`x != x ]; then
+ emit_kv $PREFIX.tracing_on = $val
+ fi
val=
for i in `cat $INSTANCE/set_event`; do
#define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
#endif // static_assert
-#ifdef __GENKSYMS__
-/* genksyms gets confused by _Static_assert */
-#define _Static_assert(expr, ...)
-#endif
-
#endif /* _LINUX_BUILD_BUG_H */
#define KVM_EXIT_X86_RDMSR 29
#define KVM_EXIT_X86_WRMSR 30
#define KVM_EXIT_DIRTY_RING_FULL 31
+#define KVM_EXIT_AP_RESET_HOLD 32
/* For KVM_EXIT_INTERNAL_ERROR */
/* Emulate instruction failed. */
#define KVM_MP_STATE_CHECK_STOP 6
#define KVM_MP_STATE_OPERATING 7
#define KVM_MP_STATE_LOAD 8
+#define KVM_MP_STATE_AP_RESET_HOLD 9
struct kvm_mp_state {
__u32 mp_state;
perf_cpu_map__put(cpus);
__T_END;
- return 0;
+ return tests_failed == 0 ? 0 : -1;
}
char path[PATH_MAX];
int id, err, pid, go_pipe[2];
union perf_event *event;
- char bf;
int count = 0;
snprintf(path, PATH_MAX, "%s/kernel/debug/tracing/events/syscalls/sys_enter_prctl/id",
sysfs__mountpoint());
if (filename__read_int(path, &id)) {
+ tests_failed++;
fprintf(stderr, "error: failed to get tracepoint id: %s\n", path);
return -1;
}
pid = fork();
if (!pid) {
int i;
+ char bf;
read(go_pipe[0], &bf, 1);
perf_evlist__enable(evlist);
/* kick the child and wait for it to finish */
- write(go_pipe[1], &bf, 1);
+ write(go_pipe[1], "A", 1);
waitpid(pid, NULL, 0);
/*
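The go_pipe dance above is a standard fork handshake: the child blocks in read() until the parent writes one byte, so the workload cannot start before the events are enabled; moving the scratch byte into the child and writing a literal in the parent just scopes the variable to its only reader. A minimal userspace sketch of the same handshake:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            int go_pipe[2];
            pid_t pid;

            if (pipe(go_pipe))
                    return 1;
            pid = fork();
            if (!pid) {                     /* child: wait for the kick */
                    char bf;

                    read(go_pipe[0], &bf, 1);
                    printf("child: go!\n");
                    _exit(0);
            }
            write(go_pipe[1], "A", 1);      /* parent: kick the child */
            waitpid(pid, NULL, 0);
            return 0;
    }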
test_mmap_cpus();
__T_END;
- return 0;
+ return tests_failed == 0 ? 0 : -1;
}
test_stat_thread_enable();
__T_END;
- return 0;
+ return tests_failed == 0 ? 0 : -1;
}
perf_thread_map__put(threads);
__T_END;
- return 0;
+ return tests_failed == 0 ? 0 : -1;
}
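The repeated change above makes each libperf test binary report failures through its exit status instead of always returning success: individual checks bump a counter, and main() folds that counter into the return value. A minimal sketch of the pattern (check() is illustrative):

    #include <stdio.h>

    static int tests_failed;

    static void check(int ok, const char *name)
    {
            if (!ok) {
                    tests_failed++;
                    fprintf(stderr, "FAILED %s\n", name);
            }
    }

    int main(void)
    {
            check(1 + 1 == 2, "arithmetic");
            check(sizeof(int) >= 2, "abi");
            return tests_failed == 0 ? 0 : -1;
    }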
Copyright (C) 2018 Red Hat, Inc., Arnaldo Carvalho de Melo <acme@redhat.com>
*/
-#include <bpf/bpf.h>
+#include <bpf.h>
#define NSEC_PER_SEC 1000000000L
test_global_aggr()
{
- local cyc
-
perf stat -a --no-big-num -e cycles,instructions sleep 1 2>&1 | \
grep -e cycles -e instructions | \
while read num evt hash ipc rest
do
# skip not counted events
- if [[ $num == "<not" ]]; then
+ if [ "$num" = "<not" ]; then
continue
fi
# save cycles count
- if [[ $evt == "cycles" ]]; then
+ if [ "$evt" = "cycles" ]; then
cyc=$num
continue
fi
# skip if no cycles
- if [[ -z $cyc ]]; then
+ if [ -z "$cyc" ]; then
continue
fi
# use printf for rounding and a leading zero
- local res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)`
- if [[ $ipc != $res ]]; then
+ res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)`
+ if [ "$ipc" != "$res" ]; then
echo "IPC is different: $res != $ipc ($num / $cyc)"
exit 1
fi
test_no_aggr()
{
- declare -A results
-
perf stat -a -A --no-big-num -e cycles,instructions sleep 1 2>&1 | \
grep ^CPU | \
while read cpu num evt hash ipc rest
do
# skip not counted events
- if [[ $num == "<not" ]]; then
+ if [ "$num" = "<not" ]; then
continue
fi
# save cycles count
- if [[ $evt == "cycles" ]]; then
- results[$cpu]=$num
+ if [ "$evt" = "cycles" ]; then
+ results="$results $cpu:$num"
continue
fi
+ cyc=${results##* $cpu:}
+ cyc=${cyc%% *}
+
# skip if no cycles
- local cyc=${results[$cpu]}
- if [[ -z $cyc ]]; then
+ if [ -z "$cyc" ]; then
continue
fi
# use printf for rounding and a leading zero
- local res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)`
- if [[ $ipc != $res ]]; then
+ res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)`
+ if [ "$ipc" != "$res" ]; then
echo "IPC is different for $cpu: $res != $ipc ($num / $cyc)"
exit 1
fi
attr_offset = lseek(ff.fd, 0, SEEK_CUR);
evlist__for_each_entry(evlist, evsel) {
+ if (evsel->core.attr.size < sizeof(evsel->core.attr)) {
+ /*
+ * We are likely in "perf inject" and have read
+ * from an older file. Update attr size so that
+ * reader gets the right offset to the ids.
+ */
+ evsel->core.attr.size = sizeof(evsel->core.attr);
+ }
f_attr = (struct perf_file_attr){
.attr = evsel->core.attr,
.ids = {
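The attr.size fixup matters because the on-disk header is self-describing: each record carries the size it was written with, and the data that follows is located relative to that size, so re-emitting a record read from an older, smaller layout without bumping the size would misplace the ids. A userspace sketch of the idea, with an illustrative attr_v2 (not the perf ABI struct):

    #include <stdio.h>
    #include <string.h>

    struct attr_v2 {
            unsigned int size;              /* size this record was written with */
            unsigned int flags;
            unsigned long long extra;       /* field added in a later version */
    };

    int main(void)
    {
            struct attr_v2 attr;

            memset(&attr, 0, sizeof(attr));
            attr.size = 8;                  /* pretend it came from an old file */

            if (attr.size < sizeof(attr))   /* same check as the hunk */
                    attr.size = sizeof(attr);
            printf("attr.size=%u\n", attr.size);
            return 0;
    }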
pid_t machine__get_current_tid(struct machine *machine, int cpu)
{
- int nr_cpus = min(machine->env->nr_cpus_online, MAX_NR_CPUS);
+ int nr_cpus = min(machine->env->nr_cpus_avail, MAX_NR_CPUS);
if (cpu < 0 || cpu >= nr_cpus || !machine->current_tid)
return -1;
pid_t tid)
{
struct thread *thread;
- int nr_cpus = min(machine->env->nr_cpus_online, MAX_NR_CPUS);
+ int nr_cpus = min(machine->env->nr_cpus_avail, MAX_NR_CPUS);
if (cpu < 0)
return -EINVAL;
{
int i, err = -1;
struct perf_cpu_map *map;
- int nr_cpus = min(session->header.env.nr_cpus_online, MAX_NR_CPUS);
+ int nr_cpus = min(session->header.env.nr_cpus_avail, MAX_NR_CPUS);
for (i = 0; i < PERF_TYPE_MAX; ++i) {
struct evsel *evsel;
#include "evlist.h"
#include "expr.h"
#include "metricgroup.h"
+#include "cgroup.h"
#include <linux/zalloc.h>
/*
enum stat_type type;
int ctx;
int cpu;
+ struct cgroup *cgrp;
struct runtime_stat *stat;
struct stats stats;
u64 metric_total;
if (a->ctx != b->ctx)
return a->ctx - b->ctx;
+ if (a->cgrp != b->cgrp)
+ return (char *)a->cgrp < (char *)b->cgrp ? -1 : +1;
+
if (a->evsel == NULL && b->evsel == NULL) {
if (a->stat == b->stat)
return 0;
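The new cgrp comparison only needs a stable total order for the rblist, not a meaningful one, so comparing the pointer values themselves is enough; the kernel casts through (char *), and portable userspace code would cast through uintptr_t instead. A small sketch of the same tie-breaker:

    #include <stdint.h>
    #include <stdio.h>

    struct cgroup;                          /* opaque, like the rblist key */

    static int cmp_cgrp(const struct cgroup *a, const struct cgroup *b)
    {
            if (a != b)
                    return (uintptr_t)a < (uintptr_t)b ? -1 : +1;
            return 0;
    }

    int main(void)
    {
            int x, y;                       /* stand-ins for two cgroups */
            const struct cgroup *a = (const struct cgroup *)&x;
            const struct cgroup *b = (const struct cgroup *)&y;

            printf("a<b=%d b<a=%d a==a=%d\n",
                   cmp_cgrp(a, b), cmp_cgrp(b, a), cmp_cgrp(a, a));
            return 0;
    }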
bool create,
enum stat_type type,
int ctx,
- struct runtime_stat *st)
+ struct runtime_stat *st,
+ struct cgroup *cgrp)
{
struct rblist *rblist;
struct rb_node *nd;
.type = type,
.ctx = ctx,
.stat = st,
+ .cgrp = cgrp,
};
rblist = &st->value_list;
+ /* don't use context info for clock events */
+ if (type == STAT_NSECS)
+ dm.ctx = 0;
+
nd = rblist__find(rblist, &dm);
if (nd)
return container_of(nd, struct saved_value, rb_node);
reset_stat(st);
}
+struct runtime_stat_data {
+ int ctx;
+ struct cgroup *cgrp;
+};
+
static void update_runtime_stat(struct runtime_stat *st,
enum stat_type type,
- int ctx, int cpu, u64 count)
+ int cpu, u64 count,
+ struct runtime_stat_data *rsd)
{
- struct saved_value *v = saved_value_lookup(NULL, cpu, true,
- type, ctx, st);
+ struct saved_value *v = saved_value_lookup(NULL, cpu, true, type,
+ rsd->ctx, st, rsd->cgrp);
if (v)
update_stats(&v->stats, count);
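Bundling ctx and cgrp into struct runtime_stat_data is what keeps the rest of this patch tractable: the pair travels through every lookup together, so each call site passes one pointer, and future per-lookup fields need no further signature churn. A toy sketch of the refactor shape (all names here are illustrative):

    #include <stdio.h>

    struct lookup_data {
            int ctx;
            const char *cgrp;               /* stands in for struct cgroup * */
    };

    static void update_stat(int cpu, unsigned long long count,
                            const struct lookup_data *ld)
    {
            printf("cpu%d count=%llu ctx=%d cgrp=%s\n",
                   cpu, count, ld->ctx, ld->cgrp ? ld->cgrp : "(none)");
    }

    int main(void)
    {
            struct lookup_data ld = { .ctx = 1, .cgrp = "demo" };

            update_stat(0, 12345, &ld);     /* one pointer, not N scalars */
            return 0;
    }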
void perf_stat__update_shadow_stats(struct evsel *counter, u64 count,
int cpu, struct runtime_stat *st)
{
- int ctx = evsel_context(counter);
u64 count_ns = count;
struct saved_value *v;
+ struct runtime_stat_data rsd = {
+ .ctx = evsel_context(counter),
+ .cgrp = counter->cgrp,
+ };
count *= counter->scale;
if (evsel__is_clock(counter))
- update_runtime_stat(st, STAT_NSECS, 0, cpu, count_ns);
+ update_runtime_stat(st, STAT_NSECS, cpu, count_ns, &rsd);
else if (evsel__match(counter, HARDWARE, HW_CPU_CYCLES))
- update_runtime_stat(st, STAT_CYCLES, ctx, cpu, count);
+ update_runtime_stat(st, STAT_CYCLES, cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, CYCLES_IN_TX))
- update_runtime_stat(st, STAT_CYCLES_IN_TX, ctx, cpu, count);
+ update_runtime_stat(st, STAT_CYCLES_IN_TX, cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TRANSACTION_START))
- update_runtime_stat(st, STAT_TRANSACTION, ctx, cpu, count);
+ update_runtime_stat(st, STAT_TRANSACTION, cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, ELISION_START))
- update_runtime_stat(st, STAT_ELISION, ctx, cpu, count);
+ update_runtime_stat(st, STAT_ELISION, cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_TOTAL_SLOTS))
update_runtime_stat(st, STAT_TOPDOWN_TOTAL_SLOTS,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_ISSUED))
update_runtime_stat(st, STAT_TOPDOWN_SLOTS_ISSUED,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_RETIRED))
update_runtime_stat(st, STAT_TOPDOWN_SLOTS_RETIRED,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_FETCH_BUBBLES))
update_runtime_stat(st, STAT_TOPDOWN_FETCH_BUBBLES,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_RECOVERY_BUBBLES))
update_runtime_stat(st, STAT_TOPDOWN_RECOVERY_BUBBLES,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_RETIRING))
update_runtime_stat(st, STAT_TOPDOWN_RETIRING,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_BAD_SPEC))
update_runtime_stat(st, STAT_TOPDOWN_BAD_SPEC,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_FE_BOUND))
update_runtime_stat(st, STAT_TOPDOWN_FE_BOUND,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, TOPDOWN_BE_BOUND))
update_runtime_stat(st, STAT_TOPDOWN_BE_BOUND,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_FRONTEND))
update_runtime_stat(st, STAT_STALLED_CYCLES_FRONT,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_BACKEND))
update_runtime_stat(st, STAT_STALLED_CYCLES_BACK,
- ctx, cpu, count);
+ cpu, count, &rsd);
else if (evsel__match(counter, HARDWARE, HW_BRANCH_INSTRUCTIONS))
- update_runtime_stat(st, STAT_BRANCHES, ctx, cpu, count);
+ update_runtime_stat(st, STAT_BRANCHES, cpu, count, &rsd);
else if (evsel__match(counter, HARDWARE, HW_CACHE_REFERENCES))
- update_runtime_stat(st, STAT_CACHEREFS, ctx, cpu, count);
+ update_runtime_stat(st, STAT_CACHEREFS, cpu, count, &rsd);
else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1D))
- update_runtime_stat(st, STAT_L1_DCACHE, ctx, cpu, count);
+ update_runtime_stat(st, STAT_L1_DCACHE, cpu, count, &rsd);
else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1I))
- update_runtime_stat(st, STAT_L1_ICACHE, ctx, cpu, count);
+ update_runtime_stat(st, STAT_L1_ICACHE, cpu, count, &rsd);
else if (evsel__match(counter, HW_CACHE, HW_CACHE_LL))
- update_runtime_stat(st, STAT_LL_CACHE, ctx, cpu, count);
+ update_runtime_stat(st, STAT_LL_CACHE, cpu, count, &rsd);
else if (evsel__match(counter, HW_CACHE, HW_CACHE_DTLB))
- update_runtime_stat(st, STAT_DTLB_CACHE, ctx, cpu, count);
+ update_runtime_stat(st, STAT_DTLB_CACHE, cpu, count, &rsd);
else if (evsel__match(counter, HW_CACHE, HW_CACHE_ITLB))
- update_runtime_stat(st, STAT_ITLB_CACHE, ctx, cpu, count);
+ update_runtime_stat(st, STAT_ITLB_CACHE, cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, SMI_NUM))
- update_runtime_stat(st, STAT_SMI_NUM, ctx, cpu, count);
+ update_runtime_stat(st, STAT_SMI_NUM, cpu, count, &rsd);
else if (perf_stat_evsel__is(counter, APERF))
- update_runtime_stat(st, STAT_APERF, ctx, cpu, count);
+ update_runtime_stat(st, STAT_APERF, cpu, count, &rsd);
if (counter->collect_stat) {
- v = saved_value_lookup(counter, cpu, true, STAT_NONE, 0, st);
+ v = saved_value_lookup(counter, cpu, true, STAT_NONE, 0, st,
+ rsd.cgrp);
update_stats(&v->stats, count);
if (counter->metric_leader)
v->metric_total += count;
} else if (counter->metric_leader) {
v = saved_value_lookup(counter->metric_leader,
- cpu, true, STAT_NONE, 0, st);
+ cpu, true, STAT_NONE, 0, st, rsd.cgrp);
v->metric_total += count;
v->metric_other++;
}
}
static double runtime_stat_avg(struct runtime_stat *st,
- enum stat_type type, int ctx, int cpu)
+ enum stat_type type, int cpu,
+ struct runtime_stat_data *rsd)
{
struct saved_value *v;
- v = saved_value_lookup(NULL, cpu, false, type, ctx, st);
+ v = saved_value_lookup(NULL, cpu, false, type, rsd->ctx, st, rsd->cgrp);
if (!v)
return 0.0;
}
static double runtime_stat_n(struct runtime_stat *st,
- enum stat_type type, int ctx, int cpu)
+ enum stat_type type, int cpu,
+ struct runtime_stat_data *rsd)
{
struct saved_value *v;
- v = saved_value_lookup(NULL, cpu, false, type, ctx, st);
+ v = saved_value_lookup(NULL, cpu, false, type, rsd->ctx, st, rsd->cgrp);
if (!v)
return 0.0;
}
static void print_stalled_cycles_frontend(struct perf_stat_config *config,
- int cpu,
- struct evsel *evsel, double avg,
+ int cpu, double avg,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double total, ratio = 0.0;
const char *color;
- int ctx = evsel_context(evsel);
- total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd);
if (total)
ratio = avg / total * 100.0;
}
static void print_stalled_cycles_backend(struct perf_stat_config *config,
- int cpu,
- struct evsel *evsel, double avg,
+ int cpu, double avg,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double total, ratio = 0.0;
const char *color;
- int ctx = evsel_context(evsel);
- total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd);
if (total)
ratio = avg / total * 100.0;
}
static void print_branch_misses(struct perf_stat_config *config,
- int cpu,
- struct evsel *evsel,
- double avg,
+ int cpu, double avg,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double total, ratio = 0.0;
const char *color;
- int ctx = evsel_context(evsel);
- total = runtime_stat_avg(st, STAT_BRANCHES, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_BRANCHES, cpu, rsd);
if (total)
ratio = avg / total * 100.0;
}
static void print_l1_dcache_misses(struct perf_stat_config *config,
- int cpu,
- struct evsel *evsel,
- double avg,
+ int cpu, double avg,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
-
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double total, ratio = 0.0;
const char *color;
- int ctx = evsel_context(evsel);
- total = runtime_stat_avg(st, STAT_L1_DCACHE, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_L1_DCACHE, cpu, rsd);
if (total)
ratio = avg / total * 100.0;
}
static void print_l1_icache_misses(struct perf_stat_config *config,
- int cpu,
- struct evsel *evsel,
- double avg,
+ int cpu, double avg,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
-
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double total, ratio = 0.0;
const char *color;
- int ctx = evsel_context(evsel);
- total = runtime_stat_avg(st, STAT_L1_ICACHE, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_L1_ICACHE, cpu, rsd);
if (total)
ratio = avg / total * 100.0;
}
static void print_dtlb_cache_misses(struct perf_stat_config *config,
- int cpu,
- struct evsel *evsel,
- double avg,
+ int cpu, double avg,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double total, ratio = 0.0;
const char *color;
- int ctx = evsel_context(evsel);
- total = runtime_stat_avg(st, STAT_DTLB_CACHE, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_DTLB_CACHE, cpu, rsd);
if (total)
ratio = avg / total * 100.0;
}
static void print_itlb_cache_misses(struct perf_stat_config *config,
- int cpu,
- struct evsel *evsel,
- double avg,
+ int cpu, double avg,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double total, ratio = 0.0;
const char *color;
- int ctx = evsel_context(evsel);
- total = runtime_stat_avg(st, STAT_ITLB_CACHE, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_ITLB_CACHE, cpu, rsd);
if (total)
ratio = avg / total * 100.0;
}
static void print_ll_cache_misses(struct perf_stat_config *config,
- int cpu,
- struct evsel *evsel,
- double avg,
+ int cpu, double avg,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double total, ratio = 0.0;
const char *color;
- int ctx = evsel_context(evsel);
- total = runtime_stat_avg(st, STAT_LL_CACHE, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_LL_CACHE, cpu, rsd);
if (total)
ratio = avg / total * 100.0;
return x;
}
-static double td_total_slots(int ctx, int cpu, struct runtime_stat *st)
+static double td_total_slots(int cpu, struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
- return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, ctx, cpu);
+ return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, cpu, rsd);
}
-static double td_bad_spec(int ctx, int cpu, struct runtime_stat *st)
+static double td_bad_spec(int cpu, struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double bad_spec = 0;
double total_slots;
double total;
- total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, ctx, cpu) -
- runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, ctx, cpu) +
- runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, cpu, rsd) -
+ runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, cpu, rsd) +
+ runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, cpu, rsd);
- total_slots = td_total_slots(ctx, cpu, st);
+ total_slots = td_total_slots(cpu, st, rsd);
if (total_slots)
bad_spec = total / total_slots;
return sanitize_val(bad_spec);
}
-static double td_retiring(int ctx, int cpu, struct runtime_stat *st)
+static double td_retiring(int cpu, struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double retiring = 0;
- double total_slots = td_total_slots(ctx, cpu, st);
+ double total_slots = td_total_slots(cpu, st, rsd);
double ret_slots = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED,
- ctx, cpu);
+ cpu, rsd);
if (total_slots)
retiring = ret_slots / total_slots;
return retiring;
}
-static double td_fe_bound(int ctx, int cpu, struct runtime_stat *st)
+static double td_fe_bound(int cpu, struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double fe_bound = 0;
- double total_slots = td_total_slots(ctx, cpu, st);
+ double total_slots = td_total_slots(cpu, st, rsd);
double fetch_bub = runtime_stat_avg(st, STAT_TOPDOWN_FETCH_BUBBLES,
- ctx, cpu);
+ cpu, rsd);
if (total_slots)
fe_bound = fetch_bub / total_slots;
return fe_bound;
}
-static double td_be_bound(int ctx, int cpu, struct runtime_stat *st)
+static double td_be_bound(int cpu, struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
- double sum = (td_fe_bound(ctx, cpu, st) +
- td_bad_spec(ctx, cpu, st) +
- td_retiring(ctx, cpu, st));
+ double sum = (td_fe_bound(cpu, st, rsd) +
+ td_bad_spec(cpu, st, rsd) +
+ td_retiring(cpu, st, rsd));
if (sum == 0)
return 0;
return sanitize_val(1.0 - sum);
* the ratios we need to recreate the sum.
*/
-static double td_metric_ratio(int ctx, int cpu,
- enum stat_type type,
- struct runtime_stat *stat)
+static double td_metric_ratio(int cpu, enum stat_type type,
+ struct runtime_stat *stat,
+ struct runtime_stat_data *rsd)
{
- double sum = runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, ctx, cpu) +
- runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, ctx, cpu) +
- runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, ctx, cpu) +
- runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, ctx, cpu);
- double d = runtime_stat_avg(stat, type, ctx, cpu);
+ double sum = runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, cpu, rsd) +
+ runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, cpu, rsd) +
+ runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, cpu, rsd) +
+ runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, cpu, rsd);
+ double d = runtime_stat_avg(stat, type, cpu, rsd);
if (sum)
return d / sum;
* We allow two missing.
*/
-static bool full_td(int ctx, int cpu,
- struct runtime_stat *stat)
+static bool full_td(int cpu, struct runtime_stat *stat,
+ struct runtime_stat_data *rsd)
{
int c = 0;
- if (runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, ctx, cpu) > 0)
+ if (runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, cpu, rsd) > 0)
c++;
- if (runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, ctx, cpu) > 0)
+ if (runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, cpu, rsd) > 0)
c++;
- if (runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, ctx, cpu) > 0)
+ if (runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, cpu, rsd) > 0)
c++;
- if (runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, ctx, cpu) > 0)
+ if (runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, cpu, rsd) > 0)
c++;
return c >= 2;
}
-static void print_smi_cost(struct perf_stat_config *config,
- int cpu, struct evsel *evsel,
+static void print_smi_cost(struct perf_stat_config *config, int cpu,
struct perf_stat_output_ctx *out,
- struct runtime_stat *st)
+ struct runtime_stat *st,
+ struct runtime_stat_data *rsd)
{
double smi_num, aperf, cycles, cost = 0.0;
- int ctx = evsel_context(evsel);
const char *color = NULL;
- smi_num = runtime_stat_avg(st, STAT_SMI_NUM, ctx, cpu);
- aperf = runtime_stat_avg(st, STAT_APERF, ctx, cpu);
- cycles = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
+ smi_num = runtime_stat_avg(st, STAT_SMI_NUM, cpu, rsd);
+ aperf = runtime_stat_avg(st, STAT_APERF, cpu, rsd);
+ cycles = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd);
if ((cycles == 0) || (aperf == 0))
return;
scale = 1e-9;
} else {
v = saved_value_lookup(metric_events[i], cpu, false,
- STAT_NONE, 0, st);
+ STAT_NONE, 0, st,
+ metric_events[i]->cgrp);
if (!v)
break;
stats = &v->stats;
print_metric_t print_metric = out->print_metric;
double total, ratio = 0.0, total2;
const char *color = NULL;
- int ctx = evsel_context(evsel);
+ struct runtime_stat_data rsd = {
+ .ctx = evsel_context(evsel),
+ .cgrp = evsel->cgrp,
+ };
struct metric_event *me;
int num = 1;
if (evsel__match(evsel, HARDWARE, HW_INSTRUCTIONS)) {
- total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd);
if (total) {
ratio = avg / total;
print_metric(config, ctxp, NULL, NULL, "insn per cycle", 0);
}
- total = runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT,
- ctx, cpu);
+ total = runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT, cpu, &rsd);
total = max(total, runtime_stat_avg(st,
STAT_STALLED_CYCLES_BACK,
- ctx, cpu));
+ cpu, &rsd));
if (total && avg) {
out->new_line(config, ctxp);
ratio);
}
} else if (evsel__match(evsel, HARDWARE, HW_BRANCH_MISSES)) {
- if (runtime_stat_n(st, STAT_BRANCHES, ctx, cpu) != 0)
- print_branch_misses(config, cpu, evsel, avg, out, st);
+ if (runtime_stat_n(st, STAT_BRANCHES, cpu, &rsd) != 0)
+ print_branch_misses(config, cpu, avg, out, st, &rsd);
else
print_metric(config, ctxp, NULL, NULL, "of all branches", 0);
} else if (
((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {
- if (runtime_stat_n(st, STAT_L1_DCACHE, ctx, cpu) != 0)
- print_l1_dcache_misses(config, cpu, evsel, avg, out, st);
+ if (runtime_stat_n(st, STAT_L1_DCACHE, cpu, &rsd) != 0)
+ print_l1_dcache_misses(config, cpu, avg, out, st, &rsd);
else
print_metric(config, ctxp, NULL, NULL, "of all L1-dcache accesses", 0);
} else if (
((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {
- if (runtime_stat_n(st, STAT_L1_ICACHE, ctx, cpu) != 0)
- print_l1_icache_misses(config, cpu, evsel, avg, out, st);
+ if (runtime_stat_n(st, STAT_L1_ICACHE, cpu, &rsd) != 0)
+ print_l1_icache_misses(config, cpu, avg, out, st, &rsd);
else
print_metric(config, ctxp, NULL, NULL, "of all L1-icache accesses", 0);
} else if (
((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {
- if (runtime_stat_n(st, STAT_DTLB_CACHE, ctx, cpu) != 0)
- print_dtlb_cache_misses(config, cpu, evsel, avg, out, st);
+ if (runtime_stat_n(st, STAT_DTLB_CACHE, cpu, &rsd) != 0)
+ print_dtlb_cache_misses(config, cpu, avg, out, st, &rsd);
else
print_metric(config, ctxp, NULL, NULL, "of all dTLB cache accesses", 0);
} else if (
((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {
- if (runtime_stat_n(st, STAT_ITLB_CACHE, ctx, cpu) != 0)
- print_itlb_cache_misses(config, cpu, evsel, avg, out, st);
+ if (runtime_stat_n(st, STAT_ITLB_CACHE, cpu, &rsd) != 0)
+ print_itlb_cache_misses(config, cpu, avg, out, st, &rsd);
else
print_metric(config, ctxp, NULL, NULL, "of all iTLB cache accesses", 0);
} else if (
((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {
- if (runtime_stat_n(st, STAT_LL_CACHE, ctx, cpu) != 0)
- print_ll_cache_misses(config, cpu, evsel, avg, out, st);
+ if (runtime_stat_n(st, STAT_LL_CACHE, cpu, &rsd) != 0)
+ print_ll_cache_misses(config, cpu, avg, out, st, &rsd);
else
print_metric(config, ctxp, NULL, NULL, "of all LL-cache accesses", 0);
} else if (evsel__match(evsel, HARDWARE, HW_CACHE_MISSES)) {
- total = runtime_stat_avg(st, STAT_CACHEREFS, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_CACHEREFS, cpu, &rsd);
if (total)
ratio = avg * 100 / total;
- if (runtime_stat_n(st, STAT_CACHEREFS, ctx, cpu) != 0)
+ if (runtime_stat_n(st, STAT_CACHEREFS, cpu, &rsd) != 0)
print_metric(config, ctxp, NULL, "%8.3f %%",
"of all cache refs", ratio);
else
print_metric(config, ctxp, NULL, NULL, "of all cache refs", 0);
} else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_FRONTEND)) {
- print_stalled_cycles_frontend(config, cpu, evsel, avg, out, st);
+ print_stalled_cycles_frontend(config, cpu, avg, out, st, &rsd);
} else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_BACKEND)) {
- print_stalled_cycles_backend(config, cpu, evsel, avg, out, st);
+ print_stalled_cycles_backend(config, cpu, avg, out, st, &rsd);
} else if (evsel__match(evsel, HARDWARE, HW_CPU_CYCLES)) {
- total = runtime_stat_avg(st, STAT_NSECS, 0, cpu);
+ total = runtime_stat_avg(st, STAT_NSECS, cpu, &rsd);
if (total) {
ratio = avg / total;
print_metric(config, ctxp, NULL, NULL, "Ghz", 0);
}
} else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX)) {
- total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd);
if (total)
print_metric(config, ctxp, NULL,
print_metric(config, ctxp, NULL, NULL, "transactional cycles",
0);
} else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX_CP)) {
- total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
- total2 = runtime_stat_avg(st, STAT_CYCLES_IN_TX, ctx, cpu);
+ total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd);
+ total2 = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd);
if (total2 < avg)
total2 = avg;
else
print_metric(config, ctxp, NULL, NULL, "aborted cycles", 0);
} else if (perf_stat_evsel__is(evsel, TRANSACTION_START)) {
- total = runtime_stat_avg(st, STAT_CYCLES_IN_TX,
- ctx, cpu);
+ total = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd);
if (avg)
ratio = total / avg;
- if (runtime_stat_n(st, STAT_CYCLES_IN_TX, ctx, cpu) != 0)
+ if (runtime_stat_n(st, STAT_CYCLES_IN_TX, cpu, &rsd) != 0)
print_metric(config, ctxp, NULL, "%8.0f",
"cycles / transaction", ratio);
else
print_metric(config, ctxp, NULL, NULL, "cycles / transaction",
0);
} else if (perf_stat_evsel__is(evsel, ELISION_START)) {
- total = runtime_stat_avg(st, STAT_CYCLES_IN_TX,
- ctx, cpu);
+ total = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd);
if (avg)
ratio = total / avg;
else
print_metric(config, ctxp, NULL, NULL, "CPUs utilized", 0);
} else if (perf_stat_evsel__is(evsel, TOPDOWN_FETCH_BUBBLES)) {
- double fe_bound = td_fe_bound(ctx, cpu, st);
+ double fe_bound = td_fe_bound(cpu, st, &rsd);
if (fe_bound > 0.2)
color = PERF_COLOR_RED;
print_metric(config, ctxp, color, "%8.1f%%", "frontend bound",
fe_bound * 100.);
} else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_RETIRED)) {
- double retiring = td_retiring(ctx, cpu, st);
+ double retiring = td_retiring(cpu, st, &rsd);
if (retiring > 0.7)
color = PERF_COLOR_GREEN;
print_metric(config, ctxp, color, "%8.1f%%", "retiring",
retiring * 100.);
} else if (perf_stat_evsel__is(evsel, TOPDOWN_RECOVERY_BUBBLES)) {
- double bad_spec = td_bad_spec(ctx, cpu, st);
+ double bad_spec = td_bad_spec(cpu, st, &rsd);
if (bad_spec > 0.1)
color = PERF_COLOR_RED;
print_metric(config, ctxp, color, "%8.1f%%", "bad speculation",
bad_spec * 100.);
} else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_ISSUED)) {
- double be_bound = td_be_bound(ctx, cpu, st);
+ double be_bound = td_be_bound(cpu, st, &rsd);
const char *name = "backend bound";
static int have_recovery_bubbles = -1;
if (be_bound > 0.2)
color = PERF_COLOR_RED;
- if (td_total_slots(ctx, cpu, st) > 0)
+ if (td_total_slots(cpu, st, &rsd) > 0)
print_metric(config, ctxp, color, "%8.1f%%", name,
be_bound * 100.);
else
print_metric(config, ctxp, NULL, NULL, name, 0);
} else if (perf_stat_evsel__is(evsel, TOPDOWN_RETIRING) &&
- full_td(ctx, cpu, st)) {
- double retiring = td_metric_ratio(ctx, cpu,
- STAT_TOPDOWN_RETIRING, st);
-
+ full_td(cpu, st, &rsd)) {
+ double retiring = td_metric_ratio(cpu,
+ STAT_TOPDOWN_RETIRING, st,
+ &rsd);
if (retiring > 0.7)
color = PERF_COLOR_GREEN;
print_metric(config, ctxp, color, "%8.1f%%", "retiring",
retiring * 100.);
} else if (perf_stat_evsel__is(evsel, TOPDOWN_FE_BOUND) &&
- full_td(ctx, cpu, st)) {
- double fe_bound = td_metric_ratio(ctx, cpu,
- STAT_TOPDOWN_FE_BOUND, st);
-
+ full_td(cpu, st, &rsd)) {
+ double fe_bound = td_metric_ratio(cpu,
+ STAT_TOPDOWN_FE_BOUND, st,
+ &rsd);
if (fe_bound > 0.2)
color = PERF_COLOR_RED;
print_metric(config, ctxp, color, "%8.1f%%", "frontend bound",
fe_bound * 100.);
} else if (perf_stat_evsel__is(evsel, TOPDOWN_BE_BOUND) &&
- full_td(ctx, cpu, st)) {
- double be_bound = td_metric_ratio(ctx, cpu,
- STAT_TOPDOWN_BE_BOUND, st);
-
+ full_td(cpu, st, &rsd)) {
+ double be_bound = td_metric_ratio(cpu,
+ STAT_TOPDOWN_BE_BOUND, st,
+ &rsd);
if (be_bound > 0.2)
color = PERF_COLOR_RED;
print_metric(config, ctxp, color, "%8.1f%%", "backend bound",
be_bound * 100.);
} else if (perf_stat_evsel__is(evsel, TOPDOWN_BAD_SPEC) &&
- full_td(ctx, cpu, st)) {
- double bad_spec = td_metric_ratio(ctx, cpu,
- STAT_TOPDOWN_BAD_SPEC, st);
-
+ full_td(cpu, st, &rsd)) {
+ double bad_spec = td_metric_ratio(cpu,
+ STAT_TOPDOWN_BAD_SPEC, st,
+ &rsd);
if (bad_spec > 0.1)
color = PERF_COLOR_RED;
print_metric(config, ctxp, color, "%8.1f%%", "bad speculation",
} else if (evsel->metric_expr) {
generic_metric(config, evsel->metric_expr, evsel->metric_events, NULL,
evsel->name, evsel->metric_name, NULL, 1, cpu, out, st);
- } else if (runtime_stat_n(st, STAT_NSECS, 0, cpu) != 0) {
+ } else if (runtime_stat_n(st, STAT_NSECS, cpu, &rsd) != 0) {
char unit = 'M';
char unit_buf[10];
- total = runtime_stat_avg(st, STAT_NSECS, 0, cpu);
+ total = runtime_stat_avg(st, STAT_NSECS, cpu, &rsd);
if (total)
ratio = 1000.0 * avg / total;
snprintf(unit_buf, sizeof(unit_buf), "%c/sec", unit);
print_metric(config, ctxp, NULL, "%8.3f", unit_buf, ratio);
} else if (perf_stat_evsel__is(evsel, SMI_NUM)) {
- print_smi_cost(config, cpu, evsel, out, st);
+ print_smi_cost(config, cpu, out, st, &rsd);
} else {
num = 0;
}
TARGETS_HOTPLUG = cpu-hotplug
TARGETS_HOTPLUG += memory-hotplug
-# User can optionally provide a TARGETS skiplist.
-SKIP_TARGETS ?=
+# User can optionally provide a TARGETS skiplist. By default we skip
+# BPF since it has cutting edge build time dependencies which require
+# more effort to install.
+SKIP_TARGETS ?= bpf
ifneq ($(SKIP_TARGETS),)
TMP := $(filter-out $(SKIP_TARGETS), $(TARGETS))
override TARGETS := $(TMP)
mov x11, x1 // actual data
mov x12, x2 // data size
- puts "Mistatch: PID="
+ puts "Mismatch: PID="
mov x0, x20
bl putdec
puts ", iteration="
mov x11, x1 // actual data
mov x12, x2 // data size
- puts "Mistatch: PID="
+ puts "Mismatch: PID="
mov x0, x20
bl putdec
puts ", iteration="
FIXTURE_VARIANT(tls)
{
- u16 tls_version;
- u16 cipher_type;
+ uint16_t tls_version;
+ uint16_t cipher_type;
};
FIXTURE_VARIANT_ADD(tls, 12_gcm)
local message=$2
local port=$3
- ip netns exec ${netns} conntrack -L -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
+ if echo $message |grep -q 'ipv6';then
+ local family="ipv6"
+ else
+ local family="ipv4"
+ fi
+
+ ip netns exec ${netns} conntrack -L -f $family -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
if [ $? -ne 0 ] ; then
echo "FAIL: ${netns} did not show attached helper $message" 1>&2
ret=1
sleep 3 | ip netns exec ${ns2} nc -w 2 -l -p $port > /dev/null &
- sleep 1
sleep 1 | ip netns exec ${ns1} nc -w 2 10.0.1.2 $port > /dev/null &
+ sleep 1
check_for_helper "$ns1" "ip $msg" $port
check_for_helper "$ns2" "ip $msg" $port
sleep 3 | ip netns exec ${ns2} nc -w 2 -6 -l -p $port > /dev/null &
- sleep 1
sleep 1 | ip netns exec ${ns1} nc -w 2 -6 dead:1::2 $port > /dev/null &
+ sleep 1
check_for_helper "$ns1" "ipv6 $msg" $port
check_for_helper "$ns2" "ipv6 $msg" $port