A driver can still return a critical failure for this function if
it can't get the device operational after reset. If the platform
previously tried a soft reset, it might now try a hard reset (power
-cycle) and then call slot_reset() again. It the device still can't
+cycle) and then call slot_reset() again. If the device still can't
be recovered, there is nothing more that can be done; the platform
will typically report a "permanent failure" in such a case. The
device will be considered "dead" in this case.
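As an illustration, a slot_reset() implementation for a hypothetical "foo"
driver might look like the minimal sketch below (the pci_ers_result_t return
codes are the standard ones from include/linux/pci.h)::

	static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
	{
		/* Try to bring the device back up after the (soft or hard) reset */
		if (pci_enable_device(pdev))
			return PCI_ERS_RESULT_DISCONNECT;	/* permanent failure */

		pci_set_master(pdev);
		return PCI_ERS_RESULT_RECOVERED;
	}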
.driver_data - cpufreq driver specific data.
- .resolve_freq - Returns the most appropriate frequency for a target
- frequency. Doesn't change the frequency though.
-
.get_intermediate and target_intermediate - Used to switch to stable
frequency while changing CPU frequency.
.exit - A pointer to a per-policy cleanup function called during
CPU_POST_DEAD phase of cpu hotplug process.
- .stop_cpu - A pointer to a per-policy stop function called during
- CPU_DOWN_PREPARE phase of cpu hotplug process.
-
.suspend - A pointer to a per-policy suspend function which is called
with interrupts disabled and _after_ the governor is stopped for the
policy.
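For orientation, a hypothetical "foo" driver wiring up the callbacks described
above might register them roughly as in this sketch (only the fields discussed
here are shown)::

	static struct cpufreq_driver foo_cpufreq_driver = {
		.name			= "foo-cpufreq",
		.init			= foo_cpufreq_init,
		.exit			= foo_cpufreq_exit,	/* CPU_POST_DEAD cleanup */
		.suspend		= foo_cpufreq_suspend,	/* called after governor stop */
		.get_intermediate	= foo_get_intermediate,
		.target_intermediate	= foo_target_intermediate,
		.target_index		= foo_target_index,
	};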
where voltage is in V, frequency is in MHz.
+ performance-domains:
+ maxItems: 1
+ description:
+ List of phandles and performance domain specifiers, as defined by
+ bindings of the performance domain provider. See also
+ dvfs/performance-domain.yaml.
+
power-domains:
description:
List of phandles and PM domain specifiers, as defined by bindings of the
"qcom,saw2"
A more specific value could be one of:
"qcom,apq8064-saw2-v1.1-cpu"
+ "qcom,msm8226-saw2-v2.1-cpu"
"qcom,msm8974-saw2-v2.1-cpu"
"qcom,apq8084-saw2-v2.1-cpu"
cpu2: cpu@100 {
device_type = "cpu";
- compatible = "arm,cortex-a57";
+ compatible = "arm,cortex-a72";
reg = <0x100>;
enable-method = "psci";
cpu-idle-states = <&CPU_SLEEP_0>;
- clocks = <&infracfg CLK_INFRA_CA57SEL>,
+ clocks = <&infracfg CLK_INFRA_CA72SEL>,
<&apmixedsys CLK_APMIXED_MAINPLL>;
clock-names = "cpu", "intermediate";
operating-points-v2 = <&cpu_opp_table_b>;
cpu3: cpu@101 {
device_type = "cpu";
- compatible = "arm,cortex-a57";
+ compatible = "arm,cortex-a72";
reg = <0x101>;
enable-method = "psci";
cpu-idle-states = <&CPU_SLEEP_0>;
- clocks = <&infracfg CLK_INFRA_CA57SEL>,
+ clocks = <&infracfg CLK_INFRA_CA72SEL>,
<&apmixedsys CLK_APMIXED_MAINPLL>;
clock-names = "cpu", "intermediate";
operating-points-v2 = <&cpu_opp_table_b>;
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dvfs/performance-domain.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Generic performance domains
+
+maintainers:
+ - Sudeep Holla <sudeep.holla@arm.com>
+
+description: |+
+ This binding is intended for performance management of groups of devices or
+ CPUs that run in the same performance domain. Performance domains must not
+ be confused with power domains. A performance domain is defined by a set
+ of devices that always have to run at the same performance level. For a given
+ performance domain, there is a single point of control that affects all the
+ devices in the domain, making it impossible to set the performance level of
+ an individual device in the domain independently from other devices in
+ that domain. For example, a set of CPUs that share a voltage domain, and
+ have a common frequency control, is said to be in the same performance
+ domain.
+
+ This device tree binding can be used to bind performance domain consumer
+ devices with their performance domains provided by performance domain
+ providers. A performance domain provider can be represented by any node in
+ the device tree and can provide one or more performance domains. A consumer
+ node can refer to the provider by a phandle and a set of phandle arguments
+ (so called performance domain specifiers) of length specified by the
+  #performance-domain-cells property in the performance domain provider node.
+
+select: true
+
+properties:
+ "#performance-domain-cells":
+ description:
+ Number of cells in a performance domain specifier. Typically 0 for nodes
+ representing a single performance domain and 1 for nodes providing
+ multiple performance domains (e.g. performance controllers), but can be
+ any value as specified by device tree binding documentation of particular
+ provider.
+ enum: [ 0, 1 ]
+
+ performance-domains:
+ $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ maxItems: 1
+ description:
+ A phandle and performance domain specifier as defined by bindings of the
+ performance controller/provider specified by phandle.
+
+additionalProperties: true
+
+examples:
+ - |
+ performance: performance-controller@12340000 {
+ compatible = "qcom,cpufreq-hw";
+ reg = <0x12340000 0x1000>;
+ #performance-domain-cells = <1>;
+ };
+
+ // The node above defines a performance controller that is a performance
+ // domain provider and expects one cell as its phandle argument.
+
+ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57";
+ reg = <0x0 0x0>;
+ performance-domains = <&performance 1>;
+ };
+ };
The regulator will be enabled when initializing the PCIe host and
disabled either as part of the init process or when shutting down the
host.
+- vph-supply: Should specify the regulator in charge of VPH, one of the three
+  PCIe PHY power supplies. This regulator can be supplied by either a 1.8V or
+  a 3.3V voltage supply.
Additional required properties for imx6sx-pcie:
- clock names: Must include the following additional entries:
+++ /dev/null
-charger-manager bindings
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Required properties :
- - compatible : "charger-manager"
- - <>-supply : for regulator consumer, named according to cm-regulator-name
- - cm-chargers : name of chargers
- - cm-fuel-gauge : name of battery fuel gauge
- - subnode <regulator> :
- - cm-regulator-name : name of charger regulator
- - subnode <cable> :
- - cm-cable-name : name of charger cable - one of USB, USB-HOST,
- SDP, DCP, CDP, ACA, FAST-CHARGER, SLOW-CHARGER, WPT,
- PD, DOCK, JIG, or MECHANICAL
- - cm-cable-extcon : name of extcon dev
-(optional) - cm-cable-min : minimum current of cable
-(optional) - cm-cable-max : maximum current of cable
-
-Optional properties :
- - cm-name : charger manager's name (default : "battery")
- - cm-poll-mode : polling mode - 0 for disabled, 1 for always, 2 for when
- external power is connected, or 3 for when charging. If not present,
- then polling is disabled
- - cm-poll-interval : polling interval (in ms)
- - cm-battery-stat : battery status - 0 for battery always present, 1 for no
- battery, 2 to check presence via fuel gauge, or 3 to check presence
- via charger
- - cm-fullbatt-vchkdrop-volt : voltage drop (in uV) before restarting charging
- - cm-fullbatt-voltage : voltage (in uV) of full battery
- - cm-fullbatt-soc : state of charge to consider as full battery
- - cm-fullbatt-capacity : capcity (in uAh) to consider as full battery
- - cm-thermal-zone : name of external thermometer's thermal zone
- - cm-battery-* : threshold battery temperature for charging
- -cold : critical cold temperature of battery for charging
- -cold-in-minus : flag that cold temperature is in minus degrees
- -hot : critical hot temperature of battery for charging
- -temp-diff : temperature difference to allow recharging
- - cm-dis/charging-max = limits of charging duration
-
-Deprecated properties:
- - cm-num-chargers
- - cm-fullbatt-vchkdrop-ms
-
-Example :
- charger-manager@0 {
- compatible = "charger-manager";
- chg-reg-supply = <&charger_regulator>;
-
- cm-name = "battery";
- /* Always polling ON : 30s */
- cm-poll-mode = <1>;
- cm-poll-interval = <30000>;
-
- cm-fullbatt-vchkdrop-volt = <150000>;
- cm-fullbatt-soc = <100>;
-
- cm-battery-stat = <3>;
-
- cm-chargers = "charger0", "charger1", "charger2";
-
- cm-fuel-gauge = "fuelgauge0";
-
- cm-thermal-zone = "thermal_zone.1"
- /* in deci centigrade */
- cm-battery-cold = <50>;
- cm-battery-cold-in-minus;
- cm-battery-hot = <800>;
- cm-battery-temp-diff = <100>;
-
- /* Allow charging for 5hr */
- cm-charging-max = <18000000>;
- /* Allow discharging for 2hr */
- cm-discharging-max = <7200000>;
-
- regulator@0 {
- cm-regulator-name = "chg-reg";
- cable@0 {
- cm-cable-name = "USB";
- cm-cable-extcon = "extcon-dev.0";
- cm-cable-min = <475000>;
- cm-cable-max = <500000>;
- };
- cable@1 {
- cm-cable-name = "SDP";
- cm-cable-extcon = "extcon-dev.0";
- cm-cable-min = <650000>;
- cm-cable-max = <675000>;
- };
- };
-
- };
--- /dev/null
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/power/supply/charger-manager.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Charger Manager
+
+maintainers:
+ - Sebastian Reichel <sre@kernel.org>
+
+description: |
+ Binding for the legacy charger manager driver.
+ Please do not use for new products.
+
+properties:
+ compatible:
+ const: charger-manager
+
+ cm-chargers:
+ description: name of chargers
+ $ref: /schemas/types.yaml#/definitions/string-array
+
+ cm-num-chargers:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ deprecated: true
+
+ cm-fuel-gauge:
+ description: name of battery fuel gauge
+ $ref: /schemas/types.yaml#/definitions/string
+
+ cm-name:
+ description: name of the charger manager
+ default: battery
+ $ref: /schemas/types.yaml#/definitions/string
+
+ cm-poll-mode:
+ description: polling mode
+ default: 0
+ enum:
+ - 0 # disabled
+ - 1 # always
+ - 2 # when external power is connected
+ - 3 # when charging
+
+ cm-poll-interval:
+ description: polling interval (in ms)
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-battery-stat:
+ description: battery status
+ enum:
+ - 0 # battery always present
+ - 1 # no battery
+ - 2 # check presence via fuel gauge
+ - 3 # check presence via charger
+
+ cm-fullbatt-vchkdrop-volt:
+ description: voltage drop before restarting charging in uV
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-fullbatt-vchkdrop-ms:
+ deprecated: true
+
+ cm-fullbatt-voltage:
+ description: voltage of full battery in uV
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-fullbatt-soc:
+ description: state of charge to consider as full battery in %
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-fullbatt-capacity:
+    description: capacity to consider as full battery in uAh
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-thermal-zone:
+ description: name of external thermometer's thermal zone
+ $ref: /schemas/types.yaml#/definitions/string
+
+ cm-discharging-max:
+ description: limits of discharging duration in ms
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-charging-max:
+ description: limits of charging duration in ms
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-battery-cold:
+ description: critical cold temperature of battery for charging in deci-degree celsius
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-battery-cold-in-minus:
+ description: if set cm-battery-cold temperature is in minus degrees
+ type: boolean
+
+ cm-battery-hot:
+ description: critical hot temperature of battery for charging in deci-degree celsius
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-battery-temp-diff:
+ description: temperature difference to allow recharging in deci-degree celsius
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+patternProperties:
+ "-supply$":
+ description: regulator consumer, named according to cm-regulator-name
+ $ref: /schemas/types.yaml#/definitions/phandle
+
+ "^regulator[@-][0-9]$":
+ type: object
+ properties:
+ cm-regulator-name:
+ description: name of charger regulator
+ $ref: /schemas/types.yaml#/definitions/string
+
+ required:
+ - cm-regulator-name
+
+ additionalProperties: false
+
+ patternProperties:
+ "^cable[@-][0-9]$":
+ type: object
+ properties:
+ cm-cable-name:
+ description: name of charger cable
+ enum:
+ - USB
+ - USB-HOST
+ - SDP
+ - DCP
+ - CDP
+ - ACA
+ - FAST-CHARGER
+ - SLOW-CHARGER
+ - WPT
+ - PD
+ - DOCK
+ - JIG
+ - MECHANICAL
+
+ cm-cable-extcon:
+ description: name of extcon dev
+ $ref: /schemas/types.yaml#/definitions/string
+
+ cm-cable-min:
+ description: minimum current of cable in uA
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ cm-cable-max:
+ description: maximum current of cable in uA
+ $ref: /schemas/types.yaml#/definitions/uint32
+
+ required:
+ - cm-cable-name
+ - cm-cable-extcon
+
+ additionalProperties: false
+
+required:
+ - compatible
+ - cm-chargers
+ - cm-fuel-gauge
+
+additionalProperties: false
+
+examples:
+ - |
+ charger-manager {
+ compatible = "charger-manager";
+ chg-reg-supply = <&charger_regulator>;
+
+ cm-name = "battery";
+ /* Always polling ON : 30s */
+ cm-poll-mode = <1>;
+ cm-poll-interval = <30000>;
+
+ cm-fullbatt-vchkdrop-volt = <150000>;
+ cm-fullbatt-soc = <100>;
+
+ cm-battery-stat = <3>;
+
+ cm-chargers = "charger0", "charger1", "charger2";
+
+ cm-fuel-gauge = "fuelgauge0";
+
+ cm-thermal-zone = "thermal_zone.1";
+ /* in deci centigrade */
+ cm-battery-cold = <50>;
+ cm-battery-cold-in-minus;
+ cm-battery-hot = <800>;
+ cm-battery-temp-diff = <100>;
+
+ /* Allow charging for 5hr */
+ cm-charging-max = <18000000>;
+ /* Allow discharging for 2hr */
+ cm-discharging-max = <7200000>;
+
+ regulator-0 {
+ cm-regulator-name = "chg-reg";
+ cable-0 {
+ cm-cable-name = "USB";
+ cm-cable-extcon = "extcon-dev.0";
+ cm-cable-min = <475000>;
+ cm-cable-max = <500000>;
+ };
+ cable-1 {
+ cm-cable-name = "SDP";
+ cm-cable-extcon = "extcon-dev.0";
+ cm-cable-min = <650000>;
+ cm-cable-max = <675000>;
+ };
+ };
+ };
reg = <0x36>;
maxim,alert-low-soc-level = <10>;
interrupt-parent = <&gpio7>;
- interrupts = <2 IRQ_TYPE_EDGE_FALLING>;
+ interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
wakeup-source;
};
};
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/power/supply/richtek,rt5033-battery.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Richtek RT5033 PMIC Fuel Gauge
+
+maintainers:
+ - Stephan Gerhold <stephan@gerhold.net>
+
+allOf:
+ - $ref: power-supply.yaml#
+
+properties:
+ compatible:
+ const: richtek,rt5033-battery
+
+ reg:
+ maxItems: 1
+
+ interrupts:
+ maxItems: 1
+
+required:
+ - compatible
+ - reg
+
+additionalProperties: false
+
+examples:
+ - |
+ i2c {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ battery@35 {
+ compatible = "richtek,rt5033-battery";
+ reg = <0x35>;
+ };
+ };
+ - |
+ #include <dt-bindings/interrupt-controller/irq.h>
+ i2c {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ battery@35 {
+ compatible = "richtek,rt5033-battery";
+ reg = <0x35>;
+ interrupt-parent = <&msmgpio>;
+ interrupts = <121 IRQ_TYPE_EDGE_FALLING>;
+ };
+ };
+++ /dev/null
-TI SOC ECAP based APWM controller
-
-Required properties:
-- compatible: Must be "ti,<soc>-ecap".
- for am33xx - compatible = "ti,am3352-ecap", "ti,am33xx-ecap";
- for am4372 - compatible = "ti,am4372-ecap", "ti,am3352-ecap", "ti,am33xx-ecap";
- for da850 - compatible = "ti,da850-ecap", "ti,am3352-ecap", "ti,am33xx-ecap";
- for dra746 - compatible = "ti,dra746-ecap", "ti,am3352-ecap";
- for 66ak2g - compatible = "ti,k2g-ecap", "ti,am3352-ecap";
- for am654 - compatible = "ti,am654-ecap", "ti,am3352-ecap";
-- #pwm-cells: should be 3. See pwm.yaml in this directory for a description of
- the cells format. The PWM channel index ranges from 0 to 4. The only third
- cell flag supported by this binding is PWM_POLARITY_INVERTED.
-- reg: physical base address and size of the registers map.
-
-Optional properties:
-- clocks: Handle to the ECAP's functional clock.
-- clock-names: Must be set to "fck".
-
-Example:
-
-ecap0: ecap@48300100 { /* ECAP on am33xx */
- compatible = "ti,am3352-ecap", "ti,am33xx-ecap";
- #pwm-cells = <3>;
- reg = <0x48300100 0x80>;
- clocks = <&l4ls_gclk>;
- clock-names = "fck";
-};
-
-ecap0: ecap@48300100 { /* ECAP on am4372 */
- compatible = "ti,am4372-ecap", "ti,am3352-ecap", "ti,am33xx-ecap";
- #pwm-cells = <3>;
- reg = <0x48300100 0x80>;
- ti,hwmods = "ecap0";
- clocks = <&l4ls_gclk>;
- clock-names = "fck";
-};
-
-ecap0: ecap@1f06000 { /* ECAP on da850 */
- compatible = "ti,da850-ecap", "ti,am3352-ecap", "ti,am33xx-ecap";
- #pwm-cells = <3>;
- reg = <0x1f06000 0x80>;
-};
-
-ecap0: ecap@4843e100 {
- compatible = "ti,dra746-ecap", "ti,am3352-ecap";
- #pwm-cells = <3>;
- reg = <0x4843e100 0x80>;
- clocks = <&l4_root_clk_div>;
- clock-names = "fck";
-};
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pwm/pwm-tiecap.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: TI SOC ECAP based APWM controller
+
+maintainers:
+ - Vignesh R <vigneshr@ti.com>
+
+allOf:
+ - $ref: pwm.yaml#
+
+properties:
+ compatible:
+ oneOf:
+ - const: ti,am3352-ecap
+ - items:
+ - enum:
+ - ti,da850-ecap
+ - ti,am4372-ecap
+ - ti,dra746-ecap
+ - ti,k2g-ecap
+ - ti,am654-ecap
+ - ti,am64-ecap
+ - const: ti,am3352-ecap
+
+ reg:
+ maxItems: 1
+
+ "#pwm-cells":
+ const: 3
+ description: |
+ See pwm.yaml in this directory for a description of the cells format.
+ The only third cell flag supported by this binding is PWM_POLARITY_INVERTED.
+
+ clock-names:
+ const: fck
+
+ clocks:
+ maxItems: 1
+
+ power-domains:
+ maxItems: 1
+
+required:
+ - compatible
+ - reg
+ - "#pwm-cells"
+ - clocks
+ - clock-names
+
+additionalProperties: false
+
+examples:
+ - |
+ ecap0: pwm@48300100 { /* ECAP on am33xx */
+ compatible = "ti,am3352-ecap";
+ #pwm-cells = <3>;
+ reg = <0x48300100 0x80>;
+ clocks = <&l4ls_gclk>;
+ clock-names = "fck";
+ };
+++ /dev/null
-TI SOC EHRPWM based PWM controller
-
-Required properties:
-- compatible: Must be "ti,<soc>-ehrpwm".
- for am33xx - compatible = "ti,am3352-ehrpwm", "ti,am33xx-ehrpwm";
- for am4372 - compatible = "ti,am4372-ehrpwm", "ti-am3352-ehrpwm", "ti,am33xx-ehrpwm";
- for am654 - compatible = "ti,am654-ehrpwm", "ti-am3352-ehrpwm";
- for da850 - compatible = "ti,da850-ehrpwm", "ti-am3352-ehrpwm", "ti,am33xx-ehrpwm";
- for dra746 - compatible = "ti,dra746-ehrpwm", "ti-am3352-ehrpwm";
-- #pwm-cells: should be 3. See pwm.yaml in this directory for a description of
- the cells format. The only third cell flag supported by this binding is
- PWM_POLARITY_INVERTED.
-- reg: physical base address and size of the registers map.
-
-Optional properties:
-- clocks: Handle to the PWM's time-base and functional clock.
-- clock-names: Must be set to "tbclk" and "fck".
-
-Example:
-
-ehrpwm0: pwm@48300200 { /* EHRPWM on am33xx */
- compatible = "ti,am3352-ehrpwm", "ti,am33xx-ehrpwm";
- #pwm-cells = <3>;
- reg = <0x48300200 0x100>;
- clocks = <&ehrpwm0_tbclk>, <&l4ls_gclk>;
- clock-names = "tbclk", "fck";
-};
-
-ehrpwm0: pwm@48300200 { /* EHRPWM on am4372 */
- compatible = "ti,am4372-ehrpwm", "ti,am3352-ehrpwm", "ti,am33xx-ehrpwm";
- #pwm-cells = <3>;
- reg = <0x48300200 0x80>;
- clocks = <&ehrpwm0_tbclk>, <&l4ls_gclk>;
- clock-names = "tbclk", "fck";
- ti,hwmods = "ehrpwm0";
-};
-
-ehrpwm0: pwm@1f00000 { /* EHRPWM on da850 */
- compatible = "ti,da850-ehrpwm", "ti,am3352-ehrpwm", "ti,am33xx-ehrpwm";
- #pwm-cells = <3>;
- reg = <0x1f00000 0x2000>;
-};
-
-ehrpwm0: pwm@4843e200 { /* EHRPWM on dra746 */
- compatible = "ti,dra746-ehrpwm", "ti,am3352-ehrpwm";
- #pwm-cells = <3>;
- reg = <0x4843e200 0x80>;
- clocks = <&ehrpwm0_tbclk>, <&l4_root_clk_div>;
- clock-names = "tbclk", "fck";
-};
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pwm/pwm-tiehrpwm.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: TI SOC EHRPWM based PWM controller
+
+maintainers:
+ - Vignesh R <vigneshr@ti.com>
+
+allOf:
+ - $ref: pwm.yaml#
+
+properties:
+ compatible:
+ oneOf:
+ - const: ti,am3352-ehrpwm
+ - items:
+ - enum:
+ - ti,da850-ehrpwm
+ - ti,am4372-ehrpwm
+ - ti,dra746-ehrpwm
+ - ti,am654-ehrpwm
+ - ti,am64-epwm
+ - const: ti,am3352-ehrpwm
+
+ reg:
+ maxItems: 1
+
+ "#pwm-cells":
+ const: 3
+ description: |
+ See pwm.yaml in this directory for a description of the cells format.
+ The only third cell flag supported by this binding is PWM_POLARITY_INVERTED.
+
+ clock-names:
+ items:
+ - const: tbclk
+ - const: fck
+
+ clocks:
+ maxItems: 2
+
+ power-domains:
+ maxItems: 1
+
+required:
+ - compatible
+ - reg
+ - "#pwm-cells"
+ - clocks
+ - clock-names
+
+additionalProperties: false
+
+examples:
+ - |
+ ehrpwm0: pwm@48300200 { /* EHRPWM on am33xx */
+ compatible = "ti,am3352-ehrpwm";
+ #pwm-cells = <3>;
+ reg = <0x48300200 0x100>;
+ clocks = <&ehrpwm0_tbclk>, <&l4ls_gclk>;
+ clock-names = "tbclk", "fck";
+ };
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/watchdog/atmel,sama5d4-wdt.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Atmel SAMA5D4 Watchdog Timer (WDT) Controller
+
+maintainers:
+ - Eugen Hristev <eugen.hristev@microchip.com>
+
+allOf:
+ - $ref: "watchdog.yaml#"
+
+properties:
+ compatible:
+ enum:
+ - atmel,sama5d4-wdt
+ - microchip,sam9x60-wdt
+ - microchip,sama7g5-wdt
+
+ reg:
+ maxItems: 1
+
+ atmel,watchdog-type:
+ $ref: /schemas/types.yaml#/definitions/string
+ description: should be hardware or software.
+ oneOf:
+ - description:
+ Enable watchdog fault reset. A watchdog fault triggers
+ watchdog reset.
+ const: hardware
+ - description:
+ Enable watchdog fault interrupt. A watchdog fault asserts
+ watchdog interrupt.
+ const: software
+ default: hardware
+
+ atmel,idle-halt:
+ $ref: /schemas/types.yaml#/definitions/flag
+ description: |
+ present if you want to stop the watchdog when the CPU is in idle state.
+ CAUTION: This property should be used with care, it actually makes the
+ watchdog not counting when the CPU is in idle state, therefore the
+ watchdog reset time depends on mean CPU usage and will not reset at all
+ if the CPU stop working while it is in idle state, which is probably
+ not what you want.
+
+ atmel,dbg-halt:
+ $ref: /schemas/types.yaml#/definitions/flag
+ description: |
+ present if you want to stop the watchdog when the CPU is in debug state.
+
+required:
+ - compatible
+ - reg
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/irq.h>
+
+ watchdog@fc068640 {
+ compatible = "atmel,sama5d4-wdt";
+ reg = <0xfc068640 0x10>;
+ interrupts = <4 IRQ_TYPE_LEVEL_HIGH 5>;
+ timeout-sec = <10>;
+ atmel,watchdog-type = "hardware";
+ atmel,dbg-halt;
+ atmel,idle-halt;
+ };
+
+...
+++ /dev/null
-* Atmel SAMA5D4 Watchdog Timer (WDT) Controller
-
-Required properties:
-- compatible: "atmel,sama5d4-wdt" or "microchip,sam9x60-wdt"
-- reg: base physical address and length of memory mapped region.
-
-Optional properties:
-- timeout-sec: watchdog timeout value (in seconds).
-- interrupts: interrupt number to the CPU.
-- atmel,watchdog-type: should be "hardware" or "software".
- "hardware": enable watchdog fault reset. A watchdog fault triggers
- watchdog reset.
- "software": enable watchdog fault interrupt. A watchdog fault asserts
- watchdog interrupt.
-- atmel,idle-halt: present if you want to stop the watchdog when the CPU is
- in idle state.
- CAUTION: This property should be used with care, it actually makes the
- watchdog not counting when the CPU is in idle state, therefore the
- watchdog reset time depends on mean CPU usage and will not reset at all
- if the CPU stop working while it is in idle state, which is probably
- not what you want.
-- atmel,dbg-halt: present if you want to stop the watchdog when the CPU is
- in debug state.
-
-Example:
- watchdog@fc068640 {
- compatible = "atmel,sama5d4-wdt";
- reg = <0xfc068640 0x10>;
- interrupts = <4 IRQ_TYPE_LEVEL_HIGH 5>;
- timeout-sec = <10>;
- atmel,watchdog-type = "hardware";
- atmel,dbg-halt;
- atmel,idle-halt;
- };
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/watchdog/mstar,msc313e-wdt.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MStar Watchdog Device Tree Bindings
+
+maintainers:
+ - Daniel Palmer <daniel@0x0f.com>
+ - Romain Perier <romain.perier@gmail.com>
+
+allOf:
+ - $ref: watchdog.yaml#
+
+properties:
+ compatible:
+ enum:
+ - mstar,msc313e-wdt
+
+ reg:
+ maxItems: 1
+
+ clocks:
+ maxItems: 1
+
+required:
+ - compatible
+ - clocks
+ - reg
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ watchdog@6000 {
+ compatible = "mstar,msc313e-wdt";
+ reg = <0x6000 0x1f>;
+ clocks = <&xtal_div2>;
+ };
Mediatek SoCs Watchdog timer
+The watchdog supports a pre-timeout interrupt that fires timeout-sec/2
+seconds before the timeout expires.
+
Required properties:
- compatible should contain:
"mediatek,mt8183-wdt": for MT8183
"mediatek,mt8516-wdt", "mediatek,mt6589-wdt": for MT8516
"mediatek,mt8192-wdt": for MT8192
+ "mediatek,mt8195-wdt", "mediatek,mt6589-wdt": for MT8195
- reg : Specifies base physical address and size of the registers.
Optional properties:
+- interrupts: Watchdog pre-timeout (bark) interrupt.
- timeout-sec: contains the watchdog timeout in seconds.
- #reset-cells: Should be 1.
compatible = "mediatek,mt8183-wdt",
"mediatek,mt6589-wdt";
reg = <0 0x10007000 0 0x100>;
+ interrupts = <GIC_SPI 139 IRQ_TYPE_NONE>;
timeout-sec = <10>;
#reset-cells = <1>;
};
enum:
- qcom,apss-wdt-qcs404
- qcom,apss-wdt-sc7180
+ - qcom,apss-wdt-sc7280
- qcom,apss-wdt-sdm845
- qcom,apss-wdt-sdx55
- qcom,apss-wdt-sm8150
- rockchip,rk3328-wdt
- rockchip,rk3368-wdt
- rockchip,rk3399-wdt
+ - rockchip,rk3568-wdt
- rockchip,rv1108-wdt
- const: snps,dw-wdt
PWM
devm_pwm_get()
- devm_pwm_put()
+ devm_of_pwm_get()
+ devm_fwnode_pwm_get()
REGULATOR
devm_regulator_bulk_get()
New users should use the pwm_get() function and pass to it the consumer
device or a consumer name. pwm_put() is used to free the PWM device. Managed
-variants of these functions, devm_pwm_get() and devm_pwm_put(), also exist.
+variants of the getter, devm_pwm_get(), devm_of_pwm_get() and
+devm_fwnode_pwm_get(), also exist.
After being requested, a PWM has to be configured using::
This API controls both the PWM period/duty_cycle config and the
enable/disable state.
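As a minimal sketch (assuming "dev" is the consumer's struct device; error
handling trimmed), a consumer might request and configure a PWM like this::

	struct pwm_device *pwm;
	struct pwm_state state;

	pwm = devm_pwm_get(dev, NULL);
	if (IS_ERR(pwm))
		return PTR_ERR(pwm);

	pwm_init_state(pwm, &state);
	state.period = 1000000;		/* 1 ms, in nanoseconds */
	state.duty_cycle = 500000;	/* 50% duty cycle */
	state.enabled = true;

	return pwm_apply_state(pwm, &state);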
+There is also a usage_power setting: if set, the PWM driver is only required to
+maintain the power output but has more freedom regarding the signal form.
+If supported by the driver, the signal can be optimized, for example to improve
+EMI by phase-shifting the individual channels of a chip.
The pwm_config(), pwm_enable() and pwm_disable() functions are just wrappers
around pwm_apply_state() and should not be used if the user wants to change
.id_table = mpu3050_ids,
};
+Reference to PWM device
+=======================
+
+Sometimes a device can be a consumer of a PWM channel. Obviously, the OS
+would like to know which one. To provide this mapping, a special property
+has been introduced, i.e.::
+
+ Device (DEV)
+ {
+ Name (_DSD, Package ()
+ {
+ ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ Package () {
+ Package () { "compatible", Package () { "pwm-leds" } },
+ Package () { "label", "alarm-led" },
+ Package () { "pwms",
+ Package () {
+ "\\_SB.PCI0.PWM", // <PWM device reference>
+ 0, // <PWM index>
+ 600000000, // <PWM period>
+ 0, // <PWM flags>
+ }
+ }
+ }
+
+ })
+ ...
+
+In the above example the PWM-based LED driver refers to PWM channel 0 of the
+\_SB.PCI0.PWM device with an initial period of 600 ms (note that the value is
+given in nanoseconds).
+
GPIO support
============
.driver_data - cpufreq driver specific data.
- .resolve_freq - Returns the most appropriate frequency for a target
- frequency. Doesn't change the frequency though.
-
.get_intermediate and target_intermediate - Used to switch to stable
frequency while changing CPU frequency.
.exit - A pointer to a per-policy cleanup function called during
CPU_POST_DEAD phase of cpu hotplug process.
- .stop_cpu - A pointer to a per-policy stop function called during
- CPU_DOWN_PREPARE phase of cpu hotplug process.
-
.suspend - A pointer to a per-policy suspend function which is called
with interrupts disabled and _after_ the governor is stopped for the
policy.
F: arch/arm/mach-mstar/
F: drivers/clk/mstar/
F: drivers/gpio/gpio-msc313.c
+F: drivers/watchdog/msc313e_wdt.c
F: include/dt-bindings/clock/mstar-*
F: include/dt-bindings/gpio/msc313-gpio.h
F: drivers/pci/controller/pci-aardvark.c
PCI DRIVER FOR ALTERA PCIE IP
-M: Ley Foon Tan <ley.foon.tan@intel.com>
-L: rfi@lists.rocketboards.org (moderated for non-subscribers)
+M: Joyce Ooi <joyce.ooi@intel.com>
L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/altera-pcie.txt
F: Documentation/PCI/pci-error-recovery.rst
PCI MSI DRIVER FOR ALTERA MSI IP
-M: Ley Foon Tan <ley.foon.tan@intel.com>
-L: rfi@lists.rocketboards.org (moderated for non-subscribers)
+M: Joyce Ooi <joyce.ooi@intel.com>
L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/altera-pcie-msi.txt
F: Documentation/ABI/testing/sysfs-class-power
F: Documentation/devicetree/bindings/power/supply/
F: drivers/power/supply/
+F: include/linux/power/
F: include/linux/power_supply.h
POWERNV OPERATOR PANEL LCD DISPLAY DRIVER
F: drivers/s390/scsi/zfcp_*
S3C ADC BATTERY DRIVER
-M: Krzysztof Kozlowski <krzk@kernel.org>
+M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
L: linux-samsung-soc@vger.kernel.org
S: Odd Fixes
F: drivers/power/supply/s3c_adc_battery.c
}
if (size < (16UL<<20) && size != old_size)
- return 0;
+ return false;
if (dev)
dev_info(dev, "MMCONFIG at %pR reserved in %s\n",
&cfg->res, (unsigned long) cfg->address);
}
- return 1;
+ return true;
}
static bool __ref
{
if (!early && !acpi_disabled) {
if (is_mmconf_reserved(is_acpi_reserved, cfg, dev, 0))
- return 1;
+ return true;
if (dev)
dev_info(dev, FW_INFO
* _CBA method, just assume it's reserved.
*/
if (pci_mmcfg_running_state)
- return 1;
+ return true;
/* Don't try to do this check unless configuration
type 1 is available. how about type 2 ?*/
if (raw_pci_ops)
return is_mmconf_reserved(e820__mapped_all, cfg, dev, 1);
- return 0;
+ return false;
}
static void __init pci_mmcfg_reject_broken(int early)
bool "Platform Runtime Mechanism Support"
depends on EFI && X86_64
default y
+ help
+ Platform Runtime Mechanism (PRM) is a firmware interface exposing a
+ set of binary executables that can be called from the AML interpreter
+ or directly from device drivers.
+
+ Say Y to enable the AML interpreter to execute the PRM code.
+
+ While this feature is optional in principle, leaving it out may
+ substantially increase computational overhead related to the
+ initialization of some server systems.
case IORESOURCE_MEM:
if (!address_found) {
dev->res = *rentry->res;
+ dev->res.name = dev_name(&dev->dev);
address_found = true;
}
break;
DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V131"),
},
},
+ {
+ .callback = video_set_report_key_events,
+ .driver_data = (void *)((uintptr_t)REPORT_BRIGHTNESS_KEY_EVENTS),
+ .ident = "Dell Vostro 3350",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3350"),
+ },
+ },
/*
* Some machines change the brightness themselves when a brightness
* hotkey gets pressed, despite us telling them not to. In this case
capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_HOTPLUG_OST_SUPPORT;
capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_PCLPI_SUPPORT;
- capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_PRM_SUPPORT;
+ if (IS_ENABLED(CONFIG_ACPI_PRMT))
+ capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_PRM_SUPPORT;
#ifdef CONFIG_ARM64
capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_GENERIC_INITIATOR_SUPPORT;
mem_sleep_current = PM_SUSPEND_TO_IDLE;
/*
- * Some LPS0 systems, like ASUS Zenbook UX430UNR/i7-8550U, require the
- * EC GPE to be enabled while suspended for certain wakeup devices to
- * work, so mark it as wakeup-capable.
+ * Some Intel based LPS0 systems, like ASUS Zenbook UX430UNR/i7-8550U don't
+ * use intel-hid or intel-vbtn but require the EC GPE to be enabled while
+ * suspended for certain wakeup devices to work, so mark it as wakeup-capable.
+ *
+ * Only enable on !AMD as enabling this universally causes problems for a number
+ * of AMD based systems.
*/
- acpi_ec_mark_gpe_for_wake();
+ if (!acpi_s2idle_vendor_amd())
+ acpi_ec_mark_gpe_for_wake();
return 0;
}
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/percpu.h>
+#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/smp.h>
-static DEFINE_PER_CPU(struct scale_freq_data *, sft_data);
+static DEFINE_PER_CPU(struct scale_freq_data __rcu *, sft_data);
static struct cpumask scale_freq_counters_mask;
static bool scale_freq_invariant;
if (cpumask_empty(&scale_freq_counters_mask))
scale_freq_invariant = topology_scale_freq_invariant();
+ rcu_read_lock();
+
for_each_cpu(cpu, cpus) {
- sfd = per_cpu(sft_data, cpu);
+ sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
/* Use ARCH provided counters whenever possible */
if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) {
- per_cpu(sft_data, cpu) = data;
+ rcu_assign_pointer(per_cpu(sft_data, cpu), data);
cpumask_set_cpu(cpu, &scale_freq_counters_mask);
}
}
+ rcu_read_unlock();
+
update_scale_freq_invariant(true);
}
EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);
struct scale_freq_data *sfd;
int cpu;
+ rcu_read_lock();
+
for_each_cpu(cpu, cpus) {
- sfd = per_cpu(sft_data, cpu);
+ sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
if (sfd && sfd->source == source) {
- per_cpu(sft_data, cpu) = NULL;
+ rcu_assign_pointer(per_cpu(sft_data, cpu), NULL);
cpumask_clear_cpu(cpu, &scale_freq_counters_mask);
}
}
+ rcu_read_unlock();
+
+ /*
+ * Make sure all references to previous sft_data are dropped to avoid
+ * use-after-free races.
+ */
+ synchronize_rcu();
+
update_scale_freq_invariant(false);
}
EXPORT_SYMBOL_GPL(topology_clear_scale_freq_source);
void topology_scale_freq_tick(void)
{
- struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data);
+ struct scale_freq_data *sfd = rcu_dereference_sched(*this_cpu_ptr(&sft_data));
if (sfd)
sfd->set_freq_scale();
mutex_lock(&gpd_list_lock);
list_add(&genpd->gpd_list_node, &gpd_list);
- genpd_debug_add(genpd);
mutex_unlock(&gpd_list_lock);
+ genpd_debug_add(genpd);
return 0;
}
static bool genpd_present(const struct generic_pm_domain *genpd)
{
+ bool ret = false;
const struct generic_pm_domain *gpd;
- list_for_each_entry(gpd, &gpd_list, gpd_list_node)
- if (gpd == genpd)
- return true;
- return false;
+ mutex_lock(&gpd_list_lock);
+ list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
+ if (gpd == genpd) {
+ ret = true;
+ break;
+ }
+ }
+ mutex_unlock(&gpd_list_lock);
+
+ return ret;
}
/**
int of_genpd_add_provider_simple(struct device_node *np,
struct generic_pm_domain *genpd)
{
- int ret = -EINVAL;
+ int ret;
if (!np || !genpd)
return -EINVAL;
- mutex_lock(&gpd_list_lock);
-
if (!genpd_present(genpd))
- goto unlock;
+ return -EINVAL;
genpd->dev.of_node = np;
if (ret != -EPROBE_DEFER)
dev_err(&genpd->dev, "Failed to add OPP table: %d\n",
ret);
- goto unlock;
+ return ret;
}
/*
dev_pm_opp_of_remove_table(&genpd->dev);
}
- goto unlock;
+ return ret;
}
genpd->provider = &np->fwnode;
genpd->has_provider = true;
-unlock:
- mutex_unlock(&gpd_list_lock);
-
- return ret;
+ return 0;
}
EXPORT_SYMBOL_GPL(of_genpd_add_provider_simple);
if (!np || !data)
return -EINVAL;
- mutex_lock(&gpd_list_lock);
-
if (!data->xlate)
data->xlate = genpd_xlate_onecell;
if (ret < 0)
goto error;
- mutex_unlock(&gpd_list_lock);
-
return 0;
error:
}
}
- mutex_unlock(&gpd_list_lock);
-
return ret;
}
EXPORT_SYMBOL_GPL(of_genpd_add_provider_onecell);
void *cb, int error)
{
ktime_t rettime;
- s64 nsecs;
if (!pm_print_times_enabled)
return;
rettime = ktime_get();
- nsecs = (s64) ktime_to_ns(ktime_sub(rettime, calltime));
-
dev_info(dev, "%pS returned %d after %Ld usecs\n", cb, error,
- (unsigned long long)nsecs >> 10);
+ (unsigned long long)ktime_us_delta(rettime, calltime));
}
/**
return bestdiv;
}
+int divider_determine_rate(struct clk_hw *hw, struct clk_rate_request *req,
+ const struct clk_div_table *table, u8 width,
+ unsigned long flags)
+{
+ int div;
+
+ div = clk_divider_bestdiv(hw, req->best_parent_hw, req->rate,
+ &req->best_parent_rate, table, width, flags);
+
+ req->rate = DIV_ROUND_UP_ULL((u64)req->best_parent_rate, div);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(divider_determine_rate);
+
+int divider_ro_determine_rate(struct clk_hw *hw, struct clk_rate_request *req,
+ const struct clk_div_table *table, u8 width,
+ unsigned long flags, unsigned int val)
+{
+ int div;
+
+ div = _get_div(table, val, flags, width);
+
+ /* Even a read-only clock can propagate a rate change */
+ if (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) {
+ if (!req->best_parent_hw)
+ return -EINVAL;
+
+ req->best_parent_rate = clk_hw_round_rate(req->best_parent_hw,
+ req->rate * div);
+ }
+
+ req->rate = DIV_ROUND_UP_ULL((u64)req->best_parent_rate, div);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(divider_ro_determine_rate);
+
long divider_round_rate_parent(struct clk_hw *hw, struct clk_hw *parent,
unsigned long rate, unsigned long *prate,
const struct clk_div_table *table,
u8 width, unsigned long flags)
{
- int div;
+ struct clk_rate_request req = {
+ .rate = rate,
+ .best_parent_rate = *prate,
+ .best_parent_hw = parent,
+ };
+ int ret;
- div = clk_divider_bestdiv(hw, parent, rate, prate, table, width, flags);
+ ret = divider_determine_rate(hw, &req, table, width, flags);
+ if (ret)
+ return ret;
- return DIV_ROUND_UP_ULL((u64)*prate, div);
+ *prate = req.best_parent_rate;
+
+ return req.rate;
}
EXPORT_SYMBOL_GPL(divider_round_rate_parent);
const struct clk_div_table *table, u8 width,
unsigned long flags, unsigned int val)
{
- int div;
-
- div = _get_div(table, val, flags, width);
+ struct clk_rate_request req = {
+ .rate = rate,
+ .best_parent_rate = *prate,
+ .best_parent_hw = parent,
+ };
+ int ret;
- /* Even a read-only clock can propagate a rate change */
- if (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) {
- if (!parent)
- return -EINVAL;
+ ret = divider_ro_determine_rate(hw, &req, table, width, flags, val);
+ if (ret)
+ return ret;
- *prate = clk_hw_round_rate(parent, rate * div);
- }
+ *prate = req.best_parent_rate;
- return DIV_ROUND_UP_ULL((u64)*prate, div);
+ return req.rate;
}
EXPORT_SYMBOL_GPL(divider_ro_round_rate_parent);
-
static long clk_divider_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *prate)
{
reg |= BIT(cfg->mux_bit);
else
reg &= ~BIT(cfg->mux_bit);
+ writel(reg, ksc->regs + cfg->mux_reg);
spin_unlock_irqrestore(&ksc->clk_lock, flags);
return 0;
vco_rate = lmk04832_calc_pll2_params(*prate, rate, &n, &p, &r);
if (vco_rate < 0) {
- dev_err(lmk->dev, "PLL2 parmeters out of range\n");
+ dev_err(lmk->dev, "PLL2 parameters out of range\n");
return vco_rate;
}
vco_rate = lmk04832_calc_pll2_params(prate, rate, &n, &p, &r);
if (vco_rate < 0) {
- dev_err(lmk->dev, "failed to determine PLL2 parmeters\n");
+ dev_err(lmk->dev, "failed to determine PLL2 parameters\n");
return vco_rate;
}
/*
* PLL2_N registers must be programmed after other PLL2 dividers are
- * programed to ensure proper VCO frequency calibration
+ * programmed to ensure proper VCO frequency calibration
*/
ret = regmap_write(lmk->regmap, LMK04832_REG_PLL2_N_0,
FIELD_GET(0x030000, n));
return -EINVAL;
}
- /* Enable Duty Cycle Corretion */
+ /* Enable Duty Cycle Correction */
if (dclk_div == 1) {
ret = regmap_update_bits(lmk->regmap,
LMK04832_REG_CLKOUT_CTRL3(dclk->id),
lmk->dclk = devm_kcalloc(lmk->dev, info->num_channels >> 1,
sizeof(struct lmk_dclk), GFP_KERNEL);
- if (IS_ERR(lmk->dclk)) {
- ret = PTR_ERR(lmk->dclk);
+ if (!lmk->dclk) {
+ ret = -ENOMEM;
goto err_disable_oscin;
}
lmk->clkout = devm_kcalloc(lmk->dev, info->num_channels,
sizeof(*lmk->clkout), GFP_KERNEL);
- if (IS_ERR(lmk->clkout)) {
- ret = PTR_ERR(lmk->clkout);
+ if (!lmk->clkout) {
+ ret = -ENOMEM;
goto err_disable_oscin;
}
lmk->clk_data = devm_kzalloc(lmk->dev, struct_size(lmk->clk_data, hws,
info->num_channels),
GFP_KERNEL);
- if (IS_ERR(lmk->clk_data)) {
- ret = PTR_ERR(lmk->clk_data);
+ if (!lmk->clk_data) {
+ ret = -ENOMEM;
goto err_disable_oscin;
}
if (!reset_data)
return -ENOMEM;
+ spin_lock_init(&reset_data->lock);
reset_data->membase = base;
reset_data->rcdev.owner = THIS_MODULE;
reset_data->rcdev.ops = &stm32_reset_ops;
};
-static const char *fmc_mux_p[] __initconst = {
+static const char *fmc_mux_p[] = {
"24m", "75m", "125m", "150m", "200m", "250m", "300m", "400m"
};
-static const char *mmc_mux_p[] __initconst = {
+static const char *mmc_mux_p[] = {
"100k", "25m", "49p5m", "99m", "187p5m", "150m", "198m", "400k"
};
-static const char *sysapb_mux_p[] __initconst = {
+static const char *sysapb_mux_p[] = {
"24m", "50m",
};
-static const char *sysbus_mux_p[] __initconst = {
+static const char *sysbus_mux_p[] = {
"24m", "300m"
};
-static const char *uart_mux_p[] __initconst = { "50m", "24m", "3m" };
+static const char *uart_mux_p[] = { "50m", "24m", "3m" };
-static const char *a73_clksel_mux_p[] __initconst = {
+static const char *a73_clksel_mux_p[] = {
"24m", "apll", "1000m"
};
static const u32 uart_mux_table[] = { 0, 1, 2 };
static const u32 a73_clksel_mux_table[] = { 0, 1, 2 };
-static struct hisi_mux_clock hi3559av100_mux_clks_crg[] __initdata = {
+static struct hisi_mux_clock hi3559av100_mux_clks_crg[] = {
{
HI3559AV100_FMC_MUX, "fmc_mux", fmc_mux_p, ARRAY_SIZE(fmc_mux_p),
CLK_SET_RATE_PARENT, 0x170, 2, 3, 0, fmc_mux_table,
},
};
-static struct hisi_gate_clock hi3559av100_gate_clks[] __initdata = {
+static struct hisi_gate_clock hi3559av100_gate_clks[] = {
{
HI3559AV100_FMC_CLK, "clk_fmc", "fmc_mux",
CLK_SET_RATE_PARENT, 0x170, 1, 0,
},
};
-static struct hi3559av100_pll_clock hi3559av100_pll_clks[] __initdata = {
+static struct hi3559av100_pll_clock hi3559av100_pll_clks[] = {
{
HI3559AV100_APLL_CLK, "apll", NULL, 0x0, 0, 24, 24, 3, 28, 3,
0x4, 0, 12, 12, 6
}
}
-static __init struct hisi_clock_data *hi3559av100_clk_register(
+static struct hisi_clock_data *hi3559av100_clk_register(
struct platform_device *pdev)
{
struct hisi_clock_data *clk_data;
return ERR_PTR(ret);
}
-static __init void hi3559av100_clk_unregister(struct platform_device *pdev)
+static void hi3559av100_clk_unregister(struct platform_device *pdev)
{
struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
.unregister_clks = hi3559av100_clk_unregister,
};
-static struct hisi_fixed_rate_clock hi3559av100_shub_fixed_rate_clks[]
- __initdata = {
+static struct hisi_fixed_rate_clock hi3559av100_shub_fixed_rate_clks[] = {
{ HI3559AV100_SHUB_SOURCE_SOC_24M, "clk_source_24M", NULL, 0, 24000000UL, },
{ HI3559AV100_SHUB_SOURCE_SOC_200M, "clk_source_200M", NULL, 0, 200000000UL, },
{ HI3559AV100_SHUB_SOURCE_SOC_300M, "clk_source_300M", NULL, 0, 300000000UL, },
/* shub mux clk */
static u32 shub_source_clk_mux_table[] = {0, 1, 2, 3};
-static const char *shub_source_clk_mux_p[] __initconst = {
+static const char *shub_source_clk_mux_p[] = {
"clk_source_24M", "clk_source_200M", "clk_source_300M", "clk_source_PLL"
};
static u32 shub_uart_source_clk_mux_table[] = {0, 1, 2, 3};
-static const char *shub_uart_source_clk_mux_p[] __initconst = {
+static const char *shub_uart_source_clk_mux_p[] = {
"clk_uart_32K", "clk_uart_div_clk", "clk_uart_div_clk", "clk_source_24M"
};
-static struct hisi_mux_clock hi3559av100_shub_mux_clks[] __initdata = {
+static struct hisi_mux_clock hi3559av100_shub_mux_clks[] = {
{
HI3559AV100_SHUB_SOURCE_CLK, "shub_clk", shub_source_clk_mux_p,
ARRAY_SIZE(shub_source_clk_mux_p),
static struct clk_div_table shub_spi_clk_table[] = {{0, 8}, {1, 4}, {2, 2}};
static struct clk_div_table shub_uart_div_clk_table[] = {{1, 8}, {2, 4}};
-static struct hisi_divider_clock hi3559av100_shub_div_clks[] __initdata = {
+static struct hisi_divider_clock hi3559av100_shub_div_clks[] = {
{ HI3559AV100_SHUB_SPI_SOURCE_CLK, "clk_spi_clk", "shub_clk", 0, 0x20, 24, 2,
CLK_DIVIDER_ALLOW_ZERO, shub_spi_clk_table,
},
};
/* shub gate clk */
-static struct hisi_gate_clock hi3559av100_shub_gate_clks[] __initdata = {
+static struct hisi_gate_clock hi3559av100_shub_gate_clks[] = {
{
HI3559AV100_SHUB_SPI0_CLK, "clk_shub_spi0", "clk_spi_clk",
0, 0x20, 1, 0,
return 0;
}
-static __init struct hisi_clock_data *hi3559av100_shub_clk_register(
+static struct hisi_clock_data *hi3559av100_shub_clk_register(
struct platform_device *pdev)
{
struct hisi_clock_data *clk_data = NULL;
return ERR_PTR(ret);
}
-static __init void hi3559av100_shub_clk_unregister(struct platform_device *pdev)
+static void hi3559av100_shub_clk_unregister(struct platform_device *pdev)
{
struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
div->width);
}
-static long clk_regmap_div_round_rate(struct clk_hw *hw, unsigned long rate,
- unsigned long *prate)
+static int clk_regmap_div_determine_rate(struct clk_hw *hw,
+ struct clk_rate_request *req)
{
struct clk_regmap *clk = to_clk_regmap(hw);
struct clk_regmap_div_data *div = clk_get_regmap_div_data(clk);
if (div->flags & CLK_DIVIDER_READ_ONLY) {
ret = regmap_read(clk->map, div->offset, &val);
if (ret)
- /* Gives a hint that something is wrong */
- return 0;
+ return ret;
val >>= div->shift;
val &= clk_div_mask(div->width);
- return divider_ro_round_rate(hw, rate, prate, div->table,
- div->width, div->flags, val);
+ return divider_ro_determine_rate(hw, req, div->table,
+ div->width, div->flags, val);
}
- return divider_round_rate(hw, rate, prate, div->table, div->width,
- div->flags);
+ return divider_determine_rate(hw, req, div->table, div->width,
+ div->flags);
}
static int clk_regmap_div_set_rate(struct clk_hw *hw, unsigned long rate,
const struct clk_ops clk_regmap_divider_ops = {
.recalc_rate = clk_regmap_div_recalc_rate,
- .round_rate = clk_regmap_div_round_rate,
+ .determine_rate = clk_regmap_div_determine_rate,
.set_rate = clk_regmap_div_set_rate,
};
EXPORT_SYMBOL_GPL(clk_regmap_divider_ops);
const struct clk_ops clk_regmap_divider_ro_ops = {
.recalc_rate = clk_regmap_div_recalc_rate,
- .round_rate = clk_regmap_div_round_rate,
+ .determine_rate = clk_regmap_div_determine_rate,
};
EXPORT_SYMBOL_GPL(clk_regmap_divider_ro_ops);
If in doubt, say N.
+config ACPI_CPPC_CPUFREQ_FIE
+ bool "Frequency Invariance support for CPPC cpufreq driver"
+ depends on ACPI_CPPC_CPUFREQ && GENERIC_ARCH_TOPOLOGY
+ default y
+ help
+ This extends frequency invariance support in the CPPC cpufreq driver,
+ by using CPPC delivered and reference performance counters.
+
+ If in doubt, say N.
+
config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM
tristate "Allwinner nvmem based SUN50I CPUFreq driver"
depends on ARCH_SUNXI
#define pr_fmt(fmt) "CPPC Cpufreq:" fmt
+#include <linux/arch_topology.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/dmi.h>
+#include <linux/irq_work.h>
+#include <linux/kthread.h>
#include <linux/time.h>
#include <linux/vmalloc.h>
+#include <uapi/linux/sched/types.h>
#include <asm/unaligned.h>
}
};
+#ifdef CONFIG_ACPI_CPPC_CPUFREQ_FIE
+
+/* Frequency invariance support */
+struct cppc_freq_invariance {
+ int cpu;
+ struct irq_work irq_work;
+ struct kthread_work work;
+ struct cppc_perf_fb_ctrs prev_perf_fb_ctrs;
+ struct cppc_cpudata *cpu_data;
+};
+
+static DEFINE_PER_CPU(struct cppc_freq_invariance, cppc_freq_inv);
+static struct kthread_worker *kworker_fie;
+
+static struct cpufreq_driver cppc_cpufreq_driver;
+static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu);
+static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
+ struct cppc_perf_fb_ctrs *fb_ctrs_t0,
+ struct cppc_perf_fb_ctrs *fb_ctrs_t1);
+
+/**
+ * cppc_scale_freq_workfn - CPPC arch_freq_scale updater for frequency invariance
+ * @work: The work item.
+ *
+ * The CPPC driver registers itself with the topology core to provide its own
+ * implementation (cppc_scale_freq_tick()) of topology_scale_freq_tick() which
+ * gets called by the scheduler on every tick.
+ *
+ * Note that the arch specific counters have higher priority than CPPC counters,
+ * if available, though the CPPC driver doesn't need to have any special
+ * handling for that.
+ *
+ * On an invocation of cppc_scale_freq_tick(), we schedule an irq work (since we
+ * reach here from hard-irq context), which then schedules a normal work item
+ * and cppc_scale_freq_workfn() updates the per_cpu arch_freq_scale variable
+ * based on the counter updates since the last tick.
+ */
+static void cppc_scale_freq_workfn(struct kthread_work *work)
+{
+ struct cppc_freq_invariance *cppc_fi;
+ struct cppc_perf_fb_ctrs fb_ctrs = {0};
+ struct cppc_cpudata *cpu_data;
+ unsigned long local_freq_scale;
+ u64 perf;
+
+ cppc_fi = container_of(work, struct cppc_freq_invariance, work);
+ cpu_data = cppc_fi->cpu_data;
+
+ if (cppc_get_perf_ctrs(cppc_fi->cpu, &fb_ctrs)) {
+ pr_warn("%s: failed to read perf counters\n", __func__);
+ return;
+ }
+
+ perf = cppc_perf_from_fbctrs(cpu_data, &cppc_fi->prev_perf_fb_ctrs,
+ &fb_ctrs);
+ cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
+
+ perf <<= SCHED_CAPACITY_SHIFT;
+ local_freq_scale = div64_u64(perf, cpu_data->perf_caps.highest_perf);
+
+	/* This can happen due to counter overflow */
+ if (unlikely(local_freq_scale > 1024))
+ local_freq_scale = 1024;
+
+ per_cpu(arch_freq_scale, cppc_fi->cpu) = local_freq_scale;
+}
+
+static void cppc_irq_work(struct irq_work *irq_work)
+{
+ struct cppc_freq_invariance *cppc_fi;
+
+ cppc_fi = container_of(irq_work, struct cppc_freq_invariance, irq_work);
+ kthread_queue_work(kworker_fie, &cppc_fi->work);
+}
+
+static void cppc_scale_freq_tick(void)
+{
+ struct cppc_freq_invariance *cppc_fi = &per_cpu(cppc_freq_inv, smp_processor_id());
+
+ /*
+ * cppc_get_perf_ctrs() can potentially sleep, call that from the right
+ * context.
+ */
+ irq_work_queue(&cppc_fi->irq_work);
+}
+
+static struct scale_freq_data cppc_sftd = {
+ .source = SCALE_FREQ_SOURCE_CPPC,
+ .set_freq_scale = cppc_scale_freq_tick,
+};
+
+static void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy)
+{
+ struct cppc_freq_invariance *cppc_fi;
+ int cpu, ret;
+
+ if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
+ return;
+
+ for_each_cpu(cpu, policy->cpus) {
+ cppc_fi = &per_cpu(cppc_freq_inv, cpu);
+ cppc_fi->cpu = cpu;
+ cppc_fi->cpu_data = policy->driver_data;
+ kthread_init_work(&cppc_fi->work, cppc_scale_freq_workfn);
+ init_irq_work(&cppc_fi->irq_work, cppc_irq_work);
+
+ ret = cppc_get_perf_ctrs(cpu, &cppc_fi->prev_perf_fb_ctrs);
+ if (ret) {
+ pr_warn("%s: failed to read perf counters for cpu:%d: %d\n",
+ __func__, cpu, ret);
+
+ /*
+ * Don't abort if the CPU was offline while the driver
+ * was getting registered.
+ */
+ if (cpu_online(cpu))
+ return;
+ }
+ }
+
+ /* Register for freq-invariance */
+ topology_set_scale_freq_source(&cppc_sftd, policy->cpus);
+}
+
+/*
+ * We free all the resources on policy's removal and not on CPU removal as the
+ * irq-works are per-CPU and the hotplug core takes care of flushing the pending
+ * irq-works (hint: smpcfd_dying_cpu()) on CPU hotplug. Even if the kthread-work
+ * fires on another CPU after the concerned CPU is removed, it won't harm.
+ *
+ * We just need to make sure to remove them all on policy->exit().
+ */
+static void cppc_cpufreq_cpu_fie_exit(struct cpufreq_policy *policy)
+{
+ struct cppc_freq_invariance *cppc_fi;
+ int cpu;
+
+ if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
+ return;
+
+ /* policy->cpus will be empty here, use related_cpus instead */
+ topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_CPPC, policy->related_cpus);
+
+ for_each_cpu(cpu, policy->related_cpus) {
+ cppc_fi = &per_cpu(cppc_freq_inv, cpu);
+ irq_work_sync(&cppc_fi->irq_work);
+ kthread_cancel_work_sync(&cppc_fi->work);
+ }
+}
+
+static void __init cppc_freq_invariance_init(void)
+{
+ struct sched_attr attr = {
+ .size = sizeof(struct sched_attr),
+ .sched_policy = SCHED_DEADLINE,
+ .sched_nice = 0,
+ .sched_priority = 0,
+ /*
+ * Fake (unused) bandwidth; workaround to "fix"
+ * priority inheritance.
+ */
+ .sched_runtime = 1000000,
+ .sched_deadline = 10000000,
+ .sched_period = 10000000,
+ };
+ int ret;
+
+ if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
+ return;
+
+ kworker_fie = kthread_create_worker(0, "cppc_fie");
+ if (IS_ERR(kworker_fie))
+ return;
+
+ ret = sched_setattr_nocheck(kworker_fie->task, &attr);
+ if (ret) {
+ pr_warn("%s: failed to set SCHED_DEADLINE: %d\n", __func__,
+ ret);
+ kthread_destroy_worker(kworker_fie);
+ return;
+ }
+}
+
+static void cppc_freq_invariance_exit(void)
+{
+ if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
+ return;
+
+ kthread_destroy_worker(kworker_fie);
+ kworker_fie = NULL;
+}
+
+#else
+static inline void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy)
+{
+}
+
+static inline void cppc_cpufreq_cpu_fie_exit(struct cpufreq_policy *policy)
+{
+}
+
+static inline void cppc_freq_invariance_init(void)
+{
+}
+
+static inline void cppc_freq_invariance_exit(void)
+{
+}
+#endif /* CONFIG_ACPI_CPPC_CPUFREQ_FIE */
+
/* Callback function used to retrieve the max frequency from DMI */
static void cppc_find_dmi_mhz(const struct dmi_header *dm, void *private)
{
return 0;
}
-static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy)
-{
- struct cppc_cpudata *cpu_data = policy->driver_data;
- struct cppc_perf_caps *caps = &cpu_data->perf_caps;
- unsigned int cpu = policy->cpu;
- int ret;
-
- cpu_data->perf_ctrls.desired_perf = caps->lowest_perf;
-
- ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
- if (ret)
- pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
- caps->lowest_perf, cpu, ret);
-
- /* Remove CPU node from list and free driver data for policy */
- free_cpumask_var(cpu_data->shared_cpu_map);
- list_del(&cpu_data->node);
- kfree(policy->driver_data);
- policy->driver_data = NULL;
-}
-
/*
* The PCC subspace describes the rate at which platform can accept commands
* on the shared PCC channel (including READs which do not count towards freq
return NULL;
}
+static void cppc_cpufreq_put_cpu_data(struct cpufreq_policy *policy)
+{
+ struct cppc_cpudata *cpu_data = policy->driver_data;
+
+ list_del(&cpu_data->node);
+ free_cpumask_var(cpu_data->shared_cpu_map);
+ kfree(cpu_data);
+ policy->driver_data = NULL;
+}
+
static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
unsigned int cpu = policy->cpu;
default:
pr_debug("Unsupported CPU co-ord type: %d\n",
policy->shared_type);
- return -EFAULT;
+ ret = -EFAULT;
+ goto out;
}
/*
cpu_data->perf_ctrls.desired_perf = caps->highest_perf;
ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
- if (ret)
+ if (ret) {
pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
caps->highest_perf, cpu, ret);
+ goto out;
+ }
+
+ cppc_cpufreq_cpu_fie_init(policy);
+ return 0;
+out:
+ cppc_cpufreq_put_cpu_data(policy);
return ret;
}
+static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+ struct cppc_cpudata *cpu_data = policy->driver_data;
+ struct cppc_perf_caps *caps = &cpu_data->perf_caps;
+ unsigned int cpu = policy->cpu;
+ int ret;
+
+ cppc_cpufreq_cpu_fie_exit(policy);
+
+ cpu_data->perf_ctrls.desired_perf = caps->lowest_perf;
+
+ ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
+ if (ret)
+ pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
+ caps->lowest_perf, cpu, ret);
+
+ cppc_cpufreq_put_cpu_data(policy);
+ return 0;
+}
+
static inline u64 get_delta(u64 t1, u64 t0)
{
if (t1 > t0 || t0 > ~(u32)0)
return (u32)t1 - (u32)t0;
}
-static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data,
- struct cppc_perf_fb_ctrs fb_ctrs_t0,
- struct cppc_perf_fb_ctrs fb_ctrs_t1)
+static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
+ struct cppc_perf_fb_ctrs *fb_ctrs_t0,
+ struct cppc_perf_fb_ctrs *fb_ctrs_t1)
{
u64 delta_reference, delta_delivered;
- u64 reference_perf, delivered_perf;
+ u64 reference_perf;
- reference_perf = fb_ctrs_t0.reference_perf;
+ reference_perf = fb_ctrs_t0->reference_perf;
- delta_reference = get_delta(fb_ctrs_t1.reference,
- fb_ctrs_t0.reference);
- delta_delivered = get_delta(fb_ctrs_t1.delivered,
- fb_ctrs_t0.delivered);
+ delta_reference = get_delta(fb_ctrs_t1->reference,
+ fb_ctrs_t0->reference);
+ delta_delivered = get_delta(fb_ctrs_t1->delivered,
+ fb_ctrs_t0->delivered);
- /* Check to avoid divide-by zero */
- if (delta_reference || delta_delivered)
- delivered_perf = (reference_perf * delta_delivered) /
- delta_reference;
- else
- delivered_perf = cpu_data->perf_ctrls.desired_perf;
+	/* Check to avoid divide-by-zero and invalid delivered_perf */
+ if (!delta_reference || !delta_delivered)
+ return cpu_data->perf_ctrls.desired_perf;
- return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf);
+ return (reference_perf * delta_delivered) / delta_reference;
}
static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
struct cppc_cpudata *cpu_data = policy->driver_data;
+ u64 delivered_perf;
int ret;
cpufreq_cpu_put(policy);
if (ret)
return ret;
- return cppc_get_rate_from_fbctrs(cpu_data, fb_ctrs_t0, fb_ctrs_t1);
+ delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0,
+ &fb_ctrs_t1);
+
+ return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf);
}
static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
.target = cppc_cpufreq_set_target,
.get = cppc_cpufreq_get_rate,
.init = cppc_cpufreq_cpu_init,
- .stop_cpu = cppc_cpufreq_stop_cpu,
+ .exit = cppc_cpufreq_cpu_exit,
.set_boost = cppc_cpufreq_set_boost,
.attr = cppc_cpufreq_attr,
.name = "cppc_cpufreq",
static int __init cppc_cpufreq_init(void)
{
+ int ret;
+
	if (acpi_disabled || !acpi_cpc_valid())
return -ENODEV;
INIT_LIST_HEAD(&cpu_data_list);
cppc_check_hisi_workaround();
+ cppc_freq_invariance_init();
+
+ ret = cpufreq_register_driver(&cppc_cpufreq_driver);
+ if (ret)
+ cppc_freq_invariance_exit();
- return cpufreq_register_driver(&cppc_cpufreq_driver);
+ return ret;
}
static inline void free_cpu_data(void)
static void __exit cppc_cpufreq_exit(void)
{
cpufreq_unregister_driver(&cppc_cpufreq_driver);
+ cppc_freq_invariance_exit();
free_cpu_data();
}
* Machines for which the cpufreq device is *always* created, mostly used for
* platforms using "operating-points" (V1) property.
*/
-static const struct of_device_id whitelist[] __initconst = {
+static const struct of_device_id allowlist[] __initconst = {
{ .compatible = "allwinner,sun4i-a10", },
{ .compatible = "allwinner,sun5i-a10s", },
{ .compatible = "allwinner,sun5i-a13", },
* Machines for which the cpufreq device is *not* created, mostly used for
* platforms using "operating-points-v2" property.
*/
-static const struct of_device_id blacklist[] __initconst = {
+static const struct of_device_id blocklist[] __initconst = {
{ .compatible = "allwinner,sun50i-h6", },
{ .compatible = "arm,vexpress", },
{ .compatible = "mediatek,mt8173", },
{ .compatible = "mediatek,mt8176", },
{ .compatible = "mediatek,mt8183", },
+ { .compatible = "mediatek,mt8365", },
{ .compatible = "mediatek,mt8516", },
{ .compatible = "nvidia,tegra20", },
{ .compatible = "qcom,msm8996", },
{ .compatible = "qcom,qcs404", },
{ .compatible = "qcom,sc7180", },
+ { .compatible = "qcom,sc7280", },
{ .compatible = "qcom,sdm845", },
{ .compatible = "st,stih407", },
if (!np)
return -ENODEV;
- match = of_match_node(whitelist, np);
+ match = of_match_node(allowlist, np);
if (match) {
data = match->data;
goto create_pdev;
}
- if (cpu0_node_has_opp_v2_prop() && !of_match_node(blacklist, np))
+ if (cpu0_node_has_opp_v2_prop() && !of_match_node(blocklist, np))
goto create_pdev;
of_node_put(np);
}
EXPORT_SYMBOL_GPL(cpufreq_disable_fast_switch);
+static unsigned int __resolve_freq(struct cpufreq_policy *policy,
+ unsigned int target_freq, unsigned int relation)
+{
+ unsigned int idx;
+
+ target_freq = clamp_val(target_freq, policy->min, policy->max);
+
+ if (!cpufreq_driver->target_index)
+ return target_freq;
+
+ idx = cpufreq_frequency_table_target(policy, target_freq, relation);
+ policy->cached_resolved_idx = idx;
+ policy->cached_target_freq = target_freq;
+ return policy->freq_table[idx].frequency;
+}
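For illustration, the clamp-then-snap behaviour that __resolve_freq() implements for ->target_index drivers can be modelled in userspace as below (made-up frequency table; CPUFREQ_RELATION_L means the lowest table frequency at or above the clamped target):

#include <stdio.h>

static unsigned int resolve(const unsigned int *table, int n,
			    unsigned int min, unsigned int max,
			    unsigned int target)
{
	int i;

	/* Clamp into [min, max], as clamp_val() does for policy limits. */
	if (target < min)
		target = min;
	if (target > max)
		target = max;

	for (i = 0; i < n; i++)		/* table sorted ascending */
		if (table[i] >= target)
			return table[i];

	return table[n - 1];
}

int main(void)
{
	const unsigned int table[] = { 400000, 800000, 1200000, 1600000 };

	/* A 900 MHz request snaps up to 1.2 GHz: prints 1200000. */
	printf("%u\n", resolve(table, 4, 400000, 1600000, 900000));
	return 0;
}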
+
/**
* cpufreq_driver_resolve_freq - Map a target frequency to a driver-supported
* one.
unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
unsigned int target_freq)
{
- target_freq = clamp_val(target_freq, policy->min, policy->max);
- policy->cached_target_freq = target_freq;
-
- if (cpufreq_driver->target_index) {
- unsigned int idx;
-
- idx = cpufreq_frequency_table_target(policy, target_freq,
- CPUFREQ_RELATION_L);
- policy->cached_resolved_idx = idx;
- return policy->freq_table[idx].frequency;
- }
-
- if (cpufreq_driver->resolve_freq)
- return cpufreq_driver->resolve_freq(policy, target_freq);
-
- return target_freq;
+ return __resolve_freq(policy, target_freq, CPUFREQ_RELATION_L);
}
EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq);
policy->cdev = NULL;
}
- if (cpufreq_driver->stop_cpu)
- cpufreq_driver->stop_cpu(policy);
-
if (has_target())
cpufreq_exit_governor(policy);
unsigned int relation)
{
unsigned int old_target_freq = target_freq;
- int index;
if (cpufreq_disabled())
return -ENODEV;
- /* Make sure that target_freq is within supported range */
- target_freq = clamp_val(target_freq, policy->min, policy->max);
+ target_freq = __resolve_freq(policy, target_freq, relation);
pr_debug("target for CPU %u: %u kHz, relation %u, requested %u kHz\n",
policy->cpu, target_freq, relation, old_target_freq);
if (!cpufreq_driver->target_index)
return -EINVAL;
- index = cpufreq_frequency_table_target(policy, target_freq, relation);
-
- return __target_index(policy, index);
+ return __target_index(policy, policy->cached_resolved_idx);
}
EXPORT_SYMBOL_GPL(__cpufreq_driver_target);
return 0;
}
-static int intel_pstate_cpu_offline(struct cpufreq_policy *policy)
+static int intel_cpufreq_cpu_offline(struct cpufreq_policy *policy)
{
struct cpudata *cpu = all_cpu_data[policy->cpu];
return 0;
}
-static void intel_pstate_stop_cpu(struct cpufreq_policy *policy)
+static int intel_pstate_cpu_offline(struct cpufreq_policy *policy)
{
- pr_debug("CPU %d stopping\n", policy->cpu);
-
intel_pstate_clear_update_util_hook(policy->cpu);
+
+ return intel_cpufreq_cpu_offline(policy);
}
static int intel_pstate_cpu_exit(struct cpufreq_policy *policy)
.resume = intel_pstate_resume,
.init = intel_pstate_cpu_init,
.exit = intel_pstate_cpu_exit,
- .stop_cpu = intel_pstate_stop_cpu,
.offline = intel_pstate_cpu_offline,
.online = intel_pstate_cpu_online,
.update_limits = intel_pstate_update_limits,
.fast_switch = intel_cpufreq_fast_switch,
.init = intel_cpufreq_cpu_init,
.exit = intel_cpufreq_cpu_exit,
- .offline = intel_pstate_cpu_offline,
+ .offline = intel_cpufreq_cpu_offline,
.online = intel_pstate_cpu_online,
.suspend = intel_pstate_suspend,
.resume = intel_pstate_resume,
{ .compatible = "mediatek,mt8173", },
{ .compatible = "mediatek,mt8176", },
{ .compatible = "mediatek,mt8183", },
+ { .compatible = "mediatek,mt8365", },
{ .compatible = "mediatek,mt8516", },
{ }
static int powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
- /* timer is deleted in cpufreq_cpu_stop() */
+ struct powernv_smp_call_data freq_data;
+ struct global_pstate_info *gpstates = policy->driver_data;
+
+ freq_data.pstate_id = idx_to_pstate(powernv_pstate_info.min);
+ freq_data.gpstate_id = idx_to_pstate(powernv_pstate_info.min);
+ smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1);
+ if (gpstates)
+ del_timer_sync(&gpstates->timer);
+
kfree(policy->driver_data);
return 0;
.priority = 0,
};
-static void powernv_cpufreq_stop_cpu(struct cpufreq_policy *policy)
-{
- struct powernv_smp_call_data freq_data;
- struct global_pstate_info *gpstates = policy->driver_data;
-
- freq_data.pstate_id = idx_to_pstate(powernv_pstate_info.min);
- freq_data.gpstate_id = idx_to_pstate(powernv_pstate_info.min);
- smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1);
- if (gpstates)
- del_timer_sync(&gpstates->timer);
-}
-
static unsigned int powernv_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq)
{
.target_index = powernv_cpufreq_target_index,
.fast_switch = powernv_fast_switch,
.get = powernv_cpufreq_get,
- .stop_cpu = powernv_cpufreq_stop_cpu,
.attr = powernv_cpu_freq_attr,
};
nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
if (nr_opp <= 0) {
dev_err(cpu_dev, "%s: No OPPs for this device: %d\n",
- __func__, ret);
+ __func__, nr_opp);
ret = -ENODEV;
goto out_free_opp;
.start_index[PM_SLEEP_MODE_SPC] = 3,
};
+/* SPM register data for 8226 */
+static const struct spm_reg_data spm_reg_8226_cpu = {
+ .reg_offset = spm_reg_offset_v2_1,
+ .spm_cfg = 0x0,
+ .spm_dly = 0x3C102800,
+ .seq = { 0x60, 0x03, 0x60, 0x0B, 0x0F, 0x20, 0x10, 0x80, 0x30, 0x90,
+ 0x5B, 0x60, 0x03, 0x60, 0x3B, 0x76, 0x76, 0x0B, 0x94, 0x5B,
+ 0x80, 0x10, 0x26, 0x30, 0x0F },
+ .start_index[PM_SLEEP_MODE_STBY] = 0,
+ .start_index[PM_SLEEP_MODE_SPC] = 5,
+};
+
static const u8 spm_reg_offset_v1_1[SPM_REG_NR] = {
[SPM_REG_CFG] = 0x08,
[SPM_REG_SPM_CTL] = 0x20,
}
static const struct of_device_id spm_match_table[] = {
+ { .compatible = "qcom,msm8226-saw2-v2.1-cpu",
+ .data = &spm_reg_8226_cpu },
{ .compatible = "qcom,msm8974-saw2-v2.1-cpu",
.data = &spm_reg_8974_8084_cpu },
{ .compatible = "qcom,apq8084-saw2-v2.1-cpu",
adev->pm.smu_prv_buffer_size = 0;
}
+static int amdgpu_device_init_apu_flags(struct amdgpu_device *adev)
+{
+ if (!(adev->flags & AMD_IS_APU) ||
+ adev->asic_type < CHIP_RAVEN)
+ return 0;
+
+ switch (adev->asic_type) {
+ case CHIP_RAVEN:
+ if (adev->pdev->device == 0x15dd)
+ adev->apu_flags |= AMD_APU_IS_RAVEN;
+ if (adev->pdev->device == 0x15d8)
+ adev->apu_flags |= AMD_APU_IS_PICASSO;
+ break;
+ case CHIP_RENOIR:
+ if ((adev->pdev->device == 0x1636) ||
+ (adev->pdev->device == 0x164c))
+ adev->apu_flags |= AMD_APU_IS_RENOIR;
+ else
+ adev->apu_flags |= AMD_APU_IS_GREEN_SARDINE;
+ break;
+ case CHIP_VANGOGH:
+ adev->apu_flags |= AMD_APU_IS_VANGOGH;
+ break;
+ case CHIP_YELLOW_CARP:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/**
* amdgpu_device_check_arguments - validate module params
*
mutex_init(&adev->psp.mutex);
mutex_init(&adev->notifier_lock);
+ r = amdgpu_device_init_apu_flags(adev);
+ if (r)
+ return r;
+
r = amdgpu_device_check_arguments(adev);
if (r)
return r;
case CHIP_SIENNA_CICHLID:
case CHIP_NAVY_FLOUNDER:
case CHIP_DIMGREY_CAVEFISH:
+ case CHIP_BEIGE_GOBY:
case CHIP_VANGOGH:
case CHIP_ALDEBARAN:
break;
 * highest. That helps save some idle power.
* DISABLE_FRACTIONAL_PWM (bit 2) disabled by default
* PSR (bit 3) disabled by default
+ * EDP NO POWER SEQUENCING (bit 4) disabled by default
*/
uint amdgpu_dc_feature_mask = 2;
uint amdgpu_dc_debug_mask;
{0x1002, 0x73E0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
{0x1002, 0x73E1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
{0x1002, 0x73E2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
+ {0x1002, 0x73E3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
{0x1002, 0x73FF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
/* Aldebaran */
case CHIP_NAVI14:
case CHIP_NAVI12:
case CHIP_VANGOGH:
+ case CHIP_YELLOW_CARP:
/* Don't enable it by default yet.
*/
if (amdgpu_tmz < 1) {
struct mm_struct *mm, struct page **pages,
uint64_t start, uint64_t npages,
struct hmm_range **phmm_range, bool readonly,
- bool mmap_locked)
+ bool mmap_locked, void *owner)
{
struct hmm_range *hmm_range;
unsigned long timeout;
hmm_range->hmm_pfns = pfns;
hmm_range->start = start;
hmm_range->end = start + npages * PAGE_SIZE;
+ hmm_range->dev_private_owner = owner;
	/* Assuming 512MB takes maximum 1 second to fault page address */
timeout = max(npages >> 17, 1ULL) * HMM_RANGE_DEFAULT_TIMEOUT;
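As a sanity check on the arithmetic in the comment above, assuming 4 KiB pages: 512 MB is 2^17 pages, so npages >> 17 evaluates to 1 for a 512 MB range and the timeout is exactly one HMM_RANGE_DEFAULT_TIMEOUT interval; max(..., 1ULL) clamps smaller ranges to that same minimum, and every further 512 MB adds another interval.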
struct mm_struct *mm, struct page **pages,
uint64_t start, uint64_t npages,
struct hmm_range **phmm_range, bool readonly,
- bool mmap_locked);
+ bool mmap_locked, void *owner);
int amdgpu_hmm_range_get_pages_done(struct hmm_range *hmm_range);
#if defined(CONFIG_HMM_MIRROR)
void (*enable_aspm)(struct amdgpu_device *adev,
bool enable);
void (*program_aspm)(struct amdgpu_device *adev);
+ void (*apply_lc_spc_mode_wa)(struct amdgpu_device *adev);
+ void (*apply_l1_link_width_reconfig_wa)(struct amdgpu_device *adev);
};
struct amdgpu_nbio {
mem->bus.offset += adev->gmc.aper_base;
mem->bus.is_iomem = true;
- if (adev->gmc.xgmi.connected_to_cpu)
- mem->bus.caching = ttm_cached;
- else
- mem->bus.caching = ttm_write_combined;
break;
default:
return -EINVAL;
readonly = amdgpu_ttm_tt_is_readonly(ttm);
r = amdgpu_hmm_range_get_pages(&bo->notifier, mm, pages, start,
ttm->num_pages, >t->range, readonly,
- true);
+ true, NULL);
out_unlock:
mmap_read_unlock(mm);
mmput(mm);
bo_mem->mem_type == AMDGPU_PL_OA)
return -EINVAL;
- if (!amdgpu_gtt_mgr_has_gart_addr(bo_mem)) {
+ if (bo_mem->mem_type != TTM_PL_TT ||
+ !amdgpu_gtt_mgr_has_gart_addr(bo_mem)) {
gtt->offset = AMDGPU_BO_INVALID_OFFSET;
return 0;
}
if (i == 1)
node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
+ if (adev->gmc.xgmi.connected_to_cpu)
+ node->base.bus.caching = ttm_cached;
+ else
+ node->base.bus.caching = ttm_write_combined;
+
atomic64_add(vis_usage, &mgr->vis_usage);
*res = &node->base;
return 0;
{
uint32_t def, data;
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
+ return;
+
def = data = RREG32_SOC15(ATHUB, 0, mmATHUB_MISC_CNTL);
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
+ if (enable)
data |= ATHUB_MISC_CNTL__CG_ENABLE_MASK;
else
data &= ~ATHUB_MISC_CNTL__CG_ENABLE_MASK;
{
uint32_t def, data;
+ if (!((adev->cg_flags & AMD_CG_SUPPORT_MC_LS) &&
+ (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS)))
+ return;
+
def = data = RREG32_SOC15(ATHUB, 0, mmATHUB_MISC_CNTL);
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS) &&
- (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
+ if (enable)
data |= ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
else
data &= ~ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
4 + /* RMI */
1); /* SQG */
- if (adev->asic_type == CHIP_NAVI10 ||
- adev->asic_type == CHIP_NAVI14 ||
- adev->asic_type == CHIP_NAVI12) {
- mutex_lock(&adev->grbm_idx_mutex);
- for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
- for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
- gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff);
- wgp_active_bitmap = gfx_v10_0_get_wgp_active_bitmap_per_sh(adev);
- /*
- * Set corresponding TCP bits for the inactive WGPs in
- * GCRD_SA_TARGETS_DISABLE
- */
- gcrd_targets_disable_tcp = 0;
- /* Set TCP & SQC bits in UTCL1_UTCL0_INVREQ_DISABLE */
- utcl_invreq_disable = 0;
-
- for (k = 0; k < max_wgp_per_sh; k++) {
- if (!(wgp_active_bitmap & (1 << k))) {
- gcrd_targets_disable_tcp |= 3 << (2 * k);
- utcl_invreq_disable |= (3 << (2 * k)) |
- (3 << (2 * (max_wgp_per_sh + k)));
- }
+ mutex_lock(&adev->grbm_idx_mutex);
+ for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+ for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+ gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff);
+ wgp_active_bitmap = gfx_v10_0_get_wgp_active_bitmap_per_sh(adev);
+ /*
+ * Set corresponding TCP bits for the inactive WGPs in
+ * GCRD_SA_TARGETS_DISABLE
+ */
+ gcrd_targets_disable_tcp = 0;
+ /* Set TCP & SQC bits in UTCL1_UTCL0_INVREQ_DISABLE */
+ utcl_invreq_disable = 0;
+
+ for (k = 0; k < max_wgp_per_sh; k++) {
+ if (!(wgp_active_bitmap & (1 << k))) {
+ gcrd_targets_disable_tcp |= 3 << (2 * k);
+ gcrd_targets_disable_tcp |= 1 << (k + (max_wgp_per_sh * 2));
+ utcl_invreq_disable |= (3 << (2 * k)) |
+ (3 << (2 * (max_wgp_per_sh + k)));
}
-
- tmp = RREG32_SOC15(GC, 0, mmUTCL1_UTCL0_INVREQ_DISABLE);
- /* only override TCP & SQC bits */
- tmp &= 0xffffffff << (4 * max_wgp_per_sh);
- tmp |= (utcl_invreq_disable & utcl_invreq_disable_mask);
- WREG32_SOC15(GC, 0, mmUTCL1_UTCL0_INVREQ_DISABLE, tmp);
-
- tmp = RREG32_SOC15(GC, 0, mmGCRD_SA_TARGETS_DISABLE);
- /* only override TCP bits */
- tmp &= 0xffffffff << (2 * max_wgp_per_sh);
- tmp |= (gcrd_targets_disable_tcp & gcrd_targets_disable_mask);
- WREG32_SOC15(GC, 0, mmGCRD_SA_TARGETS_DISABLE, tmp);
}
- }
- gfx_v10_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
- mutex_unlock(&adev->grbm_idx_mutex);
+ tmp = RREG32_SOC15(GC, 0, mmUTCL1_UTCL0_INVREQ_DISABLE);
+ /* only override TCP & SQC bits */
+ tmp &= (0xffffffffU << (4 * max_wgp_per_sh));
+ tmp |= (utcl_invreq_disable & utcl_invreq_disable_mask);
+ WREG32_SOC15(GC, 0, mmUTCL1_UTCL0_INVREQ_DISABLE, tmp);
+
+ tmp = RREG32_SOC15(GC, 0, mmGCRD_SA_TARGETS_DISABLE);
+ /* only override TCP & SQC bits */
+ tmp &= (0xffffffffU << (3 * max_wgp_per_sh));
+ tmp |= (gcrd_targets_disable_tcp & gcrd_targets_disable_mask);
+ WREG32_SOC15(GC, 0, mmGCRD_SA_TARGETS_DISABLE, tmp);
+ }
}
+
+ gfx_v10_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+ mutex_unlock(&adev->grbm_idx_mutex);
}
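The harvesting loop above encodes each inactive WGP k as two TCP bits (3 << (2 * k)) plus one SQC bit in the GCRD target-disable mask, and matching TCP/SQC bits in the UTCL invalidation-request disable mask. A standalone model of that mask math (max_wgp_per_sh and the bitmap are made-up inputs, not hardware values):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const unsigned int max_wgp_per_sh = 5;
	uint32_t wgp_active_bitmap = 0x1B;	/* 0b11011: WGP 2 inactive */
	uint32_t gcrd_targets_disable_tcp = 0;
	uint32_t utcl_invreq_disable = 0;
	unsigned int k;

	for (k = 0; k < max_wgp_per_sh; k++) {
		if (!(wgp_active_bitmap & (1u << k))) {
			/* two TCP bits + one SQC bit per inactive WGP */
			gcrd_targets_disable_tcp |= 3u << (2 * k);
			gcrd_targets_disable_tcp |= 1u << (k + max_wgp_per_sh * 2);
			utcl_invreq_disable |= (3u << (2 * k)) |
					       (3u << (2 * (max_wgp_per_sh + k)));
		}
	}

	/* prints gcrd=0x1030 utcl=0xc030 */
	printf("gcrd=0x%x utcl=0x%x\n",
	       gcrd_targets_disable_tcp, utcl_invreq_disable);
	return 0;
}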
static void gfx_v10_0_get_tcc_info(struct amdgpu_device *adev)
* init golden registers and rlc resume may override some registers,
* reconfig them here
*/
- gfx_v10_0_tcp_harvest(adev);
+ if (adev->asic_type == CHIP_NAVI10 ||
+ adev->asic_type == CHIP_NAVI14 ||
+ adev->asic_type == CHIP_NAVI12)
+ gfx_v10_0_tcp_harvest(adev);
r = gfx_v10_0_cp_resume(adev);
if (r)
{
uint32_t data, def;
+ if (!(adev->cg_flags & (AMD_CG_SUPPORT_GFX_MGCG | AMD_CG_SUPPORT_GFX_MGLS)))
+ return;
+
/* It is disabled by HW by default */
if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_MGCG)) {
/* 0 - Disable some blocks' MGCG */
RLC_CGTT_MGCG_OVERRIDE__RLC_CGTT_SCLK_OVERRIDE_MASK |
RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK |
RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK |
+ RLC_CGTT_MGCG_OVERRIDE__GFXIP_FGCG_OVERRIDE_MASK |
RLC_CGTT_MGCG_OVERRIDE__ENABLE_CGTS_LEGACY_MASK);
if (def != data)
WREG32_SOC15(GC, 0, mmCP_MEM_SLP_CNTL, data);
}
}
- } else {
+ } else if (!enable || !(adev->cg_flags & AMD_CG_SUPPORT_GFX_MGCG)) {
/* 1 - MGCG_OVERRIDE */
def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
data |= (RLC_CGTT_MGCG_OVERRIDE__RLC_CGTT_SCLK_OVERRIDE_MASK |
RLC_CGTT_MGCG_OVERRIDE__GRBM_CGTT_SCLK_OVERRIDE_MASK |
RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK |
- RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK);
+ RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK |
+ RLC_CGTT_MGCG_OVERRIDE__GFXIP_FGCG_OVERRIDE_MASK |
+ RLC_CGTT_MGCG_OVERRIDE__ENABLE_CGTS_LEGACY_MASK);
if (def != data)
WREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE, data);
{
uint32_t data, def;
+ if (!(adev->cg_flags & (AMD_CG_SUPPORT_GFX_3D_CGCG | AMD_CG_SUPPORT_GFX_3D_CGLS)))
+ return;
+
/* Enable 3D CGCG/CGLS */
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)) {
+ if (enable) {
/* write cmd to clear cgcg/cgls ov */
def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+
/* unset CGCG override */
- data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_GFX3D_CG_OVERRIDE_MASK;
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)
+ data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_GFX3D_CG_OVERRIDE_MASK;
+
/* update CGCG and CGLS override bits */
if (def != data)
WREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE, data);
+
/* enable 3Dcgcg FSM(0x0000363f) */
def = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D);
- data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
- RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+ data = 0;
+
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)
+ data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+ RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+
if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
data |= (0x000F << RLC_CGCG_CGLS_CTRL_3D__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
+
if (def != data)
WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D, data);
} else {
/* Disable CGCG/CGLS */
def = data = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D);
+
/* disable cgcg, cgls should be disabled */
- data &= ~(RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK |
- RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK);
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)
+ data &= ~RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
+ data &= ~RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
+
/* disable cgcg and cgls in FSM */
if (def != data)
WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D, data);
{
uint32_t def, data;
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)) {
+ if (!(adev->cg_flags & (AMD_CG_SUPPORT_GFX_CGCG | AMD_CG_SUPPORT_GFX_CGLS)))
+ return;
+
+ if (enable) {
def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+
/* unset CGCG override */
- data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGCG_OVERRIDE_MASK;
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)
+ data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGCG_OVERRIDE_MASK;
+
if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGLS_OVERRIDE_MASK;
- else
- data |= RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGLS_OVERRIDE_MASK;
+
/* update CGCG and CGLS override bits */
if (def != data)
WREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE, data);
/* enable cgcg FSM(0x0000363F) */
def = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL);
- data = (0x36 << RLC_CGCG_CGLS_CTRL__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
- RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+ data = 0;
+
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)
+ data = (0x36 << RLC_CGCG_CGLS_CTRL__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+ RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+
if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
data |= (0x000F << RLC_CGCG_CGLS_CTRL__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK;
+
if (def != data)
WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL, data);
WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_CNTL, data);
} else {
def = data = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL);
+
/* reset CGCG/CGLS bits */
- data &= ~(RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK | RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK);
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)
+ data &= ~RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
+ data &= ~RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK;
+
/* disable cgcg and cgls in FSM */
if (def != data)
WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL, data);
{
uint32_t def, data;
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_FGCG)) {
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_GFX_FGCG))
+ return;
+
+ if (enable) {
def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
/* unset FGCG override */
data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_FGCG_OVERRIDE_MASK;
}
}
+static void gfx_v10_0_apply_medium_grain_clock_gating_workaround(struct amdgpu_device *adev)
+{
+ uint32_t reg_data = 0;
+ uint32_t reg_idx = 0;
+ uint32_t i;
+
+ const uint32_t tcp_ctrl_regs[] = {
+ mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP00_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP01_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP01_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP02_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP02_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP10_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP10_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP11_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP11_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP12_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP12_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP00_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP00_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP01_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP01_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP02_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP02_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP10_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP10_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP11_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP11_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP12_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP12_CU1_TCP_CTRL_REG
+ };
+
+ const uint32_t tcp_ctrl_regs_nv12[] = {
+ mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP00_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP01_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP01_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP02_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP02_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP10_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP10_CU1_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP11_CU0_TCP_CTRL_REG,
+ mmCGTS_SA0_WGP11_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP00_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP00_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP01_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP01_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP02_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP02_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP10_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP10_CU1_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP11_CU0_TCP_CTRL_REG,
+ mmCGTS_SA1_WGP11_CU1_TCP_CTRL_REG,
+ };
+
+ const uint32_t sm_ctlr_regs[] = {
+ mmCGTS_SA0_QUAD0_SM_CTRL_REG,
+ mmCGTS_SA0_QUAD1_SM_CTRL_REG,
+ mmCGTS_SA1_QUAD0_SM_CTRL_REG,
+ mmCGTS_SA1_QUAD1_SM_CTRL_REG
+ };
+
+ if (adev->asic_type == CHIP_NAVI12) {
+ for (i = 0; i < ARRAY_SIZE(tcp_ctrl_regs_nv12); i++) {
+ reg_idx = adev->reg_offset[GC_HWIP][0][mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG_BASE_IDX] +
+ tcp_ctrl_regs_nv12[i];
+ reg_data = RREG32(reg_idx);
+ reg_data |= CGTS_SA0_WGP00_CU0_TCP_CTRL_REG__TCPI_LS_OVERRIDE_MASK;
+ WREG32(reg_idx, reg_data);
+ }
+ } else {
+ for (i = 0; i < ARRAY_SIZE(tcp_ctrl_regs); i++) {
+ reg_idx = adev->reg_offset[GC_HWIP][0][mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG_BASE_IDX] +
+ tcp_ctrl_regs[i];
+ reg_data = RREG32(reg_idx);
+ reg_data |= CGTS_SA0_WGP00_CU0_TCP_CTRL_REG__TCPI_LS_OVERRIDE_MASK;
+ WREG32(reg_idx, reg_data);
+ }
+ }
+
+ for (i = 0; i < ARRAY_SIZE(sm_ctlr_regs); i++) {
+ reg_idx = adev->reg_offset[GC_HWIP][0][mmCGTS_SA0_QUAD0_SM_CTRL_REG_BASE_IDX] +
+ sm_ctlr_regs[i];
+ reg_data = RREG32(reg_idx);
+ reg_data &= ~CGTS_SA0_QUAD0_SM_CTRL_REG__SM_MODE_MASK;
+ reg_data |= 2 << CGTS_SA0_QUAD0_SM_CTRL_REG__SM_MODE__SHIFT;
+ WREG32(reg_idx, reg_data);
+ }
+}
+
static int gfx_v10_0_update_gfx_clock_gating(struct amdgpu_device *adev,
bool enable)
{
gfx_v10_0_update_3d_clock_gating(adev, enable);
/* === CGCG + CGLS === */
gfx_v10_0_update_coarse_grain_clock_gating(adev, enable);
+
+ if ((adev->asic_type >= CHIP_NAVI10) &&
+ (adev->asic_type <= CHIP_NAVI12))
+ gfx_v10_0_apply_medium_grain_clock_gating_workaround(adev);
} else {
/* CGCG/CGLS should be disabled before MGCG/MGLS
* === CGCG + CGLS ===
static u32 gfx_v10_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev)
{
- u32 data, wgp_bitmask;
- data = RREG32_SOC15(GC, 0, mmCC_GC_SHADER_ARRAY_CONFIG);
- data |= RREG32_SOC15(GC, 0, mmGC_USER_SHADER_ARRAY_CONFIG);
+ u32 disabled_mask =
+ ~amdgpu_gfx_create_bitmask(adev->gfx.config.max_cu_per_sh >> 1);
+ u32 efuse_setting = 0;
+ u32 vbios_setting = 0;
+
+ efuse_setting = RREG32_SOC15(GC, 0, mmCC_GC_SHADER_ARRAY_CONFIG);
+ efuse_setting &= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_WGPS_MASK;
+ efuse_setting >>= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_WGPS__SHIFT;
- data &= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_WGPS_MASK;
- data >>= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_WGPS__SHIFT;
+ vbios_setting = RREG32_SOC15(GC, 0, mmGC_USER_SHADER_ARRAY_CONFIG);
+ vbios_setting &= GC_USER_SHADER_ARRAY_CONFIG__INACTIVE_WGPS_MASK;
+ vbios_setting >>= GC_USER_SHADER_ARRAY_CONFIG__INACTIVE_WGPS__SHIFT;
- wgp_bitmask =
- amdgpu_gfx_create_bitmask(adev->gfx.config.max_cu_per_sh >> 1);
+ disabled_mask |= efuse_setting | vbios_setting;
- return (~data) & wgp_bitmask;
+ return (~disabled_mask);
}
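The rewritten helper above composes the disabled-WGP mask from three sources: bits beyond the physical WGP count, the efuse harvesting field, and the VBIOS setting. A standalone model of that composition (made-up field values; create_bitmask() stands in for amdgpu_gfx_create_bitmask()):

#include <stdint.h>
#include <stdio.h>

static uint32_t create_bitmask(unsigned int n)
{
	return (uint32_t)((1ULL << n) - 1);	/* n low bits set */
}

int main(void)
{
	unsigned int max_wgp_per_sh = 10 >> 1;	/* max_cu_per_sh = 10 -> 5 WGPs */
	uint32_t efuse_setting = 0x01;		/* WGP 0 harvested in efuse */
	uint32_t vbios_setting = 0x10;		/* WGP 4 disabled by VBIOS */
	uint32_t disabled_mask = ~create_bitmask(max_wgp_per_sh);

	disabled_mask |= efuse_setting | vbios_setting;

	/* WGPs 1..3 remain: prints 0xe */
	printf("active WGPs: 0x%x\n", ~disabled_mask);
	return 0;
}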
static u32 gfx_v10_0_get_cu_active_bitmap_per_sh(struct amdgpu_device *adev)
RC_MEM_POWER_SD_EN, 0);
WREG32_SOC15(HDP, 0, mmHDP_MEM_POWER_CTRL, hdp_mem_pwr_cntl);
- /* only one clock gating mode (LS/DS/SD) can be enabled */
- if (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS) {
- hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
- HDP_MEM_POWER_CTRL,
- IPH_MEM_POWER_LS_EN, enable);
- hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
- HDP_MEM_POWER_CTRL,
- RC_MEM_POWER_LS_EN, enable);
- } else if (adev->cg_flags & AMD_CG_SUPPORT_HDP_DS) {
- hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
- HDP_MEM_POWER_CTRL,
- IPH_MEM_POWER_DS_EN, enable);
- hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
- HDP_MEM_POWER_CTRL,
- RC_MEM_POWER_DS_EN, enable);
- } else if (adev->cg_flags & AMD_CG_SUPPORT_HDP_SD) {
- hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
- HDP_MEM_POWER_CTRL,
- IPH_MEM_POWER_SD_EN, enable);
- /* RC should not use shut down mode, fallback to ds */
- hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
- HDP_MEM_POWER_CTRL,
- RC_MEM_POWER_DS_EN, enable);
- }
-
- /* confirmed that IPH_MEM_POWER_CTRL_EN and RC_MEM_POWER_CTRL_EN have to
- * be set for SRAM LS/DS/SD */
- if (adev->cg_flags & (AMD_CG_SUPPORT_HDP_LS | AMD_CG_SUPPORT_HDP_DS |
- AMD_CG_SUPPORT_HDP_SD)) {
- hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
- IPH_MEM_POWER_CTRL_EN, 1);
- hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
- RC_MEM_POWER_CTRL_EN, 1);
+ /* Already disabled above. The actions below are for "enabled" only */
+ if (enable) {
+ /* only one clock gating mode (LS/DS/SD) can be enabled */
+ if (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS) {
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+ HDP_MEM_POWER_CTRL,
+ IPH_MEM_POWER_LS_EN, 1);
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+ HDP_MEM_POWER_CTRL,
+ RC_MEM_POWER_LS_EN, 1);
+ } else if (adev->cg_flags & AMD_CG_SUPPORT_HDP_DS) {
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+ HDP_MEM_POWER_CTRL,
+ IPH_MEM_POWER_DS_EN, 1);
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+ HDP_MEM_POWER_CTRL,
+ RC_MEM_POWER_DS_EN, 1);
+ } else if (adev->cg_flags & AMD_CG_SUPPORT_HDP_SD) {
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+ HDP_MEM_POWER_CTRL,
+ IPH_MEM_POWER_SD_EN, 1);
+			/* RC should not use shutdown mode; fall back to DS or LS if allowed */
+ if (adev->cg_flags & AMD_CG_SUPPORT_HDP_DS)
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+ HDP_MEM_POWER_CTRL,
+ RC_MEM_POWER_DS_EN, 1);
+ else if (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS)
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+ HDP_MEM_POWER_CTRL,
+ RC_MEM_POWER_LS_EN, 1);
+ }
+
+ /* confirmed that IPH_MEM_POWER_CTRL_EN and RC_MEM_POWER_CTRL_EN have to
+ * be set for SRAM LS/DS/SD */
+ if (adev->cg_flags & (AMD_CG_SUPPORT_HDP_LS | AMD_CG_SUPPORT_HDP_DS |
+ AMD_CG_SUPPORT_HDP_SD)) {
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+ IPH_MEM_POWER_CTRL_EN, 1);
+ hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+ RC_MEM_POWER_CTRL_EN, 1);
+ WREG32_SOC15(HDP, 0, mmHDP_MEM_POWER_CTRL, hdp_mem_pwr_cntl);
+ }
}
- WREG32_SOC15(HDP, 0, mmHDP_MEM_POWER_CTRL, hdp_mem_pwr_cntl);
-
- /* restore IPH & RC clock override after clock/power mode changing */
- WREG32_SOC15(HDP, 0, mmHDP_CLK_CNTL, hdp_clk_cntl1);
+ /* disable IPH & RC clock override after clock/power mode changing */
+ hdp_clk_cntl = REG_SET_FIELD(hdp_clk_cntl, HDP_CLK_CNTL,
+ IPH_MEM_CLK_SOFT_OVERRIDE, 0);
+ hdp_clk_cntl = REG_SET_FIELD(hdp_clk_cntl, HDP_CLK_CNTL,
+ RC_MEM_CLK_SOFT_OVERRIDE, 0);
+ WREG32_SOC15(HDP, 0, mmHDP_CLK_CNTL, hdp_clk_cntl);
}
static void hdp_v5_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
{
uint32_t def, data, def1, data1;
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
+ return;
+
switch (adev->asic_type) {
case CHIP_SIENNA_CICHLID:
case CHIP_NAVY_FLOUNDER:
break;
}
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG)) {
+ if (enable) {
data |= MM_ATC_L2_MISC_CG__ENABLE_MASK;
data1 &= ~(DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
{
uint32_t def, data;
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_MC_LS))
+ return;
+
switch (adev->asic_type) {
case CHIP_SIENNA_CICHLID:
case CHIP_NAVY_FLOUNDER:
break;
}
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS))
+ if (enable)
data |= MM_ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
else
data &= ~MM_ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
#define mmBIF_MMSCH1_DOORBELL_RANGE 0x01d8
#define mmBIF_MMSCH1_DOORBELL_RANGE_BASE_IDX 2
+#define smnPCIE_LC_LINK_WIDTH_CNTL 0x11140288
+
static void nbio_v2_3_remap_hdp_registers(struct amdgpu_device *adev)
{
WREG32_SOC15(NBIO, 0, mmREMAP_HDP_MEM_FLUSH_CNTL,
{
uint32_t def, data;
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_BIF_MGCG))
+ return;
+
def = data = RREG32_PCIE(smnCPM_CONTROL);
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_BIF_MGCG)) {
+ if (enable) {
data |= (CPM_CONTROL__LCLK_DYN_GATE_ENABLE_MASK |
CPM_CONTROL__TXCLK_DYN_GATE_ENABLE_MASK |
CPM_CONTROL__TXCLK_LCNT_GATE_ENABLE_MASK |
{
uint32_t def, data;
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_BIF_LS))
+ return;
+
def = data = RREG32_PCIE(smnPCIE_CNTL2);
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_BIF_LS)) {
+ if (enable) {
data |= (PCIE_CNTL2__SLV_MEM_LS_EN_MASK |
PCIE_CNTL2__MST_MEM_LS_EN_MASK |
PCIE_CNTL2__REPLAY_MEM_LS_EN_MASK);
WREG32_PCIE(smnPCIE_LC_CNTL3, data);
}
+static void nbio_v2_3_apply_lc_spc_mode_wa(struct amdgpu_device *adev)
+{
+ uint32_t reg_data = 0;
+ uint32_t link_width = 0;
+
+ if (!((adev->asic_type >= CHIP_NAVI10) &&
+ (adev->asic_type <= CHIP_NAVI12)))
+ return;
+
+ reg_data = RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL);
+ link_width = (reg_data & PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD_MASK)
+ >> PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD__SHIFT;
+
+ /*
+ * Program PCIE_LC_CNTL6.LC_SPC_MODE_8GT to 0x2 (4 symbols per clock data)
+ * if link_width is 0x3 (x4)
+ */
+	if (link_width == 0x3) {
+ reg_data = RREG32_PCIE(smnPCIE_LC_CNTL6);
+ reg_data &= ~PCIE_LC_CNTL6__LC_SPC_MODE_8GT_MASK;
+ reg_data |= (0x2 << PCIE_LC_CNTL6__LC_SPC_MODE_8GT__SHIFT);
+ WREG32_PCIE(smnPCIE_LC_CNTL6, reg_data);
+ }
+}
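The workaround above is a plain read-modify-write of bit fields: extract LC_LINK_WIDTH_RD with a mask and shift, and if it reads x4 (0x3), reprogram the 2-bit LC_SPC_MODE_8GT field to 0x2. A standalone sketch of the pattern; the mask and shift values here are invented for illustration and do not match the real register headers:

#include <stdint.h>
#include <stdio.h>

#define LINK_WIDTH_RD_MASK	0x00000070u	/* illustrative only */
#define LINK_WIDTH_RD_SHIFT	4
#define SPC_MODE_8GT_MASK	0x00000003u	/* illustrative only */
#define SPC_MODE_8GT_SHIFT	0

int main(void)
{
	uint32_t width_cntl = 0x30;	/* field reads 0x3, i.e. x4 link */
	uint32_t lc_cntl6 = 0x1;	/* SPC mode currently 0x1 */
	uint32_t link_width;

	link_width = (width_cntl & LINK_WIDTH_RD_MASK) >> LINK_WIDTH_RD_SHIFT;
	if (link_width == 0x3) {
		lc_cntl6 &= ~SPC_MODE_8GT_MASK;
		lc_cntl6 |= 0x2u << SPC_MODE_8GT_SHIFT;
	}

	printf("PCIE_LC_CNTL6 = 0x%x\n", lc_cntl6);	/* prints 0x2 */
	return 0;
}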
+
+static void nbio_v2_3_apply_l1_link_width_reconfig_wa(struct amdgpu_device *adev)
+{
+ uint32_t reg_data = 0;
+
+ if (adev->asic_type != CHIP_NAVI10)
+ return;
+
+ reg_data = RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL);
+ reg_data |= PCIE_LC_LINK_WIDTH_CNTL__LC_L1_RECONFIG_EN_MASK;
+ WREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL, reg_data);
+}
+
const struct amdgpu_nbio_funcs nbio_v2_3_funcs = {
.get_hdp_flush_req_offset = nbio_v2_3_get_hdp_flush_req_offset,
.get_hdp_flush_done_offset = nbio_v2_3_get_hdp_flush_done_offset,
.remap_hdp_registers = nbio_v2_3_remap_hdp_registers,
.enable_aspm = nbio_v2_3_enable_aspm,
.program_aspm = nbio_v2_3_program_aspm,
+ .apply_lc_spc_mode_wa = nbio_v2_3_apply_lc_spc_mode_wa,
+ .apply_l1_link_width_reconfig_wa = nbio_v2_3_apply_l1_link_width_reconfig_wa,
};
#include "smuio_v11_0.h"
#include "smuio_v11_0_6.h"
+#define codec_info_build(type, width, height, level) \
+ .codec_type = type,\
+ .max_width = width,\
+ .max_height = height,\
+ .max_pixels_per_frame = height * width,\
+ .max_level = level,
+
static const struct amd_ip_funcs nv_common_ip_funcs;
/* Navi */
.codec_array = sriov_sc_video_codecs_decode_array,
};
+/* Beige Goby */
+static const struct amdgpu_video_codec_info bg_video_codecs_decode_array[] = {
+	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+};
+
+static const struct amdgpu_video_codecs bg_video_codecs_decode = {
+ .codec_count = ARRAY_SIZE(bg_video_codecs_decode_array),
+ .codec_array = bg_video_codecs_decode_array,
+};
+
+static const struct amdgpu_video_codecs bg_video_codecs_encode = {
+ .codec_count = 0,
+ .codec_array = NULL,
+};
+
static int nv_query_video_codecs(struct amdgpu_device *adev, bool encode,
const struct amdgpu_video_codecs **codecs)
{
else
*codecs = &sc_video_codecs_decode;
return 0;
+ case CHIP_BEIGE_GOBY:
+ if (encode)
+ *codecs = &bg_video_codecs_encode;
+ else
+ *codecs = &bg_video_codecs_decode;
+ return 0;
case CHIP_NAVI10:
case CHIP_NAVI14:
case CHIP_NAVI12:
break;
case CHIP_VANGOGH:
- adev->apu_flags |= AMD_APU_IS_VANGOGH;
adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
AMD_CG_SUPPORT_GFX_MGLS |
AMD_CG_SUPPORT_GFX_CP_LS |
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ if (adev->nbio.funcs->apply_lc_spc_mode_wa)
+ adev->nbio.funcs->apply_lc_spc_mode_wa(adev);
+
+ if (adev->nbio.funcs->apply_l1_link_width_reconfig_wa)
+ adev->nbio.funcs->apply_l1_link_width_reconfig_wa(adev);
+
/* enable pcie gen2/3 link */
nv_pcie_gen3_enable(adev);
/* enable aspm */
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0111, 0x00000100),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
- SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
+ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003e0),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000)
};
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_POWER_CNTL, 0x003fff07, 0x40000051),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
- SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
+ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003e0),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x03fbe1fe)
};
sdma_v4_0_setup_ulv(adev);
- if (adev->sdma.funcs && adev->sdma.funcs->reset_ras_error_count)
- adev->sdma.funcs->reset_ras_error_count(adev);
+ if (!amdgpu_persistent_edc_harvesting_supported(adev)) {
+ if (adev->sdma.funcs &&
+ adev->sdma.funcs->reset_ras_error_count)
+ adev->sdma.funcs->reset_ras_error_count(adev);
+ }
if (adev->sdma.funcs && adev->sdma.funcs->ras_late_init)
return adev->sdma.funcs->ras_late_init(adev, &ih_info);
if (adev->flags & AMD_IS_APU)
return;
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_ROM_MGCG))
+ return;
+
def = data = RREG32_SOC15(SMUIO, 0, mmCGTT_ROM_CLK_CTRL0);
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_ROM_MGCG))
+ if (enable)
data &= ~(CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE0_MASK |
CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE1_MASK);
else
break;
case CHIP_RAVEN:
adev->asic_funcs = &soc15_asic_funcs;
- if (adev->pdev->device == 0x15dd)
- adev->apu_flags |= AMD_APU_IS_RAVEN;
- if (adev->pdev->device == 0x15d8)
- adev->apu_flags |= AMD_APU_IS_PICASSO;
+
if (adev->rev_id >= 0x8)
adev->apu_flags |= AMD_APU_IS_RAVEN2;
break;
case CHIP_RENOIR:
adev->asic_funcs = &soc15_asic_funcs;
- if ((adev->pdev->device == 0x1636) ||
- (adev->pdev->device == 0x164c))
- adev->apu_flags |= AMD_APU_IS_RENOIR;
- else
- adev->apu_flags |= AMD_APU_IS_GREEN_SARDINE;
if (adev->apu_flags & AMD_APU_IS_RENOIR)
adev->external_rev_id = adev->rev_id + 0x91;
struct page *page;
page = pfn_to_page(pfn);
- page->zone_device_data = prange;
+ svm_range_bo_ref(prange->svm_bo);
+ page->zone_device_data = prange->svm_bo;
get_page(page);
lock_page(page);
}
for (i = j = 0; i < npages; i++) {
struct page *spage;
- dst[i] = cursor.start + (j << PAGE_SHIFT);
- migrate->dst[i] = svm_migrate_addr_to_pfn(adev, dst[i]);
- svm_migrate_get_vram_page(prange, migrate->dst[i]);
-
- migrate->dst[i] = migrate_pfn(migrate->dst[i]);
- migrate->dst[i] |= MIGRATE_PFN_LOCKED;
-
- if (migrate->src[i] & MIGRATE_PFN_VALID) {
- spage = migrate_pfn_to_page(migrate->src[i]);
+ spage = migrate_pfn_to_page(migrate->src[i]);
+ if (spage && !is_zone_device_page(spage)) {
+ dst[i] = cursor.start + (j << PAGE_SHIFT);
+ migrate->dst[i] = svm_migrate_addr_to_pfn(adev, dst[i]);
+ svm_migrate_get_vram_page(prange, migrate->dst[i]);
+ migrate->dst[i] = migrate_pfn(migrate->dst[i]);
+ migrate->dst[i] |= MIGRATE_PFN_LOCKED;
src[i] = dma_map_page(dev, spage, 0, PAGE_SIZE,
DMA_TO_DEVICE);
r = dma_mapping_error(dev, src[i]);
}
}
+#ifdef DEBUG_FORCE_MIXED_DOMAINS
+ for (i = 0, j = 0; i < npages; i += 4, j++) {
+ if (j & 1)
+ continue;
+ svm_migrate_put_vram_page(adev, dst[i]);
+ migrate->dst[i] = 0;
+ svm_migrate_put_vram_page(adev, dst[i + 1]);
+ migrate->dst[i + 1] = 0;
+ svm_migrate_put_vram_page(adev, dst[i + 2]);
+ migrate->dst[i + 2] = 0;
+ svm_migrate_put_vram_page(adev, dst[i + 3]);
+ migrate->dst[i + 3] = 0;
+ }
+#endif
out:
return r;
}
uint64_t end)
{
uint64_t npages = (end - start) >> PAGE_SHIFT;
+ struct kfd_process_device *pdd;
struct dma_fence *mfence = NULL;
struct migrate_vma migrate;
dma_addr_t *scratch;
size_t size;
void *buf;
int r = -ENOMEM;
- int retry = 0;
memset(&migrate, 0, sizeof(migrate));
migrate.vma = vma;
migrate.start = start;
migrate.end = end;
migrate.flags = MIGRATE_VMA_SELECT_SYSTEM;
- migrate.pgmap_owner = adev;
+ migrate.pgmap_owner = SVM_ADEV_PGMAP_OWNER(adev);
size = 2 * sizeof(*migrate.src) + sizeof(uint64_t) + sizeof(dma_addr_t);
size *= npages;
migrate.dst = migrate.src + npages;
scratch = (dma_addr_t *)(migrate.dst + npages);
-retry:
r = migrate_vma_setup(&migrate);
if (r) {
pr_debug("failed %d prepare migrate svms 0x%p [0x%lx 0x%lx]\n",
goto out_free;
}
if (migrate.cpages != npages) {
- pr_debug("collect 0x%lx/0x%llx pages, retry\n", migrate.cpages,
+ pr_debug("Partial migration. 0x%lx/0x%llx pages can be migrated\n",
+ migrate.cpages,
npages);
- migrate_vma_finalize(&migrate);
- if (retry++ >= 3) {
- r = -ENOMEM;
- pr_debug("failed %d migrate svms 0x%p [0x%lx 0x%lx]\n",
- r, prange->svms, prange->start, prange->last);
- goto out_free;
- }
-
- goto retry;
}
if (migrate.cpages) {
out_free:
kvfree(buf);
out:
+ if (!r) {
+ pdd = svm_range_get_pdd_by_adev(prange, adev);
+ if (pdd)
+ WRITE_ONCE(pdd->page_in, pdd->page_in + migrate.cpages);
+ }
+
return r;
}
prange->start, prange->last, best_loc);
/* FIXME: workaround for page locking bug with invalid pages */
- svm_range_prefault(prange, mm);
+ svm_range_prefault(prange, mm, SVM_ADEV_PGMAP_OWNER(adev));
start = prange->start << PAGE_SHIFT;
end = (prange->last + 1) << PAGE_SHIFT;
static void svm_migrate_page_free(struct page *page)
{
- /* Keep this function to avoid warning */
+ struct svm_range_bo *svm_bo = page->zone_device_data;
+
+ if (svm_bo) {
+ pr_debug("svm_bo ref left: %d\n", kref_read(&svm_bo->kref));
+ svm_range_bo_unref(svm_bo);
+ }
}
static int
svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
struct migrate_vma *migrate, struct dma_fence **mfence,
- dma_addr_t *scratch)
+ dma_addr_t *scratch, uint64_t npages)
{
- uint64_t npages = migrate->cpages;
struct device *dev = adev->dev;
uint64_t *src;
dma_addr_t *dst;
src = (uint64_t *)(scratch + npages);
dst = scratch;
- for (i = 0, j = 0; i < npages; i++, j++, addr += PAGE_SIZE) {
+ for (i = 0, j = 0; i < npages; i++, addr += PAGE_SIZE) {
struct page *spage;
spage = migrate_pfn_to_page(migrate->src[i]);
- if (!spage) {
- pr_debug("failed get spage svms 0x%p [0x%lx 0x%lx]\n",
+ if (!spage || !is_zone_device_page(spage)) {
+ pr_debug("invalid page. Could be in CPU already svms 0x%p [0x%lx 0x%lx]\n",
prange->svms, prange->start, prange->last);
- r = -ENOMEM;
- goto out_oom;
+ if (j) {
+ r = svm_migrate_copy_memory_gart(adev, dst + i - j,
+ src + i - j, j,
+ FROM_VRAM_TO_RAM,
+ mfence);
+ if (r)
+ goto out_oom;
+ j = 0;
+ }
+ continue;
}
src[i] = svm_migrate_addr(adev, spage);
if (i > 0 && src[i] != src[i - 1] + PAGE_SIZE) {
migrate->dst[i] = migrate_pfn(page_to_pfn(dpage));
migrate->dst[i] |= MIGRATE_PFN_LOCKED;
+ j++;
}
r = svm_migrate_copy_memory_gart(adev, dst + i - j, src + i - j, j,
struct vm_area_struct *vma, uint64_t start, uint64_t end)
{
uint64_t npages = (end - start) >> PAGE_SHIFT;
+ struct kfd_process_device *pdd;
struct dma_fence *mfence = NULL;
struct migrate_vma migrate;
dma_addr_t *scratch;
migrate.start = start;
migrate.end = end;
migrate.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
- migrate.pgmap_owner = adev;
+ migrate.pgmap_owner = SVM_ADEV_PGMAP_OWNER(adev);
size = 2 * sizeof(*migrate.src) + sizeof(uint64_t) + sizeof(dma_addr_t);
size *= npages;
if (migrate.cpages) {
r = svm_migrate_copy_to_ram(adev, prange, &migrate, &mfence,
- scratch);
+ scratch, npages);
migrate_vma_pages(&migrate);
svm_migrate_copy_done(adev, mfence);
migrate_vma_finalize(&migrate);
out_free:
kvfree(buf);
out:
+ if (!r) {
+ pdd = svm_range_get_pdd_by_adev(prange, adev);
+ if (pdd)
+ WRITE_ONCE(pdd->page_out,
+ pdd->page_out + migrate.cpages);
+ }
return r;
}
pgmap->range.start = res->start;
pgmap->range.end = res->end;
pgmap->ops = &svm_migrate_pgmap_ops;
- pgmap->owner = adev;
+ pgmap->owner = SVM_ADEV_PGMAP_OWNER(adev);
pgmap->flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
r = devm_memremap_pages(adev->dev, pgmap);
if (IS_ERR(r)) {
* number of CU's a device has along with number of other competing processes
*/
struct attribute attr_cu_occupancy;
+
+ /* sysfs counters for GPU retry fault and page migration tracking */
+ struct kobject *kobj_counters;
+ struct attribute attr_faults;
+ struct attribute attr_page_in;
+ struct attribute attr_page_out;
+ uint64_t faults;
+ uint64_t page_in;
+ uint64_t page_out;
};
#define qpd_to_pdd(x) container_of(x, struct kfd_process_device, qpd)
return 0;
}
+static ssize_t kfd_sysfs_counters_show(struct kobject *kobj,
+ struct attribute *attr, char *buf)
+{
+ struct kfd_process_device *pdd;
+
+ if (!strcmp(attr->name, "faults")) {
+ pdd = container_of(attr, struct kfd_process_device,
+ attr_faults);
+ return sysfs_emit(buf, "%llu\n", READ_ONCE(pdd->faults));
+ }
+ if (!strcmp(attr->name, "page_in")) {
+ pdd = container_of(attr, struct kfd_process_device,
+ attr_page_in);
+ return sysfs_emit(buf, "%llu\n", READ_ONCE(pdd->page_in));
+ }
+ if (!strcmp(attr->name, "page_out")) {
+ pdd = container_of(attr, struct kfd_process_device,
+ attr_page_out);
+ return sysfs_emit(buf, "%llu\n", READ_ONCE(pdd->page_out));
+ }
+ return 0;
+}
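Once these attributes exist, userspace can poll the counters with plain file reads. A minimal sketch, assuming the per-process KFD nodes are exposed under /sys/class/kfd/kfd/proc/<pid>/ (the pid and gpuid in the path below are placeholders):

#include <stdio.h>

int main(void)
{
	char buf[32];
	/* Placeholder path: substitute the real pid and gpu id. */
	FILE *f = fopen("/sys/class/kfd/kfd/proc/1234/counters_1002/faults", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("faults: %s", buf);
	fclose(f);
	return 0;
}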
+
static struct attribute attr_queue_size = {
.name = "size",
.mode = KFD_SYSFS_FILE_MODE
.show = kfd_procfs_stats_show,
};
-static struct attribute *procfs_stats_attrs[] = {
- NULL
-};
-
static struct kobj_type procfs_stats_type = {
.sysfs_ops = &procfs_stats_ops,
- .default_attrs = procfs_stats_attrs,
+ .release = kfd_procfs_kobj_release,
+};
+
+static const struct sysfs_ops sysfs_counters_ops = {
+ .show = kfd_sysfs_counters_show,
+};
+
+static struct kobj_type sysfs_counters_type = {
+ .sysfs_ops = &sysfs_counters_ops,
+ .release = kfd_procfs_kobj_release,
};
int kfd_procfs_add_queue(struct queue *q)
return 0;
}
-static int kfd_sysfs_create_file(struct kfd_process *p, struct attribute *attr,
+static void kfd_sysfs_create_file(struct kobject *kobj, struct attribute *attr,
char *name)
{
- int ret = 0;
+ int ret;
- if (!p || !attr || !name)
- return -EINVAL;
+ if (!kobj || !attr || !name)
+ return;
attr->name = name;
attr->mode = KFD_SYSFS_FILE_MODE;
sysfs_attr_init(attr);
- ret = sysfs_create_file(p->kobj, attr);
-
- return ret;
+ ret = sysfs_create_file(kobj, attr);
+ if (ret)
+ pr_warn("Create sysfs %s/%s failed %d", kobj->name, name, ret);
}
-static int kfd_procfs_add_sysfs_stats(struct kfd_process *p)
+static void kfd_procfs_add_sysfs_stats(struct kfd_process *p)
{
- int ret = 0;
+ int ret;
int i;
char stats_dir_filename[MAX_SYSFS_FILENAME_LEN];
- if (!p)
- return -EINVAL;
-
- if (!p->kobj)
- return -EFAULT;
+ if (!p || !p->kobj)
+ return;
/*
* Create sysfs files for each GPU:
*/
for (i = 0; i < p->n_pdds; i++) {
struct kfd_process_device *pdd = p->pdds[i];
- struct kobject *kobj_stats;
snprintf(stats_dir_filename, MAX_SYSFS_FILENAME_LEN,
"stats_%u", pdd->dev->id);
- kobj_stats = kfd_alloc_struct(kobj_stats);
- if (!kobj_stats)
- return -ENOMEM;
+ pdd->kobj_stats = kfd_alloc_struct(pdd->kobj_stats);
+ if (!pdd->kobj_stats)
+ return;
- ret = kobject_init_and_add(kobj_stats,
- &procfs_stats_type,
- p->kobj,
- stats_dir_filename);
+ ret = kobject_init_and_add(pdd->kobj_stats,
+ &procfs_stats_type,
+ p->kobj,
+ stats_dir_filename);
if (ret) {
pr_warn("Creating KFD proc/stats_%s folder failed",
- stats_dir_filename);
- kobject_put(kobj_stats);
- goto err;
+ stats_dir_filename);
+ kobject_put(pdd->kobj_stats);
+ pdd->kobj_stats = NULL;
+ return;
}
- pdd->kobj_stats = kobj_stats;
- pdd->attr_evict.name = "evicted_ms";
- pdd->attr_evict.mode = KFD_SYSFS_FILE_MODE;
- sysfs_attr_init(&pdd->attr_evict);
- ret = sysfs_create_file(kobj_stats, &pdd->attr_evict);
- if (ret)
- pr_warn("Creating eviction stats for gpuid %d failed",
- (int)pdd->dev->id);
-
+ kfd_sysfs_create_file(pdd->kobj_stats, &pdd->attr_evict,
+ "evicted_ms");
/* Add sysfs file to report compute unit occupancy */
- if (pdd->dev->kfd2kgd->get_cu_occupancy != NULL) {
- pdd->attr_cu_occupancy.name = "cu_occupancy";
- pdd->attr_cu_occupancy.mode = KFD_SYSFS_FILE_MODE;
- sysfs_attr_init(&pdd->attr_cu_occupancy);
- ret = sysfs_create_file(kobj_stats,
- &pdd->attr_cu_occupancy);
- if (ret)
- pr_warn("Creating %s failed for gpuid: %d",
- pdd->attr_cu_occupancy.name,
- (int)pdd->dev->id);
- }
+ if (pdd->dev->kfd2kgd->get_cu_occupancy)
+ kfd_sysfs_create_file(pdd->kobj_stats,
+ &pdd->attr_cu_occupancy,
+ "cu_occupancy");
}
-err:
- return ret;
}
-
-static int kfd_procfs_add_sysfs_files(struct kfd_process *p)
+static void kfd_procfs_add_sysfs_counters(struct kfd_process *p)
{
int ret = 0;
int i;
+ char counters_dir_filename[MAX_SYSFS_FILENAME_LEN];
- if (!p)
- return -EINVAL;
+ if (!p || !p->kobj)
+ return;
- if (!p->kobj)
- return -EFAULT;
+ /*
+ * Create sysfs files for each GPU which supports SVM
+ * - proc/<pid>/counters_<gpuid>/
+ * - proc/<pid>/counters_<gpuid>/faults
+ * - proc/<pid>/counters_<gpuid>/page_in
+ * - proc/<pid>/counters_<gpuid>/page_out
+ */
+ for_each_set_bit(i, p->svms.bitmap_supported, p->n_pdds) {
+ struct kfd_process_device *pdd = p->pdds[i];
+ struct kobject *kobj_counters;
+
+ snprintf(counters_dir_filename, MAX_SYSFS_FILENAME_LEN,
+ "counters_%u", pdd->dev->id);
+ kobj_counters = kfd_alloc_struct(kobj_counters);
+ if (!kobj_counters)
+ return;
+
+ ret = kobject_init_and_add(kobj_counters, &sysfs_counters_type,
+ p->kobj, counters_dir_filename);
+ if (ret) {
+ pr_warn("Creating KFD proc/%s folder failed",
+ counters_dir_filename);
+ kobject_put(kobj_counters);
+ return;
+ }
+
+ pdd->kobj_counters = kobj_counters;
+ kfd_sysfs_create_file(kobj_counters, &pdd->attr_faults,
+ "faults");
+ kfd_sysfs_create_file(kobj_counters, &pdd->attr_page_in,
+ "page_in");
+ kfd_sysfs_create_file(kobj_counters, &pdd->attr_page_out,
+ "page_out");
+ }
+}
+
+static void kfd_procfs_add_sysfs_files(struct kfd_process *p)
+{
+ int i;
+
+ if (!p || !p->kobj)
+ return;
/*
* Create sysfs files for each GPU:
snprintf(pdd->vram_filename, MAX_SYSFS_FILENAME_LEN, "vram_%u",
pdd->dev->id);
- ret = kfd_sysfs_create_file(p, &pdd->attr_vram, pdd->vram_filename);
- if (ret)
- pr_warn("Creating vram usage for gpu id %d failed",
- (int)pdd->dev->id);
+ kfd_sysfs_create_file(p->kobj, &pdd->attr_vram,
+ pdd->vram_filename);
snprintf(pdd->sdma_filename, MAX_SYSFS_FILENAME_LEN, "sdma_%u",
pdd->dev->id);
- ret = kfd_sysfs_create_file(p, &pdd->attr_sdma, pdd->sdma_filename);
- if (ret)
- pr_warn("Creating sdma usage for gpu id %d failed",
- (int)pdd->dev->id);
+ kfd_sysfs_create_file(p->kobj, &pdd->attr_sdma,
+ pdd->sdma_filename);
}
-
- return ret;
}
void kfd_procfs_del_queue(struct queue *q)
goto out;
}
- process->attr_pasid.name = "pasid";
- process->attr_pasid.mode = KFD_SYSFS_FILE_MODE;
- sysfs_attr_init(&process->attr_pasid);
- ret = sysfs_create_file(process->kobj, &process->attr_pasid);
- if (ret)
- pr_warn("Creating pasid for pid %d failed",
- (int)process->lead_thread->pid);
+ kfd_sysfs_create_file(process->kobj, &process->attr_pasid,
+ "pasid");
process->kobj_queues = kobject_create_and_add("queues",
process->kobj);
if (!process->kobj_queues)
pr_warn("Creating KFD proc/queues folder failed");
- ret = kfd_procfs_add_sysfs_stats(process);
- if (ret)
- pr_warn("Creating sysfs stats dir for pid %d failed",
- (int)process->lead_thread->pid);
-
- ret = kfd_procfs_add_sysfs_files(process);
- if (ret)
- pr_warn("Creating sysfs usage file for pid %d failed",
- (int)process->lead_thread->pid);
+ kfd_procfs_add_sysfs_stats(process);
+ kfd_procfs_add_sysfs_files(process);
+ kfd_procfs_add_sysfs_counters(process);
}
out:
if (!IS_ERR(process))
p->n_pdds = 0;
}
+static void kfd_process_remove_sysfs(struct kfd_process *p)
+{
+ struct kfd_process_device *pdd;
+ int i;
+
+ if (!p->kobj)
+ return;
+
+ sysfs_remove_file(p->kobj, &p->attr_pasid);
+ kobject_del(p->kobj_queues);
+ kobject_put(p->kobj_queues);
+ p->kobj_queues = NULL;
+
+ for (i = 0; i < p->n_pdds; i++) {
+ pdd = p->pdds[i];
+
+ sysfs_remove_file(p->kobj, &pdd->attr_vram);
+ sysfs_remove_file(p->kobj, &pdd->attr_sdma);
+
+ sysfs_remove_file(pdd->kobj_stats, &pdd->attr_evict);
+ if (pdd->dev->kfd2kgd->get_cu_occupancy)
+ sysfs_remove_file(pdd->kobj_stats,
+ &pdd->attr_cu_occupancy);
+ kobject_del(pdd->kobj_stats);
+ kobject_put(pdd->kobj_stats);
+ pdd->kobj_stats = NULL;
+ }
+
+ for_each_set_bit(i, p->svms.bitmap_supported, p->n_pdds) {
+ pdd = p->pdds[i];
+
+ sysfs_remove_file(pdd->kobj_counters, &pdd->attr_faults);
+ sysfs_remove_file(pdd->kobj_counters, &pdd->attr_page_in);
+ sysfs_remove_file(pdd->kobj_counters, &pdd->attr_page_out);
+ kobject_del(pdd->kobj_counters);
+ kobject_put(pdd->kobj_counters);
+ pdd->kobj_counters = NULL;
+ }
+
+ kobject_del(p->kobj);
+ kobject_put(p->kobj);
+ p->kobj = NULL;
+}
+
/* No process locking is needed in this function, because the process
* is not findable any more. We must assume that no other thread is
* using it any more, otherwise we couldn't safely free the process
{
struct kfd_process *p = container_of(work, struct kfd_process,
release_work);
- int i;
-
- /* Remove the procfs files */
- if (p->kobj) {
- sysfs_remove_file(p->kobj, &p->attr_pasid);
- kobject_del(p->kobj_queues);
- kobject_put(p->kobj_queues);
- p->kobj_queues = NULL;
-
- for (i = 0; i < p->n_pdds; i++) {
- struct kfd_process_device *pdd = p->pdds[i];
-
- sysfs_remove_file(p->kobj, &pdd->attr_vram);
- sysfs_remove_file(p->kobj, &pdd->attr_sdma);
- sysfs_remove_file(p->kobj, &pdd->attr_evict);
- if (pdd->dev->kfd2kgd->get_cu_occupancy != NULL)
- sysfs_remove_file(p->kobj, &pdd->attr_cu_occupancy);
- kobject_del(pdd->kobj_stats);
- kobject_put(pdd->kobj_stats);
- pdd->kobj_stats = NULL;
- }
-
- kobject_del(p->kobj);
- kobject_put(p->kobj);
- p->kobj = NULL;
- }
-
+ kfd_process_remove_sysfs(p);
kfd_iommu_unbind_process(p);
kfd_process_free_outstanding_kfd_bos(p);
if (pqn->q && pqn->q->gws)
amdgpu_amdkfd_remove_gws_from_process(pqm->process->kgd_process_info,
pqn->q->gws);
+ kfd_procfs_del_queue(pqn->q);
uninit_queue(pqn->q);
list_del(&pqn->process_queue_list);
kfree(pqn);
}
static int
-svm_range_dma_map_dev(struct device *dev, dma_addr_t **dma_addr,
- unsigned long *hmm_pfns, uint64_t npages)
+svm_range_dma_map_dev(struct amdgpu_device *adev, struct svm_range *prange,
+ unsigned long *hmm_pfns, uint32_t gpuidx)
{
enum dma_data_direction dir = DMA_BIDIRECTIONAL;
- dma_addr_t *addr = *dma_addr;
+ dma_addr_t *addr = prange->dma_addr[gpuidx];
+ struct device *dev = adev->dev;
struct page *page;
int i, r;
if (!addr) {
- addr = kvmalloc_array(npages, sizeof(*addr),
+ addr = kvmalloc_array(prange->npages, sizeof(*addr),
GFP_KERNEL | __GFP_ZERO);
if (!addr)
return -ENOMEM;
- *dma_addr = addr;
+ prange->dma_addr[gpuidx] = addr;
}
- for (i = 0; i < npages; i++) {
+ for (i = 0; i < prange->npages; i++) {
if (WARN_ONCE(addr[i] && !dma_mapping_error(dev, addr[i]),
"leaking dma mapping\n"))
dma_unmap_page(dev, addr[i], PAGE_SIZE, dir);
page = hmm_pfn_to_page(hmm_pfns[i]);
+ if (is_zone_device_page(page)) {
+ struct amdgpu_device *bo_adev =
+ amdgpu_ttm_adev(prange->svm_bo->bo->tbo.bdev);
+
+ addr[i] = (hmm_pfns[i] << PAGE_SHIFT) +
+ bo_adev->vm_manager.vram_base_offset -
+ bo_adev->kfd.dev->pgmap.range.start;
+ addr[i] |= SVM_RANGE_VRAM_DOMAIN;
+ pr_debug("vram address detected: 0x%llx\n", addr[i]);
+ continue;
+ }
addr[i] = dma_map_page(dev, page, 0, PAGE_SIZE, dir);
r = dma_mapping_error(dev, addr[i]);
if (r) {
}
adev = (struct amdgpu_device *)pdd->dev->kgd;
- r = svm_range_dma_map_dev(adev->dev, &prange->dma_addr[gpuidx],
- hmm_pfns, prange->npages);
+ r = svm_range_dma_map_dev(adev, prange, hmm_pfns, gpuidx);
if (r)
break;
}
return true;
}
-static struct svm_range_bo *svm_range_bo_ref(struct svm_range_bo *svm_bo)
-{
- if (svm_bo)
- kref_get(&svm_bo->kref);
-
- return svm_bo;
-}
-
static void svm_range_bo_release(struct kref *kref)
{
struct svm_range_bo *svm_bo;
kfree(svm_bo);
}
-static void svm_range_bo_unref(struct svm_range_bo *svm_bo)
+void svm_range_bo_unref(struct svm_range_bo *svm_bo)
{
if (!svm_bo)
return;
return (struct amdgpu_device *)pdd->dev->kgd;
}
+struct kfd_process_device *
+svm_range_get_pdd_by_adev(struct svm_range *prange, struct amdgpu_device *adev)
+{
+ struct kfd_process *p;
+ int32_t gpu_idx, gpuid;
+ int r;
+
+ p = container_of(prange->svms, struct kfd_process, svms);
+
+ r = kfd_process_gpuid_from_kgd(p, adev, &gpuid, &gpu_idx);
+ if (r) {
+ pr_debug("failed to get device id by adev %p\n", adev);
+ return NULL;
+ }
+
+ return kfd_process_device_from_gpuidx(p, gpu_idx);
+}
+
static int svm_range_bo_validate(void *param, struct amdgpu_bo *bo)
{
struct ttm_operation_ctx ctx = { false, false };
}
static uint64_t
-svm_range_get_pte_flags(struct amdgpu_device *adev, struct svm_range *prange)
+svm_range_get_pte_flags(struct amdgpu_device *adev, struct svm_range *prange,
+ int domain)
{
struct amdgpu_device *bo_adev;
uint32_t flags = prange->flags;
uint32_t mapping_flags = 0;
uint64_t pte_flags;
- bool snoop = !prange->ttm_res;
+ bool snoop = (domain != SVM_RANGE_VRAM_DOMAIN);
bool coherent = flags & KFD_IOCTL_SVM_FLAG_COHERENT;
- if (prange->svm_bo && prange->ttm_res)
+ if (domain == SVM_RANGE_VRAM_DOMAIN)
bo_adev = amdgpu_ttm_adev(prange->svm_bo->bo->tbo.bdev);
switch (adev->asic_type) {
case CHIP_ARCTURUS:
- if (prange->svm_bo && prange->ttm_res) {
+ if (domain == SVM_RANGE_VRAM_DOMAIN) {
if (bo_adev == adev) {
mapping_flags |= coherent ?
AMDGPU_VM_MTYPE_CC : AMDGPU_VM_MTYPE_RW;
}
break;
case CHIP_ALDEBARAN:
- if (prange->svm_bo && prange->ttm_res) {
+ if (domain == SVM_RANGE_VRAM_DOMAIN) {
if (bo_adev == adev) {
mapping_flags |= coherent ?
AMDGPU_VM_MTYPE_CC : AMDGPU_VM_MTYPE_RW;
mapping_flags |= AMDGPU_VM_PAGE_EXECUTABLE;
pte_flags = AMDGPU_PTE_VALID;
- pte_flags |= prange->ttm_res ? 0 : AMDGPU_PTE_SYSTEM;
+ pte_flags |= (domain == SVM_RANGE_VRAM_DOMAIN) ? 0 : AMDGPU_PTE_SYSTEM;
pte_flags |= snoop ? AMDGPU_PTE_SNOOPED : 0;
pte_flags |= amdgpu_gem_va_map_flags(adev, mapping_flags);
pr_debug("svms 0x%p [0x%lx 0x%lx] vram %d PTE 0x%llx mapping 0x%x\n",
prange->svms, prange->start, prange->last,
- prange->ttm_res ? 1:0, pte_flags, mapping_flags);
+ (domain == SVM_RANGE_VRAM_DOMAIN) ? 1:0, pte_flags, mapping_flags);
return pte_flags;
}
struct amdgpu_bo_va bo_va;
bool table_freed = false;
uint64_t pte_flags;
+ unsigned long last_start;
+ int last_domain;
int r = 0;
+ int64_t i;
pr_debug("svms 0x%p [0x%lx 0x%lx]\n", prange->svms, prange->start,
prange->last);
- if (prange->svm_bo && prange->ttm_res) {
+ if (prange->svm_bo && prange->ttm_res)
bo_va.is_xgmi = amdgpu_xgmi_same_hive(adev, bo_adev);
- prange->mapping.bo_va = &bo_va;
- }
- prange->mapping.start = prange->start;
- prange->mapping.last = prange->last;
- prange->mapping.offset = prange->ttm_res ? prange->offset : 0;
- pte_flags = svm_range_get_pte_flags(adev, prange);
+ last_start = prange->start;
+ for (i = 0; i < prange->npages; i++) {
+ last_domain = dma_addr[i] & SVM_RANGE_VRAM_DOMAIN;
+ dma_addr[i] &= ~SVM_RANGE_VRAM_DOMAIN;
+ if ((prange->start + i) < prange->last &&
+ last_domain == (dma_addr[i + 1] & SVM_RANGE_VRAM_DOMAIN))
+ continue;
- r = amdgpu_vm_bo_update_mapping(adev, bo_adev, vm, false, false, NULL,
- prange->mapping.start,
- prange->mapping.last, pte_flags,
- prange->mapping.offset,
- prange->ttm_res,
- dma_addr, &vm->last_update,
- &table_freed);
- if (r) {
- pr_debug("failed %d to map to gpu 0x%lx\n", r, prange->start);
- goto out;
+ pr_debug("Mapping range [0x%lx 0x%llx] on domain: %s\n",
+ last_start, prange->start + i, last_domain ? "GPU" : "CPU");
+ pte_flags = svm_range_get_pte_flags(adev, prange, last_domain);
+ r = amdgpu_vm_bo_update_mapping(adev, bo_adev, vm, false, false, NULL,
+ last_start,
+ prange->start + i, pte_flags,
+ last_start - prange->start,
+ NULL,
+ dma_addr,
+ &vm->last_update,
+ &table_freed);
+ if (r) {
+ pr_debug("failed %d to map to gpu 0x%lx\n", r, prange->start);
+ goto out;
+ }
+ last_start = prange->start + i + 1;
}
r = amdgpu_vm_update_pdes(adev, vm, false);
p->pasid, TLB_FLUSH_LEGACY);
}
out:
- prange->mapping.bo_va = NULL;
return r;
}
ttm_eu_backoff_reservation(&ctx->ticket, &ctx->validate_list);
}
+static void *kfd_svm_page_owner(struct kfd_process *p, int32_t gpuidx)
+{
+ struct kfd_process_device *pdd;
+ struct amdgpu_device *adev;
+
+ pdd = kfd_process_device_from_gpuidx(p, gpuidx);
+ adev = (struct amdgpu_device *)pdd->dev->kgd;
+
+ return SVM_ADEV_PGMAP_OWNER(adev);
+}
+
/*
* Validation+GPU mapping with concurrent invalidation (MMU notifiers)
*
{
struct svm_validate_context ctx;
struct hmm_range *hmm_range;
+ struct kfd_process *p;
+ void *owner;
+ int32_t idx;
int r = 0;
ctx.process = container_of(prange->svms, struct kfd_process, svms);
svm_range_reserve_bos(&ctx);
- if (!prange->actual_loc) {
- r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
- prange->start << PAGE_SHIFT,
- prange->npages, &hmm_range,
- false, true);
- if (r) {
- pr_debug("failed %d to get svm range pages\n", r);
- goto unreserve_out;
- }
-
- r = svm_range_dma_map(prange, ctx.bitmap,
- hmm_range->hmm_pfns);
- if (r) {
- pr_debug("failed %d to dma map range\n", r);
- goto unreserve_out;
+ p = container_of(prange->svms, struct kfd_process, svms);
+ owner = kfd_svm_page_owner(p, find_first_bit(ctx.bitmap,
+ MAX_GPU_INSTANCE));
+ for_each_set_bit(idx, ctx.bitmap, MAX_GPU_INSTANCE) {
+ if (kfd_svm_page_owner(p, idx) != owner) {
+ owner = NULL;
+ break;
}
+ }
+ r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
+ prange->start << PAGE_SHIFT,
+ prange->npages, &hmm_range,
+ false, true, owner);
+ if (r) {
+ pr_debug("failed %d to get svm range pages\n", r);
+ goto unreserve_out;
+ }
- prange->validated_once = true;
+ r = svm_range_dma_map(prange, ctx.bitmap,
+ hmm_range->hmm_pfns);
+ if (r) {
+ pr_debug("failed %d to dma map range\n", r);
+ goto unreserve_out;
}
+ prange->validated_once = true;
+
svm_range_lock(prange);
- if (!prange->actual_loc) {
- if (amdgpu_hmm_range_get_pages_done(hmm_range)) {
- pr_debug("hmm update the range, need validate again\n");
- r = -EAGAIN;
- goto unlock_out;
- }
+ if (amdgpu_hmm_range_get_pages_done(hmm_range)) {
+ pr_debug("hmm update the range, need validate again\n");
+ r = -EAGAIN;
+ goto unlock_out;
}
if (!list_empty(&prange->child_list)) {
pr_debug("range split by unmap in parallel, validate again\n");
unsigned long start, unsigned long last)
{
struct svm_range_list *svms = prange->svms;
+ struct svm_range *pchild;
struct kfd_process *p;
int r = 0;
if (!p->xnack_enabled) {
int evicted_ranges;
- atomic_inc(&prange->invalid);
+ list_for_each_entry(pchild, &prange->child_list, child_list) {
+ mutex_lock_nested(&pchild->lock, 1);
+ if (pchild->start <= last && pchild->last >= start) {
+ pr_debug("increment pchild invalid [0x%lx 0x%lx]\n",
+ pchild->start, pchild->last);
+ atomic_inc(&pchild->invalid);
+ }
+ mutex_unlock(&pchild->lock);
+ }
+
+ if (prange->start <= last && prange->last >= start)
+ atomic_inc(&prange->invalid);
+
evicted_ranges = atomic_inc_return(&svms->evicted_ranges);
if (evicted_ranges != 1)
return r;
schedule_delayed_work(&svms->restore_work,
msecs_to_jiffies(AMDGPU_SVM_RANGE_RESTORE_DELAY_MS));
} else {
- struct svm_range *pchild;
unsigned long s, l;
pr_debug("invalidate unmap svms 0x%p [0x%lx 0x%lx] from GPUs\n",
return false;
}
+static void
+svm_range_count_fault(struct amdgpu_device *adev, struct kfd_process *p,
+ struct svm_range *prange, int32_t gpuidx)
+{
+ struct kfd_process_device *pdd;
+
+ if (gpuidx == MAX_GPU_INSTANCE)
+ /* fault is on different page of same range
+ * or fault is skipped to recover later
+ */
+ pdd = svm_range_get_pdd_by_adev(prange, adev);
+ else
+ /* fault recovered
+ * or fault cannot recover because GPU no access on the range
+ */
+ pdd = kfd_process_device_from_gpuidx(p, gpuidx);
+
+ if (pdd)
+ WRITE_ONCE(pdd->faults, pdd->faults + 1);
+}
+
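Editor's note: svm_range_count_fault() bumps pdd->faults with WRITE_ONCE() because the counter is read without locking elsewhere; the annotation keeps the compiler from tearing or caching the store. A rough userspace approximation of that lockless-counter pattern, with volatile stand-ins for the kernel macros (illustrative, not the kernel implementation):

/* Sketch of the lockless statistics-counter pattern used for pdd->faults. */
#include <stdint.h>
#include <stdio.h>

#define WRITE_ONCE(x, val) (*(volatile typeof(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile typeof(x) *)&(x))

struct dev_stats { uint64_t faults; };

static void count_fault(struct dev_stats *s)
{
	/* single writer: plain increment, published with WRITE_ONCE */
	WRITE_ONCE(s->faults, READ_ONCE(s->faults) + 1);
}

int main(void)
{
	struct dev_stats s = { 0 };

	count_fault(&s);
	count_fault(&s);
	printf("faults=%llu\n", (unsigned long long)READ_ONCE(s.faults));
	return 0;
}
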
int
svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
uint64_t addr)
struct svm_range *prange;
struct kfd_process *p;
uint64_t timestamp;
- int32_t best_loc, gpuidx;
+ int32_t best_loc;
+ int32_t gpuidx = MAX_GPU_INSTANCE;
bool write_locked = false;
int r = 0;
out_unlock_svms:
mutex_unlock(&svms->lock);
mmap_read_unlock(mm);
+
+ svm_range_count_fault(adev, p, prange, gpuidx);
+
mmput(mm);
out:
kfd_unref_process(p);
/* FIXME: This is a workaround for page locking bug when some pages are
* invalid during migration to VRAM
*/
-void svm_range_prefault(struct svm_range *prange, struct mm_struct *mm)
+void svm_range_prefault(struct svm_range *prange, struct mm_struct *mm,
+ void *owner)
{
struct hmm_range *hmm_range;
int r;
r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
prange->start << PAGE_SHIFT,
prange->npages, &hmm_range,
- false, true);
+ false, true, owner);
if (!r) {
amdgpu_hmm_range_get_pages_done(hmm_range);
prange->validated_once = true;
best_loc == prange->actual_loc)
return 0;
- /*
- * Prefetch to GPU without host access flag, set actual_loc to gpu, then
- * validate on gpu and map to gpus will be handled afterwards.
- */
- if (best_loc && !prange->actual_loc &&
- !(prange->flags & KFD_IOCTL_SVM_FLAG_HOST_ACCESS)) {
- prange->actual_loc = best_loc;
- return 0;
- }
-
if (!best_loc) {
r = svm_migrate_vram_to_ram(prange, mm);
*migrated = !r;
#include "amdgpu.h"
#include "kfd_priv.h"
+#define SVM_RANGE_VRAM_DOMAIN (1UL << 0)
+#define SVM_ADEV_PGMAP_OWNER(adev)\
+ ((adev)->hive ? (void *)(adev)->hive : (void *)(adev))
+
struct svm_range_bo {
struct amdgpu_bo *bo;
struct kref kref;
struct list_head update_list;
struct list_head remove_list;
struct list_head insert_list;
- struct amdgpu_bo_va_mapping mapping;
uint64_t npages;
dma_addr_t *dma_addr[MAX_GPU_INSTANCE];
struct ttm_resource *ttm_res;
mutex_unlock(&prange->lock);
}
+static inline struct svm_range_bo *svm_range_bo_ref(struct svm_range_bo *svm_bo)
+{
+ if (svm_bo)
+ kref_get(&svm_bo->kref);
+
+ return svm_bo;
+}
+
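Editor's note: svm_range_bo_ref()/svm_range_bo_unref() follow the standard kref idiom: callers take a reference while they already hold one, and the final put invokes the release callback. A compact sketch of that idiom using a C11 atomic counter in place of struct kref (illustrative only):

/* Sketch of the kref get/put/release idiom behind svm_range_bo_ref/unref. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	atomic_int refcount;
};

static struct obj *obj_get(struct obj *o)
{
	if (o)
		atomic_fetch_add(&o->refcount, 1); /* kref_get() */
	return o;
}

static void obj_put(struct obj *o)
{
	if (!o)
		return;
	if (atomic_fetch_sub(&o->refcount, 1) == 1) { /* kref_put() hit zero */
		printf("releasing object\n");         /* release callback */
		free(o);
	}
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	atomic_init(&o->refcount, 1); /* kref_init() */
	obj_get(o);
	obj_put(o);
	obj_put(o); /* last put frees */
	return 0;
}
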
int svm_range_list_init(struct kfd_process *p);
void svm_range_list_fini(struct kfd_process *p);
int svm_ioctl(struct kfd_process *p, enum kfd_ioctl_svm_op op, uint64_t start,
void svm_range_dma_unmap(struct device *dev, dma_addr_t *dma_addr,
unsigned long offset, unsigned long npages);
void svm_range_free_dma_mappings(struct svm_range *prange);
-void svm_range_prefault(struct svm_range *prange, struct mm_struct *mm);
+void svm_range_prefault(struct svm_range *prange, struct mm_struct *mm,
+ void *owner);
+struct kfd_process_device *
+svm_range_get_pdd_by_adev(struct svm_range *prange, struct amdgpu_device *adev);
/* SVM API and HMM page migration work together, device memory type
* is initialized to not 0 when page migration register device memory.
*/
#define KFD_IS_SVM_API_SUPPORTED(dev) ((dev)->pgmap.type != 0)
+void svm_range_bo_unref(struct svm_range_bo *svm_bo);
#else
struct kfd_process;
if (amdgpu_dc_feature_mask & DC_DISABLE_FRACTIONAL_PWM_MASK)
init_data.flags.disable_fractional_pwm = true;
+ if (amdgpu_dc_feature_mask & DC_EDP_NO_POWER_SEQUENCING)
+ init_data.flags.edp_no_power_sequencing = true;
+
init_data.flags.power_down_display_on_boot = true;
INIT_LIST_HEAD(&adev->dm.da_list);
int i;
struct drm_plane *plane;
struct drm_plane_state *new_plane_state;
- struct drm_plane_state *primary_state, *cursor_state, *overlay_state = NULL;
+ struct drm_plane_state *primary_state, *overlay_state = NULL;
/* Check if primary plane is contained inside overlay */
for_each_new_plane_in_state_reverse(state, plane, new_plane_state, i) {
if (!primary_state->crtc)
return 0;
- /* check if cursor plane is enabled */
- cursor_state = drm_atomic_get_plane_state(state, overlay_state->crtc->cursor);
- if (IS_ERR(cursor_state))
- return PTR_ERR(cursor_state);
-
- if (drm_atomic_plane_disabling(plane->state, cursor_state))
- return 0;
-
/* Perform the bounds check to ensure the overlay plane covers the primary */
if (primary_state->crtc_x < overlay_state->crtc_x ||
primary_state->crtc_y < overlay_state->crtc_y ||
bool allow_seamless_boot_optimization;
bool power_down_display_on_boot;
bool edp_not_connected;
+ bool edp_no_power_sequencing;
bool force_enum_edp;
bool forced_clocks;
bool allow_lttpr_non_transparent_mode;
/* dc_service_sleep_in_milliseconds(50); */
/*edp 1.2*/
panel_instance = link->panel_cntl->inst;
- if (cntl.action == TRANSMITTER_CONTROL_BACKLIGHT_ON)
- edp_receiver_ready_T7(link);
+
+ if (cntl.action == TRANSMITTER_CONTROL_BACKLIGHT_ON) {
+ if (!link->dc->config.edp_no_power_sequencing)
+ /*
+ * Sometimes, a DP receiver chip that is power-controlled externally
+ * by an Embedded Controller can be treated and used as eDP if it
+ * drives a mobile display. In this case, we shouldn't do power
+ * sequencing, hence we can skip waiting for T7-ready.
+ */
+ edp_receiver_ready_T7(link);
+ else
+ DC_LOG_DC("edp_receiver_ready_T7 skipped\n");
+ }
if (ctx->dc->ctx->dmub_srv &&
ctx->dc->debug.dmub_command_table) {
dc_link_backlight_enable_aux(link, enable);
/*edp 1.2*/
- if (cntl.action == TRANSMITTER_CONTROL_BACKLIGHT_OFF)
- edp_add_delay_for_T9(link);
+ if (cntl.action == TRANSMITTER_CONTROL_BACKLIGHT_OFF) {
+ if (!link->dc->config.edp_no_power_sequencing)
+ /*
+ * Sometimes, a DP receiver chip that is power-controlled externally
+ * by an Embedded Controller can be treated and used as eDP if it
+ * drives a mobile display. In this case, we shouldn't do power
+ * sequencing, hence we can skip waiting for T9-ready.
+ */
+ edp_add_delay_for_T9(link);
+ else
+ DC_LOG_DC("edp_receiver_ready_T9 skipped\n");
+ }
if (!enable && link->dpcd_sink_ext_caps.bits.oled)
msleep(OLED_PRE_T11_DELAY);
CFLAGS_$(AMDDALPATH)/dc/dcn31/dcn31_resource.o += -mhard-float
endif
+ifdef CONFIG_X86
ifdef IS_OLD_GCC
# Stack alignment mismatch, proceed with caution.
# GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
else
CFLAGS_$(AMDDALPATH)/dc/dcn31/dcn31_resource.o += -msse2
endif
+endif
AMD_DAL_DCN31 = $(addprefix $(AMDDALPATH)/dc/dcn31/,$(DCN31))
endif
endif
+ifneq ($(CONFIG_FRAME_WARN),0)
+frame_warn_flag := -Wframe-larger-than=2048
+endif
+
CFLAGS_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_ccflags)
ifdef CONFIG_DRM_AMD_DC_DCN
CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_rq_dlg_calc_20v2.o := $(dml_ccflags)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_mode_vba_21.o := $(dml_ccflags)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_rq_dlg_calc_21.o := $(dml_ccflags)
-CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_mode_vba_30.o := $(dml_ccflags) -Wframe-larger-than=2048
+CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_mode_vba_30.o := $(dml_ccflags) $(frame_warn_flag)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_rq_dlg_calc_30.o := $(dml_ccflags)
-CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/display_mode_vba_31.o := $(dml_ccflags) -Wframe-larger-than=2048
+CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/display_mode_vba_31.o := $(dml_ccflags) $(frame_warn_flag)
CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/display_rq_dlg_calc_31.o := $(dml_ccflags)
CFLAGS_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_ccflags)
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/display_mode_vba.o := $(dml_rcflags)
dal_irq_service_ack(irq_service, source);
- if (info->funcs->set)
+ if (info->funcs && info->funcs->set)
return info->funcs->set(irq_service, info, enable);
dal_irq_service_set_generic(irq_service, info, enable);
return false;
}
- if (info->funcs->ack)
+ if (info->funcs && info->funcs->ack)
return info->funcs->ack(irq_service, info);
dal_irq_service_ack_generic(irq_service, info);
};
#define DAL_VALID_IRQ_SRC_NUM(src) \
- ((src) <= DAL_IRQ_SOURCES_NUMBER && (src) > DC_IRQ_SOURCE_INVALID)
+ ((src) < DAL_IRQ_SOURCES_NUMBER && (src) > DC_IRQ_SOURCE_INVALID)
/* Number of Page Flip IRQ Sources. */
#define DAL_PFLIP_IRQ_SRC_NUM \
{
return REG_READ(DMCUB_TIMER_CURRENT);
}
+
+void dmub_dcn31_get_diagnostic_data(struct dmub_srv *dmub, struct dmub_diagnostic_data *diag_data)
+{
+ uint32_t is_dmub_enabled, is_soft_reset, is_sec_reset;
+ uint32_t is_traceport_enabled, is_cw0_enabled, is_cw6_enabled;
+
+ if (!dmub || !diag_data)
+ return;
+
+ memset(diag_data, 0, sizeof(*diag_data));
+
+ diag_data->dmcub_version = dmub->fw_version;
+
+ diag_data->scratch[0] = REG_READ(DMCUB_SCRATCH0);
+ diag_data->scratch[1] = REG_READ(DMCUB_SCRATCH1);
+ diag_data->scratch[2] = REG_READ(DMCUB_SCRATCH2);
+ diag_data->scratch[3] = REG_READ(DMCUB_SCRATCH3);
+ diag_data->scratch[4] = REG_READ(DMCUB_SCRATCH4);
+ diag_data->scratch[5] = REG_READ(DMCUB_SCRATCH5);
+ diag_data->scratch[6] = REG_READ(DMCUB_SCRATCH6);
+ diag_data->scratch[7] = REG_READ(DMCUB_SCRATCH7);
+ diag_data->scratch[8] = REG_READ(DMCUB_SCRATCH8);
+ diag_data->scratch[9] = REG_READ(DMCUB_SCRATCH9);
+ diag_data->scratch[10] = REG_READ(DMCUB_SCRATCH10);
+ diag_data->scratch[11] = REG_READ(DMCUB_SCRATCH11);
+ diag_data->scratch[12] = REG_READ(DMCUB_SCRATCH12);
+ diag_data->scratch[13] = REG_READ(DMCUB_SCRATCH13);
+ diag_data->scratch[14] = REG_READ(DMCUB_SCRATCH14);
+ diag_data->scratch[15] = REG_READ(DMCUB_SCRATCH15);
+
+ diag_data->undefined_address_fault_addr = REG_READ(DMCUB_UNDEFINED_ADDRESS_FAULT_ADDR);
+ diag_data->inst_fetch_fault_addr = REG_READ(DMCUB_INST_FETCH_FAULT_ADDR);
+ diag_data->data_write_fault_addr = REG_READ(DMCUB_DATA_WRITE_FAULT_ADDR);
+
+ diag_data->inbox1_rptr = REG_READ(DMCUB_INBOX1_RPTR);
+ diag_data->inbox1_wptr = REG_READ(DMCUB_INBOX1_WPTR);
+ diag_data->inbox1_size = REG_READ(DMCUB_INBOX1_SIZE);
+
+ diag_data->inbox0_rptr = REG_READ(DMCUB_INBOX0_RPTR);
+ diag_data->inbox0_wptr = REG_READ(DMCUB_INBOX0_WPTR);
+ diag_data->inbox0_size = REG_READ(DMCUB_INBOX0_SIZE);
+
+ REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_dmub_enabled);
+ diag_data->is_dmcub_enabled = is_dmub_enabled;
+
+ REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &is_soft_reset);
+ diag_data->is_dmcub_soft_reset = is_soft_reset;
+
+ REG_GET(DMCUB_SEC_CNTL, DMCUB_SEC_RESET_STATUS, &is_sec_reset);
+ diag_data->is_dmcub_secure_reset = is_sec_reset;
+
+ REG_GET(DMCUB_CNTL, DMCUB_TRACEPORT_EN, &is_traceport_enabled);
+ diag_data->is_traceport_en = is_traceport_enabled;
+
+ REG_GET(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_ENABLE, &is_cw0_enabled);
+ diag_data->is_cw0_enabled = is_cw0_enabled;
+
+ REG_GET(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_ENABLE, &is_cw6_enabled);
+ diag_data->is_cw6_enabled = is_cw6_enabled;
+}
DMUB_SR(DMCUB_CNTL) \
DMUB_SR(DMCUB_CNTL2) \
DMUB_SR(DMCUB_SEC_CNTL) \
+ DMUB_SR(DMCUB_INBOX0_SIZE) \
+ DMUB_SR(DMCUB_INBOX0_RPTR) \
+ DMUB_SR(DMCUB_INBOX0_WPTR) \
DMUB_SR(DMCUB_INBOX1_BASE_ADDRESS) \
DMUB_SR(DMCUB_INBOX1_SIZE) \
DMUB_SR(DMCUB_INBOX1_RPTR) \
DMUB_SR(DMCUB_SCRATCH14) \
DMUB_SR(DMCUB_SCRATCH15) \
DMUB_SR(DMCUB_GPINT_DATAIN1) \
+ DMUB_SR(DMCUB_GPINT_DATAOUT) \
DMUB_SR(CC_DC_PIPE_DIS) \
DMUB_SR(MMHUBBUB_SOFT_RESET) \
DMUB_SR(DCN_VM_FB_LOCATION_BASE) \
DMUB_SR(DCN_VM_FB_OFFSET) \
- DMUB_SR(DMCUB_TIMER_CURRENT)
+ DMUB_SR(DMCUB_TIMER_CURRENT) \
+ DMUB_SR(DMCUB_INST_FETCH_FAULT_ADDR) \
+ DMUB_SR(DMCUB_UNDEFINED_ADDRESS_FAULT_ADDR) \
+ DMUB_SR(DMCUB_DATA_WRITE_FAULT_ADDR)
#define DMUB_DCN31_FIELDS() \
DMUB_SF(DMCUB_CNTL, DMCUB_ENABLE) \
DMUB_SF(DMCUB_CNTL2, DMCUB_SOFT_RESET) \
DMUB_SF(DMCUB_SEC_CNTL, DMCUB_SEC_RESET) \
DMUB_SF(DMCUB_SEC_CNTL, DMCUB_MEM_UNIT_ID) \
+ DMUB_SF(DMCUB_SEC_CNTL, DMCUB_SEC_RESET_STATUS) \
DMUB_SF(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_TOP_ADDRESS) \
DMUB_SF(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_ENABLE) \
DMUB_SF(DMCUB_REGION3_CW1_TOP_ADDRESS, DMCUB_REGION3_CW1_TOP_ADDRESS) \
DMUB_SF(CC_DC_PIPE_DIS, DC_DMCUB_ENABLE) \
DMUB_SF(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET) \
DMUB_SF(DCN_VM_FB_LOCATION_BASE, FB_BASE) \
- DMUB_SF(DCN_VM_FB_OFFSET, FB_OFFSET)
+ DMUB_SF(DCN_VM_FB_OFFSET, FB_OFFSET) \
+ DMUB_SF(DMCUB_INBOX0_WPTR, DMCUB_INBOX0_WPTR)
struct dmub_srv_dcn31_reg_offset {
#define DMUB_SR(reg) uint32_t reg;
DMUB_DCN31_REGS()
+ DMCUB_INTERNAL_REGS()
#undef DMUB_SR
};
uint32_t dmub_dcn31_get_current_time(struct dmub_srv *dmub);
+void dmub_dcn31_get_diagnostic_data(struct dmub_srv *dmub, struct dmub_diagnostic_data *diag_data);
+
#endif /* _DMUB_DCN31_H_ */
break;
case DMUB_ASIC_DCN31:
+ dmub->regs_dcn31 = &dmub_srv_dcn31_regs;
funcs->reset = dmub_dcn31_reset;
funcs->reset_release = dmub_dcn31_reset_release;
funcs->backdoor_load = dmub_dcn31_backdoor_load;
funcs->get_outbox0_wptr = dmub_dcn31_get_outbox0_wptr;
funcs->set_outbox0_rptr = dmub_dcn31_set_outbox0_rptr;
- if (asic == DMUB_ASIC_DCN31) {
- dmub->regs_dcn31 = &dmub_srv_dcn31_regs;
- }
+ funcs->get_diagnostic_data = dmub_dcn31_get_diagnostic_data;
funcs->get_current_time = dmub_dcn31_get_current_time;
};
enum DC_FEATURE_MASK {
- DC_FBC_MASK = 0x1,
- DC_MULTI_MON_PP_MCLK_SWITCH_MASK = 0x2,
- DC_DISABLE_FRACTIONAL_PWM_MASK = 0x4,
- DC_PSR_MASK = 0x8,
+ // Default values can be found at "uint amdgpu_dc_feature_mask"
+ DC_FBC_MASK = (1 << 0), //0x1, disabled by default
+ DC_MULTI_MON_PP_MCLK_SWITCH_MASK = (1 << 1), //0x2, enabled by default
+ DC_DISABLE_FRACTIONAL_PWM_MASK = (1 << 2), //0x4, disabled by default
+ DC_PSR_MASK = (1 << 3), //0x8, disabled by default
+ DC_EDP_NO_POWER_SEQUENCING = (1 << 4), //0x10, disabled by default
};
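
Editor's note: each DC_FEATURE_MASK bit is tested against the amdgpu_dc_feature_mask module parameter, as the edp_no_power_sequencing hunk earlier does. A small standalone sketch of that decode; the feature_mask value chosen in main() is made up for the example:

/* Sketch of decoding a feature bitmask the way amdgpu_dc_feature_mask is
 * consumed above; the enum mirrors the patch, the input is illustrative.
 */
#include <stdio.h>

enum dc_feature_mask {
	DC_FBC_MASK                      = (1 << 0),
	DC_MULTI_MON_PP_MCLK_SWITCH_MASK = (1 << 1),
	DC_DISABLE_FRACTIONAL_PWM_MASK   = (1 << 2),
	DC_PSR_MASK                      = (1 << 3),
	DC_EDP_NO_POWER_SEQUENCING       = (1 << 4),
};

int main(void)
{
	unsigned int feature_mask = DC_MULTI_MON_PP_MCLK_SWITCH_MASK |
				    DC_EDP_NO_POWER_SEQUENCING;

	if (feature_mask & DC_EDP_NO_POWER_SEQUENCING)
		printf("eDP power sequencing disabled\n");
	if (!(feature_mask & DC_PSR_MASK))
		printf("PSR off (default)\n");
	return 0;
}
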
enum DC_DEBUG_MASK {
*/
typedef enum ENUM_NUM_SIMD_PER_CU {
-NUM_SIMD_PER_CU = 0x00000004,
+NUM_SIMD_PER_CU = 0x00000002,
} ENUM_NUM_SIMD_PER_CU;
/*
return sprintf(buf, "%i\n", 0);
}
-static ssize_t amdgpu_hwmon_show_power_cap_max(struct device *dev,
- struct device_attribute *attr,
- char *buf)
+
+static ssize_t amdgpu_hwmon_show_power_cap_generic(struct device *dev,
+ struct device_attribute *attr,
+ char *buf,
+ enum pp_power_limit_level pp_limit_level)
{
struct amdgpu_device *adev = dev_get_drvdata(dev);
const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
enum pp_power_type power_type = to_sensor_dev_attr(attr)->index;
- enum pp_power_limit_level pp_limit_level = PP_PWR_LIMIT_MAX;
uint32_t limit;
ssize_t size;
int r;
if (adev->in_suspend && !adev->in_runpm)
return -EPERM;
+ if (!(pp_funcs && pp_funcs->get_power_limit))
+ return -ENODATA;
+
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
- if (pp_funcs && pp_funcs->get_power_limit)
- r = pp_funcs->get_power_limit(adev->powerplay.pp_handle, &limit,
- pp_limit_level, power_type);
- else
- r = -ENODATA;
+ r = pp_funcs->get_power_limit(adev->powerplay.pp_handle, &limit,
+ pp_limit_level, power_type);
if (!r)
size = sysfs_emit(buf, "%u\n", limit * 1000000);
return size;
}
-static ssize_t amdgpu_hwmon_show_power_cap(struct device *dev,
+
+static ssize_t amdgpu_hwmon_show_power_cap_max(struct device *dev,
struct device_attribute *attr,
char *buf)
{
- struct amdgpu_device *adev = dev_get_drvdata(dev);
- const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
- enum pp_power_type power_type = to_sensor_dev_attr(attr)->index;
- enum pp_power_limit_level pp_limit_level = PP_PWR_LIMIT_CURRENT;
- uint32_t limit;
- ssize_t size;
- int r;
-
- if (amdgpu_in_reset(adev))
- return -EPERM;
- if (adev->in_suspend && !adev->in_runpm)
- return -EPERM;
-
- r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
- if (r < 0) {
- pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
- return r;
- }
-
- if (pp_funcs && pp_funcs->get_power_limit)
- r = pp_funcs->get_power_limit(adev->powerplay.pp_handle, &limit,
- pp_limit_level, power_type);
- else
- r = -ENODATA;
- if (!r)
- size = sysfs_emit(buf, "%u\n", limit * 1000000);
- else
- size = sysfs_emit(buf, "\n");
- pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
- pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
- return size;
+ return amdgpu_hwmon_show_power_cap_generic(dev, attr, buf, PP_PWR_LIMIT_MAX);
+}
+
+static ssize_t amdgpu_hwmon_show_power_cap(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return amdgpu_hwmon_show_power_cap_generic(dev, attr, buf, PP_PWR_LIMIT_CURRENT);
 }
static ssize_t amdgpu_hwmon_show_power_cap_default(struct device *dev,
struct device_attribute *attr,
char *buf)
{
- struct amdgpu_device *adev = dev_get_drvdata(dev);
- const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
- enum pp_power_type power_type = to_sensor_dev_attr(attr)->index;
- enum pp_power_limit_level pp_limit_level = PP_PWR_LIMIT_DEFAULT;
- uint32_t limit;
- ssize_t size;
- int r;
- if (amdgpu_in_reset(adev))
- return -EPERM;
- if (adev->in_suspend && !adev->in_runpm)
- return -EPERM;
-
- r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
- if (r < 0) {
- pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
- return r;
- }
-
- if (pp_funcs && pp_funcs->get_power_limit)
- r = pp_funcs->get_power_limit(adev->powerplay.pp_handle, &limit,
- pp_limit_level, power_type);
- else
- r = -ENODATA;
-
- if (!r)
- size = sysfs_emit(buf, "%u\n", limit * 1000000);
- else
- size = sysfs_emit(buf, "\n");
-
- pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
- pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-
- return size;
+ return amdgpu_hwmon_show_power_cap_generic(dev, attr, buf, PP_PWR_LIMIT_DEFAULT);
}
+
static ssize_t amdgpu_hwmon_show_power_label(struct device *dev,
struct device_attribute *attr,
char *buf)
if (smu->is_apu) {
smu_powergate_sdma(&adev->smu, true);
- smu_dpm_set_vcn_enable(smu, false);
- smu_dpm_set_jpeg_enable(smu, false);
}
+ smu_dpm_set_vcn_enable(smu, false);
+ smu_dpm_set_jpeg_enable(smu, false);
+
+ adev->vcn.cur_state = AMD_PG_STATE_GATE;
+ adev->jpeg.cur_state = AMD_PG_STATE_GATE;
+
if (!smu->pm_enabled)
return 0;
static int yellow_carp_system_features_control(struct smu_context *smu, bool en)
{
struct smu_feature *feature = &smu->smu_feature;
+ struct amdgpu_device *adev = smu->adev;
uint32_t feature_mask[2];
int ret = 0;
- if (!en)
+ if (!en && !adev->in_s0ix)
ret = smu_cmn_send_smc_msg(smu, SMU_MSG_PrepareMp1ForUnload, NULL);
bitmap_zero(feature->enabled, feature->feature_num);
const struct drm_mode_fb_cmd2 *cmd)
{
struct drm_gem_object *obj;
+ struct drm_framebuffer *fb;
/*
* Find the GEM object and thus the gtt range object that is
return ERR_PTR(-ENOENT);
/* Let the core code do all the work */
- return psb_framebuffer_create(dev, cmd, obj);
+ fb = psb_framebuffer_create(dev, cmd, obj);
+ if (IS_ERR(fb))
+ drm_gem_object_put(obj);
+
+ return fb;
}
static int psbfb_probe(struct drm_fb_helper *fb_helper,
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
enum phy phy = intel_port_to_phy(i915, encoder->port);
+ enum intel_dpll_id id;
+ u32 val;
- return _cnl_ddi_get_pll(i915, DG1_DPCLKA_CFGCR0(phy),
- DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy),
- DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(phy));
+ val = intel_de_read(i915, DG1_DPCLKA_CFGCR0(phy));
+ val &= DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy);
+ val >>= DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(phy);
+ id = val;
+
+ /*
+ * _DG1_DPCLKA0_CFGCR0 maps between DPLL 0 and 1 with one bit for phy A
+ * and B while _DG1_DPCLKA1_CFGCR0 maps between DPLL 2 and 3 with one
+ * bit for phy C and D.
+ */
+ if (phy >= PHY_C)
+ id += DPLL_ID_DG1_DPLL2;
+
+ return intel_get_shared_dpll_by_id(i915, id);
}
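
Editor's note: the comment above boils down to this mapping: the two-bit CLK_SEL field selects DPLL 0/1 for phys A/B, while the same field in the second register instance selects DPLL 2/3 for phys C/D, hence the unconditional add of DPLL_ID_DG1_DPLL2 for phy >= PHY_C. A standalone sketch of the mapping, assuming the enum values are consecutive as in the driver:

/* Sketch of the DG1 phy -> DPLL id mapping implemented above. */
#include <stdio.h>

enum phy { PHY_A, PHY_B, PHY_C, PHY_D };
enum dpll_id { DPLL0, DPLL1, DPLL2, DPLL3 };
#define DPLL_ID_DG1_DPLL2 DPLL2

static enum dpll_id dg1_map_pll(enum phy phy, unsigned int clk_sel_bit)
{
	enum dpll_id id = (enum dpll_id)clk_sel_bit; /* 0 or 1 from CFGCR0 */

	/* phys C/D live in the second CFGCR0 instance: DPLL 2/3 */
	if (phy >= PHY_C)
		id += DPLL_ID_DG1_DPLL2;
	return id;
}

int main(void)
{
	printf("PHY_A sel=1 -> DPLL%d\n", dg1_map_pll(PHY_A, 1)); /* DPLL1 */
	printf("PHY_C sel=1 -> DPLL%d\n", dg1_map_pll(PHY_C, 1)); /* DPLL3 */
	return 0;
}
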
static void icl_ddi_combo_enable_clock(struct intel_encoder *encoder,
if (size < sizeof(struct dp_sdp))
return -EINVAL;
- memset(vsc, 0, size);
+ memset(vsc, 0, sizeof(*vsc));
if (sdp->sdp_header.HB0 != 0)
return -EINVAL;
return true;
/* Waiting to drain ELSP? */
- synchronize_hardirq(to_pci_dev(engine->i915->drm.dev)->irq);
+ intel_synchronize_hardirq(engine->i915);
intel_engine_flush_submission(engine);
/* ELSP is empty, but there are ready requests? E.g. after reset */
ENGINE_TRACE(engine, "ring:{HEAD:%04x, TAIL:%04x}\n",
ring->head, ring->tail);
- /* Double check the ring is empty & disabled before we resume */
- synchronize_hardirq(engine->i915->drm.irq);
+ /*
+ * Double check the ring is empty & disabled before we resume. Called
+ * from atomic context during PCI probe, so _hardirq().
+ */
+ intel_synchronize_hardirq(engine->i915);
if (!stop_ring(engine))
goto err;
#include <drm/drm_aperture.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_ioctl.h>
-#include <drm/drm_irq.h>
#include <drm/drm_managed.h>
#include <drm/drm_probe_helper.h>
#include <linux/sysrq.h>
#include <drm/drm_drv.h>
-#include <drm/drm_irq.h>
#include "display/intel_de.h"
#include "display/intel_display_types.h"
bool intel_irqs_enabled(struct drm_i915_private *dev_priv)
{
- /*
- * We only use drm_irq_uninstall() at unload and VT switch, so
- * this is the only thing we need to check.
- */
return dev_priv->runtime_pm.irqs_enabled;
}
{
synchronize_irq(to_pci_dev(i915->drm.dev)->irq);
}
+
+void intel_synchronize_hardirq(struct drm_i915_private *i915)
+{
+ synchronize_hardirq(to_pci_dev(i915->drm.dev)->irq);
+}
void intel_runtime_pm_enable_interrupts(struct drm_i915_private *dev_priv);
bool intel_irqs_enabled(struct drm_i915_private *dev_priv);
void intel_synchronize_irq(struct drm_i915_private *i915);
+void intel_synchronize_hardirq(struct drm_i915_private *i915);
int intel_get_crtc_scanline(struct intel_crtc *crtc);
void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv,
#define _DG1_DPCLKA1_CFGCR0 0x16C280
#define _DG1_DPCLKA_PHY_IDX(phy) ((phy) % 2)
#define _DG1_DPCLKA_PLL_IDX(pll) ((pll) % 2)
-#define _DG1_PHY_DPLL_MAP(phy) ((phy) >= PHY_C ? DPLL_ID_DG1_DPLL2 : DPLL_ID_DG1_DPLL0)
#define DG1_DPCLKA_CFGCR0(phy) _MMIO_PHY((phy) / 2, \
_DG1_DPCLKA_CFGCR0, \
_DG1_DPCLKA1_CFGCR0)
#define DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(phy) (_DG1_DPCLKA_PHY_IDX(phy) * 2)
#define DG1_DPCLKA_CFGCR0_DDI_CLK_SEL(pll, phy) (_DG1_DPCLKA_PLL_IDX(pll) << DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(phy))
#define DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy) (0x3 << DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(phy))
-#define DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_DPLL_MAP(clk_sel, phy) \
- (((clk_sel) >> DG1_DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(phy)) + _DG1_PHY_DPLL_MAP(phy))
/* ADLS Clocks */
#define _ADLS_DPCLKA_CFGCR0 0x164280
/* Handle is imported dma-buf, so cannot be migrated to VRAM for scanout */
if (obj->import_attach) {
DRM_DEBUG_KMS("Cannot create framebuffer from imported dma_buf\n");
+ drm_gem_object_put(obj);
return ERR_PTR(-EINVAL);
}
if (radeon_device_is_virtual())
radeon_pci_remove(pdev);
-#ifdef CONFIG_PPC64
+#if defined(CONFIG_PPC64) || defined(CONFIG_MACH_LOONGSON64)
/*
* Some adapters need to be suspended before a
* shutdown occurs in order to prevent an error
- * during kexec.
- * Make this power specific becauase it breaks
- * some non-power boards.
+ * during kexec, shutdown or reboot.
+ * Make this power and Loongson specific because
+ * it breaks some other boards.
*/
radeon_suspend_kms(pci_get_drvdata(pdev), true, true, false);
#endif
* function are calling it.
*/
-static void radeon_update_memory_usage(struct radeon_bo *bo,
- unsigned mem_type, int sign)
+static void radeon_update_memory_usage(struct ttm_buffer_object *bo,
+ unsigned int mem_type, int sign)
{
- struct radeon_device *rdev = bo->rdev;
+ struct radeon_device *rdev = radeon_get_rdev(bo->bdev);
switch (mem_type) {
case TTM_PL_TT:
if (sign > 0)
- atomic64_add(bo->tbo.base.size, &rdev->gtt_usage);
+ atomic64_add(bo->base.size, &rdev->gtt_usage);
else
- atomic64_sub(bo->tbo.base.size, &rdev->gtt_usage);
+ atomic64_sub(bo->base.size, &rdev->gtt_usage);
break;
case TTM_PL_VRAM:
if (sign > 0)
- atomic64_add(bo->tbo.base.size, &rdev->vram_usage);
+ atomic64_add(bo->base.size, &rdev->vram_usage);
else
- atomic64_sub(bo->tbo.base.size, &rdev->vram_usage);
+ atomic64_sub(bo->base.size, &rdev->vram_usage);
break;
}
}
bo = container_of(tbo, struct radeon_bo, tbo);
- radeon_update_memory_usage(bo, bo->tbo.resource->mem_type, -1);
-
mutex_lock(&bo->rdev->gem.mutex);
list_del_init(&bo->list);
mutex_unlock(&bo->rdev->gem.mutex);
}
void radeon_bo_move_notify(struct ttm_buffer_object *bo,
- bool evict,
+ unsigned int old_type,
struct ttm_resource *new_mem)
{
struct radeon_bo *rbo;
+ radeon_update_memory_usage(bo, old_type, -1);
+ if (new_mem)
+ radeon_update_memory_usage(bo, new_mem->mem_type, 1);
+
if (!radeon_ttm_bo_is_radeon_bo(bo))
return;
rbo = container_of(bo, struct radeon_bo, tbo);
radeon_bo_check_tiling(rbo, 0, 1);
radeon_vm_bo_invalidate(rbo->rdev, rbo);
-
- /* update statistics */
- if (!new_mem)
- return;
-
- radeon_update_memory_usage(rbo, bo->resource->mem_type, -1);
- radeon_update_memory_usage(rbo, new_mem->mem_type, 1);
}
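
Editor's note: the move_notify() rework keeps one simple invariant: subtract the BO size from the old placement's counter and add it to the new one, with deletion being a move that has no destination (new_mem == NULL). A sketch of that invariant with plain signed counters; all names here are illustrative:

/* Sketch of the usage-accounting invariant radeon_bo_move_notify()
 * maintains above: -size for the old placement, +size for the new one,
 * and delete is a move without a destination.
 */
#include <stdint.h>
#include <stdio.h>

enum placement { PL_SYSTEM, PL_TT, PL_VRAM, PL_NUM };

static int64_t usage[PL_NUM];

static void account_move(int64_t size, enum placement old_type,
			 const enum placement *new_type)
{
	usage[old_type] -= size;
	if (new_type)
		usage[*new_type] += size;
}

int main(void)
{
	enum placement vram = PL_VRAM;

	usage[PL_SYSTEM] = 4096;              /* BO starts in system memory */
	account_move(4096, PL_SYSTEM, &vram); /* system -> VRAM */
	account_move(4096, PL_VRAM, NULL);    /* BO destroyed: no destination */
	printf("system=%lld vram=%lld\n",
	       (long long)usage[PL_SYSTEM], (long long)usage[PL_VRAM]);
	return 0;
}
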
vm_fault_t radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
extern int radeon_bo_check_tiling(struct radeon_bo *bo, bool has_moved,
bool force_drop);
extern void radeon_bo_move_notify(struct ttm_buffer_object *bo,
- bool evict,
+ unsigned int old_type,
struct ttm_resource *new_mem);
extern vm_fault_t radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
extern int radeon_bo_get_surface_reg(struct radeon_bo *bo);
struct ttm_resource *old_mem = bo->resource;
struct radeon_device *rdev;
struct radeon_bo *rbo;
- int r;
+ int r, old_type;
if (new_mem->mem_type == TTM_PL_TT) {
r = radeon_ttm_tt_bind(bo->bdev, bo->ttm, new_mem);
if (WARN_ON_ONCE(rbo->tbo.pin_count > 0))
return -EINVAL;
+ /* Save old type for statistics update */
+ old_type = old_mem->mem_type;
+
rdev = radeon_get_rdev(bo->bdev);
if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
ttm_bo_move_null(bo, new_mem);
out:
/* update statistics */
atomic64_add(bo->base.size, &rdev->num_bytes_moved);
- radeon_bo_move_notify(bo, evict, new_mem);
+ radeon_bo_move_notify(bo, old_type, new_mem);
return 0;
}
static void
radeon_bo_delete_mem_notify(struct ttm_buffer_object *bo)
{
- radeon_bo_move_notify(bo, false, NULL);
+ unsigned int old_type = TTM_PL_SYSTEM;
+
+ if (bo->resource)
+ old_type = bo->resource->mem_type;
+ radeon_bo_move_notify(bo, old_type, NULL);
}
static struct ttm_device_funcs radeon_bo_driver = {
* struct cdns_pcie - private data for Cadence PCIe controller drivers
* @reg_base: IO mapped register base
* @mem_res: start/end offsets in the physical system memory to map PCI accesses
+ * @dev: PCIe controller
* @is_rc: tell whether the PCIe controller mode is Root Complex or Endpoint.
- * @bus: In Root Complex mode, the bus number
- * @ops: Platform specific ops to control various inputs from Cadence PCIe
+ * @phy_count: number of supported PHY devices
+ * @phy: list of pointers to specific PHY control blocks
+ * @link: list of pointers to corresponding device link representations
+ * @ops: Platform-specific ops to control various inputs from Cadence PCIe
* wrapper
*/
struct cdns_pcie {
#define IMX8MQ_GPR_PCIE_REF_USE_PAD BIT(9)
#define IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN BIT(10)
#define IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE BIT(11)
+#define IMX8MQ_GPR_PCIE_VREG_BYPASS BIT(12)
#define IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE GENMASK(11, 8)
#define IMX8MQ_PCIE2_BASE_ADDR 0x33c00000
u32 tx_swing_full;
u32 tx_swing_low;
struct regulator *vpcie;
+ struct regulator *vph;
void __iomem *phy_base;
/* power domain for pcie */
imx6_pcie_grp_offset(imx6_pcie),
IMX8MQ_GPR_PCIE_REF_USE_PAD,
IMX8MQ_GPR_PCIE_REF_USE_PAD);
+ /*
+ * According to the datasheet, PCIE_VPH is suggested
+ * to be 1.8V. If PCIE_VPH is supplied at 3.3V, the
+ * VREG_BYPASS should be cleared to zero.
+ */
+ if (imx6_pcie->vph &&
+ regulator_get_voltage(imx6_pcie->vph) > 3000000)
+ regmap_update_bits(imx6_pcie->iomuxc_gpr,
+ imx6_pcie_grp_offset(imx6_pcie),
+ IMX8MQ_GPR_PCIE_VREG_BYPASS,
+ 0);
break;
case IMX7D:
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
return ret;
}
imx6_pcie->phy_base = devm_ioremap_resource(dev, &res);
- if (IS_ERR(imx6_pcie->phy_base)) {
- dev_err(dev, "Unable to map PCIe PHY\n");
+ if (IS_ERR(imx6_pcie->phy_base))
return PTR_ERR(imx6_pcie->phy_base);
- }
}
dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
imx6_pcie->vpcie = NULL;
}
+ imx6_pcie->vph = devm_regulator_get_optional(&pdev->dev, "vph");
+ if (IS_ERR(imx6_pcie->vph)) {
+ if (PTR_ERR(imx6_pcie->vph) != -ENODEV)
+ return PTR_ERR(imx6_pcie->vph);
+ imx6_pcie->vph = NULL;
+ }
+
platform_set_drvdata(pdev, imx6_pcie);
ret = imx6_pcie_attach_pd(dev);
.variant = IMX6QP,
.flags = IMX6_PCIE_FLAG_IMX6_PHY |
IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE,
+ .dbi_length = 0x200,
},
[IMX7D] = {
.variant = IMX7D,
#define PCIE_APP_IRN_PM_TO_ACK BIT(9)
#define PCIE_APP_IRN_LINK_AUTO_BW_STAT BIT(11)
#define PCIE_APP_IRN_BW_MGT BIT(12)
+#define PCIE_APP_IRN_INTA BIT(13)
+#define PCIE_APP_IRN_INTB BIT(14)
+#define PCIE_APP_IRN_INTC BIT(15)
+#define PCIE_APP_IRN_INTD BIT(16)
#define PCIE_APP_IRN_MSG_LTR BIT(18)
#define PCIE_APP_IRN_SYS_ERR_RC BIT(29)
#define PCIE_APP_INTX_OFST 12
PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
- (PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
- (PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
- (PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
- (PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
+ PCIE_APP_IRN_INTA | PCIE_APP_IRN_INTB | \
+ PCIE_APP_IRN_INTC | PCIE_APP_IRN_INTD)
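
Editor's note: the replaced macro OR'ed the plain integers (12 + 1) through (12 + 4), i.e. 13 | 14 | 15 | 16 = 0x1f, instead of setting interrupt bits 13-16; the new PCIE_APP_IRN_INT{A-D} definitions build real masks with BIT(). A two-line demonstration of the difference:

/* Demonstrates the bug the PCIE_APP_IRN_INT* hunk above fixes:
 * OR-ing bit *positions* is not the same as OR-ing bit *masks*.
 */
#include <stdio.h>

#define BIT(n) (1U << (n))

int main(void)
{
	unsigned int wrong = (12 + 1) | (12 + 2) | (12 + 3) | (12 + 4);
	unsigned int right = BIT(13) | BIT(14) | BIT(15) | BIT(16);

	printf("wrong=0x%x right=0x%x\n", wrong, right); /* 0x1f vs 0x1e000 */
	return 0;
}
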
#define BUS_IATU_OFFSET SZ_256M
#define RESET_INTERVAL_MS 100
if (unlikely(irq > 31))
return -EINVAL;
- appl_writel(pcie, (1 << irq), APPL_MSI_CTRL_1);
+ appl_writel(pcie, BIT(irq), APPL_MSI_CTRL_1);
return 0;
}
goto fail_host_init;
}
+ dw_pcie_setup_rc(&pcie->pci.pp);
+
ret = tegra_pcie_dw_start_link(&pcie->pci);
if (ret < 0)
goto fail_host_init;
int irq;
};
-static inline u32 ls_pcie_g4_lut_readl(struct ls_pcie_g4 *pcie, u32 off)
-{
- return ioread32(pcie->pci.csr_axi_slave_base + PCIE_LUT_OFF + off);
-}
-
-static inline void ls_pcie_g4_lut_writel(struct ls_pcie_g4 *pcie,
- u32 off, u32 val)
-{
- iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_LUT_OFF + off);
-}
-
static inline u32 ls_pcie_g4_pf_readl(struct ls_pcie_g4 *pcie, u32 off)
{
return ioread32(pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
#define PIO_COMPLETION_STATUS_UR 1
#define PIO_COMPLETION_STATUS_CRS 2
#define PIO_COMPLETION_STATUS_CA 4
-#define PIO_NON_POSTED_REQ BIT(0)
+#define PIO_NON_POSTED_REQ BIT(10)
#define PIO_ADDR_LS (PIO_BASE_ADDR + 0x8)
#define PIO_ADDR_MS (PIO_BASE_ADDR + 0xc)
#define PIO_WR_DATA (PIO_BASE_ADDR + 0x10)
#define LTSSM_MASK 0x3f
#define LTSSM_L0 0x10
#define RC_BAR_CONFIG 0x300
+#define VENDOR_ID_REG (LMI_BASE_ADDR + 0x44)
/* PCIe core controller registers */
#define CTRL_CORE_BASE_ADDR 0x18000
reg |= (IS_RC_MSK << IS_RC_SHIFT);
advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+ /*
+ * Replace incorrect PCI vendor id value 0x1b4b by correct value 0x11ab.
+ * VENDOR_ID_REG contains vendor id in low 16 bits and subsystem vendor
+ * id in high 16 bits. Updating this register changes readback value of
+ * read-only vendor id bits in PCIE_CORE_DEV_ID_REG register. Workaround
+ * for erratum 4.1: "The value of device and vendor ID is incorrect".
+ */
+ reg = (PCI_VENDOR_ID_MARVELL << 16) | PCI_VENDOR_ID_MARVELL;
+ advk_writel(pcie, reg, VENDOR_ID_REG);
+
/* Set Advanced Error Capabilities and Control PF0 register */
reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX |
PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN |
* Special configuration registers directly in the first few words
* in I/O space.
*/
-#define PCI_IOSIZE 0x00
-#define PCI_PROT 0x04 /* AHB protection */
-#define PCI_CTRL 0x08 /* PCI control signal */
-#define PCI_SOFTRST 0x10 /* Soft reset counter and response error enable */
-#define PCI_CONFIG 0x28 /* PCI configuration command register */
-#define PCI_DATA 0x2C
+#define FTPCI_IOSIZE 0x00
+#define FTPCI_PROT 0x04 /* AHB protection */
+#define FTPCI_CTRL 0x08 /* PCI control signal */
+#define FTPCI_SOFTRST 0x10 /* Soft reset counter and response error enable */
+#define FTPCI_CONFIG 0x28 /* PCI configuration command register */
+#define FTPCI_DATA 0x2C
#define FARADAY_PCI_STATUS_CMD 0x04 /* Status and command */
#define FARADAY_PCI_PMC 0x40 /* Power management control */
PCI_CONF_FUNCTION(PCI_FUNC(fn)) |
PCI_CONF_WHERE(config) |
PCI_CONF_ENABLE,
- p->base + PCI_CONFIG);
+ p->base + FTPCI_CONFIG);
- *value = readl(p->base + PCI_DATA);
+ *value = readl(p->base + FTPCI_DATA);
if (size == 1)
*value = (*value >> (8 * (config & 3))) & 0xFF;
PCI_CONF_FUNCTION(PCI_FUNC(fn)) |
PCI_CONF_WHERE(config) |
PCI_CONF_ENABLE,
- p->base + PCI_CONFIG);
+ p->base + FTPCI_CONFIG);
switch (size) {
case 4:
- writel(value, p->base + PCI_DATA);
+ writel(value, p->base + FTPCI_DATA);
break;
case 2:
- writew(value, p->base + PCI_DATA + (config & 3));
+ writew(value, p->base + FTPCI_DATA + (config & 3));
break;
case 1:
- writeb(value, p->base + PCI_DATA + (config & 3));
+ writeb(value, p->base + FTPCI_DATA + (config & 3));
break;
default:
ret = PCIBIOS_BAD_REGISTER_NUMBER;
if (!faraday_res_to_memcfg(io->start - win->offset,
resource_size(io), &val)) {
/* setup I/O space size */
- writel(val, p->base + PCI_IOSIZE);
+ writel(val, p->base + FTPCI_IOSIZE);
} else {
dev_err(dev, "illegal IO mem size\n");
return -EINVAL;
}
/* Setup hostbridge */
- val = readl(p->base + PCI_CTRL);
+ val = readl(p->base + FTPCI_CTRL);
val |= PCI_COMMAND_IO;
val |= PCI_COMMAND_MEMORY;
val |= PCI_COMMAND_MASTER;
- writel(val, p->base + PCI_CTRL);
+ writel(val, p->base + FTPCI_CTRL);
/* Mask and clear all interrupts */
faraday_raw_pci_write_config(p, 0, 0, FARADAY_PCI_CTRL2 + 2, 2, 0xF000);
if (variant->cascaded_irq) {
hv_pcibus_probed,
hv_pcibus_installed,
hv_pcibus_removing,
- hv_pcibus_removed,
hv_pcibus_maximum
};
/* Protocol version negotiated with the host */
enum pci_protocol_version_t protocol_version;
enum hv_pcibus_state state;
- refcount_t remove_lock;
struct hv_device *hdev;
resource_size_t low_mmio_space;
resource_size_t high_mmio_space;
struct resource *low_mmio_res;
struct resource *high_mmio_res;
struct completion *survey_event;
- struct completion remove_event;
struct pci_bus *pci_bus;
spinlock_t config_lock; /* Avoid two threads writing index page */
spinlock_t device_list_lock; /* Protect lists below */
kfree(hpdev);
}
-static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus);
-static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus);
-
/*
* There is no good way to get notified from vmbus_onoffer_rescind(),
* so let's use polling here, since this is not a hot path.
}
spin_unlock_irqrestore(&hbus->device_list_lock, flags);
- if (!dr) {
- put_hvpcibus(hbus);
+ if (!dr)
return;
- }
/* First, mark all existing children as reported missing. */
spin_lock_irqsave(&hbus->device_list_lock, flags);
break;
}
- put_hvpcibus(hbus);
kfree(dr);
}
list_add_tail(&dr->list_entry, &hbus->dr_list);
spin_unlock_irqrestore(&hbus->device_list_lock, flags);
- if (pending_dr) {
+ if (pending_dr)
kfree(dr_wrk);
- } else {
- get_hvpcibus(hbus);
+ else
queue_work(hbus->wq, &dr_wrk->wrk);
- }
return 0;
}
put_pcichild(hpdev);
put_pcichild(hpdev);
/* hpdev has been freed. Do not use it any more. */
-
- put_hvpcibus(hbus);
}
/**
hpdev->state = hv_pcichild_ejecting;
get_pcichild(hpdev);
INIT_WORK(&hpdev->wrk, hv_eject_device_work);
- get_hvpcibus(hbus);
queue_work(hbus->wq, &hpdev->wrk);
}
return 0;
}
-static void get_hvpcibus(struct hv_pcibus_device *hbus)
-{
- refcount_inc(&hbus->remove_lock);
-}
-
-static void put_hvpcibus(struct hv_pcibus_device *hbus)
-{
- if (refcount_dec_and_test(&hbus->remove_lock))
- complete(&hbus->remove_event);
-}
-
#define HVPCI_DOM_MAP_SIZE (64 * 1024)
static DECLARE_BITMAP(hvpci_dom_map, HVPCI_DOM_MAP_SIZE);
hbus->sysdata.domain = dom;
hbus->hdev = hdev;
- refcount_set(&hbus->remove_lock, 1);
INIT_LIST_HEAD(&hbus->children);
INIT_LIST_HEAD(&hbus->dr_list);
INIT_LIST_HEAD(&hbus->resources_for_children);
spin_lock_init(&hbus->config_lock);
spin_lock_init(&hbus->device_list_lock);
spin_lock_init(&hbus->retarget_msi_interrupt_lock);
- init_completion(&hbus->remove_event);
hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0,
hbus->sysdata.domain);
if (!hbus->wq) {
struct pci_packet teardown_packet;
u8 buffer[sizeof(struct pci_message)];
} pkt;
- struct hv_dr_state *dr;
struct hv_pci_compl comp_pkt;
+ struct hv_pci_dev *hpdev, *tmp;
+ unsigned long flags;
int ret;
/*
if (!keep_devs) {
/* Delete any children which might still exist. */
- dr = kzalloc(sizeof(*dr), GFP_KERNEL);
- if (dr && hv_pci_start_relations_work(hbus, dr))
- kfree(dr);
+ spin_lock_irqsave(&hbus->device_list_lock, flags);
+ list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry) {
+ list_del(&hpdev->list_entry);
+ if (hpdev->pci_slot)
+ pci_destroy_slot(hpdev->pci_slot);
+ /* For the two refs got in new_pcichild_device() */
+ put_pcichild(hpdev);
+ put_pcichild(hpdev);
+ }
+ spin_unlock_irqrestore(&hbus->device_list_lock, flags);
}
ret = hv_send_resources_released(hdev);
hbus = hv_get_drvdata(hdev);
if (hbus->state == hv_pcibus_installed) {
+ tasklet_disable(&hdev->channel->callback_event);
+ hbus->state = hv_pcibus_removing;
+ tasklet_enable(&hdev->channel->callback_event);
+ destroy_workqueue(hbus->wq);
+ hbus->wq = NULL;
+ /*
+ * At this point, no work is running or can be scheduled
+ * on hbus->wq. We can't race with hv_pci_devices_present()
+ * or hv_pci_eject_device(), it's safe to proceed.
+ */
+
/* Remove the bus from PCI's point of view. */
pci_lock_rescan_remove();
pci_stop_root_bus(hbus->pci_bus);
hv_pci_remove_slots(hbus);
pci_remove_root_bus(hbus->pci_bus);
pci_unlock_rescan_remove();
- hbus->state = hv_pcibus_removed;
}
ret = hv_pci_bus_exit(hdev, false);
hv_pci_free_bridge_windows(hbus);
irq_domain_remove(hbus->irq_domain);
irq_domain_free_fwnode(hbus->sysdata.fwnode);
- put_hvpcibus(hbus);
- wait_for_completion(&hbus->remove_event);
- destroy_workqueue(hbus->wq);
hv_put_dom_num(hbus->sysdata.domain);
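
Editor's note: the teardown above deletes children in place with list_for_each_entry_safe(), which caches the next pointer before the current entry is freed. A minimal userspace sketch of that delete-while-iterating pattern on a hand-rolled list:

/* Sketch of the list_for_each_entry_safe() delete-while-iterating pattern
 * used in the hv_pci teardown above, with a minimal singly linked list.
 */
#include <stdio.h>
#include <stdlib.h>

struct child {
	int id;
	struct child *next;
};

int main(void)
{
	struct child *head = NULL, *c, *tmp;

	for (int i = 0; i < 3; i++) {
		c = malloc(sizeof(*c));
		c->id = i;
		c->next = head;
		head = c;
	}

	/* "safe" walk: fetch the next pointer before freeing the node */
	for (c = head; c; c = tmp) {
		tmp = c->next;
		printf("destroying child %d\n", c->id);
		free(c);
	}
	return 0;
}
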
{ .compatible = "nvidia,tegra20-pcie", .data = &tegra20_pcie },
{ },
};
+MODULE_DEVICE_TABLE(of, tegra_pcie_of_match);
static void *tegra_pcie_ports_seq_start(struct seq_file *s, loff_t *pos)
{
// SPDX-License-Identifier: GPL-2.0+
-/**
+/*
* APM X-Gene PCIe Driver
*
* Copyright (c) 2014 Applied Micro Circuits Corporation.
{
void __iomem *cfg_base = port->cfg_base;
struct device *dev = port->dev;
- void *bar_addr;
+ void __iomem *bar_addr;
u32 pim_reg;
u64 cpu_addr = entry->res->start;
u64 pci_addr = cpu_addr - entry->offset;
struct iproc_msi;
/**
- * iProc MSI group
+ * struct iproc_msi_grp - iProc MSI group
*
* One MSI group is allocated per GIC interrupt, serviced by one iProc MSI
* event queue.
};
/**
- * iProc event queue based MSI
+ * struct iproc_msi - iProc event queue based MSI
*
* Only meant to be used on platforms without MSI support integrated into the
* GIC.
static struct msi_domain_info iproc_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
- MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
+ MSI_FLAG_PCI_MSIX,
.chip = &iproc_msi_irq_chip,
};
struct iproc_msi *msi = domain->host_data;
int hwirq, i;
+ if (msi->nr_cpus > 1 && nr_irqs > 1)
+ return -EINVAL;
+
mutex_lock(&msi->bitmap_lock);
- /* Allocate 'nr_cpus' number of MSI vectors each time */
- hwirq = bitmap_find_next_zero_area(msi->bitmap, msi->nr_msi_vecs, 0,
- msi->nr_cpus, 0);
- if (hwirq < msi->nr_msi_vecs) {
- bitmap_set(msi->bitmap, hwirq, msi->nr_cpus);
- } else {
- mutex_unlock(&msi->bitmap_lock);
- return -ENOSPC;
- }
+ /*
+ * Allocate 'nr_irqs' multiplied by 'nr_cpus' number of MSI vectors
+ * each time
+ */
+ hwirq = bitmap_find_free_region(msi->bitmap, msi->nr_msi_vecs,
+ order_base_2(msi->nr_cpus * nr_irqs));
mutex_unlock(&msi->bitmap_lock);
+ if (hwirq < 0)
+ return -ENOSPC;
+
for (i = 0; i < nr_irqs; i++) {
irq_domain_set_info(domain, virq + i, hwirq + i,
&iproc_msi_bottom_irq_chip,
mutex_lock(&msi->bitmap_lock);
hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq);
- bitmap_clear(msi->bitmap, hwirq, msi->nr_cpus);
+ bitmap_release_region(msi->bitmap, hwirq,
+ order_base_2(msi->nr_cpus * nr_irqs));
mutex_unlock(&msi->bitmap_lock);
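
Editor's note: bitmap_find_free_region() and bitmap_release_region() deal in power-of-two sized, naturally aligned regions identified by their order, which is why the allocation above passes order_base_2(msi->nr_cpus * nr_irqs). A sketch of the rounding this implies, with order_base_2() reimplemented here using the kernel's ceil-log2 semantics:

/* Sketch of why the iProc MSI code above uses order_base_2(). */
#include <stdio.h>

static unsigned int order_base_2(unsigned int n) /* ceil(log2(n)) */
{
	unsigned int order = 0;

	while ((1U << order) < n)
		order++;
	return order;
}

int main(void)
{
	unsigned int nr_cpus = 4, nr_irqs = 3;
	unsigned int order = order_base_2(nr_cpus * nr_irqs);

	/* 12 vectors round up to a 16-slot (order-4) region */
	printf("vectors=%u -> order=%u region=%u\n",
	       nr_cpus * nr_irqs, order, 1U << order);
	return 0;
}
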
mutex_init(&msi->bitmap_lock);
msi->nr_cpus = num_possible_cpus();
+ if (msi->nr_cpus == 1)
+ iproc_msi_domain_info.flags |= MSI_FLAG_MULTI_PCI_MSI;
+
msi->nr_irqs = of_irq_count(node);
if (!msi->nr_irqs) {
dev_err(pcie->dev, "found no MSI GIC interrupt\n");
#define IPROC_PCIE_REG_INVALID 0xffff
/**
- * iProc PCIe outbound mapping controller specific parameters
- *
+ * struct iproc_pcie_ob_map - iProc PCIe outbound mapping controller-specific
+ * parameters
* @window_sizes: list of supported outbound mapping window sizes in MB
* @nr_sizes: number of supported outbound mapping window sizes
*/
};
/**
- * iProc PCIe inbound mapping type
+ * enum iproc_pcie_ib_map_type - iProc PCIe inbound mapping type
+ * @IPROC_PCIE_IB_MAP_MEM: DDR memory
+ * @IPROC_PCIE_IB_MAP_IO: device I/O memory
+ * @IPROC_PCIE_IB_MAP_INVALID: invalid or unused
*/
enum iproc_pcie_ib_map_type {
- /* for DDR memory */
IPROC_PCIE_IB_MAP_MEM = 0,
-
- /* for device I/O memory */
IPROC_PCIE_IB_MAP_IO,
-
- /* invalid or unused */
IPROC_PCIE_IB_MAP_INVALID
};
/**
- * iProc PCIe inbound mapping controller specific parameters
- *
+ * struct iproc_pcie_ib_map - iProc PCIe inbound mapping controller-specific
+ * parameters
* @type: inbound mapping region type
* @size_unit: inbound mapping region size unit, could be SZ_1K, SZ_1M, or
* SZ_1G
writel(val, pcie->base + offset);
}
-/**
+/*
* APB error forwarding can be disabled during access of configuration
* registers of the endpoint device, to prevent unsupported requests
* (typically seen during enumeration with multi-function devices) from
return PCIBIOS_SUCCESSFUL;
}
-/**
+/*
* Note access to the configuration registers are protected at the higher layer
* by 'pci_lock' in drivers/pci/access.c
*/
return 0;
}
-/**
+/*
* Some iProc SoCs require the SW to configure the outbound address mapping
*
* Outbound address translation:
#define _PCIE_IPROC_H
/**
- * iProc PCIe interface type
+ * enum iproc_pcie_type - iProc PCIe interface type
+ * @IPROC_PCIE_PAXB_BCMA: BCMA-based host controllers
+ * @IPROC_PCIE_PAXB: PAXB-based host controllers for
+ * NS, NSP, Cygnus, NS2, and Pegasus SOCs
+ * @IPROC_PCIE_PAXB_V2: PAXB-based host controllers for Stingray SoCs
+ * @IPROC_PCIE_PAXC: PAXC-based host controllers
+ * @IPROC_PCIE_PAXC_V2: PAXC-based host controllers (second generation)
*
* PAXB is the wrapper used in root complex that can be connected to an
* external endpoint device.
};
/**
- * iProc PCIe outbound mapping
+ * struct iproc_pcie_ob - iProc PCIe outbound mapping
* @axi_offset: offset from the AXI address to the internal address used by
* the iProc PCIe core
* @nr_windows: total number of supported outbound mapping windows
};
/**
- * iProc PCIe inbound mapping
+ * struct iproc_pcie_ib - iProc PCIe inbound mapping
* @nr_regions: total number of supported inbound mapping regions
*/
struct iproc_pcie_ib {
struct iproc_msi;
/**
- * iProc PCIe device
- *
+ * struct iproc_pcie - iProc PCIe device
* @dev: pointer to device data structure
* @type: iProc PCIe interface type
* @reg_offsets: register offsets
* @base: PCIe host controller I/O register base
* @base_addr: PCIe host controller register base physical address
+ * @mem: host bridge memory window resource
* @phy: optional PHY device that controls the Serdes
* @map_irq: function callback to map interrupts
* @ep_is_internal: indicates an internal emulated endpoint device is connected
{ .compatible = "mediatek,mt8192-pcie" },
{},
};
+MODULE_DEVICE_TABLE(of, mtk_pcie_of_match);
static struct platform_driver mtk_pcie_driver = {
.probe = mtk_pcie_probe,
regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "subsys");
if (regs) {
pcie->base = devm_ioremap_resource(dev, regs);
- if (IS_ERR(pcie->base)) {
- dev_err(dev, "failed to map shared register\n");
+ if (IS_ERR(pcie->base))
return PTR_ERR(pcie->base);
- }
}
pcie->free_ck = devm_clk_get(dev, "free_ck");
LOCAL_STATUS_TO_EVENT_MAP(PM_MSI_INT_SYS_ERR),
};
-struct {
+static struct {
u32 base;
u32 offset;
u32 mask;
if (err)
return err;
- err = rockchip_pcie_setup_irq(rockchip);
- if (err)
- return err;
-
rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
if (IS_ERR(rockchip->vpcie12v)) {
if (PTR_ERR(rockchip->vpcie12v) != -ENODEV)
if (err)
goto err_vpcie;
- rockchip_pcie_enable_interrupts(rockchip);
-
err = rockchip_pcie_init_irq_domain(rockchip);
if (err < 0)
goto err_deinit_port;
bridge->sysdata = rockchip;
bridge->ops = &rockchip_pcie_ops;
+ err = rockchip_pcie_setup_irq(rockchip);
+ if (err)
+ goto err_remove_irq_domain;
+
+ rockchip_pcie_enable_interrupts(rockchip);
+
err = pci_host_probe(bridge);
if (err < 0)
goto err_remove_irq_domain;
struct pci_config_window *cfg;
unsigned int bus_range, bus_range_max, bsz;
struct resource *conflict;
- int i, err;
+ int err;
if (busr->start > busr->end)
return ERR_PTR(-EINVAL);
cfg->busr.start = busr->start;
cfg->busr.end = busr->end;
cfg->busr.flags = IORESOURCE_BUS;
+ cfg->bus_shift = bus_shift;
bus_range = resource_size(&cfg->busr);
bus_range_max = resource_size(cfgres) >> bus_shift;
if (bus_range > bus_range_max) {
cfg->winp = kcalloc(bus_range, sizeof(*cfg->winp), GFP_KERNEL);
if (!cfg->winp)
goto err_exit_malloc;
- for (i = 0; i < bus_range; i++) {
- cfg->winp[i] =
- pci_remap_cfgspace(cfgres->start + i * bsz,
- bsz);
- if (!cfg->winp[i])
- goto err_exit_iomap;
- }
} else {
cfg->win = pci_remap_cfgspace(cfgres->start, bus_range * bsz);
if (!cfg->win)
}
EXPORT_SYMBOL_GPL(pci_ecam_free);
+static int pci_ecam_add_bus(struct pci_bus *bus)
+{
+ struct pci_config_window *cfg = bus->sysdata;
+ unsigned int bsz = 1 << cfg->bus_shift;
+ unsigned int busn = bus->number;
+ phys_addr_t start;
+
+ if (!per_bus_mapping)
+ return 0;
+
+ if (busn < cfg->busr.start || busn > cfg->busr.end)
+ return -EINVAL;
+
+ busn -= cfg->busr.start;
+ start = cfg->res.start + busn * bsz;
+
+ cfg->winp[busn] = pci_remap_cfgspace(start, bsz);
+ if (!cfg->winp[busn])
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void pci_ecam_remove_bus(struct pci_bus *bus)
+{
+ struct pci_config_window *cfg = bus->sysdata;
+ unsigned int busn = bus->number;
+
+ if (!per_bus_mapping || busn < cfg->busr.start || busn > cfg->busr.end)
+ return;
+
+ busn -= cfg->busr.start;
+ if (cfg->winp[busn]) {
+ iounmap(cfg->winp[busn]);
+ cfg->winp[busn] = NULL;
+ }
+}
+
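Editor's note: with per_bus_mapping, each bus gets its own (1 << bus_shift)-sized window, mapped lazily in ->add_bus() and torn down in ->remove_bus(). Within a window the usual ECAM arithmetic applies; a sketch of that address math follows, assuming the standard ECAM bus_shift of 20:

/* Sketch of the ECAM address arithmetic behind pci_ecam_add_bus() above:
 * each bus occupies (1 << bus_shift) bytes; within it, devfn and the
 * register offset select the config location. bus_shift = 20 is the
 * standard ECAM layout (an assumption for this example).
 */
#include <stdint.h>
#include <stdio.h>

#define BUS_SHIFT   20
#define DEVFN_SHIFT 12

static uint64_t ecam_offset(uint8_t bus, uint8_t devfn, uint16_t reg)
{
	return ((uint64_t)bus << BUS_SHIFT) |
	       ((uint64_t)devfn << DEVFN_SHIFT) |
	       reg;
}

int main(void)
{
	/* bus 1, device 2 function 0 (devfn = 2 << 3), register 0x10 (BAR0) */
	printf("offset=0x%llx\n",
	       (unsigned long long)ecam_offset(1, 2 << 3, 0x10));
	return 0;
}
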
/*
* Function to implement the pci_ops ->map_bus method
*/
/* ECAM ops */
const struct pci_ecam_ops pci_generic_ecam_ops = {
.pci_ops = {
+ .add_bus = pci_ecam_add_bus,
+ .remove_bus = pci_ecam_remove_bus,
.map_bus = pci_ecam_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
/* ECAM ops for 32-bit access only (non-compliant) */
const struct pci_ecam_ops pci_32b_ops = {
.pci_ops = {
+ .add_bus = pci_ecam_add_bus,
+ .remove_bus = pci_ecam_remove_bus,
.map_bus = pci_ecam_map_bus,
.read = pci_generic_config_read32,
.write = pci_generic_config_write32,
/* ECAM ops for 32-bit read only (non-compliant) */
const struct pci_ecam_ops pci_32b_read_ops = {
.pci_ops = {
+ .add_bus = pci_ecam_add_bus,
+ .remove_bus = pci_ecam_remove_bus,
.map_bus = pci_ecam_map_bus,
.read = pci_generic_config_read32,
.write = pci_generic_config_write,
int cpci_hp_start(void);
int cpci_hp_stop(void);
+/* Global variables */
+extern int cpci_debug;
+
/*
* Internal function prototypes, these functions should not be used by
* board/chassis drivers.
#define MY_NAME "cpci_hotplug"
-extern int cpci_debug;
-
#define dbg(format, arg...) \
do { \
if (cpci_debug) \
*
* Won't work for more than one PCI-PCI bridge in a slot.
*
- * @bus_num - bus number of PCI device
- * @dev_num - device number of PCI device
- * @slot - Pointer to u8 where slot number will be returned
+ * @bus: pointer to the PCI bus structure
+ * @bus_num: bus number of PCI device
+ * @dev_num: device number of PCI device
+ * @slot: Pointer to u8 where slot number will be returned
*
* Output: SUCCESS or FAILURE
*/
/**
* cpqhp_pushbutton_thread - handle pushbutton events
- * @slot: target slot (struct)
+ * @t: pointer to struct timer_list which holds all timer-related callbacks
*
* Scheduled procedure to handle blocking stuff for the pushbuttons.
* Handles all pending events and exits.
if (retval)
return retval;
- return sprintf(buf, "%d\n", value);
+ return sysfs_emit(buf, "%d\n", value);
}
static ssize_t power_write_file(struct pci_slot *pci_slot, const char *buf,
if (retval)
return retval;
- return sprintf(buf, "%d\n", value);
+ return sysfs_emit(buf, "%d\n", value);
}
static ssize_t attention_write_file(struct pci_slot *pci_slot, const char *buf,
if (retval)
return retval;
- return sprintf(buf, "%d\n", value);
+ return sysfs_emit(buf, "%d\n", value);
}
static struct pci_slot_attribute hotplug_slot_attr_latch = {
if (retval)
return retval;
- return sprintf(buf, "%d\n", value);
+ return sysfs_emit(buf, "%d\n", value);
}
static struct pci_slot_attribute hotplug_slot_attr_presence = {
* struct controller - PCIe hotplug controller
* @pcie: pointer to the controller's PCIe port service device
* @slot_cap: cached copy of the Slot Capabilities register
+ * @inband_presence_disabled: In-Band Presence Detect Disable supported by
+ * controller and disabled per spec recommendation (PCIe r5.0, appendix I
+ * implementation note)
* @slot_ctrl: cached copy of the Slot Control register
* @ctrl_lock: serializes writes to the Slot Control register
* @cmd_started: jiffies when the Slot Control register was last written;
PCI_EXP_SLTCTL_PWR_OFF);
}
+static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
+ struct pci_dev *pdev, int irq)
+{
+ /*
+ * Ignore link changes which occurred while waiting for DPC recovery.
+ * Could be several if DPC triggered multiple times consecutively.
+ */
+ synchronize_hardirq(irq);
+ atomic_and(~PCI_EXP_SLTSTA_DLLSC, &ctrl->pending_events);
+ if (pciehp_poll_mode)
+ pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
+ PCI_EXP_SLTSTA_DLLSC);
+ ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored (recovered by DPC)\n",
+ slot_name(ctrl));
+
+ /*
+ * If the link is unexpectedly down after successful recovery,
+ * the corresponding link change may have been ignored above.
+ * Synthesize it to ensure that it is acted on.
+ */
+ down_read(&ctrl->reset_lock);
+ if (!pciehp_check_link_active(ctrl))
+ pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
+ up_read(&ctrl->reset_lock);
+}
+
static irqreturn_t pciehp_isr(int irq, void *dev_id)
{
struct controller *ctrl = (struct controller *)dev_id;
PCI_EXP_SLTCTL_ATTN_IND_ON);
}
+ /*
+ * Ignore Link Down/Up events caused by Downstream Port Containment
+ * if recovery from the error succeeded.
+ */
+ if ((events & PCI_EXP_SLTSTA_DLLSC) && pci_dpc_recovered(pdev) &&
+ ctrl->state == ON_STATE) {
+ events &= ~PCI_EXP_SLTSTA_DLLSC;
+ pciehp_ignore_dpc_link_change(ctrl, pdev, irq);
+ }
+
/*
* Disable requests have higher priority than Presence Detect Changed
* or Data Link Layer State Changed events.
static ssize_t add_slot_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
- return sprintf(buf, "0\n");
+ return sysfs_emit(buf, "0\n");
}
static ssize_t remove_slot_store(struct kobject *kobj,
static ssize_t remove_slot_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
- return sprintf(buf, "0\n");
+ return sysfs_emit(buf, "0\n");
}
static struct kobj_attribute add_slot_attr =
static ssize_t show_ctrl(struct device *dev, struct device_attribute *attr, char *buf)
{
struct pci_dev *pdev;
- char *out = buf;
int index, busnr;
struct resource *res;
struct pci_bus *bus;
+ size_t len = 0;
pdev = to_pci_dev(dev);
bus = pdev->subordinate;
- out += sprintf(buf, "Free resources: memory\n");
+ len += sysfs_emit_at(buf, len, "Free resources: memory\n");
pci_bus_for_each_resource(bus, res, index) {
if (res && (res->flags & IORESOURCE_MEM) &&
!(res->flags & IORESOURCE_PREFETCH)) {
- out += sprintf(out, "start = %8.8llx, length = %8.8llx\n",
- (unsigned long long)res->start,
- (unsigned long long)resource_size(res));
+ len += sysfs_emit_at(buf, len,
+ "start = %8.8llx, length = %8.8llx\n",
+ (unsigned long long)res->start,
+ (unsigned long long)resource_size(res));
}
}
- out += sprintf(out, "Free resources: prefetchable memory\n");
+ len += sysfs_emit_at(buf, len, "Free resources: prefetchable memory\n");
pci_bus_for_each_resource(bus, res, index) {
if (res && (res->flags & IORESOURCE_MEM) &&
(res->flags & IORESOURCE_PREFETCH)) {
- out += sprintf(out, "start = %8.8llx, length = %8.8llx\n",
- (unsigned long long)res->start,
- (unsigned long long)resource_size(res));
+ len += sysfs_emit_at(buf, len,
+ "start = %8.8llx, length = %8.8llx\n",
+ (unsigned long long)res->start,
+ (unsigned long long)resource_size(res));
}
}
- out += sprintf(out, "Free resources: IO\n");
+ len += sysfs_emit_at(buf, len, "Free resources: IO\n");
pci_bus_for_each_resource(bus, res, index) {
if (res && (res->flags & IORESOURCE_IO)) {
- out += sprintf(out, "start = %8.8llx, length = %8.8llx\n",
- (unsigned long long)res->start,
- (unsigned long long)resource_size(res));
+ len += sysfs_emit_at(buf, len,
+ "start = %8.8llx, length = %8.8llx\n",
+ (unsigned long long)res->start,
+ (unsigned long long)resource_size(res));
}
}
- out += sprintf(out, "Free resources: bus numbers\n");
+ len += sysfs_emit_at(buf, len, "Free resources: bus numbers\n");
for (busnr = bus->busn_res.start; busnr <= bus->busn_res.end; busnr++) {
if (!pci_find_bus(pci_domain_nr(bus), busnr))
break;
}
if (busnr < bus->busn_res.end)
- out += sprintf(out, "start = %8.8x, length = %8.8x\n",
- busnr, (int)(bus->busn_res.end - busnr));
+ len += sysfs_emit_at(buf, len,
+ "start = %8.8x, length = %8.8x\n",
+ busnr, (int)(bus->busn_res.end - busnr));
- return out - buf;
+ return len;
}
static DEVICE_ATTR(ctrl, S_IRUGO, show_ctrl, NULL);
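For reference, the accumulation idiom used in show_ctrl() above, in its smallest form: sysfs_emit_at() clamps every write to the space remaining in the PAGE_SIZE sysfs buffer and returns the number of bytes emitted, so a running length replaces the old pointer arithmetic on sprintf() return values. A minimal sketch, assuming a hypothetical attribute and a driver-private struct foo with two u64 counters:

static ssize_t example_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
        struct foo *foo = dev_get_drvdata(dev);        /* hypothetical */
        size_t len = 0;

        /* each call writes at offset len, bounded by PAGE_SIZE - len */
        len += sysfs_emit_at(buf, len, "reads %llu\n", foo->reads);
        len += sysfs_emit_at(buf, len, "writes %llu\n", foo->writes);

        return len;        /* total bytes placed in buf */
}
static DEVICE_ATTR_RO(example);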
{
struct pci_dev *pdev = to_pci_dev(dev);
- return sprintf(buf, "%u\n", pci_sriov_get_totalvfs(pdev));
+ return sysfs_emit(buf, "%u\n", pci_sriov_get_totalvfs(pdev));
}
static ssize_t sriov_numvfs_show(struct device *dev,
num_vfs = pdev->sriov->num_VFs;
device_unlock(&pdev->dev);
- return sprintf(buf, "%u\n", num_vfs);
+ return sysfs_emit(buf, "%u\n", num_vfs);
}
/*
if (num_vfs == pdev->sriov->num_VFs)
goto exit;
+ /* is PF driver loaded */
+ if (!pdev->driver) {
+ pci_info(pdev, "no driver bound to device; cannot configure SR-IOV\n");
+ ret = -ENOENT;
+ goto exit;
+ }
+
/* is PF driver loaded w/callback */
- if (!pdev->driver || !pdev->driver->sriov_configure) {
- pci_info(pdev, "Driver does not support SRIOV configuration via sysfs\n");
+ if (!pdev->driver->sriov_configure) {
+ pci_info(pdev, "driver does not support SR-IOV configuration via sysfs\n");
ret = -ENOENT;
goto exit;
}
{
struct pci_dev *pdev = to_pci_dev(dev);
- return sprintf(buf, "%u\n", pdev->sriov->offset);
+ return sysfs_emit(buf, "%u\n", pdev->sriov->offset);
}
static ssize_t sriov_stride_show(struct device *dev,
{
struct pci_dev *pdev = to_pci_dev(dev);
- return sprintf(buf, "%u\n", pdev->sriov->stride);
+ return sysfs_emit(buf, "%u\n", pdev->sriov->stride);
}
static ssize_t sriov_vf_device_show(struct device *dev,
{
struct pci_dev *pdev = to_pci_dev(dev);
- return sprintf(buf, "%x\n", pdev->sriov->vf_device);
+ return sysfs_emit(buf, "%x\n", pdev->sriov->vf_device);
}
static ssize_t sriov_drivers_autoprobe_show(struct device *dev,
{
struct pci_dev *pdev = to_pci_dev(dev);
- return sprintf(buf, "%u\n", pdev->sriov->drivers_autoprobe);
+ return sysfs_emit(buf, "%u\n", pdev->sriov->drivers_autoprobe);
}
static ssize_t sriov_drivers_autoprobe_store(struct device *dev,
return retval;
entry = irq_get_msi_desc(irq);
- if (entry)
- return sprintf(buf, "%s\n",
- entry->msi_attrib.is_msix ? "msix" : "msi");
+ if (!entry)
+ return -ENODEV;
- return -ENODEV;
+ return sysfs_emit(buf, "%s\n",
+ entry->msi_attrib.is_msix ? "msix" : "msi");
}
static int populate_msi_sysfs(struct pci_dev *pdev)
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
+ struct pci_p2pdma *p2pdma;
size_t size = 0;
- if (pdev->p2pdma->pool)
- size = gen_pool_size(pdev->p2pdma->pool);
+ rcu_read_lock();
+ p2pdma = rcu_dereference(pdev->p2pdma);
+ if (p2pdma && p2pdma->pool)
+ size = gen_pool_size(p2pdma->pool);
+ rcu_read_unlock();
- return scnprintf(buf, PAGE_SIZE, "%zd\n", size);
+ return sysfs_emit(buf, "%zd\n", size);
}
static DEVICE_ATTR_RO(size);
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
+ struct pci_p2pdma *p2pdma;
size_t avail = 0;
- if (pdev->p2pdma->pool)
- avail = gen_pool_avail(pdev->p2pdma->pool);
+ rcu_read_lock();
+ p2pdma = rcu_dereference(pdev->p2pdma);
+ if (p2pdma && p2pdma->pool)
+ avail = gen_pool_avail(p2pdma->pool);
+ rcu_read_unlock();
- return scnprintf(buf, PAGE_SIZE, "%zd\n", avail);
+ return sysfs_emit(buf, "%zd\n", avail);
}
static DEVICE_ATTR_RO(available);
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
+ struct pci_p2pdma *p2pdma;
+ bool published = false;
- return scnprintf(buf, PAGE_SIZE, "%d\n",
- pdev->p2pdma->p2pmem_published);
+ rcu_read_lock();
+ p2pdma = rcu_dereference(pdev->p2pdma);
+ if (p2pdma)
+ published = p2pdma->p2pmem_published;
+ rcu_read_unlock();
+
+ return sysfs_emit(buf, "%d\n", published);
}
static DEVICE_ATTR_RO(published);
static void pci_p2pdma_release(void *data)
{
struct pci_dev *pdev = data;
- struct pci_p2pdma *p2pdma = pdev->p2pdma;
+ struct pci_p2pdma *p2pdma;
+ p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
if (!p2pdma)
return;
if (error)
goto out_pool_destroy;
- pdev->p2pdma = p2p;
-
error = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group);
if (error)
goto out_pool_destroy;
+ rcu_assign_pointer(pdev->p2pdma, p2p);
return 0;
out_pool_destroy:
- pdev->p2pdma = NULL;
gen_pool_destroy(p2p->pool);
out:
devm_kfree(&pdev->dev, p2p);
{
struct pci_p2pdma_pagemap *p2p_pgmap;
struct dev_pagemap *pgmap;
+ struct pci_p2pdma *p2pdma;
void *addr;
int error;
goto pgmap_free;
}
- error = gen_pool_add_owner(pdev->p2pdma->pool, (unsigned long)addr,
+ p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
+ error = gen_pool_add_owner(p2pdma->pool, (unsigned long)addr,
pci_bus_address(pdev, bar) + offset,
range_len(&pgmap->range), dev_to_node(&pdev->dev),
pgmap->ref);
{}
};
+/*
+ * This lookup function tries to find the PCI device corresponding to a given
+ * host bridge.
+ *
+ * It assumes the host bridge device is the first PCI device in the
+ * bus->devices list and that the devfn is 00.0. These assumptions should hold
+ * for all the devices in the whitelist above.
+ *
+ * This function is equivalent to pci_get_slot(host->bus, 0), however it does
+ * not take the pci_bus_sem lock, because __host_bridge_whitelist() must not
+ * sleep.
+ *
+ * For this to be safe, the caller should hold a reference to a device on the
+ * bridge, which should ensure the host_bridge device will not be freed
+ * or removed from the head of the devices list.
+ */
+static struct pci_dev *pci_host_bridge_dev(struct pci_host_bridge *host)
+{
+ struct pci_dev *root;
+
+ root = list_first_entry_or_null(&host->bus->devices,
+ struct pci_dev, bus_list);
+
+ if (!root)
+ return NULL;
+ if (root->devfn != PCI_DEVFN(0, 0))
+ return NULL;
+
+ return root;
+}
+
static bool __host_bridge_whitelist(struct pci_host_bridge *host,
- bool same_host_bridge)
+ bool same_host_bridge, bool warn)
{
- struct pci_dev *root = pci_get_slot(host->bus, PCI_DEVFN(0, 0));
+ struct pci_dev *root = pci_host_bridge_dev(host);
const struct pci_p2pdma_whitelist_entry *entry;
unsigned short vendor, device;
vendor = root->vendor;
device = root->device;
- pci_dev_put(root);
for (entry = pci_p2pdma_whitelist; entry->vendor; entry++) {
if (vendor != entry->vendor || device != entry->device)
continue;
if (entry->flags & REQ_SAME_HOST_BRIDGE && !same_host_bridge)
return false;
return true;
}
+ if (warn)
+ pci_warn(root, "Host bridge not in P2PDMA whitelist: %04x:%04x\n",
+ vendor, device);
+
return false;
}
* If we can't find a common upstream bridge take a look at the root
* complex and compare it to a whitelist of known good hardware.
*/
-static bool host_bridge_whitelist(struct pci_dev *a, struct pci_dev *b)
+static bool host_bridge_whitelist(struct pci_dev *a, struct pci_dev *b,
+ bool warn)
{
struct pci_host_bridge *host_a = pci_find_host_bridge(a->bus);
struct pci_host_bridge *host_b = pci_find_host_bridge(b->bus);
if (host_a == host_b)
- return __host_bridge_whitelist(host_a, true);
+ return __host_bridge_whitelist(host_a, true, warn);
- if (__host_bridge_whitelist(host_a, false) &&
- __host_bridge_whitelist(host_b, false))
+ if (__host_bridge_whitelist(host_a, false, warn) &&
+ __host_bridge_whitelist(host_b, false, warn))
return true;
return false;
}
+static unsigned long map_types_idx(struct pci_dev *client)
+{
+ return (pci_domain_nr(client->bus) << 16) |
+ (client->bus->number << 8) | client->devfn;
+}
+
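map_types_idx() folds the client's domain, bus and devfn into one xarray index: the domain in bits 16 and up, the bus in bits 8-15, the devfn in bits 0-7. A standalone worked example for a hypothetical client at 0001:02:03.4:

#include <stdio.h>

int main(void)
{
        unsigned long domain = 0x0001, bus = 0x02;
        unsigned long devfn = (0x03 << 3) | 0x4;        /* PCI_DEVFN(3, 4) */
        unsigned long idx = (domain << 16) | (bus << 8) | devfn;

        printf("index = %#lx\n", idx);        /* prints 0x1021c */
        return 0;
}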
+/*
+ * Calculate the P2PDMA mapping type and distance between two PCI devices.
+ *
+ * If the two devices are the same PCI function, return
+ * PCI_P2PDMA_MAP_BUS_ADDR and a distance of 0.
+ *
+ * If they are two functions of the same device, return
+ * PCI_P2PDMA_MAP_BUS_ADDR and a distance of 2 (one hop up to the bridge,
+ * then one hop back down to another function of the same device).
+ *
+ * In the case where two devices are connected to the same PCIe switch,
+ * return a distance of 4. This corresponds to the following PCI tree:
+ *
+ * -+ Root Port
+ * \+ Switch Upstream Port
+ * +-+ Switch Downstream Port 0
+ * + \- Device A
+ * \-+ Switch Downstream Port 1
+ * \- Device B
+ *
+ * The distance is 4 because we traverse from Device A to Downstream Port 0
+ * to the common Switch Upstream Port, back down to Downstream Port 1 and
+ * then to Device B. The mapping type returned depends on the ACS
+ * redirection setting of the ports along the path.
+ *
+ * If ACS redirect is set on any port in the path, traffic between the
+ * devices will go through the host bridge, so return
+ * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE; otherwise return
+ * PCI_P2PDMA_MAP_BUS_ADDR.
+ *
+ * Any two devices that have a data path that goes through the host bridge
+ * will consult a whitelist. If the host bridge is in the whitelist, return
+ * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE with the distance set to the number of
+ * ports per above. If the device is not in the whitelist, return
+ * PCI_P2PDMA_MAP_NOT_SUPPORTED.
+ */
static enum pci_p2pdma_map_type
-__upstream_bridge_distance(struct pci_dev *provider, struct pci_dev *client,
- int *dist, bool *acs_redirects, struct seq_buf *acs_list)
+calc_map_type_and_dist(struct pci_dev *provider, struct pci_dev *client,
+ int *dist, bool verbose)
{
+ enum pci_p2pdma_map_type map_type = PCI_P2PDMA_MAP_THRU_HOST_BRIDGE;
struct pci_dev *a = provider, *b = client, *bb;
+ bool acs_redirects = false;
+ struct pci_p2pdma *p2pdma;
+ struct seq_buf acs_list;
+ int acs_cnt = 0;
int dist_a = 0;
int dist_b = 0;
- int acs_cnt = 0;
+ char buf[128];
- if (acs_redirects)
- *acs_redirects = false;
+ seq_buf_init(&acs_list, buf, sizeof(buf));
/*
* Note, we don't need to take references to devices returned by
* pci_upstream_bridge(), because we hold a reference to a child
* device, which in turn holds a reference to the upstream bridge.
*/
-
while (a) {
dist_b = 0;
if (pci_bridge_has_acs_redir(a)) {
- seq_buf_print_bus_devfn(acs_list, a);
+ seq_buf_print_bus_devfn(&acs_list, a);
acs_cnt++;
}
dist_a++;
}
- if (dist)
- *dist = dist_a + dist_b;
-
- return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE;
+ *dist = dist_a + dist_b;
+ goto map_through_host_bridge;
check_b_path_acs:
bb = b;
break;
if (pci_bridge_has_acs_redir(bb)) {
- seq_buf_print_bus_devfn(acs_list, bb);
+ seq_buf_print_bus_devfn(&acs_list, bb);
acs_cnt++;
}
bb = pci_upstream_bridge(bb);
}
- if (dist)
- *dist = dist_a + dist_b;
-
- if (acs_cnt) {
- if (acs_redirects)
- *acs_redirects = true;
-
- return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE;
- }
-
- return PCI_P2PDMA_MAP_BUS_ADDR;
-}
-
-static unsigned long map_types_idx(struct pci_dev *client)
-{
- return (pci_domain_nr(client->bus) << 16) |
- (client->bus->number << 8) | client->devfn;
-}
-
-/*
- * Find the distance through the nearest common upstream bridge between
- * two PCI devices.
- *
- * If the two devices are the same device then 0 will be returned.
- *
- * If there are two virtual functions of the same device behind the same
- * bridge port then 2 will be returned (one step down to the PCIe switch,
- * then one step back to the same device).
- *
- * In the case where two devices are connected to the same PCIe switch, the
- * value 4 will be returned. This corresponds to the following PCI tree:
- *
- * -+ Root Port
- * \+ Switch Upstream Port
- * +-+ Switch Downstream Port
- * + \- Device A
- * \-+ Switch Downstream Port
- * \- Device B
- *
- * The distance is 4 because we traverse from Device A through the downstream
- * port of the switch, to the common upstream port, back up to the second
- * downstream port and then to Device B.
- *
- * Any two devices that cannot communicate using p2pdma will return
- * PCI_P2PDMA_MAP_NOT_SUPPORTED.
- *
- * Any two devices that have a data path that goes through the host bridge
- * will consult a whitelist. If the host bridges are on the whitelist,
- * this function will return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE.
- *
- * If either bridge is not on the whitelist this function returns
- * PCI_P2PDMA_MAP_NOT_SUPPORTED.
- *
- * If a bridge which has any ACS redirection bits set is in the path,
- * acs_redirects will be set to true. In this case, a list of all infringing
- * bridge addresses will be populated in acs_list (assuming it's non-null)
- * for printk purposes.
- */
-static enum pci_p2pdma_map_type
-upstream_bridge_distance(struct pci_dev *provider, struct pci_dev *client,
- int *dist, bool *acs_redirects, struct seq_buf *acs_list)
-{
- enum pci_p2pdma_map_type map_type;
-
- map_type = __upstream_bridge_distance(provider, client, dist,
- acs_redirects, acs_list);
+ *dist = dist_a + dist_b;
- if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) {
- if (!cpu_supports_p2pdma() &&
- !host_bridge_whitelist(provider, client))
- map_type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
+ if (!acs_cnt) {
+ map_type = PCI_P2PDMA_MAP_BUS_ADDR;
+ goto done;
}
- if (provider->p2pdma)
- xa_store(&provider->p2pdma->map_types, map_types_idx(client),
- xa_mk_value(map_type), GFP_KERNEL);
-
- return map_type;
-}
-
-static enum pci_p2pdma_map_type
-upstream_bridge_distance_warn(struct pci_dev *provider, struct pci_dev *client,
- int *dist)
-{
- struct seq_buf acs_list;
- bool acs_redirects;
- int ret;
-
- seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE);
- if (!acs_list.buffer)
- return -ENOMEM;
-
- ret = upstream_bridge_distance(provider, client, dist, &acs_redirects,
- &acs_list);
- if (acs_redirects) {
+ if (verbose) {
+ acs_list.buffer[acs_list.len-1] = 0; /* drop final semicolon */
pci_warn(client, "ACS redirect is set between the client and provider (%s)\n",
pci_name(provider));
- /* Drop final semicolon */
- acs_list.buffer[acs_list.len-1] = 0;
pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n",
acs_list.buffer);
}
+ acs_redirects = true;
- if (ret == PCI_P2PDMA_MAP_NOT_SUPPORTED) {
- pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge or whitelisted host bridge\n",
- pci_name(provider));
+map_through_host_bridge:
+ if (!cpu_supports_p2pdma() &&
+ !host_bridge_whitelist(provider, client, acs_redirects)) {
+ if (verbose)
+ pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge or whitelisted host bridge\n",
+ pci_name(provider));
+ map_type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
}
-
- kfree(acs_list.buffer);
-
- return ret;
+done:
+ rcu_read_lock();
+ p2pdma = rcu_dereference(provider->p2pdma);
+ if (p2pdma)
+ xa_store(&p2pdma->map_types, map_types_idx(client),
+ xa_mk_value(map_type), GFP_KERNEL);
+ rcu_read_unlock();
+ return map_type;
}
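To make the hop counting concrete, here is a standalone toy model of the distance search above, with plain parent pointers standing in for pci_upstream_bridge(); it reproduces the distance of 4 for the switch topology drawn in the comment:

#include <stdio.h>
#include <stddef.h>

struct node { struct node *parent; };

/* walk a towards the root; for each candidate ancestor, walk b up
 * and see if the paths meet, summing the hops on both sides
 */
static int distance(struct node *a, struct node *b)
{
        int dist_a = 0;

        for (; a; a = a->parent, dist_a++) {
                struct node *p;
                int dist_b = 0;

                for (p = b; p; p = p->parent, dist_b++)
                        if (p == a)
                                return dist_a + dist_b;
        }
        return -1;        /* no common ancestor */
}

int main(void)
{
        struct node upstream = { NULL };
        struct node down0 = { &upstream }, down1 = { &upstream };
        struct node dev_a = { &down0 }, dev_b = { &down1 };

        /* Device A -> Port 0 -> Upstream Port -> Port 1 -> Device B */
        printf("%d\n", distance(&dev_a, &dev_b));        /* prints 4 */
        return 0;
}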
/**
int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
int num_clients, bool verbose)
{
+ enum pci_p2pdma_map_type map;
bool not_supported = false;
struct pci_dev *pci_client;
int total_dist = 0;
- int distance;
- int i, ret;
+ int i, distance;
if (num_clients == 0)
return -1;
return -1;
}
- if (verbose)
- ret = upstream_bridge_distance_warn(provider,
- pci_client, &distance);
- else
- ret = upstream_bridge_distance(provider, pci_client,
- &distance, NULL, NULL);
+ map = calc_map_type_and_dist(provider, pci_client, &distance,
+ verbose);
pci_dev_put(pci_client);
- if (ret == PCI_P2PDMA_MAP_NOT_SUPPORTED)
+ if (map == PCI_P2PDMA_MAP_NOT_SUPPORTED)
not_supported = true;
if (not_supported && !verbose)
*/
bool pci_has_p2pmem(struct pci_dev *pdev)
{
- return pdev->p2pdma && pdev->p2pdma->p2pmem_published;
+ struct pci_p2pdma *p2pdma;
+ bool res;
+
+ rcu_read_lock();
+ p2pdma = rcu_dereference(pdev->p2pdma);
+ res = p2pdma && p2pdma->p2pmem_published;
+ rcu_read_unlock();
+
+ return res;
}
EXPORT_SYMBOL_GPL(pci_has_p2pmem);
{
void *ret = NULL;
struct percpu_ref *ref;
+ struct pci_p2pdma *p2pdma;
/*
* Pairs with synchronize_rcu() in pci_p2pdma_release() to
* read-lock.
*/
rcu_read_lock();
- if (unlikely(!pdev->p2pdma))
+ p2pdma = rcu_dereference(pdev->p2pdma);
+ if (unlikely(!p2pdma))
goto out;
- ret = (void *)gen_pool_alloc_owner(pdev->p2pdma->pool, size,
- (void **) &ref);
+ ret = (void *)gen_pool_alloc_owner(p2pdma->pool, size, (void **) &ref);
if (!ret)
goto out;
if (unlikely(!percpu_ref_tryget_live(ref))) {
- gen_pool_free(pdev->p2pdma->pool, (unsigned long) ret, size);
+ gen_pool_free(p2pdma->pool, (unsigned long) ret, size);
ret = NULL;
goto out;
}
void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size)
{
struct percpu_ref *ref;
+ struct pci_p2pdma *p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
- gen_pool_free_owner(pdev->p2pdma->pool, (uintptr_t)addr, size,
+ gen_pool_free_owner(p2pdma->pool, (uintptr_t)addr, size,
(void **) &ref);
percpu_ref_put(ref);
}
*/
pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, void *addr)
{
+ struct pci_p2pdma *p2pdma;
+
if (!addr)
return 0;
- if (!pdev->p2pdma)
+
+ p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
+ if (!p2pdma)
return 0;
/*
* bus address as the physical address. So gen_pool_virt_to_phys()
* actually returns the bus address despite the misleading name.
*/
- return gen_pool_virt_to_phys(pdev->p2pdma->pool, (unsigned long)addr);
+ return gen_pool_virt_to_phys(p2pdma->pool, (unsigned long)addr);
}
EXPORT_SYMBOL_GPL(pci_p2pmem_virt_to_bus);
*/
void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
{
- if (pdev->p2pdma)
- pdev->p2pdma->p2pmem_published = publish;
+ struct pci_p2pdma *p2pdma;
+
+ rcu_read_lock();
+ p2pdma = rcu_dereference(pdev->p2pdma);
+ if (p2pdma)
+ p2pdma->p2pmem_published = publish;
+ rcu_read_unlock();
}
EXPORT_SYMBOL_GPL(pci_p2pmem_publish);
-static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct pci_dev *provider,
- struct pci_dev *client)
+static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
+ struct device *dev)
{
+ enum pci_p2pdma_map_type type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
+ struct pci_dev *provider = to_p2p_pgmap(pgmap)->provider;
+ struct pci_dev *client;
+ struct pci_p2pdma *p2pdma;
+
if (!provider->p2pdma)
return PCI_P2PDMA_MAP_NOT_SUPPORTED;
- return xa_to_value(xa_load(&provider->p2pdma->map_types,
- map_types_idx(client)));
+ if (!dev_is_pci(dev))
+ return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+
+ client = to_pci_dev(dev);
+
+ rcu_read_lock();
+ p2pdma = rcu_dereference(provider->p2pdma);
+
+ if (p2pdma)
+ type = xa_to_value(xa_load(&p2pdma->map_types,
+ map_types_idx(client)));
+ rcu_read_unlock();
+ return type;
}
static int __pci_p2pdma_map_sg(struct pci_p2pdma_pagemap *p2p_pgmap,
{
struct pci_p2pdma_pagemap *p2p_pgmap =
to_p2p_pgmap(sg_page(sg)->pgmap);
- struct pci_dev *client;
-
- if (WARN_ON_ONCE(!dev_is_pci(dev)))
- return 0;
- client = to_pci_dev(dev);
-
- switch (pci_p2pdma_map_type(p2p_pgmap->provider, client)) {
+ switch (pci_p2pdma_map_type(sg_page(sg)->pgmap, dev)) {
case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
return dma_map_sg_attrs(dev, sg, nents, dir, attrs);
case PCI_P2PDMA_MAP_BUS_ADDR:
void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir, unsigned long attrs)
{
- struct pci_p2pdma_pagemap *p2p_pgmap =
- to_p2p_pgmap(sg_page(sg)->pgmap);
enum pci_p2pdma_map_type map_type;
- struct pci_dev *client;
-
- if (WARN_ON_ONCE(!dev_is_pci(dev)))
- return;
-
- client = to_pci_dev(dev);
- map_type = pci_p2pdma_map_type(p2p_pgmap->provider, client);
+ map_type = pci_p2pdma_map_type(sg_page(sg)->pgmap, dev);
if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE)
dma_unmap_sg_attrs(dev, sg, nents, dir, attrs);
ACPI_ATTR_INDEX_SHOW,
};
-static void dsm_label_utf16s_to_utf8s(union acpi_object *obj, char *buf)
+static int dsm_label_utf16s_to_utf8s(union acpi_object *obj, char *buf)
{
int len;
+
len = utf16s_to_utf8s((const wchar_t *)obj->buffer.pointer,
obj->buffer.length,
UTF16_LITTLE_ENDIAN,
- buf, PAGE_SIZE);
- buf[len] = '\n';
+ buf, PAGE_SIZE - 1);
+ buf[len++] = '\n';
+
+ return len;
}
static int dsm_get_label(struct device *dev, char *buf,
{
acpi_handle handle = ACPI_HANDLE(dev);
union acpi_object *obj, *tmp;
- int len = -1;
+ int len = 0;
if (!handle)
return -1;
* this entry must return a null string.
*/
if (attr == ACPI_ATTR_INDEX_SHOW) {
- scnprintf(buf, PAGE_SIZE, "%llu\n", tmp->integer.value);
+ len = sysfs_emit(buf, "%llu\n", tmp->integer.value);
} else if (attr == ACPI_ATTR_LABEL_SHOW) {
if (tmp[1].type == ACPI_TYPE_STRING)
- scnprintf(buf, PAGE_SIZE, "%s\n",
- tmp[1].string.pointer);
+ len = sysfs_emit(buf, "%s\n",
+ tmp[1].string.pointer);
else if (tmp[1].type == ACPI_TYPE_BUFFER)
- dsm_label_utf16s_to_utf8s(tmp + 1, buf);
+ len = dsm_label_utf16s_to_utf8s(tmp + 1, buf);
}
- len = strlen(buf) > 0 ? strlen(buf) : -1;
}
ACPI_FREE(obj);
- return len;
+ return len > 0 ? len : -1;
}
static ssize_t label_show(struct device *dev, struct device_attribute *attr,
if (np == NULL)
return 0;
- return sysfs_emit(buf, "%pOF", np);
+ return sysfs_emit(buf, "%pOF\n", np);
}
static DEVICE_ATTR_RO(devspec);
#endif
return pci_reset_hotplug_slot(dev->slot->hotplug, probe);
}
+static int pci_reset_bus_function(struct pci_dev *dev, int probe)
+{
+ int rc;
+
+ rc = pci_dev_reset_slot_function(dev, probe);
+ if (rc != -ENOTTY)
+ return rc;
+ return pci_parent_bus_reset(dev, probe);
+}
+
static void pci_dev_lock(struct pci_dev *dev)
{
pci_cfg_access_lock(dev);
rc = pci_pm_reset(dev, 0);
if (rc != -ENOTTY)
return rc;
- rc = pci_dev_reset_slot_function(dev, 0);
- if (rc != -ENOTTY)
- return rc;
- return pci_parent_bus_reset(dev, 0);
+ return pci_reset_bus_function(dev, 0);
}
EXPORT_SYMBOL_GPL(__pci_reset_function_locked);
if (rc != -ENOTTY)
return rc;
rc = pci_pm_reset(dev, 1);
- if (rc != -ENOTTY)
- return rc;
- rc = pci_dev_reset_slot_function(dev, 1);
if (rc != -ENOTTY)
return rc;
- return pci_parent_bus_reset(dev, 1);
+ return pci_reset_bus_function(dev, 1);
}
/**
spin_lock(&resource_alignment_lock);
if (resource_alignment_param)
- count = scnprintf(buf, PAGE_SIZE, "%s", resource_alignment_param);
+ count = sysfs_emit(buf, "%s\n", resource_alignment_param);
spin_unlock(&resource_alignment_lock);
- /*
- * When set by the command line, resource_alignment_param will not
- * have a trailing line feed, which is ugly. So conditionally add
- * it here.
- */
- if (count >= 2 && buf[count - 2] != '\n' && count < PAGE_SIZE - 1) {
- buf[count - 1] = '\n';
- buf[count++] = 0;
- }
-
return count;
}
static ssize_t resource_alignment_store(struct bus_type *bus,
const char *buf, size_t count)
{
- char *param = kstrndup(buf, count, GFP_KERNEL);
+ char *param, *old, *end;
+
+ if (count >= (PAGE_SIZE - 1))
+ return -EINVAL;
+ param = kstrndup(buf, count, GFP_KERNEL);
if (!param)
return -ENOMEM;
+ end = strchr(param, '\n');
+ if (end)
+ *end = '\0';
+
spin_lock(&resource_alignment_lock);
- kfree(resource_alignment_param);
- resource_alignment_param = param;
+ old = resource_alignment_param;
+ if (strlen(param)) {
+ resource_alignment_param = param;
+ } else {
+ kfree(param);
+ resource_alignment_param = NULL;
+ }
spin_unlock(&resource_alignment_lock);
+
+ kfree(old);
+
return count;
}
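The store path above follows a common locking idiom: publish the new pointer inside the critical section, but kfree() the old one only after the lock is dropped, keeping the spinlock hold time minimal. A condensed kernel-style sketch of the idiom, where shared_param and param_lock are hypothetical stand-ins for resource_alignment_param and its lock:

static DEFINE_SPINLOCK(param_lock);
static char *shared_param;        /* protected by param_lock */

static void param_replace(char *new_val)
{
        char *old;

        spin_lock(&param_lock);
        old = shared_param;        /* take ownership of the old value */
        shared_param = new_val;    /* may be NULL to clear */
        spin_unlock(&param_lock);

        kfree(old);                /* kfree(NULL) is a no-op */
}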
/**
* pci_dev_set_io_state - Set the new error state if possible.
*
- * @dev - pci device to set new error_state
- * @new - the state we want dev to be in
+ * @dev: PCI device to set new error_state
+ * @new: the state we want dev to be in
*
* Must be called with device_lock held.
*
/* pci_dev priv_flags */
#define PCI_DEV_ADDED 0
+#define PCI_DPC_RECOVERED 1
+#define PCI_DPC_RECOVERING 2
static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
{
void pci_dpc_init(struct pci_dev *pdev);
void dpc_process_error(struct pci_dev *pdev);
pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
+bool pci_dpc_recovered(struct pci_dev *pdev);
#else
static inline void pci_save_dpc_state(struct pci_dev *dev) {}
static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
static inline void pci_dpc_init(struct pci_dev *pdev) {}
+static inline bool pci_dpc_recovered(struct pci_dev *pdev) { return false; }
#endif
#ifdef CONFIG_PCIEPORTBUS
char *buf) \
{ \
unsigned int i; \
- char *str = buf; \
struct pci_dev *pdev = to_pci_dev(dev); \
u64 *stats = pdev->aer_stats->stats_array; \
+ size_t len = 0; \
\
for (i = 0; i < ARRAY_SIZE(strings_array); i++) { \
if (strings_array[i]) \
- str += sprintf(str, "%s %llu\n", \
- strings_array[i], stats[i]); \
+ len += sysfs_emit_at(buf, len, "%s %llu\n", \
+ strings_array[i], \
+ stats[i]); \
else if (stats[i]) \
- str += sprintf(str, #stats_array "_bit[%d] %llu\n",\
- i, stats[i]); \
+ len += sysfs_emit_at(buf, len, \
+ #stats_array "_bit[%d] %llu\n",\
+ i, stats[i]); \
} \
- str += sprintf(str, "TOTAL_%s %llu\n", total_string, \
- pdev->aer_stats->total_field); \
- return str-buf; \
+ len += sysfs_emit_at(buf, len, "TOTAL_%s %llu\n", total_string, \
+ pdev->aer_stats->total_field); \
+ return len; \
} \
static DEVICE_ATTR_RO(name)
char *buf) \
{ \
struct pci_dev *pdev = to_pci_dev(dev); \
- return sprintf(buf, "%llu\n", pdev->aer_stats->field); \
+ return sysfs_emit(buf, "%llu\n", pdev->aer_stats->field); \
} \
static DEVICE_ATTR_RO(name)
pdev = pci_get_domain_bus_and_slot(entry.domain, entry.bus,
entry.devfn);
if (!pdev) {
- pr_err("AER recover: Can not find pci_dev for %04x:%02x:%02x:%x\n",
+ pr_err("no pci_dev for %04x:%02x:%02x.%x\n",
entry.domain, entry.bus,
PCI_SLOT(entry.devfn), PCI_FUNC(entry.devfn));
continue;
&aer_recover_ring_lock))
schedule_work(&aer_recover_work);
else
- pr_err("AER recover: Buffer overflow when recovering AER for %04x:%02x:%02x:%x\n",
+ pr_err("buffer overflow in recovery for %04x:%02x:%02x.%x\n",
domain, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
}
EXPORT_SYMBOL_GPL(aer_recover_queue);
struct pci_dev *pdev = to_pci_dev(dev);
struct pcie_link_state *link = pcie_aspm_get_link(pdev);
- return sprintf(buf, "%d\n", (link->aspm_enabled & state) ? 1 : 0);
+ return sysfs_emit(buf, "%d\n", (link->aspm_enabled & state) ? 1 : 0);
}
static ssize_t aspm_attr_store_common(struct device *dev,
struct pci_dev *pdev = to_pci_dev(dev);
struct pcie_link_state *link = pcie_aspm_get_link(pdev);
- return sprintf(buf, "%d\n", link->clkpm_enabled);
+ return sysfs_emit(buf, "%d\n", link->clkpm_enabled);
}
static ssize_t clkpm_store(struct device *dev,
pci_write_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, *cap);
}
+static DECLARE_WAIT_QUEUE_HEAD(dpc_completed_waitqueue);
+
+#ifdef CONFIG_HOTPLUG_PCI_PCIE
+static bool dpc_completed(struct pci_dev *pdev)
+{
+ u16 status;
+
+ pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_STATUS, &status);
+ if ((status != 0xffff) && (status & PCI_EXP_DPC_STATUS_TRIGGER))
+ return false;
+
+ if (test_bit(PCI_DPC_RECOVERING, &pdev->priv_flags))
+ return false;
+
+ return true;
+}
+
+/**
+ * pci_dpc_recovered - whether DPC triggered and has recovered successfully
+ * @pdev: PCI device
+ *
+ * Return true if DPC was triggered for @pdev and has recovered successfully.
+ * Wait for recovery if it hasn't completed yet. Called from the PCIe hotplug
+ * driver to recognize and ignore Link Down/Up events caused by DPC.
+ */
+bool pci_dpc_recovered(struct pci_dev *pdev)
+{
+ struct pci_host_bridge *host;
+
+ if (!pdev->dpc_cap)
+ return false;
+
+ /*
+ * Synchronization between hotplug and DPC is not supported
+ * if DPC is owned by firmware and EDR is not enabled.
+ */
+ host = pci_find_host_bridge(pdev->bus);
+ if (!host->native_dpc && !IS_ENABLED(CONFIG_PCIE_EDR))
+ return false;
+
+ /*
+ * Need a timeout in case DPC never completes due to failure of
+ * dpc_wait_rp_inactive(). The spec doesn't mandate a time limit,
+ * but reports indicate that DPC completes within 4 seconds.
+ */
+ wait_event_timeout(dpc_completed_waitqueue, dpc_completed(pdev),
+ msecs_to_jiffies(4000));
+
+ return test_and_clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
+}
+#endif /* CONFIG_HOTPLUG_PCI_PCIE */
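The synchronization between dpc_reset_link() and pci_dpc_recovered() boils down to a flag-and-waitqueue handshake: one bit marks the operation in flight, one records the outcome, and waiters block with a bounded timeout. A condensed sketch of the pattern; FLAG_BUSY and FLAG_DONE are hypothetical stand-ins for PCI_DPC_RECOVERING and PCI_DPC_RECOVERED:

#define FLAG_BUSY        0        /* hypothetical */
#define FLAG_DONE        1        /* hypothetical */

static DECLARE_WAIT_QUEUE_HEAD(done_wq);

/* recovery side: mark in flight, record the outcome, wake waiters */
static void finish_recovery(unsigned long *flags, bool success)
{
        set_bit(FLAG_BUSY, flags);
        /* ... perform the reset/recovery work ... */
        if (success)
                set_bit(FLAG_DONE, flags);
        else
                clear_bit(FLAG_DONE, flags);
        clear_bit(FLAG_BUSY, flags);
        wake_up_all(&done_wq);
}

/* observer side: bounded wait, then consume the outcome exactly once */
static bool recovery_succeeded(unsigned long *flags)
{
        wait_event_timeout(done_wq, !test_bit(FLAG_BUSY, flags),
                           msecs_to_jiffies(4000));
        return test_and_clear_bit(FLAG_DONE, flags);
}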
+
static int dpc_wait_rp_inactive(struct pci_dev *pdev)
{
unsigned long timeout = jiffies + HZ;
pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
{
+ pci_ers_result_t ret;
u16 cap;
+ set_bit(PCI_DPC_RECOVERING, &pdev->priv_flags);
+
/*
* DPC disables the Link automatically in hardware, so it has
* already been reset by the time we get here.
if (!pcie_wait_for_link(pdev, false))
pci_info(pdev, "Data Link Layer Link Active not cleared in 1000 msec\n");
- if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev))
- return PCI_ERS_RESULT_DISCONNECT;
+ if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev)) {
+ clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
+ ret = PCI_ERS_RESULT_DISCONNECT;
+ goto out;
+ }
pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
PCI_EXP_DPC_STATUS_TRIGGER);
if (!pcie_wait_for_link(pdev, true)) {
pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
- return PCI_ERS_RESULT_DISCONNECT;
+ clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
+ ret = PCI_ERS_RESULT_DISCONNECT;
+ } else {
+ set_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
+ ret = PCI_ERS_RESULT_RECOVERED;
}
-
- return PCI_ERS_RESULT_RECOVERED;
+out:
+ clear_bit(PCI_DPC_RECOVERING, &pdev->priv_flags);
+ wake_up_all(&dpc_completed_waitqueue);
+ return ret;
}
static void dpc_process_rp_pio_error(struct pci_dev *pdev)
#include <linux/hypervisor.h>
#include <linux/irqdomain.h>
#include <linux/pm_runtime.h>
+#include <linux/list_sort.h>
#include "pci.h"
#define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */
dev_set_msi_domain(&bus->dev, d);
}
+static int res_cmp(void *priv, const struct list_head *a,
+ const struct list_head *b)
+{
+ struct resource_entry *entry1, *entry2;
+
+ entry1 = container_of(a, struct resource_entry, node);
+ entry2 = container_of(b, struct resource_entry, node);
+
+ if (entry1->res->flags != entry2->res->flags)
+ return entry1->res->flags > entry2->res->flags;
+
+ if (entry1->offset != entry2->offset)
+ return entry1->offset > entry2->offset;
+
+ return entry1->res->start > entry2->res->start;
+}
+
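The effect of the sort-and-coalesce pass in pci_register_host_bridge() below can be reproduced with a standalone toy: two windows that abut, [0x1000..0x1fff] and [0x2000..0x2fff], collapse into one after sorting by start address, and the emptied entry is skipped afterwards just as the kernel skips windows with !res->end:

#include <stdio.h>
#include <stdlib.h>

struct window { unsigned long start, end; };

static int cmp(const void *a, const void *b)
{
        const struct window *w1 = a, *w2 = b;

        return (w1->start > w2->start) - (w1->start < w2->start);
}

int main(void)
{
        struct window w[] = { { 0x2000, 0x2fff }, { 0x1000, 0x1fff } };
        size_t i, n = sizeof(w) / sizeof(*w);

        qsort(w, n, sizeof(*w), cmp);

        /* merge each window into its successor when they abut */
        for (i = 0; i + 1 < n; i++) {
                if (w[i].end + 1 == w[i + 1].start) {
                        w[i + 1].start = w[i].start;
                        w[i].start = w[i].end = 0;        /* mark dead */
                }
        }

        for (i = 0; i < n; i++)
                if (w[i].end)
                        printf("[%#lx..%#lx]\n", w[i].start, w[i].end);
        return 0;        /* prints [0x1000..0x2fff] */
}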
static int pci_register_host_bridge(struct pci_host_bridge *bridge)
{
struct device *parent = bridge->dev.parent;
- struct resource_entry *window, *n;
+ struct resource_entry *window, *next, *n;
struct pci_bus *bus, *b;
- resource_size_t offset;
+ resource_size_t offset, next_offset;
LIST_HEAD(resources);
- struct resource *res;
+ struct resource *res, *next_res;
char addr[64], *fmt;
const char *name;
int err;
if (nr_node_ids > 1 && pcibus_to_node(bus) == NUMA_NO_NODE)
dev_warn(&bus->dev, "Unknown NUMA node; performance will be reduced\n");
+ /* Sort and coalesce contiguous windows */
+ list_sort(NULL, &resources, res_cmp);
+ resource_list_for_each_entry_safe(window, n, &resources) {
+ if (list_is_last(&window->node, &resources))
+ break;
+
+ next = list_next_entry(window, node);
+ offset = window->offset;
+ res = window->res;
+ next_offset = next->offset;
+ next_res = next->res;
+
+ if (res->flags != next_res->flags || offset != next_offset)
+ continue;
+
+ if (res->end + 1 == next_res->start) {
+ next_res->start = res->start;
+ res->flags = res->start = res->end = 0;
+ }
+ }
+
/* Add initial resources to the bus */
resource_list_for_each_entry_safe(window, n, &resources) {
- list_move_tail(&window->node, &bridge->windows);
offset = window->offset;
res = window->res;
+ if (!res->end)
+ continue;
+
+ list_move_tail(&window->node, &bridge->windows);
if (res->flags & IORESOURCE_BUS)
pci_bus_insert_busn_res(bus, bus->number, res->end);
pci_bus_put(pci_dev->bus);
kfree(pci_dev->driver_override);
bitmap_free(pci_dev->dma_alias_mask);
+ dev_dbg(dev, "device released\n");
kfree(pci_dev);
}
#include <linux/nvme.h>
#include <linux/platform_data/x86/apple.h>
#include <linux/pm_runtime.h>
+#include <linux/suspend.h>
#include <linux/switchtec.h>
#include <asm/dma.h> /* isa_dma_bridge_buggy */
#include "pci.h"
return;
if (pci_pcie_type(dev) != PCI_EXP_TYPE_UPSTREAM)
return;
+
+ /*
+ * SXIO/SXFP/SXLF turns off power to the Thunderbolt controller.
+ * We don't know how to turn it back on again, but firmware does,
+ * so we can only use SXIO/SXFP/SXLF if we're suspending via
+ * firmware.
+ */
+ if (!pm_suspend_via_firmware())
+ return;
+
bridge = ACPI_HANDLE(&dev->dev);
if (!bridge)
return;
static ssize_t address_read_file(struct pci_slot *slot, char *buf)
{
if (slot->number == 0xff)
- return sprintf(buf, "%04x:%02x\n",
- pci_domain_nr(slot->bus),
- slot->bus->number);
- else
- return sprintf(buf, "%04x:%02x:%02x\n",
- pci_domain_nr(slot->bus),
- slot->bus->number,
- slot->number);
+ return sysfs_emit(buf, "%04x:%02x\n",
+ pci_domain_nr(slot->bus),
+ slot->bus->number);
+
+ return sysfs_emit(buf, "%04x:%02x:%02x\n",
+ pci_domain_nr(slot->bus),
+ slot->bus->number,
+ slot->number);
}
static ssize_t bus_speed_read(enum pci_bus_speed speed, char *buf)
{
- return sprintf(buf, "%s\n", pci_speed_string(speed));
+ return sysfs_emit(buf, "%s\n", pci_speed_string(speed));
}
static ssize_t max_speed_read_file(struct pci_slot *slot, char *buf)
ver = ioread32(&stdev->mmio_sys_info->device_version);
- return sprintf(buf, "%x\n", ver);
+ return sysfs_emit(buf, "%x\n", ver);
}
static DEVICE_ATTR_RO(device_version);
ver = ioread32(&stdev->mmio_sys_info->firmware_version);
- return sprintf(buf, "%08x\n", ver);
+ return sysfs_emit(buf, "%08x\n", ver);
}
static DEVICE_ATTR_RO(fw_version);
/* component_vendor field not supported after gen3 */
if (stdev->gen != SWITCHTEC_GEN3)
- return sprintf(buf, "none\n");
+ return sysfs_emit(buf, "none\n");
return io_string_show(buf, &si->gen3.component_vendor,
sizeof(si->gen3.component_vendor));
/* component_id field not supported after gen3 */
if (stdev->gen != SWITCHTEC_GEN3)
- return sprintf(buf, "none\n");
+ return sysfs_emit(buf, "none\n");
- return sprintf(buf, "PM%04X\n", id);
+ return sysfs_emit(buf, "PM%04X\n", id);
}
static DEVICE_ATTR_RO(component_id);
/* component_revision field not supported after gen3 */
if (stdev->gen != SWITCHTEC_GEN3)
- return sprintf(buf, "255\n");
+ return sysfs_emit(buf, "255\n");
- return sprintf(buf, "%d\n", rev);
+ return sysfs_emit(buf, "%d\n", rev);
}
static DEVICE_ATTR_RO(component_revision);
{
struct switchtec_dev *stdev = to_stdev(dev);
- return sprintf(buf, "%d\n", stdev->partition);
+ return sysfs_emit(buf, "%d\n", stdev->partition);
}
static DEVICE_ATTR_RO(partition);
{
struct switchtec_dev *stdev = to_stdev(dev);
- return sprintf(buf, "%d\n", stdev->partition_count);
+ return sysfs_emit(buf, "%d\n", stdev->partition_count);
}
static DEVICE_ATTR_RO(partition_count);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
at91_shdwc->shdwc_base = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(at91_shdwc->shdwc_base)) {
- dev_err(&pdev->dev, "Could not map reset controller address\n");
+ if (IS_ERR(at91_shdwc->shdwc_base))
return PTR_ERR(at91_shdwc->shdwc_base);
- }
match = of_match_node(at91_shdwc_of_match, pdev->dev.of_node);
at91_shdwc->rcfg = match->data;
{ .compatible = "gpio-poweroff", },
{},
};
+MODULE_DEVICE_TABLE(of, of_gpio_poweroff_match);
static struct platform_driver gpio_poweroff_driver = {
.probe = gpio_poweroff_probe,
{.compatible = "ti,keystone-reset", },
{},
};
+MODULE_DEVICE_TABLE(of, rsctrl_of_match);
static int rsctrl_probe(struct platform_device *pdev)
{
{ .compatible = "regulator-poweroff", },
{},
};
+MODULE_DEVICE_TABLE(of, of_regulator_poweroff_match);
static struct platform_driver regulator_poweroff_driver = {
.probe = regulator_poweroff_probe,
config BATTERY_RT5033
tristate "RT5033 fuel gauge support"
- depends on MFD_RT5033
+ depends on I2C
+ select REGMAP_I2C
help
This adds support for battery fuel gauge in Richtek RT5033 PMIC.
The fuel gauge calculates and determines the battery state of charge
Say Y to enable support for Microchip UCS1002 Programmable
USB Port Power Controller with Charger Emulation.
-config CHARGER_BD70528
- tristate "ROHM bd70528 charger driver"
- depends on MFD_ROHM_BD70528
- select LINEAR_RANGES
- help
- Say Y here to enable support for getting battery status
- information and altering charger configurations from charger
- block of the ROHM BD70528 Power Management IC.
-
config CHARGER_BD99954
tristate "ROHM bd99954 charger driver"
depends on I2C
obj-$(CONFIG_CHARGER_88PM860X) += 88pm860x_charger.o
obj-$(CONFIG_CHARGER_PCF50633) += pcf50633-charger.o
obj-$(CONFIG_BATTERY_RX51) += rx51_battery.o
-obj-$(CONFIG_AB8500_BM) += ab8500_bmdata.o ab8500_charger.o ab8500_fg.o ab8500_btemp.o abx500_chargalg.o pm2301_charger.o
+obj-$(CONFIG_AB8500_BM) += ab8500_bmdata.o ab8500_charger.o ab8500_fg.o ab8500_btemp.o abx500_chargalg.o
obj-$(CONFIG_CHARGER_CPCAP) += cpcap-charger.o
obj-$(CONFIG_CHARGER_ISP1704) += isp1704_charger.o
obj-$(CONFIG_CHARGER_MAX8903) += max8903_charger.o
obj-$(CONFIG_CHARGER_SC2731) += sc2731_charger.o
obj-$(CONFIG_FUEL_GAUGE_SC27XX) += sc27xx_fuel_gauge.o
obj-$(CONFIG_CHARGER_UCS1002) += ucs1002_power.o
-obj-$(CONFIG_CHARGER_BD70528) += bd70528-charger.o
obj-$(CONFIG_CHARGER_BD99954) += bd99954-charger.o
obj-$(CONFIG_CHARGER_WILCO) += wilco-charger.o
obj-$(CONFIG_RN5T618_POWER) += rn5t618_power.o
int usb_safety_tmr_h;
int bkup_bat_v;
int bkup_bat_i;
- bool autopower_cfg;
- bool ac_enabled;
- bool usb_enabled;
bool no_maintenance;
bool capacity_scaling;
bool chg_unknown_bat;
struct device_node *np,
struct abx500_bm_data *bm);
+extern struct platform_driver ab8500_fg_driver;
+extern struct platform_driver ab8500_btemp_driver;
+extern struct platform_driver abx500_chargalg_driver;
+
#endif /* _AB8500_CHARGER_H_ */
* - POWER_SUPPLY_TYPE_USB,
* because only they store a pointer to struct ux500_charger as drv_data.
*/
-#define psy_to_ux500_charger(x) power_supply_get_drvdata(psy)
+#define psy_to_ux500_charger(x) power_supply_get_drvdata(x)
/* Forward declaration */
struct ux500_charger;
#include <linux/init.h>
#include <linux/module.h>
#include <linux/device.h>
+#include <linux/component.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/slab.h>
return 0;
}
-static int ab8500_btemp_remove(struct platform_device *pdev)
-{
- struct ab8500_btemp *di = platform_get_drvdata(pdev);
- int i, irq;
-
- /* Disable interrupts */
- for (i = 0; i < ARRAY_SIZE(ab8500_btemp_irq); i++) {
- irq = platform_get_irq_byname(pdev, ab8500_btemp_irq[i].name);
- free_irq(irq, di);
- }
-
- /* Delete the work queue */
- destroy_workqueue(di->btemp_wq);
-
- flush_scheduled_work();
- power_supply_unregister(di->btemp_psy);
-
- return 0;
-}
-
static char *supply_interface[] = {
"ab8500_chargalg",
"ab8500_fg",
.external_power_changed = ab8500_btemp_external_power_changed,
};
+static int ab8500_btemp_bind(struct device *dev, struct device *master,
+ void *data)
+{
+ struct ab8500_btemp *di = dev_get_drvdata(dev);
+
+ /* Create a work queue for the btemp */
+ di->btemp_wq =
+ alloc_workqueue("ab8500_btemp_wq", WQ_MEM_RECLAIM, 0);
+ if (di->btemp_wq == NULL) {
+ dev_err(dev, "failed to create work queue\n");
+ return -ENOMEM;
+ }
+
+ /* Kick off periodic temperature measurements */
+ ab8500_btemp_periodic(di, true);
+
+ return 0;
+}
+
+static void ab8500_btemp_unbind(struct device *dev, struct device *master,
+ void *data)
+{
+ struct ab8500_btemp *di = dev_get_drvdata(dev);
+
+ /* Delete the work queue */
+ destroy_workqueue(di->btemp_wq);
+ flush_scheduled_work();
+}
+
+static const struct component_ops ab8500_btemp_component_ops = {
+ .bind = ab8500_btemp_bind,
+ .unbind = ab8500_btemp_unbind,
+};
+
static int ab8500_btemp_probe(struct platform_device *pdev)
{
- struct device_node *np = pdev->dev.of_node;
struct power_supply_config psy_cfg = {};
struct device *dev = &pdev->dev;
struct ab8500_btemp *di;
di->bm = &ab8500_bm_data;
- ret = ab8500_bm_of_probe(dev, np, di->bm);
- if (ret) {
- dev_err(dev, "failed to get battery information\n");
- return ret;
- }
-
/* get parent data */
di->dev = dev;
di->parent = dev_get_drvdata(pdev->dev.parent);
psy_cfg.num_supplicants = ARRAY_SIZE(supply_interface);
psy_cfg.drv_data = di;
- /* Create a work queue for the btemp */
- di->btemp_wq =
- alloc_workqueue("ab8500_btemp_wq", WQ_MEM_RECLAIM, 0);
- if (di->btemp_wq == NULL) {
- dev_err(dev, "failed to create work queue\n");
- return -ENOMEM;
- }
-
/* Init work for measuring temperature periodically */
INIT_DEFERRABLE_WORK(&di->btemp_periodic_work,
ab8500_btemp_periodic_work);
AB8500_BTEMP_HIGH_TH, &val);
if (ret < 0) {
dev_err(dev, "%s ab8500 read failed\n", __func__);
- goto free_btemp_wq;
+ return ret;
}
switch (val) {
case BTEMP_HIGH_TH_57_0:
}
/* Register BTEMP power supply class */
- di->btemp_psy = power_supply_register(dev, &ab8500_btemp_desc,
- &psy_cfg);
+ di->btemp_psy = devm_power_supply_register(dev, &ab8500_btemp_desc,
+ &psy_cfg);
if (IS_ERR(di->btemp_psy)) {
dev_err(dev, "failed to register BTEMP psy\n");
- ret = PTR_ERR(di->btemp_psy);
- goto free_btemp_wq;
+ return PTR_ERR(di->btemp_psy);
}
/* Register interrupts */
for (i = 0; i < ARRAY_SIZE(ab8500_btemp_irq); i++) {
irq = platform_get_irq_byname(pdev, ab8500_btemp_irq[i].name);
- if (irq < 0) {
- ret = irq;
- goto free_irq;
- }
+ if (irq < 0)
+ return irq;
- ret = request_threaded_irq(irq, NULL, ab8500_btemp_irq[i].isr,
+ ret = devm_request_threaded_irq(dev, irq, NULL,
+ ab8500_btemp_irq[i].isr,
IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
ab8500_btemp_irq[i].name, di);
if (ret) {
dev_err(dev, "failed to request %s IRQ %d: %d\n"
, ab8500_btemp_irq[i].name, irq, ret);
- goto free_irq;
+ return ret;
}
dev_dbg(dev, "Requested %s IRQ %d: %d\n",
ab8500_btemp_irq[i].name, irq, ret);
platform_set_drvdata(pdev, di);
- /* Kick off periodic temperature measurements */
- ab8500_btemp_periodic(di, true);
list_add_tail(&di->node, &ab8500_btemp_list);
- return ret;
+ return component_add(dev, &ab8500_btemp_component_ops);
+}
-free_irq:
- /* We also have to free all successfully registered irqs */
- for (i = i - 1; i >= 0; i--) {
- irq = platform_get_irq_byname(pdev, ab8500_btemp_irq[i].name);
- free_irq(irq, di);
- }
+static int ab8500_btemp_remove(struct platform_device *pdev)
+{
+ component_del(&pdev->dev, &ab8500_btemp_component_ops);
- power_supply_unregister(di->btemp_psy);
-free_btemp_wq:
- destroy_workqueue(di->btemp_wq);
- return ret;
+ return 0;
}
static SIMPLE_DEV_PM_OPS(ab8500_btemp_pm_ops, ab8500_btemp_suspend, ab8500_btemp_resume);
{ .compatible = "stericsson,ab8500-btemp", },
{ },
};
+MODULE_DEVICE_TABLE(of, ab8500_btemp_match);
-static struct platform_driver ab8500_btemp_driver = {
+struct platform_driver ab8500_btemp_driver = {
.probe = ab8500_btemp_probe,
.remove = ab8500_btemp_remove,
.driver = {
.pm = &ab8500_btemp_pm_ops,
},
};
-
-static int __init ab8500_btemp_init(void)
-{
- return platform_driver_register(&ab8500_btemp_driver);
-}
-
-static void __exit ab8500_btemp_exit(void)
-{
- platform_driver_unregister(&ab8500_btemp_driver);
-}
-
-device_initcall(ab8500_btemp_init);
-module_exit(ab8500_btemp_exit);
-
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Johan Palsson, Karl Komierowski, Arun R Murthy");
MODULE_ALIAS("platform:ab8500-btemp");
#include <linux/init.h>
#include <linux/module.h>
#include <linux/device.h>
+#include <linux/component.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/notifier.h>
static void ab8500_power_supply_changed(struct ab8500_charger *di,
struct power_supply *psy)
{
+ /*
+ * This happens if we get notifications or interrupts and
+ * the platform has been configured not to support one or the
+ * other type of charging.
+ */
+ if (!psy)
+ return;
+
if (di->autopower_cfg) {
if (!di->usb.charger_connected &&
!di->ac.charger_connected &&
if (!connected)
di->flags.vbus_drop_end = false;
- sysfs_notify(&di->usb_chg.psy->dev.kobj, NULL, "present");
+ /*
+ * Sometimes the platform is configured not to support
+ * USB charging and no psy has been created, but we will
+ * still get these notifications.
+ */
+ if (di->usb_chg.psy) {
+ sysfs_notify(&di->usb_chg.psy->dev.kobj, NULL,
+ "present");
+ }
if (connected) {
mutex_lock(&di->charger_attached_mutex);
enum ab8500_usb_state bm_usb_state;
unsigned mA = *((unsigned *)power);
- if (!di)
- return NOTIFY_DONE;
-
if (event != USB_EVENT_VBUS) {
dev_dbg(di->dev, "not a standard host, returning\n");
return NOTIFY_DONE;
.notifier_call = ab8500_external_charger_prepare,
};
-static int ab8500_charger_remove(struct platform_device *pdev)
+static char *supply_interface[] = {
+ "ab8500_chargalg",
+ "ab8500_fg",
+ "ab8500_btemp",
+};
+
+static const struct power_supply_desc ab8500_ac_chg_desc = {
+ .name = "ab8500_ac",
+ .type = POWER_SUPPLY_TYPE_MAINS,
+ .properties = ab8500_charger_ac_props,
+ .num_properties = ARRAY_SIZE(ab8500_charger_ac_props),
+ .get_property = ab8500_charger_ac_get_property,
+};
+
+static const struct power_supply_desc ab8500_usb_chg_desc = {
+ .name = "ab8500_usb",
+ .type = POWER_SUPPLY_TYPE_USB,
+ .properties = ab8500_charger_usb_props,
+ .num_properties = ARRAY_SIZE(ab8500_charger_usb_props),
+ .get_property = ab8500_charger_usb_get_property,
+};
+
+static int ab8500_charger_bind(struct device *dev)
{
- struct ab8500_charger *di = platform_get_drvdata(pdev);
- int i, irq, ret;
+ struct ab8500_charger *di = dev_get_drvdata(dev);
+ int ch_stat;
+ int ret;
+
+ /* Create a work queue for the charger */
+ di->charger_wq = alloc_ordered_workqueue("ab8500_charger_wq",
+ WQ_MEM_RECLAIM);
+ if (di->charger_wq == NULL) {
+ dev_err(dev, "failed to create work queue\n");
+ return -ENOMEM;
+ }
+
+ ch_stat = ab8500_charger_detect_chargers(di, false);
+
+ if (ch_stat & AC_PW_CONN) {
+ if (is_ab8500(di->parent))
+ queue_delayed_work(di->charger_wq,
+ &di->ac_charger_attached_work,
+ HZ);
+ }
+ if (ch_stat & USB_PW_CONN) {
+ if (is_ab8500(di->parent))
+ queue_delayed_work(di->charger_wq,
+ &di->usb_charger_attached_work,
+ HZ);
+ di->vbus_detected = true;
+ di->vbus_detected_start = true;
+ queue_work(di->charger_wq,
+ &di->detect_usb_type_work);
+ }
+
+ ret = component_bind_all(dev, di);
+ if (ret) {
+ dev_err(dev, "can't bind component devices\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void ab8500_charger_unbind(struct device *dev)
+{
+ struct ab8500_charger *di = dev_get_drvdata(dev);
+ int ret;
/* Disable AC charging */
ab8500_charger_ac_en(&di->ac_chg, false, 0, 0);
/* Disable USB charging */
ab8500_charger_usb_en(&di->usb_chg, false, 0, 0);
- /* Disable interrupts */
- for (i = 0; i < ARRAY_SIZE(ab8500_charger_irq); i++) {
- irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
- free_irq(irq, di);
- }
-
/* Backup battery voltage and current disable */
ret = abx500_mask_and_set_register_interruptible(di->dev,
AB8500_RTC, AB8500_RTC_CTRL_REG, RTC_BUP_CH_ENA, 0);
if (ret < 0)
dev_err(di->dev, "%s mask and set failed\n", __func__);
- usb_unregister_notifier(di->usb_phy, &di->nb);
- usb_put_phy(di->usb_phy);
-
/* Delete the work queue */
destroy_workqueue(di->charger_wq);
- /* Unregister external charger enable notifier */
- if (!di->ac_chg.enabled)
- blocking_notifier_chain_unregister(
- &charger_notifier_list, &charger_nb);
-
flush_scheduled_work();
- if (di->usb_chg.enabled)
- power_supply_unregister(di->usb_chg.psy);
- if (di->ac_chg.enabled && !di->ac_chg.external)
- power_supply_unregister(di->ac_chg.psy);
-
- return 0;
+ /* Unbind fg, btemp, algorithm */
+ component_unbind_all(dev, di);
}
-static char *supply_interface[] = {
- "ab8500_chargalg",
- "ab8500_fg",
- "ab8500_btemp",
+static const struct component_master_ops ab8500_charger_comp_ops = {
+ .bind = ab8500_charger_bind,
+ .unbind = ab8500_charger_unbind,
};
-static const struct power_supply_desc ab8500_ac_chg_desc = {
- .name = "ab8500_ac",
- .type = POWER_SUPPLY_TYPE_MAINS,
- .properties = ab8500_charger_ac_props,
- .num_properties = ARRAY_SIZE(ab8500_charger_ac_props),
- .get_property = ab8500_charger_ac_get_property,
+static struct platform_driver *const ab8500_charger_component_drivers[] = {
+ &ab8500_fg_driver,
+ &ab8500_btemp_driver,
+ &abx500_chargalg_driver,
};
-static const struct power_supply_desc ab8500_usb_chg_desc = {
- .name = "ab8500_usb",
- .type = POWER_SUPPLY_TYPE_USB,
- .properties = ab8500_charger_usb_props,
- .num_properties = ARRAY_SIZE(ab8500_charger_usb_props),
- .get_property = ab8500_charger_usb_get_property,
-};
+static int ab8500_charger_compare_dev(struct device *dev, void *data)
+{
+ return dev == data;
+}
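For orientation, the componentization handshake adopted by this driver in its smallest form: each subdriver publishes its device as a component from probe, while the aggregate driver collects matches and registers a master whose bind() runs once every component has appeared. The component_* calls are the real API; sub_component_ops, master_ops, compare_dev and target are hypothetical:

/* subdriver side: publish the device as a component */
static int sub_probe(struct platform_device *pdev)
{
        return component_add(&pdev->dev, &sub_component_ops);
}

/* aggregate side: collect the components, then register the master;
 * master_ops.bind() is only called once all matches have registered
 */
static int master_probe(struct platform_device *pdev)
{
        struct component_match *match = NULL;

        component_match_add(&pdev->dev, &match, compare_dev, target);
        return component_master_add_with_match(&pdev->dev,
                                               &master_ops, match);
}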
static int ab8500_charger_probe(struct platform_device *pdev)
{
- struct device_node *np = pdev->dev.of_node;
+ struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
+ struct component_match *match = NULL;
struct power_supply_config ac_psy_cfg = {}, usb_psy_cfg = {};
struct ab8500_charger *di;
- int irq, i, charger_status, ret = 0, ch_stat;
- struct device *dev = &pdev->dev;
+ int charger_status;
+ int i, irq;
+ int ret;
di = devm_kzalloc(dev, sizeof(*di), GFP_KERNEL);
if (!di)
return -ENOMEM;
}
+ /*
+ * VDD ADC supply needs to be enabled from this driver when there
+ * is a charger connected to avoid erroneous BTEMP_HIGH/LOW
+ * interrupts during charging
+ */
+ di->regu = devm_regulator_get(dev, "vddadc");
+ if (IS_ERR(di->regu)) {
+ ret = PTR_ERR(di->regu);
+ dev_err(dev, "failed to get vddadc regulator\n");
+ return ret;
+ }
+
+ /* Request interrupts */
+ for (i = 0; i < ARRAY_SIZE(ab8500_charger_irq); i++) {
+ irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
+ if (irq < 0)
+ return irq;
+
+ ret = devm_request_threaded_irq(dev,
+ irq, NULL, ab8500_charger_irq[i].isr,
+ IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
+ ab8500_charger_irq[i].name, di);
+
+ if (ret != 0) {
+ dev_err(dev, "failed to request %s IRQ %d: %d\n"
+ , ab8500_charger_irq[i].name, irq, ret);
+ return ret;
+ }
+ dev_dbg(dev, "Requested %s IRQ %d: %d\n",
+ ab8500_charger_irq[i].name, irq, ret);
+ }
+
/* initialize lock */
spin_lock_init(&di->usb_state.usb_lock);
mutex_init(&di->usb_ipt_crnt_lock);
di->ac_chg.max_out_curr =
di->bm->chg_output_curr[di->bm->n_chg_out_curr - 1];
di->ac_chg.wdt_refresh = CHG_WD_INTERVAL;
- di->ac_chg.enabled = di->bm->ac_enabled;
+ /*
+ * The AB8505 only supports USB charging. If we are not the
+ * AB8505, register an AC charger.
+ *
+ * TODO: if this should be opt-in, add DT properties for this.
+ */
+ if (!is_ab8505(di->parent))
+ di->ac_chg.enabled = true;
di->ac_chg.external = false;
- /*notifier for external charger enabling*/
- if (!di->ac_chg.enabled)
- blocking_notifier_chain_register(
- &charger_notifier_list, &charger_nb);
-
/* USB supply */
/* ux500_charger sub-class */
di->usb_chg.ops.enable = &ab8500_charger_usb_en;
di->usb_chg.max_out_curr =
di->bm->chg_output_curr[di->bm->n_chg_out_curr - 1];
di->usb_chg.wdt_refresh = CHG_WD_INTERVAL;
- di->usb_chg.enabled = di->bm->usb_enabled;
di->usb_chg.external = false;
di->usb_state.usb_current = -1;
- /* Create a work queue for the charger */
- di->charger_wq = alloc_ordered_workqueue("ab8500_charger_wq",
- WQ_MEM_RECLAIM);
- if (di->charger_wq == NULL) {
- dev_err(dev, "failed to create work queue\n");
- return -ENOMEM;
- }
-
mutex_init(&di->charger_attached_mutex);
/* Init work for HW failure check */
INIT_WORK(&di->check_usb_thermal_prot_work,
ab8500_charger_check_usb_thermal_prot_work);
- /*
- * VDD ADC supply needs to be enabled from this driver when there
- * is a charger connected to avoid erroneous BTEMP_HIGH/LOW
- * interrupts during charging
- */
- di->regu = devm_regulator_get(dev, "vddadc");
- if (IS_ERR(di->regu)) {
- ret = PTR_ERR(di->regu);
- dev_err(dev, "failed to get vddadc regulator\n");
- goto free_charger_wq;
- }
-
/* Initialize OVV, and other registers */
ret = ab8500_charger_init_hw_registers(di);
if (ret) {
dev_err(dev, "failed to initialize ABB registers\n");
- goto free_charger_wq;
+ return ret;
}
/* Register AC charger class */
if (di->ac_chg.enabled) {
- di->ac_chg.psy = power_supply_register(dev,
+ di->ac_chg.psy = devm_power_supply_register(dev,
&ab8500_ac_chg_desc,
&ac_psy_cfg);
if (IS_ERR(di->ac_chg.psy)) {
dev_err(dev, "failed to register AC charger\n");
- ret = PTR_ERR(di->ac_chg.psy);
- goto free_charger_wq;
+ return PTR_ERR(di->ac_chg.psy);
}
}
/* Register USB charger class */
- if (di->usb_chg.enabled) {
- di->usb_chg.psy = power_supply_register(dev,
- &ab8500_usb_chg_desc,
- &usb_psy_cfg);
- if (IS_ERR(di->usb_chg.psy)) {
- dev_err(dev, "failed to register USB charger\n");
- ret = PTR_ERR(di->usb_chg.psy);
- goto free_ac;
- }
- }
-
- di->usb_phy = usb_get_phy(USB_PHY_TYPE_USB2);
- if (IS_ERR_OR_NULL(di->usb_phy)) {
- dev_err(dev, "failed to get usb transceiver\n");
- ret = -EINVAL;
- goto free_usb;
- }
- di->nb.notifier_call = ab8500_charger_usb_notifier_call;
- ret = usb_register_notifier(di->usb_phy, &di->nb);
- if (ret) {
- dev_err(dev, "failed to register usb notifier\n");
- goto put_usb_phy;
+ di->usb_chg.psy = devm_power_supply_register(dev,
+ &ab8500_usb_chg_desc,
+ &usb_psy_cfg);
+ if (IS_ERR(di->usb_chg.psy)) {
+ dev_err(dev, "failed to register USB charger\n");
+ return PTR_ERR(di->usb_chg.psy);
}
/* Identify the connected charger types during startup */
sysfs_notify(&di->ac_chg.psy->dev.kobj, NULL, "present");
}
- if (charger_status & USB_PW_CONN) {
- di->vbus_detected = true;
- di->vbus_detected_start = true;
- queue_work(di->charger_wq,
- &di->detect_usb_type_work);
- }
-
- /* Register interrupts */
- for (i = 0; i < ARRAY_SIZE(ab8500_charger_irq); i++) {
- irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
- if (irq < 0) {
- ret = irq;
- goto free_irq;
- }
+ platform_set_drvdata(pdev, di);
- ret = request_threaded_irq(irq, NULL, ab8500_charger_irq[i].isr,
- IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
- ab8500_charger_irq[i].name, di);
+ /* Build a component match entry for each device bound to a subdriver */
+ for (i = 0; i < ARRAY_SIZE(ab8500_charger_component_drivers); i++) {
+ struct device_driver *drv = &ab8500_charger_component_drivers[i]->driver;
+ struct device *p = NULL, *d;
- if (ret != 0) {
- dev_err(dev, "failed to request %s IRQ %d: %d\n"
- , ab8500_charger_irq[i].name, irq, ret);
- goto free_irq;
+ while ((d = platform_find_device_by_driver(p, drv))) {
+ put_device(p);
+ component_match_add(dev, &match,
+ ab8500_charger_compare_dev, d);
+ p = d;
}
- dev_dbg(dev, "Requested %s IRQ %d: %d\n",
- ab8500_charger_irq[i].name, irq, ret);
+ put_device(p);
+ }
+ if (!match) {
+ dev_err(dev, "no matching components\n");
+ return -ENODEV;
+ }
+ if (IS_ERR(match)) {
+ dev_err(dev, "could not create component match\n");
+ return PTR_ERR(match);
}
- platform_set_drvdata(pdev, di);
-
- mutex_lock(&di->charger_attached_mutex);
+ /* Notifier for external charger enabling */
+ if (!di->ac_chg.enabled)
+ blocking_notifier_chain_register(
+ &charger_notifier_list, &charger_nb);
- ch_stat = ab8500_charger_detect_chargers(di, false);
- if ((ch_stat & AC_PW_CONN) == AC_PW_CONN) {
- if (is_ab8500(di->parent))
- queue_delayed_work(di->charger_wq,
- &di->ac_charger_attached_work,
- HZ);
+ di->usb_phy = usb_get_phy(USB_PHY_TYPE_USB2);
+ if (IS_ERR_OR_NULL(di->usb_phy)) {
+ dev_err(dev, "failed to get usb transceiver\n");
+ ret = -EINVAL;
+ goto out_charger_notifier;
}
- if ((ch_stat & USB_PW_CONN) == USB_PW_CONN) {
- if (is_ab8500(di->parent))
- queue_delayed_work(di->charger_wq,
- &di->usb_charger_attached_work,
- HZ);
+ di->nb.notifier_call = ab8500_charger_usb_notifier_call;
+ ret = usb_register_notifier(di->usb_phy, &di->nb);
+ if (ret) {
+ dev_err(dev, "failed to register usb notifier\n");
+ goto put_usb_phy;
}
- mutex_unlock(&di->charger_attached_mutex);
- return ret;
+ ret = component_master_add_with_match(&pdev->dev,
+ &ab8500_charger_comp_ops,
+ match);
+ if (ret) {
+ dev_err(dev, "failed to add component master\n");
+ goto free_notifier;
+ }
-free_irq:
- usb_unregister_notifier(di->usb_phy, &di->nb);
+ return 0;
- /* We also have to free all successfully registered irqs */
- for (i = i - 1; i >= 0; i--) {
- irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
- free_irq(irq, di);
- }
+free_notifier:
+ usb_unregister_notifier(di->usb_phy, &di->nb);
put_usb_phy:
usb_put_phy(di->usb_phy);
-free_usb:
- if (di->usb_chg.enabled)
- power_supply_unregister(di->usb_chg.psy);
-free_ac:
- if (di->ac_chg.enabled)
- power_supply_unregister(di->ac_chg.psy);
-free_charger_wq:
- destroy_workqueue(di->charger_wq);
+out_charger_notifier:
+ if (!di->ac_chg.enabled)
+ blocking_notifier_chain_unregister(
+ &charger_notifier_list, &charger_nb);
return ret;
}
+static int ab8500_charger_remove(struct platform_device *pdev)
+{
+ struct ab8500_charger *di = platform_get_drvdata(pdev);
+
+ component_master_del(&pdev->dev, &ab8500_charger_comp_ops);
+
+ usb_unregister_notifier(di->usb_phy, &di->nb);
+ usb_put_phy(di->usb_phy);
+ if (!di->ac_chg.enabled)
+ blocking_notifier_chain_unregister(
+ &charger_notifier_list, &charger_nb);
+
+ return 0;
+}
+
static SIMPLE_DEV_PM_OPS(ab8500_charger_pm_ops, ab8500_charger_suspend, ab8500_charger_resume);
static const struct of_device_id ab8500_charger_match[] = {
{ .compatible = "stericsson,ab8500-charger", },
{ },
};
+MODULE_DEVICE_TABLE(of, ab8500_charger_match);
static struct platform_driver ab8500_charger_driver = {
.probe = ab8500_charger_probe,
static int __init ab8500_charger_init(void)
{
+ int ret;
+
+ ret = platform_register_drivers(ab8500_charger_component_drivers,
+ ARRAY_SIZE(ab8500_charger_component_drivers));
+ if (ret)
+ return ret;
+
return platform_driver_register(&ab8500_charger_driver);
}
static void __exit ab8500_charger_exit(void)
{
+ platform_unregister_drivers(ab8500_charger_component_drivers,
+ ARRAY_SIZE(ab8500_charger_component_drivers));
platform_driver_unregister(&ab8500_charger_driver);
}
-subsys_initcall_sync(ab8500_charger_init);
+module_init(ab8500_charger_init);
module_exit(ab8500_charger_exit);
MODULE_LICENSE("GPL v2");
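The rework above turns the AB8500 charger into a component master: rather than relying on init-call ordering, probe() builds a component match for every subdriver device and the component framework defers final binding until all of them have probed. A minimal sketch of the two halves of that pattern, using hypothetical foo (master) and bar (component) names rather than anything from this patch:

	#include <linux/component.h>
	#include <linux/platform_device.h>

	/* Component side: each subdevice registers its ops from probe() */
	static int bar_bind(struct device *dev, struct device *master, void *data)
	{
		/* set up resources that need all siblings to be probed */
		return 0;
	}

	static void bar_unbind(struct device *dev, struct device *master, void *data)
	{
		/* tear down whatever bind() created */
	}

	static const struct component_ops bar_ops = {
		.bind = bar_bind,
		.unbind = bar_unbind,
	};

	static int bar_probe(struct platform_device *pdev)
	{
		return component_add(&pdev->dev, &bar_ops);
	}

	/* Master side: called once every matched component has probed */
	static int foo_master_bind(struct device *dev)
	{
		return component_bind_all(dev, NULL);
	}

	static void foo_master_unbind(struct device *dev)
	{
		component_unbind_all(dev, NULL);
	}

	static const struct component_master_ops foo_master_ops = {
		.bind = foo_master_bind,
		.unbind = foo_master_unbind,
	};

component_master_add_with_match() then ties the two sides together, exactly as ab8500_charger_probe() does above; the fg and chargalg hunks below show the component side.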
#include <linux/init.h>
#include <linux/module.h>
+#include <linux/component.h>
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
((y1) + ((((y2) - (y1)) * ((x) - (x1))) / ((x2) - (x1))));
/**
- * struct ab8500_fg_interrupts - ab8500 fg interupts
+ * struct ab8500_fg_interrupts - ab8500 fg interrupts
* @name: name of the interrupt
* @isr function pointer to the isr
*/
return 0;
}
-static int ab8500_fg_remove(struct platform_device *pdev)
-{
- int ret = 0;
- struct ab8500_fg *di = platform_get_drvdata(pdev);
-
- list_del(&di->node);
-
- /* Disable coulomb counter */
- ret = ab8500_fg_coulomb_counter(di, false);
- if (ret)
- dev_err(di->dev, "failed to disable coulomb counter\n");
-
- destroy_workqueue(di->fg_wq);
- ab8500_fg_sysfs_exit(di);
-
- flush_scheduled_work();
- ab8500_fg_sysfs_psy_remove_attrs(di);
- power_supply_unregister(di->fg_psy);
- return ret;
-}
-
/* ab8500 fg driver interrupts and their respective isr */
static struct ab8500_fg_interrupts ab8500_fg_irq[] = {
{"NCONV_ACCU", ab8500_fg_cc_convend_handler},
.external_power_changed = ab8500_fg_external_power_changed,
};
+static int ab8500_fg_bind(struct device *dev, struct device *master,
+ void *data)
+{
+ struct ab8500_fg *di = dev_get_drvdata(dev);
+
+ /* Create a work queue for running the FG algorithm */
+ di->fg_wq = alloc_ordered_workqueue("ab8500_fg_wq", WQ_MEM_RECLAIM);
+ if (di->fg_wq == NULL) {
+ dev_err(dev, "failed to create work queue\n");
+ return -ENOMEM;
+ }
+
+ /* Start the coulomb counter */
+ ab8500_fg_coulomb_counter(di, true);
+ /* Run the FG algorithm */
+ queue_delayed_work(di->fg_wq, &di->fg_periodic_work, 0);
+
+ return 0;
+}
+
+static void ab8500_fg_unbind(struct device *dev, struct device *master,
+ void *data)
+{
+ struct ab8500_fg *di = dev_get_drvdata(dev);
+ int ret;
+
+ /* Disable coulomb counter */
+ ret = ab8500_fg_coulomb_counter(di, false);
+ if (ret)
+ dev_err(dev, "failed to disable coulomb counter\n");
+
+ destroy_workqueue(di->fg_wq);
+ flush_scheduled_work();
+}
+
+static const struct component_ops ab8500_fg_component_ops = {
+ .bind = ab8500_fg_bind,
+ .unbind = ab8500_fg_unbind,
+};
+
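Note the split this creates: bind() owns the resources that only make sense once the whole charger assembly is up (the ordered workqueue and the periodic work), while probe() keeps only devm-managed setup. A self-contained sketch of the ordered-workqueue pattern used here, with illustrative names not taken from the driver:

	#include <linux/workqueue.h>

	struct algo {
		struct workqueue_struct *wq;
		struct delayed_work dwork;
	};

	static void algo_step(struct work_struct *work)
	{
		struct algo *a = container_of(work, struct algo, dwork.work);

		/* ... run one pass of the algorithm ... */
		queue_delayed_work(a->wq, &a->dwork, HZ);	/* re-arm */
	}

	static int algo_start(struct algo *a)
	{
		/*
		 * Ordered: at most one item runs at a time, in queue order,
		 * so successive steps never race with each other.
		 * WQ_MEM_RECLAIM guarantees a rescuer thread, so the work
		 * can still make progress under memory pressure.
		 */
		a->wq = alloc_ordered_workqueue("algo_wq", WQ_MEM_RECLAIM);
		if (!a->wq)
			return -ENOMEM;
		INIT_DELAYED_WORK(&a->dwork, algo_step);
		queue_delayed_work(a->wq, &a->dwork, 0);
		return 0;
	}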
static int ab8500_fg_probe(struct platform_device *pdev)
{
- struct device_node *np = pdev->dev.of_node;
- struct power_supply_config psy_cfg = {};
struct device *dev = &pdev->dev;
+ struct power_supply_config psy_cfg = {};
struct ab8500_fg *di;
int i, irq;
int ret = 0;
di->bm = &ab8500_bm_data;
- ret = ab8500_bm_of_probe(dev, np, di->bm);
- if (ret) {
- dev_err(dev, "failed to get battery information\n");
- return ret;
- }
-
mutex_init(&di->cc_lock);
/* get parent data */
ab8500_fg_charge_state_to(di, AB8500_FG_CHARGE_INIT);
ab8500_fg_discharge_state_to(di, AB8500_FG_DISCHARGE_INIT);
- /* Create a work queue for running the FG algorithm */
- di->fg_wq = alloc_ordered_workqueue("ab8500_fg_wq", WQ_MEM_RECLAIM);
- if (di->fg_wq == NULL) {
- dev_err(dev, "failed to create work queue\n");
- return -ENOMEM;
- }
-
/* Init work for running the fg algorithm instantly */
INIT_WORK(&di->fg_work, ab8500_fg_instant_work);
ret = ab8500_fg_init_hw_registers(di);
if (ret) {
dev_err(dev, "failed to initialize registers\n");
- goto free_inst_curr_wq;
+ return ret;
}
/* Consider battery unknown until we're informed otherwise */
di->flags.batt_id_received = false;
/* Register FG power supply class */
- di->fg_psy = power_supply_register(dev, &ab8500_fg_desc, &psy_cfg);
+ di->fg_psy = devm_power_supply_register(dev, &ab8500_fg_desc, &psy_cfg);
if (IS_ERR(di->fg_psy)) {
dev_err(dev, "failed to register FG psy\n");
- ret = PTR_ERR(di->fg_psy);
- goto free_inst_curr_wq;
+ return PTR_ERR(di->fg_psy);
}
di->fg_samples = SEC_TO_SAMPLE(di->bm->fg_params->init_timer);
- ab8500_fg_coulomb_counter(di, true);
/*
* Initialize completion used to notify completion and start
/* Register primary interrupt handlers */
for (i = 0; i < ARRAY_SIZE(ab8500_fg_irq); i++) {
irq = platform_get_irq_byname(pdev, ab8500_fg_irq[i].name);
- if (irq < 0) {
- ret = irq;
- goto free_irq;
- }
+ if (irq < 0)
+ return irq;
- ret = request_threaded_irq(irq, NULL, ab8500_fg_irq[i].isr,
+ ret = devm_request_threaded_irq(dev, irq, NULL,
+ ab8500_fg_irq[i].isr,
IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
ab8500_fg_irq[i].name, di);
if (ret != 0) {
dev_err(dev, "failed to request %s IRQ %d: %d\n",
ab8500_fg_irq[i].name, irq, ret);
- goto free_irq;
+ return ret;
}
dev_dbg(dev, "Requested %s IRQ %d: %d\n",
ab8500_fg_irq[i].name, irq, ret);
ret = ab8500_fg_sysfs_init(di);
if (ret) {
dev_err(dev, "failed to create sysfs entry\n");
- goto free_irq;
+ return ret;
}
ret = ab8500_fg_sysfs_psy_create_attrs(di);
if (ret) {
dev_err(dev, "failed to create FG psy\n");
ab8500_fg_sysfs_exit(di);
- goto free_irq;
+ return ret;
}
/* Calibrate the fg first time */
/* Use room temp as default value until we get an update from driver. */
di->bat_temp = 210;
- /* Run the FG algorithm */
- queue_delayed_work(di->fg_wq, &di->fg_periodic_work, 0);
-
list_add_tail(&di->node, &ab8500_fg_list);
- return ret;
+ return component_add(dev, &ab8500_fg_component_ops);
+}
-free_irq:
- /* We also have to free all registered irqs */
- while (--i >= 0) {
- /* Last assignment of i from primary interrupt handlers */
- irq = platform_get_irq_byname(pdev, ab8500_fg_irq[i].name);
- free_irq(irq, di);
- }
+static int ab8500_fg_remove(struct platform_device *pdev)
+{
+ int ret = 0;
+ struct ab8500_fg *di = platform_get_drvdata(pdev);
+
+ component_del(&pdev->dev, &ab8500_fg_component_ops);
+ list_del(&di->node);
+ ab8500_fg_sysfs_exit(di);
+ ab8500_fg_sysfs_psy_remove_attrs(di);
- power_supply_unregister(di->fg_psy);
-free_inst_curr_wq:
- destroy_workqueue(di->fg_wq);
return ret;
}
{ .compatible = "stericsson,ab8500-fg", },
{ },
};
+MODULE_DEVICE_TABLE(of, ab8500_fg_match);
-static struct platform_driver ab8500_fg_driver = {
+struct platform_driver ab8500_fg_driver = {
.probe = ab8500_fg_probe,
.remove = ab8500_fg_remove,
.driver = {
.pm = &ab8500_fg_pm_ops,
},
};
-
-static int __init ab8500_fg_init(void)
-{
- return platform_driver_register(&ab8500_fg_driver);
-}
-
-static void __exit ab8500_fg_exit(void)
-{
- platform_driver_unregister(&ab8500_fg_driver);
-}
-
-subsys_initcall_sync(ab8500_fg_init);
-module_exit(ab8500_fg_exit);
-
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Johan Palsson, Karl Komierowski");
MODULE_ALIAS("platform:ab8500-fg");
#include <linux/init.h>
#include <linux/module.h>
#include <linux/device.h>
+#include <linux/component.h>
#include <linux/hrtimer.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
return 0;
}
-static int abx500_chargalg_remove(struct platform_device *pdev)
+static char *supply_interface[] = {
+ "ab8500_fg",
+};
+
+static const struct power_supply_desc abx500_chargalg_desc = {
+ .name = "abx500_chargalg",
+ .type = POWER_SUPPLY_TYPE_BATTERY,
+ .properties = abx500_chargalg_props,
+ .num_properties = ARRAY_SIZE(abx500_chargalg_props),
+ .get_property = abx500_chargalg_get_property,
+ .external_power_changed = abx500_chargalg_external_power_changed,
+};
+
+static int abx500_chargalg_bind(struct device *dev, struct device *master,
+ void *data)
{
- struct abx500_chargalg *di = platform_get_drvdata(pdev);
+ struct abx500_chargalg *di = dev_get_drvdata(dev);
- /* sysfs interface to enable/disbale charging from user space */
- abx500_chargalg_sysfs_exit(di);
+ /* Create a work queue for the chargalg */
+ di->chargalg_wq = alloc_ordered_workqueue("abx500_chargalg_wq",
+ WQ_MEM_RECLAIM);
+ if (di->chargalg_wq == NULL) {
+ dev_err(di->dev, "failed to create work queue\n");
+ return -ENOMEM;
+ }
+
+ /* Run the charging algorithm */
+ queue_delayed_work(di->chargalg_wq, &di->chargalg_periodic_work, 0);
+ return 0;
+}
+
+static void abx500_chargalg_unbind(struct device *dev, struct device *master,
+ void *data)
+{
+ struct abx500_chargalg *di = dev_get_drvdata(dev);
+
+ /* Stop all timers and work */
hrtimer_cancel(&di->safety_timer);
hrtimer_cancel(&di->maintenance_timer);
/* Delete the work queue */
destroy_workqueue(di->chargalg_wq);
-
- power_supply_unregister(di->chargalg_psy);
-
- return 0;
+ flush_scheduled_work();
}
-static char *supply_interface[] = {
- "ab8500_fg",
-};
-
-static const struct power_supply_desc abx500_chargalg_desc = {
- .name = "abx500_chargalg",
- .type = POWER_SUPPLY_TYPE_BATTERY,
- .properties = abx500_chargalg_props,
- .num_properties = ARRAY_SIZE(abx500_chargalg_props),
- .get_property = abx500_chargalg_get_property,
- .external_power_changed = abx500_chargalg_external_power_changed,
+static const struct component_ops abx500_chargalg_component_ops = {
+ .bind = abx500_chargalg_bind,
+ .unbind = abx500_chargalg_unbind,
};
static int abx500_chargalg_probe(struct platform_device *pdev)
{
- struct device_node *np = pdev->dev.of_node;
+ struct device *dev = &pdev->dev;
struct power_supply_config psy_cfg = {};
struct abx500_chargalg *di;
int ret = 0;
- di = devm_kzalloc(&pdev->dev, sizeof(*di), GFP_KERNEL);
- if (!di) {
- dev_err(&pdev->dev, "%s no mem for ab8500_chargalg\n", __func__);
+ di = devm_kzalloc(dev, sizeof(*di), GFP_KERNEL);
+ if (!di)
return -ENOMEM;
- }
di->bm = &ab8500_bm_data;
- ret = ab8500_bm_of_probe(&pdev->dev, np, di->bm);
- if (ret) {
- dev_err(&pdev->dev, "failed to get battery information\n");
- return ret;
- }
-
/* get device struct and parent */
- di->dev = &pdev->dev;
+ di->dev = dev;
di->parent = dev_get_drvdata(pdev->dev.parent);
psy_cfg.supplied_to = supply_interface;
di->maintenance_timer.function =
abx500_chargalg_maintenance_timer_expired;
- /* Create a work queue for the chargalg */
- di->chargalg_wq = alloc_ordered_workqueue("abx500_chargalg_wq",
- WQ_MEM_RECLAIM);
- if (di->chargalg_wq == NULL) {
- dev_err(di->dev, "failed to create work queue\n");
- return -ENOMEM;
- }
-
/* Init work for chargalg */
INIT_DEFERRABLE_WORK(&di->chargalg_periodic_work,
abx500_chargalg_periodic_work);
di->chg_info.prev_conn_chg = -1;
/* Register chargalg power supply class */
- di->chargalg_psy = power_supply_register(di->dev, &abx500_chargalg_desc,
+ di->chargalg_psy = devm_power_supply_register(di->dev,
+ &abx500_chargalg_desc,
&psy_cfg);
if (IS_ERR(di->chargalg_psy)) {
dev_err(di->dev, "failed to register chargalg psy\n");
- ret = PTR_ERR(di->chargalg_psy);
- goto free_chargalg_wq;
+ return PTR_ERR(di->chargalg_psy);
}
platform_set_drvdata(pdev, di);
ret = abx500_chargalg_sysfs_init(di);
if (ret) {
dev_err(di->dev, "failed to create sysfs entry\n");
- goto free_psy;
+ return ret;
}
di->curr_status.curr_step = CHARGALG_CURR_STEP_HIGH;
- /* Run the charging algorithm */
- queue_delayed_work(di->chargalg_wq, &di->chargalg_periodic_work, 0);
-
dev_info(di->dev, "probe success\n");
- return ret;
+ return component_add(dev, &abx500_chargalg_component_ops);
+}
-free_psy:
- power_supply_unregister(di->chargalg_psy);
-free_chargalg_wq:
- destroy_workqueue(di->chargalg_wq);
- return ret;
+static int abx500_chargalg_remove(struct platform_device *pdev)
+{
+ struct abx500_chargalg *di = platform_get_drvdata(pdev);
+
+ component_del(&pdev->dev, &abx500_chargalg_component_ops);
+
+ /* sysfs interface to enable/disable charging from user space */
+ abx500_chargalg_sysfs_exit(di);
+
+ return 0;
}
static SIMPLE_DEV_PM_OPS(abx500_chargalg_pm_ops, abx500_chargalg_suspend, abx500_chargalg_resume);
{ },
};
-static struct platform_driver abx500_chargalg_driver = {
+struct platform_driver abx500_chargalg_driver = {
.probe = abx500_chargalg_probe,
.remove = abx500_chargalg_remove,
.driver = {
.pm = &abx500_chargalg_pm_ops,
},
};
-
-module_platform_driver(abx500_chargalg_driver);
-
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Johan Palsson, Karl Komierowski");
MODULE_ALIAS("platform:abx500-chargalg");
#define AXP209_FG_PERCENT GENMASK(6, 0)
#define AXP22X_FG_VALID BIT(7)
+#define AXP20X_CHRG_CTRL1_ENABLE BIT(7)
#define AXP20X_CHRG_CTRL1_TGT_VOLT GENMASK(6, 5)
#define AXP20X_CHRG_CTRL1_TGT_4_1V (0 << 5)
#define AXP20X_CHRG_CTRL1_TGT_4_15V (1 << 5)
case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX:
return axp20x_set_max_constant_charge_current(axp20x_batt,
val->intval);
-
+ case POWER_SUPPLY_PROP_STATUS:
+ switch (val->intval) {
+ case POWER_SUPPLY_STATUS_CHARGING:
+ return regmap_update_bits(axp20x_batt->regmap, AXP20X_CHRG_CTRL1,
+ AXP20X_CHRG_CTRL1_ENABLE, AXP20X_CHRG_CTRL1_ENABLE);
+
+ case POWER_SUPPLY_STATUS_DISCHARGING:
+ case POWER_SUPPLY_STATUS_NOT_CHARGING:
+ return regmap_update_bits(axp20x_batt->regmap, AXP20X_CHRG_CTRL1,
+ AXP20X_CHRG_CTRL1_ENABLE, 0);
+ }
+ fallthrough;
default:
return -EINVAL;
}
static int axp20x_battery_prop_writeable(struct power_supply *psy,
enum power_supply_property psp)
{
- return psp == POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN ||
+ return psp == POWER_SUPPLY_PROP_STATUS ||
+ psp == POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN ||
psp == POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN ||
psp == POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT ||
psp == POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX;
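With POWER_SUPPLY_PROP_STATUS writable, userspace can suspend and resume charging through sysfs; the power-supply core maps the status strings ("Charging", "Discharging", "Not charging", ...) back to POWER_SUPPLY_STATUS_* values before calling set_property(). A minimal userspace sketch, assuming the supply registers under the name axp20x-battery (check /sys/class/power_supply/ on the target):

	#include <stdio.h>

	int main(void)
	{
		/* Path is an assumption; the psy name comes from the driver */
		FILE *f = fopen("/sys/class/power_supply/axp20x-battery/status", "w");

		if (!f)
			return 1;
		/* "Not charging" or "Discharging" disables the charger,
		 * "Charging" enables it again, per the switch above.
		 */
		fputs("Not charging", f);
		return fclose(f) ? 1 : 0;
	}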
for (i = 0; i < NR_RETRY_CNT; i++) {
ret = regmap_read(info->regmap, reg, &val);
- if (ret == -EBUSY)
- continue;
- else
+ if (ret != -EBUSY)
break;
}
* detection reports one despite it not being there.
* Please keep this listed sorted alphabetically.
*/
-static const struct dmi_system_id axp288_fuel_gauge_blacklist[] = {
+static const struct dmi_system_id axp288_no_battery_list[] = {
{
/* ACEPC T8 Cherry Trail Z8350 mini PC */
.matches = {
DMI_MATCH(DMI_PRODUCT_NAME, "MEEGOPAD T02"),
},
},
- {
- /* Meegopad T08 */
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Default string"),
- DMI_MATCH(DMI_BOARD_VENDOR, "To be filled by OEM."),
- DMI_MATCH(DMI_BOARD_NAME, "T3 MRD"),
- DMI_MATCH(DMI_BOARD_VERSION, "V1.1"),
- },
- },
{ /* Mele PCG03 Mini PC */
.matches = {
DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Mini PC"),
DMI_MATCH(DMI_PRODUCT_NAME, "Z83-4"),
}
},
+ {
+ /* Various Ace PC/Meegopad/MinisForum/Wintel Mini-PCs/HDMI-sticks */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "T3 MRD"),
+ DMI_MATCH(DMI_CHASSIS_TYPE, "3"),
+ DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
+ DMI_MATCH(DMI_BIOS_VERSION, "5.11"),
+ },
+ },
{}
};
};
unsigned int val;
- if (dmi_check_system(axp288_fuel_gauge_blacklist))
+ if (dmi_check_system(axp288_no_battery_list))
return -ENODEV;
/*
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0-or-later
-//
-// Copyright (C) 2018 ROHM Semiconductors
-//
-// power-supply driver for ROHM BD70528 PMIC
-
-/*
- * BD70528 charger HW state machine.
- *
- * The thermal shutdown state is not drawn. From any other state but
- * battery error and suspend it is possible to go to TSD/TMP states
- * if temperature is out of bounds.
- *
- * CHG_RST = H
- * or CHG_EN=L
- * or (DCIN2_UVLO=L && DCIN1_UVLO=L)
- * or (DCIN2_OVLO=H & DCIN1_UVKLO=L)
- *
- * +--------------+ +--------------+
- * | | | |
- * | Any state +-------> | Suspend |
- * | | | |
- * +--------------+ +------+-------+
- * |
- * CHG_EN = H && BAT_DET = H && |
- * No errors (temp, bat_ov, UVLO, |
- * OVLO...) |
- * |
- * BAT_OV or +---------v----------+
- * (DBAT && TTRI) | |
- * +-----------------+ Trickle Charge | <---------------+
- * | | | |
- * | +-------+------------+ |
- * | | |
- * | | ^ |
- * | V_BAT > VTRI_TH | | VBAT < VTRI_TH - 50mV |
- * | | | |
- * | v | |
- * | | |
- * | BAT_OV or +----------+----+ |
- * | (DBAT && TFST) | | |
- * | +----------------+ Fast Charge | |
- * | | | | |
- * v v +----+----------+ |
- * | |
- *+----------------+ ILIM_DET=L | ^ ILIM_DET |
- *| | & CV_DET=H | | or CV_DET=L |
- *| Battery Error | & VBAT > | | or VBAT < VRECHG_TH |
- *| | VRECHG_TH | | or IBAT > IFST/x |
- *+----------------+ & IBAT < | | |
- * IFST/x v | |
- * ^ | |
- * | +---------+-+ |
- * | | | |
- * +-------------------+ Top OFF | |
- * BAT_OV = H or | | |
- * (DBAT && TFST) +-----+-----+ |
- * | |
- * Stay top-off for 15s | |
- * v |
- * |
- * +--------+ |
- * | | |
- * | Done +-------------------------+
- * | |
- * +--------+ VBAT < VRECHG_TH
- */
-
-#include <linux/kernel.h>
-#include <linux/interrupt.h>
-#include <linux/mfd/rohm-bd70528.h>
-#include <linux/module.h>
-#include <linux/platform_device.h>
-#include <linux/power_supply.h>
-#include <linux/linear_range.h>
-
-#define CHG_STAT_SUSPEND 0x0
-#define CHG_STAT_TRICKLE 0x1
-#define CHG_STAT_FAST 0x3
-#define CHG_STAT_TOPOFF 0xe
-#define CHG_STAT_DONE 0xf
-#define CHG_STAT_OTP_TRICKLE 0x10
-#define CHG_STAT_OTP_FAST 0x11
-#define CHG_STAT_OTP_DONE 0x12
-#define CHG_STAT_TSD_TRICKLE 0x20
-#define CHG_STAT_TSD_FAST 0x21
-#define CHG_STAT_TSD_TOPOFF 0x22
-#define CHG_STAT_BAT_ERR 0x7f
-
-static const char *bd70528_charger_model = "BD70528";
-static const char *bd70528_charger_manufacturer = "ROHM Semiconductors";
-
-#define BD_ERR_IRQ_HND(_name_, _wrn_) \
-static irqreturn_t bd0528_##_name_##_interrupt(int irq, void *arg) \
-{ \
- struct power_supply *psy = (struct power_supply *)arg; \
- \
- power_supply_changed(psy); \
- dev_err(&psy->dev, (_wrn_)); \
- \
- return IRQ_HANDLED; \
-}
-
-#define BD_INFO_IRQ_HND(_name_, _wrn_) \
-static irqreturn_t bd0528_##_name_##_interrupt(int irq, void *arg) \
-{ \
- struct power_supply *psy = (struct power_supply *)arg; \
- \
- power_supply_changed(psy); \
- dev_dbg(&psy->dev, (_wrn_)); \
- \
- return IRQ_HANDLED; \
-}
-
-#define BD_IRQ_HND(_name_) bd0528_##_name_##_interrupt
-
-struct bd70528_psy {
- struct regmap *regmap;
- struct device *dev;
- struct power_supply *psy;
-};
-
-BD_ERR_IRQ_HND(BAT_OV_DET, "Battery overvoltage detected\n");
-BD_ERR_IRQ_HND(DBAT_DET, "Dead battery detected\n");
-BD_ERR_IRQ_HND(COLD_DET, "Battery cold\n");
-BD_ERR_IRQ_HND(HOT_DET, "Battery hot\n");
-BD_ERR_IRQ_HND(CHG_TSD, "Charger thermal shutdown\n");
-BD_ERR_IRQ_HND(DCIN2_OV_DET, "DCIN2 overvoltage detected\n");
-
-BD_INFO_IRQ_HND(BAT_OV_RES, "Battery voltage back to normal\n");
-BD_INFO_IRQ_HND(COLD_RES, "Battery temperature back to normal\n");
-BD_INFO_IRQ_HND(HOT_RES, "Battery temperature back to normal\n");
-BD_INFO_IRQ_HND(BAT_RMV, "Battery removed\n");
-BD_INFO_IRQ_HND(BAT_DET, "Battery detected\n");
-BD_INFO_IRQ_HND(DCIN2_OV_RES, "DCIN2 voltage back to normal\n");
-BD_INFO_IRQ_HND(DCIN2_RMV, "DCIN2 removed\n");
-BD_INFO_IRQ_HND(DCIN2_DET, "DCIN2 detected\n");
-BD_INFO_IRQ_HND(DCIN1_RMV, "DCIN1 removed\n");
-BD_INFO_IRQ_HND(DCIN1_DET, "DCIN1 detected\n");
-
-struct irq_name_pair {
- const char *n;
- irqreturn_t (*h)(int irq, void *arg);
-};
-
-static int bd70528_get_irqs(struct platform_device *pdev,
- struct bd70528_psy *bdpsy)
-{
- int irq, i, ret;
- unsigned int mask;
- static const struct irq_name_pair bd70528_chg_irqs[] = {
- { .n = "bd70528-bat-ov-res", .h = BD_IRQ_HND(BAT_OV_RES) },
- { .n = "bd70528-bat-ov-det", .h = BD_IRQ_HND(BAT_OV_DET) },
- { .n = "bd70528-bat-dead", .h = BD_IRQ_HND(DBAT_DET) },
- { .n = "bd70528-bat-warmed", .h = BD_IRQ_HND(COLD_RES) },
- { .n = "bd70528-bat-cold", .h = BD_IRQ_HND(COLD_DET) },
- { .n = "bd70528-bat-cooled", .h = BD_IRQ_HND(HOT_RES) },
- { .n = "bd70528-bat-hot", .h = BD_IRQ_HND(HOT_DET) },
- { .n = "bd70528-chg-tshd", .h = BD_IRQ_HND(CHG_TSD) },
- { .n = "bd70528-bat-removed", .h = BD_IRQ_HND(BAT_RMV) },
- { .n = "bd70528-bat-detected", .h = BD_IRQ_HND(BAT_DET) },
- { .n = "bd70528-dcin2-ov-res", .h = BD_IRQ_HND(DCIN2_OV_RES) },
- { .n = "bd70528-dcin2-ov-det", .h = BD_IRQ_HND(DCIN2_OV_DET) },
- { .n = "bd70528-dcin2-removed", .h = BD_IRQ_HND(DCIN2_RMV) },
- { .n = "bd70528-dcin2-detected", .h = BD_IRQ_HND(DCIN2_DET) },
- { .n = "bd70528-dcin1-removed", .h = BD_IRQ_HND(DCIN1_RMV) },
- { .n = "bd70528-dcin1-detected", .h = BD_IRQ_HND(DCIN1_DET) },
- };
-
- for (i = 0; i < ARRAY_SIZE(bd70528_chg_irqs); i++) {
- irq = platform_get_irq_byname(pdev, bd70528_chg_irqs[i].n);
- if (irq < 0) {
- dev_err(&pdev->dev, "Bad IRQ information for %s (%d)\n",
- bd70528_chg_irqs[i].n, irq);
- return irq;
- }
- ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
- bd70528_chg_irqs[i].h,
- IRQF_ONESHOT,
- bd70528_chg_irqs[i].n,
- bdpsy->psy);
-
- if (ret)
- return ret;
- }
- /*
- * BD70528 irq controller is not touching the main mask register.
- * So enable the charger block interrupts at main level. We can just
- * leave them enabled as irq-controller should disable irqs
- * from sub-registers when IRQ is disabled or freed.
- */
- mask = BD70528_REG_INT_BAT1_MASK | BD70528_REG_INT_BAT2_MASK;
- ret = regmap_update_bits(bdpsy->regmap,
- BD70528_REG_INT_MAIN_MASK, mask, 0);
- if (ret)
- dev_err(&pdev->dev, "Failed to enable charger IRQs\n");
-
- return ret;
-}
-
-static int bd70528_get_charger_status(struct bd70528_psy *bdpsy, int *val)
-{
- int ret;
- unsigned int v;
-
- ret = regmap_read(bdpsy->regmap, BD70528_REG_CHG_CURR_STAT, &v);
- if (ret) {
- dev_err(bdpsy->dev, "Charger state read failure %d\n",
- ret);
- return ret;
- }
-
- switch (v & BD70528_MASK_CHG_STAT) {
- case CHG_STAT_SUSPEND:
- /* Maybe we should check the CHG_TTRI_EN? */
- case CHG_STAT_OTP_TRICKLE:
- case CHG_STAT_OTP_FAST:
- case CHG_STAT_OTP_DONE:
- case CHG_STAT_TSD_TRICKLE:
- case CHG_STAT_TSD_FAST:
- case CHG_STAT_TSD_TOPOFF:
- case CHG_STAT_BAT_ERR:
- *val = POWER_SUPPLY_STATUS_NOT_CHARGING;
- break;
- case CHG_STAT_DONE:
- *val = POWER_SUPPLY_STATUS_FULL;
- break;
- case CHG_STAT_TRICKLE:
- case CHG_STAT_FAST:
- case CHG_STAT_TOPOFF:
- *val = POWER_SUPPLY_STATUS_CHARGING;
- break;
- default:
- *val = POWER_SUPPLY_STATUS_UNKNOWN;
- break;
- }
-
- return 0;
-}
-
-static int bd70528_get_charge_type(struct bd70528_psy *bdpsy, int *val)
-{
- int ret;
- unsigned int v;
-
- ret = regmap_read(bdpsy->regmap, BD70528_REG_CHG_CURR_STAT, &v);
- if (ret) {
- dev_err(bdpsy->dev, "Charger state read failure %d\n",
- ret);
- return ret;
- }
-
- switch (v & BD70528_MASK_CHG_STAT) {
- case CHG_STAT_TRICKLE:
- *val = POWER_SUPPLY_CHARGE_TYPE_TRICKLE;
- break;
- case CHG_STAT_FAST:
- case CHG_STAT_TOPOFF:
- *val = POWER_SUPPLY_CHARGE_TYPE_FAST;
- break;
- case CHG_STAT_DONE:
- case CHG_STAT_SUSPEND:
- /* Maybe we should check the CHG_TTRI_EN? */
- case CHG_STAT_OTP_TRICKLE:
- case CHG_STAT_OTP_FAST:
- case CHG_STAT_OTP_DONE:
- case CHG_STAT_TSD_TRICKLE:
- case CHG_STAT_TSD_FAST:
- case CHG_STAT_TSD_TOPOFF:
- case CHG_STAT_BAT_ERR:
- *val = POWER_SUPPLY_CHARGE_TYPE_NONE;
- break;
- default:
- *val = POWER_SUPPLY_CHARGE_TYPE_UNKNOWN;
- break;
- }
-
- return 0;
-}
-
-static int bd70528_get_battery_health(struct bd70528_psy *bdpsy, int *val)
-{
- int ret;
- unsigned int v;
-
- ret = regmap_read(bdpsy->regmap, BD70528_REG_CHG_BAT_STAT, &v);
- if (ret) {
- dev_err(bdpsy->dev, "Battery state read failure %d\n",
- ret);
- return ret;
- }
- /* No battery? */
- if (!(v & BD70528_MASK_CHG_BAT_DETECT))
- *val = POWER_SUPPLY_HEALTH_DEAD;
- else if (v & BD70528_MASK_CHG_BAT_OVERVOLT)
- *val = POWER_SUPPLY_HEALTH_OVERVOLTAGE;
- else if (v & BD70528_MASK_CHG_BAT_TIMER)
- *val = POWER_SUPPLY_HEALTH_SAFETY_TIMER_EXPIRE;
- else
- *val = POWER_SUPPLY_HEALTH_GOOD;
-
- return 0;
-}
-
-static int bd70528_get_online(struct bd70528_psy *bdpsy, int *val)
-{
- int ret;
- unsigned int v;
-
- ret = regmap_read(bdpsy->regmap, BD70528_REG_CHG_IN_STAT, &v);
- if (ret) {
- dev_err(bdpsy->dev, "DC1 IN state read failure %d\n",
- ret);
- return ret;
- }
-
- *val = (v & BD70528_MASK_CHG_DCIN1_UVLO) ? 1 : 0;
-
- return 0;
-}
-
-static int bd70528_get_present(struct bd70528_psy *bdpsy, int *val)
-{
- int ret;
- unsigned int v;
-
- ret = regmap_read(bdpsy->regmap, BD70528_REG_CHG_BAT_STAT, &v);
- if (ret) {
- dev_err(bdpsy->dev, "Battery state read failure %d\n",
- ret);
- return ret;
- }
-
- *val = (v & BD70528_MASK_CHG_BAT_DETECT) ? 1 : 0;
-
- return 0;
-}
-
-static const struct linear_range current_limit_ranges[] = {
- {
- .min = 5,
- .step = 1,
- .min_sel = 0,
- .max_sel = 0x22,
- },
- {
- .min = 40,
- .step = 5,
- .min_sel = 0x23,
- .max_sel = 0x26,
- },
- {
- .min = 60,
- .step = 20,
- .min_sel = 0x27,
- .max_sel = 0x2d,
- },
- {
- .min = 200,
- .step = 50,
- .min_sel = 0x2e,
- .max_sel = 0x34,
- },
- {
- .min = 500,
- .step = 0,
- .min_sel = 0x35,
- .max_sel = 0x3f,
- },
-};
-
-/*
- * BD70528 would support setting and getting own charge current/
- * voltage for low temperatures. The driver currently only reads
- * the charge current at room temperature. We do set both though.
- */
-static const struct linear_range warm_charge_curr[] = {
- {
- .min = 10,
- .step = 10,
- .min_sel = 0,
- .max_sel = 0x12,
- },
- {
- .min = 200,
- .step = 25,
- .min_sel = 0x13,
- .max_sel = 0x1f,
- },
-};
-
-/*
- * Cold charge current selectors are identical to warm charge current
- * selectors. The difference is that only smaller currents are available
- * at cold charge range.
- */
-#define MAX_COLD_CHG_CURR_SEL 0x15
-#define MAX_WARM_CHG_CURR_SEL 0x1f
-#define MIN_CHG_CURR_SEL 0x0
-
-static int get_charge_current(struct bd70528_psy *bdpsy, int *ma)
-{
- unsigned int sel;
- int ret;
-
- ret = regmap_read(bdpsy->regmap, BD70528_REG_CHG_CHG_CURR_WARM,
- &sel);
- if (ret) {
- dev_err(bdpsy->dev,
- "Charge current reading failed (%d)\n", ret);
- return ret;
- }
-
- sel &= BD70528_MASK_CHG_CHG_CURR;
-
- ret = linear_range_get_value_array(&warm_charge_curr[0],
- ARRAY_SIZE(warm_charge_curr),
- sel, ma);
- if (ret) {
- dev_err(bdpsy->dev,
- "Unknown charge current value 0x%x\n",
- sel);
- }
-
- return ret;
-}
-
-static int get_current_limit(struct bd70528_psy *bdpsy, int *ma)
-{
- unsigned int sel;
- int ret;
-
- ret = regmap_read(bdpsy->regmap, BD70528_REG_CHG_DCIN_ILIM,
- &sel);
-
- if (ret) {
- dev_err(bdpsy->dev,
- "Input current limit reading failed (%d)\n", ret);
- return ret;
- }
-
- sel &= BD70528_MASK_CHG_DCIN_ILIM;
-
- ret = linear_range_get_value_array(&current_limit_ranges[0],
- ARRAY_SIZE(current_limit_ranges),
- sel, ma);
- if (ret) {
- /* Unspecified values mean 500 mA */
- *ma = 500;
- }
- return 0;
-}
-
-static enum power_supply_property bd70528_charger_props[] = {
- POWER_SUPPLY_PROP_STATUS,
- POWER_SUPPLY_PROP_CHARGE_TYPE,
- POWER_SUPPLY_PROP_HEALTH,
- POWER_SUPPLY_PROP_PRESENT,
- POWER_SUPPLY_PROP_ONLINE,
- POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT,
- POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT,
- POWER_SUPPLY_PROP_MODEL_NAME,
- POWER_SUPPLY_PROP_MANUFACTURER,
-};
-
-static int bd70528_charger_get_property(struct power_supply *psy,
- enum power_supply_property psp,
- union power_supply_propval *val)
-{
- struct bd70528_psy *bdpsy = power_supply_get_drvdata(psy);
- int ret = 0;
-
- switch (psp) {
- case POWER_SUPPLY_PROP_STATUS:
- return bd70528_get_charger_status(bdpsy, &val->intval);
- case POWER_SUPPLY_PROP_CHARGE_TYPE:
- return bd70528_get_charge_type(bdpsy, &val->intval);
- case POWER_SUPPLY_PROP_HEALTH:
- return bd70528_get_battery_health(bdpsy, &val->intval);
- case POWER_SUPPLY_PROP_PRESENT:
- return bd70528_get_present(bdpsy, &val->intval);
- case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
- ret = get_current_limit(bdpsy, &val->intval);
- val->intval *= 1000;
- return ret;
- case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
- ret = get_charge_current(bdpsy, &val->intval);
- val->intval *= 1000;
- return ret;
- case POWER_SUPPLY_PROP_ONLINE:
- return bd70528_get_online(bdpsy, &val->intval);
- case POWER_SUPPLY_PROP_MODEL_NAME:
- val->strval = bd70528_charger_model;
- return 0;
- case POWER_SUPPLY_PROP_MANUFACTURER:
- val->strval = bd70528_charger_manufacturer;
- return 0;
- default:
- break;
- }
-
- return -EINVAL;
-}
-
-static int bd70528_prop_is_writable(struct power_supply *psy,
- enum power_supply_property psp)
-{
- switch (psp) {
- case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
- case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
- return 1;
- default:
- break;
- }
- return 0;
-}
-
-static int set_charge_current(struct bd70528_psy *bdpsy, int ma)
-{
- unsigned int reg;
- int ret = 0, tmpret;
- bool found;
-
- if (ma > 500) {
- dev_warn(bdpsy->dev,
- "Requested charge current %u exceed maximum (500mA)\n",
- ma);
- reg = MAX_WARM_CHG_CURR_SEL;
- goto set;
- }
- if (ma < 10) {
- dev_err(bdpsy->dev,
- "Requested charge current %u smaller than min (10mA)\n",
- ma);
- reg = MIN_CHG_CURR_SEL;
- ret = -EINVAL;
- goto set;
- }
-
-/*
- * For BD70528 voltage/current limits we happily accept any value which
- * belongs the range. We could check if value matching the selector is
- * desired by computing the range min + (sel - sel_low) * range step - but
- * I guess it is enough if we use voltage/current which is closest (below)
- * the requested?
- */
-
- ret = linear_range_get_selector_low_array(warm_charge_curr,
- ARRAY_SIZE(warm_charge_curr),
- ma, &reg, &found);
- if (ret) {
- dev_err(bdpsy->dev,
- "Unsupported charge current %u mA\n", ma);
- reg = MIN_CHG_CURR_SEL;
- goto set;
- }
- if (!found) {
- /*
- * There was a gap in supported values and we hit it.
- * Yet a smaller value was found so we use it.
- */
- dev_warn(bdpsy->dev,
- "Unsupported charge current %u mA\n", ma);
- }
-set:
-
- tmpret = regmap_update_bits(bdpsy->regmap,
- BD70528_REG_CHG_CHG_CURR_WARM,
- BD70528_MASK_CHG_CHG_CURR, reg);
- if (tmpret)
- dev_err(bdpsy->dev,
- "Charge current write failure (%d)\n", tmpret);
-
- if (reg > MAX_COLD_CHG_CURR_SEL)
- reg = MAX_COLD_CHG_CURR_SEL;
-
- if (!tmpret)
- tmpret = regmap_update_bits(bdpsy->regmap,
- BD70528_REG_CHG_CHG_CURR_COLD,
- BD70528_MASK_CHG_CHG_CURR, reg);
-
- if (!ret)
- ret = tmpret;
-
- return ret;
-}
-
-#define MAX_CURR_LIMIT_SEL 0x34
-#define MIN_CURR_LIMIT_SEL 0x0
-
-static int set_current_limit(struct bd70528_psy *bdpsy, int ma)
-{
- unsigned int reg;
- int ret = 0, tmpret;
- bool found;
-
- if (ma > 500) {
- dev_warn(bdpsy->dev,
- "Requested current limit %u exceed maximum (500mA)\n",
- ma);
- reg = MAX_CURR_LIMIT_SEL;
- goto set;
- }
- if (ma < 5) {
- dev_err(bdpsy->dev,
- "Requested current limit %u smaller than min (5mA)\n",
- ma);
- reg = MIN_CURR_LIMIT_SEL;
- ret = -EINVAL;
- goto set;
- }
-
- ret = linear_range_get_selector_low_array(current_limit_ranges,
- ARRAY_SIZE(current_limit_ranges),
- ma, &reg, &found);
- if (ret) {
- dev_err(bdpsy->dev, "Unsupported current limit %umA\n", ma);
- reg = MIN_CURR_LIMIT_SEL;
- goto set;
- }
- if (!found) {
- /*
- * There was a gap in supported values and we hit it.
- * We found a smaller value from ranges and use it.
- * Warn user though.
- */
- dev_warn(bdpsy->dev, "Unsupported current limit %umA\n", ma);
- }
-
-set:
- tmpret = regmap_update_bits(bdpsy->regmap,
- BD70528_REG_CHG_DCIN_ILIM,
- BD70528_MASK_CHG_DCIN_ILIM, reg);
-
- if (!ret)
- ret = tmpret;
-
- return ret;
-}
-
-static int bd70528_charger_set_property(struct power_supply *psy,
- enum power_supply_property psp,
- const union power_supply_propval *val)
-{
- struct bd70528_psy *bdpsy = power_supply_get_drvdata(psy);
-
- switch (psp) {
- case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
- return set_current_limit(bdpsy, val->intval / 1000);
- case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
- return set_charge_current(bdpsy, val->intval / 1000);
- default:
- break;
- }
- return -EINVAL;
-}
-
-static const struct power_supply_desc bd70528_charger_desc = {
- .name = "bd70528-charger",
- .type = POWER_SUPPLY_TYPE_MAINS,
- .properties = bd70528_charger_props,
- .num_properties = ARRAY_SIZE(bd70528_charger_props),
- .get_property = bd70528_charger_get_property,
- .set_property = bd70528_charger_set_property,
- .property_is_writeable = bd70528_prop_is_writable,
-};
-
-static int bd70528_power_probe(struct platform_device *pdev)
-{
- struct bd70528_psy *bdpsy;
- struct power_supply_config cfg = {};
-
- bdpsy = devm_kzalloc(&pdev->dev, sizeof(*bdpsy), GFP_KERNEL);
- if (!bdpsy)
- return -ENOMEM;
-
- bdpsy->regmap = dev_get_regmap(pdev->dev.parent, NULL);
- if (!bdpsy->regmap) {
- dev_err(&pdev->dev, "No regmap found for chip\n");
- return -EINVAL;
- }
- bdpsy->dev = &pdev->dev;
-
- platform_set_drvdata(pdev, bdpsy);
- cfg.drv_data = bdpsy;
- cfg.of_node = pdev->dev.parent->of_node;
-
- bdpsy->psy = devm_power_supply_register(&pdev->dev,
- &bd70528_charger_desc, &cfg);
- if (IS_ERR(bdpsy->psy)) {
- dev_err(&pdev->dev, "failed: power supply register\n");
- return PTR_ERR(bdpsy->psy);
- }
-
- return bd70528_get_irqs(pdev, bdpsy);
-}
-
-static struct platform_driver bd70528_power = {
- .driver = {
- .name = "bd70528-power"
- },
- .probe = bd70528_power_probe,
-};
-
-module_platform_driver(bd70528_power);
-
-MODULE_AUTHOR("Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>");
-MODULE_DESCRIPTION("BD70528 power-supply driver");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:bd70528-power");
* Author: Mark A. Greer <mgreer@animalcreek.com>
*/
+#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
-#include <linux/of_irq.h>
-#include <linux/of_device.h>
#include <linux/pm_runtime.h>
#include <linux/power_supply.h>
#include <linux/power/bq24190_charger.h>
};
MODULE_DEVICE_TABLE(i2c, bq24190_i2c_ids);
-#ifdef CONFIG_OF
static const struct of_device_id bq24190_of_match[] = {
{ .compatible = "ti,bq24190", },
{ .compatible = "ti,bq24192", },
{ },
};
MODULE_DEVICE_TABLE(of, bq24190_of_match);
-#else
-static const struct of_device_id bq24190_of_match[] = {
- { },
-};
-#endif
static struct i2c_driver bq24190_driver = {
.probe = bq24190_probe,
.driver = {
.name = "bq24190-charger",
.pm = &bq24190_pm_ops,
- .of_match_table = of_match_ptr(bq24190_of_match),
+ .of_match_table = bq24190_of_match,
},
};
module_i2c_driver(bq24190_driver);
},
{},
};
+MODULE_DEVICE_TABLE(of, charger_manager_match);
static struct charger_desc *of_cm_parse_desc(struct device *dev)
{
if (!empty->voltage)
return -ENODATA;
val->intval = empty->counter_uah - latest->counter_uah;
- if (val->intval < 0)
+ if (val->intval < 0) {
+ /* Assume invalid config if CHARGE_NOW is below -20% of CHARGE_FULL */
+ if (ddata->charge_full && abs(val->intval) > ddata->charge_full/5) {
+ empty->voltage = 0;
+ ddata->charge_full = 0;
+ return -ENODATA;
+ }
val->intval = 0;
- else if (ddata->charge_full && ddata->charge_full < val->intval)
+ } else if (ddata->charge_full && ddata->charge_full < val->intval) {
+ /* Assume invalid config if CHARGE_NOW exceeds CHARGE_FULL by 20% */
+ if (val->intval > (6*ddata->charge_full)/5) {
+ empty->voltage = 0;
+ ddata->charge_full = 0;
+ return -ENODATA;
+ }
val->intval = ddata->charge_full;
+ }
break;
case POWER_SUPPLY_PROP_CHARGE_FULL:
if (!ddata->charge_full)
case POWER_SUPPLY_PROP_CHARGE_FULL:
if (val->intval < 0)
return -EINVAL;
- if (val->intval > ddata->config.info.charge_full_design)
+ if (val->intval > (6*ddata->config.info.charge_full_design)/5)
return -EINVAL;
ddata->charge_full = val->intval;
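A worked example to make the new 20% sanity window concrete, assuming charge_full = 1,200,000 µAh: CHARGE_NOW is empty->counter_uah - latest->counter_uah, so a reading of -300,000 µAh has abs(-300,000) > 1,200,000/5 = 240,000 and the cached calibration is dropped (empty->voltage and charge_full are zeroed, -ENODATA is returned), while a small glitch of -100,000 µAh is merely clamped to 0. On the positive side, a value above (6 * 1,200,000)/5 = 1,440,000 µAh likewise invalidates the calibration instead of being silently clamped, and the same 120% ceiling now bounds what userspace may store as CHARGE_FULL.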
POWER_SUPPLY_PROP_CURRENT_NOW,
};
-/* No battery always shows temperature of -40000 */
-static bool cpcap_charger_battery_found(struct cpcap_charger_ddata *ddata)
-{
- struct iio_channel *channel;
- int error, temperature;
-
- channel = ddata->channels[CPCAP_CHARGER_IIO_BATTDET];
- error = iio_read_channel_processed(channel, &temperature);
- if (error < 0) {
- dev_warn(ddata->dev, "%s failed: %i\n", __func__, error);
-
- return false;
- }
-
- return temperature > -20000 && temperature < 60000;
-}
-
static int cpcap_charger_get_charge_voltage(struct cpcap_charger_ddata *ddata)
{
struct iio_channel *channel;
if (!ddata->feeding_vbus && cpcap_charger_vbus_valid(ddata) &&
s.chrgcurr1) {
- int max_current = 532000;
+ int max_current;
int vchrg, ichrg;
+ union power_supply_propval val;
+ struct power_supply *battery;
- if (cpcap_charger_battery_found(ddata))
+ battery = power_supply_get_by_name("battery");
+ if (IS_ERR_OR_NULL(battery)) {
+ dev_err(ddata->dev, "battery power_supply not available %li\n",
+ PTR_ERR(battery));
+ return;
+ }
+
+ error = power_supply_get_property(battery, POWER_SUPPLY_PROP_PRESENT, &val);
+ power_supply_put(battery);
+ if (error)
+ goto out_err;
+
+ if (val.intval) {
max_current = 1596000;
+ } else {
+ dev_info(ddata->dev, "battery not inserted, charging disabled\n");
+ max_current = 0;
+ }
if (max_current > ddata->limit_current)
max_current = ddata->limit_current;
#include <linux/interrupt.h>
#include <linux/power_supply.h>
#include <linux/of_device.h>
-#include <linux/max17040_battery.h>
#include <linux/regmap.h>
#include <linux/slab.h>
struct regmap *regmap;
struct delayed_work work;
struct power_supply *battery;
- struct max17040_platform_data *pdata;
struct chip_data data;
/* battery capacity */
int soc;
- /* State Of Charge */
- int status;
/* Low alert threshold from 32% to 1% of the State of Charge */
u32 low_soc_alert;
/* some devices return twice the capacity */
static int max17040_get_online(struct max17040_chip *chip)
{
- return chip->pdata && chip->pdata->battery_online ?
- chip->pdata->battery_online() : 1;
-}
-
-static int max17040_get_status(struct max17040_chip *chip)
-{
- if (!chip->pdata || !chip->pdata->charger_online
- || !chip->pdata->charger_enable)
- return POWER_SUPPLY_STATUS_UNKNOWN;
-
- if (max17040_get_soc(chip) > MAX17040_BATTERY_FULL)
- return POWER_SUPPLY_STATUS_FULL;
-
- if (chip->pdata->charger_online())
- if (chip->pdata->charger_enable())
- return POWER_SUPPLY_STATUS_CHARGING;
- else
- return POWER_SUPPLY_STATUS_NOT_CHARGING;
- else
- return POWER_SUPPLY_STATUS_DISCHARGING;
+ return 1;
}
static int max17040_get_of_data(struct max17040_chip *chip)
static void max17040_check_changes(struct max17040_chip *chip)
{
chip->soc = max17040_get_soc(chip);
- chip->status = max17040_get_status(chip);
}
static void max17040_queue_work(struct max17040_chip *chip)
static void max17040_work(struct work_struct *work)
{
struct max17040_chip *chip;
- int last_soc, last_status;
+ int last_soc;
chip = container_of(work, struct max17040_chip, work.work);
- /* store SOC and status to check changes */
+ /* store SOC to check changes */
last_soc = chip->soc;
- last_status = chip->status;
max17040_check_changes(chip);
/* check changes and send uevent */
- if (last_soc != chip->soc || last_status != chip->status)
+ if (last_soc != chip->soc)
power_supply_changed(chip->battery);
max17040_queue_work(chip);
static int max17040_enable_alert_irq(struct max17040_chip *chip)
{
struct i2c_client *client = chip->client;
- unsigned int flags;
int ret;
- flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT;
ret = devm_request_threaded_irq(&client->dev, client->irq, NULL,
- max17040_thread_handler, flags,
+ max17040_thread_handler, IRQF_ONESHOT,
chip->battery->desc->name, chip);
return ret;
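Dropping IRQF_TRIGGER_FALLING here (and from the probe() path below) leaves the trigger type to the firmware description: when the device tree or ACPI tables already declare one, a hard-coded flag can conflict with it. IRQF_ONESHOT stays because the IRQ is requested with no primary handler, so the line must remain masked until the threaded handler completes.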
struct max17040_chip *chip = power_supply_get_drvdata(psy);
switch (psp) {
- case POWER_SUPPLY_PROP_STATUS:
- val->intval = max17040_get_status(chip);
- break;
case POWER_SUPPLY_PROP_ONLINE:
val->intval = max17040_get_online(chip);
break;
};
static enum power_supply_property max17040_battery_props[] = {
- POWER_SUPPLY_PROP_STATUS,
POWER_SUPPLY_PROP_ONLINE,
POWER_SUPPLY_PROP_VOLTAGE_NOW,
POWER_SUPPLY_PROP_CAPACITY,
chip->client = client;
chip->regmap = devm_regmap_init_i2c(client, &max17040_regmap);
- chip->pdata = client->dev.platform_data;
chip_id = (enum chip_id) id->driver_data;
if (client->dev.of_node) {
ret = max17040_get_of_data(chip);
}
if (client->irq) {
- unsigned int flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT;
+ unsigned int flags = IRQF_ONESHOT;
/*
* On ACPI systems the IRQ may be handled by ACPI-event code,
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright 2012 ST Ericsson.
- *
- * Power supply driver for ST Ericsson pm2xxx_charger charger
- */
-
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/device.h>
-#include <linux/interrupt.h>
-#include <linux/delay.h>
-#include <linux/slab.h>
-#include <linux/platform_device.h>
-#include <linux/power_supply.h>
-#include <linux/regulator/consumer.h>
-#include <linux/err.h>
-#include <linux/i2c.h>
-#include <linux/workqueue.h>
-#include <linux/mfd/abx500/ab8500.h>
-#include <linux/pm2301_charger.h>
-#include <linux/gpio.h>
-#include <linux/pm_runtime.h>
-#include <linux/pm.h>
-
-#include "ab8500-bm.h"
-#include "ab8500-chargalg.h"
-#include "pm2301_charger.h"
-
-#define to_pm2xxx_charger_ac_device_info(x) container_of((x), \
- struct pm2xxx_charger, ac_chg)
-#define SLEEP_MIN 50
-#define SLEEP_MAX 100
-#define PM2XXX_AUTOSUSPEND_DELAY 500
-
-static int pm2xxx_interrupt_registers[] = {
- PM2XXX_REG_INT1,
- PM2XXX_REG_INT2,
- PM2XXX_REG_INT3,
- PM2XXX_REG_INT4,
- PM2XXX_REG_INT5,
- PM2XXX_REG_INT6,
-};
-
-static enum power_supply_property pm2xxx_charger_ac_props[] = {
- POWER_SUPPLY_PROP_HEALTH,
- POWER_SUPPLY_PROP_PRESENT,
- POWER_SUPPLY_PROP_ONLINE,
- POWER_SUPPLY_PROP_VOLTAGE_AVG,
-};
-
-static int pm2xxx_charger_voltage_map[] = {
- 3500,
- 3525,
- 3550,
- 3575,
- 3600,
- 3625,
- 3650,
- 3675,
- 3700,
- 3725,
- 3750,
- 3775,
- 3800,
- 3825,
- 3850,
- 3875,
- 3900,
- 3925,
- 3950,
- 3975,
- 4000,
- 4025,
- 4050,
- 4075,
- 4100,
- 4125,
- 4150,
- 4175,
- 4200,
- 4225,
- 4250,
- 4275,
- 4300,
-};
-
-static int pm2xxx_charger_current_map[] = {
- 200,
- 200,
- 400,
- 600,
- 800,
- 1000,
- 1200,
- 1400,
- 1600,
- 1800,
- 2000,
- 2200,
- 2400,
- 2600,
- 2800,
- 3000,
-};
-
-static void set_lpn_pin(struct pm2xxx_charger *pm2)
-{
- if (!pm2->ac.charger_connected && gpio_is_valid(pm2->lpn_pin)) {
- gpio_set_value(pm2->lpn_pin, 1);
- usleep_range(SLEEP_MIN, SLEEP_MAX);
- }
-}
-
-static void clear_lpn_pin(struct pm2xxx_charger *pm2)
-{
- if (!pm2->ac.charger_connected && gpio_is_valid(pm2->lpn_pin))
- gpio_set_value(pm2->lpn_pin, 0);
-}
-
-static int pm2xxx_reg_read(struct pm2xxx_charger *pm2, int reg, u8 *val)
-{
- int ret;
-
- /* wake up the device */
- pm_runtime_get_sync(pm2->dev);
-
- ret = i2c_smbus_read_i2c_block_data(pm2->config.pm2xxx_i2c, reg,
- 1, val);
- if (ret < 0)
- dev_err(pm2->dev, "Error reading register at 0x%x\n", reg);
- else
- ret = 0;
-
- pm_runtime_put_sync(pm2->dev);
-
- return ret;
-}
-
-static int pm2xxx_reg_write(struct pm2xxx_charger *pm2, int reg, u8 val)
-{
- int ret;
-
- /* wake up the device */
- pm_runtime_get_sync(pm2->dev);
-
- ret = i2c_smbus_write_i2c_block_data(pm2->config.pm2xxx_i2c, reg,
- 1, &val);
- if (ret < 0)
- dev_err(pm2->dev, "Error writing register at 0x%x\n", reg);
- else
- ret = 0;
-
- pm_runtime_put_sync(pm2->dev);
-
- return ret;
-}
-
-static int pm2xxx_charging_enable_mngt(struct pm2xxx_charger *pm2)
-{
- int ret;
-
- /* Enable charging */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG2,
- (PM2XXX_CH_AUTO_RESUME_EN | PM2XXX_CHARGER_ENA));
-
- return ret;
-}
-
-static int pm2xxx_charging_disable_mngt(struct pm2xxx_charger *pm2)
-{
- int ret;
-
- /* Disable SW EOC ctrl */
- ret = pm2xxx_reg_write(pm2, PM2XXX_SW_CTRL_REG, PM2XXX_SWCTRL_HW);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx write failed\n", __func__);
- return ret;
- }
-
- /* Disable charging */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG2,
- (PM2XXX_CH_AUTO_RESUME_DIS | PM2XXX_CHARGER_DIS));
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx write failed\n", __func__);
- return ret;
- }
-
- return 0;
-}
-
-static int pm2xxx_charger_batt_therm_mngt(struct pm2xxx_charger *pm2, int val)
-{
- queue_work(pm2->charger_wq, &pm2->check_main_thermal_prot_work);
-
- return 0;
-}
-
-
-static int pm2xxx_charger_die_therm_mngt(struct pm2xxx_charger *pm2, int val)
-{
- queue_work(pm2->charger_wq, &pm2->check_main_thermal_prot_work);
-
- return 0;
-}
-
-static int pm2xxx_charger_ovv_mngt(struct pm2xxx_charger *pm2, int val)
-{
- dev_err(pm2->dev, "Overvoltage detected\n");
- pm2->flags.ovv = true;
- power_supply_changed(pm2->ac_chg.psy);
-
- /* Schedule a new HW failure check */
- queue_delayed_work(pm2->charger_wq, &pm2->check_hw_failure_work, 0);
-
- return 0;
-}
-
-static int pm2xxx_charger_wd_exp_mngt(struct pm2xxx_charger *pm2, int val)
-{
- dev_dbg(pm2->dev , "20 minutes watchdog expired\n");
-
- pm2->ac.wd_expired = true;
- power_supply_changed(pm2->ac_chg.psy);
-
- return 0;
-}
-
-static int pm2xxx_charger_vbat_lsig_mngt(struct pm2xxx_charger *pm2, int val)
-{
- int ret;
-
- switch (val) {
- case PM2XXX_INT1_ITVBATLOWR:
- dev_dbg(pm2->dev, "VBAT grows above VBAT_LOW level\n");
- /* Enable SW EOC ctrl */
- ret = pm2xxx_reg_write(pm2, PM2XXX_SW_CTRL_REG,
- PM2XXX_SWCTRL_SW);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx write failed\n", __func__);
- return ret;
- }
- break;
-
- case PM2XXX_INT1_ITVBATLOWF:
- dev_dbg(pm2->dev, "VBAT drops below VBAT_LOW level\n");
- /* Disable SW EOC ctrl */
- ret = pm2xxx_reg_write(pm2, PM2XXX_SW_CTRL_REG,
- PM2XXX_SWCTRL_HW);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx write failed\n", __func__);
- return ret;
- }
- break;
-
- default:
- dev_err(pm2->dev, "Unknown VBAT level\n");
- }
-
- return 0;
-}
-
-static int pm2xxx_charger_bat_disc_mngt(struct pm2xxx_charger *pm2, int val)
-{
- dev_dbg(pm2->dev, "battery disconnected\n");
-
- return 0;
-}
-
-static int pm2xxx_charger_detection(struct pm2xxx_charger *pm2, u8 *val)
-{
- int ret;
-
- ret = pm2xxx_reg_read(pm2, PM2XXX_SRCE_REG_INT2, val);
-
- if (ret < 0) {
- dev_err(pm2->dev, "Charger detection failed\n");
- goto out;
- }
-
- *val &= (PM2XXX_INT2_S_ITVPWR1PLUG | PM2XXX_INT2_S_ITVPWR2PLUG);
-
-out:
- return ret;
-}
-
-static int pm2xxx_charger_itv_pwr_plug_mngt(struct pm2xxx_charger *pm2, int val)
-{
-
- int ret;
- u8 read_val;
-
- /*
- * Since we can't be sure that the events are received
- * synchronously, we have the check if the main charger is
- * connected by reading the interrupt source register.
- */
- ret = pm2xxx_charger_detection(pm2, &read_val);
-
- if ((ret == 0) && read_val) {
- pm2->ac.charger_connected = 1;
- pm2->ac_conn = true;
- queue_work(pm2->charger_wq, &pm2->ac_work);
- }
-
-
- return ret;
-}
-
-static int pm2xxx_charger_itv_pwr_unplug_mngt(struct pm2xxx_charger *pm2,
- int val)
-{
- pm2->ac.charger_connected = 0;
- queue_work(pm2->charger_wq, &pm2->ac_work);
-
- return 0;
-}
-
-static int pm2_int_reg0(void *pm2_data, int val)
-{
- struct pm2xxx_charger *pm2 = pm2_data;
- int ret = 0;
-
- if (val & PM2XXX_INT1_ITVBATLOWR) {
- ret = pm2xxx_charger_vbat_lsig_mngt(pm2,
- PM2XXX_INT1_ITVBATLOWR);
- if (ret < 0)
- goto out;
- }
-
- if (val & PM2XXX_INT1_ITVBATLOWF) {
- ret = pm2xxx_charger_vbat_lsig_mngt(pm2,
- PM2XXX_INT1_ITVBATLOWF);
- if (ret < 0)
- goto out;
- }
-
- if (val & PM2XXX_INT1_ITVBATDISCONNECT) {
- ret = pm2xxx_charger_bat_disc_mngt(pm2,
- PM2XXX_INT1_ITVBATDISCONNECT);
- if (ret < 0)
- goto out;
- }
-out:
- return ret;
-}
-
-static int pm2_int_reg1(void *pm2_data, int val)
-{
- struct pm2xxx_charger *pm2 = pm2_data;
- int ret = 0;
-
- if (val & (PM2XXX_INT2_ITVPWR1PLUG | PM2XXX_INT2_ITVPWR2PLUG)) {
- dev_dbg(pm2->dev , "Main charger plugged\n");
- ret = pm2xxx_charger_itv_pwr_plug_mngt(pm2, val &
- (PM2XXX_INT2_ITVPWR1PLUG | PM2XXX_INT2_ITVPWR2PLUG));
- }
-
- if (val &
- (PM2XXX_INT2_ITVPWR1UNPLUG | PM2XXX_INT2_ITVPWR2UNPLUG)) {
- dev_dbg(pm2->dev , "Main charger unplugged\n");
- ret = pm2xxx_charger_itv_pwr_unplug_mngt(pm2, val &
- (PM2XXX_INT2_ITVPWR1UNPLUG |
- PM2XXX_INT2_ITVPWR2UNPLUG));
- }
-
- return ret;
-}
-
-static int pm2_int_reg2(void *pm2_data, int val)
-{
- struct pm2xxx_charger *pm2 = pm2_data;
- int ret = 0;
-
- if (val & PM2XXX_INT3_ITAUTOTIMEOUTWD)
- ret = pm2xxx_charger_wd_exp_mngt(pm2, val);
-
- if (val & (PM2XXX_INT3_ITCHPRECHARGEWD |
- PM2XXX_INT3_ITCHCCWD | PM2XXX_INT3_ITCHCVWD)) {
- dev_dbg(pm2->dev,
- "Watchdog occurred for precharge, CC and CV charge\n");
- }
-
- return ret;
-}
-
-static int pm2_int_reg3(void *pm2_data, int val)
-{
- struct pm2xxx_charger *pm2 = pm2_data;
- int ret = 0;
-
- if (val & (PM2XXX_INT4_ITCHARGINGON)) {
- dev_dbg(pm2->dev ,
- "charging operation has started\n");
- }
-
- if (val & (PM2XXX_INT4_ITVRESUME)) {
- dev_dbg(pm2->dev,
- "battery discharged down to VResume threshold\n");
- }
-
- if (val & (PM2XXX_INT4_ITBATTFULL)) {
- dev_dbg(pm2->dev , "battery fully detected\n");
- }
-
- if (val & (PM2XXX_INT4_ITCVPHASE)) {
- dev_dbg(pm2->dev, "CV phase enter with 0.5C charging\n");
- }
-
- if (val & (PM2XXX_INT4_ITVPWR2OVV | PM2XXX_INT4_ITVPWR1OVV)) {
- pm2->failure_case = VPWR_OVV;
- ret = pm2xxx_charger_ovv_mngt(pm2, val &
- (PM2XXX_INT4_ITVPWR2OVV | PM2XXX_INT4_ITVPWR1OVV));
- dev_dbg(pm2->dev, "VPWR/VSYSTEM overvoltage detected\n");
- }
-
- if (val & (PM2XXX_INT4_S_ITBATTEMPCOLD |
- PM2XXX_INT4_S_ITBATTEMPHOT)) {
- ret = pm2xxx_charger_batt_therm_mngt(pm2, val &
- (PM2XXX_INT4_S_ITBATTEMPCOLD |
- PM2XXX_INT4_S_ITBATTEMPHOT));
- dev_dbg(pm2->dev, "BTEMP is too Low/High\n");
- }
-
- return ret;
-}
-
-static int pm2_int_reg4(void *pm2_data, int val)
-{
- struct pm2xxx_charger *pm2 = pm2_data;
- int ret = 0;
-
- if (val & PM2XXX_INT5_ITVSYSTEMOVV) {
- pm2->failure_case = VSYSTEM_OVV;
- ret = pm2xxx_charger_ovv_mngt(pm2, val &
- PM2XXX_INT5_ITVSYSTEMOVV);
- dev_dbg(pm2->dev, "VSYSTEM overvoltage detected\n");
- }
-
- if (val & (PM2XXX_INT5_ITTHERMALWARNINGFALL |
- PM2XXX_INT5_ITTHERMALWARNINGRISE |
- PM2XXX_INT5_ITTHERMALSHUTDOWNFALL |
- PM2XXX_INT5_ITTHERMALSHUTDOWNRISE)) {
- dev_dbg(pm2->dev, "BTEMP die temperature is too Low/High\n");
- ret = pm2xxx_charger_die_therm_mngt(pm2, val &
- (PM2XXX_INT5_ITTHERMALWARNINGFALL |
- PM2XXX_INT5_ITTHERMALWARNINGRISE |
- PM2XXX_INT5_ITTHERMALSHUTDOWNFALL |
- PM2XXX_INT5_ITTHERMALSHUTDOWNRISE));
- }
-
- return ret;
-}
-
-static int pm2_int_reg5(void *pm2_data, int val)
-{
- struct pm2xxx_charger *pm2 = pm2_data;
-
- if (val & (PM2XXX_INT6_ITVPWR2DROP | PM2XXX_INT6_ITVPWR1DROP)) {
- dev_dbg(pm2->dev, "VMPWR drop to VBAT level\n");
- }
-
- if (val & (PM2XXX_INT6_ITVPWR2VALIDRISE |
- PM2XXX_INT6_ITVPWR1VALIDRISE |
- PM2XXX_INT6_ITVPWR2VALIDFALL |
- PM2XXX_INT6_ITVPWR1VALIDFALL)) {
- dev_dbg(pm2->dev, "Falling/Rising edge on WPWR1/2\n");
- }
-
- return 0;
-}
-
-static irqreturn_t pm2xxx_irq_int(int irq, void *data)
-{
- struct pm2xxx_charger *pm2 = data;
- struct pm2xxx_interrupts *interrupt = pm2->pm2_int;
- int i;
-
- /* wake up the device */
- pm_runtime_get_sync(pm2->dev);
-
- do {
- for (i = 0; i < PM2XXX_NUM_INT_REG; i++) {
- pm2xxx_reg_read(pm2,
- pm2xxx_interrupt_registers[i],
- &(interrupt->reg[i]));
-
- if (interrupt->reg[i] > 0)
- interrupt->handler[i](pm2, interrupt->reg[i]);
- }
- } while (gpio_get_value(pm2->pdata->gpio_irq_number) == 0);
-
- pm_runtime_mark_last_busy(pm2->dev);
- pm_runtime_put_autosuspend(pm2->dev);
-
- return IRQ_HANDLED;
-}
-
-static int pm2xxx_charger_get_ac_cv(struct pm2xxx_charger *pm2)
-{
- int ret = 0;
- u8 val;
-
- if (pm2->ac.charger_connected && pm2->ac.charger_online) {
-
- ret = pm2xxx_reg_read(pm2, PM2XXX_SRCE_REG_INT4, &val);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx read failed\n", __func__);
- goto out;
- }
-
- if (val & PM2XXX_INT4_S_ITCVPHASE)
- ret = PM2XXX_CONST_VOLT;
- else
- ret = PM2XXX_CONST_CURR;
- }
-out:
- return ret;
-}
-
-static int pm2xxx_current_to_regval(int curr)
-{
- int i;
-
- if (curr < pm2xxx_charger_current_map[0])
- return 0;
-
- for (i = 1; i < ARRAY_SIZE(pm2xxx_charger_current_map); i++) {
- if (curr < pm2xxx_charger_current_map[i])
- return (i - 1);
- }
-
- i = ARRAY_SIZE(pm2xxx_charger_current_map) - 1;
- if (curr == pm2xxx_charger_current_map[i])
- return i;
- else
- return -EINVAL;
-}
-
-static int pm2xxx_voltage_to_regval(int curr)
-{
- int i;
-
- if (curr < pm2xxx_charger_voltage_map[0])
- return 0;
-
- for (i = 1; i < ARRAY_SIZE(pm2xxx_charger_voltage_map); i++) {
- if (curr < pm2xxx_charger_voltage_map[i])
- return i - 1;
- }
-
- i = ARRAY_SIZE(pm2xxx_charger_voltage_map) - 1;
- if (curr == pm2xxx_charger_voltage_map[i])
- return i;
- else
- return -EINVAL;
-}
-
-static int pm2xxx_charger_update_charger_current(struct ux500_charger *charger,
- int ich_out)
-{
- int ret;
- int curr_index;
- struct pm2xxx_charger *pm2;
- u8 val;
-
- if (charger->psy->desc->type == POWER_SUPPLY_TYPE_MAINS)
- pm2 = to_pm2xxx_charger_ac_device_info(charger);
- else
- return -ENXIO;
-
- curr_index = pm2xxx_current_to_regval(ich_out);
- if (curr_index < 0) {
- dev_err(pm2->dev,
- "Charger current too high, charging not started\n");
- return -ENXIO;
- }
-
- ret = pm2xxx_reg_read(pm2, PM2XXX_BATT_CTRL_REG6, &val);
- if (ret >= 0) {
- val &= ~PM2XXX_DIR_CH_CC_CURRENT_MASK;
- val |= curr_index;
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG6, val);
- if (ret < 0) {
- dev_err(pm2->dev,
- "%s write failed\n", __func__);
- }
- }
- else
- dev_err(pm2->dev, "%s read failed\n", __func__);
-
- return ret;
-}
-
-static int pm2xxx_charger_ac_get_property(struct power_supply *psy,
- enum power_supply_property psp,
- union power_supply_propval *val)
-{
- struct pm2xxx_charger *pm2;
-
- pm2 = to_pm2xxx_charger_ac_device_info(psy_to_ux500_charger(psy));
-
- switch (psp) {
- case POWER_SUPPLY_PROP_HEALTH:
- if (pm2->flags.mainextchnotok)
- val->intval = POWER_SUPPLY_HEALTH_UNSPEC_FAILURE;
- else if (pm2->ac.wd_expired)
- val->intval = POWER_SUPPLY_HEALTH_DEAD;
- else if (pm2->flags.main_thermal_prot)
- val->intval = POWER_SUPPLY_HEALTH_OVERHEAT;
- else if (pm2->flags.ovv)
- val->intval = POWER_SUPPLY_HEALTH_OVERVOLTAGE;
- else
- val->intval = POWER_SUPPLY_HEALTH_GOOD;
- break;
- case POWER_SUPPLY_PROP_ONLINE:
- val->intval = pm2->ac.charger_online;
- break;
- case POWER_SUPPLY_PROP_PRESENT:
- val->intval = pm2->ac.charger_connected;
- break;
- case POWER_SUPPLY_PROP_VOLTAGE_AVG:
- pm2->ac.cv_active = pm2xxx_charger_get_ac_cv(pm2);
- val->intval = pm2->ac.cv_active;
- break;
- default:
- return -EINVAL;
- }
- return 0;
-}
-
-static int pm2xxx_charging_init(struct pm2xxx_charger *pm2)
-{
- int ret = 0;
-
- /* enable CC and CV watchdog */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG3,
- (PM2XXX_CH_WD_CV_PHASE_60MIN | PM2XXX_CH_WD_CC_PHASE_60MIN));
- if( ret < 0)
- return ret;
-
- /* enable precharge watchdog */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG4,
- PM2XXX_CH_WD_PRECH_PHASE_60MIN);
-
- /* Disable auto timeout */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG5,
- PM2XXX_CH_WD_AUTO_TIMEOUT_20MIN);
-
- /*
- * EOC current level = 100mA
- * Precharge current level = 100mA
- * CC current level = 1000mA
- */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG6,
- (PM2XXX_DIR_CH_CC_CURRENT_1000MA |
- PM2XXX_CH_PRECH_CURRENT_100MA |
- PM2XXX_CH_EOC_CURRENT_100MA));
-
- /*
- * recharge threshold = 3.8V
- * Precharge to CC threshold = 2.9V
- */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG7,
- (PM2XXX_CH_PRECH_VOL_2_9 | PM2XXX_CH_VRESUME_VOL_3_8));
-
- /* float voltage charger level = 4.2V */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG8,
- PM2XXX_CH_VOLT_4_2);
-
- /* Voltage drop between VBAT and VSYS in HW charging = 300mV */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG9,
- (PM2XXX_CH_150MV_DROP_300MV | PM2XXX_CHARCHING_INFO_DIS |
- PM2XXX_CH_CC_REDUCED_CURRENT_IDENT |
- PM2XXX_CH_CC_MODEDROP_DIS));
-
- /* Input charger level of over voltage = 10V */
- ret = pm2xxx_reg_write(pm2, PM2XXX_INP_VOLT_VPWR2,
- PM2XXX_VPWR2_OVV_10);
- ret = pm2xxx_reg_write(pm2, PM2XXX_INP_VOLT_VPWR1,
- PM2XXX_VPWR1_OVV_10);
-
- /* Input charger drop */
- ret = pm2xxx_reg_write(pm2, PM2XXX_INP_DROP_VPWR2,
- (PM2XXX_VPWR2_HW_OPT_DIS | PM2XXX_VPWR2_VALID_DIS |
- PM2XXX_VPWR2_DROP_DIS));
- ret = pm2xxx_reg_write(pm2, PM2XXX_INP_DROP_VPWR1,
- (PM2XXX_VPWR1_HW_OPT_DIS | PM2XXX_VPWR1_VALID_DIS |
- PM2XXX_VPWR1_DROP_DIS));
-
- /* Disable battery low monitoring */
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_LOW_LEV_COMP_REG,
- PM2XXX_VBAT_LOW_MONITORING_ENA);
-
- return ret;
-}
-
-static int pm2xxx_charger_ac_en(struct ux500_charger *charger,
- int enable, int vset, int iset)
-{
- int ret;
- int volt_index;
- int curr_index;
- u8 val;
-
- struct pm2xxx_charger *pm2 = to_pm2xxx_charger_ac_device_info(charger);
-
- if (enable) {
- if (!pm2->ac.charger_connected) {
- dev_dbg(pm2->dev, "AC charger not connected\n");
- return -ENXIO;
- }
-
- dev_dbg(pm2->dev, "Enable AC: %dmV %dmA\n", vset, iset);
- if (!pm2->vddadc_en_ac) {
- ret = regulator_enable(pm2->regu);
- if (ret)
- dev_warn(pm2->dev,
- "Failed to enable vddadc regulator\n");
- else
- pm2->vddadc_en_ac = true;
- }
-
- ret = pm2xxx_charging_init(pm2);
- if (ret < 0) {
- dev_err(pm2->dev, "%s charging init failed\n",
- __func__);
- goto error_occured;
- }
-
- volt_index = pm2xxx_voltage_to_regval(vset);
- curr_index = pm2xxx_current_to_regval(iset);
-
- if (volt_index < 0 || curr_index < 0) {
- dev_err(pm2->dev,
- "Charger voltage or current too high, "
- "charging not started\n");
- return -ENXIO;
- }
-
- ret = pm2xxx_reg_read(pm2, PM2XXX_BATT_CTRL_REG8, &val);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx read failed\n", __func__);
- goto error_occured;
- }
- val &= ~PM2XXX_CH_VOLT_MASK;
- val |= volt_index;
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG8, val);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx write failed\n", __func__);
- goto error_occured;
- }
-
- ret = pm2xxx_reg_read(pm2, PM2XXX_BATT_CTRL_REG6, &val);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx read failed\n", __func__);
- goto error_occured;
- }
- val &= ~PM2XXX_DIR_CH_CC_CURRENT_MASK;
- val |= curr_index;
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_CTRL_REG6, val);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx write failed\n", __func__);
- goto error_occured;
- }
-
- if (!pm2->bat->enable_overshoot) {
- ret = pm2xxx_reg_read(pm2, PM2XXX_LED_CTRL_REG, &val);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx read failed\n",
- __func__);
- goto error_occured;
- }
- val |= PM2XXX_ANTI_OVERSHOOT_EN;
- ret = pm2xxx_reg_write(pm2, PM2XXX_LED_CTRL_REG, val);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx write failed\n",
- __func__);
- goto error_occured;
- }
- }
-
- ret = pm2xxx_charging_enable_mngt(pm2);
- if (ret < 0) {
- dev_err(pm2->dev, "Failed to enable"
- "pm2xxx ac charger\n");
- goto error_occured;
- }
-
- pm2->ac.charger_online = 1;
- } else {
- pm2->ac.charger_online = 0;
- pm2->ac.wd_expired = false;
-
- /* Disable regulator if enabled */
- if (pm2->vddadc_en_ac) {
- regulator_disable(pm2->regu);
- pm2->vddadc_en_ac = false;
- }
-
- ret = pm2xxx_charging_disable_mngt(pm2);
- if (ret < 0) {
- dev_err(pm2->dev, "failed to disable"
- "pm2xxx ac charger\n");
- goto error_occured;
- }
-
- dev_dbg(pm2->dev, "PM2301: " "Disabled AC charging\n");
- }
- power_supply_changed(pm2->ac_chg.psy);
-
-error_occured:
- return ret;
-}
-
-static int pm2xxx_charger_watchdog_kick(struct ux500_charger *charger)
-{
- int ret;
- struct pm2xxx_charger *pm2;
-
- if (charger->psy->desc->type == POWER_SUPPLY_TYPE_MAINS)
- pm2 = to_pm2xxx_charger_ac_device_info(charger);
- else
- return -ENXIO;
-
- ret = pm2xxx_reg_write(pm2, PM2XXX_BATT_WD_KICK, WD_TIMER);
- if (ret)
- dev_err(pm2->dev, "Failed to kick WD!\n");
-
- return ret;
-}
-
-static void pm2xxx_charger_ac_work(struct work_struct *work)
-{
- struct pm2xxx_charger *pm2 = container_of(work,
- struct pm2xxx_charger, ac_work);
-
-
- power_supply_changed(pm2->ac_chg.psy);
- sysfs_notify(&pm2->ac_chg.psy->dev.kobj, NULL, "present");
-};
-
-static void pm2xxx_charger_check_hw_failure_work(struct work_struct *work)
-{
- u8 reg_value;
-
- struct pm2xxx_charger *pm2 = container_of(work,
- struct pm2xxx_charger, check_hw_failure_work.work);
-
- if (pm2->flags.ovv) {
-		pm2xxx_reg_read(pm2, PM2XXX_SRCE_REG_INT4, &reg_value);
-
- if (!(reg_value & (PM2XXX_INT4_S_ITVPWR1OVV |
- PM2XXX_INT4_S_ITVPWR2OVV))) {
- pm2->flags.ovv = false;
- power_supply_changed(pm2->ac_chg.psy);
- }
- }
-
- /* If we still have a failure, schedule a new check */
- if (pm2->flags.ovv) {
- queue_delayed_work(pm2->charger_wq,
- &pm2->check_hw_failure_work, round_jiffies(HZ));
- }
-}
-
-static void pm2xxx_charger_check_main_thermal_prot_work(
- struct work_struct *work)
-{
- int ret;
- u8 val;
-
- struct pm2xxx_charger *pm2 = container_of(work, struct pm2xxx_charger,
- check_main_thermal_prot_work);
-
- /* Check if die temp warning is still active */
- ret = pm2xxx_reg_read(pm2, PM2XXX_SRCE_REG_INT5, &val);
- if (ret < 0) {
- dev_err(pm2->dev, "%s pm2xxx read failed\n", __func__);
- return;
- }
- if (val & (PM2XXX_INT5_S_ITTHERMALWARNINGRISE
- | PM2XXX_INT5_S_ITTHERMALSHUTDOWNRISE))
- pm2->flags.main_thermal_prot = true;
- else if (val & (PM2XXX_INT5_S_ITTHERMALWARNINGFALL
- | PM2XXX_INT5_S_ITTHERMALSHUTDOWNFALL))
- pm2->flags.main_thermal_prot = false;
-
- power_supply_changed(pm2->ac_chg.psy);
-}
-
-static struct pm2xxx_interrupts pm2xxx_int = {
- .handler[0] = pm2_int_reg0,
- .handler[1] = pm2_int_reg1,
- .handler[2] = pm2_int_reg2,
- .handler[3] = pm2_int_reg3,
- .handler[4] = pm2_int_reg4,
- .handler[5] = pm2_int_reg5,
-};
-
-static struct pm2xxx_irq pm2xxx_charger_irq[] = {
- {"PM2XXX_IRQ_INT", pm2xxx_irq_int},
-};
-
-static int __maybe_unused pm2xxx_wall_charger_resume(struct device *dev)
-{
- struct i2c_client *i2c_client = to_i2c_client(dev);
- struct pm2xxx_charger *pm2;
-
- pm2 = (struct pm2xxx_charger *)i2c_get_clientdata(i2c_client);
- set_lpn_pin(pm2);
-
- /* If we still have a HW failure, schedule a new check */
- if (pm2->flags.ovv)
- queue_delayed_work(pm2->charger_wq,
- &pm2->check_hw_failure_work, 0);
-
- return 0;
-}
-
-static int __maybe_unused pm2xxx_wall_charger_suspend(struct device *dev)
-{
- struct i2c_client *i2c_client = to_i2c_client(dev);
- struct pm2xxx_charger *pm2;
-
- pm2 = (struct pm2xxx_charger *)i2c_get_clientdata(i2c_client);
- clear_lpn_pin(pm2);
-
- /* Cancel any pending HW failure check */
- if (delayed_work_pending(&pm2->check_hw_failure_work))
- cancel_delayed_work(&pm2->check_hw_failure_work);
-
- flush_work(&pm2->ac_work);
- flush_work(&pm2->check_main_thermal_prot_work);
-
- return 0;
-}
-
-static int __maybe_unused pm2xxx_runtime_suspend(struct device *dev)
-{
- struct i2c_client *pm2xxx_i2c_client = to_i2c_client(dev);
- struct pm2xxx_charger *pm2;
-
- pm2 = (struct pm2xxx_charger *)i2c_get_clientdata(pm2xxx_i2c_client);
- clear_lpn_pin(pm2);
-
- return 0;
-}
-
-static int __maybe_unused pm2xxx_runtime_resume(struct device *dev)
-{
- struct i2c_client *pm2xxx_i2c_client = to_i2c_client(dev);
- struct pm2xxx_charger *pm2;
-
- pm2 = (struct pm2xxx_charger *)i2c_get_clientdata(pm2xxx_i2c_client);
-
- if (gpio_is_valid(pm2->lpn_pin) && gpio_get_value(pm2->lpn_pin) == 0)
- set_lpn_pin(pm2);
-
- return 0;
-}
-
-static const struct dev_pm_ops pm2xxx_pm_ops __maybe_unused = {
- SET_SYSTEM_SLEEP_PM_OPS(pm2xxx_wall_charger_suspend,
- pm2xxx_wall_charger_resume)
- SET_RUNTIME_PM_OPS(pm2xxx_runtime_suspend, pm2xxx_runtime_resume, NULL)
-};
-
-static int pm2xxx_wall_charger_probe(struct i2c_client *i2c_client,
- const struct i2c_device_id *id)
-{
- struct pm2xxx_platform_data *pl_data = i2c_client->dev.platform_data;
- struct power_supply_config psy_cfg = {};
- struct pm2xxx_charger *pm2;
- int ret = 0;
- u8 val;
- int i;
-
- if (!pl_data) {
- dev_err(&i2c_client->dev, "No platform data supplied\n");
- return -EINVAL;
- }
-
- pm2 = kzalloc(sizeof(struct pm2xxx_charger), GFP_KERNEL);
- if (!pm2) {
- dev_err(&i2c_client->dev, "pm2xxx_charger allocation failed\n");
- return -ENOMEM;
- }
-
- /* get parent data */
- pm2->dev = &i2c_client->dev;
-
- pm2->pm2_int = &pm2xxx_int;
-
-	/* get charger specific platform data */
- if (!pl_data->wall_charger) {
- dev_err(pm2->dev, "no charger platform data supplied\n");
- ret = -EINVAL;
- goto free_device_info;
- }
-
- pm2->pdata = pl_data->wall_charger;
-
- /* get battery specific platform data */
- if (!pl_data->battery) {
- dev_err(pm2->dev, "no battery platform data supplied\n");
- ret = -EINVAL;
- goto free_device_info;
- }
-
- pm2->bat = pl_data->battery;
-
- if (!i2c_check_functionality(i2c_client->adapter,
- I2C_FUNC_SMBUS_BYTE_DATA |
- I2C_FUNC_SMBUS_READ_WORD_DATA)) {
- ret = -ENODEV;
- dev_info(pm2->dev, "pm2301 i2c_check_functionality failed\n");
- goto free_device_info;
- }
-
- pm2->config.pm2xxx_i2c = i2c_client;
- pm2->config.pm2xxx_id = (struct i2c_device_id *) id;
- i2c_set_clientdata(i2c_client, pm2);
-
- /* AC supply */
- /* power_supply base class */
- pm2->ac_chg_desc.name = pm2->pdata->label;
- pm2->ac_chg_desc.type = POWER_SUPPLY_TYPE_MAINS;
- pm2->ac_chg_desc.properties = pm2xxx_charger_ac_props;
- pm2->ac_chg_desc.num_properties = ARRAY_SIZE(pm2xxx_charger_ac_props);
- pm2->ac_chg_desc.get_property = pm2xxx_charger_ac_get_property;
-
- psy_cfg.supplied_to = pm2->pdata->supplied_to;
- psy_cfg.num_supplicants = pm2->pdata->num_supplicants;
- /* pm2xxx_charger sub-class */
- pm2->ac_chg.ops.enable = &pm2xxx_charger_ac_en;
- pm2->ac_chg.ops.kick_wd = &pm2xxx_charger_watchdog_kick;
- pm2->ac_chg.ops.update_curr = &pm2xxx_charger_update_charger_current;
- pm2->ac_chg.max_out_volt = pm2xxx_charger_voltage_map[
- ARRAY_SIZE(pm2xxx_charger_voltage_map) - 1];
- pm2->ac_chg.max_out_curr = pm2xxx_charger_current_map[
- ARRAY_SIZE(pm2xxx_charger_current_map) - 1];
- pm2->ac_chg.wdt_refresh = WD_KICK_INTERVAL;
- pm2->ac_chg.enabled = true;
- pm2->ac_chg.external = true;
-
- /* Create a work queue for the charger */
- pm2->charger_wq = alloc_ordered_workqueue("pm2xxx_charger_wq",
- WQ_MEM_RECLAIM);
- if (pm2->charger_wq == NULL) {
- ret = -ENOMEM;
- dev_err(pm2->dev, "failed to create work queue\n");
- goto free_device_info;
- }
-
- /* Init work for charger detection */
- INIT_WORK(&pm2->ac_work, pm2xxx_charger_ac_work);
-
- /* Init work for checking HW status */
- INIT_WORK(&pm2->check_main_thermal_prot_work,
- pm2xxx_charger_check_main_thermal_prot_work);
-
- /* Init work for HW failure check */
- INIT_DEFERRABLE_WORK(&pm2->check_hw_failure_work,
- pm2xxx_charger_check_hw_failure_work);
-
- /*
- * VDD ADC supply needs to be enabled from this driver when there
- * is a charger connected to avoid erroneous BTEMP_HIGH/LOW
- * interrupts during charging
- */
- pm2->regu = regulator_get(pm2->dev, "vddadc");
- if (IS_ERR(pm2->regu)) {
- ret = PTR_ERR(pm2->regu);
- dev_err(pm2->dev, "failed to get vddadc regulator\n");
- goto free_charger_wq;
- }
-
- /* Register AC charger class */
- pm2->ac_chg.psy = power_supply_register(pm2->dev, &pm2->ac_chg_desc,
- &psy_cfg);
- if (IS_ERR(pm2->ac_chg.psy)) {
- dev_err(pm2->dev, "failed to register AC charger\n");
- ret = PTR_ERR(pm2->ac_chg.psy);
- goto free_regulator;
- }
-
- /* Register interrupts */
- ret = request_threaded_irq(gpio_to_irq(pm2->pdata->gpio_irq_number),
- NULL,
- pm2xxx_charger_irq[0].isr,
- pm2->pdata->irq_type | IRQF_ONESHOT,
- pm2xxx_charger_irq[0].name, pm2);
-
- if (ret != 0) {
- dev_err(pm2->dev, "failed to request %s IRQ %d: %d\n",
- pm2xxx_charger_irq[0].name,
- gpio_to_irq(pm2->pdata->gpio_irq_number), ret);
- goto unregister_pm2xxx_charger;
- }
-
- ret = pm_runtime_set_active(pm2->dev);
- if (ret)
- dev_err(pm2->dev, "set active Error\n");
-
- pm_runtime_enable(pm2->dev);
- pm_runtime_set_autosuspend_delay(pm2->dev, PM2XXX_AUTOSUSPEND_DELAY);
- pm_runtime_use_autosuspend(pm2->dev);
- pm_runtime_resume(pm2->dev);
-
- /* pm interrupt can wake up system */
- ret = enable_irq_wake(gpio_to_irq(pm2->pdata->gpio_irq_number));
- if (ret) {
- dev_err(pm2->dev, "failed to set irq wake\n");
- goto unregister_pm2xxx_interrupt;
- }
-
- mutex_init(&pm2->lock);
-
- if (gpio_is_valid(pm2->pdata->lpn_gpio)) {
- /* get lpn GPIO from platform data */
- pm2->lpn_pin = pm2->pdata->lpn_gpio;
-
- /*
- * Charger detection mechanism requires pulling up the LPN pin
- * while i2c communication if Charger is not connected
- * LPN pin of PM2301 is GPIO60 of AB9540
- */
- ret = gpio_request(pm2->lpn_pin, "pm2301_lpm_gpio");
-
- if (ret < 0) {
- dev_err(pm2->dev, "pm2301_lpm_gpio request failed\n");
- goto disable_pm2_irq_wake;
- }
- ret = gpio_direction_output(pm2->lpn_pin, 0);
- if (ret < 0) {
- dev_err(pm2->dev, "pm2301_lpm_gpio direction failed\n");
- goto free_gpio;
- }
- set_lpn_pin(pm2);
- }
-
- /* read interrupt registers */
- for (i = 0; i < PM2XXX_NUM_INT_REG; i++)
- pm2xxx_reg_read(pm2,
- pm2xxx_interrupt_registers[i],
- &val);
-
- ret = pm2xxx_charger_detection(pm2, &val);
-
- if ((ret == 0) && val) {
- pm2->ac.charger_connected = 1;
- ab8500_override_turn_on_stat(~AB8500_POW_KEY_1_ON,
- AB8500_MAIN_CH_DET);
- pm2->ac_conn = true;
- power_supply_changed(pm2->ac_chg.psy);
- sysfs_notify(&pm2->ac_chg.psy->dev.kobj, NULL, "present");
- }
-
- return 0;
-
-free_gpio:
- if (gpio_is_valid(pm2->lpn_pin))
- gpio_free(pm2->lpn_pin);
-disable_pm2_irq_wake:
- disable_irq_wake(gpio_to_irq(pm2->pdata->gpio_irq_number));
-unregister_pm2xxx_interrupt:
- /* disable interrupt */
- free_irq(gpio_to_irq(pm2->pdata->gpio_irq_number), pm2);
-unregister_pm2xxx_charger:
- /* unregister power supply */
- power_supply_unregister(pm2->ac_chg.psy);
-free_regulator:
- /* disable the regulator */
- regulator_put(pm2->regu);
-free_charger_wq:
- destroy_workqueue(pm2->charger_wq);
-free_device_info:
- kfree(pm2);
-
- return ret;
-}
-
-static int pm2xxx_wall_charger_remove(struct i2c_client *i2c_client)
-{
- struct pm2xxx_charger *pm2 = i2c_get_clientdata(i2c_client);
-
- /* Disable pm_runtime */
- pm_runtime_disable(pm2->dev);
- /* Disable AC charging */
- pm2xxx_charger_ac_en(&pm2->ac_chg, false, 0, 0);
-
- /* Disable wake by pm interrupt */
- disable_irq_wake(gpio_to_irq(pm2->pdata->gpio_irq_number));
-
- /* Disable interrupts */
- free_irq(gpio_to_irq(pm2->pdata->gpio_irq_number), pm2);
-
- /* Delete the work queue */
- destroy_workqueue(pm2->charger_wq);
-
- flush_scheduled_work();
-
- /* disable the regulator */
- regulator_put(pm2->regu);
-
- power_supply_unregister(pm2->ac_chg.psy);
-
- if (gpio_is_valid(pm2->lpn_pin))
- gpio_free(pm2->lpn_pin);
-
- kfree(pm2);
-
- return 0;
-}
-
-static const struct i2c_device_id pm2xxx_id[] = {
- { "pm2301", 0 },
- { }
-};
-
-MODULE_DEVICE_TABLE(i2c, pm2xxx_id);
-
-static struct i2c_driver pm2xxx_charger_driver = {
- .probe = pm2xxx_wall_charger_probe,
- .remove = pm2xxx_wall_charger_remove,
- .driver = {
- .name = "pm2xxx-wall_charger",
- .pm = IS_ENABLED(CONFIG_PM) ? &pm2xxx_pm_ops : NULL,
- },
- .id_table = pm2xxx_id,
-};
-
-static int __init pm2xxx_charger_init(void)
-{
- return i2c_add_driver(&pm2xxx_charger_driver);
-}
-
-static void __exit pm2xxx_charger_exit(void)
-{
- i2c_del_driver(&pm2xxx_charger_driver);
-}
-
-device_initcall_sync(pm2xxx_charger_init);
-module_exit(pm2xxx_charger_exit);
-
-MODULE_LICENSE("GPL v2");
-MODULE_AUTHOR("Rajkumar kasirajan, Olivier Launay");
-MODULE_DESCRIPTION("PM2xxx charger management driver");
#define CHG_STATE_NO_BAT2 13
#define CHG_STATE_CHG_READY_VUSB 14
+#define GCHGDET_TYPE_MASK 0x30
+#define GCHGDET_TYPE_SDP 0x00
+#define GCHGDET_TYPE_CDP 0x10
+#define GCHGDET_TYPE_DCP 0x20
+
#define FG_ENABLE 1
+/*
+ * The formula seems accurate for the battery current, but for the USB input
+ * current roughly 70mA per step was observed on a Kobo Clara HD. All
+ * available sources show the same formula for the USB current as well, so to
+ * avoid accidentally programming unwanted high currents we stick to that
+ * formula here, too.
+ */
+#define TO_CUR_REG(x) ((x) / 100000 - 1)
+#define FROM_CUR_REG(x) ((((x) & 0x1f) + 1) * 100000)
+#define CHG_MIN_CUR 100000
+#define CHG_MAX_CUR 1800000
+#define ADP_MAX_CUR 2500000
+#define USB_MAX_CUR 1400000
+
+
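As a quick worked example of the conversion macros above (an illustrative sketch, values in microamps, not part of the driver): a 900 mA request encodes as register value 8, and decoding 8 yields 900 mA again, so the round trip is exact for multiples of 100 mA:

    TO_CUR_REG(900000)  == 900000 / 100000 - 1       == 8
    FROM_CUR_REG(8)     == ((8 & 0x1f) + 1) * 100000 == 900000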
struct rn5t618_power_info {
struct rn5t618 *rn5t618;
struct platform_device *pdev;
int irq;
};
+static enum power_supply_usb_type rn5t618_usb_types[] = {
+ POWER_SUPPLY_USB_TYPE_SDP,
+ POWER_SUPPLY_USB_TYPE_DCP,
+ POWER_SUPPLY_USB_TYPE_CDP,
+ POWER_SUPPLY_USB_TYPE_UNKNOWN
+};
+
static enum power_supply_property rn5t618_usb_props[] = {
+ /* input current limit is not very accurate */
+ POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT,
POWER_SUPPLY_PROP_STATUS,
+ POWER_SUPPLY_PROP_USB_TYPE,
POWER_SUPPLY_PROP_ONLINE,
};
static enum power_supply_property rn5t618_adp_props[] = {
+ /* input current limit is not very accurate */
+ POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT,
POWER_SUPPLY_PROP_STATUS,
POWER_SUPPLY_PROP_ONLINE,
};
POWER_SUPPLY_PROP_TIME_TO_EMPTY_NOW,
POWER_SUPPLY_PROP_TIME_TO_FULL_NOW,
POWER_SUPPLY_PROP_TECHNOLOGY,
+ POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT,
POWER_SUPPLY_PROP_CHARGE_FULL,
POWER_SUPPLY_PROP_CHARGE_NOW,
};
return 0;
}
+static int rn5t618_battery_set_current_limit(struct rn5t618_power_info *info,
+ const union power_supply_propval *val)
+{
+ if (val->intval < CHG_MIN_CUR)
+ return -EINVAL;
+
+ if (val->intval >= CHG_MAX_CUR)
+ return -EINVAL;
+
+ return regmap_update_bits(info->rn5t618->regmap,
+ RN5T618_CHGISET,
+ 0x1F, TO_CUR_REG(val->intval));
+}
+
+static int rn5t618_battery_get_current_limit(struct rn5t618_power_info *info,
+ union power_supply_propval *val)
+{
+ unsigned int regval;
+ int ret;
+
+ ret = regmap_read(info->rn5t618->regmap, RN5T618_CHGISET,
+			  &regval);
+ if (ret < 0)
+ return ret;
+
+ val->intval = FROM_CUR_REG(regval);
+
+ return 0;
+}
+
static int rn5t618_battery_charge_full(struct rn5t618_power_info *info,
union power_supply_propval *val)
{
case POWER_SUPPLY_PROP_TECHNOLOGY:
val->intval = POWER_SUPPLY_TECHNOLOGY_LION;
break;
+ case POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT:
+ ret = rn5t618_battery_get_current_limit(info, val);
+ break;
case POWER_SUPPLY_PROP_CHARGE_FULL:
ret = rn5t618_battery_charge_full(info, val);
break;
return ret;
}
+static int rn5t618_battery_set_property(struct power_supply *psy,
+ enum power_supply_property psp,
+ const union power_supply_propval *val)
+{
+ struct rn5t618_power_info *info = power_supply_get_drvdata(psy);
+
+ switch (psp) {
+ case POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT:
+ return rn5t618_battery_set_current_limit(info, val);
+ default:
+ return -EINVAL;
+ }
+}
+
+static int rn5t618_battery_property_is_writeable(struct power_supply *psy,
+ enum power_supply_property psp)
+{
+ switch (psp) {
+ case POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT:
+ return true;
+ default:
+ return false;
+ }
+}
+
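With rn5t618_battery_set_property() and rn5t618_battery_property_is_writeable() above, the charge current limit becomes writable from userspace through the standard power-supply sysfs interface, e.g. `echo 500000 > /sys/class/power_supply/rn5t618-battery/charge_control_limit` (path assumed from the "rn5t618-battery" supply name registered further down); values below CHG_MIN_CUR or at/above CHG_MAX_CUR are rejected with -EINVAL.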
static int rn5t618_adp_get_property(struct power_supply *psy,
enum power_supply_property psp,
union power_supply_propval *val)
{
struct rn5t618_power_info *info = power_supply_get_drvdata(psy);
unsigned int chgstate;
+ unsigned int regval;
bool online;
int ret;
if (val->intval != POWER_SUPPLY_STATUS_CHARGING)
val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ break;
+ case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
+ ret = regmap_read(info->rn5t618->regmap,
+				  RN5T618_REGISET1, &regval);
+ if (ret < 0)
+ return ret;
+
+ val->intval = FROM_CUR_REG(regval);
break;
default:
return -EINVAL;
return 0;
}
+static int rn5t618_adp_set_property(struct power_supply *psy,
+ enum power_supply_property psp,
+ const union power_supply_propval *val)
+{
+ struct rn5t618_power_info *info = power_supply_get_drvdata(psy);
+ int ret;
+
+ switch (psp) {
+ case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
+ if (val->intval > ADP_MAX_CUR)
+ return -EINVAL;
+
+ if (val->intval < CHG_MIN_CUR)
+ return -EINVAL;
+
+ ret = regmap_write(info->rn5t618->regmap, RN5T618_REGISET1,
+ TO_CUR_REG(val->intval));
+ if (ret < 0)
+ return ret;
+
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int rn5t618_adp_property_is_writeable(struct power_supply *psy,
+ enum power_supply_property psp)
+{
+ switch (psp) {
+ case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static int rc5t619_usb_get_type(struct rn5t618_power_info *info,
+ union power_supply_propval *val)
+{
+ unsigned int regval;
+ int ret;
+
+	ret = regmap_read(info->rn5t618->regmap, RN5T618_GCHGDET, &regval);
+ if (ret < 0)
+ return ret;
+
+ switch (regval & GCHGDET_TYPE_MASK) {
+ case GCHGDET_TYPE_SDP:
+ val->intval = POWER_SUPPLY_USB_TYPE_SDP;
+ break;
+ case GCHGDET_TYPE_CDP:
+ val->intval = POWER_SUPPLY_USB_TYPE_CDP;
+ break;
+ case GCHGDET_TYPE_DCP:
+ val->intval = POWER_SUPPLY_USB_TYPE_DCP;
+ break;
+ default:
+ val->intval = POWER_SUPPLY_USB_TYPE_UNKNOWN;
+ }
+
+ return 0;
+}
+
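For reference: GCHGDET holds the PMIC's charger-detection result, with bits [5:4] (GCHGDET_TYPE_MASK) encoding the attached port type; the switch above maps that field onto the SDP/CDP/DCP entries advertised in rn5t618_usb_types.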
static int rn5t618_usb_get_property(struct power_supply *psy,
enum power_supply_property psp,
union power_supply_propval *val)
{
struct rn5t618_power_info *info = power_supply_get_drvdata(psy);
unsigned int chgstate;
+ unsigned int regval;
bool online;
int ret;
if (val->intval != POWER_SUPPLY_STATUS_CHARGING)
val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ break;
+ case POWER_SUPPLY_PROP_USB_TYPE:
+ if (!online || (info->rn5t618->variant != RC5T619))
+ return -ENODATA;
+
+ return rc5t619_usb_get_type(info, val);
+ case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
+ ret = regmap_read(info->rn5t618->regmap, RN5T618_CHGCTL1,
+				  &regval);
+ if (ret < 0)
+ return ret;
+
+ val->intval = 0;
+ if (regval & 2) {
+ ret = regmap_read(info->rn5t618->regmap,
+ RN5T618_REGISET2,
+					  &regval);
+ if (ret < 0)
+ return ret;
+
+ val->intval = FROM_CUR_REG(regval);
+ }
break;
default:
return -EINVAL;
return 0;
}
+static int rn5t618_usb_set_property(struct power_supply *psy,
+ enum power_supply_property psp,
+ const union power_supply_propval *val)
+{
+ struct rn5t618_power_info *info = power_supply_get_drvdata(psy);
+ int ret;
+
+ switch (psp) {
+ case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
+ if (val->intval > USB_MAX_CUR)
+ return -EINVAL;
+
+ if (val->intval < CHG_MIN_CUR)
+ return -EINVAL;
+
+ ret = regmap_write(info->rn5t618->regmap, RN5T618_REGISET2,
+ 0xE0 | TO_CUR_REG(val->intval));
+ if (ret < 0)
+ return ret;
+
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int rn5t618_usb_property_is_writeable(struct power_supply *psy,
+ enum power_supply_property psp)
+{
+ switch (psp) {
+ case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT:
+ return true;
+ default:
+ return false;
+ }
+}
+
static const struct power_supply_desc rn5t618_battery_desc = {
.name = "rn5t618-battery",
.type = POWER_SUPPLY_TYPE_BATTERY,
.properties = rn5t618_battery_props,
.num_properties = ARRAY_SIZE(rn5t618_battery_props),
.get_property = rn5t618_battery_get_property,
+ .set_property = rn5t618_battery_set_property,
+ .property_is_writeable = rn5t618_battery_property_is_writeable,
};
static const struct power_supply_desc rn5t618_adp_desc = {
.properties = rn5t618_adp_props,
.num_properties = ARRAY_SIZE(rn5t618_adp_props),
.get_property = rn5t618_adp_get_property,
+ .set_property = rn5t618_adp_set_property,
+ .property_is_writeable = rn5t618_adp_property_is_writeable,
};
static const struct power_supply_desc rn5t618_usb_desc = {
.name = "rn5t618-usb",
.type = POWER_SUPPLY_TYPE_USB,
+ .usb_types = rn5t618_usb_types,
+ .num_usb_types = ARRAY_SIZE(rn5t618_usb_types),
.properties = rn5t618_usb_props,
.num_properties = ARRAY_SIZE(rn5t618_usb_props),
.get_property = rn5t618_usb_get_property,
+ .set_property = rn5t618_usb_set_property,
+ .property_is_writeable = rn5t618_usb_property_is_writeable,
};
static irqreturn_t rn5t618_charger_irq(int irq, void *data)
};
MODULE_DEVICE_TABLE(i2c, rt5033_battery_id);
+static const struct of_device_id rt5033_battery_of_match[] = {
+ { .compatible = "richtek,rt5033-battery", },
+ { }
+};
+MODULE_DEVICE_TABLE(of, rt5033_battery_of_match);
+
static struct i2c_driver rt5033_battery_driver = {
.driver = {
.name = "rt5033-battery",
+ .of_match_table = rt5033_battery_of_match,
},
.probe = rt5033_battery_probe,
.remove = rt5033_battery_remove,
/* Supports special manufacturer commands from TI BQ20Z65 and BQ20Z75 IC. */
#define SBS_FLAGS_TI_BQ20ZX5 BIT(0)
+static const enum power_supply_property string_properties[] = {
+ POWER_SUPPLY_PROP_TECHNOLOGY,
+ POWER_SUPPLY_PROP_MANUFACTURER,
+ POWER_SUPPLY_PROP_MODEL_NAME,
+};
+
+#define NR_STRING_BUFFERS ARRAY_SIZE(string_properties)
+
struct sbs_info {
struct i2c_client *client;
struct power_supply *power_supply;
struct delayed_work work;
struct mutex mode_lock;
u32 flags;
+ int technology;
+ char strings[NR_STRING_BUFFERS][I2C_SMBUS_BLOCK_MAX + 1];
};
-static char model_name[I2C_SMBUS_BLOCK_MAX + 1];
-static char manufacturer[I2C_SMBUS_BLOCK_MAX + 1];
-static char chemistry[I2C_SMBUS_BLOCK_MAX + 1];
+static char *sbs_get_string_buf(struct sbs_info *chip,
+ enum power_supply_property psp)
+{
+ int i = 0;
+
+ for (i = 0; i < NR_STRING_BUFFERS; i++)
+ if (string_properties[i] == psp)
+ return chip->strings[i];
+
+ return ERR_PTR(-EINVAL);
+}
+
+static void sbs_invalidate_cached_props(struct sbs_info *chip)
+{
+ int i = 0;
+
+ chip->technology = -1;
+
+ for (i = 0; i < NR_STRING_BUFFERS; i++)
+ chip->strings[i][0] = 0;
+}
+
static bool force_load;
static int sbs_read_word_data(struct i2c_client *client, u8 address);
chip->is_present = false;
/* Disable PEC when no device is present */
client->flags &= ~I2C_CLIENT_PEC;
+ sbs_invalidate_cached_props(chip);
return 0;
}
return 0;
}
-static int sbs_get_battery_string_property(struct i2c_client *client,
- int reg_offset, enum power_supply_property psp, char *val)
+static int sbs_get_property_index(struct i2c_client *client,
+ enum power_supply_property psp)
{
- s32 ret;
+ int count;
- ret = sbs_read_string_data(client, sbs_data[reg_offset].addr, val);
+ for (count = 0; count < ARRAY_SIZE(sbs_data); count++)
+ if (psp == sbs_data[count].psp)
+ return count;
- if (ret < 0)
- return ret;
+ dev_warn(&client->dev,
+ "%s: Invalid Property - %d\n", __func__, psp);
- return 0;
+ return -EINVAL;
+}
+
+static const char *sbs_get_constant_string(struct sbs_info *chip,
+ enum power_supply_property psp)
+{
+ int ret;
+ char *buf;
+ u8 addr;
+
+ buf = sbs_get_string_buf(chip, psp);
+ if (IS_ERR(buf))
+ return buf;
+
+ if (!buf[0]) {
+ ret = sbs_get_property_index(chip->client, psp);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ addr = sbs_data[ret].addr;
+
+ ret = sbs_read_string_data(chip->client, addr, buf);
+ if (ret < 0)
+ return ERR_PTR(ret);
+ }
+
+ return buf;
}
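Design note on the caching above: the string properties are constant for a given battery pack, so each is read over SMBus at most once and then served from the per-chip buffer; the buffers (together with the cached technology value) are cleared by sbs_invalidate_cached_props() whenever the battery is reported absent, so a swapped pack gets re-read instead of being served stale strings.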
static void sbs_unit_adjustment(struct i2c_client *client,
return 0;
}
-static int sbs_get_property_index(struct i2c_client *client,
- enum power_supply_property psp)
-{
- int count;
- for (count = 0; count < ARRAY_SIZE(sbs_data); count++)
- if (psp == sbs_data[count].psp)
- return count;
-
- dev_warn(&client->dev,
- "%s: Invalid Property - %d\n", __func__, psp);
-
- return -EINVAL;
-}
-
-static int sbs_get_chemistry(struct i2c_client *client,
+static int sbs_get_chemistry(struct sbs_info *chip,
union power_supply_propval *val)
{
- enum power_supply_property psp = POWER_SUPPLY_PROP_TECHNOLOGY;
- int ret;
+ const char *chemistry;
- ret = sbs_get_property_index(client, psp);
- if (ret < 0)
- return ret;
+ if (chip->technology != -1) {
+ val->intval = chip->technology;
+ return 0;
+ }
- ret = sbs_get_battery_string_property(client, ret, psp,
- chemistry);
- if (ret < 0)
- return ret;
+ chemistry = sbs_get_constant_string(chip, POWER_SUPPLY_PROP_TECHNOLOGY);
+
+ if (IS_ERR(chemistry))
+ return PTR_ERR(chemistry);
if (!strncasecmp(chemistry, "LION", 4))
- val->intval = POWER_SUPPLY_TECHNOLOGY_LION;
+ chip->technology = POWER_SUPPLY_TECHNOLOGY_LION;
else if (!strncasecmp(chemistry, "LiP", 3))
- val->intval = POWER_SUPPLY_TECHNOLOGY_LIPO;
+ chip->technology = POWER_SUPPLY_TECHNOLOGY_LIPO;
else if (!strncasecmp(chemistry, "NiCd", 4))
- val->intval = POWER_SUPPLY_TECHNOLOGY_NiCd;
+ chip->technology = POWER_SUPPLY_TECHNOLOGY_NiCd;
else if (!strncasecmp(chemistry, "NiMH", 4))
- val->intval = POWER_SUPPLY_TECHNOLOGY_NiMH;
+ chip->technology = POWER_SUPPLY_TECHNOLOGY_NiMH;
else
- val->intval = POWER_SUPPLY_TECHNOLOGY_UNKNOWN;
+ chip->technology = POWER_SUPPLY_TECHNOLOGY_UNKNOWN;
+
+ if (chip->technology == POWER_SUPPLY_TECHNOLOGY_UNKNOWN)
+ dev_warn(&chip->client->dev, "Unknown chemistry: %s\n", chemistry);
- if (val->intval == POWER_SUPPLY_TECHNOLOGY_UNKNOWN)
- dev_warn(&client->dev, "Unknown chemistry: %s\n", chemistry);
+ val->intval = chip->technology;
return 0;
}
int ret = 0;
struct sbs_info *chip = power_supply_get_drvdata(psy);
struct i2c_client *client = chip->client;
+ const char *str;
if (chip->gpio_detect) {
ret = gpiod_get_value_cansleep(chip->gpio_detect);
break;
case POWER_SUPPLY_PROP_TECHNOLOGY:
- ret = sbs_get_chemistry(client, val);
+ ret = sbs_get_chemistry(chip, val);
if (ret < 0)
break;
break;
case POWER_SUPPLY_PROP_MODEL_NAME:
- ret = sbs_get_property_index(client, psp);
- if (ret < 0)
- break;
-
- ret = sbs_get_battery_string_property(client, ret, psp,
- model_name);
- val->strval = model_name;
- break;
-
case POWER_SUPPLY_PROP_MANUFACTURER:
- ret = sbs_get_property_index(client, psp);
- if (ret < 0)
- break;
-
- ret = sbs_get_battery_string_property(client, ret, psp,
- manufacturer);
- val->strval = manufacturer;
+ str = sbs_get_constant_string(chip, psp);
+ if (IS_ERR(str))
+ ret = PTR_ERR(str);
+ else
+ val->strval = str;
break;
case POWER_SUPPLY_PROP_MANUFACTURE_YEAR:
psy_cfg.of_node = client->dev.of_node;
psy_cfg.drv_data = chip;
chip->last_state = POWER_SUPPLY_STATUS_UNKNOWN;
+ sbs_invalidate_cached_props(chip);
mutex_init(&chip->mode_lock);
/* use pdata if available, fall back to DT properties,
{ .compatible = "sprd,sc2731-charger", },
{ }
};
+MODULE_DEVICE_TABLE(of, sc2731_charger_of_match);
static struct platform_driver sc2731_charger_driver = {
.driver = {
{ .compatible = "sprd,sc2731-fgu", },
{ }
};
+MODULE_DEVICE_TABLE(of, sc27xx_fgu_of_match);
static struct platform_driver sc27xx_fgu_driver = {
.probe = sc27xx_fgu_probe,
#include <linux/delay.h>
#include <linux/err.h>
-#include <linux/gpio.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
struct spwr_battery_device *bat = container_of(nf, struct spwr_battery_device, notif);
int status;
+ /*
+ * We cannot use strict matching when registering the notifier as the
+ * EC expects us to register it against instance ID 0. Strict matching
+ * would thus drop events, as those may have non-zero instance IDs in
+ * this subsystem. So we need to check the instance ID of the event
+ * here manually.
+ */
+ if (event->instance_id != bat->sdev->uid.instance)
+ return 0;
+
dev_dbg(&bat->sdev->dev, "power event (cid = %#04x, iid = %#04x, tid = %#04x)\n",
event->command_id, event->instance_id, event->target_id);
bat->notif.base.fn = spwr_notify_bat;
bat->notif.event.reg = registry;
bat->notif.event.id.target_category = sdev->uid.category;
- bat->notif.event.id.instance = 0;
- bat->notif.event.mask = SSAM_EVENT_MASK_STRICT;
+ bat->notif.event.id.instance = 0; /* need to register with instance 0 */
+ bat->notif.event.mask = SSAM_EVENT_MASK_TARGET;
bat->notif.event.flags = SSAM_EVENT_SEQUENCED;
bat->psy_desc.name = bat->name;
static int spwr_ac_update_unlocked(struct spwr_ac_device *ac)
{
- u32 old = ac->state;
+ __le32 old = ac->state;
int status;
lockdep_assert_held(&ac->lock);
{
struct pwm_device *pwm;
- /* check, whether the driver supports a third cell for flags */
- if (pc->of_pwm_n_cells < 3)
+ if (pc->of_pwm_n_cells < 2)
return ERR_PTR(-EINVAL);
/* flags in the third cell are optional */
pwm->args.period = args->args[1];
pwm->args.polarity = PWM_POLARITY_NORMAL;
- if (args->args_count > 2 && args->args[2] & PWM_POLARITY_INVERTED)
- pwm->args.polarity = PWM_POLARITY_INVERSED;
+ if (pc->of_pwm_n_cells >= 3) {
+ if (args->args_count > 2 && args->args[2] & PWM_POLARITY_INVERTED)
+ pwm->args.polarity = PWM_POLARITY_INVERSED;
+ }
return pwm;
}
EXPORT_SYMBOL_GPL(of_pwm_xlate_with_flags);
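Because the third cell is now optional here (guarded by both of_pwm_n_cells and args_count), of_pwm_xlate_with_flags covers two-cell consumers as well; this is what allows of_pwm_simple_xlate below to be dropped and the translation function to be chosen from the provider's "#pwm-cells" property at registration time.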
-static struct pwm_device *
-of_pwm_simple_xlate(struct pwm_chip *pc, const struct of_phandle_args *args)
-{
- struct pwm_device *pwm;
-
- /* sanity check driver support */
- if (pc->of_pwm_n_cells < 2)
- return ERR_PTR(-EINVAL);
-
- /* all cells are required */
- if (args->args_count != pc->of_pwm_n_cells)
- return ERR_PTR(-EINVAL);
-
- if (args->args[0] >= pc->npwm)
- return ERR_PTR(-EINVAL);
-
- pwm = pwm_request_from_chip(pc, args->args[0], NULL);
- if (IS_ERR(pwm))
- return pwm;
-
- pwm->args.period = args->args[1];
-
- return pwm;
-}
-
static void of_pwmchip_add(struct pwm_chip *chip)
{
if (!chip->dev || !chip->dev->of_node)
return;
if (!chip->of_xlate) {
- chip->of_xlate = of_pwm_simple_xlate;
- chip->of_pwm_n_cells = 2;
+ u32 pwm_cells;
+
+ if (of_property_read_u32(chip->dev->of_node, "#pwm-cells",
+ &pwm_cells))
+ pwm_cells = 2;
+
+ chip->of_xlate = of_pwm_xlate_with_flags;
+ chip->of_pwm_n_cells = pwm_cells;
}
of_node_get(chip->dev->of_node);
*/
int pwmchip_remove(struct pwm_chip *chip)
{
- unsigned int i;
- int ret = 0;
-
pwmchip_sysfs_unexport(chip);
mutex_lock(&pwm_lock);
- for (i = 0; i < chip->npwm; i++) {
- struct pwm_device *pwm = &chip->pwms[i];
-
- if (test_bit(PWMF_REQUESTED, &pwm->flags)) {
- ret = -EBUSY;
- goto out;
- }
- }
-
list_del_init(&chip->list);
if (IS_ENABLED(CONFIG_OF))
free_pwms(chip);
-out:
mutex_unlock(&pwm_lock);
- return ret;
+
+ return 0;
}
EXPORT_SYMBOL_GPL(pwmchip_remove);
+static void devm_pwmchip_remove(void *data)
+{
+ struct pwm_chip *chip = data;
+
+ pwmchip_remove(chip);
+}
+
+int devm_pwmchip_add(struct device *dev, struct pwm_chip *chip)
+{
+ int ret;
+
+ ret = pwmchip_add(chip);
+ if (ret)
+ return ret;
+
+ return devm_add_action_or_reset(dev, devm_pwmchip_remove, chip);
+}
+EXPORT_SYMBOL_GPL(devm_pwmchip_add);
+
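A minimal sketch of a probe using the new helper (the foo_* names are hypothetical): the chip is unregistered automatically on driver detach, so no .remove callback is needed, which is what the driver conversions further down rely on.

static int foo_pwm_probe(struct platform_device *pdev)
{
	struct foo_pwm *priv;	/* hypothetical wrapper embedding a struct pwm_chip */

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->chip.dev = &pdev->dev;
	priv->chip.ops = &foo_pwm_ops;
	priv->chip.npwm = 1;

	/* devres drops the chip on unbind; no cleanup path to write */
	return devm_pwmchip_add(&pdev->dev, &priv->chip);
}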
/**
* pwm_request() - request a PWM device
* @pwm: global PWM device index
if (state->period == pwm->state.period &&
state->duty_cycle == pwm->state.duty_cycle &&
state->polarity == pwm->state.polarity &&
- state->enabled == pwm->state.enabled)
+ state->enabled == pwm->state.enabled &&
+ state->usage_power == pwm->state.usage_power)
return 0;
if (chip->ops->apply) {
}
EXPORT_SYMBOL_GPL(pwm_adjust_config);
-static struct pwm_chip *of_node_to_pwmchip(struct device_node *np)
+static struct pwm_chip *fwnode_to_pwmchip(struct fwnode_handle *fwnode)
{
struct pwm_chip *chip;
mutex_lock(&pwm_lock);
list_for_each_entry(chip, &pwm_chips, list)
- if (chip->dev && chip->dev->of_node == np) {
+ if (chip->dev && dev_fwnode(chip->dev) == fwnode) {
mutex_unlock(&pwm_lock);
return chip;
}
return ERR_PTR(err);
}
- pc = of_node_to_pwmchip(args.np);
+ pc = fwnode_to_pwmchip(of_fwnode_handle(args.np));
if (IS_ERR(pc)) {
if (PTR_ERR(pc) != -EPROBE_DEFER)
pr_err("%s(): PWM chip not found\n", __func__);
}
EXPORT_SYMBOL_GPL(of_pwm_get);
-#if IS_ENABLED(CONFIG_ACPI)
-static struct pwm_chip *device_to_pwmchip(struct device *dev)
-{
- struct pwm_chip *chip;
-
- mutex_lock(&pwm_lock);
-
- list_for_each_entry(chip, &pwm_chips, list) {
- struct acpi_device *adev = ACPI_COMPANION(chip->dev);
-
- if ((chip->dev == dev) || (adev && &adev->dev == dev)) {
- mutex_unlock(&pwm_lock);
- return chip;
- }
- }
-
- mutex_unlock(&pwm_lock);
-
- return ERR_PTR(-EPROBE_DEFER);
-}
-#endif
-
/**
* acpi_pwm_get() - request a PWM via parsing "pwms" property in ACPI
- * @fwnode: firmware node to get the "pwm" property from
+ * @fwnode: firmware node to get the "pwms" property from
*
* Returns the PWM device parsed from the fwnode and index specified in the
* "pwms" property or a negative error-code on failure.
* Returns: A pointer to the requested PWM device or an ERR_PTR()-encoded
* error code on failure.
*/
-static struct pwm_device *acpi_pwm_get(struct fwnode_handle *fwnode)
+static struct pwm_device *acpi_pwm_get(const struct fwnode_handle *fwnode)
{
- struct pwm_device *pwm = ERR_PTR(-ENODEV);
-#if IS_ENABLED(CONFIG_ACPI)
+ struct pwm_device *pwm;
struct fwnode_reference_args args;
- struct acpi_device *acpi;
struct pwm_chip *chip;
int ret;
if (ret < 0)
return ERR_PTR(ret);
- acpi = to_acpi_device_node(args.fwnode);
- if (!acpi)
- return ERR_PTR(-EINVAL);
-
if (args.nargs < 2)
return ERR_PTR(-EPROTO);
- chip = device_to_pwmchip(&acpi->dev);
+ chip = fwnode_to_pwmchip(args.fwnode);
if (IS_ERR(chip))
return ERR_CAST(chip);
if (args.nargs > 2 && args.args[2] & PWM_POLARITY_INVERTED)
pwm->args.polarity = PWM_POLARITY_INVERSED;
-#endif
return pwm;
}
*/
struct pwm_device *pwm_get(struct device *dev, const char *con_id)
{
+ const struct fwnode_handle *fwnode = dev ? dev_fwnode(dev) : NULL;
const char *dev_id = dev ? dev_name(dev) : NULL;
struct pwm_device *pwm;
struct pwm_chip *chip;
int err;
/* look up via DT first */
- if (IS_ENABLED(CONFIG_OF) && dev && dev->of_node)
- return of_pwm_get(dev, dev->of_node, con_id);
+ if (is_of_node(fwnode))
+ return of_pwm_get(dev, to_of_node(fwnode), con_id);
/* then lookup via ACPI */
- if (dev && is_acpi_node(dev->fwnode)) {
- pwm = acpi_pwm_get(dev->fwnode);
+ if (is_acpi_node(fwnode)) {
+ pwm = acpi_pwm_get(fwnode);
if (!IS_ERR(pwm) || PTR_ERR(pwm) != -ENOENT)
return pwm;
}
}
EXPORT_SYMBOL_GPL(pwm_put);
-static void devm_pwm_release(struct device *dev, void *res)
+static void devm_pwm_release(void *pwm)
{
- pwm_put(*(struct pwm_device **)res);
+ pwm_put(pwm);
}
/**
*/
struct pwm_device *devm_pwm_get(struct device *dev, const char *con_id)
{
- struct pwm_device **ptr, *pwm;
-
- ptr = devres_alloc(devm_pwm_release, sizeof(*ptr), GFP_KERNEL);
- if (!ptr)
- return ERR_PTR(-ENOMEM);
+ struct pwm_device *pwm;
+ int ret;
pwm = pwm_get(dev, con_id);
- if (!IS_ERR(pwm)) {
- *ptr = pwm;
- devres_add(dev, ptr);
- } else {
- devres_free(ptr);
- }
+ if (IS_ERR(pwm))
+ return pwm;
+
+ ret = devm_add_action_or_reset(dev, devm_pwm_release, pwm);
+ if (ret)
+ return ERR_PTR(ret);
return pwm;
}
struct pwm_device *devm_of_pwm_get(struct device *dev, struct device_node *np,
const char *con_id)
{
- struct pwm_device **ptr, *pwm;
-
- ptr = devres_alloc(devm_pwm_release, sizeof(*ptr), GFP_KERNEL);
- if (!ptr)
- return ERR_PTR(-ENOMEM);
+ struct pwm_device *pwm;
+ int ret;
pwm = of_pwm_get(dev, np, con_id);
- if (!IS_ERR(pwm)) {
- *ptr = pwm;
- devres_add(dev, ptr);
- } else {
- devres_free(ptr);
- }
+ if (IS_ERR(pwm))
+ return pwm;
+
+ ret = devm_add_action_or_reset(dev, devm_pwm_release, pwm);
+ if (ret)
+ return ERR_PTR(ret);
return pwm;
}
struct fwnode_handle *fwnode,
const char *con_id)
{
- struct pwm_device **ptr, *pwm = ERR_PTR(-ENODEV);
-
- ptr = devres_alloc(devm_pwm_release, sizeof(*ptr), GFP_KERNEL);
- if (!ptr)
- return ERR_PTR(-ENOMEM);
+ struct pwm_device *pwm = ERR_PTR(-ENODEV);
+ int ret;
if (is_of_node(fwnode))
pwm = of_pwm_get(dev, to_of_node(fwnode), con_id);
else if (is_acpi_node(fwnode))
pwm = acpi_pwm_get(fwnode);
+ if (IS_ERR(pwm))
+ return pwm;
- if (!IS_ERR(pwm)) {
- *ptr = pwm;
- devres_add(dev, ptr);
- } else {
- devres_free(ptr);
- }
+ ret = devm_add_action_or_reset(dev, devm_pwm_release, pwm);
+ if (ret)
+ return ERR_PTR(ret);
return pwm;
}
EXPORT_SYMBOL_GPL(devm_fwnode_pwm_get);
-static int devm_pwm_match(struct device *dev, void *res, void *data)
-{
- struct pwm_device **p = res;
-
- if (WARN_ON(!p || !*p))
- return 0;
-
- return *p == data;
-}
-
-/**
- * devm_pwm_put() - resource managed pwm_put()
- * @dev: device for PWM consumer
- * @pwm: PWM device
- *
- * Release a PWM previously allocated using devm_pwm_get(). Calling this
- * function is usually not needed because devm-allocated resources are
- * automatically released on driver detach.
- */
-void devm_pwm_put(struct device *dev, struct pwm_device *pwm)
-{
- WARN_ON(devres_release(dev, devm_pwm_release, devm_pwm_match, pwm));
-}
-EXPORT_SYMBOL_GPL(devm_pwm_put);
-
#ifdef CONFIG_DEBUG_FS
static void pwm_dbg_show(struct pwm_chip *chip, struct seq_file *s)
{
seq_printf(s, " polarity: %s",
state.polarity ? "inverse" : "normal");
+ if (state.usage_power)
+ seq_puts(s, " usage_power");
+
seq_puts(s, "\n");
}
}
chip->chip.ops = &atmel_hlcdc_pwm_ops;
chip->chip.dev = dev;
chip->chip.npwm = 1;
- chip->chip.of_xlate = of_pwm_xlate_with_flags;
- chip->chip.of_pwm_n_cells = 3;
ret = pwmchip_add(&chip->chip);
if (ret) {
tcbpwm->chip.dev = &pdev->dev;
tcbpwm->chip.ops = &atmel_tcb_pwm_ops;
- tcbpwm->chip.of_xlate = of_pwm_xlate_with_flags;
- tcbpwm->chip.of_pwm_n_cells = 3;
tcbpwm->chip.npwm = NPWM;
tcbpwm->channel = channel;
tcbpwm->regmap = regmap;
atmel_pwm->chip.dev = &pdev->dev;
atmel_pwm->chip.ops = &atmel_pwm_ops;
- atmel_pwm->chip.of_xlate = of_pwm_xlate_with_flags;
- atmel_pwm->chip.of_pwm_n_cells = 3;
atmel_pwm->chip.npwm = 4;
ret = pwmchip_add(&atmel_pwm->chip);
ip->chip.dev = &pdev->dev;
ip->chip.ops = &iproc_pwm_ops;
ip->chip.npwm = 4;
- ip->chip.of_xlate = of_pwm_xlate_with_flags;
- ip->chip.of_pwm_n_cells = 3;
ip->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ip->base))
kp->chip.dev = &pdev->dev;
kp->chip.ops = &kona_pwm_ops;
kp->chip.npwm = 6;
- kp->chip.of_xlate = of_pwm_xlate_with_flags;
- kp->chip.of_pwm_n_cells = 3;
kp->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(kp->base))
pc->chip.dev = &pdev->dev;
pc->chip.ops = &bcm2835_pwm_ops;
pc->chip.npwm = 2;
- pc->chip.of_xlate = of_pwm_xlate_with_flags;
- pc->chip.of_pwm_n_cells = 3;
platform_set_drvdata(pdev, pc);
return container_of(chip, struct berlin_pwm_chip, chip);
}
-static inline u32 berlin_pwm_readl(struct berlin_pwm_chip *chip,
+static inline u32 berlin_pwm_readl(struct berlin_pwm_chip *bpc,
unsigned int channel, unsigned long offset)
{
- return readl_relaxed(chip->base + channel * 0x10 + offset);
+ return readl_relaxed(bpc->base + channel * 0x10 + offset);
}
-static inline void berlin_pwm_writel(struct berlin_pwm_chip *chip,
+static inline void berlin_pwm_writel(struct berlin_pwm_chip *bpc,
unsigned int channel, u32 value,
unsigned long offset)
{
- writel_relaxed(value, chip->base + channel * 0x10 + offset);
+ writel_relaxed(value, bpc->base + channel * 0x10 + offset);
}
static int berlin_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm)
kfree(channel);
}
-static int berlin_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm_dev,
- int duty_ns, int period_ns)
+static int berlin_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ u64 duty_ns, u64 period_ns)
{
- struct berlin_pwm_chip *pwm = to_berlin_pwm_chip(chip);
+ struct berlin_pwm_chip *bpc = to_berlin_pwm_chip(chip);
bool prescale_4096 = false;
u32 value, duty, period;
u64 cycles;
- cycles = clk_get_rate(pwm->clk);
+ cycles = clk_get_rate(bpc->clk);
cycles *= period_ns;
do_div(cycles, NSEC_PER_SEC);
do_div(cycles, period_ns);
duty = cycles;
- value = berlin_pwm_readl(pwm, pwm_dev->hwpwm, BERLIN_PWM_CONTROL);
+ value = berlin_pwm_readl(bpc, pwm->hwpwm, BERLIN_PWM_CONTROL);
if (prescale_4096)
value |= BERLIN_PWM_PRESCALE_4096;
else
value &= ~BERLIN_PWM_PRESCALE_4096;
- berlin_pwm_writel(pwm, pwm_dev->hwpwm, value, BERLIN_PWM_CONTROL);
+ berlin_pwm_writel(bpc, pwm->hwpwm, value, BERLIN_PWM_CONTROL);
- berlin_pwm_writel(pwm, pwm_dev->hwpwm, duty, BERLIN_PWM_DUTY);
- berlin_pwm_writel(pwm, pwm_dev->hwpwm, period, BERLIN_PWM_TCNT);
+ berlin_pwm_writel(bpc, pwm->hwpwm, duty, BERLIN_PWM_DUTY);
+ berlin_pwm_writel(bpc, pwm->hwpwm, period, BERLIN_PWM_TCNT);
return 0;
}
static int berlin_pwm_set_polarity(struct pwm_chip *chip,
- struct pwm_device *pwm_dev,
+ struct pwm_device *pwm,
enum pwm_polarity polarity)
{
- struct berlin_pwm_chip *pwm = to_berlin_pwm_chip(chip);
+ struct berlin_pwm_chip *bpc = to_berlin_pwm_chip(chip);
u32 value;
- value = berlin_pwm_readl(pwm, pwm_dev->hwpwm, BERLIN_PWM_CONTROL);
+ value = berlin_pwm_readl(bpc, pwm->hwpwm, BERLIN_PWM_CONTROL);
if (polarity == PWM_POLARITY_NORMAL)
value &= ~BERLIN_PWM_INVERT_POLARITY;
else
value |= BERLIN_PWM_INVERT_POLARITY;
- berlin_pwm_writel(pwm, pwm_dev->hwpwm, value, BERLIN_PWM_CONTROL);
+ berlin_pwm_writel(bpc, pwm->hwpwm, value, BERLIN_PWM_CONTROL);
return 0;
}
-static int berlin_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm_dev)
+static int berlin_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
{
- struct berlin_pwm_chip *pwm = to_berlin_pwm_chip(chip);
+ struct berlin_pwm_chip *bpc = to_berlin_pwm_chip(chip);
u32 value;
- value = berlin_pwm_readl(pwm, pwm_dev->hwpwm, BERLIN_PWM_EN);
+ value = berlin_pwm_readl(bpc, pwm->hwpwm, BERLIN_PWM_EN);
value |= BERLIN_PWM_ENABLE;
- berlin_pwm_writel(pwm, pwm_dev->hwpwm, value, BERLIN_PWM_EN);
+ berlin_pwm_writel(bpc, pwm->hwpwm, value, BERLIN_PWM_EN);
return 0;
}
static void berlin_pwm_disable(struct pwm_chip *chip,
- struct pwm_device *pwm_dev)
+ struct pwm_device *pwm)
{
- struct berlin_pwm_chip *pwm = to_berlin_pwm_chip(chip);
+ struct berlin_pwm_chip *bpc = to_berlin_pwm_chip(chip);
u32 value;
- value = berlin_pwm_readl(pwm, pwm_dev->hwpwm, BERLIN_PWM_EN);
+ value = berlin_pwm_readl(bpc, pwm->hwpwm, BERLIN_PWM_EN);
value &= ~BERLIN_PWM_ENABLE;
- berlin_pwm_writel(pwm, pwm_dev->hwpwm, value, BERLIN_PWM_EN);
+ berlin_pwm_writel(bpc, pwm->hwpwm, value, BERLIN_PWM_EN);
+}
+
+static int berlin_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
+{
+ int err;
+ bool enabled = pwm->state.enabled;
+
+ if (state->polarity != pwm->state.polarity) {
+ if (enabled) {
+ berlin_pwm_disable(chip, pwm);
+ enabled = false;
+ }
+
+ err = berlin_pwm_set_polarity(chip, pwm, state->polarity);
+ if (err)
+ return err;
+ }
+
+ if (!state->enabled) {
+ if (enabled)
+ berlin_pwm_disable(chip, pwm);
+ return 0;
+ }
+
+ if (state->period != pwm->state.period ||
+ state->duty_cycle != pwm->state.duty_cycle) {
+ err = berlin_pwm_config(chip, pwm, state->duty_cycle, state->period);
+ if (err)
+ return err;
+ }
+
+ if (!enabled)
+ return berlin_pwm_enable(chip, pwm);
+
+ return 0;
}
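The ordering in berlin_pwm_apply() follows the usual pattern for such conversions: polarity is only changed while the output is disabled, duty cycle and period are rewritten only when they differ from the current state, and the output is (re-)enabled last.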
static const struct pwm_ops berlin_pwm_ops = {
.request = berlin_pwm_request,
.free = berlin_pwm_free,
- .config = berlin_pwm_config,
- .set_polarity = berlin_pwm_set_polarity,
- .enable = berlin_pwm_enable,
- .disable = berlin_pwm_disable,
+ .apply = berlin_pwm_apply,
.owner = THIS_MODULE,
};
static int berlin_pwm_probe(struct platform_device *pdev)
{
- struct berlin_pwm_chip *pwm;
+ struct berlin_pwm_chip *bpc;
int ret;
- pwm = devm_kzalloc(&pdev->dev, sizeof(*pwm), GFP_KERNEL);
- if (!pwm)
+ bpc = devm_kzalloc(&pdev->dev, sizeof(*bpc), GFP_KERNEL);
+ if (!bpc)
return -ENOMEM;
- pwm->base = devm_platform_ioremap_resource(pdev, 0);
- if (IS_ERR(pwm->base))
- return PTR_ERR(pwm->base);
+ bpc->base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(bpc->base))
+ return PTR_ERR(bpc->base);
- pwm->clk = devm_clk_get(&pdev->dev, NULL);
- if (IS_ERR(pwm->clk))
- return PTR_ERR(pwm->clk);
+ bpc->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(bpc->clk))
+ return PTR_ERR(bpc->clk);
- ret = clk_prepare_enable(pwm->clk);
+ ret = clk_prepare_enable(bpc->clk);
if (ret)
return ret;
- pwm->chip.dev = &pdev->dev;
- pwm->chip.ops = &berlin_pwm_ops;
- pwm->chip.npwm = 4;
- pwm->chip.of_xlate = of_pwm_xlate_with_flags;
- pwm->chip.of_pwm_n_cells = 3;
+ bpc->chip.dev = &pdev->dev;
+ bpc->chip.ops = &berlin_pwm_ops;
+ bpc->chip.npwm = 4;
- ret = pwmchip_add(&pwm->chip);
+ ret = pwmchip_add(&bpc->chip);
if (ret < 0) {
dev_err(&pdev->dev, "failed to add PWM chip: %d\n", ret);
- clk_disable_unprepare(pwm->clk);
+ clk_disable_unprepare(bpc->clk);
return ret;
}
- platform_set_drvdata(pdev, pwm);
+ platform_set_drvdata(pdev, bpc);
return 0;
}
static int berlin_pwm_remove(struct platform_device *pdev)
{
- struct berlin_pwm_chip *pwm = platform_get_drvdata(pdev);
- int ret;
+ struct berlin_pwm_chip *bpc = platform_get_drvdata(pdev);
+
+ pwmchip_remove(&bpc->chip);
- ret = pwmchip_remove(&pwm->chip);
- clk_disable_unprepare(pwm->clk);
+ clk_disable_unprepare(bpc->clk);
- return ret;
+ return 0;
}
#ifdef CONFIG_PM_SLEEP
static int berlin_pwm_suspend(struct device *dev)
{
- struct berlin_pwm_chip *pwm = dev_get_drvdata(dev);
+ struct berlin_pwm_chip *bpc = dev_get_drvdata(dev);
unsigned int i;
- for (i = 0; i < pwm->chip.npwm; i++) {
+ for (i = 0; i < bpc->chip.npwm; i++) {
struct berlin_pwm_channel *channel;
- channel = pwm_get_chip_data(&pwm->chip.pwms[i]);
+ channel = pwm_get_chip_data(&bpc->chip.pwms[i]);
if (!channel)
continue;
- channel->enable = berlin_pwm_readl(pwm, i, BERLIN_PWM_ENABLE);
- channel->ctrl = berlin_pwm_readl(pwm, i, BERLIN_PWM_CONTROL);
- channel->duty = berlin_pwm_readl(pwm, i, BERLIN_PWM_DUTY);
- channel->tcnt = berlin_pwm_readl(pwm, i, BERLIN_PWM_TCNT);
+ channel->enable = berlin_pwm_readl(bpc, i, BERLIN_PWM_ENABLE);
+ channel->ctrl = berlin_pwm_readl(bpc, i, BERLIN_PWM_CONTROL);
+ channel->duty = berlin_pwm_readl(bpc, i, BERLIN_PWM_DUTY);
+ channel->tcnt = berlin_pwm_readl(bpc, i, BERLIN_PWM_TCNT);
}
- clk_disable_unprepare(pwm->clk);
+ clk_disable_unprepare(bpc->clk);
return 0;
}
static int berlin_pwm_resume(struct device *dev)
{
- struct berlin_pwm_chip *pwm = dev_get_drvdata(dev);
+ struct berlin_pwm_chip *bpc = dev_get_drvdata(dev);
unsigned int i;
int ret;
- ret = clk_prepare_enable(pwm->clk);
+ ret = clk_prepare_enable(bpc->clk);
if (ret)
return ret;
- for (i = 0; i < pwm->chip.npwm; i++) {
+ for (i = 0; i < bpc->chip.npwm; i++) {
struct berlin_pwm_channel *channel;
- channel = pwm_get_chip_data(&pwm->chip.pwms[i]);
+ channel = pwm_get_chip_data(&bpc->chip.pwms[i]);
if (!channel)
continue;
- berlin_pwm_writel(pwm, i, channel->ctrl, BERLIN_PWM_CONTROL);
- berlin_pwm_writel(pwm, i, channel->duty, BERLIN_PWM_DUTY);
- berlin_pwm_writel(pwm, i, channel->tcnt, BERLIN_PWM_TCNT);
- berlin_pwm_writel(pwm, i, channel->enable, BERLIN_PWM_ENABLE);
+ berlin_pwm_writel(bpc, i, channel->ctrl, BERLIN_PWM_CONTROL);
+ berlin_pwm_writel(bpc, i, channel->duty, BERLIN_PWM_DUTY);
+ berlin_pwm_writel(bpc, i, channel->tcnt, BERLIN_PWM_TCNT);
+ berlin_pwm_writel(bpc, i, channel->enable, BERLIN_PWM_ENABLE);
}
return 0;
spin_lock_init(&priv->lock);
- platform_set_drvdata(pdev, priv);
-
- return pwmchip_add(&priv->chip);
-}
-
-static int clps711x_pwm_remove(struct platform_device *pdev)
-{
- struct clps711x_chip *priv = platform_get_drvdata(pdev);
-
- return pwmchip_remove(&priv->chip);
+ return devm_pwmchip_add(&pdev->dev, &priv->chip);
}
static const struct of_device_id __maybe_unused clps711x_pwm_dt_ids[] = {
.of_match_table = of_match_ptr(clps711x_pwm_dt_ids),
},
.probe = clps711x_pwm_probe,
- .remove = clps711x_pwm_remove,
};
module_platform_driver(clps711x_pwm_driver);
/* get the PMIC regmap */
pwm->regmap = pmic->regmap;
- platform_set_drvdata(pdev, pwm);
-
- return pwmchip_add(&pwm->chip);
-}
-
-static int crystalcove_pwm_remove(struct platform_device *pdev)
-{
- struct crystalcove_pwm *pwm = platform_get_drvdata(pdev);
-
- return pwmchip_remove(&pwm->chip);
+ return devm_pwmchip_add(&pdev->dev, &pwm->chip);
}
static struct platform_driver crystalcove_pwm_driver = {
.probe = crystalcove_pwm_probe,
- .remove = crystalcove_pwm_remove,
.driver = {
.name = "crystal_cove_pwm",
},
ep93xx_pwm_release_gpio(pdev);
}
-static int ep93xx_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
- int duty_ns, int period_ns)
+static int ep93xx_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
{
+ int ret;
struct ep93xx_pwm *ep93xx_pwm = to_ep93xx_pwm(chip);
- void __iomem *base = ep93xx_pwm->base;
- unsigned long long c;
- unsigned long period_cycles;
- unsigned long duty_cycles;
- unsigned long term;
- int ret = 0;
-
- /*
- * The clock needs to be enabled to access the PWM registers.
- * Configuration can be changed at any time.
- */
- if (!pwm_is_enabled(pwm)) {
- ret = clk_enable(ep93xx_pwm->clk);
- if (ret)
- return ret;
- }
+ bool enabled = state->enabled;
- c = clk_get_rate(ep93xx_pwm->clk);
- c *= period_ns;
- do_div(c, 1000000000);
- period_cycles = c;
+ if (state->polarity != pwm->state.polarity) {
+ if (enabled) {
+ writew(0x0, ep93xx_pwm->base + EP93XX_PWMx_ENABLE);
+ clk_disable_unprepare(ep93xx_pwm->clk);
+ enabled = false;
+ }
- c = period_cycles;
- c *= duty_ns;
- do_div(c, period_ns);
- duty_cycles = c;
+ /*
+ * The clock needs to be enabled to access the PWM registers.
+ * Polarity can only be changed when the PWM is disabled.
+ */
+ ret = clk_prepare_enable(ep93xx_pwm->clk);
+ if (ret)
+ return ret;
- if (period_cycles < 0x10000 && duty_cycles < 0x10000) {
- term = readw(base + EP93XX_PWMx_TERM_COUNT);
+ if (state->polarity == PWM_POLARITY_INVERSED)
+ writew(0x1, ep93xx_pwm->base + EP93XX_PWMx_INVERT);
+ else
+ writew(0x0, ep93xx_pwm->base + EP93XX_PWMx_INVERT);
- /* Order is important if PWM is running */
- if (period_cycles > term) {
- writew(period_cycles, base + EP93XX_PWMx_TERM_COUNT);
- writew(duty_cycles, base + EP93XX_PWMx_DUTY_CYCLE);
- } else {
- writew(duty_cycles, base + EP93XX_PWMx_DUTY_CYCLE);
- writew(period_cycles, base + EP93XX_PWMx_TERM_COUNT);
- }
- } else {
- ret = -EINVAL;
+ clk_disable_unprepare(ep93xx_pwm->clk);
}
- if (!pwm_is_enabled(pwm))
- clk_disable(ep93xx_pwm->clk);
-
- return ret;
-}
-
-static int ep93xx_pwm_polarity(struct pwm_chip *chip, struct pwm_device *pwm,
- enum pwm_polarity polarity)
-{
- struct ep93xx_pwm *ep93xx_pwm = to_ep93xx_pwm(chip);
- int ret;
+ if (!state->enabled) {
+ if (enabled) {
+ writew(0x0, ep93xx_pwm->base + EP93XX_PWMx_ENABLE);
+ clk_disable_unprepare(ep93xx_pwm->clk);
+ }
- /*
- * The clock needs to be enabled to access the PWM registers.
- * Polarity can only be changed when the PWM is disabled.
- */
- ret = clk_enable(ep93xx_pwm->clk);
- if (ret)
- return ret;
+ return 0;
+ }
- if (polarity == PWM_POLARITY_INVERSED)
- writew(0x1, ep93xx_pwm->base + EP93XX_PWMx_INVERT);
- else
- writew(0x0, ep93xx_pwm->base + EP93XX_PWMx_INVERT);
+ if (state->period != pwm->state.period ||
+ state->duty_cycle != pwm->state.duty_cycle) {
+ struct ep93xx_pwm *ep93xx_pwm = to_ep93xx_pwm(chip);
+ void __iomem *base = ep93xx_pwm->base;
+ unsigned long long c;
+ unsigned long period_cycles;
+ unsigned long duty_cycles;
+ unsigned long term;
+
+ /*
+ * The clock needs to be enabled to access the PWM registers.
+ * Configuration can be changed at any time.
+ */
+ if (!pwm_is_enabled(pwm)) {
+ ret = clk_prepare_enable(ep93xx_pwm->clk);
+ if (ret)
+ return ret;
+ }
- clk_disable(ep93xx_pwm->clk);
+ c = clk_get_rate(ep93xx_pwm->clk);
+ c *= state->period;
+ do_div(c, 1000000000);
+ period_cycles = c;
+
+ c = period_cycles;
+ c *= state->duty_cycle;
+ do_div(c, state->period);
+ duty_cycles = c;
+
+ if (period_cycles < 0x10000 && duty_cycles < 0x10000) {
+ term = readw(base + EP93XX_PWMx_TERM_COUNT);
+
+ /* Order is important if PWM is running */
+ if (period_cycles > term) {
+ writew(period_cycles, base + EP93XX_PWMx_TERM_COUNT);
+ writew(duty_cycles, base + EP93XX_PWMx_DUTY_CYCLE);
+ } else {
+ writew(duty_cycles, base + EP93XX_PWMx_DUTY_CYCLE);
+ writew(period_cycles, base + EP93XX_PWMx_TERM_COUNT);
+ }
+ ret = 0;
+ } else {
+ ret = -EINVAL;
+ }
- return 0;
-}
+ if (!pwm_is_enabled(pwm))
+ clk_disable_unprepare(ep93xx_pwm->clk);
-static int ep93xx_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
-{
- struct ep93xx_pwm *ep93xx_pwm = to_ep93xx_pwm(chip);
- int ret;
+ if (ret)
+ return ret;
+ }
- ret = clk_enable(ep93xx_pwm->clk);
- if (ret)
- return ret;
+ if (!enabled) {
+ ret = clk_prepare_enable(ep93xx_pwm->clk);
+ if (ret)
+ return ret;
- writew(0x1, ep93xx_pwm->base + EP93XX_PWMx_ENABLE);
+ writew(0x1, ep93xx_pwm->base + EP93XX_PWMx_ENABLE);
+ }
return 0;
}
-static void ep93xx_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
-{
- struct ep93xx_pwm *ep93xx_pwm = to_ep93xx_pwm(chip);
-
- writew(0x0, ep93xx_pwm->base + EP93XX_PWMx_ENABLE);
- clk_disable(ep93xx_pwm->clk);
-}
-
static const struct pwm_ops ep93xx_pwm_ops = {
.request = ep93xx_pwm_request,
.free = ep93xx_pwm_free,
- .config = ep93xx_pwm_config,
- .set_polarity = ep93xx_pwm_polarity,
- .enable = ep93xx_pwm_enable,
- .disable = ep93xx_pwm_disable,
+ .apply = ep93xx_pwm_apply,
.owner = THIS_MODULE,
};
fpc->chip.ops = &fsl_pwm_ops;
- fpc->chip.of_xlate = of_pwm_xlate_with_flags;
- fpc->chip.of_pwm_n_cells = 3;
fpc->chip.npwm = 8;
ret = pwmchip_add(&fpc->chip);
pwm_chip->chip.ops = &hibvt_pwm_ops;
pwm_chip->chip.dev = &pdev->dev;
pwm_chip->chip.npwm = soc->num_pwms;
- pwm_chip->chip.of_xlate = of_pwm_xlate_with_flags;
- pwm_chip->chip.of_pwm_n_cells = 3;
pwm_chip->soc = soc;
pwm_chip->base = devm_platform_ioremap_resource(pdev, 0);
struct img_pwm_chip *pwm_chip = to_img_pwm_chip(chip);
int ret;
- ret = pm_runtime_get_sync(chip->dev);
+ ret = pm_runtime_resume_and_get(chip->dev);
if (ret < 0)
return ret;
tpm->chip.dev = &pdev->dev;
tpm->chip.ops = &imx_tpm_pwm_ops;
- tpm->chip.of_xlate = of_pwm_xlate_with_flags;
- tpm->chip.of_pwm_n_cells = 3;
/* get number of channels */
val = readl(tpm->base + PWM_IMX_TPM_PARAM);
if (!imx)
return -ENOMEM;
- platform_set_drvdata(pdev, imx);
-
imx->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
if (IS_ERR(imx->clk_ipg))
return dev_err_probe(&pdev->dev, PTR_ERR(imx->clk_ipg),
if (IS_ERR(imx->mmio_base))
return PTR_ERR(imx->mmio_base);
- return pwmchip_add(&imx->chip);
-}
-
-static int pwm_imx1_remove(struct platform_device *pdev)
-{
- struct pwm_imx1_chip *imx = platform_get_drvdata(pdev);
-
- pwm_imx1_clk_disable_unprepare(&imx->chip);
-
- return pwmchip_remove(&imx->chip);
+ return devm_pwmchip_add(&pdev->dev, &imx->chip);
}
static struct platform_driver pwm_imx1_driver = {
.of_match_table = pwm_imx1_dt_ids,
},
.probe = pwm_imx1_probe,
- .remove = pwm_imx1_remove,
};
module_platform_driver(pwm_imx1_driver);
imx->chip.dev = &pdev->dev;
imx->chip.npwm = 1;
- imx->chip.of_xlate = of_pwm_xlate_with_flags;
- imx->chip.of_pwm_n_cells = 3;
-
imx->mmio_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(imx->mmio_base))
return PTR_ERR(imx->mmio_base);
jz4740->chip.dev = dev;
jz4740->chip.ops = &jz4740_pwm_ops;
jz4740->chip.npwm = info->num_pwms;
- jz4740->chip.of_xlate = of_pwm_xlate_with_flags;
- jz4740->chip.of_pwm_n_cells = 3;
platform_set_drvdata(pdev, jz4740);
lpc18xx_pwm->chip.dev = &pdev->dev;
lpc18xx_pwm->chip.ops = &lpc18xx_pwm_ops;
lpc18xx_pwm->chip.npwm = 16;
- lpc18xx_pwm->chip.of_xlate = of_pwm_xlate_with_flags;
- lpc18xx_pwm->chip.of_pwm_n_cells = 3;
/* SCT counter must be in unify (32 bit) mode */
lpc18xx_pwm_writel(lpc18xx_pwm, LPC18XX_PWM_CONFIG,
static void pwm_lpss_remove_pci(struct pci_dev *pdev)
{
- struct pwm_lpss_chip *lpwm = pci_get_drvdata(pdev);
-
pm_runtime_forbid(&pdev->dev);
pm_runtime_get_sync(&pdev->dev);
-
- pwm_lpss_remove(lpwm);
}
#ifdef CONFIG_PM
static int pwm_lpss_remove_platform(struct platform_device *pdev)
{
- struct pwm_lpss_chip *lpwm = platform_get_drvdata(pdev);
-
pm_runtime_disable(&pdev->dev);
- return pwm_lpss_remove(lpwm);
+ return 0;
}
static const struct acpi_device_id pwm_lpss_acpi_match[] = {
lpwm->chip.ops = &pwm_lpss_ops;
lpwm->chip.npwm = info->npwm;
- ret = pwmchip_add(&lpwm->chip);
+ ret = devm_pwmchip_add(dev, &lpwm->chip);
if (ret) {
dev_err(dev, "failed to add PWM chip: %d\n", ret);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(pwm_lpss_probe);
-int pwm_lpss_remove(struct pwm_lpss_chip *lpwm)
-{
- return pwmchip_remove(&lpwm->chip);
-}
-EXPORT_SYMBOL_GPL(pwm_lpss_remove);
-
MODULE_DESCRIPTION("PWM driver for Intel LPSS");
MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
MODULE_LICENSE("GPL v2");
struct pwm_lpss_chip *pwm_lpss_probe(struct device *dev, struct resource *r,
const struct pwm_lpss_boardinfo *info);
-int pwm_lpss_remove(struct pwm_lpss_chip *lpwm);
#endif /* __PWM_LPSS_H */
meson->chip.dev = &pdev->dev;
meson->chip.ops = &meson_pwm_ops;
meson->chip.npwm = MESON_NUM_PWMS;
- meson->chip.of_xlate = of_pwm_xlate_with_flags;
- meson->chip.of_pwm_n_cells = 3;
meson->data = of_device_get_match_data(&pdev->dev);
if (err < 0)
return err;
- err = pwmchip_add(&meson->chip);
+ err = devm_pwmchip_add(&pdev->dev, &meson->chip);
if (err < 0) {
dev_err(&pdev->dev, "failed to register PWM chip: %d\n", err);
return err;
}
- platform_set_drvdata(pdev, meson);
-
return 0;
}
-static int meson_pwm_remove(struct platform_device *pdev)
-{
- struct meson_pwm *meson = platform_get_drvdata(pdev);
-
- return pwmchip_remove(&meson->chip);
-}
-
static struct platform_driver meson_pwm_driver = {
.driver = {
.name = "meson-pwm",
.of_match_table = meson_pwm_matches,
},
.probe = meson_pwm_probe,
- .remove = meson_pwm_remove,
};
module_platform_driver(meson_pwm_driver);
mxs->chip.dev = &pdev->dev;
mxs->chip.ops = &mxs_pwm_ops;
- mxs->chip.of_xlate = of_pwm_xlate_with_flags;
- mxs->chip.of_pwm_n_cells = 3;
ret = of_property_read_u32(np, "fsl,pwm-number", &mxs->chip.npwm);
if (ret < 0) {
omap->chip.dev = &pdev->dev;
omap->chip.ops = &pwm_omap_dmtimer_ops;
omap->chip.npwm = 1;
- omap->chip.of_xlate = of_pwm_xlate_with_flags;
- omap->chip.of_pwm_n_cells = 3;
mutex_init(&omap->mutex);
#include <linux/bitmap.h>
/*
- * Because the PCA9685 has only one prescaler per chip, changing the period of
- * one channel affects the period of all 16 PWM outputs!
- * However, the ratio between each configured duty cycle and the chip-wide
- * period remains constant, because the OFF time is set in proportion to the
- * counter range.
+ * Because the PCA9685 has only one prescaler per chip, only the first channel
+ * that is enabled is allowed to change the prescale register.
+ * PWM channels requested afterwards must use a period that results in the same
+ * prescale setting as the one set by the first requested channel.
+ * GPIOs do not count as enabled PWMs as they are not using the prescaler.
*/
#define PCA9685_MODE1 0x00
struct pca9685 {
struct pwm_chip chip;
struct regmap *regmap;
-#if IS_ENABLED(CONFIG_GPIOLIB)
struct mutex lock;
+ DECLARE_BITMAP(pwms_enabled, PCA9685_MAXCHAN + 1);
+#if IS_ENABLED(CONFIG_GPIOLIB)
struct gpio_chip gpio;
DECLARE_BITMAP(pwms_inuse, PCA9685_MAXCHAN + 1);
#endif
return container_of(chip, struct pca9685, chip);
}
+/* This function is supposed to be called with the lock mutex held */
+static bool pca9685_prescaler_can_change(struct pca9685 *pca, int channel)
+{
+ /* No PWM enabled: Change allowed */
+ if (bitmap_empty(pca->pwms_enabled, PCA9685_MAXCHAN + 1))
+ return true;
+ /* More than one PWM enabled: Change not allowed */
+ if (bitmap_weight(pca->pwms_enabled, PCA9685_MAXCHAN + 1) > 1)
+ return false;
+ /*
+ * Only one PWM enabled: Change allowed if the PWM about to
+ * be changed is the one that is already enabled
+ */
+ return test_bit(channel, pca->pwms_enabled);
+}
+
+static int pca9685_read_reg(struct pca9685 *pca, unsigned int reg, unsigned int *val)
+{
+ struct device *dev = pca->chip.dev;
+ int err;
+
+ err = regmap_read(pca->regmap, reg, val);
+ if (err)
+ dev_err(dev, "regmap_read of register 0x%x failed: %pe\n", reg, ERR_PTR(err));
+
+ return err;
+}
+
+static int pca9685_write_reg(struct pca9685 *pca, unsigned int reg, unsigned int val)
+{
+ struct device *dev = pca->chip.dev;
+ int err;
+
+ err = regmap_write(pca->regmap, reg, val);
+ if (err)
+ dev_err(dev, "regmap_write to register 0x%x failed: %pe\n", reg, ERR_PTR(err));
+
+ return err;
+}
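+
+/*
+ * Note: %pe with ERR_PTR() prints the symbolic error name (e.g. -EIO)
+ * rather than a raw number, so these one-line wrappers give readable
+ * logs for every register access without extra code at the call sites.
+ */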
+
/* Helper function to set the duty cycle ratio to duty/4096 (e.g. duty=2048 -> 50%) */
static void pca9685_pwm_set_duty(struct pca9685 *pca, int channel, unsigned int duty)
{
+ struct pwm_device *pwm = &pca->chip.pwms[channel];
+ unsigned int on, off;
+
if (duty == 0) {
/* Set the full OFF bit, which has the highest precedence */
- regmap_write(pca->regmap, REG_OFF_H(channel), LED_FULL);
+ pca9685_write_reg(pca, REG_OFF_H(channel), LED_FULL);
+ return;
} else if (duty >= PCA9685_COUNTER_RANGE) {
/* Set the full ON bit and clear the full OFF bit */
- regmap_write(pca->regmap, REG_ON_H(channel), LED_FULL);
- regmap_write(pca->regmap, REG_OFF_H(channel), 0);
- } else {
- /* Set OFF time (clears the full OFF bit) */
- regmap_write(pca->regmap, REG_OFF_L(channel), duty & 0xff);
- regmap_write(pca->regmap, REG_OFF_H(channel), (duty >> 8) & 0xf);
- /* Clear the full ON bit */
- regmap_write(pca->regmap, REG_ON_H(channel), 0);
+ pca9685_write_reg(pca, REG_ON_H(channel), LED_FULL);
+ pca9685_write_reg(pca, REG_OFF_H(channel), 0);
+ return;
}
+
+ if (pwm->state.usage_power && channel < PCA9685_MAXCHAN) {
+ /*
+ * If usage_power is set, the pca9685 driver will phase shift
+ * the individual channels relative to their channel number.
+ * This improves EMI because the enabled channels no longer
+ * turn on at the same time, while still maintaining the
+ * configured duty cycle / power output.
+ */
+ on = channel * PCA9685_COUNTER_RANGE / PCA9685_MAXCHAN;
+ } else {
+ on = 0;
+ }
+
+ off = (on + duty) % PCA9685_COUNTER_RANGE;
+
+ /* Set ON time (clears full ON bit) */
+ pca9685_write_reg(pca, REG_ON_L(channel), on & 0xff);
+ pca9685_write_reg(pca, REG_ON_H(channel), (on >> 8) & 0xf);
+ /* Set OFF time (clears full OFF bit) */
+ pca9685_write_reg(pca, REG_OFF_L(channel), off & 0xff);
+ pca9685_write_reg(pca, REG_OFF_H(channel), (off >> 8) & 0xf);
}
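On the consumer side, usage_power is a regular pwm_state field, so opting into the staggered output is a single state round-trip. A minimal sketch, assuming an already-requested PWM handle (the example_* name is hypothetical):

#include <linux/pwm.h>

static int example_enable_staggering(struct pwm_device *mypwm)
{
	struct pwm_state state;

	pwm_get_state(mypwm, &state);
	state.usage_power = true;	/* ask for the EMI-friendly phase shift */
	return pwm_apply_state(mypwm, &state);
}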
static unsigned int pca9685_pwm_get_duty(struct pca9685 *pca, int channel)
{
- unsigned int off_h = 0, val = 0;
+ struct pwm_device *pwm = &pca->chip.pwms[channel];
+ unsigned int off = 0, on = 0, val = 0;
if (WARN_ON(channel >= PCA9685_MAXCHAN)) {
/* HW does not support reading state of "all LEDs" channel */
return 0;
}
- regmap_read(pca->regmap, LED_N_OFF_H(channel), &off_h);
- if (off_h & LED_FULL) {
+ pca9685_read_reg(pca, LED_N_OFF_H(channel), &off);
+ if (off & LED_FULL) {
/* Full OFF bit is set */
return 0;
}
- regmap_read(pca->regmap, LED_N_ON_H(channel), &val);
- if (val & LED_FULL) {
+ pca9685_read_reg(pca, LED_N_ON_H(channel), &on);
+ if (on & LED_FULL) {
/* Full ON bit is set */
return PCA9685_COUNTER_RANGE;
}
- if (regmap_read(pca->regmap, LED_N_OFF_L(channel), &val)) {
- /* Reset val to 0 in case reading LED_N_OFF_L failed */
+ pca9685_read_reg(pca, LED_N_OFF_L(channel), &val);
+ off = ((off & 0xf) << 8) | (val & 0xff);
+ if (!pwm->state.usage_power)
+ return off;
+
+ /* Read ON register to calculate duty cycle of staggered output */
+ if (pca9685_read_reg(pca, LED_N_ON_L(channel), &val)) {
+ /* Reset val to 0 in case reading LED_N_ON_L failed */
val = 0;
}
- return ((off_h & 0xf) << 8) | (val & 0xff);
+ on = ((on & 0xf) << 8) | (val & 0xff);
+ return (off - on) & (PCA9685_COUNTER_RANGE - 1);
}
#if IS_ENABLED(CONFIG_GPIOLIB)
{
struct device *dev = pca->chip.dev;
- mutex_init(&pca->lock);
-
pca->gpio.label = dev_name(dev);
pca->gpio.parent = dev;
pca->gpio.request = pca9685_pwm_gpio_request;
static void pca9685_set_sleep_mode(struct pca9685 *pca, bool enable)
{
- regmap_update_bits(pca->regmap, PCA9685_MODE1,
- MODE1_SLEEP, enable ? MODE1_SLEEP : 0);
+ struct device *dev = pca->chip.dev;
+ int err = regmap_update_bits(pca->regmap, PCA9685_MODE1,
+ MODE1_SLEEP, enable ? MODE1_SLEEP : 0);
+ if (err) {
+ dev_err(dev, "regmap_update_bits of register 0x%x failed: %pe\n",
+ PCA9685_MODE1, ERR_PTR(err));
+ return;
+ }
+
if (!enable) {
/* Wait 500us for the oscillator to be back up */
udelay(500);
}
}
-static int pca9685_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
- const struct pwm_state *state)
+static int __pca9685_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
{
struct pca9685 *pca = to_pca(chip);
unsigned long long duty, prescale;
return 0;
}
- regmap_read(pca->regmap, PCA9685_PRESCALE, &val);
+ pca9685_read_reg(pca, PCA9685_PRESCALE, &val);
if (prescale != val) {
+ if (!pca9685_prescaler_can_change(pca, pwm->hwpwm)) {
+ dev_err(chip->dev,
+ "pwm not changed: periods of enabled pwms must match!\n");
+ return -EBUSY;
+ }
+
/*
* Putting the chip briefly into SLEEP mode
* at this point won't interfere with the
pca9685_set_sleep_mode(pca, true);
/* Change the chip-wide output frequency */
- regmap_write(pca->regmap, PCA9685_PRESCALE, prescale);
+ pca9685_write_reg(pca, PCA9685_PRESCALE, prescale);
/* Wake the chip up */
pca9685_set_sleep_mode(pca, false);
return 0;
}
+static int pca9685_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
+{
+ struct pca9685 *pca = to_pca(chip);
+ int ret;
+
+ mutex_lock(&pca->lock);
+ ret = __pca9685_pwm_apply(chip, pwm, state);
+ if (ret == 0) {
+ if (state->enabled)
+ set_bit(pwm->hwpwm, pca->pwms_enabled);
+ else
+ clear_bit(pwm->hwpwm, pca->pwms_enabled);
+ }
+ mutex_unlock(&pca->lock);
+
+ return ret;
+}
+
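To see what the prescaler guard means for users of the chip, consider this hedged consumer-side sketch, where pwm0 and pwm1 are hypothetical handles to two channels of the same pca9685:

static int example_two_periods(struct pwm_device *pwm0, struct pwm_device *pwm1)
{
	struct pwm_state state = {
		.period = 5000000,	/* 5 ms: first enabled channel sets prescale */
		.duty_cycle = 2500000,
		.polarity = PWM_POLARITY_NORMAL,
		.enabled = true,
	};
	int ret;

	ret = pwm_apply_state(pwm0, &state);	/* succeeds, fixes chip-wide period */
	if (ret)
		return ret;

	state.period = 1000000;			/* 1 ms needs a different prescale */
	return pwm_apply_state(pwm1, &state);	/* -EBUSY while pwm0 is enabled */
}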
static void pca9685_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
struct pwm_state *state)
{
unsigned int val = 0;
/* Calculate (chip-wide) period from prescale value */
- regmap_read(pca->regmap, PCA9685_PRESCALE, &val);
+ pca9685_read_reg(pca, PCA9685_PRESCALE, &val);
/*
* PCA9685_OSC_CLOCK_MHZ is 25, i.e. an integer divider of 1000.
* The following calculation is therefore only a multiplication
if (pca9685_pwm_test_and_set_inuse(pca, pwm->hwpwm))
return -EBUSY;
+
+ if (pwm->hwpwm < PCA9685_MAXCHAN) {
+ /* PWMs - except the "all LEDs" channel - default to enabled */
+ mutex_lock(&pca->lock);
+ set_bit(pwm->hwpwm, pca->pwms_enabled);
+ mutex_unlock(&pca->lock);
+ }
+
pm_runtime_get_sync(chip->dev);
return 0;
{
struct pca9685 *pca = to_pca(chip);
+ mutex_lock(&pca->lock);
pca9685_pwm_set_duty(pca, pwm->hwpwm, 0);
+ clear_bit(pwm->hwpwm, pca->pwms_enabled);
+ mutex_unlock(&pca->lock);
+
pm_runtime_put(chip->dev);
pca9685_pwm_clear_inuse(pca, pwm->hwpwm);
}
i2c_set_clientdata(client, pca);
- regmap_read(pca->regmap, PCA9685_MODE2, ®);
+ mutex_init(&pca->lock);
+
+ ret = pca9685_read_reg(pca, PCA9685_MODE2, ®);
+ if (ret)
+ return ret;
if (device_property_read_bool(&client->dev, "invert"))
reg |= MODE2_INVRT;
else
reg |= MODE2_OUTDRV;
- regmap_write(pca->regmap, PCA9685_MODE2, reg);
+ ret = pca9685_write_reg(pca, PCA9685_MODE2, reg);
+ if (ret)
+ return ret;
/* Disable all LED ALLCALL and SUBx addresses to avoid bus collisions */
- regmap_read(pca->regmap, PCA9685_MODE1, ®);
+ pca9685_read_reg(pca, PCA9685_MODE1, ®);
reg &= ~(MODE1_ALLCALL | MODE1_SUB1 | MODE1_SUB2 | MODE1_SUB3);
- regmap_write(pca->regmap, PCA9685_MODE1, reg);
+ pca9685_write_reg(pca, PCA9685_MODE1, reg);
- /* Reset OFF registers to POR default */
- regmap_write(pca->regmap, PCA9685_ALL_LED_OFF_L, LED_FULL);
- regmap_write(pca->regmap, PCA9685_ALL_LED_OFF_H, LED_FULL);
+ /* Reset OFF/ON registers to POR default */
+ pca9685_write_reg(pca, PCA9685_ALL_LED_OFF_L, LED_FULL);
+ pca9685_write_reg(pca, PCA9685_ALL_LED_OFF_H, LED_FULL);
+ pca9685_write_reg(pca, PCA9685_ALL_LED_ON_L, 0);
+ pca9685_write_reg(pca, PCA9685_ALL_LED_ON_H, 0);
pca->chip.ops = &pca9685_pwm_ops;
/* Add an extra channel for ALL_LED */
static int pwm_probe(struct platform_device *pdev)
{
const struct platform_device_id *id = platform_get_device_id(pdev);
- struct pxa_pwm_chip *pwm;
+ struct pxa_pwm_chip *pc;
int ret = 0;
if (IS_ENABLED(CONFIG_OF) && id == NULL)
if (id == NULL)
return -EINVAL;
- pwm = devm_kzalloc(&pdev->dev, sizeof(*pwm), GFP_KERNEL);
- if (pwm == NULL)
+ pc = devm_kzalloc(&pdev->dev, sizeof(*pc), GFP_KERNEL);
+ if (pc == NULL)
return -ENOMEM;
- pwm->clk = devm_clk_get(&pdev->dev, NULL);
- if (IS_ERR(pwm->clk))
- return PTR_ERR(pwm->clk);
+ pc->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(pc->clk))
+ return PTR_ERR(pc->clk);
- pwm->chip.dev = &pdev->dev;
- pwm->chip.ops = &pxa_pwm_ops;
- pwm->chip.npwm = (id->driver_data & HAS_SECONDARY_PWM) ? 2 : 1;
+ pc->chip.dev = &pdev->dev;
+ pc->chip.ops = &pxa_pwm_ops;
+ pc->chip.npwm = (id->driver_data & HAS_SECONDARY_PWM) ? 2 : 1;
if (IS_ENABLED(CONFIG_OF)) {
- pwm->chip.of_xlate = pxa_pwm_of_xlate;
- pwm->chip.of_pwm_n_cells = 1;
+ pc->chip.of_xlate = pxa_pwm_of_xlate;
+ pc->chip.of_pwm_n_cells = 1;
}
- pwm->mmio_base = devm_platform_ioremap_resource(pdev, 0);
- if (IS_ERR(pwm->mmio_base))
- return PTR_ERR(pwm->mmio_base);
+ pc->mmio_base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(pc->mmio_base))
+ return PTR_ERR(pc->mmio_base);
- ret = pwmchip_add(&pwm->chip);
+ ret = pwmchip_add(&pc->chip);
if (ret < 0) {
dev_err(&pdev->dev, "pwmchip_add() failed: %d\n", ret);
return ret;
}
- platform_set_drvdata(pdev, pwm);
+ platform_set_drvdata(pdev, pc);
return 0;
}
static int pwm_remove(struct platform_device *pdev)
{
- struct pxa_pwm_chip *chip;
+ struct pxa_pwm_chip *pc;
- chip = platform_get_drvdata(pdev);
- if (chip == NULL)
- return -ENODEV;
+ pc = platform_get_drvdata(pdev);
- return pwmchip_remove(&chip->chip);
+ return pwmchip_remove(&pc->chip);
}
static struct platform_driver pwm_driver = {
tpu->chip.dev = &pdev->dev;
tpu->chip.ops = &tpu_pwm_ops;
- tpu->chip.of_xlate = of_pwm_xlate_with_flags;
- tpu->chip.of_pwm_n_cells = 3;
tpu->chip.npwm = TPU_CHANNEL_MAX;
pm_runtime_enable(&pdev->dev);
pc->chip.ops = &rockchip_pwm_ops;
pc->chip.npwm = 1;
- if (pc->data->supports_polarity) {
- pc->chip.of_xlate = of_pwm_xlate_with_flags;
- pc->chip.of_pwm_n_cells = 3;
- }
-
enable_conf = pc->data->enable_conf;
ctrl = readl_relaxed(pc->base + pc->data->regs.ctrl);
enabled = (ctrl & enable_conf) == enable_conf;
ret = pwm_samsung_parse_dt(chip);
if (ret)
return ret;
-
- chip->chip.of_xlate = of_pwm_xlate_with_flags;
- chip->chip.of_pwm_n_cells = 3;
} else {
if (!pdev->dev.platform_data) {
dev_err(&pdev->dev, "no platform data specified\n");
chip = &ddata->chip;
chip->dev = dev;
chip->ops = &pwm_sifive_ops;
- chip->of_xlate = of_pwm_xlate_with_flags;
- chip->of_pwm_n_cells = 3;
chip->npwm = 4;
ddata->regs = devm_platform_ioremap_resource(pdev, 0);
}
static int spear_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
- int duty_ns, int period_ns)
+ u64 duty_ns, u64 period_ns)
{
struct spear_pwm_chip *pc = to_spear_pwm_chip(chip);
u64 val, div, clk_rate;
clk_disable(pc->clk);
}
+static int spear_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
+{
+ int err;
+
+ if (state->polarity != PWM_POLARITY_NORMAL)
+ return -EINVAL;
+
+ if (!state->enabled) {
+ if (pwm->state.enabled)
+ spear_pwm_disable(chip, pwm);
+ return 0;
+ }
+
+ if (state->period != pwm->state.period ||
+ state->duty_cycle != pwm->state.duty_cycle) {
+ err = spear_pwm_config(chip, pwm, state->duty_cycle, state->period);
+ if (err)
+ return err;
+ }
+
+ if (!pwm->state.enabled)
+ return spear_pwm_enable(chip, pwm);
+
+ return 0;
+}
+
static const struct pwm_ops spear_pwm_ops = {
- .config = spear_pwm_config,
- .enable = spear_pwm_enable,
- .disable = spear_pwm_disable,
+ .apply = spear_pwm_apply,
.owner = THIS_MODULE,
};
static int spear_pwm_remove(struct platform_device *pdev)
{
struct spear_pwm_chip *pc = platform_get_drvdata(pdev);
- int i;
- for (i = 0; i < NUM_PWM; i++)
- pwm_disable(&pc->chip.pwms[i]);
+ pwmchip_remove(&pc->chip);
/* clk was prepared in probe, hence unprepare it here */
clk_unprepare(pc->clk);
- return pwmchip_remove(&pc->chip);
+
+ return 0;
}
static const struct of_device_id spear_pwm_of_match[] = {
{
struct sprd_pwm_chip *spc = platform_get_drvdata(pdev);
- return pwmchip_remove(&spc->chip);
+ pwmchip_remove(&spc->chip);
+
+ return 0;
}
static const struct of_device_id sprd_pwm_of_match[] = {
priv->chip.dev = &pdev->dev;
priv->chip.ops = &stm32_pwm_lp_ops;
priv->chip.npwm = 1;
- priv->chip.of_xlate = of_pwm_xlate_with_flags;
- priv->chip.of_pwm_n_cells = 3;
ret = pwmchip_add(&priv->chip);
if (ret < 0)
priv->regmap = ddata->regmap;
priv->clk = ddata->clk;
priv->max_arr = ddata->max_arr;
- priv->chip.of_xlate = of_pwm_xlate_with_flags;
- priv->chip.of_pwm_n_cells = 3;
if (!priv->regmap || !priv->clk)
return -EINVAL;
pwm->chip.dev = &pdev->dev;
pwm->chip.ops = &sun4i_pwm_ops;
pwm->chip.npwm = pwm->data->npwm;
- pwm->chip.of_xlate = of_pwm_xlate_with_flags;
- pwm->chip.of_pwm_n_cells = 3;
spin_lock_init(&pwm->ctrl_lock);
static int tegra_pwm_remove(struct platform_device *pdev)
{
struct tegra_pwm_chip *pc = platform_get_drvdata(pdev);
- unsigned int i;
- int err;
-
- if (WARN_ON(!pc))
- return -ENODEV;
-
- err = clk_prepare_enable(pc->clk);
- if (err < 0)
- return err;
-
- for (i = 0; i < pc->chip.npwm; i++) {
- struct pwm_device *pwm = &pc->chip.pwms[i];
-
- if (!pwm_is_enabled(pwm))
- if (clk_prepare_enable(pc->clk) < 0)
- continue;
- pwm_writel(pc, i, 0);
-
- clk_disable_unprepare(pc->clk);
- }
+ pwmchip_remove(&pc->chip);
reset_control_assert(pc->rst);
- clk_disable_unprepare(pc->clk);
- return pwmchip_remove(&pc->chip);
+ return 0;
}
#ifdef CONFIG_PM_SLEEP
* duty_ns = 10^9 * duty_cycles / PWM_CLK_RATE
*/
static int ecap_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
- int duty_ns, int period_ns)
+ int duty_ns, int period_ns, int enabled)
{
struct ecap_pwm_chip *pc = to_ecap_pwm_chip(chip);
u32 period_cycles, duty_cycles;
unsigned long long c;
u16 value;
- if (period_ns > NSEC_PER_SEC)
- return -ERANGE;
-
c = pc->clk_rate;
c = c * period_ns;
do_div(c, NSEC_PER_SEC);
writew(value, pc->mmio_base + ECCTL2);
- if (!pwm_is_enabled(pwm)) {
+ if (!enabled) {
/* Update active registers if not running */
writel(duty_cycles, pc->mmio_base + CAP2);
writel(period_cycles, pc->mmio_base + CAP1);
writel(period_cycles, pc->mmio_base + CAP3);
}
- if (!pwm_is_enabled(pwm)) {
+ if (!enabled) {
value = readw(pc->mmio_base + ECCTL2);
/* Disable APWM mode to put APWM output Low */
value &= ~ECCTL2_APWM_MODE;
pm_runtime_put_sync(pc->chip.dev);
}
-static void ecap_pwm_free(struct pwm_chip *chip, struct pwm_device *pwm)
+static int ecap_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
{
- if (pwm_is_enabled(pwm)) {
- dev_warn(chip->dev, "Removing PWM device without disabling\n");
- pm_runtime_put_sync(chip->dev);
+ int err;
+ int enabled = pwm->state.enabled;
+
+ if (state->polarity != pwm->state.polarity) {
+ if (enabled) {
+ ecap_pwm_disable(chip, pwm);
+ enabled = false;
+ }
+
+ err = ecap_pwm_set_polarity(chip, pwm, state->polarity);
+ if (err)
+ return err;
+ }
+
+ if (!state->enabled) {
+ if (enabled)
+ ecap_pwm_disable(chip, pwm);
+ return 0;
}
+
+ if (state->period != pwm->state.period ||
+ state->duty_cycle != pwm->state.duty_cycle) {
+ if (state->period > NSEC_PER_SEC)
+ return -ERANGE;
+
+ err = ecap_pwm_config(chip, pwm, state->duty_cycle,
+ state->period, enabled);
+ if (err)
+ return err;
+ }
+
+ if (!enabled)
+ return ecap_pwm_enable(chip, pwm);
+
+ return 0;
}
static const struct pwm_ops ecap_pwm_ops = {
- .free = ecap_pwm_free,
- .config = ecap_pwm_config,
- .set_polarity = ecap_pwm_set_polarity,
- .enable = ecap_pwm_enable,
- .disable = ecap_pwm_disable,
+ .apply = ecap_pwm_apply,
.owner = THIS_MODULE,
};
pc->chip.dev = &pdev->dev;
pc->chip.ops = &ecap_pwm_ops;
- pc->chip.of_xlate = of_pwm_xlate_with_flags;
- pc->chip.of_pwm_n_cells = 3;
pc->chip.npwm = 1;
pc->mmio_base = devm_platform_ioremap_resource(pdev, 0);
pc->chip.dev = &pdev->dev;
pc->chip.ops = &ehrpwm_pwm_ops;
- pc->chip.of_xlate = of_pwm_xlate_with_flags;
- pc->chip.of_pwm_n_cells = 3;
pc->chip.npwm = NUM_PWM_CHANNEL;
pc->mmio_base = devm_platform_ioremap_resource(pdev, 0);
return -ERANGE;
/*
- * PWMC controls a divider that divides the input clk by a
- * power of two between 1 and 8. As a smaller divider yields
- * higher precision, pick the smallest possible one.
+ * PWMC controls a divider that divides the input clk by a power of two
+ * between 1 and 8. As a smaller divider yields higher precision, pick
+ * the smallest possible one. As period is at most 0xffff << 3, pwmc0 is
+ * in the intended range [0..3].
*/
- if (period > 0xffff) {
- pwmc0 = ilog2(period >> 16);
- if (WARN_ON(pwmc0 > 3))
- return -EINVAL;
- } else {
- pwmc0 = 0;
- }
+ pwmc0 = fls(period >> 16);
+ if (WARN_ON(pwmc0 > 3))
+ return -EINVAL;
period >>= pwmc0;
duty_cycle >>= pwmc0;
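	/*
	 * Worked example (illustrative): for period = 0x30000,
	 * period >> 16 = 0x3 and fls(0x3) = 2, so pwmc0 = 2 and the input
	 * clock is divided by 1 << 2 = 4; period >>= 2 then yields 0xc000,
	 * which fits the 16-bit hardware field again. For period <= 0xffff,
	 * fls(0) = 0 and the clock is used undivided.
	 */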
chip->chip.dev = &pdev->dev;
chip->chip.ops = &vt8500_pwm_ops;
- chip->chip.of_xlate = of_pwm_xlate_with_flags;
- chip->chip.of_pwm_n_cells = 3;
chip->chip.npwm = VT8500_NR_PWMS;
chip->clk = devm_clk_get(&pdev->dev, NULL);
static int vt8500_pwm_remove(struct platform_device *pdev)
{
- struct vt8500_chip *chip;
+ struct vt8500_chip *chip = platform_get_drvdata(pdev);
- chip = platform_get_drvdata(pdev);
- if (chip == NULL)
- return -ENODEV;
+ pwmchip_remove(&chip->chip);
clk_unprepare(chip->clk);
- return pwmchip_remove(&chip->chip);
+ return 0;
}
static struct platform_driver vt8500_pwm_driver = {
The watchdog is usually used together with the watchdog daemon
which is available from
- <ftp://ibiblio.org/pub/Linux/system/daemons/watchdog/>. This daemon can
- also monitor NFS connections and can reboot the machine when the process
- table is full.
+ <https://ibiblio.org/pub/Linux/system/daemons/watchdog/>. This daemon
+ can also monitor NFS connections and can reboot the machine when the
+ process table is full.
If unsure, say N.
Say Y here if you want to enable watchdog device status read through
sysfs attributes.
+config WATCHDOG_HRTIMER_PRETIMEOUT
+ bool "Enable watchdog hrtimer-based pretimeouts"
+ help
+ Enable this if you want to use an hrtimer-based pretimeout for
+ watchdogs that do not natively support pretimeouts. Be aware that
+ because this pretimeout functionality uses hrtimers, it may not be
+ able to fire before the actual watchdog fires in some situations.
+
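The mechanism behind this option can be pictured with a small sketch. This is not the in-tree watchdog_hrtimer_pretimeout implementation; the pretimeout_emul container and helpers below are hypothetical, and only the hrtimer_*() calls and watchdog_notify_pretimeout() (declared in the core's watchdog_pretimeout.h) are real kernel APIs:

#include <linux/hrtimer.h>
#include <linux/watchdog.h>

/* Hypothetical pairing of a watchdog with its emulated pretimeout timer. */
struct pretimeout_emul {
	struct watchdog_device *wdd;
	struct hrtimer timer;
};

static enum hrtimer_restart pretimeout_emul_fn(struct hrtimer *t)
{
	struct pretimeout_emul *p = container_of(t, struct pretimeout_emul, timer);

	watchdog_notify_pretimeout(p->wdd);	/* hand off to the governor */
	return HRTIMER_NORESTART;
}

static void pretimeout_emul_init(struct pretimeout_emul *p,
				 struct watchdog_device *wdd)
{
	p->wdd = wdd;
	hrtimer_init(&p->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	p->timer.function = pretimeout_emul_fn;
}

/* (Re)arm on every ping: fire pretimeout seconds before the hw timeout. */
static void pretimeout_emul_arm(struct pretimeout_emul *p)
{
	hrtimer_start(&p->timer,
		      ktime_set(p->wdd->timeout - p->wdd->pretimeout, 0),
		      HRTIMER_MODE_REL);
}

The caveat in the help text follows directly: the callback runs under the kernel's timekeeping, so a badly wedged system can miss the window before the hardware fires.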
comment "Watchdog Pretimeout Governors"
config WATCHDOG_PRETIMEOUT_GOV
depends on HAS_IOMEM
select WATCHDOG_CORE
help
- Watchdog driver for the xps_timebase_wdt ip core.
+ Watchdog driver for the xps_timebase_wdt IP core.
To compile this driver as a module, choose M here: the
module will be called of_xilinx_wdt.
select WATCHDOG_CORE
select RESET_CONTROLLER
help
- Watchdog timer embedded into Alphascale asm9260 chips. This will reboot your
- system when the timeout is reached.
+ Watchdog timer embedded into Alphascale asm9260 chips. This will
+ reboot your system when the timeout is reached.
config AT91RM9200_WATCHDOG
tristate "AT91RM9200 watchdog"
depends on ARCH_OMAP16XX || ARCH_OMAP2PLUS || COMPILE_TEST
select WATCHDOG_CORE
help
- Support for TI OMAP1610/OMAP1710/OMAP2420/OMAP3430/OMAP4430 watchdog. Say 'Y'
- here to enable the OMAP1610/OMAP1710/OMAP2420/OMAP3430/OMAP4430 watchdog timer.
+ Support for TI OMAP1610/OMAP1710/OMAP2420/OMAP3430/OMAP4430 watchdog.
+ Say 'Y' here to enable the
+ OMAP1610/OMAP1710/OMAP2420/OMAP3430/OMAP4430 watchdog timer.
config PNX4008_WATCHDOG
tristate "LPC32XX Watchdog"
Say Y here to include support for the watchdog timer in Toshiba
Visconti SoCs.
+config MSC313E_WATCHDOG
+ tristate "MStar MSC313e watchdog"
+ depends on ARCH_MSTARV7 || COMPILE_TEST
+ select WATCHDOG_CORE
+ help
+ Say Y here to include support for the Watchdog timer embedded
+ into MStar MSC313e chips. This will reboot your system when the
+ timeout is reached.
+
+ To compile this driver as a module, choose M here: the
+ module will be called msc313e_wdt.
+
# X86 (i386 + ia64 + x86_64) Architecture
config ACQUIRE_WDT
This is the driver for the built-in watchdog timer on the fit-PC2,
fit-PC2i, CM-iAM single-board computers made by Compulab.
- It`s possible to enable watchdog timer either from BIOS (F2) or from booted Linux.
- When "Watchdog Timer Value" enabled one can set 31-255 s operational range.
+ It's possible to enable the watchdog timer either from BIOS (F2) or
+ from booted Linux.
+ When the "Watchdog Timer Value" is enabled one can set 31-255 seconds
+ operational range.
- Entering BIOS setup temporary disables watchdog operation regardless to current state,
- so system will not be restarted while user in BIOS setup.
+ Entering BIOS setup temporarily disables watchdog operation regardless
+ of its current state, so the system will not be restarted while the
+ user is in BIOS setup.
- Once watchdog was enabled the system will be restarted every
+ Once the watchdog is enabled the system will be restarted every
"Watchdog Timer Value" period, so to prevent it the user can restart or
disable the watchdog.
depends on X86
help
This is the driver for the hardware watchdog on the IB700 Single
- Board Computer produced by TMC Technology (www.tmc-uk.com). This watchdog
- simply watches your kernel to make sure it doesn't freeze, and if
- it does, it reboots your computer after a certain amount of time.
+ Board Computer produced by TMC Technology (www.tmc-uk.com). This
+ watchdog simply watches your kernel to make sure it doesn't freeze,
+ and if it does, it reboots your computer after a certain amount of time.
- This driver is like the WDT501 driver but for slightly different hardware.
+ This driver is like the WDT501 driver but for slightly different
+ hardware.
To compile this driver as a module, choose M here: the
module will be called ib700wdt.
select WATCHDOG_CORE
depends on MACH_PIC32 || (MIPS && COMPILE_TEST)
help
- Watchdog driver for PIC32 instruction fetch counting timer. This specific
- timer is typically be used in misson critical and safety critical
- applications, where any single failure of the software functionality
- and sequencing must be detected.
+ Watchdog driver for PIC32 instruction fetch counting timer. This
+ specific timer is typically used in mission critical and safety
+ critical applications, where any single failure of the software
+ functionality and sequencing must be detected.
To compile this driver as a loadable module, choose M here.
The module will be called pic32-dmt.
For BookE processors (MPC85xx) use the BOOKE_WDT driver instead.
-config MV64X60_WDT
- tristate "MV64X60 (Marvell Discovery) Watchdog Timer"
- depends on MV64X60 || COMPILE_TEST
-
config PIKA_WDT
tristate "PIKA FPGA Watchdog"
depends on WARP || (PPC64 && COMPILE_TEST)
This card simply watches your kernel to make sure it doesn't freeze,
and if it does, it reboots your computer after a certain amount of
time. This driver is like the WDT501 driver but for different
- hardware. Please read <file:Documentation/watchdog/pcwd-watchdog.rst>. The PC
- watchdog cards can be ordered from <http://www.berkprod.com/>.
+ hardware. Please read <file:Documentation/watchdog/pcwd-watchdog.rst>.
+ The PC watchdog cards can be ordered from <http://www.berkprod.com/>.
To compile this driver as a module, choose M here: the
module will be called pcwd.
This option enables support for an in-secure watchdog timer driver for
Intel Keem Bay SoC. This WDT has a 32-bit timer that decrements every
count unit. An interrupt will be triggered when the count crosses
- the thershold configured in the register.
+ the threshold configured in the register.
To compile this driver as a module, choose M here: the
module will be called keembay_wdt.
watchdog-objs += watchdog_core.o watchdog_dev.o
watchdog-$(CONFIG_WATCHDOG_PRETIMEOUT_GOV) += watchdog_pretimeout.o
+watchdog-$(CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT) += watchdog_hrtimer_pretimeout.o
obj-$(CONFIG_WATCHDOG_PRETIMEOUT_GOV_NOOP) += pretimeout_noop.o
obj-$(CONFIG_WATCHDOG_PRETIMEOUT_GOV_PANIC) += pretimeout_panic.o
obj-$(CONFIG_PM8916_WATCHDOG) += pm8916_wdt.o
obj-$(CONFIG_ARM_SMC_WATCHDOG) += arm_smc_wdt.o
obj-$(CONFIG_VISCONTI_WATCHDOG) += visconti_wdt.o
+obj-$(CONFIG_MSC313E_WATCHDOG) += msc313e_wdt.o
# X86 (i386 + ia64 + x86_64) Architecture
obj-$(CONFIG_ACQUIRE_WDT) += acquirewdt.o
# POWERPC Architecture
obj-$(CONFIG_GEF_WDT) += gef_wdt.o
obj-$(CONFIG_8xxx_WDT) += mpc8xxx_wdt.o
-obj-$(CONFIG_MV64X60_WDT) += mv64x60_wdt.o
obj-$(CONFIG_PIKA_WDT) += pika_wdt.o
obj-$(CONFIG_BOOKE_WDT) += booke_wdt.o
obj-$(CONFIG_MEN_A21_WDT) += mena21_wdt.o
wdd->timeout = timeout;
- actual = min(timeout, wdd->max_hw_heartbeat_ms * 1000);
+ actual = min(timeout, wdd->max_hw_heartbeat_ms / 1000);
writel(actual * WDT_RATE_1MHZ, wdt->base + WDT_RELOAD_VALUE);
writel(WDT_RESTART_MAGIC, wdt->base + WDT_RESTART);
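	/*
	 * Units check for the clamp above (illustrative): timeout is in
	 * seconds while max_hw_heartbeat_ms is in milliseconds, so the bound
	 * must be max_hw_heartbeat_ms / 1000. With a 1 MHz count rate and a
	 * 32-bit reload register this works out to roughly 4294 s.
	 */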
struct aspeed_wdt *wdt = dev_get_drvdata(dev);
u32 status = readl(wdt->base + WDT_TIMEOUT_STATUS);
- return sprintf(buf, "%u\n",
- !(status & WDT_TIMEOUT_STATUS_BOOT_SECONDARY));
+ return sysfs_emit(buf, "%u\n",
+ !(status & WDT_TIMEOUT_STATUS_BOOT_SECONDARY));
}
static ssize_t access_cs0_store(struct device *dev,
static bool nowayout = WATCHDOG_NOWAYOUT;
+static inline void bcm7038_wdt_write(u32 value, void __iomem *addr)
+{
+ /* MIPS chips strapped for BE will automagically configure the
+ * peripheral registers for CPU-native byte order.
+ */
+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ __raw_writel(value, addr);
+ else
+ writel_relaxed(value, addr);
+}
+
+static inline u32 bcm7038_wdt_read(void __iomem *addr)
+{
+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ return __raw_readl(addr);
+ else
+ return readl_relaxed(addr);
+}
+
static void bcm7038_wdt_set_timeout_reg(struct watchdog_device *wdog)
{
struct bcm7038_watchdog *wdt = watchdog_get_drvdata(wdog);
timeout = wdt->rate * wdog->timeout;
- writel(timeout, wdt->base + WDT_TIMEOUT_REG);
+ bcm7038_wdt_write(timeout, wdt->base + WDT_TIMEOUT_REG);
}
static int bcm7038_wdt_ping(struct watchdog_device *wdog)
{
struct bcm7038_watchdog *wdt = watchdog_get_drvdata(wdog);
- writel(WDT_START_1, wdt->base + WDT_CMD_REG);
- writel(WDT_START_2, wdt->base + WDT_CMD_REG);
+ bcm7038_wdt_write(WDT_START_1, wdt->base + WDT_CMD_REG);
+ bcm7038_wdt_write(WDT_START_2, wdt->base + WDT_CMD_REG);
return 0;
}
{
struct bcm7038_watchdog *wdt = watchdog_get_drvdata(wdog);
- writel(WDT_STOP_1, wdt->base + WDT_CMD_REG);
- writel(WDT_STOP_2, wdt->base + WDT_CMD_REG);
+ bcm7038_wdt_write(WDT_STOP_1, wdt->base + WDT_CMD_REG);
+ bcm7038_wdt_write(WDT_STOP_2, wdt->base + WDT_CMD_REG);
return 0;
}
struct bcm7038_watchdog *wdt = watchdog_get_drvdata(wdog);
u32 time_left;
- time_left = readl(wdt->base + WDT_CMD_REG);
+ time_left = bcm7038_wdt_read(wdt->base + WDT_CMD_REG);
return time_left / wdt->rate;
}
}
/**
- * booke_wdt_disable - disable the watchdog on the given CPU
+ * __booke_wdt_disable - disable the watchdog on the given CPU
*
* This function is called on each CPU. It disables the watchdog on that CPU.
*
if (test_and_set_bit(DIAG_WDOG_BUSY, &wdt_status))
return -EBUSY;
- ret = -ENODEV;
-
if (MACHINE_IS_VM) {
ebc_cmd = kmalloc(MAX_CMDLEN, GFP_KERNEL);
if (!ebc_cmd) {
int ret;
unsigned int func;
- ret = -ENODEV;
-
if (MACHINE_IS_VM) {
ebc_cmd = kmalloc(MAX_CMDLEN, GFP_KERNEL);
if (!ebc_cmd)
*/
#include <linux/bitops.h>
-#include <linux/limits.h>
-#include <linux/kernel.h>
#include <linux/clk.h>
+#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/err.h>
+#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
+#include <linux/limits.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/interrupt.h>
#include <linux/of.h>
-#include <linux/pm.h>
#include <linux/platform_device.h>
+#include <linux/pm.h>
#include <linux/reset.h>
#include <linux/watchdog.h>
-#include <linux/debugfs.h>
#define WDOG_CONTROL_REG_OFFSET 0x00
#define WDOG_CONTROL_REG_WDT_EN_MASK 0x01
};
/**
- * cleanup_module:
+ * eurwdt_exit:
*
* Unload the watchdog. You cannot do this with any file handles open.
* If your watchdog is set to continue ticking on close and you unload
static const struct pci_device_id hpwdt_devices[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_COMPAQ, 0xB203) }, /* iLO2 */
{ PCI_DEVICE(PCI_VENDOR_ID_HP, 0x3306) }, /* iLO3 */
+ { PCI_DEVICE(PCI_VENDOR_ID_HP_3PAR, 0x0389) }, /* PCtrl */
{0}, /* terminate list */
};
MODULE_DEVICE_TABLE(pci, hpwdt_devices);
#define TCOBASE(p) ((p)->tco_res->start)
/* SMI Control and Enable Register */
#define SMI_EN(p) ((p)->smi_res->start)
+#define TCO_EN (1 << 13)
+#define GBL_SMI_EN (1 << 0)
#define TCO_RLD(p) (TCOBASE(p) + 0x00) /* TCO Timer Reload/Curr. Value */
#define TCOv1_TMR(p) (TCOBASE(p) + 0x01) /* TCOv1 Timer Initial Value*/
tmrval = seconds_to_ticks(p, t);
- /* For TCO v1 the timer counts down twice before rebooting */
- if (p->iTCO_version == 1)
+ /*
+ * If TCO SMIs are off, the timer counts down twice before rebooting.
+ * Otherwise, the BIOS generally reboots when the SMI triggers.
+ */
+ if (p->smi_res &&
+ (inl(SMI_EN(p)) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
tmrval /= 2;
/* from the specs: */
if (!devm_request_region(dev, p->smi_res->start,
resource_size(p->smi_res),
pdev->name)) {
- pr_err("I/O address 0x%04llx already in use, device disabled\n",
+ dev_err(dev, "I/O address 0x%04llx already in use, device disabled\n",
(u64)SMI_EN(p));
return -EBUSY;
}
} else if (iTCO_vendorsupport ||
turn_SMI_watchdog_clear_off >= p->iTCO_version) {
- pr_err("SMI I/O resource is missing\n");
+ dev_err(dev, "SMI I/O resource is missing\n");
return -ENODEV;
}
* Disables TCO logic generating an SMI#
*/
val32 = inl(SMI_EN(p));
- val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */
+ val32 &= ~TCO_EN; /* Turn off SMI clearing watchdog */
outl(val32, SMI_EN(p));
}
struct regmap *regmap;
struct watchdog_device wdog;
bool ext_reset;
+ bool clk_is_on;
};
static bool nowayout = WATCHDOG_NOWAYOUT;
{
struct imx2_wdt_device *wdev = watchdog_get_drvdata(wdog);
+ if (!wdev->clk_is_on)
+ return 0;
+
regmap_write(wdev->regmap, IMX2_WDT_WSR, IMX2_WDT_SEQ1);
regmap_write(wdev->regmap, IMX2_WDT_WSR, IMX2_WDT_SEQ2);
return 0;
if (ret)
return ret;
+ wdev->clk_is_on = true;
+
regmap_read(wdev->regmap, IMX2_WDT_WRSR, &val);
wdog->bootstatus = val & IMX2_WDT_WRSR_TOUT ? WDIOF_CARDRESET : 0;
clk_disable_unprepare(wdev->clk);
+ wdev->clk_is_on = false;
+
return 0;
}
if (ret)
return ret;
+ wdev->clk_is_on = true;
+
if (watchdog_active(wdog) && !imx2_wdt_is_running(wdev)) {
/*
* If the watchdog is still active and resumes
watchdog_stop_on_reboot(wdog);
watchdog_stop_on_unregister(wdog);
- ret = devm_watchdog_register_device(dev, wdog);
- if (ret)
- return ret;
-
ret = imx_scu_irq_group_enable(SC_IRQ_GROUP_WDOG,
SC_IRQ_WDOG,
true);
if (ret) {
dev_warn(dev, "Enable irq failed, pretimeout NOT supported\n");
- return 0;
+ goto register_device;
}
imx_sc_wdd->wdt_notifier.notifier_call = imx_sc_wdt_notify;
false);
dev_warn(dev,
"Register irq notifier failed, pretimeout NOT supported\n");
- return 0;
+ goto register_device;
}
ret = devm_add_action_or_reset(dev, imx_sc_wdt_action,
else
dev_warn(dev, "Add action failed, pretimeout NOT supported\n");
- return 0;
+register_device:
+ return devm_watchdog_register_device(dev, wdog);
}
static int __maybe_unused imx_sc_wdt_suspend(struct device *dev)
return val;
}
-static inline void superio_outw(int val, int reg)
-{
- outb(reg++, REG);
- outb(val >> 8, VAL);
- outb(reg, REG);
- outb(val, VAL);
-}
-
/* Internal function, should be called after superio_select(GPIO) */
static void _wdt_update_timeout(unsigned int t)
{
watchdog_set_drvdata(jz4740_wdt, drvdata);
drvdata->map = device_node_to_regmap(dev->parent->of_node);
- if (!drvdata->map) {
+ if (IS_ERR(drvdata->map)) {
dev_err(dev, "regmap not found\n");
- return -EINVAL;
+ return PTR_ERR(drvdata->map);
}
return devm_watchdog_register_device(dev, &drvdata->wdt);
#define TIM_WDOG_EN 0x8
#define TIM_SAFE 0xc
-#define WDT_ISR_MASK GENMASK(9, 8)
-#define WDT_ISR_CLEAR 0x8200ff18
+#define WDT_TH_INT_MASK BIT(8)
+#define WDT_TO_INT_MASK BIT(9)
+#define WDT_INT_CLEAR_SMC 0x8200ff18
+
#define WDT_UNLOCK 0xf1d0dead
+#define WDT_DISABLE 0x0
+#define WDT_ENABLE 0x1
+
#define WDT_LOAD_MAX U32_MAX
#define WDT_LOAD_MIN 1
+
#define WDT_TIMEOUT 5
+#define WDT_PRETIMEOUT 4
static unsigned int timeout = WDT_TIMEOUT;
module_param(timeout, int, 0);
{
struct keembay_wdt *wdt = watchdog_get_drvdata(wdog);
- keembay_wdt_set_timeout_reg(wdog);
- keembay_wdt_writel(wdt, TIM_WDOG_EN, 1);
+ keembay_wdt_writel(wdt, TIM_WDOG_EN, WDT_ENABLE);
return 0;
}
{
struct keembay_wdt *wdt = watchdog_get_drvdata(wdog);
- keembay_wdt_writel(wdt, TIM_WDOG_EN, 0);
+ keembay_wdt_writel(wdt, TIM_WDOG_EN, WDT_DISABLE);
return 0;
}
{
wdog->timeout = t;
keembay_wdt_set_timeout_reg(wdog);
+ keembay_wdt_set_pretimeout_reg(wdog);
return 0;
}
struct keembay_wdt *wdt = dev_id;
struct arm_smccc_res res;
- keembay_wdt_writel(wdt, TIM_WATCHDOG, 1);
- arm_smccc_smc(WDT_ISR_CLEAR, WDT_ISR_MASK, 0, 0, 0, 0, 0, 0, &res);
- dev_crit(wdt->wdd.parent, "Intel Keem Bay non-sec wdt timeout.\n");
+ arm_smccc_smc(WDT_INT_CLEAR_SMC, WDT_TO_INT_MASK, 0, 0, 0, 0, 0, 0, &res);
+ dev_crit(wdt->wdd.parent, "Intel Keem Bay non-secure wdt timeout.\n");
emergency_restart();
return IRQ_HANDLED;
struct keembay_wdt *wdt = dev_id;
struct arm_smccc_res res;
- arm_smccc_smc(WDT_ISR_CLEAR, WDT_ISR_MASK, 0, 0, 0, 0, 0, 0, &res);
- dev_crit(wdt->wdd.parent, "Intel Keem Bay non-sec wdt pre-timeout.\n");
+ keembay_wdt_set_pretimeout(&wdt->wdd, 0x0);
+
+ arm_smccc_smc(WDT_INT_CLEAR_SMC, WDT_TH_INT_MASK, 0, 0, 0, 0, 0, 0, &res);
+ dev_crit(wdt->wdd.parent, "Intel Keem Bay non-secure wdt pre-timeout.\n");
watchdog_notify_pretimeout(&wdt->wdd);
return IRQ_HANDLED;
wdt->wdd.min_timeout = WDT_LOAD_MIN;
wdt->wdd.max_timeout = WDT_LOAD_MAX / wdt->rate;
wdt->wdd.timeout = WDT_TIMEOUT;
+ wdt->wdd.pretimeout = WDT_PRETIMEOUT;
watchdog_set_drvdata(&wdt->wdd, wdt);
watchdog_set_nowayout(&wdt->wdd, nowayout);
watchdog_init_timeout(&wdt->wdd, timeout, dev);
keembay_wdt_set_timeout(&wdt->wdd, wdt->wdd.timeout);
+ keembay_wdt_set_pretimeout(&wdt->wdd, wdt->wdd.pretimeout);
ret = devm_watchdog_register_device(dev, &wdt->wdd);
if (ret)
MODULE_DEVICE_TABLE(of, keembay_wdt_match);
static struct platform_driver keembay_wdt_driver = {
- .probe = keembay_wdt_probe,
- .driver = {
+ .probe = keembay_wdt_probe,
+ .driver = {
.name = "keembay_wdt",
.of_match_table = keembay_wdt_match,
.pm = &keembay_wdt_pm_ops,
struct lpc18xx_wdt_dev *lpc18xx_wdt = platform_get_drvdata(pdev);
dev_warn(&pdev->dev, "I quit now, hardware will probably reboot!\n");
- del_timer(&lpc18xx_wdt->timer);
+ del_timer_sync(&lpc18xx_wdt->timer);
return 0;
}
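/*
 * del_timer_sync() rather than del_timer() above: the _sync variant also
 * waits for a handler already running on another CPU, so the software
 * kick timer cannot fire again once the device is torn down.
 */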
#endif /* CONFIG_DEBUG_FS */
};
-/*
+/**
* struct mei_mc_hdr - Management Control Command Header
*
* @command: Management Control (0x2)
complete(&wdt->response);
}
-/*
+/**
* mei_wdt_notif - callback for event notification
*
* @cldev: bus device
{
struct device *dev = &pdev->dev;
struct meson_wdt_dev *meson_wdt;
- const struct of_device_id *of_id;
int err;
meson_wdt = devm_kzalloc(dev, sizeof(*meson_wdt), GFP_KERNEL);
if (IS_ERR(meson_wdt->wdt_base))
return PTR_ERR(meson_wdt->wdt_base);
- of_id = of_match_device(meson_wdt_dt_ids, dev);
- if (!of_id) {
- dev_err(dev, "Unable to initialize WDT data\n");
- return -ENODEV;
- }
- meson_wdt->data = of_id->data;
+ meson_wdt->data = device_get_match_data(dev);
meson_wdt->wdt_dev.parent = dev;
meson_wdt->wdt_dev.info = &meson_wdt_info;
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * MStar WDT driver
+ *
+ * Copyright (C) 2019 - 2021 Daniel Palmer
+ * Copyright (C) 2021 Romain Perier
+ *
+ */
+
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/watchdog.h>
+
+#define REG_WDT_CLR 0x0
+#define REG_WDT_MAX_PRD_L 0x10
+#define REG_WDT_MAX_PRD_H 0x14
+
+#define MSC313E_WDT_MIN_TIMEOUT 1
+#define MSC313E_WDT_DEFAULT_TIMEOUT 30
+
+static unsigned int timeout;
+
+module_param(timeout, int, 0);
+MODULE_PARM_DESC(timeout, "Watchdog timeout in seconds");
+
+struct msc313e_wdt_priv {
+ void __iomem *base;
+ struct watchdog_device wdev;
+ struct clk *clk;
+};
+
+static int msc313e_wdt_start(struct watchdog_device *wdev)
+{
+ struct msc313e_wdt_priv *priv = watchdog_get_drvdata(wdev);
+ u32 timeout;
+ int err;
+
+ err = clk_prepare_enable(priv->clk);
+ if (err)
+ return err;
+
+ timeout = wdev->timeout * clk_get_rate(priv->clk);
+ writew(timeout & 0xffff, priv->base + REG_WDT_MAX_PRD_L);
+ writew((timeout >> 16) & 0xffff, priv->base + REG_WDT_MAX_PRD_H);
+ writew(1, priv->base + REG_WDT_CLR);
+ return 0;
+}
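+
+/*
+ * Illustrative register split (hypothetical 12 MHz input clock, 30 s
+ * timeout): 30 * 12000000 = 360000000 = 0x15752a00, so 0x2a00 goes to
+ * REG_WDT_MAX_PRD_L and 0x1575 to REG_WDT_MAX_PRD_H.
+ */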
+
+static int msc313e_wdt_ping(struct watchdog_device *wdev)
+{
+ struct msc313e_wdt_priv *priv = watchdog_get_drvdata(wdev);
+
+ writew(1, priv->base + REG_WDT_CLR);
+ return 0;
+}
+
+static int msc313e_wdt_stop(struct watchdog_device *wdev)
+{
+ struct msc313e_wdt_priv *priv = watchdog_get_drvdata(wdev);
+
+ writew(0, priv->base + REG_WDT_MAX_PRD_L);
+ writew(0, priv->base + REG_WDT_MAX_PRD_H);
+ writew(0, priv->base + REG_WDT_CLR);
+ clk_disable_unprepare(priv->clk);
+ return 0;
+}
+
+static int msc313e_wdt_settimeout(struct watchdog_device *wdev, unsigned int new_time)
+{
+ wdev->timeout = new_time;
+
+ return msc313e_wdt_start(wdev);
+}
+
+static const struct watchdog_info msc313e_wdt_ident = {
+ .identity = "MSC313e watchdog",
+ .options = WDIOF_MAGICCLOSE | WDIOF_KEEPALIVEPING | WDIOF_SETTIMEOUT,
+};
+
+static const struct watchdog_ops msc313e_wdt_ops = {
+ .owner = THIS_MODULE,
+ .start = msc313e_wdt_start,
+ .stop = msc313e_wdt_stop,
+ .ping = msc313e_wdt_ping,
+ .set_timeout = msc313e_wdt_settimeout,
+};
+
+static const struct of_device_id msc313e_wdt_of_match[] = {
+ { .compatible = "mstar,msc313e-wdt", },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, msc313e_wdt_of_match);
+
+static int msc313e_wdt_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct msc313e_wdt_priv *priv;
+
+ priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(priv->base))
+ return PTR_ERR(priv->base);
+
+ priv->clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(priv->clk)) {
+ dev_err(dev, "No input clock\n");
+ return PTR_ERR(priv->clk);
+ }
+
+ priv->wdev.info = &msc313e_wdt_ident;
+ priv->wdev.ops = &msc313e_wdt_ops;
+ priv->wdev.parent = dev;
+ priv->wdev.min_timeout = MSC313E_WDT_MIN_TIMEOUT;
+ priv->wdev.max_timeout = U32_MAX / clk_get_rate(priv->clk);
+ priv->wdev.timeout = MSC313E_WDT_DEFAULT_TIMEOUT;
+
+ watchdog_set_drvdata(&priv->wdev, priv);
+
+ watchdog_init_timeout(&priv->wdev, timeout, dev);
+ watchdog_stop_on_reboot(&priv->wdev);
+ watchdog_stop_on_unregister(&priv->wdev);
+
+ return devm_watchdog_register_device(dev, &priv->wdev);
+}
+
+static int __maybe_unused msc313e_wdt_suspend(struct device *dev)
+{
+ struct msc313e_wdt_priv *priv = dev_get_drvdata(dev);
+
+ if (watchdog_active(&priv->wdev))
+ msc313e_wdt_stop(&priv->wdev);
+
+ return 0;
+}
+
+static int __maybe_unused msc313e_wdt_resume(struct device *dev)
+{
+ struct msc313e_wdt_priv *priv = dev_get_drvdata(dev);
+
+ if (watchdog_active(&priv->wdev))
+ msc313e_wdt_start(&priv->wdev);
+
+ return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(msc313e_wdt_pm_ops, msc313e_wdt_suspend, msc313e_wdt_resume);
+
+static struct platform_driver msc313e_wdt_driver = {
+ .driver = {
+ .name = "msc313e-wdt",
+ .of_match_table = msc313e_wdt_of_match,
+ .pm = &msc313e_wdt_pm_ops,
+ },
+ .probe = msc313e_wdt_probe,
+};
+module_platform_driver(msc313e_wdt_driver);
+
+MODULE_AUTHOR("Daniel Palmer <daniel@thingy.jp>");
+MODULE_DESCRIPTION("Watchdog driver for MStar MSC313e");
+MODULE_LICENSE("GPL v2");
#include <linux/reset-controller.h>
#include <linux/types.h>
#include <linux/watchdog.h>
+#include <linux/interrupt.h>
#define WDT_MAX_TIMEOUT 31
-#define WDT_MIN_TIMEOUT 1
+#define WDT_MIN_TIMEOUT 2
#define WDT_LENGTH_TIMEOUT(n) ((n) << 5)
#define WDT_LENGTH 0x04
u32 reg;
wdt_dev->timeout = timeout;
+ /*
+ * In dual mode, the irq is triggered at timeout / 2 and
+ * the real timeout occurs at timeout.
+ */
+ if (wdt_dev->pretimeout)
+ wdt_dev->pretimeout = timeout / 2;
/*
* One bit is the value of 512 ticks
* The clock has 32 KHz
*/
- reg = WDT_LENGTH_TIMEOUT(timeout << 6) | WDT_LENGTH_KEY;
+ reg = WDT_LENGTH_TIMEOUT((timeout - wdt_dev->pretimeout) << 6)
+ | WDT_LENGTH_KEY;
iowrite32(reg, wdt_base + WDT_LENGTH);
mtk_wdt_ping(wdt_dev);
return ret;
reg = ioread32(wdt_base + WDT_MODE);
- reg &= ~(WDT_MODE_IRQ_EN | WDT_MODE_DUAL_EN);
+ if (wdt_dev->pretimeout)
+ reg |= (WDT_MODE_IRQ_EN | WDT_MODE_DUAL_EN);
+ else
+ reg &= ~(WDT_MODE_IRQ_EN | WDT_MODE_DUAL_EN);
reg |= (WDT_MODE_EN | WDT_MODE_KEY);
iowrite32(reg, wdt_base + WDT_MODE);
return 0;
}
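/*
 * Worked example for the dual-mode arithmetic above (illustrative): with
 * timeout = 30 s and pretimeout enabled, pretimeout becomes 15 s and
 * WDT_LENGTH is programmed from (30 - 15) s, so the bark interrupt fires
 * at 15 s and the hardware reset follows at the full 30 s.
 */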
+static int mtk_wdt_set_pretimeout(struct watchdog_device *wdd,
+ unsigned int timeout)
+{
+ struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdd);
+ void __iomem *wdt_base = mtk_wdt->wdt_base;
+ u32 reg = ioread32(wdt_base + WDT_MODE);
+
+ if (timeout && !wdd->pretimeout) {
+ wdd->pretimeout = wdd->timeout / 2;
+ reg |= (WDT_MODE_IRQ_EN | WDT_MODE_DUAL_EN);
+ } else if (!timeout && wdd->pretimeout) {
+ wdd->pretimeout = 0;
+ reg &= ~(WDT_MODE_IRQ_EN | WDT_MODE_DUAL_EN);
+ } else {
+ return 0;
+ }
+
+ reg |= WDT_MODE_KEY;
+ iowrite32(reg, wdt_base + WDT_MODE);
+
+ return mtk_wdt_set_timeout(wdd, wdd->timeout);
+}
+
+static irqreturn_t mtk_wdt_isr(int irq, void *arg)
+{
+ struct watchdog_device *wdd = arg;
+
+ watchdog_notify_pretimeout(wdd);
+
+ return IRQ_HANDLED;
+}
+
static const struct watchdog_info mtk_wdt_info = {
.identity = DRV_NAME,
.options = WDIOF_SETTIMEOUT |
WDIOF_MAGICCLOSE,
};
+static const struct watchdog_info mtk_wdt_pt_info = {
+ .identity = DRV_NAME,
+ .options = WDIOF_SETTIMEOUT |
+ WDIOF_PRETIMEOUT |
+ WDIOF_KEEPALIVEPING |
+ WDIOF_MAGICCLOSE,
+};
+
static const struct watchdog_ops mtk_wdt_ops = {
.owner = THIS_MODULE,
.start = mtk_wdt_start,
.stop = mtk_wdt_stop,
.ping = mtk_wdt_ping,
.set_timeout = mtk_wdt_set_timeout,
+ .set_pretimeout = mtk_wdt_set_pretimeout,
.restart = mtk_wdt_restart,
};
struct device *dev = &pdev->dev;
struct mtk_wdt_dev *mtk_wdt;
const struct mtk_wdt_data *wdt_data;
- int err;
+ int err, irq;
mtk_wdt = devm_kzalloc(dev, sizeof(*mtk_wdt), GFP_KERNEL);
if (!mtk_wdt)
if (IS_ERR(mtk_wdt->wdt_base))
return PTR_ERR(mtk_wdt->wdt_base);
- mtk_wdt->wdt_dev.info = &mtk_wdt_info;
+ irq = platform_get_irq(pdev, 0);
+ if (irq > 0) {
+ err = devm_request_irq(&pdev->dev, irq, mtk_wdt_isr, 0, "wdt_bark",
+ &mtk_wdt->wdt_dev);
+ if (err)
+ return err;
+
+ mtk_wdt->wdt_dev.info = &mtk_wdt_pt_info;
+ mtk_wdt->wdt_dev.pretimeout = WDT_MAX_TIMEOUT / 2;
+ } else {
+ if (irq == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+
+ mtk_wdt->wdt_dev.info = &mtk_wdt_info;
+ }
+
mtk_wdt->wdt_dev.ops = &mtk_wdt_ops;
mtk_wdt->wdt_dev.timeout = WDT_MAX_TIMEOUT;
mtk_wdt->wdt_dev.max_hw_heartbeat_ms = WDT_MAX_TIMEOUT * 1000;
#include <linux/uaccess.h>
#include <linux/gpio/consumer.h>
-#include <asm/mach-au1x00/au1000.h>
-
#define MTX1_WDT_INTERVAL (5 * HZ)
static int ticks = 100 * HZ;
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/*
- * mv64x60_wdt.c - MV64X60 (Marvell Discovery) watchdog userspace interface
- *
- * Author: James Chapman <jchapman@katalix.com>
- *
- * Platform-specific setup code should configure the dog to generate
- * interrupt or reset as required. This code only enables/disables
- * and services the watchdog.
- *
- * Derived from mpc8xx_wdt.c, with the following copyright.
- *
- * 2002 (c) Florian Schirmer <jolt@tuxbox.org>
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/fs.h>
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/miscdevice.h>
-#include <linux/module.h>
-#include <linux/watchdog.h>
-#include <linux/platform_device.h>
-#include <linux/mv643xx.h>
-#include <linux/uaccess.h>
-#include <linux/io.h>
-
-#define MV64x60_WDT_WDC_OFFSET 0
-
-/*
- * The watchdog configuration register contains a pair of 2-bit fields,
- * 1. a reload field, bits 27-26, which triggers a reload of
- * the countdown register, and
- * 2. an enable field, bits 25-24, which toggles between
- * enabling and disabling the watchdog timer.
- * Bit 31 is a read-only field which indicates whether the
- * watchdog timer is currently enabled.
- *
- * The low 24 bits contain the timer reload value.
- */
-#define MV64x60_WDC_ENABLE_SHIFT 24
-#define MV64x60_WDC_SERVICE_SHIFT 26
-#define MV64x60_WDC_ENABLED_SHIFT 31
-
-#define MV64x60_WDC_ENABLED_TRUE 1
-#define MV64x60_WDC_ENABLED_FALSE 0
-
-/* Flags bits */
-#define MV64x60_WDOG_FLAG_OPENED 0
-
-static unsigned long wdt_flags;
-static int wdt_status;
-static void __iomem *mv64x60_wdt_regs;
-static int mv64x60_wdt_timeout;
-static int mv64x60_wdt_count;
-static unsigned int bus_clk;
-static char expect_close;
-static DEFINE_SPINLOCK(mv64x60_wdt_spinlock);
-
-static bool nowayout = WATCHDOG_NOWAYOUT;
-module_param(nowayout, bool, 0);
-MODULE_PARM_DESC(nowayout,
- "Watchdog cannot be stopped once started (default="
- __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
-
-static int mv64x60_wdt_toggle_wdc(int enabled_predicate, int field_shift)
-{
- u32 data;
- u32 enabled;
- int ret = 0;
-
- spin_lock(&mv64x60_wdt_spinlock);
- data = readl(mv64x60_wdt_regs + MV64x60_WDT_WDC_OFFSET);
- enabled = (data >> MV64x60_WDC_ENABLED_SHIFT) & 1;
-
- /* only toggle the requested field if enabled state matches predicate */
- if ((enabled ^ enabled_predicate) == 0) {
- /* We write a 1, then a 2 -- to the appropriate field */
- data = (1 << field_shift) | mv64x60_wdt_count;
- writel(data, mv64x60_wdt_regs + MV64x60_WDT_WDC_OFFSET);
-
- data = (2 << field_shift) | mv64x60_wdt_count;
- writel(data, mv64x60_wdt_regs + MV64x60_WDT_WDC_OFFSET);
- ret = 1;
- }
- spin_unlock(&mv64x60_wdt_spinlock);
-
- return ret;
-}
-
-static void mv64x60_wdt_service(void)
-{
- mv64x60_wdt_toggle_wdc(MV64x60_WDC_ENABLED_TRUE,
- MV64x60_WDC_SERVICE_SHIFT);
-}
-
-static void mv64x60_wdt_handler_enable(void)
-{
- if (mv64x60_wdt_toggle_wdc(MV64x60_WDC_ENABLED_FALSE,
- MV64x60_WDC_ENABLE_SHIFT)) {
- mv64x60_wdt_service();
- pr_notice("watchdog activated\n");
- }
-}
-
-static void mv64x60_wdt_handler_disable(void)
-{
- if (mv64x60_wdt_toggle_wdc(MV64x60_WDC_ENABLED_TRUE,
- MV64x60_WDC_ENABLE_SHIFT))
- pr_notice("watchdog deactivated\n");
-}
-
-static void mv64x60_wdt_set_timeout(unsigned int timeout)
-{
- /* maximum bus cycle count is 0xFFFFFFFF */
- if (timeout > 0xFFFFFFFF / bus_clk)
- timeout = 0xFFFFFFFF / bus_clk;
-
- mv64x60_wdt_count = timeout * bus_clk >> 8;
- mv64x60_wdt_timeout = timeout;
-}
-
-static int mv64x60_wdt_open(struct inode *inode, struct file *file)
-{
- if (test_and_set_bit(MV64x60_WDOG_FLAG_OPENED, &wdt_flags))
- return -EBUSY;
-
- if (nowayout)
- __module_get(THIS_MODULE);
-
- mv64x60_wdt_handler_enable();
-
- return stream_open(inode, file);
-}
-
-static int mv64x60_wdt_release(struct inode *inode, struct file *file)
-{
- if (expect_close == 42)
- mv64x60_wdt_handler_disable();
- else {
- pr_crit("unexpected close, not stopping timer!\n");
- mv64x60_wdt_service();
- }
- expect_close = 0;
-
- clear_bit(MV64x60_WDOG_FLAG_OPENED, &wdt_flags);
-
- return 0;
-}
-
-static ssize_t mv64x60_wdt_write(struct file *file, const char __user *data,
- size_t len, loff_t *ppos)
-{
- if (len) {
- if (!nowayout) {
- size_t i;
-
- expect_close = 0;
-
- for (i = 0; i != len; i++) {
- char c;
- if (get_user(c, data + i))
- return -EFAULT;
- if (c == 'V')
- expect_close = 42;
- }
- }
- mv64x60_wdt_service();
- }
-
- return len;
-}
-
-static long mv64x60_wdt_ioctl(struct file *file,
- unsigned int cmd, unsigned long arg)
-{
- int timeout;
- int options;
- void __user *argp = (void __user *)arg;
- static const struct watchdog_info info = {
- .options = WDIOF_SETTIMEOUT |
- WDIOF_MAGICCLOSE |
- WDIOF_KEEPALIVEPING,
- .firmware_version = 0,
- .identity = "MV64x60 watchdog",
- };
-
- switch (cmd) {
- case WDIOC_GETSUPPORT:
- if (copy_to_user(argp, &info, sizeof(info)))
- return -EFAULT;
- break;
-
- case WDIOC_GETSTATUS:
- case WDIOC_GETBOOTSTATUS:
- if (put_user(wdt_status, (int __user *)argp))
- return -EFAULT;
- wdt_status &= ~WDIOF_KEEPALIVEPING;
- break;
-
- case WDIOC_GETTEMP:
- return -EOPNOTSUPP;
-
- case WDIOC_SETOPTIONS:
- if (get_user(options, (int __user *)argp))
- return -EFAULT;
-
- if (options & WDIOS_DISABLECARD)
- mv64x60_wdt_handler_disable();
-
- if (options & WDIOS_ENABLECARD)
- mv64x60_wdt_handler_enable();
- break;
-
- case WDIOC_KEEPALIVE:
- mv64x60_wdt_service();
- wdt_status |= WDIOF_KEEPALIVEPING;
- break;
-
- case WDIOC_SETTIMEOUT:
- if (get_user(timeout, (int __user *)argp))
- return -EFAULT;
- mv64x60_wdt_set_timeout(timeout);
- fallthrough;
-
- case WDIOC_GETTIMEOUT:
- if (put_user(mv64x60_wdt_timeout, (int __user *)argp))
- return -EFAULT;
- break;
-
- default:
- return -ENOTTY;
- }
-
- return 0;
-}
-
-static const struct file_operations mv64x60_wdt_fops = {
- .owner = THIS_MODULE,
- .llseek = no_llseek,
- .write = mv64x60_wdt_write,
- .unlocked_ioctl = mv64x60_wdt_ioctl,
- .compat_ioctl = compat_ptr_ioctl,
- .open = mv64x60_wdt_open,
- .release = mv64x60_wdt_release,
-};
-
-static struct miscdevice mv64x60_wdt_miscdev = {
- .minor = WATCHDOG_MINOR,
- .name = "watchdog",
- .fops = &mv64x60_wdt_fops,
-};
-
-static int mv64x60_wdt_probe(struct platform_device *dev)
-{
- struct mv64x60_wdt_pdata *pdata = dev_get_platdata(&dev->dev);
- struct resource *r;
- int timeout = 10;
-
- bus_clk = 133; /* in MHz */
- if (pdata) {
- timeout = pdata->timeout;
- bus_clk = pdata->bus_clk;
- }
-
- /* Since bus_clk is truncated MHz, actual frequency could be
- * up to 1MHz higher. Round up, since it's better to time out
- * too late than too soon.
- */
- bus_clk++;
- bus_clk *= 1000000; /* convert to Hz */
-
- r = platform_get_resource(dev, IORESOURCE_MEM, 0);
- if (!r)
- return -ENODEV;
-
- mv64x60_wdt_regs = devm_ioremap(&dev->dev, r->start, resource_size(r));
- if (mv64x60_wdt_regs == NULL)
- return -ENOMEM;
-
- mv64x60_wdt_set_timeout(timeout);
-
- mv64x60_wdt_handler_disable(); /* in case timer was already running */
-
- return misc_register(&mv64x60_wdt_miscdev);
-}
-
-static int mv64x60_wdt_remove(struct platform_device *dev)
-{
- misc_deregister(&mv64x60_wdt_miscdev);
-
- mv64x60_wdt_handler_disable();
-
- return 0;
-}
-
-static struct platform_driver mv64x60_wdt_driver = {
- .probe = mv64x60_wdt_probe,
- .remove = mv64x60_wdt_remove,
- .driver = {
- .name = MV64x60_WDT_NAME,
- },
-};
-
-static int __init mv64x60_wdt_init(void)
-{
- pr_info("MV64x60 watchdog driver\n");
-
- return platform_driver_register(&mv64x60_wdt_driver);
-}
-
-static void __exit mv64x60_wdt_exit(void)
-{
- platform_driver_unregister(&mv64x60_wdt_driver);
-}
-
-module_init(mv64x60_wdt_init);
-module_exit(mv64x60_wdt_exit);
-
-MODULE_AUTHOR("James Chapman <jchapman@katalix.com>");
-MODULE_DESCRIPTION("MV64x60 watchdog driver");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:" MV64x60_WDT_NAME);
}
/**
- * Poke the watchdog when an interrupt is received
+ * octeon_wdt_poke_irq - Poke the watchdog when an interrupt is received
*
* @cpl:
* @dev_id:
extern int prom_putchar(char c);
/**
- * Write a string to the uart
+ * octeon_wdt_write_string - Write a string to the uart
*
* @str: String to write
*/
}
/**
- * Write a hex number out of the uart
+ * octeon_wdt_write_hex() - Write a hex number out of the uart
*
* @value: Number to display
* @digits: Number of digits to print (1 to 16)
};
/**
+ * octeon_wdt_nmi_stage3:
+ *
* NMI stage 3 handler. NMIs are handled in the following manner:
* 1) The first NMI handler enables CVMSEG and transfers from
* the bootbus region into normal memory. It is careful to not
static enum cpuhp_state octeon_wdt_online;
/**
- * Module/ driver initialization.
+ * octeon_wdt_init - Module/ driver initialization.
*
* Returns Zero on success
*/
}
/**
- * Module / driver shutdown
+ * octeon_wdt_cleanup - Module / driver shutdown
*/
static void __exit octeon_wdt_cleanup(void)
{
* (C) Copyright 2011 (Alejandro Cabrera <aldaya@gmail.com>)
*/
+#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/module.h>
#define XWT_TBR_OFFSET 0x8 /* Timebase Register Offset */
/* Control/Status Register Masks */
-#define XWT_CSR0_WRS_MASK 0x00000008 /* Reset status */
-#define XWT_CSR0_WDS_MASK 0x00000004 /* Timer state */
-#define XWT_CSR0_EWDT1_MASK 0x00000002 /* Enable bit 1 */
+#define XWT_CSR0_WRS_MASK BIT(3) /* Reset status */
+#define XWT_CSR0_WDS_MASK BIT(2) /* Timer state */
+#define XWT_CSR0_EWDT1_MASK BIT(1) /* Enable bit 1 */
/* Control/Status Register 0/1 bits */
-#define XWT_CSRX_EWDT2_MASK 0x00000001 /* Enable bit 2 */
+#define XWT_CSRX_EWDT2_MASK BIT(0) /* Enable bit 2 */
/* SelfTest constants */
#define XWT_MAX_SELFTEST_LOOP_COUNT 0x00010000
struct xwdt_device {
void __iomem *base;
u32 wdt_interval;
- spinlock_t spinlock;
+ spinlock_t spinlock; /* spinlock for register handling */
struct watchdog_device xilinx_wdt_wdd;
struct clk *clk;
};
spin_unlock(&xdev->spinlock);
+ dev_dbg(wdd->parent, "Watchdog Started!\n");
+
return 0;
}
clk_disable(xdev->clk);
- pr_info("Stopped!\n");
+ dev_dbg(wdd->parent, "Watchdog Stopped!\n");
return 0;
}
"The watchdog clock freq cannot be obtained\n");
} else {
pfreq = clk_get_rate(xdev->clk);
+ rc = clk_prepare_enable(xdev->clk);
+ if (rc) {
+ dev_err(dev, "unable to enable clock\n");
+ return rc;
+ }
+ rc = devm_add_action_or_reset(dev, xwdt_clk_disable_unprepare,
+ xdev->clk);
+ if (rc)
+ return rc;
}
/*
spin_lock_init(&xdev->spinlock);
watchdog_set_drvdata(xilinx_wdt_wdd, xdev);
- rc = clk_prepare_enable(xdev->clk);
- if (rc) {
- dev_err(dev, "unable to enable clock\n");
- return rc;
- }
- rc = devm_add_action_or_reset(dev, xwdt_clk_disable_unprepare,
- xdev->clk);
- if (rc)
- return rc;
-
rc = xwdt_selftest(xdev);
if (rc == XWT_TIMER_FAILED) {
dev_err(dev, "SelfTest routine error\n");
clk_disable(xdev->clk);
- dev_info(dev, "Xilinx Watchdog Timer at %p with timeout %ds\n",
- xdev->base, xilinx_wdt_wdd->timeout);
+ dev_info(dev, "Xilinx Watchdog Timer with timeout %ds\n",
+ xilinx_wdt_wdd->timeout);
platform_set_drvdata(pdev, xdev);
return ret;
}
- /* Fix the wdt and timer1 clock freqency to 25MHz */
+ /* Fix the wdt and timer1 clock frequency to 25MHz */
val = WDT_AXP_FIXED_ENABLE_BIT | TIMER1_FIXED_ENABLE_BIT;
atomic_io_modify(dev->reg + TIMER_CTRL, val, val);
/* -- Notifier functions ----------------------------------------*/
/**
- * notify_sys:
+ * pc87413_notify_sys:
* @this: our notifier block
* @code: the event being reported
* @unused: unused
return 0;
}
-static SIMPLE_DEV_PM_OPS(qcom_wdt_pm_ops, qcom_wdt_suspend, qcom_wdt_resume);
+static const struct dev_pm_ops qcom_wdt_pm_ops = {
+ SET_LATE_SYSTEM_SLEEP_PM_OPS(qcom_wdt_suspend, qcom_wdt_resume)
+};
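Replacing SIMPLE_DEV_PM_OPS with an explicit dev_pm_ops built from SET_LATE_SYSTEM_SLEEP_PM_OPS moves the suspend callback to the late phase and the resume callback to the early phase of system sleep, which keeps the watchdog running for as much of the suspend/resume path as possible (rationale inferred from the macro's semantics, not stated in the hunk itself).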
static const struct of_device_id qcom_wdt_of_table[] = {
{ .compatible = "qcom,kpss-timer", .data = &match_data_apcs_tmr },
wdd->min_timeout = MIN_WDT_TIMEOUT;
wdd->max_timeout = MAX_WDT_TIMEOUT;
wdt->last_ping = jiffies;
- wdt->sam9x60_support = of_device_is_compatible(dev->of_node,
- "microchip,sam9x60-wdt");
+
+ if (of_device_is_compatible(dev->of_node, "microchip,sam9x60-wdt") ||
+ of_device_is_compatible(dev->of_node, "microchip,sama7g5-wdt"))
+ wdt->sam9x60_support = true;
watchdog_set_drvdata(wdd, wdt);
{
.compatible = "microchip,sam9x60-wdt",
},
+	{
+		.compatible = "microchip,sama7g5-wdt",
+	},
{ }
};
MODULE_DEVICE_TABLE(of, sama5d4_wdt_of_match);
static void wdt_turnoff(void)
{
/* Stop the timer */
- del_timer(&timer);
+ del_timer_sync(&timer);
inb_p(wdt_stop);
pr_info("Watchdog timer is now disabled...\n");
}
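The del_timer() to del_timer_sync() conversions here and in the hunks below matter on SMP: del_timer() returns even while the timer handler may still be running on another CPU, so the hardware stop that immediately follows could race with a final ping. del_timer_sync() waits for any in-flight handler to complete before proceeding.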
#define SBSA_GWDT_WCS_WS0 BIT(1)
#define SBSA_GWDT_WCS_WS1 BIT(2)
+#define SBSA_GWDT_VERSION_MASK 0xF
+#define SBSA_GWDT_VERSION_SHIFT 16
+
/**
* struct sbsa_gwdt - Internal representation of the SBSA GWDT
* @wdd: kernel watchdog_device structure
* @clk: store the System Counter clock frequency, in Hz.
+ * @version: store the architecture version
* @refresh_base: Virtual address of the watchdog refresh frame
* @control_base: Virtual address of the watchdog control frame
*/
struct sbsa_gwdt {
struct watchdog_device wdd;
u32 clk;
+ int version;
void __iomem *refresh_base;
void __iomem *control_base;
};
"Watchdog cannot be stopped once started (default="
__MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
+/*
+ * Arm Base System Architecture 1.0 introduces watchdog v1, which
+ * widens the watchdog offset register (WOR) to 48 bits.
+ * - For version 0: WOR is 32 bits;
+ * - For version 1: WOR is 48 bits, spanning the registers at
+ *   offsets 0x8 and 0xC; bits [63:48] are reserved,
+ *   Read-As-Zero and Writes-Ignored.
+ */
+static u64 sbsa_gwdt_reg_read(struct sbsa_gwdt *gwdt)
+{
+ if (gwdt->version == 0)
+ return readl(gwdt->control_base + SBSA_GWDT_WOR);
+ else
+ return readq(gwdt->control_base + SBSA_GWDT_WOR);
+}
+
+static void sbsa_gwdt_reg_write(u64 val, struct sbsa_gwdt *gwdt)
+{
+ if (gwdt->version == 0)
+ writel((u32)val, gwdt->control_base + SBSA_GWDT_WOR);
+ else
+ writeq(val, gwdt->control_base + SBSA_GWDT_WOR);
+}
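On a 32-bit-only bus, the 48-bit WOR access implied by the v1 path has to be split into two 32-bit operations; the driver leans on readq/writeq for this (and already uses lo_hi_readq for WCV further down). A minimal sketch of the split, with a hypothetical helper name:

#include <linux/io.h>
#include <linux/types.h>

/* Illustrative only: read a 48-bit WOR as two 32-bit halves. */
static u64 example_wor_read48(void __iomem *wor)
{
	u64 lo = readl(wor);	  /* bits [31:0], register offset 0x8 */
	u64 hi = readl(wor + 4);  /* bits [47:32], register offset 0xC */

	return lo | (hi << 32);
}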
+
/*
* watchdog operation functions
*/
wdd->timeout = timeout;
if (action)
- writel(gwdt->clk * timeout,
- gwdt->control_base + SBSA_GWDT_WOR);
+ sbsa_gwdt_reg_write(gwdt->clk * timeout, gwdt);
else
/*
		 * In single-stage mode, the first signal (WS0) is ignored
		 * and the timeout is (WOR * 2), so WOR should be
		 * programmed to half of the timeout value.
*/
- writel(gwdt->clk / 2 * timeout,
- gwdt->control_base + SBSA_GWDT_WOR);
+ sbsa_gwdt_reg_write(gwdt->clk / 2 * timeout, gwdt);
return 0;
}
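The arithmetic above is worth making explicit. The helper below is a hypothetical restatement for illustration, not part of the driver:

#include <linux/types.h>

/* Illustrative only: ticks to program into WOR for a requested timeout. */
static u64 example_wor_ticks(u32 clk_hz, unsigned int timeout_s, bool two_stage)
{
	/* Single-stage mode ignores WS0, so the effective timeout is WOR * 2. */
	if (two_stage)
		return (u64)clk_hz * timeout_s;
	return (u64)clk_hz / 2 * timeout_s;
}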
*/
if (!action &&
!(readl(gwdt->control_base + SBSA_GWDT_WCS) & SBSA_GWDT_WCS_WS0))
- timeleft += readl(gwdt->control_base + SBSA_GWDT_WOR);
+ timeleft += sbsa_gwdt_reg_read(gwdt);
timeleft += lo_hi_readq(gwdt->control_base + SBSA_GWDT_WCV) -
arch_timer_read_counter();
return 0;
}
+static void sbsa_gwdt_get_version(struct watchdog_device *wdd)
+{
+ struct sbsa_gwdt *gwdt = watchdog_get_drvdata(wdd);
+ int ver;
+
+ ver = readl(gwdt->control_base + SBSA_GWDT_W_IIDR);
+ ver = (ver >> SBSA_GWDT_VERSION_SHIFT) & SBSA_GWDT_VERSION_MASK;
+
+ gwdt->version = ver;
+}
+
static int sbsa_gwdt_start(struct watchdog_device *wdd)
{
struct sbsa_gwdt *gwdt = watchdog_get_drvdata(wdd);
wdd->info = &sbsa_gwdt_info;
wdd->ops = &sbsa_gwdt_ops;
wdd->min_timeout = 1;
- wdd->max_hw_heartbeat_ms = U32_MAX / gwdt->clk * 1000;
wdd->timeout = DEFAULT_TIMEOUT;
watchdog_set_drvdata(wdd, gwdt);
watchdog_set_nowayout(wdd, nowayout);
+ sbsa_gwdt_get_version(wdd);
+ if (gwdt->version == 0)
+ wdd->max_hw_heartbeat_ms = U32_MAX / gwdt->clk * 1000;
+ else
+ wdd->max_hw_heartbeat_ms = GENMASK_ULL(47, 0) / gwdt->clk * 1000;
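As a worked example (counter frequency illustrative): with a 100 MHz system counter, the v0 bound is U32_MAX / 100000000 * 1000 = 42000 ms, i.e. 42 s, while the 48-bit v1 WOR allows 281474976710655 / 100000000 * 1000 ≈ 2814749 s, roughly 32 days.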
status = readl(cf_base + SBSA_GWDT_WCS);
if (status & SBSA_GWDT_WCS_WS1) {
static int wdt_turnoff(void)
{
/* Stop the timer */
- del_timer(&timer);
+ del_timer_sync(&timer);
/* Stop the watchdog */
wdt_config(0);
/*
* Initial timeout value, may be overwritten by device tree or module
- * parmeter in watchdog_init_timeout().
+ * parameter in watchdog_init_timeout().
*
* Reading a zero here means that either the hardware has a default
* value of zero (which is very unlikely and definitely a hardware
* warranty of any kind, whether express or implied.
*/
-#include <linux/acpi.h>
#include <linux/device.h>
#include <linux/resource.h>
#include <linux/amba/bus.h>
#include <linux/math64.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/of.h>
#include <linux/pm.h>
+#include <linux/property.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
* @wdd: instance of struct watchdog_device
* @lock: spin lock protecting dev structure and io access
* @base: base address of wdt
- * @clk: clock structure of wdt
+ * @clk: (optional) clock structure of wdt
+ * @rate: (optional) clock rate when provided via properties
* @adev: amba device structure of wdt
* @status: current status of wdt
* @load_val: load value to be set for current timeout
sp805_wdt_probe(struct amba_device *adev, const struct amba_id *id)
{
struct sp805_wdt *wdt;
+ u64 rate = 0;
int ret = 0;
wdt = devm_kzalloc(&adev->dev, sizeof(*wdt), GFP_KERNEL);
if (IS_ERR(wdt->base))
return PTR_ERR(wdt->base);
- if (adev->dev.of_node) {
- wdt->clk = devm_clk_get(&adev->dev, NULL);
- if (IS_ERR(wdt->clk)) {
- dev_err(&adev->dev, "Clock not found\n");
- return PTR_ERR(wdt->clk);
- }
- wdt->rate = clk_get_rate(wdt->clk);
- } else if (has_acpi_companion(&adev->dev)) {
- /*
- * When Driver probe with ACPI device, clock devices
- * are not available, so watchdog rate get from
- * clock-frequency property given in _DSD object.
- */
- device_property_read_u64(&adev->dev, "clock-frequency",
- &wdt->rate);
- if (!wdt->rate) {
- dev_err(&adev->dev, "no clock-frequency property\n");
- return -ENODEV;
- }
+	/*
+	 * When the driver is probed via ACPI, clock devices are not
+	 * available, so the watchdog rate is taken from the
+	 * clock-frequency property supplied in the _DSD object; when a
+	 * clock is present, its rate still takes precedence below.
+	 */
+ device_property_read_u64(&adev->dev, "clock-frequency", &rate);
+
+ wdt->clk = devm_clk_get_optional(&adev->dev, NULL);
+ if (IS_ERR(wdt->clk))
+ return dev_err_probe(&adev->dev, PTR_ERR(wdt->clk), "Clock not found\n");
+
+ wdt->rate = clk_get_rate(wdt->clk);
+ if (!wdt->rate)
+ wdt->rate = rate;
+ if (!wdt->rate) {
+ dev_err(&adev->dev, "no clock-frequency property\n");
+ return -ENODEV;
}
wdt->adev = adev;
static void wdt_turnoff(void)
{
/* Stop the timer */
- del_timer(&timer);
+ del_timer_sync(&timer);
wdt_change(WDT_DISABLE);
*
* (c) Copyright 2008-2011 Wim Van Sebroeck <wim@iguana.be>.
*
+ * (c) Copyright 2021 Hewlett Packard Enterprise Development LP.
+ *
* This source code is part of the generic code that can be used
* by all the watchdog timer drivers.
*
* This material is provided "AS-IS" and at no charge.
*/
+#include <linux/hrtimer.h>
+#include <linux/kthread.h>
+
#define MAX_DOGS 32 /* Maximum number of watchdog devices */
+/*
+ * struct watchdog_core_data - watchdog core internal data
+ * @dev:		The watchdog's internal device
+ * @cdev:		The watchdog's Character device.
+ * @wdd:		Pointer to watchdog device.
+ * @lock:		Lock for watchdog core.
+ * @last_keepalive:	Time of the last keepalive from user space.
+ * @last_hw_keepalive:	Time of the last ping sent to the hardware.
+ * @open_deadline:	Deadline for user space to open the device.
+ * @timer:		hrtimer used to ping the hardware.
+ * @work:		kthread work item performing the actual ping.
+ * @pretimeout_timer:	hrtimer emulating a pretimeout (if configured).
+ * @status:		Watchdog core internal status bits.
+ */
+struct watchdog_core_data {
+ struct device dev;
+ struct cdev cdev;
+ struct watchdog_device *wdd;
+ struct mutex lock;
+ ktime_t last_keepalive;
+ ktime_t last_hw_keepalive;
+ ktime_t open_deadline;
+ struct hrtimer timer;
+ struct kthread_work work;
+#if IS_ENABLED(CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT)
+ struct hrtimer pretimeout_timer;
+#endif
+ unsigned long status; /* Internal status bits */
+#define _WDOG_DEV_OPEN 0 /* Opened ? */
+#define _WDOG_ALLOW_RELEASE 1 /* Did we receive the magic char ? */
+#define _WDOG_KEEPALIVE 2 /* Did we receive a keepalive ? */
+};
+
/*
* Functions/procedures to be called by the core
*/
extern void watchdog_dev_unregister(struct watchdog_device *);
extern int __init watchdog_dev_init(void);
extern void __exit watchdog_dev_exit(void);
+
+static inline bool watchdog_have_pretimeout(struct watchdog_device *wdd)
+{
+ return wdd->info->options & WDIOF_PRETIMEOUT ||
+ IS_ENABLED(CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT);
+}
+
+#if IS_ENABLED(CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT)
+void watchdog_hrtimer_pretimeout_init(struct watchdog_device *wdd);
+void watchdog_hrtimer_pretimeout_start(struct watchdog_device *wdd);
+void watchdog_hrtimer_pretimeout_stop(struct watchdog_device *wdd);
+#else
+static inline void watchdog_hrtimer_pretimeout_init(struct watchdog_device *wdd) {}
+static inline void watchdog_hrtimer_pretimeout_start(struct watchdog_device *wdd) {}
+static inline void watchdog_hrtimer_pretimeout_stop(struct watchdog_device *wdd) {}
+#endif
*
* (c) Copyright 2008-2011 Wim Van Sebroeck <wim@iguana.be>.
*
+ * (c) Copyright 2021 Hewlett Packard Enterprise Development LP.
*
* This source code is part of the generic code that can be used
* by all the watchdog timer drivers.
#include "watchdog_core.h"
#include "watchdog_pretimeout.h"
-/*
- * struct watchdog_core_data - watchdog core internal data
- * @dev: The watchdog's internal device
- * @cdev: The watchdog's Character device.
- * @wdd: Pointer to watchdog device.
- * @lock: Lock for watchdog core.
- * @status: Watchdog core internal status bits.
- */
-struct watchdog_core_data {
- struct device dev;
- struct cdev cdev;
- struct watchdog_device *wdd;
- struct mutex lock;
- ktime_t last_keepalive;
- ktime_t last_hw_keepalive;
- ktime_t open_deadline;
- struct hrtimer timer;
- struct kthread_work work;
- unsigned long status; /* Internal status bits */
-#define _WDOG_DEV_OPEN 0 /* Opened ? */
-#define _WDOG_ALLOW_RELEASE 1 /* Did we receive the magic char ? */
-#define _WDOG_KEEPALIVE 2 /* Did we receive a keepalive ? */
-};
-
/* the dev_t structure to store the dynamically allocated watchdog devices */
static dev_t watchdog_devt;
/* Reference to watchdog device behind /dev/watchdog */
else
err = wdd->ops->start(wdd); /* restart watchdog */
+ if (err == 0)
+ watchdog_hrtimer_pretimeout_start(wdd);
+
watchdog_update_worker(wdd);
return err;
started_at = ktime_get();
if (watchdog_hw_running(wdd) && wdd->ops->ping) {
err = __watchdog_ping(wdd);
- if (err == 0)
+ if (err == 0) {
set_bit(WDOG_ACTIVE, &wdd->status);
+ watchdog_hrtimer_pretimeout_start(wdd);
+ }
} else {
err = wdd->ops->start(wdd);
if (err == 0) {
wd_data->last_keepalive = started_at;
wd_data->last_hw_keepalive = started_at;
watchdog_update_worker(wdd);
+ watchdog_hrtimer_pretimeout_start(wdd);
}
}
if (err == 0) {
clear_bit(WDOG_ACTIVE, &wdd->status);
watchdog_update_worker(wdd);
+ watchdog_hrtimer_pretimeout_stop(wdd);
}
return err;
if (test_and_clear_bit(_WDOG_KEEPALIVE, &wd_data->status))
status |= WDIOF_KEEPALIVEPING;
+ if (IS_ENABLED(CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT))
+ status |= WDIOF_PRETIMEOUT;
+
return status;
}
{
int err = 0;
- if (!(wdd->info->options & WDIOF_PRETIMEOUT))
+ if (!watchdog_have_pretimeout(wdd))
return -EOPNOTSUPP;
if (watchdog_pretimeout_invalid(wdd, timeout))
{
struct watchdog_device *wdd = dev_get_drvdata(dev);
- return sprintf(buf, "%d\n", !!test_bit(WDOG_NO_WAY_OUT, &wdd->status));
+ return sysfs_emit(buf, "%d\n", !!test_bit(WDOG_NO_WAY_OUT,
+ &wdd->status));
}
static ssize_t nowayout_store(struct device *dev, struct device_attribute *attr,
status = watchdog_get_status(wdd);
mutex_unlock(&wd_data->lock);
- return sprintf(buf, "0x%x\n", status);
+ return sysfs_emit(buf, "0x%x\n", status);
}
static DEVICE_ATTR_RO(status);
{
struct watchdog_device *wdd = dev_get_drvdata(dev);
- return sprintf(buf, "%u\n", wdd->bootstatus);
+ return sysfs_emit(buf, "%u\n", wdd->bootstatus);
}
static DEVICE_ATTR_RO(bootstatus);
status = watchdog_get_timeleft(wdd, &val);
mutex_unlock(&wd_data->lock);
if (!status)
- status = sprintf(buf, "%u\n", val);
+ status = sysfs_emit(buf, "%u\n", val);
return status;
}
{
struct watchdog_device *wdd = dev_get_drvdata(dev);
- return sprintf(buf, "%u\n", wdd->timeout);
+ return sysfs_emit(buf, "%u\n", wdd->timeout);
}
static DEVICE_ATTR_RO(timeout);
+static ssize_t min_timeout_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct watchdog_device *wdd = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%u\n", wdd->min_timeout);
+}
+static DEVICE_ATTR_RO(min_timeout);
+
+static ssize_t max_timeout_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct watchdog_device *wdd = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%u\n", wdd->max_timeout);
+}
+static DEVICE_ATTR_RO(max_timeout);
+
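With these attributes registered, user space can discover the valid timeout range before programming one, e.g. by reading /sys/class/watchdog/watchdog0/min_timeout and /sys/class/watchdog/watchdog0/max_timeout (device index illustrative).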
static ssize_t pretimeout_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct watchdog_device *wdd = dev_get_drvdata(dev);
- return sprintf(buf, "%u\n", wdd->pretimeout);
+ return sysfs_emit(buf, "%u\n", wdd->pretimeout);
}
static DEVICE_ATTR_RO(pretimeout);
{
struct watchdog_device *wdd = dev_get_drvdata(dev);
- return sprintf(buf, "%s\n", wdd->info->identity);
+ return sysfs_emit(buf, "%s\n", wdd->info->identity);
}
static DEVICE_ATTR_RO(identity);
struct watchdog_device *wdd = dev_get_drvdata(dev);
if (watchdog_active(wdd))
- return sprintf(buf, "active\n");
+ return sysfs_emit(buf, "active\n");
- return sprintf(buf, "inactive\n");
+ return sysfs_emit(buf, "inactive\n");
}
static DEVICE_ATTR_RO(state);
if (attr == &dev_attr_timeleft.attr && !wdd->ops->get_timeleft)
mode = 0;
- else if (attr == &dev_attr_pretimeout.attr &&
- !(wdd->info->options & WDIOF_PRETIMEOUT))
+ else if (attr == &dev_attr_pretimeout.attr && !watchdog_have_pretimeout(wdd))
mode = 0;
else if ((attr == &dev_attr_pretimeout_governor.attr ||
attr == &dev_attr_pretimeout_available_governors.attr) &&
- (!(wdd->info->options & WDIOF_PRETIMEOUT) ||
- !IS_ENABLED(CONFIG_WATCHDOG_PRETIMEOUT_GOV)))
+ (!watchdog_have_pretimeout(wdd) || !IS_ENABLED(CONFIG_WATCHDOG_PRETIMEOUT_GOV)))
mode = 0;
return mode;
&dev_attr_state.attr,
&dev_attr_identity.attr,
&dev_attr_timeout.attr,
+ &dev_attr_min_timeout.attr,
+ &dev_attr_max_timeout.attr,
&dev_attr_pretimeout.attr,
&dev_attr_timeleft.attr,
&dev_attr_bootstatus.attr,
kthread_init_work(&wd_data->work, watchdog_ping_work);
hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
wd_data->timer.function = watchdog_timer_expired;
+ watchdog_hrtimer_pretimeout_init(wdd);
if (wdd->id == 0) {
old_wd_data = wd_data;
hrtimer_cancel(&wd_data->timer);
kthread_cancel_work_sync(&wd_data->work);
+ watchdog_hrtimer_pretimeout_stop(wdd);
put_device(&wd_data->dev);
}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * (c) Copyright 2021 Hewlett Packard Enterprise Development LP.
+ */
+
+#include <linux/hrtimer.h>
+#include <linux/watchdog.h>
+
+#include "watchdog_core.h"
+#include "watchdog_pretimeout.h"
+
+static enum hrtimer_restart watchdog_hrtimer_pretimeout(struct hrtimer *timer)
+{
+ struct watchdog_core_data *wd_data;
+
+ wd_data = container_of(timer, struct watchdog_core_data, pretimeout_timer);
+
+ watchdog_notify_pretimeout(wd_data->wdd);
+ return HRTIMER_NORESTART;
+}
+
+void watchdog_hrtimer_pretimeout_init(struct watchdog_device *wdd)
+{
+ struct watchdog_core_data *wd_data = wdd->wd_data;
+
+ hrtimer_init(&wd_data->pretimeout_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ wd_data->pretimeout_timer.function = watchdog_hrtimer_pretimeout;
+}
+
+void watchdog_hrtimer_pretimeout_start(struct watchdog_device *wdd)
+{
+ if (!(wdd->info->options & WDIOF_PRETIMEOUT) &&
+ !watchdog_pretimeout_invalid(wdd, wdd->pretimeout))
+ hrtimer_start(&wdd->wd_data->pretimeout_timer,
+ ktime_set(wdd->timeout - wdd->pretimeout, 0),
+ HRTIMER_MODE_REL);
+ else
+ hrtimer_cancel(&wdd->wd_data->pretimeout_timer);
+}
+
+void watchdog_hrtimer_pretimeout_stop(struct watchdog_device *wdd)
+{
+ hrtimer_cancel(&wdd->wd_data->pretimeout_timer);
+}
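A worked example of the emulation above (values illustrative): with timeout = 60 s and pretimeout = 10 s on hardware lacking WDIOF_PRETIMEOUT, each successful start or ping arms pretimeout_timer for 50 s; if no keepalive arrives before it fires, watchdog_notify_pretimeout() reports the event through the registered pretimeout governor.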
#include <linux/string.h>
#include <linux/watchdog.h>
+#include "watchdog_core.h"
#include "watchdog_pretimeout.h"
/* Default watchdog pretimeout governor */
mutex_lock(&governor_lock);
list_for_each_entry(priv, &governor_list, entry)
- count += sprintf(buf + count, "%s\n", priv->gov->name);
+ count += sysfs_emit_at(buf, count, "%s\n", priv->gov->name);
mutex_unlock(&governor_lock);
spin_lock_irq(&pretimeout_lock);
if (wdd->gov)
- count = sprintf(buf, "%s\n", wdd->gov->name);
+ count = sysfs_emit(buf, "%s\n", wdd->gov->name);
spin_unlock_irq(&pretimeout_lock);
return count;
{
struct watchdog_pretimeout *p;
- if (!(wdd->info->options & WDIOF_PRETIMEOUT))
+ if (!watchdog_have_pretimeout(wdd))
return 0;
p = kzalloc(sizeof(*p), GFP_KERNEL);
{
struct watchdog_pretimeout *p, *t;
- if (!(wdd->info->options & WDIOF_PRETIMEOUT))
+ if (!watchdog_have_pretimeout(wdd))
return;
spin_lock_irq(&pretimeout_lock);
/*
* WDAT specification says that the watchdog is required to reboot
* the system when it fires. However, it also states that it is
- * recommeded to make it configurable through hardware register. We
+ * recommended to make it configurable through hardware register. We
* enable reboot now if it is configurable, just in case.
*/
ret = wdat_wdt_run_action(wdat, ACPI_WDAT_SET_REBOOT, 0, NULL);
return 0;
/*
- * We need to stop the watchdog if firmare is not doing it or if we
+ * We need to stop the watchdog if firmware is not doing it or if we
* are going suspend to idle (where firmware is not involved). If
* firmware is stopping the watchdog we kick it here one more time
* to give it some time.
}
/**
- * notify_sys:
+ * wdt_notify_sys:
* @this: our notifier block
* @code: the event being reported
* @unused: unused
};
/**
- * cleanup_module:
+ * wdt_exit:
*
* Unload the watchdog. You cannot do this with any file handles open.
* If your watchdog is set to continue ticking on close and you unload
}
/**
- * notify_sys:
+ * wdtpci_notify_sys:
* @this: our notifier block
* @code: the event being reported
* @unused: unused
#define ZIIRAVE_CMD_JUMP_TO_BOOTLOADER_MAGIC 1
#define ZIIRAVE_CMD_RESET_PROCESSOR_MAGIC 1
-#define ZIIRAVE_FW_VERSION_FMT "02.%02u.%02u"
-#define ZIIRAVE_BL_VERSION_FMT "01.%02u.%02u"
-
struct ziirave_wdt_rev {
unsigned char major;
unsigned char minor;
if (ret)
return ret;
- ret = sprintf(buf, ZIIRAVE_FW_VERSION_FMT, w_priv->firmware_rev.major,
- w_priv->firmware_rev.minor);
+ ret = sysfs_emit(buf, "02.%02u.%02u\n",
+ w_priv->firmware_rev.major,
+ w_priv->firmware_rev.minor);
mutex_unlock(&w_priv->sysfs_mutex);
if (ret)
return ret;
- ret = sprintf(buf, ZIIRAVE_BL_VERSION_FMT, w_priv->bootloader_rev.major,
- w_priv->bootloader_rev.minor);
+ ret = sysfs_emit(buf, "01.%02u.%02u\n",
+ w_priv->bootloader_rev.major,
+ w_priv->bootloader_rev.minor);
mutex_unlock(&w_priv->sysfs_mutex);
if (ret)
return ret;
- ret = sprintf(buf, "%s", ziirave_reasons[w_priv->reset_reason]);
+ ret = sysfs_emit(buf, "%s\n", ziirave_reasons[w_priv->reset_reason]);
mutex_unlock(&w_priv->sysfs_mutex);
}
dev_info(&client->dev,
- "Firmware updated to version " ZIIRAVE_FW_VERSION_FMT "\n",
+ "Firmware updated to version 02.%02u.%02u\n",
w_priv->firmware_rev.major, w_priv->firmware_rev.minor);
/* Restore the watchdog timeout */
}
dev_info(&client->dev,
- "Firmware version: " ZIIRAVE_FW_VERSION_FMT "\n",
+ "Firmware version: 02.%02u.%02u\n",
w_priv->firmware_rev.major, w_priv->firmware_rev.minor);
ret = ziirave_wdt_revision(client, &w_priv->bootloader_rev,
}
dev_info(&client->dev,
- "Bootloader version: " ZIIRAVE_BL_VERSION_FMT "\n",
+ "Bootloader version: 01.%02u.%02u\n",
w_priv->bootloader_rev.major, w_priv->bootloader_rev.minor);
w_priv->reset_reason = i2c_smbus_read_byte_data(client,
module_init(init_nlm);
module_exit(exit_nlm);
+/**
+ * nlmsvc_dispatch - Process an NLM Request
+ * @rqstp: incoming request
+ * @statp: pointer to location of accept_stat field in RPC Reply buffer
+ *
+ * Return values:
+ * %0: Processing complete; do not send a Reply
+ * %1: Processing complete; send Reply in rqstp->rq_res
+ */
+static int nlmsvc_dispatch(struct svc_rqst *rqstp, __be32 *statp)
+{
+ const struct svc_procedure *procp = rqstp->rq_procinfo;
+ struct kvec *argv = rqstp->rq_arg.head;
+ struct kvec *resv = rqstp->rq_res.head;
+
+ svcxdr_init_decode(rqstp);
+ if (!procp->pc_decode(rqstp, argv->iov_base))
+ goto out_decode_err;
+
+ *statp = procp->pc_func(rqstp);
+ if (*statp == rpc_drop_reply)
+ return 0;
+ if (*statp != rpc_success)
+ return 1;
+
+ svcxdr_init_encode(rqstp);
+ if (!procp->pc_encode(rqstp, resv->iov_base + resv->iov_len))
+ goto out_encode_err;
+
+ return 1;
+
+out_decode_err:
+ *statp = rpc_garbage_args;
+ return 1;
+
+out_encode_err:
+ *statp = rpc_system_err;
+ return 1;
+}
+
/*
* Define NLM program and procedures
*/
.vs_nproc = 17,
.vs_proc = nlmsvc_procedures,
.vs_count = nlmsvc_version1_count,
+ .vs_dispatch = nlmsvc_dispatch,
.vs_xdrsize = NLMSVC_XDRSIZE,
};
static unsigned int nlmsvc_version3_count[24];
.vs_nproc = 24,
.vs_proc = nlmsvc_procedures,
.vs_count = nlmsvc_version3_count,
+ .vs_dispatch = nlmsvc_dispatch,
.vs_xdrsize = NLMSVC_XDRSIZE,
};
#ifdef CONFIG_LOCKD_V4
.vs_nproc = 24,
.vs_proc = nlmsvc_procedures4,
.vs_count = nlmsvc_version4_count,
+ .vs_dispatch = nlmsvc_dispatch,
.vs_xdrsize = NLMSVC_XDRSIZE,
};
#endif
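With vs_dispatch set on all three program versions, svc_process() routes every NLM request through nlmsvc_dispatch(), which initializes the argument and reply xdr_streams before calling pc_decode/pc_func/pc_encode; the stream-based svcxdr helpers below depend on that setup.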
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Encode/decode NLM basic data types
+ *
+ * Basic NLMv3 XDR data types are not defined in an IETF standards
+ * document. X/Open has a description of these data types that
+ * is useful. See Chapter 10 of "Protocols for Interworking:
+ * XNFS, Version 3W".
+ *
+ * Basic NLMv4 XDR data types are defined in Appendix II.1.4 of
+ * RFC 1813: "NFS Version 3 Protocol Specification".
+ *
+ * Author: Chuck Lever <chuck.lever@oracle.com>
+ *
+ * Copyright (c) 2020, Oracle and/or its affiliates.
+ */
+
+#ifndef _LOCKD_SVCXDR_H_
+#define _LOCKD_SVCXDR_H_
+
+static inline bool
+svcxdr_decode_stats(struct xdr_stream *xdr, __be32 *status)
+{
+ __be32 *p;
+
+ p = xdr_inline_decode(xdr, XDR_UNIT);
+ if (!p)
+ return false;
+ *status = *p;
+
+ return true;
+}
+
+static inline bool
+svcxdr_encode_stats(struct xdr_stream *xdr, __be32 status)
+{
+ __be32 *p;
+
+ p = xdr_reserve_space(xdr, XDR_UNIT);
+ if (!p)
+ return false;
+ *p = status;
+
+ return true;
+}
+
+static inline bool
+svcxdr_decode_string(struct xdr_stream *xdr, char **data, unsigned int *data_len)
+{
+ __be32 *p;
+ u32 len;
+
+ if (xdr_stream_decode_u32(xdr, &len) < 0)
+ return false;
+ if (len > NLM_MAXSTRLEN)
+ return false;
+ p = xdr_inline_decode(xdr, len);
+ if (!p)
+ return false;
+ *data_len = len;
+ *data = (char *)p;
+
+ return true;
+}
+
+/*
+ * NLM cookies are defined by specification to be a variable-length
+ * XDR opaque no longer than 1024 bytes. However, this implementation
+ * limits their length to 32 bytes, and treats zero-length cookies
+ * specially.
+ */
+static inline bool
+svcxdr_decode_cookie(struct xdr_stream *xdr, struct nlm_cookie *cookie)
+{
+ __be32 *p;
+ u32 len;
+
+ if (xdr_stream_decode_u32(xdr, &len) < 0)
+ return false;
+ if (len > NLM_MAXCOOKIELEN)
+ return false;
+ if (!len)
+ goto out_hpux;
+
+ p = xdr_inline_decode(xdr, len);
+ if (!p)
+ return false;
+ cookie->len = len;
+ memcpy(cookie->data, p, len);
+
+ return true;
+
+ /* apparently HPUX can return empty cookies */
+out_hpux:
+ cookie->len = 4;
+ memset(cookie->data, 0, 4);
+ return true;
+}
+
+static inline bool
+svcxdr_encode_cookie(struct xdr_stream *xdr, const struct nlm_cookie *cookie)
+{
+ __be32 *p;
+
+ if (xdr_stream_encode_u32(xdr, cookie->len) < 0)
+ return false;
+ p = xdr_reserve_space(xdr, cookie->len);
+ if (!p)
+ return false;
+ memcpy(p, cookie->data, cookie->len);
+
+ return true;
+}
+
+static inline bool
+svcxdr_decode_owner(struct xdr_stream *xdr, struct xdr_netobj *obj)
+{
+ __be32 *p;
+ u32 len;
+
+ if (xdr_stream_decode_u32(xdr, &len) < 0)
+ return false;
+ if (len > XDR_MAX_NETOBJ)
+ return false;
+ p = xdr_inline_decode(xdr, len);
+ if (!p)
+ return false;
+ obj->len = len;
+ obj->data = (u8 *)p;
+
+ return true;
+}
+
+static inline bool
+svcxdr_encode_owner(struct xdr_stream *xdr, const struct xdr_netobj *obj)
+{
+ unsigned int quadlen = XDR_QUADLEN(obj->len);
+ __be32 *p;
+
+ if (xdr_stream_encode_u32(xdr, obj->len) < 0)
+ return false;
+ p = xdr_reserve_space(xdr, obj->len);
+ if (!p)
+ return false;
+ p[quadlen - 1] = 0; /* XDR pad */
+ memcpy(p, obj->data, obj->len);
+
+ return true;
+}
+
+#endif /* _LOCKD_SVCXDR_H_ */
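Both the NLMv3 and NLMv4 server-side XDR files include this header (see the #include "svcxdr.h" hunks below), so these cookie, string, and owner codecs replace the previously duplicated nlm_decode_*/nlm4_decode_* pairs.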
#include <uapi/linux/nfs2.h>
-#define NLMDBG_FACILITY NLMDBG_XDR
+#include "svcxdr.h"
static inline loff_t
}
/*
- * XDR functions for basic NLM types
+ * NLM file handles are defined by specification to be a variable-length
+ * XDR opaque no longer than 1024 bytes. However, this implementation
+ * constrains their length to exactly the length of an NFSv2 file
+ * handle.
*/
-static __be32 *nlm_decode_cookie(__be32 *p, struct nlm_cookie *c)
+static bool
+svcxdr_decode_fhandle(struct xdr_stream *xdr, struct nfs_fh *fh)
{
- unsigned int len;
-
- len = ntohl(*p++);
-
- if(len==0)
- {
- c->len=4;
- memset(c->data, 0, 4); /* hockeypux brain damage */
- }
- else if(len<=NLM_MAXCOOKIELEN)
- {
- c->len=len;
- memcpy(c->data, p, len);
- p+=XDR_QUADLEN(len);
- }
- else
- {
- dprintk("lockd: bad cookie size %d (only cookies under "
- "%d bytes are supported.)\n",
- len, NLM_MAXCOOKIELEN);
- return NULL;
- }
- return p;
-}
-
-static inline __be32 *
-nlm_encode_cookie(__be32 *p, struct nlm_cookie *c)
-{
- *p++ = htonl(c->len);
- memcpy(p, c->data, c->len);
- p+=XDR_QUADLEN(c->len);
- return p;
-}
-
-static __be32 *
-nlm_decode_fh(__be32 *p, struct nfs_fh *f)
-{
- unsigned int len;
-
- if ((len = ntohl(*p++)) != NFS2_FHSIZE) {
- dprintk("lockd: bad fhandle size %d (should be %d)\n",
- len, NFS2_FHSIZE);
- return NULL;
- }
- f->size = NFS2_FHSIZE;
- memset(f->data, 0, sizeof(f->data));
- memcpy(f->data, p, NFS2_FHSIZE);
- return p + XDR_QUADLEN(NFS2_FHSIZE);
-}
-
-/*
- * Encode and decode owner handle
- */
-static inline __be32 *
-nlm_decode_oh(__be32 *p, struct xdr_netobj *oh)
-{
- return xdr_decode_netobj(p, oh);
-}
-
-static inline __be32 *
-nlm_encode_oh(__be32 *p, struct xdr_netobj *oh)
-{
- return xdr_encode_netobj(p, oh);
+ __be32 *p;
+ u32 len;
+
+ if (xdr_stream_decode_u32(xdr, &len) < 0)
+ return false;
+ if (len != NFS2_FHSIZE)
+ return false;
+
+ p = xdr_inline_decode(xdr, len);
+ if (!p)
+ return false;
+ fh->size = NFS2_FHSIZE;
+ memcpy(fh->data, p, len);
+ memset(fh->data + NFS2_FHSIZE, 0, sizeof(fh->data) - NFS2_FHSIZE);
+
+ return true;
}
-static __be32 *
-nlm_decode_lock(__be32 *p, struct nlm_lock *lock)
+static bool
+svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
{
- struct file_lock *fl = &lock->fl;
- s32 start, len, end;
-
- if (!(p = xdr_decode_string_inplace(p, &lock->caller,
- &lock->len,
- NLM_MAXSTRLEN))
- || !(p = nlm_decode_fh(p, &lock->fh))
- || !(p = nlm_decode_oh(p, &lock->oh)))
- return NULL;
- lock->svid = ntohl(*p++);
+ struct file_lock *fl = &lock->fl;
+ s32 start, len, end;
+
+ if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
+ return false;
+ if (!svcxdr_decode_fhandle(xdr, &lock->fh))
+ return false;
+ if (!svcxdr_decode_owner(xdr, &lock->oh))
+ return false;
+ if (xdr_stream_decode_u32(xdr, &lock->svid) < 0)
+ return false;
+ if (xdr_stream_decode_u32(xdr, &start) < 0)
+ return false;
+ if (xdr_stream_decode_u32(xdr, &len) < 0)
+ return false;
locks_init_lock(fl);
fl->fl_flags = FL_POSIX;
- fl->fl_type = F_RDLCK; /* as good as anything else */
- start = ntohl(*p++);
- len = ntohl(*p++);
+ fl->fl_type = F_RDLCK;
end = start + len - 1;
-
fl->fl_start = s32_to_loff_t(start);
-
if (len == 0 || end < 0)
fl->fl_end = OFFSET_MAX;
else
fl->fl_end = s32_to_loff_t(end);
- return p;
+
+ return true;
}
-/*
- * Encode result of a TEST/TEST_MSG call
- */
-static __be32 *
-nlm_encode_testres(__be32 *p, struct nlm_res *resp)
+static bool
+svcxdr_encode_holder(struct xdr_stream *xdr, const struct nlm_lock *lock)
{
- s32 start, len;
-
- if (!(p = nlm_encode_cookie(p, &resp->cookie)))
- return NULL;
- *p++ = resp->status;
-
- if (resp->status == nlm_lck_denied) {
- struct file_lock *fl = &resp->lock.fl;
-
- *p++ = (fl->fl_type == F_RDLCK)? xdr_zero : xdr_one;
- *p++ = htonl(resp->lock.svid);
-
- /* Encode owner handle. */
- if (!(p = xdr_encode_netobj(p, &resp->lock.oh)))
- return NULL;
+ const struct file_lock *fl = &lock->fl;
+ s32 start, len;
+
+ /* exclusive */
+ if (xdr_stream_encode_bool(xdr, fl->fl_type != F_RDLCK) < 0)
+ return false;
+ if (xdr_stream_encode_u32(xdr, lock->svid) < 0)
+ return false;
+ if (!svcxdr_encode_owner(xdr, &lock->oh))
+ return false;
+ start = loff_t_to_s32(fl->fl_start);
+ if (fl->fl_end == OFFSET_MAX)
+ len = 0;
+ else
+ len = loff_t_to_s32(fl->fl_end - fl->fl_start + 1);
+ if (xdr_stream_encode_u32(xdr, start) < 0)
+ return false;
+ if (xdr_stream_encode_u32(xdr, len) < 0)
+ return false;
- start = loff_t_to_s32(fl->fl_start);
- if (fl->fl_end == OFFSET_MAX)
- len = 0;
- else
- len = loff_t_to_s32(fl->fl_end - fl->fl_start + 1);
+ return true;
+}
- *p++ = htonl(start);
- *p++ = htonl(len);
+static bool
+svcxdr_encode_testrply(struct xdr_stream *xdr, const struct nlm_res *resp)
+{
+ if (!svcxdr_encode_stats(xdr, resp->status))
+ return false;
+ switch (resp->status) {
+ case nlm_lck_denied:
+ if (!svcxdr_encode_holder(xdr, &resp->lock))
+ return false;
}
- return p;
+ return true;
}
/*
- * First, the server side XDR functions
+ * Decode Call arguments
*/
+
+int
+nlmsvc_decode_void(struct svc_rqst *rqstp, __be32 *p)
+{
+ return 1;
+}
+
int
nlmsvc_decode_testargs(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
- u32 exclusive;
+ u32 exclusive;
- if (!(p = nlm_decode_cookie(p, &argp->cookie)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
return 0;
-
- exclusive = ntohl(*p++);
- if (!(p = nlm_decode_lock(p, &argp->lock)))
+ if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
+ return 0;
+ if (!svcxdr_decode_lock(xdr, &argp->lock))
return 0;
if (exclusive)
argp->lock.fl.fl_type = F_WRLCK;
- return xdr_argsize_check(rqstp, p);
-}
-
-int
-nlmsvc_encode_testres(struct svc_rqst *rqstp, __be32 *p)
-{
- struct nlm_res *resp = rqstp->rq_resp;
-
- if (!(p = nlm_encode_testres(p, resp)))
- return 0;
- return xdr_ressize_check(rqstp, p);
+ return 1;
}
int
nlmsvc_decode_lockargs(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
- u32 exclusive;
+ u32 exclusive;
- if (!(p = nlm_decode_cookie(p, &argp->cookie)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
+ return 0;
+ if (xdr_stream_decode_bool(xdr, &argp->block) < 0)
return 0;
- argp->block = ntohl(*p++);
- exclusive = ntohl(*p++);
- if (!(p = nlm_decode_lock(p, &argp->lock)))
+ if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
+ return 0;
+ if (!svcxdr_decode_lock(xdr, &argp->lock))
return 0;
if (exclusive)
argp->lock.fl.fl_type = F_WRLCK;
- argp->reclaim = ntohl(*p++);
- argp->state = ntohl(*p++);
+ if (xdr_stream_decode_bool(xdr, &argp->reclaim) < 0)
+ return 0;
+ if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
+ return 0;
argp->monitor = 1; /* monitor client by default */
- return xdr_argsize_check(rqstp, p);
+ return 1;
}
int
nlmsvc_decode_cancargs(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
- u32 exclusive;
+ u32 exclusive;
- if (!(p = nlm_decode_cookie(p, &argp->cookie)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
+ return 0;
+ if (xdr_stream_decode_bool(xdr, &argp->block) < 0)
return 0;
- argp->block = ntohl(*p++);
- exclusive = ntohl(*p++);
- if (!(p = nlm_decode_lock(p, &argp->lock)))
+ if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
+ return 0;
+ if (!svcxdr_decode_lock(xdr, &argp->lock))
return 0;
if (exclusive)
argp->lock.fl.fl_type = F_WRLCK;
- return xdr_argsize_check(rqstp, p);
+
+ return 1;
}
int
nlmsvc_decode_unlockargs(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
- if (!(p = nlm_decode_cookie(p, &argp->cookie))
- || !(p = nlm_decode_lock(p, &argp->lock)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
+ return 0;
+ if (!svcxdr_decode_lock(xdr, &argp->lock))
return 0;
argp->lock.fl.fl_type = F_UNLCK;
- return xdr_argsize_check(rqstp, p);
+
+ return 1;
}
int
-nlmsvc_decode_shareargs(struct svc_rqst *rqstp, __be32 *p)
+nlmsvc_decode_res(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_args *argp = rqstp->rq_argp;
- struct nlm_lock *lock = &argp->lock;
-
- memset(lock, 0, sizeof(*lock));
- locks_init_lock(&lock->fl);
- lock->svid = ~(u32) 0;
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+ struct nlm_res *resp = rqstp->rq_argp;
- if (!(p = nlm_decode_cookie(p, &argp->cookie))
- || !(p = xdr_decode_string_inplace(p, &lock->caller,
- &lock->len, NLM_MAXSTRLEN))
- || !(p = nlm_decode_fh(p, &lock->fh))
- || !(p = nlm_decode_oh(p, &lock->oh)))
+ if (!svcxdr_decode_cookie(xdr, &resp->cookie))
+ return 0;
+ if (!svcxdr_decode_stats(xdr, &resp->status))
return 0;
- argp->fsm_mode = ntohl(*p++);
- argp->fsm_access = ntohl(*p++);
- return xdr_argsize_check(rqstp, p);
+
+ return 1;
}
int
-nlmsvc_encode_shareres(struct svc_rqst *rqstp, __be32 *p)
+nlmsvc_decode_reboot(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_res *resp = rqstp->rq_resp;
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+ struct nlm_reboot *argp = rqstp->rq_argp;
+ u32 len;
- if (!(p = nlm_encode_cookie(p, &resp->cookie)))
+ if (xdr_stream_decode_u32(xdr, &len) < 0)
+ return 0;
+ if (len > SM_MAXSTRLEN)
+ return 0;
+ p = xdr_inline_decode(xdr, len);
+ if (!p)
+ return 0;
+ argp->len = len;
+ argp->mon = (char *)p;
+ if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
+ return 0;
+ p = xdr_inline_decode(xdr, SM_PRIV_SIZE);
+ if (!p)
return 0;
- *p++ = resp->status;
- *p++ = xdr_zero; /* sequence argument */
- return xdr_ressize_check(rqstp, p);
+ memcpy(&argp->priv.data, p, sizeof(argp->priv.data));
+
+ return 1;
}
int
-nlmsvc_encode_res(struct svc_rqst *rqstp, __be32 *p)
+nlmsvc_decode_shareargs(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_res *resp = rqstp->rq_resp;
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+ struct nlm_args *argp = rqstp->rq_argp;
+ struct nlm_lock *lock = &argp->lock;
- if (!(p = nlm_encode_cookie(p, &resp->cookie)))
+ memset(lock, 0, sizeof(*lock));
+ locks_init_lock(&lock->fl);
+ lock->svid = ~(u32)0;
+
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
+ return 0;
+ if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
+ return 0;
+ if (!svcxdr_decode_fhandle(xdr, &lock->fh))
+ return 0;
+ if (!svcxdr_decode_owner(xdr, &lock->oh))
+ return 0;
+ /* XXX: Range checks are missing in the original code */
+ if (xdr_stream_decode_u32(xdr, &argp->fsm_mode) < 0)
+ return 0;
+ if (xdr_stream_decode_u32(xdr, &argp->fsm_access) < 0)
return 0;
- *p++ = resp->status;
- return xdr_ressize_check(rqstp, p);
+
+ return 1;
}
int
nlmsvc_decode_notify(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
struct nlm_lock *lock = &argp->lock;
- if (!(p = xdr_decode_string_inplace(p, &lock->caller,
- &lock->len, NLM_MAXSTRLEN)))
+ if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
+ return 0;
+ if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
return 0;
- argp->state = ntohl(*p++);
- return xdr_argsize_check(rqstp, p);
+
+ return 1;
}
+
+/*
+ * Encode Reply results
+ */
+
int
-nlmsvc_decode_reboot(struct svc_rqst *rqstp, __be32 *p)
+nlmsvc_encode_void(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_reboot *argp = rqstp->rq_argp;
-
- if (!(p = xdr_decode_string_inplace(p, &argp->mon, &argp->len, SM_MAXSTRLEN)))
- return 0;
- argp->state = ntohl(*p++);
- memcpy(&argp->priv.data, p, sizeof(argp->priv.data));
- p += XDR_QUADLEN(SM_PRIV_SIZE);
- return xdr_argsize_check(rqstp, p);
+ return 1;
}
int
-nlmsvc_decode_res(struct svc_rqst *rqstp, __be32 *p)
+nlmsvc_encode_testres(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_res *resp = rqstp->rq_argp;
+ struct xdr_stream *xdr = &rqstp->rq_res_stream;
+ struct nlm_res *resp = rqstp->rq_resp;
- if (!(p = nlm_decode_cookie(p, &resp->cookie)))
- return 0;
- resp->status = *p++;
- return xdr_argsize_check(rqstp, p);
+ return svcxdr_encode_cookie(xdr, &resp->cookie) &&
+ svcxdr_encode_testrply(xdr, resp);
}
int
-nlmsvc_decode_void(struct svc_rqst *rqstp, __be32 *p)
+nlmsvc_encode_res(struct svc_rqst *rqstp, __be32 *p)
{
- return xdr_argsize_check(rqstp, p);
+ struct xdr_stream *xdr = &rqstp->rq_res_stream;
+ struct nlm_res *resp = rqstp->rq_resp;
+
+ return svcxdr_encode_cookie(xdr, &resp->cookie) &&
+ svcxdr_encode_stats(xdr, resp->status);
}
int
-nlmsvc_encode_void(struct svc_rqst *rqstp, __be32 *p)
+nlmsvc_encode_shareres(struct svc_rqst *rqstp, __be32 *p)
{
- return xdr_ressize_check(rqstp, p);
+ struct xdr_stream *xdr = &rqstp->rq_res_stream;
+ struct nlm_res *resp = rqstp->rq_resp;
+
+ if (!svcxdr_encode_cookie(xdr, &resp->cookie))
+ return 0;
+ if (!svcxdr_encode_stats(xdr, resp->status))
+ return 0;
+ /* sequence */
+ if (xdr_stream_encode_u32(xdr, 0) < 0)
+ return 0;
+
+ return 1;
}
#include <linux/sunrpc/stats.h>
#include <linux/lockd/lockd.h>
-#define NLMDBG_FACILITY NLMDBG_XDR
+#include "svcxdr.h"
static inline loff_t
s64_to_loff_t(__s64 offset)
}
/*
- * XDR functions for basic NLM types
+ * NLM file handles are defined by specification to be a variable-length
+ * XDR opaque no longer than 1024 bytes. However, this implementation
+ * limits their length to the size of an NFSv3 file handle.
*/
-static __be32 *
-nlm4_decode_cookie(__be32 *p, struct nlm_cookie *c)
+static bool
+svcxdr_decode_fhandle(struct xdr_stream *xdr, struct nfs_fh *fh)
{
- unsigned int len;
-
- len = ntohl(*p++);
-
- if(len==0)
- {
- c->len=4;
- memset(c->data, 0, 4); /* hockeypux brain damage */
- }
- else if(len<=NLM_MAXCOOKIELEN)
- {
- c->len=len;
- memcpy(c->data, p, len);
- p+=XDR_QUADLEN(len);
- }
- else
- {
- dprintk("lockd: bad cookie size %d (only cookies under "
- "%d bytes are supported.)\n",
- len, NLM_MAXCOOKIELEN);
- return NULL;
- }
- return p;
-}
-
-static __be32 *
-nlm4_encode_cookie(__be32 *p, struct nlm_cookie *c)
-{
- *p++ = htonl(c->len);
- memcpy(p, c->data, c->len);
- p+=XDR_QUADLEN(c->len);
- return p;
-}
-
-static __be32 *
-nlm4_decode_fh(__be32 *p, struct nfs_fh *f)
-{
- memset(f->data, 0, sizeof(f->data));
- f->size = ntohl(*p++);
- if (f->size > NFS_MAXFHSIZE) {
- dprintk("lockd: bad fhandle size %d (should be <=%d)\n",
- f->size, NFS_MAXFHSIZE);
- return NULL;
- }
- memcpy(f->data, p, f->size);
- return p + XDR_QUADLEN(f->size);
-}
-
-/*
- * Encode and decode owner handle
- */
-static __be32 *
-nlm4_decode_oh(__be32 *p, struct xdr_netobj *oh)
-{
- return xdr_decode_netobj(p, oh);
+ __be32 *p;
+ u32 len;
+
+ if (xdr_stream_decode_u32(xdr, &len) < 0)
+ return false;
+ if (len > NFS_MAXFHSIZE)
+ return false;
+
+ p = xdr_inline_decode(xdr, len);
+ if (!p)
+ return false;
+ fh->size = len;
+ memcpy(fh->data, p, len);
+ memset(fh->data + len, 0, sizeof(fh->data) - len);
+
+ return true;
}
-static __be32 *
-nlm4_decode_lock(__be32 *p, struct nlm_lock *lock)
+static bool
+svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
{
- struct file_lock *fl = &lock->fl;
- __u64 len, start;
- __s64 end;
-
- if (!(p = xdr_decode_string_inplace(p, &lock->caller,
- &lock->len, NLM_MAXSTRLEN))
- || !(p = nlm4_decode_fh(p, &lock->fh))
- || !(p = nlm4_decode_oh(p, &lock->oh)))
- return NULL;
- lock->svid = ntohl(*p++);
+ struct file_lock *fl = &lock->fl;
+ u64 len, start;
+ s64 end;
+
+ if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
+ return false;
+ if (!svcxdr_decode_fhandle(xdr, &lock->fh))
+ return false;
+ if (!svcxdr_decode_owner(xdr, &lock->oh))
+ return false;
+ if (xdr_stream_decode_u32(xdr, &lock->svid) < 0)
+ return false;
+ if (xdr_stream_decode_u64(xdr, &start) < 0)
+ return false;
+ if (xdr_stream_decode_u64(xdr, &len) < 0)
+ return false;
locks_init_lock(fl);
fl->fl_flags = FL_POSIX;
- fl->fl_type = F_RDLCK; /* as good as anything else */
- p = xdr_decode_hyper(p, &start);
- p = xdr_decode_hyper(p, &len);
+ fl->fl_type = F_RDLCK;
end = start + len - 1;
-
fl->fl_start = s64_to_loff_t(start);
-
if (len == 0 || end < 0)
fl->fl_end = OFFSET_MAX;
else
fl->fl_end = s64_to_loff_t(end);
- return p;
+
+ return true;
}
-/*
- * Encode result of a TEST/TEST_MSG call
- */
-static __be32 *
-nlm4_encode_testres(__be32 *p, struct nlm_res *resp)
+static bool
+svcxdr_encode_holder(struct xdr_stream *xdr, const struct nlm_lock *lock)
+{
+ const struct file_lock *fl = &lock->fl;
+ s64 start, len;
+
+ /* exclusive */
+ if (xdr_stream_encode_bool(xdr, fl->fl_type != F_RDLCK) < 0)
+ return false;
+ if (xdr_stream_encode_u32(xdr, lock->svid) < 0)
+ return false;
+ if (!svcxdr_encode_owner(xdr, &lock->oh))
+ return false;
+ start = loff_t_to_s64(fl->fl_start);
+ if (fl->fl_end == OFFSET_MAX)
+ len = 0;
+ else
+ len = loff_t_to_s64(fl->fl_end - fl->fl_start + 1);
+ if (xdr_stream_encode_u64(xdr, start) < 0)
+ return false;
+ if (xdr_stream_encode_u64(xdr, len) < 0)
+ return false;
+
+ return true;
+}
+
+static bool
+svcxdr_encode_testrply(struct xdr_stream *xdr, const struct nlm_res *resp)
{
- s64 start, len;
-
- dprintk("xdr: before encode_testres (p %p resp %p)\n", p, resp);
- if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
- return NULL;
- *p++ = resp->status;
-
- if (resp->status == nlm_lck_denied) {
- struct file_lock *fl = &resp->lock.fl;
-
- *p++ = (fl->fl_type == F_RDLCK)? xdr_zero : xdr_one;
- *p++ = htonl(resp->lock.svid);
-
- /* Encode owner handle. */
- if (!(p = xdr_encode_netobj(p, &resp->lock.oh)))
- return NULL;
-
- start = loff_t_to_s64(fl->fl_start);
- if (fl->fl_end == OFFSET_MAX)
- len = 0;
- else
- len = loff_t_to_s64(fl->fl_end - fl->fl_start + 1);
-
- p = xdr_encode_hyper(p, start);
- p = xdr_encode_hyper(p, len);
- dprintk("xdr: encode_testres (status %u pid %d type %d start %Ld end %Ld)\n",
- resp->status, (int)resp->lock.svid, fl->fl_type,
- (long long)fl->fl_start, (long long)fl->fl_end);
+ if (!svcxdr_encode_stats(xdr, resp->status))
+ return false;
+ switch (resp->status) {
+ case nlm_lck_denied:
+ if (!svcxdr_encode_holder(xdr, &resp->lock))
+ return false;
}
- dprintk("xdr: after encode_testres (p %p resp %p)\n", p, resp);
- return p;
+ return true;
}
/*
- * First, the server side XDR functions
+ * Decode Call arguments
*/
+
+int
+nlm4svc_decode_void(struct svc_rqst *rqstp, __be32 *p)
+{
+ return 1;
+}
+
int
nlm4svc_decode_testargs(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
- u32 exclusive;
+ u32 exclusive;
- if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
return 0;
-
- exclusive = ntohl(*p++);
- if (!(p = nlm4_decode_lock(p, &argp->lock)))
+ if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
+ return 0;
+ if (!svcxdr_decode_lock(xdr, &argp->lock))
return 0;
if (exclusive)
argp->lock.fl.fl_type = F_WRLCK;
- return xdr_argsize_check(rqstp, p);
-}
-
-int
-nlm4svc_encode_testres(struct svc_rqst *rqstp, __be32 *p)
-{
- struct nlm_res *resp = rqstp->rq_resp;
-
- if (!(p = nlm4_encode_testres(p, resp)))
- return 0;
- return xdr_ressize_check(rqstp, p);
+ return 1;
}
int
nlm4svc_decode_lockargs(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
- u32 exclusive;
+ u32 exclusive;
- if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
+ return 0;
+ if (xdr_stream_decode_bool(xdr, &argp->block) < 0)
return 0;
- argp->block = ntohl(*p++);
- exclusive = ntohl(*p++);
- if (!(p = nlm4_decode_lock(p, &argp->lock)))
+ if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
+ return 0;
+ if (!svcxdr_decode_lock(xdr, &argp->lock))
return 0;
if (exclusive)
argp->lock.fl.fl_type = F_WRLCK;
- argp->reclaim = ntohl(*p++);
- argp->state = ntohl(*p++);
+ if (xdr_stream_decode_bool(xdr, &argp->reclaim) < 0)
+ return 0;
+ if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
+ return 0;
argp->monitor = 1; /* monitor client by default */
- return xdr_argsize_check(rqstp, p);
+ return 1;
}
int
nlm4svc_decode_cancargs(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
- u32 exclusive;
+ u32 exclusive;
- if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
+ return 0;
+ if (xdr_stream_decode_bool(xdr, &argp->block) < 0)
return 0;
- argp->block = ntohl(*p++);
- exclusive = ntohl(*p++);
- if (!(p = nlm4_decode_lock(p, &argp->lock)))
+ if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
+ return 0;
+ if (!svcxdr_decode_lock(xdr, &argp->lock))
return 0;
if (exclusive)
argp->lock.fl.fl_type = F_WRLCK;
- return xdr_argsize_check(rqstp, p);
+ return 1;
}
int
nlm4svc_decode_unlockargs(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
- if (!(p = nlm4_decode_cookie(p, &argp->cookie))
- || !(p = nlm4_decode_lock(p, &argp->lock)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
+ return 0;
+ if (!svcxdr_decode_lock(xdr, &argp->lock))
return 0;
argp->lock.fl.fl_type = F_UNLCK;
- return xdr_argsize_check(rqstp, p);
+
+ return 1;
}
int
-nlm4svc_decode_shareargs(struct svc_rqst *rqstp, __be32 *p)
+nlm4svc_decode_res(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_args *argp = rqstp->rq_argp;
- struct nlm_lock *lock = &argp->lock;
-
- memset(lock, 0, sizeof(*lock));
- locks_init_lock(&lock->fl);
- lock->svid = ~(u32) 0;
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+ struct nlm_res *resp = rqstp->rq_argp;
- if (!(p = nlm4_decode_cookie(p, &argp->cookie))
- || !(p = xdr_decode_string_inplace(p, &lock->caller,
- &lock->len, NLM_MAXSTRLEN))
- || !(p = nlm4_decode_fh(p, &lock->fh))
- || !(p = nlm4_decode_oh(p, &lock->oh)))
+ if (!svcxdr_decode_cookie(xdr, &resp->cookie))
+ return 0;
+ if (!svcxdr_decode_stats(xdr, &resp->status))
return 0;
- argp->fsm_mode = ntohl(*p++);
- argp->fsm_access = ntohl(*p++);
- return xdr_argsize_check(rqstp, p);
+
+ return 1;
}
int
-nlm4svc_encode_shareres(struct svc_rqst *rqstp, __be32 *p)
+nlm4svc_decode_reboot(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_res *resp = rqstp->rq_resp;
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+ struct nlm_reboot *argp = rqstp->rq_argp;
+ u32 len;
- if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+ if (xdr_stream_decode_u32(xdr, &len) < 0)
return 0;
- *p++ = resp->status;
- *p++ = xdr_zero; /* sequence argument */
- return xdr_ressize_check(rqstp, p);
+ if (len > SM_MAXSTRLEN)
+ return 0;
+ p = xdr_inline_decode(xdr, len);
+ if (!p)
+ return 0;
+ argp->len = len;
+ argp->mon = (char *)p;
+ if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
+ return 0;
+ p = xdr_inline_decode(xdr, SM_PRIV_SIZE);
+ if (!p)
+ return 0;
+ memcpy(&argp->priv.data, p, sizeof(argp->priv.data));
+
+ return 1;
}
int
-nlm4svc_encode_res(struct svc_rqst *rqstp, __be32 *p)
+nlm4svc_decode_shareargs(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_res *resp = rqstp->rq_resp;
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+ struct nlm_args *argp = rqstp->rq_argp;
+ struct nlm_lock *lock = &argp->lock;
+
+ memset(lock, 0, sizeof(*lock));
+ locks_init_lock(&lock->fl);
+ lock->svid = ~(u32)0;
- if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+ if (!svcxdr_decode_cookie(xdr, &argp->cookie))
return 0;
- *p++ = resp->status;
- return xdr_ressize_check(rqstp, p);
+ if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
+ return 0;
+ if (!svcxdr_decode_fhandle(xdr, &lock->fh))
+ return 0;
+ if (!svcxdr_decode_owner(xdr, &lock->oh))
+ return 0;
+ /* XXX: Range checks are missing in the original code */
+ if (xdr_stream_decode_u32(xdr, &argp->fsm_mode) < 0)
+ return 0;
+ if (xdr_stream_decode_u32(xdr, &argp->fsm_access) < 0)
+ return 0;
+
+ return 1;
}
int
nlm4svc_decode_notify(struct svc_rqst *rqstp, __be32 *p)
{
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
struct nlm_args *argp = rqstp->rq_argp;
struct nlm_lock *lock = &argp->lock;
- if (!(p = xdr_decode_string_inplace(p, &lock->caller,
- &lock->len, NLM_MAXSTRLEN)))
+ if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
+ return 0;
+ if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
return 0;
- argp->state = ntohl(*p++);
- return xdr_argsize_check(rqstp, p);
+
+ return 1;
}
+
+/*
+ * Encode Reply results
+ */
+
int
-nlm4svc_decode_reboot(struct svc_rqst *rqstp, __be32 *p)
+nlm4svc_encode_void(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_reboot *argp = rqstp->rq_argp;
-
- if (!(p = xdr_decode_string_inplace(p, &argp->mon, &argp->len, SM_MAXSTRLEN)))
- return 0;
- argp->state = ntohl(*p++);
- memcpy(&argp->priv.data, p, sizeof(argp->priv.data));
- p += XDR_QUADLEN(SM_PRIV_SIZE);
- return xdr_argsize_check(rqstp, p);
+ return 1;
}
int
-nlm4svc_decode_res(struct svc_rqst *rqstp, __be32 *p)
+nlm4svc_encode_testres(struct svc_rqst *rqstp, __be32 *p)
{
- struct nlm_res *resp = rqstp->rq_argp;
+ struct xdr_stream *xdr = &rqstp->rq_res_stream;
+ struct nlm_res *resp = rqstp->rq_resp;
- if (!(p = nlm4_decode_cookie(p, &resp->cookie)))
- return 0;
- resp->status = *p++;
- return xdr_argsize_check(rqstp, p);
+ return svcxdr_encode_cookie(xdr, &resp->cookie) &&
+ svcxdr_encode_testrply(xdr, resp);
}
int
-nlm4svc_decode_void(struct svc_rqst *rqstp, __be32 *p)
+nlm4svc_encode_res(struct svc_rqst *rqstp, __be32 *p)
{
- return xdr_argsize_check(rqstp, p);
+ struct xdr_stream *xdr = &rqstp->rq_res_stream;
+ struct nlm_res *resp = rqstp->rq_resp;
+
+ return svcxdr_encode_cookie(xdr, &resp->cookie) &&
+ svcxdr_encode_stats(xdr, resp->status);
}
int
-nlm4svc_encode_void(struct svc_rqst *rqstp, __be32 *p)
+nlm4svc_encode_shareres(struct svc_rqst *rqstp, __be32 *p)
{
- return xdr_ressize_check(rqstp, p);
+ struct xdr_stream *xdr = &rqstp->rq_res_stream;
+ struct nlm_res *resp = rqstp->rq_resp;
+
+ if (!svcxdr_encode_cookie(xdr, &resp->cookie))
+ return 0;
+ if (!svcxdr_encode_stats(xdr, resp->status))
+ return 0;
+ /* sequence */
+ if (xdr_stream_encode_u32(xdr, 0) < 0)
+ return 0;
+
+ return 1;
}
/**
* locks_in_grace
+ * @net: network namespace
*
* Lock managers call this function to determine when it is OK for them
* to answer ordinary lock requests, and when they should accept only
unsigned int longest_chain_cachesize;
struct shrinker nfsd_reply_cache_shrinker;
+
+ /* tracking server-to-server copy mounts */
+ spinlock_t nfsd_ssc_lock;
+ struct list_head nfsd_ssc_mount_list;
+ wait_queue_head_t nfsd_ssc_waitq;
+
/* utsname taken from the process that starts the server */
char nfsd_name[UNX_MAXNODENAME+1];
};
struct nfsd3_getaclres *resp = rqstp->rq_resp;
struct dentry *dentry = resp->fh.fh_dentry;
struct kvec *head = rqstp->rq_res.head;
- struct inode *inode = d_inode(dentry);
+ struct inode *inode;
unsigned int base;
int n;
int w;
return 0;
switch (resp->status) {
case nfs_ok:
+ inode = d_inode(dentry);
if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
return 0;
if (xdr_stream_encode_u32(xdr, resp->mask) < 0)
args.authflavor = clp->cl_cred.cr_flavor;
clp->cl_cb_ident = conn->cb_ident;
} else {
- if (!conn->cb_xprt) {
- trace_nfsd_cb_setup_err(clp, -EINVAL);
+ if (!conn->cb_xprt)
return -EINVAL;
- }
clp->cl_cb_conn.cb_xprt = conn->cb_xprt;
clp->cl_cb_session = ses;
args.bc_xprt = conn->cb_xprt;
}
clp->cl_cb_client = client;
clp->cl_cb_cred = cred;
- trace_nfsd_cb_setup(clp);
+ rcu_read_lock();
+ trace_nfsd_cb_setup(clp, rpc_peeraddr2str(client, RPC_DISPLAY_NETID),
+ args.authflavor);
+ rcu_read_unlock();
return 0;
}
+static void nfsd4_mark_cb_state(struct nfs4_client *clp, int newstate)
+{
+ if (clp->cl_cb_state != newstate) {
+ clp->cl_cb_state = newstate;
+ trace_nfsd_cb_state(clp);
+ }
+}
+
static void nfsd4_mark_cb_down(struct nfs4_client *clp, int reason)
{
if (test_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_flags))
return;
- clp->cl_cb_state = NFSD4_CB_DOWN;
- trace_nfsd_cb_state(clp);
+ nfsd4_mark_cb_state(clp, NFSD4_CB_DOWN);
}
static void nfsd4_mark_cb_fault(struct nfs4_client *clp, int reason)
{
if (test_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_flags))
return;
- clp->cl_cb_state = NFSD4_CB_FAULT;
- trace_nfsd_cb_state(clp);
+ nfsd4_mark_cb_state(clp, NFSD4_CB_FAULT);
}
static void nfsd4_cb_probe_done(struct rpc_task *task, void *calldata)
{
struct nfs4_client *clp = container_of(calldata, struct nfs4_client, cl_cb_null);
- trace_nfsd_cb_done(clp, task->tk_status);
if (task->tk_status)
nfsd4_mark_cb_down(clp, task->tk_status);
- else {
- clp->cl_cb_state = NFSD4_CB_UP;
- trace_nfsd_cb_state(clp);
- }
+ else
+ nfsd4_mark_cb_state(clp, NFSD4_CB_UP);
}
static void nfsd4_cb_probe_release(void *calldata)
*/
void nfsd4_probe_callback(struct nfs4_client *clp)
{
- clp->cl_cb_state = NFSD4_CB_UNKNOWN;
- trace_nfsd_cb_state(clp);
+ trace_nfsd_cb_probe(clp);
+ nfsd4_mark_cb_state(clp, NFSD4_CB_UNKNOWN);
set_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_flags);
nfsd4_run_cb(&clp->cl_cb_null);
}
void nfsd4_change_callback(struct nfs4_client *clp, struct nfs4_cb_conn *conn)
{
- clp->cl_cb_state = NFSD4_CB_UNKNOWN;
+ nfsd4_mark_cb_state(clp, NFSD4_CB_UNKNOWN);
spin_lock(&clp->cl_lock);
memcpy(&clp->cl_cb_conn, conn, sizeof(struct nfs4_cb_conn));
spin_unlock(&clp->cl_lock);
- trace_nfsd_cb_state(clp);
}
/*
struct nfsd4_callback *cb = calldata;
struct nfs4_client *clp = cb->cb_clp;
- trace_nfsd_cb_done(clp, task->tk_status);
-
if (!nfsd4_cb_sequence_done(task, cb))
return;
/* must be called under the state lock */
void nfsd4_shutdown_callback(struct nfs4_client *clp)
{
+ if (clp->cl_cb_state != NFSD4_CB_UNKNOWN)
+ trace_nfsd_cb_shutdown(clp);
+
set_bit(NFSD4_CLIENT_CB_KILL, &clp->cl_flags);
/*
* Note this won't actually result in a null callback;
* kill the old client:
*/
if (clp->cl_cb_client) {
- trace_nfsd_cb_shutdown(clp);
rpc_shutdown_client(clp->cl_cb_client);
clp->cl_cb_client = NULL;
put_cred(clp->cl_cb_cred);
struct rpc_clnt *clnt;
int flags;
- trace_nfsd_cb_work(clp, cb->cb_msg.rpc_proc->p_name);
-
if (cb->cb_need_restart) {
cb->cb_need_restart = false;
} else {
* Don't send probe messages for 4.1 or later.
*/
if (!cb->cb_ops && clp->cl_minorversion) {
- clp->cl_cb_state = NFSD4_CB_UP;
+ nfsd4_mark_cb_state(clp, NFSD4_CB_UP);
nfsd41_destroy_cb(cb);
return;
}
MODULE_PARM_DESC(inter_copy_offload_enable,
"Enable inter server to server copy offload. Default: false");
+#ifdef CONFIG_NFSD_V4_2_INTER_SSC
+static int nfsd4_ssc_umount_timeout = 900000; /* default to 15 mins */
+module_param(nfsd4_ssc_umount_timeout, int, 0644);
+MODULE_PARM_DESC(nfsd4_ssc_umount_timeout,
+ "idle msecs before unmount export from source server");
+#endif
+
#ifdef CONFIG_NFSD_V4_SECURITY_LABEL
#include <linux/security.h>
#define NFSD42_INTERSSC_MOUNTOPS "vers=4.2,addr=%s,sec=sys"
+/*
+ * setup a work entry in the ssc delayed unmount list.
+ */
+static __be32 nfsd4_ssc_setup_dul(struct nfsd_net *nn, char *ipaddr,
+ struct nfsd4_ssc_umount_item **retwork, struct vfsmount **ss_mnt)
+{
+ struct nfsd4_ssc_umount_item *ni = NULL;
+ struct nfsd4_ssc_umount_item *work = NULL;
+ struct nfsd4_ssc_umount_item *tmp;
+ DEFINE_WAIT(wait);
+
+ *ss_mnt = NULL;
+ *retwork = NULL;
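+ /* allocated optimistically before taking the spinlock; if the
+ * allocation fails, 'work' stays NULL and the list insert below
+ * is simply skipped */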
+ work = kzalloc(sizeof(*work), GFP_KERNEL);
+try_again:
+ spin_lock(&nn->nfsd_ssc_lock);
+ list_for_each_entry_safe(ni, tmp, &nn->nfsd_ssc_mount_list, nsui_list) {
+ if (strncmp(ni->nsui_ipaddr, ipaddr, sizeof(ni->nsui_ipaddr)))
+ continue;
+ /* found a match */
+ if (ni->nsui_busy) {
+ /* wait - and try again */
+ prepare_to_wait(&nn->nfsd_ssc_waitq, &wait,
+ TASK_INTERRUPTIBLE);
+ spin_unlock(&nn->nfsd_ssc_lock);
+
+ /* allow 20 secs for mount/unmount for now - revisit */
+ if (signal_pending(current) ||
+ (schedule_timeout(20*HZ) == 0)) {
+ kfree(work);
+ return nfserr_eagain;
+ }
+ finish_wait(&nn->nfsd_ssc_waitq, &wait);
+ goto try_again;
+ }
+ *ss_mnt = ni->nsui_vfsmount;
+ refcount_inc(&ni->nsui_refcnt);
+ spin_unlock(&nn->nfsd_ssc_lock);
+ kfree(work);
+
+ /* return vfsmount in ss_mnt */
+ return 0;
+ }
+ if (work) {
+ strncpy(work->nsui_ipaddr, ipaddr, sizeof(work->nsui_ipaddr));
+ refcount_set(&work->nsui_refcnt, 2);
+ work->nsui_busy = true;
+ list_add_tail(&work->nsui_list, &nn->nfsd_ssc_mount_list);
+ *retwork = work;
+ }
+ spin_unlock(&nn->nfsd_ssc_lock);
+ return 0;
+}
+
+static void nfsd4_ssc_update_dul_work(struct nfsd_net *nn,
+ struct nfsd4_ssc_umount_item *work, struct vfsmount *ss_mnt)
+{
+ /* set nsui_vfsmount, clear the busy flag and wake up waiters */
+ spin_lock(&nn->nfsd_ssc_lock);
+ work->nsui_vfsmount = ss_mnt;
+ work->nsui_busy = false;
+ wake_up_all(&nn->nfsd_ssc_waitq);
+ spin_unlock(&nn->nfsd_ssc_lock);
+}
+
+static void nfsd4_ssc_cancel_dul_work(struct nfsd_net *nn,
+ struct nfsd4_ssc_umount_item *work)
+{
+ spin_lock(&nn->nfsd_ssc_lock);
+ list_del(&work->nsui_list);
+ wake_up_all(&nn->nfsd_ssc_waitq);
+ spin_unlock(&nn->nfsd_ssc_lock);
+ kfree(work);
+}
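Taken together, these three helpers form a claim/publish/cancel protocol around the delayed-unmount list: a caller first asks nfsd4_ssc_setup_dul() for a cached vfsmount and only performs a mount itself when it is handed back a "work" entry to fill in. A condensed sketch of that calling pattern, assuming only the NFSD definitions above (do_the_mount() is a hypothetical placeholder, error handling is trimmed; the real usage appears in the connect hunk below):

	status = nfsd4_ssc_setup_dul(nn, ipaddr, &work, &ss_mnt);
	if (status)
		return status;		/* interrupted while waiting */
	if (ss_mnt)
		goto done;		/* reused a cached vfsmount */

	ss_mnt = do_the_mount();	/* hypothetical placeholder */
	if (IS_ERR(ss_mnt)) {
		if (work)		/* we claimed the entry: cancel it */
			nfsd4_ssc_cancel_dul_work(nn, work);
		return nfserr_nodev;
	}
	if (work)			/* publish the new vfsmount */
		nfsd4_ssc_update_dul_work(nn, work, ss_mnt);
done:
	...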
+
/*
* Support one copy source server for now.
*/
char *ipaddr, *dev_name, *raw_data;
int len, raw_len;
__be32 status = nfserr_inval;
+ struct nfsd4_ssc_umount_item *work = NULL;
+ struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
naddr = &nss->u.nl4_addr;
tmp_addrlen = rpc_uaddr2sockaddr(SVC_NET(rqstp), naddr->addr,
goto out_free_rawdata;
snprintf(dev_name, len + 5, "%s%s%s:/", startsep, ipaddr, endsep);
+ status = nfsd4_ssc_setup_dul(nn, ipaddr, &work, &ss_mnt);
+ if (status)
+ goto out_free_devname;
+ if (ss_mnt)
+ goto out_done;
+
/* Use an 'internal' mount: SB_KERNMOUNT -> MNT_INTERNAL */
ss_mnt = vfs_kern_mount(type, SB_KERNMOUNT, dev_name, raw_data);
module_put(type->owner);
- if (IS_ERR(ss_mnt))
+ if (IS_ERR(ss_mnt)) {
+ status = nfserr_nodev;
+ if (work)
+ nfsd4_ssc_cancel_dul_work(nn, work);
goto out_free_devname;
-
+ }
+ if (work)
+ nfsd4_ssc_update_dul_work(nn, work, ss_mnt);
+out_done:
status = 0;
*mount = ss_mnt;
nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct nfsd_file *src,
struct nfsd_file *dst)
{
+ bool found = false;
+ long timeout;
+ struct nfsd4_ssc_umount_item *tmp;
+ struct nfsd4_ssc_umount_item *ni = NULL;
+ struct nfsd_net *nn = net_generic(dst->nf_net, nfsd_net_id);
+
nfs42_ssc_close(src->nf_file);
- fput(src->nf_file);
nfsd_file_put(dst);
- mntput(ss_mnt);
+ fput(src->nf_file);
+
+ if (!nn) {
+ mntput(ss_mnt);
+ return;
+ }
+ spin_lock(&nn->nfsd_ssc_lock);
+ timeout = msecs_to_jiffies(nfsd4_ssc_umount_timeout);
+ list_for_each_entry_safe(ni, tmp, &nn->nfsd_ssc_mount_list, nsui_list) {
+ if (ni->nsui_vfsmount->mnt_sb == ss_mnt->mnt_sb) {
+ list_del(&ni->nsui_list);
+ /*
+ * vfsmount can be shared by multiple exports,
+ * decrement refcnt. If the count drops to 1 it
+ * will be unmounted when nsui_expire expires.
+ */
+ refcount_dec(&ni->nsui_refcnt);
+ ni->nsui_expire = jiffies + timeout;
+ list_add_tail(&ni->nsui_list, &nn->nfsd_ssc_mount_list);
+ found = true;
+ break;
+ }
+ }
+ spin_unlock(&nn->nfsd_ssc_lock);
+ if (!found) {
+ mntput(ss_mnt);
+ return;
+ }
}
#else /* CONFIG_NFSD_V4_2_INTER_SSC */
static void nfsd4_init_copy_res(struct nfsd4_copy *copy, bool sync)
{
- copy->cp_res.wr_stable_how = NFS_UNSTABLE;
+ copy->cp_res.wr_stable_how =
+ copy->committed ? NFS_FILE_SYNC : NFS_UNSTABLE;
copy->cp_synchronous = sync;
gen_boot_verifier(&copy->cp_res.wr_verifier, copy->cp_clp->net);
}
u64 bytes_total = copy->cp_count;
u64 src_pos = copy->cp_src_pos;
u64 dst_pos = copy->cp_dst_pos;
+ __be32 status;
/* See RFC 7862 p.67: */
if (bytes_total == 0)
src_pos += bytes_copied;
dst_pos += bytes_copied;
} while (bytes_total > 0 && !copy->cp_synchronous);
+ /* for a non-zero asynchronous copy, commit the data */
+ if (!copy->cp_synchronous && copy->cp_res.wr_bytes_written > 0) {
+ down_write(&copy->nf_dst->nf_rwsem);
+ status = vfs_fsync_range(copy->nf_dst->nf_file,
+ copy->cp_dst_pos,
+ copy->cp_res.wr_bytes_written, 0);
+ up_write(&copy->nf_dst->nf_rwsem);
+ if (!status)
+ copy->committed = true;
+ }
return bytes_copied;
}
memcpy(&cb_copy->fh, &copy->fh, sizeof(copy->fh));
nfsd4_init_cb(&cb_copy->cp_cb, cb_copy->cp_clp,
&nfsd4_cb_offload_ops, NFSPROC4_CLNT_CB_OFFLOAD);
+ trace_nfsd_cb_offload(copy->cp_clp, &copy->cp_res.cb_stateid,
+ &copy->fh, copy->cp_count, copy->nfserr);
nfsd4_run_cb(&cb_copy->cp_cb);
out:
if (!copy->cp_intra)
{
struct nfsd4_compoundres *resp = rqstp->rq_resp;
struct nfsd4_compoundargs *argp = rqstp->rq_argp;
- struct nfsd4_op *this = &argp->ops[resp->opcnt - 1];
+ struct nfsd4_op *this;
struct nfsd4_compound_state *cstate = &resp->cstate;
struct nfs4_op_map *allow = &cstate->clp->cl_spo_must_allow;
u32 opiter;
#include <linux/jhash.h>
#include <linux/string_helpers.h>
#include <linux/fsnotify.h>
+#include <linux/nfs_ssc.h>
#include "xdr4.h"
#include "xdr4cb.h"
#include "vfs.h"
struct nfsd4_conn *c = container_of(u, struct nfsd4_conn, cn_xpt_user);
struct nfs4_client *clp = c->cn_session->se_client;
+ trace_nfsd_cb_lost(clp);
+
spin_lock(&clp->cl_lock);
if (!list_empty(&c->cn_persession)) {
list_del(&c->cn_persession);
seq_printf(m, "\"");
}
+static const char *cb_state2str(int state)
+{
+ switch (state) {
+ case NFSD4_CB_UP:
+ return "UP";
+ case NFSD4_CB_UNKNOWN:
+ return "UNKNOWN";
+ case NFSD4_CB_DOWN:
+ return "DOWN";
+ case NFSD4_CB_FAULT:
+ return "FAULT";
+ }
+ return "UNDEFINED";
+}
+
static int client_info_show(struct seq_file *m, void *v)
{
struct inode *inode = m->private;
seq_printf(m, "\nImplementation time: [%lld, %ld]\n",
clp->cl_nii_time.tv_sec, clp->cl_nii_time.tv_nsec);
}
+ seq_printf(m, "callback state: %s\n", cb_state2str(clp->cl_cb_state));
+ seq_printf(m, "callback address: %pISpc\n", &clp->cl_cb_conn.cb_addr);
drop_client(clp);
return 0;
struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
bool already_expired;
+ trace_nfsd_clid_admin_expired(&clp->cl_clientid);
+
spin_lock(&clp->cl_lock);
clp->cl_time = 0;
spin_unlock(&clp->cl_lock);
lockdep_assert_held(&nn->client_lock);
- dprintk("NFSD: move_to_confirm nfs4_client %p\n", clp);
list_move(&clp->cl_idhash, &nn->conf_id_hashtbl[idhashval]);
rb_erase(&clp->cl_namenode, &nn->unconf_name_tree);
add_clp_to_name_tree(clp, &nn->conf_name_tree);
- if (!test_and_set_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags) &&
- clp->cl_nfsd_dentry &&
- clp->cl_nfsd_info_dentry)
- fsnotify_dentry(clp->cl_nfsd_info_dentry, FS_MODIFY);
+ set_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags);
+ trace_nfsd_clid_confirmed(&clp->cl_clientid);
renew_client_locked(clp);
}
}
/* case 6 */
exid->flags |= EXCHGID4_FLAG_CONFIRMED_R;
+ trace_nfsd_clid_confirmed_r(conf);
goto out_copy;
}
if (!creds_match) { /* case 3 */
if (client_has_state(conf)) {
status = nfserr_clid_inuse;
+ trace_nfsd_clid_cred_mismatch(conf, rqstp);
goto out;
}
goto out_new;
}
if (verfs_match) { /* case 2 */
conf->cl_exchange_flags |= EXCHGID4_FLAG_CONFIRMED_R;
+ trace_nfsd_clid_confirmed_r(conf);
goto out_copy;
}
/* case 5, client reboot */
+ trace_nfsd_clid_verf_mismatch(conf, rqstp, &verf);
conf = NULL;
goto out_new;
}
goto out;
}
- unconf = find_unconfirmed_client_by_name(&exid->clname, nn);
+ unconf = find_unconfirmed_client_by_name(&exid->clname, nn);
if (unconf) /* case 4, possible retry or client restart */
unhash_client_locked(unconf);
- /* case 1 (normal case) */
+ /* case 1, new owner ID */
+ trace_nfsd_clid_fresh(new);
+
out_new:
if (conf) {
status = mark_client_expired_locked(conf);
if (status)
goto out;
+ trace_nfsd_clid_replaced(&conf->cl_clientid);
}
new->cl_minorversion = cstate->minorversion;
new->cl_spo_must_allow.u.words[0] = exid->spo_must_allow[0];
out_nolock:
if (new)
expire_client(new);
- if (unconf)
+ if (unconf) {
+ trace_nfsd_clid_expire_unconf(&unconf->cl_clientid);
expire_client(unconf);
+ }
return status;
}
goto out_free_conn;
}
} else if (unconf) {
+ status = nfserr_clid_inuse;
if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
!rpc_cmp_addr(sa, (struct sockaddr *) &unconf->cl_addr)) {
- status = nfserr_clid_inuse;
+ trace_nfsd_clid_cred_mismatch(unconf, rqstp);
goto out_free_conn;
}
status = nfserr_wrong_cred;
old = NULL;
goto out_free_conn;
}
+ trace_nfsd_clid_replaced(&old->cl_clientid);
}
move_to_confirmed(unconf);
conf = unconf;
/* cache solo and embedded create sessions under the client_lock */
nfsd4_cache_create_session(cr_ses, cs_slot, status);
spin_unlock(&nn->client_lock);
+ if (conf == unconf)
+ fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
/* init connection and backchannel */
nfsd4_init_conn(rqstp, conn, new);
nfsd4_put_session(new);
status = nfserr_wrong_cred;
goto out;
}
+ trace_nfsd_clid_destroyed(&clp->cl_clientid);
unhash_client_locked(clp);
out:
spin_unlock(&nn->client_lock);
goto out;
status = nfs_ok;
+ trace_nfsd_clid_reclaim_complete(&clp->cl_clientid);
nfsd4_client_record_create(clp);
inc_reclaim_complete(clp);
out:
new = create_client(clname, rqstp, &clverifier);
if (new == NULL)
return nfserr_jukebox;
- /* Cases below refer to rfc 3530 section 14.2.33: */
spin_lock(&nn->client_lock);
conf = find_confirmed_client_by_name(&clname, nn);
if (conf && client_has_state(conf)) {
- /* case 0: */
status = nfserr_clid_inuse;
if (clp_used_exchangeid(conf))
goto out;
if (!same_creds(&conf->cl_cred, &rqstp->rq_cred)) {
- trace_nfsd_clid_inuse_err(conf);
+ trace_nfsd_clid_cred_mismatch(conf, rqstp);
goto out;
}
}
unconf = find_unconfirmed_client_by_name(&clname, nn);
if (unconf)
unhash_client_locked(unconf);
- /* We need to handle only case 1: probable callback update */
- if (conf && same_verf(&conf->cl_verifier, &clverifier)) {
- copy_clid(new, conf);
- gen_confirm(new, nn);
- }
+ if (conf) {
+ if (same_verf(&conf->cl_verifier, &clverifier)) {
+ copy_clid(new, conf);
+ gen_confirm(new, nn);
+ } else
+ trace_nfsd_clid_verf_mismatch(conf, rqstp,
+ &clverifier);
+ } else
+ trace_nfsd_clid_fresh(new);
new->cl_minorversion = 0;
gen_callback(new, setclid, rqstp);
add_to_unconfirmed(new);
spin_unlock(&nn->client_lock);
if (new)
free_client(new);
- if (unconf)
+ if (unconf) {
+ trace_nfsd_clid_expire_unconf(&unconf->cl_clientid);
expire_client(unconf);
+ }
return status;
}
-
__be32
nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
struct nfsd4_compound_state *cstate,
* Nevertheless, RFC 7530 recommends INUSE for this case:
*/
status = nfserr_clid_inuse;
- if (unconf && !same_creds(&unconf->cl_cred, &rqstp->rq_cred))
+ if (unconf && !same_creds(&unconf->cl_cred, &rqstp->rq_cred)) {
+ trace_nfsd_clid_cred_mismatch(unconf, rqstp);
goto out;
- if (conf && !same_creds(&conf->cl_cred, &rqstp->rq_cred))
+ }
+ if (conf && !same_creds(&conf->cl_cred, &rqstp->rq_cred)) {
+ trace_nfsd_clid_cred_mismatch(conf, rqstp);
goto out;
- /* cases below refer to rfc 3530 section 14.2.34: */
+ }
if (!unconf || !same_verf(&confirm, &unconf->cl_confirm)) {
if (conf && same_verf(&confirm, &conf->cl_confirm)) {
- /* case 2: probable retransmit */
status = nfs_ok;
- } else /* case 4: client hasn't noticed we rebooted yet? */
+ } else
status = nfserr_stale_clientid;
goto out;
}
status = nfs_ok;
- if (conf) { /* case 1: callback update */
+ if (conf) {
old = unconf;
unhash_client_locked(old);
nfsd4_change_callback(conf, &unconf->cl_cb_conn);
- } else { /* case 3: normal case; new or rebooted client */
+ } else {
old = find_confirmed_client_by_name(&unconf->cl_name, nn);
if (old) {
status = nfserr_clid_inuse;
old = NULL;
goto out;
}
+ trace_nfsd_clid_replaced(&old->cl_clientid);
}
move_to_confirmed(unconf);
conf = unconf;
}
get_client_locked(conf);
spin_unlock(&nn->client_lock);
+ if (conf == unconf)
+ fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
nfsd4_probe_callback(conf);
spin_lock(&nn->client_lock);
put_client_renew_locked(conf);
struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
struct nfs4_file *fp = dp->dl_stid.sc_file;
- trace_nfsd_deleg_break(&dp->dl_stid.sc_stateid);
+ trace_nfsd_cb_recall(&dp->dl_stid);
/*
* We don't want the locks code to timeout the lease for us;
return false;
}
+#ifdef CONFIG_NFSD_V4_2_INTER_SSC
+void nfsd4_ssc_init_umount_work(struct nfsd_net *nn)
+{
+ spin_lock_init(&nn->nfsd_ssc_lock);
+ INIT_LIST_HEAD(&nn->nfsd_ssc_mount_list);
+ init_waitqueue_head(&nn->nfsd_ssc_waitq);
+}
+EXPORT_SYMBOL_GPL(nfsd4_ssc_init_umount_work);
+
+/*
+ * This is called when nfsd is being shut down, after all inter_ssc
+ * cleanup is done, to destroy the ssc delayed unmount list.
+ */
+static void nfsd4_ssc_shutdown_umount(struct nfsd_net *nn)
+{
+ struct nfsd4_ssc_umount_item *ni = NULL;
+ struct nfsd4_ssc_umount_item *tmp;
+
+ spin_lock(&nn->nfsd_ssc_lock);
+ list_for_each_entry_safe(ni, tmp, &nn->nfsd_ssc_mount_list, nsui_list) {
+ list_del(&ni->nsui_list);
+ spin_unlock(&nn->nfsd_ssc_lock);
+ mntput(ni->nsui_vfsmount);
+ kfree(ni);
+ spin_lock(&nn->nfsd_ssc_lock);
+ }
+ spin_unlock(&nn->nfsd_ssc_lock);
+}
+
+static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
+{
+ bool do_wakeup = false;
+ struct nfsd4_ssc_umount_item *ni = NULL;
+ struct nfsd4_ssc_umount_item *tmp;
+
+ spin_lock(&nn->nfsd_ssc_lock);
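+ /* the list is kept in expiry order (consumers re-add entries at
+ * the tail with a fresh nsui_expire), so the scan can stop at the
+ * first entry that has not yet expired */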
+ list_for_each_entry_safe(ni, tmp, &nn->nfsd_ssc_mount_list, nsui_list) {
+ if (time_after(jiffies, ni->nsui_expire)) {
+ if (refcount_read(&ni->nsui_refcnt) > 1)
+ continue;
+
+ /* mark as being unmounted */
+ ni->nsui_busy = true;
+ spin_unlock(&nn->nfsd_ssc_lock);
+ mntput(ni->nsui_vfsmount);
+ spin_lock(&nn->nfsd_ssc_lock);
+
+ /* waiters need to restart from the beginning of the list */
+ list_del(&ni->nsui_list);
+ kfree(ni);
+
+ /* wake up ssc_connect waiters */
+ do_wakeup = true;
+ continue;
+ }
+ break;
+ }
+ if (do_wakeup)
+ wake_up_all(&nn->nfsd_ssc_waitq);
+ spin_unlock(&nn->nfsd_ssc_lock);
+}
+#endif
+
static time64_t
nfs4_laundromat(struct nfsd_net *nn)
{
clp = list_entry(pos, struct nfs4_client, cl_lru);
if (!state_expired(&lt, clp->cl_time))
break;
- if (mark_client_expired_locked(clp)) {
- trace_nfsd_clid_expired(&clp->cl_clientid);
+ if (mark_client_expired_locked(clp))
continue;
- }
list_add(&clp->cl_lru, &reaplist);
}
spin_unlock(&nn->client_lock);
list_del_init(&nbl->nbl_lru);
free_blocked_lock(nbl);
}
+#ifdef CONFIG_NFSD_V4_2_INTER_SSC
+ /* service the server-to-server copy delayed unmount list */
+ nfsd4_ssc_expire_umount(nn);
+#endif
out:
return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
}
}
spin_unlock(&nn->blocked_locks_lock);
- if (queue)
+ if (queue) {
+ trace_nfsd_cb_notify_lock(lo, nbl);
nfsd4_run_cb(&nbl->nbl_cb);
+ }
}
static const struct lock_manager_operations nfsd_posix_mng_ops = {
unsigned int strhashval;
struct nfs4_client_reclaim *crp;
- trace_nfsd_clid_reclaim(nn, name.len, name.data);
crp = alloc_reclaim();
if (crp) {
strhashval = clientstr_hashval(name);
unsigned int strhashval;
struct nfs4_client_reclaim *crp = NULL;
- trace_nfsd_clid_find(nn, name.len, name.data);
-
strhashval = clientstr_hashval(name);
list_for_each_entry(crp, &nn->reclaim_str_hashtbl[strhashval], cr_strhash) {
if (compare_blob(&crp->cr_name, &name) == 0) {
nfsd4_client_tracking_exit(net);
nfs4_state_destroy_net(net);
+#ifdef CONFIG_NFSD_V4_2_INTER_SSC
+ nfsd4_ssc_shutdown_umount(nn);
+#endif
}
void
extern int nfsd4_is_junction(struct dentry *dentry);
extern int register_cld_notifier(void);
extern void unregister_cld_notifier(void);
+#ifdef CONFIG_NFSD_V4_2_INTER_SSC
+extern void nfsd4_ssc_init_umount_work(struct nfsd_net *nn);
+#endif
+
#else /* CONFIG_NFSD_V4 */
static inline int nfsd4_is_junction(struct dentry *dentry)
{
* returns a crc32 hash for the filehandle that is compatible with
* the one displayed by "wireshark".
*/
-
-static inline u32
-knfsd_fh_hash(struct knfsd_fh *fh)
+static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh)
{
return ~crc32_le(0xFFFFFFFF, (unsigned char *)&fh->fh_base, fh->fh_size);
}
#else
-static inline u32
-knfsd_fh_hash(struct knfsd_fh *fh)
+static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh)
{
return 0;
}
if (ret)
goto out_filecache;
+#ifdef CONFIG_NFSD_V4_2_INTER_SSC
+ nfsd4_ssc_init_umount_work(nn);
+#endif
nn->nfsd_net_up = true;
return 0;
__entry->ino = ino;
__entry->len = namlen;
memcpy(__get_str(name), name, namlen);
- __assign_str(name, name);
),
TP_printk("fh_hash=0x%08x ino=%llu name=%.*s",
__entry->fh_hash, __entry->ino,
DEFINE_STATEID_EVENT(open);
DEFINE_STATEID_EVENT(deleg_read);
-DEFINE_STATEID_EVENT(deleg_break);
DEFINE_STATEID_EVENT(deleg_recall);
DECLARE_EVENT_CLASS(nfsd_stateseqid_class,
TP_PROTO(const clientid_t *clid), \
TP_ARGS(clid))
-DEFINE_CLIENTID_EVENT(expired);
+DEFINE_CLIENTID_EVENT(expire_unconf);
+DEFINE_CLIENTID_EVENT(reclaim_complete);
+DEFINE_CLIENTID_EVENT(confirmed);
+DEFINE_CLIENTID_EVENT(destroyed);
+DEFINE_CLIENTID_EVENT(admin_expired);
+DEFINE_CLIENTID_EVENT(replaced);
DEFINE_CLIENTID_EVENT(purged);
DEFINE_CLIENTID_EVENT(renew);
DEFINE_CLIENTID_EVENT(stale);
DEFINE_NET_EVENT(grace_start);
DEFINE_NET_EVENT(grace_complete);
-DECLARE_EVENT_CLASS(nfsd_clid_class,
- TP_PROTO(const struct nfsd_net *nn,
- unsigned int namelen,
- const unsigned char *namedata),
- TP_ARGS(nn, namelen, namedata),
+TRACE_EVENT(nfsd_clid_cred_mismatch,
+ TP_PROTO(
+ const struct nfs4_client *clp,
+ const struct svc_rqst *rqstp
+ ),
+ TP_ARGS(clp, rqstp),
TP_STRUCT__entry(
- __field(unsigned long long, boot_time)
- __field(unsigned int, namelen)
- __dynamic_array(unsigned char, name, namelen)
+ __field(u32, cl_boot)
+ __field(u32, cl_id)
+ __field(unsigned long, cl_flavor)
+ __field(unsigned long, new_flavor)
+ __array(unsigned char, addr, sizeof(struct sockaddr_in6))
),
TP_fast_assign(
- __entry->boot_time = nn->boot_time;
- __entry->namelen = namelen;
- memcpy(__get_dynamic_array(name), namedata, namelen);
+ __entry->cl_boot = clp->cl_clientid.cl_boot;
+ __entry->cl_id = clp->cl_clientid.cl_id;
+ __entry->cl_flavor = clp->cl_cred.cr_flavor;
+ __entry->new_flavor = rqstp->rq_cred.cr_flavor;
+ memcpy(__entry->addr, &rqstp->rq_xprt->xpt_remote,
+ sizeof(struct sockaddr_in6));
),
- TP_printk("boot_time=%16llx nfs4_clientid=%.*s",
- __entry->boot_time, __entry->namelen, __get_str(name))
+ TP_printk("client %08x:%08x flavor=%s, conflict=%s from addr=%pISpc",
+ __entry->cl_boot, __entry->cl_id,
+ show_nfsd_authflavor(__entry->cl_flavor),
+ show_nfsd_authflavor(__entry->new_flavor), __entry->addr
+ )
)
-#define DEFINE_CLID_EVENT(name) \
-DEFINE_EVENT(nfsd_clid_class, nfsd_clid_##name, \
- TP_PROTO(const struct nfsd_net *nn, \
- unsigned int namelen, \
- const unsigned char *namedata), \
- TP_ARGS(nn, namelen, namedata))
-
-DEFINE_CLID_EVENT(find);
-DEFINE_CLID_EVENT(reclaim);
+TRACE_EVENT(nfsd_clid_verf_mismatch,
+ TP_PROTO(
+ const struct nfs4_client *clp,
+ const struct svc_rqst *rqstp,
+ const nfs4_verifier *verf
+ ),
+ TP_ARGS(clp, rqstp, verf),
+ TP_STRUCT__entry(
+ __field(u32, cl_boot)
+ __field(u32, cl_id)
+ __array(unsigned char, cl_verifier, NFS4_VERIFIER_SIZE)
+ __array(unsigned char, new_verifier, NFS4_VERIFIER_SIZE)
+ __array(unsigned char, addr, sizeof(struct sockaddr_in6))
+ ),
+ TP_fast_assign(
+ __entry->cl_boot = clp->cl_clientid.cl_boot;
+ __entry->cl_id = clp->cl_clientid.cl_id;
+ memcpy(__entry->cl_verifier, (void *)&clp->cl_verifier,
+ NFS4_VERIFIER_SIZE);
+ memcpy(__entry->new_verifier, (void *)verf,
+ NFS4_VERIFIER_SIZE);
+ memcpy(__entry->addr, &rqstp->rq_xprt->xpt_remote,
+ sizeof(struct sockaddr_in6));
+ ),
+ TP_printk("client %08x:%08x verf=0x%s, updated=0x%s from addr=%pISpc",
+ __entry->cl_boot, __entry->cl_id,
+ __print_hex_str(__entry->cl_verifier, NFS4_VERIFIER_SIZE),
+ __print_hex_str(__entry->new_verifier, NFS4_VERIFIER_SIZE),
+ __entry->addr
+ )
+);
-TRACE_EVENT(nfsd_clid_inuse_err,
+DECLARE_EVENT_CLASS(nfsd_clid_class,
TP_PROTO(const struct nfs4_client *clp),
TP_ARGS(clp),
TP_STRUCT__entry(
__field(u32, cl_boot)
__field(u32, cl_id)
__array(unsigned char, addr, sizeof(struct sockaddr_in6))
- __field(unsigned int, namelen)
- __dynamic_array(unsigned char, name, clp->cl_name.len)
+ __field(unsigned long, flavor)
+ __array(unsigned char, verifier, NFS4_VERIFIER_SIZE)
+ __dynamic_array(char, name, clp->cl_name.len + 1)
),
TP_fast_assign(
__entry->cl_boot = clp->cl_clientid.cl_boot;
__entry->cl_id = clp->cl_clientid.cl_id;
memcpy(__entry->addr, &clp->cl_addr,
sizeof(struct sockaddr_in6));
- __entry->namelen = clp->cl_name.len;
- memcpy(__get_dynamic_array(name), clp->cl_name.data,
- clp->cl_name.len);
- ),
- TP_printk("nfs4_clientid %.*s already in use by %pISpc, client %08x:%08x",
- __entry->namelen, __get_str(name), __entry->addr,
+ __entry->flavor = clp->cl_cred.cr_flavor;
+ memcpy(__entry->verifier, (void *)&clp->cl_verifier,
+ NFS4_VERIFIER_SIZE);
+ memcpy(__get_str(name), clp->cl_name.data, clp->cl_name.len);
+ __get_str(name)[clp->cl_name.len] = '\0';
+ ),
+ TP_printk("addr=%pISpc name='%s' verifier=0x%s flavor=%s client=%08x:%08x",
+ __entry->addr, __get_str(name),
+ __print_hex_str(__entry->verifier, NFS4_VERIFIER_SIZE),
+ show_nfsd_authflavor(__entry->flavor),
__entry->cl_boot, __entry->cl_id)
-)
+);
+
+#define DEFINE_CLID_EVENT(name) \
+DEFINE_EVENT(nfsd_clid_class, nfsd_clid_##name, \
+ TP_PROTO(const struct nfs4_client *clp), \
+ TP_ARGS(clp))
+
+DEFINE_CLID_EVENT(fresh);
+DEFINE_CLID_EVENT(confirmed_r);
/*
* from fs/nfsd/filecache.h
memcpy(__entry->addr, &conn->cb_addr,
sizeof(struct sockaddr_in6));
),
- TP_printk("client %08x:%08x callback addr=%pISpc prog=%u ident=%u",
- __entry->cl_boot, __entry->cl_id,
- __entry->addr, __entry->prog, __entry->ident)
+ TP_printk("addr=%pISpc client %08x:%08x prog=%u ident=%u",
+ __entry->addr, __entry->cl_boot, __entry->cl_id,
+ __entry->prog, __entry->ident)
);
TRACE_EVENT(nfsd_cb_nodelegs,
TP_printk("client %08x:%08x", __entry->cl_boot, __entry->cl_id)
)
-TRACE_DEFINE_ENUM(NFSD4_CB_UP);
-TRACE_DEFINE_ENUM(NFSD4_CB_UNKNOWN);
-TRACE_DEFINE_ENUM(NFSD4_CB_DOWN);
-TRACE_DEFINE_ENUM(NFSD4_CB_FAULT);
-
#define show_cb_state(val) \
__print_symbolic(val, \
{ NFSD4_CB_UP, "UP" }, \
TP_PROTO(const struct nfs4_client *clp), \
TP_ARGS(clp))
-DEFINE_NFSD_CB_EVENT(setup);
DEFINE_NFSD_CB_EVENT(state);
+DEFINE_NFSD_CB_EVENT(probe);
+DEFINE_NFSD_CB_EVENT(lost);
DEFINE_NFSD_CB_EVENT(shutdown);
+TRACE_DEFINE_ENUM(RPC_AUTH_NULL);
+TRACE_DEFINE_ENUM(RPC_AUTH_UNIX);
+TRACE_DEFINE_ENUM(RPC_AUTH_GSS);
+TRACE_DEFINE_ENUM(RPC_AUTH_GSS_KRB5);
+TRACE_DEFINE_ENUM(RPC_AUTH_GSS_KRB5I);
+TRACE_DEFINE_ENUM(RPC_AUTH_GSS_KRB5P);
+
+#define show_nfsd_authflavor(val) \
+ __print_symbolic(val, \
+ { RPC_AUTH_NULL, "none" }, \
+ { RPC_AUTH_UNIX, "sys" }, \
+ { RPC_AUTH_GSS, "gss" }, \
+ { RPC_AUTH_GSS_KRB5, "krb5" }, \
+ { RPC_AUTH_GSS_KRB5I, "krb5i" }, \
+ { RPC_AUTH_GSS_KRB5P, "krb5p" })
+
+TRACE_EVENT(nfsd_cb_setup,
+ TP_PROTO(const struct nfs4_client *clp,
+ const char *netid,
+ rpc_authflavor_t authflavor
+ ),
+ TP_ARGS(clp, netid, authflavor),
+ TP_STRUCT__entry(
+ __field(u32, cl_boot)
+ __field(u32, cl_id)
+ __field(unsigned long, authflavor)
+ __array(unsigned char, addr, sizeof(struct sockaddr_in6))
+ __array(unsigned char, netid, 8)
+ ),
+ TP_fast_assign(
+ __entry->cl_boot = clp->cl_clientid.cl_boot;
+ __entry->cl_id = clp->cl_clientid.cl_id;
+ strlcpy(__entry->netid, netid, sizeof(__entry->netid));
+ __entry->authflavor = authflavor;
+ memcpy(__entry->addr, &clp->cl_cb_conn.cb_addr,
+ sizeof(struct sockaddr_in6));
+ ),
+ TP_printk("addr=%pISpc client %08x:%08x proto=%s flavor=%s",
+ __entry->addr, __entry->cl_boot, __entry->cl_id,
+ __entry->netid, show_nfsd_authflavor(__entry->authflavor))
+);
+
TRACE_EVENT(nfsd_cb_setup_err,
TP_PROTO(
const struct nfs4_client *clp,
__entry->addr, __entry->cl_boot, __entry->cl_id, __entry->error)
);
-TRACE_EVENT(nfsd_cb_work,
+TRACE_EVENT(nfsd_cb_recall,
TP_PROTO(
- const struct nfs4_client *clp,
- const char *procedure
+ const struct nfs4_stid *stid
),
- TP_ARGS(clp, procedure),
+ TP_ARGS(stid),
TP_STRUCT__entry(
__field(u32, cl_boot)
__field(u32, cl_id)
- __string(procedure, procedure)
+ __field(u32, si_id)
+ __field(u32, si_generation)
__array(unsigned char, addr, sizeof(struct sockaddr_in6))
),
TP_fast_assign(
+ const stateid_t *stp = &stid->sc_stateid;
+ const struct nfs4_client *clp = stid->sc_client;
+
+ __entry->cl_boot = stp->si_opaque.so_clid.cl_boot;
+ __entry->cl_id = stp->si_opaque.so_clid.cl_id;
+ __entry->si_id = stp->si_opaque.so_id;
+ __entry->si_generation = stp->si_generation;
+ if (clp)
+ memcpy(__entry->addr, &clp->cl_cb_conn.cb_addr,
+ sizeof(struct sockaddr_in6));
+ else
+ memset(__entry->addr, 0, sizeof(struct sockaddr_in6));
+ ),
+ TP_printk("addr=%pISpc client %08x:%08x stateid %08x:%08x",
+ __entry->addr, __entry->cl_boot, __entry->cl_id,
+ __entry->si_id, __entry->si_generation)
+);
+
+TRACE_EVENT(nfsd_cb_notify_lock,
+ TP_PROTO(
+ const struct nfs4_lockowner *lo,
+ const struct nfsd4_blocked_lock *nbl
+ ),
+ TP_ARGS(lo, nbl),
+ TP_STRUCT__entry(
+ __field(u32, cl_boot)
+ __field(u32, cl_id)
+ __field(u32, fh_hash)
+ __array(unsigned char, addr, sizeof(struct sockaddr_in6))
+ ),
+ TP_fast_assign(
+ const struct nfs4_client *clp = lo->lo_owner.so_client;
+
__entry->cl_boot = clp->cl_clientid.cl_boot;
__entry->cl_id = clp->cl_clientid.cl_id;
- __assign_str(procedure, procedure)
+ __entry->fh_hash = knfsd_fh_hash(&nbl->nbl_fh);
memcpy(__entry->addr, &clp->cl_cb_conn.cb_addr,
sizeof(struct sockaddr_in6));
),
- TP_printk("addr=%pISpc client %08x:%08x procedure=%s",
+ TP_printk("addr=%pISpc client %08x:%08x fh_hash=0x%08x",
__entry->addr, __entry->cl_boot, __entry->cl_id,
- __get_str(procedure))
+ __entry->fh_hash)
);
-TRACE_EVENT(nfsd_cb_done,
+TRACE_EVENT(nfsd_cb_offload,
TP_PROTO(
const struct nfs4_client *clp,
- int status
+ const stateid_t *stp,
+ const struct knfsd_fh *fh,
+ u64 count,
+ __be32 status
),
- TP_ARGS(clp, status),
+ TP_ARGS(clp, stp, fh, count, status),
TP_STRUCT__entry(
__field(u32, cl_boot)
__field(u32, cl_id)
+ __field(u32, si_id)
+ __field(u32, si_generation)
+ __field(u32, fh_hash)
__field(int, status)
+ __field(u64, count)
__array(unsigned char, addr, sizeof(struct sockaddr_in6))
),
TP_fast_assign(
- __entry->cl_boot = clp->cl_clientid.cl_boot;
- __entry->cl_id = clp->cl_clientid.cl_id;
- __entry->status = status;
+ __entry->cl_boot = stp->si_opaque.so_clid.cl_boot;
+ __entry->cl_id = stp->si_opaque.so_clid.cl_id;
+ __entry->si_id = stp->si_opaque.so_id;
+ __entry->si_generation = stp->si_generation;
+ __entry->fh_hash = knfsd_fh_hash(fh);
+ __entry->status = be32_to_cpu(status);
+ __entry->count = count;
memcpy(__entry->addr, &clp->cl_cb_conn.cb_addr,
sizeof(struct sockaddr_in6));
),
- TP_printk("addr=%pISpc client %08x:%08x status=%d",
+ TP_printk("addr=%pISpc client %08x:%08x stateid %08x:%08x fh_hash=0x%08x count=%llu status=%d",
__entry->addr, __entry->cl_boot, __entry->cl_id,
- __entry->status)
+ __entry->si_id, __entry->si_generation,
+ __entry->fh_hash, __entry->count, __entry->status)
);
#endif /* _NFSD_TRACE_H */
}
#ifdef CONFIG_NFSD_V3
+static int
+nfsd_filemap_write_and_wait_range(struct nfsd_file *nf, loff_t offset,
+ loff_t end)
+{
+ struct address_space *mapping = nf->nf_file->f_mapping;
+ int ret = filemap_fdatawrite_range(mapping, offset, end);
+
+ if (ret)
+ return ret;
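+ /* wait for writeback to finish, but leave any mapping error in
+ * place so the subsequent vfs_fsync_range() can still see it */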
+ filemap_fdatawait_range_keep_errors(mapping, offset, end);
+ return 0;
+}
+
/*
* Commit all pending writes to stable storage.
*
if (err)
goto out;
if (EX_ISSYNC(fhp->fh_export)) {
- int err2;
+ int err2 = nfsd_filemap_write_and_wait_range(nf, offset, end);
down_write(&nf->nf_rwsem);
- err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
+ if (!err2)
+ err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
switch (err2) {
case 0:
nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
host_err = vfs_symlink(&init_user_ns, d_inode(dentry), dnew, path);
err = nfserrno(host_err);
+ fh_unlock(fhp);
if (!err)
err = nfserrno(commit_metadata(fhp));
- fh_unlock(fhp);
fh_drop_write(fhp);
if (d_really_is_negative(dold))
goto out_dput;
host_err = vfs_link(dold, &init_user_ns, dirp, dnew, NULL);
+ fh_unlock(ffhp);
if (!host_err) {
err = nfserrno(commit_metadata(ffhp));
if (!err)
{
struct dentry *dentry, *rdentry;
struct inode *dirp;
+ struct inode *rinode;
__be32 err;
int host_err;
host_err = -ENOENT;
goto out_drop_write;
}
+ rinode = d_inode(rdentry);
+ ihold(rinode);
if (!type)
type = d_inode(rdentry)->i_mode & S_IFMT;
host_err = vfs_rmdir(&init_user_ns, dirp, rdentry);
}
+ fh_unlock(fhp);
if (!host_err)
host_err = commit_metadata(fhp);
dput(rdentry);
+ iput(rinode); /* truncate the inode here */
out_drop_write:
fh_drop_write(fhp);
struct vfsmount *ss_mnt;
struct nfs_fh c_fh;
nfs4_stateid stateid;
+ bool committed;
};
struct nfsd4_seek {
#define CLK_INFRA_PMICWRAP 11
#define CLK_INFRA_CLK_13M 12
#define CLK_INFRA_CA53SEL 13
-#define CLK_INFRA_CA57SEL 14 /* Deprecated. Don't use it. */
#define CLK_INFRA_CA72SEL 14
#define CLK_INFRA_NR_CLK 15
#define OSC_SB_OSLPI_SUPPORT 0x00000100
#define OSC_SB_CPC_DIVERSE_HIGH_SUPPORT 0x00001000
#define OSC_SB_GENERIC_INITIATOR_SUPPORT 0x00002000
-#define OSC_SB_PRM_SUPPORT 0x00020000
#define OSC_SB_NATIVE_USB4_SUPPORT 0x00040000
+#define OSC_SB_PRM_SUPPORT 0x00200000
extern bool osc_sb_apei_support_acked;
extern bool osc_pc_lpi_support_confirmed;
enum scale_freq_source {
SCALE_FREQ_SOURCE_CPUFREQ = 0,
SCALE_FREQ_SOURCE_ARCH,
+ SCALE_FREQ_SOURCE_CPPC,
};
struct scale_freq_data {
unsigned long rate, unsigned long *prate,
const struct clk_div_table *table, u8 width,
unsigned long flags, unsigned int val);
+int divider_determine_rate(struct clk_hw *hw, struct clk_rate_request *req,
+ const struct clk_div_table *table, u8 width,
+ unsigned long flags);
+int divider_ro_determine_rate(struct clk_hw *hw, struct clk_rate_request *req,
+ const struct clk_div_table *table, u8 width,
+ unsigned long flags, unsigned int val);
int divider_get_val(unsigned long rate, unsigned long parent_rate,
const struct clk_div_table *table, u8 width,
unsigned long flags);
unsigned long target_perf,
unsigned long capacity);
- /*
- * Caches and returns the lowest driver-supported frequency greater than
- * or equal to the target frequency, subject to any driver limitations.
- * Does not set the frequency. Only to be implemented for drivers with
- * target().
- */
- unsigned int (*resolve_freq)(struct cpufreq_policy *policy,
- unsigned int target_freq);
-
/*
* Only for drivers with target_index() and CPUFREQ_ASYNC_NOTIFICATION
* unset.
int (*online)(struct cpufreq_policy *policy);
int (*offline)(struct cpufreq_policy *policy);
int (*exit)(struct cpufreq_policy *policy);
- void (*stop_cpu)(struct cpufreq_policy *policy);
int (*suspend)(struct cpufreq_policy *policy);
int (*resume)(struct cpufreq_policy *policy);
}
/**
- * dma_resv_exclusive - return the object's exclusive fence
+ * dma_resv_excl_fence - return the object's exclusive fence
* @obj: the reservation object
*
* Returns the exclusive fence (if any). Caller must either hold the objects
int nlmsvc_encode_shareres(struct svc_rqst *, __be32 *);
int nlmsvc_decode_notify(struct svc_rqst *, __be32 *);
int nlmsvc_decode_reboot(struct svc_rqst *, __be32 *);
-/*
-int nlmclt_encode_testargs(struct rpc_rqst *, u32 *, struct nlm_args *);
-int nlmclt_encode_lockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
-int nlmclt_encode_cancargs(struct rpc_rqst *, u32 *, struct nlm_args *);
-int nlmclt_encode_unlockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
- */
#endif /* LOCKD_XDR_H */
int nlm4svc_encode_shareres(struct svc_rqst *, __be32 *);
int nlm4svc_decode_notify(struct svc_rqst *, __be32 *);
int nlm4svc_decode_reboot(struct svc_rqst *, __be32 *);
-/*
-int nlmclt_encode_testargs(struct rpc_rqst *, u32 *, struct nlm_args *);
-int nlmclt_encode_lockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
-int nlmclt_encode_cancargs(struct rpc_rqst *, u32 *, struct nlm_args *);
-int nlmclt_encode_unlockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
- */
+
extern const struct rpc_version nlm_version4;
#endif /* LOCKD_XDR4_H */
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2009 Samsung Electronics
- * Minkyu Kang <mk7.kang@samsung.com>
- */
-
-#ifndef __MAX17040_BATTERY_H_
-#define __MAX17040_BATTERY_H_
-
-struct max17040_platform_data {
- int (*battery_online)(void);
- int (*charger_online)(void);
- int (*charger_enable)(void);
-};
-
-#endif
extern void mv64340_irq_init(unsigned int base);
-/* Watchdog Platform Device, Driver Data */
-#define MV64x60_WDT_NAME "mv64x60_wdt"
-
-struct mv64x60_wdt_pdata {
- int timeout; /* watchdog expiry in seconds, default 10 */
- int bus_clk; /* bus clock in MHz, default 133 */
-};
-
#endif /* __ASM_MV643XX_H */
*/
#include <linux/nfs_fs.h>
+#include <linux/sunrpc/svc.h>
extern struct nfs_ssc_client_ops_tbl nfs_ssc_client_tbl;
if (nfs_ssc_client_tbl.ssc_nfs4_ops)
(*nfs_ssc_client_tbl.ssc_nfs4_ops->sco_close)(filep);
}
+
+struct nfsd4_ssc_umount_item {
+ struct list_head nsui_list;
+ bool nsui_busy;
+ /*
+ * nsui_refcnt is initialized to 2: one reference held by the list
+ * and one by the consumer. The entry is removed when the refcount
+ * drops to 1 and nsui_expire expires.
+ */
+ refcount_t nsui_refcnt;
+ unsigned long nsui_expire;
+ struct vfsmount *nsui_vfsmount;
+ char nsui_ipaddr[RPC_MAX_ADDRBUFLEN];
+};
#endif
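The comment above implies a specific refcount choreography between the consumer and the laundromat; a minimal illustrative sketch, assuming the NFSD hunks earlier in this series (locking elided):

	/* producer: one reference for the list, one for the consumer */
	refcount_set(&ni->nsui_refcnt, 2);

	/* consumer finished: drop its reference and arm the idle timeout */
	refcount_dec(&ni->nsui_refcnt);
	ni->nsui_expire = jiffies + msecs_to_jiffies(nfsd4_ssc_umount_timeout);

	/* laundromat: unmount only idle, expired entries */
	if (time_after(jiffies, ni->nsui_expire) &&
	    refcount_read(&ni->nsui_refcnt) == 1)
		mntput(ni->nsui_vfsmount);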
/*
struct pci_config_window {
struct resource res;
struct resource busr;
+ unsigned int bus_shift;
void *priv;
const struct pci_ecam_ops *ops;
union {
/* SPDX-License-Identifier: GPL-2.0+ */
-/**
+/*
* PCI Endpoint ConfigFS header file
*
* Copyright (C) 2017 Texas Instruments
/* SPDX-License-Identifier: GPL-2.0 */
-/**
+/*
* PCI Endpoint *Controller* (EPC) header file
*
* Copyright (C) 2017 Texas Instruments
* @map_msi_irq: ops to map physical address to MSI address and return MSI data
* @start: ops to start the PCI link
* @stop: ops to stop the PCI link
+ * @get_features: ops to get the features supported by the EPC
* @owner: the module owner containing the ops
*/
struct pci_epc_ops {
/**
* struct pci_epc_features - features supported by a EPC device per function
* @linkup_notifier: indicate if the EPC device can notify EPF driver on link up
+ * @core_init_notifier: indicate if the EPC core can notify about its
+ * availability for initialization
* @msi_capable: indicate if the endpoint function has MSI capability
* @msix_capable: indicate if the endpoint function has MSI-X capability
* @reserved_bar: bitmap to indicate reserved BAR unavailable to function driver
/* SPDX-License-Identifier: GPL-2.0 */
-/**
+/*
* PCI Endpoint *Function* (EPF) header file
*
* Copyright (C) 2017 Texas Instruments
* @phys_addr: physical address that should be mapped to the BAR
* @addr: virtual address corresponding to the @phys_addr
* @size: the size of the address space present in BAR
+ * @barno: BAR number
+ * @flags: flags that are set for the BAR
*/
struct pci_epf_bar {
dma_addr_t phys_addr;
* @header: represents standard configuration header
* @bar: represents the BAR of EPF device
* @msi_interrupts: number of MSI interrupts required by this function
+ * @msix_interrupts: number of MSI-X interrupts required by this function
* @func_no: unique function number within this endpoint device
* @epc: the EPC device to which this EPF device is bound
* @driver: the EPF driver to which this EPF device is bound
u16 pasid_features;
#endif
#ifdef CONFIG_PCI_P2PDMA
- struct pci_p2pdma *p2pdma;
+ struct pci_p2pdma __rcu *p2pdma;
#endif
u16 acs_cap; /* ACS Capability offset */
phys_addr_t rom; /* Physical address if not from BAR */
/**
* struct hotplug_slot - used to register a physical slot with the hotplug pci core
* @ops: pointer to the &struct hotplug_slot_ops to be used for this slot
+ * @slot_list: internal list used to track hotplug PCI slots
+ * @pci_slot: represents a physical slot
* @owner: The module owner of this structure
* @mod_name: The module name (KBUILD_MODNAME) of this structure
*/
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * PM2301 charger driver.
- *
- * Copyright (C) 2012 ST Ericsson Corporation
- *
- * Contact: Olivier LAUNAY (olivier.launay@stericsson.com
- */
-
-#ifndef __LINUX_PM2301_H
-#define __LINUX_PM2301_H
-
-/**
- * struct pm2xxx_bm_charger_parameters - Charger specific parameters
- * @ac_volt_max: maximum allowed AC charger voltage in mV
- * @ac_curr_max: maximum allowed AC charger current in mA
- */
-struct pm2xxx_bm_charger_parameters {
- int ac_volt_max;
- int ac_curr_max;
-};
-
-/**
- * struct pm2xxx_bm_data - pm2xxx battery management data
- * @enable_overshoot flag to enable VBAT overshoot control
- * @chg_params charger parameters
- */
-struct pm2xxx_bm_data {
- bool enable_overshoot;
- const struct pm2xxx_bm_charger_parameters *chg_params;
-};
-
-struct pm2xxx_charger_platform_data {
- char **supplied_to;
- size_t num_supplicants;
- int i2c_bus;
- const char *label;
- int gpio_irq_number;
- unsigned int lpn_gpio;
- int irq_type;
-};
-
-struct pm2xxx_platform_data {
- struct pm2xxx_charger_platform_data *wall_charger;
- struct pm2xxx_bm_data *battery;
-};
-
-#endif /* __LINUX_PM2301_H */
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) ST-Ericsson 2013
- * Author: Hongbo Zhang <hongbo.zhang@linaro.com>
- */
-
-#ifndef PWR_AB8500_H
-#define PWR_AB8500_H
-
-extern const struct abx500_res_to_temp ab8500_temp_tbl_a_thermistor[];
-extern const int ab8500_temp_tbl_a_size;
-
-extern const struct abx500_res_to_temp ab8500_temp_tbl_b_thermistor[];
-extern const int ab8500_temp_tbl_b_size;
-
-#endif /* PWR_AB8500_H */
* @duty_cycle: PWM duty cycle (in nanoseconds)
* @polarity: PWM polarity
* @enabled: PWM enabled status
+ * @usage_power: If set, the PWM driver is only required to maintain the power
+ * output but has more freedom regarding signal form.
+ * If supported, the signal can be optimized, for example to
+ * improve EMI by phase shifting individual channels.
*/
struct pwm_state {
u64 period;
u64 duty_cycle;
enum pwm_polarity polarity;
bool enabled;
+ bool usage_power;
};
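A consumer that cares only about the delivered power can opt in through the ordinary state API; a minimal sketch, assuming pwm is a valid struct pwm_device pointer obtained elsewhere:

	struct pwm_state state;

	pwm_get_state(pwm, &state);	/* start from the current state */
	state.usage_power = true;	/* allow the driver to reshape the signal */
	pwm_apply_state(pwm, &state);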
/**
state->period = args.period;
state->polarity = args.polarity;
state->duty_cycle = 0;
+ state->usage_power = false;
}
/**
int pwmchip_add(struct pwm_chip *chip);
int pwmchip_remove(struct pwm_chip *chip);
+
+int devm_pwmchip_add(struct device *dev, struct pwm_chip *chip);
+
struct pwm_device *pwm_request_from_chip(struct pwm_chip *chip,
unsigned int index,
const char *label);
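With the device-managed variant, a chip driver can drop its explicit pwmchip_remove() call on the error and detach paths; a hedged probe sketch in which all foo_* names are hypothetical:

	static int foo_pwm_probe(struct platform_device *pdev)
	{
		struct foo_pwm *foo;

		foo = devm_kzalloc(&pdev->dev, sizeof(*foo), GFP_KERNEL);
		if (!foo)
			return -ENOMEM;

		foo->chip.dev = &pdev->dev;
		foo->chip.ops = &foo_pwm_ops;	/* hypothetical pwm_ops */
		foo->chip.npwm = 1;

		/* unregistered automatically when the driver detaches */
		return devm_pwmchip_add(&pdev->dev, &foo->chip);
	}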
struct pwm_device *devm_fwnode_pwm_get(struct device *dev,
struct fwnode_handle *fwnode,
const char *con_id);
-void devm_pwm_put(struct device *dev, struct pwm_device *pwm);
#else
static inline struct pwm_device *pwm_request(int pwm_id, const char *label)
{
{
return ERR_PTR(-ENODEV);
}
-
-static inline void devm_pwm_put(struct device *dev, struct pwm_device *pwm)
-{
-}
#endif
static inline void pwm_apply_args(struct pwm_device *pwm)
state.enabled = false;
state.polarity = pwm->args.polarity;
state.period = pwm->args.period;
+ state.usage_power = false;
pwm_apply_state(pwm, &state);
}
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-/**
+/*
* pcitest.h - PCI test uapi defines
*
* Copyright (C) 2017 Texas Instruments
{
return __sched_setscheduler(p, attr, false, true);
}
+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
/**
* sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
rcu_read_lock();
ucounts = task_ucounts(t);
sigpending = inc_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1);
- if (sigpending == 1)
- ucounts = get_ucounts(ucounts);
+ switch (sigpending) {
+ case 1:
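+ /* first queued signal for this ucounts: pin it; on failure,
+ * fall through and undo the charge */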
+ if (likely(get_ucounts(ucounts)))
+ break;
+ fallthrough;
+ case LONG_MAX:
+ /*
+ * we need to decrease the ucount in the userns tree on any
+ * failure to avoid counts leaking.
+ */
+ dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1);
+ rcu_read_unlock();
+ return NULL;
+ }
rcu_read_unlock();
- if (override_rlimit || (sigpending < LONG_MAX && sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
+ if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
} else {
print_dropped_signal(sig);
}
if (unlikely(q == NULL)) {
- if (ucounts && dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1))
+ if (dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1))
put_ucounts(ucounts);
} else {
INIT_LIST_HEAD(&q->list);
long long ctxh;
struct gss_api_mech *gm = NULL;
time64_t expiry;
- int status = -EINVAL;
+ int status;
memset(&rsci, 0, sizeof(rsci));
/* context handle */
* @iov: kvec to write
*
* Returns:
- * On succes, returns zero
+ * On success, returns zero
* %-E2BIG if the client-provided Write chunk is too small
* %-ENOMEM if a resource has been exhausted
* %-EIO if an rdma-rw error occurred
* @length: number of bytes to write
*
* Returns:
- * On succes, returns zero
+ * On success, returns zero
* %-E2BIG if the client-provided Write chunk is too small
* %-ENOMEM if a resource has been exhausted
* %-EIO if an rdma-rw error occurred
* @data: pointer to write arguments
*
* Returns:
- * On succes, returns zero
+ * On success, returns zero
* %-E2BIG if the client-provided Write chunk is too small
* %-ENOMEM if a resource has been exhausted
* %-EIO if an rdma-rw error occurred