Merge tag 'drm-misc-next-2019-05-24' of git://anongit.freedesktop.org/drm/drm-misc...
author    Dave Airlie <airlied@redhat.com>
          Mon, 27 May 2019 22:25:46 +0000 (08:25 +1000)
committer Dave Airlie <airlied@redhat.com>
          Mon, 27 May 2019 22:59:11 +0000 (08:59 +1000)
drm-misc-next for v5.3, try #2:

UAPI Changes:
- Add HDR source metadata property.
- Make drm.h compile on GNU/kFreeBSD by including stdint.h
- Clarify how the userspace reviewer has to review new kernel UAPI.
- Clarify that for using new UAPI, merging to drm-next or drm-misc-next should be enough.

Cross-subsystem Changes:
- video/hdmi: Add unpack function for DRM infoframes.
- Device tree bindings:
  * Updating a property for Mali Midgard GPUs
  * Updating a property for STM32 DSI panel
  * Adding support for FriendlyELEC HD702E 800x1280 panel
  * Adding support for Evervision VGG804821 800x480 5.0" WVGA TFT panel
  * Adding support for the EDT ET035012DM6 3.5" 320x240 QVGA 24-bit RGB TFT.
  * Adding support for Three Five displays TFC S9700RTWV43TR-01B 800x480 panel
    with resistive touch found on TI's AM335X-EVM.
  * Adding support for EDT ETM0430G0DH6 480x272 panel.
- Add OSD101T2587-53TS driver with DT bindings.
- Add Samsung S6E63M0 panel driver with DT bindings.
- Add VXT VL050-8048NT-C01 800x480 panel with DT bindings.
- Dma-buf:
  - Make mmap callback actually optional.
  - Documentation updates.
  - Fix debugfs refcount imbalance.
  - Remove unused sync_dump function.
- Fix device tree bindings in drm-misc-next after a botched merge.

Core Changes:
- Add support for HDR infoframes and related EDID parsing.
- Remove prime sg_table caching, now done inside dma-buf.
- Add shiny new drm_gem_vram helpers for simple VRAM drivers;
  with some fixes to the new API on top.
- Small fix to job cleanup without timeout handler.
- Documentation fixes to drm_fourcc.
- Replace lookups of drm_format with struct drm_format_info;
  remove functions that become obsolete by this conversion.
- Remove double include in bridge/panel.c and some drivers.
- Remove drmP.h include from drm/edid and drm/dp.
- Fix null pointer deref in drm_fb_helper_hotplug_event().
- Remove most members from drm_fb_helper_crtc, only mode_set is kept.
- Remove race of fb helpers with userspace; only restore mode
  when userspace is not master.
- Move legacy setup from drm_file.c to drm_legacy_misc.c
- Rework scheduler job destruction.
- drm_bus was removed; drop its entry from the TODO.
- Add __drm_atomic_helper_crtc_reset() to subclass crtc_state,
  and convert some drivers to use it (conversion is not complete yet).
- Bump vblank timeout wait to 100 ms for atomic.
- Docbook fix for drm_hdmi_infoframe_set_hdr_metadata.

Driver Changes:
- sun4i: Use DRM_GEM_CMA_VMAP_DRIVER_OPS instead of defining it manually.
- v3d: Small cleanups, adding support for compute shaders,
       reservation/synchronization fixes and job management refactoring,
       fixes to MMU and debugfs.
- lima: Fix null pointer in irq handler on startup, set default timeout for scheduled jobs.
- stm/ltdc: Assorted fixes and adding FB modifier support.
- amdgpu: Avoid hw reset if guilty job was already signaled.
- virtio: Add seqno to fences, add trace events, use correct flags for fence allocation.
- Convert AST, bochs, mgag200, vboxvideo, hisilicon to the new drm_gem_vram API.
- sun6i_mipi_dsi: Support DSI GENERIC_SHORT_WRITE_2 transfers.
- bochs: Small fixes to use PTR_RET_OR_ZERO and to driver unload.
- gma500: Header fixes.
- cirrus: Remove unused files.
- mediatek: Fix compiler warning after merging the HDR series.
- vc4: Rework binner bo handling.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/052875a5-27ba-3832-60c2-193d950afdff@linux.intel.com
220 files changed:
Documentation/devicetree/bindings/display/panel/edt,et-series.txt
Documentation/devicetree/bindings/display/panel/evervision,vgg804821.txt [new file with mode: 0644]
Documentation/devicetree/bindings/display/panel/friendlyarm,hd702e.txt [new file with mode: 0644]
Documentation/devicetree/bindings/display/panel/osddisplays,osd101t2045-53ts.txt [new file with mode: 0644]
Documentation/devicetree/bindings/display/panel/osddisplays,osd101t2587-53ts.txt [new file with mode: 0644]
Documentation/devicetree/bindings/display/panel/samsung,s6e63m0.txt [new file with mode: 0644]
Documentation/devicetree/bindings/display/panel/tfc,s9700rtwv43tr-01b.txt [new file with mode: 0644]
Documentation/devicetree/bindings/display/panel/vl050_8048nt_c01.txt [new file with mode: 0644]
Documentation/devicetree/bindings/display/st,stm32-ltdc.txt
Documentation/devicetree/bindings/gpu/arm,mali-midgard.txt
Documentation/devicetree/bindings/vendor-prefixes.yaml
Documentation/gpu/drm-mm.rst
Documentation/gpu/drm-uapi.rst
Documentation/gpu/todo.rst
MAINTAINERS
drivers/dma-buf/dma-buf.c
drivers/dma-buf/sync_debug.c
drivers/dma-buf/sync_debug.h
drivers/gpu/drm/Kconfig
drivers/gpu/drm/Makefile
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
drivers/gpu/drm/arm/malidp_crtc.c
drivers/gpu/drm/arm/malidp_hw.c
drivers/gpu/drm/arm/malidp_mw.c
drivers/gpu/drm/arm/malidp_planes.c
drivers/gpu/drm/armada/armada_fb.c
drivers/gpu/drm/ast/Kconfig
drivers/gpu/drm/ast/ast_drv.c
drivers/gpu/drm/ast/ast_drv.h
drivers/gpu/drm/ast/ast_fb.c
drivers/gpu/drm/ast/ast_main.c
drivers/gpu/drm/ast/ast_mode.c
drivers/gpu/drm/ast/ast_ttm.c
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
drivers/gpu/drm/bochs/Kconfig
drivers/gpu/drm/bochs/bochs.h
drivers/gpu/drm/bochs/bochs_drv.c
drivers/gpu/drm/bochs/bochs_kms.c
drivers/gpu/drm/bochs/bochs_mm.c
drivers/gpu/drm/bridge/panel.c
drivers/gpu/drm/cirrus/cirrus_drv.h [deleted file]
drivers/gpu/drm/cirrus/cirrus_ttm.c [deleted file]
drivers/gpu/drm/drm_atomic_helper.c
drivers/gpu/drm/drm_atomic_state_helper.c
drivers/gpu/drm/drm_atomic_uapi.c
drivers/gpu/drm/drm_auth.c
drivers/gpu/drm/drm_client.c
drivers/gpu/drm/drm_connector.c
drivers/gpu/drm/drm_dp_aux_dev.c
drivers/gpu/drm/drm_dp_dual_mode_helper.c
drivers/gpu/drm/drm_dp_helper.c
drivers/gpu/drm/drm_dp_mst_topology.c
drivers/gpu/drm/drm_edid.c
drivers/gpu/drm/drm_edid_load.c
drivers/gpu/drm/drm_fb_helper.c
drivers/gpu/drm/drm_file.c
drivers/gpu/drm/drm_format_helper.c
drivers/gpu/drm/drm_fourcc.c
drivers/gpu/drm/drm_gem_vram_helper.c [new file with mode: 0644]
drivers/gpu/drm/drm_internal.h
drivers/gpu/drm/drm_legacy.h
drivers/gpu/drm/drm_legacy_misc.c
drivers/gpu/drm/drm_prime.c
drivers/gpu/drm/drm_vram_helper_common.c [new file with mode: 0644]
drivers/gpu/drm/drm_vram_mm_helper.c [new file with mode: 0644]
drivers/gpu/drm/etnaviv/etnaviv_dump.c
drivers/gpu/drm/etnaviv/etnaviv_sched.c
drivers/gpu/drm/gma500/accel_2d.c
drivers/gpu/drm/gma500/blitter.h
drivers/gpu/drm/gma500/cdv_device.c
drivers/gpu/drm/gma500/cdv_device.h
drivers/gpu/drm/gma500/cdv_intel_crt.c
drivers/gpu/drm/gma500/cdv_intel_display.c
drivers/gpu/drm/gma500/cdv_intel_dp.c
drivers/gpu/drm/gma500/cdv_intel_hdmi.c
drivers/gpu/drm/gma500/cdv_intel_lvds.c
drivers/gpu/drm/gma500/framebuffer.c
drivers/gpu/drm/gma500/framebuffer.h
drivers/gpu/drm/gma500/gem.c
drivers/gpu/drm/gma500/gma_device.c
drivers/gpu/drm/gma500/gma_device.h
drivers/gpu/drm/gma500/gma_display.c
drivers/gpu/drm/gma500/gma_display.h
drivers/gpu/drm/gma500/gtt.c
drivers/gpu/drm/gma500/gtt.h
drivers/gpu/drm/gma500/intel_bios.c
drivers/gpu/drm/gma500/intel_bios.h
drivers/gpu/drm/gma500/intel_gmbus.c
drivers/gpu/drm/gma500/intel_i2c.c
drivers/gpu/drm/gma500/mdfld_device.c
drivers/gpu/drm/gma500/mdfld_dsi_dpi.c
drivers/gpu/drm/gma500/mdfld_dsi_output.c
drivers/gpu/drm/gma500/mdfld_dsi_output.h
drivers/gpu/drm/gma500/mdfld_dsi_pkg_sender.c
drivers/gpu/drm/gma500/mdfld_intel_display.c
drivers/gpu/drm/gma500/mdfld_tmd_vid.c
drivers/gpu/drm/gma500/mid_bios.c
drivers/gpu/drm/gma500/mid_bios.h
drivers/gpu/drm/gma500/mmu.c
drivers/gpu/drm/gma500/oaktrail.h
drivers/gpu/drm/gma500/oaktrail_crtc.c
drivers/gpu/drm/gma500/oaktrail_device.c
drivers/gpu/drm/gma500/oaktrail_hdmi.c
drivers/gpu/drm/gma500/oaktrail_lvds.c
drivers/gpu/drm/gma500/oaktrail_lvds_i2c.c
drivers/gpu/drm/gma500/power.h
drivers/gpu/drm/gma500/psb_device.c
drivers/gpu/drm/gma500/psb_drv.c
drivers/gpu/drm/gma500/psb_drv.h
drivers/gpu/drm/gma500/psb_intel_display.c
drivers/gpu/drm/gma500/psb_intel_lvds.c
drivers/gpu/drm/gma500/psb_intel_modes.c
drivers/gpu/drm/gma500/psb_intel_sdvo.c
drivers/gpu/drm/gma500/psb_irq.c
drivers/gpu/drm/gma500/psb_irq.h
drivers/gpu/drm/gma500/psb_lid.c
drivers/gpu/drm/gma500/tc35876x-dsi-lvds.c
drivers/gpu/drm/hisilicon/hibmc/Kconfig
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
drivers/gpu/drm/i915/intel_display.c
drivers/gpu/drm/i915/intel_sprite.c
drivers/gpu/drm/imx/ipuv3-plane.c
drivers/gpu/drm/lima/lima_drv.c
drivers/gpu/drm/lima/lima_pp.c
drivers/gpu/drm/lima/lima_sched.c
drivers/gpu/drm/mediatek/mtk_drm_fb.c
drivers/gpu/drm/mediatek/mtk_hdmi.c
drivers/gpu/drm/meson/meson_overlay.c
drivers/gpu/drm/mgag200/Kconfig
drivers/gpu/drm/mgag200/mgag200_cursor.c
drivers/gpu/drm/mgag200/mgag200_drv.c
drivers/gpu/drm/mgag200/mgag200_drv.h
drivers/gpu/drm/mgag200/mgag200_fb.c
drivers/gpu/drm/mgag200/mgag200_main.c
drivers/gpu/drm/mgag200/mgag200_mode.c
drivers/gpu/drm/mgag200/mgag200_ttm.c
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
drivers/gpu/drm/msm/msm_fb.c
drivers/gpu/drm/nouveau/dispnv50/head.c
drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv04.c
drivers/gpu/drm/omapdrm/omap_fb.c
drivers/gpu/drm/panel/Kconfig
drivers/gpu/drm/panel/Makefile
drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c [new file with mode: 0644]
drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
drivers/gpu/drm/panel/panel-samsung-s6e63m0.c [new file with mode: 0644]
drivers/gpu/drm/panel/panel-simple.c
drivers/gpu/drm/panfrost/panfrost_device.c
drivers/gpu/drm/panfrost/panfrost_device.h
drivers/gpu/drm/panfrost/panfrost_job.c
drivers/gpu/drm/radeon/radeon_fb.c
drivers/gpu/drm/rockchip/rockchip_drm_fb.c
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
drivers/gpu/drm/scheduler/sched_main.c
drivers/gpu/drm/stm/dw_mipi_dsi-stm.c
drivers/gpu/drm/stm/ltdc.c
drivers/gpu/drm/sun4i/sun4i_drv.c
drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
drivers/gpu/drm/tegra/dc.c
drivers/gpu/drm/tegra/fb.c
drivers/gpu/drm/v3d/v3d_debugfs.c
drivers/gpu/drm/v3d/v3d_drv.c
drivers/gpu/drm/v3d/v3d_drv.h
drivers/gpu/drm/v3d/v3d_fence.c
drivers/gpu/drm/v3d/v3d_gem.c
drivers/gpu/drm/v3d/v3d_irq.c
drivers/gpu/drm/v3d/v3d_mmu.c
drivers/gpu/drm/v3d/v3d_regs.h
drivers/gpu/drm/v3d/v3d_sched.c
drivers/gpu/drm/v3d/v3d_trace.h
drivers/gpu/drm/vboxvideo/Kconfig
drivers/gpu/drm/vboxvideo/vbox_drv.c
drivers/gpu/drm/vboxvideo/vbox_drv.h
drivers/gpu/drm/vboxvideo/vbox_fb.c
drivers/gpu/drm/vboxvideo/vbox_main.c
drivers/gpu/drm/vboxvideo/vbox_mode.c
drivers/gpu/drm/vboxvideo/vbox_ttm.c
drivers/gpu/drm/vc4/vc4_bo.c
drivers/gpu/drm/vc4/vc4_drv.c
drivers/gpu/drm/vc4/vc4_drv.h
drivers/gpu/drm/vc4/vc4_gem.c
drivers/gpu/drm/vc4/vc4_irq.c
drivers/gpu/drm/vc4/vc4_plane.c
drivers/gpu/drm/vc4/vc4_v3d.c
drivers/gpu/drm/virtio/Makefile
drivers/gpu/drm/virtio/virtgpu_drv.h
drivers/gpu/drm/virtio/virtgpu_fence.c
drivers/gpu/drm/virtio/virtgpu_ioctl.c
drivers/gpu/drm/virtio/virtgpu_trace.h [new file with mode: 0644]
drivers/gpu/drm/virtio/virtgpu_trace_points.c [new file with mode: 0644]
drivers/gpu/drm/virtio/virtgpu_vq.c
drivers/gpu/drm/vkms/vkms_crtc.c
drivers/gpu/drm/zte/zx_plane.c
drivers/video/hdmi.c
include/drm/drm_atomic_state_helper.h
include/drm/drm_connector.h
include/drm/drm_device.h
include/drm/drm_edid.h
include/drm/drm_fb_helper.h
include/drm/drm_fourcc.h
include/drm/drm_gem_vram_helper.h [new file with mode: 0644]
include/drm/drm_mode_config.h
include/drm/drm_vram_mm_helper.h [new file with mode: 0644]
include/drm/gma_drm.h [deleted file]
include/drm/gpu_scheduler.h
include/linux/dma-buf.h
include/linux/hdmi.h
include/uapi/drm/drm.h
include/uapi/drm/drm_mode.h
include/uapi/drm/v3d_drm.h

index f56b99e..be86843 100644 (file)
@@ -6,6 +6,22 @@ Display bindings for EDT Display Technology Corp. Displays which are
 compatible with the simple-panel binding, which is specified in
 simple-panel.txt
 
+3,5" QVGA TFT Panels
+--------------------
++-----------------+---------------------+-------------------------------------+
+| Identifier      | compatible          | description                         |
++=================+=====================+=====================================+
+| ET035012DM6     | edt,et035012dm6     | 3.5" QVGA TFT LCD panel             |
++-----------------+---------------------+-------------------------------------+
+
+4,3" WQVGA TFT Panels
+---------------------
+
++-----------------+---------------------+-------------------------------------+
+| Identifier      | compatible          | description                         |
++=================+=====================+=====================================+
+| ETM0430G0DH6    | edt,etm0430g0dh6    | 480x272 TFT Display                 |
++-----------------+---------------------+-------------------------------------+
 
 5,7" WVGA TFT Panels
 --------------------
diff --git a/Documentation/devicetree/bindings/display/panel/evervision,vgg804821.txt b/Documentation/devicetree/bindings/display/panel/evervision,vgg804821.txt
new file mode 100644 (file)
index 0000000..82d22e1
--- /dev/null
@@ -0,0 +1,12 @@
+Evervision Electronics Co. Ltd. VGG804821 5.0" WVGA TFT LCD Panel
+
+Required properties:
+- compatible: should be "evervision,vgg804821"
+- power-supply: See simple-panel.txt
+
+Optional properties:
+- backlight: See simple-panel.txt
+- enable-gpios: See simple-panel.txt
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/display/panel/friendlyarm,hd702e.txt b/Documentation/devicetree/bindings/display/panel/friendlyarm,hd702e.txt
new file mode 100644 (file)
index 0000000..6c9156f
--- /dev/null
@@ -0,0 +1,32 @@
+FriendlyELEC HD702E 800x1280 LCD panel
+
+The HD702E is an eDP LCD panel with 800x1280 resolution developed by
+FriendlyELEC. It has a built-in Goodix GT9271 capacitive touchscreen
+with backlight adjustable via PWM.
+
+Required properties:
+- compatible: should be "friendlyarm,hd702e"
+- power-supply: regulator to provide the supply voltage
+
+Optional properties:
+- backlight: phandle of the backlight device attached to the panel
+
+Optional nodes:
+- Video port for LCD panel input.
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
+
+Example:
+
+       panel {
+               compatible = "friendlyarm,hd702e", "simple-panel";
+               backlight = <&backlight>;
+               power-supply = <&vcc3v3_sys>;
+
+               port {
+                       panel_in_edp: endpoint {
+                               remote-endpoint = <&edp_out_panel>;
+                       };
+               };
+       };
diff --git a/Documentation/devicetree/bindings/display/panel/osddisplays,osd101t2045-53ts.txt b/Documentation/devicetree/bindings/display/panel/osddisplays,osd101t2045-53ts.txt
new file mode 100644 (file)
index 0000000..85c0b2c
--- /dev/null
@@ -0,0 +1,11 @@
+One Stop Displays OSD101T2045-53TS 10.1" 1920x1200 panel
+
+Required properties:
+- compatible: should be "osddisplays,osd101t2045-53ts"
+- power-supply: as specified in the base binding
+
+Optional properties:
+- backlight: as specified in the base binding
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/display/panel/osddisplays,osd101t2587-53ts.txt b/Documentation/devicetree/bindings/display/panel/osddisplays,osd101t2587-53ts.txt
new file mode 100644 (file)
index 0000000..9d88e96
--- /dev/null
@@ -0,0 +1,14 @@
+One Stop Displays OSD101T2587-53TS 10.1" 1920x1200 panel
+
+The panel is similar to the OSD101T2045-53TS, but it needs an additional
+MIPI_DSI_TURN_ON_PERIPHERAL message from the host.
+
+Required properties:
+- compatible: should be "osddisplays,osd101t2587-53ts"
+- power-supply: as specified in the base binding
+
+Optional properties:
+- backlight: as specified in the base binding
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/display/panel/samsung,s6e63m0.txt b/Documentation/devicetree/bindings/display/panel/samsung,s6e63m0.txt
new file mode 100644 (file)
index 0000000..9fb9ebe
--- /dev/null
@@ -0,0 +1,33 @@
+Samsung s6e63m0 AMOLED LCD panel
+
+Required properties:
+  - compatible: "samsung,s6e63m0"
+  - reset-gpios: GPIO spec for reset pin
+  - vdd3-supply: VDD regulator
+  - vci-supply: VCI regulator
+
+The panel must obey the rules for an SPI slave device specified in
+document [1].
+
+The device node can contain one 'port' child node with one child
+'endpoint' node, according to the bindings defined in [2]. This
+node should describe the panel's video bus.
+
+[1]: Documentation/devicetree/bindings/spi/spi-bus.txt
+[2]: Documentation/devicetree/bindings/media/video-interfaces.txt
+
+Example:
+
+               s6e63m0: display@0 {
+                       compatible = "samsung,s6e63m0";
+                       reg = <0>;
+                       reset-gpios = <&mp05 5 1>;
+                       vdd3-supply = <&ldo12_reg>;
+                       vci-supply = <&ldo11_reg>;
+                       spi-max-frequency = <1200000>;
+
+                       port {
+                               lcd_ep: endpoint {
+                                       remote-endpoint = <&fimd_ep>;
+                               };
+                       };
+               };
diff --git a/Documentation/devicetree/bindings/display/panel/tfc,s9700rtwv43tr-01b.txt b/Documentation/devicetree/bindings/display/panel/tfc,s9700rtwv43tr-01b.txt
new file mode 100644 (file)
index 0000000..dfb572f
--- /dev/null
@@ -0,0 +1,15 @@
+TFC S9700RTWV43TR-01B 7" Three Five Corp 800x480 LCD panel with
+resistive touch
+
+The panel is found on TI AM335x-evm.
+
+Required properties:
+- compatible: should be "tfc,s9700rtwv43tr-01b"
+- power-supply: See panel-common.txt
+
+Optional properties:
+- enable-gpios: GPIO pin to enable or disable the panel, if there is one
+- backlight: phandle of the backlight device attached to the panel
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/display/panel/vl050_8048nt_c01.txt b/Documentation/devicetree/bindings/display/panel/vl050_8048nt_c01.txt
new file mode 100644 (file)
index 0000000..b42bf06
--- /dev/null
@@ -0,0 +1,12 @@
+VXT VL050-8048NT-C01 800x480 color TFT LCD panel
+
+Required properties:
+- compatible: should be "vxt,vl050-8048nt-c01"
+- power-supply: as specified in the base binding
+
+Optional properties:
+- backlight: as specified in the base binding
+- enable-gpios: as specified in the base binding
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
index 3eb1b48..60c54da 100644 (file)
@@ -40,6 +40,8 @@ Mandatory nodes specific to STM32 DSI:
 - panel or bridge node: A node containing the panel or bridge description as
   documented in [6].
   - port: panel or bridge port node, connected to the DSI output port (port@1).
+Optional properties:
+- phy-dsi-supply: phandle of the regulator that provides the supply voltage.
 
 Note: You can find more documentation in the following references
 [1] Documentation/devicetree/bindings/clock/clock-bindings.txt
@@ -101,6 +103,7 @@ Example 2: DSI panel
                        clock-names = "pclk", "ref";
                        resets = <&rcc STM32F4_APB2_RESET(DSI)>;
                        reset-names = "apb";
+                       phy-dsi-supply = <&reg18>;
 
                        ports {
                                #address-cells = <1>;
index 1b1a741..e5ad3b2 100644 (file)
@@ -15,6 +15,7 @@ Required properties:
     + "arm,mali-t860"
     + "arm,mali-t880"
   * which must be preceded by one of the following vendor specifics:
+    + "allwinner,sun50i-h6-mali"
     + "amlogic,meson-gxm-mali"
     + "rockchip,rk3288-mali"
     + "rockchip,rk3399-mali"
@@ -31,21 +32,36 @@ Optional properties:
 
 - clocks : Phandle to clock for the Mali Midgard device.
 
+- clock-names : The names of the clocks listed in the clocks property,
+  required when multiple clocks are present.
+    * core: clock driving the GPU itself (When only one clock is present,
+      assume it's this clock.)
+    * bus: bus clock for the GPU
+
 - mali-supply : Phandle to regulator for the Mali device. Refer to
   Documentation/devicetree/bindings/regulator/regulator.txt for details.
 
 - operating-points-v2 : Refer to Documentation/devicetree/bindings/opp/opp.txt
   for details.
 
+- #cooling-cells: Refer to Documentation/devicetree/bindings/thermal/thermal.txt
+  for details.
+
 - resets : Phandle of the GPU reset line.
 
 Vendor-specific bindings
 ------------------------
 
 The Mali GPU is integrated very differently from one SoC to
-another. In order to accomodate those differences, you have the option
+another. In order to accommodate those differences, you have the option
 to specify one more vendor-specific compatible, among:
 
+- "allwinner,sun50i-h6-mali"
+  Required properties:
+  - clocks : phandles to core and bus clocks
+  - clock-names : must contain "core" and "bus"
+  - resets: phandle to GPU reset line
+
 - "amlogic,meson-gxm-mali"
   Required properties:
   - resets : Should contain phandles of :
@@ -65,6 +81,7 @@ gpu@ffa30000 {
        mali-supply = <&vdd_gpu>;
        operating-points-v2 = <&gpu_opp_table>;
        power-domains = <&power RK3288_PD_GPU>;
+       #cooling-cells = <2>;
 };
 
 gpu_opp_table: opp_table0 {
index 33a65a4..f0bcff0 100644 (file)
@@ -287,6 +287,8 @@ patternProperties:
     description: Everest Semiconductor Co. Ltd.
   "^everspin,.*":
     description: Everspin Technologies, Inc.
+  "^evervision,.*":
+    description: Evervision Electronics Co. Ltd.
   "^exar,.*":
     description: Exar Corporation
   "^excito,.*":
@@ -849,6 +851,8 @@ patternProperties:
     description: Shenzhen Techstar Electronics Co., Ltd.
   "^terasic,.*":
     description: Terasic Inc.
+  "^tfc,.*":
+    description: Three Five Corp
   "^thine,.*":
     description: THine Electronics, Inc.
   "^ti,.*":
@@ -923,6 +927,8 @@ patternProperties:
     description: Voipac Technologies s.r.o.
   "^vot,.*":
     description: Vision Optical Technology Co., Ltd.
+  "^vxt,.*":
+    description: VXT Ltd
   "^wd,.*":
     description: Western Digital Corp.
   "^wetek,.*":
index 54a696d..c8ebd4f 100644 (file)
@@ -79,7 +79,6 @@ count for the TTM, which will call your initialization function.
 
 See the radeon_ttm.c file for an example of usage.
 
-
 The Graphics Execution Manager (GEM)
 ====================================
 
@@ -380,6 +379,39 @@ GEM CMA Helper Functions Reference
 .. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
    :export:
 
+VRAM Helper Function Reference
+==============================
+
+.. kernel-doc:: drivers/gpu/drm/drm_vram_helper_common.c
+   :doc: overview
+
+.. kernel-doc:: include/drm/drm_gem_vram_helper.h
+   :internal:
+
+GEM VRAM Helper Functions Reference
+-----------------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
+   :doc: overview
+
+.. kernel-doc:: include/drm/drm_gem_vram_helper.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
+   :export:
+
+VRAM MM Helper Functions Reference
+----------------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c
+   :doc: overview
+
+.. kernel-doc:: include/drm/drm_vram_mm_helper.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c
+   :export:
+
 VMA Offset Manager
 ==================
 
index c9fd23e..05874d0 100644 (file)
@@ -85,16 +85,18 @@ leads to a few additional requirements:
 - The userspace side must be fully reviewed and tested to the standards of that
   userspace project. For e.g. mesa this means piglit testcases and review on the
   mailing list. This is again to ensure that the new interface actually gets the
-  job done.
+  job done.  The userspace-side reviewer should also provide at least an
+  Acked-by on the kernel uAPI patch indicating that they've looked at how the
+  kernel side is implementing the new feature being used.
 
 - The userspace patches must be against the canonical upstream, not some vendor
   fork. This is to make sure that no one cheats on the review and testing
   requirements by doing a quick fork.
 
 - The kernel patch can only be merged after all the above requirements are met,
-  but it **must** be merged **before** the userspace patches land. uAPI always flows
-  from the kernel, doing things the other way round risks divergence of the uAPI
-  definitions and header files.
+  but it **must** be merged to either drm-next or drm-misc-next **before** the
+  userspace patches land. uAPI always flows from the kernel, doing things the
+  other way round risks divergence of the uAPI definitions and header files.
 
 These are fairly steep requirements, but have grown out from years of shared
 pain and experience with uAPI added hastily, and almost always regretted about
index 1528ad2..66f05f4 100644 (file)
@@ -10,25 +10,6 @@ graphics subsystem useful as newbie projects. Or for slow rainy days.
 Subsystem-wide refactorings
 ===========================
 
-De-midlayer drivers
--------------------
-
-With the recent ``drm_bus`` cleanup patches for 3.17 it is no longer required
-to have a ``drm_bus`` structure set up. Drivers can directly set up the
-``drm_device`` structure instead of relying on bus methods in ``drm_usb.c``
-and ``drm_pci.c``. The goal is to get rid of the driver's ``->load`` /
-``->unload`` callbacks and open-code the load/unload sequence properly, using
-the new two-stage ``drm_device`` setup/teardown.
-
-Once all existing drivers are converted we can also remove those bus support
-files for USB and platform devices.
-
-All you need is a GPU for a non-converted driver (currently almost all of
-them, but also all the virtual ones used by KVM, so everyone qualifies).
-
-Contact: Daniel Vetter, Thierry Reding, respective driver maintainers
-
-
 Remove custom dumb_map_offset implementations
 ---------------------------------------------
 
@@ -300,6 +281,14 @@ it to use drm_mode_hsync() instead.
 
 Contact: Sean Paul
 
+drm_fb_helper tasks
+-------------------
+
+- drm_fb_helper_restore_fbdev_mode_unlocked() should call restore_fbdev_mode()
+  not the _force variant so it can bail out if there is a master. But first
+  these igt tests need to be fixed: kms_fbcon_fbt@psr and
+  kms_fbcon_fbt@psr-suspend.
+
 Core refactorings
 =================
 
index 429c6c6..eb274ac 100644 (file)
@@ -5413,6 +5413,7 @@ T:        git git://anongit.freedesktop.org/drm/drm-misc
 
 DRM PANEL DRIVERS
 M:     Thierry Reding <thierry.reding@gmail.com>
+R:     Sam Ravnborg <sam@ravnborg.org>
 L:     dri-devel@lists.freedesktop.org
 T:     git git://anongit.freedesktop.org/drm/drm-misc
 S:     Maintained
@@ -5441,7 +5442,6 @@ F:        Documentation/gpu/xen-front.rst
 DRM TTM SUBSYSTEM
 M:     Christian Koenig <christian.koenig@amd.com>
 M:     Huang Rui <ray.huang@amd.com>
-M:     Junwei Zhang <Jerry.Zhang@amd.com>
 T:     git git://people.freedesktop.org/~agd5f/linux
 S:     Maintained
 L:     dri-devel@lists.freedesktop.org
index 7c85802..f4104a2 100644 (file)
@@ -90,6 +90,10 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 
        dmabuf = file->private_data;
 
+       /* check if buffer supports mmap */
+       if (!dmabuf->ops->mmap)
+               return -EINVAL;
+
        /* check for overflowing the buffer's size */
        if (vma->vm_pgoff + vma_pages(vma) >
            dmabuf->size >> PAGE_SHIFT)
@@ -404,8 +408,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
                          || !exp_info->ops
                          || !exp_info->ops->map_dma_buf
                          || !exp_info->ops->unmap_dma_buf
-                         || !exp_info->ops->release
-                         || !exp_info->ops->mmap)) {
+                         || !exp_info->ops->release)) {
                return ERR_PTR(-EINVAL);
        }
 
@@ -573,6 +576,7 @@ struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
        list_add(&attach->node, &dmabuf->attachments);
 
        mutex_unlock(&dmabuf->lock);
+
        return attach;
 
 err_attach:
@@ -595,6 +599,9 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
        if (WARN_ON(!dmabuf || !attach))
                return;
 
+       if (attach->sgt)
+               dmabuf->ops->unmap_dma_buf(attach, attach->sgt, attach->dir);
+
        mutex_lock(&dmabuf->lock);
        list_del(&attach->node);
        if (dmabuf->ops->detach)
@@ -630,10 +637,27 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
        if (WARN_ON(!attach || !attach->dmabuf))
                return ERR_PTR(-EINVAL);
 
+       if (attach->sgt) {
+               /*
+                * Two mappings with different directions for the same
+                * attachment are not allowed.
+                */
+               if (attach->dir != direction &&
+                   attach->dir != DMA_BIDIRECTIONAL)
+                       return ERR_PTR(-EBUSY);
+
+               return attach->sgt;
+       }
+
        sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
        if (!sg_table)
                sg_table = ERR_PTR(-ENOMEM);
 
+       if (!IS_ERR(sg_table) && attach->dmabuf->ops->cache_sgt_mapping) {
+               attach->sgt = sg_table;
+               attach->dir = direction;
+       }
+
        return sg_table;
 }
 EXPORT_SYMBOL_GPL(dma_buf_map_attachment);
@@ -657,8 +681,10 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
        if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
                return;
 
-       attach->dmabuf->ops->unmap_dma_buf(attach, sg_table,
-                                               direction);
+       if (attach->sgt == sg_table)
+               return;
+
+       attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, direction);
 }
 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
 
@@ -906,6 +932,10 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
        if (WARN_ON(!dmabuf || !vma))
                return -EINVAL;
 
+       /* check if buffer supports mmap */
+       if (!dmabuf->ops->mmap)
+               return -EINVAL;
+
        /* check for offset overflow */
        if (pgoff + vma_pages(vma) < pgoff)
                return -EOVERFLOW;
@@ -1068,6 +1098,7 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
                                   fence->ops->get_driver_name(fence),
                                   fence->ops->get_timeline_name(fence),
                                   dma_fence_is_signaled(fence) ? "" : "un");
+                       dma_fence_put(fence);
                }
                rcu_read_unlock();
 
index c0abf37..434a665 100644 (file)
@@ -197,29 +197,3 @@ static __init int sync_debugfs_init(void)
        return 0;
 }
 late_initcall(sync_debugfs_init);
-
-#define DUMP_CHUNK 256
-static char sync_dump_buf[64 * 1024];
-void sync_dump(void)
-{
-       struct seq_file s = {
-               .buf = sync_dump_buf,
-               .size = sizeof(sync_dump_buf) - 1,
-       };
-       int i;
-
-       sync_info_debugfs_show(&s, NULL);
-
-       for (i = 0; i < s.count; i += DUMP_CHUNK) {
-               if ((s.count - i) > DUMP_CHUNK) {
-                       char c = s.buf[i + DUMP_CHUNK];
-
-                       s.buf[i + DUMP_CHUNK] = 0;
-                       pr_cont("%s", s.buf + i);
-                       s.buf[i + DUMP_CHUNK] = c;
-               } else {
-                       s.buf[s.count] = 0;
-                       pr_cont("%s", s.buf + i);
-               }
-       }
-}
index 05e33f9..6176e52 100644
@@ -68,6 +68,5 @@ void sync_timeline_debug_add(struct sync_timeline *obj);
 void sync_timeline_debug_remove(struct sync_timeline *obj);
 void sync_file_debug_add(struct sync_file *fence);
 void sync_file_debug_remove(struct sync_file *fence);
-void sync_dump(void);
 
 #endif /* _LINUX_SYNC_H */
index 36f900d..b62f40c 100644
@@ -161,6 +161,13 @@ config DRM_TTM
          GPU memory types. Will be enabled automatically if a device driver
          uses it.
 
+config DRM_VRAM_HELPER
+       tristate
+       depends on DRM
+       select DRM_TTM
+       help
+         Helpers for VRAM memory management
+
 config DRM_GEM_CMA_HELPER
        bool
        depends on DRM
index 72f5036..4c3dc42 100644
@@ -32,6 +32,11 @@ drm-$(CONFIG_AGP) += drm_agpsupport.o
 drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
 
+drm_vram_helper-y := drm_gem_vram_helper.o \
+                    drm_vram_helper_common.o \
+                    drm_vram_mm_helper.o
+obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
+
 drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_dsc.o drm_probe_helper.o \
                drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
                drm_kms_helper_common.o drm_dp_dual_mode_helper.o \
index cc8ad38..9f282e9 100644
@@ -3341,8 +3341,6 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
                if (!ring || !ring->sched.thread)
                        continue;
 
-               drm_sched_stop(&ring->sched);
-
                /* after all hw jobs are reset, hw fence is meaningless, so force_completion */
                amdgpu_fence_driver_force_completion(ring);
        }
@@ -3350,8 +3348,7 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
        if(job)
                drm_sched_increase_karma(&job->base);
 
-
-
+       /* Don't suspend on bare metal if we are not going to HW reset the ASIC */
        if (!amdgpu_sriov_vf(adev)) {
 
                if (!need_full_reset)
@@ -3489,38 +3486,21 @@ end:
        return r;
 }
 
-static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev,
-                                         struct amdgpu_job *job)
+static bool amdgpu_device_lock_adev(struct amdgpu_device *adev, bool trylock)
 {
-       int i;
-
-       for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
-               struct amdgpu_ring *ring = adev->rings[i];
-
-               if (!ring || !ring->sched.thread)
-                       continue;
+       if (trylock) {
+               if (!mutex_trylock(&adev->lock_reset))
+                       return false;
+       } else {
+               mutex_lock(&adev->lock_reset);
+       }
 
-               if (!adev->asic_reset_res)
-                       drm_sched_resubmit_jobs(&ring->sched);
-
-               drm_sched_start(&ring->sched, !adev->asic_reset_res);
-       }
-
-       if (!amdgpu_device_has_dc_support(adev)) {
-               drm_helper_resume_force_mode(adev->ddev);
-       }
-
-       adev->asic_reset_res = 0;
-}
-
-static void amdgpu_device_lock_adev(struct amdgpu_device *adev)
-{
-       mutex_lock(&adev->lock_reset);
        atomic_inc(&adev->gpu_reset_counter);
        adev->in_gpu_reset = 1;
        /* Block kfd: SRIOV would do it separately */
        if (!amdgpu_sriov_vf(adev))
                 amdgpu_amdkfd_pre_reset(adev);
+
+       return true;
 }
 
 static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
@@ -3548,40 +3528,42 @@ static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
 int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
                              struct amdgpu_job *job)
 {
-       int r;
+       struct list_head device_list, *device_list_handle =  NULL;
+       bool need_full_reset, job_signaled;
        struct amdgpu_hive_info *hive = NULL;
-       bool need_full_reset = false;
        struct amdgpu_device *tmp_adev = NULL;
-       struct list_head device_list, *device_list_handle =  NULL;
+       int i, r = 0;
 
+       need_full_reset = job_signaled = false;
        INIT_LIST_HEAD(&device_list);
 
        dev_info(adev->dev, "GPU reset begin!\n");
 
+       hive = amdgpu_get_xgmi_hive(adev, false);
+
        /*
-        * In case of XGMI hive disallow concurrent resets to be triggered
-        * by different nodes. No point also since the one node already executing
-        * reset will also reset all the other nodes in the hive.
+        * Here we trylock to avoid a chain of resets executing from
+        * either jobs triggered on different adevs in an XGMI hive or jobs on
+        * different schedulers for the same device while this TO handler runs.
+        * We always reset all schedulers for a device and all devices in an
+        * XGMI hive, so that should take care of them too.
         */
-       hive = amdgpu_get_xgmi_hive(adev, 0);
-       if (hive && adev->gmc.xgmi.num_physical_nodes > 1 &&
-           !mutex_trylock(&hive->reset_lock))
+
+       if (hive && !mutex_trylock(&hive->reset_lock)) {
+               DRM_INFO("Bailing on TDR for s_job:%llx, hive: %llx as another already in progress",
+                        job->base.id, hive->hive_id);
                return 0;
+       }
 
        /* Start with adev pre asic reset first for soft reset check.*/
-       amdgpu_device_lock_adev(adev);
-       r = amdgpu_device_pre_asic_reset(adev,
-                                        job,
-                                        &need_full_reset);
-       if (r) {
-               /*TODO Should we stop ?*/
-               DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
-                         r, adev->ddev->unique);
-               adev->asic_reset_res = r;
+       if (!amdgpu_device_lock_adev(adev, !hive)) {
+               DRM_INFO("Bailing on TDR for s_job:%llx, as another already in progress",
+                                        job->base.id);
+               return 0;
        }
 
        /* Build list of devices to reset */
-       if  (need_full_reset && adev->gmc.xgmi.num_physical_nodes > 1) {
+       if  (adev->gmc.xgmi.num_physical_nodes > 1) {
                if (!hive) {
                        amdgpu_device_unlock_adev(adev);
                        return -ENODEV;
@@ -3598,13 +3580,56 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
                device_list_handle = &device_list;
        }
 
+       /* block all schedulers and reset given job's ring */
+       list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
+               for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+                       struct amdgpu_ring *ring = tmp_adev->rings[i];
+
+                       if (!ring || !ring->sched.thread)
+                               continue;
+
+                       drm_sched_stop(&ring->sched, &job->base);
+               }
+       }
+
+
+       /*
+        * Must check guilty signal here since after this point all old
+        * HW fences are force signaled.
+        *
+        * job->base holds a reference to parent fence
+        */
+       if (job && job->base.s_fence->parent &&
+           dma_fence_is_signaled(job->base.s_fence->parent))
+               job_signaled = true;
+
+       if (!amdgpu_device_ip_need_full_reset(adev))
+               device_list_handle = &device_list;
+
+       if (job_signaled) {
+               dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
+               goto skip_hw_reset;
+       }
+
+
+       /* Guilty job will be freed after this */
+       r = amdgpu_device_pre_asic_reset(adev,
+                                        job,
+                                        &need_full_reset);
+       if (r) {
+               /* TODO: should we stop? */
+               DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
+                         r, adev->ddev->unique);
+               adev->asic_reset_res = r;
+       }
+
 retry: /* Rest of adevs pre asic reset from XGMI hive. */
        list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
 
                if (tmp_adev == adev)
                        continue;
 
-               amdgpu_device_lock_adev(tmp_adev);
+               amdgpu_device_lock_adev(tmp_adev, false);
                r = amdgpu_device_pre_asic_reset(tmp_adev,
                                                 NULL,
                                                 &need_full_reset);
@@ -3628,9 +3653,28 @@ retry:   /* Rest of adevs pre asic reset from XGMI hive. */
                        goto retry;
        }
 
+skip_hw_reset:
+
        /* Post ASIC reset for all devs .*/
        list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
-               amdgpu_device_post_asic_reset(tmp_adev, tmp_adev == adev ? job : NULL);
+               for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+                       struct amdgpu_ring *ring = tmp_adev->rings[i];
+
+                       if (!ring || !ring->sched.thread)
+                               continue;
+
+                       /* No point in resubmitting jobs if we didn't HW reset */
+                       if (!tmp_adev->asic_reset_res && !job_signaled)
+                               drm_sched_resubmit_jobs(&ring->sched);
+
+                       drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res);
+               }
+
+               if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) {
+                       drm_helper_resume_force_mode(tmp_adev->ddev);
+               }
+
+               tmp_adev->asic_reset_res = 0;
 
                if (r) {
                        /* bad news, how to tell it to userspace ? */
@@ -3643,7 +3687,7 @@ retry:    /* Rest of adevs pre asic reset from XGMI hive. */
                amdgpu_device_unlock_adev(tmp_adev);
        }
 
-       if (hive && adev->gmc.xgmi.num_physical_nodes > 1)
+       if (hive)
                mutex_unlock(&hive->reset_lock);
 
        if (r)
index e476092..2e28692 100644
@@ -121,6 +121,7 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
                                         struct drm_mode_fb_cmd2 *mode_cmd,
                                         struct drm_gem_object **gobj_p)
 {
+       const struct drm_format_info *info;
        struct amdgpu_device *adev = rfbdev->adev;
        struct drm_gem_object *gobj = NULL;
        struct amdgpu_bo *abo = NULL;
@@ -131,7 +132,8 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
        int height = mode_cmd->height;
        u32 cpp;
 
-       cpp = drm_format_plane_cpp(mode_cmd->pixel_format, 0);
+       info = drm_get_format_info(adev->ddev, mode_cmd);
+       cpp = info->cpp[0];
 
        /* need to align pitch with crtc limits */
        mode_cmd->pitches[0] = amdgpu_align_pitch(adev, mode_cmd->width, cpp,
index 56aad28..d6690e0 100644
@@ -463,23 +463,6 @@ static struct drm_crtc_state *malidp_crtc_duplicate_state(struct drm_crtc *crtc)
        return &state->base;
 }
 
-static void malidp_crtc_reset(struct drm_crtc *crtc)
-{
-       struct malidp_crtc_state *state = NULL;
-
-       if (crtc->state) {
-               state = to_malidp_crtc_state(crtc->state);
-               __drm_atomic_helper_crtc_destroy_state(crtc->state);
-       }
-
-       kfree(state);
-       state = kzalloc(sizeof(*state), GFP_KERNEL);
-       if (state) {
-               crtc->state = &state->base;
-               crtc->state->crtc = crtc;
-       }
-}
-
 static void malidp_crtc_destroy_state(struct drm_crtc *crtc,
                                      struct drm_crtc_state *state)
 {
@@ -493,6 +476,17 @@ static void malidp_crtc_destroy_state(struct drm_crtc *crtc,
        kfree(mali_state);
 }
 
+static void malidp_crtc_reset(struct drm_crtc *crtc)
+{
+       struct malidp_crtc_state *state =
+               kzalloc(sizeof(*state), GFP_KERNEL);
+
+       if (crtc->state)
+               malidp_crtc_destroy_state(crtc, crtc->state);
+
+       __drm_atomic_helper_crtc_reset(crtc, &state->base);
+}
+
 static int malidp_crtc_enable_vblank(struct drm_crtc *crtc)
 {
        struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
index 8df12e9..53391c0 100644
@@ -382,7 +382,8 @@ static void malidp500_modeset(struct malidp_hw_device *hwdev, struct videomode *
 
 int malidp_format_get_bpp(u32 fmt)
 {
-       int bpp = drm_format_plane_cpp(fmt, 0) * 8;
+       const struct drm_format_info *info = drm_format_info(fmt);
+       int bpp = info->cpp[0] * 8;
 
        if (bpp == 0) {
                switch (fmt) {
index 5f102bd..2e81252 100644
@@ -158,7 +158,7 @@ malidp_mw_encoder_atomic_check(struct drm_encoder *encoder,
                return -EINVAL;
        }
 
-       n_planes = drm_format_num_planes(fb->format->format);
+       n_planes = fb->format->num_planes;
        for (i = 0; i < n_planes; i++) {
                struct drm_gem_cma_object *obj = drm_fb_cma_get_gem_obj(fb, i);
                /* memory write buffers are never rotated */
index d42e0ea..07ceb4e 100644
@@ -227,14 +227,13 @@ bool malidp_format_mod_supported(struct drm_device *drm,
 
        if (modifier & AFBC_SPLIT) {
                if (!info->is_yuv) {
-                       if (drm_format_plane_cpp(format, 0) <= 2) {
+                       if (info->cpp[0] <= 2) {
                                DRM_DEBUG_KMS("RGB formats <= 16bpp are not supported with SPLIT\n");
                                return false;
                        }
                }
 
-               if ((drm_format_horz_chroma_subsampling(format) != 1) ||
-                   (drm_format_vert_chroma_subsampling(format) != 1)) {
+               if ((info->hsub != 1) || (info->vsub != 1)) {
                        if (!(format == DRM_FORMAT_YUV420_10BIT &&
                              (map->features & MALIDP_DEVICE_AFBC_YUV_420_10_SUPPORT_SPLIT))) {
                                DRM_DEBUG_KMS("Formats which are sub-sampled should never be split\n");
@@ -244,8 +243,7 @@ bool malidp_format_mod_supported(struct drm_device *drm,
        }
 
        if (modifier & AFBC_CBR) {
-               if ((drm_format_horz_chroma_subsampling(format) == 1) ||
-                   (drm_format_vert_chroma_subsampling(format) == 1)) {
+               if ((info->hsub == 1) || (info->vsub == 1)) {
                        DRM_DEBUG_KMS("Formats which are not sub-sampled should not have CBR set\n");
                        return false;
                }
index 058ac7d..a2f6472 100644
@@ -87,6 +87,7 @@ struct armada_framebuffer *armada_framebuffer_create(struct drm_device *dev,
 struct drm_framebuffer *armada_fb_create(struct drm_device *dev,
        struct drm_file *dfile, const struct drm_mode_fb_cmd2 *mode)
 {
+       const struct drm_format_info *info = drm_get_format_info(dev, mode);
        struct armada_gem_object *obj;
        struct armada_framebuffer *dfb;
        int ret;
@@ -97,7 +98,7 @@ struct drm_framebuffer *armada_fb_create(struct drm_device *dev,
                mode->pitches[2]);
 
        /* We can only handle a single plane at the moment */
-       if (drm_format_num_planes(mode->pixel_format) > 1 &&
+       if (info->num_planes > 1 &&
            (mode->handles[0] != mode->handles[1] ||
             mode->handles[0] != mode->handles[2])) {
                ret = -EINVAL;
index ac47ecf..829620d 100644
@@ -2,9 +2,8 @@
 config DRM_AST
        tristate "AST server chips"
        depends on DRM && PCI && MMU
-       select DRM_TTM
        select DRM_KMS_HELPER
-       select DRM_TTM
+       select DRM_VRAM_HELPER
        help
         Say yes for experimental AST GPU driver. Do not enable
         this driver without having a working -modesetting,
index 3871b39..3811997 100644
@@ -205,13 +205,7 @@ static struct pci_driver ast_pci_driver = {
 
 static const struct file_operations ast_fops = {
        .owner = THIS_MODULE,
-       .open = drm_open,
-       .release = drm_release,
-       .unlocked_ioctl = drm_ioctl,
-       .mmap = ast_mmap,
-       .poll = drm_poll,
-       .compat_ioctl = drm_compat_ioctl,
-       .read = drm_read,
+       DRM_VRAM_MM_FILE_OPERATIONS
 };
 
 static struct drm_driver driver = {
@@ -228,10 +222,7 @@ static struct drm_driver driver = {
        .minor = DRIVER_MINOR,
        .patchlevel = DRIVER_PATCHLEVEL,
 
-       .gem_free_object_unlocked = ast_gem_free_object,
-       .dumb_create = ast_dumb_create,
-       .dumb_map_offset = ast_dumb_mmap_offset,
-
+       DRM_GEM_VRAM_DRIVER
 };
 
 static int __init ast_init(void)
index 1cf0c75..b6cac95 100644
 #include <drm/drm_encoder.h>
 #include <drm/drm_fb_helper.h>
 
-#include <drm/ttm/ttm_bo_api.h>
-#include <drm/ttm/ttm_bo_driver.h>
-#include <drm/ttm/ttm_placement.h>
-#include <drm/ttm/ttm_memory.h>
-#include <drm/ttm/ttm_module.h>
-
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_vram_helper.h>
+
+#include <drm/drm_vram_mm_helper.h>
 
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
@@ -103,10 +100,6 @@ struct ast_private {
 
        int fb_mtrr;
 
-       struct {
-               struct ttm_bo_device bdev;
-       } ttm;
-
        struct drm_gem_object *cursor_cache;
        uint64_t cursor_cache_gpu_addr;
        /* Acces to this cache is protected by the crtc->mutex of the only crtc
@@ -263,7 +256,6 @@ struct ast_fbdev {
        struct ast_framebuffer afb;
        void *sysram;
        int size;
-       struct ttm_bo_kmap_obj mapping;
        int x1, y1, x2, y2; /* dirty rect */
        spinlock_t dirty_lock;
 };
@@ -321,73 +313,16 @@ void ast_fbdev_fini(struct drm_device *dev);
 void ast_fbdev_set_suspend(struct drm_device *dev, int state);
 void ast_fbdev_set_base(struct ast_private *ast, unsigned long gpu_addr);
 
-struct ast_bo {
-       struct ttm_buffer_object bo;
-       struct ttm_placement placement;
-       struct ttm_bo_kmap_obj kmap;
-       struct drm_gem_object gem;
-       struct ttm_place placements[3];
-       int pin_count;
-};
-#define gem_to_ast_bo(gobj) container_of((gobj), struct ast_bo, gem)
-
-static inline struct ast_bo *
-ast_bo(struct ttm_buffer_object *bo)
-{
-       return container_of(bo, struct ast_bo, bo);
-}
-
-
-#define to_ast_obj(x) container_of(x, struct ast_gem_object, base)
-
 #define AST_MM_ALIGN_SHIFT 4
 #define AST_MM_ALIGN_MASK ((1 << AST_MM_ALIGN_SHIFT) - 1)
 
-extern int ast_dumb_create(struct drm_file *file,
-                          struct drm_device *dev,
-                          struct drm_mode_create_dumb *args);
-
-extern void ast_gem_free_object(struct drm_gem_object *obj);
-extern int ast_dumb_mmap_offset(struct drm_file *file,
-                               struct drm_device *dev,
-                               uint32_t handle,
-                               uint64_t *offset);
-
 int ast_mm_init(struct ast_private *ast);
 void ast_mm_fini(struct ast_private *ast);
 
-int ast_bo_create(struct drm_device *dev, int size, int align,
-                 uint32_t flags, struct ast_bo **pastbo);
-
 int ast_gem_create(struct drm_device *dev,
                   u32 size, bool iskernel,
                   struct drm_gem_object **obj);
 
-int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr);
-int ast_bo_unpin(struct ast_bo *bo);
-
-static inline int ast_bo_reserve(struct ast_bo *bo, bool no_wait)
-{
-       int ret;
-
-       ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL);
-       if (ret) {
-               if (ret != -ERESTARTSYS && ret != -EBUSY)
-                       DRM_ERROR("reserve failed %p\n", bo);
-               return ret;
-       }
-       return 0;
-}
-
-static inline void ast_bo_unreserve(struct ast_bo *bo)
-{
-       ttm_bo_unreserve(&bo->bo);
-}
-
-void ast_ttm_placement(struct ast_bo *bo, int domain);
-int ast_bo_push_sysram(struct ast_bo *bo);
-int ast_mmap(struct file *filp, struct vm_area_struct *vma);
-
 /* ast post */
 void ast_enable_vga(struct drm_device *dev);
 void ast_enable_mmio(struct drm_device *dev);
index e718d0f..05f4522 100644
@@ -49,25 +49,25 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
 {
        int i;
        struct drm_gem_object *obj;
-       struct ast_bo *bo;
+       struct drm_gem_vram_object *gbo;
        int src_offset, dst_offset;
        int bpp = afbdev->afb.base.format->cpp[0];
        int ret = -EBUSY;
+       u8 *dst;
        bool unmap = false;
        bool store_for_later = false;
        int x2, y2;
        unsigned long flags;
 
        obj = afbdev->afb.obj;
-       bo = gem_to_ast_bo(obj);
+       gbo = drm_gem_vram_of_gem(obj);
 
-       /*
-        * try and reserve the BO, if we fail with busy
-        * then the BO is being moved and we should
-        * store up the damage until later.
+       /* Try to lock the BO. If we fail with -EBUSY then
+        * the BO is being moved and we should store up the
+        * damage until later.
         */
        if (drm_can_sleep())
-               ret = ast_bo_reserve(bo, true);
+               ret = drm_gem_vram_lock(gbo, true);
        if (ret) {
                if (ret != -EBUSY)
                        return;
@@ -101,25 +101,32 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
        afbdev->x2 = afbdev->y2 = 0;
        spin_unlock_irqrestore(&afbdev->dirty_lock, flags);
 
-       if (!bo->kmap.virtual) {
-               ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-               if (ret) {
+       dst = drm_gem_vram_kmap(gbo, false, NULL);
+       if (IS_ERR(dst)) {
+               DRM_ERROR("failed to kmap fb updates\n");
+               goto out;
+       } else if (!dst) {
+               dst = drm_gem_vram_kmap(gbo, true, NULL);
+               if (IS_ERR(dst)) {
                        DRM_ERROR("failed to kmap fb updates\n");
-                       ast_bo_unreserve(bo);
-                       return;
+                       goto out;
                }
                unmap = true;
        }
+
        for (i = y; i <= y2; i++) {
                /* assume equal stride for now */
-               src_offset = dst_offset = i * afbdev->afb.base.pitches[0] + (x * bpp);
-               memcpy_toio(bo->kmap.virtual + src_offset, afbdev->sysram + src_offset, (x2 - x + 1) * bpp);
-
+               src_offset = dst_offset =
+                       i * afbdev->afb.base.pitches[0] + (x * bpp);
+               memcpy_toio(dst + dst_offset, afbdev->sysram + src_offset,
+                           (x2 - x + 1) * bpp);
        }
+
        if (unmap)
-               ttm_bo_kunmap(&bo->kmap);
+               drm_gem_vram_kunmap(gbo);
 
-       ast_bo_unreserve(bo);
+out:
+       drm_gem_vram_unlock(gbo);
 }
 
 static void ast_fillrect(struct fb_info *info,
index 2854399..4c7e31c 100644
@@ -593,7 +593,7 @@ int ast_gem_create(struct drm_device *dev,
                   u32 size, bool iskernel,
                   struct drm_gem_object **obj)
 {
-       struct ast_bo *astbo;
+       struct drm_gem_vram_object *gbo;
        int ret;
 
        *obj = NULL;
@@ -602,80 +602,13 @@ int ast_gem_create(struct drm_device *dev,
        if (size == 0)
                return -EINVAL;
 
-       ret = ast_bo_create(dev, size, 0, 0, &astbo);
-       if (ret) {
+       gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
+       if (IS_ERR(gbo)) {
+               ret = PTR_ERR(gbo);
                if (ret != -ERESTARTSYS)
                        DRM_ERROR("failed to allocate GEM object\n");
                return ret;
        }
-       *obj = &astbo->gem;
+       *obj = &gbo->gem;
        return 0;
 }
-
-int ast_dumb_create(struct drm_file *file,
-                   struct drm_device *dev,
-                   struct drm_mode_create_dumb *args)
-{
-       int ret;
-       struct drm_gem_object *gobj;
-       u32 handle;
-
-       args->pitch = args->width * ((args->bpp + 7) / 8);
-       args->size = args->pitch * args->height;
-
-       ret = ast_gem_create(dev, args->size, false,
-                            &gobj);
-       if (ret)
-               return ret;
-
-       ret = drm_gem_handle_create(file, gobj, &handle);
-       drm_gem_object_put_unlocked(gobj);
-       if (ret)
-               return ret;
-
-       args->handle = handle;
-       return 0;
-}
-
-static void ast_bo_unref(struct ast_bo **bo)
-{
-       if ((*bo) == NULL)
-               return;
-       ttm_bo_put(&((*bo)->bo));
-       *bo = NULL;
-}
-
-void ast_gem_free_object(struct drm_gem_object *obj)
-{
-       struct ast_bo *ast_bo = gem_to_ast_bo(obj);
-
-       ast_bo_unref(&ast_bo);
-}
-
-
-static inline u64 ast_bo_mmap_offset(struct ast_bo *bo)
-{
-       return drm_vma_node_offset_addr(&bo->bo.vma_node);
-}
-int
-ast_dumb_mmap_offset(struct drm_file *file,
-                    struct drm_device *dev,
-                    uint32_t handle,
-                    uint64_t *offset)
-{
-       struct drm_gem_object *obj;
-       struct ast_bo *bo;
-
-       obj = drm_gem_object_lookup(file, handle);
-       if (obj == NULL)
-               return -ENOENT;
-
-       bo = gem_to_ast_bo(obj);
-       *offset = ast_bo_mmap_offset(bo);
-
-       drm_gem_object_put_unlocked(obj);
-
-       return 0;
-
-}
-
index 97fed06..fb700d6 100644
@@ -521,7 +521,6 @@ static void ast_crtc_dpms(struct drm_crtc *crtc, int mode)
        }
 }
 
-/* ast is different - we will force move buffers out of VRAM */
 static int ast_crtc_do_set_base(struct drm_crtc *crtc,
                                struct drm_framebuffer *fb,
                                int x, int y, int atomic)
@@ -529,50 +528,54 @@ static int ast_crtc_do_set_base(struct drm_crtc *crtc,
        struct ast_private *ast = crtc->dev->dev_private;
        struct drm_gem_object *obj;
        struct ast_framebuffer *ast_fb;
-       struct ast_bo *bo;
+       struct drm_gem_vram_object *gbo;
        int ret;
-       u64 gpu_addr;
+       s64 gpu_addr;
+       void *base;
 
-       /* push the previous fb to system ram */
        if (!atomic && fb) {
                ast_fb = to_ast_framebuffer(fb);
                obj = ast_fb->obj;
-               bo = gem_to_ast_bo(obj);
-               ret = ast_bo_reserve(bo, false);
-               if (ret)
-                       return ret;
-               ast_bo_push_sysram(bo);
-               ast_bo_unreserve(bo);
+               gbo = drm_gem_vram_of_gem(obj);
+
+               /* unmap if console */
+               if (&ast->fbdev->afb == ast_fb)
+                       drm_gem_vram_kunmap(gbo);
+               drm_gem_vram_unpin(gbo);
        }
 
        ast_fb = to_ast_framebuffer(crtc->primary->fb);
        obj = ast_fb->obj;
-       bo = gem_to_ast_bo(obj);
+       gbo = drm_gem_vram_of_gem(obj);
 
-       ret = ast_bo_reserve(bo, false);
+       ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret)
                return ret;
-
-       ret = ast_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr);
-       if (ret) {
-               ast_bo_unreserve(bo);
-               return ret;
+       gpu_addr = drm_gem_vram_offset(gbo);
+       if (gpu_addr < 0) {
+               ret = (int)gpu_addr;
+               goto err_drm_gem_vram_unpin;
        }
 
        if (&ast->fbdev->afb == ast_fb) {
                /* if pushing console in kmap it */
-               ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-               if (ret)
+               base = drm_gem_vram_kmap(gbo, true, NULL);
+               if (IS_ERR(base)) {
+                       ret = PTR_ERR(base);
                        DRM_ERROR("failed to kmap fbcon\n");
-               else
+               } else {
                        ast_fbdev_set_base(ast, gpu_addr);
+               }
        }
-       ast_bo_unreserve(bo);
 
        ast_set_offset_reg(crtc);
        ast_set_start_address_crt1(crtc, (u32)gpu_addr);
 
        return 0;
+
+err_drm_gem_vram_unpin:
+       drm_gem_vram_unpin(gbo);
+       return ret;
 }
 
 static int ast_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
@@ -618,21 +621,18 @@ static int ast_crtc_mode_set(struct drm_crtc *crtc,
 
 static void ast_crtc_disable(struct drm_crtc *crtc)
 {
-       int ret;
-
        DRM_DEBUG_KMS("\n");
        ast_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
        if (crtc->primary->fb) {
+               struct ast_private *ast = crtc->dev->dev_private;
                struct ast_framebuffer *ast_fb = to_ast_framebuffer(crtc->primary->fb);
                struct drm_gem_object *obj = ast_fb->obj;
-               struct ast_bo *bo = gem_to_ast_bo(obj);
-
-               ret = ast_bo_reserve(bo, false);
-               if (ret)
-                       return;
+               struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
 
-               ast_bo_push_sysram(bo);
-               ast_bo_unreserve(bo);
+               /* unmap if console */
+               if (&ast->fbdev->afb == ast_fb)
+                       drm_gem_vram_kunmap(gbo);
+               drm_gem_vram_unpin(gbo);
        }
        crtc->primary->fb = NULL;
 }
@@ -918,28 +918,32 @@ static int ast_cursor_init(struct drm_device *dev)
        int size;
        int ret;
        struct drm_gem_object *obj;
-       struct ast_bo *bo;
-       uint64_t gpu_addr;
+       struct drm_gem_vram_object *gbo;
+       s64 gpu_addr;
+       void *base;
 
        size = (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE) * AST_DEFAULT_HWC_NUM;
 
        ret = ast_gem_create(dev, size, true, &obj);
        if (ret)
                return ret;
-       bo = gem_to_ast_bo(obj);
-       ret = ast_bo_reserve(bo, false);
-       if (unlikely(ret != 0))
-               goto fail;
-
-       ret = ast_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr);
-       ast_bo_unreserve(bo);
+       gbo = drm_gem_vram_of_gem(obj);
+       ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret)
                goto fail;
+       gpu_addr = drm_gem_vram_offset(gbo);
+       if (gpu_addr < 0) {
+               drm_gem_vram_unpin(gbo);
+               ret = (int)gpu_addr;
+               goto fail;
+       }
 
        /* kmap the object */
-       ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &ast->cache_kmap);
-       if (ret)
+       base = drm_gem_vram_kmap_at(gbo, true, NULL, &ast->cache_kmap);
+       if (IS_ERR(base)) {
+               ret = PTR_ERR(base);
                goto fail;
+       }
 
        ast->cursor_cache = obj;
        ast->cursor_cache_gpu_addr = gpu_addr;
@@ -952,7 +956,9 @@ fail:
 static void ast_cursor_fini(struct drm_device *dev)
 {
        struct ast_private *ast = dev->dev_private;
-       ttm_bo_kunmap(&ast->cache_kmap);
+       struct drm_gem_vram_object *gbo =
+               drm_gem_vram_of_gem(ast->cursor_cache);
+       drm_gem_vram_kunmap_at(gbo, &ast->cache_kmap);
        drm_gem_object_put_unlocked(ast->cursor_cache);
 }
 
@@ -1173,8 +1179,8 @@ static int ast_cursor_set(struct drm_crtc *crtc,
        struct ast_private *ast = crtc->dev->dev_private;
        struct ast_crtc *ast_crtc = to_ast_crtc(crtc);
        struct drm_gem_object *obj;
-       struct ast_bo *bo;
-       uint64_t gpu_addr;
+       struct drm_gem_vram_object *gbo;
+       s64 gpu_addr;
        u32 csum;
        int ret;
        struct ttm_bo_kmap_obj uobj_map;
@@ -1193,19 +1199,27 @@ static int ast_cursor_set(struct drm_crtc *crtc,
                DRM_ERROR("Cannot find cursor object %x for crtc\n", handle);
                return -ENOENT;
        }
-       bo = gem_to_ast_bo(obj);
+       gbo = drm_gem_vram_of_gem(obj);
 
-       ret = ast_bo_reserve(bo, false);
+       ret = drm_gem_vram_lock(gbo, false);
        if (ret)
                goto fail;
 
-       ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &uobj_map);
-
-       src = ttm_kmap_obj_virtual(&uobj_map, &src_isiomem);
-       dst = ttm_kmap_obj_virtual(&ast->cache_kmap, &dst_isiomem);
-
+       memset(&uobj_map, 0, sizeof(uobj_map));
+       src = drm_gem_vram_kmap_at(gbo, true, &src_isiomem, &uobj_map);
+       if (IS_ERR(src)) {
+               ret = PTR_ERR(src);
+               goto fail_unlock;
+       }
        if (src_isiomem == true)
                DRM_ERROR("src cursor bo should be in main memory\n");
+
+       dst = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache),
+                                  false, &dst_isiomem, &ast->cache_kmap);
+       if (IS_ERR(dst)) {
+               ret = PTR_ERR(dst);
+               goto fail_unlock;
+       }
        if (dst_isiomem == false)
                DRM_ERROR("dst bo should be in VRAM\n");
 
@@ -1214,11 +1228,14 @@ static int ast_cursor_set(struct drm_crtc *crtc,
        /* do data transfer to cursor cache */
        csum = copy_cursor_image(src, dst, width, height);
 
+       drm_gem_vram_kunmap_at(gbo, &uobj_map);
+       drm_gem_vram_unlock(gbo);
+
        /* write checksum + signature */
-       ttm_bo_kunmap(&uobj_map);
-       ast_bo_unreserve(bo);
        {
-               u8 *dst = (u8 *)ast->cache_kmap.virtual + (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
+               u8 *dst = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache),
+                                              false, NULL, &ast->cache_kmap);
+               dst += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
                writel(csum, dst);
                writel(width, dst + AST_HWC_SIGNATURE_SizeX);
                writel(height, dst + AST_HWC_SIGNATURE_SizeY);
@@ -1244,6 +1261,9 @@ static int ast_cursor_set(struct drm_crtc *crtc,
 
        drm_gem_object_put_unlocked(obj);
        return 0;
+
+fail_unlock:
+       drm_gem_vram_unlock(gbo);
 fail:
        drm_gem_object_put_unlocked(obj);
        return ret;
@@ -1257,7 +1277,9 @@ static int ast_cursor_move(struct drm_crtc *crtc,
        int x_offset, y_offset;
        u8 *sig;
 
-       sig = (u8 *)ast->cache_kmap.virtual + (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
+       sig = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache),
+                                  false, NULL, &ast->cache_kmap);
+       sig += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
        writel(x, sig + AST_HWC_SIGNATURE_X);
        writel(y, sig + AST_HWC_SIGNATURE_Y);
 
index 75d477b..779c53e 100644 (file)
  * Authors: Dave Airlie <airlied@redhat.com>
  */
 #include <drm/drmP.h>
-#include <drm/ttm/ttm_page_alloc.h>
 
 #include "ast_drv.h"
 
-static inline struct ast_private *
-ast_bdev(struct ttm_bo_device *bd)
-{
-       return container_of(bd, struct ast_private, ttm.bdev);
-}
-
-static void ast_bo_ttm_destroy(struct ttm_buffer_object *tbo)
-{
-       struct ast_bo *bo;
-
-       bo = container_of(tbo, struct ast_bo, bo);
-
-       drm_gem_object_release(&bo->gem);
-       kfree(bo);
-}
-
-static bool ast_ttm_bo_is_ast_bo(struct ttm_buffer_object *bo)
-{
-       if (bo->destroy == &ast_bo_ttm_destroy)
-               return true;
-       return false;
-}
-
-static int
-ast_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
-                    struct ttm_mem_type_manager *man)
-{
-       switch (type) {
-       case TTM_PL_SYSTEM:
-               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_MASK_CACHING;
-               man->default_caching = TTM_PL_FLAG_CACHED;
-               break;
-       case TTM_PL_VRAM:
-               man->func = &ttm_bo_manager_func;
-               man->flags = TTM_MEMTYPE_FLAG_FIXED |
-                       TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_FLAG_UNCACHED |
-                       TTM_PL_FLAG_WC;
-               man->default_caching = TTM_PL_FLAG_WC;
-               break;
-       default:
-               DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
-               return -EINVAL;
-       }
-       return 0;
-}
-
-static void
-ast_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
-{
-       struct ast_bo *astbo = ast_bo(bo);
-
-       if (!ast_ttm_bo_is_ast_bo(bo))
-               return;
-
-       ast_ttm_placement(astbo, TTM_PL_FLAG_SYSTEM);
-       *pl = astbo->placement;
-}
-
-static int ast_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
-{
-       struct ast_bo *astbo = ast_bo(bo);
-
-       return drm_vma_node_verify_access(&astbo->gem.vma_node,
-                                         filp->private_data);
-}
-
-static int ast_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-                                 struct ttm_mem_reg *mem)
-{
-       struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-       struct ast_private *ast = ast_bdev(bdev);
-
-       mem->bus.addr = NULL;
-       mem->bus.offset = 0;
-       mem->bus.size = mem->num_pages << PAGE_SHIFT;
-       mem->bus.base = 0;
-       mem->bus.is_iomem = false;
-       if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-               return -EINVAL;
-       switch (mem->mem_type) {
-       case TTM_PL_SYSTEM:
-               /* system memory */
-               return 0;
-       case TTM_PL_VRAM:
-               mem->bus.offset = mem->start << PAGE_SHIFT;
-               mem->bus.base = pci_resource_start(ast->dev->pdev, 0);
-               mem->bus.is_iomem = true;
-               break;
-       default:
-               return -EINVAL;
-               break;
-       }
-       return 0;
-}
-
-static void ast_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
-{
-}
-
-static void ast_ttm_backend_destroy(struct ttm_tt *tt)
-{
-       ttm_tt_fini(tt);
-       kfree(tt);
-}
-
-static struct ttm_backend_func ast_tt_backend_func = {
-       .destroy = &ast_ttm_backend_destroy,
-};
-
-
-static struct ttm_tt *ast_ttm_tt_create(struct ttm_buffer_object *bo,
-                                       uint32_t page_flags)
-{
-       struct ttm_tt *tt;
-
-       tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
-       if (tt == NULL)
-               return NULL;
-       tt->func = &ast_tt_backend_func;
-       if (ttm_tt_init(tt, bo, page_flags)) {
-               kfree(tt);
-               return NULL;
-       }
-       return tt;
-}
-
-struct ttm_bo_driver ast_bo_driver = {
-       .ttm_tt_create = ast_ttm_tt_create,
-       .init_mem_type = ast_bo_init_mem_type,
-       .eviction_valuable = ttm_bo_eviction_valuable,
-       .evict_flags = ast_bo_evict_flags,
-       .move = NULL,
-       .verify_access = ast_bo_verify_access,
-       .io_mem_reserve = &ast_ttm_io_mem_reserve,
-       .io_mem_free = &ast_ttm_io_mem_free,
-};
-
 int ast_mm_init(struct ast_private *ast)
 {
+       struct drm_vram_mm *vmm;
        int ret;
        struct drm_device *dev = ast->dev;
-       struct ttm_bo_device *bdev = &ast->ttm.bdev;
-
-       ret = ttm_bo_device_init(&ast->ttm.bdev,
-                                &ast_bo_driver,
-                                dev->anon_inode->i_mapping,
-                                true);
-       if (ret) {
-               DRM_ERROR("Error initialising bo driver; %d\n", ret);
-               return ret;
-       }
 
-       ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
-                            ast->vram_size >> PAGE_SHIFT);
-       if (ret) {
-               DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
+       vmm = drm_vram_helper_alloc_mm(
+               dev, pci_resource_start(dev->pdev, 0),
+               ast->vram_size, &drm_gem_vram_mm_funcs);
+       if (IS_ERR(vmm)) {
+               ret = PTR_ERR(vmm);
+               DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
                return ret;
        }
 
@@ -203,148 +56,9 @@ void ast_mm_fini(struct ast_private *ast)
 {
        struct drm_device *dev = ast->dev;
 
-       ttm_bo_device_release(&ast->ttm.bdev);
+       drm_vram_helper_release_mm(dev);
 
        arch_phys_wc_del(ast->fb_mtrr);
        arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0),
                                pci_resource_len(dev->pdev, 0));
 }
-
-void ast_ttm_placement(struct ast_bo *bo, int domain)
-{
-       u32 c = 0;
-       unsigned i;
-
-       bo->placement.placement = bo->placements;
-       bo->placement.busy_placement = bo->placements;
-       if (domain & TTM_PL_FLAG_VRAM)
-               bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
-       if (domain & TTM_PL_FLAG_SYSTEM)
-               bo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM;
-       if (!c)
-               bo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM;
-       bo->placement.num_placement = c;
-       bo->placement.num_busy_placement = c;
-       for (i = 0; i < c; ++i) {
-               bo->placements[i].fpfn = 0;
-               bo->placements[i].lpfn = 0;
-       }
-}
-
-int ast_bo_create(struct drm_device *dev, int size, int align,
-                 uint32_t flags, struct ast_bo **pastbo)
-{
-       struct ast_private *ast = dev->dev_private;
-       struct ast_bo *astbo;
-       size_t acc_size;
-       int ret;
-
-       astbo = kzalloc(sizeof(struct ast_bo), GFP_KERNEL);
-       if (!astbo)
-               return -ENOMEM;
-
-       ret = drm_gem_object_init(dev, &astbo->gem, size);
-       if (ret)
-               goto error;
-
-       astbo->bo.bdev = &ast->ttm.bdev;
-
-       ast_ttm_placement(astbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
-
-       acc_size = ttm_bo_dma_acc_size(&ast->ttm.bdev, size,
-                                      sizeof(struct ast_bo));
-
-       ret = ttm_bo_init(&ast->ttm.bdev, &astbo->bo, size,
-                         ttm_bo_type_device, &astbo->placement,
-                         align >> PAGE_SHIFT, false, acc_size,
-                         NULL, NULL, ast_bo_ttm_destroy);
-       if (ret)
-               goto error;
-
-       *pastbo = astbo;
-       return 0;
-error:
-       kfree(astbo);
-       return ret;
-}
-
-static inline u64 ast_bo_gpu_offset(struct ast_bo *bo)
-{
-       return bo->bo.offset;
-}
-
-int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (bo->pin_count) {
-               bo->pin_count++;
-               if (gpu_addr)
-                       *gpu_addr = ast_bo_gpu_offset(bo);
-       }
-
-       ast_ttm_placement(bo, pl_flag);
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret)
-               return ret;
-
-       bo->pin_count = 1;
-       if (gpu_addr)
-               *gpu_addr = ast_bo_gpu_offset(bo);
-       return 0;
-}
-
-int ast_bo_unpin(struct ast_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i;
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-       if (bo->pin_count)
-               return 0;
-
-       for (i = 0; i < bo->placement.num_placement ; i++)
-               bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
-       return ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-}
-
-int ast_bo_push_sysram(struct ast_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-       if (bo->pin_count)
-               return 0;
-
-       if (bo->kmap.virtual)
-               ttm_bo_kunmap(&bo->kmap);
-
-       ast_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
-       for (i = 0; i < bo->placement.num_placement ; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret) {
-               DRM_ERROR("pushing to VRAM failed\n");
-               return ret;
-       }
-       return 0;
-}
-
-int ast_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-       struct drm_file *file_priv = filp->private_data;
-       struct ast_private *ast = file_priv->minor->dev->dev_private;
-
-       return ttm_bo_mmap(filp, vma, &ast->ttm.bdev);
-}
index e836e2d..fdd607a 100644 (file)
@@ -603,8 +603,6 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
        const struct drm_display_mode *mode;
        struct drm_crtc_state *crtc_state;
        unsigned int tmp;
-       int hsub = 1;
-       int vsub = 1;
        int ret;
        int i;
 
@@ -642,13 +640,10 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
        if (state->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES)
                return -EINVAL;
 
-       hsub = drm_format_horz_chroma_subsampling(fb->format->format);
-       vsub = drm_format_vert_chroma_subsampling(fb->format->format);
-
        for (i = 0; i < state->nplanes; i++) {
                unsigned int offset = 0;
-               int xdiv = i ? hsub : 1;
-               int ydiv = i ? vsub : 1;
+               int xdiv = i ? fb->format->hsub : 1;
+               int ydiv = i ? fb->format->vsub : 1;
 
                state->bpp[i] = fb->format->cpp[i];
                if (!state->bpp[i])
index 17885fa..32b043a 100644 (file)
@@ -3,7 +3,7 @@ config DRM_BOCHS
        tristate "DRM Support for bochs dispi vga interface (qemu stdvga)"
        depends on DRM && PCI && MMU
        select DRM_KMS_HELPER
-       select DRM_TTM
+       select DRM_VRAM_HELPER
        help
          Choose this option for qemu.
          If M is selected the module will be called bochs-drm.
index 341cc9d..cc35d49 100644 (file)
@@ -10,9 +10,9 @@
 #include <drm/drm_simple_kms_helper.h>
 
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_vram_helper.h>
 
-#include <drm/ttm/ttm_bo_driver.h>
-#include <drm/ttm/ttm_page_alloc.h>
+#include <drm/drm_vram_mm_helper.h>
 
 /* ---------------------------------------------------------------------- */
 
@@ -73,38 +73,8 @@ struct bochs_device {
        struct drm_device *dev;
        struct drm_simple_display_pipe pipe;
        struct drm_connector connector;
-
-       /* ttm */
-       struct {
-               struct ttm_bo_device bdev;
-               bool initialized;
-       } ttm;
-};
-
-struct bochs_bo {
-       struct ttm_buffer_object bo;
-       struct ttm_placement placement;
-       struct ttm_bo_kmap_obj kmap;
-       struct drm_gem_object gem;
-       struct ttm_place placements[3];
-       int pin_count;
 };
 
-static inline struct bochs_bo *bochs_bo(struct ttm_buffer_object *bo)
-{
-       return container_of(bo, struct bochs_bo, bo);
-}
-
-static inline struct bochs_bo *gem_to_bochs_bo(struct drm_gem_object *gem)
-{
-       return container_of(gem, struct bochs_bo, gem);
-}
-
-static inline u64 bochs_bo_mmap_offset(struct bochs_bo *bo)
-{
-       return drm_vma_node_offset_addr(&bo->bo.vma_node);
-}
-
 /* ---------------------------------------------------------------------- */
 
 /* bochs_hw.c */
@@ -122,26 +92,6 @@ int bochs_hw_load_edid(struct bochs_device *bochs);
 /* bochs_mm.c */
 int bochs_mm_init(struct bochs_device *bochs);
 void bochs_mm_fini(struct bochs_device *bochs);
-int bochs_mmap(struct file *filp, struct vm_area_struct *vma);
-
-int bochs_gem_create(struct drm_device *dev, u32 size, bool iskernel,
-                    struct drm_gem_object **obj);
-int bochs_gem_init_object(struct drm_gem_object *obj);
-void bochs_gem_free_object(struct drm_gem_object *obj);
-int bochs_dumb_create(struct drm_file *file, struct drm_device *dev,
-                     struct drm_mode_create_dumb *args);
-int bochs_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
-                          uint32_t handle, uint64_t *offset);
-
-int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag);
-int bochs_bo_unpin(struct bochs_bo *bo);
-
-int bochs_gem_prime_pin(struct drm_gem_object *obj);
-void bochs_gem_prime_unpin(struct drm_gem_object *obj);
-void *bochs_gem_prime_vmap(struct drm_gem_object *obj);
-void bochs_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
-int bochs_gem_prime_mmap(struct drm_gem_object *obj,
-                        struct vm_area_struct *vma);
 
 /* bochs_kms.c */
 int bochs_kms_init(struct bochs_device *bochs);
index 6b6e037..e7512a6 100644 (file)
@@ -10,6 +10,7 @@
 #include <linux/slab.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_probe_helper.h>
+#include <drm/drm_atomic_helper.h>
 
 #include "bochs.h"
 
@@ -63,14 +64,7 @@ err:
 
 static const struct file_operations bochs_fops = {
        .owner          = THIS_MODULE,
-       .open           = drm_open,
-       .release        = drm_release,
-       .unlocked_ioctl = drm_ioctl,
-       .compat_ioctl   = drm_compat_ioctl,
-       .poll           = drm_poll,
-       .read           = drm_read,
-       .llseek         = no_llseek,
-       .mmap           = bochs_mmap,
+       DRM_VRAM_MM_FILE_OPERATIONS
 };
 
 static struct drm_driver bochs_driver = {
@@ -82,17 +76,8 @@ static struct drm_driver bochs_driver = {
        .date                   = "20130925",
        .major                  = 1,
        .minor                  = 0,
-       .gem_free_object_unlocked = bochs_gem_free_object,
-       .dumb_create            = bochs_dumb_create,
-       .dumb_map_offset        = bochs_dumb_mmap_offset,
-
-       .gem_prime_export = drm_gem_prime_export,
-       .gem_prime_import = drm_gem_prime_import,
-       .gem_prime_pin = bochs_gem_prime_pin,
-       .gem_prime_unpin = bochs_gem_prime_unpin,
-       .gem_prime_vmap = bochs_gem_prime_vmap,
-       .gem_prime_vunmap = bochs_gem_prime_vunmap,
-       .gem_prime_mmap = bochs_gem_prime_mmap,
+       DRM_GEM_VRAM_DRIVER,
+       DRM_GEM_VRAM_DRIVER_PRIME,
 };
 
 /* ---------------------------------------------------------------------- */
@@ -174,6 +159,7 @@ static void bochs_pci_remove(struct pci_dev *pdev)
 {
        struct drm_device *dev = pci_get_drvdata(pdev);
 
+       drm_atomic_helper_shutdown(dev);
        drm_dev_unregister(dev);
        bochs_unload(dev);
        drm_dev_put(dev);
index 5e905f5..9e3ee7b 100644 (file)
@@ -30,16 +30,16 @@ static const uint32_t bochs_formats[] = {
 static void bochs_plane_update(struct bochs_device *bochs,
                               struct drm_plane_state *state)
 {
-       struct bochs_bo *bo;
+       struct drm_gem_vram_object *gbo;
 
        if (!state->fb || !bochs->stride)
                return;
 
-       bo = gem_to_bochs_bo(state->fb->obj[0]);
+       gbo = drm_gem_vram_of_gem(state->fb->obj[0]);
        bochs_hw_setbase(bochs,
                         state->crtc_x,
                         state->crtc_y,
-                        bo->bo.offset);
+                        gbo->bo.offset);
        bochs_hw_setformat(bochs, state->fb->format);
 }
 
@@ -72,23 +72,23 @@ static void bochs_pipe_update(struct drm_simple_display_pipe *pipe,
 static int bochs_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
                                 struct drm_plane_state *new_state)
 {
-       struct bochs_bo *bo;
+       struct drm_gem_vram_object *gbo;
 
        if (!new_state->fb)
                return 0;
-       bo = gem_to_bochs_bo(new_state->fb->obj[0]);
-       return bochs_bo_pin(bo, TTM_PL_FLAG_VRAM);
+       gbo = drm_gem_vram_of_gem(new_state->fb->obj[0]);
+       return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
 }
 
 static void bochs_pipe_cleanup_fb(struct drm_simple_display_pipe *pipe,
                                  struct drm_plane_state *old_state)
 {
-       struct bochs_bo *bo;
+       struct drm_gem_vram_object *gbo;
 
        if (!old_state->fb)
                return;
-       bo = gem_to_bochs_bo(old_state->fb->obj[0]);
-       bochs_bo_unpin(bo);
+       gbo = drm_gem_vram_of_gem(old_state->fb->obj[0]);
+       drm_gem_vram_unpin(gbo);
 }
 
 static const struct drm_simple_display_pipe_funcs bochs_pipe_funcs = {
index 4a40308..543499c 100644 (file)
 
 #include "bochs.h"
 
-static void bochs_ttm_placement(struct bochs_bo *bo, int domain);
-
 /* ---------------------------------------------------------------------- */
 
-static inline struct bochs_device *bochs_bdev(struct ttm_bo_device *bd)
-{
-       return container_of(bd, struct bochs_device, ttm.bdev);
-}
-
-static void bochs_bo_ttm_destroy(struct ttm_buffer_object *tbo)
-{
-       struct bochs_bo *bo;
-
-       bo = container_of(tbo, struct bochs_bo, bo);
-       drm_gem_object_release(&bo->gem);
-       kfree(bo);
-}
-
-static bool bochs_ttm_bo_is_bochs_bo(struct ttm_buffer_object *bo)
-{
-       if (bo->destroy == &bochs_bo_ttm_destroy)
-               return true;
-       return false;
-}
-
-static int bochs_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
-                                 struct ttm_mem_type_manager *man)
-{
-       switch (type) {
-       case TTM_PL_SYSTEM:
-               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_MASK_CACHING;
-               man->default_caching = TTM_PL_FLAG_CACHED;
-               break;
-       case TTM_PL_VRAM:
-               man->func = &ttm_bo_manager_func;
-               man->flags = TTM_MEMTYPE_FLAG_FIXED |
-                       TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_FLAG_UNCACHED |
-                       TTM_PL_FLAG_WC;
-               man->default_caching = TTM_PL_FLAG_WC;
-               break;
-       default:
-               DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
-               return -EINVAL;
-       }
-       return 0;
-}
-
-static void
-bochs_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
-{
-       struct bochs_bo *bochsbo = bochs_bo(bo);
-
-       if (!bochs_ttm_bo_is_bochs_bo(bo))
-               return;
-
-       bochs_ttm_placement(bochsbo, TTM_PL_FLAG_SYSTEM);
-       *pl = bochsbo->placement;
-}
-
-static int bochs_bo_verify_access(struct ttm_buffer_object *bo,
-                                 struct file *filp)
-{
-       struct bochs_bo *bochsbo = bochs_bo(bo);
-
-       return drm_vma_node_verify_access(&bochsbo->gem.vma_node,
-                                         filp->private_data);
-}
-
-static int bochs_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-                                   struct ttm_mem_reg *mem)
-{
-       struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-       struct bochs_device *bochs = bochs_bdev(bdev);
-
-       mem->bus.addr = NULL;
-       mem->bus.offset = 0;
-       mem->bus.size = mem->num_pages << PAGE_SHIFT;
-       mem->bus.base = 0;
-       mem->bus.is_iomem = false;
-       if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-               return -EINVAL;
-       switch (mem->mem_type) {
-       case TTM_PL_SYSTEM:
-               /* system memory */
-               return 0;
-       case TTM_PL_VRAM:
-               mem->bus.offset = mem->start << PAGE_SHIFT;
-               mem->bus.base = bochs->fb_base;
-               mem->bus.is_iomem = true;
-               break;
-       default:
-               return -EINVAL;
-               break;
-       }
-       return 0;
-}
-
-static void bochs_ttm_io_mem_free(struct ttm_bo_device *bdev,
-                                 struct ttm_mem_reg *mem)
-{
-}
-
-static void bochs_ttm_backend_destroy(struct ttm_tt *tt)
-{
-       ttm_tt_fini(tt);
-       kfree(tt);
-}
-
-static struct ttm_backend_func bochs_tt_backend_func = {
-       .destroy = &bochs_ttm_backend_destroy,
-};
-
-static struct ttm_tt *bochs_ttm_tt_create(struct ttm_buffer_object *bo,
-                                         uint32_t page_flags)
-{
-       struct ttm_tt *tt;
-
-       tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
-       if (tt == NULL)
-               return NULL;
-       tt->func = &bochs_tt_backend_func;
-       if (ttm_tt_init(tt, bo, page_flags)) {
-               kfree(tt);
-               return NULL;
-       }
-       return tt;
-}
-
-static struct ttm_bo_driver bochs_bo_driver = {
-       .ttm_tt_create = bochs_ttm_tt_create,
-       .init_mem_type = bochs_bo_init_mem_type,
-       .eviction_valuable = ttm_bo_eviction_valuable,
-       .evict_flags = bochs_bo_evict_flags,
-       .move = NULL,
-       .verify_access = bochs_bo_verify_access,
-       .io_mem_reserve = &bochs_ttm_io_mem_reserve,
-       .io_mem_free = &bochs_ttm_io_mem_free,
-};
-
 int bochs_mm_init(struct bochs_device *bochs)
 {
-       struct ttm_bo_device *bdev = &bochs->ttm.bdev;
-       int ret;
+       struct drm_vram_mm *vmm;
 
-       ret = ttm_bo_device_init(&bochs->ttm.bdev,
-                                &bochs_bo_driver,
-                                bochs->dev->anon_inode->i_mapping,
-                                true);
-       if (ret) {
-               DRM_ERROR("Error initialising bo driver; %d\n", ret);
-               return ret;
-       }
-
-       ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
-                            bochs->fb_size >> PAGE_SHIFT);
-       if (ret) {
-               DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
-               return ret;
-       }
-
-       bochs->ttm.initialized = true;
-       return 0;
+       vmm = drm_vram_helper_alloc_mm(bochs->dev, bochs->fb_base,
+                                      bochs->fb_size,
+                                      &drm_gem_vram_mm_funcs);
+       return PTR_ERR_OR_ZERO(vmm);
 }
 
 void bochs_mm_fini(struct bochs_device *bochs)
 {
-       if (!bochs->ttm.initialized)
+       if (!bochs->dev->vram_mm)
                return;
 
-       ttm_bo_device_release(&bochs->ttm.bdev);
-       bochs->ttm.initialized = false;
-}
-
-static void bochs_ttm_placement(struct bochs_bo *bo, int domain)
-{
-       unsigned i;
-       u32 c = 0;
-       bo->placement.placement = bo->placements;
-       bo->placement.busy_placement = bo->placements;
-       if (domain & TTM_PL_FLAG_VRAM) {
-               bo->placements[c++].flags = TTM_PL_FLAG_WC
-                       | TTM_PL_FLAG_UNCACHED
-                       | TTM_PL_FLAG_VRAM;
-       }
-       if (domain & TTM_PL_FLAG_SYSTEM) {
-               bo->placements[c++].flags = TTM_PL_MASK_CACHING
-                       | TTM_PL_FLAG_SYSTEM;
-       }
-       if (!c) {
-               bo->placements[c++].flags = TTM_PL_MASK_CACHING
-                       | TTM_PL_FLAG_SYSTEM;
-       }
-       for (i = 0; i < c; ++i) {
-               bo->placements[i].fpfn = 0;
-               bo->placements[i].lpfn = 0;
-       }
-       bo->placement.num_placement = c;
-       bo->placement.num_busy_placement = c;
-}
-
-int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (bo->pin_count) {
-               bo->pin_count++;
-               return 0;
-       }
-
-       bochs_ttm_placement(bo, pl_flag);
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-       ret = ttm_bo_reserve(&bo->bo, true, false, NULL);
-       if (ret)
-               return ret;
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       ttm_bo_unreserve(&bo->bo);
-       if (ret)
-               return ret;
-
-       bo->pin_count = 1;
-       return 0;
-}
-
-int bochs_bo_unpin(struct bochs_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-
-       if (bo->pin_count)
-               return 0;
-
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
-       ret = ttm_bo_reserve(&bo->bo, true, false, NULL);
-       if (ret)
-               return ret;
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       ttm_bo_unreserve(&bo->bo);
-       if (ret)
-               return ret;
-
-       return 0;
-}
-
-int bochs_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-       struct drm_file *file_priv = filp->private_data;
-       struct bochs_device *bochs = file_priv->minor->dev->dev_private;
-
-       return ttm_bo_mmap(filp, vma, &bochs->ttm.bdev);
-}
-
-/* ---------------------------------------------------------------------- */
-
-static int bochs_bo_create(struct drm_device *dev, int size, int align,
-                          uint32_t flags, struct bochs_bo **pbochsbo)
-{
-       struct bochs_device *bochs = dev->dev_private;
-       struct bochs_bo *bochsbo;
-       size_t acc_size;
-       int ret;
-
-       bochsbo = kzalloc(sizeof(struct bochs_bo), GFP_KERNEL);
-       if (!bochsbo)
-               return -ENOMEM;
-
-       ret = drm_gem_object_init(dev, &bochsbo->gem, size);
-       if (ret) {
-               kfree(bochsbo);
-               return ret;
-       }
-
-       bochsbo->bo.bdev = &bochs->ttm.bdev;
-       bochsbo->bo.bdev->dev_mapping = dev->anon_inode->i_mapping;
-
-       bochs_ttm_placement(bochsbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
-
-       acc_size = ttm_bo_dma_acc_size(&bochs->ttm.bdev, size,
-                                      sizeof(struct bochs_bo));
-
-       ret = ttm_bo_init(&bochs->ttm.bdev, &bochsbo->bo, size,
-                         ttm_bo_type_device, &bochsbo->placement,
-                         align >> PAGE_SHIFT, false, acc_size,
-                         NULL, NULL, bochs_bo_ttm_destroy);
-       if (ret)
-               return ret;
-
-       *pbochsbo = bochsbo;
-       return 0;
-}
-
-int bochs_gem_create(struct drm_device *dev, u32 size, bool iskernel,
-                    struct drm_gem_object **obj)
-{
-       struct bochs_bo *bochsbo;
-       int ret;
-
-       *obj = NULL;
-
-       size = PAGE_ALIGN(size);
-       if (size == 0)
-               return -EINVAL;
-
-       ret = bochs_bo_create(dev, size, 0, 0, &bochsbo);
-       if (ret) {
-               if (ret != -ERESTARTSYS)
-                       DRM_ERROR("failed to allocate GEM object\n");
-               return ret;
-       }
-       *obj = &bochsbo->gem;
-       return 0;
-}
-
-int bochs_dumb_create(struct drm_file *file, struct drm_device *dev,
-                     struct drm_mode_create_dumb *args)
-{
-       struct drm_gem_object *gobj;
-       u32 handle;
-       int ret;
-
-       args->pitch = args->width * ((args->bpp + 7) / 8);
-       args->size = args->pitch * args->height;
-
-       ret = bochs_gem_create(dev, args->size, false,
-                              &gobj);
-       if (ret)
-               return ret;
-
-       ret = drm_gem_handle_create(file, gobj, &handle);
-       drm_gem_object_put_unlocked(gobj);
-       if (ret)
-               return ret;
-
-       args->handle = handle;
-       return 0;
-}
-
-static void bochs_bo_unref(struct bochs_bo **bo)
-{
-       struct ttm_buffer_object *tbo;
-
-       if ((*bo) == NULL)
-               return;
-
-       tbo = &((*bo)->bo);
-       ttm_bo_put(tbo);
-       *bo = NULL;
-}
-
-void bochs_gem_free_object(struct drm_gem_object *obj)
-{
-       struct bochs_bo *bochs_bo = gem_to_bochs_bo(obj);
-
-       bochs_bo_unref(&bochs_bo);
-}
-
-int bochs_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
-                          uint32_t handle, uint64_t *offset)
-{
-       struct drm_gem_object *obj;
-       struct bochs_bo *bo;
-
-       obj = drm_gem_object_lookup(file, handle);
-       if (obj == NULL)
-               return -ENOENT;
-
-       bo = gem_to_bochs_bo(obj);
-       *offset = bochs_bo_mmap_offset(bo);
-
-       drm_gem_object_put_unlocked(obj);
-       return 0;
-}
-
-/* ---------------------------------------------------------------------- */
-
-int bochs_gem_prime_pin(struct drm_gem_object *obj)
-{
-       struct bochs_bo *bo = gem_to_bochs_bo(obj);
-
-       return bochs_bo_pin(bo, TTM_PL_FLAG_VRAM);
-}
-
-void bochs_gem_prime_unpin(struct drm_gem_object *obj)
-{
-       struct bochs_bo *bo = gem_to_bochs_bo(obj);
-
-       bochs_bo_unpin(bo);
-}
-
-void *bochs_gem_prime_vmap(struct drm_gem_object *obj)
-{
-       struct bochs_bo *bo = gem_to_bochs_bo(obj);
-       bool is_iomem;
-       int ret;
-
-       ret = bochs_bo_pin(bo, TTM_PL_FLAG_VRAM);
-       if (ret)
-               return NULL;
-       ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-       if (ret) {
-               bochs_bo_unpin(bo);
-               return NULL;
-       }
-       return ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
-}
-
-void bochs_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-       struct bochs_bo *bo = gem_to_bochs_bo(obj);
-
-       ttm_bo_kunmap(&bo->kmap);
-       bochs_bo_unpin(bo);
-}
-
-int bochs_gem_prime_mmap(struct drm_gem_object *obj,
-                        struct vm_area_struct *vma)
-{
-       struct bochs_bo *bo = gem_to_bochs_bo(obj);
-
-       bo->gem.vma_node.vm_node.start = bo->bo.vma_node.vm_node.start;
-       return drm_gem_prime_mmap(obj, vma);
+       drm_vram_helper_release_mm(bochs->dev);
 }
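The deleted bochs_dumb_create() computed the scanline pitch by rounding bits-per-pixel up to whole bytes before multiplying by width. A minimal userspace sketch of that arithmetic (standalone helpers, not driver code):

```c
#include <stdint.h>

/* Pitch in bytes for one scanline: round bpp up to whole bytes,
 * as the deleted bochs_dumb_create() did. */
static uint32_t dumb_pitch(uint32_t width, uint32_t bpp)
{
	return width * ((bpp + 7) / 8);
}

/* Total dumb-buffer size: pitch times number of scanlines. */
static uint32_t dumb_size(uint32_t width, uint32_t height, uint32_t bpp)
{
	return dumb_pitch(width, bpp) * height;
}
```

The `(bpp + 7) / 8` rounding matters for sub-byte formats: a 1 bpp, 3-pixel-wide line still needs a whole byte per pixel group rather than truncating to zero.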
index 38eeaf8..000ba7c 100644 (file)
@@ -9,13 +9,12 @@
  */
 
 #include <drm/drmP.h>
-#include <drm/drm_panel.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_connector.h>
 #include <drm/drm_encoder.h>
 #include <drm/drm_modeset_helper_vtables.h>
-#include <drm/drm_probe_helper.h>
 #include <drm/drm_panel.h>
+#include <drm/drm_probe_helper.h>
 
 struct panel_bridge {
        struct drm_bridge bridge;
diff --git a/drivers/gpu/drm/cirrus/cirrus_drv.h b/drivers/gpu/drm/cirrus/cirrus_drv.h
deleted file mode 100644 (file)
index 1bd816b..0000000
+++ /dev/null
@@ -1,250 +0,0 @@
-/*
- * Copyright 2012 Red Hat
- *
- * This file is subject to the terms and conditions of the GNU General
- * Public License version 2. See the file COPYING in the main
- * directory of this archive for more details.
- *
- * Authors: Matthew Garrett
- *          Dave Airlie
- */
-#ifndef __CIRRUS_DRV_H__
-#define __CIRRUS_DRV_H__
-
-#include <video/vga.h>
-
-#include <drm/drm_encoder.h>
-#include <drm/drm_fb_helper.h>
-
-#include <drm/ttm/ttm_bo_api.h>
-#include <drm/ttm/ttm_bo_driver.h>
-#include <drm/ttm/ttm_placement.h>
-#include <drm/ttm/ttm_memory.h>
-#include <drm/ttm/ttm_module.h>
-
-#include <drm/drm_gem.h>
-
-#define DRIVER_AUTHOR          "Matthew Garrett"
-
-#define DRIVER_NAME            "cirrus"
-#define DRIVER_DESC            "qemu Cirrus emulation"
-#define DRIVER_DATE            "20110418"
-
-#define DRIVER_MAJOR           1
-#define DRIVER_MINOR           0
-#define DRIVER_PATCHLEVEL      0
-
-#define CIRRUSFB_CONN_LIMIT 1
-
-#define RREG8(reg) ioread8(((void __iomem *)cdev->rmmio) + (reg))
-#define WREG8(reg, v) iowrite8(v, ((void __iomem *)cdev->rmmio) + (reg))
-#define RREG32(reg) ioread32(((void __iomem *)cdev->rmmio) + (reg))
-#define WREG32(reg, v) iowrite32(v, ((void __iomem *)cdev->rmmio) + (reg))
-
-#define SEQ_INDEX 4
-#define SEQ_DATA 5
-
-#define WREG_SEQ(reg, v)                                       \
-       do {                                                    \
-               WREG8(SEQ_INDEX, reg);                          \
-               WREG8(SEQ_DATA, v);                             \
-       } while (0)                                             \
-
-#define CRT_INDEX 0x14
-#define CRT_DATA 0x15
-
-#define WREG_CRT(reg, v)                                       \
-       do {                                                    \
-               WREG8(CRT_INDEX, reg);                          \
-               WREG8(CRT_DATA, v);                             \
-       } while (0)                                             \
-
-#define GFX_INDEX 0xe
-#define GFX_DATA 0xf
-
-#define WREG_GFX(reg, v)                                       \
-       do {                                                    \
-               WREG8(GFX_INDEX, reg);                          \
-               WREG8(GFX_DATA, v);                             \
-       } while (0)                                             \
-
-/*
- * Cirrus has a "hidden" DAC register that can be accessed by writing to
- * the pixel mask register to reset the state, then reading from the register
- * four times. The next write will then pass to the DAC
- */
-#define VGA_DAC_MASK 0x6
-
-#define WREG_HDR(v)                                            \
-       do {                                                    \
-               RREG8(VGA_DAC_MASK);                                    \
-               RREG8(VGA_DAC_MASK);                                    \
-               RREG8(VGA_DAC_MASK);                                    \
-               RREG8(VGA_DAC_MASK);                                    \
-               WREG8(VGA_DAC_MASK, v);                                 \
-       } while (0)                                             \
-
-
-#define CIRRUS_MAX_FB_HEIGHT 4096
-#define CIRRUS_MAX_FB_WIDTH 4096
-
-#define CIRRUS_DPMS_CLEARED (-1)
-
-#define to_cirrus_crtc(x) container_of(x, struct cirrus_crtc, base)
-#define to_cirrus_encoder(x) container_of(x, struct cirrus_encoder, base)
-
-struct cirrus_crtc {
-       struct drm_crtc                 base;
-       int                             last_dpms;
-       bool                            enabled;
-};
-
-struct cirrus_fbdev;
-struct cirrus_mode_info {
-       struct cirrus_crtc              *crtc;
-       /* pointer to fbdev info structure */
-       struct cirrus_fbdev             *gfbdev;
-};
-
-struct cirrus_encoder {
-       struct drm_encoder              base;
-       int                             last_dpms;
-};
-
-struct cirrus_connector {
-       struct drm_connector            base;
-};
-
-struct cirrus_mc {
-       resource_size_t                 vram_size;
-       resource_size_t                 vram_base;
-};
-
-struct cirrus_device {
-       struct drm_device               *dev;
-       unsigned long                   flags;
-
-       resource_size_t                 rmmio_base;
-       resource_size_t                 rmmio_size;
-       void __iomem                    *rmmio;
-
-       struct cirrus_mc                        mc;
-       struct cirrus_mode_info         mode_info;
-
-       int                             num_crtc;
-       int fb_mtrr;
-
-       struct {
-               struct ttm_bo_device bdev;
-       } ttm;
-       bool mm_inited;
-};
-
-
-struct cirrus_fbdev {
-       struct drm_fb_helper helper; /* must be first */
-       struct drm_framebuffer *gfb;
-       void *sysram;
-       int size;
-       int x1, y1, x2, y2; /* dirty rect */
-       spinlock_t dirty_lock;
-};
-
-struct cirrus_bo {
-       struct ttm_buffer_object bo;
-       struct ttm_placement placement;
-       struct ttm_bo_kmap_obj kmap;
-       struct drm_gem_object gem;
-       struct ttm_place placements[3];
-       int pin_count;
-};
-#define gem_to_cirrus_bo(gobj) container_of((gobj), struct cirrus_bo, gem)
-
-static inline struct cirrus_bo *
-cirrus_bo(struct ttm_buffer_object *bo)
-{
-       return container_of(bo, struct cirrus_bo, bo);
-}
-
-
-#define to_cirrus_obj(x) container_of(x, struct cirrus_gem_object, base)
-
-                               /* cirrus_main.c */
-int cirrus_device_init(struct cirrus_device *cdev,
-                     struct drm_device *ddev,
-                     struct pci_dev *pdev,
-                     uint32_t flags);
-void cirrus_device_fini(struct cirrus_device *cdev);
-void cirrus_gem_free_object(struct drm_gem_object *obj);
-int cirrus_dumb_mmap_offset(struct drm_file *file,
-                           struct drm_device *dev,
-                           uint32_t handle,
-                           uint64_t *offset);
-int cirrus_gem_create(struct drm_device *dev,
-                  u32 size, bool iskernel,
-                     struct drm_gem_object **obj);
-int cirrus_dumb_create(struct drm_file *file,
-                   struct drm_device *dev,
-                      struct drm_mode_create_dumb *args);
-
-int cirrus_framebuffer_init(struct drm_device *dev,
-                           struct drm_framebuffer *gfb,
-                           const struct drm_mode_fb_cmd2 *mode_cmd,
-                           struct drm_gem_object *obj);
-
-bool cirrus_check_framebuffer(struct cirrus_device *cdev, int width, int height,
-                             int bpp, int pitch);
-
-                               /* cirrus_display.c */
-int cirrus_modeset_init(struct cirrus_device *cdev);
-void cirrus_modeset_fini(struct cirrus_device *cdev);
-
-                               /* cirrus_fbdev.c */
-int cirrus_fbdev_init(struct cirrus_device *cdev);
-void cirrus_fbdev_fini(struct cirrus_device *cdev);
-
-
-
-                               /* cirrus_irq.c */
-void cirrus_driver_irq_preinstall(struct drm_device *dev);
-int cirrus_driver_irq_postinstall(struct drm_device *dev);
-void cirrus_driver_irq_uninstall(struct drm_device *dev);
-irqreturn_t cirrus_driver_irq_handler(int irq, void *arg);
-
-                               /* cirrus_kms.c */
-int cirrus_driver_load(struct drm_device *dev, unsigned long flags);
-void cirrus_driver_unload(struct drm_device *dev);
-extern struct drm_ioctl_desc cirrus_ioctls[];
-extern int cirrus_max_ioctl;
-
-int cirrus_mm_init(struct cirrus_device *cirrus);
-void cirrus_mm_fini(struct cirrus_device *cirrus);
-void cirrus_ttm_placement(struct cirrus_bo *bo, int domain);
-int cirrus_bo_create(struct drm_device *dev, int size, int align,
-                    uint32_t flags, struct cirrus_bo **pcirrusbo);
-int cirrus_mmap(struct file *filp, struct vm_area_struct *vma);
-
-static inline int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait)
-{
-       int ret;
-
-       ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL);
-       if (ret) {
-               if (ret != -ERESTARTSYS && ret != -EBUSY)
-                       DRM_ERROR("reserve failed %p\n", bo);
-               return ret;
-       }
-       return 0;
-}
-
-static inline void cirrus_bo_unreserve(struct cirrus_bo *bo)
-{
-       ttm_bo_unreserve(&bo->bo);
-}
-
-int cirrus_bo_push_sysram(struct cirrus_bo *bo);
-int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr);
-
-extern int cirrus_bpp;
-
-#endif                         /* __CIRRUS_DRV_H__ */
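The WREG_HDR macro in the deleted header encodes the Cirrus "hidden DAC" protocol described in the comment above it: four consecutive reads of the pixel-mask register arm a latch, and the next write then lands in the hidden register instead of the mask. A hypothetical software model of that state machine (illustration only, not hardware-accurate driver code):

```c
#include <stdint.h>

/* Toy model of the Cirrus hidden-DAC latch: reads of the pixel-mask
 * register count up; once four reads have occurred, the next write is
 * diverted to the hidden DAC register instead of the mask. */
struct dac_model {
	uint8_t mask;    /* normal pixel-mask register */
	uint8_t hidden;  /* hidden DAC register */
	int	reads;   /* consecutive reads since the last write */
};

static uint8_t dac_read_mask(struct dac_model *d)
{
	if (d->reads < 4)
		d->reads++;
	return d->mask;
}

static void dac_write_mask(struct dac_model *d, uint8_t v)
{
	if (d->reads >= 4)
		d->hidden = v;  /* write passes through to the hidden DAC */
	else
		d->mask = v;
	d->reads = 0;           /* any write re-arms the sequence */
}

/* Equivalent of the WREG_HDR(v) macro run against the model. */
static void dac_write_hdr(struct dac_model *d, uint8_t v)
{
	dac_read_mask(d);
	dac_read_mask(d);
	dac_read_mask(d);
	dac_read_mask(d);
	dac_write_mask(d, v);
}
```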
diff --git a/drivers/gpu/drm/cirrus/cirrus_ttm.c b/drivers/gpu/drm/cirrus/cirrus_ttm.c
deleted file mode 100644 (file)
index e6b9846..0000000
+++ /dev/null
@@ -1,337 +0,0 @@
-/*
- * Copyright 2012 Red Hat Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the
- * "Software"), to deal in the Software without restriction, including
- * without limitation the rights to use, copy, modify, merge, publish,
- * distribute, sub license, and/or sell copies of the Software, and to
- * permit persons to whom the Software is furnished to do so, subject to
- * the following conditions:
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
- * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
- * USE OR OTHER DEALINGS IN THE SOFTWARE.
- *
- * The above copyright notice and this permission notice (including the
- * next paragraph) shall be included in all copies or substantial portions
- * of the Software.
- *
- */
-/*
- * Authors: Dave Airlie <airlied@redhat.com>
- */
-#include <drm/drmP.h>
-#include <drm/ttm/ttm_page_alloc.h>
-
-#include "cirrus_drv.h"
-
-static inline struct cirrus_device *
-cirrus_bdev(struct ttm_bo_device *bd)
-{
-       return container_of(bd, struct cirrus_device, ttm.bdev);
-}
-
-static void cirrus_bo_ttm_destroy(struct ttm_buffer_object *tbo)
-{
-       struct cirrus_bo *bo;
-
-       bo = container_of(tbo, struct cirrus_bo, bo);
-
-       drm_gem_object_release(&bo->gem);
-       kfree(bo);
-}
-
-static bool cirrus_ttm_bo_is_cirrus_bo(struct ttm_buffer_object *bo)
-{
-       if (bo->destroy == &cirrus_bo_ttm_destroy)
-               return true;
-       return false;
-}
-
-static int
-cirrus_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
-                    struct ttm_mem_type_manager *man)
-{
-       switch (type) {
-       case TTM_PL_SYSTEM:
-               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_MASK_CACHING;
-               man->default_caching = TTM_PL_FLAG_CACHED;
-               break;
-       case TTM_PL_VRAM:
-               man->func = &ttm_bo_manager_func;
-               man->flags = TTM_MEMTYPE_FLAG_FIXED |
-                       TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_FLAG_UNCACHED |
-                       TTM_PL_FLAG_WC;
-               man->default_caching = TTM_PL_FLAG_WC;
-               break;
-       default:
-               DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
-               return -EINVAL;
-       }
-       return 0;
-}
-
-static void
-cirrus_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
-{
-       struct cirrus_bo *cirrusbo = cirrus_bo(bo);
-
-       if (!cirrus_ttm_bo_is_cirrus_bo(bo))
-               return;
-
-       cirrus_ttm_placement(cirrusbo, TTM_PL_FLAG_SYSTEM);
-       *pl = cirrusbo->placement;
-}
-
-static int cirrus_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
-{
-       struct cirrus_bo *cirrusbo = cirrus_bo(bo);
-
-       return drm_vma_node_verify_access(&cirrusbo->gem.vma_node,
-                                         filp->private_data);
-}
-
-static int cirrus_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-                                 struct ttm_mem_reg *mem)
-{
-       struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-       struct cirrus_device *cirrus = cirrus_bdev(bdev);
-
-       mem->bus.addr = NULL;
-       mem->bus.offset = 0;
-       mem->bus.size = mem->num_pages << PAGE_SHIFT;
-       mem->bus.base = 0;
-       mem->bus.is_iomem = false;
-       if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-               return -EINVAL;
-       switch (mem->mem_type) {
-       case TTM_PL_SYSTEM:
-               /* system memory */
-               return 0;
-       case TTM_PL_VRAM:
-               mem->bus.offset = mem->start << PAGE_SHIFT;
-               mem->bus.base = pci_resource_start(cirrus->dev->pdev, 0);
-               mem->bus.is_iomem = true;
-               break;
-       default:
-               return -EINVAL;
-               break;
-       }
-       return 0;
-}
-
-static void cirrus_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
-{
-}
-
-static void cirrus_ttm_backend_destroy(struct ttm_tt *tt)
-{
-       ttm_tt_fini(tt);
-       kfree(tt);
-}
-
-static struct ttm_backend_func cirrus_tt_backend_func = {
-       .destroy = &cirrus_ttm_backend_destroy,
-};
-
-
-static struct ttm_tt *cirrus_ttm_tt_create(struct ttm_buffer_object *bo,
-                                          uint32_t page_flags)
-{
-       struct ttm_tt *tt;
-
-       tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
-       if (tt == NULL)
-               return NULL;
-       tt->func = &cirrus_tt_backend_func;
-       if (ttm_tt_init(tt, bo, page_flags)) {
-               kfree(tt);
-               return NULL;
-       }
-       return tt;
-}
-
-struct ttm_bo_driver cirrus_bo_driver = {
-       .ttm_tt_create = cirrus_ttm_tt_create,
-       .init_mem_type = cirrus_bo_init_mem_type,
-       .eviction_valuable = ttm_bo_eviction_valuable,
-       .evict_flags = cirrus_bo_evict_flags,
-       .move = NULL,
-       .verify_access = cirrus_bo_verify_access,
-       .io_mem_reserve = &cirrus_ttm_io_mem_reserve,
-       .io_mem_free = &cirrus_ttm_io_mem_free,
-};
-
-int cirrus_mm_init(struct cirrus_device *cirrus)
-{
-       int ret;
-       struct drm_device *dev = cirrus->dev;
-       struct ttm_bo_device *bdev = &cirrus->ttm.bdev;
-
-       ret = ttm_bo_device_init(&cirrus->ttm.bdev,
-                                &cirrus_bo_driver,
-                                dev->anon_inode->i_mapping,
-                                true);
-       if (ret) {
-               DRM_ERROR("Error initialising bo driver; %d\n", ret);
-               return ret;
-       }
-
-       ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
-                            cirrus->mc.vram_size >> PAGE_SHIFT);
-       if (ret) {
-               DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
-               return ret;
-       }
-
-       arch_io_reserve_memtype_wc(pci_resource_start(dev->pdev, 0),
-                                  pci_resource_len(dev->pdev, 0));
-
-       cirrus->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0),
-                                          pci_resource_len(dev->pdev, 0));
-
-       cirrus->mm_inited = true;
-       return 0;
-}
-
-void cirrus_mm_fini(struct cirrus_device *cirrus)
-{
-       struct drm_device *dev = cirrus->dev;
-
-       if (!cirrus->mm_inited)
-               return;
-
-       ttm_bo_device_release(&cirrus->ttm.bdev);
-
-       arch_phys_wc_del(cirrus->fb_mtrr);
-       cirrus->fb_mtrr = 0;
-       arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0),
-                               pci_resource_len(dev->pdev, 0));
-}
-
-void cirrus_ttm_placement(struct cirrus_bo *bo, int domain)
-{
-       u32 c = 0;
-       unsigned i;
-       bo->placement.placement = bo->placements;
-       bo->placement.busy_placement = bo->placements;
-       if (domain & TTM_PL_FLAG_VRAM)
-               bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
-       if (domain & TTM_PL_FLAG_SYSTEM)
-               bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
-       if (!c)
-               bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
-       bo->placement.num_placement = c;
-       bo->placement.num_busy_placement = c;
-       for (i = 0; i < c; ++i) {
-               bo->placements[i].fpfn = 0;
-               bo->placements[i].lpfn = 0;
-       }
-}
-
-int cirrus_bo_create(struct drm_device *dev, int size, int align,
-                 uint32_t flags, struct cirrus_bo **pcirrusbo)
-{
-       struct cirrus_device *cirrus = dev->dev_private;
-       struct cirrus_bo *cirrusbo;
-       size_t acc_size;
-       int ret;
-
-       cirrusbo = kzalloc(sizeof(struct cirrus_bo), GFP_KERNEL);
-       if (!cirrusbo)
-               return -ENOMEM;
-
-       ret = drm_gem_object_init(dev, &cirrusbo->gem, size);
-       if (ret) {
-               kfree(cirrusbo);
-               return ret;
-       }
-
-       cirrusbo->bo.bdev = &cirrus->ttm.bdev;
-
-       cirrus_ttm_placement(cirrusbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
-
-       acc_size = ttm_bo_dma_acc_size(&cirrus->ttm.bdev, size,
-                                      sizeof(struct cirrus_bo));
-
-       ret = ttm_bo_init(&cirrus->ttm.bdev, &cirrusbo->bo, size,
-                         ttm_bo_type_device, &cirrusbo->placement,
-                         align >> PAGE_SHIFT, false, acc_size,
-                         NULL, NULL, cirrus_bo_ttm_destroy);
-       if (ret)
-               return ret;
-
-       *pcirrusbo = cirrusbo;
-       return 0;
-}
-
-static inline u64 cirrus_bo_gpu_offset(struct cirrus_bo *bo)
-{
-       return bo->bo.offset;
-}
-
-int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (bo->pin_count) {
-               bo->pin_count++;
-               if (gpu_addr)
-                       *gpu_addr = cirrus_bo_gpu_offset(bo);
-       }
-
-       cirrus_ttm_placement(bo, pl_flag);
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret)
-               return ret;
-
-       bo->pin_count = 1;
-       if (gpu_addr)
-               *gpu_addr = cirrus_bo_gpu_offset(bo);
-       return 0;
-}
-
-int cirrus_bo_push_sysram(struct cirrus_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-       if (bo->pin_count)
-               return 0;
-
-       if (bo->kmap.virtual)
-               ttm_bo_kunmap(&bo->kmap);
-
-       cirrus_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
-       for (i = 0; i < bo->placement.num_placement ; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret) {
-               DRM_ERROR("pushing to VRAM failed\n");
-               return ret;
-       }
-       return 0;
-}
-
-int cirrus_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-       struct drm_file *file_priv = filp->private_data;
-       struct cirrus_device *cirrus = file_priv->minor->dev->dev_private;
-
-       return ttm_bo_mmap(filp, vma, &cirrus->ttm.bdev);
-}
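The deleted cirrus_ttm_placement() builds the TTM placement list from a domain bitmask, always falling back to system memory when no domain bit is set so the buffer is placeable somewhere. A standalone sketch of the same selection logic, with stand-in flag values (the real TTM_PL_FLAG_* constants differ):

```c
#include <stdint.h>

/* Stand-in flag values; the kernel's TTM_PL_FLAG_* constants differ. */
#define PL_VRAM   (1u << 0)
#define PL_SYSTEM (1u << 1)

/* Fill 'out' with up to 2 placements for the requested domain mask,
 * defaulting to system memory as cirrus_ttm_placement() did.
 * Returns the number of placements written. */
static unsigned int build_placements(uint32_t domain, uint32_t out[2])
{
	unsigned int c = 0;

	if (domain & PL_VRAM)
		out[c++] = PL_VRAM;
	if (domain & PL_SYSTEM)
		out[c++] = PL_SYSTEM;
	if (!c)
		out[c++] = PL_SYSTEM;  /* fallback: always placeable somewhere */
	return c;
}
```

This kind of open-coded placement bookkeeping is exactly what the new drm_gem_vram helpers mentioned in the changelog absorb into shared code.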
index 2e0cb42..79dbeaf 100644 (file)
@@ -1423,7 +1423,7 @@ drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
                ret = wait_event_timeout(dev->vblank[i].queue,
                                old_state->crtcs[i].last_vblank_count !=
                                        drm_crtc_vblank_count(crtc),
-                               msecs_to_jiffies(50));
+                               msecs_to_jiffies(100));
 
                WARN(!ret, "[CRTC:%d:%s] vblank wait timed out\n",
                     crtc->base.id, crtc->name);
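The hunk above doubles the per-CRTC vblank wait from 50 ms to 100 ms. msecs_to_jiffies rounds up so a timeout is never shorter than requested; a sketch of that conversion with HZ as an explicit parameter (this mirrors the generic fallback path only, an assumption, since the kernel also has HZ-specific fast paths):

```c
#include <stdint.h>

/* Milliseconds to timer ticks, rounding up so the resulting wait is
 * never shorter than the requested duration. */
static uint64_t ms_to_jiffies(uint64_t ms, unsigned int hz)
{
	return (ms * hz + 999) / 1000;
}
```

At HZ=250 the new 100 ms timeout is 25 jiffies, where the old 50 ms value rounded up to 13.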
index 59ffb6b..ec13823 100644 (file)
  * for these functions.
  */
 
+/**
+ * __drm_atomic_helper_crtc_reset - reset state on CRTC
+ * @crtc: drm CRTC
+ * @crtc_state: CRTC state to assign
+ *
+ * Initializes the newly allocated @crtc_state and assigns it to
+ * the &drm_crtc->state pointer of @crtc, usually required when
+ * initializing the drivers or when called from the &drm_crtc_funcs.reset
+ * hook.
+ *
+ * This is useful for drivers that subclass the CRTC state.
+ */
+void
+__drm_atomic_helper_crtc_reset(struct drm_crtc *crtc,
+                              struct drm_crtc_state *crtc_state)
+{
+       if (crtc_state)
+               crtc_state->crtc = crtc;
+
+       crtc->state = crtc_state;
+}
+EXPORT_SYMBOL(__drm_atomic_helper_crtc_reset);
+
 /**
  * drm_atomic_helper_crtc_reset - default &drm_crtc_funcs.reset hook for CRTCs
  * @crtc: drm CRTC
  */
 void drm_atomic_helper_crtc_reset(struct drm_crtc *crtc)
 {
-       if (crtc->state)
-               __drm_atomic_helper_crtc_destroy_state(crtc->state);
-
-       kfree(crtc->state);
-       crtc->state = kzalloc(sizeof(*crtc->state), GFP_KERNEL);
+       struct drm_crtc_state *crtc_state =
+               kzalloc(sizeof(*crtc->state), GFP_KERNEL);
 
        if (crtc->state)
-               crtc->state->crtc = crtc;
+               crtc->funcs->atomic_destroy_state(crtc, crtc->state);
+
+       __drm_atomic_helper_crtc_reset(crtc, crtc_state);
 }
 EXPORT_SYMBOL(drm_atomic_helper_crtc_reset);
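The new __drm_atomic_helper_crtc_reset() exists so drivers that embed drm_crtc_state inside a larger driver-private state can install their subclass from their own reset hook. That subclassing relies on embedding the base struct and recovering the container via container_of; a self-contained illustration with made-up struct names:

```c
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical driver state subclassing a base state by embedding it. */
struct base_state {
	int active;
};

struct driver_state {
	struct base_state base;  /* recoverable via container_of */
	int custom_flag;
};

/* Given a pointer to the embedded base, recover the driver state. */
static struct driver_state *to_driver_state(struct base_state *s)
{
	return container_of(s, struct driver_state, base);
}
```

Because the helper only wires `crtc_state->crtc` and `crtc->state`, the driver remains free to allocate `sizeof(struct driver_state)` and pass in the embedded base pointer.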
 
@@ -314,7 +336,7 @@ EXPORT_SYMBOL(drm_atomic_helper_plane_destroy_state);
  * @conn_state: connector state to assign
  *
  * Initializes the newly allocated @conn_state and assigns it to
- * the &drm_conector->state pointer of @connector, usually required when
+ * the &drm_connector->state pointer of @connector, usually required when
  * initializing the drivers or when called from the &drm_connector_funcs.reset
  * hook.
  *
@@ -369,6 +391,9 @@ __drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector,
                drm_connector_get(connector);
        state->commit = NULL;
 
+       if (state->hdr_output_metadata)
+               drm_property_blob_get(state->hdr_output_metadata);
+
        /* Don't copy over a writeback job, they are used only once */
        state->writeback_job = NULL;
 }
@@ -416,6 +441,8 @@ __drm_atomic_helper_connector_destroy_state(struct drm_connector_state *state)
 
        if (state->writeback_job)
                drm_writeback_cleanup_job(state->writeback_job);
+
+       drm_property_blob_put(state->hdr_output_metadata);
 }
 EXPORT_SYMBOL(__drm_atomic_helper_connector_destroy_state);
 
index 428d826..125605f 100644 (file)
@@ -676,6 +676,8 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
 {
        struct drm_device *dev = connector->dev;
        struct drm_mode_config *config = &dev->mode_config;
+       bool replaced = false;
+       int ret;
 
        if (property == config->prop_crtc_id) {
                struct drm_crtc *crtc = drm_crtc_find(dev, file_priv, val);
@@ -726,6 +728,13 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
                 */
                if (state->link_status != DRM_LINK_STATUS_GOOD)
                        state->link_status = val;
+       } else if (property == config->hdr_output_metadata_property) {
+               ret = drm_atomic_replace_property_blob_from_id(dev,
+                               &state->hdr_output_metadata,
+                               val,
+                               sizeof(struct hdr_output_metadata), -1,
+                               &replaced);
+               return ret;
        } else if (property == config->aspect_ratio_property) {
                state->picture_aspect_ratio = val;
        } else if (property == config->content_type_property) {
@@ -814,6 +823,9 @@ drm_atomic_connector_get_property(struct drm_connector *connector,
                *val = state->colorspace;
        } else if (property == connector->scaling_mode_property) {
                *val = state->scaling_mode;
+       } else if (property == config->hdr_output_metadata_property) {
+               *val = state->hdr_output_metadata ?
+                       state->hdr_output_metadata->base.id : 0;
        } else if (property == connector->content_protection_property) {
                *val = state->content_protection;
        } else if (property == config->writeback_fb_id_property) {
index 22c7a10..bf98402 100644 (file)
@@ -351,3 +351,23 @@ void drm_master_put(struct drm_master **master)
        *master = NULL;
 }
 EXPORT_SYMBOL(drm_master_put);
+
+/* Used by drm_client and drm_fb_helper */
+bool drm_master_internal_acquire(struct drm_device *dev)
+{
+       mutex_lock(&dev->master_mutex);
+       if (dev->master) {
+               mutex_unlock(&dev->master_mutex);
+               return false;
+       }
+
+       return true;
+}
+EXPORT_SYMBOL(drm_master_internal_acquire);
+
+/* Used by drm_client and drm_fb_helper */
+void drm_master_internal_release(struct drm_device *dev)
+{
+       mutex_unlock(&dev->master_mutex);
+}
+EXPORT_SYMBOL(drm_master_internal_release);
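drm_master_internal_acquire() above takes the master mutex but backs out if a userspace master exists, so in-kernel clients never interpose on a real master; on success the caller holds the lock until the matching release. The same "lock, check, bail" shape with pthreads (names are illustrative, not DRM API):

```c
#include <pthread.h>
#include <stdbool.h>

/* Minimal analogue of drm_master_internal_acquire(): take the lock,
 * but release it and fail if the resource already has an owner. */
struct master {
	pthread_mutex_t lock;
	bool owned;  /* stands in for dev->master != NULL */
};

static bool internal_acquire(struct master *m)
{
	pthread_mutex_lock(&m->lock);
	if (m->owned) {
		pthread_mutex_unlock(&m->lock);
		return false;  /* a real master exists: do not interpose */
	}
	return true;           /* caller holds the lock on success */
}

static void internal_release(struct master *m)
{
	pthread_mutex_unlock(&m->lock);
}
```

Note the asymmetry: release is unconditional, so it must only be called after a successful acquire.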
index f20d1dd..5abcd83 100644 (file)
@@ -243,6 +243,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 static struct drm_client_buffer *
 drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format)
 {
+       const struct drm_format_info *info = drm_format_info(format);
        struct drm_mode_create_dumb dumb_args = { };
        struct drm_device *dev = client->dev;
        struct drm_client_buffer *buffer;
@@ -258,7 +259,7 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
 
        dumb_args.width = width;
        dumb_args.height = height;
-       dumb_args.bpp = drm_format_plane_cpp(format, 0) * 8;
+       dumb_args.bpp = info->cpp[0] * 8;
        ret = drm_mode_create_dumb(dev, &dumb_args, client->file);
        if (ret)
                goto err_delete;
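The drm_client change replaces the per-format drm_format_plane_cpp() helper with a struct drm_format_info lookup; the dumb-buffer bpp is simply plane 0's bytes-per-pixel times 8. A toy lookup table illustrating the idea (the table contents are illustrative, not the kernel's format database):

```c
#include <stdint.h>
#include <string.h>

/* Tiny stand-in for struct drm_format_info: bytes per pixel of plane 0. */
struct fmt_info {
	const char *name;
	uint8_t cpp0;  /* cpp[0] in the kernel structure */
};

static const struct fmt_info fmt_table[] = {
	{ "XRGB8888", 4 },
	{ "RGB565",   2 },
	{ "C8",       1 },
};

/* bpp as computed for dumb-buffer creation: cpp[0] * 8, or 0 if unknown. */
static unsigned int fmt_bpp(const char *name)
{
	for (size_t i = 0; i < sizeof(fmt_table) / sizeof(fmt_table[0]); i++)
		if (!strcmp(fmt_table[i].name, name))
			return fmt_table[i].cpp0 * 8;
	return 0;
}
```

Centralizing on drm_format_info is what lets the series remove the now-obsolete drm_format_* lookup functions mentioned in the changelog.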
index b34c3d3..365ace0 100644 (file)
@@ -1058,6 +1058,12 @@ int drm_connector_create_standard_properties(struct drm_device *dev)
                return -ENOMEM;
        dev->mode_config.non_desktop_property = prop;
 
+       prop = drm_property_create(dev, DRM_MODE_PROP_BLOB,
+                                  "HDR_OUTPUT_METADATA", 0);
+       if (!prop)
+               return -ENOMEM;
+       dev->mode_config.hdr_output_metadata_property = prop;
+
        return 0;
 }
 
index 0e4f25d..5be28e3 100644 (file)
 
 #include <linux/device.h>
 #include <linux/fs.h>
-#include <linux/slab.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/sched/signal.h>
+#include <linux/slab.h>
 #include <linux/uaccess.h>
 #include <linux/uio.h>
-#include <drm/drm_dp_helper.h>
+
 #include <drm/drm_crtc.h>
-#include <drm/drmP.h>
+#include <drm/drm_dp_helper.h>
+#include <drm/drm_print.h>
 
 #include "drm_crtc_helper_internal.h"
 
index e7f4fe2..1c9ea9f 100644 (file)
  * OTHER DEALINGS IN THE SOFTWARE.
  */
 
+#include <linux/delay.h>
 #include <linux/errno.h>
 #include <linux/export.h>
 #include <linux/i2c.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+
 #include <drm/drm_dp_dual_mode_helper.h>
-#include <drm/drmP.h>
+#include <drm/drm_print.h>
 
 /**
  * DOC: dp dual mode helpers
index 54a6414..e6af758 100644 (file)
  * OF THIS SOFTWARE.
  */
 
-#include <linux/kernel.h>
-#include <linux/module.h>
 #include <linux/delay.h>
-#include <linux/init.h>
 #include <linux/errno.h>
-#include <linux/sched.h>
 #include <linux/i2c.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
 #include <linux/seq_file.h>
+
 #include <drm/drm_dp_helper.h>
-#include <drm/drmP.h>
+#include <drm/drm_print.h>
+#include <drm/drm_vblank.h>
 
 #include "drm_crtc_helper_internal.h"
 
index c630ed1..da1abca 100644 (file)
  * OF THIS SOFTWARE.
  */
 
-#include <linux/kernel.h>
 #include <linux/delay.h>
-#include <linux/init.h>
 #include <linux/errno.h>
+#include <linux/i2c.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
 #include <linux/sched.h>
 #include <linux/seq_file.h>
-#include <linux/i2c.h>
-#include <drm/drm_dp_mst_helper.h>
-#include <drm/drmP.h>
 
-#include <drm/drm_fixed.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_dp_mst_helper.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_fixed.h>
+#include <drm/drm_print.h>
 #include <drm/drm_probe_helper.h>
 
 /**
index 649cfd8..d87f574 100644 (file)
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  */
-#include <linux/kernel.h>
-#include <linux/slab.h>
+
 #include <linux/hdmi.h>
 #include <linux/i2c.h>
+#include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/slab.h>
 #include <linux/vga_switcheroo.h>
-#include <drm/drmP.h>
+
+#include <drm/drm_displayid.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_encoder.h>
-#include <drm/drm_displayid.h>
+#include <drm/drm_print.h>
 #include <drm/drm_scdc_helper.h>
 
 #include "drm_crtc_internal.h"
@@ -2849,6 +2852,7 @@ add_detailed_modes(struct drm_connector *connector, struct edid *edid,
 #define VIDEO_BLOCK     0x02
 #define VENDOR_BLOCK    0x03
 #define SPEAKER_BLOCK  0x04
+#define HDR_STATIC_METADATA_BLOCK      0x6
 #define USE_EXTENDED_TAG 0x07
 #define EXT_VIDEO_CAPABILITY_BLOCK 0x00
 #define EXT_VIDEO_DATA_BLOCK_420       0x0E
@@ -3831,6 +3835,55 @@ static void fixup_detailed_cea_mode_clock(struct drm_display_mode *mode)
        mode->clock = clock;
 }
 
+static bool cea_db_is_hdmi_hdr_metadata_block(const u8 *db)
+{
+       if (cea_db_tag(db) != USE_EXTENDED_TAG)
+               return false;
+
+       if (db[1] != HDR_STATIC_METADATA_BLOCK)
+               return false;
+
+       if (cea_db_payload_len(db) < 3)
+               return false;
+
+       return true;
+}
+
+static uint8_t eotf_supported(const u8 *edid_ext)
+{
+       return edid_ext[2] &
+               (BIT(HDMI_EOTF_TRADITIONAL_GAMMA_SDR) |
+                BIT(HDMI_EOTF_TRADITIONAL_GAMMA_HDR) |
+                BIT(HDMI_EOTF_SMPTE_ST2084) |
+                BIT(HDMI_EOTF_BT_2100_HLG));
+}
+
+static uint8_t hdr_metadata_type(const u8 *edid_ext)
+{
+       return edid_ext[3] &
+               BIT(HDMI_STATIC_METADATA_TYPE1);
+}
+
+static void
+drm_parse_hdr_metadata_block(struct drm_connector *connector, const u8 *db)
+{
+       u16 len;
+
+       len = cea_db_payload_len(db);
+
+       connector->hdr_sink_metadata.hdmi_type1.eotf =
+                                               eotf_supported(db);
+       connector->hdr_sink_metadata.hdmi_type1.metadata_type =
+                                               hdr_metadata_type(db);
+
+       if (len >= 4)
+               connector->hdr_sink_metadata.hdmi_type1.max_cll = db[4];
+       if (len >= 5)
+               connector->hdr_sink_metadata.hdmi_type1.max_fall = db[5];
+       if (len >= 6)
+               connector->hdr_sink_metadata.hdmi_type1.min_cll = db[6];
+}
+
 static void
 drm_parse_hdmi_vsdb_audio(struct drm_connector *connector, const u8 *db)
 {
@@ -4458,6 +4511,8 @@ static void drm_parse_cea_ext(struct drm_connector *connector,
                        drm_parse_y420cmdb_bitmap(connector, db);
                if (cea_db_is_vcdb(db))
                        drm_parse_vcdb(connector, db);
+               if (cea_db_is_hdmi_hdr_metadata_block(db))
+                       drm_parse_hdr_metadata_block(connector, db);
        }
 }
 
@@ -4850,6 +4905,78 @@ static bool is_hdmi2_sink(struct drm_connector *connector)
                connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB420;
 }
 
+static inline bool is_eotf_supported(u8 output_eotf, u8 sink_eotf)
+{
+       return sink_eotf & BIT(output_eotf);
+}
+
+/**
+ * drm_hdmi_infoframe_set_hdr_metadata() - fill an HDMI DRM infoframe with
+ *                                         HDR metadata from userspace
+ * @frame: HDMI DRM infoframe
+ * @conn_state: Connector state containing HDR metadata
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int
+drm_hdmi_infoframe_set_hdr_metadata(struct hdmi_drm_infoframe *frame,
+                                   const struct drm_connector_state *conn_state)
+{
+       struct drm_connector *connector;
+       struct hdr_output_metadata *hdr_metadata;
+       int err;
+
+       if (!frame || !conn_state)
+               return -EINVAL;
+
+       connector = conn_state->connector;
+
+       if (!conn_state->hdr_output_metadata)
+               return -EINVAL;
+
+       hdr_metadata = conn_state->hdr_output_metadata->data;
+
+       if (!hdr_metadata || !connector)
+               return -EINVAL;
+
+       /* Sink EOTF is a bitmap while the infoframe carries absolute values */
+       if (!is_eotf_supported(hdr_metadata->hdmi_metadata_type1.eotf,
+           connector->hdr_sink_metadata.hdmi_type1.eotf)) {
+               DRM_DEBUG_KMS("EOTF Not Supported\n");
+               return -EINVAL;
+       }
+
+       err = hdmi_drm_infoframe_init(frame);
+       if (err < 0)
+               return err;
+
+       frame->eotf = hdr_metadata->hdmi_metadata_type1.eotf;
+       frame->metadata_type = hdr_metadata->hdmi_metadata_type1.metadata_type;
+
+       BUILD_BUG_ON(sizeof(frame->display_primaries) !=
+                    sizeof(hdr_metadata->hdmi_metadata_type1.display_primaries));
+       BUILD_BUG_ON(sizeof(frame->white_point) !=
+                    sizeof(hdr_metadata->hdmi_metadata_type1.white_point));
+
+       memcpy(&frame->display_primaries,
+              &hdr_metadata->hdmi_metadata_type1.display_primaries,
+              sizeof(frame->display_primaries));
+
+       memcpy(&frame->white_point,
+              &hdr_metadata->hdmi_metadata_type1.white_point,
+              sizeof(frame->white_point));
+
+       frame->max_display_mastering_luminance =
+               hdr_metadata->hdmi_metadata_type1.max_display_mastering_luminance;
+       frame->min_display_mastering_luminance =
+               hdr_metadata->hdmi_metadata_type1.min_display_mastering_luminance;
+       frame->max_fall = hdr_metadata->hdmi_metadata_type1.max_fall;
+       frame->max_cll = hdr_metadata->hdmi_metadata_type1.max_cll;
+
+       return 0;
+}
+EXPORT_SYMBOL(drm_hdmi_infoframe_set_hdr_metadata);
+
 /**
  * drm_hdmi_avi_infoframe_from_display_mode() - fill an HDMI AVI infoframe with
  *                                              data from a DRM display mode
index 1e55935..feb1df9 100644 (file)
@@ -7,12 +7,15 @@
 
 */
 
-#include <linux/module.h>
 #include <linux/firmware.h>
-#include <drm/drmP.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+
 #include <drm/drm_crtc.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_edid.h>
+#include <drm/drm_print.h>
 
 static char edid_firmware[PATH_MAX];
 module_param_string(edid_firmware, edid_firmware, sizeof(edid_firmware), 0644);
index 498f95c..302cf5f 100644 (file)
@@ -44,6 +44,7 @@
 
 #include "drm_crtc_internal.h"
 #include "drm_crtc_helper_internal.h"
+#include "drm_internal.h"
 
 static bool drm_fbdev_emulation = true;
 module_param_named(fbdev_emulation, drm_fbdev_emulation, bool, 0600);
@@ -387,6 +388,49 @@ int drm_fb_helper_debug_leave(struct fb_info *info)
 }
 EXPORT_SYMBOL(drm_fb_helper_debug_leave);
 
+/* Check if the plane can hw rotate to match panel orientation */
+static bool drm_fb_helper_panel_rotation(struct drm_mode_set *modeset,
+                                        unsigned int *rotation)
+{
+       struct drm_connector *connector;
+       struct drm_plane *plane = modeset->crtc->primary;
+       u64 valid_mask = 0;
+       unsigned int i;
+
+       if (!modeset->num_connectors)
+               return false;
+
+       connector = modeset->connectors[0];
+
+       switch (connector->display_info.panel_orientation) {
+       case DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP:
+               *rotation = DRM_MODE_ROTATE_180;
+               break;
+       case DRM_MODE_PANEL_ORIENTATION_LEFT_UP:
+               *rotation = DRM_MODE_ROTATE_90;
+               break;
+       case DRM_MODE_PANEL_ORIENTATION_RIGHT_UP:
+               *rotation = DRM_MODE_ROTATE_270;
+               break;
+       default:
+               *rotation = DRM_MODE_ROTATE_0;
+       }
+
+       /*
+        * TODO: support 90 / 270 degree hardware rotation,
+        * depending on the hardware this may require the framebuffer
+        * to be in a specific tiling format.
+        */
+       if (*rotation != DRM_MODE_ROTATE_180 || !plane->rotation_property)
+               return false;
+
+       for (i = 0; i < plane->rotation_property->num_values; i++)
+               valid_mask |= (1ULL << plane->rotation_property->values[i]);
+
+       if (!(*rotation & valid_mask))
+               return false;
+
+       return true;
+}
+
 static int restore_fbdev_mode_atomic(struct drm_fb_helper *fb_helper, bool active)
 {
        struct drm_device *dev = fb_helper->dev;
@@ -427,10 +471,13 @@ retry:
        for (i = 0; i < fb_helper->crtc_count; i++) {
                struct drm_mode_set *mode_set = &fb_helper->crtc_info[i].mode_set;
                struct drm_plane *primary = mode_set->crtc->primary;
+               unsigned int rotation;
 
-               /* Cannot fail as we've already gotten the plane state above */
-               plane_state = drm_atomic_get_new_plane_state(state, primary);
-               plane_state->rotation = fb_helper->crtc_info[i].rotation;
+               if (drm_fb_helper_panel_rotation(mode_set, &rotation)) {
+                       /* Cannot fail as we've already gotten the plane state above */
+                       plane_state = drm_atomic_get_new_plane_state(state, primary);
+                       plane_state->rotation = rotation;
+               }
 
                ret = __drm_atomic_helper_set_config(mode_set, state);
                if (ret != 0)
@@ -509,7 +556,7 @@ out:
        return ret;
 }
 
-static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
+static int restore_fbdev_mode_force(struct drm_fb_helper *fb_helper)
 {
        struct drm_device *dev = fb_helper->dev;
 
@@ -519,6 +566,21 @@ static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
                return restore_fbdev_mode_legacy(fb_helper);
 }
 
+static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
+{
+       struct drm_device *dev = fb_helper->dev;
+       int ret;
+
+       if (!drm_master_internal_acquire(dev))
+               return -EBUSY;
+
+       ret = restore_fbdev_mode_force(fb_helper);
+
+       drm_master_internal_release(dev);
+
+       return ret;
+}
+
 /**
  * drm_fb_helper_restore_fbdev_mode_unlocked - restore fbdev configuration
  * @fb_helper: driver-allocated fbdev helper, can be NULL
@@ -542,7 +604,17 @@ int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
                return 0;
 
        mutex_lock(&fb_helper->lock);
-       ret = restore_fbdev_mode(fb_helper);
+       /*
+        * TODO:
+        * We should bail out here if there is a master by dropping _force.
+        * Currently these igt tests fail if we do that:
+        * - kms_fbcon_fbt@psr
+        * - kms_fbcon_fbt@psr-suspend
+        *
+        * So first these tests need to be fixed so they drop master or don't
+        * have an fd open.
+        */
+       ret = restore_fbdev_mode_force(fb_helper);
 
        do_delayed = fb_helper->delayed_hotplug;
        if (do_delayed)
@@ -556,34 +628,6 @@ int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
 }
 EXPORT_SYMBOL(drm_fb_helper_restore_fbdev_mode_unlocked);
 
-static bool drm_fb_helper_is_bound(struct drm_fb_helper *fb_helper)
-{
-       struct drm_device *dev = fb_helper->dev;
-       struct drm_crtc *crtc;
-       int bound = 0, crtcs_bound = 0;
-
-       /*
-        * Sometimes user space wants everything disabled, so don't steal the
-        * display if there's a master.
-        */
-       if (READ_ONCE(dev->master))
-               return false;
-
-       drm_for_each_crtc(crtc, dev) {
-               drm_modeset_lock(&crtc->mutex, NULL);
-               if (crtc->primary->fb)
-                       crtcs_bound++;
-               if (crtc->primary->fb == fb_helper->fb)
-                       bound++;
-               drm_modeset_unlock(&crtc->mutex);
-       }
-
-       if (bound < crtcs_bound)
-               return false;
-
-       return true;
-}
-
 #ifdef CONFIG_MAGIC_SYSRQ
 /*
  * restore fbcon display for all kms driver's using this helper, used for sysrq
@@ -604,7 +648,7 @@ static bool drm_fb_helper_force_kernel_mode(void)
                        continue;
 
                mutex_lock(&helper->lock);
-               ret = restore_fbdev_mode(helper);
+               ret = restore_fbdev_mode_force(helper);
                if (ret)
                        error = true;
                mutex_unlock(&helper->lock);
@@ -663,20 +707,22 @@ static void dpms_legacy(struct drm_fb_helper *fb_helper, int dpms_mode)
 static void drm_fb_helper_dpms(struct fb_info *info, int dpms_mode)
 {
        struct drm_fb_helper *fb_helper = info->par;
+       struct drm_device *dev = fb_helper->dev;
 
        /*
         * For each CRTC in this fb, turn the connectors on/off.
         */
        mutex_lock(&fb_helper->lock);
-       if (!drm_fb_helper_is_bound(fb_helper)) {
-               mutex_unlock(&fb_helper->lock);
-               return;
-       }
+       if (!drm_master_internal_acquire(dev))
+               goto unlock;
 
-       if (drm_drv_uses_atomic_modeset(fb_helper->dev))
+       if (drm_drv_uses_atomic_modeset(dev))
                restore_fbdev_mode_atomic(fb_helper, dpms_mode == DRM_MODE_DPMS_ON);
        else
                dpms_legacy(fb_helper, dpms_mode);
+
+       drm_master_internal_release(dev);
+unlock:
        mutex_unlock(&fb_helper->lock);
 }
 
@@ -767,7 +813,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
                                          struct drm_clip_rect *clip)
 {
        struct drm_framebuffer *fb = fb_helper->fb;
-       unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0);
+       unsigned int cpp = fb->format->cpp[0];
        size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
        void *src = fb_helper->fbdev->screen_buffer + offset;
        void *dst = fb_helper->buffer->vaddr + offset;
@@ -881,7 +927,6 @@ int drm_fb_helper_init(struct drm_device *dev,
                if (!fb_helper->crtc_info[i].mode_set.connectors)
                        goto out_free;
                fb_helper->crtc_info[i].mode_set.num_connectors = 0;
-               fb_helper->crtc_info[i].rotation = DRM_MODE_ROTATE_0;
        }
 
        i = 0;
@@ -1509,6 +1554,7 @@ backoff:
 int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
 {
        struct drm_fb_helper *fb_helper = info->par;
+       struct drm_device *dev = fb_helper->dev;
        int ret;
 
        if (oops_in_progress)
@@ -1516,9 +1562,9 @@ int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
 
        mutex_lock(&fb_helper->lock);
 
-       if (!drm_fb_helper_is_bound(fb_helper)) {
+       if (!drm_master_internal_acquire(dev)) {
                ret = -EBUSY;
-               goto out;
+               goto unlock;
        }
 
        if (info->fix.visual == FB_VISUAL_TRUECOLOR)
@@ -1528,7 +1574,8 @@ int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
        else
                ret = setcmap_legacy(cmap, info);
 
-out:
+       drm_master_internal_release(dev);
+unlock:
        mutex_unlock(&fb_helper->lock);
 
        return ret;
@@ -1548,12 +1595,13 @@ int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
                        unsigned long arg)
 {
        struct drm_fb_helper *fb_helper = info->par;
+       struct drm_device *dev = fb_helper->dev;
        struct drm_mode_set *mode_set;
        struct drm_crtc *crtc;
        int ret = 0;
 
        mutex_lock(&fb_helper->lock);
-       if (!drm_fb_helper_is_bound(fb_helper)) {
+       if (!drm_master_internal_acquire(dev)) {
                ret = -EBUSY;
                goto unlock;
        }
@@ -1591,11 +1639,12 @@ int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
                }
 
                ret = 0;
-               goto unlock;
+               break;
        default:
                ret = -ENOTTY;
        }
 
+       drm_master_internal_release(dev);
 unlock:
        mutex_unlock(&fb_helper->lock);
        return ret;
@@ -1847,15 +1896,18 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
                return -EBUSY;
 
        mutex_lock(&fb_helper->lock);
-       if (!drm_fb_helper_is_bound(fb_helper)) {
-               mutex_unlock(&fb_helper->lock);
-               return -EBUSY;
+       if (!drm_master_internal_acquire(dev)) {
+               ret = -EBUSY;
+               goto unlock;
        }
 
        if (drm_drv_uses_atomic_modeset(dev))
                ret = pan_display_atomic(var, info);
        else
                ret = pan_display_legacy(var, info);
+
+       drm_master_internal_release(dev);
+unlock:
        mutex_unlock(&fb_helper->lock);
 
        return ret;
@@ -1979,16 +2031,16 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
                 */
                bool lastv = true, lasth = true;
 
-               desired_mode = fb_helper->crtc_info[i].desired_mode;
                mode_set = &fb_helper->crtc_info[i].mode_set;
+               desired_mode = mode_set->mode;
 
                if (!desired_mode)
                        continue;
 
                crtc_count++;
 
-               x = fb_helper->crtc_info[i].x;
-               y = fb_helper->crtc_info[i].y;
+               x = mode_set->x;
+               y = mode_set->y;
 
                sizes.surface_width  = max_t(u32, desired_mode->hdisplay + x, sizes.surface_width);
                sizes.surface_height = max_t(u32, desired_mode->vdisplay + y, sizes.surface_height);
@@ -2014,7 +2066,7 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
                DRM_INFO("Cannot find any crtc or sizes\n");
 
                /* First time: disable all crtc's.. */
-               if (!fb_helper->deferred_setup && !READ_ONCE(fb_helper->dev->master))
+               if (!fb_helper->deferred_setup)
                        restore_fbdev_mode(fb_helper);
                return -EAGAIN;
        }
@@ -2503,62 +2555,6 @@ static int drm_pick_crtcs(struct drm_fb_helper *fb_helper,
        return best_score;
 }
 
-/*
- * This function checks if rotation is necessary because of panel orientation
- * and if it is, if it is supported.
- * If rotation is necessary and supported, it gets set in fb_crtc.rotation.
- * If rotation is necessary but not supported, a DRM_MODE_ROTATE_* flag gets
- * or-ed into fb_helper->sw_rotations. In drm_setup_crtcs_fb() we check if only
- * one bit is set and then we set fb_info.fbcon_rotate_hint to make fbcon do
- * the unsupported rotation.
- */
-static void drm_setup_crtc_rotation(struct drm_fb_helper *fb_helper,
-                                   struct drm_fb_helper_crtc *fb_crtc,
-                                   struct drm_connector *connector)
-{
-       struct drm_plane *plane = fb_crtc->mode_set.crtc->primary;
-       uint64_t valid_mask = 0;
-       int i, rotation;
-
-       fb_crtc->rotation = DRM_MODE_ROTATE_0;
-
-       switch (connector->display_info.panel_orientation) {
-       case DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP:
-               rotation = DRM_MODE_ROTATE_180;
-               break;
-       case DRM_MODE_PANEL_ORIENTATION_LEFT_UP:
-               rotation = DRM_MODE_ROTATE_90;
-               break;
-       case DRM_MODE_PANEL_ORIENTATION_RIGHT_UP:
-               rotation = DRM_MODE_ROTATE_270;
-               break;
-       default:
-               rotation = DRM_MODE_ROTATE_0;
-       }
-
-       /*
-        * TODO: support 90 / 270 degree hardware rotation,
-        * depending on the hardware this may require the framebuffer
-        * to be in a specific tiling format.
-        */
-       if (rotation != DRM_MODE_ROTATE_180 || !plane->rotation_property) {
-               fb_helper->sw_rotations |= rotation;
-               return;
-       }
-
-       for (i = 0; i < plane->rotation_property->num_values; i++)
-               valid_mask |= (1ULL << plane->rotation_property->values[i]);
-
-       if (!(rotation & valid_mask)) {
-               fb_helper->sw_rotations |= rotation;
-               return;
-       }
-
-       fb_crtc->rotation = rotation;
-       /* Rotating in hardware, fbcon should not rotate */
-       fb_helper->sw_rotations |= DRM_MODE_ROTATE_0;
-}
-
 static struct drm_fb_helper_crtc *
 drm_fb_helper_crtc(struct drm_fb_helper *fb_helper, struct drm_crtc *crtc)
 {
@@ -2805,7 +2801,6 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
                drm_fb_helper_modeset_release(fb_helper,
                                              &fb_helper->crtc_info[i].mode_set);
 
-       fb_helper->sw_rotations = 0;
        drm_fb_helper_for_each_connector(fb_helper, i) {
                struct drm_display_mode *mode = modes[i];
                struct drm_fb_helper_crtc *fb_crtc = crtcs[i];
@@ -2819,13 +2814,8 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
                        DRM_DEBUG_KMS("desired mode %s set on crtc %d (%d,%d)\n",
                                      mode->name, fb_crtc->mode_set.crtc->base.id, offset->x, offset->y);
 
-                       fb_crtc->desired_mode = mode;
-                       fb_crtc->x = offset->x;
-                       fb_crtc->y = offset->y;
-                       modeset->mode = drm_mode_duplicate(dev,
-                                                          fb_crtc->desired_mode);
+                       modeset->mode = drm_mode_duplicate(dev, mode);
                        drm_connector_get(connector);
-                       drm_setup_crtc_rotation(fb_helper, fb_crtc, connector);
                        modeset->connectors[modeset->num_connectors++] = connector;
                        modeset->x = offset->x;
                        modeset->y = offset->y;
@@ -2848,11 +2838,23 @@ out:
 static void drm_setup_crtcs_fb(struct drm_fb_helper *fb_helper)
 {
        struct fb_info *info = fb_helper->fbdev;
+       unsigned int rotation, sw_rotations = 0;
        int i;
 
-       for (i = 0; i < fb_helper->crtc_count; i++)
-               if (fb_helper->crtc_info[i].mode_set.num_connectors)
-                       fb_helper->crtc_info[i].mode_set.fb = fb_helper->fb;
+       for (i = 0; i < fb_helper->crtc_count; i++) {
+               struct drm_mode_set *modeset = &fb_helper->crtc_info[i].mode_set;
+
+               if (!modeset->num_connectors)
+                       continue;
+
+               modeset->fb = fb_helper->fb;
+
+               if (drm_fb_helper_panel_rotation(modeset, &rotation))
+                       /* Rotating in hardware, fbcon should not rotate */
+                       sw_rotations |= DRM_MODE_ROTATE_0;
+               else
+                       sw_rotations |= rotation;
+       }
 
        mutex_lock(&fb_helper->dev->mode_config.mutex);
        drm_fb_helper_for_each_connector(fb_helper, i) {
@@ -2868,7 +2870,7 @@ static void drm_setup_crtcs_fb(struct drm_fb_helper *fb_helper)
        }
        mutex_unlock(&fb_helper->dev->mode_config.mutex);
 
-       switch (fb_helper->sw_rotations) {
+       switch (sw_rotations) {
        case DRM_MODE_ROTATE_0:
                info->fbcon_rotate_hint = FB_ROTATE_UR;
                break;
@@ -3041,12 +3043,14 @@ int drm_fb_helper_hotplug_event(struct drm_fb_helper *fb_helper)
                return err;
        }
 
-       if (!fb_helper->fb || !drm_fb_helper_is_bound(fb_helper)) {
+       if (!fb_helper->fb || !drm_master_internal_acquire(fb_helper->dev)) {
                fb_helper->delayed_hotplug = true;
                mutex_unlock(&fb_helper->lock);
                return err;
        }
 
+       drm_master_internal_release(fb_helper->dev);
+
        DRM_DEBUG_KMS("\n");
 
        drm_setup_crtcs(fb_helper, fb_helper->fb->width, fb_helper->fb->height);
index 233f114..075a776 100644 (file)
@@ -100,8 +100,6 @@ DEFINE_MUTEX(drm_global_mutex);
  * :ref:`IOCTL support in the userland interfaces chapter<drm_driver_ioctl>`.
  */
 
-static int drm_open_helper(struct file *filp, struct drm_minor *minor);
-
 /**
  * drm_file_alloc - allocate file context
  * @minor: minor to allocate on
@@ -273,76 +271,6 @@ static void drm_close_helper(struct file *filp)
        drm_file_free(file_priv);
 }
 
-static int drm_setup(struct drm_device * dev)
-{
-       int ret;
-
-       if (dev->driver->firstopen &&
-           drm_core_check_feature(dev, DRIVER_LEGACY)) {
-               ret = dev->driver->firstopen(dev);
-               if (ret != 0)
-                       return ret;
-       }
-
-       ret = drm_legacy_dma_setup(dev);
-       if (ret < 0)
-               return ret;
-
-
-       DRM_DEBUG("\n");
-       return 0;
-}
-
-/**
- * drm_open - open method for DRM file
- * @inode: device inode
- * @filp: file pointer.
- *
- * This function must be used by drivers as their &file_operations.open method.
- * It looks up the correct DRM device and instantiates all the per-file
- * resources for it. It also calls the &drm_driver.open driver callback.
- *
- * RETURNS:
- *
- * 0 on success or negative errno value on falure.
- */
-int drm_open(struct inode *inode, struct file *filp)
-{
-       struct drm_device *dev;
-       struct drm_minor *minor;
-       int retcode;
-       int need_setup = 0;
-
-       minor = drm_minor_acquire(iminor(inode));
-       if (IS_ERR(minor))
-               return PTR_ERR(minor);
-
-       dev = minor->dev;
-       if (!dev->open_count++)
-               need_setup = 1;
-
-       /* share address_space across all char-devs of a single device */
-       filp->f_mapping = dev->anon_inode->i_mapping;
-
-       retcode = drm_open_helper(filp, minor);
-       if (retcode)
-               goto err_undo;
-       if (need_setup) {
-               retcode = drm_setup(dev);
-               if (retcode) {
-                       drm_close_helper(filp);
-                       goto err_undo;
-               }
-       }
-       return 0;
-
-err_undo:
-       dev->open_count--;
-       drm_minor_release(minor);
-       return retcode;
-}
-EXPORT_SYMBOL(drm_open);
-
 /*
  * Check whether DRI will run on this CPU.
  *
@@ -424,6 +352,56 @@ static int drm_open_helper(struct file *filp, struct drm_minor *minor)
        return 0;
 }
 
+/**
+ * drm_open - open method for DRM file
+ * @inode: device inode
+ * @filp: file pointer.
+ *
+ * This function must be used by drivers as their &file_operations.open method.
+ * It looks up the correct DRM device and instantiates all the per-file
+ * resources for it. It also calls the &drm_driver.open driver callback.
+ *
+ * RETURNS:
+ *
+ * 0 on success or negative errno value on failure.
+ */
+int drm_open(struct inode *inode, struct file *filp)
+{
+       struct drm_device *dev;
+       struct drm_minor *minor;
+       int retcode;
+       int need_setup = 0;
+
+       minor = drm_minor_acquire(iminor(inode));
+       if (IS_ERR(minor))
+               return PTR_ERR(minor);
+
+       dev = minor->dev;
+       if (!dev->open_count++)
+               need_setup = 1;
+
+       /* share address_space across all char-devs of a single device */
+       filp->f_mapping = dev->anon_inode->i_mapping;
+
+       retcode = drm_open_helper(filp, minor);
+       if (retcode)
+               goto err_undo;
+       if (need_setup) {
+               retcode = drm_legacy_setup(dev);
+               if (retcode) {
+                       drm_close_helper(filp);
+                       goto err_undo;
+               }
+       }
+       return 0;
+
+err_undo:
+       dev->open_count--;
+       drm_minor_release(minor);
+       return retcode;
+}
+EXPORT_SYMBOL(drm_open);
+
 void drm_lastclose(struct drm_device * dev)
 {
        DRM_DEBUG("\n");
index a18da35..0897cb9 100644 (file)
@@ -36,7 +36,7 @@ static unsigned int clip_offset(struct drm_rect *clip,
 void drm_fb_memcpy(void *dst, void *vaddr, struct drm_framebuffer *fb,
                   struct drm_rect *clip)
 {
-       unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0);
+       unsigned int cpp = fb->format->cpp[0];
        size_t len = (clip->x2 - clip->x1) * cpp;
        unsigned int y, lines = clip->y2 - clip->y1;
 
@@ -63,7 +63,7 @@ void drm_fb_memcpy_dstclip(void __iomem *dst, void *vaddr,
                           struct drm_framebuffer *fb,
                           struct drm_rect *clip)
 {
-       unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0);
+       unsigned int cpp = fb->format->cpp[0];
        unsigned int offset = clip_offset(clip, fb->pitches[0], cpp);
        size_t len = (clip->x2 - clip->x1) * cpp;
        unsigned int y, lines = clip->y2 - clip->y1;
index 6ea55fb..35b459d 100644 (file)
@@ -332,124 +332,6 @@ drm_get_format_info(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_get_format_info);
 
-/**
- * drm_format_num_planes - get the number of planes for format
- * @format: pixel format (DRM_FORMAT_*)
- *
- * Returns:
- * The number of planes used by the specified pixel format.
- */
-int drm_format_num_planes(uint32_t format)
-{
-       const struct drm_format_info *info;
-
-       info = drm_format_info(format);
-       return info ? info->num_planes : 1;
-}
-EXPORT_SYMBOL(drm_format_num_planes);
-
-/**
- * drm_format_plane_cpp - determine the bytes per pixel value
- * @format: pixel format (DRM_FORMAT_*)
- * @plane: plane index
- *
- * Returns:
- * The bytes per pixel value for the specified plane.
- */
-int drm_format_plane_cpp(uint32_t format, int plane)
-{
-       const struct drm_format_info *info;
-
-       info = drm_format_info(format);
-       if (!info || plane >= info->num_planes)
-               return 0;
-
-       return info->cpp[plane];
-}
-EXPORT_SYMBOL(drm_format_plane_cpp);
-
-/**
- * drm_format_horz_chroma_subsampling - get the horizontal chroma subsampling factor
- * @format: pixel format (DRM_FORMAT_*)
- *
- * Returns:
- * The horizontal chroma subsampling factor for the
- * specified pixel format.
- */
-int drm_format_horz_chroma_subsampling(uint32_t format)
-{
-       const struct drm_format_info *info;
-
-       info = drm_format_info(format);
-       return info ? info->hsub : 1;
-}
-EXPORT_SYMBOL(drm_format_horz_chroma_subsampling);
-
-/**
- * drm_format_vert_chroma_subsampling - get the vertical chroma subsampling factor
- * @format: pixel format (DRM_FORMAT_*)
- *
- * Returns:
- * The vertical chroma subsampling factor for the
- * specified pixel format.
- */
-int drm_format_vert_chroma_subsampling(uint32_t format)
-{
-       const struct drm_format_info *info;
-
-       info = drm_format_info(format);
-       return info ? info->vsub : 1;
-}
-EXPORT_SYMBOL(drm_format_vert_chroma_subsampling);
-
-/**
- * drm_format_plane_width - width of the plane given the first plane
- * @width: width of the first plane
- * @format: pixel format
- * @plane: plane index
- *
- * Returns:
- * The width of @plane, given that the width of the first plane is @width.
- */
-int drm_format_plane_width(int width, uint32_t format, int plane)
-{
-       const struct drm_format_info *info;
-
-       info = drm_format_info(format);
-       if (!info || plane >= info->num_planes)
-               return 0;
-
-       if (plane == 0)
-               return width;
-
-       return width / info->hsub;
-}
-EXPORT_SYMBOL(drm_format_plane_width);
-
-/**
- * drm_format_plane_height - height of the plane given the first plane
- * @height: height of the first plane
- * @format: pixel format
- * @plane: plane index
- *
- * Returns:
- * The height of @plane, given that the height of the first plane is @height.
- */
-int drm_format_plane_height(int height, uint32_t format, int plane)
-{
-       const struct drm_format_info *info;
-
-       info = drm_format_info(format);
-       if (!info || plane >= info->num_planes)
-               return 0;
-
-       if (plane == 0)
-               return height;
-
-       return height / info->vsub;
-}
-EXPORT_SYMBOL(drm_format_plane_height);
-
 /**
  * drm_format_info_block_width - width in pixels of block.
  * @info: pixel format info
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
new file mode 100644 (file)
index 0000000..7380a06
--- /dev/null
@@ -0,0 +1,772 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <drm/drm_gem_vram_helper.h>
+#include <drm/drm_device.h>
+#include <drm/drm_mode.h>
+#include <drm/drm_prime.h>
+#include <drm/drm_vram_mm_helper.h>
+#include <drm/ttm/ttm_page_alloc.h>
+
+/**
+ * DOC: overview
+ *
+ * This library provides a GEM buffer object that is backed by video RAM
+ * (VRAM). It can be used for framebuffer devices with dedicated memory.
+ */
+
+/*
+ * Buffer-objects helpers
+ */
+
+static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
+{
+       /* We got here via ttm_bo_put(), which means that the
+        * TTM buffer object in 'bo' has already been cleaned
+        * up; only release the GEM object.
+        */
+       drm_gem_object_release(&gbo->gem);
+}
+
+static void drm_gem_vram_destroy(struct drm_gem_vram_object *gbo)
+{
+       drm_gem_vram_cleanup(gbo);
+       kfree(gbo);
+}
+
+static void ttm_buffer_object_destroy(struct ttm_buffer_object *bo)
+{
+       struct drm_gem_vram_object *gbo = drm_gem_vram_of_bo(bo);
+
+       drm_gem_vram_destroy(gbo);
+}
+
+static void drm_gem_vram_placement(struct drm_gem_vram_object *gbo,
+                                  unsigned long pl_flag)
+{
+       unsigned int i;
+       unsigned int c = 0;
+
+       gbo->placement.placement = gbo->placements;
+       gbo->placement.busy_placement = gbo->placements;
+
+       if (pl_flag & TTM_PL_FLAG_VRAM)
+               gbo->placements[c++].flags = TTM_PL_FLAG_WC |
+                                            TTM_PL_FLAG_UNCACHED |
+                                            TTM_PL_FLAG_VRAM;
+
+       if (pl_flag & TTM_PL_FLAG_SYSTEM)
+               gbo->placements[c++].flags = TTM_PL_MASK_CACHING |
+                                            TTM_PL_FLAG_SYSTEM;
+
+       if (!c)
+               gbo->placements[c++].flags = TTM_PL_MASK_CACHING |
+                                            TTM_PL_FLAG_SYSTEM;
+
+       gbo->placement.num_placement = c;
+       gbo->placement.num_busy_placement = c;
+
+       for (i = 0; i < c; ++i) {
+               gbo->placements[i].fpfn = 0;
+               gbo->placements[i].lpfn = 0;
+       }
+}
+
+static int drm_gem_vram_init(struct drm_device *dev,
+                            struct ttm_bo_device *bdev,
+                            struct drm_gem_vram_object *gbo,
+                            size_t size, unsigned long pg_align,
+                            bool interruptible)
+{
+       int ret;
+       size_t acc_size;
+
+       ret = drm_gem_object_init(dev, &gbo->gem, size);
+       if (ret)
+               return ret;
+
+       acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo));
+
+       gbo->bo.bdev = bdev;
+       drm_gem_vram_placement(gbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
+
+       ret = ttm_bo_init(bdev, &gbo->bo, size, ttm_bo_type_device,
+                         &gbo->placement, pg_align, interruptible, acc_size,
+                         NULL, NULL, ttm_buffer_object_destroy);
+       if (ret)
+               goto err_drm_gem_object_release;
+
+       return 0;
+
+err_drm_gem_object_release:
+       drm_gem_object_release(&gbo->gem);
+       return ret;
+}
+
+/**
+ * drm_gem_vram_create() - Creates a VRAM-backed GEM object
+ * @dev:               the DRM device
+ * @bdev:              the TTM BO device backing the object
+ * @size:              the buffer size in bytes
+ * @pg_align:          the buffer's alignment in multiples of the page size
+ * @interruptible:     sleep interruptible if waiting for memory
+ *
+ * Returns:
+ * A new instance of &struct drm_gem_vram_object on success, or
+ * an ERR_PTR()-encoded error code otherwise.
+ */
+struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
+                                               struct ttm_bo_device *bdev,
+                                               size_t size,
+                                               unsigned long pg_align,
+                                               bool interruptible)
+{
+       struct drm_gem_vram_object *gbo;
+       int ret;
+
+       gbo = kzalloc(sizeof(*gbo), GFP_KERNEL);
+       if (!gbo)
+               return ERR_PTR(-ENOMEM);
+
+       ret = drm_gem_vram_init(dev, bdev, gbo, size, pg_align, interruptible);
+       if (ret < 0)
+               goto err_kfree;
+
+       return gbo;
+
+err_kfree:
+       kfree(gbo);
+       return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(drm_gem_vram_create);
+
+/**
+ * drm_gem_vram_put() - Releases a reference to a VRAM-backed GEM object
+ * @gbo:       the GEM VRAM object
+ *
+ * See ttm_bo_put() for more information.
+ */
+void drm_gem_vram_put(struct drm_gem_vram_object *gbo)
+{
+       ttm_bo_put(&gbo->bo);
+}
+EXPORT_SYMBOL(drm_gem_vram_put);
+
+/**
+ * drm_gem_vram_lock() - Locks a VRAM-backed GEM object
+ * @gbo:       the GEM VRAM object
+ * @no_wait:   don't wait for buffer object to become available
+ *
+ * See ttm_bo_reserve() for more information.
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise
+ */
+int drm_gem_vram_lock(struct drm_gem_vram_object *gbo, bool no_wait)
+{
+       return ttm_bo_reserve(&gbo->bo, true, no_wait, NULL);
+}
+EXPORT_SYMBOL(drm_gem_vram_lock);
+
+/**
+ * drm_gem_vram_unlock() - \
+       Release a reservation acquired by drm_gem_vram_lock()
+ * @gbo:       the GEM VRAM object
+ *
+ * See ttm_bo_unreserve() for more information.
+ */
+void drm_gem_vram_unlock(struct drm_gem_vram_object *gbo)
+{
+       ttm_bo_unreserve(&gbo->bo);
+}
+EXPORT_SYMBOL(drm_gem_vram_unlock);
+
+/**
+ * drm_gem_vram_mmap_offset() - Returns a GEM VRAM object's mmap offset
+ * @gbo:       the GEM VRAM object
+ *
+ * See drm_vma_node_offset_addr() for more information.
+ *
+ * Returns:
+ * The buffer object's offset for userspace mappings on success, or
+ * 0 if no offset is allocated.
+ */
+u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo)
+{
+       return drm_vma_node_offset_addr(&gbo->bo.vma_node);
+}
+EXPORT_SYMBOL(drm_gem_vram_mmap_offset);
+
+/**
+ * drm_gem_vram_offset() - \
+       Returns a GEM VRAM object's offset in video memory
+ * @gbo:       the GEM VRAM object
+ *
+ * This function returns the buffer object's offset in the device's video
+ * memory. The buffer object has to be pinned to %TTM_PL_VRAM.
+ *
+ * Returns:
+ * The buffer object's offset in video memory on success, or
+ * a negative errno code otherwise.
+ */
+s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo)
+{
+       if (WARN_ON_ONCE(!gbo->pin_count))
+               return (s64)-ENODEV;
+       return gbo->bo.offset;
+}
+EXPORT_SYMBOL(drm_gem_vram_offset);
+
+/**
+ * drm_gem_vram_pin() - Pins a GEM VRAM object in a region.
+ * @gbo:       the GEM VRAM object
+ * @pl_flag:   a bitmask of possible memory regions
+ *
+ * Pinning a buffer object ensures that it is not evicted from
+ * a memory region. A pinned buffer object has to be unpinned before
+ * it can be pinned to another region.
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag)
+{
+       int i, ret;
+       struct ttm_operation_ctx ctx = { false, false };
+
+       ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
+       if (ret < 0)
+               return ret;
+
+       if (gbo->pin_count)
+               goto out;
+
+       drm_gem_vram_placement(gbo, pl_flag);
+       for (i = 0; i < gbo->placement.num_placement; ++i)
+               gbo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
+
+       ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
+       if (ret < 0)
+               goto err_ttm_bo_unreserve;
+
+out:
+       ++gbo->pin_count;
+       ttm_bo_unreserve(&gbo->bo);
+
+       return 0;
+
+err_ttm_bo_unreserve:
+       ttm_bo_unreserve(&gbo->bo);
+       return ret;
+}
+EXPORT_SYMBOL(drm_gem_vram_pin);
+
+/**
+ * drm_gem_vram_pin_locked() - Pins a GEM VRAM object in a region.
+ * @gbo:       the GEM VRAM object
+ * @pl_flag:   a bitmask of possible memory regions
+ *
+ * Pinning a buffer object ensures that it is not evicted from
+ * a memory region. A pinned buffer object has to be unpinned before
+ * it can be pinned to another region.
+ *
+ * This function pins a GEM VRAM object that has already been
+ * locked. Use drm_gem_vram_pin() if possible.
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_gem_vram_pin_locked(struct drm_gem_vram_object *gbo,
+                           unsigned long pl_flag)
+{
+       int i, ret;
+       struct ttm_operation_ctx ctx = { false, false };
+
+       lockdep_assert_held(&gbo->bo.resv->lock.base);
+
+       if (gbo->pin_count) {
+               ++gbo->pin_count;
+               return 0;
+       }
+
+       drm_gem_vram_placement(gbo, pl_flag);
+       for (i = 0; i < gbo->placement.num_placement; ++i)
+               gbo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
+
+       ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
+       if (ret < 0)
+               return ret;
+
+       gbo->pin_count = 1;
+
+       return 0;
+}
+EXPORT_SYMBOL(drm_gem_vram_pin_locked);
+
+/**
+ * drm_gem_vram_unpin() - Unpins a GEM VRAM object
+ * @gbo:       the GEM VRAM object
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
+{
+       int i, ret;
+       struct ttm_operation_ctx ctx = { false, false };
+
+       ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
+       if (ret < 0)
+               return ret;
+
+       if (WARN_ON_ONCE(!gbo->pin_count))
+               goto out;
+
+       --gbo->pin_count;
+       if (gbo->pin_count)
+               goto out;
+
+       for (i = 0; i < gbo->placement.num_placement ; ++i)
+               gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
+
+       ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
+       if (ret < 0)
+               goto err_ttm_bo_unreserve;
+
+out:
+       ttm_bo_unreserve(&gbo->bo);
+
+       return 0;
+
+err_ttm_bo_unreserve:
+       ttm_bo_unreserve(&gbo->bo);
+       return ret;
+}
+EXPORT_SYMBOL(drm_gem_vram_unpin);
+
+/**
+ * drm_gem_vram_unpin_locked() - Unpins a GEM VRAM object
+ * @gbo:       the GEM VRAM object
+ *
+ * This function unpins a GEM VRAM object that has already been
+ * locked. Use drm_gem_vram_unpin() if possible.
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_gem_vram_unpin_locked(struct drm_gem_vram_object *gbo)
+{
+       int i, ret;
+       struct ttm_operation_ctx ctx = { false, false };
+
+       lockdep_assert_held(&gbo->bo.resv->lock.base);
+
+       if (WARN_ON_ONCE(!gbo->pin_count))
+               return 0;
+
+       --gbo->pin_count;
+       if (gbo->pin_count)
+               return 0;
+
+       for (i = 0; i < gbo->placement.num_placement ; ++i)
+               gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
+
+       ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
+       if (ret < 0)
+               return ret;
+
+       return 0;
+}
+EXPORT_SYMBOL(drm_gem_vram_unpin_locked);
+
+/**
+ * drm_gem_vram_kmap_at() - Maps a GEM VRAM object into kernel address space
+ * @gbo:       the GEM VRAM object
+ * @map:       establish a mapping if necessary
+ * @is_iomem:  returns true if the mapped memory is I/O memory, or false \
+       otherwise; can be NULL
+ * @kmap:      the mapping's kmap object
+ *
+ * This function maps the buffer object into the kernel's address space
+ * or returns the current mapping. If @map is false, the
+ * function only queries the current mapping, but does not establish a
+ * new one.
+ *
+ * Returns:
+ * The buffer's virtual address if mapped, or
+ * NULL if not mapped, or
+ * an ERR_PTR()-encoded error code otherwise.
+ */
+void *drm_gem_vram_kmap_at(struct drm_gem_vram_object *gbo, bool map,
+                          bool *is_iomem, struct ttm_bo_kmap_obj *kmap)
+{
+       int ret;
+
+       if (kmap->virtual || !map)
+               goto out;
+
+       ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+       if (ret)
+               return ERR_PTR(ret);
+
+out:
+       if (!is_iomem)
+               return kmap->virtual;
+       if (!kmap->virtual) {
+               *is_iomem = false;
+               return NULL;
+       }
+       return ttm_kmap_obj_virtual(kmap, is_iomem);
+}
+EXPORT_SYMBOL(drm_gem_vram_kmap_at);
+
+/**
+ * drm_gem_vram_kmap() - Maps a GEM VRAM object into kernel address space
+ * @gbo:       the GEM VRAM object
+ * @map:       establish a mapping if necessary
+ * @is_iomem:  returns true if the mapped memory is I/O memory, or false \
+       otherwise; can be NULL
+ *
+ * This function maps the buffer object into the kernel's address space
+ * or returns the current mapping. If @map is false, the
+ * function only queries the current mapping, but does not establish a
+ * new one.
+ *
+ * Returns:
+ * The buffer's virtual address if mapped, or
+ * NULL if not mapped, or
+ * an ERR_PTR()-encoded error code otherwise.
+ */
+void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map,
+                       bool *is_iomem)
+{
+       return drm_gem_vram_kmap_at(gbo, map, is_iomem, &gbo->kmap);
+}
+EXPORT_SYMBOL(drm_gem_vram_kmap);
+
+/**
+ * drm_gem_vram_kunmap_at() - Unmaps a GEM VRAM object
+ * @gbo:       the GEM VRAM object
+ * @kmap:      the mapping's kmap object
+ */
+void drm_gem_vram_kunmap_at(struct drm_gem_vram_object *gbo,
+                           struct ttm_bo_kmap_obj *kmap)
+{
+       if (!kmap->virtual)
+               return;
+
+       ttm_bo_kunmap(kmap);
+       kmap->virtual = NULL;
+}
+EXPORT_SYMBOL(drm_gem_vram_kunmap_at);
+
+/**
+ * drm_gem_vram_kunmap() - Unmaps a GEM VRAM object
+ * @gbo:       the GEM VRAM object
+ */
+void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo)
+{
+       drm_gem_vram_kunmap_at(gbo, &gbo->kmap);
+}
+EXPORT_SYMBOL(drm_gem_vram_kunmap);
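Taken together, the pin and kmap helpers above define the usual access pattern for driver code: pin, map, write, unmap, unpin. A hedged sketch of that pattern (kernel context only; the function name and the idea of copying from a shadow buffer are illustrative, not part of this patch):

```
/* Sketch only: assumes a previously created drm_gem_vram_object. */
static int my_driver_update_fb(struct drm_gem_vram_object *gbo,
			       const void *src, size_t len)
{
	void *dst;
	int ret;

	/* Keep the BO resident in VRAM while we write to it. */
	ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
	if (ret)
		return ret;

	/* map=true establishes the kernel mapping if none exists yet. */
	dst = drm_gem_vram_kmap(gbo, true, NULL);
	if (IS_ERR(dst)) {
		ret = PTR_ERR(dst);
		goto err_unpin;
	}

	memcpy(dst, src, len);

	drm_gem_vram_kunmap(gbo);
err_unpin:
	drm_gem_vram_unpin(gbo);
	return ret;
}
```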
+
+/**
+ * drm_gem_vram_fill_create_dumb() - \
+       Helper for implementing &struct drm_driver.dumb_create
+ * @file:              the DRM file
+ * @dev:               the DRM device
+ * @bdev:              the TTM BO device managing the buffer object
+ * @pg_align:          the buffer's alignment in multiples of the page size
+ * @interruptible:     sleep interruptible if waiting for memory
+ * @args:              the arguments as provided to \
+                               &struct drm_driver.dumb_create
+ *
+ * This helper function fills &struct drm_mode_create_dumb, which is used
+ * by &struct drm_driver.dumb_create. Implementations of this interface
+ * should forward their arguments to this helper, plus the driver-specific
+ * parameters.
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_gem_vram_fill_create_dumb(struct drm_file *file,
+                                 struct drm_device *dev,
+                                 struct ttm_bo_device *bdev,
+                                 unsigned long pg_align,
+                                 bool interruptible,
+                                 struct drm_mode_create_dumb *args)
+{
+       size_t pitch, size;
+       struct drm_gem_vram_object *gbo;
+       int ret;
+       u32 handle;
+
+       pitch = args->width * ((args->bpp + 7) / 8);
+       size = pitch * args->height;
+
+       size = roundup(size, PAGE_SIZE);
+       if (!size)
+               return -EINVAL;
+
+       gbo = drm_gem_vram_create(dev, bdev, size, pg_align, interruptible);
+       if (IS_ERR(gbo))
+               return PTR_ERR(gbo);
+
+       ret = drm_gem_handle_create(file, &gbo->gem, &handle);
+       if (ret)
+               goto err_drm_gem_object_put_unlocked;
+
+       drm_gem_object_put_unlocked(&gbo->gem);
+
+       args->pitch = pitch;
+       args->size = size;
+       args->handle = handle;
+
+       return 0;
+
+err_drm_gem_object_put_unlocked:
+       drm_gem_object_put_unlocked(&gbo->gem);
+       return ret;
+}
+EXPORT_SYMBOL(drm_gem_vram_fill_create_dumb);
+
+/*
+ * Helpers for struct ttm_bo_driver
+ */
+
+static bool drm_is_gem_vram(struct ttm_buffer_object *bo)
+{
+       return (bo->destroy == ttm_buffer_object_destroy);
+}
+
+/**
+ * drm_gem_vram_bo_driver_evict_flags() - \
+       Implements &struct ttm_bo_driver.evict_flags
+ * @bo:        TTM buffer object. Refers to &struct drm_gem_vram_object.bo
+ * @pl:        TTM placement information.
+ */
+void drm_gem_vram_bo_driver_evict_flags(struct ttm_buffer_object *bo,
+                                       struct ttm_placement *pl)
+{
+       struct drm_gem_vram_object *gbo;
+
+       /* TTM may pass BOs that are not GEM VRAM BOs. */
+       if (!drm_is_gem_vram(bo))
+               return;
+
+       gbo = drm_gem_vram_of_bo(bo);
+       drm_gem_vram_placement(gbo, TTM_PL_FLAG_SYSTEM);
+       *pl = gbo->placement;
+}
+EXPORT_SYMBOL(drm_gem_vram_bo_driver_evict_flags);
+
+/**
+ * drm_gem_vram_bo_driver_verify_access() - \
+       Implements &struct ttm_bo_driver.verify_access
+ * @bo:                TTM buffer object. Refers to &struct drm_gem_vram_object.bo
+ * @filp:      File pointer.
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative errno code otherwise.
+ */
+int drm_gem_vram_bo_driver_verify_access(struct ttm_buffer_object *bo,
+                                        struct file *filp)
+{
+       struct drm_gem_vram_object *gbo = drm_gem_vram_of_bo(bo);
+
+       return drm_vma_node_verify_access(&gbo->gem.vma_node,
+                                         filp->private_data);
+}
+EXPORT_SYMBOL(drm_gem_vram_bo_driver_verify_access);
+
+/**
+ * drm_gem_vram_mm_funcs - Functions for &struct drm_vram_mm
+ *
+ * Most users of &struct drm_gem_vram_object will also use
+ * &struct drm_vram_mm. This instance of &struct drm_vram_mm_funcs
+ * can be used to connect both.
+ */
+const struct drm_vram_mm_funcs drm_gem_vram_mm_funcs = {
+       .evict_flags = drm_gem_vram_bo_driver_evict_flags,
+       .verify_access = drm_gem_vram_bo_driver_verify_access
+};
+EXPORT_SYMBOL(drm_gem_vram_mm_funcs);
+
+/*
+ * Helpers for struct drm_driver
+ */
+
+/**
+ * drm_gem_vram_driver_gem_free_object_unlocked() - \
+       Implements &struct drm_driver.gem_free_object_unlocked
+ * @gem:       GEM object. Refers to &struct drm_gem_vram_object.gem
+ */
+void drm_gem_vram_driver_gem_free_object_unlocked(struct drm_gem_object *gem)
+{
+       struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
+
+       drm_gem_vram_put(gbo);
+}
+EXPORT_SYMBOL(drm_gem_vram_driver_gem_free_object_unlocked);
+
+/**
+ * drm_gem_vram_driver_dumb_create() - \
+       Implements &struct drm_driver.dumb_create
+ * @file:              the DRM file
+ * @dev:               the DRM device
+ * @args:              the arguments as provided to \
+                               &struct drm_driver.dumb_create
+ *
+ * This function requires the driver to use &drm_device.vram_mm for its
+ * instance of VRAM MM.
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_gem_vram_driver_dumb_create(struct drm_file *file,
+                                   struct drm_device *dev,
+                                   struct drm_mode_create_dumb *args)
+{
+       if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized"))
+               return -EINVAL;
+
+       return drm_gem_vram_fill_create_dumb(file, dev, &dev->vram_mm->bdev, 0,
+                                            false, args);
+}
+EXPORT_SYMBOL(drm_gem_vram_driver_dumb_create);
+
+/**
+ * drm_gem_vram_driver_dumb_mmap_offset() - \
+       Implements &struct drm_driver.dumb_mmap_offset
+ * @file:      DRM file pointer.
+ * @dev:       DRM device.
+ * @handle:    GEM handle
+ * @offset:    Returns the mapping's memory offset on success
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative errno code otherwise.
+ */
+int drm_gem_vram_driver_dumb_mmap_offset(struct drm_file *file,
+                                        struct drm_device *dev,
+                                        uint32_t handle, uint64_t *offset)
+{
+       struct drm_gem_object *gem;
+       struct drm_gem_vram_object *gbo;
+
+       gem = drm_gem_object_lookup(file, handle);
+       if (!gem)
+               return -ENOENT;
+
+       gbo = drm_gem_vram_of_gem(gem);
+       *offset = drm_gem_vram_mmap_offset(gbo);
+
+       drm_gem_object_put_unlocked(gem);
+
+       return 0;
+}
+EXPORT_SYMBOL(drm_gem_vram_driver_dumb_mmap_offset);
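For reference, drm_gem_vram_driver_dumb_mmap_offset() is the kernel half of the standard dumb-buffer mapping flow. From userspace the sequence looks roughly like this fragment (a sketch, not part of this patch; error handling elided, `fd` is an open DRM device node):

```
struct drm_mode_create_dumb create = {
	.width = 1024, .height = 768, .bpp = 32,
};
struct drm_mode_map_dumb map = { 0 };
void *fb;

ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);
map.handle = create.handle;
ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);	/* fills map.offset */
fb = mmap(NULL, create.size, PROT_READ | PROT_WRITE, MAP_SHARED,
	  fd, map.offset);
```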
+
+/*
+ * PRIME helpers for struct drm_driver
+ */
+
+/**
+ * drm_gem_vram_driver_gem_prime_pin() - \
+       Implements &struct drm_driver.gem_prime_pin
+ * @gem:       The GEM object to pin
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative errno code otherwise.
+ */
+int drm_gem_vram_driver_gem_prime_pin(struct drm_gem_object *gem)
+{
+       struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
+
+       return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
+}
+EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_pin);
+
+/**
+ * drm_gem_vram_driver_gem_prime_unpin() - \
+       Implements &struct drm_driver.gem_prime_unpin
+ * @gem:       The GEM object to unpin
+ */
+void drm_gem_vram_driver_gem_prime_unpin(struct drm_gem_object *gem)
+{
+       struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
+
+       drm_gem_vram_unpin(gbo);
+}
+EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_unpin);
+
+/**
+ * drm_gem_vram_driver_gem_prime_vmap() - \
+       Implements &struct drm_driver.gem_prime_vmap
+ * @gem:       The GEM object to map
+ *
+ * Returns:
+ * The buffer's virtual address on success, or
+ * NULL otherwise.
+ */
+void *drm_gem_vram_driver_gem_prime_vmap(struct drm_gem_object *gem)
+{
+       struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
+       int ret;
+       void *base;
+
+       ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
+       if (ret)
+               return NULL;
+       base = drm_gem_vram_kmap(gbo, true, NULL);
+       if (IS_ERR(base)) {
+               drm_gem_vram_unpin(gbo);
+               return NULL;
+       }
+       return base;
+}
+EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_vmap);
+
+/**
+ * drm_gem_vram_driver_gem_prime_vunmap() - \
+       Implements &struct drm_driver.gem_prime_vunmap
+ * @gem:       The GEM object to unmap
+ * @vaddr:     The mapping's base address
+ */
+void drm_gem_vram_driver_gem_prime_vunmap(struct drm_gem_object *gem,
+                                         void *vaddr)
+{
+       struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
+
+       drm_gem_vram_kunmap(gbo);
+       drm_gem_vram_unpin(gbo);
+}
+EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_vunmap);
+
+/**
+ * drm_gem_vram_driver_gem_prime_mmap() - \
+       Implements &struct drm_driver.gem_prime_mmap
+ * @gem:       The GEM object to map
+ * @vma:       The VMA describing the mapping
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative errno code otherwise.
+ */
+int drm_gem_vram_driver_gem_prime_mmap(struct drm_gem_object *gem,
+                                      struct vm_area_struct *vma)
+{
+       struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
+
+       gbo->gem.vma_node.vm_node.start = gbo->bo.vma_node.vm_node.start;
+       return drm_gem_prime_mmap(gem, vma);
+}
+EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_mmap);
index e19ac7c..e6281d9 100644 (file)
@@ -93,6 +93,8 @@ int drm_dropmaster_ioctl(struct drm_device *dev, void *data,
                         struct drm_file *file_priv);
 int drm_master_open(struct drm_file *file_priv);
 void drm_master_release(struct drm_file *file_priv);
+bool drm_master_internal_acquire(struct drm_device *dev);
+void drm_master_internal_release(struct drm_device *dev);
 
 /* drm_sysfs.c */
 extern struct class *drm_class;
index 51f1fab..013ccdf 100644 (file)
@@ -187,10 +187,12 @@ int drm_legacy_sg_free(struct drm_device *dev, void *data,
 void drm_legacy_init_members(struct drm_device *dev);
 void drm_legacy_destroy_members(struct drm_device *dev);
 void drm_legacy_dev_reinit(struct drm_device *dev);
+int drm_legacy_setup(struct drm_device *dev);
 #else
 static inline void drm_legacy_init_members(struct drm_device *dev) {}
 static inline void drm_legacy_destroy_members(struct drm_device *dev) {}
 static inline void drm_legacy_dev_reinit(struct drm_device *dev) {}
+static inline int drm_legacy_setup(struct drm_device *dev) { return 0; }
 #endif
 
 #if IS_ENABLED(CONFIG_DRM_LEGACY)
index 2fe7868..18d05a6 100644 (file)
@@ -51,6 +51,26 @@ void drm_legacy_destroy_members(struct drm_device *dev)
        mutex_destroy(&dev->ctxlist_mutex);
 }
 
+int drm_legacy_setup(struct drm_device *dev)
+{
+       int ret;
+
+       if (dev->driver->firstopen &&
+           drm_core_check_feature(dev, DRIVER_LEGACY)) {
+               ret = dev->driver->firstopen(dev);
+               if (ret != 0)
+                       return ret;
+       }
+
+       ret = drm_legacy_dma_setup(dev);
+       if (ret < 0)
+               return ret;
+
+       DRM_DEBUG("\n");
+       return 0;
+}
+
 void drm_legacy_dev_reinit(struct drm_device *dev)
 {
        if (dev->irq_enabled)
index dc079ef..706034f 100644 (file)
@@ -86,11 +86,6 @@ struct drm_prime_member {
        struct rb_node handle_rb;
 };
 
-struct drm_prime_attachment {
-       struct sg_table *sgt;
-       enum dma_data_direction dir;
-};
-
 static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv,
                                    struct dma_buf *dma_buf, uint32_t handle)
 {
@@ -188,25 +183,16 @@ static int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpri
  * @dma_buf: buffer to attach device to
  * @attach: buffer attachment data
  *
- * Allocates &drm_prime_attachment and calls &drm_driver.gem_prime_pin for
- * device specific attachment. This can be used as the &dma_buf_ops.attach
- * callback.
+ * Calls &drm_driver.gem_prime_pin for device specific handling. This can be
+ * used as the &dma_buf_ops.attach callback.
  *
  * Returns 0 on success, negative error code on failure.
  */
 int drm_gem_map_attach(struct dma_buf *dma_buf,
                       struct dma_buf_attachment *attach)
 {
-       struct drm_prime_attachment *prime_attach;
        struct drm_gem_object *obj = dma_buf->priv;
 
-       prime_attach = kzalloc(sizeof(*prime_attach), GFP_KERNEL);
-       if (!prime_attach)
-               return -ENOMEM;
-
-       prime_attach->dir = DMA_NONE;
-       attach->priv = prime_attach;
-
        return drm_gem_pin(obj);
 }
 EXPORT_SYMBOL(drm_gem_map_attach);
@@ -222,26 +208,8 @@ EXPORT_SYMBOL(drm_gem_map_attach);
 void drm_gem_map_detach(struct dma_buf *dma_buf,
                        struct dma_buf_attachment *attach)
 {
-       struct drm_prime_attachment *prime_attach = attach->priv;
        struct drm_gem_object *obj = dma_buf->priv;
 
-       if (prime_attach) {
-               struct sg_table *sgt = prime_attach->sgt;
-
-               if (sgt) {
-                       if (prime_attach->dir != DMA_NONE)
-                               dma_unmap_sg_attrs(attach->dev, sgt->sgl,
-                                                  sgt->nents,
-                                                  prime_attach->dir,
-                                                  DMA_ATTR_SKIP_CPU_SYNC);
-                       sg_free_table(sgt);
-               }
-
-               kfree(sgt);
-               kfree(prime_attach);
-               attach->priv = NULL;
-       }
-
        drm_gem_unpin(obj);
 }
 EXPORT_SYMBOL(drm_gem_map_detach);
@@ -286,39 +254,22 @@ void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpr
 struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
                                     enum dma_data_direction dir)
 {
-       struct drm_prime_attachment *prime_attach = attach->priv;
        struct drm_gem_object *obj = attach->dmabuf->priv;
        struct sg_table *sgt;
 
-       if (WARN_ON(dir == DMA_NONE || !prime_attach))
+       if (WARN_ON(dir == DMA_NONE))
                return ERR_PTR(-EINVAL);
 
-       /* return the cached mapping when possible */
-       if (prime_attach->dir == dir)
-               return prime_attach->sgt;
-
-       /*
-        * two mappings with different directions for the same attachment are
-        * not allowed
-        */
-       if (WARN_ON(prime_attach->dir != DMA_NONE))
-               return ERR_PTR(-EBUSY);
-
        if (obj->funcs)
                sgt = obj->funcs->get_sg_table(obj);
        else
                sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
 
-       if (!IS_ERR(sgt)) {
-               if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-                                     DMA_ATTR_SKIP_CPU_SYNC)) {
-                       sg_free_table(sgt);
-                       kfree(sgt);
-                       sgt = ERR_PTR(-ENOMEM);
-               } else {
-                       prime_attach->sgt = sgt;
-                       prime_attach->dir = dir;
-               }
+       if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
+                             DMA_ATTR_SKIP_CPU_SYNC)) {
+               sg_free_table(sgt);
+               kfree(sgt);
+               sgt = ERR_PTR(-ENOMEM);
        }
 
        return sgt;
@@ -331,14 +282,19 @@ EXPORT_SYMBOL(drm_gem_map_dma_buf);
  * @sgt: scatterlist info of the buffer to unmap
  * @dir: direction of DMA transfer
  *
- * Not implemented. The unmap is done at drm_gem_map_detach().  This can be
- * used as the &dma_buf_ops.unmap_dma_buf callback.
+ * This can be used as the &dma_buf_ops.unmap_dma_buf callback.
  */
 void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
                           struct sg_table *sgt,
                           enum dma_data_direction dir)
 {
-       /* nothing to be done here */
+       if (!sgt)
+               return;
+
+       dma_unmap_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
+                          DMA_ATTR_SKIP_CPU_SYNC);
+       sg_free_table(sgt);
+       kfree(sgt);
 }
 EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
 
@@ -452,6 +408,7 @@ int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
 EXPORT_SYMBOL(drm_gem_dmabuf_mmap);
 
 static const struct dma_buf_ops drm_gem_prime_dmabuf_ops =  {
+       .cache_sgt_mapping = true,
        .attach = drm_gem_map_attach,
        .detach = drm_gem_map_detach,
        .map_dma_buf = drm_gem_map_dma_buf,
diff --git a/drivers/gpu/drm/drm_vram_helper_common.c b/drivers/gpu/drm/drm_vram_helper_common.c
new file mode 100644 (file)
index 0000000..e9c9f9a
--- /dev/null
@@ -0,0 +1,96 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <linux/module.h>
+
+/**
+ * DOC: overview
+ *
+ * This library provides &struct drm_gem_vram_object (GEM VRAM), a GEM
+ * buffer object that is backed by video RAM. It can be used for
+ * framebuffer devices with dedicated memory. The video RAM can be
+ * managed with &struct drm_vram_mm (VRAM MM). Both data structures are
+ * supposed to be used together, but can also be used individually.
+ *
+ * With the GEM interface userspace applications create, manage and destroy
+ * graphics buffers, such as an on-screen framebuffer. GEM does not provide
+ * an implementation of these interfaces. It's up to the DRM driver to
+ * provide an implementation that suits the hardware. If the hardware device
+ * contains dedicated video memory, the DRM driver can use the VRAM helper
+ * library. Each active buffer object is stored in video RAM. Active
+ * buffer are used for drawing the current frame, typically something like
+ * the frame's scanout buffer or the cursor image. If there's no more space
+ * left in VRAM, inactive GEM objects can be moved to system memory.
+ *
+ * The easiest way to use the VRAM helper library is to call
+ * drm_vram_helper_alloc_mm(). The function allocates and initializes an
+ * instance of &struct drm_vram_mm in &struct drm_device.vram_mm. Use
+ * &DRM_GEM_VRAM_DRIVER to initialize &struct drm_driver and
+ * &DRM_VRAM_MM_FILE_OPERATIONS to initialize &struct file_operations;
+ * as illustrated below.
+ *
+ * .. code-block:: c
+ *
+ *     struct file_operations fops = {
+ *             .owner = THIS_MODULE,
+ *             DRM_VRAM_MM_FILE_OPERATIONS
+ *     };
+ *     struct drm_driver drv = {
+ *             .driver_features = DRM_ ... ,
+ *             .fops = &fops,
+ *             DRM_GEM_VRAM_DRIVER
+ *     };
+ *
+ *     int init_drm_driver()
+ *     {
+ *             struct drm_device *dev;
+ *             uint64_t vram_base;
+ *             unsigned long vram_size;
+ *             int ret;
+ *
+ *             // setup device, vram base and size
+ *             // ...
+ *
+ *             ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size,
+ *                                            &drm_gem_vram_mm_funcs);
+ *             if (ret)
+ *                     return ret;
+ *             return 0;
+ *     }
+ *
+ * This creates an instance of &struct drm_vram_mm, exports DRM userspace
+ * interfaces for GEM buffer management and initializes file operations to
+ * allow for accessing created GEM buffers. With this setup, the DRM driver
+ * manages an area of video RAM with VRAM MM and provides GEM VRAM objects
+ * to userspace.
+ *
+ * To clean up the VRAM memory management, call drm_vram_helper_release_mm()
+ * in the driver's clean-up code.
+ *
+ * .. code-block:: c
+ *
+ *     void fini_drm_driver()
+ *     {
+ *             struct drm_device *dev = ...;
+ *
+ *             drm_vram_helper_release_mm(dev);
+ *     }
+ *
+ * For drawing or scanout operations, buffer objects have to be pinned in video
+ * RAM. Call drm_gem_vram_pin() with &DRM_GEM_VRAM_PL_FLAG_VRAM or
+ * &DRM_GEM_VRAM_PL_FLAG_SYSTEM to pin a buffer object in video RAM or system
+ * memory. Call drm_gem_vram_unpin() to release the pinned object afterwards.
+ *
+ * A buffer object that is pinned in video RAM has a fixed address within that
+ * memory region. Call drm_gem_vram_offset() to retrieve this value. Typically
+ * it's used to program the hardware's scanout engine for framebuffers, to set
+ * the cursor overlay's image for a mouse cursor, or as input to the
+ * hardware's drawing engine.
+ *
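+ * The pin/offset/unpin cycle described above can be sketched as follows;
+ * this is a simplified example with error handling reduced to a minimum:
+ *
+ * .. code-block:: c
+ *
+ *     struct drm_gem_vram_object *gbo = ...;
+ *     s64 offset;
+ *     int ret;
+ *
+ *     ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
+ *     if (ret)
+ *             return ret;
+ *
+ *     // the offset is stable for as long as the object stays pinned
+ *     offset = drm_gem_vram_offset(gbo);
+ *
+ *     // program the scanout engine with vram_base + offset
+ *     // ...
+ *
+ *     drm_gem_vram_unpin(gbo);
+ *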
+ * To access a buffer object's memory from the DRM driver, call
+ * drm_gem_vram_kmap(). It (optionally) maps the buffer into kernel address
+ * space and returns the memory address. Use drm_gem_vram_kunmap() to
+ * release the mapping.
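+ *
+ * For example (a simplified sketch; the map argument controls whether a new
+ * mapping is established, and is_iomem reports whether the returned address
+ * refers to I/O memory):
+ *
+ * .. code-block:: c
+ *
+ *     bool is_iomem;
+ *     void *vaddr = drm_gem_vram_kmap(gbo, true, &is_iomem);
+ *
+ *     if (IS_ERR(vaddr))
+ *             return PTR_ERR(vaddr);
+ *
+ *     // read or write the buffer object's memory
+ *     // ...
+ *
+ *     drm_gem_vram_kunmap(gbo);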
+ */
+
+MODULE_DESCRIPTION("DRM VRAM memory-management helpers");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/drm_vram_mm_helper.c b/drivers/gpu/drm/drm_vram_mm_helper.c
new file mode 100644 (file)
index 0000000..c94a6dc
--- /dev/null
@@ -0,0 +1,295 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <drm/drm_vram_mm_helper.h>
+#include <drm/drmP.h>
+#include <drm/ttm/ttm_page_alloc.h>
+
+/**
+ * DOC: overview
+ *
+ * The data structure &struct drm_vram_mm and its helpers implement a memory
+ * manager for simple framebuffer devices with dedicated video memory. Buffer
+ * objects are either placed in video RAM or evicted to system memory. These
+ * helper functions work well with &struct drm_gem_vram_object.
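+ *
+ * Drivers that embed &struct drm_vram_mm in their own device structure can
+ * set it up with drm_vram_mm_init() and tear it down with
+ * drm_vram_mm_cleanup(). A simplified sketch, where my_device and
+ * my_device_init_mm are illustrative names:
+ *
+ * .. code-block:: c
+ *
+ *     struct my_device {
+ *             struct drm_device drm_dev;
+ *             struct drm_vram_mm vmm;
+ *     };
+ *
+ *     int my_device_init_mm(struct my_device *mdev, uint64_t vram_base,
+ *                           size_t vram_size)
+ *     {
+ *             // drm_gem_vram_mm_funcs provides the buffer-object callbacks
+ *             return drm_vram_mm_init(&mdev->vmm, &mdev->drm_dev,
+ *                                     vram_base, vram_size,
+ *                                     &drm_gem_vram_mm_funcs);
+ *     }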
+ */
+
+/*
+ * TTM TT
+ */
+
+static void backend_func_destroy(struct ttm_tt *tt)
+{
+       ttm_tt_fini(tt);
+       kfree(tt);
+}
+
+static struct ttm_backend_func backend_func = {
+       .destroy = backend_func_destroy
+};
+
+/*
+ * TTM BO device
+ */
+
+static struct ttm_tt *bo_driver_ttm_tt_create(struct ttm_buffer_object *bo,
+                                             uint32_t page_flags)
+{
+       struct ttm_tt *tt;
+       int ret;
+
+       tt = kzalloc(sizeof(*tt), GFP_KERNEL);
+       if (!tt)
+               return NULL;
+
+       tt->func = &backend_func;
+
+       ret = ttm_tt_init(tt, bo, page_flags);
+       if (ret < 0)
+               goto err_ttm_tt_init;
+
+       return tt;
+
+err_ttm_tt_init:
+       kfree(tt);
+       return NULL;
+}
+
+static int bo_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
+                                  struct ttm_mem_type_manager *man)
+{
+       switch (type) {
+       case TTM_PL_SYSTEM:
+               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
+               man->available_caching = TTM_PL_MASK_CACHING;
+               man->default_caching = TTM_PL_FLAG_CACHED;
+               break;
+       case TTM_PL_VRAM:
+               man->func = &ttm_bo_manager_func;
+               man->flags = TTM_MEMTYPE_FLAG_FIXED |
+                            TTM_MEMTYPE_FLAG_MAPPABLE;
+               man->available_caching = TTM_PL_FLAG_UNCACHED |
+                                        TTM_PL_FLAG_WC;
+               man->default_caching = TTM_PL_FLAG_WC;
+               break;
+       default:
+               return -EINVAL;
+       }
+       return 0;
+}
+
+static void bo_driver_evict_flags(struct ttm_buffer_object *bo,
+                                 struct ttm_placement *placement)
+{
+       struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev);
+
+       if (vmm->funcs && vmm->funcs->evict_flags)
+               vmm->funcs->evict_flags(bo, placement);
+}
+
+static int bo_driver_verify_access(struct ttm_buffer_object *bo,
+                                  struct file *filp)
+{
+       struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev);
+
+       if (!vmm->funcs || !vmm->funcs->verify_access)
+               return 0;
+       return vmm->funcs->verify_access(bo, filp);
+}
+
+static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev,
+                                   struct ttm_mem_reg *mem)
+{
+       struct ttm_mem_type_manager *man = bdev->man + mem->mem_type;
+       struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev);
+
+       if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
+               return -EINVAL;
+
+       mem->bus.addr = NULL;
+       mem->bus.size = mem->num_pages << PAGE_SHIFT;
+
+       switch (mem->mem_type) {
+       case TTM_PL_SYSTEM:     /* nothing to do */
+               mem->bus.offset = 0;
+               mem->bus.base = 0;
+               mem->bus.is_iomem = false;
+               break;
+       case TTM_PL_VRAM:
+               mem->bus.offset = mem->start << PAGE_SHIFT;
+               mem->bus.base = vmm->vram_base;
+               mem->bus.is_iomem = true;
+               break;
+       default:
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+static void bo_driver_io_mem_free(struct ttm_bo_device *bdev,
+                                 struct ttm_mem_reg *mem)
+{ }
+
+static struct ttm_bo_driver bo_driver = {
+       .ttm_tt_create = bo_driver_ttm_tt_create,
+       .ttm_tt_populate = ttm_pool_populate,
+       .ttm_tt_unpopulate = ttm_pool_unpopulate,
+       .init_mem_type = bo_driver_init_mem_type,
+       .eviction_valuable = ttm_bo_eviction_valuable,
+       .evict_flags = bo_driver_evict_flags,
+       .verify_access = bo_driver_verify_access,
+       .io_mem_reserve = bo_driver_io_mem_reserve,
+       .io_mem_free = bo_driver_io_mem_free,
+};
+
+/*
+ * struct drm_vram_mm
+ */
+
+/**
+ * drm_vram_mm_init() - Initialize an instance of VRAM MM.
+ * @vmm:       the VRAM MM instance to initialize
+ * @dev:       the DRM device
+ * @vram_base: the base address of the video memory
+ * @vram_size: the size of the video memory in bytes
+ * @funcs:     callback functions for buffer objects
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev,
+                    uint64_t vram_base, size_t vram_size,
+                    const struct drm_vram_mm_funcs *funcs)
+{
+       int ret;
+
+       vmm->vram_base = vram_base;
+       vmm->vram_size = vram_size;
+       vmm->funcs = funcs;
+
+       ret = ttm_bo_device_init(&vmm->bdev, &bo_driver,
+                                dev->anon_inode->i_mapping,
+                                true);
+       if (ret)
+               return ret;
+
+       ret = ttm_bo_init_mm(&vmm->bdev, TTM_PL_VRAM, vram_size >> PAGE_SHIFT);
+       if (ret)
+               return ret;
+
+       return 0;
+}
+EXPORT_SYMBOL(drm_vram_mm_init);
+
+/**
+ * drm_vram_mm_cleanup() - Cleans up an initialized instance of VRAM MM.
+ * @vmm:       the VRAM MM instance to clean up
+ */
+void drm_vram_mm_cleanup(struct drm_vram_mm *vmm)
+{
+       ttm_bo_device_release(&vmm->bdev);
+}
+EXPORT_SYMBOL(drm_vram_mm_cleanup);
+
+/**
+ * drm_vram_mm_mmap() - Helper for implementing &struct file_operations.mmap()
+ * @filp:      the mapping's file structure
+ * @vma:       the mapping's memory area
+ * @vmm:       the VRAM MM instance
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma,
+                    struct drm_vram_mm *vmm)
+{
+       return ttm_bo_mmap(filp, vma, &vmm->bdev);
+}
+EXPORT_SYMBOL(drm_vram_mm_mmap);
+
+/*
+ * Helpers for integration with struct drm_device
+ */
+
+/**
+ * drm_vram_helper_alloc_mm - Allocates a device's instance of \
+       &struct drm_vram_mm
+ * @dev:       the DRM device
+ * @vram_base: the base address of the video memory
+ * @vram_size: the size of the video memory in bytes
+ * @funcs:     callback functions for buffer objects
+ *
+ * Returns:
+ * The new instance of &struct drm_vram_mm on success, or
+ * an ERR_PTR()-encoded errno code otherwise.
+ */
+struct drm_vram_mm *drm_vram_helper_alloc_mm(
+       struct drm_device *dev, uint64_t vram_base, size_t vram_size,
+       const struct drm_vram_mm_funcs *funcs)
+{
+       int ret;
+
+       if (WARN_ON(dev->vram_mm))
+               return dev->vram_mm;
+
+       dev->vram_mm = kzalloc(sizeof(*dev->vram_mm), GFP_KERNEL);
+       if (!dev->vram_mm)
+               return ERR_PTR(-ENOMEM);
+
+       ret = drm_vram_mm_init(dev->vram_mm, dev, vram_base, vram_size, funcs);
+       if (ret)
+               goto err_kfree;
+
+       return dev->vram_mm;
+
+err_kfree:
+       kfree(dev->vram_mm);
+       dev->vram_mm = NULL;
+       return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(drm_vram_helper_alloc_mm);
+
+/**
+ * drm_vram_helper_release_mm - Releases a device's instance of \
+       &struct drm_vram_mm
+ * @dev:       the DRM device
+ */
+void drm_vram_helper_release_mm(struct drm_device *dev)
+{
+       if (!dev->vram_mm)
+               return;
+
+       drm_vram_mm_cleanup(dev->vram_mm);
+       kfree(dev->vram_mm);
+       dev->vram_mm = NULL;
+}
+EXPORT_SYMBOL(drm_vram_helper_release_mm);
+
+/*
+ * Helpers for &struct file_operations
+ */
+
+/**
+ * drm_vram_mm_file_operations_mmap() - \
+       Implements &struct file_operations.mmap()
+ * @filp:      the mapping's file structure
+ * @vma:       the mapping's memory area
+ *
+ * Returns:
+ * 0 on success, or
+ * a negative error code otherwise.
+ */
+int drm_vram_mm_file_operations_mmap(
+       struct file *filp, struct vm_area_struct *vma)
+{
+       struct drm_file *file_priv = filp->private_data;
+       struct drm_device *dev = file_priv->minor->dev;
+
+       if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized"))
+               return -EINVAL;
+
+       return drm_vram_mm_mmap(filp, vma, dev->vram_mm);
+}
+EXPORT_SYMBOL(drm_vram_mm_file_operations_mmap);
index 33854c9..b24ddc4 100644 (file)
@@ -118,7 +118,6 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
        unsigned int n_obj, n_bomap_pages;
        size_t file_size, mmu_size;
        __le64 *bomap, *bomap_start;
-       unsigned long flags;
 
        /* Only catch the first event, or when manually re-armed */
        if (!etnaviv_dump_core)
@@ -135,13 +134,11 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
                    mmu_size + gpu->buffer.size;
 
        /* Add in the active command buffers */
-       spin_lock_irqsave(&gpu->sched.job_list_lock, flags);
        list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) {
                submit = to_etnaviv_submit(s_job);
                file_size += submit->cmdbuf.size;
                n_obj++;
        }
-       spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags);
 
        /* Add in the active buffer objects */
        list_for_each_entry(vram, &gpu->mmu->mappings, mmu_node) {
@@ -183,14 +180,12 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
                              gpu->buffer.size,
                              etnaviv_cmdbuf_get_va(&gpu->buffer));
 
-       spin_lock_irqsave(&gpu->sched.job_list_lock, flags);
        list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) {
                submit = to_etnaviv_submit(s_job);
                etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD,
                                      submit->cmdbuf.vaddr, submit->cmdbuf.size,
                                      etnaviv_cmdbuf_get_va(&submit->cmdbuf));
        }
-       spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags);
 
        /* Reserve space for the bomap */
        if (n_bomap_pages) {
index 6d24fea..a813c82 100644 (file)
@@ -109,7 +109,7 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
        }
 
        /* block scheduler */
-       drm_sched_stop(&gpu->sched);
+       drm_sched_stop(&gpu->sched, sched_job);
 
        if(sched_job)
                drm_sched_increase_karma(sched_job);
index 204c8e4..6536ed5 100644 (file)
  *
  **************************************************************************/
 
-#include <linux/module.h>
-#include <linux/kernel.h>
+#include <linux/console.h>
+#include <linux/delay.h>
 #include <linux/errno.h>
-#include <linux/string.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
 #include <linux/mm.h>
-#include <linux/tty.h>
+#include <linux/module.h>
 #include <linux/slab.h>
-#include <linux/delay.h>
-#include <linux/init.h>
-#include <linux/console.h>
+#include <linux/string.h>
+#include <linux/tty.h>
 
-#include <drm/drmP.h>
 #include <drm/drm.h>
 #include <drm/drm_crtc.h>
+#include <drm/drm_fourcc.h>
 
+#include "framebuffer.h"
 #include "psb_drv.h"
 #include "psb_reg.h"
-#include "framebuffer.h"
 
 /**
  *     psb_spank               -       reset the 2D engine
index b83648d..69551a2 100644 (file)
@@ -17,6 +17,8 @@
 #ifndef __BLITTER_H
 #define __BLITTER_H
 
+struct drm_psb_private;
+
 extern int gma_blt_wait_idle(struct drm_psb_private *dev_priv);
 
 #endif
index 34b8576..31b9319 100644 (file)
  **************************************************************************/
 
 #include <linux/backlight.h>
-#include <drm/drmP.h>
+#include <linux/delay.h>
+
 #include <drm/drm.h>
-#include <drm/gma_drm.h>
-#include "psb_drv.h"
-#include "psb_reg.h"
-#include "psb_intel_reg.h"
-#include "intel_bios.h"
+
 #include "cdv_device.h"
 #include "gma_device.h"
+#include "intel_bios.h"
+#include "psb_drv.h"
+#include "psb_intel_reg.h"
+#include "psb_reg.h"
 
 #define VGA_SR_INDEX           0x3c4
 #define VGA_SR_DATA            0x3c5
index 705c11d..19e544b 100644 (file)
  * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
  */
 
+struct drm_crtc;
+struct drm_device;
+struct psb_intel_mode_device;
+
 extern const struct drm_crtc_helper_funcs cdv_intel_helper_funcs;
 extern const struct drm_crtc_funcs cdv_intel_crtc_funcs;
 extern const struct gma_clock_funcs cdv_clock_funcs;
index cb5a14b..29c36d6 100644 (file)
  *     Eric Anholt <eric@anholt.net>
  */
 
+#include <linux/delay.h>
 #include <linux/i2c.h>
-#include <drm/drmP.h>
+#include <linux/pm_runtime.h>
 
+#include "cdv_device.h"
 #include "intel_bios.h"
+#include "power.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "power.h"
-#include "cdv_device.h"
-#include <linux/pm_runtime.h>
 
 
 static void cdv_intel_crt_dpms(struct drm_encoder *encoder, int mode)
index 17db4b4..9be7c37 100644 (file)
  *     Eric Anholt <eric@anholt.net>
  */
 
+#include <linux/delay.h>
 #include <linux/i2c.h>
 
-#include <drm/drmP.h>
+#include <drm/drm_crtc.h>
+
+#include "cdv_device.h"
 #include "framebuffer.h"
+#include "gma_display.h"
+#include "power.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "gma_display.h"
-#include "power.h"
-#include "cdv_device.h"
 
 static bool cdv_intel_find_dp_pll(const struct gma_limit_t *limit,
                                  struct drm_crtc *crtc, int target,
index 90ed200..570b595 100644 (file)
  */
 
 #include <linux/i2c.h>
-#include <linux/slab.h>
 #include <linux/module.h>
-#include <drm/drmP.h>
+#include <linux/slab.h>
+
 #include <drm/drm_crtc.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/drm_dp_helper.h>
+
+#include "gma_display.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "gma_display.h"
-#include <drm/drm_dp_helper.h>
 
 /**
  * struct i2c_algo_dp_aux_data - driver interface structure for i2c over dp
index 4e4e4a6..1711a41 100644 (file)
  *     We should probably make this generic and share it with Medfield
  */
 
-#include <drm/drmP.h>
+#include <linux/pm_runtime.h>
+
 #include <drm/drm.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_edid.h>
-#include "psb_intel_drv.h"
+
+#include "cdv_device.h"
 #include "psb_drv.h"
+#include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "cdv_device.h"
-#include <linux/pm_runtime.h>
 
 /* hdmi control bits */
 #define HDMI_NULL_PACKETS_DURING_VSYNC (1 << 9)
index 9c84461..50c2172 100644 (file)
  *     Jesse Barnes <jesse.barnes@intel.com>
  */
 
-#include <linux/i2c.h>
 #include <linux/dmi.h>
-#include <drm/drmP.h>
+#include <linux/i2c.h>
+#include <linux/pm_runtime.h>
 
+#include "cdv_device.h"
 #include "intel_bios.h"
+#include "power.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "power.h"
-#include <linux/pm_runtime.h>
-#include "cdv_device.h"
 
 /**
  * LVDS I2C backlight control macros
index a9d3a4a..26d95d8 100644 (file)
  *
  **************************************************************************/
 
-#include <linux/module.h>
-#include <linux/kernel.h>
+#include <linux/console.h>
+#include <linux/delay.h>
 #include <linux/errno.h>
-#include <linux/string.h>
-#include <linux/pfn_t.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
 #include <linux/mm.h>
-#include <linux/tty.h>
+#include <linux/module.h>
+#include <linux/pfn_t.h>
 #include <linux/slab.h>
-#include <linux/delay.h>
-#include <linux/init.h>
-#include <linux/console.h>
+#include <linux/string.h>
+#include <linux/tty.h>
 
-#include <drm/drmP.h>
 #include <drm/drm.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 
-#include "psb_drv.h"
-#include "psb_intel_reg.h"
-#include "psb_intel_drv.h"
 #include "framebuffer.h"
 #include "gtt.h"
+#include "psb_drv.h"
+#include "psb_intel_drv.h"
+#include "psb_intel_reg.h"
 
 static const struct drm_framebuffer_funcs psb_fb_funcs = {
        .destroy = drm_gem_fb_destroy,
@@ -232,7 +232,7 @@ static int psb_framebuffer_init(struct drm_device *dev,
         * Reject unknown formats, YUV formats, and formats with more than
         * 4 bytes per pixel.
         */
-       info = drm_format_info(mode_cmd->pixel_format);
+       info = drm_get_format_info(dev, mode_cmd);
        if (!info || !info->depth || info->cpp[0] > 4)
                return -EINVAL;
 
index e8e6357..b54477a 100644 (file)
@@ -22,7 +22,6 @@
 #ifndef _FRAMEBUFFER_H_
 #define _FRAMEBUFFER_H_
 
-#include <drm/drmP.h>
 #include <drm/drm_fb_helper.h>
 
 #include "psb_drv.h"
index 576f1b2..49c8aa6 100644 (file)
  *             accelerated operations on a GEM object)
  */
 
-#include <drm/drmP.h>
+#include <linux/pagemap.h>
+
 #include <drm/drm.h>
-#include <drm/gma_drm.h>
 #include <drm/drm_vma_manager.h>
+
 #include "psb_drv.h"
 
 void psb_gem_free_object(struct drm_gem_object *obj)
index a7fb6de..7d52871 100644 (file)
@@ -13,7 +13,6 @@
  *
  **************************************************************************/
 
-#include <drm/drmP.h>
 #include "psb_drv.h"
 
 void gma_get_core_freq(struct drm_device *dev)
index e1dbb00..9f0bb91 100644 (file)
@@ -15,6 +15,7 @@
 
 #ifndef _GMA_DEVICE_H
 #define _GMA_DEVICE_H
+struct drm_device;
 
 extern void gma_get_core_freq(struct drm_device *dev);
 
index 09c1161..af75ba8 100644 (file)
  *     Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  */
 
-#include <drm/drmP.h>
+#include <linux/delay.h>
+#include <linux/highmem.h>
+
+#include <drm/drm_crtc.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_vblank.h>
+
+#include "framebuffer.h"
 #include "gma_display.h"
+#include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "psb_drv.h"
-#include "framebuffer.h"
 
 /**
  * Returns whether any output on the specified pipe is of the specified type
index 239c374..e970cb8 100644 (file)
@@ -24,6 +24,9 @@
 
 #include <linux/pm_runtime.h>
 
+struct drm_encoder;
+struct drm_mode_set;
+
 struct gma_clock_t {
        /* given values */
        int n;
index 3949b09..0ac89c5 100644 (file)
  *         Alan Cox <alan@linux.intel.com>
  */
 
-#include <drm/drmP.h>
 #include <linux/shmem_fs.h>
+
 #include <asm/set_memory.h>
-#include "psb_drv.h"
+
 #include "blitter.h"
+#include "psb_drv.h"
 
 
 /*
index cb0c3a2..c9449aa 100644 (file)
@@ -20,7 +20,6 @@
 #ifndef _PSB_GTT_H_
 #define _PSB_GTT_H_
 
-#include <drm/drmP.h>
 #include <drm/drm_gem.h>
 
 /* This wants cleaning up with respect to the psb_dev and un-needed stuff */
index e019ea2..477315b 100644 (file)
  *    Eric Anholt <eric@anholt.net>
  *
  */
-#include <drm/drmP.h>
 #include <drm/drm.h>
-#include <drm/gma_drm.h>
+#include <drm/drm_dp_helper.h>
+
+#include "intel_bios.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "intel_bios.h"
 
 #define        SLAVE_ADDR1     0x70
 #define        SLAVE_ADDR2     0x72
index e0ccf1d..bb3b813 100644 (file)
@@ -22,8 +22,7 @@
 #ifndef _INTEL_BIOS_H_
 #define _INTEL_BIOS_H_
 
-#include <drm/drmP.h>
-#include <drm/drm_dp_helper.h>
+struct drm_device;
 
 struct vbt_header {
        u8 signature[20];               /**< Always starts with 'VBT$' */
index e7e2218..a083fbf 100644 (file)
  *     Eric Anholt <eric@anholt.net>
  *     Chris Wilson <chris@chris-wilson.co.uk>
  */
-#include <linux/module.h>
-#include <linux/i2c.h>
+
+#include <linux/delay.h>
 #include <linux/i2c-algo-bit.h>
-#include <drm/drmP.h>
-#include "psb_intel_drv.h"
-#include <drm/gma_drm.h>
+#include <linux/i2c.h>
+#include <linux/module.h>
+
 #include "psb_drv.h"
+#include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
 
 #define _wait_for(COND, MS, W) ({ \
index 98a28c2..29451c5 100644 (file)
  * Authors:
  *     Eric Anholt <eric@anholt.net>
  */
+
+#include <linux/delay.h>
 #include <linux/export.h>
-#include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
+#include <linux/i2c.h>
 
 #include "psb_drv.h"
 #include "psb_intel_reg.h"
index e2ab858..7450908 100644 (file)
  *
  **************************************************************************/
 
-#include "psb_drv.h"
-#include "mid_bios.h"
-#include "mdfld_output.h"
-#include "mdfld_dsi_output.h"
-#include "tc35876x-dsi-lvds.h"
+#include <linux/delay.h>
 
 #include <asm/intel_scu_ipc.h>
 
+#include "mdfld_dsi_output.h"
+#include "mdfld_output.h"
+#include "mid_bios.h"
+#include "psb_drv.h"
+#include "tc35876x-dsi-lvds.h"
+
 #ifdef CONFIG_BACKLIGHT_CLASS_DEVICE
 
 #define MRST_BLC_MAX_PWM_REG_FREQ          0xFFFF
@@ -342,7 +344,7 @@ static int mdfld_restore_display_registers(struct drm_device *dev, int pipenum)
 
        if (pipenum == 1) {
                /* restore palette (gamma) */
-               /*DRM_UDELAY(50000); */
+               /* udelay(50000); */
                for (i = 0; i < 256; i++)
                        PSB_WVDC32(pipe->palette[i], map->palette + (i << 2));
 
@@ -404,7 +406,7 @@ static int mdfld_restore_display_registers(struct drm_device *dev, int pipenum)
        PSB_WVDC32(pipe->conf, map->conf);
 
        /* restore palette (gamma) */
-       /*DRM_UDELAY(50000); */
+       /* udelay(50000); */
        for (i = 0; i < 256; i++)
                PSB_WVDC32(pipe->palette[i], map->palette + (i << 2));
 
index d0bf5a1..d4c65f2 100644 (file)
  * Jackie Li<yaodong.li@intel.com>
  */
 
+#include <linux/delay.h>
+
 #include "mdfld_dsi_dpi.h"
-#include "mdfld_output.h"
 #include "mdfld_dsi_pkg_sender.h"
+#include "mdfld_output.h"
 #include "psb_drv.h"
 #include "tc35876x-dsi-lvds.h"
 
index fe02092..03023fa 100644 (file)
  * Jackie Li<yaodong.li@intel.com>
  */
 
-#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/moduleparam.h>
+#include <linux/pm_runtime.h>
+
+#include <asm/intel_scu_ipc.h>
 
-#include "mdfld_dsi_output.h"
 #include "mdfld_dsi_dpi.h"
-#include "mdfld_output.h"
+#include "mdfld_dsi_output.h"
 #include "mdfld_dsi_pkg_sender.h"
+#include "mdfld_output.h"
 #include "tc35876x-dsi-lvds.h"
-#include <linux/pm_runtime.h>
-#include <asm/intel_scu_ipc.h>
 
 /* get the LABC from command line. */
 static int LABC_control = 1;
index 5b646c1..0cccfe4 100644 (file)
 #define __MDFLD_DSI_OUTPUT_H__
 
 #include <linux/backlight.h>
-#include <drm/drmP.h>
+
+#include <asm/intel-mid.h>
+
 #include <drm/drm.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_edid.h>
 
+#include "mdfld_output.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "mdfld_output.h"
-
-#include <asm/intel-mid.h>
 
 #define FLD_MASK(start, end)   (((1 << ((start) - (end) + 1)) - 1) << (end))
 #define FLD_VAL(val, start, end) (((val) << (end)) & FLD_MASK(start, end))
index c50534c..6e0de83 100644 (file)
  * Jackie Li<yaodong.li@intel.com>
  */
 
+#include <linux/delay.h>
 #include <linux/freezer.h>
+
 #include <video/mipi_display.h>
 
+#include "mdfld_dsi_dpi.h"
 #include "mdfld_dsi_output.h"
 #include "mdfld_dsi_pkg_sender.h"
-#include "mdfld_dsi_dpi.h"
 
 #define MDFLD_DSI_READ_MAX_COUNT               5000
 
index 2b9fa01..c2bd836 100644 (file)
  *     Eric Anholt <eric@anholt.net>
  */
 
+#include <linux/delay.h>
 #include <linux/i2c.h>
 #include <linux/pm_runtime.h>
 
-#include <drm/drmP.h>
-#include "psb_intel_reg.h"
-#include "gma_display.h"
+#include <drm/drm_crtc.h>
+#include <drm/drm_fourcc.h>
+
 #include "framebuffer.h"
-#include "mdfld_output.h"
+#include "gma_display.h"
 #include "mdfld_dsi_output.h"
+#include "mdfld_output.h"
+#include "psb_intel_reg.h"
 
 /* Hardcoded currently */
 static int ksel = KSEL_CRYSTAL_19;
index dc0c6c3..49c92de 100644 (file)
@@ -27,6 +27,8 @@
  * Scott Rowe <scott.m.rowe@intel.com>
  */
 
+#include <linux/delay.h>
+
 #include "mdfld_dsi_dpi.h"
 #include "mdfld_dsi_pkg_sender.h"
 
index 237041a..d624caf 100644 (file)
  * - Check ioremap failures
  */
 
-#include <drm/drmP.h>
 #include <drm/drm.h>
-#include <drm/gma_drm.h>
-#include "psb_drv.h"
+
 #include "mid_bios.h"
+#include "psb_drv.h"
 
 static void mid_get_fuse_settings(struct drm_device *dev)
 {
index 00e7d56..59e43a6 100644 (file)
@@ -16,6 +16,7 @@
  * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
  *
  **************************************************************************/
+struct drm_device;
 
 extern int mid_chip_setup(struct drm_device *dev);
 
index ccb161c..9d588be 100644 (file)
  * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
  *
  **************************************************************************/
-#include <drm/drmP.h>
+
+#include <linux/highmem.h>
+
+#include "mmu.h"
 #include "psb_drv.h"
 #include "psb_reg.h"
-#include "mmu.h"
 
 /*
  * Code for the SGX MMU:
index 30adbbe..e41bcab 100644 (file)
@@ -17,6 +17,8 @@
  *
  **************************************************************************/
 
+struct psb_intel_mode_device;
+
 /* MID device specific descriptors */
 
 struct oaktrail_timing_info {
index 1b7fd6a..b248978 100644 (file)
  * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
  */
 
+#include <linux/delay.h>
 #include <linux/i2c.h>
 #include <linux/pm_runtime.h>
 
-#include <drm/drmP.h>
+#include <drm/drm_fourcc.h>
+
 #include "framebuffer.h"
+#include "gma_display.h"
+#include "power.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "gma_display.h"
-#include "power.h"
 
 #define MRST_LIMIT_LVDS_100L   0
 #define MRST_LIMIT_LVDS_83     1
index ba30b43..3cd39d1 100644 (file)
  **************************************************************************/
 
 #include <linux/backlight.h>
-#include <linux/module.h>
+#include <linux/delay.h>
 #include <linux/dmi.h>
-#include <drm/drmP.h>
-#include <drm/drm.h>
-#include <drm/gma_drm.h>
-#include "psb_drv.h"
-#include "psb_reg.h"
-#include "psb_intel_reg.h"
+#include <linux/module.h>
+
 #include <asm/intel-mid.h>
 #include <asm/intel_scu_ipc.h>
-#include "mid_bios.h"
+
+#include <drm/drm.h>
+
 #include "intel_bios.h"
+#include "mid_bios.h"
+#include "psb_drv.h"
+#include "psb_intel_reg.h"
+#include "psb_reg.h"
 
 static int oaktrail_output_init(struct drm_device *dev)
 {
@@ -327,7 +329,7 @@ static int oaktrail_restore_display_registers(struct drm_device *dev)
 
        /* Actually enable it */
        PSB_WVDC32(p->dpll, MRST_DPLL_A);
-       DRM_UDELAY(150);
+       udelay(150);
 
        /* Restore mode */
        PSB_WVDC32(p->htotal, HTOTAL_A);
index c6d72de..f4c5208 100644 (file)
  *     Li Peng <peng.li@intel.com>
  */
 
-#include <drm/drmP.h>
+#include <linux/delay.h>
+
 #include <drm/drm.h>
+
+#include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "psb_drv.h"
 
 #define HDMI_READ(reg)         readl(hdmi_dev->regs + (reg))
 #define HDMI_WRITE(reg, val)   writel(val, hdmi_dev->regs + (reg))
@@ -815,7 +817,7 @@ void oaktrail_hdmi_restore(struct drm_device *dev)
        PSB_WVDC32(hdmi_dev->saveDPLL_ADJUST, DPLL_ADJUST);
        PSB_WVDC32(hdmi_dev->saveDPLL_UPDATE, DPLL_UPDATE);
        PSB_WVDC32(hdmi_dev->saveDPLL_CLK_ENABLE, DPLL_CLK_ENABLE);
-       DRM_UDELAY(150);
+       udelay(150);
 
        /* pipe */
        PSB_WVDC32(pipeb->src, PIPEBSRC);
index 83babb8..a9243bd 100644 (file)
  */
 
 #include <linux/i2c.h>
-#include <drm/drmP.h>
+#include <linux/pm_runtime.h>
+
 #include <asm/intel-mid.h>
 
 #include "intel_bios.h"
+#include "power.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "power.h"
-#include <linux/pm_runtime.h>
 
 /* The max/min PWM frequency in BPCR[31:17] - */
 /* The smallest number is 1 (not 0) that can fit in the
index f913a62..baaf821 100644 (file)
  *
  */
 
+#include <linux/delay.h>
+#include <linux/i2c-algo-bit.h>
+#include <linux/i2c.h>
+#include <linux/init.h>
+#include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/pci.h>
 #include <linux/types.h>
-#include <linux/i2c.h>
-#include <linux/i2c-algo-bit.h>
-#include <linux/init.h>
-#include <linux/io.h>
-#include <linux/delay.h>
 
-#include <drm/drmP.h>
 #include "psb_drv.h"
 #include "psb_intel_reg.h"
 
index 56d8708..0c89c4d 100644 (file)
@@ -31,7 +31,9 @@
 #define _PSB_POWERMGMT_H_
 
 #include <linux/pci.h>
-#include <drm/drmP.h>
+
+struct device;
+struct drm_device;
 
 void gma_power_init(struct drm_device *dev);
 void gma_power_uninit(struct drm_device *dev);
index dc0f852..38464bd 100644 (file)
  **************************************************************************/
 
 #include <linux/backlight.h>
-#include <drm/drmP.h>
+
 #include <drm/drm.h>
-#include <drm/gma_drm.h>
-#include "psb_drv.h"
-#include "psb_reg.h"
-#include "psb_intel_reg.h"
+
+#include "gma_device.h"
 #include "intel_bios.h"
 #include "psb_device.h"
-#include "gma_device.h"
+#include "psb_drv.h"
+#include "psb_intel_reg.h"
+#include "psb_reg.h"
 
 static int psb_output_init(struct drm_device *dev)
 {
index eefaf4d..5767fa1 100644 (file)
  *
  **************************************************************************/
 
-#include <drm/drmP.h>
+#include <linux/cpu.h>
+#include <linux/module.h>
+#include <linux/notifier.h>
+#include <linux/pm_runtime.h>
+#include <linux/spinlock.h>
+
+#include <asm/set_memory.h>
+
+#include <acpi/video.h>
+
 #include <drm/drm.h>
-#include "psb_drv.h"
+#include <drm/drm_drv.h>
+#include <drm/drm_file.h>
+#include <drm/drm_ioctl.h>
+#include <drm/drm_irq.h>
+#include <drm/drm_pci.h>
+#include <drm/drm_pciids.h>
+#include <drm/drm_vblank.h>
+
 #include "framebuffer.h"
-#include "psb_reg.h"
-#include "psb_intel_reg.h"
 #include "intel_bios.h"
 #include "mid_bios.h"
-#include <drm/drm_pciids.h>
 #include "power.h"
-#include <linux/cpu.h>
-#include <linux/notifier.h>
-#include <linux/spinlock.h>
-#include <linux/pm_runtime.h>
-#include <acpi/video.h>
-#include <linux/module.h>
-#include <asm/set_memory.h>
+#include "psb_drv.h"
+#include "psb_intel_reg.h"
+#include "psb_reg.h"
 
 static struct drm_driver driver;
 static int psb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
index bc608dd..5a73f13 100644 (file)
 #include <linux/kref.h>
 #include <linux/mm_types.h>
 
-#include <drm/drmP.h>
-#include <drm/gma_drm.h>
-#include "psb_reg.h"
-#include "psb_intel_drv.h"
+#include <drm/drm_device.h>
+
 #include "gma_display.h"
-#include "intel_bios.h"
 #include "gtt.h"
-#include "power.h"
-#include "opregion.h"
-#include "oaktrail.h"
+#include "intel_bios.h"
 #include "mmu.h"
+#include "oaktrail.h"
+#include "opregion.h"
+#include "power.h"
+#include "psb_intel_drv.h"
+#include "psb_reg.h"
 
 #define DRIVER_AUTHOR "Alan Cox <alan@linux.intel.com> and others"
 
index 8762efa..432cf44 100644 (file)
  *     Eric Anholt <eric@anholt.net>
  */
 
+#include <linux/delay.h>
 #include <linux/i2c.h>
 
-#include <drm/drmP.h>
 #include <drm/drm_plane_helper.h>
+
 #include "framebuffer.h"
+#include "gma_display.h"
+#include "power.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "gma_display.h"
-#include "power.h"
 
 #define INTEL_LIMIT_I9XX_SDVO_DAC   0
 #define INTEL_LIMIT_I9XX_LVDS      1
index 8baf632..d27300c 100644 (file)
  */
 
 #include <linux/i2c.h>
-#include <drm/drmP.h>
+#include <linux/pm_runtime.h>
 
 #include "intel_bios.h"
+#include "power.h"
 #include "psb_drv.h"
 #include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include "power.h"
-#include <linux/pm_runtime.h>
 
 /*
  * LVDS I2C backlight control macros
index fb4da3c..d00c6d4 100644 (file)
@@ -18,7 +18,7 @@
  */
 
 #include <linux/i2c.h>
-#include <drm/drmP.h>
+
 #include "psb_intel_drv.h"
 
 /**
index dd3cec0..264d7ad 100644 (file)
  * Authors:
  *     Eric Anholt <eric@anholt.net>
  */
-#include <linux/module.h>
+
+#include <linux/delay.h>
 #include <linux/i2c.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/slab.h>
-#include <linux/delay.h>
-#include <drm/drmP.h>
+
 #include <drm/drm_crtc.h>
 #include <drm/drm_edid.h>
-#include "psb_intel_drv.h"
-#include <drm/gma_drm.h>
+
 #include "psb_drv.h"
-#include "psb_intel_sdvo_regs.h"
+#include "psb_intel_drv.h"
 #include "psb_intel_reg.h"
-#include <linux/kernel.h>
+#include "psb_intel_sdvo_regs.h"
 
 #define SDVO_TMDS_MASK (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_TMDS1)
 #define SDVO_RGB_MASK  (SDVO_OUTPUT_RGB0 | SDVO_OUTPUT_RGB1)
index 78eb109..df4fe2e 100644 (file)
 /*
  */
 
-#include <drm/drmP.h>
+#include <drm/drm_vblank.h>
+
+#include "mdfld_output.h"
+#include "power.h"
 #include "psb_drv.h"
-#include "psb_reg.h"
 #include "psb_intel_reg.h"
-#include "power.h"
 #include "psb_irq.h"
-#include "mdfld_output.h"
+#include "psb_reg.h"
 
 /*
  * inline functions
index e6a81a8..f28fc4d 100644 (file)
@@ -24,7 +24,7 @@
 #ifndef _PSB_IRQ_H_
 #define _PSB_IRQ_H_
 
-#include <drm/drmP.h>
+struct drm_device;
 
 bool sysirq_init(struct drm_device *dev);
 void sysirq_uninit(struct drm_device *dev);
index be6dda5..2f5f30b 100644 (file)
  * Authors: Thomas Hellstrom <thomas-at-tungstengraphics-dot-com>
  **************************************************************************/
 
-#include <drm/drmP.h>
+#include <linux/spinlock.h>
+
 #include "psb_drv.h"
-#include "psb_reg.h"
 #include "psb_intel_reg.h"
-#include <linux/spinlock.h>
+#include "psb_reg.h"
 
 static void psb_lid_timer_func(struct timer_list *t)
 {
index 37c997e..7de3ce6 100644 (file)
  *
  */
 
-#include "mdfld_dsi_dpi.h"
-#include "mdfld_output.h"
-#include "mdfld_dsi_pkg_sender.h"
-#include "tc35876x-dsi-lvds.h"
-#include <linux/platform_data/tc35876x.h>
+#include <linux/delay.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/platform_data/tc35876x.h>
+
 #include <asm/intel_scu_ipc.h>
 
+#include "mdfld_dsi_dpi.h"
+#include "mdfld_dsi_pkg_sender.h"
+#include "mdfld_output.h"
+#include "tc35876x-dsi-lvds.h"
+
 static struct i2c_client *tc35876x_client;
 static struct i2c_client *cmi_lcd_i2c_client;
 
index 7cf8d38..f20eedf 100644 (file)
@@ -3,7 +3,7 @@ config DRM_HISI_HIBMC
        tristate "DRM Support for Hisilicon Hibmc"
        depends on DRM && PCI && MMU
        select DRM_KMS_HELPER
-       select DRM_TTM
+       select DRM_VRAM_HELPER
 
        help
          Choose this option if you have a Hisilicon Hibmc soc chipset.
index 9316b72..fbdf495 100644 (file)
@@ -96,27 +96,26 @@ static void hibmc_plane_atomic_update(struct drm_plane *plane,
        struct drm_plane_state  *state  = plane->state;
        u32 reg;
        int ret;
-       u64 gpu_addr = 0;
+       s64 gpu_addr = 0;
        unsigned int line_l;
        struct hibmc_drm_private *priv = plane->dev->dev_private;
        struct hibmc_framebuffer *hibmc_fb;
-       struct hibmc_bo *bo;
+       struct drm_gem_vram_object *gbo;
 
        if (!state->fb)
                return;
 
        hibmc_fb = to_hibmc_framebuffer(state->fb);
-       bo = gem_to_hibmc_bo(hibmc_fb->obj);
-       ret = ttm_bo_reserve(&bo->bo, true, false, NULL);
+       gbo = drm_gem_vram_of_gem(hibmc_fb->obj);
+
+       ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret) {
-               DRM_ERROR("failed to reserve ttm_bo: %d", ret);
+               DRM_ERROR("failed to pin bo: %d", ret);
                return;
        }
-
-       ret = hibmc_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr);
-       ttm_bo_unreserve(&bo->bo);
-       if (ret) {
-               DRM_ERROR("failed to pin hibmc_bo: %d", ret);
+       gpu_addr = drm_gem_vram_offset(gbo);
+       if (gpu_addr < 0) {
+               drm_gem_vram_unpin(gbo);
                return;
        }
 
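A note on the hunk above: `gpu_addr` changes from `u64` to `s64` because `drm_gem_vram_offset()` folds errors into its return value as negative values. A minimal userspace sketch of that convention, using a hypothetical stand-in function (not the real helper):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

typedef int64_t s64;

/* Hypothetical stand-in for drm_gem_vram_offset(): returns the pinned
 * buffer's VRAM offset, or a negative errno when the object is not
 * pinned. Because errors share the return value with valid offsets,
 * the caller's variable must be signed (s64); with an unsigned u64,
 * the `gpu_addr < 0` error check in the hunk above could never fire. */
static s64 vram_offset(int pinned, s64 offset)
{
	if (!pinned)
		return -EINVAL; /* illustrative error code */
	return offset;
}
```

The same signed-return idiom is used throughout the kernel (`PTR_ERR()`, `IS_ERR()`), which is why the error path unpins and bails out when the returned value is negative.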
index 8ed94fc..725c0f5 100644 (file)
 
 static const struct file_operations hibmc_fops = {
        .owner          = THIS_MODULE,
-       .open           = drm_open,
-       .release        = drm_release,
-       .unlocked_ioctl = drm_ioctl,
-       .compat_ioctl   = drm_compat_ioctl,
-       .mmap           = hibmc_mmap,
-       .poll           = drm_poll,
-       .read           = drm_read,
-       .llseek         = no_llseek,
+       DRM_VRAM_MM_FILE_OPERATIONS
 };
 
 static irqreturn_t hibmc_drm_interrupt(int irq, void *arg)
@@ -63,9 +56,10 @@ static struct drm_driver hibmc_driver = {
        .desc                   = "hibmc drm driver",
        .major                  = 1,
        .minor                  = 0,
-       .gem_free_object_unlocked = hibmc_gem_free_object,
+       .gem_free_object_unlocked =
+               drm_gem_vram_driver_gem_free_object_unlocked,
        .dumb_create            = hibmc_dumb_create,
-       .dumb_map_offset        = hibmc_dumb_mmap_offset,
+       .dumb_map_offset        = drm_gem_vram_driver_dumb_mmap_offset,
        .irq_handler            = hibmc_drm_interrupt,
 };
 
index 0a381c2..3967693 100644 (file)
@@ -23,7 +23,8 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_gem.h>
-#include <drm/ttm/ttm_bo_driver.h>
+#include <drm/drm_gem_vram_helper.h>
+#include <drm/drm_vram_mm_helper.h>
 
 struct hibmc_framebuffer {
        struct drm_framebuffer fb;
@@ -48,36 +49,12 @@ struct hibmc_drm_private {
        struct drm_device  *dev;
        bool mode_config_initialized;
 
-       /* ttm */
-       struct ttm_bo_device bdev;
-       bool initialized;
-
        /* fbdev */
        struct hibmc_fbdev *fbdev;
-       bool mm_inited;
 };
 
 #define to_hibmc_framebuffer(x) container_of(x, struct hibmc_framebuffer, fb)
 
-struct hibmc_bo {
-       struct ttm_buffer_object bo;
-       struct ttm_placement placement;
-       struct ttm_bo_kmap_obj kmap;
-       struct drm_gem_object gem;
-       struct ttm_place placements[3];
-       int pin_count;
-};
-
-static inline struct hibmc_bo *hibmc_bo(struct ttm_buffer_object *bo)
-{
-       return container_of(bo, struct hibmc_bo, bo);
-}
-
-static inline struct hibmc_bo *gem_to_hibmc_bo(struct drm_gem_object *gem)
-{
-       return container_of(gem, struct hibmc_bo, gem);
-}
-
 void hibmc_set_power_mode(struct hibmc_drm_private *priv,
                          unsigned int power_mode);
 void hibmc_set_current_gate(struct hibmc_drm_private *priv,
@@ -97,14 +74,8 @@ hibmc_framebuffer_init(struct drm_device *dev,
 
 int hibmc_mm_init(struct hibmc_drm_private *hibmc);
 void hibmc_mm_fini(struct hibmc_drm_private *hibmc);
-int hibmc_bo_pin(struct hibmc_bo *bo, u32 pl_flag, u64 *gpu_addr);
-int hibmc_bo_unpin(struct hibmc_bo *bo);
-void hibmc_gem_free_object(struct drm_gem_object *obj);
 int hibmc_dumb_create(struct drm_file *file, struct drm_device *dev,
                      struct drm_mode_create_dumb *args);
-int hibmc_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
-                          u32 handle, u64 *offset);
-int hibmc_mmap(struct file *filp, struct vm_area_struct *vma);
 
 extern const struct drm_mode_config_funcs hibmc_mode_funcs;
 
index 8026859..bd5fbb2 100644 (file)
@@ -63,10 +63,10 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
        struct drm_mode_fb_cmd2 mode_cmd;
        struct drm_gem_object *gobj = NULL;
        int ret = 0;
-       int ret1;
        size_t size;
        unsigned int bytes_per_pixel;
-       struct hibmc_bo *bo = NULL;
+       struct drm_gem_vram_object *gbo = NULL;
+       void *base;
 
        DRM_DEBUG_DRIVER("surface width(%d), height(%d) and bpp(%d)\n",
                         sizes->surface_width, sizes->surface_height,
@@ -88,26 +88,20 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
                return -ENOMEM;
        }
 
-       bo = gem_to_hibmc_bo(gobj);
+       gbo = drm_gem_vram_of_gem(gobj);
 
-       ret = ttm_bo_reserve(&bo->bo, true, false, NULL);
-       if (ret) {
-               DRM_ERROR("failed to reserve ttm_bo: %d\n", ret);
-               goto out_unref_gem;
-       }
-
-       ret = hibmc_bo_pin(bo, TTM_PL_FLAG_VRAM, NULL);
+       ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret) {
                DRM_ERROR("failed to pin fbcon: %d\n", ret);
-               goto out_unreserve_ttm_bo;
+               goto out_unref_gem;
        }
 
-       ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-       if (ret) {
+       base = drm_gem_vram_kmap(gbo, true, NULL);
+       if (IS_ERR(base)) {
+               ret = PTR_ERR(base);
                DRM_ERROR("failed to kmap fbcon: %d\n", ret);
                goto out_unpin_bo;
        }
-       ttm_bo_unreserve(&bo->bo);
 
        info = drm_fb_helper_alloc_fbi(helper);
        if (IS_ERR(info)) {
@@ -131,24 +125,17 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
 
        drm_fb_helper_fill_info(info, &priv->fbdev->helper, sizes);
 
-       info->screen_base = bo->kmap.virtual;
+       info->screen_base = base;
        info->screen_size = size;
 
-       info->fix.smem_start = bo->bo.mem.bus.offset + bo->bo.mem.bus.base;
+       info->fix.smem_start = gbo->bo.mem.bus.offset + gbo->bo.mem.bus.base;
        info->fix.smem_len = size;
        return 0;
 
 out_release_fbi:
-       ret1 = ttm_bo_reserve(&bo->bo, true, false, NULL);
-       if (ret1) {
-               DRM_ERROR("failed to rsv ttm_bo when release fbi: %d\n", ret1);
-               goto out_unref_gem;
-       }
-       ttm_bo_kunmap(&bo->kmap);
+       drm_gem_vram_kunmap(gbo);
 out_unpin_bo:
-       hibmc_bo_unpin(bo);
-out_unreserve_ttm_bo:
-       ttm_bo_unreserve(&bo->bo);
+       drm_gem_vram_unpin(gbo);
 out_unref_gem:
        drm_gem_object_put_unlocked(gobj);
 
index 6093c42..52fba8c 100644 (file)
  */
 
 #include <drm/drm_atomic_helper.h>
-#include <drm/ttm/ttm_page_alloc.h>
 
 #include "hibmc_drm_drv.h"
 
-static inline struct hibmc_drm_private *
-hibmc_bdev(struct ttm_bo_device *bd)
-{
-       return container_of(bd, struct hibmc_drm_private, bdev);
-}
-
-static void hibmc_bo_ttm_destroy(struct ttm_buffer_object *tbo)
-{
-       struct hibmc_bo *bo = container_of(tbo, struct hibmc_bo, bo);
-
-       drm_gem_object_release(&bo->gem);
-       kfree(bo);
-}
-
-static bool hibmc_ttm_bo_is_hibmc_bo(struct ttm_buffer_object *bo)
-{
-       return bo->destroy == &hibmc_bo_ttm_destroy;
-}
-
-static int
-hibmc_bo_init_mem_type(struct ttm_bo_device *bdev, u32 type,
-                      struct ttm_mem_type_manager *man)
-{
-       switch (type) {
-       case TTM_PL_SYSTEM:
-               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_MASK_CACHING;
-               man->default_caching = TTM_PL_FLAG_CACHED;
-               break;
-       case TTM_PL_VRAM:
-               man->func = &ttm_bo_manager_func;
-               man->flags = TTM_MEMTYPE_FLAG_FIXED |
-                       TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_FLAG_UNCACHED |
-                       TTM_PL_FLAG_WC;
-               man->default_caching = TTM_PL_FLAG_WC;
-               break;
-       default:
-               DRM_ERROR("unsupported memory type %u\n", type);
-               return -EINVAL;
-       }
-       return 0;
-}
-
-void hibmc_ttm_placement(struct hibmc_bo *bo, int domain)
-{
-       u32 count = 0;
-       u32 i;
-
-       bo->placement.placement = bo->placements;
-       bo->placement.busy_placement = bo->placements;
-       if (domain & TTM_PL_FLAG_VRAM)
-               bo->placements[count++].flags = TTM_PL_FLAG_WC |
-                       TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
-       if (domain & TTM_PL_FLAG_SYSTEM)
-               bo->placements[count++].flags = TTM_PL_MASK_CACHING |
-                       TTM_PL_FLAG_SYSTEM;
-       if (!count)
-               bo->placements[count++].flags = TTM_PL_MASK_CACHING |
-                       TTM_PL_FLAG_SYSTEM;
-
-       bo->placement.num_placement = count;
-       bo->placement.num_busy_placement = count;
-       for (i = 0; i < count; i++) {
-               bo->placements[i].fpfn = 0;
-               bo->placements[i].lpfn = 0;
-       }
-}
-
-static void
-hibmc_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
-{
-       struct hibmc_bo *hibmcbo = hibmc_bo(bo);
-
-       if (!hibmc_ttm_bo_is_hibmc_bo(bo))
-               return;
-
-       hibmc_ttm_placement(hibmcbo, TTM_PL_FLAG_SYSTEM);
-       *pl = hibmcbo->placement;
-}
-
-static int hibmc_bo_verify_access(struct ttm_buffer_object *bo,
-                                 struct file *filp)
-{
-       struct hibmc_bo *hibmcbo = hibmc_bo(bo);
-
-       return drm_vma_node_verify_access(&hibmcbo->gem.vma_node,
-                                         filp->private_data);
-}
-
-static int hibmc_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-                                   struct ttm_mem_reg *mem)
-{
-       struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-       struct hibmc_drm_private *hibmc = hibmc_bdev(bdev);
-
-       mem->bus.addr = NULL;
-       mem->bus.offset = 0;
-       mem->bus.size = mem->num_pages << PAGE_SHIFT;
-       mem->bus.base = 0;
-       mem->bus.is_iomem = false;
-       if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-               return -EINVAL;
-       switch (mem->mem_type) {
-       case TTM_PL_SYSTEM:
-               /* system memory */
-               return 0;
-       case TTM_PL_VRAM:
-               mem->bus.offset = mem->start << PAGE_SHIFT;
-               mem->bus.base = pci_resource_start(hibmc->dev->pdev, 0);
-               mem->bus.is_iomem = true;
-               break;
-       default:
-               return -EINVAL;
-       }
-       return 0;
-}
-
-static void hibmc_ttm_backend_destroy(struct ttm_tt *tt)
-{
-       ttm_tt_fini(tt);
-       kfree(tt);
-}
-
-static struct ttm_backend_func hibmc_tt_backend_func = {
-       .destroy = &hibmc_ttm_backend_destroy,
-};
-
-static struct ttm_tt *hibmc_ttm_tt_create(struct ttm_buffer_object *bo,
-                                         u32 page_flags)
-{
-       struct ttm_tt *tt;
-       int ret;
-
-       tt = kzalloc(sizeof(*tt), GFP_KERNEL);
-       if (!tt) {
-               DRM_ERROR("failed to allocate ttm_tt\n");
-               return NULL;
-       }
-       tt->func = &hibmc_tt_backend_func;
-       ret = ttm_tt_init(tt, bo, page_flags);
-       if (ret) {
-               DRM_ERROR("failed to initialize ttm_tt: %d\n", ret);
-               kfree(tt);
-               return NULL;
-       }
-       return tt;
-}
-
-struct ttm_bo_driver hibmc_bo_driver = {
-       .ttm_tt_create          = hibmc_ttm_tt_create,
-       .init_mem_type          = hibmc_bo_init_mem_type,
-       .evict_flags            = hibmc_bo_evict_flags,
-       .move                   = NULL,
-       .verify_access          = hibmc_bo_verify_access,
-       .io_mem_reserve         = &hibmc_ttm_io_mem_reserve,
-       .io_mem_free            = NULL,
-};
-
 int hibmc_mm_init(struct hibmc_drm_private *hibmc)
 {
+       struct drm_vram_mm *vmm;
        int ret;
        struct drm_device *dev = hibmc->dev;
-       struct ttm_bo_device *bdev = &hibmc->bdev;
 
-       ret = ttm_bo_device_init(&hibmc->bdev,
-                                &hibmc_bo_driver,
-                                dev->anon_inode->i_mapping,
-                                true);
-       if (ret) {
-               DRM_ERROR("error initializing bo driver: %d\n", ret);
+       vmm = drm_vram_helper_alloc_mm(dev,
+                                      pci_resource_start(dev->pdev, 0),
+                                      hibmc->fb_size, &drm_gem_vram_mm_funcs);
+       if (IS_ERR(vmm)) {
+               ret = PTR_ERR(vmm);
+               DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
                return ret;
        }
 
-       ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
-                            hibmc->fb_size >> PAGE_SHIFT);
-       if (ret) {
-               DRM_ERROR("failed ttm VRAM init: %d\n", ret);
-               return ret;
-       }
-
-       hibmc->mm_inited = true;
        return 0;
 }
 
 void hibmc_mm_fini(struct hibmc_drm_private *hibmc)
 {
-       if (!hibmc->mm_inited)
-               return;
-
-       ttm_bo_device_release(&hibmc->bdev);
-       hibmc->mm_inited = false;
-}
-
-static void hibmc_bo_unref(struct hibmc_bo **bo)
-{
-       struct ttm_buffer_object *tbo;
-
-       if ((*bo) == NULL)
+       if (!hibmc->dev->vram_mm)
                return;
 
-       tbo = &((*bo)->bo);
-       ttm_bo_put(tbo);
-       *bo = NULL;
-}
-
-int hibmc_bo_create(struct drm_device *dev, int size, int align,
-                   u32 flags, struct hibmc_bo **phibmcbo)
-{
-       struct hibmc_drm_private *hibmc = dev->dev_private;
-       struct hibmc_bo *hibmcbo;
-       size_t acc_size;
-       int ret;
-
-       hibmcbo = kzalloc(sizeof(*hibmcbo), GFP_KERNEL);
-       if (!hibmcbo) {
-               DRM_ERROR("failed to allocate hibmcbo\n");
-               return -ENOMEM;
-       }
-       ret = drm_gem_object_init(dev, &hibmcbo->gem, size);
-       if (ret) {
-               DRM_ERROR("failed to initialize drm gem object: %d\n", ret);
-               kfree(hibmcbo);
-               return ret;
-       }
-
-       hibmcbo->bo.bdev = &hibmc->bdev;
-
-       hibmc_ttm_placement(hibmcbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
-
-       acc_size = ttm_bo_dma_acc_size(&hibmc->bdev, size,
-                                      sizeof(struct hibmc_bo));
-
-       ret = ttm_bo_init(&hibmc->bdev, &hibmcbo->bo, size,
-                         ttm_bo_type_device, &hibmcbo->placement,
-                         align >> PAGE_SHIFT, false, acc_size,
-                         NULL, NULL, hibmc_bo_ttm_destroy);
-       if (ret) {
-               hibmc_bo_unref(&hibmcbo);
-               DRM_ERROR("failed to initialize ttm_bo: %d\n", ret);
-               return ret;
-       }
-
-       *phibmcbo = hibmcbo;
-       return 0;
-}
-
-int hibmc_bo_pin(struct hibmc_bo *bo, u32 pl_flag, u64 *gpu_addr)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (bo->pin_count) {
-               bo->pin_count++;
-               if (gpu_addr)
-                       *gpu_addr = bo->bo.offset;
-               return 0;
-       }
-
-       hibmc_ttm_placement(bo, pl_flag);
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret)
-               return ret;
-
-       bo->pin_count = 1;
-       if (gpu_addr)
-               *gpu_addr = bo->bo.offset;
-       return 0;
-}
-
-int hibmc_bo_unpin(struct hibmc_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-       if (bo->pin_count)
-               return 0;
-
-       for (i = 0; i < bo->placement.num_placement ; i++)
-               bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret) {
-               DRM_ERROR("validate failed for unpin: %d\n", ret);
-               return ret;
-       }
-
-       return 0;
-}
-
-int hibmc_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-       struct drm_file *file_priv = filp->private_data;
-       struct hibmc_drm_private *hibmc = file_priv->minor->dev->dev_private;
-
-       return ttm_bo_mmap(filp, vma, &hibmc->bdev);
+       drm_vram_helper_release_mm(hibmc->dev);
 }
 
 int hibmc_gem_create(struct drm_device *dev, u32 size, bool iskernel,
                     struct drm_gem_object **obj)
 {
-       struct hibmc_bo *hibmcbo;
+       struct drm_gem_vram_object *gbo;
        int ret;
 
        *obj = NULL;
 
-       size = PAGE_ALIGN(size);
-       if (size == 0) {
-               DRM_ERROR("error: zero size\n");
+       size = roundup(size, PAGE_SIZE);
+       if (size == 0)
                return -EINVAL;
-       }
 
-       ret = hibmc_bo_create(dev, size, 0, 0, &hibmcbo);
-       if (ret) {
+       gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
+       if (IS_ERR(gbo)) {
+               ret = PTR_ERR(gbo);
                if (ret != -ERESTARTSYS)
                        DRM_ERROR("failed to allocate GEM object: %d\n", ret);
                return ret;
        }
-       *obj = &hibmcbo->gem;
+       *obj = &gbo->gem;
        return 0;
 }
 
@@ -377,35 +97,6 @@ int hibmc_dumb_create(struct drm_file *file, struct drm_device *dev,
        return 0;
 }
 
-void hibmc_gem_free_object(struct drm_gem_object *obj)
-{
-       struct hibmc_bo *hibmcbo = gem_to_hibmc_bo(obj);
-
-       hibmc_bo_unref(&hibmcbo);
-}
-
-static u64 hibmc_bo_mmap_offset(struct hibmc_bo *bo)
-{
-       return drm_vma_node_offset_addr(&bo->bo.vma_node);
-}
-
-int hibmc_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
-                          u32 handle, u64 *offset)
-{
-       struct drm_gem_object *obj;
-       struct hibmc_bo *bo;
-
-       obj = drm_gem_object_lookup(file, handle);
-       if (!obj)
-               return -ENOENT;
-
-       bo = gem_to_hibmc_bo(obj);
-       *offset = hibmc_bo_mmap_offset(bo);
-
-       drm_gem_object_put_unlocked(obj);
-       return 0;
-}
-
 static void hibmc_user_framebuffer_destroy(struct drm_framebuffer *fb)
 {
        struct hibmc_framebuffer *hibmc_fb = to_hibmc_framebuffer(fb);
index 5098228..2eb62ed 100644 (file)
@@ -14605,9 +14605,8 @@ static int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe)
                ret = -ENOMEM;
                goto fail;
        }
+       __drm_atomic_helper_crtc_reset(&intel_crtc->base, &crtc_state->base);
        intel_crtc->config = crtc_state;
-       intel_crtc->base.state = &crtc_state->base;
-       crtc_state->base.crtc = &intel_crtc->base;
 
        primary = intel_primary_plane_create(dev_priv, pipe);
        if (IS_ERR(primary)) {
@@ -16149,7 +16148,7 @@ static void intel_modeset_readout_hw_state(struct drm_device *dev)
 
                __drm_atomic_helper_crtc_destroy_state(&crtc_state->base);
                memset(crtc_state, 0, sizeof(*crtc_state));
-               crtc_state->base.crtc = &crtc->base;
+               __drm_atomic_helper_crtc_reset(&crtc->base, &crtc_state->base);
 
                crtc_state->base.active = crtc_state->base.enable =
                        dev_priv->display.get_pipe_config(crtc, crtc_state);
index 2913e89..c1647c0 100644 (file)
@@ -325,7 +325,8 @@ skl_plane_max_stride(struct intel_plane *plane,
                     u32 pixel_format, u64 modifier,
                     unsigned int rotation)
 {
-       int cpp = drm_format_plane_cpp(pixel_format, 0);
+       const struct drm_format_info *info = drm_format_info(pixel_format);
+       int cpp = info->cpp[0];
 
        /*
         * "The stride in bytes must not exceed the
index d7a727a..8f101a0 100644 (file)
@@ -115,8 +115,8 @@ drm_plane_state_to_ubo(struct drm_plane_state *state)
        cma_obj = drm_fb_cma_get_gem_obj(fb, 1);
        BUG_ON(!cma_obj);
 
-       x /= drm_format_horz_chroma_subsampling(fb->format->format);
-       y /= drm_format_vert_chroma_subsampling(fb->format->format);
+       x /= fb->format->hsub;
+       y /= fb->format->vsub;
 
        return cma_obj->paddr + fb->offsets[1] + fb->pitches[1] * y +
               fb->format->cpp[1] * x - eba;
@@ -134,8 +134,8 @@ drm_plane_state_to_vbo(struct drm_plane_state *state)
        cma_obj = drm_fb_cma_get_gem_obj(fb, 2);
        BUG_ON(!cma_obj);
 
-       x /= drm_format_horz_chroma_subsampling(fb->format->format);
-       y /= drm_format_vert_chroma_subsampling(fb->format->format);
+       x /= fb->format->hsub;
+       y /= fb->format->vsub;
 
        return cma_obj->paddr + fb->offsets[2] + fb->pitches[2] * y +
               fb->format->cpp[2] * x - eba;
@@ -352,7 +352,6 @@ static int ipu_plane_atomic_check(struct drm_plane *plane,
        struct drm_framebuffer *old_fb = old_state->fb;
        unsigned long eba, ubo, vbo, old_ubo, old_vbo, alpha_eba;
        bool can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY);
-       int hsub, vsub;
        int ret;
 
        /* Ok to disable */
@@ -471,10 +470,8 @@ static int ipu_plane_atomic_check(struct drm_plane *plane,
                 * The x/y offsets must be even in case of horizontal/vertical
                 * chroma subsampling.
                 */
-               hsub = drm_format_horz_chroma_subsampling(fb->format->format);
-               vsub = drm_format_vert_chroma_subsampling(fb->format->format);
-               if (((state->src.x1 >> 16) & (hsub - 1)) ||
-                   ((state->src.y1 >> 16) & (vsub - 1)))
+               if (((state->src.x1 >> 16) & (fb->format->hsub - 1)) ||
+                   ((state->src.y1 >> 16) & (fb->format->vsub - 1)))
                        return -EINVAL;
                break;
        case DRM_FORMAT_RGB565_A8:
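The `ipu_plane_atomic_check()` hunk above inlines the chroma-subsampling alignment test using `fb->format->hsub`/`vsub` directly. A sketch of the mask trick it relies on (subsampling factors are powers of two, so masking with `factor - 1` tests divisibility without a division):

```c
#include <assert.h>

/* Returns nonzero when the source x/y offsets are not multiples of
 * the horizontal/vertical chroma subsampling factors (hsub, vsub),
 * mirroring the -EINVAL condition in the hunk above. */
static int offsets_misaligned(int x, int y, int hsub, int vsub)
{
	return ((x & (hsub - 1)) || (y & (vsub - 1)));
}
```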
index f9a281a..b29c26c 100644 (file)
@@ -17,7 +17,7 @@
 
 int lima_sched_timeout_ms;
 
-MODULE_PARM_DESC(sched_timeout_ms, "task run timeout in ms (0 = no timeout (default))");
+MODULE_PARM_DESC(sched_timeout_ms, "task run timeout in ms");
 module_param_named(sched_timeout_ms, lima_sched_timeout_ms, int, 0444);
 
 static int lima_ioctl_get_param(struct drm_device *dev, void *data, struct drm_file *file)
index d29721e..8fef224 100644 (file)
@@ -64,7 +64,13 @@ static irqreturn_t lima_pp_bcast_irq_handler(int irq, void *data)
        struct lima_ip *pp_bcast = data;
        struct lima_device *dev = pp_bcast->dev;
        struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp;
-       struct drm_lima_m450_pp_frame *frame = pipe->current_task->frame;
+       struct drm_lima_m450_pp_frame *frame;
+
+       /* for shared irq case */
+       if (!pipe->current_task)
+               return IRQ_NONE;
+
+       frame = pipe->current_task->frame;
 
        for (i = 0; i < frame->num_pp; i++) {
                struct lima_ip *ip = pipe->processor[i];
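The lima hunk above guards the broadcast IRQ handler against firing with no task in flight, which can happen on a shared interrupt line. A minimal userspace analogue of the pattern (hypothetical handler, not the driver's real one):

```c
#include <assert.h>
#include <stddef.h>

enum irqreturn { IRQ_NONE, IRQ_HANDLED };

/* On a shared IRQ line the handler may be invoked for another
 * device's interrupt; returning IRQ_NONE tells the core this device
 * had nothing pending so the other sharers get a chance to claim it. */
static enum irqreturn bcast_irq(const void *current_task)
{
	if (!current_task)
		return IRQ_NONE;
	/* ... dereference the task's frame and process it ... */
	return IRQ_HANDLED;
}
```

Without the guard, dereferencing `pipe->current_task->frame` would be a NULL pointer dereference in exactly the shared-IRQ case the comment in the hunk names.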
index d53bd45..4127cac 100644 (file)
@@ -258,7 +258,7 @@ static struct dma_fence *lima_sched_run_job(struct drm_sched_job *job)
 static void lima_sched_handle_error_task(struct lima_sched_pipe *pipe,
                                         struct lima_sched_task *task)
 {
-       drm_sched_stop(&pipe->base);
+       drm_sched_stop(&pipe->base, &task->base);
 
        if (task)
                drm_sched_increase_karma(&task->base);
@@ -329,19 +329,16 @@ static void lima_sched_error_work(struct work_struct *work)
 
 int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name)
 {
-       long timeout;
-
-       if (lima_sched_timeout_ms <= 0)
-               timeout = MAX_SCHEDULE_TIMEOUT;
-       else
-               timeout = msecs_to_jiffies(lima_sched_timeout_ms);
+       unsigned int timeout = lima_sched_timeout_ms > 0 ?
+                              lima_sched_timeout_ms : 500;
 
        pipe->fence_context = dma_fence_context_alloc(1);
        spin_lock_init(&pipe->fence_lock);
 
        INIT_WORK(&pipe->error_work, lima_sched_error_work);
 
-       return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0, timeout, name);
+       return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0,
+                             msecs_to_jiffies(timeout), name);
 }
 
 void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
index e20fcae..b5e2f23 100644 (file)
@@ -32,10 +32,11 @@ static struct drm_framebuffer *mtk_drm_framebuffer_init(struct drm_device *dev,
                                        const struct drm_mode_fb_cmd2 *mode,
                                        struct drm_gem_object *obj)
 {
+       const struct drm_format_info *info = drm_get_format_info(dev, mode);
        struct drm_framebuffer *fb;
        int ret;
 
-       if (drm_format_num_planes(mode->pixel_format) != 1)
+       if (info->num_planes != 1)
                return ERR_PTR(-EINVAL);
 
        fb = kzalloc(sizeof(*fb), GFP_KERNEL);
@@ -88,6 +89,7 @@ struct drm_framebuffer *mtk_drm_mode_fb_create(struct drm_device *dev,
                                               struct drm_file *file,
                                               const struct drm_mode_fb_cmd2 *cmd)
 {
+       const struct drm_format_info *info = drm_get_format_info(dev, cmd);
        struct drm_framebuffer *fb;
        struct drm_gem_object *gem;
        unsigned int width = cmd->width;
@@ -95,14 +97,14 @@ struct drm_framebuffer *mtk_drm_mode_fb_create(struct drm_device *dev,
        unsigned int size, bpp;
        int ret;
 
-       if (drm_format_num_planes(cmd->pixel_format) != 1)
+       if (info->num_planes != 1)
                return ERR_PTR(-EINVAL);
 
        gem = drm_gem_object_lookup(file, cmd->handles[0]);
        if (!gem)
                return ERR_PTR(-ENOENT);
 
-       bpp = drm_format_plane_cpp(cmd->pixel_format, 0);
+       bpp = info->cpp[0];
        size = (height - 1) * cmd->pitches[0] + width * bpp;
        size += cmd->offsets[0];
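The `mtk_drm_mode_fb_create()` hunk above now takes bytes-per-pixel from `drm_get_format_info()` (`info->cpp[0]`) instead of the removed `drm_format_plane_cpp()` helper. The minimum-size arithmetic it feeds is worth spelling out: the last scanline needs only `width * cpp` bytes, earlier ones a full pitch, plus the plane's start offset. A sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Minimum buffer size for a single-plane framebuffer, mirroring the
 * computation in the hunk above: (height - 1) full pitches, one
 * partial final line, plus the plane offset. */
static uint32_t fb_min_size(uint32_t width, uint32_t height,
			    uint32_t pitch, uint32_t cpp, uint32_t offset)
{
	return (height - 1) * pitch + width * cpp + offset;
}
```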
 
index e04e6c2..10cc991 100644 (file)
@@ -341,6 +341,9 @@ static void mtk_hdmi_hw_send_info_frame(struct mtk_hdmi *hdmi, u8 *buffer,
                ctrl_frame_en = VS_EN;
                ctrl_reg = GRL_ACP_ISRC_CTRL;
                break;
+       default:
+               dev_err(hdmi->dev, "Unknown infoframe type %d\n", frame_type);
+               return;
        }
        mtk_hdmi_clear_bits(hdmi, ctrl_reg, ctrl_frame_en);
        mtk_hdmi_write(hdmi, GRL_INFOFRM_TYPE, frame_type);
index bdbf925..55b3f2f 100644 (file)
@@ -458,7 +458,7 @@ static void meson_overlay_atomic_update(struct drm_plane *plane,
        }
 
        /* Update Canvas with buffer address */
-       priv->viu.vd1_planes = drm_format_num_planes(fb->format->format);
+       priv->viu.vd1_planes = fb->format->num_planes;
 
        switch (priv->viu.vd1_planes) {
        case 3:
@@ -466,8 +466,8 @@ static void meson_overlay_atomic_update(struct drm_plane *plane,
                priv->viu.vd1_addr2 = gem->paddr + fb->offsets[2];
                priv->viu.vd1_stride2 = fb->pitches[2];
                priv->viu.vd1_height2 =
-                       drm_format_plane_height(fb->height,
-                                               fb->format->format, 2);
+                       drm_format_info_plane_height(fb->format,
+                                               fb->height, 2);
                DRM_DEBUG("plane 2 addr 0x%x stride %d height %d\n",
                         priv->viu.vd1_addr2,
                         priv->viu.vd1_stride2,
@@ -478,8 +478,8 @@ static void meson_overlay_atomic_update(struct drm_plane *plane,
                priv->viu.vd1_addr1 = gem->paddr + fb->offsets[1];
                priv->viu.vd1_stride1 = fb->pitches[1];
                priv->viu.vd1_height1 =
-                       drm_format_plane_height(fb->height,
-                                               fb->format->format, 1);
+                       drm_format_info_plane_height(fb->format,
+                                               fb->height, 1);
                DRM_DEBUG("plane 1 addr 0x%x stride %d height %d\n",
                         priv->viu.vd1_addr1,
                         priv->viu.vd1_stride1,
@@ -490,8 +490,8 @@ static void meson_overlay_atomic_update(struct drm_plane *plane,
                priv->viu.vd1_addr0 = gem->paddr + fb->offsets[0];
                priv->viu.vd1_stride0 = fb->pitches[0];
                priv->viu.vd1_height0 =
-                       drm_format_plane_height(fb->height,
-                                               fb->format->format, 0);
+                       drm_format_info_plane_height(fb->format,
+                                               fb->height, 0);
                DRM_DEBUG("plane 0 addr 0x%x stride %d height %d\n",
                         priv->viu.vd1_addr0,
                         priv->viu.vd1_stride0,
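The meson hunks replace `drm_format_plane_height()` with `drm_format_info_plane_height()`, which takes the format-info struct directly. The underlying computation is plain integer division by the vertical subsampling factor for chroma planes. A minimal sketch of that semantics (`vsub` is 2 for NV12/YUV420-style formats):

```c
/* Sketch of what drm_format_info_plane_height() computes: plane 0 (luma)
 * spans the full framebuffer height; subsampled chroma planes cover
 * height / vsub lines (integer division, as in the kernel helper). */
static int plane_height(int fb_height, int plane, int vsub)
{
    if (plane == 0)
        return fb_height;
    return fb_height / vsub;
}
```

So a 1080-line NV12 buffer has a 1080-line luma plane and a 540-line chroma plane, matching the `vd1_height*` programming above.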
index 91f3579..76fee0f 100644 (file)
@@ -3,7 +3,7 @@ config DRM_MGAG200
        tristate "Kernel modesetting driver for MGA G200 server engines"
        depends on DRM && PCI && MMU
        select DRM_KMS_HELPER
-       select DRM_TTM
+       select DRM_VRAM_HELPER
        help
         This is a KMS driver for the MGA G200 server chips, it
          does not support the original MGA G200 or any of the desktop
index 968e203..06a8c07 100644 (file)
@@ -23,9 +23,9 @@ static void mga_hide_cursor(struct mga_device *mdev)
        WREG8(MGA_CURPOSXL, 0);
        WREG8(MGA_CURPOSXH, 0);
        if (mdev->cursor.pixels_1->pin_count)
-               mgag200_bo_unpin(mdev->cursor.pixels_1);
+               drm_gem_vram_unpin_locked(mdev->cursor.pixels_1);
        if (mdev->cursor.pixels_2->pin_count)
-               mgag200_bo_unpin(mdev->cursor.pixels_2);
+               drm_gem_vram_unpin_locked(mdev->cursor.pixels_2);
 }
 
 int mga_crtc_cursor_set(struct drm_crtc *crtc,
@@ -36,13 +36,14 @@ int mga_crtc_cursor_set(struct drm_crtc *crtc,
 {
        struct drm_device *dev = crtc->dev;
        struct mga_device *mdev = (struct mga_device *)dev->dev_private;
-       struct mgag200_bo *pixels_1 = mdev->cursor.pixels_1;
-       struct mgag200_bo *pixels_2 = mdev->cursor.pixels_2;
-       struct mgag200_bo *pixels_current = mdev->cursor.pixels_current;
-       struct mgag200_bo *pixels_prev = mdev->cursor.pixels_prev;
+       struct drm_gem_vram_object *pixels_1 = mdev->cursor.pixels_1;
+       struct drm_gem_vram_object *pixels_2 = mdev->cursor.pixels_2;
+       struct drm_gem_vram_object *pixels_current = mdev->cursor.pixels_current;
+       struct drm_gem_vram_object *pixels_prev = mdev->cursor.pixels_prev;
        struct drm_gem_object *obj;
-       struct mgag200_bo *bo = NULL;
+       struct drm_gem_vram_object *gbo = NULL;
        int ret = 0;
+       u8 *src, *dst;
        unsigned int i, row, col;
        uint32_t colour_set[16];
        uint32_t *next_space = &colour_set[0];
@@ -50,7 +51,7 @@ int mga_crtc_cursor_set(struct drm_crtc *crtc,
        uint32_t this_colour;
        bool found = false;
        int colour_count = 0;
-       u64 gpu_addr;
+       s64 gpu_addr;
        u8 reg_index;
        u8 this_row[48];
 
@@ -79,54 +80,66 @@ int mga_crtc_cursor_set(struct drm_crtc *crtc,
        if (!obj)
                return -ENOENT;
 
-       ret = mgag200_bo_reserve(pixels_1, true);
+       ret = drm_gem_vram_lock(pixels_1, true);
        if (ret) {
                WREG8(MGA_CURPOSXL, 0);
                WREG8(MGA_CURPOSXH, 0);
                goto out_unref;
        }
-       ret = mgag200_bo_reserve(pixels_2, true);
+       ret = drm_gem_vram_lock(pixels_2, true);
        if (ret) {
                WREG8(MGA_CURPOSXL, 0);
                WREG8(MGA_CURPOSXH, 0);
-               mgag200_bo_unreserve(pixels_1);
-               goto out_unreserve1;
+               drm_gem_vram_unlock(pixels_1);
+               goto out_unlock1;
        }
 
        /* Move cursor buffers into VRAM if they aren't already */
        if (!pixels_1->pin_count) {
-               ret = mgag200_bo_pin(pixels_1, TTM_PL_FLAG_VRAM,
-                                    &mdev->cursor.pixels_1_gpu_addr);
+               ret = drm_gem_vram_pin_locked(pixels_1,
+                                             DRM_GEM_VRAM_PL_FLAG_VRAM);
                if (ret)
                        goto out1;
+               gpu_addr = drm_gem_vram_offset(pixels_1);
+               if (gpu_addr < 0) {
+                       drm_gem_vram_unpin_locked(pixels_1);
+                       goto out1;
+               }
+               mdev->cursor.pixels_1_gpu_addr = gpu_addr;
        }
        if (!pixels_2->pin_count) {
-               ret = mgag200_bo_pin(pixels_2, TTM_PL_FLAG_VRAM,
-                                    &mdev->cursor.pixels_2_gpu_addr);
+               ret = drm_gem_vram_pin_locked(pixels_2,
+                                             DRM_GEM_VRAM_PL_FLAG_VRAM);
                if (ret) {
-                       mgag200_bo_unpin(pixels_1);
+                       drm_gem_vram_unpin_locked(pixels_1);
                        goto out1;
                }
+               gpu_addr = drm_gem_vram_offset(pixels_2);
+               if (gpu_addr < 0) {
+                       drm_gem_vram_unpin_locked(pixels_1);
+                       drm_gem_vram_unpin_locked(pixels_2);
+                       goto out1;
+               }
+               mdev->cursor.pixels_2_gpu_addr = gpu_addr;
        }
 
-       bo = gem_to_mga_bo(obj);
-       ret = mgag200_bo_reserve(bo, true);
+       gbo = drm_gem_vram_of_gem(obj);
+       ret = drm_gem_vram_lock(gbo, true);
        if (ret) {
-               dev_err(&dev->pdev->dev, "failed to reserve user bo\n");
+               dev_err(&dev->pdev->dev, "failed to lock user bo\n");
                goto out1;
        }
-       if (!bo->kmap.virtual) {
-               ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-               if (ret) {
-                       dev_err(&dev->pdev->dev, "failed to kmap user buffer updates\n");
-                       goto out2;
-               }
+       src = drm_gem_vram_kmap(gbo, true, NULL);
+       if (IS_ERR(src)) {
+               ret = PTR_ERR(src);
+               dev_err(&dev->pdev->dev, "failed to kmap user buffer updates\n");
+               goto out2;
        }
 
        memset(&colour_set[0], 0, sizeof(uint32_t)*16);
        /* width*height*4 = 16384 */
        for (i = 0; i < 16384; i += 4) {
-               this_colour = ioread32(bo->kmap.virtual + i);
+               this_colour = ioread32(src + i);
                /* No transparency */
                if (this_colour>>24 != 0xff &&
                        this_colour>>24 != 0x0) {
@@ -178,21 +191,18 @@ int mga_crtc_cursor_set(struct drm_crtc *crtc,
        }
 
        /* Map up-coming buffer to write colour indices */
-       if (!pixels_prev->kmap.virtual) {
-               ret = ttm_bo_kmap(&pixels_prev->bo, 0,
-                                 pixels_prev->bo.num_pages,
-                                 &pixels_prev->kmap);
-               if (ret) {
-                       dev_err(&dev->pdev->dev, "failed to kmap cursor updates\n");
-                       goto out3;
-               }
+       dst = drm_gem_vram_kmap(pixels_prev, true, NULL);
+       if (IS_ERR(dst)) {
+               ret = PTR_ERR(dst);
+               dev_err(&dev->pdev->dev, "failed to kmap cursor updates\n");
+               goto out3;
        }
 
        /* now write colour indices into hardware cursor buffer */
        for (row = 0; row < 64; row++) {
                memset(&this_row[0], 0, 48);
                for (col = 0; col < 64; col++) {
-                       this_colour = ioread32(bo->kmap.virtual + 4*(col + 64*row));
+                       this_colour = ioread32(src + 4*(col + 64*row));
                        /* write transparent pixels */
                        if (this_colour>>24 == 0x0) {
                                this_row[47 - col/8] |= 0x80>>(col%8);
@@ -210,7 +220,7 @@ int mga_crtc_cursor_set(struct drm_crtc *crtc,
                                }
                        }
                }
-               memcpy_toio(pixels_prev->kmap.virtual + row*48, &this_row[0], 48);
+               memcpy_toio(dst + row*48, &this_row[0], 48);
        }
 
        /* Program gpu address of cursor buffer */
@@ -236,17 +246,17 @@ int mga_crtc_cursor_set(struct drm_crtc *crtc,
        }
        ret = 0;
 
-       ttm_bo_kunmap(&pixels_prev->kmap);
+       drm_gem_vram_kunmap(pixels_prev);
  out3:
-       ttm_bo_kunmap(&bo->kmap);
+       drm_gem_vram_kunmap(gbo);
  out2:
-       mgag200_bo_unreserve(bo);
+       drm_gem_vram_unlock(gbo);
  out1:
        if (ret)
                mga_hide_cursor(mdev);
-       mgag200_bo_unreserve(pixels_1);
-out_unreserve1:
-       mgag200_bo_unreserve(pixels_2);
+       drm_gem_vram_unlock(pixels_1);
+out_unlock1:
+       drm_gem_vram_unlock(pixels_2);
 out_unref:
        drm_gem_object_put_unlocked(obj);
 
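The cursor loop above packs each 64-pixel row into 48 bytes: colour indices first, then a one-bit-per-pixel transparency mask at the tail, written MSB-first from the last byte backwards (`this_row[47 - col/8] |= 0x80>>(col%8)`). A standalone sketch of just the mask packing, under the assumption that the row layout is as the loop implies:

```c
#include <stdint.h>

/* Mark pixel `col` (0..63) transparent in a 48-byte hardware cursor row.
 * The final eight bytes (40..47) hold the mask; pixel 0 lands in the
 * most-significant bit of byte 47, pixel 63 in the least-significant
 * bit of byte 40 -- mirroring the loop in mga_crtc_cursor_set(). */
static void set_transparent(uint8_t row[48], int col)
{
    row[47 - col / 8] |= (uint8_t)(0x80 >> (col % 8));
}
```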
index ac6af4b..93bd158 100644 (file)
@@ -59,13 +59,7 @@ static void mga_pci_remove(struct pci_dev *pdev)
 
 static const struct file_operations mgag200_driver_fops = {
        .owner = THIS_MODULE,
-       .open = drm_open,
-       .release = drm_release,
-       .unlocked_ioctl = drm_ioctl,
-       .mmap = mgag200_mmap,
-       .poll = drm_poll,
-       .compat_ioctl = drm_compat_ioctl,
-       .read = drm_read,
+       DRM_VRAM_MM_FILE_OPERATIONS
 };
 
 static struct drm_driver driver = {
@@ -79,10 +73,7 @@ static struct drm_driver driver = {
        .major = DRIVER_MAJOR,
        .minor = DRIVER_MINOR,
        .patchlevel = DRIVER_PATCHLEVEL,
-
-       .gem_free_object_unlocked = mgag200_gem_free_object,
-       .dumb_create = mgag200_dumb_create,
-       .dumb_map_offset = mgag200_dumb_mmap_offset,
+       DRM_GEM_VRAM_DRIVER
 };
 
 static struct pci_driver mgag200_pci_driver = {
index 8c31e44..6180acb 100644 (file)
@@ -1,6 +1,6 @@
 /*
  * Copyright 2010 Matt Turner.
- * Copyright 2012 Red Hat 
+ * Copyright 2012 Red Hat
  *
  * This file is subject to the terms and conditions of the GNU General
  * Public License version 2. See the file COPYING in the main
 
 #include <drm/drm_encoder.h>
 #include <drm/drm_fb_helper.h>
-#include <drm/ttm/ttm_bo_api.h>
-#include <drm/ttm/ttm_bo_driver.h>
-#include <drm/ttm/ttm_placement.h>
-#include <drm/ttm/ttm_memory.h>
-#include <drm/ttm/ttm_module.h>
 
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_vram_helper.h>
+
+#include <drm/drm_vram_mm_helper.h>
 
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
@@ -117,7 +115,6 @@ struct mga_fbdev {
        struct mga_framebuffer mfb;
        void *sysram;
        int size;
-       struct ttm_bo_kmap_obj mapping;
        int x1, y1, x2, y2; /* dirty rect */
        spinlock_t dirty_lock;
 };
@@ -159,13 +156,13 @@ struct mga_cursor {
           If either of these is NULL, then don't do hardware cursors, and
           fall back to software.
        */
-       struct mgag200_bo *pixels_1;
-       struct mgag200_bo *pixels_2;
+       struct drm_gem_vram_object *pixels_1;
+       struct drm_gem_vram_object *pixels_2;
        u64 pixels_1_gpu_addr, pixels_2_gpu_addr;
        /* The currently displayed icon, this points to one of pixels_1, or pixels_2 */
-       struct mgag200_bo *pixels_current;
+       struct drm_gem_vram_object *pixels_current;
        /* The previously displayed icon */
-       struct mgag200_bo *pixels_prev;
+       struct drm_gem_vram_object *pixels_prev;
 };
 
 struct mga_mc {
@@ -211,31 +208,10 @@ struct mga_device {
 
        int fb_mtrr;
 
-       struct {
-               struct ttm_bo_device bdev;
-       } ttm;
-
        /* SE model number stored in reg 0x1e24 */
        u32 unique_rev_id;
 };
 
-
-struct mgag200_bo {
-       struct ttm_buffer_object bo;
-       struct ttm_placement placement;
-       struct ttm_bo_kmap_obj kmap;
-       struct drm_gem_object gem;
-       struct ttm_place placements[3];
-       int pin_count;
-};
-#define gem_to_mga_bo(gobj) container_of((gobj), struct mgag200_bo, gem)
-
-static inline struct mgag200_bo *
-mgag200_bo(struct ttm_buffer_object *bo)
-{
-       return container_of(bo, struct mgag200_bo, bo);
-}
-
                                /* mgag200_mode.c */
 int mgag200_modeset_init(struct mga_device *mdev);
 void mgag200_modeset_fini(struct mga_device *mdev);
@@ -259,45 +235,15 @@ int mgag200_gem_create(struct drm_device *dev,
 int mgag200_dumb_create(struct drm_file *file,
                        struct drm_device *dev,
                        struct drm_mode_create_dumb *args);
-void mgag200_gem_free_object(struct drm_gem_object *obj);
-int
-mgag200_dumb_mmap_offset(struct drm_file *file,
-                        struct drm_device *dev,
-                        uint32_t handle,
-                        uint64_t *offset);
+
                                /* mgag200_i2c.c */
 struct mga_i2c_chan *mgag200_i2c_create(struct drm_device *dev);
 void mgag200_i2c_destroy(struct mga_i2c_chan *i2c);
 
-void mgag200_ttm_placement(struct mgag200_bo *bo, int domain);
-
-static inline int mgag200_bo_reserve(struct mgag200_bo *bo, bool no_wait)
-{
-       int ret;
-
-       ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL);
-       if (ret) {
-               if (ret != -ERESTARTSYS && ret != -EBUSY)
-                       DRM_ERROR("reserve failed %p\n", bo);
-               return ret;
-       }
-       return 0;
-}
-
-static inline void mgag200_bo_unreserve(struct mgag200_bo *bo)
-{
-       ttm_bo_unreserve(&bo->bo);
-}
-
-int mgag200_bo_create(struct drm_device *dev, int size, int align,
-                     uint32_t flags, struct mgag200_bo **pastbo);
 int mgag200_mm_init(struct mga_device *mdev);
 void mgag200_mm_fini(struct mga_device *mdev);
 int mgag200_mmap(struct file *filp, struct vm_area_struct *vma);
-int mgag200_bo_pin(struct mgag200_bo *bo, u32 pl_flag, u64 *gpu_addr);
-int mgag200_bo_unpin(struct mgag200_bo *bo);
-int mgag200_bo_push_sysram(struct mgag200_bo *bo);
-                          /* mgag200_cursor.c */
+
 int mga_crtc_cursor_set(struct drm_crtc *crtc, struct drm_file *file_priv,
                                                uint32_t handle, uint32_t width, uint32_t height);
 int mga_crtc_cursor_move(struct drm_crtc *crtc, int x, int y);
index 5b7e64c..97c575a 100644 (file)
@@ -23,25 +23,25 @@ static void mga_dirty_update(struct mga_fbdev *mfbdev,
 {
        int i;
        struct drm_gem_object *obj;
-       struct mgag200_bo *bo;
+       struct drm_gem_vram_object *gbo;
        int src_offset, dst_offset;
        int bpp = mfbdev->mfb.base.format->cpp[0];
        int ret = -EBUSY;
+       u8 *dst;
        bool unmap = false;
        bool store_for_later = false;
        int x2, y2;
        unsigned long flags;
 
        obj = mfbdev->mfb.obj;
-       bo = gem_to_mga_bo(obj);
+       gbo = drm_gem_vram_of_gem(obj);
 
-       /*
-        * try and reserve the BO, if we fail with busy
-        * then the BO is being moved and we should
-        * store up the damage until later.
+       /* Try to lock the BO. If we fail with -EBUSY then
+        * the BO is being moved and we should store up the
+        * damage until later.
         */
        if (drm_can_sleep())
-               ret = mgag200_bo_reserve(bo, true);
+               ret = drm_gem_vram_lock(gbo, true);
        if (ret) {
                if (ret != -EBUSY)
                        return;
@@ -75,25 +75,32 @@ static void mga_dirty_update(struct mga_fbdev *mfbdev,
        mfbdev->x2 = mfbdev->y2 = 0;
        spin_unlock_irqrestore(&mfbdev->dirty_lock, flags);
 
-       if (!bo->kmap.virtual) {
-               ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-               if (ret) {
+       dst = drm_gem_vram_kmap(gbo, false, NULL);
+       if (IS_ERR(dst)) {
+               DRM_ERROR("failed to kmap fb updates\n");
+               goto out;
+       } else if (!dst) {
+               dst = drm_gem_vram_kmap(gbo, true, NULL);
+               if (IS_ERR(dst)) {
                        DRM_ERROR("failed to kmap fb updates\n");
-                       mgag200_bo_unreserve(bo);
-                       return;
+                       goto out;
                }
                unmap = true;
        }
+
        for (i = y; i <= y2; i++) {
                /* assume equal stride for now */
-               src_offset = dst_offset = i * mfbdev->mfb.base.pitches[0] + (x * bpp);
-               memcpy_toio(bo->kmap.virtual + src_offset, mfbdev->sysram + src_offset, (x2 - x + 1) * bpp);
-
+               src_offset = dst_offset =
+                       i * mfbdev->mfb.base.pitches[0] + (x * bpp);
+               memcpy_toio(dst + dst_offset, mfbdev->sysram + src_offset,
+                           (x2 - x + 1) * bpp);
        }
+
        if (unmap)
-               ttm_bo_kunmap(&bo->kmap);
+               drm_gem_vram_kunmap(gbo);
 
-       mgag200_bo_unreserve(bo);
+out:
+       drm_gem_vram_unlock(gbo);
 }
 
 static void mga_fillrect(struct fb_info *info,
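When `drm_gem_vram_lock()` fails with -EBUSY, `mga_dirty_update()` stores the damage for later by growing the saved dirty rectangle (`mfbdev->x1..y2`). The coalescing itself is just a rectangle union; a minimal sketch of that step (struct and function names are illustrative, not the driver's):

```c
/* Union of the stored dirty rectangle with a newly damaged area --
 * the deferred-damage bookkeeping mga_dirty_update() performs when
 * the BO is busy and the update must be stored for later. */
struct rect {
    int x1, y1, x2, y2;
};

static void merge_damage(struct rect *d, int x1, int y1, int x2, int y2)
{
    if (x1 < d->x1)
        d->x1 = x1;
    if (y1 < d->y1)
        d->y1 = y1;
    if (x2 > d->x2)
        d->x2 = x2;
    if (y2 > d->y2)
        d->y2 = y2;
}
```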
index 1632550..f3687fe 100644 (file)
@@ -230,11 +230,13 @@ int mgag200_driver_load(struct drm_device *dev, unsigned long flags)
        }
 
        /* Make small buffers to store a hardware cursor (double buffered icon updates) */
-       mgag200_bo_create(dev, roundup(48*64, PAGE_SIZE), 0, 0,
-                                         &mdev->cursor.pixels_1);
-       mgag200_bo_create(dev, roundup(48*64, PAGE_SIZE), 0, 0,
-                                         &mdev->cursor.pixels_2);
-       if (!mdev->cursor.pixels_2 || !mdev->cursor.pixels_1) {
+       mdev->cursor.pixels_1 = drm_gem_vram_create(dev, &dev->vram_mm->bdev,
+                                                   roundup(48*64, PAGE_SIZE),
+                                                   0, 0);
+       mdev->cursor.pixels_2 = drm_gem_vram_create(dev, &dev->vram_mm->bdev,
+                                                   roundup(48*64, PAGE_SIZE),
+                                                   0, 0);
+       if (IS_ERR(mdev->cursor.pixels_2) || IS_ERR(mdev->cursor.pixels_1)) {
                mdev->cursor.pixels_1 = NULL;
                mdev->cursor.pixels_2 = NULL;
                dev_warn(&dev->pdev->dev,
@@ -272,7 +274,7 @@ int mgag200_gem_create(struct drm_device *dev,
                   u32 size, bool iskernel,
                   struct drm_gem_object **obj)
 {
-       struct mgag200_bo *astbo;
+       struct drm_gem_vram_object *gbo;
        int ret;
 
        *obj = NULL;
@@ -281,78 +283,13 @@ int mgag200_gem_create(struct drm_device *dev,
        if (size == 0)
                return -EINVAL;
 
-       ret = mgag200_bo_create(dev, size, 0, 0, &astbo);
-       if (ret) {
+       gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
+       if (IS_ERR(gbo)) {
+               ret = PTR_ERR(gbo);
                if (ret != -ERESTARTSYS)
                        DRM_ERROR("failed to allocate GEM object\n");
                return ret;
        }
-       *obj = &astbo->gem;
-       return 0;
-}
-
-int mgag200_dumb_create(struct drm_file *file,
-                   struct drm_device *dev,
-                   struct drm_mode_create_dumb *args)
-{
-       int ret;
-       struct drm_gem_object *gobj;
-       u32 handle;
-
-       args->pitch = args->width * ((args->bpp + 7) / 8);
-       args->size = args->pitch * args->height;
-
-       ret = mgag200_gem_create(dev, args->size, false,
-                            &gobj);
-       if (ret)
-               return ret;
-
-       ret = drm_gem_handle_create(file, gobj, &handle);
-       drm_gem_object_put_unlocked(gobj);
-       if (ret)
-               return ret;
-
-       args->handle = handle;
-       return 0;
-}
-
-static void mgag200_bo_unref(struct mgag200_bo **bo)
-{
-       if ((*bo) == NULL)
-               return;
-       ttm_bo_put(&((*bo)->bo));
-       *bo = NULL;
-}
-
-void mgag200_gem_free_object(struct drm_gem_object *obj)
-{
-       struct mgag200_bo *mgag200_bo = gem_to_mga_bo(obj);
-
-       mgag200_bo_unref(&mgag200_bo);
-}
-
-
-static inline u64 mgag200_bo_mmap_offset(struct mgag200_bo *bo)
-{
-       return drm_vma_node_offset_addr(&bo->bo.vma_node);
-}
-
-int
-mgag200_dumb_mmap_offset(struct drm_file *file,
-                    struct drm_device *dev,
-                    uint32_t handle,
-                    uint64_t *offset)
-{
-       struct drm_gem_object *obj;
-       struct mgag200_bo *bo;
-
-       obj = drm_gem_object_lookup(file, handle);
-       if (obj == NULL)
-               return -ENOENT;
-
-       bo = gem_to_mga_bo(obj);
-       *offset = mgag200_bo_mmap_offset(bo);
-
-       drm_gem_object_put_unlocked(obj);
+       *obj = &gbo->gem;
        return 0;
 }
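A recurring pattern in this conversion: `drm_gem_vram_create()` and `drm_gem_vram_kmap()` return ERR_PTR-encoded pointers instead of an int plus out-parameter, so callers test `IS_ERR()` and recover the errno with `PTR_ERR()`. A userspace sketch of that encoding (the kernel's real macros live in `include/linux/err.h`; this is an illustration, not the kernel source):

```c
#include <stdint.h>

/* Userspace sketch of the kernel's ERR_PTR/IS_ERR/PTR_ERR scheme:
 * small negative errno values are smuggled into the top MAX_ERRNO
 * bytes of the address space, where no valid pointer can live. */
#define MAX_ERRNO 4095

static void *err_ptr(long err)
{
    return (void *)err;
}

static int is_err(const void *p)
{
    return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

static long ptr_err(const void *p)
{
    return (long)(intptr_t)p;
}
```

This is why hunks like the `mgag200_gem_create()` one above replace `if (ret)` checks with `if (IS_ERR(gbo)) { ret = PTR_ERR(gbo); ... }`.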
index 7481a3d..1c8e0bf 100644 (file)
@@ -858,8 +858,6 @@ static void mga_set_start_address(struct drm_crtc *crtc, unsigned offset)
        WREG_ECRT(0x0, ((u8)(addr >> 16) & 0xf) | crtcext0);
 }
 
-
-/* ast is different - we will force move buffers out of VRAM */
 static int mga_crtc_do_set_base(struct drm_crtc *crtc,
                                struct drm_framebuffer *fb,
                                int x, int y, int atomic)
@@ -867,48 +865,51 @@ static int mga_crtc_do_set_base(struct drm_crtc *crtc,
        struct mga_device *mdev = crtc->dev->dev_private;
        struct drm_gem_object *obj;
        struct mga_framebuffer *mga_fb;
-       struct mgag200_bo *bo;
+       struct drm_gem_vram_object *gbo;
        int ret;
-       u64 gpu_addr;
+       s64 gpu_addr;
+       void *base;
 
-       /* push the previous fb to system ram */
        if (!atomic && fb) {
                mga_fb = to_mga_framebuffer(fb);
                obj = mga_fb->obj;
-               bo = gem_to_mga_bo(obj);
-               ret = mgag200_bo_reserve(bo, false);
-               if (ret)
-                       return ret;
-               mgag200_bo_push_sysram(bo);
-               mgag200_bo_unreserve(bo);
+               gbo = drm_gem_vram_of_gem(obj);
+
+               /* unmap if console */
+               if (&mdev->mfbdev->mfb == mga_fb)
+                       drm_gem_vram_kunmap(gbo);
+               drm_gem_vram_unpin(gbo);
        }
 
        mga_fb = to_mga_framebuffer(crtc->primary->fb);
        obj = mga_fb->obj;
-       bo = gem_to_mga_bo(obj);
+       gbo = drm_gem_vram_of_gem(obj);
 
-       ret = mgag200_bo_reserve(bo, false);
+       ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret)
                return ret;
-
-       ret = mgag200_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr);
-       if (ret) {
-               mgag200_bo_unreserve(bo);
-               return ret;
+       gpu_addr = drm_gem_vram_offset(gbo);
+       if (gpu_addr < 0) {
+               ret = (int)gpu_addr;
+               goto err_drm_gem_vram_unpin;
        }
 
        if (&mdev->mfbdev->mfb == mga_fb) {
                /* if pushing console in kmap it */
-               ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-               if (ret)
+               base = drm_gem_vram_kmap(gbo, true, NULL);
+               if (IS_ERR(base)) {
+                       ret = PTR_ERR(base);
                        DRM_ERROR("failed to kmap fbcon\n");
-
+               }
        }
-       mgag200_bo_unreserve(bo);
 
        mga_set_start_address(crtc, (u32)gpu_addr);
 
        return 0;
+
+err_drm_gem_vram_unpin:
+       drm_gem_vram_unpin(gbo);
+       return ret;
 }
 
 static int mga_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
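The `gpu_addr` variable above changes from `u64` to `s64` because `drm_gem_vram_offset()` folds the error path into its return value: a negative result is `-errno`, anything non-negative is the VRAM offset. A minimal sketch of that convention (`pin_count` and the errno value are illustrative stand-ins for the helper's real checks):

```c
#include <stdint.h>

/* Offset-or-errno convention used by drm_gem_vram_offset(): return the
 * GPU offset of a pinned BO, or a negative errno if it is not pinned.
 * Callers test `if (off < 0)` before casting to an address. */
static int64_t bo_offset(int pin_count, int64_t offset)
{
    if (!pin_count)
        return -22;     /* -EINVAL: the BO must be pinned first */
    return offset;
}
```

This is the check `if (gpu_addr < 0)` performs in `mga_crtc_do_set_base()` before programming the start address.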
@@ -1422,18 +1423,18 @@ static void mga_crtc_destroy(struct drm_crtc *crtc)
 
 static void mga_crtc_disable(struct drm_crtc *crtc)
 {
-       int ret;
        DRM_DEBUG_KMS("\n");
        mga_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
        if (crtc->primary->fb) {
+               struct mga_device *mdev = crtc->dev->dev_private;
                struct mga_framebuffer *mga_fb = to_mga_framebuffer(crtc->primary->fb);
                struct drm_gem_object *obj = mga_fb->obj;
-               struct mgag200_bo *bo = gem_to_mga_bo(obj);
-               ret = mgag200_bo_reserve(bo, false);
-               if (ret)
-                       return;
-               mgag200_bo_push_sysram(bo);
-               mgag200_bo_unreserve(bo);
+               struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
+
+               /* unmap if console */
+               if (&mdev->mfbdev->mfb == mga_fb)
+                       drm_gem_vram_kunmap(gbo);
+               drm_gem_vram_unpin(gbo);
        }
        crtc->primary->fb = NULL;
 }
index bd42365..59294c0 100644 (file)
  * Authors: Dave Airlie <airlied@redhat.com>
  */
 #include <drm/drmP.h>
-#include <drm/ttm/ttm_page_alloc.h>
 
 #include "mgag200_drv.h"
 
-static inline struct mga_device *
-mgag200_bdev(struct ttm_bo_device *bd)
-{
-       return container_of(bd, struct mga_device, ttm.bdev);
-}
-
-static void mgag200_bo_ttm_destroy(struct ttm_buffer_object *tbo)
-{
-       struct mgag200_bo *bo;
-
-       bo = container_of(tbo, struct mgag200_bo, bo);
-
-       drm_gem_object_release(&bo->gem);
-       kfree(bo);
-}
-
-static bool mgag200_ttm_bo_is_mgag200_bo(struct ttm_buffer_object *bo)
-{
-       if (bo->destroy == &mgag200_bo_ttm_destroy)
-               return true;
-       return false;
-}
-
-static int
-mgag200_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
-                    struct ttm_mem_type_manager *man)
-{
-       switch (type) {
-       case TTM_PL_SYSTEM:
-               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_MASK_CACHING;
-               man->default_caching = TTM_PL_FLAG_CACHED;
-               break;
-       case TTM_PL_VRAM:
-               man->func = &ttm_bo_manager_func;
-               man->flags = TTM_MEMTYPE_FLAG_FIXED |
-                       TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_FLAG_UNCACHED |
-                       TTM_PL_FLAG_WC;
-               man->default_caching = TTM_PL_FLAG_WC;
-               break;
-       default:
-               DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
-               return -EINVAL;
-       }
-       return 0;
-}
-
-static void
-mgag200_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
-{
-       struct mgag200_bo *mgabo = mgag200_bo(bo);
-
-       if (!mgag200_ttm_bo_is_mgag200_bo(bo))
-               return;
-
-       mgag200_ttm_placement(mgabo, TTM_PL_FLAG_SYSTEM);
-       *pl = mgabo->placement;
-}
-
-static int mgag200_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
-{
-       struct mgag200_bo *mgabo = mgag200_bo(bo);
-
-       return drm_vma_node_verify_access(&mgabo->gem.vma_node,
-                                         filp->private_data);
-}
-
-static int mgag200_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-                                 struct ttm_mem_reg *mem)
-{
-       struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-       struct mga_device *mdev = mgag200_bdev(bdev);
-
-       mem->bus.addr = NULL;
-       mem->bus.offset = 0;
-       mem->bus.size = mem->num_pages << PAGE_SHIFT;
-       mem->bus.base = 0;
-       mem->bus.is_iomem = false;
-       if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-               return -EINVAL;
-       switch (mem->mem_type) {
-       case TTM_PL_SYSTEM:
-               /* system memory */
-               return 0;
-       case TTM_PL_VRAM:
-               mem->bus.offset = mem->start << PAGE_SHIFT;
-               mem->bus.base = pci_resource_start(mdev->dev->pdev, 0);
-               mem->bus.is_iomem = true;
-               break;
-       default:
-               return -EINVAL;
-               break;
-       }
-       return 0;
-}
-
-static void mgag200_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
-{
-}
-
-static void mgag200_ttm_backend_destroy(struct ttm_tt *tt)
-{
-       ttm_tt_fini(tt);
-       kfree(tt);
-}
-
-static struct ttm_backend_func mgag200_tt_backend_func = {
-       .destroy = &mgag200_ttm_backend_destroy,
-};
-
-
-static struct ttm_tt *mgag200_ttm_tt_create(struct ttm_buffer_object *bo,
-                                           uint32_t page_flags)
-{
-       struct ttm_tt *tt;
-
-       tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
-       if (tt == NULL)
-               return NULL;
-       tt->func = &mgag200_tt_backend_func;
-       if (ttm_tt_init(tt, bo, page_flags)) {
-               kfree(tt);
-               return NULL;
-       }
-       return tt;
-}
-
-struct ttm_bo_driver mgag200_bo_driver = {
-       .ttm_tt_create = mgag200_ttm_tt_create,
-       .init_mem_type = mgag200_bo_init_mem_type,
-       .eviction_valuable = ttm_bo_eviction_valuable,
-       .evict_flags = mgag200_bo_evict_flags,
-       .move = NULL,
-       .verify_access = mgag200_bo_verify_access,
-       .io_mem_reserve = &mgag200_ttm_io_mem_reserve,
-       .io_mem_free = &mgag200_ttm_io_mem_free,
-};
-
 int mgag200_mm_init(struct mga_device *mdev)
 {
+       struct drm_vram_mm *vmm;
        int ret;
        struct drm_device *dev = mdev->dev;
-       struct ttm_bo_device *bdev = &mdev->ttm.bdev;
-
-       ret = ttm_bo_device_init(&mdev->ttm.bdev,
-                                &mgag200_bo_driver,
-                                dev->anon_inode->i_mapping,
-                                true);
-       if (ret) {
-               DRM_ERROR("Error initialising bo driver; %d\n", ret);
-               return ret;
-       }
 
-       ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM, mdev->mc.vram_size >> PAGE_SHIFT);
-       if (ret) {
-               DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
+       vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(dev->pdev, 0),
+                                      mdev->mc.vram_size,
+                                      &drm_gem_vram_mm_funcs);
+       if (IS_ERR(vmm)) {
+               ret = PTR_ERR(vmm);
+               DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
                return ret;
        }
 
@@ -203,149 +57,10 @@ void mgag200_mm_fini(struct mga_device *mdev)
 {
        struct drm_device *dev = mdev->dev;
 
-       ttm_bo_device_release(&mdev->ttm.bdev);
+       drm_vram_helper_release_mm(dev);
 
        arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0),
                                pci_resource_len(dev->pdev, 0));
        arch_phys_wc_del(mdev->fb_mtrr);
        mdev->fb_mtrr = 0;
 }
-
-void mgag200_ttm_placement(struct mgag200_bo *bo, int domain)
-{
-       u32 c = 0;
-       unsigned i;
-
-       bo->placement.placement = bo->placements;
-       bo->placement.busy_placement = bo->placements;
-       if (domain & TTM_PL_FLAG_VRAM)
-               bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
-       if (domain & TTM_PL_FLAG_SYSTEM)
-               bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
-       if (!c)
-               bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
-       bo->placement.num_placement = c;
-       bo->placement.num_busy_placement = c;
-       for (i = 0; i < c; ++i) {
-               bo->placements[i].fpfn = 0;
-               bo->placements[i].lpfn = 0;
-       }
-}
-
-int mgag200_bo_create(struct drm_device *dev, int size, int align,
-                 uint32_t flags, struct mgag200_bo **pmgabo)
-{
-       struct mga_device *mdev = dev->dev_private;
-       struct mgag200_bo *mgabo;
-       size_t acc_size;
-       int ret;
-
-       mgabo = kzalloc(sizeof(struct mgag200_bo), GFP_KERNEL);
-       if (!mgabo)
-               return -ENOMEM;
-
-       ret = drm_gem_object_init(dev, &mgabo->gem, size);
-       if (ret) {
-               kfree(mgabo);
-               return ret;
-       }
-
-       mgabo->bo.bdev = &mdev->ttm.bdev;
-
-       mgag200_ttm_placement(mgabo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
-
-       acc_size = ttm_bo_dma_acc_size(&mdev->ttm.bdev, size,
-                                      sizeof(struct mgag200_bo));
-
-       ret = ttm_bo_init(&mdev->ttm.bdev, &mgabo->bo, size,
-                         ttm_bo_type_device, &mgabo->placement,
-                         align >> PAGE_SHIFT, false, acc_size,
-                         NULL, NULL, mgag200_bo_ttm_destroy);
-       if (ret)
-               return ret;
-
-       *pmgabo = mgabo;
-       return 0;
-}
-
-static inline u64 mgag200_bo_gpu_offset(struct mgag200_bo *bo)
-{
-       return bo->bo.offset;
-}
-
-int mgag200_bo_pin(struct mgag200_bo *bo, u32 pl_flag, u64 *gpu_addr)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (bo->pin_count) {
-               bo->pin_count++;
-               if (gpu_addr)
-                       *gpu_addr = mgag200_bo_gpu_offset(bo);
-               return 0;
-       }
-
-       mgag200_ttm_placement(bo, pl_flag);
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret)
-               return ret;
-
-       bo->pin_count = 1;
-       if (gpu_addr)
-               *gpu_addr = mgag200_bo_gpu_offset(bo);
-       return 0;
-}
-
-int mgag200_bo_unpin(struct mgag200_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i;
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-       if (bo->pin_count)
-               return 0;
-
-       for (i = 0; i < bo->placement.num_placement ; i++)
-               bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
-       return ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-}
-
-int mgag200_bo_push_sysram(struct mgag200_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-       if (bo->pin_count)
-               return 0;
-
-       if (bo->kmap.virtual)
-               ttm_bo_kunmap(&bo->kmap);
-
-       mgag200_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
-       for (i = 0; i < bo->placement.num_placement ; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret) {
-               DRM_ERROR("pushing to VRAM failed\n");
-               return ret;
-       }
-       return 0;
-}
-
-int mgag200_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-       struct drm_file *file_priv = filp->private_data;
-       struct mga_device *mdev = file_priv->minor->dev->dev_private;
-
-       return ttm_bo_mmap(filp, vma, &mdev->ttm.bdev);
-}
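The TTM buffer code removed above (now provided by the core drm_gem_vram helpers, per the merge notes) was built around a simple pin refcount: only the first pin locks the placement, and only the last unpin makes the buffer evictable again. A minimal userspace sketch of that pattern, with hypothetical names standing in for the TTM flags:

```c
#include <assert.h>

/* Sketch only -- not the kernel code. 'evictable' stands in for the
 * absence of TTM_PL_FLAG_NO_EVICT in the placement flags. */
struct sketch_bo {
	int pin_count;
	int evictable;
};

static void sketch_bo_pin(struct sketch_bo *bo)
{
	if (bo->pin_count++ == 0)
		bo->evictable = 0;	/* first pin: lock placement */
}

static void sketch_bo_unpin(struct sketch_bo *bo)
{
	if (!bo->pin_count)
		return;			/* unbalanced unpin, as in the removed code */
	if (--bo->pin_count == 0)
		bo->evictable = 1;	/* last unpin: allow eviction again */
}
```

The nested pins are why mgag200_bo_pin() returned the cached GPU offset without revalidating when pin_count was already nonzero.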
index dfdfa76..3772f74 100644
@@ -694,14 +694,12 @@ end:
 
 static void dpu_crtc_reset(struct drm_crtc *crtc)
 {
-       struct dpu_crtc_state *cstate;
+       struct dpu_crtc_state *cstate = kzalloc(sizeof(*cstate), GFP_KERNEL);
 
        if (crtc->state)
                dpu_crtc_destroy_state(crtc, crtc->state);
 
-       crtc->state = kzalloc(sizeof(*cstate), GFP_KERNEL);
-       if (crtc->state)
-               crtc->state->crtc = crtc;
+       __drm_atomic_helper_crtc_reset(crtc, &cstate->base);
 }
 
 /**
index f59fe1a..c3d491e 100644
@@ -1040,10 +1040,11 @@ int dpu_format_check_modified_format(
                const struct drm_mode_fb_cmd2 *cmd,
                struct drm_gem_object **bos)
 {
-       int ret, i, num_base_fmt_planes;
+       const struct drm_format_info *info;
        const struct dpu_format *fmt;
        struct dpu_hw_fmt_layout layout;
        uint32_t bos_total_size = 0;
+       int ret, i;
 
        if (!msm_fmt || !cmd || !bos) {
                DRM_ERROR("invalid arguments\n");
@@ -1051,14 +1052,16 @@ int dpu_format_check_modified_format(
        }
 
        fmt = to_dpu_format(msm_fmt);
-       num_base_fmt_planes = drm_format_num_planes(fmt->base.pixel_format);
+       info = drm_format_info(fmt->base.pixel_format);
+       if (!info)
+               return -EINVAL;
 
        ret = dpu_format_get_plane_sizes(fmt, cmd->width, cmd->height,
                        &layout, cmd->pitches);
        if (ret)
                return ret;
 
-       for (i = 0; i < num_base_fmt_planes; i++) {
+       for (i = 0; i < info->num_planes; i++) {
                if (!bos[i]) {
                        DRM_ERROR("invalid handle for plane %d\n", i);
                        return -EINVAL;
index ce1a555..d831ced 100644
@@ -557,14 +557,9 @@ static void _dpu_plane_setup_scaler(struct dpu_plane *pdpu,
                struct dpu_plane_state *pstate,
                const struct dpu_format *fmt, bool color_fill)
 {
-       uint32_t chroma_subsmpl_h, chroma_subsmpl_v;
+       const struct drm_format_info *info = drm_format_info(fmt->base.pixel_format);
 
        /* don't chroma subsample if decimating */
-       chroma_subsmpl_h =
-               drm_format_horz_chroma_subsampling(fmt->base.pixel_format);
-       chroma_subsmpl_v =
-               drm_format_vert_chroma_subsampling(fmt->base.pixel_format);
-
        /* update scaler. calculate default config for QSEED3 */
        _dpu_plane_setup_scaler3(pdpu, pstate,
                        drm_rect_width(&pdpu->pipe_cfg.src_rect),
@@ -572,7 +567,7 @@ static void _dpu_plane_setup_scaler(struct dpu_plane *pdpu,
                        drm_rect_width(&pdpu->pipe_cfg.dst_rect),
                        drm_rect_height(&pdpu->pipe_cfg.dst_rect),
                        &pstate->scaler3_cfg, fmt,
-                       chroma_subsmpl_h, chroma_subsmpl_v);
+                       info->hsub, info->vsub);
 }
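The conversions in this hunk and the following ones replace the removed drm_format_horz/vert_chroma_subsampling() lookups with the hsub/vsub fields of struct drm_format_info. The arithmetic is unchanged: the chroma planes of a subsampled YUV format cover width/hsub by height/vsub samples. A standalone sketch (values are illustrative; NV12 has hsub = vsub = 2):

```c
#include <assert.h>

/* Mimics the two fields of struct drm_format_info used here. */
struct fmt_info_sketch {
	unsigned int hsub;	/* horizontal chroma subsampling factor */
	unsigned int vsub;	/* vertical chroma subsampling factor */
};

static unsigned int chroma_width(unsigned int width,
				 const struct fmt_info_sketch *info)
{
	return width / info->hsub;
}

static unsigned int chroma_height(unsigned int height,
				  const struct fmt_info_sketch *info)
{
	return height / info->vsub;
}
```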
 
 /**
index b0cf63c..c3751c9 100644
@@ -782,6 +782,7 @@ static void get_roi(struct drm_crtc *crtc, uint32_t *roi_w, uint32_t *roi_h)
 
 static void mdp5_crtc_restore_cursor(struct drm_crtc *crtc)
 {
+       const struct drm_format_info *info = drm_format_info(DRM_FORMAT_ARGB8888);
        struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state);
        struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
        struct mdp5_kms *mdp5_kms = get_kms(crtc);
@@ -800,7 +801,7 @@ static void mdp5_crtc_restore_cursor(struct drm_crtc *crtc)
        width = mdp5_crtc->cursor.width;
        height = mdp5_crtc->cursor.height;
 
-       stride = width * drm_format_plane_cpp(DRM_FORMAT_ARGB8888, 0);
+       stride = width * info->cpp[0];
 
        get_roi(crtc, &roi_w, &roi_h);
 
@@ -1002,23 +1003,6 @@ mdp5_crtc_atomic_print_state(struct drm_printer *p,
        drm_printf(p, "\tcmd_mode=%d\n", mdp5_cstate->cmd_mode);
 }
 
-static void mdp5_crtc_reset(struct drm_crtc *crtc)
-{
-       struct mdp5_crtc_state *mdp5_cstate;
-
-       if (crtc->state) {
-               __drm_atomic_helper_crtc_destroy_state(crtc->state);
-               kfree(to_mdp5_crtc_state(crtc->state));
-       }
-
-       mdp5_cstate = kzalloc(sizeof(*mdp5_cstate), GFP_KERNEL);
-
-       if (mdp5_cstate) {
-               mdp5_cstate->base.crtc = crtc;
-               crtc->state = &mdp5_cstate->base;
-       }
-}
-
 static struct drm_crtc_state *
 mdp5_crtc_duplicate_state(struct drm_crtc *crtc)
 {
@@ -1046,6 +1030,17 @@ static void mdp5_crtc_destroy_state(struct drm_crtc *crtc, struct drm_crtc_state
        kfree(mdp5_cstate);
 }
 
+static void mdp5_crtc_reset(struct drm_crtc *crtc)
+{
+       struct mdp5_crtc_state *mdp5_cstate =
+               kzalloc(sizeof(*mdp5_cstate), GFP_KERNEL);
+
+       if (crtc->state)
+               mdp5_crtc_destroy_state(crtc, crtc->state);
+
+       __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
+}
+
 static const struct drm_crtc_funcs mdp5_crtc_funcs = {
        .set_config = drm_atomic_helper_set_config,
        .destroy = mdp5_crtc_destroy,
index be13140..9d9fb6c 100644
@@ -650,10 +650,10 @@ static int calc_scalex_steps(struct drm_plane *plane,
                uint32_t pixel_format, uint32_t src, uint32_t dest,
                uint32_t phasex_steps[COMP_MAX])
 {
+       const struct drm_format_info *info = drm_format_info(pixel_format);
        struct mdp5_kms *mdp5_kms = get_kms(plane);
        struct device *dev = mdp5_kms->dev->dev;
        uint32_t phasex_step;
-       unsigned int hsub;
        int ret;
 
        ret = calc_phase_step(src, dest, &phasex_step);
@@ -662,11 +662,9 @@ static int calc_scalex_steps(struct drm_plane *plane,
                return ret;
        }
 
-       hsub = drm_format_horz_chroma_subsampling(pixel_format);
-
        phasex_steps[COMP_0]   = phasex_step;
        phasex_steps[COMP_3]   = phasex_step;
-       phasex_steps[COMP_1_2] = phasex_step / hsub;
+       phasex_steps[COMP_1_2] = phasex_step / info->hsub;
 
        return 0;
 }
@@ -675,10 +673,10 @@ static int calc_scaley_steps(struct drm_plane *plane,
                uint32_t pixel_format, uint32_t src, uint32_t dest,
                uint32_t phasey_steps[COMP_MAX])
 {
+       const struct drm_format_info *info = drm_format_info(pixel_format);
        struct mdp5_kms *mdp5_kms = get_kms(plane);
        struct device *dev = mdp5_kms->dev->dev;
        uint32_t phasey_step;
-       unsigned int vsub;
        int ret;
 
        ret = calc_phase_step(src, dest, &phasey_step);
@@ -687,11 +685,9 @@ static int calc_scaley_steps(struct drm_plane *plane,
                return ret;
        }
 
-       vsub = drm_format_vert_chroma_subsampling(pixel_format);
-
        phasey_steps[COMP_0]   = phasey_step;
        phasey_steps[COMP_3]   = phasey_step;
-       phasey_steps[COMP_1_2] = phasey_step / vsub;
+       phasey_steps[COMP_1_2] = phasey_step / info->vsub;
 
        return 0;
 }
@@ -699,8 +695,9 @@ static int calc_scaley_steps(struct drm_plane *plane,
 static uint32_t get_scale_config(const struct mdp_format *format,
                uint32_t src, uint32_t dst, bool horz)
 {
+       const struct drm_format_info *info = drm_format_info(format->base.pixel_format);
        bool scaling = format->is_yuv ? true : (src != dst);
-       uint32_t sub, pix_fmt = format->base.pixel_format;
+       uint32_t sub;
        uint32_t ya_filter, uv_filter;
        bool yuv = format->is_yuv;
 
@@ -708,8 +705,7 @@ static uint32_t get_scale_config(const struct mdp_format *format,
                return 0;
 
        if (yuv) {
-               sub = horz ? drm_format_horz_chroma_subsampling(pix_fmt) :
-                            drm_format_vert_chroma_subsampling(pix_fmt);
+               sub = horz ? info->hsub : info->vsub;
                uv_filter = ((src / sub) <= dst) ?
                                   SCALE_FILTER_BIL : SCALE_FILTER_PCMN;
        }
@@ -754,7 +750,7 @@ static void mdp5_write_pixel_ext(struct mdp5_kms *mdp5_kms, enum mdp5_pipe pipe,
        uint32_t src_w, int pe_left[COMP_MAX], int pe_right[COMP_MAX],
        uint32_t src_h, int pe_top[COMP_MAX], int pe_bottom[COMP_MAX])
 {
-       uint32_t pix_fmt = format->base.pixel_format;
+       const struct drm_format_info *info = drm_format_info(format->base.pixel_format);
        uint32_t lr, tb, req;
        int i;
 
@@ -763,8 +759,8 @@ static void mdp5_write_pixel_ext(struct mdp5_kms *mdp5_kms, enum mdp5_pipe pipe,
                uint32_t roi_h = src_h;
 
                if (format->is_yuv && i == COMP_1_2) {
-                       roi_w /= drm_format_horz_chroma_subsampling(pix_fmt);
-                       roi_h /= drm_format_vert_chroma_subsampling(pix_fmt);
+                       roi_w /= info->hsub;
+                       roi_h /= info->vsub;
                }
 
                lr  = (pe_left[i] >= 0) ?
index 6153514..2834837 100644
@@ -127,14 +127,15 @@ uint32_t mdp5_smp_calculate(struct mdp5_smp *smp,
                const struct mdp_format *format,
                u32 width, bool hdecim)
 {
+       const struct drm_format_info *info = drm_format_info(format->base.pixel_format);
        struct mdp5_kms *mdp5_kms = get_kms(smp);
        int rev = mdp5_cfg_get_hw_rev(mdp5_kms->cfg);
        int i, hsub, nplanes, nlines;
        u32 fmt = format->base.pixel_format;
        uint32_t blkcfg = 0;
 
-       nplanes = drm_format_num_planes(fmt);
-       hsub = drm_format_horz_chroma_subsampling(fmt);
+       nplanes = info->num_planes;
+       hsub = info->hsub;
 
        /* different if BWC (compressed framebuffer?) enabled: */
        nlines = 2;
@@ -157,7 +158,7 @@ uint32_t mdp5_smp_calculate(struct mdp5_smp *smp,
        for (i = 0; i < nplanes; i++) {
                int n, fetch_stride, cpp;
 
-               cpp = drm_format_plane_cpp(fmt, i);
+               cpp = info->cpp[i];
                fetch_stride = width * cpp / (i ? hsub : 1);
 
                n = DIV_ROUND_UP(fetch_stride * nlines, smp->blk_size);
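The per-plane SMP block count above now reads cpp[] and hsub straight from drm_format_info. A self-contained sketch of that computation (the numbers in the test are illustrative, not taken from any particular MDP5 configuration):

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Blocks needed to fetch 'nlines' lines of one plane: chroma planes
 * (plane != 0) are horizontally subsampled by hsub, as in
 * mdp5_smp_calculate() above. */
static unsigned int smp_blocks(unsigned int width, unsigned int cpp,
			       unsigned int hsub, int plane,
			       unsigned int nlines, unsigned int blk_size)
{
	unsigned int fetch_stride = width * cpp / (plane ? hsub : 1);

	return DIV_ROUND_UP(fetch_stride * nlines, blk_size);
}
```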
index 1360589..68fa2c8 100644
@@ -106,9 +106,11 @@ const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb)
 struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev,
                struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd)
 {
+       const struct drm_format_info *info = drm_get_format_info(dev,
+                                                                mode_cmd);
        struct drm_gem_object *bos[4] = {0};
        struct drm_framebuffer *fb;
-       int ret, i, n = drm_format_num_planes(mode_cmd->pixel_format);
+       int ret, i, n = info->num_planes;
 
        for (i = 0; i < n; i++) {
                bos[i] = drm_gem_object_lookup(file, mode_cmd->handles[i]);
@@ -135,22 +137,20 @@ out_unref:
 static struct drm_framebuffer *msm_framebuffer_init(struct drm_device *dev,
                const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos)
 {
+       const struct drm_format_info *info = drm_get_format_info(dev,
+                                                                mode_cmd);
        struct msm_drm_private *priv = dev->dev_private;
        struct msm_kms *kms = priv->kms;
        struct msm_framebuffer *msm_fb = NULL;
        struct drm_framebuffer *fb;
        const struct msm_format *format;
        int ret, i, n;
-       unsigned int hsub, vsub;
 
        DBG("create framebuffer: dev=%p, mode_cmd=%p (%dx%d@%4.4s)",
                        dev, mode_cmd, mode_cmd->width, mode_cmd->height,
                        (char *)&mode_cmd->pixel_format);
 
-       n = drm_format_num_planes(mode_cmd->pixel_format);
-       hsub = drm_format_horz_chroma_subsampling(mode_cmd->pixel_format);
-       vsub = drm_format_vert_chroma_subsampling(mode_cmd->pixel_format);
-
+       n = info->num_planes;
        format = kms->funcs->get_format(kms, mode_cmd->pixel_format,
                        mode_cmd->modifier[0]);
        if (!format) {
@@ -176,12 +176,12 @@ static struct drm_framebuffer *msm_framebuffer_init(struct drm_device *dev,
        }
 
        for (i = 0; i < n; i++) {
-               unsigned int width = mode_cmd->width / (i ? hsub : 1);
-               unsigned int height = mode_cmd->height / (i ? vsub : 1);
+               unsigned int width = mode_cmd->width / (i ? info->hsub : 1);
+               unsigned int height = mode_cmd->height / (i ? info->vsub : 1);
                unsigned int min_size;
 
                min_size = (height - 1) * mode_cmd->pitches[i]
-                        + width * drm_format_plane_cpp(mode_cmd->pixel_format, i)
+                        + width * info->cpp[i]
                         + mode_cmd->offsets[i];
 
                if (bos[i]->size < min_size) {
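The min_size check in msm_framebuffer_init() ensures each backing object can hold the last row of its plane plus the plane offset; only the source of cpp changed (drm_format_info::cpp[] instead of drm_format_plane_cpp()). The formula in isolation:

```c
#include <assert.h>

/* Smallest object size that can back one framebuffer plane:
 * (height - 1) full pitches, plus one row of pixels, plus the
 * plane's byte offset into the object. */
static unsigned int plane_min_size(unsigned int width, unsigned int height,
				   unsigned int pitch, unsigned int cpp,
				   unsigned int offset)
{
	return (height - 1) * pitch + width * cpp + offset;
}
```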
index 06ee238..48a6485 100644
@@ -420,16 +420,6 @@ nv50_head_atomic_duplicate_state(struct drm_crtc *crtc)
        return &asyh->state;
 }
 
-static void
-__drm_atomic_helper_crtc_reset(struct drm_crtc *crtc,
-                              struct drm_crtc_state *state)
-{
-       if (crtc->state)
-               crtc->funcs->atomic_destroy_state(crtc, crtc->state);
-       crtc->state = state;
-       crtc->state->crtc = crtc;
-}
-
 static void
 nv50_head_reset(struct drm_crtc *crtc)
 {
@@ -438,6 +428,9 @@ nv50_head_reset(struct drm_crtc *crtc)
        if (WARN_ON(!(asyh = kzalloc(sizeof(*asyh), GFP_KERNEL))))
                return;
 
+       if (crtc->state)
+               nv50_head_atomic_destroy_state(crtc, crtc->state);
+
        __drm_atomic_helper_crtc_reset(crtc, &asyh->state);
 }
 
index c80b967..2b44ba5 100644
@@ -26,8 +26,6 @@
 
 #include <subdev/gpio.h>
 
-#include <subdev/gpio.h>
-
 static void
 nv04_bus_intr(struct nvkm_bus *bus)
 {
index 4f8eb9d..6557b2d 100644
@@ -298,7 +298,9 @@ void omap_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 struct drm_framebuffer *omap_framebuffer_create(struct drm_device *dev,
                struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd)
 {
-       unsigned int num_planes = drm_format_num_planes(mode_cmd->pixel_format);
+       const struct drm_format_info *info = drm_get_format_info(dev,
+                                                                mode_cmd);
+       unsigned int num_planes = info->num_planes;
        struct drm_gem_object *bos[4];
        struct drm_framebuffer *fb;
        int i;
@@ -337,7 +339,7 @@ struct drm_framebuffer *omap_framebuffer_init(struct drm_device *dev,
                        dev, mode_cmd, mode_cmd->width, mode_cmd->height,
                        (char *)&mode_cmd->pixel_format);
 
-       format = drm_format_info(mode_cmd->pixel_format);
+       format = drm_get_format_info(dev, mode_cmd);
 
        for (i = 0; i < ARRAY_SIZE(formats); i++) {
                if (formats[i] == mode_cmd->pixel_format)
index e281fc5..d9d931a 100644
@@ -132,6 +132,15 @@ config DRM_PANEL_ORISETECH_OTM8009A
          Say Y here if you want to enable support for Orise Technology
          otm8009a 480x800 dsi 2dl panel.
 
+config DRM_PANEL_OSD_OSD101T2587_53TS
+       tristate "OSD OSD101T2587-53TS DSI 1920x1200 video mode panel"
+       depends on OF
+       depends on DRM_MIPI_DSI
+       depends on BACKLIGHT_CLASS_DEVICE
+       help
+         Say Y here if you want to enable support for One Stop Displays
+         OSD101T2587-53TS 10.1" 1920x1200 dsi panel.
+
 config DRM_PANEL_PANASONIC_VVX10F034N00
        tristate "Panasonic VVX10F034N00 1920x1200 video mode panel"
        depends on OF
@@ -201,6 +210,15 @@ config DRM_PANEL_SAMSUNG_S6E63J0X03
        depends on BACKLIGHT_CLASS_DEVICE
        select VIDEOMODE_HELPERS
 
+config DRM_PANEL_SAMSUNG_S6E63M0
+       tristate "Samsung S6E63M0 RGB/SPI panel"
+       depends on OF
+       depends on SPI
+       depends on BACKLIGHT_CLASS_DEVICE
+       help
+         Say Y here if you want to enable support for Samsung S6E63M0
+         AMOLED LCD panel.
+
 config DRM_PANEL_SAMSUNG_S6E8AA0
        tristate "Samsung S6E8AA0 DSI video mode panel"
        depends on OF
index 78e3dc3..fb0cb3a 100644
@@ -11,6 +11,7 @@ obj-$(CONFIG_DRM_PANEL_KINGDISPLAY_KD097D04) += panel-kingdisplay-kd097d04.o
 obj-$(CONFIG_DRM_PANEL_LG_LG4573) += panel-lg-lg4573.o
 obj-$(CONFIG_DRM_PANEL_OLIMEX_LCD_OLINUXINO) += panel-olimex-lcd-olinuxino.o
 obj-$(CONFIG_DRM_PANEL_ORISETECH_OTM8009A) += panel-orisetech-otm8009a.o
+obj-$(CONFIG_DRM_PANEL_OSD_OSD101T2587_53TS) += panel-osd-osd101t2587-53ts.o
 obj-$(CONFIG_DRM_PANEL_PANASONIC_VVX10F034N00) += panel-panasonic-vvx10f034n00.o
 obj-$(CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN) += panel-raspberrypi-touchscreen.o
 obj-$(CONFIG_DRM_PANEL_RAYDIUM_RM68200) += panel-raydium-rm68200.o
@@ -20,6 +21,7 @@ obj-$(CONFIG_DRM_PANEL_SAMSUNG_LD9040) += panel-samsung-ld9040.o
 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6D16D0) += panel-samsung-s6d16d0.o
 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3HA2) += panel-samsung-s6e3ha2.o
 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03) += panel-samsung-s6e63j0x03.o
+obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E63M0) += panel-samsung-s6e63m0.o
 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0) += panel-samsung-s6e8aa0.o
 obj-$(CONFIG_DRM_PANEL_SEIKO_43WVF1G) += panel-seiko-43wvf1g.o
 obj-$(CONFIG_DRM_PANEL_SHARP_LQ101R1SX01) += panel-sharp-lq101r1sx01.o
diff --git a/drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c b/drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c
new file mode 100644
index 0000000..e0e20ec
--- /dev/null
@@ -0,0 +1,254 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com
+ *  Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
+ */
+
+#include <linux/backlight.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/regulator/consumer.h>
+
+#include <drm/drm_crtc.h>
+#include <drm/drm_device.h>
+#include <drm/drm_mipi_dsi.h>
+#include <drm/drm_panel.h>
+
+#include <video/mipi_display.h>
+
+struct osd101t2587_panel {
+       struct drm_panel base;
+       struct mipi_dsi_device *dsi;
+
+       struct backlight_device *backlight;
+       struct regulator *supply;
+
+       bool prepared;
+       bool enabled;
+
+       const struct drm_display_mode *default_mode;
+};
+
+static inline struct osd101t2587_panel *ti_osd_panel(struct drm_panel *panel)
+{
+       return container_of(panel, struct osd101t2587_panel, base);
+}
+
+static int osd101t2587_panel_disable(struct drm_panel *panel)
+{
+       struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel);
+       int ret;
+
+       if (!osd101t2587->enabled)
+               return 0;
+
+       backlight_disable(osd101t2587->backlight);
+
+       ret = mipi_dsi_shutdown_peripheral(osd101t2587->dsi);
+
+       osd101t2587->enabled = false;
+
+       return ret;
+}
+
+static int osd101t2587_panel_unprepare(struct drm_panel *panel)
+{
+       struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel);
+
+       if (!osd101t2587->prepared)
+               return 0;
+
+       regulator_disable(osd101t2587->supply);
+       osd101t2587->prepared = false;
+
+       return 0;
+}
+
+static int osd101t2587_panel_prepare(struct drm_panel *panel)
+{
+       struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel);
+       int ret;
+
+       if (osd101t2587->prepared)
+               return 0;
+
+       ret = regulator_enable(osd101t2587->supply);
+       if (!ret)
+               osd101t2587->prepared = true;
+
+       return ret;
+}
+
+static int osd101t2587_panel_enable(struct drm_panel *panel)
+{
+       struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel);
+       int ret;
+
+       if (osd101t2587->enabled)
+               return 0;
+
+       ret = mipi_dsi_turn_on_peripheral(osd101t2587->dsi);
+       if (ret)
+               return ret;
+
+       backlight_enable(osd101t2587->backlight);
+
+       osd101t2587->enabled = true;
+
+       return ret;
+}
+
+static const struct drm_display_mode default_mode_osd101t2587 = {
+       .clock = 164400,
+       .hdisplay = 1920,
+       .hsync_start = 1920 + 152,
+       .hsync_end = 1920 + 152 + 52,
+       .htotal = 1920 + 152 + 52 + 20,
+       .vdisplay = 1200,
+       .vsync_start = 1200 + 24,
+       .vsync_end = 1200 + 24 + 6,
+       .vtotal = 1200 + 24 + 6 + 48,
+       .vrefresh = 60,
+       .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+};
+
+static int osd101t2587_panel_get_modes(struct drm_panel *panel)
+{
+       struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel);
+       struct drm_display_mode *mode;
+
+       mode = drm_mode_duplicate(panel->drm, osd101t2587->default_mode);
+       if (!mode) {
+               dev_err(panel->drm->dev, "failed to add mode %ux%u@%u\n",
+                       osd101t2587->default_mode->hdisplay,
+                       osd101t2587->default_mode->vdisplay,
+                       osd101t2587->default_mode->vrefresh);
+               return -ENOMEM;
+       }
+
+       drm_mode_set_name(mode);
+
+       drm_mode_probed_add(panel->connector, mode);
+
+       panel->connector->display_info.width_mm = 217;
+       panel->connector->display_info.height_mm = 136;
+
+       return 1;
+}
+
+static const struct drm_panel_funcs osd101t2587_panel_funcs = {
+       .disable = osd101t2587_panel_disable,
+       .unprepare = osd101t2587_panel_unprepare,
+       .prepare = osd101t2587_panel_prepare,
+       .enable = osd101t2587_panel_enable,
+       .get_modes = osd101t2587_panel_get_modes,
+};
+
+static const struct of_device_id osd101t2587_of_match[] = {
+       {
+               .compatible = "osddisplays,osd101t2587-53ts",
+               .data = &default_mode_osd101t2587,
+       }, {
+               /* sentinel */
+       }
+};
+MODULE_DEVICE_TABLE(of, osd101t2587_of_match);
+
+static int osd101t2587_panel_add(struct osd101t2587_panel *osd101t2587)
+{
+       struct device *dev = &osd101t2587->dsi->dev;
+
+       osd101t2587->supply = devm_regulator_get(dev, "power");
+       if (IS_ERR(osd101t2587->supply))
+               return PTR_ERR(osd101t2587->supply);
+
+       osd101t2587->backlight = devm_of_find_backlight(dev);
+       if (IS_ERR(osd101t2587->backlight))
+               return PTR_ERR(osd101t2587->backlight);
+
+       drm_panel_init(&osd101t2587->base);
+       osd101t2587->base.funcs = &osd101t2587_panel_funcs;
+       osd101t2587->base.dev = &osd101t2587->dsi->dev;
+
+       return drm_panel_add(&osd101t2587->base);
+}
+
+static int osd101t2587_panel_probe(struct mipi_dsi_device *dsi)
+{
+       struct osd101t2587_panel *osd101t2587;
+       const struct of_device_id *id;
+       int ret;
+
+       id = of_match_node(osd101t2587_of_match, dsi->dev.of_node);
+       if (!id)
+               return -ENODEV;
+
+       dsi->lanes = 4;
+       dsi->format = MIPI_DSI_FMT_RGB888;
+       dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
+                         MIPI_DSI_MODE_VIDEO_BURST |
+                         MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+                         MIPI_DSI_MODE_EOT_PACKET;
+
+       osd101t2587 = devm_kzalloc(&dsi->dev, sizeof(*osd101t2587), GFP_KERNEL);
+       if (!osd101t2587)
+               return -ENOMEM;
+
+       mipi_dsi_set_drvdata(dsi, osd101t2587);
+
+       osd101t2587->dsi = dsi;
+       osd101t2587->default_mode = id->data;
+
+       ret = osd101t2587_panel_add(osd101t2587);
+       if (ret < 0)
+               return ret;
+
+       ret = mipi_dsi_attach(dsi);
+       if (ret)
+               drm_panel_remove(&osd101t2587->base);
+
+       return ret;
+}
+
+static int osd101t2587_panel_remove(struct mipi_dsi_device *dsi)
+{
+       struct osd101t2587_panel *osd101t2587 = mipi_dsi_get_drvdata(dsi);
+       int ret;
+
+       ret = osd101t2587_panel_disable(&osd101t2587->base);
+       if (ret < 0)
+               dev_warn(&dsi->dev, "failed to disable panel: %d\n", ret);
+
+       osd101t2587_panel_unprepare(&osd101t2587->base);
+
+       drm_panel_remove(&osd101t2587->base);
+
+       ret = mipi_dsi_detach(dsi);
+       if (ret < 0)
+               dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", ret);
+
+       return ret;
+}
+
+static void osd101t2587_panel_shutdown(struct mipi_dsi_device *dsi)
+{
+       struct osd101t2587_panel *osd101t2587 = mipi_dsi_get_drvdata(dsi);
+
+       osd101t2587_panel_disable(&osd101t2587->base);
+       osd101t2587_panel_unprepare(&osd101t2587->base);
+}
+
+static struct mipi_dsi_driver osd101t2587_panel_driver = {
+       .driver = {
+               .name = "panel-osd-osd101t2587-53ts",
+               .of_match_table = osd101t2587_of_match,
+       },
+       .probe = osd101t2587_panel_probe,
+       .remove = osd101t2587_panel_remove,
+       .shutdown = osd101t2587_panel_shutdown,
+};
+module_mipi_dsi_driver(osd101t2587_panel_driver);
+
+MODULE_AUTHOR("Peter Ujfalusi <peter.ujfalusi@ti.com>");
+MODULE_DESCRIPTION("OSD101T2587-53TS DSI panel");
+MODULE_LICENSE("GPL v2");
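The vrefresh field of the new panel's display mode is redundant with its timings: refresh = clock (kHz) * 1000 / (htotal * vtotal). A quick cross-check for the OSD101T2587-53TS mode above (htotal = 1920+152+52+20 = 2144, vtotal = 1200+24+6+48 = 1278, clock = 164400 kHz), with DIV_ROUND_CLOSEST redefined locally for a standalone build:

```c
#include <assert.h>

#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

/* Refresh rate in Hz from pixel clock (kHz) and total timings. */
static unsigned int mode_vrefresh(unsigned int clock_khz,
				  unsigned int htotal, unsigned int vtotal)
{
	return DIV_ROUND_CLOSEST(clock_khz * 1000u, htotal * vtotal);
}
```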
index 2c9c972..1b708c8 100644
@@ -57,7 +57,6 @@
 #include <drm/drmP.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_mipi_dsi.h>
-#include <drm/drm_panel.h>
 
 #define RPI_DSI_DRIVER_NAME "rpi-ts-dsi"
 
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63m0.c b/drivers/gpu/drm/panel/panel-samsung-s6e63m0.c
new file mode 100644
index 0000000..142d395
--- /dev/null
@@ -0,0 +1,514 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * S6E63M0 AMOLED LCD drm_panel driver.
+ *
+ * Copyright (C) 2019 Paweł Chmiel <pawel.mikolaj.chmiel@gmail.com>
+ * Derived from drivers/gpu/drm/panel-samsung-ld9040.c
+ *
+ * Andrzej Hajda <a.hajda@samsung.com>
+ */
+
+#include <drm/drm_modes.h>
+#include <drm/drm_panel.h>
+#include <drm/drm_print.h>
+
+#include <linux/backlight.h>
+#include <linux/delay.h>
+#include <linux/gpio/consumer.h>
+#include <linux/module.h>
+#include <linux/regulator/consumer.h>
+#include <linux/spi/spi.h>
+
+#include <video/mipi_display.h>
+
+/* Manufacturer Command Set */
+#define MCS_ELVSS_ON		0xb1
+#define MCS_MIECTL1		0xc0
+#define MCS_BCMODE		0xc1
+#define MCS_DISCTL		0xf2
+#define MCS_SRCCTL		0xf6
+#define MCS_IFCTL		0xf7
+#define MCS_PANELCTL		0xf8
+#define MCS_PGAMMACTL		0xfa
+
+#define NUM_GAMMA_LEVELS	11
+#define GAMMA_TABLE_COUNT	23
+
+#define DATA_MASK		0x100
+
+#define MAX_BRIGHTNESS		(NUM_GAMMA_LEVELS - 1)
+
+/* array of gamma tables for gamma value 2.2 */
+static u8 const s6e63m0_gamma_22[NUM_GAMMA_LEVELS][GAMMA_TABLE_COUNT] = {
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x78, 0xEC, 0x3D, 0xC8,
+         0xC2, 0xB6, 0xC4, 0xC7, 0xB6, 0xD5, 0xD7,
+         0xCC, 0x00, 0x39, 0x00, 0x36, 0x00, 0x51 },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x73, 0x4A, 0x3D, 0xC0,
+         0xC2, 0xB1, 0xBB, 0xBE, 0xAC, 0xCE, 0xCF,
+         0xC5, 0x00, 0x5D, 0x00, 0x5E, 0x00, 0x82 },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x70, 0x51, 0x3E, 0xBF,
+         0xC1, 0xAF, 0xB9, 0xBC, 0xAB, 0xCC, 0xCC,
+         0xC2, 0x00, 0x65, 0x00, 0x67, 0x00, 0x8D },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x6C, 0x54, 0x3A, 0xBC,
+         0xBF, 0xAC, 0xB7, 0xBB, 0xA9, 0xC9, 0xC9,
+         0xBE, 0x00, 0x71, 0x00, 0x73, 0x00, 0x9E },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x69, 0x54, 0x37, 0xBB,
+         0xBE, 0xAC, 0xB4, 0xB7, 0xA6, 0xC7, 0xC8,
+         0xBC, 0x00, 0x7B, 0x00, 0x7E, 0x00, 0xAB },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x66, 0x55, 0x34, 0xBA,
+         0xBD, 0xAB, 0xB1, 0xB5, 0xA3, 0xC5, 0xC6,
+         0xB9, 0x00, 0x85, 0x00, 0x88, 0x00, 0xBA },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x63, 0x53, 0x31, 0xB8,
+         0xBC, 0xA9, 0xB0, 0xB5, 0xA2, 0xC4, 0xC4,
+         0xB8, 0x00, 0x8B, 0x00, 0x8E, 0x00, 0xC2 },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x62, 0x54, 0x30, 0xB9,
+         0xBB, 0xA9, 0xB0, 0xB3, 0xA1, 0xC1, 0xC3,
+         0xB7, 0x00, 0x91, 0x00, 0x95, 0x00, 0xDA },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x66, 0x58, 0x34, 0xB6,
+         0xBA, 0xA7, 0xAF, 0xB3, 0xA0, 0xC1, 0xC2,
+         0xB7, 0x00, 0x97, 0x00, 0x9A, 0x00, 0xD1 },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x64, 0x56, 0x33, 0xB6,
+         0xBA, 0xA8, 0xAC, 0xB1, 0x9D, 0xC1, 0xC1,
+         0xB7, 0x00, 0x9C, 0x00, 0x9F, 0x00, 0xD6 },
+       { MCS_PGAMMACTL, 0x00,
+         0x18, 0x08, 0x24, 0x5F, 0x50, 0x2D, 0xB6,
+         0xB9, 0xA7, 0xAD, 0xB1, 0x9F, 0xBE, 0xC0,
+         0xB5, 0x00, 0xA0, 0x00, 0xA4, 0x00, 0xDB },
+};
+
+struct s6e63m0 {
+       struct device *dev;
+       struct drm_panel panel;
+       struct backlight_device *bl_dev;
+
+       struct regulator_bulk_data supplies[2];
+       struct gpio_desc *reset_gpio;
+
+       bool prepared;
+       bool enabled;
+
+       /*
+        * This field is checked by the bus-access functions before each
+        * transfer; the transfer is skipped while it holds an error. On
+        * transfer failure or an unexpected response it is set to the
+        * error code. This allows higher-level functions to issue a
+        * batch of writes and check for errors only once.
+        */
+       int error;
+};
+
+static const struct drm_display_mode default_mode = {
+       .clock          = 25628,
+       .hdisplay       = 480,
+       .hsync_start    = 480 + 16,
+       .hsync_end      = 480 + 16 + 2,
+       .htotal         = 480 + 16 + 2 + 16,
+       .vdisplay       = 800,
+       .vsync_start    = 800 + 28,
+       .vsync_end      = 800 + 28 + 2,
+       .vtotal         = 800 + 28 + 2 + 1,
+       .vrefresh       = 60,
+       .width_mm       = 53,
+       .height_mm      = 89,
+       .flags          = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
+};
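
The horizontal and vertical fields above encode the active area plus front porch, sync width and back porch. A quick stand-alone sketch (not driver code) shows how the pixel clock and the blanking totals reproduce the 60 Hz `vrefresh`:

```c
#include <assert.h>

/* Illustrative only: recompute a mode's refresh rate from its pixel
 * clock (in kHz, as in struct drm_display_mode) and its totals. */
struct mode_totals { int clock_khz, htotal, vtotal; };

static double mode_vrefresh(const struct mode_totals *m)
{
	/* pixels per frame = htotal * vtotal; clock is pixels per second */
	return (double)m->clock_khz * 1000.0 / (m->htotal * m->vtotal);
}

/* default_mode: htotal = 480+16+2+16 = 514, vtotal = 800+28+2+1 = 831 */
```

With `clock = 25628`, 25628000 / (514 * 831) comes out just above 60 Hz, matching the declared `vrefresh`.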
+
+static inline struct s6e63m0 *panel_to_s6e63m0(struct drm_panel *panel)
+{
+       return container_of(panel, struct s6e63m0, panel);
+}
+
+static int s6e63m0_clear_error(struct s6e63m0 *ctx)
+{
+       int ret = ctx->error;
+
+       ctx->error = 0;
+       return ret;
+}
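
The error-latching idiom above (writes become no-ops once an error is pending, and the error is reported exactly once when cleared) can be sketched on its own, with hypothetical names:

```c
#include <assert.h>

/* Hypothetical stand-alone sketch of the latch pattern: after the
 * first failure, later writes are skipped until the error is read. */
struct latch { int error; };

static void latch_write(struct latch *l, int ret)
{
	if (l->error < 0)
		return;		/* bus already failed: skip this transfer */
	if (ret < 0)
		l->error = ret;	/* remember the first failure */
}

static int latch_clear(struct latch *l)
{
	int ret = l->error;

	l->error = 0;
	return ret;
}
```

This is why the driver can issue a whole init sequence back to back and check the result only once at the end.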
+
+static int s6e63m0_spi_write_word(struct s6e63m0 *ctx, u16 data)
+{
+       struct spi_device *spi = to_spi_device(ctx->dev);
+       struct spi_transfer xfer = {
+               .len    = 2,
+               .tx_buf = &data,
+       };
+       struct spi_message msg;
+
+       spi_message_init(&msg);
+       spi_message_add_tail(&xfer, &msg);
+
+       return spi_sync(spi, &msg);
+}
+
+static void s6e63m0_dcs_write(struct s6e63m0 *ctx, const u8 *data, size_t len)
+{
+       int ret = 0;
+
+       if (ctx->error < 0 || len == 0)
+               return;
+
+       DRM_DEV_DEBUG(ctx->dev, "writing dcs seq: %*ph\n", (int)len, data);
+       ret = s6e63m0_spi_write_word(ctx, *data);
+
+       while (!ret && --len) {
+               ++data;
+               ret = s6e63m0_spi_write_word(ctx, *data | DATA_MASK);
+       }
+
+       if (ret) {
+               DRM_DEV_ERROR(ctx->dev, "error %d writing dcs seq: %*ph\n", ret,
+                             (int)len, data);
+               ctx->error = ret;
+       }
+
+       usleep_range(300, 310);
+}
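
The 3-wire interface uses 9-bit words: the first word of a sequence goes out with bit 8 clear (a command), every following word with bit 8 set (a parameter), which is what `*data | DATA_MASK` does above. `DATA_MASK` is defined elsewhere in the driver; 0x100 is assumed here purely for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define DATA_MASK 0x100	/* assumed: bit 8 marks a parameter word */

/* Illustrative sketch: expand a DCS byte sequence into the 9-bit
 * words sent over the wire, one u16 per byte. */
static void frame_dcs(const uint8_t *data, size_t len, uint16_t *out)
{
	size_t i;

	out[0] = data[0];			/* command: D/C bit clear */
	for (i = 1; i < len; i++)
		out[i] = data[i] | DATA_MASK;	/* parameter: D/C bit set */
}
```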
+
+#define s6e63m0_dcs_write_seq_static(ctx, seq ...) \
+       ({ \
+               static const u8 d[] = { seq }; \
+               s6e63m0_dcs_write(ctx, d, ARRAY_SIZE(d)); \
+       })
+
+static void s6e63m0_init(struct s6e63m0 *ctx)
+{
+       s6e63m0_dcs_write_seq_static(ctx, MCS_PANELCTL,
+                                    0x01, 0x27, 0x27, 0x07, 0x07, 0x54, 0x9f,
+                                    0x63, 0x86, 0x1a, 0x33, 0x0d, 0x00, 0x00);
+
+       s6e63m0_dcs_write_seq_static(ctx, MCS_DISCTL,
+                                    0x02, 0x03, 0x1c, 0x10, 0x10);
+       s6e63m0_dcs_write_seq_static(ctx, MCS_IFCTL,
+                                    0x03, 0x00, 0x00);
+
+       s6e63m0_dcs_write_seq_static(ctx, MCS_PGAMMACTL,
+                                    0x00, 0x18, 0x08, 0x24, 0x64, 0x56, 0x33,
+                                    0xb6, 0xba, 0xa8, 0xac, 0xb1, 0x9d, 0xc1,
+                                    0xc1, 0xb7, 0x00, 0x9c, 0x00, 0x9f, 0x00,
+                                    0xd6);
+       s6e63m0_dcs_write_seq_static(ctx, MCS_PGAMMACTL,
+                                    0x01);
+
+       s6e63m0_dcs_write_seq_static(ctx, MCS_SRCCTL,
+                                    0x00, 0x8c, 0x07);
+       s6e63m0_dcs_write_seq_static(ctx, 0xb3,
+                                    0xc);
+
+       s6e63m0_dcs_write_seq_static(ctx, 0xb5,
+                                    0x2c, 0x12, 0x0c, 0x0a, 0x10, 0x0e, 0x17,
+                                    0x13, 0x1f, 0x1a, 0x2a, 0x24, 0x1f, 0x1b,
+                                    0x1a, 0x17, 0x2b, 0x26, 0x22, 0x20, 0x3a,
+                                    0x34, 0x30, 0x2c, 0x29, 0x26, 0x25, 0x23,
+                                    0x21, 0x20, 0x1e, 0x1e);
+
+       s6e63m0_dcs_write_seq_static(ctx, 0xb6,
+                                    0x00, 0x00, 0x11, 0x22, 0x33, 0x44, 0x44,
+                                    0x44, 0x55, 0x55, 0x66, 0x66, 0x66, 0x66,
+                                    0x66, 0x66);
+
+       s6e63m0_dcs_write_seq_static(ctx, 0xb7,
+                                    0x2c, 0x12, 0x0c, 0x0a, 0x10, 0x0e, 0x17,
+                                    0x13, 0x1f, 0x1a, 0x2a, 0x24, 0x1f, 0x1b,
+                                    0x1a, 0x17, 0x2b, 0x26, 0x22, 0x20, 0x3a,
+                                    0x34, 0x30, 0x2c, 0x29, 0x26, 0x25, 0x23,
+                                    0x21, 0x20, 0x1e, 0x1e, 0x00, 0x00, 0x11,
+                                    0x22, 0x33, 0x44, 0x44, 0x44, 0x55, 0x55,
+                                    0x66, 0x66, 0x66, 0x66, 0x66, 0x66);
+
+       s6e63m0_dcs_write_seq_static(ctx, 0xb9,
+                                    0x2c, 0x12, 0x0c, 0x0a, 0x10, 0x0e, 0x17,
+                                    0x13, 0x1f, 0x1a, 0x2a, 0x24, 0x1f, 0x1b,
+                                    0x1a, 0x17, 0x2b, 0x26, 0x22, 0x20, 0x3a,
+                                    0x34, 0x30, 0x2c, 0x29, 0x26, 0x25, 0x23,
+                                    0x21, 0x20, 0x1e, 0x1e);
+
+       s6e63m0_dcs_write_seq_static(ctx, 0xba,
+                                    0x00, 0x00, 0x11, 0x22, 0x33, 0x44, 0x44,
+                                    0x44, 0x55, 0x55, 0x66, 0x66, 0x66, 0x66,
+                                    0x66, 0x66);
+
+       s6e63m0_dcs_write_seq_static(ctx, MCS_BCMODE,
+                                    0x4d, 0x96, 0x1d, 0x00, 0x00, 0x01, 0xdf,
+                                    0x00, 0x00, 0x03, 0x1f, 0x00, 0x00, 0x00,
+                                    0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x06,
+                                    0x09, 0x0d, 0x0f, 0x12, 0x15, 0x18);
+
+       s6e63m0_dcs_write_seq_static(ctx, 0xb2,
+                                    0x10, 0x10, 0x0b, 0x05);
+
+       s6e63m0_dcs_write_seq_static(ctx, MCS_MIECTL1,
+                                    0x01);
+
+       s6e63m0_dcs_write_seq_static(ctx, MCS_ELVSS_ON,
+                                    0x0b);
+
+       s6e63m0_dcs_write_seq_static(ctx, MIPI_DCS_EXIT_SLEEP_MODE);
+}
+
+static int s6e63m0_power_on(struct s6e63m0 *ctx)
+{
+       int ret;
+
+       ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
+       if (ret < 0)
+               return ret;
+
+       msleep(25);
+
+       gpiod_set_value(ctx->reset_gpio, 0);
+       msleep(120);
+
+       return 0;
+}
+
+static int s6e63m0_power_off(struct s6e63m0 *ctx)
+{
+       int ret;
+
+       gpiod_set_value(ctx->reset_gpio, 1);
+       msleep(120);
+
+       ret = regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
+       if (ret < 0)
+               return ret;
+
+       return 0;
+}
+
+static int s6e63m0_disable(struct drm_panel *panel)
+{
+       struct s6e63m0 *ctx = panel_to_s6e63m0(panel);
+
+       if (!ctx->enabled)
+               return 0;
+
+       backlight_disable(ctx->bl_dev);
+
+       s6e63m0_dcs_write_seq_static(ctx, MIPI_DCS_ENTER_SLEEP_MODE);
+       msleep(200);
+
+       ctx->enabled = false;
+
+       return 0;
+}
+
+static int s6e63m0_unprepare(struct drm_panel *panel)
+{
+       struct s6e63m0 *ctx = panel_to_s6e63m0(panel);
+       int ret;
+
+       if (!ctx->prepared)
+               return 0;
+
+       s6e63m0_clear_error(ctx);
+
+       ret = s6e63m0_power_off(ctx);
+       if (ret < 0)
+               return ret;
+
+       ctx->prepared = false;
+
+       return 0;
+}
+
+static int s6e63m0_prepare(struct drm_panel *panel)
+{
+       struct s6e63m0 *ctx = panel_to_s6e63m0(panel);
+       int ret;
+
+       if (ctx->prepared)
+               return 0;
+
+       ret = s6e63m0_power_on(ctx);
+       if (ret < 0)
+               return ret;
+
+       s6e63m0_init(ctx);
+
+       ret = s6e63m0_clear_error(ctx);
+       if (ret < 0) {
+               s6e63m0_unprepare(panel);
+               return ret;
+       }
+
+       ctx->prepared = true;
+
+       return 0;
+}
+
+static int s6e63m0_enable(struct drm_panel *panel)
+{
+       struct s6e63m0 *ctx = panel_to_s6e63m0(panel);
+
+       if (ctx->enabled)
+               return 0;
+
+       s6e63m0_dcs_write_seq_static(ctx, MIPI_DCS_SET_DISPLAY_ON);
+
+       backlight_enable(ctx->bl_dev);
+
+       ctx->enabled = true;
+
+       return 0;
+}
+
+static int s6e63m0_get_modes(struct drm_panel *panel)
+{
+       struct drm_connector *connector = panel->connector;
+       struct drm_display_mode *mode;
+
+       mode = drm_mode_duplicate(panel->drm, &default_mode);
+       if (!mode) {
+               DRM_ERROR("failed to add mode %ux%u@%u\n",
+                         default_mode.hdisplay, default_mode.vdisplay,
+                         default_mode.vrefresh);
+               return -ENOMEM;
+       }
+
+       drm_mode_set_name(mode);
+
+       mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
+       drm_mode_probed_add(connector, mode);
+
+       return 1;
+}
+
+static const struct drm_panel_funcs s6e63m0_drm_funcs = {
+       .disable        = s6e63m0_disable,
+       .unprepare      = s6e63m0_unprepare,
+       .prepare        = s6e63m0_prepare,
+       .enable         = s6e63m0_enable,
+       .get_modes      = s6e63m0_get_modes,
+};
+
+static int s6e63m0_set_brightness(struct backlight_device *bd)
+{
+       struct s6e63m0 *ctx = bl_get_data(bd);
+
+       int brightness = bd->props.brightness;
+
+       /* write the gamma curve for this brightness level */
+       s6e63m0_dcs_write(ctx, s6e63m0_gamma_22[brightness],
+                         ARRAY_SIZE(s6e63m0_gamma_22[brightness]));
+
+       /* apply the new gamma curve */
+       s6e63m0_dcs_write_seq_static(ctx, MCS_PGAMMACTL, 0x01);
+
+       return s6e63m0_clear_error(ctx);
+}
+
+static const struct backlight_ops s6e63m0_backlight_ops = {
+       .update_status  = s6e63m0_set_brightness,
+};
+
+static int s6e63m0_backlight_register(struct s6e63m0 *ctx)
+{
+       struct backlight_properties props = {
+               .type           = BACKLIGHT_RAW,
+               .brightness     = MAX_BRIGHTNESS,
+               .max_brightness = MAX_BRIGHTNESS
+       };
+       struct device *dev = ctx->dev;
+       int ret = 0;
+
+       ctx->bl_dev = devm_backlight_device_register(dev, "panel", dev, ctx,
+                                                    &s6e63m0_backlight_ops,
+                                                    &props);
+       if (IS_ERR(ctx->bl_dev)) {
+               ret = PTR_ERR(ctx->bl_dev);
+               DRM_DEV_ERROR(dev, "error registering backlight device (%d)\n",
+                             ret);
+       }
+
+       return ret;
+}
+
+static int s6e63m0_probe(struct spi_device *spi)
+{
+       struct device *dev = &spi->dev;
+       struct s6e63m0 *ctx;
+       int ret;
+
+       ctx = devm_kzalloc(dev, sizeof(struct s6e63m0), GFP_KERNEL);
+       if (!ctx)
+               return -ENOMEM;
+
+       spi_set_drvdata(spi, ctx);
+
+       ctx->dev = dev;
+       ctx->enabled = false;
+       ctx->prepared = false;
+
+       ctx->supplies[0].supply = "vdd3";
+       ctx->supplies[1].supply = "vci";
+       ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ctx->supplies),
+                                     ctx->supplies);
+       if (ret < 0) {
+               DRM_DEV_ERROR(dev, "failed to get regulators: %d\n", ret);
+               return ret;
+       }
+
+       ctx->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
+       if (IS_ERR(ctx->reset_gpio)) {
+               DRM_DEV_ERROR(dev, "cannot get reset-gpios %ld\n",
+                             PTR_ERR(ctx->reset_gpio));
+               return PTR_ERR(ctx->reset_gpio);
+       }
+
+       spi->bits_per_word = 9;
+       spi->mode = SPI_MODE_3;
+       ret = spi_setup(spi);
+       if (ret < 0) {
+               DRM_DEV_ERROR(dev, "spi setup failed.\n");
+               return ret;
+       }
+
+       drm_panel_init(&ctx->panel);
+       ctx->panel.dev = dev;
+       ctx->panel.funcs = &s6e63m0_drm_funcs;
+
+       ret = s6e63m0_backlight_register(ctx);
+       if (ret < 0)
+               return ret;
+
+       return drm_panel_add(&ctx->panel);
+}
+
+static int s6e63m0_remove(struct spi_device *spi)
+{
+       struct s6e63m0 *ctx = spi_get_drvdata(spi);
+
+       drm_panel_remove(&ctx->panel);
+
+       return 0;
+}
+
+static const struct of_device_id s6e63m0_of_match[] = {
+       { .compatible = "samsung,s6e63m0" },
+       { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, s6e63m0_of_match);
+
+static struct spi_driver s6e63m0_driver = {
+       .probe                  = s6e63m0_probe,
+       .remove                 = s6e63m0_remove,
+       .driver                 = {
+               .name           = "panel-samsung-s6e63m0",
+               .of_match_table = s6e63m0_of_match,
+       },
+};
+module_spi_driver(s6e63m0_driver);
+
+MODULE_AUTHOR("Paweł Chmiel <pawel.mikolaj.chmiel@gmail.com>");
+MODULE_DESCRIPTION("s6e63m0 LCD Driver");
+MODULE_LICENSE("GPL v2");
index 569be4e..c22c471 100644 (file)
@@ -1096,6 +1096,56 @@ static const struct panel_desc dlc_dlc1010gig = {
        .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
 };
 
+static const struct drm_display_mode edt_et035012dm6_mode = {
+       .clock = 6500,
+       .hdisplay = 320,
+       .hsync_start = 320 + 20,
+       .hsync_end = 320 + 20 + 30,
+       .htotal = 320 + 20 + 68,
+       .vdisplay = 240,
+       .vsync_start = 240 + 4,
+       .vsync_end = 240 + 4 + 4,
+       .vtotal = 240 + 4 + 4 + 14,
+       .vrefresh = 60,
+       .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
+};
+
+static const struct panel_desc edt_et035012dm6 = {
+       .modes = &edt_et035012dm6_mode,
+       .num_modes = 1,
+       .bpc = 8,
+       .size = {
+               .width = 70,
+               .height = 52,
+       },
+       .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+       .bus_flags = DRM_BUS_FLAG_DE_LOW | DRM_BUS_FLAG_PIXDATA_NEGEDGE,
+};
+
+static const struct drm_display_mode edt_etm0430g0dh6_mode = {
+       .clock = 9000,
+       .hdisplay = 480,
+       .hsync_start = 480 + 2,
+       .hsync_end = 480 + 2 + 41,
+       .htotal = 480 + 2 + 41 + 2,
+       .vdisplay = 272,
+       .vsync_start = 272 + 2,
+       .vsync_end = 272 + 2 + 10,
+       .vtotal = 272 + 2 + 10 + 2,
+       .vrefresh = 60,
+       .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+};
+
+static const struct panel_desc edt_etm0430g0dh6 = {
+       .modes = &edt_etm0430g0dh6_mode,
+       .num_modes = 1,
+       .bpc = 6,
+       .size = {
+               .width = 95,
+               .height = 54,
+       },
+};
+
 static const struct drm_display_mode edt_et057090dhu_mode = {
        .clock = 25175,
        .hdisplay = 640,
@@ -1160,6 +1210,33 @@ static const struct panel_desc edt_etm0700g0bdh6 = {
        .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
 };
 
+static const struct display_timing evervision_vgg804821_timing = {
+       .pixelclock = { 27600000, 33300000, 50000000 },
+       .hactive = { 800, 800, 800 },
+       .hfront_porch = { 40, 66, 70 },
+       .hback_porch = { 40, 67, 70 },
+       .hsync_len = { 40, 67, 70 },
+       .vactive = { 480, 480, 480 },
+       .vfront_porch = { 6, 10, 10 },
+       .vback_porch = { 7, 11, 11 },
+       .vsync_len = { 7, 11, 11 },
+       .flags = DISPLAY_FLAGS_HSYNC_HIGH | DISPLAY_FLAGS_VSYNC_HIGH |
+                DISPLAY_FLAGS_DE_HIGH | DISPLAY_FLAGS_PIXDATA_NEGEDGE |
+                DISPLAY_FLAGS_SYNC_NEGEDGE,
+};
+
+static const struct panel_desc evervision_vgg804821 = {
+       .timings = &evervision_vgg804821_timing,
+       .num_timings = 1,
+       .bpc = 8,
+       .size = {
+               .width = 108,
+               .height = 64,
+       },
+       .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+       .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_NEGEDGE,
+};
+
 static const struct drm_display_mode foxlink_fl500wvr00_a0t_mode = {
        .clock = 32260,
        .hdisplay = 800,
@@ -1184,6 +1261,29 @@ static const struct panel_desc foxlink_fl500wvr00_a0t = {
        .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
 };
 
+static const struct drm_display_mode friendlyarm_hd702e_mode = {
+       .clock          = 67185,
+       .hdisplay       = 800,
+       .hsync_start    = 800 + 20,
+       .hsync_end      = 800 + 20 + 24,
+       .htotal         = 800 + 20 + 24 + 20,
+       .vdisplay       = 1280,
+       .vsync_start    = 1280 + 4,
+       .vsync_end      = 1280 + 4 + 8,
+       .vtotal         = 1280 + 4 + 8 + 4,
+       .vrefresh       = 60,
+       .flags          = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
+};
+
+static const struct panel_desc friendlyarm_hd702e = {
+       .modes = &friendlyarm_hd702e_mode,
+       .num_modes = 1,
+       .size = {
+               .width  = 94,
+               .height = 151,
+       },
+};
+
 static const struct drm_display_mode giantplus_gpg482739qs5_mode = {
        .clock = 9000,
        .hdisplay = 480,
@@ -2355,6 +2455,31 @@ static const struct panel_desc starry_kr122ea0sra = {
        },
 };
 
+static const struct drm_display_mode tfc_s9700rtwv43tr_01b_mode = {
+       .clock = 30000,
+       .hdisplay = 800,
+       .hsync_start = 800 + 39,
+       .hsync_end = 800 + 39 + 47,
+       .htotal = 800 + 39 + 47 + 39,
+       .vdisplay = 480,
+       .vsync_start = 480 + 13,
+       .vsync_end = 480 + 13 + 2,
+       .vtotal = 480 + 13 + 2 + 29,
+       .vrefresh = 62,
+};
+
+static const struct panel_desc tfc_s9700rtwv43tr_01b = {
+       .modes = &tfc_s9700rtwv43tr_01b_mode,
+       .num_modes = 1,
+       .bpc = 8,
+       .size = {
+               .width = 155,
+               .height = 90,
+       },
+       .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+       .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_POSEDGE,
+};
+
 static const struct display_timing tianma_tm070jdhg30_timing = {
        .pixelclock = { 62600000, 68200000, 78100000 },
        .hactive = { 1280, 1280, 1280 },
@@ -2508,6 +2633,32 @@ static const struct panel_desc urt_umsh_8596md_parallel = {
        .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
 };
 
+static const struct drm_display_mode vl050_8048nt_c01_mode = {
+       .clock = 33333,
+       .hdisplay = 800,
+       .hsync_start = 800 + 210,
+       .hsync_end = 800 + 210 + 20,
+       .htotal = 800 + 210 + 20 + 46,
+       .vdisplay =  480,
+       .vsync_start = 480 + 22,
+       .vsync_end = 480 + 22 + 10,
+       .vtotal = 480 + 22 + 10 + 23,
+       .vrefresh = 60,
+       .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+};
+
+static const struct panel_desc vl050_8048nt_c01 = {
+       .modes = &vl050_8048nt_c01_mode,
+       .num_modes = 1,
+       .bpc = 8,
+       .size = {
+               .width = 120,
+               .height = 76,
+       },
+       .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+       .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_POSEDGE,
+};
+
 static const struct drm_display_mode winstar_wf35ltiacd_mode = {
        .clock = 6410,
        .hdisplay = 320,
@@ -2645,6 +2796,12 @@ static const struct of_device_id platform_of_match[] = {
        }, {
                .compatible = "dlc,dlc1010gig",
                .data = &dlc_dlc1010gig,
+       }, {
+               .compatible = "edt,et035012dm6",
+               .data = &edt_et035012dm6,
+       }, {
+               .compatible = "edt,etm0430g0dh6",
+               .data = &edt_etm0430g0dh6,
        }, {
                .compatible = "edt,et057090dhu",
                .data = &edt_et057090dhu,
@@ -2660,9 +2817,15 @@ static const struct of_device_id platform_of_match[] = {
        }, {
                .compatible = "edt,etm0700g0edh6",
                .data = &edt_etm0700g0bdh6,
+       }, {
+               .compatible = "evervision,vgg804821",
+               .data = &evervision_vgg804821,
        }, {
                .compatible = "foxlink,fl500wvr00-a0t",
                .data = &foxlink_fl500wvr00_a0t,
+       }, {
+               .compatible = "friendlyarm,hd702e",
+               .data = &friendlyarm_hd702e,
        }, {
                .compatible = "giantplus,gpg482739qs5",
                .data = &giantplus_gpg482739qs5
@@ -2801,6 +2964,9 @@ static const struct of_device_id platform_of_match[] = {
        }, {
                .compatible = "starry,kr122ea0sra",
                .data = &starry_kr122ea0sra,
+       }, {
+               .compatible = "tfc,s9700rtwv43tr-01b",
+               .data = &tfc_s9700rtwv43tr_01b,
        }, {
                .compatible = "tianma,tm070jdhg30",
                .data = &tianma_tm070jdhg30,
@@ -2834,6 +3000,9 @@ static const struct of_device_id platform_of_match[] = {
        }, {
                .compatible = "urt,umsh-8596md-20t",
                .data = &urt_umsh_8596md_parallel,
+       }, {
+               .compatible = "vxt,vl050-8048nt-c01",
+               .data = &vl050_8048nt_c01,
        }, {
                .compatible = "winstar,wf35ltiacd",
                .data = &winstar_wf35ltiacd,
@@ -3053,6 +3222,37 @@ static const struct panel_desc_dsi lg_acx467akm_7 = {
        .lanes = 4,
 };
 
+static const struct drm_display_mode osd101t2045_53ts_mode = {
+       .clock = 154500,
+       .hdisplay = 1920,
+       .hsync_start = 1920 + 112,
+       .hsync_end = 1920 + 112 + 16,
+       .htotal = 1920 + 112 + 16 + 32,
+       .vdisplay = 1200,
+       .vsync_start = 1200 + 16,
+       .vsync_end = 1200 + 16 + 2,
+       .vtotal = 1200 + 16 + 2 + 16,
+       .vrefresh = 60,
+       .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+};
+
+static const struct panel_desc_dsi osd101t2045_53ts = {
+       .desc = {
+               .modes = &osd101t2045_53ts_mode,
+               .num_modes = 1,
+               .bpc = 8,
+               .size = {
+                       .width = 217,
+                       .height = 136,
+               },
+       },
+       .flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
+                MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+                MIPI_DSI_MODE_EOT_PACKET,
+       .format = MIPI_DSI_FMT_RGB888,
+       .lanes = 4,
+};
+
 static const struct of_device_id dsi_of_match[] = {
        {
                .compatible = "auo,b080uan01",
@@ -3072,6 +3272,9 @@ static const struct of_device_id dsi_of_match[] = {
        }, {
                .compatible = "lg,acx467akm-7",
                .data = &lg_acx467akm_7
+       }, {
+               .compatible = "osddisplays,osd101t2045-53ts",
+               .data = &osd101t2045_53ts
        }, {
                /* sentinel */
        }
@@ -3098,7 +3301,14 @@ static int panel_simple_dsi_probe(struct mipi_dsi_device *dsi)
        dsi->format = desc->format;
        dsi->lanes = desc->lanes;
 
-       return mipi_dsi_attach(dsi);
+       err = mipi_dsi_attach(dsi);
+       if (err) {
+               struct panel_simple *panel = dev_get_drvdata(&dsi->dev);
+
+               drm_panel_remove(&panel->base);
+       }
+
+       return err;
 }
 
 static int panel_simple_dsi_remove(struct mipi_dsi_device *dsi)
index 3b2bced..ccb8eb2 100644 (file)
@@ -55,11 +55,33 @@ static int panfrost_clk_init(struct panfrost_device *pfdev)
        if (err)
                return err;
 
+       pfdev->bus_clock = devm_clk_get_optional(pfdev->dev, "bus");
+       if (IS_ERR(pfdev->bus_clock)) {
+               dev_err(pfdev->dev, "failed to get bus_clock: %ld\n",
+                       PTR_ERR(pfdev->bus_clock));
+               return PTR_ERR(pfdev->bus_clock);
+       }
+
+       if (pfdev->bus_clock) {
+               rate = clk_get_rate(pfdev->bus_clock);
+               dev_info(pfdev->dev, "bus_clock rate = %lu\n", rate);
+
+               err = clk_prepare_enable(pfdev->bus_clock);
+               if (err)
+                       goto disable_clock;
+       }
+
        return 0;
+
+disable_clock:
+       clk_disable_unprepare(pfdev->clock);
+
+       return err;
 }
 
 static void panfrost_clk_fini(struct panfrost_device *pfdev)
 {
+       clk_disable_unprepare(pfdev->bus_clock);
        clk_disable_unprepare(pfdev->clock);
 }
 
index 56f452d..8074f22 100644 (file)
@@ -66,6 +66,7 @@ struct panfrost_device {
 
        void __iomem *iomem;
        struct clk *clock;
+       struct clk *bus_clock;
        struct regulator *regulator;
        struct reset_control *rstc;
 
index a5716c8..9bb9260 100644 (file)
@@ -387,7 +387,7 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
        mutex_lock(&pfdev->reset_lock);
 
        for (i = 0; i < NUM_JOB_SLOTS; i++)
-               drm_sched_stop(&pfdev->js->queue[i].sched);
+               drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
 
        if (sched_job)
                drm_sched_increase_karma(sched_job);
index 1298b84..287e3f9 100644 (file)
@@ -125,6 +125,7 @@ static int radeonfb_create_pinned_object(struct radeon_fbdev *rfbdev,
                                         struct drm_mode_fb_cmd2 *mode_cmd,
                                         struct drm_gem_object **gobj_p)
 {
+       const struct drm_format_info *info;
        struct radeon_device *rdev = rfbdev->rdev;
        struct drm_gem_object *gobj = NULL;
        struct radeon_bo *rbo = NULL;
@@ -135,7 +136,8 @@ static int radeonfb_create_pinned_object(struct radeon_fbdev *rfbdev,
        int height = mode_cmd->height;
        u32 cpp;
 
-       cpp = drm_format_plane_cpp(mode_cmd->pixel_format, 0);
+       info = drm_get_format_info(rdev->ddev, mode_cmd);
+       cpp = info->cpp[0];
 
        /* need to align pitch with crtc limits */
        mode_cmd->pitches[0] = radeon_align_pitch(rdev, mode_cmd->width, cpp,
index 97438bb..31030cf 100644 (file)
@@ -74,23 +74,18 @@ static struct drm_framebuffer *
 rockchip_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
                        const struct drm_mode_fb_cmd2 *mode_cmd)
 {
+       const struct drm_format_info *info = drm_get_format_info(dev,
+                                                                mode_cmd);
        struct drm_framebuffer *fb;
        struct drm_gem_object *objs[ROCKCHIP_MAX_FB_BUFFER];
        struct drm_gem_object *obj;
-       unsigned int hsub;
-       unsigned int vsub;
-       int num_planes;
+       int num_planes = min_t(int, info->num_planes, ROCKCHIP_MAX_FB_BUFFER);
        int ret;
        int i;
 
-       hsub = drm_format_horz_chroma_subsampling(mode_cmd->pixel_format);
-       vsub = drm_format_vert_chroma_subsampling(mode_cmd->pixel_format);
-       num_planes = min(drm_format_num_planes(mode_cmd->pixel_format),
-                        ROCKCHIP_MAX_FB_BUFFER);
-
        for (i = 0; i < num_planes; i++) {
-               unsigned int width = mode_cmd->width / (i ? hsub : 1);
-               unsigned int height = mode_cmd->height / (i ? vsub : 1);
+               unsigned int width = mode_cmd->width / (i ? info->hsub : 1);
+               unsigned int height = mode_cmd->height / (i ? info->vsub : 1);
                unsigned int min_size;
 
                obj = drm_gem_object_lookup(file_priv, mode_cmd->handles[i]);
@@ -103,7 +98,7 @@ rockchip_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
 
                min_size = (height - 1) * mode_cmd->pitches[i] +
                        mode_cmd->offsets[i] +
-                       width * drm_format_plane_cpp(mode_cmd->pixel_format, i);
+                       width * info->cpp[i];
 
                if (obj->size < min_size) {
                        drm_gem_object_put_unlocked(obj);
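
The `min_size` expression above is the usual lower bound for a plane's backing buffer: a full pitch for every line but the last, plus the last line's actual payload and the plane's start offset, with chroma subsampling applied to the non-zero planes exactly as in the loop. A stand-alone sketch of that arithmetic:

```c
#include <assert.h>

/* Illustrative only: smallest buffer that can back plane i of a
 * framebuffer, mirroring the rockchip_user_fb_create() check. */
static unsigned int plane_min_size(unsigned int width, unsigned int height,
				   unsigned int pitch, unsigned int offset,
				   unsigned int cpp, int i,
				   unsigned int hsub, unsigned int vsub)
{
	unsigned int w = width / (i ? hsub : 1);
	unsigned int h = height / (i ? vsub : 1);

	return (h - 1) * pitch + offset + w * cpp;
}
```

For a tightly packed single-plane format this collapses to `width * height * cpp`; subsampled chroma planes need proportionally less.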
index 20a9c29..4189ca1 100644 (file)
@@ -315,24 +315,19 @@ static uint16_t scl_vop_cal_scale(enum scale_mode mode, uint32_t src,
 
 static void scl_vop_cal_scl_fac(struct vop *vop, const struct vop_win_data *win,
                             uint32_t src_w, uint32_t src_h, uint32_t dst_w,
-                            uint32_t dst_h, uint32_t pixel_format)
+                            uint32_t dst_h, const struct drm_format_info *info)
 {
        uint16_t yrgb_hor_scl_mode, yrgb_ver_scl_mode;
        uint16_t cbcr_hor_scl_mode = SCALE_NONE;
        uint16_t cbcr_ver_scl_mode = SCALE_NONE;
-       int hsub = drm_format_horz_chroma_subsampling(pixel_format);
-       int vsub = drm_format_vert_chroma_subsampling(pixel_format);
-       const struct drm_format_info *info;
        bool is_yuv = false;
-       uint16_t cbcr_src_w = src_w / hsub;
-       uint16_t cbcr_src_h = src_h / vsub;
+       uint16_t cbcr_src_w = src_w / info->hsub;
+       uint16_t cbcr_src_h = src_h / info->vsub;
        uint16_t vsu_mode;
        uint16_t lb_mode;
        uint32_t val;
        int vskiplines;
 
-       info = drm_format_info(pixel_format);
-
        if (info->is_yuv)
                is_yuv = true;
 
@@ -831,8 +826,8 @@ static void vop_plane_atomic_update(struct drm_plane *plane,
                    (state->rotation & DRM_MODE_REFLECT_X) ? 1 : 0);
 
        if (is_yuv) {
-               int hsub = drm_format_horz_chroma_subsampling(fb->format->format);
-               int vsub = drm_format_vert_chroma_subsampling(fb->format->format);
+               int hsub = fb->format->hsub;
+               int vsub = fb->format->vsub;
                int bpp = fb->format->cpp[1];
 
                uv_obj = fb->obj[1];
@@ -856,7 +851,7 @@ static void vop_plane_atomic_update(struct drm_plane *plane,
        if (win->phy->scl)
                scl_vop_cal_scl_fac(vop, win, actual_w, actual_h,
                                    drm_rect_width(dest), drm_rect_height(dest),
-                                   fb->format->format);
+                                   fb->format);
 
        VOP_WIN_SET(vop, win, act_info, act_info);
        VOP_WIN_SET(vop, win, dsp_info, dsp_info);
@@ -1222,17 +1217,6 @@ static void vop_crtc_destroy(struct drm_crtc *crtc)
        drm_crtc_cleanup(crtc);
 }
 
-static void vop_crtc_reset(struct drm_crtc *crtc)
-{
-       if (crtc->state)
-               __drm_atomic_helper_crtc_destroy_state(crtc->state);
-       kfree(crtc->state);
-
-       crtc->state = kzalloc(sizeof(struct rockchip_crtc_state), GFP_KERNEL);
-       if (crtc->state)
-               crtc->state->crtc = crtc;
-}
-
 static struct drm_crtc_state *vop_crtc_duplicate_state(struct drm_crtc *crtc)
 {
        struct rockchip_crtc_state *rockchip_state;
@@ -1254,6 +1238,17 @@ static void vop_crtc_destroy_state(struct drm_crtc *crtc,
        kfree(s);
 }
 
+static void vop_crtc_reset(struct drm_crtc *crtc)
+{
+       struct rockchip_crtc_state *crtc_state =
+               kzalloc(sizeof(*crtc_state), GFP_KERNEL);
+
+       if (crtc->state)
+               vop_crtc_destroy_state(crtc, crtc->state);
+
+       __drm_atomic_helper_crtc_reset(crtc, &crtc_state->base);
+}
+
 #ifdef CONFIG_DRM_ANALOGIX_DP
 static struct drm_connector *vop_get_edp_connector(struct vop *vop)
 {
index a1bec27..cf596fc 100644 (file)
@@ -265,32 +265,6 @@ void drm_sched_resume_timeout(struct drm_gpu_scheduler *sched,
 }
 EXPORT_SYMBOL(drm_sched_resume_timeout);
 
-/* job_finish is called after hw fence signaled
- */
-static void drm_sched_job_finish(struct work_struct *work)
-{
-       struct drm_sched_job *s_job = container_of(work, struct drm_sched_job,
-                                                  finish_work);
-       struct drm_gpu_scheduler *sched = s_job->sched;
-       unsigned long flags;
-
-       /*
-        * Canceling the timeout without removing our job from the ring mirror
-        * list is safe, as we will only end up in this worker if our jobs
-        * finished fence has been signaled. So even if some another worker
-        * manages to find this job as the next job in the list, the fence
-        * signaled check below will prevent the timeout to be restarted.
-        */
-       cancel_delayed_work_sync(&sched->work_tdr);
-
-       spin_lock_irqsave(&sched->job_list_lock, flags);
-       /* queue TDR for next job */
-       drm_sched_start_timeout(sched);
-       spin_unlock_irqrestore(&sched->job_list_lock, flags);
-
-       sched->ops->free_job(s_job);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
        struct drm_gpu_scheduler *sched = s_job->sched;
@@ -315,6 +289,15 @@ static void drm_sched_job_timedout(struct work_struct *work)
        if (job)
                job->sched->ops->timedout_job(job);
 
+       /*
+        * The guilty job completed after all, so it must be freed manually
+        * here; see the drm_sched_stop() documentation.
+        */
+       if (sched->free_guilty) {
+               job->sched->ops->free_job(job);
+               sched->free_guilty = false;
+       }
+
        spin_lock_irqsave(&sched->job_list_lock, flags);
        drm_sched_start_timeout(sched);
        spin_unlock_irqrestore(&sched->job_list_lock, flags);
@@ -370,40 +353,66 @@ EXPORT_SYMBOL(drm_sched_increase_karma);
  *
  * @sched: scheduler instance
  *
+ * Stops the scheduler and also removes and frees all completed jobs.
+ * Note: the bad job will not be freed, as it might be used later, so it
+ * is the caller's responsibility to release it manually if it is no
+ * longer part of the mirror list.
+ *
  */
-void drm_sched_stop(struct drm_gpu_scheduler *sched)
+void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 {
-       struct drm_sched_job *s_job;
+       struct drm_sched_job *s_job, *tmp;
        unsigned long flags;
-       struct dma_fence *last_fence =  NULL;
 
        kthread_park(sched->thread);
 
        /*
-        * Verify all the signaled jobs in mirror list are removed from the ring
-        * by waiting for the latest job to enter the list. This should insure that
-        * also all the previous jobs that were in flight also already singaled
-        * and removed from the list.
+        * Iterate the job list from the last to the first entry and either
+        * deactivate each job's HW callback or remove the job from the mirror
+        * list if it has already signaled.
+        * This iteration is thread-safe because the scheduler thread is parked.
         */
-       spin_lock_irqsave(&sched->job_list_lock, flags);
-       list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
+       list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list, node) {
                if (s_job->s_fence->parent &&
                    dma_fence_remove_callback(s_job->s_fence->parent,
                                              &s_job->cb)) {
-                       dma_fence_put(s_job->s_fence->parent);
-                       s_job->s_fence->parent = NULL;
                        atomic_dec(&sched->hw_rq_count);
                } else {
-                        last_fence = dma_fence_get(&s_job->s_fence->finished);
-                        break;
+                       /*
+                        * Remove the job from ring_mirror_list; the lock
+                        * guards against a concurrent resume of the timeout.
+                        */
+                       spin_lock_irqsave(&sched->job_list_lock, flags);
+                       list_del_init(&s_job->node);
+                       spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
+                       /*
+                        * Wait for the job's HW fence callback to finish using
+                        * s_job before releasing it.
+                        *
+                        * The job is still alive here, so the fence refcount is
+                        * at least 1.
+                        */
+                       dma_fence_wait(&s_job->s_fence->finished, false);
+
+                       /*
+                        * We must keep the bad job alive for later use during
+                        * recovery by some drivers, but leave a hint that the
+                        * guilty job must still be released.
+                        */
+                       if (bad != s_job)
+                               sched->ops->free_job(s_job);
+                       else
+                               sched->free_guilty = true;
                }
        }
-       spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
-       if (last_fence) {
-               dma_fence_wait(last_fence, false);
-               dma_fence_put(last_fence);
-       }
+       /*
+        * Stop the pending timer in flight, as we rearm it in drm_sched_start().
+        * This prevents in-progress timeout work from firing right away after
+        * this TDR finishes and before the newly restarted jobs have had a
+        * chance to complete.
+        */
+       cancel_delayed_work(&sched->work_tdr);
 }
 
 EXPORT_SYMBOL(drm_sched_stop);
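The reworked drm_sched_stop() walks ring_mirror_list from tail to head with the `_safe_reverse` iterator so entries can be unlinked mid-walk. A minimal userspace sketch of that pattern, with a hypothetical hand-rolled circular list (names like `reap_done_reverse` are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *prev, *next; int done; };

/* Circular list head, in the spirit of the kernel's struct list_head. */
static void list_init(struct node *h) { h->prev = h->next = h; }

static void list_add_tail(struct node *h, struct node *n)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

static void list_del(struct node *n)
{
	n->prev->next = n->next; n->next->prev = n->prev;
	n->prev = n->next = n;
}

/* Walk from tail to head; caching the predecessor first makes it safe
 * to unlink the current entry, mirroring what
 * list_for_each_entry_safe_reverse() does in drm_sched_stop(). */
static int reap_done_reverse(struct node *head)
{
	int reaped = 0;
	struct node *n, *tmp;

	for (n = head->prev, tmp = n->prev; n != head; n = tmp, tmp = n->prev) {
		if (n->done) {
			list_del(n);
			reaped++;
		}
	}
	return reaped;
}
```

The non-safe variant would dereference a freed or relinked node after `list_del()`; caching `tmp` before touching the entry avoids that.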
@@ -417,21 +426,22 @@ EXPORT_SYMBOL(drm_sched_stop);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 {
        struct drm_sched_job *s_job, *tmp;
+       unsigned long flags;
        int r;
 
-       if (!full_recovery)
-               goto unpark;
-
        /*
         * Locking the list is not required here as the sched thread is parked
-        * so no new jobs are being pushed in to HW and in drm_sched_stop we
-        * flushed all the jobs who were still in mirror list but who already
-        * signaled and removed them self from the list. Also concurrent
+        * so no new jobs are being inserted or removed. Also concurrent
         * GPU recovers can't run in parallel.
         */
        list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
                struct dma_fence *fence = s_job->s_fence->parent;
 
+               atomic_inc(&sched->hw_rq_count);
+
+               if (!full_recovery)
+                       continue;
+
                if (fence) {
                        r = dma_fence_add_callback(fence, &s_job->cb,
                                                   drm_sched_process_job);
@@ -444,9 +454,12 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
                        drm_sched_process_job(NULL, &s_job->cb);
        }
 
-       drm_sched_start_timeout(sched);
+       if (full_recovery) {
+               spin_lock_irqsave(&sched->job_list_lock, flags);
+               drm_sched_start_timeout(sched);
+               spin_unlock_irqrestore(&sched->job_list_lock, flags);
+       }
 
-unpark:
        kthread_unpark(sched->thread);
 }
 EXPORT_SYMBOL(drm_sched_start);
@@ -463,7 +476,6 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
        uint64_t guilty_context;
        bool found_guilty = false;
 
-       /*TODO DO we need spinlock here ? */
        list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
                struct drm_sched_fence *s_fence = s_job->s_fence;
 
@@ -475,8 +487,8 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
                if (found_guilty && s_job->s_fence->scheduled.context == guilty_context)
                        dma_fence_set_error(&s_fence->finished, -ECANCELED);
 
+               dma_fence_put(s_job->s_fence->parent);
                s_job->s_fence->parent = sched->ops->run_job(s_job);
-               atomic_inc(&sched->hw_rq_count);
        }
 }
 EXPORT_SYMBOL(drm_sched_resubmit_jobs);
@@ -513,7 +525,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
                return -ENOMEM;
        job->id = atomic64_inc_return(&sched->job_id_count);
 
-       INIT_WORK(&job->finish_work, drm_sched_job_finish);
        INIT_LIST_HEAD(&job->node);
 
        return 0;
@@ -596,24 +607,54 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
        struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
        struct drm_sched_fence *s_fence = s_job->s_fence;
        struct drm_gpu_scheduler *sched = s_fence->sched;
-       unsigned long flags;
-
-       cancel_delayed_work(&sched->work_tdr);
 
        atomic_dec(&sched->hw_rq_count);
        atomic_dec(&sched->num_jobs);
 
-       spin_lock_irqsave(&sched->job_list_lock, flags);
-       /* remove job from ring_mirror_list */
-       list_del_init(&s_job->node);
-       spin_unlock_irqrestore(&sched->job_list_lock, flags);
+       trace_drm_sched_process_job(s_fence);
 
        drm_sched_fence_finished(s_fence);
-
-       trace_drm_sched_process_job(s_fence);
        wake_up_interruptible(&sched->wake_up_worker);
+}
+
+/**
+ * drm_sched_cleanup_jobs - destroy finished jobs
+ *
+ * @sched: scheduler instance
+ *
+ * Remove all finished jobs from the mirror list and destroy them.
+ */
+static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched)
+{
+       unsigned long flags;
+
+       /* Don't destroy jobs while the timeout worker is running */
+       if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
+           !cancel_delayed_work(&sched->work_tdr))
+               return;
+
+       while (!list_empty(&sched->ring_mirror_list)) {
+               struct drm_sched_job *job;
+
+               job = list_first_entry(&sched->ring_mirror_list,
+                                      struct drm_sched_job, node);
+               if (!dma_fence_is_signaled(&job->s_fence->finished))
+                       break;
+
+               spin_lock_irqsave(&sched->job_list_lock, flags);
+               /* remove job from ring_mirror_list */
+               list_del_init(&job->node);
+               spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
+               sched->ops->free_job(job);
+       }
+
+       /* queue timeout for next job */
+       spin_lock_irqsave(&sched->job_list_lock, flags);
+       drm_sched_start_timeout(sched);
+       spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
-       schedule_work(&s_job->finish_work);
 }
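drm_sched_cleanup_jobs() above frees jobs strictly from the head of the mirror list and stops at the first unfinished one, preserving submission order. A small sketch of that drain-the-completed-prefix rule (the array stand-in and `drain_completed` name are assumptions for illustration):

```c
#include <assert.h>

/* Hypothetical stand-in for the ring mirror list: jobs indexed in
 * submission order, signaled[i] nonzero once job i's fence completed. */
static int drain_completed(const int *signaled, int n)
{
	int freed = 0;

	/* Stop at the first unfinished job: later jobs may have signaled
	 * out of order, but they are only reaped once everything ahead of
	 * them is done, just as the loop over ring_mirror_list breaks on
	 * the first fence that is not yet signaled. */
	while (freed < n && signaled[freed])
		freed++;
	return freed;
}
```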
 
 /**
@@ -655,9 +696,10 @@ static int drm_sched_main(void *param)
                struct dma_fence *fence;
 
                wait_event_interruptible(sched->wake_up_worker,
+                                        (drm_sched_cleanup_jobs(sched),
                                         (!drm_sched_blocked(sched) &&
                                          (entity = drm_sched_select_entity(sched))) ||
-                                        kthread_should_stop());
+                                        kthread_should_stop()));
 
                if (!entity)
                        continue;
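The hunk above folds drm_sched_cleanup_jobs() into the wait_event_interruptible() condition using the C comma operator, so cleanup runs on every wakeup before the actual condition is evaluated. A tiny sketch of that expression shape (names are illustrative):

```c
#include <assert.h>

static int cleanups;

static void cleanup(void)
{
	cleanups++; /* side effect standing in for drm_sched_cleanup_jobs() */
}

/* (cleanup(), ready) evaluates cleanup() for its side effect and then
 * yields 'ready' as the value of the whole expression, which is exactly
 * how the scheduler's wait condition is structured. */
static int check_condition(int ready)
{
	return (cleanup(), ready);
}
```

This guarantees finished jobs are reaped even on wakeups where the thread goes right back to sleep.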
index 1bef73e..d8e4a14 100644 (file)
@@ -9,6 +9,7 @@
 #include <linux/clk.h>
 #include <linux/iopoll.h>
 #include <linux/module.h>
+#include <linux/regulator/consumer.h>
 #include <drm/drmP.h>
 #include <drm/drm_mipi_dsi.h>
 #include <drm/bridge/dw_mipi_dsi.h>
@@ -76,6 +77,7 @@ struct dw_mipi_dsi_stm {
        u32 hw_version;
        int lane_min_kbps;
        int lane_max_kbps;
+       struct regulator *vdd_supply;
 };
 
 static inline void dsi_write(struct dw_mipi_dsi_stm *dsi, u32 reg, u32 val)
@@ -314,21 +316,36 @@ static int dw_mipi_dsi_stm_probe(struct platform_device *pdev)
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        dsi->base = devm_ioremap_resource(dev, res);
        if (IS_ERR(dsi->base)) {
-               DRM_ERROR("Unable to get dsi registers\n");
-               return PTR_ERR(dsi->base);
+               ret = PTR_ERR(dsi->base);
+               DRM_ERROR("Unable to get dsi registers: %d\n", ret);
+               return ret;
+       }
+
+       dsi->vdd_supply = devm_regulator_get(dev, "phy-dsi");
+       if (IS_ERR(dsi->vdd_supply)) {
+               ret = PTR_ERR(dsi->vdd_supply);
+               if (ret != -EPROBE_DEFER)
+                       DRM_ERROR("Failed to request regulator: %d\n", ret);
+               return ret;
+       }
+
+       ret = regulator_enable(dsi->vdd_supply);
+       if (ret) {
+               DRM_ERROR("Failed to enable regulator: %d\n", ret);
+               return ret;
        }
 
        dsi->pllref_clk = devm_clk_get(dev, "ref");
        if (IS_ERR(dsi->pllref_clk)) {
                ret = PTR_ERR(dsi->pllref_clk);
-               dev_err(dev, "Unable to get pll reference clock: %d\n", ret);
-               return ret;
+               DRM_ERROR("Unable to get pll reference clock: %d\n", ret);
+               goto err_clk_get;
        }
 
        ret = clk_prepare_enable(dsi->pllref_clk);
        if (ret) {
-               dev_err(dev, "%s: Failed to enable pllref_clk\n", __func__);
-               return ret;
+               DRM_ERROR("Failed to enable pllref_clk: %d\n", ret);
+               goto err_clk_get;
        }
 
        dw_mipi_dsi_stm_plat_data.base = dsi->base;
@@ -338,20 +355,28 @@ static int dw_mipi_dsi_stm_probe(struct platform_device *pdev)
 
        dsi->dsi = dw_mipi_dsi_probe(pdev, &dw_mipi_dsi_stm_plat_data);
        if (IS_ERR(dsi->dsi)) {
-               DRM_ERROR("Failed to initialize mipi dsi host\n");
-               clk_disable_unprepare(dsi->pllref_clk);
-               return PTR_ERR(dsi->dsi);
+               ret = PTR_ERR(dsi->dsi);
+               DRM_ERROR("Failed to initialize mipi dsi host: %d\n", ret);
+               goto err_dsi_probe;
        }
 
        return 0;
+
+err_dsi_probe:
+       clk_disable_unprepare(dsi->pllref_clk);
+err_clk_get:
+       regulator_disable(dsi->vdd_supply);
+
+       return ret;
 }
 
 static int dw_mipi_dsi_stm_remove(struct platform_device *pdev)
 {
        struct dw_mipi_dsi_stm *dsi = platform_get_drvdata(pdev);
 
-       clk_disable_unprepare(dsi->pllref_clk);
        dw_mipi_dsi_remove(dsi->dsi);
+       clk_disable_unprepare(dsi->pllref_clk);
+       regulator_disable(dsi->vdd_supply);
 
        return 0;
 }
@@ -363,6 +388,7 @@ static int __maybe_unused dw_mipi_dsi_stm_suspend(struct device *dev)
        DRM_DEBUG_DRIVER("\n");
 
        clk_disable_unprepare(dsi->pllref_clk);
+       regulator_disable(dsi->vdd_supply);
 
        return 0;
 }
@@ -370,10 +396,22 @@ static int __maybe_unused dw_mipi_dsi_stm_suspend(struct device *dev)
 static int __maybe_unused dw_mipi_dsi_stm_resume(struct device *dev)
 {
        struct dw_mipi_dsi_stm *dsi = dw_mipi_dsi_stm_plat_data.priv_data;
+       int ret;
 
        DRM_DEBUG_DRIVER("\n");
 
-       clk_prepare_enable(dsi->pllref_clk);
+       ret = regulator_enable(dsi->vdd_supply);
+       if (ret) {
+               DRM_ERROR("Failed to enable regulator: %d\n", ret);
+               return ret;
+       }
+
+       ret = clk_prepare_enable(dsi->pllref_clk);
+       if (ret) {
+               regulator_disable(dsi->vdd_supply);
+               DRM_ERROR("Failed to enable pllref_clk: %d\n", ret);
+               return ret;
+       }
 
        return 0;
 }
index 32fd6a3..14eb8c4 100644 (file)
@@ -232,6 +232,11 @@ static const enum ltdc_pix_fmt ltdc_pix_fmt_a1[NB_PF] = {
        PF_ARGB4444             /* 0x07 */
 };
 
+static const u64 ltdc_format_modifiers[] = {
+       DRM_FORMAT_MOD_LINEAR,
+       DRM_FORMAT_MOD_INVALID
+};
+
 static inline u32 reg_read(void __iomem *base, u32 reg)
 {
        return readl_relaxed(base + reg);
@@ -426,8 +431,8 @@ static void ltdc_crtc_atomic_enable(struct drm_crtc *crtc,
        /* Enable IRQ */
        reg_set(ldev->regs, LTDC_IER, IER_RRIE | IER_FUIE | IER_TERRIE);
 
-       /* Immediately commit the planes */
-       reg_set(ldev->regs, LTDC_SRCR, SRCR_IMR);
+       /* Commit shadow registers = update planes at next vblank */
+       reg_set(ldev->regs, LTDC_SRCR, SRCR_VBR);
 
        /* Enable LTDC */
        reg_set(ldev->regs, LTDC_GCR, GCR_LTDCEN);
@@ -555,7 +560,7 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
        if (vm.flags & DISPLAY_FLAGS_VSYNC_HIGH)
                val |= GCR_VSPOL;
 
-       if (vm.flags & DISPLAY_FLAGS_DE_HIGH)
+       if (vm.flags & DISPLAY_FLAGS_DE_LOW)
                val |= GCR_DEPOL;
 
        if (vm.flags & DISPLAY_FLAGS_PIXDATA_NEGEDGE)
@@ -779,7 +784,7 @@ static void ltdc_plane_atomic_update(struct drm_plane *plane,
 
        /* Configures the color frame buffer pitch in bytes & line length */
        pitch_in_bytes = fb->pitches[0];
-       line_length = drm_format_plane_cpp(fb->format->format, 0) *
+       line_length = fb->format->cpp[0] *
                      (x1 - x0 + 1) + (ldev->caps.bus_width >> 3) - 1;
        val = ((pitch_in_bytes << 16) | line_length);
        reg_update_bits(ldev->regs, LTDC_L1CFBLR + lofs,
@@ -822,11 +827,11 @@ static void ltdc_plane_atomic_update(struct drm_plane *plane,
 
        mutex_lock(&ldev->err_lock);
        if (ldev->error_status & ISR_FUIF) {
-               DRM_DEBUG_DRIVER("Fifo underrun\n");
+               DRM_WARN("ltdc fifo underrun: please verify display mode\n");
                ldev->error_status &= ~ISR_FUIF;
        }
        if (ldev->error_status & ISR_TERRIF) {
-               DRM_DEBUG_DRIVER("Transfer error\n");
+               DRM_WARN("ltdc transfer error\n");
                ldev->error_status &= ~ISR_TERRIF;
        }
        mutex_unlock(&ldev->err_lock);
@@ -864,6 +869,16 @@ static void ltdc_plane_atomic_print_state(struct drm_printer *p,
        fpsi->counter = 0;
 }
 
+static bool ltdc_plane_format_mod_supported(struct drm_plane *plane,
+                                           u32 format,
+                                           u64 modifier)
+{
+       if (modifier == DRM_FORMAT_MOD_LINEAR)
+               return true;
+
+       return false;
+}
+
 static const struct drm_plane_funcs ltdc_plane_funcs = {
        .update_plane = drm_atomic_helper_update_plane,
        .disable_plane = drm_atomic_helper_disable_plane,
@@ -872,6 +887,7 @@ static const struct drm_plane_funcs ltdc_plane_funcs = {
        .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
        .atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
        .atomic_print_state = ltdc_plane_atomic_print_state,
+       .format_mod_supported = ltdc_plane_format_mod_supported,
 };
 
 static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = {
@@ -890,6 +906,7 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
        unsigned int i, nb_fmt = 0;
        u32 formats[NB_PF * 2];
        u32 drm_fmt, drm_fmt_no_alpha;
+       const u64 *modifiers = ltdc_format_modifiers;
        int ret;
 
        /* Get supported pixel formats */
@@ -918,7 +935,7 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
 
        ret = drm_universal_plane_init(ddev, plane, possible_crtcs,
                                       &ltdc_plane_funcs, formats, nb_fmt,
-                                      NULL, type, NULL);
+                                      modifiers, type, NULL);
        if (ret < 0)
                return NULL;
 
@@ -1021,10 +1038,13 @@ static int ltdc_get_caps(struct drm_device *ddev)
        struct ltdc_device *ldev = ddev->dev_private;
        u32 bus_width_log2, lcr, gc2r;
 
-       /* at least 1 layer must be managed */
+       /*
+        * At least one layer must be managed, and the number of layers
+        * must not exceed LTDC_MAX_LAYER.
+        */
        lcr = reg_read(ldev->regs, LTDC_LCR);
 
-       ldev->caps.nb_layers = max_t(int, lcr, 1);
+       ldev->caps.nb_layers = clamp((int)lcr, 1, LTDC_MAX_LAYER);
 
        /* set data bus width */
        gc2r = reg_read(ldev->regs, LTDC_GC2R);
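The LCR read above is now clamped to [1, LTDC_MAX_LAYER] instead of only being floored at 1, so a bogus high register value can no longer overflow the layer arrays. A userspace sketch of the kernel's clamp() semantics (the LTDC_MAX_LAYER value here is assumed for illustration):

```c
#include <assert.h>

#define LTDC_MAX_LAYER 4 /* assumed value, for illustration only */

/* Equivalent of the kernel's clamp(val, lo, hi) for ints. */
static int clamp_int(int v, int lo, int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}
```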
@@ -1125,8 +1145,9 @@ int ltdc_load(struct drm_device *ddev)
 
        ldev->pixel_clk = devm_clk_get(dev, "lcd");
        if (IS_ERR(ldev->pixel_clk)) {
-               DRM_ERROR("Unable to get lcd clock\n");
-               return -ENODEV;
+               if (PTR_ERR(ldev->pixel_clk) != -EPROBE_DEFER)
+                       DRM_ERROR("Unable to get lcd clock\n");
+               return PTR_ERR(ldev->pixel_clk);
        }
 
        if (clk_prepare_enable(ldev->pixel_clk)) {
@@ -1134,6 +1155,12 @@ int ltdc_load(struct drm_device *ddev)
                return -ENODEV;
        }
 
+       if (!IS_ERR(rstc)) {
+               reset_control_assert(rstc);
+               usleep_range(10, 20);
+               reset_control_deassert(rstc);
+       }
+
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        ldev->regs = devm_ioremap_resource(dev, res);
        if (IS_ERR(ldev->regs)) {
@@ -1142,8 +1169,15 @@ int ltdc_load(struct drm_device *ddev)
                goto err;
        }
 
+       /* Disable interrupts */
+       reg_clear(ldev->regs, LTDC_IER,
+                 IER_LIE | IER_RRIE | IER_FUIE | IER_TERRIE);
+
        for (i = 0; i < MAX_IRQ; i++) {
                irq = platform_get_irq(pdev, i);
+               if (irq == -EPROBE_DEFER)
+                       goto err;
+
                if (irq < 0)
                        continue;
 
@@ -1156,15 +1190,6 @@ int ltdc_load(struct drm_device *ddev)
                }
        }
 
-       if (!IS_ERR(rstc)) {
-               reset_control_assert(rstc);
-               usleep_range(10, 20);
-               reset_control_deassert(rstc);
-       }
-
-       /* Disable interrupts */
-       reg_clear(ldev->regs, LTDC_IER,
-                 IER_LIE | IER_RRIE | IER_FUIE | IER_TERRIE);
 
        ret = ltdc_get_caps(ddev);
        if (ret) {
@@ -1203,6 +1228,8 @@ int ltdc_load(struct drm_device *ddev)
                goto err;
        }
 
+       ddev->mode_config.allow_fb_modifiers = true;
+
        ret = ltdc_crtc_init(ddev, crtc);
        if (ret) {
                DRM_ERROR("Failed to init crtc\n");
index 29258b4..0270d7e 100644 (file)
@@ -53,22 +53,8 @@ static struct drm_driver sun4i_drv_driver = {
        .minor                  = 0,
 
        /* GEM Operations */
+       DRM_GEM_CMA_VMAP_DRIVER_OPS,
        .dumb_create            = drm_sun4i_gem_dumb_create,
-       .gem_free_object_unlocked = drm_gem_cma_free_object,
-       .gem_vm_ops             = &drm_gem_cma_vm_ops,
-
-       /* PRIME Operations */
-       .prime_handle_to_fd     = drm_gem_prime_handle_to_fd,
-       .prime_fd_to_handle     = drm_gem_prime_fd_to_handle,
-       .gem_prime_import       = drm_gem_prime_import,
-       .gem_prime_export       = drm_gem_prime_export,
-       .gem_prime_get_sg_table = drm_gem_cma_prime_get_sg_table,
-       .gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table,
-       .gem_prime_vmap         = drm_gem_cma_prime_vmap,
-       .gem_prime_vunmap       = drm_gem_cma_prime_vunmap,
-       .gem_prime_mmap         = drm_gem_cma_prime_mmap,
-
-       /* Frame Buffer Operations */
 };
 
 static int sun4i_drv_bind(struct device *dev)
index bfa7e2b..a1fc8b5 100644 (file)
@@ -980,6 +980,7 @@ static ssize_t sun6i_dsi_transfer(struct mipi_dsi_host *host,
        switch (msg->type) {
        case MIPI_DSI_DCS_SHORT_WRITE:
        case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
+       case MIPI_DSI_GENERIC_SHORT_WRITE_2_PARAM:
                ret = sun6i_dsi_dcs_write_short(dsi, msg);
                break;
 
index 607a6ea..079250c 100644 (file)
@@ -26,6 +26,9 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_plane_helper.h>
 
+static void tegra_crtc_atomic_destroy_state(struct drm_crtc *crtc,
+                                           struct drm_crtc_state *state);
+
 static void tegra_dc_stats_reset(struct tegra_dc_stats *stats)
 {
        stats->frames = 0;
@@ -1155,20 +1158,12 @@ static void tegra_dc_destroy(struct drm_crtc *crtc)
 
 static void tegra_crtc_reset(struct drm_crtc *crtc)
 {
-       struct tegra_dc_state *state;
+       struct tegra_dc_state *state = kzalloc(sizeof(*state), GFP_KERNEL);
 
        if (crtc->state)
-               __drm_atomic_helper_crtc_destroy_state(crtc->state);
-
-       kfree(crtc->state);
-       crtc->state = NULL;
-
-       state = kzalloc(sizeof(*state), GFP_KERNEL);
-       if (state) {
-               crtc->state = &state->base;
-               crtc->state->crtc = crtc;
-       }
+               tegra_crtc_atomic_destroy_state(crtc, crtc->state);
 
+       __drm_atomic_helper_crtc_reset(crtc, &state->base);
        drm_crtc_vblank_reset(crtc);
 }
 
index 1dd83a7..57cc26e 100644 (file)
@@ -131,18 +131,16 @@ struct drm_framebuffer *tegra_fb_create(struct drm_device *drm,
                                        struct drm_file *file,
                                        const struct drm_mode_fb_cmd2 *cmd)
 {
-       unsigned int hsub, vsub, i;
+       const struct drm_format_info *info = drm_get_format_info(drm, cmd);
        struct tegra_bo *planes[4];
        struct drm_gem_object *gem;
        struct drm_framebuffer *fb;
+       unsigned int i;
        int err;
 
-       hsub = drm_format_horz_chroma_subsampling(cmd->pixel_format);
-       vsub = drm_format_vert_chroma_subsampling(cmd->pixel_format);
-
-       for (i = 0; i < drm_format_num_planes(cmd->pixel_format); i++) {
-               unsigned int width = cmd->width / (i ? hsub : 1);
-               unsigned int height = cmd->height / (i ? vsub : 1);
+       for (i = 0; i < info->num_planes; i++) {
+               unsigned int width = cmd->width / (i ? info->hsub : 1);
+               unsigned int height = cmd->height / (i ? info->vsub : 1);
                unsigned int size, bpp;
 
                gem = drm_gem_object_lookup(file, cmd->handles[i]);
@@ -151,7 +149,7 @@ struct drm_framebuffer *tegra_fb_create(struct drm_device *drm,
                        goto unreference;
                }
 
-               bpp = drm_format_plane_cpp(cmd->pixel_format, i);
+               bpp = info->cpp[i];
 
                size = (height - 1) * cmd->pitches[i] +
                       width * bpp + cmd->offsets[i];
index a24af2d..78a7893 100644 (file)
@@ -26,6 +26,11 @@ static const struct v3d_reg_def v3d_hub_reg_defs[] = {
        REGDEF(V3D_HUB_IDENT3),
        REGDEF(V3D_HUB_INT_STS),
        REGDEF(V3D_HUB_INT_MSK_STS),
+
+       REGDEF(V3D_MMU_CTL),
+       REGDEF(V3D_MMU_VIO_ADDR),
+       REGDEF(V3D_MMU_VIO_ID),
+       REGDEF(V3D_MMU_DEBUG_INFO),
 };
 
 static const struct v3d_reg_def v3d_gca_reg_defs[] = {
@@ -50,12 +55,25 @@ static const struct v3d_reg_def v3d_core_reg_defs[] = {
        REGDEF(V3D_PTB_BPCA),
        REGDEF(V3D_PTB_BPCS),
 
-       REGDEF(V3D_MMU_CTL),
-       REGDEF(V3D_MMU_VIO_ADDR),
-
        REGDEF(V3D_GMP_STATUS),
        REGDEF(V3D_GMP_CFG),
        REGDEF(V3D_GMP_VIO_ADDR),
+
+       REGDEF(V3D_ERR_FDBGO),
+       REGDEF(V3D_ERR_FDBGB),
+       REGDEF(V3D_ERR_FDBGS),
+       REGDEF(V3D_ERR_STAT),
+};
+
+static const struct v3d_reg_def v3d_csd_reg_defs[] = {
+       REGDEF(V3D_CSD_STATUS),
+       REGDEF(V3D_CSD_CURRENT_CFG0),
+       REGDEF(V3D_CSD_CURRENT_CFG1),
+       REGDEF(V3D_CSD_CURRENT_CFG2),
+       REGDEF(V3D_CSD_CURRENT_CFG3),
+       REGDEF(V3D_CSD_CURRENT_CFG4),
+       REGDEF(V3D_CSD_CURRENT_CFG5),
+       REGDEF(V3D_CSD_CURRENT_CFG6),
 };
 
 static int v3d_v3d_debugfs_regs(struct seq_file *m, void *unused)
@@ -89,6 +107,17 @@ static int v3d_v3d_debugfs_regs(struct seq_file *m, void *unused)
                                   V3D_CORE_READ(core,
                                                 v3d_core_reg_defs[i].reg));
                }
+
+               if (v3d_has_csd(v3d)) {
+                       for (i = 0; i < ARRAY_SIZE(v3d_csd_reg_defs); i++) {
+                               seq_printf(m, "core %d %s (0x%04x): 0x%08x\n",
+                                          core,
+                                          v3d_csd_reg_defs[i].name,
+                                          v3d_csd_reg_defs[i].reg,
+                                          V3D_CORE_READ(core,
+                                                        v3d_csd_reg_defs[i].reg));
+                       }
+               }
        }
 
        return 0;
index a06b05f..fea597f 100644 (file)
@@ -7,9 +7,9 @@
  * This driver supports the Broadcom V3D 3.3 and 4.1 OpenGL ES GPUs.
  * For V3D 2.x support, see the VC4 driver.
  *
- * Currently only single-core rendering using the binner and renderer,
- * along with TFU (texture formatting unit) rendering is supported.
- * V3D 4.x's CSD (compute shader dispatch) is not yet supported.
+ * The V3D GPU includes a tiled renderer (composed of the bin and render
+ * pipelines), the TFU (texture formatting unit), and the CSD (compute
+ * shader dispatch).
  */
 
 #include <linux/clk.h>
@@ -120,6 +120,9 @@ static int v3d_get_param_ioctl(struct drm_device *dev, void *data,
        case DRM_V3D_PARAM_SUPPORTS_TFU:
                args->value = 1;
                return 0;
+       case DRM_V3D_PARAM_SUPPORTS_CSD:
+               args->value = v3d_has_csd(v3d);
+               return 0;
        default:
                DRM_DEBUG("Unknown parameter %d\n", args->param);
                return -EINVAL;
@@ -179,6 +182,7 @@ static const struct drm_ioctl_desc v3d_drm_ioctls[] = {
        DRM_IOCTL_DEF_DRV(V3D_GET_PARAM, v3d_get_param_ioctl, DRM_RENDER_ALLOW),
        DRM_IOCTL_DEF_DRV(V3D_GET_BO_OFFSET, v3d_get_bo_offset_ioctl, DRM_RENDER_ALLOW),
        DRM_IOCTL_DEF_DRV(V3D_SUBMIT_TFU, v3d_submit_tfu_ioctl, DRM_RENDER_ALLOW | DRM_AUTH),
+       DRM_IOCTL_DEF_DRV(V3D_SUBMIT_CSD, v3d_submit_csd_ioctl, DRM_RENDER_ALLOW | DRM_AUTH),
 };
 
 static struct drm_driver v3d_drm_driver = {
@@ -235,9 +239,9 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
        struct drm_device *drm;
        struct v3d_dev *v3d;
        int ret;
+       u32 mmu_debug;
        u32 ident1;
 
-       dev->coherent_dma_mask = DMA_BIT_MASK(36);
 
        v3d = kzalloc(sizeof(*v3d), GFP_KERNEL);
        if (!v3d)
@@ -254,6 +258,11 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
        if (ret)
                goto dev_free;
 
+       mmu_debug = V3D_READ(V3D_MMU_DEBUG_INFO);
+       dev->coherent_dma_mask =
+               DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH));
+       v3d->va_width = 30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_VA_WIDTH);
+
        ident1 = V3D_READ(V3D_HUB_IDENT1);
        v3d->ver = (V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_TVER) * 10 +
                    V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_REV));
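Rather than hard-coding a 36-bit coherent DMA mask, the probe above derives it from V3D_MMU_DEBUG_INFO, where the PA-width field counts address bits beyond a 30-bit base. A sketch of that computation (the helper name is hypothetical; DMA_BIT_MASK mirrors the kernel macro):

```c
#include <assert.h>
#include <stdint.h>

/* Kernel's DMA_BIT_MASK(n): a mask with the low n bits set. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* The MMU reports physical-address width as bits beyond a 30-bit base,
 * so the usable mask is DMA_BIT_MASK(30 + pa_width_field). */
static uint64_t v3d_dma_mask(unsigned int pa_width_field)
{
	return DMA_BIT_MASK(30 + pa_width_field);
}
```

With a PA-width field of 6 this reproduces the old hard-coded 36-bit mask, while other hardware revisions get the width they actually support.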
index e9d4a2f..9aad9da 100644 (file)
@@ -16,9 +16,11 @@ enum v3d_queue {
        V3D_BIN,
        V3D_RENDER,
        V3D_TFU,
+       V3D_CSD,
+       V3D_CACHE_CLEAN,
 };
 
-#define V3D_MAX_QUEUES (V3D_TFU + 1)
+#define V3D_MAX_QUEUES (V3D_CACHE_CLEAN + 1)
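The V3D_MAX_QUEUES define above uses the last-enumerator-plus-one idiom, so arrays indexed by queue automatically grow when a queue is appended to the enum. A standalone sketch of the idiom (the `Q_*` names are illustrative, not the driver's):

```c
#include <assert.h>

/* Mirrors enum v3d_queue: appending an enumerator before the MAX
 * define transparently resizes every array indexed by the enum. */
enum demo_queue {
	Q_BIN,
	Q_RENDER,
	Q_TFU,
	Q_CSD,
	Q_CACHE_CLEAN,
};

#define Q_MAX_QUEUES (Q_CACHE_CLEAN + 1)
```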
 
 struct v3d_queue_state {
        struct drm_gpu_scheduler sched;
@@ -55,6 +57,8 @@ struct v3d_dev {
         */
        void *mmu_scratch;
        dma_addr_t mmu_scratch_paddr;
+       /* virtual address bits from V3D to the MMU. */
+       int va_width;
 
        /* Number of V3D cores. */
        u32 cores;
@@ -67,9 +71,10 @@ struct v3d_dev {
 
        struct work_struct overflow_mem_work;
 
-       struct v3d_exec_info *bin_job;
-       struct v3d_exec_info *render_job;
+       struct v3d_bin_job *bin_job;
+       struct v3d_render_job *render_job;
        struct v3d_tfu_job *tfu_job;
+       struct v3d_csd_job *csd_job;
 
        struct v3d_queue_state queue[V3D_MAX_QUEUES];
 
@@ -92,6 +97,12 @@ struct v3d_dev {
         */
        struct mutex sched_lock;
 
+       /* Lock taken during a cache clean and when initiating an L2
+        * flush, to keep L2 flushes from interfering with the
+        * synchronous L2 cleans.
+        */
+       struct mutex cache_clean_lock;
+
        struct {
                u32 num_allocated;
                u32 pages_allocated;
@@ -104,6 +115,12 @@ to_v3d_dev(struct drm_device *dev)
        return (struct v3d_dev *)dev->dev_private;
 }
 
+static inline bool
+v3d_has_csd(struct v3d_dev *v3d)
+{
+       return v3d->ver >= 41;
+}
+
 /* The per-fd struct, which tracks the MMU mappings. */
 struct v3d_file_priv {
        struct v3d_dev *v3d;
@@ -117,7 +134,7 @@ struct v3d_bo {
        struct drm_mm_node node;
 
        /* List entry for the BO's position in
-        * v3d_exec_info->unref_list
+        * v3d_render_job->unref_list
         */
        struct list_head unref_head;
 };
@@ -157,67 +174,74 @@ to_v3d_fence(struct dma_fence *fence)
 struct v3d_job {
        struct drm_sched_job base;
 
-       struct v3d_exec_info *exec;
+       struct kref refcount;
 
-       /* An optional fence userspace can pass in for the job to depend on. */
-       struct dma_fence *in_fence;
+       struct v3d_dev *v3d;
+
+       /* This is the array of BOs that were looked up at the start
+        * of submission.
+        */
+       struct drm_gem_object **bo;
+       u32 bo_count;
+
+       /* Array of struct dma_fence * to block on before submitting this job.
+        */
+       struct xarray deps;
+       unsigned long last_dep;
 
        /* v3d fence to be signaled by IRQ handler when the job is complete. */
        struct dma_fence *irq_fence;
 
+       /* scheduler fence for when the job is considered complete and
+        * the BO reservations can be released.
+        */
+       struct dma_fence *done_fence;
+
+       /* Callback for the freeing of the job on refcount going to 0. */
+       void (*free)(struct kref *ref);
+};
+
+struct v3d_bin_job {
+       struct v3d_job base;
+
        /* GPU virtual addresses of the start/end of the CL job. */
        u32 start, end;
 
        u32 timedout_ctca, timedout_ctra;
-};
 
-struct v3d_exec_info {
-       struct v3d_dev *v3d;
+       /* Corresponding render job, for attaching our overflow memory. */
+       struct v3d_render_job *render;
 
-       struct v3d_job bin, render;
-
-       /* Fence for when the scheduler considers the binner to be
-        * done, for render to depend on.
-        */
-       struct dma_fence *bin_done_fence;
+       /* Submitted tile memory allocation start/size, tile state. */
+       u32 qma, qms, qts;
+};
 
-       /* Fence for when the scheduler considers the render to be
-        * done, for when the BOs reservations should be complete.
-        */
-       struct dma_fence *render_done_fence;
+struct v3d_render_job {
+       struct v3d_job base;
 
-       struct kref refcount;
+       /* GPU virtual addresses of the start/end of the CL job. */
+       u32 start, end;
 
-       /* This is the array of BOs that were looked up at the start of exec. */
-       struct v3d_bo **bo;
-       u32 bo_count;
+       u32 timedout_ctca, timedout_ctra;
 
        /* List of overflow BOs used in the job that need to be
         * released once the job is complete.
         */
        struct list_head unref_list;
-
-       /* Submitted tile memory allocation start/size, tile state. */
-       u32 qma, qms, qts;
 };
 
 struct v3d_tfu_job {
-       struct drm_sched_job base;
+       struct v3d_job base;
 
        struct drm_v3d_submit_tfu args;
+};
 
-       /* An optional fence userspace can pass in for the job to depend on. */
-       struct dma_fence *in_fence;
-
-       /* v3d fence to be signaled by IRQ handler when the job is complete. */
-       struct dma_fence *irq_fence;
-
-       struct v3d_dev *v3d;
+struct v3d_csd_job {
+       struct v3d_job base;
 
-       struct kref refcount;
+       u32 timedout_batches;
 
-       /* This is the array of BOs that were looked up at the start of exec. */
-       struct v3d_bo *bo[4];
+       struct drm_v3d_submit_csd args;
 };
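The structs above replace the monolithic v3d_exec_info with a shared struct v3d_job base that each job type (bin, render, TFU, CSD) embeds as its first member; the base carries the refcount and a type-specific free callback. Below is a hypothetical userspace sketch of that embed-the-base pattern — a plain int stands in for the kernel's struct kref, and all names are illustrative, not from the driver:

```c
#include <stdlib.h>

/* Sketch: shared base with refcount + destructor callback, embedded
 * as the first member of each concrete job type.
 */
struct job {
	int refcount;
	void (*free)(struct job *job);
};

struct render_job {
	struct job base;
	int n_unref_bos;      /* stands in for the unref_list */
};

static int render_freed;  /* lets the sketch observe the final put */

static void job_free(struct job *job)
{
	free(job);
}

static void render_job_free(struct job *job)
{
	struct render_job *render = (struct render_job *)job;

	/* release type-specific resources, then the shared part */
	render->n_unref_bos = 0;
	render_freed = 1;
	job_free(job);
}

static void job_init(struct job *job, void (*free_fn)(struct job *))
{
	job->refcount = 1;
	job->free = free_fn;
}

static void job_get(struct job *job)
{
	job->refcount++;
}

static void job_put(struct job *job)
{
	if (--job->refcount == 0)
		job->free(job);
}

static int submit_demo(void)
{
	struct render_job *render = calloc(1, sizeof(*render));

	if (!render)
		return -1;
	job_init(&render->base, render_job_free);
	job_get(&render->base);   /* extra ref, as taken for the scheduler */
	job_put(&render->base);   /* "scheduler completion" drops its ref */
	job_put(&render->base);   /* submit path's put runs render_job_free() */
	return 0;
}
```

Because the free callback is stored in the base, a single v3d_job_put() works for every job type, which is exactly what lets the ioctls below share one cleanup path.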
 
 /**
@@ -281,12 +305,14 @@ int v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
                        struct drm_file *file_priv);
 int v3d_submit_tfu_ioctl(struct drm_device *dev, void *data,
                         struct drm_file *file_priv);
+int v3d_submit_csd_ioctl(struct drm_device *dev, void *data,
+                        struct drm_file *file_priv);
 int v3d_wait_bo_ioctl(struct drm_device *dev, void *data,
                      struct drm_file *file_priv);
-void v3d_exec_put(struct v3d_exec_info *exec);
-void v3d_tfu_job_put(struct v3d_tfu_job *exec);
+void v3d_job_put(struct v3d_job *job);
 void v3d_reset(struct v3d_dev *v3d);
 void v3d_invalidate_caches(struct v3d_dev *v3d);
+void v3d_clean_caches(struct v3d_dev *v3d);
 
 /* v3d_irq.c */
 int v3d_irq_init(struct v3d_dev *v3d);
index b0a2a1a..89840ed 100644 (file)
@@ -36,6 +36,8 @@ static const char *v3d_fence_get_timeline_name(struct dma_fence *fence)
                return "v3d-render";
        case V3D_TFU:
                return "v3d-tfu";
+       case V3D_CSD:
+               return "v3d-csd";
        default:
                return NULL;
        }
index 93ff8fc..27e0f87 100644 (file)
@@ -109,7 +109,9 @@ v3d_reset(struct v3d_dev *v3d)
 {
        struct drm_device *dev = &v3d->drm;
 
-       DRM_ERROR("Resetting GPU.\n");
+       DRM_DEV_ERROR(dev->dev, "Resetting GPU for hang.\n");
+       DRM_DEV_ERROR(dev->dev, "V3D_ERR_STAT: 0x%08x\n",
+                     V3D_CORE_READ(0, V3D_ERR_STAT));
        trace_v3d_reset_begin(dev);
 
        /* XXX: only needed for safe powerdown, not reset. */
@@ -162,10 +164,52 @@ v3d_flush_l2t(struct v3d_dev *v3d, int core)
        /* While there is a busy bit (V3D_L2TCACTL_L2TFLS), we don't
         * need to wait for completion before dispatching the job --
         * L2T accesses will be stalled until the flush has completed.
+        * However, we do need to make sure we don't try to trigger a
+        * new flush while the L2_CLEAN queue is trying to
+        * synchronously clean after a job.
         */
+       mutex_lock(&v3d->cache_clean_lock);
        V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL,
                       V3D_L2TCACTL_L2TFLS |
                       V3D_SET_FIELD(V3D_L2TCACTL_FLM_FLUSH, V3D_L2TCACTL_FLM));
+       mutex_unlock(&v3d->cache_clean_lock);
+}
+
+/* Cleans texture L1 and L2 cachelines (writing back dirty data).
+ *
+ * For cleaning, which happens from the CACHE_CLEAN queue after CSD has
+ * executed, we need to make sure that the clean is done before
+ * signaling job completion.  So, we synchronously wait before
+ * returning, and we make sure that L2 invalidates don't happen in the
+ * meantime to confuse our are-we-done checks.
+ */
+void
+v3d_clean_caches(struct v3d_dev *v3d)
+{
+       struct drm_device *dev = &v3d->drm;
+       int core = 0;
+
+       trace_v3d_cache_clean_begin(dev);
+
+       V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL, V3D_L2TCACTL_TMUWCF);
+       if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) &
+                      V3D_L2TCACTL_L2TFLS), 100)) {
+               DRM_ERROR("Timeout waiting for L1T write combiner flush\n");
+       }
+
+       mutex_lock(&v3d->cache_clean_lock);
+       V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL,
+                      V3D_L2TCACTL_L2TFLS |
+                      V3D_SET_FIELD(V3D_L2TCACTL_FLM_CLEAN, V3D_L2TCACTL_FLM));
+
+       if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) &
+                      V3D_L2TCACTL_L2TFLS), 100)) {
+               DRM_ERROR("Timeout waiting for L2T clean\n");
+       }
+
+       mutex_unlock(&v3d->cache_clean_lock);
+
+       trace_v3d_cache_clean_end(dev);
 }
 
 /* Invalidates the slice caches.  These are read-only caches. */
@@ -193,28 +237,6 @@ v3d_invalidate_caches(struct v3d_dev *v3d)
        v3d_invalidate_slices(v3d, 0);
 }
 
-static void
-v3d_attach_object_fences(struct v3d_bo **bos, int bo_count,
-                        struct dma_fence *fence)
-{
-       int i;
-
-       for (i = 0; i < bo_count; i++) {
-               /* XXX: Use shared fences for read-only objects. */
-               reservation_object_add_excl_fence(bos[i]->base.base.resv,
-                                                 fence);
-       }
-}
-
-static void
-v3d_unlock_bo_reservations(struct v3d_bo **bos,
-                          int bo_count,
-                          struct ww_acquire_ctx *acquire_ctx)
-{
-       drm_gem_unlock_reservations((struct drm_gem_object **)bos, bo_count,
-                                   acquire_ctx);
-}
-
 /* Takes the reservation lock on all the BOs being referenced, so that
  * at queue submit time we can update the reservations.
  *
@@ -223,26 +245,21 @@ v3d_unlock_bo_reservations(struct v3d_bo **bos,
  * to v3d, so we don't attach dma-buf fences to them.
  */
 static int
-v3d_lock_bo_reservations(struct v3d_bo **bos,
-                        int bo_count,
+v3d_lock_bo_reservations(struct v3d_job *job,
                         struct ww_acquire_ctx *acquire_ctx)
 {
        int i, ret;
 
-       ret = drm_gem_lock_reservations((struct drm_gem_object **)bos,
-                                       bo_count, acquire_ctx);
+       ret = drm_gem_lock_reservations(job->bo, job->bo_count, acquire_ctx);
        if (ret)
                return ret;
 
-       /* Reserve space for our shared (read-only) fence references,
-        * before we commit the CL to the hardware.
-        */
-       for (i = 0; i < bo_count; i++) {
-               ret = reservation_object_reserve_shared(bos[i]->base.base.resv,
-                                                       1);
+       for (i = 0; i < job->bo_count; i++) {
+               ret = drm_gem_fence_array_add_implicit(&job->deps,
+                                                      job->bo[i], true);
                if (ret) {
-                       v3d_unlock_bo_reservations(bos, bo_count,
-                                                  acquire_ctx);
+                       drm_gem_unlock_reservations(job->bo, job->bo_count,
+                                                   acquire_ctx);
                        return ret;
                }
        }
@@ -251,11 +268,11 @@ v3d_lock_bo_reservations(struct v3d_bo **bos,
 }
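The loop above fills the job's deps container through drm_gem_fence_array_add_implicit(), and the DRM helper deduplicates by fence context: fences on one timeline are ordered by seqno, so only the latest per context needs to be remembered. A hypothetical flat-array sketch of that dedup rule (toy types; real dma_fence contexts and seqnos are u64):

```c
#include <string.h>

/* Sketch of per-context fence deduplication when collecting job
 * dependencies: a later fence on the same timeline replaces an
 * earlier one instead of being stored twice.
 */
struct toy_fence {
	unsigned int context;   /* timeline identifier */
	unsigned int seqno;     /* position on that timeline */
};

#define MAX_DEPS 16

struct toy_deps {
	struct toy_fence fences[MAX_DEPS];
	int count;
};

static int toy_deps_add(struct toy_deps *deps, struct toy_fence fence)
{
	int i;

	for (i = 0; i < deps->count; i++) {
		if (deps->fences[i].context != fence.context)
			continue;
		/* same timeline: keep whichever fence signals later */
		if (fence.seqno > deps->fences[i].seqno)
			deps->fences[i] = fence;
		return 0;
	}

	if (deps->count == MAX_DEPS)
		return -1;

	deps->fences[deps->count++] = fence;
	return 0;
}
```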
 
 /**
- * v3d_cl_lookup_bos() - Sets up exec->bo[] with the GEM objects
+ * v3d_lookup_bos() - Sets up job->bo[] with the GEM objects
  * referenced by the job.
  * @dev: DRM device
  * @file_priv: DRM file for this fd
- * @exec: V3D job being set up
+ * @job: V3D job being set up
  *
  * The command validator needs to reference BOs by their index within
  * the submitted job's BO list.  This does the validation of the job's
@@ -265,18 +282,19 @@ v3d_lock_bo_reservations(struct v3d_bo **bos,
  * failure, because that will happen at v3d_exec_cleanup() time.
  */
 static int
-v3d_cl_lookup_bos(struct drm_device *dev,
-                 struct drm_file *file_priv,
-                 struct drm_v3d_submit_cl *args,
-                 struct v3d_exec_info *exec)
+v3d_lookup_bos(struct drm_device *dev,
+              struct drm_file *file_priv,
+              struct v3d_job *job,
+              u64 bo_handles,
+              u32 bo_count)
 {
        u32 *handles;
        int ret = 0;
        int i;
 
-       exec->bo_count = args->bo_handle_count;
+       job->bo_count = bo_count;
 
-       if (!exec->bo_count) {
+       if (!job->bo_count) {
                /* See comment on bo_index for why we have to check
                 * this.
                 */
@@ -284,15 +302,15 @@ v3d_cl_lookup_bos(struct drm_device *dev,
                return -EINVAL;
        }
 
-       exec->bo = kvmalloc_array(exec->bo_count,
-                                 sizeof(struct drm_gem_cma_object *),
-                                 GFP_KERNEL | __GFP_ZERO);
-       if (!exec->bo) {
+       job->bo = kvmalloc_array(job->bo_count,
+                                sizeof(struct drm_gem_cma_object *),
+                                GFP_KERNEL | __GFP_ZERO);
+       if (!job->bo) {
                DRM_DEBUG("Failed to allocate validated BO pointers\n");
                return -ENOMEM;
        }
 
-       handles = kvmalloc_array(exec->bo_count, sizeof(u32), GFP_KERNEL);
+       handles = kvmalloc_array(job->bo_count, sizeof(u32), GFP_KERNEL);
        if (!handles) {
                ret = -ENOMEM;
                DRM_DEBUG("Failed to allocate incoming GEM handles\n");
@@ -300,15 +318,15 @@ v3d_cl_lookup_bos(struct drm_device *dev,
        }
 
        if (copy_from_user(handles,
-                          (void __user *)(uintptr_t)args->bo_handles,
-                          exec->bo_count * sizeof(u32))) {
+                          (void __user *)(uintptr_t)bo_handles,
+                          job->bo_count * sizeof(u32))) {
                ret = -EFAULT;
                DRM_DEBUG("Failed to copy in GEM handles\n");
                goto fail;
        }
 
        spin_lock(&file_priv->table_lock);
-       for (i = 0; i < exec->bo_count; i++) {
+       for (i = 0; i < job->bo_count; i++) {
                struct drm_gem_object *bo = idr_find(&file_priv->object_idr,
                                                     handles[i]);
                if (!bo) {
@@ -319,7 +337,7 @@ v3d_cl_lookup_bos(struct drm_device *dev,
                        goto fail;
                }
                drm_gem_object_get(bo);
-               exec->bo[i] = to_v3d_bo(bo);
+               job->bo[i] = bo;
        }
        spin_unlock(&file_priv->table_lock);
 
@@ -329,67 +347,50 @@ fail:
 }
 
 static void
-v3d_exec_cleanup(struct kref *ref)
+v3d_job_free(struct kref *ref)
 {
-       struct v3d_exec_info *exec = container_of(ref, struct v3d_exec_info,
-                                                 refcount);
-       struct v3d_dev *v3d = exec->v3d;
-       unsigned int i;
-       struct v3d_bo *bo, *save;
-
-       dma_fence_put(exec->bin.in_fence);
-       dma_fence_put(exec->render.in_fence);
-
-       dma_fence_put(exec->bin.irq_fence);
-       dma_fence_put(exec->render.irq_fence);
-
-       dma_fence_put(exec->bin_done_fence);
-       dma_fence_put(exec->render_done_fence);
+       struct v3d_job *job = container_of(ref, struct v3d_job, refcount);
+       unsigned long index;
+       struct dma_fence *fence;
+       int i;
 
-       for (i = 0; i < exec->bo_count; i++)
-               drm_gem_object_put_unlocked(&exec->bo[i]->base.base);
-       kvfree(exec->bo);
+       for (i = 0; i < job->bo_count; i++) {
+               if (job->bo[i])
+                       drm_gem_object_put_unlocked(job->bo[i]);
+       }
+       kvfree(job->bo);
 
-       list_for_each_entry_safe(bo, save, &exec->unref_list, unref_head) {
-               drm_gem_object_put_unlocked(&bo->base.base);
+       xa_for_each(&job->deps, index, fence) {
+               dma_fence_put(fence);
        }
+       xa_destroy(&job->deps);
 
-       pm_runtime_mark_last_busy(v3d->dev);
-       pm_runtime_put_autosuspend(v3d->dev);
+       dma_fence_put(job->irq_fence);
+       dma_fence_put(job->done_fence);
 
-       kfree(exec);
-}
+       pm_runtime_mark_last_busy(job->v3d->dev);
+       pm_runtime_put_autosuspend(job->v3d->dev);
 
-void v3d_exec_put(struct v3d_exec_info *exec)
-{
-       kref_put(&exec->refcount, v3d_exec_cleanup);
+       kfree(job);
 }
 
 static void
-v3d_tfu_job_cleanup(struct kref *ref)
+v3d_render_job_free(struct kref *ref)
 {
-       struct v3d_tfu_job *job = container_of(ref, struct v3d_tfu_job,
-                                              refcount);
-       struct v3d_dev *v3d = job->v3d;
-       unsigned int i;
-
-       dma_fence_put(job->in_fence);
-       dma_fence_put(job->irq_fence);
+       struct v3d_render_job *job = container_of(ref, struct v3d_render_job,
+                                                 base.refcount);
+       struct v3d_bo *bo, *save;
 
-       for (i = 0; i < ARRAY_SIZE(job->bo); i++) {
-               if (job->bo[i])
-                       drm_gem_object_put_unlocked(&job->bo[i]->base.base);
+       list_for_each_entry_safe(bo, save, &job->unref_list, unref_head) {
+               drm_gem_object_put_unlocked(&bo->base.base);
        }
 
-       pm_runtime_mark_last_busy(v3d->dev);
-       pm_runtime_put_autosuspend(v3d->dev);
-
-       kfree(job);
+       v3d_job_free(ref);
 }
 
-void v3d_tfu_job_put(struct v3d_tfu_job *job)
+void v3d_job_put(struct v3d_job *job)
 {
-       kref_put(&job->refcount, v3d_tfu_job_cleanup);
+       kref_put(&job->refcount, job->free);
 }
 
 int
@@ -425,6 +426,87 @@ v3d_wait_bo_ioctl(struct drm_device *dev, void *data,
        return ret;
 }
 
+static int
+v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv,
+            struct v3d_job *job, void (*free)(struct kref *ref),
+            u32 in_sync)
+{
+       struct dma_fence *in_fence = NULL;
+       int ret;
+
+       job->v3d = v3d;
+       job->free = free;
+
+       ret = pm_runtime_get_sync(v3d->dev);
+       if (ret < 0)
+               return ret;
+
+       xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
+
+       ret = drm_syncobj_find_fence(file_priv, in_sync, 0, 0, &in_fence);
+       if (ret == -EINVAL)
+               goto fail;
+
+       ret = drm_gem_fence_array_add(&job->deps, in_fence);
+       if (ret)
+               goto fail;
+
+       kref_init(&job->refcount);
+
+       return 0;
+fail:
+       xa_destroy(&job->deps);
+       pm_runtime_put_autosuspend(v3d->dev);
+       return ret;
+}
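v3d_job_init() above acquires its resources in order (runtime-PM reference, deps xarray, optional input fence) and unwinds everything acquired so far from a fail label on any error. A minimal sketch of that acquire-in-order / release-in-reverse idiom, with malloc() standing in for the real acquisitions and all names hypothetical:

```c
#include <stdlib.h>

/* Sketch of single-exit goto unwinding: each acquisition after the
 * first jumps to a label that releases everything before it.
 * simulate_late_failure models e.g. a failing dependency add.
 */
struct init_ctx {
	void *pm_ref;   /* stands in for pm_runtime_get_sync() */
	void *deps;     /* stands in for the deps container */
};

static int ctx_init(struct init_ctx *ctx, int simulate_late_failure)
{
	int ret;

	ctx->pm_ref = malloc(1);
	if (!ctx->pm_ref)
		return -1;

	ctx->deps = malloc(1);
	if (!ctx->deps) {
		ret = -1;
		goto fail_pm;
	}

	if (simulate_late_failure) {
		ret = -1;
		goto fail_deps;
	}

	return 0;

fail_deps:
	free(ctx->deps);
	ctx->deps = NULL;
fail_pm:
	free(ctx->pm_ref);
	ctx->pm_ref = NULL;
	return ret;
}

static void ctx_fini(struct init_ctx *ctx)
{
	free(ctx->deps);
	free(ctx->pm_ref);
}
```

The same shape keeps the ioctls' own fail/fail_unreserve paths short: a caller that sees a nonzero return knows the init function left nothing half-acquired.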
+
+static int
+v3d_push_job(struct v3d_file_priv *v3d_priv,
+            struct v3d_job *job, enum v3d_queue queue)
+{
+       int ret;
+
+       ret = drm_sched_job_init(&job->base, &v3d_priv->sched_entity[queue],
+                                v3d_priv);
+       if (ret)
+               return ret;
+
+       job->done_fence = dma_fence_get(&job->base.s_fence->finished);
+
+       /* put by scheduler job completion */
+       kref_get(&job->refcount);
+
+       drm_sched_entity_push_job(&job->base, &v3d_priv->sched_entity[queue]);
+
+       return 0;
+}
+
+static void
+v3d_attach_fences_and_unlock_reservation(struct drm_file *file_priv,
+                                        struct v3d_job *job,
+                                        struct ww_acquire_ctx *acquire_ctx,
+                                        u32 out_sync,
+                                        struct dma_fence *done_fence)
+{
+       struct drm_syncobj *sync_out;
+       int i;
+
+       for (i = 0; i < job->bo_count; i++) {
+               /* XXX: Use shared fences for read-only objects. */
+               reservation_object_add_excl_fence(job->bo[i]->resv,
+                                                 job->done_fence);
+       }
+
+       drm_gem_unlock_reservations(job->bo, job->bo_count, acquire_ctx);
+
+       /* Update the return sync object for the job */
+       sync_out = drm_syncobj_find(file_priv, out_sync);
+       if (sync_out) {
+               drm_syncobj_replace_fence(sync_out, done_fence);
+               drm_syncobj_put(sync_out);
+       }
+}
+
 /**
  * v3d_submit_cl_ioctl() - Submits a job (frame) to the V3D.
  * @dev: DRM device
@@ -444,9 +526,9 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
        struct v3d_dev *v3d = to_v3d_dev(dev);
        struct v3d_file_priv *v3d_priv = file_priv->driver_priv;
        struct drm_v3d_submit_cl *args = data;
-       struct v3d_exec_info *exec;
+       struct v3d_bin_job *bin = NULL;
+       struct v3d_render_job *render;
        struct ww_acquire_ctx acquire_ctx;
-       struct drm_syncobj *sync_out;
        int ret = 0;
 
        trace_v3d_submit_cl_ioctl(&v3d->drm, args->rcl_start, args->rcl_end);
@@ -456,100 +538,87 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
                return -EINVAL;
        }
 
-       exec = kcalloc(1, sizeof(*exec), GFP_KERNEL);
-       if (!exec)
+       render = kcalloc(1, sizeof(*render), GFP_KERNEL);
+       if (!render)
                return -ENOMEM;
 
-       ret = pm_runtime_get_sync(v3d->dev);
-       if (ret < 0) {
-               kfree(exec);
+       render->start = args->rcl_start;
+       render->end = args->rcl_end;
+       INIT_LIST_HEAD(&render->unref_list);
+
+       ret = v3d_job_init(v3d, file_priv, &render->base,
+                          v3d_render_job_free, args->in_sync_rcl);
+       if (ret) {
+               kfree(render);
                return ret;
        }
 
-       kref_init(&exec->refcount);
+       if (args->bcl_start != args->bcl_end) {
+               bin = kcalloc(1, sizeof(*bin), GFP_KERNEL);
+               if (!bin)
+                       return -ENOMEM;
 
-       ret = drm_syncobj_find_fence(file_priv, args->in_sync_bcl,
-                                    0, 0, &exec->bin.in_fence);
-       if (ret == -EINVAL)
-               goto fail;
+               ret = v3d_job_init(v3d, file_priv, &bin->base,
+                                  v3d_job_free, args->in_sync_bcl);
+               if (ret) {
+                       v3d_job_put(&render->base);
+                       return ret;
+               }
 
-       ret = drm_syncobj_find_fence(file_priv, args->in_sync_rcl,
-                                    0, 0, &exec->render.in_fence);
-       if (ret == -EINVAL)
-               goto fail;
+               bin->start = args->bcl_start;
+               bin->end = args->bcl_end;
+               bin->qma = args->qma;
+               bin->qms = args->qms;
+               bin->qts = args->qts;
+               bin->render = render;
+       }
 
-       exec->qma = args->qma;
-       exec->qms = args->qms;
-       exec->qts = args->qts;
-       exec->bin.exec = exec;
-       exec->bin.start = args->bcl_start;
-       exec->bin.end = args->bcl_end;
-       exec->render.exec = exec;
-       exec->render.start = args->rcl_start;
-       exec->render.end = args->rcl_end;
-       exec->v3d = v3d;
-       INIT_LIST_HEAD(&exec->unref_list);
-
-       ret = v3d_cl_lookup_bos(dev, file_priv, args, exec);
+       ret = v3d_lookup_bos(dev, file_priv, &render->base,
+                            args->bo_handles, args->bo_handle_count);
        if (ret)
                goto fail;
 
-       ret = v3d_lock_bo_reservations(exec->bo, exec->bo_count,
-                                      &acquire_ctx);
+       ret = v3d_lock_bo_reservations(&render->base, &acquire_ctx);
        if (ret)
                goto fail;
 
        mutex_lock(&v3d->sched_lock);
-       if (exec->bin.start != exec->bin.end) {
-               ret = drm_sched_job_init(&exec->bin.base,
-                                        &v3d_priv->sched_entity[V3D_BIN],
-                                        v3d_priv);
+       if (bin) {
+               ret = v3d_push_job(v3d_priv, &bin->base, V3D_BIN);
                if (ret)
                        goto fail_unreserve;
 
-               exec->bin_done_fence =
-                       dma_fence_get(&exec->bin.base.s_fence->finished);
-
-               kref_get(&exec->refcount); /* put by scheduler job completion */
-               drm_sched_entity_push_job(&exec->bin.base,
-                                         &v3d_priv->sched_entity[V3D_BIN]);
+               ret = drm_gem_fence_array_add(&render->base.deps,
+                                             dma_fence_get(bin->base.done_fence));
+               if (ret)
+                       goto fail_unreserve;
        }
 
-       ret = drm_sched_job_init(&exec->render.base,
-                                &v3d_priv->sched_entity[V3D_RENDER],
-                                v3d_priv);
+       ret = v3d_push_job(v3d_priv, &render->base, V3D_RENDER);
        if (ret)
                goto fail_unreserve;
-
-       exec->render_done_fence =
-               dma_fence_get(&exec->render.base.s_fence->finished);
-
-       kref_get(&exec->refcount); /* put by scheduler job completion */
-       drm_sched_entity_push_job(&exec->render.base,
-                                 &v3d_priv->sched_entity[V3D_RENDER]);
        mutex_unlock(&v3d->sched_lock);
 
-       v3d_attach_object_fences(exec->bo, exec->bo_count,
-                                exec->render_done_fence);
-
-       v3d_unlock_bo_reservations(exec->bo, exec->bo_count, &acquire_ctx);
-
-       /* Update the return sync object for the */
-       sync_out = drm_syncobj_find(file_priv, args->out_sync);
-       if (sync_out) {
-               drm_syncobj_replace_fence(sync_out, exec->render_done_fence);
-               drm_syncobj_put(sync_out);
-       }
+       v3d_attach_fences_and_unlock_reservation(file_priv,
+                                                &render->base,
+                                                &acquire_ctx,
+                                                args->out_sync,
+                                                render->base.done_fence);
 
-       v3d_exec_put(exec);
+       if (bin)
+               v3d_job_put(&bin->base);
+       v3d_job_put(&render->base);
 
        return 0;
 
 fail_unreserve:
        mutex_unlock(&v3d->sched_lock);
-       v3d_unlock_bo_reservations(exec->bo, exec->bo_count, &acquire_ctx);
+       drm_gem_unlock_reservations(render->base.bo,
+                                   render->base.bo_count, &acquire_ctx);
 fail:
-       v3d_exec_put(exec);
+       if (bin)
+               v3d_job_put(&bin->base);
+       v3d_job_put(&render->base);
 
        return ret;
 }
@@ -572,10 +641,7 @@ v3d_submit_tfu_ioctl(struct drm_device *dev, void *data,
        struct drm_v3d_submit_tfu *args = data;
        struct v3d_tfu_job *job;
        struct ww_acquire_ctx acquire_ctx;
-       struct drm_syncobj *sync_out;
-       struct dma_fence *sched_done_fence;
        int ret = 0;
-       int bo_count;
 
        trace_v3d_submit_tfu_ioctl(&v3d->drm, args->iia);
 
@@ -583,81 +649,172 @@ v3d_submit_tfu_ioctl(struct drm_device *dev, void *data,
        if (!job)
                return -ENOMEM;
 
-       ret = pm_runtime_get_sync(v3d->dev);
-       if (ret < 0) {
+       ret = v3d_job_init(v3d, file_priv, &job->base,
+                          v3d_job_free, args->in_sync);
+       if (ret) {
                kfree(job);
                return ret;
        }
 
-       kref_init(&job->refcount);
-
-       ret = drm_syncobj_find_fence(file_priv, args->in_sync,
-                                    0, 0, &job->in_fence);
-       if (ret == -EINVAL)
-               goto fail;
+       job->base.bo = kcalloc(ARRAY_SIZE(args->bo_handles),
+                              sizeof(*job->base.bo), GFP_KERNEL);
+       if (!job->base.bo) {
+               v3d_job_put(&job->base);
+               return -ENOMEM;
+       }
 
        job->args = *args;
-       job->v3d = v3d;
 
        spin_lock(&file_priv->table_lock);
-       for (bo_count = 0; bo_count < ARRAY_SIZE(job->bo); bo_count++) {
+       for (job->base.bo_count = 0;
+            job->base.bo_count < ARRAY_SIZE(args->bo_handles);
+            job->base.bo_count++) {
                struct drm_gem_object *bo;
 
-               if (!args->bo_handles[bo_count])
+               if (!args->bo_handles[job->base.bo_count])
                        break;
 
                bo = idr_find(&file_priv->object_idr,
-                             args->bo_handles[bo_count]);
+                             args->bo_handles[job->base.bo_count]);
                if (!bo) {
                        DRM_DEBUG("Failed to look up GEM BO %d: %d\n",
-                                 bo_count, args->bo_handles[bo_count]);
+                                 job->base.bo_count,
+                                 args->bo_handles[job->base.bo_count]);
                        ret = -ENOENT;
                        spin_unlock(&file_priv->table_lock);
                        goto fail;
                }
                drm_gem_object_get(bo);
-               job->bo[bo_count] = to_v3d_bo(bo);
+               job->base.bo[job->base.bo_count] = bo;
        }
        spin_unlock(&file_priv->table_lock);
 
-       ret = v3d_lock_bo_reservations(job->bo, bo_count, &acquire_ctx);
+       ret = v3d_lock_bo_reservations(&job->base, &acquire_ctx);
        if (ret)
                goto fail;
 
        mutex_lock(&v3d->sched_lock);
-       ret = drm_sched_job_init(&job->base,
-                                &v3d_priv->sched_entity[V3D_TFU],
-                                v3d_priv);
+       ret = v3d_push_job(v3d_priv, &job->base, V3D_TFU);
        if (ret)
                goto fail_unreserve;
+       mutex_unlock(&v3d->sched_lock);
 
-       sched_done_fence = dma_fence_get(&job->base.s_fence->finished);
+       v3d_attach_fences_and_unlock_reservation(file_priv,
+                                                &job->base, &acquire_ctx,
+                                                args->out_sync,
+                                                job->base.done_fence);
 
-       kref_get(&job->refcount); /* put by scheduler job completion */
-       drm_sched_entity_push_job(&job->base, &v3d_priv->sched_entity[V3D_TFU]);
+       v3d_job_put(&job->base);
+
+       return 0;
+
+fail_unreserve:
        mutex_unlock(&v3d->sched_lock);
+       drm_gem_unlock_reservations(job->base.bo, job->base.bo_count,
+                                   &acquire_ctx);
+fail:
+       v3d_job_put(&job->base);
 
-       v3d_attach_object_fences(job->bo, bo_count, sched_done_fence);
+       return ret;
+}
 
-       v3d_unlock_bo_reservations(job->bo, bo_count, &acquire_ctx);
+/**
+ * v3d_submit_csd_ioctl() - Submits a CSD (compute shader dispatch) job to the V3D.
+ * @dev: DRM device
+ * @data: ioctl argument
+ * @file_priv: DRM file for this fd
+ *
+ * Userspace provides the register setup for the CSD, which we don't
+ * need to validate since the CSD is behind the MMU.
+ */
+int
+v3d_submit_csd_ioctl(struct drm_device *dev, void *data,
+                    struct drm_file *file_priv)
+{
+       struct v3d_dev *v3d = to_v3d_dev(dev);
+       struct v3d_file_priv *v3d_priv = file_priv->driver_priv;
+       struct drm_v3d_submit_csd *args = data;
+       struct v3d_csd_job *job;
+       struct v3d_job *clean_job;
+       struct ww_acquire_ctx acquire_ctx;
+       int ret;
 
-       /* Update the return sync object */
-       sync_out = drm_syncobj_find(file_priv, args->out_sync);
-       if (sync_out) {
-               drm_syncobj_replace_fence(sync_out, sched_done_fence);
-               drm_syncobj_put(sync_out);
+       trace_v3d_submit_csd_ioctl(&v3d->drm, args->cfg[5], args->cfg[6]);
+
+       if (!v3d_has_csd(v3d)) {
+               DRM_DEBUG("Attempting CSD submit on non-CSD hardware\n");
+               return -EINVAL;
+       }
+
+       job = kcalloc(1, sizeof(*job), GFP_KERNEL);
+       if (!job)
+               return -ENOMEM;
+
+       ret = v3d_job_init(v3d, file_priv, &job->base,
+                          v3d_job_free, args->in_sync);
+       if (ret) {
+               kfree(job);
+               return ret;
+       }
+
+       clean_job = kcalloc(1, sizeof(*clean_job), GFP_KERNEL);
+       if (!clean_job) {
+               v3d_job_put(&job->base);
+               kfree(job);
+               return -ENOMEM;
        }
-       dma_fence_put(sched_done_fence);
 
-       v3d_tfu_job_put(job);
+       ret = v3d_job_init(v3d, file_priv, clean_job, v3d_job_free, 0);
+       if (ret) {
+               v3d_job_put(&job->base);
+               kfree(clean_job);
+               return ret;
+       }
+
+       job->args = *args;
+
+       ret = v3d_lookup_bos(dev, file_priv, clean_job,
+                            args->bo_handles, args->bo_handle_count);
+       if (ret)
+               goto fail;
+
+       ret = v3d_lock_bo_reservations(clean_job, &acquire_ctx);
+       if (ret)
+               goto fail;
+
+       mutex_lock(&v3d->sched_lock);
+       ret = v3d_push_job(v3d_priv, &job->base, V3D_CSD);
+       if (ret)
+               goto fail_unreserve;
+
+       ret = drm_gem_fence_array_add(&clean_job->deps,
+                                     dma_fence_get(job->base.done_fence));
+       if (ret)
+               goto fail_unreserve;
+
+       ret = v3d_push_job(v3d_priv, clean_job, V3D_CACHE_CLEAN);
+       if (ret)
+               goto fail_unreserve;
+       mutex_unlock(&v3d->sched_lock);
+
+       v3d_attach_fences_and_unlock_reservation(file_priv,
+                                                clean_job,
+                                                &acquire_ctx,
+                                                args->out_sync,
+                                                clean_job->done_fence);
+
+       v3d_job_put(&job->base);
+       v3d_job_put(clean_job);
 
        return 0;
 
 fail_unreserve:
        mutex_unlock(&v3d->sched_lock);
-       v3d_unlock_bo_reservations(job->bo, bo_count, &acquire_ctx);
+       drm_gem_unlock_reservations(clean_job->bo, clean_job->bo_count,
+                                   &acquire_ctx);
 fail:
-       v3d_tfu_job_put(job);
+       v3d_job_put(&job->base);
+       v3d_job_put(clean_job);
 
        return ret;
 }
@@ -677,6 +834,7 @@ v3d_gem_init(struct drm_device *dev)
        mutex_init(&v3d->bo_lock);
        mutex_init(&v3d->reset_lock);
        mutex_init(&v3d->sched_lock);
+       mutex_init(&v3d->cache_clean_lock);
 
        /* Note: We don't allocate address 0.  Various bits of HW
         * treat 0 as special, such as the occlusion query counters
@@ -715,7 +873,7 @@ v3d_gem_destroy(struct drm_device *dev)
 
        v3d_sched_fini(v3d);
 
-       /* Waiting for exec to finish would need to be done before
+       /* Waiting for jobs to finish would need to be done before
         * unregistering V3D.
         */
        WARN_ON(v3d->bin_job);
index aa0a180..268d8a8 100644
@@ -4,9 +4,9 @@
 /**
  * DOC: Interrupt management for the V3D engine
  *
- * When we take a bin, render, or TFU done interrupt, we need to
- * signal the fence for that job so that the scheduler can queue up
- * the next one and unblock any waiters.
+ * When we take a bin, render, TFU done, or CSD done interrupt, we
+ * need to signal the fence for that job so that the scheduler can
+ * queue up the next one and unblock any waiters.
  *
  * When we take the binner out of memory interrupt, we need to
  * allocate some new memory and pass it to the binner so that the
@@ -20,6 +20,7 @@
 #define V3D_CORE_IRQS ((u32)(V3D_INT_OUTOMEM | \
                             V3D_INT_FLDONE |   \
                             V3D_INT_FRDONE |   \
+                            V3D_INT_CSDDONE |  \
                             V3D_INT_GMPV))
 
 #define V3D_HUB_IRQS ((u32)(V3D_HUB_INT_MMU_WRV |      \
@@ -62,7 +63,7 @@ v3d_overflow_mem_work(struct work_struct *work)
        }
 
        drm_gem_object_get(obj);
-       list_add_tail(&bo->unref_head, &v3d->bin_job->unref_list);
+       list_add_tail(&bo->unref_head, &v3d->bin_job->render->unref_list);
        spin_unlock_irqrestore(&v3d->job_lock, irqflags);
 
        V3D_CORE_WRITE(0, V3D_PTB_BPOA, bo->node.start << PAGE_SHIFT);
@@ -96,7 +97,7 @@ v3d_irq(int irq, void *arg)
 
        if (intsts & V3D_INT_FLDONE) {
                struct v3d_fence *fence =
-                       to_v3d_fence(v3d->bin_job->bin.irq_fence);
+                       to_v3d_fence(v3d->bin_job->base.irq_fence);
 
                trace_v3d_bcl_irq(&v3d->drm, fence->seqno);
                dma_fence_signal(&fence->base);
@@ -105,13 +106,22 @@ v3d_irq(int irq, void *arg)
 
        if (intsts & V3D_INT_FRDONE) {
                struct v3d_fence *fence =
-                       to_v3d_fence(v3d->render_job->render.irq_fence);
+                       to_v3d_fence(v3d->render_job->base.irq_fence);
 
                trace_v3d_rcl_irq(&v3d->drm, fence->seqno);
                dma_fence_signal(&fence->base);
                status = IRQ_HANDLED;
        }
 
+       if (intsts & V3D_INT_CSDDONE) {
+               struct v3d_fence *fence =
+                       to_v3d_fence(v3d->csd_job->base.irq_fence);
+
+               trace_v3d_csd_irq(&v3d->drm, fence->seqno);
+               dma_fence_signal(&fence->base);
+               status = IRQ_HANDLED;
+       }
+
        /* We shouldn't be triggering these if we have GMP in
         * always-allowed mode.
         */
@@ -141,7 +151,7 @@ v3d_hub_irq(int irq, void *arg)
 
        if (intsts & V3D_HUB_INT_TFUC) {
                struct v3d_fence *fence =
-                       to_v3d_fence(v3d->tfu_job->irq_fence);
+                       to_v3d_fence(v3d->tfu_job->base.irq_fence);
 
                trace_v3d_tfu_irq(&v3d->drm, fence->seqno);
                dma_fence_signal(&fence->base);
@@ -152,10 +162,33 @@ v3d_hub_irq(int irq, void *arg)
                      V3D_HUB_INT_MMU_PTI |
                      V3D_HUB_INT_MMU_CAP)) {
                u32 axi_id = V3D_READ(V3D_MMU_VIO_ID);
-               u64 vio_addr = (u64)V3D_READ(V3D_MMU_VIO_ADDR) << 8;
-
-               dev_err(v3d->dev, "MMU error from client %d at 0x%08llx%s%s%s\n",
-                       axi_id, (long long)vio_addr,
+               u64 vio_addr = ((u64)V3D_READ(V3D_MMU_VIO_ADDR) <<
+                               (v3d->va_width - 32));
+               static const char *const v3d41_axi_ids[] = {
+                       "L2T",
+                       "PTB",
+                       "PSE",
+                       "TLB",
+                       "CLE",
+                       "TFU",
+                       "MMU",
+                       "GMP",
+               };
+               const char *client = "?";
+
+               V3D_WRITE(V3D_MMU_CTL,
+                         V3D_READ(V3D_MMU_CTL) & (V3D_MMU_CTL_CAP_EXCEEDED |
+                                                  V3D_MMU_CTL_PT_INVALID |
+                                                  V3D_MMU_CTL_WRITE_VIOLATION));
+
+               if (v3d->ver >= 41) {
+                       axi_id = axi_id >> 5;
+                       if (axi_id < ARRAY_SIZE(v3d41_axi_ids))
+                               client = v3d41_axi_ids[axi_id];
+               }
+
+               dev_err(v3d->dev, "MMU error from client %s (%d) at 0x%llx%s%s%s\n",
+                       client, axi_id, (long long)vio_addr,
                        ((intsts & V3D_HUB_INT_MMU_WRV) ?
                         ", write violation" : ""),
                        ((intsts & V3D_HUB_INT_MMU_PTI) ?
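The MMU-violation decode above can be modeled in userspace: on V3D 4.1+ the upper bits of the AXI ID select the client, and the violation address register holds the fault address shifted down by (va_width - 32) bits. A minimal sketch — the function names here are illustrative, not kernel API:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Client names for V3D 4.1+, matching the v3d41_axi_ids table above. */
static const char *const v3d41_axi_ids[] = {
        "L2T", "PTB", "PSE", "TLB", "CLE", "TFU", "MMU", "GMP",
};

static const char *v3d_mmu_client_name(int ver, uint32_t axi_id)
{
        if (ver >= 41) {
                /* The client ID lives in the upper bits of the AXI ID. */
                axi_id >>= 5;
                if (axi_id < sizeof(v3d41_axi_ids) / sizeof(v3d41_axi_ids[0]))
                        return v3d41_axi_ids[axi_id];
        }
        return "?";
}

static uint64_t v3d_mmu_vio_addr(uint32_t reg, int va_width)
{
        /* The register drops the low (va_width - 32) address bits. */
        return (uint64_t)reg << (va_width - 32);
}
```

Note the decode only applies from version 4.1 on; older parts fall back to printing the raw AXI ID with an unknown client.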
index 7a21f17..395e81d 100644
@@ -69,10 +69,13 @@ int v3d_mmu_set_page_table(struct v3d_dev *v3d)
        V3D_WRITE(V3D_MMU_PT_PA_BASE, v3d->pt_paddr >> V3D_MMU_PAGE_SHIFT);
        V3D_WRITE(V3D_MMU_CTL,
                  V3D_MMU_CTL_ENABLE |
-                 V3D_MMU_CTL_PT_INVALID |
+                 V3D_MMU_CTL_PT_INVALID_ENABLE |
                  V3D_MMU_CTL_PT_INVALID_ABORT |
+                 V3D_MMU_CTL_PT_INVALID_INT |
                  V3D_MMU_CTL_WRITE_VIOLATION_ABORT |
-                 V3D_MMU_CTL_CAP_EXCEEDED_ABORT);
+                 V3D_MMU_CTL_WRITE_VIOLATION_INT |
+                 V3D_MMU_CTL_CAP_EXCEEDED_ABORT |
+                 V3D_MMU_CTL_CAP_EXCEEDED_INT);
        V3D_WRITE(V3D_MMU_ILLEGAL_ADDR,
                  (v3d->mmu_scratch_paddr >> V3D_MMU_PAGE_SHIFT) |
                  V3D_MMU_ILLEGAL_ADDR_ENABLE);
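The control word assembled above now pairs each abort enable with an interrupt enable, so faults are both stopped and reported. A sketch of the fault-related bits, using the bit positions from the V3D_MMU_CTL defines shown in this series (the helper itself is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define BIT32(n)                        ((uint32_t)1 << (n))

/* Bit positions from the V3D_MMU_CTL defines in v3d_regs.h. */
#define MMU_CTL_PT_INVALID_ABORT        BIT32(19)
#define MMU_CTL_PT_INVALID_INT          BIT32(18)
#define MMU_CTL_PT_INVALID_ENABLE       BIT32(16)
#define MMU_CTL_WRITE_VIOLATION_ABORT   BIT32(11)
#define MMU_CTL_WRITE_VIOLATION_INT     BIT32(10)

/* OR together the fault handling enables: each violation type gets
 * both an abort (stop the client) and an interrupt (report it). */
static uint32_t mmu_fault_ctl_bits(void)
{
        return MMU_CTL_PT_INVALID_ENABLE |
               MMU_CTL_PT_INVALID_ABORT |
               MMU_CTL_PT_INVALID_INT |
               MMU_CTL_WRITE_VIOLATION_ABORT |
               MMU_CTL_WRITE_VIOLATION_INT;
}
```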
index 8e88af2..9bcb577 100644
 # define V3D_MMU_CTL_PT_INVALID_ABORT                  BIT(19)
 # define V3D_MMU_CTL_PT_INVALID_INT                    BIT(18)
 # define V3D_MMU_CTL_PT_INVALID_EXCEPTION              BIT(17)
-# define V3D_MMU_CTL_WRITE_VIOLATION                   BIT(16)
+# define V3D_MMU_CTL_PT_INVALID_ENABLE                 BIT(16)
+# define V3D_MMU_CTL_WRITE_VIOLATION                   BIT(12)
 # define V3D_MMU_CTL_WRITE_VIOLATION_ABORT             BIT(11)
 # define V3D_MMU_CTL_WRITE_VIOLATION_INT               BIT(10)
 # define V3D_MMU_CTL_WRITE_VIOLATION_EXCEPTION         BIT(9)
 /* Address that faulted */
 #define V3D_MMU_VIO_ADDR                               0x01234
 
+#define V3D_MMU_DEBUG_INFO                             0x01238
+# define V3D_MMU_PA_WIDTH_MASK                         V3D_MASK(11, 8)
+# define V3D_MMU_PA_WIDTH_SHIFT                        8
+# define V3D_MMU_VA_WIDTH_MASK                         V3D_MASK(7, 4)
+# define V3D_MMU_VA_WIDTH_SHIFT                        4
+# define V3D_MMU_VERSION_MASK                          V3D_MASK(3, 0)
+# define V3D_MMU_VERSION_SHIFT                         0
+
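Extracting fields from the new V3D_MMU_DEBUG_INFO register follows the usual MASK/SHIFT pattern of the defines above. A small sketch — the GENMASK-style helper and the sample value are illustrative, and how the driver consumes the widths is not shown here:

```c
#include <assert.h>
#include <stdint.h>

/* GENMASK-style helper: ones in bits [high:low]. */
#define FIELD32(high, low) \
        ((uint32_t)(((1ull << ((high) + 1)) - 1) & ~((1ull << (low)) - 1)))
#define GET_FIELD(reg, high, low) \
        (((reg) & FIELD32((high), (low))) >> (low))

/* Field layout per the V3D_MMU_DEBUG_INFO defines above. */
static uint32_t mmu_pa_width(uint32_t info) { return GET_FIELD(info, 11, 8); }
static uint32_t mmu_va_width(uint32_t info) { return GET_FIELD(info, 7, 4); }
static uint32_t mmu_version(uint32_t info)  { return GET_FIELD(info, 3, 0); }
```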
 /* Per-V3D-core registers */
 
 #define V3D_CTL_IDENT0                                 0x00000
 #define V3D_CTL_L2TCACTL                               0x00030
 # define V3D_L2TCACTL_TMUWCF                           BIT(8)
 # define V3D_L2TCACTL_L2T_NO_WM                        BIT(4)
+/* Invalidates cache lines. */
 # define V3D_L2TCACTL_FLM_FLUSH                        0
+/* Removes cachelines without writing dirty lines back. */
 # define V3D_L2TCACTL_FLM_CLEAR                        1
+/* Writes out dirty cachelines and marks them clean, but doesn't invalidate. */
 # define V3D_L2TCACTL_FLM_CLEAN                        2
 # define V3D_L2TCACTL_FLM_MASK                         V3D_MASK(2, 1)
 # define V3D_L2TCACTL_FLM_SHIFT                        1
 #define V3D_CTL_INT_MSK_CLR                            0x00064
 # define V3D_INT_QPU_MASK                              V3D_MASK(27, 16)
 # define V3D_INT_QPU_SHIFT                             16
+# define V3D_INT_CSDDONE                               BIT(7)
+# define V3D_INT_PCTR                                  BIT(6)
 # define V3D_INT_GMPV                                  BIT(5)
 # define V3D_INT_TRFB                                  BIT(4)
 # define V3D_INT_SPILLUSE                              BIT(3)
 #define V3D_GMP_PRESERVE_LOAD                          0x00818
 #define V3D_GMP_VALID_LINES                            0x00820
 
+#define V3D_CSD_STATUS                                 0x00900
+# define V3D_CSD_STATUS_NUM_COMPLETED_MASK             V3D_MASK(11, 4)
+# define V3D_CSD_STATUS_NUM_COMPLETED_SHIFT            4
+# define V3D_CSD_STATUS_NUM_ACTIVE_MASK                V3D_MASK(3, 2)
+# define V3D_CSD_STATUS_NUM_ACTIVE_SHIFT               2
+# define V3D_CSD_STATUS_HAVE_CURRENT_DISPATCH          BIT(1)
+# define V3D_CSD_STATUS_HAVE_QUEUED_DISPATCH           BIT(0)
+
+#define V3D_CSD_QUEUED_CFG0                            0x00904
+# define V3D_CSD_QUEUED_CFG0_NUM_WGS_X_MASK            V3D_MASK(31, 16)
+# define V3D_CSD_QUEUED_CFG0_NUM_WGS_X_SHIFT           16
+# define V3D_CSD_QUEUED_CFG0_WG_X_OFFSET_MASK          V3D_MASK(15, 0)
+# define V3D_CSD_QUEUED_CFG0_WG_X_OFFSET_SHIFT         0
+
+#define V3D_CSD_QUEUED_CFG1                            0x00908
+# define V3D_CSD_QUEUED_CFG1_NUM_WGS_Y_MASK            V3D_MASK(31, 16)
+# define V3D_CSD_QUEUED_CFG1_NUM_WGS_Y_SHIFT           16
+# define V3D_CSD_QUEUED_CFG1_WG_Y_OFFSET_MASK          V3D_MASK(15, 0)
+# define V3D_CSD_QUEUED_CFG1_WG_Y_OFFSET_SHIFT         0
+
+#define V3D_CSD_QUEUED_CFG2                            0x0090c
+# define V3D_CSD_QUEUED_CFG2_NUM_WGS_Z_MASK            V3D_MASK(31, 16)
+# define V3D_CSD_QUEUED_CFG2_NUM_WGS_Z_SHIFT           16
+# define V3D_CSD_QUEUED_CFG2_WG_Z_OFFSET_MASK          V3D_MASK(15, 0)
+# define V3D_CSD_QUEUED_CFG2_WG_Z_OFFSET_SHIFT         0
+
+#define V3D_CSD_QUEUED_CFG3                            0x00910
+# define V3D_CSD_QUEUED_CFG3_OVERLAP_WITH_PREV         BIT(26)
+# define V3D_CSD_QUEUED_CFG3_MAX_SG_ID_MASK            V3D_MASK(25, 20)
+# define V3D_CSD_QUEUED_CFG3_MAX_SG_ID_SHIFT           20
+# define V3D_CSD_QUEUED_CFG3_BATCHES_PER_SG_M1_MASK    V3D_MASK(19, 12)
+# define V3D_CSD_QUEUED_CFG3_BATCHES_PER_SG_M1_SHIFT   12
+# define V3D_CSD_QUEUED_CFG3_WGS_PER_SG_MASK           V3D_MASK(11, 8)
+# define V3D_CSD_QUEUED_CFG3_WGS_PER_SG_SHIFT          8
+# define V3D_CSD_QUEUED_CFG3_WG_SIZE_MASK              V3D_MASK(7, 0)
+# define V3D_CSD_QUEUED_CFG3_WG_SIZE_SHIFT             0
+
+/* Number of batches, minus 1 */
+#define V3D_CSD_QUEUED_CFG4                            0x00914
+
+/* Shader address, pnan, singleseg, threading, like a shader record. */
+#define V3D_CSD_QUEUED_CFG5                            0x00918
+
+/* Uniforms address (4 byte aligned) */
+#define V3D_CSD_QUEUED_CFG6                            0x0091c
+
+#define V3D_CSD_CURRENT_CFG0                           0x00920
+#define V3D_CSD_CURRENT_CFG1                           0x00924
+#define V3D_CSD_CURRENT_CFG2                           0x00928
+#define V3D_CSD_CURRENT_CFG3                           0x0092c
+#define V3D_CSD_CURRENT_CFG4                           0x00930
+#define V3D_CSD_CURRENT_CFG5                           0x00934
+#define V3D_CSD_CURRENT_CFG6                           0x00938
+
+#define V3D_CSD_CURRENT_ID0                            0x0093c
+# define V3D_CSD_CURRENT_ID0_WG_X_MASK                 V3D_MASK(31, 16)
+# define V3D_CSD_CURRENT_ID0_WG_X_SHIFT                16
+# define V3D_CSD_CURRENT_ID0_WG_IN_SG_MASK             V3D_MASK(11, 8)
+# define V3D_CSD_CURRENT_ID0_WG_IN_SG_SHIFT            8
+# define V3D_CSD_CURRENT_ID0_L_IDX_MASK                V3D_MASK(7, 0)
+# define V3D_CSD_CURRENT_ID0_L_IDX_SHIFT               0
+
+#define V3D_CSD_CURRENT_ID1                            0x00940
+# define V3D_CSD_CURRENT_ID1_WG_Z_MASK                 V3D_MASK(31, 16)
+# define V3D_CSD_CURRENT_ID1_WG_Z_SHIFT                16
+# define V3D_CSD_CURRENT_ID1_WG_Y_MASK                 V3D_MASK(15, 0)
+# define V3D_CSD_CURRENT_ID1_WG_Y_SHIFT                0
+
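The dispatch-config field layout above is straightforward packing: CFG0 places the number of X workgroups in bits 31:16 and the X offset in bits 15:0, with CFG1/CFG2 doing the same for Y and Z. A sketch with made-up pack/unpack helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Pack a CSD CFG0 word: NUM_WGS_X in bits 31:16, WG_X_OFFSET in 15:0. */
static uint32_t csd_cfg0_pack(uint32_t num_wgs_x, uint32_t wg_x_offset)
{
        return (num_wgs_x << 16) | (wg_x_offset & 0xffff);
}

static uint32_t csd_cfg0_num_wgs_x(uint32_t cfg0)   { return cfg0 >> 16; }
static uint32_t csd_cfg0_wg_x_offset(uint32_t cfg0) { return cfg0 & 0xffff; }
```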
+#define V3D_ERR_FDBGO                                  0x00f04
+#define V3D_ERR_FDBGB                                  0x00f08
+#define V3D_ERR_FDBGR                                  0x00f0c
+
+#define V3D_ERR_FDBGS                                  0x00f10
+# define V3D_ERR_FDBGS_INTERPZ_IP_STALL                BIT(17)
+# define V3D_ERR_FDBGS_DEPTHO_FIFO_IP_STALL            BIT(16)
+# define V3D_ERR_FDBGS_XYNRM_IP_STALL                  BIT(14)
+# define V3D_ERR_FDBGS_EZREQ_FIFO_OP_VALID             BIT(13)
+# define V3D_ERR_FDBGS_QXYF_FIFO_OP_VALID              BIT(12)
+# define V3D_ERR_FDBGS_QXYF_FIFO_OP_LAST               BIT(11)
+# define V3D_ERR_FDBGS_EZTEST_ANYQVALID                BIT(7)
+# define V3D_ERR_FDBGS_EZTEST_PASS                     BIT(6)
+# define V3D_ERR_FDBGS_EZTEST_QREADY                   BIT(5)
+# define V3D_ERR_FDBGS_EZTEST_VLF_OKNOVALID            BIT(4)
+# define V3D_ERR_FDBGS_EZTEST_QSTALL                   BIT(3)
+# define V3D_ERR_FDBGS_EZTEST_IP_VLFSTALL              BIT(2)
+# define V3D_ERR_FDBGS_EZTEST_IP_PRSTALL               BIT(1)
+# define V3D_ERR_FDBGS_EZTEST_IP_QSTALL                BIT(0)
+
+#define V3D_ERR_STAT                                   0x00f20
+# define V3D_ERR_L2CARE                                BIT(15)
+# define V3D_ERR_VCMBE                                 BIT(14)
+# define V3D_ERR_VCMRE                                 BIT(13)
+# define V3D_ERR_VCDI                                  BIT(12)
+# define V3D_ERR_VCDE                                  BIT(11)
+# define V3D_ERR_VDWE                                  BIT(10)
+# define V3D_ERR_VPMEAS                                BIT(9)
+# define V3D_ERR_VPMEFNA                               BIT(8)
+# define V3D_ERR_VPMEWNA                               BIT(7)
+# define V3D_ERR_VPMERNA                               BIT(6)
+# define V3D_ERR_VPMERR                                BIT(5)
+# define V3D_ERR_VPMEWR                                BIT(4)
+# define V3D_ERR_VPAERRGL                              BIT(3)
+# define V3D_ERR_VPAEBRGL                              BIT(2)
+# define V3D_ERR_VPAERGS                               BIT(1)
+# define V3D_ERR_VPAEABB                               BIT(0)
+
 #endif /* V3D_REGS_H */
index e740f3b..8c2df6d 100644
@@ -30,158 +30,152 @@ to_v3d_job(struct drm_sched_job *sched_job)
        return container_of(sched_job, struct v3d_job, base);
 }
 
-static struct v3d_tfu_job *
-to_tfu_job(struct drm_sched_job *sched_job)
+static struct v3d_bin_job *
+to_bin_job(struct drm_sched_job *sched_job)
 {
-       return container_of(sched_job, struct v3d_tfu_job, base);
+       return container_of(sched_job, struct v3d_bin_job, base.base);
 }
 
-static void
-v3d_job_free(struct drm_sched_job *sched_job)
+static struct v3d_render_job *
+to_render_job(struct drm_sched_job *sched_job)
 {
-       struct v3d_job *job = to_v3d_job(sched_job);
+       return container_of(sched_job, struct v3d_render_job, base.base);
+}
 
-       drm_sched_job_cleanup(sched_job);
+static struct v3d_tfu_job *
+to_tfu_job(struct drm_sched_job *sched_job)
+{
+       return container_of(sched_job, struct v3d_tfu_job, base.base);
+}
 
-       v3d_exec_put(job->exec);
+static struct v3d_csd_job *
+to_csd_job(struct drm_sched_job *sched_job)
+{
+       return container_of(sched_job, struct v3d_csd_job, base.base);
 }
 
 static void
-v3d_tfu_job_free(struct drm_sched_job *sched_job)
+v3d_job_free(struct drm_sched_job *sched_job)
 {
-       struct v3d_tfu_job *job = to_tfu_job(sched_job);
+       struct v3d_job *job = to_v3d_job(sched_job);
 
        drm_sched_job_cleanup(sched_job);
-
-       v3d_tfu_job_put(job);
+       v3d_job_put(job);
 }
 
 /**
- * Returns the fences that the bin or render job depends on, one by one.
- * v3d_job_run() won't be called until all of them have been signaled.
+ * Returns the fences that the job depends on, one by one.
+ *
+ * If placed in the scheduler's .dependency method, the corresponding
+ * .run_job won't be called until all of them have been signaled.
  */
 static struct dma_fence *
 v3d_job_dependency(struct drm_sched_job *sched_job,
                   struct drm_sched_entity *s_entity)
 {
        struct v3d_job *job = to_v3d_job(sched_job);
-       struct v3d_exec_info *exec = job->exec;
-       enum v3d_queue q = job == &exec->bin ? V3D_BIN : V3D_RENDER;
-       struct dma_fence *fence;
-
-       fence = job->in_fence;
-       if (fence) {
-               job->in_fence = NULL;
-               return fence;
-       }
-
-       if (q == V3D_RENDER) {
-               /* If we had a bin job, the render job definitely depends on
-                * it. We first have to wait for bin to be scheduled, so that
-                * its done_fence is created.
-                */
-               fence = exec->bin_done_fence;
-               if (fence) {
-                       exec->bin_done_fence = NULL;
-                       return fence;
-               }
-       }
 
        /* XXX: Wait on a fence for switching the GMP if necessary,
         * and then do so.
         */
 
-       return fence;
-}
-
-/**
- * Returns the fences that the TFU job depends on, one by one.
- * v3d_tfu_job_run() won't be called until all of them have been
- * signaled.
- */
-static struct dma_fence *
-v3d_tfu_job_dependency(struct drm_sched_job *sched_job,
-                      struct drm_sched_entity *s_entity)
-{
-       struct v3d_tfu_job *job = to_tfu_job(sched_job);
-       struct dma_fence *fence;
-
-       fence = job->in_fence;
-       if (fence) {
-               job->in_fence = NULL;
-               return fence;
-       }
+       if (!xa_empty(&job->deps))
+               return xa_erase(&job->deps, job->last_dep++);
 
        return NULL;
 }
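The unified dependency callback above hands the scheduler one stored fence per call until the container is empty, then returns NULL. The kernel code iterates an xarray keyed by job->last_dep; this userspace model uses a plain array and is not kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Model of a v3d_job's dependency container: an array of opaque
 * fence pointers plus a cursor, standing in for the xarray. */
struct model_job {
        void *deps[8];
        size_t ndeps;
        size_t last_dep;
};

/* Return the next unconsumed dependency, or NULL when all have been
 * handed out -- mirroring the .dependency contract, under which
 * .run_job is not called until every returned fence has signaled. */
static void *model_job_dependency(struct model_job *job)
{
        if (job->last_dep < job->ndeps)
                return job->deps[job->last_dep++];
        return NULL;
}
```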
 
-static struct dma_fence *v3d_job_run(struct drm_sched_job *sched_job)
+static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job)
 {
-       struct v3d_job *job = to_v3d_job(sched_job);
-       struct v3d_exec_info *exec = job->exec;
-       enum v3d_queue q = job == &exec->bin ? V3D_BIN : V3D_RENDER;
-       struct v3d_dev *v3d = exec->v3d;
+       struct v3d_bin_job *job = to_bin_job(sched_job);
+       struct v3d_dev *v3d = job->base.v3d;
        struct drm_device *dev = &v3d->drm;
        struct dma_fence *fence;
        unsigned long irqflags;
 
-       if (unlikely(job->base.s_fence->finished.error))
+       if (unlikely(job->base.base.s_fence->finished.error))
                return NULL;
 
        /* Lock required around bin_job update vs
         * v3d_overflow_mem_work().
         */
        spin_lock_irqsave(&v3d->job_lock, irqflags);
-       if (q == V3D_BIN) {
-               v3d->bin_job = job->exec;
-
-               /* Clear out the overflow allocation, so we don't
-                * reuse the overflow attached to a previous job.
-                */
-               V3D_CORE_WRITE(0, V3D_PTB_BPOS, 0);
-       } else {
-               v3d->render_job = job->exec;
-       }
+       v3d->bin_job = job;
+       /* Clear out the overflow allocation, so we don't
+        * reuse the overflow attached to a previous job.
+        */
+       V3D_CORE_WRITE(0, V3D_PTB_BPOS, 0);
        spin_unlock_irqrestore(&v3d->job_lock, irqflags);
 
-       /* Can we avoid this flush when q==RENDER?  We need to be
-        * careful of scheduling, though -- imagine job0 rendering to
-        * texture and job1 reading, and them being executed as bin0,
-        * bin1, render0, render1, so that render1's flush at bin time
+       v3d_invalidate_caches(v3d);
+
+       fence = v3d_fence_create(v3d, V3D_BIN);
+       if (IS_ERR(fence))
+               return NULL;
+
+       if (job->base.irq_fence)
+               dma_fence_put(job->base.irq_fence);
+       job->base.irq_fence = dma_fence_get(fence);
+
+       trace_v3d_submit_cl(dev, false, to_v3d_fence(fence)->seqno,
+                           job->start, job->end);
+
+       /* Set the current and end address of the control list.
+        * Writing the end register is what starts the job.
+        */
+       if (job->qma) {
+               V3D_CORE_WRITE(0, V3D_CLE_CT0QMA, job->qma);
+               V3D_CORE_WRITE(0, V3D_CLE_CT0QMS, job->qms);
+       }
+       if (job->qts) {
+               V3D_CORE_WRITE(0, V3D_CLE_CT0QTS,
+                              V3D_CLE_CT0QTS_ENABLE |
+                              job->qts);
+       }
+       V3D_CORE_WRITE(0, V3D_CLE_CT0QBA, job->start);
+       V3D_CORE_WRITE(0, V3D_CLE_CT0QEA, job->end);
+
+       return fence;
+}
+
+static struct dma_fence *v3d_render_job_run(struct drm_sched_job *sched_job)
+{
+       struct v3d_render_job *job = to_render_job(sched_job);
+       struct v3d_dev *v3d = job->base.v3d;
+       struct drm_device *dev = &v3d->drm;
+       struct dma_fence *fence;
+
+       if (unlikely(job->base.base.s_fence->finished.error))
+               return NULL;
+
+       v3d->render_job = job;
+
+       /* Can we avoid this flush?  We need to be careful of
+        * scheduling, though -- imagine job0 rendering to texture and
+        * job1 reading, and them being executed as bin0, bin1,
+        * render0, render1, so that render1's flush at bin time
         * wasn't enough.
         */
        v3d_invalidate_caches(v3d);
 
-       fence = v3d_fence_create(v3d, q);
+       fence = v3d_fence_create(v3d, V3D_RENDER);
        if (IS_ERR(fence))
                return NULL;
 
-       if (job->irq_fence)
-               dma_fence_put(job->irq_fence);
-       job->irq_fence = dma_fence_get(fence);
+       if (job->base.irq_fence)
+               dma_fence_put(job->base.irq_fence);
+       job->base.irq_fence = dma_fence_get(fence);
 
-       trace_v3d_submit_cl(dev, q == V3D_RENDER, to_v3d_fence(fence)->seqno,
+       trace_v3d_submit_cl(dev, true, to_v3d_fence(fence)->seqno,
                            job->start, job->end);
 
-       if (q == V3D_BIN) {
-               if (exec->qma) {
-                       V3D_CORE_WRITE(0, V3D_CLE_CT0QMA, exec->qma);
-                       V3D_CORE_WRITE(0, V3D_CLE_CT0QMS, exec->qms);
-               }
-               if (exec->qts) {
-                       V3D_CORE_WRITE(0, V3D_CLE_CT0QTS,
-                                      V3D_CLE_CT0QTS_ENABLE |
-                                      exec->qts);
-               }
-       } else {
-               /* XXX: Set the QCFG */
-       }
+       /* XXX: Set the QCFG */
 
        /* Set the current and end address of the control list.
         * Writing the end register is what starts the job.
         */
-       V3D_CORE_WRITE(0, V3D_CLE_CTNQBA(q), job->start);
-       V3D_CORE_WRITE(0, V3D_CLE_CTNQEA(q), job->end);
+       V3D_CORE_WRITE(0, V3D_CLE_CT1QBA, job->start);
+       V3D_CORE_WRITE(0, V3D_CLE_CT1QEA, job->end);
 
        return fence;
 }
@@ -190,7 +184,7 @@ static struct dma_fence *
 v3d_tfu_job_run(struct drm_sched_job *sched_job)
 {
        struct v3d_tfu_job *job = to_tfu_job(sched_job);
-       struct v3d_dev *v3d = job->v3d;
+       struct v3d_dev *v3d = job->base.v3d;
        struct drm_device *dev = &v3d->drm;
        struct dma_fence *fence;
 
@@ -199,9 +193,9 @@ v3d_tfu_job_run(struct drm_sched_job *sched_job)
                return NULL;
 
        v3d->tfu_job = job;
-       if (job->irq_fence)
-               dma_fence_put(job->irq_fence);
-       job->irq_fence = dma_fence_get(fence);
+       if (job->base.irq_fence)
+               dma_fence_put(job->base.irq_fence);
+       job->base.irq_fence = dma_fence_get(fence);
 
        trace_v3d_submit_tfu(dev, to_v3d_fence(fence)->seqno);
 
@@ -223,6 +217,48 @@ v3d_tfu_job_run(struct drm_sched_job *sched_job)
        return fence;
 }
 
+static struct dma_fence *
+v3d_csd_job_run(struct drm_sched_job *sched_job)
+{
+       struct v3d_csd_job *job = to_csd_job(sched_job);
+       struct v3d_dev *v3d = job->base.v3d;
+       struct drm_device *dev = &v3d->drm;
+       struct dma_fence *fence;
+       int i;
+
+       v3d->csd_job = job;
+
+       v3d_invalidate_caches(v3d);
+
+       fence = v3d_fence_create(v3d, V3D_CSD);
+       if (IS_ERR(fence))
+               return NULL;
+
+       if (job->base.irq_fence)
+               dma_fence_put(job->base.irq_fence);
+       job->base.irq_fence = dma_fence_get(fence);
+
+       trace_v3d_submit_csd(dev, to_v3d_fence(fence)->seqno);
+
+       for (i = 1; i <= 6; i++)
+               V3D_CORE_WRITE(0, V3D_CSD_QUEUED_CFG0 + 4 * i, job->args.cfg[i]);
+       /* CFG0 write kicks off the job. */
+       V3D_CORE_WRITE(0, V3D_CSD_QUEUED_CFG0, job->args.cfg[0]);
+
+       return fence;
+}
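The kick-off sequence in v3d_csd_job_run() above depends on write ordering: CFG1..CFG6 are programmed first, and the CFG0 write is what queues the dispatch, so it must come last. A model with a logged fake register file standing in for the real MMIO writes:

```c
#include <assert.h>
#include <stdint.h>

#define CSD_NUM_CFG 7

/* Record the order of "register" writes instead of touching MMIO. */
static int write_log[CSD_NUM_CFG];
static int nwrites;

static void reg_write(int cfg_index, uint32_t value)
{
        (void)value;
        write_log[nwrites++] = cfg_index;
}

static void csd_kick(const uint32_t cfg[CSD_NUM_CFG])
{
        int i;

        for (i = 1; i <= 6; i++)
                reg_write(i, cfg[i]);
        /* CFG0 write kicks off the job, so it goes last. */
        reg_write(0, cfg[0]);
}
```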
+
+static struct dma_fence *
+v3d_cache_clean_job_run(struct drm_sched_job *sched_job)
+{
+       struct v3d_job *job = to_v3d_job(sched_job);
+       struct v3d_dev *v3d = job->v3d;
+
+       v3d_clean_caches(v3d);
+
+       return NULL;
+}
+
 static void
 v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
 {
@@ -232,7 +268,7 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
 
        /* block scheduler */
        for (q = 0; q < V3D_MAX_QUEUES; q++)
-               drm_sched_stop(&v3d->queue[q].sched);
+               drm_sched_stop(&v3d->queue[q].sched, sched_job);
 
        if (sched_job)
                drm_sched_increase_karma(sched_job);
@@ -251,25 +287,23 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
        mutex_unlock(&v3d->reset_lock);
 }
 
+/* If the current address or return address has changed, then the GPU
+ * has probably made progress and we should delay the reset.  This
+ * could fail if the GPU got in an infinite loop in the CL, but that
+ * is pretty unlikely outside of an i-g-t testcase.
+ */
 static void
-v3d_job_timedout(struct drm_sched_job *sched_job)
+v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
+                   u32 *timedout_ctca, u32 *timedout_ctra)
 {
        struct v3d_job *job = to_v3d_job(sched_job);
-       struct v3d_exec_info *exec = job->exec;
-       struct v3d_dev *v3d = exec->v3d;
-       enum v3d_queue job_q = job == &exec->bin ? V3D_BIN : V3D_RENDER;
-       u32 ctca = V3D_CORE_READ(0, V3D_CLE_CTNCA(job_q));
-       u32 ctra = V3D_CORE_READ(0, V3D_CLE_CTNRA(job_q));
-
-       /* If the current address or return address have changed, then
-        * the GPU has probably made progress and we should delay the
-        * reset.  This could fail if the GPU got in an infinite loop
-        * in the CL, but that is pretty unlikely outside of an i-g-t
-        * testcase.
-        */
-       if (job->timedout_ctca != ctca || job->timedout_ctra != ctra) {
-               job->timedout_ctca = ctca;
-               job->timedout_ctra = ctra;
+       struct v3d_dev *v3d = job->v3d;
+       u32 ctca = V3D_CORE_READ(0, V3D_CLE_CTNCA(q));
+       u32 ctra = V3D_CORE_READ(0, V3D_CLE_CTNRA(q));
+
+       if (*timedout_ctca != ctca || *timedout_ctra != ctra) {
+               *timedout_ctca = ctca;
+               *timedout_ctra = ctra;
                return;
        }
 
@@ -277,25 +311,82 @@ v3d_job_timedout(struct drm_sched_job *sched_job)
 }
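The progress check in v3d_cl_job_timedout() above boils down to: if the current/return addresses moved since the last timeout, remember them and skip the reset; only reset when they are unchanged. A pure-illustration model, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Return true when the GPU should be reset: the control-list current
 * and return addresses are identical to the values captured at the
 * previous timeout, so no progress was made. */
static bool cl_should_reset(uint32_t ctca, uint32_t ctra,
                            uint32_t *timedout_ctca, uint32_t *timedout_ctra)
{
        if (*timedout_ctca != ctca || *timedout_ctra != ctra) {
                *timedout_ctca = ctca;
                *timedout_ctra = ctra;
                return false;   /* progress made; let the timer rearm */
        }
        return true;            /* stuck; trigger a GPU reset */
}
```

The CSD variant uses the same idea with the completed-batches counter (V3D_CSD_CURRENT_CFG4) instead of control-list addresses.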
 
 static void
-v3d_tfu_job_timedout(struct drm_sched_job *sched_job)
+v3d_bin_job_timedout(struct drm_sched_job *sched_job)
 {
-       struct v3d_tfu_job *job = to_tfu_job(sched_job);
+       struct v3d_bin_job *job = to_bin_job(sched_job);
+
+       v3d_cl_job_timedout(sched_job, V3D_BIN,
+                           &job->timedout_ctca, &job->timedout_ctra);
+}
+
+static void
+v3d_render_job_timedout(struct drm_sched_job *sched_job)
+{
+       struct v3d_render_job *job = to_render_job(sched_job);
+
+       v3d_cl_job_timedout(sched_job, V3D_RENDER,
+                           &job->timedout_ctca, &job->timedout_ctra);
+}
+
+static void
+v3d_generic_job_timedout(struct drm_sched_job *sched_job)
+{
+       struct v3d_job *job = to_v3d_job(sched_job);
 
        v3d_gpu_reset_for_timeout(job->v3d, sched_job);
 }
 
-static const struct drm_sched_backend_ops v3d_sched_ops = {
+static void
+v3d_csd_job_timedout(struct drm_sched_job *sched_job)
+{
+       struct v3d_csd_job *job = to_csd_job(sched_job);
+       struct v3d_dev *v3d = job->base.v3d;
+       u32 batches = V3D_CORE_READ(0, V3D_CSD_CURRENT_CFG4);
+
+       /* If we've made progress, skip reset and let the timer get
+        * rearmed.
+        */
+       if (job->timedout_batches != batches) {
+               job->timedout_batches = batches;
+               return;
+       }
+
+       v3d_gpu_reset_for_timeout(v3d, sched_job);
+}
+
+static const struct drm_sched_backend_ops v3d_bin_sched_ops = {
        .dependency = v3d_job_dependency,
-       .run_job = v3d_job_run,
-       .timedout_job = v3d_job_timedout,
-       .free_job = v3d_job_free
+       .run_job = v3d_bin_job_run,
+       .timedout_job = v3d_bin_job_timedout,
+       .free_job = v3d_job_free,
+};
+
+static const struct drm_sched_backend_ops v3d_render_sched_ops = {
+       .dependency = v3d_job_dependency,
+       .run_job = v3d_render_job_run,
+       .timedout_job = v3d_render_job_timedout,
+       .free_job = v3d_job_free,
 };
 
 static const struct drm_sched_backend_ops v3d_tfu_sched_ops = {
-       .dependency = v3d_tfu_job_dependency,
+       .dependency = v3d_job_dependency,
        .run_job = v3d_tfu_job_run,
-       .timedout_job = v3d_tfu_job_timedout,
-       .free_job = v3d_tfu_job_free
+       .timedout_job = v3d_generic_job_timedout,
+       .free_job = v3d_job_free,
+};
+
+static const struct drm_sched_backend_ops v3d_csd_sched_ops = {
+       .dependency = v3d_job_dependency,
+       .run_job = v3d_csd_job_run,
+       .timedout_job = v3d_csd_job_timedout,
+       .free_job = v3d_job_free,
+};
+
+static const struct drm_sched_backend_ops v3d_cache_clean_sched_ops = {
+       .dependency = v3d_job_dependency,
+       .run_job = v3d_cache_clean_job_run,
+       .timedout_job = v3d_generic_job_timedout,
+       .free_job = v3d_job_free,
 };
 
 int
@@ -307,7 +398,7 @@ v3d_sched_init(struct v3d_dev *v3d)
        int ret;
 
        ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
-                            &v3d_sched_ops,
+                            &v3d_bin_sched_ops,
                             hw_jobs_limit, job_hang_limit,
                             msecs_to_jiffies(hang_limit_ms),
                             "v3d_bin");
@@ -317,14 +408,14 @@ v3d_sched_init(struct v3d_dev *v3d)
        }
 
        ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
-                            &v3d_sched_ops,
+                            &v3d_render_sched_ops,
                             hw_jobs_limit, job_hang_limit,
                             msecs_to_jiffies(hang_limit_ms),
                             "v3d_render");
        if (ret) {
                dev_err(v3d->dev, "Failed to create render scheduler: %d.",
                        ret);
-               drm_sched_fini(&v3d->queue[V3D_BIN].sched);
+               v3d_sched_fini(v3d);
                return ret;
        }
 
@@ -336,11 +427,36 @@ v3d_sched_init(struct v3d_dev *v3d)
        if (ret) {
                dev_err(v3d->dev, "Failed to create TFU scheduler: %d.",
                        ret);
-               drm_sched_fini(&v3d->queue[V3D_RENDER].sched);
-               drm_sched_fini(&v3d->queue[V3D_BIN].sched);
+               v3d_sched_fini(v3d);
                return ret;
        }
 
+       if (v3d_has_csd(v3d)) {
+               ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
+                                    &v3d_csd_sched_ops,
+                                    hw_jobs_limit, job_hang_limit,
+                                    msecs_to_jiffies(hang_limit_ms),
+                                    "v3d_csd");
+               if (ret) {
+                       dev_err(v3d->dev, "Failed to create CSD scheduler: %d.",
+                               ret);
+                       v3d_sched_fini(v3d);
+                       return ret;
+               }
+
+               ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
+                                    &v3d_cache_clean_sched_ops,
+                                    hw_jobs_limit, job_hang_limit,
+                                    msecs_to_jiffies(hang_limit_ms),
+                                    "v3d_cache_clean");
+               if (ret) {
+                       dev_err(v3d->dev, "Failed to create CACHE_CLEAN scheduler: %d.",
+                               ret);
+                       v3d_sched_fini(v3d);
+                       return ret;
+               }
+       }
+
        return 0;
 }
 
@@ -349,6 +465,8 @@ v3d_sched_fini(struct v3d_dev *v3d)
 {
        enum v3d_queue q;
 
-       for (q = 0; q < V3D_MAX_QUEUES; q++)
-               drm_sched_fini(&v3d->queue[q].sched);
+       for (q = 0; q < V3D_MAX_QUEUES; q++) {
+               if (v3d->queue[q].sched.ready)
+                       drm_sched_fini(&v3d->queue[q].sched);
+       }
 }
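The reworked v3d_sched_fini() above only finalizes schedulers whose `ready` flag is set, which lets every error path in v3d_sched_init() collapse into a single `v3d_sched_fini(v3d)` call instead of a growing chain of per-queue drm_sched_fini() calls. A minimal userspace sketch of that guarded-teardown pattern (all names here are hypothetical stand-ins, not the real DRM API):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_QUEUES 4

struct queue { bool ready; };
struct dev { struct queue queue[MAX_QUEUES]; };

/* Hypothetical stand-ins for drm_sched_init()/drm_sched_fini(). */
static int sched_init(struct queue *q, bool fail)
{
	if (fail)
		return -1;
	q->ready = true;
	return 0;
}

static void sched_fini(struct queue *q)
{
	q->ready = false;
}

/* Mirrors the reworked v3d_sched_fini(): only tear down schedulers
 * that actually came up, so it is safe after a partial init. */
static void dev_sched_fini(struct dev *d)
{
	int q;

	for (q = 0; q < MAX_QUEUES; q++) {
		if (d->queue[q].ready)
			sched_fini(&d->queue[q]);
	}
}

/* Mirrors v3d_sched_init(): on any failure, one call to the guarded
 * fini unwinds everything that was initialized so far. */
static int dev_sched_init(struct dev *d, int fail_at)
{
	int q;

	memset(d, 0, sizeof(*d));
	for (q = 0; q < MAX_QUEUES; q++) {
		if (sched_init(&d->queue[q], q == fail_at)) {
			dev_sched_fini(d);
			return -1;
		}
	}
	return 0;
}
```

The same fini works both as the normal unload path and as the error-unwind path, which is why the CSD and cache-clean branches in the diff can reuse it unconditionally.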
index edd984a..7aa8dc3 100644
@@ -124,6 +124,26 @@ TRACE_EVENT(v3d_tfu_irq,
                      __entry->seqno)
 );
 
+TRACE_EVENT(v3d_csd_irq,
+           TP_PROTO(struct drm_device *dev,
+                    uint64_t seqno),
+           TP_ARGS(dev, seqno),
+
+           TP_STRUCT__entry(
+                            __field(u32, dev)
+                            __field(u64, seqno)
+                            ),
+
+           TP_fast_assign(
+                          __entry->dev = dev->primary->index;
+                          __entry->seqno = seqno;
+                          ),
+
+           TP_printk("dev=%u, seqno=%llu",
+                     __entry->dev,
+                     __entry->seqno)
+);
+
 TRACE_EVENT(v3d_submit_tfu_ioctl,
            TP_PROTO(struct drm_device *dev, u32 iia),
            TP_ARGS(dev, iia),
@@ -163,6 +183,80 @@ TRACE_EVENT(v3d_submit_tfu,
                      __entry->seqno)
 );
 
+TRACE_EVENT(v3d_submit_csd_ioctl,
+           TP_PROTO(struct drm_device *dev, u32 cfg5, u32 cfg6),
+           TP_ARGS(dev, cfg5, cfg6),
+
+           TP_STRUCT__entry(
+                            __field(u32, dev)
+                            __field(u32, cfg5)
+                            __field(u32, cfg6)
+                            ),
+
+           TP_fast_assign(
+                          __entry->dev = dev->primary->index;
+                          __entry->cfg5 = cfg5;
+                          __entry->cfg6 = cfg6;
+                          ),
+
+           TP_printk("dev=%u, CFG5 0x%08x, CFG6 0x%08x",
+                     __entry->dev,
+                     __entry->cfg5,
+                     __entry->cfg6)
+);
+
+TRACE_EVENT(v3d_submit_csd,
+           TP_PROTO(struct drm_device *dev,
+                    uint64_t seqno),
+           TP_ARGS(dev, seqno),
+
+           TP_STRUCT__entry(
+                            __field(u32, dev)
+                            __field(u64, seqno)
+                            ),
+
+           TP_fast_assign(
+                          __entry->dev = dev->primary->index;
+                          __entry->seqno = seqno;
+                          ),
+
+           TP_printk("dev=%u, seqno=%llu",
+                     __entry->dev,
+                     __entry->seqno)
+);
+
+TRACE_EVENT(v3d_cache_clean_begin,
+           TP_PROTO(struct drm_device *dev),
+           TP_ARGS(dev),
+
+           TP_STRUCT__entry(
+                            __field(u32, dev)
+                            ),
+
+           TP_fast_assign(
+                          __entry->dev = dev->primary->index;
+                          ),
+
+           TP_printk("dev=%u",
+                     __entry->dev)
+);
+
+TRACE_EVENT(v3d_cache_clean_end,
+           TP_PROTO(struct drm_device *dev),
+           TP_ARGS(dev),
+
+           TP_STRUCT__entry(
+                            __field(u32, dev)
+                            ),
+
+           TP_fast_assign(
+                          __entry->dev = dev->primary->index;
+                          ),
+
+           TP_printk("dev=%u",
+                     __entry->dev)
+);
+
 TRACE_EVENT(v3d_reset_begin,
            TP_PROTO(struct drm_device *dev),
            TP_ARGS(dev),
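Each of the new tracepoints records the DRM minor index, and the job-related ones add the fence seqno; the cache-clean pair brackets the operation so its duration can be read off the trace. A userspace sketch of the record formats (the format strings match the TP_printk() lines in the diff; the helper names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Format a v3d_csd_irq-style record: minor index plus fence seqno. */
static int trace_csd_irq(char *buf, size_t n, uint32_t dev, uint64_t seqno)
{
	return snprintf(buf, n, "dev=%u, seqno=%llu",
			dev, (unsigned long long)seqno);
}

/* Format a v3d_cache_clean_begin/end-style record: minor index only.
 * Begin/end pairs let a trace consumer compute the operation's duration. */
static int trace_cache_clean(char *buf, size_t n, uint32_t dev)
{
	return snprintf(buf, n, "dev=%u", dev);
}
```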
index d6ab955..56ba510 100644
@@ -3,7 +3,7 @@ config DRM_VBOXVIDEO
        tristate "Virtual Box Graphics Card"
        depends on DRM && X86 && PCI
        select DRM_KMS_HELPER
-       select DRM_TTM
+       select DRM_VRAM_HELPER
        select GENERIC_ALLOCATOR
        help
          This is a KMS driver for the virtual Graphics Card used in
index fb6a0f0..02537ab 100644
@@ -191,13 +191,7 @@ static struct pci_driver vbox_pci_driver = {
 
 static const struct file_operations vbox_fops = {
        .owner = THIS_MODULE,
-       .open = drm_open,
-       .release = drm_release,
-       .unlocked_ioctl = drm_ioctl,
-       .compat_ioctl = drm_compat_ioctl,
-       .mmap = vbox_mmap,
-       .poll = drm_poll,
-       .read = drm_read,
+       DRM_VRAM_MM_FILE_OPERATIONS
 };
 
 static struct drm_driver driver = {
@@ -215,9 +209,7 @@ static struct drm_driver driver = {
        .minor = DRIVER_MINOR,
        .patchlevel = DRIVER_PATCHLEVEL,
 
-       .gem_free_object_unlocked = vbox_gem_free_object,
-       .dumb_create = vbox_dumb_create,
-       .dumb_map_offset = vbox_dumb_mmap_offset,
+       DRM_GEM_VRAM_DRIVER,
        .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
        .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
        .gem_prime_export = drm_gem_prime_export,
index ece31f3..9028f94 100644
 #include <drm/drm_encoder.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_vram_helper.h>
 
-#include <drm/ttm/ttm_bo_api.h>
-#include <drm/ttm/ttm_bo_driver.h>
-#include <drm/ttm/ttm_placement.h>
-#include <drm/ttm/ttm_memory.h>
-#include <drm/ttm/ttm_module.h>
+#include <drm/drm_vram_mm_helper.h>
 
 #include "vboxvideo_guest.h"
 #include "vboxvideo_vbe.h"
@@ -77,10 +74,6 @@ struct vbox_private {
 
        int fb_mtrr;
 
-       struct {
-               struct ttm_bo_device bdev;
-       } ttm;
-
        struct mutex hw_mutex; /* protects modeset and accel/vbva accesses */
        struct work_struct hotplug_work;
        u32 input_mapping_width;
@@ -96,8 +89,6 @@ struct vbox_private {
 #undef CURSOR_PIXEL_COUNT
 #undef CURSOR_DATA_SIZE
 
-struct vbox_gem_object;
-
 struct vbox_connector {
        struct drm_connector base;
        char name[32];
@@ -170,74 +161,12 @@ int vboxfb_create(struct drm_fb_helper *helper,
                  struct drm_fb_helper_surface_size *sizes);
 void vbox_fbdev_fini(struct vbox_private *vbox);
 
-struct vbox_bo {
-       struct ttm_buffer_object bo;
-       struct ttm_placement placement;
-       struct ttm_bo_kmap_obj kmap;
-       struct drm_gem_object gem;
-       struct ttm_place placements[3];
-       int pin_count;
-};
-
-#define gem_to_vbox_bo(gobj) container_of((gobj), struct vbox_bo, gem)
-
-static inline struct vbox_bo *vbox_bo(struct ttm_buffer_object *bo)
-{
-       return container_of(bo, struct vbox_bo, bo);
-}
-
-#define to_vbox_obj(x) container_of(x, struct vbox_gem_object, base)
-
-static inline u64 vbox_bo_gpu_offset(struct vbox_bo *bo)
-{
-       return bo->bo.offset;
-}
-
-int vbox_dumb_create(struct drm_file *file,
-                    struct drm_device *dev,
-                    struct drm_mode_create_dumb *args);
-
-void vbox_gem_free_object(struct drm_gem_object *obj);
-int vbox_dumb_mmap_offset(struct drm_file *file,
-                         struct drm_device *dev,
-                         u32 handle, u64 *offset);
-
 int vbox_mm_init(struct vbox_private *vbox);
 void vbox_mm_fini(struct vbox_private *vbox);
 
-int vbox_bo_create(struct vbox_private *vbox, int size, int align,
-                  u32 flags, struct vbox_bo **pvboxbo);
-
 int vbox_gem_create(struct vbox_private *vbox,
                    u32 size, bool iskernel, struct drm_gem_object **obj);
 
-int vbox_bo_pin(struct vbox_bo *bo, u32 pl_flag);
-int vbox_bo_unpin(struct vbox_bo *bo);
-
-static inline int vbox_bo_reserve(struct vbox_bo *bo, bool no_wait)
-{
-       int ret;
-
-       ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL);
-       if (ret) {
-               if (ret != -ERESTARTSYS && ret != -EBUSY)
-                       DRM_ERROR("reserve failed %p\n", bo);
-               return ret;
-       }
-       return 0;
-}
-
-static inline void vbox_bo_unreserve(struct vbox_bo *bo)
-{
-       ttm_bo_unreserve(&bo->bo);
-}
-
-void vbox_ttm_placement(struct vbox_bo *bo, int domain);
-int vbox_bo_push_sysram(struct vbox_bo *bo);
-int vbox_mmap(struct file *filp, struct vm_area_struct *vma);
-void *vbox_bo_kmap(struct vbox_bo *bo);
-void vbox_bo_kunmap(struct vbox_bo *bo);
-
 /* vbox_prime.c */
 int vbox_gem_prime_pin(struct drm_gem_object *obj);
 void vbox_gem_prime_unpin(struct drm_gem_object *obj);
index b724fe7..8f74bcf 100644
@@ -51,9 +51,9 @@ int vboxfb_create(struct drm_fb_helper *helper,
        struct drm_framebuffer *fb;
        struct fb_info *info;
        struct drm_gem_object *gobj;
-       struct vbox_bo *bo;
+       struct drm_gem_vram_object *gbo;
        int size, ret;
-       u64 gpu_addr;
+       s64 gpu_addr;
        u32 pitch;
 
        mode_cmd.width = sizes->surface_width;
@@ -75,9 +75,9 @@ int vboxfb_create(struct drm_fb_helper *helper,
        if (ret)
                return ret;
 
-       bo = gem_to_vbox_bo(gobj);
+       gbo = drm_gem_vram_of_gem(gobj);
 
-       ret = vbox_bo_pin(bo, TTM_PL_FLAG_VRAM);
+       ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret)
                return ret;
 
@@ -86,7 +86,7 @@ int vboxfb_create(struct drm_fb_helper *helper,
                return PTR_ERR(info);
 
        info->screen_size = size;
-       info->screen_base = (char __iomem *)vbox_bo_kmap(bo);
+       info->screen_base = (char __iomem *)drm_gem_vram_kmap(gbo, true, NULL);
        if (IS_ERR(info->screen_base))
                return PTR_ERR(info->screen_base);
 
@@ -104,7 +104,9 @@ int vboxfb_create(struct drm_fb_helper *helper,
 
        drm_fb_helper_fill_info(info, helper, sizes);
 
-       gpu_addr = vbox_bo_gpu_offset(bo);
+       gpu_addr = drm_gem_vram_offset(gbo);
+       if (gpu_addr < 0)
+               return (int)gpu_addr;
        info->fix.smem_start = info->apertures->ranges[0].base + gpu_addr;
        info->fix.smem_len = vbox->available_vram_size - gpu_addr;
 
@@ -132,12 +134,10 @@ void vbox_fbdev_fini(struct vbox_private *vbox)
        drm_fb_helper_unregister_fbi(&vbox->fb_helper);
 
        if (afb->obj) {
-               struct vbox_bo *bo = gem_to_vbox_bo(afb->obj);
+               struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(afb->obj);
 
-               vbox_bo_kunmap(bo);
-
-               if (bo->pin_count)
-                       vbox_bo_unpin(bo);
+               drm_gem_vram_kunmap(gbo);
+               drm_gem_vram_unpin(gbo);
 
                drm_gem_object_put_unlocked(afb->obj);
                afb->obj = NULL;
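Note the `u64` → `s64` change for `gpu_addr` above: drm_gem_vram_offset() returns the offset in the signed value and encodes failure as a negative errno, which is why the new `gpu_addr < 0` check propagates it. A small userspace sketch of that convention (the allocator and aperture base are hypothetical, not the real driver values):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical stand-in for drm_gem_vram_offset(): a valid GPU offset
 * is non-negative; errors come back as a negative errno in the s64. */
static int64_t gem_vram_offset(int pinned)
{
	if (!pinned)
		return -(int64_t)ENODEV;	/* no stable offset yet */
	return 0x100000;			/* example VRAM offset */
}

/* Mirrors the vboxfb_create() pattern: check the sign before using
 * the offset, and propagate the errno on failure. */
static int map_framebuffer(int pinned, uint64_t *smem_start)
{
	int64_t gpu_addr = gem_vram_offset(pinned);

	if (gpu_addr < 0)
		return (int)gpu_addr;
	/* aperture base + offset, as with info->fix.smem_start */
	*smem_start = 0xd0000000ull + (uint64_t)gpu_addr;
	return 0;
}
```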
index f4d02de..18693e2 100644
@@ -274,7 +274,7 @@ void vbox_hw_fini(struct vbox_private *vbox)
 int vbox_gem_create(struct vbox_private *vbox,
                    u32 size, bool iskernel, struct drm_gem_object **obj)
 {
-       struct vbox_bo *vboxbo;
+       struct drm_gem_vram_object *gbo;
        int ret;
 
        *obj = NULL;
@@ -283,79 +283,16 @@ int vbox_gem_create(struct vbox_private *vbox,
        if (size == 0)
                return -EINVAL;
 
-       ret = vbox_bo_create(vbox, size, 0, 0, &vboxbo);
-       if (ret) {
+       gbo = drm_gem_vram_create(&vbox->ddev, &vbox->ddev.vram_mm->bdev,
+                                 size, 0, false);
+       if (IS_ERR(gbo)) {
+               ret = PTR_ERR(gbo);
                if (ret != -ERESTARTSYS)
                        DRM_ERROR("failed to allocate GEM object\n");
                return ret;
        }
 
-       *obj = &vboxbo->gem;
-
-       return 0;
-}
-
-int vbox_dumb_create(struct drm_file *file,
-                    struct drm_device *dev, struct drm_mode_create_dumb *args)
-{
-       struct vbox_private *vbox =
-               container_of(dev, struct vbox_private, ddev);
-       struct drm_gem_object *gobj;
-       u32 handle;
-       int ret;
-
-       args->pitch = args->width * ((args->bpp + 7) / 8);
-       args->size = args->pitch * args->height;
-
-       ret = vbox_gem_create(vbox, args->size, false, &gobj);
-       if (ret)
-               return ret;
-
-       ret = drm_gem_handle_create(file, gobj, &handle);
-       drm_gem_object_put_unlocked(gobj);
-       if (ret)
-               return ret;
-
-       args->handle = handle;
+       *obj = &gbo->gem;
 
        return 0;
 }
-
-void vbox_gem_free_object(struct drm_gem_object *obj)
-{
-       struct vbox_bo *vbox_bo = gem_to_vbox_bo(obj);
-
-       ttm_bo_put(&vbox_bo->bo);
-}
-
-static inline u64 vbox_bo_mmap_offset(struct vbox_bo *bo)
-{
-       return drm_vma_node_offset_addr(&bo->bo.vma_node);
-}
-
-int
-vbox_dumb_mmap_offset(struct drm_file *file,
-                     struct drm_device *dev,
-                     u32 handle, u64 *offset)
-{
-       struct drm_gem_object *obj;
-       int ret;
-       struct vbox_bo *bo;
-
-       mutex_lock(&dev->struct_mutex);
-       obj = drm_gem_object_lookup(file, handle);
-       if (!obj) {
-               ret = -ENOENT;
-               goto out_unlock;
-       }
-
-       bo = gem_to_vbox_bo(obj);
-       *offset = vbox_bo_mmap_offset(bo);
-
-       drm_gem_object_put(obj);
-       ret = 0;
-
-out_unlock:
-       mutex_unlock(&dev->struct_mutex);
-       return ret;
-}
index 58cea13..e1e48ba 100644
@@ -172,7 +172,8 @@ static void vbox_crtc_set_base_and_mode(struct drm_crtc *crtc,
                                        struct drm_framebuffer *fb,
                                        int x, int y)
 {
-       struct vbox_bo *bo = gem_to_vbox_bo(to_vbox_framebuffer(fb)->obj);
+       struct drm_gem_vram_object *gbo =
+               drm_gem_vram_of_gem(to_vbox_framebuffer(fb)->obj);
        struct vbox_private *vbox = crtc->dev->dev_private;
        struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc);
        bool needs_modeset = drm_atomic_crtc_needs_modeset(crtc->state);
@@ -186,7 +187,7 @@ static void vbox_crtc_set_base_and_mode(struct drm_crtc *crtc,
 
        vbox_crtc->x = x;
        vbox_crtc->y = y;
-       vbox_crtc->fb_offset = vbox_bo_gpu_offset(bo);
+       vbox_crtc->fb_offset = drm_gem_vram_offset(gbo);
 
        /* vbox_do_modeset() checks vbox->single_framebuffer so update it now */
        if (needs_modeset && vbox_set_up_input_mapping(vbox)) {
@@ -302,14 +303,14 @@ static void vbox_primary_atomic_disable(struct drm_plane *plane,
 static int vbox_primary_prepare_fb(struct drm_plane *plane,
                                   struct drm_plane_state *new_state)
 {
-       struct vbox_bo *bo;
+       struct drm_gem_vram_object *gbo;
        int ret;
 
        if (!new_state->fb)
                return 0;
 
-       bo = gem_to_vbox_bo(to_vbox_framebuffer(new_state->fb)->obj);
-       ret = vbox_bo_pin(bo, TTM_PL_FLAG_VRAM);
+       gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(new_state->fb)->obj);
+       ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret)
                DRM_WARN("Error %d pinning new fb, out of video mem?\n", ret);
 
@@ -319,13 +320,13 @@ static int vbox_primary_prepare_fb(struct drm_plane *plane,
 static void vbox_primary_cleanup_fb(struct drm_plane *plane,
                                    struct drm_plane_state *old_state)
 {
-       struct vbox_bo *bo;
+       struct drm_gem_vram_object *gbo;
 
        if (!old_state->fb)
                return;
 
-       bo = gem_to_vbox_bo(to_vbox_framebuffer(old_state->fb)->obj);
-       vbox_bo_unpin(bo);
+       gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(old_state->fb)->obj);
+       drm_gem_vram_unpin(gbo);
 }
 
 static int vbox_cursor_atomic_check(struct drm_plane *plane,
@@ -385,7 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
                container_of(plane->dev, struct vbox_private, ddev);
        struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc);
        struct drm_framebuffer *fb = plane->state->fb;
-       struct vbox_bo *bo = gem_to_vbox_bo(to_vbox_framebuffer(fb)->obj);
+       struct drm_gem_vram_object *gbo =
+               drm_gem_vram_of_gem(to_vbox_framebuffer(fb)->obj);
        u32 width = plane->state->crtc_w;
        u32 height = plane->state->crtc_h;
        size_t data_size, mask_size;
@@ -404,7 +406,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
        vbox_crtc->cursor_enabled = true;
 
        /* pinning is done in prepare/cleanup framebuffer */
-       src = vbox_bo_kmap(bo);
+       src = drm_gem_vram_kmap(gbo, true, NULL);
        if (IS_ERR(src)) {
                mutex_unlock(&vbox->hw_mutex);
                DRM_WARN("Could not kmap cursor bo, skipping update\n");
@@ -420,7 +422,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
        data_size = width * height * 4 + mask_size;
 
        copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-       vbox_bo_kunmap(bo);
+       drm_gem_vram_kunmap(gbo);
 
        flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
                VBOX_MOUSE_POINTER_ALPHA;
@@ -460,25 +462,25 @@ static void vbox_cursor_atomic_disable(struct drm_plane *plane,
 static int vbox_cursor_prepare_fb(struct drm_plane *plane,
                                  struct drm_plane_state *new_state)
 {
-       struct vbox_bo *bo;
+       struct drm_gem_vram_object *gbo;
 
        if (!new_state->fb)
                return 0;
 
-       bo = gem_to_vbox_bo(to_vbox_framebuffer(new_state->fb)->obj);
-       return vbox_bo_pin(bo, TTM_PL_FLAG_SYSTEM);
+       gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(new_state->fb)->obj);
+       return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_SYSTEM);
 }
 
 static void vbox_cursor_cleanup_fb(struct drm_plane *plane,
                                   struct drm_plane_state *old_state)
 {
-       struct vbox_bo *bo;
+       struct drm_gem_vram_object *gbo;
 
        if (!plane->state->fb)
                return;
 
-       bo = gem_to_vbox_bo(to_vbox_framebuffer(plane->state->fb)->obj);
-       vbox_bo_unpin(bo);
+       gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(plane->state->fb)->obj);
+       drm_gem_vram_unpin(gbo);
 }
 
 static const u32 vbox_cursor_plane_formats[] = {
index 9d78438..b82595a 100644
  */
 #include <linux/pci.h>
 #include <drm/drm_file.h>
-#include <drm/ttm/ttm_page_alloc.h>
 #include "vbox_drv.h"
 
-static inline struct vbox_private *vbox_bdev(struct ttm_bo_device *bd)
-{
-       return container_of(bd, struct vbox_private, ttm.bdev);
-}
-
-static void vbox_bo_ttm_destroy(struct ttm_buffer_object *tbo)
-{
-       struct vbox_bo *bo;
-
-       bo = container_of(tbo, struct vbox_bo, bo);
-
-       drm_gem_object_release(&bo->gem);
-       kfree(bo);
-}
-
-static bool vbox_ttm_bo_is_vbox_bo(struct ttm_buffer_object *bo)
-{
-       if (bo->destroy == &vbox_bo_ttm_destroy)
-               return true;
-
-       return false;
-}
-
-static int
-vbox_bo_init_mem_type(struct ttm_bo_device *bdev, u32 type,
-                     struct ttm_mem_type_manager *man)
-{
-       switch (type) {
-       case TTM_PL_SYSTEM:
-               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_MASK_CACHING;
-               man->default_caching = TTM_PL_FLAG_CACHED;
-               break;
-       case TTM_PL_VRAM:
-               man->func = &ttm_bo_manager_func;
-               man->flags = TTM_MEMTYPE_FLAG_FIXED | TTM_MEMTYPE_FLAG_MAPPABLE;
-               man->available_caching = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_WC;
-               man->default_caching = TTM_PL_FLAG_WC;
-               break;
-       default:
-               DRM_ERROR("Unsupported memory type %u\n", (unsigned int)type);
-               return -EINVAL;
-       }
-
-       return 0;
-}
-
-static void
-vbox_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
-{
-       struct vbox_bo *vboxbo = vbox_bo(bo);
-
-       if (!vbox_ttm_bo_is_vbox_bo(bo))
-               return;
-
-       vbox_ttm_placement(vboxbo, TTM_PL_FLAG_SYSTEM);
-       *pl = vboxbo->placement;
-}
-
-static int vbox_bo_verify_access(struct ttm_buffer_object *bo,
-                                struct file *filp)
-{
-       return 0;
-}
-
-static int vbox_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-                                  struct ttm_mem_reg *mem)
-{
-       struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-       struct vbox_private *vbox = vbox_bdev(bdev);
-
-       mem->bus.addr = NULL;
-       mem->bus.offset = 0;
-       mem->bus.size = mem->num_pages << PAGE_SHIFT;
-       mem->bus.base = 0;
-       mem->bus.is_iomem = false;
-       if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-               return -EINVAL;
-       switch (mem->mem_type) {
-       case TTM_PL_SYSTEM:
-               /* system memory */
-               return 0;
-       case TTM_PL_VRAM:
-               mem->bus.offset = mem->start << PAGE_SHIFT;
-               mem->bus.base = pci_resource_start(vbox->ddev.pdev, 0);
-               mem->bus.is_iomem = true;
-               break;
-       default:
-               return -EINVAL;
-       }
-       return 0;
-}
-
-static void vbox_ttm_io_mem_free(struct ttm_bo_device *bdev,
-                                struct ttm_mem_reg *mem)
-{
-}
-
-static void vbox_ttm_backend_destroy(struct ttm_tt *tt)
-{
-       ttm_tt_fini(tt);
-       kfree(tt);
-}
-
-static struct ttm_backend_func vbox_tt_backend_func = {
-       .destroy = &vbox_ttm_backend_destroy,
-};
-
-static struct ttm_tt *vbox_ttm_tt_create(struct ttm_buffer_object *bo,
-                                        u32 page_flags)
-{
-       struct ttm_tt *tt;
-
-       tt = kzalloc(sizeof(*tt), GFP_KERNEL);
-       if (!tt)
-               return NULL;
-
-       tt->func = &vbox_tt_backend_func;
-       if (ttm_tt_init(tt, bo, page_flags)) {
-               kfree(tt);
-               return NULL;
-       }
-
-       return tt;
-}
-
-static struct ttm_bo_driver vbox_bo_driver = {
-       .ttm_tt_create = vbox_ttm_tt_create,
-       .init_mem_type = vbox_bo_init_mem_type,
-       .eviction_valuable = ttm_bo_eviction_valuable,
-       .evict_flags = vbox_bo_evict_flags,
-       .verify_access = vbox_bo_verify_access,
-       .io_mem_reserve = &vbox_ttm_io_mem_reserve,
-       .io_mem_free = &vbox_ttm_io_mem_free,
-};
-
 int vbox_mm_init(struct vbox_private *vbox)
 {
+       struct drm_vram_mm *vmm;
        int ret;
        struct drm_device *dev = &vbox->ddev;
-       struct ttm_bo_device *bdev = &vbox->ttm.bdev;
 
-       ret = ttm_bo_device_init(&vbox->ttm.bdev,
-                                &vbox_bo_driver,
-                                dev->anon_inode->i_mapping,
-                                true);
-       if (ret) {
-               DRM_ERROR("Error initialising bo driver; %d\n", ret);
+       vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(dev->pdev, 0),
+                                      vbox->available_vram_size,
+                                      &drm_gem_vram_mm_funcs);
+       if (IS_ERR(vmm)) {
+               ret = PTR_ERR(vmm);
+               DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
                return ret;
        }
 
-       ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
-                            vbox->available_vram_size >> PAGE_SHIFT);
-       if (ret) {
-               DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
-               goto err_device_release;
-       }
-
 #ifdef DRM_MTRR_WC
        vbox->fb_mtrr = drm_mtrr_add(pci_resource_start(dev->pdev, 0),
                                     pci_resource_len(dev->pdev, 0),
@@ -178,10 +34,6 @@ int vbox_mm_init(struct vbox_private *vbox)
                                         pci_resource_len(dev->pdev, 0));
 #endif
        return 0;
-
-err_device_release:
-       ttm_bo_device_release(&vbox->ttm.bdev);
-       return ret;
 }
 
 void vbox_mm_fini(struct vbox_private *vbox)
@@ -193,196 +45,5 @@ void vbox_mm_fini(struct vbox_private *vbox)
 #else
        arch_phys_wc_del(vbox->fb_mtrr);
 #endif
-       ttm_bo_device_release(&vbox->ttm.bdev);
-}
-
-void vbox_ttm_placement(struct vbox_bo *bo, int domain)
-{
-       unsigned int i;
-       u32 c = 0;
-
-       bo->placement.placement = bo->placements;
-       bo->placement.busy_placement = bo->placements;
-
-       if (domain & TTM_PL_FLAG_VRAM)
-               bo->placements[c++].flags =
-                   TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
-       if (domain & TTM_PL_FLAG_SYSTEM)
-               bo->placements[c++].flags =
-                   TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
-       if (!c)
-               bo->placements[c++].flags =
-                   TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
-
-       bo->placement.num_placement = c;
-       bo->placement.num_busy_placement = c;
-
-       for (i = 0; i < c; ++i) {
-               bo->placements[i].fpfn = 0;
-               bo->placements[i].lpfn = 0;
-       }
-}
-
-int vbox_bo_create(struct vbox_private *vbox, int size, int align,
-                  u32 flags, struct vbox_bo **pvboxbo)
-{
-       struct vbox_bo *vboxbo;
-       size_t acc_size;
-       int ret;
-
-       vboxbo = kzalloc(sizeof(*vboxbo), GFP_KERNEL);
-       if (!vboxbo)
-               return -ENOMEM;
-
-       ret = drm_gem_object_init(&vbox->ddev, &vboxbo->gem, size);
-       if (ret)
-               goto err_free_vboxbo;
-
-       vboxbo->bo.bdev = &vbox->ttm.bdev;
-
-       vbox_ttm_placement(vboxbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
-
-       acc_size = ttm_bo_dma_acc_size(&vbox->ttm.bdev, size,
-                                      sizeof(struct vbox_bo));
-
-       ret = ttm_bo_init(&vbox->ttm.bdev, &vboxbo->bo, size,
-                         ttm_bo_type_device, &vboxbo->placement,
-                         align >> PAGE_SHIFT, false, acc_size,
-                         NULL, NULL, vbox_bo_ttm_destroy);
-       if (ret)
-               goto err_free_vboxbo;
-
-       *pvboxbo = vboxbo;
-
-       return 0;
-
-err_free_vboxbo:
-       kfree(vboxbo);
-       return ret;
-}
-
-int vbox_bo_pin(struct vbox_bo *bo, u32 pl_flag)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (bo->pin_count) {
-               bo->pin_count++;
-               return 0;
-       }
-
-       ret = vbox_bo_reserve(bo, false);
-       if (ret)
-               return ret;
-
-       vbox_ttm_placement(bo, pl_flag);
-
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret == 0)
-               bo->pin_count = 1;
-
-       vbox_bo_unreserve(bo);
-
-       return ret;
-}
-
-int vbox_bo_unpin(struct vbox_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-       if (bo->pin_count)
-               return 0;
-
-       ret = vbox_bo_reserve(bo, false);
-       if (ret) {
-               DRM_ERROR("Error %d reserving bo, leaving it pinned\n", ret);
-               return ret;
-       }
-
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
-
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-
-       vbox_bo_unreserve(bo);
-
-       return ret;
-}
-
-/*
- * Move a vbox-owned buffer object to system memory if no one else has it
- * pinned.  The caller must have pinned it previously, and this call will
- * release the caller's pin.
- */
-int vbox_bo_push_sysram(struct vbox_bo *bo)
-{
-       struct ttm_operation_ctx ctx = { false, false };
-       int i, ret;
-
-       if (!bo->pin_count) {
-               DRM_ERROR("unpin bad %p\n", bo);
-               return 0;
-       }
-       bo->pin_count--;
-       if (bo->pin_count)
-               return 0;
-
-       if (bo->kmap.virtual) {
-               ttm_bo_kunmap(&bo->kmap);
-               bo->kmap.virtual = NULL;
-       }
-
-       vbox_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
-
-       for (i = 0; i < bo->placement.num_placement; i++)
-               bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-
-       ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-       if (ret) {
-               DRM_ERROR("pushing to VRAM failed\n");
-               return ret;
-       }
-
-       return 0;
-}
-
-int vbox_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-       struct drm_file *file_priv = filp->private_data;
-       struct vbox_private *vbox = file_priv->minor->dev->dev_private;
-
-       return ttm_bo_mmap(filp, vma, &vbox->ttm.bdev);
-}
-
-void *vbox_bo_kmap(struct vbox_bo *bo)
-{
-       int ret;
-
-       if (bo->kmap.virtual)
-               return bo->kmap.virtual;
-
-       ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-       if (ret) {
-               DRM_ERROR("Error kmapping bo: %d\n", ret);
-               return NULL;
-       }
-
-       return bo->kmap.virtual;
-}
-
-void vbox_bo_kunmap(struct vbox_bo *bo)
-{
-       if (bo->kmap.virtual) {
-               ttm_bo_kunmap(&bo->kmap);
-               bo->kmap.virtual = NULL;
-       }
+       drm_vram_helper_release_mm(&vbox->ddev);
 }
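Both drm_vram_helper_alloc_mm() and drm_gem_vram_create() in the converted code return errors through the pointer itself, checked with IS_ERR() and decoded with PTR_ERR(). A userspace model of that kernel convention, which packs the errno into the top page of the address range instead of returning NULL (the allocator below is hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Userspace model of the kernel's ERR_PTR/IS_ERR/PTR_ERR helpers. */
#define MAX_ERRNO 4095

static void *ERR_PTR(long err)
{
	return (void *)err;
}

static long PTR_ERR(const void *p)
{
	return (long)p;
}

static int IS_ERR(const void *p)
{
	/* Error pointers live in the topmost MAX_ERRNO addresses. */
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

static int dummy_mm;

/* Hypothetical allocator following the same convention as
 * drm_vram_helper_alloc_mm(). */
static void *alloc_mm(int fail)
{
	if (fail)
		return ERR_PTR(-ENOMEM);
	return &dummy_mm;
}
```

This is why the vbox conversion writes `if (IS_ERR(vmm)) { ret = PTR_ERR(vmm); ... }` rather than testing for NULL.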
index 88ebd68..1434bb8 100644
@@ -799,13 +799,36 @@ vc4_prime_import_sg_table(struct drm_device *dev,
        return obj;
 }
 
+static int vc4_grab_bin_bo(struct vc4_dev *vc4, struct vc4_file *vc4file)
+{
+       int ret;
+
+       if (!vc4->v3d)
+               return -ENODEV;
+
+       if (vc4file->bin_bo_used)
+               return 0;
+
+       ret = vc4_v3d_bin_bo_get(vc4, &vc4file->bin_bo_used);
+       if (ret)
+               return ret;
+
+       return 0;
+}
+
 int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
                        struct drm_file *file_priv)
 {
        struct drm_vc4_create_bo *args = data;
+       struct vc4_file *vc4file = file_priv->driver_priv;
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_bo *bo = NULL;
        int ret;
 
+       ret = vc4_grab_bin_bo(vc4, vc4file);
+       if (ret)
+               return ret;
+
        /*
         * We can't allocate from the BO cache, because the BOs don't
         * get zeroed, and that might leak data between users.
@@ -846,6 +869,8 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
                           struct drm_file *file_priv)
 {
        struct drm_vc4_create_shader_bo *args = data;
+       struct vc4_file *vc4file = file_priv->driver_priv;
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_bo *bo = NULL;
        int ret;
 
@@ -865,6 +890,10 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
                return -EINVAL;
        }
 
+       ret = vc4_grab_bin_bo(vc4, vc4file);
+       if (ret)
+               return ret;
+
        bo = vc4_bo_create(dev, args->size, true, VC4_BO_TYPE_V3D_SHADER);
        if (IS_ERR(bo))
                return PTR_ERR(bo);
@@ -894,7 +923,7 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
         */
        ret = drm_gem_handle_create(file_priv, &bo->base.base, &args->handle);
 
- fail:
+fail:
        drm_gem_object_put_unlocked(&bo->base.base);
 
        return ret;
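The vc4_grab_bin_bo() helper above implements lazy, per-file acquisition of a device-wide resource: the first BO-creating ioctl on a file takes one reference on the shared binner BO (tracked by `bin_bo_kref`), repeated calls are no-ops thanks to the per-file `bin_bo_used` flag, and vc4_close() drops the reference iff the flag is set. A userspace sketch of that lifetime, with kref and the BO modeled as plain ints (the alloc/free points are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

struct device {
	int bin_bo_kref;	/* stands in for struct kref bin_bo_kref */
	bool bin_bo;		/* stands in for the allocated binner BO */
};

struct file { bool bin_bo_used; };

/* First user allocates; every user holds one reference. */
static int bin_bo_get(struct device *dev, bool *used)
{
	if (dev->bin_bo_kref == 0)
		dev->bin_bo = true;
	dev->bin_bo_kref++;
	*used = true;
	return 0;
}

/* Last user frees. */
static void bin_bo_put(struct device *dev)
{
	assert(dev->bin_bo_kref > 0);
	if (--dev->bin_bo_kref == 0)
		dev->bin_bo = false;
}

/* Mirrors vc4_grab_bin_bo(): at most one reference per open file. */
static int grab_bin_bo(struct device *dev, struct file *f)
{
	if (f->bin_bo_used)
		return 0;
	return bin_bo_get(dev, &f->bin_bo_used);
}

/* Mirrors vc4_close(): drop the reference iff this file took one. */
static void close_file(struct device *dev, struct file *f)
{
	if (f->bin_bo_used)
		bin_bo_put(dev);
}
```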
index 6d9be20..0f99ad0 100644
@@ -128,8 +128,12 @@ static int vc4_open(struct drm_device *dev, struct drm_file *file)
 
 static void vc4_close(struct drm_device *dev, struct drm_file *file)
 {
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        struct vc4_file *vc4file = file->driver_priv;
 
+       if (vc4file->bin_bo_used)
+               vc4_v3d_bin_bo_put(vc4);
+
        vc4_perfmon_close_file(vc4file);
        kfree(vc4file);
 }
@@ -274,6 +278,8 @@ static int vc4_drm_bind(struct device *dev)
        drm->dev_private = vc4;
        INIT_LIST_HEAD(&vc4->debugfs_list);
 
+       mutex_init(&vc4->bin_bo_lock);
+
        ret = vc4_bo_cache_init(drm);
        if (ret)
                goto dev_put;
index 4f13f62..9170a24 100644 (file)
@@ -216,6 +216,11 @@ struct vc4_dev {
         * the minor is available (after drm_dev_register()).
         */
        struct list_head debugfs_list;
+
+       /* Mutex for binner bo allocation. */
+       struct mutex bin_bo_lock;
+       /* Reference count for our binner bo. */
+       struct kref bin_bo_kref;
 };
 
 static inline struct vc4_dev *
@@ -584,6 +589,11 @@ struct vc4_exec_info {
         * NULL otherwise.
         */
        struct vc4_perfmon *perfmon;
+
+       /* Whether the exec has taken a reference to the binner BO, which should
+        * happen with a VC4_PACKET_TILE_BINNING_MODE_CONFIG packet.
+        */
+       bool bin_bo_used;
 };
 
 /* Per-open file private data. Any driver-specific resource that has to be
@@ -594,6 +604,8 @@ struct vc4_file {
                struct idr idr;
                struct mutex lock;
        } perfmon;
+
+       bool bin_bo_used;
 };
 
 static inline struct vc4_exec_info *
@@ -833,6 +845,8 @@ void vc4_plane_async_set_fb(struct drm_plane *plane,
 extern struct platform_driver vc4_v3d_driver;
 extern const struct of_device_id vc4_v3d_dt_match[];
 int vc4_v3d_get_bin_slot(struct vc4_dev *vc4);
+int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used);
+void vc4_v3d_bin_bo_put(struct vc4_dev *vc4);
 int vc4_v3d_pm_get(struct vc4_dev *vc4);
 void vc4_v3d_pm_put(struct vc4_dev *vc4);
 
index d9311be..84795d9 100644 (file)
@@ -820,6 +820,7 @@ static int
 vc4_get_bcl(struct drm_device *dev, struct vc4_exec_info *exec)
 {
        struct drm_vc4_submit_cl *args = exec->args;
+       struct vc4_dev *vc4 = to_vc4_dev(dev);
        void *temp = NULL;
        void *bin;
        int ret = 0;
@@ -918,6 +919,12 @@ vc4_get_bcl(struct drm_device *dev, struct vc4_exec_info *exec)
        if (ret)
                goto fail;
 
+       if (exec->found_tile_binning_mode_config_packet) {
+               ret = vc4_v3d_bin_bo_get(vc4, &exec->bin_bo_used);
+               if (ret)
+                       goto fail;
+       }
+
        /* Block waiting on any previous rendering into the CS's VBO,
         * IB, or textures, so that pixels are actually written by the
         * time we try to read them.
@@ -966,6 +973,10 @@ vc4_complete_exec(struct drm_device *dev, struct vc4_exec_info *exec)
        vc4->bin_alloc_used &= ~exec->bin_slots;
        spin_unlock_irqrestore(&vc4->job_lock, irqflags);
 
+       /* Release the reference on the binner BO if needed. */
+       if (exec->bin_bo_used)
+               vc4_v3d_bin_bo_put(vc4);
+
        /* Release the reference we had on the perf monitor. */
        vc4_perfmon_put(exec->perfmon);
 
index ffd0a43..e226c24 100644 (file)
@@ -59,15 +59,22 @@ vc4_overflow_mem_work(struct work_struct *work)
 {
        struct vc4_dev *vc4 =
                container_of(work, struct vc4_dev, overflow_mem_work);
-       struct vc4_bo *bo = vc4->bin_bo;
+       struct vc4_bo *bo;
        int bin_bo_slot;
        struct vc4_exec_info *exec;
        unsigned long irqflags;
 
+       mutex_lock(&vc4->bin_bo_lock);
+
+       if (!vc4->bin_bo)
+               goto complete;
+
+       bo = vc4->bin_bo;
+
        bin_bo_slot = vc4_v3d_get_bin_slot(vc4);
        if (bin_bo_slot < 0) {
                DRM_ERROR("Couldn't allocate binner overflow mem\n");
-               return;
+               goto complete;
        }
 
        spin_lock_irqsave(&vc4->job_lock, irqflags);
@@ -98,6 +105,9 @@ vc4_overflow_mem_work(struct work_struct *work)
        V3D_WRITE(V3D_INTCTL, V3D_INT_OUTOMEM);
        V3D_WRITE(V3D_INTENA, V3D_INT_OUTOMEM);
        spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+
+complete:
+       mutex_unlock(&vc4->bin_bo_lock);
 }
 
 static void
@@ -249,8 +259,10 @@ vc4_irq_postinstall(struct drm_device *dev)
        if (!vc4->v3d)
                return 0;
 
-       /* Enable both the render done and out of memory interrupts. */
-       V3D_WRITE(V3D_INTENA, V3D_DRIVER_IRQS);
+       /* Enable the render done interrupts. The out-of-memory interrupt is
+        * enabled as soon as we have a binner BO allocated.
+        */
+       V3D_WRITE(V3D_INTENA, V3D_INT_FLDONE | V3D_INT_FRDONE);
 
        return 0;
 }
index 4d918d3..be22749 100644 (file)
@@ -310,10 +310,10 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
        struct drm_framebuffer *fb = state->fb;
        struct drm_gem_cma_object *bo = drm_fb_cma_get_gem_obj(fb, 0);
        u32 subpixel_src_mask = (1 << 16) - 1;
-       u32 format = fb->format->format;
        int num_planes = fb->format->num_planes;
        struct drm_crtc_state *crtc_state;
-       u32 h_subsample, v_subsample;
+       u32 h_subsample = fb->format->hsub;
+       u32 v_subsample = fb->format->vsub;
        int i, ret;
 
        crtc_state = drm_atomic_get_existing_crtc_state(state->state,
@@ -328,9 +328,6 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
        if (ret)
                return ret;
 
-       h_subsample = drm_format_horz_chroma_subsampling(format);
-       v_subsample = drm_format_vert_chroma_subsampling(format);
-
        for (i = 0; i < num_planes; i++)
                vc4_state->offsets[i] = bo->paddr + fb->offsets[i];
 
@@ -592,8 +589,9 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
        u32 ctl0_offset = vc4_state->dlist_count;
        const struct hvs_format *format = vc4_get_hvs_format(fb->format->format);
        u64 base_format_mod = fourcc_mod_broadcom_mod(fb->modifier);
-       int num_planes = drm_format_num_planes(format->drm);
-       u32 h_subsample, v_subsample;
+       int num_planes = fb->format->num_planes;
+       u32 h_subsample = fb->format->hsub;
+       u32 v_subsample = fb->format->vsub;
        bool mix_plane_alpha;
        bool covers_screen;
        u32 scl0, scl1, pitch0;
@@ -623,9 +621,6 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
                scl1 = vc4_get_scl_field(state, 0);
        }
 
-       h_subsample = drm_format_horz_chroma_subsampling(format->drm);
-       v_subsample = drm_format_vert_chroma_subsampling(format->drm);
-
        rotation = drm_rotation_simplify(state->rotation,
                                         DRM_MODE_ROTATE_0 |
                                         DRM_MODE_REFLECT_X |
index a4b6859..0533646 100644 (file)
@@ -213,7 +213,7 @@ try_again:
 }
 
 /**
- * vc4_allocate_bin_bo() - allocates the memory that will be used for
+ * bin_bo_alloc() - allocates the memory that will be used for
  * tile binning.
  *
  * The binner has a limitation that the addresses in the tile state
@@ -234,14 +234,16 @@ try_again:
  * overall CMA pool before they make scenes complicated enough to run
  * out of bin space.
  */
-static int vc4_allocate_bin_bo(struct drm_device *drm)
+static int bin_bo_alloc(struct vc4_dev *vc4)
 {
-       struct vc4_dev *vc4 = to_vc4_dev(drm);
        struct vc4_v3d *v3d = vc4->v3d;
        uint32_t size = 16 * 1024 * 1024;
        int ret = 0;
        struct list_head list;
 
+       if (!v3d)
+               return -ENODEV;
+
        /* We may need to try allocating more than once to get a BO
         * that doesn't cross 256MB.  Track the ones we've allocated
         * that failed so far, so that we can free them when we've got
@@ -251,7 +253,7 @@ static int vc4_allocate_bin_bo(struct drm_device *drm)
        INIT_LIST_HEAD(&list);
 
        while (true) {
-               struct vc4_bo *bo = vc4_bo_create(drm, size, true,
+               struct vc4_bo *bo = vc4_bo_create(vc4->dev, size, true,
                                                  VC4_BO_TYPE_BIN);
 
                if (IS_ERR(bo)) {
@@ -292,6 +294,14 @@ static int vc4_allocate_bin_bo(struct drm_device *drm)
                        WARN_ON_ONCE(sizeof(vc4->bin_alloc_used) * 8 !=
                                     bo->base.base.size / vc4->bin_alloc_size);
 
+                       kref_init(&vc4->bin_bo_kref);
+
+                       /* Enable the out-of-memory interrupt to set our
+                        * newly-allocated binner BO, potentially from an
+                        * already-pending-but-masked interrupt.
+                        */
+                       V3D_WRITE(V3D_INTENA, V3D_INT_OUTOMEM);
+
                        break;
                }
 
@@ -311,6 +321,47 @@ static int vc4_allocate_bin_bo(struct drm_device *drm)
        return ret;
 }
 
+int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used)
+{
+       int ret = 0;
+
+       mutex_lock(&vc4->bin_bo_lock);
+
+       if (used && *used)
+               goto complete;
+
+       if (vc4->bin_bo)
+               kref_get(&vc4->bin_bo_kref);
+       else
+               ret = bin_bo_alloc(vc4);
+
+       if (ret == 0 && used)
+               *used = true;
+
+complete:
+       mutex_unlock(&vc4->bin_bo_lock);
+
+       return ret;
+}
+
+static void bin_bo_release(struct kref *ref)
+{
+       struct vc4_dev *vc4 = container_of(ref, struct vc4_dev, bin_bo_kref);
+
+       if (WARN_ON_ONCE(!vc4->bin_bo))
+               return;
+
+       drm_gem_object_put_unlocked(&vc4->bin_bo->base.base);
+       vc4->bin_bo = NULL;
+}
+
+void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
+{
+       mutex_lock(&vc4->bin_bo_lock);
+       kref_put(&vc4->bin_bo_kref, bin_bo_release);
+       mutex_unlock(&vc4->bin_bo_lock);
+}
+
 #ifdef CONFIG_PM
 static int vc4_v3d_runtime_suspend(struct device *dev)
 {
@@ -319,9 +370,6 @@ static int vc4_v3d_runtime_suspend(struct device *dev)
 
        vc4_irq_uninstall(vc4->dev);
 
-       drm_gem_object_put_unlocked(&vc4->bin_bo->base.base);
-       vc4->bin_bo = NULL;
-
        clk_disable_unprepare(v3d->clk);
 
        return 0;
@@ -333,10 +381,6 @@ static int vc4_v3d_runtime_resume(struct device *dev)
        struct vc4_dev *vc4 = v3d->vc4;
        int ret;
 
-       ret = vc4_allocate_bin_bo(vc4->dev);
-       if (ret)
-               return ret;
-
        ret = clk_prepare_enable(v3d->clk);
        if (ret != 0)
                return ret;
@@ -403,12 +447,6 @@ static int vc4_v3d_bind(struct device *dev, struct device *master, void *data)
        if (ret != 0)
                return ret;
 
-       ret = vc4_allocate_bin_bo(drm);
-       if (ret) {
-               clk_disable_unprepare(v3d->clk);
-               return ret;
-       }
-
        /* Reset the binner overflow address/size at setup, to be sure
         * we don't reuse an old one.
         */
index 4e90cc8..42949a1 100644 (file)
@@ -6,6 +6,6 @@
 virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o \
        virtgpu_fb.o virtgpu_display.o virtgpu_vq.o virtgpu_ttm.o \
        virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
-       virtgpu_ioctl.o virtgpu_prime.o
+       virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o
 
 obj-$(CONFIG_DRM_VIRTIO_GPU) += virtio-gpu.o
index b69ae10..5faccf9 100644 (file)
@@ -102,7 +102,6 @@ struct virtio_gpu_fence {
        struct dma_fence f;
        struct virtio_gpu_fence_driver *drv;
        struct list_head node;
-       uint64_t seq;
 };
 #define to_virtio_fence(x) \
        container_of(x, struct virtio_gpu_fence, f)
@@ -356,7 +355,7 @@ int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma);
 bool virtio_fence_signaled(struct dma_fence *f);
 struct virtio_gpu_fence *virtio_gpu_fence_alloc(
        struct virtio_gpu_device *vgdev);
-int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
+void virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
                          struct virtio_gpu_ctrl_hdr *cmd_hdr,
                          struct virtio_gpu_fence *fence);
 void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
index 87d1966..70d6c43 100644 (file)
@@ -24,6 +24,7 @@
  */
 
 #include <drm/drmP.h>
+#include <trace/events/dma_fence.h>
 #include "virtgpu_drv.h"
 
 static const char *virtio_get_driver_name(struct dma_fence *f)
@@ -40,16 +41,14 @@ bool virtio_fence_signaled(struct dma_fence *f)
 {
        struct virtio_gpu_fence *fence = to_virtio_fence(f);
 
-       if (atomic64_read(&fence->drv->last_seq) >= fence->seq)
+       if (atomic64_read(&fence->drv->last_seq) >= fence->f.seqno)
                return true;
        return false;
 }
 
 static void virtio_fence_value_str(struct dma_fence *f, char *str, int size)
 {
-       struct virtio_gpu_fence *fence = to_virtio_fence(f);
-
-       snprintf(str, size, "%llu", fence->seq);
+       snprintf(str, size, "%llu", f->seqno);
 }
 
 static void virtio_timeline_value_str(struct dma_fence *f, char *str, int size)
@@ -71,17 +70,22 @@ struct virtio_gpu_fence *virtio_gpu_fence_alloc(struct virtio_gpu_device *vgdev)
 {
        struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
        struct virtio_gpu_fence *fence = kzalloc(sizeof(struct virtio_gpu_fence),
-                                                       GFP_ATOMIC);
+                                                       GFP_KERNEL);
        if (!fence)
                return fence;
 
        fence->drv = drv;
+
+       /* This only partially initializes the fence because the seqno is
+        * not yet known.  The fence must not be used outside of the driver
+        * until virtio_gpu_fence_emit is called.
+        */
        dma_fence_init(&fence->f, &virtio_fence_ops, &drv->lock, drv->context, 0);
 
        return fence;
 }
 
-int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
+void virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
                          struct virtio_gpu_ctrl_hdr *cmd_hdr,
                          struct virtio_gpu_fence *fence)
 {
@@ -89,14 +93,15 @@ int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
        unsigned long irq_flags;
 
        spin_lock_irqsave(&drv->lock, irq_flags);
-       fence->seq = ++drv->sync_seq;
+       fence->f.seqno = ++drv->sync_seq;
        dma_fence_get(&fence->f);
        list_add_tail(&fence->node, &drv->fences);
        spin_unlock_irqrestore(&drv->lock, irq_flags);
 
+       trace_dma_fence_emit(&fence->f);
+
        cmd_hdr->flags |= cpu_to_le32(VIRTIO_GPU_FLAG_FENCE);
-       cmd_hdr->fence_id = cpu_to_le64(fence->seq);
-       return 0;
+       cmd_hdr->fence_id = cpu_to_le64(fence->f.seqno);
 }
 
 void virtio_gpu_fence_event_process(struct virtio_gpu_device *vgdev,
@@ -109,7 +114,7 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vgdev,
        spin_lock_irqsave(&drv->lock, irq_flags);
        atomic64_set(&vgdev->fence_drv.last_seq, last_seq);
        list_for_each_entry_safe(fence, tmp, &drv->fences, node) {
-               if (last_seq < fence->seq)
+               if (last_seq < fence->f.seqno)
                        continue;
                dma_fence_signal_locked(&fence->f);
                list_del(&fence->node);
index 949a264..b7f9dfe 100644 (file)
@@ -553,34 +553,34 @@ copy_exit:
 
 struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
        DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
 
        DRM_IOCTL_DEF_DRV(VIRTGPU_EXECBUFFER, virtio_gpu_execbuffer_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
 
        DRM_IOCTL_DEF_DRV(VIRTGPU_GETPARAM, virtio_gpu_getparam_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
 
        DRM_IOCTL_DEF_DRV(VIRTGPU_RESOURCE_CREATE,
                          virtio_gpu_resource_create_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
 
        DRM_IOCTL_DEF_DRV(VIRTGPU_RESOURCE_INFO, virtio_gpu_resource_info_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
 
        /* make transfer async to the main ring? - not sure, can we
         * thread these in the underlying GL
         */
        DRM_IOCTL_DEF_DRV(VIRTGPU_TRANSFER_FROM_HOST,
                          virtio_gpu_transfer_from_host_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
        DRM_IOCTL_DEF_DRV(VIRTGPU_TRANSFER_TO_HOST,
                          virtio_gpu_transfer_to_host_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
 
        DRM_IOCTL_DEF_DRV(VIRTGPU_WAIT, virtio_gpu_wait_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
 
        DRM_IOCTL_DEF_DRV(VIRTGPU_GET_CAPS, virtio_gpu_get_caps_ioctl,
-                         DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+                         DRM_AUTH | DRM_RENDER_ALLOW),
 };
diff --git a/drivers/gpu/drm/virtio/virtgpu_trace.h b/drivers/gpu/drm/virtio/virtgpu_trace.h
new file mode 100644 (file)
index 0000000..711ecc2
--- /dev/null
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#if !defined(_VIRTGPU_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _VIRTGPU_TRACE_H_
+
+#include <linux/tracepoint.h>
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM virtio_gpu
+#define TRACE_INCLUDE_FILE virtgpu_trace
+
+DECLARE_EVENT_CLASS(virtio_gpu_cmd,
+       TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
+       TP_ARGS(vq, hdr),
+       TP_STRUCT__entry(
+                        __field(int, dev)
+                        __field(unsigned int, vq)
+                        __field(const char *, name)
+                        __field(u32, type)
+                        __field(u32, flags)
+                        __field(u64, fence_id)
+                        __field(u32, ctx_id)
+                        ),
+       TP_fast_assign(
+                      __entry->dev = vq->vdev->index;
+                      __entry->vq = vq->index;
+                      __entry->name = vq->name;
+                      __entry->type = le32_to_cpu(hdr->type);
+                      __entry->flags = le32_to_cpu(hdr->flags);
+                      __entry->fence_id = le64_to_cpu(hdr->fence_id);
+                      __entry->ctx_id = le32_to_cpu(hdr->ctx_id);
+                      ),
+       TP_printk("vdev=%d vq=%u name=%s type=0x%x flags=0x%x fence_id=%llu ctx_id=%u",
+                 __entry->dev, __entry->vq, __entry->name,
+                 __entry->type, __entry->flags, __entry->fence_id,
+                 __entry->ctx_id)
+);
+
+DEFINE_EVENT(virtio_gpu_cmd, virtio_gpu_cmd_queue,
+       TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
+       TP_ARGS(vq, hdr)
+);
+
+DEFINE_EVENT(virtio_gpu_cmd, virtio_gpu_cmd_response,
+       TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
+       TP_ARGS(vq, hdr)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ../../drivers/gpu/drm/virtio
+#include <trace/define_trace.h>
diff --git a/drivers/gpu/drm/virtio/virtgpu_trace_points.c b/drivers/gpu/drm/virtio/virtgpu_trace_points.c
new file mode 100644 (file)
index 0000000..1970cb6
--- /dev/null
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "virtgpu_drv.h"
+
+#define CREATE_TRACE_POINTS
+#include "virtgpu_trace.h"
index e62fe24..2c5eecc 100644 (file)
@@ -28,6 +28,7 @@
 
 #include <drm/drmP.h>
 #include "virtgpu_drv.h"
+#include "virtgpu_trace.h"
 #include <linux/virtio.h>
 #include <linux/virtio_config.h>
 #include <linux/virtio_ring.h>
@@ -192,6 +193,9 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
 
        list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
                resp = (struct virtio_gpu_ctrl_hdr *)entry->resp_buf;
+
+               trace_virtio_gpu_cmd_response(vgdev->ctrlq.vq, resp);
+
                if (resp->type != cpu_to_le32(VIRTIO_GPU_RESP_OK_NODATA)) {
                        if (resp->type >= cpu_to_le32(VIRTIO_GPU_RESP_ERR_UNSPEC)) {
                                struct virtio_gpu_ctrl_hdr *cmd;
@@ -284,6 +288,9 @@ retry:
                spin_lock(&vgdev->ctrlq.qlock);
                goto retry;
        } else {
+               trace_virtio_gpu_cmd_queue(vq,
+                       (struct virtio_gpu_ctrl_hdr *)vbuf->buf);
+
                virtqueue_kick(vq);
        }
 
@@ -359,6 +366,9 @@ retry:
                spin_lock(&vgdev->cursorq.qlock);
                goto retry;
        } else {
+               trace_virtio_gpu_cmd_queue(vq,
+                       (struct virtio_gpu_ctrl_hdr *)vbuf->buf);
+
                virtqueue_kick(vq);
        }
 
index bb66dbc..7508815 100644 (file)
@@ -83,26 +83,6 @@ bool vkms_get_vblank_timestamp(struct drm_device *dev, unsigned int pipe,
        return true;
 }
 
-static void vkms_atomic_crtc_reset(struct drm_crtc *crtc)
-{
-       struct vkms_crtc_state *vkms_state = NULL;
-
-       if (crtc->state) {
-               vkms_state = to_vkms_crtc_state(crtc->state);
-               __drm_atomic_helper_crtc_destroy_state(crtc->state);
-               kfree(vkms_state);
-               crtc->state = NULL;
-       }
-
-       vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL);
-       if (!vkms_state)
-               return;
-       INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);
-
-       crtc->state = &vkms_state->base;
-       crtc->state->crtc = crtc;
-}
-
 static struct drm_crtc_state *
 vkms_atomic_crtc_duplicate_state(struct drm_crtc *crtc)
 {
@@ -135,6 +115,19 @@ static void vkms_atomic_crtc_destroy_state(struct drm_crtc *crtc,
        }
 }
 
+static void vkms_atomic_crtc_reset(struct drm_crtc *crtc)
+{
+       struct vkms_crtc_state *vkms_state =
+               kzalloc(sizeof(*vkms_state), GFP_KERNEL);
+
+       if (crtc->state)
+               vkms_atomic_crtc_destroy_state(crtc, crtc->state);
+
+       __drm_atomic_helper_crtc_reset(crtc, &vkms_state->base);
+       if (vkms_state)
+               INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);
+}
+
 static const struct drm_crtc_funcs vkms_crtc_funcs = {
        .set_config             = drm_atomic_helper_set_config,
        .destroy                = drm_crtc_cleanup,
index 83d236f..706452f 100644 (file)
@@ -199,7 +199,6 @@ static void zx_vl_plane_atomic_update(struct drm_plane *plane,
        u32 dst_x, dst_y, dst_w, dst_h;
        uint32_t format;
        int fmt;
-       int num_planes;
        int i;
 
        if (!fb)
@@ -218,13 +217,12 @@ static void zx_vl_plane_atomic_update(struct drm_plane *plane,
        dst_h = drm_rect_height(dst);
 
        /* Set up data address registers for Y, Cb and Cr planes */
-       num_planes = drm_format_num_planes(format);
        paddr_reg = layer + VL_Y;
-       for (i = 0; i < num_planes; i++) {
+       for (i = 0; i < fb->format->num_planes; i++) {
                cma_obj = drm_fb_cma_get_gem_obj(fb, i);
                paddr = cma_obj->paddr + fb->offsets[i];
                paddr += src_y * fb->pitches[i];
-               paddr += src_x * drm_format_plane_cpp(format, i);
+               paddr += src_x * fb->format->cpp[i];
                zx_writel(paddr_reg, paddr);
                paddr_reg += 4;
        }
index 799ae49..b99ba01 100644 (file)
@@ -650,6 +650,150 @@ hdmi_vendor_any_infoframe_check_only(const union hdmi_vendor_any_infoframe *fram
        return 0;
 }
 
+/**
+ * hdmi_drm_infoframe_init() - initialize an HDMI Dynamic Range and
+ * Mastering infoframe
+ * @frame: HDMI DRM infoframe
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+int hdmi_drm_infoframe_init(struct hdmi_drm_infoframe *frame)
+{
+       memset(frame, 0, sizeof(*frame));
+
+       frame->type = HDMI_INFOFRAME_TYPE_DRM;
+       frame->version = 1;
+       frame->length = HDMI_DRM_INFOFRAME_SIZE;
+
+       return 0;
+}
+EXPORT_SYMBOL(hdmi_drm_infoframe_init);
+
+static int hdmi_drm_infoframe_check_only(const struct hdmi_drm_infoframe *frame)
+{
+       if (frame->type != HDMI_INFOFRAME_TYPE_DRM ||
+           frame->version != 1)
+               return -EINVAL;
+
+       if (frame->length != HDMI_DRM_INFOFRAME_SIZE)
+               return -EINVAL;
+
+       return 0;
+}
+
+/**
+ * hdmi_drm_infoframe_check() - check an HDMI DRM infoframe
+ * @frame: HDMI DRM infoframe
+ *
+ * Validates that the infoframe is consistent.
+ * Returns 0 on success or a negative error code on failure.
+ */
+int hdmi_drm_infoframe_check(struct hdmi_drm_infoframe *frame)
+{
+       return hdmi_drm_infoframe_check_only(frame);
+}
+EXPORT_SYMBOL(hdmi_drm_infoframe_check);
+
+/**
+ * hdmi_drm_infoframe_pack_only() - write HDMI DRM infoframe to binary buffer
+ * @frame: HDMI DRM infoframe
+ * @buffer: destination buffer
+ * @size: size of buffer
+ *
+ * Packs the information contained in the @frame structure into a binary
+ * representation that can be written into the corresponding controller
+ * registers. Also computes the checksum as required by section 5.3.5 of
+ * the HDMI 1.4 specification.
+ *
+ * Returns the number of bytes packed into the binary buffer or a negative
+ * error code on failure.
+ */
+ssize_t hdmi_drm_infoframe_pack_only(const struct hdmi_drm_infoframe *frame,
+                                    void *buffer, size_t size)
+{
+       u8 *ptr = buffer;
+       size_t length;
+       int i;
+
+       length = HDMI_INFOFRAME_HEADER_SIZE + frame->length;
+
+       if (size < length)
+               return -ENOSPC;
+
+       memset(buffer, 0, size);
+
+       ptr[0] = frame->type;
+       ptr[1] = frame->version;
+       ptr[2] = frame->length;
+       ptr[3] = 0; /* checksum */
+
+       /* start infoframe payload */
+       ptr += HDMI_INFOFRAME_HEADER_SIZE;
+
+       *ptr++ = frame->eotf;
+       *ptr++ = frame->metadata_type;
+
+       for (i = 0; i < 3; i++) {
+               *ptr++ = frame->display_primaries[i].x;
+               *ptr++ = frame->display_primaries[i].x >> 8;
+               *ptr++ = frame->display_primaries[i].y;
+               *ptr++ = frame->display_primaries[i].y >> 8;
+       }
+
+       *ptr++ = frame->white_point.x;
+       *ptr++ = frame->white_point.x >> 8;
+
+       *ptr++ = frame->white_point.y;
+       *ptr++ = frame->white_point.y >> 8;
+
+       *ptr++ = frame->max_display_mastering_luminance;
+       *ptr++ = frame->max_display_mastering_luminance >> 8;
+
+       *ptr++ = frame->min_display_mastering_luminance;
+       *ptr++ = frame->min_display_mastering_luminance >> 8;
+
+       *ptr++ = frame->max_cll;
+       *ptr++ = frame->max_cll >> 8;
+
+       *ptr++ = frame->max_fall;
+       *ptr++ = frame->max_fall >> 8;
+
+       hdmi_infoframe_set_checksum(buffer, length);
+
+       return length;
+}
+EXPORT_SYMBOL(hdmi_drm_infoframe_pack_only);
+
+/**
+ * hdmi_drm_infoframe_pack() - check an HDMI DRM infoframe,
+ *                             and write it to binary buffer
+ * @frame: HDMI DRM infoframe
+ * @buffer: destination buffer
+ * @size: size of buffer
+ *
+ * Validates that the infoframe is consistent and updates derived fields
+ * (e.g. length) based on other fields, after which it packs the information
+ * contained in the @frame structure into a binary representation that
+ * can be written into the corresponding controller registers. This function
+ * also computes the checksum as required by section 5.3.5 of the HDMI 1.4
+ * specification.
+ *
+ * Returns the number of bytes packed into the binary buffer or a negative
+ * error code on failure.
+ */
+ssize_t hdmi_drm_infoframe_pack(struct hdmi_drm_infoframe *frame,
+                               void *buffer, size_t size)
+{
+       int ret;
+
+       ret = hdmi_drm_infoframe_check(frame);
+       if (ret)
+               return ret;
+
+       return hdmi_drm_infoframe_pack_only(frame, buffer, size);
+}
+EXPORT_SYMBOL(hdmi_drm_infoframe_pack);
+
 /*
  * hdmi_vendor_any_infoframe_check() - check a vendor infoframe
  */
@@ -758,6 +902,10 @@ hdmi_infoframe_pack_only(const union hdmi_infoframe *frame, void *buffer, size_t
                length = hdmi_avi_infoframe_pack_only(&frame->avi,
                                                      buffer, size);
                break;
+       case HDMI_INFOFRAME_TYPE_DRM:
+               length = hdmi_drm_infoframe_pack_only(&frame->drm,
+                                                     buffer, size);
+               break;
        case HDMI_INFOFRAME_TYPE_SPD:
                length = hdmi_spd_infoframe_pack_only(&frame->spd,
                                                      buffer, size);
@@ -806,6 +954,9 @@ hdmi_infoframe_pack(union hdmi_infoframe *frame,
        case HDMI_INFOFRAME_TYPE_AVI:
                length = hdmi_avi_infoframe_pack(&frame->avi, buffer, size);
                break;
+       case HDMI_INFOFRAME_TYPE_DRM:
+               length = hdmi_drm_infoframe_pack(&frame->drm, buffer, size);
+               break;
        case HDMI_INFOFRAME_TYPE_SPD:
                length = hdmi_spd_infoframe_pack(&frame->spd, buffer, size);
                break;
@@ -838,6 +989,8 @@ static const char *hdmi_infoframe_type_get_name(enum hdmi_infoframe_type type)
                return "Source Product Description (SPD)";
        case HDMI_INFOFRAME_TYPE_AUDIO:
                return "Audio";
+       case HDMI_INFOFRAME_TYPE_DRM:
+               return "Dynamic Range and Mastering";
        }
        return "Reserved";
 }
@@ -1284,6 +1437,40 @@ static void hdmi_audio_infoframe_log(const char *level,
                        frame->downmix_inhibit ? "Yes" : "No");
 }
 
+/**
+ * hdmi_drm_infoframe_log() - log info of HDMI DRM infoframe
+ * @level: logging level
+ * @dev: device
+ * @frame: HDMI DRM infoframe
+ */
+static void hdmi_drm_infoframe_log(const char *level,
+                                  struct device *dev,
+                                  const struct hdmi_drm_infoframe *frame)
+{
+       int i;
+
+       hdmi_infoframe_log_header(level, dev,
+                                 (struct hdmi_any_infoframe *)frame);
+       hdmi_log("length: %d\n", frame->length);
+       hdmi_log("metadata type: %d\n", frame->metadata_type);
+       hdmi_log("eotf: %d\n", frame->eotf);
+       for (i = 0; i < 3; i++) {
+               hdmi_log("x[%d]: %d\n", i, frame->display_primaries[i].x);
+               hdmi_log("y[%d]: %d\n", i, frame->display_primaries[i].y);
+       }
+
+       hdmi_log("white point x: %d\n", frame->white_point.x);
+       hdmi_log("white point y: %d\n", frame->white_point.y);
+
+       hdmi_log("max_display_mastering_luminance: %d\n",
+                frame->max_display_mastering_luminance);
+       hdmi_log("min_display_mastering_luminance: %d\n",
+                frame->min_display_mastering_luminance);
+
+       hdmi_log("max_cll: %d\n", frame->max_cll);
+       hdmi_log("max_fall: %d\n", frame->max_fall);
+}
+
 static const char *
 hdmi_3d_structure_get_name(enum hdmi_3d_structure s3d_struct)
 {
@@ -1372,6 +1559,9 @@ void hdmi_infoframe_log(const char *level,
        case HDMI_INFOFRAME_TYPE_VENDOR:
                hdmi_vendor_any_infoframe_log(level, dev, &frame->vendor);
                break;
+       case HDMI_INFOFRAME_TYPE_DRM:
+               hdmi_drm_infoframe_log(level, dev, &frame->drm);
+               break;
        }
 }
 EXPORT_SYMBOL(hdmi_infoframe_log);
@@ -1614,6 +1804,70 @@ hdmi_vendor_any_infoframe_unpack(union hdmi_vendor_any_infoframe *frame,
        return 0;
 }
 
+/**
+ * hdmi_drm_infoframe_unpack() - unpack binary buffer to an HDMI DRM infoframe
+ * @frame: HDMI DRM infoframe
+ * @buffer: source buffer
+ * @size: size of buffer
+ *
+ * Unpacks the information contained in binary @buffer into a structured
+ * @frame of the HDMI Dynamic Range and Mastering (DRM) information frame.
+ * Also verifies the checksum as required by section 5.3.5 of the HDMI 1.4
+ * specification.
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+static int hdmi_drm_infoframe_unpack(struct hdmi_drm_infoframe *frame,
+                                    const void *buffer, size_t size)
+{
+       const u8 *ptr = buffer;
+       const u8 *temp;
+       u8 x_lsb, x_msb;
+       u8 y_lsb, y_msb;
+       int ret;
+       int i;
+
+       if (size < HDMI_INFOFRAME_SIZE(DRM))
+               return -EINVAL;
+
+       if (ptr[0] != HDMI_INFOFRAME_TYPE_DRM ||
+           ptr[1] != 1 ||
+           ptr[2] != HDMI_DRM_INFOFRAME_SIZE)
+               return -EINVAL;
+
+       if (hdmi_infoframe_checksum(buffer, HDMI_INFOFRAME_SIZE(DRM)) != 0)
+               return -EINVAL;
+
+       ret = hdmi_drm_infoframe_init(frame);
+       if (ret)
+               return ret;
+
+       ptr += HDMI_INFOFRAME_HEADER_SIZE;
+
+       frame->eotf = ptr[0] & 0x7;
+       frame->metadata_type = ptr[1] & 0x7;
+
+       temp = ptr + 2;
+       for (i = 0; i < 3; i++) {
+               x_lsb = *temp++;
+               x_msb = *temp++;
+               frame->display_primaries[i].x = (x_msb << 8) | x_lsb;
+               y_lsb = *temp++;
+               y_msb = *temp++;
+               frame->display_primaries[i].y = (y_msb << 8) | y_lsb;
+       }
+
+       frame->white_point.x = (ptr[15] << 8) | ptr[14];
+       frame->white_point.y = (ptr[17] << 8) | ptr[16];
+
+       frame->max_display_mastering_luminance = (ptr[19] << 8) | ptr[18];
+       frame->min_display_mastering_luminance = (ptr[21] << 8) | ptr[20];
+       frame->max_cll = (ptr[23] << 8) | ptr[22];
+       frame->max_fall = (ptr[25] << 8) | ptr[24];
+
+       return 0;
+}
+
 /**
  * hdmi_infoframe_unpack() - unpack binary buffer to a HDMI infoframe
  * @frame: HDMI infoframe
@@ -1640,6 +1894,9 @@ int hdmi_infoframe_unpack(union hdmi_infoframe *frame,
        case HDMI_INFOFRAME_TYPE_AVI:
                ret = hdmi_avi_infoframe_unpack(&frame->avi, buffer, size);
                break;
+       case HDMI_INFOFRAME_TYPE_DRM:
+               ret = hdmi_drm_infoframe_unpack(&frame->drm, buffer, size);
+               break;
        case HDMI_INFOFRAME_TYPE_SPD:
                ret = hdmi_spd_infoframe_unpack(&frame->spd, buffer, size);
                break;
index 66c92cb..4e6d2e7 100644 (file)
@@ -37,6 +37,8 @@ struct drm_private_state;
 struct drm_modeset_acquire_ctx;
 struct drm_device;
 
+void __drm_atomic_helper_crtc_reset(struct drm_crtc *crtc,
+                                   struct drm_crtc_state *state);
 void drm_atomic_helper_crtc_reset(struct drm_crtc *crtc);
 void __drm_atomic_helper_crtc_duplicate_state(struct drm_crtc *crtc,
                                              struct drm_crtc_state *state);
index 02a1312..f0e987d 100644 (file)
@@ -517,6 +517,10 @@ struct drm_connector_state {
         * Used by the atomic helpers to select the encoder, through the
         * &drm_connector_helper_funcs.atomic_best_encoder or
         * &drm_connector_helper_funcs.best_encoder callbacks.
+        *
+        * NOTE: Atomic drivers must fill this out (either themselves or through
+        * helpers), since otherwise the GETCONNECTOR and GETENCODER IOCTLs will
+        * not return correct data to userspace.
         */
        struct drm_encoder *best_encoder;
 
@@ -599,6 +603,12 @@ struct drm_connector_state {
         * and the connector bpc limitations obtained from edid.
         */
        u8 max_bpc;
+
+       /**
+        * @hdr_output_metadata:
+        * DRM blob property for HDR output metadata
+        */
+       struct drm_property_blob *hdr_output_metadata;
 };
 
 /**
@@ -1239,6 +1249,10 @@ struct drm_connector {
         * &drm_mode_config.connector_free_work.
         */
        struct llist_node free_node;
+
+       /* HDR metadata */
+       struct hdr_output_metadata hdr_output_metadata;
+       struct hdr_sink_metadata hdr_sink_metadata;
 };
 
 #define obj_to_connector(x) container_of(x, struct drm_connector, base)
index 7f9ef70..1acfc3b 100644 (file)
@@ -17,6 +17,7 @@ struct drm_vblank_crtc;
 struct drm_sg_mem;
 struct drm_local_map;
 struct drm_vma_offset_manager;
+struct drm_vram_mm;
 struct drm_fb_helper;
 
 struct inode;
@@ -286,6 +287,9 @@ struct drm_device {
        /** @vma_offset_manager: GEM information */
        struct drm_vma_offset_manager *vma_offset_manager;
 
+       /** @vram_mm: VRAM MM memory manager */
+       struct drm_vram_mm *vram_mm;
+
        /**
         * @switch_power_state:
         *
index 9d3b5b9..0e21e91 100644 (file)
@@ -25,6 +25,7 @@
 
 #include <linux/types.h>
 #include <linux/hdmi.h>
+#include <drm/drm_mode.h>
 
 struct drm_device;
 struct i2c_adapter;
@@ -370,6 +371,10 @@ drm_hdmi_avi_infoframe_quant_range(struct hdmi_avi_infoframe *frame,
                                   const struct drm_display_mode *mode,
                                   enum hdmi_quantization_range rgb_quant_range);
 
+int
+drm_hdmi_infoframe_set_hdr_metadata(struct hdmi_drm_infoframe *frame,
+                                   const struct drm_connector_state *conn_state);
+
 /**
  * drm_eld_mnl - Get ELD monitor name length in bytes.
  * @eld: pointer to an eld memory structure with mnl set
index 40af286..2af1c6d 100644 (file)
@@ -49,9 +49,6 @@ struct drm_fb_offset {
 
 struct drm_fb_helper_crtc {
        struct drm_mode_set mode_set;
-       struct drm_display_mode *desired_mode;
-       int x, y;
-       int rotation;
 };
 
 /**
@@ -151,13 +148,6 @@ struct drm_fb_helper {
        struct drm_fb_helper_crtc *crtc_info;
        int connector_count;
        int connector_info_alloc_count;
-       /**
-        * @sw_rotations:
-        * Bitmask of all rotations requested for panel-orientation which
-        * could not be handled in hardware. If only one bit is set
-        * fbdev->fbcon_rotate_hint gets set to the requested rotation.
-        */
-       int sw_rotations;
        /**
         * @connector_info:
         *
index b3d9d88..306d1ef 100644 (file)
@@ -260,6 +260,50 @@ drm_format_info_is_yuv_sampling_444(const struct drm_format_info *info)
        return info->is_yuv && info->hsub == 1 && info->vsub == 1;
 }
 
+/**
+ * drm_format_info_plane_width - width of the plane given the first plane
+ * @info: pixel format info
+ * @width: width of the first plane
+ * @plane: plane index
+ *
+ * Returns:
+ * The width of @plane, given that the width of the first plane is @width.
+ */
+static inline
+int drm_format_info_plane_width(const struct drm_format_info *info, int width,
+                               int plane)
+{
+       if (!info || plane >= info->num_planes)
+               return 0;
+
+       if (plane == 0)
+               return width;
+
+       return width / info->hsub;
+}
+
+/**
+ * drm_format_info_plane_height - height of the plane given the first plane
+ * @info: pixel format info
+ * @height: height of the first plane
+ * @plane: plane index
+ *
+ * Returns:
+ * The height of @plane, given that the height of the first plane is @height.
+ */
+static inline
+int drm_format_info_plane_height(const struct drm_format_info *info, int height,
+                                int plane)
+{
+       if (!info || plane >= info->num_planes)
+               return 0;
+
+       if (plane == 0)
+               return height;
+
+       return height / info->vsub;
+}
+
 const struct drm_format_info *__drm_format_info(u32 format);
 const struct drm_format_info *drm_format_info(u32 format);
 const struct drm_format_info *
@@ -268,12 +312,6 @@ drm_get_format_info(struct drm_device *dev,
 uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth);
 uint32_t drm_driver_legacy_fb_format(struct drm_device *dev,
                                     uint32_t bpp, uint32_t depth);
-int drm_format_num_planes(uint32_t format);
-int drm_format_plane_cpp(uint32_t format, int plane);
-int drm_format_horz_chroma_subsampling(uint32_t format);
-int drm_format_vert_chroma_subsampling(uint32_t format);
-int drm_format_plane_width(int width, uint32_t format, int plane);
-int drm_format_plane_height(int height, uint32_t format, int plane);
 unsigned int drm_format_info_block_width(const struct drm_format_info *info,
                                         int plane);
 unsigned int drm_format_info_block_height(const struct drm_format_info *info,
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
new file mode 100644 (file)
index 0000000..4d1d2c1
--- /dev/null
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef DRM_GEM_VRAM_HELPER_H
+#define DRM_GEM_VRAM_HELPER_H
+
+#include <drm/drm_gem.h>
+#include <drm/ttm/ttm_bo_api.h>
+#include <drm/ttm/ttm_placement.h>
+#include <linux/kernel.h> /* for container_of() */
+
+struct drm_mode_create_dumb;
+struct drm_vram_mm_funcs;
+struct filp;
+struct vm_area_struct;
+
+#define DRM_GEM_VRAM_PL_FLAG_VRAM      TTM_PL_FLAG_VRAM
+#define DRM_GEM_VRAM_PL_FLAG_SYSTEM    TTM_PL_FLAG_SYSTEM
+
+/*
+ * Buffer-object helpers
+ */
+
+/**
+ * struct drm_gem_vram_object - GEM object backed by VRAM
+ * @gem:       GEM object
+ * @bo:                TTM buffer object
+ * @kmap:      Mapping information for @bo
+ * @placement: TTM placement information. Supported placements are \
+       %TTM_PL_VRAM and %TTM_PL_SYSTEM
+ * @placements:        TTM placement information.
+ * @pin_count: Pin counter
+ *
+ * The type struct drm_gem_vram_object represents a GEM object that is
+ * backed by VRAM. It can be used for simple framebuffer devices with
+ * dedicated memory. The buffer object can be evicted to system memory if
+ * video memory becomes scarce.
+ */
+struct drm_gem_vram_object {
+       struct drm_gem_object gem;
+       struct ttm_buffer_object bo;
+       struct ttm_bo_kmap_obj kmap;
+
+       /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
+       struct ttm_placement placement;
+       struct ttm_place placements[2];
+
+       int pin_count;
+};
+
+/**
+ * drm_gem_vram_of_bo() - \
+	Returns the container of type &struct drm_gem_vram_object for field bo.
+ * @bo:                the VRAM buffer object
+ * Returns:    The containing GEM VRAM object
+ */
+static inline struct drm_gem_vram_object *drm_gem_vram_of_bo(
+       struct ttm_buffer_object *bo)
+{
+       return container_of(bo, struct drm_gem_vram_object, bo);
+}
+
+/**
+ * drm_gem_vram_of_gem() - \
+	Returns the container of type &struct drm_gem_vram_object for field gem.
+ * @gem:       the GEM object
+ * Returns:    The containing GEM VRAM object
+ */
+static inline struct drm_gem_vram_object *drm_gem_vram_of_gem(
+       struct drm_gem_object *gem)
+{
+       return container_of(gem, struct drm_gem_vram_object, gem);
+}
+
+struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
+                                               struct ttm_bo_device *bdev,
+                                               size_t size,
+                                               unsigned long pg_align,
+                                               bool interruptible);
+void drm_gem_vram_put(struct drm_gem_vram_object *gbo);
+int drm_gem_vram_lock(struct drm_gem_vram_object *gbo, bool no_wait);
+void drm_gem_vram_unlock(struct drm_gem_vram_object *gbo);
+u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
+s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
+int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
+int drm_gem_vram_pin_locked(struct drm_gem_vram_object *gbo,
+                             unsigned long pl_flag);
+int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
+int drm_gem_vram_unpin_locked(struct drm_gem_vram_object *gbo);
+void *drm_gem_vram_kmap_at(struct drm_gem_vram_object *gbo, bool map,
+                          bool *is_iomem, struct ttm_bo_kmap_obj *kmap);
+void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map,
+                       bool *is_iomem);
+void drm_gem_vram_kunmap_at(struct drm_gem_vram_object *gbo,
+                           struct ttm_bo_kmap_obj *kmap);
+void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo);
+
+int drm_gem_vram_fill_create_dumb(struct drm_file *file,
+                                 struct drm_device *dev,
+                                 struct ttm_bo_device *bdev,
+                                 unsigned long pg_align,
+                                 bool interruptible,
+                                 struct drm_mode_create_dumb *args);
+
+/*
+ * Helpers for struct ttm_bo_driver
+ */
+
+void drm_gem_vram_bo_driver_evict_flags(struct ttm_buffer_object *bo,
+                                       struct ttm_placement *pl);
+
+int drm_gem_vram_bo_driver_verify_access(struct ttm_buffer_object *bo,
+                                        struct file *filp);
+
+extern const struct drm_vram_mm_funcs drm_gem_vram_mm_funcs;
+
+/*
+ * Helpers for struct drm_driver
+ */
+
+void drm_gem_vram_driver_gem_free_object_unlocked(struct drm_gem_object *gem);
+int drm_gem_vram_driver_dumb_create(struct drm_file *file,
+                                   struct drm_device *dev,
+                                   struct drm_mode_create_dumb *args);
+int drm_gem_vram_driver_dumb_mmap_offset(struct drm_file *file,
+                                        struct drm_device *dev,
+                                        uint32_t handle, uint64_t *offset);
+
+/**
+ * define DRM_GEM_VRAM_DRIVER - default callback functions for \
+       &struct drm_driver
+ *
+ * Drivers that use VRAM MM and GEM VRAM can use this macro to initialize
+ * &struct drm_driver with default functions.
+ */
+#define DRM_GEM_VRAM_DRIVER \
+       .gem_free_object_unlocked = \
+               drm_gem_vram_driver_gem_free_object_unlocked, \
+       .dumb_create              = drm_gem_vram_driver_dumb_create, \
+       .dumb_map_offset          = drm_gem_vram_driver_dumb_mmap_offset
+
+/*
+ * PRIME helpers for struct drm_driver
+ */
+
+int drm_gem_vram_driver_gem_prime_pin(struct drm_gem_object *obj);
+void drm_gem_vram_driver_gem_prime_unpin(struct drm_gem_object *obj);
+void *drm_gem_vram_driver_gem_prime_vmap(struct drm_gem_object *obj);
+void drm_gem_vram_driver_gem_prime_vunmap(struct drm_gem_object *obj,
+                                         void *vaddr);
+int drm_gem_vram_driver_gem_prime_mmap(struct drm_gem_object *obj,
+                                      struct vm_area_struct *vma);
+
+#define DRM_GEM_VRAM_DRIVER_PRIME \
+       .gem_prime_export = drm_gem_prime_export, \
+       .gem_prime_import = drm_gem_prime_import, \
+       .gem_prime_pin    = drm_gem_vram_driver_gem_prime_pin, \
+       .gem_prime_unpin  = drm_gem_vram_driver_gem_prime_unpin, \
+       .gem_prime_vmap   = drm_gem_vram_driver_gem_prime_vmap, \
+       .gem_prime_vunmap = drm_gem_vram_driver_gem_prime_vunmap, \
+       .gem_prime_mmap   = drm_gem_vram_driver_gem_prime_mmap
+
+#endif
index 7f60e8e..c031b5a 100644 (file)
@@ -836,6 +836,13 @@ struct drm_mode_config {
         */
        struct drm_property *writeback_out_fence_ptr_property;
 
+       /**
+        * @hdr_output_metadata_property: Connector property containing HDR
+        * metadata. This will be provided by userspace compositors based
+        * on HDR content.
+        */
+       struct drm_property *hdr_output_metadata_property;
+
        /* dumb ioctl parameters */
        uint32_t preferred_depth, prefer_shadow;
 
diff --git a/include/drm/drm_vram_mm_helper.h b/include/drm/drm_vram_mm_helper.h
new file mode 100644 (file)
index 0000000..a8ffd85
--- /dev/null
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef DRM_VRAM_MM_HELPER_H
+#define DRM_VRAM_MM_HELPER_H
+
+#include <drm/ttm/ttm_bo_driver.h>
+
+struct drm_device;
+
+/**
+ * struct drm_vram_mm_funcs - Callback functions for &struct drm_vram_mm
+ * @evict_flags:       Provides an implementation for struct \
+       &ttm_bo_driver.evict_flags
+ * @verify_access:     Provides an implementation for \
+       struct &ttm_bo_driver.verify_access
+ *
+ * These callback functions integrate VRAM MM with TTM buffer objects. New
+ * functions can be added if necessary.
+ */
+struct drm_vram_mm_funcs {
+       void (*evict_flags)(struct ttm_buffer_object *bo,
+                           struct ttm_placement *placement);
+       int (*verify_access)(struct ttm_buffer_object *bo, struct file *filp);
+};
+
+/**
+ * struct drm_vram_mm - An instance of VRAM MM
+ * @vram_base: Base address of the managed video memory
+ * @vram_size: Size of the managed video memory in bytes
+ * @bdev:      The TTM BO device.
+ * @funcs:     TTM BO functions
+ *
+ * The fields &struct drm_vram_mm.vram_base and
+ * &struct drm_vram_mm.vram_size are managed by VRAM MM, but are
+ * available for public read access. Use the field
+ * &struct drm_vram_mm.bdev to access the TTM BO device.
+ */
+struct drm_vram_mm {
+       uint64_t vram_base;
+       size_t vram_size;
+
+       struct ttm_bo_device bdev;
+
+       const struct drm_vram_mm_funcs *funcs;
+};
+
+/**
+ * drm_vram_mm_of_bdev() - \
+	Returns the container of type &struct drm_vram_mm for field bdev.
+ * @bdev:      the TTM BO device
+ *
+ * Returns:
+ * The containing instance of &struct drm_vram_mm
+ */
+static inline struct drm_vram_mm *drm_vram_mm_of_bdev(
+       struct ttm_bo_device *bdev)
+{
+       return container_of(bdev, struct drm_vram_mm, bdev);
+}
+
+int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev,
+                    uint64_t vram_base, size_t vram_size,
+                    const struct drm_vram_mm_funcs *funcs);
+void drm_vram_mm_cleanup(struct drm_vram_mm *vmm);
+
+int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma,
+                    struct drm_vram_mm *vmm);
+
+/*
+ * Helpers for integration with struct drm_device
+ */
+
+struct drm_vram_mm *drm_vram_helper_alloc_mm(
+       struct drm_device *dev, uint64_t vram_base, size_t vram_size,
+       const struct drm_vram_mm_funcs *funcs);
+void drm_vram_helper_release_mm(struct drm_device *dev);
+
+/*
+ * Helpers for &struct file_operations
+ */
+
+int drm_vram_mm_file_operations_mmap(
+       struct file *filp, struct vm_area_struct *vma);
+
+/**
+ * define DRM_VRAM_MM_FILE_OPERATIONS - default callback functions for \
+       &struct file_operations
+ *
+ * Drivers that use VRAM MM can use this macro to initialize
+ * &struct file_operations with default functions.
+ */
+#define DRM_VRAM_MM_FILE_OPERATIONS \
+       .llseek         = no_llseek, \
+       .read           = drm_read, \
+       .poll           = drm_poll, \
+       .unlocked_ioctl = drm_ioctl, \
+       .compat_ioctl   = drm_compat_ioctl, \
+       .mmap           = drm_vram_mm_file_operations_mmap, \
+       .open           = drm_open, \
+       .release        = drm_release \
+
+#endif
diff --git a/include/drm/gma_drm.h b/include/drm/gma_drm.h
deleted file mode 100644 (file)
index 87ac5e6..0000000
+++ /dev/null
@@ -1,25 +0,0 @@
-/**************************************************************************
- * Copyright (c) 2007-2011, Intel Corporation.
- * All Rights Reserved.
- * Copyright (c) 2008, Tungsten Graphics Inc.  Cedar Park, TX., USA.
- * All Rights Reserved.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
- **************************************************************************/
-
-#ifndef _GMA_DRM_H_
-#define _GMA_DRM_H_
-
-#endif
index 0daca4d..57b4121 100644 (file)
@@ -167,9 +167,6 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  * @sched: the scheduler instance on which this job is scheduled.
  * @s_fence: contains the fences for the scheduling of job.
  * @finish_cb: the callback for the finished fence.
- * @finish_work: schedules the function @drm_sched_job_finish once the job has
- *               finished to remove the job from the
- *               @drm_gpu_scheduler.ring_mirror_list.
  * @node: used to append this struct to the @drm_gpu_scheduler.ring_mirror_list.
  * @id: a unique id assigned to each job scheduled on the scheduler.
  * @karma: increment on every hang caused by this job. If this exceeds the hang
@@ -188,7 +185,6 @@ struct drm_sched_job {
        struct drm_gpu_scheduler        *sched;
        struct drm_sched_fence          *s_fence;
        struct dma_fence_cb             finish_cb;
-       struct work_struct              finish_work;
        struct list_head                node;
        uint64_t                        id;
        atomic_t                        karma;
@@ -263,6 +259,7 @@ struct drm_sched_backend_ops {
  *              guilty and it will be considered for scheduling further.
  * @num_jobs: the number of jobs in queue in the scheduler
  * @ready: marks if the underlying HW is ready to work
+ * @free_guilty: A hint to the timeout handler to free the guilty job.
  *
  * One scheduler is implemented for each hardware ring.
  */
@@ -283,6 +280,7 @@ struct drm_gpu_scheduler {
        int                             hang_limit;
        atomic_t                        num_jobs;
        bool                    ready;
+       bool                            free_guilty;
 };
 
 int drm_sched_init(struct drm_gpu_scheduler *sched,
@@ -296,7 +294,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
                       void *owner);
 void drm_sched_job_cleanup(struct drm_sched_job *job);
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
-void drm_sched_stop(struct drm_gpu_scheduler *sched);
+void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery);
 void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched);
 void drm_sched_increase_karma(struct drm_sched_job *bad);
index 58725f8..8a32756 100644 (file)
@@ -39,18 +39,20 @@ struct dma_buf_attachment;
 
 /**
  * struct dma_buf_ops - operations possible on struct dma_buf
- * @map_atomic: [optional] maps a page from the buffer into kernel address
- *             space, users may not block until the subsequent unmap call.
- *             This callback must not sleep.
- * @unmap_atomic: [optional] unmaps a atomically mapped page from the buffer.
- *               This Callback must not sleep.
- * @map: [optional] maps a page from the buffer into kernel address space.
- * @unmap: [optional] unmaps a page from the buffer.
  * @vmap: [optional] creates a virtual mapping for the buffer into kernel
  *       address space. Same restrictions as for vmap and friends apply.
  * @vunmap: [optional] unmaps a vmap from the buffer
  */
 struct dma_buf_ops {
+       /**
+        * @cache_sgt_mapping:
+        *
+        * If true the framework will cache the first mapping made for each
+        * attachment. This avoids creating mappings for attachments multiple
+        * times.
+        */
+       bool cache_sgt_mapping;
+
        /**
         * @attach:
         *
@@ -205,8 +207,6 @@ struct dma_buf_ops {
         * to be restarted.
         */
        int (*end_cpu_access)(struct dma_buf *, enum dma_data_direction);
-       void *(*map)(struct dma_buf *, unsigned long);
-       void (*unmap)(struct dma_buf *, unsigned long, void *);
 
        /**
         * @mmap:
@@ -245,6 +245,31 @@ struct dma_buf_ops {
         */
        int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
 
+       /**
+        * @map:
+        *
+        * Maps a page from the buffer into kernel address space. The page is
+        * specified by offset into the buffer in PAGE_SIZE units.
+        *
+        * This callback is optional.
+        *
+        * Returns:
+        *
+        * Virtual address pointer where requested page can be accessed. NULL
+        * on error or when this function is unimplemented by the exporter.
+        */
+       void *(*map)(struct dma_buf *, unsigned long);
+
+       /**
+        * @unmap:
+        *
+        * Unmaps a page from the buffer. The page offset and address pointer
+        * should be the same as those passed to and returned by the matching
+        * call to @map.
+        *
+        * This callback is optional.
+        */
+       void (*unmap)(struct dma_buf *, unsigned long, void *);
+
        void *(*vmap)(struct dma_buf *);
        void (*vunmap)(struct dma_buf *, void *vaddr);
 };
@@ -307,6 +332,8 @@ struct dma_buf {
  * @dmabuf: buffer for this attachment.
  * @dev: device attached to the buffer.
  * @node: list of dma_buf_attachment.
+ * @sgt: cached mapping.
+ * @dir: direction of cached mapping.
  * @priv: exporter specific attachment data.
  *
  * This structure holds the attachment information between the dma_buf buffer
@@ -322,6 +349,8 @@ struct dma_buf_attachment {
        struct dma_buf *dmabuf;
        struct device *dev;
        struct list_head node;
+       struct sg_table *sgt;
+       enum dma_data_direction dir;
        void *priv;
 };
 
index 927ad64..ee55ba5 100644 (file)
@@ -47,6 +47,7 @@ enum hdmi_infoframe_type {
        HDMI_INFOFRAME_TYPE_AVI = 0x82,
        HDMI_INFOFRAME_TYPE_SPD = 0x83,
        HDMI_INFOFRAME_TYPE_AUDIO = 0x84,
+       HDMI_INFOFRAME_TYPE_DRM = 0x87,
 };
 
 #define HDMI_IEEE_OUI 0x000c03
@@ -55,6 +56,7 @@ enum hdmi_infoframe_type {
 #define HDMI_AVI_INFOFRAME_SIZE    13
 #define HDMI_SPD_INFOFRAME_SIZE    25
 #define HDMI_AUDIO_INFOFRAME_SIZE  10
+#define HDMI_DRM_INFOFRAME_SIZE    26
 
 #define HDMI_INFOFRAME_SIZE(type)      \
        (HDMI_INFOFRAME_HEADER_SIZE + HDMI_ ## type ## _INFOFRAME_SIZE)
@@ -152,6 +154,17 @@ enum hdmi_content_type {
        HDMI_CONTENT_TYPE_GAME,
 };
 
+enum hdmi_metadata_type {
+       HDMI_STATIC_METADATA_TYPE1 = 1,
+};
+
+enum hdmi_eotf {
+       HDMI_EOTF_TRADITIONAL_GAMMA_SDR,
+       HDMI_EOTF_TRADITIONAL_GAMMA_HDR,
+       HDMI_EOTF_SMPTE_ST2084,
+       HDMI_EOTF_BT_2100_HLG,
+};
+
 struct hdmi_avi_infoframe {
        enum hdmi_infoframe_type type;
        unsigned char version;
@@ -175,12 +188,37 @@ struct hdmi_avi_infoframe {
        unsigned short right_bar;
 };
 
+/* DRM Infoframe as per CTA 861.G spec */
+struct hdmi_drm_infoframe {
+       enum hdmi_infoframe_type type;
+       unsigned char version;
+       unsigned char length;
+       enum hdmi_eotf eotf;
+       enum hdmi_metadata_type metadata_type;
+       struct {
+               u16 x, y;
+       } display_primaries[3];
+       struct {
+               u16 x, y;
+       } white_point;
+       u16 max_display_mastering_luminance;
+       u16 min_display_mastering_luminance;
+       u16 max_cll;
+       u16 max_fall;
+};
+
 int hdmi_avi_infoframe_init(struct hdmi_avi_infoframe *frame);
 ssize_t hdmi_avi_infoframe_pack(struct hdmi_avi_infoframe *frame, void *buffer,
                                size_t size);
 ssize_t hdmi_avi_infoframe_pack_only(const struct hdmi_avi_infoframe *frame,
                                     void *buffer, size_t size);
 int hdmi_avi_infoframe_check(struct hdmi_avi_infoframe *frame);
+int hdmi_drm_infoframe_init(struct hdmi_drm_infoframe *frame);
+ssize_t hdmi_drm_infoframe_pack(struct hdmi_drm_infoframe *frame, void *buffer,
+                               size_t size);
+ssize_t hdmi_drm_infoframe_pack_only(const struct hdmi_drm_infoframe *frame,
+                                    void *buffer, size_t size);
+int hdmi_drm_infoframe_check(struct hdmi_drm_infoframe *frame);
 
 enum hdmi_spd_sdi {
        HDMI_SPD_SDI_UNKNOWN,
@@ -320,6 +358,22 @@ struct hdmi_vendor_infoframe {
        unsigned int s3d_ext_data;
 };
 
+/* HDR Metadata as per 861.G spec */
+struct hdr_static_metadata {
+       __u8 eotf;
+       __u8 metadata_type;
+       __u16 max_cll;
+       __u16 max_fall;
+       __u16 min_cll;
+};
+
+struct hdr_sink_metadata {
+       __u32 metadata_type;
+       union {
+               struct hdr_static_metadata hdmi_type1;
+       };
+};
+
 int hdmi_vendor_infoframe_init(struct hdmi_vendor_infoframe *frame);
 ssize_t hdmi_vendor_infoframe_pack(struct hdmi_vendor_infoframe *frame,
                                   void *buffer, size_t size);
@@ -355,6 +409,7 @@ union hdmi_infoframe {
        struct hdmi_spd_infoframe spd;
        union hdmi_vendor_any_infoframe vendor;
        struct hdmi_audio_infoframe audio;
+       struct hdmi_drm_infoframe drm;
 };
 
 ssize_t hdmi_infoframe_pack(union hdmi_infoframe *frame, void *buffer,
index 661d73f..8a5b2f8 100644 (file)
@@ -50,6 +50,7 @@ typedef unsigned int drm_handle_t;
 
 #else /* One of the BSDs */
 
+#include <stdint.h>
 #include <sys/ioccom.h>
 #include <sys/types.h>
 typedef int8_t   __s8;
index 83cd163..997a7e0 100644 (file)
@@ -630,6 +630,29 @@ struct drm_color_lut {
        __u16 reserved;
 };
 
+/* HDR Metadata Infoframe as per 861.G spec */
+struct hdr_metadata_infoframe {
+       __u8 eotf;
+       __u8 metadata_type;
+	struct {
+		__u16 x, y;
+	} display_primaries[3];
+	struct {
+		__u16 x, y;
+	} white_point;
+       __u16 max_display_mastering_luminance;
+       __u16 min_display_mastering_luminance;
+       __u16 max_cll;
+       __u16 max_fall;
+};
+
+struct hdr_output_metadata {
+       __u32 metadata_type;
+       union {
+               struct hdr_metadata_infoframe hdmi_metadata_type1;
+       };
+};
+
 #define DRM_MODE_PAGE_FLIP_EVENT 0x01
 #define DRM_MODE_PAGE_FLIP_ASYNC 0x02
 #define DRM_MODE_PAGE_FLIP_TARGET_ABSOLUTE 0x4
index ea70669..58fbe48 100644 (file)
@@ -37,6 +37,7 @@ extern "C" {
 #define DRM_V3D_GET_PARAM                         0x04
 #define DRM_V3D_GET_BO_OFFSET                     0x05
 #define DRM_V3D_SUBMIT_TFU                        0x06
+#define DRM_V3D_SUBMIT_CSD                        0x07
 
 #define DRM_IOCTL_V3D_SUBMIT_CL           DRM_IOWR(DRM_COMMAND_BASE + DRM_V3D_SUBMIT_CL, struct drm_v3d_submit_cl)
 #define DRM_IOCTL_V3D_WAIT_BO             DRM_IOWR(DRM_COMMAND_BASE + DRM_V3D_WAIT_BO, struct drm_v3d_wait_bo)
@@ -45,6 +46,7 @@ extern "C" {
 #define DRM_IOCTL_V3D_GET_PARAM           DRM_IOWR(DRM_COMMAND_BASE + DRM_V3D_GET_PARAM, struct drm_v3d_get_param)
 #define DRM_IOCTL_V3D_GET_BO_OFFSET       DRM_IOWR(DRM_COMMAND_BASE + DRM_V3D_GET_BO_OFFSET, struct drm_v3d_get_bo_offset)
 #define DRM_IOCTL_V3D_SUBMIT_TFU          DRM_IOW(DRM_COMMAND_BASE + DRM_V3D_SUBMIT_TFU, struct drm_v3d_submit_tfu)
+#define DRM_IOCTL_V3D_SUBMIT_CSD          DRM_IOW(DRM_COMMAND_BASE + DRM_V3D_SUBMIT_CSD, struct drm_v3d_submit_csd)
 
 /**
  * struct drm_v3d_submit_cl - ioctl argument for submitting commands to the 3D
@@ -190,6 +192,7 @@ enum drm_v3d_param {
        DRM_V3D_PARAM_V3D_CORE0_IDENT1,
        DRM_V3D_PARAM_V3D_CORE0_IDENT2,
        DRM_V3D_PARAM_SUPPORTS_TFU,
+       DRM_V3D_PARAM_SUPPORTS_CSD,
 };
 
 struct drm_v3d_get_param {
@@ -230,6 +233,31 @@ struct drm_v3d_submit_tfu {
        __u32 out_sync;
 };
 
+/* Submits a compute shader for dispatch.  This job will block on any
+ * previous compute shaders submitted on this fd, and any other
+ * synchronization must be performed with in_sync/out_sync.
+ */
+struct drm_v3d_submit_csd {
+       __u32 cfg[7];
+       __u32 coef[4];
+
+       /* Pointer to a u32 array of the BOs that are referenced by the job.
+        */
+       __u64 bo_handles;
+
+       /* Number of BO handles passed in (size is that times 4). */
+       __u32 bo_handle_count;
+
+       /* sync object to block on before running the CSD job.  Each
+        * CSD job will execute in the order submitted to its FD.
+        * Synchronization against rendering/TFU jobs or CSD from
+        * other fds requires using sync objects.
+        */
+       __u32 in_sync;
+       /* Sync object to signal when the CSD job is done. */
+       __u32 out_sync;
+};
+
 #if defined(__cplusplus)
 }
 #endif