drm/i915/selftests: ensure we reserve a fence slot
Author:     Matthew Auld <matthew.auld@intel.com>
AuthorDate: Wed, 29 Jun 2022 17:43:47 +0000 (18:43 +0100)
Commit:     Matthew Auld <matthew.auld@intel.com>
CommitDate: Fri, 1 Jul 2022 07:30:00 +0000 (08:30 +0100)
We should always be explicit and reserve a fence slot (via
dma_resv_reserve_fences()) before adding a new fence to the object's
reservation object.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-10-matthew.auld@intel.com
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 388c85b..da28acb 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -1224,8 +1224,10 @@ static int __igt_mmap_migrate(struct intel_memory_region **placements,
                                          expand32(POISON_INUSE), &rq);
        i915_gem_object_unpin_pages(obj);
        if (rq) {
-               dma_resv_add_fence(obj->base.resv, &rq->fence,
-                                  DMA_RESV_USAGE_KERNEL);
+               err = dma_resv_reserve_fences(obj->base.resv, 1);
+               if (!err)
+                       dma_resv_add_fence(obj->base.resv, &rq->fence,
+                                          DMA_RESV_USAGE_KERNEL);
                i915_request_put(rq);
        }
        i915_gem_object_unlock(obj);
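
For context, a minimal sketch of the reserve-then-add pattern this change
follows; it assumes the caller already holds a struct dma_resv and a fence,
and the helper name reserve_and_add_fence() is hypothetical, not taken from
the patch:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/*
 * Hypothetical helper: take the reservation lock, reserve one fence
 * slot (the only step that may allocate and therefore fail), then add
 * the fence with kernel usage.
 */
static int reserve_and_add_fence(struct dma_resv *resv,
				 struct dma_fence *fence)
{
	int err;

	err = dma_resv_lock(resv, NULL);
	if (err)
		return err;

	/* Make room for one more fence; may fail with -ENOMEM. */
	err = dma_resv_reserve_fences(resv, 1);
	if (!err)
		/* With a slot reserved, adding the fence cannot fail. */
		dma_resv_add_fence(resv, fence, DMA_RESV_USAGE_KERNEL);

	dma_resv_unlock(resv);
	return err;
}

dma_resv_add_fence() is not allowed to allocate under the reservation lock,
so callers are expected to reserve slots up front with
dma_resv_reserve_fences(); that reservation is the only step that can return
an error, mirroring what the hunk above now checks.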