drm/i915/gem: don't trust the dma_buf->size
Author:     Matthew Auld <matthew.auld@intel.com>
AuthorDate: Fri, 22 Jan 2021 18:15:13 +0000 (18:15 +0000)
Commit:     Daniel Vetter <daniel.vetter@ffwll.ch>
CommitDate: Wed, 24 Mar 2021 18:30:35 +0000 (19:30 +0100)
At least for the time being, we need to limit our object sizes such that
the number of pages can fit within a 32b signed int. It looks like we
should also apply the same restriction to any imported dma-buf.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210122181514.541436-1-matthew.auld@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
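
For illustration only, a minimal userspace sketch (not part of the commit) of the truncation the check below guards against, assuming 4 KiB pages; the program and its variable names are hypothetical:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* assume 4 KiB pages */

int main(void)
{
	/* hypothetical imported dma-buf size: one page past INT_MAX pages */
	uint64_t size = ((uint64_t)INT_MAX + 1) << PAGE_SHIFT;

	/*
	 * Storing the page count in a 32-bit signed int truncates it
	 * (implementation-defined conversion, typically wrapping to INT_MIN).
	 */
	int npages = (int)(size >> PAGE_SHIFT);

	printf("size   = %llu bytes\n", (unsigned long long)size);
	printf("npages = %d\n", npages);

	/* equivalent of the guard added by the patch */
	if (size >> PAGE_SHIFT > INT_MAX)
		printf("import would be rejected with -E2BIG\n");

	return 0;
}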
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c

index 04e9c04..dc11497 100644
@@ -244,6 +244,16 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
                }
        }
 
+       /*
+        * XXX: There is a prevalence of the assumption that we fit the
+        * object's page count inside a 32bit _signed_ variable. Let's document
+        * this and catch if we ever need to fix it. In the meantime, if you do
+        * spot such a local variable, please consider fixing!
+        */
+
+       if (dma_buf->size >> PAGE_SHIFT > INT_MAX)
+               return ERR_PTR(-E2BIG);
+
        /* need to attach */
        attach = dma_buf_attach(dma_buf, dev->dev);
        if (IS_ERR(attach))
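
With 4 KiB pages the new check rejects any imported dma-buf of 2^31 pages (8 TiB) or larger with -E2BIG, mirroring the limit already applied to natively allocated objects. As a sketch only, the same bound could be phrased as a standalone helper; the name obj_pages_exceed_int_max is hypothetical and not part of the patch:

static inline bool obj_pages_exceed_int_max(u64 size)
{
	/* true when the page count would not fit in a 32-bit signed int */
	return size >> PAGE_SHIFT > INT_MAX;
}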