workqueue: re-add lockdep dependencies for flushing
author     Johannes Berg <johannes.berg@intel.com>
           Wed, 22 Aug 2018 09:49:04 +0000 (11:49 +0200)
committer  Tejun Heo <tj@kernel.org>
           Wed, 22 Aug 2018 15:31:38 +0000 (08:31 -0700)
In flush_work(), we need to create a lockdep dependency so that
the following scenario is appropriately flagged as a problem:

  work_function()
  {
    mutex_lock(&mutex);
    ...
  }

  other_function()
  {
    mutex_lock(&mutex);
    flush_work(&work); // or cancel_work_sync(&work);
  }

This is a deadlock: the work might already be running and blocked
trying to acquire the mutex, while other_function() holds the
mutex and waits in flush_work() for the work to finish.
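
As a self-contained illustration, a minimal sketch of the pattern
using the actual workqueue API; the names (my_mutex, my_work,
my_work_fn, other_function) are hypothetical:

  #include <linux/mutex.h>
  #include <linux/workqueue.h>

  static DEFINE_MUTEX(my_mutex);

  static void my_work_fn(struct work_struct *work)
  {
    mutex_lock(&my_mutex);  /* blocks while other_function() holds it */
    /* ... */
    mutex_unlock(&my_mutex);
  }
  static DECLARE_WORK(my_work, my_work_fn);

  static void other_function(void)
  {
    schedule_work(&my_work);
    mutex_lock(&my_mutex);
    /*
     * If my_work_fn() is running but has not yet taken my_mutex,
     * it now blocks on us, and flush_work() below never returns.
     */
    flush_work(&my_work);
    mutex_unlock(&my_mutex);
  }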

The same dependency is needed in flush_workqueue(), since any work
item queued on the workqueue might take the mutex; a sketch follows.
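
A hedged sketch of that variant, again with hypothetical names
(my_wq assumed to be created elsewhere with alloc_workqueue()):

  static void other_function(void)
  {
    mutex_lock(&my_mutex);
    /*
     * Deadlocks if any work item queued on my_wq takes my_mutex,
     * since flush_workqueue() waits for all queued items.
     */
    flush_workqueue(my_wq);
    mutex_unlock(&my_mutex);
  }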

These annotations were removed when cross-release was introduced,
since it partially caught such problems, but cross-release has
since been reverted anyway. IMHO the removal was erroneous even
then, since lockdep should be able to catch potential problems,
not just actual ones, and cross-release would only have caught
the problem when wait_for_completion() was actually invoked.
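
For context on why an empty acquire/release pair is enough:
process_one_work() already wraps every work function invocation in
the corresponding lockdep annotations, so lockdep records a
work -> mutex dependency when the work function takes the mutex.
The annotations added below record the inverse mutex -> work
dependency in the flushing context, letting lockdep close the
cycle without either side ever actually blocking. Simplified
(and heavily elided) from process_one_work():

  lock_map_acquire(&pwq->wq->lockdep_map);
  lock_map_acquire(&lockdep_map);  /* local copy of work->lockdep_map */
  worker->current_func(work);      /* work function runs under the map */
  lock_map_release(&lockdep_map);
  lock_map_release(&pwq->wq->lockdep_map);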

Fixes: fd1a5b04dfb8 ("workqueue: Remove now redundant lock acquisitions wrt. workqueue flushes")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
kernel/workqueue.c

index aa520e7..661184f 100644
@@ -2652,6 +2652,9 @@ void flush_workqueue(struct workqueue_struct *wq)
        if (WARN_ON(!wq_online))
                return;
 
+       lock_map_acquire(&wq->lockdep_map);
+       lock_map_release(&wq->lockdep_map);
+
        mutex_lock(&wq->mutex);
 
        /*
@@ -2905,6 +2908,11 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
        if (WARN_ON(!wq_online))
                return false;
 
+       if (!from_cancel) {
+               lock_map_acquire(&work->lockdep_map);
+               lock_map_release(&work->lockdep_map);
+       }
+
        if (start_flush_work(work, &barr, from_cancel)) {
                wait_for_completion(&barr.done);
                destroy_work_on_stack(&barr.work);