nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
authorMing Lei <ming.lei@redhat.com>
Wed, 9 Aug 2023 02:04:40 +0000 (10:04 +0800)
committerJens Axboe <axboe@kernel.dk>
Fri, 11 Aug 2023 14:12:32 +0000 (08:12 -0600)
commita7a7dabb5dd72d2875bc3ce56f94ea5ceb259d5b
tree1b4104bd16a57891c511cb8758ee560f8249b7c2
parentf099a108cabf72a1184b1e14e4a09f4ca3375750
nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll

Now that nvme_ns_chr_uring_cmd_iopoll() has switched to request-based io
polling, the associated NS is guaranteed to be live while io polling is
in progress, and the request itself is guaranteed to be valid because
blk-mq uses a pre-allocated request pool.

Remove the rcu read lock in nvme_ns_chr_uring_cmd_iopoll(); it is no
longer needed after the switch to request-based io polling.
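For reference, a minimal sketch of what the function looks like after the
rcu read lock is dropped, assuming the request-based polling shape from the
earlier conversion; names such as ioucmd->cookie and IORING_URING_CMD_POLLED
are taken from that context and are not spelled out in this message:

int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
				 struct io_comp_batch *iob,
				 unsigned int poll_flags)
{
	struct request *req;
	int ret = 0;

	if (!(ioucmd->flags & IORING_URING_CMD_POLLED))
		return 0;

	/*
	 * The request comes from blk-mq's pre-allocated pool and the NS
	 * is live while polling, so it can be polled without holding the
	 * rcu read lock.
	 */
	req = READ_ONCE(ioucmd->cookie);
	if (req && blk_rq_is_poll(req))
		ret = blk_rq_poll(req, iob, poll_flags);
	return ret;
}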

Fix "BUG: sleeping function called from invalid context" because
set_page_dirty_lock() from blk_rq_unmap_user() may sleep.
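For illustration only, a hedged sketch of the pre-fix pattern, reconstructed
from the description above (variables as in the sketch of the function
earlier in this message, not a verbatim copy of the old code):

	/*
	 * Pre-fix pattern (reconstructed): blk_rq_poll() may complete the
	 * polled passthrough request inline, and the completion path can
	 * reach blk_rq_unmap_user() -> set_page_dirty_lock(), which may
	 * sleep. Doing this inside an rcu read-side critical section
	 * triggers "BUG: sleeping function called from invalid context".
	 */
	rcu_read_lock();
	req = READ_ONCE(ioucmd->cookie);
	if (req && blk_rq_is_poll(req))
		ret = blk_rq_poll(req, iob, poll_flags);
	rcu_read_unlock();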

Fixes: 585079b6e425 ("nvme: wire up async polling for io passthrough commands")
Reported-by: Guangwu Zhang <guazhang@redhat.com>
Cc: Kanchan Joshi <joshi.k@samsung.com>
Cc: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Guangwu Zhang <guazhang@redhat.com>
Link: https://lore.kernel.org/r/20230809020440.174682-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
drivers/nvme/host/ioctl.c