linux-2.6-microblaze.git
23 months ago  io_uring: hide eventfd assumptions in eventfd paths
Pavel Begunkov [Mon, 20 Jun 2022 00:25:55 +0000 (01:25 +0100)]
io_uring: hide eventfd assumptions in eventfd paths

Some io_uring-eventfd users assume that there won't be spurious wakeups.
That assumption has to be honoured by all io_cqring_ev_posted() callers,
which is inconvenient and from time to time leads to problems, but it has
to be maintained so as not to break userspace.

Instead of making the callers track whether a CQE was posted or not, hide
it inside io_eventfd_signal(). It saves the ->cached_cq_tail it saw last
time and triggers the eventfd only when ->cached_cq_tail has changed since
then.
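
A minimal sketch of the idea (not the exact kernel code; the field and
helper names used here are assumptions for illustration):

    static void io_eventfd_signal(struct io_ring_ctx *ctx)
    {
            struct io_ev_fd *ev_fd = ctx->io_ev_fd;

            if (!ev_fd)
                    return;
            /* no CQE has been posted since the last signal, skip the wakeup */
            if (ev_fd->last_cq_tail == ctx->cached_cq_tail)
                    return;

            ev_fd->last_cq_tail = ctx->cached_cq_tail;
            eventfd_signal(ev_fd->cq_ev_fd, 1);
    }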

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/0ffc66bae37a2513080b601e4370e147faaa72c5.1655684496.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: fix io_poll_remove_all clang warnings
Pavel Begunkov [Mon, 20 Jun 2022 00:25:54 +0000 (01:25 +0100)]
io_uring: fix io_poll_remove_all clang warnings

clang complains about bitwise operations on bools; add a bit more
verbosity to better show that we want to call io_poll_remove_all_table()
twice, but with different arguments.
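
The pattern in question looks roughly like this (a sketch, not the exact
diff; the table names are taken from the poll hashing patches in this
series and are assumptions here):

    /* before: clang warns about '|' applied to bool operands */
    ret = io_poll_remove_all_table(tsk, &ctx->cancel_table, cancel_all) |
          io_poll_remove_all_table(tsk, &ctx->cancel_table_locked, cancel_all);

    /* after: spell the two calls out explicitly */
    ret = io_poll_remove_all_table(tsk, &ctx->cancel_table, cancel_all);
    ret |= io_poll_remove_all_table(tsk, &ctx->cancel_table_locked, cancel_all);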

Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/f11d21dcdf9233e0eeb15fa13b858a05a78eb310.1655684496.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: improve task exit timeout cancellations
Pavel Begunkov [Mon, 20 Jun 2022 00:25:53 +0000 (01:25 +0100)]
io_uring: improve task exit timeout cancellations

Don't spin trying to cancel timeouts that are reachable but not
cancellable, e.g. already executing.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/ab8a7440a60bbdf69ae514f672ad050e43dd1b03.1655684496.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: fix multi ctx cancellation
Pavel Begunkov [Mon, 20 Jun 2022 00:25:52 +0000 (01:25 +0100)]
io_uring: fix multi ctx cancellation

io_uring_try_cancel_requests() loops until there is nothing left to do
with the ring, however there might be several rings and they might have
dependencies between them, e.g. via poll requests.

Instead of cancelling rings one by one, try to cancel them all and only
then loop again if there is still potentially some work to do.
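
The resulting loop looks roughly like this (a sketch; the xarray iteration
and the node type are assumptions for illustration):

    struct io_tctx_node *node;
    unsigned long index;
    bool loop;

    do {
            loop = false;
            /* one cancellation pass over every ring the task is attached to */
            xa_for_each(&tctx->xa, index, node) {
                    if (io_uring_try_cancel_requests(node->ctx, current, cancel_all))
                            loop = true;
            }
    } while (loop);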

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/8d491fe02d8ac4c77ff38061cf86b9a827e8845c.1655684496.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: remove ->flush_cqes optimisation
Pavel Begunkov [Sun, 19 Jun 2022 11:26:08 +0000 (12:26 +0100)]
io_uring: remove ->flush_cqes optimisation

It's not clear how widely IOSQE_CQE_SKIP_SUCCESS is used, or how often the
->flush_cqes flag actually prevents the completion flush. Sometimes a high
level of concurrency enables it for at least one CQE, but sometimes it
doesn't save much because nobody is waiting on the CQ.

Remove the ->flush_cqes flag and the optimisation; it should benefit the
normal use case. Note that there is no spurious eventfd problem with this,
as the checks for spuriousness were incorporated into io_eventfd_signal().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/692e81eeddccc096f449a7960365fa7b4a18f8e6.1655637157.git.asml.silence@gmail.com
[axboe: remove now dead state->flush_cqes variable]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move io_eventfd_signal()
Pavel Begunkov [Sun, 19 Jun 2022 11:26:06 +0000 (12:26 +0100)]
io_uring: move io_eventfd_signal()

Move io_eventfd_signal() in the sources without any changes and kill its
forward declaration.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9ebebb3f6f56f5a5448a621e0b6a537720c43334.1655637157.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: reshuffle io_uring/io_uring.h
Pavel Begunkov [Sun, 19 Jun 2022 11:26:05 +0000 (12:26 +0100)]
io_uring: reshuffle io_uring/io_uring.h

It's a good idea to first do the forward declarations and then the inline
helpers, otherwise we will keep stumbling over dependencies between
them.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1d7fa6672ed43f20ccc0c54ae201369ebc3ebfab.1655637157.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: remove extra io_commit_cqring()
Pavel Begunkov [Sun, 19 Jun 2022 11:26:04 +0000 (12:26 +0100)]
io_uring: remove extra io_commit_cqring()

We don't post events in __io_commit_cqring_flush() anymore but send all
requests to tw, so no need to do io_commit_cqring() there.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/f2481e32375e749be89c42e4804268b608722cef.1655637157.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move a few private types to local headers
Jens Axboe [Sun, 19 Jun 2022 01:44:33 +0000 (19:44 -0600)]
io_uring: move a few private types to local headers

Commit 3a3d47fa9cfd ("io_uring: make io_uring_types.h public") moved
a bunch of io_uring types to a kernel wide header, so we could make
tracing a bit saner rather than pass in a ton of arguments.

However, there are a few types in there that don't really need to be
system wide. Move the cancel data and mapped buffers back to the
appropriate io_uring local headers.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: clean up tracing events
Pavel Begunkov [Thu, 16 Jun 2022 12:57:20 +0000 (13:57 +0100)]
io_uring: clean up tracing events

We have lots of trace events accepting an io_uring request and wanting
to print some of its fields like user_data, opcode, flags and so on.
However, as trace points were unaware of io_uring structures, we had to
pass all the fields as arguments. Teach trace/events/io_uring.h about
struct io_kiocb and stop the misery of passing a horde of arguments to
trace helpers.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/40ff72f92798114e56d400f2b003beb6cde6ef53.1655384063.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: make io_uring_types.h public
Pavel Begunkov [Thu, 16 Jun 2022 12:57:19 +0000 (13:57 +0100)]
io_uring: make io_uring_types.h public

Move the io_uring types to include/linux; they need to be public so that
tracing can see the definitions and we can clean up trace/events/io_uring.h.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a15f12e8cb7289b2de0deaddcc7518d98a132d17.1655384063.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: kill extra io_uring_types.h includes
Pavel Begunkov [Thu, 16 Jun 2022 12:57:18 +0000 (13:57 +0100)]
io_uring: kill extra io_uring_types.h includes

io_uring/io_uring.h already includes io_uring_types.h, so there is no need
to include it every time. Kill it in a bunch of places; this prepares us
for the following patches.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/94d8c943fbe0ef949981c508ddcee7fc1c18850f.1655384063.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: change ->cqe_cached invariant for CQE32
Pavel Begunkov [Fri, 17 Jun 2022 08:48:05 +0000 (09:48 +0100)]
io_uring: change ->cqe_cached invariant for CQE32

With IORING_SETUP_CQE32, ->cqe_cached doesn't store a real address but
rather an implicit offset into the cqes array. Store the real cqe pointer
and increment it accordingly when CQE32 is enabled.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1ee1838cba16bed96381a006950b36ba640d998c.1655455613.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: deduplicate io_get_cqe() calls
Pavel Begunkov [Fri, 17 Jun 2022 08:48:04 +0000 (09:48 +0100)]
io_uring: deduplicate io_get_cqe() calls

Deduplicate calls to io_get_cqe() from __io_fill_cqe_req().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4fa077986cc3abab7c59ff4e7c390c783885465f.1655455613.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: deduplicate __io_fill_cqe_req tracing
Pavel Begunkov [Fri, 17 Jun 2022 08:48:03 +0000 (09:48 +0100)]
io_uring: deduplicate __io_fill_cqe_req tracing

Deduplicate two trace_io_uring_complete() calls in __io_fill_cqe_req().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/277ed85dba5189ab7d932164b314013a0f0b0fdc.1655455613.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: introduce io_req_cqe_overflow()
Pavel Begunkov [Fri, 17 Jun 2022 08:48:02 +0000 (09:48 +0100)]
io_uring: introduce io_req_cqe_overflow()

__io_fill_cqe_req() is hot and inlined, we want it to be as small as
possible. Add io_req_cqe_overflow() accepting only a request and doing
all the overflow accounting, and use it to replace two calls to the
6-argument io_cqring_event_overflow().
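
The wrapper is essentially just the following (a sketch; the exact fields
pulled out of the request are assumptions):

    void io_req_cqe_overflow(struct io_kiocb *req)
    {
            io_cqring_event_overflow(req->ctx, req->cqe.user_data,
                                     req->cqe.res, req->cqe.flags,
                                     req->extra1, req->extra2);
    }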

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/048b9fbcce56814d77a1a540409c98c3d383edcb.1655455613.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: don't inline __io_get_cqe()
Pavel Begunkov [Fri, 17 Jun 2022 08:48:01 +0000 (09:48 +0100)]
io_uring: don't inline __io_get_cqe()

__io_get_cqe() is not as hot as io_get_cqe(), no need to inline it, it
sheds ~500B from the binary.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/c1ac829198a881b7af8710926f99a3559b9f24c0.1655455613.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: don't expose io_fill_cqe_aux()
Pavel Begunkov [Fri, 17 Jun 2022 08:48:00 +0000 (09:48 +0100)]
io_uring: don't expose io_fill_cqe_aux()

Deduplicate some code and add a helper for filling an aux CQE, locking
and notification.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b7c6557c8f9dc5c4cfb01292116c682a0ff61081.1655455613.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: kbuf: add comments for some tricky code
Hao Xu [Fri, 17 Jun 2022 05:04:29 +0000 (13:04 +0800)]
io_uring: kbuf: add comments for some tricky code

Add comments to explain why it is always under the uring lock when
incrementing the head in __io_kbuf_recycle. And rectify one comment about
kbuf consumption in the io-wq case.

Signed-off-by: Hao Xu <howeyxu@tencent.com>
Link: https://lore.kernel.org/r/20220617050429.94293-1-hao.xu@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: mutex locked poll hashing
Pavel Begunkov [Thu, 16 Jun 2022 09:22:12 +0000 (10:22 +0100)]
io_uring: mutex locked poll hashing

Currently we do two extra spin lock/unlock pairs to add a poll/apoll
request to the cancellation hash table and to remove it from there.

On the submission side we often already hold ->uring_lock, and tw
completion is likely to hold it as well. Add a second cancellation hash
table protected by ->uring_lock. Out of concern for latency, because of
the need to have the mutex locked on the completion side, use the new
table only in the following cases (a rough sketch follows the list):

1) IORING_SETUP_SINGLE_ISSUER: only one task grabs uring_lock, so there
   is little to no contention and so the main tw handler will almost
   always end up grabbing it before calling the callbacks.

2) IORING_SETUP_SQPOLL: same as with single issuer, only one task is
   a major user of ->uring_lock.

3) apoll: we normally grab the lock on the completion side anyway to
   execute the request, so it's free.
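
Conceptually, inserting into the hash then becomes something like this
(a sketch; the table and field names are borrowed from the rest of the
series and are assumptions, not the exact kernel code):

    static void io_poll_req_insert(struct io_kiocb *req, unsigned int issue_flags)
    {
            struct io_ring_ctx *ctx = req->ctx;
            u32 index = hash_long(req->cqe.user_data, ctx->cancel_table.hash_bits);

            if (issue_flags & IO_URING_F_UNLOCKED) {
                    /* ->uring_lock is not held, use the spinlock-protected table */
                    struct io_hash_bucket *hb = &ctx->cancel_table.hbs[index];

                    spin_lock(&hb->lock);
                    hlist_add_head(&req->hash_node, &hb->list);
                    spin_unlock(&hb->lock);
            } else {
                    /* ->uring_lock is held, the mutex alone protects this table */
                    hlist_add_head(&req->hash_node,
                                   &ctx->cancel_table_locked.hbs[index].list);
            }
    }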

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1bbad9c78c454b7b92f100bbf46730a37df7194f.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: propagate locking state to poll cancel
Pavel Begunkov [Thu, 16 Jun 2022 09:22:11 +0000 (10:22 +0100)]
io_uring: propagate locking state to poll cancel

Poll cancellation will soon need to grab ->uring_lock internally, so pass
the locking state, i.e. issue_flags, into the cancellation functions.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b86781d047727c07163443b57551a3fa57c7c5e1.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: introduce a struct for hash table
Pavel Begunkov [Thu, 16 Jun 2022 09:22:10 +0000 (10:22 +0100)]
io_uring: introduce a struct for hash table

Instead of passing around a pointer to hash buckets, add a bit of type
safety and wrap it into a structure.
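
Something along these lines (a sketch based on the commit description; the
field names are assumptions):

    struct io_hash_table {
            struct io_hash_bucket   *hbs;
            unsigned                hash_bits;
    };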

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d65bc3faba537ec2aca9eabf334394936d44bd28.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: pass hash table into poll_find
Pavel Begunkov [Thu, 16 Jun 2022 09:22:09 +0000 (10:22 +0100)]
io_uring: pass hash table into poll_find

In preparation for having multiple cancellation hash tables, pass a
table pointer into io_poll_find() and other poll cancel functions.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a31c88502463dce09254240fa037352927d7ecc3.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: add IORING_SETUP_SINGLE_ISSUER
Pavel Begunkov [Thu, 16 Jun 2022 09:22:08 +0000 (10:22 +0100)]
io_uring: add IORING_SETUP_SINGLE_ISSUER

Add a new IORING_SETUP_SINGLE_ISSUER flag and the userspace visible part
of it, i.e. the restriction on who may submit. Also, don't allow it
together with IOPOLL, as we're not going to put it to good use there.
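
From userspace, opting in would look roughly like this (a sketch assuming
liburing; the EINVAL fallback is only illustrative):

    #include <liburing.h>

    static int setup_ring(struct io_uring *ring)
    {
            /* promise that only this task will ever submit requests on the ring */
            int ret = io_uring_queue_init(64, ring, IORING_SETUP_SINGLE_ISSUER);

            if (ret == -EINVAL)
                    /* older kernel that doesn't know the flag, set up without it */
                    ret = io_uring_queue_init(64, ring, 0);
            return ret;
    }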

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4bcc41ee467fdf04c8aab8baf6ce3ba21858c3d4.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: use state completion infra for poll reqs
Pavel Begunkov [Thu, 16 Jun 2022 09:22:07 +0000 (10:22 +0100)]
io_uring: use state completion infra for poll reqs

Use io_req_task_complete() for poll request completions, so it can
utilise state completions and save lots of unnecessary locking.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/ced94cb5a728d8e386c640d052fd3da3f5d6891a.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: clean up io_ring_ctx_alloc
Pavel Begunkov [Thu, 16 Jun 2022 09:22:06 +0000 (10:22 +0100)]
io_uring: clean up io_ring_ctx_alloc

Add a variable for the number of hash buckets in io_ring_ctx_alloc(); it
makes the code more readable.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/993926ed0d614ba9a76b2a85bebae2babcb13983.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: limit the number of cancellation buckets
Pavel Begunkov [Thu, 16 Jun 2022 09:22:05 +0000 (10:22 +0100)]
io_uring: limit the number of cancellation buckets

Don't allocate too many hash/cancellation buckets; clamp the number to
8 bits, i.e. 256 buckets * 64B = 16KB. We don't usually have too many
requests, and 256 buckets should be enough, especially since we only
do a hash search in the cancellation path.
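
For instance, the sizing could look like this (a sketch; how the initial
value is derived from the queue size and the field names are assumptions):

    /* scale the table with the ring size, but cap it at 2^8 = 256 buckets */
    int hash_bits = ilog2(p->sq_entries) - 5;

    hash_bits = clamp(hash_bits, 1, 8);
    table->hash_bits = hash_bits;
    table->hbs = kmalloc_array(1U << hash_bits, sizeof(*table->hbs), GFP_KERNEL);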

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b9620c8072ba61a2d50eba894b89bd93a94a9abd.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: clean up io_try_cancel
Pavel Begunkov [Thu, 16 Jun 2022 09:22:04 +0000 (10:22 +0100)]
io_uring: clean up io_try_cancel

Get rid of an unnecessary extra goto in io_try_cancel() and simplify the
function.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/48cf5417b43a8386c6c364dba1ad9b4c7382d158.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: pass poll_find lock back
Pavel Begunkov [Thu, 16 Jun 2022 09:22:03 +0000 (10:22 +0100)]
io_uring: pass poll_find lock back

Instead of using implicit knowledge of what is locked or not after
io_poll_find() and co return, pass back a pointer to the locked
bucket, if any. If it is set, the caller must unlock the spinlock.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/dae1dc5749aa34367812ecf62f82fd3f053aae44.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: switch cancel_hash to use per entry spinlock
Hao Xu [Thu, 16 Jun 2022 09:22:02 +0000 (10:22 +0100)]
io_uring: switch cancel_hash to use per entry spinlock

Add a new io_hash_bucket structure so that each bucket in cancel_hash
has its own spinlock. Using a per entry lock for cancel_hash removes
some completion lock invocations and removes contention between
different cancel_hash entries.
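
The bucket itself is tiny, roughly (a sketch; the alignment annotation is
an assumption):

    struct io_hash_bucket {
            spinlock_t              lock;
            struct hlist_head       list;
    } ____cacheline_aligned_in_smp;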

Signed-off-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/05d1e135b0c8bce9d1441e6346776589e5783e26.1655371007.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: poll: remove unnecessary req->ref set
Hao Xu [Thu, 16 Jun 2022 09:22:01 +0000 (10:22 +0100)]
io_uring: poll: remove unnecessary req->ref set

We now don't need to set req->refcount for poll requests since the
reworked poll code ensures no request release race.

Signed-off-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/ec6fee45705890bdb968b0c175519242753c0215.1655371007.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: don't inline io_put_kbuf
Pavel Begunkov [Thu, 16 Jun 2022 09:22:00 +0000 (10:22 +0100)]
io_uring: don't inline io_put_kbuf

io_put_kbuf() is huge, don't bloat the kernel with inlining.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/2e21ccf0be471ffa654032914b9430813cae53f8.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: refactor io_req_task_complete()
Pavel Begunkov [Thu, 16 Jun 2022 09:21:59 +0000 (10:21 +0100)]
io_uring: refactor io_req_task_complete()

Clean up io_req_task_complete() and deduplicate io_put_kbuf() calls.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/ae3148ac7eb5cce3e06895cde306e9e959d6f6ae.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: kill REQ_F_COMPLETE_INLINE
Pavel Begunkov [Thu, 16 Jun 2022 09:21:58 +0000 (10:21 +0100)]
io_uring: kill REQ_F_COMPLETE_INLINE

REQ_F_COMPLETE_INLINE is only needed to delay queueing into the
completion list until io_queue_sqe(), as __io_req_complete() is inlined
and we don't want to bloat the kernel.

Now that we complete in a more centralised fashion in io_issue_sqe(), we
can get rid of the flag and queue to the list directly.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/600ba20a9338b8a39b249b23d3d177803613dde4.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: rw: delegate sync completions to core io_uring
Pavel Begunkov [Thu, 16 Jun 2022 09:21:57 +0000 (10:21 +0100)]
io_uring: rw: delegate sync completions to core io_uring

io_issue_sqe() from the io_uring core knows how to complete requests
based on the returned error code, so we can delegate io_read()/io_write()
completion to it. Make kiocb_done() return the right completion
code and propagate it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/32ef005b45d23bf6b5e6837740dc0331bb051bd4.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: remove unused IO_REQ_CACHE_SIZE defined
Jens Axboe [Wed, 15 Jun 2022 22:28:17 +0000 (16:28 -0600)]
io_uring: remove unused IO_REQ_CACHE_SIZE defined

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: don't set REQ_F_COMPLETE_INLINE in tw
Pavel Begunkov [Wed, 15 Jun 2022 16:33:56 +0000 (17:33 +0100)]
io_uring: don't set REQ_F_COMPLETE_INLINE in tw

io_req_task_complete() enqueues requests for state completion itself, so
there is no need for REQ_F_COMPLETE_INLINE, which only serves the purpose
of not bloating the kernel.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/aca80f71464ad02c06f1311d998a2d6ee0b31573.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: remove check_cq checking from hot paths
Pavel Begunkov [Wed, 15 Jun 2022 16:33:55 +0000 (17:33 +0100)]
io_uring: remove check_cq checking from hot paths

All ctx->check_cq events are slow path; don't test every single flag one
by one in the hot path, but add a common guarding if.
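
Conceptually the hot path check then looks like this (a sketch from inside
the CQ waiting loop; treating check_cq as an atomic bitmask, the flag
names and the slow path handling are assumptions):

    unsigned int check_cq = atomic_read(&ctx->check_cq);

    /* one cheap, usually-false test guards all of the rare conditions */
    if (unlikely(check_cq)) {
            if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
                    io_cqring_overflow_flush(ctx);
            if (check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT))
                    return -EBADR;
    }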

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/dff026585cea7ff3a172a7c83894a3b0111bbf6a.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: never defer-complete multi-apoll
Pavel Begunkov [Wed, 15 Jun 2022 16:33:54 +0000 (17:33 +0100)]
io_uring: never defer-complete multi-apoll

Luckily, nobody completes multi-apoll requests outside the polling
functions, but don't set IO_URING_F_COMPLETE_DEFER in any case, as
there is nobody catching REQ_F_COMPLETE_INLINE there, and so requests
would be leaked if it were used.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a65ed3f5effd9321ee06e6edea294a03be3e15a0.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: inline ->registered_rings
Pavel Begunkov [Wed, 15 Jun 2022 16:33:53 +0000 (17:33 +0100)]
io_uring: inline ->registered_rings

There can be only 16 registered rings, so there is no need to allocate an
array for them separately; store it directly in tctx.
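
I.e. roughly (a sketch; the constant name and the surrounding fields are
assumptions):

    #define IO_RINGFD_REG_MAX 16

    struct io_uring_task {
            /* ... other per-task io_uring state ... */
            struct file     *registered_rings[IO_RINGFD_REG_MAX];
    };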

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/495f0b953c87994dd9e13de2134019054fa5830d.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: explain io_wq_work::cancel_seq placement
Pavel Begunkov [Wed, 15 Jun 2022 16:33:52 +0000 (17:33 +0100)]
io_uring: explain io_wq_work::cancel_seq placement

Add a comment on why we keep ->cancel_seq in struct io_wq_work instead
of struct io_kiocb, despite it being needed only by io_uring and not by
io-wq.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/988e87eec9dc700b5dae933df3aefef303502f6c.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move small helpers to headers
Pavel Begunkov [Wed, 15 Jun 2022 16:33:51 +0000 (17:33 +0100)]
io_uring: move small helpers to headers

There are a bunch of inline helpers that will be useful not only to the
core of io_uring; move them to headers.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/22df99c83723e44cba7e945e8519e64e3642c064.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: refactor ctx slow data placement
Pavel Begunkov [Wed, 15 Jun 2022 16:33:50 +0000 (17:33 +0100)]
io_uring: refactor ctx slow data placement

Shove all the slow path data to the end of ctx and get rid of the extra
indentation.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/bcaf200298dd469af20787650550efc66d89bef2.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: better caching for ctx timeout fields
Pavel Begunkov [Wed, 15 Jun 2022 16:33:49 +0000 (17:33 +0100)]
io_uring: better caching for ctx timeout fields

Following the timeout fields' access patterns, move all of them into a
separate cache line inside ctx, so they don't interfere with normal
completion caching, especially since timeout removals and completions
are separated and the latter is done via tw.

It also sheds some bytes from io_ring_ctx, 1216B -> 1152B.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4b163793072840de53b3cb66e0c2995e7226ff78.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move defer_list to slow data
Pavel Begunkov [Wed, 15 Jun 2022 16:33:48 +0000 (17:33 +0100)]
io_uring: move defer_list to slow data

Draining is a slow path; move defer_list to the end of the context, where
the slow data lives.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/e16379391ca72b490afdd24e8944baab849b4a7b.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: make reg buf init consistent
Pavel Begunkov [Wed, 15 Jun 2022 16:33:47 +0000 (17:33 +0100)]
io_uring: make reg buf init consistent

The default (i.e. empty) state of a registered buffer is dummy_ubuf, so
set it to dummy_ubuf on init instead of NULL.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/c5456aecf03d9627fbd6e65e100e2b5293a6151e.1655310733.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: deprecate epoll_ctl support
Jens Axboe [Wed, 1 Jun 2022 18:36:42 +0000 (12:36 -0600)]
io_uring: deprecate epoll_ctl support

As far as we know, nobody ever adopted the epoll_ctl management via
io_uring. Deprecate it now with a warning, and plan on removing it in
a later kernel version. When we do remove it, we can revert the following
commits as well:

39220e8d4a2a ("eventpoll: support non-blocking do_epoll_ctl() calls")
58e41a44c488 ("eventpoll: abstract out epoll_ctl() handler")

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/io-uring/CAHk-=wiTyisXBgKnVHAGYCNvkmjk=50agS2Uk6nr+n3ssLZg2w@mail.gmail.com/
Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: add support for level triggered poll
Jens Axboe [Fri, 27 May 2022 16:55:07 +0000 (10:55 -0600)]
io_uring: add support for level triggered poll

By default, the POLL_ADD command does edge triggered poll - if we get
a non-zero mask on the initial poll attempt, we complete the request
successfully.

Support level triggered mode by always waiting for a notification, regardless
of whether or not the initial mask matches the file state.
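
From userspace this would look roughly as follows (a sketch assuming
liburing, an initialised 'ring' and an open 'fd'; the IORING_POLL_ADD_LEVEL
flag name and its placement in sqe->len are assumptions here):

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

    io_uring_prep_poll_add(sqe, fd, POLLIN);
    /* ask for level triggered behaviour instead of the default edge triggered */
    sqe->len |= IORING_POLL_ADD_LEVEL;
    io_uring_submit(&ring);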

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move opcode table to opdef.c
Jens Axboe [Wed, 15 Jun 2022 22:27:42 +0000 (16:27 -0600)]
io_uring: move opcode table to opdef.c

We already have the declarations in opdef.h, move the rest into its own
file rather than in the main io_uring.c file.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move read/write related opcodes to its own file
Jens Axboe [Mon, 13 Jun 2022 13:27:03 +0000 (07:27 -0600)]
io_uring: move read/write related opcodes to its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move remaining file table manipulation to filetable.c
Jens Axboe [Thu, 26 May 2022 15:44:31 +0000 (09:44 -0600)]
io_uring: move remaining file table manipulation to filetable.c

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move rsrc related data, core, and commands
Jens Axboe [Mon, 13 Jun 2022 13:12:45 +0000 (07:12 -0600)]
io_uring: move rsrc related data, core, and commands

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: split provided buffers handling into its own file
Jens Axboe [Mon, 13 Jun 2022 13:07:23 +0000 (07:07 -0600)]
io_uring: split provided buffers handling into its own file

Move both the opcodes related to it and the internal code dealing with
it.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move cancelation into its own file
Jens Axboe [Thu, 26 May 2022 02:36:47 +0000 (20:36 -0600)]
io_uring: move cancelation into its own file

This also helps cleanup the io_uring.h cancel parts, as we can make
things static in the cancel.c file, mostly.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move poll handling into its own file
Jens Axboe [Thu, 26 May 2022 02:31:09 +0000 (20:31 -0600)]
io_uring: move poll handling into its own file

Add an io_poll_issue() rather than export the general task_work locking
and io_issue_sqe(), and put the io_op_defs definition and structure into
a separate header file so that poll can use it.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: add opcode name to io_op_defs
Jens Axboe [Wed, 25 May 2022 17:57:03 +0000 (11:57 -0600)]
io_uring: add opcode name to io_op_defs

This kills the last per-op switch.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: include and forward-declaration sanitation
Jens Axboe [Wed, 25 May 2022 17:48:35 +0000 (11:48 -0600)]
io_uring: include and forward-declaration sanitation

Remove some dead headers we no longer need, and get rid of the
io_ring_ctx and io_uring_fops forward declarations.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move io_uring_task (tctx) helpers into its own file
Jens Axboe [Wed, 25 May 2022 17:01:04 +0000 (11:01 -0600)]
io_uring: move io_uring_task (tctx) helpers into its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move fdinfo helpers to its own file
Jens Axboe [Wed, 25 May 2022 16:40:19 +0000 (10:40 -0600)]
io_uring: move fdinfo helpers to its own file

This also means moving a bit more of the fixed file handling to the
filetable side, which makes sense separately too.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: use io_is_uring_fops() consistently
Jens Axboe [Wed, 25 May 2022 16:28:04 +0000 (10:28 -0600)]
io_uring: use io_is_uring_fops() consistently

Convert the last spots that check for io_uring_fops to use the provided
helper instead.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move SQPOLL related handling into its own file
Jens Axboe [Wed, 25 May 2022 15:13:39 +0000 (09:13 -0600)]
io_uring: move SQPOLL related handling into its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move timeout opcodes and handling into its own file
Jens Axboe [Wed, 25 May 2022 14:57:27 +0000 (08:57 -0600)]
io_uring: move timeout opcodes and handling into its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move our reference counting into a header
Jens Axboe [Wed, 25 May 2022 14:56:52 +0000 (08:56 -0600)]
io_uring: move our reference counting into a header

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move msg_ring into its own file
Jens Axboe [Wed, 25 May 2022 12:42:08 +0000 (06:42 -0600)]
io_uring: move msg_ring into its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: split network related opcodes into its own file
Jens Axboe [Wed, 25 May 2022 12:25:13 +0000 (06:25 -0600)]
io_uring: split network related opcodes into its own file

While at it, convert the handlers to just use io_eopnotsupp_prep()
if CONFIG_NET isn't set.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move statx handling to its own file
Jens Axboe [Wed, 25 May 2022 12:12:18 +0000 (06:12 -0600)]
io_uring: move statx handling to its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move epoll handler to its own file
Jens Axboe [Wed, 25 May 2022 12:09:18 +0000 (06:09 -0600)]
io_uring: move epoll handler to its own file

It would be nice to sort out Kconfig for this and not even compile
epoll.c if we don't have epoll configured.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: add a dummy -EOPNOTSUPP prep handler
Jens Axboe [Wed, 25 May 2022 12:04:14 +0000 (06:04 -0600)]
io_uring: add a dummy -EOPNOTSUPP prep handler

Add it and use it for the epoll handling, if epoll isn't configured.
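
The handler itself is trivial, roughly (a sketch of the idea; the exact
prep signature is assumed):

    int io_eopnotsupp_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
    {
            return -EOPNOTSUPP;
    }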

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move uring_cmd handling to its own file
Jens Axboe [Wed, 25 May 2022 11:59:19 +0000 (05:59 -0600)]
io_uring: move uring_cmd handling to its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: split out open/close operations
Jens Axboe [Wed, 25 May 2022 03:54:43 +0000 (21:54 -0600)]
io_uring: split out open/close operations

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: separate out file table handling code
Jens Axboe [Wed, 25 May 2022 03:43:10 +0000 (21:43 -0600)]
io_uring: separate out file table handling code

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: split out fadvise/madvise operations
Jens Axboe [Wed, 25 May 2022 03:28:33 +0000 (21:28 -0600)]
io_uring: split out fadvise/madvise operations

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: split out fs related sync/fallocate functions
Jens Axboe [Wed, 25 May 2022 03:25:19 +0000 (21:25 -0600)]
io_uring: split out fs related sync/fallocate functions

This splits out sync_file_range, fsync, and fallocate.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: split out splice related operations
Jens Axboe [Wed, 25 May 2022 03:19:47 +0000 (21:19 -0600)]
io_uring: split out splice related operations

This splits out splice and tee support.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: split out filesystem related operations
Jens Axboe [Wed, 25 May 2022 03:13:00 +0000 (21:13 -0600)]
io_uring: split out filesystem related operations

This splits out renameat, unlinkat, mkdirat, symlinkat, and linkat.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move nop into its own file
Jens Axboe [Tue, 24 May 2022 17:56:42 +0000 (11:56 -0600)]
io_uring: move nop into its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: move xattr related opcodes to its own file
Jens Axboe [Tue, 24 May 2022 17:46:43 +0000 (11:46 -0600)]
io_uring: move xattr related opcodes to its own file

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: handle completions in the core
Jens Axboe [Tue, 24 May 2022 21:21:00 +0000 (15:21 -0600)]
io_uring: handle completions in the core

Normally request handlers complete requests themselves, if they don't
return an error. For the latter case, the core will complete it for
them.

This is unhandy for pushing opcode handlers further out, as we don't
want a bunch of inline completion code and we don't want to make the
completion path slower than it is now.

Let the core handle any completion, unless the handler explicitly
asks us not to.
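
Inside io_issue_sqe() the idea is roughly the following (a sketch; the
IOU_* return code names and the completion helpers are assumptions):

    ret = def->issue(req, issue_flags);

    if (ret == IOU_OK) {
            /* the handler set the result, the core posts the completion */
            if (issue_flags & IO_URING_F_COMPLETE_DEFER)
                    io_req_complete_defer(req);
            else
                    io_req_complete_post(req);
    } else if (ret != IOU_ISSUE_SKIP_COMPLETE) {
            /* an error, or the handler explicitly opted out of core completion */
            return ret;
    }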

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: set completion results upfront
Jens Axboe [Tue, 24 May 2022 18:45:38 +0000 (12:45 -0600)]
io_uring: set completion results upfront

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: add io_uring_types.h
Jens Axboe [Tue, 24 May 2022 16:56:14 +0000 (10:56 -0600)]
io_uring: add io_uring_types.h

This adds definitions of structs that both the core and the various
opcode handlers need to know about.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: define a request type cleanup handler
Jens Axboe [Tue, 24 May 2022 16:26:28 +0000 (10:26 -0600)]
io_uring: define a request type cleanup handler

This can move request type specific cleanup into a private handler,
removing the need for the core io_uring parts to know what types
they are dealing with.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: unify struct io_symlink and io_hardlink
Jens Axboe [Tue, 24 May 2022 16:19:47 +0000 (10:19 -0600)]
io_uring: unify struct io_symlink and io_hardlink

They are really just a subset of each other, just use the one type.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert iouring_cmd to io_cmd_type
Jens Axboe [Tue, 24 May 2022 16:09:32 +0000 (10:09 -0600)]
io_uring: convert iouring_cmd to io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert xattr to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 16:06:46 +0000 (10:06 -0600)]
io_uring: convert xattr to use io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert rsrc_update to io_cmd_type
Jens Axboe [Tue, 24 May 2022 16:05:49 +0000 (10:05 -0600)]
io_uring: convert rsrc_update to io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert msg and nop to io_cmd_type
Jens Axboe [Tue, 24 May 2022 16:03:49 +0000 (10:03 -0600)]
io_uring: convert msg and nop to io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert splice to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 16:01:47 +0000 (10:01 -0600)]
io_uring: convert splice to use io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert epoll to io_cmd_type
Jens Axboe [Tue, 24 May 2022 16:01:09 +0000 (10:01 -0600)]
io_uring: convert epoll to io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert file system request types to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:59:28 +0000 (09:59 -0600)]
io_uring: convert file system request types to use io_cmd_type

This converts statx, rename, unlink, mkdir, symlink, and hardlink to
use io_cmd_type.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert madvise/fadvise to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:51:05 +0000 (09:51 -0600)]
io_uring: convert madvise/fadvise to use io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert open/close path to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:49:25 +0000 (09:49 -0600)]
io_uring: convert open/close path to use io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert timeout path to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:45:22 +0000 (09:45 -0600)]
io_uring: convert timeout path to use io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert cancel path to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:33:01 +0000 (09:33 -0600)]
io_uring: convert cancel path to use io_cmd_type

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert the sync and fallocate paths to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:30:45 +0000 (09:30 -0600)]
io_uring: convert the sync and fallocate paths to use io_cmd_type

They all share the same struct io_sync, convert them to use the
io_cmd_type approach instead.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert net related opcodes to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:27:38 +0000 (09:27 -0600)]
io_uring: convert net related opcodes to use io_cmd_type

This converts accept, connect, send/recv, sendmsg/recvmsg, shutdown, and
socket to use io_cmd_type.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: remove recvmsg knowledge from io_arm_poll_handler()
Jens Axboe [Tue, 24 May 2022 15:24:42 +0000 (09:24 -0600)]
io_uring: remove recvmsg knowledge from io_arm_poll_handler()

There's a special case for recvmsg with MSG_ERRQUEUE set. This is
problematic as it means the core needs to know about this special
request type.

For now, just add a generic flag for it.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert poll_update path to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:16:40 +0000 (09:16 -0600)]
io_uring: convert poll_update path to use io_cmd_type

Remove struct io_poll_update from io_kiocb, and convert the poll path to
use the io_cmd_type approach instead.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert poll path to use io_cmd_type
Jens Axboe [Tue, 24 May 2022 15:13:46 +0000 (09:13 -0600)]
io_uring: convert poll path to use io_cmd_type

Remove struct io_poll_iocb from io_kiocb, and convert the poll path to
use the io_cmd_type approach instead.

While at it, rename io_poll_iocb to io_poll which is consistent with the
other request type private structures.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: convert read/write path to use io_cmd_type
Jens Axboe [Mon, 13 Jun 2022 12:57:44 +0000 (06:57 -0600)]
io_uring: convert read/write path to use io_cmd_type

Remove struct io_rw from io_kiocb, and convert the read/write path to
use the io_cmd_type approach instead.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
23 months ago  io_uring: add generic command payload type to struct io_kiocb
Jens Axboe [Tue, 24 May 2022 14:32:05 +0000 (08:32 -0600)]
io_uring: add generic command payload type to struct io_kiocb

Each opcode generally has a command structure in io_kiocb which it can
use to store data associated with that request.

In preparation for having the core layer not know about what's inside
these fields, add a generic io_cmd_data type and put in the union as
well.
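
Something along these lines (a sketch; the size of the payload area and
the accessor macro are assumptions):

    struct io_cmd_data {
            struct file     *file;
            /* each per-opcode command struct has to fit in here */
            __u8            data[56];
    };

    /* opcode handlers view req->cmd through their own private type */
    #define io_kiocb_to_cmd(req)    ((void *) &(req)->cmd)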

Signed-off-by: Jens Axboe <axboe@kernel.dk>