author		Pavel Begunkov <asml.silence@gmail.com>	2020-07-23 20:25:20 +0300
committer	Jens Axboe <axboe@kernel.dk>			2020-07-24 13:00:46 -0600
commit		ae34817bd93e373a03203a4c6892735c430a14e1 (patch)
tree		36e31c3d6eb575289ab4ad10a293c661fa3b8396 /fs
parent		23b3628e45924419399da48c2b3a522b05557c91 (diff)
io_uring: don't do opcode prep twice
Calling into opcode prep handlers may be dangerous, as they re-read the
SQE but might not re-initialise the request completely. If io_req_defer()
has passed its fast checks and is done with preparation, punt the request
async.

As all other cases are covered by nulling @sqe, this guarantees that the
io_[opcode]_prep() handlers are visited only once per request (see the
sketch after the sign-offs below).
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
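
To make the invariant concrete, here is a minimal, self-contained C sketch
of the prep-once pattern the message describes. This is an illustration
under assumed names (struct sqe, struct req, opcode_prep(), prepare()),
not the io_uring source: the prep handler runs only while the caller still
holds a non-NULL SQE pointer, and whoever completes preparation nulls that
pointer, so no later path can re-enter prep.

#include <stddef.h>

/* Hypothetical, simplified types -- not the io_uring definitions. */
struct sqe { int opcode; };
struct req { int opcode; int prepped; };

/* Stands in for io_[opcode]_prep(): it re-reads the SQE and may leave
 * the request only partially re-initialised, so it must run at most
 * once per request. */
static int opcode_prep(struct req *req, const struct sqe *sqe)
{
	req->opcode = sqe->opcode;
	req->prepped = 1;
	return 0;
}

/* Callers pass the SQE pointer by reference; completing preparation
 * nulls it, guaranteeing opcode_prep() is visited only once. */
static int prepare(struct req *req, const struct sqe **sqe)
{
	int ret = 0;

	if (*sqe) {
		ret = opcode_prep(req, *sqe);
		*sqe = NULL;
	}
	return ret;
}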
Diffstat (limited to 'fs')
-rw-r--r--	fs/io_uring.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6f3f18a99f4f..38e4c3902963 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5447,7 +5447,8 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
 		spin_unlock_irq(&ctx->completion_lock);
 		kfree(de);
-		return 0;
+		io_queue_async_work(req);
+		return -EIOCBQUEUED;
 	}
 	trace_io_uring_defer(ctx, req, req->user_data);
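
For context, a hedged sketch of what the changed return value means to a
caller. This is a simplified illustration, not the actual call site at
this commit; fail_request() and submit_inline() are hypothetical helpers.
Before the patch, this early-out path returned 0 after opcode prep had
already run, so the caller would continue into the prep path a second
time with a live SQE; returning -EIOCBQUEUED instead signals that the
request has been taken over and punted to async work.

/* Hypothetical caller, illustrating the contract after this patch:
 * 0            -> not deferred, caller continues inline submission
 * -EIOCBQUEUED -> request queued (deferred or punted async), hands off
 * other errors -> fail the request */
ret = io_req_defer(req, sqe);
if (ret) {
	if (ret != -EIOCBQUEUED)
		fail_request(req, ret);	/* hypothetical helper */
	return;				/* ownership transferred */
}
submit_inline(req, sqe);		/* hypothetical helper */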