author		Ming Lei <ming.lei@redhat.com>	2018-01-06 16:27:40 +0800
committer	Jens Axboe <axboe@kernel.dk>	2018-01-06 09:25:36 -0700
commit		fb350e0ad99359768e1e80b4784692031ec340e4 (patch)
tree		25d87e34b68ada63eb442c6269b031f037011520 /block/blk-mq.c
parent		7d4901a90d02500c8011472a060f9b2e60e6e605 (diff)
blk-mq: fix race between updating nr_hw_queues and switching io sched
In both elevator_switch_mq() and blk_mq_update_nr_hw_queues(), sched tags
can be allocated and q->nr_hw_queues is used, so a race is inevitable. For
example, blk_mq_init_sched() may trigger a use-after-free on an hctx that
blk_mq_realloc_hw_ctxs() frees when nr_hw_queues is decreased.

This patch fixes the race by holding q->sysfs_lock.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r--	block/blk-mq.c	4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index dd21051fb251..111e1aa5562f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2407,6 +2407,9 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 	struct blk_mq_hw_ctx **hctxs = q->queue_hw_ctx;
 
 	blk_mq_sysfs_unregister(q);
+
+	/* protect against switching io scheduler */
+	mutex_lock(&q->sysfs_lock);
 	for (i = 0; i < set->nr_hw_queues; i++) {
 		int node;
 
@@ -2451,6 +2454,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 		}
 	}
 	q->nr_hw_queues = i;
+	mutex_unlock(&q->sysfs_lock);
 	blk_mq_sysfs_register(q);
 }
 
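The fix is the standard pattern of serializing two users of the same data on one mutex, so that the path that shrinks the hctx array can never free a context another path is still walking. As a rough user-space sketch only (hypothetical names, a pthread mutex standing in for q->sysfs_lock, plain malloc'd structs standing in for the hardware contexts), the idea looks like this:

/*
 * Illustration only, not kernel code: the "walker" thread touches every
 * context the way the sched-init path walks hctxs, while the main thread
 * shrinks the array the way blk_mq_realloc_hw_ctxs() does when
 * nr_hw_queues is decreased.  Both paths take the same mutex, so the
 * walker can never see a context that has already been freed.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx { int id; };

static struct ctx **ctxs;	/* stands in for q->queue_hw_ctx */
static int nr_ctxs;		/* stands in for q->nr_hw_queues */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* analogue of blk_mq_realloc_hw_ctxs(): shrink the context array */
static void shrink_ctxs(int new_nr)
{
	pthread_mutex_lock(&lock);	/* "protect against switching io scheduler" */
	for (int i = new_nr; i < nr_ctxs; i++) {
		free(ctxs[i]);
		ctxs[i] = NULL;
	}
	nr_ctxs = new_nr;
	pthread_mutex_unlock(&lock);
}

/* analogue of the sched-init path touching every hardware context */
static void *walk_ctxs(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);	/* same lock: no use-after-free */
	for (int i = 0; i < nr_ctxs; i++)
		printf("using ctx %d\n", ctxs[i]->id);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	nr_ctxs = 4;
	ctxs = calloc(nr_ctxs, sizeof(*ctxs));
	for (int i = 0; i < nr_ctxs; i++) {
		ctxs[i] = malloc(sizeof(**ctxs));
		ctxs[i]->id = i;
	}

	pthread_create(&t, NULL, walk_ctxs, NULL);
	shrink_ctxs(2);			/* would race with the walker without the lock */
	pthread_join(t, NULL);

	for (int i = 0; i < nr_ctxs; i++)
		free(ctxs[i]);
	free(ctxs);
	return 0;
}

In the kernel patch itself the lock is q->sysfs_lock, which the io-scheduler switch path also holds, so taking it across the hctx reallocation above is enough to serialize the two users.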