author     | Jens Axboe <axboe@kernel.dk> | 2021-06-17 10:19:54 -0600
---|---|---
committer  | Jens Axboe <axboe@kernel.dk> | 2021-06-17 10:25:50 -0600
commit     | fe76421d1da1dcdb3a2cd8428ac40106bff28bc0 |
tree       | dc590c0c33f23e65a0e11cceb7be870944d0360a /fs/io-wq.c |
parent     | 0e03496d1967abf1ebb151a24318c07d07f41f7f |
io_uring: allow user configurable IO thread CPU affinity
io-wq defaults to per-node masks for IO workers. This works fine by
default, but isn't particularly handy for workloads that prefer more
specific affinities, for either performance or isolation reasons.
This adds IORING_REGISTER_IOWQ_AFF, which allows the user to pass in a CPU
mask that is then applied to the IO worker threads, and
IORING_UNREGISTER_IOWQ_AFF, which simply resets the masks back to the
per-node default.
Note that no care is given to existing IO threads: if they are already
running or sleeping, they will need to go through a reschedule before the
new affinity takes effect.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
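For illustration, a userspace caller might drive the new register opcodes roughly as below. This is a minimal sketch, not part of the patch: it assumes uapi headers that already carry the IORING_REGISTER_IOWQ_AFF/IORING_UNREGISTER_IOWQ_AFF opcodes, a previously created ring file descriptor, and uses the raw io_uring_register(2) syscall rather than a library wrapper.

```c
/*
 * Illustrative sketch only: pin the io-wq workers of an existing ring to
 * CPUs 0-1, then later reset them to the per-node default. Assumes
 * ring_fd is a valid io_uring file descriptor and that <linux/io_uring.h>
 * defines the new opcodes.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

static int io_uring_register_raw(int ring_fd, unsigned int opcode,
				 const void *arg, unsigned int nr_args)
{
	return (int) syscall(__NR_io_uring_register, ring_fd, opcode,
			     arg, nr_args);
}

int pin_iowq_workers(int ring_fd)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);
	CPU_SET(1, &mask);

	/* nr_args carries the size of the mask in bytes for this opcode. */
	if (io_uring_register_raw(ring_fd, IORING_REGISTER_IOWQ_AFF,
				  &mask, sizeof(mask)) < 0) {
		perror("IORING_REGISTER_IOWQ_AFF");
		return -1;
	}
	return 0;
}

int reset_iowq_workers(int ring_fd)
{
	/* No payload: the workers fall back to the per-node masks. */
	if (io_uring_register_raw(ring_fd, IORING_UNREGISTER_IOWQ_AFF,
				  NULL, 0) < 0) {
		perror("IORING_UNREGISTER_IOWQ_AFF");
		return -1;
	}
	return 0;
}
```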
Diffstat (limited to 'fs/io-wq.c')
-rw-r--r-- | fs/io-wq.c | 17
1 file changed, 17 insertions, 0 deletions
```diff
diff --git a/fs/io-wq.c b/fs/io-wq.c
index 2af8e1df4646..bb4d3ee9592e 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -1087,6 +1087,23 @@ static int io_wq_cpu_offline(unsigned int cpu, struct hlist_node *node)
 	return __io_wq_cpu_online(wq, cpu, false);
 }
 
+int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask)
+{
+	int i;
+
+	rcu_read_lock();
+	for_each_node(i) {
+		struct io_wqe *wqe = wq->wqes[i];
+
+		if (mask)
+			cpumask_copy(wqe->cpu_mask, mask);
+		else
+			cpumask_copy(wqe->cpu_mask, cpumask_of_node(i));
+	}
+	rcu_read_unlock();
+	return 0;
+}
+
 static __init int io_wq_init(void)
 {
 	int ret;
```
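The hunk above only adds the io-wq helper; the register-opcode side is outside the diffstat shown here. For context, a hypothetical handler feeding a user-supplied mask into io_wq_cpu_affinity() could look roughly like the sketch below. The function name, parameters, and error handling are assumptions for illustration, not taken from this patch.

```c
/*
 * Illustrative sketch only: copy a user-supplied CPU mask and hand it
 * to io_wq_cpu_affinity(). Name and calling convention are assumed.
 */
static int example_register_iowq_aff(struct io_wq *wq,
				     void __user *arg, unsigned int len)
{
	cpumask_var_t new_mask;
	int ret;

	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_clear(new_mask);
	/* Cap the copy at the kernel's mask size; shorter masks are fine. */
	if (len > cpumask_size())
		len = cpumask_size();
	if (copy_from_user(new_mask, arg, len)) {
		free_cpumask_var(new_mask);
		return -EFAULT;
	}

	ret = io_wq_cpu_affinity(wq, new_mask);
	free_cpumask_var(new_mask);
	return ret;
}
```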