| author | Paul E. McKenney <paulmck@kernel.org> | 2020-01-06 11:59:58 -0800 |
| --- | --- | --- |
| committer | Paul E. McKenney <paulmck@kernel.org> | 2020-02-20 16:00:45 -0800 |
| commit | fcb7381265e6cceb1d54283878d145db52b9d9d7 (patch) | |
| tree | 15e2312520978a9d75c899589dee35dd57026896 /kernel/rcu | |
| parent | bb6d3fb354c5ee8d6bde2d576eb7220ea09862b9 (diff) | |
rcu-tasks: *_ONCE() for rcu_tasks_cbs_head
The RCU tasks list of callbacks, rcu_tasks_cbs_head, is sampled locklessly
by rcu_tasks_kthread() when waiting for work to do. This commit therefore
applies READ_ONCE() to that lockless sampling and WRITE_ONCE() to the
single potential store outside of rcu_tasks_kthread().
This data race was reported by KCSAN. Not appropriate for backporting
due to failure being unlikely.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Diffstat (limited to 'kernel/rcu')
-rw-r--r-- | kernel/rcu/update.c | 4 |
1 file changed, 2 insertions, 2 deletions
```diff
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 6c4b862f57d6..a27df761b149 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -528,7 +528,7 @@ void call_rcu_tasks(struct rcu_head *rhp, rcu_callback_t func)
 	rhp->func = func;
 	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
 	needwake = !rcu_tasks_cbs_head;
-	*rcu_tasks_cbs_tail = rhp;
+	WRITE_ONCE(*rcu_tasks_cbs_tail, rhp);
 	rcu_tasks_cbs_tail = &rhp->next;
 	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
 	/* We can't create the thread unless interrupts are enabled. */
@@ -658,7 +658,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 		/* If there were none, wait a bit and start over. */
 		if (!list) {
 			wait_event_interruptible(rcu_tasks_cbs_wq,
-						 rcu_tasks_cbs_head);
+						 READ_ONCE(rcu_tasks_cbs_head));
 			if (!rcu_tasks_cbs_head) {
 				WARN_ON(signal_pending(current));
 				schedule_timeout_interruptible(HZ/10);
```
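As a standalone illustration of the pattern this patch applies (not part of the kernel tree), the sketch below marks the store done under a lock with WRITE_ONCE() and the lockless poll of the list head with READ_ONCE(). The simplified macro definitions, the pthread mutex, and every identifier other than READ_ONCE()/WRITE_ONCE() are assumptions made for this example, not kernel code.

```c
/*
 * Minimal userspace sketch of the *_ONCE() pattern used in this commit.
 * The macro definitions below mirror the kernel's behavior for scalar
 * types (GNU C __typeof__ extension); all other names are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)	 (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

struct cb {
	struct cb *next;
};

static struct cb *cbs_head;		/* sampled locklessly by the worker */
static struct cb **cbs_tail = &cbs_head;
static pthread_mutex_t cbs_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Enqueue side: the list is updated under cbs_lock, but because the
 * worker samples cbs_head without holding the lock, the store that can
 * be observed locklessly is marked with WRITE_ONCE().
 */
static void enqueue(struct cb *cbp)
{
	cbp->next = NULL;
	pthread_mutex_lock(&cbs_lock);
	WRITE_ONCE(*cbs_tail, cbp);
	cbs_tail = &cbp->next;
	pthread_mutex_unlock(&cbs_lock);
}

/* Worker side: lockless check for pending work, marked with READ_ONCE(). */
static int work_pending(void)
{
	return READ_ONCE(cbs_head) != NULL;
}

int main(void)
{
	static struct cb one;

	printf("pending before enqueue: %d\n", work_pending());
	enqueue(&one);
	printf("pending after enqueue:  %d\n", work_pending());
	return 0;
}
```

Without the annotations, both accesses are plain C accesses that the compiler may tear, fuse, or re-load, which is the class of data race KCSAN reports; with them, each access is treated as a single volatile access, which is what the kernel relies on to prevent such transformations.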