author    | Sebastian Andrzej Siewior <bigeasy@linutronix.de> | 2023-08-03 12:09:32 +0200
committer | Thomas Gleixner <tglx@linutronix.de>              | 2023-09-19 22:08:29 +0200
commit    | 1aabbc532413ced293952f8e149ad0a607d6e470
tree      | 0228090cc9cef5ca2f08a36fc0f4fc2c9fd9f8fa /kernel/signal.c
parent    | a20d6f63dbfc176697886d7709312ad0a795648e
signal: Don't disable preemption in ptrace_stop() on PREEMPT_RT
On PREEMPT_RT, keeping preemption disabled across the invocation of
cgroup_enter_frozen() is a problem: the function acquires css_set_lock,
which is a sleeping lock on PREEMPT_RT and therefore must not be
acquired with preemption disabled.
The preempt-disabled section is only for performance optimisation reasons
and can be avoided.
Extend the comment accordingly and don't disable preemption before
scheduling on PREEMPT_RT.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Link: https://lore.kernel.org/r/20230803100932.325870-3-bigeasy@linutronix.de
Diffstat (limited to 'kernel/signal.c')
-rw-r--r-- | kernel/signal.c | 15
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/kernel/signal.c b/kernel/signal.c
index 3035bebd7075..f2a5578326ad 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -2345,11 +2345,22 @@ static int ptrace_stop(int exit_code, int why, unsigned long message,
	 * will be no preemption between unlock and schedule() and so
	 * improving the performance since the ptracer will observe that
	 * the tracee is scheduled out once it gets on the CPU.
+	 *
+	 * On PREEMPT_RT locking tasklist_lock does not disable preemption.
+	 * Therefore the task can be preempted after do_notify_parent_cldstop()
+	 * before unlocking tasklist_lock so there is no benefit in doing this.
+	 *
+	 * In fact disabling preemption is harmful on PREEMPT_RT because
+	 * the spinlock_t in cgroup_enter_frozen() must not be acquired
+	 * with preemption disabled due to the 'sleeping' spinlock
+	 * substitution of RT.
	 */
-	preempt_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
	read_unlock(&tasklist_lock);
	cgroup_enter_frozen();
-	preempt_enable_no_resched();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable_no_resched();
	schedule();
	cgroup_leave_frozen(true);