author		Thomas Gleixner <tglx@linutronix.de>	2020-02-24 15:01:37 +0100
committer	Alexei Starovoitov <ast@kernel.org>	2020-02-24 16:18:20 -0800
commit		b0a81b94cc50a112601721fcc2f91fab78d7b9f3
tree		f4d7b2395896a53576f2743900dfb102ee86e261 /kernel
parent		70ed0706a48e3da3eb4515214fef658ff1184b9f
bpf/trace: Remove redundant preempt_disable from trace_call_bpf()
Similar to __bpf_trace_run(), the preempt_disable()/preempt_enable() pair
here is redundant because trace_call_bpf() is invoked from a trace point
via __DO_TRACE(), which already disables preemption _before_ invoking any
of the functions attached to the trace point.
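For context, a heavily simplified sketch of the dispatch loop inside the
__DO_TRACE() macro (see include/linux/tracepoint.h; the cond check, the
RCU handling and the rcuidle variant are omitted here):

	preempt_disable_notrace();
	it_func_ptr = rcu_dereference_raw((tp)->funcs);
	if (it_func_ptr) {
		do {
			it_func = (it_func_ptr)->func;
			__data = (it_func_ptr)->data;
			/* every attached probe runs with preemption off */
			((void (*)(proto))(it_func))(args);
		} while ((++it_func_ptr)->func);
	}
	preempt_enable_notrace();

Any probe attached to the trace point, including the perf handler that
ends up in trace_call_bpf(), therefore already runs inside a
preemption-disabled region.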
Remove the preempt_disable()/preempt_enable() pair and add a cant_sleep()
check.
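cant_sleep() is the inverse of might_sleep(): with
CONFIG_DEBUG_ATOMIC_SLEEP enabled it complains when the caller is in a
context that could sleep, i.e. it asserts the non-preemptible context the
trace point is expected to provide (without that config option it
compiles to a no-op). Roughly, simplified from __cant_sleep() in
kernel/sched/core.c (rate limiting and the detailed report are omitted):

	void __cant_sleep(const char *file, int line, int preempt_offset)
	{
		if (irqs_disabled())
			return;
		if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
			return;
		if (preempt_count() > preempt_offset)
			return;
		/* the caller could sleep here: report the bug */
		printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n",
		       file, line);
	}

So a future caller that reaches trace_call_bpf() from preemptible context
triggers a loud warning instead of a silent per-CPU accounting bug.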
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.059995527@linutronix.de
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/trace/bpf_trace.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 15fafaed027c..07764c761073 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -83,7 +83,7 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 	if (in_nmi())	/* not supported yet */
 		return 1;
 
-	preempt_disable();
+	cant_sleep();
 
 	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
 		/*
@@ -115,7 +115,6 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 
  out:
 	__this_cpu_dec(bpf_prog_active);
-	preempt_enable();
 
 	return ret;
 }
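Why the non-preemptible context still matters after this change:
bpf_prog_active is a per-CPU recursion counter, and the __this_cpu_*()
accessors are only correct while the task can neither migrate nor be
preempted between the increment and the decrement. The patch does not
relax that requirement; it stops re-asserting it because __DO_TRACE()
already provides it, and cant_sleep() turns the implicit contract into a
checkable one. A hypothetical sketch (illustrative only, not kernel code)
of what would break without the guarantee:

	__this_cpu_inc_return(bpf_prog_active);	/* bumps CPU 0's counter */
	/* if preemption were enabled, the task could migrate to CPU 1 here */
	__this_cpu_dec(bpf_prog_active);	/* drops CPU 1's counter instead,
						 * leaving CPU 0's stuck at a
						 * nonzero value and silently
						 * skipping BPF programs there */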