| author | Kan Liang <kan.liang@linux.intel.com> | 2020-01-21 10:13:38 -0800 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2020-02-11 13:23:48 +0100 |
| commit | 6c1c07b33eb093e5a2a313ece89baa596ba6135e (patch) | |
| tree | b7f85e52634693a16c4709ab37cf2de8483011c3 /kernel/events | |
| parent | f861854e1b435b27197417f6f90d87188003cb24 (diff) | |
perf/x86/intel: Avoid unnecessary PEBS_ENABLE MSR access in PMI
The perf PMI handler, intel_pmu_handle_irq(), currently performs
unnecessary accesses to the PEBS_ENABLE MSR in
__intel_pmu_enable/disable_all() when PEBS is enabled.
When entering the handler, global ctrl is explicitly disabled, so no
counters count anymore. Whether PEBS is enabled or not therefore makes
no difference inside the PMI handler.
Furthermore, in most cases cpuc->pebs_enabled is not changed in the
PMI, so the PEBS status doesn't change and the PEBS_ENABLE MSR doesn't
need to be changed when exiting the handler either.
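As a rough illustration of the idea (a simplified sketch, not the actual
upstream diff; the *_sketch helper names are made up here), the PMI path
only needs the global control MSR, because cpuc->pebs_enabled normally
still mirrors MSR_IA32_PEBS_ENABLE when the handler exits:

  /*
   * Simplified sketch, not the upstream code: inside the PMI handler,
   * disabling/re-enabling the PMU only needs MSR_CORE_PERF_GLOBAL_CTRL.
   * MSR_IA32_PEBS_ENABLE can be left alone because no counter counts
   * while global ctrl is 0.
   */
  static void __intel_pmu_disable_all_sketch(void)
  {
          wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);   /* stop every counter */
  }

  static void __intel_pmu_enable_all_sketch(void)
  {
          /* No PEBS_ENABLE write: the MSR already holds cpuc->pebs_enabled. */
          wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, x86_pmu.intel_ctrl);
  }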
PMI throttling may change the PEBS status during the PMI handler:
x86_pmu_stop() ends up in intel_pmu_pebs_disable(), which can update
cpuc->pebs_enabled, but MSR_IA32_PEBS_ENABLE is not updated at the same
time because cpuc->enabled has been forced to 0.
The patch explicitly updates MSR_IA32_PEBS_ENABLE for this case.
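A sketch of that corner case (again simplified, with hypothetical helper
names, not the actual diff): intel_pmu_pebs_disable() clears the event's
bit in cpuc->pebs_enabled but skips the wrmsrl() because cpuc->enabled
is 0 inside the PMI, so the handler compares a snapshot taken on entry
against the current mask and writes the MSR only when they differ:

  /* Hypothetical, simplified sketch of the throttling case. */
  static void intel_pmu_pebs_disable_sketch(struct cpu_hw_events *cpuc, int idx)
  {
          cpuc->pebs_enabled &= ~(1ULL << idx);

          /* cpuc->enabled is 0 in the PMI handler, so no MSR write here. */
          if (cpuc->enabled)
                  wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
  }

  /* Called on PMI exit with the cpuc->pebs_enabled value seen on entry. */
  static void intel_pmu_pebs_resync_sketch(struct cpu_hw_events *cpuc,
                                           u64 pebs_enabled_on_entry)
  {
          if (pebs_enabled_on_entry != cpuc->pebs_enabled)
                  wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
  }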
Use ftrace to measure the duration of intel_pmu_handle_irq() on BDX:

  # perf record -e cycles:P -- ./tchain_edit

The average duration of intel_pmu_handle_irq():

  Without the patch: 1.144 us
  With the patch:    1.025 us
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20200121181338.3234-1-kan.liang@linux.intel.com
Diffstat (limited to 'kernel/events')
0 files changed, 0 insertions, 0 deletions