|
To enable sharing of the arm_pmu code with arm64, this patch factors it
out to drivers/perf/. A new drivers/perf directory is added for
performance monitor drivers to live under.
MAINTAINERS is updated accordingly. Files added previously without a
corresponding MAINTAINERS update (perf_regs.c, perf_callchain.c, and
perf_event.h) are also added.
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[will: augmented Kconfig help slightly]
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Enable the probe function to be shared with other drivers, which will
inject the appropriate of_device_id and pmu_probe_info tables.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Currently the arm perf code has platdata callbacks for runtime PM and
irq handling, but no platform implements the hooks for the former. Kill
these off.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
In multi-cluster systems, the PMUs can be different across clusters, and
so our logical PMU may not be able to schedule events on all CPUs.
This patch adds a cpumask to encode which CPUs a PMU driver supports
controlling events for, and limits the driver to scheduling events on
those CPUs, and enabling and disabling the physical PMUs on those CPUs.
The cpumask is built based on the interrupt-affinity property, and in
the absence of such a property a homogeneous system is assumed.
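For illustration only, a minimal userspace C sketch of the idea, using a
plain bitmask in place of the kernel's struct cpumask (struct fake_pmu and
its field names are invented for the example):

  #include <stdio.h>

  #define NR_CPUS 8

  struct fake_pmu {
      unsigned long supported_cpus;   /* bit n set => CPU n belongs to this PMU */
  };

  static int can_schedule_on(const struct fake_pmu *pmu, int cpu)
  {
      return (pmu->supported_cpus >> cpu) & 1UL;
  }

  int main(void)
  {
      /* Cluster 0 (CPUs 0-3) is driven by this PMU; cluster 1 is not. */
      struct fake_pmu pmu = { .supported_cpus = 0x0fUL };
      int cpu;

      for (cpu = 0; cpu < NR_CPUS; cpu++)
          printf("cpu%d: %s\n", cpu,
                 can_schedule_on(&pmu, cpu) ? "schedule here" : "skip");
      return 0;
  }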
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Historically, the PMU devicetree bindings have expected SPIs to be
listed in order of *logical* CPU number. This is problematic for
bootloaders, especially when the boot CPU (logical ID 0) isn't listed
first in the devicetree.
This patch adds a new optional property, interrupt-affinity, to the
PMU node which allows the interrupt affinity to be described using
a list of phandles to CPU nodes, with each entry in the list
corresponding to the SPI at the same index in the interrupts property.
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Handling multiple PMUs using a single hotplug notifier requires a list
of PMUs to be maintained, with synchronisation in the probe, remove, and
notify paths. This is error-prone and makes the code much harder to
maintain.
Instead of using a single notifier, we can dynamically allocate a
notifier block per-PMU. The end result is the same, but the list of PMUs
is implicit in the hotplug notifier list rather than within a perf-local
data structure, which makes the code far easier to handle.
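As a rough userspace sketch of the pattern (container_of is open-coded
here, and struct fake_pmu is invented for the example), embedding the
notifier block in each PMU lets the callback recover its owning PMU
without any driver-local list:

  #include <stdio.h>
  #include <stddef.h>

  /* Open-coded equivalent of the kernel's container_of() */
  #define container_of(ptr, type, member) \
      ((type *)((char *)(ptr) - offsetof(type, member)))

  struct notifier_block {
      int (*notifier_call)(struct notifier_block *nb, unsigned long action);
  };

  struct fake_pmu {
      const char *name;
      struct notifier_block hotplug_nb;   /* one notifier block per PMU */
  };

  static int fake_pmu_notify(struct notifier_block *nb, unsigned long action)
  {
      /* Recover the owning PMU from the embedded notifier block. */
      struct fake_pmu *pmu = container_of(nb, struct fake_pmu, hotplug_nb);

      printf("%s: hotplug action %lu\n", pmu->name, action);
      return 0;
  }

  int main(void)
  {
      struct fake_pmu pmu = { .name = "pmu0" };

      pmu.hotplug_nb.notifier_call = fake_pmu_notify;
      /* The notifier chain would normally make this call for us. */
      pmu.hotplug_nb.notifier_call(&pmu.hotplug_nb, 1UL);
      return 0;
  }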
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Currently the percpu_pmu pointers used as percpu_irq dev_id values are
defined separately from the other per-cpu accounting data, which makes
dynamically allocating the data (as will be required for systems with
heterogeneous CPUs) difficult.
This patch moves the percpu_pmu pointers into pmu_hw_events (which is
itself allocated per cpu), which will allow for easier dynamic
allocation. Both percpu and regular irqs are requested using percpu_pmu
pointers as tokens, freeing us from having to know whether an irq is
percpu within the handler, and thus avoiding a radix tree lookup on the
handler path.
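A minimal userspace sketch of why a single kind of token helps (the struct
and handler names below are made up for the example): the handler receives
an opaque pointer and gets straight at its accounting data, with no
per-irq lookup and no need to care how the irq was requested.

  #include <stdio.h>

  struct hw_events {
      const char *owner;
      unsigned long irqs_handled;
  };

  /* Shaped like an irq handler: the dev_id token is all it needs. */
  static int pmu_irq_handler(int irq, void *dev_id)
  {
      struct hw_events *events = dev_id;

      events->irqs_handled++;
      printf("irq %d handled for %s (total %lu)\n",
             irq, events->owner, events->irqs_handled);
      return 1;   /* stands in for IRQ_HANDLED */
  }

  int main(void)
  {
      /* One of these per CPU in the real code; one is enough here. */
      struct hw_events cpu0_events = { .owner = "cpu0" };

      pmu_irq_handler(23, &cpu0_events);
      return 0;
  }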
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Now that the arm pmu code is limited to CPU PMUs the get_hw_events()
function is superfluous, as we'll always have a set of per-cpu
pmu_hw_events structures.
This patch removes the get_hw_events() function, replacing it with
a percpu hw_events pointer. Uses of get_hw_events are updated to use
this_cpu_ptr.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Commit 3fc2c83087 ("ARM: perf: remove event limit from pmu_hw_events") got
rid of the upper limit on the number of events an arm_pmu could handle,
but introduced additional complexity and places a burden on each PMU
driver to allocate accounting data somehow. So far this has not
generally been useful as the only users of arm_pmu are the CPU backend
and the CCI driver.
Now that the CCI driver plugs into the perf subsystem directly, we can
remove some of the complexities that get in the way of supporting
heterogeneous CPU PMUs.
This patch restores the original limits on pmu_hw_events fields such
that the pmu_hw_events data can be allocated as a contiguous block. This
will simplify dynamic pmu_hw_events allocation in later patches.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The current PMU probing logic consists of a single switch statement,
which means that the arm_pmu core in perf_event_cpu.c needs to know
about every CPU PMU variant supported by a driver using the arm_pmu
framework. This makes it rather difficult to decouple the drivers from
the (otherwise generic) probing code.
This patch refactors the switch statement into a table-driven lookup,
separating the generic logic from the PMU-specific knowledge, which now
lives in the table. Later
patches will split the table across the relevant PMU drivers, which can
pass their tables to the generic probing function.
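The shape of the change, as a hedged standalone C sketch (the table
entries, ID values, and helper names are invented for illustration and are
not the kernel's actual probe tables):

  #include <stdio.h>

  struct pmu_probe_entry {
      unsigned int id;        /* value to match */
      unsigned int mask;      /* which ID bits to compare */
      int (*init)(void);      /* driver-specific init routine */
  };

  static int init_pmu_a(void) { printf("probed PMU A\n"); return 0; }
  static int init_pmu_b(void) { printf("probed PMU B\n"); return 0; }

  /* The knowledge lives in the table; the lookup below stays generic. */
  static const struct pmu_probe_entry probe_table[] = {
      { 0x4100c090, 0xff00fff0, init_pmu_a },
      { 0x4100c0f0, 0xff00fff0, init_pmu_b },
      { 0, 0, NULL },         /* sentinel */
  };

  static int probe_pmu(unsigned int id)
  {
      const struct pmu_probe_entry *e;

      for (e = probe_table; e->init; e++)
          if ((id & e->mask) == (e->id & e->mask))
              return e->init();
      return -1;
  }

  int main(void)
  {
      return probe_pmu(0x4100c090);   /* selects init_pmu_a */
  }

Splitting the table out per driver then only requires each driver to pass
its own table to the generic lookup.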
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
We currently map from userspace-ABI standard event numbers to
hardware-specific IDs by use of two arrays, *_perf_map and
*_perf_cache_map. While we use designated initializers to initialize the
events we care about, zero is typically a valid hardware event number,
and thus we have to explicitly initialize unsupported event mappings to a
nonzero value ({HW,CACHE}_OP_UNSUPPORTED).
In the case of the *_cache_map, this requires initialising almost every
entry in a 3-dimensional array to CACHE_OP_UNSUPPORTED, requiring over a
hundred lines to add eleven supported events in the case of Cortex-A9.
So as to take up less space and make the tables easier to deal with,
this patch adds two new macros to initialize every entry in these tables
to the *_UNSUPPORTED values. Supported events can be overridden
individually through the use of designated initializers.
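The macros rely on GCC's range-designator extension, which the kernel
already uses elsewhere. A standalone sketch with simplified stand-in names
for the kernel's *_ALL_UNSUPPORTED macros:

  #include <stdio.h>

  #define HW_OP_UNSUPPORTED 0xffff

  enum fake_hw_event {
      FAKE_HW_CPU_CYCLES,
      FAKE_HW_INSTRUCTIONS,
      FAKE_HW_CACHE_MISSES,
      FAKE_HW_MAX,
  };

  /* Default every entry to "unsupported" with a GNU C range designator... */
  #define FAKE_MAP_ALL_UNSUPPORTED \
      [0 ... FAKE_HW_MAX - 1] = HW_OP_UNSUPPORTED

  /* ...then override only the events this PMU actually supports. */
  static const unsigned int fake_perf_map[FAKE_HW_MAX] = {
      FAKE_MAP_ALL_UNSUPPORTED,
      [FAKE_HW_CPU_CYCLES]   = 0x11,
      [FAKE_HW_INSTRUCTIONS] = 0x08,
  };

  int main(void)
  {
      int i;

      for (i = 0; i < FAKE_HW_MAX; i++)
          printf("event %d -> %#x\n", i, fake_perf_map[i]);
      return 0;
  }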
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
A few PMU-related macros are looking a little lonely in asm/perf_event.h
now that all other PMU-specific structs, function prototypes, and macros
live in pmu.h.
So as to make their placement consistent and to make it easier to build
atop the current PMU functionality, let's reunite the entire family in
pmu.h.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
On Krait processors we have a many-to-one relationship between
raw CPU events and the event programmed into the PMNx counter.
Two raw CPU events could map to the same value programmed in the
PMNx counter. To avoid this problem, we check for collisions
during the get_event_idx() callback by setting a bit in a bitmap
whenever a certain event is used in a PMNx counter (see the next
patch). Unfortunately, we don't have a hook to clear this bit in
the bitmap when the event is deleted so let's add an optional
clear_event_idx() callback for this purpose.
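A hedged sketch of how the two callbacks pair up (plain C, with a single
word standing in for the kernel's used_mask bitmap; the function bodies
are invented for the example):

  #include <stdio.h>

  struct hw_events {
      unsigned long used_mask;    /* bit n set => event code n already in use */
  };

  /* Claim a slot; fail if another event already maps to the same code. */
  static int get_event_idx(struct hw_events *hw, unsigned int code)
  {
      if (hw->used_mask & (1UL << code))
          return -1;              /* collision: same PMNx value in use */
      hw->used_mask |= 1UL << code;
      return (int)code;
  }

  /* The new hook: release the slot when the event is deleted. */
  static void clear_event_idx(struct hw_events *hw, unsigned int code)
  {
      hw->used_mask &= ~(1UL << code);
  }

  int main(void)
  {
      struct hw_events hw = { 0 };

      printf("first add:  %d\n", get_event_idx(&hw, 5));  /* ok */
      printf("second add: %d\n", get_event_idx(&hw, 5));  /* collision */
      clear_event_idx(&hw, 5);
      printf("after del:  %d\n", get_event_idx(&hw, 5));  /* ok again */
      return 0;
  }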
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Perf has three ways to name a PMU: either by passing an explicit char *,
reading arm_pmu->name or accessing arm_pmu->pmu.name.
Just use arm_pmu->name consistently in the ARM backend.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The arm_pmu functions have wildly varied parameters which can often be
derived from struct perf_event.
This patch changes the arm_pmu function prototypes so that struct
perf_event pointers are passed in preference to fields that can be
derived from the event.
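A small standalone sketch of the before/after shape (the toy struct
definitions below are illustrative stand-ins, not the kernel's real
perf_event layout):

  #include <stdio.h>

  struct hw_perf_event { int idx; };
  struct perf_event    { struct hw_perf_event hw; };

  /* Old style: callers passed fragments of the event around. */
  static void pmu_enable_old(struct hw_perf_event *hwc, int idx)
  {
      printf("enable counter %d (hwc=%p)\n", idx, (void *)hwc);
  }

  /* New style: pass the event and derive whatever is needed inside. */
  static void pmu_enable_new(struct perf_event *event)
  {
      struct hw_perf_event *hwc = &event->hw;

      printf("enable counter %d (hwc=%p)\n", hwc->idx, (void *)hwc);
  }

  int main(void)
  {
      struct perf_event event = { .hw = { .idx = 2 } };

      pmu_enable_old(&event.hw, event.hw.idx);
      pmu_enable_new(&event);
      return 0;
  }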
Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
This patch moves the CPU-specific IRQ registration and parsing code into
the CPU PMU backend. This is required because a PMU may have more than
one interrupt, which in turn can be either PPI (per-cpu) or SPI
(requiring strict affinity setting at the interrupt distributor).
Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
[will: cosmetic edits and reworked interrupt dispatching]
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The CPU PMU code is tightly coupled with generic ARM PMU handling code.
This makes it cumbersome when trying to add support for other ARM PMUs
(e.g. interconnect, L2 cache controller, bus) as the generic parts of
the code are not readily reusable.
This patch cleans up perf_event.c so that reusable code is exposed via
header files to other potential PMU drivers. The CPU code is
consistently named to identify it as such and also to prepare for moving
it into a separate file.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The arm_pmu_type enumeration was initially introduced to identify
different PMU types in the system, the usual one being that on the CPU
(ARM_PMU_DEVICE_CPU). With the removal of the PMU reservation code and
the introduction of devicetree bindings for the CPU PMU, the enumeration
is no longer required.
This patch removes the enumeration and updates the various CPU PMU
platform devices so that they no longer pass an .id field identifying
the PMU type.
Cc: Haojian Zhuang <haojian.zhuang@gmail.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Pawel Moll <pawel.moll@arm.com>
Acked-by: Jon Hunter <jon-hunter@ti.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jiandong Zheng <jdzheng@broadcom.com>
Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
[will: cosmetic edits and actual removal of the enum type]
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The PMU reservation mechanism was originally intended to allow OProfile
and perf-events to co-ordinate over access to the CPU PMU. Since then,
OProfile for ARM has moved to using perf as its backend, so the
reservation code is no longer used.
This patch removes the reservation code for the CPU PMU on ARM.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Add runtime PM support to the ARM PMU driver so that devices such as OMAP
supporting dynamic PM can use the platform->runtime_* hooks to initialise
hardware at runtime. Without these runtime PM hooks in place, any
configuration of the PMU hardware would be lost when low power states are
entered, preventing the PMU from working.
This change also replaces the PMU platform functions enable_irq and disable_irq
added by Ming Lei with runtime_resume and runtime_suspend functions. Ming had
added the enable_irq and disable_irq functions as a method to configure the
cross trigger interface on OMAP4 for routing the PMU interrupts. By adding
runtime PM support, we can move the code called by enable_irq and disable_irq
into the runtime PM callbacks runtime_resume and runtime_suspend.
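As a rough sketch of the shape of the hooks (standalone C; the struct and
callback names only approximate the arm_pmu_platdata described above):

  #include <stdio.h>

  struct pmu_platdata {
      /* Called after power-up and before power-down respectively. */
      int (*runtime_resume)(void);
      int (*runtime_suspend)(void);
  };

  static int omap_pmu_resume(void)
  {
      printf("route CTI irqs, restore PMU configuration\n");
      return 0;
  }

  static int omap_pmu_suspend(void)
  {
      printf("save PMU configuration, unroute CTI irqs\n");
      return 0;
  }

  static void pmu_start(const struct pmu_platdata *pdata)
  {
      /* Runtime PM would invoke the resume hook for us on a real device. */
      if (pdata && pdata->runtime_resume)
          pdata->runtime_resume();
      printf("counters enabled\n");
  }

  static void pmu_stop(const struct pmu_platdata *pdata)
  {
      printf("counters disabled\n");
      if (pdata && pdata->runtime_suspend)
          pdata->runtime_suspend();
  }

  int main(void)
  {
      struct pmu_platdata pdata = {
          .runtime_resume  = omap_pmu_resume,
          .runtime_suspend = omap_pmu_suspend,
      };

      pmu_start(&pdata);
      pmu_stop(&pdata);
      return 0;
  }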
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Benoit Cousson <b-cousson@ti.com>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Kevin Hilman <khilman@ti.com>
Signed-off-by: Jon Hunter <jon-hunter@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
In order to provide PMU name strings compatible with the OProfile
user ABI, an enumeration of all PMUs is currently used by perf to
identify each PMU uniquely. Unfortunately, this does not scale well
in the presence of multiple PMUs and creates a single, global namespace
across all PMUs in the system.
This patch removes the enumeration and instead uses the name string
for the PMU to map onto the OProfile variant. perf_pmu_name is
implemented for CPU PMUs, which is all that OProfile cares about anyway.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
On ARM, the PMU does not stop counting after an overflow and therefore
IRQ latency affects the new counter value read by the kernel. This is
significant for non-sampling runs where it is possible for the new value
to overtake the previous one, causing the delta to be out by up to
max_period events.
Commit a737823d ("ARM: 6835/1: perf: ensure overflows aren't missed due
to IRQ latency") attempted to fix this problem by allowing interrupt
handlers to pass an overflow flag to the event update function, causing
the overflow calculation to assume that the counter passed through zero
when going from prev to new. Unfortunately, this doesn't work when
overflow occurs on the perf_task_tick path because we have the flag
cleared and end up computing a large negative delta.
This patch removes the overflow flag from armpmu_event_update and
instead limits the sample_period to half of the max_period for
non-sampling profiling runs.
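The arithmetic behind the fix, as a small standalone sketch (a 16-bit
counter stands in for the real one and the helper name is invented; the
masked subtraction mirrors the delta computation described above):

  #include <stdio.h>

  #define COUNTER_MASK 0xffffU    /* pretend the counter is 16 bits wide */

  /* Masked subtraction: correct as long as the counter advanced < 2^16. */
  static unsigned int counter_delta(unsigned int prev, unsigned int cur)
  {
      return (cur - prev) & COUNTER_MASK;
  }

  int main(void)
  {
      /* Counter wraps from 0xfff0 through zero to 0x0010: 0x20 events. */
      printf("delta = %#x\n", counter_delta(0xfff0, 0x0010));

      /*
       * With the period capped at half the counter range, the overflow
       * interrupt always fires with at least 0x8000 counts of headroom,
       * so IRQ latency cannot advance the counter far enough to make
       * the masked delta ambiguous.
       */
      return 0;
  }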
Cc: <stable@vger.kernel.org>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch introduces .enable_irq and .disable_irq into
struct arm_pmu_platdata, so that platform-specific irq enablement can be
handled after request_irq, and platform-specific irq disablement can be
handled before free_irq.
This is needed to support the PMU irq routed from the CTI on OMAP4.
Acked-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
pmu_init no longer exists, so don't declare it in asm/pmu.h.
Reported-by: Pawel Moll <Pawel.Moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Currently, struct arm_pmu and related functions are only visible to
{,arch/arm/}/kernel/perf_event.c. This prevents new drivers from using
the framework.
This patch moves declarations to asm/pmu.h, allowing new PMU drivers
to use the framework.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Once upon a time, OProfile and Perf fought hard over who could play with
the PMU. To stop all hell from breaking loose, pmu.c offered an internal
reserve/release API and took care of parsing PMU platform data passed in
from board support code.
Now that Perf has ingested OProfile, let's move the platform device
handling into the Perf driver and out of the PMU locking code.
Unfortunately, the lock has to remain to prevent Perf being bitten by
out-of-tree modules such as LTTng, which still claim a right to the PMU
when Perf isn't looking.
Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Commit f12482c9 ("ARM: 6974/1: pmu: refactor reservation") changed
{release,reserve}_pmu to take an enum arm_pmu_type as a parameter, but
inconsistently named the parameter `type' or `device'. It would be nice
if these were consistent.
This patch makes use of enum arm_pmu_type consistent, always using
`type'. Related printks are updated, explicitly mentioning `type' also.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Commit f12482c9 ("ARM: 6974/1: pmu: refactor reservation") changed the
prototype of release_pmu, but missed the stub for when
CONFIG_CPU_HAS_PMU is not selected by the platform.
This patch changes the prototype of the stub, preventing possible build
failures when CONFIG_CPU_HAS_PMU is not selected.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Currently, PMU platform_device reservation relies on some minor abuse
of the platform_device::id field for determining the type of PMU. This
is problematic for device tree based probing, where the ID cannot be
controlled.
This patch removes reliance on the id field, and depends on each PMU's
platform driver to figure out which type it is. As all PMUs handled by
the current platform_driver name "arm-pmu" are CPU PMUs, this
convention is hardcoded. New PMU types can be supported through the use
of {of,platform}_device_id tables.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Allow a platform-specific IRQ handler to be specified via platform data.
This will be used to implement the single-irq workaround for the DB8500.
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Acked-by: Lee Jones <lee.jones@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
The current PMU infrastructure for ARM requires that the IRQs for the PMU
device are fixed at compile time and are selected based on the ARCH_ or
MACH_ flags. This has the disadvantage of tying the kernel down to a
particular board as far as profiling is concerned.
This patch replaces the compile-time IRQ registration with a runtime
mechanism which allows the IRQs to be registered with the framework as
a platform_device.
A further advantage of this change is that there is scope for registering
different types of performance counters in the future by changing the id
of the platform_device and attaching different resources to it.
Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch adds an enum describing the potential PMU device types in
preparation for PMU device registration via platform devices.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
To add support for perf events and to allow the hardware counters to be
shared with oprofile, we need a way to reserve access to the pmu
(performance monitor unit). Platforms with PMU interrupts should
register the interrupts in arch/arm/kernel/pmu.c.
Signed-off-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|