author | Linus Torvalds <torvalds@linux-foundation.org> | 2022-12-12 09:50:05 -0800
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2022-12-12 09:50:05 -0800
commit | 06cff4a58e7dfa018c5f8a6ebdc3ff12745e0bae (patch) |
tree | 9481c1d3c3ebcdeddfae8b786f24207834d3c433 |
parent | 164f59000c19fa1ee5d09327a8055ec9f9b9905a (diff) |
parent | 5f4c374760b031f06c69c2fdad1b0e981a1ad42f (diff) |
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"The highlights this time are support for dynamically enabling and
disabling Clang's Shadow Call Stack at boot and a long-awaited
optimisation to the way in which we handle the SVE register state on
system call entry to avoid taking unnecessary traps from userspace.
Summary:
ACPI:
- Enable FPDT support for boot-time profiling
- Fix CPU PMU probing to work better with PREEMPT_RT
- Update SMMUv3 MSI DeviceID parsing to latest IORT spec
- APMT support for probing Arm CoreSight PMU devices
CPU features:
- Advertise new SVE instructions (v2.1)
- Advertise range prefetch instruction
- Advertise CSSC ("Common Short Sequence Compression") scalar
instructions, adding things like min, max, abs, popcount
- Enable DIT (Data Independent Timing) when running in the kernel
- More conversion of system register fields over to the generated
header
CPU misfeatures:
- Workaround for Cortex-A715 erratum #2645198
Dynamic SCS:
- Support for dynamic shadow call stacks to allow switching at
runtime between Clang's SCS implementation and the CPU's pointer
authentication feature when it is supported (complete with scary
DWARF parser!)
Tracing and debug:
- Remove static ftrace in favour of, err, dynamic ftrace!
- Separate 'struct ftrace_regs' from 'struct pt_regs' in core ftrace
and existing arch code
- Introduce and implement FTRACE_WITH_ARGS on arm64 to replace the
old FTRACE_WITH_REGS
- Extend 'crashkernel=' parameter with default value and fallback to
placement above 4G physical if initial (low) allocation fails
SVE:
- Optimisation to avoid disabling SVE unconditionally on syscall
entry and just zeroing the non-shared state on return instead
Exceptions:
- Rework of undefined instruction handling to avoid serialisation on
global lock (this includes emulation of user accesses to the ID
registers)
Perf and PMU:
- Support for TLP filters in Hisilicon's PCIe PMU device
- Support for the DDR PMU present in Amlogic Meson G12 SoCs
- Support for the terribly-named "CoreSight PMU" architecture from
Arm (and Nvidia's implementation of said architecture)
Misc:
- Tighten up our boot protocol for systems with memory above 52 bits
physical
- Const-ify static keys to satisfy jump label asm constraints
- Trivial FFA driver cleanups in preparation for v1.1 support
- Export the kernel_neon_* APIs as GPL symbols
- Harden our instruction generation routines against instrumentation
- A bunch of robustness improvements to our arch-specific selftests
- Minor cleanups and fixes all over (kbuild, kprobes, kfence, PMU, ...)"
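The CSSC, RPRFM and SVE 2.1 support advertised above reaches userspace as new hwcap bits. A minimal detection sketch; the fallback bit values mirror the arm64 uapi hwcap header as of this series and are illustrative, not authoritative::

  /* Probe the newly advertised features via AT_HWCAP2.  The #defines
   * below are fallbacks for older libc headers; treat the bit values
   * as illustrative (they follow arch/arm64/include/uapi/asm/hwcap.h
   * as of this series).
   */
  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP2_CSSC
  #define HWCAP2_CSSC   (1UL << 34)
  #endif
  #ifndef HWCAP2_RPRFM
  #define HWCAP2_RPRFM  (1UL << 35)
  #endif
  #ifndef HWCAP2_SVE2P1
  #define HWCAP2_SVE2P1 (1UL << 36)
  #endif

  int main(void)
  {
          unsigned long hwcap2 = getauxval(AT_HWCAP2);

          printf("CSSC:   %s\n", (hwcap2 & HWCAP2_CSSC) ? "yes" : "no");
          printf("RPRFM:  %s\n", (hwcap2 & HWCAP2_RPRFM) ? "yes" : "no");
          printf("SVE2.1: %s\n", (hwcap2 & HWCAP2_SVE2P1) ? "yes" : "no");
          return 0;
  }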
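The undefined-instruction rework mentioned above also covers the kernel's existing emulation of EL0 reads of the ID registers, so the registers behind these hwcaps can be inspected directly from userspace; the kernel traps the access and returns a sanitised view. A small sketch, assuming an aarch64 build (S3_0_C0_C6_2 is the generic encoding of ID_AA64ISAR2_EL1, which carries the CSSC and RPRFM fields)::

  /* Sketch: EL0 MRS reads of the ID registers are trapped and emulated
   * by the kernel, which returns a sanitised view of each field.
   * S3_0_C0_C6_2 == ID_AA64ISAR2_EL1, spelled with the encoding-based
   * name so any assembler accepts it.
   */
  #include <stdio.h>

  int main(void)
  {
          unsigned long isar2;

          asm volatile("mrs %0, S3_0_C0_C6_2" : "=r" (isar2));
          printf("ID_AA64ISAR2_EL1 (sanitised): 0x%016lx\n", isar2);
          return 0;
  }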
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (151 commits)
arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK
arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler
arm64: Prohibit instrumentation on arch_stack_walk()
arm64:uprobe fix the uprobe SWBP_INSN in big-endian
arm64: alternatives: add __init/__initconst to some functions/variables
arm_pmu: Drop redundant armpmu->map_event() in armpmu_event_init()
kselftest/arm64: Allow epoll_wait() to return more than one result
kselftest/arm64: Don't drain output while spawning children
kselftest/arm64: Hold fp-stress children until they're all spawned
arm64/sysreg: Remove duplicate definitions from asm/sysreg.h
arm64/sysreg: Convert ID_DFR1_EL1 to automatic generation
arm64/sysreg: Convert ID_DFR0_EL1 to automatic generation
arm64/sysreg: Convert ID_AFR0_EL1 to automatic generation
arm64/sysreg: Convert ID_MMFR5_EL1 to automatic generation
arm64/sysreg: Convert MVFR2_EL1 to automatic generation
arm64/sysreg: Convert MVFR1_EL1 to automatic generation
arm64/sysreg: Convert MVFR0_EL1 to automatic generation
arm64/sysreg: Convert ID_PFR2_EL1 to automatic generation
arm64/sysreg: Convert ID_PFR1_EL1 to automatic generation
arm64/sysreg: Convert ID_PFR0_EL1 to automatic generation
...
138 files changed, 6535 insertions, 1583 deletions
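The kernel-parameters.txt hunk below extends 'crashkernel=' handling to arm64. As a hedged illustration of the documented syntax (the sizes here are arbitrary examples, not recommendations)::

  # Single-range form: try below 4G first, fall back above 4G if the
  # low allocation fails; arm64 then reserves a default 128MiB low
  # region automatically.
  crashkernel=512M

  # Explicit high/low split instead of the platform default low size:
  crashkernel=1G,high crashkernel=256M,low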
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 42af9ca0127e..eb75778ad7e2 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -831,7 +831,7 @@
 			memory region [offset, offset + size] for that kernel
 			image. If '@offset' is omitted, then a suitable offset
 			is selected automatically.
-			[KNL, X86-64] Select a region under 4G first, and
+			[KNL, X86-64, ARM64] Select a region under 4G first, and
 			fall back to reserve region above 4G when '@offset'
 			hasn't been specified.
 			See Documentation/admin-guide/kdump/kdump.rst for further details.
@@ -851,26 +851,23 @@
 			available.
 			It will be ignored if crashkernel=X is specified.
 
 	crashkernel=size[KMG],low
-			[KNL, X86-64] range under 4G. When crashkernel=X,high
+			[KNL, X86-64, ARM64] range under 4G. When crashkernel=X,high
 			is passed, kernel could allocate physical memory region
 			above 4G, that cause second kernel crash on system
 			that require some amount of low memory, e.g. swiotlb
 			requires at least 64M+32K low memory, also enough extra
 			low memory is needed to make sure DMA buffers for 32-bit
 			devices won't run out. Kernel would try to allocate
-			at least 256M below 4G automatically.
+			default size of memory below 4G automatically. The default
+			size is platform dependent.
+			  --> x86: max(swiotlb_size_or_default() + 8MiB, 256MiB)
+			  --> arm64: 128MiB
 			This one lets the user specify own low range under 4G
 			for second kernel instead.
 			0: to disable low allocation.
 			It will be ignored when crashkernel=X,high is not used
 			or memory reserved is below 4G.
-			[KNL, ARM64] range in low memory.
-			This one lets the user specify a low range in the
-			DMA zone for the crash dump kernel.
-			It will be ignored when crashkernel=X,high is not used
-			or memory reserved is located in the DMA zones.
-
 	cryptomgr.notests
 			[KNL] Disable crypto self-tests
diff --git a/Documentation/admin-guide/perf/hisi-pcie-pmu.rst b/Documentation/admin-guide/perf/hisi-pcie-pmu.rst
index 294ebbdb22af..7e863662e2d4 100644
--- a/Documentation/admin-guide/perf/hisi-pcie-pmu.rst
+++ b/Documentation/admin-guide/perf/hisi-pcie-pmu.rst
@@ -15,10 +15,10 @@ HiSilicon PCIe PMU driver
 The PCIe PMU driver registers a perf PMU with the name of its sicl-id and PCIe
 Core id.::
 
-  /sys/bus/event_source/hisi_pcie<sicl>_<core>
+  /sys/bus/event_source/hisi_pcie<sicl>_core<core>
 
 PMU driver provides description of available events and filter options in sysfs,
-see /sys/bus/event_source/devices/hisi_pcie<sicl>_<core>.
+see /sys/bus/event_source/devices/hisi_pcie<sicl>_core<core>.
 
 The "format" directory describes all formats of the config (events) and config1
 (filter options) fields of the perf_event_attr structure. The "events" directory
@@ -33,13 +33,13 @@ monitored by PMU.
 Example usage of perf::
 
   $# perf list
-  hisi_pcie0_0/rx_mwr_latency/ [kernel PMU event]
-  hisi_pcie0_0/rx_mwr_cnt/ [kernel PMU event]
+  hisi_pcie0_core0/rx_mwr_latency/ [kernel PMU event]
+  hisi_pcie0_core0/rx_mwr_cnt/ [kernel PMU event]
   ------------------------------------------
-  $# perf stat -e hisi_pcie0_0/rx_mwr_latency/
-  $# perf stat -e hisi_pcie0_0/rx_mwr_cnt/
-  $# perf stat -g -e hisi_pcie0_0/rx_mwr_latency/ -e hisi_pcie0_0/rx_mwr_cnt/
+  $# perf stat -e hisi_pcie0_core0/rx_mwr_latency/
+  $# perf stat -e hisi_pcie0_core0/rx_mwr_cnt/
+  $# perf stat -g -e hisi_pcie0_core0/rx_mwr_latency/ -e hisi_pcie0_core0/rx_mwr_cnt/
 
 The current driver does not support sampling. So "perf record" is unsupported.
 Also attach to a task is unsupported for PCIe PMU.
@@ -48,59 +48,83 @@ Filter options
 --------------
 
 1. Target filter
-PMU could only monitor the performance of traffic downstream target Root Ports
-or downstream target Endpoint. PCIe PMU driver support "port" and "bdf"
-interfaces for users, and these two interfaces aren't supported at the same
-time.
--port
-"port" filter can be used in all PCIe PMU events, target Root Port can be
-selected by configuring the 16-bits-bitmap "port". Multi ports can be selected
-for AP-layer-events, and only one port can be selected for TL/DL-layer-events.
+
+  PMU could only monitor the performance of traffic downstream target Root
+  Ports or downstream target Endpoint. PCIe PMU driver support "port" and
+  "bdf" interfaces for users, and these two interfaces aren't supported at the
+  same time.
-For example, if target Root Port is 0000:00:00.0 (x8 lanes), bit0 of bitmap
-should be set, port=0x1; if target Root Port is 0000:00:04.0 (x4 lanes),
-bit8 is set, port=0x100; if these two Root Ports are both monitored, port=0x101.
+
+  - port
-Example usage of perf::
+
+    "port" filter can be used in all PCIe PMU events, target Root Port can be
+    selected by configuring the 16-bits-bitmap "port". Multi ports can be
+    selected for AP-layer-events, and only one port can be selected for
+    TL/DL-layer-events.
- $# perf stat -e hisi_pcie0_0/rx_mwr_latency,port=0x1/ sleep 5
+
+    For example, if target Root Port is 0000:00:00.0 (x8 lanes), bit0 of
+    bitmap should be set, port=0x1; if target Root Port is 0000:00:04.0 (x4
+    lanes), bit8 is set, port=0x100; if these two Root Ports are both
+    monitored, port=0x101.
--bdf
+
+    Example usage of perf::
-"bdf" filter can only be used in bandwidth events, target Endpoint is selected
-by configuring BDF to "bdf". Counter only counts the bandwidth of message
-requested by target Endpoint.
+
+      $# perf stat -e hisi_pcie0_core0/rx_mwr_latency,port=0x1/ sleep 5
-For example, "bdf=0x3900" means BDF of target Endpoint is 0000:39:00.0.
+
+  - bdf
-Example usage of perf::
+
+    "bdf" filter can only be used in bandwidth events, target Endpoint is
+    selected by configuring BDF to "bdf". Counter only counts the bandwidth of
+    message requested by target Endpoint.
+
+    For example, "bdf=0x3900" means BDF of target Endpoint is 0000:39:00.0.
+
+    Example usage of perf::
-  $# perf stat -e hisi_pcie0_0/rx_mrd_flux,bdf=0x3900/ sleep 5
+
+      $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,bdf=0x3900/ sleep 5
 
 2. Trigger filter
-Event statistics start when the first time TLP length is greater/smaller
-than trigger condition. You can set the trigger condition by writing "trig_len",
-and set the trigger mode by writing "trig_mode". This filter can only be used
-in bandwidth events.
-For example, "trig_len=4" means trigger condition is 2^4 DW, "trig_mode=0"
-means statistics start when TLP length > trigger condition, "trig_mode=1"
-means start when TLP length < condition.
+
+  Event statistics start when the first time TLP length is greater/smaller
+  than trigger condition. You can set the trigger condition by writing
+  "trig_len", and set the trigger mode by writing "trig_mode". This filter can
+  only be used in bandwidth events.
-Example usage of perf::
+
+  For example, "trig_len=4" means trigger condition is 2^4 DW, "trig_mode=0"
+  means statistics start when TLP length > trigger condition, "trig_mode=1"
+  means start when TLP length < condition.
+
+  Example usage of perf::
 
-  $# perf stat -e hisi_pcie0_0/rx_mrd_flux,trig_len=0x4,trig_mode=1/ sleep 5
+    $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,trig_len=0x4,trig_mode=1/ sleep 5
 
 3. Threshold filter
-Counter counts when TLP length within the specified range. You can set the
-threshold by writing "thr_len", and set the threshold mode by writing
-"thr_mode". This filter can only be used in bandwidth events.
-For example, "thr_len=4" means threshold is 2^4 DW, "thr_mode=0" means
-counter counts when TLP length >= threshold, and "thr_mode=1" means counts
-when TLP length < threshold.
+
+  Counter counts when TLP length within the specified range. You can set the
+  threshold by writing "thr_len", and set the threshold mode by writing
+  "thr_mode". This filter can only be used in bandwidth events.
-Example usage of perf::
+
+  For example, "thr_len=4" means threshold is 2^4 DW, "thr_mode=0" means
+  counter counts when TLP length >= threshold, and "thr_mode=1" means counts
+  when TLP length < threshold.
+
+  Example usage of perf::
+
+    $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,thr_len=0x4,thr_mode=1/ sleep 5
+
+4. TLP Length filter
+
+  When counting bandwidth, the data can be composed of certain parts of TLP
+  packets. You can specify it through "len_mode":
+
+  - 2'b00: Reserved (Do not use this since the behaviour is undefined)
+  - 2'b01: Bandwidth of TLP payloads
+  - 2'b10: Bandwidth of TLP headers
+  - 2'b11: Bandwidth of both TLP payloads and headers
+
+  For example, "len_mode=2" means only counting the bandwidth of TLP headers
+  and "len_mode=3" means the final bandwidth data is composed of both TLP
+  headers and payloads. Default value if not specified is 2'b11.
+
+  Example usage of perf::
-  $# perf stat -e hisi_pcie0_0/rx_mrd_flux,thr_len=0x4,thr_mode=1/ sleep 5
+
+    $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,len_mode=0x1/ sleep 5
diff --git a/Documentation/admin-guide/perf/index.rst b/Documentation/admin-guide/perf/index.rst
index 793e1970bc05..9de64a40adab 100644
--- a/Documentation/admin-guide/perf/index.rst
+++ b/Documentation/admin-guide/perf/index.rst
@@ -19,3 +19,5 @@ Performance monitor support
    arm_dsu_pmu
    thunderx2-pmu
    alibaba_pmu
+   nvidia-pmu
+   meson-ddr-pmu
diff --git a/Documentation/admin-guide/perf/meson-ddr-pmu.rst b/Documentation/admin-guide/perf/meson-ddr-pmu.rst
new file mode 100644
index 000000000000..8e71be1d6346
--- /dev/null
+++ b/Documentation/admin-guide/perf/meson-ddr-pmu.rst
@@ -0,0 +1,70 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================================================
+Amlogic SoC DDR Bandwidth Performance Monitoring Unit (PMU)
+===========================================================
+
+The Amlogic Meson G12 SoC contains a bandwidth monitor inside DRAM controller.
+The monitor includes 4 channels. Each channel can count the request accessing
+DRAM. The channel can count up to 3 AXI port simultaneously. It can be helpful
+to show if the performance bottleneck is on DDR bandwidth.
+
+Currently, this driver supports the following 5 perf events:
+
++ meson_ddr_bw/total_rw_bytes/
++ meson_ddr_bw/chan_1_rw_bytes/
++ meson_ddr_bw/chan_2_rw_bytes/
++ meson_ddr_bw/chan_3_rw_bytes/
++ meson_ddr_bw/chan_4_rw_bytes/
+
+meson_ddr_bw/chan_{1,2,3,4}_rw_bytes/ events are channel-specific events.
+Each channel support filtering, which can let the channel to monitor
+individual IP module in SoC.
+
+Below are DDR access request event filter keywords:
+
++ arm - from CPU
++ vpu_read1 - from OSD + VPP read
++ gpu - from 3D GPU
++ pcie - from PCIe controller
++ hdcp - from HDCP controller
++ hevc_front - from HEVC codec front end
++ usb3_0 - from USB3.0 controller
++ hevc_back - from HEVC codec back end
++ h265enc - from HEVC encoder
++ vpu_read2 - from DI read
++ vpu_write1 - from VDIN write
++ vpu_write2 - from di write
++ vdec - from legacy codec video decoder
++ hcodec - from H264 encoder
++ ge2d - from ge2d
++ spicc1 - from SPI controller 1
++ usb0 - from USB2.0 controller 0
++ dma - from system DMA controller 1
++ arb0 - from arb0
++ sd_emmc_b - from SD eMMC b controller
++ usb1 - from USB2.0 controller 1
++ audio - from Audio module
++ sd_emmc_c - from SD eMMC c controller
++ spicc2 - from SPI controller 2
++ ethernet - from Ethernet controller
+
+
+Examples:
+
+  + Show the total DDR bandwidth per seconds:
+
+    .. code-block:: bash
+
+      perf stat -a -e meson_ddr_bw/total_rw_bytes/ -I 1000 sleep 10
+
+
+  + Show individual DDR bandwidth from CPU and GPU respectively, as well as
+    sum of them:
+
+    .. code-block:: bash
+
+      perf stat -a -e meson_ddr_bw/chan_1_rw_bytes,arm=1/ -I 1000 sleep 10
+      perf stat -a -e meson_ddr_bw/chan_2_rw_bytes,gpu=1/ -I 1000 sleep 10
+      perf stat -a -e meson_ddr_bw/chan_3_rw_bytes,arm=1,gpu=1/ -I 1000 sleep 10
+
diff --git a/Documentation/admin-guide/perf/nvidia-pmu.rst b/Documentation/admin-guide/perf/nvidia-pmu.rst
new file mode 100644
index 000000000000..2e0d47cfe7ea
--- /dev/null
+++ b/Documentation/admin-guide/perf/nvidia-pmu.rst
@@ -0,0 +1,299 @@
+=========================================================
+NVIDIA Tegra SoC Uncore Performance Monitoring Unit (PMU)
+=========================================================
+
+The NVIDIA Tegra SoC includes various system PMUs to measure key performance
+metrics like memory bandwidth, latency, and utilization:
+
+* Scalable Coherency Fabric (SCF)
+* NVLink-C2C0
+* NVLink-C2C1
+* CNVLink
+* PCIE
+
+PMU Driver
+----------
+
+The PMUs in this document are based on ARM CoreSight PMU Architecture as
+described in document: ARM IHI 0091. Since this is a standard architecture, the
+PMUs are managed by a common driver "arm-cs-arch-pmu". This driver describes
+the available events and configuration of each PMU in sysfs. Please see the
+sections below to get the sysfs path of each PMU. Like other uncore PMU drivers,
+the driver provides "cpumask" sysfs attribute to show the CPU id used to handle
+the PMU event. There is also "associated_cpus" sysfs attribute, which contains a
+list of CPUs associated with the PMU instance.
+
+.. _SCF_PMU_Section:
+
+SCF PMU
+-------
+
+The SCF PMU monitors system level cache events, CPU traffic, and
+strongly-ordered (SO) PCIE write traffic to local/remote memory. Please see
+:ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about the PMU
+traffic coverage.
+
+The events and configuration options of this PMU device are described in sysfs,
+see /sys/bus/event_sources/devices/nvidia_scf_pmu_<socket-id>.
+
+Example usage:
+
+* Count event id 0x0 in socket 0::
+
+   perf stat -a -e nvidia_scf_pmu_0/event=0x0/
+
+* Count event id 0x0 in socket 1::
+
+   perf stat -a -e nvidia_scf_pmu_1/event=0x0/
+
+NVLink-C2C0 PMU
+--------------------
+
+The NVLink-C2C0 PMU monitors incoming traffic from a GPU/CPU connected with
+NVLink-C2C (Chip-2-Chip) interconnect.
The type of traffic captured by this PMU +varies dependent on the chip configuration: + +* NVIDIA Grace Hopper Superchip: Hopper GPU is connected with Grace SoC. + + In this config, the PMU captures GPU ATS translated or EGM traffic from the GPU. + +* NVIDIA Grace CPU Superchip: two Grace CPU SoCs are connected. + + In this config, the PMU captures read and relaxed ordered (RO) writes from + PCIE device of the remote SoC. + +Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about +the PMU traffic coverage. + +The events and configuration options of this PMU device are described in sysfs, +see /sys/bus/event_sources/devices/nvidia_nvlink_c2c0_pmu_<socket-id>. + +Example usage: + +* Count event id 0x0 from the GPU/CPU connected with socket 0:: + + perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/ + +* Count event id 0x0 from the GPU/CPU connected with socket 1:: + + perf stat -a -e nvidia_nvlink_c2c0_pmu_1/event=0x0/ + +* Count event id 0x0 from the GPU/CPU connected with socket 2:: + + perf stat -a -e nvidia_nvlink_c2c0_pmu_2/event=0x0/ + +* Count event id 0x0 from the GPU/CPU connected with socket 3:: + + perf stat -a -e nvidia_nvlink_c2c0_pmu_3/event=0x0/ + +NVLink-C2C1 PMU +------------------- + +The NVLink-C2C1 PMU monitors incoming traffic from a GPU connected with +NVLink-C2C (Chip-2-Chip) interconnect. This PMU captures untranslated GPU +traffic, in contrast with NvLink-C2C0 PMU that captures ATS translated traffic. +Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about +the PMU traffic coverage. + +The events and configuration options of this PMU device are described in sysfs, +see /sys/bus/event_sources/devices/nvidia_nvlink_c2c1_pmu_<socket-id>. + +Example usage: + +* Count event id 0x0 from the GPU connected with socket 0:: + + perf stat -a -e nvidia_nvlink_c2c1_pmu_0/event=0x0/ + +* Count event id 0x0 from the GPU connected with socket 1:: + + perf stat -a -e nvidia_nvlink_c2c1_pmu_1/event=0x0/ + +* Count event id 0x0 from the GPU connected with socket 2:: + + perf stat -a -e nvidia_nvlink_c2c1_pmu_2/event=0x0/ + +* Count event id 0x0 from the GPU connected with socket 3:: + + perf stat -a -e nvidia_nvlink_c2c1_pmu_3/event=0x0/ + +CNVLink PMU +--------------- + +The CNVLink PMU monitors traffic from GPU and PCIE device on remote sockets +to local memory. For PCIE traffic, this PMU captures read and relaxed ordered +(RO) write traffic. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` +for more info about the PMU traffic coverage. + +The events and configuration options of this PMU device are described in sysfs, +see /sys/bus/event_sources/devices/nvidia_cnvlink_pmu_<socket-id>. + +Each SoC socket can be connected to one or more sockets via CNVLink. The user can +use "rem_socket" bitmap parameter to select the remote socket(s) to monitor. +Each bit represents the socket number, e.g. "rem_socket=0xE" corresponds to +socket 1 to 3. +/sys/bus/event_sources/devices/nvidia_cnvlink_pmu_<socket-id>/format/rem_socket +shows the valid bits that can be set in the "rem_socket" parameter. + +The PMU can not distinguish the remote traffic initiator, therefore it does not +provide filter to select the traffic source to monitor. It reports combined +traffic from remote GPU and PCIE devices. 
+ +Example usage: + +* Count event id 0x0 for the traffic from remote socket 1, 2, and 3 to socket 0:: + + perf stat -a -e nvidia_cnvlink_pmu_0/event=0x0,rem_socket=0xE/ + +* Count event id 0x0 for the traffic from remote socket 0, 2, and 3 to socket 1:: + + perf stat -a -e nvidia_cnvlink_pmu_1/event=0x0,rem_socket=0xD/ + +* Count event id 0x0 for the traffic from remote socket 0, 1, and 3 to socket 2:: + + perf stat -a -e nvidia_cnvlink_pmu_2/event=0x0,rem_socket=0xB/ + +* Count event id 0x0 for the traffic from remote socket 0, 1, and 2 to socket 3:: + + perf stat -a -e nvidia_cnvlink_pmu_3/event=0x0,rem_socket=0x7/ + + +PCIE PMU +------------ + +The PCIE PMU monitors all read/write traffic from PCIE root ports to +local/remote memory. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` +for more info about the PMU traffic coverage. + +The events and configuration options of this PMU device are described in sysfs, +see /sys/bus/event_sources/devices/nvidia_pcie_pmu_<socket-id>. + +Each SoC socket can support multiple root ports. The user can use +"root_port" bitmap parameter to select the port(s) to monitor, i.e. +"root_port=0xF" corresponds to root port 0 to 3. +/sys/bus/event_sources/devices/nvidia_pcie_pmu_<socket-id>/format/root_port +shows the valid bits that can be set in the "root_port" parameter. + +Example usage: + +* Count event id 0x0 from root port 0 and 1 of socket 0:: + + perf stat -a -e nvidia_pcie_pmu_0/event=0x0,root_port=0x3/ + +* Count event id 0x0 from root port 0 and 1 of socket 1:: + + perf stat -a -e nvidia_pcie_pmu_1/event=0x0,root_port=0x3/ + +.. _NVIDIA_Uncore_PMU_Traffic_Coverage_Section: + +Traffic Coverage +---------------- + +The PMU traffic coverage may vary dependent on the chip configuration: + +* **NVIDIA Grace Hopper Superchip**: Hopper GPU is connected with Grace SoC. + + Example configuration with two Grace SoCs:: + + ********************************* ********************************* + * SOCKET-A * * SOCKET-B * + * * * * + * :::::::: * * :::::::: * + * : PCIE : * * : PCIE : * + * :::::::: * * :::::::: * + * | * * | * + * | * * | * + * ::::::: ::::::::: * * ::::::::: ::::::: * + * : : : : * * : : : : * + * : GPU :<--NVLink-->: Grace :<---CNVLink--->: Grace :<--NVLink-->: GPU : * + * : : C2C : SoC : * * : SoC : C2C : : * + * ::::::: ::::::::: * * ::::::::: ::::::: * + * | | * * | | * + * | | * * | | * + * &&&&&&&& &&&&&&&& * * &&&&&&&& &&&&&&&& * + * & GMEM & & CMEM & * * & CMEM & & GMEM & * + * &&&&&&&& &&&&&&&& * * &&&&&&&& &&&&&&&& * + * * * * + ********************************* ********************************* + + GMEM = GPU Memory (e.g. HBM) + CMEM = CPU Memory (e.g. 
LPDDR5X) + + | + | Following table contains traffic coverage of Grace SoC PMU in socket-A: + + :: + + +--------------+-------+-----------+-----------+-----+----------+----------+ + | | Source | + + +-------+-----------+-----------+-----+----------+----------+ + | Destination | |GPU ATS |GPU Not-ATS| | Socket-B | Socket-B | + | |PCI R/W|Translated,|Translated | CPU | CPU/PCIE1| GPU/PCIE2| + | | |EGM | | | | | + +==============+=======+===========+===========+=====+==========+==========+ + | Local | PCIE |NVLink-C2C0|NVLink-C2C1| SCF | SCF PMU | CNVLink | + | SYSRAM/CMEM | PMU |PMU |PMU | PMU | | PMU | + +--------------+-------+-----------+-----------+-----+----------+----------+ + | Local GMEM | PCIE | N/A |NVLink-C2C1| SCF | SCF PMU | CNVLink | + | | PMU | |PMU | PMU | | PMU | + +--------------+-------+-----------+-----------+-----+----------+----------+ + | Remote | PCIE |NVLink-C2C0|NVLink-C2C1| SCF | | | + | SYSRAM/CMEM | PMU |PMU |PMU | PMU | N/A | N/A | + | over CNVLink | | | | | | | + +--------------+-------+-----------+-----------+-----+----------+----------+ + | Remote GMEM | PCIE |NVLink-C2C0|NVLink-C2C1| SCF | | | + | over CNVLink | PMU |PMU |PMU | PMU | N/A | N/A | + +--------------+-------+-----------+-----------+-----+----------+----------+ + + PCIE1 traffic represents strongly ordered (SO) writes. + PCIE2 traffic represents reads and relaxed ordered (RO) writes. + +* **NVIDIA Grace CPU Superchip**: two Grace CPU SoCs are connected. + + Example configuration with two Grace SoCs:: + + ******************* ******************* + * SOCKET-A * * SOCKET-B * + * * * * + * :::::::: * * :::::::: * + * : PCIE : * * : PCIE : * + * :::::::: * * :::::::: * + * | * * | * + * | * * | * + * ::::::::: * * ::::::::: * + * : : * * : : * + * : Grace :<--------NVLink------->: Grace : * + * : SoC : * C2C * : SoC : * + * ::::::::: * * ::::::::: * + * | * * | * + * | * * | * + * &&&&&&&& * * &&&&&&&& * + * & CMEM & * * & CMEM & * + * &&&&&&&& * * &&&&&&&& * + * * * * + ******************* ******************* + + GMEM = GPU Memory (e.g. HBM) + CMEM = CPU Memory (e.g. LPDDR5X) + + | + | Following table contains traffic coverage of Grace SoC PMU in socket-A: + + :: + + +-----------------+-----------+---------+----------+-------------+ + | | Source | + + +-----------+---------+----------+-------------+ + | Destination | | | Socket-B | Socket-B | + | | PCI R/W | CPU | CPU/PCIE1| PCIE2 | + | | | | | | + +=================+===========+=========+==========+=============+ + | Local | PCIE PMU | SCF PMU | SCF PMU | NVLink-C2C0 | + | SYSRAM/CMEM | | | | PMU | + +-----------------+-----------+---------+----------+-------------+ + | Remote | | | | | + | SYSRAM/CMEM | PCIE PMU | SCF PMU | N/A | N/A | + | over NVLink-C2C | | | | | + +-----------------+-----------+---------+----------+-------------+ + + PCIE1 traffic represents strongly ordered (SO) writes. + PCIE2 traffic represents reads and relaxed ordered (RO) writes. diff --git a/Documentation/arm64/acpi_object_usage.rst b/Documentation/arm64/acpi_object_usage.rst index 0609da73970b..484ef9676653 100644 --- a/Documentation/arm64/acpi_object_usage.rst +++ b/Documentation/arm64/acpi_object_usage.rst @@ -163,7 +163,7 @@ FPDT Section 5.2.23 (signature == "FPDT") **Firmware Performance Data Table** - Optional, not currently supported. + Optional, useful for boot performance profiling. 
GTDT Section 5.2.24 (signature == "GTDT") diff --git a/Documentation/arm64/booting.rst b/Documentation/arm64/booting.rst index 8c324ad638de..96fe10ec6c24 100644 --- a/Documentation/arm64/booting.rst +++ b/Documentation/arm64/booting.rst @@ -121,8 +121,9 @@ Header notes: to the base of DRAM, since memory below it is not accessible via the linear mapping 1 - 2MB aligned base may be anywhere in physical - memory + 2MB aligned base such that all image_size bytes + counted from the start of the image are within + the 48-bit addressable range of physical memory Bits 4-63 Reserved. ============= =============================================================== @@ -348,7 +349,7 @@ Before jumping into the kernel, the following conditions must be met: - HWFGWTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01. - For CPUs with the Scalable Matrix Extension FA64 feature (FEAT_SME_FA64) + For CPUs with the Scalable Matrix Extension FA64 feature (FEAT_SME_FA64): - If EL3 is present: diff --git a/Documentation/arm64/elf_hwcaps.rst b/Documentation/arm64/elf_hwcaps.rst index bb34287c8e01..6fed84f935df 100644 --- a/Documentation/arm64/elf_hwcaps.rst +++ b/Documentation/arm64/elf_hwcaps.rst @@ -275,6 +275,15 @@ HWCAP2_EBF16 HWCAP2_SVE_EBF16 Functionality implied by ID_AA64ZFR0_EL1.BF16 == 0b0010. +HWCAP2_CSSC + Functionality implied by ID_AA64ISAR2_EL1.CSSC == 0b0001. + +HWCAP2_RPRFM + Functionality implied by ID_AA64ISAR2_EL1.RPRFM == 0b0001. + +HWCAP2_SVE2P1 + Functionality implied by ID_AA64ZFR0_EL1.SVEver == 0b0010. + 4. Unused AT_HWCAP bits ----------------------- diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst index 808ade4cc008..ec5f889d7681 100644 --- a/Documentation/arm64/silicon-errata.rst +++ b/Documentation/arm64/silicon-errata.rst @@ -120,6 +120,8 @@ stable kernels. +----------------+-----------------+-----------------+-----------------------------+ | ARM | Cortex-A710 | #2224489 | ARM64_ERRATUM_2224489 | +----------------+-----------------+-----------------+-----------------------------+ +| ARM | Cortex-A715 | #2645198 | ARM64_ERRATUM_2645198 | ++----------------+-----------------+-----------------+-----------------------------+ | ARM | Cortex-X2 | #2119858 | ARM64_ERRATUM_2119858 | +----------------+-----------------+-----------------+-----------------------------+ | ARM | Cortex-X2 | #2224489 | ARM64_ERRATUM_2224489 | diff --git a/Documentation/arm64/sve.rst b/Documentation/arm64/sve.rst index f338ee2df46d..c7a356bf4e8f 100644 --- a/Documentation/arm64/sve.rst +++ b/Documentation/arm64/sve.rst @@ -52,6 +52,7 @@ model features for SVE is included in Appendix A. HWCAP2_SVEBITPERM HWCAP2_SVESHA3 HWCAP2_SVESM4 + HWCAP2_SVE2P1 This list may be extended over time as the SVE architecture evolves. diff --git a/Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml b/Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml new file mode 100644 index 000000000000..50f46a6898b1 --- /dev/null +++ b/Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml @@ -0,0 +1,54 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/perf/amlogic,g12-ddr-pmu.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Amlogic G12 DDR performance monitor + +maintainers: + - Jiucheng Xu <jiucheng.xu@amlogic.com> + +description: | + Amlogic G12 series SoC integrate DDR bandwidth monitor. + A timer is inside and can generate interrupt when timeout. 
+ The bandwidth is counted in the timer ISR. Different platform + has different subset of event format attribute. + +properties: + compatible: + enum: + - amlogic,g12a-ddr-pmu + - amlogic,g12b-ddr-pmu + - amlogic,sm1-ddr-pmu + + reg: + items: + - description: DMC bandwidth register space. + - description: DMC PLL register space. + + interrupts: + items: + - description: The IRQ of the inside timer timeout. + +required: + - compatible + - reg + - interrupts + +additionalProperties: false + +examples: + - | + #include <dt-bindings/interrupt-controller/arm-gic.h> + pmu { + #address-cells=<2>; + #size-cells=<2>; + + pmu@ff638000 { + compatible = "amlogic,g12a-ddr-pmu"; + reg = <0x0 0xff638000 0x0 0x100>, + <0x0 0xff638c00 0x0 0x100>; + interrupts = <GIC_SPI 52 IRQ_TYPE_EDGE_RISING>; + }; + }; diff --git a/MAINTAINERS b/MAINTAINERS index 93b5f89308a8..c2b24b49d59f 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1093,6 +1093,16 @@ S: Maintained F: Documentation/hid/amd-sfh* F: drivers/hid/amd-sfh-hid/ +AMLOGIC DDR PMU DRIVER +M: Jiucheng Xu <jiucheng.xu@amlogic.com> +L: linux-amlogic@lists.infradead.org +S: Supported +W: http://www.amlogic.com +F: Documentation/admin-guide/perf/meson-ddr-pmu.rst +F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml +F: drivers/perf/amlogic/ +F: include/soc/amlogic/ + AMPHION VPU CODEC V4L2 DRIVER M: Ming Qian <ming.qian@nxp.com> M: Shijie Qin <shijie.qin@nxp.com> @@ -9244,7 +9254,7 @@ F: drivers/misc/hisi_hikey_usb.c HISILICON PMU DRIVER M: Shaokun Zhang <zhangshaokun@hisilicon.com> -M: Qi Liu <liuqi115@huawei.com> +M: Jonathan Cameron <jonathan.cameron@huawei.com> S: Supported W: http://www.hisilicon.com F: Documentation/admin-guide/perf/hisi-pcie-pmu.rst @@ -966,8 +966,10 @@ LDFLAGS_vmlinux += --gc-sections endif ifdef CONFIG_SHADOW_CALL_STACK +ifndef CONFIG_DYNAMIC_SCS CC_FLAGS_SCS := -fsanitize=shadow-call-stack KBUILD_CFLAGS += $(CC_FLAGS_SCS) +endif export CC_FLAGS_SCS endif diff --git a/arch/Kconfig b/arch/Kconfig index 6b95244c3057..2d0e7099eb3f 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -654,6 +654,13 @@ config SHADOW_CALL_STACK reading and writing arbitrary memory may be able to locate them and hijack control flow by modifying the stacks. +config DYNAMIC_SCS + bool + help + Set by the arch code if it relies on code patching to insert the + shadow call stack push and pop instructions rather than on the + compiler. + config LTO bool help diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 099ee812f3f1..7fc3457a8891 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only config ARM64 def_bool y + select ACPI_APMT if ACPI select ACPI_CCA_REQUIRED if ACPI select ACPI_GENERIC_GSI if ACPI select ACPI_GTDT if ACPI @@ -118,6 +119,7 @@ config ARM64 select CPU_PM if (SUSPEND || CPU_IDLE) select CRC32 select DCACHE_WORD_ACCESS + select DYNAMIC_FTRACE if FUNCTION_TRACER select DMA_DIRECT_REMAP select EDAC_SUPPORT select FRAME_POINTER @@ -182,8 +184,10 @@ config ARM64 select HAVE_DEBUG_KMEMLEAK select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE + select HAVE_DYNAMIC_FTRACE_WITH_ARGS \ + if $(cc-option,-fpatchable-function-entry=2) select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY \ - if DYNAMIC_FTRACE_WITH_REGS + if DYNAMIC_FTRACE_WITH_ARGS select HAVE_EFFICIENT_UNALIGNED_ACCESS select HAVE_FAST_GUP select HAVE_FTRACE_MCOUNT_RECORD @@ -234,16 +238,16 @@ config ARM64 help ARM 64-bit (AArch64) Linux support. 
-config CLANG_SUPPORTS_DYNAMIC_FTRACE_WITH_REGS +config CLANG_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS def_bool CC_IS_CLANG # https://github.com/ClangBuiltLinux/linux/issues/1507 depends on AS_IS_GNU || (AS_IS_LLVM && (LD_IS_LLD || LD_VERSION >= 23600)) - select HAVE_DYNAMIC_FTRACE_WITH_REGS + select HAVE_DYNAMIC_FTRACE_WITH_ARGS -config GCC_SUPPORTS_DYNAMIC_FTRACE_WITH_REGS +config GCC_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS def_bool CC_IS_GCC depends on $(cc-option,-fpatchable-function-entry=2) - select HAVE_DYNAMIC_FTRACE_WITH_REGS + select HAVE_DYNAMIC_FTRACE_WITH_ARGS config 64BIT def_bool y @@ -371,6 +375,9 @@ config KASAN_SHADOW_OFFSET default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS default 0xffffffffffffffff +config UNWIND_TABLES + bool + source "arch/arm64/Kconfig.platforms" menu "Kernel Features" @@ -965,6 +972,22 @@ config ARM64_ERRATUM_2457168 If unsure, say Y. +config ARM64_ERRATUM_2645198 + bool "Cortex-A715: 2645198: Workaround possible [ESR|FAR]_ELx corruption" + default y + help + This option adds the workaround for ARM Cortex-A715 erratum 2645198. + + If a Cortex-A715 cpu sees a page mapping permissions change from executable + to non-executable, it may corrupt the ESR_ELx and FAR_ELx registers on the + next instruction abort caused by permission fault. + + Only user-space does executable to non-executable permission transition via + mprotect() system call. Workaround the problem by doing a break-before-make + TLB invalidation, for all changes to executable user space mappings. + + If unsure, say Y. + config CAVIUM_ERRATUM_22375 bool "Cavium erratum 22375, 24313" default y @@ -1715,7 +1738,6 @@ config ARM64_LSE_ATOMICS config ARM64_USE_LSE_ATOMICS bool "Atomic instructions" - depends on JUMP_LABEL default y help As part of the Large System Extensions, ARMv8.1 introduces new @@ -1817,7 +1839,7 @@ config ARM64_PTR_AUTH_KERNEL # which is only understood by binutils starting with version 2.33.1. depends on LD_IS_LLD || LD_VERSION >= 23301 || (CC_IS_GCC && GCC_VERSION < 90100) depends on !CC_IS_CLANG || AS_HAS_CFI_NEGATE_RA_STATE - depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS) + depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_ARGS) help If the compiler supports the -mbranch-protection or -msign-return-address flag (e.g. GCC 7 or later), then this option @@ -1827,7 +1849,7 @@ config ARM64_PTR_AUTH_KERNEL disabled with minimal loss of protection. This feature works with FUNCTION_GRAPH_TRACER option only if - DYNAMIC_FTRACE_WITH_REGS is enabled. + DYNAMIC_FTRACE_WITH_ARGS is enabled. config CC_HAS_BRANCH_PROT_PAC_RET # GCC 9 or later, clang 8 or later @@ -1925,7 +1947,7 @@ config ARM64_BTI_KERNEL depends on !CC_IS_GCC # https://github.com/llvm/llvm-project/commit/a88c722e687e6780dcd6a58718350dc76fcc4cc9 depends on !CC_IS_CLANG || CLANG_VERSION >= 120000 - depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS) + depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_ARGS) help Build the kernel with Branch Target Identification annotations and enable enforcement of this for kernel code. When this option @@ -2158,6 +2180,15 @@ config ARCH_NR_GPIO If unsure, leave the default value. 
+config UNWIND_PATCH_PAC_INTO_SCS + bool "Enable shadow call stack dynamically using code patching" + # needs Clang with https://reviews.llvm.org/D111780 incorporated + depends on CC_IS_CLANG && CLANG_VERSION >= 150000 + depends on ARM64_PTR_AUTH_KERNEL && CC_HAS_BRANCH_PROT_PAC_RET + depends on SHADOW_CALL_STACK + select UNWIND_TABLES + select DYNAMIC_SCS + endmenu # "Kernel Features" menu "Boot options" diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 5e56d26a2239..d62bd221828f 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -45,8 +45,13 @@ KBUILD_CFLAGS += $(call cc-option,-mabi=lp64) KBUILD_AFLAGS += $(call cc-option,-mabi=lp64) # Avoid generating .eh_frame* sections. +ifneq ($(CONFIG_UNWIND_TABLES),y) KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables KBUILD_AFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables +else +KBUILD_CFLAGS += -fasynchronous-unwind-tables +KBUILD_AFLAGS += -fasynchronous-unwind-tables +endif ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y) prepare: stack_protector_prepare @@ -72,10 +77,16 @@ branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address= # We enable additional protection for leaf functions as there is some # narrow potential for ROP protection benefits and no substantial # performance impact has been observed. +PACRET-y := pac-ret+leaf + +# Using a shadow call stack in leaf functions is too costly, so avoid PAC there +# as well when we may be patching PAC into SCS +PACRET-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) := pac-ret + ifeq ($(CONFIG_ARM64_BTI_KERNEL),y) -branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI) := -mbranch-protection=pac-ret+leaf+bti +branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI) := -mbranch-protection=$(PACRET-y)+bti else -branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf +branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=$(PACRET-y) endif # -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the # compiler to generate them and consequently to break the single image contract @@ -128,7 +139,7 @@ endif CHECKFLAGS += -D__aarch64__ -ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y) +ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_ARGS),y) KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY CC_FLAGS_FTRACE := -fpatchable-function-entry=2 endif diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h index 3622e9f4fb44..bdf1f6bcd010 100644 --- a/arch/arm64/include/asm/alternative-macros.h +++ b/arch/arm64/include/asm/alternative-macros.h @@ -224,7 +224,7 @@ alternative_endif #include <linux/types.h> static __always_inline bool -alternative_has_feature_likely(unsigned long feature) +alternative_has_feature_likely(const unsigned long feature) { compiletime_assert(feature < ARM64_NCAPS, "feature must be < ARM64_NCAPS"); @@ -242,7 +242,7 @@ l_no: } static __always_inline bool -alternative_has_feature_unlikely(unsigned long feature) +alternative_has_feature_unlikely(const unsigned long feature) { compiletime_assert(feature < ARM64_NCAPS, "feature must be < ARM64_NCAPS"); diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index e5957a53be39..376a980f2bad 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -34,11 +34,6 @@ wx\n .req w\n .endr - .macro save_and_disable_daif, flags - mrs \flags, daif - msr daifset, #0xf - .endm - .macro disable_daif msr daifset, #0xf .endm @@ 
-47,15 +42,6 @@ msr daifclr, #0xf .endm - .macro restore_daif, flags:req - msr daif, \flags - .endm - - /* IRQ/FIQ are the lowest priority flags, unconditionally unmask the rest. */ - .macro enable_da - msr daifclr, #(8 | 4) - .endm - /* * Save/restore interrupts. */ @@ -620,17 +606,6 @@ alternative_endif .endm /* - * Perform the reverse of offset_ttbr1. - * bic is used as it can cover the immediate value and, in future, won't need - * to be nop'ed out when dealing with 52-bit kernel VAs. - */ - .macro restore_ttbr1, ttbr -#ifdef CONFIG_ARM64_VA_BITS_52 - bic \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET -#endif - .endm - -/* * Arrange a physical address in a TTBR register, taking care of 52-bit * addresses. * @@ -660,12 +635,10 @@ alternative_endif .endm .macro pte_to_phys, phys, pte -#ifdef CONFIG_ARM64_PA_BITS_52 - ubfiz \phys, \pte, #(48 - 16 - 12), #16 - bfxil \phys, \pte, #16, #32 - lsl \phys, \phys, #16 -#else and \phys, \pte, #PTE_ADDR_MASK +#ifdef CONFIG_ARM64_PA_BITS_52 + orr \phys, \phys, \phys, lsl #PTE_ADDR_HIGH_SHIFT + and \phys, \phys, GENMASK_ULL(PHYS_MASK_SHIFT - 1, PAGE_SHIFT) #endif .endm diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index f73f11b55042..03d1c9d7af82 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -832,7 +832,8 @@ static inline bool system_supports_tlb_range(void) cpus_have_const_cap(ARM64_HAS_TLB_RANGE); } -extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt); +int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt); +bool try_emulate_mrs(struct pt_regs *regs, u32 isn); static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange) { diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h index 65e53ef5a396..4e8b66c74ea2 100644 --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -80,6 +80,7 @@ #define ARM_CPU_PART_CORTEX_X1 0xD44 #define ARM_CPU_PART_CORTEX_A510 0xD46 #define ARM_CPU_PART_CORTEX_A710 0xD47 +#define ARM_CPU_PART_CORTEX_A715 0xD4D #define ARM_CPU_PART_CORTEX_X2 0xD48 #define ARM_CPU_PART_NEOVERSE_N2 0xD49 #define ARM_CPU_PART_CORTEX_A78C 0xD4B @@ -142,6 +143,7 @@ #define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1) #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510) #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) +#define MIDR_CORTEX_A715 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A715) #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2) #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) #define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C) diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h index 19713d0f013b..92963f98afec 100644 --- a/arch/arm64/include/asm/exception.h +++ b/arch/arm64/include/asm/exception.h @@ -58,7 +58,8 @@ asmlinkage void call_on_irq_stack(struct pt_regs *regs, asmlinkage void asm_exit_to_user_mode(struct pt_regs *regs); void do_mem_abort(unsigned long far, unsigned long esr, struct pt_regs *regs); -void do_undefinstr(struct pt_regs *regs, unsigned long esr); +void do_el0_undef(struct pt_regs *regs, unsigned long esr); +void do_el1_undef(struct pt_regs *regs, unsigned long esr); void do_el0_bti(struct pt_regs *regs); void do_el1_bti(struct pt_regs *regs, unsigned long esr); void do_debug_exception(unsigned long addr_if_watchpoint, unsigned 
long esr, @@ -67,10 +68,10 @@ void do_fpsimd_acc(unsigned long esr, struct pt_regs *regs); void do_sve_acc(unsigned long esr, struct pt_regs *regs); void do_sme_acc(unsigned long esr, struct pt_regs *regs); void do_fpsimd_exc(unsigned long esr, struct pt_regs *regs); -void do_sysinstr(unsigned long esr, struct pt_regs *regs); +void do_el0_sys(unsigned long esr, struct pt_regs *regs); void do_sp_pc_abort(unsigned long addr, unsigned long esr, struct pt_regs *regs); void bad_el0_sync(struct pt_regs *regs, int reason, unsigned long esr); -void do_cp15instr(unsigned long esr, struct pt_regs *regs); +void do_el0_cp15(unsigned long esr, struct pt_regs *regs); int do_compat_alignment_fixup(unsigned long addr, struct pt_regs *regs); void do_el0_svc(struct pt_regs *regs); void do_el0_svc_compat(struct pt_regs *regs); diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 6f86b7ab6c28..e6fa1e2982c8 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -56,11 +56,20 @@ extern void fpsimd_signal_preserve_current_state(void); extern void fpsimd_preserve_current_state(void); extern void fpsimd_restore_current_state(void); extern void fpsimd_update_current_state(struct user_fpsimd_state const *state); +extern void fpsimd_kvm_prepare(void); + +struct cpu_fp_state { + struct user_fpsimd_state *st; + void *sve_state; + void *za_state; + u64 *svcr; + unsigned int sve_vl; + unsigned int sme_vl; + enum fp_type *fp_type; + enum fp_type to_save; +}; -extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state, - void *sve_state, unsigned int sve_vl, - void *za_state, unsigned int sme_vl, - u64 *svcr); +extern void fpsimd_bind_state_to_cpu(struct cpu_fp_state *fp_state); extern void fpsimd_flush_task_state(struct task_struct *target); extern void fpsimd_save_and_flush_cpu_state(void); diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index 329dbbd4d50b..5664729800ae 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -23,7 +23,7 @@ */ #define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR -#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS #define ARCH_SUPPORTS_FTRACE_OPS 1 #else #define MCOUNT_ADDR ((unsigned long)_mcount) @@ -33,8 +33,7 @@ #define MCOUNT_INSN_SIZE AARCH64_INSN_SIZE #define FTRACE_PLT_IDX 0 -#define FTRACE_REGS_PLT_IDX 1 -#define NR_FTRACE_PLTS 2 +#define NR_FTRACE_PLTS 1 /* * Currently, gcc tends to save the link register after the local variables @@ -69,7 +68,7 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr) * Adjust addr to point at the BL in the callsite. * See ftrace_init_nop() for the callsite sequence. */ - if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_ARGS)) return addr + AARCH64_INSN_SIZE; /* * addr is the address of the mcount call instruction. 
@@ -78,10 +77,71 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr) return addr; } -#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS struct dyn_ftrace; struct ftrace_ops; -struct ftrace_regs; + +#define arch_ftrace_get_regs(regs) NULL + +struct ftrace_regs { + /* x0 - x8 */ + unsigned long regs[9]; + unsigned long __unused; + + unsigned long fp; + unsigned long lr; + + unsigned long sp; + unsigned long pc; +}; + +static __always_inline unsigned long +ftrace_regs_get_instruction_pointer(const struct ftrace_regs *fregs) +{ + return fregs->pc; +} + +static __always_inline void +ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, + unsigned long pc) +{ + fregs->pc = pc; +} + +static __always_inline unsigned long +ftrace_regs_get_stack_pointer(const struct ftrace_regs *fregs) +{ + return fregs->sp; +} + +static __always_inline unsigned long +ftrace_regs_get_argument(struct ftrace_regs *fregs, unsigned int n) +{ + if (n < 8) + return fregs->regs[n]; + return 0; +} + +static __always_inline unsigned long +ftrace_regs_get_return_value(const struct ftrace_regs *fregs) +{ + return fregs->regs[0]; +} + +static __always_inline void +ftrace_regs_set_return_value(struct ftrace_regs *fregs, + unsigned long ret) +{ + fregs->regs[0] = ret; +} + +static __always_inline void +ftrace_override_function_with_return(struct ftrace_regs *fregs) +{ + fregs->pc = fregs->lr; +} + +int ftrace_regs_query_register_offset(const char *name); int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); #define ftrace_init_nop ftrace_init_nop diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h index d20f5da2d76f..6a4a1ab8eb23 100644 --- a/arch/arm64/include/asm/hugetlb.h +++ b/arch/arm64/include/asm/hugetlb.h @@ -49,6 +49,15 @@ extern pte_t huge_ptep_get(pte_t *ptep); void __init arm64_hugetlb_cma_reserve(void); +#define huge_ptep_modify_prot_start huge_ptep_modify_prot_start +extern pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep); + +#define huge_ptep_modify_prot_commit huge_ptep_modify_prot_commit +extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, + pte_t old_pte, pte_t new_pte); + #include <asm-generic/hugetlb.h> #endif /* __ASM_HUGETLB_H */ diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h index 298b386d3ebe..06dd12c514e6 100644 --- a/arch/arm64/include/asm/hwcap.h +++ b/arch/arm64/include/asm/hwcap.h @@ -120,6 +120,9 @@ #define KERNEL_HWCAP_WFXT __khwcap2_feature(WFXT) #define KERNEL_HWCAP_EBF16 __khwcap2_feature(EBF16) #define KERNEL_HWCAP_SVE_EBF16 __khwcap2_feature(SVE_EBF16) +#define KERNEL_HWCAP_CSSC __khwcap2_feature(CSSC) +#define KERNEL_HWCAP_RPRFM __khwcap2_feature(RPRFM) +#define KERNEL_HWCAP_SVE2P1 __khwcap2_feature(SVE2P1) /* * This yields a mask that user programs can use to figure out what diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h index 834bff720582..aaf1f52fbf3e 100644 --- a/arch/arm64/include/asm/insn.h +++ b/arch/arm64/include/asm/insn.h @@ -13,31 +13,6 @@ #include <asm/insn-def.h> #ifndef __ASSEMBLY__ -/* - * ARM Architecture Reference Manual for ARMv8 Profile-A, Issue A.a - * Section C3.1 "A64 instruction index by encoding": - * AArch64 main encoding table - * Bit position - * 28 27 26 25 Encoding Group - * 0 0 - - Unallocated - * 1 0 0 - Data processing, immediate - * 1 0 1 - Branch, exception generation and system instructions - * - 1 - 0 Loads 
and stores - * - 1 0 1 Data processing - register - * 0 1 1 1 Data processing - SIMD and floating point - * 1 1 1 1 Data processing - SIMD and floating point - * "-" means "don't care" - */ -enum aarch64_insn_encoding_class { - AARCH64_INSN_CLS_UNKNOWN, /* UNALLOCATED */ - AARCH64_INSN_CLS_SVE, /* SVE instructions */ - AARCH64_INSN_CLS_DP_IMM, /* Data processing - immediate */ - AARCH64_INSN_CLS_DP_REG, /* Data processing - register */ - AARCH64_INSN_CLS_DP_FPSIMD, /* Data processing - SIMD and FP */ - AARCH64_INSN_CLS_LDST, /* Loads and stores */ - AARCH64_INSN_CLS_BR_SYS, /* Branch, exception generation and - * system instructions */ -}; enum aarch64_insn_hint_cr_op { AARCH64_INSN_HINT_NOP = 0x0 << 5, @@ -326,6 +301,23 @@ static __always_inline u32 aarch64_insn_get_##abbr##_value(void) \ return (val); \ } +/* + * ARM Architecture Reference Manual for ARMv8 Profile-A, Issue A.a + * Section C3.1 "A64 instruction index by encoding": + * AArch64 main encoding table + * Bit position + * 28 27 26 25 Encoding Group + * 0 0 - - Unallocated + * 1 0 0 - Data processing, immediate + * 1 0 1 - Branch, exception generation and system instructions + * - 1 - 0 Loads and stores + * - 1 0 1 Data processing - register + * 0 1 1 1 Data processing - SIMD and floating point + * 1 1 1 1 Data processing - SIMD and floating point + * "-" means "don't care" + */ +__AARCH64_INSN_FUNCS(class_branch_sys, 0x1c000000, 0x14000000) + __AARCH64_INSN_FUNCS(adr, 0x9F000000, 0x10000000) __AARCH64_INSN_FUNCS(adrp, 0x9F000000, 0x90000000) __AARCH64_INSN_FUNCS(prfm, 0x3FC00000, 0x39800000) @@ -431,58 +423,122 @@ __AARCH64_INSN_FUNCS(pssbb, 0xFFFFFFFF, 0xD503349F) #undef __AARCH64_INSN_FUNCS -bool aarch64_insn_is_steppable_hint(u32 insn); -bool aarch64_insn_is_branch_imm(u32 insn); +static __always_inline bool aarch64_insn_is_steppable_hint(u32 insn) +{ + if (!aarch64_insn_is_hint(insn)) + return false; + + switch (insn & 0xFE0) { + case AARCH64_INSN_HINT_XPACLRI: + case AARCH64_INSN_HINT_PACIA_1716: + case AARCH64_INSN_HINT_PACIB_1716: + case AARCH64_INSN_HINT_PACIAZ: + case AARCH64_INSN_HINT_PACIASP: + case AARCH64_INSN_HINT_PACIBZ: + case AARCH64_INSN_HINT_PACIBSP: + case AARCH64_INSN_HINT_BTI: + case AARCH64_INSN_HINT_BTIC: + case AARCH64_INSN_HINT_BTIJ: + case AARCH64_INSN_HINT_BTIJC: + case AARCH64_INSN_HINT_NOP: + return true; + default: + return false; + } +} + +static __always_inline bool aarch64_insn_is_branch(u32 insn) +{ + /* b, bl, cb*, tb*, ret*, b.cond, br*, blr* */ + + return aarch64_insn_is_b(insn) || + aarch64_insn_is_bl(insn) || + aarch64_insn_is_cbz(insn) || + aarch64_insn_is_cbnz(insn) || + aarch64_insn_is_tbz(insn) || + aarch64_insn_is_tbnz(insn) || + aarch64_insn_is_ret(insn) || + aarch64_insn_is_ret_auth(insn) || + aarch64_insn_is_br(insn) || + aarch64_insn_is_br_auth(insn) || + aarch64_insn_is_blr(insn) || + aarch64_insn_is_blr_auth(insn) || + aarch64_insn_is_bcond(insn); +} -static inline bool aarch64_insn_is_adr_adrp(u32 insn) +static __always_inline bool aarch64_insn_is_branch_imm(u32 insn) { - return aarch64_insn_is_adr(insn) || aarch64_insn_is_adrp(insn); + return aarch64_insn_is_b(insn) || + aarch64_insn_is_bl(insn) || + aarch64_insn_is_tbz(insn) || + aarch64_insn_is_tbnz(insn) || + aarch64_insn_is_cbz(insn) || + aarch64_insn_is_cbnz(insn) || + aarch64_insn_is_bcond(insn); } -static inline bool aarch64_insn_is_dsb(u32 insn) +static __always_inline bool aarch64_insn_is_adr_adrp(u32 insn) { - return aarch64_insn_is_dsb_base(insn) || aarch64_insn_is_dsb_nxs(insn); + return aarch64_insn_is_adr(insn) 
|| + aarch64_insn_is_adrp(insn); } -static inline bool aarch64_insn_is_barrier(u32 insn) +static __always_inline bool aarch64_insn_is_dsb(u32 insn) { - return aarch64_insn_is_dmb(insn) || aarch64_insn_is_dsb(insn) || - aarch64_insn_is_isb(insn) || aarch64_insn_is_sb(insn) || - aarch64_insn_is_clrex(insn) || aarch64_insn_is_ssbb(insn) || + return aarch64_insn_is_dsb_base(insn) || + aarch64_insn_is_dsb_nxs(insn); +} + +static __always_inline bool aarch64_insn_is_barrier(u32 insn) +{ + return aarch64_insn_is_dmb(insn) || + aarch64_insn_is_dsb(insn) || + aarch64_insn_is_isb(insn) || + aarch64_insn_is_sb(insn) || + aarch64_insn_is_clrex(insn) || + aarch64_insn_is_ssbb(insn) || aarch64_insn_is_pssbb(insn); } -static inline bool aarch64_insn_is_store_single(u32 insn) +static __always_inline bool aarch64_insn_is_store_single(u32 insn) { return aarch64_insn_is_store_imm(insn) || aarch64_insn_is_store_pre(insn) || aarch64_insn_is_store_post(insn); } -static inline bool aarch64_insn_is_store_pair(u32 insn) +static __always_inline bool aarch64_insn_is_store_pair(u32 insn) { return aarch64_insn_is_stp(insn) || aarch64_insn_is_stp_pre(insn) || aarch64_insn_is_stp_post(insn); } -static inline bool aarch64_insn_is_load_single(u32 insn) +static __always_inline bool aarch64_insn_is_load_single(u32 insn) { return aarch64_insn_is_load_imm(insn) || aarch64_insn_is_load_pre(insn) || aarch64_insn_is_load_post(insn); } -static inline bool aarch64_insn_is_load_pair(u32 insn) +static __always_inline bool aarch64_insn_is_load_pair(u32 insn) { return aarch64_insn_is_ldp(insn) || aarch64_insn_is_ldp_pre(insn) || aarch64_insn_is_ldp_post(insn); } +static __always_inline bool aarch64_insn_uses_literal(u32 insn) +{ + /* ldr/ldrsw (literal), prfm */ + + return aarch64_insn_is_ldr_lit(insn) || + aarch64_insn_is_ldrsw_lit(insn) || + aarch64_insn_is_adr_adrp(insn) || + aarch64_insn_is_prfm_lit(insn); +} + enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn); -bool aarch64_insn_uses_literal(u32 insn); -bool aarch64_insn_is_branch(u32 insn); u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn); u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type, u32 insn, u64 imm); @@ -496,8 +552,18 @@ u32 aarch64_insn_gen_comp_branch_imm(unsigned long pc, unsigned long addr, enum aarch64_insn_branch_type type); u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr, enum aarch64_insn_condition cond); -u32 aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op); -u32 aarch64_insn_gen_nop(void); + +static __always_inline u32 +aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op) +{ + return aarch64_insn_get_hint_value() | op; +} + +static __always_inline u32 aarch64_insn_gen_nop(void) +{ + return aarch64_insn_gen_hint(AARCH64_INSN_HINT_NOP); +} + u32 aarch64_insn_gen_branch_reg(enum aarch64_insn_register reg, enum aarch64_insn_branch_type type); u32 aarch64_insn_gen_load_store_reg(enum aarch64_insn_register reg, @@ -580,10 +646,6 @@ u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant, enum aarch64_insn_register Rn, enum aarch64_insn_register Rd, u8 lsb); -u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base, - enum aarch64_insn_prfm_type type, - enum aarch64_insn_prfm_target target, - enum aarch64_insn_prfm_policy policy); #ifdef CONFIG_ARM64_LSE_ATOMICS u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result, enum aarch64_insn_register address, diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h index 
cea441b6aa5d..48ddc0f45d22 100644 --- a/arch/arm64/include/asm/jump_label.h +++ b/arch/arm64/include/asm/jump_label.h @@ -15,8 +15,8 @@ #define JUMP_LABEL_NOP_SIZE AARCH64_INSN_SIZE -static __always_inline bool arch_static_branch(struct static_key *key, - bool branch) +static __always_inline bool arch_static_branch(struct static_key * const key, + const bool branch) { asm_volatile_goto( "1: nop \n\t" @@ -32,8 +32,8 @@ l_yes: return true; } -static __always_inline bool arch_static_branch_jump(struct static_key *key, - bool branch) +static __always_inline bool arch_static_branch_jump(struct static_key * const key, + const bool branch) { asm_volatile_goto( "1: b %l[l_yes] \n\t" diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h index 32d14f481f0c..fcd14197756f 100644 --- a/arch/arm64/include/asm/kernel-pgtable.h +++ b/arch/arm64/include/asm/kernel-pgtable.h @@ -18,11 +18,6 @@ * with 4K (section size = 2M) but not with 16K (section size = 32M) or * 64K (section size = 512M). */ -#ifdef CONFIG_ARM64_4K_PAGES -#define ARM64_KERNEL_USES_PMD_MAPS 1 -#else -#define ARM64_KERNEL_USES_PMD_MAPS 0 -#endif /* * The idmap and swapper page tables need some space reserved in the kernel @@ -34,7 +29,7 @@ * VA range, so pages required to map highest possible PA are reserved in all * cases. */ -#if ARM64_KERNEL_USES_PMD_MAPS +#ifdef CONFIG_ARM64_4K_PAGES #define SWAPPER_PGTABLE_LEVELS (CONFIG_PGTABLE_LEVELS - 1) #else #define SWAPPER_PGTABLE_LEVELS (CONFIG_PGTABLE_LEVELS) @@ -96,7 +91,7 @@ #define INIT_IDMAP_DIR_PAGES EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1) /* Initial memory map size */ -#if ARM64_KERNEL_USES_PMD_MAPS +#ifdef CONFIG_ARM64_4K_PAGES #define SWAPPER_BLOCK_SHIFT PMD_SHIFT #define SWAPPER_BLOCK_SIZE PMD_SIZE #define SWAPPER_TABLE_SHIFT PUD_SHIFT @@ -112,7 +107,7 @@ #define SWAPPER_PTE_FLAGS (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED) #define SWAPPER_PMD_FLAGS (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S) -#if ARM64_KERNEL_USES_PMD_MAPS +#ifdef CONFIG_ARM64_4K_PAGES #define SWAPPER_RW_MMUFLAGS (PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS) #define SWAPPER_RX_MMUFLAGS (SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY) #else diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 45e2136322ba..fd34ab155d0b 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -306,8 +306,18 @@ struct vcpu_reset_state { struct kvm_vcpu_arch { struct kvm_cpu_context ctxt; - /* Guest floating point state */ + /* + * Guest floating point state + * + * The architecture has two main floating point extensions, + * the original FPSIMD and SVE. These have overlapping + * register views, with the FPSIMD V registers occupying the + * low 128 bits of the SVE Z registers. When the core + * floating point code saves the register state of a task it + * records which view it saved in fp_type. 
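
(Editor's illustration; not part of this patch.) The comment above is the crux of the fp_type change: the FPSIMD V registers architecturally alias the low 128 bits of the SVE Z registers, so a save made in one view cannot be reloaded in the other without either discarding or inventing the Z-register high bits. A minimal sketch of the aliasing, with hypothetical names (sve_to_fpsimd_reg, FPSIMD_REG_BYTES) and a simplified layout -- the kernel's real sve_state also carries the predicate registers and FFR:

#include <stdint.h>
#include <string.h>

#define FPSIMD_REG_BYTES	16	/* Vn is the low 128 bits of Zn */

/*
 * Extract the FPSIMD view of register n from a packed SVE save area
 * in which each Zn occupies vl bytes (vl >= 16).
 */
static void sve_to_fpsimd_reg(const uint8_t *sve_state, unsigned int vl,
			      unsigned int n, uint8_t v_out[FPSIMD_REG_BYTES])
{
	memcpy(v_out, sve_state + (size_t)n * vl, FPSIMD_REG_BYTES);
}

Recording which view was saved (the fp_type field below) is what lets the restore path pick the matching layout instead of guessing.
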
+ */ void *sve_state; + enum fp_type fp_type; unsigned int sve_max_vl; u64 svcr; diff --git a/arch/arm64/include/asm/lse.h b/arch/arm64/include/asm/lse.h index c503db8e73b0..f99d74826a7e 100644 --- a/arch/arm64/include/asm/lse.h +++ b/arch/arm64/include/asm/lse.h @@ -10,7 +10,6 @@ #include <linux/compiler_types.h> #include <linux/export.h> -#include <linux/jump_label.h> #include <linux/stringify.h> #include <asm/alternative.h> #include <asm/alternative-macros.h> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index d3f8b5df0c1f..72dbd6400549 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -18,6 +18,7 @@ #include <asm/cacheflush.h> #include <asm/cpufeature.h> +#include <asm/daifflags.h> #include <asm/proc-fns.h> #include <asm-generic/mm_hooks.h> #include <asm/cputype.h> @@ -152,6 +153,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap) typedef void (ttbr_replace_func)(phys_addr_t); extern ttbr_replace_func idmap_cpu_replace_ttbr1; ttbr_replace_func *replace_phys; + unsigned long daif; /* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */ phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp)); @@ -171,7 +173,15 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap) replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1); __cpu_install_idmap(idmap); + + /* + * We really don't want to take *any* exceptions while TTBR1 is + * in the process of being replaced so mask everything. + */ + daif = local_daif_save(); replace_phys(ttbr1); + local_daif_restore(daif); + cpu_uninstall_idmap(); } diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h index 094701ec5500..dbba4b7559aa 100644 --- a/arch/arm64/include/asm/module.lds.h +++ b/arch/arm64/include/asm/module.lds.h @@ -17,4 +17,12 @@ SECTIONS { */ .text.hot : { *(.text.hot) } #endif + +#ifdef CONFIG_UNWIND_TABLES + /* + * Currently, we only use unwind info at module load time, so we can + * put it into the .init allocation. 
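
(Editor's sketch; not part of this patch.) To make the comment concrete: the .init.eh_frame output section added in this hunk is consumed exactly once, by the shadow-call-stack patcher, while the module loads; asm/scs.h later in this diff declares int scs_patch(const u8 eh_frame[], int size) for that purpose. Roughly what a load-time hook could look like -- find_section() is a hypothetical helper here, and error handling is omitted:

#include <linux/elf.h>
#include <linux/types.h>
#include <asm/scs.h>

/* Hypothetical: look up a section by name in the module's ELF image. */
extern const Elf64_Shdr *find_section(const Elf64_Ehdr *hdr,
				      const Elf64_Shdr *sechdrs,
				      const char *name);

static void module_apply_dynamic_scs(const Elf64_Ehdr *hdr,
				     const Elf64_Shdr *sechdrs)
{
	const Elf64_Shdr *s = find_section(hdr, sechdrs, ".init.eh_frame");

	if (s)
		scs_patch((const u8 *)s->sh_addr, (int)s->sh_size);
}

Because the section lives in the .init allocation, it is freed along with the rest of the module's init memory once loading completes.
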
+ */ + .init.eh_frame : { *(.eh_frame) } +#endif } diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index 5ab8d163198f..f658aafc47df 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -159,6 +159,7 @@ #ifdef CONFIG_ARM64_PA_BITS_52 #define PTE_ADDR_HIGH (_AT(pteval_t, 0xf) << 12) #define PTE_ADDR_MASK (PTE_ADDR_LOW | PTE_ADDR_HIGH) +#define PTE_ADDR_HIGH_SHIFT 36 #else #define PTE_ADDR_MASK PTE_ADDR_LOW #endif diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index edf6625ce965..c36d56dbf940 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -77,11 +77,11 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; static inline phys_addr_t __pte_to_phys(pte_t pte) { return (pte_val(pte) & PTE_ADDR_LOW) | - ((pte_val(pte) & PTE_ADDR_HIGH) << 36); + ((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT); } static inline pteval_t __phys_to_pte_val(phys_addr_t phys) { - return (phys | (phys >> 36)) & PTE_ADDR_MASK; + return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK; } #else #define __pte_to_phys(pte) (pte_val(pte) & PTE_ADDR_MASK) @@ -609,7 +609,6 @@ extern pgd_t init_pg_dir[PTRS_PER_PGD]; extern pgd_t init_pg_end[]; extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; extern pgd_t idmap_pg_dir[PTRS_PER_PGD]; -extern pgd_t idmap_pg_end[]; extern pgd_t tramp_pg_dir[PTRS_PER_PGD]; extern pgd_t reserved_pg_dir[PTRS_PER_PGD]; @@ -1096,6 +1095,15 @@ static inline bool pud_sect_supported(void) } +#define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION +#define ptep_modify_prot_start ptep_modify_prot_start +extern pte_t ptep_modify_prot_start(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep); + +#define ptep_modify_prot_commit ptep_modify_prot_commit +extern void ptep_modify_prot_commit(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, + pte_t old_pte, pte_t new_pte); #endif /* !__ASSEMBLY__ */ #endif /* __ASM_PGTABLE_H */ diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 445aa3af3b76..d51b32a69309 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -122,6 +122,12 @@ enum vec_type { ARM64_VEC_MAX, }; +enum fp_type { + FP_STATE_CURRENT, /* Save based on current task state. */ + FP_STATE_FPSIMD, + FP_STATE_SVE, +}; + struct cpu_context { unsigned long x19; unsigned long x20; @@ -152,6 +158,7 @@ struct thread_struct { struct user_fpsimd_state fpsimd_state; } uw; + enum fp_type fp_type; /* registers FPSIMD or SVE? 
*/ unsigned int fpsimd_cpu; void *sve_state; /* SVE registers, if any */ void *za_state; /* ZA register, if any */ @@ -308,13 +315,13 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc, } #endif -static inline bool is_ttbr0_addr(unsigned long addr) +static __always_inline bool is_ttbr0_addr(unsigned long addr) { /* entry assembly clears tags for TTBR0 addrs */ return addr < TASK_SIZE; } -static inline bool is_ttbr1_addr(unsigned long addr) +static __always_inline bool is_ttbr1_addr(unsigned long addr) { /* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */ return arch_kasan_reset_tag(addr) >= PAGE_OFFSET; @@ -396,18 +403,5 @@ long get_tagged_addr_ctrl(struct task_struct *task); #define GET_TAGGED_ADDR_CTRL() get_tagged_addr_ctrl(current) #endif -/* - * For CONFIG_GCC_PLUGIN_STACKLEAK - * - * These need to be macros because otherwise we get stuck in a nightmare - * of header definitions for the use of task_stack_page. - */ - -/* - * The top of the current task's task stack - */ -#define current_top_of_stack() ((unsigned long)current->stack + THREAD_SIZE) -#define on_thread_stack() (on_task_stack(current, current_stack_pointer, 1)) - #endif /* __ASSEMBLY__ */ #endif /* __ASM_PROCESSOR_H */ diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h index 8297bccf0784..ff7da1268a52 100644 --- a/arch/arm64/include/asm/scs.h +++ b/arch/arm64/include/asm/scs.h @@ -5,6 +5,7 @@ #ifdef __ASSEMBLY__ #include <asm/asm-offsets.h> +#include <asm/sysreg.h> #ifdef CONFIG_SHADOW_CALL_STACK scs_sp .req x18 @@ -24,6 +25,54 @@ .endm #endif /* CONFIG_SHADOW_CALL_STACK */ + +#else + +#include <linux/scs.h> +#include <asm/cpufeature.h> + +#ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS +static inline bool should_patch_pac_into_scs(void) +{ + u64 reg; + + /* + * We only enable the shadow call stack dynamically if we are running + * on a system that does not implement PAC or BTI. PAC and SCS provide + * roughly the same level of protection, and BTI relies on the PACIASP + * instructions serving as landing pads, preventing us from patching + * those instructions into something else. 
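
(Editor's note, summarising the series rather than quoting this hunk.) What "patching those instructions into something else" amounts to, on hardware that lacks PAC, is rewriting each pointer-authentication landing pad into an explicit shadow-call-stack operation, with x18 as the SCS pointer (scs_sp above):

/*
 * Assumed substitution performed by the DWARF-driven patcher:
 *
 *	paciasp		->	str x30, [x18], #8	// push LR to SCS
 *	autiasp		->	ldr x30, [x18, #-8]!	// pop LR from SCS
 */

This is also why BTI rules the scheme out: a BTI kernel relies on PACIASP itself as the indirect-branch landing pad, so it cannot be patched away.
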
+ */ + reg = read_sysreg_s(SYS_ID_AA64ISAR1_EL1); + if (SYS_FIELD_GET(ID_AA64ISAR1_EL1, APA, reg) | + SYS_FIELD_GET(ID_AA64ISAR1_EL1, API, reg)) + return false; + + reg = read_sysreg_s(SYS_ID_AA64ISAR2_EL1); + if (SYS_FIELD_GET(ID_AA64ISAR2_EL1, APA3, reg)) + return false; + + if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) { + reg = read_sysreg_s(SYS_ID_AA64PFR1_EL1); + if (reg & (0xf << ID_AA64PFR1_EL1_BT_SHIFT)) + return false; + } + return true; +} + +static inline void dynamic_scs_init(void) +{ + if (should_patch_pac_into_scs()) { + pr_info("Enabling dynamic shadow call stack\n"); + static_branch_enable(&dynamic_scs_enabled); + } +} +#else +static inline void dynamic_scs_init(void) {} +#endif + +int scs_patch(const u8 eh_frame[], int size); + #endif /* __ASSEMBLY __ */ #endif /* _ASM_SCS_H */ diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h index aa3d3607d5c8..db7b371b367c 100644 --- a/arch/arm64/include/asm/spectre.h +++ b/arch/arm64/include/asm/spectre.h @@ -26,6 +26,7 @@ enum mitigation_state { SPECTRE_VULNERABLE, }; +struct pt_regs; struct task_struct; /* @@ -98,5 +99,6 @@ enum mitigation_state arm64_get_spectre_bhb_state(void); bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope); u8 spectre_bhb_loop_affected(int scope); void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused); +bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr); #endif /* __ASSEMBLY__ */ #endif /* __ASM_SPECTRE_H */ diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h index 5a0edb064ea4..4e5354beafb0 100644 --- a/arch/arm64/include/asm/stacktrace.h +++ b/arch/arm64/include/asm/stacktrace.h @@ -57,6 +57,8 @@ static inline bool on_task_stack(const struct task_struct *tsk, return stackinfo_on_stack(&info, sp, size); } +#define on_thread_stack() (on_task_stack(current, current_stack_pointer, 1)) + #ifdef CONFIG_VMAP_STACK DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack); diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 7d301700d1a9..1312fb48f18b 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -90,20 +90,24 @@ */ #define pstate_field(op1, op2) ((op1) << Op1_shift | (op2) << Op2_shift) #define PSTATE_Imm_shift CRm_shift +#define SET_PSTATE(x, r) __emit_inst(0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift)) #define PSTATE_PAN pstate_field(0, 4) #define PSTATE_UAO pstate_field(0, 3) #define PSTATE_SSBS pstate_field(3, 1) +#define PSTATE_DIT pstate_field(3, 2) #define PSTATE_TCO pstate_field(3, 4) -#define SET_PSTATE_PAN(x) __emit_inst(0xd500401f | PSTATE_PAN | ((!!x) << PSTATE_Imm_shift)) -#define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift)) -#define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift)) -#define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift)) +#define SET_PSTATE_PAN(x) SET_PSTATE((x), PAN) +#define SET_PSTATE_UAO(x) SET_PSTATE((x), UAO) +#define SET_PSTATE_SSBS(x) SET_PSTATE((x), SSBS) +#define SET_PSTATE_DIT(x) SET_PSTATE((x), DIT) +#define SET_PSTATE_TCO(x) SET_PSTATE((x), TCO) #define set_pstate_pan(x) asm volatile(SET_PSTATE_PAN(x)) #define set_pstate_uao(x) asm volatile(SET_PSTATE_UAO(x)) #define set_pstate_ssbs(x) asm volatile(SET_PSTATE_SSBS(x)) +#define set_pstate_dit(x) asm volatile(SET_PSTATE_DIT(x)) #define __SYS_BARRIER_INSN(CRm, op2, Rt) \ 
__emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f)) @@ -165,31 +169,6 @@ #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) -#define SYS_ID_PFR0_EL1 sys_reg(3, 0, 0, 1, 0) -#define SYS_ID_PFR1_EL1 sys_reg(3, 0, 0, 1, 1) -#define SYS_ID_PFR2_EL1 sys_reg(3, 0, 0, 3, 4) -#define SYS_ID_DFR0_EL1 sys_reg(3, 0, 0, 1, 2) -#define SYS_ID_DFR1_EL1 sys_reg(3, 0, 0, 3, 5) -#define SYS_ID_AFR0_EL1 sys_reg(3, 0, 0, 1, 3) -#define SYS_ID_MMFR0_EL1 sys_reg(3, 0, 0, 1, 4) -#define SYS_ID_MMFR1_EL1 sys_reg(3, 0, 0, 1, 5) -#define SYS_ID_MMFR2_EL1 sys_reg(3, 0, 0, 1, 6) -#define SYS_ID_MMFR3_EL1 sys_reg(3, 0, 0, 1, 7) -#define SYS_ID_MMFR4_EL1 sys_reg(3, 0, 0, 2, 6) -#define SYS_ID_MMFR5_EL1 sys_reg(3, 0, 0, 3, 6) - -#define SYS_ID_ISAR0_EL1 sys_reg(3, 0, 0, 2, 0) -#define SYS_ID_ISAR1_EL1 sys_reg(3, 0, 0, 2, 1) -#define SYS_ID_ISAR2_EL1 sys_reg(3, 0, 0, 2, 2) -#define SYS_ID_ISAR3_EL1 sys_reg(3, 0, 0, 2, 3) -#define SYS_ID_ISAR4_EL1 sys_reg(3, 0, 0, 2, 4) -#define SYS_ID_ISAR5_EL1 sys_reg(3, 0, 0, 2, 5) -#define SYS_ID_ISAR6_EL1 sys_reg(3, 0, 0, 2, 7) - -#define SYS_MVFR0_EL1 sys_reg(3, 0, 0, 3, 0) -#define SYS_MVFR1_EL1 sys_reg(3, 0, 0, 3, 1) -#define SYS_MVFR2_EL1 sys_reg(3, 0, 0, 3, 2) - #define SYS_ACTLR_EL1 sys_reg(3, 0, 1, 0, 1) #define SYS_RGSR_EL1 sys_reg(3, 0, 1, 0, 5) #define SYS_GCR_EL1 sys_reg(3, 0, 1, 0, 6) @@ -692,112 +671,6 @@ #define ID_AA64MMFR0_EL1_PARANGE_MAX ID_AA64MMFR0_EL1_PARANGE_48 #endif -#define ID_DFR0_PERFMON_SHIFT 24 - -#define ID_DFR0_PERFMON_8_0 0x3 -#define ID_DFR0_PERFMON_8_1 0x4 -#define ID_DFR0_PERFMON_8_4 0x5 -#define ID_DFR0_PERFMON_8_5 0x6 - -#define ID_ISAR4_SWP_FRAC_SHIFT 28 -#define ID_ISAR4_PSR_M_SHIFT 24 -#define ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT 20 -#define ID_ISAR4_BARRIER_SHIFT 16 -#define ID_ISAR4_SMC_SHIFT 12 -#define ID_ISAR4_WRITEBACK_SHIFT 8 -#define ID_ISAR4_WITHSHIFTS_SHIFT 4 -#define ID_ISAR4_UNPRIV_SHIFT 0 - -#define ID_DFR1_MTPMU_SHIFT 0 - -#define ID_ISAR0_DIVIDE_SHIFT 24 -#define ID_ISAR0_DEBUG_SHIFT 20 -#define ID_ISAR0_COPROC_SHIFT 16 -#define ID_ISAR0_CMPBRANCH_SHIFT 12 -#define ID_ISAR0_BITFIELD_SHIFT 8 -#define ID_ISAR0_BITCOUNT_SHIFT 4 -#define ID_ISAR0_SWAP_SHIFT 0 - -#define ID_ISAR5_RDM_SHIFT 24 -#define ID_ISAR5_CRC32_SHIFT 16 -#define ID_ISAR5_SHA2_SHIFT 12 -#define ID_ISAR5_SHA1_SHIFT 8 -#define ID_ISAR5_AES_SHIFT 4 -#define ID_ISAR5_SEVL_SHIFT 0 - -#define ID_ISAR6_I8MM_SHIFT 24 -#define ID_ISAR6_BF16_SHIFT 20 -#define ID_ISAR6_SPECRES_SHIFT 16 -#define ID_ISAR6_SB_SHIFT 12 -#define ID_ISAR6_FHM_SHIFT 8 -#define ID_ISAR6_DP_SHIFT 4 -#define ID_ISAR6_JSCVT_SHIFT 0 - -#define ID_MMFR0_INNERSHR_SHIFT 28 -#define ID_MMFR0_FCSE_SHIFT 24 -#define ID_MMFR0_AUXREG_SHIFT 20 -#define ID_MMFR0_TCM_SHIFT 16 -#define ID_MMFR0_SHARELVL_SHIFT 12 -#define ID_MMFR0_OUTERSHR_SHIFT 8 -#define ID_MMFR0_PMSA_SHIFT 4 -#define ID_MMFR0_VMSA_SHIFT 0 - -#define ID_MMFR4_EVT_SHIFT 28 -#define ID_MMFR4_CCIDX_SHIFT 24 -#define ID_MMFR4_LSM_SHIFT 20 -#define ID_MMFR4_HPDS_SHIFT 16 -#define ID_MMFR4_CNP_SHIFT 12 -#define ID_MMFR4_XNX_SHIFT 8 -#define ID_MMFR4_AC2_SHIFT 4 -#define ID_MMFR4_SPECSEI_SHIFT 0 - -#define ID_MMFR5_ETS_SHIFT 0 - -#define ID_PFR0_DIT_SHIFT 24 -#define ID_PFR0_CSV2_SHIFT 16 -#define ID_PFR0_STATE3_SHIFT 12 -#define ID_PFR0_STATE2_SHIFT 8 -#define ID_PFR0_STATE1_SHIFT 4 -#define ID_PFR0_STATE0_SHIFT 0 - -#define ID_DFR0_PERFMON_SHIFT 24 -#define ID_DFR0_MPROFDBG_SHIFT 20 -#define ID_DFR0_MMAPTRC_SHIFT 16 -#define ID_DFR0_COPTRC_SHIFT 12 -#define ID_DFR0_MMAPDBG_SHIFT 8 -#define 
ID_DFR0_COPSDBG_SHIFT 4 -#define ID_DFR0_COPDBG_SHIFT 0 - -#define ID_PFR2_SSBS_SHIFT 4 -#define ID_PFR2_CSV3_SHIFT 0 - -#define MVFR0_FPROUND_SHIFT 28 -#define MVFR0_FPSHVEC_SHIFT 24 -#define MVFR0_FPSQRT_SHIFT 20 -#define MVFR0_FPDIVIDE_SHIFT 16 -#define MVFR0_FPTRAP_SHIFT 12 -#define MVFR0_FPDP_SHIFT 8 -#define MVFR0_FPSP_SHIFT 4 -#define MVFR0_SIMD_SHIFT 0 - -#define MVFR1_SIMDFMAC_SHIFT 28 -#define MVFR1_FPHP_SHIFT 24 -#define MVFR1_SIMDHP_SHIFT 20 -#define MVFR1_SIMDSP_SHIFT 16 -#define MVFR1_SIMDINT_SHIFT 12 -#define MVFR1_SIMDLS_SHIFT 8 -#define MVFR1_FPDNAN_SHIFT 4 -#define MVFR1_FPFTZ_SHIFT 0 - -#define ID_PFR1_GIC_SHIFT 28 -#define ID_PFR1_VIRT_FRAC_SHIFT 24 -#define ID_PFR1_SEC_FRAC_SHIFT 20 -#define ID_PFR1_GENTIMER_SHIFT 16 -#define ID_PFR1_VIRTUALIZATION_SHIFT 12 -#define ID_PFR1_MPROGMOD_SHIFT 8 -#define ID_PFR1_SECURITY_SHIFT 4 -#define ID_PFR1_PROGMOD_SHIFT 0 - #if defined(CONFIG_ARM64_4K_PAGES) #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN4_SHIFT #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN @@ -815,9 +688,6 @@ #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN64_2_SHIFT #endif -#define MVFR2_FPMISC_SHIFT 4 -#define MVFR2_SIMDMISC_SHIFT 0 - #define CPACR_EL1_FPEN_EL1EN (BIT(20)) /* enable EL1 access */ #define CPACR_EL1_FPEN_EL0EN (BIT(21)) /* enable EL0 access, if EL1EN set */ @@ -851,10 +721,6 @@ #define SYS_RGSR_EL1_SEED_SHIFT 8 #define SYS_RGSR_EL1_SEED_MASK 0xffffUL -/* GMID_EL1 field definitions */ -#define GMID_EL1_BS_SHIFT 0 -#define GMID_EL1_BS_SIZE 4 - /* TFSR{,E0}_EL1 bit definitions */ #define SYS_TFSR_EL1_TF0_SHIFT 0 #define SYS_TFSR_EL1_TF1_SHIFT 1 diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h index 6e5826470bea..1f361e2da516 100644 --- a/arch/arm64/include/asm/traps.h +++ b/arch/arm64/include/asm/traps.h @@ -13,17 +13,16 @@ struct pt_regs; -struct undef_hook { - struct list_head node; - u32 instr_mask; - u32 instr_val; - u64 pstate_mask; - u64 pstate_val; - int (*fn)(struct pt_regs *regs, u32 instr); -}; +#ifdef CONFIG_ARMV8_DEPRECATED +bool try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn); +#else +static inline bool +try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn) +{ + return false; +} +#endif /* CONFIG_ARMV8_DEPRECATED */ -void register_undef_hook(struct undef_hook *hook); -void unregister_undef_hook(struct undef_hook *hook); void force_signal_inject(int signal, int code, unsigned long address, unsigned long err); void arm64_notify_segfault(unsigned long addr); void arm64_force_sig_fault(int signo, int code, unsigned long far, const char *str); diff --git a/arch/arm64/include/asm/uprobes.h b/arch/arm64/include/asm/uprobes.h index 315eef654e39..ba4bff5ca674 100644 --- a/arch/arm64/include/asm/uprobes.h +++ b/arch/arm64/include/asm/uprobes.h @@ -12,7 +12,7 @@ #define MAX_UINSN_BYTES AARCH64_INSN_SIZE -#define UPROBE_SWBP_INSN BRK64_OPCODE_UPROBES +#define UPROBE_SWBP_INSN cpu_to_le32(BRK64_OPCODE_UPROBES) #define UPROBE_SWBP_INSN_SIZE AARCH64_INSN_SIZE #define UPROBE_XOL_SLOT_BYTES MAX_UINSN_BYTES diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h index 9b245da6f507..b713d30544f1 100644 --- a/arch/arm64/include/uapi/asm/hwcap.h +++ b/arch/arm64/include/uapi/asm/hwcap.h @@ -93,5 +93,8 @@ #define HWCAP2_WFXT (1UL << 31) #define HWCAP2_EBF16 (1UL << 32) #define HWCAP2_SVE_EBF16 (1UL << 33) +#define HWCAP2_CSSC (1UL << 34) +#define HWCAP2_RPRFM (1UL << 35) +#define HWCAP2_SVE2P1 (1UL << 36) #endif /* 
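
(Editor's example; not part of this patch.) The three HWCAP2 bits just added are exported to userspace through the auxiliary vector, so a program can probe for CSSC, RPRFM and SVE2p1 support with the standard getauxval() interface; the constants below simply mirror the new defines:

#include <stdio.h>
#include <sys/auxv.h>

#define HWCAP2_CSSC	(1UL << 34)
#define HWCAP2_RPRFM	(1UL << 35)
#define HWCAP2_SVE2P1	(1UL << 36)

int main(void)
{
	unsigned long hwcap2 = getauxval(AT_HWCAP2);

	printf("CSSC:   %s\n", (hwcap2 & HWCAP2_CSSC) ? "yes" : "no");
	printf("RPRFM:  %s\n", (hwcap2 & HWCAP2_RPRFM) ? "yes" : "no");
	printf("SVE2p1: %s\n", (hwcap2 & HWCAP2_SVE2P1) ? "yes" : "no");
	return 0;
}
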
_UAPI__ASM_HWCAP_H */ diff --git a/arch/arm64/include/uapi/asm/sigcontext.h b/arch/arm64/include/uapi/asm/sigcontext.h index 4aaf31e3bf16..9525041e4a14 100644 --- a/arch/arm64/include/uapi/asm/sigcontext.h +++ b/arch/arm64/include/uapi/asm/sigcontext.h @@ -62,6 +62,10 @@ struct sigcontext { * context. Such structures must be placed after the rt_sigframe on the stack * and be 16-byte aligned. The last structure must be a dummy one with the * magic and size set to 0. + * + * Note that the values allocated for use as magic should be chosen to + * be meaningful in ASCII to aid manual parsing, ZA doesn't follow this + * convention due to oversight but it should be observed for future additions. */ struct _aarch64_ctx { __u32 magic; diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 2f361a883d8c..8dd925f4a4c6 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -80,6 +80,8 @@ obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o obj-$(CONFIG_ARM64_MTE) += mte.o obj-y += vdso-wrap.o obj-$(CONFIG_COMPAT_VDSO) += vdso32-wrap.o +obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) += patch-scs.o +CFLAGS_patch-scs.o += -mbranch-protection=none # Force dependency (vdso*-wrap.S includes vdso.so through incbin) $(obj)/vdso-wrap.o: $(obj)/vdso/vdso.so diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c index 91263d09ea65..d32d4ed5519b 100644 --- a/arch/arm64/kernel/alternative.c +++ b/arch/arm64/kernel/alternative.c @@ -196,7 +196,7 @@ static void __apply_alternatives(const struct alt_region *region, } } -void apply_alternatives_vdso(void) +static void __init apply_alternatives_vdso(void) { struct alt_region region; const struct elf64_hdr *hdr; @@ -220,7 +220,7 @@ void apply_alternatives_vdso(void) __apply_alternatives(®ion, false, &all_capabilities[0]); } -static const struct alt_region kernel_alternatives = { +static const struct alt_region kernel_alternatives __initconst = { .begin = (struct alt_instr *)__alt_instructions, .end = (struct alt_instr *)__alt_instructions_end, }; @@ -229,7 +229,7 @@ static const struct alt_region kernel_alternatives = { * We might be patching the stop_machine state machine, so implement a * really simple polling protocol here. 
*/ -static int __apply_alternatives_multi_stop(void *unused) +static int __init __apply_alternatives_multi_stop(void *unused) { /* We always have a CPU 0 at this point (__init) */ if (smp_processor_id()) { diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c index fb0e7c7b2e20..8a9052cf3013 100644 --- a/arch/arm64/kernel/armv8_deprecated.c +++ b/arch/arm64/kernel/armv8_deprecated.c @@ -17,7 +17,6 @@ #include <asm/sysreg.h> #include <asm/system_misc.h> #include <asm/traps.h> -#include <asm/kprobes.h> #define CREATE_TRACE_POINTS #include "trace-events-emulation.h" @@ -39,226 +38,46 @@ enum insn_emulation_mode { enum legacy_insn_status { INSN_DEPRECATED, INSN_OBSOLETE, -}; - -struct insn_emulation_ops { - const char *name; - enum legacy_insn_status status; - struct undef_hook *hooks; - int (*set_hw_mode)(bool enable); + INSN_UNAVAILABLE, }; struct insn_emulation { - struct list_head node; - struct insn_emulation_ops *ops; + const char *name; + enum legacy_insn_status status; + bool (*try_emulate)(struct pt_regs *regs, + u32 insn); + int (*set_hw_mode)(bool enable); + int current_mode; int min; int max; -}; - -static LIST_HEAD(insn_emulation); -static int nr_insn_emulated __initdata; -static DEFINE_RAW_SPINLOCK(insn_emulation_lock); -static DEFINE_MUTEX(insn_emulation_mutex); - -static void register_emulation_hooks(struct insn_emulation_ops *ops) -{ - struct undef_hook *hook; - - BUG_ON(!ops->hooks); - - for (hook = ops->hooks; hook->instr_mask; hook++) - register_undef_hook(hook); - - pr_notice("Registered %s emulation handler\n", ops->name); -} - -static void remove_emulation_hooks(struct insn_emulation_ops *ops) -{ - struct undef_hook *hook; - - BUG_ON(!ops->hooks); - - for (hook = ops->hooks; hook->instr_mask; hook++) - unregister_undef_hook(hook); - - pr_notice("Removed %s emulation handler\n", ops->name); -} - -static void enable_insn_hw_mode(void *data) -{ - struct insn_emulation *insn = (struct insn_emulation *)data; - if (insn->ops->set_hw_mode) - insn->ops->set_hw_mode(true); -} - -static void disable_insn_hw_mode(void *data) -{ - struct insn_emulation *insn = (struct insn_emulation *)data; - if (insn->ops->set_hw_mode) - insn->ops->set_hw_mode(false); -} - -/* Run set_hw_mode(mode) on all active CPUs */ -static int run_all_cpu_set_hw_mode(struct insn_emulation *insn, bool enable) -{ - if (!insn->ops->set_hw_mode) - return -EINVAL; - if (enable) - on_each_cpu(enable_insn_hw_mode, (void *)insn, true); - else - on_each_cpu(disable_insn_hw_mode, (void *)insn, true); - return 0; -} - -/* - * Run set_hw_mode for all insns on a starting CPU. - * Returns: - * 0 - If all the hooks ran successfully. - * -EINVAL - At least one hook is not supported by the CPU. 
- */
-static int run_all_insn_set_hw_mode(unsigned int cpu)
-{
- int rc = 0;
- unsigned long flags;
- struct insn_emulation *insn;
-
- raw_spin_lock_irqsave(&insn_emulation_lock, flags);
- list_for_each_entry(insn, &insn_emulation, node) {
- bool enable = (insn->current_mode == INSN_HW);
- if (insn->ops->set_hw_mode && insn->ops->set_hw_mode(enable)) {
- pr_warn("CPU[%u] cannot support the emulation of %s",
- cpu, insn->ops->name);
- rc = -EINVAL;
- }
- }
- raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
- return rc;
-}
-
-static int update_insn_emulation_mode(struct insn_emulation *insn,
- enum insn_emulation_mode prev)
-{
- int ret = 0;
-
- switch (prev) {
- case INSN_UNDEF: /* Nothing to be done */
- break;
- case INSN_EMULATE:
- remove_emulation_hooks(insn->ops);
- break;
- case INSN_HW:
- if (!run_all_cpu_set_hw_mode(insn, false))
- pr_notice("Disabled %s support\n", insn->ops->name);
- break;
- }
-
- switch (insn->current_mode) {
- case INSN_UNDEF:
- break;
- case INSN_EMULATE:
- register_emulation_hooks(insn->ops);
- break;
- case INSN_HW:
- ret = run_all_cpu_set_hw_mode(insn, true);
- if (!ret)
- pr_notice("Enabled %s support\n", insn->ops->name);
- break;
- }
- return ret;
-}
-
-static void __init register_insn_emulation(struct insn_emulation_ops *ops)
-{
- unsigned long flags;
- struct insn_emulation *insn;
-
- insn = kzalloc(sizeof(*insn), GFP_KERNEL);
- if (!insn)
- return;
-
- insn->ops = ops;
- insn->min = INSN_UNDEF;
-
- switch (ops->status) {
- case INSN_DEPRECATED:
- insn->current_mode = INSN_EMULATE;
- /* Disable the HW mode if it was turned on at early boot time */
- run_all_cpu_set_hw_mode(insn, false);
- insn->max = INSN_HW;
- break;
- case INSN_OBSOLETE:
- insn->current_mode = INSN_UNDEF;
- insn->max = INSN_EMULATE;
- break;
- }
-
- raw_spin_lock_irqsave(&insn_emulation_lock, flags);
- list_add(&insn->node, &insn_emulation);
- nr_insn_emulated++;
- raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
-
- /* Register any handlers if required */
- update_insn_emulation_mode(insn, INSN_UNDEF);
-}
-
-static int emulation_proc_handler(struct ctl_table *table, int write,
- void *buffer, size_t *lenp,
- loff_t *ppos)
-{
- int ret = 0;
- struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode);
- enum insn_emulation_mode prev_mode = insn->current_mode;
-
- mutex_lock(&insn_emulation_mutex);
- ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ /*
+ * sysctl for this emulation + a sentinel entry.
+ */
+ struct ctl_table sysctl[2];
+};
- if (ret || !write || prev_mode == insn->current_mode)
- goto ret;
+#define ARM_OPCODE_CONDTEST_FAIL 0
+#define ARM_OPCODE_CONDTEST_PASS 1
+#define ARM_OPCODE_CONDTEST_UNCOND 2
- ret = update_insn_emulation_mode(insn, prev_mode);
- if (ret) {
- /* Mode change failed, revert to previous mode.
*/ - insn->current_mode = prev_mode; - update_insn_emulation_mode(insn, INSN_UNDEF); - } -ret: - mutex_unlock(&insn_emulation_mutex); - return ret; -} +#define ARM_OPCODE_CONDITION_UNCOND 0xf -static void __init register_insn_emulation_sysctl(void) +static unsigned int __maybe_unused aarch32_check_condition(u32 opcode, u32 psr) { - unsigned long flags; - int i = 0; - struct insn_emulation *insn; - struct ctl_table *insns_sysctl, *sysctl; - - insns_sysctl = kcalloc(nr_insn_emulated + 1, sizeof(*sysctl), - GFP_KERNEL); - if (!insns_sysctl) - return; - - raw_spin_lock_irqsave(&insn_emulation_lock, flags); - list_for_each_entry(insn, &insn_emulation, node) { - sysctl = &insns_sysctl[i]; - - sysctl->mode = 0644; - sysctl->maxlen = sizeof(int); + u32 cc_bits = opcode >> 28; - sysctl->procname = insn->ops->name; - sysctl->data = &insn->current_mode; - sysctl->extra1 = &insn->min; - sysctl->extra2 = &insn->max; - sysctl->proc_handler = emulation_proc_handler; - i++; + if (cc_bits != ARM_OPCODE_CONDITION_UNCOND) { + if ((*aarch32_opcode_cond_checks[cc_bits])(psr)) + return ARM_OPCODE_CONDTEST_PASS; + else + return ARM_OPCODE_CONDTEST_FAIL; } - raw_spin_unlock_irqrestore(&insn_emulation_lock, flags); - - register_sysctl("abi", insns_sysctl); + return ARM_OPCODE_CONDTEST_UNCOND; } +#ifdef CONFIG_SWP_EMULATION /* * Implement emulation of the SWP/SWPB instructions using load-exclusive and * store-exclusive. @@ -339,25 +158,6 @@ static int emulate_swpX(unsigned int address, unsigned int *data, return res; } -#define ARM_OPCODE_CONDTEST_FAIL 0 -#define ARM_OPCODE_CONDTEST_PASS 1 -#define ARM_OPCODE_CONDTEST_UNCOND 2 - -#define ARM_OPCODE_CONDITION_UNCOND 0xf - -static unsigned int __kprobes aarch32_check_condition(u32 opcode, u32 psr) -{ - u32 cc_bits = opcode >> 28; - - if (cc_bits != ARM_OPCODE_CONDITION_UNCOND) { - if ((*aarch32_opcode_cond_checks[cc_bits])(psr)) - return ARM_OPCODE_CONDTEST_PASS; - else - return ARM_OPCODE_CONDTEST_FAIL; - } - return ARM_OPCODE_CONDTEST_UNCOND; -} - /* * swp_handler logs the id of calling process, dissects the instruction, sanity * checks the memory location, calls emulate_swpX for the actual operation and @@ -430,28 +230,27 @@ fault: return 0; } -/* - * Only emulate SWP/SWPB executed in ARM state/User mode. - * The kernel must be SWP free and SWP{B} does not exist in Thumb. 
- */ -static struct undef_hook swp_hooks[] = { - { - .instr_mask = 0x0fb00ff0, - .instr_val = 0x01000090, - .pstate_mask = PSR_AA32_MODE_MASK, - .pstate_val = PSR_AA32_MODE_USR, - .fn = swp_handler - }, - { } -}; +static bool try_emulate_swp(struct pt_regs *regs, u32 insn) +{ + /* SWP{B} only exists in ARM state and does not exist in Thumb */ + if (!compat_user_mode(regs) || compat_thumb_mode(regs)) + return false; -static struct insn_emulation_ops swp_ops = { + if ((insn & 0x0fb00ff0) != 0x01000090) + return false; + + return swp_handler(regs, insn) == 0; +} + +static struct insn_emulation insn_swp = { .name = "swp", .status = INSN_OBSOLETE, - .hooks = swp_hooks, + .try_emulate = try_emulate_swp, .set_hw_mode = NULL, }; +#endif /* CONFIG_SWP_EMULATION */ +#ifdef CONFIG_CP15_BARRIER_EMULATION static int cp15barrier_handler(struct pt_regs *regs, u32 instr) { perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, regs->pc); @@ -514,31 +313,29 @@ static int cp15_barrier_set_hw_mode(bool enable) return 0; } -static struct undef_hook cp15_barrier_hooks[] = { - { - .instr_mask = 0x0fff0fdf, - .instr_val = 0x0e070f9a, - .pstate_mask = PSR_AA32_MODE_MASK, - .pstate_val = PSR_AA32_MODE_USR, - .fn = cp15barrier_handler, - }, - { - .instr_mask = 0x0fff0fff, - .instr_val = 0x0e070f95, - .pstate_mask = PSR_AA32_MODE_MASK, - .pstate_val = PSR_AA32_MODE_USR, - .fn = cp15barrier_handler, - }, - { } -}; +static bool try_emulate_cp15_barrier(struct pt_regs *regs, u32 insn) +{ + if (!compat_user_mode(regs) || compat_thumb_mode(regs)) + return false; + + if ((insn & 0x0fff0fdf) == 0x0e070f9a) + return cp15barrier_handler(regs, insn) == 0; + + if ((insn & 0x0fff0fff) == 0x0e070f95) + return cp15barrier_handler(regs, insn) == 0; -static struct insn_emulation_ops cp15_barrier_ops = { + return false; +} + +static struct insn_emulation insn_cp15_barrier = { .name = "cp15_barrier", .status = INSN_DEPRECATED, - .hooks = cp15_barrier_hooks, + .try_emulate = try_emulate_cp15_barrier, .set_hw_mode = cp15_barrier_set_hw_mode, }; +#endif /* CONFIG_CP15_BARRIER_EMULATION */ +#ifdef CONFIG_SETEND_EMULATION static int setend_set_hw_mode(bool enable) { if (!cpu_supports_mixed_endian_el0()) @@ -586,31 +383,218 @@ static int t16_setend_handler(struct pt_regs *regs, u32 instr) return rc; } -static struct undef_hook setend_hooks[] = { - { - .instr_mask = 0xfffffdff, - .instr_val = 0xf1010000, - .pstate_mask = PSR_AA32_MODE_MASK, - .pstate_val = PSR_AA32_MODE_USR, - .fn = a32_setend_handler, - }, - { - /* Thumb mode */ - .instr_mask = 0xfffffff7, - .instr_val = 0x0000b650, - .pstate_mask = (PSR_AA32_T_BIT | PSR_AA32_MODE_MASK), - .pstate_val = (PSR_AA32_T_BIT | PSR_AA32_MODE_USR), - .fn = t16_setend_handler, - }, - {} -}; +static bool try_emulate_setend(struct pt_regs *regs, u32 insn) +{ + if (compat_thumb_mode(regs) && + (insn & 0xfffffff7) == 0x0000b650) + return t16_setend_handler(regs, insn) == 0; + + if (compat_user_mode(regs) && + (insn & 0xfffffdff) == 0xf1010000) + return a32_setend_handler(regs, insn) == 0; -static struct insn_emulation_ops setend_ops = { + return false; +} + +static struct insn_emulation insn_setend = { .name = "setend", .status = INSN_DEPRECATED, - .hooks = setend_hooks, + .try_emulate = try_emulate_setend, .set_hw_mode = setend_set_hw_mode, }; +#endif /* CONFIG_SETEND_EMULATION */ + +static struct insn_emulation *insn_emulations[] = { +#ifdef CONFIG_SWP_EMULATION + &insn_swp, +#endif +#ifdef CONFIG_CP15_BARRIER_EMULATION + &insn_cp15_barrier, +#endif +#ifdef CONFIG_SETEND_EMULATION + &insn_setend, 
+#endif +}; + +static DEFINE_MUTEX(insn_emulation_mutex); + +static void enable_insn_hw_mode(void *data) +{ + struct insn_emulation *insn = (struct insn_emulation *)data; + if (insn->set_hw_mode) + insn->set_hw_mode(true); +} + +static void disable_insn_hw_mode(void *data) +{ + struct insn_emulation *insn = (struct insn_emulation *)data; + if (insn->set_hw_mode) + insn->set_hw_mode(false); +} + +/* Run set_hw_mode(mode) on all active CPUs */ +static int run_all_cpu_set_hw_mode(struct insn_emulation *insn, bool enable) +{ + if (!insn->set_hw_mode) + return -EINVAL; + if (enable) + on_each_cpu(enable_insn_hw_mode, (void *)insn, true); + else + on_each_cpu(disable_insn_hw_mode, (void *)insn, true); + return 0; +} + +/* + * Run set_hw_mode for all insns on a starting CPU. + * Returns: + * 0 - If all the hooks ran successfully. + * -EINVAL - At least one hook is not supported by the CPU. + */ +static int run_all_insn_set_hw_mode(unsigned int cpu) +{ + int rc = 0; + unsigned long flags; + + /* + * Disable IRQs to serialize against an IPI from + * run_all_cpu_set_hw_mode(), ensuring the HW is programmed to the most + * recent enablement state if the two race with one another. + */ + local_irq_save(flags); + for (int i = 0; i < ARRAY_SIZE(insn_emulations); i++) { + struct insn_emulation *insn = insn_emulations[i]; + bool enable = READ_ONCE(insn->current_mode) == INSN_HW; + if (insn->set_hw_mode && insn->set_hw_mode(enable)) { + pr_warn("CPU[%u] cannot support the emulation of %s", + cpu, insn->name); + rc = -EINVAL; + } + } + local_irq_restore(flags); + + return rc; +} + +static int update_insn_emulation_mode(struct insn_emulation *insn, + enum insn_emulation_mode prev) +{ + int ret = 0; + + switch (prev) { + case INSN_UNDEF: /* Nothing to be done */ + break; + case INSN_EMULATE: + break; + case INSN_HW: + if (!run_all_cpu_set_hw_mode(insn, false)) + pr_notice("Disabled %s support\n", insn->name); + break; + } + + switch (insn->current_mode) { + case INSN_UNDEF: + break; + case INSN_EMULATE: + break; + case INSN_HW: + ret = run_all_cpu_set_hw_mode(insn, true); + if (!ret) + pr_notice("Enabled %s support\n", insn->name); + break; + } + + return ret; +} + +static int emulation_proc_handler(struct ctl_table *table, int write, + void *buffer, size_t *lenp, + loff_t *ppos) +{ + int ret = 0; + struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode); + enum insn_emulation_mode prev_mode = insn->current_mode; + + mutex_lock(&insn_emulation_mutex); + ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); + + if (ret || !write || prev_mode == insn->current_mode) + goto ret; + + ret = update_insn_emulation_mode(insn, prev_mode); + if (ret) { + /* Mode change failed, revert to previous mode. 
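
(Editor's example; not part of this patch.) The handler above backs one sysctl per emulated instruction; as registered later in this hunk, each shows up as /proc/sys/abi/<name> (e.g. /proc/sys/abi/swp), and assuming the mode values follow the enum order seen earlier -- 0 = INSN_UNDEF, 1 = INSN_EMULATE, 2 = INSN_HW -- a userspace sketch of flipping a mode looks like this (set_insn_mode() is a hypothetical helper):

#include <stdio.h>

static int set_insn_mode(const char *name, int mode)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/sys/abi/%s", name);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", mode);	/* 0=undef, 1=emulate, 2=hw */
	return fclose(f);
}

A write that fails to take effect on every CPU is rolled back by the revert path shown above, so readers of the sysctl always see the mode that is actually in force.
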
*/ + WRITE_ONCE(insn->current_mode, prev_mode); + update_insn_emulation_mode(insn, INSN_UNDEF); + } +ret: + mutex_unlock(&insn_emulation_mutex); + return ret; +} + +static void __init register_insn_emulation(struct insn_emulation *insn) +{ + struct ctl_table *sysctl; + + insn->min = INSN_UNDEF; + + switch (insn->status) { + case INSN_DEPRECATED: + insn->current_mode = INSN_EMULATE; + /* Disable the HW mode if it was turned on at early boot time */ + run_all_cpu_set_hw_mode(insn, false); + insn->max = INSN_HW; + break; + case INSN_OBSOLETE: + insn->current_mode = INSN_UNDEF; + insn->max = INSN_EMULATE; + break; + case INSN_UNAVAILABLE: + insn->current_mode = INSN_UNDEF; + insn->max = INSN_UNDEF; + break; + } + + /* Program the HW if required */ + update_insn_emulation_mode(insn, INSN_UNDEF); + + if (insn->status != INSN_UNAVAILABLE) { + sysctl = &insn->sysctl[0]; + + sysctl->mode = 0644; + sysctl->maxlen = sizeof(int); + + sysctl->procname = insn->name; + sysctl->data = &insn->current_mode; + sysctl->extra1 = &insn->min; + sysctl->extra2 = &insn->max; + sysctl->proc_handler = emulation_proc_handler; + + register_sysctl("abi", sysctl); + } +} + +bool try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn) +{ + for (int i = 0; i < ARRAY_SIZE(insn_emulations); i++) { + struct insn_emulation *ie = insn_emulations[i]; + + if (ie->status == INSN_UNAVAILABLE) + continue; + + /* + * A trap may race with the mode being changed + * INSN_EMULATE<->INSN_HW. Try to emulate the instruction to + * avoid a spurious UNDEF. + */ + if (READ_ONCE(ie->current_mode) == INSN_UNDEF) + continue; + + if (ie->try_emulate(regs, insn)) + return true; + } + + return false; +} /* * Invoked as core_initcall, which guarantees that the instruction @@ -618,24 +602,25 @@ static struct insn_emulation_ops setend_ops = { */ static int __init armv8_deprecated_init(void) { - if (IS_ENABLED(CONFIG_SWP_EMULATION)) - register_insn_emulation(&swp_ops); +#ifdef CONFIG_SETEND_EMULATION + if (!system_supports_mixed_endian_el0()) { + insn_setend.status = INSN_UNAVAILABLE; + pr_info("setend instruction emulation is not supported on this system\n"); + } - if (IS_ENABLED(CONFIG_CP15_BARRIER_EMULATION)) - register_insn_emulation(&cp15_barrier_ops); +#endif + for (int i = 0; i < ARRAY_SIZE(insn_emulations); i++) { + struct insn_emulation *ie = insn_emulations[i]; - if (IS_ENABLED(CONFIG_SETEND_EMULATION)) { - if (system_supports_mixed_endian_el0()) - register_insn_emulation(&setend_ops); - else - pr_info("setend instruction emulation is not supported on this system\n"); + if (ie->status == INSN_UNAVAILABLE) + continue; + + register_insn_emulation(ie); } cpuhp_setup_state_nocalls(CPUHP_AP_ARM64_ISNDEP_STARTING, "arm64/isndep:starting", run_all_insn_set_hw_mode, NULL); - register_insn_emulation_sysctl(); - return 0; } diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 1197e7679882..2234624536d9 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -82,6 +82,19 @@ int main(void) DEFINE(S_STACKFRAME, offsetof(struct pt_regs, stackframe)); DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs)); BLANK(); +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS + DEFINE(FREGS_X0, offsetof(struct ftrace_regs, regs[0])); + DEFINE(FREGS_X2, offsetof(struct ftrace_regs, regs[2])); + DEFINE(FREGS_X4, offsetof(struct ftrace_regs, regs[4])); + DEFINE(FREGS_X6, offsetof(struct ftrace_regs, regs[6])); + DEFINE(FREGS_X8, offsetof(struct ftrace_regs, regs[8])); + DEFINE(FREGS_FP, offsetof(struct ftrace_regs, fp)); + 
DEFINE(FREGS_LR, offsetof(struct ftrace_regs, lr)); + DEFINE(FREGS_SP, offsetof(struct ftrace_regs, sp)); + DEFINE(FREGS_PC, offsetof(struct ftrace_regs, pc)); + DEFINE(FREGS_SIZE, sizeof(struct ftrace_regs)); + BLANK(); +#endif #ifdef CONFIG_COMPAT DEFINE(COMPAT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_sigframe, uc.uc_mcontext.arm_r0)); DEFINE(COMPAT_RT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_rt_sigframe, sig.uc.uc_mcontext.arm_r0)); diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index 89ac00084f38..307faa2b4395 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -661,6 +661,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = { CAP_MIDR_RANGE_LIST(trbe_write_out_of_range_cpus), }, #endif +#ifdef CONFIG_ARM64_ERRATUM_2645198 + { + .desc = "ARM erratum 2645198", + .capability = ARM64_WORKAROUND_2645198, + ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A715) + }, +#endif #ifdef CONFIG_ARM64_ERRATUM_2077057 { .desc = "ARM erratum 2077057", diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index b3f37e2209ad..7e76e1fda2a1 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -212,6 +212,8 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = { }; static const struct arm64_ftr_bits ftr_id_aa64isar2[] = { + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_EL1_CSSC_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_EL1_RPRFM_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_EL1_BC_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH), FTR_STRICT, FTR_EXACT, ID_AA64ISAR2_EL1_APA3_SHIFT, 4, 0), @@ -402,14 +404,14 @@ struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = { }; static const struct arm64_ftr_bits ftr_id_mmfr0[] = { - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_MMFR0_AUXREG_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_TCM_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_SHARELVL_SHIFT, 4, 0), - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_OUTERSHR_SHIFT, 4, 0xf), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_PMSA_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_VMSA_SHIFT, 4, 0), + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_EL1_InnerShr_SHIFT, 4, 0xf), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_EL1_FCSE_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_MMFR0_EL1_AuxReg_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_EL1_TCM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_EL1_ShareLvl_SHIFT, 4, 0), + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_EL1_OuterShr_SHIFT, 4, 0xf), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_EL1_PMSA_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_EL1_VMSA_SHIFT, 4, 0), ARM64_FTR_END, }; @@ -429,32 +431,32 @@ static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = { }; static const struct arm64_ftr_bits ftr_mvfr0[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPROUND_SHIFT, 4, 
0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSHVEC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSQRT_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPDIVIDE_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPTRAP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPDP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_SIMD_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_EL1_FPRound_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_EL1_FPShVec_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_EL1_FPSqrt_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_EL1_FPDivide_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_EL1_FPTrap_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_EL1_FPDP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_EL1_FPSP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_EL1_SIMDReg_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_mvfr1[] = { - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDFMAC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPHP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDHP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDSP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDINT_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDLS_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPDNAN_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPFTZ_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_EL1_SIMDFMAC_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_EL1_FPHP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_EL1_SIMDHP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_EL1_SIMDSP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_EL1_SIMDInt_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_EL1_SIMDLS_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_EL1_FPDNaN_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_EL1_FPFtZ_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_mvfr2[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_FPMISC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_SIMDMISC_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_EL1_FPMisc_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_EL1_SIMDMisc_SHIFT, 4, 0), ARM64_FTR_END, }; @@ -470,34 +472,34 @@ static const struct arm64_ftr_bits ftr_gmid[] = { }; static const struct arm64_ftr_bits ftr_id_isar0[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR0_COPROC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_CMPBRANCH_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITFIELD_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITCOUNT_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_SWAP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_EL1_Divide_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_EL1_Debug_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_EL1_Coproc_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_EL1_CmpBranch_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_EL1_BitField_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_EL1_BitCount_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_EL1_Swap_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_isar5[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_EL1_RDM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_EL1_CRC32_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_EL1_SHA2_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_EL1_SHA1_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_EL1_AES_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_EL1_SEVL_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_mmfr4[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EVT_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CCIDX_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_LSM_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_HPDS_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CNP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_XNX_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_AC2_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EL1_EVT_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EL1_CCIDX_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EL1_LSM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EL1_HPDS_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EL1_CnP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EL1_XNX_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EL1_AC2_SHIFT, 4, 0), /* * SpecSEI = 1 indicates that the PE might generate an SError on an @@ -505,80 +507,80 @@ static const struct arm64_ftr_bits 
ftr_id_mmfr4[] = { * SError might be generated than it will not be. Hence it has been * classified as FTR_HIGHER_SAFE. */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_MMFR4_SPECSEI_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_MMFR4_EL1_SpecSEI_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_isar4[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SWP_FRAC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_PSR_M_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_BARRIER_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SMC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WRITEBACK_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WITHSHIFTS_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_UNPRIV_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_EL1_SWP_frac_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_EL1_PSR_M_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_EL1_SynchPrim_frac_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_EL1_Barrier_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_EL1_SMC_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_EL1_Writeback_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_EL1_WithShifts_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_EL1_Unpriv_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_mmfr5[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR5_ETS_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR5_EL1_ETS_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_isar6[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_I8MM_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_BF16_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SPECRES_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SB_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_FHM_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_DP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_JSCVT_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_EL1_I8MM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_EL1_BF16_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_EL1_SPECRES_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_EL1_SB_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_EL1_FHM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_EL1_DP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_EL1_JSCVT_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_pfr0[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_DIT_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, 
FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR0_CSV2_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE3_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE2_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE1_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE0_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_EL1_DIT_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR0_EL1_CSV2_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_EL1_State3_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_EL1_State2_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_EL1_State1_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_EL1_State0_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_pfr1[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GIC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRT_FRAC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SEC_FRAC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GENTIMER_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRTUALIZATION_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_MPROGMOD_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SECURITY_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_PROGMOD_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_EL1_GIC_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_EL1_Virt_frac_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_EL1_Sec_frac_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_EL1_GenTimer_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_EL1_Virtualization_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_EL1_MProgMod_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_EL1_Security_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_EL1_ProgMod_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_pfr2[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_SSBS_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_CSV3_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_EL1_SSBS_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_EL1_CSV3_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_dfr0[] = { /* [31:28] TraceFilt */ - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_DFR0_PERFMON_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MPROFDBG_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPTRC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPTRC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPDBG_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPSDBG_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_DFR0_COPDBG_SHIFT, 4, 0), + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_DFR0_EL1_PerfMon_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_EL1_MProfDbg_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_EL1_MMapTrc_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_EL1_CopTrc_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_EL1_MMapDbg_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_EL1_CopSDbg_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_EL1_CopDbg_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_dfr1[] = { - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR1_MTPMU_SHIFT, 4, 0), + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR1_EL1_MTPMU_SHIFT, 4, 0), ARM64_FTR_END, }; @@ -1119,12 +1121,12 @@ static int update_32bit_cpu_features(int cpu, struct cpuinfo_32bit *info, * EL1-dependent register fields to avoid spurious sanity check fails. */ if (!id_aa64pfr0_32bit_el1(pfr0)) { - relax_cpu_ftr_reg(SYS_ID_ISAR4_EL1, ID_ISAR4_SMC_SHIFT); - relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_VIRT_FRAC_SHIFT); - relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_SEC_FRAC_SHIFT); - relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_VIRTUALIZATION_SHIFT); - relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_SECURITY_SHIFT); - relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_PROGMOD_SHIFT); + relax_cpu_ftr_reg(SYS_ID_ISAR4_EL1, ID_ISAR4_EL1_SMC_SHIFT); + relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_Virt_frac_SHIFT); + relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_Sec_frac_SHIFT); + relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_Virtualization_SHIFT); + relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_Security_SHIFT); + relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_ProgMod_SHIFT); } taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu, @@ -2101,6 +2103,11 @@ static void cpu_trap_el0_impdef(const struct arm64_cpu_capabilities *__unused) sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_TIDCP); } +static void cpu_enable_dit(const struct arm64_cpu_capabilities *__unused) +{ + set_pstate_dit(1); +} + /* Internal helper functions to match cpu capability type */ static bool cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap) @@ -2664,6 +2671,18 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .matches = has_cpuid_feature, .cpu_enable = cpu_trap_el0_impdef, }, + { + .desc = "Data independent timing control (DIT)", + .capability = ARM64_HAS_DIT, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .sys_reg = SYS_ID_AA64PFR0_EL1, + .sign = FTR_UNSIGNED, + .field_pos = ID_AA64PFR0_EL1_DIT_SHIFT, + .field_width = 4, + .min_field_value = ID_AA64PFR0_EL1_DIT_IMP, + .matches = has_cpuid_feature, + .cpu_enable = cpu_enable_dit, + }, {}, }; @@ -2772,6 +2791,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_EL1_AT_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT), #ifdef CONFIG_ARM64_SVE HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_SVE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR0_EL1_SVE_IMP, CAP_HWCAP, KERNEL_HWCAP_SVE), + HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_SVEver_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_SVEver_SVE2p1, CAP_HWCAP, KERNEL_HWCAP_SVE2P1), HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_SVEver_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_SVEver_SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2), HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, 
ID_AA64ZFR0_EL1_AES_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_AES_IMP, CAP_HWCAP, KERNEL_HWCAP_SVEAES), HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_AES_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_AES_PMULL128, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL), @@ -2798,6 +2818,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { #endif /* CONFIG_ARM64_MTE */ HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_EL1_ECV_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV), HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_EL1_AFP_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP), + HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_EL1_CSSC_SHIFT, 4, FTR_UNSIGNED, ID_AA64ISAR2_EL1_CSSC_IMP, CAP_HWCAP, KERNEL_HWCAP_CSSC), + HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_EL1_RPRFM_SHIFT, 4, FTR_UNSIGNED, ID_AA64ISAR2_EL1_RPRFM_IMP, CAP_HWCAP, KERNEL_HWCAP_RPRFM), HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_EL1_RPRES_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RPRES), HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_EL1_WFxT_SHIFT, 4, FTR_UNSIGNED, ID_AA64ISAR2_EL1_WFxT_IMP, CAP_HWCAP, KERNEL_HWCAP_WFXT), #ifdef CONFIG_ARM64_SME @@ -2829,24 +2851,24 @@ static bool compat_has_neon(const struct arm64_cpu_capabilities *cap, int scope) else mvfr1 = read_sysreg_s(SYS_MVFR1_EL1); - return cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_SIMDSP_SHIFT) && - cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_SIMDINT_SHIFT) && - cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_SIMDLS_SHIFT); + return cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_EL1_SIMDSP_SHIFT) && + cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_EL1_SIMDInt_SHIFT) && + cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_EL1_SIMDLS_SHIFT); } #endif static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = { #ifdef CONFIG_COMPAT HWCAP_CAP_MATCH(compat_has_neon, CAP_COMPAT_HWCAP, COMPAT_HWCAP_NEON), - HWCAP_CAP(SYS_MVFR1_EL1, MVFR1_SIMDFMAC_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv4), + HWCAP_CAP(SYS_MVFR1_EL1, MVFR1_EL1_SIMDFMAC_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv4), /* Arm v8 mandates MVFR0.FPDP == {0, 2}. 
So, piggy back on this for the presence of VFP support */ - HWCAP_CAP(SYS_MVFR0_EL1, MVFR0_FPDP_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFP), - HWCAP_CAP(SYS_MVFR0_EL1, MVFR0_FPDP_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv3), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA1_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA1), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA2_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA2), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_CRC32_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_CRC32), + HWCAP_CAP(SYS_MVFR0_EL1, MVFR0_EL1_FPDP_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFP), + HWCAP_CAP(SYS_MVFR0_EL1, MVFR0_EL1_FPDP_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv3), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_EL1_AES_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_EL1_AES_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_EL1_SHA1_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA1), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_EL1_SHA2_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA2), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_EL1_CRC32_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_CRC32), #endif {}, }; @@ -3435,35 +3457,22 @@ int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt) return rc; } -static int emulate_mrs(struct pt_regs *regs, u32 insn) +bool try_emulate_mrs(struct pt_regs *regs, u32 insn) { u32 sys_reg, rt; + if (compat_user_mode(regs) || !aarch64_insn_is_mrs(insn)) + return false; + /* * sys_reg values are defined as used in mrs/msr instruction. * shift the imm value to get the encoding. */ sys_reg = (u32)aarch64_insn_decode_immediate(AARCH64_INSN_IMM_16, insn) << 5; rt = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn); - return do_emulate_mrs(regs, sys_reg, rt); + return do_emulate_mrs(regs, sys_reg, rt) == 0; } -static struct undef_hook mrs_hook = { - .instr_mask = 0xffff0000, - .instr_val = 0xd5380000, - .pstate_mask = PSR_AA32_MODE_MASK, - .pstate_val = PSR_MODE_EL0t, - .fn = emulate_mrs, -}; - -static int __init enable_mrs_emulation(void) -{ - register_undef_hook(&mrs_hook); - return 0; -} - -core_initcall(enable_mrs_emulation); - enum mitigation_state arm64_get_meltdown_state(void) { if (__meltdown_safe) diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c index 28d4f442b0bc..379695262b77 100644 --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -116,6 +116,9 @@ static const char *const hwcap_str[] = { [KERNEL_HWCAP_WFXT] = "wfxt", [KERNEL_HWCAP_EBF16] = "ebf16", [KERNEL_HWCAP_SVE_EBF16] = "sveebf16", + [KERNEL_HWCAP_CSSC] = "cssc", + [KERNEL_HWCAP_RPRFM] = "rprfm", + [KERNEL_HWCAP_SVE2P1] = "sve2p1", }; #ifdef CONFIG_COMPAT diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c index 27369fa1c032..cce1167199e3 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -30,7 +30,7 @@ /* * Handle IRQ/context state management when entering from kernel mode. 
* Before this function is called it is not safe to call regular kernel code, - * intrumentable code, or any code which may trigger an exception. + * instrumentable code, or any code which may trigger an exception. * * This is intended to match the logic in irqentry_enter(), handling the kernel * mode transitions only. @@ -63,7 +63,7 @@ static void noinstr enter_from_kernel_mode(struct pt_regs *regs) /* * Handle IRQ/context state management when exiting to kernel mode. * After this function returns it is not safe to call regular kernel code, - * intrumentable code, or any code which may trigger an exception. + * instrumentable code, or any code which may trigger an exception. * * This is intended to match the logic in irqentry_exit(), handling the kernel * mode transitions only, and with preemption handled elsewhere. @@ -97,7 +97,7 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs) /* * Handle IRQ/context state management when entering from user mode. * Before this function is called it is not safe to call regular kernel code, - * intrumentable code, or any code which may trigger an exception. + * instrumentable code, or any code which may trigger an exception. */ static __always_inline void __enter_from_user_mode(void) { @@ -116,7 +116,7 @@ static __always_inline void enter_from_user_mode(struct pt_regs *regs) /* * Handle IRQ/context state management when exiting to user mode. * After this function returns it is not safe to call regular kernel code, - * intrumentable code, or any code which may trigger an exception. + * instrumentable code, or any code which may trigger an exception. */ static __always_inline void __exit_to_user_mode(void) { @@ -152,7 +152,7 @@ asmlinkage void noinstr asm_exit_to_user_mode(struct pt_regs *regs) /* * Handle IRQ/context state management when entering an NMI from user/kernel * mode. Before this function is called it is not safe to call regular kernel - * code, intrumentable code, or any code which may trigger an exception. + * code, instrumentable code, or any code which may trigger an exception. */ static void noinstr arm64_enter_nmi(struct pt_regs *regs) { @@ -170,7 +170,7 @@ static void noinstr arm64_enter_nmi(struct pt_regs *regs) /* * Handle IRQ/context state management when exiting an NMI from user/kernel * mode. After this function returns it is not safe to call regular kernel - * code, intrumentable code, or any code which may trigger an exception. + * code, instrumentable code, or any code which may trigger an exception. */ static void noinstr arm64_exit_nmi(struct pt_regs *regs) { @@ -192,7 +192,7 @@ static void noinstr arm64_exit_nmi(struct pt_regs *regs) /* * Handle IRQ/context state management when entering a debug exception from * kernel mode. Before this function is called it is not safe to call regular - * kernel code, intrumentable code, or any code which may trigger an exception. + * kernel code, instrumentable code, or any code which may trigger an exception. */ static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs) { @@ -207,7 +207,7 @@ static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs) /* * Handle IRQ/context state management when exiting a debug exception from * kernel mode. After this function returns it is not safe to call regular - * kernel code, intrumentable code, or any code which may trigger an exception. + * kernel code, instrumentable code, or any code which may trigger an exception. 
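For reference, the three hwcaps added above ("cssc", "rprfm", "sve2p1") become visible to userspace through the auxiliary vector. A minimal sketch of a userspace check, assuming the matching uapi macros (HWCAP2_CSSC, HWCAP2_RPRFM, HWCAP2_SVE2P1) from this series' header updates:

	#include <stdio.h>
	#include <sys/auxv.h>
	#include <asm/hwcap.h>		/* assumed: HWCAP2_CSSC/RPRFM/SVE2P1 */

	int main(void)
	{
		unsigned long hwcap2 = getauxval(AT_HWCAP2);

		printf("cssc:   %s\n", (hwcap2 & HWCAP2_CSSC)   ? "yes" : "no");
		printf("rprfm:  %s\n", (hwcap2 & HWCAP2_RPRFM)  ? "yes" : "no");
		printf("sve2p1: %s\n", (hwcap2 & HWCAP2_SVE2P1) ? "yes" : "no");
		return 0;
	}

The same strings also show up in /proc/cpuinfo via the hwcap_str[] additions above.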
*/ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs) { @@ -384,7 +384,7 @@ static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr) { enter_from_kernel_mode(regs); local_daif_inherit(regs); - do_undefinstr(regs, esr); + do_el1_undef(regs, esr); local_daif_mask(); exit_to_kernel_mode(regs); } @@ -570,7 +570,7 @@ static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr) { enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); - do_sysinstr(esr, regs); + do_el0_sys(esr, regs); exit_to_user_mode(regs); } @@ -599,7 +599,7 @@ static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr) { enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); - do_undefinstr(regs, esr); + do_el0_undef(regs, esr); exit_to_user_mode(regs); } @@ -762,7 +762,7 @@ static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr) { enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); - do_cp15instr(esr, regs); + do_el0_cp15(esr, regs); exit_to_user_mode(regs); } diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index 322a831f8ede..3b625f76ffba 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -13,83 +13,58 @@ #include <asm/ftrace.h> #include <asm/insn.h> -#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS /* * Due to -fpatchable-function-entry=2, the compiler has placed two NOPs before * the regular function prologue. For an enabled callsite, ftrace_init_nop() and * ftrace_make_call() have patched those NOPs to: * * MOV X9, LR - * BL <entry> - * - * ... where <entry> is either ftrace_caller or ftrace_regs_caller. + * BL ftrace_caller * * Each instrumented function follows the AAPCS, so here x0-x8 and x18-x30 are * live (x18 holds the Shadow Call Stack pointer), and x9-x17 are safe to * clobber. * - * We save the callsite's context into a pt_regs before invoking any ftrace - * callbacks. So that we can get a sensible backtrace, we create a stack record - * for the callsite and the ftrace entry assembly. This is not sufficient for - * reliable stacktrace: until we create the callsite stack record, its caller - * is missing from the LR and existing chain of frame records. + * We save the callsite's context into a struct ftrace_regs before invoking any + * ftrace callbacks. So that we can get a sensible backtrace, we create frame + * records for the callsite and the ftrace entry assembly. This is not + * sufficient for reliable stacktrace: until we create the callsite stack + * record, its caller is missing from the LR and existing chain of frame + * records. 
*/ - .macro ftrace_regs_entry, allregs=0 - /* Make room for pt_regs, plus a callee frame */ - sub sp, sp, #(PT_REGS_SIZE + 16) - - /* Save function arguments (and x9 for simplicity) */ - stp x0, x1, [sp, #S_X0] - stp x2, x3, [sp, #S_X2] - stp x4, x5, [sp, #S_X4] - stp x6, x7, [sp, #S_X6] - stp x8, x9, [sp, #S_X8] - - /* Optionally save the callee-saved registers, always save the FP */ - .if \allregs == 1 - stp x10, x11, [sp, #S_X10] - stp x12, x13, [sp, #S_X12] - stp x14, x15, [sp, #S_X14] - stp x16, x17, [sp, #S_X16] - stp x18, x19, [sp, #S_X18] - stp x20, x21, [sp, #S_X20] - stp x22, x23, [sp, #S_X22] - stp x24, x25, [sp, #S_X24] - stp x26, x27, [sp, #S_X26] - stp x28, x29, [sp, #S_X28] - .else - str x29, [sp, #S_FP] - .endif - - /* Save the callsite's SP and LR */ - add x10, sp, #(PT_REGS_SIZE + 16) - stp x9, x10, [sp, #S_LR] +SYM_CODE_START(ftrace_caller) + bti c - /* Save the PC after the ftrace callsite */ - str x30, [sp, #S_PC] + /* Save original SP */ + mov x10, sp - /* Create a frame record for the callsite above pt_regs */ - stp x29, x9, [sp, #PT_REGS_SIZE] - add x29, sp, #PT_REGS_SIZE + /* Make room for ftrace regs, plus two frame records */ + sub sp, sp, #(FREGS_SIZE + 32) - /* Create our frame record within pt_regs. */ - stp x29, x30, [sp, #S_STACKFRAME] - add x29, sp, #S_STACKFRAME - .endm + /* Save function arguments */ + stp x0, x1, [sp, #FREGS_X0] + stp x2, x3, [sp, #FREGS_X2] + stp x4, x5, [sp, #FREGS_X4] + stp x6, x7, [sp, #FREGS_X6] + str x8, [sp, #FREGS_X8] -SYM_CODE_START(ftrace_regs_caller) - bti c - ftrace_regs_entry 1 - b ftrace_common -SYM_CODE_END(ftrace_regs_caller) + /* Save the callsite's FP, LR, SP */ + str x29, [sp, #FREGS_FP] + str x9, [sp, #FREGS_LR] + str x10, [sp, #FREGS_SP] -SYM_CODE_START(ftrace_caller) - bti c - ftrace_regs_entry 0 - b ftrace_common -SYM_CODE_END(ftrace_caller) + /* Save the PC after the ftrace callsite */ + str x30, [sp, #FREGS_PC] + + /* Create a frame record for the callsite above the ftrace regs */ + stp x29, x9, [sp, #FREGS_SIZE + 16] + add x29, sp, #FREGS_SIZE + 16 + + /* Create our frame record above the ftrace regs */ + stp x29, x30, [sp, #FREGS_SIZE] + add x29, sp, #FREGS_SIZE -SYM_CODE_START(ftrace_common) sub x0, x30, #AARCH64_INSN_SIZE // ip (callsite's BL insn) mov x1, x9 // parent_ip (callsite's LR) ldr_l x2, function_trace_op // op @@ -104,24 +79,24 @@ SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL) * to restore x0-x8, x29, and x30. 
*/ /* Restore function arguments */ - ldp x0, x1, [sp] - ldp x2, x3, [sp, #S_X2] - ldp x4, x5, [sp, #S_X4] - ldp x6, x7, [sp, #S_X6] - ldr x8, [sp, #S_X8] + ldp x0, x1, [sp, #FREGS_X0] + ldp x2, x3, [sp, #FREGS_X2] + ldp x4, x5, [sp, #FREGS_X4] + ldp x6, x7, [sp, #FREGS_X6] + ldr x8, [sp, #FREGS_X8] /* Restore the callsite's FP, LR, PC */ - ldr x29, [sp, #S_FP] - ldr x30, [sp, #S_LR] - ldr x9, [sp, #S_PC] + ldr x29, [sp, #FREGS_FP] + ldr x30, [sp, #FREGS_LR] + ldr x9, [sp, #FREGS_PC] /* Restore the callsite's SP */ - add sp, sp, #PT_REGS_SIZE + 16 + add sp, sp, #FREGS_SIZE + 32 ret x9 -SYM_CODE_END(ftrace_common) +SYM_CODE_END(ftrace_caller) -#else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ +#else /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */ /* * Gcc with -pg will put the following code in the beginning of each function: @@ -195,44 +170,6 @@ SYM_CODE_END(ftrace_common) add \reg, \reg, #8 .endm -#ifndef CONFIG_DYNAMIC_FTRACE -/* - * void _mcount(unsigned long return_address) - * @return_address: return address to instrumented function - * - * This function makes calls, if enabled, to: - * - tracer function to probe instrumented function's entry, - * - ftrace_graph_caller to set up an exit hook - */ -SYM_FUNC_START(_mcount) - mcount_enter - - ldr_l x2, ftrace_trace_function - adr x0, ftrace_stub - cmp x0, x2 // if (ftrace_trace_function - b.eq skip_ftrace_call // != ftrace_stub) { - - mcount_get_pc x0 // function's pc - mcount_get_lr x1 // function's lr (= parent's pc) - blr x2 // (*ftrace_trace_function)(pc, lr); - -skip_ftrace_call: // } -#ifdef CONFIG_FUNCTION_GRAPH_TRACER - ldr_l x2, ftrace_graph_return - cmp x0, x2 // if ((ftrace_graph_return - b.ne ftrace_graph_caller // != ftrace_stub) - - ldr_l x2, ftrace_graph_entry // || (ftrace_graph_entry - adr_l x0, ftrace_graph_entry_stub // != ftrace_graph_entry_stub)) - cmp x0, x2 - b.ne ftrace_graph_caller // ftrace_graph_caller(); -#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ - mcount_exit -SYM_FUNC_END(_mcount) -EXPORT_SYMBOL(_mcount) -NOKPROBE(_mcount) - -#else /* CONFIG_DYNAMIC_FTRACE */ /* * _mcount() is used to build the kernel with -pg option, but all the branch * instructions to _mcount() are replaced to NOP initially at kernel start up, @@ -272,7 +209,6 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller(); mcount_exit SYM_FUNC_END(ftrace_caller) -#endif /* CONFIG_DYNAMIC_FTRACE */ #ifdef CONFIG_FUNCTION_GRAPH_TRACER /* @@ -293,7 +229,7 @@ SYM_FUNC_START(ftrace_graph_caller) mcount_exit SYM_FUNC_END(ftrace_graph_caller) #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ -#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */ SYM_TYPED_FUNC_START(ftrace_stub) ret diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index e28137d64b76..11cb99c4d298 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -197,6 +197,9 @@ alternative_cb_end .endm .macro kernel_entry, el, regsize = 64 + .if \el == 0 + alternative_insn nop, SET_PSTATE_DIT(1), ARM64_HAS_DIT + .endif .if \regsize == 32 mov w0, w0 // zero upper 32 bits of x0 .endif diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 23834d96d1e7..dcc81e7200d4 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -118,16 +118,8 @@ * returned from the 2nd syscall yet, TIF_FOREIGN_FPSTATE is still set so * whatever is in the FPSIMD registers is not saved to memory, but discarded. 
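The kernel_entry hunk above forces PSTATE.DIT on for anything entered from EL0 once ARM64_HAS_DIT is detected. For context, DIT is also directly readable as a PSTATE bit; a sketch, assuming the standard s3_3_c4_c2_5 encoding of the DIT special register and the bit 24 (PSR_DIT_BIT) position:

	static inline bool dit_is_set(void)
	{
		unsigned long dit;

		/* MRS Xt, DIT spelled out so older assemblers accept it. */
		asm volatile("mrs %0, s3_3_c4_c2_5" : "=r" (dit));
		return dit & (1UL << 24);	/* PSR_DIT_BIT, assumed */
	}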
*/ -struct fpsimd_last_state_struct { - struct user_fpsimd_state *st; - void *sve_state; - void *za_state; - u64 *svcr; - unsigned int sve_vl; - unsigned int sme_vl; -}; -static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state); +static DEFINE_PER_CPU(struct cpu_fp_state, fpsimd_last_state); __ro_after_init struct vl_info vl_info[ARM64_VEC_MAX] = { #ifdef CONFIG_ARM64_SVE @@ -330,15 +322,6 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * The task can execute SVE instructions while in userspace without * trapping to the kernel. * - * When stored, Z0-Z31 (incorporating Vn in bits[127:0] or the - * corresponding Zn), P0-P15 and FFR are encoded in - * task->thread.sve_state, formatted appropriately for vector - * length task->thread.sve_vl or, if SVCR.SM is set, - * task->thread.sme_vl. - * - * task->thread.sve_state must point to a valid buffer at least - * sve_state_size(task) bytes in size. - * * During any syscall, the kernel may optionally clear TIF_SVE and * discard the vector state except for the FPSIMD subset. * @@ -348,7 +331,15 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * do_sve_acc() to be called, which does some preparation and then * sets TIF_SVE. * - * When stored, FPSIMD registers V0-V31 are encoded in + * During any syscall, the kernel may optionally clear TIF_SVE and + * discard the vector state except for the FPSIMD subset. + * + * The data will be stored in one of two formats: + * + * * FPSIMD only - FP_STATE_FPSIMD: + * + * When only the FPSIMD state is stored, task->thread.fp_type is set to + * FP_STATE_FPSIMD and the FPSIMD registers V0-V31 are encoded in * task->thread.uw.fpsimd_state; bits [max : 128] for each of Z0-Z31 are * logically zero but not stored anywhere; P0-P15 and FFR are not * stored and have unspecified values from userspace's point of @@ -356,7 +347,23 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * but userspace is discouraged from relying on this. * * task->thread.sve_state does not need to be non-NULL, valid or any - * particular size: it must not be dereferenced. + * particular size: it must not be dereferenced and any data stored + * there should be considered stale and not referenced. + * + * * SVE state - FP_STATE_SVE: + * + * When the full SVE state is stored, task->thread.fp_type is set to + * FP_STATE_SVE and Z0-Z31 (incorporating Vn in bits[127:0] or the + * corresponding Zn), P0-P15 and FFR are encoded in + * task->thread.sve_state, formatted appropriately for vector + * length task->thread.sve_vl or, if SVCR.SM is set, + * task->thread.sme_vl. The storage for the vector registers in + * task->thread.uw.fpsimd_state should be ignored. + * + * task->thread.sve_state must point to a valid buffer at least + * sve_state_size(task) bytes in size. The data stored in + * task->thread.uw.fpsimd_state.vregs should be considered stale + * and not referenced. 
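The fp_type field used throughout the rewritten comment takes one of three values. A sketch of the definition this series adds elsewhere (assumed shape, in <asm/processor.h>):

	enum fp_type {
		FP_STATE_CURRENT,	/* boundary: decide from task state */
		FP_STATE_FPSIMD,	/* only the FPSIMD subset is stored */
		FP_STATE_SVE,		/* the full SVE state is stored */
	};

FP_STATE_CURRENT is only meaningful in cpu_fp_state.to_save, asking fpsimd_save() to work out the format itself; the per-task fp_type always records an explicit FP_STATE_FPSIMD or FP_STATE_SVE.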
* * * FPSR and FPCR are always stored in task->thread.uw.fpsimd_state * irrespective of whether TIF_SVE is clear or set, since these are * not vector length dependent. @@ -378,11 +385,37 @@ static void task_fpsimd_load(void) WARN_ON(!system_supports_fpsimd()); WARN_ON(!have_cpu_fpsimd_context()); - /* Check if we should restore SVE first */ - if (IS_ENABLED(CONFIG_ARM64_SVE) && test_thread_flag(TIF_SVE)) { - sve_set_vq(sve_vq_from_vl(task_get_sve_vl(current)) - 1); - restore_sve_regs = true; - restore_ffr = true; + if (system_supports_sve()) { + switch (current->thread.fp_type) { + case FP_STATE_FPSIMD: + /* Stop tracking SVE for this task until next use. */ + if (test_and_clear_thread_flag(TIF_SVE)) + sve_user_disable(); + break; + case FP_STATE_SVE: + if (!thread_sm_enabled(&current->thread) && + !WARN_ON_ONCE(!test_and_set_thread_flag(TIF_SVE))) + sve_user_enable(); + + if (test_thread_flag(TIF_SVE)) + sve_set_vq(sve_vq_from_vl(task_get_sve_vl(current)) - 1); + + restore_sve_regs = true; + restore_ffr = true; + break; + default: + /* + * This indicates either a bug in + * fpsimd_save() or memory corruption; we + * should always record an explicit format + * when we save. We always at least have the + * memory allocated for FPSIMD registers so + * try that and hope for the best. + */ + WARN_ON_ONCE(1); + clear_thread_flag(TIF_SVE); + break; + } } /* Restore SME, override SVE register configuration if needed */ @@ -398,18 +431,19 @@ static void task_fpsimd_load(void) if (thread_za_enabled(&current->thread)) za_load_state(current->thread.za_state); - if (thread_sm_enabled(&current->thread)) { - restore_sve_regs = true; + if (thread_sm_enabled(&current->thread)) restore_ffr = system_supports_fa64(); - } } - if (restore_sve_regs) + if (restore_sve_regs) { + WARN_ON_ONCE(current->thread.fp_type != FP_STATE_SVE); sve_load_state(sve_pffr(&current->thread), &current->thread.uw.fpsimd_state.fpsr, restore_ffr); - else + } else { + WARN_ON_ONCE(current->thread.fp_type != FP_STATE_FPSIMD); fpsimd_load_state(&current->thread.uw.fpsimd_state); + } } /* @@ -419,12 +453,12 @@ static void task_fpsimd_load(void) * last, if KVM is involved this may be the guest VM context rather * than the host thread for the VM pointed to by current. This means * that we must always reference the state storage via last rather - * than via current, other than the TIF_ flags which KVM will - * carefully maintain for us. + * than via current; if we are saving KVM state then it will have + * ensured that the type of registers to save is set in last->to_save. */ static void fpsimd_save(void) { - struct fpsimd_last_state_struct const *last = + struct cpu_fp_state const *last = this_cpu_ptr(&fpsimd_last_state); /* set by fpsimd_bind_task_to_cpu() or fpsimd_bind_state_to_cpu() */ bool save_sve_regs = false; @@ -437,7 +471,14 @@ static void fpsimd_save(void) if (test_thread_flag(TIF_FOREIGN_FPSTATE)) return; - if (test_thread_flag(TIF_SVE)) { + /* + * If a task is in a syscall, the ABI allows us to only + * preserve the state shared with FPSIMD, so don't bother + * saving the full SVE state in that case. 
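The compound condition that opens the code just below is the heart of the syscall optimisation; restated as a predicate over the same fields and helpers:

	/* Distilled from the fpsimd_save() hunk below; not separate code. */
	static bool want_full_sve_save(const struct cpu_fp_state *last)
	{
		/* The binder explicitly asked for full SVE state (e.g. KVM). */
		if (last->to_save == FP_STATE_SVE)
			return true;

		/*
		 * Otherwise save what the task owns - but inside a syscall
		 * everything beyond the shared FPSIMD subset is dead per the
		 * ABI, so the FPSIMD format suffices even with TIF_SVE set.
		 */
		return last->to_save == FP_STATE_CURRENT &&
		       test_thread_flag(TIF_SVE) &&
		       !in_syscall(current_pt_regs());
	}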
+ */ + if ((last->to_save == FP_STATE_CURRENT && test_thread_flag(TIF_SVE) && + !in_syscall(current_pt_regs())) || + last->to_save == FP_STATE_SVE) { save_sve_regs = true; save_ffr = true; vl = last->sve_vl; @@ -474,8 +515,10 @@ static void fpsimd_save(void) sve_save_state((char *)last->sve_state + sve_ffr_offset(vl), &last->st->fpsr, save_ffr); + *last->fp_type = FP_STATE_SVE; } else { fpsimd_save_state(last->st); + *last->fp_type = FP_STATE_FPSIMD; } } @@ -768,8 +811,7 @@ void fpsimd_sync_to_sve(struct task_struct *task) */ void sve_sync_to_fpsimd(struct task_struct *task) { - if (test_tsk_thread_flag(task, TIF_SVE) || - thread_sm_enabled(&task->thread)) + if (task->thread.fp_type == FP_STATE_SVE) sve_to_fpsimd(task); } @@ -848,8 +890,10 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type, fpsimd_flush_task_state(task); if (test_and_clear_tsk_thread_flag(task, TIF_SVE) || - thread_sm_enabled(&task->thread)) + thread_sm_enabled(&task->thread)) { sve_to_fpsimd(task); + task->thread.fp_type = FP_STATE_FPSIMD; + } if (system_supports_sme() && type == ARM64_VEC_SME) { task->thread.svcr &= ~(SVCR_SM_MASK | @@ -1368,6 +1412,7 @@ static void sve_init_regs(void) fpsimd_bind_task_to_cpu(); } else { fpsimd_to_sve(current); + current->thread.fp_type = FP_STATE_SVE; } } @@ -1596,6 +1641,8 @@ void fpsimd_flush_thread(void) current->thread.svcr = 0; } + current->thread.fp_type = FP_STATE_FPSIMD; + put_cpu_fpsimd_context(); kfree(sve_state); kfree(za_state); @@ -1628,14 +1675,38 @@ void fpsimd_signal_preserve_current_state(void) } /* + * Called by KVM when entering the guest. + */ +void fpsimd_kvm_prepare(void) +{ + if (!system_supports_sve()) + return; + + /* + * KVM does not save host SVE state since we can only enter + * the guest from a syscall so the ABI means that only the + * non-saved SVE state needs to be saved. If we have left + * SVE enabled for performance reasons then update the task + * state to be FPSIMD only. + */ + get_cpu_fpsimd_context(); + + if (test_and_clear_thread_flag(TIF_SVE)) { + sve_to_fpsimd(current); + current->thread.fp_type = FP_STATE_FPSIMD; + } + + put_cpu_fpsimd_context(); +} + +/* * Associate current's FPSIMD context with this cpu * The caller must have ownership of the cpu FPSIMD context before calling * this function. 
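With everything folded into struct cpu_fp_state, the KVM side (changed elsewhere in this series) can bind guest state with a single structure; an assumed sketch using the fields visible above, where the vcpu_* names are placeholders:

	struct cpu_fp_state fp_state = {
		.st		= &vcpu_fpsimd_state,	/* placeholder names */
		.sve_state	= vcpu_sve_state,
		.sve_vl		= vcpu_sve_max_vl,
		.svcr		= &vcpu_svcr,
		.fp_type	= &vcpu_fp_type,
		.to_save	= FP_STATE_FPSIMD,	/* or FP_STATE_SVE */
	};

	fpsimd_bind_state_to_cpu(&fp_state);

fpsimd_save() then honours fp_state.to_save instead of guessing from TIF_ flags, which is exactly what the reworked comment above fpsimd_save() describes.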
*/ static void fpsimd_bind_task_to_cpu(void) { - struct fpsimd_last_state_struct *last = - this_cpu_ptr(&fpsimd_last_state); + struct cpu_fp_state *last = this_cpu_ptr(&fpsimd_last_state); WARN_ON(!system_supports_fpsimd()); last->st = &current->thread.uw.fpsimd_state; @@ -1644,6 +1715,8 @@ static void fpsimd_bind_task_to_cpu(void) last->sve_vl = task_get_sve_vl(current); last->sme_vl = task_get_sme_vl(current); last->svcr = &current->thread.svcr; + last->fp_type = &current->thread.fp_type; + last->to_save = FP_STATE_CURRENT; current->thread.fpsimd_cpu = smp_processor_id(); /* @@ -1665,22 +1738,14 @@ static void fpsimd_bind_task_to_cpu(void) } } -void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, - unsigned int sve_vl, void *za_state, - unsigned int sme_vl, u64 *svcr) +void fpsimd_bind_state_to_cpu(struct cpu_fp_state *state) { - struct fpsimd_last_state_struct *last = - this_cpu_ptr(&fpsimd_last_state); + struct cpu_fp_state *last = this_cpu_ptr(&fpsimd_last_state); WARN_ON(!system_supports_fpsimd()); WARN_ON(!in_softirq() && !irqs_disabled()); - last->st = st; - last->svcr = svcr; - last->sve_state = sve_state; - last->za_state = za_state; - last->sve_vl = sve_vl; - last->sme_vl = sme_vl; + *last = *state; } /* @@ -1838,7 +1903,7 @@ void kernel_neon_begin(void) /* Invalidate any task state remaining in the fpsimd regs: */ fpsimd_flush_cpu_state(); } -EXPORT_SYMBOL(kernel_neon_begin); +EXPORT_SYMBOL_GPL(kernel_neon_begin); /* * kernel_neon_end(): give the CPU FPSIMD registers back to the current task @@ -1856,7 +1921,7 @@ void kernel_neon_end(void) put_cpu_fpsimd_context(); } -EXPORT_SYMBOL(kernel_neon_end); +EXPORT_SYMBOL_GPL(kernel_neon_end); #ifdef CONFIG_EFI diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index 8745175f4a75..b30b955a8921 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -17,7 +17,49 @@ #include <asm/insn.h> #include <asm/patching.h> -#ifdef CONFIG_DYNAMIC_FTRACE +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS +struct fregs_offset { + const char *name; + int offset; +}; + +#define FREGS_OFFSET(n, field) \ +{ \ + .name = n, \ + .offset = offsetof(struct ftrace_regs, field), \ +} + +static const struct fregs_offset fregs_offsets[] = { + FREGS_OFFSET("x0", regs[0]), + FREGS_OFFSET("x1", regs[1]), + FREGS_OFFSET("x2", regs[2]), + FREGS_OFFSET("x3", regs[3]), + FREGS_OFFSET("x4", regs[4]), + FREGS_OFFSET("x5", regs[5]), + FREGS_OFFSET("x6", regs[6]), + FREGS_OFFSET("x7", regs[7]), + FREGS_OFFSET("x8", regs[8]), + + FREGS_OFFSET("x29", fp), + FREGS_OFFSET("x30", lr), + FREGS_OFFSET("lr", lr), + + FREGS_OFFSET("sp", sp), + FREGS_OFFSET("pc", pc), +}; + +int ftrace_regs_query_register_offset(const char *name) +{ + for (int i = 0; i < ARRAY_SIZE(fregs_offsets); i++) { + const struct fregs_offset *roff = &fregs_offsets[i]; + if (!strcmp(roff->name, name)) + return roff->offset; + } + + return -EINVAL; +} +#endif + /* * Replace a single instruction, which may be a branch or NOP. * If @validate == true, a replaced instruction is checked against 'old'. 
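The fregs_offsets[] table above lets generic probe code resolve a register name to a byte offset inside struct ftrace_regs. Roughly how a consumer uses the result (sketch; the pointer arithmetic is the assumed contract of ftrace_regs_query_register_offset()):

	static unsigned long fregs_read_named(struct ftrace_regs *fregs,
					      const char *name)
	{
		int off = ftrace_regs_query_register_offset(name);

		if (off < 0)
			return 0;	/* not saved in ftrace_regs */
		return *(unsigned long *)((char *)fregs + off);
	}

	/* e.g. fregs_read_named(fregs, "x1") yields a probed function's
	 * second argument register. */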
@@ -70,9 +112,6 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr) if (addr == FTRACE_ADDR) return &plt[FTRACE_PLT_IDX]; - if (addr == FTRACE_REGS_ADDR && - IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) - return &plt[FTRACE_REGS_PLT_IDX]; #endif return NULL; } @@ -154,25 +193,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) return ftrace_modify_code(pc, old, new, true); } -#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS -int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, - unsigned long addr) -{ - unsigned long pc = rec->ip; - u32 old, new; - - if (!ftrace_find_callable_addr(rec, NULL, &old_addr)) - return -EINVAL; - if (!ftrace_find_callable_addr(rec, NULL, &addr)) - return -EINVAL; - - old = aarch64_insn_gen_branch_imm(pc, old_addr, - AARCH64_INSN_BRANCH_LINK); - new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); - - return ftrace_modify_code(pc, old, new, true); -} - +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS /* * The compiler has inserted two NOPs before the regular function prologue. * All instrumented functions follow the AAPCS, so x0-x8 and x19-x30 are live, @@ -228,7 +249,7 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, * * Note: 'mod' is only set at module load time. */ - if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && + if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_ARGS) && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) && mod) { return aarch64_insn_patch_text_nosync((void *)pc, new); } @@ -246,7 +267,6 @@ void arch_ftrace_update_code(int command) command |= FTRACE_MAY_SLEEP; ftrace_modify_all_code(command); } -#endif /* CONFIG_DYNAMIC_FTRACE */ #ifdef CONFIG_FUNCTION_GRAPH_TRACER /* @@ -277,21 +297,11 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent, } } -#ifdef CONFIG_DYNAMIC_FTRACE - -#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { - /* - * When DYNAMIC_FTRACE_WITH_REGS is selected, `fregs` can never be NULL - * and arch_ftrace_get_regs(fregs) will always give a non-NULL pt_regs - * in which we can safely modify the LR. - */ - struct pt_regs *regs = arch_ftrace_get_regs(fregs); - unsigned long *parent = (unsigned long *)&procedure_link_pointer(regs); - - prepare_ftrace_return(ip, parent, frame_pointer(regs)); + prepare_ftrace_return(ip, &fregs->lr, fregs->fp); } #else /* @@ -323,6 +333,5 @@ int ftrace_disable_ftrace_graph_caller(void) { return ftrace_modify_graph_caller(false); } -#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ -#endif /* CONFIG_DYNAMIC_FTRACE */ +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 2196aad7b55b..952e17bd1c0b 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -462,6 +462,9 @@ SYM_FUNC_START_LOCAL(__primary_switched) bl early_fdt_map // Try mapping the FDT early mov x0, x20 // pass the full boot status bl init_feature_override // Parse cpu feature overrides +#ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS + bl scs_patch_vmlinux +#endif mov x0, x20 bl finalise_el2 // Prefer VHE if possible ldp x29, x30, [sp], #16 diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c index 38dbd3828f13..6ad5c6ef5329 100644 --- a/arch/arm64/kernel/irq.c +++ b/arch/arm64/kernel/irq.c @@ -10,20 +10,21 @@ * Copyright (C) 2012 ARM Ltd. 
*/ -#include <linux/irq.h> -#include <linux/memory.h> -#include <linux/smp.h> #include <linux/hardirq.h> #include <linux/init.h> +#include <linux/irq.h> #include <linux/irqchip.h> #include <linux/kprobes.h> +#include <linux/memory.h> #include <linux/scs.h> #include <linux/seq_file.h> +#include <linux/smp.h> #include <linux/vmalloc.h> #include <asm/daifflags.h> #include <asm/exception.h> -#include <asm/vmap_stack.h> #include <asm/softirq_stack.h> +#include <asm/stacktrace.h> +#include <asm/vmap_stack.h> /* Only access this in an NMI enter/exit */ DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts); @@ -41,7 +42,7 @@ static void init_irq_scs(void) { int cpu; - if (!IS_ENABLED(CONFIG_SHADOW_CALL_STACK)) + if (!scs_is_enabled()) return; for_each_possible_cpu(cpu) diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index 76b41e4ca9fa..5af4975caeb5 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -15,9 +15,11 @@ #include <linux/kernel.h> #include <linux/mm.h> #include <linux/moduleloader.h> +#include <linux/scs.h> #include <linux/vmalloc.h> #include <asm/alternative.h> #include <asm/insn.h> +#include <asm/scs.h> #include <asm/sections.h> void *module_alloc(unsigned long size) @@ -497,9 +499,6 @@ static int module_init_ftrace_plt(const Elf_Ehdr *hdr, __init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR); - if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) - __init_plt(&plts[FTRACE_REGS_PLT_IDX], FTRACE_REGS_ADDR); - mod->arch.ftrace_trampolines = plts; #endif return 0; @@ -514,5 +513,11 @@ int module_finalize(const Elf_Ehdr *hdr, if (s) apply_alternatives_module((void *)s->sh_addr, s->sh_size); + if (scs_is_dynamic()) { + s = find_section(hdr, sechdrs, ".init.eh_frame"); + if (s) + scs_patch((void *)s->sh_addr, s->sh_size); + } + return module_init_ftrace_plt(hdr, sechdrs, me); } diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c index 57c7c211f8c7..aa718d6a9274 100644 --- a/arch/arm64/kernel/paravirt.c +++ b/arch/arm64/kernel/paravirt.c @@ -141,10 +141,6 @@ static bool __init has_pv_steal_clock(void) { struct arm_smccc_res res; - /* To detect the presence of PV time support we require SMCCC 1.1+ */ - if (arm_smccc_1_1_get_conduit() == SMCCC_CONDUIT_NONE) - return false; - arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, ARM_SMCCC_HV_PV_TIME_FEATURES, &res); diff --git a/arch/arm64/kernel/patch-scs.c b/arch/arm64/kernel/patch-scs.c new file mode 100644 index 000000000000..1b3da02d5b74 --- /dev/null +++ b/arch/arm64/kernel/patch-scs.c @@ -0,0 +1,257 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022 - Google LLC + * Author: Ard Biesheuvel <ardb@google.com> + */ + +#include <linux/bug.h> +#include <linux/errno.h> +#include <linux/init.h> +#include <linux/linkage.h> +#include <linux/printk.h> +#include <linux/types.h> + +#include <asm/cacheflush.h> +#include <asm/scs.h> + +// +// This minimal DWARF CFI parser is partially based on the code in +// arch/arc/kernel/unwind.c, and on the document below: +// https://refspecs.linuxbase.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic/ehframechpt.html +// + +#define DW_CFA_nop 0x00 +#define DW_CFA_set_loc 0x01 +#define DW_CFA_advance_loc1 0x02 +#define DW_CFA_advance_loc2 0x03 +#define DW_CFA_advance_loc4 0x04 +#define DW_CFA_offset_extended 0x05 +#define DW_CFA_restore_extended 0x06 +#define DW_CFA_undefined 0x07 +#define DW_CFA_same_value 0x08 +#define DW_CFA_register 0x09 +#define DW_CFA_remember_state 0x0a +#define DW_CFA_restore_state 0x0b +#define DW_CFA_def_cfa 0x0c +#define 
DW_CFA_def_cfa_register 0x0d +#define DW_CFA_def_cfa_offset 0x0e +#define DW_CFA_def_cfa_expression 0x0f +#define DW_CFA_expression 0x10 +#define DW_CFA_offset_extended_sf 0x11 +#define DW_CFA_def_cfa_sf 0x12 +#define DW_CFA_def_cfa_offset_sf 0x13 +#define DW_CFA_val_offset 0x14 +#define DW_CFA_val_offset_sf 0x15 +#define DW_CFA_val_expression 0x16 +#define DW_CFA_lo_user 0x1c +#define DW_CFA_negate_ra_state 0x2d +#define DW_CFA_GNU_args_size 0x2e +#define DW_CFA_GNU_negative_offset_extended 0x2f +#define DW_CFA_hi_user 0x3f + +extern const u8 __eh_frame_start[], __eh_frame_end[]; + +enum { + PACIASP = 0xd503233f, + AUTIASP = 0xd50323bf, + SCS_PUSH = 0xf800865e, + SCS_POP = 0xf85f8e5e, +}; + +static void __always_inline scs_patch_loc(u64 loc) +{ + u32 insn = le32_to_cpup((void *)loc); + + switch (insn) { + case PACIASP: + *(u32 *)loc = cpu_to_le32(SCS_PUSH); + break; + case AUTIASP: + *(u32 *)loc = cpu_to_le32(SCS_POP); + break; + default: + /* + * While the DW_CFA_negate_ra_state directive is guaranteed to + * appear right after a PACIASP/AUTIASP instruction, it may + * also appear after a DW_CFA_restore_state directive that + * restores a state that is only partially accurate, and is + * followed by DW_CFA_negate_ra_state directive to toggle the + * PAC bit again. So we permit other instructions here, and ignore + * them. + */ + return; + } + dcache_clean_pou(loc, loc + sizeof(u32)); +} + +/* + * Skip one uleb128/sleb128 encoded quantity from the opcode stream. All bytes + * except the last one have bit #7 set. + */ +static int __always_inline skip_xleb128(const u8 **opcode, int size) +{ + u8 c; + + do { + c = *(*opcode)++; + size--; + } while (c & BIT(7)); + + return size; +} + +struct eh_frame { + /* + * The size of this frame if 0 < size < U32_MAX, 0 terminates the list. + */ + u32 size; + + /* + * The first frame is a Common Information Entry (CIE) frame, followed + * by one or more Frame Description Entry (FDE) frames. In the former + * case, this field is 0, otherwise it is the negated offset relative + * to the associated CIE frame. + */ + u32 cie_id_or_pointer; + + union { + struct { // CIE + u8 version; + u8 augmentation_string[]; + }; + + struct { // FDE + s32 initial_loc; + s32 range; + u8 opcodes[]; + }; + }; +}; + +static int noinstr scs_handle_fde_frame(const struct eh_frame *frame, + bool fde_has_augmentation_data, + int code_alignment_factor) +{ + int size = frame->size - offsetof(struct eh_frame, opcodes) + 4; + u64 loc = (u64)offset_to_ptr(&frame->initial_loc); + const u8 *opcode = frame->opcodes; + + if (fde_has_augmentation_data) { + int l; + + // assume single byte uleb128_t + if (WARN_ON(*opcode & BIT(7))) + return -ENOEXEC; + + l = *opcode++; + opcode += l; + size -= l + 1; + } + + /* + * Starting from 'loc', apply the CFA opcodes that advance the location + * pointer, and identify the locations of the PAC instructions. + */ + while (size-- > 0) { + switch (*opcode++) { + case DW_CFA_nop: + case DW_CFA_remember_state: + case DW_CFA_restore_state: + break; + + case DW_CFA_advance_loc1: + loc += *opcode++ * code_alignment_factor; + size--; + break; + + case DW_CFA_advance_loc2: + loc += *opcode++ * code_alignment_factor; + loc += (*opcode++ << 8) * code_alignment_factor; + size -= 2; + break; + + case DW_CFA_def_cfa: + case DW_CFA_offset_extended: + size = skip_xleb128(&opcode, size); + fallthrough; + case DW_CFA_def_cfa_offset: + case DW_CFA_def_cfa_offset_sf: + case DW_CFA_def_cfa_register: + case DW_CFA_same_value: + case DW_CFA_restore_extended: + case 0x80 ... 
0xbf: + size = skip_xleb128(&opcode, size); + break; + + case DW_CFA_negate_ra_state: + scs_patch_loc(loc - 4); + break; + + case 0x40 ... 0x7f: + // advance loc + loc += (opcode[-1] & 0x3f) * code_alignment_factor; + break; + + case 0xc0 ... 0xff: + break; + + default: + pr_err("unhandled opcode: %02x in FDE frame %lx\n", opcode[-1], (uintptr_t)frame); + return -ENOEXEC; + } + } + return 0; +} + +int noinstr scs_patch(const u8 eh_frame[], int size) +{ + const u8 *p = eh_frame; + + while (size > 4) { + const struct eh_frame *frame = (const void *)p; + bool fde_has_augmentation_data = true; + int code_alignment_factor = 1; + int ret; + + if (frame->size == 0 || + frame->size == U32_MAX || + frame->size > size) + break; + + if (frame->cie_id_or_pointer == 0) { + const u8 *p = frame->augmentation_string; + + /* a 'z' in the augmentation string must come first */ + fde_has_augmentation_data = *p == 'z'; + + /* + * The code alignment factor is a uleb128 encoded field + * but given that the only sensible values are 1 or 4, + * there is no point in decoding the whole thing. + */ + p += strlen(p) + 1; + if (!WARN_ON(*p & BIT(7))) + code_alignment_factor = *p; + } else { + ret = scs_handle_fde_frame(frame, + fde_has_augmentation_data, + code_alignment_factor); + if (ret) + return ret; + } + + p += sizeof(frame->size) + frame->size; + size -= sizeof(frame->size) + frame->size; + } + return 0; +} + +asmlinkage void __init scs_patch_vmlinux(void) +{ + if (!should_patch_pac_into_scs()) + return; + + WARN_ON(scs_patch(__eh_frame_start, __eh_frame_end - __eh_frame_start)); + icache_inval_all_pou(); + isb(); +} diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index 7b0643fe2f13..a15b3c1d15d9 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -1146,7 +1146,8 @@ static void __armv8pmu_probe_pmu(void *info) dfr0 = read_sysreg(id_aa64dfr0_el1); pmuver = cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMUVer_SHIFT); - if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF || pmuver == 0) + if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF || + pmuver == ID_AA64DFR0_EL1_PMUVer_NI) return; cpu_pmu->pmuver = pmuver; diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile index 839291430cb3..4c0ea3cd4ea4 100644 --- a/arch/arm64/kernel/pi/Makefile +++ b/arch/arm64/kernel/pi/Makefile @@ -7,6 +7,7 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) -fpie \ -I$(srctree)/scripts/dtc/libfdt -fno-stack-protector \ -include $(srctree)/include/linux/hidden.h \ -D__DISABLE_EXPORTS -ffreestanding -D__NO_FORTIFY \ + -fno-asynchronous-unwind-tables -fno-unwind-tables \ $(call cc-option,-fno-addrsig) # remove SCS flags from all objects in this directory diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c index 104101f633b1..968d5fffe233 100644 --- a/arch/arm64/kernel/probes/decode-insn.c +++ b/arch/arm64/kernel/probes/decode-insn.c @@ -24,7 +24,7 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn) * currently safe. Lastly, MSR instructions can do any number of nasty * things we can't handle during single-stepping. 
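skip_xleb128() in the parser above deliberately discards the value, since it only needs to step over these fields. For comparison, a full uleb128 decoder is barely longer (reference sketch of the standard DWARF encoding, not part of the patch):

	static u64 read_uleb128(const u8 **p)
	{
		u64 val = 0;
		unsigned int shift = 0;
		u8 c;

		do {
			c = *(*p)++;
			val |= (u64)(c & 0x7f) << shift;  /* 7 payload bits */
			shift += 7;
		} while (c & BIT(7));			  /* bit 7 = continue */

		return val;
	}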
*/ - if (aarch64_get_insn_class(insn) == AARCH64_INSN_CLS_BR_SYS) { + if (aarch64_insn_is_class_branch_sys(insn)) { if (aarch64_insn_is_branch(insn) || aarch64_insn_is_msr_imm(insn) || aarch64_insn_is_msr_reg(insn) || diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c index c9e4d0720285..f35d059a9a36 100644 --- a/arch/arm64/kernel/probes/kprobes.c +++ b/arch/arm64/kernel/probes/kprobes.c @@ -294,19 +294,12 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr) } break; - case KPROBE_HIT_ACTIVE: - case KPROBE_HIT_SSDONE: - /* - * In case the user-specified fault handler returned - * zero, try to fix up. - */ - if (fixup_exception(regs)) - return 1; } return 0; } -static void __kprobes kprobe_handler(struct pt_regs *regs) +static int __kprobes +kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr) { struct kprobe *p, *cur_kprobe; struct kprobe_ctlblk *kcb; @@ -316,39 +309,44 @@ static void __kprobes kprobe_handler(struct pt_regs *regs) cur_kprobe = kprobe_running(); p = get_kprobe((kprobe_opcode_t *) addr); + if (WARN_ON_ONCE(!p)) { + /* + * Something went wrong. This BRK used an immediate reserved + * for kprobes, but we couldn't find any corresponding probe. + */ + return DBG_HOOK_ERROR; + } - if (p) { - if (cur_kprobe) { - if (reenter_kprobe(p, regs, kcb)) - return; - } else { - /* Probe hit */ - set_current_kprobe(p); - kcb->kprobe_status = KPROBE_HIT_ACTIVE; - - /* - * If we have no pre-handler or it returned 0, we - * continue with normal processing. If we have a - * pre-handler and it returned non-zero, it will - * modify the execution path and no need to single - * stepping. Let's just reset current kprobe and exit. - */ - if (!p->pre_handler || !p->pre_handler(p, regs)) { - setup_singlestep(p, regs, kcb, 0); - } else - reset_current_kprobe(); - } + if (cur_kprobe) { + /* Hit a kprobe inside another kprobe */ + if (!reenter_kprobe(p, regs, kcb)) + return DBG_HOOK_ERROR; + } else { + /* Probe hit */ + set_current_kprobe(p); + kcb->kprobe_status = KPROBE_HIT_ACTIVE; + + /* + * If we have no pre-handler or it returned 0, we + * continue with normal processing. If we have a + * pre-handler and it returned non-zero, it will + * modify the execution path and there is no need to + * single-step. Let's just reset current kprobe and exit. + */ + if (!p->pre_handler || !p->pre_handler(p, regs)) + setup_singlestep(p, regs, kcb, 0); + else + reset_current_kprobe(); } - /* - * The breakpoint instruction was removed right - * after we hit it. Another cpu has removed - * either a probepoint or a debugger breakpoint - * at this address. In either case, no further - * handling of this interrupt is appropriate. - * Return back to original instruction, and continue. - */ + + return DBG_HOOK_HANDLED; } +static struct break_hook kprobes_break_hook = { + .imm = KPROBES_BRK_IMM, + .fn = kprobe_breakpoint_handler, +}; + static int __kprobes kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr) { @@ -373,18 +371,6 @@ static struct break_hook kprobes_break_ss_hook = { .fn = kprobe_breakpoint_ss_handler, }; -static int __kprobes -kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr) -{ - kprobe_handler(regs); - return DBG_HOOK_HANDLED; -} - -static struct break_hook kprobes_break_hook = { - .imm = KPROBES_BRK_IMM, - .fn = kprobe_breakpoint_handler, -}; - /* * Provide a blacklist of symbols identifying ranges which cannot be kprobed. * This blacklist is exposed to userspace via debugfs (kprobes/blacklist). 
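After this rework each BRK user owns a break_hook, as the kprobes conversion above shows. The general registration pattern, as a sketch (the immediate is hypothetical and would need reserving in brk-imm.h; register_kernel_break_hook() is the existing arm64 interface):

	static int my_brk_handler(struct pt_regs *regs, unsigned long esr)
	{
		pr_info("BRK hit at %pS\n", (void *)instruction_pointer(regs));
		/* Step over the BRK or we would trap forever. */
		arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
		return DBG_HOOK_HANDLED;	/* DBG_HOOK_ERROR defers */
	}

	static struct break_hook my_break_hook = {
		.imm = 0x7aa,			/* hypothetical immediate */
		.fn  = my_brk_handler,
	};

	/* at init time: register_kernel_break_hook(&my_break_hook); */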
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 044a7d7f1f6a..19cd05eea3f0 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -331,6 +331,8 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) clear_tsk_thread_flag(dst, TIF_SME); } + dst->thread.fp_type = FP_STATE_FPSIMD; + /* clear any pending asynchronous tag fault raised by the parent */ clear_tsk_thread_flag(dst, TIF_MTE_ASYNC_FAULT); diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c index bfce41c2a53b..fca9cc6f5581 100644 --- a/arch/arm64/kernel/proton-pack.c +++ b/arch/arm64/kernel/proton-pack.c @@ -521,10 +521,13 @@ bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope) return state != SPECTRE_UNAFFECTED; } -static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr) +bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr) { - if (user_mode(regs)) - return 1; + const u32 instr_mask = ~(1U << PSTATE_Imm_shift); + const u32 instr_val = 0xd500401f | PSTATE_SSBS; + + if ((instr & instr_mask) != instr_val) + return false; if (instr & BIT(PSTATE_Imm_shift)) regs->pstate |= PSR_SSBS_BIT; @@ -532,19 +535,11 @@ static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr) regs->pstate &= ~PSR_SSBS_BIT; arm64_skip_faulting_instruction(regs, 4); - return 0; + return true; } -static struct undef_hook ssbs_emulation_hook = { - .instr_mask = ~(1U << PSTATE_Imm_shift), - .instr_val = 0xd500401f | PSTATE_SSBS, - .fn = ssbs_emulation_handler, -}; - static enum mitigation_state spectre_v4_enable_hw_mitigation(void) { - static bool undef_hook_registered = false; - static DEFINE_RAW_SPINLOCK(hook_lock); enum mitigation_state state; /* @@ -555,13 +550,6 @@ static enum mitigation_state spectre_v4_enable_hw_mitigation(void) if (state != SPECTRE_MITIGATED || !this_cpu_has_cap(ARM64_SSBS)) return state; - raw_spin_lock(&hook_lock); - if (!undef_hook_registered) { - register_undef_hook(&ssbs_emulation_hook); - undef_hook_registered = true; - } - raw_spin_unlock(&hook_lock); - if (spectre_v4_mitigations_off()) { sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS); set_pstate_ssbs(1); diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index c2fb5755bbec..979dbdc36d52 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -907,8 +907,7 @@ static int sve_set_common(struct task_struct *target, ret = __fpr_set(target, regset, pos, count, kbuf, ubuf, SVE_PT_FPSIMD_OFFSET); clear_tsk_thread_flag(target, TIF_SVE); - if (type == ARM64_VEC_SME) - fpsimd_force_sync_to_sve(target); + target->thread.fp_type = FP_STATE_FPSIMD; goto out; } @@ -931,6 +930,7 @@ static int sve_set_common(struct task_struct *target, if (!target->thread.sve_state) { ret = -ENOMEM; clear_tsk_thread_flag(target, TIF_SVE); + target->thread.fp_type = FP_STATE_FPSIMD; goto out; } @@ -942,6 +942,7 @@ static int sve_set_common(struct task_struct *target, */ fpsimd_sync_to_sve(target); set_tsk_thread_flag(target, TIF_SVE); + target->thread.fp_type = FP_STATE_SVE; BUILD_BUG_ON(SVE_PT_SVE_OFFSET != sizeof(header)); start = SVE_PT_SVE_OFFSET; diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c index d56e170e1ca7..830be01af32d 100644 --- a/arch/arm64/kernel/sdei.c +++ b/arch/arm64/kernel/sdei.c @@ -144,7 +144,7 @@ static int init_sdei_scs(void) int cpu; int err = 0; - if (!IS_ENABLED(CONFIG_SHADOW_CALL_STACK)) + if (!scs_is_enabled()) return 0; for_each_possible_cpu(cpu) { diff --git a/arch/arm64/kernel/setup.c 
b/arch/arm64/kernel/setup.c index fea3223704b6..12cfe9d0d3fa 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -30,6 +30,7 @@ #include <linux/efi.h> #include <linux/psci.h> #include <linux/sched/task.h> +#include <linux/scs.h> #include <linux/mm.h> #include <asm/acpi.h> @@ -42,6 +43,7 @@ #include <asm/cpu_ops.h> #include <asm/kasan.h> #include <asm/numa.h> +#include <asm/scs.h> #include <asm/sections.h> #include <asm/setup.h> #include <asm/smp_plat.h> @@ -312,6 +314,8 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p) jump_label_init(); parse_early_param(); + dynamic_scs_init(); + /* * Unmask asynchronous aborts and fiq after bringing up possible * earlycon. (Report possible System Errors once we can report this diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 9ad911f1647c..e0d09bf5b01b 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -207,6 +207,7 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx) __get_user_error(fpsimd.fpcr, &ctx->fpcr, err); clear_thread_flag(TIF_SVE); + current->thread.fp_type = FP_STATE_FPSIMD; /* load the hardware registers from the fpsimd_state structure */ if (!err) @@ -292,6 +293,7 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user) if (sve.head.size <= sizeof(*user->sve)) { clear_thread_flag(TIF_SVE); current->thread.svcr &= ~SVCR_SM_MASK; + current->thread.fp_type = FP_STATE_FPSIMD; goto fpsimd_only; } @@ -327,6 +329,7 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user) current->thread.svcr |= SVCR_SM_MASK; else set_thread_flag(TIF_SVE); + current->thread.fp_type = FP_STATE_SVE; fpsimd_only: /* copy the FP and status/control registers */ @@ -932,9 +935,11 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, * FPSIMD register state - flush the saved FPSIMD * register state in case it gets loaded. */ - if (current->thread.svcr & SVCR_SM_MASK) + if (current->thread.svcr & SVCR_SM_MASK) { memset(&current->thread.uw.fpsimd_state, 0, sizeof(current->thread.uw.fpsimd_state)); + current->thread.fp_type = FP_STATE_FPSIMD; + } current->thread.svcr &= ~(SVCR_ZA_MASK | SVCR_SM_MASK); diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c index 634279b3b03d..117e2c180f3c 100644 --- a/arch/arm64/kernel/stacktrace.c +++ b/arch/arm64/kernel/stacktrace.c @@ -23,8 +23,8 @@ * * The regs must be on a stack currently owned by the calling task. */ -static inline void unwind_init_from_regs(struct unwind_state *state, - struct pt_regs *regs) +static __always_inline void unwind_init_from_regs(struct unwind_state *state, - struct pt_regs *regs) wait +static __always_inline void unwind_init_from_regs(struct unwind_state *state, + struct pt_regs *regs) { unwind_init_common(state, current); @@ -58,8 +58,8 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state) * duration of the unwind, or the unwind will be bogus. It is never valid to * call this for the current task. 
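dynamic_scs_init() above is why this series converts IS_ENABLED(CONFIG_SHADOW_CALL_STACK) checks to scs_is_enabled(): with dynamic SCS the feature can be compiled in yet switched off at boot in favour of pointer authentication. The resulting allocation pattern, sketched after the irq/sdei users (foo_* names are placeholders):

	static DEFINE_PER_CPU(unsigned long *, foo_scs_ptr);

	static void init_foo_scs(void)
	{
		int cpu;

		/* True only if SCS is built in and still enabled at runtime. */
		if (!scs_is_enabled())
			return;

		for_each_possible_cpu(cpu)
			per_cpu(foo_scs_ptr, cpu) = scs_alloc(cpu_to_node(cpu));
	}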
*/ -static inline void unwind_init_from_task(struct unwind_state *state, - struct task_struct *task) +static __always_inline void unwind_init_from_task(struct unwind_state *state, + struct task_struct *task) { unwind_init_common(state, task); @@ -186,7 +186,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl) : stackinfo_get_unknown(); \ }) -noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry, +noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie, struct task_struct *task, struct pt_regs *regs) { diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c index 8b02d310838f..e7163f31f716 100644 --- a/arch/arm64/kernel/suspend.c +++ b/arch/arm64/kernel/suspend.c @@ -60,6 +60,8 @@ void notrace __cpu_suspend_exit(void) * PSTATE was not saved over suspend/resume, re-enable any detected * features that might not have been set correctly. */ + if (cpus_have_const_cap(ARM64_HAS_DIT)) + set_pstate_dit(1); __uaccess_enable_hw_pan(); /* diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index d72e8f23422d..a5de47e3df2b 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -183,21 +183,12 @@ static inline void fp_user_discard(void) if (!system_supports_sve()) return; - /* - * If SME is not active then disable SVE, the registers will - * be cleared when userspace next attempts to access them and - * we do not need to track the SVE register state until then. - */ - clear_thread_flag(TIF_SVE); + if (test_thread_flag(TIF_SVE)) { + unsigned int sve_vq_minus_one; - /* - * task_fpsimd_load() won't be called to update CPACR_EL1 in - * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only - * happens if a context switch or kernel_neon_begin() or context - * modification (sigreturn, ptrace) intervenes. - * So, ensure that CPACR_EL1 is already correct for the fast-path case. 
- */ - sve_user_disable(); + sve_vq_minus_one = sve_vq_from_vl(task_get_sve_vl(current)) - 1; + sve_flush_live(true, sve_vq_minus_one); + } } void do_el0_svc(struct pt_regs *regs) diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index 23d281ed7621..4c0caa589e12 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -373,51 +373,22 @@ void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) regs->pstate &= ~PSR_BTYPE_MASK; } -static LIST_HEAD(undef_hook); -static DEFINE_RAW_SPINLOCK(undef_lock); - -void register_undef_hook(struct undef_hook *hook) +static int user_insn_read(struct pt_regs *regs, u32 *insnp) { - unsigned long flags; - - raw_spin_lock_irqsave(&undef_lock, flags); - list_add(&hook->node, &undef_hook); - raw_spin_unlock_irqrestore(&undef_lock, flags); -} - -void unregister_undef_hook(struct undef_hook *hook) -{ - unsigned long flags; - - raw_spin_lock_irqsave(&undef_lock, flags); - list_del(&hook->node); - raw_spin_unlock_irqrestore(&undef_lock, flags); -} - -static int call_undef_hook(struct pt_regs *regs) -{ - struct undef_hook *hook; - unsigned long flags; u32 instr; - int (*fn)(struct pt_regs *regs, u32 instr) = NULL; unsigned long pc = instruction_pointer(regs); - if (!user_mode(regs)) { - __le32 instr_le; - if (get_kernel_nofault(instr_le, (__le32 *)pc)) - goto exit; - instr = le32_to_cpu(instr_le); - } else if (compat_thumb_mode(regs)) { + if (compat_thumb_mode(regs)) { /* 16-bit Thumb instruction */ __le16 instr_le; if (get_user(instr_le, (__le16 __user *)pc)) - goto exit; + return -EFAULT; instr = le16_to_cpu(instr_le); if (aarch32_insn_is_wide(instr)) { u32 instr2; if (get_user(instr_le, (__le16 __user *)(pc + 2))) - goto exit; + return -EFAULT; instr2 = le16_to_cpu(instr_le); instr = (instr << 16) | instr2; } @@ -425,19 +396,12 @@ static int call_undef_hook(struct pt_regs *regs) /* 32-bit ARM instruction */ __le32 instr_le; if (get_user(instr_le, (__le32 __user *)pc)) - goto exit; + return -EFAULT; instr = le32_to_cpu(instr_le); } - raw_spin_lock_irqsave(&undef_lock, flags); - list_for_each_entry(hook, &undef_hook, node) - if ((instr & hook->instr_mask) == hook->instr_val && - (regs->pstate & hook->pstate_mask) == hook->pstate_val) - fn = hook->fn; - - raw_spin_unlock_irqrestore(&undef_lock, flags); -exit: - return fn ? 
fn(regs, instr) : 1; + *insnp = instr; + return 0; } void force_signal_inject(int signal, int code, unsigned long address, unsigned long err) @@ -486,21 +450,40 @@ void arm64_notify_segfault(unsigned long addr) force_signal_inject(SIGSEGV, code, addr, 0); } -void do_undefinstr(struct pt_regs *regs, unsigned long esr) +void do_el0_undef(struct pt_regs *regs, unsigned long esr) { + u32 insn; + /* check for AArch32 breakpoint instructions */ if (!aarch32_break_handler(regs)) return; - if (call_undef_hook(regs) == 0) + if (user_insn_read(regs, &insn)) + goto out_err; + + if (try_emulate_mrs(regs, insn)) return; - if (!user_mode(regs)) - die("Oops - Undefined instruction", regs, esr); + if (try_emulate_armv8_deprecated(regs, insn)) + return; +out_err: force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0); } -NOKPROBE_SYMBOL(do_undefinstr); + +void do_el1_undef(struct pt_regs *regs, unsigned long esr) +{ + u32 insn; + + if (aarch64_insn_read((void *)regs->pc, &insn)) + goto out_err; + + if (try_emulate_el1_ssbs(regs, insn)) + return; + +out_err: + die("Oops - Undefined instruction", regs, esr); +} void do_el0_bti(struct pt_regs *regs) { @@ -511,7 +494,6 @@ void do_el1_bti(struct pt_regs *regs, unsigned long esr) { die("Oops - BTI", regs, esr); } -NOKPROBE_SYMBOL(do_el1_bti); void do_el0_fpac(struct pt_regs *regs, unsigned long esr) { @@ -526,7 +508,6 @@ void do_el1_fpac(struct pt_regs *regs, unsigned long esr) */ die("Oops - FPAC", regs, esr); } -NOKPROBE_SYMBOL(do_el1_fpac) #define __user_cache_maint(insn, address, res) \ if (address >= TASK_SIZE_MAX) { \ @@ -748,7 +729,7 @@ static const struct sys64_hook cp15_64_hooks[] = { {}, }; -void do_cp15instr(unsigned long esr, struct pt_regs *regs) +void do_el0_cp15(unsigned long esr, struct pt_regs *regs) { const struct sys64_hook *hook, *hook_base; @@ -769,7 +750,7 @@ void do_cp15instr(unsigned long esr, struct pt_regs *regs) hook_base = cp15_64_hooks; break; default: - do_undefinstr(regs, esr); + do_el0_undef(regs, esr); return; } @@ -784,12 +765,11 @@ void do_cp15instr(unsigned long esr, struct pt_regs *regs) * EL0. Fall back to our usual undefined instruction handler * so that we handle these consistently. */ - do_undefinstr(regs, esr); + do_el0_undef(regs, esr); } -NOKPROBE_SYMBOL(do_cp15instr); #endif -void do_sysinstr(unsigned long esr, struct pt_regs *regs) +void do_el0_sys(unsigned long esr, struct pt_regs *regs) { const struct sys64_hook *hook; @@ -804,9 +784,8 @@ void do_sysinstr(unsigned long esr, struct pt_regs *regs) * back to our usual undefined instruction handler so that we handle * these consistently. */ - do_undefinstr(regs, esr); + do_el0_undef(regs, esr); } -NOKPROBE_SYMBOL(do_sysinstr); static const char *esr_class_str[] = { [0 ... ESR_ELx_EC_MAX] = "UNRECOGNIZED EC", diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index 45131e354e27..4c13dafc98b8 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -121,6 +121,17 @@ jiffies = jiffies_64; #define TRAMP_TEXT #endif +#ifdef CONFIG_UNWIND_TABLES +#define UNWIND_DATA_SECTIONS \ + .eh_frame : { \ + __eh_frame_start = .; \ + *(.eh_frame) \ + __eh_frame_end = .; \ + } +#else +#define UNWIND_DATA_SECTIONS +#endif + /* * The size of the PE/COFF section that covers the kernel image, which * runs from _stext to _edata, must be a round multiple of the PE/COFF @@ -231,6 +242,8 @@ SECTIONS __alt_instructions_end = .; } + UNWIND_DATA_SECTIONS + . 
= ALIGN(SEGMENT_ALIGN); __inittext_end = .; __initdata_begin = .; diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index ec8e4494873d..02dd7e9ebd39 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -75,11 +75,12 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) { BUG_ON(!current->mm); - BUG_ON(test_thread_flag(TIF_SVE)); if (!system_supports_fpsimd()) return; + fpsimd_kvm_prepare(); + vcpu->arch.fp_state = FP_STATE_HOST_OWNED; vcpu_clear_flag(vcpu, HOST_SVE_ENABLED); @@ -129,20 +130,31 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu) */ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) { + struct cpu_fp_state fp_state; + WARN_ON_ONCE(!irqs_disabled()); if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) { + /* * Currently we do not support SME guests so SVCR is * always 0 and we just need a variable to point to. */ - fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs, - vcpu->arch.sve_state, - vcpu->arch.sve_max_vl, - NULL, 0, &vcpu->arch.svcr); + fp_state.st = &vcpu->arch.ctxt.fp_regs; + fp_state.sve_state = vcpu->arch.sve_state; + fp_state.sve_vl = vcpu->arch.sve_max_vl; + fp_state.za_state = NULL; + fp_state.svcr = &vcpu->arch.svcr; + fp_state.fp_type = &vcpu->arch.fp_type; + + if (vcpu_has_sve(vcpu)) + fp_state.to_save = FP_STATE_SVE; + else + fp_state.to_save = FP_STATE_FPSIMD; + + fpsimd_bind_state_to_cpu(&fp_state); clear_thread_flag(TIF_FOREIGN_FPSTATE); - update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu)); } } @@ -199,7 +211,5 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0); } - update_thread_flag(TIF_SVE, 0); - local_irq_restore(flags); } diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index be0a2bc3e20d..530347cdebe3 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -96,6 +96,7 @@ KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI) # when profile optimization is applied. gen-hyprel does not support SHT_REL and # causes a build failure. Remove profile optimization flags. KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%, $(KBUILD_CFLAGS)) +KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables # KVM nVHE code is run at a different exception code with a different map, so # compiler instrumentation that inserts callbacks or checks into the code may diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index f4a7c5abcbca..608e4f25161d 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1121,8 +1121,8 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r case SYS_ID_DFR0_EL1: /* Limit guests to PMUv3 for ARMv8.4 */ val = cpuid_feature_cap_perfmon_field(val, - ID_DFR0_PERFMON_SHIFT, - kvm_vcpu_has_pmu(vcpu) ? ID_DFR0_PERFMON_8_4 : 0); + ID_DFR0_EL1_PerfMon_SHIFT, + kvm_vcpu_has_pmu(vcpu) ? 
ID_DFR0_EL1_PerfMon_PMUv3p4 : 0); break; } diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c index 49e972beeac7..924934cb85ee 100644 --- a/arch/arm64/lib/insn.c +++ b/arch/arm64/lib/insn.c @@ -20,91 +20,6 @@ #define AARCH64_INSN_N_BIT BIT(22) #define AARCH64_INSN_LSL_12 BIT(22) -static const int aarch64_insn_encoding_class[] = { - AARCH64_INSN_CLS_UNKNOWN, - AARCH64_INSN_CLS_UNKNOWN, - AARCH64_INSN_CLS_SVE, - AARCH64_INSN_CLS_UNKNOWN, - AARCH64_INSN_CLS_LDST, - AARCH64_INSN_CLS_DP_REG, - AARCH64_INSN_CLS_LDST, - AARCH64_INSN_CLS_DP_FPSIMD, - AARCH64_INSN_CLS_DP_IMM, - AARCH64_INSN_CLS_DP_IMM, - AARCH64_INSN_CLS_BR_SYS, - AARCH64_INSN_CLS_BR_SYS, - AARCH64_INSN_CLS_LDST, - AARCH64_INSN_CLS_DP_REG, - AARCH64_INSN_CLS_LDST, - AARCH64_INSN_CLS_DP_FPSIMD, -}; - -enum aarch64_insn_encoding_class __kprobes aarch64_get_insn_class(u32 insn) -{ - return aarch64_insn_encoding_class[(insn >> 25) & 0xf]; -} - -bool __kprobes aarch64_insn_is_steppable_hint(u32 insn) -{ - if (!aarch64_insn_is_hint(insn)) - return false; - - switch (insn & 0xFE0) { - case AARCH64_INSN_HINT_XPACLRI: - case AARCH64_INSN_HINT_PACIA_1716: - case AARCH64_INSN_HINT_PACIB_1716: - case AARCH64_INSN_HINT_PACIAZ: - case AARCH64_INSN_HINT_PACIASP: - case AARCH64_INSN_HINT_PACIBZ: - case AARCH64_INSN_HINT_PACIBSP: - case AARCH64_INSN_HINT_BTI: - case AARCH64_INSN_HINT_BTIC: - case AARCH64_INSN_HINT_BTIJ: - case AARCH64_INSN_HINT_BTIJC: - case AARCH64_INSN_HINT_NOP: - return true; - default: - return false; - } -} - -bool aarch64_insn_is_branch_imm(u32 insn) -{ - return (aarch64_insn_is_b(insn) || aarch64_insn_is_bl(insn) || - aarch64_insn_is_tbz(insn) || aarch64_insn_is_tbnz(insn) || - aarch64_insn_is_cbz(insn) || aarch64_insn_is_cbnz(insn) || - aarch64_insn_is_bcond(insn)); -} - -bool __kprobes aarch64_insn_uses_literal(u32 insn) -{ - /* ldr/ldrsw (literal), prfm */ - - return aarch64_insn_is_ldr_lit(insn) || - aarch64_insn_is_ldrsw_lit(insn) || - aarch64_insn_is_adr_adrp(insn) || - aarch64_insn_is_prfm_lit(insn); -} - -bool __kprobes aarch64_insn_is_branch(u32 insn) -{ - /* b, bl, cb*, tb*, ret*, b.cond, br*, blr* */ - - return aarch64_insn_is_b(insn) || - aarch64_insn_is_bl(insn) || - aarch64_insn_is_cbz(insn) || - aarch64_insn_is_cbnz(insn) || - aarch64_insn_is_tbz(insn) || - aarch64_insn_is_tbnz(insn) || - aarch64_insn_is_ret(insn) || - aarch64_insn_is_ret_auth(insn) || - aarch64_insn_is_br(insn) || - aarch64_insn_is_br_auth(insn) || - aarch64_insn_is_blr(insn) || - aarch64_insn_is_blr_auth(insn) || - aarch64_insn_is_bcond(insn); -} - static int __kprobes aarch64_get_imm_shift_mask(enum aarch64_insn_imm_type type, u32 *maskp, int *shiftp) { @@ -435,16 +350,6 @@ u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr, offset >> 2); } -u32 __kprobes aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op) -{ - return aarch64_insn_get_hint_value() | op; -} - -u32 __kprobes aarch64_insn_gen_nop(void) -{ - return aarch64_insn_gen_hint(AARCH64_INSN_HINT_NOP); -} - u32 aarch64_insn_gen_branch_reg(enum aarch64_insn_register reg, enum aarch64_insn_branch_type type) { @@ -816,76 +721,6 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result, } #endif -static u32 aarch64_insn_encode_prfm_imm(enum aarch64_insn_prfm_type type, - enum aarch64_insn_prfm_target target, - enum aarch64_insn_prfm_policy policy, - u32 insn) -{ - u32 imm_type = 0, imm_target = 0, imm_policy = 0; - - switch (type) { - case AARCH64_INSN_PRFM_TYPE_PLD: - break; - case AARCH64_INSN_PRFM_TYPE_PLI: - imm_type = BIT(0); - break; - case 
AARCH64_INSN_PRFM_TYPE_PST: - imm_type = BIT(1); - break; - default: - pr_err("%s: unknown prfm type encoding %d\n", __func__, type); - return AARCH64_BREAK_FAULT; - } - - switch (target) { - case AARCH64_INSN_PRFM_TARGET_L1: - break; - case AARCH64_INSN_PRFM_TARGET_L2: - imm_target = BIT(0); - break; - case AARCH64_INSN_PRFM_TARGET_L3: - imm_target = BIT(1); - break; - default: - pr_err("%s: unknown prfm target encoding %d\n", __func__, target); - return AARCH64_BREAK_FAULT; - } - - switch (policy) { - case AARCH64_INSN_PRFM_POLICY_KEEP: - break; - case AARCH64_INSN_PRFM_POLICY_STRM: - imm_policy = BIT(0); - break; - default: - pr_err("%s: unknown prfm policy encoding %d\n", __func__, policy); - return AARCH64_BREAK_FAULT; - } - - /* In this case, imm5 is encoded into Rt field. */ - insn &= ~GENMASK(4, 0); - insn |= imm_policy | (imm_target << 1) | (imm_type << 3); - - return insn; -} - -u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base, - enum aarch64_insn_prfm_type type, - enum aarch64_insn_prfm_target target, - enum aarch64_insn_prfm_policy policy) -{ - u32 insn = aarch64_insn_get_prfm_value(); - - insn = aarch64_insn_encode_ldst_size(AARCH64_INSN_SIZE_64, insn); - - insn = aarch64_insn_encode_prfm_imm(type, target, policy, insn); - - insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, - base); - - return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_12, insn, 0); -} - u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst, enum aarch64_insn_register src, int imm, enum aarch64_insn_variant variant, diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S index 1b7c93ae7e63..5018ac03b6bf 100644 --- a/arch/arm64/lib/mte.S +++ b/arch/arm64/lib/mte.S @@ -18,7 +18,7 @@ */ .macro multitag_transfer_size, reg, tmp mrs_s \reg, SYS_GMID_EL1 - ubfx \reg, \reg, #GMID_EL1_BS_SHIFT, #GMID_EL1_BS_SIZE + ubfx \reg, \reg, #GMID_EL1_BS_SHIFT, #GMID_EL1_BS_WIDTH mov \tmp, #4 lsl \reg, \tmp, \reg .endm diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 5b391490e045..74f76514a48d 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -353,6 +353,11 @@ static bool is_el1_mte_sync_tag_check_fault(unsigned long esr) return false; } +static bool is_translation_fault(unsigned long esr) +{ + return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_FAULT; +} + static void __do_kernel_fault(unsigned long addr, unsigned long esr, struct pt_regs *regs) { @@ -385,7 +390,8 @@ static void __do_kernel_fault(unsigned long addr, unsigned long esr, } else if (addr < PAGE_SIZE) { msg = "NULL pointer dereference"; } else { - if (kfence_handle_page_fault(addr, esr & ESR_ELx_WNR, regs)) + if (is_translation_fault(esr) && + kfence_handle_page_fault(addr, esr & ESR_ELx_WNR, regs)) return; msg = "paging request"; diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c index 35e9a468d13e..cd8d96e1fa1a 100644 --- a/arch/arm64/mm/hugetlbpage.c +++ b/arch/arm64/mm/hugetlbpage.c @@ -559,3 +559,24 @@ bool __init arch_hugetlb_valid_size(unsigned long size) { return __hugetlb_valid_size(size); } + +pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) +{ + if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198) && + cpus_have_const_cap(ARM64_WORKAROUND_2645198)) { + /* + * Break-before-make (BBM) is required for all user space mappings + * when the permission changes from executable to non-executable + * in cases where cpu is affected with errata #2645198. 
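+	 * (This path is reached, for example, when an mprotect() call drops PROT_EXEC from a user mapping.)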
+ */ + if (pte_user_exec(READ_ONCE(*ptep))) + return huge_ptep_clear_flush(vma, addr, ptep); + } + return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); +} + +void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, + pte_t old_pte, pte_t pte) +{ + set_huge_pte_at(vma->vm_mm, addr, ptep, pte); +} diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 4b4651ee47f2..58a0bb2c17f1 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -96,6 +96,8 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1; #define CRASH_ADDR_LOW_MAX arm64_dma_phys_limit #define CRASH_ADDR_HIGH_MAX (PHYS_MASK + 1) +#define DEFAULT_CRASH_KERNEL_LOW_SIZE (128UL << 20) + static int __init reserve_crashkernel_low(unsigned long long low_size) { unsigned long long low_base; @@ -130,6 +132,7 @@ static void __init reserve_crashkernel(void) unsigned long long crash_max = CRASH_ADDR_LOW_MAX; char *cmdline = boot_command_line; int ret; + bool fixed_base = false; if (!IS_ENABLED(CONFIG_KEXEC_CORE)) return; @@ -147,7 +150,9 @@ static void __init reserve_crashkernel(void) * is not allowed. */ ret = parse_crashkernel_low(cmdline, 0, &crash_low_size, &crash_base); - if (ret && (ret != -ENOENT)) + if (ret == -ENOENT) + crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE; + else if (ret) return; crash_max = CRASH_ADDR_HIGH_MAX; @@ -159,18 +164,32 @@ static void __init reserve_crashkernel(void) crash_size = PAGE_ALIGN(crash_size); /* User specifies base address explicitly. */ - if (crash_base) + if (crash_base) { + fixed_base = true; crash_max = crash_base + crash_size; + } +retry: crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, crash_base, crash_max); if (!crash_base) { + /* + * If the first attempt was for low memory, fall back to + * high memory, the minimum required low memory will be + * reserved later. + */ + if (!fixed_base && (crash_max == CRASH_ADDR_LOW_MAX)) { + crash_max = CRASH_ADDR_HIGH_MAX; + crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE; + goto retry; + } + pr_warn("cannot allocate crashkernel (size:0x%llx)\n", crash_size); return; } - if ((crash_base >= CRASH_ADDR_LOW_MAX) && + if ((crash_base > CRASH_ADDR_LOW_MAX - crash_low_size) && crash_low_size && reserve_crashkernel_low(crash_low_size)) { memblock_phys_free(crash_base, crash_size); return; diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 9a7c38965154..2368e4daa23d 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1196,7 +1196,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END)); - if (!ARM64_KERNEL_USES_PMD_MAPS) + if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES)) return vmemmap_populate_basepages(start, end, node, altmap); do { @@ -1702,3 +1702,24 @@ static int __init prevent_bootmem_remove_init(void) } early_initcall(prevent_bootmem_remove_init); #endif + +pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) +{ + if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198) && + cpus_have_const_cap(ARM64_WORKAROUND_2645198)) { + /* + * Break-before-make (BBM) is required for all user space mappings + * when the permission changes from executable to non-executable + * in cases where cpu is affected with errata #2645198. 
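+	 * The clear-and-flush is the break step of BBM; the new pte is only written back in ptep_modify_prot_commit().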
+ */ + if (pte_user_exec(READ_ONCE(*ptep))) + return ptep_clear_flush(vma, addr, ptep); + } + return ptep_get_and_clear(vma->vm_mm, addr, ptep); +} + +void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, + pte_t old_pte, pte_t pte) +{ + set_pte_at(vma->vm_mm, addr, ptep, pte); +} diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index b9ecbbae1e1a..066fa60b93d2 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -189,16 +189,12 @@ SYM_FUNC_END(cpu_do_resume) * called by anything else. It can only be executed from a TTBR0 mapping. */ SYM_TYPED_FUNC_START(idmap_cpu_replace_ttbr1) - save_and_disable_daif flags=x2 - __idmap_cpu_set_reserved_ttbr1 x1, x3 offset_ttbr1 x0, x3 msr ttbr1_el1, x0 isb - restore_daif x2 - ret SYM_FUNC_END(idmap_cpu_replace_ttbr1) .popsection diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps index f1c0347ec31a..dfeb2c51e257 100644 --- a/arch/arm64/tools/cpucaps +++ b/arch/arm64/tools/cpucaps @@ -20,6 +20,7 @@ HAS_CNP HAS_CRC32 HAS_DCPODP HAS_DCPOP +HAS_DIT HAS_E0PD HAS_ECV HAS_EPAN @@ -70,6 +71,7 @@ WORKAROUND_2038923 WORKAROUND_2064142 WORKAROUND_2077057 WORKAROUND_2457168 +WORKAROUND_2645198 WORKAROUND_2658417 WORKAROUND_TRBE_OVERWRITE_FILL_MODE WORKAROUND_TSB_FLUSH_FAILURE diff --git a/arch/arm64/tools/gen-sysreg.awk b/arch/arm64/tools/gen-sysreg.awk index db461921d256..c350164a3955 100755 --- a/arch/arm64/tools/gen-sysreg.awk +++ b/arch/arm64/tools/gen-sysreg.awk @@ -33,7 +33,7 @@ function expect_fields(nf) { # Print a CPP macro definition, padded with spaces so that the macro bodies # line up in a column function define(name, val) { - printf "%-48s%s\n", "#define " name, val + printf "%-56s%s\n", "#define " name, val } # Print standard BITMASK/SHIFT/WIDTH CPP definitions for a field diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg index 384757a7eda9..184e58fd5631 100644 --- a/arch/arm64/tools/sysreg +++ b/arch/arm64/tools/sysreg @@ -46,6 +46,760 @@ # feature that introduces them (eg, FEAT_LS64_ACCDATA introduces enumeration # item ACCDATA) though it may be more taseful to do something else. 
+Sysreg ID_PFR0_EL1 3 0 0 1 0 +Res0 63:32 +Enum 31:28 RAS + 0b0000 NI + 0b0001 RAS + 0b0010 RASv1p1 +EndEnum +Enum 27:24 DIT + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 AMU + 0b0000 NI + 0b0001 AMUv1 + 0b0010 AMUv1p1 +EndEnum +Enum 19:16 CSV2 + 0b0000 UNDISCLOSED + 0b0001 IMP + 0b0010 CSV2p1 +EndEnum +Enum 15:12 State3 + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 State2 + 0b0000 NI + 0b0001 NO_CV + 0b0010 CV +EndEnum +Enum 7:4 State1 + 0b0000 NI + 0b0001 THUMB + 0b0010 THUMB2 +EndEnum +Enum 3:0 State0 + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_PFR1_EL1 3 0 0 1 1 +Res0 63:32 +Enum 31:28 GIC + 0b0000 NI + 0b0001 GICv3 + 0b0010 GICv4p1 +EndEnum +Enum 27:24 Virt_frac + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 Sec_frac + 0b0000 NI + 0b0001 WALK_DISABLE + 0b0010 SECURE_MEMORY +EndEnum +Enum 19:16 GenTimer + 0b0000 NI + 0b0001 IMP + 0b0010 ECV +EndEnum +Enum 15:12 Virtualization + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 MProgMod + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 Security + 0b0000 NI + 0b0001 EL3 + 0b0001 NSACR_RFR +EndEnum +Enum 3:0 ProgMod + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_DFR0_EL1 3 0 0 1 2 +Res0 63:32 +Enum 31:28 TraceFilt + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 27:24 PerfMon + 0b0000 NI + 0b0001 PMUv1 + 0b0010 PMUv2 + 0b0011 PMUv3 + 0b0100 PMUv3p1 + 0b0101 PMUv3p4 + 0b0110 PMUv3p5 + 0b0111 PMUv3p7 + 0b1000 PMUv3p8 + 0b1111 IMPDEF +EndEnum +Enum 23:20 MProfDbg + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 19:16 MMapTrc + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 15:12 CopTrc + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 MMapDbg + 0b0000 NI + 0b0100 Armv7 + 0b0101 Armv7p1 +EndEnum +Field 7:4 CopSDbg +Enum 3:0 CopDbg + 0b0000 NI + 0b0010 Armv6 + 0b0011 Armv6p1 + 0b0100 Armv7 + 0b0101 Armv7p1 + 0b0110 Armv8 + 0b0111 VHE + 0b1000 Debugv8p2 + 0b1001 Debugv8p4 + 0b1010 Debugv8p8 +EndEnum +EndSysreg + +Sysreg ID_AFR0_EL1 3 0 0 1 3 +Res0 63:16 +Field 15:12 IMPDEF3 +Field 11:8 IMPDEF2 +Field 7:4 IMPDEF1 +Field 3:0 IMPDEF0 +EndSysreg + +Sysreg ID_MMFR0_EL1 3 0 0 1 4 +Res0 63:32 +Enum 31:28 InnerShr + 0b0000 NC + 0b0001 HW + 0b1111 IGNORED +EndEnum +Enum 27:24 FCSE + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 AuxReg + 0b0000 NI + 0b0001 ACTLR + 0b0010 AIFSR +EndEnum +Enum 19:16 TCM + 0b0000 NI + 0b0001 IMPDEF + 0b0010 TCM + 0b0011 TCM_DMA +EndEnum +Enum 15:12 ShareLvl + 0b0000 ONE + 0b0001 TWO +EndEnum +Enum 11:8 OuterShr + 0b0000 NC + 0b0001 HW + 0b1111 IGNORED +EndEnum +Enum 7:4 PMSA + 0b0000 NI + 0b0001 IMPDEF + 0b0010 PMSAv6 + 0b0011 PMSAv7 +EndEnum +Enum 3:0 VMSA + 0b0000 NI + 0b0001 IMPDEF + 0b0010 VMSAv6 + 0b0011 VMSAv7 + 0b0100 VMSAv7_PXN + 0b0101 VMSAv7_LONG +EndEnum +EndSysreg + +Sysreg ID_MMFR1_EL1 3 0 0 1 5 +Res0 63:32 +Enum 31:28 BPred + 0b0000 NI + 0b0001 BP_SW_MANGED + 0b0010 BP_ASID_AWARE + 0b0011 BP_NOSNOOP + 0b0100 BP_INVISIBLE +EndEnum +Enum 27:24 L1TstCln + 0b0000 NI + 0b0001 NOINVALIDATE + 0b0010 INVALIDATE +EndEnum +Enum 23:20 L1Uni + 0b0000 NI + 0b0001 INVALIDATE + 0b0010 CLEAN_AND_INVALIDATE +EndEnum +Enum 19:16 L1Hvd + 0b0000 NI + 0b0001 INVALIDATE_ISIDE_ONLY + 0b0010 INVALIDATE + 0b0011 CLEAN_AND_INVALIDATE +EndEnum +Enum 15:12 L1UniSW + 0b0000 NI + 0b0001 CLEAN + 0b0010 CLEAN_AND_INVALIDATE + 0b0011 INVALIDATE +EndEnum +Enum 11:8 L1HvdSW + 0b0000 NI + 0b0001 CLEAN_AND_INVALIDATE + 0b0010 INVALIDATE_DSIDE_ONLY + 0b0011 INVALIDATE +EndEnum +Enum 7:4 L1UniVA + 0b0000 NI + 0b0001 CLEAN_AND_INVALIDATE + 0b0010 INVALIDATE_BP +EndEnum +Enum 3:0 L1HvdVA + 0b0000 NI + 0b0001 CLEAN_AND_INVALIDATE + 0b0010 INVALIDATE_BP +EndEnum +EndSysreg + 
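(To make the new descriptions concrete: gen-sysreg.awk, updated earlier in this diff, turns each field into SHIFT/WIDTH/BITMASK and enumeration-value macros. A minimal sketch of the output for the ID_DFR0_EL1 PerfMon block above follows; the _SHIFT and _PMUv3p4 names are confirmed by the sys_regs.c hunk earlier in this merge, while the exact form of the remaining definitions is inferred from the script's conventions and should be read as illustrative only.)

/* Illustrative gen-sysreg.awk output for ID_DFR0_EL1, field PerfMon (bits 27:24) */
#define ID_DFR0_EL1_PerfMon_SHIFT                       24
#define ID_DFR0_EL1_PerfMon_WIDTH                       4
#define ID_DFR0_EL1_PerfMon_MASK                        GENMASK(27, 24)
#define ID_DFR0_EL1_PerfMon_NI                          0x0
#define ID_DFR0_EL1_PerfMon_PMUv3                       0x3
#define ID_DFR0_EL1_PerfMon_PMUv3p4                     0x5
#define ID_DFR0_EL1_PerfMon_IMPDEF                      0xf

(The sys_regs.c change above, replacing ID_DFR0_PERFMON_SHIFT with ID_DFR0_EL1_PerfMon_SHIFT, is precisely a migration onto these generated names.)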
+Sysreg ID_MMFR2_EL1 3 0 0 1 6 +Res0 63:32 +Enum 31:28 HWAccFlg + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 27:24 WFIStall + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 MemBarr + 0b0000 NI + 0b0001 DSB_ONLY + 0b0010 IMP +EndEnum +Enum 19:16 UniTLB + 0b0000 NI + 0b0001 BY_VA + 0b0010 BY_MATCH_ASID + 0b0011 BY_ALL_ASID + 0b0100 OTHER_TLBS + 0b0101 BROADCAST + 0b0110 BY_IPA +EndEnum +Enum 15:12 HvdTLB + 0b0000 NI +EndEnum +Enum 11:8 L1HvdRng + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 L1HvdBG + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 L1HvdFG + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_MMFR3_EL1 3 0 0 1 7 +Res0 63:32 +Enum 31:28 Supersec + 0b0000 IMP + 0b1111 NI +EndEnum +Enum 27:24 CMemSz + 0b0000 4GB + 0b0001 64GB + 0b0010 1TB +EndEnum +Enum 23:20 CohWalk + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 19:16 PAN + 0b0000 NI + 0b0001 PAN + 0b0010 PAN2 +EndEnum +Enum 15:12 MaintBcst + 0b0000 NI + 0b0001 NO_TLB + 0b0010 ALL +EndEnum +Enum 11:8 BPMaint + 0b0000 NI + 0b0001 ALL + 0b0010 BY_VA +EndEnum +Enum 7:4 CMaintSW + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 CMaintVA + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_ISAR0_EL1 3 0 0 2 0 +Res0 63:28 +Enum 27:24 Divide + 0b0000 NI + 0b0001 xDIV_T32 + 0b0010 xDIV_A32 +EndEnum +Enum 23:20 Debug + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 19:16 Coproc + 0b0000 NI + 0b0001 MRC + 0b0010 MRC2 + 0b0011 MRRC + 0b0100 MRRC2 +EndEnum +Enum 15:12 CmpBranch + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 BitField + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 BitCount + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 Swap + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_ISAR1_EL1 3 0 0 2 1 +Res0 63:32 +Enum 31:28 Jazelle + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 27:24 Interwork + 0b0000 NI + 0b0001 BX + 0b0010 BLX + 0b0011 A32_BX +EndEnum +Enum 23:20 Immediate + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 19:16 IfThen + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 15:12 Extend + 0b0000 NI + 0b0001 SXTB + 0b0010 SXTB16 +EndEnum +Enum 11:8 Except_AR + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 Except + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 Endian + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_ISAR2_EL1 3 0 0 2 2 +Res0 63:32 +Enum 31:28 Reversal + 0b0000 NI + 0b0001 REV + 0b0010 RBIT +EndEnum +Enum 27:24 PSR_AR + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 MultU + 0b0000 NI + 0b0001 UMULL + 0b0010 UMAAL +EndEnum +Enum 19:16 MultS + 0b0000 NI + 0b0001 SMULL + 0b0010 SMLABB + 0b0011 SMLAD +EndEnum +Enum 15:12 Mult + 0b0000 NI + 0b0001 MLA + 0b0010 MLS +EndEnum +Enum 11:8 MultiAccessInt + 0b0000 NI + 0b0001 RESTARTABLE + 0b0010 CONTINUABLE +EndEnum +Enum 7:4 MemHint + 0b0000 NI + 0b0001 PLD + 0b0010 PLD2 + 0b0011 PLI + 0b0100 PLDW +EndEnum +Enum 3:0 LoadStore + 0b0000 NI + 0b0001 DOUBLE + 0b0010 ACQUIRE +EndEnum +EndSysreg + +Sysreg ID_ISAR3_EL1 3 0 0 2 3 +Res0 63:32 +Enum 31:28 T32EE + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 27:24 TrueNOP + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 T32Copy + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 19:16 TabBranch + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 15:12 SynchPrim + 0b0000 NI + 0b0001 EXCLUSIVE + 0b0010 DOUBLE +EndEnum +Enum 11:8 SVC + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 SIMD + 0b0000 NI + 0b0001 SSAT + 0b0011 PKHBT +EndEnum +Enum 3:0 Saturate + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_ISAR4_EL1 3 0 0 2 4 +Res0 63:32 +Enum 31:28 SWP_frac + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 27:24 PSR_M + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 SynchPrim_frac + 0b0000 NI + 0b0011 IMP +EndEnum +Enum 19:16 Barrier 
+ 0b0000 NI + 0b0001 IMP +EndEnum +Enum 15:12 SMC + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 Writeback + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 WithShifts + 0b0000 NI + 0b0001 LSL3 + 0b0011 LS + 0b0100 REG +EndEnum +Enum 3:0 Unpriv + 0b0000 NI + 0b0001 REG_BYTE + 0b0010 SIGNED_HALFWORD +EndEnum +EndSysreg + +Sysreg ID_ISAR5_EL1 3 0 0 2 5 +Res0 63:32 +Enum 31:28 VCMA + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 27:24 RDM + 0b0000 NI + 0b0001 IMP +EndEnum +Res0 23:20 +Enum 19:16 CRC32 + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 15:12 SHA2 + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 SHA1 + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 AES + 0b0000 NI + 0b0001 IMP + 0b0010 VMULL +EndEnum +Enum 3:0 SEVL + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_ISAR6_EL1 3 0 0 2 7 +Res0 63:28 +Enum 27:24 I8MM + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 BF16 + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 19:16 SPECRES + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 15:12 SB + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 FHM + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 DP + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 JSCVT + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_MMFR4_EL1 3 0 0 2 6 +Res0 63:32 +Enum 31:28 EVT + 0b0000 NI + 0b0001 NO_TLBIS + 0b0010 TLBIS +EndEnum +Enum 27:24 CCIDX + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 LSM + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 19:16 HPDS + 0b0000 NI + 0b0001 AA32HPD + 0b0010 HPDS2 +EndEnum +Enum 15:12 CnP + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 XNX + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 AC2 + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 SpecSEI + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg MVFR0_EL1 3 0 0 3 0 +Res0 63:32 +Enum 31:28 FPRound + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 27:24 FPShVec + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 23:20 FPSqrt + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 19:16 FPDivide + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 15:12 FPTrap + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 FPDP + 0b0000 NI + 0b0001 VFPv2 + 0b0001 VFPv3 +EndEnum +Enum 7:4 FPSP + 0b0000 NI + 0b0001 VFPv2 + 0b0001 VFPv3 +EndEnum +Enum 3:0 SIMDReg + 0b0000 NI + 0b0001 IMP_16x64 + 0b0001 IMP_32x64 +EndEnum +EndSysreg + +Sysreg MVFR1_EL1 3 0 0 3 1 +Res0 63:32 +Enum 31:28 SIMDFMAC + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 27:24 FPHP + 0b0000 NI + 0b0001 FPHP + 0b0010 FPHP_CONV + 0b0011 FP16 +EndEnum +Enum 23:20 SIMDHP + 0b0000 NI + 0b0001 SIMDHP + 0b0001 SIMDHP_FLOAT +EndEnum +Enum 19:16 SIMDSP + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 15:12 SIMDInt + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 11:8 SIMDLS + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 7:4 FPDNaN + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 FPFtZ + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg MVFR2_EL1 3 0 0 3 2 +Res0 63:8 +Enum 7:4 FPMisc + 0b0000 NI + 0b0001 FP + 0b0010 FP_DIRECTED_ROUNDING + 0b0011 FP_ROUNDING + 0b0100 FP_MAX_MIN +EndEnum +Enum 3:0 SIMDMisc + 0b0000 NI + 0b0001 SIMD_DIRECTED_ROUNDING + 0b0010 SIMD_ROUNDING + 0b0011 SIMD_MAX_MIN +EndEnum +EndSysreg + +Sysreg ID_PFR2_EL1 3 0 0 3 4 +Res0 63:12 +Enum 11:8 RAS_frac + 0b0000 NI + 0b0001 RASv1p1 +EndEnum +Enum 7:4 SSBS + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 CSV3 + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + +Sysreg ID_DFR1_EL1 3 0 0 3 5 +Res0 63:8 +Enum 7:4 HPMN0 + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 MTPMU + 0b0000 IMPDEF + 0b0001 IMP + 0b1111 NI +EndEnum +EndSysreg + +Sysreg ID_MMFR5_EL1 3 0 0 3 6 +Res0 63:8 +Enum 7:4 nTLBPA + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 3:0 ETS + 0b0000 NI + 0b0001 IMP +EndEnum +EndSysreg + Sysreg 
ID_AA64PFR0_EL1 3 0 0 4 0 Enum 63:60 CSV3 0b0000 NI @@ -210,6 +964,7 @@ EndEnum Enum 3:0 SVEver 0b0000 IMP 0b0001 SVE2 + 0b0010 SVE2p1 EndEnum EndSysreg @@ -484,7 +1239,16 @@ EndEnum EndSysreg Sysreg ID_AA64ISAR2_EL1 3 0 0 6 2 -Res0 63:28 +Res0 63:56 +Enum 55:52 CSSC + 0b0000 NI + 0b0001 IMP +EndEnum +Enum 51:48 RPRFM + 0b0000 NI + 0b0001 IMP +EndEnum +Res0 47:28 Enum 27:24 PAC_frac 0b0000 NI 0b0001 IMP diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h index 3cee7115441b..259b9dd5fe1c 100644 --- a/arch/powerpc/include/asm/ftrace.h +++ b/arch/powerpc/include/asm/ftrace.h @@ -37,12 +37,32 @@ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs * return fregs->regs.msr ? &fregs->regs : NULL; } -static __always_inline void ftrace_instruction_pointer_set(struct ftrace_regs *fregs, - unsigned long ip) +static __always_inline void +ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, + unsigned long ip) { regs_set_return_ip(&fregs->regs, ip); } +static __always_inline unsigned long +ftrace_regs_get_instruction_pointer(struct ftrace_regs *fregs) +{ + return instruction_pointer(&fregs->regs); +} + +#define ftrace_regs_get_argument(fregs, n) \ + regs_get_kernel_argument(&(fregs)->regs, n) +#define ftrace_regs_get_stack_pointer(fregs) \ + kernel_stack_pointer(&(fregs)->regs) +#define ftrace_regs_return_value(fregs) \ + regs_return_value(&(fregs)->regs) +#define ftrace_regs_set_return_value(fregs, ret) \ + regs_set_return_value(&(fregs)->regs, ret) +#define ftrace_override_function_with_return(fregs) \ + override_function_with_return(&(fregs)->regs) +#define ftrace_regs_query_register_offset(name) \ + regs_query_register_offset(name) + struct ftrace_ops; #define ftrace_graph_func ftrace_graph_func diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h index 6f80ec9c04be..e5c5cb1207e2 100644 --- a/arch/s390/include/asm/ftrace.h +++ b/arch/s390/include/asm/ftrace.h @@ -54,12 +54,33 @@ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs * return NULL; } -static __always_inline void ftrace_instruction_pointer_set(struct ftrace_regs *fregs, - unsigned long ip) +static __always_inline unsigned long +ftrace_regs_get_instruction_pointer(const struct ftrace_regs *fregs) +{ + return fregs->regs.psw.addr; +} + +static __always_inline void +ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, + unsigned long ip) { fregs->regs.psw.addr = ip; } +#define ftrace_regs_get_argument(fregs, n) \ + regs_get_kernel_argument(&(fregs)->regs, n) +#define ftrace_regs_get_stack_pointer(fregs) \ + kernel_stack_pointer(&(fregs)->regs) +#define ftrace_regs_return_value(fregs) \ + regs_return_value(&(fregs)->regs) +#define ftrace_regs_set_return_value(fregs, ret) \ + regs_set_return_value(&(fregs)->regs, ret) +#define ftrace_override_function_with_return(fregs) \ + override_function_with_return(&(fregs)->regs) +#define ftrace_regs_query_register_offset(name) \ + regs_query_register_offset(name) + +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS /* * When an ftrace registered caller is tracing a function that is * also set by a register_ftrace_direct() call, it needs to be @@ -67,10 +88,12 @@ static __always_inline void ftrace_instruction_pointer_set(struct ftrace_regs *f * place the direct caller in the ORIG_GPR2 part of pt_regs. This * tells the ftrace_caller that there's a direct caller. 
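 * With DYNAMIC_FTRACE_WITH_DIRECT_CALLS the hook is now handed the ftrace_regs wrapper instead, i.e. arch_ftrace_set_direct_caller(fregs, addr), as the signature change below shows.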
*/ -static inline void arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr) +static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsigned long addr) { + struct pt_regs *regs = &fregs->regs; regs->orig_gpr2 = addr; } +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */ /* * Even though the system call numbers are identical for s390/s390x a diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index 908d99b127d3..5061ac98ffa1 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -34,19 +34,6 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr) return addr; } -/* - * When a ftrace registered caller is tracing a function that is - * also set by a register_ftrace_direct() call, it needs to be - * differentiated in the ftrace_caller trampoline. To do this, we - * place the direct caller in the ORIG_AX part of pt_regs. This - * tells the ftrace_caller that there's a direct caller. - */ -static inline void arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr) -{ - /* Emulate a call */ - regs->orig_ax = addr; -} - #ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS struct ftrace_regs { struct pt_regs regs; @@ -61,9 +48,25 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs) return &fregs->regs; } -#define ftrace_instruction_pointer_set(fregs, _ip) \ +#define ftrace_regs_set_instruction_pointer(fregs, _ip) \ do { (fregs)->regs.ip = (_ip); } while (0) +#define ftrace_regs_get_instruction_pointer(fregs) \ + ((fregs)->regs.ip) + +#define ftrace_regs_get_argument(fregs, n) \ + regs_get_kernel_argument(&(fregs)->regs, n) +#define ftrace_regs_get_stack_pointer(fregs) \ + kernel_stack_pointer(&(fregs)->regs) +#define ftrace_regs_return_value(fregs) \ + regs_return_value(&(fregs)->regs) +#define ftrace_regs_set_return_value(fregs, ret) \ + regs_set_return_value(&(fregs)->regs, ret) +#define ftrace_override_function_with_return(fregs) \ + override_function_with_return(&(fregs)->regs) +#define ftrace_regs_query_register_offset(name) \ + regs_query_register_offset(name) + struct ftrace_ops; #define ftrace_graph_func ftrace_graph_func void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, @@ -72,6 +75,24 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, #define FTRACE_GRAPH_TRAMP_ADDR FTRACE_GRAPH_ADDR #endif +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS +/* + * When a ftrace registered caller is tracing a function that is + * also set by a register_ftrace_direct() call, it needs to be + * differentiated in the ftrace_caller trampoline. To do this, we + * place the direct caller in the ORIG_AX part of pt_regs. This + * tells the ftrace_caller that there's a direct caller. + */ +static inline void +__arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr) +{ + /* Emulate a call */ + regs->orig_ax = addr; +} +#define arch_ftrace_set_direct_caller(fregs, addr) \ + __arch_ftrace_set_direct_caller(&(fregs)->regs, addr) +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */ + #ifdef CONFIG_DYNAMIC_FTRACE struct dyn_arch_ftrace { diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig index 473241b5193f..1cc11d2a2a88 100644 --- a/drivers/acpi/Kconfig +++ b/drivers/acpi/Kconfig @@ -90,7 +90,7 @@ config ACPI_SPCR_TABLE config ACPI_FPDT bool "ACPI Firmware Performance Data Table (FPDT) support" - depends on X86_64 + depends on X86_64 || ARM64 help Enable support for the Firmware Performance Data Table (FPDT). 
This table provides information on the timing of the system diff --git a/drivers/acpi/arm64/Kconfig b/drivers/acpi/arm64/Kconfig index d4a72835f328..b3ed6212244c 100644 --- a/drivers/acpi/arm64/Kconfig +++ b/drivers/acpi/arm64/Kconfig @@ -18,3 +18,6 @@ config ACPI_AGDI reset command. If set, the kernel parses AGDI table and listens for the command. + +config ACPI_APMT + bool diff --git a/drivers/acpi/arm64/Makefile b/drivers/acpi/arm64/Makefile index 7b9e4045659d..e21a9e84e394 100644 --- a/drivers/acpi/arm64/Makefile +++ b/drivers/acpi/arm64/Makefile @@ -2,4 +2,5 @@ obj-$(CONFIG_ACPI_AGDI) += agdi.o obj-$(CONFIG_ACPI_IORT) += iort.o obj-$(CONFIG_ACPI_GTDT) += gtdt.o +obj-$(CONFIG_ACPI_APMT) += apmt.o obj-y += dma.o diff --git a/drivers/acpi/arm64/apmt.c b/drivers/acpi/arm64/apmt.c new file mode 100644 index 000000000000..8cab69fa5d59 --- /dev/null +++ b/drivers/acpi/arm64/apmt.c @@ -0,0 +1,178 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * ARM APMT table support. + * Design document number: ARM DEN0117. + * + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. + * + */ + +#define pr_fmt(fmt) "ACPI: APMT: " fmt + +#include <linux/acpi.h> +#include <linux/acpi_apmt.h> +#include <linux/init.h> +#include <linux/kernel.h> +#include <linux/platform_device.h> + +#define DEV_NAME "arm-cs-arch-pmu" + +/* There can be up to 3 resources: page 0 and 1 address, and interrupt. */ +#define DEV_MAX_RESOURCE_COUNT 3 + +/* Root pointer to the mapped APMT table */ +static struct acpi_table_header *apmt_table; + +static int __init apmt_init_resources(struct resource *res, + struct acpi_apmt_node *node) +{ + int irq, trigger; + int num_res = 0; + + res[num_res].start = node->base_address0; + res[num_res].end = node->base_address0 + SZ_4K - 1; + res[num_res].flags = IORESOURCE_MEM; + + num_res++; + + res[num_res].start = node->base_address1; + res[num_res].end = node->base_address1 + SZ_4K - 1; + res[num_res].flags = IORESOURCE_MEM; + + num_res++; + + if (node->ovflw_irq != 0) { + trigger = (node->ovflw_irq_flags & ACPI_APMT_OVFLW_IRQ_FLAGS_MODE); + trigger = (trigger == ACPI_APMT_OVFLW_IRQ_FLAGS_MODE_LEVEL) ? + ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE; + irq = acpi_register_gsi(NULL, node->ovflw_irq, trigger, + ACPI_ACTIVE_HIGH); + + if (irq <= 0) { + pr_warn("APMT could not register gsi hwirq %d\n", irq); + return num_res; + } + + res[num_res].start = irq; + res[num_res].end = irq; + res[num_res].flags = IORESOURCE_IRQ; + + num_res++; + } + + return num_res; +} + +/** + * apmt_add_platform_device() - Allocate a platform device for APMT node + * @node: Pointer to device ACPI APMT node + * @fwnode: fwnode associated with the APMT node + * + * Returns: 0 on success, <0 failure + */ +static int __init apmt_add_platform_device(struct acpi_apmt_node *node, + struct fwnode_handle *fwnode) +{ + struct platform_device *pdev; + int ret, count; + struct resource res[DEV_MAX_RESOURCE_COUNT]; + + pdev = platform_device_alloc(DEV_NAME, PLATFORM_DEVID_AUTO); + if (!pdev) + return -ENOMEM; + + memset(res, 0, sizeof(res)); + + count = apmt_init_resources(res, node); + + ret = platform_device_add_resources(pdev, res, count); + if (ret) + goto dev_put; + + /* + * Add a copy of APMT node pointer to platform_data to be used to + * retrieve APMT data information. 
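+	 * (the stored pointer can be fetched back with dev_get_platdata())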
+ */ + ret = platform_device_add_data(pdev, &node, sizeof(node)); + if (ret) + goto dev_put; + + pdev->dev.fwnode = fwnode; + + ret = platform_device_add(pdev); + + if (ret) + goto dev_put; + + return 0; + +dev_put: + platform_device_put(pdev); + + return ret; +} + +static int __init apmt_init_platform_devices(void) +{ + struct acpi_apmt_node *apmt_node; + struct acpi_table_apmt *apmt; + struct fwnode_handle *fwnode; + u64 offset, end; + int ret; + + /* + * apmt_table and apmt both point to the start of APMT table, but + * have different struct types + */ + apmt = (struct acpi_table_apmt *)apmt_table; + offset = sizeof(*apmt); + end = apmt->header.length; + + while (offset < end) { + apmt_node = ACPI_ADD_PTR(struct acpi_apmt_node, apmt, + offset); + + fwnode = acpi_alloc_fwnode_static(); + if (!fwnode) + return -ENOMEM; + + ret = apmt_add_platform_device(apmt_node, fwnode); + if (ret) { + acpi_free_fwnode_static(fwnode); + return ret; + } + + offset += apmt_node->length; + } + + return 0; +} + +void __init acpi_apmt_init(void) +{ + acpi_status status; + int ret; + + /** + * APMT table nodes will be used at runtime after the apmt init, + * so we don't need to call acpi_put_table() to release + * the APMT table mapping. + */ + status = acpi_get_table(ACPI_SIG_APMT, 0, &apmt_table); + + if (ACPI_FAILURE(status)) { + if (status != AE_NOT_FOUND) { + const char *msg = acpi_format_exception(status); + + pr_err("Failed to get APMT table, %s\n", msg); + } + + return; + } + + ret = apmt_init_platform_devices(); + if (ret) { + pr_err("Failed to initialize APMT platform devices, ret: %d\n", ret); + acpi_put_table(apmt_table); + } +} diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index 8059baf4ef27..38fb84974f35 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -402,6 +402,10 @@ static struct acpi_iort_node *iort_node_get_id(struct acpi_iort_node *node, return NULL; } +#ifndef ACPI_IORT_SMMU_V3_DEVICEID_VALID +#define ACPI_IORT_SMMU_V3_DEVICEID_VALID (1 << 4) +#endif + static int iort_get_id_mapping_index(struct acpi_iort_node *node) { struct acpi_iort_smmu_v3 *smmu; @@ -418,12 +422,16 @@ static int iort_get_id_mapping_index(struct acpi_iort_node *node) smmu = (struct acpi_iort_smmu_v3 *)node->node_data; /* - * ID mapping index is only ignored if all interrupts are - * GSIV based + * Until IORT E.e (node rev. 5), the ID mapping index was + * defined to be valid unless all interrupts are GSIV-based. 
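+	 * From node revision 5 onwards, validity is instead signalled explicitly via the DEVICEID_VALID flag.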
*/ - if (smmu->event_gsiv && smmu->pri_gsiv && smmu->gerr_gsiv - && smmu->sync_gsiv) + if (node->revision < 5) { + if (smmu->event_gsiv && smmu->pri_gsiv && + smmu->gerr_gsiv && smmu->sync_gsiv) + return -EINVAL; + } else if (!(smmu->flags & ACPI_IORT_SMMU_V3_DEVICEID_VALID)) { return -EINVAL; + } if (smmu->id_mapping_index >= node->mapping_count) { pr_err(FW_BUG "[node %p type %d] ID mapping index overflows valid mappings\n", diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c index d466c8195314..351208eda9be 100644 --- a/drivers/acpi/bus.c +++ b/drivers/acpi/bus.c @@ -27,6 +27,7 @@ #include <linux/dmi.h> #endif #include <linux/acpi_agdi.h> +#include <linux/acpi_apmt.h> #include <linux/acpi_iort.h> #include <linux/acpi_viot.h> #include <linux/pci.h> @@ -1423,6 +1424,7 @@ static int __init acpi_init(void) acpi_setup_sb_notify_handler(); acpi_viot_init(); acpi_agdi_init(); + acpi_apmt_init(); return 0; } diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c index d5e86ef40b89..fa85c64d3ded 100644 --- a/drivers/firmware/arm_ffa/driver.c +++ b/drivers/firmware/arm_ffa/driver.c @@ -36,81 +36,6 @@ #include "common.h" #define FFA_DRIVER_VERSION FFA_VERSION_1_0 - -#define FFA_SMC(calling_convention, func_num) \ - ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, (calling_convention), \ - ARM_SMCCC_OWNER_STANDARD, (func_num)) - -#define FFA_SMC_32(func_num) FFA_SMC(ARM_SMCCC_SMC_32, (func_num)) -#define FFA_SMC_64(func_num) FFA_SMC(ARM_SMCCC_SMC_64, (func_num)) - -#define FFA_ERROR FFA_SMC_32(0x60) -#define FFA_SUCCESS FFA_SMC_32(0x61) -#define FFA_INTERRUPT FFA_SMC_32(0x62) -#define FFA_VERSION FFA_SMC_32(0x63) -#define FFA_FEATURES FFA_SMC_32(0x64) -#define FFA_RX_RELEASE FFA_SMC_32(0x65) -#define FFA_RXTX_MAP FFA_SMC_32(0x66) -#define FFA_FN64_RXTX_MAP FFA_SMC_64(0x66) -#define FFA_RXTX_UNMAP FFA_SMC_32(0x67) -#define FFA_PARTITION_INFO_GET FFA_SMC_32(0x68) -#define FFA_ID_GET FFA_SMC_32(0x69) -#define FFA_MSG_POLL FFA_SMC_32(0x6A) -#define FFA_MSG_WAIT FFA_SMC_32(0x6B) -#define FFA_YIELD FFA_SMC_32(0x6C) -#define FFA_RUN FFA_SMC_32(0x6D) -#define FFA_MSG_SEND FFA_SMC_32(0x6E) -#define FFA_MSG_SEND_DIRECT_REQ FFA_SMC_32(0x6F) -#define FFA_FN64_MSG_SEND_DIRECT_REQ FFA_SMC_64(0x6F) -#define FFA_MSG_SEND_DIRECT_RESP FFA_SMC_32(0x70) -#define FFA_FN64_MSG_SEND_DIRECT_RESP FFA_SMC_64(0x70) -#define FFA_MEM_DONATE FFA_SMC_32(0x71) -#define FFA_FN64_MEM_DONATE FFA_SMC_64(0x71) -#define FFA_MEM_LEND FFA_SMC_32(0x72) -#define FFA_FN64_MEM_LEND FFA_SMC_64(0x72) -#define FFA_MEM_SHARE FFA_SMC_32(0x73) -#define FFA_FN64_MEM_SHARE FFA_SMC_64(0x73) -#define FFA_MEM_RETRIEVE_REQ FFA_SMC_32(0x74) -#define FFA_FN64_MEM_RETRIEVE_REQ FFA_SMC_64(0x74) -#define FFA_MEM_RETRIEVE_RESP FFA_SMC_32(0x75) -#define FFA_MEM_RELINQUISH FFA_SMC_32(0x76) -#define FFA_MEM_RECLAIM FFA_SMC_32(0x77) -#define FFA_MEM_OP_PAUSE FFA_SMC_32(0x78) -#define FFA_MEM_OP_RESUME FFA_SMC_32(0x79) -#define FFA_MEM_FRAG_RX FFA_SMC_32(0x7A) -#define FFA_MEM_FRAG_TX FFA_SMC_32(0x7B) -#define FFA_NORMAL_WORLD_RESUME FFA_SMC_32(0x7C) - -/* - * For some calls it is necessary to use SMC64 to pass or return 64-bit values. - * For such calls FFA_FN_NATIVE(name) will choose the appropriate - * (native-width) function ID. - */ -#ifdef CONFIG_64BIT -#define FFA_FN_NATIVE(name) FFA_FN64_##name -#else -#define FFA_FN_NATIVE(name) FFA_##name -#endif - -/* FFA error codes. 
*/ -#define FFA_RET_SUCCESS (0) -#define FFA_RET_NOT_SUPPORTED (-1) -#define FFA_RET_INVALID_PARAMETERS (-2) -#define FFA_RET_NO_MEMORY (-3) -#define FFA_RET_BUSY (-4) -#define FFA_RET_INTERRUPTED (-5) -#define FFA_RET_DENIED (-6) -#define FFA_RET_RETRY (-7) -#define FFA_RET_ABORTED (-8) - -#define MAJOR_VERSION_MASK GENMASK(30, 16) -#define MINOR_VERSION_MASK GENMASK(15, 0) -#define MAJOR_VERSION(x) ((u16)(FIELD_GET(MAJOR_VERSION_MASK, (x)))) -#define MINOR_VERSION(x) ((u16)(FIELD_GET(MINOR_VERSION_MASK, (x)))) -#define PACK_VERSION_INFO(major, minor) \ - (FIELD_PREP(MAJOR_VERSION_MASK, (major)) | \ - FIELD_PREP(MINOR_VERSION_MASK, (minor))) -#define FFA_VERSION_1_0 PACK_VERSION_INFO(1, 0) #define FFA_MIN_VERSION FFA_VERSION_1_0 #define SENDER_ID_MASK GENMASK(31, 16) @@ -121,12 +46,6 @@ (FIELD_PREP(SENDER_ID_MASK, (s)) | FIELD_PREP(RECEIVER_ID_MASK, (r))) /* - * FF-A specification mentions explicitly about '4K pages'. This should - * not be confused with the kernel PAGE_SIZE, which is the translation - * granule kernel is configured and may be one among 4K, 16K and 64K. - */ -#define FFA_PAGE_SIZE SZ_4K -/* * Keeping RX TX buffer size as 4K for now * 64K may be preferred to keep it min a page in 64K PAGE_SIZE config */ @@ -178,9 +97,9 @@ static struct ffa_drv_info *drv_info; */ static u32 ffa_compatible_version_find(u32 version) { - u16 major = MAJOR_VERSION(version), minor = MINOR_VERSION(version); - u16 drv_major = MAJOR_VERSION(FFA_DRIVER_VERSION); - u16 drv_minor = MINOR_VERSION(FFA_DRIVER_VERSION); + u16 major = FFA_MAJOR_VERSION(version), minor = FFA_MINOR_VERSION(version); + u16 drv_major = FFA_MAJOR_VERSION(FFA_DRIVER_VERSION); + u16 drv_minor = FFA_MINOR_VERSION(FFA_DRIVER_VERSION); if ((major < drv_major) || (major == drv_major && minor <= drv_minor)) return version; @@ -204,16 +123,16 @@ static int ffa_version_check(u32 *version) if (ver.a0 < FFA_MIN_VERSION) { pr_err("Incompatible v%d.%d! 
Earliest supported v%d.%d\n", - MAJOR_VERSION(ver.a0), MINOR_VERSION(ver.a0), - MAJOR_VERSION(FFA_MIN_VERSION), - MINOR_VERSION(FFA_MIN_VERSION)); + FFA_MAJOR_VERSION(ver.a0), FFA_MINOR_VERSION(ver.a0), + FFA_MAJOR_VERSION(FFA_MIN_VERSION), + FFA_MINOR_VERSION(FFA_MIN_VERSION)); return -EINVAL; } - pr_info("Driver version %d.%d\n", MAJOR_VERSION(FFA_DRIVER_VERSION), - MINOR_VERSION(FFA_DRIVER_VERSION)); - pr_info("Firmware version %d.%d found\n", MAJOR_VERSION(ver.a0), - MINOR_VERSION(ver.a0)); + pr_info("Driver version %d.%d\n", FFA_MAJOR_VERSION(FFA_DRIVER_VERSION), + FFA_MINOR_VERSION(FFA_DRIVER_VERSION)); + pr_info("Firmware version %d.%d found\n", FFA_MAJOR_VERSION(ver.a0), + FFA_MINOR_VERSION(ver.a0)); *version = ffa_compatible_version_find(ver.a0); return 0; diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile index ef5045a53ce0..453b67582fee 100644 --- a/drivers/firmware/efi/libstub/Makefile +++ b/drivers/firmware/efi/libstub/Makefile @@ -20,6 +20,7 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \ # disable the stackleak plugin cflags-$(CONFIG_ARM64) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \ -fpie $(DISABLE_STACKLEAK_PLUGIN) \ + -fno-unwind-tables -fno-asynchronous-unwind-tables \ $(call cc-option,-mbranch-protection=none) cflags-$(CONFIG_ARM) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \ -fno-builtin -fpic \ diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index 341010f20b77..77043bcdb33c 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig @@ -199,4 +199,8 @@ config MARVELL_CN10K_DDR_PMU Enable perf support for Marvell DDR Performance monitoring event on CN10K platform. +source "drivers/perf/arm_cspmu/Kconfig" + +source "drivers/perf/amlogic/Kconfig" + endmenu diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index 050d04ee19dd..13e45da61100 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -21,3 +21,5 @@ obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o obj-$(CONFIG_ALIBABA_UNCORE_DRW_PMU) += alibaba_uncore_drw_pmu.o +obj-$(CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU) += arm_cspmu/ +obj-$(CONFIG_MESON_DDR_PMU) += amlogic/ diff --git a/drivers/perf/amlogic/Kconfig b/drivers/perf/amlogic/Kconfig new file mode 100644 index 000000000000..f68db01a7f17 --- /dev/null +++ b/drivers/perf/amlogic/Kconfig @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: GPL-2.0-only +config MESON_DDR_PMU + tristate "Amlogic DDR Bandwidth Performance Monitor" + depends on ARCH_MESON || COMPILE_TEST + help + Provides support for the DDR performance monitor + in Amlogic SoCs, which can give information about + memory throughput and other related events. It + supports multiple channels to monitor the memory + bandwidth simultaneously. diff --git a/drivers/perf/amlogic/Makefile b/drivers/perf/amlogic/Makefile new file mode 100644 index 000000000000..d3ab2ac5353b --- /dev/null +++ b/drivers/perf/amlogic/Makefile @@ -0,0 +1,5 @@ +# SPDX-License-Identifier: GPL-2.0-only + +obj-$(CONFIG_MESON_DDR_PMU) += meson_ddr_pmu_g12.o + +meson_ddr_pmu_g12-y := meson_ddr_pmu_core.o meson_g12_ddr_pmu.o diff --git a/drivers/perf/amlogic/meson_ddr_pmu_core.c b/drivers/perf/amlogic/meson_ddr_pmu_core.c new file mode 100644 index 000000000000..b84346dbac2c --- /dev/null +++ b/drivers/perf/amlogic/meson_ddr_pmu_core.c @@ -0,0 +1,561 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2022 Amlogic, Inc. 
All rights reserved. + */ + +#include <linux/bitfield.h> +#include <linux/init.h> +#include <linux/irqreturn.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/of_irq.h> +#include <linux/perf_event.h> +#include <linux/platform_device.h> +#include <linux/printk.h> +#include <linux/sysfs.h> +#include <linux/types.h> + +#include <soc/amlogic/meson_ddr_pmu.h> + +struct ddr_pmu { + struct pmu pmu; + struct dmc_info info; + struct dmc_counter counters; /* save counters from hw */ + bool pmu_enabled; + struct device *dev; + char *name; + struct hlist_node node; + enum cpuhp_state cpuhp_state; + int cpu; /* for cpu hotplug */ +}; + +#define DDR_PERF_DEV_NAME "meson_ddr_bw" +#define MAX_AXI_PORTS_OF_CHANNEL 4 /* A DMC channel can monitor max 4 axi ports */ + +#define to_ddr_pmu(p) container_of(p, struct ddr_pmu, pmu) +#define dmc_info_to_pmu(p) container_of(p, struct ddr_pmu, info) + +static void dmc_pmu_enable(struct ddr_pmu *pmu) +{ + if (!pmu->pmu_enabled) + pmu->info.hw_info->enable(&pmu->info); + + pmu->pmu_enabled = true; +} + +static void dmc_pmu_disable(struct ddr_pmu *pmu) +{ + if (pmu->pmu_enabled) + pmu->info.hw_info->disable(&pmu->info); + + pmu->pmu_enabled = false; +} + +static void meson_ddr_set_axi_filter(struct perf_event *event, u8 axi_id) +{ + struct ddr_pmu *pmu = to_ddr_pmu(event->pmu); + int chann; + + if (event->attr.config > ALL_CHAN_COUNTER_ID && + event->attr.config < COUNTER_MAX_ID) { + chann = event->attr.config - CHAN1_COUNTER_ID; + + pmu->info.hw_info->set_axi_filter(&pmu->info, axi_id, chann); + } +} + +static void ddr_cnt_addition(struct dmc_counter *sum, + struct dmc_counter *add1, + struct dmc_counter *add2, + int chann_nr) +{ + int i; + u64 cnt1, cnt2; + + sum->all_cnt = add1->all_cnt + add2->all_cnt; + sum->all_req = add1->all_req + add2->all_req; + for (i = 0; i < chann_nr; i++) { + cnt1 = add1->channel_cnt[i]; + cnt2 = add2->channel_cnt[i]; + + sum->channel_cnt[i] = cnt1 + cnt2; + } +} + +static void meson_ddr_perf_event_update(struct perf_event *event) +{ + struct ddr_pmu *pmu = to_ddr_pmu(event->pmu); + u64 new_raw_count = 0; + struct dmc_counter dc = {0}, sum_dc = {0}; + int idx; + int chann_nr = pmu->info.hw_info->chann_nr; + + /* get the remain counters in register. 
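+	 * (pmu->counters accumulates completed periods via the IRQ handler; dc holds the residue still in the hardware registers)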
*/ + pmu->info.hw_info->get_counters(&pmu->info, &dc); + + ddr_cnt_addition(&sum_dc, &pmu->counters, &dc, chann_nr); + + switch (event->attr.config) { + case ALL_CHAN_COUNTER_ID: + new_raw_count = sum_dc.all_cnt; + break; + case CHAN1_COUNTER_ID: + case CHAN2_COUNTER_ID: + case CHAN3_COUNTER_ID: + case CHAN4_COUNTER_ID: + case CHAN5_COUNTER_ID: + case CHAN6_COUNTER_ID: + case CHAN7_COUNTER_ID: + case CHAN8_COUNTER_ID: + idx = event->attr.config - CHAN1_COUNTER_ID; + new_raw_count = sum_dc.channel_cnt[idx]; + break; + } + + local64_set(&event->count, new_raw_count); +} + +static int meson_ddr_perf_event_init(struct perf_event *event) +{ + struct ddr_pmu *pmu = to_ddr_pmu(event->pmu); + u64 config1 = event->attr.config1; + u64 config2 = event->attr.config2; + + if (event->attr.type != event->pmu->type) + return -ENOENT; + + if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK) + return -EOPNOTSUPP; + + if (event->cpu < 0) + return -EOPNOTSUPP; + + /* check if the number of parameters is too much */ + if (event->attr.config != ALL_CHAN_COUNTER_ID && + hweight64(config1) + hweight64(config2) > MAX_AXI_PORTS_OF_CHANNEL) + return -EOPNOTSUPP; + + event->cpu = pmu->cpu; + + return 0; +} + +static void meson_ddr_perf_event_start(struct perf_event *event, int flags) +{ + struct ddr_pmu *pmu = to_ddr_pmu(event->pmu); + + memset(&pmu->counters, 0, sizeof(pmu->counters)); + dmc_pmu_enable(pmu); +} + +static int meson_ddr_perf_event_add(struct perf_event *event, int flags) +{ + u64 config1 = event->attr.config1; + u64 config2 = event->attr.config2; + int i; + + for_each_set_bit(i, (const unsigned long *)&config1, sizeof(config1)) + meson_ddr_set_axi_filter(event, i); + + for_each_set_bit(i, (const unsigned long *)&config2, sizeof(config2)) + meson_ddr_set_axi_filter(event, i + 64); + + if (flags & PERF_EF_START) + meson_ddr_perf_event_start(event, flags); + + return 0; +} + +static void meson_ddr_perf_event_stop(struct perf_event *event, int flags) +{ + struct ddr_pmu *pmu = to_ddr_pmu(event->pmu); + + if (flags & PERF_EF_UPDATE) + meson_ddr_perf_event_update(event); + + dmc_pmu_disable(pmu); +} + +static void meson_ddr_perf_event_del(struct perf_event *event, int flags) +{ + meson_ddr_perf_event_stop(event, PERF_EF_UPDATE); +} + +static ssize_t meson_ddr_perf_cpumask_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct ddr_pmu *pmu = dev_get_drvdata(dev); + + return cpumap_print_to_pagebuf(true, buf, cpumask_of(pmu->cpu)); +} + +static struct device_attribute meson_ddr_perf_cpumask_attr = +__ATTR(cpumask, 0444, meson_ddr_perf_cpumask_show, NULL); + +static struct attribute *meson_ddr_perf_cpumask_attrs[] = { + &meson_ddr_perf_cpumask_attr.attr, + NULL, +}; + +static const struct attribute_group ddr_perf_cpumask_attr_group = { + .attrs = meson_ddr_perf_cpumask_attrs, +}; + +static ssize_t +pmu_event_show(struct device *dev, struct device_attribute *attr, + char *page) +{ + struct perf_pmu_events_attr *pmu_attr; + + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr); + return sysfs_emit(page, "event=0x%02llx\n", pmu_attr->id); +} + +static ssize_t +event_show_unit(struct device *dev, struct device_attribute *attr, + char *page) +{ + return sysfs_emit(page, "MB\n"); +} + +static ssize_t +event_show_scale(struct device *dev, struct device_attribute *attr, + char *page) +{ + /* one count = 16byte = 1.52587890625e-05 MB */ + return sysfs_emit(page, "1.52587890625e-05\n"); +} + +#define AML_DDR_PMU_EVENT_ATTR(_name, _id) \ +{ \ + .attr = 
+#define AML_DDR_PMU_EVENT_ATTR(_name, _id)				\
+{									\
+	.attr = __ATTR(_name, 0444, pmu_event_show, NULL),		\
+	.id = _id,							\
+}
+
+#define AML_DDR_PMU_EVENT_UNIT_ATTR(_name)				\
+	__ATTR(_name.unit, 0444, event_show_unit, NULL)
+
+#define AML_DDR_PMU_EVENT_SCALE_ATTR(_name)				\
+	__ATTR(_name.scale, 0444, event_show_scale, NULL)
+
+static struct device_attribute event_unit_attrs[] = {
+	AML_DDR_PMU_EVENT_UNIT_ATTR(total_rw_bytes),
+	AML_DDR_PMU_EVENT_UNIT_ATTR(chan_1_rw_bytes),
+	AML_DDR_PMU_EVENT_UNIT_ATTR(chan_2_rw_bytes),
+	AML_DDR_PMU_EVENT_UNIT_ATTR(chan_3_rw_bytes),
+	AML_DDR_PMU_EVENT_UNIT_ATTR(chan_4_rw_bytes),
+	AML_DDR_PMU_EVENT_UNIT_ATTR(chan_5_rw_bytes),
+	AML_DDR_PMU_EVENT_UNIT_ATTR(chan_6_rw_bytes),
+	AML_DDR_PMU_EVENT_UNIT_ATTR(chan_7_rw_bytes),
+	AML_DDR_PMU_EVENT_UNIT_ATTR(chan_8_rw_bytes),
+};
+
+static struct device_attribute event_scale_attrs[] = {
+	AML_DDR_PMU_EVENT_SCALE_ATTR(total_rw_bytes),
+	AML_DDR_PMU_EVENT_SCALE_ATTR(chan_1_rw_bytes),
+	AML_DDR_PMU_EVENT_SCALE_ATTR(chan_2_rw_bytes),
+	AML_DDR_PMU_EVENT_SCALE_ATTR(chan_3_rw_bytes),
+	AML_DDR_PMU_EVENT_SCALE_ATTR(chan_4_rw_bytes),
+	AML_DDR_PMU_EVENT_SCALE_ATTR(chan_5_rw_bytes),
+	AML_DDR_PMU_EVENT_SCALE_ATTR(chan_6_rw_bytes),
+	AML_DDR_PMU_EVENT_SCALE_ATTR(chan_7_rw_bytes),
+	AML_DDR_PMU_EVENT_SCALE_ATTR(chan_8_rw_bytes),
+};
+
+static struct perf_pmu_events_attr event_attrs[] = {
+	AML_DDR_PMU_EVENT_ATTR(total_rw_bytes, ALL_CHAN_COUNTER_ID),
+	AML_DDR_PMU_EVENT_ATTR(chan_1_rw_bytes, CHAN1_COUNTER_ID),
+	AML_DDR_PMU_EVENT_ATTR(chan_2_rw_bytes, CHAN2_COUNTER_ID),
+	AML_DDR_PMU_EVENT_ATTR(chan_3_rw_bytes, CHAN3_COUNTER_ID),
+	AML_DDR_PMU_EVENT_ATTR(chan_4_rw_bytes, CHAN4_COUNTER_ID),
+	AML_DDR_PMU_EVENT_ATTR(chan_5_rw_bytes, CHAN5_COUNTER_ID),
+	AML_DDR_PMU_EVENT_ATTR(chan_6_rw_bytes, CHAN6_COUNTER_ID),
+	AML_DDR_PMU_EVENT_ATTR(chan_7_rw_bytes, CHAN7_COUNTER_ID),
+	AML_DDR_PMU_EVENT_ATTR(chan_8_rw_bytes, CHAN8_COUNTER_ID),
+};
+
+/* three attrs (id, unit, scale) are combined into one event */
+static struct attribute *ddr_perf_events_attrs[COUNTER_MAX_ID * 3];
+
+static struct attribute_group ddr_perf_events_attr_group = {
+	.name = "events",
+	.attrs = ddr_perf_events_attrs,
+};
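The format attributes are shared across G12A/G12B/SM1, so the visibility callback that follows prunes the ones a given SoC cannot filter on: it replays each attribute's own "configN:&lt;bit&gt;" string through sscanf() and tests that bit against the per-SoC capability words. For instance, using the capability values defined later in this patch (illustrative arithmetic only):

	/* "nna" is "config1:10"; g12a's capability[0] is 0x7EFF00FF03DF */
	bool nna_on_g12a = 0x7EFF00FF03DFULL & BIT_ULL(10);	/* 0: hidden */
	/* sm1's capability[0] is 0x7EFF00FF07DF, so bit 10 is set */
	bool nna_on_sm1 = 0x7EFF00FF07DFULL & BIT_ULL(10);	/* 1: shown */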
+static umode_t meson_ddr_perf_format_attr_visible(struct kobject *kobj,
+						  struct attribute *attr,
+						  int n)
+{
+	struct pmu *pmu = dev_get_drvdata(kobj_to_dev(kobj));
+	struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu);
+	const u64 *capability = ddr_pmu->info.hw_info->capability;
+	struct device_attribute *dev_attr;
+	int id;
+	char value[20];	// "configN:xxx" fits in 20 bytes
+
+	dev_attr = container_of(attr, struct device_attribute, attr);
+	dev_attr->show(NULL, NULL, value);
+
+	if (sscanf(value, "config1:%d", &id) == 1)
+		return capability[0] & (1ULL << id) ? attr->mode : 0;
+
+	if (sscanf(value, "config2:%d", &id) == 1)
+		return capability[1] & (1ULL << id) ? attr->mode : 0;
+
+	return attr->mode;
+}
+
+static struct attribute_group ddr_perf_format_attr_group = {
+	.name = "format",
+	.is_visible = meson_ddr_perf_format_attr_visible,
+};
+
+static ssize_t meson_ddr_perf_identifier_show(struct device *dev,
+					      struct device_attribute *attr,
+					      char *page)
+{
+	struct ddr_pmu *pmu = dev_get_drvdata(dev);
+
+	return sysfs_emit(page, "%s\n", pmu->name);
+}
+
+static struct device_attribute meson_ddr_perf_identifier_attr =
+__ATTR(identifier, 0444, meson_ddr_perf_identifier_show, NULL);
+
+static struct attribute *meson_ddr_perf_identifier_attrs[] = {
+	&meson_ddr_perf_identifier_attr.attr,
+	NULL,
+};
+
+static const struct attribute_group ddr_perf_identifier_attr_group = {
+	.attrs = meson_ddr_perf_identifier_attrs,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&ddr_perf_events_attr_group,
+	&ddr_perf_format_attr_group,
+	&ddr_perf_cpumask_attr_group,
+	&ddr_perf_identifier_attr_group,
+	NULL,
+};
+
+static irqreturn_t dmc_irq_handler(int irq, void *dev_id)
+{
+	struct dmc_info *info = dev_id;
+	struct ddr_pmu *pmu;
+	struct dmc_counter counters, *sum_cnter;
+	int i;
+
+	pmu = dmc_info_to_pmu(info);
+
+	if (info->hw_info->irq_handler(info, &counters) != 0)
+		goto out;
+
+	sum_cnter = &pmu->counters;
+	sum_cnter->all_cnt += counters.all_cnt;
+	sum_cnter->all_req += counters.all_req;
+
+	for (i = 0; i < pmu->info.hw_info->chann_nr; i++)
+		sum_cnter->channel_cnt[i] += counters.channel_cnt[i];
+
+	if (pmu->pmu_enabled)
+		/*
+		 * The timer interrupt only supports one-shot mode, so it
+		 * has to be re-enabled from the ISR to keep counting
+		 * continuously.
+		 */
+		info->hw_info->enable(info);
+
+	dev_dbg(pmu->dev, "counts: %llu %llu %llu, %llu, %llu, %llu\t\t"
+			"sum: %llu %llu %llu, %llu, %llu, %llu\n",
+			counters.all_req,
+			counters.all_cnt,
+			counters.channel_cnt[0],
+			counters.channel_cnt[1],
+			counters.channel_cnt[2],
+			counters.channel_cnt[3],
+
+			pmu->counters.all_req,
+			pmu->counters.all_cnt,
+			pmu->counters.channel_cnt[0],
+			pmu->counters.channel_cnt[1],
+			pmu->counters.channel_cnt[2],
+			pmu->counters.channel_cnt[3]);
+out:
+	return IRQ_HANDLED;
+}
+
+static int ddr_perf_offline_cpu(unsigned int cpu, struct hlist_node *node)
+{
+	struct ddr_pmu *pmu = hlist_entry_safe(node, struct ddr_pmu, node);
+	int target;
+
+	if (cpu != pmu->cpu)
+		return 0;
+
+	target = cpumask_any_but(cpu_online_mask, cpu);
+	if (target >= nr_cpu_ids)
+		return 0;
+
+	perf_pmu_migrate_context(&pmu->pmu, cpu, target);
+	pmu->cpu = target;
+
+	WARN_ON(irq_set_affinity(pmu->info.irq_num, cpumask_of(pmu->cpu)));
+
+	return 0;
+}
+
+static void fill_event_attr(struct ddr_pmu *pmu)
+{
+	int i, j, k;
+	struct attribute **dst = ddr_perf_events_attrs;
+
+	j = 0;
+	k = 0;
+
+	/* fill the ALL_CHAN_COUNTER_ID event */
+	dst[j++] = &event_attrs[k].attr.attr;
+	dst[j++] = &event_unit_attrs[k].attr;
+	dst[j++] = &event_scale_attrs[k].attr;
+
+	k++;
+
+	/* fill each channel event */
+	for (i = 0; i < pmu->info.hw_info->chann_nr; i++, k++) {
+		dst[j++] = &event_attrs[k].attr.attr;
+		dst[j++] = &event_unit_attrs[k].attr;
+		dst[j++] = &event_scale_attrs[k].attr;
+	}
+
+	dst[j] = NULL;	/* mark end */
+}
+
+static void fmt_attr_fill(struct attribute **fmt_attr)
+{
+	ddr_perf_format_attr_group.attrs = fmt_attr;
+}
+
+static int ddr_pmu_parse_dt(struct platform_device *pdev,
+			    struct dmc_info *info)
+{
+	void __iomem *base;
+	int i, ret;
+
+	info->hw_info = of_device_get_match_data(&pdev->dev);
+
+	for (i = 0; i < info->hw_info->dmc_nr; i++) {
+		/* resource 0 for ddr register base */
+		base = 
devm_platform_ioremap_resource(pdev, i); + if (IS_ERR(base)) + return PTR_ERR(base); + + info->ddr_reg[i] = base; + } + + /* resource i for pll register base */ + base = devm_platform_ioremap_resource(pdev, i); + if (IS_ERR(base)) + return PTR_ERR(base); + + info->pll_reg = base; + + ret = platform_get_irq(pdev, 0); + if (ret < 0) + return ret; + + info->irq_num = ret; + + ret = devm_request_irq(&pdev->dev, info->irq_num, dmc_irq_handler, + IRQF_NOBALANCING, dev_name(&pdev->dev), + (void *)info); + if (ret < 0) + return ret; + + return 0; +} + +int meson_ddr_pmu_create(struct platform_device *pdev) +{ + int ret; + char *name; + struct ddr_pmu *pmu; + + pmu = devm_kzalloc(&pdev->dev, sizeof(struct ddr_pmu), GFP_KERNEL); + if (!pmu) + return -ENOMEM; + + *pmu = (struct ddr_pmu) { + .pmu = { + .module = THIS_MODULE, + .capabilities = PERF_PMU_CAP_NO_EXCLUDE, + .task_ctx_nr = perf_invalid_context, + .attr_groups = attr_groups, + .event_init = meson_ddr_perf_event_init, + .add = meson_ddr_perf_event_add, + .del = meson_ddr_perf_event_del, + .start = meson_ddr_perf_event_start, + .stop = meson_ddr_perf_event_stop, + .read = meson_ddr_perf_event_update, + }, + }; + + ret = ddr_pmu_parse_dt(pdev, &pmu->info); + if (ret < 0) + return ret; + + fmt_attr_fill(pmu->info.hw_info->fmt_attr); + + pmu->cpu = smp_processor_id(); + + name = devm_kasprintf(&pdev->dev, GFP_KERNEL, DDR_PERF_DEV_NAME); + if (!name) + return -ENOMEM; + + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, name, NULL, + ddr_perf_offline_cpu); + if (ret < 0) + return ret; + + pmu->cpuhp_state = ret; + + /* Register the pmu instance for cpu hotplug */ + ret = cpuhp_state_add_instance_nocalls(pmu->cpuhp_state, &pmu->node); + if (ret) + goto cpuhp_instance_err; + + fill_event_attr(pmu); + + ret = perf_pmu_register(&pmu->pmu, name, -1); + if (ret) + goto pmu_register_err; + + pmu->name = name; + pmu->dev = &pdev->dev; + pmu->pmu_enabled = false; + + platform_set_drvdata(pdev, pmu); + + return 0; + +pmu_register_err: + cpuhp_state_remove_instance_nocalls(pmu->cpuhp_state, &pmu->node); + +cpuhp_instance_err: + cpuhp_remove_state(pmu->cpuhp_state); + + return ret; +} + +int meson_ddr_pmu_remove(struct platform_device *pdev) +{ + struct ddr_pmu *pmu = platform_get_drvdata(pdev); + + perf_pmu_unregister(&pmu->pmu); + cpuhp_state_remove_instance_nocalls(pmu->cpuhp_state, &pmu->node); + cpuhp_remove_state(pmu->cpuhp_state); + + return 0; +} diff --git a/drivers/perf/amlogic/meson_g12_ddr_pmu.c b/drivers/perf/amlogic/meson_g12_ddr_pmu.c new file mode 100644 index 000000000000..a78fdb15e26c --- /dev/null +++ b/drivers/perf/amlogic/meson_g12_ddr_pmu.c @@ -0,0 +1,394 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2022 Amlogic, Inc. All rights reserved. 
+ */
+
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/printk.h>
+#include <linux/types.h>
+
+#include <soc/amlogic/meson_ddr_pmu.h>
+
+#define PORT_MAJOR		32
+#define DEFAULT_XTAL_FREQ	24000000UL
+
+#define DMC_QOS_IRQ		BIT(30)
+
+/* DMC bandwidth monitor register address offsets */
+#define DMC_MON_G12_CTRL0		(0x20 << 2)
+#define DMC_MON_G12_CTRL1		(0x21 << 2)
+#define DMC_MON_G12_CTRL2		(0x22 << 2)
+#define DMC_MON_G12_CTRL3		(0x23 << 2)
+#define DMC_MON_G12_CTRL4		(0x24 << 2)
+#define DMC_MON_G12_CTRL5		(0x25 << 2)
+#define DMC_MON_G12_CTRL6		(0x26 << 2)
+#define DMC_MON_G12_CTRL7		(0x27 << 2)
+#define DMC_MON_G12_CTRL8		(0x28 << 2)
+
+#define DMC_MON_G12_ALL_REQ_CNT		(0x29 << 2)
+#define DMC_MON_G12_ALL_GRANT_CNT	(0x2a << 2)
+#define DMC_MON_G12_ONE_GRANT_CNT	(0x2b << 2)
+#define DMC_MON_G12_SEC_GRANT_CNT	(0x2c << 2)
+#define DMC_MON_G12_THD_GRANT_CNT	(0x2d << 2)
+#define DMC_MON_G12_FOR_GRANT_CNT	(0x2e << 2)
+#define DMC_MON_G12_TIMER		(0x2f << 2)
+
+/* Each bit represents an AXI line */
+PMU_FORMAT_ATTR(event, "config:0-7");
+PMU_FORMAT_ATTR(arm, "config1:0");
+PMU_FORMAT_ATTR(gpu, "config1:1");
+PMU_FORMAT_ATTR(pcie, "config1:2");
+PMU_FORMAT_ATTR(hdcp, "config1:3");
+PMU_FORMAT_ATTR(hevc_front, "config1:4");
+PMU_FORMAT_ATTR(usb3_0, "config1:6");
+PMU_FORMAT_ATTR(device, "config1:7");
+PMU_FORMAT_ATTR(hevc_back, "config1:8");
+PMU_FORMAT_ATTR(h265enc, "config1:9");
+PMU_FORMAT_ATTR(vpu_read1, "config1:16");
+PMU_FORMAT_ATTR(vpu_read2, "config1:17");
+PMU_FORMAT_ATTR(vpu_read3, "config1:18");
+PMU_FORMAT_ATTR(vpu_write1, "config1:19");
+PMU_FORMAT_ATTR(vpu_write2, "config1:20");
+PMU_FORMAT_ATTR(vdec, "config1:21");
+PMU_FORMAT_ATTR(hcodec, "config1:22");
+PMU_FORMAT_ATTR(ge2d, "config1:23");
+
+PMU_FORMAT_ATTR(spicc1, "config1:32");
+PMU_FORMAT_ATTR(usb0, "config1:33");
+PMU_FORMAT_ATTR(dma, "config1:34");
+PMU_FORMAT_ATTR(arb0, "config1:35");
+PMU_FORMAT_ATTR(sd_emmc_b, "config1:36");
+PMU_FORMAT_ATTR(usb1, "config1:37");
+PMU_FORMAT_ATTR(audio, "config1:38");
+PMU_FORMAT_ATTR(aififo, "config1:39");
+PMU_FORMAT_ATTR(parser, "config1:41");
+PMU_FORMAT_ATTR(ao_cpu, "config1:42");
+PMU_FORMAT_ATTR(sd_emmc_c, "config1:43");
+PMU_FORMAT_ATTR(spicc2, "config1:44");
+PMU_FORMAT_ATTR(ethernet, "config1:45");
+PMU_FORMAT_ATTR(sana, "config1:46");
+
+/* for sm1 and g12b */
+PMU_FORMAT_ATTR(nna, "config1:10");
+
+/* for g12b only */
+PMU_FORMAT_ATTR(gdc, "config1:11");
+PMU_FORMAT_ATTR(mipi_isp, "config1:12");
+PMU_FORMAT_ATTR(arm1, "config1:13");
+PMU_FORMAT_ATTR(sd_emmc_a, "config1:40");
+
+static struct attribute *g12_pmu_format_attrs[] = {
+	&format_attr_event.attr,
+	&format_attr_arm.attr,
+	&format_attr_gpu.attr,
+	&format_attr_nna.attr,
+	&format_attr_gdc.attr,
+	&format_attr_arm1.attr,
+	&format_attr_mipi_isp.attr,
+	&format_attr_sd_emmc_a.attr,
+	&format_attr_pcie.attr,
+	&format_attr_hdcp.attr,
+	&format_attr_hevc_front.attr,
+	&format_attr_usb3_0.attr,
+	&format_attr_device.attr,
+	&format_attr_hevc_back.attr,
+	&format_attr_h265enc.attr,
+	&format_attr_vpu_read1.attr,
+	&format_attr_vpu_read2.attr,
+	&format_attr_vpu_read3.attr,
+	&format_attr_vpu_write1.attr,
+	&format_attr_vpu_write2.attr,
+	&format_attr_vdec.attr,
+	&format_attr_hcodec.attr,
+	&format_attr_ge2d.attr,
+	&format_attr_spicc1.attr,
+	&format_attr_usb0.attr,
+	&format_attr_dma.attr,
+	&format_attr_arb0.attr,
+	&format_attr_sd_emmc_b.attr,
+	&format_attr_usb1.attr,
&format_attr_audio.attr, + &format_attr_aififo.attr, + &format_attr_parser.attr, + &format_attr_ao_cpu.attr, + &format_attr_sd_emmc_c.attr, + &format_attr_spicc2.attr, + &format_attr_ethernet.attr, + &format_attr_sana.attr, + NULL, +}; + +/* calculate ddr clock */ +static unsigned long dmc_g12_get_freq_quick(struct dmc_info *info) +{ + unsigned int val; + unsigned int n, m, od1; + unsigned int od_div = 0xfff; + unsigned long freq = 0; + + val = readl(info->pll_reg); + val = val & 0xfffff; + switch ((val >> 16) & 7) { + case 0: + od_div = 2; + break; + + case 1: + od_div = 3; + break; + + case 2: + od_div = 4; + break; + + case 3: + od_div = 6; + break; + + case 4: + od_div = 8; + break; + + default: + break; + } + + m = val & 0x1ff; + n = ((val >> 10) & 0x1f); + od1 = (((val >> 19) & 0x1)) == 1 ? 2 : 1; + freq = DEFAULT_XTAL_FREQ / 1000; /* avoid overflow */ + if (n) + freq = ((((freq * m) / n) >> od1) / od_div) * 1000; + + return freq; +} + +#ifdef DEBUG +static void g12_dump_reg(struct dmc_info *db) +{ + int s = 0, i; + unsigned int r; + + for (i = 0; i < 9; i++) { + r = readl(db->ddr_reg[0] + (DMC_MON_G12_CTRL0 + (i << 2))); + pr_notice("DMC_MON_CTRL%d: %08x\n", i, r); + } + r = readl(db->ddr_reg[0] + DMC_MON_G12_ALL_REQ_CNT); + pr_notice("DMC_MON_ALL_REQ_CNT: %08x\n", r); + r = readl(db->ddr_reg[0] + DMC_MON_G12_ALL_GRANT_CNT); + pr_notice("DMC_MON_ALL_GRANT_CNT:%08x\n", r); + r = readl(db->ddr_reg[0] + DMC_MON_G12_ONE_GRANT_CNT); + pr_notice("DMC_MON_ONE_GRANT_CNT:%08x\n", r); + r = readl(db->ddr_reg[0] + DMC_MON_G12_SEC_GRANT_CNT); + pr_notice("DMC_MON_SEC_GRANT_CNT:%08x\n", r); + r = readl(db->ddr_reg[0] + DMC_MON_G12_THD_GRANT_CNT); + pr_notice("DMC_MON_THD_GRANT_CNT:%08x\n", r); + r = readl(db->ddr_reg[0] + DMC_MON_G12_FOR_GRANT_CNT); + pr_notice("DMC_MON_FOR_GRANT_CNT:%08x\n", r); + r = readl(db->ddr_reg[0] + DMC_MON_G12_TIMER); + pr_notice("DMC_MON_TIMER: %08x\n", r); +} +#endif + +static void dmc_g12_counter_enable(struct dmc_info *info) +{ + unsigned int val; + unsigned long clock_count = dmc_g12_get_freq_quick(info) / 10; /* 100ms */ + + writel(clock_count, info->ddr_reg[0] + DMC_MON_G12_TIMER); + + val = readl(info->ddr_reg[0] + DMC_MON_G12_CTRL0); + + /* enable all channel */ + val = BIT(31) | /* enable bit */ + BIT(20) | /* use timer */ + 0x0f; /* 4 channels */ + + writel(val, info->ddr_reg[0] + DMC_MON_G12_CTRL0); + +#ifdef DEBUG + g12_dump_reg(info); +#endif +} + +static void dmc_g12_config_fiter(struct dmc_info *info, + int port, int channel) +{ + u32 val; + u32 rp[MAX_CHANNEL_NUM] = {DMC_MON_G12_CTRL1, DMC_MON_G12_CTRL3, + DMC_MON_G12_CTRL5, DMC_MON_G12_CTRL7}; + u32 rs[MAX_CHANNEL_NUM] = {DMC_MON_G12_CTRL2, DMC_MON_G12_CTRL4, + DMC_MON_G12_CTRL6, DMC_MON_G12_CTRL8}; + int subport = -1; + + /* clear all port mask */ + if (port < 0) { + writel(0, info->ddr_reg[0] + rp[channel]); + writel(0, info->ddr_reg[0] + rs[channel]); + return; + } + + if (port >= PORT_MAJOR) + subport = port - PORT_MAJOR; + + if (subport < 0) { + val = readl(info->ddr_reg[0] + rp[channel]); + val |= (1 << port); + writel(val, info->ddr_reg[0] + rp[channel]); + val = 0xffff; + writel(val, info->ddr_reg[0] + rs[channel]); + } else { + val = BIT(23); /* select device */ + writel(val, info->ddr_reg[0] + rp[channel]); + val = readl(info->ddr_reg[0] + rs[channel]); + val |= (1 << subport); + writel(val, info->ddr_reg[0] + rs[channel]); + } +} + +static void dmc_g12_set_axi_filter(struct dmc_info *info, int axi_id, int channel) +{ + if (channel > info->hw_info->chann_nr) + return; + + 
dmc_g12_config_fiter(info, axi_id, channel); +} + +static void dmc_g12_counter_disable(struct dmc_info *info) +{ + int i; + + /* clear timer */ + writel(0, info->ddr_reg[0] + DMC_MON_G12_CTRL0); + writel(0, info->ddr_reg[0] + DMC_MON_G12_TIMER); + + writel(0, info->ddr_reg[0] + DMC_MON_G12_ALL_REQ_CNT); + writel(0, info->ddr_reg[0] + DMC_MON_G12_ALL_GRANT_CNT); + writel(0, info->ddr_reg[0] + DMC_MON_G12_ONE_GRANT_CNT); + writel(0, info->ddr_reg[0] + DMC_MON_G12_SEC_GRANT_CNT); + writel(0, info->ddr_reg[0] + DMC_MON_G12_THD_GRANT_CNT); + writel(0, info->ddr_reg[0] + DMC_MON_G12_FOR_GRANT_CNT); + + /* clear port channel mapping */ + for (i = 0; i < info->hw_info->chann_nr; i++) + dmc_g12_config_fiter(info, -1, i); +} + +static void dmc_g12_get_counters(struct dmc_info *info, + struct dmc_counter *counter) +{ + int i; + unsigned int reg; + + counter->all_cnt = readl(info->ddr_reg[0] + DMC_MON_G12_ALL_GRANT_CNT); + counter->all_req = readl(info->ddr_reg[0] + DMC_MON_G12_ALL_REQ_CNT); + + for (i = 0; i < info->hw_info->chann_nr; i++) { + reg = DMC_MON_G12_ONE_GRANT_CNT + (i << 2); + counter->channel_cnt[i] = readl(info->ddr_reg[0] + reg); + } +} + +static int dmc_g12_irq_handler(struct dmc_info *info, + struct dmc_counter *counter) +{ + unsigned int val; + int ret = -EINVAL; + + val = readl(info->ddr_reg[0] + DMC_MON_G12_CTRL0); + if (val & DMC_QOS_IRQ) { + dmc_g12_get_counters(info, counter); + /* clear irq flags */ + writel(val, info->ddr_reg[0] + DMC_MON_G12_CTRL0); + ret = 0; + } + return ret; +} + +static const struct dmc_hw_info g12a_dmc_info = { + .enable = dmc_g12_counter_enable, + .disable = dmc_g12_counter_disable, + .irq_handler = dmc_g12_irq_handler, + .get_counters = dmc_g12_get_counters, + .set_axi_filter = dmc_g12_set_axi_filter, + + .dmc_nr = 1, + .chann_nr = 4, + .capability = {0X7EFF00FF03DF, 0}, + .fmt_attr = g12_pmu_format_attrs, +}; + +static const struct dmc_hw_info g12b_dmc_info = { + .enable = dmc_g12_counter_enable, + .disable = dmc_g12_counter_disable, + .irq_handler = dmc_g12_irq_handler, + .get_counters = dmc_g12_get_counters, + .set_axi_filter = dmc_g12_set_axi_filter, + + .dmc_nr = 1, + .chann_nr = 4, + .capability = {0X7FFF00FF3FDF, 0}, + .fmt_attr = g12_pmu_format_attrs, +}; + +static const struct dmc_hw_info sm1_dmc_info = { + .enable = dmc_g12_counter_enable, + .disable = dmc_g12_counter_disable, + .irq_handler = dmc_g12_irq_handler, + .get_counters = dmc_g12_get_counters, + .set_axi_filter = dmc_g12_set_axi_filter, + + .dmc_nr = 1, + .chann_nr = 4, + .capability = {0X7EFF00FF07DF, 0}, + .fmt_attr = g12_pmu_format_attrs, +}; + +static int g12_ddr_pmu_probe(struct platform_device *pdev) +{ + return meson_ddr_pmu_create(pdev); +} + +static int g12_ddr_pmu_remove(struct platform_device *pdev) +{ + meson_ddr_pmu_remove(pdev); + + return 0; +} + +static const struct of_device_id meson_ddr_pmu_dt_match[] = { + { + .compatible = "amlogic,g12a-ddr-pmu", + .data = &g12a_dmc_info, + }, + { + .compatible = "amlogic,g12b-ddr-pmu", + .data = &g12b_dmc_info, + }, + { + .compatible = "amlogic,sm1-ddr-pmu", + .data = &sm1_dmc_info, + }, + {} +}; + +static struct platform_driver g12_ddr_pmu_driver = { + .probe = g12_ddr_pmu_probe, + .remove = g12_ddr_pmu_remove, + + .driver = { + .name = "meson-g12-ddr-pmu", + .of_match_table = meson_ddr_pmu_dt_match, + }, +}; + +module_platform_driver(g12_ddr_pmu_driver); +MODULE_AUTHOR("Jiucheng Xu"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Amlogic G12 series SoC DDR PMU"); diff --git a/drivers/perf/arm_cspmu/Kconfig 
b/drivers/perf/arm_cspmu/Kconfig
new file mode 100644
index 000000000000..0b316fe69a45
--- /dev/null
+++ b/drivers/perf/arm_cspmu/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
+config ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU
+	tristate "ARM Coresight Architecture PMU"
+	depends on ARM64 && ACPI
+	depends on ACPI_APMT || COMPILE_TEST
+	help
+	  Provides support for performance monitoring unit (PMU) devices
+	  based on the ARM CoreSight PMU architecture. Note that this PMU
+	  architecture has no relationship with ARM CoreSight Self-Hosted
+	  Tracing.
diff --git a/drivers/perf/arm_cspmu/Makefile b/drivers/perf/arm_cspmu/Makefile
new file mode 100644
index 000000000000..fedb17df982d
--- /dev/null
+++ b/drivers/perf/arm_cspmu/Makefile
@@ -0,0 +1,6 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU) += arm_cspmu_module.o
+arm_cspmu_module-y := arm_cspmu.o nvidia_cspmu.o
diff --git a/drivers/perf/arm_cspmu/arm_cspmu.c b/drivers/perf/arm_cspmu/arm_cspmu.c
new file mode 100644
index 000000000000..e31302ab7e37
--- /dev/null
+++ b/drivers/perf/arm_cspmu/arm_cspmu.c
@@ -0,0 +1,1303 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ARM CoreSight Architecture PMU driver.
+ *
+ * This driver adds support for uncore PMUs based on the ARM CoreSight
+ * Performance Monitoring Unit architecture. The PMU is accessible via MMIO
+ * registers and, like other uncore PMUs, it does not support process-specific
+ * events and cannot be used in sampling mode.
+ *
+ * This code is based on other uncore PMUs like the ARM DSU PMU. It provides a
+ * generic implementation to operate the PMU according to the CoreSight PMU
+ * architecture and the ACPI ARM PMU table (APMT) documents below:
+ * - ARM CoreSight PMU architecture document number: ARM IHI 0091 A.a-00bet0.
+ * - APMT document number: ARM DEN0117.
+ *
+ * The user should refer to the vendor technical documentation to get details
+ * about the supported events.
+ *
+ * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ *
+ */
+
+#include <linux/acpi.h>
+#include <linux/cacheinfo.h>
+#include <linux/ctype.h>
+#include <linux/interrupt.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/module.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <acpi/processor.h>
+
+#include "arm_cspmu.h"
+#include "nvidia_cspmu.h"
+
+#define PMUNAME	"arm_cspmu"
+#define DRVNAME	"arm-cs-arch-pmu"
+
+#define ARM_CSPMU_CPUMASK_ATTR(_name, _config)			\
+	ARM_CSPMU_EXT_ATTR(_name, arm_cspmu_cpumask_show,	\
+			   (unsigned long)_config)
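Vendor backends hook into this generic driver through the ops table declared in arm_cspmu.h; nvidia_cspmu.c, matched by PMIIDR later in this file, is the first such backend. As a rough sketch of the pattern (the example_* names are hypothetical, not from this patch), an init hook fills only the callbacks it wants to override, and arm_cspmu_init_impl_ops() falls back to the generic arm_cspmu_* defaults for the rest:

	static u32 example_event_type(const struct perf_event *event)
	{
		/* hypothetical decoder: use only the low 16 bits of config */
		return event->attr.config & GENMASK(15, 0);
	}

	static int example_cspmu_init_ops(struct arm_cspmu *cspmu)
	{
		/* callbacks left NULL get the generic arm_cspmu_* defaults */
		cspmu->impl.ops.event_type = example_event_type;
		return 0;
	}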
+/*
+ * CoreSight PMU architecture register offsets.
+ */
+#define PMEVCNTR_LO		0x0
+#define PMEVCNTR_HI		0x4
+#define PMEVTYPER		0x400
+#define PMCCFILTR		0x47C
+#define PMEVFILTR		0xA00
+#define PMCNTENSET		0xC00
+#define PMCNTENCLR		0xC20
+#define PMINTENSET		0xC40
+#define PMINTENCLR		0xC60
+#define PMOVSCLR		0xC80
+#define PMOVSSET		0xCC0
+#define PMCFGR			0xE00
+#define PMCR			0xE04
+#define PMIIDR			0xE08
+
+/* PMCFGR register fields */
+#define PMCFGR_NCG		GENMASK(31, 28)
+#define PMCFGR_HDBG		BIT(24)
+#define PMCFGR_TRO		BIT(23)
+#define PMCFGR_SS		BIT(22)
+#define PMCFGR_FZO		BIT(21)
+#define PMCFGR_MSI		BIT(20)
+#define PMCFGR_UEN		BIT(19)
+#define PMCFGR_NA		BIT(17)
+#define PMCFGR_EX		BIT(16)
+#define PMCFGR_CCD		BIT(15)
+#define PMCFGR_CC		BIT(14)
+#define PMCFGR_SIZE		GENMASK(13, 8)
+#define PMCFGR_N		GENMASK(7, 0)
+
+/* PMCR register fields */
+#define PMCR_TRO		BIT(11)
+#define PMCR_HDBG		BIT(10)
+#define PMCR_FZO		BIT(9)
+#define PMCR_NA			BIT(8)
+#define PMCR_DP			BIT(5)
+#define PMCR_X			BIT(4)
+#define PMCR_D			BIT(3)
+#define PMCR_C			BIT(2)
+#define PMCR_P			BIT(1)
+#define PMCR_E			BIT(0)
+
+/* Each SET/CLR register supports up to 32 counters. */
+#define ARM_CSPMU_SET_CLR_COUNTER_SHIFT		5
+#define ARM_CSPMU_SET_CLR_COUNTER_NUM		\
+	(1 << ARM_CSPMU_SET_CLR_COUNTER_SHIFT)
+
+/* Convert counter idx into SET/CLR register number. */
+#define COUNTER_TO_SET_CLR_ID(idx)			\
+	(idx >> ARM_CSPMU_SET_CLR_COUNTER_SHIFT)
+
+/* Convert counter idx into SET/CLR register bit. */
+#define COUNTER_TO_SET_CLR_BIT(idx)			\
+	(idx & (ARM_CSPMU_SET_CLR_COUNTER_NUM - 1))
+
+#define ARM_CSPMU_ACTIVE_CPU_MASK		0x0
+#define ARM_CSPMU_ASSOCIATED_CPU_MASK		0x1
+
+/* Check if field f in flags is set with value v */
+#define CHECK_APMT_FLAG(flags, f, v) \
+	((flags & (ACPI_APMT_FLAGS_ ## f)) == (ACPI_APMT_FLAGS_ ## f ## _ ## v))
+
+/* Check and use default if implementer doesn't provide attribute callback */
+#define CHECK_DEFAULT_IMPL_OPS(ops, callback)			\
+	do {							\
+		if (!ops->callback)				\
+			ops->callback = arm_cspmu_ ## callback;	\
+	} while (0)
+
+/*
+ * Maximum poll count for reading counter value using high-low-high sequence.
+ */
+#define HILOHI_MAX_POLL	1000
+
+/* JEDEC-assigned JEP106 identification code */
+#define ARM_CSPMU_IMPL_ID_NVIDIA	0x36B
+
+static unsigned long arm_cspmu_cpuhp_state;
+
+/*
+ * In the CoreSight PMU architecture, all of the MMIO registers are 32-bit
+ * except the counter registers. A counter register can be implemented as a
+ * 32-bit or a 64-bit register depending on the value of the PMCFGR.SIZE
+ * field. For 64-bit access, single-copy 64-bit atomic support is
+ * implementation defined. An APMT node flag identifies whether the PMU
+ * supports 64-bit single-copy atomics. If it does not, the driver treats
+ * the register as a pair of 32-bit registers.
+ */
+
+/*
+ * Read a 64-bit register as a pair of 32-bit registers using the hi-lo-hi
+ * sequence.
+ */
+static u64 read_reg64_hilohi(const void __iomem *addr, u32 max_poll_count)
+{
+	u32 val_lo, val_hi;
+	u64 val;
+
+	/* Use the high-low-high sequence to avoid tearing */
+	do {
+		if (max_poll_count-- == 0) {
+			pr_err("ARM CSPMU: timeout hi-low-high sequence\n");
+			return 0;
+		}
+
+		val_hi = readl(addr + 4);
+		val_lo = readl(addr);
+	} while (val_hi != readl(addr + 4));
+
+	val = (((u64)val_hi << 32) | val_lo);
+
+	return val;
+}
+
+/* Check if the PMU supports 64-bit single-copy atomics. */
+static inline bool supports_64bit_atomics(const struct arm_cspmu *cspmu)
+{
+	return CHECK_APMT_FLAG(cspmu->apmt_node->flags, ATOMIC, SUPP);
+}
+
+/* Check if the cycle counter is supported.
*/ +static inline bool supports_cycle_counter(const struct arm_cspmu *cspmu) +{ + return (cspmu->pmcfgr & PMCFGR_CC); +} + +/* Get counter size, which is (PMCFGR_SIZE + 1). */ +static inline u32 counter_size(const struct arm_cspmu *cspmu) +{ + return FIELD_GET(PMCFGR_SIZE, cspmu->pmcfgr) + 1; +} + +/* Get counter mask. */ +static inline u64 counter_mask(const struct arm_cspmu *cspmu) +{ + return GENMASK_ULL(counter_size(cspmu) - 1, 0); +} + +/* Check if counter is implemented as 64-bit register. */ +static inline bool use_64b_counter_reg(const struct arm_cspmu *cspmu) +{ + return (counter_size(cspmu) > 32); +} + +ssize_t arm_cspmu_sysfs_event_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct dev_ext_attribute *eattr = + container_of(attr, struct dev_ext_attribute, attr); + return sysfs_emit(buf, "event=0x%llx\n", + (unsigned long long)eattr->var); +} +EXPORT_SYMBOL_GPL(arm_cspmu_sysfs_event_show); + +/* Default event list. */ +static struct attribute *arm_cspmu_event_attrs[] = { + ARM_CSPMU_EVENT_ATTR(cycles, ARM_CSPMU_EVT_CYCLES_DEFAULT), + NULL, +}; + +static struct attribute ** +arm_cspmu_get_event_attrs(const struct arm_cspmu *cspmu) +{ + struct attribute **attrs; + + attrs = devm_kmemdup(cspmu->dev, arm_cspmu_event_attrs, + sizeof(arm_cspmu_event_attrs), GFP_KERNEL); + + return attrs; +} + +static umode_t +arm_cspmu_event_attr_is_visible(struct kobject *kobj, + struct attribute *attr, int unused) +{ + struct device *dev = kobj_to_dev(kobj); + struct arm_cspmu *cspmu = to_arm_cspmu(dev_get_drvdata(dev)); + struct perf_pmu_events_attr *eattr; + + eattr = container_of(attr, typeof(*eattr), attr.attr); + + /* Hide cycle event if not supported */ + if (!supports_cycle_counter(cspmu) && + eattr->id == ARM_CSPMU_EVT_CYCLES_DEFAULT) + return 0; + + return attr->mode; +} + +ssize_t arm_cspmu_sysfs_format_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct dev_ext_attribute *eattr = + container_of(attr, struct dev_ext_attribute, attr); + return sysfs_emit(buf, "%s\n", (char *)eattr->var); +} +EXPORT_SYMBOL_GPL(arm_cspmu_sysfs_format_show); + +static struct attribute *arm_cspmu_format_attrs[] = { + ARM_CSPMU_FORMAT_EVENT_ATTR, + ARM_CSPMU_FORMAT_FILTER_ATTR, + NULL, +}; + +static struct attribute ** +arm_cspmu_get_format_attrs(const struct arm_cspmu *cspmu) +{ + struct attribute **attrs; + + attrs = devm_kmemdup(cspmu->dev, arm_cspmu_format_attrs, + sizeof(arm_cspmu_format_attrs), GFP_KERNEL); + + return attrs; +} + +static u32 arm_cspmu_event_type(const struct perf_event *event) +{ + return event->attr.config & ARM_CSPMU_EVENT_MASK; +} + +static bool arm_cspmu_is_cycle_counter_event(const struct perf_event *event) +{ + return (event->attr.config == ARM_CSPMU_EVT_CYCLES_DEFAULT); +} + +static u32 arm_cspmu_event_filter(const struct perf_event *event) +{ + return event->attr.config1 & ARM_CSPMU_FILTER_MASK; +} + +static ssize_t arm_cspmu_identifier_show(struct device *dev, + struct device_attribute *attr, + char *page) +{ + struct arm_cspmu *cspmu = to_arm_cspmu(dev_get_drvdata(dev)); + + return sysfs_emit(page, "%s\n", cspmu->identifier); +} + +static struct device_attribute arm_cspmu_identifier_attr = + __ATTR(identifier, 0444, arm_cspmu_identifier_show, NULL); + +static struct attribute *arm_cspmu_identifier_attrs[] = { + &arm_cspmu_identifier_attr.attr, + NULL, +}; + +static struct attribute_group arm_cspmu_identifier_attr_group = { + .attrs = arm_cspmu_identifier_attrs, +}; + +static const char *arm_cspmu_get_identifier(const 
struct arm_cspmu *cspmu) +{ + const char *identifier = + devm_kasprintf(cspmu->dev, GFP_KERNEL, "%x", + cspmu->impl.pmiidr); + return identifier; +} + +static const char *arm_cspmu_type_str[ACPI_APMT_NODE_TYPE_COUNT] = { + "mc", + "smmu", + "pcie", + "acpi", + "cache", +}; + +static const char *arm_cspmu_get_name(const struct arm_cspmu *cspmu) +{ + struct device *dev; + struct acpi_apmt_node *apmt_node; + u8 pmu_type; + char *name; + char acpi_hid_string[ACPI_ID_LEN] = { 0 }; + static atomic_t pmu_idx[ACPI_APMT_NODE_TYPE_COUNT] = { 0 }; + + dev = cspmu->dev; + apmt_node = cspmu->apmt_node; + pmu_type = apmt_node->type; + + if (pmu_type >= ACPI_APMT_NODE_TYPE_COUNT) { + dev_err(dev, "unsupported PMU type-%u\n", pmu_type); + return NULL; + } + + if (pmu_type == ACPI_APMT_NODE_TYPE_ACPI) { + memcpy(acpi_hid_string, + &apmt_node->inst_primary, + sizeof(apmt_node->inst_primary)); + name = devm_kasprintf(dev, GFP_KERNEL, "%s_%s_%s_%u", PMUNAME, + arm_cspmu_type_str[pmu_type], + acpi_hid_string, + apmt_node->inst_secondary); + } else { + name = devm_kasprintf(dev, GFP_KERNEL, "%s_%s_%d", PMUNAME, + arm_cspmu_type_str[pmu_type], + atomic_fetch_inc(&pmu_idx[pmu_type])); + } + + return name; +} + +static ssize_t arm_cspmu_cpumask_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct pmu *pmu = dev_get_drvdata(dev); + struct arm_cspmu *cspmu = to_arm_cspmu(pmu); + struct dev_ext_attribute *eattr = + container_of(attr, struct dev_ext_attribute, attr); + unsigned long mask_id = (unsigned long)eattr->var; + const cpumask_t *cpumask; + + switch (mask_id) { + case ARM_CSPMU_ACTIVE_CPU_MASK: + cpumask = &cspmu->active_cpu; + break; + case ARM_CSPMU_ASSOCIATED_CPU_MASK: + cpumask = &cspmu->associated_cpus; + break; + default: + return 0; + } + return cpumap_print_to_pagebuf(true, buf, cpumask); +} + +static struct attribute *arm_cspmu_cpumask_attrs[] = { + ARM_CSPMU_CPUMASK_ATTR(cpumask, ARM_CSPMU_ACTIVE_CPU_MASK), + ARM_CSPMU_CPUMASK_ATTR(associated_cpus, ARM_CSPMU_ASSOCIATED_CPU_MASK), + NULL, +}; + +static struct attribute_group arm_cspmu_cpumask_attr_group = { + .attrs = arm_cspmu_cpumask_attrs, +}; + +struct impl_match { + u32 pmiidr; + u32 mask; + int (*impl_init_ops)(struct arm_cspmu *cspmu); +}; + +static const struct impl_match impl_match[] = { + { + .pmiidr = ARM_CSPMU_IMPL_ID_NVIDIA, + .mask = ARM_CSPMU_PMIIDR_IMPLEMENTER, + .impl_init_ops = nv_cspmu_init_ops + }, + {} +}; + +static int arm_cspmu_init_impl_ops(struct arm_cspmu *cspmu) +{ + int ret; + struct acpi_apmt_node *apmt_node = cspmu->apmt_node; + struct arm_cspmu_impl_ops *impl_ops = &cspmu->impl.ops; + const struct impl_match *match = impl_match; + + /* + * Get PMU implementer and product id from APMT node. + * If APMT node doesn't have implementer/product id, try get it + * from PMIIDR. + */ + cspmu->impl.pmiidr = + (apmt_node->impl_id) ? apmt_node->impl_id : + readl(cspmu->base0 + PMIIDR); + + /* Find implementer specific attribute ops. */ + for (; match->pmiidr; match++) { + const u32 mask = match->mask; + + if ((match->pmiidr & mask) == (cspmu->impl.pmiidr & mask)) { + ret = match->impl_init_ops(cspmu); + if (ret) + return ret; + + break; + } + } + + /* Use default callbacks if implementer doesn't provide one. 
*/ + CHECK_DEFAULT_IMPL_OPS(impl_ops, get_event_attrs); + CHECK_DEFAULT_IMPL_OPS(impl_ops, get_format_attrs); + CHECK_DEFAULT_IMPL_OPS(impl_ops, get_identifier); + CHECK_DEFAULT_IMPL_OPS(impl_ops, get_name); + CHECK_DEFAULT_IMPL_OPS(impl_ops, is_cycle_counter_event); + CHECK_DEFAULT_IMPL_OPS(impl_ops, event_type); + CHECK_DEFAULT_IMPL_OPS(impl_ops, event_filter); + CHECK_DEFAULT_IMPL_OPS(impl_ops, event_attr_is_visible); + + return 0; +} + +static struct attribute_group * +arm_cspmu_alloc_event_attr_group(struct arm_cspmu *cspmu) +{ + struct attribute_group *event_group; + struct device *dev = cspmu->dev; + const struct arm_cspmu_impl_ops *impl_ops = &cspmu->impl.ops; + + event_group = + devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL); + if (!event_group) + return NULL; + + event_group->name = "events"; + event_group->is_visible = impl_ops->event_attr_is_visible; + event_group->attrs = impl_ops->get_event_attrs(cspmu); + + if (!event_group->attrs) + return NULL; + + return event_group; +} + +static struct attribute_group * +arm_cspmu_alloc_format_attr_group(struct arm_cspmu *cspmu) +{ + struct attribute_group *format_group; + struct device *dev = cspmu->dev; + + format_group = + devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL); + if (!format_group) + return NULL; + + format_group->name = "format"; + format_group->attrs = cspmu->impl.ops.get_format_attrs(cspmu); + + if (!format_group->attrs) + return NULL; + + return format_group; +} + +static struct attribute_group ** +arm_cspmu_alloc_attr_group(struct arm_cspmu *cspmu) +{ + struct attribute_group **attr_groups = NULL; + struct device *dev = cspmu->dev; + const struct arm_cspmu_impl_ops *impl_ops = &cspmu->impl.ops; + int ret; + + ret = arm_cspmu_init_impl_ops(cspmu); + if (ret) + return NULL; + + cspmu->identifier = impl_ops->get_identifier(cspmu); + cspmu->name = impl_ops->get_name(cspmu); + + if (!cspmu->identifier || !cspmu->name) + return NULL; + + attr_groups = devm_kcalloc(dev, 5, sizeof(struct attribute_group *), + GFP_KERNEL); + if (!attr_groups) + return NULL; + + attr_groups[0] = arm_cspmu_alloc_event_attr_group(cspmu); + attr_groups[1] = arm_cspmu_alloc_format_attr_group(cspmu); + attr_groups[2] = &arm_cspmu_identifier_attr_group; + attr_groups[3] = &arm_cspmu_cpumask_attr_group; + + if (!attr_groups[0] || !attr_groups[1]) + return NULL; + + return attr_groups; +} + +static inline void arm_cspmu_reset_counters(struct arm_cspmu *cspmu) +{ + u32 pmcr = 0; + + pmcr |= PMCR_P; + pmcr |= PMCR_C; + writel(pmcr, cspmu->base0 + PMCR); +} + +static inline void arm_cspmu_start_counters(struct arm_cspmu *cspmu) +{ + writel(PMCR_E, cspmu->base0 + PMCR); +} + +static inline void arm_cspmu_stop_counters(struct arm_cspmu *cspmu) +{ + writel(0, cspmu->base0 + PMCR); +} + +static void arm_cspmu_enable(struct pmu *pmu) +{ + bool disabled; + struct arm_cspmu *cspmu = to_arm_cspmu(pmu); + + disabled = bitmap_empty(cspmu->hw_events.used_ctrs, + cspmu->num_logical_ctrs); + + if (disabled) + return; + + arm_cspmu_start_counters(cspmu); +} + +static void arm_cspmu_disable(struct pmu *pmu) +{ + struct arm_cspmu *cspmu = to_arm_cspmu(pmu); + + arm_cspmu_stop_counters(cspmu); +} + +static int arm_cspmu_get_event_idx(struct arm_cspmu_hw_events *hw_events, + struct perf_event *event) +{ + int idx; + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + + if (supports_cycle_counter(cspmu)) { + if (cspmu->impl.ops.is_cycle_counter_event(event)) { + /* Search for available cycle counter. 
*/ + if (test_and_set_bit(cspmu->cycle_counter_logical_idx, + hw_events->used_ctrs)) + return -EAGAIN; + + return cspmu->cycle_counter_logical_idx; + } + + /* + * Search a regular counter from the used counter bitmap. + * The cycle counter divides the bitmap into two parts. Search + * the first then second half to exclude the cycle counter bit. + */ + idx = find_first_zero_bit(hw_events->used_ctrs, + cspmu->cycle_counter_logical_idx); + if (idx >= cspmu->cycle_counter_logical_idx) { + idx = find_next_zero_bit( + hw_events->used_ctrs, + cspmu->num_logical_ctrs, + cspmu->cycle_counter_logical_idx + 1); + } + } else { + idx = find_first_zero_bit(hw_events->used_ctrs, + cspmu->num_logical_ctrs); + } + + if (idx >= cspmu->num_logical_ctrs) + return -EAGAIN; + + set_bit(idx, hw_events->used_ctrs); + + return idx; +} + +static bool arm_cspmu_validate_event(struct pmu *pmu, + struct arm_cspmu_hw_events *hw_events, + struct perf_event *event) +{ + if (is_software_event(event)) + return true; + + /* Reject groups spanning multiple HW PMUs. */ + if (event->pmu != pmu) + return false; + + return (arm_cspmu_get_event_idx(hw_events, event) >= 0); +} + +/* + * Make sure the group of events can be scheduled at once + * on the PMU. + */ +static bool arm_cspmu_validate_group(struct perf_event *event) +{ + struct perf_event *sibling, *leader = event->group_leader; + struct arm_cspmu_hw_events fake_hw_events; + + if (event->group_leader == event) + return true; + + memset(&fake_hw_events, 0, sizeof(fake_hw_events)); + + if (!arm_cspmu_validate_event(event->pmu, &fake_hw_events, leader)) + return false; + + for_each_sibling_event(sibling, leader) { + if (!arm_cspmu_validate_event(event->pmu, &fake_hw_events, + sibling)) + return false; + } + + return arm_cspmu_validate_event(event->pmu, &fake_hw_events, event); +} + +static int arm_cspmu_event_init(struct perf_event *event) +{ + struct arm_cspmu *cspmu; + struct hw_perf_event *hwc = &event->hw; + + cspmu = to_arm_cspmu(event->pmu); + + /* + * Following other "uncore" PMUs, we do not support sampling mode or + * attach to a task (per-process mode). + */ + if (is_sampling_event(event)) { + dev_dbg(cspmu->pmu.dev, + "Can't support sampling events\n"); + return -EOPNOTSUPP; + } + + if (event->cpu < 0 || event->attach_state & PERF_ATTACH_TASK) { + dev_dbg(cspmu->pmu.dev, + "Can't support per-task counters\n"); + return -EINVAL; + } + + /* + * Make sure the CPU assignment is on one of the CPUs associated with + * this PMU. + */ + if (!cpumask_test_cpu(event->cpu, &cspmu->associated_cpus)) { + dev_dbg(cspmu->pmu.dev, + "Requested cpu is not associated with the PMU\n"); + return -EINVAL; + } + + /* Enforce the current active CPU to handle the events in this PMU. */ + event->cpu = cpumask_first(&cspmu->active_cpu); + if (event->cpu >= nr_cpu_ids) + return -EINVAL; + + if (!arm_cspmu_validate_group(event)) + return -EINVAL; + + /* + * The logical counter id is tracked with hw_perf_event.extra_reg.idx. + * The physical counter id is tracked with hw_perf_event.idx. + * We don't assign an index until we actually place the event onto + * hardware. Use -1 to signify that we haven't decided where to put it + * yet. 
+ */ + hwc->idx = -1; + hwc->extra_reg.idx = -1; + hwc->config = cspmu->impl.ops.event_type(event); + + return 0; +} + +static inline u32 counter_offset(u32 reg_sz, u32 ctr_idx) +{ + return (PMEVCNTR_LO + (reg_sz * ctr_idx)); +} + +static void arm_cspmu_write_counter(struct perf_event *event, u64 val) +{ + u32 offset; + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + + if (use_64b_counter_reg(cspmu)) { + offset = counter_offset(sizeof(u64), event->hw.idx); + + writeq(val, cspmu->base1 + offset); + } else { + offset = counter_offset(sizeof(u32), event->hw.idx); + + writel(lower_32_bits(val), cspmu->base1 + offset); + } +} + +static u64 arm_cspmu_read_counter(struct perf_event *event) +{ + u32 offset; + const void __iomem *counter_addr; + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + + if (use_64b_counter_reg(cspmu)) { + offset = counter_offset(sizeof(u64), event->hw.idx); + counter_addr = cspmu->base1 + offset; + + return supports_64bit_atomics(cspmu) ? + readq(counter_addr) : + read_reg64_hilohi(counter_addr, HILOHI_MAX_POLL); + } + + offset = counter_offset(sizeof(u32), event->hw.idx); + return readl(cspmu->base1 + offset); +} + +/* + * arm_cspmu_set_event_period: Set the period for the counter. + * + * To handle cases of extreme interrupt latency, we program + * the counter with half of the max count for the counters. + */ +static void arm_cspmu_set_event_period(struct perf_event *event) +{ + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + u64 val = counter_mask(cspmu) >> 1ULL; + + local64_set(&event->hw.prev_count, val); + arm_cspmu_write_counter(event, val); +} + +static void arm_cspmu_enable_counter(struct arm_cspmu *cspmu, int idx) +{ + u32 reg_id, reg_bit, inten_off, cnten_off; + + reg_id = COUNTER_TO_SET_CLR_ID(idx); + reg_bit = COUNTER_TO_SET_CLR_BIT(idx); + + inten_off = PMINTENSET + (4 * reg_id); + cnten_off = PMCNTENSET + (4 * reg_id); + + writel(BIT(reg_bit), cspmu->base0 + inten_off); + writel(BIT(reg_bit), cspmu->base0 + cnten_off); +} + +static void arm_cspmu_disable_counter(struct arm_cspmu *cspmu, int idx) +{ + u32 reg_id, reg_bit, inten_off, cnten_off; + + reg_id = COUNTER_TO_SET_CLR_ID(idx); + reg_bit = COUNTER_TO_SET_CLR_BIT(idx); + + inten_off = PMINTENCLR + (4 * reg_id); + cnten_off = PMCNTENCLR + (4 * reg_id); + + writel(BIT(reg_bit), cspmu->base0 + cnten_off); + writel(BIT(reg_bit), cspmu->base0 + inten_off); +} + +static void arm_cspmu_event_update(struct perf_event *event) +{ + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + u64 delta, prev, now; + + do { + prev = local64_read(&hwc->prev_count); + now = arm_cspmu_read_counter(event); + } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev); + + delta = (now - prev) & counter_mask(cspmu); + local64_add(delta, &event->count); +} + +static inline void arm_cspmu_set_event(struct arm_cspmu *cspmu, + struct hw_perf_event *hwc) +{ + u32 offset = PMEVTYPER + (4 * hwc->idx); + + writel(hwc->config, cspmu->base0 + offset); +} + +static inline void arm_cspmu_set_ev_filter(struct arm_cspmu *cspmu, + struct hw_perf_event *hwc, + u32 filter) +{ + u32 offset = PMEVFILTR + (4 * hwc->idx); + + writel(filter, cspmu->base0 + offset); +} + +static inline void arm_cspmu_set_cc_filter(struct arm_cspmu *cspmu, u32 filter) +{ + u32 offset = PMCCFILTR; + + writel(filter, cspmu->base0 + offset); +} + +static void arm_cspmu_start(struct perf_event *event, int pmu_flags) +{ + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + struct hw_perf_event *hwc = 
&event->hw; + u32 filter; + + /* We always reprogram the counter */ + if (pmu_flags & PERF_EF_RELOAD) + WARN_ON(!(hwc->state & PERF_HES_UPTODATE)); + + arm_cspmu_set_event_period(event); + + filter = cspmu->impl.ops.event_filter(event); + + if (event->hw.extra_reg.idx == cspmu->cycle_counter_logical_idx) { + arm_cspmu_set_cc_filter(cspmu, filter); + } else { + arm_cspmu_set_event(cspmu, hwc); + arm_cspmu_set_ev_filter(cspmu, hwc, filter); + } + + hwc->state = 0; + + arm_cspmu_enable_counter(cspmu, hwc->idx); +} + +static void arm_cspmu_stop(struct perf_event *event, int pmu_flags) +{ + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + + if (hwc->state & PERF_HES_STOPPED) + return; + + arm_cspmu_disable_counter(cspmu, hwc->idx); + arm_cspmu_event_update(event); + + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; +} + +static inline u32 to_phys_idx(struct arm_cspmu *cspmu, u32 idx) +{ + return (idx == cspmu->cycle_counter_logical_idx) ? + ARM_CSPMU_CYCLE_CNTR_IDX : idx; +} + +static int arm_cspmu_add(struct perf_event *event, int flags) +{ + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + struct arm_cspmu_hw_events *hw_events = &cspmu->hw_events; + struct hw_perf_event *hwc = &event->hw; + int idx; + + if (WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), + &cspmu->associated_cpus))) + return -ENOENT; + + idx = arm_cspmu_get_event_idx(hw_events, event); + if (idx < 0) + return idx; + + hw_events->events[idx] = event; + hwc->idx = to_phys_idx(cspmu, idx); + hwc->extra_reg.idx = idx; + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; + + if (flags & PERF_EF_START) + arm_cspmu_start(event, PERF_EF_RELOAD); + + /* Propagate changes to the userspace mapping. */ + perf_event_update_userpage(event); + + return 0; +} + +static void arm_cspmu_del(struct perf_event *event, int flags) +{ + struct arm_cspmu *cspmu = to_arm_cspmu(event->pmu); + struct arm_cspmu_hw_events *hw_events = &cspmu->hw_events; + struct hw_perf_event *hwc = &event->hw; + int idx = hwc->extra_reg.idx; + + arm_cspmu_stop(event, PERF_EF_UPDATE); + + hw_events->events[idx] = NULL; + + clear_bit(idx, hw_events->used_ctrs); + + perf_event_update_userpage(event); +} + +static void arm_cspmu_read(struct perf_event *event) +{ + arm_cspmu_event_update(event); +} + +static struct arm_cspmu *arm_cspmu_alloc(struct platform_device *pdev) +{ + struct acpi_apmt_node *apmt_node; + struct arm_cspmu *cspmu; + struct device *dev; + + dev = &pdev->dev; + apmt_node = *(struct acpi_apmt_node **)dev_get_platdata(dev); + if (!apmt_node) { + dev_err(dev, "failed to get APMT node\n"); + return NULL; + } + + cspmu = devm_kzalloc(dev, sizeof(*cspmu), GFP_KERNEL); + if (!cspmu) + return NULL; + + cspmu->dev = dev; + cspmu->apmt_node = apmt_node; + + platform_set_drvdata(pdev, cspmu); + + return cspmu; +} + +static int arm_cspmu_init_mmio(struct arm_cspmu *cspmu) +{ + struct device *dev; + struct platform_device *pdev; + struct acpi_apmt_node *apmt_node; + + dev = cspmu->dev; + pdev = to_platform_device(dev); + apmt_node = cspmu->apmt_node; + + /* Base address for page 0. */ + cspmu->base0 = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(cspmu->base0)) { + dev_err(dev, "ioremap failed for page-0 resource\n"); + return PTR_ERR(cspmu->base0); + } + + /* Base address for page 1 if supported. Otherwise point to page 0. 
*/ + cspmu->base1 = cspmu->base0; + if (CHECK_APMT_FLAG(apmt_node->flags, DUAL_PAGE, SUPP)) { + cspmu->base1 = devm_platform_ioremap_resource(pdev, 1); + if (IS_ERR(cspmu->base1)) { + dev_err(dev, "ioremap failed for page-1 resource\n"); + return PTR_ERR(cspmu->base1); + } + } + + cspmu->pmcfgr = readl(cspmu->base0 + PMCFGR); + + cspmu->num_logical_ctrs = FIELD_GET(PMCFGR_N, cspmu->pmcfgr) + 1; + + cspmu->cycle_counter_logical_idx = ARM_CSPMU_MAX_HW_CNTRS; + + if (supports_cycle_counter(cspmu)) { + /* + * The last logical counter is mapped to cycle counter if + * there is a gap between regular and cycle counter. Otherwise, + * logical and physical have 1-to-1 mapping. + */ + cspmu->cycle_counter_logical_idx = + (cspmu->num_logical_ctrs <= ARM_CSPMU_CYCLE_CNTR_IDX) ? + cspmu->num_logical_ctrs - 1 : + ARM_CSPMU_CYCLE_CNTR_IDX; + } + + cspmu->num_set_clr_reg = + DIV_ROUND_UP(cspmu->num_logical_ctrs, + ARM_CSPMU_SET_CLR_COUNTER_NUM); + + cspmu->hw_events.events = + devm_kcalloc(dev, cspmu->num_logical_ctrs, + sizeof(*cspmu->hw_events.events), GFP_KERNEL); + + if (!cspmu->hw_events.events) + return -ENOMEM; + + return 0; +} + +static inline int arm_cspmu_get_reset_overflow(struct arm_cspmu *cspmu, + u32 *pmovs) +{ + int i; + u32 pmovclr_offset = PMOVSCLR; + u32 has_overflowed = 0; + + for (i = 0; i < cspmu->num_set_clr_reg; ++i) { + pmovs[i] = readl(cspmu->base1 + pmovclr_offset); + has_overflowed |= pmovs[i]; + writel(pmovs[i], cspmu->base1 + pmovclr_offset); + pmovclr_offset += sizeof(u32); + } + + return has_overflowed != 0; +} + +static irqreturn_t arm_cspmu_handle_irq(int irq_num, void *dev) +{ + int idx, has_overflowed; + struct perf_event *event; + struct arm_cspmu *cspmu = dev; + DECLARE_BITMAP(pmovs, ARM_CSPMU_MAX_HW_CNTRS); + bool handled = false; + + arm_cspmu_stop_counters(cspmu); + + has_overflowed = arm_cspmu_get_reset_overflow(cspmu, (u32 *)pmovs); + if (!has_overflowed) + goto done; + + for_each_set_bit(idx, cspmu->hw_events.used_ctrs, + cspmu->num_logical_ctrs) { + event = cspmu->hw_events.events[idx]; + + if (!event) + continue; + + if (!test_bit(event->hw.idx, pmovs)) + continue; + + arm_cspmu_event_update(event); + arm_cspmu_set_event_period(event); + + handled = true; + } + +done: + arm_cspmu_start_counters(cspmu); + return IRQ_RETVAL(handled); +} + +static int arm_cspmu_request_irq(struct arm_cspmu *cspmu) +{ + int irq, ret; + struct device *dev; + struct platform_device *pdev; + struct acpi_apmt_node *apmt_node; + + dev = cspmu->dev; + pdev = to_platform_device(dev); + apmt_node = cspmu->apmt_node; + + /* Skip IRQ request if the PMU does not support overflow interrupt. 
*/ + if (apmt_node->ovflw_irq == 0) + return 0; + + irq = platform_get_irq(pdev, 0); + if (irq < 0) + return irq; + + ret = devm_request_irq(dev, irq, arm_cspmu_handle_irq, + IRQF_NOBALANCING | IRQF_NO_THREAD, dev_name(dev), + cspmu); + if (ret) { + dev_err(dev, "Could not request IRQ %d\n", irq); + return ret; + } + + cspmu->irq = irq; + + return 0; +} + +static inline int arm_cspmu_find_cpu_container(int cpu, u32 container_uid) +{ + u32 acpi_uid; + struct device *cpu_dev = get_cpu_device(cpu); + struct acpi_device *acpi_dev = ACPI_COMPANION(cpu_dev); + + if (!cpu_dev) + return -ENODEV; + + while (acpi_dev) { + if (!strcmp(acpi_device_hid(acpi_dev), + ACPI_PROCESSOR_CONTAINER_HID) && + !kstrtouint(acpi_device_uid(acpi_dev), 0, &acpi_uid) && + acpi_uid == container_uid) + return 0; + + acpi_dev = acpi_dev_parent(acpi_dev); + } + + return -ENODEV; +} + +static int arm_cspmu_get_cpus(struct arm_cspmu *cspmu) +{ + struct device *dev; + struct acpi_apmt_node *apmt_node; + int affinity_flag; + int cpu; + + dev = cspmu->pmu.dev; + apmt_node = cspmu->apmt_node; + affinity_flag = apmt_node->flags & ACPI_APMT_FLAGS_AFFINITY; + + if (affinity_flag == ACPI_APMT_FLAGS_AFFINITY_PROC) { + for_each_possible_cpu(cpu) { + if (apmt_node->proc_affinity == + get_acpi_id_for_cpu(cpu)) { + cpumask_set_cpu(cpu, &cspmu->associated_cpus); + break; + } + } + } else { + for_each_possible_cpu(cpu) { + if (arm_cspmu_find_cpu_container( + cpu, apmt_node->proc_affinity)) + continue; + + cpumask_set_cpu(cpu, &cspmu->associated_cpus); + } + } + + if (cpumask_empty(&cspmu->associated_cpus)) { + dev_dbg(dev, "No cpu associated with the PMU\n"); + return -ENODEV; + } + + return 0; +} + +static int arm_cspmu_register_pmu(struct arm_cspmu *cspmu) +{ + int ret, capabilities; + struct attribute_group **attr_groups; + + attr_groups = arm_cspmu_alloc_attr_group(cspmu); + if (!attr_groups) + return -ENOMEM; + + ret = cpuhp_state_add_instance(arm_cspmu_cpuhp_state, + &cspmu->cpuhp_node); + if (ret) + return ret; + + capabilities = PERF_PMU_CAP_NO_EXCLUDE; + if (cspmu->irq == 0) + capabilities |= PERF_PMU_CAP_NO_INTERRUPT; + + cspmu->pmu = (struct pmu){ + .task_ctx_nr = perf_invalid_context, + .module = THIS_MODULE, + .pmu_enable = arm_cspmu_enable, + .pmu_disable = arm_cspmu_disable, + .event_init = arm_cspmu_event_init, + .add = arm_cspmu_add, + .del = arm_cspmu_del, + .start = arm_cspmu_start, + .stop = arm_cspmu_stop, + .read = arm_cspmu_read, + .attr_groups = (const struct attribute_group **)attr_groups, + .capabilities = capabilities, + }; + + /* Hardware counter init */ + arm_cspmu_stop_counters(cspmu); + arm_cspmu_reset_counters(cspmu); + + ret = perf_pmu_register(&cspmu->pmu, cspmu->name, -1); + if (ret) { + cpuhp_state_remove_instance(arm_cspmu_cpuhp_state, + &cspmu->cpuhp_node); + } + + return ret; +} + +static int arm_cspmu_device_probe(struct platform_device *pdev) +{ + int ret; + struct arm_cspmu *cspmu; + + cspmu = arm_cspmu_alloc(pdev); + if (!cspmu) + return -ENOMEM; + + ret = arm_cspmu_init_mmio(cspmu); + if (ret) + return ret; + + ret = arm_cspmu_request_irq(cspmu); + if (ret) + return ret; + + ret = arm_cspmu_get_cpus(cspmu); + if (ret) + return ret; + + ret = arm_cspmu_register_pmu(cspmu); + if (ret) + return ret; + + return 0; +} + +static int arm_cspmu_device_remove(struct platform_device *pdev) +{ + struct arm_cspmu *cspmu = platform_get_drvdata(pdev); + + perf_pmu_unregister(&cspmu->pmu); + cpuhp_state_remove_instance(arm_cspmu_cpuhp_state, &cspmu->cpuhp_node); + + return 0; +} + +static struct 
platform_driver arm_cspmu_driver = { + .driver = { + .name = DRVNAME, + .suppress_bind_attrs = true, + }, + .probe = arm_cspmu_device_probe, + .remove = arm_cspmu_device_remove, +}; + +static void arm_cspmu_set_active_cpu(int cpu, struct arm_cspmu *cspmu) +{ + cpumask_set_cpu(cpu, &cspmu->active_cpu); + WARN_ON(irq_set_affinity(cspmu->irq, &cspmu->active_cpu)); +} + +static int arm_cspmu_cpu_online(unsigned int cpu, struct hlist_node *node) +{ + struct arm_cspmu *cspmu = + hlist_entry_safe(node, struct arm_cspmu, cpuhp_node); + + if (!cpumask_test_cpu(cpu, &cspmu->associated_cpus)) + return 0; + + /* If the PMU is already managed, there is nothing to do */ + if (!cpumask_empty(&cspmu->active_cpu)) + return 0; + + /* Use this CPU for event counting */ + arm_cspmu_set_active_cpu(cpu, cspmu); + + return 0; +} + +static int arm_cspmu_cpu_teardown(unsigned int cpu, struct hlist_node *node) +{ + int dst; + struct cpumask online_supported; + + struct arm_cspmu *cspmu = + hlist_entry_safe(node, struct arm_cspmu, cpuhp_node); + + /* Nothing to do if this CPU doesn't own the PMU */ + if (!cpumask_test_and_clear_cpu(cpu, &cspmu->active_cpu)) + return 0; + + /* Choose a new CPU to migrate ownership of the PMU to */ + cpumask_and(&online_supported, &cspmu->associated_cpus, + cpu_online_mask); + dst = cpumask_any_but(&online_supported, cpu); + if (dst >= nr_cpu_ids) + return 0; + + /* Use this CPU for event counting */ + perf_pmu_migrate_context(&cspmu->pmu, cpu, dst); + arm_cspmu_set_active_cpu(dst, cspmu); + + return 0; +} + +static int __init arm_cspmu_init(void) +{ + int ret; + + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, + "perf/arm/cspmu:online", + arm_cspmu_cpu_online, + arm_cspmu_cpu_teardown); + if (ret < 0) + return ret; + arm_cspmu_cpuhp_state = ret; + return platform_driver_register(&arm_cspmu_driver); +} + +static void __exit arm_cspmu_exit(void) +{ + platform_driver_unregister(&arm_cspmu_driver); + cpuhp_remove_multi_state(arm_cspmu_cpuhp_state); +} + +module_init(arm_cspmu_init); +module_exit(arm_cspmu_exit); + +MODULE_LICENSE("GPL v2"); diff --git a/drivers/perf/arm_cspmu/arm_cspmu.h b/drivers/perf/arm_cspmu/arm_cspmu.h new file mode 100644 index 000000000000..51323b175a4a --- /dev/null +++ b/drivers/perf/arm_cspmu/arm_cspmu.h @@ -0,0 +1,151 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * ARM CoreSight Architecture PMU driver. + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
+ *
+ */
+
+#ifndef __ARM_CSPMU_H__
+#define __ARM_CSPMU_H__
+
+#include <linux/acpi.h>
+#include <linux/bitfield.h>
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/types.h>
+
+#define to_arm_cspmu(p)	(container_of(p, struct arm_cspmu, pmu))
+
+#define ARM_CSPMU_EXT_ATTR(_name, _func, _config)			\
+	(&((struct dev_ext_attribute[]){				\
+		{							\
+			.attr = __ATTR(_name, 0444, _func, NULL),	\
+			.var = (void *)_config				\
+		}							\
+	})[0].attr.attr)
+
+#define ARM_CSPMU_FORMAT_ATTR(_name, _config)				\
+	ARM_CSPMU_EXT_ATTR(_name, arm_cspmu_sysfs_format_show, (char *)_config)
+
+#define ARM_CSPMU_EVENT_ATTR(_name, _config)				\
+	PMU_EVENT_ATTR_ID(_name, arm_cspmu_sysfs_event_show, _config)
+
+
+/* Default event id mask */
+#define ARM_CSPMU_EVENT_MASK	GENMASK_ULL(63, 0)
+
+/* Default filter value mask */
+#define ARM_CSPMU_FILTER_MASK	GENMASK_ULL(63, 0)
+
+/* Default event format */
+#define ARM_CSPMU_FORMAT_EVENT_ATTR	\
+	ARM_CSPMU_FORMAT_ATTR(event, "config:0-32")
+
+/* Default filter format */
+#define ARM_CSPMU_FORMAT_FILTER_ATTR	\
+	ARM_CSPMU_FORMAT_ATTR(filter, "config1:0-31")
+
+/*
+ * This is the default event number for cycle count, if supported, since the
+ * ARM Coresight PMU specification does not define a standard event code
+ * for cycle count.
+ */
+#define ARM_CSPMU_EVT_CYCLES_DEFAULT	(0x1ULL << 32)
+
+/*
+ * The ARM Coresight PMU supports up to 256 event counters.
+ * If the counters are larger than 32 bits, then the PMU includes at
+ * most 128 counters.
+ */
+#define ARM_CSPMU_MAX_HW_CNTRS		256
+
+/* The cycle counter, if implemented, is located at counter[31]. */
+#define ARM_CSPMU_CYCLE_CNTR_IDX	31
+
+/* PMIIDR register fields */
+#define ARM_CSPMU_PMIIDR_IMPLEMENTER	GENMASK(11, 0)
+#define ARM_CSPMU_PMIIDR_PRODUCTID	GENMASK(31, 20)
+
+struct arm_cspmu;
+
+/* This tracks the events assigned to each counter in the PMU. */
+struct arm_cspmu_hw_events {
+	/* The events that are active on the PMU for a given logical index. */
+	struct perf_event **events;
+
+	/*
+	 * Each bit indicates a logical counter is being used (or not) for an
+	 * event. If the cycle counter is supported and there is a gap between
+	 * regular and cycle counter, the last logical counter is mapped to
+	 * the cycle counter. Otherwise, logical and physical have a 1-to-1
+	 * mapping.
+	 */
+	DECLARE_BITMAP(used_ctrs, ARM_CSPMU_MAX_HW_CNTRS);
+};
+
+/* Contains ops to query vendor/implementer specific attributes. */
+struct arm_cspmu_impl_ops {
+	/* Get event attributes */
+	struct attribute **(*get_event_attrs)(const struct arm_cspmu *cspmu);
+	/* Get format attributes */
+	struct attribute **(*get_format_attrs)(const struct arm_cspmu *cspmu);
+	/* Get string identifier */
+	const char *(*get_identifier)(const struct arm_cspmu *cspmu);
+	/* Get PMU name to register to core perf */
+	const char *(*get_name)(const struct arm_cspmu *cspmu);
+	/* Check if the event corresponds to the cycle count event */
+	bool (*is_cycle_counter_event)(const struct perf_event *event);
+	/* Decode event type/id from configs */
+	u32 (*event_type)(const struct perf_event *event);
+	/* Decode filter value from configs */
+	u32 (*event_filter)(const struct perf_event *event);
+	/* Hide/show unsupported events */
+	umode_t (*event_attr_is_visible)(struct kobject *kobj,
+					 struct attribute *attr, int unused);
+};
+
+/* Vendor/implementer descriptor.
*/ +struct arm_cspmu_impl { + u32 pmiidr; + struct arm_cspmu_impl_ops ops; + void *ctx; +}; + +/* Coresight PMU descriptor. */ +struct arm_cspmu { + struct pmu pmu; + struct device *dev; + struct acpi_apmt_node *apmt_node; + const char *name; + const char *identifier; + void __iomem *base0; + void __iomem *base1; + int irq; + cpumask_t associated_cpus; + cpumask_t active_cpu; + struct hlist_node cpuhp_node; + + u32 pmcfgr; + u32 num_logical_ctrs; + u32 num_set_clr_reg; + int cycle_counter_logical_idx; + + struct arm_cspmu_hw_events hw_events; + + struct arm_cspmu_impl impl; +}; + +/* Default function to show event attribute in sysfs. */ +ssize_t arm_cspmu_sysfs_event_show(struct device *dev, + struct device_attribute *attr, + char *buf); + +/* Default function to show format attribute in sysfs. */ +ssize_t arm_cspmu_sysfs_format_show(struct device *dev, + struct device_attribute *attr, + char *buf); + +#endif /* __ARM_CSPMU_H__ */ diff --git a/drivers/perf/arm_cspmu/nvidia_cspmu.c b/drivers/perf/arm_cspmu/nvidia_cspmu.c new file mode 100644 index 000000000000..72ef80caa3c8 --- /dev/null +++ b/drivers/perf/arm_cspmu/nvidia_cspmu.c @@ -0,0 +1,400 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. + * + */ + +/* Support for NVIDIA specific attributes. */ + +#include <linux/topology.h> + +#include "nvidia_cspmu.h" + +#define NV_PCIE_PORT_COUNT 10ULL +#define NV_PCIE_FILTER_ID_MASK GENMASK_ULL(NV_PCIE_PORT_COUNT - 1, 0) + +#define NV_NVL_C2C_PORT_COUNT 2ULL +#define NV_NVL_C2C_FILTER_ID_MASK GENMASK_ULL(NV_NVL_C2C_PORT_COUNT - 1, 0) + +#define NV_CNVL_PORT_COUNT 4ULL +#define NV_CNVL_FILTER_ID_MASK GENMASK_ULL(NV_CNVL_PORT_COUNT - 1, 0) + +#define NV_GENERIC_FILTER_ID_MASK GENMASK_ULL(31, 0) + +#define NV_PRODID_MASK GENMASK(31, 0) + +#define NV_FORMAT_NAME_GENERIC 0 + +#define to_nv_cspmu_ctx(cspmu) ((struct nv_cspmu_ctx *)(cspmu->impl.ctx)) + +#define NV_CSPMU_EVENT_ATTR_4_INNER(_pref, _num, _suff, _config) \ + ARM_CSPMU_EVENT_ATTR(_pref##_num##_suff, _config) + +#define NV_CSPMU_EVENT_ATTR_4(_pref, _suff, _config) \ + NV_CSPMU_EVENT_ATTR_4_INNER(_pref, _0_, _suff, _config), \ + NV_CSPMU_EVENT_ATTR_4_INNER(_pref, _1_, _suff, _config + 1), \ + NV_CSPMU_EVENT_ATTR_4_INNER(_pref, _2_, _suff, _config + 2), \ + NV_CSPMU_EVENT_ATTR_4_INNER(_pref, _3_, _suff, _config + 3) + +struct nv_cspmu_ctx { + const char *name; + u32 filter_mask; + u32 filter_default_val; + struct attribute **event_attr; + struct attribute **format_attr; +}; + +static struct attribute *scf_pmu_event_attrs[] = { + ARM_CSPMU_EVENT_ATTR(bus_cycles, 0x1d), + + ARM_CSPMU_EVENT_ATTR(scf_cache_allocate, 0xF0), + ARM_CSPMU_EVENT_ATTR(scf_cache_refill, 0xF1), + ARM_CSPMU_EVENT_ATTR(scf_cache, 0xF2), + ARM_CSPMU_EVENT_ATTR(scf_cache_wb, 0xF3), + + NV_CSPMU_EVENT_ATTR_4(socket, rd_data, 0x101), + NV_CSPMU_EVENT_ATTR_4(socket, dl_rsp, 0x105), + NV_CSPMU_EVENT_ATTR_4(socket, wb_data, 0x109), + NV_CSPMU_EVENT_ATTR_4(socket, ev_rsp, 0x10d), + NV_CSPMU_EVENT_ATTR_4(socket, prb_data, 0x111), + + NV_CSPMU_EVENT_ATTR_4(socket, rd_outstanding, 0x115), + NV_CSPMU_EVENT_ATTR_4(socket, dl_outstanding, 0x119), + NV_CSPMU_EVENT_ATTR_4(socket, wb_outstanding, 0x11d), + NV_CSPMU_EVENT_ATTR_4(socket, wr_outstanding, 0x121), + NV_CSPMU_EVENT_ATTR_4(socket, ev_outstanding, 0x125), + NV_CSPMU_EVENT_ATTR_4(socket, prb_outstanding, 0x129), + + NV_CSPMU_EVENT_ATTR_4(socket, rd_access, 0x12d), + NV_CSPMU_EVENT_ATTR_4(socket, dl_access, 0x131), + NV_CSPMU_EVENT_ATTR_4(socket, wb_access, 
0x135), + NV_CSPMU_EVENT_ATTR_4(socket, wr_access, 0x139), + NV_CSPMU_EVENT_ATTR_4(socket, ev_access, 0x13d), + NV_CSPMU_EVENT_ATTR_4(socket, prb_access, 0x141), + + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_rd_data, 0x145), + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_rd_access, 0x149), + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_wb_access, 0x14d), + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_rd_outstanding, 0x151), + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_wr_outstanding, 0x155), + + NV_CSPMU_EVENT_ATTR_4(ocu, rem_rd_data, 0x159), + NV_CSPMU_EVENT_ATTR_4(ocu, rem_rd_access, 0x15d), + NV_CSPMU_EVENT_ATTR_4(ocu, rem_wb_access, 0x161), + NV_CSPMU_EVENT_ATTR_4(ocu, rem_rd_outstanding, 0x165), + NV_CSPMU_EVENT_ATTR_4(ocu, rem_wr_outstanding, 0x169), + + ARM_CSPMU_EVENT_ATTR(gmem_rd_data, 0x16d), + ARM_CSPMU_EVENT_ATTR(gmem_rd_access, 0x16e), + ARM_CSPMU_EVENT_ATTR(gmem_rd_outstanding, 0x16f), + ARM_CSPMU_EVENT_ATTR(gmem_dl_rsp, 0x170), + ARM_CSPMU_EVENT_ATTR(gmem_dl_access, 0x171), + ARM_CSPMU_EVENT_ATTR(gmem_dl_outstanding, 0x172), + ARM_CSPMU_EVENT_ATTR(gmem_wb_data, 0x173), + ARM_CSPMU_EVENT_ATTR(gmem_wb_access, 0x174), + ARM_CSPMU_EVENT_ATTR(gmem_wb_outstanding, 0x175), + ARM_CSPMU_EVENT_ATTR(gmem_ev_rsp, 0x176), + ARM_CSPMU_EVENT_ATTR(gmem_ev_access, 0x177), + ARM_CSPMU_EVENT_ATTR(gmem_ev_outstanding, 0x178), + ARM_CSPMU_EVENT_ATTR(gmem_wr_data, 0x179), + ARM_CSPMU_EVENT_ATTR(gmem_wr_outstanding, 0x17a), + ARM_CSPMU_EVENT_ATTR(gmem_wr_access, 0x17b), + + NV_CSPMU_EVENT_ATTR_4(socket, wr_data, 0x17c), + + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_wr_data, 0x180), + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_wb_data, 0x184), + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_wr_access, 0x188), + NV_CSPMU_EVENT_ATTR_4(ocu, gmem_wb_outstanding, 0x18c), + + NV_CSPMU_EVENT_ATTR_4(ocu, rem_wr_data, 0x190), + NV_CSPMU_EVENT_ATTR_4(ocu, rem_wb_data, 0x194), + NV_CSPMU_EVENT_ATTR_4(ocu, rem_wr_access, 0x198), + NV_CSPMU_EVENT_ATTR_4(ocu, rem_wb_outstanding, 0x19c), + + ARM_CSPMU_EVENT_ATTR(gmem_wr_total_bytes, 0x1a0), + ARM_CSPMU_EVENT_ATTR(remote_socket_wr_total_bytes, 0x1a1), + ARM_CSPMU_EVENT_ATTR(remote_socket_rd_data, 0x1a2), + ARM_CSPMU_EVENT_ATTR(remote_socket_rd_outstanding, 0x1a3), + ARM_CSPMU_EVENT_ATTR(remote_socket_rd_access, 0x1a4), + + ARM_CSPMU_EVENT_ATTR(cmem_rd_data, 0x1a5), + ARM_CSPMU_EVENT_ATTR(cmem_rd_access, 0x1a6), + ARM_CSPMU_EVENT_ATTR(cmem_rd_outstanding, 0x1a7), + ARM_CSPMU_EVENT_ATTR(cmem_dl_rsp, 0x1a8), + ARM_CSPMU_EVENT_ATTR(cmem_dl_access, 0x1a9), + ARM_CSPMU_EVENT_ATTR(cmem_dl_outstanding, 0x1aa), + ARM_CSPMU_EVENT_ATTR(cmem_wb_data, 0x1ab), + ARM_CSPMU_EVENT_ATTR(cmem_wb_access, 0x1ac), + ARM_CSPMU_EVENT_ATTR(cmem_wb_outstanding, 0x1ad), + ARM_CSPMU_EVENT_ATTR(cmem_ev_rsp, 0x1ae), + ARM_CSPMU_EVENT_ATTR(cmem_ev_access, 0x1af), + ARM_CSPMU_EVENT_ATTR(cmem_ev_outstanding, 0x1b0), + ARM_CSPMU_EVENT_ATTR(cmem_wr_data, 0x1b1), + ARM_CSPMU_EVENT_ATTR(cmem_wr_outstanding, 0x1b2), + + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_rd_data, 0x1b3), + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_rd_access, 0x1b7), + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_wb_access, 0x1bb), + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_rd_outstanding, 0x1bf), + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_wr_outstanding, 0x1c3), + + ARM_CSPMU_EVENT_ATTR(ocu_prb_access, 0x1c7), + ARM_CSPMU_EVENT_ATTR(ocu_prb_data, 0x1c8), + ARM_CSPMU_EVENT_ATTR(ocu_prb_outstanding, 0x1c9), + + ARM_CSPMU_EVENT_ATTR(cmem_wr_access, 0x1ca), + + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_wr_access, 0x1cb), + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_wb_data, 0x1cf), + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_wr_data, 0x1d3), + NV_CSPMU_EVENT_ATTR_4(ocu, cmem_wb_outstanding, 0x1d7), + + 
ARM_CSPMU_EVENT_ATTR(cmem_wr_total_bytes, 0x1db), + + ARM_CSPMU_EVENT_ATTR(cycles, ARM_CSPMU_EVT_CYCLES_DEFAULT), + NULL, +}; + +static struct attribute *mcf_pmu_event_attrs[] = { + ARM_CSPMU_EVENT_ATTR(rd_bytes_loc, 0x0), + ARM_CSPMU_EVENT_ATTR(rd_bytes_rem, 0x1), + ARM_CSPMU_EVENT_ATTR(wr_bytes_loc, 0x2), + ARM_CSPMU_EVENT_ATTR(wr_bytes_rem, 0x3), + ARM_CSPMU_EVENT_ATTR(total_bytes_loc, 0x4), + ARM_CSPMU_EVENT_ATTR(total_bytes_rem, 0x5), + ARM_CSPMU_EVENT_ATTR(rd_req_loc, 0x6), + ARM_CSPMU_EVENT_ATTR(rd_req_rem, 0x7), + ARM_CSPMU_EVENT_ATTR(wr_req_loc, 0x8), + ARM_CSPMU_EVENT_ATTR(wr_req_rem, 0x9), + ARM_CSPMU_EVENT_ATTR(total_req_loc, 0xa), + ARM_CSPMU_EVENT_ATTR(total_req_rem, 0xb), + ARM_CSPMU_EVENT_ATTR(rd_cum_outs_loc, 0xc), + ARM_CSPMU_EVENT_ATTR(rd_cum_outs_rem, 0xd), + ARM_CSPMU_EVENT_ATTR(cycles, ARM_CSPMU_EVT_CYCLES_DEFAULT), + NULL, +}; + +static struct attribute *generic_pmu_event_attrs[] = { + ARM_CSPMU_EVENT_ATTR(cycles, ARM_CSPMU_EVT_CYCLES_DEFAULT), + NULL, +}; + +static struct attribute *scf_pmu_format_attrs[] = { + ARM_CSPMU_FORMAT_EVENT_ATTR, + NULL, +}; + +static struct attribute *pcie_pmu_format_attrs[] = { + ARM_CSPMU_FORMAT_EVENT_ATTR, + ARM_CSPMU_FORMAT_ATTR(root_port, "config1:0-9"), + NULL, +}; + +static struct attribute *nvlink_c2c_pmu_format_attrs[] = { + ARM_CSPMU_FORMAT_EVENT_ATTR, + NULL, +}; + +static struct attribute *cnvlink_pmu_format_attrs[] = { + ARM_CSPMU_FORMAT_EVENT_ATTR, + ARM_CSPMU_FORMAT_ATTR(rem_socket, "config1:0-3"), + NULL, +}; + +static struct attribute *generic_pmu_format_attrs[] = { + ARM_CSPMU_FORMAT_EVENT_ATTR, + ARM_CSPMU_FORMAT_FILTER_ATTR, + NULL, +}; + +static struct attribute ** +nv_cspmu_get_event_attrs(const struct arm_cspmu *cspmu) +{ + const struct nv_cspmu_ctx *ctx = to_nv_cspmu_ctx(cspmu); + + return ctx->event_attr; +} + +static struct attribute ** +nv_cspmu_get_format_attrs(const struct arm_cspmu *cspmu) +{ + const struct nv_cspmu_ctx *ctx = to_nv_cspmu_ctx(cspmu); + + return ctx->format_attr; +} + +static const char * +nv_cspmu_get_name(const struct arm_cspmu *cspmu) +{ + const struct nv_cspmu_ctx *ctx = to_nv_cspmu_ctx(cspmu); + + return ctx->name; +} + +static u32 nv_cspmu_event_filter(const struct perf_event *event) +{ + const struct nv_cspmu_ctx *ctx = + to_nv_cspmu_ctx(to_arm_cspmu(event->pmu)); + + if (ctx->filter_mask == 0) + return ctx->filter_default_val; + + return event->attr.config1 & ctx->filter_mask; +} + +enum nv_cspmu_name_fmt { + NAME_FMT_GENERIC, + NAME_FMT_SOCKET +}; + +struct nv_cspmu_match { + u32 prodid; + u32 prodid_mask; + u64 filter_mask; + u32 filter_default_val; + const char *name_pattern; + enum nv_cspmu_name_fmt name_fmt; + struct attribute **event_attr; + struct attribute **format_attr; +}; + +static const struct nv_cspmu_match nv_cspmu_match[] = { + { + .prodid = 0x103, + .prodid_mask = NV_PRODID_MASK, + .filter_mask = NV_PCIE_FILTER_ID_MASK, + .filter_default_val = NV_PCIE_FILTER_ID_MASK, + .name_pattern = "nvidia_pcie_pmu_%u", + .name_fmt = NAME_FMT_SOCKET, + .event_attr = mcf_pmu_event_attrs, + .format_attr = pcie_pmu_format_attrs + }, + { + .prodid = 0x104, + .prodid_mask = NV_PRODID_MASK, + .filter_mask = 0x0, + .filter_default_val = NV_NVL_C2C_FILTER_ID_MASK, + .name_pattern = "nvidia_nvlink_c2c1_pmu_%u", + .name_fmt = NAME_FMT_SOCKET, + .event_attr = mcf_pmu_event_attrs, + .format_attr = nvlink_c2c_pmu_format_attrs + }, + { + .prodid = 0x105, + .prodid_mask = NV_PRODID_MASK, + .filter_mask = 0x0, + .filter_default_val = NV_NVL_C2C_FILTER_ID_MASK, + .name_pattern = 
"nvidia_nvlink_c2c0_pmu_%u", + .name_fmt = NAME_FMT_SOCKET, + .event_attr = mcf_pmu_event_attrs, + .format_attr = nvlink_c2c_pmu_format_attrs + }, + { + .prodid = 0x106, + .prodid_mask = NV_PRODID_MASK, + .filter_mask = NV_CNVL_FILTER_ID_MASK, + .filter_default_val = NV_CNVL_FILTER_ID_MASK, + .name_pattern = "nvidia_cnvlink_pmu_%u", + .name_fmt = NAME_FMT_SOCKET, + .event_attr = mcf_pmu_event_attrs, + .format_attr = cnvlink_pmu_format_attrs + }, + { + .prodid = 0x2CF, + .prodid_mask = NV_PRODID_MASK, + .filter_mask = 0x0, + .filter_default_val = 0x0, + .name_pattern = "nvidia_scf_pmu_%u", + .name_fmt = NAME_FMT_SOCKET, + .event_attr = scf_pmu_event_attrs, + .format_attr = scf_pmu_format_attrs + }, + { + .prodid = 0, + .prodid_mask = 0, + .filter_mask = NV_GENERIC_FILTER_ID_MASK, + .filter_default_val = NV_GENERIC_FILTER_ID_MASK, + .name_pattern = "nvidia_uncore_pmu_%u", + .name_fmt = NAME_FMT_GENERIC, + .event_attr = generic_pmu_event_attrs, + .format_attr = generic_pmu_format_attrs + }, +}; + +static char *nv_cspmu_format_name(const struct arm_cspmu *cspmu, + const struct nv_cspmu_match *match) +{ + char *name; + struct device *dev = cspmu->dev; + + static atomic_t pmu_generic_idx = {0}; + + switch (match->name_fmt) { + case NAME_FMT_SOCKET: { + const int cpu = cpumask_first(&cspmu->associated_cpus); + const int socket = cpu_to_node(cpu); + + name = devm_kasprintf(dev, GFP_KERNEL, match->name_pattern, + socket); + break; + } + case NAME_FMT_GENERIC: + name = devm_kasprintf(dev, GFP_KERNEL, match->name_pattern, + atomic_fetch_inc(&pmu_generic_idx)); + break; + default: + name = NULL; + break; + } + + return name; +} + +int nv_cspmu_init_ops(struct arm_cspmu *cspmu) +{ + u32 prodid; + struct nv_cspmu_ctx *ctx; + struct device *dev = cspmu->dev; + struct arm_cspmu_impl_ops *impl_ops = &cspmu->impl.ops; + const struct nv_cspmu_match *match = nv_cspmu_match; + + ctx = devm_kzalloc(dev, sizeof(struct nv_cspmu_ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + prodid = FIELD_GET(ARM_CSPMU_PMIIDR_PRODUCTID, cspmu->impl.pmiidr); + + /* Find matching PMU. */ + for (; match->prodid; match++) { + const u32 prodid_mask = match->prodid_mask; + + if ((match->prodid & prodid_mask) == (prodid & prodid_mask)) + break; + } + + ctx->name = nv_cspmu_format_name(cspmu, match); + ctx->filter_mask = match->filter_mask; + ctx->filter_default_val = match->filter_default_val; + ctx->event_attr = match->event_attr; + ctx->format_attr = match->format_attr; + + cspmu->impl.ctx = ctx; + + /* NVIDIA specific callbacks. */ + impl_ops->event_filter = nv_cspmu_event_filter; + impl_ops->get_event_attrs = nv_cspmu_get_event_attrs; + impl_ops->get_format_attrs = nv_cspmu_get_format_attrs; + impl_ops->get_name = nv_cspmu_get_name; + + /* Set others to NULL to use default callback. */ + impl_ops->event_type = NULL; + impl_ops->event_attr_is_visible = NULL; + impl_ops->get_identifier = NULL; + impl_ops->is_cycle_counter_event = NULL; + + return 0; +} +EXPORT_SYMBOL_GPL(nv_cspmu_init_ops); + +MODULE_LICENSE("GPL v2"); diff --git a/drivers/perf/arm_cspmu/nvidia_cspmu.h b/drivers/perf/arm_cspmu/nvidia_cspmu.h new file mode 100644 index 000000000000..71e18f0dc50b --- /dev/null +++ b/drivers/perf/arm_cspmu/nvidia_cspmu.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. + * + */ + +/* Support for NVIDIA specific attributes. */ + +#ifndef __NVIDIA_CSPMU_H__ +#define __NVIDIA_CSPMU_H__ + +#include "arm_cspmu.h" + +/* Allocate NVIDIA descriptor. 
*/ +int nv_cspmu_init_ops(struct arm_cspmu *cspmu); + +#endif /* __NVIDIA_CSPMU_H__ */ diff --git a/drivers/perf/arm_dmc620_pmu.c b/drivers/perf/arm_dmc620_pmu.c index 280a6ae3e27c..54aa4658fb36 100644 --- a/drivers/perf/arm_dmc620_pmu.c +++ b/drivers/perf/arm_dmc620_pmu.c @@ -725,6 +725,8 @@ static struct platform_driver dmc620_pmu_driver = { static int __init dmc620_pmu_init(void) { + int ret; + cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, DMC620_DRVNAME, NULL, @@ -732,7 +734,11 @@ static int __init dmc620_pmu_init(void) if (cpuhp_state_num < 0) return cpuhp_state_num; - return platform_driver_register(&dmc620_pmu_driver); + ret = platform_driver_register(&dmc620_pmu_driver); + if (ret) + cpuhp_remove_multi_state(cpuhp_state_num); + + return ret; } static void __exit dmc620_pmu_exit(void) diff --git a/drivers/perf/arm_dsu_pmu.c b/drivers/perf/arm_dsu_pmu.c index 4a15c86f45ef..fe2abb412c00 100644 --- a/drivers/perf/arm_dsu_pmu.c +++ b/drivers/perf/arm_dsu_pmu.c @@ -858,7 +858,11 @@ static int __init dsu_pmu_init(void) if (ret < 0) return ret; dsu_pmu_cpuhp_state = ret; - return platform_driver_register(&dsu_pmu_driver); + ret = platform_driver_register(&dsu_pmu_driver); + if (ret) + cpuhp_remove_multi_state(dsu_pmu_cpuhp_state); + + return ret; } static void __exit dsu_pmu_exit(void) diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c index 3f07df5a7e95..bb56676f50ef 100644 --- a/drivers/perf/arm_pmu.c +++ b/drivers/perf/arm_pmu.c @@ -514,9 +514,6 @@ static int armpmu_event_init(struct perf_event *event) if (has_branch_stack(event)) return -EOPNOTSUPP; - if (armpmu->map_event(event) == -ENOENT) - return -ENOENT; - return __hw_perf_event_init(event); } @@ -861,16 +858,16 @@ static void cpu_pmu_destroy(struct arm_pmu *cpu_pmu) &cpu_pmu->node); } -static struct arm_pmu *__armpmu_alloc(gfp_t flags) +struct arm_pmu *armpmu_alloc(void) { struct arm_pmu *pmu; int cpu; - pmu = kzalloc(sizeof(*pmu), flags); + pmu = kzalloc(sizeof(*pmu), GFP_KERNEL); if (!pmu) goto out; - pmu->hw_events = alloc_percpu_gfp(struct pmu_hw_events, flags); + pmu->hw_events = alloc_percpu_gfp(struct pmu_hw_events, GFP_KERNEL); if (!pmu->hw_events) { pr_info("failed to allocate per-cpu PMU data.\n"); goto out_free_pmu; @@ -916,17 +913,6 @@ out: return NULL; } -struct arm_pmu *armpmu_alloc(void) -{ - return __armpmu_alloc(GFP_KERNEL); -} - -struct arm_pmu *armpmu_alloc_atomic(void) -{ - return __armpmu_alloc(GFP_ATOMIC); -} - - void armpmu_free(struct arm_pmu *pmu) { free_percpu(pmu->hw_events); diff --git a/drivers/perf/arm_pmu_acpi.c b/drivers/perf/arm_pmu_acpi.c index 96ffadd654ff..90815ad762eb 100644 --- a/drivers/perf/arm_pmu_acpi.c +++ b/drivers/perf/arm_pmu_acpi.c @@ -13,6 +13,7 @@ #include <linux/percpu.h> #include <linux/perf/arm_pmu.h> +#include <asm/cpu.h> #include <asm/cputype.h> static DEFINE_PER_CPU(struct arm_pmu *, probed_pmus); @@ -187,7 +188,7 @@ out_err: return err; } -static struct arm_pmu *arm_pmu_acpi_find_alloc_pmu(void) +static struct arm_pmu *arm_pmu_acpi_find_pmu(void) { unsigned long cpuid = read_cpuid_id(); struct arm_pmu *pmu; @@ -201,16 +202,7 @@ static struct arm_pmu *arm_pmu_acpi_find_alloc_pmu(void) return pmu; } - pmu = armpmu_alloc_atomic(); - if (!pmu) { - pr_warn("Unable to allocate PMU for CPU%d\n", - smp_processor_id()); - return NULL; - } - - pmu->acpi_cpuid = cpuid; - - return pmu; + return NULL; } /* @@ -242,6 +234,22 @@ static bool pmu_irq_matches(struct arm_pmu *pmu, int irq) return true; } +static void arm_pmu_acpi_associate_pmu_cpu(struct arm_pmu 
*pmu, + unsigned int cpu) +{ + int irq = per_cpu(pmu_irqs, cpu); + + per_cpu(probed_pmus, cpu) = pmu; + + if (pmu_irq_matches(pmu, irq)) { + struct pmu_hw_events __percpu *hw_events; + hw_events = pmu->hw_events; + per_cpu(hw_events->irq, cpu) = irq; + } + + cpumask_set_cpu(cpu, &pmu->supported_cpus); +} + /* * This must run before the common arm_pmu hotplug logic, so that we can * associate a CPU and its interrupt before the common code tries to manage the @@ -254,42 +262,50 @@ static bool pmu_irq_matches(struct arm_pmu *pmu, int irq) static int arm_pmu_acpi_cpu_starting(unsigned int cpu) { struct arm_pmu *pmu; - struct pmu_hw_events __percpu *hw_events; - int irq; /* If we've already probed this CPU, we have nothing to do */ if (per_cpu(probed_pmus, cpu)) return 0; - irq = per_cpu(pmu_irqs, cpu); - - pmu = arm_pmu_acpi_find_alloc_pmu(); - if (!pmu) - return -ENOMEM; + pmu = arm_pmu_acpi_find_pmu(); + if (!pmu) { + pr_warn_ratelimited("Unable to associate CPU%d with a PMU\n", + cpu); + return 0; + } - per_cpu(probed_pmus, cpu) = pmu; + arm_pmu_acpi_associate_pmu_cpu(pmu, cpu); + return 0; +} - if (pmu_irq_matches(pmu, irq)) { - hw_events = pmu->hw_events; - per_cpu(hw_events->irq, cpu) = irq; - } +static void arm_pmu_acpi_probe_matching_cpus(struct arm_pmu *pmu, + unsigned long cpuid) +{ + int cpu; - cpumask_set_cpu(cpu, &pmu->supported_cpus); + for_each_online_cpu(cpu) { + unsigned long cpu_cpuid = per_cpu(cpu_data, cpu).reg_midr; - /* - * Ideally, we'd probe the PMU here when we find the first matching - * CPU. We can't do that for several reasons; see the comment in - * arm_pmu_acpi_init(). - * - * So for the time being, we're done. - */ - return 0; + if (cpu_cpuid == cpuid) + arm_pmu_acpi_associate_pmu_cpu(pmu, cpu); + } } int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { int pmu_idx = 0; - int cpu, ret; + unsigned int cpu; + int ret; + + ret = arm_pmu_acpi_parse_irqs(); + if (ret) + return ret; + + ret = cpuhp_setup_state_nocalls(CPUHP_AP_PERF_ARM_ACPI_STARTING, + "perf/arm/pmu_acpi:starting", + arm_pmu_acpi_cpu_starting, NULL); + if (ret) + return ret; /* * Initialise and register the set of PMUs which we know about right @@ -304,13 +320,27 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn) * For the moment, as with the platform/DT case, we need at least one * of a PMU's CPUs to be online at probe time. 
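 *
 * As a sketch of the flow below (using only names from this patch):
 * each online CPU without a probed PMU gets a newly allocated one, its
 * MIDR is recorded in pmu->acpi_cpuid, and every other online CPU whose
 * MIDR matches is then associated with that same PMU:
 *
 *	cpuid = per_cpu(cpu_data, cpu).reg_midr;
 *	pmu->acpi_cpuid = cpuid;
 *	arm_pmu_acpi_probe_matching_cpus(pmu, cpuid);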
*/ - for_each_possible_cpu(cpu) { + for_each_online_cpu(cpu) { struct arm_pmu *pmu = per_cpu(probed_pmus, cpu); + unsigned long cpuid; char *base_name; - if (!pmu || pmu->name) + /* If we've already probed this CPU, we have nothing to do */ + if (pmu) continue; + pmu = armpmu_alloc(); + if (!pmu) { + pr_warn("Unable to allocate PMU for CPU%d\n", + cpu); + return -ENOMEM; + } + + cpuid = per_cpu(cpu_data, cpu).reg_midr; + pmu->acpi_cpuid = cpuid; + + arm_pmu_acpi_probe_matching_cpus(pmu, cpuid); + ret = init_fn(pmu); if (ret == -ENODEV) { /* PMU not handled by this driver, or not present */ @@ -335,26 +365,16 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn) } } - return 0; + return ret; } static int arm_pmu_acpi_init(void) { - int ret; - if (acpi_disabled) return 0; arm_spe_acpi_register_device(); - ret = arm_pmu_acpi_parse_irqs(); - if (ret) - return ret; - - ret = cpuhp_setup_state(CPUHP_AP_PERF_ARM_ACPI_STARTING, - "perf/arm/pmu_acpi:starting", - arm_pmu_acpi_cpu_starting, NULL); - - return ret; + return 0; } subsys_initcall(arm_pmu_acpi_init) diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c index 00d4c45a8017..25a269d431e4 100644 --- a/drivers/perf/arm_smmuv3_pmu.c +++ b/drivers/perf/arm_smmuv3_pmu.c @@ -959,6 +959,8 @@ static struct platform_driver smmu_pmu_driver = { static int __init arm_smmu_pmu_init(void) { + int ret; + cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "perf/arm/pmcg:online", NULL, @@ -966,7 +968,11 @@ static int __init arm_smmu_pmu_init(void) if (cpuhp_state_num < 0) return cpuhp_state_num; - return platform_driver_register(&smmu_pmu_driver); + ret = platform_driver_register(&smmu_pmu_driver); + if (ret) + cpuhp_remove_multi_state(cpuhp_state_num); + + return ret; } module_init(arm_smmu_pmu_init); diff --git a/drivers/perf/hisilicon/hisi_pcie_pmu.c b/drivers/perf/hisilicon/hisi_pcie_pmu.c index 21771708597d..6fee0b6e163b 100644 --- a/drivers/perf/hisilicon/hisi_pcie_pmu.c +++ b/drivers/perf/hisilicon/hisi_pcie_pmu.c @@ -47,10 +47,14 @@ #define HISI_PCIE_EVENT_M GENMASK_ULL(15, 0) #define HISI_PCIE_THR_MODE_M GENMASK_ULL(27, 27) #define HISI_PCIE_THR_M GENMASK_ULL(31, 28) +#define HISI_PCIE_LEN_M GENMASK_ULL(35, 34) #define HISI_PCIE_TARGET_M GENMASK_ULL(52, 36) #define HISI_PCIE_TRIG_MODE_M GENMASK_ULL(53, 53) #define HISI_PCIE_TRIG_M GENMASK_ULL(59, 56) +/* Default config of TLP length mode, will count both TLP headers and payloads */ +#define HISI_PCIE_LEN_M_DEFAULT 3ULL + #define HISI_PCIE_MAX_COUNTERS 8 #define HISI_PCIE_REG_STEP 8 #define HISI_PCIE_THR_MAX_VAL 10 @@ -91,6 +95,7 @@ HISI_PCIE_PMU_FILTER_ATTR(thr_len, config1, 3, 0); HISI_PCIE_PMU_FILTER_ATTR(thr_mode, config1, 4, 4); HISI_PCIE_PMU_FILTER_ATTR(trig_len, config1, 8, 5); HISI_PCIE_PMU_FILTER_ATTR(trig_mode, config1, 9, 9); +HISI_PCIE_PMU_FILTER_ATTR(len_mode, config1, 11, 10); HISI_PCIE_PMU_FILTER_ATTR(port, config2, 15, 0); HISI_PCIE_PMU_FILTER_ATTR(bdf, config2, 31, 16); @@ -215,8 +220,8 @@ static void hisi_pcie_pmu_config_filter(struct perf_event *event) { struct hisi_pcie_pmu *pcie_pmu = to_pcie_pmu(event->pmu); struct hw_perf_event *hwc = &event->hw; + u64 port, trig_len, thr_len, len_mode; u64 reg = HISI_PCIE_INIT_SET; - u64 port, trig_len, thr_len; /* Config HISI_PCIE_EVENT_CTRL according to event. 
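 *
 * For example (illustrative values only): len_mode=1 supplied via
 * config1 bits 10-11 is programmed as
 *
 *	reg |= FIELD_PREP(HISI_PCIE_LEN_M, 1);
 *
 * while a zero len_mode falls back to HISI_PCIE_LEN_M_DEFAULT so that
 * both TLP headers and payloads are counted.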
*/ reg |= FIELD_PREP(HISI_PCIE_EVENT_M, hisi_pcie_get_real_event(event)); @@ -245,6 +250,12 @@ static void hisi_pcie_pmu_config_filter(struct perf_event *event) reg |= HISI_PCIE_THR_EN; } + len_mode = hisi_pcie_get_len_mode(event); + if (len_mode) + reg |= FIELD_PREP(HISI_PCIE_LEN_M, len_mode); + else + reg |= FIELD_PREP(HISI_PCIE_LEN_M, HISI_PCIE_LEN_M_DEFAULT); + hisi_pcie_pmu_writeq(pcie_pmu, HISI_PCIE_EVENT_CTRL, hwc->idx, reg); } @@ -693,10 +704,10 @@ static struct attribute *hisi_pcie_pmu_events_attr[] = { HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_cnt, 0x10210), HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_latency, 0x0011), HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_cnt, 0x10011), - HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_flux, 0x1005), - HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_time, 0x11005), - HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_flux, 0x2004), - HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_time, 0x12004), + HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_flux, 0x0804), + HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_time, 0x10804), + HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_flux, 0x0405), + HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_time, 0x10405), NULL }; @@ -711,6 +722,7 @@ static struct attribute *hisi_pcie_pmu_format_attr[] = { HISI_PCIE_PMU_FORMAT_ATTR(thr_mode, "config1:4"), HISI_PCIE_PMU_FORMAT_ATTR(trig_len, "config1:5-8"), HISI_PCIE_PMU_FORMAT_ATTR(trig_mode, "config1:9"), + HISI_PCIE_PMU_FORMAT_ATTR(len_mode, "config1:10-11"), HISI_PCIE_PMU_FORMAT_ATTR(port, "config2:0-15"), HISI_PCIE_PMU_FORMAT_ATTR(bdf, "config2:16-31"), NULL diff --git a/drivers/perf/marvell_cn10k_tad_pmu.c b/drivers/perf/marvell_cn10k_tad_pmu.c index 69c3050a4348..a1166afb3702 100644 --- a/drivers/perf/marvell_cn10k_tad_pmu.c +++ b/drivers/perf/marvell_cn10k_tad_pmu.c @@ -408,7 +408,11 @@ static int __init tad_pmu_init(void) if (ret < 0) return ret; tad_pmu_cpuhp_state = ret; - return platform_driver_register(&tad_pmu_driver); + ret = platform_driver_register(&tad_pmu_driver); + if (ret) + cpuhp_remove_multi_state(tad_pmu_cpuhp_state); + + return ret; } static void __exit tad_pmu_exit(void) diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index 3dc5824141cd..c8ab800652b5 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -1027,14 +1027,19 @@ * keep any .init_array.* sections. * https://bugs.llvm.org/show_bug.cgi?id=46478 */ +#ifdef CONFIG_UNWIND_TABLES +#define DISCARD_EH_FRAME +#else +#define DISCARD_EH_FRAME *(.eh_frame) +#endif #if defined(CONFIG_GCOV_KERNEL) || defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN) # ifdef CONFIG_CONSTRUCTORS # define SANITIZER_DISCARDS \ - *(.eh_frame) + DISCARD_EH_FRAME # else # define SANITIZER_DISCARDS \ *(.init_array) *(.init_array.*) \ - *(.eh_frame) + DISCARD_EH_FRAME # endif #else # define SANITIZER_DISCARDS diff --git a/include/linux/acpi_apmt.h b/include/linux/acpi_apmt.h new file mode 100644 index 000000000000..40bd634d082f --- /dev/null +++ b/include/linux/acpi_apmt.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * ARM CoreSight PMU driver. + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. 
+ * + */ + +#ifndef __ACPI_APMT_H__ +#define __ACPI_APMT_H__ + +#include <linux/acpi.h> + +#ifdef CONFIG_ACPI_APMT +void acpi_apmt_init(void); +#else +static inline void acpi_apmt_init(void) { } +#endif /* CONFIG_ACPI_APMT */ + +#endif /* __ACPI_APMT_H__ */ diff --git a/include/linux/arm_ffa.h b/include/linux/arm_ffa.h index 5f02d2e6b9d9..c87aeecaa9b2 100644 --- a/include/linux/arm_ffa.h +++ b/include/linux/arm_ffa.h @@ -11,6 +11,89 @@ #include <linux/types.h> #include <linux/uuid.h> +#define FFA_SMC(calling_convention, func_num) \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, (calling_convention), \ + ARM_SMCCC_OWNER_STANDARD, (func_num)) + +#define FFA_SMC_32(func_num) FFA_SMC(ARM_SMCCC_SMC_32, (func_num)) +#define FFA_SMC_64(func_num) FFA_SMC(ARM_SMCCC_SMC_64, (func_num)) + +#define FFA_ERROR FFA_SMC_32(0x60) +#define FFA_SUCCESS FFA_SMC_32(0x61) +#define FFA_INTERRUPT FFA_SMC_32(0x62) +#define FFA_VERSION FFA_SMC_32(0x63) +#define FFA_FEATURES FFA_SMC_32(0x64) +#define FFA_RX_RELEASE FFA_SMC_32(0x65) +#define FFA_RXTX_MAP FFA_SMC_32(0x66) +#define FFA_FN64_RXTX_MAP FFA_SMC_64(0x66) +#define FFA_RXTX_UNMAP FFA_SMC_32(0x67) +#define FFA_PARTITION_INFO_GET FFA_SMC_32(0x68) +#define FFA_ID_GET FFA_SMC_32(0x69) +#define FFA_MSG_POLL FFA_SMC_32(0x6A) +#define FFA_MSG_WAIT FFA_SMC_32(0x6B) +#define FFA_YIELD FFA_SMC_32(0x6C) +#define FFA_RUN FFA_SMC_32(0x6D) +#define FFA_MSG_SEND FFA_SMC_32(0x6E) +#define FFA_MSG_SEND_DIRECT_REQ FFA_SMC_32(0x6F) +#define FFA_FN64_MSG_SEND_DIRECT_REQ FFA_SMC_64(0x6F) +#define FFA_MSG_SEND_DIRECT_RESP FFA_SMC_32(0x70) +#define FFA_FN64_MSG_SEND_DIRECT_RESP FFA_SMC_64(0x70) +#define FFA_MEM_DONATE FFA_SMC_32(0x71) +#define FFA_FN64_MEM_DONATE FFA_SMC_64(0x71) +#define FFA_MEM_LEND FFA_SMC_32(0x72) +#define FFA_FN64_MEM_LEND FFA_SMC_64(0x72) +#define FFA_MEM_SHARE FFA_SMC_32(0x73) +#define FFA_FN64_MEM_SHARE FFA_SMC_64(0x73) +#define FFA_MEM_RETRIEVE_REQ FFA_SMC_32(0x74) +#define FFA_FN64_MEM_RETRIEVE_REQ FFA_SMC_64(0x74) +#define FFA_MEM_RETRIEVE_RESP FFA_SMC_32(0x75) +#define FFA_MEM_RELINQUISH FFA_SMC_32(0x76) +#define FFA_MEM_RECLAIM FFA_SMC_32(0x77) +#define FFA_MEM_OP_PAUSE FFA_SMC_32(0x78) +#define FFA_MEM_OP_RESUME FFA_SMC_32(0x79) +#define FFA_MEM_FRAG_RX FFA_SMC_32(0x7A) +#define FFA_MEM_FRAG_TX FFA_SMC_32(0x7B) +#define FFA_NORMAL_WORLD_RESUME FFA_SMC_32(0x7C) + +/* + * For some calls it is necessary to use SMC64 to pass or return 64-bit values. + * For such calls FFA_FN_NATIVE(name) will choose the appropriate + * (native-width) function ID. + */ +#ifdef CONFIG_64BIT +#define FFA_FN_NATIVE(name) FFA_FN64_##name +#else +#define FFA_FN_NATIVE(name) FFA_##name +#endif + +/* FFA error codes. */ +#define FFA_RET_SUCCESS (0) +#define FFA_RET_NOT_SUPPORTED (-1) +#define FFA_RET_INVALID_PARAMETERS (-2) +#define FFA_RET_NO_MEMORY (-3) +#define FFA_RET_BUSY (-4) +#define FFA_RET_INTERRUPTED (-5) +#define FFA_RET_DENIED (-6) +#define FFA_RET_RETRY (-7) +#define FFA_RET_ABORTED (-8) + +/* FFA version encoding */ +#define FFA_MAJOR_VERSION_MASK GENMASK(30, 16) +#define FFA_MINOR_VERSION_MASK GENMASK(15, 0) +#define FFA_MAJOR_VERSION(x) ((u16)(FIELD_GET(FFA_MAJOR_VERSION_MASK, (x)))) +#define FFA_MINOR_VERSION(x) ((u16)(FIELD_GET(FFA_MINOR_VERSION_MASK, (x)))) +#define FFA_PACK_VERSION_INFO(major, minor) \ + (FIELD_PREP(FFA_MAJOR_VERSION_MASK, (major)) | \ + FIELD_PREP(FFA_MINOR_VERSION_MASK, (minor))) +#define FFA_VERSION_1_0 FFA_PACK_VERSION_INFO(1, 0) + +/** + * FF-A specification mentions explicitly about '4K pages'. 
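+ * For example (illustrative only): with a 64K kernel translation
+ * granule, PAGE_SIZE / FFA_PAGE_SIZE == 16, so a single kernel page
+ * spans 16 FF-A pages in FF-A memory descriptors.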
+ * The FF-A page size should not be confused with the kernel PAGE_SIZE,
+ * which is the translation granule the kernel is configured with and
+ * may be one of 4K, 16K or 64K.
+ */
+#define FFA_PAGE_SIZE SZ_4K
+
 /* FFA Bus/Device/Driver related */
 struct ffa_device {
 int vm_id;
@@ -161,11 +244,11 @@ struct ffa_mem_region_attributes {
 */
#define FFA_MEM_RETRIEVE_SELF_BORROWER BIT(0)
 u8 flag;
- u32 composite_off;
 /*
 * Offset in bytes from the start of the outer `ffa_memory_region` to
 * an `struct ffa_mem_region_addr_range`.
 */
+ u32 composite_off;
 u64 reserved;
};
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 62557d4bffc2..99f1146614c0 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -37,9 +37,10 @@ extern void ftrace_boot_snapshot(void);
 static inline void ftrace_boot_snapshot(void) { }
 #endif
-#ifdef CONFIG_FUNCTION_TRACER
 struct ftrace_ops;
 struct ftrace_regs;
+
+#ifdef CONFIG_FUNCTION_TRACER
 /*
 * If the arch's mcount caller does not support all of ftrace's
 * features, then it must call an indirect function that
@@ -110,12 +111,11 @@ struct ftrace_regs {
 #define arch_ftrace_get_regs(fregs) (&(fregs)->regs)
 /*
- * ftrace_instruction_pointer_set() is to be defined by the architecture
- * if to allow setting of the instruction pointer from the ftrace_regs
- * when HAVE_DYNAMIC_FTRACE_WITH_ARGS is set and it supports
- * live kernel patching.
+ * ftrace_regs_set_instruction_pointer() is to be defined by the architecture
+ * to allow setting of the instruction pointer from the ftrace_regs when
+ * HAVE_DYNAMIC_FTRACE_WITH_ARGS is set and it supports live kernel patching.
 */
-#define ftrace_instruction_pointer_set(fregs, ip) do { } while (0)
+#define ftrace_regs_set_instruction_pointer(fregs, ip) do { } while (0)
 #endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */
 static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs)
@@ -126,6 +126,35 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs)
 return arch_ftrace_get_regs(fregs);
 }
+/*
+ * When true, the ftrace_regs_{get,set}_*() functions may be used on fregs.
+ * Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs.
+ */
+static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs)
+{
+ if (IS_ENABLED(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS))
+ return true;
+
+ return ftrace_get_regs(fregs) != NULL;
+}
+
+#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
+#define ftrace_regs_get_instruction_pointer(fregs) \
+ instruction_pointer(ftrace_get_regs(fregs))
+#define ftrace_regs_get_argument(fregs, n) \
+ regs_get_kernel_argument(ftrace_get_regs(fregs), n)
+#define ftrace_regs_get_stack_pointer(fregs) \
+ kernel_stack_pointer(ftrace_get_regs(fregs))
+#define ftrace_regs_return_value(fregs) \
+ regs_return_value(ftrace_get_regs(fregs))
+#define ftrace_regs_set_return_value(fregs, ret) \
+ regs_set_return_value(ftrace_get_regs(fregs), ret)
+#define ftrace_override_function_with_return(fregs) \
+ override_function_with_return(ftrace_get_regs(fregs))
+#define ftrace_regs_query_register_offset(name) \
+ regs_query_register_offset(name)
+#endif
+
 typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
 struct ftrace_ops *op, struct ftrace_regs *fregs);
@@ -427,9 +456,7 @@ static inline int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsi
 {
 return -ENODEV;
 }
-#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
-#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
 /*
 * This must be implemented by the architecture.
* It is the way the ftrace direct_ops helper, when called @@ -443,9 +470,9 @@ static inline int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsi * the return from the trampoline jump to the direct caller * instead of going back to the function it just traced. */ -static inline void arch_ftrace_set_direct_caller(struct pt_regs *regs, +static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsigned long addr) { } -#endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */ +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */ #ifdef CONFIG_STACK_TRACER diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h index 0356cb6a215d..0c15c5b7f801 100644 --- a/include/linux/perf/arm_pmu.h +++ b/include/linux/perf/arm_pmu.h @@ -174,7 +174,6 @@ void kvm_host_pmu_init(struct arm_pmu *pmu); /* Internal functions only for core arm_pmu code */ struct arm_pmu *armpmu_alloc(void); -struct arm_pmu *armpmu_alloc_atomic(void); void armpmu_free(struct arm_pmu *pmu); int armpmu_register(struct arm_pmu *pmu); int armpmu_request_irq(int irq, int cpu); diff --git a/include/linux/scs.h b/include/linux/scs.h index 18122d9e17ff..4ab5bdc898cf 100644 --- a/include/linux/scs.h +++ b/include/linux/scs.h @@ -53,6 +53,22 @@ static inline bool task_scs_end_corrupted(struct task_struct *tsk) return sz >= SCS_SIZE - 1 || READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC; } +DECLARE_STATIC_KEY_FALSE(dynamic_scs_enabled); + +static inline bool scs_is_dynamic(void) +{ + if (!IS_ENABLED(CONFIG_DYNAMIC_SCS)) + return false; + return static_branch_likely(&dynamic_scs_enabled); +} + +static inline bool scs_is_enabled(void) +{ + if (!IS_ENABLED(CONFIG_DYNAMIC_SCS)) + return true; + return scs_is_dynamic(); +} + #else /* CONFIG_SHADOW_CALL_STACK */ static inline void *scs_alloc(int node) { return NULL; } @@ -62,6 +78,8 @@ static inline void scs_task_reset(struct task_struct *tsk) {} static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; } static inline void scs_release(struct task_struct *tsk) {} static inline bool task_scs_end_corrupted(struct task_struct *tsk) { return false; } +static inline bool scs_is_enabled(void) { return false; } +static inline bool scs_is_dynamic(void) { return false; } #endif /* CONFIG_SHADOW_CALL_STACK */ diff --git a/include/soc/amlogic/meson_ddr_pmu.h b/include/soc/amlogic/meson_ddr_pmu.h new file mode 100644 index 000000000000..4a33e4ab8ada --- /dev/null +++ b/include/soc/amlogic/meson_ddr_pmu.h @@ -0,0 +1,66 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2022 Amlogic, Inc. All rights reserved. 
+ */
+
+#ifndef __MESON_DDR_PMU_H__
+#define __MESON_DDR_PMU_H__
+
+#define MAX_CHANNEL_NUM 8
+
+enum {
+ ALL_CHAN_COUNTER_ID,
+ CHAN1_COUNTER_ID,
+ CHAN2_COUNTER_ID,
+ CHAN3_COUNTER_ID,
+ CHAN4_COUNTER_ID,
+ CHAN5_COUNTER_ID,
+ CHAN6_COUNTER_ID,
+ CHAN7_COUNTER_ID,
+ CHAN8_COUNTER_ID,
+ COUNTER_MAX_ID,
+};
+
+struct dmc_info;
+
+struct dmc_counter {
+ u64 all_cnt; /* The count of all requests coming in/out of the DDR controller */
+ union {
+ u64 all_req;
+ struct {
+ u64 all_idle_cnt;
+ u64 all_16bit_cnt;
+ };
+ };
+ u64 channel_cnt[MAX_CHANNEL_NUM]; /* Per-channel DMC bandwidth-monitor counters */
+};
+
+struct dmc_hw_info {
+ void (*enable)(struct dmc_info *info);
+ void (*disable)(struct dmc_info *info);
+ /* Bind an axi line to a bandwidth-monitor channel */
+ void (*set_axi_filter)(struct dmc_info *info, int axi_id, int chann);
+ int (*irq_handler)(struct dmc_info *info,
+ struct dmc_counter *counter);
+ void (*get_counters)(struct dmc_info *info,
+ struct dmc_counter *counter);
+
+ int dmc_nr; /* The number of DMC controllers */
+ int chann_nr; /* The number of DMC bandwidth-monitor channels */
+ struct attribute **fmt_attr;
+ const u64 capability[2];
+};
+
+struct dmc_info {
+ const struct dmc_hw_info *hw_info;
+
+ void __iomem *ddr_reg[4];
+ unsigned long timer_value; /* Timer value in TIMER register */
+ void __iomem *pll_reg;
+ int irq_num; /* irq vector number */
+};
+
+int meson_ddr_pmu_create(struct platform_device *pdev);
+int meson_ddr_pmu_remove(struct platform_device *pdev);
+
+#endif /* __MESON_DDR_PMU_H__ */
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 4c4f5a776d80..4152c71507e2 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -118,7 +118,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 if (func->nop)
 goto unlock;
- ftrace_instruction_pointer_set(fregs, (unsigned long)func->new_func);
+ ftrace_regs_set_instruction_pointer(fregs, (unsigned long)func->new_func);
 unlock:
 ftrace_test_recursion_unlock(bit);
diff --git a/kernel/scs.c b/kernel/scs.c
index b7e1b096d906..d7809affe740 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -12,6 +12,10 @@
 #include <linux/vmalloc.h>
 #include <linux/vmstat.h>
+#ifdef CONFIG_DYNAMIC_SCS
+DEFINE_STATIC_KEY_FALSE(dynamic_scs_enabled);
+#endif
+
 static void __scs_account(void *s, int account)
 {
 struct page *scs_page = vmalloc_to_page(s);
@@ -101,14 +105,20 @@ static int scs_cleanup(unsigned int cpu)
 void __init scs_init(void)
 {
+ if (!scs_is_enabled())
+ return;
 cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
 scs_cleanup);
 }
 int scs_prepare(struct task_struct *tsk, int node)
 {
- void *s = scs_alloc(node);
+ void *s;
+ if (!scs_is_enabled())
+ return 0;
+
+ s = scs_alloc(node);
 if (!s)
 return -ENOMEM;
@@ -148,7 +158,7 @@ void scs_release(struct task_struct *tsk)
 {
 void *s = task_scs(tsk);
- if (!s)
+ if (!scs_is_enabled() || !s)
 return;
 WARN(task_scs_end_corrupted(tsk),
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index e9e95c790b8e..2c6611c13f99 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -46,10 +46,10 @@ config HAVE_DYNAMIC_FTRACE_WITH_ARGS
 bool
 help
 If this is set, then arguments and stack can be found from
- the pt_regs passed into the function callback regs parameter
+ the ftrace_regs passed into the function callback regs parameter
 by default, even without setting the REGS flag in the ftrace_ops.
- This allows for use of regs_get_kernel_argument() and
- kernel_stack_pointer().
+ This allows for use of ftrace_regs_get_argument() and + ftrace_regs_get_stack_pointer(). config HAVE_DYNAMIC_FTRACE_NO_PATCHABLE bool diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 33236241f236..acfa4e029bcc 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -2488,14 +2488,13 @@ ftrace_add_rec_direct(unsigned long ip, unsigned long addr, static void call_direct_funcs(unsigned long ip, unsigned long pip, struct ftrace_ops *ops, struct ftrace_regs *fregs) { - struct pt_regs *regs = ftrace_get_regs(fregs); unsigned long addr; addr = ftrace_find_rec_direct(ip); if (!addr) return; - arch_ftrace_set_direct_caller(regs, addr); + arch_ftrace_set_direct_caller(fregs, addr); } struct ftrace_ops direct_ops = { diff --git a/scripts/head-object-list.txt b/scripts/head-object-list.txt index b16326a92c45..f226e45e3b7b 100644 --- a/scripts/head-object-list.txt +++ b/scripts/head-object-list.txt @@ -15,7 +15,6 @@ arch/alpha/kernel/head.o arch/arc/kernel/head.o arch/arm/kernel/head-nommu.o arch/arm/kernel/head.o -arch/arm64/kernel/head.o arch/csky/kernel/head.o arch/hexagon/kernel/head.o arch/ia64/kernel/head.o diff --git a/scripts/module.lds.S b/scripts/module.lds.S index da4bddd26171..bf5bcf2836d8 100644 --- a/scripts/module.lds.S +++ b/scripts/module.lds.S @@ -3,6 +3,12 @@ * Archs are free to supply their own linker scripts. ld will * combine them automatically. */ +#ifdef CONFIG_UNWIND_TABLES +#define DISCARD_EH_FRAME +#else +#define DISCARD_EH_FRAME *(.eh_frame) +#endif + SECTIONS { /DISCARD/ : { *(.discard) diff --git a/tools/testing/selftests/arm64/abi/hwcap.c b/tools/testing/selftests/arm64/abi/hwcap.c index 9f1a7b5c6193..9f255bc5f31c 100644 --- a/tools/testing/selftests/arm64/abi/hwcap.c +++ b/tools/testing/selftests/arm64/abi/hwcap.c @@ -33,6 +33,12 @@ */ typedef void (*sigill_fn)(void); +static void cssc_sigill(void) +{ + /* CNT x0, x0 */ + asm volatile(".inst 0xdac01c00" : : : "x0"); +} + static void rng_sigill(void) { asm volatile("mrs x0, S3_3_C2_C4_0" : : : "x0"); @@ -56,6 +62,12 @@ static void sve2_sigill(void) asm volatile(".inst 0x4408A000" : : : "z0"); } +static void sve2p1_sigill(void) +{ + /* BFADD Z0.H, Z0.H, Z0.H */ + asm volatile(".inst 0x65000000" : : : "z0"); +} + static void sveaes_sigill(void) { /* AESD z0.b, z0.b, z0.b */ @@ -119,6 +131,13 @@ static const struct hwcap_data { bool sigill_reliable; } hwcaps[] = { { + .name = "CSSC", + .at_hwcap = AT_HWCAP2, + .hwcap_bit = HWCAP2_CSSC, + .cpuinfo = "cssc", + .sigill_fn = cssc_sigill, + }, + { .name = "RNG", .at_hwcap = AT_HWCAP2, .hwcap_bit = HWCAP2_RNG, @@ -126,6 +145,12 @@ static const struct hwcap_data { .sigill_fn = rng_sigill, }, { + .name = "RPRFM", + .at_hwcap = AT_HWCAP2, + .hwcap_bit = HWCAP2_RPRFM, + .cpuinfo = "rprfm", + }, + { .name = "SME", .at_hwcap = AT_HWCAP2, .hwcap_bit = HWCAP2_SME, @@ -149,6 +174,13 @@ static const struct hwcap_data { .sigill_fn = sve2_sigill, }, { + .name = "SVE 2.1", + .at_hwcap = AT_HWCAP2, + .hwcap_bit = HWCAP2_SVE2P1, + .cpuinfo = "sve2p1", + .sigill_fn = sve2p1_sigill, + }, + { .name = "SVE AES", .at_hwcap = AT_HWCAP2, .hwcap_bit = HWCAP2_SVEAES, diff --git a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S index b523c21c2278..acd5e9f3bc0b 100644 --- a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S +++ b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S @@ -153,7 +153,7 @@ do_syscall: // Only set a non-zero FFR, test patterns must be zero since the // syscall should clear it - this lets us 
handle FA64. ldr x2, =ffr_in - ldr p0, [x2, #0] + ldr p0, [x2] ldr x2, [x2, #0] cbz x2, 2f wrffr p0.b @@ -298,7 +298,7 @@ do_syscall: cbz x2, 1f ldr x2, =ffr_out rdffr p0.b - str p0, [x2, #0] + str p0, [x2] 1: // Restore callee saved registers x19-x30 diff --git a/tools/testing/selftests/arm64/fp/fp-stress.c b/tools/testing/selftests/arm64/fp/fp-stress.c index 4e62a9199f97..f8b2f41aac36 100644 --- a/tools/testing/selftests/arm64/fp/fp-stress.c +++ b/tools/testing/selftests/arm64/fp/fp-stress.c @@ -39,10 +39,12 @@ struct child_data { static int epoll_fd; static struct child_data *children; +static struct epoll_event *evs; +static int tests; static int num_children; static bool terminate; -static void drain_output(bool flush); +static int startup_pipe[2]; static int num_processors(void) { @@ -82,12 +84,36 @@ static void child_start(struct child_data *child, const char *program) } /* + * Duplicate the read side of the startup pipe to + * FD 3 so we can close everything else. + */ + ret = dup2(startup_pipe[0], 3); + if (ret == -1) { + fprintf(stderr, "dup2() %d\n", errno); + exit(EXIT_FAILURE); + } + + /* * Very dumb mechanism to clean open FDs other than * stdio. We don't want O_CLOEXEC for the pipes... */ - for (i = 3; i < 8192; i++) + for (i = 4; i < 8192; i++) close(i); + /* + * Read from the startup pipe, there should be no data + * and we should block until it is closed. We just + * carry on on error since this isn't super critical. + */ + ret = read(3, &i, sizeof(i)); + if (ret < 0) + fprintf(stderr, "read(startp pipe) failed: %s (%d)\n", + strerror(errno), errno); + if (ret > 0) + fprintf(stderr, "%d bytes of data on startup pipe\n", + ret); + close(3); + ret = execl(program, program, NULL); fprintf(stderr, "execl(%s) failed: %d (%s)\n", program, errno, strerror(errno)); @@ -112,12 +138,6 @@ static void child_start(struct child_data *child, const char *program) ksft_exit_fail_msg("%s EPOLL_CTL_ADD failed: %s (%d)\n", child->name, strerror(errno), errno); } - - /* - * Keep output flowing during child startup so logs - * are more timely, can help debugging. 
- */ - drain_output(false); } } @@ -290,12 +310,12 @@ static void start_fpsimd(struct child_data *child, int cpu, int copy) { int ret; - child_start(child, "./fpsimd-test"); - ret = asprintf(&child->name, "FPSIMD-%d-%d", cpu, copy); if (ret == -1) ksft_exit_fail_msg("asprintf() failed\n"); + child_start(child, "./fpsimd-test"); + ksft_print_msg("Started %s\n", child->name); } @@ -307,12 +327,12 @@ static void start_sve(struct child_data *child, int vl, int cpu) if (ret < 0) ksft_exit_fail_msg("Failed to set SVE VL %d\n", vl); - child_start(child, "./sve-test"); - ret = asprintf(&child->name, "SVE-VL-%d-%d", vl, cpu); if (ret == -1) ksft_exit_fail_msg("asprintf() failed\n"); + child_start(child, "./sve-test"); + ksft_print_msg("Started %s\n", child->name); } @@ -320,16 +340,16 @@ static void start_ssve(struct child_data *child, int vl, int cpu) { int ret; + ret = asprintf(&child->name, "SSVE-VL-%d-%d", vl, cpu); + if (ret == -1) + ksft_exit_fail_msg("asprintf() failed\n"); + ret = prctl(PR_SME_SET_VL, vl | PR_SME_VL_INHERIT); if (ret < 0) ksft_exit_fail_msg("Failed to set SME VL %d\n", ret); child_start(child, "./ssve-test"); - ret = asprintf(&child->name, "SSVE-VL-%d-%d", vl, cpu); - if (ret == -1) - ksft_exit_fail_msg("asprintf() failed\n"); - ksft_print_msg("Started %s\n", child->name); } @@ -341,12 +361,12 @@ static void start_za(struct child_data *child, int vl, int cpu) if (ret < 0) ksft_exit_fail_msg("Failed to set SME VL %d\n", ret); - child_start(child, "./za-test"); - ret = asprintf(&child->name, "ZA-VL-%d-%d", vl, cpu); if (ret == -1) ksft_exit_fail_msg("asprintf() failed\n"); + child_start(child, "./za-test"); + ksft_print_msg("Started %s\n", child->name); } @@ -375,11 +395,11 @@ static void probe_vls(int vls[], int *vl_count, int set_vl) /* Handle any pending output without blocking */ static void drain_output(bool flush) { - struct epoll_event ev; int ret = 1; + int i; while (ret > 0) { - ret = epoll_wait(epoll_fd, &ev, 1, 0); + ret = epoll_wait(epoll_fd, evs, tests, 0); if (ret < 0) { if (errno == EINTR) continue; @@ -387,8 +407,8 @@ static void drain_output(bool flush) strerror(errno), errno); } - if (ret == 1) - child_output(ev.data.ptr, ev.events, flush); + for (i = 0; i < ret; i++) + child_output(evs[i].data.ptr, evs[i].events, flush); } } @@ -401,10 +421,11 @@ int main(int argc, char **argv) { int ret; int timeout = 10; - int cpus, tests, i, j, c; + int cpus, i, j, c; int sve_vl_count, sme_vl_count, fpsimd_per_cpu; + bool all_children_started = false; + int seen_children; int sve_vls[MAX_VLS], sme_vls[MAX_VLS]; - struct epoll_event ev; struct sigaction sa; while ((c = getopt_long(argc, argv, "t:", options, NULL)) != -1) { @@ -465,6 +486,12 @@ int main(int argc, char **argv) strerror(errno), ret); epoll_fd = ret; + /* Create a pipe which children will block on before execing */ + ret = pipe(startup_pipe); + if (ret != 0) + ksft_exit_fail_msg("Failed to create startup pipe: %s (%d)\n", + strerror(errno), errno); + /* Get signal handers ready before we start any children */ memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = handle_exit_signal; @@ -484,6 +511,11 @@ int main(int argc, char **argv) ksft_print_msg("Failed to install SIGCHLD handler: %s (%d)\n", strerror(errno), errno); + evs = calloc(tests, sizeof(*evs)); + if (!evs) + ksft_exit_fail_msg("Failed to allocated %d epoll events\n", + tests); + for (i = 0; i < cpus; i++) { for (j = 0; j < fpsimd_per_cpu; j++) start_fpsimd(&children[num_children++], i, j); @@ -497,6 +529,13 @@ int main(int argc, char **argv) } } + /* + * 
All children started, close the startup pipe and let them + * run. + */ + close(startup_pipe[0]); + close(startup_pipe[1]); + for (;;) { /* Did we get a signal asking us to exit? */ if (terminate) @@ -510,7 +549,7 @@ int main(int argc, char **argv) * useful in emulation where we will both be slow and * likely to have a large set of VLs. */ - ret = epoll_wait(epoll_fd, &ev, 1, 1000); + ret = epoll_wait(epoll_fd, evs, tests, 1000); if (ret < 0) { if (errno == EINTR) continue; @@ -519,13 +558,40 @@ int main(int argc, char **argv) } /* Output? */ - if (ret == 1) { - child_output(ev.data.ptr, ev.events, false); + if (ret > 0) { + for (i = 0; i < ret; i++) { + child_output(evs[i].data.ptr, evs[i].events, + false); + } continue; } /* Otherwise epoll_wait() timed out */ + /* + * If the child processes have not produced output they + * aren't actually running the tests yet . + */ + if (!all_children_started) { + seen_children = 0; + + for (i = 0; i < num_children; i++) + if (children[i].output_seen || + children[i].exited) + seen_children++; + + if (seen_children != num_children) { + ksft_print_msg("Waiting for %d children\n", + num_children - seen_children); + continue; + } + + all_children_started = true; + } + + ksft_print_msg("Sending signals, timeout remaining: %d\n", + timeout); + for (i = 0; i < num_children; i++) child_tickle(&children[i]); diff --git a/tools/testing/selftests/arm64/mte/check_buffer_fill.c b/tools/testing/selftests/arm64/mte/check_buffer_fill.c index 75fc482d63b6..1dbbbd47dd50 100644 --- a/tools/testing/selftests/arm64/mte/check_buffer_fill.c +++ b/tools/testing/selftests/arm64/mte/check_buffer_fill.c @@ -32,7 +32,7 @@ static int check_buffer_by_byte(int mem_type, int mode) bool err; mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG); - item = sizeof(sizes)/sizeof(int); + item = ARRAY_SIZE(sizes); for (i = 0; i < item; i++) { ptr = (char *)mte_allocate_memory(sizes[i], mem_type, 0, true); @@ -69,7 +69,7 @@ static int check_buffer_underflow_by_byte(int mem_type, int mode, char *und_ptr = NULL; mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG); - item = sizeof(sizes)/sizeof(int); + item = ARRAY_SIZE(sizes); for (i = 0; i < item; i++) { ptr = (char *)mte_allocate_memory_tag_range(sizes[i], mem_type, 0, underflow_range, 0); @@ -165,7 +165,7 @@ static int check_buffer_overflow_by_byte(int mem_type, int mode, char *over_ptr = NULL; mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG); - item = sizeof(sizes)/sizeof(int); + item = ARRAY_SIZE(sizes); for (i = 0; i < item; i++) { ptr = (char *)mte_allocate_memory_tag_range(sizes[i], mem_type, 0, 0, overflow_range); @@ -338,7 +338,7 @@ static int check_buffer_by_block(int mem_type, int mode) int i, item, result = KSFT_PASS; mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG); - item = sizeof(sizes)/sizeof(int); + item = ARRAY_SIZE(sizes); cur_mte_cxt.fault_valid = false; for (i = 0; i < item; i++) { result = check_buffer_by_block_iterate(mem_type, mode, sizes[i]); @@ -366,7 +366,7 @@ static int check_memory_initial_tags(int mem_type, int mode, int mapping) { char *ptr; int run, fd; - int total = sizeof(sizes)/sizeof(int); + int total = ARRAY_SIZE(sizes); mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG); for (run = 0; run < total; run++) { @@ -404,7 +404,7 @@ int main(int argc, char *argv[]) { int err; size_t page_size = getpagesize(); - int item = sizeof(sizes)/sizeof(int); + int item = ARRAY_SIZE(sizes); sizes[item - 3] = page_size - 1; sizes[item - 2] = page_size; diff --git a/tools/testing/selftests/arm64/mte/check_mmap_options.c 
b/tools/testing/selftests/arm64/mte/check_mmap_options.c index a04b12c21ac9..17694caaff53 100644 --- a/tools/testing/selftests/arm64/mte/check_mmap_options.c +++ b/tools/testing/selftests/arm64/mte/check_mmap_options.c @@ -61,9 +61,8 @@ static int check_anonymous_memory_mapping(int mem_type, int mode, int mapping, i { char *ptr, *map_ptr; int run, result, map_size; - int item = sizeof(sizes)/sizeof(int); + int item = ARRAY_SIZE(sizes); - item = sizeof(sizes)/sizeof(int); mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG); for (run = 0; run < item; run++) { map_size = sizes[run] + OVERFLOW + UNDERFLOW; @@ -93,7 +92,7 @@ static int check_file_memory_mapping(int mem_type, int mode, int mapping, int ta { char *ptr, *map_ptr; int run, fd, map_size; - int total = sizeof(sizes)/sizeof(int); + int total = ARRAY_SIZE(sizes); int result = KSFT_PASS; mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG); @@ -132,7 +131,7 @@ static int check_clear_prot_mte_flag(int mem_type, int mode, int mapping) { char *ptr, *map_ptr; int run, prot_flag, result, fd, map_size; - int total = sizeof(sizes)/sizeof(int); + int total = ARRAY_SIZE(sizes); prot_flag = PROT_READ | PROT_WRITE; mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG); @@ -187,7 +186,7 @@ static int check_clear_prot_mte_flag(int mem_type, int mode, int mapping) int main(int argc, char *argv[]) { int err; - int item = sizeof(sizes)/sizeof(int); + int item = ARRAY_SIZE(sizes); err = mte_default_setup(); if (err) diff --git a/tools/testing/selftests/arm64/signal/testcases/TODO b/tools/testing/selftests/arm64/signal/testcases/TODO index 110ff9fd195d..1f7fba8194fe 100644 --- a/tools/testing/selftests/arm64/signal/testcases/TODO +++ b/tools/testing/selftests/arm64/signal/testcases/TODO @@ -1,2 +1 @@ - Validate that register contents are saved and restored as expected. -- Support and validate extra_context. diff --git a/tools/testing/selftests/arm64/signal/testcases/testcases.c b/tools/testing/selftests/arm64/signal/testcases/testcases.c index e1c625b20ac4..d2eda7b5de26 100644 --- a/tools/testing/selftests/arm64/signal/testcases/testcases.c +++ b/tools/testing/selftests/arm64/signal/testcases/testcases.c @@ -1,5 +1,9 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (C) 2019 ARM Limited */ + +#include <ctype.h> +#include <string.h> + #include "testcases.h" struct _aarch64_ctx *get_header(struct _aarch64_ctx *head, uint32_t magic, @@ -109,7 +113,7 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err) bool terminated = false; size_t offs = 0; int flags = 0; - int new_flags; + int new_flags, i; struct extra_context *extra = NULL; struct sve_context *sve = NULL; struct za_context *za = NULL; @@ -117,6 +121,7 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err) (struct _aarch64_ctx *)uc->uc_mcontext.__reserved; void *extra_data = NULL; size_t extra_sz = 0; + char magic[4]; if (!err) return false; @@ -194,11 +199,19 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err) /* * A still unknown Magic: potentially freshly added * to the Kernel code and still unknown to the - * tests. + * tests. Magic numbers are supposed to be allocated + * as somewhat meaningful ASCII strings so try to + * print as such as well as the raw number. 
*/
+ memcpy(magic, &head->magic, sizeof(magic));
+ for (i = 0; i < sizeof(magic); i++)
+ if (!isalnum(magic[i]))
+ magic[i] = '?';
+
 fprintf(stdout,
- "SKIP Unknown MAGIC: 0x%X - Is KSFT arm64/signal up to date ?\n",
- head->magic);
+ "SKIP Unknown MAGIC: 0x%X (%c%c%c%c) - Is KSFT arm64/signal up to date ?\n",
+ head->magic,
+ magic[3], magic[2], magic[1], magic[0]);
 break;
 }
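The byte-order trick in that last hunk is easy to check in isolation. A minimal userspace sketch of the same decoding (the magic value below is invented purely for illustration):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned int magic_val = 0x464f4f01;	/* hypothetical context magic */
	char magic[4];
	int i;

	/* Copy the raw bytes, then blank anything non-alphanumeric. */
	memcpy(magic, &magic_val, sizeof(magic));
	for (i = 0; i < (int)sizeof(magic); i++)
		if (!isalnum((unsigned char)magic[i]))
			magic[i] = '?';

	/* Print the most significant byte first, as the test does. */
	printf("0x%X (%c%c%c%c)\n", magic_val,
	       magic[3], magic[2], magic[1], magic[0]);
	return 0;
}

On a little-endian machine this prints "0x464F4F01 (FOO?)", matching the test's convention of showing the magic's ASCII bytes most significant first.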