path: root/arch/mips/kernel/smp.c
Age | Commit message | Author | Files | Lines
2022-02-16 | MIPS: smp: fill in sibling and core maps earlier | Alexander Lobakin | 1 | -3/+3
After enabling CONFIG_SCHED_CORE (landed during the 5.14 cycle), a 2-core 2-thread-per-core interAptiv (CPS-driven) started emitting the following:

[ 0.025698] CPU1 revision is: 0001a120 (MIPS interAptiv (multi))
[ 0.048183] ------------[ cut here ]------------
[ 0.048187] WARNING: CPU: 1 PID: 0 at kernel/sched/core.c:6025 sched_core_cpu_starting+0x198/0x240
[ 0.048220] Modules linked in:
[ 0.048233] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.17.0-rc3+ #35 b7b319f24073fd9a3c2aa7ad15fb7993eec0b26f
[ 0.048247] Stack : 817f0000 00000004 327804c8 810eb050 00000000 00000004 00000000 c314fdd1
[ 0.048278]         830cbd64 819c0000 81800000 817f0000 83070bf4 00000001 830cbd08 00000000
[ 0.048307]         00000000 00000000 815fcbc4 00000000 00000000 00000000 00000000 00000000
[ 0.048334]         00000000 00000000 00000000 00000000 817f0000 00000000 00000000 817f6f34
[ 0.048361]         817f0000 818a3c00 817f0000 00000004 00000000 00000000 4dc33260 0018c933
[ 0.048389]         ...
[ 0.048396] Call Trace:
[ 0.048399] [<8105a7bc>] show_stack+0x3c/0x140
[ 0.048424] [<8131c2a0>] dump_stack_lvl+0x60/0x80
[ 0.048440] [<8108b5c0>] __warn+0xc0/0xf4
[ 0.048454] [<8108b658>] warn_slowpath_fmt+0x64/0x10c
[ 0.048467] [<810bd418>] sched_core_cpu_starting+0x198/0x240
[ 0.048483] [<810c6514>] sched_cpu_starting+0x14/0x80
[ 0.048497] [<8108c0f8>] cpuhp_invoke_callback_range+0x78/0x140
[ 0.048510] [<8108d914>] notify_cpu_starting+0x94/0x140
[ 0.048523] [<8106593c>] start_secondary+0xbc/0x280
[ 0.048539]
[ 0.048543] ---[ end trace 0000000000000000 ]---
[ 0.048636] Synchronize counters for CPU 1: done.

...for each CPU but CPU 0/boot. Basic debug printks right before the mentioned line say:

[ 0.048170] CPU: 1, smt_mask:

So smt_mask, which is obviously the sibling mask, is empty when entering the function. This is critical, as sched_core_cpu_starting() calculates core-scheduling parameters only once per CPU start, and it's crucial to have all the parameters filled in at that moment (at least it uses cpu_smt_mask(), which in fact is `&cpu_sibling_map[cpu]` on MIPS).

A bit of debugging led me to find that set_cpu_sibling_map(), which performs the actual map calculation, was being invoked after notify_cpu_starting(), and it is exactly the latter function that starts the CPU hotplug callback round (sched_core_cpu_starting() is basically a CPU hotplug callback). While the flow is the same on ARM64 (maps after the notifier, although before calling set_cpu_online()), x86 has calculated the sibling maps before starting the CPU hotplug callbacks since Linux 4.14 (see [0] for the reference).

Neither I nor my brief tests could find any potential caveats in calculating the maps right after performing delay calibration, and the WARN splat is now gone. The very same debug prints now yield exactly what I expected from them:

[ 0.048433] CPU: 1, smt_mask: 0-1

[0] https://git.kernel.org/pub/scm/linux/kernel/git/mips/linux.git/commit/?id=76ce7cfe35ef

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
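In outline, the fix moves the sibling/core map setup in start_secondary() ahead of the hotplug notifier. A simplified sketch of the resulting ordering follows; it is not the verbatim upstream diff and elides the unrelated steps.

/*
 * Simplified sketch of start_secondary() after the change described
 * above (not the verbatim upstream code): the sibling/core maps are
 * filled in before notify_cpu_starting() runs the CPU hotplug
 * callbacks, so sched_core_cpu_starting() sees a populated smt_mask.
 */
asmlinkage void start_secondary(void)
{
	unsigned int cpu = smp_processor_id();

	/* ... low-level init and clock setup elided ... */
	calibrate_delay();

	/* Moved earlier: compute the maps before any hotplug callback runs. */
	set_cpu_sibling_map(cpu);
	set_cpu_core_map(cpu);
	calculate_cpu_foreign_map();

	notify_cpu_starting(cpu);	/* runs sched_core_cpu_starting() etc. */

	/* ... completion signalling, counter sync, set_cpu_online()
	 *     and cpu_startup_entry() elided ... */
}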
2021-05-12 | sched/core: Initialize the idle task with preemption disabled | Valentin Schneider | 1 | -1/+0
As pointed out by commit de9b8f5dcbd9 ("sched: Fix crash trying to dequeue/enqueue the idle thread"), init_idle() can and will be invoked more than once on the same idle task. At boot time, it is invoked for the boot CPU thread by sched_init(). Then smp_init() creates the threads for all the secondary CPUs and invokes init_idle() on them. As the hotplug machinery brings the secondaries to life, it will issue calls to idle_thread_get(), which itself invokes init_idle() yet again. In this case it's invoked twice more per secondary: at _cpu_up(), and at bringup_cpu().

Given smp_init() already initializes the idle tasks for all *possible* CPUs, no further initialization should be required. Now, removing init_idle() from idle_thread_get() exposes some interesting expectations with regards to the idle task's preempt_count: the secondary startup always issues a preempt_disable(), requiring some reset of the preempt count to 0 between hot-unplug and hotplug, which is currently served by idle_thread_get() -> init_idle().

Given the idle task is supposed to have preemption disabled once and never see it re-enabled, it seems that what we actually want is to initialize its preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove init_idle() from idle_thread_get().

Secondary startups were patched via coccinelle:

  @begone@
  @@

  -preempt_disable();
  ...
  cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512094636.2958515-1-valentin.schneider@arm.com
2021-02-03 | arch: mips: kernel: Fix two spelling in smp.c | Bhaskar Chowdhury | 1 | -3/+3
s/logcal/logical/ s/intercpu/inter-CPU/ Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2020-11-24 | smp: Cleanup smp_call_function*() | Peter Zijlstra | 1 | -19/+6
Get rid of the __call_single_node union and cleanup the API a little to avoid external code relying on the structure layout as much. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
2020-03-31 | Merge tag 'mips_5.7' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux | Linus Torvalds | 1 | -21/+12
Pull MIPS updates from Thomas Bogendoerfer:

 - loongson64 irq rework
 - dmi support loongson
 - replace setup_irq() by request_irq()
 - jazz cleanups
 - minor cleanups and fixes

* tag 'mips_5.7' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (44 commits)
  MIPS: ralink: mt7621: Fix soc_device introduction
  MIPS: Exclude more dsemul code when CONFIG_MIPS_FP_SUPPORT=n
  MIPS/tlbex: Fix LDDIR usage in setup_pw() for Loongson-3
  MIPS: do not compile generic functions for CONFIG_CAVIUM_OCTEON_SOC
  MAINTAINERS: Update Loongson64 entry
  MIPS: Loongson64: Load built-in dtbs
  MIPS: Loongson64: Add generic dts
  dt-bindings: mips: Add loongson boards
  MIPS: Loongson64: Drop legacy IRQ code
  dt-bindings: interrupt-controller: Add Loongson-3 HTPIC
  irqchip: Add driver for Loongson-3 HyperTransport PIC controller
  dt-bindings: interrupt-controller: Add Loongson LIOINTC
  irqchip: loongson-liointc: Workaround LPC IRQ Errata
  irqchip: Add driver for Loongson I/O Local Interrupt Controller
  docs: mips: remove no longer needed au1xxx_ide.rst documentation
  MIPS: Alchemy: remove no longer used au1xxx_ide.h header
  ide: remove no longer used au1xxx-ide driver
  MIPS: Add support for Desktop Management Interface (DMI)
  firmware: dmi: Add macro SMBIOS_ENTRY_POINT_SCAN_START
  MIPS: ralink: mt7621: introduce 'soc_device' initialization
  ...
2020-03-06 | MIPS: smp: Remove tick_broadcast_count | Peter Xu | 1 | -8/+1
Now smp_call_function_single_async() provides the protection that we'll return with -EBUSY if the csd object is still pending, then we don't need the tick_broadcast_count counter any more. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20191216213125.9536-3-peterx@redhat.com
2020-03-05 | MIPS: Replace setup_irq() by request_irq() | afzal mohammed | 1 | -21/+12
request_irq() is preferred over setup_irq(). Invocations of setup_irq() occur after memory allocators are ready. Per tglx [1], setup_irq() existed in olden days, when allocators were not ready by the time early interrupts were initialized. Hence replace setup_irq() by request_irq(). remove_irq() has been replaced by free_irq() as well.

There were build errors in a previous version of this patch, a couple of which were reported by the kbuild test robot <lkp@intel.com>, and one of which was also reported by Thomas Bogendoerfer <tsbogend@alpha.franken.de>. There were a few more issues, including build errors; those have also been fixed.

[1] https://lkml.kernel.org/r/alpine.DEB.2.20.1710191609480.1971@nanos

Signed-off-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
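The conversion pattern looks roughly as follows; the IRQ number, handler and name below are illustrative placeholders rather than the exact hunks of this patch.

	/* Before: a static struct irqaction registered with setup_irq(). */
	static struct irqaction irq_resched = {
		.handler = ipi_resched_interrupt,
		.flags   = IRQF_PERCPU,
		.name    = "IPI resched",
	};
	setup_irq(resched_irq, &irq_resched);

	/* After: request_irq() allocates the irqaction itself and reports errors. */
	if (request_irq(resched_irq, ipi_resched_interrupt, IRQF_PERCPU,
			"IPI resched", NULL))
		pr_err("Failed to request IPI resched IRQ %d\n", resched_irq);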
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156 | Thomas Gleixner | 1 | -13/+1
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 1334 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Richard Fontana <rfontana@redhat.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070033.113240726@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-19 | MIPS: SGI-IP27: do boot CPU init later | Thomas Bogendoerfer | 1 | -0/+2
To make use of per_cpu variables in interrupt code per_cpu_init() must be done after setup_per_cpu_areas(). This is achieved by calling it in smp_prepare_boot_cpu() via a new smp_ops method. Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org
2019-02-04 | MIPS: MemoryMapID (MMID) Support | Paul Burton | 1 | -5/+52
Introduce support for using MemoryMapIDs (MMIDs) as an alternative to Address Space IDs (ASIDs). The major difference between the two is that MMIDs are global - ie. an MMID uniquely identifies an address space across all coherent CPUs. In contrast ASIDs are non-global per-CPU IDs, wherein each address space is allocated a separate ASID for each CPU upon which it is used. This global namespace allows a new GINVT instruction be used to globally invalidate TLB entries associated with a particular MMID across all coherent CPUs in the system, removing the need for IPIs to invalidate entries with separate ASIDs on each CPU. The allocation scheme used here is largely borrowed from arm64 (see arch/arm64/mm/context.c). In essence we maintain a bitmap to track available MMIDs, and MMIDs in active use at the time of a rollover to a new MMID version are preserved in the new version. The allocation scheme requires efficient 64 bit atomics in order to perform reasonably, so this support depends upon CONFIG_GENERIC_ATOMIC64=n (ie. currently it will only be included in MIPS64 kernels). The first, and currently only, available CPU with support for MMIDs is the MIPS I6500. This CPU supports 16 bit MMIDs, and so for now we cap our MMIDs to 16 bits wide in order to prevent the bitmap growing to absurd sizes if any future CPU does implement 32 bit MMIDs as the architecture manuals suggest is recommended. When MMIDs are in use we also make use of GINVT instruction which is available due to the global nature of MMIDs. By executing a sequence of GINVT & SYNC 0x14 instructions we can avoid the overhead of an IPI to each remote CPU in many cases. One complication is that GINVT will invalidate wired entries (in all cases apart from type 0, which targets the entire TLB). In order to avoid GINVT invalidating any wired TLB entries we set up, we make sure to create those entries using a reserved MMID (0) that we never associate with any address space. Also of note is that KVM will require further work in order to support MMIDs & GINVT, since KVM is involved in allocating IDs for guests & in configuring the MMU. That work is not part of this patch, so for now when MMIDs are in use KVM is disabled. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Add set_cpu_context() for ASID assignments | Paul Burton | 1 | -3/+3
When we gain MMID support we'll be storing MMIDs as atomic64_t values and accessing them via atomic64_* functions. This necessitates that we don't use cpu_context() as the left hand side of an assignment, ie. as a modifiable lvalue. In preparation for this introduce a new set_cpu_context() function & replace all assignments with cpu_context() on their left hand side with an equivalent call to set_cpu_context(). To enforce that cpu_context() should not be used for assignments, we rewrite it as a static inline function. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
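A minimal sketch of the accessor idea follows; the field and feature names (context.mmid, context.asid[], cpu_has_mmid) are assumptions for illustration and anticipate the MMID support added separately.

	/*
	 * Sketch only: hide whether the context is a plain per-CPU ASID or an
	 * atomic64_t MMID behind one setter, so callers never use
	 * cpu_context() as an lvalue. Field names are assumptions.
	 */
	static inline void set_cpu_context(unsigned int cpu,
					   struct mm_struct *mm, u64 ctx)
	{
		if (cpu_has_mmid)
			atomic64_set(&mm->context.mmid, ctx);
		else
			mm->context.asid[cpu] = ctx;
	}

	/* Old style:  cpu_context(cpu, mm) = asid_cache(cpu);     */
	/* New style:  set_cpu_context(cpu, mm, asid_cache(cpu));  */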
2019-02-04 | MIPS: mm: Remove local_flush_tlb_mm() | Paul Burton | 1 | -2/+2
All 3 variants of local_flush_tlb_mm() are now effectively simple calls to drop_mmu_context(). Remove them and use drop_mmu_context() directly. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2017-11-15 | Merge tag 'mips_4.15' of ↵ | Linus Torvalds | 1 | -1/+1
git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/mips Pull MIPS updates from James Hogan: "These are the main MIPS changes for 4.15. Fixes: - ralink: Fix MT7620 PCI build issues (4.5) - Disable cmpxchg64() and HAVE_VIRT_CPU_ACCOUNTING_GEN for 32-bit SMP (4.1) - Fix MIPS64 FP save/restore on 32-bit kernels (4.0) - ptrace: Pick up ptrace/seccomp changed syscall numbers (3.19) - ralink: Fix MT7628 pinmux (3.19) - BCM47XX: Fix LED inversion on WRT54GSv1 (3.17) - Fix n32 core dumping as o32 since regset support (3.13) - ralink: Drop obsolete USB_ARCH_HAS_HCD select Build system: - Default to "generic" (multiplatform) system type instead of IP22 - Use generic little endian MIPS32 r2 configuration as default defconfig instead of ip22_defconfig FPU emulation: - Fix exception generation for certain R6 FPU instructions SMP: - Allow __cpu_number_map to be larger than NR_CPUS for sparse CPU id spaces Miscellaneous: - Add iomem resource for kernel bss section for kexec/kdump - Atomics: Nudge writes on bit unlock - DT files: Standardise "ok" -> "okay" Minor cleanups: - Define virt_to_pfn() - Make thread_saved_pc static - Simplify 32-bit sign extension in __read_64bit_c0_split() - DMA: Use vma_pages() helper - FPU emulation: Replace unsigned with unsigned int - MM: Removed unused lastpfn - Alchemy: Make clk_ops const - Lasat: Use setup_timer() helper - ralink: Use BIT() in MT7620 PCI driver Platform support: BMIPS: - Enable HARDIRQS_SW_RESEND Broadcom BCM63XX: - Add clkdev lookup support - Update clk driver, UART driver, DTs to handle named refclk from DTs - Split apart various clocks to more closely match hardware - Add ethernet clocks Cavium Octeon: - Remove usage of cvmx_wait() in favour of __delay() ImgTec Pistachio: - DT: Drop deprecated dwmmc num-slots property Ingenic JZ4780: - Add NFS root to Ci20 defconfig - Add watchdog to Ci20 DT & defconfig, and allow building of watchdog driver with this SoC Generic (multiplatform): - Migrate xilfpga (MIPSfpga) platform to the generic platform Lantiq xway: - Fix ASC0/ASC1 clocks" * tag 'mips_4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/mips: (46 commits) MIPS: Add iomem resource for kernel bss section. MIPS: cmpxchg64() and HAVE_VIRT_CPU_ACCOUNTING_GEN don't work for 32-bit SMP MIPS: BMIPS: Enable HARDIRQS_SW_RESEND MIPS: pci: Make use of the BIT() macro inside the mt7620 driver MIPS: pci: Remove KERN_WARN instance inside the mt7620 driver MIPS: pci: Remove duplicate define in mt7620 driver MIPS: ralink: Fix typo in mt7628 pinmux function MIPS: ralink: Fix MT7628 pinmux MIPS: Fix odd fp register warnings with MIPS64r2 watchdog: jz4780: Allow selection of jz4740-wdt driver MIPS/ptrace: Update syscall nr on register changes MIPS/ptrace: Pick up ptrace/seccomp changed syscalls MIPS: Fix an n32 core file generation regset support regression MIPS: Fix MIPS64 FP save/restore on 32-bit kernels MIPS: page.h: Define virt_to_pfn() MIPS: Xilfpga: Switch to using generic defconfigs MIPS: generic: Add support for MIPSfpga MIPS: Set defconfig target to a generic system for 32r2el MIPS: Kconfig: Set default MIPS system type as generic MIPS: DTS: Remove num-slots from Pistachio SoC ...
2017-11-07 | MIPS: Allow __cpu_number_map to be larger than NR_CPUS | David Daney | 1 | -1/+1
In systems where the CPU id space is sparse, this allows a smaller NR_CPUS to be chosen, thus keeping internal data structures smaller. Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: Carlos Munoz <cmunoz@caviumnetworks.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17388/ [jhogan@kernel.org: Add depends on SMP to fix "warning: symbol value '' invalid for MIPS_NR_CPU_NR_MAP"] Signed-off-by: James Hogan <jhogan@kernel.org>
2017-11-01 | MIPS: SMP: Fix deadlock & online race | Matt Redfearn | 1 | -6/+16
Commit 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask") effectively reverted commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of deadlock when bringing CPUs online") and thus has reinstated the possibility of deadlock. The commit was based on testing of kernel v4.4, where the CPU hotplug core code issued a BUG() if the starting CPU is not marked online when the boot CPU returns from __cpu_up. The commit fixes this race (in v4.4), but re-introduces the deadlock situation. As noted in the commit message, upstream differs in this area. Commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up") adds a completion event in the CPU hotplug core code, making this race impossible. However, people were unhappy with relying on the core code to do the right thing. To address the issues both commits were trying to fix, add a second completion event in the MIPS smp hotplug path. It removes the possibility of a race, since the MIPS smp hotplug code now synchronises both the boot and secondary CPUs before they return to the hotplug core code. It also addresses the deadlock by ensuring that the secondary CPU is not marked online before it's counters are synchronised. This fix should also be backported to fix the race condition introduced by the backport of commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of deadlock when bringing CPUs online"), through really that race only existed before commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up"). Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Fixes: 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask") CC: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com> Cc: <stable@vger.kernel.org> # v4.1+: 8f46cca1e6c0: "MIPS: SMP: Fix possibility of deadlock when bringing CPUs online" Cc: <stable@vger.kernel.org> # v4.1+: a00eeede507c: "MIPS: SMP: Use a completion event to signal CPU up" Cc: <stable@vger.kernel.org> # v4.1+: 6f542ebeaee0: "MIPS: Fix race on setting and getting cpu_online_mask" Cc: <stable@vger.kernel.org> # v4.1+ Patchwork: https://patchwork.linux-mips.org/patch/17376/ Signed-off-by: James Hogan <jhogan@kernel.org>
2017-10-31 | MIPS: generic: Fix compilation error from include asm/mips-cpc.h | Matt Redfearn | 1 | -1/+1
Commit e83f7e02af50c ("MIPS: CPS: Have asm/mips-cps.h include CM & CPC headers") adds a #error to arch/mips/include/asm/mips-cpc.h if it is included directly. While this commit replaced almost all direct includes of mips-cm.h and mips-cpc.h, 2 remain. With some defconfigs, mips-cps.h is indirectly included before mips-cpc.h, but in others this results in compilation errors:

In file included from arch/mips/generic/init.c:23:0:
./arch/mips/include/asm/mips-cpc.h:12:3: error: #error Please include asm/mips-cps.h rather than asm/mips-cpc.h
 # error Please include asm/mips-cps.h rather than asm/mips-cpc.h

In file included from arch/mips/kernel/smp.c:23:0:
./arch/mips/include/asm/mips-cpc.h:12:3: error: #error Please include asm/mips-cps.h rather than asm/mips-cpc.h
 # error Please include asm/mips-cps.h rather than asm/mips-cpc.h

In both cases, fix this by including mips-cps.h instead.

Fixes: e83f7e02af50c ("MIPS: CPS: Have asm/mips-cps.h include CM & CPC headers")
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/17492/
Signed-off-by: James Hogan <jhogan@kernel.org>
2017-09-15 | Merge branch '4.14-features' of ↵ | Linus Torvalds | 1 | -11/+13
git://git.linux-mips.org/pub/scm/ralf/upstream-linus Pull MIPS updates from Ralf Baechle: "This is the main pull request for 4.14 for MIPS; below a summary of the non-merge commits: CM: - Rename mips_cm_base to mips_gcr_base - Specify register size when generating accessors - Use BIT/GENMASK for register fields, order & drop shifts - Add cluster & block args to mips_cm_lock_other() CPC: - Use common CPS accessor generation macros - Use BIT/GENMASK for register fields, order & drop shifts - Introduce register modify (set/clear/change) accessors - Use change_*, set_* & clear_* where appropriate - Add CM/CPC 3.5 register definitions - Use GlobalNumber macros rather than magic numbers - Have asm/mips-cps.h include CM & CPC headers - Cluster support for topology functions - Detect CPUs in secondary clusters CPS: - Read GIC_VL_IDENT directly, not via irqchip driver DMA: - Consolidate coherent and non-coherent dma_alloc code - Don't use dma_cache_sync to implement fd_cacheflush FPU emulation / FP assist code: - Another series of 14 commits fixing corner cases such as NaN propgagation and other special input values. - Zero bits 32-63 of the result for a CLASS.D instruction. - Enhanced statics via debugfs - Do not use bools for arithmetic. GCC 7.1 moans about this. - Correct user fault_addr type Generic MIPS: - Enhancement of stack backtraces - Cleanup from non-existing options - Handle non word sized instructions when examining frame - Fix detection and decoding of ADDIUSP instruction - Fix decoding of SWSP16 instruction - Refactor handling of stack pointer in get_frame_info - Remove unreachable code from force_fcr31_sig() - Convert to using %pOF instead of full_name - Remove the R6000 support. - Move FP code from *_switch.S to *_fpu.S - Remove unused ST_OFF from r2300_switch.S - Allow platform to specify multiple its.S files - Add #includes to various files to ensure code builds reliable and without warning.. - Remove __invalidate_kernel_vmap_range - Remove plat_timer_setup - Declare various variables & functions static - Abstract CPU core & VP(E) ID access through accessor functions - Store core & VP IDs in GlobalNumber-style variable - Unify checks for sibling CPUs - Add CPU cluster number accessors - Prevent direct use of generic_defconfig - Make CONFIG_MIPS_MT_SMP default y - Add __ioread64_copy - Remove unnecessary inclusions of linux/irqchip/mips-gic.h GIC: - Introduce asm/mips-gic.h with accessor functions - Use new GIC accessor functions in mips-gic-timer - Remove counter access functions from irq-mips-gic.c - Remove gic_read_local_vp_id() from irq-mips-gic.c - Simplify shared interrupt pending/mask reads in irq-mips-gic.c - Simplify gic_local_irq_domain_map() in irq-mips-gic.c - Drop gic_(re)set_mask() functions in irq-mips-gic.c - Remove gic_set_polarity(), gic_set_trigger(), gic_set_dual_edge(), gic_map_to_pin() and gic_map_to_vpe() from irq-mips-gic.c. 
- Convert remaining shared reg access, local int mask access and remaining local reg access to new accessors - Move GIC_LOCAL_INT_* to asm/mips-gic.h - Remove GIC_CPU_INT* macros from irq-mips-gic.c - Move various definitions to the driver - Remove gic_get_usm_range() - Remove __gic_irq_dispatch() forward declaration - Remove gic_init() - Use mips_gic_present() in place of gic_present and remove gic_present - Move gic_get_c0_*_int() to asm/mips-gic.h - Remove linux/irqchip/mips-gic.h - Inline __gic_init() - Inline gic_basic_init() - Make pcpu_masks a per-cpu variable - Use pcpu_masks to avoid reading GIC_SH_MASK* - Clean up mti, reserved-cpu-vectors handling - Use cpumask_first_and() in gic_set_affinity() - Let the core set struct irq_common_data affinity microMIPS: - Fix microMIPS stack unwinding on big endian systems MIPS-GIC: - SYNC after enabling GIC region NUMA: - Remove the unused parent_node() macro R6: - Constify r2_decoder_tables - Add accessor & bit definitions for GlobalNumber SMP: - Constify smp ops - Allow boot_secondary SMP op to return errors VDSO: - Drop gic_get_usm_range() usage - Avoid use of linux/irqchip/mips-gic.h Platform changes: Alchemy: - Add devboard machine type to cpuinfo - update cpu feature overrides - Threaded carddetect irqs for devboards AR7: - allow NULL clock for clk_get_rate BCM63xx: - Fix ENETDMA_6345_MAXBURST_REG offset - Allow NULL clock for clk_get_rate CI20: - Enable GPIO and RTC drivers in defconfig - Add ethernet and fixed-regulator nodes to DTS Generic platform: - Move Boston and NI 169445 FIT image source to their own files - Include asm/bootinfo.h for plat_fdt_relocated() - Include asm/time.h for get_c0_*_int() - Include asm/bootinfo.h for plat_fdt_relocated() - Include asm/time.h for get_c0_*_int() - Allow filtering enabled boards by requirements - Don't explicitly disable CONFIG_USB_SUPPORT - Bump default NR_CPUS to 16 JZ4700: - Probe the jz4740-rtc driver from devicetree Lantiq: - Drop check of boot select from the spi-falcon driver. - Drop check of boot select from the lantiq-flash MTD driver. - Access boot cause register in the watchdog driver through regmap - Add device tree binding documentation for the watchdog driver - Add docs for the RCU DT bindings. - Convert the fpi bus driver to a platform_driver - Remove ltq_reset_cause() and ltq_boot_select( - Switch to a proper reset driver - Switch to a new drivers/soc GPHY driver - Add an USB PHY driver for the Lantiq SoCs using the RCU module - Use of_platform_default_populate instead of __dt_register_buses - Enable MFD_SYSCON to be able to use it for the RCU MFD - Replace ltq_boot_select() with dummy implementation. Loongson 2F: - Allow NULL clock for clk_get_rate Malta: - Use new GIC accessor functions NI 169445: - Add support for NI 169445 board. - Only include in 32r2el kernels Octeon: - Add support for watchdog of 78XX SOCs. - Add support for watchdog of CN68XX SOCs. - Expose support for mips32r1, mips32r2 and mips64r1 - Enable more drivers in config file - Add support for accessing the boot vector. - Remove old boot vector code from watchdog driver - Define watchdog registers for 70xx, 73xx, 78xx, F75xx. - Make CSR functions node aware. - Allow access to CIU3 IRQ domains. - Misc cleanups in the watchdog driver Omega2+: - New board, add support and defconfig Pistachio: - Enable Root FS on NFS in defconfig Ralink: - Add Mediatek MT7628A SoC - Allow NULL clock for clk_get_rate - Explicitly request exclusive reset control in the pci-mt7620 PCI driver. 
SEAD3: - Only include in 32 bit kernels by default VoCore: - Add VoCore as a vendor t0 dt-bindings - Add defconfig file" * '4.14-features' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (167 commits) MIPS: Refactor handling of stack pointer in get_frame_info MIPS: Stacktrace: Fix microMIPS stack unwinding on big endian systems MIPS: microMIPS: Fix decoding of swsp16 instruction MIPS: microMIPS: Fix decoding of addiusp instruction MIPS: microMIPS: Fix detection of addiusp instruction MIPS: Handle non word sized instructions when examining frame MIPS: ralink: allow NULL clock for clk_get_rate MIPS: Loongson 2F: allow NULL clock for clk_get_rate MIPS: BCM63XX: allow NULL clock for clk_get_rate MIPS: AR7: allow NULL clock for clk_get_rate MIPS: BCM63XX: fix ENETDMA_6345_MAXBURST_REG offset mips: Save all registers when saving the frame MIPS: Add DWARF unwinding to assembly MIPS: Make SAVE_SOME more standard MIPS: Fix issues in backtraces MIPS: jz4780: DTS: Probe the jz4740-rtc driver from devicetree MIPS: Ci20: Enable RTC driver watchdog: octeon-wdt: Add support for 78XX SOCs. watchdog: octeon-wdt: Add support for cn68XX SOCs. watchdog: octeon-wdt: File cleaning. ...
2017-08-30 | MIPS: SMP: Allow boot_secondary SMP op to return errors | Paul Burton | 1 | -1/+5
Allow the boot_secondary SMP op to return an error to __cpu_up(), which will in turn return it to its caller. This will allow SMP implementations to return errors quickly in cases they know have failed, rather than relying upon __cpu_up() eventually timing out waiting for the cpu_running completion.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17014/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-08-30 | MIPS: CM: Add cluster & block args to mips_cm_lock_other() | Paul Burton | 1 | -1/+1
With CM >= 3.5 we have the notion of multiple clusters & can access their CM, CPC & GIC registers via the apporpriate redirect/other register blocks. In order to allow for this introduce cluster & block arguments to mips_cm_lock_other() which configures the redirect/other region to point at the appropriate cluster, core, VP & register block. Since we now have 4 arguments to mips_cm_lock_other() & a common use is likely to be to target the cluster, core & VP corresponding to a particular Linux CPU number we also add a new mips_cm_lock_other_cpu() helper function which handles that without the caller needing to manually pull out the cluster, core & VP numbers. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17013/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-08-30 | MIPS: Unify checks for sibling CPUs | Paul Burton | 1 | -7/+5
Up until now we have open-coded checks for whether CPUs are siblings, with slight variations on whether we consider the package ID or not. This will only get more complex when we introduce cluster support, so in preparation for that this patch introduces a cpus_are_siblings() function which can be used to check whether or not 2 CPUs are siblings in a consistent manner. By checking globalnumber with the VP ID masked out this also has the neat side effect of being ready for multi-cluster systems already. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17011/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
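A sketch of the helper as described above; the exact macro and field names may differ from the upstream header.

	/*
	 * Two CPUs are siblings when their GlobalNumber values match once the
	 * VP ID field is masked off, which makes the check multi-cluster
	 * aware by construction.
	 */
	static inline bool cpus_are_siblings(int cpua, int cpub)
	{
		unsigned int gnuma, gnumb;

		gnuma = cpu_data[cpua].globalnumber & ~MIPS_GLOBALNUMBER_VP;
		gnumb = cpu_data[cpub].globalnumber & ~MIPS_GLOBALNUMBER_VP;

		return gnuma == gnumb;
	}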
2017-08-30 | MIPS: Abstract CPU core & VP(E) ID access through accessor functions | Paul Burton | 1 | -4/+4
We currently have fields in struct cpuinfo_mips for the core & VP(E) ID of a particular CPU, and various pieces of code directly access those fields. This patch abstracts such access by introducing accessor functions cpu_core(), cpu_set_core(), cpu_vpe_id() & cpu_set_vpe_id() and having code that needs to access these values call those functions rather than directly accessing the struct cpuinfo_mips fields. This prepares us for changes to the way in which those values are stored in later patches. The cpu_vpe_id() function is introduced even though we already had a cpu_vpe_id() macro for a couple of reasons: 1) It's more consistent with the core, and future cluster, accessors. 2) It ensures a sensible return type without explicit casts. 3) It's generally preferable to use functions rather than macros. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17009/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
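Simplified illustration of the accessor pattern; the actual storage behind these accessors changes in later patches of the series.

	/* Callers stop touching struct cpuinfo_mips fields directly, so a
	 * later patch can change how the core/VP IDs are stored without
	 * touching every user. */
	static inline unsigned int cpu_core(struct cpuinfo_mips *cpuinfo)
	{
		return cpuinfo->core;
	}

	static inline void cpu_set_core(struct cpuinfo_mips *cpuinfo,
					unsigned int core)
	{
		cpuinfo->core = core;
	}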
2017-08-29 | MIPS: SMP: Constify smp ops | Matt Redfearn | 1 | -2/+2
smp_ops providers do not modify their ops structures, so they should be made const for robustness. Since currently the MIPS kernel is not mapped with memory protection, this does not in itself provide any security benefit, but it still makes sense to make this change. There are also slight code size efficiencies from the structure being made read-only, saving 128 bytes of kernel text on a pistachio_defconfig.

Before:
   text    data     bss     dec     hex filename
7187239 1772752  470224 9430215  8fe4c7 vmlinux

After:
   text    data     bss     dec     hex filename
7187111 1772752  470224 9430087  8fe447 vmlinux

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Doug Ledford <dledford@redhat.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Joe Perches <joe@perches.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven J. Hill <steven.hill@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16784/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
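Illustrative example of the change (placeholder names, not taken from a particular platform file):

	/* An smp_ops provider marked const so it lands in read-only data. */
	static const struct plat_smp_ops example_smp_ops = {
		.send_ipi_single	= example_send_ipi_single,
		.send_ipi_mask		= example_send_ipi_mask,
		.init_secondary		= example_init_secondary,
		.smp_finish		= example_smp_finish,
		.boot_secondary		= example_boot_secondary,
		.smp_setup		= example_smp_setup,
		.prepare_cpus		= example_prepare_cpus,
	};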
2017-08-29 | smp: Avoid using two cache lines for struct call_single_data | Ying Huang | 1 | -3/+3
struct call_single_data is used in IPIs to transfer information between CPUs. Its size is bigger than sizeof(unsigned long) and less than cache line size. Currently it is not allocated with any explicit alignment requirements. This makes it possible for allocated call_single_data to cross two cache lines, which results in double the number of the cache lines that need to be transferred among CPUs. This can be fixed by requiring call_single_data to be aligned with the size of call_single_data. Currently the size of call_single_data is the power of 2. If we add new fields to call_single_data, we may need to add padding to make sure the size of new definition is the power of 2 as well. Fortunately, this is enforced by GCC, which will report bad sizes. To set alignment requirements of call_single_data to the size of call_single_data, a struct definition and a typedef is used. To test the effect of the patch, I used the vm-scalability multiple thread swap test case (swap-w-seq-mt). The test will create multiple threads and each thread will eat memory until all RAM and part of swap is used, so that huge number of IPIs are triggered when unmapping memory. In the test, the throughput of memory writing improves ~5% compared with misaligned call_single_data, because of faster IPIs. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Huang, Ying <ying.huang@intel.com> [ Add call_single_data_t and align with size of call_single_data. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Aaron Lu <aaron.lu@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Juergen Gross <jgross@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/87bmnqd6lz.fsf@yhuang-mobile.sh.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
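The pattern described above looks approximately like this (field layout approximated from the description, not quoted from the patch):

	#include <linux/llist.h>
	#include <linux/smp.h>

	/*
	 * Align the structure to its own size, which is a power of two, so a
	 * single csd can never straddle two cache lines.
	 */
	struct example_call_single_data {
		struct llist_node llist;
		smp_call_func_t func;
		void *info;
		unsigned int flags;
	};

	typedef struct example_call_single_data example_call_single_data_t
		__aligned(sizeof(struct example_call_single_data));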
2017-08-07 | MIPS: Fix race on setting and getting cpu_online_mask | Matija Glavinic Pecotic | 1 | -3/+3
While testing cpu hotplug (cpu down and up in loops) on kernel 4.4, it was observed that occasionally the check for cpu online will fail in kernel/cpu.c, _cpu_up:

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/kernel/cpu.c?h=v4.4.79#n485

518 /* Arch-specific enabling code. */
519 ret = __cpu_up(cpu, idle);
520
521 if (ret != 0)
522 	goto out_notify;
523 BUG_ON(!cpu_online(cpu));

The reason is a race between start_secondary and _cpu_up. cpu_callin_map is set before cpu_online_mask. In __cpu_up, cpu_callin_map is waited for, but the cpu online mask is not, resulting in a race in which the secondary processor started and set cpu_callin_map, but had not yet set the online mask, resulting in the above BUG being hit.

Upstream differs in this area. The cpu_online check is in bringup_wait_for_ap, which runs after the cpu has reached AP_ONLINE_IDLE, where the secondary has passed its start function. Nonetheless, the fix makes start_secondary safe and not dependent on other locks throughout the code. It also protects against cpu_online checks put in between sometime in the future.

Fix this by moving the completion after all flags are set.

Signed-off-by: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16925/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 | MIPS: Skip IPI setup if we only have 1 CPU | Paul Burton | 1 | -0/+3
If we're running on a system with only 1 possible CPU then it makes no sense to reserve or initialise IPIs since we'll never use them. Avoid doing so. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16192/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-04-12 | MIPS: Strengthen IPI IRQ domain sanity check | Paul Burton | 1 | -8/+12
Commit fbde2d7d8290 ("MIPS: Add generic SMP IPI support") introduced a sanity check that an IPI IRQ domain can be found during boot, in order to ensure that IPIs are able to be set up in systems using such domains. However it was added at a point where systems may have used an IPI IRQ domain in some situations but not others, and we could not know which were the case until runtime, so commit 578bffc82ec5 ("MIPS: Don't BUG_ON when no IPI domain is found") made that check simply skip IPI init if no domain were found in order to fix the boot for systems such as QEMU Malta. We now use IPI IRQ domains for the MIPS CPU interrupt controller, which means systems which make use of IPI IRQ domains will always do so when running on multiple CPUs. As a result we now strengthen the sanity check to ensure that an IPI IRQ domain is found when multiple CPUs are present in the system. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/15838/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-03-02 | sched/headers: Prepare to remove the <linux/mm_types.h> dependency from <linux/sched.h> | Ingo Molnar | 1 | -1/+1
Update code that relied on sched.h including various MM types for them. This will allow us to remove the <linux/mm_types.h> include from <linux/sched.h>.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-01-03 | MIPS: kexec: remove SMP_DUMP | Marcin Nowakowski | 1 | -17/+0
SMP_DUMP was added as a new IPI signal when kexec support was added for Cavium Octeon CPUs (commit 7aa1c8f47e7e ("MIPS: kdump: Add support")). However, the new signal doesn't appear to ever have had a proper handler added (the octeon_message_functions[] array has an empty handler for it), and generic IPI handlers now trigger a BUG() on an unhandled signal. As the method is unused, remove it completely and replace its only invocation with a smp_call_function().

[ralf@linux-mips.org: Renumber SMP_ASK_C0COUNT to avoid numbering gaps.]

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14630/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
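A hedged sketch of the replacement; the handler below is a placeholder, not the actual kdump code.

	/* Placeholder handler: park this CPU so the dump can proceed. */
	static void kdump_park_cpu(void *unused)
	{
		local_irq_disable();
		while (1)
			cpu_relax();
	}

	static void kdump_stop_other_cpus(void)
	{
		/* wait == 0: the crashing CPU must not block on the others */
		smp_call_function(kdump_park_cpu, NULL, 0);
	}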
2017-01-03 | MIPS: SMP: Remove cpu_callin_map | Matt Redfearn | 1 | -2/+0
The previous commit made cpu_callin_map redundant, since it is no longer used to signal secondary CPUs starting, or going offline. Remove it now. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Qais Yousef <qsyousef@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Anna-Maria Gleixner <anna-maria@linutronix.de> Cc: Adam Buchbinder <adam.buchbinder@gmail.com> Cc: Yang Shi <yang.shi@windriver.com> Cc: David Daney <david.daney@cavium.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14503/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-03 | MIPS: SMP: Use a completion event to signal CPU up | Matt Redfearn | 1 | -6/+9
If a secondary CPU failed to start, for any reason, the CPU requesting the secondary to start would get stuck in the loop waiting for the secondary to be present in the cpu_callin_map. Rather than that, use a completion event to signal that the secondary CPU has started and is waiting to synchronise counters. Since the CPU presence will no longer be marked in cpu_callin_map, remove the redundant test from arch_cpu_idle_dead(). Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Maciej W. Rozycki <macro@imgtec.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Qais Yousef <qsyousef@gmail.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14502/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
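A minimal sketch of the completion-based handshake, with simplified helper names; the timeout value is illustrative.

	#include <linux/completion.h>
	#include <linux/jiffies.h>
	#include <linux/printk.h>

	static DECLARE_COMPLETION(cpu_running);

	/* Called by the secondary CPU once it is alive, before counter sync. */
	static void announce_cpu_up(void)
	{
		complete(&cpu_running);
	}

	/* Called by the boot CPU in __cpu_up() instead of spinning on
	 * cpu_callin_map; returns an error if the secondary never shows up. */
	static int wait_for_cpu_up(unsigned int cpu)
	{
		if (!wait_for_completion_timeout(&cpu_running,
						 msecs_to_jiffies(1000))) {
			pr_crit("CPU%u: failed to start\n", cpu);
			return -EIO;
		}
		return 0;
	}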
2016-10-05 | MIPS: kernel: Audit and remove any unnecessary uses of module.h | Paul Gortmaker | 1 | -1/+1
Historically a lot of these existed because we did not have a distinction between what was modular code and what was providing support to modules via EXPORT_SYMBOL and friends. That changed when we forked out support for the latter into the export.h file. This means we should be able to reduce the usage of module.h in code that is obj-y Makefile or bool Kconfig. The advantage in doing so is that module.h itself sources about 15 other headers; adding significantly to what we feed cpp, and it can obscure what headers we are effectively using. Since module.h was the source for init.h (for __init) and for export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance for the presence of either and replace as needed. In the case of the n32/o32 files, we have to get rid of a couple no-op MODULE_ tags to facilitate the module.h removal. They piggy back off the fs/ elf binary support, which is also a bool Kconfig. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14032/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-10-05 | MIPS: smp.c: Introduce mechanism for freeing and allocating IPIs | Matt Redfearn | 1 | -8/+53
For the MIPS remote processor implementation, we need additional IPIs to talk to the remote processor. Since MIPS GIC reserves exactly the right number of IPI IRQs required by Linux for the number of VPs in the system, this is not possible without releasing some resources.

This commit introduces mips_smp_ipi_allocate(), which allocates IPIs to a given cpumask. It is called as normal with the cpu_possible_mask at bootup to initialise IPIs to all CPUs. mips_smp_ipi_free() may then be used to free IPIs to a subset of those CPUs so that their hardware resources can be reused.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Ohad Ben-Cohen <ohad@wizery.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Lisa Parratt <Lisa.Parratt@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-remoteproc@vger.kernel.org
Cc: lisa.parratt@imgtec.com
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14285/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
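A hypothetical usage sketch for the two helpers, e.g. from a MIPS remoteproc driver; 'remote_cpu' and the function name are assumptions, not taken from the patch.

	/* Hand one VP's IPIs to a remote firmware image and reclaim them later. */
	static void example_reassign_ipis(unsigned int remote_cpu)
	{
		struct cpumask mask;

		cpumask_clear(&mask);
		cpumask_set_cpu(remote_cpu, &mask);

		mips_smp_ipi_free(&mask);	/* release Linux's IPI IRQs for that VP */
		/* ... hand the VP and its IPI hardware over to the remote image ... */
		mips_smp_ipi_allocate(&mask);	/* reclaim them once the VP is back */
	}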
2016-10-04 | MIPS: SMP: Wrap call to mips_cpc_lock_other in mips_cm_lock_other | Matt Redfearn | 1 | -0/+2
All calls to mips_cpc_lock_other should be wrapped in mips_cm_lock_other. This only matters if the system has CM3 and is using cpu idle, since otherwise a) the CPC lock is sufficent for CM < 3 and b) any systems with CM > 3 have not been able to use cpu idle until now. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Reviewed-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Cc: James Hogan <james.hogan@imgtec.com> Cc: Qais Yousef <qsyousef@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/14227/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
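A sketch of the required nesting described above (simplified, with an assumed wrapper function name):

	/*
	 * The CM "other" region must be pointed at the target core before the
	 * CPC lock for that core is taken, which matters on CM3+ where the
	 * CPC is accessed through the CM redirect block.
	 */
	static void example_poke_other_core(unsigned int core, unsigned int vp)
	{
		mips_cm_lock_other(core, vp);
		mips_cpc_lock_other(core);

		/* ... access the other core's CPC registers here ... */

		mips_cpc_unlock_other();
		mips_cm_unlock_other();
	}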
2016-09-25 | MIPS: SMP: Fix possibility of deadlock when bringing CPUs online | Matt Redfearn | 1 | -4/+3
This patch fixes the possibility of a deadlock when bringing up secondary CPUs. The deadlock occurs because set_cpu_online() is called before synchronise_count_slave(). This can cause a deadlock if the boot CPU, having scheduled another thread, attempts to send an IPI to the secondary CPU, which it sees has been marked online. The secondary is blocked in synchronise_count_slave() waiting for the boot CPU to enter synchronise_count_master(), but the boot CPU is blocked in smp_call_function_many() waiting for the secondary to respond to its IPI request.

Fix this by marking the CPU online in cpu_callin_map and synchronising counters before declaring the CPU online and calculating the maps for IPIs.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reported-by: Justin Chen <justinpopo6@gmail.com>
Tested-by: Justin Chen <justinpopo6@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14302/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29 | MIPS: c-r4k: Exclude sibling CPUs in SMP calls | James Hogan | 1 | -2/+4
When performing SMP calls to foreign cores, exclude sibling CPUs from the provided map, as we already handle the local core on the current CPU. This prevents an SMP call from for example core 0, VPE 1 to VPE 0 on the same core. In the process the cpu_foreign_map cpumask is turned into an array of cpumasks, so that each CPU has its own version of it which excludes sibling CPUs. r4k_op_needs_ipi() is also updated to reflect that cache management SMP calls are not needed when all CPUs are siblings (i.e. there are no foreign CPUs according to the new cpu_foreign_map[] semantics which exclude siblings). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: Felix Fietkau <nbd@nbd.name> Cc: Jayachandran C. <jchandra@broadcom.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13801/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29 | MIPS: SMP: Drop stop_this_cpu() cpu_foreign_map hack | James Hogan | 1 | -8/+1
Commit cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores") added the cpu_foreign_map cpumask containing a single VPE from each online core, and recalculated it when secondary CPUs are brought up. stop_this_cpu() was also updated to recalculate cpu_foreign_map, but with an additional hack before marking the CPU as offline to copy cpu_online_mask into cpu_foreign_map and perform an SMP memory barrier. This appears to have been intended to prevent cache management IPIs being missed when the VPE representing the core in cpu_foreign_map is taken offline while other VPEs remain online. Unfortunately there is nothing in this hack to prevent r4k_on_each_cpu() from reading the old cpu_foreign_map, and smp_call_function_many() from reading that new cpu_online_mask with the core's representative VPE marked offline. It then wouldn't send an IPI to any online VPEs of that core. stop_this_cpu() is only actually called in panic and system shutdown / halt / reboot situations, in which case all CPUs are going down and we don't really need to care about cache management, so drop this hack. Note that the __cpu_disable() case for CPU hotplug is handled in the previous commit, and no synchronisation is needed there due to the use of stop_machine() which prevents hotplug from taking place while any CPU has disabled preemption (as r4k_on_each_cpu() does). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13796/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29 | MIPS: SMP: Update cpu_foreign_map on CPU disable | James Hogan | 1 | -1/+1
When a CPU is disabled via CPU hotplug, cpu_foreign_map is not updated. This could result in cache management SMP calls being sent to offline CPUs instead of online siblings in the same core. Add a call to calculate_cpu_foreign_map() in the various MIPS cpu disable callbacks after set_cpu_online(). All cases are updated for consistency and to keep cpu_foreign_map strictly up to date, not just those which may support hardware multithreading. Fixes: cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: David Daney <david.daney@cavium.com> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Hongliang Tao <taohl@lemote.com> Cc: Hua Yan <yanh@lemote.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13799/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29 | MIPS: SMP: Clear ASID without confusing has_valid_asid() | James Hogan | 1 | -2/+15
The SMP flush_tlb_*() functions may clear the memory map's ASIDs for other CPUs if the mm has only a single user (the current CPU) in order to avoid SMP calls. However this makes it appear to has_valid_asid(), which is used by various cache flush functions, as if the CPUs have never run in the mm, and therefore can't have cached any of its memory. For flush_tlb_mm() this doesn't sound unreasonable. flush_tlb_range() corresponds to flush_cache_range() which does do full indexed cache flushes, but only on the icache if the specified mapping is executable, otherwise it doesn't guarantee that there are no cache contents left for the mm. flush_tlb_page() corresponds to flush_cache_page(), which will perform address based cache ops on the specified page only, and also only touches the icache if the page is executable. It does not guarantee that there are no cache contents left for the mm. For example, this affects flush_cache_range() which uses the has_valid_asid() optimisation. It is required to flush the icache when mappings are made executable (e.g. using mprotect) so they are immediately usable. If some code is changed to non executable in order to be modified then it will not be flushed from the icache during that time, but the ASID on other CPUs may still be cleared for TLB flushing. When the code is changed back to executable, flush_cache_range() will assume the code hasn't run on those other CPUs due to the zero ASID, and won't invalidate the icache on them. This is fixed by clearing the other CPUs ASIDs to 1 instead of 0 for the above two flush_tlb_*() functions when the corresponding cache flushes are likely to be incomplete (non executable range flush, or any page flush). This ASID appears valid to has_valid_asid(), but still triggers ASID regeneration due to the upper ASID version bits being 0, which is less than the minimum ASID version of 1 and so always treated as stale. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13795/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-09 | MIPS: Don't BUG_ON when no IPI domain is found | Paul Burton | 1 | -13/+11
Commit fbde2d7d8290 ("MIPS: Add generic SMP IPI support") introduced code that BUG_ON's in the case of a kernel that supports IPI domains but does not have one at runtime. This case is possible on Malta where for IPIs we may use either the GIC (which has an IPI IRQ domain implementation) or core-local software interrupts between VPEs (which do not currently have an IPI IRQ domain implementation). We can not know which will be used until runtime when we know whether a GIC is actually present, and if we run on a system with multiple VPEs and no GIC then the BUG_ON is hit. Commit 19fb5818ed60 ("IPS: Fix broken malta qemu") worked around this for the single-core single-VPE case typically seen using QEMU, but does not catch the multi-VPE case. This patch removes the insufficient CPU presence check that was added and works around the bug differently, effectively reverting that commit. A simple way to reproduce this bug is by using QEMU, which partially implements the MT ASE but does not implement the GIC as of version 2.5. Using "-cpu 34Kf -smp 2" will present a system with 2 VPEs in one core & no GIC, hitting the BUG_ON. Given that we're post-merge-window on the way to v4.6, avoid this by just returning from mips_smp_ipi_init when no IPI IRQ domain is found. Ideally at some point all IPI implementations would be converted to the same IPI IRQ domain interface & we'd be able to restore the check. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Qais Yousef <qsyousef@gmail.com> Fixes: fbde2d7d8290 ("MIPS: Add generic SMP IPI support") Fixes: 19fb5818ed60 ("IPS: Fix broken malta qemu") Reverts: 19fb5818ed60 ("IPS: Fix broken malta qemu") Cc: Qais Yousef <qsyousef@gmail.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Alex Smith <alex.smith@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13007/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-03-29 | MIPS: Fix broken malta qemu | Qais Yousef | 1 | -0/+12
Malta defconfig compiles with GIC on. Hence when compiling for SMP it causes the new IPI code to be activated. But on QEMU Malta there's no GIC, causing a BUG_ON(!ipidomain) to be hit in mips_smp_ipi_init().

Since in that configuration one can only run single-core SMP (!), skip IPI initialisation if we detect that this is the case. It is a sensible behaviour to introduce and should allow such a configuration to keep running rather than dying hard unnecessarily.

Signed-off-by: Qais Yousef <qsyousef@gmail.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12892/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-03-15 | Merge branch 'smp-hotplug-for-linus' of ↵ | Linus Torvalds | 1 | -1/+1
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull cpu hotplug updates from Thomas Gleixner: "This is the first part of the ongoing cpu hotplug rework:

 - Initial implementation of the state machine

 - Runs all online and prepare down callbacks on the plugged cpu and not on some random processor

 - Replaces busy loop waiting with completions

 - Adds tracepoints so the states can be followed"

More detailed commentary on this work from an earlier email:

"What's wrong with the current cpu hotplug infrastructure?

 - Asymmetry

   The hotplug notifier mechanism is asymmetric versus the bringup and teardown. This is mostly caused by the notifier mechanism.

 - Largely undocumented dependencies

   While some notifiers use explicitly defined notifier priorities, we have quite some notifiers which use numerical priorities to express dependencies without any documentation why.

 - Control processor driven

   Most of the bringup/teardown of a cpu is driven by a control processor. While it is understandable that preparatory steps, like idle thread creation, memory allocation for and initialization of essential facilities, need to be done before a cpu can boot, there is no reason why everything else must run on a control processor.

   Before this patch series, bringup looks like this:

     Control CPU                     Booting CPU

     do preparatory steps
     kick cpu into life

                                     do low level init

     sync with booting cpu           sync with control cpu

     bring the rest up

 - All or nothing approach

   There is no way to do partial bringups. That's something which is really desired because we waste e.g. at boot a substantial amount of time just busy waiting that the cpu comes to life. That's stupid as we could very well do preparatory steps and the initial IPI for other cpus and then go back and do the necessary low level synchronization with the freshly booted cpu.

 - Minimal debuggability

   Due to the notifier based design, it's impossible to switch between two stages of the bringup/teardown back and forth in order to test the correctness. So in many hotplug notifiers the cancel mechanisms are either nonexistent or completely untested.

 - Notifier [un]registering is tedious

   To [un]register notifiers we need to protect against hotplug at every callsite. There is no mechanism that issues the bringup/teardown callbacks on the already online cpus, so every caller needs to do it itself. That also includes error rollback.

What's the new design?

The base of the new design is a symmetric state machine, where both the control processor and the booting/dying cpu execute a well defined set of states. Each state is symmetric in the end, except for some well defined exceptions, and the bringup/teardown can be stopped and reversed at almost all states.

So the bringup of a cpu will look like this in the future:

     Control CPU                     Booting CPU

     do preparatory steps
     kick cpu into life

                                     do low level init

     sync with booting cpu           sync with control cpu

                                     bring itself up

The synchronization step does not require the control cpu to wait. That mechanism can be done asynchronously via a worker or some other mechanism.

The teardown can be made very similar, so that the dying cpu cleans up and brings itself down. Cleanups which need to be done after the cpu is gone, can be scheduled asynchronously as well.

It is a long way to this, as we need to refactor the notion of when a cpu is available. Today we set the cpu online right after it comes out of the low level bringup, which is not really correct.

The proper mechanism is to set it to available, i.e. cpu local threads, like softirqd, hotplug thread etc. can be scheduled on that cpu, and once it has finished all booting steps, it's set to online, so general workloads can be scheduled on it. The reverse happens on teardown. First thing to do is to forbid scheduling of general workloads, then teardown all the per cpu resources and finally shut it off completely.

This patch series implements the basic infrastructure for this at the core level. This includes the following:

 - Basic state machine implementation with well defined states, so ordering and prioritization can be expressed.

 - Interfaces to [un]register state callbacks

   This invokes the bringup/teardown callback on all online cpus with the proper protection in place and [un]installs the callbacks in the state machine array.

   For callbacks which have no particular ordering requirement we have a dynamic state space, so that drivers don't have to register an explicit hotplug state.

   If a callback fails, the code automatically does a rollback to the previous state.

 - Sysfs interface to drive the state machine to a particular step.

   This is only partially functional today. Full functionality and therefore testability will be achieved once we converted all existing hotplug notifiers over to the new scheme.

 - Run all CPU_ONLINE/DOWN_PREPARE notifiers on the booting/dying processor:

     Control CPU                     Booting CPU

     do preparatory steps
     kick cpu into life

                                     do low level init

     sync with booting cpu           sync with control cpu
     wait for boot

                                     bring itself up
                                     Signal completion to control cpu

In a previous step of this work we've done a full tree mechanical conversion of all hotplug notifiers to the new scheme. The balance is a net removal of about 4000 lines of code.

This is not included in this series, as we decided to take a different approach. Instead of mechanically converting everything over, we will do a proper overhaul of the usage sites one by one so they nicely fit into the symmetric callback scheme.

I decided to do that after I looked at the ugliness of some of the converted sites and figured out that their hotplug mechanism is completely buggered anyway. So there is no point to do a mechanical conversion first as we need to go through the usage sites one by one again in order to achieve a full symmetric and testable behaviour"

* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
  cpu/hotplug: Document states better
  cpu/hotplug: Fix smpboot thread ordering
  cpu/hotplug: Remove redundant state check
  cpu/hotplug: Plug death reporting race
  rcu: Make CPU_DYING_IDLE an explicit call
  cpu/hotplug: Make wait for dead cpu completion based
  cpu/hotplug: Let upcoming cpu bring itself fully up
  arch/hotplug: Call into idle with a proper state
  cpu/hotplug: Move online calls to hotplugged cpu
  cpu/hotplug: Create hotplug threads
  cpu/hotplug: Split out the state walk into functions
  cpu/hotplug: Unpark smpboot threads from the state machine
  cpu/hotplug: Move scheduler cpu_online notifier to hotplug core
  cpu/hotplug: Implement setup/removal interface
  cpu/hotplug: Make target state writeable
  cpu/hotplug: Add sysfs state interface
  cpu/hotplug: Hand in target state to _cpu_up/down
  cpu/hotplug: Convert the hotplugged cpu work to a state machine
  cpu/hotplug: Convert to a state machine for the control processor
  cpu/hotplug: Add tracepoints
  ...
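As a rough illustration of the [un]register interface described above, here is a minimal sketch of a driver hooking into the dynamic state space. The helper names (cpuhp_setup_state(), cpuhp_remove_state(), CPUHP_AP_ONLINE_DYN) reflect how the interface settled in mainline rather than this exact series, and everything prefixed my_/mydrv is hypothetical.

  #include <linux/cpuhotplug.h>
  #include <linux/module.h>

  static enum cpuhp_state my_hp_state;

  /* Hypothetical per-CPU callbacks; both run on the plugged CPU itself
   * rather than on some random control processor. */
  static int my_cpu_online(unsigned int cpu)
  {
  	/* set up per-CPU resources for 'cpu' */
  	return 0;
  }

  static int my_cpu_offline(unsigned int cpu)
  {
  	/* mirror image of my_cpu_online(): tear the resources down */
  	return 0;
  }

  static int __init my_driver_init(void)
  {
  	int ret;

  	/*
  	 * Dynamic state: no explicit ordering requirement, so the core
  	 * picks a free slot. The online callback is also invoked for all
  	 * CPUs that are already up, with hotplug protection in place,
  	 * and a failing callback is rolled back automatically.
  	 */
  	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mydrv:online",
  				my_cpu_online, my_cpu_offline);
  	if (ret < 0)
  		return ret;
  	my_hp_state = ret;
  	return 0;
  }

  static void __exit my_driver_exit(void)
  {
  	/* Runs the teardown callback on all online CPUs and unregisters. */
  	cpuhp_remove_state(my_hp_state);
  }

  module_init(my_driver_init);
  module_exit(my_driver_exit);
  MODULE_LICENSE("GPL");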
2016-03-15Merge branch 'irq-core-for-linus' of ↵Linus Torvalds1-0/+136
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull irq updates from Thomas Gleixner: "The 4.6 pile of irq updates contains:

 - Support for IPI irqdomains to support proper integration of IPIs to and from coprocessors. The first user of this new facility is MIPS. The relevant MIPS patches come with the core to avoid merge ordering issues and have been acked by Ralf.

 - A new command line option to set the default interrupt affinity mask at boot time.

 - Support for some more new ARM and MIPS interrupt controllers: tango, alpine-msix and bcm6345-l1.

 - Two small cleanups for x86/apic which we merged into irq/core to avoid yet another branch in x86 with two tiny commits.

 - The usual set of updates and cleanups in drivers/irqchip, mostly in the area of ARM-GIC, armada-370-xp and atmel chips. Nothing outstanding here"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (56 commits)
  irqchip/irq-alpine-msi: Release the correct domain on error
  irqchip/mxs: Fix error check of of_io_request_and_map()
  irqchip/sunxi-nmi: Fix error check of of_io_request_and_map()
  genirq: Export IRQ functions for module use
  irqchip/gic/realview: Support more RealView DCC variants
  Documentation/bindings: Document the Alpine MSIX driver
  irqchip: Add the Alpine MSIX interrupt controller
  irqchip/gic-v3: Always return IRQ_SET_MASK_OK_DONE in gic_set_affinity
  irqchip/gic-v3-its: Mark its_init() and its children as __init
  irqchip/gic-v3: Remove gic_root_node variable from the ITS code
  irqchip/gic-v3: ACPI: Add redistributor support via GICC structures
  irqchip/gic-v3: Add ACPI support for GICv3/4 initialization
  irqchip/gic-v3: Refactor gic_of_init() for GICv3 driver
  x86/apic: Deinline _flat_send_IPI_mask, save ~150 bytes
  x86/apic: Deinline __default_send_IPI_*, save ~200 bytes
  dt-bindings: interrupt-controller: Add SoC-specific compatible string to Marvell ODMI
  irqchip/mips-gic: Add new DT property to reserve IPIs
  MIPS: Delete smp-gic.c
  MIPS: Make smp CMP, CPS and MT use the new generic IPI functions
  MIPS: Add generic SMP IPI support
  ...
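The IPI irqdomain facility mentioned in the first bullet can be sketched roughly as follows. irq_reserve_ipi(), ipi_send_single() and ipi_send_mask() are the generic helpers of this layer as found in mainline; their exact signatures and return conventions here are an assumption, and everything prefixed example_ is hypothetical.

  #include <linux/cpumask.h>
  #include <linux/interrupt.h>
  #include <linux/irqdomain.h>

  /* Hypothetical handler that runs on the CPU receiving the IPI. */
  static irqreturn_t example_ipi_interrupt(int irq, void *dev_id)
  {
  	/* act on the cross-call */
  	return IRQ_HANDLED;
  }

  /* Reserve an IPI from an irqchip-provided IPI domain, wire up a
   * handler, and cross-call another CPU through it. */
  static int example_ipi_init(struct irq_domain *ipi_domain)
  {
  	int virq, err;

  	/* Allocate a virq usable as an IPI to any possible CPU. */
  	virq = irq_reserve_ipi(ipi_domain, cpu_possible_mask);
  	if (virq <= 0)
  		return -ENODEV;

  	/* From here on the IPI behaves like an ordinary per-CPU interrupt. */
  	err = request_irq(virq, example_ipi_interrupt, IRQF_PERCPU,
  			  "example IPI", NULL);
  	if (err)
  		return err;

  	/* Kick CPU 1; ipi_send_mask() takes a whole cpumask instead. */
  	return ipi_send_single(virq, 1);
  }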
2016-03-13MIPS: smp.c: Fix uninitialised temp_foreign_mapJames Hogan1-0/+1
When calculate_cpu_foreign_map() recalculates the cpu_foreign_map cpumask, it uses the local variable temp_foreign_map without initialising it to zero. Since the calculation only ever sets bits in this cpumask, any existing bits at that memory location will remain set and find their way into cpu_foreign_map too. This could lead to cache operations making unnecessary SMP calls to multiple VPEs in the same core, even though those VPEs share primary caches. Therefore initialise temp_foreign_map using cpumask_clear() before use. Fixes: cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12759/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
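As a sketch of the fix in context, assuming a calculate_cpu_foreign_map() of roughly this shape (the loop body is simplified and example_same_core() is a hypothetical same-core test): a cpumask on the stack contains whatever happened to be in that memory, so it must be cleared before bits are accumulated into it.

  #include <linux/cpumask.h>

  /* Simplified sketch, not the exact kernel function; cpu_foreign_map is
   * the global mask this file maintains. */
  static void calculate_cpu_foreign_map(void)
  {
  	int i, k, core_present;
  	cpumask_t temp_foreign_map;

  	/* The fix: a stack-allocated cpumask starts out with garbage,
  	 * so clear it before any cpumask_set_cpu() below. */
  	cpumask_clear(&temp_foreign_map);

  	/* Keep only one VPE per core in the map. */
  	for_each_online_cpu(i) {
  		core_present = 0;
  		for_each_cpu(k, &temp_foreign_map)
  			if (example_same_core(i, k))
  				core_present = 1;
  		if (!core_present)
  			cpumask_set_cpu(i, &temp_foreign_map);
  	}

  	cpumask_copy(&cpu_foreign_map, &temp_foreign_map);
  }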
2016-03-01arch/hotplug: Call into idle with a proper stateThomas Gleixner1-1/+1
Let the non boot cpus call into idle with the corresponding hotplug state, so the hotplug core can handle the further bringup. That's a first step to convert the boot side of the hotplugged cpus to do all the synchronization with the other side through the state machine. For now it'll only start the hotplug thread and kick the full bringup of the cpu. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-arch@vger.kernel.org Cc: Rik van Riel <riel@redhat.com> Cc: Rafael Wysocki <rafael.j.wysocki@intel.com> Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Sebastian Siewior <bigeasy@linutronix.de> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Tejun Heo <tj@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Turner <pjt@google.com> Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
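In arch terms the change is essentially a one-liner in the secondary bringup path: hand the hotplug state to the idle entry point. Below is a minimal sketch of the tail of such a path, with calibration and synchronisation with the control CPU omitted; the function name is hypothetical.

  #include <linux/cpu.h>		/* cpu_startup_entry() */
  #include <linux/cpuhotplug.h>	/* CPUHP_AP_ONLINE_IDLE */
  #include <linux/cpumask.h>
  #include <linux/irqflags.h>

  /* Sketch of the end of a secondary-CPU bringup function. */
  static void example_secondary_tail(unsigned int cpu)
  {
  	set_cpu_online(cpu, true);	/* low level bringup is done */
  	local_irq_enable();

  	/* Enter idle reporting the hotplug state, so the hotplug core
  	 * (rather than the control CPU) drives the remaining bringup. */
  	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
  }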
2016-02-25MIPS: Add generic SMP IPI supportQais Yousef1-0/+136
Use the new generic IPI layer to provide generic SMP IPI support if the irqchip supports it. Signed-off-by: Qais Yousef <qais.yousef@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: <jason@lakedaemon.net> Cc: <marc.zyngier@arm.com> Cc: <jiang.liu@linux.intel.com> Cc: <linux-mips@linux-mips.org> Cc: <lisa.parratt@imgtec.com> Cc: Qais Yousef <qsyousef@gmail.com> Link: http://lkml.kernel.org/r/1449580830-23652-17-git-send-email-qais.yousef@imgtec.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
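The shape of the glue this adds can be sketched as follows: the two classic MIPS IPIs (reschedule and call-function) become ordinary interrupt handlers behind virqs reserved from the irqchip's IPI domain, and the send path goes through the generic ipi_send_mask(). The handler bodies are the standard generic helpers; the variable names and the allocation of the virqs are assumptions.

  #include <linux/interrupt.h>
  #include <linux/irq.h>		/* ipi_send_mask() */
  #include <linux/sched.h>	/* scheduler_ipi() */
  #include <linux/smp.h>		/* generic_smp_call_function_interrupt() */
  #include <asm/smp.h>		/* SMP_CALL_FUNCTION */

  /* virqs assumed to have been reserved from the irqchip's IPI domain. */
  static unsigned int resched_virq, call_virq;

  static irqreturn_t ipi_resched_interrupt(int irq, void *dev_id)
  {
  	scheduler_ipi();	/* reschedule request from another CPU */
  	return IRQ_HANDLED;
  }

  static irqreturn_t ipi_call_interrupt(int irq, void *dev_id)
  {
  	generic_smp_call_function_interrupt();	/* run queued cross-CPU work */
  	return IRQ_HANDLED;
  }

  /* Send side: the SMP core hands us a mask of CPUs to kick. */
  static void example_send_ipi_mask(const struct cpumask *mask, unsigned int action)
  {
  	if (action == SMP_CALL_FUNCTION)
  		ipi_send_mask(call_virq, mask);
  	else
  		ipi_send_mask(resched_virq, mask);
  }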
2015-09-27MIPS: Initialise MAARs on secondary CPUsPaul Burton1-0/+2
MAARs should be initialised on each CPU (or rather, core) in the system in order to achieve consistent behaviour & performance. Previously they have only been initialised on the boot CPU which leads to performance problems if tasks are later scheduled on a secondary CPU, particularly if those tasks make use of unaligned vector accesses where some CPUs don't handle any cases in hardware for non-speculative memory regions. Fix this by recording the MAAR configuration from the boot CPU and applying it to secondary CPUs as part of their bringup. Reported-by: Doug Gilmore <doug.gilmore@imgtec.com> Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Andrew Bresticker <abrestic@chromium.org> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: David Hildenbrand <dahi@linux.vnet.ibm.com> Cc: linux-kernel@vger.kernel.org Cc: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: James Hogan <james.hogan@imgtec.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Hemmo Nieminen <hemmo.nieminen@iki.fi> Cc: Alex Smith <alex.smith@imgtec.com> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Patchwork: https://patchwork.linux-mips.org/patch/11239/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
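A hedged sketch of the record-and-replay idea described here, with hypothetical helper and variable names; the MAAR accessors exist in asm/mipsregs.h, but the exact register sequencing and the place the real code hooks into bringup may differ.

  #include <asm/hazards.h>
  #include <asm/mipsregs.h>

  #define EXAMPLE_MAX_MAARS	16	/* pairs of MAAR registers */

  struct example_maar {
  	unsigned long lo, hi;
  };

  static struct example_maar recorded_maars[EXAMPLE_MAX_MAARS];
  static unsigned int recorded_maar_pairs;

  /* Boot CPU: remember the MAAR pairs it just configured. */
  static void example_record_maars(unsigned int pairs)
  {
  	unsigned int i;

  	if (pairs > EXAMPLE_MAX_MAARS)
  		pairs = EXAMPLE_MAX_MAARS;
  	recorded_maar_pairs = pairs;

  	for (i = 0; i < pairs; i++) {
  		write_c0_maari(i * 2);
  		back_to_back_c0_hazard();
  		recorded_maars[i].lo = read_c0_maar();

  		write_c0_maari(i * 2 + 1);
  		back_to_back_c0_hazard();
  		recorded_maars[i].hi = read_c0_maar();
  	}
  }

  /* Secondary CPU bringup: replay the boot CPU's configuration so every
   * core behaves (and performs) the same. */
  static void example_replay_maars(void)
  {
  	unsigned int i;

  	for (i = 0; i < recorded_maar_pairs; i++) {
  		write_c0_maari(i * 2);
  		back_to_back_c0_hazard();
  		write_c0_maar(recorded_maars[i].lo);

  		write_c0_maari(i * 2 + 1);
  		back_to_back_c0_hazard();
  		write_c0_maar(recorded_maars[i].hi);
  	}
  }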
2015-08-03MIPS: SMP: Don't increment irq_count multiple times for call function IPIsAlex Smith1-10/+0
The majority of SMP platforms handle their IPIs through do_IRQ() which calls irq_{enter/exit}(). When a call function IPI is received, smp_call_function_interrupt() is called which also calls irq_{enter,exit}(), meaning irq_count is raised twice. When tick broadcasting is used (which is implemented via a call function IPI), this incorrectly causes all CPU idle time on the core receiving broadcast ticks to be accounted as time spent servicing IRQs, as account_process_tick() will account as such if irq_count is greater than 1. This results in 100% CPU usage being reported on a core which receives its ticks via broadcast. This patch removes the SMP smp_call_function_interrupt() wrapper which calls irq_{enter,exit}(). Platforms which handle their IPIs through do_IRQ() now call generic_smp_call_function_interrupt() directly to avoid incrementing irq_count a second time. Platforms which don't (loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt() wrapped in irq_{enter,exit}(). Signed-off-by: Alex Smith <alex.smith@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/10770/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
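The before/after difference described here boils down to two patterns, sketched below: platforms whose IPIs arrive through do_IRQ() already get the irq_enter()/irq_exit() accounting and should call the generic helper directly, while platforms that bypass do_IRQ() add the accounting themselves, exactly once. Names prefixed example_ are hypothetical.

  #include <linux/hardirq.h>	/* irq_enter()/irq_exit() */
  #include <linux/interrupt.h>
  #include <linux/smp.h>		/* generic_smp_call_function_interrupt() */

  /* Case 1: IPI delivered through do_IRQ(), which already wraps the
   * handler in irq_enter()/irq_exit(). Calling the generic helper
   * directly avoids bumping irq_count a second time. */
  static irqreturn_t example_ipi_call_via_do_irq(int irq, void *dev_id)
  {
  	generic_smp_call_function_interrupt();
  	return IRQ_HANDLED;
  }

  /* Case 2: IPI delivered outside do_IRQ() (e.g. straight from a
   * low-level vector), so the accounting has to be done here, once. */
  static void example_ipi_call_raw(void)
  {
  	irq_enter();
  	generic_smp_call_function_interrupt();
  	irq_exit();
  }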
2015-07-10MIPS: c-r4k: Fix cache flushing for MT coresMarkos Chandras1-1/+43
MT_SMP is not the only SMP option for MT cores. The MT_SMP option allows more than one VPE per core to appear as a secondary CPU in the system. Because of how CM works, it propagates the address-based cache ops to the secondary cores but not the index-based ones. Because of that, the code does not use IPIs to flush the L1 caches on secondary cores because the CM would have done that already. However, the CM functionality is independent of the type of SMP kernel so even in non-MT kernels, IPIs are not necessary. As a result of which, we change the conditional to depend on the CM presence. Moreover, since VPEs on the same core share the same L1 caches, there is no need to send an IPI on all of them so we calculate a suitable cpumask with only one VPE per core. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: <stable@vger.kernel.org> # 3.15+ Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/10654/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
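A rough sketch of the two ideas in c-r4k.c terms: gate the SMP call on the presence of the CM, which already propagates address-based ops in hardware, and when an IPI is needed, target only cpu_foreign_map, built with one VPE per core. mips_cm_present() and cpu_foreign_map are real interfaces of this era; the wrapper shown is a simplification, not the exact kernel helper.

  #include <linux/preempt.h>
  #include <linux/smp.h>
  #include <asm/mips-cm.h>	/* mips_cm_present() */

  /* Simplified take on the r4k "run a cache op everywhere" helper. */
  static inline void example_r4k_on_each_cpu(void (*func)(void *info), void *info)
  {
  	preempt_disable();

  	/*
  	 * With a Coherence Manager the address-based ops reach the other
  	 * cores in hardware, so no IPI is needed. Without one, cross-call
  	 * only cpu_foreign_map, which holds one VPE per foreign core,
  	 * since sibling VPEs share the L1 caches anyway.
  	 */
  	if (!mips_cm_present())
  		smp_call_function_many(&cpu_foreign_map, func, info, 1);

  	func(info);
  	preempt_enable();
  }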
2015-05-12MIPS: SMP: Fix build error.Ralf Baechle1-2/+4
  CC      arch/mips/kernel/smp.o
arch/mips/kernel/smp.c: In function ‘start_secondary’:
arch/mips/kernel/smp.c:149:2: error: passing argument 2 of ‘cpumask_set_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
  cpumask_set_cpu(cpu, &cpu_callin_map);
  ^
In file included from ./arch/mips/include/asm/processor.h:14:0,
                 from ./arch/mips/include/asm/thread_info.h:15,
                 from include/linux/thread_info.h:54,
                 from include/asm-generic/preempt.h:4,
                 from arch/mips/include/generated/asm/preempt.h:1,
                 from include/linux/preempt.h:18,
                 from include/linux/interrupt.h:8,
                 from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:272:91: note: expected ‘struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
 static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
                                                      ^
arch/mips/kernel/smp.c: In function ‘smp_prepare_boot_cpu’:
arch/mips/kernel/smp.c:211:2: error: passing argument 2 of ‘cpumask_set_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
  cpumask_set_cpu(0, &cpu_callin_map);
  ^
In file included from ./arch/mips/include/asm/processor.h:14:0,
                 from ./arch/mips/include/asm/thread_info.h:15,
                 from include/linux/thread_info.h:54,
                 from include/asm-generic/preempt.h:4,
                 from arch/mips/include/generated/asm/preempt.h:1,
                 from include/linux/preempt.h:18,
                 from include/linux/interrupt.h:8,
                 from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:272:91: note: expected ‘struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
 static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
                                                      ^
arch/mips/kernel/smp.c: In function ‘__cpu_up’:
arch/mips/kernel/smp.c:221:10: error: passing argument 2 of ‘cpumask_test_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
  while (!cpumask_test_cpu(cpu, &cpu_callin_map))
         ^
In file included from ./arch/mips/include/asm/processor.h:14:0,
                 from ./arch/mips/include/asm/thread_info.h:15,
                 from include/linux/thread_info.h:54,
                 from include/asm-generic/preempt.h:4,
                 from arch/mips/include/generated/asm/preempt.h:1,
                 from include/linux/preempt.h:18,
                 from include/linux/interrupt.h:8,
                 from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:294:90: note: expected ‘const struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
 static inline int cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
                                             ^
cc1: all warnings being treated as errors
make[2]: *** [arch/mips/kernel/smp.o] Error 1
make[1]: *** [arch/mips/kernel] Error 2
make: *** [arch/mips] Error 2

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
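One common way this class of -Werror failure is resolved, shown here as a sketch rather than as the actual change in this commit, is to cast the volatile qualifier away at the call sites, since the generic cpumask accessors take plain struct cpumask pointers.

  #include <linux/cpumask.h>

  /* cpu_callin_map is declared volatile in this era of the MIPS code. */
  static volatile cpumask_t cpu_callin_map;

  static void example_mark_cpu_callin(unsigned int cpu)
  {
  	/* The cast drops the volatile qualifier so cpumask_set_cpu(),
  	 * which expects a plain struct cpumask *, accepts the argument. */
  	cpumask_set_cpu(cpu, (struct cpumask *)&cpu_callin_map);
  }

  static bool example_cpu_called_in(unsigned int cpu)
  {
  	return cpumask_test_cpu(cpu, (const struct cpumask *)&cpu_callin_map);
  }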
2015-04-20Merge tag 'cpumask-next-for-linus' of ↵Linus Torvalds1-13/+13
git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux Pull final removal of deprecated cpus_* cpumask functions from Rusty Russell: "This is the final removal (after several years!) of the obsolete cpus_* functions, prompted by their mis-use in staging.

With these functions removed, all cpu functions should only iterate to nr_cpu_ids, so we finally only allocate that many bits when cpumasks are allocated offstack"

* tag 'cpumask-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (25 commits)
  cpumask: remove __first_cpu / __next_cpu
  cpumask: resurrect CPU_MASK_CPU0
  linux/cpumask.h: add typechecking to cpumask_test_cpu
  cpumask: only allocate nr_cpumask_bits.
  Fix weird uses of num_online_cpus().
  cpumask: remove deprecated functions.
  mips: fix obsolete cpumask_of_cpu usage.
  x86: fix more deprecated cpu function usage.
  ia64: remove deprecated cpus_ usage.
  powerpc: fix deprecated CPU_MASK_CPU0 usage.
  CPU_MASK_ALL/CPU_MASK_NONE: remove from deprecated region.
  staging/lustre/o2iblnd: Don't use cpus_weight
  staging/lustre/libcfs: replace deprecated cpus_ calls with cpumask_
  staging/lustre/ptlrpc: Do not use deprecated cpus_* functions
  blackfin: fix up obsolete cpu function usage.
  parisc: fix up obsolete cpu function usage.
  tile: fix up obsolete cpu function usage.
  arm64: fix up obsolete cpu function usage.
  mips: fix up obsolete cpu function usage.
  x86: fix up obsolete cpu function usage.
  ...
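For reference, the flavour of the mechanical change behind the arch "fix up obsolete cpu function usage" commits above: the removed cpus_* functions operated on cpumask values, while the cpumask_* replacements operate on pointers. A short sketch of the new-style calls, with the corresponding old-style calls noted in comments (example_mask and the function name are hypothetical):

  #include <linux/cpumask.h>
  #include <linux/printk.h>

  static cpumask_t example_mask;

  static void example_cpumask_usage(unsigned int cpu)
  {
  	/* Old style (removed):            New style:                           */
  	/*   cpus_clear(example_mask);       cpumask_clear(&example_mask);      */
  	/*   cpu_set(cpu, example_mask);     cpumask_set_cpu(cpu, &example_mask); */
  	/*   cpu_isset(cpu, example_mask);   cpumask_test_cpu(cpu, &example_mask); */

  	cpumask_clear(&example_mask);
  	cpumask_set_cpu(cpu, &example_mask);

  	if (cpumask_test_cpu(cpu, &example_mask))
  		pr_info("CPU%u is set, %u online in total\n",
  			cpu, num_online_cpus());
  }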