Age | Commit message | Author | Files | Lines |
|
Conflicts:
arch/arm/kernel/ptrace.c
|
|
Commit ff9a184c ("ARM: 7400/1: vfp: clear fpscr length and stride bits
on entry to sig handler") flushes the VFP state prior to entering a
signal handler so that a VFP operation inside the handler will trap and
force a restore of ABI-compliant registers. Reflushing and disabling VFP
on the sigreturn path is predicated on the saved thread state indicating
that VFP was used by the handler -- however for SMP platforms this is
only set on context-switch, making the check unreliable and causing VFP
register corruption in userspace since the register values are not
necessarily those restored from the sigframe.
This patch unconditionally flushes the VFP state after a signal handler.
Since we already perform the flush before the handler and the flushing
itself happens lazily, the redundant flush when VFP is not used by the
handler is essentially a nop.
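As a rough sketch (assuming the flush sits in the sigreturn restore path and reuses the existing vfp_flush_hwstate() helper), the change amounts to dropping the "did the handler use VFP?" test:

        /*
         * Always resynchronise on sigreturn: on SMP the "VFP used by the
         * handler" indication is only updated at context switch, so it
         * cannot be trusted here.
         */
        vfp_flush_hwstate(thread);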
Reported-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The vfp_enable function enables access to the VFP co-processor register
space (cp10 and cp11) on the current CPU and must be called with
preemption disabled. Unfortunately, the vfp_init late initcall does not
disable preemption and can lead to an oops during boot if thread
migration occurs at the wrong time and we end up attempting to access
the FPSID on a CPU with VFP access disabled.
This patch fixes the initcall to call vfp_enable from a non-preemptible
context on each CPU and adds a BUG_ON(preemptible) to ensure that any
similar problems are easily spotted in the future.
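A minimal sketch of the shape of the fix, assuming the generic on_each_cpu() helper (which runs its callback with preemption disabled on every CPU) and the cp15 coprocessor-access helpers:

        static void vfp_enable(void *unused)
        {
                u32 access;

                BUG_ON(preemptible());          /* catch similar mistakes early */
                access = get_copro_access();

                /* enable full access to VFP (cp10 and cp11) */
                set_copro_access(access | CPACC_FULL(10) | CPACC_FULL(11));
        }

        /* in vfp_init(): enable VFP access on every CPU, preemption off */
        on_each_cpu(vfp_enable, NULL, 1);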
Cc: stable@vger.kernel.org
Reported-by: Hyungwoo Yang <hwoo.yang@gmail.com>
Signed-off-by: Hyungwoo Yang <hyungwooy@nvidia.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This is mainly to get rid of the "vfp_pm_suspend: saving vfp state"
message flooding the kernel message ring by default.
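The change is essentially a log-level demotion; a before/after sketch, assuming the message is printed from vfp_pm_suspend():

        /* before: always lands in the kernel message ring */
        printk(KERN_DEBUG "%s: saving vfp state\n", __func__);

        /* after: compiled away unless DEBUG or dynamic debug is enabled */
        pr_debug("%s: saving vfp state\n", __func__);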
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The ARM PCS mandates that the length and stride bits of the fpscr are
cleared on entry to and return from a public interface. Although signal
handlers run asynchronously with respect to the interrupted function,
the handler itself expects to run as though it has been called like a
normal function.
This patch updates the state mirroring the VFP hardware before entry to
a signal handler so that it adheres to the PCS. Furthermore, we disable
VFP to ensure that we trap on any floating point operation performed by
the signal handler and synchronise the hardware appropriately. A check
is inserted after the signal handler to avoid redundant flushing if VFP
was not used.
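A minimal sketch of the FPSCR sanitisation, assuming the FPSCR_LENGTH_MASK/FPSCR_STRIDE_MASK definitions from asm/vfp.h and that hwstate points at the thread's saved VFP hard state:

        /* give the handler an ABI-compliant FPSCR: len = 0, stride = 0 */
        hwstate->fpscr &= ~(FPSCR_LENGTH_MASK | FPSCR_STRIDE_MASK);

        /*
         * Disable VFP so any floating point use by the handler traps and
         * the hardware can be resynchronised afterwards.
         */
        hwstate->fpexc &= ~FPEXC_EN;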
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The user VFP state must be preserved (subject to ucontext modifications)
across invocation of a signal handler and this is currently handled by
vfp_{preserve,restore}_context in signal.c
Since this code requires intimate low-level knowledge of the VFP state,
this patch moves it into vfpmodule.c.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Disintegrate asm/system.h for ARM.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org
|
|
Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Building these files does not reveal a hidden need for any of these
module.h includes. Since module.h brings in the whole kitchen
sink, it just needlessly adds 30k+ lines to the cpp burden.
There are probably lots more, but ARM files of mach-* and plat-*
don't get coverage via a simple yesconfig build. They will have
to be cleaned up and tested via using their respective configs.
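The mechanical shape of the cleanup, for reference (linux/export.h carries EXPORT_SYMBOL for the files that still need it):

        /* before */
        #include <linux/module.h>

        /* after: pull in only what the file actually uses */
        #include <linux/export.h>       /* if the file uses EXPORT_SYMBOL() */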
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
|
|
Function vfp_force_reload() clears vfp_current_hw_state, so
update the comment accordingly.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
|
When the CPU is powered down in a low power mode, the VFP
registers may be reset.
This patch uses CPU_PM_ENTER and CPU_PM_EXIT notifiers to save
and restore the CPU's VFP registers.
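A sketch of the notifier wiring, assuming the existing vfp_pm_suspend()/vfp_pm_resume() helpers are reused for the CPU PM events:

        #include <linux/cpu_pm.h>

        static int vfp_cpu_pm_notifier(struct notifier_block *self,
                                       unsigned long cmd, void *v)
        {
                switch (cmd) {
                case CPU_PM_ENTER:
                        vfp_pm_suspend();       /* save the owner's VFP state */
                        break;
                case CPU_PM_ENTER_FAILED:
                case CPU_PM_EXIT:
                        vfp_pm_resume();        /* re-enable access, force lazy reload */
                        break;
                }
                return NOTIFY_OK;
        }

        static struct notifier_block vfp_cpu_pm_notifier_block = {
                .notifier_call = vfp_cpu_pm_notifier,
        };

        /* registered once from vfp_init() */
        cpu_pm_register_notifier(&vfp_cpu_pm_notifier_block);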
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-and-Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Vishwanath BS <vishwanath.bs@ti.com>
|
|
Conflicts:
arch/arm/kernel/entry-armv.S
|
|
Prevent a preemption event from causing the initialized VFP state to be
overwritten by ensuring that VFP hardware access is disabled prior to
starting initialization. We can then do this safely while still allowing
preemption to occur.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Fix a hole in the VFP thread migration. Let's define two threads.
Thread 1, we'll call 'interesting_thread' which is a thread which is
running on CPU0, using VFP (so vfp_current_hw_state[0] =
&interesting_thread->vfpstate) and gets migrated off to CPU1, where
it continues execution of VFP instructions.
Thread 2, we'll call 'new_cpu0_thread' which is the thread which takes
over on CPU0. This has also been using VFP, and last used VFP on CPU0,
but doesn't use it again.
The following code will be executed twice:
        cpu = thread->cpu;

        /*
         * On SMP, if VFP is enabled, save the old state in
         * case the thread migrates to a different CPU. The
         * restoring is done lazily.
         */
        if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu]) {
                vfp_save_state(vfp_current_hw_state[cpu], fpexc);
                vfp_current_hw_state[cpu]->hard.cpu = cpu;
        }

        /*
         * Thread migration, just force the reloading of the
         * state on the new CPU in case the VFP registers
         * contain stale data.
         */
        if (thread->vfpstate.hard.cpu != cpu)
                vfp_current_hw_state[cpu] = NULL;
The first execution will be on CPU0 to switch away from 'interesting_thread'.
interesting_thread->cpu will be 0.
So, vfp_current_hw_state[0] points at interesting_thread->vfpstate.
The hardware state will be saved, along with the CPU number (0) that
it was executing on.
'thread' will be 'new_cpu0_thread' with new_cpu0_thread->cpu = 0.
Also, because it was executing on CPU0, new_cpu0_thread->vfpstate.hard.cpu = 0,
and so the thread migration check is not triggered.
This means that vfp_current_hw_state[0] remains pointing at interesting_thread.
The second execution will be on CPU1 to switch _to_ 'interesting_thread'.
So, 'thread' will be 'interesting_thread' and interesting_thread->cpu now
will be 1. The previous thread executing on CPU1 is not relevant to this
so we shall ignore that.
We get to the thread migration check. Here, we discover that
interesting_thread->vfpstate.hard.cpu = 0, yet interesting_thread->cpu is
now 1, indicating thread migration. We set vfp_current_hw_state[1] to
NULL.
So, at this point vfp_current_hw_state[] contains the following:
[0] = &interesting_thread->vfpstate
[1] = NULL
Our interesting thread now executes a VFP instruction and takes a fault
which loads the state into the VFP hardware. Through the assembly fault
handling, we now have:
[0] = &interesting_thread->vfpstate
[1] = &interesting_thread->vfpstate
CPU1 stops due to ptrace (and so saves its VFP state using the thread
switch code above), and CPU0 calls vfp_sync_hwstate().
        if (vfp_current_hw_state[cpu] == &thread->vfpstate) {
                vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);
BANG, we corrupt interesting_thread's VFP state by overwriting the
more up-to-date state saved by CPU1 with the old VFP state from CPU0.
Fix this by ensuring that we have sane semantics for the various state
describing variables:
1. vfp_current_hw_state[] points to the current owner of the context
information stored in each CPU's hardware, or NULL if that state
information is invalid.
2. thread->vfpstate.hard.cpu always contains the most recent CPU number
which the state was loaded into or NR_CPUS if no CPU owns the state.
So, for a particular CPU to be a valid owner of the VFP state for a
particular thread t, two things must be true:
vfp_current_hw_state[cpu] == &t->vfpstate && t->vfpstate.hard.cpu == cpu.
and that is valid from the moment a CPU loads the saved VFP context
into the hardware. This gives clear and consistent semantics to
interpreting these variables.
This patch also fixes thread copying, ensuring that t->vfpstate.hard.cpu
is invalidated, otherwise CPU0 may believe it was the last owner. The
hole can happen thus:
- thread1 runs on CPU2 using VFP, migrates to CPU3, exits and thread_info
freed.
- New thread allocated from a previously running thread on CPU2, reusing
memory for thread1 and copying vfp.hard.cpu.
At this point, the following are true:
new_thread1->vfpstate.hard.cpu == 2
&new_thread1->vfpstate == vfp_current_hw_state[2]
Lastly, this also addresses thread flushing in a similar way to thread
copying. Hole is:
- thread runs on CPU0, using VFP, migrates to CPU1 but does not use VFP.
- thread calls execve(), so thread flush happens, leaving
vfp_current_hw_state[0] intact. This vfpstate is memset to 0 causing
thread->vfpstate.hard.cpu = 0.
- thread migrates back to CPU0 before using VFP.
At this point, the following are true:
thread->vfpstate.hard.cpu == 0
&thread->vfpstate == vfp_current_hw_state[0]
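The two conditions above fold naturally into a small ownership predicate; a sketch (the helper name is illustrative):

        /* true only if this CPU's hardware currently holds t's VFP state */
        static bool vfp_state_in_hw(unsigned int cpu, struct thread_info *t)
        {
        #ifdef CONFIG_SMP
                if (t->vfpstate.hard.cpu != cpu)
                        return false;   /* state was last loaded elsewhere */
        #endif
                return vfp_current_hw_state[cpu] == &t->vfpstate;
        }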
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Rename the slightly confusing 'last_VFP_context' variable to be more
descriptive of what it actually is. This variable stores a pointer
to the current owner's vfpstate structure for the context held in the
VFP hardware.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The presence of VFPv4 cannot be detected simply by looking at the FPSID
subarchitecture field, as a value >= 2 signifies the architecture as
VFPv3 or later.
This patch reads from MVFR1 to check whether or not the fused multiply
accumulate instructions are supported. Since these are introduced with
VFPv4, this tells us what we need to know.
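A sketch of the detection, assuming MVFR1[31:28] is the fused multiply-accumulate field and using the existing fmrx() accessor:

        /*
         * FPSID arch >= 2 only means "VFPv3 or later"; fused MAC support
         * (new in VFPv4) is advertised in MVFR1[31:28] (assumed encoding:
         * 1 => supported).
         */
        if ((fmrx(MVFR1) & 0xf0000000) == 0x10000000)
                elf_hwcap |= HWCAP_VFPv4;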
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Convert some of the ARM architecture's common code to use
struct syscore_ops objects for power management instead of sysdev
classes and sysdevs.
This simplifies the code and reduces the kernel's memory footprint.
It also is necessary for removing sysdevs from the kernel entirely in
the future.
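For the VFP code this boils down to replacing the sysdev class with a syscore_ops object; a sketch reusing the existing suspend/resume handlers:

        #include <linux/syscore_ops.h>

        static struct syscore_ops vfp_pm_syscore_ops = {
                .suspend        = vfp_pm_suspend,       /* int (*)(void) */
                .resume         = vfp_pm_resume,        /* void (*)(void) */
        };

        /* from the init path, instead of sysdev_class/sysdev registration */
        register_syscore_ops(&vfp_pm_syscore_ops);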
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
VFP registers d16-d31 are callee saved registers and must be preserved
during function calls, including fork(). The VFP configuration should
also be preserved. The patch copies the full VFP state to the child
process.
Reported-by: Paul Wright <paul.wright@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch adds THREAD_NOTIFY_COPY for calling registered handlers
during the copy_thread() function call. It also changes the VFP handler
to use a switch statement rather than if..else and ignore this event.
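A skeleton of the resulting switch-based notifier (handler bodies elided; the VFP code ignores the new event at this point):

        static int vfp_notifier(struct notifier_block *self, unsigned long cmd,
                                void *v)
        {
                struct thread_info *thread = v;

                switch (cmd) {
                case THREAD_NOTIFY_SWITCH:
                        /* context switch: save/invalidate the hardware state */
                        break;
                case THREAD_NOTIFY_FLUSH:
                        /* execve(): give the thread a clean VFP state */
                        break;
                case THREAD_NOTIFY_COPY:
                        /* copy_thread(): ignored by the VFP code for now */
                        break;
                }
                return NOTIFY_DONE;
        }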
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Improve the documentation for the VFP hotplug notifier handler, so
that people better understand what's going on there and what has
been done for them.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
arch/arm/kernel/return_address.c:37:6: warning: symbol 'return_address' was not declared. Should it be static?
arch/arm/kernel/setup.c:76:14: warning: symbol 'processor_id' was not declared. Should it be static?
arch/arm/kernel/traps.c:259:1: warning: symbol 'die_lock' was not declared. Should it be static?
arch/arm/vfp/vfpmodule.c:156:6: warning: symbol 'vfp_raise_sigfpe' was not declared. Should it be static?
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
We cannot guarantee that VFP will be enabled when CPU hotplug brings
a CPU back online from a reset state. Add a hotplug CPU notifier to
ensure that the VFP coprocessor access is enabled whenever a CPU comes
back online.
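A sketch using the CPU notifier API of that era; CPU_STARTING runs on the incoming CPU with interrupts disabled, which is where the coprocessor access needs to be re-granted:

        static int vfp_hotplug(struct notifier_block *b, unsigned long action,
                               void *hcpu)
        {
                if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
                        vfp_enable(NULL);       /* re-enable cp10/cp11 access */
                return NOTIFY_OK;
        }

        /* from vfp_init() */
        hotcpu_notifier(vfp_hotplug, 0);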
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
MVFR0 and MVFR1 are only available starting with ARM1136 r1p0 release
according to "B.5 VFP changes" in DDI0211F_arm1136_r1p0_trm.pdf. This is
also when the TLS register was added, so we can use HAS_TLS to test for
MVFR0 and MVFR1 as well.
Otherwise VFPFMRX and VFPFMXR access fails and we get:
Internal error: Oops - undefined instruction: 0 [#1]
PC is at no_old_VFP_process+0x8/0x3c
LR is at __und_svc+0x48/0x80
...
Signed-off-by: Tony Lindgren <tony@atomide.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
From: Imre Deak <imre.deak@nokia.com>
Recently the UP versions of these functions were refactored and as
a side effect it became possible to call them for the current thread.
This isn't true for the SMP versions however, so fix this up.
Signed-off-by: Imre Deak <imre.deak@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
A CPU has VFPv3 hardware if the FPSID[19:16] bits are 2 or more.
Linux currently checks only for 3 or more.
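A sketch of the corrected check, assuming the FPSID_ARCH_MASK/FPSID_ARCH_BIT definitions from asm/vfp.h:

        unsigned int vfp_arch = (vfpsid & FPSID_ARCH_MASK) >> FPSID_ARCH_BIT;

        /* subarchitecture value 2 already implies VFPv3 hardware */
        if (vfp_arch >= 2)
                elf_hwcap |= HWCAP_VFPv3;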
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (100 commits)
ARM: Eliminate decompressor -Dstatic= PIC hack
ARM: 5958/1: ARM: U300: fix inverted clk round rate
ARM: 5956/1: misplaced parentheses
ARM: 5955/1: ep93xx: move timer defines into core.c and document
ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
ARM: 5953/1: ep93xx: fix broken build of clock.c
ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
ARM: 5949/1: NUC900 add gpio virtual memory map
ARM: 5948/1: Enable timer0 to time4 clock support for nuc910
ARM: 5940/2: ARM: MMCI: remove custom DBG macro and printk
ARM: make_coherent(): fix problems with highpte, part 2
MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
ARM: 5945/1: ep93xx: include correct irq.h in core.c
ARM: 5933/1: amba-pl011: support hardware flow control
ARM: 5930/1: Add PKMAP area description to memory.txt.
ARM: 5929/1: Add checks to detect overlap of memory regions.
ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
ARM: 5927/1: Make delimiters of DMA area globally visibly.
ARM: 5926/1: Add "Virtual kernel memory..." printout.
ARM: 5920/1: OMAP4: Enable L2 Cache
...
Fix up trivial conflict in arch/arm/mach-mx25/clock.c
|
|
If we're only reading the VFP context via the ptrace call, there's
no need to invalidate the hardware context - we only need to do that
on PTRACE_SETVFPREGS. This allows more efficient monitoring of a
traced task.
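Conceptually, the read and write paths now differ as sketched below (helper naming follows the sync/flush split used by the VFP code; the ptrace plumbing itself lives in arch/arm/kernel/ptrace.c):

        /* PTRACE_GETVFPREGS: just bring the software copy up to date */
        vfp_sync_hwstate(thread);
        /* ... copy thread->vfpstate out to the tracer ... */

        /*
         * PTRACE_SETVFPREGS: update the software copy, then force the
         * hardware to reload it on the next VFP use.
         */
        vfp_sync_hwstate(thread);
        /* ... copy the tracer's registers into thread->vfpstate ... */
        vfp_flush_hwstate(thread);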
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The more I look at vfp_sync_state(), the more I believe it's trying
to do its job in a really obscure way.
Essentially, last_VFP_context[] tracks who owns the state in the VFP
hardware. If last_VFP_context[] is the context for the thread which
we're interested in, then the VFP hardware has context which is not
saved in the software state - so we need to bring the software state
up to date.
If last_VFP_context[] is for some other thread, we really don't care
what state the VFP hardware is in; it doesn't contain any information
pertinent to the thread we're trying to deal with - so don't touch
the hardware.
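In other words, the whole job reduces to a single ownership test; a sketch:

        if (last_VFP_context[cpu] == &thread->vfpstate) {
                /*
                 * The hardware holds the newest copy of this thread's
                 * state: write it back so the software state is current.
                 */
                vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);
        }
        /* otherwise the hardware belongs to another thread - leave it alone */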
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Commit c98929c07a removed the clearing of the FPSCR[31:28] bits from the
vfp_raise_exceptions() function, so the new bits are OR'ed with the old
FPSCR bits, leading to unexpected results (the original commit was
referring to the cumulative bits, FPSCR[4:0]).
Reported-by: Tom Hameenanttila <tmhameen@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This avoids races in the VFP code where the dead thread may have
state on another CPU. By moving this code to exit_thread(), we
will be running as the thread, and therefore be running on the
current CPU.
This means that we can ensure that the only local state is accessed
in the thread notifiers.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When the VFP notifier is called for flush_thread(), we may be
preemptible, meaning we might migrate to another CPU, which means
referencing the current CPU number without some form of locking is
invalid, and can cause data corruption.
In most cases this isn't a problem, since atomic notifiers are run
under the RCU lock, which for most configurations results in preemption
being disabled - except when the preemptible tree-based RCU
implementation is selected.
Let's make it safe anyway.
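The safe pattern is to pin the task to its CPU for the duration of the per-CPU access; a sketch of the flush path using the last_VFP_context naming of that time:

        cpu = get_cpu();                /* disables preemption */
        if (last_VFP_context[cpu] == &thread->vfpstate)
                last_VFP_context[cpu] = NULL;
        fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);   /* trap on next VFP use */
        put_cpu();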
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This CPU generates synchronous VFP exceptions in a non-standard way -
the FPEXC.EX bit set but without the FPSCR.IXE bit being set like in the
VFP subarchitecture 1 or just the FPEXC.DEX bit like in VFP
subarchitecture 2. The main problem is that the faulty instruction
(which needs to be emulated in software) will be restarted several times
(normally until a context switch disables the VFP). This patch ensures
that the VFP exception is treated as synchronous.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nico@cam.org>
|
|
We've observed that ARM VFP state can be corrupted during VFP exception
handling when PREEMPT is enabled. The exact conditions are difficult
to reproduce but appear to occur during VFP exception handling when a
task causes a VFP exception which is handled via VFP_bounce and is then
preempted by yet another task which in turn causes yet another VFP
exception. Since the VFP_bounce code is not preempt safe, VFP state then
becomes corrupt. In order to prevent preemption from occurring while
handling a VFP exception, this patch disables preemption while handling
VFP exceptions.
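The real change is in the exception entry/return path, but its effect is equivalent to bracketing the bounce handler as sketched here:

        preempt_disable();
        VFP_bounce(trigger, fpexc, regs);       /* handle/emulate the faulting insn */
        preempt_enable();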
Signed-off-by: George G. Davis <gdavis@mvista.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The VFPv3D16 is a VFPv3 CPU configuration where only 16 double registers
are present, as in the VFPv2 configuration. This patch adds the
corresponding hwcap bits so that applications or debuggers have more
information about the supported features.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch adds ptrace support for setting and getting the VFP registers
using PTRACE_SETVFPREGS and PTRACE_GETVFPREGS. The user_vfp structure
defined in asm/user.h contains 32 double registers (to cover VFPv3 and
Neon hardware) and the FPSCR register.
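The layout described above amounts to the following (a sketch consistent with the description; asm/user.h has the authoritative definition):

        struct user_vfp {
                unsigned long long fpregs[32];  /* d0-d31 */
                unsigned long fpscr;
        };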
Cc: Paul Brook <paul@codesourcery.com>
Cc: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When CONFIG_PM is selected, the VFP code does not install any handler
to save the VFP state of the current task on suspend, nor does it do
anything to restore the VFP after a resume.
On resume, the VFP will have been reset and the co-processor access
control registers are in an indeterminate state (very probably the
CP10 and CP11 the VFP uses will have been disabled by the ARM core
reset). When this happens, resume will break as soon as it tries to
unfreeze the tasks and restart scheduling.
Add a sys device to allow us to hook the suspend call to save the
current thread state if the thread is using VFP and a resume hook
which restores the CP10/CP11 access and ensures the VFP is disabled
so that the lazy swapping will take place on next access.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
It's never used and the comments refer to nonatomic and retry
interchangeably. So get rid of it.
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
|
This patch allows the VFP support code to run correctly on CPUs
compatible with the common VFP subarchitecture specification (Appendix
B in the ARM ARM v7-A and v7-R edition). It implements support for VFP
subarchitecture 2 while being backwards compatible with
subarchitecture 1.
On VFP subarchitecture 1, the arithmetic exceptions are asynchronous
(or imprecise as described in the old ARM ARM) unless the FPSCR.IXE
bit is 1. The exceptional instructions can be read from FPINST and
FPINST2 registers. With VFP subarchitecture 2, the arithmetic
exceptions can also be synchronous and marked by the FPEXC.DEX bit
(the FPEXC.EX bit is cleared). CPUs implementing the synchronous
arithmetic exceptions don't have the FPINST and FPINST2 registers and
accessing them would trigger an undefined exception.
Note that the FPEXC.EX bit has an additional meaning on subarchitecture 1
- if it isn't set, there is no additional information in FPINST and
FPINST2 that needs to be saved at context switch or when lazy-loading
the VFP state of a different thread.
The patch also removes the clearing of the cumulative exception flags in
FPSCR when additional exceptions were raised. It is up to the user
application to clear these bits.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
All exception flags of the FPEXC register must be cleared before
returning from exception code to user code, including FP2V and OFC.
Signed-off-by: Takashi Ohmasa <ohmasa.takashi@jp.panasonic.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
vfp_init() takes care of the condition when CONFIG_VFP=y but no real VFP
device exists. However, when this condition is true, a compiler might
reorder code in a way that breaks this support. (To be more specific,
fmrx(FPSID) might be executed before the vfp_testing_entry assignment,
which ends up with an Oops - undefined instruction.)
This patch adds a barrier() to guarantee the right execution ordering.
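A sketch of the ordering fix (vfp_vector being the undefined-instruction hook that vfp_testing_entry is installed into):

        vfp_vector = vfp_testing_entry;
        barrier();              /* don't let the FPSID read be hoisted above this */
        vfpsid = fmrx(FPSID);   /* traps into vfp_testing_entry if no VFP present */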
Signed-off-by: Assaf Hoffman
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Use the fpexc abbreviated names instead of long verbose names
for fpexc bits.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Fix a real section mismatch issue; the test code is thrown away after
initialisation, but if we do not detect the VFP hardware, it is left
hooked into the exception handler. Any VFP instructions which are
subsequently executed risk calling the discarded exception handler.
Introduce a new "null" handler which returns to the "unrecognised
fault" return address.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The current lazy saving of the VFP registers is no longer possible
with thread migration on SMP. This patch implements a per-CPU
vfp-state pointer and the saving of the VFP registers at every context
switch. Restoring the registers is still performed lazily.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When we install the handlers for context switching, we must enable
VFP on all CPU cores, otherwise undefined (and random) effects
occur.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Don't set HWCAP_VFP in the processor support file; not only does it
depend on the processor features, but it also depends on the support
code being present. Therefore, only set it if the support code
detects that we have a VFP coprocessor attached.
Also, move the VFP handling of the coprocessor access register into
the VFP support code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The SIGFPE signal should be generated if a Division by Zero exception is detected.
Signed-off-by: Takashi Ohmasa <ohmasa.takashi@jp.panasonic.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The common case for the thread notifier is a context switch. Tell
gcc that this is the most likely condition so it can optimise the
function for this case.
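The change itself is a one-line hint; a before/after sketch:

        /* before */
        if (cmd == THREAD_NOTIFY_SWITCH)

        /* after: tell gcc that a context switch is the expected, hot path */
        if (likely(cmd == THREAD_NOTIFY_SWITCH))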
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Patch from Daniel Jacobowitz
The recent fix to hide VFP_NAN_FLAG broke the check in vfp_raise_exceptions;
it would attempt to deliver an exception mask of 0xfffffeff instead of reporting
a serious error condition using printk. Define a safe constant to use for
an invalid exception mask, and use it at both ends.
Signed-off-by: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|