path: root/include/asm-x86
Age | Commit message | Author | Files | Lines (-/+)
2008-10-06 | Merge branches 'x86/alternatives', 'x86/cleanups', 'x86/commandline', 'x86/crashdump', 'x86/debug', 'x86/defconfig', 'x86/doc', 'x86/exports', 'x86/fpu', 'x86/gart', 'x86/idle', 'x86/mm', 'x86/mtrr', 'x86/nmi-watchdog', 'x86/oprofile', 'x86/paravirt', 'x86/reboot', 'x86/sparse-fixes', 'x86/tsc', 'x86/urgent' and 'x86/vmalloc' into x86-v28-for-linus-phase1 | Ingo Molnar | 18 | -52/+84
2008-10-06 | Merge branch 'x86/tracehook' into x86-v28-for-linus-phase1 | Ingo Molnar | 3 | -1/+219
Conflicts: arch/x86/kernel/signal_64.c Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-06 | Merge branch 'x86/prototypes' into x86-v28-for-linus-phase1 | Ingo Molnar | 19 | -3/+151
Conflicts: arch/x86/kernel/process_32.c Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-06 | Merge branch 'x86/pebs' into x86-v28-for-linus-phase1 | Ingo Molnar | 4 | -57/+265
Conflicts: include/asm-x86/ds.h Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-06 | Merge branch 'x86/header-guards' into x86-v28-for-linus-phase1 | Ingo Molnar | 282 | -845/+845
Conflicts:
    include/asm-x86/dma-mapping.h
    include/asm-x86/gpio.h
    include/asm-x86/idle.h
    include/asm-x86/kvm_host.h
    include/asm-x86/namei.h
    include/asm-x86/uaccess.h
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-02 | inotify: fix lock ordering wrt do_page_fault's mmap_sem | Nick Piggin | 1 | -0/+1
Fix inotify lock order reversal with mmap_sem due to holding locks over copy_to_user. Signed-off-by: Nick Piggin <npiggin@suse.de> Reported-by: "Daniel J Blueman" <daniel.blueman@gmail.com> Tested-by: "Daniel J Blueman" <daniel.blueman@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-30 | x86, debug printouts: IOMMU setup failures should not be KERN_ERR | Adam Jackson | 1 | -3/+3
The number of BIOSes that have an option to enable the IOMMU, or fix anything about its configuration, is vanishingly small. There's no good reason to punish quiet boot for this. Signed-off-by: Adam Jackson <ajax@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-26 | kgdb, x86_64: fix PS CS SS registers in gdb serial | Jason Wessel | 1 | -11/+9
On x86_64 the gdb serial register structure defines the PS (also known as eflags), CS and SS registers as 4 bytes entities. This patch splits the x86_64 regnames enum into a 32 and 64 version to account for the 32 bit entities in the gdb serial packets. Also the program counter is properly filled in for the sleeping threads. Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
2008-09-26 | kgdb, x86_64: gdb serial has BX and DX reversed | Jason Wessel | 1 | -2/+2
The BX and DX registers in the gdb serial register packet need to be flipped for gdb to receive the correct data. Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
2008-09-24 | Merge commit 'v2.6.27-rc7' into x86/pebs | Ingo Molnar | 36 | -135/+301
2008-09-23 | x86: prevent C-states hang on AMD C1E enabled machines | Thomas Gleixner | 2 | -0/+3
Impact: system hang when AMD C1E machines switch into C2/C3

AMD C1E enabled systems do not work with normal ACPI C-states even if the BIOS is advertising them. Limit the C-states to C1 for the ACPI processor idle code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-23 | x86: prevent stale state of c1e_mask across CPU offline/online | Thomas Gleixner | 1 | -0/+2
Impact: hang which happens across CPU offline/online on AMD C1E systems. When a CPU goes offline then the corresponding bit in the broadcast mask is cleared. For AMD C1E enabled CPUs we do not reenable the broadcast when the CPU comes online again as we do not clear the corresponding bit in the c1e_mask, which keeps track which CPUs have been switched to broadcast already. So on those !$@#& machines we never switch back to broadcasting after a CPU offline/online cycle. Clear the bit when the CPU plays dead. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-22 | x86, NMI watchdog: setup before enabling NMI watchdog | Aristeu Rozanski | 1 | -0/+1
There's a small window while the NMI watchdog is being set up in which, if any NMIs are triggered, the NMI code will make use of uninitialized wd_ops elements:

    void setup_apic_nmi_watchdog(void *unused)
    {
            if (__get_cpu_var(wd_enabled))
                    return;

            /* cheap hack to support suspend/resume */
            /* if cpu0 is not active neither should the other cpus */
            if (smp_processor_id() != 0 && atomic_read(&nmi_active) <= 0)
                    return;

            switch (nmi_watchdog) {
            case NMI_LOCAL_APIC:
                    /* enable it before to avoid race with handler */
    -->             __get_cpu_var(wd_enabled) = 1;
    -->             if (lapic_watchdog_init(nmi_hz) < 0) {
    (...)

    asmlinkage notrace __kprobes void default_do_nmi(struct pt_regs *regs)
    {
    (...)
            if (nmi_watchdog_tick(regs, reason))
                    return;
    (...)

    notrace __kprobes int nmi_watchdog_tick(struct pt_regs *regs, unsigned reason)
    {
    (...)
            if (!__get_cpu_var(wd_enabled))
                    return rc;
            switch (nmi_watchdog) {
            case NMI_LOCAL_APIC:
                    rc |= lapic_wd_event(nmi_hz);
    (...)

    int lapic_wd_event(unsigned nmi_hz)
    {
            struct nmi_watchdog_ctlblk *wd = &__get_cpu_var(nmi_watchdog_ctlblk);
            u64 ctr;
    -->     rdmsrl(wd->perfctr_msr, ctr);

and wd->*_msr will only be initialized in the processor-type-specific setup, after NMIs for PMIs have been enabled. Since the counter was just set, the chance of a performance-counter-generated NMI is minimal, but any other unknown NMI would trigger the problem. This patch fixes the problem by setting everything up before enabling performance-counter-generated NMIs, and sets wd_enabled using a callback function.

Signed-off-by: Aristeu Rozanski <aris@redhat.com>
Acked-by: Don Zickus <dzickus@redhat.com>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 | Merge commit 'v2.6.27-rc7' into x86/debug | Ingo Molnar | 45 | -261/+334
2008-09-19 | Merge commit 'v2.6.27-rc6' into x86/cleanups | Ingo Molnar | 3 | -7/+14
2008-09-17 | x86, debug: gpio_free might sleep | Uwe Kleine-König | 1 | -0/+3
According to the documentation, gpio_free should be called from task context only. To make this more explicit, add a might_sleep() to all implementations. This patch changes the gpio_free implementations for the x86 architecture. Signed-off-by: Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
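The shape of the change, as a minimal sketch only (the real asm-x86/gpio.h wrapper does more after the check; the body here is deliberately elided):

    #include <linux/kernel.h>   /* might_sleep() */

    static inline void gpio_free(unsigned int gpio)
    {
            /*
             * gpio_free() may only be called from task context; might_sleep()
             * documents that and warns at runtime if the caller is atomic.
             */
            might_sleep();

            /* ... architecture-specific release of the GPIO goes here ... */
    }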
2008-09-10 | x86: unsigned long pte_pfn | Hugh Dickins | 4 | -11/+7
pte_pfn() has always been of type unsigned long, even on 32-bit PAE; but in the current tip/next/mm tree it works out to be unsigned long long on 64-bit, which gives an irritating warning if you try to printk a pfn with the usual %lx. Now use the same pte_pfn() function, moved from pgtable-3level.h to pgtable.h, for all models: as suggested by Jeremy Fitzhardinge. And pte_page() can well move along with it (remaining a macro to avoid dependence on mm_types.h). Signed-off-by: Hugh Dickins <hugh@veritas.com> Acked-by: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 | x86-64: eliminate dead code | Jan Beulich | 1 | -5/+0
Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 | x86: add NOPL as a synthetic CPU feature bit | H. Peter Anvin | 2 | -6/+13
The long noops ("NOPL") are supposed to be detected by family >= 6. Unfortunately, several non-Intel x86 implementations, both hardware and software, don't obey this dictum. Instead, probe for NOPL directly by executing a NOPL instruction and see if we get #UD. Signed-off-by: H. Peter Anvin <hpa@zytor.com>
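The kernel's probe catches #UD via its exception tables; a rough userspace analogue of the same idea (illustration only, not the kernel code) executes the 3-byte NOPL encoding and traps SIGILL:

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf jb;

    static void on_sigill(int sig)
    {
            siglongjmp(jb, 1);      /* #UD surfaced as SIGILL: no NOPL */
    }

    int main(void)
    {
            int have_nopl = 0;

            signal(SIGILL, on_sigill);
            if (sigsetjmp(jb, 1) == 0) {
                    /* 0F 1F 00 is the canonical 3-byte NOPL, nopl (%eax) */
                    asm volatile(".byte 0x0f, 0x1f, 0x00");
                    have_nopl = 1;
            }
            printf("NOPL %ssupported\n", have_nopl ? "" : "not ");
            return 0;
    }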
2008-09-05 | x86, tracehook: clean up implementation of syscall_get_error() | Petr Tesarik | 1 | -1/+2
The x86-tracehook code now contains this line in syscall_get_error(): return error >= -4095L ? error : 0; Hard-wiring a constant is not nice. Let's use the IS_ERR_VALUE macro from linux/err.h instead. Signed-off-by: Petr Tesarik <ptesarik@suse.cz> Cc: utrace-devel@redhat.com Acked-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
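A small userspace illustration of the same test (syscall_error() here is a hypothetical stand-in for the real syscall_get_error(); in the kernel, IS_ERR_VALUE() and MAX_ERRNO come from <linux/err.h>):

    #include <stdio.h>

    #define MAX_ERRNO       4095
    #define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

    static long syscall_error(long ret)
    {
            /* old form: return ret >= -4095L ? ret : 0; */
            return IS_ERR_VALUE(ret) ? ret : 0;
    }

    int main(void)
    {
            /* prints "-22 0": -22 is in the errno range, 3 is a real result */
            printf("%ld %ld\n", syscall_error(-22), syscall_error(3));
            return 0;
    }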
2008-09-05 | Merge branch 'linus' into x86/tracehook | Ingo Molnar | 22 | -46/+82
2008-08-28 | Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 | -13/+14
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: update defconfigs
  x86: msr: fix bogus return values from rdmsr_safe/wrmsr_safe
  x86: cpuid: correct return value on partial operations
  x86: msr: correct return value on partial operations
  x86: cpuid: propagate error from smp_call_function_single()
  x86: msr: propagate errors from smp_call_function_single()
  smp: have smp_call_function_single() detect invalid CPUs
2008-08-25 | Merge branch 'x86/urgent' into x86/cleanups | H. Peter Anvin | 3 | -14/+16
2008-08-25 | x86: msr: fix bogus return values from rdmsr_safe/wrmsr_safe | H. Peter Anvin | 1 | -8/+8
Impact: bogus error codes (+other?) on x86-64

The rdmsr_safe/wrmsr_safe routines have macros for the handling of the edx:eax arguments. Those macros take a variable number of assembly arguments. This is rather inherently incompatible with using %digit-style escapes in the inline assembly; replace those with %[name]-style escapes. This fixes miscompilation on x86-64, which at the very least caused bogus return values. It is possible that this could also corrupt the return value; I am not sure. Signed-off-by: H. Peter Anvin <hpa@zytor.com>
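For reference, the %[name] operand syntax in question, shown on a hypothetical example rather than the kernel's rdmsr_safe() macro: positional escapes (%0, %1, ...) shift meaning whenever the operand list changes, while named escapes stay attached to their operand.

    #include <stdio.h>

    static unsigned long add3(unsigned long a, unsigned long b, unsigned long c)
    {
            unsigned long out;

            /* named asm operands: each %[name] refers to its bracketed operand */
            asm("mov %[x], %[res]\n\t"
                "add %[y], %[res]\n\t"
                "add %[z], %[res]"
                : [res] "=&r" (out)                      /* earlyclobber output */
                : [x] "r" (a), [y] "r" (b), [z] "r" (c));
            return out;
    }

    int main(void)
    {
            printf("%lu\n", add3(1, 2, 3));   /* prints 6 */
            return 0;
    }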
2008-08-25 | x86: msr: propagate errors from smp_call_function_single() | H. Peter Anvin | 1 | -5/+6
Propagate error (-ENXIO) from smp_call_function_single(). These errors can happen when a CPU is unplugged while the MSR driver is open. Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 | Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 2 | -0/+3
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: add X86_FEATURE_XMM4_2 definitions
  x86: fix cpufreq + sched_clock() regression
  x86: fix HPET regression in 2.6.26 versus 2.6.25, check hpet against BAR, v3
  x86: do not enable TSC notifier if we don't need it
  x86 MCE: Fix CPU hotplug problem with multiple multicore AMD CPUs
  x86: fix: make PCI ECS for AMD CPUs hotplug capable
  x86: fix: do not run code in amd_bus.c on non-AMD CPUs
2008-08-25 | x86: add X86_FEATURE_XMM4_2 definitions | Austin Zhang | 1 | -0/+2
Added Intel processor SSE4.2 feature flag. No in-tree user at the moment, but makes the tree-merging life easier for the crypto tree. Signed-off-by: Austin Zhang <austin.zhang@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
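For reference, a small userspace check of the CPUID bit this flag mirrors (SSE4.2 is CPUID.01H:ECX bit 20), assuming GCC/Clang's <cpuid.h> helper:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* Leaf 1, ECX bit 20 = SSE4.2 ("sse4_2" in /proc/cpuinfo) */
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 20)))
                    puts("sse4_2 supported");
            else
                    puts("sse4_2 not supported");
            return 0;
    }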
2008-08-25 | KVM: Use .fixup instead of .text.fixup on __kvm_handle_fault_on_reboot | Eduardo Habkost | 1 | -1/+1
vmlinux.lds expects the fixup code to be on a section named .fixup. The .text.fixup section is not mentioned on vmlinux.lds, and is included on the resulting vmlinux (just after .text) only because of ld heuristics on placing orphan sections. However, placing .text.fixup outside .text breaks the definition of _etext, making it exclude the .text.fixup contents. That makes .text.fixup be ignored by the kernel initialization code that needs to know about section locations, such as the code setting page protection bits. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-08-25 | Merge branch 'linus' into x86/urgent | Ingo Molnar | 1 | -1/+0
2008-08-25 | Merge branch 'x86/urgent' into x86/cleanups | Ingo Molnar | 5 | -5/+4
2008-08-23 | removed unused #include <linux/version.h>'s | Adrian Bunk | 1 | -1/+0
This patch lets the files using linux/version.h match the files that #include it. Signed-off-by: Adrian Bunk <bunk@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-08-23 | x86 MCE: Fix CPU hotplug problem with multiple multicore AMD CPUs | Rafael J. Wysocki | 1 | -0/+1
During CPU hot-remove the sysfs directory created by threshold_create_bank(), defined in arch/x86/kernel/cpu/mcheck/mce_amd_64.c, has to be removed before its parent directory, created by mce_create_device(), defined in arch/x86/kernel/cpu/mcheck/mce_64.c . Moreover, when the CPU in question is hotplugged again, obviously the latter has to be created before the former. At present, the right ordering is not enforced, because all of these operations are carried out by CPU hotplug notifiers which are not appropriately ordered with respect to each other. This leads to serious problems on systems with two or more multicore AMD CPUs, among other things during suspend and hibernation. Fix the problem by placing threshold bank CPU hotplug callbacks in mce_cpu_callback(), so that they are invoked at the right places, if defined. Additionally, use kobject_del() to remove the sysfs directory associated with the kobject created by kobject_create_and_add() in threshold_create_bank(), to prevent the kernel from crashing during CPU hotplug operations on systems with two or more multicore AMD CPUs. This patch fixes bug #11337. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Andi Kleen <andi@firstfloor.org> Tested-by: Mark Langsdorf <mark.langsdorf@amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 | x86: fix section mismatch warning - uv_cpu_init | Marcin Slusarz | 2 | -0/+2
WARNING: vmlinux.o(.cpuinit.text+0x3cc4): Section mismatch in reference from the function uv_cpu_init() to the function .init.text:uv_system_init()
The function __cpuinit uv_cpu_init() references a function __init uv_system_init(). If uv_system_init is only used by uv_cpu_init then annotate uv_system_init with a matching annotation.

uv_system_init was meant to be called only once, so do it from a codepath (native_smp_prepare_cpus) which is called once, right before the activation of the other cpus (smp_init).

Note: the old code relied on uv_node_to_blade being initialized to 0, but it's not initialized from anywhere.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Acked-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 | x86_64: printout msr -v2 | Yinghai Lu | 2 | -0/+35
Command line: show_msr=1 prints the MSRs for the BSP; show_msr=32 prints them for all 32 CPUs.

[ mingo@elte.hu: added documentation ]
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 | x86, paravirt_ops: use unsigned long instead of u32 for alloc_p*() pfn args | Eduardo Habkost | 1 | -15/+15
This patch changes the pfn args from 'u32' to 'unsigned long' on alloc_p*() functions on paravirt_ops, and the corresponding implementations for Xen and VMI. The prototypes for CONFIG_PARAVIRT=n are already using unsigned long, so paravirt.h now matches the prototypes on asm-x86/pgalloc.h. It shouldn't result in any changes on generated code on 32-bit, with or without CONFIG_PARAVIRT. On both cases, 'codiff -f' didn't show any change after applying this patch. On 64-bit, there are (expected) binary changes only when CONFIG_PARAVIRT is enabled, as the patch is really supposed to change the size of the pfn args. [ v2: KVM_GUEST: use the right parameter type on kvm_release_pt() ] Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Acked-by: Jeremy Fitzhardinge <jeremy@goop.org> Acked-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-21 | i386: vmalloc size fix | Dave Young | 2 | -5/+3
Booting the kernel with vmalloc=[any size <= 16m] will oops on my PC (i386/1G memory). The BUG_ON in arch/x86/mm/init_32.c triggers:

    BUG_ON((unsigned long)high_memory > VMALLOC_START);

It's due to the vm area hole. In include/asm-x86/pgtable_32.h:

    #define VMALLOC_OFFSET  (8 * 1024 * 1024)
    #define VMALLOC_START   (((unsigned long)high_memory + 2 * VMALLOC_OFFSET - 1) \
                             & ~(VMALLOC_OFFSET - 1))

There are several related points:

1. MAXMEM: (-__PAGE_OFFSET - __VMALLOC_RESERVE). The space after VMALLOC_END is included as well; I set it to (VMALLOC_END - PAGE_OFFSET - __VMALLOC_RESERVE).
2. VMALLOC_OFFSET is not considered in __VMALLOC_RESERVE; fixed by adding VMALLOC_OFFSET to it.
3. VMALLOC_START: (((unsigned long)high_memory + 2 * VMALLOC_OFFSET - 1) & ~(VMALLOC_OFFSET - 1)). So it is not always 8M; bigger than 8M is possible. I set it to ((unsigned long)high_memory + VMALLOC_OFFSET), as shown in the check below.
4. VMALLOC_RESERVE is an unused macro, so remove it here.

Signed-off-by: Dave Young <hidave.darkstar@gmail.com>
Cc: akpm@linux-foundation.org
Cc: hidave.darkstar@gmail.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
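A quick arithmetic check of point 3 above, using a made-up, non-8MB-aligned high_memory value (illustrative only):

    #include <stdio.h>

    #define VMALLOC_OFFSET (8UL * 1024 * 1024)

    int main(void)
    {
            unsigned long high_memory = 0xf7b00000UL;   /* hypothetical, not 8MB aligned */

            unsigned long old_start = (high_memory + 2 * VMALLOC_OFFSET - 1)
                                      & ~(VMALLOC_OFFSET - 1);
            unsigned long new_start = high_memory + VMALLOC_OFFSET;

            /* old formula rounds up, so the hole is anywhere between 8MB and 16MB;
             * the new formula always leaves exactly VMALLOC_OFFSET.
             * Prints "old gap 13 MB" vs "new gap 8 MB" for this value. */
            printf("old gap %lu MB\n", (old_start - high_memory) >> 20);
            printf("new gap %lu MB\n", (new_start - high_memory) >> 20);
            return 0;
    }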
2008-08-20 | x86, SGI UV: hardcode the TLB flush interrupt system vector | Cliff Wickman | 2 | -5/+1
The UV TLB shootdown mechanism needs a system interrupt vector. Its vector had been hardcoded as 200, but it needs to be moved to the reserved system vector range so that it does not collide with some device vector. This is still temporary until dynamic system IRQ allocation is provided, but it will be needed when real UV hardware becomes available and runs 2.6.27. Signed-off-by: Cliff Wickman <cpw@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-20 | x86_64: use save/loadsegment in ia32 compat | Jeremy Fitzhardinge | 1 | -2/+3
Use savesegment and loadsegment consistently in ia32 compat code. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-20 | Merge branch 'linus' into x86/cleanups | Ingo Molnar | 15 | -29/+112
2008-08-18 | x86: <asm/asm.h> consistency cleanups | H. Peter Anvin | 2 | -2/+7
Rename _ASM_MOV_UL to _ASM_MOV for consistency with the other _ASM_ instructions (_ASM_ADD, _ASM_SUB and so on). Add _ASM_SP, _ASM_BP, _ASM_SI, and _ASM_DI for consistency with _ASM_[ABCD]X. Signed-off-by: H. Peter Anvin <hpa@zytor.com>
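The flavour of these helpers, sketched from the naming scheme described above rather than the header's exact definitions (the real <asm/asm.h> builds them via intermediate macros so they work in both .S files and inline asm):

    /* Width-agnostic assembler helpers: one name expands to the natural
     * instruction/register width for the build.  Sketch only. */
    #ifdef __x86_64__
    # define _ASM_MOV  "movq"
    # define _ASM_SP   "%rsp"
    # define _ASM_BP   "%rbp"
    #else
    # define _ASM_MOV  "movl"
    # define _ASM_SP   "%esp"
    # define _ASM_BP   "%ebp"
    #endif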
2008-08-18 | Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 3 | -6/+6
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: fix build warnings in real mode code
  x86, calgary: fix section mismatch warning - get_tce_space_from_tar
  x86: silence section mismatch warning - get_local_pda
  x86, percpu: silence section mismatch warnings related to EARLY_PER_CPU variables
  x86: fix i486 suspend to disk CR4 oops
  x86: mpparse.c: fix section mismatch warning
  x86: mmconf: fix section mismatch warning
  x86: fix MP_processor_info section mismatch warning
  x86, tsc: fix section mismatch warning
  x86: correct register constraints for 64-bit atomic operations
2008-08-18 | x86, percpu: silence section mismatch warnings related to EARLY_PER_CPU variables | Marcin Slusarz | 1 | -1/+1
Quoting Mike Travis in "x86: cleanup early per cpu variables/accesses v4" (23ca4bba3e20c6c3cb11c1bb0ab4770b724d39ac):

    The DEFINE macro defines the per_cpu variable as well as the early map and pointer. It also initializes the per_cpu variable and map elements to "_initvalue". The early_* macros provide access to the initial map (usually setup during system init) and the early pointer. This pointer is initialized to point to the early map but is then NULL'ed when the actual per_cpu areas are setup. After that the per_cpu variable is the correct access to the variable.

As these variables are NULL'ed before __init sections are dropped (in setup_per_cpu_maps), they can be safely annotated as __ref.

This change silences the following section mismatch warnings:

WARNING: vmlinux.o(.data+0x46c0): Section mismatch in reference from the variable x86_cpu_to_apicid_early_ptr to the variable .init.data:x86_cpu_to_apicid_early_map
The variable x86_cpu_to_apicid_early_ptr references the variable __initdata x86_cpu_to_apicid_early_map
If the reference is valid then annotate the variable with __init* (see linux/init.h) or name the variable: *driver, *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console,

WARNING: vmlinux.o(.data+0x46c8): Section mismatch in reference from the variable x86_bios_cpu_apicid_early_ptr to the variable .init.data:x86_bios_cpu_apicid_early_map
The variable x86_bios_cpu_apicid_early_ptr references the variable __initdata x86_bios_cpu_apicid_early_map
If the reference is valid then annotate the variable with __init* (see linux/init.h) or name the variable: *driver, *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console,

WARNING: vmlinux.o(.data+0x46d0): Section mismatch in reference from the variable x86_cpu_to_node_map_early_ptr to the variable .init.data:x86_cpu_to_node_map_early_map
The variable x86_cpu_to_node_map_early_ptr references the variable __initdata x86_cpu_to_node_map_early_map
If the reference is valid then annotate the variable with __init* (see linux/init.h) or name the variable: *driver, *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console,

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-18 | x86: mmconf: fix section mismatch warning | Marcin Slusarz | 1 | -1/+1
WARNING: arch/x86/kernel/built-in.o(.cpuinit.text+0x1591): Section mismatch in reference from the function init_amd() to the function .init.text:check_enable_amd_mmconf_dmi()
The function __cpuinit init_amd() references a function __init check_enable_amd_mmconf_dmi(). If check_enable_amd_mmconf_dmi is only used by init_amd then annotate check_enable_amd_mmconf_dmi with a matching annotation.

check_enable_amd_mmconf_dmi is only called from init_amd, which is __cpuinit.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-18 | x86: correct register constraints for 64-bit atomic operations | Mathieu Desnoyers | 1 | -4/+4
The x86_64 add/sub atomic ops do not accept integer values bigger than 32 bits as immediates; Intel's add/sub documentation specifies they have to be passed in registers. The only operation in the x86-64 architecture which accepts an arbitrary 64-bit immediate is "movq" to any register; similarly, the only operation which accepts an arbitrary 64-bit displacement is "movabs" to or from al/ax/eax/rax.

http://gcc.gnu.org/onlinedocs/gcc-4.3.0/gcc/Machine-Constraints.html states:

    e   32-bit signed integer constant, or a symbolic reference known to fit that range (for immediate operands in sign-extending x86-64 instructions).
    Z   32-bit unsigned integer constant, or a symbolic reference known to fit that range (for immediate operands in zero-extending x86-64 instructions).

Since add/sub does sign extension, using the "e" constraint seems appropriate. This applies to 2.6.27-rc, 2.6.26, 2.6.25...

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
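A runnable x86-64 illustration of the constraint in question (a hypothetical atomic add helper, not the kernel's atomic64_add() itself): "er" lets gcc encode a 32-bit signed immediate when the value fits and falls back to a register otherwise.

    #include <stdio.h>

    static inline void atomic_add64(long i, volatile long *v)
    {
            /* "er" = 32-bit signed immediate ("e") or register ("r"); a bare
             * "i" could let gcc try to emit an illegal 64-bit immediate. */
            asm volatile("lock addq %1, %0"
                         : "+m" (*v)
                         : "er" (i)
                         : "memory");
    }

    int main(void)
    {
            long v = 1;

            atomic_add64(41, &v);              /* fits in a signed 32-bit immediate */
            atomic_add64(0x100000000L, &v);    /* too big: goes through a register */
            printf("%ld\n", v);                /* prints 4294967338 */
            return 0;
    }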
2008-08-16 | Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 8 | -18/+48
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (32 commits)
  x86: add MAP_STACK mmap flag
  x86: fix section mismatch warning - spp_getpage()
  x86: change init_gdt to update the gdt via write_gdt, rather than a direct write.
  x86-64: fix overlap of modules and fixmap areas
  x86, geode-mfgpt: check IRQ before using MFGPT as clocksource
  x86, acpi: cleanup, temp_stack is used only when CONFIG_SMP is set
  x86: fix spin_is_contended()
  x86, nmi: clean UP NMI watchdog failure message
  x86, NMI: fix watchdog failure message
  x86: fix /proc/meminfo DirectMap
  x86: fix readb() et al compile error with gcc-3.2.3
  arch/x86/Kconfig: clean up, experimental adjustement
  x86: invalidate caches before going into suspend
  x86, perfctr: don't use CCCR_OVF_PMI1 on Pentium 4Ds
  x86, AMD IOMMU: initialize dma_ops after sysfs registration
  x86m AMD IOMMU: cleanup: replace LOW_U32 macro with generic lower_32_bits
  x86, AMD IOMMU: initialize device table properly
  x86, AMD IOMMU: use status bit instead of memory write-back for completion wait
  x86: silence mmconfig printk
  x86, msr: fix NULL pointer deref due to msr_open on nonexistent CPUs
  ...
2008-08-15 | x86: spinlock use LOCK_PREFIX | Mathieu Desnoyers | 1 | -3/+3
Since we are now using DS prefixes instead of NOPs to remove LOCK prefixes, there are no longer any problems with instruction boundaries moving around.

* Linus Torvalds (torvalds@linux-foundation.org) wrote:
> On Thu, 14 Aug 2008, Mathieu Desnoyers wrote:
> >
> > Changing the 0x90 (single-byte nop) currently used into a 0x3E DS segment
> > override prefix should fix this issue. Since the default of the atomic
> > instructions is to use the DS segment anyway, it should not affect the
> > behavior.
>
> Ok, so I think this is an _excellent_ patch, but I'd like to also then use
> LOCK_PREFIX in include/asm-x86/futex.h.
>
> See commit 9d55b9923a1b7ea8193b8875c57ec940dc2ff027.
>
> Linus

Unless there is a rationale for this, I think these should be changed to LOCK_PREFIX too:

    grep "lock ;" include/asm-x86/spinlock.h
        "lock ; cmpxchgw %w1,%2\n\t"
        asm volatile("lock ; xaddl %0, %1\n"
        "lock ; cmpxchgl %1,%2\n\t"

Applies to 2.6.27-rc2.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
CC: Linus Torvalds <torvalds@linux-foundation.org>
CC: H. Peter Anvin <hpa@zytor.com>
CC: Jeremy Fitzhardinge <jeremy@goop.org>
CC: Roland McGrath <roland@redhat.com>
CC: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
CC: Steven Rostedt <srostedt@redhat.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: David Miller <davem@davemloft.net>
CC: Ulrich Drepper <drepper@redhat.com>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Gregory Haskins <ghaskins@novell.com>
CC: Arnaldo Carvalho de Melo <acme@redhat.com>
CC: "Luis Claudio R. Goncalves" <lclaudio@uudg.org>
CC: Clark Williams <williams@redhat.com>
CC: Christoph Lameter <cl@linux-foundation.org>
CC: Andi Kleen <andi@firstfloor.org>
CC: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
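The claim that a 0x3E DS override is behaviourally a no-op on these instructions can be checked directly; a tiny demo (illustrative only, not the kernel's LOCK_PREFIX patching machinery):

    #include <stdio.h>

    int main(void)
    {
            long v = 0;

            /* 0x3E is the DS segment-override prefix.  Prefixing the add has no
             * functional effect, which is why runtime patching can overwrite a
             * 1-byte LOCK prefix with it on UP, keeping the instruction length
             * identical instead of inserting a separate NOP instruction. */
            asm volatile(".byte 0x3e\n\t"
                         "addq $1, %0"
                         : "+m" (v));

            printf("%ld\n", v);   /* prints 1 */
            return 0;
    }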
2008-08-15 | x86: revert replace LOCK_PREFIX in futex.h | Mathieu Desnoyers | 1 | -3/+3
Since we now use DS prefixes instead of NOPs to remove LOCK prefixes, there are no longer any issues with instruction boundaries moving around.

Depends on: x86 alternatives: fix LOCK_PREFIX race with preemptible kernel and CPU hotplug

On Thu, 14 Aug 2008, Mathieu Desnoyers wrote:
> Changing the 0x90 (single-byte nop) currently used into a 0x3E DS segment
> override prefix should fix this issue. Since the default of the atomic
> instructions is to use the DS segment anyway, it should not affect the
> behavior.

Ok, so I think this is an _excellent_ patch, but I'd like to also then use LOCK_PREFIX in include/asm-x86/futex.h.

See commit 9d55b9923a1b7ea8193b8875c57ec940dc2ff027.

        Linus

Applies to 2.6.27-rc2 (and -rc3, unless hell broke loose in futex.h between rc2 and rc3).

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Linus Torvalds <torvalds@linux-foundation.org>
CC: H. Peter Anvin <hpa@zytor.com>
CC: Jeremy Fitzhardinge <jeremy@goop.org>
CC: Roland McGrath <roland@redhat.com>
CC: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
CC: Steven Rostedt <srostedt@redhat.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: David Miller <davem@davemloft.net>
CC: Ulrich Drepper <drepper@redhat.com>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Gregory Haskins <ghaskins@novell.com>
CC: Arnaldo Carvalho de Melo <acme@redhat.com>
CC: "Luis Claudio R. Goncalves" <lclaudio@uudg.org>
CC: Clark Williams <williams@redhat.com>
CC: Christoph Lameter <cl@linux-foundation.org>
CC: Andi Kleen <andi@firstfloor.org>
CC: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-15 | x86: add MAP_STACK mmap flag | Ingo Molnar | 1 | -0/+1
as per this discussion: http://lkml.org/lkml/2008/8/12/423 Pardo reported that 64-bit threaded apps, if their stacks exceed the combined size of ~4GB, slow down drastically in pthread_create() - because glibc uses MAP_32BIT to allocate the stacks. The use of MAP_32BIT is a legacy hack - to speed up context switching on certain early model 64-bit P4 CPUs. So introduce a new flag to be used by glibc instead, to not constrain 64-bit apps like this. glibc can switch to this new flag straight away - it will be ignored by the kernel. If those old CPUs ever matter to anyone, support for it can be implemented. Signed-off-by: Ingo Molnar <mingo@elte.hu> Acked-by: Ulrich Drepper <drepper@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
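What the flag looks like from userspace once glibc (or an application) starts passing it; since the kernel simply ignores it, this is safe to run. The MAP_STACK value is defined by hand as a fallback in case the toolchain headers predate the flag.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    #ifndef MAP_STACK
    #define MAP_STACK 0x20000    /* x86 value; currently ignored by the kernel */
    #endif

    int main(void)
    {
            size_t sz = 1 << 20;

            /* A thread-stack style allocation: anonymous, private, and marked
             * MAP_STACK instead of relying on the legacy MAP_32BIT hack. */
            void *stack = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
            if (stack == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            printf("stack at %p\n", stack);
            munmap(stack, sz);
            return 0;
    }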
2008-08-15 | x86: add MAP_STACK mmap flag | Ingo Molnar | 1 | -0/+1
as per this discussion: http://lkml.org/lkml/2008/8/12/423 Pardo reported that 64-bit threaded apps, if their stacks exceed the combined size of ~4GB, slow down drastically in pthread_create() - because glibc uses MAP_32BIT to allocate the stacks. The use of MAP_32BIT is a legacy hack - to speed up context switching on certain early model 64-bit P4 CPUs. So introduce a new flag to be used by glibc instead, to not constrain 64-bit apps like this. glibc can switch to this new flag straight away - it will be ignored by the kernel. If those old CPUs ever matter to anyone, support for it can be implemented. Signed-off-by: Ingo Molnar <mingo@elte.hu> Acked-by: Ulrich Drepper <drepper@gmail.com>
2008-08-15 | Merge branch 'x86/geode' into x86/urgent | Ingo Molnar | 1 | -1/+2