path: root/lib/atomic64.c

2019-06-03  locking/atomic: Use s64 for atomic64  (Mark Rutland; 1 file changed, -16/+16)
As a step towards making the atomic64 API use consistent types
treewide, let's have the generic atomic64 implementation use s64 as
the underlying type for atomic64_t, rather than long long, matching
the generated headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-4-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
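
For reference, a minimal sketch of the resulting shape of the generic
implementation (types only; the real patch makes the same substitution
in every op in lib/atomic64.c):

    typedef struct {
        s64 counter;
    } atomic64_t;

    s64 atomic64_read(const atomic64_t *v)
    {
        unsigned long flags;
        raw_spinlock_t *lock = lock_addr(v);
        s64 val;                /* was: long long val */

        raw_spin_lock_irqsave(lock, flags);
        val = v->counter;
        raw_spin_unlock_irqrestore(lock, flags);
        return val;
    }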

2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152  (Thomas Gleixner; 1 file changed, -5/+1)
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license as published by
  the free software foundation either version 2 of the license or at
  your option any later version

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
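
The net effect on a C source file is that the multi-paragraph license
comment at the top is replaced by a single tag line (sketch):

    // SPDX-License-Identifier: GPL-2.0-or-later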

2018-06-21  atomics/generic: Define atomic64_fetch_add_unless()  (Mark Rutland; 1 file changed, -7/+7)
As a step towards unifying the atomic/atomic64/atomic_long APIs, this
patch converts the generic implementation of atomic64_add_unless()
into a generic implementation of atomic64_fetch_add_unless().

A wrapper in <linux/atomic.h> will build atomic64_add_unless() atop of
this, provided it is given a preprocessor definition.

No functional change is intended as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-9-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
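
A minimal sketch of the converted helper, following the
spinlock-protected pattern lib/atomic64.c uses for everything else
(types shown as of this era, before the later s64 conversion):

    long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u)
    {
        unsigned long flags;
        raw_spinlock_t *lock = lock_addr(v);
        long long val;

        raw_spin_lock_irqsave(lock, flags);
        val = v->counter;
        if (val != u)                   /* only add when old value != u */
            v->counter += a;
        raw_spin_unlock_irqrestore(lock, flags);

        return val;                     /* old value; caller compares with u */
    }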

2018-06-21  atomics: Make conditional ops return 'bool'  (Mark Rutland; 1 file changed, -3/+3)
Some of the atomics return a status value, which is a boolean value
describing whether the operation was performed. To make it clear that
this is a boolean value, let's update the common fallbacks to return
bool, fixing up the return values and comments likewise.

At the same time, let's simplify the description of the operations in
their respective comments.

The instrumented atomics and generic atomic64 implementation are
updated accordingly. Note that atomic64_dec_if_positive() doesn't
follow the usual test op pattern, and returns the would-be decremented
value. This is not changed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-5-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
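
As a sketch, the bool-returning conditional op can be expressed on top
of the fetch variant from the entry above:

    static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
    {
        /* true iff the add was performed, i.e. the old value differed from u */
        return atomic64_fetch_add_unless(v, a, u) != u;
    }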

2016-06-16  locking/atomic: Implement atomic{,64,_long}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()  (Peter Zijlstra; 1 file changed, -4/+28)
Now that all the architectures have implemented support for these new
atomic primitives, add on the generic infrastructure to expose and use
it.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
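
A minimal sketch of how the generic atomic64 side generates the fetch
variants: a fetch op is an ordinary op that also returns the pre-op
value, stamped out per operation by a CPP macro:

    #define ATOMIC64_FETCH_OP(op, c_op)                         \
    long long atomic64_fetch_##op(long long a, atomic64_t *v)   \
    {                                                           \
        unsigned long flags;                                    \
        raw_spinlock_t *lock = lock_addr(v);                    \
        long long val;                                          \
                                                                \
        raw_spin_lock_irqsave(lock, flags);                     \
        val = v->counter;       /* remember the pre-op value */ \
        v->counter c_op a;      /* then apply the operation */  \
        raw_spin_unlock_irqrestore(lock, flags);                \
        return val;                                             \
    }

    ATOMIC64_FETCH_OP(add, +=)
    ATOMIC64_FETCH_OP(sub, -=)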

2015-07-27  atomic: Provide atomic_{or,xor,and}  (Peter Zijlstra; 1 file changed, -0/+3)
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace
the atomic_{set,clear}_mask functions that are available on some
archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
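
For the generic atomic64 implementation this amounts, presumably, to
three more instantiations of the op-generating macro (the macro itself
is shown under the 2014 rewrite entry below), matching the three added
lines:

    ATOMIC64_OP(and, &=)
    ATOMIC64_OP(or,  |=)
    ATOMIC64_OP(xor, ^=)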

2014-08-14  locking,arch: Rewrite generic atomic support  (Peter Zijlstra; 1 file changed, -47/+36)
Rewrite generic atomic support to only require cmpxchg(), generate all
other primitives from that. Furthermore reduce the endless repetition
for all these primitives to a few CPP macros. This way we get more for
less lines.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140508135852.940119622@infradead.org
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Howells <dhowells@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
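
The cmpxchg()-based generation applies to the asm-generic atomic_t
side; in lib/atomic64.c the change is that the repeated
spinlock-protected functions collapse into macros. A minimal sketch:

    #define ATOMIC64_OP(op, c_op)                               \
    void atomic64_##op(long long a, atomic64_t *v)              \
    {                                                           \
        unsigned long flags;                                    \
        raw_spinlock_t *lock = lock_addr(v);                    \
                                                                \
        raw_spin_lock_irqsave(lock, flags);                     \
        v->counter c_op a;                                      \
        raw_spin_unlock_irqrestore(lock, flags);                \
    }                                                           \
    EXPORT_SYMBOL(atomic64_##op);

    ATOMIC64_OP(add, +=)
    ATOMIC64_OP(sub, -=)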

2012-12-20  lib: atomic64: Initialize locks statically to fix early users  (Stephen Boyd; 1 file changed, -12/+5)
The atomic64 library uses a handful of static spin locks to implement
atomic 64-bit operations on architectures without support for atomic
64-bit instructions. Unfortunately, the spinlocks are initialized in a
pure initcall and that is too late for the vfs namespace code which
wants to use atomic64 operations before the initcall is run.

This became a problem as of commit 8823c079ba71: "vfs: Add setns
support for the mount namespace".

This leads to BUG messages such as:

  BUG: spinlock bad magic on CPU#0, swapper/0/0
   lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
    do_raw_spin_lock+0x158/0x198
    _raw_spin_lock_irqsave+0x4c/0x58
    atomic64_add_return+0x30/0x5c
    alloc_mnt_ns.clone.14+0x44/0xac
    create_mnt_ns+0xc/0x54
    mnt_init+0x120/0x1d4
    vfs_caches_init+0xe0/0x10c
    start_kernel+0x29c/0x300

coming out early on during boot when spinlock debugging is enabled.

Fix this by initializing the spinlocks statically at compile time.

Reported-and-tested-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
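
A sketch of the compile-time initialization, using a designated
array-range initializer in place of the initcall loop:

    static union {
        raw_spinlock_t lock;
        char pad[L1_CACHE_BYTES];
    } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
        [0 ... (NR_LOCKS - 1)] = {
            .lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
        },
    };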

2012-03-07  lib: reduce the use of module.h wherever possible  (Paul Gortmaker; 1 file changed, -1/+1)
For files only using THIS_MODULE and/or EXPORT_SYMBOL, map them onto
including export.h -- or if the file isn't even using those, then just
delete the include. Fix up any implicit include dependencies that were
being masked by module.h along the way.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
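
For lib/atomic64.c, which only uses EXPORT_SYMBOL, the one-line change
is presumably the include swap:

    -#include <linux/module.h>
    +#include <linux/export.h>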

2011-09-14  lib: atomic64: Change the type of local lock to raw_spinlock_t  (Yong Zhang; 1 file changed, -11/+11)
There are still some leftovers of commit f59ca058 [locking,
lib/atomic64: Annotate atomic64_lock::lock as raw].

[ tglx: Seems I picked the wrong version of that patch :( ]

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shan Hai <haishan.bai@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Link: http://lkml.kernel.org/r/20110914074924.GA16096@zhy
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2011-09-13  locking, lib/atomic64: Annotate atomic64_lock::lock as raw  (Shan Hai; 1 file changed, -22/+22)
The spinlock protected atomic64 operations must be irq safe as they
are used in hard interrupt context and cannot be preempted on -rt:

  NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
  LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
  Call Trace:
  [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
  [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
  [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
  [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
  [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
  [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c

So annotate it.

In mainline this change documents the low level nature of the lock -
otherwise there's no functional difference. Lockdep and Sparse
checking will work as usual.

Signed-off-by: Shan Hai <haishan.bai@gmail.com>
Reviewed-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
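
The shape of the annotation (sketch): the lock type and every
lock/unlock call gain the raw_ prefix, so the lock remains a true
spinning lock on -rt instead of becoming a sleeping rtmutex:

    static union {
        raw_spinlock_t lock;            /* was: spinlock_t */
        char pad[L1_CACHE_BYTES];
    } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;

    long long atomic64_read(const atomic64_t *v)
    {
        unsigned long flags;
        raw_spinlock_t *lock = lock_addr(v);
        long long val;

        raw_spin_lock_irqsave(lock, flags);     /* was: spin_lock_irqsave() */
        val = v->counter;
        raw_spin_unlock_irqrestore(lock, flags);
        return val;
    }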

2011-07-26  atomic: use <linux/atomic.h>  (Arun Sharma; 1 file changed, -1/+1)
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>.

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-03-01  lib: Fix atomic64_add_unless return value convention  (Luca Barbieri; 1 file changed, -2/+2)
atomic64_add_unless() must return 1 if it performed the add and 0
otherwise. The generic implementation did the opposite thing.

Reported-by: H. Peter Anvin <hpa@zytor.com>
Confirmed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Luca Barbieri <luca@luca-barbieri.com>
LKML-Reference: <1267469749-11878-4-git-send-email-luca@luca-barbieri.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
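
A sketch of the corrected convention, with the success flag set exactly
when the add happens (lock types as they were in 2010, before the
raw_spinlock_t conversion above):

    int atomic64_add_unless(atomic64_t *v, long long a, long long u)
    {
        unsigned long flags;
        spinlock_t *lock = lock_addr(v);
        int ret = 0;

        spin_lock_irqsave(lock, flags);
        if (v->counter != u) {
            v->counter += a;
            ret = 1;            /* the add was performed */
        }
        spin_unlock_irqrestore(lock, flags);
        return ret;
    }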

2009-07-29  lib: export generic atomic64_t functions  (Roland Dreier; 1 file changed, -0/+11)
The generic atomic64_t implementation in lib/ did not export the
functions it defined, which means that modules that use atomic64_t
would not link on platforms that use the generic implementation (such
as 32-bit powerpc). For example, trying to build a kernel with
CONFIG_NET_RDS on such a platform would fail with:

  ERROR: "atomic64_read" [net/rds/rds.ko] undefined!
  ERROR: "atomic64_set" [net/rds/rds.ko] undefined!

Fix this by exporting the atomic64_t functions to modules. (I export
the entire API even if it's not all currently used by in-tree modules
to avoid having to continue fixing this in dribs and drabs)

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
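
The fix adds an EXPORT_SYMBOL() per public function; a sketch of the
added lines (one export per op, matching the +11 diff):

    EXPORT_SYMBOL(atomic64_read);
    EXPORT_SYMBOL(atomic64_set);
    EXPORT_SYMBOL(atomic64_add_return);
    EXPORT_SYMBOL(atomic64_sub_return);
    EXPORT_SYMBOL(atomic64_dec_if_positive);
    EXPORT_SYMBOL(atomic64_cmpxchg);
    EXPORT_SYMBOL(atomic64_xchg);
    EXPORT_SYMBOL(atomic64_add_unless);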

2009-06-15  lib: Provide generic atomic64_t implementation  (Paul Mackerras; 1 file changed, -0/+175)
Many processor architectures have no 64-bit atomic instructions, but
we need atomic64_t in order to support the perf_counter subsystem.

This adds an implementation of 64-bit atomic operations using hashed
spinlocks to provide atomicity. For each atomic operation, the address
of the atomic64_t variable is hashed to an index into an array of 16
spinlocks. That spinlock is taken (with interrupts disabled) around
the operation, which can then be coded non-atomically within the lock.

On UP, all the spinlock manipulation goes away and we simply disable
interrupts around each operation. In fact gcc eliminates the whole
atomic64_lock variable as well.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
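
A minimal sketch of the hashed-spinlock scheme described above: hash
the variable's address down to one of 16 cacheline-padded locks, then
perform the plain 64-bit operation under that lock with interrupts
disabled:

    #define NR_LOCKS 16

    /* one cacheline-padded spinlock per hash bucket */
    static union {
        spinlock_t lock;
        char pad[L1_CACHE_BYTES];
    } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;

    static inline spinlock_t *lock_addr(const atomic64_t *v)
    {
        unsigned long addr = (unsigned long) v;

        addr >>= L1_CACHE_SHIFT;
        addr ^= (addr >> 8) ^ (addr >> 16);
        return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
    }

    long long atomic64_add_return(long long a, atomic64_t *v)
    {
        unsigned long flags;
        spinlock_t *lock = lock_addr(v);
        long long val;

        spin_lock_irqsave(lock, flags);
        val = v->counter += a;  /* plain 64-bit op, made atomic by the lock */
        spin_unlock_irqrestore(lock, flags);
        return val;
    }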