author    | Anton Blanchard <anton@samba.org>                 | 2010-02-10 00:57:28 +0000
committer | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2010-02-17 14:03:14 +1100
commit    | 4e14a4d17a8cd66ccab180d32c977091922cfbed
tree      | 5d6c9c97853ada47c4748d6965fca54692dbf665 /arch/powerpc/include/asm/ppc-opcode.h
parent    | 17081102a6e0fe32cf47cdbdf8f2e9ab55273b08
powerpc: Use lwarx hint in spinlocks
Recent versions of the PowerPC architecture added a hint bit to the larx
instructions to differentiate between an atomic operation and a lock operation:
> 0 Other programs might attempt to modify the word in storage addressed by EA
> even if the subsequent Store Conditional succeeds.
>
> 1 Other programs will not attempt to modify the word in storage addressed by
> EA until the program that has acquired the lock performs a subsequent store
> releasing the lock.
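The spinlock code is where the hint gets set: lock acquisition passes eh=1 so the CPU knows the reservation is for a lock. The diffstat below is limited to ppc-opcode.h, so purely as a rough illustration (the function name, operands and acquire barrier here are assumptions, not the verbatim spinlock.h hunk), an acquisition loop built on the PPC_LWARX() macro this patch introduces might look like:

/*
 * Simplified sketch, not the actual spinlock.h change from this series.
 * Kernel context only: PPC_LWARX() comes from <asm/ppc-opcode.h>.
 */
#include <asm/ppc-opcode.h>

static inline unsigned long spin_trylock_sketch(unsigned int *lock,
						unsigned long token)
{
	unsigned long tmp;

	__asm__ __volatile__(
"1:	" PPC_LWARX(%0,0,%2,1) "\n"	/* load and reserve, eh=1: lock hint */
"	cmpwi	0,%0,0\n"		/* non-zero means the lock is held */
"	bne-	2f\n"
"	stwcx.	%1,0,%2\n"		/* try to store our lock token */
"	bne-	1b\n"			/* lost the reservation, retry */
"	isync\n"			/* acquire barrier */
"2:"
	: "=&r" (tmp)
	: "r" (token), "r" (lock)
	: "cr0", "memory");

	return tmp;			/* zero means we now hold the lock */
}

Plain atomics would keep passing eh=0, since per the wording above other programs are expected to keep modifying the word even after a successful store conditional.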
To avoid a binutils dependency, this patch creates macros for the extended lwarx
format and uses them in the spinlock code. To test this change I used a simple
test case that acquires and releases a global pthread mutex:
pthread_mutex_lock(&mutex);
pthread_mutex_unlock(&mutex);
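The benchmark program itself is not part of the commit; a minimal reconstruction along these lines (thread count taken from the description below, iteration count is an arbitrary choice) exercises the same contended-mutex path:

/* Hypothetical reconstruction of the microbenchmark described here:
 * 32 threads hammering one global pthread mutex. Not the exact program
 * used to produce the numbers below. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_THREADS	32
#define ITERATIONS	1000000UL

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
	unsigned long i;

	for (i = 0; i < ITERATIONS; i++) {
		pthread_mutex_lock(&mutex);
		pthread_mutex_unlock(&mutex);
	}

	return NULL;
}

int main(void)
{
	pthread_t threads[NR_THREADS];
	int i;

	for (i = 0; i < NR_THREADS; i++)
		if (pthread_create(&threads[i], NULL, worker, NULL))
			exit(1);

	for (i = 0; i < NR_THREADS; i++)
		pthread_join(threads[i], NULL);

	printf("%lu lock/unlock operations\n", NR_THREADS * ITERATIONS);
	return 0;
}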
On a 32 core POWER6 running 32 test threads, we spend almost all our time in
the futex spinlock code:
94.37% perf [kernel] [k] ._raw_spin_lock
|
|--99.95%-- ._raw_spin_lock
| |
| |--63.29%-- .futex_wake
| |
| |--36.64%-- .futex_wait_setup
This makes it a good test case for this patch. The results (in lock/unlock
operations per second) are:
before: 1538203 ops/sec
after: 2189219 ops/sec
An improvement of 42%
A 32 core POWER7 improves even more:
before: 1279529 ops/sec
after: 2282076 ops/sec
An improvement of 78%
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Diffstat (limited to 'arch/powerpc/include/asm/ppc-opcode.h')
-rw-r--r-- | arch/powerpc/include/asm/ppc-opcode.h | 14 ++++++++++++++
1 files changed, 14 insertions, 0 deletions
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index ef9aa84cac5a..ecec76051184 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -24,6 +24,7 @@
 #define PPC_INST_ISEL_MASK		0xfc00003e
 #define PPC_INST_LSWI			0x7c0004aa
 #define PPC_INST_LSWX			0x7c00042a
+#define PPC_INST_LWARX			0x7c000029
 #define PPC_INST_LWSYNC			0x7c2004ac
 #define PPC_INST_LXVD2X			0x7c000698
 #define PPC_INST_MCRXR			0x7c000400
@@ -55,15 +56,28 @@
 #define __PPC_RA(a)	(((a) & 0x1f) << 16)
 #define __PPC_RB(b)	(((b) & 0x1f) << 11)
 #define __PPC_RS(s)	(((s) & 0x1f) << 21)
+#define __PPC_RT(s)	__PPC_RS(s)
 #define __PPC_XS(s)	((((s) & 0x1f) << 21) | (((s) & 0x20) >> 5))
 #define __PPC_T_TLB(t)	(((t) & 0x3) << 21)
 #define __PPC_WC(w)	(((w) & 0x3) << 21)
+/*
+ * Only use the larx hint bit on 64bit CPUs. Once we verify it doesn't have
+ * any side effects on all 32bit processors, we can do this all the time.
+ */
+#ifdef CONFIG_PPC64
+#define __PPC_EH(eh)	(((eh) & 0x1) << 0)
+#else
+#define __PPC_EH(eh)	0
+#endif
 
 /* Deal with instructions that older assemblers aren't aware of */
 #define	PPC_DCBAL(a, b)		stringify_in_c(.long PPC_INST_DCBAL | \
 					__PPC_RA(a) | __PPC_RB(b))
 #define	PPC_DCBZL(a, b)		stringify_in_c(.long PPC_INST_DCBZL | \
 					__PPC_RA(a) | __PPC_RB(b))
+#define PPC_LWARX(t, a, b, eh)	stringify_in_c(.long PPC_INST_LWARX | \
+					__PPC_RT(t) | __PPC_RA(a) | \
+					__PPC_RB(b) | __PPC_EH(eh))
 #define PPC_MSGSND(b)		stringify_in_c(.long PPC_INST_MSGSND | \
 					__PPC_RB(b))
 #define PPC_RFCI		stringify_in_c(.long PPC_INST_RFCI)
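To see what the new macro emits, take an arbitrary example (register numbers chosen purely for illustration, not taken from the patch). The four-operand assembler form that only newer binutils accepts,

	lwarx	9,0,10,1

is the instruction that PPC_LWARX(9, 0, 10, 1) hand-assembles through the C preprocessor; after the helper macros simplify, it boils down to

	.long 0x7c000029 | (9 << 21) | (0 << 16) | (10 << 11) | 1	/* = 0x7d205029 */

which the assembler evaluates to the same instruction word, so the kernel builds with older assemblers that reject the hint operand.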