author    | Anton Blanchard <anton@samba.org> | 2012-05-29 19:33:12 +0000
committer | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2012-07-03 14:14:44 +1000
commit    | fde69282b7ba2701560764b81ebb756deb98cf2b (patch)
tree      | e350f7d55f90885e9ea9ee26dcb020f0fbcbaa4e /arch/powerpc/lib/vmx-helper.c
parent    | 6f7839e542ee18770288be75114bd2e6771e1421 (diff)
powerpc: POWER7 optimised copy_page using VMX and enhanced prefetch
Implement a POWER7 optimised copy_page using VMX and enhanced
prefetch instructions. We use enhanced prefetch hints to prefetch
both the load and store side. We copy a cacheline at a time and
fall back to regular loads and stores if we are unable to use VMX
(e.g. when we are in an interrupt).
The following microbenchmark was used to assess the impact of
the patch:
http://ozlabs.org/~anton/junkcode/page_fault_file.c
We test MAP_PRIVATE page faults across a 1GB file, 100 times:
# time ./page_fault_file -p -l 1G -i 100
Before: 22.25s
After: 18.89s
17% faster
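
For reference, the test it describes can be sketched in a few lines of C. The following is only an illustration of that style of test, not the actual page_fault_file.c linked above: map the file MAP_PRIVATE, write one byte per page so every page takes a copy-on-write fault (which exercises copy_page), then unmap and repeat.

/*
 * Rough illustration of the benchmark described above; not the
 * actual page_fault_file.c.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	const char *file = argc > 1 ? argv[1] : "testfile";
	int iterations = argc > 2 ? atoi(argv[2]) : 100;
	long page = sysconf(_SC_PAGESIZE);
	struct stat st;
	int fd = open(file, O_RDONLY);

	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(file);
		return 1;
	}

	for (int i = 0; i < iterations; i++) {
		char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* One write per page: each takes a copy-on-write fault */
		for (off_t off = 0; off < st.st_size; off += page)
			p[off] = 1;

		munmap(p, st.st_size);
	}

	close(fd);
	return 0;
}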
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Diffstat (limited to 'arch/powerpc/lib/vmx-helper.c')
-rw-r--r-- | arch/powerpc/lib/vmx-helper.c | 23
1 file changed, 23 insertions, 0 deletions
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index 753a839f4a14..3cf529ceec5b 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -49,3 +49,26 @@ int exit_vmx_usercopy(void)
 	pagefault_enable();
 	return 0;
 }
+
+int enter_vmx_copy(void)
+{
+	if (in_interrupt())
+		return 0;
+
+	preempt_disable();
+
+	enable_kernel_altivec();
+
+	return 1;
+}
+
+/*
+ * All calls to this function will be optimised into tail calls. We are
+ * passed a pointer to the destination which we return as required by a
+ * memcpy implementation.
+ */
+void *exit_vmx_copy(void *dest)
+{
+	preempt_enable();
+	return dest;
+}
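
The caller added by this patch is POWER7 assembly; purely as a C-level sketch of how these helpers are meant to be used, the pattern looks roughly like the following, where vmx_copy_page_body() and copy_page_regs() are hypothetical stand-ins for the Altivec and plain integer copy loops.

/*
 * Illustrative sketch only: the real caller is POWER7 assembly, and
 * vmx_copy_page_body()/copy_page_regs() are hypothetical stand-ins.
 */
static void vmx_copy_page_body(void *to, void *from);
static void copy_page_regs(void *to, void *from);

static void copy_page_sketch(void *to, void *from)
{
	if (!enter_vmx_copy()) {
		/* VMX unusable (e.g. in an interrupt): regular loads/stores */
		copy_page_regs(to, from);
		return;
	}

	/* Preemption is disabled and the Altivec unit is enabled here */
	vmx_copy_page_body(to, from);

	/*
	 * Re-enables preemption; returning the destination pointer is what
	 * lets memcpy-style callers turn exit_vmx_copy() into a tail call.
	 */
	exit_vmx_copy(to);
}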