author    | Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> | 2017-04-18 12:31:27 +0530
committer | Michael Ellerman <mpe@ellerman.id.au>              | 2017-04-19 20:00:19 +1000
commit    | 321f7d29e5163d7f1c9c0b705acc45bd1be34aa6 (patch)
tree      | ccb80ac79e28a7b4fad3a2187dad26d571686e6e /arch/powerpc/mm/slice.c
parent    | 41e20d959e5919c70058369323cefa57428b7aaf (diff)
powerpc/mmap: Any hint > 128TB searches the full VA space
As part of the new large address space support, processes start out life with a
128TB virtual address space. However, when calling mmap() a process can pass a
hint address, and if that hint is > 128TB the kernel will use the full 512TB
address space to try to satisfy the mmap() request.
Currently we have a check that the hint is > 128TB and < 512TB (TASK_SIZE),
which was added as an optimisation to avoid updating addr_limit unnecessarily
and also to avoid calling slice_flush_segments() on all CPUs more than
necessary.
However, this has the user-visible side effect that an mmap() hint above 512TB
does not search the full address space unless a preceding mmap() used a hint
value > 128TB && < 512TB.
So fix it to treat any hint above 128TB as a hint to search the full
address space. Instead of checking the hint against TASK_SIZE, we now
check whether addr_limit is already equal to TASK_SIZE.
This also brings the ABI in line with what is proposed on x86, i.e. a hint
address above 128TB, up to and including (2^64)-1, is an indication to
search the full address space.
Fixes: f4ea6dcb08ea2c ("powerpc/mm: Enable mappings above 128TB")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
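
For illustration only, here is a minimal userspace sketch of the ABI described
above; it is not part of the commit, and it assumes a 64-bit powerpc kernel
built with the 512TB large-address-space support. The mapping length, the 1PB
hint value and the printed labels are hypothetical examples.

/* Hypothetical example: the hint value and sizes are illustrative only. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 16;		/* 64K, a multiple of the page size */

	/* No hint: the allocation stays below the default 128TB limit. */
	void *low = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/*
	 * Hint above 512TB (1PB here).  Before this fix such a hint only
	 * searched the full 512TB space if an earlier mmap() had already
	 * raised addr_limit; with the fix any hint above 128TB does.
	 */
	void *high = mmap((void *)(1UL << 50), len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (low == MAP_FAILED || high == MAP_FAILED)
		return 1;

	printf("no hint : %p\n", low);
	printf("1PB hint: %p\n", high);

	munmap(low, len);
	munmap(high, len);
	return 0;
}

On such a kernel the second mapping can land anywhere in the 512TB range,
whereas the first stays below 128TB.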
Diffstat (limited to 'arch/powerpc/mm/slice.c')
-rw-r--r-- | arch/powerpc/mm/slice.c | 3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index ade66c3ecdce..966b9fccfa66 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -419,7 +419,8 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	/*
 	 * Check if we need to expland slice area.
 	 */
-	if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE)) {
+	if (unlikely(addr > mm->context.addr_limit &&
+		     mm->context.addr_limit != TASK_SIZE)) {
 		mm->context.addr_limit = TASK_SIZE;
 		on_each_cpu(slice_flush_segments, mm, 1);
 	}