path: root/mm/memblock.c
2011-01-20  memblock: fix memblock_is_region_memory()  (Tomi Valkeinen; 1 file, -4/+4)

memblock_is_region_memory() uses reserved memblocks to search for the given region, while it should use the memory memblocks. I encountered the problem with OMAP's framebuffer ram allocation. Normally the ram is allocated dynamically, and this function is not called. However, if we want to pass the framebuffer from the bootloader to the kernel (to retain the boot image), this function is used to check the validity of the kernel parameters for the framebuffer ram area.

Signed-off-by: Tomi Valkeinen <tomi.valkeinen@nokia.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

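For illustration, a rough sketch of the fix based on the description above (not the verbatim upstream diff): the lookup simply has to run against memblock.memory rather than memblock.reserved.

    int __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t size)
    {
        /* search the "memory" array, not the "reserved" one */
        int idx = memblock_search(&memblock.memory, base);

        if (idx == -1)
            return 0;
        /* the whole [base, base + size) range must sit inside that region */
        return memblock.memory.regions[idx].base <= base &&
               (memblock.memory.regions[idx].base +
                memblock.memory.regions[idx].size) >= (base + size);
    }
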
2010-10-11  memblock: Annotate memblock functions with __init_memblock  (Yinghai Lu; 1 file, -2/+2)

Stephen found:

    WARNING: mm/built-in.o(.text+0x25ab8): Section mismatch in reference from the function memblock_find_base() to the function .init.text:memblock_find_region()
    The function memblock_find_base() references the function __init memblock_find_region().
    This is often because memblock_find_base lacks a __init annotation or the annotation of memblock_find_region is wrong.

So let memblock_find_region() use __init_memblock instead of __init directly. Also annotate one function that had no __init* marking with __init_memblock.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4CB366B1.40405@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

2010-10-11  memblock: Allow memblock_init to be called early  (Jeremy Fitzhardinge; 1 file, -0/+6)

The Xen setup code needs to call memblock_x86_reserve_range() very early, so allow it to initialize the memblock subsystem before doing so. The second memblock_init() is ignored.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
LKML-Reference: <4CACFDAD.3090900@goop.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

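A minimal sketch of how the second call becomes a no-op, assuming a simple static guard flag as described above (the exact variable name is illustrative):

    void __init memblock_init(void)
    {
        static int init_done __initdata;

        /* Only the first call (possibly from very early Xen setup) does the
         * work; any later memblock_init() is silently ignored. */
        if (init_done)
            return;
        init_done = 1;

        /* ... populate memblock.memory / memblock.reserved bookkeeping ... */
    }
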
2010-10-05  memblock: Fix wraparound in find_region()  (Yinghai Lu; 1 file, -1/+6)

When trying to find a huge range for crashkernel, we get:

    [ 0.000000] ------------[ cut here ]------------
    [ 0.000000] WARNING: at arch/x86/mm/memblock.c:248 memblock_x86_reserve_range+0x40/0x7a()
    [ 0.000000] Hardware name: Sun Fire x4800
    [ 0.000000] memblock_x86_reserve_range: wrong range [0xffffffff37000000, 0x137000000)
    [ 0.000000] Modules linked in:
    [ 0.000000] Pid: 0, comm: swapper Not tainted 2.6.36-rc5-tip-yh-01876-g1cac214-dirty #59
    [ 0.000000] Call Trace:
    [ 0.000000] [<ffffffff82816f7e>] ? memblock_x86_reserve_range+0x40/0x7a
    [ 0.000000] [<ffffffff81078c2d>] warn_slowpath_common+0x85/0x9e
    [ 0.000000] [<ffffffff81078d38>] warn_slowpath_fmt+0x6e/0x70
    [ 0.000000] [<ffffffff8281e77c>] ? memblock_find_region+0x40/0x78
    [ 0.000000] [<ffffffff8281eb1f>] ? memblock_find_base+0x9a/0xb9
    [ 0.000000] [<ffffffff82816f7e>] memblock_x86_reserve_range+0x40/0x7a
    [ 0.000000] [<ffffffff8280452c>] setup_arch+0x99d/0xb2a
    [ 0.000000] [<ffffffff810a3e02>] ? trace_hardirqs_off+0xd/0xf
    [ 0.000000] [<ffffffff81cec7d8>] ? _raw_spin_unlock_irqrestore+0x3d/0x4c
    [ 0.000000] [<ffffffff827ffcec>] start_kernel+0xde/0x3f1
    [ 0.000000] [<ffffffff827ff2d4>] x86_64_start_reservations+0xa0/0xa4
    [ 0.000000] [<ffffffff827ff3de>] x86_64_start_kernel+0x106/0x10d
    [ 0.000000] ---[ end trace a7919e7f17c0a725 ]---
    [ 0.000000] Reserving 8192MB of memory at 17592186041200MB for crashkernel (System RAM: 526336MB)

This is caused by a wraparound in the test due to size > end; explicitly check for this condition and fail.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4CAA4DD3.1080401@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>

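The essence of the fix, sketched from the description (placement inside memblock_find_region() and the memblock_align_down() helper are as assumed here, not quoted from the patch):

    /* inside memblock_find_region(), before computing the top-down candidate: */
    if (end < size)                 /* "end - size" would wrap around */
        return MEMBLOCK_ERROR;

    base = memblock_align_down(end - size, align);
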
2010-09-15  memblock: Fix section mismatch warnings  (Yinghai Lu; 1 file, -7/+7)

Stephen found a bunch of section mismatch warnings with the new memblock changes. Use __init_memblock to replace __init in memblock.c and remove __init in memblock.h. We should not use __init in header files.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Tested-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Yinghai Lu <Yinghai@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
LKML-Reference: <4C912709.2090201@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

2010-08-27  memblock: Add memblock_free/reserve_reserved_regions()  (Yinghai Lu; 1 file, -0/+24)

So we can avoid exporting memblock_reserved_init_regions(). Suggested by Ben.

-v2: use __init_memblock attribute

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2010-08-05  memblock: Add memblock_find_in_range()  (Yinghai Lu; 1 file, -0/+8)

This is a wrapper for memblock_find_base() using slightly different arguments (start,end instead of start,size for example) in order to make it easier to convert existing arch/x86 code.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

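The wrapper is essentially argument reshuffling; a sketch of what it boils down to (signature and types hedged from the description):

    u64 __init memblock_find_in_range(u64 start, u64 end, u64 size, u64 align)
    {
        /* x86 callers pass (start, end); the core wants (size, align, start, end) */
        return memblock_find_base(size, align, start, end);
    }
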
2010-08-05  memblock: Option for the architecture to put memblock into the .init section  (Yinghai Lu; 1 file, -24/+24)

Arch code can define ARCH_DISCARD_MEMBLOCK in asm/memblock.h, which in turn causes memblock code and data to go into the .init and .initdata sections respectively. This will be used by the x86 architecture. If ARCH_DISCARD_MEMBLOCK is defined, the debugfs files to inspect the memblock arrays after boot are not created.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

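Conceptually the annotations expand along these lines (a sketch of the header logic, assuming the __init_memblock/__initdata_memblock macro pair used elsewhere in this log):

    #ifdef ARCH_DISCARD_MEMBLOCK
    /* the arch promises not to touch memblock after boot, so its code and
     * data can be discarded with the rest of the .init* sections */
    #define __init_memblock         __init
    #define __initdata_memblock     __initdata
    #else
    #define __init_memblock
    #define __initdata_memblock
    #endif
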
2010-08-05  memblock: Make MEMBLOCK_ERROR be 0  (Benjamin Herrenschmidt; 1 file, -0/+6)

And ensure we don't hand out 0 as a valid allocation. We put the low limit at PAGE_SIZE arbitrarily.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

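A sketch of the idea; the exact form of the clamp inside the allocator's search path may differ from the upstream patch:

    #define MEMBLOCK_ERROR  0               /* 0 is never handed out any more */

    /* ... in the search for a free base, keep page zero out of reach ... */
    if (start < PAGE_SIZE)
        start = PAGE_SIZE;
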
2010-08-05  memblock: Export MEMBLOCK_ERROR  (Yinghai Lu; 1 file, -2/+0)

Will be used by x86's memblock_x86_find_in_range_node() and the nobootmem replacement.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Improve debug output when resizing the reserve array  (Yinghai Lu; 1 file, -3/+4)

Print out the location info in addition to which array is being resized. Also use memblock_dbg() to put that under control of the memblock_debug flag.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Expose some memblock bits for use by x86  (Yinghai Lu; 1 file, -1/+2)

This exposes memblock_debug and the associated memblock_dbg() macro, along with memblock_can_resize, so that x86 can use these when ported to use memblock.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

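The debug knob boils down to something like this (sketched from the description; the declarations live in the memblock header):

    extern int memblock_debug;

    #define memblock_dbg(fmt, ...) \
        if (memblock_debug) printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
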
2010-08-05  memblock: Add debugfs files to dump the arrays content  (Benjamin Herrenschmidt; 1 file, -0/+51)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Make memblock_alloc_try_nid() fallback to MEMBLOCK_ALLOC_ANYWHERE  (Benjamin Herrenschmidt; 1 file, -1/+1)

memblock_alloc_nid() used to fall back to allocating anywhere by using memblock_alloc() as a fallback. However, some of my previous patches limit memblock_alloc() to the region covered by MEMBLOCK_ALLOC_ACCESSIBLE, which is not quite what we want for memblock_alloc_try_nid(). So we fix it by explicitly using MEMBLOCK_ALLOC_ANYWHERE. Note that so far only sparc uses memblock_alloc_nid() and it hasn't been updated to clamp the accessible zone yet. Thus the temporary "breakage" should have no effect.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

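Taken together with the split described in the next entry, the fallback amounts to roughly this (a sketch, not the verbatim function):

    phys_addr_t __init memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
    {
        phys_addr_t res = memblock_alloc_nid(size, align, nid);

        if (res)
            return res;
        /* don't go through memblock_alloc(): it is clamped to
         * MEMBLOCK_ALLOC_ACCESSIBLE; fall back to truly anywhere */
        return memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE);
    }
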
2010-08-05  memblock: Separate memblock_alloc_nid() and memblock_alloc_try_nid()  (Benjamin Herrenschmidt; 1 file, -0/+14)

The former is now strict: it will fail if it cannot honor the allocation within the node. The latter implements the previous semantics, which fall back to allocating anywhere.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: NUMA allocate can now use early_pfn_map  (Benjamin Herrenschmidt; 1 file, -1/+27)

We now provide a default (weak) implementation of memblock_nid_range() which uses the early_pfn_map[] if CONFIG_ARCH_POPULATES_NODE_MAP is set. Sparc still needs to use its own method due to the way the pages can be scattered between nodes. This implementation is inefficient because our main algorithm and callback construct want to work on an ascending-address basis, while early_pfn_map[] would rather work with nids (it's unsorted at that stage). But it should work, and we can look into improving it subsequently, possibly using arch compile options to choose a different algorithm altogether.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Add "start" argument to memblock_find_base()  (Benjamin Herrenschmidt; 1 file, -11/+16)

To constrain the search of a region between two boundaries, which will be used by the new NUMA-aware allocator among others.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Add arch function to control coalescing of memblock memory regions  (Benjamin Herrenschmidt; 1 file, -1/+18)

Some archs such as ARM want to avoid coalescing across things such as the lowmem/highmem boundary or similar. This provides the option to control it via an arch callback, for which a weak default is provided that always allows coalescing.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

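A sketch of the weak default (signature hedged from the description): an arch like ARM can override it to keep regions on either side of a boundary it cares about from being merged.

    /* Return non-zero if the two adjacent regions may be merged. */
    unsigned long __weak __init_memblock
    memblock_memory_can_coalesce(phys_addr_t addr1, phys_addr_t size1,
                                 phys_addr_t addr2, phys_addr_t size2)
    {
        return 1;       /* default: always allow coalescing */
    }
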
2010-08-05  memblock: Add array resizing support  (Benjamin Herrenschmidt; 1 file, -2/+102)

When one of the arrays gets full, we resize it. After much thinking and a few iterations of that code, I went back to on-demand resizing using the (new) internal memblock_find_base() function, which is pretty much what Yinghai initially proposed, though there are some differences in the details. For this to work, it relies on the default alloc limit being set sensibly by the architecture.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Move functions around into a more sensible order  (Benjamin Herrenschmidt; 1 file, -142/+159)

Some shuffling is needed for doing array resize so we may as well put some sense into the ordering of the functions in the whole memblock.c file. No code change. Added some comments.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: split memblock_find_base() out of __memblock_alloc_base()  (Benjamin Herrenschmidt; 1 file, -20/+38)

This will be used by the array resize code and might prove useful to some arch code as well, at which point it can be made non-static. Also add a comment as to why aligning the size is important.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2. Fix loss of size alignment
v3. Fix result code

2010-08-05  memblock: Move memblock_init() to the bottom of the file  (Benjamin Herrenschmidt; 1 file, -27/+27)

It's a real PITA to have to search for it in the middle.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Define MEMBLOCK_ERROR internally instead of using ~(phys_addr_t)0  (Benjamin Herrenschmidt; 1 file, -5/+7)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Make memblock_find_region() out of memblock_alloc_region()  (Benjamin Herrenschmidt; 1 file, -11/+9)

This function will be used to locate a free area to put the new memblock arrays when attempting to resize them. memblock_alloc_region() is gone, the two callsites now call memblock_add_region().

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2. Fix membase_alloc_nid_region() conversion

2010-08-05  memblock: Add debug markers at the end of the array  (Benjamin Herrenschmidt; 1 file, -0/+11)

Since we allocate one more than needed, why not do a bit of sanity checking here to ensure we don't walk past the end of the array?

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Move memblock arrays to static storage in memblock.c and make their size a variable  (Benjamin Herrenschmidt; 1 file, -1/+9)

This is in preparation for having resizable arrays. Note that we still allocate one more than needed; this is unchanged from the previous implementation.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Remove memblock_type.size and add memblock.memory_size instead  (Benjamin Herrenschmidt; 1 file, -4/+4)

Right now, both the "memory" and "reserved" memblock_type structures have a "size" member. It represents the calculated memory size in the former case and is unused in the latter. This moves it out to the main memblock structure instead.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Change u64 to phys_addr_t  (Benjamin Herrenschmidt; 1 file, -58/+60)

Let's not waste space and cycles on archs that don't support >32-bit physical address space.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Remove rmo_size, burry it in arch/powerpc where it belongs  (Benjamin Herrenschmidt; 1 file, -8/+0)

The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact server ppc64, though I hijack it on embedded ppc64 for similar purposes) and represents the area of memory that can be accessed in real mode (aka with MMU off), or on embedded, from the exception vectors (which are bolted in the TLB), which pretty much boils down to the same thing. We take that out of the generic MEMBLOCK data structure and move it into arch/powerpc where it belongs, renaming it to "RMA" while at it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Introduce default allocation limit and use it to replace explicit ones  (Benjamin Herrenschmidt; 1 file, -8/+11)

This introduces memblock.current_limit, which is used to limit allocations from memblock_alloc() or memblock_alloc_base(..., MEMBLOCK_ALLOC_ACCESSIBLE). The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still be used with memblock_alloc_base() to allocate really anywhere. It is -no-longer- cropped to MEMBLOCK_REAL_LIMIT, which disappears.

Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE. I strongly recommend that you ensure that you set an appropriate limit during boot in order to guarantee that a memblock_alloc() at any time results in something that is accessible with a simple __va(). The reason is that a subsequent patch will introduce the ability for the array to resize itself by reallocating itself. The MEMBLOCK core will honor the current limit when performing those allocations.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

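A sketch of how the limit plugs in, with names taken from the description above (whether the setter is an inline helper or lives in memblock.c is an assumption here):

    /* arch code calls this once it knows how much memory __va() can reach */
    static inline void memblock_set_current_limit(phys_addr_t limit)
    {
        memblock.current_limit = limit;
    }

    phys_addr_t __init memblock_alloc(phys_addr_t size, phys_addr_t align)
    {
        /* "accessible" means "below memblock.current_limit" */
        return memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ACCESSIBLE);
    }
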
2010-08-05  memblock: Expose MEMBLOCK_ALLOC_ANYWHERE  (Benjamin Herrenschmidt; 1 file, -2/+0)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Factor the lowest level alloc function  (Benjamin Herrenschmidt; 1 file, -32/+27)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Remove nid_range argument, arch provides memblock_nid_range() instead  (Benjamin Herrenschmidt; 1 file, -5/+8)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-05  memblock: Remove memblock_find()  (Benjamin Herrenschmidt; 1 file, -32/+0)

Nobody uses it anymore. Its semantics were ... weird.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-04  memblock: Implement memblock_is_memory and memblock_is_region_memory  (Benjamin Herrenschmidt; 1 file, -8/+34)

To make it fast, we steal ARM's binary search for memblock_is_memory() and use that to also replace the existing implementation of memblock_is_reserved().

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

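The borrowed search is a plain binary search over the sorted region array; a sketch, written with the post-rename memblock_type/regions naming for clarity (the commit itself predates that rename and the phys_addr_t conversion):

    static int memblock_search(struct memblock_type *type, u64 addr)
    {
        unsigned int left = 0, right = type->cnt;

        do {
            unsigned int mid = (right + left) / 2;

            if (addr < type->regions[mid].base)
                right = mid;
            else if (addr >= (type->regions[mid].base +
                              type->regions[mid].size))
                left = mid + 1;
            else
                return mid;     /* addr falls inside regions[mid] */
        } while (left < right);
        return -1;
    }
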
2010-08-04  memblock: Rename memblock_region to memblock_type and memblock_property to memblock_region  (Benjamin Herrenschmidt; 1 file, -85/+83)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2010-08-04  memblock: Fix memblock_is_region_reserved() to return a boolean  (Benjamin Herrenschmidt; 1 file, -1/+1)

All callers expect a boolean result which is true if the region overlaps a reserved region. However, the implementation actually returns -1 if there is no overlap, and a region index (0 based) if there is. Make it behave as callers (and common sense) expect.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

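The fix reduces to comparing the overlap index against zero; a sketch assuming the existing memblock_overlaps_region() helper (types shown as u64, per the era before the phys_addr_t conversion above):

    int memblock_is_region_reserved(u64 base, u64 size)
    {
        /* memblock_overlaps_region() returns -1 for "no overlap" or the index
         * of the overlapping region; callers only want yes/no */
        return memblock_overlaps_region(&memblock.reserved, base, size) >= 0;
    }
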
2010-07-14  lmb: rename to memblock  (Yinghai Lu; 1 file, -0/+541)

Via the following script:

    FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

    sed -i \
        -e 's/lmb/memblock/g' \
        -e 's/LMB/MEMBLOCK/g' \
        $FILES

    for N in $(find . -name lmb.[ch]); do
        M=$(echo $N | sed 's/lmb/memblock/g')
        mv $N $M
    done

and then removing some wrong changes, such as hits in lmbench and dlmb etc. Also move memblock.c from lib/ to mm/.

Suggested-by: Ingo Molnar <mingo@elte.hu>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>