|
Revert "mm: Always release pages to the buddy allocator in memblock_free_late()."
This reverts commit 115d9d77bb0f9152c60b6e8646369fa7f6167593.
The pages being freed by memblock_free_late() have already been
initialized, but if they are in the deferred init range,
__free_one_page() might access nearby uninitialized pages when trying to
coalesce buddies. This can, for example, trigger this BUG:
BUG: unable to handle page fault for address: ffffe964c02580c8
RIP: 0010:__list_del_entry_valid+0x3f/0x70
<TASK>
__free_one_page+0x139/0x410
__free_pages_ok+0x21d/0x450
memblock_free_late+0x8c/0xb9
efi_free_boot_services+0x16b/0x25c
efi_enter_virtual_mode+0x403/0x446
start_kernel+0x678/0x714
secondary_startup_64_no_verify+0xd2/0xdb
</TASK>
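For context, the coalescing step that trips over the uninitialized neighbour boils down to the buddy-PFN lookup below (a minimal sketch of the math used by mm/page_alloc.c, not the exact code):
/*
 * Simplified sketch: when freeing the page at @pfn with order @order,
 * __free_one_page() computes the buddy's PFN and then dereferences that
 * buddy's struct page (e.g. its list_head) to try to merge the pair.
 * If the buddy lies in the deferred-init range, its struct page is
 * still uninitialized and the list-entry validation above blows up.
 */
static unsigned long buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1UL << order);
}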
A proper fix will be more involved so revert this change for the time
being.
Fixes: 115d9d77bb0f ("mm: Always release pages to the buddy allocator in memblock_free_late().")
Signed-off-by: Aaron Thompson <dev@aaront.org>
Link: https://lore.kernel.org/r/20230207082151.1303-1-dev@aaront.org
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
|
|
If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, memblock_free_pages()
only releases pages to the buddy allocator if they are not in the
deferred range. This is correct for free pages (as defined by
for_each_free_mem_pfn_range_in_zone()) because free pages in the
deferred range will be initialized and released as part of the deferred
init process. memblock_free_pages() is called by memblock_free_late(),
which is used to free reserved ranges after memblock_free_all() has
run. All pages in reserved ranges have been initialized at that point,
and accordingly, those pages are not touched by the deferred init
process. This means that currently, if the pages that
memblock_free_late() intends to release are in the deferred range, they
will never be released to the buddy allocator. They will forever be
reserved.
In addition, memblock_free_pages() calls kmsan_memblock_free_pages(),
which is also correct for free pages but is not correct for reserved
pages. KMSAN metadata for reserved pages is initialized by
kmsan_init_shadow(), which runs shortly before memblock_free_all().
For both of these reasons, memblock_free_pages() should only be called
for free pages, and memblock_free_late() should call __free_pages_core()
directly instead.
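In code, the change amounts to roughly the following (a simplified sketch of memblock_free_late() after the patch, not the literal diff):
/*
 * Simplified sketch: release late-freed reserved pages straight to the
 * buddy allocator instead of going through memblock_free_pages(), since
 * they were already initialized before memblock_free_all() ran.
 */
void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
{
	phys_addr_t cursor, end;

	end = PFN_DOWN(base + size);
	cursor = PFN_UP(base);

	for (; cursor < end; cursor++) {
		__free_pages_core(pfn_to_page(cursor), 0);
		totalram_pages_inc();
	}
}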
One case where this issue can occur in the wild is EFI boot on
x86_64. The x86 EFI code reserves all EFI boot services memory ranges
via memblock_reserve() and frees them later via memblock_free_late()
(efi_reserve_boot_services() and efi_free_boot_services(),
respectively). If any of those ranges happens to fall within the
deferred init range, the pages will not be released and that memory will
be unavailable.
For example, on an Amazon EC2 t3.micro VM (1 GB) booting via EFI:
v6.2-rc2:
# grep -E 'Node|spanned|present|managed' /proc/zoneinfo
Node 0, zone DMA
spanned 4095
present 3999
managed 3840
Node 0, zone DMA32
spanned 246652
present 245868
managed 178867
v6.2-rc2 + patch:
# grep -E 'Node|spanned|present|managed' /proc/zoneinfo
Node 0, zone DMA
spanned 4095
present 3999
managed 3840
Node 0, zone DMA32
spanned 246652
present 245868
managed 222816 # +43,949 pages
Fixes: 3a80a7fa7989 ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set")
Signed-off-by: Aaron Thompson <dev@aaront.org>
Link: https://lore.kernel.org/r/01010185892de53e-e379acfb-7044-4b24-b30a-e2657c1ba989-000000@us-west-2.amazonses.com
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
|
|
Commit cf4694be2b2cf ("tools: Add atomic_test_and_set_bit()") changed
tools/arch/x86/include/asm/atomic.h to include <asm/asm.h>, which causes
'make -C tools/testing/memblock' to fail with:
In file included from ../../include/asm/atomic.h:6,
from ../../include/linux/atomic.h:5,
from ./linux/mmzone.h:5,
from ../../include/linux/mm.h:5,
from ../../include/linux/pfn.h:5,
from ./linux/memory_hotplug.h:6,
from ./linux/init.h:7,
from ./linux/memblock.h:11,
from tests/common.h:8,
from tests/basic_api.h:5,
from main.c:2:
../../include/asm/../../arch/x86/include/asm/atomic.h:11:10: fatal error: asm/asm.h: No such file or directory
11 | #include <asm/asm.h>
| ^~~~~~~~~~~
Create a symlink to asm/asm.h in the same manner as the existing one to
asm/cmpxchg.h.
Signed-off-by: Aaron Thompson <dev@aaront.org>
Link: https://lore.kernel.org/r/010101857c402765-96e2dbc6-b82b-47e2-a437-4834dbe0b96b-000000@us-west-2.amazonses.com
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
|
|
Remove completed item from TODO list.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/f2263abe45613b28f1583fbf04a4bffcf735bcf6.1667802195.git.remckee0@gmail.com
|
|
Add tests for memblock_alloc_exact_nid_raw() where the simulated physical
memory is set up with multiple NUMA nodes. Additionally, all but one of
these tests set nid != NUMA_NO_NODE. All tests are run for both top-down
and bottom-up allocation directions.
The tested scenarios are:
Range unrestricted:
- region cannot be allocated:
+ there are no previously reserved regions, but requested node is
too small
+ the requested node is fully reserved
+ the requested node is partially reserved and does not have
enough space
+ none of the nodes have enough memory to allocate the region
Range restricted:
- region can be allocated in the specific node requested without
dropping min_addr:
+ the range fully overlaps with the node, and there are adjacent
reserved regions
- region cannot be allocated:
+ range partially overlaps with two different nodes, where the
second node is the requested node
+ range overlaps with multiple nodes along node boundaries, and
the requested node starts after max_addr
+ nid is set to NUMA_NO_NODE and the total range can fit the
region, but the range is split between two nodes and everything
else is reserved
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/51b14da46e6591428df3aefc5acc7dca9341a541.1667802195.git.remckee0@gmail.com
|
|
Add tests for memblock_alloc_exact_nid_raw() where the simulated physical
memory is set up with multiple NUMA nodes. Additionally, all of these
tests set nid != NUMA_NO_NODE. These tests are run with a bottom-up
allocation direction.
The tested scenarios are:
Range unrestricted:
- region can be allocated in the specific node requested:
+ there are no previously reserved regions
+ the requested node is partially reserved but has enough space
Range restricted:
- region can be allocated in the specific node requested after dropping
min_addr:
+ range partially overlaps with two different nodes, where the
first node is the requested node
+ range partially overlaps with two different nodes, where the
requested node ends before min_addr
+ range overlaps with multiple nodes along node boundaries, and
the requested node ends before min_addr
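A sketch of the simplest of these scenarios, the one with no previously reserved regions, might look like this (function and variable names are illustrative, node_fractions is the suite's per-node layout array, and the real tests in tools/testing/memblock additionally verify that the region lands inside the requested node):
/*
 * Illustrative sketch only: set up a NUMA layout, request a chunk from
 * one specific node with memblock_alloc_exact_nid_raw(), and check that
 * exactly one region of the requested size ended up reserved.
 */
static int alloc_exact_nid_bottom_up_numa_simple_check(void)
{
	int nid_req = 3;	/* illustrative node id */
	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
	void *allocated_ptr = NULL;
	phys_addr_t size;

	PREFIX_PUSH();
	setup_numa_memblock(node_fractions);
	memblock_set_bottom_up(true);

	size = req_node->size / 4;
	allocated_ptr = memblock_alloc_exact_nid_raw(size, SMP_CACHE_BYTES,
						     0, MEMBLOCK_ALLOC_ACCESSIBLE,
						     nid_req);

	ASSERT_NE(allocated_ptr, NULL);
	ASSERT_EQ(memblock.reserved.cnt, 1);
	ASSERT_EQ(memblock.reserved.total_size, size);

	test_pass_pop();

	return 0;
}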
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/935f0eed5e06fd44dc67d9f49b277923d7896bd3.1667802195.git.remckee0@gmail.com
|
|
Add tests for memblock_alloc_exact_nid_raw() where the simulated physical
memory is set up with multiple NUMA nodes. Additionally, all of these
tests set nid != NUMA_NO_NODE. These tests are run with a top-down
allocation direction.
The tested scenarios are:
Range unrestricted:
- region can be allocated in the specific node requested:
+ there are no previously reserved regions
+ the requested node is partially reserved but has enough space
Range restricted:
- region can be allocated in the specific node requested after dropping
min_addr:
+ range partially overlaps with two different nodes, where the
first node is the requested node
+ range partially overlaps with two different nodes, where the
requested node ends before min_addr
+ range overlaps with multiple nodes along node boundaries, and
the requested node ends before min_addr
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/2cc0883243d68ddc3faf833d2d9e86f48534c1d7.1667802195.git.remckee0@gmail.com
|
|
Add TEST_F_EXACT flag, which specifies that tests should run
memblock_alloc_exact_nid_raw(). Introduce range tests for
memblock_alloc_exact_nid_raw() by using the TEST_F_EXACT flag to run the
range tests in alloc_nid_api.c, since memblock_alloc_exact_nid_raw() and
memblock_alloc_try_nid_raw() behave the same way when nid = NUMA_NO_NODE.
Rename tests and other functions in alloc_nid_api.c by removing "_try".
Since the test names will be displayed in verbose output, they need to
be general enough to refer to any of the memblock functions that the
tests may run.
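The dispatch works roughly as sketched below (an illustrative shape, not the exact tests/alloc_nid_api.c code; the helper name and the raw-variant flag name are assumptions here):
/*
 * Illustrative sketch of the flag-based dispatch: the same test bodies
 * can exercise any of the three allocators depending on the flags set
 * in alloc_nid_test_flags.
 */
static inline void *run_memblock_alloc_nid(phys_addr_t size, phys_addr_t align,
					   phys_addr_t min_addr,
					   phys_addr_t max_addr, int nid)
{
	if (alloc_nid_test_flags & TEST_F_EXACT)
		return memblock_alloc_exact_nid_raw(size, align, min_addr,
						    max_addr, nid);
	if (alloc_nid_test_flags & TEST_F_RAW)	/* assumed flag name */
		return memblock_alloc_try_nid_raw(size, align, min_addr,
						  max_addr, nid);
	return memblock_alloc_try_nid(size, align, min_addr, max_addr, nid);
}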
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/5a4b6d1b6130ab7375314e1c45a6d5813dfdabbd.1667802195.git.remckee0@gmail.com
|
|
Remove the completed items from TODO list.
Signed-off-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20221011062128.49359-4-shaoqin.huang@intel.com
|
|
Reserve a 129th region in memblock to trigger the memblock_double_array()
function, which needs a valid memory region to place the enlarged array
in. Use dummy_physical_memory_init() to allocate such a valid memory
region. At the same time, reserve 128 fake memory regions and make sure
they do not intersect with the valid memory region, so that
memblock_double_array() chooses the valid region and succeeds.
Also restore reserved.regions after memblock_double_array() so that the
subsequent tests can run as normal.
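The core of the trick looks roughly like this (a simplified sketch; the real test in tests/basic_api.c also sets up the valid memory via dummy_physical_memory_init() and restores reserved.regions afterwards, and the constant and helper names below are used as I understand them):
/*
 * Simplified sketch: the static array behind memblock.reserved holds
 * INIT_MEMBLOCK_REGIONS (128) entries, so reserving a 129th
 * non-adjacent region forces memblock_double_array() to allocate a
 * bigger array -- which only succeeds if memblock knows about a real
 * chunk of memory it can allocate that array from.
 */
static int memblock_reserve_many_sketch(void)
{
	phys_addr_t base = SZ_16K;	/* illustrative; must not overlap the valid region */
	int i;

	reset_memblock_regions();

	for (i = 0; i < INIT_MEMBLOCK_REGIONS + 1; i++)
		memblock_reserve(base + i * SZ_16K, SZ_4K);	/* gaps prevent merging */

	ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 1);
	ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);

	return 0;
}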
Signed-off-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20221011062128.49359-3-shaoqin.huang@intel.com
|
|
Add a 129th region to memblock to trigger the memblock_double_array()
function, which needs a valid memory region to place the enlarged array
in. Use dummy_physical_memory_init() to allocate a large enough memory
region, split off a chunk that is big enough to be chosen by
memblock_double_array(), and split the remaining memory into small
regions that are added to memblock. This makes sure that
memblock_double_array() always chooses the valid memory region allocated
by dummy_physical_memory_init(), so it must succeed.
Another thing that must be done is to restore memory.regions after
memblock_double_array(), because memory.regions now points to a memory
region allocated by dummy_physical_memory_init() and leaving it in place
would affect the subsequent tests. Simply record the original region and
restore it after the test.
Signed-off-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20221011062128.49359-2-shaoqin.huang@intel.com
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
Pull memblock updates from Mike Rapoport:
"Test suite improvements:
- Added verification that memblock allocations zero the allocated
memory
- Added more test cases for memblock_add(), memblock_remove(),
memblock_reserve() and memblock_free()
- Added tests for memblock_*_raw() family
- Added tests for NUMA-aware allocations in memblock_alloc_try_nid()
and memblock_alloc_try_nid_raw()"
* tag 'memblock-v6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
memblock tests: add generic NUMA tests for memblock_alloc_try_nid*
memblock tests: add bottom-up NUMA tests for memblock_alloc_try_nid*
memblock tests: add top-down NUMA tests for memblock_alloc_try_nid*
memblock tests: add simulation of physical memory with multiple NUMA nodes
memblock_tests: move variable declarations to single block
memblock tests: remove 'cleared' from comment blocks
memblock tests: add tests for memblock_trim_memory
memblock tests: add tests for memblock_*bottom_up functions
memblock tests: update alloc_nid_api to test memblock_alloc_try_nid_raw
memblock tests: update alloc_api to test memblock_alloc_raw
memblock tests: add additional tests for basic api and memblock_alloc
memblock tests: add labels to verbose output for generic alloc tests
memblock tests: update zeroed memory check for memblock_alloc_* tests
memblock tests: update tests to check if memblock_alloc zeroed memory
memblock tests: update reference to obsolete build option in comments
memblock tests: add command line help option
|
|
Add new pageblock_start_pfn() and pageblock_align() macro which are needed
by memblock tests.
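These helpers likely reduce to rounding a PFN to pageblock boundaries, roughly as follows (see include/linux/pageblock-flags.h for the authoritative definitions):
/* Round a PFN down to the start of its pageblock, or up to the next
 * pageblock boundary. */
#define pageblock_start_pfn(pfn)	ALIGN_DOWN((pfn), pageblock_nr_pages)
#define pageblock_align(pfn)		ALIGN((pfn), pageblock_nr_pages)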
Link: https://lkml.kernel.org/r/20220907082643.186979-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw()
where the simulated physical memory is set up with multiple NUMA nodes.
Additionally, two of these tests set nid != NUMA_NO_NODE. All tests are
run for both top-down and bottom-up allocation directions.
The tested scenarios are:
Range unrestricted:
- region cannot be allocated:
+ none of the nodes have enough memory to allocate the region
Range restricted:
- region can be allocated in the specific node requested without dropping
min_addr:
+ the range fully overlaps with the node, and there are adjacent
reserved regions
- region cannot be allocated:
+ nid is set to NUMA_NO_NODE and the total range can fit the region,
but the range is split between two nodes and everything else is
reserved
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/4b2c7e6e5f3a9837939e99293c77e0e6fc3ae4f9.1663046060.git.remckee0@gmail.com
|
|
Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw()
where the simulated physical memory is set up with multiple NUMA nodes.
Additionally, all of these tests set nid != NUMA_NO_NODE. These tests are
run with a bottom-up allocation direction.
The tested scenarios are:
Range unrestricted:
- region can be allocated in the specific node requested:
+ there are no previously reserved regions
+ the requested node is partially reserved but has enough space
- the specific node requested cannot accommodate the request, but the
region can be allocated in a different node:
+ there are no previously reserved regions, but node is too small
+ the requested node is fully reserved
+ the requested node is partially reserved and does not have
enough space
Range restricted:
- region can be allocated in the specific node requested after dropping
min_addr:
+ range partially overlaps with two different nodes, where the first
node is the requested node
+ range partially overlaps with two different nodes, where the
requested node ends before min_addr
- region cannot be allocated in the specific node requested, but it can be
allocated in the requested range:
+ range overlaps with multiple nodes along node boundaries, and the
requested node ends before min_addr
+ range overlaps with multiple nodes along node boundaries, and the
requested node starts after max_addr
- region cannot be allocated in the specific node requested, but it can be
allocated after dropping min_addr:
+ range partially overlaps with two different nodes, where the
second node is the requested node
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/00c4810daaf5d050abc71915b24ed7419bb16b51.1663046060.git.remckee0@gmail.com
|
|
Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw()
where the simulated physical memory is set up with multiple NUMA nodes.
Additionally, all of these tests set nid != NUMA_NO_NODE. These tests are
run with a top-down allocation direction.
The tested scenarios are:
Range unrestricted:
- region can be allocated in the specific node requested:
+ there are no previously reserved regions
+ the requested node is partially reserved but has enough space
- the specific node requested cannot accommodate the request, but the
region can be allocated in a different node:
+ there are no previously reserved regions, but node is too small
+ the requested node is fully reserved
+ the requested node is partially reserved and does not have
enough space
Range restricted:
- region can be allocated in the specific node requested after dropping
min_addr:
+ range partially overlaps with two different nodes, where the first
node is the requested node
+ range partially overlaps with two different nodes, where the
requested node ends before min_addr
- region cannot be allocated in the specific node requested, but it can be
allocated in the requested range:
+ range overlaps with multiple nodes along node boundaries, and the
requested node ends before min_addr
+ range overlaps with multiple nodes along node boundaries, and the
requested node starts after max_addr
- region cannot be allocated in the specific node requested, but it can be
allocated after dropping min_addr:
+ range partially overlaps with two different nodes, where the
second node is the requested node
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/84009c5b3969337ccf89df850db56d364f8c228b.1663046060.git.remckee0@gmail.com
|
|
Add function setup_numa_memblock() for setting up a memory layout with
multiple NUMA nodes in a previously allocated dummy physical memory.
This function can be used in place of setup_memblock() in tests that need
to simulate a NUMA system.
setup_numa_memblock():
- allows for setting up a memory layout by specifying the fraction of
MEM_SIZE in each node
Set CONFIG_NODES_SHIFT to 4 when building with NUMA=1 to allow for up to
16 NUMA nodes.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/4566d816a85f009268d4858d1ef06c7571a960f9.1663046060.git.remckee0@gmail.com
|
|
Move variable declarations to a single block at the beginning of each
testing function.
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/e61431e73977f305fdd027bca99d1dc119e96d84.1662264355.git.remckee0@gmail.com
|
|
The tests in alloc_nid_api can now run either memblock_alloc_try_nid()
or memblock_alloc_try_nid_raw(). The comment blocks for these tests
should not refer to a 'cleared' region since that only applies to
memblock_alloc_try_nid(). Remove 'cleared' from the comment blocks so
that the comments are accurate for either memblock function.
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/e8be24137e54e9f81a06af969ded82b319114d7a.1662264347.git.remckee0@gmail.com
|
|
Add tests for memblock_trim_memory() for the following scenarios:
- all regions aligned
- one unaligned region that is smaller than the alignment
- one unaligned region that is unaligned at the base
- one unaligned region that is unaligned at the end
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/0e5f55154a3b66581e04ba3717978795cbc08a5b.1661578349.git.remckee0@gmail.com
|
|
Add simple tests for memblock_set_bottom_up() and memblock_bottom_up().
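Something along these lines (a simplified sketch; the function name is illustrative and the real checks live in the test suite):
/* Minimal sketch: the allocation direction set through the API is the
 * one reported back by memblock_bottom_up(). */
static int memblock_bottom_up_roundtrip_check(void)
{
	memblock_set_bottom_up(false);
	ASSERT_EQ(memblock_bottom_up(), false);

	memblock_set_bottom_up(true);
	ASSERT_EQ(memblock_bottom_up(), true);

	return 0;
}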
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/b03701d2faeaf00f7184e4b72903de4e5e939437.1661578349.git.remckee0@gmail.com
|
|
Update memblock_alloc_try_nid() tests so that they test either
memblock_alloc_try_nid() or memblock_alloc_try_nid_raw() depending on the
value of alloc_nid_test_flags. Run through all the existing tests in
alloc_nid_api twice: once for memblock_alloc_try_nid() and once for
memblock_alloc_try_nid_raw().
When the tests run memblock_alloc_try_nid(), they test that the entire
memory region is zero. When the tests run memblock_alloc_try_nid_raw(),
they test that the entire memory region is nonzero. The content of the
memory region is initialized to nonzero, and we expect it to remain
unchanged if running memblock_alloc_try_nid_raw().
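Conceptually, the post-allocation check is the following (the helper name is illustrative and the suite's actual asserts differ):
/*
 * Illustrative helper: a zeroing allocator must hand back an all-zero
 * region, while the _raw variant must leave the nonzero pattern that
 * the test wrote into the backing memory beforehand untouched.
 */
static void check_mem_content(const char *mem, size_t size, bool raw)
{
	size_t i;

	for (i = 0; i < size; i++) {
		if (raw)
			ASSERT_NE(mem[i], 0);	/* pattern survived the _raw alloc */
		else
			ASSERT_EQ(mem[i], 0);	/* memblock_alloc_try_nid() zeroed it */
	}
}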
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/6fa8938f67872841c10a00afb042947d1d280a04.1661578349.git.remckee0@gmail.com
|
|
Update memblock_alloc() tests so that they test either memblock_alloc()
or memblock_alloc_raw() depending on the value of alloc_test_flags. Run
through all the existing tests in memblock_alloc_api twice: once for
memblock_alloc() and once for memblock_alloc_raw().
When the tests run memblock_alloc(), they test that the entire memory
region is zero. When the tests run memblock_alloc_raw(), they test that
the entire memory region is nonzero. The content of the memory region is
initialized to nonzero, and we expect it to remain unchanged if running
memblock_alloc_raw().
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/5a7cfb2f807ee2cb53ee77f9f5c910107b253d6e.1661578349.git.remckee0@gmail.com
|
|
Add tests for memblock_add(), memblock_reserve(), memblock_remove(),
memblock_free(), and memblock_alloc() for the following test scenarios.
memblock_add() and memblock_reserve():
- add/reserve a memory block in the gap between two existing memory
blocks, and check that the blocks are merged into one region
- try to add/reserve memblock regions that extend past PHYS_ADDR_MAX
memblock_remove() and memblock_free():
- remove/free a region when it is the only available region
+ These tests ensure that the first region is overwritten with a
"dummy" region when the last remaining region of that type is
removed or freed.
- remove/free() a region that overlaps with two existing regions of the
relevant type
- try to remove/free memblock regions that extend past PHYS_ADDR_MAX
memblock_alloc():
- try to allocate a region that is larger than the total size of available
memory (memblock.memory)
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/c23c0393c5b9a53fe7f676996913c629495e9727.1661578349.git.remckee0@gmail.com
|
|
Generic tests for memblock_alloc*() functions do not use separate
functions for testing top-down and bottom-up allocation directions.
Therefore, the function name that is displayed in the verbose testing
output does not include the allocation direction.
Add an additional prefix when running generic tests for
memblock_alloc*() functions that indicates which allocation direction is
set. The prefix will be displayed when the tests are run in verbose mode.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/fb76a42253d2a196a7daea29dd8121a69904f58e.1661578349.git.remckee0@gmail.com
|
|
Update the assert in memblock_alloc_try_nid() and memblock_alloc_from()
tests that checks whether the memory is cleared so that it checks the
entire chunk of allocated memory instead of just the first byte.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/24b3271751756100142e65b75284d43b4d30c9b7.1661578349.git.remckee0@gmail.com
|
|
Add an assert in memblock_alloc() tests where allocation is expected to
occur. The assert checks whether the entire chunk of allocated memory is
cleared.
The current memblock_alloc() tests do not check whether the allocated
memory was zeroed. memblock_alloc() should zero the allocated memory since
it is a wrapper for memblock_alloc_try_nid().
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/83ffb941b65074f40eb14552f8bfe5b71fe50abd.1661578349.git.remckee0@gmail.com
|
|
The VERBOSE build option was replaced with the --verbose runtime option,
but the comments describing the ASSERT_*() macros still refer to the
VERBOSE build option. Update these comments so that they refer to the
--verbose runtime option.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/5f8a4c2bde34cc029282c68d47eda982d950f421.1660451025.git.remckee0@gmail.com
|
|
Add a help command line option and document it in the help message. Add
the option to the short and long option lists so that it is recognized
as a valid option.
Usage:
$ ./main -h
Or:
$ ./main --help
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/0f3b93a79de78c0da1ca90f74fe35e9a85c7cf93.1660451025.git.remckee0@gmail.com
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
Pull memblock updates from Mike Rapoport:
- An optimization in memblock_add_range() to reduce array traversals
- Improvements to the memblock test suite
* tag 'memblock-v5.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
memblock test: Modify the obsolete description in README
memblock tests: fix compilation errors
memblock tests: change build options to run-time options
memblock tests: remove completed TODO items
memblock tests: set memblock_debug to enable memblock_dbg() messages
memblock tests: add verbose output to memblock tests
memblock tests: Makefile: add arguments to control verbosity
memblock: avoid some repeat when add new range
|
|
The VERBOSE build option has been removed from the Makefile, but the
README still describes it. The memblock tests now use the `-v` runtime
option to print information, so replace the obsolete description with
the new usage.
Signed-off-by: Shaoqin Huang <shaoqin.huang@intel.com>
Acked-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20220729161125.190845-1-shaoqin.huang@intel.com
|
|
Running 'make -C tools/testing/memblock' produces the following errors:
memblock.o: In function `memblock_find_in_range.constprop.9':
memblock.c:(.text+0x4651): undefined reference to `pr_warn_ratelimited'
memblock.o: In function `memblock_mark_mirror':
memblock.c:(.text+0x7171): undefined reference to `mirrored_kernelcore'
Fixes: 902c2d91582c ("memblock: Disable mirror feature if kernelcore is not specified")
Fixes: 14d9a675fd0d ("mm: Ratelimited mirrored memory related warning messages")
Signed-off-by: Liu Xinpeng <liuxp11@chinatelecom.cn>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/1658916453-26312-1-git-send-email-liuxp11@chinatelecom.cn
|
|
Change verbose and movable node build options to run-time options.
Movable node usage:
$ ./main -m
Or:
$ ./main --movable-node
Verbose usage:
$ ./main -v
Or:
$ ./main --verbose
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20220714031717.12258-1-remckee0@gmail.com
|
|
Remove completed items from TODO list.
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/6a3e74fcb51a07e8d9fbbcbe84bdb8aa8b00e843.1656907314.git.remckee0@gmail.com
|
|
If the Memblock simulator was compiled with MEMBLOCK_DEBUG=1, set
memblock_debug to 1 so that memblock_dbg() prints debug information
when memblock functions are tested in the Memblock simulator.
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/aee4200cce1c09992ed055006a81fde1b6b5b567.1656907314.git.remckee0@gmail.com
|
|
Add and use functions and macros for printing verbose testing output.
If the Memblock simulator was compiled with VERBOSE=1:
- prefix_push(): appends the given string to a prefix string that will be
printed in test_fail() and test_pass*().
- prefix_pop(): removes the last prefix from the prefix string.
- prefix_reset(): clears the prefix string.
- test_fail(): prints a message after a test fails containing the test
number of the failing test and the prefix.
- test_pass(): prints a message after a test passes containing its test
number and the prefix.
- test_print(): prints the given formatted output string.
- test_pass_pop(): runs test_pass() followed by prefix_pop().
- PREFIX_PUSH(): runs prefix_push(__func__).
If the Memblock simulator was not compiled with VERBOSE=1, these
functions/macros do nothing.
Add the assert wrapper macros ASSERT_EQ(), ASSERT_NE(), and ASSERT_LT().
If the assert condition fails, these macros call test_fail() before
executing assert().
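For example, the equality wrapper has roughly this shape (simplified relative to the suite's actual header):
/* Report the failing test through test_fail() first, then let the
 * ordinary assert() abort the run as usual. */
#define ASSERT_EQ(_expected, _seen) do {	\
	if ((_expected) != (_seen))		\
		test_fail();			\
	assert((_expected) == (_seen));		\
} while (0)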
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/f234d443fe154d5ae8d8aa07284aff69edfb6f61.1656907314.git.remckee0@gmail.com
|
|
Add VERBOSE and MEMBLOCK_DEBUG user-provided arguments. VERBOSE will
enable verbose output from Memblock simulator. MEMBLOCK_DEBUG will enable
memblock_dbg() messages.
Update the help message to include VERBOSE and MEMBLOCK_DEBUG. Update
the README to include VERBOSE. The README does not include all available
options and refers to the help message for the remaining options.
Therefore, omit MEMBLOCK_DEBUG from README.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/5503f3efe82ecef5c99961a1d53003c8ad06cf27.1656907314.git.remckee0@gmail.com
|
|
mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count argument to kmemleak_alloc_phys()
Patch series "mm: kmemleak: store objects allocated with physical address
separately and check when scan", v4.
The kmemleak_*_phys() interface uses "min_low_pfn" and "max_low_pfn" to
check addresses, but on some architectures kmemleak_*_phys() is called
before those two variables are initialized. The following steps will be
taken:
1) Add OBJECT_PHYS flag and rbtree for the objects allocated
with physical address
2) Store physical address in objects if allocated with OBJECT_PHYS
3) Check the boundary when scanning instead of in kmemleak_*_phys()
This patch set will solve:
https://lore.kernel.org/r/20220527032504.30341-1-yee.lee@mediatek.com
https://lore.kernel.org/r/9dd08bb5-f39e-53d8-f88d-bec598a08c93@gmail.com
v3: https://lore.kernel.org/r/20220609124950.1694394-1-patrick.wang.shcn@gmail.com
v2: https://lore.kernel.org/r/20220603035415.1243913-1-patrick.wang.shcn@gmail.com
v1: https://lore.kernel.org/r/20220531150823.1004101-1-patrick.wang.shcn@gmail.com
This patch (of 4):
Remove the unused kmemleak_not_leak_phys() function, and drop the
min_count argument to kmemleak_alloc_phys(), assuming it to be 0.
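The interface change itself is small; for illustration, the prototype goes from the old form to the new one roughly as shown below:
/* Before this change, callers passed a min_count that was effectively
 * always 0:
 *
 *   void kmemleak_alloc_phys(phys_addr_t phys, size_t size,
 *                            int min_count, gfp_t gfp);
 *
 * After it, the parameter is gone and kmemleak assumes min_count == 0: */
void kmemleak_alloc_phys(phys_addr_t phys, size_t size, gfp_t gfp);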
Link: https://lkml.kernel.org/r/20220611035551.1823303-1-patrick.wang.shcn@gmail.com
Link: https://lkml.kernel.org/r/20220611035551.1823303-2-patrick.wang.shcn@gmail.com
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Yee Lee <yee.lee@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Remove completed item from TODO list.
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
|
|
Update comments in memblock_free_*() functions to match the style used
in tests/alloc_*.c by rewording to make the expected outcome more apparent
and, if more than one memblock is involved, adding a visual of the
memory blocks.
If the comment has an extra column of spaces, remove the extra space at
the beginning of each line for consistency and to conform to Linux kernel
coding style.
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
|
|
Update comments in memblock_remove_*() functions to match the style used
in tests/alloc_*.c by rewording to make the expected outcome more apparent
and, if more than one memblock is involved, adding a visual of the
memory blocks.
If the comment has an extra column of spaces, remove the extra space at
the beginning of each line for consistency and to conform to Linux kernel
coding style.
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
|
|
Update comments in memblock_reserve_*() functions to match the style used
in tests/alloc_*.c by rewording to make the expected outcome more apparent
and, if more than one memblock is involved, adding a visual of the
memory blocks.
If the comment has an extra column of spaces, remove the extra space at
the beginning of each line for consistency and to conform to Linux kernel
coding style.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
|
|
Update comments in memblock_add_*() functions to match the style used
in tests/alloc_*.c by rewording to make the expected outcome more apparent
and, if more than one memblock is involved, adding a visual of the
memory blocks.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
|
|
Add description of the project, its structure and how to run it.
List what is left to implement and what the known issues are.
Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/d5e39b9f7dcef177ebc14282727447bc21e3b38f.1646055639.git.karolinadrobnik@gmail.com
|
|
Add checks for memblock_alloc_try_nid for bottom up allocation direction.
As the definition of this function is pretty close to the core
memblock_alloc_range_nid, the test cases implemented here cover most of
the code paths related to the memory allocations.
The tested scenarios are:
- Region can be allocated within the requested range (both with aligned
and misaligned boundaries)
- Region can be allocated between two already existing entries
- Not enough space between already reserved regions
- Memory at the range boundaries is reserved but there is enough space
to allocate a new region
- The memory range is too narrow but memory can be allocated before
the maximum address
- Edge cases:
+ Minimum address is below memblock_start_of_DRAM()
+ Maximum address is above memblock_end_of_DRAM()
Add test case wrappers to test both directions in the same context.
Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/1c0ba11b8da5dc8f71ad45175c536fa4be720984.1646055639.git.karolinadrobnik@gmail.com
|
|
Add tests for memblock_alloc_try_nid for top down allocation direction.
As the definition of this function is pretty close to the core
memblock_alloc_range_nid, the test cases implemented here cover most of
the code paths related to the memory allocations.
The tested scenarios are:
- Region can be allocated within the requested range (both with aligned
and misaligned boundaries)
- Region can be allocated between two already existing entries
- Not enough space between already reserved regions
- Memory range is too narrow but memory can be allocated before
the maximum address
- Edge cases:
+ Minimum address is below memblock_start_of_DRAM()
+ Maximum address is above memblock_end_of_DRAM()
Add checks for both allocation directions:
- Region starts at the min_addr and ends at max_addr
- Maximum address is too close to the beginning of the available
memory
- Memory at the range boundaries is reserved but there is enough space
to allocate a new region
Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/d6c282e0f9f62c15bf74c216214604764232d637.1646055639.git.karolinadrobnik@gmail.com
|
|
Add checks for memblock_alloc_from for bottom up allocation direction.
The tested scenarios are:
- Not enough space to allocate memory at the minimal address
- Minimal address parameter is smaller than the start address
of the available memory
- Minimal address parameter is too close to the end of the available
memory
Add test case wrappers to test both directions in the same context.
Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/506cf5293c8a21c012b7ea87b14af07754d3e656.1646055639.git.karolinadrobnik@gmail.com
|
|
Add checks for memblock_alloc_from for default allocation direction.
The tested scenarios are:
- Not enough space to allocate memory at the minimal address
- Minimal address parameter is smaller than the start address
of the available memory
- Minimal address is too close to the available memory
Add simple memblock_alloc_from test that can be used to test both
allocation directions (minimal address is aligned or misaligned).
Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/3dd645f437975fd393010b95b8faa85d2b86490a.1646055639.git.karolinadrobnik@gmail.com
|
|
Add checks for memblock_alloc for bottom up allocation direction.
The tested scenarios are:
- Region can be allocated on the first fit (with and without
region merging)
- Region can be allocated on the second fit (with and without
region merging)
Add test case wrappers to test both directions in the same context.
Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/426674eee20d99dca49caf1ee0142a83dccbc98d.1646055639.git.karolinadrobnik@gmail.com
|
|
Add checks for memblock_alloc for top down allocation direction.
The tested scenarios are:
- Region can be allocated on the first fit (with and without
region merging)
- Region can be allocated on the second fit (with and without
region merging)
Add checks for both allocation directions:
- Region can be allocated between two already existing entries
- Limited memory available
- All memory is reserved
- No available memory registered with memblock
Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/26ccf409b8ff0394559d38d792b2afb24b55887c.1646055639.git.karolinadrobnik@gmail.com
|