author | Usama Arif <usama.arif@bytedance.com> | 2023-09-13 11:54:01 +0100
committer | Andrew Morton <akpm@linux-foundation.org> | 2023-10-04 10:32:30 -0700
commit | fde1c4ecf91640e5a95ec36b71ec2e8ec379ce40 (patch)
tree | ebbd7d77db3e2926ae632857ff6ee132e87c11da /mm/hugetlb_vmemmap.h
parent | 77e6c43e137c130138c3fbadc847351a83c4befe (diff)
mm: hugetlb: skip initialization of gigantic tail struct pages if freed by HVO
The new boot flow when it comes to initialization of gigantic pages is as
follows:
- At boot time, for a gigantic page during __alloc_bootmem_huge_page, the
region after the first struct page is marked as noinit.
- This results in only the first struct page being initialized in
reserve_bootmem_region. As the tail struct pages are not initialized at
this point, there can be a significant saving in boot time if HVO
succeeds later on.
- Later on in the boot, the head page is prepped and the first
HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page) - 1 tail struct pages
are initialized.
- HVO is attempted. If it is not successful, then the rest of the tail
struct pages are initialized. If it is successful, no more tail struct
pages need to be initialized, saving significant boot time; a rough
worked example of the savings follows this list.
The WARN_ON for an increased ref count in gather_bootmem_prealloc was changed
to a VM_BUG_ON. This is OK as there should be no speculative references
this early in the boot process. The VM_BUG_ONs are there just in case such
code is introduced.
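
For illustration only, a minimal sketch of the kind of check being described; the
helper name is hypothetical and the exact expression in gather_bootmem_prealloc
may differ:

```c
#include <linux/mm.h>
#include <linux/mmdebug.h>
#include <linux/page_ref.h>

/*
 * Hypothetical helper sketching the assertion: this early in boot no
 * speculative page references exist yet, so an elevated refcount can
 * only mean newly introduced buggy code.  VM_BUG_ON() compiles away
 * unless CONFIG_DEBUG_VM is set, which is why it is acceptable here
 * where a WARN_ON() was previously used.
 */
static inline void assert_unreferenced_bootmem_page(struct page *page)
{
	VM_BUG_ON(page_ref_count(page) != 1);
}
```
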
[akpm@linux-foundation.org: make it nicer for 80 cols]
Link: https://lkml.kernel.org/r/20230913105401.519709-5-usama.arif@bytedance.com
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Fam Zheng <fam.zheng@bytedance.com>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Punit Agrawal <punit.agrawal@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/hugetlb_vmemmap.h')
-rw-r--r-- | mm/hugetlb_vmemmap.h | 9 |
1 file changed, 5 insertions, 4 deletions
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 25bd0e002431..4573899855d7 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -10,15 +10,16 @@
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include <linux/hugetlb.h>
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
-int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
-void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
-
 /*
  * Reserve one vmemmap page, all vmemmap addresses are mapped to it. See
  * Documentation/vm/vmemmap_dedup.rst.
  */
 #define HUGETLB_VMEMMAP_RESERVE_SIZE	PAGE_SIZE
+#define HUGETLB_VMEMMAP_RESERVE_PAGES	(HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page))
+
+#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
+void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
 
 static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
 {