author:    Borislav Petkov <bp@suse.de>          2014-01-18 12:48:17 +0100
committer: Matt Fleming <matt.fleming@intel.com> 2014-03-04 16:17:18 +0000
commit:    b7b898ae0c0a82489511a1ce1b35f26215e6beb5 (patch)
tree:      79167e1a9fe59bf3e4b55772ccd294bf9cad2b11 /arch/x86/platform/efi/efi_64.c
parent:    42a5477251f0e0f33ad5f6a95c48d685ec03191e (diff)
x86/efi: Make efi virtual runtime map passing more robust
Currently, running SetVirtualAddressMap() and passing the physical
address of the virtual map array works only by lucky coincidence,
because that memory happens to be present in the EFI page table too.
Then Toshi booted this on a big HP box: the krealloc()-based resizing
of the memmap we're doing allocated from physical addresses which were
no longer mapped, and boom:
http://lkml.kernel.org/r/1386806463.1791.295.camel@misato.fc.hp.com
One way to take care of that issue is to reimplement the krealloc
scheme with whole pages instead. We start with contiguous pages of
order 1, i.e. 2 pages, and when we deplete that memory (it shouldn't
happen all that often, but you know firmware) we realloc into the next
power-of-two number of pages.
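As a rough illustration of that page-based realloc, here is a minimal sketch. It assumes a helper along the lines of realloc_pages() in arch/x86/platform/efi/efi.c (which is outside the diffstat view below) and the usual kernel headers (linux/gfp.h, linux/string.h); names and details are illustrative rather than the exact code:

static void *realloc_pages(void *old_memmap, int old_shift)
{
	void *ret;

	/* Grab twice as many contiguous pages as before, i.e. the next power of two. */
	ret = (void *)__get_free_pages(GFP_KERNEL, old_shift + 1);
	if (!ret)
		return NULL;

	/* A first-time allocation has nothing to copy over or free. */
	if (old_memmap) {
		memcpy(ret, old_memmap, PAGE_SIZE << old_shift);
		free_pages((unsigned long)old_memmap, old_shift);
	}

	return ret;
}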
With whole pages in hand, it is much easier to map them into the EFI
page table with the already existing mapping code, which we also use
for building the virtual mappings.
Thanks to Toshi Kani and Matt for the great debugging help.
Reported-by: Toshi Kani <toshi.kani@hp.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Toshi Kani <toshi.kani@hp.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Diffstat (limited to 'arch/x86/platform/efi/efi_64.c')
-rw-r--r--  arch/x86/platform/efi/efi_64.c | 32
1 file changed, 29 insertions(+), 3 deletions(-)
diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index e05c69b46f05..19280900ec25 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -137,12 +137,38 @@ void efi_sync_low_kernel_mappings(void)
 	       sizeof(pgd_t) * num_pgds);
 }
 
-void efi_setup_page_tables(void)
+int efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 {
+	pgd_t *pgd;
+
+	if (efi_enabled(EFI_OLD_MEMMAP))
+		return 0;
+
 	efi_scratch.efi_pgt = (pgd_t *)(unsigned long)real_mode_header->trampoline_pgd;
+	pgd = __va(efi_scratch.efi_pgt);
 
-	if (!efi_enabled(EFI_OLD_MEMMAP))
-		efi_scratch.use_pgd = true;
+	/*
+	 * It can happen that the physical address of new_memmap lands in memory
+	 * which is not mapped in the EFI page table. Therefore we need to go
+	 * and ident-map those pages containing the map before calling
+	 * phys_efi_set_virtual_address_map().
+	 */
+	if (kernel_map_pages_in_pgd(pgd, pa_memmap, pa_memmap, num_pages, _PAGE_NX)) {
+		pr_err("Error ident-mapping new memmap (0x%lx)!\n", pa_memmap);
+		return 1;
+	}
+
+	efi_scratch.use_pgd = true;
+
+
+	return 0;
+}
+
+void efi_cleanup_page_tables(unsigned long pa_memmap, unsigned num_pages)
+{
+	pgd_t *pgd = (pgd_t *)__va(real_mode_header->trampoline_pgd);
+
+	kernel_unmap_pages_in_pgd(pgd, pa_memmap, num_pages);
 }
 
 static void __init __map_region(efi_memory_desc_t *md, u64 va)
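For context, the two new functions are meant to bracket the firmware call itself. A hedged sketch of the intended calling sequence follows; the actual caller lives in arch/x86/platform/efi/efi.c and is not part of this diffstat-limited view, so the surrounding names used here (new_memmap, pg_shift, count, the memmap descriptor bookkeeping) are assumptions for illustration only:

	/* Ident-map the pages holding the new memmap in the EFI page table. */
	if (efi_setup_page_tables(__pa(new_memmap), 1 << pg_shift)) {
		pr_err("Error setting up EFI page tables; EFI runtime disabled\n");
		return;
	}

	status = phys_efi_set_virtual_address_map(memmap.desc_size * count,
						  memmap.desc_size,
						  memmap.desc_version,
						  (efi_memory_desc_t *)__pa(new_memmap));

	/* The temporary ident mapping is only needed for the call above. */
	efi_cleanup_page_tables(__pa(new_memmap), 1 << pg_shift);

	if (status != EFI_SUCCESS)
		panic("EFI call to SetVirtualAddressMap() failed!\n");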