author	Rafael J. Wysocki <rafael.j.wysocki@intel.com>	2016-08-08 15:31:31 +0200
committer	Rafael J. Wysocki <rafael.j.wysocki@intel.com>	2016-08-08 22:04:30 +0200
commit	e4630fdd47637168927905983205d7b7c5c08c09 (patch)
tree	3528e218e396d17ab5db1e74346e39b59424ec74	/arch/x86/include/asm/init.h
parent	c226fab474291e3c6ac5fa30a2b0778acc311e61 (diff)
x86/power/64: Always create temporary identity mapping correctly
The low-level resume-from-hibernation code on x86-64 uses kernel_ident_mapping_init() to create the temporary identity mapping, but that function assumes that the offset between kernel virtual addresses and physical addresses is aligned on the PGD level.

However, with a randomized identity mapping base, it may be aligned on the PUD level and if that happens, the temporary identity mapping created by set_up_temporary_mappings() will not reflect the actual kernel identity mapping and the image restoration will fail as a result (leading to a kernel panic most of the time).

To fix this problem, rework kernel_ident_mapping_init() to support unaligned offsets between KVA and PA up to the PMD level and make set_up_temporary_mappings() use it as appropriate.

Reported-and-tested-by: Thomas Garnier <thgarnie@google.com>
Reported-by: Borislav Petkov <bp@suse.de>
Suggested-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
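For context, a minimal sketch of how a resume-path caller might use the reworked interface: it fills in the new x86_mapping_info::offset field with the KVA-PA offset of the mapping being reproduced and then calls kernel_ident_mapping_init() over the mapped physical ranges. The helpers referenced here (alloc_pgt_page(), get_safe_page(), set_up_temporary_text_mapping(), pfn_mapped[]) belong to arch/x86/power/hibernate_64.c and are assumed from context; they are not part of this header diff.

static int set_up_temporary_mappings(void)
{
	struct x86_mapping_info info = {
		.alloc_pgt_page	= alloc_pgt_page,           /* page-table page allocator callback */
		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC, /* 2M executable pages */
		.offset		= __PAGE_OFFSET,            /* KVA - PA offset of the mapping */
	};
	unsigned long mstart, mend;
	pgd_t *pgd;
	int result;
	int i;

	pgd = (pgd_t *)get_safe_page(GFP_ATOMIC);
	if (!pgd)
		return -ENOMEM;

	/* Re-create the kernel text mapping at its (possibly randomized) base. */
	result = set_up_temporary_text_mapping(pgd);
	if (result)
		return result;

	/* Reproduce the mapping of every physical range the kernel had mapped. */
	for (i = 0; i < nr_pfn_mapped; i++) {
		mstart = pfn_mapped[i].start << PAGE_SHIFT;
		mend   = pfn_mapped[i].end << PAGE_SHIFT;

		result = kernel_ident_mapping_init(&info, pgd, mstart, mend);
		if (result)
			return result;
	}

	temp_level4_pgt = __pa(pgd);
	return 0;
}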
Diffstat (limited to 'arch/x86/include/asm/init.h')
-rw-r--r--	arch/x86/include/asm/init.h	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index 223042086f4e..737da62bfeb0 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -5,10 +5,10 @@ struct x86_mapping_info {
 	void *(*alloc_pgt_page)(void *); /* allocate buf for page table */
 	void *context;			 /* context for alloc_pgt_page */
 	unsigned long pmd_flag;		 /* page flag for PMD entry */
-	bool kernel_mapping;		 /* kernel mapping or ident mapping */
+	unsigned long offset;		 /* ident mapping offset */
 };
 
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
-			      unsigned long addr, unsigned long end);
+			      unsigned long pstart, unsigned long pend);
 
 #endif /* _ASM_X86_INIT_H */
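For reference, a sketch of how the renamed parameters and the new offset field fit together at the lowest level, modelled on the corresponding arch/x86/mm/ident_map.c change, which is not shown in this header diff: kernel_ident_mapping_init() now takes physical pstart/pend and walks virtual addresses starting at pstart + info->offset, and the PMD-level helper points each entry back at the physical address by subtracting the offset again.

static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
			   unsigned long addr, unsigned long end)
{
	addr &= PMD_MASK;
	for (; addr < end; addr += PMD_SIZE) {
		pmd_t *pmd = pmd_page + pmd_index(addr);

		if (pmd_present(*pmd))
			continue;

		/* addr is a virtual address; addr - info->offset is the PA. */
		set_pmd(pmd, __pmd((addr - info->offset) | info->pmd_flag));
	}
}

Because only the PMD entries encode the offset, any offset that is a multiple of PMD_SIZE works, which is what allows the KVA-PA offset to be unaligned at the PGD and PUD levels.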