author | Fabio M. De Francesco <fabio.maria.de.francesco@linux.intel.com> | 2023-11-20 15:28:23 +0100
committer | Andrew Morton <akpm@linux-foundation.org> | 2023-12-10 16:51:50 -0800
commit | f542b8e582abd93df092c4a2763679e380f14645 (patch)
tree | 2361686c5f92a18d525b9cff94465e74fe98afc0
parent | f2bcc99a5e901a13b754648d1dbab60f4adf9375 (diff)
mm/page_poison: replace kmap_atomic() with kmap_local_page()
kmap_atomic() has been deprecated in favor of kmap_local_page().
Therefore, replace kmap_atomic() with kmap_local_page().
kmap_atomic() is implemented as a kmap_local_page() that additionally disables
page faults and preemption (the latter only on !PREEMPT_RT kernels). The
kernel virtual addresses returned by these two APIs are only valid in the
context of the caller (i.e., they cannot be handed to other threads).
With kmap_local_page() the mappings are per-thread and CPU-local, as with
kmap_atomic(); however, they can handle page faults and can be acquired from
any context (including interrupts). A task that calls kmap_local_page() can
be preempted and, when it is scheduled to run again, the kernel virtual
addresses are restored and remain valid.
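For illustration only, here is a minimal sketch of the kmap_local_page()/kunmap_local() pattern, including the reverse (stack) unmap order required when more than one page is mapped at a time. The helper below is hypothetical and is not part of this patch:

```c
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Hypothetical helper, for illustration only: copy one page into another
 * through short-lived local mappings.  The mappings are per-thread and
 * CPU-local, may take page faults, and must be released in reverse
 * (stack) order of acquisition.
 */
static void copy_page_local(struct page *dst, struct page *src)
{
	void *vdst = kmap_local_page(dst);
	void *vsrc = kmap_local_page(src);

	memcpy(vdst, vsrc, PAGE_SIZE);

	kunmap_local(vsrc);	/* unmap in reverse order of mapping */
	kunmap_local(vdst);
}
```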
The code blocks between the mappings and un-mappings do not rely on the
above-mentioned side effects of kmap_atomic(), so a mere one-to-one
replacement of the old API with the new one is all they require (i.e., there
is no need to explicitly call pagefault_disable() and/or preempt_disable()).
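For contrast, a hedged sketch of what a conversion would look like if the mapped section did depend on those implicit side effects; nothing in page_poison.c needs this, and the function name below is made up for the example:

```c
#include <linux/highmem.h>
#include <linux/preempt.h>
#include <linux/string.h>
#include <linux/uaccess.h>

/*
 * Hypothetical counter-example: only if the code between the mapping and
 * the un-mapping really relied on kmap_atomic()'s implicit behaviour would
 * the conversion need to make that behaviour explicit around
 * kmap_local_page().
 */
static void clear_page_nopreempt(struct page *page)
{
	void *addr;

	preempt_disable();	/* e.g. per-CPU state is touched */
	pagefault_disable();	/* e.g. faults must not be taken */

	addr = kmap_local_page(page);
	memset(addr, 0, PAGE_SIZE);
	kunmap_local(addr);

	pagefault_enable();
	preempt_enable();
}
```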
Link: https://lkml.kernel.org/r/20231120142836.7219-1-fabio.maria.de.francesco@linux.intel.com
Signed-off-by: Fabio M. De Francesco <fabio.maria.de.francesco@linux.intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-rw-r--r-- | mm/page_poison.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
```diff
diff --git a/mm/page_poison.c b/mm/page_poison.c
index b4f456437b7e..3e9037363cf9 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -21,13 +21,13 @@ early_param("page_poison", early_page_poison_param);
 
 static void poison_page(struct page *page)
 {
-	void *addr = kmap_atomic(page);
+	void *addr = kmap_local_page(page);
 
 	/* KASAN still think the page is in-use, so skip it. */
 	kasan_disable_current();
 	memset(kasan_reset_tag(addr), PAGE_POISON, PAGE_SIZE);
 	kasan_enable_current();
-	kunmap_atomic(addr);
+	kunmap_local(addr);
 }
 
 void __kernel_poison_pages(struct page *page, int n)
@@ -77,7 +77,7 @@ static void unpoison_page(struct page *page)
 {
 	void *addr;
 
-	addr = kmap_atomic(page);
+	addr = kmap_local_page(page);
 	kasan_disable_current();
 	/*
 	 * Page poisoning when enabled poisons each and every page
@@ -86,7 +86,7 @@ static void unpoison_page(struct page *page)
 	 */
 	check_poison_mem(page, kasan_reset_tag(addr), PAGE_SIZE);
 	kasan_enable_current();
-	kunmap_atomic(addr);
+	kunmap_local(addr);
 }
 
 void __kernel_unpoison_pages(struct page *page, int n)
```