| author | Fabio M. De Francesco <fabio.maria.de.francesco@linux.intel.com> | 2023-11-20 15:26:31 +0100 |
|---|---|---|
| committer | Andrew Morton <akpm@linux-foundation.org> | 2023-12-10 16:51:49 -0800 |
| commit | f2bcc99a5e901a13b754648d1dbab60f4adf9375 | |
| tree | 17f27c3f3702295d8a3d422f46f3092692fc141e /mm/mempool.c | |
| parent | 24d2613a6356f9c4a0b1b8e17f125562f6c8e11b | |
mm/mempool: replace kmap_atomic() with kmap_local_page()
kmap_atomic() has been deprecated in favor of kmap_local_page().
Therefore, replace kmap_atomic() with kmap_local_page().
kmap_atomic() is implemented like kmap_local_page(), except that it also
disables page faults and preemption (the latter only in !PREEMPT_RT
kernels). The kernel virtual addresses returned by these two APIs are only
valid in the context of the callers (i.e., they cannot be handed to other
threads).
With kmap_local_page() the mappings are per-thread and CPU-local, as with
kmap_atomic(); however, they can handle page faults and can be called from
any context (including interrupts). Tasks that call kmap_local_page() can
be preempted and, when they are scheduled to run again, the kernel virtual
addresses are restored and remain valid.
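A minimal sketch of this usage pattern (hypothetical code, not part of
this patch; the helper name and parameters are invented for illustration):

```c
#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Hypothetical helper: copy 'len' bytes out of a (possibly highmem) page.
 * The mapping is per-thread and CPU-local; the code between the map and
 * unmap may fault or be preempted, and the address remains valid when
 * the task is scheduled to run again.
 */
static void example_copy_from_page(struct page *page, void *dst, size_t len)
{
	void *addr = kmap_local_page(page);

	memcpy(dst, addr, len);		/* may fault or be preempted: still safe */
	kunmap_local(addr);		/* pass the address returned above */
}
```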
The code blocks between the mappings and unmappings don't rely on the
above-mentioned side effects of kmap_atomic(), so a mere replacement of
the old API with the new one is all they require (i.e., there is no need
to explicitly call pagefault_disable() and/or preempt_disable()).
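Had any of these blocks relied on those side effects, a plain swap would
not have sufficed; the conversion would have had to make the guarantees
explicit, roughly as in this hypothetical sketch (not part of this patch):

```c
#include <linux/highmem.h>
#include <linux/preempt.h>
#include <linux/uaccess.h>

/*
 * Hypothetical: only needed when the code between the mapping and the
 * unmapping depended on kmap_atomic()'s implicit side effects.
 */
static void example_atomic_style_access(struct page *page)
{
	void *addr = kmap_local_page(page);

	preempt_disable();	/* if stable per-CPU state was assumed */
	pagefault_disable();	/* if page faults had to be suppressed */
	/* ... code that required the old implicit guarantees ... */
	pagefault_enable();
	preempt_enable();
	kunmap_local(addr);
}
```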
Link: https://lkml.kernel.org/r/20231120142640.7077-1-fabio.maria.de.francesco@linux.intel.com
Signed-off-by: Fabio M. De Francesco <fabio.maria.de.francesco@linux.intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/mempool.c')
-rw-r--r-- | mm/mempool.c | 8 |
1 file changed, 4 insertions(+), 4 deletions(-)
```diff
diff --git a/mm/mempool.c b/mm/mempool.c
index 734bcf5afbb7..b3d2084fd989 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -64,10 +64,10 @@ static void check_element(mempool_t *pool, void *element)
 	} else if (pool->free == mempool_free_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_atomic((struct page *)element);
+		void *addr = kmap_local_page((struct page *)element);
 
 		__check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_atomic(addr);
+		kunmap_local(addr);
 	}
 }
 
@@ -89,10 +89,10 @@ static void poison_element(mempool_t *pool, void *element)
 	} else if (pool->alloc == mempool_alloc_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_atomic((struct page *)element);
+		void *addr = kmap_local_page((struct page *)element);
 
 		__poison_element(addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_atomic(addr);
+		kunmap_local(addr);
 	}
 }
 #else /* CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON */
```
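A side note for readers of conversions like this one: kunmap_local() takes
the kernel virtual address returned by kmap_local_page(), and nested local
mappings must be released in reverse (LIFO) order. A hypothetical sketch of
that constraint (the in-tree copy_highpage() uses the same nested pattern):

```c
#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical: copy one page's contents into another. */
static void example_copy_page_contents(struct page *dst, struct page *src)
{
	void *d = kmap_local_page(dst);
	void *s = kmap_local_page(src);

	memcpy(d, s, PAGE_SIZE);
	kunmap_local(s);	/* mapped last, unmapped first */
	kunmap_local(d);
}
```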