author		Oleg Nesterov <oleg@redhat.com>			2014-01-10 12:41:25 +1100
committer	Stephen Rothwell <sfr@canb.auug.org.au>		2014-01-10 12:41:25 +1100
commit		2f7031a8a8ddda927f7f2e12db8d961488ac0f52
tree		2bee39065be0ab9e17d87c3b728af849a102f828 /mm
parent		c1a8efd88f3d19700fcc8516a4de894aabb7f072
mm: fix the theoretical compound_lock() vs prep_new_page() race
get/put_page(thp_tail) paths do get_page_unless_zero(page_head) +
compound_lock(). In theory this page_head can already have been freed
and reallocated via alloc_pages(__GFP_COMP, smaller_order). In that
case get_page_unless_zero() can succeed right after
set_page_refcounted(), and compound_lock() can then race with the
non-atomic __SetPageHead() in prep_compound_page().
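To make the window concrete, here is a minimal sketch of the two racing
paths (the function names are real; the interleaving annotations are
mine; compound_lock() takes a bit spinlock, PG_compound_lock, in
page_head->flags):

	/* get/put_page(thp_tail) side */
	if (get_page_unless_zero(page_head)) {
		/* atomically sets PG_compound_lock in page_head->flags */
		flags = compound_lock_irqsave(page_head);
		...
	}

	/* allocator side: prep_new_page() before this patch */
	set_page_refcounted(page);	/* _count = 1, reader above can succeed */
	...
	__SetPageHead(page);		/* in prep_compound_page(): non-atomic
					   RMW of page->flags, can overwrite a
					   concurrently set PG_compound_lock */

The non-atomic __set_bit() in __SetPageHead() reads, modifies, and
writes back the whole flags word, so it can simply wipe out the
PG_compound_lock bit set by the racing compound_lock().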
Perhaps we should rework the THP locking (this is under discussion),
but until then this patch moves set_page_refcounted() and adds an
smp_wmb() to ensure that the page->_count != 0 store is the last
change to the page that becomes visible.
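The resulting order in prep_new_page() is then (sketch of the hunk
below; the pairing barrier on the reader side is, as far as I can
tell, implied by the atomic operation inside get_page_unless_zero()):

	if (order && (gfp_flags & __GFP_COMP))
		prep_compound_page(page, order);	/* non-atomic __SetPage*() */
	smp_wmb();					/* flag stores before _count */
	set_page_refcounted(page);			/* publish the page */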
I am not sure about other callers of set_page_refcounted(), but at first
glance they look fine to me.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/page_alloc.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f81bc374ca8..42b64528bbb3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -890,8 +890,6 @@ static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
 	}
 
 	set_page_private(page, 0);
-	set_page_refcounted(page);
-
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
 
@@ -901,6 +899,16 @@ static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
 	if (order && (gfp_flags & __GFP_COMP))
 		prep_compound_page(page, order);
 
+	/*
+	 * Make sure the caller of get_page_unless_zero() will see the
+	 * fully initialized page. Say, to ensure that compound_lock()
+	 * can't race with the non-atomic __SetPage*() above.
+	 */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	smp_wmb();
+#endif
+	set_page_refcounted(page);
+
 	return 0;
 }
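Note that the smp_wmb() is compiled in only on
CONFIG_TRANSPARENT_HUGEPAGE kernels; presumably this is fine because
compound_lock() is a no-op without THP, so non-THP builds do not pay
for a barrier on every page allocation.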