author     Minchan Kim <minchan@kernel.org>    2016-10-26 17:57:12 +0200
committer  Michal Hocko <mhocko@suse.com>      2016-10-26 18:12:24 +0200
commit     ec8e3c7bc7e4ed3b720f6a1c165b6aa147e39df5 (patch)
tree       908ea246d6b16ffccfe6dcb600cbf988604c1fc3
parent     efbf0e57aca747cb05887b7643399861c3c075be (diff)
mm: make unreserve highatomic functions reliable
Currently, unreserve_highatomic_pageblock bails out as soon as it finds a
highatomic pageblock, regardless of whether any free pages were actually
moved out of it. This defeats the goal of the unreserve logic, which is to
save a process from OOM.
This patch makes the unreserve functions bail out only once some pages have
actually been moved to a !highatomic free list, avoiding such false
positives.
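For illustration only, here is a minimal, self-contained sketch of that
control-flow change; the struct and function names below are made up for
the example and are not the kernel API. The point is that success is
reported only when pages were really moved, not on the first reserved
block found.
#include <stdbool.h>
#include <stddef.h>
/* Illustrative stand-ins for a reserved pageblock and the move operation. */
struct block { int nr_free; };
/* Returns how many pages were moved off the reserve; may be 0. */
static int move_free_pages(struct block *b)
{
        int moved = b->nr_free;
        b->nr_free = 0;
        return moved;
}
static bool unreserve_blocks(struct block *blocks, size_t n)
{
        size_t i;
        for (i = 0; i < n; i++) {
                int moved = move_free_pages(&blocks[i]);
                /*
                 * Old behaviour: return after the first block found, even
                 * when moved == 0, falsely signalling progress to the
                 * caller.  New behaviour: keep scanning until some pages
                 * really move.
                 */
                if (moved > 0)
                        return true;
        }
        return false;
}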
Another potential problem is that, due to a race between page freeing and
the reserve-highatomic path, pages can sit on the highatomic free list even
though the pageblock's migratetype is no longer highatomic. In that case,
unreserve_highatomic_pageblock can become a no-op when the highatomic
reserve count is less than pageblock_nr_pages. We can solve this simply by
draining all of the reserved pages before the OOM; it acts as a safeguard
that exhausts the reserved pages before converging on OOM.
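Again purely as an illustration (not the kernel code), the force semantics
can be sketched on top of the helper above: the reclaim-retry path keeps at
least one block in reserve, while the last attempt before OOM drains
everything.
/*
 * Sketch of the force parameter, building on unreserve_blocks() above
 * (illustrative names only, not the kernel API).
 */
static bool unreserve_blocks_force(struct block *blocks, size_t n,
                                   bool force)
{
        /* Preserve at least one block unless memory pressure is extreme. */
        if (!force && n <= 1)
                return false;
        return unreserve_blocks(blocks, n);
}
/*
 * Callers: the retry path would pass force=false; right before declaring
 * OOM, force=true exhausts the reserve as a last safeguard.
 */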
Link: http://lkml.kernel.org/r/1476259429-18279-5-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sangseok Lee <sangseok.lee@lge.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-rw-r--r--   mm/page_alloc.c   24
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c45c4a158d52..0fbfead6aa7d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2053,8 +2053,12 @@ out_unlock:
  * potentially hurts the reliability of high-order allocations when under
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
+ *
+ * If @force is true, try to unreserve a pageblock even though highatomic
+ * pageblock is exhausted.
  */
-static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+						bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2066,8 +2070,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
-		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		/*
+		 * Preserve at least one pageblock unless memory pressure
+		 * is really high.
+		 */
+		if (!force && zone->nr_reserved_highatomic <=
+					pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2112,8 +2120,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			ret = move_freepages_block(zone, page, ac->migratetype);
-			spin_unlock_irqrestore(&zone->lock, flags);
-			return ret;
+			if (ret) {
+				spin_unlock_irqrestore(&zone->lock, flags);
+				return ret;
+			}
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
@@ -3317,7 +3327,7 @@ retry:
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
+		unreserve_highatomic_pageblock(ac, false);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3436,7 +3446,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	 */
 	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
 		/* Before OOM, exhaust highatomic_reserve */
-		return unreserve_highatomic_pageblock(ac);
+		return unreserve_highatomic_pageblock(ac, true);
 	}
 
 	/*