author     Linus Torvalds <torvalds@linux-foundation.org>   2019-04-14 15:09:40 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2019-04-14 15:09:40 -0700
commit     6b3a707736301c2128ca85ce85fb13f60b5e350a
tree       2bf1892cf29121150adece8d1221ecd513a4e792 /include/linux/mm.h
parent     4443f8e6ac7755cd775c70d08be8042dc2f936cb
parent     15fab63e1e57be9fdb5eec1bbc5916e9825e9acb
Merge branch 'page-refs' (page ref overflow)
Merge page ref overflow branch.
Jann Horn reported that he can overflow the page ref count with
sufficient memory (and a filesystem that is intentionally extremely
slow).
Admittedly it's not exactly easy. To have more than four billion
references to a page requires a minimum of 32GB of kernel memory just
for the pointers to the pages, much less any metadata to keep track of
those pointers. Jann needed a total of 140GB of memory and a specially
crafted filesystem that leaves all reads pending (in order to not ever
free the page references and just keep adding more).
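As a back-of-the-envelope check of that figure (an illustration, not part of the patch): wrapping a 32-bit reference count takes more than 2^32 references, and each reference held through a 64-bit pointer costs at least 8 bytes, which is where the 32GB comes from.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* More than 2^32 references are needed to wrap a 32-bit count. */
	uint64_t refs = 1ULL << 32;
	/* Each reference held via a pointer needs at least 8 bytes on 64-bit. */
	uint64_t bytes = refs * sizeof(void *);

	printf("%llu GiB just for the pointers\n",
	       (unsigned long long)(bytes >> 30));	/* prints 32 */
	return 0;
}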
Still, we have a fairly straightforward way to limit the two obvious
user-controllable sources of page references: direct-IO like page
references gotten through get_user_pages(), and the splice pipe page
duplication. So let's just do that.
* branch page-refs:
fs: prevent page refcount overflow in pipe_buf_get
mm: prevent get_user_pages() from overflowing page refcount
mm: add 'try_get_page()' helper function
mm: make page ref count overflow check tighter and more explicit
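The common shape of the fixes listed above is that the user-triggerable paths take their references through a checked "try" helper that refuses the reference instead of letting the counter wrap, while get_page() keeps its unconditional increment but gains a tighter debug assertion. A minimal userspace analog of that pattern, with made-up names and C11 atomics rather than the kernel's page_ref API, might look like:

#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

struct object {
	atomic_int refcount;		/* stand-in for page->_refcount */
};

/* Refuse the reference if the count is already bad or about to overflow,
 * instead of incrementing unconditionally and eventually wrapping. */
static bool object_try_get(struct object *obj)
{
	int count = atomic_load(&obj->refcount);

	do {
		if (count <= 0 || count == INT_MAX)
			return false;	/* freed, corrupted, or at the limit */
	} while (!atomic_compare_exchange_weak(&obj->refcount, &count, count + 1));

	return true;
}

int main(void)
{
	struct object obj = { .refcount = 1 };

	if (object_try_get(&obj)) {
		/* ... use the object, then drop the extra reference ... */
		atomic_fetch_sub(&obj.refcount, 1);
	}
	return 0;
}

The kernel's try_get_page() in the diff below is simpler, a warning plus an early return on a non-positive count; the compare-and-swap loop here is only to keep the userspace sketch race-free.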
Diffstat (limited to 'include/linux/mm.h')
-rw-r--r--   include/linux/mm.h   15
 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 76769749b5a5..6b10c21630f5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -966,6 +966,10 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 }
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
+/* 127: arbitrary random number, small enough to assemble well */
+#define page_ref_zero_or_close_to_overflow(page) \
+	((unsigned int) page_ref_count(page) + 127u <= 127u)
+
 static inline void get_page(struct page *page)
 {
 	page = compound_head(page);
@@ -973,8 +977,17 @@ static inline void get_page(struct page *page)
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_refcount.
 	 */
-	VM_BUG_ON_PAGE(page_ref_count(page) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
+	page_ref_inc(page);
+}
+
+static inline __must_check bool try_get_page(struct page *page)
+{
+	page = compound_head(page);
+	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
+		return false;
 	page_ref_inc(page);
+	return true;
 }
 
 static inline void put_page(struct page *page)
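To see concretely which counts the new check flags, here is a userspace re-statement of the macro's arithmetic (an illustration only, not kernel code): a count of 0 is flagged, as is any count within 127 increments of wrapping the 32-bit counter past zero (-127..-1 when read as a signed int), while ordinary positive counts pass.

#include <stdio.h>

/* Same arithmetic as page_ref_zero_or_close_to_overflow(), applied to a
 * plain integer instead of a struct page refcount. */
#define ref_zero_or_close_to_overflow(count) \
	((unsigned int)(count) + 127u <= 127u)

int main(void)
{
	int samples[] = { 0, 1, 2, -1, -127, -128, 2147483647 };

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("count %11d -> %s\n", samples[i],
		       ref_zero_or_close_to_overflow(samples[i]) ? "flagged" : "ok");
	return 0;
}

The unsigned addition folds "zero" and "close to wrap-around" into a single compare, and 127 is presumably chosen because such a small constant encodes compactly in the generated code (the "assemble well" in the comment).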