author | Linus Torvalds <torvalds@linux-foundation.org> | 2023-06-15 16:17:48 -0700
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2023-06-24 14:12:57 -0700
commit | eda0047296a16d65a7f2bc60a408f70d178b2014 (patch)
tree | 934ab40aff1cafd4ca1da8a2c5ead6aba7adabe1 /mm
parent | c2508ec5a58db67093f4fb8bf89a9a7c53a109e9 (diff)
mm: make the page fault mmap locking killable
This is done as a separate patch from introducing the new
lock_mm_and_find_vma() helper, because while it's an obvious change,
it's not what x86 used to do in this area.
We already abort the page fault on fatal signals anyway, so why should
we wait for the mmap lock only to then abort later? With the new helper
function that returns without the lock held on failure anyway, this is
particularly easy and straightforward.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
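For reference, the caller-visible contract of the killable lock variants used in the diff below: mmap_read_lock_killable() and mmap_write_lock_killable() return 0 once the lock is held, and -EINTR if a fatal signal arrives while sleeping on it, in which case the lock is not held. A minimal sketch of the resulting pattern, assuming in-kernel context; the wrapper name and comments here are illustrative and not part of the patch:

#include <linux/mmap_lock.h>	/* mmap_read_lock_killable() */
#include <linux/mm_types.h>	/* struct mm_struct */

/*
 * Illustrative wrapper (hypothetical name): take the mmap lock for read,
 * but give up if a fatal signal is delivered while waiting.  Returns true
 * with the lock held, or false with the lock NOT held, so the fault can
 * simply be abandoned; it was going to be aborted on the fatal signal
 * anyway.
 */
static bool lock_mm_killable_sketch(struct mm_struct *mm)
{
	/* 0 on success, -EINTR if interrupted by a fatal signal */
	return !mmap_read_lock_killable(mm);
}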
Diffstat (limited to 'mm')
-rw-r--r-- | mm/memory.c | 6
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 1a427097b71f..1dff248805bf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5279,8 +5279,7 @@ static inline bool get_mmap_lock_carefully(struct mm_struct *mm, struct pt_regs
 		return false;
 	}
 
-	mmap_read_lock(mm);
-	return true;
+	return !mmap_read_lock_killable(mm);
 }
 
 static inline bool mmap_upgrade_trylock(struct mm_struct *mm)
@@ -5304,8 +5303,7 @@ static inline bool upgrade_mmap_lock_carefully(struct mm_struct *mm, struct pt_r
 		if (!search_exception_tables(ip))
 			return false;
 	}
-	mmap_write_lock(mm);
-	return true;
+	return !mmap_write_lock_killable(mm);
 }
 
 /*
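As a follow-up note, the failure propagates to arch fault handlers through lock_mm_and_find_vma(): it already returned NULL with the mmap lock not held when no suitable VMA was found, and with this patch the same happens when the wait for the lock is interrupted by a fatal signal. A hedged sketch of the caller-side shape, loosely modeled on that pattern; everything except lock_mm_and_find_vma() is an illustrative name, not verbatim arch code:

#include <linux/mm.h>		/* lock_mm_and_find_vma(), struct vm_area_struct */
#include <linux/ptrace.h>	/* struct pt_regs */

/*
 * Illustrative caller shape: on failure the helper returns NULL and the
 * mmap lock is not held, so the fault is abandoned with no unlock.  That
 * now covers the fatal-signal-while-waiting case as well.
 */
static struct vm_area_struct *fault_lookup_vma_sketch(struct mm_struct *mm,
						      unsigned long address,
						      struct pt_regs *regs)
{
	struct vm_area_struct *vma = lock_mm_and_find_vma(mm, address, regs);

	if (!vma)
		return NULL;	/* no VMA, or fatal signal while waiting for the lock */

	return vma;		/* mmap lock held for read from here on */
}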