path: root/lib
2018-11-25  iov_iter: teach csum_and_copy_to_iter() to handle pipe-backed ones  (Al Viro, 1 file, -1/+37)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2018-11-24  Merge tag 'xarray-4.20-rc4' of git://git.infradead.org/users/willy/linux-dax  (Linus Torvalds, 2 files, -82/+107)
Pull XArray updates from Matthew Wilcox:
 "We found some bugs in the DAX conversion to XArray (and one bug which predated the XArray conversion). There were a couple of bugs in some of the higher-level functions, which aren't actually being called in today's kernel, but surfaced as a result of converting existing radix tree & IDR users over to the XArray.

 Some of the other changes to how the higher-level APIs work were also motivated by converting various users; again, they're not in use in today's kernel, so changing them has a low probability of introducing a bug.

 Dan can still trigger a bug in the DAX code with hot-offline/online, and we're working on tracking that down"

* tag 'xarray-4.20-rc4' of git://git.infradead.org/users/willy/linux-dax:
  XArray tests: Add missing locking
  dax: Avoid losing wakeup in dax_lock_mapping_entry
  dax: Fix huge page faults
  dax: Fix dax_unlock_mapping_entry for PMD pages
  dax: Reinstate RCU protection of inode
  dax: Make sure the unlocking entry isn't locked
  dax: Remove optimisation from dax_lock_mapping_entry
  XArray tests: Correct some 64-bit assumptions
  XArray: Correct xa_store_range
  XArray: Fix Documentation
  XArray: Handle NULL pointers differently for allocation
  XArray: Unify xa_store and __xa_store
  XArray: Add xa_store_bh() and xa_store_irq()
  XArray: Turn xa_erase into an exported function
  XArray: Unify xa_cmpxchg and __xa_cmpxchg
  XArray: Regularise xa_reserve
  nilfs2: Use xa_erase_irq
  XArray: Export __xa_foo to non-GPL modules
  XArray: Fix xa_for_each with a single element at 0
2018-11-24  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller, 1 file, -0/+1)
2018-11-22  Merge tag 'char-misc-4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc  (Linus Torvalds, 1 file, -0/+1)
Pull char/misc driver fixes from Greg KH:
 "Here are some small char/misc driver fixes for issues that have been reported. Nothing major, highlights include:

  - gnss sync write fixes
  - uio oops fix
  - nvmem fixes
  - other minor fixes and some documentation/maintainers updates

 Full details are in the shortlog. All of these have been in linux-next for a while with no reported issues"

* tag 'char-misc-4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
  Documentation/security-bugs: Postpone fix publication in exceptional cases
  MAINTAINERS: Add Sasha as a stable branch maintainer
  gnss: sirf: fix synchronous write timeout
  gnss: serial: fix synchronous write timeout
  uio: Fix an Oops on load
  test_firmware: fix error return getting clobbered
  nvmem: core: fix regression in of_nvmem_cell_get()
  misc: atmel-ssc: Fix section annotation on atmel_ssc_get_driver_data
  drivers/misc/sgi-gru: fix Spectre v1 vulnerability
  Drivers: hv: kvp: Fix the recent regression caused by incorrect clean-up
  slimbus: ngd: remove unnecessary check
2018-11-20  crypto: chacha - add XChaCha12 support  (Eric Biggers, 1 file, -3/+3)
Now that the generic implementation of ChaCha20 has been refactored to allow varying the number of rounds, add support for XChaCha12, which is the XSalsa construction applied to ChaCha12. ChaCha12 is one of the three ciphers specified by the original ChaCha paper (https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than ChaCha20 but has a lower, but still large, security margin.

We need XChaCha12 support so that it can be used in the Adiantum encryption mode, which enables disk/file encryption on low-end mobile devices where AES-XTS is too slow as the CPUs lack AES instructions. We'd prefer XChaCha20 (the more popular variant), but it's too slow on some of our target devices, so at least in some cases we do need the XChaCha12-based version.

In more detail, the problem is that Adiantum is still much slower than we're happy with, and encryption still has a quite noticeable effect on the feel of low-end devices. Users and vendors push back hard against encryption that degrades the user experience, which always risks encryption being disabled entirely. So we need to choose the fastest option that gives us a solid margin of security, and here that's XChaCha12.

The best known attack on ChaCha breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's security margin is still better than AES-256's. Much has been learned about cryptanalysis of ARX ciphers since Salsa20 was originally designed in 2005, and it now seems we can be comfortable with a smaller number of rounds. The eSTREAM project also suggests the 12-round version of Salsa20 as providing the best balance among the different variants: combining very good performance with a "comfortable margin of security".

Note that it would be trivial to add vanilla ChaCha12 in addition to XChaCha12. However, it's unneeded for now and therefore is omitted.

As discussed in the patch that introduced XChaCha20 support, I considered splitting the code into separate chacha-common, chacha20, xchacha20, and xchacha12 modules, so that these algorithms could be enabled/disabled independently. However, since nearly all the code is shared anyway, I ultimately decided there would have been little benefit to the added complexity.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-20  crypto: chacha20-generic - refactor to allow varying number of rounds  (Eric Biggers, 2 files, -20/+25)
In preparation for adding XChaCha12 support, rename/refactor chacha20-generic to support different numbers of rounds. The justification for needing XChaCha12 support is explained in more detail in the patch "crypto: chacha - add XChaCha12 support".

The only difference between ChaCha{8,12,20} is the number of rounds itself; all other parts of the algorithm are the same. Therefore, remove the "20" from all definitions, structures, functions, files, etc. that will be shared by all ChaCha versions.

Also make ->setkey() store the round count in the chacha_ctx (previously chacha20_ctx). The generic code then passes the round count through to chacha_block(). There will be a ->setkey() function for each explicitly allowed round count; the encrypt/decrypt functions will be the same. I decided not to do it the opposite way (same ->setkey() function for all round counts, with different encrypt/decrypt functions) because that would have required more boilerplate code in architecture-specific implementations of ChaCha and XChaCha.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
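[Editor's sketch] The shape this describes, in rough outline; this is a hedged sketch close to, but not necessarily identical to, the resulting crypto/chacha_generic.c (CHACHA_KEY_SIZE and get_unaligned_le32() assumed available via the crypto and asm/unaligned headers):

    struct chacha_ctx {
            u32 key[8];
            int nrounds;    /* 12 or 20; recorded by ->setkey() */
    };

    static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
                             unsigned int keysize, int nrounds)
    {
            struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
            int i;

            if (keysize != CHACHA_KEY_SIZE)
                    return -EINVAL;

            for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
                    ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));

            ctx->nrounds = nrounds;
            return 0;
    }

    /* One thin ->setkey() per allowed round count; the encrypt/decrypt
     * entry points are shared and read ctx->nrounds. */
    static int chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
                               unsigned int keysize)
    {
            return chacha_setkey(tfm, key, keysize, 20);
    }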
2018-11-20  crypto: chacha20-generic - add HChaCha20 library function  (Eric Biggers, 1 file, -6/+44)
Refactor the unkeyed permutation part of chacha20_block() into its own function, then add hchacha20_block(), which is the ChaCha equivalent of HSalsa20 and is an intermediate step towards XChaCha20 (see https://cr.yp.to/snuffle/xsalsa-20081128.pdf).

HChaCha20 skips the final addition of the initial state, and outputs only certain words of the state. It should not be used for streaming directly.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
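[Editor's sketch] For orientation, the XSalsa/XChaCha pattern this helper enables looks roughly like the following; only hchacha20_block() is the function added here, and the init helper name is hypothetical:

    /* XChaCha20 = HChaCha20 subkey derivation + ordinary ChaCha20.
     * The 24-byte XChaCha nonce splits into 16 bytes fed to HChaCha20
     * and 8 bytes used as the inner ChaCha20 nonce. */
    u32 state[16], subkey[8];

    chacha20_init_state(state, key, nonce16);  /* hypothetical init helper */
    hchacha20_block(state, subkey);            /* no final add; partial output */
    /* ...then run normal ChaCha20 keyed with 'subkey', the remaining
     * 8 nonce bytes, and a zero block counter. */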
2018-11-19  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller, 1 file, -2/+1)
2018-11-19  XArray tests: Add missing locking  (Matthew Wilcox, 1 file, -0/+10)
Lockdep caught me being sloppy in the test suite and failing to lock the XArray appropriately. Reported-by: kernel test robot <rong.a.chen@intel.com> Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-11-18  lib/ubsan.c: don't mark __ubsan_handle_builtin_unreachable as noreturn  (Arnd Bergmann, 1 file, -2/+1)
gcc-8 complains about the prototype for this function:

  lib/ubsan.c:432:1: error: ignoring attribute 'noreturn' in declaration of a built-in function '__ubsan_handle_builtin_unreachable' because it conflicts with attribute 'const' [-Werror=attributes]

This is actually a GCC bug: in GCC's internals, __ubsan_handle_builtin_unreachable() is declared with both the 'noreturn' and 'const' attributes instead of only 'noreturn': https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84210

Work around this by removing the noreturn attribute.

[aryabinin: add information about GCC bug in changelog] Link: http://lkml.kernel.org/r/20181107144516.4587-1-aryabinin@virtuozzo.com Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Acked-by: Olof Johansson <olof@lixom.net> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
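[Editor's sketch] The workaround amounts to dropping the attribute from the kernel's declaration; a hedged reconstruction of the change:

    /* before: conflicts with GCC's built-in declaration, which carries
     * both 'noreturn' and 'const' */
    void __noreturn __ubsan_handle_builtin_unreachable(struct unreachable_data *data);

    /* after: plain declaration; the built-in's attributes win */
    void __ubsan_handle_builtin_unreachable(struct unreachable_data *data);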
2018-11-16  net: remove VLAN_TAG_PRESENT  (Michał Mirosław, 1 file, -6/+8)
Replace VLAN_TAG_PRESENT with a single bit flag and free up the VLAN.CFI overload. VLAN.CFI is now visible in the networking stack and can be passed around intact. Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16  XArray tests: Correct some 64-bit assumptions  (Matthew Wilcox, 1 file, -2/+2)
The test-suite caught these two mistakes when compiled for 32-bit. I had only been running the test-suite in 64-bit mode. Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-11-16  XArray: Correct xa_store_range  (Matthew Wilcox, 1 file, -2/+3)
The explicit '64' should have been BITS_PER_LONG, but while looking at this code I realised I meant to use __ffs(), not ilog2(). Signed-off-by: Matthew Wilcox <willy@infradead.org>
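[Editor's sketch] The difference matters because the two helpers answer different questions; a toy illustration (not the xarray code itself):

    /* index 48 == 0b110000: a multiorder entry covering this index must
     * stay aligned to the index's lowest set bit. */
    unsigned long index = 48;

    __ffs(index);    /* == 4: lowest set bit, the largest aligned order */
    ilog2(index);    /* == 5: highest set bit; would claim a 64-slot span
                      * that index 48 is not aligned to */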
2018-11-15  test_objagg: Fix warning.  (David S. Miller, 1 file, -0/+1)
lib/test_objagg.c: In function ‘test_delta_action_item’:

  ./include/linux/printk.h:308:2: warning: ‘errmsg’ may be used uninitialized in this function [-Wmaybe-uninitialized]

Signed-off-by: David S. Miller <davem@davemloft.net>
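[Editor's sketch] The usual fix for this class of warning is a one-liner: give the variable a definite value on every path.

    const char *errmsg = NULL;   /* was: const char *errmsg;
                                  * initializing it silences
                                  * -Wmaybe-uninitialized */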
2018-11-15  lib: introduce initial implementation of object aggregation manager  (Jiri Pirko, 5 files, -0/+1351)
This lib tracks objects which can be of two types:

  1) root object
  2) nested object, with a "delta" that differentiates it from the associated root object

The objects are tracked by a hashtable and reference-counted. The user is responsible for implementing callbacks to create/destroy the root entity related to each root object, and callbacks to create/destroy a nested object's delta (see the sketch below). Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
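[Editor's sketch] A hedged sketch of the callback shape a user wires up; the names follow the objagg API described above, but treat the exact signatures as illustrative, and the my_backend_* helpers are hypothetical:

    static void *my_root_create(void *priv, void *obj)
    {
            /* build the backend entity for a root object */
            return my_backend_alloc(priv, obj);              /* hypothetical */
    }

    static void my_root_destroy(void *priv, void *root_priv)
    {
            my_backend_free(priv, root_priv);                /* hypothetical */
    }

    static void *my_delta_create(void *priv, void *parent_obj, void *obj)
    {
            /* record only the difference from the root object */
            return my_backend_delta(priv, parent_obj, obj);  /* hypothetical */
    }

    static void my_delta_destroy(void *priv, void *delta_priv)
    {
            my_backend_delta_free(priv, delta_priv);         /* hypothetical */
    }

    static const struct objagg_ops my_objagg_ops = {
            .obj_size      = sizeof(struct my_obj),
            .delta_create  = my_delta_create,
            .delta_destroy = my_delta_destroy,
            .root_create   = my_root_create,
            .root_destroy  = my_root_destroy,
    };

    /* objagg = objagg_create(&my_objagg_ops, priv);
     * objagg_obj = objagg_obj_get(objagg, &obj);   <- reference-counted
     * objagg_obj_put(objagg, objagg_obj); */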
2018-11-14  Merge tag 'gnss-4.20-rc3' of https://git.kernel.org/pub/scm/linux/kernel/git/johan/gnss into char-misc-linus  (Greg Kroah-Hartman, 1 file, -2/+2)
Johan writes:

GNSS fixes for v4.20-rc3

The two serdev drivers were using the wrong timeout argument when expecting the serdev_device_write() helper to wait indefinitely, something which could result in incomplete writes when the controller write buffer was getting full.

Signed-off-by: Johan Hovold <johan@kernel.org>
2018-11-12  lib/gcd: Remove use of CPU_NO_EFFICIENT_FFS macro  (Paul Burton, 1 file, -1/+1)
The CPU_NO_EFFICIENT_FFS pre-processor macro is no longer used, with all architectures toggling the equivalent Kconfig symbol CONFIG_CPU_NO_EFFICIENT_FFS instead. Remove our check for the unused macro. Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/21046/ Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Zhaoxiu Zeng <zhaoxiu.zeng@gmail.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org
2018-11-11  kobject: Fix warnings in lib/kobject_uevent.c  (Bo YU, 1 file, -0/+2)
Add a blank line after a declaration. Signed-off-by: Bo YU <tsu.yubo@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-11  kobject: drop unnecessary cast "%llu" for u64  (Bo YU, 1 file, -1/+1)
There is no reason to cast a u64 variable to unsigned long long. Signed-off-by: Bo YU <tsu.yubo@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-11  test_firmware: fix error return getting clobbered  (Colin Ian King, 1 file, -0/+1)
In the case where req->fw->size > PAGE_SIZE, the error return rc is set to EINVAL; however, it is then overwritten with rc = req->fw->size because the error exit path via label 'out' is not taken. Fix this by adding the jump to the error exit path 'out'. Detected by CoverityScan, CID#1453465 ("Unused value") Fixes: c92316bf8e94 ("test_firmware: add batched firmware tests") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
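[Editor's sketch] Reduced to a sketch, the bug and the fix look like this (surrounding code elided):

    if (req->fw->size > PAGE_SIZE) {
            pr_err("firmware contents too large\n");  /* illustrative message */
            rc = -EINVAL;
            goto out;   /* the fix: without this jump, execution fell
                         * through and a later "rc = ..." assignment
                         * clobbered the error code */
    }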
2018-11-06  lib/raid6: Fix arm64 test build  (Jeremy Linton, 1 file, -2/+2)
The lib/raid6/test fails to build the neon objects on arm64 because the correct machine type is 'aarch64'. Once this is correctly enabled, the neon recovery objects need to be added to the build. Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-11-05  XArray: Fix Documentation  (Matthew Wilcox, 1 file, -5/+5)
Minor fixes. Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-11-05  XArray: Handle NULL pointers differently for allocation  (Matthew Wilcox, 1 file, -3/+10)
For allocating XArrays, it makes sense to distinguish between erasing an entry and storing NULL. Storing NULL keeps the index allocated, with a NULL pointer associated with it, while xa_erase() frees the index. Some existing IDR users rely on this ability. Signed-off-by: Matthew Wilcox <willy@infradead.org>
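[Editor's sketch] In API terms, the distinction is:

    /* In an allocating XArray: */
    xa_store(&xa, index, NULL, GFP_KERNEL);  /* index stays allocated,
                                              * holds a NULL pointer */
    xa_erase(&xa, index);                    /* index is freed and may be
                                              * handed out again */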
2018-11-05  XArray: Unify xa_store and __xa_store  (Matthew Wilcox, 1 file, -33/+25)
Saves around 115 bytes on a tinyconfig build and reduces the amount of code duplication in the XArray implementation. Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-11-05  XArray: Turn xa_erase into an exported function  (Matthew Wilcox, 1 file, -0/+24)
Make xa_erase() take the spinlock and then call __xa_erase(), but make it out of line since it's such a common function. Signed-off-by: Matthew Wilcox <willy@infradead.org>
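[Editor's sketch] The resulting wrapper follows the usual locked/unlocked split in xarray.c; roughly:

    void *xa_erase(struct xarray *xa, unsigned long index)
    {
            void *entry;

            xa_lock(xa);
            entry = __xa_erase(xa, index);
            xa_unlock(xa);

            return entry;
    }
    EXPORT_SYMBOL(xa_erase);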
2018-11-05  XArray: Unify xa_cmpxchg and __xa_cmpxchg  (Matthew Wilcox, 1 file, -41/+0)
xa_cmpxchg() was one of the largest functions in the xarray implementation. By turning it into a wrapper and having the callers take the lock (like several other functions), we save 160 bytes on a tinyconfig build and reduce the duplication in xarray.c. Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-11-05  XArray: Regularise xa_reserve  (Matthew Wilcox, 2 files, -11/+13)
The xa_reserve() function was a little unusual in that it attempted to be callable for all kinds of locking scenarios. Make it look like the other APIs with __xa_reserve, xa_reserve_bh and xa_reserve_irq variants. Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-11-05  XArray: Export __xa_foo to non-GPL modules  (Matthew Wilcox, 1 file, -3/+3)
Without this, it's not possible to use static inlines like xa_store_bh() and xa_erase_irq(). Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-11-05  XArray: Fix xa_for_each with a single element at 0  (Matthew Wilcox, 2 files, -1/+31)
The following sequence of calls would result in an infinite loop in xa_find_after():

  xa_store(xa, 0, x, GFP_KERNEL);
  index = 0;
  xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) { }

xa_find_after() was confusing the situation where we found no entry in the tree with finding a multiorder entry, so it would look for the successor entry forever. Just check for this case explicitly. Includes a few new checks in the test suite to be sure this doesn't reappear. Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-11-01  Merge branch 'work.afs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds, 1 file, -41/+84)
Pull AFS updates from Al Viro:
 "AFS series, with some iov_iter bits included"

* 'work.afs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (26 commits)
  missing bits of "iov_iter: Separate type from direction and use accessor functions"
  afs: Probe multiple fileservers simultaneously
  afs: Fix callback handling
  afs: Eliminate the address pointer from the address list cursor
  afs: Allow dumping of server cursor on operation failure
  afs: Implement YFS support in the fs client
  afs: Expand data structure fields to support YFS
  afs: Get the target vnode in afs_rmdir() and get a callback on it
  afs: Calc callback expiry in op reply delivery
  afs: Fix FS.FetchStatus delivery from updating wrong vnode
  afs: Implement the YFS cache manager service
  afs: Remove callback details from afs_callback_break struct
  afs: Commit the status on a new file/dir/symlink
  afs: Increase to 64-bit volume ID and 96-bit vnode ID for YFS
  afs: Don't invoke the server to read data beyond EOF
  afs: Add a couple of tracepoints to log I/O errors
  afs: Handle EIO from delivery function
  afs: Fix TTL on VL server and address lists
  afs: Implement VL server rotation
  afs: Improve FS server rotation error handling
  ...
2018-10-31  Merge tag 'riscv-for-linus-4.20-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux  (Linus Torvalds, 4 files, -346/+0)
Pull more RISC-V updates from Palmer Dabbelt:
 "This contains the follow-on patches I'd like to target for the 4.20 merge window. I'm being somewhat conservative here, as while there are a few patches on the mailing list that were posted early in the merge window, I'd like to let those bake for another round -- this was a fairly big release as far as RISC-V is concerned, and we need to walk before we can run.

 As far as the patches that made it go:

  - A patch to ignore offline CPUs when calculating AT_HWCAP. This should fix GDB on the HiFive Unleashed, which has an embedded core for hart 0 which is exposed to Linux as an offline CPU.

  - A move of EM_RISCV to elf-em.h, which is where it should have been to begin with.

  - I've also removed the 64-bit divide routines. I know I'm not really playing by my own rules here because I posted the patches this morning, but since they shouldn't be in the kernel I think it's better to err on the side of going too fast here.

 I don't anticipate any more patch sets for the merge window"

* tag 'riscv-for-linus-4.20-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux:
  Move EM_RISCV into elf-em.h
  RISC-V: properly determine hardware caps
  Revert "lib: Add umoddi3 and udivmoddi4 of GCC library routines"
  Revert "RISC-V: Select GENERIC_LIB_UMODDI3 on RV32"
2018-10-31  Revert "lib: Add umoddi3 and udivmoddi4 of GCC library routines"  (Palmer Dabbelt, 4 files, -346/+0)
We don't want 64-bit divide in the kernel. This reverts commit 6315730e9eab7de5fa9864bb13a352713f48aef1. Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2018-10-31  memblock: stop using implicit alignment to SMP_CACHE_BYTES  (Mike Rapoport, 1 file, -1/+1)
When memblock allocation APIs are called with align = 0, the alignment is implicitly set to SMP_CACHE_BYTES. Implicit alignment is done deep in the memblock allocator and it can come as a surprise. Not that such an alignment would be wrong even when used incorrectly, but it is better to be explicit for the sake of clarity and the principle of least surprise.

Replace all such uses of memblock APIs with the 'align' parameter explicitly set to SMP_CACHE_BYTES and stop implicit alignment assignment in the memblock internal allocation functions.

For the case when memblock APIs are used via helper functions, e.g. like iommu_arena_new_node() in Alpha, the helper functions were detected with Coccinelle's help and then manually examined and updated where appropriate.

The direct memblock API users were updated using the semantic patch below:

  @@
  expression size, min_addr, max_addr, nid;
  @@
  (
  - memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid)
  + memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
  |
  - memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid)
  + memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
  |
  - memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid)
  + memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
  |
  - memblock_alloc(size, 0)
  + memblock_alloc(size, SMP_CACHE_BYTES)
  |
  - memblock_alloc_raw(size, 0)
  + memblock_alloc_raw(size, SMP_CACHE_BYTES)
  |
  - memblock_alloc_from(size, 0, min_addr)
  + memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr)
  |
  - memblock_alloc_nopanic(size, 0)
  + memblock_alloc_nopanic(size, SMP_CACHE_BYTES)
  |
  - memblock_alloc_low(size, 0)
  + memblock_alloc_low(size, SMP_CACHE_BYTES)
  |
  - memblock_alloc_low_nopanic(size, 0)
  + memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES)
  |
  - memblock_alloc_from_nopanic(size, 0, min_addr)
  + memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr)
  |
  - memblock_alloc_node(size, 0, nid)
  + memblock_alloc_node(size, SMP_CACHE_BYTES, nid)
  )

[mhocko@suse.com: changelog update] [akpm@linux-foundation.org: coding-style fixes] [rppt@linux.ibm.com: fix missed uses of implicit alignment] Link: http://lkml.kernel.org/r/20181016133656.GA10925@rapoport-lnx Link: http://lkml.kernel.org/r/1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com> Suggested-by: Michal Hocko <mhocko@suse.com> Acked-by: Paul Burton <paul.burton@mips.com> [MIPS] Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc] Acked-by: Michal Hocko <mhocko@suse.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michal Simek <monstr@monstr.eu> Cc: Richard Weinberger <richard@nod.at> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  mm: remove include/linux/bootmem.h  (Mike Rapoport, 1 file, -1/+1)
Move remaining definitions and declarations from include/linux/bootmem.h into include/linux/memblock.h and remove the redundant header.

The includes were replaced with the semantic patch below, followed by semi-automated removal of duplicated '#include <linux/memblock.h>' lines:

  @@
  @@
  - #include <linux/bootmem.h>
  + #include <linux/memblock.h>

[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au [sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au [sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal] Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  memblock: remove _virt from APIs returning virtual address  (Mike Rapoport, 1 file, -1/+1)
The conversion is done using

  sed -i 's@memblock_virt_alloc@memblock_alloc@g' \
      $(git grep -l memblock_virt_alloc)

Link: http://lkml.kernel.org/r/1536927045-23536-8-git-send-email-rppt@linux.vnet.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@suse.com> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  mm: remove CONFIG_HAVE_MEMBLOCK  (Mike Rapoport, 1 file, -2/+1)
All architectures use memblock for early memory management. There is no need for the CONFIG_HAVE_MEMBLOCK configuration option. [rppt@linux.vnet.ibm.com: of/fdt: fixup #ifdefs] Link: http://lkml.kernel.org/r/20180919103457.GA20545@rapoport-lnx [rppt@linux.vnet.ibm.com: csky: fixups after bootmem removal] Link: http://lkml.kernel.org/r/20180926112744.GC4628@rapoport-lnx [rppt@linux.vnet.ibm.com: remove stale #else and the code it protects] Link: http://lkml.kernel.org/r/1538067825-24835-1-git-send-email-rppt@linux.vnet.ibm.com Link: http://lkml.kernel.org/r/1536927045-23536-4-git-send-email-rppt@linux.vnet.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com> Acked-by: Michal Hocko <mhocko@suse.com> Tested-by: Jonathan Cameron <jonathan.cameron@huawei.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  lib/lz4: update LZ4 decompressor module  (Gao Xiang, 2 files, -141/+349)
Update the LZ4 compression module based on LZ4 v1.8.3, so that the erofs file system can use the newest LZ4_decompress_safe_partial(), which can now decode exactly the number of bytes requested [1], in place of the open-coded hack in the erofs file system itself.

Currently, apart from the erofs file system, no other users use LZ4_decompress_safe_partial, so there is no worry about the interface.

In addition, LZ4 v1.8.x boosts decompression speed compared to the current code, which is based on LZ4 v1.7.3, mainly due to a shortcut optimization for specific common LZ4 sequences [2].

lzbench test data (tested on kirin710, 8 cores, 4 big cores at 2189MHz, 2GB DDR RAM at 1622MHz, with the enwik8 test data [3]):

  Compressor name      Compress.  Decompress.  Compr. size  Ratio  Filename
  memcpy               5004 MB/s   4924 MB/s     100000000  100.00 enwik8
  lz4hc 1.7.3 -9         12 MB/s    653 MB/s      42203253   42.20 enwik8
  lz4hc 1.8.0 -9         12 MB/s    908 MB/s      42203096   42.20 enwik8
  lz4hc 1.8.3 -9         11 MB/s    965 MB/s      42203094   42.20 enwik8

[1] https://github.com/lz4/lz4/issues/566
    https://github.com/lz4/lz4/commit/08d347b5b217b011ff7487130b79480d8cfdaeb8
[2] v1.8.1 perf: slightly faster compression and decompression speed
    https://github.com/lz4/lz4/commit/a31b7058cb97e4393da55e78a77a1c6f0c9ae038
    v1.8.2 perf: slightly faster HC compression and decompression speed
    https://github.com/lz4/lz4/commit/45f8603aae389d34c689d3ff7427b314071ccd2c
    https://github.com/lz4/lz4/commit/1a191b3f8d26b50a7c1d41590b529ec308d768cd
[3] http://mattmahoney.net/dc/textdata.html
    http://mattmahoney.net/dc/enwik8.zip

Link: http://lkml.kernel.org/r/1537181207-21932-1-git-send-email-gaoxiang25@huawei.com Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Tested-by: Guo Xuenan <guoxuenan@huawei.com> Cc: Colin Ian King <colin.king@canonical.com> Cc: Yann Collet <yann.collet.73@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Fang Wei <fangwei1@huawei.com> Cc: Chao Yu <yuchao0@huawei.com> Cc: Miao Xie <miaoxie@huawei.com> Cc: Sven Schmidt <4sschmid@informatik.uni-hamburg.de> Cc: Kyungsik Lee <kyungsik.lee@lge.com> Cc: <weidu.du@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  lib/kstrtox.c: delete unnecessary casts  (Alexey Dobriyan, 1 file, -8/+8)
Implicit casts to the same type are done by the language if necessary. Link: http://lkml.kernel.org/r/20181014223934.GA18107@avx2 Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  lib/sg_pool.c: remove unnecessary null check when freeing object  (zhong jiang, 1 file, -4/+3)
mempool_destroy(NULL) and kmem_cache_destroy(NULL) are legal. Link: http://lkml.kernel.org/r/1533054107-35657-1-git-send-email-zhongjiang@huawei.com Signed-off-by: zhong jiang <zhongjiang@huawei.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
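[Editor's sketch] The change amounts to dropping guards like these (shown as an illustration; treat the field names as assumed):

    /* before */
    if (sgp->pool)
            mempool_destroy(sgp->pool);
    if (sgp->slab)
            kmem_cache_destroy(sgp->slab);

    /* after: both destructors accept NULL and simply return */
    mempool_destroy(sgp->pool);
    kmem_cache_destroy(sgp->slab);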
2018-10-31  lib/zlib_inflate/inflate.c: remove fall through warnings  (Corentin Labbe, 1 file, -0/+12)
This patch removes all of the following fall-through warnings by adding /* fall through */ markers. Note that we cannot add "__attribute__ ((fallthrough));" because it is GCC 7 only.

  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:384:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:391:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:393:16: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:430:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:556:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:595:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:602:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:627:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:646:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
  arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:696:25: warning: this statement may fall through [-Wimplicit-fallthrough=]

It is easy to see that those fall-throughs are needed, since in each case state->mode is set to the case value just below.

Link: http://lkml.kernel.org/r/1536215920-19955-1-git-send-email-clabbe@baylibre.com Signed-off-by: Corentin Labbe <clabbe@baylibre.com> Cc: David Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
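[Editor's sketch] The marker is just a comment that the compiler's -Wimplicit-fallthrough heuristics recognize; in inflate()'s state machine it sits right before the next case, e.g. (sketch, details elided):

    case HEAD:
            /* ... header parsing ... */
            state->mode = FLAGS;
            /* fall through */
    case FLAGS:
            /* ... */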
2018-10-31  lib/parser.c: switch match_number() over to use match_strdup()  (Eric Biggers, 1 file, -4/+1)
This simplifies the code. No change in behavior. Link: http://lkml.kernel.org/r/20180830194727.191555-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  lib/parser.c: switch match_u64int() over to use match_strdup()  (Eric Biggers, 1 file, -4/+1)
This simplifies the code. No change in behavior. Link: http://lkml.kernel.org/r/20180830194814.192880-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  lib/parser.c: switch match_strdup() over to use kmemdup_nul()  (Eric Biggers, 1 file, -5/+1)
This simplifies the code. No change in behavior. Link: http://lkml.kernel.org/r/20180830194436.188867-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  lib/bitmap.c: simplify bitmap_print_to_pagebuf()  (Rasmus Villemoes, 1 file, -5/+2)
len is guaranteed to lie in [1, PAGE_SIZE]. If scnprintf is called with a buffer size of 1, it is guaranteed to return 0. So in the extremely unlikely case of having just one byte remaining in the page, let's just call scnprintf anyway. The only difference is that this will write a '\0' to that final byte in the page, but that's an improvement: We now guarantee that after the call, buf is a properly terminated C string of length exactly the return value. Link: http://lkml.kernel.org/r/20180818131623.8755-8-linux@rasmusvillemoes.dk Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Yury Norov <ynorov@caviumnetworks.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31  lib/bitmap.c: fix remaining space computation in bitmap_print_to_pagebuf  (Rasmus Villemoes, 1 file, -4/+6)
For various alignments of buf, the current expression computes

  4096 ok
  4095 ok
  8190
  8189
  ...
  4097

i.e., if the caller has already written two bytes into the page buffer, len is 8190 rather than 4094, because PTR_ALIGN aligns up to the next boundary. So if the printed version of the bitmap is huge, scnprintf() ends up writing beyond the page boundary.

I don't think any current callers actually write anything before bitmap_print_to_pagebuf, but the API seems to be designed to allow it.

[akpm@linux-foundation.org: use offset_in_page(), per Andy] [akpm@linux-foundation.org: include mm.h for offset_in_page()] Link: http://lkml.kernel.org/r/20180818131623.8755-7-linux@rasmusvillemoes.dk Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Yury Norov <ynorov@caviumnetworks.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
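[Editor's sketch] A hedged reconstruction of the before/after expressions (the exact original code may differ; the values above match this form):

    /* before: PTR_ALIGN rounds *up*, so a buf two bytes into the page
     * yields 8190 instead of 4094 */
    len = (char *)PTR_ALIGN(buf + PAGE_SIZE - 1, PAGE_SIZE) - buf;

    /* after: the room actually left in the page buf points into */
    len = PAGE_SIZE - offset_in_page(buf);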
2018-10-31  lib/bitmap.c: remove wrong documentation  (Rasmus Villemoes, 1 file, -5/+0)
This promise is violated in a number of places, e.g. already in the second function below this paragraph. Since I don't think anybody relies on this being true, and since actually honouring it would hurt performance and code size in various places, just remove the paragraph. Link: http://lkml.kernel.org/r/20180818131623.8755-2-linux@rasmusvillemoes.dk Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Yury Norov <ynorov@caviumnetworks.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-28  Merge tag 'vla-v4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  (Linus Torvalds, 1 file, -0/+2)
Pull VLA removal from Kees Cook:
 "Globally warn on VLA use. This turns on "-Wvla" globally now that the last few trees with their VLA removals have landed (crypto, block, net, and powerpc). Arnd mentioned that there may be a couple more VLAs hiding in hard-to-find randconfigs, but nothing big has shaken out in the last month or so in linux-next. We should be basically VLA-free now! Wheee. :)

 Summary:
  - Remove unused fallback for BUILD_BUG_ON (which technically contains a VLA)
  - Lift -Wvla to the top-level Makefile"

* tag 'vla-v4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  Makefile: Globally enable VLA warning
  compiler.h: give up __compiletime_assert_fallback()
2018-10-28  Merge branch 'xarray' of git://git.infradead.org/users/willy/linux-dax  (Linus Torvalds, 7 files, -942/+3578)
Pull XArray conversion from Matthew Wilcox:
 "The XArray provides an improved interface to the radix tree data structure, providing locking as part of the API, specifying GFP flags at allocation time, eliminating preloading, less re-walking the tree, more efficient iterations and not exposing RCU-protected pointers to its users.

 This patch set

  1. Introduces the XArray implementation
  2. Converts the pagecache to use it
  3. Converts memremap to use it

 The page cache is the most complex and important user of the radix tree, so converting it was most important. Converting the memremap code removes the only other user of the multiorder code, which allows us to remove the radix tree code that supported it.

 I have 40+ followup patches to convert many other users of the radix tree over to the XArray, but I'd like to get this part in first. The other conversions haven't been in linux-next and aren't suitable for applying yet, but you can see them in the xarray-conv branch if you're interested"

* 'xarray' of git://git.infradead.org/users/willy/linux-dax: (90 commits)
  radix tree: Remove multiorder support
  radix tree test: Convert multiorder tests to XArray
  radix tree tests: Convert item_delete_rcu to XArray
  radix tree tests: Convert item_kill_tree to XArray
  radix tree tests: Move item_insert_order
  radix tree test suite: Remove multiorder benchmarking
  radix tree test suite: Remove __item_insert
  memremap: Convert to XArray
  xarray: Add range store functionality
  xarray: Move multiorder_check to in-kernel tests
  xarray: Move multiorder_shrink to kernel tests
  xarray: Move multiorder account test in-kernel
  radix tree test suite: Convert iteration test to XArray
  radix tree test suite: Convert tag_tagged_items to XArray
  radix tree: Remove radix_tree_clear_tags
  radix tree: Remove radix_tree_maybe_preload_order
  radix tree: Remove split/join code
  radix tree: Remove radix_tree_update_node_t
  page cache: Finish XArray conversion
  dax: Convert page fault handlers to XArray
  ...
2018-10-26  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds, 1 file, -0/+70)
Merge updates from Andrew Morton:
 - a few misc things
 - ocfs2 updates
 - most of MM

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (132 commits)
  hugetlbfs: dirty pages as they are added to pagecache
  mm: export add_swap_extent()
  mm: split SWP_FILE into SWP_ACTIVATED and SWP_FS
  tools/testing/selftests/vm/map_fixed_noreplace.c: add test for MAP_FIXED_NOREPLACE
  mm: thp: relocate flush_cache_range() in migrate_misplaced_transhuge_page()
  mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()
  mm: thp: fix MADV_DONTNEED vs migrate_misplaced_transhuge_page race condition
  mm/kasan/quarantine.c: make quarantine_lock a raw_spinlock_t
  mm/gup: cache dev_pagemap while pinning pages
  Revert "x86/e820: put !E820_TYPE_RAM regions into memblock.reserved"
  mm: return zero_resv_unavail optimization
  mm: zero remaining unavailable struct pages
  tools/testing/selftests/vm/gup_benchmark.c: add MAP_HUGETLB option
  tools/testing/selftests/vm/gup_benchmark.c: add MAP_SHARED option
  tools/testing/selftests/vm/gup_benchmark.c: allow user specified file
  tools/testing/selftests/vm/gup_benchmark.c: fix 'write' flag usage
  mm/gup_benchmark.c: add additional pinning methods
  mm/gup_benchmark.c: time put_page()
  mm: don't raise MEMCG_OOM event due to failed high-order allocation
  mm/page-writeback.c: fix range_cyclic writeback vs writepages deadlock
  ...
2018-10-26  lib/test_kasan.c: add tests for several string/memory API functions  (Andrey Ryabinin, 1 file, -0/+70)
Arch code may have asm implementations of string/memory API functions instead of using the generic ones from lib/string.c. KASAN doesn't see memory accesses in asm code, and thus can miss many bugs. E.g. on ARM64, KASAN doesn't see bugs in memchr(), memcmp(), str[r]chr(), str[n]cmp(), str[n]len(). Add tests for these functions to be sure that we notice the problem on other architectures. Link: http://lkml.kernel.org/r/20180920135631.23833-3-aryabinin@virtuozzo.com Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Kyeongdon Kim <kyeongdon.kim@lge.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
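[Editor's sketch] A hedged sketch of what one such test looks like; the real tests live in lib/test_kasan.c, and the details here are illustrative:

    static noinline void __init kasan_memchr(void)
    {
            char *ptr;
            size_t size = 24;

            pr_info("out-of-bounds in memchr\n");
            ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
            if (!ptr)
                    return;

            memchr(ptr, '1', size + 1);  /* reads one byte past the buffer:
                                          * KASAN should report this */
            kfree(ptr);
    }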