path: root/crypto
2022-12-14  Merge tag 'v6.2-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds, 35 files changed, -728/+1481)

Pull crypto updates from Herbert Xu:

 "API:
   - Optimise away self-test overhead when they are disabled
   - Support symmetric encryption via keyring keys in af_alg
   - Flip hwrng default_quality, the default is now maximum entropy

  Algorithms:
   - Add library version of aesgcm
   - CFI fixes for assembly code
   - Add arm/arm64 accelerated versions of sm3/sm4

  Drivers:
   - Remove assumption on arm64 that kmalloc is DMA-aligned
   - Fix selftest failures in rockchip
   - Add support for RK3328/RK3399 in rockchip
   - Add deflate support in qat
   - Merge ux500 into stm32
   - Add support for TEE for PCI ID 0x14CA in ccp
   - Add mt7986 support in mtk
   - Add MaxLinear platform support in inside-secure
   - Add NPCM8XX support in npcm"

* tag 'v6.2-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (184 commits)
  crypto: ux500/cryp - delete driver
  crypto: stm32/cryp - enable for use with Ux500
  crypto: stm32 - enable drivers to be used on Ux500
  dt-bindings: crypto: Let STM32 define Ux500 CRYP
  hwrng: geode - Fix PCI device refcount leak
  hwrng: amd - Fix PCI device refcount leak
  crypto: qce - Set DMA alignment explicitly
  crypto: octeontx2 - Set DMA alignment explicitly
  crypto: octeontx - Set DMA alignment explicitly
  crypto: keembay - Set DMA alignment explicitly
  crypto: safexcel - Set DMA alignment explicitly
  crypto: hisilicon/hpre - Set DMA alignment explicitly
  crypto: chelsio - Set DMA alignment explicitly
  crypto: ccree - Set DMA alignment explicitly
  crypto: ccp - Set DMA alignment explicitly
  crypto: cavium - Set DMA alignment explicitly
  crypto: img-hash - Fix variable dereferenced before check 'hdev->req'
  crypto: arm64/ghash-ce - use frame_push/pop macros consistently
  crypto: arm64/crct10dif - use frame_push/pop macros consistently
  crypto: arm64/aes-modes - use frame_push/pop macros consistently
  ...
2022-12-12  Merge tag 'pull-iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds, 1 file changed, -2/+2)

Pull iov_iter updates from Al Viro:
 "iov_iter work; most of that is about getting rid of direction
  misannotations and (hopefully) preventing more of the same for the
  future"

* tag 'pull-iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  use less confusing names for iov_iter direction initializers
  iov_iter: saner checks for attempt to copy to/from iterator
  [xen] fix "direction" argument of iov_iter_kvec()
  [vhost] fix 'direction' argument of iov_iter_{init,bvec}()
  [target] fix iov_iter_bvec() "direction" argument
  [s390] memcpy_real(): WRITE is "data source", not destination...
  [s390] zcore: WRITE is "data source", not destination...
  [infiniband] READ is "data destination", not source...
  [fsi] WRITE is "data source", not destination...
  [s390] copy_oldmem_kernel() - WRITE is "data source", not destination
  csum_and_copy_to_iter(): handle ITER_DISCARD
  get rid of unlikely() on page_copy_sane() calls
2022-12-02  crypto: api - Increase MAX_ALGAPI_ALIGNMASK to 127  (Herbert Xu, 1 file changed, -2/+7)
Previously we limited the maximum alignment mask to 63. This is mostly due to stack usage for shash. This patch introduces a separate limit for shash algorithms and increases the general limit to 127 which is the value that we need for DMA allocations on arm64. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02  crypto: Prepare to move crypto_tfm_ctx  (Herbert Xu, 17 files changed, -17/+19)
The helper crypto_tfm_ctx is only used by the Crypto API algorithm code and should really be in algapi.h. However, for historical reasons many files relied on it to be in crypto.h. This patch changes those files to use algapi.h instead in preparation for a move. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
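As an illustration of the include change (a hedged sketch, not taken from the patch; the driver name and context struct are hypothetical), code that relied on crypto.h pulling in crypto_tfm_ctx() now includes algapi.h directly:

    /* Hedged sketch: crypto_tfm_ctx() is an algapi helper, so include
     * <crypto/algapi.h> instead of relying on <linux/crypto.h>.
     */
    #include <crypto/algapi.h>

    struct mydrv_ctx {                      /* hypothetical per-transform state */
            u32 key[8];
    };

    static struct mydrv_ctx *mydrv_ctx(struct crypto_tfm *tfm)
    {
            return crypto_tfm_ctx(tfm);     /* per-tfm context area */
    }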
2022-12-02  crypto: dh - Use helper to set reqsize  (Herbert Xu, 1 file changed, -1/+3)
The value of reqsize must only be changed through the helper. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02  crypto: rsa-pkcs1pad - Use helper to set reqsize  (Herbert Xu, 1 file changed, -1/+4)
The value of reqsize must only be changed through the helper. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-25  use less confusing names for iov_iter direction initializers  (Al Viro, 1 file changed, -2/+2)
READ/WRITE proved to be actively confusing - the meanings are "data destination, as used with read(2)" and "data source, as used with write(2)", but people keep interpreting those as "we read data from it" and "we write data to it", i.e. exactly the wrong way. Call them ITER_DEST and ITER_SOURCE - at least that is harder to misinterpret... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
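A hedged example of the renamed initializers (not from this series; buf and len are hypothetical): a read fills the kernel buffer, so the iterator is the data destination, while a write drains it, so it is the data source:

    #include <linux/uio.h>

    static void example_iters(void *buf, size_t len)
    {
            struct kvec kv = { .iov_base = buf, .iov_len = len };
            struct iov_iter iter;

            /* The kernel buffer RECEIVES data (e.g. servicing a read). */
            iov_iter_kvec(&iter, ITER_DEST, &kv, 1, len);

            /* The kernel buffer SUPPLIES data (e.g. servicing a write). */
            iov_iter_kvec(&iter, ITER_SOURCE, &kv, 1, len);
    }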
2022-11-25  Revert "crypto: shash - avoid comparing pointers to exported functions under CFI"  (Eric Biggers, 1 file changed, -15/+3)
This reverts commit 22ca9f4aaf431a9413dcc115dd590123307f274f because CFI no longer breaks cross-module function address equality, so crypto_shash_alg_has_setkey() can now be an inline function like before. This commit should not be backported to kernels that don't have the new CFI implementation. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-25  crypto: tcrypt - Fix multibuffer skcipher speed test mem leak  (Zhang Yiqun, 1 file changed, -9/+0)
In the past, the data for the mb-skcipher test was allocated twice, and the first allocated memory area was never freed, which may cause a memory leak. Remove the redundant allocation to fix this. Fixes: e161c5930c15 ("crypto: tcrypt - add multibuf skcipher...") Signed-off-by: Zhang Yiqun <zhangyiqun@phytium.com.cn> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-25  crypto: algboss - compile out test-related code when tests disabled  (Eric Biggers, 1 file changed, -5/+4)
When CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is set, the code in algboss.c that handles CRYPTO_MSG_ALG_REGISTER is unnecessary, so make it be compiled out. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-25  crypto: kdf - silence noisy self-test  (Eric Biggers, 1 file changed, -1/+1)
Make the kdf_sp800108 self-test only print a message on success when fips_enabled, so that it's consistent with testmgr.c and doesn't spam the kernel log with a message that isn't really important. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-25  crypto: kdf - skip self-test when tests disabled  (Eric Biggers, 1 file changed, -2/+6)
Make kdf_sp800108 honor the CONFIG_CRYPTO_MANAGER_DISABLE_TESTS kconfig option, so that it doesn't always waste time running its self-test. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
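The shape of such a change, as a hedged sketch (function names are illustrative, not the actual diff):

    /* Hedged sketch: skip the KDF self-test entirely when the crypto
     * manager's self-tests are compiled out.
     */
    static int __init kdf108_mod_init(void)
    {
            if (IS_ENABLED(CONFIG_CRYPTO_MANAGER_DISABLE_TESTS))
                    return 0;               /* nothing to run */

            return kdf_run_selftest();      /* hypothetical test entry point */
    }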
2022-11-25  crypto: api - compile out crypto_boot_test_finished when tests disabled  (Eric Biggers, 3 files changed, -6/+29)
The crypto_boot_test_finished static key is unnecessary when self-tests are disabled in the kconfig, so optimize it out accordingly, along with the entirety of crypto_start_tests(). This mainly avoids the overhead of an unnecessary static_branch_enable() on every boot. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
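Conceptually, the static key and its users can be hidden behind the same kconfig check; a rough sketch with simplified names (the in-tree symbol and helper names differ):

    #ifndef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
    DECLARE_STATIC_KEY_FALSE(crypto_boot_test_finished);

    static inline bool crypto_boot_test_done(void)
    {
            return static_branch_likely(&crypto_boot_test_finished);
    }
    #else
    static inline bool crypto_boot_test_done(void)
    {
            return true;    /* no self-tests, nothing to wait for */
    }
    #endif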
2022-11-25  crypto: algboss - optimize registration of internal algorithms  (Eric Biggers, 2 files changed, -13/+3)
Since algboss always skips testing of algorithms with the CRYPTO_ALG_INTERNAL flag, there is no need to go through the dance of creating the test kthread, which creates a lot of overhead. Instead, we can just directly finish the algorithm registration, like is now done when self-tests are disabled entirely. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-25  crypto: api - optimize algorithm registration when self-tests disabled  (Eric Biggers, 2 files changed, -71/+86)
Currently, registering an algorithm with the crypto API always causes a notification to be posted to the "cryptomgr", which then creates a kthread to self-test the algorithm. However, if self-tests are disabled in the kconfig (as is the default option), then this kthread just notifies waiters that the algorithm has been tested, then exits. This causes a significant amount of overhead, especially in the kthread creation and destruction, which is not necessary at all. For example, in a quick test I found that booting a "minimum" x86_64 kernel with all the crypto options enabled (except for the self-tests) takes about 400ms until PID 1 can start. Of that, a full 13ms is spent just doing this pointless dance, involving a kthread being created, run, and destroyed over 200 times. That's over 3% of the entire kernel start time. Fix this by just skipping the creation of the test larval and the posting of the registration notification entirely, when self-tests are disabled. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-18  crypto: skcipher - Allow sync algorithms with large request contexts  (Herbert Xu, 1 file changed, -1/+1)
Some sync algorithms may require a large amount of temporary space during their operations. There is no reason why they should be limited just because some legacy users want to place all temporary data on the stack. Such algorithms can now set a flag to indicate that they need extra request context, which will cause them to be invisible to users that go through the sync_skcipher interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-18  crypto: cryptd - Use request context instead of stack for sub-request  (Herbert Xu, 1 file changed, -17/+19)
cryptd is buggy as it tries to use sync_skcipher without going through the proper sync_skcipher interface. In fact it doesn't even need sync_skcipher since it's already a proper skcipher and can easily access the request context instead of using something off the stack. Fixes: 36b3875a97b8 ("crypto: cryptd - Remove VLA usage of skcipher") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
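The general pattern being applied here, as a hedged sketch (names are hypothetical, not the cryptd diff): reserve room for the child request in the parent's request context via crypto_skcipher_set_reqsize() at init time, then fetch it with skcipher_request_ctx() instead of declaring it on the stack:

    struct mydrv_request_ctx {                      /* hypothetical */
            struct skcipher_request subreq;         /* child request lives here */
    };

    static int mydrv_encrypt(struct skcipher_request *req)
    {
            struct mydrv_request_ctx *rctx = skcipher_request_ctx(req);
            struct skcipher_request *subreq = &rctx->subreq;

            /* Reuse the parent's completion; setting the child tfm and the
             * src/dst/crypt parameters is omitted from this sketch. */
            skcipher_request_set_callback(subreq, req->base.flags,
                                          req->base.complete, req->base.data);
            return 0;
    }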
2022-11-18  treewide: use get_random_u32_inclusive() when possible  (Jason A. Donenfeld, 2 files changed, -6/+6)
These cases were done with this Coccinelle:

    @@
    expression H;
    expression L;
    @@
    - (get_random_u32_below(H) + L)
    + get_random_u32_inclusive(L, H + L - 1)

    @@
    expression H;
    expression L;
    expression E;
    @@
      get_random_u32_inclusive(L, H
    - + E
    - - E
      )

    @@
    expression H;
    expression L;
    expression E;
    @@
      get_random_u32_inclusive(L, H
    - - E
    - + E
      )

    @@
    expression H;
    expression L;
    expression E;
    expression F;
    @@
      get_random_u32_inclusive(L, H
    - - E + F
    - + E
      )

    @@
    expression H;
    expression L;
    expression E;
    expression F;
    @@
      get_random_u32_inclusive(L, H
    - + E + F
    - - E
      )

And then subsequently cleaned up by hand, with several automatic cases rejected if it didn't make sense contextually. Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
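In plain C, the first rule corresponds to a change of this shape (a before/after fragment with illustrative values, not an excerpt from the patch):

    /* Before: lower bound added by hand, upper bound implicit */
    offset = get_random_u32_below(240) + 16;

    /* After: both bounds explicit and inclusive, i.e. a value in [16, 255] */
    offset = get_random_u32_inclusive(16, 255);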
2022-11-18  treewide: use get_random_u32_below() instead of deprecated function  (Jason A. Donenfeld, 2 files changed, -44/+44)
This is a simple mechanical transformation done by:

    @@
    expression E;
    @@
    - prandom_u32_max
    + get_random_u32_below
      (E)

Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs Reviewed-by: SeongJae Park <sj@kernel.org> # for damon Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> # for arm Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
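As a before/after fragment (variable names are illustrative):

    /* Before (deprecated helper): */
    idx = prandom_u32_max(nr_slots);

    /* After (same semantics: uniform value in [0, nr_slots - 1]): */
    idx = get_random_u32_below(nr_slots);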
2022-11-11  crypto: move gf128mul library into lib/crypto  (Ard Biesheuvel, 3 files changed, -423/+3)
The gf128mul library does not depend on the crypto API at all, so it can be moved into lib/crypto. This will allow us to use it in other library code in a subsequent patch without having to depend on CONFIG_CRYPTO. While at it, change the Kconfig symbol name to align with other crypto library implementations. However, the source file name is retained, as it is reflected in the module .ko filename, and changing this might break things for users. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-04  crypto: tcrypt - add SM4 cts-cbc/xts/xcbc test  (Tianjia Zhang, 1 file changed, -0/+21)
Add CTS-CBC/XTS/XCBC tests for the SM4 algorithm, as well as corresponding speed tests, in order to exercise performance-optimized implementations of these modes. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-04  crypto: testmgr - add SM4 cts-cbc/xts/xcbc test vectors  (Tianjia Zhang, 2 files changed, -0/+996)
This patch adds test vectors for the CTS-CBC/XTS/XCBC modes of the SM4 algorithm, and also adds some test vectors for SM4 GCM/CCM. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-04  crypto: tcrypt - Drop leading newlines from prints  (Anirudh Venkataramanan, 1 file changed, -5/+5)
The top level print banners have a leading newline. It's not entirely clear why this exists, but it makes it harder to parse tcrypt test output using a script. Drop said newlines. tcrypt output before this patch: [...] testing speed of rfc4106(gcm(aes)) (rfc4106-gcm-aesni) encryption [...] test 0 (160 bit key, 16 byte blocks): 1 operation in 2320 cycles (16 bytes) tcrypt output with this patch: [...] testing speed of rfc4106(gcm(aes)) (rfc4106-gcm-aesni) encryption [...] test 0 (160 bit key, 16 byte blocks): 1 operation in 2320 cycles (16 bytes) Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-04  crypto: tcrypt - Drop module name from print string  (Anirudh Venkataramanan, 1 file changed, -2/+1)
The pr_fmt() define includes KBUILD_MODNAME, and so there's no need for pr_err() to also print it. Drop module name from the print string. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-04  crypto: tcrypt - Use pr_info/pr_err  (Anirudh Venkataramanan, 1 file changed, -7/+7)
Currently, there's mixed use of printk() and pr_info()/pr_err(). The latter prints the module name (because pr_fmt() is defined so) but the former does not. As a result there's inconsistency in the printed output. For example:

modprobe mode=211:
[...] test 0 (160 bit key, 16 byte blocks): 1 operation in 2320 cycles (16 bytes)
[...] test 1 (160 bit key, 64 byte blocks): 1 operation in 2336 cycles (64 bytes)

modprobe mode=215:
[...] tcrypt: test 0 (160 bit key, 16 byte blocks): 1 operation in 2173 cycles (16 bytes)
[...] tcrypt: test 1 (160 bit key, 64 byte blocks): 1 operation in 2241 cycles (64 bytes)

Replace all instances of printk() with pr_info()/pr_err() so that the module name is printed consistently. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
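The prefix comes from the usual pr_fmt() idiom; a minimal sketch of why pr_info() gains the module-name prefix while bare printk() does not (illustrative, not the tcrypt source):

    /* Must be defined before printk.h is pulled in to take effect. */
    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

    #include <linux/printk.h>

    static void report(void)
    {
            printk(KERN_INFO "no prefix here\n");   /* printed as-is */
            pr_info("prefixed\n");                  /* printed as "<modname>: prefixed" */
    }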
2022-11-04  crypto: tcrypt - Use pr_cont to print test results  (Anirudh Venkataramanan, 1 file changed, -4/+4)
For some test cases, a line break gets inserted between the test banner and the results. For example, with mode=211 this is the output:

[...] testing speed of rfc4106(gcm(aes)) (rfc4106-gcm-aesni) encryption
[...] test 0 (160 bit key, 16 byte blocks):
[...] 1 operation in 2373 cycles (16 bytes)
--snip--
[...] testing speed of gcm(aes) (generic-gcm-aesni) encryption
[...] test 0 (128 bit key, 16 byte blocks):
[...] 1 operation in 2338 cycles (16 bytes)

Similar behavior is seen in the following cases as well:

modprobe tcrypt mode=212
modprobe tcrypt mode=213
modprobe tcrypt mode=221
modprobe tcrypt mode=300 sec=1
modprobe tcrypt mode=400 sec=1

This doesn't happen with mode=215:

[...] tcrypt: testing speed of multibuffer rfc4106(gcm(aes)) (rfc4106-gcm-aesni) encryption
[...] tcrypt: test 0 (160 bit key, 16 byte blocks): 1 operation in 2215 cycles (16 bytes)
--snip--
[...] tcrypt: testing speed of multibuffer gcm(aes) (generic-gcm-aesni) encryption
[...] tcrypt: test 0 (128 bit key, 16 byte blocks): 1 operation in 2191 cycles (16 bytes)

This print inconsistency is because printk() is used instead of pr_cont() in a few places. Change these to be pr_cont(). checkpatch warns that pr_cont() shouldn't be used. This can be ignored in this context as tcrypt already uses pr_cont(). Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-10-28  crypto: af_alg - Support symmetric encryption via keyring keys  (Frederick Lawler, 1 file changed, -1/+134)
We want to leverage keyring to store sensitive keys, and then use those keys for symmetric encryption via the crypto API. Among the key types we wish to support are: user, logon, encrypted, and trusted. User key types are already able to have their data copied to user space, but logon does not support this. Further, trusted and encrypted keys will return their encrypted data back to user space on read, which does not make them ideal for symmetric encryption. To support symmetric encryption for these key types, add a new ALG_SET_KEY_BY_KEY_SERIAL setsockopt() option to the crypto API. This allows users to pass a key_serial_t to the crypto API to perform symmetric encryption. The behavior is the same as ALG_SET_KEY, but the crypto key data is copied in kernel space from a keyring key, which allows for the support of logon, encrypted, and trusted key types. Keyring keys must have the KEY_(POS|USR|GRP|OTH)_SEARCH permission set to leverage this feature. This follows the asymmetric_key type where key lookup calls eventually lead to keyring_search_rcu() without the KEYRING_SEARCH_NO_CHECK_PERM flag set. Signed-off-by: Frederick Lawler <fred@cloudflare.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
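From userspace, the new option is used like ALG_SET_KEY but passes a key_serial_t instead of the raw key bytes; a hedged sketch (error handling omitted; the key name, algorithm, and the fallback #define of the constant are assumptions for illustration):

    #include <linux/if_alg.h>
    #include <keyutils.h>
    #include <sys/socket.h>

    #ifndef ALG_SET_KEY_BY_KEY_SERIAL
    #define ALG_SET_KEY_BY_KEY_SERIAL 7     /* assumed value from newer uapi headers */
    #endif

    int main(void)
    {
            struct sockaddr_alg sa = {
                    .salg_family = AF_ALG,
                    .salg_type   = "skcipher",
                    .salg_name   = "cbc(aes)",
            };
            /* Assumes a suitable logon/encrypted/trusted key already exists
             * and grants KEY_*_SEARCH permission. */
            key_serial_t key = request_key("logon", "example:aeskey", NULL,
                                           KEY_SPEC_USER_KEYRING);
            int tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);

            bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
            setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY_BY_KEY_SERIAL,
                       &key, sizeof(key));
            /* accept() an op socket and drive encryption as with ALG_SET_KEY. */
            return 0;
    }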
2022-10-21  crypto: tcrypt - fix return value for multiple subtests  (Robert Elliott, 1 file changed, -128/+128)
When a test mode invokes multiple tests (e.g., mode 0 invokes modes 1 through 199, and mode 3 tests three block cipher modes with des), don't keep accumulating the return values with ret += tcrypt_test(), which results in a bogus value if more than one report a nonzero value (e.g., two reporting -2 (-ENOENT) end up reporting -4 (-EINTR)). Instead, keep track of the minimum return value reported by any subtest. Fixes: 4e033a6bc70f ("crypto: tcrypt - Do not exit on success in fips mode") Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
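The pattern described, as a small sketch (the subtest names are illustrative):

    int ret = 0, err;

    err = tcrypt_test("ecb(des)");
    if (err < ret)
            ret = err;      /* keep the worst (minimum) error */

    err = tcrypt_test("cbc(des)");
    if (err < ret)
            ret = err;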
2022-10-21  crypto: ccm - use local variables instead of indirect references  (Tianjia Zhang, 1 file changed, -1/+1)
The variable odata has already been introduced as a local variable in the function scope and should be used directly. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-10-11  treewide: use get_random_bytes() when possible  (Jason A. Donenfeld, 1 file changed, -1/+1)
The prandom_bytes() function has been a deprecated inline wrapper around get_random_bytes() for several releases now, and compiles down to the exact same code. Replace the deprecated wrapper with a direct call to the real function. This was done as a basic find and replace. Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Yury Norov <yury.norov@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> # powerpc Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-10-11  treewide: use get_random_{u8,u16}() when possible, part 1  (Jason A. Donenfeld, 1 file changed, -4/+4)
Rather than truncate a 32-bit value to a 16-bit value or an 8-bit value, simply use the get_random_{u8,u16}() functions, which are faster than wasting the additional bytes from a 32-bit value. This was done mechanically with this coccinelle script: @@ expression E; identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u16; typedef __be16; typedef __le16; typedef u8; @@ ( - (get_random_u32() & 0xffff) + get_random_u16() | - (get_random_u32() & 0xff) + get_random_u8() | - (get_random_u32() % 65536) + get_random_u16() | - (get_random_u32() % 256) + get_random_u8() | - (get_random_u32() >> 16) + get_random_u16() | - (get_random_u32() >> 24) + get_random_u8() | - (u16)get_random_u32() + get_random_u16() | - (u8)get_random_u32() + get_random_u8() | - (__be16)get_random_u32() + (__be16)get_random_u16() | - (__le16)get_random_u32() + (__le16)get_random_u16() | - prandom_u32_max(65536) + get_random_u16() | - prandom_u32_max(256) + get_random_u8() | - E->inet_id = get_random_u32() + E->inet_id = get_random_u16() ) @@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u16; identifier v; @@ - u16 v = get_random_u32(); + u16 v = get_random_u16(); @@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u8; identifier v; @@ - u8 v = get_random_u32(); + u8 v = get_random_u8(); @@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u16; u16 v; @@ - v = get_random_u32(); + v = get_random_u16(); @@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u8; u8 v; @@ - v = get_random_u32(); + v = get_random_u8(); // Find a potential literal @literal_mask@ expression LITERAL; type T; identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; position p; @@ ((T)get_random_u32()@p & (LITERAL)) // Examine limits @script:python add_one@ literal << literal_mask.LITERAL; RESULT; @@ value = None if literal.startswith('0x'): value = int(literal, 16) elif literal[0] in '123456789': value = int(literal, 10) if value is None: print("I don't know how to handle %s" % (literal)) cocci.include_match(False) elif value < 256: coccinelle.RESULT = cocci.make_ident("get_random_u8") elif value < 65536: coccinelle.RESULT = cocci.make_ident("get_random_u16") else: print("Skipping large mask of %s" % (literal)) cocci.include_match(False) // Replace the literal mask with the calculated result. @plus_one@ expression literal_mask.LITERAL; position literal_mask.p; identifier add_one.RESULT; identifier FUNC; @@ - (FUNC()@p & (LITERAL)) + (RESULT() & LITERAL) Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Yury Norov <yury.norov@gmail.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@toke.dk> # for sch_cake Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
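A before/after fragment illustrating the transformation (variable names are made up):

    u8 rot;
    u16 tag;

    /* Before: 32 random bits fetched, most of them discarded */
    rot = (u8)get_random_u32();
    tag = get_random_u32() & 0xffff;

    /* After: request only the width that is actually used */
    rot = get_random_u8();
    tag = get_random_u16();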
2022-10-11  treewide: use prandom_u32_max() when possible, part 1  (Jason A. Donenfeld, 1 file changed, -43/+43)
Rather than incurring a division or requesting too many random bytes for the given range, use the prandom_u32_max() function, which only takes the minimum required bytes from the RNG and avoids divisions. This was done mechanically with this coccinelle script: @basic@ expression E; type T; identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u64; @@ ( - ((T)get_random_u32() % (E)) + prandom_u32_max(E) | - ((T)get_random_u32() & ((E) - 1)) + prandom_u32_max(E * XXX_MAKE_SURE_E_IS_POW2) | - ((u64)(E) * get_random_u32() >> 32) + prandom_u32_max(E) | - ((T)get_random_u32() & ~PAGE_MASK) + prandom_u32_max(PAGE_SIZE) ) @multi_line@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; identifier RAND; expression E; @@ - RAND = get_random_u32(); ... when != RAND - RAND %= (E); + RAND = prandom_u32_max(E); // Find a potential literal @literal_mask@ expression LITERAL; type T; identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; position p; @@ ((T)get_random_u32()@p & (LITERAL)) // Add one to the literal. @script:python add_one@ literal << literal_mask.LITERAL; RESULT; @@ value = None if literal.startswith('0x'): value = int(literal, 16) elif literal[0] in '123456789': value = int(literal, 10) if value is None: print("I don't know how to handle %s" % (literal)) cocci.include_match(False) elif value == 2**32 - 1 or value == 2**31 - 1 or value == 2**24 - 1 or value == 2**16 - 1 or value == 2**8 - 1: print("Skipping 0x%x for cleanup elsewhere" % (value)) cocci.include_match(False) elif value & (value + 1) != 0: print("Skipping 0x%x because it's not a power of two minus one" % (value)) cocci.include_match(False) elif literal.startswith('0x'): coccinelle.RESULT = cocci.make_expr("0x%x" % (value + 1)) else: coccinelle.RESULT = cocci.make_expr("%d" % (value + 1)) // Replace the literal mask with the calculated result. @plus_one@ expression literal_mask.LITERAL; position literal_mask.p; expression add_one.RESULT; identifier FUNC; @@ - (FUNC()@p & (LITERAL)) + prandom_u32_max(RESULT) @collapse_ret@ type T; identifier VAR; expression E; @@ { - T VAR; - VAR = (E); - return VAR; + return E; } @drop_var@ type T; identifier VAR; @@ { - T VAR; ... when != VAR } Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Yury Norov <yury.norov@gmail.com> Reviewed-by: KP Singh <kpsingh@kernel.org> Reviewed-by: Jan Kara <jack@suse.cz> # for ext4 and sbitmap Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> # for drbd Acked-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Heiko Carstens <hca@linux.ibm.com> # for s390 Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-10-10  Merge tag 'mm-stable-2022-10-08' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds, 1 file changed, -0/+2)

Pull MM updates from Andrew Morton:

 - Yu Zhao's Multi-Gen LRU patches are here. They've been under test in
   linux-next for a couple of months without, to my knowledge, any
   negative reports (or any positive ones, come to that).

 - Also the Maple Tree from Liam Howlett. An overlapping range-based
   tree for vmas. It is apparently slightly more efficient in its own
   right, but is mainly targeted at enabling work to reduce mmap_lock
   contention. Liam has identified a number of other tree users in the
   kernel which could be beneficially converted to mapletrees.
   Yu Zhao has identified a hard-to-hit but "easy to fix" lockdep splat
   at [1]. This has yet to be addressed due to Liam's unfortunately
   timed vacation. He is now back and we'll get this fixed up.

 - Dmitry Vyukov introduces KMSAN: the Kernel Memory Sanitizer. It uses
   clang-generated instrumentation to detect use-of-uninitialized bugs
   down to the single bit level. KMSAN keeps finding bugs. New ones, as
   well as the legacy ones.

 - Yang Shi adds a userspace mechanism (madvise) to induce a collapse of
   memory into THPs.

 - Zach O'Keefe has expanded Yang Shi's madvise(MADV_COLLAPSE) to
   support file/shmem-backed pages.

 - userfaultfd updates from Axel Rasmussen

 - zsmalloc cleanups from Alexey Romanov

 - cleanups from Miaohe Lin: vmscan, hugetlb_cgroup, hugetlb and
   memory-failure

 - Huang Ying adds enhancements to NUMA balancing memory tiering mode's
   page promotion, with a new way of detecting hot pages.

 - memcg updates from Shakeel Butt: charging optimizations and reduced
   memory consumption.

 - memcg cleanups from Kairui Song.

 - memcg fixes and cleanups from Johannes Weiner.

 - Vishal Moola provides more folio conversions

 - Zhang Yi removed ll_rw_block() :(

 - migration enhancements from Peter Xu

 - migration error-path bugfixes from Huang Ying

 - Aneesh Kumar added ability for a device driver to alter the memory
   tiering promotion paths. For optimizations by PMEM drivers, DRM
   drivers, etc.

 - vma merging improvements from Jakub Matěn.

 - NUMA hinting cleanups from David Hildenbrand.

 - xu xin added additional userspace visibility into KSM merging
   activity.

 - THP & KSM code consolidation from Qi Zheng.

 - more folio work from Matthew Wilcox.

 - KASAN updates from Andrey Konovalov.

 - DAMON cleanups from Kaixu Xia.

 - DAMON work from SeongJae Park: fixes, cleanups.

 - hugetlb sysfs cleanups from Muchun Song.

 - Mike Kravetz fixes locking issues in hugetlbfs and in hugetlb core.

Link: https://lkml.kernel.org/r/CAOUHufZabH85CeUN-MEMgL8gJGzJEWUrkiM58JkTbBhh-jew0Q@mail.gmail.com [1]

* tag 'mm-stable-2022-10-08' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (555 commits)
  hugetlb: allocate vma lock for all sharable vmas
  hugetlb: take hugetlb vma_lock when clearing vma_lock->vma pointer
  hugetlb: fix vma lock handling during split vma and range unmapping
  mglru: mm/vmscan.c: fix imprecise comments
  mm/mglru: don't sync disk for each aging cycle
  mm: memcontrol: drop dead CONFIG_MEMCG_SWAP config symbol
  mm: memcontrol: use do_memsw_account() in a few more places
  mm: memcontrol: deprecate swapaccounting=0 mode
  mm: memcontrol: don't allocate cgroup swap arrays when memcg is disabled
  mm/secretmem: remove reduntant return value
  mm/hugetlb: add available_huge_pages() func
  mm: remove unused inline functions from include/linux/mm_inline.h
  selftests/vm: add selftest for MADV_COLLAPSE of uffd-minor memory
  selftests/vm: add file/shmem MADV_COLLAPSE selftest for cleared pmd
  selftests/vm: add thp collapse shmem testing
  selftests/vm: add thp collapse file and tmpfs testing
  selftests/vm: modularize thp collapse memory operations
  selftests/vm: dedup THP helpers
  mm/khugepaged: add tracepoint to hpage_collapse_scan_file()
  mm/madvise: add file and shmem support to MADV_COLLAPSE
  ...
2022-10-03  crypto: kmsan: disable accelerated configs under KMSAN  (Alexander Potapenko, 1 file changed, -0/+30)
KMSAN is unable to understand when initialized values come from assembly. Disable accelerated configs in KMSAN builds to prevent false positive reports. Link: https://lkml.kernel.org/r/20220915150417.722975-27-glider@google.com Signed-off-by: Alexander Potapenko <glider@google.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Christoph Hellwig <hch@lst.de> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Eric Biggers <ebiggers@google.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Eric Dumazet <edumazet@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Ilya Leoshkevich <iii@linux.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Kees Cook <keescook@chromium.org> Cc: Marco Elver <elver@google.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vegard Nossum <vegard.nossum@oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-24  crypto: tcrypt - add async speed test for aria cipher  (Taehee Yoo, 1 file changed, -0/+30)
In order to test the performance of the aria-avx implementation, an async speed test is needed. Add async speed tests for aria to tcrypt. Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-09-24  crypto: aria - prepare generic module for optimized implementations  (Taehee Yoo, 2 files changed, -8/+33)
Rename aria to aria_generic and export some functions such as aria_set_key(), aria_encrypt(), and aria_decrypt() so that they can be used by the aria-avx implementation. Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-09-24  crypto: add __init/__exit annotations to init/exit funcs  (Xiu Jianfeng, 7 files changed, -14/+14)
Add missing __init/__exit annotations to init/exit funcs. Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-09-24  crypto: blake2s - revert unintended config addition of CRYPTO_BLAKE2S  (Lukas Bulwahn, 1 file changed, -21/+0)
Commit 2d16803c562e ("crypto: blake2s - remove shash module") removes the config CRYPTO_BLAKE2S. Commit 3f342a23257d ("crypto: Kconfig - simplify hash entries") makes various changes to the config descriptions as part of some consolidation and clean-up, but among all those changes, it also accidentally adds back CRYPTO_BLAKE2S after its removal, because the original patch was based on a state before the CRYPTO_BLAKE2S removal. See the Link for the author's confirmation that this happened accidentally. Fixes: 3f342a23257d ("crypto: Kconfig - simplify hash entries") Link: https://lore.kernel.org/all/MW5PR84MB18424AB8C095BFC041AE33FDAB479@MW5PR84MB1842.NAMPRD84.PROD.OUTLOOK.COM/ Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-09-09  crypto: akcipher - default implementation for setting a private key  (Ignat Korchagin, 1 file changed, -0/+8)
Changes from v1: removed the default implementation from set_pub_key: it is assumed that an implementation must always have this callback defined, as there is no use case for an algorithm which doesn't need a public key. Many akcipher implementations (like ECDSA) support only signature verification, so they don't have all callbacks defined. Commit 78a0324f4a53 ("crypto: akcipher - default implementations for request callbacks") introduced default callbacks for sign/verify operations, which just return an error code. However, these are not enough, because before calling sign the caller would likely call set_priv_key first on the instantiated transform (as the in-kernel testmgr does). This function does not have a default stub, so the kernel crashes when trying to set a private key on an akcipher which doesn't support signature generation. I noticed this when trying to add a KAT vector for an ECDSA signature to the testmgr. With this patch the testmgr returns an error in dmesg (as it should) instead of crashing the kernel with a NULL ptr dereference. Fixes: 78a0324f4a53 ("crypto: akcipher - default implementations for request callbacks") Signed-off-by: Ignat Korchagin <ignat@cloudflare.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
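A hedged sketch of such a default, mirroring the existing sign/verify stubs (function names are illustrative, not the exact patch):

    /* Used when an implementation (e.g. verify-only ECDSA) provides no
     * set_priv_key callback of its own.
     */
    static int akcipher_default_set_key(struct crypto_akcipher *tfm,
                                        const void *key, unsigned int keylen)
    {
            return -ENOSYS;
    }

    static void akcipher_fill_default_ops(struct akcipher_alg *alg)
    {
            if (!alg->set_priv_key)
                    alg->set_priv_key = akcipher_default_set_key;
    }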
2022-09-02  crypto: testmgr - fix indentation for test_acomp() args  (Lucas Segarra Fernandez, 1 file changed, -1/+1)
Set right indentation for test_acomp(). Signed-off-by: Lucas Segarra Fernandez <lucas.segarra.fernandez@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - simplify compression/RNG entries  (Robert Elliott, 1 file changed, -32/+50)
Shorten menu titles and make them consistent: acronym, then name, then architecture features in parentheses, with no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized". Simplify help text descriptions, update references, and ensure that https references are still valid. Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - simplify cipher entries  (Robert Elliott, 1 file changed, -113/+121)
Shorten menu titles and make them consistent: acronym, then name, then architecture features in parentheses, with no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized". Simplify help text descriptions, update references, and ensure that https references are still valid. Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - simplify userspace entries  (Robert Elliott, 1 file changed, -24/+41)
Shorten menu titles and make them consistent: acronym, then name, then architecture features in parentheses, with no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized". Simplify help text descriptions, update references, and ensure that https references are still valid. Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - simplify hash entries  (Robert Elliott, 1 file changed, -79/+97)
Shorten menu titles and make them consistent: acronym, then name, then architecture features in parentheses, with no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized". Simplify help text descriptions, update references, and ensure that https references are still valid. Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - simplify aead entries  (Robert Elliott, 1 file changed, -18/+30)
Shorten menu titles and make them consistent: acronym, then name, then architecture features in parentheses, with no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized". Simplify help text descriptions, update references, and ensure that https references are still valid. Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - simplify CRC entries  (Robert Elliott, 1 file changed, -12/+25)
Shorten menu titles and make them consistent: acronym, then name, then architecture features in parentheses, with no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized". Simplify help text descriptions, update references, and ensure that https references are still valid. Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - simplify public-key entries  (Robert Elliott, 1 file changed, -21/+34)
Shorten menu titles and make them consistent: acronym, then name, then architecture features in parentheses, with no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized". Simplify help text descriptions, update references, and ensure that https references are still valid. Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - add submenus  (Robert Elliott, 1 file changed, -458/+479)
Convert each comment section into a submenu:

    Cryptographic API
        Crypto core or helper
        Public-key cryptography
        Block ciphers
        Length-preserving ciphers and modes
        AEAD (authenticated encryption with associated data) ciphers
        Hashes, digests, and MACs
        CRCs (cyclic redundancy checks)
        Compression
        Random number generation
        Userspace interface

That helps find entries (e.g., searching for a name like SHA512 doesn't just report the location is Main menu -> Cryptography API, leaving you to wade through 153 entries; it points you to the Digests page). Move entries so they fall into the correct submenus and are better sorted. Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - submenus for arm and arm64  (Robert Elliott, 1 file changed, -0/+6)
Move ARM- and ARM64-accelerated menus into a submenu under the Crypto API menu (paralleling all the architectures). Make each submenu always appear if the corresponding architecture is supported. Get rid of the ARM_CRYPTO and ARM64_CRYPTO symbols. The "ARM Accelerated" or "ARM64 Accelerated" entry disappears from:

    General setup  --->
    Platform selection  --->
    Kernel Features  --->
    Boot options  --->
    Power management options  --->
    CPU Power Management  --->
    [*] ACPI (Advanced Configuration and Power Interface) Support  --->
    [*] Virtualization  --->
    [*] ARM Accelerated Cryptographic Algorithms  --->
        (or)
    [*] ARM64 Accelerated Cryptographic Algorithms  --->
    ...
    -*- Cryptographic API  --->
        Library routines  --->
    Kernel hacking  --->

and moves into the Cryptographic API menu, which now contains:

    ...
    Accelerated Cryptographic Algorithms for CPU (arm)  --->
        (or)
    Accelerated Cryptographic Algorithms for CPU (arm64)  --->
    [*] Hardware crypto devices  --->
    ...

Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-26  crypto: Kconfig - move x86 entries to a submenu  (Robert Elliott, 1 file changed, -495/+3)
Move CPU-specific crypto/Kconfig entries to arch/xxx/crypto/Kconfig and create a submenu for them under the Crypto API menu. Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>