path: root/fs/nfsd
2018-06-12  Merge tag 'nfsd-4.18' of git://linux-nfs.org/~bfields/linux  (Linus Torvalds; 5 files changed, -14/+27)
Pull nfsd updates from Bruce Fields: "A relatively quiet cycle for nfsd. The largest piece is an RDMA update from Chuck Lever with new trace points, miscellaneous cleanups, and streamlining of the send and receive paths. Other than that, some miscellaneous bugfixes"

* tag 'nfsd-4.18' of git://linux-nfs.org/~bfields/linux: (26 commits)
  nfsd: fix error handling in nfs4_set_delegation()
  nfsd: fix potential use-after-free in nfsd4_decode_getdeviceinfo
  Fix 16-byte memory leak in gssp_accept_sec_context_upcall
  svcrdma: Fix incorrect return value/type in svc_rdma_post_recvs
  svcrdma: Remove unused svc_rdma_op_ctxt
  svcrdma: Persistently allocate and DMA-map Send buffers
  svcrdma: Simplify svc_rdma_send()
  svcrdma: Remove post_send_wr
  svcrdma: Don't overrun the SGE array in svc_rdma_send_ctxt
  svcrdma: Introduce svc_rdma_send_ctxt
  svcrdma: Clean up Send SGE accounting
  svcrdma: Refactor svc_rdma_dma_map_buf
  svcrdma: Allocate recv_ctxt's on CPU handling Receives
  svcrdma: Persistently allocate and DMA-map Receive buffers
  svcrdma: Preserve Receive buffer until svc_rdma_sendto
  svcrdma: Simplify svc_rdma_recv_ctxt_put
  svcrdma: Remove sc_rq_depth
  svcrdma: Introduce svc_rdma_recv_ctxt
  svcrdma: Trace key RDMA API events
  svcrdma: Trace key RPC/RDMA protocol events
  ...
2018-06-08  nfsd: fix error handling in nfs4_set_delegation()  (Andrew Elble; 1 file changed, -1/+4)
I noticed a memory corruption crash in nfsd in 4.17-rc1. This patch corrects the issue. Fix to return error if the delegation couldn't be hashed or there was a recall in progress. Use the existing error path instead of destroy_delegation() for readability. Signed-off-by: Andrew Elble <aweits@rit.edu> Fixes: 353601e7d323c ("nfsd: create a separate lease for each delegation") Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-06-08  nfsd: fix potential use-after-free in nfsd4_decode_getdeviceinfo  (Scott Mayhew; 1 file changed, -0/+2)
When running a fuzz tester against a KASAN-enabled kernel, the following splat periodically occurs. The problem occurs when the test sends a GETDEVICEINFO request with a malformed xdr array (size but no data) for gdia_notify_types and the array size is > 0x3fffffff, which results in an overflow in the value of nbytes which is passed to read_buf(). If the array size is 0x40000000, 0x80000000, or 0xc0000000, then after the overflow occurs the value of nbytes is 0, and when that happens the pointer returned by read_buf() points to the end of the xdr data (i.e. argp->end) when it should really be returning NULL. Fix this by returning NFS4ERR_BAD_XDR if the array size is > 1000 (this value is arbitrary, but it's the same threshold used by nfsd4_decode_bitmap()... it could really be any value >= 1 since it's expected to get at most a single bitmap in gdia_notify_types).

[  119.256854] ==================================================================
[  119.257611] BUG: KASAN: use-after-free in nfsd4_decode_getdeviceinfo+0x5a4/0x5b0 [nfsd]
[  119.258422] Read of size 4 at addr ffff880113ada000 by task nfsd/538
[  119.259146] CPU: 0 PID: 538 Comm: nfsd Not tainted 4.17.0+ #1
[  119.259662] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.3-1.fc25 04/01/2014
[  119.261202] Call Trace:
[  119.262265]  dump_stack+0x71/0xab
[  119.263371]  print_address_description+0x6a/0x270
[  119.264609]  kasan_report+0x258/0x380
[  119.265854]  ? nfsd4_decode_getdeviceinfo+0x5a4/0x5b0 [nfsd]
[  119.267291]  nfsd4_decode_getdeviceinfo+0x5a4/0x5b0 [nfsd]
[  119.268549]  ? nfs4svc_decode_compoundargs+0xa5b/0x13c0 [nfsd]
[  119.269873]  ? nfsd4_decode_sequence+0x490/0x490 [nfsd]
[  119.271095]  nfs4svc_decode_compoundargs+0xa5b/0x13c0 [nfsd]
[  119.272393]  ? nfsd4_release_compoundargs+0x1b0/0x1b0 [nfsd]
[  119.273658]  nfsd_dispatch+0x183/0x850 [nfsd]
[  119.274918]  svc_process+0x161c/0x31a0 [sunrpc]
[  119.276172]  ? svc_printk+0x190/0x190 [sunrpc]
[  119.277386]  ? svc_xprt_release+0x451/0x680 [sunrpc]
[  119.278622]  nfsd+0x2b9/0x430 [nfsd]
[  119.279771]  ? nfsd_destroy+0x1c0/0x1c0 [nfsd]
[  119.281157]  kthread+0x2db/0x390
[  119.282347]  ? kthread_create_worker_on_cpu+0xc0/0xc0
[  119.283756]  ret_from_fork+0x35/0x40
[  119.286041] Allocated by task 436:
[  119.287525]  kasan_kmalloc+0xa0/0xd0
[  119.288685]  kmem_cache_alloc+0xe9/0x1f0
[  119.289900]  get_empty_filp+0x7b/0x410
[  119.291037]  path_openat+0xca/0x4220
[  119.292242]  do_filp_open+0x182/0x280
[  119.293411]  do_sys_open+0x216/0x360
[  119.294555]  do_syscall_64+0xa0/0x2f0
[  119.295721]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  119.298068] Freed by task 436:
[  119.299271]  __kasan_slab_free+0x130/0x180
[  119.300557]  kmem_cache_free+0x78/0x210
[  119.301823]  rcu_process_callbacks+0x35b/0xbd0
[  119.303162]  __do_softirq+0x192/0x5ea
[  119.305443] The buggy address belongs to the object at ffff880113ada000 which belongs to the cache filp of size 256
[  119.308556] The buggy address is located 0 bytes inside of 256-byte region [ffff880113ada000, ffff880113ada100)
[  119.311376] The buggy address belongs to the page:
[  119.312728] page:ffffea00044eb680 count:1 mapcount:0 mapping:0000000000000000 index:0xffff880113ada780
[  119.314428] flags: 0x17ffe000000100(slab)
[  119.315740] raw: 0017ffe000000100 0000000000000000 ffff880113ada780 00000001000c0001
[  119.317379] raw: ffffea0004553c60 ffffea00045c11e0 ffff88011b167e00 0000000000000000
[  119.319050] page dumped because: kasan: bad access detected
[  119.321652] Memory state around the buggy address:
[  119.322993]  ffff880113ad9f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[  119.324515]  ffff880113ad9f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[  119.326087] >ffff880113ada000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  119.327547]                    ^
[  119.328730]  ffff880113ada080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  119.330218]  ffff880113ada100: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
[  119.331740] ==================================================================

Signed-off-by: Scott Mayhew <smayhew@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
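The arithmetic behind this overflow is easy to demonstrate in isolation. Below is a minimal user-space sketch (the helper name and the 1000-entry cap come from the description above; none of this is the actual nfsd decoder) showing how a 32-bit byte count wraps to 0 for an array size of 0x40000000, and how rejecting oversized arrays first avoids the multiplication entirely:

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_NOTIFY_TYPES 1000	/* same arbitrary cap the patch description mentions */

/* Returns the number of bytes to read for the XDR array, or 0 on reject. */
static uint32_t notify_types_nbytes(uint32_t array_size)
{
	if (array_size > MAX_NOTIFY_TYPES)	/* reject before multiplying */
		return 0;
	return array_size * 4;			/* now cannot wrap */
}

int main(void)
{
	uint32_t bad = 0x40000000;

	/* Without the cap, 0x40000000 * 4 wraps to 0 in 32 bits, so a later
	 * read_buf(0) "succeeds" and hands back a pointer past the real data. */
	printf("unchecked nbytes = %u\n", (uint32_t)(bad * 4));
	printf("checked   nbytes = %u (rejected)\n", notify_types_nbytes(bad));
	return 0;
}
```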
2018-06-04  Merge tag 'for-4.18/block-20180603' of git://git.kernel.dk/linux-block  (Linus Torvalds; 1 file changed, -1/+1)
Pull block updates from Jens Axboe:
- clean up how we pass around gfp_t and blk_mq_req_flags_t (Christoph)
- prepare us to defer scheduler attach (Christoph)
- clean up drivers handling of bounce buffers (Christoph)
- fix timeout handling corner cases (Christoph/Bart/Keith)
- bcache fixes (Coly)
- prep work for bcachefs and some block layer optimizations (Kent)
- convert users of bio_sets to using embedded structs (Kent)
- fixes for the BFQ io scheduler (Paolo/Davide/Filippo)
- lightnvm fixes and improvements (Matias, with contributions from Hans and Javier)
- adding discard throttling to blk-wbt (me)
- sbitmap blk-mq-tag handling (me/Omar/Ming)
- remove the sparc jsflash block driver, acked by DaveM
- Kyber scheduler improvement from Jianchao, making it more friendly wrt merging
- conversion of symbolic proc permissions to octal, from Joe Perches. Previously the block parts were a mix of both.
- nbd fixes (Josef and Kevin Vigor)
- unify how we handle the various kinds of timestamps that the block core and utility code uses (Omar)
- three NVMe pull requests from Keith and Christoph, bringing AEN to feature completeness, file backed namespaces, cq/sq lock split, and various fixes
- various little fixes and improvements all over the map

* tag 'for-4.18/block-20180603' of git://git.kernel.dk/linux-block: (196 commits)
  blk-mq: update nr_requests when switching to 'none' scheduler
  block: don't use blocking queue entered for recursive bio submits
  dm-crypt: fix warning in shutdown path
  lightnvm: pblk: take bitmap alloc. out of critical section
  lightnvm: pblk: kick writer on new flush points
  lightnvm: pblk: only try to recover lines with written smeta
  lightnvm: pblk: remove unnecessary bio_get/put
  lightnvm: pblk: add possibility to set write buffer size manually
  lightnvm: fix partial read error path
  lightnvm: proper error handling for pblk_bio_add_pages
  lightnvm: pblk: fix smeta write error path
  lightnvm: pblk: garbage collect lines with failed writes
  lightnvm: pblk: rework write error recovery path
  lightnvm: pblk: remove dead function
  lightnvm: pass flag on graceful teardown to targets
  lightnvm: pblk: check for chunk size before allocating it
  lightnvm: pblk: remove unnecessary argument
  lightnvm: pblk: remove unnecessary indirection
  lightnvm: pblk: return NVM_ error on failed submission
  lightnvm: pblk: warn in case of corrupted write buffer
  ...
2018-05-21  nfsd: vfs_mkdir() might succeed leaving dentry negative unhashed  (Al Viro; 1 file changed, -0/+22)
That can (and does, on some filesystems) happen - ->mkdir() (and thus vfs_mkdir()) can legitimately leave its argument negative and just unhash it, counting upon the lookup to pick the object we'd created next time we try to look at that name. Some vfs_mkdir() callers forget about that possibility... Acked-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2018-05-14  block: sanitize blk_get_request calling conventions  (Christoph Hellwig; 1 file changed, -1/+1)
Switch everyone to blk_get_request_flags, and then rename blk_get_request_flags to blk_get_request. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-11  nfsd: Do not refuse to serve out of cache  (Trond Myklebust; 2 files changed, -9/+2)
Currently the knfsd replay cache appears to try to refuse replying to retries that come within 200ms of the cache entry being created. That makes limited sense in today's world of high speed TCP. After a TCP disconnection, a client can very easily reconnect and retry an rpc in less than 200ms. If this logic drops that retry, however, the client may be quite slow to retry again. This logic is original to the first reply cache implementation in 2.1, and may have made more sense for UDP clients that retried much more frequently. After this patch we will still drop on finding the original request still in progress. We may want to fix that as well at some point, though it's less likely. Note that svc_check_conn_limits is often the cause of those disconnections. We may want to fix that some day. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Acked-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-05-11  nfsd: make nfsd4_scsi_identify_device retry with a larger buffer  (Scott Mayhew; 1 file changed, -2/+16)
nfsd4_scsi_identify_device() performs a single IDENTIFY command for the device identification VPD page using a small buffer. If the reply is too large to fit in this buffer then the GETDEVICEINFO reply will not contain any info for the SCSI volume aside from the registration key. This can happen for example if the device has descriptors using long SCSI name strings. When the initial reply from the device indicates a larger buffer is needed, retry once using the page length from that reply. Signed-off-by: Scott Mayhew <smayhew@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
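A hedged sketch of the retry pattern described above, kept independent of the kernel's block and SCSI helpers: issue the Device Identification VPD inquiry with a small buffer, read the page length from the 4-byte VPD header, and retry once with a buffer sized from that header. The `vpd_inquiry_fn` hook and the buffer sizes are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative hook: issues INQUIRY for the device-id VPD page into
 * buf/len and returns 0 on success.  In the kernel this would be a
 * block-layer request; here it is just an assumed callback. */
typedef int (*vpd_inquiry_fn)(void *dev, uint8_t *buf, size_t len);

#define VPD_HEADER_LEN 4
#define VPD_FIRST_TRY  252	/* small initial buffer; size is an assumption */

static uint8_t *get_device_id_vpd(void *dev, vpd_inquiry_fn issue, size_t *out_len)
{
	size_t len = VPD_FIRST_TRY;
	uint8_t *buf = malloc(len);
	size_t page_len;

	if (!buf || issue(dev, buf, len))
		goto fail;

	/* Bytes 2-3 of the VPD header hold the page length (excluding the header). */
	page_len = (((size_t)buf[2] << 8) | buf[3]) + VPD_HEADER_LEN;
	if (page_len > len) {		/* reply was truncated: retry once, larger */
		uint8_t *big = realloc(buf, page_len);

		if (!big)
			goto fail;
		buf = big;
		len = page_len;
		if (issue(dev, buf, len))
			goto fail;
	}
	*out_len = len;
	return buf;
fail:
	free(buf);
	return NULL;
}
```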
2018-05-07  nfsd: restrict rd_maxcount to svc_max_payload in nfsd_encode_readdir  (Scott Mayhew; 1 file changed, -2/+3)
nfsd4_readdir_rsize restricts rd_maxcount to svc_max_payload when estimating the size of the readdir reply, but nfsd_encode_readdir restricts it to INT_MAX when encoding the reply. This can result in log messages like "kernel: RPC request reserved 32896 but used 1049444". Restrict rd_dircount similarly (no reason it should be larger than svc_max_payload). Signed-off-by: Scott Mayhew <smayhew@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: J. Bruce Fields <bfields@redhat.com>
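A minimal sketch of the clamping this patch describes; the helper name and pointer-based interface are illustrative, not the nfsd code:

```c
#include <stdint.h>

/* Clamp the client-supplied READDIR counts to what the transport can
 * actually carry, so the reply space reserved at decode time matches
 * what the encoder is allowed to emit. */
static inline void clamp_readdir_counts(uint32_t *rd_maxcount,
					uint32_t *rd_dircount,
					uint32_t svc_max_payload)
{
	if (*rd_maxcount > svc_max_payload)
		*rd_maxcount = svc_max_payload;
	if (*rd_dircount > svc_max_payload)	/* no reason for it to be larger either */
		*rd_dircount = svc_max_payload;
}
```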
2018-04-03  nfsd: fix incorrect umasks  (J. Bruce Fields; 3 files changed, -7/+15)
We're neglecting to clear the umask after it's set, which can cause a later unrelated rpc to (incorrectly) use the same umask if it happens to be processed by the same thread. There's a more subtle problem here too: An NFSv4 compound request is decoded all in one pass before any operations are executed. Currently we're setting current->fs->umask at the time we decode the compound. In theory a single compound could contain multiple creates each setting a umask. In that case we'd end up using whichever umask was passed in the *last* operation as the umask for all the creates, whether that was correct or not. So, we should just be saving the umask at decode time and waiting to set it until we actually process the corresponding operation. In practice it's unlikely any client would do multiple creates in a single compound. And even if it did they'd likely be from the same process (hence carry the same umask). So this is a little academic, but we should get it right anyway. Fixes: 47057abde515 (nfsd: add support for the umask attribute) Cc: stable@vger.kernel.org Reported-by: Lucash Stach <l.stach@pengutronix.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
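The fix amounts to the classic set-then-restore pattern around the create itself, with the mask captured at decode time rather than applied there. A small user-space illustration using umask(2) follows; the struct and its fields are invented for the example:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

struct create_op {
	const char *path;
	mode_t mode;
	mode_t umask;	/* saved when the request is decoded, not applied yet */
};

/* Apply the op's umask only for the duration of the create, then restore
 * the old value so an unrelated request handled next on this thread is
 * not affected. */
static int do_create(const struct create_op *op)
{
	mode_t saved = umask(op->umask);
	int fd = open(op->path, O_CREAT | O_EXCL | O_WRONLY, op->mode);

	umask(saved);		/* restore on both success and failure */
	if (fd >= 0)
		close(fd);
	return fd < 0 ? -1 : 0;
}
```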
2018-04-03  NFSD: Clean up legacy NFS SYMLINK argument XDR decoders  (Chuck Lever; 7 files changed, -65/+63)
Move common code in NFSD's legacy SYMLINK decoders into a helper. The immediate benefits include: - one fewer data copies on transports that support DDP - consistent error checking across all versions - reduction of code duplication - support for both legal forms of SYMLINK requests on RDMA transports for all versions of NFS (in particular, NFSv2, for completeness) In the long term, this helper is an appropriate spot to perform a per-transport call-out to fill the pathname argument using, say, RDMA Reads. Filling the pathname in the proc function also means that eventually the incoming filehandle can be interpreted so that filesystem- specific memory can be allocated as a sink for the pathname argument, rather than using anonymous pages. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-04-03  NFSD: Clean up legacy NFS WRITE argument XDR decoders  (Chuck Lever; 6 files changed, -30/+21)
Move common code in NFSD's legacy NFS WRITE decoders into a helper. The immediate benefit is reduction of code duplication and some nice micro-optimizations (see below). In the long term, this helper can perform a per-transport call-out to fill the rq_vec (say, using RDMA Reads). The legacy WRITE decoders and procs are changed to work like NFSv4, which constructs the rq_vec just before it is about to call vfs_writev. Why? Calling a transport call-out from the proc instead of the XDR decoder means that the incoming FH can be resolved to a particular filesystem and file. This would allow pages from the backing file to be presented to the transport to be filled, rather than presenting anonymous pages and copying or flipping them into the file's page cache later. I also prefer using the pages in rq_arg.pages, instead of pulling the data pages directly out of the rqstp::rq_pages array. This is currently the way the NFSv3 write decoder works, but the other two do not seem to take this approach. Fixing this removes the only reference to rq_pages found in NFSD, eliminating an NFSD assumption about how transports use the pages in rq_pages. Lastly, avoid setting up the first element of rq_vec as a zero- length buffer. This happens with an RDMA transport when a normal Read chunk is present because the data payload is in rq_arg's page list (none of it is in the head buffer). Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
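A hedged sketch of the "build the vector just before the write" idea, using plain iovecs rather than the kernel's rq_vec and skipping a zero-length head as described above; the helper and its parameters are illustrative assumptions, not the patched code:

```c
#include <stddef.h>
#include <sys/uio.h>

/* Build an iovec from an RPC argument buffer made of a head fragment plus
 * a list of full pages.  A zero-length head (as seen with RDMA Read
 * chunks) is skipped so the first element is never an empty buffer. */
static unsigned int fill_write_vec(struct iovec *vec, unsigned int max_vecs,
				   void *head_base, size_t head_len,
				   void **pages, size_t page_size,
				   size_t payload_len)
{
	unsigned int n = 0;
	size_t remaining = payload_len;

	if (head_len && n < max_vecs) {
		vec[n].iov_base = head_base;
		vec[n].iov_len = head_len < remaining ? head_len : remaining;
		remaining -= vec[n].iov_len;
		n++;
	}
	while (remaining && n < max_vecs) {
		vec[n].iov_base = pages[n - (head_len ? 1 : 0)];
		vec[n].iov_len = remaining < page_size ? remaining : page_size;
		remaining -= vec[n].iov_len;
		n++;
	}
	return n;	/* number of elements to hand to writev()/vfs_writev() */
}
```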
2018-04-03  nfsd: Trace NFSv4 COMPOUND execution  (Chuck Lever; 2 files changed, -6/+42)
This helps record the identity and timing of the ops in each NFSv4 COMPOUND, replacing dprintk calls that did much the same thing. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-04-03  nfsd: Add I/O trace points in the NFSv4 read proc  (Chuck Lever; 5 files changed, -25/+39)
NFSv4 read compound processing invokes nfsd_splice_read and nfs_readv directly, so the trace points currently in nfsd_read are not invoked for NFSv4 reads. Move the NFSD READ trace points to common helpers so that NFSv4 reads are captured. Also, record any local I/O error that occurs, the total count of bytes that were actually returned, and whether splice or vectored read was used. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-04-03  nfsd: Add I/O trace points in the NFSv4 write path  (Chuck Lever; 3 files changed, -13/+50)
NFSv4 write compound processing invokes nfsd_vfs_write directly. The trace points currently in nfsd_write are not effective for NFSv4 writes. Move the trace points into the shared nfsd_vfs_write() helper. After the I/O, we also want to record any local I/O error that might have occurred, and the total count of bytes that were actually moved (rather than the requested number). Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-04-03nfsd: Add "nfsd_" to trace point namesChuck Lever4-20/+20
Follow naming convention used in client and in sunrpc layers. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-04-03  nfsd: Record request byte count, not count of vectors  (Chuck Lever; 2 files changed, -18/+13)
Byte count is more helpful to know than vector count. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-04-03  nfsd: Fix NFSD trace points  (Chuck Lever; 1 file changed, -6/+11)
nfsd-1915  [003] 77915.780959: write_opened:  [FAILED TO PARSE] xid=3286130958 fh=0 offset=154624 len=1
nfsd-1915  [003] 77915.780960: write_io_done: [FAILED TO PARSE] xid=3286130958 fh=0 offset=154624 len=1
nfsd-1915  [003] 77915.780964: write_done:    [FAILED TO PARSE] xid=3286130958 fh=0 offset=154624 len=1

Byte swapping and knfsd_fh_hash() are not available in "trace-cmd report", where the print format string is actually used. These data transformations have to be done during the TP_fast_assign step. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
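A hedged sketch of the corrected pattern: anything that needs kernel helpers (byte swapping, the filehandle hash) is computed in TP_fast_assign, so TP_printk formats only plain integers that trace-cmd can parse. The event and field names are illustrative, not the actual nfsd trace points.

```c
/* Sketch only: would live in an nfsd trace header pulled in with
 * CREATE_TRACE_POINTS; shown standalone here for illustration. */
#include <linux/tracepoint.h>

TRACE_EVENT(nfsd_write_done_sketch,
	TP_PROTO(__be32 xid, u32 fh_hash, loff_t offset, unsigned long len),
	TP_ARGS(xid, fh_hash, offset, len),
	TP_STRUCT__entry(
		__field(u32, xid)
		__field(u32, fh_hash)
		__field(loff_t, offset)
		__field(unsigned long, len)
	),
	TP_fast_assign(
		__entry->xid = be32_to_cpu(xid);	/* swap here, not in the print format */
		__entry->fh_hash = fh_hash;		/* hash computed by the caller in-kernel */
		__entry->offset = offset;
		__entry->len = len;
	),
	TP_printk("xid=0x%08x fh_hash=0x%08x offset=%lld len=%lu",
		__entry->xid, __entry->fh_hash,
		__entry->offset, __entry->len)
);
```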
2018-04-03  nfsd: use correct enum type in decode_cb_op_status  (Stefan Agner; 1 file changed, -2/+2)
Use enum nfs_cb_opnum4 in decode_cb_op_status. This fixes warnings seen with clang:

fs/nfsd/nfs4callback.c:451:36: warning: implicit conversion from enumeration type 'enum nfs_cb_opnum4' to different enumeration type 'enum nfs_opnum4' [-Wenum-conversion]
        status = decode_cb_op_status(xdr, OP_CB_SEQUENCE, &cb->cb_seq_status);
                 ~~~~~~~~~~~~~~~~~~~      ^~~~~~~~~~~~~~

Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
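The warning is easy to reproduce outside the kernel; the toy enums below are illustrative, not the real definitions. Compile with `clang -Wenum-conversion`: a function that declares its parameter as one enum type but is always handed constants of another triggers the warning, and declaring the parameter with the enum the callers actually use silences it.

```c
#include <stdio.h>

enum nfs_opnum4    { OP_GETATTR = 9 };
enum nfs_cb_opnum4 { OP_CB_SEQUENCE = 11 };

/* Before the fix this parameter was 'enum nfs_opnum4 expected', so every
 * caller passing OP_CB_SEQUENCE tripped -Wenum-conversion under clang. */
static int decode_cb_op_status(int actual, enum nfs_cb_opnum4 expected)
{
	return actual == (int)expected ? 0 : -1;
}

int main(void)
{
	/* No implicit conversion between enum types now. */
	printf("%d\n", decode_cb_op_status(11, OP_CB_SEQUENCE));
	return 0;
}
```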
2018-04-03  nfsd: fix boolreturn.cocci warnings  (Fengguang Wu; 1 file changed, -3/+3)
fs/nfsd/nfs4state.c:926:8-9: WARNING: return of 0/1 in function 'nfs4_delegation_exists' with return type bool
fs/nfsd/nfs4state.c:2955:9-10: WARNING: return of 0/1 in function 'nfsd4_compound_in_session' with return type bool

Return statements in functions returning bool should use true/false instead of 1/0. Generated by: scripts/coccinelle/misc/boolreturn.cocci Fixes: 68b18f52947b ("nfsd: make nfs4_get_existing_delegation less confusing") Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> [bfields: also fix -EAGAIN] Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-03-20  nfsd: create a separate lease for each delegation  (J. Bruce Fields; 1 file changed, -108/+65)
Currently we only take one vfs-level delegation (lease) for each file, no matter how many clients hold delegations on that file. Let's instead keep a one-to-one mapping between NFSv4 delegations and VFS delegations. This turns out to be simpler. There is still a many-to-one mapping of NFS opens to NFS files, and the delegations on one file are all associated with one struct file. The VFS can still distinguish between these delegations since we're setting fl_owner to the struct nfs4_delegation now, not to the shared file. I'm replacing at least one complicated function wholesale, which I don't like to do, but I haven't figured out how to do this more incrementally. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org>
2018-03-20  nfsd: move sc_file assignment into alloc_init_deleg  (J. Bruce Fields; 1 file changed, -5/+5)
Take an easy chance to simplify the caller a little. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org>
2018-03-20  nfsd: factor out common delegation-destruction code  (J. Bruce Fields; 1 file changed, -17/+14)
Pull some duplicated code into a common helper. This changes the order in destroy_delegation a little, but it looks to me like that shouldn't matter. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org>
2018-03-20  nfsd: make nfs4_get_existing_delegation less confusing  (J. Bruce Fields; 1 file changed, -14/+9)
This doesn't "get" anything. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org>
2018-03-20  nfsd4: dp->dl_stid.sc_file doesn't need locking  (J. Bruce Fields; 1 file changed, -1/+2)
The delegation isn't visible to anyone yet. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org>
2018-03-20  nfsd4: set fl_owner to delegation, not file pointer  (J. Bruce Fields; 1 file changed, -10/+7)
For now this makes no difference, as for files having delegations, there's a one-to-one relationship between an nfs4_file and its nfs4_delegation. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org>
2018-03-20  nfsd: simplify nfs4_put_deleg_lease calls  (J. Bruce Fields; 1 file changed, -5/+6)
Every single caller gets the file out of the delegation, so let's do that once in nfs4_put_deleg_lease. Plus we'll need it there for other reasons. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org>
2018-03-20  nfsd: simplify put of fi_deleg_file  (J. Bruce Fields; 1 file changed, -1/+4)
fi_delegees is basically just a reference count on users of fi_deleg_file, which is cleared when fi_delegees goes to zero. The fi_deleg_file check here is redundant. Also add an assertion to make sure we don't have unbalanced puts. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org>
2018-03-19  nfsd: move nfs4_client allocation to dedicated slabcache  (Jeff Layton; 1 file changed, -5/+13)
On x86_64, struct nfs4_client is 1152 bytes, so allocating it from a dedicated slab cache instead of the generic kmalloc-2048 cache avoids wasting 896 bytes per client. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-03-19  nfsd: don't require low ports for gss requests  (J. Bruce Fields; 1 file changed, -1/+11)
In a traditional NFS deployment using auth_unix, the clients are trusted to correctly report the credentials of their logged-in users. The server assumes that only root on client machines is allowed to send requests from low-numbered ports, so it can use the originating port number to distinguish "real" NFS clients from NFS clients run by ordinary users, to prevent ordinary users from spoofing credentials. The originating port number on a gss-authenticated request is less important. The authentication ties the request to a user, and we take it as proof that that user authorized the request. The low port number check no longer adds much. So, don't enforce low port numbers in the auth_gss case. Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
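A hedged sketch of the resulting policy; the function, flag names, and port constant are illustrative rather than the actual sunrpc/nfsd code. The privileged-port test is applied only to AUTH_NULL/AUTH_UNIX requests, since a GSS context already ties the request to a user:

```c
#include <stdbool.h>
#include <stdint.h>

enum rpc_auth_flavor { AUTH_NULL = 0, AUTH_UNIX = 1, RPC_AUTH_GSS = 6 };

/* Return true if the request passes the source-port policy. */
static bool port_check_ok(enum rpc_auth_flavor flavor, uint16_t src_port,
			  bool export_allows_insecure_ports)
{
	if (export_allows_insecure_ports)
		return true;
	if (flavor == RPC_AUTH_GSS)	/* authentication, not port, ties the request to a user */
		return true;
	return src_port < 1024;		/* legacy "root on the client" heuristic */
}
```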
2018-03-19  nfsd: remove unused "cp_consecutive" field  (J. Bruce Fields; 3 files changed, -4/+2)
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-03-19  nfsd4: send the special close_stateid in v4.0 replies as well  (Jeff Layton; 1 file changed, -4/+15)
We already send it for v4.1, but RFC7530 also notes that the stateid in the close reply is bogus. Always send the special close stateid, even in v4.0 responses. No client should put any meaning on it whatsoever. For now, we continue to increment the stateid value, though that might not be necessary either. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-03-19  nfsd: remove blocked locks on client teardown  (Jeff Layton; 1 file changed, -19/+43)
We had some reports of panics in nfsd4_lm_notify, and that showed a nfs4_lockowner that had outlived its so_client. Ensure that we walk any leftover lockowners after tearing down all of the stateids, and remove any blocked locks that they hold. With this change, we also don't need to walk the nbl_lru on nfsd_net shutdown, as that will happen naturally when we tear down the clients. Fixes: 76d348fadff5 (nfsd: have nfsd4_lock use blocking locks for v4.1+ locks) Reported-by: Frank Sorenson <fsorenso@redhat.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Cc: stable@vger.kernel.org # 4.9 Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-02-08  Merge tag 'nfsd-4.16' of git://linux-nfs.org/~bfields/linux  (Linus Torvalds; 6 files changed, -36/+55)
Pull nfsd update from Bruce Fields: "A fairly small update this time around. Some cleanup, RDMA fixes, overlayfs fixes, and a fix for an NFSv4 state bug. The bigger deal for nfsd this time around was Jeff Layton's already-merged i_version patches"

* tag 'nfsd-4.16' of git://linux-nfs.org/~bfields/linux:
  svcrdma: Fix Read chunk round-up
  NFSD: hide unused svcxdr_dupstr()
  nfsd: store stat times in fill_pre_wcc() instead of inode times
  nfsd: encode stat->mtime for getattr instead of inode->i_mtime
  nfsd: return RESOURCE not GARBAGE_ARGS on too many ops
  nfsd4: don't set lock stateid's sc_type to CLOSED
  nfsd: Detect unhashed stids in nfsd4_verify_open_stid()
  sunrpc: remove dead code in svc_sock_setbufsize
  svcrdma: Post Receives in the Receive completion handler
  nfsd4: permit layoutget of executable-only files
  lockd: convert nlm_rqst.a_count from atomic_t to refcount_t
  lockd: convert nlm_lockowner.count from atomic_t to refcount_t
  lockd: convert nsm_handle.sm_count from atomic_t to refcount_t
2018-02-08  NFSD: hide unused svcxdr_dupstr()  (Arnd Bergmann; 1 file changed, -3/+2)
There is now only one caller left for svcxdr_dupstr() and this is inside of an #ifdef, so we can get a warning when the option is disabled: fs/nfsd/nfs4xdr.c:241:1: error: 'svcxdr_dupstr' defined but not used [-Werror=unused-function] This changes the remaining caller to use a nicer IS_ENABLED() check, which lets the compiler drop the unused code silently. Fixes: e40d99e6183e ("NFSD: Clean up symlink argument XDR decoders") Suggested-by: Rasmus Villemoes <rasmus.villemoes@prevas.dk> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-02-08  nfsd: store stat times in fill_pre_wcc() instead of inode times  (Amir Goldstein; 3 files changed, -24/+37)
The time values in stat and inode may differ for overlayfs and stat time values are the correct ones to use. This is also consistent with the fact that fill_post_wcc() also stores stat time values. This means introducing a stat call that could fail, where previously we were just copying values out of the inode. To be conservative about changing behavior, we fall back to copying values out of the inode in the error case. It might be better just to clear fh_pre_saved (though note the BUG_ON in set_change_info). Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-02-08  nfsd: encode stat->mtime for getattr instead of inode->i_mtime  (Amir Goldstein; 1 file changed, -0/+1)
The values of stat->mtime and inode->i_mtime may differ for overlayfs and stat->mtime is the correct value to use when encoding getattr. This is also consistent with the fact that other attr times are also encoded from stat values. Both callers of lease_get_mtime() already have the value of stat->mtime, so the only needed change is that lease_get_mtime() will not overwrite this value with inode->i_mtime in case the inode does not have an exclusive lease. Signed-off-by: Amir Goldstein <amir73il@gmail.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-02-08  nfsd: return RESOURCE not GARBAGE_ARGS on too many ops  (J. Bruce Fields; 2 files changed, -2/+10)
A client that sends more than a hundred ops in a single compound currently gets an rpc-level GARBAGE_ARGS error. It would be more helpful to return NFS4ERR_RESOURCE, since that gives the client a better idea how to recover (for example by splitting up the compound into smaller compounds). This is all a bit academic since we've never actually seen a reason for clients to send such long compounds, but we may as well fix it. While we're there, just use NFSD4_MAX_OPS_PER_COMPOUND == 16, the constant we already use in the 4.1 case, instead of hard-coding 100. The chances that anyone actually uses even 16 ops per compound are small enough that I think there's a negligible risk of any regression. This fixes pynfs test COMP6. Reported-by: "Lu, Xinyu" <luxy.fnst@cn.fujitsu.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
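A hedged sketch of the behaviour described above; the constant's value is the one cited in the message, and the helper shape is an assumption rather than the actual nfsd code. An over-long compound is answered with NFS4ERR_RESOURCE instead of failing XDR decoding with an RPC-level GARBAGE_ARGS:

```c
#include <stdint.h>

#define NFSD4_MAX_OPS_PER_COMPOUND 16	/* value cited in the commit message */
#define NFS4_OK          0
#define NFS4ERR_RESOURCE 10018

/* Illustrative check applied while processing a decoded compound. */
static uint32_t check_compound_limit(uint32_t opcnt)
{
	if (opcnt > NFSD4_MAX_OPS_PER_COMPOUND)
		return NFS4ERR_RESOURCE;	/* client can split the compound and retry */
	return NFS4_OK;
}
```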
2018-02-05  nfsd4: don't set lock stateid's sc_type to CLOSED  (J. Bruce Fields; 1 file changed, -4/+1)
There's no point I can see to stp->st_stid.sc_type = NFS4_CLOSED_STID; given release_lock_stateid immediately sets sc_type to 0. That set of sc_type to 0 should be enough to prevent it being used where we don't want it to be; NFS4_CLOSED_STID should only be needed for actual open stateid's that are actually closed. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-02-05  nfsd: Detect unhashed stids in nfsd4_verify_open_stid()  (Trond Myklebust; 1 file changed, -0/+1)
The state of the stid is guaranteed by 2 locks:
- the nfs4_client 'cl_lock' spinlock
- the nfs4_ol_stateid 'st_mutex' mutex

so it is quite possible for the stid to be unhashed after lookup, but before calling nfsd4_lock_ol_stateid(). So we do need to check for a zero value for 'sc_type' in nfsd4_verify_open_stid(). Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Tested-by: Chuck Lever <chuck.lever@oracle.com> Cc: stable@vger.kernel.org Fixes: 659aefb68eca "nfsd: Ensure we don't recognise lock stateids..." Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2018-01-29  Merge tag 'iversion-v4.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux  (Linus Torvalds; 1 file changed, -1/+2)
Pull inode->i_version rework from Jeff Layton: "This pile of patches is a rework of the inode->i_version field. We have traditionally incremented that field on every inode data or metadata change. Typically this increment needs to be logged on disk even when nothing else has changed, which is rather expensive. It turns out though that none of the consumers of that field actually require this behavior. The only real requirement for all of them is that it be different iff the inode has changed since the last time the field was checked. Given that, we can optimize away most of the i_version increments and avoid dirtying inode metadata when the only change is to the i_version and no one is querying it. Queries of the i_version field are rather rare, so we can help write performance under many common workloads. This patch series converts existing accesses of the i_version field to a new API, and then converts all of the in-kernel filesystems to use it. The last patch in the series then converts the backend implementation to a scheme that optimizes away a large portion of the metadata updates when no one is looking at it. In my own testing this series significantly helps performance with small I/O sizes. I also got this email for Christmas this year from the kernel test robot (a 244% r/w bandwidth improvement with XFS over DAX, with 4k writes): https://lkml.org/lkml/2017/12/25/8 A few of the earlier patches in this pile are also flowing to you via other trees (mm, integrity, and nfsd trees in particular)".

* tag 'iversion-v4.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux: (22 commits)
  fs: handle inode->i_version more efficiently
  btrfs: only dirty the inode in btrfs_update_time if something was changed
  xfs: avoid setting XFS_ILOG_CORE if i_version doesn't need incrementing
  fs: only set S_VERSION when updating times if necessary
  IMA: switch IMA over to new i_version API
  xfs: convert to new i_version API
  ufs: use new i_version API
  ocfs2: convert to new i_version API
  nfsd: convert to new i_version API
  nfs: convert to new i_version API
  ext4: convert to new i_version API
  ext2: convert to new i_version API
  exofs: switch to new i_version API
  btrfs: convert to new i_version API
  afs: convert to new i_version API
  affs: convert to new i_version API
  fat: convert to new i_version API
  fs: don't take the i_lock in inode_inc_iversion
  fs: new API for handling inode->i_version
  ntfs: remove i_version handling
  ...
2018-01-29  nfsd: convert to new i_version API  (Jeff Layton; 1 file changed, -1/+2)
Mostly just making sure we use the "get" wrappers so we know when it is being fetched for later use. Signed-off-by: Jeff Layton <jlayton@redhat.com>
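A hedged sketch of what "use the get wrappers" means in practice: a read that feeds a change attribute goes through inode_query_iversion(), which records that the value was observed so future increments aren't optimized away, instead of dereferencing inode->i_version directly. The wrapper function below is illustrative, not the nfsd code:

```c
#include <linux/fs.h>
#include <linux/iversion.h>

/* Build an NFSv4-style change attribute from the inode's i_version via the
 * new API, so the iversion core knows the value was actually queried. */
static u64 nfsd_change_attr_sketch(struct inode *inode)
{
	return inode_query_iversion(inode);
}
```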
2018-01-22  nfsd: auth: Fix gid sorting when rootsquash enabled  (Ben Hutchings; 1 file changed, -3/+3)
Commit bdcf0a423ea1 ("kernel: make groups_sort calling a responsibility group_info allocators") appears to break nfsd rootsquash in a pretty major way. It adds a call to groups_sort() inside the loop that copies/squashes gids, which means the valid gids are sorted along with the following garbage. The net result is that the highest numbered valid gids are replaced with any lower-valued garbage gids, possibly including 0. We should sort only once, after filling in all the gids. Fixes: bdcf0a423ea1 ("kernel: make groups_sort calling a responsibility ...") Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Acked-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
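A hedged sketch of the corrected rootsquash gid handling, modeled on the description above rather than the exact patch: fill every slot first, then call groups_sort() once, so valid gids are never sorted together with uninitialized tail entries.

```c
#include <linux/cred.h>
#include <linux/uidgid.h>

/* Copy the request's supplementary gids, squashing root, and sort only
 * after the whole array is populated. */
static struct group_info *squash_gids_sketch(const struct group_info *rqgi,
					     kgid_t squash_gid)
{
	struct group_info *gi;
	int i;

	gi = groups_alloc(rqgi->ngroups);
	if (!gi)
		return NULL;

	for (i = 0; i < rqgi->ngroups; i++) {
		if (gid_eq(GLOBAL_ROOT_GID, rqgi->gid[i]))
			gi->gid[i] = squash_gid;	/* squash root gid */
		else
			gi->gid[i] = rqgi->gid[i];
	}
	groups_sort(gi);	/* sort once, after every entry is valid */
	return gi;
}
```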
2017-12-21  nfsd4: permit layoutget of executable-only files  (Benjamin Coddington; 1 file changed, -3/+3)
Clients must be able to read a file in order to execute it, and for pNFS that means the client needs to be able to perform a LAYOUTGET on the file. This behavior for executable-only files was added for OPEN in commit a043226bc140 "nfsd4: permit read opens of executable-only files". This fixes up xfstests generic/126 on block/scsi layouts. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2017-12-14  kernel: make groups_sort calling a responsibility group_info allocators  (Thiago Rafael Becker; 1 file changed, -0/+3)
In testing, we found that nfsd threads may call set_groups in parallel for the same entry cached in auth.unix.gid, racing in the call of groups_sort, corrupting the groups for that entry and leading to permission denials for the client.

This patch:
- Make groups_sort globally visible.
- Move the call to groups_sort to the modifiers of group_info.
- Remove the call to groups_sort from set_groups.

Link: http://lkml.kernel.org/r/20171211151420.18655-1-thiago.becker@gmail.com Signed-off-by: Thiago Rafael Becker <thiago.becker@gmail.com> Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com> Reviewed-by: NeilBrown <neilb@suse.com> Acked-by: "J. Bruce Fields" <bfields@fieldses.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-11-27lockd: fix "list_add double add" caused by legacy signal interfaceVasily Averin1-3/+4
restart_grace() uses the hardcoded init_net. It can cause a "list_add double add" in the following scenario:

1) nfsd and lockd were started in several net namespaces
2) nfsd in init_net was stopped (lockd was not stopped because it still had users from other net namespaces)
3) lockd got a signal, called restart_grace() -> set_grace_period() and enabled lock_manager in the hardcoded init_net
4) nfsd in init_net is started again; its lockd_up() calls set_grace_period() and tries to add lock_manager into init_net a 2nd time

Jeff Layton suggests: "Make it safe to call locks_start_grace multiple times on the same lock_manager. If it's already on the global grace_list, then don't try to add it again. (But we don't intentionally add twice, so for now we WARN about that case.) With this change, we also need to ensure that the nfsd4 lock manager initializes the list before we call locks_start_grace. While we're at it, move the rest of the nfsd_net initialization into nfs4_state_create_net. I see no reason to have it spread over two functions like it is today." The suggested patch was updated to generate a warning in the described situation. Suggested-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Vasily Averin <vvs@virtuozzo.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
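A hedged sketch of Jeff Layton's suggestion, with the per-net grace-list lookup elided and the helper names invented for illustration: starting a grace period becomes a no-op (plus a warning) when the lock_manager is already on the list, which in turn requires its list head to be initialized before first use.

```c
#include <linux/bug.h>
#include <linux/list.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(grace_lock);

/* lm_entry must have been set up with INIT_LIST_HEAD() before the first
 * call, so list_empty() reliably means "not yet on any grace list". */
static void start_grace_once(struct list_head *net_grace_list,
			     struct list_head *lm_entry)
{
	spin_lock(&grace_lock);
	if (list_empty(lm_entry))
		list_add(lm_entry, net_grace_list);
	else
		WARN(1, "lock_manager added to the grace list twice\n");
	spin_unlock(&grace_lock);
}
```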
2017-11-27  race of nfsd inetaddr notifiers vs nn->nfsd_serv change  (Vasily Averin; 3 files changed, -3/+17)
nfsd_inet[6]addr_event uses nn->nfsd_serv without taking nfsd_mutex, which can be changed during execution of notifiers and crash the host. Moreover if notifiers were enabled in one net namespace they are enabled in all other net namespaces, from creation until destruction. This patch allows notifiers to access nn->nfsd_serv only after the pointer is correctly initialized and delays cleanup until notifiers are no longer in use. Signed-off-by: Vasily Averin <vvs@virtuozzo.com> Tested-by: Scott Mayhew <smayhew@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2017-11-27  NFSD: make cache_detail structures const  (Bhumika Goyal; 2 files changed, -4/+4)
Make these const as they are only getting passed to the function cache_create_net having the argument as const. Signed-off-by: Bhumika Goyal <bhumirks@gmail.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2017-11-27  nfsd: check for use of the closed special stateid  (Andrew Elble; 1 file changed, -2/+5)
Prevent the use of the closed (invalid) special stateid by clients. Signed-off-by: Andrew Elble <aweits@rit.edu> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2017-11-27  nfsd: fix panic in posix_unblock_lock called from nfs4_laundromat  (Naofumi Honda; 1 file changed, -2/+2)
Since kernel 4.9, my two nfsv4 servers sometimes suffer from "panic: unable to handle kernel page request" in posix_unblock_lock() called from nfs4_laundromat(). These panics disappear if we revert the commit "nfsd: add a LRU list for blocked locks". The cause appears to be a typo in nfs4_laundromat(), which is also present in nfs4_state_shutdown_net(). Cc: stable@vger.kernel.org Fixes: 7919d0a27f1e "nfsd: add a LRU list for blocked locks" Cc: jlayton@redhat.com Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>