| author | David Howells <dhowells@redhat.com> | 2018-02-06 14:12:32 +0000 |
|---|---|---|
| committer | David Howells <dhowells@redhat.com> | 2018-02-06 14:36:54 +0000 |
| commit | 45df8462730d2149834980d3db16e2d2b9daaf60 | |
| tree | b884a2ea9e5c704a5ec90caf905a3a1902f7ce74 /fs/afs/vlclient.c | |
| parent | 8305e579c653b127b292fcdce551e930f9560260 | |
afs: Fix server list handling
Fix server list handling in the following ways:
(1) In afs_alloc_volume(), remove the duplicate server list build code. That
    work was already done by afs_alloc_server_list(), which afs_alloc_volume()
    called anyway, so the duplication just resulted in twice as many VL RPCs
    being issued (see the first sketch after this list).
(2) In afs_deliver_vl_get_entry_by_name_u(), use the number of server
    records indicated by ->nServers in the UVLDB record returned by the
    VL.GetEntryByNameU RPC call rather than scanning all AFS_NMAXNSERVERS
    slots, as unused slots may contain garbage (see the second sketch after
    this list and the diff at the bottom of this page).
(3) In afs_alloc_server_list(), don't stop converting a UVLDB record into
    a server list just because one of the servers can't be looked up; skip
    that server and go on to the next. Only if none of the servers can be
    looked up do we fail at the end (see the third sketch after this list).
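
The three sketches below are illustrative only: they use made-up simplified types rather than the real definitions in fs/afs/internal.h, and userspace allocators in place of the kernel's. First, the shape of fix (1): the volume allocator delegates the server list build entirely to the list allocator instead of doing a duplicate pass of its own.

```c
#include <stdlib.h>

/* Stand-in types; the real ones live in fs/afs/internal.h. */
struct vldb_entry { unsigned int nservers; };
struct server_list { unsigned int nr_servers; };
struct volume { struct server_list *servers; };

/* Stand-in for afs_alloc_server_list(): one pass of VL lookups that
 * turns a VLDB entry into a usable server list.
 */
static struct server_list *alloc_server_list(const struct vldb_entry *vldb)
{
	struct server_list *slist = calloc(1, sizeof(*slist));

	if (slist)
		slist->nr_servers = vldb->nservers;
	return slist;
}

/* Fix (1): delegate entirely to alloc_server_list().  Before the fix,
 * the volume allocator built its own copy of the list first and then
 * called the list allocator anyway, so every VL lookup ran twice.
 */
static struct volume *alloc_volume(const struct vldb_entry *vldb)
{
	struct volume *volume = calloc(1, sizeof(*volume));

	if (!volume)
		return NULL;
	volume->servers = alloc_server_list(vldb);	/* single pass */
	if (!volume->servers) {
		free(volume);
		return NULL;
	}
	return volume;
}

int main(void)
{
	struct vldb_entry vldb = { .nservers = 2 };

	return alloc_volume(&vldb) ? 0 : 1;
}
```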
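Fix (2) corresponds to the hunks actually shown in the diff at the bottom of this page. In isolation, the idea is to trust the count the VL server reported, clamped to the array bound, and never to scan past it. Again, the types here are simplified stand-ins:

```c
#include <arpa/inet.h>	/* ntohl(), htonl() */
#include <stdint.h>
#include <stdio.h>

#define NMAXNSERVERS 13	/* stand-in for AFS_NMAXNSERVERS */

/* Simplified stand-in for struct afs_uvldbentry__xdr. */
struct uvldb_xdr {
	uint32_t nServers;			/* big-endian on the wire */
	uint32_t serverFlags[NMAXNSERVERS];
};

/* Fix (2): only the first nServers slots of the record are meaningful;
 * the rest may hold garbage, so clamp the count and never scan past it.
 */
static uint32_t valid_server_count(const struct uvldb_xdr *uvldb)
{
	uint32_t nr_servers = ntohl(uvldb->nServers);

	if (nr_servers > NMAXNSERVERS)
		nr_servers = NMAXNSERVERS;
	return nr_servers;
}

int main(void)
{
	struct uvldb_xdr rec = { .nServers = htonl(3) };

	printf("%u usable slots\n", (unsigned)valid_server_count(&rec));
	return 0;
}
```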
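Fix (3) lands in afs_alloc_server_list() in fs/afs/server_list.c, which the diff below doesn't show. Here is a sketch of the control-flow change, with stand-in types and a stub standing in for the real per-server resolution:

```c
#include <stdlib.h>

#define MAX_SERVERS 13	/* stand-in for AFS_NMAXNSERVERS */

struct uuid { unsigned char b[16]; };
struct server { int id; };

struct server_list {
	unsigned int nr_servers;
	struct server *servers[MAX_SERVERS];
};

/* Stub: in kafs this is the per-server lookup, which may fail for any
 * individual server record (e.g. one with a garbage UUID).
 */
static struct server *lookup_server(const struct uuid *uuid)
{
	(void)uuid;
	return NULL;	/* pretend this particular lookup failed */
}

/* Fix (3): one failed lookup no longer aborts the whole build; the bad
 * slot is skipped, and only a completely empty result is an error.
 */
static struct server_list *build_server_list(const struct uuid *uuids,
					     unsigned int nr_uuids)
{
	struct server_list *slist = calloc(1, sizeof(*slist));
	unsigned int i;

	if (!slist)
		return NULL;
	if (nr_uuids > MAX_SERVERS)
		nr_uuids = MAX_SERVERS;

	for (i = 0; i < nr_uuids; i++) {
		struct server *server = lookup_server(&uuids[i]);

		if (!server)
			continue;	/* skip this server, try the next */
		slist->servers[slist->nr_servers++] = server;
	}

	if (!slist->nr_servers) {	/* nothing resolved: fail now */
		free(slist);
		return NULL;
	}
	return slist;
}

int main(void)
{
	static struct uuid uuids[2];	/* zero-initialised */

	/* With the stub above every lookup fails, so the build returns
	 * NULL only because *no* server resolved.
	 */
	return build_server_list(uuids, 2) ? 1 : 0;
}
```

The design point is that partial resolution is acceptable: a single bad or garbage UUID should cost one slot, not the whole mount.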
Without this patch, an attempt to view the umich.edu root cell using
something like "ls /afs/umich.edu" on a dynamic root (future patch) mount
or an autocell mount will result in ENOMEDIUM. The failure is due to kafs
not stopping after nServers' worth of records has been read, but then
trying to access a server with a garbage UUID and getting an error, which
aborts the server list build.
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Reported-by: Jonathan Billings <jsbillings@jsbillings.org>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
Diffstat (limited to 'fs/afs/vlclient.c')
-rw-r--r-- fs/afs/vlclient.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
```diff
diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
index e372f89fd36a..5d8562f1ad4a 100644
--- a/fs/afs/vlclient.c
+++ b/fs/afs/vlclient.c
@@ -23,7 +23,7 @@ static int afs_deliver_vl_get_entry_by_name_u(struct afs_call *call)
 	struct afs_uvldbentry__xdr *uvldb;
 	struct afs_vldb_entry *entry;
 	bool new_only = false;
-	u32 tmp;
+	u32 tmp, nr_servers;
 	int i, ret;
 
 	_enter("");
@@ -36,6 +36,10 @@ static int afs_deliver_vl_get_entry_by_name_u(struct afs_call *call)
 	uvldb = call->buffer;
 	entry = call->reply[0];
 
+	nr_servers = ntohl(uvldb->nServers);
+	if (nr_servers > AFS_NMAXNSERVERS)
+		nr_servers = AFS_NMAXNSERVERS;
+
 	for (i = 0; i < ARRAY_SIZE(uvldb->name) - 1; i++)
 		entry->name[i] = (u8)ntohl(uvldb->name[i]);
 	entry->name[i] = 0;
@@ -44,14 +48,14 @@ static int afs_deliver_vl_get_entry_by_name_u(struct afs_call *call)
 	/* If there is a new replication site that we can use, ignore all the
 	 * sites that aren't marked as new.
 	 */
-	for (i = 0; i < AFS_NMAXNSERVERS; i++) {
+	for (i = 0; i < nr_servers; i++) {
 		tmp = ntohl(uvldb->serverFlags[i]);
 		if (!(tmp & AFS_VLSF_DONTUSE) &&
 		    (tmp & AFS_VLSF_NEWREPSITE))
 			new_only = true;
 	}
 
-	for (i = 0; i < AFS_NMAXNSERVERS; i++) {
+	for (i = 0; i < nr_servers; i++) {
 		struct afs_uuid__xdr *xdr;
 		struct afs_uuid *uuid;
 		int j;
```