| author | Linus Torvalds <torvalds@linux-foundation.org> | 2019-03-12 14:58:35 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2019-03-12 14:58:35 -0700 |
| commit | 2b0a80b0d0bb0a3db74588279bf851b28c6c4705 (patch) | |
| tree | 1d73413afae3617422027194eda5f71f94fd2c07 /net | |
| parent | 92825b0298ca6822085ef483f914b6e0dea9bf66 (diff) | |
| parent | d11ae8e0a76afc506071831854348f2ea1f3290e (diff) | |
Merge tag 'ceph-for-5.1-rc1' of git://github.com/ceph/ceph-client
Pull ceph updates from Ilya Dryomov:
"The highlights are:
- rbd will now ignore discards that aren't aligned and big enough to
actually free up some space (myself). This is controlled by the new
alloc_size map option and can be disabled if needed.
- support for rbd deep-flatten feature (myself). Deep-flatten allows
"rbd flatten" to fully disconnect the clone image and its snapshots
from the parent and make the parent snapshot removable.
- a new round of cap handling improvements (Zheng Yan). The kernel
client should now be much more prompt about releasing its caps and
it is possible to put a limit on the number of caps held.
- support for getting ceph.dir.pin extended attribute (Zheng Yan)"
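As a quick illustration of that last highlight: ceph.dir.pin is exposed through the ordinary extended-attribute interface, so it can be read with getxattr(2). A minimal userspace sketch follows; the mount point and directory path are placeholders, and it assumes the value comes back as text, as ceph's other virtual xattrs do.

```c
/* Sketch: read the ceph.dir.pin xattr of a CephFS directory.
 * The path below is a placeholder; the xattr name comes from the
 * "support for getting ceph.dir.pin extended attribute" change above. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/cephfs/some-dir";
	char buf[64];

	/* getxattr(2) returns the value length, or -1 on error
	 * (e.g. ENODATA when no pin is set on the directory). */
	ssize_t len = getxattr(path, "ceph.dir.pin", buf, sizeof(buf) - 1);
	if (len < 0) {
		perror("getxattr ceph.dir.pin");
		return EXIT_FAILURE;
	}
	buf[len] = '\0';
	printf("%s pin value: %s\n", path, buf);
	return EXIT_SUCCESS;
}
```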
* tag 'ceph-for-5.1-rc1' of git://github.com/ceph/ceph-client: (26 commits)
Documentation: modern versions of ceph are not backed by btrfs
rbd: advertise support for RBD_FEATURE_DEEP_FLATTEN
rbd: whole-object write and zeroout should copyup when snapshots exist
rbd: copyup with an empty snapshot context (aka deep-copyup)
rbd: introduce rbd_obj_issue_copyup_ops()
rbd: stop copying num_osd_ops in rbd_obj_issue_copyup()
rbd: factor out __rbd_osd_req_create()
rbd: clear ->xferred on error from rbd_obj_issue_copyup()
rbd: remove experimental designation from kernel layering
ceph: add mount option to limit caps count
ceph: periodically trim stale dentries
ceph: delete stale dentry when last reference is dropped
ceph: remove dentry_lru file from debugfs
ceph: touch existing cap when handling reply
ceph: pass inclusive lend parameter to filemap_write_and_wait_range()
rbd: round off and ignore discards that are too small
rbd: handle DISCARD and WRITE_ZEROES separately
rbd: get rid of obj_req->obj_request_count
libceph: use struct_size() for kmalloc() in crush_decode()
ceph: send cap releases more aggressively
...
Diffstat (limited to 'net')
| -rw-r--r-- | net/ceph/osdmap.c | 5 |
1 file changed, 2 insertions, 3 deletions
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index 98c0ff3d6441..48a31dc9161c 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -495,9 +495,8 @@ static struct crush_map *crush_decode(void *pbyval, void *end)
 			   / sizeof(struct crush_rule_step))
 			goto bad;
 #endif
-		r = c->rules[i] = kmalloc(sizeof(*r) +
-					  yes*sizeof(struct crush_rule_step),
-					  GFP_NOFS);
+		r = kmalloc(struct_size(r, steps, yes), GFP_NOFS);
+		c->rules[i] = r;
 		if (r == NULL)
 			goto badmem;
 		dout(" rule %d is at %p\n", i, r);
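For context on the struct_size() conversion shown above: the kernel macro (from <linux/overflow.h>) replaces open-coded sums of the form sizeof(*r) + n * sizeof(element) for structures ending in a flexible array member, with overflow checking added on top. Below is a rough userspace sketch of the arithmetic only, using stand-in types rather than the real struct crush_rule, and a local macro rather than the kernel's (which additionally saturates to SIZE_MAX on overflow).

```c
/* Userspace sketch of the size computation behind struct_size().
 * Stand-in types; not the kernel macro and not CRUSH's real layout. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct step {			/* stand-in for struct crush_rule_step */
	uint32_t op, arg1, arg2;
};

struct rule {			/* stand-in for struct crush_rule */
	uint32_t len;
	struct step steps[];	/* flexible array member */
};

/* sizeof(*r) + n * sizeof(r->steps[0]): the sum the old code spelled out.
 * sizeof() does not evaluate its operand, so using r here is safe. */
#define STRUCT_SIZE(r, member, n) \
	(sizeof(*(r)) + (n) * sizeof((r)->member[0]))

int main(void)
{
	size_t n = 4;
	struct rule *r = malloc(STRUCT_SIZE(r, steps, n));

	if (!r)
		return EXIT_FAILURE;
	r->len = n;
	printf("allocated %zu bytes for %zu steps\n",
	       STRUCT_SIZE(r, steps, n), n);
	free(r);
	return EXIT_SUCCESS;
}
```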