author     Alex Elder <elder@inktank.com>              2013-04-01 10:48:40 -0500
committer  Sage Weil <sage@inktank.com>                2013-05-01 21:17:50 -0700
commit     3bf53337af27a3ccc6e0f433b081063cdf0a2bf6
tree       a224b926731eb781daa5828f97ec1830d2aae89b /fs/ceph/super.c
parent     b0270324c5a9a5157f565c2de34fb1071cfdce7c
ceph: set up page array mempool with correct size
In create_fs_client() a memory pool is set up to be used for arrays of
pages that might be needed in ceph_writepages_start() if memory is
tight. There are two problems with the way it's initialized:
- The size provided is the number of pages we want in the
array, but it should be the number of bytes required for
that many page pointers.
- The number of pages computed can end up being 0, while we
will always need at least one page.
This patch fixes both of these problems.
This resolves the two simple problems defined in:
http://tracker.ceph.com/issues/4603
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Diffstat (limited to 'fs/ceph/super.c')
-rw-r--r--  fs/ceph/super.c  7
1 file changed, 5 insertions, 2 deletions
diff --git a/fs/ceph/super.c b/fs/ceph/super.c
index 6ddc0bca56b2..7d377c9a5e35 100644
--- a/fs/ceph/super.c
+++ b/fs/ceph/super.c
@@ -479,6 +479,8 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
 		CEPH_FEATURE_FLOCK |
 		CEPH_FEATURE_DIRLAYOUTHASH;
 	const unsigned required_features = 0;
+	int page_count;
+	size_t size;
 	int err = -ENOMEM;
 
 	fsc = kzalloc(sizeof(*fsc), GFP_KERNEL);
@@ -522,8 +524,9 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
 
 	/* set up mempools */
 	err = -ENOMEM;
-	fsc->wb_pagevec_pool = mempool_create_kmalloc_pool(10,
-			      fsc->mount_options->wsize >> PAGE_CACHE_SHIFT);
+	page_count = fsc->mount_options->wsize >> PAGE_CACHE_SHIFT;
+	size = sizeof (struct page *) * (page_count ? page_count : 1);
+	fsc->wb_pagevec_pool = mempool_create_kmalloc_pool(10, size);
 	if (!fsc->wb_pagevec_pool)
 		goto fail_trunc_wq;
 
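For readers following the arithmetic: mempool_create_kmalloc_pool(min_nr, size) preallocates min_nr elements of size bytes each, so the old call made every element only page_count bytes instead of large enough to hold page_count page pointers. Below is a minimal userspace sketch of the corrected size computation, assuming 4 KiB pages; the helper name wb_pagevec_elem_size and the sample wsize values are invented for illustration and are not part of the kernel change.

/*
 * Userspace sketch (not kernel code) of the size computation the patch
 * introduces: the pool element must have room for page_count page
 * pointers, and for at least one pointer even when wsize is smaller
 * than a page.  The 4 KiB page size, the helper name and the sample
 * wsize values are assumptions made for this example only.
 */
#include <stdio.h>
#include <stddef.h>

#define EXAMPLE_PAGE_SHIFT 12		/* assume 4 KiB pages */

struct page;				/* opaque, as in the kernel */

static size_t wb_pagevec_elem_size(size_t wsize)
{
	size_t page_count = wsize >> EXAMPLE_PAGE_SHIFT;

	/* always leave room for at least one page pointer */
	if (page_count == 0)
		page_count = 1;

	/* bytes for an array of page pointers, not a raw page count */
	return sizeof(struct page *) * page_count;
}

int main(void)
{
	/* 1 MiB write size -> 256 pointers -> 2048 bytes on a 64-bit box */
	printf("%zu\n", wb_pagevec_elem_size(1UL << 20));
	/* write size below one page -> clamped to a single pointer */
	printf("%zu\n", wb_pagevec_elem_size(512));
	return 0;
}

On a 64-bit machine the first call prints 2048 (256 pointers of 8 bytes) and the second prints 8, showing the clamp to one pointer that the patch adds.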