author     Alexey Romanov <avromanov@sberdevices.ru>      2022-11-09 20:50:42 +0900
committer  Andrew Morton <akpm@linux-foundation.org>      2022-11-30 15:58:52 -0800
commit     7c2af309abd24ff4e313246bf9b68f398d95c871
tree       75af9d314190c3812a1ec8a7ba37b2997ef01321  /mm/zsmalloc.c
parent     f24ee92cbe13242758635e654b2422dbf4912e4b
zram: add size class equals check into recompression
It makes no sense to recompress an object if it will end up in the same
size class: we gain no memory, yet we pay a CPU time overhead for
inserting the object into a zspage and decompressing it afterwards.
[senozhatsky: rebased and fixed conflicts]
Link: https://lkml.kernel.org/r/20221109115047.2921851-9-senozhatsky@chromium.org
Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/zsmalloc.c')
-rw-r--r--  mm/zsmalloc.c | 21
1 file changed, 21 insertions(+), 0 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b52b7bb88b52..78feda34ad9a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1205,6 +1205,27 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
 	return get_zspage_inuse(zspage) == class->objs_per_zspage;
 }
 
+/**
+ * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
+ * that hold objects of the provided size.
+ * @pool: zsmalloc pool to use
+ * @size: object size
+ *
+ * Context: Any context.
+ *
+ * Return: the index of the zsmalloc &size_class that hold objects of the
+ * provided size.
+ */
+unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size)
+{
+	struct size_class *class;
+
+	class = pool->size_class[get_size_class_index(size)];
+
+	return class->index;
+}
+EXPORT_SYMBOL_GPL(zs_lookup_class_index);
+
 unsigned long zs_get_total_pages(struct zs_pool *pool)
 {
 	return atomic_long_read(&pool->pages_allocated);
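For context, a minimal caller-side sketch of how a recompression path could
use the new zs_lookup_class_index() helper to skip objects whose recompressed
size still maps to the same size class. This is not part of the patch: the
helper name should_store_recompressed() and its surrounding logic are
illustrative assumptions, and the sketch presumes the zs_lookup_class_index()
declaration is visible via linux/zsmalloc.h.

#include <linux/zsmalloc.h>

/*
 * Hypothetical helper (not from this series): decide whether storing a
 * recompressed object is worthwhile. If the new length maps to the same
 * (or a higher) size class index, there is no memory gain, only extra
 * CPU spent on insertion and later decompression, so skip the store.
 */
static bool should_store_recompressed(struct zs_pool *pool,
				      unsigned int old_len,
				      unsigned int new_len)
{
	unsigned int old_class = zs_lookup_class_index(pool, old_len);
	unsigned int new_class = zs_lookup_class_index(pool, new_len);

	/* Only a strictly smaller size class saves memory. */
	return new_class < old_class;
}

Comparing class indices rather than raw compressed lengths is the point of the
helper: two different lengths that round up to the same zsmalloc size class
occupy the same amount of pool memory.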