path: root/include/linux/slub_def.h
Age | Commit message | Author | Files | Lines
2011-05-21 | slub: Deal with hyperthetical case of PAGE_SIZE > 2M | Christoph Lameter | 1 | -2/+4
2011-05-07 | slub: Remove CONFIG_CMPXCHG_LOCAL ifdeffery | Christoph Lameter | 1 | -2/+0
2011-03-22 | slub: Add statistics for this_cmpxchg_double failures | Christoph Lameter | 1 | -0/+1
2011-03-20 | Merge branch 'slub/lockless' into for-linus | Pekka Enberg | 1 | -2/+5
2011-03-11 | slub: automatically reserve bytes at the end of slab | Lai Jiangshan | 1 | -0/+1
2011-03-11 | Lockless (and preemptless) fastpaths for slub | Christoph Lameter | 1 | -1/+4
2011-03-11 | slub: min_partial needs to be in first cacheline | Christoph Lameter | 1 | -1/+1
2010-11-06 | slub tracing: move trace calls out of always inlined functions to reduce kern... | Richard Kennedy | 1 | -29/+26
2010-10-06 | slub: Enable sysfs support for !CONFIG_SLUB_DEBUG | Christoph Lameter | 1 | -1/+1
2010-10-02 | slub: reduce differences between SMP and NUMA | Christoph Lameter | 1 | -4/+1
2010-10-02 | slub: Dynamically size kmalloc cache allocations | Christoph Lameter | 1 | -5/+2
2010-08-22 | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/pen... | Linus Torvalds | 1 | -1/+1
2010-08-11 | dma-mapping: rename ARCH_KMALLOC_MINALIGN to ARCH_DMA_MINALIGN | FUJITA Tomonori | 1 | -3/+5
2010-08-09 | slub: add missing __percpu markup in mm/slub_def.h | Namhyung Kim | 1 | -1/+1
2010-06-09 | Merge branch 'perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/fre... | Ingo Molnar | 1 | -1/+2
2010-06-09 | tracing: Remove kmemtrace ftrace plugin | Li Zefan | 1 | -1/+2
2010-05-30 | SLUB: Allow full duplication of kmalloc array for 390 | Christoph Lameter | 1 | -1/+1
2010-05-24 | slub: move kmem_cache_node into it's own cacheline | Alexander Duyck | 1 | -6/+3
2010-05-19 | mm: Move ARCH_SLAB_MINALIGN and ARCH_KMALLOC_MINALIGN to <linux/slub_def.h> | David Woodhouse | 1 | -0/+8
2009-12-20 | SLUB: this_cpu: Remove slub kmem_cache fields | Christoph Lameter | 1 | -2/+0
2009-12-20 | SLUB: Get rid of dynamic DMA kmalloc cache allocation | Christoph Lameter | 1 | -8/+11
2009-12-20 | SLUB: Use this_cpu operations in slub | Christoph Lameter | 1 | -5/+1
2009-12-11 | tracing, slab: Define kmem_cache_alloc_notrace ifdef CONFIG_TRACING | Li Zefan | 1 | -2/+2
2009-09-14 | Merge branches 'slab/cleanups' and 'slab/fixes' into for-linus | Pekka Enberg | 1 | -6/+2
2009-08-30 | SLUB: fix ARCH_KMALLOC_MINALIGN cases 64 and 256 | Aaro Koskinen | 1 | -4/+2
2009-08-06 | slab: remove duplicate kmem_cache_init_late() declarations | Wu Fengguang | 1 | -2/+0
2009-07-08 | kmemleak: Trace the kmalloc_large* functions in slub | Catalin Marinas | 1 | -0/+2
2009-06-12 | slab,slub: don't enable interrupts during early boot | Pekka Enberg | 1 | -0/+2
2009-04-12 | tracing, kmemtrace: Separate include/trace/kmemtrace.h to kmemtrace part and ... | Zhaolei | 1 | -1/+1
2009-04-03 | kmemtrace: use tracepoints | Eduard - Gabriel Munteanu | 1 | -8/+4
2009-04-02 | Merge branch 'tracing/core-v2' into tracing-for-linus | Ingo Molnar | 1 | -3/+50
2009-03-24 | Merge branches 'topic/slob/cleanups', 'topic/slob/fixes', 'topic/slub/core', ... | Pekka Enberg | 1 | -4/+17
2009-02-23 | slub: move min_partial to struct kmem_cache | David Rientjes | 1 | -1/+1
2009-02-20 | Merge branch 'for-ingo' of git://git.kernel.org/pub/scm/linux/kernel/git/penb... | Ingo Molnar | 1 | -3/+16
2009-02-20 | SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants | Christoph Lameter | 1 | -3/+16
2009-02-20 | SLUB: Do not pass 8k objects through to the page allocator | Pekka Enberg | 1 | -2/+2
2009-02-20 | SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants | Christoph Lameter | 1 | -3/+16
2008-12-30 | tracing/kmemtrace: normalize the raw tracer event to the unified tracing API | Frederic Weisbecker | 1 | -1/+1
2008-12-29 | kmemtrace: SLUB hooks. | Eduard - Gabriel Munteanu | 1 | -3/+50
2008-08-05 | SLUB: dynamic per-cache MIN_PARTIAL | Pekka Enberg | 1 | -0/+1
2008-07-26 | SL*B: drop kmem cache argument from constructor | Alexey Dobriyan | 1 | -1/+1
2008-07-04 | Christoph has moved | Christoph Lameter | 1 | -1/+1
2008-07-03 | slub: Do not use 192 byte sized cache if minimum alignment is 128 byte | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Fallback to minimal order during slab page allocation | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Update statistics handling for variable order slabs | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Add kmem_cache_order_objects struct | Christoph Lameter | 1 | -2/+10
2008-04-14 | slub: No need for per node slab counters if !SLUB_DEBUG | Christoph Lameter | 1 | -1/+1
2008-03-03 | slub: Fix up comments | Christoph Lameter | 1 | -2/+2
2008-02-14 | slub: Support 4k kmallocs again to compensate for page allocator slowness | Christoph Lameter | 1 | -3/+3
2008-02-14 | slub: Determine gfpflags once and not every time a slab is allocated | Christoph Lameter | 1 | -0/+1