- Jan 31, 2014
Masanari Iida authored
This patch fixes the following errors from "make htmldocs":
  Warning(/mm/slab.c:1956): No description found for parameter 'page'
  Warning(/mm/slab.c:1956): Excess function parameter 'slabp' description in 'slab_destroy'
The kernel-doc named the parameter "slabp" instead of "page".
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
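For reference, the kernel-doc shape the fix restores looks roughly like this (the description text is paraphrased, not the exact comment in mm/slab.c):

    /**
     * slab_destroy - destroy and release all objects in a slab
     * @cachep: the cache the slab belongs to
     * @page: the page that carries the slab being destroyed
     *
     * Destroys all the objects in a slab and releases the memory
     * back to the system.
     */
    static void slab_destroy(struct kmem_cache *cachep, struct page *page);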
-
- Nov 13, 2013
Qiang Huang authored
The relationship with memcg cannot be seen from the parameters, so a name containing memcg_idx is more reasonable.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Glauber Costa <glommer@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Oct 30, 2013
Joonsoo Kim authored
There is no 'struct freelist', yet the code declares pointers to 'struct freelist'. The compiler does not complain because a pointer to an undeclared (incomplete) type is legal C, and the code works fine, but it is better to fix it.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
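To see why the compiler stays quiet, consider this minimal example (hypothetical, not from the kernel tree):

    /* 'struct freelist' is never defined anywhere, yet this compiles:
     * a pointer to an incomplete type is legal C. Only dereferencing
     * it, or asking for its size, would be an error. */
    struct freelist *fl;   /* misleading but legal; void * states the intent honestly */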
-
Joonsoo Kim authored
Now that struct page is used for slab management, we should not call kmemleak_scan_area(), since struct page is not an object tracked by kmemleak. Without this patch, a flood of kmemleak warnings is printed when CONFIG_DEBUG_KMEMLEAK is enabled.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
- Oct 24, 2013
Joonsoo Kim authored
Now 'bufctl' is no longer a proper name for this array, so rename it.
Acked-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
Now virt_to_page(page->s_mem) is the same as the page itself, because the slab uses struct page for management. So remove the useless statement.
Acked-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
Now only a few fields are left in struct slab, so we can overload them onto struct page. This saves some memory and reduces the cache footprint. After this change, slabp_cache and slab_size are no longer related to a struct slab, so rename them to freelist_cache and freelist_size. These changes are purely mechanical; there is no functional change.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
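A minimal sketch of the resulting layout, with simplified field names; in the real include/linux/mm_types.h these members sit in unions alongside many others:

    /* Slab-management view of struct page after this change (illustrative). */
    struct page_slab_view {
            void *freelist;                 /* array of free-object indices */
            void *s_mem;                    /* address of the first object in the slab */
            unsigned int active;            /* number of objects currently in use */
            struct kmem_cache *slab_cache;  /* owning cache */
    };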
-
Joonsoo Kim authored
Now the 'free' field of struct slab carries the same information as 'inuse', so remove both and replace them with 'active'.
Acked-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
It's useless now, so remove it.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
Now that the management of the slab's free objects has changed, there is no need for the special values BUFCTL_END, BUFCTL_FREE and BUFCTL_ACTIVE. So remove them.
Acked-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
The slab's current free-object management is odd: to get a free object it touches a random position in the kmem_bufctl_t array. See the following example.

  struct slab's free = 6
  kmem_bufctl_t array: 1 END 5 7 0 4 3 2

To get free objects, we walk this array in the pattern 6 -> 3 -> 7 -> 2 -> 5 -> 4 -> 0 -> 1 -> END. With many objects the array grows large and these accesses no longer stay in the same cache line, which is bad for performance. We can achieve the same thing far more simply, like a stack: all we need is to maintain a stack top pointing at the next free object, for which the 'free' field of struct slab is reused. To get an object, take the entry at the stack top and move the top. That's all. This method is already used for array_cache management. With it, the access pattern becomes:

  struct slab's free = 0
  kmem_bufctl_t array: 6 3 7 2 5 4 0 1

and free objects are taken in order 0 -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7. This helps the cache line footprint when a slab has many objects and, in addition, makes the code much simpler.
Acked-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
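A minimal sketch of the stack scheme described above, with illustrative names rather than the exact kernel identifiers:

    /* freelist[free..num-1] hold the indices of the free objects. */
    struct slab_sketch {
            unsigned int *freelist;  /* array of object indices */
            unsigned int free;       /* stack top */
            unsigned int num;        /* total objects in this slab */
    };

    /* Allocation: pop the index at the stack top. */
    static unsigned int slab_get_obj_idx(struct slab_sketch *s)
    {
            return s->freelist[s->free++];
    }

    /* Free: push the index back onto the stack. */
    static void slab_put_obj_idx(struct slab_sketch *s, unsigned int idx)
    {
            s->freelist[--s->free] = idx;
    }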
-
Joonsoo Kim authored
If we use the 'struct page' of the first page as the 'struct slab', there is no advantage to not using __GFP_COMP. So use the __GFP_COMP flag in all cases.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
This is a trivial change: just use the well-defined macro.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
With build-time size checking, we can overload the RCU head over the LRU of struct page to free a slab's pages in RCU context. This really helps in overloading struct slab over struct page, which eventually reduces the memory usage and cache footprint of the SLAB.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
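A hedged sketch of the two pieces this implies: a compile-time check that the RCU head fits in the LRU's storage, and an RCU callback that recovers the page with container_of(). Function names here are illustrative:

    #include <linux/bug.h>
    #include <linux/mm_types.h>
    #include <linux/rcupdate.h>

    /* Build-time guarantee that rcu_head can overlay page->lru's space. */
    static inline void rcu_overlay_size_check(void)
    {
            BUILD_BUG_ON(sizeof(struct rcu_head) > sizeof(struct list_head));
    }

    /* RCU callback: recover the page that carries the embedded rcu_head. */
    static void kmem_rcu_free_sketch(struct rcu_head *head)
    {
            struct page *page = container_of(head, struct page, rcu_head);

            /* ... hand the slab's pages back to the page allocator ... */
            (void)page;
    }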
-
Joonsoo Kim authored
We can get cachep from the page, so the cachep field in struct slab_rcu can be removed.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
We can get nodeid using address translation, so this field is not useful. Therefore, remove it.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
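The translation in question is a one-liner; a hedged sketch:

    #include <linux/mm.h>

    /* Recover the NUMA node from the object's page instead of a stored field. */
    static int slab_nid_sketch(const void *objp)
    {
            return page_to_nid(virt_to_page(objp));
    }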
-
Joonsoo Kim authored
Now there is no user of colouroff, so remove it.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
-
Joonsoo Kim authored
It is more understandable for kmem_getpages() to return a struct page. With this, we can also drop one translation from virtual address to page, which produces better code than before. Below is the size change from this patch:

  * Before
     text    data     bss     dec     hex filename
    22123   23434       4   45561    b1f9 mm/slab.o
  * After
     text    data     bss     dec     hex filename
    22074   23434       4   45512    b1c8 mm/slab.o

This also helps a following patch remove struct slab's colouroff.
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
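A hedged sketch of the interface change (not the exact upstream diff): the allocation helper hands back the struct page, and callers translate to a virtual address only when they need one:

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Allocate the slab's backing pages and return the page itself. */
    static struct page *kmem_getpages_sketch(struct kmem_cache *cachep,
                                             gfp_t flags, int nodeid)
    {
            struct page *page;

            page = alloc_pages_node(nodeid, flags, cachep->gfporder);
            if (!page)
                    return NULL;
            return page;    /* callers use page_address(page) if needed */
    }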
-
Joonsoo Kim authored
We check pfmemalloc per slab, not per page; you can see this in is_slab_pfmemalloc(). So the other pages of a slab do not need pfmemalloc set or cleared, and we should check the pfmemalloc flag of the first page. The current implementation does not do that: virt_to_head_page(obj) just returns the 'struct page' of that object, not the slab's first page, since SLAB does not use __GFP_COMP when CONFIG_MMU is set. To get the 'struct page' of the first page, we first get the slab and go through virt_to_head_page(slab->s_mem).
Acked-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
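A hedged sketch of the corrected check, assuming the struct slab fields of that era:

    #include <linux/mm.h>
    #include <linux/page-flags.h>

    /* Reach the slab's first page via s_mem; virt_to_head_page() on the
     * object itself would not find the head page without __GFP_COMP. */
    static bool slab_pfmemalloc_sketch(const struct slab *slabp)
    {
            struct page *first = virt_to_head_page(slabp->s_mem);

            return PageSlabPfmemalloc(first);
    }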
-
- Jul 15, 2013
Paul Gortmaker authored
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. This removes all the uses of the __cpuinit macros from C files in the core kernel directories (kernel, init, lib, mm, and include) that don't really have a specific maintainer.
[1] https://lkml.org/lkml/2013/5/20/589
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- Jul 08, 2013
Wanpeng Li authored
Give s_next and s_stop slab-specific names instead of exporting "s_next" and "s_stop".
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
- Jul 07, 2013
Christoph Lameter authored
Some architectures (e.g. powerpc built with CONFIG_PPC_256K_PAGES=y CONFIG_FORCE_MAX_ZONEORDER=11) get PAGE_SHIFT + MAX_ORDER > 26. In 3.10 kernels, CONFIG_LOCKDEP=y with PAGE_SHIFT + MAX_ORDER > 26 makes init_lock_keys() dereference beyond kmalloc_caches[26]. This leads to an unbootable system (kernel panic while initializing SLAB) if one of kmalloc_caches[26...PAGE_SHIFT+MAX_ORDER-1] is not NULL. Fix this by making sure that init_lock_keys() does not dereference beyond kmalloc_caches[26].
Signed-off-by: Christoph Lameter <cl@linux.com>
Reported-by: Tetsuo Handa <penguin-kernel@I-Love.SAKURA.ne.jp>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: <stable@vger.kernel.org> [3.10.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
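A hedged sketch of the guard; the loop details are an assumption, not the exact upstream diff:

    /* Never walk past index 26, the last entry the lock-key table covers,
     * even when PAGE_SHIFT + MAX_ORDER exceeds it. */
    static void init_lock_keys_sketch(void)
    {
            int i;

            for (i = 1; i <= 26; i++) {
                    if (!kmalloc_caches[i])
                            continue;
                    /* ... assign lockdep classes for kmalloc_caches[i] ... */
            }
    }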
-
Wanpeng Li authored
This patch shares s_next and s_stop between slab and slub.
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Wanpeng Li authored
drain_freelist() is called to drain the slabs_free lists on cache reap, cache shrink, in the memory hotplug callback, etc. Its tofree parameter should be the number of slabs to free, not the number of slab objects to free. This patch fixes the callers that pass a count of objects, making sure they pass a count of slabs.
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
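In caller terms the fix amounts to a units conversion; a hedged sketch using cachep->num (objects per slab):

    #include <linux/kernel.h>

    /* drain_freelist() counts slabs, so convert an object count first. */
    static int drain_by_object_count(struct kmem_cache *cachep,
                                     struct kmem_cache_node *n, int nr_objects)
    {
            int slabs = DIV_ROUND_UP(nr_objects, cachep->num);

            return drain_freelist(cachep, n, slabs);
    }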
-
- Jun 08, 2013
Zhouping Liu authored
After several fixes to kmem_cache_alloc_node(), its comment ended up split apart. This patch moves it back on top of the kmem_cache_alloc_node() definition.
Signed-off-by: Zhouping Liu <zliu@redhat.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
- May 01, 2013
Aaron Tomlin authored
If the nodeid is > num_online_nodes(), this can cause an Oops and a panic(). The purpose of this patch is to assert when this condition is true, to aid debugging efforts rather than chasing a random NULL pointer dereference or page fault. This patch is in response to BZ#42967 [1]. VM_BUG_ON is used so the check only runs when CONFIG_DEBUG_VM is set, given that ____cache_alloc_node() is a hot code path.
[1]: https://bugzilla.kernel.org/show_bug.cgi?id=42967
Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: Rafael Aquini <aquini@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
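The assertion itself is compact; a hedged sketch of its placement:

    #include <linux/mmdebug.h>
    #include <linux/nodemask.h>

    /* Compiled in only with CONFIG_DEBUG_VM, so the hot path is untouched
     * otherwise; catches a bad nodeid before it becomes a random fault. */
    static void check_nodeid_sketch(int nodeid)
    {
            VM_BUG_ON(nodeid > num_online_nodes());
    }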
-
- Apr 29, 2013
Joe Perches authored
Use the new vsprintf extension to avoid any possible message interleaving.
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- Feb 06, 2013
Christoph Lameter authored
Variables were not properly converted and the conversion caused a naming conflict.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
- Feb 01, 2013
Christoph Lameter authored
Put the definitions for the kmem_cache_node structures together so that we have one structure. That will allow us to create more common fields in the future, which could yield more opportunities to share code.
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Christoph Lameter authored
The list3 or l3 pointers point to per-node structures. Reflect that in the names of the variables used.
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Christoph Lameter authored
Extract the optimized lookup functions from slub and put them into slab_common.c. Then make slab use these functions as well. Joonsoo notes that this fixes some issues with constant folding, which also reduces the code size for slub. https://lkml.org/lkml/2012/10/20/82
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
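A hedged sketch of the shape of such a lookup: when the size is a compile-time constant, the chain folds to a constant index, which is what enables the constant folding noted above:

    /* Map an allocation size to a kmalloc cache index (truncated sketch;
     * the real table continues up to KMALLOC_SHIFT_HIGH). */
    static __always_inline int kmalloc_index_sketch(size_t size)
    {
            if (size <= 8)
                    return 3;
            if (size <= 16)
                    return 4;
            if (size <= 32)
                    return 5;
            if (size <= 64)
                    return 6;
            if (size <= 128)
                    return 7;
            return -1;      /* larger sizes elided in this sketch */
    }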
-
Christoph Lameter authored
The kmalloc array is created in similar ways in both SLAB and SLUB. Create a common function and have both allocators call that function.
V1->V2: Whitespace cleanup
Reviewed-by: Glauber Costa <glommer@parallels.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Christoph Lameter authored
Have a common definition of the kmalloc cache arrays in SLAB and SLUB.
Acked-by: Glauber Costa <glommer@parallels.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Christoph Lameter authored
Have a common naming between both slab caches for future changes.
Acked-by: Glauber Costa <glommer@parallels.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Christoph Lameter authored
Rename the structure used for slab's per-node data so that its name expresses that fact.
Acked-by: Glauber Costa <glommer@parallels.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Christoph Lameter authored
Make slab use the common functions. We can get rid of a lot of old ugly stuff as a result, among it the sizes array, the weird include/linux/kmalloc_sizes.h file, and some pretty bad #include statements in slab_def.h. The one thing that is different in slab is that the 32-byte cache will also be created for arches that have page sizes larger than 4K. There are numerous smaller allocations that SLOB and SLUB can handle better because of their support for smaller allocation sizes, so let's keep the 32-byte slab also for arches with > 4K pages.
Reviewed-by: Glauber Costa <glommer@parallels.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
- Jan 21, 2013
Rusty Russell authored
Fix up all callers as they were before, with one change: an unsigned module taints the kernel but doesn't turn off lockdep.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- Dec 19, 2012
Glauber Costa authored
This patch clarifies two aspects of cache attribute propagation. First, the expected context for the for_each_memcg_cache macro in memcontrol.h. The usages already in the codebase are safe: in mm/slub.c it is trivially so, because the lock is acquired right before the loop; in mm/slab.c it is less so, since the lock is acquired by an outer function a few steps back in the stack, so a VM_BUG_ON() is added to make sure it is indeed safe. Second, a comment is added to detail why we return the value of the parent cache and ignore the children's when we propagate the attributes.
Signed-off-by: Glauber Costa <glommer@parallels.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Glauber Costa authored
SLAB lets us tune a particular cache's behavior with tunables. When creating a new memcg cache copy, we'd like to preserve any tunables the parent cache already had. This could be done with an explicit call to do_tune_cpucache() after the cache is created, but that is not very convenient now that the caches are created from common code, since this function is SLAB-specific. Another way is to take advantage of the fact that do_tune_cpucache() is always called from enable_cpucache(), which runs at cache initialization: we can just preset the values, and then things work as expected. It can also happen that a root cache has its tunables updated during normal system operation; in this case, we propagate the change to all caches that are already active. This change requires moving the assignment of root_cache in memcg_params a bit earlier: it needs to be set already (which memcg_kmem_register_cache will do) by the time we reach __kmem_cache_create().
Signed-off-by: Glauber Costa <glommer@parallels.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Frederic Weisbecker <fweisbec@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: JoonSoo Kim <js1304@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Rik van Riel <riel@redhat.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Glauber Costa authored
Implement destruction of memcg caches. Right now, only caches whose reference counter is the last remaining one are deleted. If any other reference counters are around, we just leave the caches lying around until they go away; when that happens, a destruction function is called from the cache code. Caches are only destroyed in process context, so in the general case we queue them up for later processing.
Signed-off-by: Glauber Costa <glommer@parallels.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Frederic Weisbecker <fweisbec@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: JoonSoo Kim <js1304@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Rik van Riel <riel@redhat.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-