[PATCH] mm: ptd_alloc take ptlock
Second step in pushing down the page_table_lock.

Remove the temporary bridging hack from __pud_alloc, __pmd_alloc and __pte_alloc: expect callers not to hold page_table_lock, whether it's on init_mm or a user mm; take page_table_lock internally to check whether a racing task has already allocated the table.

Convert their callers from common code. But avoid having to come back and change them again later: instead of merely moving the spin_lock(&mm->page_table_lock) down, switch over to the new macros pte_alloc_map_lock and pte_unmap_unlock, which encapsulate the mapping+locking and unlocking+unmapping together, and in the end may use alternatives to the mm-wide page_table_lock itself.

These callers all hold mmap_sem (some exclusively, some not), so at no level can a page table be whipped away from beneath them; and pte_alloc uses the "atomic" pmd_present to test whether it needs to allocate. It appears that on all arches we can safely descend without page_table_lock.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
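As a rough illustration of the two moving parts described above, here is a minimal sketch, not the exact diff: __pte_alloc taking page_table_lock internally to re-check for a racing allocation, and the pte_alloc_map_lock / pte_unmap_unlock macros that bundle mapping+locking and unlocking+unmapping. The helpers assumed (pte_alloc_one, pte_free, pmd_populate, pte_offset_map, pte_unmap, pmd_present) are the usual kernel primitives of that era.

	/* Sketch: allocate a pte page without the lock, then take the
	 * lock only to publish it, tolerating a racing winner. */
	int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long address)
	{
		struct page *new = pte_alloc_one(mm, address);
		if (!new)
			return -ENOMEM;

		spin_lock(&mm->page_table_lock);
		if (pmd_present(*pmd))		/* a racing task already allocated */
			pte_free(new);
		else
			pmd_populate(mm, pmd, new);
		spin_unlock(&mm->page_table_lock);
		return 0;
	}

	/* Sketch: map the page table and take the lock together; the lock
	 * pointer is handed back through ptlp, which is what later permits
	 * alternatives to the mm-wide page_table_lock. */
	#define pte_offset_map_lock(mm, pmd, address, ptlp)	\
	({							\
		spinlock_t *__ptl = &(mm)->page_table_lock;	\
		pte_t *__pte = pte_offset_map(pmd, address);	\
		*(ptlp) = __ptl;				\
		spin_lock(__ptl);				\
		__pte;						\
	})

	#define pte_unmap_unlock(pte, ptl)	do {		\
		spin_unlock(ptl);				\
		pte_unmap(pte);					\
	} while (0)

	/* Sketch: the "atomic" pmd_present test decides whether the
	 * out-of-line __pte_alloc is needed before mapping and locking. */
	#define pte_alloc_map_lock(mm, pmd, address, ptlp)	\
		((unlikely(!pmd_present(*(pmd))) && __pte_alloc(mm, pmd, address)) ? \
			NULL : pte_offset_map_lock(mm, pmd, address, ptlp))

A converted caller would then follow this pattern (hypothetical fragment, not from the diff):

	spinlock_t *ptl;
	pte_t *pte = pte_alloc_map_lock(mm, pmd, address, &ptl);
	if (!pte)
		return VM_FAULT_OOM;
	/* ... inspect and modify *pte under the lock ... */
	pte_unmap_unlock(pte, ptl);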
Showing 7 changed files with 90 additions and 135 deletions:
- fs/exec.c: 5 additions, 9 deletions
- include/linux/mm.h: 18 additions, 0 deletions
- kernel/fork.c: 0 additions, 2 deletions
- mm/fremap.c: 18 additions, 30 deletions
- mm/hugetlb.c: 8 additions, 4 deletions
- mm/memory.c: 32 additions, 72 deletions
- mm/mremap.c: 9 additions, 18 deletions