The filesystem layer expects pages in the block device's mapping not
to be in highmem (the mapping's gfp mask is set in bdget()), but CMA
can currently replace lowmem pages with highmem pages, leading to
crashes in filesystem code such as the oops below:
Unable to handle kernel NULL pointer dereference at virtual address 00000400
pgd = c0c98000
[00000400] *pgd=00c91831, *pte=00000000, *ppte=00000000
Internal error: Oops: 817 [#1] PREEMPT SMP ARM
CPU: 0 Not tainted (3.5.0-rc5+ #80)
PC is at __memzero+0x24/0x80
...
Process fsstress (pid: 323, stack limit = 0xc0cbc2f0)
Backtrace:
[<c010e3f0>] (ext4_getblk+0x0/0x180) from [<c010e58c>] (ext4_bread+0x1c/0x98)
[<c010e570>] (ext4_bread+0x0/0x98) from [<c0117944>] (ext4_mkdir+0x160/0x3bc)
r4:c15337f0
[<c01177e4>] (ext4_mkdir+0x0/0x3bc) from [<c00c29e0>] (vfs_mkdir+0x8c/0x98)
[<c00c2954>] (vfs_mkdir+0x0/0x98) from [<c00c2a60>] (sys_mkdirat+0x74/0xac)
r6:00000000 r5:c152eb40 r4:000001ff r3:c14b43f0
[<c00c29ec>] (sys_mkdirat+0x0/0xac) from [<c00c2ab8>] (sys_mkdir+0x20/0x24)
r6:beccdcf0 r5:00074000 r4:beccdbbc
[<c00c2a98>] (sys_mkdir+0x0/0x24) from [<c000e3c0>] (ret_fast_syscall+0x0/0x30)
Fix this by replacing highmem pages only with other highmem pages:
the replacement page is allocated from highmem only when the page
being replaced is itself a highmem page.
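The shape of the fix, as a sketch of the CMA migration target
allocator (treat the function name and surrounding context as
assumptions about the CMA code of this era):

    static struct page *
    __alloc_contig_migrate_alloc(struct page *page, unsigned long private,
                                 int **resultp)
    {
            gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;

            /* allocate the replacement from highmem only if the page
             * being migrated away is itself a highmem page */
            if (PageHighMem(page))
                    gfp_mask |= __GFP_HIGHMEM;

            return alloc_page(gfp_mask);
    }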
Change-Id: I6af2d509af48b5a586037be14bd3593b3f269d95
Reported-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
The sync_supers task currently wakes up periodically for superblock
writeback. This hurts power on battery-driven devices. This patch
turns the housekeeping timer into a deferrable timer so that it
does not fire when the system is truly idle.
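The change amounts to switching the timer initialisation, roughly
(assuming the stock sync_supers_timer setup of this era):

    /* a deferrable timer does not wake an otherwise idle CPU when it
     * expires; it is serviced on the next non-idle wakeup instead */
    init_timer_deferrable(&sync_supers_timer);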
CRs-Fixed: 353700
Change-Id: Idc7953b5d0580546808bc5832291ca570837ee7f
Signed-off-by: Yong Wang <yong.y.wang@intel.com>
Signed-off-by: Xia Wu <xia.wu@intel.com>
Signed-off-by: Krishna Vanka <kvanka@codeaurora.org>
alloc_contig_range() performs memory allocation, so it should also
maintain the correct memory watermark levels. This commit adds a call
to *_slowpath-style reclaim to grab enough pages so that the final
collection of contiguous pages from the freelists does not starve
the system.
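A condensed sketch of the approach; the helper names follow the CMA
patchset of the time and should be treated as assumptions:

    static int __reclaim_pages(struct zone *zone, gfp_t gfp_mask, int count)
    {
            enum zone_type high_zoneidx = gfp_zone(gfp_mask);
            struct zonelist *zonelist = node_zonelist(0, gfp_mask);

            /* ... temporarily raise the watermarks by 'count' pages ... */

            /* reclaim until the zone is back above its low watermark */
            while (!zone_watermark_ok(zone, 0, low_wmark_pages(zone),
                                      high_zoneidx, 0)) {
                    wake_all_kswapd(1, zonelist, high_zoneidx, zone_idx(zone));
                    if (!__perform_reclaim(gfp_mask, 1, zonelist, NULL))
                            break;          /* no progress, give up */
            }

            /* ... then restore the original watermark levels ... */
            return count;
    }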
Change-Id: I2d68d9ac2cfcd32ca6f515fc7e44e8d9d850dff1
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
There is a race between the min_free_kbytes sysctl, memory hotplug
and transparent hugepage support enablement. Memory hotplug uses a
zonelists_mutex to avoid a race when building zonelists. Reuse it to
serialise watermark updates.
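Roughly, the change wraps the watermark update in the existing mutex:

    void setup_per_zone_wmarks(void)
    {
            /* serialise watermark updates against zonelist rebuilds */
            mutex_lock(&zonelists_mutex);
            __setup_per_zone_wmarks();
            mutex_unlock(&zonelists_mutex);
    }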
Change-Id: I31786592a8cc03e579ee01d99d7eba76e926263f
[a.p.zijlstra@chello.nl: Older patch fixed the race with spinlock]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
The MIGRATE_CMA migration type has two main characteristics:
(i) only movable pages can be allocated from MIGRATE_CMA
pageblocks, and (ii) the page allocator will never change the
migration type of MIGRATE_CMA pageblocks.
This guarantees (to some degree) that a page in a MIGRATE_CMA
pageblock can always be migrated somewhere else (unless there is
no memory left in the system).
It is designed to be used for allocating big chunks (e.g. 10 MiB)
of physically contiguous memory. Once a driver requests
contiguous memory, pages from MIGRATE_CMA pageblocks may be
migrated away to create a contiguous block.
To minimise the number of migrations, the MIGRATE_CMA migration
type is the last type tried when the page allocator falls back to
other migration types.
Change-Id: I2bb0954de8be4f212b03dea0e5a508048684bda2
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
This commit adds the previously missing row for the MIGRATE_ISOLATE
type to the fallbacks array. It also changes the array traversal
logic slightly, making MIGRATE_RESERVE an end marker. The latter
change removes the implicit MIGRATE_UNMOVABLE from the end of each
row, which was read by the __rmqueue_fallback() function.
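With MIGRATE_RESERVE as the end marker, the traversal in
__rmqueue_fallback() becomes, roughly:

    for (i = 0;; i++) {
            migratetype = fallbacks[start_migratetype][i];

            /* MIGRATE_RESERVE is never a fallback target;
             * it marks the end of the row */
            if (migratetype == MIGRATE_RESERVE)
                    break;

            /* ... try to grab a page block of this migratetype ... */
    }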
Change-Id: Icdbbebb9eece2468c0b963964be9a4c579cbc775
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
This commit adds the alloc_contig_range() function, which tries
to allocate a given range of pages. It migrates any already
allocated pages that fall within the range, thereby freeing them.
Once all pages in the range are free, they are removed from the
buddy system and are thus allocated for the caller to use.
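A hypothetical usage sketch; the exact signature varied across
versions of the patchset, so the migratetype argument below is an
assumption:

    /* claim PFNs [start, end) as one physically contiguous chunk */
    ret = alloc_contig_range(start, end, MIGRATE_CMA);
    if (ret)
            return ret;
    /* ... use the pages ... */
    free_contig_range(start, end - start);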
Change-Id: I659b133b1c9991568bfb6bd09c7792e15f2a2bfb
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
This commit exports some of the functions from the compaction.c file
by adding their declarations to the internal.h header file so that
other mm-related code can use them.
This forces compaction.c to always be compiled (as opposed to being
compiled only if CONFIG_COMPACTION is defined), but to avoid
introducing code that the user did not ask for, part of compaction.c
is now wrapped in an #ifdef.
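An illustrative shape of what internal.h gains (the exact set of
exported declarations is an assumption):

    #if defined CONFIG_COMPACTION || defined CONFIG_CMA
    /* isolate and split free pages in the given PFN range */
    unsigned long
    isolate_freepages_range(unsigned long start_pfn, unsigned long end_pfn);
    #endif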
Change-Id: Id51bc882d1befd5afef2c8d1e5dcc7993496893c
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
This commit creates a map_pages() function which maps pages freed
using split_free_page(). This merely moves some code from
isolate_freepages() so that it can be reused in other places.
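Roughly, the moved code becomes:

    static void map_pages(struct list_head *list)
    {
            struct page *page;

            list_for_each_entry(page, list, lru) {
                    arch_alloc_page(page, 0);
                    kernel_map_pages(page, 1, 1);
            }
    }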
Change-Id: Ia67eaee00ca88a402f21a651de409bd93ae00312
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Fix a compiler warning about a variable that may be used uninitialized.
Change-Id: Ieedeb1cfb5a22eb5f671e6bfd1361315347a49af
Signed-off-by: Shashank Mittal <mittals@codeaurora.org>
Fix NR_IPI to be 7 instead of 6 because both googly and core add
an IPI.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Conflicts:
arch/arm/Kconfig
arch/arm/common/Makefile
arch/arm/include/asm/hardware/cache-l2x0.h
arch/arm/mm/cache-l2x0.c
arch/arm/mm/mmu.c
include/linux/wakelock.h
kernel/power/Kconfig
kernel/power/Makefile
kernel/power/main.c
kernel/power/power.h
In recent versions, the platform-specific physical offline returns
the number of bytes offlined, so a value of 0 indicates an error,
not success as in older versions. Also make sure that the memory
for the original memory resource nodes is not freed via kfree(),
as this memory was obtained from alloc_bootmem() very early in the
system's life.
Change-Id: Iffcdd8be4483e043d7605fce596ed438b15f3e02
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit 2421717cb10a06814d7bdb431485aa3a5e364f36)
- Add Low Power mode TAG.
- Add new APIs for mem low power modes.
- Create a new sysfs file for mem low power modes.
- Set SECTION_SIZE_BITS to 28.
- Change NPA_MEMORY_NODE_NAME to "/mem/apps/ddr_dpd".
- Fix the NPA node create function to do atomic_inc() in the
  atomic_dec_and_test() failure case.
Change-Id: Ia5cb18b99338c43165d5401e619c773cd8d6b3f6
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit b054046e708f8c5b044e76c2df6f72fd607be558)
Conflicts:
arch/arm/include/asm/setup.h
arch/arm/kernel/setup.c
arch/arm/mach-msm/include/mach/memory.h
arch/arm/mach-msm/memory.c
drivers/base/memory.c
include/linux/memory_hotplug.h
The file /sys/devices/system/memory/low_power now exists.
Writing a physical address into this file will put that
section of memory into a low power state (retaining contents)
if the architecture and platform support it.
Change-Id: I70592d37f1091a1b533f2374546ba67b50ea7d30
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit 1f4d1c8e295aaf66b23309caa0d03b09b7009b99)
Conflicts:
drivers/base/memory.c
include/linux/memory_hotplug.h
This provides the physical memory hotremove API (this is needed since our
physical memory removal is done by powering it off, which needs
to be requested by userspace, not by someone physically pulling memory
out of a machine).
Change-Id: Ic34426a91a1aac2bd4a45677ee00c2b7a3f84746
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit d651b6964bbb50d3c1fee6f76467a0f867286dfb)
If offlined_pages is greater than zone->present_pages, the
subtraction underflows. This change sets zone->present_pages
to 0 in that case.
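A sketch of the guard:

    /* zone->present_pages is unsigned; clamp rather than underflow */
    if (offlined_pages > zone->present_pages)
            zone->present_pages = 0;
    else
            zone->present_pages -= offlined_pages;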
Change-Id: I728e90c60fb7fc391de7b9c4828ab264ca38653b
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 80c201e25e8dbc00427b73d90b1527c356526442)
When page migration is unable to find free memory, it goes into an
infinite loop because the all_unreclaimable flag is never set.
This change allows memory offlining to fail gracefully when there
is not enough memory for page migration.
__GFP_NORETRY tells the page allocator not to retry.
__GFP_NOWARN suppresses the warnings printed when page allocation
fails.
__GFP_NOMEMALLOC prevents aggressive allocation beyond zone
watermarks.
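Illustratively, the migration target allocation then looks like
this (the exact gfp base and surrounding context are assumptions):

    page = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NORETRY |
                      __GFP_NOWARN | __GFP_NOMEMALLOC);
    if (!page)
            return -ENOMEM;     /* let offlining fail gracefully */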
Change-Id: I94dfd9059851c7b24953f44a4018a3bbac840688
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 1e6ec3c7e399aa84d29ced48b16a91acaba962f0)
When onlining, the onlined pages must be added to the kernel's
list of free pages using __free_page(). However, pages are not
added immediately; they are placed in a per-cpu queue that is
processed only once its size reaches a watermark. The last pages
in the queue may not be processed in time, and if you try to
offline that memory before they are processed, offlining will
always fail.
This fix calls drain_all_pages(), which processes every free page
in the queue. This ensures that all pages are accounted for when
onlining and nothing gets stuck in the queue.
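A minimal sketch of the fix in online_pages():

    /* flush every per-cpu free-page queue so all onlined pages
     * reach the buddy freelists and are accounted for */
    drain_all_pages();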
Change-Id: I54dbc0749556702407090e51ce9246abc5db7d1c
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit aa7e9dec5cfd309cb9eb6cb56a284a61607a925a)
This patch prevents memory hotplug from marking pages of the memmap that
only reference holes in the physical address space as private. Some
architectures (including ARM) attempt to free these unneeded parts of the
memmap, and attempting to free a private page will throw bad_page warnings
and tie up the memory indefinitely.
This patch also allows early_pfn_valid to be architecture specific and
defines it for ARM. The definition for ARM takes into account memory banks
and the holes in physical memory.
CRs-Fixed: 247010
Change-Id: Iad88d427b1b923a808b026c22d2899fa0483cb9e
Signed-off-by: jesset@codeaurora.org
(cherry picked from commit 0b610c773ad6281a3d217fbbe894b2476e9e71dd)
Conflicts:
arch/arm/mm/init.c
The transfer of ->flags causes some of the static mapping virtual
addresses to be prematurely freed (before the mapping is removed) because
VM_LAZY_FREE gets "set" if tmp->flags has VM_IOREMAP set. This might
cause subsequent vmalloc/ioremap calls to fail because it might allocate
one of the freed virtual address ranges that aren't unmapped.
va->flags has different types of flags from tmp->flags. If a region with
VM_IOREMAP set is registered with vm_area_add_early(), it will be removed
by __purge_vmap_area_lazy().
Fix vmalloc_init() to correctly initialize vmap_area for the given
vm_struct.
Also initialise va->vm. If it is not set, find_vm_area() for the early
vm regions will always fail.
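The shape of the fix in vmalloc_init(), roughly:

    for (tmp = vmlist; tmp; tmp = tmp->next) {
            va = kzalloc(sizeof(struct vmap_area), GFP_NOWAIT);
            va->flags = VM_VM_AREA;          /* not tmp->flags */
            va->va_start = (unsigned long)tmp->addr;
            va->va_end = va->va_start + tmp->size;
            va->vm = tmp;                    /* so find_vm_area() works */
            __insert_vmap_area(va);
    }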
Signed-off-by: KyongHo Cho <pullip.cho@samsung.com>
Cc: "Olav Haugan" <ohaugan@codeaurora.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
(cherry picked from commit fa002621c590c56e13cd86e944919a5771a6e03e)
commit 461ae488ec upstream.
Xen backend drivers (e.g., blkback and netback) would sometimes fail to
map grant pages into the vmalloc address space allocated with
alloc_vm_area(). The GNTTABOP_map_grant_ref would fail because Xen could
not find the page (in the L2 table) containing the PTEs it needed to
update.
(XEN) mm.c:3846:d0 Could not find L1 PTE for address fbb42000
netback and blkback were making the hypercall from a kernel thread where
task->active_mm != &init_mm and alloc_vm_area() was only updating the page
tables for init_mm. The usual method of deferring the update to the page
tables of other processes (i.e., after taking a fault) doesn't work as a
fault cannot occur during the hypercall.
This would work on some systems depending on what else was using vmalloc.
Fix this by reverting ef691947d8 ("vmalloc: remove vmalloc_sync_all()
from alloc_vm_area()") and add a comment to explain why it's needed.
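The reinstated call, roughly:

    struct vm_struct *alloc_vm_area(size_t size)
    {
            /* ... allocate and populate the area ... */

            /*
             * The address space may be handed to a hypercall before
             * it is ever touched, so a fault cannot be relied on to
             * update other page tables; sync them all here.
             */
            vmalloc_sync_all();

            return area;
    }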
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Keir Fraser <keir.xen@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit d63c8a029e509ad48ee9290874731789f9008537)
If FIX_MOVABLE_ZONE is enabled, we want a specific size and
location of ZONE_MOVABLE.
Change-Id: I0b858c7310cd328e1118abc9d5fe6f364bb4ffad
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit e6436864f939e77226c8e971610539649ed5f869)
Vmalloc will exit if the amount it needs to allocate is
greater than totalram_pages. Vmalloc cannot allocate
from the movable zone, so pages in the movable zone should
not be counted.
This change adds a new global variable: total_unmovable_pages.
It is calculated in init.c, based on totalram_pages minus
the pages in the movable zone. Vmalloc now looks at this new
global instead of totalram_pages.
total_unmovable_pages can be modified during memory hotplug.
If the zone being offlined/onlined is unmovable, it is modified
just as totalram_pages is. If the zone is movable, no change is
needed.
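A sketch of the vmalloc-side check:

    /* vmalloc cannot be satisfied from the movable zone, so size
     * the sanity check against unmovable pages only */
    if (!size || (size >> PAGE_SHIFT) > total_unmovable_pages)
            return NULL;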
Change-Id: Ie55c41051e9ad4b921eb04ecbb4798a8bd2344d6
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 59f9f1c9ae463a3d4499cd9353619f8b1993371b)
Conflicts:
arch/arm/mm/init.c
mm/memory_hotplug.c
mm/page_alloc.c
mm/vmalloc.c
z->lowmem_reserve[classzone_idx] is an unsigned long but
free_pages and min are longs. If free_pages is
negative, the function will incorrectly return true
because it will treat the negative long as a large,
positive unsigned long.
This change casts z->lowmem_reserve to a long and
fixes a typo in the comment.
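The corrected comparison, roughly:

    /* the cast keeps the arithmetic signed, so a negative
     * free_pages correctly fails the check */
    if (free_pages <= min + (long)z->lowmem_reserve[classzone_idx])
            return false;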
Change-Id: Icada1fa5ca650fbcdb0656f637adbb98f223eec5
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 9f41da81017657a194a4e145bab337f13a4d7fd9)
Carry forward the fix from the 2.6.29b kernel to handle the
NR_SECTION_ROOTS == 0 case when turning on SPARSEMEM on MSM.
Also fix Linux coding style issues by replacing
"foo* bar" with "foo *bar".
Change-Id: I77f259f3d62980a37f4ae7c4680a3617c9b4f563
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
(cherry picked from commit 1ff28b9751596dbaf55a3e40f593d3013380a3ac)
Add a new function, memblock_overlaps_memory(), to check if a
region overlaps with a memory bank. This will be used by
peripheral loader code to detect when kernel memory would be
overwritten.
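A sketch of the new helper; the implementation in terms of
memblock_overlaps_region() is an assumption:

    int memblock_overlaps_memory(phys_addr_t base, phys_addr_t size)
    {
            /* a non-negative index means [base, base + size)
             * intersects a registered memory bank */
            return memblock_overlaps_region(&memblock.memory,
                                            base, size) >= 0;
    }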
Change-Id: I851f8f416a0f36e85c0e19536b5209f7d4bd431c
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
(cherry picked from commit 1aa4e5a974b3087d29510802810170c071df8546)
Conflicts:
include/linux/memblock.h
Add ARM to the supported list of architectures for MEMORY_HOTPLUG. For
ARM, the selection of MEMORY_HOTPLUG/REMOVE is specific to the sub-arch
and has to be explicitly enabled by the sub-arch.
Signed-off-by: Bryan Huntsman <bryanh@codeaurora.org>
(cherry picked from commit 45f580d9c36b93204882dffc6fb9f4a254c3d34a)
Occasionally, testing memcg's move_charge_at_immigrate on rc7 shows
a flurry of hundreds of warnings at kernel/res_counter.c:96, where
res_counter_uncharge_locked() does WARN_ON(counter->usage < val).
The first trace of each flurry implicates __mem_cgroup_cancel_charge()
of mc.precharge, and an audit of mc.precharge handling points to
mem_cgroup_move_charge_pte_range()'s THP handling in commit 12724850e8
("memcg: avoid THP split in task migration").
Checking !mc.precharge is good everywhere else, when a single page is to
be charged; but here the "mc.precharge -= HPAGE_PMD_NR" likely to
follow, is liable to result in underflow (a lot can change since the
precharge was estimated).
Simply check against HPAGE_PMD_NR: there's probably a better
alternative, trying precharge for more, splitting if unsuccessful; but
this one-liner is safer for now - no kernel/res_counter.c:96 warnings
seen in 26 hours.
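The one-liner in context, as a sketch:

    /* require a full THP's worth of precharge before committing */
    if (mc.precharge < HPAGE_PMD_NR) {
            spin_unlock(&vma->vm_mm->page_table_lock);
            return 0;
    }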
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
I found some kernel messages such as:
SLUB raid5-md127: kmem_cache_destroy called for cache that still has objects.
Pid: 6143, comm: mdadm Tainted: G O 3.4.0-rc6+ #75
Call Trace:
kmem_cache_destroy+0x328/0x400
free_conf+0x2d/0xf0 [raid456]
stop+0x41/0x60 [raid456]
md_stop+0x1a/0x60 [md_mod]
do_md_stop+0x74/0x470 [md_mod]
md_ioctl+0xff/0x11f0 [md_mod]
blkdev_ioctl+0xd8/0x7a0
block_ioctl+0x3b/0x40
do_vfs_ioctl+0x96/0x560
sys_ioctl+0x91/0xa0
system_call_fastpath+0x16/0x1b
Then using kmemleak I found these messages:
unreferenced object 0xffff8800b6db7380 (size 112):
comm "mdadm", pid 5783, jiffies 4294810749 (age 90.589s)
hex dump (first 32 bytes):
01 01 db b6 ad 4e ad de ff ff ff ff ff ff ff ff .....N..........
ff ff ff ff ff ff ff ff 98 40 4a 82 ff ff ff ff .........@J.....
backtrace:
kmemleak_alloc+0x21/0x50
kmem_cache_alloc+0xeb/0x1b0
kmem_cache_open+0x2f1/0x430
kmem_cache_create+0x158/0x320
setup_conf+0x649/0x770 [raid456]
run+0x68b/0x840 [raid456]
md_run+0x529/0x940 [md_mod]
do_md_run+0x18/0xc0 [md_mod]
md_ioctl+0xba8/0x11f0 [md_mod]
blkdev_ioctl+0xd8/0x7a0
block_ioctl+0x3b/0x40
do_vfs_ioctl+0x96/0x560
sys_ioctl+0x91/0xa0
system_call_fastpath+0x16/0x1b
This bug was introduced by commit a8364d5555 ("slub: only IPI CPUs that
have per cpu obj to flush"), which did not include checks for per cpu
partial pages being present on a cpu.
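The fix amounts to also checking per-cpu partial slabs in the
flush IPI predicate, roughly:

    static bool has_cpu_slab(int cpu, void *info)
    {
            struct kmem_cache *s = info;
            struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);

            /* was: return c->page != NULL; -- per-cpu partial
             * slabs were missed */
            return c->page || c->partial;
    }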
Signed-off-by: majianpeng <majianpeng@gmail.com>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Tested-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Why is there less MemFree than there used to be? It perturbed a test,
so I've just been bisecting linux-next, and now find the offender went
upstream yesterday.
Commit 93278814d3 "mm: fix division by 0 in percpu_pagelist_fraction()"
mistakenly initialized percpu_pagelist_fraction to the sysctl's minimum 8,
which leaves 1/8th of memory on percpu lists (on each cpu??); but most of
us expect it to be left unset at 0 (and it's not then used as a divisor).
MemTotal: 8061476kB 8061476kB 8061476kB 8061476kB 8061476kB 8061476kB
Repetitive test with percpu_pagelist_fraction 8:
MemFree: 6948420kB 6237172kB 6949696kB 6840692kB 6949048kB 6862984kB
Same test with percpu_pagelist_fraction back to 0:
MemFree: 7945000kB 7944908kB 7948568kB 7949060kB 7948796kB 7948812kB
Signed-off-by: Hugh Dickins <hughd@google.com>
[ We really should fix the crazy sysctl interface too, but that's a
separate thing - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge misc fixes from Andrew Morton.
* emailed from Andrew Morton <akpm@linux-foundation.org>: (8 patches)
MAINTAINERS: add maintainer for LED subsystem
mm: nobootmem: fix sign extend problem in __free_pages_memory()
drivers/leds: correct __devexit annotations
memcg: free spare array to avoid memory leak
namespaces, pid_ns: fix leakage on fork() failure
hugetlb: prevent BUG_ON in hugetlb_fault() -> hugetlb_cow()
mm: fix division by 0 in percpu_pagelist_fraction()
proc/pid/pagemap: correctly report non-present ptes and holes between vmas
Commit 66aebce747 ("hugetlb: fix race condition in hugetlb_fault()")
added code to avoid a race condition by elevating the page refcount in
hugetlb_fault() while calling hugetlb_cow().
However, one code path in hugetlb_cow() includes an assertion that the
page count is 1, whereas it may now also have the value 2 in this path.
The consensus is that this BUG_ON has served its purpose, so rather than
extending it to cover both cases, we just remove it.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Hillf Danton <dhillf@gmail.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: <stable@vger.kernel.org> [3.0.29+, 3.2.16+, 3.3.3+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
percpu_pagelist_fraction_sysctl_handler() has only considered -EINVAL as
a possible error from proc_dointvec_minmax().
If any other error is returned, it would proceed to divide by zero since
percpu_pagelist_fraction wasn't getting initialized at any point. For
example, writing 0 bytes into the proc file would trigger the issue.
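The handler's error check then becomes, roughly:

    ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
    if (!write || ret < 0)      /* previously only ret == -EINVAL */
            return ret;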
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kmemleak tracks percpu allocations via a specific API, and the
originally allocated areas must be removed from kmemleak (via
kmemleak_free()). The code was already doing this for SMP systems;
this commit does the same for the UP code path.
Reported-by: Sami Liedes <sami.liedes@iki.fi>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
pcpu_embed_first_chunk() allocates memory for each node, copies
percpu data and frees unused portions of it before proceeding to
the next group. This assumes that allocations for different nodes
don't overlap; however, depending on memory topology, the bootmem
allocator may end up allocating memory from a different node than
the requested one, which may overlap with the portion freed from
one of the previous percpu areas. This leads to percpu groups for
different nodes overlapping, which is a serious bug.
This patch separates out copy & partial free from the allocation loop
such that all allocations are complete before partial frees happen.
This also fixes overlapping frees which could happen on allocation
failure path - out_free_areas path frees whole groups but the groups
could have portions freed at that point.
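A condensed pseudocode sketch of the restructuring (the helper
names are invented for illustration):

    /* pass 1: allocate every group's area first */
    for (group = 0; group < ai->nr_groups; group++)
            areas[group] = alloc_fn(group_node(group), group_size(group));

    /* pass 2: only now copy data and trim unused tails, so no
     * later allocation can land in a region freed here */
    for (group = 0; group < ai->nr_groups; group++) {
            copy_static_percpu_data(areas[group]);
            free_unused_tail(areas[group]);
    }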
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: stable@vger.kernel.org
Reported-by: "Pavel V. Panteleev" <pp_84@mail.ru>
Tested-by: "Pavel V. Panteleev" <pp_84@mail.ru>
LKML-Reference: <E1SNhwY-0007ui-V7.pp_84-mail-ru@f220.mail.ru>
Pull two percpu fixes from Tejun Heo:
"One adds missing KERN_CONT on split printk()s and the other makes
the percpu allocator avoid using PMD_SIZE as atom_size on x86_32.
Using PMD_SIZE led to vmalloc area exhaustion on certain
configurations (x86_32 android) and the only cost of using PAGE_SIZE
instead is static percpu area not being aligned to large page
mapping."
* 'for-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu, x86: don't use PMD_SIZE as embedded atom_size on 32bit
percpu: use KERN_CONT in pcpu_dump_alloc_info()
Merge fixes from Andrew Morton:
"13 fixes. The acerhdf patches aren't (really) fixes. But they've
been stuck in my tree for up to two years, sent to Matthew multiple
times and the developers are unhappy."
* emailed from Andrew Morton <akpm@linux-foundation.org>: (13 patches)
mm: fix NULL ptr dereference in move_pages
mm: fix NULL ptr dereference in migrate_pages
revert "proc: clear_refs: do not clear reserved pages"
drivers/rtc/rtc-ds1307.c: fix BUG shown with lock debugging enabled
arch/arm/mach-ux500/mbox-db5500.c: world-writable sysfs fifo file
hugetlbfs: lockdep annotate root inode properly
acerhdf: lowered default temp fanon/fanoff values
acerhdf: add support for new hardware
acerhdf: add support for Aspire 1410 BIOS v1.3314
fs/buffer.c: remove BUG() in possible but rare condition
mm: fix up the vmscan stat in vmstat
epoll: clear the tfile_check_list on -ELOOP
mm/hugetlb: fix warning in alloc_huge_page/dequeue_huge_page_vma