Commit Graph

1176 Commits

Ajay Dudani
da83edd84e Revert "Revert "ARM: 7169/1: topdown mmap support""
This reverts commit f27e4f0e730b99ca4dabed0b408d96dbf73a8fac.

With 01b1dee in system/core to set ADDR_COMPAT_LAYOUT, this is
not needed any longer.

Bug: 8470684
Signed-off-by: Ajay Dudani <adudani@codeaurora.org>
Acked-by: Laura Abbot <lauraa@codeaurora.org>
2013-04-20 13:53:55 -07:00
Duy Truong
04e554807c Update copyright to The Linux Foundation
Change-Id: Ibead64ce2e901dede2ddd1b86088b88f2350ce92
Signed-off-by: Duy Truong <dtruong@codeaurora.org>
2013-03-15 17:07:39 -07:00
Pushkar Joshi
92bb1ac92f tracing: ftrace events for user faults and undefined instructions
New ftrace events (user_fault and undef_instr) for data, prefetch
or undefined instruction aborts. The new ftrace events are under
events/exception.

Change-Id: Iea328b71a1f623861cac9b45d858c3bbe09e1b82
Signed-off-by: Pushkar Joshi <pushkarj@codeaurora.org>
2013-03-15 17:05:57 -07:00
Laura Abbott
a5708bc035 arm: dma: Allow CMA pages to not have a kernel mapping.
Currently, there are use cases where not having any kernel
mapping is required; if the CMA memory needs to be used as
a pool which can have both cached and uncached mappings we
need to remove the mapping to avoid the multiple mapping
problem. Extend the dma APIs to honor DMA_ATTR_NO_KERNEL_MAPPING
with CMA. This doesn't end up saving any virtual address space,
but the kernel mapping will still not be present.

Change-Id: I64d21250abbe615c43e2b5b1272ee2b6d106705a
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:21 -08:00
Laura Abbott
4d6e1c5965 arm: dma: Expand the page protection attributes
Currently, the decision on which page protection to use
is limited to writecombine and coherent. Expand it to include
strongly ordered memory and non-consistent memory.

Change-Id: I7585fe3ce804cf321a5585c3d93deb7a7c95045c
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
Marek Szyprowski
d1fd8d785d ARM: dma-mapping: fix buffer chunk allocation order
IOMMU-aware dma_alloc_attrs() implementation allocates buffers in
power-of-two chunks to improve performance and take advantage of large
page mappings provided by some IOMMU hardware. However, due to a subtle
bug, the current code allocated those chunks in smallest-to-largest
order, which completely killed all the advantages of using
larger-than-page chunks. If a 4KiB chunk was mapped as the first chunk,
the consecutive chunks were not aligned correctly to the power-of-two
matching their size, and IOMMU drivers were not able to use internal
mappings of any size other than 4KiB (the greatest common divisor of
alignment and chunk size).

This patch fixes this issue by changing to the correct largest-to-smallest
chunk size allocation sequence.

Change-Id: I5cc9c12322e832951faf3bba6387946c890e0ed4
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
Sachin Kamat
7a2356ef59 ARM: dma-mapping: Add missing static storage class specifier
Fixes the following sparse warnings:
arch/arm/mm/dma-mapping.c:231:15: warning: symbol 'consistent_base' was not
declared. Should it be static?
arch/arm/mm/dma-mapping.c:326:8: warning: symbol 'coherent_pool_size' was not
declared. Should it be static?

Change-Id: I90e2ccdc4d132a37ebcd8ae7a8441ad3fede55bf
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
Vitaly Andrianov
2382e50268 ARM: dma-mapping: use PMD size for section unmap
The dma_contiguous_remap() function clears existing section maps using
the wrong size (PGDIR_SIZE instead of PMD_SIZE).  This is a bug which
does not affect non-LPAE systems, where PGDIR_SIZE and PMD_SIZE are the same.
On LPAE systems, however, this bug causes the kernel to hang at this point.

This fix has been tested on both LPAE and non-LPAE kernel builds.

Change-Id: I63650057864907f1a2d8eed7257665cb2f648bbb
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:19 -08:00
Marek Szyprowski
4f6b3dc624 ARM: dma-mapping: add support for IOMMU mapper
This patch adds a complete implementation of the DMA-mapping API for
devices which have IOMMU support.

This implementation tries to optimize dma address space usage by remapping
all possible physical memory chunks into a single dma address space chunk.

DMA address space is managed on top of the bitmap stored in the
dma_iommu_mapping structure stored in device->archdata. Platform setup
code has to initialize the parameters of the dma address space (base
address, size, allocation precision order) with the
arm_iommu_create_mapping() function.
To reduce the size of the bitmap, all allocations are aligned to the
specified order of base 4 KiB pages.

The dma_alloc_* functions allocate physical memory in chunks, each via
the alloc_pages() function, to avoid failure when physical memory is
fragmented. In the worst case the allocated buffer is composed of 4 KiB
page chunks.

The dma_map_sg() function minimizes the total number of dma address
space chunks by merging physical memory chunks into one larger dma
address space chunk. If the requested chunk (scatter list entry)
boundaries match physical page boundaries, most dma_map_sg() calls will
result in creating only one chunk in the dma address space.

dma_map_page() simply creates a mapping for the given page(s) in the dma
address space.

All dma functions also perform the required cache operations like their
counterparts from the arm linear physical memory mapping version.

This patch contains code and fixes kindly provided by:
- Krishna Reddy <vdumpa@nvidia.com>,
- Andrzej Pietrasiewicz <andrzej.p@samsung.com>,
- Hiroshi DOYU <hdoyu@nvidia.com>

Change-Id: I4a9b155bef4d5f2b8a8dfe87751d82960b09b253
[lauraa: context conflicts]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:19 -08:00
Marek Szyprowski
da2b2117de ARM: dma-mapping: use alloc, mmap, free from dma_ops
This patch converts dma_alloc/free/mmap_{coherent,writecombine}
functions to use generic alloc/free/mmap methods from dma_map_ops
structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been
introduced to implement the writecombine methods.

Change-Id: I2709e3ffc97546df2f505d555b29c3bb8148daec
[lauraa: context conflicts]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
Marek Szyprowski
5e0ee00c15 ARM: dma-mapping: remove redundant code and do the cleanup
This patch just performs a global cleanup in DMA mapping implementation
for ARM architecture. Some of the tiny helper functions have been moved
to the caller code, some have been merged together.

Change-Id: I60b3450bd1180ea007e7326a63762d3a44b3c25d
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
Marek Szyprowski
0e8fe4a111 ARM: dma-mapping: move all dma bounce code to separate dma ops structure
This patch removes dma bounce hooks from the common dma mapping
implementation on ARM architecture and creates a separate set of
dma_map_ops for dma bounce devices.

Change-Id: I42d7869b4f74ffa5f36a4a7526bc0c55aaf6bab7
[lauraa: conflicts due to code cruft]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
Marek Szyprowski
3f47a1438c ARM: dma-mapping: implement dma sg methods on top of any generic dma ops
This patch converts all dma_sg methods to be generic (independent of the
current DMA mapping implementation for ARM architecture). All dma sg
operations are now implemented on top of respective
dma_map_page/dma_sync_single_for* operations from dma_map_ops structure.

Before this patch there were custom methods for all scatter/gather
related operations. They iterated over the whole scatter list and called
cache related operations directly (which in turn checked whether the dma
bounce code was in use and called the respective version). This patch
changes them not to use such a shortcut. Instead, it provides a similar
loop over the scatter list and calls methods from the device's
dma_map_ops structure.
This enables us to use device dependent implementations of cache related
operations (direct linear or dma bounce) depending on the provided
dma_map_ops structure.

Change-Id: Icbd72d1e4fed6d7478b98bb4ead120c02dd26588
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:17 -08:00
Marek Szyprowski
b5702dc251 ARM: dma-mapping: use asm-generic/dma-mapping-common.h
This patch modifies dma-mapping implementation on ARM architecture to
use common dma_map_ops structure and asm-generic/dma-mapping-common.h
helpers.

Change-Id: I574a3b5ac883cd5d9beb79deef8f5cb44fd83296
[lauraa: conflicts due to code cruft/context changes]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:17 -08:00
Marek Szyprowski
5a28bdbf7c ARM: dma-mapping: remove offset parameter to prepare for generic dma_ops
This patch removes the need for the offset parameter in the dma bounce
functions. This is required to let the dma-mapping framework on the ARM
architecture use common, generic dma_map_ops based dma-mapping
helpers.

Background and more detailed explanation:

The dma_*_range_* functions have been available since the early days of
the dma mapping api. They are the correct way of doing partial syncs on
a buffer (usually used by network device drivers). This patch changes
only the internal implementation of the dma bounce functions to let
them tunnel through the dma_map_ops structure. The driver api stays
unchanged, so drivers are still obliged to call the dma_*_range_*
functions to keep the code clean and easy to understand.

The only drawback of this patch is reduced detection of dma api
abuse. Let us consider the following code:

dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE);
dma_sync_single_range_for_cpu(dev, dma_addr+16, 0, 32, DMA_TO_DEVICE);

Without the patch such code fails, because dma bounce code is unable
to find the bounce buffer for the given dma_address. After the patch
the above sync call will be equivalent to:

dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE);

which succeeds.

I don't consider this a real problem, because DMA API abuse should be
caught by the debug_dma_* function family. This patch lets us simplify
the internal low-level implementation without changing the
driver-visible API.

Change-Id: I9a847e30f345bf5e69fded1747ff79057750fb66
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:21:28 -08:00
Marek Szyprowski
81fc7d89e7 ARM: dma-mapping: introduce DMA_ERROR_CODE constant
Replace all uses of ~0 with DMA_ERROR_CODE, which should make the code
easier to read.

Change-Id: I6c0fff904d8df3a9d2a8a727e62faf000a55c1b5
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:21:28 -08:00
Marek Szyprowski
b172ee0dc3 ARM: dma-mapping: use pr_* instead of printk
Replace all calls to printk with the pr_* function family.

Change-Id: Id03dee8797cd736529ede3ef525a930b90a04042
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:21:27 -08:00
Marek Szyprowski
bacdd6a14e ARM: dma-mapping: use dma_mmap_from_coherent()
Change-Id: Ibc4086c0f48272356187966fce416a57160fab76
[lauraa: context conflicts]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:21:26 -08:00
Laura Abbott
ce95e6e6ce Revert "ARM: 7169/1: topdown mmap support"
commit 7dbaa46678
(ARM: 7169/1: topdown mmap support) allocates mmap addresses from
the top of the address space instead of the bottom. Unfortunately, some
userspace components are broken and do checks such as the following:

void* addr = mmap(...);
// Top bit is now the sign bit...
int test = (int)addr;
if (test < 0) {
	//failure
}

This means that any address at or above 0x80000000 will be marked
as a failure. Until we verify that all userspace components are fixed,
revert this change.

Change-Id: I2eacbfb4f7b8fc9bf5704ca90d31c409819d7fbe
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:21:22 -08:00
Junjie Wu
6b71c52fea mm: add HAVE_MEMBLOCK_NODE_MAP support
ARCH_POPULATES_NODE_MAP in 3.0 is replaced by HAVE_MEMBLOCK_NODE_MAP in
3.4.  add_active_range() is replaced with memblock_set_node().  They do
basically the same thing, but embedding the nid field into
memblock_region is much cleaner than a separate early_node_map.

HAVE_MEMBLOCK_NODE_MAP is not selected by default.

See commit 4a2164a7db for more info.

Change-Id: Icb44a8cea365b2d32df80628a57535a3d46fbd55
Signed-off-by: Junjie Wu <junjiew@codeaurora.org>
2013-02-27 18:19:51 -08:00
Rohit Vaswani
bebbf12768 arm: Fix compilation error with gcc 4.5.2
Fix the warning with the LDM/STM instructions. The instructions don't
have a register offset, so set the offset to 0.

Change-Id: I2c988373c88e280015faa43076139650747d7ff3
Acked-by: Kaushik Sikdar <ksikdar@qualcomm.com>
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
2013-02-27 18:17:23 -08:00
Stepan Moskovchenko
202b5b78a6 arm: Support the safe WFE sequence for Krait CPUs
Certain versions of the Krait processor require a specific
code sequence to be executed prior to executing a WFE
instruction to permit that instruction to place the
processor into a low-power state.

Change-Id: I308adc691f110a323cbd84e9779675ac045826fa
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
2013-02-27 18:17:06 -08:00
Nicolas Pitre
2650ec3fe8 ARM: 7438/1: fill possible PMD empty section gaps
On ARM with the 2-level page table format, a PMD entry is represented by
two consecutive section entries covering 2MB of virtual space.

However, static mappings always were allowed to use separate 1MB section
entries.  This means in practice that a static mapping may create half
populated PMDs via create_mapping().

Since commit 0536bdf33f (ARM: move iotable mappings within the vmalloc
region) those static mappings are located in the vmalloc area. We must
ensure no such half populated PMDs are accessible once vmalloc() or
ioremap() start looking at the vmalloc area for nearby free virtual
address ranges, or various things leading to a kernel crash will happen.

Change-Id: Icff38652afbd6c2da1211fad110d1abb1621dc86
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reported-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: "R, Sricharan" <r.sricharan@ti.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:15:35 -08:00
Marek Szyprowski
97d08b5f30 ARM: dma-mapping: remove unconditional dependency on CMA
CMA has been enabled unconditionally on all ARMv6+ systems to solve the
long-standing issue of double kernel mappings for all dma coherent
buffers. This, however, created a dependency on CONFIG_EXPERIMENTAL for
the whole ARM architecture, which should really be avoided. This patch
removes that dependency and lets one use the old, well-tested dma-mapping
implementation on ARMv6+ systems without the need to enable
EXPERIMENTAL features.

Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
[lauraa: Fixed conflicts in dma-mapping.c]

Change-Id: I17831dd98204dd8598fc469ae93f0ceb2c7c84c3
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:14:49 -08:00
Marek Szyprowski
b78c8d6fa9 ARM: integrate CMA with DMA-mapping subsystem
This patch adds support for CMA to dma-mapping subsystem for ARM
architecture. By default a global CMA area is used, but specific devices
are allowed to have their private memory areas if required (they can be
created with dma_declare_contiguous() function during board
initialisation).

Contiguous memory areas reserved for DMA are remapped with 2-level page
tables on boot. Once a buffer is requested, a low memory kernel mapping
is updated to match the requested memory access type.

GFP_ATOMIC allocations are performed from a special pool which is created
early during boot. This way remapping page attributes is not needed at
allocation time.

CMA has been enabled unconditionally for ARMv6+ systems.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>

Conflicts:

	arch/arm/include/asm/mach/map.h
	arch/arm/mm/init.c
	arch/arm/mm/mm.h
	arch/arm/mm/mmu.c

Change-Id: I85e3b43a9fa1e3c4d33cbc85fff6dee1b815041d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:14:48 -08:00
Stephen Boyd
84d1c1a3a3 Merge branch 'goog/googly' (early part) into goog/msm-soc-3.4
Fix NR_IPI to be 7 instead of 6 because both googly and core add
an IPI.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>

Conflicts:
	arch/arm/Kconfig
	arch/arm/common/Makefile
	arch/arm/include/asm/hardware/cache-l2x0.h
	arch/arm/mm/cache-l2x0.c
	arch/arm/mm/mmu.c
	include/linux/wakelock.h
	kernel/power/Kconfig
	kernel/power/Makefile
	kernel/power/main.c
	kernel/power/power.h
2013-02-25 11:25:46 -08:00
David Ng
2bfed520ac ARM: Change CP15 regs to bump memory throughput on ScorpionMP
Change-Id: I9ace6222750954e43b4b57d049bb74645fb06424
Signed-off-by: David Ng <dave@codeaurora.org>
(cherry picked from commit 76c5892fc1fa36e4e5ebabd2c4e0f10593233b62)

Conflicts:

	arch/arm/mm/proc-v7.S
2013-02-20 02:50:20 -08:00
Jordan Crouse
b0dad0af86 msm: Increase the DMA consistent memory zone to 14MB
Increase the zone for mapping DMA memory on most platforms from 2MB
to the maximum 14MB. The actual memory comes from the normal pool; this
change just increases the virtual range where the memory can be mapped
(at the cost of 7 more 2MB page tables).

Change-Id: Ibc8067be1fbb92775b3c5e59277b5767e01706ea
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
2013-02-20 02:49:10 -08:00
Stephen Boyd
0fd7101218 arm: cache-l2x0: Make l2x0_cache_sync non-static
Change-Id: I0f8fc9c329bc1136ca9a4f2cf401c3a849dbeb66
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
2013-02-20 02:49:09 -08:00
Stephen Boyd
cb7444771d ARM: cache-l2x0: Fix section mismatch
WARNING: vmlinux.o(.text+0x1066c): Section mismatch in reference
from the function l2cc_suspend() to the function
.init.text:pl310_save()
The function l2cc_suspend() references
the function __init pl310_save().
This is often because l2cc_suspend lacks a __init
annotation or the annotation of pl310_save is wrong.

This looks fairly bad. Every time l2cc_suspend() is called we may
be calling into junk code.

Change-Id: I9c1d8e8dda1f0c8f8272d928f2095f370d4f3426
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
(cherry picked from commit 22ab9347b30c3549e317f436c4be2e30512f47cf)
2013-02-20 02:49:09 -08:00
Taniya Das
31553932c8 ARM: cache-l2x0: Save L2CC registers using pl310 save/resume
This reverts commit a022290fe5165ffe4973355cb76556ce8c629d70.

Save the contents of the L2CC registers in l2x0_init itself as
they are not modified later.

CRs-Fixed: 356696
Change-Id: I05ec3bcce8d1e2f941a9ecbaae8c6598f52831c5
Signed-off-by: Taniya Das <tdas@codeaurora.org>
(cherry picked from commit 38a8c6e63b1478cc520c795e07cd1b6370901d06)

Conflicts:

	arch/arm/include/asm/hardware/cache-l2x0.h
	arch/arm/mach-msm/pm-8x60.c
	arch/arm/mach-msm/pm2.c
	arch/arm/mm/cache-l2x0.c
2013-02-20 02:49:07 -08:00
Neil Leeder
e358b600bf mm: add wrapper function for writing word to kernel text space
Adds a function to encapsulate the locking, removal of write-protection,
word write, cache flush and invalidate and restoration
of write protection. This is a convenience function for callers
needing to update a word in kernel text space.

Change-Id: I9832f0ff659ddc62c55819af5318c94b70f5c11c
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
(cherry picked from commit 32942757bdfb3c67af2cd9c30427adf7d722f7c8)
2013-02-20 02:49:07 -08:00
Neil Leeder
42bc26b44c arm: mm: add functions to temporarily allow write to kernel text
STRICT_MEMORY_RWX write-protects the kernel text section. This
is a problem for tools such as kprobes which need write access
to kernel text space.

This patch introduces a function to temporarily make part of the
kernel text space writeable and another to restore the original state.
They can be called by code which is intentionally writing to
this space, while still leaving the kernel protected from
unintentional writes at other times.

Change-Id: I879009c41771198852952e5e7c3b4d1368f12d5f
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
(cherry picked from commit f06ab97f06fe6e8b3141434695b235e673f5ae37)

Conflicts:

	arch/arm/mm/mmu.c
2013-02-20 02:49:05 -08:00
Larry Bassel
788788d1a9 arm: mm: restrict kernel memory permissions if CONFIG_STRICT_MEMORY_RWX set
If CONFIG_STRICT_MEMORY_RWX is set, make kernel text RX,
kernel data/stack RW and rodata RO so that writing
on kernel text, executing kernel data or stack, or
writing on or executing read-only data is prohibited.

Change-Id: Ib2242c20dabddb63ef3f5655d5794fe418cb6287
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit 5a5305e90d4204fdf0586fbbd9a19b92181e74ea)

Conflicts:

	arch/arm/mm/mmu.c
2013-02-20 02:49:05 -08:00
Larry Bassel
82a3f07134 arm: mm: add CONFIG_STRICT_MEMORY_RWX
If this is set, kernel text will be made RX, kernel data and stack
RW, rodata R so that writing to kernel text, executing kernel data
or stack, or writing to read-only data or kernel text will not
succeed.

Change-Id: Ib80907b34388bf547c4f268a903a766acaab9ae2
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit b347b1b5d7e88266f13c971dbd9116826085330c)
2013-02-20 02:49:04 -08:00
Colin Cross
fc875892c3 ARM: allow the kernel text section to be made read-only
This patch implements CONFIG_DEBUG_RODATA, allowing
the kernel text section to be marked read-only in
order to catch bugs that write over the kernel.  This
requires mapping the kernel code, plus up to 4MB, using
pages instead of sections, which can increase TLB
pressure.

The kernel is normally mapped using 1MB section entries
in the first level page table, and the first level page
table is copied into every mm.  This prevents marking
the kernel text read-only, because the 1MB section
entries are too large granularity to separate the init
section, which is reused as read-write memory after
init, and the kernel text section.  Also, the top level
page table for every process would need to be updated,
which is not possible to do safely and efficiently on SMP.

To solve both problems, allow alloc_init_pte to overwrite
an existing section entry with a fully-populated second
level page table.  When CONFIG_DEBUG_RODATA is set, all
the section entries that overlap the kernel text section
will be replaced with page mappings.  The kernel always
uses a pair of 2MB-aligned 1MB sections, so up to 2MB
of memory before and after the kernel may end up page
mapped.

When the top level page tables are copied into each
process the second level page tables are not copied,
leaving a single second level page table that will
affect all processes on all cpus.  To mark a page
read-only, the second level page table is located using
the pointer in the first level page table for the
current process, and the supervisor RO bit is flipped
atomically.  Once all pages have been updated, all TLBs
are flushed to ensure the changes are visible on all
cpus.

If CONFIG_DEBUG_RODATA is not set, the kernel will be
mapped using the normal 1MB section entries.

Change-Id: I94fae337f882c2e123abaf8e1082c29cd5d483c6
Signed-off-by: Colin Cross <ccross@android.com>
(cherry picked from commit e5e483d133)

Conflicts:

	arch/arm/mm/mmu.c
2013-02-20 02:49:03 -08:00
Larry Bassel
a20589dcf7 arm: remove unneeded debug pr_info
Remove an unneeded debug pr_info.

Change-Id: Iea048e2f6893bad54470be2ef21eb928e76cc00b
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit b31de046c086f3d8a5a6ef03e28a7410b495eb68)
2013-02-20 02:49:02 -08:00
Larry Bassel
172e9b25ec arm: add support for ARCH_POPULATES_NODE_MAP
ARCH_POPULATES_NODE_MAP is used by most of the other
architectures and allows finer-grained control of
how and where zones are placed.

Signed-off-by: Larry Bassel <lbassel@codeaurora.org>

Conflicts:

	arch/arm/mm/init.c
(cherry picked from commit d4e809ea8cca0ae779706dd17a2d36af26efadca)

Conflicts:

	arch/arm/Kconfig
	arch/arm/mm/init.c

Change-Id: I9dd848a87790268f7ad6dd49242fe65207fff90c
2013-02-20 02:49:02 -08:00
Larry Bassel
7377ce6620 arm: make memory power routines conform to current generic API
The various routines to change memory power state used
in physical memory hotplug and hotremove used to take
a start pfn and a number of pages and return 1 for success
and 0 for failure.

The generic API these are called from now takes a start address
and size and returns a byte count of memory powered on or
off, so the ARM and platform specific routines should as well.

Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit a4414b164ee44269ac42c4b5abc9da2ce7bd97d4)

Conflicts:

	arch/arm/mach-msm/board-msm8960.c
	arch/arm/mach-msm/include/mach/memory.h
	arch/arm/mach-msm/memory.c

Change-Id: I409a761edd20bdfe7e7c263fb53f3a3b86531bae
2013-02-20 02:49:01 -08:00
Larry Bassel
2b11045cd7 msm: arch_add_memory should only perform logical memory hotplug
The function arch_add_memory() should only perform logical
memory hotplug, but it was also improperly performing
physical memory hotplug. Even worse, it was adding pages
to the free list before the memory bank they were in
was powered on.

Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit 61fe47257aa9fa598163dda4a3938090bf7d95c6)
2013-02-20 02:49:00 -08:00
Michael Bohan
c7813a5abb arm: mm: Exclude additional mem_map entries from free
A previous patch addressed the issue of move_freepages_block()
trampling on erroneously freed mem_map entries for the bank end
pfn. We also need to restrict the start pfn in a
complementary manner.

Also make macro usage consistent by adopting the use of
round_down and round_up.

Signed-off-by: Michael Bohan <mbohan@codeaurora.org>
(cherry picked from commit ccd78a45dcd1d8255edddcf9062e0b0d34d8d27f)
2013-02-20 02:49:00 -08:00
Jack Cheung
e31f6193bb arm: Init SPARSEMEM section for removed memory
If a memblock has been removed with memblock_remove, it will
be skipped in 'for_each_memblock'. If a SPARSEMEM section
is completely enclosed within this removed memblock,
memory_present will never be called and the section will
never be initialized. This will cause garbage dereferences
later on.

This change loops over the memory banks instead of the memblocks.
Memory banks always exist regardless of memblock_remove, which ensures
that all SPARSEMEM sections will be initialized, even if they are
removed later.

Change-Id: I1b7b0418a7e752f5bf69c3ec2ea8ea17e8ecfec5
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 7245af9e71b1202d1a972a0bcebea073ce80899a)
2013-02-20 02:48:59 -08:00
Olav Haugan
875b761ed2 arm: handle discontig page struct between sections
If SPARSEMEM is enabled and there is a large amount of
memory, the page structures for the various sections
may not be contiguous. The code to traverse all of the
page structures in show_mem() was incorrectly assuming
all of the page structures were contiguous, causing
kernel panics in this case.

CRs-fixed: 315006
Change-Id: I5e9437c369d23f1513c73feb46623006561d15cf
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
(cherry picked from commit ea5a90bd710a2276183bc6df8eb4a0246746b8a6)
2013-02-20 02:48:59 -08:00
Larry Bassel
8453cb9db5 arm: handle discontiguous page structures between sections
If SPARSEMEM is enabled and there is a large amount of
memory, the page structures for the various sections
may not be contiguous. The code to traverse all of the
page structures in page_init() was incorrectly assuming
all of the page structures were contiguous, causing
early kernel panics in this case.

Change-Id: I10548520f4d1c0c232a2df940ab0f9d57078c586
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit f559f0e79e9a72da3e512a4577a3daa6f95cd603)
2013-02-20 02:48:58 -08:00
Larry Bassel
0088b5dca9 arm: add ARM-specific memory low-power support
Add ARM-specific memory low-power support and allow
the memory add code to call into platform-specific
code as the memory remove and low-power code does.

Change-Id: Ifb00366d8513092c8f14720980b4232fc8d758c0
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit ae314e9efd9f24f6347342a765f923d560080059)
2013-02-20 02:48:57 -08:00
Larry Bassel
131ab41aec msm: improve handling of DONT_MAP_HOLE_AFTER_MEMBANK0
The code to avoid mapping the hole after memory bank 0
located the start and size of this hole after running
generic initialization code which needed to convert
physical addresses to virtual ones and vice-versa
beyond this hole.

While this didn't prevent the kernel from booting, this
isn't clean, and in fact it was giving us more vmalloc
space than the config file specified (and as a result
there was less lowmem than expected).

The code to locate the hole now runs before this
initialization code.

Change-Id: Id67d8b9ea489b6d6a2c20151b0fc9a9d7b5b662d
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit 31a949b5c5004f017e4adfa86f4a136025acdcd4)

Conflicts:

	arch/arm/mach-msm/include/mach/memory.h
	arch/arm/mm/init.c
	arch/arm/mm/mmu.c
2013-02-20 02:48:57 -08:00
Rohit Vaswani
c4dcd5f15c msm: Reorganize CPU config options to be meaningful
commit e02db89be5da3cf610e548e162cfdd824a45b581
	Author:     Stepan Moskovchenko <stepanm@codeaurora.org>
	AuthorDate: Thu May 12 19:41:50 2011 -0700

	Reorganize the meaning of CONFIG_ARCH_MSM_SCORPION and
	CONFIG_ARCH_MSM_SCORPIONMP to be more intuitive, and to
	facilitate adding future targets. The SCORPION option now
	represents any target containing a Scorpion processor,
	regardless of the number of cores present. The MSM_SMP option now
	represents targets containing a multi-processor complex, and is
	selected regardless of CONFIG_SMP.
	The SCORPIONMP option selects both of these.

	Similarly, the KRAIT option now refers to current and future targets
	containing a Krait processor, regardless of core count. The KRAITMP
	option refers to targets containing a Krait-MP complex, and selects
	both KRAIT and MSM_SMP.

	Previously, SCORPIONMP was selected for targets containing either
	Scorpion or Krait CPUs, which is confusing and complicates adding
	new Krait-based targets, as these CPUs are substantially different.

Change-Id: I297d322c3bd931d9534026950cb64b95a0594ecd
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
2013-02-20 02:48:56 -08:00
Iliyan Malchev
3d360912f6 [ARM] msm: scorpion: Clear out efsr/adfsr on power-on and after fault
Change-Id: I1c087d4f56c257357a34482d55e013ab041d15bf
Signed-off-by: Dima Zavin <dima@android.com>
(cherry picked from commit 58148b26102d2dc067c071beb3f9251169ea5663)

Conflicts:

	arch/arm/mach-msm/arch-init-scorpion.S
	arch/arm/mach-msm/io.c
2013-02-20 02:48:55 -08:00
Iliyan Malchev
11c29c62d9 [ARM] qsd8k: print TCSR_SPARE2 in do_imprecise_ext
Signed-off-by: Iliyan Malchev <malchev@google.com>
(cherry picked from commit 5b3aa2070fa0cb75236281aca1af88e60c04fc9d)

Conflicts:

	arch/arm/mm/fault.c

Change-Id: I9ab40393ac9f2592d45f860e763b1f82addc051f
2013-02-20 02:48:55 -08:00
Iliyan Malchev
7f003f27e7 [ARM] qsd8k: print out more registers on an imprecise external abort.
Signed-off-by: Iliyan Malchev <malchev@google.com>
(cherry picked from commit df301fe8ab5235ef1c408affd990f3946a0fcc57)

Conflicts:

	arch/arm/mm/fault.c

Change-Id: I088c827d4987d0c69a81c9c3aaf73bcc64cfbb06
2013-02-20 02:48:54 -08:00