Merge branch 'goog/googly' (early part) into goog/msm-soc-3.4

Fix NR_IPI to be 7 instead of 6 because both googly and core add
an IPI.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>

Conflicts:
	arch/arm/Kconfig
	arch/arm/common/Makefile
	arch/arm/include/asm/hardware/cache-l2x0.h
	arch/arm/mm/cache-l2x0.c
	arch/arm/mm/mmu.c
	include/linux/wakelock.h
	kernel/power/Kconfig
	kernel/power/Makefile
	kernel/power/main.c
	kernel/power/power.h
Stephen Boyd committed 2013-02-25 10:53:49 -08:00
420 changed files with 121159 additions and 1349 deletions

Documentation/android.txt (new file, 121 lines)

@@ -0,0 +1,121 @@
=============
A N D R O I D
=============
Copyright (C) 2009 Google, Inc.
Written by Mike Chan <mike@android.com>
CONTENTS:
---------
1. Android
1.1 Required enabled config options
1.2 Required disabled config options
1.3 Recommended enabled config options
2. Contact
1. Android
==========
Android (www.android.com) is an open source operating system for mobile devices.
This document describes configurations needed to run the Android framework on
top of the Linux kernel.
To see a working defconfig, look at msm_defconfig or goldfish_defconfig,
which can be found at http://android.git.kernel.org in kernel/common.git
and kernel/msm.git.
1.1 Required enabled config options
-----------------------------------
After building a standard defconfig, ensure that these options are enabled in
your .config or defconfig if they are not already (this list is based on
msm_defconfig). You should keep the rest of the default options enabled in the
defconfig unless you know what you are doing.
ANDROID_PARANOID_NETWORK
ASHMEM
CONFIG_FB_MODE_HELPERS
CONFIG_FONT_8x16
CONFIG_FONT_8x8
CONFIG_YAFFS_SHORT_NAMES_IN_RAM
DAB
EARLYSUSPEND
FB
FB_CFB_COPYAREA
FB_CFB_FILLRECT
FB_CFB_IMAGEBLIT
FB_DEFERRED_IO
FB_TILEBLITTING
HIGH_RES_TIMERS
INOTIFY
INOTIFY_USER
INPUT_EVDEV
INPUT_GPIO
INPUT_MISC
LEDS_CLASS
LEDS_GPIO
LOCK_KERNEL
LOGGER
LOW_MEMORY_KILLER
MISC_DEVICES
NEW_LEDS
NO_HZ
POWER_SUPPLY
PREEMPT
RAMFS
RTC_CLASS
RTC_LIB
SWITCH
SWITCH_GPIO
TMPFS
UID_STAT
UID16
USB_FUNCTION
USB_FUNCTION_ADB
USER_WAKELOCK
VIDEO_OUTPUT_CONTROL
WAKELOCK
YAFFS_AUTO_YAFFS2
YAFFS_FS
YAFFS_YAFFS1
YAFFS_YAFFS2
1.2 Required disabled config options
------------------------------------
CONFIG_YAFFS_DISABLE_LAZY_LOAD
DNOTIFY
1.3 Recommended enabled config options
--------------------------------------
ANDROID_PMEM
ANDROID_RAM_CONSOLE
ANDROID_RAM_CONSOLE_ERROR_CORRECTION
SCHEDSTATS
DEBUG_PREEMPT
DEBUG_MUTEXES
DEBUG_SPINLOCK_SLEEP
DEBUG_INFO
FRAME_POINTER
CPU_FREQ
CPU_FREQ_TABLE
CPU_FREQ_DEFAULT_GOV_ONDEMAND
CPU_FREQ_GOV_ONDEMAND
CRC_CCITT
EMBEDDED
INPUT_TOUCHSCREEN
I2C
I2C_BOARDINFO
LOG_BUF_SHIFT=17
SERIAL_CORE
SERIAL_CORE_CONSOLE
2. Contact
==========
website: http://android.git.kernel.org
mailing-lists: android-kernel@googlegroups.com


@@ -592,6 +592,15 @@ there are not tasks in the cgroup. If pre_destroy() returns error code,
rmdir() will fail with it. From this behavior, pre_destroy() can be
called multiple times against a cgroup.
int allow_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)
Called prior to moving a task into a cgroup; if the subsystem
returns an error, this will abort the attach operation. Used
to extend the permission checks - if all subsystems in a cgroup
return 0, the attach will be allowed to proceed, even if the
default permission check (root or same user) fails.
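A minimal sketch of a subsystem hook using this interface (the policy and
the capability check are hypothetical; only the hook signature above is
from the document):

	static int example_allow_attach(struct cgroup *cgrp,
					struct cgroup_taskset *tset)
	{
		/* hypothetical policy: let privileged tasks attach even
		 * when the default root-or-same-user check would refuse */
		if (capable(CAP_SYS_NICE))
			return 0;	/* permit the attach */
		return -EACCES;		/* veto the attach */
	}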
int can_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)


@@ -28,6 +28,7 @@ Contents:
2.3 Userspace
2.4 Ondemand
2.5 Conservative
2.6 Interactive
3. The Governor Interface in the CPUfreq Core
@@ -191,6 +192,64 @@ governor but for the opposite direction. For example when set to its
default value of '20' it means that if the CPU usage needs to be below
20% between samples to have the frequency decreased.
2.6 Interactive
---------------
The CPUfreq governor "interactive" is designed for latency-sensitive,
interactive workloads. This governor sets the CPU speed depending on
usage, similar to "ondemand" and "conservative" governors. However,
the governor is more aggressive about scaling the CPU speed up in
response to CPU-intensive activity.
Sampling the CPU load every X ms can lead to under-powering the CPU
for X ms, leading to dropped frames, stuttering UI, etc. Instead of
sampling the cpu at a specified rate, the interactive governor will
check whether to scale the cpu frequency up soon after coming out of
idle. When the cpu comes out of idle, a timer is configured to fire
within 1-2 ticks. If the cpu is very busy between exiting idle and
when the timer fires then we assume the cpu is underpowered and ramp
to MAX speed.
If the cpu was not sufficiently busy to immediately ramp to MAX speed,
then the governor evaluates the cpu load since the last speed adjustment,
choosing the higher of that longer-term load and the short-term load since
idle exit to determine the cpu speed to ramp to.
The tuneable values for this governor are:
min_sample_time: The minimum amount of time to spend at the current
frequency before ramping down. This is to ensure that the governor has
seen enough historic cpu load data to determine the appropriate
workload. Default is 80000 uS.
hispeed_freq: An intermediate "hi speed" at which to initially ramp
when CPU load hits the value specified in go_hispeed_load. If load
stays high for the amount of time specified in above_hispeed_delay,
then speed may be bumped higher. Default is maximum speed.
go_hispeed_load: The CPU load at which to ramp to the intermediate "hi
speed". Default is 85%.
above_hispeed_delay: Once speed is set to hispeed_freq, wait for this
long before bumping speed higher in response to continued high load.
Default is 20000 uS.
timer_rate: Sample rate for reevaluating cpu load when the system is
not idle. Default is 20000 uS.
input_boost: If non-zero, boost speed of all CPUs to hispeed_freq on
touchscreen activity. Default is 0.
boost: If non-zero, immediately boost speed of all CPUs to at least
hispeed_freq until zero is written to this attribute. If zero, allow
CPU speeds to drop below hispeed_freq according to load as usual.
boostpulse: Immediately boost speed of all CPUs to hispeed_freq for
min_sample_time, after which speeds are allowed to drop below
hispeed_freq according to load as usual.
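As an illustration, the tunables above are plain sysfs attributes. A
minimal userspace sketch follows; the sysfs path is the conventional
location for this governor's attributes and should be verified on the
target kernel:

	#include <stdio.h>

	static int write_tunable(const char *name, const char *value)
	{
		char path[256];
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpufreq/interactive/%s",
			 name);
		f = fopen(path, "w");
		if (!f)
			return -1;		/* attribute not present */
		fprintf(f, "%s", value);
		fclose(f);
		return 0;
	}

	int main(void)
	{
		write_tunable("go_hispeed_load", "85");		/* percent */
		write_tunable("min_sample_time", "80000");	/* uS */
		return 0;
	}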
3. The Governor Interface in the CPUfreq Core
=============================================


@@ -29,13 +29,6 @@ The buffer-user
in memory, mapped into its own address space, so it can access the same area
of memory.
*IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
For this first version, a buffer shared using the dma_buf sharing API:
- *may* be exported to user space using "mmap" *ONLY* by the exporter, outside of
this framework.
- with this new iteration of the dma-buf api, cpu access from the kernel has been
enabled; see below for the details.
dma-buf operations for device dma only
--------------------------------------
@@ -313,6 +306,83 @@ Access to a dma_buf from the kernel context involves three steps:
enum dma_data_direction dir);
Direct Userspace Access/mmap Support
------------------------------------
Being able to mmap an exported dma-buf buffer object has 2 main use-cases:
- CPU fallback processing in a pipeline and
- supporting existing mmap interfaces in importers.
1. CPU fallback processing in a pipeline
In many processing pipelines it is sometimes required that the cpu can access
the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...). To avoid
the need to handle this specially in userspace frameworks for buffer sharing
it's ideal if the dma_buf fd itself can be used to access the backing storage
from userspace using mmap.
Furthermore Android's ION framework already supports this (and is otherwise
rather similar to dma-buf from a userspace consumer side, using fds as
handles, too). So it's beneficial to support this in a similar fashion on
dma-buf to have a good transition path for existing Android userspace.
No special interfaces are required: userspace simply calls mmap on the dma-buf fd.
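A minimal userspace sketch, assuming the fd was obtained from an exporting
driver (e.g. via an exporter-specific ioctl) and 'len' does not exceed the
buffer size:

	#include <stdio.h>
	#include <sys/mman.h>

	void *map_dmabuf(int dmabuf_fd, size_t len)
	{
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, dmabuf_fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap dma-buf");
			return NULL;
		}
		return p;	/* CPU can now read/write the buffer */
	}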
2. Supporting existing mmap interfaces in importers
Similar to the motivation for kernel cpu access, it is again important that
the userspace code of a given importing subsystem can use the same interfaces
with an imported dma-buf buffer object as with a native buffer object. This is
especially important for drm, where the userspace part of contemporary OpenGL,
X, and other drivers is huge, and reworking them to use a different way to
mmap a buffer would be rather invasive.
The assumption in the current dma-buf interfaces is that redirecting the
initial mmap is all that's needed. A survey of some of the existing
subsystems shows that no driver seems to do any nefarious thing like syncing
up with outstanding asynchronous processing on the device or allocating
special resources at fault time. So hopefully this is good enough, since
adding interfaces to intercept pagefaults and allow pte shootdowns would
increase the complexity quite a bit.
Interface:
int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
unsigned long);
If the importing subsystem simply provides a special-purpose mmap call to set
up a mapping in userspace, calling do_mmap with dma_buf->file will equally
achieve that for a dma-buf object.
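For illustration, a sketch of an importer redirecting its mmap file
operation (the importer object and its lookup are hypothetical; only
dma_buf_mmap() is from the interface above):

	static int importer_mmap(struct file *filp, struct vm_area_struct *vma)
	{
		struct importer_obj *obj = filp->private_data;

		/* hand the whole mapping over to the exporting driver */
		return dma_buf_mmap(obj->dmabuf, vma, 0);
	}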
3. Implementation notes for exporters
Because dma-buf buffers have invariant size over their lifetime, the dma-buf
core checks whether a vma is too large and rejects such mappings. The
exporter hence does not need to duplicate this check.
Because existing importing subsystems might presume coherent mappings for
userspace, the exporter needs to set up a coherent mapping. If that's not
possible, it needs to fake coherency by manually shooting down ptes when
leaving the cpu domain and flushing caches at fault time. Note that all the
dma_buf files share the same anon inode, hence the exporter needs to replace
the dma_buf file stored in vma->vm_file with its own if pte shootdown is
required. This is because the kernel uses the underlying inode's address_space
for vma tracking (and hence pte tracking at shootdown time with
unmap_mapping_range).
If the above shootdown dance turns out to be too expensive in certain
scenarios, we can extend dma-buf with a more explicit cache tracking scheme
for userspace mappings. But the current assumption is that using mmap is
always a slower path, so some inefficiencies should be acceptable.
Exporters that shoot down mappings (for any reasons) shall not do any
synchronization at fault time with outstanding device operations.
Synchronization is an orthogonal issue to sharing the backing storage of a
buffer and hence should not be handled by dma-buf itself. This is explicitly
mentioned here because many people seem to want something like this, but if
different exporters handle this differently, buffer sharing can fail in
interesting ways depending upon the exporter (if userspace starts depending
upon this implicit synchronization).
Miscellaneous notes
-------------------
@@ -336,6 +406,20 @@ Miscellaneous notes
the exporting driver to create a dmabuf fd must provide a way to let
userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
- If an exporter needs to manually flush caches and hence needs to fake
coherency for mmap support, it needs to be able to zap all the ptes pointing
at the backing storage. Now linux mm needs a struct address_space associated
with the struct file stored in vma->vm_file to do that with the function
unmap_mapping_range. But the dma_buf framework only backs every dma_buf fd
with the anon_file struct file, i.e. all dma_bufs share the same file.
Hence exporters need to set up their own file (and address_space) association
by setting vma->vm_file and adjusting vma->vm_pgoff in the dma_buf mmap
callback. In the specific case of a gem driver the exporter could use the
shmem file already provided by gem (and set vm_pgoff = 0). Exporters can then
zap ptes by unmapping the corresponding range of the struct address_space
associated with their own file.
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h


@@ -1953,6 +1953,15 @@ config DEPRECATED_PARAM_STRUCT
This was deprecated in 2001 and announced to live on for 5 years.
Some old boot loaders still use this way.
config ARM_FLUSH_CONSOLE_ON_RESTART
bool "Force flush the console on restart"
help
If the console is locked while the system is rebooted, the messages
in the temporary logbuffer would not have propagated to all the
console drivers. This option forces the console lock to be
released if it failed to be acquired, which will cause all the
pending messages to be flushed.
config CP_ACCESS
tristate "CP register access tool"
default m


@@ -766,6 +766,8 @@ proc_types:
@ b __arm6_mmu_cache_off
@ b __armv3_mmu_cache_flush
#if !defined(CONFIG_CPU_V7)
/* This collides with some V7 IDs, preventing correct detection */
.word 0x00000000 @ old ARM ID
.word 0x0000f000
mov pc, lr
@@ -774,6 +776,7 @@ proc_types:
THUMB( nop )
mov pc, lr
THUMB( nop )
#endif
.word 0x41007000 @ ARM7/710
.word 0xfff8fe00


@@ -45,3 +45,53 @@ config SHARP_PARAM
config SHARP_SCOOP
bool
config FIQ_GLUE
bool
select FIQ
config FIQ_DEBUGGER
bool "FIQ Mode Serial Debugger"
select FIQ
select FIQ_GLUE
default n
help
The FIQ serial debugger can accept commands even when the
kernel is unresponsive due to being stuck with interrupts
disabled.
config FIQ_DEBUGGER_NO_SLEEP
bool "Keep serial debugger active"
depends on FIQ_DEBUGGER
default n
help
Enables the serial debugger at boot. Passing
fiq_debugger.no_sleep on the kernel commandline will
override this config option.
config FIQ_DEBUGGER_WAKEUP_IRQ_ALWAYS_ON
bool "Don't disable wakeup IRQ when debugger is active"
depends on FIQ_DEBUGGER
default n
help
Don't disable the wakeup irq when enabling the uart clock. This will
cause extra interrupts, but it makes the serial debugger usable
on some MSM radio builds that ignore the uart clock request in power
collapse.
config FIQ_DEBUGGER_CONSOLE
bool "Console on FIQ Serial Debugger port"
depends on FIQ_DEBUGGER
default n
help
Enables a console so that printk messages are displayed on
the debugger serial port as they occur.
config FIQ_DEBUGGER_CONSOLE_DEFAULT_ENABLE
bool "Put the FIQ debugger into console mode by default"
depends on FIQ_DEBUGGER_CONSOLE
default n
help
If enabled, this puts the fiq debugger into console mode by default.
Otherwise, the fiq debugger will start out in debug mode.


@@ -15,4 +15,6 @@ obj-$(CONFIG_ARCH_IXP2000) += uengine.o
obj-$(CONFIG_ARCH_IXP23XX) += uengine.o
obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o
obj-$(CONFIG_ARM_TIMER_SP804) += timer-sp.o
obj-$(CONFIG_FIQ_GLUE) += fiq_glue.o fiq_glue_setup.o
obj-$(CONFIG_FIQ_DEBUGGER) += fiq_debugger.o
obj-$(CONFIG_CP_ACCESS) += cpaccess.o

File diff suppressed because it is too large.


@@ -0,0 +1,94 @@
/*
* arch/arm/common/fiq_debugger_ringbuf.c
*
* simple lockless ringbuffer
*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/slab.h>
struct fiq_debugger_ringbuf {
int len;
int head;
int tail;
u8 buf[];
};
static inline struct fiq_debugger_ringbuf *fiq_debugger_ringbuf_alloc(int len)
{
struct fiq_debugger_ringbuf *rbuf;
rbuf = kzalloc(sizeof(*rbuf) + len, GFP_KERNEL);
if (rbuf == NULL)
return NULL;
rbuf->len = len;
rbuf->head = 0;
rbuf->tail = 0;
smp_mb();
return rbuf;
}
static inline void fiq_debugger_ringbuf_free(struct fiq_debugger_ringbuf *rbuf)
{
kfree(rbuf);
}
static inline int fiq_debugger_ringbuf_level(struct fiq_debugger_ringbuf *rbuf)
{
int level = rbuf->head - rbuf->tail;
if (level < 0)
level = rbuf->len + level;
return level;
}
static inline int fiq_debugger_ringbuf_room(struct fiq_debugger_ringbuf *rbuf)
{
return rbuf->len - fiq_debugger_ringbuf_level(rbuf) - 1;
}
static inline u8
fiq_debugger_ringbuf_peek(struct fiq_debugger_ringbuf *rbuf, int i)
{
return rbuf->buf[(rbuf->tail + i) % rbuf->len];
}
static inline int
fiq_debugger_ringbuf_consume(struct fiq_debugger_ringbuf *rbuf, int count)
{
count = min(count, fiq_debugger_ringbuf_level(rbuf));
rbuf->tail = (rbuf->tail + count) % rbuf->len;
smp_mb();
return count;
}
static inline int
fiq_debugger_ringbuf_push(struct fiq_debugger_ringbuf *rbuf, u8 datum)
{
if (fiq_debugger_ringbuf_room(rbuf) == 0)
return 0;
rbuf->buf[rbuf->head] = datum;
smp_mb();
rbuf->head = (rbuf->head + 1) % rbuf->len;
smp_mb();
return 1;
}
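A usage sketch for the ringbuffer above (the caller is hypothetical, e.g.
buffering UART input for the debugger; the design allows one producer and
one consumer to run concurrently without a lock):

	static void ringbuf_example(void)
	{
		struct fiq_debugger_ringbuf *rbuf;
		int i;

		rbuf = fiq_debugger_ringbuf_alloc(128);
		if (!rbuf)
			return;

		fiq_debugger_ringbuf_push(rbuf, 'h');	/* producer side */
		fiq_debugger_ringbuf_push(rbuf, 'i');

		/* consumer side: inspect, then retire, buffered bytes */
		for (i = 0; i < fiq_debugger_ringbuf_level(rbuf); i++)
			pr_info("%c", fiq_debugger_ringbuf_peek(rbuf, i));
		fiq_debugger_ringbuf_consume(rbuf, 2);

		fiq_debugger_ringbuf_free(rbuf);
	}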

arch/arm/common/fiq_glue.S (new file, 111 lines)

@@ -0,0 +1,111 @@
/*
* Copyright (C) 2008 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
.text
.global fiq_glue_end
/* fiq stack: r0-r15,cpsr,spsr of interrupted mode */
ENTRY(fiq_glue)
/* store pc, cpsr from previous mode */
mrs r12, spsr
sub r11, lr, #4
subs r10, #1
bne nested_fiq
stmfd sp!, {r11-r12, lr}
/* store r8-r14 from previous mode */
sub sp, sp, #(7 * 4)
stmia sp, {r8-r14}^
nop
/* store r0-r7 from previous mode */
stmfd sp!, {r0-r7}
/* setup func(data,regs) arguments */
mov r0, r9
mov r1, sp
mov r3, r8
mov r7, sp
/* Get sp and lr from non-user modes */
and r4, r12, #MODE_MASK
cmp r4, #USR_MODE
beq fiq_from_usr_mode
mov r7, sp
orr r4, r4, #(PSR_I_BIT | PSR_F_BIT)
msr cpsr_c, r4
str sp, [r7, #(4 * 13)]
str lr, [r7, #(4 * 14)]
mrs r5, spsr
str r5, [r7, #(4 * 17)]
cmp r4, #(SVC_MODE | PSR_I_BIT | PSR_F_BIT)
/* use fiq stack if we reenter this mode */
subne sp, r7, #(4 * 3)
fiq_from_usr_mode:
msr cpsr_c, #(SVC_MODE | PSR_I_BIT | PSR_F_BIT)
mov r2, sp
sub sp, r7, #12
stmfd sp!, {r2, ip, lr}
/* call func(data,regs) */
blx r3
ldmfd sp, {r2, ip, lr}
mov sp, r2
/* restore/discard saved state */
cmp r4, #USR_MODE
beq fiq_from_usr_mode_exit
msr cpsr_c, r4
ldr sp, [r7, #(4 * 13)]
ldr lr, [r7, #(4 * 14)]
msr spsr_cxsf, r5
fiq_from_usr_mode_exit:
msr cpsr_c, #(FIQ_MODE | PSR_I_BIT | PSR_F_BIT)
ldmfd sp!, {r0-r7}
add sp, sp, #(7 * 4)
ldmfd sp!, {r11-r12, lr}
exit_fiq:
msr spsr_cxsf, r12
add r10, #1
movs pc, r11
nested_fiq:
orr r12, r12, #(PSR_F_BIT)
b exit_fiq
fiq_glue_end:
ENTRY(fiq_glue_setup) /* func, data, sp */
mrs r3, cpsr
msr cpsr_c, #(FIQ_MODE | PSR_I_BIT | PSR_F_BIT)
movs r8, r0
mov r9, r1
mov sp, r2
moveq r10, #0
movne r10, #1
msr cpsr_c, r3
bx lr


@@ -0,0 +1,100 @@
/*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <asm/fiq.h>
#include <asm/fiq_glue.h>
extern unsigned char fiq_glue, fiq_glue_end;
extern void fiq_glue_setup(void *func, void *data, void *sp);
static struct fiq_handler fiq_debugger_fiq_handler = {
.name = "fiq_glue",
};
DEFINE_PER_CPU(void *, fiq_stack);
static struct fiq_glue_handler *current_handler;
static DEFINE_MUTEX(fiq_glue_lock);
static void fiq_glue_setup_helper(void *info)
{
struct fiq_glue_handler *handler = info;
fiq_glue_setup(handler->fiq, handler,
__get_cpu_var(fiq_stack) + THREAD_START_SP);
}
int fiq_glue_register_handler(struct fiq_glue_handler *handler)
{
int ret;
int cpu;
if (!handler || !handler->fiq)
return -EINVAL;
mutex_lock(&fiq_glue_lock);
if (fiq_stack) {
ret = -EBUSY;
goto err_busy;
}
for_each_possible_cpu(cpu) {
void *stack;
stack = (void *)__get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
if (WARN_ON(!stack)) {
ret = -ENOMEM;
goto err_alloc_fiq_stack;
}
per_cpu(fiq_stack, cpu) = stack;
}
ret = claim_fiq(&fiq_debugger_fiq_handler);
if (WARN_ON(ret))
goto err_claim_fiq;
current_handler = handler;
on_each_cpu(fiq_glue_setup_helper, handler, true);
set_fiq_handler(&fiq_glue, &fiq_glue_end - &fiq_glue);
mutex_unlock(&fiq_glue_lock);
return 0;
err_claim_fiq:
err_alloc_fiq_stack:
for_each_possible_cpu(cpu) {
free_pages((unsigned long)per_cpu(fiq_stack, cpu),
THREAD_SIZE_ORDER);
per_cpu(fiq_stack, cpu) = NULL;
}
err_busy:
mutex_unlock(&fiq_glue_lock);
return ret;
}
/**
* fiq_glue_resume - Restore fiqs after suspend or low power idle states
*
* This must be called before calling local_fiq_enable after returning from a
* power state where the fiq mode registers were lost. If a driver provided
* a resume hook when it registered the handler it will be called.
*/
void fiq_glue_resume(void)
{
if (!current_handler)
return;
fiq_glue_setup(current_handler->fiq, current_handler,
__get_cpu_var(fiq_stack) + THREAD_START_SP);
if (current_handler->resume)
current_handler->resume(current_handler);
}


@@ -271,7 +271,7 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr
* Harvard caches are synchronised for the user space address range.
* This is used for the ARM private sys_cacheflush system call.
*/
#define flush_cache_user_range(vma,start,end) \
#define flush_cache_user_range(start,end) \
__cpuc_coherent_user_range((start) & PAGE_MASK, PAGE_ALIGN(end))
/*


@@ -0,0 +1,64 @@
/*
* arch/arm/include/asm/fiq_debugger.h
*
* Copyright (C) 2010 Google, Inc.
* Author: Colin Cross <ccross@android.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _ARCH_ARM_MACH_TEGRA_FIQ_DEBUGGER_H_
#define _ARCH_ARM_MACH_TEGRA_FIQ_DEBUGGER_H_
#include <linux/serial_core.h>
#define FIQ_DEBUGGER_NO_CHAR NO_POLL_CHAR
#define FIQ_DEBUGGER_BREAK 0x00ff0100
#define FIQ_DEBUGGER_FIQ_IRQ_NAME "fiq"
#define FIQ_DEBUGGER_SIGNAL_IRQ_NAME "signal"
#define FIQ_DEBUGGER_WAKEUP_IRQ_NAME "wakeup"
/**
* struct fiq_debugger_pdata - fiq debugger platform data
* @uart_resume: used to restore uart state right before enabling
* the fiq.
* @uart_enable: Do the work necessary to communicate with the uart
* hw (enable clocks, etc.). This must be ref-counted.
* @uart_disable: Do the work necessary to disable the uart hw
* (disable clocks, etc.). This must be ref-counted.
* @uart_dev_suspend: called during PM suspend, generally not needed
* for real fiq mode debugger.
* @uart_dev_resume: called during PM resume, generally not needed
* for real fiq mode debugger.
*/
struct fiq_debugger_pdata {
int (*uart_init)(struct platform_device *pdev);
void (*uart_free)(struct platform_device *pdev);
int (*uart_resume)(struct platform_device *pdev);
int (*uart_getc)(struct platform_device *pdev);
void (*uart_putc)(struct platform_device *pdev, unsigned int c);
void (*uart_flush)(struct platform_device *pdev);
void (*uart_enable)(struct platform_device *pdev);
void (*uart_disable)(struct platform_device *pdev);
int (*uart_dev_suspend)(struct platform_device *pdev);
int (*uart_dev_resume)(struct platform_device *pdev);
void (*fiq_enable)(struct platform_device *pdev, unsigned int fiq,
bool enable);
void (*fiq_ack)(struct platform_device *pdev, unsigned int fiq);
void (*force_irq)(struct platform_device *pdev, unsigned int irq);
void (*force_irq_ack)(struct platform_device *pdev, unsigned int irq);
};
#endif
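A board-file sketch wiring up this platform data (all msm_uart_* and
msm_fiq_* helpers are hypothetical platform-specific functions; only the
struct layout comes from the header above):

	static struct fiq_debugger_pdata example_fiq_debugger_pdata = {
		.uart_init	= msm_uart_init,
		.uart_getc	= msm_uart_getc,
		.uart_putc	= msm_uart_putc,
		.uart_flush	= msm_uart_flush,
		.fiq_enable	= msm_fiq_enable,
		.fiq_ack	= msm_fiq_ack,
	};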


@@ -0,0 +1,30 @@
/*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __ASM_FIQ_GLUE_H
#define __ASM_FIQ_GLUE_H
struct fiq_glue_handler {
void (*fiq)(struct fiq_glue_handler *h, void *regs, void *svc_sp);
void (*resume)(struct fiq_glue_handler *h);
};
int fiq_glue_register_handler(struct fiq_glue_handler *handler);
#ifdef CONFIG_FIQ_GLUE
void fiq_glue_resume(void);
#else
static inline void fiq_glue_resume(void) {}
#endif
#endif
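A registration sketch using this header (the handler body is hypothetical;
note the callback runs in FIQ context, so it must not take locks or sleep):

	static void example_fiq(struct fiq_glue_handler *h, void *regs,
				void *svc_sp)
	{
		/* poke the hardware, stash state for later processing */
	}

	static struct fiq_glue_handler example_handler = {
		.fiq = example_fiq,
	};

	static int __init example_init(void)
	{
		return fiq_glue_register_handler(&example_handler);
	}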


@@ -5,7 +5,7 @@
#include <linux/threads.h>
#include <asm/irq.h>
#define NR_IPI 6
#define NR_IPI 7
typedef struct {
unsigned int __softirq_pending;


@@ -66,6 +66,7 @@
#define L2X0_STNDBY_MODE_EN (1 << 0)
/* Registers shifts and masks */
#define L2X0_CACHE_ID_REV_MASK (0x3f)
#define L2X0_CACHE_ID_PART_MASK (0xf << 6)
#define L2X0_CACHE_ID_PART_L210 (1 << 6)
#define L2X0_CACHE_ID_PART_L310 (3 << 6)
@@ -103,6 +104,8 @@
#define L2X0_ADDR_FILTER_EN 1
#define REV_PL310_R2P0 4
#define L2X0_PREFETCH_CTRL_OFFSET_SHIFT 0
#define L2X0_PREFETCH_CTRL_WRAP8_INC_SHIFT 23
#define L2X0_PREFETCH_CTRL_WRAP8_SHIFT 30


@@ -17,15 +17,23 @@
#define TRACER_ACCESSED_BIT 0
#define TRACER_RUNNING_BIT 1
#define TRACER_CYCLE_ACC_BIT 2
#define TRACER_TRACE_DATA_BIT 3
#define TRACER_TIMESTAMP_BIT 4
#define TRACER_BRANCHOUTPUT_BIT 5
#define TRACER_RETURN_STACK_BIT 6
#define TRACER_ACCESSED BIT(TRACER_ACCESSED_BIT)
#define TRACER_RUNNING BIT(TRACER_RUNNING_BIT)
#define TRACER_CYCLE_ACC BIT(TRACER_CYCLE_ACC_BIT)
#define TRACER_TRACE_DATA BIT(TRACER_TRACE_DATA_BIT)
#define TRACER_TIMESTAMP BIT(TRACER_TIMESTAMP_BIT)
#define TRACER_BRANCHOUTPUT BIT(TRACER_BRANCHOUTPUT_BIT)
#define TRACER_RETURN_STACK BIT(TRACER_RETURN_STACK_BIT)
#define TRACER_TIMEOUT 10000
#define etm_writel(t, v, x) \
(__raw_writel((v), (t)->etm_regs + (x)))
#define etm_readl(t, x) (__raw_readl((t)->etm_regs + (x)))
#define etm_writel(t, id, v, x) \
(__raw_writel((v), (t)->etm_regs[(id)] + (x)))
#define etm_readl(t, id, x) (__raw_readl((t)->etm_regs[(id)] + (x)))
/* CoreSight Management Registers */
#define CSMR_LOCKACCESS 0xfb0
@@ -43,7 +51,7 @@
#define ETMCTRL_POWERDOWN 1
#define ETMCTRL_PROGRAM (1 << 10)
#define ETMCTRL_PORTSEL (1 << 11)
#define ETMCTRL_DO_CONTEXTID (3 << 14)
#define ETMCTRL_CONTEXTIDSIZE(x) (((x) & 3) << 14)
#define ETMCTRL_PORTMASK1 (7 << 4)
#define ETMCTRL_PORTMASK2 (1 << 21)
#define ETMCTRL_PORTMASK (ETMCTRL_PORTMASK1 | ETMCTRL_PORTMASK2)
@@ -55,9 +63,12 @@
#define ETMCTRL_DATA_DO_BOTH (ETMCTRL_DATA_DO_DATA | ETMCTRL_DATA_DO_ADDR)
#define ETMCTRL_BRANCH_OUTPUT (1 << 8)
#define ETMCTRL_CYCLEACCURATE (1 << 12)
#define ETMCTRL_TIMESTAMP_EN (1 << 28)
#define ETMCTRL_RETURN_STACK_EN (1 << 29)
/* ETM configuration code register */
#define ETMR_CONFCODE (0x04)
#define ETMCCR_ETMIDR_PRESENT BIT(31)
/* ETM trace start/stop resource control register */
#define ETMR_TRACESSCTRL (0x18)
@@ -113,10 +124,25 @@
#define ETMR_TRACEENCTRL 0x24
#define ETMTE_INCLEXCL BIT(24)
#define ETMR_TRACEENEVT 0x20
#define ETMCTRL_OPTS (ETMCTRL_DO_CPRT | \
ETMCTRL_DATA_DO_ADDR | \
ETMCTRL_BRANCH_OUTPUT | \
ETMCTRL_DO_CONTEXTID)
#define ETMR_VIEWDATAEVT 0x30
#define ETMR_VIEWDATACTRL1 0x34
#define ETMR_VIEWDATACTRL2 0x38
#define ETMR_VIEWDATACTRL3 0x3c
#define ETMVDC3_EXCLONLY BIT(16)
#define ETMCTRL_OPTS (ETMCTRL_DO_CPRT)
#define ETMR_ID 0x1e4
#define ETMIDR_VERSION(x) (((x) >> 4) & 0xff)
#define ETMIDR_VERSION_3_1 0x21
#define ETMIDR_VERSION_PFT_1_0 0x30
#define ETMR_CCE 0x1e8
#define ETMCCER_RETURN_STACK_IMPLEMENTED BIT(23)
#define ETMCCER_TIMESTAMPING_IMPLEMENTED BIT(22)
#define ETMR_TRACEIDR 0x200
/* ETM management registers, "ETM Architecture", 3.5.24 */
#define ETMMR_OSLAR 0x300
@@ -140,14 +166,16 @@
#define ETBFF_TRIGIN BIT(8)
#define ETBFF_TRIGEVT BIT(9)
#define ETBFF_TRIGFL BIT(10)
#define ETBFF_STOPFL BIT(12)
#define etb_writel(t, v, x) \
(__raw_writel((v), (t)->etb_regs + (x)))
#define etb_readl(t, x) (__raw_readl((t)->etb_regs + (x)))
#define etm_lock(t) do { etm_writel((t), 0, CSMR_LOCKACCESS); } while (0)
#define etm_unlock(t) \
do { etm_writel((t), UNLOCK_MAGIC, CSMR_LOCKACCESS); } while (0)
#define etm_lock(t, id) \
do { etm_writel((t), (id), 0, CSMR_LOCKACCESS); } while (0)
#define etm_unlock(t, id) \
do { etm_writel((t), (id), UNLOCK_MAGIC, CSMR_LOCKACCESS); } while (0)
#define etb_lock(t) do { etb_writel((t), 0, CSMR_LOCKACCESS); } while (0)
#define etb_unlock(t) \


@@ -30,6 +30,9 @@ extern void asm_do_IRQ(unsigned int, struct pt_regs *);
void handle_IRQ(unsigned int, struct pt_regs *);
void init_IRQ(void);
void arch_trigger_all_cpu_backtrace(void);
#define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace
#endif
#endif


@@ -0,0 +1,28 @@
/*
* arch/arm/include/asm/mach/mmc.h
*/
#ifndef ASMARM_MACH_MMC_H
#define ASMARM_MACH_MMC_H
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include <linux/mmc/sdio_func.h>
struct embedded_sdio_data {
struct sdio_cis cis;
struct sdio_cccr cccr;
struct sdio_embedded_func *funcs;
int num_funcs;
};
struct mmc_platform_data {
unsigned int ocr_mask; /* available voltages */
int built_in; /* built-in device flag */
int card_present; /* card detect state */
u32 (*translate_vdd)(struct device *, unsigned int);
unsigned int (*status)(struct device *);
struct embedded_sdio_data *embedded_sdio;
int (*register_status_notify)(void (*callback)(int card_present, void *dev_id), void *dev_id);
};
#endif
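A board-file sketch using this platform data for a built-in SDIO device
such as a WLAN chip (the status callback and the state flag it reads are
hypothetical):

	static int example_wifi_power_state;	/* hypothetical state flag */

	static unsigned int example_wifi_status(struct device *dev)
	{
		return example_wifi_power_state;
	}

	static struct mmc_platform_data example_wifi_mmc_data = {
		.ocr_mask	= MMC_VDD_28_29,
		.built_in	= 1,
		.status		= example_wifi_status,
	};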


@@ -15,6 +15,7 @@
#include <linux/init.h>
#include <linux/types.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/sysrq.h>
#include <linux/device.h>
#include <linux/clk.h>
@@ -37,26 +38,37 @@ MODULE_AUTHOR("Alexander Shishkin");
struct tracectx {
unsigned int etb_bufsz;
void __iomem *etb_regs;
void __iomem *etm_regs;
void __iomem **etm_regs;
int etm_regs_count;
unsigned long flags;
int ncmppairs;
int etm_portsz;
int etm_contextid_size;
u32 etb_fc;
unsigned long range_start;
unsigned long range_end;
unsigned long data_range_start;
unsigned long data_range_end;
bool dump_initial_etb;
struct device *dev;
struct clk *emu_clk;
struct mutex mutex;
};
static struct tracectx tracer;
static struct tracectx tracer = {
.range_start = (unsigned long)_stext,
.range_end = (unsigned long)_etext,
};
static inline bool trace_isrunning(struct tracectx *t)
{
return !!(t->flags & TRACER_RUNNING);
}
static int etm_setup_address_range(struct tracectx *t, int n,
static int etm_setup_address_range(struct tracectx *t, int id, int n,
unsigned long start, unsigned long end, int exclude, int data)
{
u32 flags = ETMAAT_ARM | ETMAAT_IGNCONTEXTID | ETMAAT_NSONLY | \
u32 flags = ETMAAT_ARM | ETMAAT_IGNCONTEXTID | ETMAAT_IGNSECURITY |
ETMAAT_NOVALCMP;
if (n < 1 || n > t->ncmppairs)
@@ -72,95 +84,185 @@ static int etm_setup_address_range(struct tracectx *t, int n,
flags |= ETMAAT_IEXEC;
/* first comparator for the range */
etm_writel(t, flags, ETMR_COMP_ACC_TYPE(n * 2));
etm_writel(t, start, ETMR_COMP_VAL(n * 2));
etm_writel(t, id, flags, ETMR_COMP_ACC_TYPE(n * 2));
etm_writel(t, id, start, ETMR_COMP_VAL(n * 2));
/* second comparator is right next to it */
etm_writel(t, flags, ETMR_COMP_ACC_TYPE(n * 2 + 1));
etm_writel(t, end, ETMR_COMP_VAL(n * 2 + 1));
etm_writel(t, id, flags, ETMR_COMP_ACC_TYPE(n * 2 + 1));
etm_writel(t, id, end, ETMR_COMP_VAL(n * 2 + 1));
flags = exclude ? ETMTE_INCLEXCL : 0;
etm_writel(t, flags | (1 << n), ETMR_TRACEENCTRL);
if (data) {
flags = exclude ? ETMVDC3_EXCLONLY : 0;
if (exclude)
n += 8;
etm_writel(t, id, flags | BIT(n), ETMR_VIEWDATACTRL3);
} else {
flags = exclude ? ETMTE_INCLEXCL : 0;
etm_writel(t, id, flags | (1 << n), ETMR_TRACEENCTRL);
}
return 0;
}
static int trace_start_etm(struct tracectx *t, int id)
{
u32 v;
unsigned long timeout = TRACER_TIMEOUT;
v = ETMCTRL_OPTS | ETMCTRL_PROGRAM | ETMCTRL_PORTSIZE(t->etm_portsz);
v |= ETMCTRL_CONTEXTIDSIZE(t->etm_contextid_size);
if (t->flags & TRACER_CYCLE_ACC)
v |= ETMCTRL_CYCLEACCURATE;
if (t->flags & TRACER_BRANCHOUTPUT)
v |= ETMCTRL_BRANCH_OUTPUT;
if (t->flags & TRACER_TRACE_DATA)
v |= ETMCTRL_DATA_DO_ADDR;
if (t->flags & TRACER_TIMESTAMP)
v |= ETMCTRL_TIMESTAMP_EN;
if (t->flags & TRACER_RETURN_STACK)
v |= ETMCTRL_RETURN_STACK_EN;
etm_unlock(t, id);
etm_writel(t, id, v, ETMR_CTRL);
while (!(etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t, id);
return -EFAULT;
}
if (t->range_start || t->range_end)
etm_setup_address_range(t, id, 1,
t->range_start, t->range_end, 0, 0);
else
etm_writel(t, id, ETMTE_INCLEXCL, ETMR_TRACEENCTRL);
etm_writel(t, id, 0, ETMR_TRACEENCTRL2);
etm_writel(t, id, 0, ETMR_TRACESSCTRL);
etm_writel(t, id, 0x6f, ETMR_TRACEENEVT);
etm_writel(t, id, 0, ETMR_VIEWDATACTRL1);
etm_writel(t, id, 0, ETMR_VIEWDATACTRL2);
if (t->data_range_start || t->data_range_end)
etm_setup_address_range(t, id, 2, t->data_range_start,
t->data_range_end, 0, 1);
else
etm_writel(t, id, ETMVDC3_EXCLONLY, ETMR_VIEWDATACTRL3);
etm_writel(t, id, 0x6f, ETMR_VIEWDATAEVT);
v &= ~ETMCTRL_PROGRAM;
v |= ETMCTRL_PORTSEL;
etm_writel(t, id, v, ETMR_CTRL);
timeout = TRACER_TIMEOUT;
while (etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to deassert timed out\n");
etm_lock(t, id);
return -EFAULT;
}
etm_lock(t, id);
return 0;
}
static int trace_start(struct tracectx *t)
{
u32 v;
unsigned long timeout = TRACER_TIMEOUT;
int ret;
int id;
u32 etb_fc = t->etb_fc;
etb_unlock(t);
etb_writel(t, 0, ETBR_FORMATTERCTRL);
t->dump_initial_etb = false;
etb_writel(t, 0, ETBR_WRITEADDR);
etb_writel(t, etb_fc, ETBR_FORMATTERCTRL);
etb_writel(t, 1, ETBR_CTRL);
etb_lock(t);
/* configure etm */
v = ETMCTRL_OPTS | ETMCTRL_PROGRAM | ETMCTRL_PORTSIZE(t->etm_portsz);
if (t->flags & TRACER_CYCLE_ACC)
v |= ETMCTRL_CYCLEACCURATE;
etm_unlock(t);
etm_writel(t, v, ETMR_CTRL);
while (!(etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t);
return -EFAULT;
/* configure etm(s) */
for (id = 0; id < t->etm_regs_count; id++) {
ret = trace_start_etm(t, id);
if (ret)
return ret;
}
etm_setup_address_range(t, 1, (unsigned long)_stext,
(unsigned long)_etext, 0, 0);
etm_writel(t, 0, ETMR_TRACEENCTRL2);
etm_writel(t, 0, ETMR_TRACESSCTRL);
etm_writel(t, 0x6f, ETMR_TRACEENEVT);
v &= ~ETMCTRL_PROGRAM;
v |= ETMCTRL_PORTSEL;
etm_writel(t, v, ETMR_CTRL);
timeout = TRACER_TIMEOUT;
while (etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to deassert timed out\n");
etm_lock(t);
return -EFAULT;
}
etm_lock(t);
t->flags |= TRACER_RUNNING;
return 0;
}
static int trace_stop(struct tracectx *t)
static int trace_stop_etm(struct tracectx *t, int id)
{
unsigned long timeout = TRACER_TIMEOUT;
etm_unlock(t);
etm_unlock(t, id);
etm_writel(t, 0x440, ETMR_CTRL);
while (!(etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
etm_writel(t, id, 0x440, ETMR_CTRL);
while (!(etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t);
dev_err(t->dev,
"etm%d: Waiting for progbit to assert timed out\n",
id);
etm_lock(t, id);
return -EFAULT;
}
etm_lock(t);
etm_lock(t, id);
return 0;
}
static int trace_power_down_etm(struct tracectx *t, int id)
{
unsigned long timeout = TRACER_TIMEOUT;
etm_unlock(t, id);
while (!(etm_readl(t, id, ETMR_STATUS) & ETMST_PROGBIT) && --timeout)
;
if (!timeout) {
dev_err(t->dev, "etm%d: Waiting for status progbit to assert timed out\n",
id);
etm_lock(t, id);
return -EFAULT;
}
etm_writel(t, id, 0x441, ETMR_CTRL);
etm_lock(t, id);
return 0;
}
static int trace_stop(struct tracectx *t)
{
int id;
unsigned long timeout = TRACER_TIMEOUT;
u32 etb_fc = t->etb_fc;
for (id = 0; id < t->etm_regs_count; id++)
trace_stop_etm(t, id);
for (id = 0; id < t->etm_regs_count; id++)
trace_power_down_etm(t, id);
etb_unlock(t);
etb_writel(t, ETBFF_MANUAL_FLUSH, ETBR_FORMATTERCTRL);
if (etb_fc) {
etb_fc |= ETBFF_STOPFL;
etb_writel(t, t->etb_fc, ETBR_FORMATTERCTRL);
}
etb_writel(t, etb_fc | ETBFF_MANUAL_FLUSH, ETBR_FORMATTERCTRL);
timeout = TRACER_TIMEOUT;
while (etb_readl(t, ETBR_FORMATTERCTRL) &
@@ -185,24 +287,15 @@ static int trace_stop(struct tracectx *t)
static int etb_getdatalen(struct tracectx *t)
{
u32 v;
int rp, wp;
int wp;
v = etb_readl(t, ETBR_STATUS);
if (v & 1)
return t->etb_bufsz;
rp = etb_readl(t, ETBR_READADDR);
wp = etb_readl(t, ETBR_WRITEADDR);
if (rp > wp) {
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_WRITEADDR);
return 0;
}
return wp - rp;
return wp;
}
/* sysrq+v will always stop the running trace and leave it at that */
@@ -235,21 +328,18 @@ static void etm_dump(void)
printk("%08x", cpu_to_be32(etb_readl(t, ETBR_READMEM)));
printk(KERN_INFO "\n--- ETB buffer end ---\n");
/* deassert the overflow bit */
etb_writel(t, 1, ETBR_CTRL);
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0, ETBR_TRIGGERCOUNT);
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_WRITEADDR);
etb_lock(t);
}
static void sysrq_etm_dump(int key)
{
if (!mutex_trylock(&tracer.mutex)) {
printk(KERN_INFO "Tracing hardware busy\n");
return;
}
dev_dbg(tracer.dev, "Dumping ETB buffer\n");
etm_dump();
mutex_unlock(&tracer.mutex);
}
static struct sysrq_key_op sysrq_etm_op = {
@@ -276,6 +366,10 @@ static ssize_t etb_read(struct file *file, char __user *data,
struct tracectx *t = file->private_data;
u32 first = 0;
u32 *buf;
int wpos;
int skip;
long wlength;
loff_t pos = *ppos;
mutex_lock(&t->mutex);
@@ -287,31 +381,39 @@ static ssize_t etb_read(struct file *file, char __user *data,
etb_unlock(t);
total = etb_getdatalen(t);
if (total == 0 && t->dump_initial_etb)
total = t->etb_bufsz;
if (total == t->etb_bufsz)
first = etb_readl(t, ETBR_WRITEADDR);
if (pos > total * 4) {
skip = 0;
wpos = total;
} else {
skip = (int)pos % 4;
wpos = (int)pos / 4;
}
total -= wpos;
first = (first + wpos) % t->etb_bufsz;
etb_writel(t, first, ETBR_READADDR);
length = min(total * 4, (int)len);
buf = vmalloc(length);
wlength = min(total, DIV_ROUND_UP(skip + (int)len, 4));
length = min(total * 4 - skip, (int)len);
buf = vmalloc(wlength * 4);
dev_dbg(t->dev, "ETB buffer length: %d\n", total);
dev_dbg(t->dev, "ETB read %ld bytes to %lld from %ld words at %d\n",
length, pos, wlength, first);
dev_dbg(t->dev, "ETB buffer length: %d\n", total + wpos);
dev_dbg(t->dev, "ETB status reg: %x\n", etb_readl(t, ETBR_STATUS));
for (i = 0; i < length / 4; i++)
for (i = 0; i < wlength; i++)
buf[i] = etb_readl(t, ETBR_READMEM);
/* the only way to deassert overflow bit in ETB status is this */
etb_writel(t, 1, ETBR_CTRL);
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0, ETBR_WRITEADDR);
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_TRIGGERCOUNT);
etb_lock(t);
length -= copy_to_user(data, buf, length);
length -= copy_to_user(data, (u8 *)buf + skip, length);
vfree(buf);
*ppos = pos + length;
out:
mutex_unlock(&t->mutex);
@@ -348,28 +450,17 @@ static int __devinit etb_probe(struct amba_device *dev, const struct amba_id *id
if (ret)
goto out;
mutex_lock(&t->mutex);
t->etb_regs = ioremap_nocache(dev->res.start, resource_size(&dev->res));
if (!t->etb_regs) {
ret = -ENOMEM;
goto out_release;
}
t->dev = &dev->dev;
t->dump_initial_etb = true;
amba_set_drvdata(dev, t);
etb_miscdev.parent = &dev->dev;
ret = misc_register(&etb_miscdev);
if (ret)
goto out_unmap;
t->emu_clk = clk_get(&dev->dev, "emu_src_ck");
if (IS_ERR(t->emu_clk)) {
dev_dbg(&dev->dev, "Failed to obtain emu_src_ck.\n");
return -EFAULT;
}
clk_enable(t->emu_clk);
etb_unlock(t);
t->etb_bufsz = etb_readl(t, ETBR_DEPTH);
dev_dbg(&dev->dev, "Size: %x\n", t->etb_bufsz);
@@ -378,6 +469,20 @@ static int __devinit etb_probe(struct amba_device *dev, const struct amba_id *id
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0x1000, ETBR_FORMATTERCTRL);
etb_lock(t);
mutex_unlock(&t->mutex);
etb_miscdev.parent = &dev->dev;
ret = misc_register(&etb_miscdev);
if (ret)
goto out_unmap;
/* Get optional clock. Currently used to select clock source on omap3 */
t->emu_clk = clk_get(&dev->dev, "emu_src_ck");
if (IS_ERR(t->emu_clk))
dev_dbg(&dev->dev, "Failed to obtain emu_src_ck.\n");
else
clk_enable(t->emu_clk);
dev_dbg(&dev->dev, "ETB AMBA driver initialized.\n");
@@ -385,10 +490,13 @@ out:
return ret;
out_unmap:
mutex_lock(&t->mutex);
amba_set_drvdata(dev, NULL);
iounmap(t->etb_regs);
t->etb_regs = NULL;
out_release:
mutex_unlock(&t->mutex);
amba_release_regions(dev);
return ret;
@@ -403,8 +511,10 @@ static int etb_remove(struct amba_device *dev)
iounmap(t->etb_regs);
t->etb_regs = NULL;
clk_disable(t->emu_clk);
clk_put(t->emu_clk);
if (!IS_ERR(t->emu_clk)) {
clk_disable(t->emu_clk);
clk_put(t->emu_clk);
}
amba_release_regions(dev);
@@ -448,7 +558,10 @@ static ssize_t trace_running_store(struct kobject *kobj,
return -EINVAL;
mutex_lock(&tracer.mutex);
ret = value ? trace_start(&tracer) : trace_stop(&tracer);
if (!tracer.etb_regs)
ret = -ENODEV;
else
ret = value ? trace_start(&tracer) : trace_stop(&tracer);
mutex_unlock(&tracer.mutex);
return ret ? : n;
@@ -463,36 +576,50 @@ static ssize_t trace_info_show(struct kobject *kobj,
{
u32 etb_wa, etb_ra, etb_st, etb_fc, etm_ctrl, etm_st;
int datalen;
int id;
int ret;
etb_unlock(&tracer);
datalen = etb_getdatalen(&tracer);
etb_wa = etb_readl(&tracer, ETBR_WRITEADDR);
etb_ra = etb_readl(&tracer, ETBR_READADDR);
etb_st = etb_readl(&tracer, ETBR_STATUS);
etb_fc = etb_readl(&tracer, ETBR_FORMATTERCTRL);
etb_lock(&tracer);
mutex_lock(&tracer.mutex);
if (tracer.etb_regs) {
etb_unlock(&tracer);
datalen = etb_getdatalen(&tracer);
etb_wa = etb_readl(&tracer, ETBR_WRITEADDR);
etb_ra = etb_readl(&tracer, ETBR_READADDR);
etb_st = etb_readl(&tracer, ETBR_STATUS);
etb_fc = etb_readl(&tracer, ETBR_FORMATTERCTRL);
etb_lock(&tracer);
} else {
etb_wa = etb_ra = etb_st = etb_fc = ~0;
datalen = -1;
}
etm_unlock(&tracer);
etm_ctrl = etm_readl(&tracer, ETMR_CTRL);
etm_st = etm_readl(&tracer, ETMR_STATUS);
etm_lock(&tracer);
return sprintf(buf, "Trace buffer len: %d\nComparator pairs: %d\n"
ret = sprintf(buf, "Trace buffer len: %d\nComparator pairs: %d\n"
"ETBR_WRITEADDR:\t%08x\n"
"ETBR_READADDR:\t%08x\n"
"ETBR_STATUS:\t%08x\n"
"ETBR_FORMATTERCTRL:\t%08x\n"
"ETMR_CTRL:\t%08x\n"
"ETMR_STATUS:\t%08x\n",
"ETBR_FORMATTERCTRL:\t%08x\n",
datalen,
tracer.ncmppairs,
etb_wa,
etb_ra,
etb_st,
etb_fc,
etb_fc
);
for (id = 0; id < tracer.etm_regs_count; id++) {
etm_unlock(&tracer, id);
etm_ctrl = etm_readl(&tracer, id, ETMR_CTRL);
etm_st = etm_readl(&tracer, id, ETMR_STATUS);
etm_lock(&tracer, id);
ret += sprintf(buf + ret, "ETMR_CTRL:\t%08x\n"
"ETMR_STATUS:\t%08x\n",
etm_ctrl,
etm_st
);
}
mutex_unlock(&tracer.mutex);
return ret;
}
static struct kobj_attribute trace_info_attr =
@@ -531,42 +658,260 @@ static ssize_t trace_mode_store(struct kobject *kobj,
static struct kobj_attribute trace_mode_attr =
__ATTR(trace_mode, 0644, trace_mode_show, trace_mode_store);
static ssize_t trace_contextid_size_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
/* 0: No context id tracing, 1: One byte, 2: Two bytes, 3: Four bytes */
return sprintf(buf, "%d\n", (1 << tracer.etm_contextid_size) >> 1);
}
static ssize_t trace_contextid_size_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned int contextid_size;
if (sscanf(buf, "%u", &contextid_size) != 1)
return -EINVAL;
if (contextid_size == 3 || contextid_size > 4)
return -EINVAL;
mutex_lock(&tracer.mutex);
tracer.etm_contextid_size = fls(contextid_size);
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_contextid_size_attr =
__ATTR(trace_contextid_size, 0644,
trace_contextid_size_show, trace_contextid_size_store);
static ssize_t trace_branch_output_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", !!(tracer.flags & TRACER_BRANCHOUTPUT));
}
static ssize_t trace_branch_output_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned int branch_output;
if (sscanf(buf, "%u", &branch_output) != 1)
return -EINVAL;
mutex_lock(&tracer.mutex);
if (branch_output) {
tracer.flags |= TRACER_BRANCHOUTPUT;
/* Branch broadcasting is incompatible with the return stack */
tracer.flags &= ~TRACER_RETURN_STACK;
} else {
tracer.flags &= ~TRACER_BRANCHOUTPUT;
}
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_branch_output_attr =
__ATTR(trace_branch_output, 0644,
trace_branch_output_show, trace_branch_output_store);
static ssize_t trace_return_stack_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", !!(tracer.flags & TRACER_RETURN_STACK));
}
static ssize_t trace_return_stack_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned int return_stack;
if (sscanf(buf, "%u", &return_stack) != 1)
return -EINVAL;
mutex_lock(&tracer.mutex);
if (return_stack) {
tracer.flags |= TRACER_RETURN_STACK;
/* Return stack is incompatible with branch broadcasting */
tracer.flags &= ~TRACER_BRANCHOUTPUT;
} else {
tracer.flags &= ~TRACER_RETURN_STACK;
}
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_return_stack_attr =
__ATTR(trace_return_stack, 0644,
trace_return_stack_show, trace_return_stack_store);
static ssize_t trace_timestamp_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", !!(tracer.flags & TRACER_TIMESTAMP));
}
static ssize_t trace_timestamp_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned int timestamp;
if (sscanf(buf, "%u", &timestamp) != 1)
return -EINVAL;
mutex_lock(&tracer.mutex);
if (timestamp)
tracer.flags |= TRACER_TIMESTAMP;
else
tracer.flags &= ~TRACER_TIMESTAMP;
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_timestamp_attr =
__ATTR(trace_timestamp, 0644,
trace_timestamp_show, trace_timestamp_store);
static ssize_t trace_range_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%08lx %08lx\n",
tracer.range_start, tracer.range_end);
}
static ssize_t trace_range_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned long range_start, range_end;
if (sscanf(buf, "%lx %lx", &range_start, &range_end) != 2)
return -EINVAL;
mutex_lock(&tracer.mutex);
tracer.range_start = range_start;
tracer.range_end = range_end;
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_range_attr =
__ATTR(trace_range, 0644, trace_range_show, trace_range_store);
static ssize_t trace_data_range_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
unsigned long range_start;
u64 range_end;
mutex_lock(&tracer.mutex);
range_start = tracer.data_range_start;
range_end = tracer.data_range_end;
if (!range_end && (tracer.flags & TRACER_TRACE_DATA))
range_end = 0x100000000ULL;
mutex_unlock(&tracer.mutex);
return sprintf(buf, "%08lx %08llx\n", range_start, range_end);
}
static ssize_t trace_data_range_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned long range_start;
u64 range_end;
if (sscanf(buf, "%lx %llx", &range_start, &range_end) != 2)
return -EINVAL;
mutex_lock(&tracer.mutex);
tracer.data_range_start = range_start;
tracer.data_range_end = (unsigned long)range_end;
if (range_end)
tracer.flags |= TRACER_TRACE_DATA;
else
tracer.flags &= ~TRACER_TRACE_DATA;
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_data_range_attr =
__ATTR(trace_data_range, 0644,
trace_data_range_show, trace_data_range_store);
static int __devinit etm_probe(struct amba_device *dev, const struct amba_id *id)
{
struct tracectx *t = &tracer;
int ret = 0;
void __iomem **new_regs;
int new_count;
u32 etmccr;
u32 etmidr;
u32 etmccer = 0;
u8 etm_version = 0;
if (t->etm_regs) {
dev_dbg(&dev->dev, "ETM already initialized\n");
ret = -EBUSY;
mutex_lock(&t->mutex);
new_count = t->etm_regs_count + 1;
new_regs = krealloc(t->etm_regs,
sizeof(t->etm_regs[0]) * new_count, GFP_KERNEL);
if (!new_regs) {
dev_dbg(&dev->dev, "Failed to allocate ETM register array\n");
ret = -ENOMEM;
goto out;
}
t->etm_regs = new_regs;
ret = amba_request_regions(dev, NULL);
if (ret)
goto out;
t->etm_regs = ioremap_nocache(dev->res.start, resource_size(&dev->res));
if (!t->etm_regs) {
t->etm_regs[t->etm_regs_count] =
ioremap_nocache(dev->res.start, resource_size(&dev->res));
if (!t->etm_regs[t->etm_regs_count]) {
ret = -ENOMEM;
goto out_release;
}
amba_set_drvdata(dev, t);
amba_set_drvdata(dev, t->etm_regs[t->etm_regs_count]);
mutex_init(&t->mutex);
t->dev = &dev->dev;
t->flags = TRACER_CYCLE_ACC;
t->flags = TRACER_CYCLE_ACC | TRACER_TRACE_DATA | TRACER_BRANCHOUTPUT;
t->etm_portsz = 1;
t->etm_contextid_size = 3;
etm_unlock(t);
(void)etm_readl(t, ETMMR_PDSR);
etm_unlock(t, t->etm_regs_count);
(void)etm_readl(t, t->etm_regs_count, ETMMR_PDSR);
/* dummy first read */
(void)etm_readl(&tracer, ETMMR_OSSRR);
(void)etm_readl(&tracer, t->etm_regs_count, ETMMR_OSSRR);
t->ncmppairs = etm_readl(t, ETMR_CONFCODE) & 0xf;
etm_writel(t, 0x440, ETMR_CTRL);
etm_lock(t);
etmccr = etm_readl(t, t->etm_regs_count, ETMR_CONFCODE);
t->ncmppairs = etmccr & 0xf;
if (etmccr & ETMCCR_ETMIDR_PRESENT) {
etmidr = etm_readl(t, t->etm_regs_count, ETMR_ID);
etm_version = ETMIDR_VERSION(etmidr);
if (etm_version >= ETMIDR_VERSION_3_1)
etmccer = etm_readl(t, t->etm_regs_count, ETMR_CCE);
}
etm_writel(t, t->etm_regs_count, 0x441, ETMR_CTRL);
etm_writel(t, t->etm_regs_count, new_count, ETMR_TRACEIDR);
etm_lock(t, t->etm_regs_count);
ret = sysfs_create_file(&dev->dev.kobj,
&trace_running_attr.attr);
@@ -582,35 +927,100 @@ static int __devinit etm_probe(struct amba_device *dev, const struct amba_id *id
if (ret)
dev_dbg(&dev->dev, "Failed to create trace_mode in sysfs\n");
dev_dbg(t->dev, "ETM AMBA driver initialized.\n");
ret = sysfs_create_file(&dev->dev.kobj,
&trace_contextid_size_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_contextid_size in sysfs\n");
ret = sysfs_create_file(&dev->dev.kobj,
&trace_branch_output_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_branch_output in sysfs\n");
if (etmccer & ETMCCER_RETURN_STACK_IMPLEMENTED) {
ret = sysfs_create_file(&dev->dev.kobj,
&trace_return_stack_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_return_stack in sysfs\n");
}
if (etmccer & ETMCCER_TIMESTAMPING_IMPLEMENTED) {
ret = sysfs_create_file(&dev->dev.kobj,
&trace_timestamp_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_timestamp in sysfs\n");
}
ret = sysfs_create_file(&dev->dev.kobj, &trace_range_attr.attr);
if (ret)
dev_dbg(&dev->dev, "Failed to create trace_range in sysfs\n");
if (etm_version < ETMIDR_VERSION_PFT_1_0) {
ret = sysfs_create_file(&dev->dev.kobj,
&trace_data_range_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_data_range in sysfs\n");
} else {
tracer.flags &= ~TRACER_TRACE_DATA;
}
dev_dbg(&dev->dev, "ETM AMBA driver initialized.\n");
/* Enable formatter if there are multiple trace sources */
if (new_count > 1)
t->etb_fc = ETBFF_ENFCONT | ETBFF_ENFTC;
t->etm_regs_count = new_count;
out:
mutex_unlock(&t->mutex);
return ret;
out_unmap:
amba_set_drvdata(dev, NULL);
iounmap(t->etm_regs);
iounmap(t->etm_regs[t->etm_regs_count]);
out_release:
amba_release_regions(dev);
mutex_unlock(&t->mutex);
return ret;
}
static int etm_remove(struct amba_device *dev)
{
struct tracectx *t = amba_get_drvdata(dev);
amba_set_drvdata(dev, NULL);
iounmap(t->etm_regs);
t->etm_regs = NULL;
amba_release_regions(dev);
int i;
struct tracectx *t = &tracer;
void __iomem *etm_regs = amba_get_drvdata(dev);
sysfs_remove_file(&dev->dev.kobj, &trace_running_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_info_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_mode_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_range_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_data_range_attr.attr);
amba_set_drvdata(dev, NULL);
mutex_lock(&t->mutex);
for (i = 0; i < t->etm_regs_count; i++)
if (t->etm_regs[i] == etm_regs)
break;
for (; i < t->etm_regs_count - 1; i++)
t->etm_regs[i] = t->etm_regs[i + 1];
t->etm_regs_count--;
if (!t->etm_regs_count) {
kfree(t->etm_regs);
t->etm_regs = NULL;
}
mutex_unlock(&t->mutex);
iounmap(etm_regs);
amba_release_regions(dev);
return 0;
}
@@ -620,6 +1030,10 @@ static struct amba_id etm_ids[] = {
.id = 0x0003b921,
.mask = 0x0007ffff,
},
{
.id = 0x0003b950,
.mask = 0x0007ffff,
},
{ 0, 0 },
};
@@ -637,6 +1051,8 @@ static int __init etm_init(void)
{
int retval;
mutex_init(&tracer.mutex);
retval = amba_driver_register(&etb_driver);
if (retval) {
printk(KERN_ERR "Failed to register etb\n");


@@ -10,6 +10,8 @@
#include <linux/export.h>
#include <linux/init.h>
#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/cpu.h>
#include <linux/syscore_ops.h>
#include <linux/string.h>
@@ -103,6 +105,25 @@ static struct syscore_ops leds_syscore_ops = {
.resume = leds_resume,
};
static int leds_idle_notifier(struct notifier_block *nb, unsigned long val,
void *data)
{
switch (val) {
case IDLE_START:
leds_event(led_idle_start);
break;
case IDLE_END:
leds_event(led_idle_end);
break;
}
return 0;
}
static struct notifier_block leds_idle_nb = {
.notifier_call = leds_idle_notifier,
};
static int __init leds_init(void)
{
int ret;
@@ -111,8 +132,11 @@ static int __init leds_init(void)
ret = device_register(&leds_device);
if (ret == 0)
ret = device_create_file(&leds_device, &dev_attr_event);
if (ret == 0)
if (ret == 0) {
register_syscore_ops(&leds_syscore_ops);
idle_notifier_register(&leds_idle_nb);
}
return ret;
}


@@ -31,9 +31,9 @@
#include <linux/random.h>
#include <linux/hw_breakpoint.h>
#include <linux/cpuidle.h>
#include <linux/console.h>
#include <asm/cacheflush.h>
#include <asm/leds.h>
#include <asm/processor.h>
#include <asm/thread_notify.h>
#include <asm/stacktrace.h>
@@ -60,6 +60,18 @@ extern void setup_mm_for_reboot(void);
static volatile int hlt_counter;
#ifdef CONFIG_SMP
void arch_trigger_all_cpu_backtrace(void)
{
smp_send_all_cpu_backtrace();
}
#else
void arch_trigger_all_cpu_backtrace(void)
{
dump_stack();
}
#endif
void disable_hlt(void)
{
hlt_counter++;
@@ -92,6 +104,31 @@ __setup("hlt", hlt_setup);
extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
typedef void (*phys_reset_t)(unsigned long);
#ifdef CONFIG_ARM_FLUSH_CONSOLE_ON_RESTART
void arm_machine_flush_console(void)
{
printk("\n");
pr_emerg("Restarting %s\n", linux_banner);
if (console_trylock()) {
console_unlock();
return;
}
mdelay(50);
local_irq_disable();
if (!console_trylock())
pr_emerg("arm_restart: Console was locked! Busting\n");
else
pr_emerg("arm_restart: Console was locked!\n");
console_unlock();
}
#else
void arm_machine_flush_console(void)
{
}
#endif
/*
* A temporary stack to use for CPU reset. This is static so that we
* don't clobber it with the identity mapping. When running with this
@@ -211,9 +248,9 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */
while (1) {
idle_notifier_call_chain(IDLE_START);
tick_nohz_idle_enter();
rcu_idle_enter();
leds_event(led_idle_start);
while (!need_resched()) {
#ifdef CONFIG_HOTPLUG_CPU
if (cpu_is_offline(smp_processor_id()))
@@ -244,9 +281,9 @@ void cpu_idle(void)
} else
local_irq_enable();
}
leds_event(led_idle_end);
rcu_idle_exit();
tick_nohz_idle_exit();
idle_notifier_call_chain(IDLE_END);
schedule_preempt_disabled();
}
}
@@ -285,6 +322,10 @@ void machine_restart(char *cmd)
{
machine_shutdown();
/* Flush the console to make sure all the relevant messages make it
* out to the console drivers */
arm_machine_flush_console();
arm_pm_restart(reboot_mode, cmd);
/* Give a grace period for failure to restart of 1s */
@@ -295,6 +336,77 @@ void machine_restart(char *cmd)
while (1);
}
/*
* dump a block of kernel memory from around the given address
*/
static void show_data(unsigned long addr, int nbytes, const char *name)
{
int i, j;
int nlines;
u32 *p;
/*
* don't attempt to dump non-kernel addresses or
* values that are probably just small negative numbers
*/
if (addr < PAGE_OFFSET || addr > -256UL)
return;
printk("\n%s: %#lx:\n", name, addr);
/*
* round address down to a 32 bit boundary
* and always dump a multiple of 32 bytes
*/
p = (u32 *)(addr & ~(sizeof(u32) - 1));
nbytes += (addr & (sizeof(u32) - 1));
nlines = (nbytes + 31) / 32;
for (i = 0; i < nlines; i++) {
/*
* just display low 16 bits of address to keep
* each line of the dump < 80 characters
*/
printk("%04lx ", (unsigned long)p & 0xffff);
for (j = 0; j < 8; j++) {
u32 data;
if (probe_kernel_address(p, data)) {
printk(" ********");
} else {
printk(" %08x", data);
}
++p;
}
printk("\n");
}
}
static void show_extra_register_data(struct pt_regs *regs, int nbytes)
{
mm_segment_t fs;
fs = get_fs();
set_fs(KERNEL_DS);
show_data(regs->ARM_pc - nbytes, nbytes * 2, "PC");
show_data(regs->ARM_lr - nbytes, nbytes * 2, "LR");
show_data(regs->ARM_sp - nbytes, nbytes * 2, "SP");
show_data(regs->ARM_ip - nbytes, nbytes * 2, "IP");
show_data(regs->ARM_fp - nbytes, nbytes * 2, "FP");
show_data(regs->ARM_r0 - nbytes, nbytes * 2, "R0");
show_data(regs->ARM_r1 - nbytes, nbytes * 2, "R1");
show_data(regs->ARM_r2 - nbytes, nbytes * 2, "R2");
show_data(regs->ARM_r3 - nbytes, nbytes * 2, "R3");
show_data(regs->ARM_r4 - nbytes, nbytes * 2, "R4");
show_data(regs->ARM_r5 - nbytes, nbytes * 2, "R5");
show_data(regs->ARM_r6 - nbytes, nbytes * 2, "R6");
show_data(regs->ARM_r7 - nbytes, nbytes * 2, "R7");
show_data(regs->ARM_r8 - nbytes, nbytes * 2, "R8");
show_data(regs->ARM_r9 - nbytes, nbytes * 2, "R9");
show_data(regs->ARM_r10 - nbytes, nbytes * 2, "R10");
set_fs(fs);
}
void __show_regs(struct pt_regs *regs)
{
unsigned long flags;
@@ -354,6 +466,8 @@ void __show_regs(struct pt_regs *regs)
printk("Control: %08x%s\n", ctrl, buf);
}
#endif
show_extra_register_data(regs, 128);
}
void show_regs(struct pt_regs * regs)

View File

@@ -57,6 +57,7 @@ enum ipi_msg_type {
IPI_CALL_FUNC,
IPI_CALL_FUNC_SINGLE,
IPI_CPU_STOP,
IPI_CPU_BACKTRACE,
};
static DECLARE_COMPLETION(cpu_running);
@@ -383,6 +384,7 @@ static const char *ipi_types[NR_IPI] = {
S(IPI_CALL_FUNC, "Function call interrupts"),
S(IPI_CALL_FUNC_SINGLE, "Single function call interrupts"),
S(IPI_CPU_STOP, "CPU stop interrupts"),
S(IPI_CPU_BACKTRACE, "CPU backtrace"),
};
void show_ipi_list(struct seq_file *p, int prec)
@@ -514,6 +516,59 @@ static void ipi_cpu_stop(unsigned int cpu)
cpu_relax();
}
static cpumask_t backtrace_mask;
static DEFINE_RAW_SPINLOCK(backtrace_lock);
/* "in progress" flag of arch_trigger_all_cpu_backtrace */
static unsigned long backtrace_flag;
void smp_send_all_cpu_backtrace(void)
{
unsigned int this_cpu = smp_processor_id();
int i;
if (test_and_set_bit(0, &backtrace_flag))
/*
* If there is already a trigger_all_cpu_backtrace() in progress
* (backtrace_flag == 1), don't output duplicate CPU dump info.
*/
return;
cpumask_copy(&backtrace_mask, cpu_online_mask);
cpu_clear(this_cpu, backtrace_mask);
pr_info("Backtrace for cpu %d (current):\n", this_cpu);
dump_stack();
pr_info("\nsending IPI to all other CPUs:\n");
if (!cpus_empty(backtrace_mask))
smp_cross_call(&backtrace_mask, IPI_CPU_BACKTRACE);
/* Wait for up to 10 seconds for all other CPUs to do the backtrace */
for (i = 0; i < 10 * 1000; i++) {
if (cpumask_empty(&backtrace_mask))
break;
mdelay(1);
}
clear_bit(0, &backtrace_flag);
smp_mb__after_clear_bit();
}
/*
* ipi_cpu_backtrace - handle IPI from smp_send_all_cpu_backtrace()
*/
static void ipi_cpu_backtrace(unsigned int cpu, struct pt_regs *regs)
{
if (cpu_isset(cpu, backtrace_mask)) {
raw_spin_lock(&backtrace_lock);
pr_warning("IPI backtrace for cpu %d\n", cpu);
show_regs(regs);
raw_spin_unlock(&backtrace_lock);
cpu_clear(cpu, backtrace_mask);
}
}
/*
* Main handler for inter-processor interrupts
*/
@@ -562,6 +617,10 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
irq_exit();
break;
case IPI_CPU_BACKTRACE:
ipi_cpu_backtrace(cpu, regs);
break;
default:
printk(KERN_CRIT "CPU%u: Unknown IPI message 0x%x\n",
cpu, ipinr);

View File

@@ -496,7 +496,9 @@ do_cache_op(unsigned long start, unsigned long end, int flags)
if (end > vma->vm_end)
end = vma->vm_end;
flush_cache_user_range(vma, start, end);
up_read(&mm->mmap_sem);
flush_cache_user_range(start, end);
return;
}
up_read(&mm->mmap_sem);
}

View File

@@ -1,9 +1,10 @@
obj-y += io.o idle.o timer.o
obj-y += clock.o
obj-y += subsystem_map.o
obj-$(CONFIG_DEBUG_FS) += clock-debug.o
obj-$(CONFIG_MSM_VIC) += irq-vic.o
obj-$(CONFIG_MSM_IOMMU) += devices-iommu.o
obj-$(CONFIG_MSM_IOMMU) += devices-iommu.o iommu_domains.o
obj-$(CONFIG_ARCH_MSM7X00A) += dma.o irq.o acpuclock-arm11.o
obj-$(CONFIG_ARCH_MSM7X30) += dma.o

View File

@@ -0,0 +1,180 @@
/* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _ARCH_IOMMU_DOMAINS_H
#define _ARCH_IOMMU_DOMAINS_H
#include <linux/memory_alloc.h>
enum {
VIDEO_DOMAIN,
CAMERA_DOMAIN,
DISPLAY_DOMAIN,
ROTATOR_DOMAIN,
MAX_DOMAINS
};
enum {
VIDEO_FIRMWARE_POOL,
VIDEO_MAIN_POOL,
GEN_POOL,
};
struct msm_iommu_domain_name {
char *name;
int domain;
};
struct msm_iommu_domain {
/* iommu domain to map in */
struct iommu_domain *domain;
/* total number of allocations from this domain */
atomic_t allocation_cnt;
/* number of iova pools */
int npools;
/*
* array of gen_pools for allocating iovas.
* behavior is undefined if these overlap
*/
struct mem_pool *iova_pools;
};
struct iommu_domains_pdata {
struct msm_iommu_domain *domains;
int ndomains;
struct msm_iommu_domain_name *domain_names;
int nnames;
unsigned int domain_alloc_flags;
};
struct msm_iova_partition {
unsigned long start;
unsigned long size;
};
struct msm_iova_layout {
struct msm_iova_partition *partitions;
int npartitions;
const char *client_name;
unsigned int domain_flags;
};
#if defined(CONFIG_MSM_IOMMU)
extern struct iommu_domain *msm_get_iommu_domain(int domain_num);
extern int msm_allocate_iova_address(unsigned int iommu_domain,
unsigned int partition_no,
unsigned long size,
unsigned long align,
unsigned long *iova);
extern void msm_free_iova_address(unsigned long iova,
unsigned int iommu_domain,
unsigned int partition_no,
unsigned long size);
extern int msm_use_iommu(void);
extern int msm_iommu_map_extra(struct iommu_domain *domain,
unsigned long start_iova,
unsigned long size,
unsigned long page_size,
int cached);
extern void msm_iommu_unmap_extra(struct iommu_domain *domain,
unsigned long start_iova,
unsigned long size,
unsigned long page_size);
extern int msm_iommu_map_contig_buffer(unsigned long phys,
unsigned int domain_no,
unsigned int partition_no,
unsigned long size,
unsigned long align,
unsigned long cached,
unsigned long *iova_val);
extern void msm_iommu_unmap_contig_buffer(unsigned long iova,
unsigned int domain_no,
unsigned int partition_no,
unsigned long size);
extern int msm_register_domain(struct msm_iova_layout *layout);
#else
static inline struct iommu_domain
*msm_get_iommu_domain(int subsys_id) { return NULL; }
static inline int msm_allocate_iova_address(unsigned int iommu_domain,
unsigned int partition_no,
unsigned long size,
unsigned long align,
unsigned long *iova) { return -ENOMEM; }
static inline void msm_free_iova_address(unsigned long iova,
unsigned int iommu_domain,
unsigned int partition_no,
unsigned long size) { return; }
static inline int msm_use_iommu(void)
{
return 0;
}
static inline int msm_iommu_map_extra(struct iommu_domain *domain,
unsigned long start_iova,
unsigned long size,
unsigned long page_size,
int cached)
{
return -ENODEV;
}
static inline void msm_iommu_unmap_extra(struct iommu_domain *domain,
unsigned long start_iova,
unsigned long size,
unsigned long page_size)
{
}
static inline int msm_iommu_map_contig_buffer(unsigned long phys,
unsigned int domain_no,
unsigned int partition_no,
unsigned long size,
unsigned long align,
unsigned long cached,
unsigned long *iova_val)
{
*iova_val = phys;
return 0;
}
static inline void msm_iommu_unmap_contig_buffer(unsigned long iova,
unsigned int domain_no,
unsigned int partition_no,
unsigned long size)
{
return;
}
static inline int msm_register_domain(struct msm_iova_layout *layout)
{
return -ENODEV;
}
#endif
#endif
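A minimal usage sketch of the interface above, assuming hypothetical partition bounds and a made-up client name (nothing here is taken from a real board file):

/* Hypothetical client: register one IOVA partition and allocate from it. */
static int example_domain_setup(void)
{
	struct msm_iova_partition part = {
		.start = SZ_4K,		/* assumed base */
		.size  = SZ_16M,	/* assumed pool size */
	};
	struct msm_iova_layout layout = {
		.partitions   = &part,
		.npartitions  = 1,
		.client_name  = "example",
		.domain_flags = 0,
	};
	unsigned long iova;
	int domain = msm_register_domain(&layout);
	if (domain < 0)
		return domain;
	if (msm_allocate_iova_address(domain, 0, SZ_64K, SZ_4K, &iova))
		return -ENOMEM;
	/* ... hand iova to the hardware block ... */
	msm_free_iova_address(iova, domain, 0, SZ_64K);
	return 0;
}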

View File

@@ -0,0 +1,29 @@
/**
*
* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __MACH_ION_H_
#define __MACH_ION_H_
enum ion_memory_types {
ION_EBI_TYPE,
ION_SMI_TYPE,
};
enum ion_permission_type {
IPT_TYPE_MM_CARVEOUT = 0,
IPT_TYPE_MFC_SHAREDMEM = 1,
IPT_TYPE_MDP_WRITEBACK = 2,
};
#endif

View File

@@ -0,0 +1,83 @@
/*
* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __ARCH_MACH_MSM_SUBSYSTEM_MAP_H
#define __ARCH_MACH_MSM_SUBSYSTEM_MAP_H
#include <linux/iommu.h>
#include <mach/iommu_domains.h>
/* map the physical address in the kernel vaddr space */
#define MSM_SUBSYSTEM_MAP_KADDR 0x1
/* map the physical address in the iova address space */
#define MSM_SUBSYSTEM_MAP_IOVA 0x2
/* ioremaps in the kernel address space are cached */
#define MSM_SUBSYSTEM_MAP_CACHED 0x4
/* ioremaps in the kernel address space are uncached */
#define MSM_SUBSYSTEM_MAP_UNCACHED 0x8
/*
* Will map 2x the length requested.
*/
#define MSM_SUBSYSTEM_MAP_IOMMU_2X 0x10
/*
* Shortcut flags for alignment.
* The flag must be equal to the alignment requested.
* e.g. for 8k alignment the flags must be (0x2000 | other flags)
*/
#define MSM_SUBSYSTEM_ALIGN_IOVA_8K SZ_8K
#define MSM_SUBSYSTEM_ALIGN_IOVA_1M SZ_1M
enum msm_subsystem_id {
INVALID_SUBSYS_ID = -1,
MSM_SUBSYSTEM_VIDEO,
MSM_SUBSYSTEM_VIDEO_FWARE,
MSM_SUBSYSTEM_CAMERA,
MSM_SUBSYSTEM_DISPLAY,
MSM_SUBSYSTEM_ROTATOR,
MAX_SUBSYSTEM_ID
};
static inline int msm_subsystem_check_id(int subsys_id)
{
return subsys_id > INVALID_SUBSYS_ID && subsys_id < MAX_SUBSYSTEM_ID;
}
struct msm_mapped_buffer {
/*
* VA mapped in the kernel address space. This field shall be NULL if
* MSM_SUBSYSTEM_MAP_KADDR was not passed to the map buffer function.
*/
void *vaddr;
/*
* iovas mapped in the iommu address space. The ith entry of this array
* corresponds to the iova mapped in the ith subsystem in the array
* passed in to msm_subsystem_map_buffer. This field shall be NULL if
* MSM_SUBSYSTEM_MAP_IOVA was not passed to the map buffer function.
*/
unsigned long *iova;
};
extern struct msm_mapped_buffer *msm_subsystem_map_buffer(
unsigned long phys,
unsigned int length,
unsigned int flags,
int *subsys_ids,
unsigned int nsubsys);
extern int msm_subsystem_unmap_buffer(struct msm_mapped_buffer *buf);
extern phys_addr_t msm_subsystem_check_iova_mapping(int subsys_id,
unsigned long iova);
#endif /* __ARCH_MACH_MSM_SUBSYSTEM_MAP_H */
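For illustration, a hedged sketch of a caller mapping one physically contiguous buffer both into the kernel (uncached) and into the video subsystem's IOVA space; the buffer itself is assumed to come from elsewhere:

/* Hypothetical caller of the mapping API declared above. */
static int example_map(unsigned long phys, unsigned int length)
{
	int subsys[] = { MSM_SUBSYSTEM_VIDEO };
	struct msm_mapped_buffer *buf;
	buf = msm_subsystem_map_buffer(phys, length,
			MSM_SUBSYSTEM_MAP_KADDR | MSM_SUBSYSTEM_MAP_UNCACHED |
			MSM_SUBSYSTEM_MAP_IOVA,
			subsys, ARRAY_SIZE(subsys));
	if (IS_ERR_OR_NULL(buf))
		return -EINVAL;
	/* buf->vaddr is the kernel mapping; buf->iova[0] the video iova */
	return msm_subsystem_unmap_buffer(buf);
}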

View File

@@ -0,0 +1,444 @@
/* Copyright (c) 2010-2012, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/iommu.h>
#include <linux/memory_alloc.h>
#include <linux/platform_device.h>
#include <linux/rbtree.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <asm/sizes.h>
#include <asm/page.h>
#include <mach/iommu.h>
#include <mach/iommu_domains.h>
#include <mach/socinfo.h>
#include <mach/msm_subsystem_map.h>
/* dummy 64K for overmapping */
char iommu_dummy[2*SZ_64K-4];
struct msm_iova_data {
struct rb_node node;
struct mem_pool *pools;
int npools;
struct iommu_domain *domain;
int domain_num;
};
static struct rb_root domain_root;
DEFINE_MUTEX(domain_mutex);
static atomic_t domain_nums = ATOMIC_INIT(-1);
int msm_iommu_map_extra(struct iommu_domain *domain,
unsigned long start_iova,
unsigned long size,
unsigned long page_size,
int cached)
{
int i, ret_value = 0;
unsigned long order = get_order(page_size);
unsigned long aligned_size = ALIGN(size, page_size);
unsigned long nrpages = aligned_size >> (PAGE_SHIFT + order);
unsigned long phy_addr = ALIGN(virt_to_phys(iommu_dummy), page_size);
unsigned long temp_iova = start_iova;
for (i = 0; i < nrpages; i++) {
int ret = iommu_map(domain, temp_iova, phy_addr, page_size,
cached);
if (ret) {
pr_err("%s: could not map %lx in domain %p, error: %d\n",
__func__, start_iova, domain, ret);
ret_value = -EAGAIN;
goto out;
}
temp_iova += page_size;
}
return ret_value;
out:
for (; i > 0; --i) {
temp_iova -= page_size;
iommu_unmap(domain, temp_iova, page_size);
}
return ret_value;
}
void msm_iommu_unmap_extra(struct iommu_domain *domain,
unsigned long start_iova,
unsigned long size,
unsigned long page_size)
{
int i;
unsigned long order = get_order(page_size);
unsigned long aligned_size = ALIGN(size, page_size);
unsigned long nrpages = aligned_size >> (PAGE_SHIFT + order);
unsigned long temp_iova = start_iova;
for (i = 0; i < nrpages; ++i) {
iommu_unmap(domain, temp_iova, page_size);
temp_iova += page_size;
}
}
static int msm_iommu_map_iova_phys(struct iommu_domain *domain,
unsigned long iova,
unsigned long phys,
unsigned long size,
int cached)
{
int ret;
struct scatterlist *sglist;
int prot = IOMMU_WRITE | IOMMU_READ;
prot |= cached ? IOMMU_CACHE : 0;
sglist = vmalloc(sizeof(*sglist));
if (!sglist) {
ret = -ENOMEM;
goto err1;
}
sg_init_table(sglist, 1);
sglist->length = size;
sglist->offset = 0;
sglist->dma_address = phys;
ret = iommu_map_range(domain, iova, sglist, size, prot);
if (ret) {
pr_err("%s: could not map extra %lx in domain %p\n",
__func__, iova, domain);
}
vfree(sglist);
err1:
return ret;
}
int msm_iommu_map_contig_buffer(unsigned long phys,
unsigned int domain_no,
unsigned int partition_no,
unsigned long size,
unsigned long align,
unsigned long cached,
unsigned long *iova_val)
{
unsigned long iova;
int ret;
if (size & (align - 1))
return -EINVAL;
ret = msm_allocate_iova_address(domain_no, partition_no, size, align,
&iova);
if (ret)
return -ENOMEM;
ret = msm_iommu_map_iova_phys(msm_get_iommu_domain(domain_no), iova,
phys, size, cached);
if (ret)
msm_free_iova_address(iova, domain_no, partition_no, size);
else
*iova_val = iova;
return ret;
}
void msm_iommu_unmap_contig_buffer(unsigned long iova,
unsigned int domain_no,
unsigned int partition_no,
unsigned long size)
{
iommu_unmap_range(msm_get_iommu_domain(domain_no), iova, size);
msm_free_iova_address(iova, domain_no, partition_no, size);
}
static struct msm_iova_data *find_domain(int domain_num)
{
struct rb_root *root = &domain_root;
struct rb_node *p = root->rb_node;
mutex_lock(&domain_mutex);
while (p) {
struct msm_iova_data *node;
node = rb_entry(p, struct msm_iova_data, node);
if (domain_num < node->domain_num)
p = p->rb_left;
else if (domain_num > node->domain_num)
p = p->rb_right;
else {
mutex_unlock(&domain_mutex);
return node;
}
}
mutex_unlock(&domain_mutex);
return NULL;
}
static int add_domain(struct msm_iova_data *node)
{
struct rb_root *root = &domain_root;
struct rb_node **p = &root->rb_node;
struct rb_node *parent = NULL;
mutex_lock(&domain_mutex);
while (*p) {
struct msm_iova_data *tmp;
parent = *p;
tmp = rb_entry(parent, struct msm_iova_data, node);
if (node->domain_num < tmp->domain_num)
p = &(*p)->rb_left;
else if (node->domain_num > tmp->domain_num)
p = &(*p)->rb_right;
else
BUG();
}
rb_link_node(&node->node, parent, p);
rb_insert_color(&node->node, root);
mutex_unlock(&domain_mutex);
return 0;
}
struct iommu_domain *msm_get_iommu_domain(int domain_num)
{
struct msm_iova_data *data;
data = find_domain(domain_num);
if (data)
return data->domain;
else
return NULL;
}
int msm_allocate_iova_address(unsigned int iommu_domain,
unsigned int partition_no,
unsigned long size,
unsigned long align,
unsigned long *iova)
{
struct msm_iova_data *data;
struct mem_pool *pool;
unsigned long va;
data = find_domain(iommu_domain);
if (!data)
return -EINVAL;
if (partition_no >= data->npools)
return -EINVAL;
pool = &data->pools[partition_no];
if (!pool->gpool)
return -EINVAL;
va = gen_pool_alloc_aligned(pool->gpool, size, ilog2(align));
if (va) {
pool->free -= size;
/* Offset because genpool can't handle 0 addresses */
if (pool->paddr == 0)
va -= SZ_4K;
*iova = va;
return 0;
}
return -ENOMEM;
}
void msm_free_iova_address(unsigned long iova,
unsigned int iommu_domain,
unsigned int partition_no,
unsigned long size)
{
struct msm_iova_data *data;
struct mem_pool *pool;
data = find_domain(iommu_domain);
if (!data) {
WARN(1, "Invalid domain %d\n", iommu_domain);
return;
}
if (partition_no >= data->npools) {
WARN(1, "Invalid partition %d for domain %d\n",
partition_no, iommu_domain);
return;
}
pool = &data->pools[partition_no];
if (!pool)
return;
pool->free += size;
/* Offset because genpool can't handle 0 addresses */
if (pool->paddr == 0)
iova += SZ_4K;
gen_pool_free(pool->gpool, iova, size);
}
int msm_register_domain(struct msm_iova_layout *layout)
{
int i;
struct msm_iova_data *data;
struct mem_pool *pools;
if (!layout)
return -EINVAL;
data = kmalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
pools = kmalloc(sizeof(struct mem_pool) * layout->npartitions,
GFP_KERNEL);
if (!pools)
goto out;
for (i = 0; i < layout->npartitions; i++) {
if (layout->partitions[i].size == 0)
continue;
pools[i].gpool = gen_pool_create(PAGE_SHIFT, -1);
if (!pools[i].gpool)
continue;
pools[i].paddr = layout->partitions[i].start;
pools[i].size = layout->partitions[i].size;
/*
* genalloc can't handle a pool starting at address 0.
* For now, solve this problem by offsetting the value
* put in by 4k.
* gen pool address = actual address + 4k
*/
if (pools[i].paddr == 0)
layout->partitions[i].start += SZ_4K;
if (gen_pool_add(pools[i].gpool,
layout->partitions[i].start,
layout->partitions[i].size, -1)) {
gen_pool_destroy(pools[i].gpool);
pools[i].gpool = NULL;
continue;
}
}
data->pools = pools;
data->npools = layout->npartitions;
data->domain_num = atomic_inc_return(&domain_nums);
data->domain = iommu_domain_alloc(&platform_bus_type,
layout->domain_flags);
add_domain(data);
return data->domain_num;
out:
kfree(data);
return -ENOMEM;
}
int msm_use_iommu()
{
return iommu_present(&platform_bus_type);
}
static int __init iommu_domain_probe(struct platform_device *pdev)
{
struct iommu_domains_pdata *p = pdev->dev.platform_data;
int i, j;
if (!p)
return -ENODEV;
for (i = 0; i < p->ndomains; i++) {
struct msm_iova_layout l;
struct msm_iova_partition *part;
struct msm_iommu_domain *domains;
domains = p->domains;
l.npartitions = domains[i].npools;
part = kmalloc(
sizeof(struct msm_iova_partition) * l.npartitions,
GFP_KERNEL);
if (!part) {
pr_info("%s: could not allocate space for domain %d",
__func__, i);
continue;
}
for (j = 0; j < l.npartitions; j++) {
part[j].start = p->domains[i].iova_pools[j].paddr;
part[j].size = p->domains[i].iova_pools[j].size;
}
l.partitions = part;
msm_register_domain(&l);
kfree(part);
}
for (i = 0; i < p->nnames; i++) {
struct device *ctx = msm_iommu_get_ctx(
p->domain_names[i].name);
struct iommu_domain *domain;
if (!ctx)
continue;
domain = msm_get_iommu_domain(p->domain_names[i].domain);
if (!domain)
continue;
if (iommu_attach_device(domain, ctx)) {
WARN(1, "%s: could not attach domain %p to context %s."
" iommu programming will not occur.\n",
__func__, domain,
p->domain_names[i].name);
continue;
}
}
return 0;
}
static struct platform_driver iommu_domain_driver = {
.driver = {
.name = "iommu_domains",
.owner = THIS_MODULE
},
};
static int __init msm_subsystem_iommu_init(void)
{
return platform_driver_probe(&iommu_domain_driver, iommu_domain_probe);
}
device_initcall(msm_subsystem_iommu_init);
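For context, a sketch of the board-side platform data this probe consumes; every name, context string, and pool bound below is hypothetical, and the mem_pool field names are assumed from <linux/memory_alloc.h>:

/* Hypothetical board wiring for the "iommu_domains" platform driver. */
static struct mem_pool example_video_pools[] = {
	{ .paddr = SZ_4K, .size = SZ_32M },	/* assumed bounds */
};
static struct msm_iommu_domain example_domains[] = {
	[VIDEO_DOMAIN] = {
		.iova_pools = example_video_pools,
		.npools     = ARRAY_SIZE(example_video_pools),
	},
};
static struct msm_iommu_domain_name example_names[] = {
	{ .name = "video_ctx", .domain = VIDEO_DOMAIN },	/* hypothetical */
};
static struct iommu_domains_pdata example_pdata = {
	.domains      = example_domains,
	.ndomains     = ARRAY_SIZE(example_domains),
	.domain_names = example_names,
	.nnames       = ARRAY_SIZE(example_names),
};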

View File

@@ -0,0 +1,541 @@
/* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/io.h>
#include <linux/types.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/memory_alloc.h>
#include <linux/module.h>
#include <mach/iommu.h>
#include <mach/iommu_domains.h>
#include <mach/msm_subsystem_map.h>
struct msm_buffer_node {
struct rb_node rb_node_all_buffer;
struct rb_node rb_node_paddr;
struct msm_mapped_buffer *buf;
unsigned long length;
unsigned int *subsystems;
unsigned int nsubsys;
unsigned int phys;
};
static struct rb_root buffer_root;
static struct rb_root phys_root;
DEFINE_MUTEX(msm_buffer_mutex);
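/*
 * Indexed by enum msm_subsystem_id: both video subsystems share
 * VIDEO_DOMAIN, and the trailing 0xFFFFFFFF entry is the sentinel
 * that msm_subsystem_get_domain_no() returns for out-of-range ids.
 */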
static unsigned long subsystem_to_domain_tbl[] = {
VIDEO_DOMAIN,
VIDEO_DOMAIN,
CAMERA_DOMAIN,
DISPLAY_DOMAIN,
ROTATOR_DOMAIN,
0xFFFFFFFF
};
static struct msm_buffer_node *find_buffer(void *key)
{
struct rb_root *root = &buffer_root;
struct rb_node *p = root->rb_node;
mutex_lock(&msm_buffer_mutex);
while (p) {
struct msm_buffer_node *node;
node = rb_entry(p, struct msm_buffer_node, rb_node_all_buffer);
if (node->buf->vaddr) {
if (key < node->buf->vaddr)
p = p->rb_left;
else if (key > node->buf->vaddr)
p = p->rb_right;
else {
mutex_unlock(&msm_buffer_mutex);
return node;
}
} else {
if (key < (void *)node->buf)
p = p->rb_left;
else if (key > (void *)node->buf)
p = p->rb_right;
else {
mutex_unlock(&msm_buffer_mutex);
return node;
}
}
}
mutex_unlock(&msm_buffer_mutex);
return NULL;
}
static struct msm_buffer_node *find_buffer_phys(unsigned int phys)
{
struct rb_root *root = &phys_root;
struct rb_node *p = root->rb_node;
mutex_lock(&msm_buffer_mutex);
while (p) {
struct msm_buffer_node *node;
node = rb_entry(p, struct msm_buffer_node, rb_node_paddr);
if (phys < node->phys)
p = p->rb_left;
else if (phys > node->phys)
p = p->rb_right;
else {
mutex_unlock(&msm_buffer_mutex);
return node;
}
}
mutex_unlock(&msm_buffer_mutex);
return NULL;
}
static int add_buffer(struct msm_buffer_node *node)
{
struct rb_root *root = &buffer_root;
struct rb_node **p = &root->rb_node;
struct rb_node *parent = NULL;
void *key;
if (node->buf->vaddr)
key = node->buf->vaddr;
else
key = node->buf;
mutex_lock(&msm_buffer_mutex);
while (*p) {
struct msm_buffer_node *tmp;
parent = *p;
tmp = rb_entry(parent, struct msm_buffer_node,
rb_node_all_buffer);
if (tmp->buf->vaddr) {
if (key < tmp->buf->vaddr)
p = &(*p)->rb_left;
else if (key > tmp->buf->vaddr)
p = &(*p)->rb_right;
else {
WARN(1, "tried to add buffer twice! buf = %p"
" vaddr = %p iova = %p", tmp->buf,
tmp->buf->vaddr,
tmp->buf->iova);
mutex_unlock(&msm_buffer_mutex);
return -EINVAL;
}
} else {
if (key < (void *)tmp->buf)
p = &(*p)->rb_left;
else if (key > (void *)tmp->buf)
p = &(*p)->rb_right;
else {
WARN(1, "tried to add buffer twice! buf = %p"
" vaddr = %p iova = %p", tmp->buf,
tmp->buf->vaddr,
tmp->buf->iova);
mutex_unlock(&msm_buffer_mutex);
return -EINVAL;
}
}
}
rb_link_node(&node->rb_node_all_buffer, parent, p);
rb_insert_color(&node->rb_node_all_buffer, root);
mutex_unlock(&msm_buffer_mutex);
return 0;
}
static int add_buffer_phys(struct msm_buffer_node *node)
{
struct rb_root *root = &phys_root;
struct rb_node **p = &root->rb_node;
struct rb_node *parent = NULL;
mutex_lock(&msm_buffer_mutex);
while (*p) {
struct msm_buffer_node *tmp;
parent = *p;
tmp = rb_entry(parent, struct msm_buffer_node, rb_node_paddr);
if (node->phys < tmp->phys)
p = &(*p)->rb_left;
else if (node->phys > tmp->phys)
p = &(*p)->rb_right;
else {
WARN(1, "tried to add buffer twice! buf = %p"
" vaddr = %p iova = %p", tmp->buf,
tmp->buf->vaddr,
tmp->buf->iova);
mutex_unlock(&msm_buffer_mutex);
return -EINVAL;
}
}
rb_link_node(&node->rb_node_paddr, parent, p);
rb_insert_color(&node->rb_node_paddr, root);
mutex_unlock(&msm_buffer_mutex);
return 0;
}
static int remove_buffer(struct msm_buffer_node *victim_node)
{
struct rb_root *root = &buffer_root;
if (!victim_node)
return -EINVAL;
mutex_lock(&msm_buffer_mutex);
rb_erase(&victim_node->rb_node_all_buffer, root);
mutex_unlock(&msm_buffer_mutex);
return 0;
}
static int remove_buffer_phys(struct msm_buffer_node *victim_node)
{
struct rb_root *root = &phys_root;
if (!victim_node)
return -EINVAL;
mutex_lock(&msm_buffer_mutex);
rb_erase(&victim_node->rb_node_paddr, root);
mutex_unlock(&msm_buffer_mutex);
return 0;
}
static unsigned long msm_subsystem_get_domain_no(int subsys_id)
{
if (subsys_id > INVALID_SUBSYS_ID && subsys_id <= MAX_SUBSYSTEM_ID &&
subsys_id < ARRAY_SIZE(subsystem_to_domain_tbl))
return subsystem_to_domain_tbl[subsys_id];
else
return subsystem_to_domain_tbl[MAX_SUBSYSTEM_ID];
}
static unsigned long msm_subsystem_get_partition_no(int subsys_id)
{
switch (subsys_id) {
case MSM_SUBSYSTEM_VIDEO_FWARE:
return VIDEO_FIRMWARE_POOL;
case MSM_SUBSYSTEM_VIDEO:
return VIDEO_MAIN_POOL;
case MSM_SUBSYSTEM_CAMERA:
case MSM_SUBSYSTEM_DISPLAY:
case MSM_SUBSYSTEM_ROTATOR:
return GEN_POOL;
default:
return 0xFFFFFFFF;
}
}
phys_addr_t msm_subsystem_check_iova_mapping(int subsys_id, unsigned long iova)
{
struct iommu_domain *subsys_domain;
if (!msm_use_iommu())
/*
* If there is no iommu, just return the iova in this case.
*/
return iova;
subsys_domain = msm_get_iommu_domain(msm_subsystem_get_domain_no
(subsys_id));
return iommu_iova_to_phys(subsys_domain, iova);
}
EXPORT_SYMBOL(msm_subsystem_check_iova_mapping);
struct msm_mapped_buffer *msm_subsystem_map_buffer(unsigned long phys,
unsigned int length,
unsigned int flags,
int *subsys_ids,
unsigned int nsubsys)
{
struct msm_mapped_buffer *buf, *err;
struct msm_buffer_node *node;
int i = 0, j = 0, ret;
unsigned long iova_start = 0, temp_phys, temp_va = 0;
struct iommu_domain *d = NULL;
int map_size = length;
if (!((flags & MSM_SUBSYSTEM_MAP_KADDR) ||
(flags & MSM_SUBSYSTEM_MAP_IOVA))) {
pr_warn("%s: no mapping flag was specified. The caller"
" should explicitly specify what to map in the"
" flags.\n", __func__);
err = ERR_PTR(-EINVAL);
goto outret;
}
buf = kzalloc(sizeof(*buf), GFP_ATOMIC);
if (!buf) {
err = ERR_PTR(-ENOMEM);
goto outret;
}
node = kzalloc(sizeof(*node), GFP_ATOMIC);
if (!node) {
err = ERR_PTR(-ENOMEM);
goto outkfreebuf;
}
node->phys = phys;
if (flags & MSM_SUBSYSTEM_MAP_KADDR) {
struct msm_buffer_node *old_buffer;
old_buffer = find_buffer_phys(phys);
if (old_buffer) {
WARN(1, "%s: Attempting to map %lx twice in the kernel"
" virtual space. Don't do that!\n", __func__,
phys);
err = ERR_PTR(-EINVAL);
goto outkfreenode;
}
if (flags & MSM_SUBSYSTEM_MAP_CACHED)
buf->vaddr = ioremap(phys, length);
else if (flags & MSM_SUBSYSTEM_MAP_KADDR)
buf->vaddr = ioremap_nocache(phys, length);
else {
pr_warn("%s: no cachability flag was indicated. Caller"
" must specify a cachability flag.\n",
__func__);
err = ERR_PTR(-EINVAL);
goto outkfreenode;
}
if (!buf->vaddr) {
pr_err("%s: could not ioremap\n", __func__);
err = ERR_PTR(-EINVAL);
goto outkfreenode;
}
if (add_buffer_phys(node)) {
err = ERR_PTR(-EINVAL);
goto outiounmap;
}
}
if ((flags & MSM_SUBSYSTEM_MAP_IOVA) && subsys_ids) {
int min_align;
length = round_up(length, SZ_4K);
if (flags & MSM_SUBSYSTEM_MAP_IOMMU_2X)
map_size = 2 * length;
else
map_size = length;
buf->iova = kzalloc(sizeof(unsigned long)*nsubsys, GFP_ATOMIC);
if (!buf->iova) {
err = ERR_PTR(-ENOMEM);
goto outremovephys;
}
/*
* The alignment must be specified as the exact value wanted
* e.g. 8k alignment must pass (0x2000 | other flags)
*/
min_align = flags & ~(SZ_4K - 1);
for (i = 0; i < nsubsys; i++) {
unsigned int domain_no, partition_no;
if (!msm_use_iommu()) {
buf->iova[i] = phys;
continue;
}
d = msm_get_iommu_domain(
msm_subsystem_get_domain_no(subsys_ids[i]));
if (!d) {
pr_err("%s: could not get domain for subsystem"
" %d\n", __func__, subsys_ids[i]);
continue;
}
domain_no = msm_subsystem_get_domain_no(subsys_ids[i]);
partition_no = msm_subsystem_get_partition_no(
subsys_ids[i]);
ret = msm_allocate_iova_address(domain_no,
partition_no,
map_size,
max(min_align, SZ_4K),
&iova_start);
if (ret) {
pr_err("%s: could not allocate iova address\n",
__func__);
continue;
}
temp_phys = phys;
temp_va = iova_start;
for (j = length; j > 0; j -= SZ_4K,
temp_phys += SZ_4K,
temp_va += SZ_4K) {
ret = iommu_map(d, temp_va, temp_phys,
SZ_4K,
(IOMMU_READ | IOMMU_WRITE));
if (ret) {
pr_err("%s: could not map iommu for"
" domain %p, iova %lx,"
" phys %lx\n", __func__, d,
temp_va, temp_phys);
err = ERR_PTR(-EINVAL);
goto outdomain;
}
}
buf->iova[i] = iova_start;
if (flags & MSM_SUBSYSTEM_MAP_IOMMU_2X)
msm_iommu_map_extra
(d, temp_va, length, SZ_4K,
(IOMMU_READ | IOMMU_WRITE));
}
}
node->buf = buf;
node->subsystems = subsys_ids;
node->length = map_size;
node->nsubsys = nsubsys;
if (add_buffer(node)) {
err = ERR_PTR(-EINVAL);
goto outiova;
}
return buf;
outiova:
if (flags & MSM_SUBSYSTEM_MAP_IOVA)
iommu_unmap(d, temp_va, SZ_4K);
outdomain:
if (flags & MSM_SUBSYSTEM_MAP_IOVA) {
/* Unmap the rest of the current domain, i */
for (j -= SZ_4K, temp_va -= SZ_4K;
j > 0; temp_va -= SZ_4K, j -= SZ_4K)
iommu_unmap(d, temp_va, SZ_4K);
/* Unmap all the other domains */
for (i--; i >= 0; i--) {
unsigned int domain_no, partition_no;
if (!msm_use_iommu())
continue;
domain_no = msm_subsystem_get_domain_no(subsys_ids[i]);
partition_no = msm_subsystem_get_partition_no(
subsys_ids[i]);
temp_va = buf->iova[i];
for (j = length; j > 0; j -= SZ_4K,
temp_va += SZ_4K)
iommu_unmap(d, temp_va, SZ_4K);
msm_free_iova_address(buf->iova[i], domain_no,
partition_no, length);
}
kfree(buf->iova);
}
outremovephys:
if (flags & MSM_SUBSYSTEM_MAP_KADDR)
remove_buffer_phys(node);
outiounmap:
if (flags & MSM_SUBSYSTEM_MAP_KADDR)
iounmap(buf->vaddr);
outkfreenode:
kfree(node);
outkfreebuf:
kfree(buf);
outret:
return err;
}
EXPORT_SYMBOL(msm_subsystem_map_buffer);
int msm_subsystem_unmap_buffer(struct msm_mapped_buffer *buf)
{
struct msm_buffer_node *node;
int i, j, ret;
unsigned long temp_va;
if (IS_ERR_OR_NULL(buf))
goto out;
if (buf->vaddr)
node = find_buffer(buf->vaddr);
else
node = find_buffer(buf);
if (!node)
goto out;
if (node->buf != buf) {
pr_err("%s: caller must pass in the same buffer structure"
" returned from map_buffer when freeding\n", __func__);
goto out;
}
if (buf->iova) {
if (msm_use_iommu())
for (i = 0; i < node->nsubsys; i++) {
struct iommu_domain *subsys_domain;
unsigned int domain_no, partition_no;
subsys_domain = msm_get_iommu_domain(
msm_subsystem_get_domain_no(
node->subsystems[i]));
domain_no = msm_subsystem_get_domain_no(
node->subsystems[i]);
partition_no = msm_subsystem_get_partition_no(
node->subsystems[i]);
temp_va = buf->iova[i];
for (j = node->length; j > 0; j -= SZ_4K,
temp_va += SZ_4K) {
ret = iommu_unmap(subsys_domain,
temp_va,
SZ_4K);
WARN(ret, "iommu_unmap returned a "
" non-zero value.\n");
}
msm_free_iova_address(buf->iova[i], domain_no,
partition_no, node->length);
}
kfree(buf->iova);
}
if (buf->vaddr) {
remove_buffer_phys(node);
iounmap(buf->vaddr);
}
remove_buffer(node);
kfree(node);
kfree(buf);
return 0;
out:
return -EINVAL;
}
EXPORT_SYMBOL(msm_subsystem_unmap_buffer);

View File

@@ -33,9 +33,19 @@ static void __iomem *l2x0_base;
static DEFINE_RAW_SPINLOCK(l2x0_lock);
static u32 l2x0_way_mask; /* Bitmask of active ways */
static u32 l2x0_size;
static u32 l2x0_cache_id;
static unsigned int l2x0_sets;
static unsigned int l2x0_ways;
static unsigned long sync_reg_offset = L2X0_CACHE_SYNC;
static void pl310_save(void);
static inline bool is_pl310_rev(int rev)
{
return (l2x0_cache_id &
(L2X0_CACHE_ID_PART_MASK | L2X0_CACHE_ID_REV_MASK)) ==
(L2X0_CACHE_ID_PART_L310 | rev);
}
struct l2x0_regs l2x0_saved_regs;
struct l2x0_of_data {
@@ -132,6 +142,23 @@ void l2x0_cache_sync(void)
raw_spin_unlock_irqrestore(&l2x0_lock, flags);
}
#ifdef CONFIG_PL310_ERRATA_727915
static void l2x0_for_each_set_way(void __iomem *reg)
{
int set;
int way;
unsigned long flags;
for (way = 0; way < l2x0_ways; way++) {
raw_spin_lock_irqsave(&l2x0_lock, flags);
for (set = 0; set < l2x0_sets; set++)
writel_relaxed((way << 28) | (set << 5), reg);
cache_sync();
raw_spin_unlock_irqrestore(&l2x0_lock, flags);
}
}
#endif
static void __l2x0_flush_all(void)
{
debug_writel(0x03);
@@ -145,6 +172,13 @@ static void l2x0_flush_all(void)
{
unsigned long flags;
#ifdef CONFIG_PL310_ERRATA_727915
if (is_pl310_rev(REV_PL310_R2P0)) {
l2x0_for_each_set_way(l2x0_base + L2X0_CLEAN_INV_LINE_IDX);
return;
}
#endif
/* clean all ways */
raw_spin_lock_irqsave(&l2x0_lock, flags);
__l2x0_flush_all();
@@ -155,11 +189,20 @@ static void l2x0_clean_all(void)
{
unsigned long flags;
#ifdef CONFIG_PL310_ERRATA_727915
if (is_pl310_rev(REV_PL310_R2P0)) {
l2x0_for_each_set_way(l2x0_base + L2X0_CLEAN_LINE_IDX);
return;
}
#endif
/* clean all ways */
raw_spin_lock_irqsave(&l2x0_lock, flags);
debug_writel(0x03);
writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_CLEAN_WAY);
cache_wait_way(l2x0_base + L2X0_CLEAN_WAY, l2x0_way_mask);
cache_sync();
debug_writel(0x00);
raw_spin_unlock_irqrestore(&l2x0_lock, flags);
}
@@ -311,26 +354,24 @@ static void l2x0_unlock(u32 cache_id)
void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
{
u32 aux;
u32 cache_id;
u32 way_size = 0;
int ways;
const char *type;
l2x0_base = base;
cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID);
l2x0_cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID);
aux = readl_relaxed(l2x0_base + L2X0_AUX_CTRL);
aux &= aux_mask;
aux |= aux_val;
/* Determine the number of ways */
switch (cache_id & L2X0_CACHE_ID_PART_MASK) {
switch (l2x0_cache_id & L2X0_CACHE_ID_PART_MASK) {
case L2X0_CACHE_ID_PART_L310:
if (aux & (1 << 16))
ways = 16;
l2x0_ways = 16;
else
ways = 8;
l2x0_ways = 8;
type = "L310";
#ifdef CONFIG_PL310_ERRATA_753970
/* Unmapped register. */
@@ -339,24 +380,25 @@ void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
outer_cache.set_debug = pl310_set_debug;
break;
case L2X0_CACHE_ID_PART_L210:
ways = (aux >> 13) & 0xf;
l2x0_ways = (aux >> 13) & 0xf;
type = "L210";
break;
default:
/* Assume unknown chips have 8 ways */
ways = 8;
l2x0_ways = 8;
type = "L2x0 series";
break;
}
l2x0_way_mask = (1 << ways) - 1;
l2x0_way_mask = (1 << l2x0_ways) - 1;
/*
* L2 cache Size = Way size * Number of ways
*/
way_size = (aux & L2X0_AUX_CTRL_WAY_SIZE_MASK) >> 17;
way_size = 1 << (way_size + 3);
l2x0_size = ways * way_size * SZ_1K;
way_size = SZ_1K << (way_size + 3);
l2x0_size = l2x0_ways * way_size;
l2x0_sets = way_size / CACHE_LINE_SIZE;
/*
* Check if l2x0 controller is already enabled.
@@ -365,7 +407,7 @@ void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
*/
if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & 1)) {
/* Make sure that I&D is not locked down when starting */
l2x0_unlock(cache_id);
l2x0_unlock(l2x0_cache_id);
/* l2x0 controller is disabled */
writel_relaxed(aux, l2x0_base + L2X0_AUX_CTRL);
@@ -388,7 +430,7 @@ void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
printk(KERN_INFO "%s cache controller enabled\n", type);
printk(KERN_INFO "l2x0: %d ways, CACHE_ID 0x%08x, AUX_CTRL 0x%08x, Cache size: %d B\n",
ways, cache_id, aux, l2x0_size);
l2x0_ways, l2x0_cache_id, aux, l2x0_size);
/* Save the L2X0 contents, as they are not modified else where */
pl310_save();
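As a worked example of the sizing math above: on an 8-way PL310 whose AUX_CTRL way-size field reads 3, way_size = SZ_1K << (3 + 3) = 64 KB, so l2x0_size = 8 * 64 KB = 512 KB and, assuming the PL310's 32-byte cache lines, l2x0_sets = 65536 / 32 = 2048. The errata 727915 workaround loop above therefore issues 8 * 2048 indexed clean/invalidate writes per flush.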

View File

@@ -272,6 +272,11 @@ v6_dma_clean_range:
* - end - virtual end address of region
*/
ENTRY(v6_dma_flush_range)
#ifdef CONFIG_CACHE_FLUSH_RANGE_LIMIT
sub r2, r1, r0
cmp r2, #CONFIG_CACHE_FLUSH_RANGE_LIMIT
bhi v6_dma_flush_dcache_all
#endif
#ifdef CONFIG_DMA_CACHE_RWFO
ldrb r2, [r0] @ read for ownership
strb r2, [r0] @ write for ownership
@@ -294,6 +299,18 @@ ENTRY(v6_dma_flush_range)
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer
mov pc, lr
#ifdef CONFIG_CACHE_FLUSH_RANGE_LIMIT
v6_dma_flush_dcache_all:
mov r0, #0
#ifdef HARVARD_CACHE
mcr p15, 0, r0, c7, c14, 0 @ D cache clean+invalidate
#else
mcr p15, 0, r0, c7, c15, 0 @ Cache clean+invalidate
#endif
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer
mov pc, lr
#endif
/*
* dma_map_area(start, size, dir)
* - start - kernel virtual start address

View File

@@ -1,13 +1,6 @@
#ifndef _ASM_X86_IDLE_H
#define _ASM_X86_IDLE_H
#define IDLE_START 1
#define IDLE_END 2
struct notifier_block;
void idle_notifier_register(struct notifier_block *n);
void idle_notifier_unregister(struct notifier_block *n);
#ifdef CONFIG_X86_64
void enter_idle(void);
void exit_idle(void);

View File

@@ -29,19 +29,6 @@
#ifdef CONFIG_X86_64
static DEFINE_PER_CPU(unsigned char, is_idle);
static ATOMIC_NOTIFIER_HEAD(idle_notifier);
void idle_notifier_register(struct notifier_block *n)
{
atomic_notifier_chain_register(&idle_notifier, n);
}
EXPORT_SYMBOL_GPL(idle_notifier_register);
void idle_notifier_unregister(struct notifier_block *n)
{
atomic_notifier_chain_unregister(&idle_notifier, n);
}
EXPORT_SYMBOL_GPL(idle_notifier_unregister);
#endif
struct kmem_cache *task_xstate_cachep;
@@ -378,14 +365,14 @@ static inline void play_dead(void)
void enter_idle(void)
{
percpu_write(is_idle, 1);
atomic_notifier_call_chain(&idle_notifier, IDLE_START, NULL);
idle_notifier_call_chain(IDLE_START);
}
static void __exit_idle(void)
{
if (x86_test_and_clear_bit_percpu(0, is_idle) == 0)
return;
atomic_notifier_call_chain(&idle_notifier, IDLE_END, NULL);
idle_notifier_call_chain(IDLE_END);
}
/* Called from interrupts to signify idle end */

View File

@@ -96,6 +96,8 @@ source "drivers/memstick/Kconfig"
source "drivers/leds/Kconfig"
source "drivers/switch/Kconfig"
source "drivers/accessibility/Kconfig"
source "drivers/infiniband/Kconfig"

View File

@@ -100,6 +100,7 @@ obj-$(CONFIG_CPU_IDLE) += cpuidle/
obj-y += mmc/
obj-$(CONFIG_MEMSTICK) += memstick/
obj-y += leds/
obj-$(CONFIG_SWITCH) += switch/
obj-$(CONFIG_INFINIBAND) += infiniband/
obj-$(CONFIG_SGI_SN) += sn/
obj-y += firmware/

View File

@@ -192,4 +192,30 @@ config DMA_SHARED_BUFFER
APIs extension; the file's descriptor can then be passed on to other
driver.
config SYNC
bool "Synchronization framework"
default n
select ANON_INODES
help
This option enables the framework for synchronization between multiple
drivers. Sync implementations can take advantage of hardware
synchronization built into devices like GPUs.
config SW_SYNC
bool "Software synchronization objects"
default n
depends on SYNC
help
A sync object driver that uses a 32bit counter to coordinate
synchronization. Useful when there is no hardware primitive backing
the synchronization.
config SW_SYNC_USER
bool "Userspace API for SW_SYNC"
default n
depends on SW_SYNC
help
Provides a user space API to the sw sync object.
*WARNING* improper use of this can result in deadlocking kernel
drivers from userspace.
endmenu

View File

@@ -21,5 +21,8 @@ obj-$(CONFIG_SYS_HYPERVISOR) += hypervisor.o
obj-$(CONFIG_REGMAP) += regmap/
obj-$(CONFIG_SOC_BUS) += soc.o
obj-$(CONFIG_SYNC) += sync.o
obj-$(CONFIG_SW_SYNC) += sw_sync.o
ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG

View File

@@ -44,8 +44,26 @@ static int dma_buf_release(struct inode *inode, struct file *file)
return 0;
}
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
if (!is_dma_buf_file(file))
return -EINVAL;
dmabuf = file->private_data;
/* check for overflowing the buffer's size */
if (vma->vm_pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) >
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
return dmabuf->ops->mmap(dmabuf, vma);
}
static const struct file_operations dma_buf_fops = {
.release = dma_buf_release,
.mmap = dma_buf_mmap_internal,
};
/*
@@ -82,7 +100,8 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
|| !ops->unmap_dma_buf
|| !ops->release
|| !ops->kmap_atomic
|| !ops->kmap)) {
|| !ops->kmap
|| !ops->mmap)) {
return ERR_PTR(-EINVAL);
}
@@ -406,3 +425,46 @@ void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num,
dmabuf->ops->kunmap(dmabuf, page_num, vaddr);
}
EXPORT_SYMBOL_GPL(dma_buf_kunmap);
/**
* dma_buf_mmap - Setup up a userspace mmap with the given vma
* @dma_buf: [in] buffer that should back the vma
* @vma: [in] vma for the mmap
* @pgoff: [in] offset in pages where this mmap should start within the
* dma-buf buffer.
*
* This function adjusts the passed in vma so that it points at the file of the
* dma_buf operation. It also adjusts the starting pgoff and does bounds
* checking on the size of the vma. Then it calls the exporter's mmap function to
* set up the mapping.
*
* Can return negative error values, returns 0 on success.
*/
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
/* check for offset overflow */
if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) < pgoff)
return -EOVERFLOW;
/* check for overflowing the buffer's size */
if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) >
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
/* readjust the vma */
if (vma->vm_file)
fput(vma->vm_file);
vma->vm_file = dmabuf->file;
get_file(vma->vm_file);
vma->vm_pgoff = pgoff;
return dmabuf->ops->mmap(dmabuf, vma);
}
EXPORT_SYMBOL_GPL(dma_buf_mmap);
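Since ops->mmap is now mandatory for exporters, here is a hedged sketch of what an exporter's hook might look like; struct example_buffer and its pages field are invented for illustration:

/* Hypothetical exporter-side mmap hook for the new mandatory op. */
static int example_dmabuf_mmap(struct dma_buf *dmabuf,
			       struct vm_area_struct *vma)
{
	struct example_buffer *ebuf = dmabuf->priv;	/* assumed priv type */
	/* Back the whole vma with the exporter's contiguous pages. */
	return remap_pfn_range(vma, vma->vm_start,
			       page_to_pfn(ebuf->pages) + vma->vm_pgoff,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}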

View File

@@ -28,6 +28,7 @@
#include <linux/sched.h>
#include <linux/async.h>
#include <linux/suspend.h>
#include <linux/timer.h>
#include "../base.h"
#include "power.h"
@@ -54,6 +55,12 @@ struct suspend_stats suspend_stats;
static DEFINE_MUTEX(dpm_list_mtx);
static pm_message_t pm_transition;
static void dpm_drv_timeout(unsigned long data);
struct dpm_drv_wd_data {
struct device *dev;
struct task_struct *tsk;
};
static int async_error;
/**
@@ -658,6 +665,30 @@ static bool is_async(struct device *dev)
&& !pm_trace_is_enabled();
}
/**
* dpm_drv_timeout - Driver suspend / resume watchdog handler
* @data: struct device which timed out
*
* Called when a driver has timed out suspending or resuming.
* There's not much we can do here to recover, so
* BUG() out for a crash-dump
*
*/
static void dpm_drv_timeout(unsigned long data)
{
struct dpm_drv_wd_data *wd_data = (void *)data;
struct device *dev = wd_data->dev;
struct task_struct *tsk = wd_data->tsk;
printk(KERN_EMERG "**** DPM device timeout: %s (%s)\n", dev_name(dev),
(dev->driver ? dev->driver->name : "no driver"));
printk(KERN_EMERG "dpm suspend stack:\n");
show_stack(tsk, NULL);
BUG();
}
/**
* dpm_resume - Execute "resume" callbacks for non-sysdev devices.
* @state: PM transition of the system being carried out.
@@ -1017,6 +1048,8 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
pm_callback_t callback = NULL;
char *info = NULL;
int error = 0;
struct timer_list timer;
struct dpm_drv_wd_data data;
dpm_wait_for_children(dev, async);
@@ -1033,6 +1066,14 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
return 0;
}
data.dev = dev;
data.tsk = get_current();
init_timer_on_stack(&timer);
timer.expires = jiffies + HZ * 12;
timer.function = dpm_drv_timeout;
timer.data = (unsigned long)&data;
add_timer(&timer);
device_lock(dev);
if (dev->pm_domain) {
@@ -1087,6 +1128,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
}
device_unlock(dev);
del_timer_sync(&timer);
destroy_timer_on_stack(&timer);
complete_all(&dev->power.completion);
if (error) {

259
drivers/base/sw_sync.c Normal file
View File

@@ -0,0 +1,259 @@
/*
* drivers/base/sw_sync.c
*
* Copyright (C) 2012 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/sw_sync.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
static int sw_sync_cmp(u32 a, u32 b)
{
if (a == b)
return 0;
return ((s32)a - (s32)b) < 0 ? -1 : 1;
}
struct sync_pt *sw_sync_pt_create(struct sw_sync_timeline *obj, u32 value)
{
struct sw_sync_pt *pt;
pt = (struct sw_sync_pt *)
sync_pt_create(&obj->obj, sizeof(struct sw_sync_pt));
pt->value = value;
return (struct sync_pt *)pt;
}
EXPORT_SYMBOL(sw_sync_pt_create);
static struct sync_pt *sw_sync_pt_dup(struct sync_pt *sync_pt)
{
struct sw_sync_pt *pt = (struct sw_sync_pt *) sync_pt;
struct sw_sync_timeline *obj =
(struct sw_sync_timeline *)sync_pt->parent;
return (struct sync_pt *) sw_sync_pt_create(obj, pt->value);
}
static int sw_sync_pt_has_signaled(struct sync_pt *sync_pt)
{
struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
struct sw_sync_timeline *obj =
(struct sw_sync_timeline *)sync_pt->parent;
return sw_sync_cmp(obj->value, pt->value) >= 0;
}
static int sw_sync_pt_compare(struct sync_pt *a, struct sync_pt *b)
{
struct sw_sync_pt *pt_a = (struct sw_sync_pt *)a;
struct sw_sync_pt *pt_b = (struct sw_sync_pt *)b;
return sw_sync_cmp(pt_a->value, pt_b->value);
}
static void sw_sync_print_obj(struct seq_file *s,
struct sync_timeline *sync_timeline)
{
struct sw_sync_timeline *obj = (struct sw_sync_timeline *)sync_timeline;
seq_printf(s, "%d", obj->value);
}
static void sw_sync_print_pt(struct seq_file *s, struct sync_pt *sync_pt)
{
struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
struct sw_sync_timeline *obj =
(struct sw_sync_timeline *)sync_pt->parent;
seq_printf(s, "%d / %d", pt->value, obj->value);
}
static int sw_sync_fill_driver_data(struct sync_pt *sync_pt,
void *data, int size)
{
struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
if (size < sizeof(pt->value))
return -ENOMEM;
memcpy(data, &pt->value, sizeof(pt->value));
return sizeof(pt->value);
}
struct sync_timeline_ops sw_sync_timeline_ops = {
.driver_name = "sw_sync",
.dup = sw_sync_pt_dup,
.has_signaled = sw_sync_pt_has_signaled,
.compare = sw_sync_pt_compare,
.print_obj = sw_sync_print_obj,
.print_pt = sw_sync_print_pt,
.fill_driver_data = sw_sync_fill_driver_data,
};
struct sw_sync_timeline *sw_sync_timeline_create(const char *name)
{
struct sw_sync_timeline *obj = (struct sw_sync_timeline *)
sync_timeline_create(&sw_sync_timeline_ops,
sizeof(struct sw_sync_timeline),
name);
return obj;
}
EXPORT_SYMBOL(sw_sync_timeline_create);
void sw_sync_timeline_inc(struct sw_sync_timeline *obj, u32 inc)
{
obj->value += inc;
sync_timeline_signal(&obj->obj);
}
EXPORT_SYMBOL(sw_sync_timeline_inc);
#ifdef CONFIG_SW_SYNC_USER
/* *WARNING*
*
* improper use of this can result in deadlocking kernel drivers from userspace.
*/
/* opening sw_sync creates a new sync obj */
int sw_sync_open(struct inode *inode, struct file *file)
{
struct sw_sync_timeline *obj;
char task_comm[TASK_COMM_LEN];
get_task_comm(task_comm, current);
obj = sw_sync_timeline_create(task_comm);
if (obj == NULL)
return -ENOMEM;
file->private_data = obj;
return 0;
}
int sw_sync_release(struct inode *inode, struct file *file)
{
struct sw_sync_timeline *obj = file->private_data;
sync_timeline_destroy(&obj->obj);
return 0;
}
long sw_sync_ioctl_create_fence(struct sw_sync_timeline *obj, unsigned long arg)
{
int fd = get_unused_fd();
int err;
struct sync_pt *pt;
struct sync_fence *fence;
struct sw_sync_create_fence_data data;
if (fd < 0)
return fd;
if (copy_from_user(&data, (void __user *)arg, sizeof(data)))
return -EFAULT;
pt = sw_sync_pt_create(obj, data.value);
if (pt == NULL) {
err = -ENOMEM;
goto err;
}
data.name[sizeof(data.name) - 1] = '\0';
fence = sync_fence_create(data.name, pt);
if (fence == NULL) {
sync_pt_free(pt);
err = -ENOMEM;
goto err;
}
data.fence = fd;
if (copy_to_user((void __user *)arg, &data, sizeof(data))) {
sync_fence_put(fence);
err = -EFAULT;
goto err;
}
sync_fence_install(fence, fd);
return 0;
err:
put_unused_fd(fd);
return err;
}
long sw_sync_ioctl_inc(struct sw_sync_timeline *obj, unsigned long arg)
{
u32 value;
if (copy_from_user(&value, (void __user *)arg, sizeof(value)))
return -EFAULT;
sw_sync_timeline_inc(obj, value);
return 0;
}
long sw_sync_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
struct sw_sync_timeline *obj = file->private_data;
switch (cmd) {
case SW_SYNC_IOC_CREATE_FENCE:
return sw_sync_ioctl_create_fence(obj, arg);
case SW_SYNC_IOC_INC:
return sw_sync_ioctl_inc(obj, arg);
default:
return -ENOTTY;
}
}
static const struct file_operations sw_sync_fops = {
.owner = THIS_MODULE,
.open = sw_sync_open,
.release = sw_sync_release,
.unlocked_ioctl = sw_sync_ioctl,
};
static struct miscdevice sw_sync_dev = {
.minor = MISC_DYNAMIC_MINOR,
.name = "sw_sync",
.fops = &sw_sync_fops,
};
int __init sw_sync_device_init(void)
{
return misc_register(&sw_sync_dev);
}
void __exit sw_sync_device_remove(void)
{
misc_deregister(&sw_sync_dev);
}
module_init(sw_sync_device_init);
module_exit(sw_sync_device_remove);
#endif /* CONFIG_SW_SYNC_USER */
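A hedged user-space sketch of the ioctl flow implemented above, assuming the sw_sync ioctl definitions are exported to user space and that the misc device registers as /dev/sw_sync:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/sw_sync.h>	/* assumed uapi header for the ioctls */
int example(void)
{
	struct sw_sync_create_fence_data data = { .value = 1 };
	__u32 inc = 1;
	int timeline = open("/dev/sw_sync", O_RDWR);
	if (timeline < 0)
		return -1;
	strcpy(data.name, "example");
	/* Create a fence that signals when the timeline reaches 1. */
	if (ioctl(timeline, SW_SYNC_IOC_CREATE_FENCE, &data) < 0) {
		close(timeline);
		return -1;
	}
	/* Advance the timeline by 1, signalling the fence above. */
	ioctl(timeline, SW_SYNC_IOC_INC, &inc);
	close(data.fence);
	close(timeline);
	return 0;
}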

832
drivers/base/sync.c Normal file
View File

@@ -0,0 +1,832 @@
/*
* drivers/base/sync.c
*
* Copyright (C) 2012 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/poll.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/sync.h>
#include <linux/uaccess.h>
#include <linux/anon_inodes.h>
static void sync_fence_signal_pt(struct sync_pt *pt);
static int _sync_pt_has_signaled(struct sync_pt *pt);
static LIST_HEAD(sync_timeline_list_head);
static DEFINE_SPINLOCK(sync_timeline_list_lock);
static LIST_HEAD(sync_fence_list_head);
static DEFINE_SPINLOCK(sync_fence_list_lock);
struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops,
int size, const char *name)
{
struct sync_timeline *obj;
unsigned long flags;
if (size < sizeof(struct sync_timeline))
return NULL;
obj = kzalloc(size, GFP_KERNEL);
if (obj == NULL)
return NULL;
obj->ops = ops;
strlcpy(obj->name, name, sizeof(obj->name));
INIT_LIST_HEAD(&obj->child_list_head);
spin_lock_init(&obj->child_list_lock);
INIT_LIST_HEAD(&obj->active_list_head);
spin_lock_init(&obj->active_list_lock);
spin_lock_irqsave(&sync_timeline_list_lock, flags);
list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head);
spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
return obj;
}
EXPORT_SYMBOL(sync_timeline_create);
static void sync_timeline_free(struct sync_timeline *obj)
{
unsigned long flags;
if (obj->ops->release_obj)
obj->ops->release_obj(obj);
spin_lock_irqsave(&sync_timeline_list_lock, flags);
list_del(&obj->sync_timeline_list);
spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
kfree(obj);
}
void sync_timeline_destroy(struct sync_timeline *obj)
{
unsigned long flags;
bool needs_freeing;
spin_lock_irqsave(&obj->child_list_lock, flags);
obj->destroyed = true;
needs_freeing = list_empty(&obj->child_list_head);
spin_unlock_irqrestore(&obj->child_list_lock, flags);
if (needs_freeing)
sync_timeline_free(obj);
else
sync_timeline_signal(obj);
}
EXPORT_SYMBOL(sync_timeline_destroy);
static void sync_timeline_add_pt(struct sync_timeline *obj, struct sync_pt *pt)
{
unsigned long flags;
pt->parent = obj;
spin_lock_irqsave(&obj->child_list_lock, flags);
list_add_tail(&pt->child_list, &obj->child_list_head);
spin_unlock_irqrestore(&obj->child_list_lock, flags);
}
static void sync_timeline_remove_pt(struct sync_pt *pt)
{
struct sync_timeline *obj = pt->parent;
unsigned long flags;
bool needs_freeing;
spin_lock_irqsave(&obj->active_list_lock, flags);
if (!list_empty(&pt->active_list))
list_del_init(&pt->active_list);
spin_unlock_irqrestore(&obj->active_list_lock, flags);
spin_lock_irqsave(&obj->child_list_lock, flags);
list_del(&pt->child_list);
needs_freeing = obj->destroyed && list_empty(&obj->child_list_head);
spin_unlock_irqrestore(&obj->child_list_lock, flags);
if (needs_freeing)
sync_timeline_free(obj);
}
void sync_timeline_signal(struct sync_timeline *obj)
{
unsigned long flags;
LIST_HEAD(signaled_pts);
struct list_head *pos, *n;
spin_lock_irqsave(&obj->active_list_lock, flags);
list_for_each_safe(pos, n, &obj->active_list_head) {
struct sync_pt *pt =
container_of(pos, struct sync_pt, active_list);
if (_sync_pt_has_signaled(pt))
list_move(pos, &signaled_pts);
}
spin_unlock_irqrestore(&obj->active_list_lock, flags);
list_for_each_safe(pos, n, &signaled_pts) {
struct sync_pt *pt =
container_of(pos, struct sync_pt, active_list);
list_del_init(pos);
sync_fence_signal_pt(pt);
}
}
EXPORT_SYMBOL(sync_timeline_signal);
struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size)
{
struct sync_pt *pt;
if (size < sizeof(struct sync_pt))
return NULL;
pt = kzalloc(size, GFP_KERNEL);
if (pt == NULL)
return NULL;
INIT_LIST_HEAD(&pt->active_list);
sync_timeline_add_pt(parent, pt);
return pt;
}
EXPORT_SYMBOL(sync_pt_create);
void sync_pt_free(struct sync_pt *pt)
{
if (pt->parent->ops->free_pt)
pt->parent->ops->free_pt(pt);
sync_timeline_remove_pt(pt);
kfree(pt);
}
EXPORT_SYMBOL(sync_pt_free);
/* call with pt->parent->active_list_lock held */
static int _sync_pt_has_signaled(struct sync_pt *pt)
{
int old_status = pt->status;
if (!pt->status)
pt->status = pt->parent->ops->has_signaled(pt);
if (!pt->status && pt->parent->destroyed)
pt->status = -ENOENT;
if (pt->status != old_status)
pt->timestamp = ktime_get();
return pt->status;
}
static struct sync_pt *sync_pt_dup(struct sync_pt *pt)
{
return pt->parent->ops->dup(pt);
}
/* Adds a sync pt to the active queue. Called when added to a fence */
static void sync_pt_activate(struct sync_pt *pt)
{
struct sync_timeline *obj = pt->parent;
unsigned long flags;
int err;
spin_lock_irqsave(&obj->active_list_lock, flags);
err = _sync_pt_has_signaled(pt);
if (err != 0) {
sync_fence_signal_pt(pt);
goto out;
}
list_add_tail(&pt->active_list, &obj->active_list_head);
out:
spin_unlock_irqrestore(&obj->active_list_lock, flags);
}
static int sync_fence_release(struct inode *inode, struct file *file);
static unsigned int sync_fence_poll(struct file *file, poll_table *wait);
static long sync_fence_ioctl(struct file *file, unsigned int cmd,
unsigned long arg);
static const struct file_operations sync_fence_fops = {
.release = sync_fence_release,
.poll = sync_fence_poll,
.unlocked_ioctl = sync_fence_ioctl,
};
static struct sync_fence *sync_fence_alloc(const char *name)
{
struct sync_fence *fence;
unsigned long flags;
fence = kzalloc(sizeof(struct sync_fence), GFP_KERNEL);
if (fence == NULL)
return NULL;
fence->file = anon_inode_getfile("sync_fence", &sync_fence_fops,
fence, 0);
if (fence->file == NULL)
goto err;
strlcpy(fence->name, name, sizeof(fence->name));
INIT_LIST_HEAD(&fence->pt_list_head);
INIT_LIST_HEAD(&fence->waiter_list_head);
spin_lock_init(&fence->waiter_list_lock);
init_waitqueue_head(&fence->wq);
spin_lock_irqsave(&sync_fence_list_lock, flags);
list_add_tail(&fence->sync_fence_list, &sync_fence_list_head);
spin_unlock_irqrestore(&sync_fence_list_lock, flags);
return fence;
err:
kfree(fence);
return NULL;
}
/* TODO: implement a create which takes more than one sync_pt */
struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
{
struct sync_fence *fence;
if (pt->fence)
return NULL;
fence = sync_fence_alloc(name);
if (fence == NULL)
return NULL;
pt->fence = fence;
list_add(&pt->pt_list, &fence->pt_list_head);
sync_pt_activate(pt);
return fence;
}
EXPORT_SYMBOL(sync_fence_create);
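As a usage sketch (not part of this file): a driver that owns a sync_timeline
would mint a fence fd for a queued operation roughly as below. struct my_pt,
its seqno field, and my_create_fence_fd() are hypothetical; a real driver also
supplies the sync_timeline_ops that back has_signaled() and dup().

/* Hypothetical pt type: struct sync_pt must be the first member. */
struct my_pt {
	struct sync_pt pt;
	u32 seqno;
};

static int my_create_fence_fd(struct sync_timeline *obj, u32 seqno)
{
	struct sync_pt *pt;
	struct sync_fence *fence;
	int fd = get_unused_fd();

	if (fd < 0)
		return fd;

	pt = sync_pt_create(obj, sizeof(struct my_pt));
	if (!pt) {
		put_unused_fd(fd);
		return -ENOMEM;
	}
	((struct my_pt *)pt)->seqno = seqno;

	fence = sync_fence_create("my-op", pt);	/* owns pt on success */
	if (!fence) {
		sync_pt_free(pt);
		put_unused_fd(fd);
		return -ENOMEM;
	}

	sync_fence_install(fence, fd);	/* fd now owns the fence */
	return fd;
}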
static int sync_fence_copy_pts(struct sync_fence *dst, struct sync_fence *src)
{
struct list_head *pos;
list_for_each(pos, &src->pt_list_head) {
struct sync_pt *orig_pt =
container_of(pos, struct sync_pt, pt_list);
struct sync_pt *new_pt = sync_pt_dup(orig_pt);
if (new_pt == NULL)
return -ENOMEM;
new_pt->fence = dst;
list_add(&new_pt->pt_list, &dst->pt_list_head);
sync_pt_activate(new_pt);
}
return 0;
}
static void sync_fence_free_pts(struct sync_fence *fence)
{
struct list_head *pos, *n;
list_for_each_safe(pos, n, &fence->pt_list_head) {
struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
sync_pt_free(pt);
}
}
struct sync_fence *sync_fence_fdget(int fd)
{
struct file *file = fget(fd);
if (file == NULL)
return NULL;
if (file->f_op != &sync_fence_fops)
goto err;
return file->private_data;
err:
fput(file);
return NULL;
}
EXPORT_SYMBOL(sync_fence_fdget);
void sync_fence_put(struct sync_fence *fence)
{
fput(fence->file);
}
EXPORT_SYMBOL(sync_fence_put);
void sync_fence_install(struct sync_fence *fence, int fd)
{
fd_install(fd, fence->file);
}
EXPORT_SYMBOL(sync_fence_install);
static int sync_fence_get_status(struct sync_fence *fence)
{
struct list_head *pos;
int status = 1;
list_for_each(pos, &fence->pt_list_head) {
struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
int pt_status = pt->status;
if (pt_status < 0) {
status = pt_status;
break;
} else if (status == 1) {
status = pt_status;
}
}
return status;
}
struct sync_fence *sync_fence_merge(const char *name,
struct sync_fence *a, struct sync_fence *b)
{
struct sync_fence *fence;
int err;
fence = sync_fence_alloc(name);
if (fence == NULL)
return NULL;
err = sync_fence_copy_pts(fence, a);
if (err < 0)
goto err;
err = sync_fence_copy_pts(fence, b);
if (err < 0)
goto err;
fence->status = sync_fence_get_status(fence);
return fence;
err:
sync_fence_free_pts(fence);
kfree(fence);
return NULL;
}
EXPORT_SYMBOL(sync_fence_merge);
static void sync_fence_signal_pt(struct sync_pt *pt)
{
LIST_HEAD(signaled_waiters);
struct sync_fence *fence = pt->fence;
struct list_head *pos;
struct list_head *n;
unsigned long flags;
int status;
status = sync_fence_get_status(fence);
spin_lock_irqsave(&fence->waiter_list_lock, flags);
/*
* this should protect against two threads racing on the signaled
* false -> true transition
*/
if (status && !fence->status) {
list_for_each_safe(pos, n, &fence->waiter_list_head)
list_move(pos, &signaled_waiters);
fence->status = status;
} else {
status = 0;
}
spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
if (status) {
list_for_each_safe(pos, n, &signaled_waiters) {
struct sync_fence_waiter *waiter =
container_of(pos, struct sync_fence_waiter,
waiter_list);
list_del(pos);
waiter->callback(fence, waiter);
}
wake_up(&fence->wq);
}
}
int sync_fence_wait_async(struct sync_fence *fence,
struct sync_fence_waiter *waiter)
{
unsigned long flags;
int err = 0;
spin_lock_irqsave(&fence->waiter_list_lock, flags);
if (fence->status) {
err = fence->status;
goto out;
}
list_add_tail(&waiter->waiter_list, &fence->waiter_list_head);
out:
spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
return err;
}
EXPORT_SYMBOL(sync_fence_wait_async);
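A hedged sketch of the async API from a driver's perspective; the waiter is
normally initialized through the sync_fence_waiter_init() helper in the sync
header (assumed here, so the struct is filled in directly), and the callback
may run in atomic context straight out of sync_fence_signal_pt():

static DECLARE_COMPLETION(my_done);

static void my_fence_cb(struct sync_fence *fence,
			struct sync_fence_waiter *waiter)
{
	complete(&my_done);	/* keep it light: may be atomic context */
}

static struct sync_fence_waiter my_waiter = { .callback = my_fence_cb };

static int my_wait(struct sync_fence *fence)
{
	int err = sync_fence_wait_async(fence, &my_waiter);

	if (err)	/* fence already signaled (>0) or errored (<0) */
		return err < 0 ? err : 0;
	wait_for_completion(&my_done);
	return 0;
}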
int sync_fence_cancel_async(struct sync_fence *fence,
struct sync_fence_waiter *waiter)
{
struct list_head *pos;
struct list_head *n;
unsigned long flags;
int ret = -ENOENT;
spin_lock_irqsave(&fence->waiter_list_lock, flags);
/*
* Make sure waiter is still in waiter_list because it is possible for
* the waiter to be removed from the list while the callback is still
* pending.
*/
list_for_each_safe(pos, n, &fence->waiter_list_head) {
struct sync_fence_waiter *list_waiter =
container_of(pos, struct sync_fence_waiter,
waiter_list);
if (list_waiter == waiter) {
list_del(pos);
ret = 0;
break;
}
}
spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
return ret;
}
EXPORT_SYMBOL(sync_fence_cancel_async);
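/*
 * sync_fence_wait - block until @fence signals
 * @timeout: timeout in milliseconds; 0 waits indefinitely
 *
 * Returns 0 once the fence has signaled, -ETIME if the timeout expired
 * first, a negative error if the wait was interrupted, or the fence's
 * own (negative) error status.
 */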
int sync_fence_wait(struct sync_fence *fence, long timeout)
{
int err;
if (timeout) {
timeout = msecs_to_jiffies(timeout);
err = wait_event_interruptible_timeout(fence->wq,
fence->status != 0,
timeout);
} else {
err = wait_event_interruptible(fence->wq, fence->status != 0);
}
if (err < 0)
return err;
if (fence->status < 0)
return fence->status;
if (fence->status == 0)
return -ETIME;
return 0;
}
EXPORT_SYMBOL(sync_fence_wait);
static int sync_fence_release(struct inode *inode, struct file *file)
{
struct sync_fence *fence = file->private_data;
unsigned long flags;
sync_fence_free_pts(fence);
spin_lock_irqsave(&sync_fence_list_lock, flags);
list_del(&fence->sync_fence_list);
spin_unlock_irqrestore(&sync_fence_list_lock, flags);
kfree(fence);
return 0;
}
static unsigned int sync_fence_poll(struct file *file, poll_table *wait)
{
struct sync_fence *fence = file->private_data;
poll_wait(file, &fence->wq, wait);
if (fence->status == 1)
return POLLIN;
else if (fence->status < 0)
return POLLERR;
else
return 0;
}
static long sync_fence_ioctl_wait(struct sync_fence *fence, unsigned long arg)
{
__u32 value;
if (copy_from_user(&value, (void __user *)arg, sizeof(value)))
return -EFAULT;
return sync_fence_wait(fence, value);
}
static long sync_fence_ioctl_merge(struct sync_fence *fence, unsigned long arg)
{
int fd = get_unused_fd();
int err;
struct sync_fence *fence2, *fence3;
struct sync_merge_data data;
if (fd < 0)
	return fd;
if (copy_from_user(&data, (void __user *)arg, sizeof(data))) {
	err = -EFAULT;
	goto err_put_fd;
}
fence2 = sync_fence_fdget(data.fd2);
if (fence2 == NULL) {
err = -ENOENT;
goto err_put_fd;
}
data.name[sizeof(data.name) - 1] = '\0';
fence3 = sync_fence_merge(data.name, fence, fence2);
if (fence3 == NULL) {
err = -ENOMEM;
goto err_put_fence2;
}
data.fence = fd;
if (copy_to_user((void __user *)arg, &data, sizeof(data))) {
err = -EFAULT;
goto err_put_fence3;
}
sync_fence_install(fence3, fd);
sync_fence_put(fence2);
return 0;
err_put_fence3:
sync_fence_put(fence3);
err_put_fence2:
sync_fence_put(fence2);
err_put_fd:
put_unused_fd(fd);
return err;
}
static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size)
{
struct sync_pt_info *info = data;
int ret;
if (size < sizeof(struct sync_pt_info))
return -ENOMEM;
info->len = sizeof(struct sync_pt_info);
if (pt->parent->ops->fill_driver_data) {
ret = pt->parent->ops->fill_driver_data(pt, info->driver_data,
size - sizeof(*info));
if (ret < 0)
return ret;
info->len += ret;
}
strlcpy(info->obj_name, pt->parent->name, sizeof(info->obj_name));
strlcpy(info->driver_name, pt->parent->ops->driver_name,
sizeof(info->driver_name));
info->status = pt->status;
info->timestamp_ns = ktime_to_ns(pt->timestamp);
return info->len;
}
static long sync_fence_ioctl_fence_info(struct sync_fence *fence,
unsigned long arg)
{
struct sync_fence_info_data *data;
struct list_head *pos;
__u32 size;
__u32 len = 0;
int ret;
if (copy_from_user(&size, (void __user *)arg, sizeof(size)))
return -EFAULT;
if (size < sizeof(struct sync_fence_info_data))
return -EINVAL;
if (size > 4096)
size = 4096;
data = kzalloc(size, GFP_KERNEL);
if (data == NULL)
return -ENOMEM;
strlcpy(data->name, fence->name, sizeof(data->name));
data->status = fence->status;
len = sizeof(struct sync_fence_info_data);
list_for_each(pos, &fence->pt_list_head) {
struct sync_pt *pt =
container_of(pos, struct sync_pt, pt_list);
ret = sync_fill_pt_info(pt, (u8 *)data + len, size - len);
if (ret < 0)
goto out;
len += ret;
}
data->len = len;
if (copy_to_user((void __user *)arg, data, len))
ret = -EFAULT;
else
ret = 0;
out:
kfree(data);
return ret;
}
static long sync_fence_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
struct sync_fence *fence = file->private_data;
switch (cmd) {
case SYNC_IOC_WAIT:
return sync_fence_ioctl_wait(fence, arg);
case SYNC_IOC_MERGE:
return sync_fence_ioctl_merge(fence, arg);
case SYNC_IOC_FENCE_INFO:
return sync_fence_ioctl_fence_info(fence, arg);
default:
return -ENOTTY;
}
}
#ifdef CONFIG_DEBUG_FS
static const char *sync_status_str(int status)
{
if (status > 0)
return "signaled";
else if (status == 0)
return "active";
else
return "error";
}
static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence)
{
int status = pt->status;
seq_printf(s, " %s%spt %s",
fence ? pt->parent->name : "",
fence ? "_" : "",
sync_status_str(status));
if (pt->status) {
struct timeval tv = ktime_to_timeval(pt->timestamp);
seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec);
}
if (pt->parent->ops->print_pt) {
seq_printf(s, ": ");
pt->parent->ops->print_pt(s, pt);
}
seq_printf(s, "\n");
}
static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
{
struct list_head *pos;
unsigned long flags;
seq_printf(s, "%s %s", obj->name, obj->ops->driver_name);
if (obj->ops->print_obj) {
seq_printf(s, ": ");
obj->ops->print_obj(s, obj);
}
seq_printf(s, "\n");
spin_lock_irqsave(&obj->child_list_lock, flags);
list_for_each(pos, &obj->child_list_head) {
struct sync_pt *pt =
container_of(pos, struct sync_pt, child_list);
sync_print_pt(s, pt, false);
}
spin_unlock_irqrestore(&obj->child_list_lock, flags);
}
static void sync_print_fence(struct seq_file *s, struct sync_fence *fence)
{
struct list_head *pos;
unsigned long flags;
seq_printf(s, "%s: %s\n", fence->name, sync_status_str(fence->status));
list_for_each(pos, &fence->pt_list_head) {
struct sync_pt *pt =
container_of(pos, struct sync_pt, pt_list);
sync_print_pt(s, pt, true);
}
spin_lock_irqsave(&fence->waiter_list_lock, flags);
list_for_each(pos, &fence->waiter_list_head) {
struct sync_fence_waiter *waiter =
container_of(pos, struct sync_fence_waiter,
waiter_list);
seq_printf(s, "waiter %pF\n", waiter->callback);
}
spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
}
static int sync_debugfs_show(struct seq_file *s, void *unused)
{
unsigned long flags;
struct list_head *pos;
seq_printf(s, "objs:\n--------------\n");
spin_lock_irqsave(&sync_timeline_list_lock, flags);
list_for_each(pos, &sync_timeline_list_head) {
struct sync_timeline *obj =
container_of(pos, struct sync_timeline,
sync_timeline_list);
sync_print_obj(s, obj);
seq_printf(s, "\n");
}
spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
seq_printf(s, "fences:\n--------------\n");
spin_lock_irqsave(&sync_fence_list_lock, flags);
list_for_each(pos, &sync_fence_list_head) {
struct sync_fence *fence =
container_of(pos, struct sync_fence, sync_fence_list);
sync_print_fence(s, fence);
seq_printf(s, "\n");
}
spin_unlock_irqrestore(&sync_fence_list_lock, flags);
return 0;
}
static int sync_debugfs_open(struct inode *inode, struct file *file)
{
return single_open(file, sync_debugfs_show, inode->i_private);
}
static const struct file_operations sync_debugfs_fops = {
.open = sync_debugfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static __init int sync_debugfs_init(void)
{
debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops);
return 0;
}
late_initcall(sync_debugfs_init);
#endif
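For reference, a minimal userspace caller of the wait ioctl above. The
SYNC_IOC_WAIT definition lives in the sync uapi header (not shown in this
hunk) and is assumed here; the argument is the __u32 millisecond timeout
that sync_fence_ioctl_wait() copies in:

#include <stdint.h>
#include <sys/ioctl.h>

/* SYNC_IOC_WAIT is assumed to come from the sync uapi header. */
int my_sync_wait(int fence_fd, uint32_t timeout_ms)
{
	/* a timeout of 0 blocks indefinitely, mirroring sync_fence_wait() */
	return ioctl(fence_fd, SYNC_IOC_WAIT, &timeout_ms);
}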


@@ -6,6 +6,19 @@ menu "Character devices"
source "drivers/tty/Kconfig"
config DEVMEM
bool "Memory device driver"
default y
help
The memory driver provides two character devices, mem and kmem, which
provide access to the system's memory. The mem device is a view of
physical memory, and each byte in the device corresponds to the
matching physical address. The kmem device is the same as mem, but
the addresses correspond to the kernel's virtual address space rather
than physical memory. These devices are standard parts of a Linux
system and most users should say Y here. You might say N if you are
very security conscious or memory is tight.
config DEVKMEM
bool "/dev/kmem virtual device support"
default y
@@ -583,6 +596,10 @@ config DEVPORT
depends on ISA || PCI
default y
config DCC_TTY
tristate "DCC tty driver"
depends on ARM
source "drivers/s390/char/Kconfig"
config RAMOOPS


@@ -57,6 +57,7 @@ obj-$(CONFIG_IPMI_HANDLER) += ipmi/
obj-$(CONFIG_HANGCHECK_TIMER) += hangcheck-timer.o
obj-$(CONFIG_TCG_TPM) += tpm/
obj-$(CONFIG_DCC_TTY) += dcc_tty.o
obj-$(CONFIG_PS3_FLASH) += ps3flash.o
obj-$(CONFIG_RAMOOPS) += ramoops.o

drivers/char/dcc_tty.c Normal file

@@ -0,0 +1,327 @@
/* drivers/char/dcc_tty.c
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/console.h>
#include <linux/hrtimer.h>
#include <linux/tty.h>
#include <linux/tty_driver.h>
#include <linux/tty_flip.h>
#include <linux/spinlock.h>
MODULE_DESCRIPTION("DCC TTY Driver");
MODULE_LICENSE("GPL");
MODULE_VERSION("1.0");
static spinlock_t g_dcc_tty_lock = __SPIN_LOCK_UNLOCKED(g_dcc_tty_lock);
static struct hrtimer g_dcc_timer;
static char g_dcc_buffer[16];
static int g_dcc_buffer_head;
static int g_dcc_buffer_count;
static unsigned g_dcc_write_delay_usecs = 1;
static struct tty_driver *g_dcc_tty_driver;
static struct tty_struct *g_dcc_tty;
static int g_dcc_tty_open_count;
static void dcc_poll_locked(void)
{
char ch;
int rch;
int written;
while (g_dcc_buffer_count) {
ch = g_dcc_buffer[g_dcc_buffer_head];
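/*
 * Read the DCC status register (CP14 c0,c1) into the PC so its flag
 * bits land in the CPSR; if the carry is clear the write register is
 * free, so push one byte into CP14 c0,c5. "written" ends up 1 on
 * success and 0 if the channel was still busy.
 */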
asm(
"mrc 14, 0, r15, c0, c1, 0\n"
"mcrcc 14, 0, %1, c0, c5, 0\n"
"movcc %0, #1\n"
"movcs %0, #0\n"
: "=r" (written)
: "r" (ch)
);
if (written) {
if (ch == '\n')
g_dcc_buffer[g_dcc_buffer_head] = '\r';
else {
g_dcc_buffer_head = (g_dcc_buffer_head + 1) % ARRAY_SIZE(g_dcc_buffer);
g_dcc_buffer_count--;
if (g_dcc_tty)
tty_wakeup(g_dcc_tty);
}
g_dcc_write_delay_usecs = 1;
} else {
if (g_dcc_write_delay_usecs > 0x100)
break;
g_dcc_write_delay_usecs <<= 1;
udelay(g_dcc_write_delay_usecs);
}
}
if (g_dcc_tty && !test_bit(TTY_THROTTLED, &g_dcc_tty->flags)) {
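/*
 * Poll the RX-full bit (DSCR bit 30); if a character is pending,
 * read it from the DCC data register, otherwise leave rch at -1.
 */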
asm(
"mrc 14, 0, %0, c0, c1, 0\n"
"tst %0, #(1 << 30)\n"
"moveq %0, #-1\n"
"mrcne 14, 0, %0, c0, c5, 0\n"
: "=r" (rch)
);
if (rch >= 0) {
ch = rch;
tty_insert_flip_string(g_dcc_tty, &ch, 1);
tty_flip_buffer_push(g_dcc_tty);
}
}
if (g_dcc_buffer_count)
hrtimer_start(&g_dcc_timer, ktime_set(0, g_dcc_write_delay_usecs * NSEC_PER_USEC), HRTIMER_MODE_REL);
else
hrtimer_start(&g_dcc_timer, ktime_set(0, 20 * NSEC_PER_MSEC), HRTIMER_MODE_REL);
}
static int dcc_tty_open(struct tty_struct * tty, struct file * filp)
{
int ret;
unsigned long irq_flags;
spin_lock_irqsave(&g_dcc_tty_lock, irq_flags);
if (g_dcc_tty == NULL || g_dcc_tty == tty) {
g_dcc_tty = tty;
g_dcc_tty_open_count++;
ret = 0;
} else
ret = -EBUSY;
spin_unlock_irqrestore(&g_dcc_tty_lock, irq_flags);
printk("dcc_tty_open, tty %p, f_flags %x, returned %d\n", tty, filp->f_flags, ret);
return ret;
}
static void dcc_tty_close(struct tty_struct * tty, struct file * filp)
{
printk("dcc_tty_close, tty %p, f_flags %x\n", tty, filp->f_flags);
if (g_dcc_tty == tty) {
if (--g_dcc_tty_open_count == 0)
g_dcc_tty = NULL;
}
}
static int dcc_write(const unsigned char *buf_start, int count)
{
const unsigned char *buf = buf_start;
unsigned long irq_flags;
int copy_len;
int space_left;
int tail;
if (count < 1)
return 0;
spin_lock_irqsave(&g_dcc_tty_lock, irq_flags);
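/*
 * Copy into the ring buffer in at most two chunks: from the tail to
 * the end of the array, then (after a wrap) from index zero, letting
 * dcc_poll_locked() try to drain what was queued on each pass.
 */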
do {
tail = (g_dcc_buffer_head + g_dcc_buffer_count) % ARRAY_SIZE(g_dcc_buffer);
copy_len = ARRAY_SIZE(g_dcc_buffer) - tail;
space_left = ARRAY_SIZE(g_dcc_buffer) - g_dcc_buffer_count;
if (copy_len > space_left)
copy_len = space_left;
if (copy_len > count)
copy_len = count;
memcpy(&g_dcc_buffer[tail], buf, copy_len);
g_dcc_buffer_count += copy_len;
buf += copy_len;
count -= copy_len;
if (copy_len < count && copy_len < space_left) {
space_left -= copy_len;
copy_len = count;
if (copy_len > space_left) {
copy_len = space_left;
}
memcpy(g_dcc_buffer, buf, copy_len);
buf += copy_len;
count -= copy_len;
g_dcc_buffer_count += copy_len;
}
dcc_poll_locked();
space_left = ARRAY_SIZE(g_dcc_buffer) - g_dcc_buffer_count;
} while(count && space_left);
spin_unlock_irqrestore(&g_dcc_tty_lock, irq_flags);
return buf - buf_start;
}
static int dcc_tty_write(struct tty_struct * tty, const unsigned char *buf, int count)
{
int ret;
/* printk("dcc_tty_write %p, %d\n", buf, count); */
ret = dcc_write(buf, count);
if (ret != count)
printk("dcc_tty_write %p, %d, returned %d\n", buf, count, ret);
return ret;
}
static int dcc_tty_write_room(struct tty_struct *tty)
{
int space_left;
unsigned long irq_flags;
spin_lock_irqsave(&g_dcc_tty_lock, irq_flags);
space_left = ARRAY_SIZE(g_dcc_buffer) - g_dcc_buffer_count;
spin_unlock_irqrestore(&g_dcc_tty_lock, irq_flags);
return space_left;
}
static int dcc_tty_chars_in_buffer(struct tty_struct *tty)
{
int ret;
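/* Extract DSCR bit 30: nonzero when a received DCC character is pending. */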
asm(
"mrc 14, 0, %0, c0, c1, 0\n"
"mov %0, %0, LSR #30\n"
"and %0, %0, #1\n"
: "=r" (ret)
);
return ret;
}
static void dcc_tty_unthrottle(struct tty_struct * tty)
{
unsigned long irq_flags;
spin_lock_irqsave(&g_dcc_tty_lock, irq_flags);
dcc_poll_locked();
spin_unlock_irqrestore(&g_dcc_tty_lock, irq_flags);
}
static enum hrtimer_restart dcc_tty_timer_func(struct hrtimer *timer)
{
unsigned long irq_flags;
spin_lock_irqsave(&g_dcc_tty_lock, irq_flags);
dcc_poll_locked();
spin_unlock_irqrestore(&g_dcc_tty_lock, irq_flags);
return HRTIMER_NORESTART;
}
void dcc_console_write(struct console *co, const char *b, unsigned count)
{
#if 1
dcc_write(b, count);
#else
/* blocking printk */
while (count > 0) {
int written;
written = dcc_write(b, count);
if (written) {
b += written;
count -= written;
}
}
#endif
}
static struct tty_driver *dcc_console_device(struct console *c, int *index)
{
*index = 0;
return g_dcc_tty_driver;
}
static int __init dcc_console_setup(struct console *co, char *options)
{
if (co->index != 0)
return -ENODEV;
return 0;
}
static struct console dcc_console =
{
.name = "ttyDCC",
.write = dcc_console_write,
.device = dcc_console_device,
.setup = dcc_console_setup,
.flags = CON_PRINTBUFFER,
.index = -1,
};
static struct tty_operations dcc_tty_ops = {
.open = dcc_tty_open,
.close = dcc_tty_close,
.write = dcc_tty_write,
.write_room = dcc_tty_write_room,
.chars_in_buffer = dcc_tty_chars_in_buffer,
.unthrottle = dcc_tty_unthrottle,
};
static int __init dcc_tty_init(void)
{
int ret;
hrtimer_init(&g_dcc_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
g_dcc_timer.function = dcc_tty_timer_func;
g_dcc_tty_driver = alloc_tty_driver(1);
if (!g_dcc_tty_driver) {
printk(KERN_ERR "dcc_tty_probe: alloc_tty_driver failed\n");
ret = -ENOMEM;
goto err_alloc_tty_driver_failed;
}
g_dcc_tty_driver->owner = THIS_MODULE;
g_dcc_tty_driver->driver_name = "dcc";
g_dcc_tty_driver->name = "ttyDCC";
g_dcc_tty_driver->major = 0; /* auto assign */
g_dcc_tty_driver->minor_start = 0;
g_dcc_tty_driver->type = TTY_DRIVER_TYPE_SERIAL;
g_dcc_tty_driver->subtype = SERIAL_TYPE_NORMAL;
g_dcc_tty_driver->init_termios = tty_std_termios;
g_dcc_tty_driver->flags = TTY_DRIVER_RESET_TERMIOS | TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV;
tty_set_operations(g_dcc_tty_driver, &dcc_tty_ops);
ret = tty_register_driver(g_dcc_tty_driver);
if (ret) {
printk(KERN_ERR "dcc_tty_probe: tty_register_driver failed, %d\n", ret);
goto err_tty_register_driver_failed;
}
tty_register_device(g_dcc_tty_driver, 0, NULL);
register_console(&dcc_console);
hrtimer_start(&g_dcc_timer, ktime_set(0, 0), HRTIMER_MODE_REL);
return 0;
err_tty_register_driver_failed:
put_tty_driver(g_dcc_tty_driver);
g_dcc_tty_driver = NULL;
err_alloc_tty_driver_failed:
return ret;
}
static void __exit dcc_tty_exit(void)
{
int ret;
tty_unregister_device(g_dcc_tty_driver, 0);
ret = tty_unregister_driver(g_dcc_tty_driver);
if (ret < 0) {
printk(KERN_ERR "dcc_tty_remove: tty_unregister_driver failed, %d\n", ret);
} else {
put_tty_driver(g_dcc_tty_driver);
}
g_dcc_tty_driver = NULL;
}
module_init(dcc_tty_init);
module_exit(dcc_tty_exit);


@@ -57,6 +57,7 @@ static inline int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
}
#endif
#if defined(CONFIG_DEVMEM) || defined(CONFIG_DEVKMEM)
#ifdef CONFIG_STRICT_DEVMEM
static inline int range_is_allowed(unsigned long pfn, unsigned long size)
{
@@ -82,7 +83,9 @@ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
return 1;
}
#endif
#endif
#ifdef CONFIG_DEVMEM
void __weak unxlate_dev_mem_ptr(unsigned long phys, void *addr)
{
}
@@ -209,6 +212,9 @@ static ssize_t write_mem(struct file *file, const char __user *buf,
*ppos += written;
return written;
}
#endif /* CONFIG_DEVMEM */
#if defined(CONFIG_DEVMEM) || defined(CONFIG_DEVKMEM)
int __weak phys_mem_access_prot_allowed(struct file *file,
unsigned long pfn, unsigned long size, pgprot_t *vma_prot)
@@ -330,6 +336,7 @@ static int mmap_mem(struct file *file, struct vm_area_struct *vma)
}
return 0;
}
#endif /* CONFIG_DEVMEM */
#ifdef CONFIG_DEVKMEM
static int mmap_kmem(struct file *file, struct vm_area_struct *vma)
@@ -694,6 +701,8 @@ static loff_t null_lseek(struct file *file, loff_t offset, int orig)
return file->f_pos = 0;
}
#if defined(CONFIG_DEVMEM) || defined(CONFIG_DEVKMEM) || defined(CONFIG_DEVPORT)
/*
* The memory devices use the full 32/64 bits of the offset, and so we cannot
* check against negative addresses: they are ok. The return value is weird,
@@ -727,10 +736,14 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)
return ret;
}
#endif
#if defined(CONFIG_DEVMEM) || defined(CONFIG_DEVKMEM) || defined(CONFIG_DEVPORT)
static int open_port(struct inode * inode, struct file * filp)
{
return capable(CAP_SYS_RAWIO) ? 0 : -EPERM;
}
#endif
#define zero_lseek null_lseek
#define full_lseek null_lseek
@@ -740,6 +753,7 @@ static int open_port(struct inode * inode, struct file * filp)
#define open_kmem open_mem
#define open_oldmem open_mem
#ifdef CONFIG_DEVMEM
static const struct file_operations mem_fops = {
.llseek = memory_lseek,
.read = read_mem,
@@ -748,6 +762,7 @@ static const struct file_operations mem_fops = {
.open = open_mem,
.get_unmapped_area = get_unmapped_area_mem,
};
#endif
#ifdef CONFIG_DEVKMEM
static const struct file_operations kmem_fops = {
@@ -851,7 +866,9 @@ static const struct memdev {
const struct file_operations *fops;
struct backing_dev_info *dev_info;
} devlist[] = {
#ifdef CONFIG_DEVMEM
[1] = { "mem", 0, &mem_fops, &directly_mappable_cdev_bdi },
#endif
#ifdef CONFIG_DEVKMEM
[2] = { "kmem", 0, &kmem_fops, &directly_mappable_cdev_bdi },
#endif


@@ -99,6 +99,16 @@ config CPU_FREQ_DEFAULT_GOV_CONSERVATIVE
Be aware that not all cpufreq drivers support the conservative
governor. If unsure have a look at the help section of the
driver. Fallback governor will be the performance governor.
config CPU_FREQ_DEFAULT_GOV_INTERACTIVE
bool "interactive"
select CPU_FREQ_GOV_INTERACTIVE
help
Use the CPUFreq governor 'interactive' as default. This allows
you to get a full dynamic cpu frequency capable system by simply
loading your cpufreq low-level hardware driver, using the
'interactive' governor for latency-sensitive workloads.
endchoice
config CPU_FREQ_GOV_PERFORMANCE
@@ -156,6 +166,23 @@ config CPU_FREQ_GOV_ONDEMAND
If in doubt, say N.
config CPU_FREQ_GOV_INTERACTIVE
tristate "'interactive' cpufreq policy governor"
help
'interactive' - This driver adds a dynamic cpufreq policy governor
designed for latency-sensitive workloads.
This governor attempts to reduce the latency of clock
increases so that the system is more responsive to
interactive workloads.
To compile this driver as a module, choose M here: the
module will be called cpufreq_interactive.
For details, take a look at linux/Documentation/cpu-freq.
If in doubt, say N.
config CPU_FREQ_GOV_CONSERVATIVE
tristate "'conservative' cpufreq governor"
depends on CPU_FREQ
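As a quick usage note, once CPU_FREQ_GOV_INTERACTIVE is enabled the
interactive governor can be selected per CPU at runtime through the standard
cpufreq sysfs interface. A minimal userspace sketch (cpu0 path shown; error
handling trimmed):

#include <stdio.h>

/* Write "interactive" (or any governor name) into cpu0's policy. */
static int set_governor(const char *gov)
{
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/"
			"scaling_governor", "w");

	if (!f)
		return -1;
	fprintf(f, "%s\n", gov);
	return fclose(f);
}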


@@ -9,6 +9,7 @@ obj-$(CONFIG_CPU_FREQ_GOV_POWERSAVE) += cpufreq_powersave.o
obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE) += cpufreq_userspace.o
obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND) += cpufreq_ondemand.o
obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o
obj-$(CONFIG_CPU_FREQ_GOV_INTERACTIVE) += cpufreq_interactive.o
# CPUfreq cross-arch helpers
obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o

File diff suppressed because it is too large


@@ -316,6 +316,27 @@ static int cpufreq_stat_notifier_trans(struct notifier_block *nb,
return 0;
}
static int cpufreq_stats_create_table_cpu(unsigned int cpu)
{
struct cpufreq_policy *policy;
struct cpufreq_frequency_table *table;
int ret = -ENODEV;
policy = cpufreq_cpu_get(cpu);
if (!policy)
return -ENODEV;
table = cpufreq_frequency_get_table(cpu);
if (!table)
goto out;
ret = cpufreq_stats_create_table(policy, table);
out:
cpufreq_cpu_put(policy);
return ret;
}
static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
unsigned long action,
void *hcpu)
@@ -335,6 +356,10 @@ static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
case CPU_DEAD_FROZEN:
cpufreq_stats_free_table(cpu);
break;
case CPU_DOWN_FAILED:
case CPU_DOWN_FAILED_FROZEN:
cpufreq_stats_create_table_cpu(cpu);
break;
}
return NOTIFY_OK;
}


@@ -126,14 +126,6 @@ struct menu_device {
#define LOAD_INT(x) ((x) >> FSHIFT)
#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100)
static int get_loadavg(void)
{
unsigned long this = this_cpu_load();
return LOAD_INT(this) * 10 + LOAD_FRAC(this) / 10;
}
static inline int which_bucket(unsigned int duration)
{
int bucket = 0;
@@ -171,10 +163,6 @@ static inline int performance_multiplier(void)
{
int mult = 1;
/* for higher loadavg, we are more reluctant */
mult += 2 * get_loadavg();
/* for IO wait tasks (per cpu!) we add 5x each */
mult += 10 * nr_iowait_cpu(smp_processor_id());


@@ -1 +1 @@
obj-y += drm/ vga/ stub/
obj-y += drm/ vga/ stub/ ion/

drivers/gpu/ion/Kconfig Normal file

@@ -0,0 +1,18 @@
menuconfig ION
tristate "Ion Memory Manager"
select GENERIC_ALLOCATOR
select DMA_SHARED_BUFFER
help
Choose this option to enable the ION Memory Manager.
config ION_TEGRA
tristate "Ion for Tegra"
depends on ARCH_TEGRA && ION
help
Choose this option if you wish to use ion on an nVidia Tegra.
config ION_MSM
tristate "Ion for MSM"
depends on ARCH_MSM && ION
help
Choose this option if you wish to use ion on an MSM target.

drivers/gpu/ion/Makefile Normal file

@@ -0,0 +1,3 @@
obj-$(CONFIG_ION) += ion.o ion_heap.o ion_system_heap.o ion_carveout_heap.o ion_iommu_heap.o ion_cp_heap.o
obj-$(CONFIG_ION_TEGRA) += tegra/
obj-$(CONFIG_ION_MSM) += msm/

drivers/gpu/ion/ion.c Normal file

File diff suppressed because it is too large


@@ -0,0 +1,493 @@
/*
* drivers/gpu/ion/ion_carveout_heap.c
*
* Copyright (C) 2011 Google, Inc.
* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/spinlock.h>
#include <linux/err.h>
#include <linux/genalloc.h>
#include <linux/io.h>
#include <linux/ion.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/iommu.h>
#include <linux/seq_file.h>
#include "ion_priv.h"
#include <mach/iommu_domains.h>
#include <asm/mach/map.h>
#include <asm/cacheflush.h>
struct ion_carveout_heap {
struct ion_heap heap;
struct gen_pool *pool;
ion_phys_addr_t base;
unsigned long allocated_bytes;
unsigned long total_size;
int (*request_region)(void *);
int (*release_region)(void *);
atomic_t map_count;
void *bus_id;
unsigned int has_outer_cache;
};
ion_phys_addr_t ion_carveout_allocate(struct ion_heap *heap,
unsigned long size,
unsigned long align)
{
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
unsigned long offset = gen_pool_alloc_aligned(carveout_heap->pool,
size, ilog2(align));
if (!offset) {
if ((carveout_heap->total_size -
carveout_heap->allocated_bytes) >= size)
pr_debug("%s: heap %s has enough memory (%lx) but"
" the allocation of size %lx still failed."
" Memory is probably fragmented.",
__func__, heap->name,
carveout_heap->total_size -
carveout_heap->allocated_bytes, size);
return ION_CARVEOUT_ALLOCATE_FAIL;
}
carveout_heap->allocated_bytes += size;
return offset;
}
void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
unsigned long size)
{
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
if (addr == ION_CARVEOUT_ALLOCATE_FAIL)
return;
gen_pool_free(carveout_heap->pool, addr, size);
carveout_heap->allocated_bytes -= size;
}
static int ion_carveout_heap_phys(struct ion_heap *heap,
struct ion_buffer *buffer,
ion_phys_addr_t *addr, size_t *len)
{
*addr = buffer->priv_phys;
*len = buffer->size;
return 0;
}
static int ion_carveout_heap_allocate(struct ion_heap *heap,
struct ion_buffer *buffer,
unsigned long size, unsigned long align,
unsigned long flags)
{
buffer->priv_phys = ion_carveout_allocate(heap, size, align);
return buffer->priv_phys == ION_CARVEOUT_ALLOCATE_FAIL ? -ENOMEM : 0;
}
static void ion_carveout_heap_free(struct ion_buffer *buffer)
{
struct ion_heap *heap = buffer->heap;
ion_carveout_free(heap, buffer->priv_phys, buffer->size);
buffer->priv_phys = ION_CARVEOUT_ALLOCATE_FAIL;
}
struct sg_table *ion_carveout_heap_map_dma(struct ion_heap *heap,
struct ion_buffer *buffer)
{
struct sg_table *table;
int ret;
table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
if (!table)
return ERR_PTR(-ENOMEM);
ret = sg_alloc_table(table, 1, GFP_KERNEL);
if (ret)
goto err0;
table->sgl->length = buffer->size;
table->sgl->offset = 0;
table->sgl->dma_address = buffer->priv_phys;
return table;
err0:
kfree(table);
return ERR_PTR(ret);
}
void ion_carveout_heap_unmap_dma(struct ion_heap *heap,
struct ion_buffer *buffer)
{
if (buffer->sg_table)
sg_free_table(buffer->sg_table);
kfree(buffer->sg_table);
buffer->sg_table = 0;
}
static int ion_carveout_request_region(struct ion_carveout_heap *carveout_heap)
{
int ret_value = 0;
if (atomic_inc_return(&carveout_heap->map_count) == 1) {
if (carveout_heap->request_region) {
ret_value = carveout_heap->request_region(
carveout_heap->bus_id);
if (ret_value) {
pr_err("Unable to request SMI region");
atomic_dec(&carveout_heap->map_count);
}
}
}
return ret_value;
}
static int ion_carveout_release_region(struct ion_carveout_heap *carveout_heap)
{
int ret_value = 0;
if (atomic_dec_and_test(&carveout_heap->map_count)) {
if (carveout_heap->release_region) {
ret_value = carveout_heap->release_region(
carveout_heap->bus_id);
if (ret_value)
pr_err("Unable to release SMI region");
}
}
return ret_value;
}
void *ion_carveout_heap_map_kernel(struct ion_heap *heap,
struct ion_buffer *buffer)
{
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
void *ret_value;
if (ion_carveout_request_region(carveout_heap))
return NULL;
if (ION_IS_CACHED(buffer->flags))
ret_value = ioremap_cached(buffer->priv_phys, buffer->size);
else
ret_value = ioremap(buffer->priv_phys, buffer->size);
if (!ret_value)
ion_carveout_release_region(carveout_heap);
return ret_value;
}
void ion_carveout_heap_unmap_kernel(struct ion_heap *heap,
struct ion_buffer *buffer)
{
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
__arm_iounmap(buffer->vaddr);
buffer->vaddr = NULL;
ion_carveout_release_region(carveout_heap);
return;
}
int ion_carveout_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
struct vm_area_struct *vma)
{
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
int ret_value = 0;
if (ion_carveout_request_region(carveout_heap))
return -EINVAL;
if (!ION_IS_CACHED(buffer->flags))
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
ret_value = remap_pfn_range(vma, vma->vm_start,
__phys_to_pfn(buffer->priv_phys) + vma->vm_pgoff,
vma->vm_end - vma->vm_start,
vma->vm_page_prot);
if (ret_value)
ion_carveout_release_region(carveout_heap);
return ret_value;
}
void ion_carveout_heap_unmap_user(struct ion_heap *heap,
struct ion_buffer *buffer)
{
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
ion_carveout_release_region(carveout_heap);
}
int ion_carveout_cache_ops(struct ion_heap *heap, struct ion_buffer *buffer,
void *vaddr, unsigned int offset, unsigned int length,
unsigned int cmd)
{
void (*outer_cache_op)(phys_addr_t, phys_addr_t);
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
switch (cmd) {
case ION_IOC_CLEAN_CACHES:
dmac_clean_range(vaddr, vaddr + length);
outer_cache_op = outer_clean_range;
break;
case ION_IOC_INV_CACHES:
dmac_inv_range(vaddr, vaddr + length);
outer_cache_op = outer_inv_range;
break;
case ION_IOC_CLEAN_INV_CACHES:
dmac_flush_range(vaddr, vaddr + length);
outer_cache_op = outer_flush_range;
break;
default:
return -EINVAL;
}
if (carveout_heap->has_outer_cache) {
unsigned long pstart = buffer->priv_phys + offset;
outer_cache_op(pstart, pstart + length);
}
return 0;
}
static int ion_carveout_print_debug(struct ion_heap *heap, struct seq_file *s,
const struct rb_root *mem_map)
{
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
seq_printf(s, "total bytes currently allocated: %lx\n",
carveout_heap->allocated_bytes);
seq_printf(s, "total heap size: %lx\n", carveout_heap->total_size);
if (mem_map) {
unsigned long base = carveout_heap->base;
unsigned long size = carveout_heap->total_size;
unsigned long end = base+size;
unsigned long last_end = base;
struct rb_node *n;
seq_printf(s, "\nMemory Map\n");
seq_printf(s, "%16.s %14.s %14.s %14.s\n",
"client", "start address", "end address",
"size (hex)");
for (n = rb_first(mem_map); n; n = rb_next(n)) {
struct mem_map_data *data =
rb_entry(n, struct mem_map_data, node);
const char *client_name = "(null)";
if (last_end < data->addr) {
seq_printf(s, "%16.s %14lx %14lx %14lu (%lx)\n",
"FREE", last_end, data->addr-1,
data->addr-last_end,
data->addr-last_end);
}
if (data->client_name)
client_name = data->client_name;
seq_printf(s, "%16.s %14lx %14lx %14lu (%lx)\n",
client_name, data->addr,
data->addr_end,
data->size, data->size);
last_end = data->addr_end+1;
}
if (last_end < end) {
seq_printf(s, "%16.s %14lx %14lx %14lu (%lx)\n", "FREE",
last_end, end-1, end-last_end, end-last_end);
}
}
return 0;
}
int ion_carveout_heap_map_iommu(struct ion_buffer *buffer,
struct ion_iommu_map *data,
unsigned int domain_num,
unsigned int partition_num,
unsigned long align,
unsigned long iova_length,
unsigned long flags)
{
struct iommu_domain *domain;
int ret = 0;
unsigned long extra;
struct scatterlist *sglist = 0;
int prot = IOMMU_WRITE | IOMMU_READ;
prot |= ION_IS_CACHED(flags) ? IOMMU_CACHE : 0;
data->mapped_size = iova_length;
if (!msm_use_iommu()) {
data->iova_addr = buffer->priv_phys;
return 0;
}
extra = iova_length - buffer->size;
ret = msm_allocate_iova_address(domain_num, partition_num,
data->mapped_size, align,
&data->iova_addr);
if (ret)
goto out;
domain = msm_get_iommu_domain(domain_num);
if (!domain) {
ret = -ENOMEM;
goto out1;
}
sglist = vmalloc(sizeof(*sglist));
if (!sglist)
goto out1;
sg_init_table(sglist, 1);
sglist->length = buffer->size;
sglist->offset = 0;
sglist->dma_address = buffer->priv_phys;
ret = iommu_map_range(domain, data->iova_addr, sglist,
buffer->size, prot);
if (ret) {
pr_err("%s: could not map %lx in domain %p\n",
__func__, data->iova_addr, domain);
goto out1;
}
if (extra) {
unsigned long extra_iova_addr = data->iova_addr + buffer->size;
ret = msm_iommu_map_extra(domain, extra_iova_addr, extra,
SZ_4K, prot);
if (ret)
goto out2;
}
vfree(sglist);
return ret;
out2:
iommu_unmap_range(domain, data->iova_addr, buffer->size);
out1:
vfree(sglist);
msm_free_iova_address(data->iova_addr, domain_num, partition_num,
data->mapped_size);
out:
return ret;
}
void ion_carveout_heap_unmap_iommu(struct ion_iommu_map *data)
{
unsigned int domain_num;
unsigned int partition_num;
struct iommu_domain *domain;
if (!msm_use_iommu())
return;
domain_num = iommu_map_domain(data);
partition_num = iommu_map_partition(data);
domain = msm_get_iommu_domain(domain_num);
if (!domain) {
WARN(1, "Could not get domain %d. Corruption?\n", domain_num);
return;
}
iommu_unmap_range(domain, data->iova_addr, data->mapped_size);
msm_free_iova_address(data->iova_addr, domain_num, partition_num,
data->mapped_size);
return;
}
static struct ion_heap_ops carveout_heap_ops = {
.allocate = ion_carveout_heap_allocate,
.free = ion_carveout_heap_free,
.phys = ion_carveout_heap_phys,
.map_user = ion_carveout_heap_map_user,
.map_kernel = ion_carveout_heap_map_kernel,
.unmap_user = ion_carveout_heap_unmap_user,
.unmap_kernel = ion_carveout_heap_unmap_kernel,
.map_dma = ion_carveout_heap_map_dma,
.unmap_dma = ion_carveout_heap_unmap_dma,
.cache_op = ion_carveout_cache_ops,
.print_debug = ion_carveout_print_debug,
.map_iommu = ion_carveout_heap_map_iommu,
.unmap_iommu = ion_carveout_heap_unmap_iommu,
};
struct ion_heap *ion_carveout_heap_create(struct ion_platform_heap *heap_data)
{
struct ion_carveout_heap *carveout_heap;
int ret;
carveout_heap = kzalloc(sizeof(struct ion_carveout_heap), GFP_KERNEL);
if (!carveout_heap)
return ERR_PTR(-ENOMEM);
carveout_heap->pool = gen_pool_create(12, -1);
if (!carveout_heap->pool) {
kfree(carveout_heap);
return ERR_PTR(-ENOMEM);
}
carveout_heap->base = heap_data->base;
ret = gen_pool_add(carveout_heap->pool, carveout_heap->base,
heap_data->size, -1);
if (ret < 0) {
gen_pool_destroy(carveout_heap->pool);
kfree(carveout_heap);
return ERR_PTR(-EINVAL);
}
carveout_heap->heap.ops = &carveout_heap_ops;
carveout_heap->heap.type = ION_HEAP_TYPE_CARVEOUT;
carveout_heap->allocated_bytes = 0;
carveout_heap->total_size = heap_data->size;
carveout_heap->has_outer_cache = heap_data->has_outer_cache;
if (heap_data->extra_data) {
struct ion_co_heap_pdata *extra_data =
heap_data->extra_data;
if (extra_data->setup_region)
carveout_heap->bus_id = extra_data->setup_region();
if (extra_data->request_region)
carveout_heap->request_region =
extra_data->request_region;
if (extra_data->release_region)
carveout_heap->release_region =
extra_data->release_region;
}
return &carveout_heap->heap;
}
void ion_carveout_heap_destroy(struct ion_heap *heap)
{
struct ion_carveout_heap *carveout_heap =
container_of(heap, struct ion_carveout_heap, heap);
gen_pool_destroy(carveout_heap->pool);
kfree(carveout_heap);
carveout_heap = NULL;
}
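For context, a hedged sketch of the board-side data this heap consumes. The
field names follow the ion_platform_heap and ion_co_heap_pdata definitions
this file dereferences (base, size, extra_data, and the optional region
hooks); the id, name, and addresses are purely illustrative:

static struct ion_co_heap_pdata my_co_pdata = {
	/* setup_region/request_region/release_region hooks are optional */
};

static struct ion_platform_heap my_carveout_heap = {
	.type = ION_HEAP_TYPE_CARVEOUT,
	.id = 14,			/* must be unique across heaps */
	.name = "my_carveout",		/* illustrative */
	.base = 0x80000000,		/* illustrative reserved region */
	.size = SZ_8M,
	.extra_data = &my_co_pdata,
};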

File diff suppressed because it is too large


@@ -0,0 +1,85 @@
/*
* drivers/gpu/ion/ion_heap.c
*
* Copyright (C) 2011 Google, Inc.
* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/err.h>
#include <linux/ion.h>
#include "ion_priv.h"
struct ion_heap *ion_heap_create(struct ion_platform_heap *heap_data)
{
struct ion_heap *heap = NULL;
switch (heap_data->type) {
case ION_HEAP_TYPE_SYSTEM_CONTIG:
heap = ion_system_contig_heap_create(heap_data);
break;
case ION_HEAP_TYPE_SYSTEM:
heap = ion_system_heap_create(heap_data);
break;
case ION_HEAP_TYPE_CARVEOUT:
heap = ion_carveout_heap_create(heap_data);
break;
case ION_HEAP_TYPE_IOMMU:
heap = ion_iommu_heap_create(heap_data);
break;
case ION_HEAP_TYPE_CP:
heap = ion_cp_heap_create(heap_data);
break;
default:
pr_err("%s: Invalid heap type %d\n", __func__,
heap_data->type);
return ERR_PTR(-EINVAL);
}
if (IS_ERR_OR_NULL(heap)) {
pr_err("%s: error creating heap %s type %d base %lu size %u\n",
__func__, heap_data->name, heap_data->type,
heap_data->base, heap_data->size);
return ERR_PTR(-EINVAL);
}
heap->name = heap_data->name;
heap->id = heap_data->id;
return heap;
}
void ion_heap_destroy(struct ion_heap *heap)
{
if (!heap)
return;
switch (heap->type) {
case ION_HEAP_TYPE_SYSTEM_CONTIG:
ion_system_contig_heap_destroy(heap);
break;
case ION_HEAP_TYPE_SYSTEM:
ion_system_heap_destroy(heap);
break;
case ION_HEAP_TYPE_CARVEOUT:
ion_carveout_heap_destroy(heap);
break;
case ION_HEAP_TYPE_IOMMU:
ion_iommu_heap_destroy(heap);
break;
case ION_HEAP_TYPE_CP:
ion_cp_heap_destroy(heap);
break;
default:
pr_err("%s: Invalid heap type %d\n", __func__,
heap->type);
}
}
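Tying this dispatcher together with the device helpers declared in
ion_priv.h, a rough probe-time sketch; the struct ion_platform_data layout
(an nr count plus a heaps[] array) is assumed from the msm ion headers:

static struct ion_device *idev;

static int my_ion_probe(struct ion_platform_data *pdata)
{
	int i;

	idev = ion_device_create(NULL);	/* no custom ioctl */
	if (IS_ERR_OR_NULL(idev))
		return -ENOMEM;

	for (i = 0; i < pdata->nr; i++) {
		struct ion_heap *heap = ion_heap_create(&pdata->heaps[i]);

		if (IS_ERR_OR_NULL(heap))
			continue;	/* a real driver would unwind */
		ion_device_add_heap(idev, heap);
	}
	return 0;
}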


@@ -0,0 +1,354 @@
/*
* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/ion.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/iommu.h>
#include <linux/pfn.h>
#include "ion_priv.h"
#include <asm/mach/map.h>
#include <asm/page.h>
#include <asm/cacheflush.h>
#include <mach/iommu_domains.h>
struct ion_iommu_heap {
struct ion_heap heap;
unsigned int has_outer_cache;
};
struct ion_iommu_priv_data {
struct page **pages;
int nrpages;
unsigned long size;
};
static int ion_iommu_heap_allocate(struct ion_heap *heap,
struct ion_buffer *buffer,
unsigned long size, unsigned long align,
unsigned long flags)
{
int ret, i;
struct ion_iommu_priv_data *data = NULL;
if (msm_use_iommu()) {
struct scatterlist *sg;
struct sg_table *table;
unsigned int i;
data = kmalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->size = PFN_ALIGN(size);
data->nrpages = data->size >> PAGE_SHIFT;
data->pages = kzalloc(sizeof(struct page *)*data->nrpages,
GFP_KERNEL);
if (!data->pages) {
ret = -ENOMEM;
goto err1;
}
table = buffer->sg_table =
kzalloc(sizeof(struct sg_table), GFP_KERNEL);
if (!table) {
ret = -ENOMEM;
goto err1;
}
ret = sg_alloc_table(table, data->nrpages, GFP_KERNEL);
if (ret)
goto err2;
for_each_sg(table->sgl, sg, table->nents, i) {
data->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);
if (!data->pages[i])
goto err3;
sg_set_page(sg, data->pages[i], PAGE_SIZE, 0);
}
buffer->priv_virt = data;
return 0;
} else {
return -ENOMEM;
}
err3:
sg_free_table(buffer->sg_table);
err2:
kfree(buffer->sg_table);
buffer->sg_table = 0;
for (i = 0; i < data->nrpages; i++) {
if (data->pages[i])
__free_page(data->pages[i]);
}
kfree(data->pages);
err1:
kfree(data);
return ret;
}
static void ion_iommu_heap_free(struct ion_buffer *buffer)
{
struct ion_iommu_priv_data *data = buffer->priv_virt;
int i;
if (!data)
return;
for (i = 0; i < data->nrpages; i++)
__free_page(data->pages[i]);
kfree(data->pages);
kfree(data);
}
void *ion_iommu_heap_map_kernel(struct ion_heap *heap,
struct ion_buffer *buffer)
{
struct ion_iommu_priv_data *data = buffer->priv_virt;
pgprot_t page_prot = PAGE_KERNEL;
if (!data)
return NULL;
if (!ION_IS_CACHED(buffer->flags))
page_prot = pgprot_noncached(page_prot);
buffer->vaddr = vmap(data->pages, data->nrpages, VM_IOREMAP, page_prot);
return buffer->vaddr;
}
void ion_iommu_heap_unmap_kernel(struct ion_heap *heap,
struct ion_buffer *buffer)
{
if (!buffer->vaddr)
return;
vunmap(buffer->vaddr);
buffer->vaddr = NULL;
}
int ion_iommu_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
struct vm_area_struct *vma)
{
struct ion_iommu_priv_data *data = buffer->priv_virt;
int i;
unsigned long curr_addr;
if (!data)
return -EINVAL;
if (!ION_IS_CACHED(buffer->flags))
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
curr_addr = vma->vm_start;
for (i = 0; i < data->nrpages && curr_addr < vma->vm_end; i++) {
if (vm_insert_page(vma, curr_addr, data->pages[i])) {
/*
* This will fail the mmap which will
* clean up the vma space properly.
*/
return -EINVAL;
}
curr_addr += PAGE_SIZE;
}
return 0;
}
int ion_iommu_heap_map_iommu(struct ion_buffer *buffer,
struct ion_iommu_map *data,
unsigned int domain_num,
unsigned int partition_num,
unsigned long align,
unsigned long iova_length,
unsigned long flags)
{
struct iommu_domain *domain;
int ret = 0;
unsigned long extra;
int prot = IOMMU_WRITE | IOMMU_READ;
prot |= ION_IS_CACHED(flags) ? IOMMU_CACHE : 0;
BUG_ON(!msm_use_iommu());
data->mapped_size = iova_length;
extra = iova_length - buffer->size;
ret = msm_allocate_iova_address(domain_num, partition_num,
data->mapped_size, align,
&data->iova_addr);
if (!data->iova_addr)
goto out;
domain = msm_get_iommu_domain(domain_num);
if (!domain) {
ret = -ENOMEM;
goto out1;
}
ret = iommu_map_range(domain, data->iova_addr,
buffer->sg_table->sgl,
buffer->size, prot);
if (ret) {
pr_err("%s: could not map %lx in domain %p\n",
__func__, data->iova_addr, domain);
goto out1;
}
if (extra) {
unsigned long extra_iova_addr = data->iova_addr + buffer->size;
ret = msm_iommu_map_extra(domain, extra_iova_addr, extra, SZ_4K,
prot);
if (ret)
goto out2;
}
return ret;
out2:
iommu_unmap_range(domain, data->iova_addr, buffer->size);
out1:
msm_free_iova_address(data->iova_addr, domain_num, partition_num,
buffer->size);
out:
return ret;
}
void ion_iommu_heap_unmap_iommu(struct ion_iommu_map *data)
{
unsigned int domain_num;
unsigned int partition_num;
struct iommu_domain *domain;
BUG_ON(!msm_use_iommu());
domain_num = iommu_map_domain(data);
partition_num = iommu_map_partition(data);
domain = msm_get_iommu_domain(domain_num);
if (!domain) {
WARN(1, "Could not get domain %d. Corruption?\n", domain_num);
return;
}
iommu_unmap_range(domain, data->iova_addr, data->mapped_size);
msm_free_iova_address(data->iova_addr, domain_num, partition_num,
data->mapped_size);
return;
}
static int ion_iommu_cache_ops(struct ion_heap *heap, struct ion_buffer *buffer,
void *vaddr, unsigned int offset, unsigned int length,
unsigned int cmd)
{
void (*outer_cache_op)(phys_addr_t, phys_addr_t);
struct ion_iommu_heap *iommu_heap =
container_of(heap, struct ion_iommu_heap, heap);
switch (cmd) {
case ION_IOC_CLEAN_CACHES:
dmac_clean_range(vaddr, vaddr + length);
outer_cache_op = outer_clean_range;
break;
case ION_IOC_INV_CACHES:
dmac_inv_range(vaddr, vaddr + length);
outer_cache_op = outer_inv_range;
break;
case ION_IOC_CLEAN_INV_CACHES:
dmac_flush_range(vaddr, vaddr + length);
outer_cache_op = outer_flush_range;
break;
default:
return -EINVAL;
}
if (iommu_heap->has_outer_cache) {
unsigned long pstart;
unsigned int i;
struct ion_iommu_priv_data *data = buffer->priv_virt;
if (!data)
return -ENOMEM;
for (i = 0; i < data->nrpages; ++i) {
pstart = page_to_phys(data->pages[i]);
outer_cache_op(pstart, pstart + PAGE_SIZE);
}
}
return 0;
}
static struct sg_table *ion_iommu_heap_map_dma(struct ion_heap *heap,
struct ion_buffer *buffer)
{
return buffer->sg_table;
}
static void ion_iommu_heap_unmap_dma(struct ion_heap *heap,
struct ion_buffer *buffer)
{
if (buffer->sg_table)
sg_free_table(buffer->sg_table);
kfree(buffer->sg_table);
buffer->sg_table = 0;
}
static struct ion_heap_ops iommu_heap_ops = {
.allocate = ion_iommu_heap_allocate,
.free = ion_iommu_heap_free,
.map_user = ion_iommu_heap_map_user,
.map_kernel = ion_iommu_heap_map_kernel,
.unmap_kernel = ion_iommu_heap_unmap_kernel,
.map_iommu = ion_iommu_heap_map_iommu,
.unmap_iommu = ion_iommu_heap_unmap_iommu,
.cache_op = ion_iommu_cache_ops,
.map_dma = ion_iommu_heap_map_dma,
.unmap_dma = ion_iommu_heap_unmap_dma,
};
struct ion_heap *ion_iommu_heap_create(struct ion_platform_heap *heap_data)
{
struct ion_iommu_heap *iommu_heap;
iommu_heap = kzalloc(sizeof(struct ion_iommu_heap), GFP_KERNEL);
if (!iommu_heap)
return ERR_PTR(-ENOMEM);
iommu_heap->heap.ops = &iommu_heap_ops;
iommu_heap->heap.type = ION_HEAP_TYPE_IOMMU;
iommu_heap->has_outer_cache = heap_data->has_outer_cache;
return &iommu_heap->heap;
}
void ion_iommu_heap_destroy(struct ion_heap *heap)
{
struct ion_iommu_heap *iommu_heap =
container_of(heap, struct ion_iommu_heap, heap);
kfree(iommu_heap);
iommu_heap = NULL;
}

drivers/gpu/ion/ion_priv.h Normal file

@@ -0,0 +1,310 @@
/*
* drivers/gpu/ion/ion_priv.h
*
* Copyright (C) 2011 Google, Inc.
* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _ION_PRIV_H
#define _ION_PRIV_H
#include <linux/kref.h>
#include <linux/mm_types.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
#include <linux/ion.h>
#include <linux/iommu.h>
#include <linux/seq_file.h>
enum {
DI_PARTITION_NUM = 0,
DI_DOMAIN_NUM = 1,
DI_MAX,
};
/**
* struct ion_iommu_map - represents a mapping of an ion buffer to an iommu
* @iova_addr - iommu virtual address
* @node - rb node to exist in the buffer's tree of iommu mappings
* @domain_info - contains the partition number and domain number
* domain_info[1] = domain number
* domain_info[0] = partition number
* @ref - for reference counting this mapping
* @mapped_size - size of the iova space mapped
* (may not be the same as the buffer size)
* @flags - iommu domain/partition specific flags.
*
* Represents a mapping of one ion buffer to a particular iommu domain
* and address range. There may exist other mappings of this buffer in
* different domains or address ranges. All mappings will have the same
* cacheability and security.
*/
struct ion_iommu_map {
unsigned long iova_addr;
struct rb_node node;
union {
int domain_info[DI_MAX];
uint64_t key;
};
struct ion_buffer *buffer;
struct kref ref;
int mapped_size;
unsigned long flags;
};
struct ion_buffer *ion_handle_buffer(struct ion_handle *handle);
/**
* struct ion_buffer - metadata for a particular buffer
* @ref: reference count
* @node: node in the ion_device buffers tree
* @dev: back pointer to the ion_device
* @heap: back pointer to the heap the buffer came from
* @flags: buffer specific flags
* @size: size of the buffer
* @priv_virt: private data to the buffer representable as
* a void *
* @priv_phys: private data to the buffer representable as
* an ion_phys_addr_t (and someday a phys_addr_t)
* @lock: protects the buffers cnt fields
* @kmap_cnt: number of times the buffer is mapped to the kernel
* @vaddr: the kernel mapping if kmap_cnt is not zero
* @dmap_cnt: number of times the buffer is mapped for dma
* @sg_table: the sg table for the buffer if dmap_cnt is not zero
*/
struct ion_buffer {
struct kref ref;
struct rb_node node;
struct ion_device *dev;
struct ion_heap *heap;
unsigned long flags;
size_t size;
union {
void *priv_virt;
ion_phys_addr_t priv_phys;
};
struct mutex lock;
int kmap_cnt;
void *vaddr;
int dmap_cnt;
struct sg_table *sg_table;
int umap_cnt;
unsigned int iommu_map_cnt;
struct rb_root iommu_maps;
int marked;
};
/**
* struct ion_heap_ops - ops to operate on a given heap
* @allocate: allocate memory
* @free: free memory
* @phys: get the physical address of a buffer (only defined for
* physically contiguous heaps)
* @map_dma: map the memory for dma to a scatterlist
* @unmap_dma: unmap the memory for dma
* @map_kernel: map memory into the kernel
* @unmap_kernel: unmap memory from the kernel
* @map_user: map memory into userspace
* @unmap_user: unmap memory from userspace
*/
struct ion_heap_ops {
int (*allocate) (struct ion_heap *heap,
struct ion_buffer *buffer, unsigned long len,
unsigned long align, unsigned long flags);
void (*free) (struct ion_buffer *buffer);
int (*phys) (struct ion_heap *heap, struct ion_buffer *buffer,
ion_phys_addr_t *addr, size_t *len);
struct sg_table *(*map_dma) (struct ion_heap *heap,
struct ion_buffer *buffer);
void (*unmap_dma) (struct ion_heap *heap, struct ion_buffer *buffer);
void * (*map_kernel) (struct ion_heap *heap, struct ion_buffer *buffer);
void (*unmap_kernel) (struct ion_heap *heap, struct ion_buffer *buffer);
int (*map_user) (struct ion_heap *mapper, struct ion_buffer *buffer,
struct vm_area_struct *vma);
void (*unmap_user) (struct ion_heap *mapper, struct ion_buffer *buffer);
int (*cache_op)(struct ion_heap *heap, struct ion_buffer *buffer,
void *vaddr, unsigned int offset,
unsigned int length, unsigned int cmd);
int (*map_iommu)(struct ion_buffer *buffer,
struct ion_iommu_map *map_data,
unsigned int domain_num,
unsigned int partition_num,
unsigned long align,
unsigned long iova_length,
unsigned long flags);
void (*unmap_iommu)(struct ion_iommu_map *data);
int (*print_debug)(struct ion_heap *heap, struct seq_file *s,
const struct rb_root *mem_map);
int (*secure_heap)(struct ion_heap *heap);
int (*unsecure_heap)(struct ion_heap *heap);
};
/**
* struct ion_heap - represents a heap in the system
* @node: rb node to put the heap on the device's tree of heaps
* @dev: back pointer to the ion_device
* @type: type of heap
* @ops: ops struct as above
* @id: id of heap, also indicates priority of this heap when
* allocating. These are specified by platform data and
* MUST be unique
* @name: used for debugging
*
* Represents a pool of memory from which buffers can be made. In some
* systems the only heap is regular system memory allocated via vmalloc.
* On others, some blocks might require large physically contiguous buffers
* that are allocated from a specially reserved heap.
*/
struct ion_heap {
struct rb_node node;
struct ion_device *dev;
enum ion_heap_type type;
struct ion_heap_ops *ops;
int id;
const char *name;
};
/**
* struct mem_map_data - represents information about the memory map for a heap
* @node: rb node used to store in the tree of mem_map_data
* @addr: start address of the memory region.
* @addr_end: end address of the memory region.
* @size: size of memory region
* @client_name: name of the client who owns this buffer.
*
*/
struct mem_map_data {
struct rb_node node;
unsigned long addr;
unsigned long addr_end;
unsigned long size;
const char *client_name;
};
#define iommu_map_domain(__m) ((__m)->domain_info[1])
#define iommu_map_partition(__m) ((__m)->domain_info[0])
/**
* ion_device_create - allocates and returns an ion device
* @custom_ioctl: arch specific ioctl function if applicable
*
* Returns a valid device on success or an ERR_PTR-encoded error on failure.
*/
struct ion_device *ion_device_create(long (*custom_ioctl)
(struct ion_client *client,
unsigned int cmd,
unsigned long arg));
/**
* ion_device_destroy - frees a device and its resources
* @dev: the device
*/
void ion_device_destroy(struct ion_device *dev);
/**
* ion_device_add_heap - adds a heap to the ion device
* @dev: the device
* @heap: the heap to add
*/
void ion_device_add_heap(struct ion_device *dev, struct ion_heap *heap);
/**
* functions for creating and destroying the built-in ion heaps.
* architectures can add their own custom architecture specific
* heaps as appropriate.
*/
struct ion_heap *ion_heap_create(struct ion_platform_heap *);
void ion_heap_destroy(struct ion_heap *);
struct ion_heap *ion_system_heap_create(struct ion_platform_heap *);
void ion_system_heap_destroy(struct ion_heap *);
struct ion_heap *ion_system_contig_heap_create(struct ion_platform_heap *);
void ion_system_contig_heap_destroy(struct ion_heap *);
struct ion_heap *ion_carveout_heap_create(struct ion_platform_heap *);
void ion_carveout_heap_destroy(struct ion_heap *);
struct ion_heap *ion_iommu_heap_create(struct ion_platform_heap *);
void ion_iommu_heap_destroy(struct ion_heap *);
struct ion_heap *ion_cp_heap_create(struct ion_platform_heap *);
void ion_cp_heap_destroy(struct ion_heap *);
struct ion_heap *ion_reusable_heap_create(struct ion_platform_heap *);
void ion_reusable_heap_destroy(struct ion_heap *);
/**
* kernel api to allocate/free from carveout -- used when carveout is
* used to back an architecture specific custom heap
*/
ion_phys_addr_t ion_carveout_allocate(struct ion_heap *heap, unsigned long size,
unsigned long align);
void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
unsigned long size);
struct ion_heap *msm_get_contiguous_heap(void);
/**
* The carveout/cp heaps return physical addresses. Since 0 may be a valid
* physical address, this value is used to indicate that allocation failed.
*/
#define ION_CARVEOUT_ALLOCATE_FAIL -1
#define ION_CP_ALLOCATE_FAIL -1
/**
* The reserved heap returns physical addresses. Since 0 may be a valid
* physical address, this value is used to indicate that allocation failed.
*/
#define ION_RESERVED_ALLOCATE_FAIL -1
/**
* ion_map_fmem_buffer - map fmem allocated memory into the kernel
* @buffer - buffer to map
* @phys_base - physical base of the heap
* @virt_base - virtual base of the heap
* @flags - flags for the heap
*
* Map fmem allocated memory into the kernel address space. This
* is designed to be used by other heaps that need fmem behavior.
* The virtual range must be pre-allocated.
*/
void *ion_map_fmem_buffer(struct ion_buffer *buffer, unsigned long phys_base,
void *virt_base, unsigned long flags);
/**
* ion_do_cache_op - do cache operations.
*
* @client - pointer to ION client.
* @handle - pointer to buffer handle.
* @uaddr - virtual address to operate on.
* @offset - offset from physical address.
* @len - Length of data to do cache operation on.
* @cmd - Cache operation to perform:
* ION_IOC_CLEAN_CACHES
* ION_IOC_INV_CACHES
* ION_IOC_CLEAN_INV_CACHES
*
* Returns 0 on success
*/
int ion_do_cache_op(struct ion_client *client, struct ion_handle *handle,
void *uaddr, unsigned long offset, unsigned long len,
unsigned int cmd);
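/*
* Example (sketch): cleaning the CPU caches for a buffer before handing
* it to a device. client, handle, vaddr and len are assumed to come from
* an earlier ion allocation and kernel mapping.
*
* ret = ion_do_cache_op(client, handle, vaddr, 0, len,
* ION_IOC_CLEAN_CACHES);
*/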
void ion_cp_heap_get_base(struct ion_heap *heap, unsigned long *base,
unsigned long *size);
void ion_mem_map_show(struct ion_heap *heap);
#endif /* _ION_PRIV_H */

View File

@@ -0,0 +1,552 @@
/*
* drivers/gpu/ion/ion_system_heap.c
*
* Copyright (C) 2011 Google, Inc.
* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/err.h>
#include <linux/ion.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/iommu.h>
#include <linux/seq_file.h>
#include <mach/iommu_domains.h>
#include "ion_priv.h"
#include <mach/memory.h>
#include <asm/cacheflush.h>
static atomic_t system_heap_allocated;
static atomic_t system_contig_heap_allocated;
static unsigned int system_heap_has_outer_cache;
static unsigned int system_heap_contig_has_outer_cache;
static int ion_system_heap_allocate(struct ion_heap *heap,
struct ion_buffer *buffer,
unsigned long size, unsigned long align,
unsigned long flags)
{
struct sg_table *table;
struct scatterlist *sg;
int i, j;
int npages = PAGE_ALIGN(size) / PAGE_SIZE;
table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
if (!table)
return -ENOMEM;
i = sg_alloc_table(table, npages, GFP_KERNEL);
if (i)
goto err0;
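/*
* Back the buffer with independent order-0 pages; the system heap
* never needs physical contiguity, only a populated scatterlist.
*/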
for_each_sg(table->sgl, sg, table->nents, i) {
struct page *page;
page = alloc_page(GFP_KERNEL);
if (!page)
goto err1;
sg_set_page(sg, page, PAGE_SIZE, 0);
}
buffer->priv_virt = table;
atomic_add(size, &system_heap_allocated);
return 0;
err1:
for_each_sg(table->sgl, sg, i, j)
__free_page(sg_page(sg));
sg_free_table(table);
err0:
kfree(table);
return -ENOMEM;
}
void ion_system_heap_free(struct ion_buffer *buffer)
{
int i;
struct scatterlist *sg;
struct sg_table *table = buffer->priv_virt;
for_each_sg(table->sgl, sg, table->nents, i)
__free_page(sg_page(sg));
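/*
* buffer->sg_table is the table handed out by
* ion_system_heap_map_dma() (i.e. priv_virt), so freeing it here
* also releases the table allocated in ion_system_heap_allocate().
*/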
if (buffer->sg_table)
sg_free_table(buffer->sg_table);
kfree(buffer->sg_table);
atomic_sub(buffer->size, &system_heap_allocated);
}
struct sg_table *ion_system_heap_map_dma(struct ion_heap *heap,
struct ion_buffer *buffer)
{
return buffer->priv_virt;
}
void ion_system_heap_unmap_dma(struct ion_heap *heap,
struct ion_buffer *buffer)
{
return;
}
void *ion_system_heap_map_kernel(struct ion_heap *heap,
struct ion_buffer *buffer)
{
if (!ION_IS_CACHED(buffer->flags)) {
pr_err("%s: cannot map system heap uncached\n", __func__);
return ERR_PTR(-EINVAL);
} else {
struct scatterlist *sg;
int i;
void *vaddr;
struct sg_table *table = buffer->priv_virt;
struct page **pages = kmalloc(
sizeof(struct page *) * table->nents,
GFP_KERNEL);
if (!pages)
return ERR_PTR(-ENOMEM);
for_each_sg(table->sgl, sg, table->nents, i)
pages[i] = sg_page(sg);
vaddr = vmap(pages, table->nents, VM_MAP, PAGE_KERNEL);
kfree(pages);
return vaddr;
}
}
void ion_system_heap_unmap_kernel(struct ion_heap *heap,
struct ion_buffer *buffer)
{
vunmap(buffer->vaddr);
}
void ion_system_heap_unmap_iommu(struct ion_iommu_map *data)
{
unsigned int domain_num;
unsigned int partition_num;
struct iommu_domain *domain;
if (!msm_use_iommu())
return;
domain_num = iommu_map_domain(data);
partition_num = iommu_map_partition(data);
domain = msm_get_iommu_domain(domain_num);
if (!domain) {
WARN(1, "Could not get domain %d. Corruption?\n", domain_num);
return;
}
iommu_unmap_range(domain, data->iova_addr, data->mapped_size);
msm_free_iova_address(data->iova_addr, domain_num, partition_num,
data->mapped_size);
return;
}
int ion_system_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
struct vm_area_struct *vma)
{
if (!ION_IS_CACHED(buffer->flags)) {
pr_err("%s: cannot map system heap uncached\n", __func__);
return -EINVAL;
} else {
struct sg_table *table = buffer->priv_virt;
unsigned long addr = vma->vm_start;
unsigned long offset = vma->vm_pgoff;
struct scatterlist *sg;
int i;
for_each_sg(table->sgl, sg, table->nents, i) {
if (offset) {
offset--;
continue;
}
vm_insert_page(vma, addr, sg_page(sg));
addr += PAGE_SIZE;
}
return 0;
}
}
int ion_system_heap_cache_ops(struct ion_heap *heap, struct ion_buffer *buffer,
void *vaddr, unsigned int offset, unsigned int length,
unsigned int cmd)
{
void (*outer_cache_op)(phys_addr_t, phys_addr_t);
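/*
* Pick the matching inner (dmac_*) and outer cache operations; the
* outer op is applied page by page below because a system-heap
* buffer is not physically contiguous.
*/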
switch (cmd) {
case ION_IOC_CLEAN_CACHES:
dmac_clean_range(vaddr, vaddr + length);
outer_cache_op = outer_clean_range;
break;
case ION_IOC_INV_CACHES:
dmac_inv_range(vaddr, vaddr + length);
outer_cache_op = outer_inv_range;
break;
case ION_IOC_CLEAN_INV_CACHES:
dmac_flush_range(vaddr, vaddr + length);
outer_cache_op = outer_flush_range;
break;
default:
return -EINVAL;
}
if (system_heap_has_outer_cache) {
unsigned long pstart;
struct sg_table *table = buffer->priv_virt;
struct scatterlist *sg;
int i;
for_each_sg(table->sgl, sg, table->nents, i) {
struct page *page = sg_page(sg);
pstart = page_to_phys(page);
/*
* If page_to_phys() returned 0, something has
* really gone wrong...
*/
if (!pstart) {
WARN(1, "Could not translate virtual address to physical address\n");
return -EINVAL;
}
outer_cache_op(pstart, pstart + PAGE_SIZE);
}
}
return 0;
}
static int ion_system_print_debug(struct ion_heap *heap, struct seq_file *s,
const struct rb_root *unused)
{
seq_printf(s, "total bytes currently allocated: %lx\n",
(unsigned long) atomic_read(&system_heap_allocated));
return 0;
}
int ion_system_heap_map_iommu(struct ion_buffer *buffer,
struct ion_iommu_map *data,
unsigned int domain_num,
unsigned int partition_num,
unsigned long align,
unsigned long iova_length,
unsigned long flags)
{
int ret = 0;
struct iommu_domain *domain;
unsigned long extra;
unsigned long extra_iova_addr;
struct sg_table *table = buffer->priv_virt;
int prot = IOMMU_WRITE | IOMMU_READ;
prot |= ION_IS_CACHED(flags) ? IOMMU_CACHE : 0;
if (!ION_IS_CACHED(flags))
return -EINVAL;
if (!msm_use_iommu())
return -EINVAL;
data->mapped_size = iova_length;
extra = iova_length - buffer->size;
ret = msm_allocate_iova_address(domain_num, partition_num,
data->mapped_size, align,
&data->iova_addr);
if (ret)
goto out;
domain = msm_get_iommu_domain(domain_num);
if (!domain) {
ret = -ENOMEM;
goto out1;
}
ret = iommu_map_range(domain, data->iova_addr, table->sgl,
buffer->size, prot);
if (ret) {
pr_err("%s: could not map %lx in domain %p\n",
__func__, data->iova_addr, domain);
goto out1;
}
extra_iova_addr = data->iova_addr + buffer->size;
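/*
* The caller may request an IOVA window larger than the buffer;
* back the remainder as well so the whole window is mapped.
*/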
if (extra) {
ret = msm_iommu_map_extra(domain, extra_iova_addr, extra, SZ_4K,
prot);
if (ret)
goto out2;
}
return ret;
out2:
iommu_unmap_range(domain, data->iova_addr, buffer->size);
out1:
msm_free_iova_address(data->iova_addr, domain_num, partition_num,
data->mapped_size);
out:
return ret;
}
static struct ion_heap_ops vmalloc_ops = {
.allocate = ion_system_heap_allocate,
.free = ion_system_heap_free,
.map_dma = ion_system_heap_map_dma,
.unmap_dma = ion_system_heap_unmap_dma,
.map_kernel = ion_system_heap_map_kernel,
.unmap_kernel = ion_system_heap_unmap_kernel,
.map_user = ion_system_heap_map_user,
.cache_op = ion_system_heap_cache_ops,
.print_debug = ion_system_print_debug,
.map_iommu = ion_system_heap_map_iommu,
.unmap_iommu = ion_system_heap_unmap_iommu,
};
struct ion_heap *ion_system_heap_create(struct ion_platform_heap *pheap)
{
struct ion_heap *heap;
heap = kzalloc(sizeof(struct ion_heap), GFP_KERNEL);
if (!heap)
return ERR_PTR(-ENOMEM);
heap->ops = &vmalloc_ops;
heap->type = ION_HEAP_TYPE_SYSTEM;
system_heap_has_outer_cache = pheap->has_outer_cache;
return heap;
}
void ion_system_heap_destroy(struct ion_heap *heap)
{
kfree(heap);
}
static int ion_system_contig_heap_allocate(struct ion_heap *heap,
struct ion_buffer *buffer,
unsigned long len,
unsigned long align,
unsigned long flags)
{
buffer->priv_virt = kzalloc(len, GFP_KERNEL);
if (!buffer->priv_virt)
return -ENOMEM;
atomic_add(len, &system_contig_heap_allocated);
return 0;
}
void ion_system_contig_heap_free(struct ion_buffer *buffer)
{
kfree(buffer->priv_virt);
atomic_sub(buffer->size, &system_contig_heap_allocated);
}
static int ion_system_contig_heap_phys(struct ion_heap *heap,
struct ion_buffer *buffer,
ion_phys_addr_t *addr, size_t *len)
{
*addr = virt_to_phys(buffer->priv_virt);
*len = buffer->size;
return 0;
}
struct sg_table *ion_system_contig_heap_map_dma(struct ion_heap *heap,
struct ion_buffer *buffer)
{
struct sg_table *table;
int ret;
table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
if (!table)
return ERR_PTR(-ENOMEM);
ret = sg_alloc_table(table, 1, GFP_KERNEL);
if (ret) {
kfree(table);
return ERR_PTR(ret);
}
sg_set_page(table->sgl, virt_to_page(buffer->priv_virt), buffer->size,
0);
return table;
}
int ion_system_contig_heap_map_user(struct ion_heap *heap,
struct ion_buffer *buffer,
struct vm_area_struct *vma)
{
unsigned long pfn = __phys_to_pfn(virt_to_phys(buffer->priv_virt));
if (ION_IS_CACHED(buffer->flags))
return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
vma->vm_end - vma->vm_start,
vma->vm_page_prot);
else {
pr_err("%s: cannot map system heap uncached\n", __func__);
return -EINVAL;
}
}
int ion_system_contig_heap_cache_ops(struct ion_heap *heap,
struct ion_buffer *buffer, void *vaddr,
unsigned int offset, unsigned int length,
unsigned int cmd)
{
void (*outer_cache_op)(phys_addr_t, phys_addr_t);
switch (cmd) {
case ION_IOC_CLEAN_CACHES:
dmac_clean_range(vaddr, vaddr + length);
outer_cache_op = outer_clean_range;
break;
case ION_IOC_INV_CACHES:
dmac_inv_range(vaddr, vaddr + length);
outer_cache_op = outer_inv_range;
break;
case ION_IOC_CLEAN_INV_CACHES:
dmac_flush_range(vaddr, vaddr + length);
outer_cache_op = outer_flush_range;
break;
default:
return -EINVAL;
}
if (system_heap_contig_has_outer_cache) {
unsigned long pstart;
pstart = virt_to_phys(buffer->priv_virt) + offset;
if (!pstart) {
WARN(1, "Could not do virt to phys translation on %p\n",
buffer->priv_virt);
return -EINVAL;
}
outer_cache_op(pstart, pstart + PAGE_SIZE);
}
return 0;
}
static int ion_system_contig_print_debug(struct ion_heap *heap,
struct seq_file *s,
const struct rb_root *unused)
{
seq_printf(s, "total bytes currently allocated: %lx\n",
(unsigned long) atomic_read(&system_contig_heap_allocated));
return 0;
}
int ion_system_contig_heap_map_iommu(struct ion_buffer *buffer,
struct ion_iommu_map *data,
unsigned int domain_num,
unsigned int partition_num,
unsigned long align,
unsigned long iova_length,
unsigned long flags)
{
int ret = 0;
struct iommu_domain *domain;
unsigned long extra;
struct scatterlist *sglist = NULL;
struct page *page = NULL;
int prot = IOMMU_WRITE | IOMMU_READ;
prot |= ION_IS_CACHED(flags) ? IOMMU_CACHE : 0;
if (!ION_IS_CACHED(flags))
return -EINVAL;
if (!msm_use_iommu()) {
data->iova_addr = virt_to_phys(buffer->vaddr);
return 0;
}
data->mapped_size = iova_length;
extra = iova_length - buffer->size;
ret = msm_allocate_iova_address(domain_num, partition_num,
data->mapped_size, align,
&data->iova_addr);
if (ret)
goto out;
domain = msm_get_iommu_domain(domain_num);
if (!domain) {
ret = -ENOMEM;
goto out1;
}
page = virt_to_page(buffer->vaddr);
sglist = vmalloc(sizeof(*sglist));
if (!sglist) {
ret = -ENOMEM;
goto out1;
}
sg_init_table(sglist, 1);
sg_set_page(sglist, page, buffer->size, 0);
ret = iommu_map_range(domain, data->iova_addr, sglist,
buffer->size, prot);
if (ret) {
pr_err("%s: could not map %lx in domain %p\n",
__func__, data->iova_addr, domain);
goto out1;
}
if (extra) {
unsigned long extra_iova_addr = data->iova_addr + buffer->size;
ret = msm_iommu_map_extra(domain, extra_iova_addr, extra, SZ_4K,
prot);
if (ret)
goto out2;
}
vfree(sglist);
return ret;
out2:
iommu_unmap_range(domain, data->iova_addr, buffer->size);
out1:
vfree(sglist);
msm_free_iova_address(data->iova_addr, domain_num, partition_num,
data->mapped_size);
out:
return ret;
}
static struct ion_heap_ops kmalloc_ops = {
.allocate = ion_system_contig_heap_allocate,
.free = ion_system_contig_heap_free,
.phys = ion_system_contig_heap_phys,
.map_dma = ion_system_contig_heap_map_dma,
.unmap_dma = ion_system_heap_unmap_dma,
.map_kernel = ion_system_heap_map_kernel,
.unmap_kernel = ion_system_heap_unmap_kernel,
.map_user = ion_system_contig_heap_map_user,
.cache_op = ion_system_contig_heap_cache_ops,
.print_debug = ion_system_contig_print_debug,
.map_iommu = ion_system_contig_heap_map_iommu,
.unmap_iommu = ion_system_heap_unmap_iommu,
};
struct ion_heap *ion_system_contig_heap_create(struct ion_platform_heap *pheap)
{
struct ion_heap *heap;
heap = kzalloc(sizeof(struct ion_heap), GFP_KERNEL);
if (!heap)
return ERR_PTR(-ENOMEM);
heap->ops = &kmalloc_ops;
heap->type = ION_HEAP_TYPE_SYSTEM_CONTIG;
system_heap_contig_has_outer_cache = pheap->has_outer_cache;
return heap;
}
void ion_system_contig_heap_destroy(struct ion_heap *heap)
{
kfree(heap);
}

View File

@@ -0,0 +1,114 @@
/*
* drivers/gpu/ion/ion_system_mapper.c
*
* Copyright (C) 2011 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/err.h>
#include <linux/ion.h>
#include <linux/memory.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include "ion_priv.h"
/*
* This mapper is valid for any heap that allocates memory that already has
* a kernel mapping; this includes vmalloc'd memory, kmalloc'd memory,
* pages obtained via ioremap, etc.
*/
static void *ion_kernel_mapper_map(struct ion_mapper *mapper,
struct ion_buffer *buffer,
struct ion_mapping **mapping)
{
if (!((1 << buffer->heap->type) & mapper->heap_mask)) {
pr_err("%s: attempting to map an unsupported heap\n", __func__);
return ERR_PTR(-EINVAL);
}
/* XXX REVISIT ME!!! */
*((unsigned long *)mapping) = (unsigned long)buffer->priv;
return buffer->priv;
}
static void ion_kernel_mapper_unmap(struct ion_mapper *mapper,
struct ion_buffer *buffer,
struct ion_mapping *mapping)
{
if (!((1 << buffer->heap->type) & mapper->heap_mask))
pr_err("%s: attempting to unmap an unsupported heap\n",
__func__);
}
static void *ion_kernel_mapper_map_kernel(struct ion_mapper *mapper,
struct ion_buffer *buffer,
struct ion_mapping *mapping)
{
if (!((1 << buffer->heap->type) & mapper->heap_mask)) {
pr_err("%s: attempting to unmap an unsupported heap\n",
__func__);
return ERR_PTR(-EINVAL);
}
return buffer->priv;
}
static int ion_kernel_mapper_map_user(struct ion_mapper *mapper,
struct ion_buffer *buffer,
struct vm_area_struct *vma,
struct ion_mapping *mapping)
{
int ret;
switch (buffer->heap->type) {
case ION_HEAP_KMALLOC:
{
unsigned long pfn = __phys_to_pfn(virt_to_phys(buffer->priv));
ret = remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
vma->vm_end - vma->vm_start,
vma->vm_page_prot);
break;
}
case ION_HEAP_VMALLOC:
ret = remap_vmalloc_range(vma, buffer->priv, vma->vm_pgoff);
break;
default:
pr_err("%s: attempting to map unsupported heap to userspace\n",
__func__);
return -EINVAL;
}
return ret;
}
static struct ion_mapper_ops ops = {
.map = ion_kernel_mapper_map,
.map_kernel = ion_kernel_mapper_map_kernel,
.map_user = ion_kernel_mapper_map_user,
.unmap = ion_kernel_mapper_unmap,
};
struct ion_mapper *ion_system_mapper_create(void)
{
struct ion_mapper *mapper;
mapper = kzalloc(sizeof(struct ion_mapper), GFP_KERNEL);
if (!mapper)
return ERR_PTR(-ENOMEM);
mapper->type = ION_SYSTEM_MAPPER;
mapper->ops = &ops;
mapper->heap_mask = (1 << ION_HEAP_VMALLOC) | (1 << ION_HEAP_KMALLOC);
return mapper;
}
void ion_system_mapper_destroy(struct ion_mapper *mapper)
{
kfree(mapper);
}

View File

@@ -0,0 +1 @@
obj-y += msm_ion.o

View File

@@ -0,0 +1,347 @@
/* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/export.h>
#include <linux/err.h>
#include <linux/ion.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/memory_alloc.h>
#include <linux/fmem.h>
#include <mach/ion.h>
#include <mach/msm_memtypes.h>
#include "../ion_priv.h"
static struct ion_device *idev;
static int num_heaps;
static struct ion_heap **heaps;
struct ion_client *msm_ion_client_create(unsigned int heap_mask,
const char *name)
{
return ion_client_create(idev, heap_mask, name);
}
EXPORT_SYMBOL(msm_ion_client_create);
int msm_ion_secure_heap(int heap_id)
{
return ion_secure_heap(idev, heap_id);
}
EXPORT_SYMBOL(msm_ion_secure_heap);
int msm_ion_unsecure_heap(int heap_id)
{
return ion_unsecure_heap(idev, heap_id);
}
EXPORT_SYMBOL(msm_ion_unsecure_heap);
int msm_ion_do_cache_op(struct ion_client *client, struct ion_handle *handle,
void *vaddr, unsigned long len, unsigned int cmd)
{
return ion_do_cache_op(client, handle, vaddr, 0, len, cmd);
}
EXPORT_SYMBOL(msm_ion_do_cache_op);
static unsigned long msm_ion_get_base(unsigned long size, int memory_type,
unsigned int align)
{
switch (memory_type) {
case ION_EBI_TYPE:
return allocate_contiguous_ebi_nomap(size, align);
case ION_SMI_TYPE:
return allocate_contiguous_memory_nomap(size, MEMTYPE_SMI,
align);
default:
pr_err("%s: Unknown memory type %d\n", __func__, memory_type);
return 0;
}
}
static struct ion_platform_heap *find_heap(const struct ion_platform_heap
heap_data[],
unsigned int nr_heaps,
int heap_id)
{
unsigned int i;
for (i = 0; i < nr_heaps; ++i) {
const struct ion_platform_heap *heap = &heap_data[i];
if (heap->id == heap_id)
return (struct ion_platform_heap *) heap;
}
return NULL;
}
static void ion_set_base_address(struct ion_platform_heap *heap,
struct ion_platform_heap *shared_heap,
struct ion_co_heap_pdata *co_heap_data,
struct ion_cp_heap_pdata *cp_data)
{
if (cp_data->reusable) {
const struct fmem_data *fmem_info = fmem_get_info();
if (!fmem_info) {
pr_err("fmem info pointer NULL!\n");
BUG();
}
heap->base = fmem_info->phys - fmem_info->reserved_size_low;
cp_data->virt_addr = fmem_info->virt;
pr_info("ION heap %s using FMEM\n", shared_heap->name);
} else {
heap->base = msm_ion_get_base(heap->size + shared_heap->size,
shared_heap->memory_type,
co_heap_data->align);
}
if (heap->base) {
shared_heap->base = heap->base + heap->size;
cp_data->secure_base = heap->base;
cp_data->secure_size = heap->size + shared_heap->size;
} else {
pr_err("%s: could not get memory for heap %s (id %x)\n",
__func__, heap->name, heap->id);
}
}
static void allocate_co_memory(struct ion_platform_heap *heap,
struct ion_platform_heap heap_data[],
unsigned int nr_heaps)
{
struct ion_co_heap_pdata *co_heap_data =
(struct ion_co_heap_pdata *) heap->extra_data;
if (co_heap_data->adjacent_mem_id != INVALID_HEAP_ID) {
struct ion_platform_heap *shared_heap =
find_heap(heap_data, nr_heaps,
co_heap_data->adjacent_mem_id);
if (shared_heap) {
struct ion_cp_heap_pdata *cp_data =
(struct ion_cp_heap_pdata *) shared_heap->extra_data;
if (cp_data->fixed_position == FIXED_MIDDLE) {
const struct fmem_data *fmem_info =
fmem_get_info();
if (!fmem_info) {
pr_err("fmem info pointer NULL!\n");
BUG();
}
cp_data->virt_addr = fmem_info->virt;
if (!cp_data->secure_base) {
cp_data->secure_base = heap->base;
cp_data->secure_size =
heap->size + shared_heap->size;
}
} else if (!heap->base) {
ion_set_base_address(heap, shared_heap,
co_heap_data, cp_data);
}
}
}
}
/* Fix up heaps in the board file to support two heaps being adjacent to
* each other. A flag (adjacent_mem_id) in the platform data tells us that
* the heap's physical memory must sit next to the specified heap. We do
* this by carving out memory for both heaps and then splitting it between
* the two. The heap specifying "adjacent_mem_id" gets the base of the
* memory, while the heap named by "adjacent_mem_id" gets base + size as
* its base address.
* Note: Modifies platform data and allocates memory.
*/
static void msm_ion_heap_fixup(struct ion_platform_heap heap_data[],
unsigned int nr_heaps)
{
unsigned int i;
for (i = 0; i < nr_heaps; i++) {
struct ion_platform_heap *heap = &heap_data[i];
if (heap->type == ION_HEAP_TYPE_CARVEOUT) {
if (heap->extra_data)
allocate_co_memory(heap, heap_data, nr_heaps);
}
}
}
static void msm_ion_allocate(struct ion_platform_heap *heap)
{
if (!heap->base && heap->extra_data) {
unsigned int align = 0;
switch (heap->type) {
case ION_HEAP_TYPE_CARVEOUT:
align =
((struct ion_co_heap_pdata *) heap->extra_data)->align;
break;
case ION_HEAP_TYPE_CP:
{
struct ion_cp_heap_pdata *data =
(struct ion_cp_heap_pdata *)
heap->extra_data;
if (data->reusable) {
const struct fmem_data *fmem_info =
fmem_get_info();
heap->base = fmem_info->phys;
data->virt_addr = fmem_info->virt;
pr_info("ION heap %s using FMEM\n", heap->name);
} else if (data->mem_is_fmem) {
const struct fmem_data *fmem_info =
fmem_get_info();
heap->base = fmem_info->phys + fmem_info->size;
}
align = data->align;
break;
}
default:
break;
}
if (align && !heap->base) {
heap->base = msm_ion_get_base(heap->size,
heap->memory_type,
align);
if (!heap->base)
pr_err("%s: could not get memory for heap %s "
"(id %x)\n", __func__, heap->name, heap->id);
}
}
}
static int is_heap_overlapping(const struct ion_platform_heap *heap1,
const struct ion_platform_heap *heap2)
{
unsigned long heap1_base = heap1->base;
unsigned long heap2_base = heap2->base;
unsigned long heap1_end = heap1->base + heap1->size - 1;
unsigned long heap2_end = heap2->base + heap2->size - 1;
if (heap1_base == heap2_base)
return 1;
if (heap1_base < heap2_base && heap1_end >= heap2_base)
return 1;
if (heap2_base < heap1_base && heap2_end >= heap1_base)
return 1;
return 0;
}
static void check_for_heap_overlap(const struct ion_platform_heap heap_list[],
unsigned long nheaps)
{
unsigned long i;
unsigned long j;
for (i = 0; i < nheaps; ++i) {
const struct ion_platform_heap *heap1 = &heap_list[i];
if (!heap1->base)
continue;
for (j = i + 1; j < nheaps; ++j) {
const struct ion_platform_heap *heap2 = &heap_list[j];
if (!heap2->base)
continue;
if (is_heap_overlapping(heap1, heap2)) {
panic("Memory in heap %s overlaps with heap %s\n",
heap1->name, heap2->name);
}
}
}
}
static int msm_ion_probe(struct platform_device *pdev)
{
struct ion_platform_data *pdata = pdev->dev.platform_data;
int err;
int i;
num_heaps = pdata->nr;
heaps = kcalloc(pdata->nr, sizeof(struct ion_heap *), GFP_KERNEL);
if (!heaps) {
err = -ENOMEM;
goto out;
}
idev = ion_device_create(NULL);
if (IS_ERR_OR_NULL(idev)) {
err = IS_ERR(idev) ? PTR_ERR(idev) : -ENOMEM;
goto freeheaps;
}
msm_ion_heap_fixup(pdata->heaps, num_heaps);
/* create the heaps as specified in the board file */
for (i = 0; i < num_heaps; i++) {
struct ion_platform_heap *heap_data = &pdata->heaps[i];
msm_ion_allocate(heap_data);
heap_data->has_outer_cache = pdata->has_outer_cache;
heaps[i] = ion_heap_create(heap_data);
if (IS_ERR_OR_NULL(heaps[i])) {
heaps[i] = NULL;
continue;
} else {
if (heap_data->size)
pr_info("ION heap %s created at %lx "
"with size %x\n", heap_data->name,
heap_data->base,
heap_data->size);
else
pr_info("ION heap %s created\n",
heap_data->name);
}
ion_device_add_heap(idev, heaps[i]);
}
check_for_heap_overlap(pdata->heaps, num_heaps);
platform_set_drvdata(pdev, idev);
return 0;
freeheaps:
kfree(heaps);
out:
return err;
}
static int msm_ion_remove(struct platform_device *pdev)
{
struct ion_device *idev = platform_get_drvdata(pdev);
int i;
for (i = 0; i < num_heaps; i++)
ion_heap_destroy(heaps[i]);
ion_device_destroy(idev);
kfree(heaps);
return 0;
}
static struct platform_driver msm_ion_driver = {
.probe = msm_ion_probe,
.remove = msm_ion_remove,
.driver = { .name = "ion-msm" }
};
static int __init msm_ion_init(void)
{
return platform_driver_register(&msm_ion_driver);
}
static void __exit msm_ion_exit(void)
{
platform_driver_unregister(&msm_ion_driver);
}
subsys_initcall(msm_ion_init);
module_exit(msm_ion_exit);

View File

@@ -0,0 +1 @@
obj-y += tegra_ion.o

View File

@@ -0,0 +1,96 @@
/*
* drivers/gpu/tegra/tegra_ion.c
*
* Copyright (C) 2011 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/err.h>
#include <linux/ion.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include "../ion_priv.h"
struct ion_device *idev;
struct ion_mapper *tegra_user_mapper;
int num_heaps;
struct ion_heap **heaps;
int tegra_ion_probe(struct platform_device *pdev)
{
struct ion_platform_data *pdata = pdev->dev.platform_data;
int err;
int i;
num_heaps = pdata->nr;
heaps = kzalloc(sizeof(struct ion_heap *) * pdata->nr, GFP_KERNEL);
if (!heaps)
return -ENOMEM;
idev = ion_device_create(NULL);
if (IS_ERR_OR_NULL(idev)) {
kfree(heaps);
return PTR_ERR(idev);
}
/* create the heaps as specified in the board file */
for (i = 0; i < num_heaps; i++) {
struct ion_platform_heap *heap_data = &pdata->heaps[i];
heaps[i] = ion_heap_create(heap_data);
if (IS_ERR_OR_NULL(heaps[i])) {
err = PTR_ERR(heaps[i]);
goto err;
}
ion_device_add_heap(idev, heaps[i]);
}
platform_set_drvdata(pdev, idev);
return 0;
err:
for (i = 0; i < num_heaps; i++) {
if (heaps[i])
ion_heap_destroy(heaps[i]);
}
kfree(heaps);
return err;
}
int tegra_ion_remove(struct platform_device *pdev)
{
struct ion_device *idev = platform_get_drvdata(pdev);
int i;
ion_device_destroy(idev);
for (i = 0; i < num_heaps; i++)
ion_heap_destroy(heaps[i]);
kfree(heaps);
return 0;
}
static struct platform_driver ion_driver = {
.probe = tegra_ion_probe,
.remove = tegra_ion_remove,
.driver = { .name = "ion-tegra" }
};
static int __init ion_init(void)
{
return platform_driver_register(&ion_driver);
}
static void __exit ion_exit(void)
{
platform_driver_unregister(&ion_driver);
}
module_init(ion_init);
module_exit(ion_exit);

View File

@@ -1201,6 +1201,9 @@ int hidinput_connect(struct hid_device *hid, unsigned int force)
* UGCI) cram a lot of unrelated inputs into the
* same interface. */
hidinput->report = report;
if (hid->driver->input_register &&
hid->driver->input_register(hid, hidinput))
goto out_cleanup;
if (input_register_device(hidinput->input))
goto out_cleanup;
hidinput = NULL;
@@ -1215,6 +1218,10 @@ int hidinput_connect(struct hid_device *hid, unsigned int force)
goto out_unwind;
}
if (hidinput && hid->driver->input_register &&
hid->driver->input_register(hid, hidinput))
goto out_cleanup;
if (hidinput && input_register_device(hidinput->input))
goto out_cleanup;

View File

@@ -387,8 +387,10 @@ static int magicmouse_raw_event(struct hid_device *hdev,
return 1;
}
static void magicmouse_setup_input(struct input_dev *input, struct hid_device *hdev)
static int magicmouse_setup_input(struct hid_device *hdev, struct hid_input *hi)
{
struct input_dev *input = hi->input;
__set_bit(EV_KEY, input->evbit);
if (input->id.product == USB_DEVICE_ID_APPLE_MAGICMOUSE) {
@@ -471,6 +473,8 @@ static void magicmouse_setup_input(struct input_dev *input, struct hid_device *h
__set_bit(EV_MSC, input->evbit);
__set_bit(MSC_RAW, input->mscbit);
}
return 0;
}
static int magicmouse_input_mapping(struct hid_device *hdev,
@@ -523,12 +527,6 @@ static int magicmouse_probe(struct hid_device *hdev,
goto err_free;
}
/* We do this after hid-input is done parsing reports so that
* hid-input uses the most natural button and axis IDs.
*/
if (msc->input)
magicmouse_setup_input(msc->input, hdev);
if (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE)
report = hid_register_report(hdev, HID_INPUT_REPORT,
MOUSE_REPORT_ID);
@@ -593,6 +591,7 @@ static struct hid_driver magicmouse_driver = {
.remove = magicmouse_remove,
.raw_event = magicmouse_raw_event,
.input_mapping = magicmouse_input_mapping,
.input_register = magicmouse_setup_input,
};
static int __init magicmouse_init(void)

View File

@@ -319,6 +319,16 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
if (field->physical == HID_DG_STYLUS)
return -1;
/* Only map fields from TouchScreen or TouchPad collections.
* We need to ignore fields that belong to other collections
* such as Mouse that might have the same GenericDesktop usages. */
if (field->application == HID_DG_TOUCHSCREEN)
set_bit(INPUT_PROP_DIRECT, hi->input->propbit);
else if (field->application == HID_DG_TOUCHPAD)
set_bit(INPUT_PROP_POINTER, hi->input->propbit);
else
return 0;
switch (usage->hid & HID_USAGE_PAGE) {
case HID_UP_GENDESK:

View File

@@ -165,6 +165,15 @@ config INPUT_APMPOWER
To compile this driver as a module, choose M here: the
module will be called apm-power.
config INPUT_KEYRESET
tristate "Reset key"
depends on INPUT
---help---
Say Y here if you want to reboot when some keys are pressed.
To compile this driver as a module, choose M here: the
module will be called keyreset.
comment "Input Device Drivers"
source "drivers/input/keyboard/Kconfig"

View File

@@ -25,3 +25,4 @@ obj-$(CONFIG_INPUT_MISC) += misc/
obj-$(CONFIG_INPUT_APMPOWER) += apm-power.o
obj-$(CONFIG_INPUT_OF_MATRIX_KEYMAP) += of_keymap.o
obj-$(CONFIG_INPUT_KEYRESET) += keyreset.o

View File

@@ -23,6 +23,7 @@
#include <linux/input/mt.h>
#include <linux/major.h>
#include <linux/device.h>
#include <linux/wakelock.h>
#include "input-compat.h"
struct evdev {
@@ -43,6 +44,9 @@ struct evdev_client {
unsigned int tail;
unsigned int packet_head; /* [future] position of the first element of next packet */
spinlock_t buffer_lock; /* protects access to buffer, head and tail */
struct wake_lock wake_lock;
bool use_wake_lock;
char name[28];
struct fasync_struct *fasync;
struct evdev *evdev;
struct list_head node;
@@ -80,10 +84,14 @@ static void evdev_pass_event(struct evdev_client *client,
client->buffer[client->tail].value = 0;
client->packet_head = client->tail;
if (client->use_wake_lock)
wake_unlock(&client->wake_lock);
}
if (event->type == EV_SYN && event->code == SYN_REPORT) {
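/* A full packet is now buffered; hold the wakelock until the
* reader drains it (see evdev_fetch_next_event).
*/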
client->packet_head = client->head;
if (client->use_wake_lock)
wake_lock(&client->wake_lock);
kill_fasync(&client->fasync, SIGIO, POLL_IN);
}
@@ -264,6 +272,8 @@ static int evdev_release(struct inode *inode, struct file *file)
mutex_unlock(&evdev->mutex);
evdev_detach_client(evdev, client);
if (client->use_wake_lock)
wake_lock_destroy(&client->wake_lock);
kfree(client);
evdev_close_device(evdev);
@@ -315,6 +325,8 @@ static int evdev_open(struct inode *inode, struct file *file)
client->bufsize = bufsize;
spin_lock_init(&client->buffer_lock);
snprintf(client->name, sizeof(client->name), "%s-%d",
dev_name(&evdev->dev), task_tgid_vnr(current));
client->evdev = evdev;
evdev_attach_client(evdev, client);
@@ -382,6 +394,9 @@ static int evdev_fetch_next_event(struct evdev_client *client,
if (have_event) {
*event = client->buffer[client->tail++];
client->tail &= client->bufsize - 1;
if (client->use_wake_lock &&
client->packet_head == client->tail)
wake_unlock(&client->wake_lock);
}
spin_unlock_irq(&client->buffer_lock);
@@ -654,6 +669,35 @@ static int evdev_handle_mt_request(struct input_dev *dev,
return 0;
}
static int evdev_enable_suspend_block(struct evdev *evdev,
struct evdev_client *client)
{
if (client->use_wake_lock)
return 0;
spin_lock_irq(&client->buffer_lock);
wake_lock_init(&client->wake_lock, WAKE_LOCK_SUSPEND, client->name);
client->use_wake_lock = true;
if (client->packet_head != client->tail)
wake_lock(&client->wake_lock);
spin_unlock_irq(&client->buffer_lock);
return 0;
}
static int evdev_disable_suspend_block(struct evdev *evdev,
struct evdev_client *client)
{
if (!client->use_wake_lock)
return 0;
spin_lock_irq(&client->buffer_lock);
client->use_wake_lock = false;
wake_lock_destroy(&client->wake_lock);
spin_unlock_irq(&client->buffer_lock);
return 0;
}
static long evdev_do_ioctl(struct file *file, unsigned int cmd,
void __user *p, int compat_mode)
{
@@ -735,6 +779,15 @@ static long evdev_do_ioctl(struct file *file, unsigned int cmd,
case EVIOCSKEYCODE_V2:
return evdev_handle_set_keycode_v2(dev, p);
case EVIOCGSUSPENDBLOCK:
return put_user(client->use_wake_lock, ip);
case EVIOCSSUSPENDBLOCK:
if (p)
return evdev_enable_suspend_block(evdev, client);
else
return evdev_disable_suspend_block(evdev, client);
}
size = _IOC_SIZE(cmd);
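/*
* Userspace sketch (not part of this file): driving the new
* suspend-block ioctls. Assumes EVIOCGSUSPENDBLOCK and
* EVIOCSSUSPENDBLOCK are exported through <linux/input.h> by this
* patch; the device path is up to the caller.
*/
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

static int open_with_suspend_block(const char *path)
{
int fd = open(path, O_RDONLY);

if (fd < 0)
return -1;
/* A non-zero argument routes to evdev_enable_suspend_block(). */
if (ioctl(fd, EVIOCSSUSPENDBLOCK, 1) < 0) {
close(fd);
return -1;
}
return fd;
}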

239
drivers/input/keyreset.c Normal file
View File

@@ -0,0 +1,239 @@
/* drivers/input/keyreset.c
*
* Copyright (C) 2008 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/input.h>
#include <linux/keyreset.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/reboot.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/syscalls.h>
struct keyreset_state {
struct input_handler input_handler;
unsigned long keybit[BITS_TO_LONGS(KEY_CNT)];
unsigned long upbit[BITS_TO_LONGS(KEY_CNT)];
unsigned long key[BITS_TO_LONGS(KEY_CNT)];
spinlock_t lock;
int key_down_target;
int key_down;
int key_up;
int restart_disabled;
int (*reset_fn)(void);
};
int restart_requested;
static void deferred_restart(struct work_struct *dummy)
{
restart_requested = 2;
sys_sync();
restart_requested = 3;
kernel_restart(NULL);
}
static DECLARE_WORK(restart_work, deferred_restart);
static void keyreset_event(struct input_handle *handle, unsigned int type,
unsigned int code, int value)
{
unsigned long flags;
struct keyreset_state *state = handle->private;
if (type != EV_KEY)
return;
if (code >= KEY_MAX)
return;
if (!test_bit(code, state->keybit))
return;
spin_lock_irqsave(&state->lock, flags);
if (!test_bit(code, state->key) == !value)
goto done;
__change_bit(code, state->key);
if (test_bit(code, state->upbit)) {
if (value) {
state->restart_disabled = 1;
state->key_up++;
} else
state->key_up--;
} else {
if (value)
state->key_down++;
else
state->key_down--;
}
if (state->key_down == 0 && state->key_up == 0)
state->restart_disabled = 0;
pr_debug("reset key changed %d %d new state %d-%d-%d\n", code, value,
state->key_down, state->key_up, state->restart_disabled);
if (value && !state->restart_disabled &&
state->key_down == state->key_down_target) {
state->restart_disabled = 1;
if (restart_requested)
panic("keyboard reset failed, %d", restart_requested);
if (state->reset_fn) {
restart_requested = state->reset_fn();
} else {
pr_info("keyboard reset\n");
schedule_work(&restart_work);
restart_requested = 1;
}
}
done:
spin_unlock_irqrestore(&state->lock, flags);
}
static int keyreset_connect(struct input_handler *handler,
struct input_dev *dev,
const struct input_device_id *id)
{
int i;
int ret;
struct input_handle *handle;
struct keyreset_state *state =
container_of(handler, struct keyreset_state, input_handler);
for (i = 0; i < KEY_MAX; i++) {
if (test_bit(i, state->keybit) && test_bit(i, dev->keybit))
break;
}
if (i == KEY_MAX)
return -ENODEV;
handle = kzalloc(sizeof(*handle), GFP_KERNEL);
if (!handle)
return -ENOMEM;
handle->dev = dev;
handle->handler = handler;
handle->name = "keyreset";
handle->private = state;
ret = input_register_handle(handle);
if (ret)
goto err_input_register_handle;
ret = input_open_device(handle);
if (ret)
goto err_input_open_device;
pr_info("using input dev %s for key reset\n", dev->name);
return 0;
err_input_open_device:
input_unregister_handle(handle);
err_input_register_handle:
kfree(handle);
return ret;
}
static void keyreset_disconnect(struct input_handle *handle)
{
input_close_device(handle);
input_unregister_handle(handle);
kfree(handle);
}
static const struct input_device_id keyreset_ids[] = {
{
.flags = INPUT_DEVICE_ID_MATCH_EVBIT,
.evbit = { BIT_MASK(EV_KEY) },
},
{ },
};
MODULE_DEVICE_TABLE(input, keyreset_ids);
static int keyreset_probe(struct platform_device *pdev)
{
int ret;
int key, *keyp;
struct keyreset_state *state;
struct keyreset_platform_data *pdata = pdev->dev.platform_data;
if (!pdata)
return -EINVAL;
state = kzalloc(sizeof(*state), GFP_KERNEL);
if (!state)
return -ENOMEM;
spin_lock_init(&state->lock);
keyp = pdata->keys_down;
while ((key = *keyp++)) {
if (key >= KEY_MAX)
continue;
state->key_down_target++;
__set_bit(key, state->keybit);
}
if (pdata->keys_up) {
keyp = pdata->keys_up;
while ((key = *keyp++)) {
if (key >= KEY_MAX)
continue;
__set_bit(key, state->keybit);
__set_bit(key, state->upbit);
}
}
if (pdata->reset_fn)
state->reset_fn = pdata->reset_fn;
state->input_handler.event = keyreset_event;
state->input_handler.connect = keyreset_connect;
state->input_handler.disconnect = keyreset_disconnect;
state->input_handler.name = KEYRESET_NAME;
state->input_handler.id_table = keyreset_ids;
ret = input_register_handler(&state->input_handler);
if (ret) {
kfree(state);
return ret;
}
platform_set_drvdata(pdev, state);
return 0;
}
int keyreset_remove(struct platform_device *pdev)
{
struct keyreset_state *state = platform_get_drvdata(pdev);
input_unregister_handler(&state->input_handler);
kfree(state);
return 0;
}
struct platform_driver keyreset_driver = {
.driver.name = KEYRESET_NAME,
.probe = keyreset_probe,
.remove = keyreset_remove,
};
static int __init keyreset_init(void)
{
return platform_driver_register(&keyreset_driver);
}
static void __exit keyreset_exit(void)
{
return platform_driver_unregister(&keyreset_driver);
}
module_init(keyreset_init);
module_exit(keyreset_exit);
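/*
* Example (sketch, not part of this driver): board-file wiring that
* keyreset_probe() above expects. Key codes and the reset handler are
* illustrative; the pdata fields come from <linux/keyreset.h>.
*/
static int example_reset_fn(void)
{
/* Non-zero marks the restart as already requested. */
return 1;
}

static int example_keys_down[] = {
KEY_VOLUMEDOWN,
KEY_POWER,
0, /* zero-terminated, as keyreset_probe() expects */
};

static struct keyreset_platform_data example_keyreset_pdata = {
.keys_down = example_keys_down,
.reset_fn = example_reset_fn,
};

static struct platform_device example_keyreset_device = {
.name = KEYRESET_NAME,
.dev = {
.platform_data = &example_keyreset_pdata,
},
};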

View File

@@ -279,6 +279,17 @@ config INPUT_ATI_REMOTE2
To compile this driver as a module, choose M here: the module will be
called ati_remote2.
config INPUT_KEYCHORD
tristate "Key chord input driver support"
help
Say Y here if you want to enable the key chord driver
accessible at /dev/keychord. This driver can be used
for receiving notifications when client specified key
combinations are pressed.
To compile this driver as a module, choose M here: the
module will be called keychord.
config INPUT_KEYSPAN_REMOTE
tristate "Keyspan DMR USB remote control (EXPERIMENTAL)"
depends on EXPERIMENTAL
@@ -407,6 +418,11 @@ config INPUT_SGI_BTNS
To compile this driver as a module, choose M here: the
module will be called sgi_btns.
config INPUT_GPIO
tristate "GPIO driver support"
help
Say Y here if you want to support GPIO-based keys, wheels, etc.
config HP_SDC_RTC
tristate "HP SDC Real Time Clock"
depends on (GSC || HP300) && SERIO

View File

@@ -25,8 +25,10 @@ obj-$(CONFIG_INPUT_DA9052_ONKEY) += da9052_onkey.o
obj-$(CONFIG_INPUT_DM355EVM) += dm355evm_keys.o
obj-$(CONFIG_INPUT_GP2A) += gp2ap002a00f.o
obj-$(CONFIG_INPUT_GPIO_TILT_POLLED) += gpio_tilt_polled.o
obj-$(CONFIG_INPUT_GPIO) += gpio_event.o gpio_matrix.o gpio_input.o gpio_output.o gpio_axis.o
obj-$(CONFIG_HP_SDC_RTC) += hp_sdc_rtc.o
obj-$(CONFIG_INPUT_IXP4XX_BEEPER) += ixp4xx-beeper.o
obj-$(CONFIG_INPUT_KEYCHORD) += keychord.o
obj-$(CONFIG_INPUT_KEYSPAN_REMOTE) += keyspan_remote.o
obj-$(CONFIG_INPUT_KXTJ9) += kxtj9.o
obj-$(CONFIG_INPUT_M68K_BEEP) += m68kspkr.o

View File

@@ -0,0 +1,192 @@
/* drivers/input/misc/gpio_axis.c
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/kernel.h>
#include <linux/gpio.h>
#include <linux/gpio_event.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
struct gpio_axis_state {
struct gpio_event_input_devs *input_devs;
struct gpio_event_axis_info *info;
uint32_t pos;
};
uint16_t gpio_axis_4bit_gray_map_table[] = {
[0x0] = 0x0, [0x1] = 0x1, /* 0000 0001 */
[0x3] = 0x2, [0x2] = 0x3, /* 0011 0010 */
[0x6] = 0x4, [0x7] = 0x5, /* 0110 0111 */
[0x5] = 0x6, [0x4] = 0x7, /* 0101 0100 */
[0xc] = 0x8, [0xd] = 0x9, /* 1100 1101 */
[0xf] = 0xa, [0xe] = 0xb, /* 1111 1110 */
[0xa] = 0xc, [0xb] = 0xd, /* 1010 1011 */
[0x9] = 0xe, [0x8] = 0xf, /* 1001 1000 */
};
uint16_t gpio_axis_4bit_gray_map(struct gpio_event_axis_info *info, uint16_t in)
{
return gpio_axis_4bit_gray_map_table[in];
}
uint16_t gpio_axis_5bit_singletrack_map_table[] = {
[0x10] = 0x00, [0x14] = 0x01, [0x1c] = 0x02, /* 10000 10100 11100 */
[0x1e] = 0x03, [0x1a] = 0x04, [0x18] = 0x05, /* 11110 11010 11000 */
[0x08] = 0x06, [0x0a] = 0x07, [0x0e] = 0x08, /* 01000 01010 01110 */
[0x0f] = 0x09, [0x0d] = 0x0a, [0x0c] = 0x0b, /* 01111 01101 01100 */
[0x04] = 0x0c, [0x05] = 0x0d, [0x07] = 0x0e, /* 00100 00101 00111 */
[0x17] = 0x0f, [0x16] = 0x10, [0x06] = 0x11, /* 10111 10110 00110 */
[0x02] = 0x12, [0x12] = 0x13, [0x13] = 0x14, /* 00010 10010 10011 */
[0x1b] = 0x15, [0x0b] = 0x16, [0x03] = 0x17, /* 11011 01011 00011 */
[0x01] = 0x18, [0x09] = 0x19, [0x19] = 0x1a, /* 00001 01001 11001 */
[0x1d] = 0x1b, [0x15] = 0x1c, [0x11] = 0x1d, /* 11101 10101 10001 */
};
uint16_t gpio_axis_5bit_singletrack_map(
struct gpio_event_axis_info *info, uint16_t in)
{
return gpio_axis_5bit_singletrack_map_table[in];
}
static void gpio_event_update_axis(struct gpio_axis_state *as, int report)
{
struct gpio_event_axis_info *ai = as->info;
int i;
int change;
uint16_t state = 0;
uint16_t pos;
uint16_t old_pos = as->pos;
for (i = ai->count - 1; i >= 0; i--)
state = (state << 1) | gpio_get_value(ai->gpio[i]);
pos = ai->map(ai, state);
if (ai->flags & GPIOEAF_PRINT_RAW)
pr_info("axis %d-%d raw %x, pos %d -> %d\n",
ai->type, ai->code, state, old_pos, pos);
if (report && pos != old_pos) {
if (ai->type == EV_REL) {
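/*
* Relative axis: take the signed delta on the circular scale;
* a jump of exactly half the range is ambiguous and dropped
* below.
*/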
change = (ai->decoded_size + pos - old_pos) %
ai->decoded_size;
if (change > ai->decoded_size / 2)
change -= ai->decoded_size;
if (change == ai->decoded_size / 2) {
if (ai->flags & GPIOEAF_PRINT_EVENT)
pr_info("axis %d-%d unknown direction, "
"pos %d -> %d\n", ai->type,
ai->code, old_pos, pos);
change = 0; /* no closest direction */
}
if (ai->flags & GPIOEAF_PRINT_EVENT)
pr_info("axis %d-%d change %d\n",
ai->type, ai->code, change);
input_report_rel(as->input_devs->dev[ai->dev],
ai->code, change);
} else {
if (ai->flags & GPIOEAF_PRINT_EVENT)
pr_info("axis %d-%d now %d\n",
ai->type, ai->code, pos);
input_event(as->input_devs->dev[ai->dev],
ai->type, ai->code, pos);
}
input_sync(as->input_devs->dev[ai->dev]);
}
as->pos = pos;
}
static irqreturn_t gpio_axis_irq_handler(int irq, void *dev_id)
{
struct gpio_axis_state *as = dev_id;
gpio_event_update_axis(as, 1);
return IRQ_HANDLED;
}
int gpio_event_axis_func(struct gpio_event_input_devs *input_devs,
struct gpio_event_info *info, void **data, int func)
{
int ret;
int i;
int irq;
struct gpio_event_axis_info *ai;
struct gpio_axis_state *as;
ai = container_of(info, struct gpio_event_axis_info, info);
if (func == GPIO_EVENT_FUNC_SUSPEND) {
for (i = 0; i < ai->count; i++)
disable_irq(gpio_to_irq(ai->gpio[i]));
return 0;
}
if (func == GPIO_EVENT_FUNC_RESUME) {
for (i = 0; i < ai->count; i++)
enable_irq(gpio_to_irq(ai->gpio[i]));
return 0;
}
if (func == GPIO_EVENT_FUNC_INIT) {
*data = as = kmalloc(sizeof(*as), GFP_KERNEL);
if (as == NULL) {
ret = -ENOMEM;
goto err_alloc_axis_state_failed;
}
as->input_devs = input_devs;
as->info = ai;
if (ai->dev >= input_devs->count) {
pr_err("gpio_event_axis: bad device index %d >= %d "
"for %d:%d\n", ai->dev, input_devs->count,
ai->type, ai->code);
ret = -EINVAL;
goto err_bad_device_index;
}
input_set_capability(input_devs->dev[ai->dev],
ai->type, ai->code);
if (ai->type == EV_ABS) {
input_set_abs_params(input_devs->dev[ai->dev], ai->code,
0, ai->decoded_size - 1, 0, 0);
}
for (i = 0; i < ai->count; i++) {
ret = gpio_request(ai->gpio[i], "gpio_event_axis");
if (ret < 0)
goto err_request_gpio_failed;
ret = gpio_direction_input(ai->gpio[i]);
if (ret < 0)
goto err_gpio_direction_input_failed;
ret = irq = gpio_to_irq(ai->gpio[i]);
if (ret < 0)
goto err_get_irq_num_failed;
ret = request_irq(irq, gpio_axis_irq_handler,
IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING,
"gpio_event_axis", as);
if (ret < 0)
goto err_request_irq_failed;
}
gpio_event_update_axis(as, 0);
return 0;
}
ret = 0;
as = *data;
for (i = ai->count - 1; i >= 0; i--) {
free_irq(gpio_to_irq(ai->gpio[i]), as);
err_request_irq_failed:
err_get_irq_num_failed:
err_gpio_direction_input_failed:
gpio_free(ai->gpio[i]);
err_request_gpio_failed:
;
}
err_bad_device_index:
kfree(as);
*data = NULL;
err_alloc_axis_state_failed:
return ret;
}

View File

@@ -0,0 +1,239 @@
/* drivers/input/misc/gpio_event.c
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/module.h>
#include <linux/input.h>
#include <linux/gpio_event.h>
#include <linux/hrtimer.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
struct gpio_event {
struct gpio_event_input_devs *input_devs;
const struct gpio_event_platform_data *info;
void *state[0];
};
static int gpio_input_event(
struct input_dev *dev, unsigned int type, unsigned int code, int value)
{
int i;
int devnr;
int ret = 0;
int tmp_ret;
struct gpio_event_info **ii;
struct gpio_event *ip = input_get_drvdata(dev);
for (devnr = 0; devnr < ip->input_devs->count; devnr++)
if (ip->input_devs->dev[devnr] == dev)
break;
if (devnr == ip->input_devs->count) {
pr_err("gpio_input_event: unknown device %p\n", dev);
return -EIO;
}
for (i = 0, ii = ip->info->info; i < ip->info->info_count; i++, ii++) {
if ((*ii)->event) {
tmp_ret = (*ii)->event(ip->input_devs, *ii,
&ip->state[i],
devnr, type, code, value);
if (tmp_ret)
ret = tmp_ret;
}
}
return ret;
}
static int gpio_event_call_all_func(struct gpio_event *ip, int func)
{
int i;
int ret;
struct gpio_event_info **ii;
if (func == GPIO_EVENT_FUNC_INIT || func == GPIO_EVENT_FUNC_RESUME) {
ii = ip->info->info;
for (i = 0; i < ip->info->info_count; i++, ii++) {
if ((*ii)->func == NULL) {
ret = -ENODEV;
pr_err("gpio_event_probe: Incomplete pdata, "
"no function\n");
goto err_no_func;
}
if (func == GPIO_EVENT_FUNC_RESUME && (*ii)->no_suspend)
continue;
ret = (*ii)->func(ip->input_devs, *ii, &ip->state[i],
func);
if (ret) {
pr_err("gpio_event_probe: function failed\n");
goto err_func_failed;
}
}
return 0;
}
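/*
* Uninit/suspend path: walk the handlers in reverse. The labels
* below double as the unwind path when init fails part way.
*/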
ret = 0;
i = ip->info->info_count;
ii = ip->info->info + i;
while (i > 0) {
i--;
ii--;
if ((func & ~1) == GPIO_EVENT_FUNC_SUSPEND && (*ii)->no_suspend)
continue;
(*ii)->func(ip->input_devs, *ii, &ip->state[i], func & ~1);
err_func_failed:
err_no_func:
;
}
return ret;
}
static void __maybe_unused gpio_event_suspend(struct gpio_event *ip)
{
gpio_event_call_all_func(ip, GPIO_EVENT_FUNC_SUSPEND);
if (ip->info->power)
ip->info->power(ip->info, 0);
}
static void __maybe_unused gpio_event_resume(struct gpio_event *ip)
{
if (ip->info->power)
ip->info->power(ip->info, 1);
gpio_event_call_all_func(ip, GPIO_EVENT_FUNC_RESUME);
}
static int gpio_event_probe(struct platform_device *pdev)
{
int err;
struct gpio_event *ip;
struct gpio_event_platform_data *event_info;
int dev_count = 1;
int i;
int registered = 0;
event_info = pdev->dev.platform_data;
if (event_info == NULL) {
pr_err("gpio_event_probe: No pdata\n");
return -ENODEV;
}
if ((!event_info->name && !event_info->names[0]) ||
!event_info->info || !event_info->info_count) {
pr_err("gpio_event_probe: Incomplete pdata\n");
return -ENODEV;
}
if (!event_info->name)
while (event_info->names[dev_count])
dev_count++;
ip = kzalloc(sizeof(*ip) +
sizeof(ip->state[0]) * event_info->info_count +
sizeof(*ip->input_devs) +
sizeof(ip->input_devs->dev[0]) * dev_count, GFP_KERNEL);
if (ip == NULL) {
err = -ENOMEM;
pr_err("gpio_event_probe: Failed to allocate private data\n");
goto err_kp_alloc_failed;
}
ip->input_devs = (void*)&ip->state[event_info->info_count];
platform_set_drvdata(pdev, ip);
for (i = 0; i < dev_count; i++) {
struct input_dev *input_dev = input_allocate_device();
if (input_dev == NULL) {
err = -ENOMEM;
pr_err("gpio_event_probe: "
"Failed to allocate input device\n");
goto err_input_dev_alloc_failed;
}
input_set_drvdata(input_dev, ip);
input_dev->name = event_info->name ?
event_info->name : event_info->names[i];
input_dev->event = gpio_input_event;
ip->input_devs->dev[i] = input_dev;
}
ip->input_devs->count = dev_count;
ip->info = event_info;
if (event_info->power)
ip->info->power(ip->info, 1);
err = gpio_event_call_all_func(ip, GPIO_EVENT_FUNC_INIT);
if (err)
goto err_call_all_func_failed;
for (i = 0; i < dev_count; i++) {
err = input_register_device(ip->input_devs->dev[i]);
if (err) {
pr_err("gpio_event_probe: Unable to register %s "
"input device\n", ip->input_devs->dev[i]->name);
goto err_input_register_device_failed;
}
registered++;
}
return 0;
err_input_register_device_failed:
gpio_event_call_all_func(ip, GPIO_EVENT_FUNC_UNINIT);
err_call_all_func_failed:
if (event_info->power)
ip->info->power(ip->info, 0);
for (i = 0; i < registered; i++)
input_unregister_device(ip->input_devs->dev[i]);
for (i = dev_count - 1; i >= registered; i--) {
input_free_device(ip->input_devs->dev[i]);
err_input_dev_alloc_failed:
;
}
kfree(ip);
err_kp_alloc_failed:
return err;
}
static int gpio_event_remove(struct platform_device *pdev)
{
struct gpio_event *ip = platform_get_drvdata(pdev);
int i;
gpio_event_call_all_func(ip, GPIO_EVENT_FUNC_UNINIT);
if (ip->info->power)
ip->info->power(ip->info, 0);
for (i = 0; i < ip->input_devs->count; i++)
input_unregister_device(ip->input_devs->dev[i]);
kfree(ip);
return 0;
}
static struct platform_driver gpio_event_driver = {
.probe = gpio_event_probe,
.remove = gpio_event_remove,
.driver = {
.name = GPIO_EVENT_DEV_NAME,
},
};
static int __devinit gpio_event_init(void)
{
return platform_driver_register(&gpio_event_driver);
}
static void __exit gpio_event_exit(void)
{
platform_driver_unregister(&gpio_event_driver);
}
module_init(gpio_event_init);
module_exit(gpio_event_exit);
MODULE_DESCRIPTION("GPIO Event Driver");
MODULE_LICENSE("GPL");

View File

@@ -0,0 +1,376 @@
/* drivers/input/misc/gpio_input.c
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/kernel.h>
#include <linux/gpio.h>
#include <linux/gpio_event.h>
#include <linux/hrtimer.h>
#include <linux/input.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/wakelock.h>
enum {
DEBOUNCE_UNSTABLE = BIT(0), /* Got irq, while debouncing */
DEBOUNCE_PRESSED = BIT(1),
DEBOUNCE_NOTPRESSED = BIT(2),
DEBOUNCE_WAIT_IRQ = BIT(3), /* Stable irq state */
DEBOUNCE_POLL = BIT(4), /* Stable polling state */
DEBOUNCE_UNKNOWN =
DEBOUNCE_PRESSED | DEBOUNCE_NOTPRESSED,
};
struct gpio_key_state {
struct gpio_input_state *ds;
uint8_t debounce;
};
struct gpio_input_state {
struct gpio_event_input_devs *input_devs;
const struct gpio_event_input_info *info;
struct hrtimer timer;
int use_irq;
int debounce_count;
spinlock_t irq_lock;
struct wake_lock wake_lock;
struct gpio_key_state key_state[0];
};
static enum hrtimer_restart gpio_event_input_timer_func(struct hrtimer *timer)
{
int i;
int pressed;
struct gpio_input_state *ds =
container_of(timer, struct gpio_input_state, timer);
unsigned gpio_flags = ds->info->flags;
unsigned npolarity;
int nkeys = ds->info->keymap_size;
const struct gpio_event_direct_entry *key_entry;
struct gpio_key_state *key_state;
unsigned long irqflags;
uint8_t debounce;
bool sync_needed;
#if 0
key_entry = kp->keys_info->keymap;
key_state = kp->key_state;
for (i = 0; i < nkeys; i++, key_entry++, key_state++)
pr_info("gpio_read_detect_status %d %d\n", key_entry->gpio,
gpio_read_detect_status(key_entry->gpio));
#endif
key_entry = ds->info->keymap;
key_state = ds->key_state;
sync_needed = false;
spin_lock_irqsave(&ds->irq_lock, irqflags);
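/*
* Scan every key: re-arm keys that went unstable in the IRQ
* handler, then require one quiet debounce period before a
* change is reported.
*/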
for (i = 0; i < nkeys; i++, key_entry++, key_state++) {
debounce = key_state->debounce;
if (debounce & DEBOUNCE_WAIT_IRQ)
continue;
if (key_state->debounce & DEBOUNCE_UNSTABLE) {
debounce = key_state->debounce = DEBOUNCE_UNKNOWN;
enable_irq(gpio_to_irq(key_entry->gpio));
if (gpio_flags & GPIOEDF_PRINT_KEY_UNSTABLE)
pr_info("gpio_keys_scan_keys: key %x-%x, %d "
"(%d) continue debounce\n",
ds->info->type, key_entry->code,
i, key_entry->gpio);
}
npolarity = !(gpio_flags & GPIOEDF_ACTIVE_HIGH);
pressed = gpio_get_value(key_entry->gpio) ^ npolarity;
if (debounce & DEBOUNCE_POLL) {
if (pressed == !(debounce & DEBOUNCE_PRESSED)) {
ds->debounce_count++;
key_state->debounce = DEBOUNCE_UNKNOWN;
if (gpio_flags & GPIOEDF_PRINT_KEY_DEBOUNCE)
pr_info("gpio_keys_scan_keys: key %x-"
"%x, %d (%d) start debounce\n",
ds->info->type, key_entry->code,
i, key_entry->gpio);
}
continue;
}
if (pressed && (debounce & DEBOUNCE_NOTPRESSED)) {
if (gpio_flags & GPIOEDF_PRINT_KEY_DEBOUNCE)
pr_info("gpio_keys_scan_keys: key %x-%x, %d "
"(%d) debounce pressed 1\n",
ds->info->type, key_entry->code,
i, key_entry->gpio);
key_state->debounce = DEBOUNCE_PRESSED;
continue;
}
if (!pressed && (debounce & DEBOUNCE_PRESSED)) {
if (gpio_flags & GPIOEDF_PRINT_KEY_DEBOUNCE)
pr_info("gpio_keys_scan_keys: key %x-%x, %d "
"(%d) debounce pressed 0\n",
ds->info->type, key_entry->code,
i, key_entry->gpio);
key_state->debounce = DEBOUNCE_NOTPRESSED;
continue;
}
/* key is stable */
ds->debounce_count--;
if (ds->use_irq)
key_state->debounce |= DEBOUNCE_WAIT_IRQ;
else
key_state->debounce |= DEBOUNCE_POLL;
if (gpio_flags & GPIOEDF_PRINT_KEYS)
pr_info("gpio_keys_scan_keys: key %x-%x, %d (%d) "
"changed to %d\n", ds->info->type,
key_entry->code, i, key_entry->gpio, pressed);
input_event(ds->input_devs->dev[key_entry->dev], ds->info->type,
key_entry->code, pressed);
sync_needed = true;
}
if (sync_needed) {
for (i = 0; i < ds->input_devs->count; i++)
input_sync(ds->input_devs->dev[i]);
}
#if 0
key_entry = kp->keys_info->keymap;
key_state = kp->key_state;
for (i = 0; i < nkeys; i++, key_entry++, key_state++) {
pr_info("gpio_read_detect_status %d %d\n", key_entry->gpio,
gpio_read_detect_status(key_entry->gpio));
}
#endif
if (ds->debounce_count)
hrtimer_start(timer, ds->info->debounce_time, HRTIMER_MODE_REL);
else if (!ds->use_irq)
hrtimer_start(timer, ds->info->poll_time, HRTIMER_MODE_REL);
else
wake_unlock(&ds->wake_lock);
spin_unlock_irqrestore(&ds->irq_lock, irqflags);
return HRTIMER_NORESTART;
}
static irqreturn_t gpio_event_input_irq_handler(int irq, void *dev_id)
{
struct gpio_key_state *ks = dev_id;
struct gpio_input_state *ds = ks->ds;
int keymap_index = ks - ds->key_state;
const struct gpio_event_direct_entry *key_entry;
unsigned long irqflags;
int pressed;
if (!ds->use_irq)
return IRQ_HANDLED;
key_entry = &ds->info->keymap[keymap_index];
if (ds->info->debounce_time.tv64) {
spin_lock_irqsave(&ds->irq_lock, irqflags);
if (ks->debounce & DEBOUNCE_WAIT_IRQ) {
ks->debounce = DEBOUNCE_UNKNOWN;
if (ds->debounce_count++ == 0) {
wake_lock(&ds->wake_lock);
hrtimer_start(
&ds->timer, ds->info->debounce_time,
HRTIMER_MODE_REL);
}
if (ds->info->flags & GPIOEDF_PRINT_KEY_DEBOUNCE)
pr_info("gpio_event_input_irq_handler: "
"key %x-%x, %d (%d) start debounce\n",
ds->info->type, key_entry->code,
keymap_index, key_entry->gpio);
} else {
disable_irq_nosync(irq);
ks->debounce = DEBOUNCE_UNSTABLE;
}
spin_unlock_irqrestore(&ds->irq_lock, irqflags);
} else {
pressed = gpio_get_value(key_entry->gpio) ^
!(ds->info->flags & GPIOEDF_ACTIVE_HIGH);
if (ds->info->flags & GPIOEDF_PRINT_KEYS)
pr_info("gpio_event_input_irq_handler: key %x-%x, %d "
"(%d) changed to %d\n",
ds->info->type, key_entry->code, keymap_index,
key_entry->gpio, pressed);
input_event(ds->input_devs->dev[key_entry->dev], ds->info->type,
key_entry->code, pressed);
input_sync(ds->input_devs->dev[key_entry->dev]);
}
return IRQ_HANDLED;
}
static int gpio_event_input_request_irqs(struct gpio_input_state *ds)
{
int i;
int err;
unsigned int irq;
unsigned long req_flags = IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING;
for (i = 0; i < ds->info->keymap_size; i++) {
err = irq = gpio_to_irq(ds->info->keymap[i].gpio);
if (err < 0)
goto err_gpio_get_irq_num_failed;
err = request_irq(irq, gpio_event_input_irq_handler,
req_flags, "gpio_keys", &ds->key_state[i]);
if (err) {
pr_err("gpio_event_input_request_irqs: request_irq "
"failed for input %d, irq %d\n",
ds->info->keymap[i].gpio, irq);
goto err_request_irq_failed;
}
if (ds->info->info.no_suspend) {
err = enable_irq_wake(irq);
if (err) {
pr_err("gpio_event_input_request_irqs: "
"enable_irq_wake failed for input %d, "
"irq %d\n",
ds->info->keymap[i].gpio, irq);
goto err_enable_irq_wake_failed;
}
}
}
return 0;
for (i = ds->info->keymap_size - 1; i >= 0; i--) {
irq = gpio_to_irq(ds->info->keymap[i].gpio);
if (ds->info->info.no_suspend)
disable_irq_wake(irq);
err_enable_irq_wake_failed:
free_irq(irq, &ds->key_state[i]);
err_request_irq_failed:
err_gpio_get_irq_num_failed:
;
}
return err;
}
int gpio_event_input_func(struct gpio_event_input_devs *input_devs,
struct gpio_event_info *info, void **data, int func)
{
int ret;
int i;
unsigned long irqflags;
struct gpio_event_input_info *di;
struct gpio_input_state *ds = *data;
di = container_of(info, struct gpio_event_input_info, info);
if (func == GPIO_EVENT_FUNC_SUSPEND) {
if (ds->use_irq)
for (i = 0; i < di->keymap_size; i++)
disable_irq(gpio_to_irq(di->keymap[i].gpio));
hrtimer_cancel(&ds->timer);
return 0;
}
if (func == GPIO_EVENT_FUNC_RESUME) {
spin_lock_irqsave(&ds->irq_lock, irqflags);
if (ds->use_irq)
for (i = 0; i < di->keymap_size; i++)
enable_irq(gpio_to_irq(di->keymap[i].gpio));
hrtimer_start(&ds->timer, ktime_set(0, 0), HRTIMER_MODE_REL);
spin_unlock_irqrestore(&ds->irq_lock, irqflags);
return 0;
}
if (func == GPIO_EVENT_FUNC_INIT) {
if (ktime_to_ns(di->poll_time) <= 0)
di->poll_time = ktime_set(0, 20 * NSEC_PER_MSEC);
*data = ds = kzalloc(sizeof(*ds) + sizeof(ds->key_state[0]) *
di->keymap_size, GFP_KERNEL);
if (ds == NULL) {
ret = -ENOMEM;
pr_err("gpio_event_input_func: "
"Failed to allocate private data\n");
goto err_ds_alloc_failed;
}
ds->debounce_count = di->keymap_size;
ds->input_devs = input_devs;
ds->info = di;
wake_lock_init(&ds->wake_lock, WAKE_LOCK_SUSPEND, "gpio_input");
spin_lock_init(&ds->irq_lock);
for (i = 0; i < di->keymap_size; i++) {
int dev = di->keymap[i].dev;
if (dev >= input_devs->count) {
pr_err("gpio_event_input_func: bad device "
"index %d >= %d for key code %d\n",
dev, input_devs->count,
di->keymap[i].code);
ret = -EINVAL;
goto err_bad_keymap;
}
input_set_capability(input_devs->dev[dev], di->type,
di->keymap[i].code);
ds->key_state[i].ds = ds;
ds->key_state[i].debounce = DEBOUNCE_UNKNOWN;
}
for (i = 0; i < di->keymap_size; i++) {
ret = gpio_request(di->keymap[i].gpio, "gpio_kp_in");
if (ret) {
pr_err("gpio_event_input_func: gpio_request "
"failed for %d\n", di->keymap[i].gpio);
goto err_gpio_request_failed;
}
ret = gpio_direction_input(di->keymap[i].gpio);
if (ret) {
pr_err("gpio_event_input_func: "
"gpio_direction_input failed for %d\n",
di->keymap[i].gpio);
goto err_gpio_configure_failed;
}
}
ret = gpio_event_input_request_irqs(ds);
spin_lock_irqsave(&ds->irq_lock, irqflags);
ds->use_irq = ret == 0;
pr_info("GPIO Input Driver: Start gpio inputs for %s%s in %s "
"mode\n", input_devs->dev[0]->name,
(input_devs->count > 1) ? "..." : "",
ret == 0 ? "interrupt" : "polling");
hrtimer_init(&ds->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
ds->timer.function = gpio_event_input_timer_func;
hrtimer_start(&ds->timer, ktime_set(0, 0), HRTIMER_MODE_REL);
spin_unlock_irqrestore(&ds->irq_lock, irqflags);
return 0;
}
ret = 0;
spin_lock_irqsave(&ds->irq_lock, irqflags);
hrtimer_cancel(&ds->timer);
if (ds->use_irq) {
for (i = di->keymap_size - 1; i >= 0; i--) {
int irq = gpio_to_irq(di->keymap[i].gpio);
if (ds->info->info.no_suspend)
disable_irq_wake(irq);
free_irq(irq, &ds->key_state[i]);
}
}
spin_unlock_irqrestore(&ds->irq_lock, irqflags);
for (i = di->keymap_size - 1; i >= 0; i--) {
err_gpio_configure_failed:
gpio_free(di->keymap[i].gpio);
err_gpio_request_failed:
;
}
err_bad_keymap:
wake_lock_destroy(&ds->wake_lock);
kfree(ds);
err_ds_alloc_failed:
return ret;
}
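For reference, a minimal board-file sketch of how this function is typically fed. Only fields the code above actually dereferences are used (keymap, keymap_size, type, flags, debounce_time, poll_time, and gpio/code/dev per entry); the .info.func hookup, GPIO numbers and key codes are assumptions for illustration:

/* Hypothetical platform data for gpio_event_input_func(). */
static const struct gpio_event_direct_entry board_keymap[] = {
	{ .gpio = 42, .code = KEY_VOLUMEUP,   .dev = 0 },	/* GPIOs made up */
	{ .gpio = 43, .code = KEY_VOLUMEDOWN, .dev = 0 },
};

static struct gpio_event_input_info board_key_info = {
	.info.func = gpio_event_input_func,	/* assumed wiring */
	.type = EV_KEY,
	.flags = 0,				/* buttons wired active low */
	.keymap = board_keymap,
	.keymap_size = ARRAY_SIZE(board_keymap),
	.debounce_time.tv64 = 5 * NSEC_PER_MSEC,
	.poll_time.tv64 = 20 * NSEC_PER_MSEC,
};

A debounce_time of zero skips the timer-based debounce entirely; the else branch of gpio_event_input_irq_handler() then reports edges directly.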

View File

@@ -0,0 +1,441 @@
/* drivers/input/misc/gpio_matrix.c
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/kernel.h>
#include <linux/gpio.h>
#include <linux/gpio_event.h>
#include <linux/hrtimer.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/wakelock.h>
struct gpio_kp {
struct gpio_event_input_devs *input_devs;
struct gpio_event_matrix_info *keypad_info;
struct hrtimer timer;
struct wake_lock wake_lock;
int current_output;
unsigned int use_irq:1;
unsigned int key_state_changed:1;
unsigned int last_key_state_changed:1;
unsigned int some_keys_pressed:2;
unsigned int disabled_irq:1;
unsigned long keys_pressed[0];
};
static void clear_phantom_key(struct gpio_kp *kp, int out, int in)
{
struct gpio_event_matrix_info *mi = kp->keypad_info;
int key_index = out * mi->ninputs + in;
unsigned short keyentry = mi->keymap[key_index];
unsigned short keycode = keyentry & MATRIX_KEY_MASK;
unsigned short dev = keyentry >> MATRIX_CODE_BITS;
if (!test_bit(keycode, kp->input_devs->dev[dev]->key)) {
if (mi->flags & GPIOKPF_PRINT_PHANTOM_KEYS)
pr_info("gpiomatrix: phantom key %x, %d-%d (%d-%d) "
"cleared\n", keycode, out, in,
mi->output_gpios[out], mi->input_gpios[in]);
__clear_bit(key_index, kp->keys_pressed);
} else {
if (mi->flags & GPIOKPF_PRINT_PHANTOM_KEYS)
pr_info("gpiomatrix: phantom key %x, %d-%d (%d-%d) "
"not cleared\n", keycode, out, in,
mi->output_gpios[out], mi->input_gpios[in]);
}
}
static int restore_keys_for_input(struct gpio_kp *kp, int out, int in)
{
int rv = 0;
int key_index;
key_index = out * kp->keypad_info->ninputs + in;
while (out < kp->keypad_info->noutputs) {
if (test_bit(key_index, kp->keys_pressed)) {
rv = 1;
clear_phantom_key(kp, out, in);
}
key_index += kp->keypad_info->ninputs;
out++;
}
return rv;
}
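/*
 * Ghost keys: when three pressed keys share output and input lines, the
 * scan also sees the fourth intersection as pressed even though no key is
 * down there. remove_phantom_keys() below therefore only runs once
 * some_keys_pressed reaches 3. A suspect press is dropped from
 * keys_pressed unless its keycode is already reported as down on the
 * input device; clear_phantom_key() keeps those, since a key that was
 * already down cannot be a newly appeared phantom.
 */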
static void remove_phantom_keys(struct gpio_kp *kp)
{
int out, in, inp;
int key_index;
if (kp->some_keys_pressed < 3)
return;
for (out = 0; out < kp->keypad_info->noutputs; out++) {
inp = -1;
key_index = out * kp->keypad_info->ninputs;
for (in = 0; in < kp->keypad_info->ninputs; in++, key_index++) {
if (test_bit(key_index, kp->keys_pressed)) {
if (inp == -1) {
inp = in;
continue;
}
if (inp >= 0) {
if (!restore_keys_for_input(kp, out + 1,
inp))
break;
clear_phantom_key(kp, out, inp);
inp = -2;
}
restore_keys_for_input(kp, out, in);
}
}
}
}
static void report_key(struct gpio_kp *kp, int key_index, int out, int in)
{
struct gpio_event_matrix_info *mi = kp->keypad_info;
int pressed = test_bit(key_index, kp->keys_pressed);
unsigned short keyentry = mi->keymap[key_index];
unsigned short keycode = keyentry & MATRIX_KEY_MASK;
unsigned short dev = keyentry >> MATRIX_CODE_BITS;
if (pressed != test_bit(keycode, kp->input_devs->dev[dev]->key)) {
if (keycode == KEY_RESERVED) {
if (mi->flags & GPIOKPF_PRINT_UNMAPPED_KEYS)
pr_info("gpiomatrix: unmapped key, %d-%d "
"(%d-%d) changed to %d\n",
out, in, mi->output_gpios[out],
mi->input_gpios[in], pressed);
} else {
if (mi->flags & GPIOKPF_PRINT_MAPPED_KEYS)
pr_info("gpiomatrix: key %x, %d-%d (%d-%d) "
"changed to %d\n", keycode,
out, in, mi->output_gpios[out],
mi->input_gpios[in], pressed);
input_report_key(kp->input_devs->dev[dev], keycode, pressed);
}
}
}
static void report_sync(struct gpio_kp *kp)
{
int i;
for (i = 0; i < kp->input_devs->count; i++)
input_sync(kp->input_devs->dev[i]);
}
static enum hrtimer_restart gpio_keypad_timer_func(struct hrtimer *timer)
{
int out, in;
int key_index;
int gpio;
struct gpio_kp *kp = container_of(timer, struct gpio_kp, timer);
struct gpio_event_matrix_info *mi = kp->keypad_info;
unsigned gpio_keypad_flags = mi->flags;
unsigned polarity = !!(gpio_keypad_flags & GPIOKPF_ACTIVE_HIGH);
out = kp->current_output;
if (out == mi->noutputs) {
out = 0;
kp->last_key_state_changed = kp->key_state_changed;
kp->key_state_changed = 0;
kp->some_keys_pressed = 0;
} else {
key_index = out * mi->ninputs;
for (in = 0; in < mi->ninputs; in++, key_index++) {
gpio = mi->input_gpios[in];
if (gpio_get_value(gpio) ^ !polarity) {
if (kp->some_keys_pressed < 3)
kp->some_keys_pressed++;
kp->key_state_changed |= !__test_and_set_bit(
key_index, kp->keys_pressed);
} else
kp->key_state_changed |= __test_and_clear_bit(
key_index, kp->keys_pressed);
}
gpio = mi->output_gpios[out];
if (gpio_keypad_flags & GPIOKPF_DRIVE_INACTIVE)
gpio_set_value(gpio, !polarity);
else
gpio_direction_input(gpio);
out++;
}
kp->current_output = out;
if (out < mi->noutputs) {
gpio = mi->output_gpios[out];
if (gpio_keypad_flags & GPIOKPF_DRIVE_INACTIVE)
gpio_set_value(gpio, polarity);
else
gpio_direction_output(gpio, polarity);
hrtimer_start(timer, mi->settle_time, HRTIMER_MODE_REL);
return HRTIMER_NORESTART;
}
if (gpio_keypad_flags & GPIOKPF_DEBOUNCE) {
if (kp->key_state_changed) {
hrtimer_start(&kp->timer, mi->debounce_delay,
HRTIMER_MODE_REL);
return HRTIMER_NORESTART;
}
kp->key_state_changed = kp->last_key_state_changed;
}
if (kp->key_state_changed) {
if (gpio_keypad_flags & GPIOKPF_REMOVE_SOME_PHANTOM_KEYS)
remove_phantom_keys(kp);
key_index = 0;
for (out = 0; out < mi->noutputs; out++)
for (in = 0; in < mi->ninputs; in++, key_index++)
report_key(kp, key_index, out, in);
report_sync(kp);
}
if (!kp->use_irq || kp->some_keys_pressed) {
hrtimer_start(timer, mi->poll_time, HRTIMER_MODE_REL);
return HRTIMER_NORESTART;
}
/* No keys are pressed, reenable interrupt */
for (out = 0; out < mi->noutputs; out++) {
if (gpio_keypad_flags & GPIOKPF_DRIVE_INACTIVE)
gpio_set_value(mi->output_gpios[out], polarity);
else
gpio_direction_output(mi->output_gpios[out], polarity);
}
for (in = 0; in < mi->ninputs; in++)
enable_irq(gpio_to_irq(mi->input_gpios[in]));
wake_unlock(&kp->wake_lock);
return HRTIMER_NORESTART;
}
static irqreturn_t gpio_keypad_irq_handler(int irq_in, void *dev_id)
{
int i;
struct gpio_kp *kp = dev_id;
struct gpio_event_matrix_info *mi = kp->keypad_info;
unsigned gpio_keypad_flags = mi->flags;
if (!kp->use_irq) {
/* ignore interrupt while registering the handler */
kp->disabled_irq = 1;
disable_irq_nosync(irq_in);
return IRQ_HANDLED;
}
for (i = 0; i < mi->ninputs; i++)
disable_irq_nosync(gpio_to_irq(mi->input_gpios[i]));
for (i = 0; i < mi->noutputs; i++) {
if (gpio_keypad_flags & GPIOKPF_DRIVE_INACTIVE)
gpio_set_value(mi->output_gpios[i],
!(gpio_keypad_flags & GPIOKPF_ACTIVE_HIGH));
else
gpio_direction_input(mi->output_gpios[i]);
}
wake_lock(&kp->wake_lock);
hrtimer_start(&kp->timer, ktime_set(0, 0), HRTIMER_MODE_REL);
return IRQ_HANDLED;
}
static int gpio_keypad_request_irqs(struct gpio_kp *kp)
{
int i;
int err;
unsigned int irq;
unsigned long request_flags;
struct gpio_event_matrix_info *mi = kp->keypad_info;
switch (mi->flags & (GPIOKPF_ACTIVE_HIGH|GPIOKPF_LEVEL_TRIGGERED_IRQ)) {
default:
request_flags = IRQF_TRIGGER_FALLING;
break;
case GPIOKPF_ACTIVE_HIGH:
request_flags = IRQF_TRIGGER_RISING;
break;
case GPIOKPF_LEVEL_TRIGGERED_IRQ:
request_flags = IRQF_TRIGGER_LOW;
break;
case GPIOKPF_LEVEL_TRIGGERED_IRQ | GPIOKPF_ACTIVE_HIGH:
request_flags = IRQF_TRIGGER_HIGH;
break;
}
for (i = 0; i < mi->ninputs; i++) {
err = irq = gpio_to_irq(mi->input_gpios[i]);
if (err < 0)
goto err_gpio_get_irq_num_failed;
err = request_irq(irq, gpio_keypad_irq_handler, request_flags,
"gpio_kp", kp);
if (err) {
pr_err("gpiomatrix: request_irq failed for input %d, "
"irq %d\n", mi->input_gpios[i], irq);
goto err_request_irq_failed;
}
err = enable_irq_wake(irq);
if (err) {
pr_err("gpiomatrix: set_irq_wake failed for input %d, "
"irq %d\n", mi->input_gpios[i], irq);
}
disable_irq(irq);
if (kp->disabled_irq) {
kp->disabled_irq = 0;
enable_irq(irq);
}
}
return 0;
for (i = mi->ninputs - 1; i >= 0; i--) {
free_irq(gpio_to_irq(mi->input_gpios[i]), kp);
err_request_irq_failed:
err_gpio_get_irq_num_failed:
;
}
return err;
}
int gpio_event_matrix_func(struct gpio_event_input_devs *input_devs,
struct gpio_event_info *info, void **data, int func)
{
int i;
int err;
int key_count;
struct gpio_kp *kp;
struct gpio_event_matrix_info *mi;
mi = container_of(info, struct gpio_event_matrix_info, info);
if (func == GPIO_EVENT_FUNC_SUSPEND || func == GPIO_EVENT_FUNC_RESUME) {
/* TODO: disable scanning */
return 0;
}
if (func == GPIO_EVENT_FUNC_INIT) {
if (mi->keymap == NULL ||
mi->input_gpios == NULL ||
mi->output_gpios == NULL) {
err = -ENODEV;
pr_err("gpiomatrix: Incomplete pdata\n");
goto err_invalid_platform_data;
}
key_count = mi->ninputs * mi->noutputs;
*data = kp = kzalloc(sizeof(*kp) + sizeof(kp->keys_pressed[0]) *
BITS_TO_LONGS(key_count), GFP_KERNEL);
if (kp == NULL) {
err = -ENOMEM;
pr_err("gpiomatrix: Failed to allocate private data\n");
goto err_kp_alloc_failed;
}
kp->input_devs = input_devs;
kp->keypad_info = mi;
for (i = 0; i < key_count; i++) {
unsigned short keyentry = mi->keymap[i];
unsigned short keycode = keyentry & MATRIX_KEY_MASK;
unsigned short dev = keyentry >> MATRIX_CODE_BITS;
if (dev >= input_devs->count) {
pr_err("gpiomatrix: bad device index %d >= "
"%d for key code %d\n",
dev, input_devs->count, keycode);
err = -EINVAL;
goto err_bad_keymap;
}
if (keycode && keycode <= KEY_MAX)
input_set_capability(input_devs->dev[dev],
EV_KEY, keycode);
}
for (i = 0; i < mi->noutputs; i++) {
err = gpio_request(mi->output_gpios[i], "gpio_kp_out");
if (err) {
pr_err("gpiomatrix: gpio_request failed for "
"output %d\n", mi->output_gpios[i]);
goto err_request_output_gpio_failed;
}
if (gpio_cansleep(mi->output_gpios[i])) {
pr_err("gpiomatrix: unsupported output gpio %d,"
" can sleep\n", mi->output_gpios[i]);
err = -EINVAL;
goto err_output_gpio_configure_failed;
}
if (mi->flags & GPIOKPF_DRIVE_INACTIVE)
err = gpio_direction_output(mi->output_gpios[i],
!(mi->flags & GPIOKPF_ACTIVE_HIGH));
else
err = gpio_direction_input(mi->output_gpios[i]);
if (err) {
pr_err("gpiomatrix: gpio_configure failed for "
"output %d\n", mi->output_gpios[i]);
goto err_output_gpio_configure_failed;
}
}
for (i = 0; i < mi->ninputs; i++) {
err = gpio_request(mi->input_gpios[i], "gpio_kp_in");
if (err) {
pr_err("gpiomatrix: gpio_request failed for "
"input %d\n", mi->input_gpios[i]);
goto err_request_input_gpio_failed;
}
err = gpio_direction_input(mi->input_gpios[i]);
if (err) {
pr_err("gpiomatrix: gpio_direction_input failed"
" for input %d\n", mi->input_gpios[i]);
goto err_gpio_direction_input_failed;
}
}
kp->current_output = mi->noutputs;
kp->key_state_changed = 1;
hrtimer_init(&kp->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
kp->timer.function = gpio_keypad_timer_func;
wake_lock_init(&kp->wake_lock, WAKE_LOCK_SUSPEND, "gpio_kp");
err = gpio_keypad_request_irqs(kp);
kp->use_irq = err == 0;
pr_info("GPIO Matrix Keypad Driver: Start keypad matrix for "
"%s%s in %s mode\n", input_devs->dev[0]->name,
(input_devs->count > 1) ? "..." : "",
kp->use_irq ? "interrupt" : "polling");
if (kp->use_irq)
wake_lock(&kp->wake_lock);
hrtimer_start(&kp->timer, ktime_set(0, 0), HRTIMER_MODE_REL);
return 0;
}
err = 0;
kp = *data;
if (kp->use_irq)
for (i = mi->ninputs - 1; i >= 0; i--)
free_irq(gpio_to_irq(mi->input_gpios[i]), kp);
hrtimer_cancel(&kp->timer);
wake_lock_destroy(&kp->wake_lock);
for (i = mi->ninputs - 1; i >= 0; i--) {
err_gpio_direction_input_failed:
gpio_free(mi->input_gpios[i]);
err_request_input_gpio_failed:
;
}
for (i = mi->noutputs - 1; i >= 0; i--) {
err_output_gpio_configure_failed:
gpio_free(mi->output_gpios[i]);
err_request_output_gpio_failed:
;
}
err_bad_keymap:
kfree(kp);
err_kp_alloc_failed:
err_invalid_platform_data:
return err;
}
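Since report_key() and clear_phantom_key() both decode a keymap entry as keycode in the low MATRIX_CODE_BITS bits with the input-device index above them, a board file has to pack entries the same way. A hypothetical helper, purely illustrative:

/* Hypothetical: pack one entry of keymap[out * ninputs + in]. */
#define BOARD_KEY(dev, code) \
	((unsigned short)(((dev) << MATRIX_CODE_BITS) | \
			  ((code) & MATRIX_KEY_MASK)))

static const unsigned short board_matrix_keymap[2 * 2] = {
	BOARD_KEY(0, KEY_A), BOARD_KEY(0, KEY_B),	/* output 0 */
	BOARD_KEY(0, KEY_C), BOARD_KEY(0, KEY_D),	/* output 1 */
};

Unused crosspoints map to KEY_RESERVED (0), which report_key() logs at most but never forwards to the input layer.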

View File

@@ -0,0 +1,97 @@
/* drivers/input/misc/gpio_output.c
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/kernel.h>
#include <linux/gpio.h>
#include <linux/gpio_event.h>
int gpio_event_output_event(
struct gpio_event_input_devs *input_devs, struct gpio_event_info *info,
void **data, unsigned int dev, unsigned int type,
unsigned int code, int value)
{
int i;
struct gpio_event_output_info *oi;
oi = container_of(info, struct gpio_event_output_info, info);
if (type != oi->type)
return 0;
if (!(oi->flags & GPIOEDF_ACTIVE_HIGH))
value = !value;
for (i = 0; i < oi->keymap_size; i++)
if (dev == oi->keymap[i].dev && code == oi->keymap[i].code)
gpio_set_value(oi->keymap[i].gpio, value);
return 0;
}
int gpio_event_output_func(
struct gpio_event_input_devs *input_devs, struct gpio_event_info *info,
void **data, int func)
{
int ret;
int i;
struct gpio_event_output_info *oi;
oi = container_of(info, struct gpio_event_output_info, info);
if (func == GPIO_EVENT_FUNC_SUSPEND || func == GPIO_EVENT_FUNC_RESUME)
return 0;
if (func == GPIO_EVENT_FUNC_INIT) {
int output_level = !(oi->flags & GPIOEDF_ACTIVE_HIGH);
for (i = 0; i < oi->keymap_size; i++) {
int dev = oi->keymap[i].dev;
if (dev >= input_devs->count) {
pr_err("gpio_event_output_func: bad device "
"index %d >= %d for key code %d\n",
dev, input_devs->count,
oi->keymap[i].code);
ret = -EINVAL;
goto err_bad_keymap;
}
input_set_capability(input_devs->dev[dev], oi->type,
oi->keymap[i].code);
}
for (i = 0; i < oi->keymap_size; i++) {
ret = gpio_request(oi->keymap[i].gpio,
"gpio_event_output");
if (ret) {
pr_err("gpio_event_output_func: gpio_request "
"failed for %d\n", oi->keymap[i].gpio);
goto err_gpio_request_failed;
}
ret = gpio_direction_output(oi->keymap[i].gpio,
output_level);
if (ret) {
pr_err("gpio_event_output_func: "
"gpio_direction_output failed for %d\n",
oi->keymap[i].gpio);
goto err_gpio_direction_output_failed;
}
}
return 0;
}
ret = 0;
for (i = oi->keymap_size - 1; i >= 0; i--) {
err_gpio_direction_output_failed:
gpio_free(oi->keymap[i].gpio);
err_gpio_request_failed:
;
}
err_bad_keymap:
return ret;
}
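For symmetry with the input half, a sketch of an output map in a board file. The keymap fields, type and GPIOEDF_ACTIVE_HIGH flag are the ones this file dereferences; the .info.func/.info.event wiring and the GPIO number are assumptions:

/* Hypothetical: drive a keyboard-backlight GPIO from EV_LED events. */
static const struct gpio_event_direct_entry board_led_map[] = {
	{ .gpio = 57, .code = LED_MISC, .dev = 0 },	/* GPIO made up */
};

static struct gpio_event_output_info board_led_info = {
	.info.func = gpio_event_output_func,	/* assumed wiring */
	.info.event = gpio_event_output_event,
	.type = EV_LED,
	.flags = GPIOEDF_ACTIVE_HIGH,
	.keymap = board_led_map,
	.keymap_size = ARRAY_SIZE(board_led_map),
};

Note that GPIO_EVENT_FUNC_INIT drives every pin to its inactive level first (output_level above), so the LEDs start off.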

View File

@@ -0,0 +1,387 @@
/*
* drivers/input/misc/keychord.c
*
* Copyright (C) 2008 Google, Inc.
* Author: Mike Lockwood <lockwood@android.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/poll.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/keychord.h>
#include <linux/sched.h>
#define KEYCHORD_NAME "keychord"
#define BUFFER_SIZE 16
MODULE_AUTHOR("Mike Lockwood <lockwood@android.com>");
MODULE_DESCRIPTION("Key chord input driver");
MODULE_SUPPORTED_DEVICE("keychord");
MODULE_LICENSE("GPL");
#define NEXT_KEYCHORD(kc) ((struct input_keychord *) \
((char *)kc + sizeof(struct input_keychord) + \
kc->count * sizeof(kc->keycodes[0])))
struct keychord_device {
struct input_handler input_handler;
int registered;
/* list of keychords to monitor */
struct input_keychord *keychords;
int keychord_count;
/* bitmask of keys contained in our keychords */
unsigned long keybit[BITS_TO_LONGS(KEY_CNT)];
/* current state of the keys */
unsigned long keystate[BITS_TO_LONGS(KEY_CNT)];
/* number of keys that are currently pressed */
int key_down;
/* second input_device_id is needed for null termination */
struct input_device_id device_ids[2];
spinlock_t lock;
wait_queue_head_t waitq;
unsigned char head;
unsigned char tail;
__u16 buff[BUFFER_SIZE];
};
static int check_keychord(struct keychord_device *kdev,
struct input_keychord *keychord)
{
int i;
if (keychord->count != kdev->key_down)
return 0;
for (i = 0; i < keychord->count; i++) {
if (!test_bit(keychord->keycodes[i], kdev->keystate))
return 0;
}
/* we have a match */
return 1;
}
static void keychord_event(struct input_handle *handle, unsigned int type,
unsigned int code, int value)
{
struct keychord_device *kdev = handle->private;
struct input_keychord *keychord;
unsigned long flags;
int i, got_chord = 0;
if (type != EV_KEY || code >= KEY_MAX)
return;
spin_lock_irqsave(&kdev->lock, flags);
/* do nothing if key state did not change */
if (!test_bit(code, kdev->keystate) == !value)
goto done;
__change_bit(code, kdev->keystate);
if (value)
kdev->key_down++;
else
kdev->key_down--;
/* don't notify on key up */
if (!value)
goto done;
/* ignore this event if it is not one of the keys we are monitoring */
if (!test_bit(code, kdev->keybit))
goto done;
keychord = kdev->keychords;
if (!keychord)
goto done;
/* check to see if the keyboard state matches any keychords */
for (i = 0; i < kdev->keychord_count; i++) {
if (check_keychord(kdev, keychord)) {
kdev->buff[kdev->head] = keychord->id;
kdev->head = (kdev->head + 1) % BUFFER_SIZE;
got_chord = 1;
break;
}
/* skip to next keychord */
keychord = NEXT_KEYCHORD(keychord);
}
done:
spin_unlock_irqrestore(&kdev->lock, flags);
if (got_chord)
wake_up_interruptible(&kdev->waitq);
}
static int keychord_connect(struct input_handler *handler,
struct input_dev *dev,
const struct input_device_id *id)
{
int i, ret;
struct input_handle *handle;
struct keychord_device *kdev =
container_of(handler, struct keychord_device, input_handler);
/*
* ignore this input device if it does not contain any keycodes
* that we are monitoring
*/
for (i = 0; i < KEY_MAX; i++) {
if (test_bit(i, kdev->keybit) && test_bit(i, dev->keybit))
break;
}
if (i == KEY_MAX)
return -ENODEV;
handle = kzalloc(sizeof(*handle), GFP_KERNEL);
if (!handle)
return -ENOMEM;
handle->dev = dev;
handle->handler = handler;
handle->name = KEYCHORD_NAME;
handle->private = kdev;
ret = input_register_handle(handle);
if (ret)
goto err_input_register_handle;
ret = input_open_device(handle);
if (ret)
goto err_input_open_device;
pr_info("keychord: using input dev %s for fevent\n", dev->name);
return 0;
err_input_open_device:
input_unregister_handle(handle);
err_input_register_handle:
kfree(handle);
return ret;
}
static void keychord_disconnect(struct input_handle *handle)
{
input_close_device(handle);
input_unregister_handle(handle);
kfree(handle);
}
/*
* keychord_read is used to read keychord events from the driver
*/
static ssize_t keychord_read(struct file *file, char __user *buffer,
size_t count, loff_t *ppos)
{
struct keychord_device *kdev = file->private_data;
__u16 id;
int retval;
unsigned long flags;
if (count < sizeof(id))
return -EINVAL;
count = sizeof(id);
if (kdev->head == kdev->tail && (file->f_flags & O_NONBLOCK))
return -EAGAIN;
retval = wait_event_interruptible(kdev->waitq,
kdev->head != kdev->tail);
if (retval)
return retval;
spin_lock_irqsave(&kdev->lock, flags);
/* pop a keychord ID off the queue */
id = kdev->buff[kdev->tail];
kdev->tail = (kdev->tail + 1) % BUFFER_SIZE;
spin_unlock_irqrestore(&kdev->lock, flags);
if (copy_to_user(buffer, &id, count))
return -EFAULT;
return count;
}
/*
* keychord_write is used to configure the driver
*/
static ssize_t keychord_write(struct file *file, const char __user *buffer,
size_t count, loff_t *ppos)
{
struct keychord_device *kdev = file->private_data;
struct input_keychord *keychords = NULL;
struct input_keychord *keychord, *next, *end;
int ret, i, key;
unsigned long flags;
if (count < sizeof(struct input_keychord))
return -EINVAL;
keychords = kzalloc(count, GFP_KERNEL);
if (!keychords)
return -ENOMEM;
/* read list of keychords from userspace */
if (copy_from_user(keychords, buffer, count)) {
kfree(keychords);
return -EFAULT;
}
/* unregister handler before changing configuration */
if (kdev->registered) {
input_unregister_handler(&kdev->input_handler);
kdev->registered = 0;
}
spin_lock_irqsave(&kdev->lock, flags);
/* clear any existing configuration */
kfree(kdev->keychords);
kdev->keychords = NULL;
kdev->keychord_count = 0;
kdev->key_down = 0;
memset(kdev->keybit, 0, sizeof(kdev->keybit));
memset(kdev->keystate, 0, sizeof(kdev->keystate));
kdev->head = kdev->tail = 0;
keychord = keychords;
end = (struct input_keychord *)((char *)keychord + count);
while (keychord < end) {
next = NEXT_KEYCHORD(keychord);
if (keychord->count <= 0 || next > end) {
pr_err("keychord: invalid keycode count %d\n",
keychord->count);
goto err_unlock_return;
}
if (keychord->version != KEYCHORD_VERSION) {
pr_err("keychord: unsupported version %d\n",
keychord->version);
goto err_unlock_return;
}
/* keep track of the keys we are monitoring in keybit */
for (i = 0; i < keychord->count; i++) {
key = keychord->keycodes[i];
if (key < 0 || key >= KEY_CNT) {
pr_err("keychord: keycode %d out of range\n",
key);
goto err_unlock_return;
}
__set_bit(key, kdev->keybit);
}
kdev->keychord_count++;
keychord = next;
}
kdev->keychords = keychords;
spin_unlock_irqrestore(&kdev->lock, flags);
ret = input_register_handler(&kdev->input_handler);
if (ret) {
kfree(keychords);
kdev->keychords = NULL;
return ret;
}
kdev->registered = 1;
return count;
err_unlock_return:
spin_unlock_irqrestore(&kdev->lock, flags);
kfree(keychords);
return -EINVAL;
}
static unsigned int keychord_poll(struct file *file, poll_table *wait)
{
struct keychord_device *kdev = file->private_data;
poll_wait(file, &kdev->waitq, wait);
if (kdev->head != kdev->tail)
return POLLIN | POLLRDNORM;
return 0;
}
static int keychord_open(struct inode *inode, struct file *file)
{
struct keychord_device *kdev;
kdev = kzalloc(sizeof(struct keychord_device), GFP_KERNEL);
if (!kdev)
return -ENOMEM;
spin_lock_init(&kdev->lock);
init_waitqueue_head(&kdev->waitq);
kdev->input_handler.event = keychord_event;
kdev->input_handler.connect = keychord_connect;
kdev->input_handler.disconnect = keychord_disconnect;
kdev->input_handler.name = KEYCHORD_NAME;
kdev->input_handler.id_table = kdev->device_ids;
kdev->device_ids[0].flags = INPUT_DEVICE_ID_MATCH_EVBIT;
__set_bit(EV_KEY, kdev->device_ids[0].evbit);
file->private_data = kdev;
return 0;
}
static int keychord_release(struct inode *inode, struct file *file)
{
struct keychord_device *kdev = file->private_data;
if (kdev->registered)
input_unregister_handler(&kdev->input_handler);
kfree(kdev);
return 0;
}
static const struct file_operations keychord_fops = {
.owner = THIS_MODULE,
.open = keychord_open,
.release = keychord_release,
.read = keychord_read,
.write = keychord_write,
.poll = keychord_poll,
};
static struct miscdevice keychord_misc = {
.fops = &keychord_fops,
.name = KEYCHORD_NAME,
.minor = MISC_DYNAMIC_MINOR,
};
static int __init keychord_init(void)
{
return misc_register(&keychord_misc);
}
static void __exit keychord_exit(void)
{
misc_deregister(&keychord_misc);
}
module_init(keychord_init);
module_exit(keychord_exit);
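To make the read/write protocol concrete, a hypothetical userspace client. It assumes the input_keychord layout from <linux/keychord.h> (a version/id/count header followed by count keycodes, matching NEXT_KEYCHORD()) and a /dev/keychord node for the misc device; names and key choices are illustrative:

/* Hypothetical client: register one chord and block until it fires. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>
#include <linux/keychord.h>

int main(void)
{
	__u16 buf[sizeof(struct input_keychord) / sizeof(__u16) + 2];
	struct input_keychord *kc = (struct input_keychord *)buf;
	__u16 id;
	int fd = open("/dev/keychord", O_RDWR);

	if (fd < 0)
		return 1;
	kc->version = KEYCHORD_VERSION;
	kc->id = 1;
	kc->count = 2;
	kc->keycodes[0] = KEY_VOLUMEDOWN;
	kc->keycodes[1] = KEY_POWER;
	if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
		return 1;
	if (read(fd, &id, sizeof(id)) == (ssize_t)sizeof(id))
		printf("keychord %u fired\n", id);	/* id == 1 here */
	close(fd);
	return 0;
}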

View File

@@ -451,6 +451,12 @@ config TOUCHSCREEN_TNETV107X
To compile this driver as a module, choose M here: the
module will be called tnetv107x-ts.
config TOUCHSCREEN_SYNAPTICS_I2C_RMI
tristate "Synaptics i2c touchscreen"
depends on I2C
help
This enables support for Synaptics RMI over I2C based touchscreens.
config TOUCHSCREEN_TOUCHRIGHT
tristate "Touchright serial touchscreen"
select SERIO

View File

@@ -51,6 +51,7 @@ obj-$(CONFIG_TOUCHSCREEN_ST1232) += st1232.o
obj-$(CONFIG_TOUCHSCREEN_STMPE) += stmpe-ts.o
obj-$(CONFIG_TOUCHSCREEN_TI_TSCADC) += ti_tscadc.o
obj-$(CONFIG_TOUCHSCREEN_TNETV107X) += tnetv107x-ts.o
obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI) += synaptics_i2c_rmi.o
obj-$(CONFIG_TOUCHSCREEN_TOUCHIT213) += touchit213.o
obj-$(CONFIG_TOUCHSCREEN_TOUCHRIGHT) += touchright.o
obj-$(CONFIG_TOUCHSCREEN_TOUCHWIN) += touchwin.o

View File

@@ -0,0 +1,675 @@
/* drivers/input/keyboard/synaptics_i2c_rmi.c
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/earlysuspend.h>
#include <linux/hrtimer.h>
#include <linux/i2c.h>
#include <linux/input.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/synaptics_i2c_rmi.h>
static struct workqueue_struct *synaptics_wq;
struct synaptics_ts_data {
uint16_t addr;
struct i2c_client *client;
struct input_dev *input_dev;
int use_irq;
bool has_relative_report;
struct hrtimer timer;
struct work_struct work;
uint16_t max[2];
int snap_state[2][2];
int snap_down_on[2];
int snap_down_off[2];
int snap_up_on[2];
int snap_up_off[2];
int snap_down[2];
int snap_up[2];
uint32_t flags;
int reported_finger_count;
int8_t sensitivity_adjust;
int (*power)(int on);
struct early_suspend early_suspend;
};
#ifdef CONFIG_HAS_EARLYSUSPEND
static void synaptics_ts_early_suspend(struct early_suspend *h);
static void synaptics_ts_late_resume(struct early_suspend *h);
#endif
static int synaptics_init_panel(struct synaptics_ts_data *ts)
{
int ret;
ret = i2c_smbus_write_byte_data(ts->client, 0xff, 0x10); /* page select = 0x10 */
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_write_byte_data failed for page select\n");
goto err_page_select_failed;
}
ret = i2c_smbus_write_byte_data(ts->client, 0x41, 0x04); /* Set "No Clip Z" */
if (ret < 0)
printk(KERN_ERR "i2c_smbus_write_byte_data failed for No Clip Z\n");
ret = i2c_smbus_write_byte_data(ts->client, 0x44,
ts->sensitivity_adjust);
if (ret < 0)
pr_err("synaptics_ts: failed to set Sensitivity Adjust\n");
err_page_select_failed:
ret = i2c_smbus_write_byte_data(ts->client, 0xff, 0x04); /* page select = 0x04 */
if (ret < 0)
printk(KERN_ERR "i2c_smbus_write_byte_data failed for page select\n");
ret = i2c_smbus_write_byte_data(ts->client, 0xf0, 0x81); /* normal operation, 80 reports per second */
if (ret < 0)
printk(KERN_ERR "synaptics_ts_resume: i2c_smbus_write_byte_data failed\n");
return ret;
}
static void synaptics_ts_work_func(struct work_struct *work)
{
int i;
int ret;
int bad_data = 0;
struct i2c_msg msg[2];
uint8_t start_reg;
uint8_t buf[15];
struct synaptics_ts_data *ts = container_of(work, struct synaptics_ts_data, work);
int buf_len = ts->has_relative_report ? 15 : 13;
msg[0].addr = ts->client->addr;
msg[0].flags = 0;
msg[0].len = 1;
msg[0].buf = &start_reg;
start_reg = 0x00;
msg[1].addr = ts->client->addr;
msg[1].flags = I2C_M_RD;
msg[1].len = buf_len;
msg[1].buf = buf;
/* printk("synaptics_ts_work_func\n"); */
for (i = 0; i < ((ts->use_irq && !bad_data) ? 1 : 10); i++) {
ret = i2c_transfer(ts->client->adapter, msg, 2);
if (ret < 0) {
printk(KERN_ERR "synaptics_ts_work_func: i2c_transfer failed\n");
bad_data = 1;
} else {
/* printk("synaptics_ts_work_func: %x %x %x %x %x %x" */
/* " %x %x %x %x %x %x %x %x %x, ret %d\n", */
/* buf[0], buf[1], buf[2], buf[3], */
/* buf[4], buf[5], buf[6], buf[7], */
/* buf[8], buf[9], buf[10], buf[11], */
/* buf[12], buf[13], buf[14], ret); */
if ((buf[buf_len - 1] & 0xc0) != 0x40) {
printk(KERN_WARNING "synaptics_ts_work_func:"
" bad read %x %x %x %x %x %x %x %x %x"
" %x %x %x %x %x %x, ret %d\n",
buf[0], buf[1], buf[2], buf[3],
buf[4], buf[5], buf[6], buf[7],
buf[8], buf[9], buf[10], buf[11],
buf[12], buf[13], buf[14], ret);
if (bad_data)
synaptics_init_panel(ts);
bad_data = 1;
continue;
}
bad_data = 0;
if ((buf[buf_len - 1] & 1) == 0) {
/* printk("read %d coordinates\n", i); */
break;
} else {
int pos[2][2];
int f, a;
int base;
/* int x = buf[3] | (uint16_t)(buf[2] & 0x1f) << 8; */
/* int y = buf[5] | (uint16_t)(buf[4] & 0x1f) << 8; */
int z = buf[1];
int w = buf[0] >> 4;
int finger = buf[0] & 7;
/* int x2 = buf[3+6] | (uint16_t)(buf[2+6] & 0x1f) << 8; */
/* int y2 = buf[5+6] | (uint16_t)(buf[4+6] & 0x1f) << 8; */
/* int z2 = buf[1+6]; */
/* int w2 = buf[0+6] >> 4; */
/* int finger2 = buf[0+6] & 7; */
/* int dx = (int8_t)buf[12]; */
/* int dy = (int8_t)buf[13]; */
int finger2_pressed;
/* printk("x %4d, y %4d, z %3d, w %2d, F %d, 2nd: x %4d, y %4d, z %3d, w %2d, F %d, dx %4d, dy %4d\n", */
/* x, y, z, w, finger, */
/* x2, y2, z2, w2, finger2, */
/* dx, dy); */
base = 2;
for (f = 0; f < 2; f++) {
uint32_t flip_flag = SYNAPTICS_FLIP_X;
for (a = 0; a < 2; a++) {
int p = buf[base + 1];
p |= (uint16_t)(buf[base] & 0x1f) << 8;
if (ts->flags & flip_flag)
p = ts->max[a] - p;
if (ts->flags & SYNAPTICS_SNAP_TO_INACTIVE_EDGE) {
if (ts->snap_state[f][a]) {
if (p <= ts->snap_down_off[a])
p = ts->snap_down[a];
else if (p >= ts->snap_up_off[a])
p = ts->snap_up[a];
else
ts->snap_state[f][a] = 0;
} else {
if (p <= ts->snap_down_on[a]) {
p = ts->snap_down[a];
ts->snap_state[f][a] = 1;
} else if (p >= ts->snap_up_on[a]) {
p = ts->snap_up[a];
ts->snap_state[f][a] = 1;
}
}
}
pos[f][a] = p;
base += 2;
flip_flag <<= 1;
}
base += 2;
if (ts->flags & SYNAPTICS_SWAP_XY)
swap(pos[f][0], pos[f][1]);
}
if (z) {
input_report_abs(ts->input_dev, ABS_X, pos[0][0]);
input_report_abs(ts->input_dev, ABS_Y, pos[0][1]);
}
input_report_abs(ts->input_dev, ABS_PRESSURE, z);
input_report_abs(ts->input_dev, ABS_TOOL_WIDTH, w);
input_report_key(ts->input_dev, BTN_TOUCH, finger);
finger2_pressed = finger > 1 && finger != 7;
input_report_key(ts->input_dev, BTN_2, finger2_pressed);
if (finger2_pressed) {
input_report_abs(ts->input_dev, ABS_HAT0X, pos[1][0]);
input_report_abs(ts->input_dev, ABS_HAT0Y, pos[1][1]);
}
if (!finger)
z = 0;
input_report_abs(ts->input_dev, ABS_MT_TOUCH_MAJOR, z);
input_report_abs(ts->input_dev, ABS_MT_WIDTH_MAJOR, w);
input_report_abs(ts->input_dev, ABS_MT_POSITION_X, pos[0][0]);
input_report_abs(ts->input_dev, ABS_MT_POSITION_Y, pos[0][1]);
input_mt_sync(ts->input_dev);
if (finger2_pressed) {
input_report_abs(ts->input_dev, ABS_MT_TOUCH_MAJOR, z);
input_report_abs(ts->input_dev, ABS_MT_WIDTH_MAJOR, w);
input_report_abs(ts->input_dev, ABS_MT_POSITION_X, pos[1][0]);
input_report_abs(ts->input_dev, ABS_MT_POSITION_Y, pos[1][1]);
input_mt_sync(ts->input_dev);
} else if (ts->reported_finger_count > 1) {
input_report_abs(ts->input_dev, ABS_MT_TOUCH_MAJOR, 0);
input_report_abs(ts->input_dev, ABS_MT_WIDTH_MAJOR, 0);
input_mt_sync(ts->input_dev);
}
ts->reported_finger_count = finger;
input_sync(ts->input_dev);
}
}
}
if (ts->use_irq)
enable_irq(ts->client->irq);
}
static enum hrtimer_restart synaptics_ts_timer_func(struct hrtimer *timer)
{
struct synaptics_ts_data *ts = container_of(timer, struct synaptics_ts_data, timer);
/* printk("synaptics_ts_timer_func\n"); */
queue_work(synaptics_wq, &ts->work);
hrtimer_start(&ts->timer, ktime_set(0, 12500000), HRTIMER_MODE_REL);
return HRTIMER_NORESTART;
}
static irqreturn_t synaptics_ts_irq_handler(int irq, void *dev_id)
{
struct synaptics_ts_data *ts = dev_id;
/* printk("synaptics_ts_irq_handler\n"); */
disable_irq_nosync(ts->client->irq);
queue_work(synaptics_wq, &ts->work);
return IRQ_HANDLED;
}
static int synaptics_ts_probe(
struct i2c_client *client, const struct i2c_device_id *id)
{
struct synaptics_ts_data *ts;
uint8_t buf0[4];
uint8_t buf1[8];
struct i2c_msg msg[2];
int ret = 0;
uint16_t max_x, max_y;
int fuzz_x, fuzz_y, fuzz_p, fuzz_w;
struct synaptics_i2c_rmi_platform_data *pdata;
unsigned long irqflags;
int inactive_area_left;
int inactive_area_right;
int inactive_area_top;
int inactive_area_bottom;
int snap_left_on;
int snap_left_off;
int snap_right_on;
int snap_right_off;
int snap_top_on;
int snap_top_off;
int snap_bottom_on;
int snap_bottom_off;
uint32_t panel_version;
if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
printk(KERN_ERR "synaptics_ts_probe: need I2C_FUNC_I2C\n");
ret = -ENODEV;
goto err_check_functionality_failed;
}
ts = kzalloc(sizeof(*ts), GFP_KERNEL);
if (ts == NULL) {
ret = -ENOMEM;
goto err_alloc_data_failed;
}
INIT_WORK(&ts->work, synaptics_ts_work_func);
ts->client = client;
i2c_set_clientdata(client, ts);
pdata = client->dev.platform_data;
if (pdata)
ts->power = pdata->power;
if (ts->power) {
ret = ts->power(1);
if (ret < 0) {
printk(KERN_ERR "synaptics_ts_probe power on failed\n");
goto err_power_failed;
}
}
ret = i2c_smbus_write_byte_data(ts->client, 0xf4, 0x01); /* device command = reset */
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_write_byte_data failed\n");
/* fail? */
}
{
int retry = 10;
while (retry-- > 0) {
ret = i2c_smbus_read_byte_data(ts->client, 0xe4);
if (ret >= 0)
break;
msleep(100);
}
}
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_read_byte_data failed\n");
goto err_detect_failed;
}
printk(KERN_INFO "synaptics_ts_probe: Product Major Version %x\n", ret);
panel_version = ret << 8;
ret = i2c_smbus_read_byte_data(ts->client, 0xe5);
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_read_byte_data failed\n");
goto err_detect_failed;
}
printk(KERN_INFO "synaptics_ts_probe: Product Minor Version %x\n", ret);
panel_version |= ret;
ret = i2c_smbus_read_byte_data(ts->client, 0xe3);
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_read_byte_data failed\n");
goto err_detect_failed;
}
printk(KERN_INFO "synaptics_ts_probe: product property %x\n", ret);
if (pdata) {
while (pdata->version > panel_version)
pdata++;
ts->flags = pdata->flags;
ts->sensitivity_adjust = pdata->sensitivity_adjust;
irqflags = pdata->irqflags;
inactive_area_left = pdata->inactive_left;
inactive_area_right = pdata->inactive_right;
inactive_area_top = pdata->inactive_top;
inactive_area_bottom = pdata->inactive_bottom;
snap_left_on = pdata->snap_left_on;
snap_left_off = pdata->snap_left_off;
snap_right_on = pdata->snap_right_on;
snap_right_off = pdata->snap_right_off;
snap_top_on = pdata->snap_top_on;
snap_top_off = pdata->snap_top_off;
snap_bottom_on = pdata->snap_bottom_on;
snap_bottom_off = pdata->snap_bottom_off;
fuzz_x = pdata->fuzz_x;
fuzz_y = pdata->fuzz_y;
fuzz_p = pdata->fuzz_p;
fuzz_w = pdata->fuzz_w;
} else {
irqflags = 0;
inactive_area_left = 0;
inactive_area_right = 0;
inactive_area_top = 0;
inactive_area_bottom = 0;
snap_left_on = 0;
snap_left_off = 0;
snap_right_on = 0;
snap_right_off = 0;
snap_top_on = 0;
snap_top_off = 0;
snap_bottom_on = 0;
snap_bottom_off = 0;
fuzz_x = 0;
fuzz_y = 0;
fuzz_p = 0;
fuzz_w = 0;
}
ret = i2c_smbus_read_byte_data(ts->client, 0xf0);
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_read_byte_data failed\n");
goto err_detect_failed;
}
printk(KERN_INFO "synaptics_ts_probe: device control %x\n", ret);
ret = i2c_smbus_read_byte_data(ts->client, 0xf1);
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_read_byte_data failed\n");
goto err_detect_failed;
}
printk(KERN_INFO "synaptics_ts_probe: interrupt enable %x\n", ret);
ret = i2c_smbus_write_byte_data(ts->client, 0xf1, 0); /* disable interrupt */
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_write_byte_data failed\n");
goto err_detect_failed;
}
msg[0].addr = ts->client->addr;
msg[0].flags = 0;
msg[0].len = 1;
msg[0].buf = buf0;
buf0[0] = 0xe0;
msg[1].addr = ts->client->addr;
msg[1].flags = I2C_M_RD;
msg[1].len = 8;
msg[1].buf = buf1;
ret = i2c_transfer(ts->client->adapter, msg, 2);
if (ret < 0) {
printk(KERN_ERR "i2c_transfer failed\n");
goto err_detect_failed;
}
printk(KERN_INFO "synaptics_ts_probe: 0xe0: %x %x %x %x %x %x %x %x\n",
buf1[0], buf1[1], buf1[2], buf1[3],
buf1[4], buf1[5], buf1[6], buf1[7]);
ret = i2c_smbus_write_byte_data(ts->client, 0xff, 0x10); /* page select = 0x10 */
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_write_byte_data failed for page select\n");
goto err_detect_failed;
}
ret = i2c_smbus_read_word_data(ts->client, 0x02);
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_read_word_data failed\n");
goto err_detect_failed;
}
ts->has_relative_report = !(ret & 0x100);
printk(KERN_INFO "synaptics_ts_probe: Sensor properties %x\n", ret);
ret = i2c_smbus_read_word_data(ts->client, 0x04);
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_read_word_data failed\n");
goto err_detect_failed;
}
ts->max[0] = max_x = (ret >> 8 & 0xff) | ((ret & 0x1f) << 8);
ret = i2c_smbus_read_word_data(ts->client, 0x06);
if (ret < 0) {
printk(KERN_ERR "i2c_smbus_read_word_data failed\n");
goto err_detect_failed;
}
ts->max[1] = max_y = (ret >> 8 & 0xff) | ((ret & 0x1f) << 8);
if (ts->flags & SYNAPTICS_SWAP_XY)
swap(max_x, max_y);
ret = synaptics_init_panel(ts); /* will also switch back to page 0x04 */
if (ret < 0) {
printk(KERN_ERR "synaptics_init_panel failed\n");
goto err_detect_failed;
}
ts->input_dev = input_allocate_device();
if (ts->input_dev == NULL) {
ret = -ENOMEM;
printk(KERN_ERR "synaptics_ts_probe: Failed to allocate input device\n");
goto err_input_dev_alloc_failed;
}
ts->input_dev->name = "synaptics-rmi-touchscreen";
set_bit(EV_SYN, ts->input_dev->evbit);
set_bit(EV_KEY, ts->input_dev->evbit);
set_bit(BTN_TOUCH, ts->input_dev->keybit);
set_bit(BTN_2, ts->input_dev->keybit);
set_bit(EV_ABS, ts->input_dev->evbit);
inactive_area_left = inactive_area_left * max_x / 0x10000;
inactive_area_right = inactive_area_right * max_x / 0x10000;
inactive_area_top = inactive_area_top * max_y / 0x10000;
inactive_area_bottom = inactive_area_bottom * max_y / 0x10000;
snap_left_on = snap_left_on * max_x / 0x10000;
snap_left_off = snap_left_off * max_x / 0x10000;
snap_right_on = snap_right_on * max_x / 0x10000;
snap_right_off = snap_right_off * max_x / 0x10000;
snap_top_on = snap_top_on * max_y / 0x10000;
snap_top_off = snap_top_off * max_y / 0x10000;
snap_bottom_on = snap_bottom_on * max_y / 0x10000;
snap_bottom_off = snap_bottom_off * max_y / 0x10000;
fuzz_x = fuzz_x * max_x / 0x10000;
fuzz_y = fuzz_y * max_y / 0x10000;
ts->snap_down[!!(ts->flags & SYNAPTICS_SWAP_XY)] = -inactive_area_left;
ts->snap_up[!!(ts->flags & SYNAPTICS_SWAP_XY)] = max_x + inactive_area_right;
ts->snap_down[!(ts->flags & SYNAPTICS_SWAP_XY)] = -inactive_area_top;
ts->snap_up[!(ts->flags & SYNAPTICS_SWAP_XY)] = max_y + inactive_area_bottom;
ts->snap_down_on[!!(ts->flags & SYNAPTICS_SWAP_XY)] = snap_left_on;
ts->snap_down_off[!!(ts->flags & SYNAPTICS_SWAP_XY)] = snap_left_off;
ts->snap_up_on[!!(ts->flags & SYNAPTICS_SWAP_XY)] = max_x - snap_right_on;
ts->snap_up_off[!!(ts->flags & SYNAPTICS_SWAP_XY)] = max_x - snap_right_off;
ts->snap_down_on[!(ts->flags & SYNAPTICS_SWAP_XY)] = snap_top_on;
ts->snap_down_off[!(ts->flags & SYNAPTICS_SWAP_XY)] = snap_top_off;
ts->snap_up_on[!(ts->flags & SYNAPTICS_SWAP_XY)] = max_y - snap_bottom_on;
ts->snap_up_off[!(ts->flags & SYNAPTICS_SWAP_XY)] = max_y - snap_bottom_off;
printk(KERN_INFO "synaptics_ts_probe: max_x %d, max_y %d\n", max_x, max_y);
printk(KERN_INFO "synaptics_ts_probe: inactive_x %d %d, inactive_y %d %d\n",
inactive_area_left, inactive_area_right,
inactive_area_top, inactive_area_bottom);
printk(KERN_INFO "synaptics_ts_probe: snap_x %d-%d %d-%d, snap_y %d-%d %d-%d\n",
snap_left_on, snap_left_off, snap_right_on, snap_right_off,
snap_top_on, snap_top_off, snap_bottom_on, snap_bottom_off);
input_set_abs_params(ts->input_dev, ABS_X, -inactive_area_left, max_x + inactive_area_right, fuzz_x, 0);
input_set_abs_params(ts->input_dev, ABS_Y, -inactive_area_top, max_y + inactive_area_bottom, fuzz_y, 0);
input_set_abs_params(ts->input_dev, ABS_PRESSURE, 0, 255, fuzz_p, 0);
input_set_abs_params(ts->input_dev, ABS_TOOL_WIDTH, 0, 15, fuzz_w, 0);
input_set_abs_params(ts->input_dev, ABS_HAT0X, -inactive_area_left, max_x + inactive_area_right, fuzz_x, 0);
input_set_abs_params(ts->input_dev, ABS_HAT0Y, -inactive_area_top, max_y + inactive_area_bottom, fuzz_y, 0);
input_set_abs_params(ts->input_dev, ABS_MT_POSITION_X, -inactive_area_left, max_x + inactive_area_right, fuzz_x, 0);
input_set_abs_params(ts->input_dev, ABS_MT_POSITION_Y, -inactive_area_top, max_y + inactive_area_bottom, fuzz_y, 0);
input_set_abs_params(ts->input_dev, ABS_MT_TOUCH_MAJOR, 0, 255, fuzz_p, 0);
input_set_abs_params(ts->input_dev, ABS_MT_WIDTH_MAJOR, 0, 15, fuzz_w, 0);
/* ts->input_dev->name = ts->keypad_info->name; */
ret = input_register_device(ts->input_dev);
if (ret) {
printk(KERN_ERR "synaptics_ts_probe: Unable to register %s input device\n", ts->input_dev->name);
goto err_input_register_device_failed;
}
if (client->irq) {
ret = request_irq(client->irq, synaptics_ts_irq_handler, irqflags, client->name, ts);
if (ret == 0) {
ret = i2c_smbus_write_byte_data(ts->client, 0xf1, 0x01); /* enable abs int */
if (ret)
free_irq(client->irq, ts);
}
if (ret == 0)
ts->use_irq = 1;
else
dev_err(&client->dev, "request_irq failed\n");
}
if (!ts->use_irq) {
hrtimer_init(&ts->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
ts->timer.function = synaptics_ts_timer_func;
hrtimer_start(&ts->timer, ktime_set(1, 0), HRTIMER_MODE_REL);
}
#ifdef CONFIG_HAS_EARLYSUSPEND
ts->early_suspend.level = EARLY_SUSPEND_LEVEL_BLANK_SCREEN + 1;
ts->early_suspend.suspend = synaptics_ts_early_suspend;
ts->early_suspend.resume = synaptics_ts_late_resume;
register_early_suspend(&ts->early_suspend);
#endif
printk(KERN_INFO "synaptics_ts_probe: Start touchscreen %s in %s mode\n", ts->input_dev->name, ts->use_irq ? "interrupt" : "polling");
return 0;
err_input_register_device_failed:
input_free_device(ts->input_dev);
err_input_dev_alloc_failed:
err_detect_failed:
err_power_failed:
kfree(ts);
err_alloc_data_failed:
err_check_functionality_failed:
return ret;
}
static int synaptics_ts_remove(struct i2c_client *client)
{
struct synaptics_ts_data *ts = i2c_get_clientdata(client);
unregister_early_suspend(&ts->early_suspend);
if (ts->use_irq)
free_irq(client->irq, ts);
else
hrtimer_cancel(&ts->timer);
input_unregister_device(ts->input_dev);
kfree(ts);
return 0;
}
static int synaptics_ts_suspend(struct i2c_client *client, pm_message_t mesg)
{
int ret;
struct synaptics_ts_data *ts = i2c_get_clientdata(client);
if (ts->use_irq)
disable_irq(client->irq);
else
hrtimer_cancel(&ts->timer);
ret = cancel_work_sync(&ts->work);
if (ret && ts->use_irq) /* if work was pending disable-count is now 2 */
enable_irq(client->irq);
ret = i2c_smbus_write_byte_data(ts->client, 0xf1, 0); /* disable interrupt */
if (ret < 0)
printk(KERN_ERR "synaptics_ts_suspend: i2c_smbus_write_byte_data failed\n");
ret = i2c_smbus_write_byte_data(client, 0xf0, 0x86); /* deep sleep */
if (ret < 0)
printk(KERN_ERR "synaptics_ts_suspend: i2c_smbus_write_byte_data failed\n");
if (ts->power) {
ret = ts->power(0);
if (ret < 0)
printk(KERN_ERR "synaptics_ts_resume power off failed\n");
}
return 0;
}
static int synaptics_ts_resume(struct i2c_client *client)
{
int ret;
struct synaptics_ts_data *ts = i2c_get_clientdata(client);
if (ts->power) {
ret = ts->power(1);
if (ret < 0)
printk(KERN_ERR "synaptics_ts_resume power on failed\n");
}
synaptics_init_panel(ts);
if (ts->use_irq)
enable_irq(client->irq);
if (!ts->use_irq)
hrtimer_start(&ts->timer, ktime_set(1, 0), HRTIMER_MODE_REL);
else
i2c_smbus_write_byte_data(ts->client, 0xf1, 0x01); /* enable abs int */
return 0;
}
#ifdef CONFIG_HAS_EARLYSUSPEND
static void synaptics_ts_early_suspend(struct early_suspend *h)
{
struct synaptics_ts_data *ts;
ts = container_of(h, struct synaptics_ts_data, early_suspend);
synaptics_ts_suspend(ts->client, PMSG_SUSPEND);
}
static void synaptics_ts_late_resume(struct early_suspend *h)
{
struct synaptics_ts_data *ts;
ts = container_of(h, struct synaptics_ts_data, early_suspend);
synaptics_ts_resume(ts->client);
}
#endif
static const struct i2c_device_id synaptics_ts_id[] = {
{ SYNAPTICS_I2C_RMI_NAME, 0 },
{ }
};
static struct i2c_driver synaptics_ts_driver = {
.probe = synaptics_ts_probe,
.remove = synaptics_ts_remove,
#ifndef CONFIG_HAS_EARLYSUSPEND
.suspend = synaptics_ts_suspend,
.resume = synaptics_ts_resume,
#endif
.id_table = synaptics_ts_id,
.driver = {
.name = SYNAPTICS_I2C_RMI_NAME,
},
};
static int __init synaptics_ts_init(void)
{
synaptics_wq = create_singlethread_workqueue("synaptics_wq");
if (!synaptics_wq)
return -ENOMEM;
return i2c_add_driver(&synaptics_ts_driver);
}
static void __exit synaptics_ts_exit(void)
{
i2c_del_driver(&synaptics_ts_driver);
if (synaptics_wq)
destroy_workqueue(synaptics_wq);
}
module_init(synaptics_ts_init);
module_exit(synaptics_ts_exit);
MODULE_DESCRIPTION("Synaptics Touchscreen Driver");
MODULE_LICENSE("GPL");
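One non-obvious contract in the probe path above: platform data is selected with "while (pdata->version > panel_version) pdata++;", so client->dev.platform_data must point at an array sorted by descending version and terminated by an entry the walk can always land on. A hypothetical board table (all values illustrative):

/* Hypothetical: newest panels first, version 0 as the catch-all. */
static struct synaptics_i2c_rmi_platform_data board_ts_data[] = {
	{
		.version = 0x0102,	/* picked when panel_version >= 0x0102 */
		.flags = SYNAPTICS_SWAP_XY | SYNAPTICS_SNAP_TO_INACTIVE_EDGE,
		.sensitivity_adjust = 8,
		.fuzz_x = 0x180,	/* scaled by max_x / 0x10000 in probe */
		.fuzz_y = 0x180,
	},
	{
		.version = 0,		/* matches every older panel */
		.flags = SYNAPTICS_SWAP_XY,
	},
};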

View File

@@ -468,6 +468,12 @@ config LEDS_TRIGGER_DEFAULT_ON
This allows LEDs to be initialised in the ON state.
If unsure, say Y.
config LEDS_TRIGGER_SLEEP
tristate "LED Sleep Mode Trigger"
depends on LEDS_TRIGGERS && HAS_EARLYSUSPEND
help
This turns LEDs on when the screen is off but the CPU is still running.
comment "iptables trigger is under Netfilter config (LED target)"
depends on LEDS_TRIGGERS

View File

@@ -56,3 +56,4 @@ obj-$(CONFIG_LEDS_TRIGGER_HEARTBEAT) += ledtrig-heartbeat.o
obj-$(CONFIG_LEDS_TRIGGER_BACKLIGHT) += ledtrig-backlight.o
obj-$(CONFIG_LEDS_TRIGGER_GPIO) += ledtrig-gpio.o
obj-$(CONFIG_LEDS_TRIGGER_DEFAULT_ON) += ledtrig-default-on.o
obj-$(CONFIG_LEDS_TRIGGER_SLEEP) += ledtrig-sleep.o

View File

@@ -0,0 +1,80 @@
/* drivers/leds/ledtrig-sleep.c
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/earlysuspend.h>
#include <linux/leds.h>
#include <linux/suspend.h>
static int ledtrig_sleep_pm_callback(struct notifier_block *nfb,
unsigned long action,
void *ignored);
DEFINE_LED_TRIGGER(ledtrig_sleep)
static struct notifier_block ledtrig_sleep_pm_notifier = {
.notifier_call = ledtrig_sleep_pm_callback,
.priority = 0,
};
static void ledtrig_sleep_early_suspend(struct early_suspend *h)
{
led_trigger_event(ledtrig_sleep, LED_FULL);
}
static void ledtrig_sleep_early_resume(struct early_suspend *h)
{
led_trigger_event(ledtrig_sleep, LED_OFF);
}
static struct early_suspend ledtrig_sleep_early_suspend_handler = {
.suspend = ledtrig_sleep_early_suspend,
.resume = ledtrig_sleep_early_resume,
};
static int ledtrig_sleep_pm_callback(struct notifier_block *nfb,
unsigned long action,
void *ignored)
{
switch (action) {
case PM_HIBERNATION_PREPARE:
case PM_SUSPEND_PREPARE:
led_trigger_event(ledtrig_sleep, LED_OFF);
return NOTIFY_OK;
case PM_POST_HIBERNATION:
case PM_POST_SUSPEND:
led_trigger_event(ledtrig_sleep, LED_FULL);
return NOTIFY_OK;
}
return NOTIFY_DONE;
}
static int __init ledtrig_sleep_init(void)
{
led_trigger_register_simple("sleep", &ledtrig_sleep);
register_pm_notifier(&ledtrig_sleep_pm_notifier);
register_early_suspend(&ledtrig_sleep_early_suspend_handler);
return 0;
}
static void __exit ledtrig_sleep_exit(void)
{
unregister_early_suspend(&ledtrig_sleep_early_suspend_handler);
unregister_pm_notifier(&ledtrig_sleep_pm_notifier);
led_trigger_unregister_simple(ledtrig_sleep);
}
module_init(ledtrig_sleep_init);
module_exit(ledtrig_sleep_exit);
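Since the trigger registers under the name "sleep", any LED class device can be bound to it through the standard leds sysfs interface. A minimal userspace sketch (the LED name is hypothetical):

/* Hypothetical: attach an LED to the "sleep" trigger. */
#include <fcntl.h>
#include <unistd.h>

int bind_sleep_trigger(void)
{
	int fd = open("/sys/class/leds/amber/trigger", O_WRONLY);

	if (fd < 0)
		return -1;
	write(fd, "sleep", 5);	/* name from led_trigger_register_simple() */
	close(fd);
	return 0;
}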

View File

@@ -382,6 +382,14 @@ config HMC6352
This driver provides support for the Honeywell HMC6352 compass,
providing configuration and heading data via sysfs.
config SENSORS_AK8975
tristate "AK8975 compass support"
default n
depends on I2C
help
If you say yes here you get support for Asahi Kasei's
orientation sensor AK8975.
config EP93XX_PWM
tristate "EP93xx PWM support"
depends on ARCH_EP93XX
@@ -425,6 +433,10 @@ config TI_DAC7512
This driver can also be built as a module. If so, the module
will be called ti_dac7512.
config UID_STAT
bool "UID based statistics tracking exported to /proc/uid_stat"
default n
config VMWARE_BALLOON
tristate "VMware Balloon Driver"
depends on X86
@@ -498,6 +510,14 @@ config MAX8997_MUIC
Maxim MAX8997 PMIC.
The MAX8997 MUIC is a USB port accessory detector and switch.
config WL127X_RFKILL
tristate "Bluetooth power control driver for TI wl127x"
depends on RFKILL
default n
---help---
Creates an rfkill entry in sysfs for power control of Bluetooth
TI wl127x chips.
source "drivers/misc/c2port/Kconfig"
source "drivers/misc/eeprom/Kconfig"
source "drivers/misc/cb710/Kconfig"

View File

@@ -33,6 +33,7 @@ obj-$(CONFIG_SENSORS_TSL2550) += tsl2550.o
obj-$(CONFIG_EP93XX_PWM) += ep93xx_pwm.o
obj-$(CONFIG_DS1682) += ds1682.o
obj-$(CONFIG_TI_DAC7512) += ti_dac7512.o
obj-$(CONFIG_UID_STAT) += uid_stat.o
obj-$(CONFIG_C2PORT) += c2port/
obj-$(CONFIG_IWMC3200TOP) += iwmc3200top/
obj-$(CONFIG_HMC6352) += hmc6352.o
@@ -49,3 +50,5 @@ obj-y += carma/
obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
obj-$(CONFIG_ALTERA_STAPL) +=altera-stapl/
obj-$(CONFIG_MAX8997_MUIC) += max8997-muic.o
obj-$(CONFIG_WL127X_RFKILL) += wl127x-rfkill.o
obj-$(CONFIG_SENSORS_AK8975) += akm8975.o

732
drivers/misc/akm8975.c Normal file
View File

@@ -0,0 +1,732 @@
/* drivers/misc/akm8975.c - akm8975 compass driver
*
* Copyright (C) 2007-2008 HTC Corporation.
* Author: Hou-Kun Chen <houkun.chen@gmail.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
/*
* Revised by AKM 2009/04/02
* Revised by Motorola 2010/05/27
*
*/
#include <linux/interrupt.h>
#include <linux/i2c.h>
#include <linux/slab.h>
#include <linux/irq.h>
#include <linux/miscdevice.h>
#include <linux/gpio.h>
#include <linux/uaccess.h>
#include <linux/delay.h>
#include <linux/input.h>
#include <linux/workqueue.h>
#include <linux/freezer.h>
#include <linux/akm8975.h>
#include <linux/earlysuspend.h>
#define AK8975DRV_CALL_DBG 0
#if AK8975DRV_CALL_DBG
#define FUNCDBG(msg) pr_err("%s:%s\n", __func__, msg)
#else
#define FUNCDBG(msg)
#endif
#define AK8975DRV_DATA_DBG 0
#define MAX_FAILURE_COUNT 10
struct akm8975_data {
struct i2c_client *this_client;
struct akm8975_platform_data *pdata;
struct input_dev *input_dev;
struct work_struct work;
struct mutex flags_lock;
#ifdef CONFIG_HAS_EARLYSUSPEND
struct early_suspend early_suspend;
#endif
};
/*
* Because misc devices cannot carry a pointer from driver register to
* open, we keep this global. This limits the driver to a single instance.
*/
struct akm8975_data *akmd_data;
static DECLARE_WAIT_QUEUE_HEAD(open_wq);
static atomic_t open_flag;
static short m_flag;
static short a_flag;
static short t_flag;
static short mv_flag;
static short akmd_delay;
static ssize_t akm8975_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct i2c_client *client = to_i2c_client(dev);
return sprintf(buf, "%u\n", i2c_smbus_read_byte_data(client,
AK8975_REG_CNTL));
}
static ssize_t akm8975_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct i2c_client *client = to_i2c_client(dev);
unsigned long val;
if (strict_strtoul(buf, 10, &val))
return -EINVAL;
if (val > 0xff)
return -EINVAL;
i2c_smbus_write_byte_data(client, AK8975_REG_CNTL, val);
return count;
}
static DEVICE_ATTR(akm_ms1, S_IWUSR | S_IRUGO, akm8975_show, akm8975_store);
static int akm8975_i2c_rxdata(struct akm8975_data *akm, char *buf, int length)
{
struct i2c_msg msgs[] = {
{
.addr = akm->this_client->addr,
.flags = 0,
.len = 1,
.buf = buf,
},
{
.addr = akm->this_client->addr,
.flags = I2C_M_RD,
.len = length,
.buf = buf,
},
};
FUNCDBG("called");
if (i2c_transfer(akm->this_client->adapter, msgs, 2) < 0) {
pr_err("akm8975_i2c_rxdata: transfer error\n");
return -EIO;
} else
return 0;
}
static int akm8975_i2c_txdata(struct akm8975_data *akm, char *buf, int length)
{
struct i2c_msg msgs[] = {
{
.addr = akm->this_client->addr,
.flags = 0,
.len = length,
.buf = buf,
},
};
FUNCDBG("called");
if (i2c_transfer(akm->this_client->adapter, msgs, 1) < 0) {
pr_err("akm8975_i2c_txdata: transfer error\n");
return -EIO;
} else
return 0;
}
static void akm8975_ecs_report_value(struct akm8975_data *akm, short *rbuf)
{
struct akm8975_data *data = i2c_get_clientdata(akm->this_client);
FUNCDBG("called");
#if AK8975DRV_DATA_DBG
pr_info("akm8975_ecs_report_value: yaw = %d, pitch = %d, roll = %d\n",
rbuf[0], rbuf[1], rbuf[2]);
pr_info("tmp = %d, m_stat= %d, g_stat=%d\n", rbuf[3], rbuf[4], rbuf[5]);
pr_info("Acceleration: x = %d LSB, y = %d LSB, z = %d LSB\n",
rbuf[6], rbuf[7], rbuf[8]);
pr_info("Magnetic: x = %d LSB, y = %d LSB, z = %d LSB\n\n",
rbuf[9], rbuf[10], rbuf[11]);
#endif
mutex_lock(&akm->flags_lock);
/* Report magnetic sensor information */
if (m_flag) {
input_report_abs(data->input_dev, ABS_RX, rbuf[0]);
input_report_abs(data->input_dev, ABS_RY, rbuf[1]);
input_report_abs(data->input_dev, ABS_RZ, rbuf[2]);
input_report_abs(data->input_dev, ABS_RUDDER, rbuf[4]);
}
/* Report acceleration sensor information */
if (a_flag) {
input_report_abs(data->input_dev, ABS_X, rbuf[6]);
input_report_abs(data->input_dev, ABS_Y, rbuf[7]);
input_report_abs(data->input_dev, ABS_Z, rbuf[8]);
input_report_abs(data->input_dev, ABS_WHEEL, rbuf[5]);
}
/* Report temperature information */
if (t_flag)
input_report_abs(data->input_dev, ABS_THROTTLE, rbuf[3]);
if (mv_flag) {
input_report_abs(data->input_dev, ABS_HAT0X, rbuf[9]);
input_report_abs(data->input_dev, ABS_HAT0Y, rbuf[10]);
input_report_abs(data->input_dev, ABS_BRAKE, rbuf[11]);
}
mutex_unlock(&akm->flags_lock);
input_sync(data->input_dev);
}
static void akm8975_ecs_close_done(struct akm8975_data *akm)
{
FUNCDBG("called");
mutex_lock(&akm->flags_lock);
m_flag = 1;
a_flag = 1;
t_flag = 1;
mv_flag = 1;
mutex_unlock(&akm->flags_lock);
}
static int akm_aot_open(struct inode *inode, struct file *file)
{
int ret = -1;
FUNCDBG("called");
if (atomic_cmpxchg(&open_flag, 0, 1) == 0) {
wake_up(&open_wq);
ret = 0;
}
ret = nonseekable_open(inode, file);
if (ret)
return ret;
file->private_data = akmd_data;
return ret;
}
static int akm_aot_release(struct inode *inode, struct file *file)
{
FUNCDBG("called");
atomic_set(&open_flag, 0);
wake_up(&open_wq);
return 0;
}
static int akm_aot_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg)
{
void __user *argp = (void __user *) arg;
short flag;
struct akm8975_data *akm = file->private_data;
FUNCDBG("called");
switch (cmd) {
case ECS_IOCTL_APP_SET_MFLAG:
case ECS_IOCTL_APP_SET_AFLAG:
case ECS_IOCTL_APP_SET_MVFLAG:
if (copy_from_user(&flag, argp, sizeof(flag)))
return -EFAULT;
if (flag < 0 || flag > 1)
return -EINVAL;
break;
case ECS_IOCTL_APP_SET_DELAY:
if (copy_from_user(&flag, argp, sizeof(flag)))
return -EFAULT;
break;
default:
break;
}
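/* Second pass: apply or read the shared flag under the mutex. */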
mutex_lock(&akm->flags_lock);
switch (cmd) {
case ECS_IOCTL_APP_SET_MFLAG:
m_flag = flag;
break;
case ECS_IOCTL_APP_GET_MFLAG:
flag = m_flag;
break;
case ECS_IOCTL_APP_SET_AFLAG:
a_flag = flag;
break;
case ECS_IOCTL_APP_GET_AFLAG:
flag = a_flag;
break;
case ECS_IOCTL_APP_SET_MVFLAG:
mv_flag = flag;
break;
case ECS_IOCTL_APP_GET_MVFLAG:
flag = mv_flag;
break;
case ECS_IOCTL_APP_SET_DELAY:
akmd_delay = flag;
break;
case ECS_IOCTL_APP_GET_DELAY:
flag = akmd_delay;
break;
default:
return -ENOTTY;
}
mutex_unlock(&akm->flags_lock);
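/* Third pass: copy any result back out to userspace, outside the lock. */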
switch (cmd) {
case ECS_IOCTL_APP_GET_MFLAG:
case ECS_IOCTL_APP_GET_AFLAG:
case ECS_IOCTL_APP_GET_MVFLAG:
case ECS_IOCTL_APP_GET_DELAY:
if (copy_to_user(argp, &flag, sizeof(flag)))
return -EFAULT;
break;
default:
break;
}
return 0;
}
static int akmd_open(struct inode *inode, struct file *file)
{
int err = 0;
FUNCDBG("called");
err = nonseekable_open(inode, file);
if (err)
return err;
file->private_data = akmd_data;
return 0;
}
static int akmd_release(struct inode *inode, struct file *file)
{
struct akm8975_data *akm = file->private_data;
FUNCDBG("called");
akm8975_ecs_close_done(akm);
return 0;
}
static int akmd_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
unsigned long arg)
{
void __user *argp = (void __user *) arg;
char rwbuf[16];
int ret = -1;
int status;
short value[12];
short delay;
struct akm8975_data *akm = file->private_data;
FUNCDBG("called");
switch (cmd) {
case ECS_IOCTL_READ:
case ECS_IOCTL_WRITE:
if (copy_from_user(&rwbuf, argp, sizeof(rwbuf)))
return -EFAULT;
break;
case ECS_IOCTL_SET_YPR:
if (copy_from_user(&value, argp, sizeof(value)))
return -EFAULT;
break;
default:
break;
}
switch (cmd) {
case ECS_IOCTL_READ:
if (rwbuf[0] < 1 || rwbuf[0] > sizeof(rwbuf) - 1)
return -EINVAL;
ret = akm8975_i2c_rxdata(akm, &rwbuf[1], rwbuf[0]);
if (ret < 0)
return ret;
break;
case ECS_IOCTL_WRITE:
if (rwbuf[0] < 2 || rwbuf[0] > sizeof(rwbuf) - 1)
return -EINVAL;
ret = akm8975_i2c_txdata(akm, &rwbuf[1], rwbuf[0]);
if (ret < 0)
return ret;
break;
case ECS_IOCTL_SET_YPR:
akm8975_ecs_report_value(akm, value);
break;
case ECS_IOCTL_GET_OPEN_STATUS:
wait_event_interruptible(open_wq,
(atomic_read(&open_flag) != 0));
status = atomic_read(&open_flag);
break;
case ECS_IOCTL_GET_CLOSE_STATUS:
wait_event_interruptible(open_wq,
(atomic_read(&open_flag) == 0));
status = atomic_read(&open_flag);
break;
case ECS_IOCTL_GET_DELAY:
delay = akmd_delay;
break;
default:
FUNCDBG("Unknown cmd\n");
return -ENOTTY;
}
switch (cmd) {
case ECS_IOCTL_READ:
if (copy_to_user(argp, &rwbuf, sizeof(rwbuf)))
return -EFAULT;
break;
case ECS_IOCTL_GET_OPEN_STATUS:
case ECS_IOCTL_GET_CLOSE_STATUS:
if (copy_to_user(argp, &status, sizeof(status)))
return -EFAULT;
break;
case ECS_IOCTL_GET_DELAY:
if (copy_to_user(argp, &delay, sizeof(delay)))
return -EFAULT;
break;
default:
break;
}
return 0;
}
/* Re-enable the IRQ from process context; needed to clear the interrupt pin. */
static void akm_work_func(struct work_struct *work)
{
struct akm8975_data *akm =
container_of(work, struct akm8975_data, work);
FUNCDBG("called");
enable_irq(akm->this_client->irq);
}
static irqreturn_t akm8975_interrupt(int irq, void *dev_id)
{
struct akm8975_data *akm = dev_id;
FUNCDBG("called");
disable_irq_nosync(akm->this_client->irq);
schedule_work(&akm->work);
return IRQ_HANDLED;
}
static int akm8975_power_off(struct akm8975_data *akm)
{
#if AK8975DRV_CALL_DBG
pr_info("%s\n", __func__);
#endif
if (akm->pdata->power_off)
akm->pdata->power_off();
return 0;
}
static int akm8975_power_on(struct akm8975_data *akm)
{
int err;
#if AK8975DRV_CALL_DBG
pr_info("%s\n", __func__);
#endif
if (akm->pdata->power_on) {
err = akm->pdata->power_on();
if (err < 0)
return err;
}
return 0;
}
static int akm8975_suspend(struct i2c_client *client, pm_message_t mesg)
{
struct akm8975_data *akm = i2c_get_clientdata(client);
#if AK8975DRV_CALL_DBG
pr_info("%s\n", __func__);
#endif
/* TO DO: might need more work after power mgmt
is enabled */
return akm8975_power_off(akm);
}
static int akm8975_resume(struct i2c_client *client)
{
struct akm8975_data *akm = i2c_get_clientdata(client);
#if AK8975DRV_CALL_DBG
pr_info("%s\n", __func__);
#endif
/* TO DO: might need more work after power mgmt
is enabled */
return akm8975_power_on(akm);
}
#ifdef CONFIG_HAS_EARLYSUSPEND
static void akm8975_early_suspend(struct early_suspend *handler)
{
struct akm8975_data *akm;
akm = container_of(handler, struct akm8975_data, early_suspend);
#if AK8975DRV_CALL_DBG
pr_info("%s\n", __func__);
#endif
akm8975_suspend(akm->this_client, PMSG_SUSPEND);
}
static void akm8975_early_resume(struct early_suspend *handler)
{
struct akm8975_data *akm;
akm = container_of(handler, struct akm8975_data, early_suspend);
#if AK8975DRV_CALL_DBG
pr_info("%s\n", __func__);
#endif
akm8975_resume(akm->this_client);
}
#endif
static int akm8975_init_client(struct i2c_client *client)
{
struct akm8975_data *data;
int ret;
data = i2c_get_clientdata(client);
ret = request_irq(client->irq, akm8975_interrupt, IRQF_TRIGGER_RISING,
"akm8975", data);
if (ret < 0) {
pr_err("akm8975_init_client: request irq failed\n");
goto err;
}
init_waitqueue_head(&open_wq);
mutex_lock(&data->flags_lock);
m_flag = 1;
a_flag = 1;
t_flag = 1;
mv_flag = 1;
mutex_unlock(&data->flags_lock);
return 0;
err:
return ret;
}
static const struct file_operations akmd_fops = {
.owner = THIS_MODULE,
.open = akmd_open,
.release = akmd_release,
.ioctl = akmd_ioctl,
};
static const struct file_operations akm_aot_fops = {
.owner = THIS_MODULE,
.open = akm_aot_open,
.release = akm_aot_release,
.ioctl = akm_aot_ioctl,
};
static struct miscdevice akm_aot_device = {
.minor = MISC_DYNAMIC_MINOR,
.name = "akm8975_aot",
.fops = &akm_aot_fops,
};
static struct miscdevice akmd_device = {
.minor = MISC_DYNAMIC_MINOR,
.name = "akm8975_dev",
.fops = &akmd_fops,
};
int akm8975_probe(struct i2c_client *client,
const struct i2c_device_id *devid)
{
struct akm8975_data *akm;
int err;
FUNCDBG("called");
if (client->dev.platform_data == NULL) {
dev_err(&client->dev, "platform data is NULL. exiting.\n");
err = -ENODEV;
goto exit_platform_data_null;
}
if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
dev_err(&client->dev, "i2c functionality check failed. exiting.\n");
err = -ENODEV;
goto exit_check_functionality_failed;
}
akm = kzalloc(sizeof(struct akm8975_data), GFP_KERNEL);
if (!akm) {
dev_err(&client->dev,
"failed to allocate memory for module data\n");
err = -ENOMEM;
goto exit_alloc_data_failed;
}
akm->pdata = client->dev.platform_data;
mutex_init(&akm->flags_lock);
INIT_WORK(&akm->work, akm_work_func);
i2c_set_clientdata(client, akm);
err = akm8975_power_on(akm);
if (err < 0)
goto exit_power_on_failed;
err = akm8975_init_client(client);
if (err)
goto exit_input_dev_alloc_failed;
akm->this_client = client;
akmd_data = akm;
akm->input_dev = input_allocate_device();
if (!akm->input_dev) {
err = -ENOMEM;
dev_err(&akm->this_client->dev,
"input device allocate failed\n");
goto exit_input_dev_alloc_failed;
}
set_bit(EV_ABS, akm->input_dev->evbit);
/* yaw */
input_set_abs_params(akm->input_dev, ABS_RX, 0, 23040, 0, 0);
/* pitch */
input_set_abs_params(akm->input_dev, ABS_RY, -11520, 11520, 0, 0);
/* roll */
input_set_abs_params(akm->input_dev, ABS_RZ, -5760, 5760, 0, 0);
/* x-axis acceleration */
input_set_abs_params(akm->input_dev, ABS_X, -5760, 5760, 0, 0);
/* y-axis acceleration */
input_set_abs_params(akm->input_dev, ABS_Y, -5760, 5760, 0, 0);
/* z-axis acceleration */
input_set_abs_params(akm->input_dev, ABS_Z, -5760, 5760, 0, 0);
/* temperature */
input_set_abs_params(akm->input_dev, ABS_THROTTLE, -30, 85, 0, 0);
/* status of magnetic sensor */
input_set_abs_params(akm->input_dev, ABS_RUDDER, 0, 3, 0, 0);
/* status of acceleration sensor */
input_set_abs_params(akm->input_dev, ABS_WHEEL, 0, 3, 0, 0);
/* x-axis of raw magnetic vector */
input_set_abs_params(akm->input_dev, ABS_HAT0X, -20480, 20479, 0, 0);
/* y-axis of raw magnetic vector */
input_set_abs_params(akm->input_dev, ABS_HAT0Y, -20480, 20479, 0, 0);
/* z-axis of raw magnetic vector */
input_set_abs_params(akm->input_dev, ABS_BRAKE, -20480, 20479, 0, 0);
akm->input_dev->name = "compass";
err = input_register_device(akm->input_dev);
if (err) {
pr_err("akm8975_probe: Unable to register input device: %s\n",
akm->input_dev->name);
goto exit_input_register_device_failed;
}
err = misc_register(&akmd_device);
if (err) {
pr_err("akm8975_probe: akmd_device register failed\n");
goto exit_misc_device_register_failed;
}
err = misc_register(&akm_aot_device);
if (err) {
pr_err("akm8975_probe: akm_aot_device register failed\n");
goto exit_misc_device_register_failed;
}
err = device_create_file(&client->dev, &dev_attr_akm_ms1);
#ifdef CONFIG_HAS_EARLYSUSPEND
akm->early_suspend.suspend = akm8975_early_suspend;
akm->early_suspend.resume = akm8975_early_resume;
register_early_suspend(&akm->early_suspend);
#endif
return 0;
exit_misc_device_register_failed:
exit_input_register_device_failed:
input_free_device(akm->input_dev);
exit_input_dev_alloc_failed:
akm8975_power_off(akm);
exit_power_on_failed:
kfree(akm);
exit_alloc_data_failed:
exit_check_functionality_failed:
exit_platform_data_null:
return err;
}
static int __devexit akm8975_remove(struct i2c_client *client)
{
struct akm8975_data *akm = i2c_get_clientdata(client);
FUNCDBG("called");
free_irq(client->irq, akm);
input_unregister_device(akm->input_dev);
misc_deregister(&akmd_device);
misc_deregister(&akm_aot_device);
akm8975_power_off(akm);
kfree(akm);
return 0;
}
static const struct i2c_device_id akm8975_id[] = {
{ "akm8975", 0 },
{ }
};
MODULE_DEVICE_TABLE(i2c, akm8975_id);
static struct i2c_driver akm8975_driver = {
.probe = akm8975_probe,
.remove = __devexit_p(akm8975_remove),
#ifndef CONFIG_HAS_EARLYSUSPEND
.resume = akm8975_resume,
.suspend = akm8975_suspend,
#endif
.id_table = akm8975_id,
.driver = {
.name = "akm8975",
},
};
static int __init akm8975_init(void)
{
pr_info("AK8975 compass driver: init\n");
FUNCDBG("AK8975 compass driver: init\n");
return i2c_add_driver(&akm8975_driver);
}
static void __exit akm8975_exit(void)
{
FUNCDBG("AK8975 compass driver: exit\n");
i2c_del_driver(&akm8975_driver);
}
module_init(akm8975_init);
module_exit(akm8975_exit);
MODULE_AUTHOR("Hou-Kun Chen <hk_chen@htc.com>");
MODULE_DESCRIPTION("AK8975 compass driver");
MODULE_LICENSE("GPL");
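For reference, a minimal userspace sketch of the control interface registered above. This is a hypothetical usage example, not part of the commit: it assumes the request codes from <linux/akm8975.h> and the device node created for the akm8975_aot miscdevice.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/akm8975.h>

int main(void)
{
	short enable = 1;	/* flags must be 0 or 1, as akm_aot_ioctl() checks */
	short delay = 20;	/* poll delay consumed by the sensor daemon */
	int fd = open("/dev/akm8975_aot", O_RDWR);

	if (fd < 0) {
		perror("open /dev/akm8975_aot");
		return 1;
	}
	if (ioctl(fd, ECS_IOCTL_APP_SET_MFLAG, &enable) < 0)
		perror("ECS_IOCTL_APP_SET_MFLAG");
	if (ioctl(fd, ECS_IOCTL_APP_SET_DELAY, &delay) < 0)
		perror("ECS_IOCTL_APP_SET_DELAY");
	close(fd);
	return 0;
}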

drivers/misc/uid_stat.c Normal file, 156 lines

@@ -0,0 +1,156 @@
/* drivers/misc/uid_stat.c
*
* Copyright (C) 2008 - 2009 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <asm/atomic.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/proc_fs.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/stat.h>
#include <linux/uid_stat.h>
#include <net/activity_stats.h>
static DEFINE_SPINLOCK(uid_lock);
static LIST_HEAD(uid_list);
static struct proc_dir_entry *parent;
struct uid_stat {
struct list_head link;
uid_t uid;
atomic_t tcp_rcv;
atomic_t tcp_snd;
};
static struct uid_stat *find_uid_stat(uid_t uid) {
unsigned long flags;
struct uid_stat *entry;
spin_lock_irqsave(&uid_lock, flags);
list_for_each_entry(entry, &uid_list, link) {
if (entry->uid == uid) {
spin_unlock_irqrestore(&uid_lock, flags);
return entry;
}
}
spin_unlock_irqrestore(&uid_lock, flags);
return NULL;
}
static int tcp_snd_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
int len;
unsigned int bytes;
char *p = page;
struct uid_stat *uid_entry = (struct uid_stat *) data;
if (!data)
return 0;
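/* The counter started at INT_MIN (see create_stat()); re-adding INT_MIN
* wraps around and recovers the byte count as an unsigned value. */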
bytes = (unsigned int) (atomic_read(&uid_entry->tcp_snd) + INT_MIN);
p += sprintf(p, "%u\n", bytes);
len = (p - page) - off;
*eof = (len <= count) ? 1 : 0;
*start = page + off;
return len;
}
static int tcp_rcv_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
int len;
unsigned int bytes;
char *p = page;
struct uid_stat *uid_entry = (struct uid_stat *) data;
if (!data)
return 0;
bytes = (unsigned int) (atomic_read(&uid_entry->tcp_rcv) + INT_MIN);
p += sprintf(p, "%u\n", bytes);
len = (p - page) - off;
*eof = (len <= count) ? 1 : 0;
*start = page + off;
return len;
}
/* Create a new entry for tracking the specified uid. */
static struct uid_stat *create_stat(uid_t uid) {
unsigned long flags;
char uid_s[32];
struct uid_stat *new_uid;
struct proc_dir_entry *entry;
/* Create the uid stat struct and append it to the list. */
if ((new_uid = kmalloc(sizeof(struct uid_stat), GFP_KERNEL)) == NULL)
return NULL;
new_uid->uid = uid;
/* Counters start at INT_MIN, so we can track 4GB of network traffic. */
atomic_set(&new_uid->tcp_rcv, INT_MIN);
atomic_set(&new_uid->tcp_snd, INT_MIN);
spin_lock_irqsave(&uid_lock, flags);
list_add_tail(&new_uid->link, &uid_list);
spin_unlock_irqrestore(&uid_lock, flags);
sprintf(uid_s, "%d", uid);
entry = proc_mkdir(uid_s, parent);
/* Keep reference to uid_stat so we know what uid to read stats from. */
create_proc_read_entry("tcp_snd", S_IRUGO, entry , tcp_snd_read_proc,
(void *) new_uid);
create_proc_read_entry("tcp_rcv", S_IRUGO, entry, tcp_rcv_read_proc,
(void *) new_uid);
return new_uid;
}
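/*
* Accounting entry points, intended to be called from the network
* stack's per-UID hooks whenever TCP data is sent or received (the
* call sites live outside this file).
*/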
int uid_stat_tcp_snd(uid_t uid, int size) {
struct uid_stat *entry;
activity_stats_update();
if ((entry = find_uid_stat(uid)) == NULL &&
((entry = create_stat(uid)) == NULL)) {
return -1;
}
atomic_add(size, &entry->tcp_snd);
return 0;
}
int uid_stat_tcp_rcv(uid_t uid, int size) {
struct uid_stat *entry;
activity_stats_update();
if ((entry = find_uid_stat(uid)) == NULL &&
((entry = create_stat(uid)) == NULL)) {
return -1;
}
atomic_add(size, &entry->tcp_rcv);
return 0;
}
static int __init uid_stat_init(void)
{
parent = proc_mkdir("uid_stat", NULL);
if (!parent) {
pr_err("uid_stat: failed to create proc entry\n");
return -1;
}
return 0;
}
__initcall(uid_stat_init);
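The INT_MIN bias above deserves a worked example: starting each counter at INT_MIN yields a full 32-bit (4 GB) range before wraparound, and the read side recovers the running total by re-adding the bias modulo 2^32. A standalone sketch of the arithmetic (plain userspace C; unsigned types keep the wraparound well-defined):

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned int bias = (unsigned int) INT_MIN;	/* 0x80000000, as set in create_stat() */
	unsigned int counter = bias;			/* initial tcp_snd value */

	counter += 3000000000u;				/* ~3 GB accounted via uid_stat_tcp_snd() */

	/* The read side computes counter + INT_MIN; modulo 2^32 this
	 * recovers the true byte count. */
	printf("%u\n", counter + bias);			/* prints 3000000000 */
	return 0;
}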

drivers/misc/wl127x-rfkill.c Normal file, 121 lines

@@ -0,0 +1,121 @@
/*
* Bluetooth TI wl127x rfkill power control via GPIO
*
* Copyright (C) 2009 Motorola, Inc.
* Copyright (C) 2008 Texas Instruments
* Initial code: Pavan Savoy <pavan.savoy@gmail.com> (wl127x_power.c)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/gpio.h>
#include <linux/rfkill.h>
#include <linux/platform_device.h>
#include <linux/wl127x-rfkill.h>
static int wl127x_rfkill_set_power(void *data, enum rfkill_state state)
{
int nshutdown_gpio = (int) data;
switch (state) {
case RFKILL_STATE_UNBLOCKED:
gpio_set_value(nshutdown_gpio, 1);
break;
case RFKILL_STATE_SOFT_BLOCKED:
gpio_set_value(nshutdown_gpio, 0);
break;
default:
printk(KERN_ERR "invalid bluetooth rfkill state %d\n", state);
}
return 0;
}
static int wl127x_rfkill_probe(struct platform_device *pdev)
{
int rc = 0;
struct wl127x_rfkill_platform_data *pdata = pdev->dev.platform_data;
enum rfkill_state default_state = RFKILL_STATE_SOFT_BLOCKED; /* off */
rc = gpio_request(pdata->nshutdown_gpio, "wl127x_nshutdown_gpio");
if (unlikely(rc))
return rc;
rc = gpio_direction_output(pdata->nshutdown_gpio, 0);
if (unlikely(rc))
return rc;
rfkill_set_default(RFKILL_TYPE_BLUETOOTH, default_state);
wl127x_rfkill_set_power((void *) pdata->nshutdown_gpio, default_state);
pdata->rfkill = rfkill_allocate(&pdev->dev, RFKILL_TYPE_BLUETOOTH);
if (unlikely(!pdata->rfkill))
return -ENOMEM;
pdata->rfkill->name = "wl127x";
pdata->rfkill->state = default_state;
/* userspace cannot take exclusive control */
pdata->rfkill->user_claim_unsupported = 1;
pdata->rfkill->user_claim = 0;
pdata->rfkill->data = (void *) pdata->nshutdown_gpio;
pdata->rfkill->toggle_radio = wl127x_rfkill_set_power;
rc = rfkill_register(pdata->rfkill);
if (unlikely(rc)) {
rfkill_free(pdata->rfkill);
return rc;
}
return 0;
}
static int wl127x_rfkill_remove(struct platform_device *pdev)
{
struct wl127x_rfkill_platform_data *pdata = pdev->dev.platform_data;
rfkill_unregister(pdata->rfkill);
rfkill_free(pdata->rfkill);
gpio_free(pdata->nshutdown_gpio);
return 0;
}
static struct platform_driver wl127x_rfkill_platform_driver = {
.probe = wl127x_rfkill_probe,
.remove = wl127x_rfkill_remove,
.driver = {
.name = "wl127x-rfkill",
.owner = THIS_MODULE,
},
};
static int __init wl127x_rfkill_init(void)
{
return platform_driver_register(&wl127x_rfkill_platform_driver);
}
static void __exit wl127x_rfkill_exit(void)
{
platform_driver_unregister(&wl127x_rfkill_platform_driver);
}
module_init(wl127x_rfkill_init);
module_exit(wl127x_rfkill_exit);
MODULE_ALIAS("platform:wl127x");
MODULE_DESCRIPTION("wl127x-rfkill");
MODULE_AUTHOR("Motorola");
MODULE_LICENSE("GPL");
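A board file hands this driver its GPIO through platform data. A hedged sketch of the registration side; the GPIO number and init hook are hypothetical:

#include <linux/platform_device.h>
#include <linux/wl127x-rfkill.h>

static struct wl127x_rfkill_platform_data board_wl127x_pdata = {
	.nshutdown_gpio = 22,	/* hypothetical BT_nSHUTDOWN line */
};

static struct platform_device board_wl127x_device = {
	.name	= "wl127x-rfkill",
	.id	= -1,
	.dev	= {
		.platform_data = &board_wl127x_pdata,
	},
};

static int __init board_bt_init(void)
{
	return platform_device_register(&board_wl127x_device);
}
device_initcall(board_bt_init);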

drivers/mmc/card/Kconfig

@@ -50,6 +50,15 @@ config MMC_BLOCK_BOUNCE
If unsure, say Y here.
config MMC_BLOCK_DEFERRED_RESUME
bool "Deferr MMC layer resume until I/O is requested"
depends on MMC_BLOCK
default n
help
Say Y here to defer MMC resume until I/O is requested.
This reduces overall resume latency and saves power when
there's an SD card inserted but not being used.
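The consumer side of this option appears later in this diff, in the MMC block driver's request path; condensed, the pattern is:

#ifdef CONFIG_MMC_BLOCK_DEFERRED_RESUME
	/* First request after a deferred suspend: resume the bus on demand. */
	if (mmc_bus_needs_resume(card->host)) {
		mmc_resume_bus(card->host);
		mmc_blk_set_blksize(md, card);
	}
#endif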
config SDIO_UART
tristate "SDIO UART/GPS class support"
help

drivers/mmc/card/block.c

@@ -143,11 +143,7 @@ static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk)
static inline int mmc_get_devidx(struct gendisk *disk)
{
int devidx = disk->first_minor / perdev_minors;
return devidx;
}
@@ -660,18 +656,22 @@ static int mmc_blk_cmd_error(struct request *req, const char *name, int error,
req->rq_disk->disk_name, "timed out", name, status);
/* If the status cmd initially failed, retry the r/w cmd */
if (!status_valid) {
pr_err("%s: status not valid, retrying timeout\n", req->rq_disk->disk_name);
return ERR_RETRY;
}
/*
* If it was a r/w cmd crc error, or illegal command
* (eg, issued in wrong state) then retry - we should
* have corrected the state problem above.
*/
if (status & (R1_COM_CRC_ERROR | R1_ILLEGAL_COMMAND)) {
pr_err("%s: command error, retrying timeout\n", req->rq_disk->disk_name);
return ERR_RETRY;
}
/* Otherwise abort the command */
pr_err("%s: not retrying timeout\n", req->rq_disk->disk_name);
return ERR_ABORT;
default:
@@ -1411,6 +1411,13 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
struct mmc_blk_data *md = mq->data;
struct mmc_card *card = md->queue.card;
#ifdef CONFIG_MMC_BLOCK_DEFERRED_RESUME
if (mmc_bus_needs_resume(card->host)) {
mmc_resume_bus(card->host);
mmc_blk_set_blksize(md, card);
}
#endif
if (req && !mq->mqrq_prev->req)
/* claim host only for the first request */
mmc_claim_host(card->host);
@@ -1522,6 +1529,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
md->disk->queue = md->queue.queue;
md->disk->driverfs_dev = parent;
set_disk_ro(md->disk, md->read_only || default_ro);
md->disk->flags = GENHD_FL_EXT_DEVT;
/*
* As discussed on lkml, GENHD_FL_REMOVABLE should:
@@ -1796,6 +1804,9 @@ static int mmc_blk_probe(struct mmc_card *card)
mmc_set_drvdata(card, md);
mmc_fixup_device(card, blk_fixups);
#ifdef CONFIG_MMC_BLOCK_DEFERRED_RESUME
mmc_set_bus_resume_policy(card->host, 1);
#endif
if (mmc_add_disk(md))
goto out;
@@ -1821,6 +1832,9 @@ static void mmc_blk_remove(struct mmc_card *card)
mmc_release_host(card->host);
mmc_blk_remove_req(md);
mmc_set_drvdata(card, NULL);
#ifdef CONFIG_MMC_BLOCK_DEFERRED_RESUME
mmc_set_bus_resume_policy(card->host, 0);
#endif
}
#ifdef CONFIG_PM

drivers/mmc/core/Kconfig

@@ -27,3 +27,20 @@ config MMC_CLKGATE
support handling this in order for it to be of any use.
If unsure, say N.
config MMC_EMBEDDED_SDIO
boolean "MMC embedded SDIO device support (EXPERIMENTAL)"
depends on EXPERIMENTAL
help
If you say Y here, support will be added for embedded SDIO
devices which do not contain the necessary enumeration
support in hardware to be properly detected.
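The core-side half of this option, mmc_set_embedded_sdio_data(), is added later in this diff (drivers/mmc/core/core.c). A board file for a soldered-down SDIO device would pre-populate the descriptors at init time; a hedged sketch, where the wifi_* tables and their values are hypothetical and would come from the part's datasheet. Passing NULL for the cccr slot leaves CCCR to be read from the hardware as usual.

#include <linux/mmc/card.h>
#include <linux/mmc/host.h>
#include <linux/mmc/sdio_func.h>
#include <linux/mmc/sdio_ids.h>

static struct sdio_cis wifi_cis = {
	.vendor		= 0x104c,	/* hypothetical vendor/device IDs */
	.device		= 0x9066,
	.blksize	= 512,
	.max_dtr	= 25000000,
};

static struct sdio_embedded_func wifi_funcs[] = {
	{
		.f_class	= SDIO_CLASS_WLAN,
		.f_maxblksize	= 512,
	},
};

void board_setup_wifi(struct mmc_host *host)
{
	/* Hand the core canned CIS/function tables so it skips probing them. */
	mmc_set_embedded_sdio_data(host, &wifi_cis, NULL, wifi_funcs,
				   ARRAY_SIZE(wifi_funcs));
}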
config MMC_PARANOID_SD_INIT
bool "Enable paranoid SD card initialization (EXPERIMENTAL)"
depends on EXPERIMENTAL
help
If you say Y here, the MMC layer will be extra paranoid
about re-trying SD init requests. This can be a useful
work-around for buggy controllers and hardware. Enable
if you are experiencing issues with SD detection.

drivers/mmc/core/core.c

@@ -26,6 +26,7 @@
#include <linux/suspend.h>
#include <linux/fault-inject.h>
#include <linux/random.h>
#include <linux/wakelock.h>
#include <linux/mmc/card.h>
#include <linux/mmc/host.h>
@@ -1285,6 +1286,36 @@ static inline void mmc_bus_put(struct mmc_host *host)
spin_unlock_irqrestore(&host->lock, flags);
}
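/*
 * Complete a resume that was deferred at suspend time: clear the
 * NEEDS_RESUME flag, power the card back up, run the bus resume
 * handler, and re-run card detection. Exported so the block driver
 * can trigger it on the first I/O after a deferred suspend.
 */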
int mmc_resume_bus(struct mmc_host *host)
{
unsigned long flags;
if (!mmc_bus_needs_resume(host))
return -EINVAL;
printk("%s: Starting deferred resume\n", mmc_hostname(host));
spin_lock_irqsave(&host->lock, flags);
host->bus_resume_flags &= ~MMC_BUSRESUME_NEEDS_RESUME;
host->rescan_disable = 0;
spin_unlock_irqrestore(&host->lock, flags);
mmc_bus_get(host);
if (host->bus_ops && !host->bus_dead) {
mmc_power_up(host);
BUG_ON(!host->bus_ops->resume);
host->bus_ops->resume(host);
}
if (host->bus_ops->detect && !host->bus_dead)
host->bus_ops->detect(host);
mmc_bus_put(host);
printk("%s: Deferred resume completed\n", mmc_hostname(host));
return 0;
}
EXPORT_SYMBOL(mmc_resume_bus);
/*
* Assign a mmc bus handler to a host. Only one bus handler may control a
* host at any given time.
@@ -1350,6 +1381,8 @@ void mmc_detect_change(struct mmc_host *host, unsigned long delay)
spin_unlock_irqrestore(&host->lock, flags);
#endif
host->detect_change = 1;
wake_lock(&host->detect_wake_lock);
mmc_schedule_delayed_work(&host->detect, delay);
}
@@ -2009,6 +2042,7 @@ void mmc_rescan(struct work_struct *work)
struct mmc_host *host =
container_of(work, struct mmc_host, detect.work);
int i;
bool extend_wakelock = false;
if (host->rescan_disable)
return;
@@ -2025,6 +2059,12 @@ void mmc_rescan(struct work_struct *work)
host->detect_change = 0;
/* If the card was removed the bus will be marked
* as dead - extend the wakelock so userspace
* can respond */
if (host->bus_dead)
extend_wakelock = true;
/*
* Let mmc_bus_put() free the bus/bus_ops if we've found that
* the card is no longer present.
@@ -2049,16 +2089,24 @@ void mmc_rescan(struct work_struct *work)
mmc_claim_host(host);
for (i = 0; i < ARRAY_SIZE(freqs); i++) {
if (!mmc_rescan_try_freq(host, max(freqs[i], host->f_min))) {
extend_wakelock = true;
break;
}
if (freqs[i] <= host->f_min)
break;
}
mmc_release_host(host);
out:
if (extend_wakelock)
wake_lock_timeout(&host->detect_wake_lock, HZ / 2);
else
wake_unlock(&host->detect_wake_lock);
if (host->caps & MMC_CAP_NEEDS_POLL) {
wake_lock(&host->detect_wake_lock);
mmc_schedule_delayed_work(&host->detect, HZ);
}
}
void mmc_start_host(struct mmc_host *host)
@@ -2076,7 +2124,8 @@ void mmc_stop_host(struct mmc_host *host)
spin_unlock_irqrestore(&host->lock, flags);
#endif
if (cancel_delayed_work_sync(&host->detect))
wake_unlock(&host->detect_wake_lock);
mmc_flush_scheduled_work();
/* clear pm flags now and let card drivers set them as needed */
@@ -2272,7 +2321,11 @@ int mmc_suspend_host(struct mmc_host *host)
{
int err = 0;
if (mmc_bus_needs_resume(host))
return 0;
if (cancel_delayed_work(&host->detect))
wake_unlock(&host->detect_wake_lock);
mmc_flush_scheduled_work();
err = mmc_cache_ctrl(host, 0);
@@ -2322,6 +2375,12 @@ int mmc_resume_host(struct mmc_host *host)
int err = 0;
mmc_bus_get(host);
if (mmc_bus_manual_resume(host)) {
host->bus_resume_flags |= MMC_BUSRESUME_NEEDS_RESUME;
mmc_bus_put(host);
return 0;
}
if (host->bus_ops && !host->bus_dead) {
if (!mmc_card_keep_power(host)) {
mmc_power_up(host);
@@ -2372,10 +2431,15 @@ int mmc_pm_notify(struct notifier_block *notify_block,
case PM_SUSPEND_PREPARE:
spin_lock_irqsave(&host->lock, flags);
if (mmc_bus_needs_resume(host)) {
spin_unlock_irqrestore(&host->lock, flags);
break;
}
host->rescan_disable = 1;
host->power_notify_type = MMC_HOST_PW_NOTIFY_SHORT;
spin_unlock_irqrestore(&host->lock, flags);
if (cancel_delayed_work_sync(&host->detect))
wake_unlock(&host->detect_wake_lock);
if (!host->bus_ops || host->bus_ops->suspend)
break;
@@ -2396,6 +2460,10 @@ int mmc_pm_notify(struct notifier_block *notify_block,
case PM_POST_RESTORE:
spin_lock_irqsave(&host->lock, flags);
if (mmc_bus_manual_resume(host)) {
spin_unlock_irqrestore(&host->lock, flags);
break;
}
host->rescan_disable = 0;
host->power_notify_type = MMC_HOST_PW_NOTIFY_LONG;
spin_unlock_irqrestore(&host->lock, flags);
@@ -2407,6 +2475,22 @@ int mmc_pm_notify(struct notifier_block *notify_block,
}
#endif
#ifdef CONFIG_MMC_EMBEDDED_SDIO
void mmc_set_embedded_sdio_data(struct mmc_host *host,
struct sdio_cis *cis,
struct sdio_cccr *cccr,
struct sdio_embedded_func *funcs,
int num_funcs)
{
host->embedded_sdio_data.cis = cis;
host->embedded_sdio_data.cccr = cccr;
host->embedded_sdio_data.funcs = funcs;
host->embedded_sdio_data.num_funcs = num_funcs;
}
EXPORT_SYMBOL(mmc_set_embedded_sdio_data);
#endif
static int __init mmc_init(void)
{
int ret;

drivers/mmc/core/host.c

@@ -329,6 +329,8 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
spin_lock_init(&host->lock);
init_waitqueue_head(&host->wq);
wake_lock_init(&host->detect_wake_lock, WAKE_LOCK_SUSPEND,
kasprintf(GFP_KERNEL, "%s_detect", mmc_hostname(host)));
INIT_DELAYED_WORK(&host->detect, mmc_rescan);
#ifdef CONFIG_PM
host->pm_notify.notifier_call = mmc_pm_notify;
@@ -381,7 +383,8 @@ int mmc_add_host(struct mmc_host *host)
mmc_host_clk_sysfs_init(host);
mmc_start_host(host);
if (!(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
register_pm_notifier(&host->pm_notify);
return 0;
}
@@ -398,7 +401,9 @@ EXPORT_SYMBOL(mmc_add_host);
*/
void mmc_remove_host(struct mmc_host *host)
{
if (!(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
unregister_pm_notifier(&host->pm_notify);
mmc_stop_host(host);
#ifdef CONFIG_DEBUG_FS
@@ -425,6 +430,7 @@ void mmc_free_host(struct mmc_host *host)
spin_lock(&mmc_host_lock);
idr_remove(&mmc_host_idr, host->index);
spin_unlock(&mmc_host_lock);
wake_lock_destroy(&host->detect_wake_lock);
put_device(&host->class_dev);
}

drivers/mmc/core/sd.c

@@ -806,6 +806,9 @@ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
bool reinit)
{
int err;
#ifdef CONFIG_MMC_PARANOID_SD_INIT
int retries;
#endif
if (!reinit) {
/*
@@ -832,7 +835,26 @@ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
/*
* Fetch switch information from card.
*/
#ifdef CONFIG_MMC_PARANOID_SD_INIT
for (retries = 1; retries <= 3; retries++) {
err = mmc_read_switch(card);
if (!err) {
if (retries > 1) {
printk(KERN_WARNING
"%s: recovered\n",
mmc_hostname(host));
}
break;
} else {
printk(KERN_WARNING
"%s: read switch failed (attempt %d)\n",
mmc_hostname(host), retries);
}
}
#else
err = mmc_read_switch(card);
#endif
if (err)
return err;
}
@@ -1046,18 +1068,36 @@ static int mmc_sd_alive(struct mmc_host *host)
*/
static void mmc_sd_detect(struct mmc_host *host)
{
int err = 0;
#ifdef CONFIG_MMC_PARANOID_SD_INIT
int retries = 5;
#endif
BUG_ON(!host);
BUG_ON(!host->card);
mmc_claim_host(host);
/*
* Just check if our card has been removed.
*/
#ifdef CONFIG_MMC_PARANOID_SD_INIT
while (retries) {
err = mmc_send_status(host->card, NULL);
if (err) {
retries--;
udelay(5);
continue;
}
break;
}
if (!retries) {
printk(KERN_ERR "%s(%s): Unable to re-detect card (%d)\n",
__func__, mmc_hostname(host), err);
}
#else
err = _mmc_detect_card_removed(host);
#endif
mmc_release_host(host);
if (err) {
@@ -1096,12 +1136,31 @@ static int mmc_sd_suspend(struct mmc_host *host)
static int mmc_sd_resume(struct mmc_host *host)
{
int err;
#ifdef CONFIG_MMC_PARANOID_SD_INIT
int retries;
#endif
BUG_ON(!host);
BUG_ON(!host->card);
mmc_claim_host(host);
#ifdef CONFIG_MMC_PARANOID_SD_INIT
retries = 5;
while (retries) {
err = mmc_sd_init_card(host, host->ocr, host->card);
if (err) {
printk(KERN_ERR "%s: Re-init card rc = %d (retries = %d)\n",
mmc_hostname(host), err, retries);
mdelay(5);
retries--;
continue;
}
break;
}
#else
err = mmc_sd_init_card(host, host->ocr, host->card);
#endif
mmc_release_host(host);
return err;
@@ -1155,6 +1214,9 @@ int mmc_attach_sd(struct mmc_host *host)
{
int err;
u32 ocr;
#ifdef CONFIG_MMC_PARANOID_SD_INIT
int retries;
#endif
BUG_ON(!host);
WARN_ON(!host->claimed);
@@ -1217,9 +1279,27 @@ int mmc_attach_sd(struct mmc_host *host)
/*
* Detect and init the card.
*/
#ifdef CONFIG_MMC_PARANOID_SD_INIT
retries = 5;
while (retries) {
err = mmc_sd_init_card(host, host->ocr, NULL);
if (err) {
retries--;
continue;
}
break;
}
if (!retries) {
printk(KERN_ERR "%s: mmc_sd_init_card() failure (err = %d)\n",
mmc_hostname(host), err);
goto err;
}
#else
err = mmc_sd_init_card(host, host->ocr, NULL);
if (err)
goto err;
#endif
mmc_release_host(host);
err = mmc_add_card(host->card);

drivers/mmc/core/sdio.c

@@ -10,6 +10,7 @@
*/
#include <linux/err.h>
#include <linux/module.h>
#include <linux/pm_runtime.h>
#include <linux/mmc/host.h>
@@ -28,6 +29,10 @@
#include "sdio_ops.h"
#include "sdio_cis.h"
#ifdef CONFIG_MMC_EMBEDDED_SDIO
#include <linux/mmc/sdio_ids.h>
#endif
static int sdio_read_fbr(struct sdio_func *func)
{
int ret;
@@ -713,19 +718,35 @@ static int mmc_sdio_init_card(struct mmc_host *host, u32 ocr,
goto finish;
}
#ifdef CONFIG_MMC_EMBEDDED_SDIO
if (host->embedded_sdio_data.cccr)
memcpy(&card->cccr, host->embedded_sdio_data.cccr, sizeof(struct sdio_cccr));
else {
#endif
/*
* Read the common registers.
*/
err = sdio_read_cccr(card, ocr);
if (err)
goto remove;
#ifdef CONFIG_MMC_EMBEDDED_SDIO
}
#endif
#ifdef CONFIG_MMC_EMBEDDED_SDIO
if (host->embedded_sdio_data.cis)
memcpy(&card->cis, host->embedded_sdio_data.cis, sizeof(struct sdio_cis));
else {
#endif
/*
* Read the common CIS tuples.
*/
err = sdio_read_common_cis(card);
if (err)
goto remove;
#ifdef CONFIG_MMC_EMBEDDED_SDIO
}
#endif
if (oldcard) {
int same = (card->cis.vendor == oldcard->cis.vendor &&
@@ -1124,14 +1145,36 @@ int mmc_attach_sdio(struct mmc_host *host)
funcs = (ocr & 0x70000000) >> 28;
card->sdio_funcs = 0;
#ifdef CONFIG_MMC_EMBEDDED_SDIO
if (host->embedded_sdio_data.funcs)
card->sdio_funcs = funcs = host->embedded_sdio_data.num_funcs;
#endif
/*
* Initialize (but don't add) all present functions.
*/
for (i = 0; i < funcs; i++, card->sdio_funcs++) {
#ifdef CONFIG_MMC_EMBEDDED_SDIO
if (host->embedded_sdio_data.funcs) {
struct sdio_func *tmp;
tmp = sdio_alloc_func(host->card);
if (IS_ERR(tmp))
goto remove;
tmp->num = (i + 1);
card->sdio_func[i] = tmp;
tmp->class = host->embedded_sdio_data.funcs[i].f_class;
tmp->max_blksize = host->embedded_sdio_data.funcs[i].f_maxblksize;
tmp->vendor = card->cis.vendor;
tmp->device = card->cis.device;
} else {
#endif
err = sdio_init_func(host->card, i + 1);
if (err)
goto remove;
#ifdef CONFIG_MMC_EMBEDDED_SDIO
}
#endif
/*
* Enable Runtime PM for this func (if supported)
*/
@@ -1179,3 +1222,77 @@ err:
return err;
}
int sdio_reset_comm(struct mmc_card *card)
{
struct mmc_host *host = card->host;
u32 ocr;
int err;
printk("%s():\n", __func__);
mmc_claim_host(host);
mmc_go_idle(host);
mmc_set_clock(host, host->f_min);
err = mmc_send_io_op_cond(host, 0, &ocr);
if (err)
goto err;
host->ocr = mmc_select_voltage(host, ocr);
if (!host->ocr) {
err = -EINVAL;
goto err;
}
err = mmc_send_io_op_cond(host, host->ocr, &ocr);
if (err)
goto err;
if (mmc_host_is_spi(host)) {
err = mmc_spi_set_crc(host, use_spi_crc);
if (err)
goto err;
}
if (!mmc_host_is_spi(host)) {
err = mmc_send_relative_addr(host, &card->rca);
if (err)
goto err;
mmc_set_bus_mode(host, MMC_BUSMODE_PUSHPULL);
}
if (!mmc_host_is_spi(host)) {
err = mmc_select_card(card);
if (err)
goto err;
}
/*
* Switch to high-speed (if supported).
*/
err = sdio_enable_hs(card);
if (err > 0)
mmc_sd_go_highspeed(card);
else if (err)
goto err;
/*
* Change to the card's maximum speed.
*/
mmc_set_clock(host, mmc_sdio_get_max_clock(card));
err = sdio_enable_4bit_bus(card);
if (err > 0)
mmc_set_bus_width(host, MMC_BUS_WIDTH_4);
else if (err)
goto err;
mmc_release_host(host);
return 0;
err:
printk("%s: Error resetting SDIO communications (%d)\n",
mmc_hostname(host), err);
mmc_release_host(host);
return err;
}
EXPORT_SYMBOL(sdio_reset_comm);
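sdio_reset_comm() is exported for SDIO function drivers (typically WLAN chips) that reset their device out of band and then need the SDIO link re-negotiated without a full card-removal cycle. A hedged usage sketch; func stands for the caller's struct sdio_func pointer:

	/* After toggling the chip's external reset line: */
	err = sdio_reset_comm(func->card);
	if (err)
		dev_err(&func->dev, "failed to re-initialize SDIO link: %d\n", err);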

Some files were not shown because too many files have changed in this diff.