Revert "Revert "pio: Add pin devices, spi0 fix to device tree""
am: cd82bdd625
Change-Id: Iaa87f58aa31833528f289a97d630b825eed4976c
diff --git a/Documentation/scheduler/sched-tune.txt b/Documentation/scheduler/sched-tune.txt
new file mode 100644
index 0000000..9bd2231
--- /dev/null
+++ b/Documentation/scheduler/sched-tune.txt
@@ -0,0 +1,366 @@
+ Central, scheduler-driven, power-performance control
+ (EXPERIMENTAL)
+
+Abstract
+========
+
+The topic of a single, simple power-performance tunable that is wholly
+scheduler centric and has well-defined, predictable properties has come up
+on several occasions in the past [1,2]. With techniques such as
+scheduler-driven DVFS [3], we now have a good framework for implementing
+such a tunable.
+This document describes the overall ideas behind its design and implementation.
+
+
+Table of Contents
+=================
+
+1. Motivation
+2. Introduction
+3. Signal Boosting Strategy
+4. OPP selection using boosted CPU utilization
+5. Per task group boosting
+6. Questions and Answers
+ - What about "auto" mode?
+ - What about boosting on a congested system?
+   - How are multiple groups of tasks with different boost values managed?
+7. References
+
+
+1. Motivation
+=============
+
+Sched-DVFS [3] is a new event-driven cpufreq governor which allows the
+scheduler to select the optimal DVFS operating point (OPP) for running a task
+allocated to a CPU. The introduction of sched-DVFS enables running workloads
+at the most energy-efficient OPPs.
+
+However, it is sometimes desirable to intentionally boost the performance of
+a workload, even if that implies a reasonable increase in energy
+consumption. For example, in order to reduce the response time of a task, we
+may want to run the task at a higher OPP than the one actually required
+by its CPU bandwidth demand.
+
+This last requirement is especially important if we consider that one of the
+main goals of the sched-DVFS component is to replace all currently available
+CPUFreq policies. Since sched-DVFS is event-based, as opposed to the
+sampling-driven governors we currently have, it is already more responsive at
+selecting the optimal OPP to run tasks allocated to a CPU. However, just
+tracking the
+actual task load demand may not be enough from a performance standpoint. For
+example, it is not possible to get behaviors similar to those provided by the
+"performance" and "interactive" CPUFreq governors.
+
+This document describes an implementation of a tunable, stacked on top of
+sched-DVFS, which extends its functionality to support task performance
+boosting.
+
+By "performance boosting" we mean the reduction of the time required to
+complete a task activation, i.e. the time elapsed from a task wakeup to its
+next deactivation (e.g. because it goes back to sleep or it terminates). For
+example, if we consider a simple periodic task which executes the same workload
+for 5[s] every 20[s] while running at a certain OPP, a boosted execution of
+that task must complete each of its activations in less than 5[s].
+
+A previous attempt [5] to introduce such a boosting feature has not been
+successful mainly because of the complexity of the proposed solution. The
+approach described in this document exposes a single simple interface to
+user-space. This single tunable knob allows the tuning of system wide
+scheduler behaviours ranging from energy efficiency at one end through to
+incremental performance boosting at the other end. This first tunable affects
+all tasks. However, a more advanced extension of the concept is also provided
+which uses CGroups to boost the performance of only selected tasks while using
+the energy efficient default for all others.
+
+The rest of this document introduces in more detail the proposed solution,
+which has been named SchedTune.
+
+
+2. Introduction
+===============
+
+SchedTune exposes a simple user-space interface with a single power-performance
+tunable:
+
+ /proc/sys/kernel/sched_cfs_boost
+
+This permits expressing a boost value as an integer in the range [0..100].
+
+A value of 0 (default) configures the CFS scheduler for maximum energy
+efficiency. This means that sched-DVFS runs the tasks at the minimum OPP
+required to satisfy their workload demand.
+A value of 100 configures the scheduler for maximum performance, which
+translates to the selection of the maximum OPP on that CPU.
+
+Values between 0 and 100 can be used to suit other scenarios, for example to
+satisfy interactive response requirements, or to adapt to other system events
+(battery level, etc.).
+
+A CGroup based extension is also provided, which permits further user-space
+defined task classification to tune the scheduler for different goals depending
+on the specific nature of the task, e.g. background vs interactive vs
+low-priority.
+
+The overall design of the SchedTune module is built on top of "Per-Entity Load
+Tracking" (PELT) signals and sched-DVFS by introducing a bias on the Operating
+Performance Point (OPP) selection.
+Each time a task is allocated on a CPU, sched-DVFS has the opportunity to tune
+the operating frequency of that CPU to better match the workload demand. The
+selection of the actual OPP being activated is influenced by the global boost
+value, or the boost value for the task CGroup when in use.
+
+This simple biasing approach leverages existing frameworks, which means
+minimal modifications to the scheduler, and yet it makes it possible to
+achieve a range of different behaviours, all from a single simple tunable
+knob.
+The only new concept introduced is that of signal boosting.
+
+
+3. Signal Boosting Strategy
+===========================
+
+The whole PELT machinery works based on the value of a few load tracking
+signals which basically track the CPU bandwidth requirements of tasks and the
+capacity of CPUs. The basic idea behind the SchedTune knob is to artificially
+inflate some of these load tracking signals to make a task or RQ appear more
+demanding than it actually is.
+
+Which signals have to be inflated depends on the specific "consumer". However,
+independently from the specific (signal, consumer) pair, it is important to
+define a simple and possibly consistent strategy for the concept of boosting a
+signal.
+
+A boosting strategy defines how the "abstract" user-space defined
+sched_cfs_boost value is translated into an internal "margin" value to be added
+to a signal to get its inflated value:
+
+ margin := boosting_strategy(sched_cfs_boost, signal)
+ boosted_signal := signal + margin
+
+Different boosting strategies were identified and analyzed before selecting the
+one found to be most effective.
+
+Signal Proportional Compensation (SPC)
+--------------------------------------
+
+In this boosting strategy the sched_cfs_boost value is used to compute a
+margin which is proportional to the complement of the original signal.
+When a signal has a maximum possible value, its complement is defined as
+the delta between its actual value and that maximum.
+
+Since the tunable implementation uses signals which have SCHED_LOAD_SCALE as
+the maximum possible value, the margin becomes:
+
+ margin := sched_cfs_boost * (SCHED_LOAD_SCALE - signal)
+
+Using this boosting strategy:
+- a 100% sched_cfs_boost means that the signal is scaled to its maximum value
+- each intermediate value of sched_cfs_boost inflates the signal in question
+  by a quantity proportional to both the boost value and the signal's headroom
+  below the maximum.
+
+For example, by applying the SPC boosting strategy to the selection of the OPP
+to run a task it is possible to achieve these behaviors:
+
+- 0% boosting: run the task at the minimum OPP required by its workload
+- 100% boosting: run the task at the maximum OPP available for the CPU
+- 50% boosting: run at the half-way OPP between minimum and maximum
+
+This means that, at 50% boosting, a task will be scheduled to run at the OPP
+halfway between the one its workload requires and the maximum available on
+the specific target platform.
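The SPC margin computation above can be sketched in plain C. This is an illustrative model only, not the kernel implementation: `schedtune_margin()` and `boosted_signal()` are hypothetical names, and SCHED_LOAD_SCALE is assumed to be 1024 as in kernels of this era.

```c
/*
 * Illustrative sketch of Signal Proportional Compensation (SPC).
 * Assumptions: SCHED_LOAD_SCALE is 1024 (as in kernels of this era);
 * schedtune_margin() and boosted_signal() are hypothetical names,
 * not kernel symbols.
 */
#define SCHED_LOAD_SCALE 1024UL

/* The margin is the boost percentage applied to the headroom between
 * the signal and its maximum possible value. */
static unsigned long schedtune_margin(unsigned long signal, unsigned int boost)
{
	unsigned long margin = SCHED_LOAD_SCALE - signal;

	margin *= boost;	/* boost is a percentage in [0..100] */
	margin /= 100;
	return margin;
}

static unsigned long boosted_signal(unsigned long signal, unsigned int boost)
{
	return signal + schedtune_margin(signal, boost);
}
```

With a signal of 400, a 50% boost yields 400 + (1024 - 400) / 2 = 712, midway to the upper bound, while a 100% boost saturates the signal at 1024.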
+
+A graphical representation of an SPC boosted signal is shown in the
+following figure, where:
+ a) "-" represents the original signal
+ b) "b" represents a 50% boosted signal
+ c) "p" represents a 100% boosted signal
+
+
+ ^
+ | SCHED_LOAD_SCALE
+ +-----------------------------------------------------------------+
+ |pppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
+ |
+ | boosted_signal
+ | bbbbbbbbbbbbbbbbbbbbbbbb
+ |
+ | original signal
+ | bbbbbbbbbbbbbbbbbbbbbbbb+----------------------+
+ | |
+ |bbbbbbbbbbbbbbbbbb |
+ | |
+ | |
+ | |
+ | +-----------------------+
+ | |
+ | |
+ | |
+ |------------------+
+ |
+ |
+ +----------------------------------------------------------------------->
+
+The plot above shows a ramped load signal (titled 'original signal') and its
+boosted equivalent. For each step of the original signal, the boosted signal
+corresponding to a 50% boost lies midway between the original signal and the
+upper bound. Boosting by 100% generates a boosted signal which is always
+saturated to the upper bound.
+
+
+4. OPP selection using boosted CPU utilization
+==============================================
+
+It is worth calling out that the implementation does not introduce any new
+load signals. Instead, it provides an API to tune existing signals. This
+tuning is done on demand and only in scheduler code paths where it is sensible
+to do so. The new API calls are defined to return either the default signal or
+a boosted one, depending on the value of sched_cfs_boost. This is a clean and
+non-invasive modification of the existing code paths.
+
+The signal representing a CPU's utilization is boosted according to the
+previously described SPC boosting strategy. To sched-DVFS, this allows a CPU
+(i.e. a CFS run-queue) to appear more utilized than it actually is.
+
+Thus, with sched_cfs_boost enabled, we have the following main functions to
+get the current utilization of a CPU:
+
+ cpu_util()
+ boosted_cpu_util()
+
+The new boosted_cpu_util() is similar to the former, but returns a boosted
+utilization signal which is a function of the sched_cfs_boost value.
+
+This function is used in the CFS scheduler code paths where sched-DVFS needs to
+decide the OPP to run a CPU at.
+For example, this allows selecting the highest OPP for a CPU which has
+the boost value set to 100%.
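As a rough sketch (not the kernel code), boosted_cpu_util() can be modeled as cpu_util() plus the SPC margin. Here cpu_util() is stubbed with a fixed value and sched_cfs_boost is a plain variable; in the real driver, utilization comes from the per-CPU CFS run-queue and the boost from the tunable.

```c
#define SCHED_LOAD_SCALE 1024UL

/* Stand-ins for kernel state: in the real driver, the utilization
 * comes from the per-CPU CFS run-queue and the boost value from the
 * /proc/sys/kernel/sched_cfs_boost tunable. */
static unsigned long cpu_util_stub = 300;
static unsigned int sched_cfs_boost = 50;

static unsigned long cpu_util(void)
{
	return cpu_util_stub;
}

/* Same signal, inflated by the SPC margin, so that sched-DVFS selects
 * a proportionally higher OPP for a boosted CPU. */
static unsigned long boosted_cpu_util(void)
{
	unsigned long util = cpu_util();
	unsigned long margin = (SCHED_LOAD_SCALE - util) * sched_cfs_boost / 100;

	return util + margin;
}
```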
+
+
+5. Per task group boosting
+==========================
+
+The availability of a single knob which boosts all tasks in the system is
+certainly a simple solution, but it quite likely does not fit many utilization
+scenarios, especially in the mobile device space.
+
+For example, on battery powered devices there usually are many background
+services which are long running and need energy efficient scheduling. On the
+other hand, some applications are more performance sensitive and require an
+interactive response and/or maximum performance, regardless of the energy cost.
+To better service such scenarios, the SchedTune implementation has an extension
+that provides a more fine grained boosting interface.
+
+A new CGroup controller, namely "schedtune", can be enabled, which allows
+task groups with different boost values to be defined and configured.
+Tasks that require special performance can be put into separate CGroups.
+The value of the boost associated with the tasks in this group can be specified
+using a single knob exposed by the CGroup controller:
+
+ schedtune.boost
+
+This knob allows the definition of a boost value that is to be used for
+SPC boosting of all tasks attached to this group.
+
+The current schedtune controller implementation is really simple and has these
+main characteristics:
+
+ 1) It is only possible to create hierarchies that are one level deep
+
+ The root control group defines the system-wide boost value to be applied
+ by default to all tasks. Its direct subgroups are named "boost groups" and
+ they define the boost value for a specific set of tasks.
+ Further nested subgroups are not allowed since they do not have a sensible
+ meaning from a user-space standpoint.
+
+ 2) It is possible to define only a limited number of "boost groups"
+
+ This number is defined at compile time and by default configured to 16.
+ This is a design decision motivated by two main reasons:
+ a) In a real system we do not expect utilization scenarios with more than
+ a few boost groups. For example, a reasonable collection of groups could
+ be just "background", "interactive" and "performance".
+ b) It simplifies the implementation considerably, especially for the code
+ which has to compute the per CPU boosting once there are multiple
+ RUNNABLE tasks with different boost values.
+
+Such a simple design should allow servicing the main utilization scenarios
+identified so far. It provides a simple interface which can be used to manage
+the power-performance of all tasks or only selected tasks.
+Moreover, this interface can be easily integrated by user-space run-times (e.g.
+Android, ChromeOS) to implement a QoS solution for task boosting based on tasks
+classification, which has been a long standing requirement.
+
+Setup and usage
+---------------
+
+0. Use a kernel with CGROUP_SCHEDTUNE support enabled
+
+1. Check that the "schedtune" CGroup controller is available:
+
+ root@linaro-nano:~# cat /proc/cgroups
+ #subsys_name hierarchy num_cgroups enabled
+ cpuset 0 1 1
+ cpu 0 1 1
+ schedtune 0 1 1
+
+2. Mount a tmpfs to create the CGroups mount point (Optional)
+
+ root@linaro-nano:~# sudo mount -t tmpfs cgroups /sys/fs/cgroup
+
+3. Mount the "schedtune" controller
+
+ root@linaro-nano:~# mkdir /sys/fs/cgroup/stune
+ root@linaro-nano:~# sudo mount -t cgroup -o schedtune stune /sys/fs/cgroup/stune
+
+4. Setup the system-wide boost value (Optional)
+
+   If not configured, the root control group has a 0% boost value, which
+   basically disables boosting for all tasks in the system, thus running them
+   in an energy-efficient mode.
+
+ root@linaro-nano:~# echo $SYSBOOST > /sys/fs/cgroup/stune/schedtune.boost
+
+5. Create task groups and configure their specific boost value (Optional)
+
+   For example, here we create a "performance" boost group configured to
+   boost all its tasks to 100%:
+
+ root@linaro-nano:~# mkdir /sys/fs/cgroup/stune/performance
+ root@linaro-nano:~# echo 100 > /sys/fs/cgroup/stune/performance/schedtune.boost
+
+6. Move tasks into the boost group
+
+   For example, the following moves the task with PID $TASKPID (and all its
+   threads) into the "performance" boost group.
+
+   root@linaro-nano:~# echo $TASKPID > /sys/fs/cgroup/stune/performance/cgroup.procs
+
+This simple configuration allows only the threads of the $TASKPID task to
+run, when needed, at the highest OPP on the most capable CPU of the system.
+
+
+6. Questions and Answers
+=======================
+
+What about "auto" mode?
+-----------------------
+
+The 'auto' mode as described in [5] can be implemented by interfacing SchedTune
+with some suitable user-space element. This element could use the exposed
+system-wide or cgroup-based interface.
+
+How are multiple groups of tasks with different boost values managed?
+---------------------------------------------------------------------
+
+The current SchedTune implementation keeps track of the boosted RUNNABLE tasks
+on a CPU. Once sched-DVFS selects the OPP to run a CPU at, the CPU utilization
+is boosted with a value which is the maximum of the boost values of the
+currently RUNNABLE tasks in its RQ.
+
+This allows sched-DVFS to boost a CPU only while there are boosted tasks
+ready to run, and to switch back to energy-efficient mode as soon as the last
+boosted task is dequeued.
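The max-aggregation described above can be sketched as follows. The structure and names are illustrative, not the kernel's; the real implementation tracks RUNNABLE task counts per boost group on each CPU's RQ.

```c
/*
 * Sketch of per-CPU boost aggregation: the effective boost is the
 * maximum boost among groups that currently have RUNNABLE tasks on
 * the CPU. Names and layout are illustrative, not the kernel's.
 */
struct boost_group {
	unsigned int boost;	/* boost value configured for this group */
	int tasks;		/* RUNNABLE tasks of this group on this CPU */
};

static unsigned int schedtune_cpu_boost(const struct boost_group *bg,
					int ngroups)
{
	unsigned int boost_max = 0;
	int i;

	for (i = 0; i < ngroups; i++) {
		if (bg[i].tasks > 0 && bg[i].boost > boost_max)
			boost_max = bg[i].boost;
	}
	return boost_max;	/* 0 when no boosted task is runnable */
}
```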
+
+
+7. References
+=============
+[1] http://lwn.net/Articles/552889
+[2] http://lkml.org/lkml/2012/5/18/91
+[3] http://lkml.org/lkml/2015/6/26/620
diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig
index bdfc6c6..a82fc02 100644
--- a/drivers/android/Kconfig
+++ b/drivers/android/Kconfig
@@ -19,6 +19,18 @@
Android process, using Binder to identify, invoke and pass arguments
between said processes.
+config ANDROID_BINDER_DEVICES
+ string "Android Binder devices"
+ depends on ANDROID_BINDER_IPC
+ default "binder"
+ ---help---
+ Default value for the binder.devices parameter.
+
+ The binder.devices parameter is a comma-separated list of strings
+ that specifies the names of the binder device nodes that will be
+ created. Each binder device has its own context manager, and is
+ therefore logically separated from the other devices.
+
config ANDROID_BINDER_IPC_32BIT
bool
depends on !64BIT && ANDROID_BINDER_IPC
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 57f52a2..1e0abd8 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -50,14 +50,13 @@
static DEFINE_MUTEX(binder_deferred_lock);
static DEFINE_MUTEX(binder_mmap_lock);
+static HLIST_HEAD(binder_devices);
static HLIST_HEAD(binder_procs);
static HLIST_HEAD(binder_deferred_list);
static HLIST_HEAD(binder_dead_nodes);
static struct dentry *binder_debugfs_dir_entry_root;
static struct dentry *binder_debugfs_dir_entry_proc;
-static struct binder_node *binder_context_mgr_node;
-static kuid_t binder_context_mgr_uid = INVALID_UID;
static int binder_last_id;
static struct workqueue_struct *binder_deferred_workqueue;
@@ -116,6 +115,9 @@
static bool binder_debug_no_lock;
module_param_named(proc_no_lock, binder_debug_no_lock, bool, S_IWUSR | S_IRUGO);
+static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
+module_param_named(devices, binder_devices_param, charp, S_IRUGO);
+
static DECLARE_WAIT_QUEUE_HEAD(binder_user_error_wait);
static int binder_stop_on_user_error;
@@ -146,6 +148,17 @@
binder_stop_on_user_error = 2; \
} while (0)
+#define to_flat_binder_object(hdr) \
+ container_of(hdr, struct flat_binder_object, hdr)
+
+#define to_binder_fd_object(hdr) container_of(hdr, struct binder_fd_object, hdr)
+
+#define to_binder_buffer_object(hdr) \
+ container_of(hdr, struct binder_buffer_object, hdr)
+
+#define to_binder_fd_array_object(hdr) \
+ container_of(hdr, struct binder_fd_array_object, hdr)
+
enum binder_stat_types {
BINDER_STAT_PROC,
BINDER_STAT_THREAD,
@@ -159,7 +172,7 @@
struct binder_stats {
int br[_IOC_NR(BR_FAILED_REPLY) + 1];
- int bc[_IOC_NR(BC_DEAD_BINDER_DONE) + 1];
+ int bc[_IOC_NR(BC_REPLY_SG) + 1];
int obj_created[BINDER_STAT_COUNT];
int obj_deleted[BINDER_STAT_COUNT];
};
@@ -187,6 +200,7 @@
int to_node;
int data_size;
int offsets_size;
+ const char *context_name;
};
struct binder_transaction_log {
int next;
@@ -211,6 +225,18 @@
return e;
}
+struct binder_context {
+ struct binder_node *binder_context_mgr_node;
+ kuid_t binder_context_mgr_uid;
+ const char *name;
+};
+
+struct binder_device {
+ struct hlist_node hlist;
+ struct miscdevice miscdev;
+ struct binder_context context;
+};
+
struct binder_work {
struct list_head entry;
enum {
@@ -283,6 +309,7 @@
struct binder_node *target_node;
size_t data_size;
size_t offsets_size;
+ size_t extra_buffers_size;
uint8_t data[0];
};
@@ -326,6 +353,7 @@
int ready_threads;
long default_priority;
struct dentry *debugfs_entry;
+ struct binder_context *context;
};
enum {
@@ -649,7 +677,9 @@
static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
size_t data_size,
- size_t offsets_size, int is_async)
+ size_t offsets_size,
+ size_t extra_buffers_size,
+ int is_async)
{
struct rb_node *n = proc->free_buffers.rb_node;
struct binder_buffer *buffer;
@@ -657,7 +687,7 @@
struct rb_node *best_fit = NULL;
void *has_page_addr;
void *end_page_addr;
- size_t size;
+ size_t size, data_offsets_size;
if (proc->vma == NULL) {
pr_err("%d: binder_alloc_buf, no vma\n",
@@ -665,15 +695,20 @@
return NULL;
}
- size = ALIGN(data_size, sizeof(void *)) +
+ data_offsets_size = ALIGN(data_size, sizeof(void *)) +
ALIGN(offsets_size, sizeof(void *));
- if (size < data_size || size < offsets_size) {
+ if (data_offsets_size < data_size || data_offsets_size < offsets_size) {
binder_user_error("%d: got transaction with invalid size %zd-%zd\n",
proc->pid, data_size, offsets_size);
return NULL;
}
-
+ size = data_offsets_size + ALIGN(extra_buffers_size, sizeof(void *));
+ if (size < data_offsets_size || size < extra_buffers_size) {
+ binder_user_error("%d: got transaction with invalid extra_buffers_size %zd\n",
+ proc->pid, extra_buffers_size);
+ return NULL;
+ }
if (is_async &&
proc->free_async_space < size + sizeof(struct binder_buffer)) {
binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
@@ -742,6 +777,7 @@
proc->pid, size, buffer);
buffer->data_size = data_size;
buffer->offsets_size = offsets_size;
+ buffer->extra_buffers_size = extra_buffers_size;
buffer->async_transaction = is_async;
if (is_async) {
proc->free_async_space -= size + sizeof(struct binder_buffer);
@@ -816,7 +852,8 @@
buffer_size = binder_buffer_size(proc, buffer);
size = ALIGN(buffer->data_size, sizeof(void *)) +
- ALIGN(buffer->offsets_size, sizeof(void *));
+ ALIGN(buffer->offsets_size, sizeof(void *)) +
+ ALIGN(buffer->extra_buffers_size, sizeof(void *));
binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
"%d: binder_free_buf %p size %zd buffer_size %zd\n",
@@ -930,8 +967,10 @@
if (internal) {
if (target_list == NULL &&
node->internal_strong_refs == 0 &&
- !(node == binder_context_mgr_node &&
- node->has_strong_ref)) {
+ !(node->proc &&
+ node == node->proc->context->
+ binder_context_mgr_node &&
+ node->has_strong_ref)) {
pr_err("invalid inc strong node for %d\n",
node->debug_id);
return -EINVAL;
@@ -1003,7 +1042,7 @@
static struct binder_ref *binder_get_ref(struct binder_proc *proc,
- uint32_t desc)
+ uint32_t desc, bool need_strong_ref)
{
struct rb_node *n = proc->refs_by_desc.rb_node;
struct binder_ref *ref;
@@ -1011,12 +1050,16 @@
while (n) {
ref = rb_entry(n, struct binder_ref, rb_node_desc);
- if (desc < ref->desc)
+ if (desc < ref->desc) {
n = n->rb_left;
- else if (desc > ref->desc)
+ } else if (desc > ref->desc) {
n = n->rb_right;
- else
+ } else if (need_strong_ref && !ref->strong) {
+ binder_user_error("tried to use weak ref as strong ref\n");
+ return NULL;
+ } else {
return ref;
+ }
}
return NULL;
}
@@ -1028,6 +1071,7 @@
struct rb_node **p = &proc->refs_by_node.rb_node;
struct rb_node *parent = NULL;
struct binder_ref *ref, *new_ref;
+ struct binder_context *context = proc->context;
while (*p) {
parent = *p;
@@ -1050,7 +1094,7 @@
rb_link_node(&new_ref->rb_node_node, parent, p);
rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);
- new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
+ new_ref->desc = (node == context->binder_context_mgr_node) ? 0 : 1;
for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
ref = rb_entry(n, struct binder_ref, rb_node_desc);
if (ref->desc > new_ref->desc)
@@ -1237,11 +1281,158 @@
}
}
+/**
+ * binder_validate_object() - checks for a valid metadata object in a buffer.
+ * @buffer: binder_buffer that we're parsing.
+ * @offset: offset in the buffer at which to validate an object.
+ *
+ * Return: If there's a valid metadata object at @offset in @buffer, the
+ * size of that object. Otherwise, it returns zero.
+ */
+static size_t binder_validate_object(struct binder_buffer *buffer, u64 offset)
+{
+ /* Check if we can read a header first */
+ struct binder_object_header *hdr;
+ size_t object_size = 0;
+
+ if (offset > buffer->data_size - sizeof(*hdr) ||
+ buffer->data_size < sizeof(*hdr) ||
+ !IS_ALIGNED(offset, sizeof(u32)))
+ return 0;
+
+ /* Ok, now see if we can read a complete object. */
+ hdr = (struct binder_object_header *)(buffer->data + offset);
+ switch (hdr->type) {
+ case BINDER_TYPE_BINDER:
+ case BINDER_TYPE_WEAK_BINDER:
+ case BINDER_TYPE_HANDLE:
+ case BINDER_TYPE_WEAK_HANDLE:
+ object_size = sizeof(struct flat_binder_object);
+ break;
+ case BINDER_TYPE_FD:
+ object_size = sizeof(struct binder_fd_object);
+ break;
+ case BINDER_TYPE_PTR:
+ object_size = sizeof(struct binder_buffer_object);
+ break;
+ case BINDER_TYPE_FDA:
+ object_size = sizeof(struct binder_fd_array_object);
+ break;
+ default:
+ return 0;
+ }
+ if (offset <= buffer->data_size - object_size &&
+ buffer->data_size >= object_size)
+ return object_size;
+ else
+ return 0;
+}
+
+/**
+ * binder_validate_ptr() - validates binder_buffer_object in a binder_buffer.
+ * @b: binder_buffer containing the object
+ * @index: index in offset array at which the binder_buffer_object is
+ * located
+ * @start: points to the start of the offset array
+ * @num_valid: the number of valid offsets in the offset array
+ *
+ * Return: If @index is within the valid range of the offset array
+ * described by @start and @num_valid, and if there's a valid
+ * binder_buffer_object at the offset found in index @index
+ * of the offset array, that object is returned. Otherwise,
+ * %NULL is returned.
+ * Note that the offset found in index @index itself is not
+ * verified; this function assumes that @num_valid elements
+ * from @start were previously verified to have valid offsets.
+ */
+static struct binder_buffer_object *binder_validate_ptr(struct binder_buffer *b,
+ binder_size_t index,
+ binder_size_t *start,
+ binder_size_t num_valid)
+{
+ struct binder_buffer_object *buffer_obj;
+ binder_size_t *offp;
+
+ if (index >= num_valid)
+ return NULL;
+
+ offp = start + index;
+ buffer_obj = (struct binder_buffer_object *)(b->data + *offp);
+ if (buffer_obj->hdr.type != BINDER_TYPE_PTR)
+ return NULL;
+
+ return buffer_obj;
+}
+
+/**
+ * binder_validate_fixup() - validates pointer/fd fixups happen in order.
+ * @b: transaction buffer
+ * @objects_start start of objects buffer
+ * @buffer: binder_buffer_object in which to fix up
+ * @offset: start offset in @buffer to fix up
+ * @last_obj: last binder_buffer_object that we fixed up in
+ * @last_min_offset: minimum fixup offset in @last_obj
+ *
+ * Return: %true if a fixup in buffer @buffer at offset @offset is
+ * allowed.
+ *
+ * For safety reasons, we only allow fixups inside a buffer to happen
+ * at increasing offsets; additionally, we only allow fixup on the last
+ * buffer object that was verified, or one of its parents.
+ *
+ * Example of what is allowed:
+ *
+ * A
+ * B (parent = A, offset = 0)
+ * C (parent = A, offset = 16)
+ * D (parent = C, offset = 0)
+ * E (parent = A, offset = 32) // min_offset is 16 (C.parent_offset)
+ *
+ * Examples of what is not allowed:
+ *
+ * Decreasing offsets within the same parent:
+ * A
+ * C (parent = A, offset = 16)
+ * B (parent = A, offset = 0) // decreasing offset within A
+ *
+ * Referring to a parent that wasn't the last object or any of its parents:
+ * A
+ * B (parent = A, offset = 0)
+ * C (parent = A, offset = 0)
+ * C (parent = A, offset = 16)
+ * D (parent = B, offset = 0) // B is not A or any of A's parents
+ */
+static bool binder_validate_fixup(struct binder_buffer *b,
+ binder_size_t *objects_start,
+ struct binder_buffer_object *buffer,
+ binder_size_t fixup_offset,
+ struct binder_buffer_object *last_obj,
+ binder_size_t last_min_offset)
+{
+ if (!last_obj) {
+ /* Nothing to fix up in */
+ return false;
+ }
+
+ while (last_obj != buffer) {
+ /*
+ * Safe to retrieve the parent of last_obj, since it
+ * was already previously verified by the driver.
+ */
+ if ((last_obj->flags & BINDER_BUFFER_FLAG_HAS_PARENT) == 0)
+ return false;
+ last_min_offset = last_obj->parent_offset + sizeof(uintptr_t);
+ last_obj = (struct binder_buffer_object *)
+ (b->data + *(objects_start + last_obj->parent));
+ }
+ return (fixup_offset >= last_min_offset);
+}
+
static void binder_transaction_buffer_release(struct binder_proc *proc,
struct binder_buffer *buffer,
binder_size_t *failed_at)
{
- binder_size_t *offp, *off_end;
+ binder_size_t *offp, *off_start, *off_end;
int debug_id = buffer->debug_id;
binder_debug(BINDER_DEBUG_TRANSACTION,
@@ -1252,28 +1443,30 @@
if (buffer->target_node)
binder_dec_node(buffer->target_node, 1, 0);
- offp = (binder_size_t *)(buffer->data +
- ALIGN(buffer->data_size, sizeof(void *)));
+ off_start = (binder_size_t *)(buffer->data +
+ ALIGN(buffer->data_size, sizeof(void *)));
if (failed_at)
off_end = failed_at;
else
- off_end = (void *)offp + buffer->offsets_size;
- for (; offp < off_end; offp++) {
- struct flat_binder_object *fp;
+ off_end = (void *)off_start + buffer->offsets_size;
+ for (offp = off_start; offp < off_end; offp++) {
+ struct binder_object_header *hdr;
+ size_t object_size = binder_validate_object(buffer, *offp);
- if (*offp > buffer->data_size - sizeof(*fp) ||
- buffer->data_size < sizeof(*fp) ||
- !IS_ALIGNED(*offp, sizeof(u32))) {
- pr_err("transaction release %d bad offset %lld, size %zd\n",
+ if (object_size == 0) {
+ pr_err("transaction release %d bad object at offset %lld, size %zd\n",
debug_id, (u64)*offp, buffer->data_size);
continue;
}
- fp = (struct flat_binder_object *)(buffer->data + *offp);
- switch (fp->type) {
+ hdr = (struct binder_object_header *)(buffer->data + *offp);
+ switch (hdr->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
- struct binder_node *node = binder_get_node(proc, fp->binder);
+ struct flat_binder_object *fp;
+ struct binder_node *node;
+ fp = to_flat_binder_object(hdr);
+ node = binder_get_node(proc, fp->binder);
if (node == NULL) {
pr_err("transaction release %d bad node %016llx\n",
debug_id, (u64)fp->binder);
@@ -1282,12 +1475,17 @@
binder_debug(BINDER_DEBUG_TRANSACTION,
" node %d u%016llx\n",
node->debug_id, (u64)node->ptr);
- binder_dec_node(node, fp->type == BINDER_TYPE_BINDER, 0);
+ binder_dec_node(node, hdr->type == BINDER_TYPE_BINDER,
+ 0);
} break;
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
- struct binder_ref *ref = binder_get_ref(proc, fp->handle);
+ struct flat_binder_object *fp;
+ struct binder_ref *ref;
+ fp = to_flat_binder_object(hdr);
+ ref = binder_get_ref(proc, fp->handle,
+ hdr->type == BINDER_TYPE_HANDLE);
if (ref == NULL) {
pr_err("transaction release %d bad handle %d\n",
debug_id, fp->handle);
@@ -1296,32 +1494,348 @@
binder_debug(BINDER_DEBUG_TRANSACTION,
" ref %d desc %d (node %d)\n",
ref->debug_id, ref->desc, ref->node->debug_id);
- binder_dec_ref(ref, fp->type == BINDER_TYPE_HANDLE);
+ binder_dec_ref(ref, hdr->type == BINDER_TYPE_HANDLE);
} break;
- case BINDER_TYPE_FD:
- binder_debug(BINDER_DEBUG_TRANSACTION,
- " fd %d\n", fp->handle);
- if (failed_at)
- task_close_fd(proc, fp->handle);
- break;
+ case BINDER_TYPE_FD: {
+ struct binder_fd_object *fp = to_binder_fd_object(hdr);
+ binder_debug(BINDER_DEBUG_TRANSACTION,
+ " fd %d\n", fp->fd);
+ if (failed_at)
+ task_close_fd(proc, fp->fd);
+ } break;
+ case BINDER_TYPE_PTR:
+ /*
+ * Nothing to do here, this will get cleaned up when the
+ * transaction buffer gets freed
+ */
+ break;
+ case BINDER_TYPE_FDA: {
+ struct binder_fd_array_object *fda;
+ struct binder_buffer_object *parent;
+ uintptr_t parent_buffer;
+ u32 *fd_array;
+ size_t fd_index;
+ binder_size_t fd_buf_size;
+
+ fda = to_binder_fd_array_object(hdr);
+ parent = binder_validate_ptr(buffer, fda->parent,
+ off_start,
+ offp - off_start);
+ if (!parent) {
+ pr_err("transaction release %d bad parent offset",
+ debug_id);
+ continue;
+ }
+ /*
+ * Since the parent was already fixed up, convert it
+ * back to kernel address space to access it
+ */
+ parent_buffer = parent->buffer -
+ proc->user_buffer_offset;
+
+ fd_buf_size = sizeof(u32) * fda->num_fds;
+ if (fda->num_fds >= SIZE_MAX / sizeof(u32)) {
+ pr_err("transaction release %d invalid number of fds (%lld)\n",
+ debug_id, (u64)fda->num_fds);
+ continue;
+ }
+ if (fd_buf_size > parent->length ||
+ fda->parent_offset > parent->length - fd_buf_size) {
+ /* No space for all file descriptors here. */
+ pr_err("transaction release %d not enough space for %lld fds in buffer\n",
+ debug_id, (u64)fda->num_fds);
+ continue;
+ }
+ fd_array = (u32 *)(parent_buffer + fda->parent_offset);
+ for (fd_index = 0; fd_index < fda->num_fds; fd_index++)
+ task_close_fd(proc, fd_array[fd_index]);
+ } break;
default:
pr_err("transaction release %d bad object type %x\n",
- debug_id, fp->type);
+ debug_id, hdr->type);
break;
}
}
}
+static int binder_translate_binder(struct flat_binder_object *fp,
+ struct binder_transaction *t,
+ struct binder_thread *thread)
+{
+ struct binder_node *node;
+ struct binder_ref *ref;
+ struct binder_proc *proc = thread->proc;
+ struct binder_proc *target_proc = t->to_proc;
+
+ node = binder_get_node(proc, fp->binder);
+ if (!node) {
+ node = binder_new_node(proc, fp->binder, fp->cookie);
+ if (!node)
+ return -ENOMEM;
+
+ node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
+ node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
+ }
+ if (fp->cookie != node->cookie) {
+ binder_user_error("%d:%d sending u%016llx node %d, cookie mismatch %016llx != %016llx\n",
+ proc->pid, thread->pid, (u64)fp->binder,
+ node->debug_id, (u64)fp->cookie,
+ (u64)node->cookie);
+ return -EINVAL;
+ }
+ if (security_binder_transfer_binder(proc->tsk, target_proc->tsk))
+ return -EPERM;
+
+ ref = binder_get_ref_for_node(target_proc, node);
+ if (!ref)
+ return -EINVAL;
+
+ if (fp->hdr.type == BINDER_TYPE_BINDER)
+ fp->hdr.type = BINDER_TYPE_HANDLE;
+ else
+ fp->hdr.type = BINDER_TYPE_WEAK_HANDLE;
+ fp->binder = 0;
+ fp->handle = ref->desc;
+ fp->cookie = 0;
+ binder_inc_ref(ref, fp->hdr.type == BINDER_TYPE_HANDLE, &thread->todo);
+
+ trace_binder_transaction_node_to_ref(t, node, ref);
+ binder_debug(BINDER_DEBUG_TRANSACTION,
+ " node %d u%016llx -> ref %d desc %d\n",
+ node->debug_id, (u64)node->ptr,
+ ref->debug_id, ref->desc);
+
+ return 0;
+}
+
+static int binder_translate_handle(struct flat_binder_object *fp,
+ struct binder_transaction *t,
+ struct binder_thread *thread)
+{
+ struct binder_ref *ref;
+ struct binder_proc *proc = thread->proc;
+ struct binder_proc *target_proc = t->to_proc;
+
+ ref = binder_get_ref(proc, fp->handle,
+ fp->hdr.type == BINDER_TYPE_HANDLE);
+ if (!ref) {
+ binder_user_error("%d:%d got transaction with invalid handle, %d\n",
+ proc->pid, thread->pid, fp->handle);
+ return -EINVAL;
+ }
+ if (security_binder_transfer_binder(proc->tsk, target_proc->tsk))
+ return -EPERM;
+
+ if (ref->node->proc == target_proc) {
+ if (fp->hdr.type == BINDER_TYPE_HANDLE)
+ fp->hdr.type = BINDER_TYPE_BINDER;
+ else
+ fp->hdr.type = BINDER_TYPE_WEAK_BINDER;
+ fp->binder = ref->node->ptr;
+ fp->cookie = ref->node->cookie;
+ binder_inc_node(ref->node, fp->hdr.type == BINDER_TYPE_BINDER,
+ 0, NULL);
+ trace_binder_transaction_ref_to_node(t, ref);
+ binder_debug(BINDER_DEBUG_TRANSACTION,
+ " ref %d desc %d -> node %d u%016llx\n",
+ ref->debug_id, ref->desc, ref->node->debug_id,
+ (u64)ref->node->ptr);
+ } else {
+ struct binder_ref *new_ref;
+
+ new_ref = binder_get_ref_for_node(target_proc, ref->node);
+ if (!new_ref)
+ return -EINVAL;
+
+ fp->binder = 0;
+ fp->handle = new_ref->desc;
+ fp->cookie = 0;
+ binder_inc_ref(new_ref, fp->hdr.type == BINDER_TYPE_HANDLE,
+ NULL);
+ trace_binder_transaction_ref_to_ref(t, ref, new_ref);
+ binder_debug(BINDER_DEBUG_TRANSACTION,
+ " ref %d desc %d -> ref %d desc %d (node %d)\n",
+ ref->debug_id, ref->desc, new_ref->debug_id,
+ new_ref->desc, ref->node->debug_id);
+ }
+ return 0;
+}
+
+static int binder_translate_fd(int fd,
+ struct binder_transaction *t,
+ struct binder_thread *thread,
+ struct binder_transaction *in_reply_to)
+{
+ struct binder_proc *proc = thread->proc;
+ struct binder_proc *target_proc = t->to_proc;
+ int target_fd;
+ struct file *file;
+ int ret;
+ bool target_allows_fd;
+
+ if (in_reply_to)
+ target_allows_fd = !!(in_reply_to->flags & TF_ACCEPT_FDS);
+ else
+ target_allows_fd = t->buffer->target_node->accept_fds;
+ if (!target_allows_fd) {
+ binder_user_error("%d:%d got %s with fd, %d, but target does not allow fds\n",
+ proc->pid, thread->pid,
+ in_reply_to ? "reply" : "transaction",
+ fd);
+ ret = -EPERM;
+ goto err_fd_not_accepted;
+ }
+
+ file = fget(fd);
+ if (!file) {
+ binder_user_error("%d:%d got transaction with invalid fd, %d\n",
+ proc->pid, thread->pid, fd);
+ ret = -EBADF;
+ goto err_fget;
+ }
+ ret = security_binder_transfer_file(proc->tsk, target_proc->tsk, file);
+ if (ret < 0) {
+ ret = -EPERM;
+ goto err_security;
+ }
+
+ target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);
+ if (target_fd < 0) {
+ ret = -ENOMEM;
+ goto err_get_unused_fd;
+ }
+ task_fd_install(target_proc, target_fd, file);
+ trace_binder_transaction_fd(t, fd, target_fd);
+ binder_debug(BINDER_DEBUG_TRANSACTION, " fd %d -> %d\n",
+ fd, target_fd);
+
+ return target_fd;
+
+err_get_unused_fd:
+err_security:
+ fput(file);
+err_fget:
+err_fd_not_accepted:
+ return ret;
+}
+
+static int binder_translate_fd_array(struct binder_fd_array_object *fda,
+ struct binder_buffer_object *parent,
+ struct binder_transaction *t,
+ struct binder_thread *thread,
+ struct binder_transaction *in_reply_to)
+{
+ binder_size_t fdi, fd_buf_size, num_installed_fds;
+ int target_fd;
+ uintptr_t parent_buffer;
+ u32 *fd_array;
+ struct binder_proc *proc = thread->proc;
+ struct binder_proc *target_proc = t->to_proc;
+
+ fd_buf_size = sizeof(u32) * fda->num_fds;
+ if (fda->num_fds >= SIZE_MAX / sizeof(u32)) {
+ binder_user_error("%d:%d got transaction with invalid number of fds (%lld)\n",
+ proc->pid, thread->pid, (u64)fda->num_fds);
+ return -EINVAL;
+ }
+ if (fd_buf_size > parent->length ||
+ fda->parent_offset > parent->length - fd_buf_size) {
+ /* No space for all file descriptors here. */
+ binder_user_error("%d:%d not enough space to store %lld fds in buffer\n",
+ proc->pid, thread->pid, (u64)fda->num_fds);
+ return -EINVAL;
+ }
+ /*
+ * Since the parent was already fixed up, convert it
+ * back to the kernel address space to access it
+ */
+ parent_buffer = parent->buffer - target_proc->user_buffer_offset;
+ fd_array = (u32 *)(parent_buffer + fda->parent_offset);
+ if (!IS_ALIGNED((unsigned long)fd_array, sizeof(u32))) {
+ binder_user_error("%d:%d parent offset not aligned correctly.\n",
+ proc->pid, thread->pid);
+ return -EINVAL;
+ }
+ for (fdi = 0; fdi < fda->num_fds; fdi++) {
+ target_fd = binder_translate_fd(fd_array[fdi], t, thread,
+ in_reply_to);
+ if (target_fd < 0)
+ goto err_translate_fd_failed;
+ fd_array[fdi] = target_fd;
+ }
+ return 0;
+
+err_translate_fd_failed:
+ /*
+ * Failed to allocate fd or security error, free fds
+ * installed so far.
+ */
+ num_installed_fds = fdi;
+ for (fdi = 0; fdi < num_installed_fds; fdi++)
+ task_close_fd(target_proc, fd_array[fdi]);
+ return target_fd;
+}
+
+static int binder_fixup_parent(struct binder_transaction *t,
+ struct binder_thread *thread,
+ struct binder_buffer_object *bp,
+ binder_size_t *off_start,
+ binder_size_t num_valid,
+ struct binder_buffer_object *last_fixup_obj,
+ binder_size_t last_fixup_min_off)
+{
+ struct binder_buffer_object *parent;
+ u8 *parent_buffer;
+ struct binder_buffer *b = t->buffer;
+ struct binder_proc *proc = thread->proc;
+ struct binder_proc *target_proc = t->to_proc;
+
+ if (!(bp->flags & BINDER_BUFFER_FLAG_HAS_PARENT))
+ return 0;
+
+ parent = binder_validate_ptr(b, bp->parent, off_start, num_valid);
+ if (!parent) {
+ binder_user_error("%d:%d got transaction with invalid parent offset or type\n",
+ proc->pid, thread->pid);
+ return -EINVAL;
+ }
+
+ if (!binder_validate_fixup(b, off_start,
+ parent, bp->parent_offset,
+ last_fixup_obj,
+ last_fixup_min_off)) {
+ binder_user_error("%d:%d got transaction with out-of-order buffer fixup\n",
+ proc->pid, thread->pid);
+ return -EINVAL;
+ }
+
+ if (parent->length < sizeof(binder_uintptr_t) ||
+ bp->parent_offset > parent->length - sizeof(binder_uintptr_t)) {
+ /* No space for a pointer here! */
+ binder_user_error("%d:%d got transaction with invalid parent offset\n",
+ proc->pid, thread->pid);
+ return -EINVAL;
+ }
+ parent_buffer = (u8 *)(parent->buffer -
+ target_proc->user_buffer_offset);
+ *(binder_uintptr_t *)(parent_buffer + bp->parent_offset) = bp->buffer;
+
+ return 0;
+}
+
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
- struct binder_transaction_data *tr, int reply)
+ struct binder_transaction_data *tr, int reply,
+ binder_size_t extra_buffers_size)
{
+ int ret;
struct binder_transaction *t;
struct binder_work *tcomplete;
- binder_size_t *offp, *off_end;
+ binder_size_t *offp, *off_end, *off_start;
binder_size_t off_min;
+ u8 *sg_bufp, *sg_buf_end;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
@@ -1330,6 +1844,9 @@
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error;
+ struct binder_buffer_object *last_fixup_obj = NULL;
+ binder_size_t last_fixup_min_off = 0;
+ struct binder_context *context = proc->context;
e = binder_transaction_log_add(&binder_transaction_log);
e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
@@ -1338,6 +1855,7 @@
e->target_handle = tr->target.handle;
e->data_size = tr->data_size;
e->offsets_size = tr->offsets_size;
+ e->context_name = proc->context->name;
if (reply) {
in_reply_to = thread->transaction_stack;
@@ -1381,7 +1899,7 @@
if (tr->target.handle) {
struct binder_ref *ref;
- ref = binder_get_ref(proc, tr->target.handle);
+ ref = binder_get_ref(proc, tr->target.handle, true);
if (ref == NULL) {
binder_user_error("%d:%d got transaction to invalid handle\n",
proc->pid, thread->pid);
@@ -1390,7 +1908,7 @@
}
target_node = ref->node;
} else {
- target_node = binder_context_mgr_node;
+ target_node = context->binder_context_mgr_node;
if (target_node == NULL) {
return_error = BR_DEAD_REPLY;
goto err_no_context_mgr_node;
@@ -1457,20 +1975,22 @@
if (reply)
binder_debug(BINDER_DEBUG_TRANSACTION,
- "%d:%d BC_REPLY %d -> %d:%d, data %016llx-%016llx size %lld-%lld\n",
+ "%d:%d BC_REPLY %d -> %d:%d, data %016llx-%016llx size %lld-%lld-%lld\n",
proc->pid, thread->pid, t->debug_id,
target_proc->pid, target_thread->pid,
(u64)tr->data.ptr.buffer,
(u64)tr->data.ptr.offsets,
- (u64)tr->data_size, (u64)tr->offsets_size);
+ (u64)tr->data_size, (u64)tr->offsets_size,
+ (u64)extra_buffers_size);
else
binder_debug(BINDER_DEBUG_TRANSACTION,
- "%d:%d BC_TRANSACTION %d -> %d - node %d, data %016llx-%016llx size %lld-%lld\n",
+ "%d:%d BC_TRANSACTION %d -> %d - node %d, data %016llx-%016llx size %lld-%lld-%lld\n",
proc->pid, thread->pid, t->debug_id,
target_proc->pid, target_node->debug_id,
(u64)tr->data.ptr.buffer,
(u64)tr->data.ptr.offsets,
- (u64)tr->data_size, (u64)tr->offsets_size);
+ (u64)tr->data_size, (u64)tr->offsets_size,
+ (u64)extra_buffers_size);
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
@@ -1486,7 +2006,8 @@
trace_binder_transaction(reply, t, target_node);
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
- tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
+ tr->offsets_size, extra_buffers_size,
+ !reply && (t->flags & TF_ONE_WAY));
if (t->buffer == NULL) {
return_error = BR_FAILED_REPLY;
goto err_binder_alloc_buf_failed;
@@ -1499,8 +2020,9 @@
if (target_node)
binder_inc_node(target_node, 1, 0, NULL);
- offp = (binder_size_t *)(t->buffer->data +
- ALIGN(tr->data_size, sizeof(void *)));
+ off_start = (binder_size_t *)(t->buffer->data +
+ ALIGN(tr->data_size, sizeof(void *)));
+ offp = off_start;
if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
tr->data.ptr.buffer, tr->data_size)) {
@@ -1522,169 +2044,138 @@
return_error = BR_FAILED_REPLY;
goto err_bad_offset;
}
- off_end = (void *)offp + tr->offsets_size;
+ if (!IS_ALIGNED(extra_buffers_size, sizeof(u64))) {
+ binder_user_error("%d:%d got transaction with unaligned buffers size, %lld\n",
+ proc->pid, thread->pid,
+ (u64)extra_buffers_size);
+ return_error = BR_FAILED_REPLY;
+ goto err_bad_offset;
+ }
+ off_end = (void *)off_start + tr->offsets_size;
+ sg_bufp = (u8 *)(PTR_ALIGN(off_end, sizeof(void *)));
+ sg_buf_end = sg_bufp + extra_buffers_size;
off_min = 0;
for (; offp < off_end; offp++) {
- struct flat_binder_object *fp;
+ struct binder_object_header *hdr;
+ size_t object_size = binder_validate_object(t->buffer, *offp);
- if (*offp > t->buffer->data_size - sizeof(*fp) ||
- *offp < off_min ||
- t->buffer->data_size < sizeof(*fp) ||
- !IS_ALIGNED(*offp, sizeof(u32))) {
- binder_user_error("%d:%d got transaction with invalid offset, %lld (min %lld, max %lld)\n",
+ if (object_size == 0 || *offp < off_min) {
+ binder_user_error("%d:%d got transaction with invalid offset (%lld, min %lld max %lld) or object.\n",
proc->pid, thread->pid, (u64)*offp,
(u64)off_min,
- (u64)(t->buffer->data_size -
- sizeof(*fp)));
+ (u64)t->buffer->data_size);
return_error = BR_FAILED_REPLY;
goto err_bad_offset;
}
- fp = (struct flat_binder_object *)(t->buffer->data + *offp);
- off_min = *offp + sizeof(struct flat_binder_object);
- switch (fp->type) {
+
+ hdr = (struct binder_object_header *)(t->buffer->data + *offp);
+ off_min = *offp + object_size;
+ switch (hdr->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
- struct binder_ref *ref;
- struct binder_node *node = binder_get_node(proc, fp->binder);
+ struct flat_binder_object *fp;
- if (node == NULL) {
- node = binder_new_node(proc, fp->binder, fp->cookie);
- if (node == NULL) {
- return_error = BR_FAILED_REPLY;
- goto err_binder_new_node_failed;
- }
- node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
- node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
- }
- if (fp->cookie != node->cookie) {
- binder_user_error("%d:%d sending u%016llx node %d, cookie mismatch %016llx != %016llx\n",
- proc->pid, thread->pid,
- (u64)fp->binder, node->debug_id,
- (u64)fp->cookie, (u64)node->cookie);
+ fp = to_flat_binder_object(hdr);
+ ret = binder_translate_binder(fp, t, thread);
+ if (ret < 0) {
return_error = BR_FAILED_REPLY;
- goto err_binder_get_ref_for_node_failed;
+ goto err_translate_failed;
}
- if (security_binder_transfer_binder(proc->tsk,
- target_proc->tsk)) {
- return_error = BR_FAILED_REPLY;
- goto err_binder_get_ref_for_node_failed;
- }
- ref = binder_get_ref_for_node(target_proc, node);
- if (ref == NULL) {
- return_error = BR_FAILED_REPLY;
- goto err_binder_get_ref_for_node_failed;
- }
- if (fp->type == BINDER_TYPE_BINDER)
- fp->type = BINDER_TYPE_HANDLE;
- else
- fp->type = BINDER_TYPE_WEAK_HANDLE;
- fp->handle = ref->desc;
- binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
- &thread->todo);
-
- trace_binder_transaction_node_to_ref(t, node, ref);
- binder_debug(BINDER_DEBUG_TRANSACTION,
- " node %d u%016llx -> ref %d desc %d\n",
- node->debug_id, (u64)node->ptr,
- ref->debug_id, ref->desc);
} break;
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
- struct binder_ref *ref = binder_get_ref(proc, fp->handle);
+ struct flat_binder_object *fp;
- if (ref == NULL) {
- binder_user_error("%d:%d got transaction with invalid handle, %d\n",
- proc->pid,
- thread->pid, fp->handle);
+ fp = to_flat_binder_object(hdr);
+ ret = binder_translate_handle(fp, t, thread);
+ if (ret < 0) {
return_error = BR_FAILED_REPLY;
- goto err_binder_get_ref_failed;
- }
- if (security_binder_transfer_binder(proc->tsk,
- target_proc->tsk)) {
- return_error = BR_FAILED_REPLY;
- goto err_binder_get_ref_failed;
- }
- if (ref->node->proc == target_proc) {
- if (fp->type == BINDER_TYPE_HANDLE)
- fp->type = BINDER_TYPE_BINDER;
- else
- fp->type = BINDER_TYPE_WEAK_BINDER;
- fp->binder = ref->node->ptr;
- fp->cookie = ref->node->cookie;
- binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
- trace_binder_transaction_ref_to_node(t, ref);
- binder_debug(BINDER_DEBUG_TRANSACTION,
- " ref %d desc %d -> node %d u%016llx\n",
- ref->debug_id, ref->desc, ref->node->debug_id,
- (u64)ref->node->ptr);
- } else {
- struct binder_ref *new_ref;
-
- new_ref = binder_get_ref_for_node(target_proc, ref->node);
- if (new_ref == NULL) {
- return_error = BR_FAILED_REPLY;
- goto err_binder_get_ref_for_node_failed;
- }
- fp->handle = new_ref->desc;
- binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
- trace_binder_transaction_ref_to_ref(t, ref,
- new_ref);
- binder_debug(BINDER_DEBUG_TRANSACTION,
- " ref %d desc %d -> ref %d desc %d (node %d)\n",
- ref->debug_id, ref->desc, new_ref->debug_id,
- new_ref->desc, ref->node->debug_id);
+ goto err_translate_failed;
}
} break;
case BINDER_TYPE_FD: {
- int target_fd;
- struct file *file;
+ struct binder_fd_object *fp = to_binder_fd_object(hdr);
+ int target_fd = binder_translate_fd(fp->fd, t, thread,
+ in_reply_to);
- if (reply) {
- if (!(in_reply_to->flags & TF_ACCEPT_FDS)) {
- binder_user_error("%d:%d got reply with fd, %d, but target does not allow fds\n",
- proc->pid, thread->pid, fp->handle);
- return_error = BR_FAILED_REPLY;
- goto err_fd_not_allowed;
- }
- } else if (!target_node->accept_fds) {
- binder_user_error("%d:%d got transaction with fd, %d, but target does not allow fds\n",
- proc->pid, thread->pid, fp->handle);
- return_error = BR_FAILED_REPLY;
- goto err_fd_not_allowed;
- }
-
- file = fget(fp->handle);
- if (file == NULL) {
- binder_user_error("%d:%d got transaction with invalid fd, %d\n",
- proc->pid, thread->pid, fp->handle);
- return_error = BR_FAILED_REPLY;
- goto err_fget_failed;
- }
- if (security_binder_transfer_file(proc->tsk,
- target_proc->tsk,
- file) < 0) {
- fput(file);
- return_error = BR_FAILED_REPLY;
- goto err_get_unused_fd_failed;
- }
- target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);
if (target_fd < 0) {
- fput(file);
return_error = BR_FAILED_REPLY;
- goto err_get_unused_fd_failed;
+ goto err_translate_failed;
}
- task_fd_install(target_proc, target_fd, file);
- trace_binder_transaction_fd(t, fp->handle, target_fd);
- binder_debug(BINDER_DEBUG_TRANSACTION,
- " fd %d -> %d\n", fp->handle, target_fd);
- /* TODO: fput? */
- fp->handle = target_fd;
+ fp->pad_binder = 0;
+ fp->fd = target_fd;
} break;
+ case BINDER_TYPE_FDA: {
+ struct binder_fd_array_object *fda =
+ to_binder_fd_array_object(hdr);
+ struct binder_buffer_object *parent =
+ binder_validate_ptr(t->buffer, fda->parent,
+ off_start,
+ offp - off_start);
+ if (!parent) {
+ binder_user_error("%d:%d got transaction with invalid parent offset or type\n",
+ proc->pid, thread->pid);
+ return_error = BR_FAILED_REPLY;
+ goto err_bad_parent;
+ }
+ if (!binder_validate_fixup(t->buffer, off_start,
+ parent, fda->parent_offset,
+ last_fixup_obj,
+ last_fixup_min_off)) {
+ binder_user_error("%d:%d got transaction with out-of-order buffer fixup\n",
+ proc->pid, thread->pid);
+ return_error = BR_FAILED_REPLY;
+ goto err_bad_parent;
+ }
+ ret = binder_translate_fd_array(fda, parent, t, thread,
+ in_reply_to);
+ if (ret < 0) {
+ return_error = BR_FAILED_REPLY;
+ goto err_translate_failed;
+ }
+ last_fixup_obj = parent;
+ last_fixup_min_off =
+ fda->parent_offset + sizeof(u32) * fda->num_fds;
+ } break;
+ case BINDER_TYPE_PTR: {
+ struct binder_buffer_object *bp =
+ to_binder_buffer_object(hdr);
+ size_t buf_left = sg_buf_end - sg_bufp;
+ if (bp->length > buf_left) {
+ binder_user_error("%d:%d got transaction with too large buffer\n",
+ proc->pid, thread->pid);
+ return_error = BR_FAILED_REPLY;
+ goto err_bad_offset;
+ }
+ if (copy_from_user(sg_bufp,
+ (const void __user *)(uintptr_t)
+ bp->buffer, bp->length)) {
+ binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
+ proc->pid, thread->pid);
+ return_error = BR_FAILED_REPLY;
+ goto err_copy_data_failed;
+ }
+ /* Fixup buffer pointer to target proc address space */
+ bp->buffer = (uintptr_t)sg_bufp +
+ target_proc->user_buffer_offset;
+ sg_bufp += ALIGN(bp->length, sizeof(u64));
+
+ ret = binder_fixup_parent(t, thread, bp, off_start,
+ offp - off_start,
+ last_fixup_obj,
+ last_fixup_min_off);
+ if (ret < 0) {
+ return_error = BR_FAILED_REPLY;
+ goto err_translate_failed;
+ }
+ last_fixup_obj = bp;
+ last_fixup_min_off = 0;
+ } break;
default:
binder_user_error("%d:%d got transaction with invalid object type, %x\n",
- proc->pid, thread->pid, fp->type);
+ proc->pid, thread->pid, hdr->type);
return_error = BR_FAILED_REPLY;
goto err_bad_object_type;
}
@@ -1714,14 +2205,10 @@
wake_up_interruptible(target_wait);
return;
-err_get_unused_fd_failed:
-err_fget_failed:
-err_fd_not_allowed:
-err_binder_get_ref_for_node_failed:
-err_binder_get_ref_failed:
-err_binder_new_node_failed:
+err_translate_failed:
err_bad_object_type:
err_bad_offset:
+err_bad_parent:
err_copy_data_failed:
trace_binder_transaction_failed_buffer_release(t->buffer);
binder_transaction_buffer_release(target_proc, t->buffer, offp);
@@ -1765,6 +2252,7 @@
binder_size_t *consumed)
{
uint32_t cmd;
+ struct binder_context *context = proc->context;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
@@ -1791,17 +2279,19 @@
if (get_user(target, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
- if (target == 0 && binder_context_mgr_node &&
+ if (target == 0 && context->binder_context_mgr_node &&
(cmd == BC_INCREFS || cmd == BC_ACQUIRE)) {
ref = binder_get_ref_for_node(proc,
- binder_context_mgr_node);
+ context->binder_context_mgr_node);
if (ref->desc != target) {
binder_user_error("%d:%d tried to acquire reference to desc 0, got %d instead\n",
proc->pid, thread->pid,
ref->desc);
}
} else
- ref = binder_get_ref(proc, target);
+ ref = binder_get_ref(proc, target,
+ cmd == BC_ACQUIRE ||
+ cmd == BC_RELEASE);
if (ref == NULL) {
binder_user_error("%d:%d refcount change on invalid ref %d\n",
proc->pid, thread->pid, target);
@@ -1937,6 +2427,17 @@
break;
}
+ case BC_TRANSACTION_SG:
+ case BC_REPLY_SG: {
+ struct binder_transaction_data_sg tr;
+
+ if (copy_from_user(&tr, ptr, sizeof(tr)))
+ return -EFAULT;
+ ptr += sizeof(tr);
+ binder_transaction(proc, thread, &tr.transaction_data,
+ cmd == BC_REPLY_SG, tr.buffers_size);
+ break;
+ }
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
@@ -1944,7 +2445,8 @@
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
- binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
+ binder_transaction(proc, thread, &tr,
+ cmd == BC_REPLY, 0);
break;
}
@@ -1997,7 +2499,7 @@
if (get_user(cookie, (binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
- ref = binder_get_ref(proc, target);
+ ref = binder_get_ref(proc, target, false);
if (ref == NULL) {
binder_user_error("%d:%d %s invalid ref %d\n",
proc->pid, thread->pid,
@@ -2698,9 +3200,11 @@
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
+ struct binder_context *context = proc->context;
+
kuid_t curr_euid = current_euid();
- if (binder_context_mgr_node != NULL) {
+ if (context->binder_context_mgr_node) {
pr_err("BINDER_SET_CONTEXT_MGR already set\n");
ret = -EBUSY;
goto out;
@@ -2708,27 +3212,27 @@
ret = security_binder_set_context_mgr(proc->tsk);
if (ret < 0)
goto out;
- if (uid_valid(binder_context_mgr_uid)) {
- if (!uid_eq(binder_context_mgr_uid, curr_euid)) {
+ if (uid_valid(context->binder_context_mgr_uid)) {
+ if (!uid_eq(context->binder_context_mgr_uid, curr_euid)) {
pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
from_kuid(&init_user_ns, curr_euid),
from_kuid(&init_user_ns,
- binder_context_mgr_uid));
+ context->binder_context_mgr_uid));
ret = -EPERM;
goto out;
}
} else {
- binder_context_mgr_uid = curr_euid;
+ context->binder_context_mgr_uid = curr_euid;
}
- binder_context_mgr_node = binder_new_node(proc, 0, 0);
- if (binder_context_mgr_node == NULL) {
+ context->binder_context_mgr_node = binder_new_node(proc, 0, 0);
+ if (!context->binder_context_mgr_node) {
ret = -ENOMEM;
goto out;
}
- binder_context_mgr_node->local_weak_refs++;
- binder_context_mgr_node->local_strong_refs++;
- binder_context_mgr_node->has_strong_ref = 1;
- binder_context_mgr_node->has_weak_ref = 1;
+ context->binder_context_mgr_node->local_weak_refs++;
+ context->binder_context_mgr_node->local_strong_refs++;
+ context->binder_context_mgr_node->has_strong_ref = 1;
+ context->binder_context_mgr_node->has_weak_ref = 1;
out:
return ret;
}
@@ -2949,6 +3453,7 @@
static int binder_open(struct inode *nodp, struct file *filp)
{
struct binder_proc *proc;
+ struct binder_device *binder_dev;
binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
current->group_leader->pid, current->pid);
@@ -2961,6 +3466,9 @@
INIT_LIST_HEAD(&proc->todo);
init_waitqueue_head(&proc->wait);
proc->default_priority = task_nice(current);
+ binder_dev = container_of(filp->private_data, struct binder_device,
+ miscdev);
+ proc->context = &binder_dev->context;
binder_lock(__func__);
@@ -2976,8 +3484,17 @@
char strbuf[11];
snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
+ /*
+ * proc debug entries are shared between contexts, so
+ * this will fail if the process tries to open the driver
+ * again with a different context. The printing code will
+ * print all contexts that a given PID has anyway, so this
+ * is not a problem.
+ */
proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
- binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
+ binder_debugfs_dir_entry_proc,
+ (void *)(unsigned long)proc->pid,
+ &binder_proc_fops);
}
return 0;
@@ -3070,6 +3587,7 @@
static void binder_deferred_release(struct binder_proc *proc)
{
struct binder_transaction *t;
+ struct binder_context *context = proc->context;
struct rb_node *n;
int threads, nodes, incoming_refs, outgoing_refs, buffers,
active_transactions, page_count;
@@ -3079,11 +3597,12 @@
hlist_del(&proc->proc_node);
- if (binder_context_mgr_node && binder_context_mgr_node->proc == proc) {
+ if (context->binder_context_mgr_node &&
+ context->binder_context_mgr_node->proc == proc) {
binder_debug(BINDER_DEBUG_DEAD_BINDER,
"%s: %d context_mgr_node gone\n",
__func__, proc->pid);
- binder_context_mgr_node = NULL;
+ context->binder_context_mgr_node = NULL;
}
threads = 0;
@@ -3370,6 +3889,7 @@
size_t header_pos;
seq_printf(m, "proc %d\n", proc->pid);
+ seq_printf(m, "context %s\n", proc->context->name);
header_pos = m->count;
for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n))
@@ -3439,7 +3959,9 @@
"BC_EXIT_LOOPER",
"BC_REQUEST_DEATH_NOTIFICATION",
"BC_CLEAR_DEATH_NOTIFICATION",
- "BC_DEAD_BINDER_DONE"
+ "BC_DEAD_BINDER_DONE",
+ "BC_TRANSACTION_SG",
+ "BC_REPLY_SG",
};
static const char * const binder_objstat_strings[] = {
@@ -3494,6 +4016,7 @@
int count, strong, weak;
seq_printf(m, "proc %d\n", proc->pid);
+ seq_printf(m, "context %s\n", proc->context->name);
count = 0;
for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n))
count++;
@@ -3601,23 +4124,18 @@
static int binder_proc_show(struct seq_file *m, void *unused)
{
struct binder_proc *itr;
- struct binder_proc *proc = m->private;
+ int pid = (unsigned long)m->private;
int do_lock = !binder_debug_no_lock;
- bool valid_proc = false;
if (do_lock)
binder_lock(__func__);
hlist_for_each_entry(itr, &binder_procs, proc_node) {
- if (itr == proc) {
- valid_proc = true;
- break;
+ if (itr->pid == pid) {
+ seq_puts(m, "binder proc state:\n");
+ print_binder_proc(m, itr, 1);
}
}
- if (valid_proc) {
- seq_puts(m, "binder proc state:\n");
- print_binder_proc(m, proc, 1);
- }
if (do_lock)
binder_unlock(__func__);
return 0;
@@ -3627,11 +4145,11 @@
struct binder_transaction_log_entry *e)
{
seq_printf(m,
- "%d: %s from %d:%d to %d:%d node %d handle %d size %d:%d\n",
+ "%d: %s from %d:%d to %d:%d context %s node %d handle %d size %d:%d\n",
e->debug_id, (e->call_type == 2) ? "reply" :
((e->call_type == 1) ? "async" : "call "), e->from_proc,
- e->from_thread, e->to_proc, e->to_thread, e->to_node,
- e->target_handle, e->data_size, e->offsets_size);
+ e->from_thread, e->to_proc, e->to_thread, e->context_name,
+ e->to_node, e->target_handle, e->data_size, e->offsets_size);
}
static int binder_transaction_log_show(struct seq_file *m, void *unused)
@@ -3659,20 +4177,44 @@
.release = binder_release,
};
-static struct miscdevice binder_miscdev = {
- .minor = MISC_DYNAMIC_MINOR,
- .name = "binder",
- .fops = &binder_fops
-};
-
BINDER_DEBUG_ENTRY(state);
BINDER_DEBUG_ENTRY(stats);
BINDER_DEBUG_ENTRY(transactions);
BINDER_DEBUG_ENTRY(transaction_log);
+static int __init init_binder_device(const char *name)
+{
+ int ret;
+ struct binder_device *binder_device;
+
+ binder_device = kzalloc(sizeof(*binder_device), GFP_KERNEL);
+ if (!binder_device)
+ return -ENOMEM;
+
+ binder_device->miscdev.fops = &binder_fops;
+ binder_device->miscdev.minor = MISC_DYNAMIC_MINOR;
+ binder_device->miscdev.name = name;
+
+ binder_device->context.binder_context_mgr_uid = INVALID_UID;
+ binder_device->context.name = name;
+
+ ret = misc_register(&binder_device->miscdev);
+ if (ret < 0) {
+ kfree(binder_device);
+ return ret;
+ }
+
+ hlist_add_head(&binder_device->hlist, &binder_devices);
+
+ return ret;
+}
+
static int __init binder_init(void)
{
int ret;
+ char *device_name, *device_names;
+ struct binder_device *device;
+ struct hlist_node *tmp;
binder_deferred_workqueue = create_singlethread_workqueue("binder");
if (!binder_deferred_workqueue)
@@ -3682,7 +4224,7 @@
if (binder_debugfs_dir_entry_root)
binder_debugfs_dir_entry_proc = debugfs_create_dir("proc",
binder_debugfs_dir_entry_root);
- ret = misc_register(&binder_miscdev);
+
if (binder_debugfs_dir_entry_root) {
debugfs_create_file("state",
S_IRUGO,
@@ -3710,6 +4252,37 @@
&binder_transaction_log_failed,
&binder_transaction_log_fops);
}
+
+ /*
+ * Copy the module parameter string, because we don't want to
+ * tokenize it in-place.
+ */
+ device_names = kzalloc(strlen(binder_devices_param) + 1, GFP_KERNEL);
+ if (!device_names) {
+ ret = -ENOMEM;
+ goto err_alloc_device_names_failed;
+ }
+ strcpy(device_names, binder_devices_param);
+
+ while ((device_name = strsep(&device_names, ","))) {
+ ret = init_binder_device(device_name);
+ if (ret)
+ goto err_init_binder_device_failed;
+ }
+
+ return ret;
+
+err_init_binder_device_failed:
+ hlist_for_each_entry_safe(device, tmp, &binder_devices, hlist) {
+ misc_deregister(&device->miscdev);
+ hlist_del(&device->hlist);
+ kfree(device);
+ }
+err_alloc_device_names_failed:
+ debugfs_remove_recursive(binder_debugfs_dir_entry_root);
+
+ destroy_workqueue(binder_deferred_workqueue);
+
return ret;
}
diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index 6253775..5ead446 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -609,6 +609,8 @@
state->src_h = val;
} else if (property == config->rotation_property) {
state->rotation = val;
+ } else if (property == config->alpha_property) {
+ state->alpha = val;
} else if (plane->funcs->atomic_set_property) {
return plane->funcs->atomic_set_property(plane, state,
property, val);
@@ -656,6 +658,8 @@
*val = state->src_h;
} else if (property == config->rotation_property) {
*val = state->rotation;
+ } else if (property == config->alpha_property) {
+ *val = state->alpha;
} else if (plane->funcs->atomic_get_property) {
return plane->funcs->atomic_get_property(plane, state, property, val);
} else {
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index cbcc63e..ea053ec 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -5853,13 +5853,31 @@
{ DRM_REFLECT_Y, "reflect-y" },
};
- return drm_property_create_bitmask(dev, 0, "rotation",
+ return drm_property_create_bitmask(dev, DRM_MODE_PROP_ATOMIC, "rotation",
props, ARRAY_SIZE(props),
supported_rotations);
}
EXPORT_SYMBOL(drm_mode_create_rotation_property);
/**
+ * drm_mode_create_alpha_property - create plane alpha property
+ * @dev: DRM device
+ * @max: maximal possible value of alpha property
+ *
+ * This function initializes a generic plane alpha property. The maximum alpha
+ * value is determined by the driver.
+ *
+ * Returns:
+ * Pointer to property on success, NULL on failure.
+ */
+struct drm_property *drm_mode_create_alpha_property(struct drm_device *dev,
+ unsigned int max)
+{
+ return drm_property_create_range(dev, DRM_MODE_PROP_ATOMIC, "alpha", 0, max);
+}
+EXPORT_SYMBOL(drm_mode_create_alpha_property);
+
+/**
* DOC: Tile group
*
* Tile groups are used to represent tiled monitors with a unique
diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
index 8ce2a0c..5d8ca1a 100644
--- a/drivers/gpu/drm/drm_ioctl.c
+++ b/drivers/gpu/drm/drm_ioctl.c
@@ -562,7 +562,7 @@
DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, drm_getcap, DRM_UNLOCKED|DRM_RENDER_ALLOW),
- DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, 0),
+ DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_MASTER),
DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_setunique, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
@@ -630,13 +630,13 @@
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_mode_getresources, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
- DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_mode_getplane_res, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_mode_getplane_res, DRM_CONTROL_ALLOW|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANE, drm_mode_getplane, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANE, drm_mode_getplane, DRM_CONTROL_ALLOW|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPLANE, drm_mode_setplane, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR, drm_mode_cursor_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETGAMMA, drm_mode_gamma_get_ioctl, DRM_UNLOCKED),
@@ -645,8 +645,8 @@
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCONNECTOR, drm_mode_getconnector, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATTACHMODE, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_DETACHMODE, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPERTY, drm_mode_getproperty_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPROPERTY, drm_mode_connector_property_set_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPERTY, drm_mode_getproperty_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPROPERTY, drm_mode_connector_property_set_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPBLOB, drm_mode_getblob_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETFB, drm_mode_getfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
@@ -657,10 +657,10 @@
DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATE_DUMB, drm_mode_create_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_MAP_DUMB, drm_mode_mmap_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROY_DUMB, drm_mode_destroy_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_GETPROPERTIES, drm_mode_obj_get_properties_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_GETPROPERTIES, drm_mode_obj_get_properties_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_SETPROPERTY, drm_mode_obj_set_property_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR2, drm_mode_cursor2_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATOMIC, drm_mode_atomic_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATOMIC, drm_mode_atomic_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATEPROPBLOB, drm_mode_createblob_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROYPROPBLOB, drm_mode_destroyblob_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
};
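The hunks above widen what render nodes (/dev/dri/renderD*) may do: adding DRM_RENDER_ALLOW to an entry marks that ioctl as callable without master privileges on a render node. A simplified model of the gate that drm_ioctl() applies (the real check also considers DRM_AUTH, DRM_MASTER and DRM_ROOT_ONLY; the flag values below are illustrative, not the kernel's encoding):

```c
#include <stdbool.h>

/* Simplified model of the render-node permission gate in drm_ioctl():
 * on a render node, only ioctls flagged DRM_RENDER_ALLOW are permitted.
 * Flag values here are illustrative placeholders. */
#define DRM_UNLOCKED     0x1u
#define DRM_RENDER_ALLOW 0x2u

static bool render_node_may_call(unsigned int ioctl_flags)
{
	return (ioctl_flags & DRM_RENDER_ALLOW) != 0;
}
```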
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 9f935f5..21eda25 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -333,7 +333,7 @@
* drm_gem_prime_export - helper library implementation of the export callback
* @dev: drm_device to export from
* @obj: GEM object to export
- * @flags: flags like DRM_CLOEXEC
+ * @flags: flags like DRM_CLOEXEC and DRM_RDWR
*
* This is the implementation of the gem_prime_export functions for GEM drivers
* using the PRIME helpers.
@@ -632,7 +632,6 @@
struct drm_file *file_priv)
{
struct drm_prime_handle *args = data;
- uint32_t flags;
if (!drm_core_check_feature(dev, DRIVER_PRIME))
return -EINVAL;
@@ -641,14 +640,11 @@
return -ENOSYS;
/* check flags are valid */
- if (args->flags & ~DRM_CLOEXEC)
+ if (args->flags & ~(DRM_CLOEXEC | DRM_RDWR))
return -EINVAL;
- /* we only want to pass DRM_CLOEXEC which is == O_CLOEXEC */
- flags = args->flags & DRM_CLOEXEC;
-
return dev->driver->prime_handle_to_fd(dev, file_priv,
- args->handle, flags, &args->fd);
+ args->handle, args->flags, &args->fd);
}
int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
index 834fa9f..e727998 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.c
+++ b/drivers/gpu/drm/vc4/vc4_drv.c
@@ -271,6 +271,10 @@
vc4_gem_init(drm);
+ ret = vc4_plane_create_properties(drm);
+ if (ret)
+ goto gem_destroy;
+
ret = component_bind_all(dev, drm);
if (ret)
goto gem_destroy;
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 312c848..6af08a9 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -507,6 +507,7 @@
u32 vc4_plane_dlist_size(struct drm_plane_state *state);
void vc4_plane_async_set_fb(struct drm_plane *plane,
struct drm_framebuffer *fb);
+int vc4_plane_create_properties(struct drm_device *dev);
/* vc4_v3d.c */
extern struct platform_driver vc4_v3d_driver;
diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
index 7577ad0..b07802e 100644
--- a/drivers/gpu/drm/vc4/vc4_plane.c
+++ b/drivers/gpu/drm/vc4/vc4_plane.c
@@ -821,6 +821,34 @@
src_w, src_h);
}
+int vc4_plane_create_properties(struct drm_device *dev)
+{
+ struct drm_property *prop;
+ if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
+ /* Create the rotation and alpha properties. */
+ prop = drm_mode_create_rotation_property(dev, DRM_ROTATE_0);
+ if (!prop)
+ return -ENOMEM;
+ dev->mode_config.rotation_property = prop;
+ prop = drm_mode_create_alpha_property(dev, 255);
+ if (!prop)
+ return -ENOMEM;
+ dev->mode_config.alpha_property = prop;
+ }
+ return 0;
+}
+
+static void vc4_plane_attach_properties(struct drm_device *dev, struct drm_plane *plane)
+{
+ struct drm_mode_config *config = &dev->mode_config;
+
+ if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
+ /* Attach the rotation and alpha properties. */
+ drm_object_attach_property(&plane->base, config->rotation_property, 0);
+ drm_object_attach_property(&plane->base, config->alpha_property, 0);
+ }
+}
+
static const struct drm_plane_funcs vc4_plane_funcs = {
.update_plane = vc4_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
@@ -864,6 +892,8 @@
formats, num_formats,
type);
+ vc4_plane_attach_properties(dev, plane);
+
drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
return plane;
diff --git a/drivers/pinctrl/android-things/devices.c b/drivers/pinctrl/android-things/devices.c
deleted file mode 100644
index 2c5fa8b..0000000
--- a/drivers/pinctrl/android-things/devices.c
+++ /dev/null
@@ -1,815 +0,0 @@
-/*
- * devices.c
- *
- * Runtime pin configuration for Raspberry Pi
- *
- * Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/amba/bus.h>
-#include <linux/device.h>
-#include <linux/kdev_t.h>
-#include <linux/mutex.h>
-#include <linux/of.h>
-#include <linux/of_platform.h>
-#include <linux/slab.h>
-
-#include "runtimepinconfig.h"
-#include "platform_devices.h"
-
-DEFINE_MUTEX(sysfs_mutex);
-static struct pin_device **pin_array;
-
-static inline struct pin_device *get_pin_device_by_pin(u32 pin)
-{
- if (!pin_array || pin >= pin_count)
- return NULL;
-
- return pin_array[pin];
-}
-
-/*
- * Call a function for each pin in a device tree node. Look up the pin_device
- * representing the pin and pass that to the function. Don't call the function if
- * no pin_device is found for a pin.
- */
-static int device_for_each_pin(struct device_node *node,
- int (*fn)(struct pin_device *pin))
-{
- struct device_node *state;
- struct pin_device *pin_dev;
- int i = 0;
- u32 j;
- u32 pin;
- int ret = 0;
-
- while ((state = of_parse_phandle(node, STATE_0, i++))) {
- j = 0;
-
- while (!of_property_read_u32_index(state, PROP_PINS, j++,
- &pin)) {
- if (!(pin_dev = get_pin_device_by_pin(pin)))
- /*
- * This is normal for devices such as uart0,
- * which has a third non-user pin in its device
- * tree entry.
- */
- continue;
-
- if ((ret = fn(pin_dev)))
- break;
- }
-
- of_node_put(state);
-
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
-/*
- * track_pin_device() - Create a pin_device for a platform_device that has just
- * been registered, or update it if one already exists. This function should be
- * called from the driver's probe function.
- *
- * @dev: The platform_device for this pin.
- * @class: The class to use when creating sysfs files for this device. For
- * example, using the pinctrl class causes files to be created in
- * /sys/class/pinctrl.
- *
- * Returns the new (or updated) pin_device on success, NULL on failure.
- */
-struct pin_device *track_pin_device(struct platform_device *dev,
- struct class *class)
-{
- const char *name;
- struct device *chardev;
- struct pin_device *pin_dev;
- int ret;
- u32 pin;
-
- if (!pin_array) {
- pr_err(TAG "pin device array uninitialized\n");
- return NULL;
- }
-
- if (of_property_read_string(dev->dev.of_node, "name", &name)) {
- dev_err(&dev->dev, "no name property found\n");
- return NULL;
- }
-
- if ((ret = get_pin(dev->dev.of_node, &pin))) {
- dev_err(&dev->dev, "no pin property found\n");
- return NULL;
- }
-
- if (pin >= pin_count) {
- dev_err(&dev->dev, "pin property %d out of range %d\n", pin,
- pin_count);
- return NULL;
- }
-
- /* This pin device may be returning after having been unregistered. */
- if ((pin_dev = get_pin_device_by_pin(pin))) {
- dev_dbg(&dev->dev, "tracking existing device\n");
-
- if (pin_dev->device != NULL)
- dev_warn(&dev->dev, "device is being re-added to pin device %d\n",
- pin);
-
- pin_dev->set_gpio = 0;
- pin_dev->device = dev;
-
- return pin_dev;
- }
-
- if (!(pin_dev = kmalloc(sizeof(*pin_dev), GFP_KERNEL))) {
- pr_err(TAG "kmalloc failed %s:%d\n", __FILE__, __LINE__);
- return NULL;
- }
-
- chardev = device_create(class, NULL, MKDEV(0, 0), pin_dev, "%s", name);
- if (IS_ERR(chardev)) {
- dev_err(&dev->dev, "unable to create char device");
- kfree(pin_dev);
- return NULL;
- }
-
- pin_dev->pin = pin;
- pin_dev->set_gpio = 0;
- pin_dev->device = dev;
- pin_dev->of_node = of_node_get(dev->dev.of_node);
- pin_dev->char_device = chardev;
- pin_array[pin] = pin_dev;
-
- dev_dbg(&dev->dev, "tracking new device\n");
-
- return pin_dev;
-}
-
-/*
- * The pin device has been unregistered, but we still want to keep most of its
- * data around in case it needs to be registered again. This function should be
- * called from the driver's remove function.
- */
-void untrack_pin_device(struct pin_device *dev)
-{
- dev->device = NULL;
-}
-
-/*
- * Unregister a generic device. Use the device's bus to determine its type, and
- * call the unregister function for that device type. Perform additional cleanup
- * for AMBA devices.
- */
-static void unregister_device(struct device *dev)
-{
- struct amba_device *adev;
- int ret;
-
- if (!dev) {
- return;
- } else if (dev->bus == &platform_bus_type) {
- platform_device_unregister(to_platform_device(dev));
- } else if (dev->bus == &amba_bustype) {
- adev = to_amba_device(dev);
-
- /*
- * For whatever reason, the AMBA driver doesn't call
- * release_resource (or amba_device_release doesn't get
- * called). Call it here so we're able to add the device back
- * later.
- */
- if (adev->res.parent) {
- if ((ret = release_resource(&adev->res))) {
- dev_warn(dev, "release_resource failed with %d\n",
- ret);
- }
- }
-
- amba_device_unregister(adev);
- } else {
- dev_warn(dev, "can't unregister with unknown bus type %s\n",
- dev->bus->name);
- }
-}
-
-/* Register a device by its device tree node. */
-static int register_device_by_node(struct device_node *node)
-{
- struct device *parent;
- int ret;
-
- /*
- * Mark the node as unpopulated so it will get registered by
- * of_platform_populate.
- */
- of_node_clear_flag(node, OF_POPULATED);
- parent = find_device_by_node(node->parent);
- if ((ret = of_platform_populate(node->parent, NULL, NULL, parent))) {
- /*
- * Something went wrong trying to bring the pin device back. Set
- * this flag so we don't unintentionally attempt to bring it
- * back next time we call of_platform_populate.
- */
- of_node_set_flag(node, OF_POPULATED);
- pr_err(TAG "unable to register device for %s\n", node->name);
- } else {
- pr_debug(TAG "registered device for %s\n", node->name);
- }
-
- put_device(parent);
-
- return ret;
-}
-
-/*
- * If always_unreg_aux is set, we must unregister the auxiliary device any time
- * we unregister the primary device. Otherwise we can leave the auxiliary device
- * up while changing the pins.
- */
-static void unregister_device_and_maybe_aux(struct device *dev,
- struct bcm_device *bdev)
-{
- struct device *aux_dev;
-
- if (bdev->always_unreg_aux && bdev->aux_dev.of_node) {
- aux_dev = find_device_by_node(bdev->aux_dev.of_node);
- unregister_device(aux_dev);
- }
-
- unregister_device(dev);
-}
-
-/*
- * If always_unreg_aux is set, we must unregister the auxiliary device BEFORE
- * unregistering the primary device. Otherwise unregister the primary device
- * first, then the auxiliary device.
- */
-static void unregister_device_and_aux(struct device *dev,
- struct bcm_device *bdev)
-{
- struct device *aux_dev = NULL;
-
- if (bdev->aux_dev.of_node)
- aux_dev = find_device_by_node(bdev->aux_dev.of_node);
-
- if (bdev->always_unreg_aux) {
- unregister_device(aux_dev);
- unregister_device(dev);
- } else {
- unregister_device(dev);
- unregister_device(aux_dev);
- }
-}
-
-/* Register a device, and its auxiliary device if always_unreg_aux is set. */
-static int register_device_and_maybe_aux(struct bcm_device *dev)
-{
- int ret = register_device_by_node(dev->node.of_node);
-
- if (ret)
- return ret;
-
- if (dev->aux_dev.of_node && dev->always_unreg_aux)
- return register_device_by_node(dev->aux_dev.of_node);
-
- return 0;
-}
-
-/* Register a device and its auxiliary device. */
-static int register_device_and_aux(struct bcm_device *dev)
-{
- int ret;
-
- if (!dev->aux_dev.of_node)
- return register_device_by_node(dev->node.of_node);
-
- if (dev->always_unreg_aux) {
- if ((ret = register_device_by_node(dev->node.of_node)))
- return ret;
- return register_device_by_node(dev->aux_dev.of_node);
- }
-
- if ((ret = register_device_by_node(dev->aux_dev.of_node)))
- return ret;
- return register_device_by_node(dev->node.of_node);
-
- return 0;
-}
-
-/* Restore the device's default pin configuration and register it. */
-static inline int register_default_device(struct bcm_device *dev)
-{
- int ret = set_device_config(dev, NULL);
-
- if (ret)
- return ret;
-
- return register_device_and_aux(dev);
-}
-
-/*
- * Unregister all peripheral devices we know about and prepare their device tree
- * properties for our use.
- */
-int unregister_platform_devices(void)
-{
- struct device *dev;
- struct bcm_device *bdev;
- int ret = 0;
-
- if (pin_array) {
- pr_warn(TAG "pin array is already initialized\n");
- } else {
- pin_array = kcalloc(pin_count, sizeof(*pin_array),
- GFP_KERNEL);
- if (!pin_array) {
- pr_err(TAG "kmalloc failed %s:%d\n", __FILE__,
- __LINE__);
- return -ENOMEM;
- }
- }
-
- for (bdev = platform_devices; bdev->name != NULL; bdev++) {
- if (!bdev->node.path)
- continue;
-
- /* Populate the device and auxiliary device nodes. */
- bdev->node.of_node = of_find_node_by_path(bdev->node.path);
- if (!bdev->node.of_node) {
- pr_warn(TAG "unable to find %s in the device tree\n",
- bdev->node.path);
- continue;
- }
-
- if (bdev->aux_dev.path)
- bdev->aux_dev.of_node = of_find_node_by_path(
- bdev->aux_dev.path);
-
- /* Unregister the device and auxiliary device. */
- if (!bdev->use_default) {
- dev = find_device_by_node(bdev->node.of_node);
- unregister_device_and_aux(dev, bdev);
- }
-
- if ((ret = expand_property(bdev, PROP_PULL, 0)))
- return ret;
- if ((ret = expand_property(bdev, PROP_FUNC,
- bdev->pin_groups[0].function)))
- return ret;
-
- if (!bdev->use_default)
- ret = set_device_config(bdev, &bdev->pin_groups[0]);
- }
-
- return ret;
-}
-
-/*
- * Find the device using this pin. The returned device may be a pin device or
- * peripheral device. We register pin devices for each pin when a peripheral
- * device gets unregistered, so returning NULL here should never happen. The
- * caller must call put_device on the returned device.
- */
-static struct device *find_active_device_by_pin(u32 pin,
- struct bcm_device **dev)
-{
- struct device *pdev;
- int has_pin;
- size_t i;
-
- for (i = 0; platform_devices[i].name != NULL; i++) {
- if (!platform_devices[i].node.path)
- continue;
-
- if (!platform_devices[i].node.of_node) {
- pr_warn(TAG "%s has no device tree node\n",
- platform_devices[i].name);
- continue;
- }
-
- pdev = find_device_by_node(platform_devices[i].node.of_node);
- has_pin = device_has_pin(platform_devices[i].node.of_node, pin);
- if (pdev && has_pin) {
- /*
- * We are assuming only one registered device has this
- * pin in its pins property.
- */
-
- if (dev)
- *dev = &platform_devices[i];
-
- return pdev;
- }
- }
-
- return NULL;
-}
-
-static int register_pin_device(struct pin_device *dev)
-{
- return (!dev->device) ? register_device_by_node(dev->of_node) : 0;
-}
-
-static int set_gpio_flag(struct pin_device *pin)
-{
- pin->set_gpio = 1;
- return 0;
-}
-
-static int clear_gpio_flag(struct pin_device *pin)
-{
- pin->set_gpio = 0;
- return 0;
-}
-
-/*
- * Free a pin for use by a device. If this pin is set to GPIO simply unregister
- * its pin device. If this pin is owned by another device, unregister that
- * device and set set_gpio for each of its pins. Later we will register a new
- * pin device for every pin with set_gpio set.
- */
-static int unregister_pin_for_device(struct pin_device *pin)
-{
- struct device *otherdev;
- struct bcm_device *dev;
-
- if (pin->device) {
- /* This pin is just owned by a pin device, so unregister it. */
- platform_device_unregister(pin->device);
- } else if ((otherdev = find_active_device_by_pin(pin->pin, &dev))) {
- /*
- * This pin is owned by a platform device. For each pin owned by
- * this device, set its set_gpio flag. Later we will clear the
- * flags of those pins used by the platform device we are
- * bringing up, and will create pin devices for those that
- * remain. This prevents having to use a ton of nested loops.
- */
- unregister_device_and_aux(otherdev, dev);
-
- device_for_each_pin(dev->node.of_node, set_gpio_flag);
-
- if (dev->use_default)
- return register_default_device(dev);
- } else {
- /*
- * Nothing is using this pin. It was (hopefully) being used by
- * the platform device we will be bringing back up.
- */
- }
-
- return 0;
-}
-
-/*
- * Return a pointer to the pin property at the specified index. This pointer
- * can be used to read or modify the property. A device's pin list may be spread
- * across several nodes and properties.
- */
-static __be32 *find_pin_property(struct device_node *node, const char *prop,
- int index)
-{
- struct property *pin_prop;
- struct device_node *state;
- int i = 0;
- int j = 0;
- int length;
-
- while ((state = of_parse_phandle(node, STATE_0, i++))) {
- pin_prop = of_find_property(state, prop, &length);
- of_node_put(state);
-
- if (!pin_prop) {
- pr_warn(TAG "no property \"%s\" for %s[%d]\n",
- prop, node->name, i);
- continue;
- }
-
- length /= sizeof(__be32);
- if (index >= j && index < j + length)
- return ((__be32 *)pin_prop->value) + index - j;
-
- j += length;
- }
-
- return NULL;
-}
-
-/*
- * Set this pin to GPIO. If a device owns this pin, unregister it and register
- * new pin devices for each of its pins.
- */
-static inline int set_function_gpio(struct pin_device *dev)
-{
- struct device *pdev;
- struct bcm_device *bdev;
- int ret;
-
- if (dev->device) {
- dev_dbg(&dev->device->dev, "already set to gpio\n");
- return 0;
- }
-
- if (!(pdev = find_active_device_by_pin(dev->pin, &bdev))) {
- /*
- * Nothing is using this pin, so go ahead and register a pin
- * device for it. We should never get here.
- */
- pr_warn(TAG "no device is using pin %d\n", dev->pin);
- return register_pin_device(dev);
- }
-
- /* Unregister the platform device using this pin. */
- unregister_device_and_aux(pdev, bdev);
-
- dev_dbg(pdev, "unregistered to free pin %d\n", dev->pin);
-
- /*
- * Register pin devices for pins previously used by the platform
- * device.
- */
- ret = device_for_each_pin(bdev->node.of_node, register_pin_device);
- if (ret)
- return ret;
-
- if (bdev->use_default)
- return register_default_device(bdev);
-
- return 0;
-}
-
-/* Replace a pin used by a device and register a pin device for the old pin. */
-static inline int replace_pin(struct bcm_device *bcm_dev,
- struct pin_group *group, u32 pin)
-{
- struct pin_device *pin_dev;
- int ret;
- __be32 *pin_prop;
- __be32 *function;
- u32 pin_index;
- u32 old_pin;
-
- pin_index = pin - group->base;
- pin_prop = find_pin_property(bcm_dev->node.of_node, PROP_PINS,
- pin_index);
-
- if (!pin_prop) {
- pr_err(TAG "unable to find pin index %d in %s\n",
- pin_index, bcm_dev->name);
- return -EINVAL;
- }
-
- old_pin = be32_to_cpup(pin_prop);
-
- *pin_prop = cpu_to_be32p(&pin);
-
- if (!(pin_dev = get_pin_device_by_pin(old_pin))) {
- /*
- * This happens when we are bringing up a device that
- * was previously using non-user pins.
- */
- } else if ((ret = register_pin_device(pin_dev))) {
- return ret;
- }
-
- /* Set the pin's function property in the device tree. */
- function = find_pin_property(bcm_dev->node.of_node, PROP_FUNC,
- pin_index);
- if (!function) {
- pr_err(TAG "unable to find pin function index %d in %s\n",
- pin_index, bcm_dev->name);
- return -EINVAL;
- }
-
- *function = cpu_to_be32p(&group->function);
-
- return 0;
-}
-
-/*
- * Set a pin's function while holding sysfs_lock. If the device for this
- * function is registered and doesn't own this pin, set the pin we replace to
- * GPIO. Unregister all devices owning pins that this device needs and set the
- * now unused pins to GPIO.
- */
-static inline int __set_function(struct pin_device *dev,
- struct bcm_device *bcm_dev,
- struct pin_group *group)
-{
- struct device *new_dev;
- struct device *exdev;
- struct bcm_device *exbdev;
- struct pin_device *pin_dev;
- size_t i = 0;
- int has_pin;
- int ret = 0;
-
- if (!bcm_dev->node.path)
- return set_function_gpio(dev);
-
- if (!bcm_dev->node.of_node) {
- pr_err(TAG "%s has no device tree node\n", bcm_dev->name);
- return -EINVAL;
- }
-
- new_dev = find_device_by_node(bcm_dev->node.of_node);
- has_pin = device_has_pin(bcm_dev->node.of_node, dev->pin);
- if (new_dev && has_pin) {
- /* The device already owns this pin and is registered. */
- dev_dbg(new_dev, "pin %d is already reserved\n", dev->pin);
- put_device(new_dev);
- return 0;
- }
-
- /* Unregister this device so we can change its pins. */
- unregister_device_and_maybe_aux(new_dev, bcm_dev);
-
- if (bcm_dev->use_default) {
- /*
- * By default, this device uses non-user pins. Rather than mix
- * user and non-user pins, set this device to use all user pins
- * as soon as it is requested.
- */
- if ((ret = set_device_config(bcm_dev, group))) {
- pr_err(TAG "unable to set config for %s\n",
- bcm_dev->name);
- return ret;
- }
- } else if (!has_pin) {
- if ((ret = replace_pin(bcm_dev, group, dev->pin)))
- return ret;
- }
-
- /*
- * Unregister devices using pins we need, and set the set_gpio flag for
- * each newly freed pin.
- */
- if ((ret = device_for_each_pin(bcm_dev->node.of_node,
- unregister_pin_for_device))) {
- pr_err(TAG "unable to unregister pin devices needed by %s\n",
- bcm_dev->name);
- return ret;
- }
-
- /* Unregister each mutually exclusive device. */
- for (i = 0; bcm_dev->excl && (exbdev = bcm_dev->excl[i]); i++) {
- if ((exdev = find_device_by_node(exbdev->node.of_node))) {
- device_for_each_pin(exbdev->node.of_node,
- set_gpio_flag);
-
- unregister_device_and_aux(exdev, exbdev);
-
- if (exbdev->use_default)
- if ((ret = register_default_device(exbdev)))
- return ret;
- }
- }
-
- /* Clear the set_gpio flag for every pin we're using. */
- device_for_each_pin(bcm_dev->node.of_node, clear_gpio_flag);
-
- /* Register a pin device for each pin we're not using. */
- for (i = 0, ret = 0; i < pin_count; i++) {
- if (!(pin_dev = get_pin_device_by_pin(i)))
- continue;
-
- if (pin_dev->set_gpio)
- if ((ret = register_pin_device(pin_dev)))
- return ret;
-
- pin_dev->set_gpio = 0;
- }
-
- /*
- * Register this platform device. If the device was not previously
- * registered, bring the auxiliary device up as well.
- */
- if (new_dev)
- return register_device_and_maybe_aux(bcm_dev);
- else
- return register_device_and_aux(bcm_dev);
-}
-
-int set_function(struct pin_device *dev, struct bcm_device *bcm_dev,
- struct pin_group *group)
-{
- int ret;
-
- mutex_lock(&sysfs_mutex);
- ret = __set_function(dev, bcm_dev, group);
- mutex_unlock(&sysfs_mutex);
-
- return ret;
-}
-
-
-/*
- * Find a matching pin group and return this pin's index in the property
- * list.
- */
-static inline u32 get_pin_property_index(struct bcm_device *dev, u32 pin)
-{
- struct pin_group *groups;
- size_t i;
-
- groups = dev->pin_groups;
- for (i = 0 ; dev->pin_group_count; i++) {
- if (pin_in_group(pin, groups[i].base, dev->pin_count))
- return pin - groups[i].base;
- }
-
- return -1;
-}
-
-/* Set a GPIO pin's pull-up/pull-down resistor configuration. */
-static inline int set_resistor_gpio(struct pin_device *dev, u32 resistor)
-{
- __be32 *pull;
-
- if (!(pull = find_pin_property(dev->of_node, PROP_PULL, 0))) {
- pr_err(TAG "unable to find resistor index 0 in pin %d\n",
- dev->pin);
- return -EINVAL;
- } else if (*pull == cpu_to_be32p(&resistor)) {
- pr_debug(TAG "pin %d is already set to resistor %d\n",
- dev->pin, resistor);
- return 0;
- }
-
- /* Disable the device before changing the resistor. */
- platform_device_unregister(dev->device);
-
- *pull = cpu_to_be32p(&resistor);
-
- return register_pin_device(dev);
-}
-
-/*
- * Set a pin's pull-up/pull-down resistor configuration while holding
- * sysfs_lock.
- */
-static inline int __set_resistor(struct pin_device *dev, u32 resistor)
-{
- struct device *pdev;
- struct bcm_device *bdev;
- int index;
- __be32 *pull;
-
- if (dev->device)
- return set_resistor_gpio(dev, resistor);
-
- if ((pdev = find_active_device_by_pin(dev->pin, &bdev))) {
- index = get_pin_property_index(bdev, dev->pin);
- if (index == -1) {
- dev_err(pdev, "could not get resistor index for pin %d\n",
- dev->pin);
- return -EINVAL;
- }
- } else {
- pr_err(TAG "no device is using pin %d\n", dev->pin);
- put_device(pdev);
- return -ENODEV;
- }
-
- pull = find_pin_property(bdev->node.of_node, PROP_PULL, index);
- if (!pull) {
- dev_err(pdev, "unable to find pin index %d\n", index);
- put_device(pdev);
- return -EINVAL;
- } else if (*pull == cpu_to_be32p(&resistor)) {
- dev_dbg(pdev, "pin %d is already set to resistor %d\n",
- dev->pin, resistor);
- put_device(pdev);
- return 0;
- }
-
- /* Disable the device before changing the resistor. */
- unregister_device_and_maybe_aux(pdev, bdev);
-
- *pull = cpu_to_be32p(&resistor);
-
- return register_device_and_maybe_aux(bdev);
-}
-
-int set_resistor(struct pin_device *dev, u32 resistor)
-{
- int ret;
-
- mutex_lock(&sysfs_mutex);
- ret = __set_resistor(dev, resistor);
- mutex_unlock(&sysfs_mutex);
-
- return ret;
-}
diff --git a/drivers/pinctrl/android-things/devicetree.c b/drivers/pinctrl/android-things/devicetree.c
deleted file mode 100644
index 1d75e53..0000000
--- a/drivers/pinctrl/android-things/devicetree.c
+++ /dev/null
@@ -1,305 +0,0 @@
-/*
- * devicetree.c
- *
- * Runtime pin configuration for Raspberry Pi
- *
- * Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/amba/bus.h>
-#include <linux/of.h>
-#include <linux/of_platform.h>
-#include <linux/slab.h>
-
-#include "runtimepinconfig.h"
-#include "platform_devices.h"
-
-/*
- * get_pin() - Get the first pin in a device tree node. Only intended to be
- * called for pin devices.
- *
- * @node: The device tree node to search.
- * @pin: Set to the pin number if it is found.
- *
- * Returns 0 if the pin number is found.
- */
-int get_pin(struct device_node *node, u32 *pin)
-{
- struct device_node *statenode;
- int ret;
-
- if (!(statenode = of_parse_phandle(node, STATE_0, 0)))
- return -EINVAL;
-
- ret = of_property_read_u32(statenode, PROP_PINS, pin);
- of_node_put(statenode);
- return ret;
-}
-
-/*
- * device_has_pin() - Check if a device tree node contains the specified pin.
- *
- * @node: The device tree node to search.
- * @pin: The pin number to look for.
- *
- * Returns 1 if the device contains the pin, 0 otherwise.
- */
-int device_has_pin(struct device_node *node, u32 pin)
-{
- struct device_node *pinstate;
- int i = 0;
- u32 prop_pin;
- u32 j = 0;
-
- while ((pinstate = of_parse_phandle(node, STATE_0, i++))) {
- j = 0;
-
- while (!of_property_read_u32_index(pinstate, PROP_PINS, j++,
- &prop_pin)) {
- if (prop_pin == pin)
- return 1;
- }
-
- of_node_put(pinstate);
- }
-
- return 0;
-}
-
-/*
- * Creates a struct property with name prop and the specified length in
- * bytes.
- */
-static inline struct property *create_property(const char *prop, int length)
-{
- struct property *ret;
-
- if (!(ret = kzalloc(sizeof(*ret), GFP_KERNEL)))
- goto err_alloc_struct;
- if (!(ret->name = kstrdup(prop, GFP_KERNEL)))
- goto err_alloc_name;
- if (!(ret->value = kzalloc(length, GFP_KERNEL)))
- goto err_alloc_value;
-
- ret->length = length;
- of_property_set_flag(ret, OF_DYNAMIC);
- return ret;
-
-err_alloc_value:
- kfree(ret->name);
-err_alloc_name:
- kfree(ret);
-err_alloc_struct:
- pr_err(TAG "kmalloc failed %s:%d\n", __FILE__, __LINE__);
- return NULL;
-}
-
-static inline void fill_value(__be32 *array, u32 value, int length)
-{
- int size = length / sizeof(*array);
- int i;
-
- for (i = 0; i < size; i++)
- *array++ = cpu_to_be32p(&value);
-}
-
-/*
- * expand_property() - Checks a device's pinctrl properties, and expands them to
- * the proper length if necessary. For example, the resistor property may
- * specify one value for all pins, one value for each pin, or no value. We want
- * to have one value for each pin so we can configure pins individually, no
- * matter which device is using them.
- *
- * @dev: The device to check.
- * @prop_name: The string name of the property to check.
- * @value: The new value to fill an expanded property with.
- *
- * Returns 0 on success.
- */
-int expand_property(struct bcm_device *dev, const char *prop_name, u32 value)
-{
- struct of_changeset changeset;
- struct device_node *prop;
- struct property *pins, *pull;
- struct property *new_pull;
- int pins_length, pull_length;
- int total_pins = 0;
- int ret;
- unsigned long action;
- int i = 0;
-
- of_changeset_init(&changeset);
-
- while ((prop = of_parse_phandle(dev->node.of_node, STATE_0, i++))) {
- pins = of_find_property(prop, PROP_PINS, &pins_length);
- pull = of_find_property(prop, prop_name, &pull_length);
-
- if (!pins) {
- pr_warn(TAG "%s[%d] has no %s property\n", dev->name,
- i - 1, PROP_PINS);
- of_node_put(prop);
- continue;
- }
-
- total_pins += (pins_length / sizeof(value));
- if (total_pins > dev->pin_count) {
- pr_warn(TAG "%s stopping at %d pins out of %d\n",
- dev->name, dev->pin_count, total_pins);
- of_node_put(prop);
- break;
- }
-
- if (pull && pull_length >= pins_length) {
- /*
- * The property already exists and is of the correct
- * length.
- */
- of_node_put(prop);
- continue;
- }
-
- new_pull = create_property(prop_name, pins_length);
- if (!new_pull)
- goto err_prop;
-
- fill_value(new_pull->value, value, pins_length);
-
- action = (!pull) ? OF_RECONFIG_ADD_PROPERTY
- : OF_RECONFIG_UPDATE_PROPERTY;
-
- if (of_changeset_action(&changeset, action, prop, new_pull)) {
- pr_err(TAG "unable to update %s property for %s[%d]\n",
- prop_name, dev->name, i - 1);
- goto err_apply;
- } else {
- pr_debug(TAG "updated %s property for %s[%d]\n",
- prop_name, dev->name, i - 1);
- }
-
- of_node_put(prop);
- }
-
- if ((ret = of_changeset_apply(&changeset)))
- pr_err(TAG "unable to apply changeset for %s\n", dev->name);
-
- of_changeset_destroy(&changeset);
-
- return ret;
-
-err_apply:
- kfree(new_pull);
-err_prop:
- of_changeset_destroy(&changeset);
- of_node_put(prop);
- return -ENOMEM;
-}
-
-static int node_match_device(struct device *dev, void *data)
-{
- return (dev->of_node == data);
-}
-
-/*
- * find_device_by_node() - Finds a device from its device tree node. The caller
- * must call put_device on the returned device.
- *
- * Returns the device or NULL if no such device is registered.
- */
-struct device *find_device_by_node(struct device_node *node)
-{
- struct platform_device *pdev;
-
- if ((pdev = of_find_device_by_node(node)))
- return &pdev->dev;
- else
- return bus_find_device(&amba_bustype, NULL, node,
- node_match_device);
-}
-
-/*
- * set_device_config() - Sets a device's device tree configuration.
- *
- * @dev: The device to change.
- * @group: The new pin group to set, or NULL for the default.
- *
- * Returns 0 on success.
- */
-int set_device_config(struct bcm_device *dev, struct pin_group *group)
-{
- struct device_node *prop;
- struct property *pins, *function, *pull;
- __be32 *list;
- __be32 *flist = NULL;
- __be32 *plist = NULL;
- u32 pin;
- int length;
- int i, j, total;
-
- if (!group)
- group = &dev->pin_groups[0];
-
- i = 0;
- j = 0;
- while ((prop = of_parse_phandle(dev->node.of_node, STATE_0, i++))) {
- pins = of_find_property(prop, PROP_PINS, &length);
- function = of_find_property(prop, PROP_FUNC, NULL);
- pull = of_find_property(prop, PROP_PULL, NULL);
-
- if (!pins || !function || !pull) {
- pr_warn(TAG "%s[%d] is missing a property\n",
- dev->name, i - 1);
- of_node_put(prop);
- continue;
- }
-
- if (function->length != length || pull->length != length) {
- pr_err(TAG "%s[%d] property size mismatch\n",
- dev->name, i - 1);
- goto err_prop;
- }
-
- list = pins->value;
- flist = function->value;
- plist = pull->value;
-
- length /= sizeof(*list);
- total = j + length;
-
- if (total > dev->pin_count) {
- pr_warn(TAG "%s has %d pins, more than the %d we know about\n",
- dev->name, total, dev->pin_count);
- }
-
- for ( ; j < total && j < dev->pin_count; j++) {
- pin = group->base + j;
- *list++ = cpu_to_be32p(&pin);
- *flist++ = cpu_to_be32p(&group->function);
- *plist++ = cpu_to_be32p(&dev->pin_pull[j]);
- }
-
- of_node_put(prop);
-
- if (total > dev->pin_count) {
- pr_warn(TAG "%s has %d pins, more than the %d we know about\n",
- dev->name, total, dev->pin_count);
- break;
- }
-
- j = total;
- }
-
- return 0;
-
-err_prop:
- of_node_put(prop);
- return -EINVAL;
-}
diff --git a/drivers/pinctrl/android-things/main.c b/drivers/pinctrl/android-things/main.c
deleted file mode 100644
index ad2813a..0000000
--- a/drivers/pinctrl/android-things/main.c
+++ /dev/null
@@ -1,103 +0,0 @@
-/*
- * main.c
- *
- * Runtime pin configuration for Raspberry Pi
- *
- * Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/kernel.h>
-#include <linux/kobject.h>
-#include <linux/module.h>
-#include <linux/of.h>
-#include <linux/platform_device.h>
-
-#include "runtimepinconfig.h"
-
-MODULE_LICENSE("GPL v2");
-
-static struct platform_driver pin_driver;
-
-DEVICE_ATTR_WO(function);
-DEVICE_ATTR_WO(resistor);
-
-static struct attribute *pinctrl_attrs[] = {
- &dev_attr_function.attr,
- &dev_attr_resistor.attr,
- NULL
-};
-ATTRIBUTE_GROUPS(pinctrl);
-
-static struct class pinctrl_class = {
- .name = "pinctrl",
- .owner = THIS_MODULE,
- .dev_groups = pinctrl_groups
-};
-
-static int pin_probe(struct platform_device *dev)
-{
- struct pin_device *pin_dev;
-
- if ((pin_dev = track_pin_device(dev, &pinctrl_class)))
- dev_set_drvdata(&dev->dev, pin_dev);
-
- return 0;
-}
-
-static int pin_remove(struct platform_device *dev)
-{
- untrack_pin_device(dev_get_drvdata(&dev->dev));
- return 0;
-}
-
-static int __init runtimepinconfig_init(void)
-{
- if (class_register(&pinctrl_class)) {
-		pr_err(TAG "unable to create pinctrl class\n");
- goto err_class;
- }
-
- if (unregister_platform_devices()) {
- pr_err(TAG "unable to unregister platform devices\n");
- goto err_unregister_or_driver;
- }
-
- if (__platform_driver_register(&pin_driver, THIS_MODULE)) {
- pr_err(TAG "unable to register pin driver\n");
- goto err_unregister_or_driver;
- }
-
- pr_debug(TAG "module loaded\n");
- return 0;
-
-err_unregister_or_driver:
- class_unregister(&pinctrl_class);
-err_class:
- return -ECANCELED;
-}
-
-static const struct of_device_id pin_match[] = {
- { .compatible = "google,android-things-pins" },
- { }
-};
-MODULE_DEVICE_TABLE(of, pin_match);
-
-static struct platform_driver pin_driver = {
- .probe = pin_probe,
- .remove = pin_remove,
- .driver = {
- .name = "android-things-pins",
- .of_match_table = pin_match
- }
-};
-
-module_init(runtimepinconfig_init);
diff --git a/drivers/pinctrl/android-things/platform_devices.c b/drivers/pinctrl/android-things/platform_devices.c
deleted file mode 100644
index 97a6c0b..0000000
--- a/drivers/pinctrl/android-things/platform_devices.c
+++ /dev/null
@@ -1,174 +0,0 @@
-/*
- * platform_devices.c
- *
- * Runtime pin configuration for Raspberry Pi
- *
- * Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include "runtimepinconfig.h"
-#include "platform_devices.h"
-
-/*
- * platform_devices - Specifies devices we want to control pinmuxing for.
- *
- * The device tree tells us which pins are used by which devices by default, but
- * it doesn't tell us which other pins can be used by those devices. It also
- * doesn't tell us which devices are mutually exclusive, or which devices these
- * devices depend on.
- */
-
-const int pin_count = 28;
-
-struct bcm_device platform_devices[] = {
- {
- .name = "SPI0",
- .node = { .path = "/soc/spi@7e204000" },
- .aux_dev = { .path = NULL },
- .use_default = 0,
- .always_unreg_aux = 0,
- .pin_count = 5,
- .pin_pull = (u32 []) { NONE, NONE, NONE, NONE, NONE },
- .pin_group_count = 1,
- .pin_groups = (struct pin_group []) {
- {
- .base = 7,
- .function = ALT0
- }
- },
- .excl = NULL
- }, {
- .name = "PWM",
- .node = { .path = "/soc/pwm@7e20c000" },
- .aux_dev = { .path = "/soc/cprman@7e101000" },
- .use_default = 0,
- .always_unreg_aux = 0,
- .pin_count = 2,
- .pin_pull = (u32 []) { NONE, NONE, },
- .pin_group_count = 2,
- .pin_groups = (struct pin_group []) {
- {
- .base = 12,
- .function = ALT0
- }, {
- .base = 18,
- .function = ALT5
- }
- },
- .excl = (struct bcm_device *[]) {
- &platform_devices[5],
- NULL
- }
- }, {
- .name = "I2C1",
- .node = { .path = "/soc/i2c@7e804000" },
- .aux_dev = { .path = NULL },
- .use_default = 0,
- .always_unreg_aux = 0,
- .pin_count = 2,
- .pin_pull = (u32 []) { NONE, NONE },
- .pin_group_count = 1,
- .pin_groups = (struct pin_group []) {
- {
- .base = 2,
- .function = ALT0
- }
- },
- .excl = NULL
- }, {
- .name = "UART0",
- .node = { .path = "/soc/uart@7e201000" },
- .aux_dev = { .path = NULL },
- .use_default = 1,
- .always_unreg_aux = 0,
- .pin_count = 2,
- .pin_pull = (u32 []) { NONE, UP },
- .pin_group_count = 2,
- .pin_groups = (struct pin_group []) {
- {
- .base = 32,
- .function = ALT3
- }, {
- .base = 14,
- .function = ALT0
- }
- },
- .excl = NULL
- }, {
- .name = "UART1",
- .node = { .path = "/soc/uart@7e215040" },
- .aux_dev = { .path = NULL },
- .use_default = 0,
- .always_unreg_aux = 0,
- .pin_count = 2,
- .pin_pull = (u32 []) { NONE, UP },
- .pin_group_count = 1,
- .pin_groups = (struct pin_group []) {
- {
- .base = 14,
- .function = ALT5
- }
- },
- .excl = NULL
- }, {
- .name = "I2S1",
- .node = { .path = "/soc/i2s@7e203000" },
- .aux_dev = { .path = "/soc/sound" },
- .use_default = 0,
- .always_unreg_aux = 1,
- .pin_count = 4,
- .pin_pull = (u32 []) { NONE, NONE, NONE, NONE },
- .pin_group_count = 1,
- .pin_groups = (struct pin_group []) {
- {
- .base = 18,
- .function = ALT0
- }
- },
- .excl = (struct bcm_device *[]) {
- &platform_devices[1],
- NULL
- }
- }, {
- .name = "GPIO",
- .node = { .path = NULL },
- .aux_dev = { .path = NULL },
- .use_default = 0,
- .always_unreg_aux = 0,
- .pin_count = 26,
- .pin_group_count = 1,
- .pin_groups = (struct pin_group []) {
- {
- .base = 2,
- .function = GPIO
- }
- },
- .excl = NULL
- }, {
- .name = NULL
- }
-};
-
-struct bcm_resistor platform_resistors[] = {
- {
- .name = "NONE",
- .resistor = NONE
- }, {
- .name = "DOWN",
- .resistor = DOWN
- }, {
- .name = "UP",
- .resistor = UP
- }, {
- .name = NULL
- }
-};
diff --git a/drivers/pinctrl/android-things/platform_devices.h b/drivers/pinctrl/android-things/platform_devices.h
deleted file mode 100644
index baea33c..0000000
--- a/drivers/pinctrl/android-things/platform_devices.h
+++ /dev/null
@@ -1,123 +0,0 @@
-/*
- * platform_devices.h
- *
- * Runtime pin configuration for Raspberry Pi
- *
- * Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#ifndef PLATFORM_DEVICES_H_
-#define PLATFORM_DEVICES_H_
-
-#define PROP_PINS "brcm,pins"
-#define PROP_FUNC "brcm,function"
-#define PROP_PULL "brcm,pull"
-
-#define STATE_0 "pinctrl-0"
-
-/*
- * struct node_path - Matches a device tree path to the corresponding node
- * structure. Rather than look up the path every time we need to find a node,
- * save a pointer to the node here.
- */
-struct node_path {
- const char *path;
- struct device_node *of_node;
-};
-
-/* Broadcom pin function numbers. See drivers/pinctrl/bcm/pinctrl-bcm2835.c */
-enum bcm_fsel {
- GPIO = 0,
- ALT0 = 4,
- ALT1 = 5,
- ALT2 = 6,
- ALT3 = 7,
- ALT4 = 3,
- ALT5 = 2
-};
-
-/*
- * struct pin_group - Defines the starting pin and hardware-specific function
- * number for a group of pins used by a peripheral. We assume that each group
- * has the same function number and contains bcm_device.pin_count consecutive
- * pins.
- */
-struct pin_group {
- u32 base;
- enum bcm_fsel function;
-};
-
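As the comment above notes, each group is assumed to hold bcm_device.pin_count consecutive pins sharing one function number. A minimal userspace sketch of that membership rule (it mirrors pin_in_group() in runtimepinconfig.h; the struct here is an illustrative copy, not the kernel build):

```c
#include <assert.h>

/* Illustrative copy of the kernel struct above. */
struct pin_group {
	unsigned int base;
	unsigned int function;
};

/* A group covers the consecutive pins base .. base + pin_count - 1,
 * all muxed to the same hardware function number. */
static int group_contains_pin(const struct pin_group *g, int pin_count,
			      unsigned int pin)
{
	return pin >= g->base && pin < g->base + (unsigned int)pin_count;
}
```

For example, SPI0's single group (base 7, pin_count 5) covers pins 7..11.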
-/*
- * struct bcm_device - Contains details about relevant peripherals on the chip.
- *
- * @name: A string used to look up this device. Used to assign a pin to a
- * device through the sysfs interface.
- * @node: A node_path containing the device tree node for this device.
- * @aux_dev: A node_path containing the device tree node for this device's
- * auxiliary device. The auxiliary device may depend on this
- * device, or this device may depend on it.
- * @use_default: Whether or not we should register this device on
- * a default pin_group if nobody is using it. For example,
- * uart0 on the Raspberry Pi 3 is also used for Bluetooth,
- * so we need to register it again when the user stops
- * using it directly. pin_groups[0] is the default
- * pin_group.
- * @always_unreg_aux: Whether or not we should unregister the auxiliary device
- * whenever we unregister this device. This flag also
- * controls the order in which the devices get
- * registered/unregistered.
- * @pin_count: The number of pins used by this peripheral.
- * @pin_pull: Specifies default resistor values for this device. Only used
- * with use_default.
- * @pin_group_count: The number of pin_groups available to this device and
- * the length of the following array.
- * @pin_groups: Array of pin_groups for this device.
- * @excl: NULL-terminated array of bcm_devices that are mutually exclusive
- * with this device. Whenever we register this device, we must
- * first unregister every device in this array. For example, i2s
- * and pwm are mutually exclusive despite the fact that they can
- * use non-overlapping groups of pins.
- */
-struct bcm_device {
- const char *name;
- struct node_path node;
- struct node_path aux_dev;
- int use_default:1;
- int always_unreg_aux:1;
- int pin_count;
- u32 *pin_pull;
- int pin_group_count;
- struct pin_group *pin_groups;
- struct bcm_device **excl;
-};
-
-/* Broadcom pin resistor numbers. */
-enum bcm_rsel {
- NONE = 0,
- DOWN = 1,
- UP = 2
-};
-
-/*
- * struct bcm_resistor - Matches a string description to a hardware-specific
- * resistor number. Used to set the resistor through the sysfs interface.
- */
-struct bcm_resistor {
- const char *name;
- enum bcm_rsel resistor;
-};
-
-extern struct bcm_device platform_devices[];
-extern struct bcm_resistor platform_resistors[];
-extern const int pin_count;
-
-#endif /* PLATFORM_DEVICES_H_ */
diff --git a/drivers/pinctrl/android-things/runtimepinconfig.h b/drivers/pinctrl/android-things/runtimepinconfig.h
deleted file mode 100644
index b165436..0000000
--- a/drivers/pinctrl/android-things/runtimepinconfig.h
+++ /dev/null
@@ -1,79 +0,0 @@
-/*
- * runtimepinconfig.h
- *
- * Runtime pin configuration for Raspberry Pi
- *
- * Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#ifndef RUNTIMEPINCONFIG_H_
-#define RUNTIMEPINCONFIG_H_
-
-#include <linux/device.h>
-#include <linux/of.h>
-#include <linux/pinctrl/consumer.h>
-#include <linux/platform_device.h>
-
-#define TAG "runtimepinconfig: "
-
-struct pin_group;
-struct bcm_device;
-
-/*
- * struct pin_device - Holds information about a pin device that has been
- * registered with our driver.
- *
- * @pin: The physical pin number for this device.
- * @set_gpio: Set by the driver to determine which pin devices to register
- *		or unregister.
- * @device: The platform_device that represents this pin. We set this to
- * NULL when the pin device gets unregistered.
- * @of_node: The device tree node for this device. We save a pointer to it
- * here in case the pin device gets unregistered.
- * @char_device: The character device used to create the sysfs files for
- * this pin.
- */
-struct pin_device {
- u32 pin;
- int set_gpio:1;
- struct platform_device *device;
- struct device_node *of_node;
- struct device *char_device;
-};
-
-ssize_t function_store(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t buflen);
-ssize_t resistor_store(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t buflen);
-
-int set_function(struct pin_device *dev, struct bcm_device *bcm_dev,
- struct pin_group *group);
-int set_resistor(struct pin_device *dev, u32 resistor);
-
-struct pin_device *track_pin_device(struct platform_device *dev,
- struct class *class);
-void untrack_pin_device(struct pin_device *dev);
-
-int unregister_platform_devices(void);
-
-int get_pin(struct device_node *node, u32 *pin);
-int device_has_pin(struct device_node *node, u32 pin);
-int expand_property(struct bcm_device *dev, const char *prop_name, u32 value);
-struct device *find_device_by_node(struct device_node *node);
-int set_device_config(struct bcm_device *dev, struct pin_group *group);
-
-static inline int pin_in_group(int pin, int base, int count)
-{
- return (pin >= base && pin < base + count);
-}
-
-#endif /* RUNTIMEPINCONFIG_H_ */
diff --git a/drivers/pinctrl/android-things/sysfs.c b/drivers/pinctrl/android-things/sysfs.c
deleted file mode 100644
index eb26448..0000000
--- a/drivers/pinctrl/android-things/sysfs.c
+++ /dev/null
@@ -1,102 +0,0 @@
-/*
- * sysfs.c
- *
- * Runtime pin configuration for Raspberry Pi
- *
- * Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/device.h>
-#include <linux/of.h>
-
-#include "runtimepinconfig.h"
-#include "platform_devices.h"
-
-/*
- * These functions are the sysfs interface. Parse the user input and match it to
- * a peripheral device or resistor value. Use the pin number to determine the
- * pin_group, if necessary.
- */
-ssize_t function_store(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t bufsize)
-{
- struct pin_device *pin_dev = dev_get_drvdata(dev);
- struct bcm_device *bcm_dev = NULL;
- size_t i, namelen, inlen;
- struct pin_group *group;
- int ret;
-
-	for (inlen = 0; inlen < bufsize && buf[inlen] != '\n'; inlen++)
- ;
-
- /* Match the input buffer to a platform device name. */
- for (i = 0; platform_devices[i].name != NULL; i++) {
- namelen = strlen(platform_devices[i].name);
- if (inlen != namelen)
- continue;
-
- if (!strncmp(platform_devices[i].name, buf, namelen)) {
- bcm_dev = &platform_devices[i];
- break;
- }
- }
-
- if (!bcm_dev) {
- pr_warn(TAG "no matching platform device found on pin %d\n",
- pin_dev->pin);
- return -ENODEV;
- }
-
- group = bcm_dev->pin_groups;
-
- /* Match the pin number to a pin group available to the device. */
- for (i = 0; i < bcm_dev->pin_group_count; i++) {
- if (pin_in_group(pin_dev->pin, group[i].base,
- bcm_dev->pin_count)) {
- if ((ret = set_function(pin_dev, bcm_dev, &group[i])))
- pr_err(TAG "set function %s failed on pin %d\n",
- bcm_dev->name, pin_dev->pin);
- return (ret) ? ret : bufsize;
- }
- }
-
- pr_warn(TAG "no matching pin group for function %s on pin %d\n",
- bcm_dev->name, pin_dev->pin);
- return -EINVAL;
-}
-
-ssize_t resistor_store(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t bufsize)
-{
- struct pin_device *pin_dev = dev_get_drvdata(dev);
- int ret;
- size_t i, namelen, inlen;
-
-	for (inlen = 0; inlen < bufsize && buf[inlen] != '\n'; inlen++)
- ;
-
- for (i = 0; platform_resistors[i].name; i++) {
- namelen = strlen(platform_resistors[i].name);
- if (inlen != namelen)
- continue;
-
- if (!strncmp(platform_resistors[i].name, buf, namelen)) {
- ret = set_resistor(pin_dev,
- platform_resistors[i].resistor);
- return (ret) ? ret : bufsize;
- }
- }
-
- pr_warn(TAG "no matching resistor configuration found for pin %d\n",
- pin_dev->pin);
- return -EINVAL;
-}
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index a7d2319..fedc909 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -772,6 +772,9 @@
/* Plane rotation */
unsigned int rotation;
+ /* Plane blending */
+ unsigned int alpha;
+
struct drm_atomic_state *state;
};
@@ -1102,6 +1105,7 @@
struct drm_property *tile_property;
struct drm_property *plane_type_property;
struct drm_property *rotation_property;
+ struct drm_property *alpha_property;
struct drm_property *prop_src_x;
struct drm_property *prop_src_y;
struct drm_property *prop_src_w;
@@ -1496,6 +1500,8 @@
extern const char *drm_get_format_name(uint32_t format);
extern struct drm_property *drm_mode_create_rotation_property(struct drm_device *dev,
unsigned int supported_rotations);
+extern struct drm_property *drm_mode_create_alpha_property(struct drm_device *dev,
+ unsigned int max);
extern unsigned int drm_rotation_simplify(unsigned int rotation,
unsigned int supported_rotations);
diff --git a/include/linux/android_aid.h b/include/linux/android_aid.h
new file mode 100644
index 0000000..3d7a5ea
--- /dev/null
+++ b/include/linux/android_aid.h
@@ -0,0 +1,26 @@
+/* include/linux/android_aid.h
+ *
+ * Copyright (C) 2008 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _LINUX_ANDROID_AID_H
+#define _LINUX_ANDROID_AID_H
+
+/* AIDs that the kernel treats differently */
+#define AID_OBSOLETE_000 KGIDT_INIT(3001) /* was NET_BT_ADMIN */
+#define AID_OBSOLETE_001 KGIDT_INIT(3002) /* was NET_BT */
+#define AID_INET KGIDT_INIT(3003)
+#define AID_NET_RAW KGIDT_INIT(3004)
+#define AID_NET_ADMIN KGIDT_INIT(3005)
+
+#endif
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 1a96fda..e133705 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -26,6 +26,10 @@
SUBSYS(cpuacct)
#endif
+#if IS_ENABLED(CONFIG_CGROUP_SCHEDTUNE)
+SUBSYS(schedtune)
+#endif
+
#if IS_ENABLED(CONFIG_BLK_CGROUP)
SUBSYS(io)
#endif
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index c9e4731..4479e48 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -77,6 +77,22 @@
extern unsigned int sysctl_sched_cfs_bandwidth_slice;
#endif
+#ifdef CONFIG_SCHED_TUNE
+extern unsigned int sysctl_sched_cfs_boost;
+int sysctl_sched_cfs_boost_handler(struct ctl_table *table, int write,
+ void __user *buffer, size_t *length,
+ loff_t *ppos);
+static inline unsigned int get_sysctl_sched_cfs_boost(void)
+{
+ return sysctl_sched_cfs_boost;
+}
+#else
+static inline unsigned int get_sysctl_sched_cfs_boost(void)
+{
+ return 0;
+}
+#endif
+
#ifdef CONFIG_SCHED_AUTOGROUP
extern unsigned int sysctl_sched_autogroup_enabled;
#endif
diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
index 3801584..ad8223e 100644
--- a/include/uapi/drm/drm.h
+++ b/include/uapi/drm/drm.h
@@ -668,6 +668,7 @@
__u64 value;
};
+#define DRM_RDWR O_RDWR
#define DRM_CLOEXEC O_CLOEXEC
struct drm_prime_handle {
__u32 handle;
diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
index 41420e3..51f891f 100644
--- a/include/uapi/linux/android/binder.h
+++ b/include/uapi/linux/android/binder.h
@@ -33,6 +33,8 @@
BINDER_TYPE_HANDLE = B_PACK_CHARS('s', 'h', '*', B_TYPE_LARGE),
BINDER_TYPE_WEAK_HANDLE = B_PACK_CHARS('w', 'h', '*', B_TYPE_LARGE),
BINDER_TYPE_FD = B_PACK_CHARS('f', 'd', '*', B_TYPE_LARGE),
+ BINDER_TYPE_FDA = B_PACK_CHARS('f', 'd', 'a', B_TYPE_LARGE),
+ BINDER_TYPE_PTR = B_PACK_CHARS('p', 't', '*', B_TYPE_LARGE),
};
enum {
@@ -48,6 +50,14 @@
typedef __u64 binder_uintptr_t;
#endif
+/**
+ * struct binder_object_header - header shared by all binder metadata objects.
+ * @type: type of the object
+ */
+struct binder_object_header {
+ __u32 type;
+};
+
/*
* This is the flattened representation of a Binder object for transfer
* between processes. The 'offsets' supplied as part of a binder transaction
@@ -56,9 +66,8 @@
* between processes.
*/
struct flat_binder_object {
- /* 8 bytes for large_flat_header. */
- __u32 type;
- __u32 flags;
+ struct binder_object_header hdr;
+ __u32 flags;
/* 8 bytes of data. */
union {
@@ -70,6 +79,84 @@
binder_uintptr_t cookie;
};
+/**
+ * struct binder_fd_object - describes a file descriptor to be fixed up.
+ * @hdr: common header structure
+ * @pad_flags: padding to remain compatible with old userspace code
+ * @pad_binder: padding to remain compatible with old userspace code
+ * @fd: file descriptor
+ * @cookie: opaque data, used by user-space
+ */
+struct binder_fd_object {
+ struct binder_object_header hdr;
+ __u32 pad_flags;
+ union {
+ binder_uintptr_t pad_binder;
+ __u32 fd;
+ };
+
+ binder_uintptr_t cookie;
+};
+
+/* struct binder_buffer_object - object describing a userspace buffer
+ * @hdr: common header structure
+ * @flags: one or more BINDER_BUFFER_* flags
+ * @buffer: address of the buffer
+ * @length: length of the buffer
+ * @parent: index in offset array pointing to parent buffer
+ * @parent_offset: offset in @parent pointing to this buffer
+ *
+ * A binder_buffer object represents an object that the
+ * binder kernel driver can copy verbatim to the target
+ * address space. A buffer itself may be pointed to from
+ * within another buffer, meaning that the pointer inside
+ * that other buffer needs to be fixed up as well. This
+ * can be done by setting the BINDER_BUFFER_FLAG_HAS_PARENT
+ * flag in @flags, by setting @parent buffer to the index
+ * in the offset array pointing to the parent binder_buffer_object,
+ * and by setting @parent_offset to the offset in the parent buffer
+ * at which the pointer to this buffer is located.
+ */
+struct binder_buffer_object {
+ struct binder_object_header hdr;
+ __u32 flags;
+ binder_uintptr_t buffer;
+ binder_size_t length;
+ binder_size_t parent;
+ binder_size_t parent_offset;
+};
+
+enum {
+ BINDER_BUFFER_FLAG_HAS_PARENT = 0x01,
+};
+
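The parent/parent_offset fixup described above can be illustrated in plain userspace C. This is a sketch of the idea only; fixup_parent_pointer() is a hypothetical name, not an actual driver helper:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t binder_uintptr_t;
typedef uint64_t binder_size_t;

/* After the kernel copies a parent buffer into the target address
 * space, the pointer embedded at parent_offset inside that copy must
 * be rewritten to the child buffer's address in the target space. */
static void fixup_parent_pointer(uint8_t *copied_parent,
				 binder_size_t parent_offset,
				 binder_uintptr_t new_child_addr)
{
	memcpy(copied_parent + parent_offset, &new_child_addr,
	       sizeof(new_child_addr));
}
```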
+/* struct binder_fd_array_object - object describing an array of fds in a buffer
+ * @hdr: common header structure
+ * @num_fds: number of file descriptors in the buffer
+ * @parent: index in offset array to buffer holding the fd array
+ * @parent_offset: start offset of fd array in the buffer
+ *
+ * A binder_fd_array object represents an array of file
+ * descriptors embedded in a binder_buffer_object. It is
+ * different from a regular binder_buffer_object because it
+ * describes a list of file descriptors to fix up, not an opaque
+ * blob of memory, and hence the kernel needs to treat it differently.
+ *
+ * An example of how this would be used is with Android's
+ * native_handle_t object, which is a struct with a list of integers
+ * and a list of file descriptors. The native_handle_t struct itself
+ * will be represented by a struct binder_buffer_object, whereas the
+ * embedded list of file descriptors is represented by a
+ * struct binder_fd_array_object with that binder_buffer_object as
+ * a parent.
+ */
+struct binder_fd_array_object {
+ struct binder_object_header hdr;
+ binder_size_t num_fds;
+ binder_size_t parent;
+ binder_size_t parent_offset;
+};
+
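The native_handle_t example above can be made concrete. The structs below are illustrative mirrors of the uapi definitions, and the offsets-array index passed in for @parent is an assumption for the sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t binder_size_t;

/* Illustrative mirrors of the uapi structs above. */
struct binder_object_header { uint32_t type; };

struct binder_fd_array_object {
	struct binder_object_header hdr;
	binder_size_t num_fds;
	binder_size_t parent;
	binder_size_t parent_offset;
};

/* A native_handle_t-like payload: a couple of ints followed by fds. */
struct my_handle {
	int32_t num_ints;
	int32_t num_fds;
	int32_t fds[2];
};

/* The struct body travels as a binder_buffer_object; this object
 * points back at it (parent = its index in the offsets array) and
 * locates the embedded fd list via parent_offset. */
static struct binder_fd_array_object describe_fds(binder_size_t buf_index)
{
	struct binder_fd_array_object fda = {
		.hdr = { .type = 0 /* stands in for BINDER_TYPE_FDA */ },
		.num_fds = 2,
		.parent = buf_index,
		.parent_offset = offsetof(struct my_handle, fds),
	};
	return fda;
}
```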
/*
* On 64-bit platforms where user code may run in 32-bits the driver must
* translate the buffer (and local binder) addresses appropriately.
@@ -162,6 +249,11 @@
} data;
};
+struct binder_transaction_data_sg {
+ struct binder_transaction_data transaction_data;
+ binder_size_t buffers_size;
+};
+
struct binder_ptr_cookie {
binder_uintptr_t ptr;
binder_uintptr_t cookie;
@@ -346,6 +438,12 @@
/*
* void *: cookie
*/
+
+ BC_TRANSACTION_SG = _IOW('c', 17, struct binder_transaction_data_sg),
+ BC_REPLY_SG = _IOW('c', 18, struct binder_transaction_data_sg),
+ /*
+ * binder_transaction_data_sg: the sent command.
+ */
};
#endif /* _UAPI_LINUX_BINDER_H */
diff --git a/init/Kconfig b/init/Kconfig
index 235c7a2..5d9097e 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -999,6 +999,23 @@
Provides a simple Resource Controller for monitoring the
total CPU consumed by the tasks in a cgroup.
+config CGROUP_SCHEDTUNE
+ bool "CFS tasks boosting cgroup subsystem (EXPERIMENTAL)"
+ depends on SCHED_TUNE
+ help
+	  This option provides the "schedtune" controller, which improves the
+	  flexibility of the task boosting mechanism by adding support for
+	  "per task" boost values.
+
+	  This new controller:
+	  1. allows only a two-layer hierarchy, where the root defines the
+	     system-wide boost value and each of its direct children defines
+	     a different "class of tasks" to be boosted with a different value
+	  2. supports up to 16 different task classes, each of which can be
+	     configured with a different boost value
+
+ Say N if unsure.
+
config PAGE_COUNTER
bool
@@ -1237,6 +1254,32 @@
desktop applications. Task group autogeneration is currently based
upon task session.
+config SCHED_TUNE
+ bool "Boosting for CFS tasks (EXPERIMENTAL)"
+ help
+	  This option enables system-wide support for task boosting.
+	  When this support is enabled a new sysctl interface is exposed to
+	  userspace via:
+	     /proc/sys/kernel/sched_cfs_boost
+	  which allows setting a system-wide boost value in the range [0..100].
+
+	  The current boosting strategy is implemented in such a way that:
+	  - a 0% boost value operates in "standard" mode, scheduling all
+	    tasks at the minimum capacities required by their workload
+	    demand
+	  - a 100% boost value pushes task performance to the maximum,
+	    "regardless" of the incurred energy consumption
+
+	  A boost value between these two boundaries biases the
+	  power/performance trade-off: the higher the boost value, the more
+	  the scheduler favors performance over energy efficiency.
+
+ Since this support exposes a single system-wide knob, the specified
+ boost value is applied to all (CFS) tasks in the system.
+
+ If unsure, say N.
+
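The 0%..100% semantics described in the help text above can be read as a linear bias on a task's utilization. The formula below is a sketch under that reading (the precise strategy is documented in Documentation/scheduler/sched-tune.txt, not here; boosted_util() is a hypothetical name):

```c
#include <assert.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Linear boost sketch: a 0% boost leaves the utilization untouched,
 * 100% pushes it to full capacity, and values in between add a
 * proportional share of the spare capacity. */
static unsigned long boosted_util(unsigned long util, unsigned int boost_pct)
{
	unsigned long margin = (SCHED_CAPACITY_SCALE - util) * boost_pct / 100;

	return util + margin;
}
```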
config SYSFS_DEPRECATED
bool "Enable deprecated sysfs features to support old userspace tools"
depends on SYSFS
diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 6768797..1fc4b81 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -18,4 +18,5 @@
obj-$(CONFIG_SCHED_AUTOGROUP) += auto_group.o
obj-$(CONFIG_SCHEDSTATS) += stats.o
obj-$(CONFIG_SCHED_DEBUG) += debug.o
+obj-$(CONFIG_SCHED_TUNE) += tune.o
obj-$(CONFIG_CGROUP_CPUACCT) += cpuacct.o
diff --git a/kernel/sched/tune.c b/kernel/sched/tune.c
new file mode 100644
index 0000000..95bc8b8
--- /dev/null
+++ b/kernel/sched/tune.c
@@ -0,0 +1,239 @@
+#include <linux/cgroup.h>
+#include <linux/err.h>
+#include <linux/percpu.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+
+#include "sched.h"
+
+unsigned int sysctl_sched_cfs_boost __read_mostly;
+
+#ifdef CONFIG_CGROUP_SCHEDTUNE
+
+/*
+ * EAS scheduler tunables for task groups.
+ */
+
+/* SchedTune tunables for a group of tasks */
+struct schedtune {
+ /* SchedTune CGroup subsystem */
+ struct cgroup_subsys_state css;
+
+ /* Boost group allocated ID */
+ int idx;
+
+ /* Boost value for tasks on that SchedTune CGroup */
+ int boost;
+
+};
+
+static inline struct schedtune *css_st(struct cgroup_subsys_state *css)
+{
+ return css ? container_of(css, struct schedtune, css) : NULL;
+}
+
+static inline struct schedtune *task_schedtune(struct task_struct *tsk)
+{
+ return css_st(task_css(tsk, schedtune_cgrp_id));
+}
+
+static inline struct schedtune *parent_st(struct schedtune *st)
+{
+ return css_st(st->css.parent);
+}
+
+/*
+ * SchedTune root control group
+ * The root control group is used to define the system-wide boost tuning,
+ * which is applied to all tasks in the system.
+ * Task-specific boost tuning can be specified by creating and
+ * configuring a child control group under the root one.
+ * By default, system-wide boosting is disabled, i.e. no boosting is applied
+ * to tasks which are not in a child control group.
+ */
+static struct schedtune
+root_schedtune = {
+ .boost = 0,
+};
+
+/*
+ * Maximum number of boost groups to support
+ * When per-task boosting is used we still allow only a limited number of
+ * boost groups, for two main reasons:
+ * 1. on a real system we usually have only a few classes of workloads which
+ * make sense to boost with different values (e.g. background vs foreground
+ * tasks, interactive vs low-priority tasks)
+ * 2. a limited number allows for a simpler and more memory/time efficient
+ * implementation especially for the computation of the per-CPU boost
+ * value
+ */
+#define BOOSTGROUPS_COUNT 4
+
+/* Array of configured boostgroups */
+static struct schedtune *allocated_group[BOOSTGROUPS_COUNT] = {
+ &root_schedtune,
+ NULL,
+};
+
+/* SchedTune boost groups
+ * Keep track of all the boost groups which impact a CPU, for example when
+ * a CPU has two RUNNABLE tasks belonging to two different boost groups and
+ * thus likely with different boost values.
+ * Since on each system we expect only a limited number of boost groups, here
+ * we use a simple array to keep track of the metrics required to compute the
+ * maximum per-CPU boosting value.
+ */
+struct boost_groups {
+ /* Maximum boost value for all RUNNABLE tasks on a CPU */
+ unsigned boost_max;
+ struct {
+ /* The boost for tasks on that boost group */
+ unsigned boost;
+ /* Count of RUNNABLE tasks on that boost group */
+ unsigned tasks;
+ } group[BOOSTGROUPS_COUNT];
+};
+
+/* Boost groups affecting each CPU in the system */
+DEFINE_PER_CPU(struct boost_groups, cpu_boost_groups);
+
+static u64
+boost_read(struct cgroup_subsys_state *css, struct cftype *cft)
+{
+ struct schedtune *st = css_st(css);
+
+ return st->boost;
+}
+
+static int
+boost_write(struct cgroup_subsys_state *css, struct cftype *cft,
+ u64 boost)
+{
+ struct schedtune *st = css_st(css);
+
+ if (boost < 0 || boost > 100)
+ return -EINVAL;
+
+ st->boost = boost;
+ if (css == &root_schedtune.css)
+ sysctl_sched_cfs_boost = boost;
+
+ return 0;
+}
+
+static struct cftype files[] = {
+ {
+ .name = "boost",
+ .read_u64 = boost_read,
+ .write_u64 = boost_write,
+ },
+ { } /* terminate */
+};
+
+static int
+schedtune_boostgroup_init(struct schedtune *st)
+{
+ /* Keep track of allocated boost groups */
+ allocated_group[st->idx] = st;
+
+ return 0;
+}
+
+static int
+schedtune_init(void)
+{
+ struct boost_groups *bg;
+ int cpu;
+
+ /* Initialize the per CPU boost groups */
+ for_each_possible_cpu(cpu) {
+ bg = &per_cpu(cpu_boost_groups, cpu);
+ memset(bg, 0, sizeof(struct boost_groups));
+ }
+
+ pr_info("schedtune: configured to support %d boost groups\n",
+ BOOSTGROUPS_COUNT);
+ return 0;
+}
+
+static struct cgroup_subsys_state *
+schedtune_css_alloc(struct cgroup_subsys_state *parent_css)
+{
+ struct schedtune *st;
+ int idx;
+
+ if (!parent_css) {
+ schedtune_init();
+ return &root_schedtune.css;
+ }
+
+ /* Allow only single-level hierarchies */
+ if (parent_css != &root_schedtune.css) {
+ pr_err("Nested SchedTune boosting groups not allowed\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ /* Allow only a limited number of boosting groups */
+ for (idx = 1; idx < BOOSTGROUPS_COUNT; ++idx)
+ if (!allocated_group[idx])
+ break;
+ if (idx == BOOSTGROUPS_COUNT) {
+ pr_err("Trying to create more than %d SchedTune boosting groups\n",
+ BOOSTGROUPS_COUNT);
+ return ERR_PTR(-ENOSPC);
+ }
+
+ st = kzalloc(sizeof(*st), GFP_KERNEL);
+ if (!st)
+ goto out;
+
+ /* Initialize per-CPU boost group support */
+ st->idx = idx;
+ if (schedtune_boostgroup_init(st))
+ goto release;
+
+ return &st->css;
+
+release:
+ kfree(st);
+out:
+ return ERR_PTR(-ENOMEM);
+}
+
+static void
+schedtune_boostgroup_release(struct schedtune *st)
+{
+ /* Keep track of allocated boost groups */
+ allocated_group[st->idx] = NULL;
+}
+
+static void
+schedtune_css_free(struct cgroup_subsys_state *css)
+{
+ struct schedtune *st = css_st(css);
+
+ schedtune_boostgroup_release(st);
+ kfree(st);
+}
+
+struct cgroup_subsys schedtune_cgrp_subsys = {
+ .css_alloc = schedtune_css_alloc,
+ .css_free = schedtune_css_free,
+ .legacy_cftypes = files,
+ .early_init = 1,
+};
+
+#endif /* CONFIG_CGROUP_SCHEDTUNE */
+
+int
+sysctl_sched_cfs_boost_handler(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos)
+{
+ int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+
+ if (ret || !write)
+ return ret;
+
+ return 0;
+}
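The `boost_groups` comment in tune.c says the per-group `boost` and `tasks` counters exist so that the maximum per-CPU boost can be computed, but this patch stops short of that computation. A hypothetical user-space model of what such an update could look like (`schedtune_cpu_update` is an invented name, not part of the patch):

```c
/* Hypothetical model of recomputing a CPU's boost_max. The struct
 * mirrors the one added by the patch; the update function itself is
 * NOT in the patch and only illustrates why the per-group boost/tasks
 * counters are tracked. */

#define BOOSTGROUPS_COUNT 4

struct boost_groups {
	unsigned int boost_max;           /* max boost of RUNNABLE tasks */
	struct {
		unsigned int boost;       /* boost value of this group */
		unsigned int tasks;       /* RUNNABLE tasks in this group */
	} group[BOOSTGROUPS_COUNT];
};

/* boost_max is the highest boost among groups that have RUNNABLE tasks */
static void schedtune_cpu_update(struct boost_groups *bg)
{
	unsigned int max = 0;
	int idx;

	for (idx = 0; idx < BOOSTGROUPS_COUNT; ++idx) {
		if (bg->group[idx].tasks > 0 && bg->group[idx].boost > max)
			max = bg->group[idx].boost;
	}
	bg->boost_max = max;
}
```

Groups with no RUNNABLE tasks are ignored, so a high-boost group raises a CPU's OPP only while one of its tasks is actually runnable there.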
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 11783ed..46822df 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -435,6 +435,21 @@
.extra1 = &one,
},
#endif
+#ifdef CONFIG_SCHED_TUNE
+ {
+ .procname = "sched_cfs_boost",
+ .data = &sysctl_sched_cfs_boost,
+ .maxlen = sizeof(sysctl_sched_cfs_boost),
+#ifdef CONFIG_CGROUP_SCHEDTUNE
+ .mode = 0444,
+#else
+ .mode = 0644,
+#endif
+ .proc_handler = &sysctl_sched_cfs_boost_handler,
+ .extra1 = &zero,
+ .extra2 = &one_hundred,
+ },
+#endif
#ifdef CONFIG_PROVE_LOCKING
{
.procname = "prove_locking",
diff --git a/net/Kconfig b/net/Kconfig
index 127da94..ce9585c 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -86,6 +86,12 @@
endif # if INET
+config ANDROID_PARANOID_NETWORK
+ bool "Only allow certain groups to create sockets"
+ default y
+ help
+ Restricts creation of certain socket families to processes that are
+ members of specific Android groups (e.g. AID_INET) or that hold the
+ matching capability.
+
config NETWORK_SECMARK
bool "Security Marking"
help
diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
index 70306cc..709ce9f 100644
--- a/net/bluetooth/af_bluetooth.c
+++ b/net/bluetooth/af_bluetooth.c
@@ -106,11 +106,40 @@
}
EXPORT_SYMBOL(bt_sock_unregister);
+#ifdef CONFIG_ANDROID_PARANOID_NETWORK
+static inline int current_has_bt_admin(void)
+{
+ return !current_euid();
+}
+
+static inline int current_has_bt(void)
+{
+ return current_has_bt_admin();
+}
+#else
+static inline int current_has_bt_admin(void)
+{
+ return 1;
+}
+
+static inline int current_has_bt(void)
+{
+ return 1;
+}
+#endif
+
static int bt_sock_create(struct net *net, struct socket *sock, int proto,
int kern)
{
int err;
+ if (proto == BTPROTO_RFCOMM || proto == BTPROTO_SCO ||
+ proto == BTPROTO_L2CAP) {
+ if (!current_has_bt())
+ return -EPERM;
+ } else if (!current_has_bt_admin())
+ return -EPERM;
+
if (net != &init_net)
return -EAFNOSUPPORT;
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 5c5db66..eb12bd0 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -121,6 +121,19 @@
#endif
#include <net/l3mdev.h>
+#ifdef CONFIG_ANDROID_PARANOID_NETWORK
+#include <linux/android_aid.h>
+
+static inline int current_has_network(void)
+{
+ return in_egroup_p(AID_INET) || capable(CAP_NET_RAW);
+}
+#else
+static inline int current_has_network(void)
+{
+ return 1;
+}
+#endif
/* The inetsw table contains everything that inet_create needs to
* build a new socket.
@@ -260,6 +273,9 @@
if (protocol < 0 || protocol >= IPPROTO_MAX)
return -EINVAL;
+ if (!current_has_network())
+ return -EACCES;
+
sock->state = SS_UNCONNECTED;
/* Look for the requested type/protocol pair. */
@@ -308,8 +324,7 @@
}
err = -EPERM;
- if (sock->type == SOCK_RAW && !kern &&
- !ns_capable(net->user_ns, CAP_NET_RAW))
+ if (sock->type == SOCK_RAW && !kern && !capable(CAP_NET_RAW))
goto out_rcu_unlock;
sock->ops = answer->ops;
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index 669639d..d9b25bd 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -64,6 +64,20 @@
#include <asm/uaccess.h>
#include <linux/mroute6.h>
+#ifdef CONFIG_ANDROID_PARANOID_NETWORK
+#include <linux/android_aid.h>
+
+static inline int current_has_network(void)
+{
+ return in_egroup_p(AID_INET) || capable(CAP_NET_RAW);
+}
+#else
+static inline int current_has_network(void)
+{
+ return 1;
+}
+#endif
+
MODULE_AUTHOR("Cast of dozens");
MODULE_DESCRIPTION("IPv6 protocol stack for Linux");
MODULE_LICENSE("GPL");
@@ -112,6 +126,9 @@
if (protocol < 0 || protocol >= IPPROTO_MAX)
return -EINVAL;
+ if (!current_has_network())
+ return -EACCES;
+
/* Look for the requested type/protocol pair. */
lookup_protocol:
err = -ESOCKTNOSUPPORT;
@@ -158,8 +175,7 @@
}
err = -EPERM;
- if (sock->type == SOCK_RAW && !kern &&
- !ns_capable(net->user_ns, CAP_NET_RAW))
+ if (sock->type == SOCK_RAW && !kern && !capable(CAP_NET_RAW))
goto out_rcu_unlock;
sock->ops = answer->ops;
diff --git a/security/commoncap.c b/security/commoncap.c
index 48071ed..364b7ab 100644
--- a/security/commoncap.c
+++ b/security/commoncap.c
@@ -31,6 +31,10 @@
#include <linux/binfmts.h>
#include <linux/personality.h>
+#ifdef CONFIG_ANDROID_PARANOID_NETWORK
+#include <linux/android_aid.h>
+#endif
+
/*
* If a non-root user executes a setuid-root binary in
* !secure(SECURE_NOROOT) mode, then we raise capabilities.
@@ -73,6 +77,11 @@
{
struct user_namespace *ns = targ_ns;
+ if (cap == CAP_NET_RAW && in_egroup_p(AID_NET_RAW))
+ return 0;
+ if (cap == CAP_NET_ADMIN && in_egroup_p(AID_NET_ADMIN))
+ return 0;
+
/* See if cred has the capability in the target user namespace
* by examining the target user namespace and all of the target
* user namespace's parents.
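The cap_capable() hunk above grants CAP_NET_RAW / CAP_NET_ADMIN purely on membership in the corresponding Android group, before the normal capability-set check runs. A simplified stand-alone model of just that decision (the helper name and plain-int flags are illustrative; the kernel code uses `in_egroup_p()` on the current credentials):

```c
/* Simplified model of the group-based capability override added to
 * cap_capable(): membership in the AID_NET_RAW / AID_NET_ADMIN Android
 * groups grants the matching capability outright. */

enum { CAP_NET_ADMIN = 12, CAP_NET_RAW = 13 }; /* linux/capability.h values */

/* Returns 1 if the capability is granted by group membership alone */
static int android_group_grants(int cap, int in_net_raw, int in_net_admin)
{
	if (cap == CAP_NET_RAW && in_net_raw)
		return 1;
	if (cap == CAP_NET_ADMIN && in_net_admin)
		return 1;
	return 0; /* fall through to the normal capability check */
}
```

Note the override is per-capability: being in AID_NET_ADMIN does not grant CAP_NET_RAW, and any other capability falls through to the usual namespace-aware check.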