- Aug 16, 2012
-
Ming Lei authored
This patch introduces the devres API devres_for_each_res() so that a device's driver can iterate over each resource it is interested in. The firmware loader will use this API to get each firmware name from the device instance.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
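As a rough illustration of the new iterator, here is a minimal sketch of a driver walking its own devres entries. The release function, the callback, and the idea of storing firmware names as plain strings are assumptions made for the example; only devres_for_each_res() itself comes from this patch.

```c
#include <linux/device.h>

/* Hypothetical devres release function used as the match key. */
static void fw_name_devm_release(struct device *dev, void *res)
{
	/* Nothing extra to free; the devres block holds the string itself. */
}

/* Called once for every matching resource on the device. */
static void print_fw_name(struct device *dev, void *res, void *data)
{
	dev_info(dev, "firmware name: %s\n", (const char *)res);
}

static void print_all_fw_names(struct device *dev)
{
	/* Visit each devres entry whose release function is fw_name_devm_release. */
	devres_for_each_res(dev, fw_name_devm_release, NULL, NULL,
			    print_fw_name, NULL);
}
```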
-
Ming Lei authored
This patch stores the firmware name in the devres list of the device that is requesting the firmware, so that automatic caching and uncaching of firmware can be implemented for the devices that need it.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
request_firmware_nowait() may now be called in atomic context if @gfp is GFP_ATOMIC, so fix the obsolete comments and state which situations are suitable for using it.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
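A minimal sketch of an asynchronous request issued from atomic context; the device, the firmware file name, and the completion callback below are placeholders rather than anything defined by the patch.

```c
#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/module.h>

/* Completion callback; runs later in process context. */
static void my_fw_ready(const struct firmware *fw, void *context)
{
	struct device *dev = context;

	if (!fw) {
		dev_err(dev, "firmware load failed\n");
		return;
	}
	/* ...program fw->data / fw->size into the hardware... */
	release_firmware(fw);
}

/* May be called with spinlocks held, because @gfp is GFP_ATOMIC. */
static int my_fw_kickoff(struct device *dev)
{
	return request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG,
				       "my_device.fw", dev, GFP_ATOMIC,
				       dev, my_fw_ready);
}
```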
-
Ming Lei authored
Callers of request_firmware*() must hold a reference to @device, otherwise it is easy to trigger an oops, since the firmware loader device is a child of @device. This patch adds comments about that usage. In fact, most drivers call request_firmware*() from their probe() or open() routines, so the constraint should be reasonable and easy to satisfy. This patch also takes a reference on @device before schedule_work() in request_firmware_nowait(), so that @device cannot be released after request_firmware_nowait() returns but before the worker function runs.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
This patch introduces two kernel APIs, cache_firmware() and uncache_firmware(), both of which take the firmware file name as their only parameter. Any driver can call cache_firmware() to cache the specified firmware file in kernel memory, and can then use the cached firmware in situations where firmware cannot be requested from user space.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
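A sketch of how a driver might use the pair around a window where user space is unavailable; the firmware name and the hook names are illustrative only.

```c
#include <linux/firmware.h>

#define MY_FW_NAME "my_device.fw"	/* placeholder firmware file name */

/* Called while user space is still running, e.g. before suspend. */
static int my_driver_pin_firmware(void)
{
	/* Copy the image into kernel memory while it can still be loaded. */
	return cache_firmware(MY_FW_NAME);
}

/* Called once user space is available again. */
static void my_driver_unpin_firmware(void)
{
	/*
	 * In between, request_firmware(MY_FW_NAME, ...) is served from the
	 * kernel-memory copy without involving user space.
	 */
	uncache_firmware(MY_FW_NAME);
}
```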
-
Ming Lei authored
This patch always lets firmware_buf own the pages buffer allocated inside firmware_data_write, and adds every firmware_buf instance to the global firmware cache list. It also introduces a private field in 'struct firmware' so that release_firmware can find the firmware_buf instance associated with the current firmware instance and simply 'free' that firmware_buf. A firmware_buf instance represents the pages buffer for one firmware image, so many firmware loading requests can share the same firmware_buf instance if they request the same firmware image file. This will make the following cache_firmware/uncache_firmware implementation easy and simple. In fact, the patch also improves request_firmware/release_firmware: user space is asked to write a firmware image only once when several devices share the same image and their drivers call request_firmware concurrently.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
This patch introduces struct firmware_buf to describe the buffer that holds the firmware data, which will make the following cache_firmware/uncache_firmware easy to implement.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
If a device driver calls request_firmware_nowait() to request the loading of several different firmware images, device_add() will fail because all of the firmware loader devices use the same name, that of the device requesting the firmware. This patch always uses the name of the firmware image as the firmware loader device name to fix the problem, since the following firmware caching patches make sure that only one load of a given firmware image is allowed at a time.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
The wmb() inside fw_load_abort() is not necessary, since complete() and wait_for_completion() already imply a pair of memory barriers. The wmb() is not a correct usage anyway, so just remove it.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
This patch fixes two races in firmware loading: 1) FW_STATUS_DONE should be set before waking up the task waiting in _request_firmware_load(), otherwise FW_STATUS_ABORT may mistakenly be taken for DONE. 2) Inside _request_firmware_load() there is a small window between wait_for_completion() and mutex_lock(&fw_lock), and 'echo 1 > loading' may still happen during that period, so this patch checks FW_STATUS_DONE to prevent the completed pages buffer from being freed in firmware_loading_store().
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
This patch does not transfer ownership of the pages buffer to the firmware instance until firmware loading has completed, which simplifies firmware_loading_store() a lot and so helps introduce the following cache_firmware/uncache_firmware mechanism across the system suspend-resume cycle. In fact, this patch fixes a bug: if writing data to the firmware loader device is skipped between writing 1 and 0 to 'loading', an oops is triggered without the patch. It also handles the vmap() failure case and adds some comments to make the code more readable.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jean Delvare authored
Right now we have support for explicit platform device IDs, as well as ID-less platform devices for when a given device type can only have one instance. However, there are cases where multiple instances of a device type can exist, and their IDs aren't (and can't be) known in advance and do not matter. In that case we need automatic device IDs to avoid device name collisions. I am using the magic ID value -2 (PLATFORM_DEVID_AUTO) for this, similar to -1 for ID-less devices. The automatically allocated device IDs are global (to avoid an additional per-driver cost). We keep a note that the ID was automatically allocated so that it can be freed later. Note that we also restore the ID to PLATFORM_DEVID_AUTO on error and on device deletion, to avoid unexpected behavior on retry. I don't really expect retries on platform device addition, but better safe than sorry.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
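A minimal sketch of registering two instances of the same device type with automatically allocated IDs; the "my-widget" name is made up for the example.

```c
#include <linux/err.h>
#include <linux/platform_device.h>

/*
 * Register two instances without caring about their IDs; the core
 * allocates unique IDs so the resulting device names cannot collide.
 */
static int register_two_widgets(void)
{
	struct platform_device *a, *b;

	a = platform_device_register_simple("my-widget", PLATFORM_DEVID_AUTO,
					    NULL, 0);
	if (IS_ERR(a))
		return PTR_ERR(a);

	b = platform_device_register_simple("my-widget", PLATFORM_DEVID_AUTO,
					    NULL, 0);
	if (IS_ERR(b)) {
		platform_device_unregister(a);
		return PTR_ERR(b);
	}

	return 0;
}
```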
-
Ming Lei authored
device_del() can happen at any time; once it does, the device's devres entries are freed inside device_del(), but drivers cannot know that the device has been deleted and may still add resources to it, which causes a memory leak. This patch moves the devres_release_all() call to fix the problem.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- Aug 01, 2012
-
Minchan Kim authored
mm/page_alloc.c has some memory isolation functions, but they are used only when CONFIG_{CMA|MEMORY_HOTPLUG|MEMORY_FAILURE} is enabled. So let's make them configurable via a new CONFIG_MEMORY_ISOLATION option, which reduces binary size and lets us simply check CONFIG_MEMORY_ISOLATION instead of "defined CONFIG_{CMA|MEMORY_HOTPLUG|MEMORY_FAILURE}".
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Jul 30, 2012
-
Marek Szyprowski authored
This patch adds the dma_get_sgtable() function, which is required to let drivers share buffers allocated by the DMA-mapping subsystem. Right now a driver gets the DMA address of the allocated buffer and a kernel virtual mapping for it. If it wants to share the buffer with another device (i.e. map it into that device's DMA address space), it usually hacks around kernel virtual addresses to get pointers to pages, or assumes that both devices share the DMA address space. Both solutions are just hacks for special cases and should be avoided in the final version of buffer sharing. To solve this issue in a generic way, a new DMA-mapping call has been introduced: dma_get_sgtable(). It allocates a scatterlist that describes the allocated buffer and lets the driver(s) use it with other device(s) by calling dma_map_sg() on it. This patch provides a generic implementation based on virt_to_page(). Architectures that require a more sophisticated translation can provide their own get_sgtable() methods.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
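A minimal sketch of the exporting side; the helper name is made up, and the surrounding allocation plus the importer's dma_map_sg() call are only indicated in comments.

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Describe a coherent buffer (obtained from dma_alloc_coherent()) as a
 * scatter-gather table so another driver can map it with dma_map_sg().
 */
static int export_coherent_buffer(struct device *dev, void *cpu_addr,
				  dma_addr_t dma_addr, size_t size,
				  struct sg_table *sgt)
{
	int ret;

	ret = dma_get_sgtable(dev, sgt, cpu_addr, dma_addr, size);
	if (ret < 0)
		return ret;

	/* ...hand 'sgt' to the importing driver, which calls dma_map_sg()... */
	return 0;
}
```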
-
Marek Szyprowski authored
Commit 9adc5374 ('common: dma-mapping: introduce mmap method') added a generic mmap method to the dma_map_ops structure for implementing the mmap user call. This patch converts the ARM and PowerPC architectures (the only providers of the dma_mmap_coherent/dma_mmap_writecombine calls) to use this generic dma_map_ops based call, and adds generic cross-architecture definitions of the dma_mmap_attrs, dma_mmap_coherent and dma_mmap_writecombine functions. A generic virt_to_page-based fallback implementation is provided for architectures that don't supply their own mmap method.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
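A sketch of what the now cross-architecture helper looks like from a driver's mmap() path; the per-driver buffer structure is hypothetical.

```c
#include <linux/dma-mapping.h>
#include <linux/mm.h>

struct my_buf {			/* hypothetical per-driver buffer state */
	struct device *dev;
	void *vaddr;		/* from dma_alloc_coherent() */
	dma_addr_t dma_handle;
	size_t size;
};

/* mmap() backend: map the coherent buffer into the calling process. */
static int my_buf_mmap(struct my_buf *buf, struct vm_area_struct *vma)
{
	return dma_mmap_coherent(buf->dev, vma, buf->vaddr,
				 buf->dma_handle, buf->size);
}
```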
-
- Jul 29, 2012
-
Al Viro authored
releases what needs to be released after {kern,user}_path_create()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- Jul 20, 2012
-
Dan Williams authored
Now that scsi registers its async scan work with the async subsystem, wait_for_device_probe() is sufficient for ensuring all scanning is complete. [jejb: fix merge problems with eea03c20 Make wait_for_device_probe() also do scsi_complete_async_scans()]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: Eldad Zack <eldad@fogrefinery.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
-
- Jul 19, 2012
-
Colin Cross authored
Commit cf579dfb (PM / Sleep: Introduce "late suspend" and "early resume" of devices) introduced a bug where suspend_late handlers would be called, but if dpm_suspend_noirq returned an error the early_resume handlers would never be called. All devices would end up on the dpm_late_early_list, and would never be resumed again. Fix it by calling dpm_resume_early when dpm_suspend_noirq returns an error.
Signed-off-by: Colin Cross <ccross@android.com>
Cc: stable@vger.kernel.org
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Linus Torvalds authored
Commit a7a20d10 ("sd: limit the scope of the async probe domain") made SCSI device probing run device discovery in its own async domain. However, as a result, the partition detection was no longer synchronized by async_synchronize_full() (which, despite the name, only synchronizes the global async space, not all of them). That in turn meant that wait_for_device_probe() would not wait for the SCSI partitions to be parsed. And wait_for_device_probe() was what the boot-time init code relied on for mounting the root filesystem. Now, most people never noticed this, because not only is it timing-dependent, but modern distributions all use an initrd. So the root filesystem isn't actually on a disk at all. And then, before they actually mount the final disk filesystem, they will have loaded the scsi-wait-scan module, which not only does the expected wait_for_device_probe() but also does scsi_complete_async_scans(). [ Side note: scsi_complete_async_scans() had also been partially broken, but that was fixed in commit 43a8d39d ("fix async probe regression"), so the same commit a7a20d10 had actually broken setups even if you used scsi-wait-scan explicitly ] Solve this problem by just moving the scsi_complete_async_scans() call into wait_for_device_probe(). Everybody who wants to wait for device probing to finish really wants the SCSI probing to complete, so there's no reason not to do this. So now wait_for_device_probe() really does what the name implies and properly waits for device probing to finish. This also removes the now unnecessary extra calls to scsi_complete_async_scans().
Reported-and-tested-by: Artem S. Tashkinov <t.artem@mailcity.com>
Cc: Dan Williams <dan.j.williams@gmail.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: James Bottomley <jbottomley@parallels.com>
Cc: Borislav Petkov <bp@amd64.org>
Cc: linux-scsi <linux-scsi@vger.kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Sachin Kamat authored
Fix the following sparse warning:
drivers/base/power/qos.c:465:29: warning: Using plain integer as NULL pointer
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- Jul 18, 2012
-
Sachin Kamat authored
Fix the following sparse warnings:
drivers/base/power/main.c:48:1: warning: symbol 'dpm_prepared_list' was not declared. Should it be static?
drivers/base/power/main.c:49:1: warning: symbol 'dpm_suspended_list' was not declared. Should it be static?
drivers/base/power/main.c:50:1: warning: symbol 'dpm_late_early_list' was not declared. Should it be static?
drivers/base/power/main.c:51:1: warning: symbol 'dpm_noirq_list' was not declared. Should it be static?
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Dimitris Papastamos authored
Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- Jul 17, 2012
-
Sebastian Ott authored
Do not send the uevent if driver_add_groups failed.
Reported-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rafael J. Wysocki authored
The pm_runtime_get_noresume() calls before really_probe() and before executing __device_attach() for each driver on the device's bus cause problems to happen if probing fails and if the driver has enabled runtime PM for the device in its .probe() callback. Namely, in that case, if the device has been resumed by the driver after enabling its runtime PM and if it turns out that .probe() should return an error, the driver is supposed to suspend the device and disable its runtime PM before exiting .probe(). However, because the device's runtime PM usage counter was incremented by the core before calling .probe(), the driver's attempt to suspend the device will not succeed and the device will remain in the full-power state after the failing .probe() has returned. To fix this issue, remove the pm_runtime_get_noresume() calls from driver_probe_device() and from device_attach() and replace the corresponding pm_runtime_put_sync() calls with pm_runtime_idle() to preserve the existing behavior (which is to check if the device is idle and to suspend it eventually in that case after probing).
Reported-and-tested-by: Kevin Hilman <khilman@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
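A sketch of the driver-side pattern this change is about: a probe() that enables runtime PM and wants the device suspended again when probing fails. The driver and helper names are made up; only the pm_runtime_* calls are real kernel APIs.

```c
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Stand-in for hardware initialization that may fail. */
static int my_hw_init(struct device *dev)
{
	return -ENODEV;
}

static int my_probe(struct platform_device *pdev)
{
	int ret;

	pm_runtime_enable(&pdev->dev);
	pm_runtime_get_sync(&pdev->dev);	/* power up for initialization */

	ret = my_hw_init(&pdev->dev);
	if (ret) {
		/*
		 * With the core no longer holding a usage count across
		 * .probe(), this put can actually suspend the device.
		 */
		pm_runtime_put_sync(&pdev->dev);
		pm_runtime_disable(&pdev->dev);
		return ret;
	}

	pm_runtime_put(&pdev->dev);
	return 0;
}
```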
-
Lars-Peter Clausen authored
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mark Brown authored
When deferred probe was originally added, the idea was that devices which defer their probes would move themselves to the end of dpm_list, in order to try to keep correct the assumption that the list is in roughly the order things should be suspended. However this isn't what has been happening, and doing it requires a lot of duplicated code to perform the moves. Instead take a simple, brute-force solution and have the deferred probe code push devices to the end of dpm_list before it retries the probe. This does mean we lock the dpm_list a bit more often, but it's very simple and the code shouldn't be a fast path. We do the move with the deferred mutex dropped, since doing things with fewer locks held simultaneously seems like a good idea. This approach was most recently suggested by Grant Likely.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
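For context, a minimal sketch of what triggers the retry path described above: a driver returning -EPROBE_DEFER when a resource is not ready yet. The clock-based consumer below is hypothetical.

```c
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>

/* Hypothetical consumer that cannot finish probing before its clock exists. */
static int my_consumer_probe(struct platform_device *pdev)
{
	struct clk *clk = devm_clk_get(&pdev->dev, NULL);

	if (IS_ERR(clk)) {
		/*
		 * Deferring puts the device on the deferred-probe list; when
		 * the core retries it, the device is first pushed to the end
		 * of dpm_list as described above.
		 */
		return -EPROBE_DEFER;
	}

	return 0;
}
```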
-
Sebastian Ott authored
Device driver attribute groups are created after userspace is notified via an add event. Fix this by moving the kobject_uevent call to driver_register, after the attribute groups are added.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
Firstly, the .shutdown callback may touch uninitialized hardware if dev->driver is set but .probe has not completed. Secondly, device_shutdown() may dereference a NULL pointer and oops when dev->driver is cleared after it has been checked in device_shutdown(). So just hold the device lock and its parent's lock (if it has one) to fix the races.
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- Jul 14, 2012
-
Al Viro authored
all callers want the same thing, actually - a kinda-sorta analog of kern_path_create(). I.e. they want the parent vfsmount/dentry (with ->i_mutex held, to make sure the child dentry is still their child) + the child dentry.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- Jul 12, 2012
-
Rafael J. Wysocki authored
The power/async device sysfs attribute is only used if both CONFIG_PM_ADVANCED_DEBUG and CONFIG_PM_SLEEP are set, but the code implementing it doesn't depend on CONFIG_PM_SLEEP. As a result, a build warning appears if CONFIG_PM_ADVANCED_DEBUG is set and CONFIG_PM_SLEEP is not set. Fix it by adding an #ifdef CONFIG_PM_SLEEP around the code in question.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Rafael J. Wysocki authored
The functions genpd_save_dev() and genpd_restore_dev() are not used when CONFIG_PM_RUNTIME is unset, so move them under an appropriate #ifdef.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- Jul 11, 2012
-
Sachin Kamat authored
Fixes the following sparse warning:
drivers/base/power/domain.c:1679:55: warning: Using plain integer as NULL pointer
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- Jul 10, 2012
-
Sachin Kamat authored
Fixes the following sparse warning:
drivers/base/power/domain.c:149:5: warning: symbol '__pm_genpd_poweron' was not declared. Should it be static?
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Preeti U Murthy authored
On certain BIOSes, resume hangs if CPUs are allowed to enter idle states during suspend [1]. This was fixed in the ACPI idle driver [2], but the intel_idle driver does not have this fix. Thus, instead of replicating the fix in both idle drivers, or in more platform-specific idle drivers if needed, the more general cpuidle infrastructure could handle this. A suspend callback in cpuidle_driver could handle this fix, but a cpuidle_driver provides only basic functionality like platform idle state detection and the mechanisms to support entry into and exit from CPU idle states. All other cpuidle functions are found in the generic cpuidle infrastructure, for the good reason that all cpuidle drivers, irrespective of their platforms, support these functions. One option therefore would be to register a suspend callback in cpuidle which handles this fix. This could be called through a PM_SUSPEND_PREPARE notifier, but that is too generic a notifier for a driver to handle. Also, ideally the job of cpuidle is not to handle side effects of suspend. It should expose the interfaces which "handle cpuidle 'during' suspend" or any other operation, which the subsystems call during the respective operation. The fix demands that during suspend no CPUs be allowed to enter deep C-states. The cpuidle interface cpuidle_uninstall_idle_handler() ensures exactly that. Not only that, it also kicks all the CPUs which are already idle out of their idle states, which was previously done during CPU hotplug through CPU_DYING_FROZEN callbacks. Now the question arises of when, during suspend, cpuidle_uninstall_idle_handler() should be called. Since we are dealing with drivers, it seems best to call this function during dpm_suspend(). Delaying the call till dpm_suspend_noirq() does no harm, as long as it happens before cpu_hotplug_begin(), to avoid race conditions with CPU hotplug operations. In dpm_suspend_noirq(), it would be wise to place this call before suspend_device_irqs() to avoid ugly interactions with the same. Analogously during resume.
References:
[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/674075
[2] http://marc.info/?l=linux-pm&m=133958534231884&w=2
Reported-and-tested-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- Jul 06, 2012
-
Mark Brown authored
Sometimes, for failures during very early init, the trace infrastructure isn't available early enough to be used. For this sort of problem, defining LOG_DEVICE will add printks for basic register I/O on a specific device, allowing the information to be extracted when the trace system doesn't come up early enough to work with.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
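This is a compile-time knob in the regmap core rather than a runtime option; a sketch of how it might be used, with a made-up device name (whatever dev_name() reports for the device being debugged):

```c
/*
 * In drivers/base/regmap/regmap.c, define LOG_DEVICE to name the
 * device of interest (it is normally left undefined):
 */
#define LOG_DEVICE "spi0.1"	/* dev_name() of the device to trace */

/*
 * With this defined, the regmap register read/write paths print
 * dev_info() lines for that one device, so its early register I/O is
 * visible on the console before the trace system is up.
 */
```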
-
- Jul 05, 2012
-
Rafael J. Wysocki authored
Make it possible to modify device callbacks used by the generic PM domains core code at any time, not only after the device has been added to a domain. This will allow device drivers to provide their own device PM domain callbacks even if they are registered before adding the devices to PM domains. For this purpose, use the observation that the struct generic_pm_domain_data object containing the relevant callback pointers may be allocated by pm_genpd_add_callbacks() and the callbacks may be set before __pm_genpd_add_device() is run for the given device. This object will then be used by __pm_genpd_add_device(), but it has to be protected from premature removal by reference counting.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Rafael J. Wysocki authored
Add a mechanism for counting references to the struct generic_pm_domain_data object pointed to by dev->power.subsys_data->domain_data if the device in question belongs to a generic PM domain. This change is necessary for a subsequent patch making it possible to allocate that object from within pm_genpd_add_callbacks(), so that drivers can attach their PM domain device callbacks to devices before those devices are added to PM domains. This patch has been tested on the SH7372 Mackerel board.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- Jul 03, 2012
-
Rafael J. Wysocki authored
On some systems there are CPU cores located in the same power domains as I/O devices. Then, power can only be removed from the domain if all I/O devices in it are not in use and the CPU core is idle. Add preliminary support for that to the generic PM domains framework. First, the platform is expected to provide a cpuidle driver with one extra state designated for use with the generic PM domains code. This state should be initially disabled, and its exit_latency value should be set to whatever time is needed to bring up the CPU core itself after restoring power to it, not including the domain's power-on latency. Its .enter() callback should point to a procedure that will remove power from the domain containing the CPU core at the end of the CPU power transition. The remaining characteristics of the extra cpuidle state, referred to as the "domain" cpuidle state below (e.g. power usage, target residency), should be populated in accordance with the properties of the hardware. Next, the platform should execute genpd_attach_cpuidle() on the PM domain containing the CPU core. That will cause the generic PM domains framework to treat that domain in a special way such that:
* When all devices in the domain have been suspended and it is about to be turned off, the states of the devices will be saved, but power will not be removed from the domain. Instead, the "domain" cpuidle state will be enabled so that power can be removed from the domain when the CPU core is idle and the state has been chosen as the target by the cpuidle governor.
* When the first I/O device in the domain is resumed and __pm_genpd_poweron() is called for the first time after power has been removed from the domain, the "domain" cpuidle state will be disabled to avoid subsequent surprise power removals via cpuidle.
The effective exit_latency value of the "domain" cpuidle state depends on the time needed to bring up the CPU core itself after restoring power to it, as well as on the power-on latency of the domain containing the CPU core. Thus the "domain" cpuidle state's exit_latency has to be recomputed every time the domain's power-on latency is updated, which may happen every time power is restored to the domain, if the measured power-on latency is greater than the latency stored in the corresponding generic_pm_domain structure.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Kevin Hilman <khilman@ti.com>
-
- Jul 01, 2012
-
Rafael J. Wysocki authored
While resuming a device belonging to a PM domain, pm_genpd_runtime_resume() calls __pm_genpd_restore_device() to restore its state, if necessary. The latter starts the device, using genpd_start_dev(), restores its state, using genpd_restore_dev(), and then stops it, using genpd_stop_dev(). However, this last operation is not necessary, because the device is supposed to be operational after pm_genpd_runtime_resume() has returned, and because of it pm_genpd_runtime_resume() has to call genpd_start_dev() once again for the "restored" device, which is inefficient. To make things more efficient, remove the call to genpd_stop_dev() from __pm_genpd_restore_device() and the direct call to genpd_start_dev() from pm_genpd_runtime_resume(). [Of course, genpd_start_dev() still has to be called by it for devices with the power.irq_safe flag set, because __pm_genpd_restore_device() is not executed for them.] This change has been tested on the SH7372 Mackerel board.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-