author | Linus Torvalds <torvalds@linux-foundation.org> | 2017-02-20 17:41:31 -0800
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2017-02-20 17:41:31 -0800
commit | 02c3de1105228e367320e7fdeffbf511904f398c (patch)
tree | d0861ed0752806c6c85e72749734dad9914a8cd9 /drivers/base/power/opp/of.c
parent | 7aa7d608112baf63a0b1278955f9619427373807 (diff)
parent | eee77a8a0d268b936b1641fd7d55efaa17c351d6 (diff)
Merge tag 'pm-4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"The majority of changes go into the Operating Performance Points (OPP)
framework and cpufreq this time, followed by devfreq and some
scattered updates all over.
The OPP changes are mostly related to switching over from RCU-based
synchronization, which turned out to be overly complicated and
problematic, to reference counting using krefs.
In the cpufreq land there are core cleanups, documentation updates, a
new driver for Broadcom BMIPS SoCs, a new cpufreq-dt sub-driver for TI
SoCs that require special handling, ARM64 SoCs support for the qoriq
driver, intel_pstate updates, powernv driver update and assorted
fixes.
The devfreq changes are mostly fixes related to the sysfs interface
and some Exynos drivers updates.
Apart from that, the cpuidle menu governor will support per-CPU PM QoS
constraints for the wakeup latency now, some bugs in the wakeup IRQs
framework are fixed, the generic power domains framework should handle
asynchronous invocations of *noirq suspend/resume callbacks from now
on, the analyze_suspend.py script is updated and there is a new tool
for intel_pstate diagnostics.
Specifics:
- Operating Performance Points (OPP) framework fixes, cleanups and
switch over from RCU-based synchronization to reference counting
using krefs (Viresh Kumar, Wei Yongjun, Dave Gerlach)
- cpufreq core cleanups and documentation updates (Viresh Kumar,
Rafael Wysocki)
- New cpufreq driver for Broadcom BMIPS SoCs (Markus Mayer)
- New cpufreq-dt sub-driver for TI SoCs requiring special handling,
like in the AM335x, AM437x, DRA7x, and AM57x families, along with
new DT bindings for it (Dave Gerlach, Paul Gortmaker)
- ARM64 SoCs support for the qoriq cpufreq driver (Tang Yuantian)
- intel_pstate driver updates including a new sysfs knob to control
the driver's operation mode and fixes related to the no_turbo sysfs
knob and the hardware-managed P-states feature support (Rafael
Wysocki, Srinivas Pandruvada)
- New interface to export ultra-turbo frequencies for the powernv
cpufreq driver (Shilpasri Bhat)
- Assorted fixes for cpufreq drivers (Arnd Bergmann, Dan Carpenter,
Wei Yongjun)
- devfreq core fixes, mostly related to the sysfs interface exported
by it (Chanwoo Choi, Chris Diamand)
- Updates of the exynos-bus and exynos-ppmu devfreq drivers (Chanwoo
Choi)
- Device PM QoS extension to support CPUs and support for per-CPU
wakeup (device resume) latency constraints in the cpuidle menu
governor (Alex Shi)
- Wakeup IRQs framework fixes (Grygorii Strashko)
- Generic power domains framework update including a fix to make it
handle asynchronous invocations of *noirq suspend/resume callbacks
correctly (Ulf Hansson, Geert Uytterhoeven)
- Assorted fixes and cleanups in the core suspend/hibernate code, PM
QoS framework and x86 ACPI idle support code (Corentin Labbe, Geert
Uytterhoeven, Geliang Tang, John Keeping, Nick Desaulniers)
- Update of the analyze_suspend.py script to version 4.5 offering
multiple improvements (Todd Brandt)
- New tool for intel_pstate diagnostics using the pstate_sample
tracepoint (Doug Smythies)"
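The kref switch described in the OPP notes above follows a common kernel pattern: the tables sit on a global list protected by a mutex, and their lifetime is tracked with a struct kref instead of RCU read sections. The snippet below is only an illustrative sketch of that pattern with invented foo_* names; it is not the OPP code itself (the real conversion is visible in the diff further down).

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static LIST_HEAD(foo_tables);
static DEFINE_MUTEX(foo_table_lock);

struct foo_table {
	struct list_head node;
	struct kref kref;
	int id;			/* lookup key; stands in for the OPP table's DT node */
};

/* Last reference gone: called by kref_put_mutex() with foo_table_lock held. */
static void foo_table_release(struct kref *kref)
{
	struct foo_table *t = container_of(kref, struct foo_table, kref);

	list_del(&t->node);
	mutex_unlock(&foo_table_lock);
	kfree(t);
}

/* Allocate a table with an initial reference and publish it on the list. */
static struct foo_table *foo_table_alloc(int id)
{
	struct foo_table *t = kzalloc(sizeof(*t), GFP_KERNEL);

	if (!t)
		return NULL;

	t->id = id;
	kref_init(&t->kref);		/* refcount starts at 1 */

	mutex_lock(&foo_table_lock);
	list_add(&t->node, &foo_tables);
	mutex_unlock(&foo_table_lock);

	return t;
}

/* Look up a table by key and take a reference, all under the mutex. */
static struct foo_table *foo_table_get(int id)
{
	struct foo_table *t, *found = NULL;

	mutex_lock(&foo_table_lock);
	list_for_each_entry(t, &foo_tables, node) {
		if (t->id == id) {
			kref_get(&t->kref);
			found = t;
			break;
		}
	}
	mutex_unlock(&foo_table_lock);

	return found;
}

/* Drop a reference; the final put unlinks and frees the table. */
static void foo_table_put(struct foo_table *t)
{
	kref_put_mutex(&t->kref, foo_table_release, &foo_table_lock);
}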
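The per-CPU wakeup-latency constraints consumed by the cpuidle menu governor ride on the existing device PM QoS machinery. Below is a minimal, hedged sketch of how kernel code might place such a constraint on CPU0, assuming the standard dev_pm_qos_add_request()/DEV_PM_QOS_RESUME_LATENCY interface; the cap_/drop_ helper names are made up.

#include <linux/cpu.h>
#include <linux/errno.h>
#include <linux/pm_qos.h>
#include <linux/types.h>

static struct dev_pm_qos_request cpu0_latency_req;

/* Cap the resume latency accepted for CPU0 (value in microseconds). */
static int cap_cpu0_resume_latency(s32 max_us)
{
	struct device *cpu_dev = get_cpu_device(0);

	if (!cpu_dev)
		return -ENODEV;

	/* The governor can then rule out idle states with a longer exit latency. */
	return dev_pm_qos_add_request(cpu_dev, &cpu0_latency_req,
				      DEV_PM_QOS_RESUME_LATENCY, max_us);
}

static void drop_cpu0_resume_latency_cap(void)
{
	dev_pm_qos_remove_request(&cpu0_latency_req);
}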
* tag 'pm-4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (85 commits)
MAINTAINERS: cpufreq: add bmips-cpufreq.c
PM / QoS: Fix memory leak on resume_latency.notifiers
PM / Documentation: Spelling s/wrtie/write/
PM / sleep: Fix test_suspend after sleep state rework
cpufreq: CPPC: add ACPI_PROCESSOR dependency
cpufreq: make ti-cpufreq explicitly non-modular
cpufreq: Do not clear real_cpus mask on policy init
tools/power/x86: Debug utility for intel_pstate driver
AnalyzeSuspend: fix drag and zoom bug in javascript
PM / wakeirq: report a wakeup_event on dedicated wekup irq
PM / wakeirq: Fix spurious wake-up events for dedicated wakeirqs
PM / wakeirq: Enable dedicated wakeirq for suspend
cpufreq: dt: Don't use generic platdev driver for ti-cpufreq platforms
cpufreq: ti: Add cpufreq driver to determine available OPPs at runtime
Documentation: dt: add bindings for ti-cpufreq
PM / OPP: Expose _of_get_opp_desc_node as dev_pm_opp API
cpufreq: qoriq: Don't look at clock implementation details
cpufreq: qoriq: add ARM64 SoCs support
PM / Domains: Provide dummy governors if CONFIG_PM_GENERIC_DOMAINS=n
cpufreq: brcmstb-avs-cpufreq: remove unnecessary platform_set_drvdata()
...
Diffstat (limited to 'drivers/base/power/opp/of.c')
-rw-r--r-- | drivers/base/power/opp/of.c | 154
1 file changed, 60 insertions(+), 94 deletions(-)
diff --git a/drivers/base/power/opp/of.c b/drivers/base/power/opp/of.c
index 3f7d2591b173..779428676f63 100644
--- a/drivers/base/power/opp/of.c
+++ b/drivers/base/power/opp/of.c
@@ -24,9 +24,11 @@
 
 static struct opp_table *_managed_opp(const struct device_node *np)
 {
-	struct opp_table *opp_table;
+	struct opp_table *opp_table, *managed_table = NULL;
+
+	mutex_lock(&opp_table_lock);
 
-	list_for_each_entry_rcu(opp_table, &opp_tables, node) {
+	list_for_each_entry(opp_table, &opp_tables, node) {
 		if (opp_table->np == np) {
 			/*
 			 * Multiple devices can point to the same OPP table and
@@ -35,14 +37,18 @@ static struct opp_table *_managed_opp(const struct device_node *np)
 			 * But the OPPs will be considered as shared only if the
 			 * OPP table contains a "opp-shared" property.
 			 */
-			if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED)
-				return opp_table;
+			if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) {
+				_get_opp_table_kref(opp_table);
+				managed_table = opp_table;
+			}
 
-			return NULL;
+			break;
 		}
 	}
 
-	return NULL;
+	mutex_unlock(&opp_table_lock);
+
+	return managed_table;
 }
 
 void _of_init_opp_table(struct opp_table *opp_table, struct device *dev)
@@ -229,34 +235,28 @@ free_microvolt:
  * @dev:	device pointer used to lookup OPP table.
  *
  * Free OPPs created using static entries present in DT.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_of_remove_table(struct device *dev)
 {
-	_dev_pm_opp_remove_table(dev, false);
+	_dev_pm_opp_find_and_remove_table(dev, false);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
 
 /* Returns opp descriptor node for a device, caller must do of_node_put() */
-static struct device_node *_of_get_opp_desc_node(struct device *dev)
+struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
 {
 	/*
-	 * TODO: Support for multiple OPP tables.
-	 *
 	 * There should be only ONE phandle present in "operating-points-v2"
 	 * property.
 	 */
 
 	return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
 }
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node);
 
 /**
  * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
+ * @opp_table:	OPP table
  * @dev:	device for which we do this operation
  * @np:		device node
  *
@@ -264,12 +264,6 @@ static struct device_node *_of_get_opp_desc_node(struct device *dev)
  * opp can be controlled using dev_pm_opp_enable/disable functions and may be
  * removed by dev_pm_opp_remove.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
  * Return:
  * 0		On success OR
  *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -278,22 +272,17 @@ static struct device_node *_of_get_opp_desc_node(struct device *dev)
  * -ENOMEM	Memory allocation failure
  * -EINVAL	Failed parsing the OPP node
  */
-static int _opp_add_static_v2(struct device *dev, struct device_node *np)
+static int _opp_add_static_v2(struct opp_table *opp_table, struct device *dev,
+			      struct device_node *np)
 {
-	struct opp_table *opp_table;
 	struct dev_pm_opp *new_opp;
 	u64 rate;
 	u32 val;
 	int ret;
 
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	new_opp = _allocate_opp(dev, &opp_table);
-	if (!new_opp) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	new_opp = _opp_allocate(opp_table);
+	if (!new_opp)
+		return -ENOMEM;
 
 	ret = of_property_read_u64(np, "opp-hz", &rate);
 	if (ret < 0) {
@@ -327,8 +316,12 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 		goto free_opp;
 
 	ret = _opp_add(dev, new_opp, opp_table);
-	if (ret)
+	if (ret) {
+		/* Don't return error for duplicate OPPs */
+		if (ret == -EBUSY)
+			ret = 0;
 		goto free_opp;
+	}
 
 	/* OPP to select on device suspend */
 	if (of_property_read_bool(np, "opp-suspend")) {
@@ -345,8 +338,6 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 	if (new_opp->clock_latency_ns > opp_table->clock_latency_ns_max)
 		opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;
 
-	mutex_unlock(&opp_table_lock);
-
 	pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
 		 __func__, new_opp->turbo, new_opp->rate,
 		 new_opp->supplies[0].u_volt, new_opp->supplies[0].u_volt_min,
@@ -356,13 +347,12 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 	 * Notify the changes in the availability of the operable
 	 * frequency/voltage list.
 	 */
-	srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
+	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ADD, new_opp);
 
 	return 0;
 
 free_opp:
-	_opp_remove(opp_table, new_opp, false);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	_opp_free(new_opp);
+
 	return ret;
 }
 
@@ -373,41 +363,35 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 	struct opp_table *opp_table;
 	int ret = 0, count = 0;
 
-	mutex_lock(&opp_table_lock);
-
 	opp_table = _managed_opp(opp_np);
 	if (opp_table) {
 		/* OPPs are already managed */
 		if (!_add_opp_dev(dev, opp_table))
 			ret = -ENOMEM;
-		mutex_unlock(&opp_table_lock);
-		return ret;
+		goto put_opp_table;
 	}
-	mutex_unlock(&opp_table_lock);
+
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return -ENOMEM;
 
 	/* We have opp-table node now, iterate over it and add OPPs */
 	for_each_available_child_of_node(opp_np, np) {
 		count++;
 
-		ret = _opp_add_static_v2(dev, np);
+		ret = _opp_add_static_v2(opp_table, dev, np);
 		if (ret) {
 			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
 				ret);
-			goto free_table;
+			_dev_pm_opp_remove_table(opp_table, dev, false);
+			goto put_opp_table;
 		}
 	}
 
 	/* There should be one of more OPP defined */
-	if (WARN_ON(!count))
-		return -ENOENT;
-
-	mutex_lock(&opp_table_lock);
-
-	opp_table = _find_opp_table(dev);
-	if (WARN_ON(IS_ERR(opp_table))) {
-		ret = PTR_ERR(opp_table);
-		mutex_unlock(&opp_table_lock);
-		goto free_table;
+	if (WARN_ON(!count)) {
+		ret = -ENOENT;
+		goto put_opp_table;
 	}
 
 	opp_table->np = opp_np;
@@ -416,12 +400,8 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 	else
 		opp_table->shared_opp = OPP_TABLE_ACCESS_EXCLUSIVE;
 
-	mutex_unlock(&opp_table_lock);
-
-	return 0;
-
-free_table:
-	dev_pm_opp_of_remove_table(dev);
+put_opp_table:
+	dev_pm_opp_put_opp_table(opp_table);
 
 	return ret;
 }
@@ -429,9 +409,10 @@ free_table:
 /* Initializes OPP tables based on old-deprecated bindings */
 static int _of_add_opp_table_v1(struct device *dev)
 {
+	struct opp_table *opp_table;
 	const struct property *prop;
 	const __be32 *val;
-	int nr;
+	int nr, ret = 0;
 
 	prop = of_find_property(dev->of_node, "operating-points", NULL);
 	if (!prop)
@@ -449,18 +430,27 @@ static int _of_add_opp_table_v1(struct device *dev)
 		return -EINVAL;
 	}
 
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return -ENOMEM;
+
 	val = prop->value;
 	while (nr) {
 		unsigned long freq = be32_to_cpup(val++) * 1000;
 		unsigned long volt = be32_to_cpup(val++);
 
-		if (_opp_add_v1(dev, freq, volt, false))
-			dev_warn(dev, "%s: Failed to add OPP %ld\n",
-				 __func__, freq);
+		ret = _opp_add_v1(opp_table, dev, freq, volt, false);
+		if (ret) {
+			dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
				__func__, freq, ret);
+			_dev_pm_opp_remove_table(opp_table, dev, false);
+			break;
+		}
 		nr -= 2;
 	}
 
-	return 0;
+	dev_pm_opp_put_opp_table(opp_table);
+	return ret;
 }
 
 /**
@@ -469,12 +459,6 @@ static int _of_add_opp_table_v1(struct device *dev)
  *
  * Register the initial OPP table with the OPP library for given device.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
 * Return:
 * 0		On success OR
 *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -495,7 +479,7 @@ int dev_pm_opp_of_add_table(struct device *dev)
 	 * OPPs have two version of bindings now. The older one is deprecated,
 	 * try for the new binding first.
 	 */
-	opp_np = _of_get_opp_desc_node(dev);
+	opp_np = dev_pm_opp_of_get_opp_desc_node(dev);
 	if (!opp_np) {
 		/*
 		 * Try old-deprecated bindings for backward compatibility with
@@ -519,12 +503,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
 *
 * This removes the OPP tables for CPUs present in the @cpumask.
 * This should be used only to remove static entries created from DT.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
 void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask)
 {
@@ -537,12 +515,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
 * @cpumask:	cpumask for which OPP table needs to be added.
 *
 * This adds the OPP tables for CPUs present in the @cpumask.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
 int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
 {
@@ -590,12 +562,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
 * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
 *
 * Returns -ENOENT if operating-points-v2 isn't present for @cpu_dev.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
 int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 				   struct cpumask *cpumask)
@@ -605,7 +571,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 	int cpu, ret = 0;
 
 	/* Get OPP descriptor node */
-	np = _of_get_opp_desc_node(cpu_dev);
+	np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
 	if (!np) {
 		dev_dbg(cpu_dev, "%s: Couldn't find opp node.\n", __func__);
 		return -ENOENT;
@@ -630,7 +596,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 		}
 
 		/* Get OPP descriptor node */
-		tmp_np = _of_get_opp_desc_node(tcpu_dev);
+		tmp_np = dev_pm_opp_of_get_opp_desc_node(tcpu_dev);
 		if (!tmp_np) {
 			dev_err(tcpu_dev, "%s: Couldn't find opp node.\n",
				__func__);
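For the consumer-facing side of the change above, here is a rough usage sketch of the helpers this diff touches: dev_pm_opp_of_get_opp_desc_node(), dev_pm_opp_of_add_table() and dev_pm_opp_of_remove_table(). The example_opp_* functions are hypothetical and error handling is trimmed.

#include <linux/device.h>
#include <linux/of.h>
#include <linux/pm_opp.h>

/* Hypothetical consumer: parse the DT OPP table at probe, drop it at remove. */
static int example_opp_probe(struct device *dev)
{
	struct device_node *np;
	int ret;

	/*
	 * dev_pm_opp_of_get_opp_desc_node() is exported by this series;
	 * the caller owns a reference on the returned node.
	 */
	np = dev_pm_opp_of_get_opp_desc_node(dev);
	if (np) {
		dev_dbg(dev, "using an operating-points-v2 table\n");
		of_node_put(np);
	}

	/* Creates (or reuses) the kref-counted opp_table and adds the static OPPs. */
	ret = dev_pm_opp_of_add_table(dev);
	if (ret)
		return ret;

	return 0;
}

static void example_opp_remove(struct device *dev)
{
	/* Removes the DT-created OPPs and drops the table reference. */
	dev_pm_opp_of_remove_table(dev);
}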