author     Linus Torvalds <torvalds@linux-foundation.org>  2020-03-30 17:01:51 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2020-03-30 17:01:51 -0700
commit     642e53ead6aea8740a219ede509a5d138fd4f780 (patch)
tree       5c4680d0c07315dab24fe7333c62f56bc19ec4e4 /lib
parent     9b82f05f869a823d43ea4186f5f732f2924d3693 (diff)
parent     313f16e2e35abb833eab5bdebc6ae30699adca18 (diff)
Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
"The main changes in this cycle are:
- Various NUMA scheduling updates: harmonize the load-balancer and
NUMA placement logic to not work against each other. The intended
result is better locality, better utilization and fewer migrations.
- Introduce Thermal Pressure tracking and optimizations, to improve
task placement on thermally overloaded systems.
- Implement frequency invariant scheduler accounting on (some) x86
CPUs. This is done by observing and sampling the 'recent' CPU
frequency average at ~tick boundaries. The CPU provides this data
via the APERF/MPERF MSRs. This hopefully makes our capacity
estimates more precise and keeps tasks on the same CPU better even
if it might seem overloaded at a lower momentary frequency. (As
usual, turbo mode is a complication that we resolve by observing
the maximum frequency and renormalizing to it.) [An illustrative
sketch of this APERF/MPERF sampling follows the quoted message.]
- Add asymmetric CPU capacity wakeup scan to improve capacity
utilization on asymmetric topologies. (big.LITTLE systems)
- PSI fixes and optimizations.
- RT scheduling capacity awareness fixes & improvements.
- Optimize the CONFIG_RT_GROUP_SCHED constraints code.
- Misc fixes, cleanups and optimizations - see the changelog for
details"
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (62 commits)
threads: Update PID limit comment according to futex UAPI change
sched/fair: Fix condition of avg_load calculation
sched/rt: cpupri_find: Trigger a full search as fallback
kthread: Do not preempt current task if it is going to call schedule()
sched/fair: Improve spreading of utilization
sched: Avoid scale real weight down to zero
psi: Move PF_MEMSTALL out of task->flags
MAINTAINERS: Add maintenance information for psi
psi: Optimize switching tasks inside shared cgroups
psi: Fix cpu.pressure for cpu.max and competing cgroups
sched/core: Distribute tasks within affinity masks
sched/fair: Fix enqueue_task_fair warning
thermal/cpu-cooling, sched/core: Move the arch_set_thermal_pressure() API to generic scheduler code
sched/rt: Remove unnecessary push for unfit tasks
sched/rt: Allow pulling unfitting task
sched/rt: Optimize cpupri_find() on non-heterogenous systems
sched/rt: Re-instate old behavior in select_task_rq_rt()
sched/rt: cpupri_find: Implement fallback mechanism for !fit case
sched/fair: Fix reordering of enqueue/dequeue_task_fair()
sched/fair: Fix runnable_avg for throttled cfs
...
Diffstat (limited to 'lib')
-rw-r--r--  lib/cpumask.c | 29
1 file changed, 29 insertions(+), 0 deletions(-)
diff --git a/lib/cpumask.c b/lib/cpumask.c
index 0cb672eb107c..fb22fb266f93 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -232,3 +232,32 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
 	BUG();
 }
 EXPORT_SYMBOL(cpumask_local_spread);
+
+static DEFINE_PER_CPU(int, distribute_cpu_mask_prev);
+
+/**
+ * Returns an arbitrary cpu within srcp1 & srcp2.
+ *
+ * Iterated calls using the same srcp1 and srcp2 will be distributed within
+ * their intersection.
+ *
+ * Returns >= nr_cpu_ids if the intersection is empty.
+ */
+int cpumask_any_and_distribute(const struct cpumask *src1p,
+			       const struct cpumask *src2p)
+{
+	int next, prev;
+
+	/* NOTE: our first selection will skip 0. */
+	prev = __this_cpu_read(distribute_cpu_mask_prev);
+
+	next = cpumask_next_and(prev, src1p, src2p);
+	if (next >= nr_cpu_ids)
+		next = cpumask_first_and(src1p, src2p);
+
+	if (next < nr_cpu_ids)
+		__this_cpu_write(distribute_cpu_mask_prev, next);
+
+	return next;
+}
+EXPORT_SYMBOL(cpumask_any_and_distribute);
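
To see what the new helper buys, note the shortlog commit "sched/core:
Distribute tasks within affinity masks", which picks destination CPUs
with cpumask_any_and_distribute() so that successive picks rotate
through the intersection of two masks instead of always returning its
first CPU. Below is a self-contained userspace model of that
round-robin logic, under stated assumptions: hypothetical names
(any_and_distribute, next_bit), NR_CPUS fixed at 8, a uint64_t standing
in for struct cpumask, and a single static prev in place of the
kernel's per-CPU distribute_cpu_mask_prev.

/*
 * Userspace model of the round-robin selection in
 * cpumask_any_and_distribute(); illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8

static int prev;	/* models the per-CPU distribute_cpu_mask_prev */

/* Lowest set bit strictly above 'start', or NR_CPUS if none. */
static int next_bit(uint64_t mask, int start)
{
	for (int cpu = start + 1; cpu < NR_CPUS; cpu++)
		if (mask & (1ULL << cpu))
			return cpu;
	return NR_CPUS;
}

static int any_and_distribute(uint64_t src1, uint64_t src2)
{
	uint64_t isect = src1 & src2;
	int next = next_bit(isect, prev);	/* continue after last pick */

	if (next >= NR_CPUS)
		next = next_bit(isect, -1);	/* wrap to the first set bit */
	if (next < NR_CPUS)
		prev = next;			/* remember for the next call */
	return next;
}

int main(void)
{
	/* Two affinity masks whose intersection is CPUs {1, 2, 5}. */
	uint64_t valid = 0x27, allowed = 0xe6;

	for (int i = 0; i < 6; i++)
		printf("%d ", any_and_distribute(valid, allowed));
	printf("\n");
	return 0;
}

Run, the loop prints "1 2 5 1 2 5": successive calls walk the
intersection and wrap around rather than piling every pick onto CPU 1.
It also reproduces the "first selection will skip 0" note from the
kernel comment, since prev starts at 0 and the search begins strictly
after it.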