641 files changed, 10108 insertions, 5358 deletions
diff --git a/Documentation/RCU/RTFP.txt b/Documentation/RCU/RTFP.txt index 273e654d7d08..2f0fcb2112d2 100644 --- a/Documentation/RCU/RTFP.txt +++ b/Documentation/RCU/RTFP.txt @@ -31,6 +31,14 @@ has lapsed, so this approach may be used in non-GPL software, if desired. (In contrast, implementation of RCU is permitted only in software licensed under either GPL or LGPL. Sorry!!!) +In 1987, Rashid et al. described lazy TLB-flush [RichardRashid87a]. +At first glance, this has nothing to do with RCU, but nevertheless +this paper helped inspire the update-side batching used in the later +RCU implementation in DYNIX/ptx. In 1988, Barbara Liskov published +a description of Argus that noted that use of out-of-date values can +be tolerated in some situations. Thus, this paper provides some early +theoretical justification for use of stale data. + In 1990, Pugh [Pugh90] noted that explicitly tracking which threads were reading a given data structure permitted deferred free to operate in the presence of non-terminating threads. However, this explicit @@ -41,11 +49,11 @@ providing a fine-grained locking design, however, it would be interesting to see how much of the performance advantage reported in 1990 remains today. -At about this same time, Adams [Adams91] described ``chaotic relaxation'', -where the normal barriers between successive iterations of convergent -numerical algorithms are relaxed, so that iteration $n$ might use -data from iteration $n-1$ or even $n-2$. This introduces error, -which typically slows convergence and thus increases the number of +At about this same time, Andrews [Andrews91textbook] described ``chaotic +relaxation'', where the normal barriers between successive iterations +of convergent numerical algorithms are relaxed, so that iteration $n$ +might use data from iteration $n-1$ or even $n-2$. This introduces +error, which typically slows convergence and thus increases the number of iterations required. However, this increase is sometimes more than made up for by a reduction in the number of expensive barrier operations, which are otherwise required to synchronize the threads at the end @@ -55,7 +63,8 @@ is thus inapplicable to most data structures in operating-system kernels. In 1992, Henry (now Alexia) Massalin completed a dissertation advising parallel programmers to defer processing when feasible to simplify -synchronization. RCU makes extremely heavy use of this advice. +synchronization [HMassalinPhD]. RCU makes extremely heavy use of +this advice. In 1993, Jacobson [Jacobson93] verbally described what is perhaps the simplest deferred-free technique: simply waiting a fixed amount of time @@ -90,27 +99,29 @@ mechanism, which is quite similar to RCU [Gamsa99]. These operating systems made pervasive use of RCU in place of "existence locks", which greatly simplifies locking hierarchies and helps avoid deadlocks. -2001 saw the first RCU presentation involving Linux [McKenney01a] -at OLS. The resulting abundance of RCU patches was presented the -following year [McKenney02a], and use of RCU in dcache was first -described that same year [Linder02a]. +The year 2000 saw an email exchange that would likely have +led to yet another independent invention of something like RCU +[RustyRussell2000a,RustyRussell2000b]. Instead, 2001 saw the first +RCU presentation involving Linux [McKenney01a] at OLS. The resulting +abundance of RCU patches was presented the following year [McKenney02a], +and use of RCU in dcache was first described that same year [Linder02a]. 
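For readers skimming this history, a minimal sketch of the pattern the dcache work above relies on may be useful; struct foo, foo_lock, and do_something_with() are hypothetical names, but rcu_read_lock(), rcu_dereference(), rcu_assign_pointer(), and synchronize_rcu() are the kernel's actual RCU primitives:

	struct foo {
		int a;
	};
	static struct foo __rcu *gp;	/* RCU-protected global pointer */

	/* Read side: no locks, and no memory barriers on most CPUs. */
	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p)
		do_something_with(p->a);
	rcu_read_unlock();

	/* Update side: publish the new version, then defer the free. */
	spin_lock(&foo_lock);
	old = rcu_dereference_protected(gp, lockdep_is_held(&foo_lock));
	rcu_assign_pointer(gp, newp);
	spin_unlock(&foo_lock);
	synchronize_rcu();	/* wait for pre-existing readers to finish */
	kfree(old);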
Also in 2002, Michael [Michael02b,Michael02a] presented "hazard-pointer" techniques that defer the destruction of data structures to simplify non-blocking synchronization (wait-free synchronization, lock-free synchronization, and obstruction-free synchronization are all examples of -non-blocking synchronization). In particular, this technique eliminates -locking, reduces contention, reduces memory latency for readers, and -parallelizes pipeline stalls and memory latency for writers. However, -these techniques still impose significant read-side overhead in the -form of memory barriers. Researchers at Sun worked along similar lines -in the same timeframe [HerlihyLM02]. These techniques can be thought -of as inside-out reference counts, where the count is represented by the -number of hazard pointers referencing a given data structure rather than -the more conventional counter field within the data structure itself. -The key advantage of inside-out reference counts is that they can be -stored in immortal variables, thus allowing races between access and -deletion to be avoided. +non-blocking synchronization). The corresponding journal article appeared +in 2004 [MagedMichael04a]. This technique eliminates locking, reduces +contention, reduces memory latency for readers, and parallelizes pipeline +stalls and memory latency for writers. However, these techniques still +impose significant read-side overhead in the form of memory barriers. +Researchers at Sun worked along similar lines in the same timeframe +[HerlihyLM02]. These techniques can be thought of as inside-out reference +counts, where the count is represented by the number of hazard pointers +referencing a given data structure rather than the more conventional +counter field within the data structure itself. The key advantage +of inside-out reference counts is that they can be stored in immortal +variables, thus allowing races between access and deletion to be avoided. By the same token, RCU can be thought of as a "bulk reference count", where some form of reference counter covers all reference by a given CPU @@ -123,8 +134,10 @@ can be thought of in other terms as well. In 2003, the K42 group described how RCU could be used to create hot-pluggable implementations of operating-system functions [Appavoo03a]. -Later that year saw a paper describing an RCU implementation of System -V IPC [Arcangeli03], and an introduction to RCU in Linux Journal +Later that year saw a paper describing an RCU implementation +of System V IPC [Arcangeli03] (following up on a suggestion by +Hugh Dickins [Dickins02a] and an implementation by Mingming Cao +[MingmingCao2002IPCRCU]), and an introduction to RCU in Linux Journal [McKenney03a]. 2004 has seen a Linux-Journal article on use of RCU in dcache @@ -383,6 +396,21 @@ for Programming Languages and Operating Systems}" } } +@phdthesis{HMassalinPhD +,author="H. Massalin" +,title="Synthesis: An Efficient Implementation of Fundamental Operating +System Services" +,school="Columbia University" +,address="New York, NY" +,year="1992" +,annotation={ + Mondo optimizing compiler. + Wait-free stuff. + Good advice: defer work to avoid synchronization. See page 90 + (PDF page 106), Section 5.4, fourth bullet point. +} +} + @unpublished{Jacobson93 ,author="Van Jacobson" ,title="Avoid Read-Side Locking Via Delayed Free" @@ -671,6 +699,20 @@ Orran Krieger and Rusty Russell and Dipankar Sarma and Maneesh Soni" [Viewed October 18, 2004]" } +@conference{Michael02b +,author="Maged M. 
Michael" +,title="High Performance Dynamic Lock-Free Hash Tables and List-Based Sets" +,Year="2002" +,Month="August" +,booktitle="{Proceedings of the 14\textsuperscript{th} Annual ACM +Symposium on Parallel +Algorithms and Architecture}" +,pages="73-82" +,annotation={ +Like the title says... +} +} + @Conference{Linder02a ,Author="Hanna Linder and Dipankar Sarma and Maneesh Soni" ,Title="Scalability of the Directory Entry Cache" @@ -727,6 +769,24 @@ Andrea Arcangeli and Andi Kleen and Orran Krieger and Rusty Russell" } } +@conference{Michael02a +,author="Maged M. Michael" +,title="Safe Memory Reclamation for Dynamic Lock-Free Objects Using Atomic +Reads and Writes" +,Year="2002" +,Month="August" +,booktitle="{Proceedings of the 21\textsuperscript{st} Annual ACM +Symposium on Principles of Distributed Computing}" +,pages="21-30" +,annotation={ + Each thread keeps an array of pointers to items that it is + currently referencing. Sort of an inside-out garbage collection + mechanism, but one that requires the accessing code to explicitly + state its needs. Also requires read-side memory barriers on + most architectures. +} +} + @unpublished{Dickins02a ,author="Hugh Dickins" ,title="Use RCU for System-V IPC" @@ -735,6 +795,17 @@ Andrea Arcangeli and Andi Kleen and Orran Krieger and Rusty Russell" ,note="private communication" } +@InProceedings{HerlihyLM02 +,author={Maurice Herlihy and Victor Luchangco and Mark Moir} +,title="The Repeat Offender Problem: A Mechanism for Supporting Dynamic-Sized, +Lock-Free Data Structures" +,booktitle={Proceedings of 16\textsuperscript{th} International +Symposium on Distributed Computing} +,year=2002 +,month="October" +,pages="339-353" +} + @unpublished{Sarma02b ,Author="Dipankar Sarma" ,Title="Some dcache\_rcu benchmark numbers" @@ -749,6 +820,19 @@ Andrea Arcangeli and Andi Kleen and Orran Krieger and Rusty Russell" } } +@unpublished{MingmingCao2002IPCRCU +,Author="Mingming Cao" +,Title="[PATCH]updated ipc lock patch" +,month="October" +,year="2002" +,note="Available: +\url{https://lkml.org/lkml/2002/10/24/262} +[Viewed February 15, 2014]" +,annotation={ + Mingming Cao's patch to introduce RCU to SysV IPC. +} +} + @unpublished{LinusTorvalds2003a ,Author="Linus Torvalds" ,Title="Re: {[PATCH]} small fixes in brlock.h" @@ -982,6 +1066,23 @@ Realtime Applications" } } +@article{MagedMichael04a +,author="Maged M. Michael" +,title="Hazard Pointers: Safe Memory Reclamation for Lock-Free Objects" +,Year="2004" +,Month="June" +,journal="IEEE Transactions on Parallel and Distributed Systems" +,volume="15" +,number="6" +,pages="491-504" +,url="Available: +\url{http://www.research.ibm.com/people/m/michael/ieeetpds-2004.pdf} +[Viewed March 1, 2005]" +,annotation={ + New canonical hazard-pointer citation. +} +} + @phdthesis{PaulEdwardMcKenneyPhD ,author="Paul E. McKenney" ,title="Exploiting Deferred Destruction: diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt index 91266193b8f4..9d10d1db16a5 100644 --- a/Documentation/RCU/checklist.txt +++ b/Documentation/RCU/checklist.txt @@ -256,10 +256,10 @@ over a rather long period of time, but improvements are always welcome! variations on this theme. b. Limiting update rate. For example, if updates occur only - once per hour, then no explicit rate limiting is required, - unless your system is already badly broken. The dcache - subsystem takes this approach -- updates are guarded - by a global lock, limiting their rate. 
+ once per hour, then no explicit rate limiting is + required, unless your system is already badly broken. + Older versions of the dcache subsystem take this approach, + guarding updates with a global lock, limiting their rate. c. Trusted update -- if updates can only be done manually by superuser or some other trusted user, then it might not @@ -268,7 +268,8 @@ over a rather long period of time, but improvements are always welcome! the machine. d. Use call_rcu_bh() rather than call_rcu(), in order to take - advantage of call_rcu_bh()'s faster grace periods. + advantage of call_rcu_bh()'s faster grace periods. (This + is only a partial solution, though.) e. Periodically invoke synchronize_rcu(), permitting a limited number of updates per grace period. @@ -276,6 +277,13 @@ over a rather long period of time, but improvements are always welcome! The same cautions apply to call_rcu_bh(), call_rcu_sched(), call_srcu(), and kfree_rcu(). + Note that although these primitives do take action to avoid memory + exhaustion when any given CPU has too many callbacks, a determined + user could still exhaust memory. This is especially the case + if a system with a large number of CPUs has been configured to + offload all of its RCU callbacks onto a single CPU, or if the + system has relatively little free memory. + 9. All RCU list-traversal primitives, which include rcu_dereference(), list_for_each_entry_rcu(), and list_for_each_safe_rcu(), must be either within an RCU read-side diff --git a/Documentation/devicetree/bindings/net/micrel-ks8851.txt b/Documentation/devicetree/bindings/net/micrel-ks8851.txt index 11ace3c3d805..4fc392763611 100644 --- a/Documentation/devicetree/bindings/net/micrel-ks8851.txt +++ b/Documentation/devicetree/bindings/net/micrel-ks8851.txt @@ -7,3 +7,4 @@ Required properties: Optional properties: - local-mac-address : Ethernet mac address to use +- vdd-supply: supply for Ethernet mac diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt index 827104fb9364..f3cd299fcc41 100644 --- a/Documentation/kernel-per-CPU-kthreads.txt +++ b/Documentation/kernel-per-CPU-kthreads.txt @@ -162,7 +162,18 @@ Purpose: Execute workqueue requests To reduce its OS jitter, do any of the following: 1. Run your workload at a real-time priority, which will allow preempting the kworker daemons. -2. Do any of the following needed to avoid jitter that your +2. A given workqueue can be made visible in the sysfs filesystem + by passing the WQ_SYSFS flag to that workqueue's alloc_workqueue(). + Such a workqueue can be confined to a given subset of the + CPUs using the /sys/devices/virtual/workqueue/*/cpumask sysfs + files. The set of WQ_SYSFS workqueues can be displayed using + "ls /sys/devices/virtual/workqueue". That said, the workqueues + maintainer would like to caution people against indiscriminately + sprinkling WQ_SYSFS across all the workqueues. The reason for + caution is that it is easy to add WQ_SYSFS, but because sysfs is + part of the formal user/kernel API, it can be nearly impossible + to remove it, even if its addition was a mistake. +3. Do any of the following needed to avoid jitter that your application cannot tolerate: a.
Build your kernel with CONFIG_SLUB=y rather than CONFIG_SLAB=y, thus avoiding the slab allocator's periodic diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt index 102dc19c4119..11c1d2049662 100644 --- a/Documentation/memory-barriers.txt +++ b/Documentation/memory-barriers.txt @@ -608,26 +608,30 @@ as follows: b = p; /* BUG: Compiler can reorder!!! */ do_something(); -The solution is again ACCESS_ONCE(), which preserves the ordering between -the load from variable 'a' and the store to variable 'b': +The solution is again ACCESS_ONCE() and barrier(), which preserve the +ordering between the load from variable 'a' and the store to variable 'b': q = ACCESS_ONCE(a); if (q) { + barrier(); ACCESS_ONCE(b) = p; do_something(); } else { + barrier(); ACCESS_ONCE(b) = p; do_something_else(); } -You could also use barrier() to prevent the compiler from moving -the stores to variable 'b', but barrier() would not prevent the -compiler from proving to itself that a==1 always, so ACCESS_ONCE() -is also needed. +The initial ACCESS_ONCE() is required to prevent the compiler from +proving the value of 'a', and the pair of barrier() invocations are +required to prevent the compiler from pulling the two identical stores +to 'b' out from the legs of the "if" statement. It is important to note that control dependencies absolutely require a conditional. For example, the following "optimized" version of -the above example breaks ordering: +the above example breaks ordering, which is why the barrier() invocations +are absolutely required if you have identical stores in both legs of +the "if" statement: q = ACCESS_ONCE(a); ACCESS_ONCE(b) = p; /* BUG: No ordering vs. load from a!!! */ @@ -643,9 +647,11 @@ It is of course legal for the prior load to be part of the conditional, for example, as follows: if (ACCESS_ONCE(a) > 0) { + barrier(); ACCESS_ONCE(b) = q / 2; do_something(); } else { + barrier(); ACCESS_ONCE(b) = q / 3; do_something_else(); } @@ -659,9 +665,11 @@ the needed conditional. For example: q = ACCESS_ONCE(a); if (q % MAX) { + barrier(); ACCESS_ONCE(b) = p; do_something(); } else { + barrier(); ACCESS_ONCE(b) = p; do_something_else(); } @@ -723,8 +731,13 @@ In summary: use smp_rmb(), smp_wmb(), or, in the case of prior stores and later loads, smp_mb(). + (*) If both legs of the "if" statement begin with identical stores + to the same variable, a barrier() statement is required at the + beginning of each leg of the "if" statement. + (*) Control dependencies require at least one run-time conditional - between the prior load and the subsequent store. If the compiler + between the prior load and the subsequent store, and this + conditional must involve the prior load. If the compiler is able to optimize the conditional away, it will have also optimized away the ordering. Careful use of ACCESS_ONCE() can help to preserve the needed conditional. @@ -1249,6 +1262,23 @@ The ACCESS_ONCE() function can prevent any number of optimizations that, while perfectly safe in single-threaded code, can be fatal in concurrent code. Here are some examples of these sorts of optimizations: + (*) The compiler is within its rights to reorder loads and stores + to the same variable, and in some cases, the CPU is within its + rights to reorder loads to the same variable. This means that + the following code: + + a[0] = x; + a[1] = x; + + Might result in an older value of x stored in a[1] than in a[0].
+ Prevent both the compiler and the CPU from doing this as follows: + a[0] = ACCESS_ONCE(x); + a[1] = ACCESS_ONCE(x); + + In short, ACCESS_ONCE() provides cache coherence for accesses from + multiple CPUs to a single variable. + (*) The compiler is within its rights to merge successive loads from the same variable. Such merging can cause the compiler to "optimize" the following code: @@ -1644,12 +1674,12 @@ for each construct. These operations all imply certain barriers: Memory operations issued after the ACQUIRE will be completed after the ACQUIRE operation has completed. - Memory operations issued before the ACQUIRE may be completed after the - ACQUIRE operation has completed. An smp_mb__before_spinlock(), combined - with a following ACQUIRE, orders prior loads against subsequent stores and - stores and prior stores against subsequent stores. Note that this is - weaker than smp_mb()! The smp_mb__before_spinlock() primitive is free on - many architectures. + Memory operations issued before the ACQUIRE may be completed after + the ACQUIRE operation has completed. An smp_mb__before_spinlock(), + combined with a following ACQUIRE, orders prior loads against + subsequent loads and stores and also orders prior stores against + subsequent stores. Note that this is weaker than smp_mb()! The + smp_mb__before_spinlock() primitive is free on many architectures. (2) RELEASE operation implication: @@ -1694,24 +1724,21 @@ may occur as: ACQUIRE M, STORE *B, STORE *A, RELEASE M -This same reordering can of course occur if the lock's ACQUIRE and RELEASE are -to the same lock variable, but only from the perspective of another CPU not -holding that lock. - -In short, a RELEASE followed by an ACQUIRE may -not- be assumed to be a full -memory barrier because it is possible for a preceding RELEASE to pass a -later ACQUIRE from the viewpoint of the CPU, but not from the viewpoint -of the compiler. Note that deadlocks cannot be introduced by this -interchange because if such a deadlock threatened, the RELEASE would -simply complete. -  -If it is necessary for a RELEASE-ACQUIRE pair to produce a full barrier, the -ACQUIRE can be followed by an smp_mb__after_unlock_lock() invocation. This -will produce a full barrier if either (a) the RELEASE and the ACQUIRE are -executed by the same CPU or task, or (b) the RELEASE and ACQUIRE act on the -same variable. The smp_mb__after_unlock_lock() primitive is free on many -architectures. Without smp_mb__after_unlock_lock(), the critical sections -corresponding to the RELEASE and the ACQUIRE can cross: +When the ACQUIRE and RELEASE are a lock acquisition and release, +respectively, this same reordering can occur if the lock's ACQUIRE and +RELEASE are to the same lock variable, but only from the perspective of +another CPU not holding that lock. In short, an ACQUIRE followed by a +RELEASE may -not- be assumed to be a full memory barrier. + +Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not +imply a full memory barrier. If it is necessary for a RELEASE-ACQUIRE +pair to produce a full barrier, the ACQUIRE can be followed by an +smp_mb__after_unlock_lock() invocation. This will produce a full barrier +if either (a) the RELEASE and the ACQUIRE are executed by the same +CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable. +The smp_mb__after_unlock_lock() primitive is free on many architectures.
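Rendering the abstract RELEASE/ACQUIRE notation as concrete spinlock calls may help; M, N, A, and B are the hypothetical locks and variables from the example, and smp_mb__after_unlock_lock() is the 3.14-era primitive just described:

	/* CPU 0 */
	*A = a;
	spin_unlock(&M);		/* RELEASE M */
	spin_lock(&N);			/* ACQUIRE N */
	smp_mb__after_unlock_lock();	/* promote the pair to a full barrier */
	*B = b;				/* other CPUs now see *A before *B */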
+Without smp_mb__after_unlock_lock(), the CPU's execution of the critical +sections corresponding to the RELEASE and the ACQUIRE can cross, so that: *A = a; RELEASE M @@ -1722,7 +1749,36 @@ could occur as: ACQUIRE N, STORE *B, STORE *A, RELEASE M -With smp_mb__after_unlock_lock(), they cannot, so that: +It might appear that this reordering could introduce a deadlock. +However, this cannot happen because if such a deadlock threatened, +the RELEASE would simply complete, thereby avoiding the deadlock. + + Why does this work? + + One key point is that we are only talking about the CPU doing + the reordering, not the compiler. If the compiler (or, for + that matter, the developer) switched the operations, deadlock + -could- occur. + + But suppose the CPU reordered the operations. In this case, + the unlock precedes the lock in the assembly code. The CPU + simply elected to try executing the later lock operation first. + If there is a deadlock, this lock operation will simply spin (or + try to sleep, but more on that later). The CPU will eventually + execute the unlock operation (which preceded the lock operation + in the assembly code), which will unravel the potential deadlock, + allowing the lock operation to succeed. + + But what if the lock is a sleeplock? In that case, the code will + try to enter the scheduler, where it will eventually encounter + a memory barrier, which will force the earlier unlock operation + to complete, again unraveling the deadlock. There might be + a sleep-unlock race, but the locking primitive needs to resolve + such races properly in any case. + +With smp_mb__after_unlock_lock(), the two critical sections cannot overlap. +For example, with the following code, the store to *A will always be +seen by other CPUs before the store to *B: *A = a; RELEASE M @@ -1730,13 +1786,18 @@ With smp_mb__after_unlock_lock(), they cannot, so that: smp_mb__after_unlock_lock(); *B = b; -will always occur as either of the following: +The operations will always occur in one of the following orders: - STORE *A, RELEASE, ACQUIRE, STORE *B - STORE *A, ACQUIRE, RELEASE, STORE *B + STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B + STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B + ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B If the RELEASE and ACQUIRE were instead both operating on the same lock -variable, only the first of these two alternatives can occur. +variable, only the first of these alternatives can occur. In addition, +the more strongly ordered systems may rule out some of the above orders. +But in any case, as noted earlier, the smp_mb__after_unlock_lock() +ensures that the store to *A will always be seen as happening before +the store to *B. Locks and semaphores may not provide any guarantee of ordering on UP compiled systems, and so cannot be counted on in such a situation to actually achieve @@ -2757,7 +2818,7 @@ in that order, but, without intervention, the sequence may have almost any combination of elements combined or discarded, provided the program's view of the world remains consistent. Note that ACCESS_ONCE() is -not- optional in the above example, as there are architectures where a given CPU might -interchange successive loads to the same location. On such architectures, +reorder successive loads to the same location. 
On such architectures, ACCESS_ONCE() does whatever is necessary to prevent this, for example, on Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the special ld.acq and st.rel instructions that prevent such reordering. diff --git a/Documentation/networking/netlink_mmap.txt b/Documentation/networking/netlink_mmap.txt index b26122973525..c6af4bac5aa8 100644 --- a/Documentation/networking/netlink_mmap.txt +++ b/Documentation/networking/netlink_mmap.txt @@ -226,9 +226,9 @@ Ring setup: void *rx_ring, *tx_ring; /* Configure ring parameters */ - if (setsockopt(fd, NETLINK_RX_RING, &req, sizeof(req)) < 0) + if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof(req)) < 0) exit(1); - if (setsockopt(fd, NETLINK_TX_RING, &req, sizeof(req)) < 0) + if (setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof(req)) < 0) exit(1); /* Calculate size of each individual ring */ diff --git a/Documentation/networking/packet_mmap.txt b/Documentation/networking/packet_mmap.txt index 1404674c0a02..6fea79efb4cb 100644 --- a/Documentation/networking/packet_mmap.txt +++ b/Documentation/networking/packet_mmap.txt @@ -453,7 +453,7 @@ TP_STATUS_COPY : This flag indicates that the frame (and associated enabled previously with setsockopt() and the PACKET_COPY_THRESH option. - The number of frames than can be buffered to + The number of frames that can be buffered to be read with recvfrom is limited like a normal socket. See the SO_RCVBUF option in the socket (7) man page. diff --git a/Documentation/networking/timestamping.txt b/Documentation/networking/timestamping.txt index 661d3c316a17..048c92b487f6 100644 --- a/Documentation/networking/timestamping.txt +++ b/Documentation/networking/timestamping.txt @@ -21,26 +21,38 @@ has such a feature). SO_TIMESTAMPING: -Instructs the socket layer which kind of information is wanted. The -parameter is an integer with some of the following bits set. Setting -other bits is an error and doesn't change the current state. - -SOF_TIMESTAMPING_TX_HARDWARE: try to obtain send time stamp in hardware -SOF_TIMESTAMPING_TX_SOFTWARE: if SOF_TIMESTAMPING_TX_HARDWARE is off or - fails, then do it in software -SOF_TIMESTAMPING_RX_HARDWARE: return the original, unmodified time stamp - as generated by the hardware -SOF_TIMESTAMPING_RX_SOFTWARE: if SOF_TIMESTAMPING_RX_HARDWARE is off or - fails, then do it in software -SOF_TIMESTAMPING_RAW_HARDWARE: return original raw hardware time stamp -SOF_TIMESTAMPING_SYS_HARDWARE: return hardware time stamp transformed to - the system time base -SOF_TIMESTAMPING_SOFTWARE: return system time stamp generated in - software - -SOF_TIMESTAMPING_TX/RX determine how time stamps are generated. -SOF_TIMESTAMPING_RAW/SYS determine how they are reported in the -following control message: +Instructs the socket layer which kind of information should be collected +and/or reported. The parameter is an integer with some of the following +bits set. Setting other bits is an error and doesn't change the current +state. + +Four of the bits are requests to the stack to try to generate +timestamps. Any combination of them is valid. + +SOF_TIMESTAMPING_TX_HARDWARE: try to obtain send time stamps in hardware +SOF_TIMESTAMPING_TX_SOFTWARE: try to obtain send time stamps in software +SOF_TIMESTAMPING_RX_HARDWARE: try to obtain receive time stamps in hardware +SOF_TIMESTAMPING_RX_SOFTWARE: try to obtain receive time stamps in software + +The other three bits control which timestamps will be reported in a +generated control message.
If none of these bits are set or if none of +the set bits correspond to data that is available, then the control +message will not be generated: + +SOF_TIMESTAMPING_SOFTWARE: report systime if available +SOF_TIMESTAMPING_SYS_HARDWARE: report hwtimetrans if available +SOF_TIMESTAMPING_RAW_HARDWARE: report hwtimeraw if available + +It is worth noting that timestamps may be collected for reasons other +than being requested by a particular socket with +SOF_TIMESTAMPING_[TR]X_(HARD|SOFT)WARE. For example, most drivers that +can generate hardware receive timestamps ignore +SOF_TIMESTAMPING_RX_HARDWARE. It is still a good idea to set that flag +in case future drivers pay attention. + +If timestamps are reported, they will appear in a control message with +cmsg_level==SOL_SOCKET, cmsg_type==SO_TIMESTAMPING, and a payload like +this: struct scm_timestamping { struct timespec systime; diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt index e55124e7c40c..ec8be46bf48d 100644 --- a/Documentation/sysctl/kernel.txt +++ b/Documentation/sysctl/kernel.txt @@ -320,10 +320,11 @@ This file shows up if CONFIG_DETECT_HUNG_TASK is enabled. ============================================================== -hung_task_warning: +hung_task_warnings: The maximum number of warnings to report. During a check interval -When this value is reached, no more the warnings will be reported. +if a hung task is detected, this value is decreased by 1. +When this value reaches 0, no more warnings will be reported. This file shows up if CONFIG_DETECT_HUNG_TASK is enabled. -1: report an infinite number of warnings. @@ -441,8 +442,7 @@ feature should be disabled. Otherwise, if the system overhead from the feature is too high then the rate the kernel samples for NUMA hinting faults may be controlled by the numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, -numa_balancing_scan_size_mb, numa_balancing_settle_count sysctls and -numa_balancing_migrate_deferred. +numa_balancing_scan_size_mb, and numa_balancing_settle_count sysctls. ============================================================== @@ -483,13 +483,6 @@ rate for each task. numa_balancing_scan_size_mb is how many megabytes worth of pages are scanned for a given scan. -numa_balancing_migrate_deferred is how many page migrations get skipped -unconditionally, after a page migration is skipped because a page is shared -with other tasks. This reduces page migration overhead, and determines -how much stronger the "move task near its memory" policy scheduler becomes, -versus the "move memory near its task" memory management policy, for workloads -with shared memory. 
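Returning to the SO_TIMESTAMPING changes above, a minimal user-space sketch of requesting timestamps and reading the resulting control message; the buffer sizes and flag combination are illustrative, and error handling is elided:

	int val = SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RX_SOFTWARE |
		  SOF_TIMESTAMPING_RAW_HARDWARE | SOF_TIMESTAMPING_SOFTWARE;
	char control[256];
	char data[2048];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = control, .msg_controllen = sizeof(control),
	};
	struct cmsghdr *cmsg;

	setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &val, sizeof(val));
	recvmsg(fd, &msg, 0);
	for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg))
		if (cmsg->cmsg_level == SOL_SOCKET &&
		    cmsg->cmsg_type == SO_TIMESTAMPING) {
			struct scm_timestamping *ts =
				(struct scm_timestamping *)CMSG_DATA(cmsg);
			/* ts->systime, ts->hwtimetrans, ts->hwtimeraw */
		}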
- ============================================================== osrelease, ostype & version: diff --git a/MAINTAINERS b/MAINTAINERS index b7befe758429..900d98eec2fc 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -911,11 +911,11 @@ F: arch/arm/include/asm/hardware/dec21285.h F: arch/arm/mach-footbridge/ ARM/FREESCALE IMX / MXC ARM ARCHITECTURE -M: Shawn Guo <shawn.guo@linaro.org> +M: Shawn Guo <shawn.guo@freescale.com> M: Sascha Hauer <kernel@pengutronix.de> L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained -T: git git://git.linaro.org/people/shawnguo/linux-2.6.git +T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git F: arch/arm/mach-imx/ F: arch/arm/boot/dts/imx* F: arch/arm/configs/imx*_defconfig @@ -1738,6 +1738,7 @@ F: include/uapi/linux/bfs_fs.h BLACKFIN ARCHITECTURE M: Steven Miao <realmz6@gmail.com> L: adi-buildroot-devel@lists.sourceforge.net +T: git git://git.code.sf.net/p/adi-linux/code W: http://blackfin.uclinux.org S: Supported F: arch/blackfin/ @@ -1831,8 +1832,8 @@ F: net/bluetooth/ F: include/net/bluetooth/ BONDING DRIVER -M: Jay Vosburgh <fubar@us.ibm.com> -M: Veaceslav Falico <vfalico@redhat.com> +M: Jay Vosburgh <j.vosburgh@gmail.com> +M: Veaceslav Falico <vfalico@gmail.com> M: Andy Gospodarek <andy@greyhouse.net> L: netdev@vger.kernel.org W: http://sourceforge.net/projects/bonding/ @@ -2800,9 +2801,9 @@ S: Supported F: drivers/acpi/dock.c DOCUMENTATION -M: Rob Landley <rob@landley.net> +M: Randy Dunlap <rdunlap@infradead.org> L: linux-doc@vger.kernel.org -T: TBD +T: quilt http://www.infradead.org/~rdunlap/Doc/patches/ S: Maintained F: Documentation/ @@ -4544,6 +4545,7 @@ M: Greg Rose <gregory.v.rose@intel.com> M: Alex Duyck <alexander.h.duyck@intel.com> M: John Ronciak <john.ronciak@intel.com> M: Mitch Williams <mitch.a.williams@intel.com> +M: Linux NICS <linux.nics@intel.com> L: e1000-devel@lists.sourceforge.net W: http://www.intel.com/support/feedback.htm W: http://e1000.sourceforge.net/ @@ -6002,6 +6004,9 @@ F: include/linux/netdevice.h F: include/uapi/linux/in.h F: include/uapi/linux/net.h F: include/uapi/linux/netdevice.h +F: tools/net/ +F: tools/testing/selftests/net/ +F: lib/random32.c NETWORKING [IPv4/IPv6] M: "David S. 
Miller" <davem@davemloft.net> @@ -1,7 +1,7 @@ VERSION = 3 PATCHLEVEL = 14 SUBLEVEL = 0 -EXTRAVERSION = -rc6 +EXTRAVERSION = NAME = Shuffling Zombie Juror # *DOCUMENTATION* diff --git a/arch/alpha/include/asm/Kbuild b/arch/alpha/include/asm/Kbuild index a73a8e208a4a..7736f426ff3b 100644 --- a/arch/alpha/include/asm/Kbuild +++ b/arch/alpha/include/asm/Kbuild @@ -1,7 +1,8 @@ -generic-y += clkdev.h +generic-y += clkdev.h generic-y += exec.h -generic-y += trace_clock.h -generic-y += preempt.h generic-y += hash.h +generic-y += mcs_spinlock.h +generic-y += preempt.h +generic-y += trace_clock.h diff --git a/arch/arc/include/asm/Kbuild b/arch/arc/include/asm/Kbuild index 0d3362991c31..e76fd79f32b0 100644 --- a/arch/arc/include/asm/Kbuild +++ b/arch/arc/include/asm/Kbuild @@ -1,15 +1,15 @@ generic-y += auxvec.h generic-y += barrier.h -generic-y += bugs.h generic-y += bitsperlong.h +generic-y += bugs.h generic-y += clkdev.h generic-y += cputime.h generic-y += device.h generic-y += div64.h generic-y += emergency-restart.h generic-y += errno.h -generic-y += fcntl.h generic-y += fb.h +generic-y += fcntl.h generic-y += ftrace.h generic-y += hardirq.h generic-y += hash.h @@ -22,6 +22,7 @@ generic-y += kmap_types.h generic-y += kvm_para.h generic-y += local.h generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += mman.h generic-y += msgbuf.h generic-y += param.h @@ -30,6 +31,7 @@ generic-y += pci.h generic-y += percpu.h generic-y += poll.h generic-y += posix_types.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += sembuf.h @@ -48,4 +50,3 @@ generic-y += ucontext.h generic-y += user.h generic-y += vga.h generic-y += xor.h -generic-y += preempt.h diff --git a/arch/arm/boot/dts/sama5d36.dtsi b/arch/arm/boot/dts/sama5d36.dtsi index 6c31c26e6cc0..db58cad6acd3 100644 --- a/arch/arm/boot/dts/sama5d36.dtsi +++ b/arch/arm/boot/dts/sama5d36.dtsi @@ -8,8 +8,8 @@ */ #include "sama5d3.dtsi" #include "sama5d3_can.dtsi" -#include "sama5d3_emac.dtsi" #include "sama5d3_gmac.dtsi" +#include "sama5d3_emac.dtsi" #include "sama5d3_lcd.dtsi" #include "sama5d3_mci2.dtsi" #include "sama5d3_tcb1.dtsi" diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild index 3278afe2c3ab..23e728ecf8ab 100644 --- a/arch/arm/include/asm/Kbuild +++ b/arch/arm/include/asm/Kbuild @@ -7,16 +7,19 @@ generic-y += current.h generic-y += emergency-restart.h generic-y += errno.h generic-y += exec.h +generic-y += hash.h generic-y += ioctl.h generic-y += ipcbuf.h generic-y += irq_regs.h generic-y += kdebug.h generic-y += local.h generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += msgbuf.h generic-y += param.h generic-y += parport.h generic-y += poll.h +generic-y += preempt.h generic-y += resource.h generic-y += sections.h generic-y += segment.h @@ -33,5 +36,3 @@ generic-y += termios.h generic-y += timex.h generic-y += trace_clock.h generic-y += unaligned.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h index 58b8b84adcd2..2fe85fff5cca 100644 --- a/arch/arm/include/asm/topology.h +++ b/arch/arm/include/asm/topology.h @@ -20,9 +20,6 @@ extern struct cputopo_arm cpu_topology[NR_CPUS]; #define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling) #define topology_thread_cpumask(cpu) (&cpu_topology[cpu].thread_sibling) -#define mc_capable() (cpu_topology[0].socket_id != -1) -#define smt_capable() (cpu_topology[0].thread_id != -1) - void init_cpu_topology(void); void store_cpu_topology(unsigned int 
cpuid); const struct cpumask *cpu_coregroup_mask(int cpu); diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index 92f7b15dd221..adabeababeb0 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -30,7 +30,6 @@ #include <linux/uaccess.h> #include <linux/random.h> #include <linux/hw_breakpoint.h> -#include <linux/cpuidle.h> #include <linux/leds.h> #include <linux/reboot.h> @@ -133,7 +132,11 @@ EXPORT_SYMBOL_GPL(arm_pm_restart); void (*arm_pm_idle)(void); -static void default_idle(void) +/* + * Called from the core idle loop. + */ + +void arch_cpu_idle(void) { if (arm_pm_idle) arm_pm_idle(); @@ -168,15 +171,6 @@ void arch_cpu_idle_dead(void) #endif /* - * Called from the core idle loop. - */ -void arch_cpu_idle(void) -{ - if (cpuidle_idle_call()) - default_idle(); -} - -/* * Called by kexec, immediately prior to machine_kexec(). * * This must completely disable all secondary CPUs; simply causing those CPUs diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild index 71c53ecfcc3a..3bdfdda70567 100644 --- a/arch/arm64/include/asm/Kbuild +++ b/arch/arm64/include/asm/Kbuild @@ -12,6 +12,7 @@ generic-y += dma.h generic-y += emergency-restart.h generic-y += errno.h generic-y += ftrace.h +generic-y += hash.h generic-y += hw_irq.h generic-y += ioctl.h generic-y += ioctls.h @@ -22,12 +23,14 @@ generic-y += kmap_types.h generic-y += kvm_para.h generic-y += local.h generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += mman.h generic-y += msgbuf.h generic-y += mutex.h generic-y += pci.h generic-y += poll.h generic-y += posix_types.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += sections.h @@ -38,8 +41,8 @@ generic-y += shmbuf.h generic-y += sizes.h generic-y += socket.h generic-y += sockios.h -generic-y += switch_to.h generic-y += swab.h +generic-y += switch_to.h generic-y += termbits.h generic-y += termios.h generic-y += topology.h @@ -49,5 +52,3 @@ generic-y += unaligned.h generic-y += user.h generic-y += vga.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 1c0a9be2ffa8..9cce0098f4cd 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -33,7 +33,6 @@ #include <linux/kallsyms.h> #include <linux/init.h> #include <linux/cpu.h> -#include <linux/cpuidle.h> #include <linux/elfcore.h> #include <linux/pm.h> #include <linux/tick.h> @@ -94,10 +93,8 @@ void arch_cpu_idle(void) * This should do all the clock switching and wait for interrupt * tricks */ - if (cpuidle_idle_call()) { - cpu_do_idle(); - local_irq_enable(); - } + cpu_do_idle(); + local_irq_enable(); } #ifdef CONFIG_HOTPLUG_CPU diff --git a/arch/avr32/include/asm/Kbuild b/arch/avr32/include/asm/Kbuild index c7c64a63c29f..00a0f3ccd6eb 100644 --- a/arch/avr32/include/asm/Kbuild +++ b/arch/avr32/include/asm/Kbuild @@ -1,22 +1,23 @@ -generic-y += clkdev.h -generic-y += cputime.h -generic-y += delay.h -generic-y += device.h -generic-y += div64.h -generic-y += emergency-restart.h -generic-y += exec.h -generic-y += futex.h -generic-y += preempt.h -generic-y += irq_regs.h -generic-y += param.h -generic-y += local.h -generic-y += local64.h -generic-y += percpu.h -generic-y += scatterlist.h -generic-y += sections.h -generic-y += topology.h -generic-y += trace_clock.h +generic-y += clkdev.h +generic-y += cputime.h +generic-y += delay.h +generic-y += device.h +generic-y += div64.h +generic-y += emergency-restart.h +generic-y += exec.h 
+generic-y += futex.h +generic-y += hash.h +generic-y += irq_regs.h +generic-y += local.h +generic-y += local64.h +generic-y += mcs_spinlock.h +generic-y += param.h +generic-y += percpu.h +generic-y += preempt.h +generic-y += scatterlist.h +generic-y += sections.h +generic-y += topology.h +generic-y += trace_clock.h generic-y += vga.h -generic-y += xor.h -generic-y += hash.h +generic-y += xor.h diff --git a/arch/blackfin/include/asm/Kbuild b/arch/blackfin/include/asm/Kbuild index 359d36fdc247..0d93b9a79ca9 100644 --- a/arch/blackfin/include/asm/Kbuild +++ b/arch/blackfin/include/asm/Kbuild @@ -10,6 +10,7 @@ generic-y += emergency-restart.h generic-y += errno.h generic-y += fb.h generic-y += futex.h +generic-y += hash.h generic-y += hw_irq.h generic-y += ioctl.h generic-y += ipcbuf.h @@ -17,14 +18,16 @@ generic-y += irq_regs.h generic-y += kdebug.h generic-y += kmap_types.h generic-y += kvm_para.h -generic-y += local64.h generic-y += local.h +generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += mman.h generic-y += msgbuf.h generic-y += mutex.h generic-y += param.h generic-y += percpu.h generic-y += pgalloc.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += sembuf.h @@ -44,5 +47,3 @@ generic-y += ucontext.h generic-y += unaligned.h generic-y += user.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/c6x/include/asm/Kbuild b/arch/c6x/include/asm/Kbuild index d73bb85ccdd3..8dbdce8421b0 100644 --- a/arch/c6x/include/asm/Kbuild +++ b/arch/c6x/include/asm/Kbuild @@ -15,6 +15,7 @@ generic-y += exec.h generic-y += fb.h generic-y += fcntl.h generic-y += futex.h +generic-y += hash.h generic-y += hw_irq.h generic-y += io.h generic-y += ioctl.h @@ -24,6 +25,7 @@ generic-y += irq_regs.h generic-y += kdebug.h generic-y += kmap_types.h generic-y += local.h +generic-y += mcs_spinlock.h generic-y += mman.h generic-y += mmu.h generic-y += mmu_context.h @@ -34,6 +36,7 @@ generic-y += percpu.h generic-y += pgalloc.h generic-y += poll.h generic-y += posix_types.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += segment.h @@ -56,5 +59,3 @@ generic-y += ucontext.h generic-y += user.h generic-y += vga.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/cris/include/asm/Kbuild b/arch/cris/include/asm/Kbuild index f3fd8768f095..056027f38351 100644 --- a/arch/cris/include/asm/Kbuild +++ b/arch/cris/include/asm/Kbuild @@ -9,8 +9,9 @@ generic-y += exec.h generic-y += hash.h generic-y += kvm_para.h generic-y += linkage.h +generic-y += mcs_spinlock.h generic-y += module.h +generic-y += preempt.h generic-y += trace_clock.h generic-y += vga.h generic-y += xor.h -generic-y += preempt.h diff --git a/arch/cris/include/asm/bitops.h b/arch/cris/include/asm/bitops.h index 184066ceb1f6..053c17b36559 100644 --- a/arch/cris/include/asm/bitops.h +++ b/arch/cris/include/asm/bitops.h @@ -144,7 +144,7 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr) * definition, which doesn't have the same semantics. We don't want to * use -fno-builtin, so just hide the name ffs. 
*/ -#define ffs kernel_ffs +#define ffs(x) kernel_ffs(x) #include <asm-generic/bitops/fls.h> #include <asm-generic/bitops/__fls.h> diff --git a/arch/frv/include/asm/Kbuild b/arch/frv/include/asm/Kbuild index bc42f14c9c2e..babb9338ebf8 100644 --- a/arch/frv/include/asm/Kbuild +++ b/arch/frv/include/asm/Kbuild @@ -1,6 +1,7 @@ generic-y += clkdev.h generic-y += exec.h -generic-y += trace_clock.h -generic-y += preempt.h generic-y += hash.h +generic-y += mcs_spinlock.h +generic-y += preempt.h +generic-y += trace_clock.h diff --git a/arch/hexagon/include/asm/Kbuild b/arch/hexagon/include/asm/Kbuild index 38ca45d3df1e..eadcc118f950 100644 --- a/arch/hexagon/include/asm/Kbuild +++ b/arch/hexagon/include/asm/Kbuild @@ -25,14 +25,16 @@ generic-y += ipcbuf.h generic-y += irq_regs.h generic-y += kdebug.h generic-y += kmap_types.h -generic-y += local64.h generic-y += local.h +generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += mman.h generic-y += msgbuf.h generic-y += pci.h generic-y += percpu.h generic-y += poll.h generic-y += posix_types.h +generic-y += preempt.h generic-y += resource.h generic-y += rwsem.h generic-y += scatterlist.h @@ -45,8 +47,8 @@ generic-y += siginfo.h generic-y += sizes.h generic-y += socket.h generic-y += sockios.h -generic-y += statfs.h generic-y += stat.h +generic-y += statfs.h generic-y += termbits.h generic-y += termios.h generic-y += topology.h @@ -55,4 +57,3 @@ generic-y += types.h generic-y += ucontext.h generic-y += unaligned.h generic-y += xor.h -generic-y += preempt.h diff --git a/arch/ia64/include/asm/Kbuild b/arch/ia64/include/asm/Kbuild index 283a83154b5e..0da4aa2602ae 100644 --- a/arch/ia64/include/asm/Kbuild +++ b/arch/ia64/include/asm/Kbuild @@ -1,8 +1,9 @@ generic-y += clkdev.h generic-y += exec.h +generic-y += hash.h generic-y += kvm_para.h -generic-y += trace_clock.h +generic-y += mcs_spinlock.h generic-y += preempt.h +generic-y += trace_clock.h generic-y += vtime.h -generic-y += hash.h diff --git a/arch/ia64/include/asm/topology.h b/arch/ia64/include/asm/topology.h index a2496e449b75..5cb55a1e606b 100644 --- a/arch/ia64/include/asm/topology.h +++ b/arch/ia64/include/asm/topology.h @@ -77,7 +77,6 @@ void build_cpu_to_node_map(void); #define topology_core_id(cpu) (cpu_data(cpu)->core_id) #define topology_core_cpumask(cpu) (&cpu_core_map[cpu]) #define topology_thread_cpumask(cpu) (&per_cpu(cpu_sibling_map, cpu)) -#define smt_capable() (smp_num_siblings > 1) #endif extern void arch_fix_phys_package_id(int num, u32 slot); diff --git a/arch/ia64/kernel/uncached.c b/arch/ia64/kernel/uncached.c index a96bcf83a735..20e8a9b21d75 100644 --- a/arch/ia64/kernel/uncached.c +++ b/arch/ia64/kernel/uncached.c @@ -98,7 +98,7 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid) /* attempt to allocate a granule's worth of cached memory pages */ page = alloc_pages_exact_node(nid, - GFP_KERNEL | __GFP_ZERO | GFP_THISNODE, + GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE, IA64_GRANULE_SHIFT-PAGE_SHIFT); if (!page) { mutex_unlock(&uc_pool->add_chunk_mutex); diff --git a/arch/m32r/include/asm/Kbuild b/arch/m32r/include/asm/Kbuild index 932435ac4e5c..5825a35b2c56 100644 --- a/arch/m32r/include/asm/Kbuild +++ b/arch/m32r/include/asm/Kbuild @@ -1,7 +1,8 @@ generic-y += clkdev.h generic-y += exec.h +generic-y += hash.h +generic-y += mcs_spinlock.h generic-y += module.h -generic-y += trace_clock.h generic-y += preempt.h -generic-y += hash.h +generic-y += trace_clock.h diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig index dbdd2231c75d..b2e322939256 100644 
--- a/arch/m68k/Kconfig +++ b/arch/m68k/Kconfig @@ -17,6 +17,7 @@ config M68K select FPU if MMU select ARCH_WANT_IPC_PARSE_VERSION select ARCH_USES_GETTIMEOFFSET if MMU && !COLDFIRE + select HAVE_FUTEX_CMPXCHG if MMU && FUTEX select HAVE_MOD_ARCH_SPECIFIC select MODULES_USE_ELF_REL select MODULES_USE_ELF_RELA diff --git a/arch/m68k/include/asm/Kbuild b/arch/m68k/include/asm/Kbuild index 6fb9e813a910..c67c94a2d672 100644 --- a/arch/m68k/include/asm/Kbuild +++ b/arch/m68k/include/asm/Kbuild @@ -14,8 +14,9 @@ generic-y += irq_regs.h generic-y += kdebug.h generic-y += kmap_types.h generic-y += kvm_para.h -generic-y += local64.h generic-y += local.h +generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += mman.h generic-y += mutex.h generic-y += percpu.h diff --git a/arch/metag/include/asm/Kbuild b/arch/metag/include/asm/Kbuild index b716d807c2ec..c29ead89a317 100644 --- a/arch/metag/include/asm/Kbuild +++ b/arch/metag/include/asm/Kbuild @@ -13,6 +13,7 @@ generic-y += fb.h generic-y += fcntl.h generic-y += futex.h generic-y += hardirq.h +generic-y += hash.h generic-y += hw_irq.h generic-y += ioctl.h generic-y += ioctls.h @@ -23,6 +24,7 @@ generic-y += kmap_types.h generic-y += kvm_para.h generic-y += local.h generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += msgbuf.h generic-y += mutex.h generic-y += param.h @@ -30,6 +32,7 @@ generic-y += pci.h generic-y += percpu.h generic-y += poll.h generic-y += posix_types.h +generic-y += preempt.h generic-y += scatterlist.h generic-y += sections.h generic-y += sembuf.h @@ -52,5 +55,3 @@ generic-y += unaligned.h generic-y += user.h generic-y += vga.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/microblaze/include/asm/Kbuild b/arch/microblaze/include/asm/Kbuild index 2b98bc73642a..1f590ab8f323 100644 --- a/arch/microblaze/include/asm/Kbuild +++ b/arch/microblaze/include/asm/Kbuild @@ -3,6 +3,7 @@ generic-y += barrier.h generic-y += clkdev.h generic-y += exec.h generic-y += hash.h -generic-y += trace_clock.h -generic-y += syscalls.h +generic-y += mcs_spinlock.h generic-y += preempt.h +generic-y += syscalls.h +generic-y += trace_clock.h diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index dcae3a7035db..95fa1f1d5c8b 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -1776,12 +1776,12 @@ endchoice config FORCE_MAX_ZONEORDER int "Maximum zone order" - range 14 64 if HUGETLB_PAGE && PAGE_SIZE_64KB - default "14" if HUGETLB_PAGE && PAGE_SIZE_64KB - range 13 64 if HUGETLB_PAGE && PAGE_SIZE_32KB - default "13" if HUGETLB_PAGE && PAGE_SIZE_32KB - range 12 64 if HUGETLB_PAGE && PAGE_SIZE_16KB - default "12" if HUGETLB_PAGE && PAGE_SIZE_16KB + range 14 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB + default "14" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB + range 13 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB + default "13" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB + range 12 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB + default "12" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB range 11 64 default "11" help @@ -2353,9 +2353,8 @@ config SECCOMP If unsure, say Y. Only embedded should say N here. 
config MIPS_O32_FP64_SUPPORT - bool "Support for O32 binaries using 64-bit FP" + bool "Support for O32 binaries using 64-bit FP (EXPERIMENTAL)" depends on 32BIT || MIPS32_O32 - default y help When this is enabled, the kernel will support use of 64-bit floating point registers with binaries using the O32 ABI along with the @@ -2367,7 +2366,14 @@ config MIPS_O32_FP64_SUPPORT of your kernel & potentially improve FP emulation performance by saying N here. - If unsure, say Y. + Although binutils currently supports use of this flag, the details + concerning its effect upon the O32 ABI in userland are still being + worked on. In order to avoid userland becoming dependent upon current + behaviour before the details have been finalised, this option should + be considered experimental and only enabled by those working upon + said details. + + If unsure, say N. config USE_OF bool diff --git a/arch/mips/alchemy/board-gpr.c b/arch/mips/alchemy/board-gpr.c index 9edc35ff8cf1..acf9a2a37f5a 100644 --- a/arch/mips/alchemy/board-gpr.c +++ b/arch/mips/alchemy/board-gpr.c @@ -53,10 +53,8 @@ void __init prom_init(void) prom_init_cmdline(); memsize_str = prom_getenv("memsize"); - if (!memsize_str) + if (!memsize_str || kstrtoul(memsize_str, 0, &memsize)) memsize = 0x04000000; - else - strict_strtoul(memsize_str, 0, &memsize); add_memory_region(0, memsize, BOOT_MEM_RAM); } diff --git a/arch/mips/alchemy/board-mtx1.c b/arch/mips/alchemy/board-mtx1.c index 9969dbab19e3..25a59a23547e 100644 --- a/arch/mips/alchemy/board-mtx1.c +++ b/arch/mips/alchemy/board-mtx1.c @@ -52,10 +52,8 @@ void __init prom_init(void) prom_init_cmdline(); memsize_str = prom_getenv("memsize"); - if (!memsize_str) + if (!memsize_str || kstrtoul(memsize_str, 0, &memsize)) memsize = 0x04000000; - else - strict_strtoul(memsize_str, 0, &memsize); add_memory_region(0, memsize, BOOT_MEM_RAM); } diff --git a/arch/mips/bcm47xx/board.c b/arch/mips/bcm47xx/board.c index 6d612e2b949b..cdd8246f92b3 100644 --- a/arch/mips/bcm47xx/board.c +++ b/arch/mips/bcm47xx/board.c @@ -1,3 +1,4 @@ +#include <linux/errno.h> #include <linux/export.h> #include <linux/string.h> #include <bcm47xx_board.h> diff --git a/arch/mips/bcm47xx/nvram.c b/arch/mips/bcm47xx/nvram.c index 6decb27cf48b..2bed73a684ae 100644 --- a/arch/mips/bcm47xx/nvram.c +++ b/arch/mips/bcm47xx/nvram.c @@ -196,7 +196,7 @@ int bcm47xx_nvram_gpio_pin(const char *name) char nvram_var[10]; char buf[30]; - for (i = 0; i < 16; i++) { + for (i = 0; i < 32; i++) { err = snprintf(nvram_var, sizeof(nvram_var), "gpio%i", i); if (err <= 0) continue; diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c index 25fbfae06c1f..c2bb4f896ce7 100644 --- a/arch/mips/cavium-octeon/octeon-irq.c +++ b/arch/mips/cavium-octeon/octeon-irq.c @@ -975,10 +975,6 @@ static int octeon_irq_ciu_xlat(struct irq_domain *d, if (ciu > 1 || bit > 63) return -EINVAL; - /* These are the GPIO lines */ - if (ciu == 0 && bit >= 16 && bit < 32) - return -EINVAL; - *out_hwirq = (ciu << 6) | bit; *out_type = 0; @@ -1007,6 +1003,10 @@ static int octeon_irq_ciu_map(struct irq_domain *d, if (!octeon_irq_virq_in_range(virq)) return -EINVAL; + /* Don't map irq if it is reserved for GPIO.
*/ + if (line == 0 && bit >= 16 && bit <32) + return 0; + if (line > 1 || octeon_irq_ciu_to_irq[line][bit] != 0) return -EINVAL; @@ -1525,10 +1525,6 @@ static int octeon_irq_ciu2_xlat(struct irq_domain *d, ciu = intspec[0]; bit = intspec[1]; - /* Line 7 are the GPIO lines */ - if (ciu > 6 || bit > 63) - return -EINVAL; - *out_hwirq = (ciu << 6) | bit; *out_type = 0; @@ -1570,8 +1566,14 @@ static int octeon_irq_ciu2_map(struct irq_domain *d, if (!octeon_irq_virq_in_range(virq)) return -EINVAL; - /* Line 7 are the GPIO lines */ - if (line > 6 || octeon_irq_ciu_to_irq[line][bit] != 0) + /* + * Don't map irq if it is reserved for GPIO. + * (Line 7 are the GPIO lines.) + */ + if (line == 7) + return 0; + + if (line > 7 || octeon_irq_ciu_to_irq[line][bit] != 0) return -EINVAL; if (octeon_irq_ciu2_is_edge(line, bit)) diff --git a/arch/mips/include/asm/Kbuild b/arch/mips/include/asm/Kbuild index 2d7f65052c1f..05439187891d 100644 --- a/arch/mips/include/asm/Kbuild +++ b/arch/mips/include/asm/Kbuild @@ -2,16 +2,17 @@ generic-y += cputime.h generic-y += current.h generic-y += emergency-restart.h +generic-y += hash.h generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += mutex.h generic-y += parport.h generic-y += percpu.h +generic-y += preempt.h generic-y += scatterlist.h generic-y += sections.h generic-y += segment.h generic-y += serial.h generic-y += trace_clock.h -generic-y += preempt.h generic-y += ucontext.h generic-y += xor.h -generic-y += hash.h diff --git a/arch/mips/include/asm/asmmacro.h b/arch/mips/include/asm/asmmacro.h index 3220c93ea981..4225e99bd7bf 100644 --- a/arch/mips/include/asm/asmmacro.h +++ b/arch/mips/include/asm/asmmacro.h @@ -9,6 +9,7 @@ #define _ASM_ASMMACRO_H #include <asm/hazards.h> +#include <asm/asm-offsets.h> #ifdef CONFIG_32BIT #include <asm/asmmacro-32.h> @@ -54,11 +55,21 @@ .endm .macro local_irq_disable reg=t0 +#ifdef CONFIG_PREEMPT + lw \reg, TI_PRE_COUNT($28) + addi \reg, \reg, 1 + sw \reg, TI_PRE_COUNT($28) +#endif mfc0 \reg, CP0_STATUS ori \reg, \reg, 1 xori \reg, \reg, 1 mtc0 \reg, CP0_STATUS irq_disable_hazard +#ifdef CONFIG_PREEMPT + lw \reg, TI_PRE_COUNT($28) + addi \reg, \reg, -1 + sw \reg, TI_PRE_COUNT($28) +#endif .endm #endif /* CONFIG_MIPS_MT_SMTC */ @@ -106,7 +117,7 @@ .endm .macro fpu_save_double thread status tmp -#if defined(CONFIG_MIPS64) || defined(CONFIG_CPU_MIPS32_R2) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) sll \tmp, \status, 5 bgez \tmp, 10f fpu_save_16odd \thread @@ -159,7 +170,7 @@ .endm .macro fpu_restore_double thread status tmp -#if defined(CONFIG_MIPS64) || defined(CONFIG_CPU_MIPS32_R2) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) sll \tmp, \status, 5 bgez \tmp, 10f # 16 register mode? 
diff --git a/arch/mips/include/asm/fpu.h b/arch/mips/include/asm/fpu.h index 6b9749540edf..58e50cbdb1a6 100644 --- a/arch/mips/include/asm/fpu.h +++ b/arch/mips/include/asm/fpu.h @@ -57,7 +57,7 @@ static inline int __enable_fpu(enum fpu_mode mode) return 0; case FPU_64BIT: -#if !(defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_MIPS64)) +#if !(defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_64BIT)) /* we only have a 32-bit FPU */ return SIGFPE; #endif diff --git a/arch/mips/include/asm/ftrace.h b/arch/mips/include/asm/ftrace.h index ce35c9af0c28..992aaba603b5 100644 --- a/arch/mips/include/asm/ftrace.h +++ b/arch/mips/include/asm/ftrace.h @@ -22,12 +22,12 @@ extern void _mcount(void); #define safe_load(load, src, dst, error) \ do { \ asm volatile ( \ - "1: " load " %[" STR(dst) "], 0(%[" STR(src) "])\n"\ - " li %[" STR(error) "], 0\n" \ + "1: " load " %[tmp_dst], 0(%[tmp_src])\n" \ + " li %[tmp_err], 0\n" \ "2:\n" \ \ ".section .fixup, \"ax\"\n" \ - "3: li %[" STR(error) "], 1\n" \ + "3: li %[tmp_err], 1\n" \ " j 2b\n" \ ".previous\n" \ \ @@ -35,8 +35,8 @@ do { \ STR(PTR) "\t1b, 3b\n\t" \ ".previous\n" \ \ - : [dst] "=&r" (dst), [error] "=r" (error)\ - : [src] "r" (src) \ + : [tmp_dst] "=&r" (dst), [tmp_err] "=r" (error)\ + : [tmp_src] "r" (src) \ : "memory" \ ); \ } while (0) @@ -44,12 +44,12 @@ do { \ #define safe_store(store, src, dst, error) \ do { \ asm volatile ( \ - "1: " store " %[" STR(src) "], 0(%[" STR(dst) "])\n"\ - " li %[" STR(error) "], 0\n" \ + "1: " store " %[tmp_src], 0(%[tmp_dst])\n"\ + " li %[tmp_err], 0\n" \ "2:\n" \ \ ".section .fixup, \"ax\"\n" \ - "3: li %[" STR(error) "], 1\n" \ + "3: li %[tmp_err], 1\n" \ " j 2b\n" \ ".previous\n" \ \ @@ -57,8 +57,8 @@ do { \ STR(PTR) "\t1b, 3b\n\t" \ ".previous\n" \ \ - : [error] "=r" (error) \ - : [dst] "r" (dst), [src] "r" (src)\ + : [tmp_err] "=r" (error) \ + : [tmp_dst] "r" (dst), [tmp_src] "r" (src)\ : "memory" \ ); \ } while (0) diff --git a/arch/mips/include/asm/syscall.h b/arch/mips/include/asm/syscall.h index 33e8dbfc1b63..f35b131977e6 100644 --- a/arch/mips/include/asm/syscall.h +++ b/arch/mips/include/asm/syscall.h @@ -13,6 +13,7 @@ #ifndef __ASM_MIPS_SYSCALL_H #define __ASM_MIPS_SYSCALL_H +#include <linux/compiler.h> #include <linux/audit.h> #include <linux/elf-em.h> #include <linux/kernel.h> @@ -39,14 +40,14 @@ static inline unsigned long mips_get_syscall_arg(unsigned long *arg, #ifdef CONFIG_32BIT case 4: case 5: case 6: case 7: - return get_user(*arg, (int *)usp + 4 * n); + return get_user(*arg, (int *)usp + n); #endif #ifdef CONFIG_64BIT case 4: case 5: case 6: case 7: #ifdef CONFIG_MIPS32_O32 if (test_thread_flag(TIF_32BIT_REGS)) - return get_user(*arg, (int *)usp + 4 * n); + return get_user(*arg, (int *)usp + n); else #endif *arg = regs->regs[4 + n]; @@ -57,6 +58,8 @@ static inline unsigned long mips_get_syscall_arg(unsigned long *arg, default: BUG(); } + + unreachable(); } static inline long syscall_get_return_value(struct task_struct *task, @@ -83,11 +86,10 @@ static inline void syscall_get_arguments(struct task_struct *task, unsigned int i, unsigned int n, unsigned long *args) { - unsigned long arg; int ret; while (n--) - ret |= mips_get_syscall_arg(&arg, task, regs, i++); + ret |= mips_get_syscall_arg(args++, task, regs, i++); /* * No way to communicate an error because this is a void function. 
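The mips_get_syscall_arg() change above ("+ 4 * n" becoming "+ n") turns on a basic property of C: pointer arithmetic already scales by the pointee size, so adding n to an int * advances n ints, not n bytes. A standalone illustration with hypothetical values:

	int stack[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };
	int *usp = stack;
	int n = 1;

	int good = *(usp + n);		/* stack[1] == 11, the intended slot */
	int bad = *(usp + 4 * n);	/* stack[4] == 14, four slots too far */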
diff --git a/arch/mips/include/asm/topology.h b/arch/mips/include/asm/topology.h index 12609a17dc8b..20ea4859c822 100644 --- a/arch/mips/include/asm/topology.h +++ b/arch/mips/include/asm/topology.h @@ -10,8 +10,4 @@ #include <topology.h> -#ifdef CONFIG_SMP -#define smt_capable() (smp_num_siblings > 1) -#endif - #endif /* __ASM_TOPOLOGY_H */ diff --git a/arch/mips/include/uapi/asm/inst.h b/arch/mips/include/uapi/asm/inst.h index b39ba25b41cc..f25181b19941 100644 --- a/arch/mips/include/uapi/asm/inst.h +++ b/arch/mips/include/uapi/asm/inst.h @@ -163,8 +163,8 @@ enum cop1_sdw_func { */ enum cop1x_func { lwxc1_op = 0x00, ldxc1_op = 0x01, - pfetch_op = 0x07, swxc1_op = 0x08, - sdxc1_op = 0x09, madd_s_op = 0x20, + swxc1_op = 0x08, sdxc1_op = 0x09, + pfetch_op = 0x0f, madd_s_op = 0x20, madd_d_op = 0x21, madd_e_op = 0x22, msub_s_op = 0x28, msub_d_op = 0x29, msub_e_op = 0x2a, nmadd_s_op = 0x30, diff --git a/arch/mips/kernel/ftrace.c b/arch/mips/kernel/ftrace.c index 185ba258361b..374ed74cd516 100644 --- a/arch/mips/kernel/ftrace.c +++ b/arch/mips/kernel/ftrace.c @@ -111,11 +111,10 @@ static int ftrace_modify_code_2(unsigned long ip, unsigned int new_code1, safe_store_code(new_code1, ip, faulted); if (unlikely(faulted)) return -EFAULT; - ip += 4; - safe_store_code(new_code2, ip, faulted); + safe_store_code(new_code2, ip + 4, faulted); if (unlikely(faulted)) return -EFAULT; - flush_icache_range(ip, ip + 8); /* original ip + 12 */ + flush_icache_range(ip, ip + 8); return 0; } #endif diff --git a/arch/mips/kernel/r4k_fpu.S b/arch/mips/kernel/r4k_fpu.S index 253b2fb52026..73b0ddf910d4 100644 --- a/arch/mips/kernel/r4k_fpu.S +++ b/arch/mips/kernel/r4k_fpu.S @@ -35,9 +35,9 @@ LEAF(_save_fp_context) cfc1 t1, fcr31 -#if defined(CONFIG_64BIT) || defined(CONFIG_MIPS32_R2) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) .set push -#ifdef CONFIG_MIPS32_R2 +#ifdef CONFIG_CPU_MIPS32_R2 .set mips64r2 mfc0 t0, CP0_STATUS sll t0, t0, 5 @@ -146,11 +146,11 @@ LEAF(_save_fp_context32) * - cp1 status/control register */ LEAF(_restore_fp_context) - EX lw t0, SC_FPC_CSR(a0) + EX lw t1, SC_FPC_CSR(a0) -#if defined(CONFIG_64BIT) || defined(CONFIG_MIPS32_R2) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) .set push -#ifdef CONFIG_MIPS32_R2 +#ifdef CONFIG_CPU_MIPS32_R2 .set mips64r2 mfc0 t0, CP0_STATUS sll t0, t0, 5 @@ -191,7 +191,7 @@ LEAF(_restore_fp_context) EX ldc1 $f26, SC_FPREGS+208(a0) EX ldc1 $f28, SC_FPREGS+224(a0) EX ldc1 $f30, SC_FPREGS+240(a0) - ctc1 t0, fcr31 + ctc1 t1, fcr31 jr ra li v0, 0 # success END(_restore_fp_context) @@ -199,7 +199,7 @@ LEAF(_restore_fp_context) #ifdef CONFIG_MIPS32_COMPAT LEAF(_restore_fp_context32) /* Restore an o32 sigcontext. 
*/ - EX lw t0, SC32_FPC_CSR(a0) + EX lw t1, SC32_FPC_CSR(a0) mfc0 t0, CP0_STATUS sll t0, t0, 5 @@ -239,7 +239,7 @@ LEAF(_restore_fp_context32) EX ldc1 $f26, SC32_FPREGS+208(a0) EX ldc1 $f28, SC32_FPREGS+224(a0) EX ldc1 $f30, SC32_FPREGS+240(a0) - ctc1 t0, fcr31 + ctc1 t1, fcr31 jr ra li v0, 0 # success END(_restore_fp_context32) diff --git a/arch/mips/kernel/rtlx-cmp.c b/arch/mips/kernel/rtlx-cmp.c index 56dc69635153..758fb3cd2326 100644 --- a/arch/mips/kernel/rtlx-cmp.c +++ b/arch/mips/kernel/rtlx-cmp.c @@ -112,5 +112,8 @@ void __exit rtlx_module_exit(void) for (i = 0; i < RTLX_CHANNELS; i++) device_destroy(mt_class, MKDEV(major, i)); + unregister_chrdev(major, RTLX_MODULE_NAME); + + aprp_hook = NULL; } diff --git a/arch/mips/kernel/rtlx-mt.c b/arch/mips/kernel/rtlx-mt.c index 91d61ba422b4..9c1aca00fd54 100644 --- a/arch/mips/kernel/rtlx-mt.c +++ b/arch/mips/kernel/rtlx-mt.c @@ -144,5 +144,8 @@ void __exit rtlx_module_exit(void) for (i = 0; i < RTLX_CHANNELS; i++) device_destroy(mt_class, MKDEV(major, i)); + unregister_chrdev(major, RTLX_MODULE_NAME); + + aprp_hook = NULL; } diff --git a/arch/mips/math-emu/cp1emu.c b/arch/mips/math-emu/cp1emu.c index 506925b2c3f3..0b4e2e38294b 100644 --- a/arch/mips/math-emu/cp1emu.c +++ b/arch/mips/math-emu/cp1emu.c @@ -1538,10 +1538,10 @@ static int fpux_emu(struct pt_regs *xcp, struct mips_fpu_struct *ctx, break; } - case 0x7: /* 7 */ - if (MIPSInst_FUNC(ir) != pfetch_op) { + case 0x3: + if (MIPSInst_FUNC(ir) != pfetch_op) return SIGILL; - } + /* ignore prefx operation */ break; diff --git a/arch/mips/mti-malta/malta-amon.c b/arch/mips/mti-malta/malta-amon.c index 592ac0427426..84ac523b0ce0 100644 --- a/arch/mips/mti-malta/malta-amon.c +++ b/arch/mips/mti-malta/malta-amon.c @@ -72,7 +72,7 @@ int amon_cpu_start(int cpu, return 0; } -#ifdef CONFIG_MIPS_VPE_LOADER +#ifdef CONFIG_MIPS_VPE_LOADER_CMP int vpe_run(struct vpe *v) { struct vpe_notifications *n; diff --git a/arch/mips/mti-malta/malta-int.c b/arch/mips/mti-malta/malta-int.c index ca3e3a46a42f..2242181a6284 100644 --- a/arch/mips/mti-malta/malta-int.c +++ b/arch/mips/mti-malta/malta-int.c @@ -119,7 +119,7 @@ static void malta_hw0_irqdispatch(void) do_IRQ(MALTA_INT_BASE + irq); -#ifdef MIPS_VPE_APSP_API +#ifdef CONFIG_MIPS_VPE_APSP_API_MT if (aprp_hook) aprp_hook(); #endif @@ -310,7 +310,7 @@ static void ipi_call_dispatch(void) static irqreturn_t ipi_resched_interrupt(int irq, void *dev_id) { -#ifdef MIPS_VPE_APSP_API +#ifdef CONFIG_MIPS_VPE_APSP_API_CMP if (aprp_hook) aprp_hook(); #endif diff --git a/arch/mips/pci/msi-octeon.c b/arch/mips/pci/msi-octeon.c index d37be36dc659..2b91b0e61566 100644 --- a/arch/mips/pci/msi-octeon.c +++ b/arch/mips/pci/msi-octeon.c @@ -150,6 +150,7 @@ msi_irq_allocated: msg.address_lo = ((128ul << 20) + CVMX_PCI_MSI_RCV) & 0xffffffff; msg.address_hi = ((128ul << 20) + CVMX_PCI_MSI_RCV) >> 32; + break; case OCTEON_DMA_BAR_TYPE_BIG: /* When using big bar, Bar 0 is based at 0 */ msg.address_lo = (0 + CVMX_PCI_MSI_RCV) & 0xffffffff; diff --git a/arch/mn10300/include/asm/Kbuild b/arch/mn10300/include/asm/Kbuild index 992e989ab785..cbc6b9bf45da 100644 --- a/arch/mn10300/include/asm/Kbuild +++ b/arch/mn10300/include/asm/Kbuild @@ -3,5 +3,6 @@ generic-y += barrier.h generic-y += clkdev.h generic-y += exec.h generic-y += hash.h -generic-y += trace_clock.h +generic-y += mcs_spinlock.h generic-y += preempt.h +generic-y += trace_clock.h diff --git a/arch/openrisc/include/asm/Kbuild b/arch/openrisc/include/asm/Kbuild index 2e40f1ca8667..480af0d9c2f5 100644 --- 
a/arch/openrisc/include/asm/Kbuild +++ b/arch/openrisc/include/asm/Kbuild @@ -10,8 +10,8 @@ generic-y += bugs.h generic-y += cacheflush.h generic-y += checksum.h generic-y += clkdev.h -generic-y += cmpxchg.h generic-y += cmpxchg-local.h +generic-y += cmpxchg.h generic-y += cputime.h generic-y += current.h generic-y += device.h @@ -25,6 +25,7 @@ generic-y += fcntl.h generic-y += ftrace.h generic-y += futex.h generic-y += hardirq.h +generic-y += hash.h generic-y += hw_irq.h generic-y += ioctl.h generic-y += ioctls.h @@ -34,6 +35,7 @@ generic-y += kdebug.h generic-y += kmap_types.h generic-y += kvm_para.h generic-y += local.h +generic-y += mcs_spinlock.h generic-y += mman.h generic-y += module.h generic-y += msgbuf.h @@ -41,6 +43,7 @@ generic-y += pci.h generic-y += percpu.h generic-y += poll.h generic-y += posix_types.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += sections.h @@ -53,11 +56,11 @@ generic-y += siginfo.h generic-y += signal.h generic-y += socket.h generic-y += sockios.h -generic-y += statfs.h generic-y += stat.h +generic-y += statfs.h generic-y += string.h -generic-y += switch_to.h generic-y += swab.h +generic-y += switch_to.h generic-y += termbits.h generic-y += termios.h generic-y += topology.h @@ -68,5 +71,3 @@ generic-y += user.h generic-y += vga.h generic-y += word-at-a-time.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/parisc/include/asm/Kbuild b/arch/parisc/include/asm/Kbuild index 752c981bc3c7..ecf25e6678ad 100644 --- a/arch/parisc/include/asm/Kbuild +++ b/arch/parisc/include/asm/Kbuild @@ -1,9 +1,29 @@ +generic-y += auxvec.h generic-y += barrier.h -generic-y += word-at-a-time.h auxvec.h user.h cputime.h emergency-restart.h \ - segment.h topology.h vga.h device.h percpu.h hw_irq.h mutex.h \ - div64.h irq_regs.h kdebug.h kvm_para.h local64.h local.h param.h \ - poll.h xor.h clkdev.h exec.h -generic-y += trace_clock.h -generic-y += preempt.h +generic-y += clkdev.h +generic-y += cputime.h +generic-y += device.h +generic-y += div64.h +generic-y += emergency-restart.h +generic-y += exec.h generic-y += hash.h +generic-y += hw_irq.h +generic-y += irq_regs.h +generic-y += kdebug.h +generic-y += kvm_para.h +generic-y += local.h +generic-y += local64.h +generic-y += mcs_spinlock.h +generic-y += mutex.h +generic-y += param.h +generic-y += percpu.h +generic-y += poll.h +generic-y += preempt.h +generic-y += segment.h +generic-y += topology.h +generic-y += trace_clock.h +generic-y += user.h +generic-y += vga.h +generic-y += word-at-a-time.h +generic-y += xor.h diff --git a/arch/parisc/include/asm/page.h b/arch/parisc/include/asm/page.h index 637fe031aa84..60d5d174dfe4 100644 --- a/arch/parisc/include/asm/page.h +++ b/arch/parisc/include/asm/page.h @@ -32,17 +32,6 @@ void copy_page_asm(void *to, void *from); void copy_user_page(void *vto, void *vfrom, unsigned long vaddr, struct page *pg); -/* #define CONFIG_PARISC_TMPALIAS */ - -#ifdef CONFIG_PARISC_TMPALIAS -void clear_user_highpage(struct page *page, unsigned long vaddr); -#define clear_user_highpage clear_user_highpage -struct vm_area_struct; -void copy_user_highpage(struct page *to, struct page *from, - unsigned long vaddr, struct vm_area_struct *vma); -#define __HAVE_ARCH_COPY_USER_HIGHPAGE -#endif - /* * These are used to make use of C type-checking.. 
*/ diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h index 3516e0b27044..64f2992e439f 100644 --- a/arch/parisc/include/asm/spinlock.h +++ b/arch/parisc/include/asm/spinlock.h @@ -191,8 +191,4 @@ static __inline__ int arch_write_can_lock(arch_rwlock_t *rw) #define arch_read_lock_flags(lock, flags) arch_read_lock(lock) #define arch_write_lock_flags(lock, flags) arch_write_lock(lock) -#define arch_spin_relax(lock) cpu_relax() -#define arch_read_relax(lock) cpu_relax() -#define arch_write_relax(lock) cpu_relax() - #endif /* __ASM_SPINLOCK_H */ diff --git a/arch/parisc/include/uapi/asm/unistd.h b/arch/parisc/include/uapi/asm/unistd.h index 42706794a36f..265ae5190b0a 100644 --- a/arch/parisc/include/uapi/asm/unistd.h +++ b/arch/parisc/include/uapi/asm/unistd.h @@ -828,13 +828,13 @@ #define __NR_finit_module (__NR_Linux + 333) #define __NR_sched_setattr (__NR_Linux + 334) #define __NR_sched_getattr (__NR_Linux + 335) +#define __NR_utimes (__NR_Linux + 336) -#define __NR_Linux_syscalls (__NR_sched_getattr + 1) +#define __NR_Linux_syscalls (__NR_utimes + 1) #define __IGNORE_select /* newselect */ #define __IGNORE_fadvise64 /* fadvise64_64 */ -#define __IGNORE_utimes /* utime */ #define HPUX_GATEWAY_ADDR 0xC0000004 diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c index ac87a40502e6..a6ffc775a9f8 100644 --- a/arch/parisc/kernel/cache.c +++ b/arch/parisc/kernel/cache.c @@ -581,67 +581,3 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn)); } } - -#ifdef CONFIG_PARISC_TMPALIAS - -void clear_user_highpage(struct page *page, unsigned long vaddr) -{ - void *vto; - unsigned long flags; - - /* Clear using TMPALIAS region. The page doesn't need to - be flushed but the kernel mapping needs to be purged. */ - - vto = kmap_atomic(page); - - /* The PA-RISC 2.0 Architecture book states on page F-6: - "Before a write-capable translation is enabled, *all* - non-equivalently-aliased translations must be removed - from the page table and purged from the TLB. (Note - that the caches are not required to be flushed at this - time.) Before any non-equivalent aliased translation - is re-enabled, the virtual address range for the writeable - page (the entire page) must be flushed from the cache, - and the write-capable translation removed from the page - table and purged from the TLB." */ - - purge_kernel_dcache_page_asm((unsigned long)vto); - purge_tlb_start(flags); - pdtlb_kernel(vto); - purge_tlb_end(flags); - preempt_disable(); - clear_user_page_asm(vto, vaddr); - preempt_enable(); - - pagefault_enable(); /* kunmap_atomic(addr, KM_USER0); */ -} - -void copy_user_highpage(struct page *to, struct page *from, - unsigned long vaddr, struct vm_area_struct *vma) -{ - void *vfrom, *vto; - unsigned long flags; - - /* Copy using TMPALIAS region. This has the advantage - that the `from' page doesn't need to be flushed. However, - the `to' page must be flushed in copy_user_page_asm since - it can be used to bring in executable code. 
*/ - - vfrom = kmap_atomic(from); - vto = kmap_atomic(to); - - purge_kernel_dcache_page_asm((unsigned long)vto); - purge_tlb_start(flags); - pdtlb_kernel(vto); - pdtlb_kernel(vfrom); - purge_tlb_end(flags); - preempt_disable(); - copy_user_page_asm(vto, vfrom, vaddr); - flush_dcache_page_asm(__pa(vto), vaddr); - preempt_enable(); - - pagefault_enable(); /* kunmap_atomic(addr, KM_USER1); */ - pagefault_enable(); /* kunmap_atomic(addr, KM_USER0); */ -} - -#endif /* CONFIG_PARISC_TMPALIAS */ diff --git a/arch/parisc/kernel/syscall_table.S b/arch/parisc/kernel/syscall_table.S index 8fa3fbb3e4d3..80e5dd248934 100644 --- a/arch/parisc/kernel/syscall_table.S +++ b/arch/parisc/kernel/syscall_table.S @@ -431,6 +431,7 @@ ENTRY_SAME(finit_module) ENTRY_SAME(sched_setattr) ENTRY_SAME(sched_getattr) /* 335 */ + ENTRY_COMP(utimes) /* Nothing yet */ diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild index 6c0a955a1b06..3fb1bc432f4f 100644 --- a/arch/powerpc/include/asm/Kbuild +++ b/arch/powerpc/include/asm/Kbuild @@ -1,7 +1,8 @@ generic-y += clkdev.h +generic-y += hash.h +generic-y += mcs_spinlock.h +generic-y += preempt.h generic-y += rwsem.h generic-y += trace_clock.h -generic-y += preempt.h generic-y += vtime.h -generic-y += hash.h diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h index d0b5fca6b077..c9202151079f 100644 --- a/arch/powerpc/include/asm/topology.h +++ b/arch/powerpc/include/asm/topology.h @@ -99,7 +99,6 @@ static inline int prrn_is_enabled(void) #ifdef CONFIG_SMP #include <asm/cputable.h> -#define smt_capable() (cpu_has_feature(CPU_FTR_SMT)) #ifdef CONFIG_PPC64 #include <asm/smp.h> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index e66d4ec04d95..818dce344e82 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1504,73 +1504,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 1: addi r8,r8,16 .endr - /* Save DEC */ - mfspr r5,SPRN_DEC - mftb r6 - extsw r5,r5 - add r5,r5,r6 - std r5,VCPU_DEC_EXPIRES(r9) - -BEGIN_FTR_SECTION - b 8f -END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) - /* Turn on TM so we can access TFHAR/TFIAR/TEXASR */ - mfmsr r8 - li r0, 1 - rldimi r8, r0, MSR_TM_LG, 63-MSR_TM_LG - mtmsrd r8 - - /* Save POWER8-specific registers */ - mfspr r5, SPRN_IAMR - mfspr r6, SPRN_PSPB - mfspr r7, SPRN_FSCR - std r5, VCPU_IAMR(r9) - stw r6, VCPU_PSPB(r9) - std r7, VCPU_FSCR(r9) - mfspr r5, SPRN_IC - mfspr r6, SPRN_VTB - mfspr r7, SPRN_TAR - std r5, VCPU_IC(r9) - std r6, VCPU_VTB(r9) - std r7, VCPU_TAR(r9) -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM - mfspr r5, SPRN_TFHAR - mfspr r6, SPRN_TFIAR - mfspr r7, SPRN_TEXASR - std r5, VCPU_TFHAR(r9) - std r6, VCPU_TFIAR(r9) - std r7, VCPU_TEXASR(r9) -#endif - mfspr r8, SPRN_EBBHR - std r8, VCPU_EBBHR(r9) - mfspr r5, SPRN_EBBRR - mfspr r6, SPRN_BESCR - mfspr r7, SPRN_CSIGR - mfspr r8, SPRN_TACR - std r5, VCPU_EBBRR(r9) - std r6, VCPU_BESCR(r9) - std r7, VCPU_CSIGR(r9) - std r8, VCPU_TACR(r9) - mfspr r5, SPRN_TCSCR - mfspr r6, SPRN_ACOP - mfspr r7, SPRN_PID - mfspr r8, SPRN_WORT - std r5, VCPU_TCSCR(r9) - std r6, VCPU_ACOP(r9) - stw r7, VCPU_GUEST_PID(r9) - std r8, VCPU_WORT(r9) -8: - - /* Save and reset AMR and UAMOR before turning on the MMU */ -BEGIN_FTR_SECTION - mfspr r5,SPRN_AMR - mfspr r6,SPRN_UAMOR - std r5,VCPU_AMR(r9) - std r6,VCPU_UAMOR(r9) - li r6,0 - mtspr SPRN_AMR,r6 -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206) - /* Unset guest mode */ li r0, KVM_GUEST_MODE_NONE stb r0, HSTATE_IN_GUEST(r13) @@ 
-2203,7 +2136,7 @@ BEGIN_FTR_SECTION END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) #endif mfspr r6,SPRN_VRSAVE - stw r6,VCPU_VRSAVE(r3) + stw r6,VCPU_VRSAVE(r31) mtlr r30 mtmsrd r5 isync @@ -2240,7 +2173,7 @@ BEGIN_FTR_SECTION bl .load_vr_state END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) #endif - lwz r7,VCPU_VRSAVE(r4) + lwz r7,VCPU_VRSAVE(r31) mtspr SPRN_VRSAVE,r7 mtlr r30 mr r4,r31 diff --git a/arch/powerpc/platforms/cell/ras.c b/arch/powerpc/platforms/cell/ras.c index 5ec1e47a0d77..e865d748179b 100644 --- a/arch/powerpc/platforms/cell/ras.c +++ b/arch/powerpc/platforms/cell/ras.c @@ -123,7 +123,8 @@ static int __init cbe_ptcal_enable_on_node(int nid, int order) area->nid = nid; area->order = order; - area->pages = alloc_pages_exact_node(area->nid, GFP_KERNEL|GFP_THISNODE, + area->pages = alloc_pages_exact_node(area->nid, + GFP_KERNEL|__GFP_THISNODE, area->order); if (!area->pages) { diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c index 49318385d4fa..4a0a64fe25df 100644 --- a/arch/powerpc/platforms/cell/spufs/sched.c +++ b/arch/powerpc/platforms/cell/spufs/sched.c @@ -83,7 +83,6 @@ static struct timer_list spuloadavg_timer; #define MIN_SPU_TIMESLICE max(5 * HZ / (1000 * SPUSCHED_TICK), 1) #define DEF_SPU_TIMESLICE (100 * HZ / (1000 * SPUSCHED_TICK)) -#define MAX_USER_PRIO (MAX_PRIO - MAX_RT_PRIO) #define SCALE_PRIO(x, prio) \ max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2), MIN_SPU_TIMESLICE) diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c index 110f4fbd319f..81a7a0a79be7 100644 --- a/arch/powerpc/platforms/powernv/setup.c +++ b/arch/powerpc/platforms/powernv/setup.c @@ -26,7 +26,6 @@ #include <linux/of_fdt.h> #include <linux/interrupt.h> #include <linux/bug.h> -#include <linux/cpuidle.h> #include <linux/pci.h> #include <asm/machdep.h> @@ -225,16 +224,6 @@ static int __init pnv_probe(void) return 1; } -void powernv_idle(void) -{ - /* Hook to cpuidle framework if available, else - * call on default platform idle code - */ - if (cpuidle_idle_call()) { - power7_idle(); - } -} - define_machine(powernv) { .name = "PowerNV", .probe = pnv_probe, @@ -244,7 +233,7 @@ define_machine(powernv) { .show_cpuinfo = pnv_show_cpuinfo, .progress = pnv_progress, .machine_shutdown = pnv_shutdown, - .power_save = powernv_idle, + .power_save = power7_idle, .calibrate_decr = generic_calibrate_decr, .dma_set_mask = pnv_dma_set_mask, #ifdef CONFIG_KEXEC diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index 972df0ffd4dc..2db8cc691bf4 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -39,7 +39,6 @@ #include <linux/irq.h> #include <linux/seq_file.h> #include <linux/root_dev.h> -#include <linux/cpuidle.h> #include <linux/of.h> #include <linux/kexec.h> @@ -356,29 +355,24 @@ early_initcall(alloc_dispatch_log_kmem_cache); static void pseries_lpar_idle(void) { - /* This would call on the cpuidle framework, and the back-end pseries - * driver to go to idle states + /* + * Default handler to go into low thread priority and possibly + * low power mode by ceding processor to hypervisor */ - if (cpuidle_idle_call()) { - /* On error, execute default handler - * to go into low thread priority and possibly - * low power mode by cedeing processor to hypervisor - */ - /* Indicate to hypervisor that we are idle. 
*/ + get_lppaca()->idle = 1; - /* - * Yield the processor to the hypervisor. We return if - * an external interrupt occurs (which are driven prior - * to returning here) or if a prod occurs from another - * processor. When returning here, external interrupts - * are enabled. - */ - cede_processor(); + /* + * Yield the processor to the hypervisor. We return if + * an external interrupt occurs (which are driven prior + * to returning here) or if a prod occurs from another + * processor. When returning here, external interrupts + * are enabled. + */ + cede_processor(); - get_lppaca()->idle = 0; - } + get_lppaca()->idle = 0; } /* diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index 65a07750f4f9..bb74b21f007a 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -117,6 +117,7 @@ config S390 select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_TRACE_MCOUNT_TEST + select HAVE_FUTEX_CMPXCHG if FUTEX select HAVE_KERNEL_BZIP2 select HAVE_KERNEL_GZIP select HAVE_KERNEL_LZ4 diff --git a/arch/s390/include/asm/Kbuild b/arch/s390/include/asm/Kbuild index 8386a4a1f19a..57892a8a9055 100644 --- a/arch/s390/include/asm/Kbuild +++ b/arch/s390/include/asm/Kbuild @@ -1,6 +1,7 @@ generic-y += clkdev.h -generic-y += trace_clock.h -generic-y += preempt.h generic-y += hash.h +generic-y += mcs_spinlock.h +generic-y += preempt.h +generic-y += trace_clock.h diff --git a/arch/score/include/asm/Kbuild b/arch/score/include/asm/Kbuild index 146b9d5e89f8..4630cf217b5b 100644 --- a/arch/score/include/asm/Kbuild +++ b/arch/score/include/asm/Kbuild @@ -1,10 +1,11 @@ header-y += + generic-y += barrier.h generic-y += clkdev.h generic-y += hash.h +generic-y += mcs_spinlock.h +generic-y += preempt.h generic-y += trace_clock.h generic-y += xor.h -generic-y += preempt.h - diff --git a/arch/sh/include/asm/Kbuild b/arch/sh/include/asm/Kbuild index 0cd7198a4524..c19e47dacb31 100644 --- a/arch/sh/include/asm/Kbuild +++ b/arch/sh/include/asm/Kbuild @@ -8,18 +8,21 @@ generic-y += emergency-restart.h generic-y += errno.h generic-y += exec.h generic-y += fcntl.h +generic-y += hash.h generic-y += ioctl.h generic-y += ipcbuf.h generic-y += irq_regs.h generic-y += kvm_para.h generic-y += local.h generic-y += local64.h +generic-y += mcs_spinlock.h +generic-y += mman.h +generic-y += msgbuf.h generic-y += param.h generic-y += parport.h generic-y += percpu.h generic-y += poll.h -generic-y += mman.h -generic-y += msgbuf.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += sembuf.h @@ -34,5 +37,3 @@ generic-y += termios.h generic-y += trace_clock.h generic-y += ucontext.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/sh/kernel/idle.c b/arch/sh/kernel/idle.c index 2ea4483fd722..be616ee0cf87 100644 --- a/arch/sh/kernel/idle.c +++ b/arch/sh/kernel/idle.c @@ -16,7 +16,6 @@ #include <linux/thread_info.h> #include <linux/irqflags.h> #include <linux/smp.h> -#include <linux/cpuidle.h> #include <linux/atomic.h> #include <asm/pgalloc.h> #include <asm/smp.h> @@ -40,8 +39,7 @@ void arch_cpu_idle_dead(void) void arch_cpu_idle(void) { - if (cpuidle_idle_call()) - sh_idle(); + sh_idle(); } void __init select_idle_routine(void) diff --git a/arch/sparc/include/asm/Kbuild b/arch/sparc/include/asm/Kbuild index 4b60a0c325ec..a45821818003 100644 --- a/arch/sparc/include/asm/Kbuild +++ b/arch/sparc/include/asm/Kbuild @@ -6,15 +6,16 @@ generic-y += cputime.h generic-y += div64.h generic-y += emergency-restart.h generic-y += exec.h -generic-y += linkage.h 
-generic-y += local64.h -generic-y += mutex.h +generic-y += hash.h generic-y += irq_regs.h +generic-y += linkage.h generic-y += local.h +generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += module.h +generic-y += mutex.h +generic-y += preempt.h generic-y += serial.h generic-y += trace_clock.h generic-y += types.h generic-y += word-at-a-time.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/sparc/include/asm/smp_64.h b/arch/sparc/include/asm/smp_64.h index dd3bef4b9896..05710393959f 100644 --- a/arch/sparc/include/asm/smp_64.h +++ b/arch/sparc/include/asm/smp_64.h @@ -32,7 +32,6 @@ DECLARE_PER_CPU(cpumask_t, cpu_sibling_map); extern cpumask_t cpu_core_map[NR_CPUS]; -extern int sparc64_multi_core; extern void arch_send_call_function_single_ipi(int cpu); extern void arch_send_call_function_ipi_mask(const struct cpumask *mask); diff --git a/arch/sparc/include/asm/topology_64.h b/arch/sparc/include/asm/topology_64.h index 1754390a426f..a2d10fc64faf 100644 --- a/arch/sparc/include/asm/topology_64.h +++ b/arch/sparc/include/asm/topology_64.h @@ -42,8 +42,6 @@ static inline int pcibus_to_node(struct pci_bus *pbus) #define topology_core_id(cpu) (cpu_data(cpu).core_id) #define topology_core_cpumask(cpu) (&cpu_core_map[cpu]) #define topology_thread_cpumask(cpu) (&per_cpu(cpu_sibling_map, cpu)) -#define mc_capable() (sparc64_multi_core) -#define smt_capable() (sparc64_multi_core) #endif /* CONFIG_SMP */ extern cpumask_t cpu_core_map[NR_CPUS]; diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c index b90bf23e3aab..a1a4400d4025 100644 --- a/arch/sparc/kernel/mdesc.c +++ b/arch/sparc/kernel/mdesc.c @@ -896,10 +896,6 @@ void mdesc_fill_in_cpu_data(cpumask_t *mask) mdesc_iterate_over_cpus(fill_in_one_cpu, NULL, mask); -#ifdef CONFIG_SMP - sparc64_multi_core = 1; -#endif - hp = mdesc_grab(); set_core_ids(hp); diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c index 32a280ec38c1..d7b4967f8fa6 100644 --- a/arch/sparc/kernel/process_64.c +++ b/arch/sparc/kernel/process_64.c @@ -58,9 +58,12 @@ void arch_cpu_idle(void) { if (tlb_type != hypervisor) { touch_nmi_watchdog(); + local_irq_enable(); } else { unsigned long pstate; + local_irq_enable(); + /* The sun4v sleeping code requires that we have PSTATE.IE cleared over * the cpu sleep hypervisor call. */ @@ -82,7 +85,6 @@ void arch_cpu_idle(void) : "=&r" (pstate) : "i" (PSTATE_IE)); } - local_irq_enable(); } #ifdef CONFIG_HOTPLUG_CPU diff --git a/arch/sparc/kernel/prom_64.c b/arch/sparc/kernel/prom_64.c index 6b39125eb927..9a690d39c01b 100644 --- a/arch/sparc/kernel/prom_64.c +++ b/arch/sparc/kernel/prom_64.c @@ -555,9 +555,6 @@ static void *fill_in_one_cpu(struct device_node *dp, int cpuid, int arg) cpu_data(cpuid).core_id = portid + 1; cpu_data(cpuid).proc_id = portid; -#ifdef CONFIG_SMP - sparc64_multi_core = 1; -#endif } else { cpu_data(cpuid).dcache_size = of_getintprop_default(dp, "dcache-size", 16 * 1024); diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c index b085311dcd0e..9781048161ab 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -53,8 +53,6 @@ #include "cpumap.h" -int sparc64_multi_core __read_mostly; - DEFINE_PER_CPU(cpumask_t, cpu_sibling_map) = CPU_MASK_NONE; cpumask_t cpu_core_map[NR_CPUS] __read_mostly = { [0 ... 
NR_CPUS-1] = CPU_MASK_NONE }; diff --git a/arch/sparc/kernel/syscalls.S b/arch/sparc/kernel/syscalls.S index 87729fff13b9..33a17e7b3ccd 100644 --- a/arch/sparc/kernel/syscalls.S +++ b/arch/sparc/kernel/syscalls.S @@ -189,7 +189,8 @@ linux_sparc_syscall32: mov %i0, %l5 ! IEU1 5: call %l7 ! CTI Group brk forced srl %i5, 0, %o5 ! IEU1 - ba,a,pt %xcc, 3f + ba,pt %xcc, 3f + sra %o0, 0, %o0 /* Linux native system calls enter here... */ .align 32 @@ -217,7 +218,6 @@ linux_sparc_syscall: 3: stx %o0, [%sp + PTREGS_OFF + PT_V9_I0] ret_sys_call: ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %g3 - sra %o0, 0, %o0 mov %ulo(TSTATE_XCARRY | TSTATE_ICARRY), %g2 sllx %g2, 32, %g2 diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c index 3b3a360b429a..f5d506fdddad 100644 --- a/arch/sparc/mm/tsb.c +++ b/arch/sparc/mm/tsb.c @@ -273,7 +273,7 @@ void __init pgtable_cache_init(void) prom_halt(); } - for (i = 0; i < 8; i++) { + for (i = 0; i < ARRAY_SIZE(tsb_cache_names); i++) { unsigned long size = 8192 << i; const char *name = tsb_cache_names[i]; diff --git a/arch/tile/include/asm/Kbuild b/arch/tile/include/asm/Kbuild index 3793c75e45d9..0aa5675e7025 100644 --- a/arch/tile/include/asm/Kbuild +++ b/arch/tile/include/asm/Kbuild @@ -11,6 +11,7 @@ generic-y += errno.h generic-y += exec.h generic-y += fb.h generic-y += fcntl.h +generic-y += hash.h generic-y += hw_irq.h generic-y += ioctl.h generic-y += ioctls.h @@ -18,12 +19,14 @@ generic-y += ipcbuf.h generic-y += irq_regs.h generic-y += local.h generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += msgbuf.h generic-y += mutex.h generic-y += param.h generic-y += parport.h generic-y += poll.h generic-y += posix_types.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += sembuf.h @@ -38,5 +41,3 @@ generic-y += termios.h generic-y += trace_clock.h generic-y += types.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/um/include/asm/Kbuild b/arch/um/include/asm/Kbuild index 88a330dcdede..a5e4b6068213 100644 --- a/arch/um/include/asm/Kbuild +++ b/arch/um/include/asm/Kbuild @@ -1,8 +1,28 @@ -generic-y += bug.h cputime.h device.h emergency-restart.h futex.h hardirq.h -generic-y += hw_irq.h irq_regs.h kdebug.h percpu.h sections.h topology.h xor.h -generic-y += ftrace.h pci.h io.h param.h delay.h mutex.h current.h exec.h -generic-y += switch_to.h clkdev.h -generic-y += trace_clock.h -generic-y += preempt.h -generic-y += hash.h generic-y += barrier.h +generic-y += bug.h +generic-y += clkdev.h +generic-y += cputime.h +generic-y += current.h +generic-y += delay.h +generic-y += device.h +generic-y += emergency-restart.h +generic-y += exec.h +generic-y += ftrace.h +generic-y += futex.h +generic-y += hardirq.h +generic-y += hash.h +generic-y += hw_irq.h +generic-y += io.h +generic-y += irq_regs.h +generic-y += kdebug.h +generic-y += mcs_spinlock.h +generic-y += mutex.h +generic-y += param.h +generic-y += pci.h +generic-y += percpu.h +generic-y += preempt.h +generic-y += sections.h +generic-y += switch_to.h +generic-y += topology.h +generic-y += trace_clock.h +generic-y += xor.h diff --git a/arch/unicore32/include/asm/Kbuild b/arch/unicore32/include/asm/Kbuild index 3ef4f9d9bf5d..1e5fb872a4aa 100644 --- a/arch/unicore32/include/asm/Kbuild +++ b/arch/unicore32/include/asm/Kbuild @@ -16,6 +16,7 @@ generic-y += fcntl.h generic-y += ftrace.h generic-y += futex.h generic-y += hardirq.h +generic-y += hash.h generic-y += hw_irq.h generic-y += ioctl.h generic-y += ioctls.h @@ -24,6 +25,7 @@ generic-y += irq_regs.h 
generic-y += kdebug.h generic-y += kmap_types.h generic-y += local.h +generic-y += mcs_spinlock.h generic-y += mman.h generic-y += module.h generic-y += msgbuf.h @@ -32,6 +34,7 @@ generic-y += parport.h generic-y += percpu.h generic-y += poll.h generic-y += posix_types.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += sections.h @@ -60,5 +63,3 @@ generic-y += unaligned.h generic-y += user.h generic-y += vga.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu index c026cca5602c..f3aaf231b4e5 100644 --- a/arch/x86/Kconfig.cpu +++ b/arch/x86/Kconfig.cpu @@ -341,10 +341,6 @@ config X86_USE_3DNOW def_bool y depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML -config X86_OOSTORE - def_bool y - depends on (MWINCHIP3D || MWINCHIPC6) && MTRR - # # P6_NOPs are a relatively minor optimization that require a family >= # 6 processor, except that it is broken on certain VIA chips. diff --git a/arch/x86/Makefile b/arch/x86/Makefile index eeda43abed6e..a414b1471ffc 100644 --- a/arch/x86/Makefile +++ b/arch/x86/Makefile @@ -82,8 +82,8 @@ else KBUILD_AFLAGS += -m64 KBUILD_CFLAGS += -m64 - # Don't autogenerate MMX or SSE instructions - KBUILD_CFLAGS += -mno-mmx -mno-sse + # Don't autogenerate traditional x87, MMX or SSE instructions + KBUILD_CFLAGS += -mno-mmx -mno-sse -mno-80387 -mno-fp-ret-in-387 # Use -mpreferred-stack-boundary=3 if supported. KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3) diff --git a/arch/x86/include/asm/Kbuild b/arch/x86/include/asm/Kbuild index 7f669853317a..a8fee078b92f 100644 --- a/arch/x86/include/asm/Kbuild +++ b/arch/x86/include/asm/Kbuild @@ -5,3 +5,4 @@ genhdr-y += unistd_64.h genhdr-y += unistd_x32.h generic-y += clkdev.h +generic-y += mcs_spinlock.h diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h index 1d2091a226bc..19b0ebafcd3e 100644 --- a/arch/x86/include/asm/apic.h +++ b/arch/x86/include/asm/apic.h @@ -93,9 +93,6 @@ static inline int is_vsmp_box(void) return 0; } #endif -extern void xapic_wait_icr_idle(void); -extern u32 safe_xapic_wait_icr_idle(void); -extern void xapic_icr_write(u32, u32); extern int setup_profiling_timer(unsigned int); static inline void native_apic_mem_write(u32 reg, u32 v) @@ -184,7 +181,6 @@ extern int x2apic_phys; extern int x2apic_preenabled; extern void check_x2apic(void); extern void enable_x2apic(void); -extern void x2apic_icr_write(u32 low, u32 id); static inline int x2apic_enabled(void) { u64 msr; @@ -221,7 +217,6 @@ static inline void x2apic_force_phys(void) { } -#define nox2apic 0 #define x2apic_preenabled 0 #define x2apic_supported() 0 #endif @@ -351,7 +346,7 @@ struct apic { int trampoline_phys_low; int trampoline_phys_high; - void (*wait_for_init_deassert)(atomic_t *deassert); + bool wait_for_init_deassert; void (*smp_callin_clear_local_apic)(void); void (*inquire_remote_apic)(int apicid); @@ -517,13 +512,6 @@ extern int default_cpu_present_to_apicid(int mps_cpu); extern int default_check_phys_apicid_present(int phys_apicid); #endif -static inline void default_wait_for_init_deassert(atomic_t *deassert) -{ - while (!atomic_read(deassert)) - cpu_relax(); - return; -} - extern void generic_bigsmp_probe(void); diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h index 04a48903b2eb..69bbb4845020 100644 --- a/arch/x86/include/asm/barrier.h +++ b/arch/x86/include/asm/barrier.h @@ -85,11 +85,7 @@ #else # define smp_rmb() barrier() #endif -#ifdef CONFIG_X86_OOSTORE -# define 
smp_wmb() wmb() -#else -# define smp_wmb() barrier() -#endif +#define smp_wmb() barrier() #define smp_read_barrier_depends() read_barrier_depends() #define set_mb(var, value) do { (void)xchg(&var, value); } while (0) #else /* !SMP */ @@ -100,7 +96,7 @@ #define set_mb(var, value) do { var = value; barrier(); } while (0) #endif /* SMP */ -#if defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE) +#if defined(CONFIG_X86_PPRO_FENCE) /* * For either of these options x86 doesn't have a strong TSO memory diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h index 34f69cb9350a..91d9c69a629e 100644 --- a/arch/x86/include/asm/io.h +++ b/arch/x86/include/asm/io.h @@ -237,7 +237,7 @@ memcpy_toio(volatile void __iomem *dst, const void *src, size_t count) static inline void flush_write_buffers(void) { -#if defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE) +#if defined(CONFIG_X86_PPRO_FENCE) asm volatile("lock; addl $0,0(%%esp)": : :"memory"); #endif } diff --git a/arch/x86/include/asm/nmi.h b/arch/x86/include/asm/nmi.h index 86f9301903c8..5f2fc4441b11 100644 --- a/arch/x86/include/asm/nmi.h +++ b/arch/x86/include/asm/nmi.h @@ -1,6 +1,7 @@ #ifndef _ASM_X86_NMI_H #define _ASM_X86_NMI_H +#include <linux/irq_work.h> #include <linux/pm.h> #include <asm/irq.h> #include <asm/io.h> @@ -38,6 +39,8 @@ typedef int (*nmi_handler_t)(unsigned int, struct pt_regs *); struct nmiaction { struct list_head list; nmi_handler_t handler; + u64 max_duration; + struct irq_work irq_work; unsigned long flags; const char *name; }; diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 5ad38ad07890..bbc8b12fa443 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -445,20 +445,10 @@ static inline int pte_same(pte_t a, pte_t b) return a.pte == b.pte; } -static inline int pteval_present(pteval_t pteval) -{ - /* - * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this - * way clearly states that the intent is that protnone and numa - * hinting ptes are considered present for the purposes of - * pagetable operations like zapping, protection changes, gup etc. 
- */ - return pteval & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA); -} - static inline int pte_present(pte_t a) { - return pteval_present(pte_flags(a)); + return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE | + _PAGE_NUMA); } #define pte_accessible pte_accessible diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h index bf156ded74b5..0f62f5482d91 100644 --- a/arch/x86/include/asm/spinlock.h +++ b/arch/x86/include/asm/spinlock.h @@ -26,10 +26,9 @@ # define LOCK_PTR_REG "D" #endif -#if defined(CONFIG_X86_32) && \ - (defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE)) +#if defined(CONFIG_X86_32) && (defined(CONFIG_X86_PPRO_FENCE)) /* - * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock + * On PPro SMP, we use a locked operation to unlock * (PPro errata 66, 92) */ # define UNLOCK_LOCK_PREFIX LOCK_PREFIX diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h index d35f24e231cd..b28097e4c8c3 100644 --- a/arch/x86/include/asm/topology.h +++ b/arch/x86/include/asm/topology.h @@ -119,9 +119,10 @@ static inline void setup_node_to_cpumask_map(void) { } extern const struct cpumask *cpu_coregroup_mask(int cpu); -#ifdef ENABLE_TOPO_DEFINES #define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id) #define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id) + +#ifdef ENABLE_TOPO_DEFINES #define topology_core_cpumask(cpu) (per_cpu(cpu_core_map, cpu)) #define topology_thread_cpumask(cpu) (per_cpu(cpu_sibling_map, cpu)) #endif @@ -133,12 +134,6 @@ static inline void arch_fix_phys_package_id(int num, u32 slot) struct pci_bus; void x86_pci_root_bus_resources(int bus, struct list_head *resources); -#ifdef CONFIG_SMP -#define mc_capable() ((boot_cpu_data.x86_max_cores > 1) && \ - (cpumask_weight(cpu_core_mask(0)) != nr_cpu_ids)) -#define smt_capable() (smp_num_siblings > 1) -#endif - #ifdef CONFIG_NUMA extern int get_mp_bus_to_node(int busnum); extern void set_mp_bus_to_node(int busnum, int node); diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c index 1dac94265b59..9f46f2b1cfc2 100644 --- a/arch/x86/kernel/acpi/boot.c +++ b/arch/x86/kernel/acpi/boot.c @@ -613,10 +613,10 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid) int nid; nid = acpi_get_node(handle); - if (nid == -1 || !node_online(nid)) - return; - set_apicid_to_node(physid, nid); - numa_set_node(cpu, nid); + if (nid != -1) { + set_apicid_to_node(physid, nid); + numa_set_node(cpu, nid); + } #endif } diff --git a/arch/x86/kernel/aperture_64.c b/arch/x86/kernel/aperture_64.c index fd972a3e4cbb..9fa8aa051f54 100644 --- a/arch/x86/kernel/aperture_64.c +++ b/arch/x86/kernel/aperture_64.c @@ -18,7 +18,6 @@ #include <linux/pci_ids.h> #include <linux/pci.h> #include <linux/bitops.h> -#include <linux/ioport.h> #include <linux/suspend.h> #include <asm/e820.h> #include <asm/io.h> @@ -54,18 +53,6 @@ int fallback_aper_force __initdata; int fix_aperture __initdata = 1; -static struct resource gart_resource = { - .name = "GART", - .flags = IORESOURCE_MEM, -}; - -static void __init insert_aperture_resource(u32 aper_base, u32 aper_size) -{ - gart_resource.start = aper_base; - gart_resource.end = aper_base + aper_size - 1; - insert_resource(&iomem_resource, &gart_resource); -} - /* This code runs before the PCI subsystem is initialized, so just access the northbridge directly. 
*/ @@ -96,7 +83,6 @@ static u32 __init allocate_aperture(void) memblock_reserve(addr, aper_size); printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n", aper_size >> 10, addr); - insert_aperture_resource((u32)addr, aper_size); register_nosave_region(addr >> PAGE_SHIFT, (addr+aper_size) >> PAGE_SHIFT); @@ -444,12 +430,8 @@ int __init gart_iommu_hole_init(void) out: if (!fix && !fallback_aper_force) { - if (last_aper_base) { - unsigned long n = (32 * 1024 * 1024) << last_aper_order; - - insert_aperture_resource((u32)last_aper_base, n); + if (last_aper_base) return 1; - } return 0; } diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 7f26c9a70a9e..53e20531470e 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -133,6 +133,10 @@ static inline void imcr_apic_to_pic(void) * +1=force-enable */ static int force_enable_local_apic __initdata; + +/* Control whether x2APIC mode is enabled or not */ +static bool nox2apic __initdata; + /* * APIC command line parameters */ @@ -162,8 +166,7 @@ int x2apic_mode; /* x2apic enabled before OS handover */ int x2apic_preenabled; static int x2apic_disabled; -static int nox2apic; -static __init int setup_nox2apic(char *str) +static int __init setup_nox2apic(char *str) { if (x2apic_enabled()) { int apicid = native_apic_msr_read(APIC_ID); @@ -178,7 +181,7 @@ static __init int setup_nox2apic(char *str) } else setup_clear_cpu_cap(X86_FEATURE_X2APIC); - nox2apic = 1; + nox2apic = true; return 0; } @@ -283,8 +286,12 @@ u32 native_safe_apic_wait_icr_idle(void) void native_apic_icr_write(u32 low, u32 id) { + unsigned long flags; + + local_irq_save(flags); apic_write(APIC_ICR2, SET_APIC_DEST_FIELD(id)); apic_write(APIC_ICR, low); + local_irq_restore(flags); } u64 native_apic_icr_read(void) diff --git a/arch/x86/kernel/apic/apic_flat_64.c b/arch/x86/kernel/apic/apic_flat_64.c index 2c621a6b901a..7c1b29479513 100644 --- a/arch/x86/kernel/apic/apic_flat_64.c +++ b/arch/x86/kernel/apic/apic_flat_64.c @@ -198,7 +198,7 @@ static struct apic apic_flat = { .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = NULL, + .wait_for_init_deassert = false, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = default_inquire_remote_apic, @@ -314,7 +314,7 @@ static struct apic apic_physflat = { .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = NULL, + .wait_for_init_deassert = false, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = default_inquire_remote_apic, diff --git a/arch/x86/kernel/apic/apic_noop.c b/arch/x86/kernel/apic/apic_noop.c index 191ce75c0e54..8c7c98249c20 100644 --- a/arch/x86/kernel/apic/apic_noop.c +++ b/arch/x86/kernel/apic/apic_noop.c @@ -172,8 +172,7 @@ struct apic apic_noop = { .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = NULL, - + .wait_for_init_deassert = false, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = NULL, diff --git a/arch/x86/kernel/apic/apic_numachip.c b/arch/x86/kernel/apic/apic_numachip.c index 3e67f9e3d7ef..a5b45df8bc88 100644 --- a/arch/x86/kernel/apic/apic_numachip.c +++ b/arch/x86/kernel/apic/apic_numachip.c @@ -248,7 +248,7 @@ static const struct apic apic_numachip __refconst = { .wakeup_secondary_cpu = numachip_wakeup_secondary, .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = 
DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = NULL, + .wait_for_init_deassert = false, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = NULL, /* REMRD not supported */ diff --git a/arch/x86/kernel/apic/bigsmp_32.c b/arch/x86/kernel/apic/bigsmp_32.c index d50e3640d5ae..e4840aa7a255 100644 --- a/arch/x86/kernel/apic/bigsmp_32.c +++ b/arch/x86/kernel/apic/bigsmp_32.c @@ -199,8 +199,7 @@ static struct apic apic_bigsmp = { .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = default_wait_for_init_deassert, - + .wait_for_init_deassert = true, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = default_inquire_remote_apic, diff --git a/arch/x86/kernel/apic/es7000_32.c b/arch/x86/kernel/apic/es7000_32.c index c55224731b2d..6f8f8b348a39 100644 --- a/arch/x86/kernel/apic/es7000_32.c +++ b/arch/x86/kernel/apic/es7000_32.c @@ -394,12 +394,6 @@ static void es7000_enable_apic_mode(void) WARN(1, "Command failed, status = %x\n", mip_status); } -static void es7000_wait_for_init_deassert(atomic_t *deassert) -{ - while (!atomic_read(deassert)) - cpu_relax(); -} - static unsigned int es7000_get_apic_id(unsigned long x) { return (x >> 24) & 0xFF; @@ -658,8 +652,7 @@ static struct apic __refdata apic_es7000_cluster = { .trampoline_phys_low = 0x467, .trampoline_phys_high = 0x469, - .wait_for_init_deassert = NULL, - + .wait_for_init_deassert = false, /* Nothing to do for most platforms, since cleared by the INIT cycle: */ .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = default_inquire_remote_apic, @@ -722,8 +715,7 @@ static struct apic __refdata apic_es7000 = { .trampoline_phys_low = 0x467, .trampoline_phys_high = 0x469, - .wait_for_init_deassert = es7000_wait_for_init_deassert, - + .wait_for_init_deassert = true, /* Nothing to do for most platforms, since cleared by the INIT cycle: */ .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = default_inquire_remote_apic, diff --git a/arch/x86/kernel/apic/numaq_32.c b/arch/x86/kernel/apic/numaq_32.c index 1e42e8f305ee..030ea1c04f72 100644 --- a/arch/x86/kernel/apic/numaq_32.c +++ b/arch/x86/kernel/apic/numaq_32.c @@ -505,8 +505,7 @@ static struct apic __refdata apic_numaq = { .trampoline_phys_high = NUMAQ_TRAMPOLINE_PHYS_HIGH, /* We don't do anything here because we use NMI's to boot instead */ - .wait_for_init_deassert = NULL, - + .wait_for_init_deassert = false, .smp_callin_clear_local_apic = numaq_smp_callin_clear_local_apic, .inquire_remote_apic = NULL, diff --git a/arch/x86/kernel/apic/probe_32.c b/arch/x86/kernel/apic/probe_32.c index eb35ef9ee63f..cceb352c968c 100644 --- a/arch/x86/kernel/apic/probe_32.c +++ b/arch/x86/kernel/apic/probe_32.c @@ -119,8 +119,7 @@ static struct apic apic_default = { .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = default_wait_for_init_deassert, - + .wait_for_init_deassert = true, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = default_inquire_remote_apic, diff --git a/arch/x86/kernel/apic/summit_32.c b/arch/x86/kernel/apic/summit_32.c index 00146f9b0254..b656128611cd 100644 --- a/arch/x86/kernel/apic/summit_32.c +++ b/arch/x86/kernel/apic/summit_32.c @@ -532,8 +532,7 @@ static struct apic apic_summit = { .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = default_wait_for_init_deassert, - + .wait_for_init_deassert = true, 
.smp_callin_clear_local_apic = NULL, .inquire_remote_apic = default_inquire_remote_apic, diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c index cac85ee6913f..e66766bf1641 100644 --- a/arch/x86/kernel/apic/x2apic_cluster.c +++ b/arch/x86/kernel/apic/x2apic_cluster.c @@ -279,7 +279,7 @@ static struct apic apic_x2apic_cluster = { .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = NULL, + .wait_for_init_deassert = false, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = NULL, diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c index de231e328cae..6d600ebf6c12 100644 --- a/arch/x86/kernel/apic/x2apic_phys.c +++ b/arch/x86/kernel/apic/x2apic_phys.c @@ -133,7 +133,7 @@ static struct apic apic_x2apic_phys = { .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = NULL, + .wait_for_init_deassert = false, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = NULL, diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c index d263b1307de1..7834389ba5be 100644 --- a/arch/x86/kernel/apic/x2apic_uv_x.c +++ b/arch/x86/kernel/apic/x2apic_uv_x.c @@ -396,7 +396,7 @@ static struct apic __refdata apic_x2apic_uv_x = { .wakeup_secondary_cpu = uv_wakeup_secondary, .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW, .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH, - .wait_for_init_deassert = NULL, + .wait_for_init_deassert = false, .smp_callin_clear_local_apic = NULL, .inquire_remote_apic = NULL, diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c index 8779edab684e..d8fba5c15fbd 100644 --- a/arch/x86/kernel/cpu/centaur.c +++ b/arch/x86/kernel/cpu/centaur.c @@ -8,236 +8,6 @@ #include "cpu.h" -#ifdef CONFIG_X86_OOSTORE - -static u32 power2(u32 x) -{ - u32 s = 1; - - while (s <= x) - s <<= 1; - - return s >>= 1; -} - - -/* - * Set up an actual MCR - */ -static void centaur_mcr_insert(int reg, u32 base, u32 size, int key) -{ - u32 lo, hi; - - hi = base & ~0xFFF; - lo = ~(size-1); /* Size is a power of 2 so this makes a mask */ - lo &= ~0xFFF; /* Remove the ctrl value bits */ - lo |= key; /* Attribute we wish to set */ - wrmsr(reg+MSR_IDT_MCR0, lo, hi); - mtrr_centaur_report_mcr(reg, lo, hi); /* Tell the mtrr driver */ -} - -/* - * Figure what we can cover with MCR's - * - * Shortcut: We know you can't put 4Gig of RAM on a winchip - */ -static u32 ramtop(void) -{ - u32 clip = 0xFFFFFFFFUL; - u32 top = 0; - int i; - - for (i = 0; i < e820.nr_map; i++) { - unsigned long start, end; - - if (e820.map[i].addr > 0xFFFFFFFFUL) - continue; - /* - * Don't MCR over reserved space. Ignore the ISA hole - * we frob around that catastrophe already - */ - if (e820.map[i].type == E820_RESERVED) { - if (e820.map[i].addr >= 0x100000UL && - e820.map[i].addr < clip) - clip = e820.map[i].addr; - continue; - } - start = e820.map[i].addr; - end = e820.map[i].addr + e820.map[i].size; - if (start >= end) - continue; - if (end > top) - top = end; - } - /* - * Everything below 'top' should be RAM except for the ISA hole. 
- * Because of the limited MCR's we want to map NV/ACPI into our - * MCR range for gunk in RAM - * - * Clip might cause us to MCR insufficient RAM but that is an - * acceptable failure mode and should only bite obscure boxes with - * a VESA hole at 15Mb - * - * The second case Clip sometimes kicks in is when the EBDA is marked - * as reserved. Again we fail safe with reasonable results - */ - if (top > clip) - top = clip; - - return top; -} - -/* - * Compute a set of MCR's to give maximum coverage - */ -static int centaur_mcr_compute(int nr, int key) -{ - u32 mem = ramtop(); - u32 root = power2(mem); - u32 base = root; - u32 top = root; - u32 floor = 0; - int ct = 0; - - while (ct < nr) { - u32 fspace = 0; - u32 high; - u32 low; - - /* - * Find the largest block we will fill going upwards - */ - high = power2(mem-top); - - /* - * Find the largest block we will fill going downwards - */ - low = base/2; - - /* - * Don't fill below 1Mb going downwards as there - * is an ISA hole in the way. - */ - if (base <= 1024*1024) - low = 0; - - /* - * See how much space we could cover by filling below - * the ISA hole - */ - - if (floor == 0) - fspace = 512*1024; - else if (floor == 512*1024) - fspace = 128*1024; - - /* And forget ROM space */ - - /* - * Now install the largest coverage we get - */ - if (fspace > high && fspace > low) { - centaur_mcr_insert(ct, floor, fspace, key); - floor += fspace; - } else if (high > low) { - centaur_mcr_insert(ct, top, high, key); - top += high; - } else if (low > 0) { - base -= low; - centaur_mcr_insert(ct, base, low, key); - } else - break; - ct++; - } - /* - * We loaded ct values. We now need to set the mask. The caller - * must do this bit. - */ - return ct; -} - -static void centaur_create_optimal_mcr(void) -{ - int used; - int i; - - /* - * Allocate up to 6 mcrs to mark as much of ram as possible - * as write combining and weak write ordered. - * - * To experiment with: Linux never uses stack operations for - * mmio spaces so we could globally enable stack operation wc - * - * Load the registers with type 31 - full write combining, all - * writes weakly ordered. - */ - used = centaur_mcr_compute(6, 31); - - /* - * Wipe unused MCRs - */ - for (i = used; i < 8; i++) - wrmsr(MSR_IDT_MCR0+i, 0, 0); -} - -static void winchip2_create_optimal_mcr(void) -{ - u32 lo, hi; - int used; - int i; - - /* - * Allocate up to 6 mcrs to mark as much of ram as possible - * as write combining, weak store ordered. - * - * Load the registers with type 25 - * 8 - weak write ordering - * 16 - weak read ordering - * 1 - write combining - */ - used = centaur_mcr_compute(6, 25); - - /* - * Mark the registers we are using. - */ - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); - for (i = 0; i < used; i++) - lo |= 1<<(9+i); - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); - - /* - * Wipe unused MCRs - */ - - for (i = used; i < 8; i++) - wrmsr(MSR_IDT_MCR0+i, 0, 0); -} - -/* - * Handle the MCR key on the Winchip 2. 
- */ -static void winchip2_unprotect_mcr(void) -{ - u32 lo, hi; - u32 key; - - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); - lo &= ~0x1C0; /* blank bits 8-6 */ - key = (lo>>17) & 7; - lo |= key<<6; /* replace with unlock key */ - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); -} - -static void winchip2_protect_mcr(void) -{ - u32 lo, hi; - - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); - lo &= ~0x1C0; /* blank bits 8-6 */ - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); -} -#endif /* CONFIG_X86_OOSTORE */ - #define ACE_PRESENT (1 << 6) #define ACE_ENABLED (1 << 7) #define ACE_FCR (1 << 28) /* MSR_VIA_FCR */ @@ -362,20 +132,6 @@ static void init_centaur(struct cpuinfo_x86 *c) fcr_clr = DPDC; printk(KERN_NOTICE "Disabling bugged TSC.\n"); clear_cpu_cap(c, X86_FEATURE_TSC); -#ifdef CONFIG_X86_OOSTORE - centaur_create_optimal_mcr(); - /* - * Enable: - * write combining on non-stack, non-string - * write combining on string, all types - * weak write ordering - * - * The C6 original lacks weak read order - * - * Note 0x120 is write only on Winchip 1 - */ - wrmsr(MSR_IDT_MCR_CTRL, 0x01F0001F, 0); -#endif break; case 8: switch (c->x86_mask) { @@ -392,40 +148,12 @@ static void init_centaur(struct cpuinfo_x86 *c) fcr_set = ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK| E2MMX|EAMD3D; fcr_clr = DPDC; -#ifdef CONFIG_X86_OOSTORE - winchip2_unprotect_mcr(); - winchip2_create_optimal_mcr(); - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); - /* - * Enable: - * write combining on non-stack, non-string - * write combining on string, all types - * weak write ordering - */ - lo |= 31; - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); - winchip2_protect_mcr(); -#endif break; case 9: name = "3"; fcr_set = ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK| E2MMX|EAMD3D; fcr_clr = DPDC; -#ifdef CONFIG_X86_OOSTORE - winchip2_unprotect_mcr(); - winchip2_create_optimal_mcr(); - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); - /* - * Enable: - * write combining on non-stack, non-string - * write combining on string, all types - * weak write ordering - */ - lo |= 31; - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); - winchip2_protect_mcr(); -#endif break; default: name = "??"; diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c index 79f9f848bee4..ae407f7226c8 100644 --- a/arch/x86/kernel/cpu/perf_event.c +++ b/arch/x86/kernel/cpu/perf_event.c @@ -892,7 +892,6 @@ static void x86_pmu_enable(struct pmu *pmu) * hw_perf_group_sched_in() or x86_pmu_enable() * * step1: save events moving to new counters - * step2: reprogram moved events into new counters */ for (i = 0; i < n_running; i++) { event = cpuc->event_list[i]; @@ -918,6 +917,9 @@ static void x86_pmu_enable(struct pmu *pmu) x86_pmu_stop(event, PERF_EF_UPDATE); } + /* + * step2: reprogram moved events into new counters + */ for (i = 0; i < cpuc->n_events; i++) { event = cpuc->event_list[i]; hwc = &event->hw; @@ -1043,7 +1045,7 @@ static int x86_pmu_add(struct perf_event *event, int flags) /* * If group events scheduling transaction was started, * skip the schedulability test here, it will be performed - * at commit time (->commit_txn) as a whole + * at commit time (->commit_txn) as a whole. */ if (cpuc->group_flag & PERF_EVENT_TXN) goto done_collect; @@ -1058,6 +1060,10 @@ static int x86_pmu_add(struct perf_event *event, int flags) memcpy(cpuc->assign, assign, n*sizeof(int)); done_collect: + /* + * Commit the collect_events() state. See x86_pmu_del() and + * x86_pmu_*_txn(). 
+ */ cpuc->n_events = n; cpuc->n_added += n - n0; cpuc->n_txn += n - n0; @@ -1183,28 +1189,38 @@ static void x86_pmu_del(struct perf_event *event, int flags) * If we're called during a txn, we don't need to do anything. * The events never got scheduled and ->cancel_txn will truncate * the event_list. + * + * XXX assumes any ->del() called during a TXN will only be on + * an event added during that same TXN. */ if (cpuc->group_flag & PERF_EVENT_TXN) return; + /* + * Not a TXN, therefore cleanup properly. + */ x86_pmu_stop(event, PERF_EF_UPDATE); for (i = 0; i < cpuc->n_events; i++) { - if (event == cpuc->event_list[i]) { + if (event == cpuc->event_list[i]) + break; + } - if (i >= cpuc->n_events - cpuc->n_added) - --cpuc->n_added; + if (WARN_ON_ONCE(i == cpuc->n_events)) /* called ->del() without ->add() ? */ + return; - if (x86_pmu.put_event_constraints) - x86_pmu.put_event_constraints(cpuc, event); + /* If we have a newly added event, make sure to decrease n_added. */ + if (i >= cpuc->n_events - cpuc->n_added) + --cpuc->n_added; - while (++i < cpuc->n_events) - cpuc->event_list[i-1] = cpuc->event_list[i]; + if (x86_pmu.put_event_constraints) + x86_pmu.put_event_constraints(cpuc, event); + + /* Delete the array entry. */ + while (++i < cpuc->n_events) + cpuc->event_list[i-1] = cpuc->event_list[i]; + --cpuc->n_events; - --cpuc->n_events; - break; - } - } perf_event_update_userpage(event); } @@ -1598,7 +1614,8 @@ static void x86_pmu_cancel_txn(struct pmu *pmu) { __this_cpu_and(cpu_hw_events.group_flag, ~PERF_EVENT_TXN); /* - * Truncate the collected events. + * Truncate collected array by the number of events added in this + * transaction. See x86_pmu_add() and x86_pmu_*_txn(). */ __this_cpu_sub(cpu_hw_events.n_added, __this_cpu_read(cpu_hw_events.n_txn)); __this_cpu_sub(cpu_hw_events.n_events, __this_cpu_read(cpu_hw_events.n_txn)); @@ -1609,6 +1626,8 @@ static void x86_pmu_cancel_txn(struct pmu *pmu) * Commit group events scheduling transaction * Perform the group schedulability test as a whole * Return 0 if success + * + * Does not cancel the transaction on failure; expects the caller to do this. 
*/ static int x86_pmu_commit_txn(struct pmu *pmu) { diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h index 4972c244d0bc..3b2f9bdd974b 100644 --- a/arch/x86/kernel/cpu/perf_event.h +++ b/arch/x86/kernel/cpu/perf_event.h @@ -130,9 +130,11 @@ struct cpu_hw_events { unsigned long running[BITS_TO_LONGS(X86_PMC_IDX_MAX)]; int enabled; - int n_events; - int n_added; - int n_txn; + int n_events; /* the # of events in the below arrays */ + int n_added; /* the # last events in the below arrays; + they've never been enabled yet */ + int n_txn; /* the # last events in the below arrays; + added in the current transaction */ int assign[X86_PMC_IDX_MAX]; /* event to counter assignment */ u64 tags[X86_PMC_IDX_MAX]; struct perf_event *event_list[X86_PMC_IDX_MAX]; /* in enabled order */ diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.c b/arch/x86/kernel/cpu/perf_event_intel_uncore.c index c88f7f4b03ee..bd2253d40cff 100644 --- a/arch/x86/kernel/cpu/perf_event_intel_uncore.c +++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.c @@ -66,6 +66,47 @@ DEFINE_UNCORE_FORMAT_ATTR(mask_vnw, mask_vnw, "config2:3-4"); DEFINE_UNCORE_FORMAT_ATTR(mask0, mask0, "config2:0-31"); DEFINE_UNCORE_FORMAT_ATTR(mask1, mask1, "config2:32-63"); +static void uncore_pmu_start_hrtimer(struct intel_uncore_box *box); +static void uncore_pmu_cancel_hrtimer(struct intel_uncore_box *box); +static void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *event); +static void uncore_pmu_event_read(struct perf_event *event); + +static struct intel_uncore_pmu *uncore_event_to_pmu(struct perf_event *event) +{ + return container_of(event->pmu, struct intel_uncore_pmu, pmu); +} + +static struct intel_uncore_box * +uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu) +{ + struct intel_uncore_box *box; + + box = *per_cpu_ptr(pmu->box, cpu); + if (box) + return box; + + raw_spin_lock(&uncore_box_lock); + list_for_each_entry(box, &pmu->box_list, list) { + if (box->phys_id == topology_physical_package_id(cpu)) { + atomic_inc(&box->refcnt); + *per_cpu_ptr(pmu->box, cpu) = box; + break; + } + } + raw_spin_unlock(&uncore_box_lock); + + return *per_cpu_ptr(pmu->box, cpu); +} + +static struct intel_uncore_box *uncore_event_to_box(struct perf_event *event) +{ + /* + * perf core schedules event on the basis of cpu, uncore events are + * collected by one of the cpus inside a physical package. 
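uncore_pmu_to_box(), hoisted earlier in the file so the SNB IMC code can use it, implements a common pattern: a lock-free per-cpu cache in front of a lock-protected shared list, filled on first miss. A reduced userspace model with pthreads (the box layout, the cpu_to_phys() topology helper, and the sizes are invented for the example):

#include <pthread.h>
#include <stdio.h>

#define NCPU 4

struct box { int phys_id; int refcnt; };

static struct box boxes[2] = { { .phys_id = 0 }, { .phys_id = 1 } };
static struct box *cache[NCPU];		/* per-cpu cached pointer */
static pthread_mutex_t box_lock = PTHREAD_MUTEX_INITIALIZER;

static int cpu_to_phys(int cpu) { return cpu / 2; }	/* fake topology */

static struct box *lookup_box(int cpu)
{
	struct box *b = cache[cpu];

	if (b)				/* fast path: already cached */
		return b;

	pthread_mutex_lock(&box_lock);	/* slow path: scan shared list */
	for (int i = 0; i < 2; i++) {
		if (boxes[i].phys_id == cpu_to_phys(cpu)) {
			boxes[i].refcnt++;
			cache[cpu] = &boxes[i];
			break;
		}
	}
	pthread_mutex_unlock(&box_lock);
	return cache[cpu];
}

int main(void)
{
	printf("cpu 3 is served by package %d\n", lookup_box(3)->phys_id);
	return 0;
}

As in the kernel version, the reference count is taken under the lock before the cached pointer is published.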
+ */ + return uncore_pmu_to_box(uncore_event_to_pmu(event), smp_processor_id()); +} + static u64 uncore_msr_read_counter(struct intel_uncore_box *box, struct perf_event *event) { u64 count; @@ -1639,6 +1680,349 @@ static struct intel_uncore_type *snb_msr_uncores[] = { &snb_uncore_cbox, NULL, }; + +enum { + SNB_PCI_UNCORE_IMC, +}; + +static struct uncore_event_desc snb_uncore_imc_events[] = { + INTEL_UNCORE_EVENT_DESC(data_reads, "event=0x01"), + INTEL_UNCORE_EVENT_DESC(data_reads.scale, "6.103515625e-5"), + INTEL_UNCORE_EVENT_DESC(data_reads.unit, "MiB"), + + INTEL_UNCORE_EVENT_DESC(data_writes, "event=0x02"), + INTEL_UNCORE_EVENT_DESC(data_writes.scale, "6.103515625e-5"), + INTEL_UNCORE_EVENT_DESC(data_writes.unit, "MiB"), + + { /* end: all zeroes */ }, +}; + +#define SNB_UNCORE_PCI_IMC_EVENT_MASK 0xff +#define SNB_UNCORE_PCI_IMC_BAR_OFFSET 0x48 + +/* page size multiple covering all config regs */ +#define SNB_UNCORE_PCI_IMC_MAP_SIZE 0x6000 + +#define SNB_UNCORE_PCI_IMC_DATA_READS 0x1 +#define SNB_UNCORE_PCI_IMC_DATA_READS_BASE 0x5050 +#define SNB_UNCORE_PCI_IMC_DATA_WRITES 0x2 +#define SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE 0x5054 +#define SNB_UNCORE_PCI_IMC_CTR_BASE SNB_UNCORE_PCI_IMC_DATA_READS_BASE + +static struct attribute *snb_uncore_imc_formats_attr[] = { + &format_attr_event.attr, + NULL, +}; + +static struct attribute_group snb_uncore_imc_format_group = { + .name = "format", + .attrs = snb_uncore_imc_formats_attr, +}; + +static void snb_uncore_imc_init_box(struct intel_uncore_box *box) +{ + struct pci_dev *pdev = box->pci_dev; + int where = SNB_UNCORE_PCI_IMC_BAR_OFFSET; + resource_size_t addr; + u32 pci_dword; + + pci_read_config_dword(pdev, where, &pci_dword); + addr = pci_dword; + +#ifdef CONFIG_PHYS_ADDR_T_64BIT + pci_read_config_dword(pdev, where + 4, &pci_dword); + addr |= ((resource_size_t)pci_dword << 32); +#endif + + addr &= ~(PAGE_SIZE - 1); + + box->io_addr = ioremap(addr, SNB_UNCORE_PCI_IMC_MAP_SIZE); + box->hrtimer_duration = UNCORE_SNB_IMC_HRTIMER_INTERVAL; +} + +static void snb_uncore_imc_enable_box(struct intel_uncore_box *box) +{} + +static void snb_uncore_imc_disable_box(struct intel_uncore_box *box) +{} + +static void snb_uncore_imc_enable_event(struct intel_uncore_box *box, struct perf_event *event) +{} + +static void snb_uncore_imc_disable_event(struct intel_uncore_box *box, struct perf_event *event) +{} + +static u64 snb_uncore_imc_read_counter(struct intel_uncore_box *box, struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + + return (u64)*(unsigned int *)(box->io_addr + hwc->event_base); +} + +/* + * custom event_init() function because we define our own fixed, free + * running counters, so we do not want to conflict with generic uncore + * logic. 
Also simplifies processing. + */ +static int snb_uncore_imc_event_init(struct perf_event *event) +{ + struct intel_uncore_pmu *pmu; + struct intel_uncore_box *box; + struct hw_perf_event *hwc = &event->hw; + u64 cfg = event->attr.config & SNB_UNCORE_PCI_IMC_EVENT_MASK; + int idx, base; + + if (event->attr.type != event->pmu->type) + return -ENOENT; + + pmu = uncore_event_to_pmu(event); + /* no device found for this pmu */ + if (pmu->func_id < 0) + return -ENOENT; + + /* Sampling not supported yet */ + if (hwc->sample_period) + return -EINVAL; + + /* unsupported modes and filters */ + if (event->attr.exclude_user || + event->attr.exclude_kernel || + event->attr.exclude_hv || + event->attr.exclude_idle || + event->attr.exclude_host || + event->attr.exclude_guest || + event->attr.sample_period) /* no sampling */ + return -EINVAL; + + /* + * Place all uncore events for a particular physical package + * onto a single cpu + */ + if (event->cpu < 0) + return -EINVAL; + + /* check only supported bits are set */ + if (event->attr.config & ~SNB_UNCORE_PCI_IMC_EVENT_MASK) + return -EINVAL; + + box = uncore_pmu_to_box(pmu, event->cpu); + if (!box || box->cpu < 0) + return -EINVAL; + + event->cpu = box->cpu; + + event->hw.idx = -1; + event->hw.last_tag = ~0ULL; + event->hw.extra_reg.idx = EXTRA_REG_NONE; + event->hw.branch_reg.idx = EXTRA_REG_NONE; + /* + * check event is known (whitelist, determines counter) + */ + switch (cfg) { + case SNB_UNCORE_PCI_IMC_DATA_READS: + base = SNB_UNCORE_PCI_IMC_DATA_READS_BASE; + idx = UNCORE_PMC_IDX_FIXED; + break; + case SNB_UNCORE_PCI_IMC_DATA_WRITES: + base = SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE; + idx = UNCORE_PMC_IDX_FIXED + 1; + break; + default: + return -EINVAL; + } + + /* must be done before validate_group */ + event->hw.event_base = base; + event->hw.config = cfg; + event->hw.idx = idx; + + /* no group validation needed, we have free running counters */ + + return 0; +} + +static int snb_uncore_imc_hw_config(struct intel_uncore_box *box, struct perf_event *event) +{ + return 0; +} + +static void snb_uncore_imc_event_start(struct perf_event *event, int flags) +{ + struct intel_uncore_box *box = uncore_event_to_box(event); + u64 count; + + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED))) + return; + + event->hw.state = 0; + box->n_active++; + + list_add_tail(&event->active_entry, &box->active_list); + + count = snb_uncore_imc_read_counter(box, event); + local64_set(&event->hw.prev_count, count); + + if (box->n_active == 1) + uncore_pmu_start_hrtimer(box); +} + +static void snb_uncore_imc_event_stop(struct perf_event *event, int flags) +{ + struct intel_uncore_box *box = uncore_event_to_box(event); + struct hw_perf_event *hwc = &event->hw; + + if (!(hwc->state & PERF_HES_STOPPED)) { + box->n_active--; + + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED); + hwc->state |= PERF_HES_STOPPED; + + list_del(&event->active_entry); + + if (box->n_active == 0) + uncore_pmu_cancel_hrtimer(box); + } + + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) { + /* + * Drain the remaining delta count out of an event + * that we are disabling: + */ + uncore_perf_event_update(box, event); + hwc->state |= PERF_HES_UPTODATE; + } +} + +static int snb_uncore_imc_event_add(struct perf_event *event, int flags) +{ + struct intel_uncore_box *box = uncore_event_to_box(event); + struct hw_perf_event *hwc = &event->hw; + + if (!box) + return -ENODEV; + + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED; + if (!(flags & PERF_EF_START)) + hwc->state |= PERF_HES_ARCH; + + 
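+	/*
+	 * Illustrative aside, not part of the original patch: the start
+	 * call below runs even when PERF_EF_START is absent, because
+	 * these IMC counters free-run regardless. Wraparound of the
+	 * 32-bit counter is absorbed by uncore_perf_event_update(),
+	 * whose effect for a free-running counter is roughly:
+	 *
+	 *	new   = read_counter(box, event);
+	 *	delta = (new - prev) & ((1ULL << 32) - 1);
+	 *	prev  = new;
+	 *	count += delta;
+	 *
+	 * so the hrtimer only needs to sample each counter more than
+	 * once per wrap period for no data to be lost.
+	 */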
snb_uncore_imc_event_start(event, 0); + + box->n_events++; + + return 0; +} + +static void snb_uncore_imc_event_del(struct perf_event *event, int flags) +{ + struct intel_uncore_box *box = uncore_event_to_box(event); + int i; + + snb_uncore_imc_event_stop(event, PERF_EF_UPDATE); + + for (i = 0; i < box->n_events; i++) { + if (event == box->event_list[i]) { + --box->n_events; + break; + } + } +} + +static int snb_pci2phy_map_init(int devid) +{ + struct pci_dev *dev = NULL; + int bus; + + dev = pci_get_device(PCI_VENDOR_ID_INTEL, devid, dev); + if (!dev) + return -ENOTTY; + + bus = dev->bus->number; + + pcibus_to_physid[bus] = 0; + + pci_dev_put(dev); + + return 0; +} + +static struct pmu snb_uncore_imc_pmu = { + .task_ctx_nr = perf_invalid_context, + .event_init = snb_uncore_imc_event_init, + .add = snb_uncore_imc_event_add, + .del = snb_uncore_imc_event_del, + .start = snb_uncore_imc_event_start, + .stop = snb_uncore_imc_event_stop, + .read = uncore_pmu_event_read, +}; + +static struct intel_uncore_ops snb_uncore_imc_ops = { + .init_box = snb_uncore_imc_init_box, + .enable_box = snb_uncore_imc_enable_box, + .disable_box = snb_uncore_imc_disable_box, + .disable_event = snb_uncore_imc_disable_event, + .enable_event = snb_uncore_imc_enable_event, + .hw_config = snb_uncore_imc_hw_config, + .read_counter = snb_uncore_imc_read_counter, +}; + +static struct intel_uncore_type snb_uncore_imc = { + .name = "imc", + .num_counters = 2, + .num_boxes = 1, + .fixed_ctr_bits = 32, + .fixed_ctr = SNB_UNCORE_PCI_IMC_CTR_BASE, + .event_descs = snb_uncore_imc_events, + .format_group = &snb_uncore_imc_format_group, + .perf_ctr = SNB_UNCORE_PCI_IMC_DATA_READS_BASE, + .event_mask = SNB_UNCORE_PCI_IMC_EVENT_MASK, + .ops = &snb_uncore_imc_ops, + .pmu = &snb_uncore_imc_pmu, +}; + +static struct intel_uncore_type *snb_pci_uncores[] = { + [SNB_PCI_UNCORE_IMC] = &snb_uncore_imc, + NULL, +}; + +static DEFINE_PCI_DEVICE_TABLE(snb_uncore_pci_ids) = { + { /* IMC */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SNB_IMC), + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), + }, + { /* end: all zeroes */ }, +}; + +static DEFINE_PCI_DEVICE_TABLE(ivb_uncore_pci_ids) = { + { /* IMC */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IVB_IMC), + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), + }, + { /* end: all zeroes */ }, +}; + +static DEFINE_PCI_DEVICE_TABLE(hsw_uncore_pci_ids) = { + { /* IMC */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_HSW_IMC), + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), + }, + { /* end: all zeroes */ }, +}; + +static struct pci_driver snb_uncore_pci_driver = { + .name = "snb_uncore", + .id_table = snb_uncore_pci_ids, +}; + +static struct pci_driver ivb_uncore_pci_driver = { + .name = "ivb_uncore", + .id_table = ivb_uncore_pci_ids, +}; + +static struct pci_driver hsw_uncore_pci_driver = { + .name = "hsw_uncore", + .id_table = hsw_uncore_pci_ids, +}; + /* end of Sandy Bridge uncore support */ /* Nehalem uncore support */ @@ -2789,6 +3173,7 @@ again: static enum hrtimer_restart uncore_pmu_hrtimer(struct hrtimer *hrtimer) { struct intel_uncore_box *box; + struct perf_event *event; unsigned long flags; int bit; @@ -2801,19 +3186,27 @@ static enum hrtimer_restart uncore_pmu_hrtimer(struct hrtimer *hrtimer) */ local_irq_save(flags); + /* + * handle boxes with an active event list as opposed to active + * counters + */ + list_for_each_entry(event, &box->active_list, active_entry) { + uncore_perf_event_update(box, event); + } + for_each_set_bit(bit, 
box->active_mask, UNCORE_PMC_IDX_MAX) uncore_perf_event_update(box, box->events[bit]); local_irq_restore(flags); - hrtimer_forward_now(hrtimer, ns_to_ktime(UNCORE_PMU_HRTIMER_INTERVAL)); + hrtimer_forward_now(hrtimer, ns_to_ktime(box->hrtimer_duration)); return HRTIMER_RESTART; } static void uncore_pmu_start_hrtimer(struct intel_uncore_box *box) { __hrtimer_start_range_ns(&box->hrtimer, - ns_to_ktime(UNCORE_PMU_HRTIMER_INTERVAL), 0, + ns_to_ktime(box->hrtimer_duration), 0, HRTIMER_MODE_REL_PINNED, 0); } @@ -2847,43 +3240,12 @@ static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type, box->cpu = -1; box->phys_id = -1; - return box; -} - -static struct intel_uncore_box * -uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu) -{ - struct intel_uncore_box *box; - - box = *per_cpu_ptr(pmu->box, cpu); - if (box) - return box; - - raw_spin_lock(&uncore_box_lock); - list_for_each_entry(box, &pmu->box_list, list) { - if (box->phys_id == topology_physical_package_id(cpu)) { - atomic_inc(&box->refcnt); - *per_cpu_ptr(pmu->box, cpu) = box; - break; - } - } - raw_spin_unlock(&uncore_box_lock); - - return *per_cpu_ptr(pmu->box, cpu); -} + /* set default hrtimer timeout */ + box->hrtimer_duration = UNCORE_PMU_HRTIMER_INTERVAL; -static struct intel_uncore_pmu *uncore_event_to_pmu(struct perf_event *event) -{ - return container_of(event->pmu, struct intel_uncore_pmu, pmu); -} + INIT_LIST_HEAD(&box->active_list); -static struct intel_uncore_box *uncore_event_to_box(struct perf_event *event) -{ - /* - * perf core schedules event on the basis of cpu, uncore events are - * collected by one of the cpus inside a physical package. - */ - return uncore_pmu_to_box(uncore_event_to_pmu(event), smp_processor_id()); + return box; } static int @@ -3279,16 +3641,21 @@ static int __init uncore_pmu_register(struct intel_uncore_pmu *pmu) { int ret; - pmu->pmu = (struct pmu) { - .attr_groups = pmu->type->attr_groups, - .task_ctx_nr = perf_invalid_context, - .event_init = uncore_pmu_event_init, - .add = uncore_pmu_event_add, - .del = uncore_pmu_event_del, - .start = uncore_pmu_event_start, - .stop = uncore_pmu_event_stop, - .read = uncore_pmu_event_read, - }; + if (!pmu->type->pmu) { + pmu->pmu = (struct pmu) { + .attr_groups = pmu->type->attr_groups, + .task_ctx_nr = perf_invalid_context, + .event_init = uncore_pmu_event_init, + .add = uncore_pmu_event_add, + .del = uncore_pmu_event_del, + .start = uncore_pmu_event_start, + .stop = uncore_pmu_event_stop, + .read = uncore_pmu_event_read, + }; + } else { + pmu->pmu = *pmu->type->pmu; + pmu->pmu.attr_groups = pmu->type->attr_groups; + } if (pmu->type->num_boxes == 1) { if (strlen(pmu->type->name) > 0) @@ -3334,6 +3701,8 @@ static int __init uncore_type_init(struct intel_uncore_type *type) if (!pmus) return -ENOMEM; + type->pmus = pmus; + type->unconstrainted = (struct event_constraint) __EVENT_CONSTRAINT(0, (1ULL << type->num_counters) - 1, 0, type->num_counters, 0, 0); @@ -3369,7 +3738,6 @@ static int __init uncore_type_init(struct intel_uncore_type *type) } type->pmu_group = &uncore_pmu_attr_group; - type->pmus = pmus; return 0; fail: uncore_type_exit(type); @@ -3501,6 +3869,28 @@ static int __init uncore_pci_init(void) pci_uncores = ivt_pci_uncores; uncore_pci_driver = &ivt_uncore_pci_driver; break; + case 42: /* Sandy Bridge */ + ret = snb_pci2phy_map_init(PCI_DEVICE_ID_INTEL_SNB_IMC); + if (ret) + return ret; + pci_uncores = snb_pci_uncores; + uncore_pci_driver = &snb_uncore_pci_driver; + break; + case 58: /* Ivy Bridge */ + ret = 
snb_pci2phy_map_init(PCI_DEVICE_ID_INTEL_IVB_IMC); + if (ret) + return ret; + pci_uncores = snb_pci_uncores; + uncore_pci_driver = &ivb_uncore_pci_driver; + break; + case 60: /* Haswell */ + case 69: /* Haswell Celeron */ + ret = snb_pci2phy_map_init(PCI_DEVICE_ID_INTEL_HSW_IMC); + if (ret) + return ret; + pci_uncores = snb_pci_uncores; + uncore_pci_driver = &hsw_uncore_pci_driver; + break; default: return 0; } @@ -3772,7 +4162,7 @@ static void __init uncore_cpu_setup(void *dummy) static int __init uncore_cpu_init(void) { - int ret, cpu, max_cores; + int ret, max_cores; max_cores = boot_cpu_data.x86_max_cores; switch (boot_cpu_data.x86_model) { @@ -3816,29 +4206,6 @@ static int __init uncore_cpu_init(void) if (ret) return ret; - get_online_cpus(); - - for_each_online_cpu(cpu) { - int i, phys_id = topology_physical_package_id(cpu); - - for_each_cpu(i, &uncore_cpu_mask) { - if (phys_id == topology_physical_package_id(i)) { - phys_id = -1; - break; - } - } - if (phys_id < 0) - continue; - - uncore_cpu_prepare(cpu, phys_id); - uncore_event_init_cpu(cpu); - } - on_each_cpu(uncore_cpu_setup, NULL, 1); - - register_cpu_notifier(&uncore_cpu_nb); - - put_online_cpus(); - return 0; } @@ -3867,6 +4234,41 @@ static int __init uncore_pmus_register(void) return 0; } +static void __init uncore_cpumask_init(void) +{ + int cpu; + + /* + * only invoke once from the MSR or PCI init code + */ + if (!cpumask_empty(&uncore_cpu_mask)) + return; + + get_online_cpus(); + + for_each_online_cpu(cpu) { + int i, phys_id = topology_physical_package_id(cpu); + + for_each_cpu(i, &uncore_cpu_mask) { + if (phys_id == topology_physical_package_id(i)) { + phys_id = -1; + break; + } + } + if (phys_id < 0) + continue; + + uncore_cpu_prepare(cpu, phys_id); + uncore_event_init_cpu(cpu); + } + on_each_cpu(uncore_cpu_setup, NULL, 1); + + register_cpu_notifier(&uncore_cpu_nb); + + put_online_cpus(); +} + + static int __init intel_uncore_init(void) { int ret; @@ -3885,6 +4287,7 @@ static int __init intel_uncore_init(void) uncore_pci_exit(); goto fail; } + uncore_cpumask_init(); uncore_pmus_register(); return 0; diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.h b/arch/x86/kernel/cpu/perf_event_intel_uncore.h index a80ab71a883d..90236f0c94a9 100644 --- a/arch/x86/kernel/cpu/perf_event_intel_uncore.h +++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.h @@ -6,6 +6,7 @@ #define UNCORE_PMU_NAME_LEN 32 #define UNCORE_PMU_HRTIMER_INTERVAL (60LL * NSEC_PER_SEC) +#define UNCORE_SNB_IMC_HRTIMER_INTERVAL (5ULL * NSEC_PER_SEC) #define UNCORE_FIXED_EVENT 0xff #define UNCORE_PMC_IDX_MAX_GENERIC 8 @@ -440,6 +441,7 @@ struct intel_uncore_type { struct intel_uncore_ops *ops; struct uncore_event_desc *event_descs; const struct attribute_group *attr_groups[4]; + struct pmu *pmu; /* for custom pmu ops */ }; #define pmu_group attr_groups[0] @@ -488,8 +490,11 @@ struct intel_uncore_box { u64 tags[UNCORE_PMC_IDX_MAX]; struct pci_dev *pci_dev; struct intel_uncore_pmu *pmu; + u64 hrtimer_duration; /* hrtimer timeout for this box */ struct hrtimer hrtimer; struct list_head list; + struct list_head active_list; + void *io_addr; struct intel_uncore_extra_reg shared_regs[0]; }; diff --git a/arch/x86/kernel/cpu/perf_event_p4.c b/arch/x86/kernel/cpu/perf_event_p4.c index 3486e6660357..5d466b7d8609 100644 --- a/arch/x86/kernel/cpu/perf_event_p4.c +++ b/arch/x86/kernel/cpu/perf_event_p4.c @@ -1257,7 +1257,24 @@ again: pass++; goto again; } - + /* + * Perf does test runs to see if a whole group can be assigned + * together successfully.
There can be multiple rounds of this. + * Unfortunately, p4_pmu_swap_config_ts touches the hwc->config + * bits, such that the next round of group assignments will + * cause the above p4_should_swap_ts to pass instead of fail. + * This leads to counters exclusive to thread0 being used by + * thread1. + * + * Solve this with a cheap hack: reset the idx back to -1 to + * force a new lookup (p4_next_cntr) to get the right counter + * for the right thread. + * + * This probably doesn't comply with the general spirit of how + * perf wants to work, but P4 is special. :-( + */ + if (p4_should_swap_ts(hwc->config, cpu)) + hwc->idx = -1; p4_pmu_swap_config_ts(hwc, cpu); if (assign) assign[i] = cntr_idx; @@ -1322,6 +1339,7 @@ static __initconst const struct x86_pmu p4_pmu = { __init int p4_pmu_init(void) { unsigned int low, high; + int i, reg; /* If we get stripped -- indexing fails */ BUILD_BUG_ON(ARCH_P4_MAX_CCCR > INTEL_PMC_MAX_GENERIC); @@ -1340,5 +1358,19 @@ __init int p4_pmu_init(void) x86_pmu = p4_pmu; + /* + * Even though the counters are configured to interrupt a particular + * logical processor when an overflow happens, testing has shown that + * on kdump kernels (which use a single cpu), thread1's counter + * continues to run and will report an NMI on thread0. Due to the + * overflow bug, this leads to a stream of unknown NMIs. + * + * Solve this by zeroing out the registers to mimic a reset. + */ + for (i = 0; i < x86_pmu.num_counters; i++) { + reg = x86_pmu_config_addr(i); + wrmsrl_safe(reg, 0ULL); + } + return 0; } diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c index e8368c6dd2a2..d5dd80814419 100644 --- a/arch/x86/kernel/i387.c +++ b/arch/x86/kernel/i387.c @@ -86,10 +86,19 @@ EXPORT_SYMBOL(__kernel_fpu_begin); void __kernel_fpu_end(void) { - if (use_eager_fpu()) - math_state_restore(); - else + if (use_eager_fpu()) { + /* + * For eager fpu, most of the time, tsk_used_math() is true. + * Restore the user math as we are done with the kernel usage. + * In a few instances during thread exit, signal handling, etc., + * tsk_used_math() is false. Those few places will take proper + * actions, so we don't need to restore the math here. + */ + if (likely(tsk_used_math(current))) + math_state_restore(); + } else { stts(); + } } EXPORT_SYMBOL(__kernel_fpu_end); diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c index 6fcb49ce50a1..b4872b999a71 100644 --- a/arch/x86/kernel/nmi.c +++ b/arch/x86/kernel/nmi.c @@ -87,6 +87,7 @@ __setup("unknown_nmi_panic", setup_unknown_nmi_panic); #define nmi_to_desc(type) (&nmi_desc[type]) static u64 nmi_longest_ns = 1 * NSEC_PER_MSEC; + static int __init nmi_warning_debugfs(void) { debugfs_create_u64("nmi_longest_ns", 0644, @@ -95,6 +96,20 @@ static int __init nmi_warning_debugfs(void) } fs_initcall(nmi_warning_debugfs); +static void nmi_max_handler(struct irq_work *w) +{ + struct nmiaction *a = container_of(w, struct nmiaction, irq_work); + int remainder_ns, decimal_msecs; + u64 whole_msecs = ACCESS_ONCE(a->max_duration); + + remainder_ns = do_div(whole_msecs, (1000 * 1000)); + decimal_msecs = remainder_ns / 1000; + + printk_ratelimited(KERN_INFO + "INFO: NMI handler (%ps) took too long to run: %lld.%03d msecs\n", + a->handler, whole_msecs, decimal_msecs); +} + static int __kprobes nmi_handle(unsigned int type, struct pt_regs *regs, bool b2b) { struct nmi_desc *desc = nmi_to_desc(type); @@ -110,26 +125,20 @@ static int __kprobes nmi_handle(unsigned int type, struct pt_regs *regs, bool b2 * to handle those situations. 
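Two things happen in the nmi.c hunk above: the hot path is reduced to recording a per-action max_duration, and the actual report is pushed out through irq_work, presumably because printing directly from NMI context is not safe. The millisecond formatting itself is plain fixed-point division; a userspace C equivalent of what nmi_max_handler() computes:

#include <stdio.h>
#include <stdint.h>

static void report_duration(uint64_t ns)
{
	uint64_t whole_msecs = ns / (1000 * 1000);	/* do_div() in the kernel */
	unsigned int remainder_ns = ns % (1000 * 1000);
	unsigned int decimal_msecs = remainder_ns / 1000;

	printf("handler took %llu.%03u msecs\n",
	       (unsigned long long)whole_msecs, decimal_msecs);
}

int main(void)
{
	report_duration(3456789);	/* prints: handler took 3.456 msecs */
	return 0;
}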
*/ list_for_each_entry_rcu(a, &desc->head, list) { - u64 before, delta, whole_msecs; - int remainder_ns, decimal_msecs, thishandled; + int thishandled; + u64 delta; - before = sched_clock(); + delta = sched_clock(); thishandled = a->handler(type, regs); handled += thishandled; - delta = sched_clock() - before; + delta = sched_clock() - delta; trace_nmi_handler(a->handler, (int)delta, thishandled); - if (delta < nmi_longest_ns) + if (delta < nmi_longest_ns || delta < a->max_duration) continue; - nmi_longest_ns = delta; - whole_msecs = delta; - remainder_ns = do_div(whole_msecs, (1000 * 1000)); - decimal_msecs = remainder_ns / 1000; - printk_ratelimited(KERN_INFO - "INFO: NMI handler (%ps) took too long to run: " - "%lld.%03d msecs\n", a->handler, whole_msecs, - decimal_msecs); + a->max_duration = delta; + irq_work_queue(&a->irq_work); } rcu_read_unlock(); @@ -146,6 +155,8 @@ int __register_nmi_handler(unsigned int type, struct nmiaction *action) if (!action->handler) return -EINVAL; + init_irq_work(&action->irq_work, nmi_max_handler); + spin_lock_irqsave(&desc->lock, flags); /* diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 3fb8d95ab8b5..4505e2a950d8 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -298,10 +298,7 @@ void arch_cpu_idle_dead(void) */ void arch_cpu_idle(void) { - if (cpuidle_idle_call()) - x86_idle(); - else - local_irq_enable(); + x86_idle(); } /* diff --git a/arch/x86/kernel/quirks.c b/arch/x86/kernel/quirks.c index 7c6acd4b8995..ff898bbf579d 100644 --- a/arch/x86/kernel/quirks.c +++ b/arch/x86/kernel/quirks.c @@ -529,7 +529,7 @@ static void quirk_amd_nb_node(struct pci_dev *dev) return; pci_read_config_dword(nb_ht, 0x60, &val); - node = val & 7; + node = pcibus_to_node(dev->bus) | (val & 7); /* * Some hardware may return an invalid node ID, * so check it first: diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c index a32da804252e..60179ec39d4c 100644 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -122,8 +122,9 @@ static void smp_callin(void) * Since CPU0 is not wakened up by INIT, it doesn't wait for the IPI. */ cpuid = smp_processor_id(); - if (apic->wait_for_init_deassert && cpuid != 0) - apic->wait_for_init_deassert(&init_deasserted); + if (apic->wait_for_init_deassert && cpuid) + while (!atomic_read(&init_deasserted)) + cpu_relax(); /* * (This works even if the APIC is not enabled.) @@ -701,11 +702,15 @@ wakeup_cpu_via_init_nmi(int cpu, unsigned long start_ip, int apicid, int id; int boot_error; + preempt_disable(); + /* * Wake up AP by INIT, INIT, STARTUP sequence. */ - if (cpu) - return wakeup_secondary_cpu_via_init(apicid, start_ip); + if (cpu) { + boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip); + goto out; + } /* * Wake up BSP by nmi. 
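The smp_callin() change above open-codes the wait instead of calling through the wait_for_init_deassert() hook: spin on an atomic flag, relaxing the CPU between reads. The same shape in portable C11 (the pause intrinsic stands in for cpu_relax() and is guarded as x86-only here):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int init_deasserted;

static void wait_for_flag(void)
{
	while (!atomic_load(&init_deasserted)) {
#if defined(__x86_64__) || defined(__i386__)
		__builtin_ia32_pause();		/* cpu_relax() equivalent */
#endif
	}
}

int main(void)
{
	atomic_store(&init_deasserted, 1);	/* normally done by the BSP */
	wait_for_flag();
	puts("INIT deasserted");
	return 0;
}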
@@ -725,6 +730,9 @@ wakeup_cpu_via_init_nmi(int cpu, unsigned long start_ip, int apicid, boot_error = wakeup_secondary_cpu_via_nmi(id, start_ip); } +out: + preempt_enable(); + return boot_error; } diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index e81df8fce027..2de1bc09a8d4 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -3002,10 +3002,8 @@ static int cr8_write_interception(struct vcpu_svm *svm) u8 cr8_prev = kvm_get_cr8(&svm->vcpu); /* instruction emulation calls kvm_set_cr8() */ r = cr_interception(svm); - if (irqchip_in_kernel(svm->vcpu.kvm)) { - clr_cr_intercept(svm, INTERCEPT_CR8_WRITE); + if (irqchip_in_kernel(svm->vcpu.kvm)) return r; - } if (cr8_prev <= kvm_get_cr8(&svm->vcpu)) return r; kvm_run->exit_reason = KVM_EXIT_SET_TPR; @@ -3567,6 +3565,8 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) if (is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK)) return; + clr_cr_intercept(svm, INTERCEPT_CR8_WRITE); + if (irr == -1) return; diff --git a/arch/x86/net/bpf_jit.S b/arch/x86/net/bpf_jit.S index 877b9a1b2152..01495755701b 100644 --- a/arch/x86/net/bpf_jit.S +++ b/arch/x86/net/bpf_jit.S @@ -140,7 +140,7 @@ bpf_slow_path_byte_msh: push %r9; \ push SKBDATA; \ /* rsi already has offset */ \ - mov $SIZE,%ecx; /* size */ \ + mov $SIZE,%edx; /* size */ \ call bpf_internal_load_pointer_neg_helper; \ test %rax,%rax; \ pop SKBDATA; \ diff --git a/arch/x86/um/asm/barrier.h b/arch/x86/um/asm/barrier.h index 7d01b8c56c00..cc04e67bfd05 100644 --- a/arch/x86/um/asm/barrier.h +++ b/arch/x86/um/asm/barrier.h @@ -40,11 +40,7 @@ #define smp_rmb() barrier() #endif /* CONFIG_X86_PPRO_FENCE */ -#ifdef CONFIG_X86_OOSTORE -#define smp_wmb() wmb() -#else /* CONFIG_X86_OOSTORE */ #define smp_wmb() barrier() -#endif /* CONFIG_X86_OOSTORE */ #define smp_read_barrier_depends() read_barrier_depends() #define set_mb(var, value) do { (void)xchg(&var, value); } while (0) diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c index 256282e7888b..2423ef04ffea 100644 --- a/arch/x86/xen/mmu.c +++ b/arch/x86/xen/mmu.c @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr, /* Assume pteval_t is equivalent to all the other *val_t types. 
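The xen/mmu.c change that follows swaps a pteval_present() test for a bare _PAGE_PRESENT check around the mfn/pfn translation. The translation itself is mask-and-shift arithmetic; a toy version with invented constants (the PTE_PFN_MASK and PAGE_SHIFT values are simplifications, and mfn_to_pfn() is stubbed):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PTE_PFN_MASK	0x000ffffffffff000ULL
#define PTE_FLAGS_MASK	(~PTE_PFN_MASK)
#define _PAGE_PRESENT	0x1ULL

static uint64_t mfn_to_pfn(uint64_t mfn) { return mfn + 0x100; }	/* stub */

static uint64_t pte_mfn_to_pfn(uint64_t val)
{
	if (val & _PAGE_PRESENT) {	/* only translate present entries */
		uint64_t mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
		uint64_t flags = val & PTE_FLAGS_MASK;

		val = (mfn_to_pfn(mfn) << PAGE_SHIFT) | flags;
	}
	return val;
}

int main(void)
{
	uint64_t pte = (0x42ULL << PAGE_SHIFT) | _PAGE_PRESENT;

	printf("%#llx\n", (unsigned long long)pte_mfn_to_pfn(pte));
	return 0;
}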
*/ static pteval_t pte_mfn_to_pfn(pteval_t val) { - if (pteval_present(val)) { + if (val & _PAGE_PRESENT) { unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT; unsigned long pfn = mfn_to_pfn(mfn); @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val) static pteval_t pte_pfn_to_mfn(pteval_t val) { - if (pteval_present(val)) { + if (val & _PAGE_PRESENT) { unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT; pteval_t flags = val & PTE_FLAGS_MASK; unsigned long mfn; diff --git a/arch/xtensa/include/asm/Kbuild b/arch/xtensa/include/asm/Kbuild index 0a337e4a8370..c3d20ba6eb86 100644 --- a/arch/xtensa/include/asm/Kbuild +++ b/arch/xtensa/include/asm/Kbuild @@ -9,6 +9,7 @@ generic-y += errno.h generic-y += exec.h generic-y += fcntl.h generic-y += hardirq.h +generic-y += hash.h generic-y += ioctl.h generic-y += irq_regs.h generic-y += kdebug.h @@ -17,7 +18,9 @@ generic-y += kvm_para.h generic-y += linkage.h generic-y += local.h generic-y += local64.h +generic-y += mcs_spinlock.h generic-y += percpu.h +generic-y += preempt.h generic-y += resource.h generic-y += scatterlist.h generic-y += sections.h @@ -27,5 +30,3 @@ generic-y += termios.h generic-y += topology.h generic-y += trace_clock.h generic-y += xor.h -generic-y += preempt.h -generic-y += hash.h diff --git a/block/blk-core.c b/block/blk-core.c index 853f92749202..bfe16d5af9f9 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -693,20 +693,11 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id) if (!uninit_q) return NULL; - uninit_q->flush_rq = kzalloc(sizeof(struct request), GFP_KERNEL); - if (!uninit_q->flush_rq) - goto out_cleanup_queue; - q = blk_init_allocated_queue(uninit_q, rfn, lock); if (!q) - goto out_free_flush_rq; - return q; + blk_cleanup_queue(uninit_q); -out_free_flush_rq: - kfree(uninit_q->flush_rq); -out_cleanup_queue: - blk_cleanup_queue(uninit_q); - return NULL; + return q; } EXPORT_SYMBOL(blk_init_queue_node); @@ -717,9 +708,13 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn, if (!q) return NULL; - if (blk_init_rl(&q->root_rl, q, GFP_KERNEL)) + q->flush_rq = kzalloc(sizeof(struct request), GFP_KERNEL); + if (!q->flush_rq) return NULL; + if (blk_init_rl(&q->root_rl, q, GFP_KERNEL)) + goto fail; + q->request_fn = rfn; q->prep_rq_fn = NULL; q->unprep_rq_fn = NULL; @@ -742,12 +737,16 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn, /* init elevator */ if (elevator_init(q, NULL)) { mutex_unlock(&q->sysfs_lock); - return NULL; + goto fail; } mutex_unlock(&q->sysfs_lock); return q; + +fail: + kfree(q->flush_rq); + return NULL; } EXPORT_SYMBOL(blk_init_allocated_queue); diff --git a/block/blk-flush.c b/block/blk-flush.c index f598f794c3c6..43e6b4755e9a 100644 --- a/block/blk-flush.c +++ b/block/blk-flush.c @@ -140,14 +140,17 @@ static void mq_flush_run(struct work_struct *work) blk_mq_insert_request(rq, false, true, false); } -static bool blk_flush_queue_rq(struct request *rq) +static bool blk_flush_queue_rq(struct request *rq, bool add_front) { if (rq->q->mq_ops) { INIT_WORK(&rq->mq_flush_work, mq_flush_run); kblockd_schedule_work(rq->q, &rq->mq_flush_work); return false; } else { - list_add_tail(&rq->queuelist, &rq->q->queue_head); + if (add_front) + list_add(&rq->queuelist, &rq->q->queue_head); + else + list_add_tail(&rq->queuelist, &rq->q->queue_head); return true; } } @@ -193,7 +196,7 @@ static bool blk_flush_complete_seq(struct request *rq, unsigned int seq, case REQ_FSEQ_DATA: list_move_tail(&rq->flush.list, 
&q->flush_data_in_flight); - queued = blk_flush_queue_rq(rq); + queued = blk_flush_queue_rq(rq, true); break; case REQ_FSEQ_DONE: @@ -326,7 +329,7 @@ static bool blk_kick_flush(struct request_queue *q) q->flush_rq->rq_disk = first_rq->rq_disk; q->flush_rq->end_io = flush_end_io; - return blk_flush_queue_rq(q->flush_rq); + return blk_flush_queue_rq(q->flush_rq, false); } static void flush_data_end_io(struct request *rq, int error) diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c index b718806657cd..c40fb2e81bbc 100644 --- a/drivers/acpi/sleep.c +++ b/drivers/acpi/sleep.c @@ -71,6 +71,17 @@ static int acpi_sleep_prepare(u32 acpi_state) return 0; } +static bool acpi_sleep_state_supported(u8 sleep_state) +{ + acpi_status status; + u8 type_a, type_b; + + status = acpi_get_sleep_type_data(sleep_state, &type_a, &type_b); + return ACPI_SUCCESS(status) && (!acpi_gbl_reduced_hardware + || (acpi_gbl_FADT.sleep_control.address + && acpi_gbl_FADT.sleep_status.address)); +} + #ifdef CONFIG_ACPI_SLEEP static u32 acpi_target_sleep_state = ACPI_STATE_S0; @@ -604,15 +615,9 @@ static void acpi_sleep_suspend_setup(void) { int i; - for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) { - acpi_status status; - u8 type_a, type_b; - - status = acpi_get_sleep_type_data(i, &type_a, &type_b); - if (ACPI_SUCCESS(status)) { + for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) + if (acpi_sleep_state_supported(i)) sleep_states[i] = 1; - } - } suspend_set_ops(old_suspend_ordering ? &acpi_suspend_ops_old : &acpi_suspend_ops); @@ -740,11 +745,7 @@ static const struct platform_hibernation_ops acpi_hibernation_ops_old = { static void acpi_sleep_hibernate_setup(void) { - acpi_status status; - u8 type_a, type_b; - - status = acpi_get_sleep_type_data(ACPI_STATE_S4, &type_a, &type_b); - if (ACPI_FAILURE(status)) + if (!acpi_sleep_state_supported(ACPI_STATE_S4)) return; hibernation_set_ops(old_suspend_ordering ? 
@@ -793,8 +794,6 @@ static void acpi_power_off(void) int __init acpi_sleep_init(void) { - acpi_status status; - u8 type_a, type_b; char supported[ACPI_S_STATE_COUNT * 3 + 1]; char *pos = supported; int i; @@ -806,8 +805,7 @@ int __init acpi_sleep_init(void) acpi_sleep_suspend_setup(); acpi_sleep_hibernate_setup(); - status = acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b); - if (ACPI_SUCCESS(status)) { + if (acpi_sleep_state_supported(ACPI_STATE_S5)) { sleep_states[ACPI_STATE_S5] = 1; pm_power_off_prepare = acpi_power_off_prepare; pm_power_off = acpi_power_off; diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index 65d3f1b5966c..8cb2522d592a 100644 --- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -4225,8 +4225,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = { /* devices that don't properly handle queued TRIM commands */ { "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, - { "Crucial_CT???M500SSD1", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, - { "Crucial_CT???M500SSD3", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, + { "Crucial_CT???M500SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, /* * Some WD SATA-I drives spin up and down erratically when the link diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c index 516026954be6..d777bb7cea93 100644 --- a/drivers/block/mtip32xx/mtip32xx.c +++ b/drivers/block/mtip32xx/mtip32xx.c @@ -4498,7 +4498,7 @@ static int mtip_pci_probe(struct pci_dev *pdev, } dev_info(&pdev->dev, "NUMA node %d (closest: %d,%d, probe on %d:%d)\n", my_node, pcibus_to_node(pdev->bus), dev_to_node(&pdev->dev), - cpu_to_node(smp_processor_id()), smp_processor_id()); + cpu_to_node(raw_smp_processor_id()), raw_smp_processor_id()); dd = kzalloc_node(sizeof(struct driver_data), GFP_KERNEL, my_node); if (dd == NULL) { diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index b365e0dfccb6..34898d53395b 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -2109,7 +2109,6 @@ static void rbd_img_obj_callback(struct rbd_obj_request *obj_request) rbd_assert(img_request->obj_request_count > 0); rbd_assert(which != BAD_WHICH); rbd_assert(which < img_request->obj_request_count); - rbd_assert(which >= img_request->next_completion); spin_lock_irq(&img_request->completion_lock); if (which != img_request->next_completion) diff --git a/drivers/clocksource/vf_pit_timer.c b/drivers/clocksource/vf_pit_timer.c index 02821b06a39e..a918bc481c52 100644 --- a/drivers/clocksource/vf_pit_timer.c +++ b/drivers/clocksource/vf_pit_timer.c @@ -54,7 +54,7 @@ static inline void pit_irq_acknowledge(void) static u64 pit_read_sched_clock(void) { - return __raw_readl(clksrc_base + PITCVAL); + return ~__raw_readl(clksrc_base + PITCVAL); } static int __init pit_clocksource_init(unsigned long rate) diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index cf485d928903..199b52b7c3e1 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1129,7 +1129,7 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif, per_cpu(cpufreq_cpu_data, j) = policy; write_unlock_irqrestore(&cpufreq_driver_lock, flags); - if (cpufreq_driver->get) { + if (cpufreq_driver->get && !cpufreq_driver->setpolicy) { policy->cur = cpufreq_driver->get(policy->cpu); if (!policy->cur) { pr_err("%s: ->get() failed\n", __func__); @@ -2143,7 +2143,7 @@ int cpufreq_update_policy(unsigned int cpu) * BIOS might change freq behind our back * -> ask driver for current freq and notify governors about a change */ - if 
(cpufreq_driver->get) { + if (cpufreq_driver->get && !cpufreq_driver->setpolicy) { new_policy.cur = cpufreq_driver->get(cpu); if (!policy->cur) { pr_debug("Driver did not initialize current freq"); diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c index 78fd174c57e8..f48607cd2540 100644 --- a/drivers/cpuidle/cpuidle-powernv.c +++ b/drivers/cpuidle/cpuidle-powernv.c @@ -14,6 +14,7 @@ #include <asm/machdep.h> #include <asm/firmware.h> +#include <asm/runlatch.h> struct cpuidle_driver powernv_idle_driver = { .name = "powernv_idle", @@ -30,12 +31,14 @@ static int snooze_loop(struct cpuidle_device *dev, local_irq_enable(); set_thread_flag(TIF_POLLING_NRFLAG); + ppc64_runlatch_off(); while (!need_resched()) { HMT_low(); HMT_very_low(); } HMT_medium(); + ppc64_runlatch_on(); clear_thread_flag(TIF_POLLING_NRFLAG); smp_mb(); return index; @@ -45,7 +48,9 @@ static int nap_loop(struct cpuidle_device *dev, struct cpuidle_driver *drv, int index) { + ppc64_runlatch_off(); power7_idle(); + ppc64_runlatch_on(); return index; } diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c index 7ab564aa0b1c..6f7b01956885 100644 --- a/drivers/cpuidle/cpuidle-pseries.c +++ b/drivers/cpuidle/cpuidle-pseries.c @@ -17,6 +17,7 @@ #include <asm/reg.h> #include <asm/machdep.h> #include <asm/firmware.h> +#include <asm/runlatch.h> #include <asm/plpar_wrappers.h> struct cpuidle_driver pseries_idle_driver = { @@ -29,6 +30,7 @@ static struct cpuidle_state *cpuidle_state_table; static inline void idle_loop_prolog(unsigned long *in_purr) { + ppc64_runlatch_off(); *in_purr = mfspr(SPRN_PURR); /* * Indicate to the HV that we are idle. Now would be @@ -45,6 +47,10 @@ static inline void idle_loop_epilog(unsigned long in_purr) wait_cycles += mfspr(SPRN_PURR) - in_purr; get_lppaca()->wait_state_cycles = cpu_to_be64(wait_cycles); get_lppaca()->idle = 0; + + if (irqs_disabled()) + local_irq_enable(); + ppc64_runlatch_on(); } static int snooze_loop(struct cpuidle_device *dev, diff --git a/drivers/gpu/drm/drm_pci.c b/drivers/gpu/drm/drm_pci.c index 5736aaa7e86c..f7af69bcf3f4 100644 --- a/drivers/gpu/drm/drm_pci.c +++ b/drivers/gpu/drm/drm_pci.c @@ -468,8 +468,8 @@ void drm_pci_exit(struct drm_driver *driver, struct pci_driver *pdriver) } else { list_for_each_entry_safe(dev, tmp, &driver->legacy_dev_list, legacy_dev_list) { - drm_put_dev(dev); list_del(&dev->legacy_dev_list); + drm_put_dev(dev); } } DRM_INFO("Module unloaded\n"); diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c index 215131ab1dd2..c204b4e3356e 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_drv.c +++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c @@ -172,20 +172,24 @@ static int exynos_drm_open(struct drm_device *dev, struct drm_file *file) ret = exynos_drm_subdrv_open(dev, file); if (ret) - goto out; + goto err_file_priv_free; anon_filp = anon_inode_getfile("exynos_gem", &exynos_drm_gem_fops, NULL, 0); if (IS_ERR(anon_filp)) { ret = PTR_ERR(anon_filp); - goto out; + goto err_subdrv_close; } anon_filp->f_mode = FMODE_READ | FMODE_WRITE; file_priv->anon_filp = anon_filp; return ret; -out: + +err_subdrv_close: + exynos_drm_subdrv_close(dev, file); + +err_file_priv_free: kfree(file_priv); file->driver_priv = NULL; return ret; diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index 40a2b36b276b..d278be110805 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -842,7 +842,7 @@ void 
i915_gem_suspend_gtt_mappings(struct drm_device *dev) dev_priv->gtt.base.clear_range(&dev_priv->gtt.base, dev_priv->gtt.base.start / PAGE_SIZE, dev_priv->gtt.base.total / PAGE_SIZE, - false); + true); } void i915_gem_restore_gtt_mappings(struct drm_device *dev) diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c index d58b4e287e32..28d24caa49f3 100644 --- a/drivers/gpu/drm/i915/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/i915_gem_stolen.c @@ -214,6 +214,13 @@ int i915_gem_init_stolen(struct drm_device *dev) struct drm_i915_private *dev_priv = dev->dev_private; int bios_reserved = 0; +#ifdef CONFIG_INTEL_IOMMU + if (intel_iommu_gfx_mapped) { + DRM_INFO("DMAR active, disabling use of stolen memory\n"); + return 0; + } +#endif + if (dev_priv->gtt.stolen_size == 0) return 0; diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c index 9fec71175571..d554169ac592 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c @@ -618,33 +618,25 @@ static u32 gm45_get_vblank_counter(struct drm_device *dev, int pipe) /* raw reads, only for fast reads of display block, no need for forcewake etc. */ #define __raw_i915_read32(dev_priv__, reg__) readl((dev_priv__)->regs + (reg__)) -#define __raw_i915_read16(dev_priv__, reg__) readw((dev_priv__)->regs + (reg__)) static bool ilk_pipe_in_vblank_locked(struct drm_device *dev, enum pipe pipe) { struct drm_i915_private *dev_priv = dev->dev_private; uint32_t status; - - if (INTEL_INFO(dev)->gen < 7) { - status = pipe == PIPE_A ? - DE_PIPEA_VBLANK : - DE_PIPEB_VBLANK; + int reg; + + if (INTEL_INFO(dev)->gen >= 8) { + status = GEN8_PIPE_VBLANK; + reg = GEN8_DE_PIPE_ISR(pipe); + } else if (INTEL_INFO(dev)->gen >= 7) { + status = DE_PIPE_VBLANK_IVB(pipe); + reg = DEISR; } else { - switch (pipe) { - default: - case PIPE_A: - status = DE_PIPEA_VBLANK_IVB; - break; - case PIPE_B: - status = DE_PIPEB_VBLANK_IVB; - break; - case PIPE_C: - status = DE_PIPEC_VBLANK_IVB; - break; - } + status = DE_PIPE_VBLANK(pipe); + reg = DEISR; } - return __raw_i915_read32(dev_priv, DEISR) & status; + return __raw_i915_read32(dev_priv, reg) & status; } static int i915_get_crtc_scanoutpos(struct drm_device *dev, int pipe, @@ -702,7 +694,28 @@ static int i915_get_crtc_scanoutpos(struct drm_device *dev, int pipe, else position = __raw_i915_read32(dev_priv, PIPEDSL(pipe)) & DSL_LINEMASK_GEN3; - if (HAS_PCH_SPLIT(dev)) { + if (HAS_DDI(dev)) { + /* + * On HSW HDMI outputs there seems to be a 2 line + * difference, whereas eDP has the normal 1 line + * difference that earlier platforms have. External + * DP is unknown. For now just check for the 2 line + * difference case on all output types on HSW+. + * + * This might misinterpret the scanline counter being + * one line too far along on eDP, but that's less + * dangerous than the alternative since that would lead + * the vblank timestamp code astray when it sees a + * scanline count before vblank_start during a vblank + * interrupt. + */ + in_vbl = ilk_pipe_in_vblank_locked(dev, pipe); + if ((in_vbl && (position == vbl_start - 2 || + position == vbl_start - 1)) || + (!in_vbl && (position == vbl_end - 2 || + position == vbl_end - 1))) + position = (position + 2) % vtotal; + } else if (HAS_PCH_SPLIT(dev)) { /* * The scanline counter increments at the leading edge * of hsync, ie. 
it completely misses the active portion @@ -2769,10 +2782,9 @@ static void ibx_irq_postinstall(struct drm_device *dev) return; if (HAS_PCH_IBX(dev)) { - mask = SDE_GMBUS | SDE_AUX_MASK | SDE_TRANSB_FIFO_UNDER | - SDE_TRANSA_FIFO_UNDER | SDE_POISON; + mask = SDE_GMBUS | SDE_AUX_MASK | SDE_POISON; } else { - mask = SDE_GMBUS_CPT | SDE_AUX_MASK_CPT | SDE_ERROR_CPT; + mask = SDE_GMBUS_CPT | SDE_AUX_MASK_CPT; I915_WRITE(SERR_INT, I915_READ(SERR_INT)); } @@ -2832,20 +2844,19 @@ static int ironlake_irq_postinstall(struct drm_device *dev) display_mask = (DE_MASTER_IRQ_CONTROL | DE_GSE_IVB | DE_PCH_EVENT_IVB | DE_PLANEC_FLIP_DONE_IVB | DE_PLANEB_FLIP_DONE_IVB | - DE_PLANEA_FLIP_DONE_IVB | DE_AUX_CHANNEL_A_IVB | - DE_ERR_INT_IVB); + DE_PLANEA_FLIP_DONE_IVB | DE_AUX_CHANNEL_A_IVB); extra_mask = (DE_PIPEC_VBLANK_IVB | DE_PIPEB_VBLANK_IVB | - DE_PIPEA_VBLANK_IVB); + DE_PIPEA_VBLANK_IVB | DE_ERR_INT_IVB); I915_WRITE(GEN7_ERR_INT, I915_READ(GEN7_ERR_INT)); } else { display_mask = (DE_MASTER_IRQ_CONTROL | DE_GSE | DE_PCH_EVENT | DE_PLANEA_FLIP_DONE | DE_PLANEB_FLIP_DONE | DE_AUX_CHANNEL_A | - DE_PIPEB_FIFO_UNDERRUN | DE_PIPEA_FIFO_UNDERRUN | DE_PIPEB_CRC_DONE | DE_PIPEA_CRC_DONE | DE_POISON); - extra_mask = DE_PIPEA_VBLANK | DE_PIPEB_VBLANK | DE_PCU_EVENT; + extra_mask = DE_PIPEA_VBLANK | DE_PIPEB_VBLANK | DE_PCU_EVENT | + DE_PIPEB_FIFO_UNDERRUN | DE_PIPEA_FIFO_UNDERRUN; } dev_priv->irq_mask = ~display_mask; @@ -2961,9 +2972,9 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv) struct drm_device *dev = dev_priv->dev; uint32_t de_pipe_masked = GEN8_PIPE_FLIP_DONE | GEN8_PIPE_CDCLK_CRC_DONE | - GEN8_PIPE_FIFO_UNDERRUN | GEN8_DE_PIPE_IRQ_FAULT_ERRORS; - uint32_t de_pipe_enables = de_pipe_masked | GEN8_PIPE_VBLANK; + uint32_t de_pipe_enables = de_pipe_masked | GEN8_PIPE_VBLANK | + GEN8_PIPE_FIFO_UNDERRUN; int pipe; dev_priv->de_irq_mask[PIPE_A] = ~de_pipe_masked; dev_priv->de_irq_mask[PIPE_B] = ~de_pipe_masked; diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c index e06b9e017d6b..234ac5f7bc5a 100644 --- a/drivers/gpu/drm/i915/intel_ddi.c +++ b/drivers/gpu/drm/i915/intel_ddi.c @@ -1244,6 +1244,7 @@ static void intel_ddi_post_disable(struct intel_encoder *intel_encoder) if (type == INTEL_OUTPUT_DISPLAYPORT || type == INTEL_OUTPUT_EDP) { struct intel_dp *intel_dp = enc_to_intel_dp(encoder); intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF); + ironlake_edp_panel_vdd_on(intel_dp); ironlake_edp_panel_off(intel_dp); } diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c index 57552eb386b0..2688f6d64bb9 100644 --- a/drivers/gpu/drm/i915/intel_dp.c +++ b/drivers/gpu/drm/i915/intel_dp.c @@ -1249,17 +1249,24 @@ void ironlake_edp_panel_off(struct intel_dp *intel_dp) DRM_DEBUG_KMS("Turn eDP power off\n"); + WARN(!intel_dp->want_panel_vdd, "Need VDD to turn off panel\n"); + pp = ironlake_get_pp_control(intel_dp); /* We need to switch off panel power _and_ force vdd, for otherwise some * panels get very unhappy and cease to work. */ - pp &= ~(POWER_TARGET_ON | PANEL_POWER_RESET | EDP_BLC_ENABLE); + pp &= ~(POWER_TARGET_ON | EDP_FORCE_VDD | PANEL_POWER_RESET | EDP_BLC_ENABLE); pp_ctrl_reg = _pp_ctrl_reg(intel_dp); I915_WRITE(pp_ctrl_reg, pp); POSTING_READ(pp_ctrl_reg); + intel_dp->want_panel_vdd = false; + ironlake_wait_panel_off(intel_dp); + + /* We got a reference when we enabled the VDD. 
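The eDP changes in this hunk enforce an ordering contract: callers must hold the forced-VDD reference before ironlake_edp_panel_off() runs, and it is the off path that finally drops the runtime-PM reference taken when VDD was enabled. A toy model of that handshake (the flags and names are illustrative, not the i915 API):

#include <assert.h>
#include <stdio.h>

static int want_vdd, pm_refs;

static void vdd_on(void)
{
	want_vdd = 1;
	pm_refs++;		/* hold the device awake while VDD is forced */
}

static void panel_off(void)
{
	assert(want_vdd);	/* "Need VDD to turn off panel" */
	want_vdd = 0;
	pm_refs--;		/* the reference taken in vdd_on() */
}

int main(void)
{
	vdd_on();
	panel_off();
	printf("pm refs balanced: %d\n", pm_refs);	/* 0 */
	return 0;
}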
*/ + intel_runtime_pm_put(dev_priv); } void ironlake_edp_backlight_on(struct intel_dp *intel_dp) @@ -1639,7 +1646,7 @@ static void intel_edp_psr_enable_source(struct intel_dp *intel_dp) val |= EDP_PSR_LINK_DISABLE; I915_WRITE(EDP_PSR_CTL(dev), val | - IS_BROADWELL(dev) ? 0 : link_entry_time | + (IS_BROADWELL(dev) ? 0 : link_entry_time) | max_sleep_time << EDP_PSR_MAX_SLEEP_TIME_SHIFT | idle_frames << EDP_PSR_IDLE_FRAME_SHIFT | EDP_PSR_ENABLE); @@ -1784,6 +1791,7 @@ static void intel_disable_dp(struct intel_encoder *encoder) /* Make sure the panel is off before trying to change the mode. But also * ensure that we have vdd while we switch off the panel. */ + ironlake_edp_panel_vdd_on(intel_dp); ironlake_edp_backlight_off(intel_dp); intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF); ironlake_edp_panel_off(intel_dp); diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index 89c484d8ac26..4ee702ac8907 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -866,13 +866,16 @@ static int nouveau_pmops_runtime_suspend(struct device *dev) struct drm_device *drm_dev = pci_get_drvdata(pdev); int ret; - if (nouveau_runtime_pm == 0) - return -EINVAL; + if (nouveau_runtime_pm == 0) { + pm_runtime_forbid(dev); + return -EBUSY; + } /* are we optimus enabled? */ if (nouveau_runtime_pm == -1 && !nouveau_is_optimus() && !nouveau_is_v1_dsm()) { DRM_DEBUG_DRIVER("failing to power off - not optimus\n"); - return -EINVAL; + pm_runtime_forbid(dev); + return -EBUSY; } nv_debug_level(SILENT); @@ -923,12 +926,15 @@ static int nouveau_pmops_runtime_idle(struct device *dev) struct nouveau_drm *drm = nouveau_drm(drm_dev); struct drm_crtc *crtc; - if (nouveau_runtime_pm == 0) + if (nouveau_runtime_pm == 0) { + pm_runtime_forbid(dev); return -EBUSY; + } /* are we optimus enabled? 
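The EDP_PSR_CTL hunk a few lines up is a pure precedence fix: | binds tighter than ?:, so the unparenthesized original made the whole accumulated OR-chain the ternary's condition instead of selecting one operand. A standalone demonstration:

#include <stdio.h>

int main(void)
{
	int val = 0x10, is_bdw = 0, link_entry_time = 0x4;

	/* parses as (val | is_bdw) ? 0 : ... -- val is discarded */
	int buggy = val | is_bdw ? 0 : link_entry_time;
	/* the fix: the ternary only chooses what to OR in */
	int fixed = val | (is_bdw ? 0 : link_entry_time);

	printf("buggy=%#x fixed=%#x\n", buggy, fixed);	/* 0 vs 0x14 */
	return 0;
}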
*/ if (nouveau_runtime_pm == -1 && !nouveau_is_optimus() && !nouveau_is_v1_dsm()) { DRM_DEBUG_DRIVER("failing to power off - not optimus\n"); + pm_runtime_forbid(dev); return -EBUSY; } diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c index e22be8458d92..bbb17841a9e5 100644 --- a/drivers/gpu/drm/radeon/cik.c +++ b/drivers/gpu/drm/radeon/cik.c @@ -4134,8 +4134,11 @@ static void cik_cp_compute_enable(struct radeon_device *rdev, bool enable) { if (enable) WREG32(CP_MEC_CNTL, 0); - else + else { WREG32(CP_MEC_CNTL, (MEC_ME1_HALT | MEC_ME2_HALT)); + rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX].ready = false; + rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX].ready = false; + } udelay(50); } diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c index 1ecb3f1070e3..94626ea90fa5 100644 --- a/drivers/gpu/drm/radeon/cik_sdma.c +++ b/drivers/gpu/drm/radeon/cik_sdma.c @@ -264,6 +264,8 @@ static void cik_sdma_gfx_stop(struct radeon_device *rdev) WREG32(SDMA0_GFX_RB_CNTL + reg_offset, rb_cntl); WREG32(SDMA0_GFX_IB_CNTL + reg_offset, 0); } + rdev->ring[R600_RING_TYPE_DMA_INDEX].ready = false; + rdev->ring[CAYMAN_RING_TYPE_DMA1_INDEX].ready = false; } /** @@ -291,6 +293,11 @@ void cik_sdma_enable(struct radeon_device *rdev, bool enable) u32 me_cntl, reg_offset; int i; + if (enable == false) { + cik_sdma_gfx_stop(rdev); + cik_sdma_rlc_stop(rdev); + } + for (i = 0; i < 2; i++) { if (i == 0) reg_offset = SDMA0_REGISTER_OFFSET; @@ -420,10 +427,6 @@ static int cik_sdma_load_microcode(struct radeon_device *rdev) if (!rdev->sdma_fw) return -EINVAL; - /* stop the gfx rings and rlc compute queues */ - cik_sdma_gfx_stop(rdev); - cik_sdma_rlc_stop(rdev); - /* halt the MEs */ cik_sdma_enable(rdev, false); @@ -492,9 +495,6 @@ int cik_sdma_resume(struct radeon_device *rdev) */ void cik_sdma_fini(struct radeon_device *rdev) { - /* stop the gfx rings and rlc compute queues */ - cik_sdma_gfx_stop(rdev); - cik_sdma_rlc_stop(rdev); /* halt the MEs */ cik_sdma_enable(rdev, false); radeon_ring_fini(rdev, &rdev->ring[R600_RING_TYPE_DMA_INDEX]); diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c index 84a1bbb75f91..f633c2782170 100644 --- a/drivers/gpu/drm/radeon/radeon_drv.c +++ b/drivers/gpu/drm/radeon/radeon_drv.c @@ -403,11 +403,15 @@ static int radeon_pmops_runtime_suspend(struct device *dev) struct drm_device *drm_dev = pci_get_drvdata(pdev); int ret; - if (radeon_runtime_pm == 0) - return -EINVAL; + if (radeon_runtime_pm == 0) { + pm_runtime_forbid(dev); + return -EBUSY; + } - if (radeon_runtime_pm == -1 && !radeon_is_px()) - return -EINVAL; + if (radeon_runtime_pm == -1 && !radeon_is_px()) { + pm_runtime_forbid(dev); + return -EBUSY; + } drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING; drm_kms_helper_poll_disable(drm_dev); @@ -456,12 +460,15 @@ static int radeon_pmops_runtime_idle(struct device *dev) struct drm_device *drm_dev = pci_get_drvdata(pdev); struct drm_crtc *crtc; - if (radeon_runtime_pm == 0) + if (radeon_runtime_pm == 0) { + pm_runtime_forbid(dev); return -EBUSY; + } /* are we PX enabled? 
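Nouveau here, and radeon in the hunks below, convert "runtime PM not applicable" from -EINVAL into pm_runtime_forbid() plus -EBUSY; the pattern reads as telling the PM core to stop attempting suspends on hardware where they can never succeed. A compressed model of the callback shape (the EBUSY value and names are illustrative; pm_runtime_forbid() is the real kernel API being modeled):

#include <stdio.h>

#define EBUSY 16

static int runtime_pm_param;	/* 0 = administratively disabled */
static int forbidden;

static void pm_runtime_forbid_model(void) { forbidden = 1; }

static int runtime_suspend(void)
{
	if (runtime_pm_param == 0) {
		pm_runtime_forbid_model();	/* no further attempts */
		return -EBUSY;
	}
	/* ... actually power the device down ... */
	return 0;
}

int main(void)
{
	printf("suspend=%d forbidden=%d\n", runtime_suspend(), forbidden);
	return 0;
}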
*/ if (radeon_runtime_pm == -1 && !radeon_is_px()) { DRM_DEBUG_DRIVER("failing to power off - not px\n"); + pm_runtime_forbid(dev); return -EBUSY; } diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c index 2aecd6dc2610..66ed3ea71440 100644 --- a/drivers/gpu/drm/radeon/radeon_kms.c +++ b/drivers/gpu/drm/radeon/radeon_kms.c @@ -33,6 +33,13 @@ #include <linux/vga_switcheroo.h> #include <linux/slab.h> #include <linux/pm_runtime.h> + +#if defined(CONFIG_VGA_SWITCHEROO) +bool radeon_is_px(void); +#else +static inline bool radeon_is_px(void) { return false; } +#endif + /** * radeon_driver_unload_kms - Main unload function for KMS. * @@ -130,7 +137,8 @@ int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags) "Error during ACPI methods call\n"); } - if (radeon_runtime_pm != 0) { + if ((radeon_runtime_pm == 1) || + ((radeon_runtime_pm == -1) && radeon_is_px())) { pm_runtime_use_autosuspend(dev->dev); pm_runtime_set_autosuspend_delay(dev->dev, 5000); pm_runtime_set_active(dev->dev); diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a06651309388..214b7992a3aa 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -351,9 +351,11 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo, moved: if (bo->evicted) { - ret = bdev->driver->invalidate_caches(bdev, bo->mem.placement); - if (ret) - pr_err("Can not flush read caches\n"); + if (bdev->driver->invalidate_caches) { + ret = bdev->driver->invalidate_caches(bdev, bo->mem.placement); + if (ret) + pr_err("Can not flush read caches\n"); + } bo->evicted = false; } diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index 801231c9ae48..0ce48e5a9cb4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -339,11 +339,13 @@ int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma, vma->vm_private_data = bo; /* - * PFNMAP is faster than MIXEDMAP due to reduced page - * administration. So use MIXEDMAP only if private VMA, where - * we need to support COW. + * We'd like to use VM_PFNMAP on shared mappings, where + * (vma->vm_flags & VM_SHARED) != 0, for performance reasons, + * but for some reason VM_PFNMAP + x86 PAT + write-combine is very + * bad for performance. Until that has been sorted out, use + * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719 */ - vma->vm_flags |= (vma->vm_flags & VM_SHARED) ? VM_PFNMAP : VM_MIXEDMAP; + vma->vm_flags |= VM_MIXEDMAP; vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP; return 0; out_unref: @@ -359,7 +361,7 @@ int ttm_fbdev_mmap(struct vm_area_struct *vma, struct ttm_buffer_object *bo) vma->vm_ops = &ttm_bo_vm_ops; vma->vm_private_data = ttm_bo_reference(bo); - vma->vm_flags |= (vma->vm_flags & VM_SHARED) ? 
VM_PFNMAP : VM_MIXEDMAP; + vma->vm_flags |= VM_MIXEDMAP; vma->vm_flags |= VM_IO | VM_DONTEXPAND; return 0; } diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c index 8d67b943ac05..0394811251bd 100644 --- a/drivers/gpu/drm/udl/udl_gem.c +++ b/drivers/gpu/drm/udl/udl_gem.c @@ -177,8 +177,10 @@ void udl_gem_free_object(struct drm_gem_object *gem_obj) if (obj->vmapping) udl_gem_vunmap(obj); - if (gem_obj->import_attach) + if (gem_obj->import_attach) { drm_prime_gem_destroy(gem_obj, obj->sg); + put_device(gem_obj->dev->dev); + } if (obj->pages) udl_gem_put_pages(obj); @@ -256,9 +258,12 @@ struct drm_gem_object *udl_gem_prime_import(struct drm_device *dev, int ret; /* need to attach */ + get_device(dev->dev); attach = dma_buf_attach(dma_buf, dev->dev); - if (IS_ERR(attach)) + if (IS_ERR(attach)) { + put_device(dev->dev); return ERR_CAST(attach); + } get_dma_buf(dma_buf); @@ -282,6 +287,6 @@ fail_unmap: fail_detach: dma_buf_detach(dma_buf, attach); dma_buf_put(dma_buf); - + put_device(dev->dev); return ERR_PTR(ret); } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c index 82468d902915..e7af580ab977 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c @@ -830,6 +830,24 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data, if (unlikely(ret != 0)) goto out_unlock; + /* + * A gb-aware client referencing a shared surface will + * expect a backup buffer to be present. + */ + if (dev_priv->has_mob && req->shareable) { + uint32_t backup_handle; + + ret = vmw_user_dmabuf_alloc(dev_priv, tfile, + res->backup_size, + true, + &backup_handle, + &res->backup); + if (unlikely(ret != 0)) { + vmw_resource_unreference(&res); + goto out_unlock; + } + } + tmp = vmw_resource_reference(&srf->res); ret = ttm_prime_object_init(tfile, res->backup_size, &user_srf->prime, req->shareable, VMW_RES_SURFACE, diff --git a/drivers/hid/hid-lg4ff.c b/drivers/hid/hid-lg4ff.c index befe0e336471..24883b4d1a49 100644 --- a/drivers/hid/hid-lg4ff.c +++ b/drivers/hid/hid-lg4ff.c @@ -43,6 +43,7 @@ #define G25_REV_MIN 0x22 #define G27_REV_MAJ 0x12 #define G27_REV_MIN 0x38 +#define G27_2_REV_MIN 0x39 #define to_hid_device(pdev) container_of(pdev, struct hid_device, dev) @@ -130,6 +131,7 @@ static const struct lg4ff_usb_revision lg4ff_revs[] = { {DFP_REV_MAJ, DFP_REV_MIN, &native_dfp}, /* Driving Force Pro */ {G25_REV_MAJ, G25_REV_MIN, &native_g25}, /* G25 */ {G27_REV_MAJ, G27_REV_MIN, &native_g27}, /* G27 */ + {G27_REV_MAJ, G27_2_REV_MIN, &native_g27}, /* G27 v2 */ }; /* Recalculates X axis value accordingly to currently selected range */ diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c index 12354055d474..2f19b15f47f2 100644 --- a/drivers/hid/hid-sony.c +++ b/drivers/hid/hid-sony.c @@ -42,6 +42,7 @@ #define DUALSHOCK4_CONTROLLER_BT BIT(6) #define SONY_LED_SUPPORT (SIXAXIS_CONTROLLER_USB | BUZZ_CONTROLLER | DUALSHOCK4_CONTROLLER_USB) +#define SONY_FF_SUPPORT (SIXAXIS_CONTROLLER_USB | DUALSHOCK4_CONTROLLER_USB) #define MAX_LEDS 4 @@ -499,6 +500,7 @@ struct sony_sc { __u8 right; #endif + __u8 worker_initialized; __u8 led_state[MAX_LEDS]; __u8 led_count; }; @@ -993,22 +995,11 @@ static int sony_init_ff(struct hid_device *hdev) return input_ff_create_memless(input_dev, NULL, sony_play_effect); } -static void sony_destroy_ff(struct hid_device *hdev) -{ - struct sony_sc *sc = hid_get_drvdata(hdev); - - cancel_work_sync(&sc->state_worker); -} - #else static int sony_init_ff(struct hid_device *hdev) { return 
0; } - -static void sony_destroy_ff(struct hid_device *hdev) -{ -} #endif static int sony_set_output_report(struct sony_sc *sc, int req_id, int req_size) @@ -1077,6 +1068,8 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) if (sc->quirks & SIXAXIS_CONTROLLER_USB) { hdev->hid_output_raw_report = sixaxis_usb_output_raw_report; ret = sixaxis_set_operational_usb(hdev); + + sc->worker_initialized = 1; INIT_WORK(&sc->state_worker, sixaxis_state_worker); } else if (sc->quirks & SIXAXIS_CONTROLLER_BT) @@ -1087,6 +1080,7 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) if (ret < 0) goto err_stop; + sc->worker_initialized = 1; INIT_WORK(&sc->state_worker, dualshock4_state_worker); } else { ret = 0; @@ -1101,9 +1095,11 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) goto err_stop; } - ret = sony_init_ff(hdev); - if (ret < 0) - goto err_stop; + if (sc->quirks & SONY_FF_SUPPORT) { + ret = sony_init_ff(hdev); + if (ret < 0) + goto err_stop; + } return 0; err_stop: @@ -1120,7 +1116,8 @@ static void sony_remove(struct hid_device *hdev) if (sc->quirks & SONY_LED_SUPPORT) sony_leds_remove(hdev); - sony_destroy_ff(hdev); + if (sc->worker_initialized) + cancel_work_sync(&sc->state_worker); hid_hw_stop(hdev); } diff --git a/drivers/hid/hidraw.c b/drivers/hid/hidraw.c index cb0137b3718d..ab24ce2eb28f 100644 --- a/drivers/hid/hidraw.c +++ b/drivers/hid/hidraw.c @@ -320,13 +320,13 @@ static void drop_ref(struct hidraw *hidraw, int exists_bit) hid_hw_close(hidraw->hid); wake_up_interruptible(&hidraw->wait); } + device_destroy(hidraw_class, + MKDEV(hidraw_major, hidraw->minor)); } else { --hidraw->open; } if (!hidraw->open) { if (!hidraw->exist) { - device_destroy(hidraw_class, - MKDEV(hidraw_major, hidraw->minor)); hidraw_table[hidraw->minor] = NULL; kfree(hidraw); } else { diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig index f5ed03164d86..de17c5593d97 100644 --- a/drivers/i2c/busses/Kconfig +++ b/drivers/i2c/busses/Kconfig @@ -387,7 +387,7 @@ config I2C_CBUS_GPIO config I2C_CPM tristate "Freescale CPM1 or CPM2 (MPC8xx/826x)" - depends on (CPM1 || CPM2) && OF_I2C + depends on CPM1 || CPM2 help This supports the use of the I2C interface on Freescale processors with CPM1 or CPM2. 
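The hid-sony hunks above replace the unconditional sony_destroy_ff() with a worker_initialized flag checked in sony_remove(): cancel_work_sync() may only be called on a work item that was actually set up with INIT_WORK(), and not every quirk path in sony_probe() initializes one. A minimal sketch of that guard pattern, using hypothetical names (my_dev, my_probe, my_remove) rather than the driver's own:

#include <linux/workqueue.h>

struct my_dev {
        struct work_struct state_worker;
        bool worker_initialized;        /* set only after INIT_WORK() */
};

static void my_state_worker(struct work_struct *work)
{
        /* push cached state out to the hardware */
}

static int my_probe(struct my_dev *d, bool needs_state_worker)
{
        /* Only some device variants need the deferred state worker. */
        if (needs_state_worker) {
                INIT_WORK(&d->state_worker, my_state_worker);
                d->worker_initialized = true;
        }
        return 0;
}

static void my_remove(struct my_dev *d)
{
        /* Never cancel a work item that was never initialized. */
        if (d->worker_initialized)
                cancel_work_sync(&d->state_worker);
}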
diff --git a/drivers/i2c/busses/i2c-cpm.c b/drivers/i2c/busses/i2c-cpm.c
index be7f0a20d634..f3b89a4698b6 100644
--- a/drivers/i2c/busses/i2c-cpm.c
+++ b/drivers/i2c/busses/i2c-cpm.c
@@ -39,7 +39,9 @@
 #include <linux/i2c.h>
 #include <linux/io.h>
 #include <linux/dma-mapping.h>
+#include <linux/of_address.h>
 #include <linux/of_device.h>
+#include <linux/of_irq.h>
 #include <linux/of_platform.h>
 #include <sysdev/fsl_soc.h>
 #include <asm/cpm.h>
diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index a06e12552886..ce953d895f5b 100644
--- a/drivers/input/evdev.c
+++ b/drivers/input/evdev.c
@@ -954,11 +954,13 @@ static long evdev_do_ioctl(struct file *file, unsigned int cmd,
                        return -EFAULT;
 
                error = input_ff_upload(dev, &effect, file);
+               if (error)
+                       return error;
 
                if (put_user(effect.id, &(((struct ff_effect __user *)p)->id)))
                        return -EFAULT;
 
-               return error;
+               return 0;
        }
 
        /* Multi-number variable-length handlers */
diff --git a/drivers/input/keyboard/adp5588-keys.c b/drivers/input/keyboard/adp5588-keys.c
index bb3b57bea8ba..5ef7fcf0e250 100644
--- a/drivers/input/keyboard/adp5588-keys.c
+++ b/drivers/input/keyboard/adp5588-keys.c
@@ -76,8 +76,18 @@ static int adp5588_gpio_get_value(struct gpio_chip *chip, unsigned off)
        struct adp5588_kpad *kpad = container_of(chip, struct adp5588_kpad, gc);
        unsigned int bank = ADP5588_BANK(kpad->gpiomap[off]);
        unsigned int bit = ADP5588_BIT(kpad->gpiomap[off]);
+       int val;
 
-       return !!(adp5588_read(kpad->client, GPIO_DAT_STAT1 + bank) & bit);
+       mutex_lock(&kpad->gpio_lock);
+
+       if (kpad->dir[bank] & bit)
+               val = kpad->dat_out[bank];
+       else
+               val = adp5588_read(kpad->client, GPIO_DAT_STAT1 + bank);
+
+       mutex_unlock(&kpad->gpio_lock);
+
+       return !!(val & bit);
 }
 
 static void adp5588_gpio_set_value(struct gpio_chip *chip,
diff --git a/drivers/input/misc/da9052_onkey.c b/drivers/input/misc/da9052_onkey.c
index 1f695f229ea8..184c8f21ab59 100644
--- a/drivers/input/misc/da9052_onkey.c
+++ b/drivers/input/misc/da9052_onkey.c
@@ -27,29 +27,32 @@ struct da9052_onkey {
 
 static void da9052_onkey_query(struct da9052_onkey *onkey)
 {
-       int key_stat;
+       int ret;
 
-       key_stat = da9052_reg_read(onkey->da9052, DA9052_EVENT_B_REG);
-       if (key_stat < 0) {
+       ret = da9052_reg_read(onkey->da9052, DA9052_STATUS_A_REG);
+       if (ret < 0) {
                dev_err(onkey->da9052->dev,
-                       "Failed to read onkey event %d\n", key_stat);
+                       "Failed to read onkey event err=%d\n", ret);
        } else {
                /*
                 * Since interrupt for deassertion of ONKEY pin is not
                 * generated, onkey event state determines the onkey
                 * button state.
                 */
-               key_stat &= DA9052_EVENTB_ENONKEY;
-               input_report_key(onkey->input, KEY_POWER, key_stat);
+               bool pressed = !(ret & DA9052_STATUSA_NONKEY);
+
+               input_report_key(onkey->input, KEY_POWER, pressed);
                input_sync(onkey->input);
-       }
 
-       /*
-        * Interrupt is generated only when the ONKEY pin is asserted.
-        * Hence the deassertion of the pin is simulated through work queue.
-        */
-       if (key_stat)
-               schedule_delayed_work(&onkey->work, msecs_to_jiffies(50));
+               /*
+                * Interrupt is generated only when the ONKEY pin
+                * is asserted. Hence the deassertion of the pin
+                * is simulated through work queue.
+ */ + if (pressed) + schedule_delayed_work(&onkey->work, + msecs_to_jiffies(50)); + } } static void da9052_onkey_work(struct work_struct *work) diff --git a/drivers/input/mouse/cypress_ps2.c b/drivers/input/mouse/cypress_ps2.c index 87095e2f5153..8af34ffe208b 100644 --- a/drivers/input/mouse/cypress_ps2.c +++ b/drivers/input/mouse/cypress_ps2.c @@ -409,7 +409,6 @@ static int cypress_set_input_params(struct input_dev *input, __clear_bit(REL_X, input->relbit); __clear_bit(REL_Y, input->relbit); - __set_bit(INPUT_PROP_BUTTONPAD, input->propbit); __set_bit(EV_KEY, input->evbit); __set_bit(BTN_LEFT, input->keybit); __set_bit(BTN_RIGHT, input->keybit); diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c index 26386f9d2569..d8d49d10f9bb 100644 --- a/drivers/input/mouse/synaptics.c +++ b/drivers/input/mouse/synaptics.c @@ -265,11 +265,22 @@ static int synaptics_identify(struct psmouse *psmouse) * Read touchpad resolution and maximum reported coordinates * Resolution is left zero if touchpad does not support the query */ + +static const int *quirk_min_max; + static int synaptics_resolution(struct psmouse *psmouse) { struct synaptics_data *priv = psmouse->private; unsigned char resp[3]; + if (quirk_min_max) { + priv->x_min = quirk_min_max[0]; + priv->x_max = quirk_min_max[1]; + priv->y_min = quirk_min_max[2]; + priv->y_max = quirk_min_max[3]; + return 0; + } + if (SYN_ID_MAJOR(priv->identity) < 4) return 0; @@ -1485,10 +1496,54 @@ static const struct dmi_system_id olpc_dmi_table[] __initconst = { { } }; +static const struct dmi_system_id min_max_dmi_table[] __initconst = { +#if defined(CONFIG_DMI) + { + /* Lenovo ThinkPad Helix */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad Helix"), + }, + .driver_data = (int []){1024, 5052, 2258, 4832}, + }, + { + /* Lenovo ThinkPad X240 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X240"), + }, + .driver_data = (int []){1232, 5710, 1156, 4696}, + }, + { + /* Lenovo ThinkPad T440s */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T440"), + }, + .driver_data = (int []){1024, 5112, 2024, 4832}, + }, + { + /* Lenovo ThinkPad T540p */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T540"), + }, + .driver_data = (int []){1024, 5056, 2058, 4832}, + }, +#endif + { } +}; + void __init synaptics_module_init(void) { + const struct dmi_system_id *min_max_dmi; + impaired_toshiba_kbc = dmi_check_system(toshiba_dmi_table); broken_olpc_ec = dmi_check_system(olpc_dmi_table); + + min_max_dmi = dmi_first_match(min_max_dmi_table); + if (min_max_dmi) + quirk_min_max = min_max_dmi->driver_data; } static int __synaptics_init(struct psmouse *psmouse, bool absolute_mode) diff --git a/drivers/input/mousedev.c b/drivers/input/mousedev.c index 4c842c320c2e..b604564dec5c 100644 --- a/drivers/input/mousedev.c +++ b/drivers/input/mousedev.c @@ -67,7 +67,6 @@ struct mousedev { struct device dev; struct cdev cdev; bool exist; - bool is_mixdev; struct list_head mixdev_node; bool opened_by_mixdev; @@ -77,6 +76,9 @@ struct mousedev { int old_x[4], old_y[4]; int frac_dx, frac_dy; unsigned long touch; + + int (*open_device)(struct mousedev *mousedev); + void (*close_device)(struct mousedev *mousedev); }; enum mousedev_emul { @@ -116,9 +118,6 @@ static unsigned char mousedev_imex_seq[] = { 0xf3, 200, 0xf3, 200, 0xf3, 80 }; static struct mousedev *mousedev_mix; static 
LIST_HEAD(mousedev_mix_list); -static void mixdev_open_devices(void); -static void mixdev_close_devices(void); - #define fx(i) (mousedev->old_x[(mousedev->pkt_count - (i)) & 03]) #define fy(i) (mousedev->old_y[(mousedev->pkt_count - (i)) & 03]) @@ -428,9 +427,7 @@ static int mousedev_open_device(struct mousedev *mousedev) if (retval) return retval; - if (mousedev->is_mixdev) - mixdev_open_devices(); - else if (!mousedev->exist) + if (!mousedev->exist) retval = -ENODEV; else if (!mousedev->open++) { retval = input_open_device(&mousedev->handle); @@ -446,9 +443,7 @@ static void mousedev_close_device(struct mousedev *mousedev) { mutex_lock(&mousedev->mutex); - if (mousedev->is_mixdev) - mixdev_close_devices(); - else if (mousedev->exist && !--mousedev->open) + if (mousedev->exist && !--mousedev->open) input_close_device(&mousedev->handle); mutex_unlock(&mousedev->mutex); @@ -459,21 +454,29 @@ static void mousedev_close_device(struct mousedev *mousedev) * stream. Note that this function is called with mousedev_mix->mutex * held. */ -static void mixdev_open_devices(void) +static int mixdev_open_devices(struct mousedev *mixdev) { - struct mousedev *mousedev; + int error; + + error = mutex_lock_interruptible(&mixdev->mutex); + if (error) + return error; - if (mousedev_mix->open++) - return; + if (!mixdev->open++) { + struct mousedev *mousedev; - list_for_each_entry(mousedev, &mousedev_mix_list, mixdev_node) { - if (!mousedev->opened_by_mixdev) { - if (mousedev_open_device(mousedev)) - continue; + list_for_each_entry(mousedev, &mousedev_mix_list, mixdev_node) { + if (!mousedev->opened_by_mixdev) { + if (mousedev_open_device(mousedev)) + continue; - mousedev->opened_by_mixdev = true; + mousedev->opened_by_mixdev = true; + } } } + + mutex_unlock(&mixdev->mutex); + return 0; } /* @@ -481,19 +484,22 @@ static void mixdev_open_devices(void) * device. Note that this function is called with mousedev_mix->mutex * held. 
*/ -static void mixdev_close_devices(void) +static void mixdev_close_devices(struct mousedev *mixdev) { - struct mousedev *mousedev; + mutex_lock(&mixdev->mutex); - if (--mousedev_mix->open) - return; + if (!--mixdev->open) { + struct mousedev *mousedev; - list_for_each_entry(mousedev, &mousedev_mix_list, mixdev_node) { - if (mousedev->opened_by_mixdev) { - mousedev->opened_by_mixdev = false; - mousedev_close_device(mousedev); + list_for_each_entry(mousedev, &mousedev_mix_list, mixdev_node) { + if (mousedev->opened_by_mixdev) { + mousedev->opened_by_mixdev = false; + mousedev_close_device(mousedev); + } } } + + mutex_unlock(&mixdev->mutex); } @@ -522,7 +528,7 @@ static int mousedev_release(struct inode *inode, struct file *file) mousedev_detach_client(mousedev, client); kfree(client); - mousedev_close_device(mousedev); + mousedev->close_device(mousedev); return 0; } @@ -550,7 +556,7 @@ static int mousedev_open(struct inode *inode, struct file *file) client->mousedev = mousedev; mousedev_attach_client(mousedev, client); - error = mousedev_open_device(mousedev); + error = mousedev->open_device(mousedev); if (error) goto err_free_client; @@ -861,16 +867,21 @@ static struct mousedev *mousedev_create(struct input_dev *dev, if (mixdev) { dev_set_name(&mousedev->dev, "mice"); + + mousedev->open_device = mixdev_open_devices; + mousedev->close_device = mixdev_close_devices; } else { int dev_no = minor; /* Normalize device number if it falls into legacy range */ if (dev_no < MOUSEDEV_MINOR_BASE + MOUSEDEV_MINORS) dev_no -= MOUSEDEV_MINOR_BASE; dev_set_name(&mousedev->dev, "mouse%d", dev_no); + + mousedev->open_device = mousedev_open_device; + mousedev->close_device = mousedev_close_device; } mousedev->exist = true; - mousedev->is_mixdev = mixdev; mousedev->handle.dev = input_get_device(dev); mousedev->handle.name = dev_name(&mousedev->dev); mousedev->handle.handler = handler; @@ -919,7 +930,7 @@ static void mousedev_destroy(struct mousedev *mousedev) device_del(&mousedev->dev); mousedev_cleanup(mousedev); input_free_minor(MINOR(mousedev->dev.devt)); - if (!mousedev->is_mixdev) + if (mousedev != mousedev_mix) input_unregister_handle(&mousedev->handle); put_device(&mousedev->dev); } diff --git a/drivers/isdn/capi/Kconfig b/drivers/isdn/capi/Kconfig index f04686580040..9816c51eb5c2 100644 --- a/drivers/isdn/capi/Kconfig +++ b/drivers/isdn/capi/Kconfig @@ -16,9 +16,17 @@ config CAPI_TRACE This will increase the size of the kernelcapi module by 20 KB. If unsure, say Y. +config ISDN_CAPI_CAPI20 + tristate "CAPI2.0 /dev/capi support" + help + This option will provide the CAPI 2.0 interface to userspace + applications via /dev/capi20. Applications should use the + standardized libcapi20 to access this functionality. You should say + Y/M here. + config ISDN_CAPI_MIDDLEWARE bool "CAPI2.0 Middleware support" - depends on TTY + depends on ISDN_CAPI_CAPI20 && TTY help This option will enhance the capabilities of the /dev/capi20 interface. It will provide a means of moving a data connection, @@ -26,14 +34,6 @@ config ISDN_CAPI_MIDDLEWARE device. If you want to use pppd with pppdcapiplugin to dial up to your ISP, say Y here. -config ISDN_CAPI_CAPI20 - tristate "CAPI2.0 /dev/capi support" - help - This option will provide the CAPI 2.0 interface to userspace - applications via /dev/capi20. Applications should use the - standardized libcapi20 to access this functionality. You should say - Y/M here. 
- config ISDN_CAPI_CAPIDRV tristate "CAPI2.0 capidrv interface support" depends on ISDN_I4L diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c index 1af70145fab9..074b9c8e4cf0 100644 --- a/drivers/md/dm-cache-target.c +++ b/drivers/md/dm-cache-target.c @@ -979,12 +979,13 @@ static void issue_copy_real(struct dm_cache_migration *mg) int r; struct dm_io_region o_region, c_region; struct cache *cache = mg->cache; + sector_t cblock = from_cblock(mg->cblock); o_region.bdev = cache->origin_dev->bdev; o_region.count = cache->sectors_per_block; c_region.bdev = cache->cache_dev->bdev; - c_region.sector = from_cblock(mg->cblock) * cache->sectors_per_block; + c_region.sector = cblock * cache->sectors_per_block; c_region.count = cache->sectors_per_block; if (mg->writeback || mg->demote) { @@ -2464,20 +2465,18 @@ static int cache_map(struct dm_target *ti, struct bio *bio) bool discarded_block; struct dm_bio_prison_cell *cell; struct policy_result lookup_result; - struct per_bio_data *pb; + struct per_bio_data *pb = init_per_bio_data(bio, pb_data_size); - if (from_oblock(block) > from_oblock(cache->origin_blocks)) { + if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) { /* * This can only occur if the io goes to a partial block at * the end of the origin device. We don't cache these. * Just remap to the origin and carry on. */ - remap_to_origin_clear_discard(cache, bio, block); + remap_to_origin(cache, bio); return DM_MAPIO_REMAPPED; } - pb = init_per_bio_data(bio, pb_data_size); - if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) { defer_bio(cache, bio); return DM_MAPIO_SUBMITTED; diff --git a/drivers/misc/sgi-xp/xpc_uv.c b/drivers/misc/sgi-xp/xpc_uv.c index b9e2000969f0..95c894482fdd 100644 --- a/drivers/misc/sgi-xp/xpc_uv.c +++ b/drivers/misc/sgi-xp/xpc_uv.c @@ -240,7 +240,7 @@ xpc_create_gru_mq_uv(unsigned int mq_size, int cpu, char *irq_name, nid = cpu_to_node(cpu); page = alloc_pages_exact_node(nid, - GFP_KERNEL | __GFP_ZERO | GFP_THISNODE, + GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE, pg_order); if (page == NULL) { dev_err(xpc_part, "xpc_create_gru_mq_uv() failed to alloc %d " diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c index a2c47476804d..e8f133e926aa 100644 --- a/drivers/net/bonding/bond_alb.c +++ b/drivers/net/bonding/bond_alb.c @@ -730,7 +730,7 @@ static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bon client_info->ntt = 0; } - if (!vlan_get_tag(skb, &client_info->vlan_id)) + if (vlan_get_tag(skb, &client_info->vlan_id)) client_info->vlan_id = 0; if (!client_info->assigned) { diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c index c37878432717..298c26509095 100644 --- a/drivers/net/bonding/bond_options.c +++ b/drivers/net/bonding/bond_options.c @@ -121,6 +121,7 @@ static struct bond_opt_value bond_resend_igmp_tbl[] = { static struct bond_opt_value bond_lp_interval_tbl[] = { { "minval", 1, BOND_VALFLAG_MIN | BOND_VALFLAG_DEFAULT}, { "maxval", INT_MAX, BOND_VALFLAG_MAX}, + { NULL, -1, 0}, }; static struct bond_option bond_opts[] = { diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c index 2e45f6ec1bf0..380d24922049 100644 --- a/drivers/net/ethernet/atheros/alx/main.c +++ b/drivers/net/ethernet/atheros/alx/main.c @@ -1248,19 +1248,13 @@ static int alx_probe(struct pci_dev *pdev, const struct pci_device_id *ent) * shared register for the high 32 bits, so only a single, aligned, * 4 GB physical address range can be used 
for descriptors. */ - if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) && - !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) { + if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64))) { dev_dbg(&pdev->dev, "DMA to 64-BIT addresses\n"); } else { - err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); if (err) { - err = dma_set_coherent_mask(&pdev->dev, - DMA_BIT_MASK(32)); - if (err) { - dev_err(&pdev->dev, - "No usable DMA config, aborting\n"); - goto out_pci_disable; - } + dev_err(&pdev->dev, "No usable DMA config, aborting\n"); + goto out_pci_disable; } } diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c index d5c2d3e912e5..422aab27ea1b 100644 --- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c +++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c @@ -2436,7 +2436,7 @@ err_reset: err_register: err_sw_init: err_eeprom: - iounmap(adapter->hw.hw_addr); + pci_iounmap(pdev, adapter->hw.hw_addr); err_init_netdev: err_ioremap: free_netdev(netdev); @@ -2474,7 +2474,7 @@ static void atl1e_remove(struct pci_dev *pdev) unregister_netdev(netdev); atl1e_free_ring_resources(adapter); atl1e_force_ps(&adapter->hw); - iounmap(adapter->hw.hw_addr); + pci_iounmap(pdev, adapter->hw.hw_addr); pci_release_regions(pdev); free_netdev(netdev); pci_disable_device(pdev); diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c index cda25ac45b47..6c9e1c9bdeb8 100644 --- a/drivers/net/ethernet/broadcom/bnx2.c +++ b/drivers/net/ethernet/broadcom/bnx2.c @@ -2507,6 +2507,7 @@ bnx2_fw_sync(struct bnx2 *bp, u32 msg_data, int ack, int silent) bp->fw_wr_seq++; msg_data |= bp->fw_wr_seq; + bp->fw_last_msg = msg_data; bnx2_shmem_wr(bp, BNX2_DRV_MB, msg_data); @@ -4000,8 +4001,23 @@ bnx2_setup_wol(struct bnx2 *bp) wol_msg = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL; } - if (!(bp->flags & BNX2_FLAG_NO_WOL)) - bnx2_fw_sync(bp, BNX2_DRV_MSG_DATA_WAIT3 | wol_msg, 1, 0); + if (!(bp->flags & BNX2_FLAG_NO_WOL)) { + u32 val; + + wol_msg |= BNX2_DRV_MSG_DATA_WAIT3; + if (bp->fw_last_msg || BNX2_CHIP(bp) != BNX2_CHIP_5709) { + bnx2_fw_sync(bp, wol_msg, 1, 0); + return; + } + /* Tell firmware not to power down the PHY yet, otherwise + * the chip will take a long time to respond to MMIO reads. + */ + val = bnx2_shmem_rd(bp, BNX2_PORT_FEATURE); + bnx2_shmem_wr(bp, BNX2_PORT_FEATURE, + val | BNX2_PORT_FEATURE_ASF_ENABLED); + bnx2_fw_sync(bp, wol_msg, 1, 0); + bnx2_shmem_wr(bp, BNX2_PORT_FEATURE, val); + } } @@ -4033,9 +4049,22 @@ bnx2_set_power_state(struct bnx2 *bp, pci_power_t state) if (bp->wol) pci_set_power_state(bp->pdev, PCI_D3hot); - } else { - pci_set_power_state(bp->pdev, PCI_D3hot); + break; + + } + if (!bp->fw_last_msg && BNX2_CHIP(bp) == BNX2_CHIP_5709) { + u32 val; + + /* Tell firmware not to power down the PHY yet, + * otherwise the other port may not respond to + * MMIO reads. + */ + val = bnx2_shmem_rd(bp, BNX2_BC_STATE_CONDITION); + val &= ~BNX2_CONDITION_PM_STATE_MASK; + val |= BNX2_CONDITION_PM_STATE_UNPREP; + bnx2_shmem_wr(bp, BNX2_BC_STATE_CONDITION, val); } + pci_set_power_state(bp->pdev, PCI_D3hot); /* No more memory access after this point until * device is brought back to D0. 
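The bnx2 hunks above record the last handshake message sent to the bootcode (fw_last_msg) and, on 5709 chips that were never sent a WOL message, update the PM-state field in shared memory so the firmware leaves the PHY powered across the D3hot transition. The shared-memory update is a plain read-modify-write of one bit-field; a short sketch of that idiom, reusing the accessor and register names from the diff (not the full patch logic):

/* Tell the bootcode we are entering D3 "unprepared", without
 * disturbing the other condition bits in the shared-memory word.
 */
static void bnx2_pm_state_unprep(struct bnx2 *bp)
{
        u32 val;

        val = bnx2_shmem_rd(bp, BNX2_BC_STATE_CONDITION);
        val &= ~BNX2_CONDITION_PM_STATE_MASK;   /* clear the PM-state field */
        val |= BNX2_CONDITION_PM_STATE_UNPREP;  /* PHY must stay powered */
        bnx2_shmem_wr(bp, BNX2_BC_STATE_CONDITION, val);
}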
diff --git a/drivers/net/ethernet/broadcom/bnx2.h b/drivers/net/ethernet/broadcom/bnx2.h index f1cf2c44e7ed..e341bc366fa5 100644 --- a/drivers/net/ethernet/broadcom/bnx2.h +++ b/drivers/net/ethernet/broadcom/bnx2.h @@ -6900,6 +6900,7 @@ struct bnx2 { u16 fw_wr_seq; u16 fw_drv_pulse_wr_seq; + u32 fw_last_msg; int rx_max_ring; int rx_ring_size; @@ -7406,6 +7407,10 @@ struct bnx2_rv2p_fw_file { #define BNX2_CONDITION_MFW_RUN_NCSI 0x00006000 #define BNX2_CONDITION_MFW_RUN_NONE 0x0000e000 #define BNX2_CONDITION_MFW_RUN_MASK 0x0000e000 +#define BNX2_CONDITION_PM_STATE_MASK 0x00030000 +#define BNX2_CONDITION_PM_STATE_FULL 0x00030000 +#define BNX2_CONDITION_PM_STATE_PREP 0x00020000 +#define BNX2_CONDITION_PM_STATE_UNPREP 0x00010000 #define BNX2_BC_STATE_DEBUG_CMD 0x1dc #define BNX2_BC_STATE_BC_DBG_CMD_SIGNATURE 0x42440000 diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c index fcf9105a5476..09f3fefcbf9c 100644 --- a/drivers/net/ethernet/broadcom/cnic.c +++ b/drivers/net/ethernet/broadcom/cnic.c @@ -1,6 +1,6 @@ /* cnic.c: Broadcom CNIC core network driver. * - * Copyright (c) 2006-2013 Broadcom Corporation + * Copyright (c) 2006-2014 Broadcom Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by @@ -342,7 +342,7 @@ static int cnic_send_nlmsg(struct cnic_local *cp, u32 type, while (retry < 3) { rc = 0; rcu_read_lock(); - ulp_ops = rcu_dereference(cnic_ulp_tbl[CNIC_ULP_ISCSI]); + ulp_ops = rcu_dereference(cp->ulp_ops[CNIC_ULP_ISCSI]); if (ulp_ops) rc = ulp_ops->iscsi_nl_send_msg( cp->ulp_handle[CNIC_ULP_ISCSI], @@ -726,7 +726,7 @@ static void cnic_free_dma(struct cnic_dev *dev, struct cnic_dma *dma) for (i = 0; i < dma->num_pages; i++) { if (dma->pg_arr[i]) { - dma_free_coherent(&dev->pcidev->dev, BNX2_PAGE_SIZE, + dma_free_coherent(&dev->pcidev->dev, CNIC_PAGE_SIZE, dma->pg_arr[i], dma->pg_map_arr[i]); dma->pg_arr[i] = NULL; } @@ -785,7 +785,7 @@ static int cnic_alloc_dma(struct cnic_dev *dev, struct cnic_dma *dma, for (i = 0; i < pages; i++) { dma->pg_arr[i] = dma_alloc_coherent(&dev->pcidev->dev, - BNX2_PAGE_SIZE, + CNIC_PAGE_SIZE, &dma->pg_map_arr[i], GFP_ATOMIC); if (dma->pg_arr[i] == NULL) @@ -794,8 +794,8 @@ static int cnic_alloc_dma(struct cnic_dev *dev, struct cnic_dma *dma, if (!use_pg_tbl) return 0; - dma->pgtbl_size = ((pages * 8) + BNX2_PAGE_SIZE - 1) & - ~(BNX2_PAGE_SIZE - 1); + dma->pgtbl_size = ((pages * 8) + CNIC_PAGE_SIZE - 1) & + ~(CNIC_PAGE_SIZE - 1); dma->pgtbl = dma_alloc_coherent(&dev->pcidev->dev, dma->pgtbl_size, &dma->pgtbl_map, GFP_ATOMIC); if (dma->pgtbl == NULL) @@ -900,8 +900,8 @@ static int cnic_alloc_context(struct cnic_dev *dev) if (BNX2_CHIP(cp) == BNX2_CHIP_5709) { int i, k, arr_size; - cp->ctx_blk_size = BNX2_PAGE_SIZE; - cp->cids_per_blk = BNX2_PAGE_SIZE / 128; + cp->ctx_blk_size = CNIC_PAGE_SIZE; + cp->cids_per_blk = CNIC_PAGE_SIZE / 128; arr_size = BNX2_MAX_CID / cp->cids_per_blk * sizeof(struct cnic_ctx); cp->ctx_arr = kzalloc(arr_size, GFP_KERNEL); @@ -933,7 +933,7 @@ static int cnic_alloc_context(struct cnic_dev *dev) for (i = 0; i < cp->ctx_blks; i++) { cp->ctx_arr[i].ctx = dma_alloc_coherent(&dev->pcidev->dev, - BNX2_PAGE_SIZE, + CNIC_PAGE_SIZE, &cp->ctx_arr[i].mapping, GFP_KERNEL); if (cp->ctx_arr[i].ctx == NULL) @@ -1013,7 +1013,7 @@ static int __cnic_alloc_uio_rings(struct cnic_uio_dev *udev, int pages) if (udev->l2_ring) return 0; - udev->l2_ring_size = pages * BNX2_PAGE_SIZE; + udev->l2_ring_size = pages * 
CNIC_PAGE_SIZE; udev->l2_ring = dma_alloc_coherent(&udev->pdev->dev, udev->l2_ring_size, &udev->l2_ring_map, GFP_KERNEL | __GFP_COMP); @@ -1021,7 +1021,7 @@ static int __cnic_alloc_uio_rings(struct cnic_uio_dev *udev, int pages) return -ENOMEM; udev->l2_buf_size = (cp->l2_rx_ring_size + 1) * cp->l2_single_buf_size; - udev->l2_buf_size = PAGE_ALIGN(udev->l2_buf_size); + udev->l2_buf_size = CNIC_PAGE_ALIGN(udev->l2_buf_size); udev->l2_buf = dma_alloc_coherent(&udev->pdev->dev, udev->l2_buf_size, &udev->l2_buf_map, GFP_KERNEL | __GFP_COMP); @@ -1102,7 +1102,7 @@ static int cnic_init_uio(struct cnic_dev *dev) uinfo->mem[0].size = MB_GET_CID_ADDR(TX_TSS_CID + TX_MAX_TSS_RINGS + 1); uinfo->mem[1].addr = (unsigned long) cp->status_blk.gen & - PAGE_MASK; + CNIC_PAGE_MASK; if (cp->ethdev->drv_state & CNIC_DRV_STATE_USING_MSIX) uinfo->mem[1].size = BNX2_SBLK_MSIX_ALIGN_SIZE * 9; else @@ -1113,7 +1113,7 @@ static int cnic_init_uio(struct cnic_dev *dev) uinfo->mem[0].size = pci_resource_len(dev->pcidev, 0); uinfo->mem[1].addr = (unsigned long) cp->bnx2x_def_status_blk & - PAGE_MASK; + CNIC_PAGE_MASK; uinfo->mem[1].size = sizeof(*cp->bnx2x_def_status_blk); uinfo->name = "bnx2x_cnic"; @@ -1267,14 +1267,14 @@ static int cnic_alloc_bnx2x_resc(struct cnic_dev *dev) for (i = MAX_ISCSI_TBL_SZ; i < cp->max_cid_space; i++) cp->ctx_tbl[i].ulp_proto_id = CNIC_ULP_FCOE; - pages = PAGE_ALIGN(cp->max_cid_space * CNIC_KWQ16_DATA_SIZE) / - PAGE_SIZE; + pages = CNIC_PAGE_ALIGN(cp->max_cid_space * CNIC_KWQ16_DATA_SIZE) / + CNIC_PAGE_SIZE; ret = cnic_alloc_dma(dev, kwq_16_dma, pages, 0); if (ret) return -ENOMEM; - n = PAGE_SIZE / CNIC_KWQ16_DATA_SIZE; + n = CNIC_PAGE_SIZE / CNIC_KWQ16_DATA_SIZE; for (i = 0, j = 0; i < cp->max_cid_space; i++) { long off = CNIC_KWQ16_DATA_SIZE * (i % n); @@ -1296,7 +1296,7 @@ static int cnic_alloc_bnx2x_resc(struct cnic_dev *dev) goto error; } - pages = PAGE_ALIGN(BNX2X_ISCSI_GLB_BUF_SIZE) / PAGE_SIZE; + pages = CNIC_PAGE_ALIGN(BNX2X_ISCSI_GLB_BUF_SIZE) / CNIC_PAGE_SIZE; ret = cnic_alloc_dma(dev, &cp->gbl_buf_info, pages, 0); if (ret) goto error; @@ -1466,8 +1466,8 @@ static int cnic_bnx2x_iscsi_init1(struct cnic_dev *dev, struct kwqe *kwqe) cp->r2tq_size = cp->num_iscsi_tasks * BNX2X_ISCSI_MAX_PENDING_R2TS * BNX2X_ISCSI_R2TQE_SIZE; cp->hq_size = cp->num_ccells * BNX2X_ISCSI_HQ_BD_SIZE; - pages = PAGE_ALIGN(cp->hq_size) / PAGE_SIZE; - hq_bds = pages * (PAGE_SIZE / BNX2X_ISCSI_HQ_BD_SIZE); + pages = CNIC_PAGE_ALIGN(cp->hq_size) / CNIC_PAGE_SIZE; + hq_bds = pages * (CNIC_PAGE_SIZE / BNX2X_ISCSI_HQ_BD_SIZE); cp->num_cqs = req1->num_cqs; if (!dev->max_iscsi_conn) @@ -1477,9 +1477,9 @@ static int cnic_bnx2x_iscsi_init1(struct cnic_dev *dev, struct kwqe *kwqe) CNIC_WR16(dev, BAR_TSTRORM_INTMEM + TSTORM_ISCSI_RQ_SIZE_OFFSET(pfid), req1->rq_num_wqes); CNIC_WR16(dev, BAR_TSTRORM_INTMEM + TSTORM_ISCSI_PAGE_SIZE_OFFSET(pfid), - PAGE_SIZE); + CNIC_PAGE_SIZE); CNIC_WR8(dev, BAR_TSTRORM_INTMEM + - TSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfid), PAGE_SHIFT); + TSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfid), CNIC_PAGE_BITS); CNIC_WR16(dev, BAR_TSTRORM_INTMEM + TSTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfid), req1->num_tasks_per_conn); @@ -1489,9 +1489,9 @@ static int cnic_bnx2x_iscsi_init1(struct cnic_dev *dev, struct kwqe *kwqe) USTORM_ISCSI_RQ_BUFFER_SIZE_OFFSET(pfid), req1->rq_buffer_size); CNIC_WR16(dev, BAR_USTRORM_INTMEM + USTORM_ISCSI_PAGE_SIZE_OFFSET(pfid), - PAGE_SIZE); + CNIC_PAGE_SIZE); CNIC_WR8(dev, BAR_USTRORM_INTMEM + - USTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfid), PAGE_SHIFT); + 
USTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfid), CNIC_PAGE_BITS); CNIC_WR16(dev, BAR_USTRORM_INTMEM + USTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfid), req1->num_tasks_per_conn); @@ -1504,9 +1504,9 @@ static int cnic_bnx2x_iscsi_init1(struct cnic_dev *dev, struct kwqe *kwqe) /* init Xstorm RAM */ CNIC_WR16(dev, BAR_XSTRORM_INTMEM + XSTORM_ISCSI_PAGE_SIZE_OFFSET(pfid), - PAGE_SIZE); + CNIC_PAGE_SIZE); CNIC_WR8(dev, BAR_XSTRORM_INTMEM + - XSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfid), PAGE_SHIFT); + XSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfid), CNIC_PAGE_BITS); CNIC_WR16(dev, BAR_XSTRORM_INTMEM + XSTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfid), req1->num_tasks_per_conn); @@ -1519,9 +1519,9 @@ static int cnic_bnx2x_iscsi_init1(struct cnic_dev *dev, struct kwqe *kwqe) /* init Cstorm RAM */ CNIC_WR16(dev, BAR_CSTRORM_INTMEM + CSTORM_ISCSI_PAGE_SIZE_OFFSET(pfid), - PAGE_SIZE); + CNIC_PAGE_SIZE); CNIC_WR8(dev, BAR_CSTRORM_INTMEM + - CSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfid), PAGE_SHIFT); + CSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfid), CNIC_PAGE_BITS); CNIC_WR16(dev, BAR_CSTRORM_INTMEM + CSTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfid), req1->num_tasks_per_conn); @@ -1623,18 +1623,18 @@ static int cnic_alloc_bnx2x_conn_resc(struct cnic_dev *dev, u32 l5_cid) } ctx->cid = cid; - pages = PAGE_ALIGN(cp->task_array_size) / PAGE_SIZE; + pages = CNIC_PAGE_ALIGN(cp->task_array_size) / CNIC_PAGE_SIZE; ret = cnic_alloc_dma(dev, &iscsi->task_array_info, pages, 1); if (ret) goto error; - pages = PAGE_ALIGN(cp->r2tq_size) / PAGE_SIZE; + pages = CNIC_PAGE_ALIGN(cp->r2tq_size) / CNIC_PAGE_SIZE; ret = cnic_alloc_dma(dev, &iscsi->r2tq_info, pages, 1); if (ret) goto error; - pages = PAGE_ALIGN(cp->hq_size) / PAGE_SIZE; + pages = CNIC_PAGE_ALIGN(cp->hq_size) / CNIC_PAGE_SIZE; ret = cnic_alloc_dma(dev, &iscsi->hq_info, pages, 1); if (ret) goto error; @@ -1760,7 +1760,7 @@ static int cnic_setup_bnx2x_ctx(struct cnic_dev *dev, struct kwqe *wqes[], ictx->tstorm_st_context.iscsi.hdr_bytes_2_fetch = ISCSI_HEADER_SIZE; /* TSTORM requires the base address of RQ DB & not PTE */ ictx->tstorm_st_context.iscsi.rq_db_phy_addr.lo = - req2->rq_page_table_addr_lo & PAGE_MASK; + req2->rq_page_table_addr_lo & CNIC_PAGE_MASK; ictx->tstorm_st_context.iscsi.rq_db_phy_addr.hi = req2->rq_page_table_addr_hi; ictx->tstorm_st_context.iscsi.iscsi_conn_id = req1->iscsi_conn_id; @@ -1842,7 +1842,7 @@ static int cnic_setup_bnx2x_ctx(struct cnic_dev *dev, struct kwqe *wqes[], /* CSTORM and USTORM initialization is different, CSTORM requires * CQ DB base & not PTE addr */ ictx->cstorm_st_context.cq_db_base.lo = - req1->cq_page_table_addr_lo & PAGE_MASK; + req1->cq_page_table_addr_lo & CNIC_PAGE_MASK; ictx->cstorm_st_context.cq_db_base.hi = req1->cq_page_table_addr_hi; ictx->cstorm_st_context.iscsi_conn_id = req1->iscsi_conn_id; ictx->cstorm_st_context.cq_proc_en_bit_map = (1 << cp->num_cqs) - 1; @@ -2911,7 +2911,7 @@ static int cnic_l2_completion(struct cnic_local *cp) u16 hw_cons, sw_cons; struct cnic_uio_dev *udev = cp->udev; union eth_rx_cqe *cqe, *cqe_ring = (union eth_rx_cqe *) - (udev->l2_ring + (2 * BNX2_PAGE_SIZE)); + (udev->l2_ring + (2 * CNIC_PAGE_SIZE)); u32 cmd; int comp = 0; @@ -3244,7 +3244,8 @@ static int cnic_copy_ulp_stats(struct cnic_dev *dev, int ulp_type) int rc; mutex_lock(&cnic_lock); - ulp_ops = cnic_ulp_tbl_prot(ulp_type); + ulp_ops = rcu_dereference_protected(cp->ulp_ops[ulp_type], + lockdep_is_held(&cnic_lock)); if (ulp_ops && ulp_ops->cnic_get_stats) rc = ulp_ops->cnic_get_stats(cp->ulp_handle[ulp_type]); else @@ -4384,7 +4385,7 @@ static int cnic_setup_5709_context(struct 
cnic_dev *dev, int valid) u32 idx = cp->ctx_arr[i].cid / cp->cids_per_blk; u32 val; - memset(cp->ctx_arr[i].ctx, 0, BNX2_PAGE_SIZE); + memset(cp->ctx_arr[i].ctx, 0, CNIC_PAGE_SIZE); CNIC_WR(dev, BNX2_CTX_HOST_PAGE_TBL_DATA0, (cp->ctx_arr[i].mapping & 0xffffffff) | valid_bit); @@ -4628,7 +4629,7 @@ static void cnic_init_bnx2_rx_ring(struct cnic_dev *dev) val = BNX2_L2CTX_L2_STATUSB_NUM(sb_id); cnic_ctx_wr(dev, cid_addr, BNX2_L2CTX_HOST_BDIDX, val); - rxbd = udev->l2_ring + BNX2_PAGE_SIZE; + rxbd = udev->l2_ring + CNIC_PAGE_SIZE; for (i = 0; i < BNX2_MAX_RX_DESC_CNT; i++, rxbd++) { dma_addr_t buf_map; int n = (i % cp->l2_rx_ring_size) + 1; @@ -4639,11 +4640,11 @@ static void cnic_init_bnx2_rx_ring(struct cnic_dev *dev) rxbd->rx_bd_haddr_hi = (u64) buf_map >> 32; rxbd->rx_bd_haddr_lo = (u64) buf_map & 0xffffffff; } - val = (u64) (ring_map + BNX2_PAGE_SIZE) >> 32; + val = (u64) (ring_map + CNIC_PAGE_SIZE) >> 32; cnic_ctx_wr(dev, cid_addr, BNX2_L2CTX_NX_BDHADDR_HI, val); rxbd->rx_bd_haddr_hi = val; - val = (u64) (ring_map + BNX2_PAGE_SIZE) & 0xffffffff; + val = (u64) (ring_map + CNIC_PAGE_SIZE) & 0xffffffff; cnic_ctx_wr(dev, cid_addr, BNX2_L2CTX_NX_BDHADDR_LO, val); rxbd->rx_bd_haddr_lo = val; @@ -4709,10 +4710,10 @@ static int cnic_start_bnx2_hw(struct cnic_dev *dev) val = CNIC_RD(dev, BNX2_MQ_CONFIG); val &= ~BNX2_MQ_CONFIG_KNL_BYP_BLK_SIZE; - if (BNX2_PAGE_BITS > 12) + if (CNIC_PAGE_BITS > 12) val |= (12 - 8) << 4; else - val |= (BNX2_PAGE_BITS - 8) << 4; + val |= (CNIC_PAGE_BITS - 8) << 4; CNIC_WR(dev, BNX2_MQ_CONFIG, val); @@ -4742,13 +4743,13 @@ static int cnic_start_bnx2_hw(struct cnic_dev *dev) /* Initialize the kernel work queue context. */ val = KRNLQ_TYPE_TYPE_KRNLQ | KRNLQ_SIZE_TYPE_SIZE | - (BNX2_PAGE_BITS - 8) | KRNLQ_FLAGS_QE_SELF_SEQ; + (CNIC_PAGE_BITS - 8) | KRNLQ_FLAGS_QE_SELF_SEQ; cnic_ctx_wr(dev, kwq_cid_addr, L5_KRNLQ_TYPE, val); - val = (BNX2_PAGE_SIZE / sizeof(struct kwqe) - 1) << 16; + val = (CNIC_PAGE_SIZE / sizeof(struct kwqe) - 1) << 16; cnic_ctx_wr(dev, kwq_cid_addr, L5_KRNLQ_QE_SELF_SEQ_MAX, val); - val = ((BNX2_PAGE_SIZE / sizeof(struct kwqe)) << 16) | KWQ_PAGE_CNT; + val = ((CNIC_PAGE_SIZE / sizeof(struct kwqe)) << 16) | KWQ_PAGE_CNT; cnic_ctx_wr(dev, kwq_cid_addr, L5_KRNLQ_PGTBL_NPAGES, val); val = (u32) ((u64) cp->kwq_info.pgtbl_map >> 32); @@ -4768,13 +4769,13 @@ static int cnic_start_bnx2_hw(struct cnic_dev *dev) /* Initialize the kernel complete queue context. 
*/ val = KRNLQ_TYPE_TYPE_KRNLQ | KRNLQ_SIZE_TYPE_SIZE | - (BNX2_PAGE_BITS - 8) | KRNLQ_FLAGS_QE_SELF_SEQ; + (CNIC_PAGE_BITS - 8) | KRNLQ_FLAGS_QE_SELF_SEQ; cnic_ctx_wr(dev, kcq_cid_addr, L5_KRNLQ_TYPE, val); - val = (BNX2_PAGE_SIZE / sizeof(struct kcqe) - 1) << 16; + val = (CNIC_PAGE_SIZE / sizeof(struct kcqe) - 1) << 16; cnic_ctx_wr(dev, kcq_cid_addr, L5_KRNLQ_QE_SELF_SEQ_MAX, val); - val = ((BNX2_PAGE_SIZE / sizeof(struct kcqe)) << 16) | KCQ_PAGE_CNT; + val = ((CNIC_PAGE_SIZE / sizeof(struct kcqe)) << 16) | KCQ_PAGE_CNT; cnic_ctx_wr(dev, kcq_cid_addr, L5_KRNLQ_PGTBL_NPAGES, val); val = (u32) ((u64) cp->kcq1.dma.pgtbl_map >> 32); @@ -4918,7 +4919,7 @@ static void cnic_init_bnx2x_tx_ring(struct cnic_dev *dev, u32 cli = cp->ethdev->iscsi_l2_client_id; u32 val; - memset(txbd, 0, BNX2_PAGE_SIZE); + memset(txbd, 0, CNIC_PAGE_SIZE); buf_map = udev->l2_buf_map; for (i = 0; i < BNX2_MAX_TX_DESC_CNT; i += 3, txbd += 3) { @@ -4978,9 +4979,9 @@ static void cnic_init_bnx2x_rx_ring(struct cnic_dev *dev, struct bnx2x *bp = netdev_priv(dev->netdev); struct cnic_uio_dev *udev = cp->udev; struct eth_rx_bd *rxbd = (struct eth_rx_bd *) (udev->l2_ring + - BNX2_PAGE_SIZE); + CNIC_PAGE_SIZE); struct eth_rx_cqe_next_page *rxcqe = (struct eth_rx_cqe_next_page *) - (udev->l2_ring + (2 * BNX2_PAGE_SIZE)); + (udev->l2_ring + (2 * CNIC_PAGE_SIZE)); struct host_sp_status_block *sb = cp->bnx2x_def_status_blk; int i; u32 cli = cp->ethdev->iscsi_l2_client_id; @@ -5004,20 +5005,20 @@ static void cnic_init_bnx2x_rx_ring(struct cnic_dev *dev, rxbd->addr_lo = cpu_to_le32(buf_map & 0xffffffff); } - val = (u64) (ring_map + BNX2_PAGE_SIZE) >> 32; + val = (u64) (ring_map + CNIC_PAGE_SIZE) >> 32; rxbd->addr_hi = cpu_to_le32(val); data->rx.bd_page_base.hi = cpu_to_le32(val); - val = (u64) (ring_map + BNX2_PAGE_SIZE) & 0xffffffff; + val = (u64) (ring_map + CNIC_PAGE_SIZE) & 0xffffffff; rxbd->addr_lo = cpu_to_le32(val); data->rx.bd_page_base.lo = cpu_to_le32(val); rxcqe += BNX2X_MAX_RCQ_DESC_CNT; - val = (u64) (ring_map + (2 * BNX2_PAGE_SIZE)) >> 32; + val = (u64) (ring_map + (2 * CNIC_PAGE_SIZE)) >> 32; rxcqe->addr_hi = cpu_to_le32(val); data->rx.cqe_page_base.hi = cpu_to_le32(val); - val = (u64) (ring_map + (2 * BNX2_PAGE_SIZE)) & 0xffffffff; + val = (u64) (ring_map + (2 * CNIC_PAGE_SIZE)) & 0xffffffff; rxcqe->addr_lo = cpu_to_le32(val); data->rx.cqe_page_base.lo = cpu_to_le32(val); @@ -5265,8 +5266,8 @@ static void cnic_shutdown_rings(struct cnic_dev *dev) msleep(10); } clear_bit(CNIC_LCL_FL_RINGS_INITED, &cp->cnic_local_flags); - rx_ring = udev->l2_ring + BNX2_PAGE_SIZE; - memset(rx_ring, 0, BNX2_PAGE_SIZE); + rx_ring = udev->l2_ring + CNIC_PAGE_SIZE; + memset(rx_ring, 0, CNIC_PAGE_SIZE); } static int cnic_register_netdev(struct cnic_dev *dev) diff --git a/drivers/net/ethernet/broadcom/cnic.h b/drivers/net/ethernet/broadcom/cnic.h index 0d6b13f854d9..d535ae4228b4 100644 --- a/drivers/net/ethernet/broadcom/cnic.h +++ b/drivers/net/ethernet/broadcom/cnic.h @@ -1,6 +1,6 @@ /* cnic.h: Broadcom CNIC core network driver. 
* - * Copyright (c) 2006-2013 Broadcom Corporation + * Copyright (c) 2006-2014 Broadcom Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by diff --git a/drivers/net/ethernet/broadcom/cnic_defs.h b/drivers/net/ethernet/broadcom/cnic_defs.h index 95a8e4b11c9f..dcbca6997e8f 100644 --- a/drivers/net/ethernet/broadcom/cnic_defs.h +++ b/drivers/net/ethernet/broadcom/cnic_defs.h @@ -1,7 +1,7 @@ /* cnic.c: Broadcom CNIC core network driver. * - * Copyright (c) 2006-2013 Broadcom Corporation + * Copyright (c) 2006-2014 Broadcom Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by diff --git a/drivers/net/ethernet/broadcom/cnic_if.h b/drivers/net/ethernet/broadcom/cnic_if.h index 8cf6b1926069..5f4d5573a73d 100644 --- a/drivers/net/ethernet/broadcom/cnic_if.h +++ b/drivers/net/ethernet/broadcom/cnic_if.h @@ -1,6 +1,6 @@ /* cnic_if.h: Broadcom CNIC core network driver. * - * Copyright (c) 2006-2013 Broadcom Corporation + * Copyright (c) 2006-2014 Broadcom Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by @@ -14,8 +14,8 @@ #include "bnx2x/bnx2x_mfw_req.h" -#define CNIC_MODULE_VERSION "2.5.19" -#define CNIC_MODULE_RELDATE "December 19, 2013" +#define CNIC_MODULE_VERSION "2.5.20" +#define CNIC_MODULE_RELDATE "March 14, 2014" #define CNIC_ULP_RDMA 0 #define CNIC_ULP_ISCSI 1 @@ -24,6 +24,16 @@ #define MAX_CNIC_ULP_TYPE_EXT 3 #define MAX_CNIC_ULP_TYPE 4 +/* Use CPU native page size up to 16K for cnic ring sizes. */ +#if (PAGE_SHIFT > 14) +#define CNIC_PAGE_BITS 14 +#else +#define CNIC_PAGE_BITS PAGE_SHIFT +#endif +#define CNIC_PAGE_SIZE (1 << (CNIC_PAGE_BITS)) +#define CNIC_PAGE_ALIGN(addr) ALIGN(addr, CNIC_PAGE_SIZE) +#define CNIC_PAGE_MASK (~((CNIC_PAGE_SIZE) - 1)) + struct kwqe { u32 kwqe_op_flag; diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index 3b6d0ba86c71..70a225c8df5c 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -17649,8 +17649,6 @@ static int tg3_init_one(struct pci_dev *pdev, tg3_init_bufmgr_config(tp); - features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX; - /* 5700 B0 chips do not support checksumming correctly due * to hardware bugs. 
*/ @@ -17682,7 +17680,8 @@ static int tg3_init_one(struct pci_dev *pdev, features |= NETIF_F_TSO_ECN; } - dev->features |= features; + dev->features |= features | NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_CTAG_RX; dev->vlan_features |= features; /* diff --git a/drivers/net/ethernet/brocade/bna/bfa_ioc.c b/drivers/net/ethernet/brocade/bna/bfa_ioc.c index 1803c3959044..354ae9792bad 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_ioc.c +++ b/drivers/net/ethernet/brocade/bna/bfa_ioc.c @@ -1704,7 +1704,7 @@ bfa_flash_sem_get(void __iomem *bar) while (!bfa_raw_sem_get(bar)) { if (--n <= 0) return BFA_STATUS_BADFLASH; - udelay(10000); + mdelay(10); } return BFA_STATUS_OK; } diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c index 3190d38e16fb..d0c38e01e99f 100644 --- a/drivers/net/ethernet/cadence/macb.c +++ b/drivers/net/ethernet/cadence/macb.c @@ -632,11 +632,16 @@ static void gem_rx_refill(struct macb *bp) "Unable to allocate sk_buff\n"); break; } - bp->rx_skbuff[entry] = skb; /* now fill corresponding descriptor entry */ paddr = dma_map_single(&bp->pdev->dev, skb->data, bp->rx_buffer_size, DMA_FROM_DEVICE); + if (dma_mapping_error(&bp->pdev->dev, paddr)) { + dev_kfree_skb(skb); + break; + } + + bp->rx_skbuff[entry] = skb; if (entry == RX_RING_SIZE - 1) paddr |= MACB_BIT(RX_WRAP); @@ -725,7 +730,7 @@ static int gem_rx(struct macb *bp, int budget) skb_put(skb, len); addr = MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, addr)); dma_unmap_single(&bp->pdev->dev, addr, - len, DMA_FROM_DEVICE); + bp->rx_buffer_size, DMA_FROM_DEVICE); skb->protocol = eth_type_trans(skb, bp->dev); skb_checksum_none_assert(skb); @@ -1036,11 +1041,15 @@ static int macb_start_xmit(struct sk_buff *skb, struct net_device *dev) } entry = macb_tx_ring_wrap(bp->tx_head); - bp->tx_head++; netdev_vdbg(bp->dev, "Allocated ring entry %u\n", entry); mapping = dma_map_single(&bp->pdev->dev, skb->data, len, DMA_TO_DEVICE); + if (dma_mapping_error(&bp->pdev->dev, mapping)) { + kfree_skb(skb); + goto unlock; + } + bp->tx_head++; tx_skb = &bp->tx_skb[entry]; tx_skb->skb = skb; tx_skb->mapping = mapping; @@ -1066,6 +1075,7 @@ static int macb_start_xmit(struct sk_buff *skb, struct net_device *dev) if (CIRC_SPACE(bp->tx_head, bp->tx_tail, TX_RING_SIZE) < 1) netif_stop_queue(dev); +unlock: spin_unlock_irqrestore(&bp->lock, flags); return NETDEV_TX_OK; diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index 479a7cba45c0..03a351300013 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -528,13 +528,6 @@ fec_restart(struct net_device *ndev, int duplex) /* Clear any outstanding interrupt. */ writel(0xffc00000, fep->hwp + FEC_IEVENT); - /* Setup multicast filter. */ - set_multicast_list(ndev); -#ifndef CONFIG_M5272 - writel(0, fep->hwp + FEC_HASH_TABLE_HIGH); - writel(0, fep->hwp + FEC_HASH_TABLE_LOW); -#endif - /* Set maximum receive buffer size. */ writel(PKT_MAXBLR_SIZE, fep->hwp + FEC_R_BUFF_SIZE); @@ -655,6 +648,13 @@ fec_restart(struct net_device *ndev, int duplex) writel(rcntl, fep->hwp + FEC_R_CNTRL); + /* Setup multicast filter. 
*/ + set_multicast_list(ndev); +#ifndef CONFIG_M5272 + writel(0, fep->hwp + FEC_HASH_TABLE_HIGH); + writel(0, fep->hwp + FEC_HASH_TABLE_LOW); +#endif + if (id_entry->driver_data & FEC_QUIRK_ENET_MAC) { /* enable ENET endian swap */ ecntl |= (1 << 8); diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c index 4be971590461..1fc8334fc181 100644 --- a/drivers/net/ethernet/ibm/ibmveth.c +++ b/drivers/net/ethernet/ibm/ibmveth.c @@ -522,10 +522,21 @@ retry: return rc; } +static u64 ibmveth_encode_mac_addr(u8 *mac) +{ + int i; + u64 encoded = 0; + + for (i = 0; i < ETH_ALEN; i++) + encoded = (encoded << 8) | mac[i]; + + return encoded; +} + static int ibmveth_open(struct net_device *netdev) { struct ibmveth_adapter *adapter = netdev_priv(netdev); - u64 mac_address = 0; + u64 mac_address; int rxq_entries = 1; unsigned long lpar_rc; int rc; @@ -579,8 +590,7 @@ static int ibmveth_open(struct net_device *netdev) adapter->rx_queue.num_slots = rxq_entries; adapter->rx_queue.toggle = 1; - memcpy(&mac_address, netdev->dev_addr, netdev->addr_len); - mac_address = mac_address >> 16; + mac_address = ibmveth_encode_mac_addr(netdev->dev_addr); rxq_desc.fields.flags_len = IBMVETH_BUF_VALID | adapter->rx_queue.queue_len; @@ -1183,8 +1193,8 @@ static void ibmveth_set_multicast_list(struct net_device *netdev) /* add the addresses to the filter table */ netdev_for_each_mc_addr(ha, netdev) { /* add the multicast address to the filter table */ - unsigned long mcast_addr = 0; - memcpy(((char *)&mcast_addr)+2, ha->addr, ETH_ALEN); + u64 mcast_addr; + mcast_addr = ibmveth_encode_mac_addr(ha->addr); lpar_rc = h_multicast_ctrl(adapter->vdev->unit_address, IbmVethMcastAddFilter, mcast_addr); @@ -1372,9 +1382,6 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id) netif_napi_add(netdev, &adapter->napi, ibmveth_poll, 16); - adapter->mac_addr = 0; - memcpy(&adapter->mac_addr, mac_addr_p, ETH_ALEN); - netdev->irq = dev->irq; netdev->netdev_ops = &ibmveth_netdev_ops; netdev->ethtool_ops = &netdev_ethtool_ops; @@ -1383,7 +1390,7 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id) NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; netdev->features |= netdev->hw_features; - memcpy(netdev->dev_addr, &adapter->mac_addr, netdev->addr_len); + memcpy(netdev->dev_addr, mac_addr_p, ETH_ALEN); for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) { struct kobject *kobj = &adapter->rx_buff_pool[i].kobj; diff --git a/drivers/net/ethernet/ibm/ibmveth.h b/drivers/net/ethernet/ibm/ibmveth.h index 451ba7949e15..1f37499d4398 100644 --- a/drivers/net/ethernet/ibm/ibmveth.h +++ b/drivers/net/ethernet/ibm/ibmveth.h @@ -138,7 +138,6 @@ struct ibmveth_adapter { struct napi_struct napi; struct net_device_stats stats; unsigned int mcastFilterSize; - unsigned long mac_addr; void * buffer_list_addr; void * filter_list_addr; dma_addr_t buffer_list_dma; diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index f418f4f20f94..8d76fca7fde7 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -22,6 +22,7 @@ #include <linux/interrupt.h> #include <net/ip.h> #include <net/ipv6.h> +#include <linux/io.h> #include <linux/of.h> #include <linux/of_irq.h> #include <linux/of_mdio.h> @@ -88,8 +89,9 @@ #define MVNETA_TX_IN_PRGRS BIT(1) #define MVNETA_TX_FIFO_EMPTY BIT(8) #define MVNETA_RX_MIN_FRAME_SIZE 0x247c -#define MVNETA_SGMII_SERDES_CFG 0x24A0 +#define MVNETA_SERDES_CFG 0x24A0 #define MVNETA_SGMII_SERDES_PROTO 0x0cc7 
+#define MVNETA_RGMII_SERDES_PROTO 0x0667 #define MVNETA_TYPE_PRIO 0x24bc #define MVNETA_FORCE_UNI BIT(21) #define MVNETA_TXQ_CMD_1 0x24e4 @@ -161,7 +163,7 @@ #define MVNETA_GMAC_MAX_RX_SIZE_MASK 0x7ffc #define MVNETA_GMAC0_PORT_ENABLE BIT(0) #define MVNETA_GMAC_CTRL_2 0x2c08 -#define MVNETA_GMAC2_PSC_ENABLE BIT(3) +#define MVNETA_GMAC2_PCS_ENABLE BIT(3) #define MVNETA_GMAC2_PORT_RGMII BIT(4) #define MVNETA_GMAC2_PORT_RESET BIT(6) #define MVNETA_GMAC_STATUS 0x2c10 @@ -710,35 +712,6 @@ static void mvneta_rxq_bm_disable(struct mvneta_port *pp, mvreg_write(pp, MVNETA_RXQ_CONFIG_REG(rxq->id), val); } - - -/* Sets the RGMII Enable bit (RGMIIEn) in port MAC control register */ -static void mvneta_gmac_rgmii_set(struct mvneta_port *pp, int enable) -{ - u32 val; - - val = mvreg_read(pp, MVNETA_GMAC_CTRL_2); - - if (enable) - val |= MVNETA_GMAC2_PORT_RGMII; - else - val &= ~MVNETA_GMAC2_PORT_RGMII; - - mvreg_write(pp, MVNETA_GMAC_CTRL_2, val); -} - -/* Config SGMII port */ -static void mvneta_port_sgmii_config(struct mvneta_port *pp) -{ - u32 val; - - val = mvreg_read(pp, MVNETA_GMAC_CTRL_2); - val |= MVNETA_GMAC2_PSC_ENABLE; - mvreg_write(pp, MVNETA_GMAC_CTRL_2, val); - - mvreg_write(pp, MVNETA_SGMII_SERDES_CFG, MVNETA_SGMII_SERDES_PROTO); -} - /* Start the Ethernet port RX and TX activity */ static void mvneta_port_up(struct mvneta_port *pp) { @@ -2756,12 +2729,15 @@ static void mvneta_port_power_up(struct mvneta_port *pp, int phy_mode) mvreg_write(pp, MVNETA_UNIT_INTR_CAUSE, 0); if (phy_mode == PHY_INTERFACE_MODE_SGMII) - mvneta_port_sgmii_config(pp); + mvreg_write(pp, MVNETA_SERDES_CFG, MVNETA_SGMII_SERDES_PROTO); + else + mvreg_write(pp, MVNETA_SERDES_CFG, MVNETA_RGMII_SERDES_PROTO); + + val = mvreg_read(pp, MVNETA_GMAC_CTRL_2); - mvneta_gmac_rgmii_set(pp, 1); + val |= MVNETA_GMAC2_PCS_ENABLE | MVNETA_GMAC2_PORT_RGMII; /* Cancel Port Reset */ - val = mvreg_read(pp, MVNETA_GMAC_CTRL_2); val &= ~MVNETA_GMAC2_PORT_RESET; mvreg_write(pp, MVNETA_GMAC_CTRL_2, val); @@ -2774,6 +2750,7 @@ static void mvneta_port_power_up(struct mvneta_port *pp, int phy_mode) static int mvneta_probe(struct platform_device *pdev) { const struct mbus_dram_target_info *dram_target_info; + struct resource *res; struct device_node *dn = pdev->dev.of_node; struct device_node *phy_node; u32 phy_addr; @@ -2838,9 +2815,15 @@ static int mvneta_probe(struct platform_device *pdev) clk_prepare_enable(pp->clk); - pp->base = of_iomap(dn, 0); + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) { + err = -ENODEV; + goto err_clk; + } + + pp->base = devm_ioremap_resource(&pdev->dev, res); if (pp->base == NULL) { - err = -ENOMEM; + err = PTR_ERR(pp->base); goto err_clk; } @@ -2848,7 +2831,7 @@ static int mvneta_probe(struct platform_device *pdev) pp->stats = alloc_percpu(struct mvneta_pcpu_stats); if (!pp->stats) { err = -ENOMEM; - goto err_unmap; + goto err_clk; } for_each_possible_cpu(cpu) { @@ -2913,8 +2896,6 @@ err_deinit: mvneta_deinit(pp); err_free_stats: free_percpu(pp->stats); -err_unmap: - iounmap(pp->base); err_clk: clk_disable_unprepare(pp->clk); err_free_irq: @@ -2934,7 +2915,6 @@ static int mvneta_remove(struct platform_device *pdev) mvneta_deinit(pp); clk_disable_unprepare(pp->clk); free_percpu(pp->stats); - iounmap(pp->base); irq_dispose_mapping(dev->irq); free_netdev(dev); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index fad45316200a..84a96f70dfb5 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ 
b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -742,6 +742,14 @@ static int mlx4_en_replace_mac(struct mlx4_en_priv *priv, int qpn, err = mlx4_en_uc_steer_add(priv, new_mac, &qpn, &entry->reg_id); + if (err) + return err; + if (priv->tunnel_reg_id) { + mlx4_flow_detach(priv->mdev->dev, priv->tunnel_reg_id); + priv->tunnel_reg_id = 0; + } + err = mlx4_en_tunnel_steer_add(priv, new_mac, qpn, + &priv->tunnel_reg_id); return err; } } @@ -1792,6 +1800,8 @@ void mlx4_en_stop_port(struct net_device *dev, int detach) mc_list[5] = priv->port; mlx4_multicast_detach(mdev->dev, &priv->rss_map.indir_qp, mc_list, MLX4_PROT_ETH, mclist->reg_id); + if (mclist->tunnel_reg_id) + mlx4_flow_detach(mdev->dev, mclist->tunnel_reg_id); } mlx4_en_clear_list(dev); list_for_each_entry_safe(mclist, tmp, &priv->curr_list, list) { diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c index 91b69ff4b4a2..7e2995ecea6f 100644 --- a/drivers/net/ethernet/mellanox/mlx4/fw.c +++ b/drivers/net/ethernet/mellanox/mlx4/fw.c @@ -129,13 +129,14 @@ static void dump_dev_cap_flags2(struct mlx4_dev *dev, u64 flags) [0] = "RSS support", [1] = "RSS Toeplitz Hash Function support", [2] = "RSS XOR Hash Function support", - [3] = "Device manage flow steering support", + [3] = "Device managed flow steering support", [4] = "Automatic MAC reassignment support", [5] = "Time stamping support", [6] = "VST (control vlan insertion/stripping) support", [7] = "FSM (MAC anti-spoofing) support", [8] = "Dynamic QP updates support", - [9] = "TCP/IP offloads/flow-steering for VXLAN support" + [9] = "Device managed flow steering IPoIB support", + [10] = "TCP/IP offloads/flow-steering for VXLAN support" }; int i; @@ -859,7 +860,7 @@ int mlx4_QUERY_DEV_CAP_wrapper(struct mlx4_dev *dev, int slave, MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_CQ_TS_SUPPORT_OFFSET); /* For guests, disable vxlan tunneling */ - MLX4_GET(field, outbox, QUERY_DEV_CAP_VXLAN); + MLX4_GET(field, outbox->buf, QUERY_DEV_CAP_VXLAN); field &= 0xf7; MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_VXLAN); @@ -869,7 +870,7 @@ int mlx4_QUERY_DEV_CAP_wrapper(struct mlx4_dev *dev, int slave, MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_BF_OFFSET); /* For guests, disable mw type 2 */ - MLX4_GET(bmme_flags, outbox, QUERY_DEV_CAP_BMME_FLAGS_OFFSET); + MLX4_GET(bmme_flags, outbox->buf, QUERY_DEV_CAP_BMME_FLAGS_OFFSET); bmme_flags &= ~MLX4_BMME_FLAG_TYPE_2_WIN; MLX4_PUT(outbox->buf, bmme_flags, QUERY_DEV_CAP_BMME_FLAGS_OFFSET); @@ -883,7 +884,7 @@ int mlx4_QUERY_DEV_CAP_wrapper(struct mlx4_dev *dev, int slave, } /* turn off ipoib managed steering for guests */ - MLX4_GET(field, outbox, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET); + MLX4_GET(field, outbox->buf, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET); field &= ~0x80; MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET); diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c index d711158b0d4b..d413e60071d4 100644 --- a/drivers/net/ethernet/mellanox/mlx4/main.c +++ b/drivers/net/ethernet/mellanox/mlx4/main.c @@ -150,6 +150,8 @@ struct mlx4_port_config { struct pci_dev *pdev; }; +static atomic_t pf_loading = ATOMIC_INIT(0); + int mlx4_check_port_params(struct mlx4_dev *dev, enum mlx4_port_type *port_type) { @@ -749,7 +751,7 @@ static void mlx4_request_modules(struct mlx4_dev *dev) has_eth_port = true; } - if (has_ib_port) + if (has_ib_port || (dev->caps.flags & MLX4_DEV_CAP_FLAG_IBOE)) request_module_nowait(IB_DRV_NAME); if (has_eth_port) 
request_module_nowait(EN_DRV_NAME); @@ -1407,6 +1409,11 @@ static int mlx4_init_slave(struct mlx4_dev *dev) u32 slave_read; u32 cmd_channel_ver; + if (atomic_read(&pf_loading)) { + mlx4_warn(dev, "PF is not ready. Deferring probe\n"); + return -EPROBE_DEFER; + } + mutex_lock(&priv->cmd.slave_cmd_mutex); priv->cmd.max_cmds = 1; mlx4_warn(dev, "Sending reset\n"); @@ -2319,7 +2326,11 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data) if (num_vfs) { mlx4_warn(dev, "Enabling SR-IOV with %d VFs\n", num_vfs); + + atomic_inc(&pf_loading); err = pci_enable_sriov(pdev, num_vfs); + atomic_dec(&pf_loading); + if (err) { mlx4_err(dev, "Failed to enable SR-IOV, continuing without SR-IOV (err = %d).\n", err); @@ -2670,7 +2681,11 @@ static pci_ers_result_t mlx4_pci_err_detected(struct pci_dev *pdev, static pci_ers_result_t mlx4_pci_slot_reset(struct pci_dev *pdev) { - int ret = __mlx4_init_one(pdev, 0); + const struct pci_device_id *id; + int ret; + + id = pci_match_id(mlx4_pci_table, pdev); + ret = __mlx4_init_one(pdev, id->driver_data); return ret ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; } @@ -2684,6 +2699,7 @@ static struct pci_driver mlx4_driver = { .name = DRV_NAME, .id_table = mlx4_pci_table, .probe = mlx4_init_one, + .shutdown = mlx4_remove_one, .remove = mlx4_remove_one, .err_handler = &mlx4_err_handler, }; diff --git a/drivers/net/ethernet/micrel/ks8851.c b/drivers/net/ethernet/micrel/ks8851.c index 727b546a9eb8..e0c92e0e5e1d 100644 --- a/drivers/net/ethernet/micrel/ks8851.c +++ b/drivers/net/ethernet/micrel/ks8851.c @@ -23,6 +23,7 @@ #include <linux/crc32.h> #include <linux/mii.h> #include <linux/eeprom_93cx6.h> +#include <linux/regulator/consumer.h> #include <linux/spi/spi.h> @@ -83,6 +84,7 @@ union ks8851_tx_hdr { * @rc_rxqcr: Cached copy of KS_RXQCR. * @eeprom_size: Companion eeprom size in Bytes, 0 if no eeprom * @eeprom: 93CX6 EEPROM state for accessing on-board EEPROM. + * @vdd_reg: Optional regulator supplying the chip * * The @lock ensures that the chip is protected when certain operations are * in progress. 
When the read or write packet transfer is in progress, most @@ -130,6 +132,7 @@ struct ks8851_net { struct spi_transfer spi_xfer2[2]; struct eeprom_93cx6 eeprom; + struct regulator *vdd_reg; }; static int msg_enable; @@ -1414,6 +1417,21 @@ static int ks8851_probe(struct spi_device *spi) ks->spidev = spi; ks->tx_space = 6144; + ks->vdd_reg = regulator_get_optional(&spi->dev, "vdd"); + if (IS_ERR(ks->vdd_reg)) { + ret = PTR_ERR(ks->vdd_reg); + if (ret == -EPROBE_DEFER) + goto err_reg; + } else { + ret = regulator_enable(ks->vdd_reg); + if (ret) { + dev_err(&spi->dev, "regulator enable fail: %d\n", + ret); + goto err_reg_en; + } + } + + mutex_init(&ks->lock); spin_lock_init(&ks->statelock); @@ -1508,8 +1526,14 @@ static int ks8851_probe(struct spi_device *spi) err_netdev: free_irq(ndev->irq, ks); -err_id: err_irq: +err_id: + if (!IS_ERR(ks->vdd_reg)) + regulator_disable(ks->vdd_reg); +err_reg_en: + if (!IS_ERR(ks->vdd_reg)) + regulator_put(ks->vdd_reg); +err_reg: free_netdev(ndev); return ret; } @@ -1523,6 +1547,10 @@ static int ks8851_remove(struct spi_device *spi) unregister_netdev(priv->netdev); free_irq(spi->irq, priv); + if (!IS_ERR(priv->vdd_reg)) { + regulator_disable(priv->vdd_reg); + regulator_put(priv->vdd_reg); + } free_netdev(priv->netdev); return 0; diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c index ce2cfddbed50..656c65ddadb4 100644 --- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c +++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c @@ -4765,7 +4765,9 @@ static int qlge_probe(struct pci_dev *pdev, ndev->features = ndev->hw_features; ndev->vlan_features = ndev->hw_features; /* vlan gets same features (except vlan filter) */ - ndev->vlan_features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; + ndev->vlan_features &= ~(NETIF_F_HW_VLAN_CTAG_FILTER | + NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_CTAG_RX); if (test_bit(QL_DMA64, &qdev->flags)) ndev->features |= NETIF_F_HIGHDMA; diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c index e9779653cd4c..3ff7bc3e7a23 100644 --- a/drivers/net/ethernet/realtek/r8169.c +++ b/drivers/net/ethernet/realtek/r8169.c @@ -209,7 +209,7 @@ static const struct { [RTL_GIGA_MAC_VER_16] = _R("RTL8101e", RTL_TD_0, NULL, JUMBO_1K, true), [RTL_GIGA_MAC_VER_17] = - _R("RTL8168b/8111b", RTL_TD_1, NULL, JUMBO_4K, false), + _R("RTL8168b/8111b", RTL_TD_0, NULL, JUMBO_4K, false), [RTL_GIGA_MAC_VER_18] = _R("RTL8168cp/8111cp", RTL_TD_1, NULL, JUMBO_6K, false), [RTL_GIGA_MAC_VER_19] = diff --git a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c index 72d282bf33a5..c553f6b5a913 100644 --- a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c +++ b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c @@ -151,7 +151,7 @@ static void stmmac_clean_desc3(void *priv_ptr, struct dma_desc *p) sizeof(struct dma_desc))); } -const struct stmmac_chain_mode_ops chain_mode_ops = { +const struct stmmac_mode_ops chain_mode_ops = { .init = stmmac_init_dma_chain, .is_jumbo_frm = stmmac_is_jumbo_frm, .jumbo_frm = stmmac_jumbo_frm, diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h index 7834a3993946..74610f3aca9e 100644 --- a/drivers/net/ethernet/stmicro/stmmac/common.h +++ b/drivers/net/ethernet/stmicro/stmmac/common.h @@ -419,20 +419,13 @@ struct mii_regs { unsigned int data; /* MII Data */ }; -struct stmmac_ring_mode_ops { - unsigned int (*is_jumbo_frm) (int len, int ehn_desc); - unsigned int (*jumbo_frm) 
(void *priv, struct sk_buff *skb, int csum); - void (*refill_desc3) (void *priv, struct dma_desc *p); - void (*init_desc3) (struct dma_desc *p); - void (*clean_desc3) (void *priv, struct dma_desc *p); - int (*set_16kib_bfsize) (int mtu); -}; - -struct stmmac_chain_mode_ops { +struct stmmac_mode_ops { void (*init) (void *des, dma_addr_t phy_addr, unsigned int size, unsigned int extend_desc); unsigned int (*is_jumbo_frm) (int len, int ehn_desc); unsigned int (*jumbo_frm) (void *priv, struct sk_buff *skb, int csum); + int (*set_16kib_bfsize)(int mtu); + void (*init_desc3)(struct dma_desc *p); void (*refill_desc3) (void *priv, struct dma_desc *p); void (*clean_desc3) (void *priv, struct dma_desc *p); }; @@ -441,8 +434,7 @@ struct mac_device_info { const struct stmmac_ops *mac; const struct stmmac_desc_ops *desc; const struct stmmac_dma_ops *dma; - const struct stmmac_ring_mode_ops *ring; - const struct stmmac_chain_mode_ops *chain; + const struct stmmac_mode_ops *mode; const struct stmmac_hwtimestamp *ptp; struct mii_regs mii; /* MII register Addresses */ struct mac_link link; @@ -460,7 +452,7 @@ void stmmac_get_mac_addr(void __iomem *ioaddr, unsigned char *addr, void stmmac_set_mac(void __iomem *ioaddr, bool enable); void dwmac_dma_flush_tx_fifo(void __iomem *ioaddr); -extern const struct stmmac_ring_mode_ops ring_mode_ops; -extern const struct stmmac_chain_mode_ops chain_mode_ops; +extern const struct stmmac_mode_ops ring_mode_ops; +extern const struct stmmac_mode_ops chain_mode_ops; #endif /* __COMMON_H__ */ diff --git a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c index a96c7c2f5f3f..650a4be6bce5 100644 --- a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c +++ b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c @@ -100,10 +100,9 @@ static void stmmac_refill_desc3(void *priv_ptr, struct dma_desc *p) { struct stmmac_priv *priv = (struct stmmac_priv *)priv_ptr; - if (unlikely(priv->plat->has_gmac)) - /* Fill DES3 in case of RING mode */ - if (priv->dma_buf_sz >= BUF_SIZE_8KiB) - p->des3 = p->des2 + BUF_SIZE_8KiB; + /* Fill DES3 in case of RING mode */ + if (priv->dma_buf_sz >= BUF_SIZE_8KiB) + p->des3 = p->des2 + BUF_SIZE_8KiB; } /* In ring mode we need to fill the desc3 because it is used as buffer */ @@ -126,7 +125,7 @@ static int stmmac_set_16kib_bfsize(int mtu) return ret; } -const struct stmmac_ring_mode_ops ring_mode_ops = { +const struct stmmac_mode_ops ring_mode_ops = { .is_jumbo_frm = stmmac_is_jumbo_frm, .jumbo_frm = stmmac_jumbo_frm, .refill_desc3 = stmmac_refill_desc3, diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 078ad0ec8593..8543e1cfd55e 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -92,8 +92,8 @@ static int tc = TC_DEFAULT; module_param(tc, int, S_IRUGO | S_IWUSR); MODULE_PARM_DESC(tc, "DMA threshold control value"); -#define DMA_BUFFER_SIZE BUF_SIZE_4KiB -static int buf_sz = DMA_BUFFER_SIZE; +#define DEFAULT_BUFSIZE 1536 +static int buf_sz = DEFAULT_BUFSIZE; module_param(buf_sz, int, S_IRUGO | S_IWUSR); MODULE_PARM_DESC(buf_sz, "DMA buffer size"); @@ -136,8 +136,8 @@ static void stmmac_verify_args(void) dma_rxsize = DMA_RX_SIZE; if (unlikely(dma_txsize < 0)) dma_txsize = DMA_TX_SIZE; - if (unlikely((buf_sz < DMA_BUFFER_SIZE) || (buf_sz > BUF_SIZE_16KiB))) - buf_sz = DMA_BUFFER_SIZE; + if (unlikely((buf_sz < DEFAULT_BUFSIZE) || (buf_sz > BUF_SIZE_16KiB))) + buf_sz = 
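
The common.h change above is the heart of this stmmac series: the separate stmmac_ring_mode_ops and stmmac_chain_mode_ops collapse into a single stmmac_mode_ops, and callers test optional members for NULL instead of branching on priv->mode. A sketch of the idea with pared-down, illustrative types (not the driver's own):

struct mode_ops {
        /* chain mode only: link descriptors into a chain */
        void (*init)(void *des, unsigned long phy, unsigned int size,
                     unsigned int extend_desc);
        /* ring mode only: 16 KiB buffer sizing */
        int (*set_16kib_bfsize)(int mtu);
        /* provided by both modes */
        void (*clean_desc3)(void *priv, void *desc);
};

static int pick_bfsize(const struct mode_ops *ops, int mtu)
{
        /* The hook's presence, not a mode flag, selects the behavior. */
        if (ops->set_16kib_bfsize)
                return ops->set_16kib_bfsize(mtu);
        return 0;       /* caller falls back to generic sizing */
}

This is why init_dma_desc_rings() further down can drop its STMMAC_RING_MODE test: chain mode simply leaves set_16kib_bfsize NULL.
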
DEFAULT_BUFSIZE; if (unlikely(flow_ctrl > 1)) flow_ctrl = FLOW_AUTO; else if (likely(flow_ctrl < 0)) @@ -286,10 +286,25 @@ bool stmmac_eee_init(struct stmmac_priv *priv) /* MAC core supports the EEE feature. */ if (priv->dma_cap.eee) { + int tx_lpi_timer = priv->tx_lpi_timer; + /* Check if the PHY supports EEE */ - if (phy_init_eee(priv->phydev, 1)) + if (phy_init_eee(priv->phydev, 1)) { + /* To manage at run-time if the EEE cannot be supported + * anymore (for example because the lp caps have been + * changed). + * In that case the driver disable own timers. + */ + if (priv->eee_active) { + pr_debug("stmmac: disable EEE\n"); + del_timer_sync(&priv->eee_ctrl_timer); + priv->hw->mac->set_eee_timer(priv->ioaddr, 0, + tx_lpi_timer); + } + priv->eee_active = 0; goto out; - + } + /* Activate the EEE and start timers */ if (!priv->eee_active) { priv->eee_active = 1; init_timer(&priv->eee_ctrl_timer); @@ -300,13 +315,13 @@ bool stmmac_eee_init(struct stmmac_priv *priv) priv->hw->mac->set_eee_timer(priv->ioaddr, STMMAC_DEFAULT_LIT_LS, - priv->tx_lpi_timer); + tx_lpi_timer); } else /* Set HW EEE according to the speed */ priv->hw->mac->set_eee_pls(priv->ioaddr, priv->phydev->link); - pr_info("stmmac: Energy-Efficient Ethernet initialized\n"); + pr_debug("stmmac: Energy-Efficient Ethernet initialized\n"); ret = true; } @@ -886,10 +901,10 @@ static int stmmac_set_bfsize(int mtu, int bufsize) ret = BUF_SIZE_8KiB; else if (mtu >= BUF_SIZE_2KiB) ret = BUF_SIZE_4KiB; - else if (mtu >= DMA_BUFFER_SIZE) + else if (mtu > DEFAULT_BUFSIZE) ret = BUF_SIZE_2KiB; else - ret = DMA_BUFFER_SIZE; + ret = DEFAULT_BUFSIZE; return ret; } @@ -951,9 +966,9 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, p->des2 = priv->rx_skbuff_dma[i]; - if ((priv->mode == STMMAC_RING_MODE) && + if ((priv->hw->mode->init_desc3) && (priv->dma_buf_sz == BUF_SIZE_16KiB)) - priv->hw->ring->init_desc3(p); + priv->hw->mode->init_desc3(p); return 0; } @@ -984,11 +999,8 @@ static int init_dma_desc_rings(struct net_device *dev) unsigned int bfsize = 0; int ret = -ENOMEM; - /* Set the max buffer size according to the DESC mode - * and the MTU. Note that RING mode allows 16KiB bsize. 
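
Two details in the stmmac_main.c hunks above are easy to miss. First, the default RX buffer shrinks from 4 KiB to 1536 bytes, which still holds a standard Ethernet frame and saves memory per descriptor. Second, stmmac_set_bfsize() switches from `mtu >= DMA_BUFFER_SIZE` to the strict `mtu > DEFAULT_BUFSIZE`: an MTU of exactly 1536 still fits the default buffer, so only larger MTUs should move up to 2 KiB. A standalone restatement of the sizing ladder (the upper thresholds follow the surrounding upstream code):

#define DEFAULT_BUFSIZE 1536
#define BUF_SIZE_2KiB   2048
#define BUF_SIZE_4KiB   4096
#define BUF_SIZE_8KiB   8192

static int set_bfsize(int mtu)
{
        if (mtu >= BUF_SIZE_4KiB)
                return BUF_SIZE_8KiB;
        if (mtu >= BUF_SIZE_2KiB)
                return BUF_SIZE_4KiB;
        if (mtu > DEFAULT_BUFSIZE)      /* strict: 1536 itself still fits */
                return BUF_SIZE_2KiB;
        return DEFAULT_BUFSIZE;
}
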
- */ - if (priv->mode == STMMAC_RING_MODE) - bfsize = priv->hw->ring->set_16kib_bfsize(dev->mtu); + if (priv->hw->mode->set_16kib_bfsize) + bfsize = priv->hw->mode->set_16kib_bfsize(dev->mtu); if (bfsize < BUF_SIZE_16KiB) bfsize = stmmac_set_bfsize(dev->mtu, priv->dma_buf_sz); @@ -1029,15 +1041,15 @@ static int init_dma_desc_rings(struct net_device *dev) /* Setup the chained descriptor addresses */ if (priv->mode == STMMAC_CHAIN_MODE) { if (priv->extend_desc) { - priv->hw->chain->init(priv->dma_erx, priv->dma_rx_phy, - rxsize, 1); - priv->hw->chain->init(priv->dma_etx, priv->dma_tx_phy, - txsize, 1); + priv->hw->mode->init(priv->dma_erx, priv->dma_rx_phy, + rxsize, 1); + priv->hw->mode->init(priv->dma_etx, priv->dma_tx_phy, + txsize, 1); } else { - priv->hw->chain->init(priv->dma_rx, priv->dma_rx_phy, - rxsize, 0); - priv->hw->chain->init(priv->dma_tx, priv->dma_tx_phy, - txsize, 0); + priv->hw->mode->init(priv->dma_rx, priv->dma_rx_phy, + rxsize, 0); + priv->hw->mode->init(priv->dma_tx, priv->dma_tx_phy, + txsize, 0); } } @@ -1288,7 +1300,7 @@ static void stmmac_tx_clean(struct stmmac_priv *priv) DMA_TO_DEVICE); priv->tx_skbuff_dma[entry] = 0; } - priv->hw->ring->clean_desc3(priv, p); + priv->hw->mode->clean_desc3(priv, p); if (likely(skb != NULL)) { dev_kfree_skb(skb); @@ -1844,6 +1856,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) int nfrags = skb_shinfo(skb)->nr_frags; struct dma_desc *desc, *first; unsigned int nopaged_len = skb_headlen(skb); + unsigned int enh_desc = priv->plat->enh_desc; if (unlikely(stmmac_tx_avail(priv) < nfrags + 1)) { if (!netif_queue_stopped(dev)) { @@ -1871,27 +1884,19 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) first = desc; /* To program the descriptors according to the size of the frame */ - if (priv->mode == STMMAC_RING_MODE) { - is_jumbo = priv->hw->ring->is_jumbo_frm(skb->len, - priv->plat->enh_desc); - if (unlikely(is_jumbo)) - entry = priv->hw->ring->jumbo_frm(priv, skb, - csum_insertion); - } else { - is_jumbo = priv->hw->chain->is_jumbo_frm(skb->len, - priv->plat->enh_desc); - if (unlikely(is_jumbo)) - entry = priv->hw->chain->jumbo_frm(priv, skb, - csum_insertion); - } + if (enh_desc) + is_jumbo = priv->hw->mode->is_jumbo_frm(skb->len, enh_desc); + if (likely(!is_jumbo)) { desc->des2 = dma_map_single(priv->device, skb->data, nopaged_len, DMA_TO_DEVICE); priv->tx_skbuff_dma[entry] = desc->des2; priv->hw->desc->prepare_tx_desc(desc, 1, nopaged_len, csum_insertion, priv->mode); - } else + } else { desc = first; + entry = priv->hw->mode->jumbo_frm(priv, skb, csum_insertion); + } for (i = 0; i < nfrags; i++) { const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; @@ -2029,7 +2034,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv) p->des2 = priv->rx_skbuff_dma[entry]; - priv->hw->ring->refill_desc3(priv, p); + priv->hw->mode->refill_desc3(priv, p); if (netif_msg_rx_status(priv)) pr_debug("\trefill entry #%d\n", entry); @@ -2633,11 +2638,11 @@ static int stmmac_hw_init(struct stmmac_priv *priv) /* To use the chained or ring mode */ if (chain_mode) { - priv->hw->chain = &chain_mode_ops; + priv->hw->mode = &chain_mode_ops; pr_info(" Chain mode enabled\n"); priv->mode = STMMAC_CHAIN_MODE; } else { - priv->hw->ring = &ring_mode_ops; + priv->hw->mode = &ring_mode_ops; pr_info(" Ring mode enabled\n"); priv->mode = STMMAC_RING_MODE; } diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c index 
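
The reworked stmmac_xmit() above also simplifies the jumbo path: only hardware with enhanced descriptors can describe a jumbo frame, so is_jumbo_frm() is consulted only when enh_desc is set, and the mode-specific jumbo_frm() hook performs the multi-descriptor mapping. Roughly (struct tx_ctx and map_one_desc are hypothetical helpers):

static int xmit_map(struct tx_ctx *tx, struct sk_buff *skb, int enh_desc)
{
        unsigned int is_jumbo = 0;

        if (enh_desc)   /* normal descriptors can never carry jumbo */
                is_jumbo = tx->ops->is_jumbo_frm(skb->len, enh_desc);

        if (likely(!is_jumbo))
                return map_one_desc(tx, skb);   /* common fast path */

        /* ring or chain mode splits the frame as it sees fit */
        return tx->ops->jumbo_frm(tx->priv, skb, tx->csum);
}
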
c61bc72b8e90..8fb32a80f1c1 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c @@ -36,7 +36,7 @@ static const struct of_device_id stmmac_dt_ids[] = { #ifdef CONFIG_DWMAC_STI { .compatible = "st,stih415-dwmac", .data = &sti_gmac_data}, { .compatible = "st,stih416-dwmac", .data = &sti_gmac_data}, - { .compatible = "st,stih127-dwmac", .data = &sti_gmac_data}, + { .compatible = "st,stid127-dwmac", .data = &sti_gmac_data}, #endif /* SoC specific glue layers should come before generic bindings */ { .compatible = "st,spear600-gmac"}, diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index ffd4d12acf6d..7d6d8ec676c8 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -2229,10 +2229,6 @@ static int cpsw_probe(struct platform_device *pdev) goto clean_ale_ret; } - if (cpts_register(&pdev->dev, priv->cpts, - data->cpts_clock_mult, data->cpts_clock_shift)) - dev_err(priv->dev, "error registering cpts device\n"); - cpsw_notice(priv, probe, "initialized device (regs %pa, irq %d)\n", &ss_res->start, ndev->irq); diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c index 364d0c7952c0..88ef27067bf2 100644 --- a/drivers/net/ethernet/ti/davinci_cpdma.c +++ b/drivers/net/ethernet/ti/davinci_cpdma.c @@ -355,7 +355,7 @@ int cpdma_ctlr_stop(struct cpdma_ctlr *ctlr) int i; spin_lock_irqsave(&ctlr->lock, flags); - if (ctlr->state != CPDMA_STATE_ACTIVE) { + if (ctlr->state == CPDMA_STATE_TEARDOWN) { spin_unlock_irqrestore(&ctlr->lock, flags); return -EINVAL; } @@ -891,7 +891,7 @@ int cpdma_chan_stop(struct cpdma_chan *chan) unsigned timeout; spin_lock_irqsave(&chan->lock, flags); - if (chan->state != CPDMA_STATE_ACTIVE) { + if (chan->state == CPDMA_STATE_TEARDOWN) { spin_unlock_irqrestore(&chan->lock, flags); return -EINVAL; } diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c index cd9b164a0434..8f0e69ce07ca 100644 --- a/drivers/net/ethernet/ti/davinci_emac.c +++ b/drivers/net/ethernet/ti/davinci_emac.c @@ -1532,9 +1532,9 @@ static int emac_dev_open(struct net_device *ndev) struct device *emac_dev = &ndev->dev; u32 cnt; struct resource *res; - int ret; + int q, m, ret; + int res_num = 0, irq_num = 0; int i = 0; - int k = 0; struct emac_priv *priv = netdev_priv(ndev); pm_runtime_get(&priv->pdev->dev); @@ -1564,15 +1564,24 @@ static int emac_dev_open(struct net_device *ndev) } /* Request IRQ */ + while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, + res_num))) { + for (irq_num = res->start; irq_num <= res->end; irq_num++) { + dev_err(emac_dev, "Request IRQ %d\n", irq_num); + if (request_irq(irq_num, emac_irq, 0, ndev->name, + ndev)) { + dev_err(emac_dev, + "DaVinci EMAC: request_irq() failed\n"); + ret = -EBUSY; - while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) { - for (i = res->start; i <= res->end; i++) { - if (devm_request_irq(&priv->pdev->dev, i, emac_irq, - 0, ndev->name, ndev)) goto rollback; + } } - k++; + res_num++; } + /* prepare counters for rollback in case of an error */ + res_num--; + irq_num--; /* Start/Enable EMAC hardware */ emac_hw_enable(priv); @@ -1639,11 +1648,23 @@ static int emac_dev_open(struct net_device *ndev) return 0; -rollback: - - dev_err(emac_dev, "DaVinci EMAC: devm_request_irq() failed"); - ret = -EBUSY; err: + emac_int_disable(priv); + napi_disable(&priv->napi); + +rollback: + for (q = res_num; q >= 0; q--) { + res = 
platform_get_resource(priv->pdev, IORESOURCE_IRQ, q); + /* at the first iteration, irq_num is already set to the + * right value + */ + if (q != res_num) + irq_num = res->end; + + for (m = irq_num; m >= res->start; m--) + free_irq(m, ndev); + } + cpdma_ctlr_stop(priv->dma); pm_runtime_put(&priv->pdev->dev); return ret; } @@ -1659,6 +1680,9 @@ err: */ static int emac_dev_stop(struct net_device *ndev) { + struct resource *res; + int i = 0; + int irq_num; struct emac_priv *priv = netdev_priv(ndev); struct device *emac_dev = &ndev->dev; @@ -1674,6 +1698,13 @@ static int emac_dev_stop(struct net_device *ndev) if (priv->phydev) phy_disconnect(priv->phydev); + /* Free IRQ */ + while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, i))) { + for (irq_num = res->start; irq_num <= res->end; irq_num++) + free_irq(irq_num, priv->ndev); + i++; + } + if (netif_msg_drv(priv)) dev_notice(emac_dev, "DaVinci EMAC: %s stopped\n", ndev->name); diff --git a/drivers/net/ethernet/via/via-rhine.c b/drivers/net/ethernet/via/via-rhine.c index ef312bc6b865..6ac20a6738f4 100644 --- a/drivers/net/ethernet/via/via-rhine.c +++ b/drivers/net/ethernet/via/via-rhine.c @@ -923,7 +923,7 @@ static int rhine_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (rc) { dev_err(&pdev->dev, "32-bit PCI DMA addresses not supported by the card!?\n"); - goto err_out; + goto err_out_pci_disable; } /* sanity check */ @@ -931,7 +931,7 @@ static int rhine_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) (pci_resource_len(pdev, 1) < io_size)) { rc = -EIO; dev_err(&pdev->dev, "Insufficient PCI resources, aborting\n"); - goto err_out; + goto err_out_pci_disable; } pioaddr = pci_resource_start(pdev, 0); @@ -942,7 +942,7 @@ static int rhine_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) dev = alloc_etherdev(sizeof(struct rhine_private)); if (!dev) { rc = -ENOMEM; - goto err_out; + goto err_out_pci_disable; } SET_NETDEV_DEV(dev, &pdev->dev); @@ -1084,6 +1084,8 @@ err_out_free_res: pci_release_regions(pdev); err_out_free_netdev: free_netdev(dev); +err_out_pci_disable: + pci_disable_device(pdev); err_out: return rc; } diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index 7141a1937360..d6fce9750b95 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -442,6 +442,8 @@ static int netvsc_probe(struct hv_device *dev, if (!net) return -ENOMEM; + netif_carrier_off(net); + net_device_ctx = netdev_priv(net); net_device_ctx->device_ctx = dev; hv_set_drvdata(dev, net); @@ -473,6 +475,8 @@ static int netvsc_probe(struct hv_device *dev, pr_err("Unable to register netdev.\n"); rndis_filter_device_remove(dev); free_netdev(net); + } else { + schedule_delayed_work(&net_device_ctx->dwork, 0); } return ret; diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c index 1084e5de3ceb..b54fd257652b 100644 --- a/drivers/net/hyperv/rndis_filter.c +++ b/drivers/net/hyperv/rndis_filter.c @@ -243,6 +243,22 @@ static int rndis_filter_send_request(struct rndis_device *dev, return ret; } +static void rndis_set_link_state(struct rndis_device *rdev, + struct rndis_request *request) +{ + u32 link_status; + struct rndis_query_complete *query_complete; + + query_complete = &request->response_msg.msg.query_complete; + + if (query_complete->status == RNDIS_STATUS_SUCCESS && + query_complete->info_buflen == sizeof(u32)) { + memcpy(&link_status, (void *)((unsigned long)query_complete + + query_complete->info_buf_offset), sizeof(u32)); + 
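
The davinci_emac open path above trades devm_request_irq() for plain request_irq(), so failure must free exactly the IRQs already taken: res_num and irq_num record how far the nested loops got before the failing request (which itself took nothing). A condensed sketch of that rollback with the same resource iteration (kernel C; names illustrative):

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static int request_all_irqs(struct platform_device *pdev, void *ctx,
                            irq_handler_t handler, const char *name)
{
        struct resource *res;
        int q, m, res_num = 0, irq_num = 0;

        while ((res = platform_get_resource(pdev, IORESOURCE_IRQ,
                                            res_num))) {
                for (irq_num = res->start; irq_num <= res->end; irq_num++)
                        if (request_irq(irq_num, handler, 0, name, ctx))
                                goto rollback;
                res_num++;
        }
        return 0;

rollback:
        irq_num--;      /* the failing request_irq() took nothing */
        for (q = res_num; q >= 0; q--) {
                res = platform_get_resource(pdev, IORESOURCE_IRQ, q);
                if (q != res_num)       /* earlier resources: whole range */
                        irq_num = res->end;
                for (m = irq_num; m >= (int)res->start; m--)
                        free_irq(m, ctx);
        }
        return -EBUSY;
}

emac_dev_stop() gains the mirror-image loop, since the devm_ lifetime management no longer frees these IRQs automatically.
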
rdev->link_state = link_status != 0; + } +} + static void rndis_filter_receive_response(struct rndis_device *dev, struct rndis_message *resp) { @@ -272,6 +288,10 @@ static void rndis_filter_receive_response(struct rndis_device *dev, sizeof(struct rndis_message) + RNDIS_EXT_LEN) { memcpy(&request->response_msg, resp, resp->msg_len); + if (request->request_msg.ndis_msg_type == + RNDIS_MSG_QUERY && request->request_msg.msg. + query_req.oid == RNDIS_OID_GEN_MEDIA_CONNECT_STATUS) + rndis_set_link_state(dev, request); } else { netdev_err(ndev, "rndis response buffer overflow " @@ -620,7 +640,6 @@ static int rndis_filter_query_device_link_status(struct rndis_device *dev) ret = rndis_filter_query_device(dev, RNDIS_OID_GEN_MEDIA_CONNECT_STATUS, &link_status, &size); - dev->link_state = (link_status != 0) ? true : false; return ret; } diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c index ab31544bc254..a30258aad139 100644 --- a/drivers/net/ieee802154/at86rf230.c +++ b/drivers/net/ieee802154/at86rf230.c @@ -546,12 +546,12 @@ at86rf230_xmit(struct ieee802154_dev *dev, struct sk_buff *skb) int rc; unsigned long flags; - spin_lock(&lp->lock); + spin_lock_irqsave(&lp->lock, flags); if (lp->irq_busy) { - spin_unlock(&lp->lock); + spin_unlock_irqrestore(&lp->lock, flags); return -EBUSY; } - spin_unlock(&lp->lock); + spin_unlock_irqrestore(&lp->lock, flags); might_sleep(); @@ -725,10 +725,11 @@ static void at86rf230_irqwork_level(struct work_struct *work) static irqreturn_t at86rf230_isr(int irq, void *data) { struct at86rf230_local *lp = data; + unsigned long flags; - spin_lock(&lp->lock); + spin_lock_irqsave(&lp->lock, flags); lp->irq_busy = 1; - spin_unlock(&lp->lock); + spin_unlock_irqrestore(&lp->lock, flags); schedule_work(&lp->irqwork); diff --git a/drivers/net/ifb.c b/drivers/net/ifb.c index c14d39bf32d0..d7b2e947184b 100644 --- a/drivers/net/ifb.c +++ b/drivers/net/ifb.c @@ -180,7 +180,8 @@ static void ifb_setup(struct net_device *dev) dev->tx_queue_len = TX_Q_LIMIT; dev->features |= IFB_FEATURES; - dev->vlan_features |= IFB_FEATURES; + dev->vlan_features |= IFB_FEATURES & ~(NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_STAG_TX); dev->flags |= IFF_NOARP; dev->flags &= ~IFF_MULTICAST; diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index 19c9eca0ef26..76d96b9ebcdb 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -164,9 +164,9 @@ static const struct phy_setting settings[] = { * of that setting. Returns the index of the last setting if * none of the others match. */ -static inline int phy_find_setting(int speed, int duplex) +static inline unsigned int phy_find_setting(int speed, int duplex) { - int idx = 0; + unsigned int idx = 0; while (idx < ARRAY_SIZE(settings) && (settings[idx].speed != speed || settings[idx].duplex != duplex)) @@ -185,7 +185,7 @@ static inline int phy_find_setting(int speed, int duplex) * the mask in features. Returns the index of the last setting * if nothing else matches. 
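
The at86rf230 fix above is the classic IRQ-safe locking rule: lp->lock is taken in the hard-IRQ handler, so any process-context taker must use spin_lock_irqsave(); with plain spin_lock(), an interrupt arriving on the same CPU while the lock is held would spin on it forever. In outline (generic struct, not the driver's):

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct dev_state {
        spinlock_t lock;
        int irq_busy;
};

static irqreturn_t dev_isr(int irq, void *data)
{
        struct dev_state *st = data;
        unsigned long flags;

        spin_lock_irqsave(&st->lock, flags);
        st->irq_busy = 1;
        spin_unlock_irqrestore(&st->lock, flags);
        return IRQ_HANDLED;
}

static int dev_xmit_check(struct dev_state *st)
{
        unsigned long flags;
        int busy;

        /* irqsave, not spin_lock(): dev_isr() can preempt us here */
        spin_lock_irqsave(&st->lock, flags);
        busy = st->irq_busy;
        spin_unlock_irqrestore(&st->lock, flags);
        return busy ? -EBUSY : 0;
}
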
*/ -static inline int phy_find_valid(int idx, u32 features) +static inline unsigned int phy_find_valid(unsigned int idx, u32 features) { while (idx < MAX_NUM_SETTINGS && !(settings[idx].setting & features)) idx++; @@ -204,7 +204,7 @@ static inline int phy_find_valid(int idx, u32 features) static void phy_sanitize_settings(struct phy_device *phydev) { u32 features = phydev->supported; - int idx; + unsigned int idx; /* Sanitize settings based on PHY capabilities */ if ((features & SUPPORTED_Autoneg) == 0) @@ -954,7 +954,8 @@ int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable) (phydev->interface == PHY_INTERFACE_MODE_RGMII))) { int eee_lp, eee_cap, eee_adv; u32 lp, cap, adv; - int idx, status; + int status; + unsigned int idx; /* Read phy status to properly get the right settings */ status = phy_read_status(phydev); diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index 4b970f7624c0..2f6989b1e0dc 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -683,10 +683,9 @@ EXPORT_SYMBOL(phy_detach); int phy_suspend(struct phy_device *phydev) { struct phy_driver *phydrv = to_phy_driver(phydev->dev.driver); - struct ethtool_wolinfo wol; + struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; /* If the device has WOL enabled, we cannot suspend the PHY */ - wol.cmd = ETHTOOL_GWOL; phy_ethtool_get_wol(phydev, &wol); if (wol.wolopts) return -EBUSY; diff --git a/drivers/net/usb/Makefile b/drivers/net/usb/Makefile index 433f0a00c683..e2797f1e1b31 100644 --- a/drivers/net/usb/Makefile +++ b/drivers/net/usb/Makefile @@ -11,7 +11,7 @@ obj-$(CONFIG_USB_HSO) += hso.o obj-$(CONFIG_USB_NET_AX8817X) += asix.o asix-y := asix_devices.o asix_common.o ax88172a.o obj-$(CONFIG_USB_NET_AX88179_178A) += ax88179_178a.o -obj-$(CONFIG_USB_NET_CDCETHER) += cdc_ether.o r815x.o +obj-$(CONFIG_USB_NET_CDCETHER) += cdc_ether.o obj-$(CONFIG_USB_NET_CDC_EEM) += cdc_eem.o obj-$(CONFIG_USB_NET_DM9601) += dm9601.o obj-$(CONFIG_USB_NET_SR9700) += sr9700.o diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c index 42e176912c8e..bd363b27e854 100644 --- a/drivers/net/usb/cdc_ether.c +++ b/drivers/net/usb/cdc_ether.c @@ -652,6 +652,13 @@ static const struct usb_device_id products[] = { .driver_info = 0, }, +/* Samsung USB Ethernet Adapters */ +{ + USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, 0xa101, USB_CLASS_COMM, + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), + .driver_info = 0, +}, + /* WHITELIST!!! * * CDC Ether uses two interfaces, not necessarily consecutive. 
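
The phy_suspend() fix above leans on a C99 guarantee: a designated initializer zeroes every member it does not name, so the driver's get_wol() callback can no longer read stack garbage out of wolopts or sopass. The phy.c hunks make a similar small correctness change, turning index variables that can never be negative (and are compared against ARRAY_SIZE()) into unsigned ints. The initializer difference in isolation, with a stand-in struct:

struct wolinfo {
        unsigned int cmd;
        unsigned int wolopts;
        unsigned char sopass[6];
};

void example(void)
{
        /* Before: only cmd is written; wolopts and sopass hold
         * whatever happened to be on the stack. */
        struct wolinfo a;
        a.cmd = 5;

        /* After: unnamed members are implicitly zeroed (C99 6.7.8). */
        struct wolinfo b = { .cmd = 5 };
        (void)a; (void)b;
}
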
diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c index dbff290ed0e4..d350d2795e10 100644 --- a/drivers/net/usb/cdc_ncm.c +++ b/drivers/net/usb/cdc_ncm.c @@ -68,7 +68,6 @@ static struct usb_driver cdc_ncm_driver; static int cdc_ncm_setup(struct usbnet *dev) { struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0]; - struct usb_cdc_ncm_ntb_parameters ncm_parm; u32 val; u8 flags; u8 iface_no; @@ -82,22 +81,22 @@ static int cdc_ncm_setup(struct usbnet *dev) err = usbnet_read_cmd(dev, USB_CDC_GET_NTB_PARAMETERS, USB_TYPE_CLASS | USB_DIR_IN |USB_RECIP_INTERFACE, - 0, iface_no, &ncm_parm, - sizeof(ncm_parm)); + 0, iface_no, &ctx->ncm_parm, + sizeof(ctx->ncm_parm)); if (err < 0) { dev_err(&dev->intf->dev, "failed GET_NTB_PARAMETERS\n"); return err; /* GET_NTB_PARAMETERS is required */ } /* read correct set of parameters according to device mode */ - ctx->rx_max = le32_to_cpu(ncm_parm.dwNtbInMaxSize); - ctx->tx_max = le32_to_cpu(ncm_parm.dwNtbOutMaxSize); - ctx->tx_remainder = le16_to_cpu(ncm_parm.wNdpOutPayloadRemainder); - ctx->tx_modulus = le16_to_cpu(ncm_parm.wNdpOutDivisor); - ctx->tx_ndp_modulus = le16_to_cpu(ncm_parm.wNdpOutAlignment); + ctx->rx_max = le32_to_cpu(ctx->ncm_parm.dwNtbInMaxSize); + ctx->tx_max = le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize); + ctx->tx_remainder = le16_to_cpu(ctx->ncm_parm.wNdpOutPayloadRemainder); + ctx->tx_modulus = le16_to_cpu(ctx->ncm_parm.wNdpOutDivisor); + ctx->tx_ndp_modulus = le16_to_cpu(ctx->ncm_parm.wNdpOutAlignment); /* devices prior to NCM Errata shall set this field to zero */ - ctx->tx_max_datagrams = le16_to_cpu(ncm_parm.wNtbOutMaxDatagrams); - ntb_fmt_supported = le16_to_cpu(ncm_parm.bmNtbFormatsSupported); + ctx->tx_max_datagrams = le16_to_cpu(ctx->ncm_parm.wNtbOutMaxDatagrams); + ntb_fmt_supported = le16_to_cpu(ctx->ncm_parm.bmNtbFormatsSupported); /* there are some minor differences in NCM and MBIM defaults */ if (cdc_ncm_comm_intf_is_mbim(ctx->control->cur_altsetting)) { @@ -146,7 +145,7 @@ static int cdc_ncm_setup(struct usbnet *dev) } /* inform device about NTB input size changes */ - if (ctx->rx_max != le32_to_cpu(ncm_parm.dwNtbInMaxSize)) { + if (ctx->rx_max != le32_to_cpu(ctx->ncm_parm.dwNtbInMaxSize)) { __le32 dwNtbInMaxSize = cpu_to_le32(ctx->rx_max); err = usbnet_write_cmd(dev, USB_CDC_SET_NTB_INPUT_SIZE, @@ -162,14 +161,6 @@ static int cdc_ncm_setup(struct usbnet *dev) dev_dbg(&dev->intf->dev, "Using default maximum transmit length=%d\n", CDC_NCM_NTB_MAX_SIZE_TX); ctx->tx_max = CDC_NCM_NTB_MAX_SIZE_TX; - - /* Adding a pad byte here simplifies the handling in - * cdc_ncm_fill_tx_frame, by making tx_max always - * represent the real skb max size. - */ - if (ctx->tx_max % usb_maxpacket(dev->udev, dev->out, 1) == 0) - ctx->tx_max++; - } /* @@ -439,6 +430,10 @@ advance: goto error2; } + /* initialize data interface */ + if (cdc_ncm_setup(dev)) + goto error2; + /* configure data interface */ temp = usb_set_interface(dev->udev, iface_no, data_altsetting); if (temp) { @@ -453,12 +448,6 @@ advance: goto error2; } - /* initialize data interface */ - if (cdc_ncm_setup(dev)) { - dev_dbg(&intf->dev, "cdc_ncm_setup() failed\n"); - goto error2; - } - usb_set_intfdata(ctx->data, dev); usb_set_intfdata(ctx->control, dev); @@ -475,6 +464,15 @@ advance: dev->hard_mtu = ctx->tx_max; dev->rx_urb_size = ctx->rx_max; + /* cdc_ncm_setup will override dwNtbOutMaxSize if it is + * outside the sane range. 
Adding a pad byte here if necessary + * simplifies the handling in cdc_ncm_fill_tx_frame, making + * tx_max always represent the real skb max size. + */ + if (ctx->tx_max != le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize) && + ctx->tx_max % usb_maxpacket(dev->udev, dev->out, 1) == 0) + ctx->tx_max++; + return 0; error2: diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index d89dbe395ad2..adb12f349a61 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -449,9 +449,6 @@ enum rtl8152_flags { #define MCU_TYPE_PLA 0x0100 #define MCU_TYPE_USB 0x0000 -#define REALTEK_USB_DEVICE(vend, prod) \ - USB_DEVICE_INTERFACE_CLASS(vend, prod, USB_CLASS_VENDOR_SPEC) - struct rx_desc { __le32 opts1; #define RX_LEN_MASK 0x7fff @@ -2739,6 +2736,12 @@ static int rtl8152_probe(struct usb_interface *intf, struct net_device *netdev; int ret; + if (udev->actconfig->desc.bConfigurationValue != 1) { + usb_driver_set_configuration(udev, 1); + return -ENODEV; + } + + usb_reset_device(udev); netdev = alloc_etherdev(sizeof(struct r8152)); if (!netdev) { dev_err(&intf->dev, "Out of memory\n"); @@ -2819,9 +2822,9 @@ static void rtl8152_disconnect(struct usb_interface *intf) /* table of devices that work with this driver */ static struct usb_device_id rtl8152_table[] = { - {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8152)}, - {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8153)}, - {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, PRODUCT_ID_SAMSUNG)}, + {USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8152)}, + {USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8153)}, + {USB_DEVICE(VENDOR_ID_SAMSUNG, PRODUCT_ID_SAMSUNG)}, {} }; diff --git a/drivers/net/usb/r815x.c b/drivers/net/usb/r815x.c deleted file mode 100644 index f0a8791b7636..000000000000 --- a/drivers/net/usb/r815x.c +++ /dev/null @@ -1,248 +0,0 @@ -#include <linux/module.h> -#include <linux/netdevice.h> -#include <linux/mii.h> -#include <linux/usb.h> -#include <linux/usb/cdc.h> -#include <linux/usb/usbnet.h> - -#define RTL815x_REQT_READ 0xc0 -#define RTL815x_REQT_WRITE 0x40 -#define RTL815x_REQ_GET_REGS 0x05 -#define RTL815x_REQ_SET_REGS 0x05 - -#define MCU_TYPE_PLA 0x0100 -#define OCP_BASE 0xe86c -#define BASE_MII 0xa400 - -#define BYTE_EN_DWORD 0xff -#define BYTE_EN_WORD 0x33 -#define BYTE_EN_BYTE 0x11 - -#define R815x_PHY_ID 32 -#define REALTEK_VENDOR_ID 0x0bda - - -static int pla_read_word(struct usb_device *udev, u16 index) -{ - int ret; - u8 shift = index & 2; - __le32 *tmp; - - tmp = kmalloc(sizeof(*tmp), GFP_KERNEL); - if (!tmp) - return -ENOMEM; - - index &= ~3; - - ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), - RTL815x_REQ_GET_REGS, RTL815x_REQT_READ, - index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500); - if (ret < 0) - goto out2; - - ret = __le32_to_cpu(*tmp); - ret >>= (shift * 8); - ret &= 0xffff; - -out2: - kfree(tmp); - return ret; -} - -static int pla_write_word(struct usb_device *udev, u16 index, u32 data) -{ - __le32 *tmp; - u32 mask = 0xffff; - u16 byen = BYTE_EN_WORD; - u8 shift = index & 2; - int ret; - - tmp = kmalloc(sizeof(*tmp), GFP_KERNEL); - if (!tmp) - return -ENOMEM; - - data &= mask; - - if (shift) { - byen <<= shift; - mask <<= (shift * 8); - data <<= (shift * 8); - index &= ~3; - } - - ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), - RTL815x_REQ_GET_REGS, RTL815x_REQT_READ, - index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500); - if (ret < 0) - goto out3; - - data |= __le32_to_cpu(*tmp) & ~mask; - *tmp = __cpu_to_le32(data); - - ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), - RTL815x_REQ_SET_REGS, 
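
The relocated pad-byte logic above is about USB framing: if the maximum NTB size is an exact multiple of the bulk-OUT endpoint's wMaxPacketSize, a completely full NTB ends on a packet boundary and needs a zero-length packet to terminate the transfer. Growing tx_max by one byte guarantees the final packet is short, and doing it here, only when the driver overrode the device's dwNtbOutMaxSize, keeps tx_max equal to the real maximum skb size for cdc_ncm_fill_tx_frame(). The arithmetic in isolation:

#include <stdint.h>

/* Bump tx_max so a full buffer never lands exactly on a USB packet
 * boundary (which would require a zero-length packet to terminate). */
static uint32_t pad_tx_max(uint32_t tx_max, uint16_t max_packet)
{
        if (max_packet && tx_max % max_packet == 0)
                tx_max++;
        return tx_max;
}
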
RTL815x_REQT_WRITE, - index, MCU_TYPE_PLA | byen, tmp, sizeof(*tmp), - 500); - -out3: - kfree(tmp); - return ret; -} - -static int ocp_reg_read(struct usbnet *dev, u16 addr) -{ - u16 ocp_base, ocp_index; - int ret; - - ocp_base = addr & 0xf000; - ret = pla_write_word(dev->udev, OCP_BASE, ocp_base); - if (ret < 0) - goto out; - - ocp_index = (addr & 0x0fff) | 0xb000; - ret = pla_read_word(dev->udev, ocp_index); - -out: - return ret; -} - -static int ocp_reg_write(struct usbnet *dev, u16 addr, u16 data) -{ - u16 ocp_base, ocp_index; - int ret; - - ocp_base = addr & 0xf000; - ret = pla_write_word(dev->udev, OCP_BASE, ocp_base); - if (ret < 0) - goto out1; - - ocp_index = (addr & 0x0fff) | 0xb000; - ret = pla_write_word(dev->udev, ocp_index, data); - -out1: - return ret; -} - -static int r815x_mdio_read(struct net_device *netdev, int phy_id, int reg) -{ - struct usbnet *dev = netdev_priv(netdev); - int ret; - - if (phy_id != R815x_PHY_ID) - return -EINVAL; - - if (usb_autopm_get_interface(dev->intf) < 0) - return -ENODEV; - - ret = ocp_reg_read(dev, BASE_MII + reg * 2); - - usb_autopm_put_interface(dev->intf); - return ret; -} - -static -void r815x_mdio_write(struct net_device *netdev, int phy_id, int reg, int val) -{ - struct usbnet *dev = netdev_priv(netdev); - - if (phy_id != R815x_PHY_ID) - return; - - if (usb_autopm_get_interface(dev->intf) < 0) - return; - - ocp_reg_write(dev, BASE_MII + reg * 2, val); - - usb_autopm_put_interface(dev->intf); -} - -static int r8153_bind(struct usbnet *dev, struct usb_interface *intf) -{ - int status; - - status = usbnet_cdc_bind(dev, intf); - if (status < 0) - return status; - - dev->mii.dev = dev->net; - dev->mii.mdio_read = r815x_mdio_read; - dev->mii.mdio_write = r815x_mdio_write; - dev->mii.phy_id_mask = 0x3f; - dev->mii.reg_num_mask = 0x1f; - dev->mii.phy_id = R815x_PHY_ID; - dev->mii.supports_gmii = 1; - - return status; -} - -static int r8152_bind(struct usbnet *dev, struct usb_interface *intf) -{ - int status; - - status = usbnet_cdc_bind(dev, intf); - if (status < 0) - return status; - - dev->mii.dev = dev->net; - dev->mii.mdio_read = r815x_mdio_read; - dev->mii.mdio_write = r815x_mdio_write; - dev->mii.phy_id_mask = 0x3f; - dev->mii.reg_num_mask = 0x1f; - dev->mii.phy_id = R815x_PHY_ID; - dev->mii.supports_gmii = 0; - - return status; -} - -static const struct driver_info r8152_info = { - .description = "RTL8152 ECM Device", - .flags = FLAG_ETHER | FLAG_POINTTOPOINT, - .bind = r8152_bind, - .unbind = usbnet_cdc_unbind, - .status = usbnet_cdc_status, - .manage_power = usbnet_manage_power, -}; - -static const struct driver_info r8153_info = { - .description = "RTL8153 ECM Device", - .flags = FLAG_ETHER | FLAG_POINTTOPOINT, - .bind = r8153_bind, - .unbind = usbnet_cdc_unbind, - .status = usbnet_cdc_status, - .manage_power = usbnet_manage_power, -}; - -static const struct usb_device_id products[] = { -{ - USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8152, USB_CLASS_COMM, - USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), - .driver_info = (unsigned long) &r8152_info, -}, - -{ - USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8153, USB_CLASS_COMM, - USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), - .driver_info = (unsigned long) &r8153_info, -}, - - { }, /* END */ -}; -MODULE_DEVICE_TABLE(usb, products); - -static struct usb_driver r815x_driver = { - .name = "r815x", - .id_table = products, - .probe = usbnet_probe, - .disconnect = usbnet_disconnect, - .suspend = usbnet_suspend, - .resume = usbnet_resume, - .reset_resume = usbnet_resume, - 
.supports_autosuspend = 1, - .disable_hub_initiated_lpm = 1, -}; - -module_usb_driver(r815x_driver); - -MODULE_AUTHOR("Hayes Wang"); -MODULE_DESCRIPTION("Realtek USB ECM device"); -MODULE_LICENSE("GPL"); diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c index dd10d5817d2a..f9e96c427558 100644 --- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -752,14 +752,12 @@ EXPORT_SYMBOL_GPL(usbnet_unlink_rx_urbs); // precondition: never called in_interrupt static void usbnet_terminate_urbs(struct usbnet *dev) { - DECLARE_WAIT_QUEUE_HEAD_ONSTACK(unlink_wakeup); DECLARE_WAITQUEUE(wait, current); int temp; /* ensure there are no more active urbs */ - add_wait_queue(&unlink_wakeup, &wait); + add_wait_queue(&dev->wait, &wait); set_current_state(TASK_UNINTERRUPTIBLE); - dev->wait = &unlink_wakeup; temp = unlink_urbs(dev, &dev->txq) + unlink_urbs(dev, &dev->rxq); @@ -773,15 +771,14 @@ static void usbnet_terminate_urbs(struct usbnet *dev) "waited for %d urb completions\n", temp); } set_current_state(TASK_RUNNING); - dev->wait = NULL; - remove_wait_queue(&unlink_wakeup, &wait); + remove_wait_queue(&dev->wait, &wait); } int usbnet_stop (struct net_device *net) { struct usbnet *dev = netdev_priv(net); struct driver_info *info = dev->driver_info; - int retval; + int retval, pm; clear_bit(EVENT_DEV_OPEN, &dev->flags); netif_stop_queue (net); @@ -791,6 +788,8 @@ int usbnet_stop (struct net_device *net) net->stats.rx_packets, net->stats.tx_packets, net->stats.rx_errors, net->stats.tx_errors); + /* to not race resume */ + pm = usb_autopm_get_interface(dev->intf); /* allow minidriver to stop correctly (wireless devices to turn off * radio etc) */ if (info->stop) { @@ -817,6 +816,9 @@ int usbnet_stop (struct net_device *net) dev->flags = 0; del_timer_sync (&dev->delay); tasklet_kill (&dev->bh); + if (!pm) + usb_autopm_put_interface(dev->intf); + if (info->manage_power && !test_and_clear_bit(EVENT_NO_RUNTIME_PM, &dev->flags)) info->manage_power(dev, 0); @@ -1437,11 +1439,12 @@ static void usbnet_bh (unsigned long param) /* restart RX again after disabling due to high error rate */ clear_bit(EVENT_RX_KILL, &dev->flags); - // waiting for all pending urbs to complete? - if (dev->wait) { - if ((dev->txq.qlen + dev->rxq.qlen + dev->done.qlen) == 0) { - wake_up (dev->wait); - } + /* waiting for all pending urbs to complete? + * only then can we forgo submitting anew + */ + if (waitqueue_active(&dev->wait)) { + if (dev->txq.qlen + dev->rxq.qlen + dev->done.qlen == 0) + wake_up_all(&dev->wait); // or are we maybe short a few urbs? 
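
The usbnet changes starting above close a use-after-free window: the old code published an on-stack wait queue through the dev->wait pointer, so the bottom half could observe the pointer, the waiter could return (popping the stack frame), and the subsequent wake_up() would touch dead stack memory. Embedding the wait_queue_head_t in struct usbnet and gating on waitqueue_active() removes the lifetime problem. The shape of the fix (a sketch; counting is shown unlocked for brevity, and the real code re-tests its condition under set_current_state() as the hunk shows):

#include <linux/wait.h>

struct mydev {
        wait_queue_head_t wait;         /* lives as long as the device */
        int inflight;
};

static void mydev_init(struct mydev *dev)
{
        init_waitqueue_head(&dev->wait);        /* once, at probe time */
}

static void mydev_complete(struct mydev *dev)   /* BH side */
{
        if (--dev->inflight == 0 && waitqueue_active(&dev->wait))
                wake_up_all(&dev->wait);
}

static void mydev_drain(struct mydev *dev)      /* process side */
{
        wait_event(dev->wait, dev->inflight == 0);
}

usbnet_stop() additionally takes usb_autopm_get_interface() up front so a concurrent resume cannot race the teardown.
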
} else if (netif_running (dev->net) && @@ -1580,6 +1583,7 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod) dev->driver_name = name; dev->msg_enable = netif_msg_init (msg_level, NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK); + init_waitqueue_head(&dev->wait); skb_queue_head_init (&dev->rxq); skb_queue_head_init (&dev->txq); skb_queue_head_init (&dev->done); @@ -1791,9 +1795,10 @@ int usbnet_resume (struct usb_interface *intf) spin_unlock_irq(&dev->txq.lock); if (test_bit(EVENT_DEV_OPEN, &dev->flags)) { - /* handle remote wakeup ASAP */ - if (!dev->wait && - netif_device_present(dev->net) && + /* handle remote wakeup ASAP + * we cannot race against stop + */ + if (netif_device_present(dev->net) && !timer_pending(&dev->delay) && !test_bit(EVENT_RX_HALT, &dev->flags)) rx_alloc_submit(dev, GFP_NOIO); diff --git a/drivers/net/veth.c b/drivers/net/veth.c index 5b374370f71c..c0e7c64765ab 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -286,7 +286,10 @@ static void veth_setup(struct net_device *dev) dev->features |= NETIF_F_LLTX; dev->features |= VETH_FEATURES; dev->vlan_features = dev->features & - ~(NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX); + ~(NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_STAG_TX | + NETIF_F_HW_VLAN_CTAG_RX | + NETIF_F_HW_VLAN_STAG_RX); dev->destructor = veth_dev_free; dev->hw_features = VETH_FEATURES; diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 5632a99cbbd2..841b60831df1 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -671,8 +671,7 @@ static bool try_fill_recv(struct receive_queue *rq, gfp_t gfp) if (err) break; } while (rq->vq->num_free); - if (unlikely(!virtqueue_kick(rq->vq))) - return false; + virtqueue_kick(rq->vq); return !oom; } @@ -877,7 +876,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) err = xmit_skb(sq, skb); /* This should not happen! */ - if (unlikely(err) || unlikely(!virtqueue_kick(sq->vq))) { + if (unlikely(err)) { dev->stats.tx_fifo_errors++; if (net_ratelimit()) dev_warn(&dev->dev, @@ -886,6 +885,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) kfree_skb(skb); return NETDEV_TX_OK; } + virtqueue_kick(sq->vq); /* Don't wait up for transmitted skbs to be freed. 
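
The virtio_net hunks above stop treating a false return from virtqueue_kick() as a transmit error. The buffer is already queued by the time the kick runs, and a "failed" kick usually just means the device suppressed notifications or is going away, so counting it as tx_fifo_errors and freeing the skb corrupted accounting for frames that were in fact sent. The general add-then-kick pattern (generic names; error handling abbreviated):

#include <linux/scatterlist.h>
#include <linux/virtio.h>

static int vq_send(struct virtqueue *vq, struct scatterlist *sg,
                   unsigned int num, void *token)
{
        int err;

        /* Only a failure to *queue* the buffer is a real TX error... */
        err = virtqueue_add_outbuf(vq, sg, num, token, GFP_ATOMIC);
        if (err)
                return err;

        /* ...the kick's return value is not: ignore it. */
        virtqueue_kick(vq);
        return 0;
}
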
*/ skb_orphan(skb); diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 3be786faaaec..0fa3b44f7342 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1762,11 +1762,20 @@ vmxnet3_netpoll(struct net_device *netdev) { struct vmxnet3_adapter *adapter = netdev_priv(netdev); - if (adapter->intr.mask_mode == VMXNET3_IMM_ACTIVE) - vmxnet3_disable_all_intrs(adapter); - - vmxnet3_do_poll(adapter, adapter->rx_queue[0].rx_ring[0].size); - vmxnet3_enable_all_intrs(adapter); + switch (adapter->intr.type) { +#ifdef CONFIG_PCI_MSI + case VMXNET3_IT_MSIX: { + int i; + for (i = 0; i < adapter->num_rx_queues; i++) + vmxnet3_msix_rx(0, &adapter->rx_queue[i]); + break; + } +#endif + case VMXNET3_IT_MSI: + default: + vmxnet3_intr(0, adapter->netdev); + break; + } } #endif /* CONFIG_NET_POLL_CONTROLLER */ diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c index b0f705c2378f..1236812c7be6 100644 --- a/drivers/net/vxlan.c +++ b/drivers/net/vxlan.c @@ -1318,6 +1318,9 @@ static int arp_reduce(struct net_device *dev, struct sk_buff *skb) neigh_release(n); + if (reply == NULL) + goto out; + skb_reset_mac_header(reply); __skb_pull(reply, skb_network_offset(reply)); reply->ip_summed = CHECKSUM_UNNECESSARY; @@ -1339,15 +1342,103 @@ out: } #if IS_ENABLED(CONFIG_IPV6) + +static struct sk_buff *vxlan_na_create(struct sk_buff *request, + struct neighbour *n, bool isrouter) +{ + struct net_device *dev = request->dev; + struct sk_buff *reply; + struct nd_msg *ns, *na; + struct ipv6hdr *pip6; + u8 *daddr; + int na_olen = 8; /* opt hdr + ETH_ALEN for target */ + int ns_olen; + int i, len; + + if (dev == NULL) + return NULL; + + len = LL_RESERVED_SPACE(dev) + sizeof(struct ipv6hdr) + + sizeof(*na) + na_olen + dev->needed_tailroom; + reply = alloc_skb(len, GFP_ATOMIC); + if (reply == NULL) + return NULL; + + reply->protocol = htons(ETH_P_IPV6); + reply->dev = dev; + skb_reserve(reply, LL_RESERVED_SPACE(request->dev)); + skb_push(reply, sizeof(struct ethhdr)); + skb_set_mac_header(reply, 0); + + ns = (struct nd_msg *)skb_transport_header(request); + + daddr = eth_hdr(request)->h_source; + ns_olen = request->len - skb_transport_offset(request) - sizeof(*ns); + for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) { + if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) { + daddr = ns->opt + i + sizeof(struct nd_opt_hdr); + break; + } + } + + /* Ethernet header */ + ether_addr_copy(eth_hdr(reply)->h_dest, daddr); + ether_addr_copy(eth_hdr(reply)->h_source, n->ha); + eth_hdr(reply)->h_proto = htons(ETH_P_IPV6); + reply->protocol = htons(ETH_P_IPV6); + + skb_pull(reply, sizeof(struct ethhdr)); + skb_set_network_header(reply, 0); + skb_put(reply, sizeof(struct ipv6hdr)); + + /* IPv6 header */ + + pip6 = ipv6_hdr(reply); + memset(pip6, 0, sizeof(struct ipv6hdr)); + pip6->version = 6; + pip6->priority = ipv6_hdr(request)->priority; + pip6->nexthdr = IPPROTO_ICMPV6; + pip6->hop_limit = 255; + pip6->daddr = ipv6_hdr(request)->saddr; + pip6->saddr = *(struct in6_addr *)n->primary_key; + + skb_pull(reply, sizeof(struct ipv6hdr)); + skb_set_transport_header(reply, 0); + + na = (struct nd_msg *)skb_put(reply, sizeof(*na) + na_olen); + + /* Neighbor Advertisement */ + memset(na, 0, sizeof(*na)+na_olen); + na->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT; + na->icmph.icmp6_router = isrouter; + na->icmph.icmp6_override = 1; + na->icmph.icmp6_solicited = 1; + na->target = ns->target; + ether_addr_copy(&na->opt[2], n->ha); + na->opt[0] = ND_OPT_TARGET_LL_ADDR; + na->opt[1] = 
na_olen >> 3; + + na->icmph.icmp6_cksum = csum_ipv6_magic(&pip6->saddr, + &pip6->daddr, sizeof(*na)+na_olen, IPPROTO_ICMPV6, + csum_partial(na, sizeof(*na)+na_olen, 0)); + + pip6->payload_len = htons(sizeof(*na)+na_olen); + + skb_push(reply, sizeof(struct ipv6hdr)); + + reply->ip_summed = CHECKSUM_UNNECESSARY; + + return reply; +} + static int neigh_reduce(struct net_device *dev, struct sk_buff *skb) { struct vxlan_dev *vxlan = netdev_priv(dev); - struct neighbour *n; - union vxlan_addr ipa; + struct nd_msg *msg; const struct ipv6hdr *iphdr; const struct in6_addr *saddr, *daddr; - struct nd_msg *msg; - struct inet6_dev *in6_dev = NULL; + struct neighbour *n; + struct inet6_dev *in6_dev; in6_dev = __in6_dev_get(dev); if (!in6_dev) @@ -1360,19 +1451,20 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb) saddr = &iphdr->saddr; daddr = &iphdr->daddr; - if (ipv6_addr_loopback(daddr) || - ipv6_addr_is_multicast(daddr)) - goto out; - msg = (struct nd_msg *)skb_transport_header(skb); if (msg->icmph.icmp6_code != 0 || msg->icmph.icmp6_type != NDISC_NEIGHBOUR_SOLICITATION) goto out; - n = neigh_lookup(ipv6_stub->nd_tbl, daddr, dev); + if (ipv6_addr_loopback(daddr) || + ipv6_addr_is_multicast(&msg->target)) + goto out; + + n = neigh_lookup(ipv6_stub->nd_tbl, &msg->target, dev); if (n) { struct vxlan_fdb *f; + struct sk_buff *reply; if (!(n->nud_state & NUD_CONNECTED)) { neigh_release(n); @@ -1386,13 +1478,23 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb) goto out; } - ipv6_stub->ndisc_send_na(dev, n, saddr, &msg->target, - !!in6_dev->cnf.forwarding, - true, false, false); + reply = vxlan_na_create(skb, n, + !!(f ? f->flags & NTF_ROUTER : 0)); + neigh_release(n); + + if (reply == NULL) + goto out; + + if (netif_rx_ni(reply) == NET_RX_DROP) + dev->stats.rx_dropped++; + } else if (vxlan->flags & VXLAN_F_L3MISS) { - ipa.sin6.sin6_addr = *daddr; - ipa.sa.sa_family = AF_INET6; + union vxlan_addr ipa = { + .sin6.sin6_addr = msg->target, + .sa.sa_family = AF_INET6, + }; + vxlan_ip_miss(dev, &ipa); } diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c index 303ce27964c1..9078a6c5a74e 100644 --- a/drivers/net/wireless/ath/ath9k/hw.c +++ b/drivers/net/wireless/ath/ath9k/hw.c @@ -1548,6 +1548,7 @@ bool ath9k_hw_check_alive(struct ath_hw *ah) if (reg != last_val) return true; + udelay(1); last_val = reg; if ((reg & 0x7E7FFFEF) == 0x00702400) continue; @@ -1560,8 +1561,6 @@ bool ath9k_hw_check_alive(struct ath_hw *ah) default: return true; } - - udelay(1); } while (count-- > 0); return false; diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c index f042a18c8495..55897d508a76 100644 --- a/drivers/net/wireless/ath/ath9k/xmit.c +++ b/drivers/net/wireless/ath/ath9k/xmit.c @@ -2063,7 +2063,7 @@ static struct ath_buf *ath_tx_setup_buffer(struct ath_softc *sc, ATH_TXBUF_RESET(bf); - if (tid) { + if (tid && ieee80211_is_data_present(hdr->frame_control)) { fragno = le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_FRAG; seqno = tid->seq_next; hdr->seq_ctrl = cpu_to_le16(tid->seq_next << IEEE80211_SEQ_SEQ_SHIFT); @@ -2186,7 +2186,7 @@ int ath_tx_start(struct ieee80211_hw *hw, struct sk_buff *skb, txq->stopped = true; } - if (txctl->an) + if (txctl->an && ieee80211_is_data_present(hdr->frame_control)) tid = ath_get_skb_tid(sc, txctl->an, skb); if (info->flags & IEEE80211_TX_CTL_PS_RESPONSE) { diff --git a/drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c b/drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c index 
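
A subtle point in the neigh_reduce() rework above: a Neighbor Solicitation is normally sent to the target's solicited-node *multicast* address, so the packet's IPv6 daddr is useless as a lookup key. The address actually being resolved is msg->target inside the ICMPv6 payload, and both the multicast sanity check and the neigh_lookup() now use it. Condensed (kernel C, mirroring the hunk):

static struct neighbour *lookup_ns_target(struct sk_buff *skb,
                                          struct net_device *dev)
{
        struct nd_msg *msg = (struct nd_msg *)skb_transport_header(skb);

        /* The NS itself is multicast-addressed; the unicast address
         * being resolved lives in the payload. */
        if (ipv6_addr_is_multicast(&msg->target))
                return NULL;

        return neigh_lookup(ipv6_stub->nd_tbl, &msg->target, dev);
}

vxlan_na_create() then answers locally with a hand-built Neighbor Advertisement injected via netif_rx_ni() on the VXLAN device itself, instead of calling ipv6_stub->ndisc_send_na().
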
119ee6eaf1c3..ddaa9efd053d 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c @@ -1948,8 +1948,10 @@ static int brcmf_sdio_txpkt_prep_sg(struct brcmf_sdio *bus, if (pkt_pad == NULL) return -ENOMEM; ret = brcmf_sdio_txpkt_hdalign(bus, pkt_pad); - if (unlikely(ret < 0)) + if (unlikely(ret < 0)) { + kfree_skb(pkt_pad); return ret; + } memcpy(pkt_pad->data, pkt->data + pkt->len - tail_chop, tail_chop); diff --git a/drivers/net/wireless/iwlwifi/mvm/bt-coex.c b/drivers/net/wireless/iwlwifi/mvm/bt-coex.c index 76cde6ce6551..18a895a949d4 100644 --- a/drivers/net/wireless/iwlwifi/mvm/bt-coex.c +++ b/drivers/net/wireless/iwlwifi/mvm/bt-coex.c @@ -872,8 +872,11 @@ void iwl_mvm_bt_rssi_event(struct iwl_mvm *mvm, struct ieee80211_vif *vif, lockdep_assert_held(&mvm->mutex); - /* Rssi update while not associated ?! */ - if (WARN_ON_ONCE(mvmvif->ap_sta_id == IWL_MVM_STATION_COUNT)) + /* + * Rssi update while not associated - can happen since the statistics + * are handled asynchronously + */ + if (mvmvif->ap_sta_id == IWL_MVM_STATION_COUNT) return; /* No BT - reports should be disabled */ diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c index f47bcbe2945a..3872ead75488 100644 --- a/drivers/net/wireless/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/iwlwifi/pcie/drv.c @@ -359,13 +359,12 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { /* 7265 Series */ {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5110, iwl7265_2ac_cfg)}, - {IWL_PCI_DEVICE(0x095A, 0x5112, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5100, iwl7265_2ac_cfg)}, - {IWL_PCI_DEVICE(0x095A, 0x510A, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095B, 0x5310, iwl7265_2ac_cfg)}, - {IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_n_cfg)}, {IWL_PCI_DEVICE(0x095B, 0x5210, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)}, diff --git a/drivers/net/wireless/mwifiex/11ac.c b/drivers/net/wireless/mwifiex/11ac.c index 5e0eec4d71c7..5d9a8084665d 100644 --- a/drivers/net/wireless/mwifiex/11ac.c +++ b/drivers/net/wireless/mwifiex/11ac.c @@ -189,8 +189,7 @@ int mwifiex_cmd_append_11ac_tlv(struct mwifiex_private *priv, vht_cap->header.len = cpu_to_le16(sizeof(struct ieee80211_vht_cap)); memcpy((u8 *)vht_cap + sizeof(struct mwifiex_ie_types_header), - (u8 *)bss_desc->bcn_vht_cap + - sizeof(struct ieee_types_header), + (u8 *)bss_desc->bcn_vht_cap, le16_to_cpu(vht_cap->header.len)); mwifiex_fill_vht_cap_tlv(priv, vht_cap, bss_desc->bss_band); diff --git a/drivers/net/wireless/mwifiex/11n.c b/drivers/net/wireless/mwifiex/11n.c index 6261f8c53d44..7db1a89fdd95 100644 --- a/drivers/net/wireless/mwifiex/11n.c +++ b/drivers/net/wireless/mwifiex/11n.c @@ -308,8 +308,7 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv, ht_cap->header.len = cpu_to_le16(sizeof(struct ieee80211_ht_cap)); memcpy((u8 *) ht_cap + sizeof(struct mwifiex_ie_types_header), - (u8 *) bss_desc->bcn_ht_cap + - sizeof(struct ieee_types_header), + (u8 *)bss_desc->bcn_ht_cap, le16_to_cpu(ht_cap->header.len)); mwifiex_fill_cap_info(priv, radio_type, ht_cap); diff --git a/drivers/net/wireless/mwifiex/scan.c b/drivers/net/wireless/mwifiex/scan.c index 0a8a26e10f01..668547c2de84 100644 --- 
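
The paired mwifiex 11ac.c/11n.c hunks above fix an off-by-a-header copy: bss_desc->bcn_vht_cap (and bcn_ht_cap) were saved already pointing past the IE header, so adding sizeof(struct ieee_types_header) to the source skipped the first bytes of the element and read header.len bytes beyond its end. Schematically:

/* Before: source already past the IE header, then skipped it again,
 * so the copy started in the wrong place and overran the element: */
memcpy(dst, (u8 *)bss_desc->bcn_vht_cap + sizeof(struct ieee_types_header),
       len);

/* After: copy straight from the payload pointer */
memcpy(dst, (u8 *)bss_desc->bcn_vht_cap, len);
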
a/drivers/net/wireless/mwifiex/scan.c +++ b/drivers/net/wireless/mwifiex/scan.c @@ -2101,12 +2101,12 @@ mwifiex_save_curr_bcn(struct mwifiex_private *priv) curr_bss->ht_info_offset); if (curr_bss->bcn_vht_cap) - curr_bss->bcn_ht_cap = (void *)(curr_bss->beacon_buf + - curr_bss->vht_cap_offset); + curr_bss->bcn_vht_cap = (void *)(curr_bss->beacon_buf + + curr_bss->vht_cap_offset); if (curr_bss->bcn_vht_oper) - curr_bss->bcn_ht_oper = (void *)(curr_bss->beacon_buf + - curr_bss->vht_info_offset); + curr_bss->bcn_vht_oper = (void *)(curr_bss->beacon_buf + + curr_bss->vht_info_offset); if (curr_bss->bcn_bss_co_2040) curr_bss->bcn_bss_co_2040 = diff --git a/drivers/net/wireless/rt2x00/rt2800lib.c b/drivers/net/wireless/rt2x00/rt2800lib.c index 7f8b5d156c8c..41d4a8167dc3 100644 --- a/drivers/net/wireless/rt2x00/rt2800lib.c +++ b/drivers/net/wireless/rt2x00/rt2800lib.c @@ -5460,14 +5460,15 @@ static void rt2800_init_bbp_53xx(struct rt2x00_dev *rt2x00dev) rt2800_bbp_write(rt2x00dev, 68, 0x0b); - rt2800_bbp_write(rt2x00dev, 69, 0x0d); - rt2800_bbp_write(rt2x00dev, 70, 0x06); + rt2800_bbp_write(rt2x00dev, 69, 0x12); rt2800_bbp_write(rt2x00dev, 73, 0x13); rt2800_bbp_write(rt2x00dev, 75, 0x46); rt2800_bbp_write(rt2x00dev, 76, 0x28); rt2800_bbp_write(rt2x00dev, 77, 0x59); + rt2800_bbp_write(rt2x00dev, 70, 0x0a); + rt2800_bbp_write(rt2x00dev, 79, 0x13); rt2800_bbp_write(rt2x00dev, 80, 0x05); rt2800_bbp_write(rt2x00dev, 81, 0x33); @@ -5510,7 +5511,6 @@ static void rt2800_init_bbp_53xx(struct rt2x00_dev *rt2x00dev) if (rt2x00_rt(rt2x00dev, RT5392)) { rt2800_bbp_write(rt2x00dev, 134, 0xd0); rt2800_bbp_write(rt2x00dev, 135, 0xf6); - rt2800_bbp_write(rt2x00dev, 148, 0x84); } rt2800_disable_unused_dac_adc(rt2x00dev); diff --git a/drivers/net/wireless/ti/wl1251/rx.c b/drivers/net/wireless/ti/wl1251/rx.c index 123c4bb50e0a..cde0eaf99714 100644 --- a/drivers/net/wireless/ti/wl1251/rx.c +++ b/drivers/net/wireless/ti/wl1251/rx.c @@ -180,7 +180,7 @@ static void wl1251_rx_body(struct wl1251 *wl, wl1251_mem_read(wl, rx_packet_ring_addr, rx_buffer, length); /* The actual length doesn't include the target's alignment */ - skb->len = desc->length - PLCP_HEADER_LENGTH; + skb_trim(skb, desc->length - PLCP_HEADER_LENGTH); fc = (u16 *)skb->data; diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c index 7669d49a67e2..301cc037fda8 100644 --- a/drivers/net/xen-netback/interface.c +++ b/drivers/net/xen-netback/interface.c @@ -132,8 +132,7 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev) /* If the skb is GSO then we'll also need an extra slot for the * metadata. */ - if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || - skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) + if (skb_is_gso(skb)) min_slots_needed++; /* If the skb can't possibly fit in the remaining slots diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index e5284bca2d90..438d0c09b7e6 100644 --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -240,7 +240,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, struct gnttab_copy *copy_gop; struct xenvif_rx_meta *meta; unsigned long bytes; - int gso_type; + int gso_type = XEN_NETIF_GSO_TYPE_NONE; /* Data must not cross a page boundary. */ BUG_ON(size + offset > PAGE_SIZE<<compound_order(page)); @@ -299,12 +299,12 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, } /* Leave a gap for the GSO descriptor. 
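
Two related cleanups above guard against stale metadata. In mwifiex_save_curr_bcn() a copy-paste bug assigned the recomputed VHT pointers to the *HT* fields, leaving bcn_vht_cap/bcn_vht_oper pointing into the old beacon buffer. And in wl1251, assigning skb->len directly shortens the frame on paper but leaves skb->tail where it was; skb_trim() adjusts both together. Contrast:

/* Before: len changes, but tail still points past the new end, so
 * the skb's internal bookkeeping is inconsistent: */
skb->len = desc->length - PLCP_HEADER_LENGTH;

/* After: skb_trim() updates skb->len and skb->tail consistently */
skb_trim(skb, desc->length - PLCP_HEADER_LENGTH);
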
*/ - if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) - gso_type = XEN_NETIF_GSO_TYPE_TCPV4; - else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) - gso_type = XEN_NETIF_GSO_TYPE_TCPV6; - else - gso_type = XEN_NETIF_GSO_TYPE_NONE; + if (skb_is_gso(skb)) { + if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) + gso_type = XEN_NETIF_GSO_TYPE_TCPV4; + else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) + gso_type = XEN_NETIF_GSO_TYPE_TCPV6; + } if (*head && ((1 << gso_type) & vif->gso_mask)) vif->rx.req_cons++; @@ -338,19 +338,15 @@ static int xenvif_gop_skb(struct sk_buff *skb, int head = 1; int old_meta_prod; int gso_type; - int gso_size; old_meta_prod = npo->meta_prod; - if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) { - gso_type = XEN_NETIF_GSO_TYPE_TCPV4; - gso_size = skb_shinfo(skb)->gso_size; - } else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) { - gso_type = XEN_NETIF_GSO_TYPE_TCPV6; - gso_size = skb_shinfo(skb)->gso_size; - } else { - gso_type = XEN_NETIF_GSO_TYPE_NONE; - gso_size = 0; + gso_type = XEN_NETIF_GSO_TYPE_NONE; + if (skb_is_gso(skb)) { + if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) + gso_type = XEN_NETIF_GSO_TYPE_TCPV4; + else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) + gso_type = XEN_NETIF_GSO_TYPE_TCPV6; } /* Set up a GSO prefix descriptor, if necessary */ @@ -358,7 +354,7 @@ static int xenvif_gop_skb(struct sk_buff *skb, req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++); meta = npo->meta + npo->meta_prod++; meta->gso_type = gso_type; - meta->gso_size = gso_size; + meta->gso_size = skb_shinfo(skb)->gso_size; meta->size = 0; meta->id = req->id; } @@ -368,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb, if ((1 << gso_type) & vif->gso_mask) { meta->gso_type = gso_type; - meta->gso_size = gso_size; + meta->gso_size = skb_shinfo(skb)->gso_size; } else { meta->gso_type = XEN_NETIF_GSO_TYPE_NONE; meta->gso_size = 0; @@ -500,8 +496,9 @@ static void xenvif_rx_action(struct xenvif *vif) size = skb_frag_size(&skb_shinfo(skb)->frags[i]); max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE); } - if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || - skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) + if (skb_is_gso(skb) && + (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || + skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)) max_slots_needed++; /* If the skb may not fit then bail out now */ diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c index 00660cc502c5..38901665c770 100644 --- a/drivers/pci/bus.c +++ b/drivers/pci/bus.c @@ -162,8 +162,6 @@ static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res, avail = *r; pci_clip_resource_to_region(bus, &avail, region); - if (!resource_size(&avail)) - continue; /* * "min" is typically PCIBIOS_MIN_IO or PCIBIOS_MIN_MEM to diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 6b05f6134b68..fdbc294821e6 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -1192,6 +1192,9 @@ static int do_pci_enable_device(struct pci_dev *dev, int bars) return err; pci_fixup_device(pci_fixup_enable, dev); + if (dev->msi_enabled || dev->msix_enabled) + return 0; + pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin); if (pin) { pci_read_config_word(dev, PCI_COMMAND, &cmd); diff --git a/drivers/pnp/pnpacpi/rsparser.c b/drivers/pnp/pnpacpi/rsparser.c index 167f3d00c916..66977ebf13b3 100644 --- a/drivers/pnp/pnpacpi/rsparser.c +++ b/drivers/pnp/pnpacpi/rsparser.c @@ -183,9 +183,7 @@ static acpi_status pnpacpi_allocated_resource(struct acpi_resource *res, struct resource r = {0}; int i, flags; - if (acpi_dev_resource_memory(res, &r) - || 
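
The do_pci_enable_device() hunk above returns early when MSI or MSI-X is already enabled: the code that follows manages legacy INTx state, and re-enabling the INTx line behind an MSI-mode device's back is unwanted. In outline (a fragment; the PCI_COMMAND_INTX_DISABLE handling is assumed from the surrounding upstream code, which only shows the register reads here):

if (dev->msi_enabled || dev->msix_enabled)
        return 0;       /* MSI owns interrupt routing; leave INTx alone */

pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
if (pin) {
        pci_read_config_word(dev, PCI_COMMAND, &cmd);
        /* assumed context: re-enable INTx if firmware left it masked */
        if (cmd & PCI_COMMAND_INTX_DISABLE)
                pci_write_config_word(dev, PCI_COMMAND,
                                      cmd & ~PCI_COMMAND_INTX_DISABLE);
}
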
acpi_dev_resource_io(res, &r) - || acpi_dev_resource_address_space(res, &r) + if (acpi_dev_resource_address_space(res, &r) || acpi_dev_resource_ext_address_space(res, &r)) { pnp_add_resource(dev, &r); return AE_OK; @@ -217,6 +215,17 @@ static acpi_status pnpacpi_allocated_resource(struct acpi_resource *res, } switch (res->type) { + case ACPI_RESOURCE_TYPE_MEMORY24: + case ACPI_RESOURCE_TYPE_MEMORY32: + case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: + if (acpi_dev_resource_memory(res, &r)) + pnp_add_resource(dev, &r); + break; + case ACPI_RESOURCE_TYPE_IO: + case ACPI_RESOURCE_TYPE_FIXED_IO: + if (acpi_dev_resource_io(res, &r)) + pnp_add_resource(dev, &r); + break; case ACPI_RESOURCE_TYPE_DMA: dma = &res->data.dma; if (dma->channel_count > 0 && dma->channels[0] != (u8) -1) diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c index 1f375051483a..5642a9b250c2 100644 --- a/drivers/scsi/be2iscsi/be_main.c +++ b/drivers/scsi/be2iscsi/be_main.c @@ -325,7 +325,7 @@ static int beiscsi_eh_device_reset(struct scsi_cmnd *sc) if (!abrt_task->sc || abrt_task->state == ISCSI_TASK_FREE) continue; - if (abrt_task->sc->device->lun != abrt_task->sc->device->lun) + if (sc->device->lun != abrt_task->sc->device->lun) continue; /* Invalidate WRB Posted for this Task */ diff --git a/drivers/scsi/bnx2fc/bnx2fc_io.c b/drivers/scsi/bnx2fc/bnx2fc_io.c index ed880891cb7c..e9279a8c1e1c 100644 --- a/drivers/scsi/bnx2fc/bnx2fc_io.c +++ b/drivers/scsi/bnx2fc/bnx2fc_io.c @@ -594,13 +594,13 @@ static void bnx2fc_free_mp_resc(struct bnx2fc_cmd *io_req) mp_req->mp_resp_bd = NULL; } if (mp_req->req_buf) { - dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_free_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, mp_req->req_buf, mp_req->req_buf_dma); mp_req->req_buf = NULL; } if (mp_req->resp_buf) { - dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_free_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, mp_req->resp_buf, mp_req->resp_buf_dma); mp_req->resp_buf = NULL; @@ -622,7 +622,7 @@ int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req) mp_req->req_len = sizeof(struct fcp_cmnd); io_req->data_xfer_len = mp_req->req_len; - mp_req->req_buf = dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE, + mp_req->req_buf = dma_alloc_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, &mp_req->req_buf_dma, GFP_ATOMIC); if (!mp_req->req_buf) { @@ -631,7 +631,7 @@ int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req) return FAILED; } - mp_req->resp_buf = dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE, + mp_req->resp_buf = dma_alloc_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, &mp_req->resp_buf_dma, GFP_ATOMIC); if (!mp_req->resp_buf) { @@ -639,8 +639,8 @@ int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req) bnx2fc_free_mp_resc(io_req); return FAILED; } - memset(mp_req->req_buf, 0, PAGE_SIZE); - memset(mp_req->resp_buf, 0, PAGE_SIZE); + memset(mp_req->req_buf, 0, CNIC_PAGE_SIZE); + memset(mp_req->resp_buf, 0, CNIC_PAGE_SIZE); /* Allocate and map mp_req_bd and mp_resp_bd */ sz = sizeof(struct fcoe_bd_ctx); @@ -665,7 +665,7 @@ int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req) mp_req_bd = mp_req->mp_req_bd; mp_req_bd->buf_addr_lo = (u32)addr & 0xffffffff; mp_req_bd->buf_addr_hi = (u32)((u64)addr >> 32); - mp_req_bd->buf_len = PAGE_SIZE; + mp_req_bd->buf_len = CNIC_PAGE_SIZE; mp_req_bd->flags = 0; /* @@ -677,7 +677,7 @@ int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req) addr = mp_req->resp_buf_dma; mp_resp_bd->buf_addr_lo = (u32)addr & 0xffffffff; mp_resp_bd->buf_addr_hi = (u32)((u64)addr >> 32); - mp_resp_bd->buf_len = PAGE_SIZE; + mp_resp_bd->buf_len 
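
The one-line be2iscsi fix above cures a tautology: the old filter compared abrt_task->sc->device->lun against itself, which is never unequal, so the LUN check skipped nothing and a device reset invalidated tasks on every LUN of the session. The intent was to match against the reset command's own LUN:

/* Before: x != x is always false -- the filter was a no-op */
if (abrt_task->sc->device->lun != abrt_task->sc->device->lun)
        continue;

/* After: skip tasks that belong to a different LUN */
if (sc->device->lun != abrt_task->sc->device->lun)
        continue;

Compilers and static checkers with -Wtautological-compare flag exactly this pattern.
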
= CNIC_PAGE_SIZE; mp_resp_bd->flags = 0; return SUCCESS; diff --git a/drivers/scsi/bnx2fc/bnx2fc_tgt.c b/drivers/scsi/bnx2fc/bnx2fc_tgt.c index 4d93177dfb53..d9bae5672273 100644 --- a/drivers/scsi/bnx2fc/bnx2fc_tgt.c +++ b/drivers/scsi/bnx2fc/bnx2fc_tgt.c @@ -673,7 +673,8 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, /* Allocate and map SQ */ tgt->sq_mem_size = tgt->max_sqes * BNX2FC_SQ_WQE_SIZE; - tgt->sq_mem_size = (tgt->sq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; + tgt->sq_mem_size = (tgt->sq_mem_size + (CNIC_PAGE_SIZE - 1)) & + CNIC_PAGE_MASK; tgt->sq = dma_alloc_coherent(&hba->pcidev->dev, tgt->sq_mem_size, &tgt->sq_dma, GFP_KERNEL); @@ -686,7 +687,8 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, /* Allocate and map CQ */ tgt->cq_mem_size = tgt->max_cqes * BNX2FC_CQ_WQE_SIZE; - tgt->cq_mem_size = (tgt->cq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; + tgt->cq_mem_size = (tgt->cq_mem_size + (CNIC_PAGE_SIZE - 1)) & + CNIC_PAGE_MASK; tgt->cq = dma_alloc_coherent(&hba->pcidev->dev, tgt->cq_mem_size, &tgt->cq_dma, GFP_KERNEL); @@ -699,7 +701,8 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, /* Allocate and map RQ and RQ PBL */ tgt->rq_mem_size = tgt->max_rqes * BNX2FC_RQ_WQE_SIZE; - tgt->rq_mem_size = (tgt->rq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; + tgt->rq_mem_size = (tgt->rq_mem_size + (CNIC_PAGE_SIZE - 1)) & + CNIC_PAGE_MASK; tgt->rq = dma_alloc_coherent(&hba->pcidev->dev, tgt->rq_mem_size, &tgt->rq_dma, GFP_KERNEL); @@ -710,8 +713,9 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, } memset(tgt->rq, 0, tgt->rq_mem_size); - tgt->rq_pbl_size = (tgt->rq_mem_size / PAGE_SIZE) * sizeof(void *); - tgt->rq_pbl_size = (tgt->rq_pbl_size + (PAGE_SIZE - 1)) & PAGE_MASK; + tgt->rq_pbl_size = (tgt->rq_mem_size / CNIC_PAGE_SIZE) * sizeof(void *); + tgt->rq_pbl_size = (tgt->rq_pbl_size + (CNIC_PAGE_SIZE - 1)) & + CNIC_PAGE_MASK; tgt->rq_pbl = dma_alloc_coherent(&hba->pcidev->dev, tgt->rq_pbl_size, &tgt->rq_pbl_dma, GFP_KERNEL); @@ -722,7 +726,7 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, } memset(tgt->rq_pbl, 0, tgt->rq_pbl_size); - num_pages = tgt->rq_mem_size / PAGE_SIZE; + num_pages = tgt->rq_mem_size / CNIC_PAGE_SIZE; page = tgt->rq_dma; pbl = (u32 *)tgt->rq_pbl; @@ -731,13 +735,13 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, pbl++; *pbl = (u32)((u64)page >> 32); pbl++; - page += PAGE_SIZE; + page += CNIC_PAGE_SIZE; } /* Allocate and map XFERQ */ tgt->xferq_mem_size = tgt->max_sqes * BNX2FC_XFERQ_WQE_SIZE; - tgt->xferq_mem_size = (tgt->xferq_mem_size + (PAGE_SIZE - 1)) & - PAGE_MASK; + tgt->xferq_mem_size = (tgt->xferq_mem_size + (CNIC_PAGE_SIZE - 1)) & + CNIC_PAGE_MASK; tgt->xferq = dma_alloc_coherent(&hba->pcidev->dev, tgt->xferq_mem_size, &tgt->xferq_dma, GFP_KERNEL); @@ -750,8 +754,8 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, /* Allocate and map CONFQ & CONFQ PBL */ tgt->confq_mem_size = tgt->max_sqes * BNX2FC_CONFQ_WQE_SIZE; - tgt->confq_mem_size = (tgt->confq_mem_size + (PAGE_SIZE - 1)) & - PAGE_MASK; + tgt->confq_mem_size = (tgt->confq_mem_size + (CNIC_PAGE_SIZE - 1)) & + CNIC_PAGE_MASK; tgt->confq = dma_alloc_coherent(&hba->pcidev->dev, tgt->confq_mem_size, &tgt->confq_dma, GFP_KERNEL); @@ -763,9 +767,9 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, memset(tgt->confq, 0, tgt->confq_mem_size); tgt->confq_pbl_size = - (tgt->confq_mem_size / PAGE_SIZE) * sizeof(void *); + (tgt->confq_mem_size / CNIC_PAGE_SIZE) * sizeof(void *); tgt->confq_pbl_size = - 
(tgt->confq_pbl_size + (PAGE_SIZE - 1)) & PAGE_MASK; + (tgt->confq_pbl_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK; tgt->confq_pbl = dma_alloc_coherent(&hba->pcidev->dev, tgt->confq_pbl_size, @@ -777,7 +781,7 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, } memset(tgt->confq_pbl, 0, tgt->confq_pbl_size); - num_pages = tgt->confq_mem_size / PAGE_SIZE; + num_pages = tgt->confq_mem_size / CNIC_PAGE_SIZE; page = tgt->confq_dma; pbl = (u32 *)tgt->confq_pbl; @@ -786,7 +790,7 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, pbl++; *pbl = (u32)((u64)page >> 32); pbl++; - page += PAGE_SIZE; + page += CNIC_PAGE_SIZE; } /* Allocate and map ConnDB */ @@ -805,8 +809,8 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, /* Allocate and map LCQ */ tgt->lcq_mem_size = (tgt->max_sqes + 8) * BNX2FC_SQ_WQE_SIZE; - tgt->lcq_mem_size = (tgt->lcq_mem_size + (PAGE_SIZE - 1)) & - PAGE_MASK; + tgt->lcq_mem_size = (tgt->lcq_mem_size + (CNIC_PAGE_SIZE - 1)) & + CNIC_PAGE_MASK; tgt->lcq = dma_alloc_coherent(&hba->pcidev->dev, tgt->lcq_mem_size, &tgt->lcq_dma, GFP_KERNEL); diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c index e4cf23df4b4f..b87a1933f880 100644 --- a/drivers/scsi/bnx2i/bnx2i_hwi.c +++ b/drivers/scsi/bnx2i/bnx2i_hwi.c @@ -61,7 +61,7 @@ static void bnx2i_adjust_qp_size(struct bnx2i_hba *hba) * yield integral num of page buffers */ /* adjust SQ */ - num_elements_per_pg = PAGE_SIZE / BNX2I_SQ_WQE_SIZE; + num_elements_per_pg = CNIC_PAGE_SIZE / BNX2I_SQ_WQE_SIZE; if (hba->max_sqes < num_elements_per_pg) hba->max_sqes = num_elements_per_pg; else if (hba->max_sqes % num_elements_per_pg) @@ -69,7 +69,7 @@ static void bnx2i_adjust_qp_size(struct bnx2i_hba *hba) ~(num_elements_per_pg - 1); /* adjust CQ */ - num_elements_per_pg = PAGE_SIZE / BNX2I_CQE_SIZE; + num_elements_per_pg = CNIC_PAGE_SIZE / BNX2I_CQE_SIZE; if (hba->max_cqes < num_elements_per_pg) hba->max_cqes = num_elements_per_pg; else if (hba->max_cqes % num_elements_per_pg) @@ -77,7 +77,7 @@ static void bnx2i_adjust_qp_size(struct bnx2i_hba *hba) ~(num_elements_per_pg - 1); /* adjust RQ */ - num_elements_per_pg = PAGE_SIZE / BNX2I_RQ_WQE_SIZE; + num_elements_per_pg = CNIC_PAGE_SIZE / BNX2I_RQ_WQE_SIZE; if (hba->max_rqes < num_elements_per_pg) hba->max_rqes = num_elements_per_pg; else if (hba->max_rqes % num_elements_per_pg) @@ -959,7 +959,7 @@ static void setup_qp_page_tables(struct bnx2i_endpoint *ep) /* SQ page table */ memset(ep->qp.sq_pgtbl_virt, 0, ep->qp.sq_pgtbl_size); - num_pages = ep->qp.sq_mem_size / PAGE_SIZE; + num_pages = ep->qp.sq_mem_size / CNIC_PAGE_SIZE; page = ep->qp.sq_phys; if (cnic_dev_10g) @@ -973,7 +973,7 @@ static void setup_qp_page_tables(struct bnx2i_endpoint *ep) ptbl++; *ptbl = (u32) ((u64) page >> 32); ptbl++; - page += PAGE_SIZE; + page += CNIC_PAGE_SIZE; } else { /* PTE is written in big endian format for * 5706/5708/5709 devices */ @@ -981,13 +981,13 @@ static void setup_qp_page_tables(struct bnx2i_endpoint *ep) ptbl++; *ptbl = (u32) page; ptbl++; - page += PAGE_SIZE; + page += CNIC_PAGE_SIZE; } } /* RQ page table */ memset(ep->qp.rq_pgtbl_virt, 0, ep->qp.rq_pgtbl_size); - num_pages = ep->qp.rq_mem_size / PAGE_SIZE; + num_pages = ep->qp.rq_mem_size / CNIC_PAGE_SIZE; page = ep->qp.rq_phys; if (cnic_dev_10g) @@ -1001,7 +1001,7 @@ static void setup_qp_page_tables(struct bnx2i_endpoint *ep) ptbl++; *ptbl = (u32) ((u64) page >> 32); ptbl++; - page += PAGE_SIZE; + page += CNIC_PAGE_SIZE; } else { /* PTE is written in big endian format for * 5706/5708/5709 
devices */ @@ -1009,13 +1009,13 @@ static void setup_qp_page_tables(struct bnx2i_endpoint *ep) ptbl++; *ptbl = (u32) page; ptbl++; - page += PAGE_SIZE; + page += CNIC_PAGE_SIZE; } } /* CQ page table */ memset(ep->qp.cq_pgtbl_virt, 0, ep->qp.cq_pgtbl_size); - num_pages = ep->qp.cq_mem_size / PAGE_SIZE; + num_pages = ep->qp.cq_mem_size / CNIC_PAGE_SIZE; page = ep->qp.cq_phys; if (cnic_dev_10g) @@ -1029,7 +1029,7 @@ static void setup_qp_page_tables(struct bnx2i_endpoint *ep) ptbl++; *ptbl = (u32) ((u64) page >> 32); ptbl++; - page += PAGE_SIZE; + page += CNIC_PAGE_SIZE; } else { /* PTE is written in big endian format for * 5706/5708/5709 devices */ @@ -1037,7 +1037,7 @@ static void setup_qp_page_tables(struct bnx2i_endpoint *ep) ptbl++; *ptbl = (u32) page; ptbl++; - page += PAGE_SIZE; + page += CNIC_PAGE_SIZE; } } } @@ -1064,11 +1064,11 @@ int bnx2i_alloc_qp_resc(struct bnx2i_hba *hba, struct bnx2i_endpoint *ep) /* Allocate page table memory for SQ which is page aligned */ ep->qp.sq_mem_size = hba->max_sqes * BNX2I_SQ_WQE_SIZE; ep->qp.sq_mem_size = - (ep->qp.sq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; + (ep->qp.sq_mem_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK; ep->qp.sq_pgtbl_size = - (ep->qp.sq_mem_size / PAGE_SIZE) * sizeof(void *); + (ep->qp.sq_mem_size / CNIC_PAGE_SIZE) * sizeof(void *); ep->qp.sq_pgtbl_size = - (ep->qp.sq_pgtbl_size + (PAGE_SIZE - 1)) & PAGE_MASK; + (ep->qp.sq_pgtbl_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK; ep->qp.sq_pgtbl_virt = dma_alloc_coherent(&hba->pcidev->dev, ep->qp.sq_pgtbl_size, @@ -1101,11 +1101,11 @@ int bnx2i_alloc_qp_resc(struct bnx2i_hba *hba, struct bnx2i_endpoint *ep) /* Allocate page table memory for CQ which is page aligned */ ep->qp.cq_mem_size = hba->max_cqes * BNX2I_CQE_SIZE; ep->qp.cq_mem_size = - (ep->qp.cq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; + (ep->qp.cq_mem_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK; ep->qp.cq_pgtbl_size = - (ep->qp.cq_mem_size / PAGE_SIZE) * sizeof(void *); + (ep->qp.cq_mem_size / CNIC_PAGE_SIZE) * sizeof(void *); ep->qp.cq_pgtbl_size = - (ep->qp.cq_pgtbl_size + (PAGE_SIZE - 1)) & PAGE_MASK; + (ep->qp.cq_pgtbl_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK; ep->qp.cq_pgtbl_virt = dma_alloc_coherent(&hba->pcidev->dev, ep->qp.cq_pgtbl_size, @@ -1144,11 +1144,11 @@ int bnx2i_alloc_qp_resc(struct bnx2i_hba *hba, struct bnx2i_endpoint *ep) /* Allocate page table memory for RQ which is page aligned */ ep->qp.rq_mem_size = hba->max_rqes * BNX2I_RQ_WQE_SIZE; ep->qp.rq_mem_size = - (ep->qp.rq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; + (ep->qp.rq_mem_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK; ep->qp.rq_pgtbl_size = - (ep->qp.rq_mem_size / PAGE_SIZE) * sizeof(void *); + (ep->qp.rq_mem_size / CNIC_PAGE_SIZE) * sizeof(void *); ep->qp.rq_pgtbl_size = - (ep->qp.rq_pgtbl_size + (PAGE_SIZE - 1)) & PAGE_MASK; + (ep->qp.rq_pgtbl_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK; ep->qp.rq_pgtbl_virt = dma_alloc_coherent(&hba->pcidev->dev, ep->qp.rq_pgtbl_size, @@ -1270,7 +1270,7 @@ int bnx2i_send_fw_iscsi_init_msg(struct bnx2i_hba *hba) bnx2i_adjust_qp_size(hba); iscsi_init.flags = - ISCSI_PAGE_SIZE_4K << ISCSI_KWQE_INIT1_PAGE_SIZE_SHIFT; + (CNIC_PAGE_BITS - 8) << ISCSI_KWQE_INIT1_PAGE_SIZE_SHIFT; if (en_tcp_dack) iscsi_init.flags |= ISCSI_KWQE_INIT1_DELAYED_ACK_ENABLE; iscsi_init.reserved0 = 0; @@ -1288,15 +1288,15 @@ int bnx2i_send_fw_iscsi_init_msg(struct bnx2i_hba *hba) ((hba->num_ccell & 0xFFFF) | (hba->max_sqes << 16)); iscsi_init.num_ccells_per_conn = hba->num_ccell; iscsi_init.num_tasks_per_conn = hba->max_sqes; - 
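
The bnx2fc and bnx2i hunks above and below all apply one idiom: queue and page-table sizes are rounded up to whole CNIC pages, where CNIC_PAGE_SIZE is the device's fixed 4K ring page rather than the host PAGE_SIZE (which can be 64K on some architectures, leaving the old arithmetic mis-sized). A minimal userspace sketch of the rounding, with every name here illustrative rather than taken from the driver:

    #include <stdio.h>

    #define CNIC_PAGE_SIZE 4096UL                  /* assumed fixed 4K device page */
    #define CNIC_PAGE_MASK (~(CNIC_PAGE_SIZE - 1))

    /* Round a queue size up to whole device pages, the same
     * (size + page - 1) & mask idiom the hunks above convert to. */
    static unsigned long cnic_align(unsigned long size)
    {
        return (size + CNIC_PAGE_SIZE - 1) & CNIC_PAGE_MASK;
    }

    int main(void)
    {
        printf("%lu -> %lu\n", 6000UL, cnic_align(6000UL));    /* 6000 -> 8192 */
        return 0;
    }

The mask form works only because the page size is a power of two; the same assumption underlies the num_elements_per_pg and bnx2i_power_of2() math nearby.
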
iscsi_init.sq_wqes_per_page = PAGE_SIZE / BNX2I_SQ_WQE_SIZE; + iscsi_init.sq_wqes_per_page = CNIC_PAGE_SIZE / BNX2I_SQ_WQE_SIZE; iscsi_init.sq_num_wqes = hba->max_sqes; iscsi_init.cq_log_wqes_per_page = - (u8) bnx2i_power_of2(PAGE_SIZE / BNX2I_CQE_SIZE); + (u8) bnx2i_power_of2(CNIC_PAGE_SIZE / BNX2I_CQE_SIZE); iscsi_init.cq_num_wqes = hba->max_cqes; iscsi_init.cq_num_pages = (hba->max_cqes * BNX2I_CQE_SIZE + - (PAGE_SIZE - 1)) / PAGE_SIZE; + (CNIC_PAGE_SIZE - 1)) / CNIC_PAGE_SIZE; iscsi_init.sq_num_pages = (hba->max_sqes * BNX2I_SQ_WQE_SIZE + - (PAGE_SIZE - 1)) / PAGE_SIZE; + (CNIC_PAGE_SIZE - 1)) / CNIC_PAGE_SIZE; iscsi_init.rq_buffer_size = BNX2I_RQ_WQE_SIZE; iscsi_init.rq_num_wqes = hba->max_rqes; diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c index 854dad7d5b03..c8b0aff5bbd4 100644 --- a/drivers/scsi/bnx2i/bnx2i_iscsi.c +++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c @@ -525,7 +525,7 @@ static int bnx2i_setup_mp_bdt(struct bnx2i_hba *hba) struct iscsi_bd *mp_bdt; u64 addr; - hba->mp_bd_tbl = dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE, + hba->mp_bd_tbl = dma_alloc_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, &hba->mp_bd_dma, GFP_KERNEL); if (!hba->mp_bd_tbl) { printk(KERN_ERR "unable to allocate Middle Path BDT\n"); @@ -533,11 +533,12 @@ static int bnx2i_setup_mp_bdt(struct bnx2i_hba *hba) goto out; } - hba->dummy_buffer = dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE, + hba->dummy_buffer = dma_alloc_coherent(&hba->pcidev->dev, + CNIC_PAGE_SIZE, &hba->dummy_buf_dma, GFP_KERNEL); if (!hba->dummy_buffer) { printk(KERN_ERR "unable to alloc Middle Path Dummy Buffer\n"); - dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_free_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, hba->mp_bd_tbl, hba->mp_bd_dma); hba->mp_bd_tbl = NULL; rc = -1; @@ -548,7 +549,7 @@ static int bnx2i_setup_mp_bdt(struct bnx2i_hba *hba) addr = (unsigned long) hba->dummy_buf_dma; mp_bdt->buffer_addr_lo = addr & 0xffffffff; mp_bdt->buffer_addr_hi = addr >> 32; - mp_bdt->buffer_length = PAGE_SIZE; + mp_bdt->buffer_length = CNIC_PAGE_SIZE; mp_bdt->flags = ISCSI_BD_LAST_IN_BD_CHAIN | ISCSI_BD_FIRST_IN_BD_CHAIN; out: @@ -565,12 +566,12 @@ out: static void bnx2i_free_mp_bdt(struct bnx2i_hba *hba) { if (hba->mp_bd_tbl) { - dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_free_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, hba->mp_bd_tbl, hba->mp_bd_dma); hba->mp_bd_tbl = NULL; } if (hba->dummy_buffer) { - dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_free_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, hba->dummy_buffer, hba->dummy_buf_dma); hba->dummy_buffer = NULL; } @@ -934,14 +935,14 @@ static void bnx2i_conn_free_login_resources(struct bnx2i_hba *hba, struct bnx2i_conn *bnx2i_conn) { if (bnx2i_conn->gen_pdu.resp_bd_tbl) { - dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_free_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, bnx2i_conn->gen_pdu.resp_bd_tbl, bnx2i_conn->gen_pdu.resp_bd_dma); bnx2i_conn->gen_pdu.resp_bd_tbl = NULL; } if (bnx2i_conn->gen_pdu.req_bd_tbl) { - dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_free_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, bnx2i_conn->gen_pdu.req_bd_tbl, bnx2i_conn->gen_pdu.req_bd_dma); bnx2i_conn->gen_pdu.req_bd_tbl = NULL; @@ -998,13 +999,13 @@ static int bnx2i_conn_alloc_login_resources(struct bnx2i_hba *hba, bnx2i_conn->gen_pdu.resp_wr_ptr = bnx2i_conn->gen_pdu.resp_buf; bnx2i_conn->gen_pdu.req_bd_tbl = - dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_alloc_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, 
&bnx2i_conn->gen_pdu.req_bd_dma, GFP_KERNEL); if (bnx2i_conn->gen_pdu.req_bd_tbl == NULL) goto login_req_bd_tbl_failure; bnx2i_conn->gen_pdu.resp_bd_tbl = - dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_alloc_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, &bnx2i_conn->gen_pdu.resp_bd_dma, GFP_KERNEL); if (bnx2i_conn->gen_pdu.resp_bd_tbl == NULL) @@ -1013,7 +1014,7 @@ static int bnx2i_conn_alloc_login_resources(struct bnx2i_hba *hba, return 0; login_resp_bd_tbl_failure: - dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, + dma_free_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE, bnx2i_conn->gen_pdu.req_bd_tbl, bnx2i_conn->gen_pdu.req_bd_dma); bnx2i_conn->gen_pdu.req_bd_tbl = NULL; diff --git a/drivers/scsi/isci/host.h b/drivers/scsi/isci/host.h index 4911310a38f5..22a9bb1abae1 100644 --- a/drivers/scsi/isci/host.h +++ b/drivers/scsi/isci/host.h @@ -311,9 +311,8 @@ static inline struct Scsi_Host *to_shost(struct isci_host *ihost) } #define for_each_isci_host(id, ihost, pdev) \ - for (id = 0, ihost = to_pci_info(pdev)->hosts[id]; \ - id < ARRAY_SIZE(to_pci_info(pdev)->hosts) && ihost; \ - ihost = to_pci_info(pdev)->hosts[++id]) + for (id = 0; id < SCI_MAX_CONTROLLERS && \ + (ihost = to_pci_info(pdev)->hosts[id]); id++) static inline void wait_for_start(struct isci_host *ihost) { diff --git a/drivers/scsi/isci/port_config.c b/drivers/scsi/isci/port_config.c index 85c77f6b802b..ac879745ef80 100644 --- a/drivers/scsi/isci/port_config.c +++ b/drivers/scsi/isci/port_config.c @@ -615,13 +615,6 @@ static void sci_apc_agent_link_up(struct isci_host *ihost, SCIC_SDS_APC_WAIT_LINK_UP_NOTIFICATION); } else { /* the phy is already the part of the port */ - u32 port_state = iport->sm.current_state_id; - - /* if the PORT'S state is resetting then the link up is from - * port hard reset in this case, we need to tell the port - * that link up is recieved - */ - BUG_ON(port_state != SCI_PORT_RESETTING); port_agent->phy_ready_mask |= 1 << phy_index; sci_port_link_up(iport, iphy); } diff --git a/drivers/scsi/isci/task.c b/drivers/scsi/isci/task.c index 0d30ca849e8f..5d6fda72d659 100644 --- a/drivers/scsi/isci/task.c +++ b/drivers/scsi/isci/task.c @@ -801,7 +801,7 @@ int isci_task_I_T_nexus_reset(struct domain_device *dev) /* XXX: need to cleanup any ireqs targeting this * domain_device */ - ret = TMF_RESP_FUNC_COMPLETE; + ret = -ENODEV; goto out; } diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h index e1fe95ef23e1..266724b6b899 100644 --- a/drivers/scsi/qla2xxx/qla_def.h +++ b/drivers/scsi/qla2xxx/qla_def.h @@ -2996,8 +2996,7 @@ struct qla_hw_data { IS_QLA82XX(ha) || IS_QLA83XX(ha) || \ IS_QLA8044(ha)) #define IS_MSIX_NACK_CAPABLE(ha) (IS_QLA81XX(ha) || IS_QLA83XX(ha)) -#define IS_NOPOLLING_TYPE(ha) ((IS_QLA25XX(ha) || IS_QLA81XX(ha) || \ - IS_QLA83XX(ha)) && (ha)->flags.msix_enabled) +#define IS_NOPOLLING_TYPE(ha) (IS_QLA81XX(ha) && (ha)->flags.msix_enabled) #define IS_FAC_REQUIRED(ha) (IS_QLA81XX(ha) || IS_QLA83XX(ha)) #define IS_NOCACHE_VPD_TYPE(ha) (IS_QLA81XX(ha) || IS_QLA83XX(ha)) #define IS_ALOGIO_CAPABLE(ha) (IS_QLA23XX(ha) || IS_FWI2_CAPABLE(ha)) diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c index 9bc86b9e86b1..0a1dcb43d18b 100644 --- a/drivers/scsi/qla2xxx/qla_isr.c +++ b/drivers/scsi/qla2xxx/qla_isr.c @@ -2880,6 +2880,7 @@ static int qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp) { #define MIN_MSIX_COUNT 2 +#define ATIO_VECTOR 2 int i, ret; struct msix_entry *entries; struct qla_msix_entry *qentry; @@ -2936,34 +2937,47 @@ 
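
Stepping back to the isci hunk above: the old for_each_isci_host evaluated hosts[++id] in its increment expression before the bound was rechecked, so the final pass could read one slot past the array. The rewrite tests the index first and only then loads the slot. A small standalone model of the corrected idiom (MAX_HOSTS and the types are stand-ins, not the driver's):

    #include <stdio.h>

    #define MAX_HOSTS 2    /* stand-in for SCI_MAX_CONTROLLERS */

    struct host { int id; };

    /* Corrected idiom: test the index bound before touching the slot,
     * so hosts[MAX_HOSTS] is never read; the assignment inside the
     * condition still stops at the first NULL entry. */
    #define for_each_host(i, h, arr) \
        for ((i) = 0; (i) < MAX_HOSTS && ((h) = (arr)[(i)]) != NULL; (i)++)

    int main(void)
    {
        struct host a = { 0 };
        struct host *hosts[MAX_HOSTS] = { &a, NULL };
        struct host *h;
        int i;

        for_each_host(i, h, hosts)
            printf("host %d\n", h->id);
        return 0;
    }
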
msix_failed: } /* Enable MSI-X vectors for the base queue */ - for (i = 0; i < ha->msix_count; i++) { + for (i = 0; i < 2; i++) { qentry = &ha->msix_entries[i]; - if (QLA_TGT_MODE_ENABLED() && IS_ATIO_MSIX_CAPABLE(ha)) { - ret = request_irq(qentry->vector, - qla83xx_msix_entries[i].handler, - 0, qla83xx_msix_entries[i].name, rsp); - } else if (IS_P3P_TYPE(ha)) { + if (IS_P3P_TYPE(ha)) ret = request_irq(qentry->vector, qla82xx_msix_entries[i].handler, 0, qla82xx_msix_entries[i].name, rsp); - } else { + else ret = request_irq(qentry->vector, msix_entries[i].handler, 0, msix_entries[i].name, rsp); - } - if (ret) { - ql_log(ql_log_fatal, vha, 0x00cb, - "MSI-X: unable to register handler -- %x/%d.\n", - qentry->vector, ret); - qla24xx_disable_msix(ha); - ha->mqenable = 0; - goto msix_out; - } + if (ret) + goto msix_register_fail; qentry->have_irq = 1; qentry->rsp = rsp; rsp->msix = qentry; } + /* + * If target mode is enable, also request the vector for the ATIO + * queue. + */ + if (QLA_TGT_MODE_ENABLED() && IS_ATIO_MSIX_CAPABLE(ha)) { + qentry = &ha->msix_entries[ATIO_VECTOR]; + ret = request_irq(qentry->vector, + qla83xx_msix_entries[ATIO_VECTOR].handler, + 0, qla83xx_msix_entries[ATIO_VECTOR].name, rsp); + qentry->have_irq = 1; + qentry->rsp = rsp; + rsp->msix = qentry; + } + +msix_register_fail: + if (ret) { + ql_log(ql_log_fatal, vha, 0x00cb, + "MSI-X: unable to register handler -- %x/%d.\n", + qentry->vector, ret); + qla24xx_disable_msix(ha); + ha->mqenable = 0; + goto msix_out; + } + /* Enable MSI-X vector for response queue update for queue 0 */ if (IS_QLA83XX(ha)) { if (ha->msixbase && ha->mqiobase && diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c index 17d740427240..9969fa1ef7c4 100644 --- a/drivers/scsi/storvsc_drv.c +++ b/drivers/scsi/storvsc_drv.c @@ -1419,6 +1419,9 @@ static void storvsc_device_destroy(struct scsi_device *sdevice) { struct stor_mem_pools *memp = sdevice->hostdata; + if (!memp) + return; + mempool_destroy(memp->request_mempool); kmem_cache_destroy(memp->request_pool); kfree(memp); diff --git a/drivers/tty/serial/sunhv.c b/drivers/tty/serial/sunhv.c index cf86e729532b..dc697cee248a 100644 --- a/drivers/tty/serial/sunhv.c +++ b/drivers/tty/serial/sunhv.c @@ -433,13 +433,10 @@ static void sunhv_console_write_paged(struct console *con, const char *s, unsign unsigned long flags; int locked = 1; - local_irq_save(flags); - if (port->sysrq) { - locked = 0; - } else if (oops_in_progress) { - locked = spin_trylock(&port->lock); - } else - spin_lock(&port->lock); + if (port->sysrq || oops_in_progress) + locked = spin_trylock_irqsave(&port->lock, flags); + else + spin_lock_irqsave(&port->lock, flags); while (n > 0) { unsigned long ra = __pa(con_write_page); @@ -470,8 +467,7 @@ static void sunhv_console_write_paged(struct console *con, const char *s, unsign } if (locked) - spin_unlock(&port->lock); - local_irq_restore(flags); + spin_unlock_irqrestore(&port->lock, flags); } static inline void sunhv_console_putchar(struct uart_port *port, char c) @@ -492,7 +488,10 @@ static void sunhv_console_write_bychar(struct console *con, const char *s, unsig unsigned long flags; int i, locked = 1; - local_irq_save(flags); + if (port->sysrq || oops_in_progress) + locked = spin_trylock_irqsave(&port->lock, flags); + else + spin_lock_irqsave(&port->lock, flags); if (port->sysrq) { locked = 0; } else if (oops_in_progress) { @@ -507,8 +506,7 @@ static void sunhv_console_write_bychar(struct console *con, const char *s, unsig } if (locked) - spin_unlock(&port->lock); - 
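
The sunhv, sunsab, sunsu and sunzilog hunks in this stretch make one and the same conversion: the console write path drops the separate local_irq_save() and takes the port lock with the irqsave variants instead, and it merely trylocks when sysrq handling or an oops is in flight, since the lock may already be held on this CPU. A rough pthread model of the control flow (a mutex stands in for the spinlock; disabling interrupts has no userspace analogue):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t port_lock = PTHREAD_MUTEX_INITIALIZER;
    static int oops_in_progress;    /* stand-in for the kernel global */

    /* During an oops only trylock, because this CPU may already hold
     * the port lock; otherwise lock normally.  Unlock only if the lock
     * was actually taken. */
    static void console_write(const char *s)
    {
        int locked = 1;

        if (oops_in_progress)
            locked = (pthread_mutex_trylock(&port_lock) == 0);
        else
            pthread_mutex_lock(&port_lock);

        fputs(s, stdout);    /* emit the characters */

        if (locked)
            pthread_mutex_unlock(&port_lock);
    }

    int main(void)
    {
        console_write("hello\n");
        return 0;
    }
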
local_irq_restore(flags); + spin_unlock_irqrestore(&port->lock, flags); } static struct console sunhv_console = { diff --git a/drivers/tty/serial/sunsab.c b/drivers/tty/serial/sunsab.c index 380fb5355cb2..5faa8e905e98 100644 --- a/drivers/tty/serial/sunsab.c +++ b/drivers/tty/serial/sunsab.c @@ -844,20 +844,16 @@ static void sunsab_console_write(struct console *con, const char *s, unsigned n) unsigned long flags; int locked = 1; - local_irq_save(flags); - if (up->port.sysrq) { - locked = 0; - } else if (oops_in_progress) { - locked = spin_trylock(&up->port.lock); - } else - spin_lock(&up->port.lock); + if (up->port.sysrq || oops_in_progress) + locked = spin_trylock_irqsave(&up->port.lock, flags); + else + spin_lock_irqsave(&up->port.lock, flags); uart_console_write(&up->port, s, n, sunsab_console_putchar); sunsab_tec_wait(up); if (locked) - spin_unlock(&up->port.lock); - local_irq_restore(flags); + spin_unlock_irqrestore(&up->port.lock, flags); } static int sunsab_console_setup(struct console *con, char *options) diff --git a/drivers/tty/serial/sunsu.c b/drivers/tty/serial/sunsu.c index db79b76f5c8e..9a0f24f83720 100644 --- a/drivers/tty/serial/sunsu.c +++ b/drivers/tty/serial/sunsu.c @@ -1295,13 +1295,10 @@ static void sunsu_console_write(struct console *co, const char *s, unsigned int ier; int locked = 1; - local_irq_save(flags); - if (up->port.sysrq) { - locked = 0; - } else if (oops_in_progress) { - locked = spin_trylock(&up->port.lock); - } else - spin_lock(&up->port.lock); + if (up->port.sysrq || oops_in_progress) + locked = spin_trylock_irqsave(&up->port.lock, flags); + else + spin_lock_irqsave(&up->port.lock, flags); /* * First save the UER then disable the interrupts @@ -1319,8 +1316,7 @@ static void sunsu_console_write(struct console *co, const char *s, serial_out(up, UART_IER, ier); if (locked) - spin_unlock(&up->port.lock); - local_irq_restore(flags); + spin_unlock_irqrestore(&up->port.lock, flags); } /* diff --git a/drivers/tty/serial/sunzilog.c b/drivers/tty/serial/sunzilog.c index 45a8c6aa5837..a2c40ed287d2 100644 --- a/drivers/tty/serial/sunzilog.c +++ b/drivers/tty/serial/sunzilog.c @@ -1195,20 +1195,16 @@ sunzilog_console_write(struct console *con, const char *s, unsigned int count) unsigned long flags; int locked = 1; - local_irq_save(flags); - if (up->port.sysrq) { - locked = 0; - } else if (oops_in_progress) { - locked = spin_trylock(&up->port.lock); - } else - spin_lock(&up->port.lock); + if (up->port.sysrq || oops_in_progress) + locked = spin_trylock_irqsave(&up->port.lock, flags); + else + spin_lock_irqsave(&up->port.lock, flags); uart_console_write(&up->port, s, count, sunzilog_putchar); udelay(2); if (locked) - spin_unlock(&up->port.lock); - local_irq_restore(flags); + spin_unlock_irqrestore(&up->port.lock, flags); } static int __init sunzilog_console_setup(struct console *con, char *options) diff --git a/drivers/tty/tty_ldsem.c b/drivers/tty/tty_ldsem.c index d8a55e87877f..0ffb0cbe2823 100644 --- a/drivers/tty/tty_ldsem.c +++ b/drivers/tty/tty_ldsem.c @@ -39,17 +39,10 @@ lock_acquire(&(l)->dep_map, s, t, r, c, n, i) # define __rel(l, n, i) \ lock_release(&(l)->dep_map, n, i) -# ifdef CONFIG_PROVE_LOCKING -# define lockdep_acquire(l, s, t, i) __acq(l, s, t, 0, 2, NULL, i) -# define lockdep_acquire_nest(l, s, t, n, i) __acq(l, s, t, 0, 2, n, i) -# define lockdep_acquire_read(l, s, t, i) __acq(l, s, t, 1, 2, NULL, i) -# define lockdep_release(l, n, i) __rel(l, n, i) -# else -# define lockdep_acquire(l, s, t, i) __acq(l, s, t, 0, 1, NULL, i) -# define 
lockdep_acquire_nest(l, s, t, n, i) __acq(l, s, t, 0, 1, n, i) -# define lockdep_acquire_read(l, s, t, i) __acq(l, s, t, 1, 1, NULL, i) -# define lockdep_release(l, n, i) __rel(l, n, i) -# endif +#define lockdep_acquire(l, s, t, i) __acq(l, s, t, 0, 1, NULL, i) +#define lockdep_acquire_nest(l, s, t, n, i) __acq(l, s, t, 0, 1, n, i) +#define lockdep_acquire_read(l, s, t, i) __acq(l, s, t, 1, 1, NULL, i) +#define lockdep_release(l, n, i) __rel(l, n, i) #else # define lockdep_acquire(l, s, t, i) do { } while (0) # define lockdep_acquire_nest(l, s, t, n, i) do { } while (0) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index a0fa5de210cf..e1e22e0f01e8 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -505,9 +505,13 @@ static int get_rx_bufs(struct vhost_virtqueue *vq, r = -ENOBUFS; goto err; } - d = vhost_get_vq_desc(vq->dev, vq, vq->iov + seg, + r = vhost_get_vq_desc(vq->dev, vq, vq->iov + seg, ARRAY_SIZE(vq->iov) - seg, &out, &in, log, log_num); + if (unlikely(r < 0)) + goto err; + + d = r; if (d == vq->num) { r = 0; goto err; @@ -532,6 +536,12 @@ static int get_rx_bufs(struct vhost_virtqueue *vq, *iovcount = seg; if (unlikely(log)) *log_num = nlogs; + + /* Detect overrun */ + if (unlikely(datalen > 0)) { + r = UIO_MAXIOV + 1; + goto err; + } return headcount; err: vhost_discard_vq_desc(vq, headcount); @@ -587,6 +597,14 @@ static void handle_rx(struct vhost_net *net) /* On error, stop handling until the next kick. */ if (unlikely(headcount < 0)) break; + /* On overrun, truncate and discard */ + if (unlikely(headcount > UIO_MAXIOV)) { + msg.msg_iovlen = 1; + err = sock->ops->recvmsg(NULL, sock, &msg, + 1, MSG_DONTWAIT | MSG_TRUNC); + pr_debug("Discarded rx packet: len %zd\n", sock_len); + continue; + } /* OK, now we need to know about added descriptors. */ if (!headcount) { if (unlikely(vhost_enable_notify(&net->dev, vq))) { diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c index 37d06ea624aa..61a6ac8fa8fc 100644 --- a/drivers/xen/balloon.c +++ b/drivers/xen/balloon.c @@ -399,11 +399,25 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp) state = BP_EAGAIN; break; } + scrub_page(page); - pfn = page_to_pfn(page); - frame_list[i] = pfn_to_mfn(pfn); + frame_list[i] = page_to_pfn(page); + } - scrub_page(page); + /* + * Ensure that ballooned highmem pages don't have kmaps. + * + * Do this before changing the p2m as kmap_flush_unused() + * reads PTEs to obtain pages (and hence needs the original + * p2m entry). + */ + kmap_flush_unused(); + + /* Update direct mapping, invalidate P2M, and add to balloon. */ + for (i = 0; i < nr_pages; i++) { + pfn = frame_list[i]; + frame_list[i] = pfn_to_mfn(pfn); + page = pfn_to_page(pfn); #ifdef CONFIG_XEN_HAVE_PVMMU /* @@ -429,11 +443,9 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp) } #endif - balloon_append(pfn_to_page(pfn)); + balloon_append(page); } - /* Ensure that ballooned highmem pages don't have kmaps. 
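
On the vhost-net change just above: get_rx_bufs() now signals a datagram that cannot fit by returning UIO_MAXIOV + 1, a value no valid buffer count can reach, and handle_rx() responds by reading the packet truncated (MSG_TRUNC) into a single buffer and dropping it. A toy model of that in-band sentinel, with invented numbers:

    #include <stdio.h>

    #define MAXBUFS 8    /* stand-in for UIO_MAXIOV */

    /* Return how many buffers a packet used, 0 when none are available,
     * or MAXBUFS + 1 (impossible as a real count) to say "datagram too
     * big, truncate and discard". */
    static int get_bufs(int avail, int needed)
    {
        if (needed > avail)
            return MAXBUFS + 1;
        return needed;
    }

    int main(void)
    {
        int n = get_bufs(4, 6);

        if (n > MAXBUFS)
            printf("overrun: truncate and discard\n");
        else
            printf("used %d buffers\n", n);
        return 0;
    }

Keeping the error in-band spares the call site a second out-parameter, at the cost of one reserved value.
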
*/ - kmap_flush_unused(); flush_tlb_all(); set_xen_guest_handle(reservation.extent_start, frame_list); diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c index 24084732b1d0..80ef38c73e5a 100644 --- a/fs/anon_inodes.c +++ b/fs/anon_inodes.c @@ -41,19 +41,8 @@ static const struct dentry_operations anon_inodefs_dentry_operations = { static struct dentry *anon_inodefs_mount(struct file_system_type *fs_type, int flags, const char *dev_name, void *data) { - struct dentry *root; - root = mount_pseudo(fs_type, "anon_inode:", NULL, + return mount_pseudo(fs_type, "anon_inode:", NULL, &anon_inodefs_dentry_operations, ANON_INODE_FS_MAGIC); - if (!IS_ERR(root)) { - struct super_block *s = root->d_sb; - anon_inode_inode = alloc_anon_inode(s); - if (IS_ERR(anon_inode_inode)) { - dput(root); - deactivate_locked_super(s); - root = ERR_CAST(anon_inode_inode); - } - } - return root; } static struct file_system_type anon_inode_fs_type = { @@ -175,22 +164,15 @@ EXPORT_SYMBOL_GPL(anon_inode_getfd); static int __init anon_inode_init(void) { - int error; - - error = register_filesystem(&anon_inode_fs_type); - if (error) - goto err_exit; anon_inode_mnt = kern_mount(&anon_inode_fs_type); - if (IS_ERR(anon_inode_mnt)) { - error = PTR_ERR(anon_inode_mnt); - goto err_unregister_filesystem; - } - return 0; + if (IS_ERR(anon_inode_mnt)) + panic("anon_inode_init() kernel mount failed (%ld)\n", PTR_ERR(anon_inode_mnt)); -err_unregister_filesystem: - unregister_filesystem(&anon_inode_fs_type); -err_exit: - panic(KERN_ERR "anon_inode_init() failed (%d)\n", error); + anon_inode_inode = alloc_anon_inode(anon_inode_mnt->mnt_sb); + if (IS_ERR(anon_inode_inode)) + panic("anon_inode_init() inode allocation failed (%ld)\n", PTR_ERR(anon_inode_inode)); + + return 0; } fs_initcall(anon_inode_init); diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h index cf32f0393369..c0f3718b77a8 100644 --- a/fs/cifs/cifsglob.h +++ b/fs/cifs/cifsglob.h @@ -513,7 +513,7 @@ struct cifs_mnt_data { static inline unsigned int get_rfc1002_length(void *buf) { - return be32_to_cpu(*((__be32 *)buf)); + return be32_to_cpu(*((__be32 *)buf)) & 0xffffff; } static inline void diff --git a/fs/cifs/file.c b/fs/cifs/file.c index 53c15074bb36..834fce759d80 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -2579,31 +2579,19 @@ cifs_writev(struct kiocb *iocb, const struct iovec *iov, struct cifsInodeInfo *cinode = CIFS_I(inode); struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server; ssize_t rc = -EACCES; + loff_t lock_pos = pos; - BUG_ON(iocb->ki_pos != pos); - + if (file->f_flags & O_APPEND) + lock_pos = i_size_read(inode); /* * We need to hold the sem to be sure nobody modifies lock list * with a brlock that prevents writing. 
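
Regarding the one-line get_rfc1002_length() fix a little above: an RFC 1002 session header is one type byte followed by a three-byte big-endian length, so masking the byte-swapped word with 0xffffff keeps a nonzero type byte from inflating the computed length. A standalone sketch (header bytes chosen arbitrarily):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Only the low 24 bits of the swapped first word carry the length;
     * the top byte is the message type, hence the mask. */
    static unsigned int rfc1002_length(const void *buf)
    {
        uint32_t be;

        memcpy(&be, buf, sizeof(be));
        return ntohl(be) & 0xffffff;
    }

    int main(void)
    {
        unsigned char hdr[4] = { 0x85, 0x00, 0x01, 0x02 };    /* arbitrary type/len */

        printf("len=%u\n", rfc1002_length(hdr));    /* prints 258 */
        return 0;
    }
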
*/ down_read(&cinode->lock_sem); - if (!cifs_find_lock_conflict(cfile, pos, iov_length(iov, nr_segs), + if (!cifs_find_lock_conflict(cfile, lock_pos, iov_length(iov, nr_segs), server->vals->exclusive_lock_type, NULL, - CIFS_WRITE_OP)) { - mutex_lock(&inode->i_mutex); - rc = __generic_file_aio_write(iocb, iov, nr_segs, - &iocb->ki_pos); - mutex_unlock(&inode->i_mutex); - } - - if (rc > 0) { - ssize_t err; - - err = generic_write_sync(file, iocb->ki_pos - rc, rc); - if (err < 0) - rc = err; - } - + CIFS_WRITE_OP)) + rc = generic_file_aio_write(iocb, iov, nr_segs, pos); up_read(&cinode->lock_sem); return rc; } diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c index b37570952846..18cd5650a5fc 100644 --- a/fs/cifs/transport.c +++ b/fs/cifs/transport.c @@ -270,6 +270,26 @@ cifs_rqst_page_to_kvec(struct smb_rqst *rqst, unsigned int idx, iov->iov_len = rqst->rq_pagesz; } +static unsigned long +rqst_len(struct smb_rqst *rqst) +{ + unsigned int i; + struct kvec *iov = rqst->rq_iov; + unsigned long buflen = 0; + + /* total up iov array first */ + for (i = 0; i < rqst->rq_nvec; i++) + buflen += iov[i].iov_len; + + /* add in the page array if there is one */ + if (rqst->rq_npages) { + buflen += rqst->rq_pagesz * (rqst->rq_npages - 1); + buflen += rqst->rq_tailsz; + } + + return buflen; +} + static int smb_send_rqst(struct TCP_Server_Info *server, struct smb_rqst *rqst) { @@ -277,6 +297,7 @@ smb_send_rqst(struct TCP_Server_Info *server, struct smb_rqst *rqst) struct kvec *iov = rqst->rq_iov; int n_vec = rqst->rq_nvec; unsigned int smb_buf_length = get_rfc1002_length(iov[0].iov_base); + unsigned long send_length; unsigned int i; size_t total_len = 0, sent; struct socket *ssocket = server->ssocket; @@ -285,6 +306,14 @@ smb_send_rqst(struct TCP_Server_Info *server, struct smb_rqst *rqst) if (ssocket == NULL) return -ENOTSOCK; + /* sanity check send length */ + send_length = rqst_len(rqst); + if (send_length != smb_buf_length + 4) { + WARN(1, "Send length mismatch(send_length=%lu smb_buf_length=%u)\n", + send_length, smb_buf_length); + return -EIO; + } + cifs_dbg(FYI, "Sending smb: smb_len=%u\n", smb_buf_length); dump_smb(iov[0].iov_base, iov[0].iov_len); diff --git a/fs/dcache.c b/fs/dcache.c index 265e0ce9769c..ca02c13a84aa 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -2833,9 +2833,9 @@ static int prepend_name(char **buffer, int *buflen, struct qstr *name) u32 dlen = ACCESS_ONCE(name->len); char *p; - if (*buflen < dlen + 1) - return -ENAMETOOLONG; *buflen -= dlen + 1; + if (*buflen < 0) + return -ENAMETOOLONG; p = *buffer -= dlen + 1; *p++ = '/'; while (dlen--) { diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 6e39895a91b8..24bfd7ff3049 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -38,6 +38,7 @@ #include <linux/slab.h> #include <linux/ratelimit.h> #include <linux/aio.h> +#include <linux/bitops.h> #include "ext4_jbd2.h" #include "xattr.h" @@ -3921,18 +3922,20 @@ int ext4_get_inode_loc(struct inode *inode, struct ext4_iloc *iloc) void ext4_set_inode_flags(struct inode *inode) { unsigned int flags = EXT4_I(inode)->i_flags; + unsigned int new_fl = 0; - inode->i_flags &= ~(S_SYNC|S_APPEND|S_IMMUTABLE|S_NOATIME|S_DIRSYNC); if (flags & EXT4_SYNC_FL) - inode->i_flags |= S_SYNC; + new_fl |= S_SYNC; if (flags & EXT4_APPEND_FL) - inode->i_flags |= S_APPEND; + new_fl |= S_APPEND; if (flags & EXT4_IMMUTABLE_FL) - inode->i_flags |= S_IMMUTABLE; + new_fl |= S_IMMUTABLE; if (flags & EXT4_NOATIME_FL) - inode->i_flags |= S_NOATIME; + new_fl |= S_NOATIME; if (flags & EXT4_DIRSYNC_FL) - inode->i_flags |= 
S_DIRSYNC; + new_fl |= S_DIRSYNC; + set_mask_bits(&inode->i_flags, + S_SYNC|S_APPEND|S_IMMUTABLE|S_NOATIME|S_DIRSYNC, new_fl); } /* Propagate flags from i_flags to EXT4_I(inode)->i_flags */ diff --git a/fs/file.c b/fs/file.c index db25c2bdfe46..b61293badfb1 100644 --- a/fs/file.c +++ b/fs/file.c @@ -497,7 +497,7 @@ repeat: error = fd; #if 1 /* Sanity check */ - if (rcu_dereference_raw(fdt->fd[fd]) != NULL) { + if (rcu_access_pointer(fdt->fd[fd]) != NULL) { printk(KERN_WARNING "alloc_fd: slot %d not NULL!\n", fd); rcu_assign_pointer(fdt->fd[fd], NULL); } @@ -683,35 +683,54 @@ EXPORT_SYMBOL(fget_raw); * The fput_needed flag returned by fget_light should be passed to the * corresponding fput_light. */ -struct file *__fget_light(unsigned int fd, fmode_t mask, int *fput_needed) +static unsigned long __fget_light(unsigned int fd, fmode_t mask) { struct files_struct *files = current->files; struct file *file; - *fput_needed = 0; if (atomic_read(&files->count) == 1) { file = __fcheck_files(files, fd); - if (file && (file->f_mode & mask)) - file = NULL; + if (!file || unlikely(file->f_mode & mask)) + return 0; + return (unsigned long)file; } else { file = __fget(fd, mask); - if (file) - *fput_needed = 1; + if (!file) + return 0; + return FDPUT_FPUT | (unsigned long)file; } - - return file; } -struct file *fget_light(unsigned int fd, int *fput_needed) +unsigned long __fdget(unsigned int fd) { - return __fget_light(fd, FMODE_PATH, fput_needed); + return __fget_light(fd, FMODE_PATH); } -EXPORT_SYMBOL(fget_light); +EXPORT_SYMBOL(__fdget); -struct file *fget_raw_light(unsigned int fd, int *fput_needed) +unsigned long __fdget_raw(unsigned int fd) { - return __fget_light(fd, 0, fput_needed); + return __fget_light(fd, 0); } +unsigned long __fdget_pos(unsigned int fd) +{ + unsigned long v = __fdget(fd); + struct file *file = (struct file *)(v & ~3); + + if (file && (file->f_mode & FMODE_ATOMIC_POS)) { + if (file_count(file) > 1) { + v |= FDPUT_POS_UNLOCK; + mutex_lock(&file->f_pos_lock); + } + } + return v; +} + +/* + * We only lock f_pos if we have threads or if the file might be + * shared with another process. In both cases we'll have an elevated + * file count (done either by fdget() or by fork()). + */ + void set_close_on_exec(unsigned int fd, int flag) { struct files_struct *files = current->files; diff --git a/fs/file_table.c b/fs/file_table.c index 5fff9030be34..5b24008ea4f6 100644 --- a/fs/file_table.c +++ b/fs/file_table.c @@ -135,6 +135,7 @@ struct file *get_empty_filp(void) atomic_long_set(&f->f_count, 1); rwlock_init(&f->f_owner.lock); spin_lock_init(&f->f_lock); + mutex_init(&f->f_pos_lock); eventpoll_init_file(f); /* f->f_version: 0 */ return f; diff --git a/fs/hfsplus/catalog.c b/fs/hfsplus/catalog.c index 968ce411db53..32602c667b4a 100644 --- a/fs/hfsplus/catalog.c +++ b/fs/hfsplus/catalog.c @@ -103,6 +103,8 @@ static int hfsplus_cat_build_record(hfsplus_cat_entry *entry, folder = &entry->folder; memset(folder, 0, sizeof(*folder)); folder->type = cpu_to_be16(HFSPLUS_FOLDER); + if (test_bit(HFSPLUS_SB_HFSX, &sbi->flags)) + folder->flags |= cpu_to_be16(HFSPLUS_HAS_FOLDER_COUNT); folder->id = cpu_to_be32(inode->i_ino); HFSPLUS_I(inode)->create_date = folder->create_date = @@ -203,6 +205,36 @@ int hfsplus_find_cat(struct super_block *sb, u32 cnid, return hfs_brec_find(fd, hfs_find_rec_by_key); } +static void hfsplus_subfolders_inc(struct inode *dir) +{ + struct hfsplus_sb_info *sbi = HFSPLUS_SB(dir->i_sb); + + if (test_bit(HFSPLUS_SB_HFSX, &sbi->flags)) { + /* + * Increment subfolder count. 
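
The fs/file.c rework above packs its whole result into one unsigned long: struct file is at least word-aligned, so the two low bits of its address are free, and __fget_light()/__fdget_pos() use them to carry the FDPUT_FPUT and FDPUT_POS_UNLOCK cleanup flags alongside the pointer. A userspace model of the tagging (struct and constants simplified):

    #include <stdio.h>

    #define FDPUT_FPUT       1UL    /* values mirror the patch */
    #define FDPUT_POS_UNLOCK 2UL

    struct file { int dummy; };

    /* The pointer is at least word-aligned, so its two low bits are
     * always zero and can carry the cleanup flags; one unsigned long
     * thus returns both pointer and flags. */
    static struct file *unpack_file(unsigned long v)
    {
        return (struct file *)(v & ~3UL);
    }

    int main(void)
    {
        static struct file f;
        unsigned long v = (unsigned long)&f | FDPUT_FPUT;

        printf("file=%p flags=%lu\n", (void *)unpack_file(v), v & 3UL);
        return 0;
    }

Callers recover the pointer with v & ~3 before dereferencing, which is why both flag values stay below 4.
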
Note, the value is only meaningful + * for folders with HFSPLUS_HAS_FOLDER_COUNT flag set. + */ + HFSPLUS_I(dir)->subfolders++; + } +} + +static void hfsplus_subfolders_dec(struct inode *dir) +{ + struct hfsplus_sb_info *sbi = HFSPLUS_SB(dir->i_sb); + + if (test_bit(HFSPLUS_SB_HFSX, &sbi->flags)) { + /* + * Decrement subfolder count. Note, the value is only meaningful + * for folders with HFSPLUS_HAS_FOLDER_COUNT flag set. + * + * Check for zero. Some subfolders may have been created + * by an implementation ignorant of this counter. + */ + if (HFSPLUS_I(dir)->subfolders) + HFSPLUS_I(dir)->subfolders--; + } +} + int hfsplus_create_cat(u32 cnid, struct inode *dir, struct qstr *str, struct inode *inode) { @@ -247,6 +279,8 @@ int hfsplus_create_cat(u32 cnid, struct inode *dir, goto err1; dir->i_size++; + if (S_ISDIR(inode->i_mode)) + hfsplus_subfolders_inc(dir); dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC; hfsplus_mark_inode_dirty(dir, HFSPLUS_I_CAT_DIRTY); @@ -336,6 +370,8 @@ int hfsplus_delete_cat(u32 cnid, struct inode *dir, struct qstr *str) goto out; dir->i_size--; + if (type == HFSPLUS_FOLDER) + hfsplus_subfolders_dec(dir); dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC; hfsplus_mark_inode_dirty(dir, HFSPLUS_I_CAT_DIRTY); @@ -380,6 +416,7 @@ int hfsplus_rename_cat(u32 cnid, hfs_bnode_read(src_fd.bnode, &entry, src_fd.entryoffset, src_fd.entrylength); + type = be16_to_cpu(entry.type); /* create new dir entry with the data from the old entry */ hfsplus_cat_build_key(sb, dst_fd.search_key, dst_dir->i_ino, dst_name); @@ -394,6 +431,8 @@ int hfsplus_rename_cat(u32 cnid, if (err) goto out; dst_dir->i_size++; + if (type == HFSPLUS_FOLDER) + hfsplus_subfolders_inc(dst_dir); dst_dir->i_mtime = dst_dir->i_ctime = CURRENT_TIME_SEC; /* finally remove the old entry */ @@ -405,6 +444,8 @@ int hfsplus_rename_cat(u32 cnid, if (err) goto out; src_dir->i_size--; + if (type == HFSPLUS_FOLDER) + hfsplus_subfolders_dec(src_dir); src_dir->i_mtime = src_dir->i_ctime = CURRENT_TIME_SEC; /* remove old thread entry */ diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h index 08846425b67f..62d571eb69ba 100644 --- a/fs/hfsplus/hfsplus_fs.h +++ b/fs/hfsplus/hfsplus_fs.h @@ -242,6 +242,7 @@ struct hfsplus_inode_info { */ sector_t fs_blocks; u8 userflags; /* BSD user file flags */ + u32 subfolders; /* Subfolder count (HFSX only) */ struct list_head open_dir_list; loff_t phys_size; diff --git a/fs/hfsplus/hfsplus_raw.h b/fs/hfsplus/hfsplus_raw.h index 8ffb3a8ffe75..5a126828d85e 100644 --- a/fs/hfsplus/hfsplus_raw.h +++ b/fs/hfsplus/hfsplus_raw.h @@ -261,7 +261,7 @@ struct hfsplus_cat_folder { struct DInfo user_info; struct DXInfo finder_info; __be32 text_encoding; - u32 reserved; + __be32 subfolders; /* Subfolder count in HFSX. Reserved in HFS+. 
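
Summing up the hfsplus additions around here: a formerly reserved on-disk field becomes a subfolder count, maintained only on HFSX volumes (the HFSPLUS_HAS_FOLDER_COUNT case), and the decrement deliberately saturates at zero because folders created by older implementations may never have been counted. A compact model of such a guarded, saturating counter (types simplified):

    #include <stdbool.h>
    #include <stdio.h>

    /* Guarded, saturating counter: meaningful only when the feature
     * flag is set, and never decremented below zero because older code
     * may have skipped the increments. */
    struct dir {
        bool has_folder_count;    /* models the HFSX feature flag */
        unsigned int subfolders;
    };

    static void subfolders_dec(struct dir *d)
    {
        if (d->has_folder_count && d->subfolders)
            d->subfolders--;
    }

    int main(void)
    {
        struct dir d = { .has_folder_count = true, .subfolders = 1 };

        subfolders_dec(&d);
        subfolders_dec(&d);    /* saturates at zero instead of wrapping */
        printf("subfolders=%u\n", d.subfolders);
        return 0;
    }
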
*/ } __packed; /* HFS file info (stolen from hfs.h) */ @@ -301,11 +301,13 @@ struct hfsplus_cat_file { struct hfsplus_fork_raw rsrc_fork; } __packed; -/* File attribute bits */ +/* File and folder flag bits */ #define HFSPLUS_FILE_LOCKED 0x0001 #define HFSPLUS_FILE_THREAD_EXISTS 0x0002 #define HFSPLUS_XATTR_EXISTS 0x0004 #define HFSPLUS_ACL_EXISTS 0x0008 +#define HFSPLUS_HAS_FOLDER_COUNT 0x0010 /* Folder has subfolder count + * (HFSX only) */ /* HFS+ catalog thread (part of a cat_entry) */ struct hfsplus_cat_thread { diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c index fa929f325f87..a4f45bd88a63 100644 --- a/fs/hfsplus/inode.c +++ b/fs/hfsplus/inode.c @@ -375,6 +375,7 @@ struct inode *hfsplus_new_inode(struct super_block *sb, umode_t mode) hip->extent_state = 0; hip->flags = 0; hip->userflags = 0; + hip->subfolders = 0; memset(hip->first_extents, 0, sizeof(hfsplus_extent_rec)); memset(hip->cached_extents, 0, sizeof(hfsplus_extent_rec)); hip->alloc_blocks = 0; @@ -494,6 +495,10 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd) inode->i_ctime = hfsp_mt2ut(folder->attribute_mod_date); HFSPLUS_I(inode)->create_date = folder->create_date; HFSPLUS_I(inode)->fs_blocks = 0; + if (folder->flags & cpu_to_be16(HFSPLUS_HAS_FOLDER_COUNT)) { + HFSPLUS_I(inode)->subfolders = + be32_to_cpu(folder->subfolders); + } inode->i_op = &hfsplus_dir_inode_operations; inode->i_fop = &hfsplus_dir_operations; } else if (type == HFSPLUS_FILE) { @@ -566,6 +571,10 @@ int hfsplus_cat_write_inode(struct inode *inode) folder->content_mod_date = hfsp_ut2mt(inode->i_mtime); folder->attribute_mod_date = hfsp_ut2mt(inode->i_ctime); folder->valence = cpu_to_be32(inode->i_size - 2); + if (folder->flags & cpu_to_be16(HFSPLUS_HAS_FOLDER_COUNT)) { + folder->subfolders = + cpu_to_be32(HFSPLUS_I(inode)->subfolders); + } hfs_bnode_write(fd.bnode, &entry, fd.entryoffset, sizeof(struct hfsplus_cat_folder)); } else if (HFSPLUS_IS_RSRC(inode)) { diff --git a/fs/mount.h b/fs/mount.h index a17458ca6f29..b29e42f05f34 100644 --- a/fs/mount.h +++ b/fs/mount.h @@ -19,13 +19,13 @@ struct mnt_pcp { }; struct mountpoint { - struct list_head m_hash; + struct hlist_node m_hash; struct dentry *m_dentry; int m_count; }; struct mount { - struct list_head mnt_hash; + struct hlist_node mnt_hash; struct mount *mnt_parent; struct dentry *mnt_mountpoint; struct vfsmount mnt; diff --git a/fs/namei.c b/fs/namei.c index 385f7817bfcc..4b491b431990 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -1109,7 +1109,7 @@ static bool __follow_mount_rcu(struct nameidata *nd, struct path *path, return false; if (!d_mountpoint(path->dentry)) - break; + return true; mounted = __lookup_mnt(path->mnt, path->dentry); if (!mounted) @@ -1125,20 +1125,7 @@ static bool __follow_mount_rcu(struct nameidata *nd, struct path *path, */ *inode = path->dentry->d_inode; } - return true; -} - -static void follow_mount_rcu(struct nameidata *nd) -{ - while (d_mountpoint(nd->path.dentry)) { - struct mount *mounted; - mounted = __lookup_mnt(nd->path.mnt, nd->path.dentry); - if (!mounted) - break; - nd->path.mnt = &mounted->mnt; - nd->path.dentry = mounted->mnt.mnt_root; - nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq); - } + return read_seqretry(&mount_lock, nd->m_seq); } static int follow_dotdot_rcu(struct nameidata *nd) @@ -1166,7 +1153,17 @@ static int follow_dotdot_rcu(struct nameidata *nd) break; nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq); } - follow_mount_rcu(nd); + while (d_mountpoint(nd->path.dentry)) { + struct mount *mounted; + mounted 
= __lookup_mnt(nd->path.mnt, nd->path.dentry); + if (!mounted) + break; + nd->path.mnt = &mounted->mnt; + nd->path.dentry = mounted->mnt.mnt_root; + nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq); + if (!read_seqretry(&mount_lock, nd->m_seq)) + goto failed; + } nd->inode = nd->path.dentry->d_inode; return 0; @@ -1884,7 +1881,7 @@ static int path_init(int dfd, const char *name, unsigned int flags, nd->path = f.file->f_path; if (flags & LOOKUP_RCU) { - if (f.need_put) + if (f.flags & FDPUT_FPUT) *fp = f.file; nd->seq = __read_seqcount_begin(&nd->path.dentry->d_seq); rcu_read_lock(); diff --git a/fs/namespace.c b/fs/namespace.c index 22e536705c45..2ffc5a2905d4 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -23,11 +23,34 @@ #include <linux/uaccess.h> #include <linux/proc_ns.h> #include <linux/magic.h> +#include <linux/bootmem.h> #include "pnode.h" #include "internal.h" -#define HASH_SHIFT ilog2(PAGE_SIZE / sizeof(struct list_head)) -#define HASH_SIZE (1UL << HASH_SHIFT) +static unsigned int m_hash_mask __read_mostly; +static unsigned int m_hash_shift __read_mostly; +static unsigned int mp_hash_mask __read_mostly; +static unsigned int mp_hash_shift __read_mostly; + +static __initdata unsigned long mhash_entries; +static int __init set_mhash_entries(char *str) +{ + if (!str) + return 0; + mhash_entries = simple_strtoul(str, &str, 0); + return 1; +} +__setup("mhash_entries=", set_mhash_entries); + +static __initdata unsigned long mphash_entries; +static int __init set_mphash_entries(char *str) +{ + if (!str) + return 0; + mphash_entries = simple_strtoul(str, &str, 0); + return 1; +} +__setup("mphash_entries=", set_mphash_entries); static int event; static DEFINE_IDA(mnt_id_ida); @@ -36,8 +59,8 @@ static DEFINE_SPINLOCK(mnt_id_lock); static int mnt_id_start = 0; static int mnt_group_start = 1; -static struct list_head *mount_hashtable __read_mostly; -static struct list_head *mountpoint_hashtable __read_mostly; +static struct hlist_head *mount_hashtable __read_mostly; +static struct hlist_head *mountpoint_hashtable __read_mostly; static struct kmem_cache *mnt_cache __read_mostly; static DECLARE_RWSEM(namespace_sem); @@ -55,12 +78,19 @@ EXPORT_SYMBOL_GPL(fs_kobj); */ __cacheline_aligned_in_smp DEFINE_SEQLOCK(mount_lock); -static inline unsigned long hash(struct vfsmount *mnt, struct dentry *dentry) +static inline struct hlist_head *m_hash(struct vfsmount *mnt, struct dentry *dentry) { unsigned long tmp = ((unsigned long)mnt / L1_CACHE_BYTES); tmp += ((unsigned long)dentry / L1_CACHE_BYTES); - tmp = tmp + (tmp >> HASH_SHIFT); - return tmp & (HASH_SIZE - 1); + tmp = tmp + (tmp >> m_hash_shift); + return &mount_hashtable[tmp & m_hash_mask]; +} + +static inline struct hlist_head *mp_hash(struct dentry *dentry) +{ + unsigned long tmp = ((unsigned long)dentry / L1_CACHE_BYTES); + tmp = tmp + (tmp >> mp_hash_shift); + return &mountpoint_hashtable[tmp & mp_hash_mask]; } /* @@ -187,7 +217,7 @@ static struct mount *alloc_vfsmnt(const char *name) mnt->mnt_writers = 0; #endif - INIT_LIST_HEAD(&mnt->mnt_hash); + INIT_HLIST_NODE(&mnt->mnt_hash); INIT_LIST_HEAD(&mnt->mnt_child); INIT_LIST_HEAD(&mnt->mnt_mounts); INIT_LIST_HEAD(&mnt->mnt_list); @@ -575,10 +605,10 @@ bool legitimize_mnt(struct vfsmount *bastard, unsigned seq) */ struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry) { - struct list_head *head = mount_hashtable + hash(mnt, dentry); + struct hlist_head *head = m_hash(mnt, dentry); struct mount *p; - list_for_each_entry_rcu(p, head, mnt_hash) + hlist_for_each_entry_rcu(p, 
head, mnt_hash) if (&p->mnt_parent->mnt == mnt && p->mnt_mountpoint == dentry) return p; return NULL; @@ -590,13 +620,17 @@ struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry) */ struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry) { - struct list_head *head = mount_hashtable + hash(mnt, dentry); - struct mount *p; - - list_for_each_entry_reverse(p, head, mnt_hash) - if (&p->mnt_parent->mnt == mnt && p->mnt_mountpoint == dentry) - return p; - return NULL; + struct mount *p, *res; + res = p = __lookup_mnt(mnt, dentry); + if (!p) + goto out; + hlist_for_each_entry_continue(p, mnt_hash) { + if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry) + break; + res = p; + } +out: + return res; } /* @@ -633,11 +667,11 @@ struct vfsmount *lookup_mnt(struct path *path) static struct mountpoint *new_mountpoint(struct dentry *dentry) { - struct list_head *chain = mountpoint_hashtable + hash(NULL, dentry); + struct hlist_head *chain = mp_hash(dentry); struct mountpoint *mp; int ret; - list_for_each_entry(mp, chain, m_hash) { + hlist_for_each_entry(mp, chain, m_hash) { if (mp->m_dentry == dentry) { /* might be worth a WARN_ON() */ if (d_unlinked(dentry)) @@ -659,7 +693,7 @@ static struct mountpoint *new_mountpoint(struct dentry *dentry) mp->m_dentry = dentry; mp->m_count = 1; - list_add(&mp->m_hash, chain); + hlist_add_head(&mp->m_hash, chain); return mp; } @@ -670,7 +704,7 @@ static void put_mountpoint(struct mountpoint *mp) spin_lock(&dentry->d_lock); dentry->d_flags &= ~DCACHE_MOUNTED; spin_unlock(&dentry->d_lock); - list_del(&mp->m_hash); + hlist_del(&mp->m_hash); kfree(mp); } } @@ -712,7 +746,7 @@ static void detach_mnt(struct mount *mnt, struct path *old_path) mnt->mnt_parent = mnt; mnt->mnt_mountpoint = mnt->mnt.mnt_root; list_del_init(&mnt->mnt_child); - list_del_init(&mnt->mnt_hash); + hlist_del_init_rcu(&mnt->mnt_hash); put_mountpoint(mnt->mnt_mp); mnt->mnt_mp = NULL; } @@ -739,15 +773,14 @@ static void attach_mnt(struct mount *mnt, struct mountpoint *mp) { mnt_set_mountpoint(parent, mp, mnt); - list_add_tail(&mnt->mnt_hash, mount_hashtable + - hash(&parent->mnt, mp->m_dentry)); + hlist_add_head_rcu(&mnt->mnt_hash, m_hash(&parent->mnt, mp->m_dentry)); list_add_tail(&mnt->mnt_child, &parent->mnt_mounts); } /* * vfsmount lock must be held for write */ -static void commit_tree(struct mount *mnt) +static void commit_tree(struct mount *mnt, struct mount *shadows) { struct mount *parent = mnt->mnt_parent; struct mount *m; @@ -762,8 +795,11 @@ static void commit_tree(struct mount *mnt) list_splice(&head, n->list.prev); - list_add_tail(&mnt->mnt_hash, mount_hashtable + - hash(&parent->mnt, mnt->mnt_mountpoint)); + if (shadows) + hlist_add_after_rcu(&shadows->mnt_hash, &mnt->mnt_hash); + else + hlist_add_head_rcu(&mnt->mnt_hash, + m_hash(&parent->mnt, mnt->mnt_mountpoint)); list_add_tail(&mnt->mnt_child, &parent->mnt_mounts); touch_mnt_namespace(n); } @@ -1153,26 +1189,28 @@ int may_umount(struct vfsmount *mnt) EXPORT_SYMBOL(may_umount); -static LIST_HEAD(unmounted); /* protected by namespace_sem */ +static HLIST_HEAD(unmounted); /* protected by namespace_sem */ static void namespace_unlock(void) { struct mount *mnt; - LIST_HEAD(head); + struct hlist_head head = unmounted; - if (likely(list_empty(&unmounted))) { + if (likely(hlist_empty(&head))) { up_write(&namespace_sem); return; } - list_splice_init(&unmounted, &head); + head.first->pprev = &head.first; + INIT_HLIST_HEAD(&unmounted); + up_write(&namespace_sem); synchronize_rcu(); - while 
(!list_empty(&head)) { - mnt = list_first_entry(&head, struct mount, mnt_hash); - list_del_init(&mnt->mnt_hash); + while (!hlist_empty(&head)) { + mnt = hlist_entry(head.first, struct mount, mnt_hash); + hlist_del_init(&mnt->mnt_hash); if (mnt->mnt_ex_mountpoint.mnt) path_put(&mnt->mnt_ex_mountpoint); mntput(&mnt->mnt); @@ -1193,16 +1231,19 @@ static inline void namespace_lock(void) */ void umount_tree(struct mount *mnt, int how) { - LIST_HEAD(tmp_list); + HLIST_HEAD(tmp_list); struct mount *p; + struct mount *last = NULL; - for (p = mnt; p; p = next_mnt(p, mnt)) - list_move(&p->mnt_hash, &tmp_list); + for (p = mnt; p; p = next_mnt(p, mnt)) { + hlist_del_init_rcu(&p->mnt_hash); + hlist_add_head(&p->mnt_hash, &tmp_list); + } if (how) propagate_umount(&tmp_list); - list_for_each_entry(p, &tmp_list, mnt_hash) { + hlist_for_each_entry(p, &tmp_list, mnt_hash) { list_del_init(&p->mnt_expire); list_del_init(&p->mnt_list); __touch_mnt_namespace(p->mnt_ns); @@ -1220,8 +1261,13 @@ void umount_tree(struct mount *mnt, int how) p->mnt_mp = NULL; } change_mnt_propagation(p, MS_PRIVATE); + last = p; + } + if (last) { + last->mnt_hash.next = unmounted.first; + unmounted.first = tmp_list.first; + unmounted.first->pprev = &unmounted.first; } - list_splice(&tmp_list, &unmounted); } static void shrink_submounts(struct mount *mnt); @@ -1605,24 +1651,23 @@ static int attach_recursive_mnt(struct mount *source_mnt, struct mountpoint *dest_mp, struct path *parent_path) { - LIST_HEAD(tree_list); + HLIST_HEAD(tree_list); struct mount *child, *p; + struct hlist_node *n; int err; if (IS_MNT_SHARED(dest_mnt)) { err = invent_group_ids(source_mnt, true); if (err) goto out; - } - err = propagate_mnt(dest_mnt, dest_mp, source_mnt, &tree_list); - if (err) - goto out_cleanup_ids; - - lock_mount_hash(); - - if (IS_MNT_SHARED(dest_mnt)) { + err = propagate_mnt(dest_mnt, dest_mp, source_mnt, &tree_list); + if (err) + goto out_cleanup_ids; + lock_mount_hash(); for (p = source_mnt; p; p = next_mnt(p, source_mnt)) set_mnt_shared(p); + } else { + lock_mount_hash(); } if (parent_path) { detach_mnt(source_mnt, parent_path); @@ -1630,20 +1675,22 @@ static int attach_recursive_mnt(struct mount *source_mnt, touch_mnt_namespace(source_mnt->mnt_ns); } else { mnt_set_mountpoint(dest_mnt, dest_mp, source_mnt); - commit_tree(source_mnt); + commit_tree(source_mnt, NULL); } - list_for_each_entry_safe(child, p, &tree_list, mnt_hash) { - list_del_init(&child->mnt_hash); - commit_tree(child); + hlist_for_each_entry_safe(child, n, &tree_list, mnt_hash) { + struct mount *q; + hlist_del_init(&child->mnt_hash); + q = __lookup_mnt_last(&child->mnt_parent->mnt, + child->mnt_mountpoint); + commit_tree(child, q); } unlock_mount_hash(); return 0; out_cleanup_ids: - if (IS_MNT_SHARED(dest_mnt)) - cleanup_group_ids(source_mnt, NULL); + cleanup_group_ids(source_mnt, NULL); out: return err; } @@ -2777,18 +2824,24 @@ void __init mnt_init(void) mnt_cache = kmem_cache_create("mnt_cache", sizeof(struct mount), 0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL); - mount_hashtable = (struct list_head *)__get_free_page(GFP_ATOMIC); - mountpoint_hashtable = (struct list_head *)__get_free_page(GFP_ATOMIC); + mount_hashtable = alloc_large_system_hash("Mount-cache", + sizeof(struct hlist_head), + mhash_entries, 19, + 0, + &m_hash_shift, &m_hash_mask, 0, 0); + mountpoint_hashtable = alloc_large_system_hash("Mountpoint-cache", + sizeof(struct hlist_head), + mphash_entries, 19, + 0, + &mp_hash_shift, &mp_hash_mask, 0, 0); if (!mount_hashtable || !mountpoint_hashtable) panic("Failed 
to allocate mount hash table\n"); - printk(KERN_INFO "Mount-cache hash table entries: %lu\n", HASH_SIZE); - - for (u = 0; u < HASH_SIZE; u++) - INIT_LIST_HEAD(&mount_hashtable[u]); - for (u = 0; u < HASH_SIZE; u++) - INIT_LIST_HEAD(&mountpoint_hashtable[u]); + for (u = 0; u <= m_hash_mask; u++) + INIT_HLIST_HEAD(&mount_hashtable[u]); + for (u = 0; u <= mp_hash_mask; u++) + INIT_HLIST_HEAD(&mountpoint_hashtable[u]); kernfs_init(); diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c index 017d3cb5e99b..6d7be3f80356 100644 --- a/fs/nfsd/vfs.c +++ b/fs/nfsd/vfs.c @@ -449,6 +449,7 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap, fh_lock(fhp); host_err = notify_change(dentry, iap, NULL); fh_unlock(fhp); + err = nfserrno(host_err); out_put_write_access: if (size_change) diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c index 8450262bcf2a..51632c40e896 100644 --- a/fs/ocfs2/file.c +++ b/fs/ocfs2/file.c @@ -2393,8 +2393,8 @@ out_dio: if (((file->f_flags & O_DSYNC) && !direct_io) || IS_SYNC(inode) || ((file->f_flags & O_DIRECT) && !direct_io)) { - ret = filemap_fdatawrite_range(file->f_mapping, pos, - pos + count - 1); + ret = filemap_fdatawrite_range(file->f_mapping, *ppos, + *ppos + count - 1); if (ret < 0) written = ret; @@ -2407,8 +2407,8 @@ out_dio: } if (!ret) - ret = filemap_fdatawait_range(file->f_mapping, pos, - pos + count - 1); + ret = filemap_fdatawait_range(file->f_mapping, *ppos, + *ppos + count - 1); } /* diff --git a/fs/ocfs2/stackglue.c b/fs/ocfs2/stackglue.c index 1324e6600e57..ca5ce14cbddc 100644 --- a/fs/ocfs2/stackglue.c +++ b/fs/ocfs2/stackglue.c @@ -346,7 +346,9 @@ int ocfs2_cluster_connect(const char *stack_name, strlcpy(new_conn->cc_name, group, GROUP_NAME_MAX + 1); new_conn->cc_namelen = grouplen; - strlcpy(new_conn->cc_cluster_name, cluster_name, CLUSTER_NAME_MAX + 1); + if (cluster_name_len) + strlcpy(new_conn->cc_cluster_name, cluster_name, + CLUSTER_NAME_MAX + 1); new_conn->cc_cluster_name_len = cluster_name_len; new_conn->cc_recovery_handler = recovery_handler; new_conn->cc_recovery_data = recovery_data; diff --git a/fs/open.c b/fs/open.c index 4b3e1edf2fe4..b9ed8b25c108 100644 --- a/fs/open.c +++ b/fs/open.c @@ -705,6 +705,10 @@ static int do_dentry_open(struct file *f, return 0; } + /* POSIX.1-2008/SUSv4 Section XSI 2.9.7 */ + if (S_ISREG(inode->i_mode)) + f->f_mode |= FMODE_ATOMIC_POS; + f->f_op = fops_get(inode->i_fop); if (unlikely(WARN_ON(!f->f_op))) { error = -ENODEV; diff --git a/fs/pnode.c b/fs/pnode.c index c7221bb19801..88396df725b4 100644 --- a/fs/pnode.c +++ b/fs/pnode.c @@ -220,14 +220,14 @@ static struct mount *get_source(struct mount *dest, * @tree_list : list of heads of trees to be attached. 
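
The mnt_init() hunk above stops carving both hash tables out of a single page and sizes them at boot with alloc_large_system_hash(), which returns a shift and mask for indexing; m_hash() and mp_hash() then fold the upper bits down and mask into a power-of-two bucket array. A self-contained model of that lookup shape (the 64-byte divide stands in for L1_CACHE_BYTES; all sizing constants are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    struct bucket { void *first; };

    static struct bucket *table;
    static unsigned int hshift, hmask;    /* filled in by boot-time sizing */

    /* Divide the pointers by a cache-line-ish constant, fold the high
     * bits down by hshift, and mask into a power-of-two table. */
    static struct bucket *hash_bucket(const void *mnt, const void *dentry)
    {
        unsigned long tmp = (unsigned long)mnt / 64;

        tmp += (unsigned long)dentry / 64;
        tmp += tmp >> hshift;
        return &table[tmp & hmask];
    }

    int main(void)
    {
        unsigned int entries = 1U << 12;    /* stand-in for mhash_entries sizing */

        hshift = 12;
        hmask = entries - 1;
        table = calloc(entries, sizeof(*table));
        if (!table)
            return 1;
        printf("bucket %ld\n",
               (long)(hash_bucket((void *)0x1000, (void *)0x2000) - table));
        free(table);
        return 0;
    }
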
*/ int propagate_mnt(struct mount *dest_mnt, struct mountpoint *dest_mp, - struct mount *source_mnt, struct list_head *tree_list) + struct mount *source_mnt, struct hlist_head *tree_list) { struct user_namespace *user_ns = current->nsproxy->mnt_ns->user_ns; struct mount *m, *child; int ret = 0; struct mount *prev_dest_mnt = dest_mnt; struct mount *prev_src_mnt = source_mnt; - LIST_HEAD(tmp_list); + HLIST_HEAD(tmp_list); for (m = propagation_next(dest_mnt, dest_mnt); m; m = propagation_next(m, dest_mnt)) { @@ -246,27 +246,29 @@ int propagate_mnt(struct mount *dest_mnt, struct mountpoint *dest_mp, child = copy_tree(source, source->mnt.mnt_root, type); if (IS_ERR(child)) { ret = PTR_ERR(child); - list_splice(tree_list, tmp_list.prev); + tmp_list = *tree_list; + tmp_list.first->pprev = &tmp_list.first; + INIT_HLIST_HEAD(tree_list); goto out; } if (is_subdir(dest_mp->m_dentry, m->mnt.mnt_root)) { mnt_set_mountpoint(m, dest_mp, child); - list_add_tail(&child->mnt_hash, tree_list); + hlist_add_head(&child->mnt_hash, tree_list); } else { /* * This can happen if the parent mount was bind mounted * on some subdirectory of a shared/slave mount. */ - list_add_tail(&child->mnt_hash, &tmp_list); + hlist_add_head(&child->mnt_hash, &tmp_list); } prev_dest_mnt = m; prev_src_mnt = child; } out: lock_mount_hash(); - while (!list_empty(&tmp_list)) { - child = list_first_entry(&tmp_list, struct mount, mnt_hash); + while (!hlist_empty(&tmp_list)) { + child = hlist_entry(tmp_list.first, struct mount, mnt_hash); umount_tree(child, 0); } unlock_mount_hash(); @@ -338,8 +340,10 @@ static void __propagate_umount(struct mount *mnt) * umount the child only if the child has no * other children */ - if (child && list_empty(&child->mnt_mounts)) - list_move_tail(&child->mnt_hash, &mnt->mnt_hash); + if (child && list_empty(&child->mnt_mounts)) { + hlist_del_init_rcu(&child->mnt_hash); + hlist_add_before_rcu(&child->mnt_hash, &mnt->mnt_hash); + } } } @@ -350,11 +354,11 @@ static void __propagate_umount(struct mount *mnt) * * vfsmount lock must be held for write */ -int propagate_umount(struct list_head *list) +int propagate_umount(struct hlist_head *list) { struct mount *mnt; - list_for_each_entry(mnt, list, mnt_hash) + hlist_for_each_entry(mnt, list, mnt_hash) __propagate_umount(mnt); return 0; } diff --git a/fs/pnode.h b/fs/pnode.h index 59e7eda1851e..fc28a27fa892 100644 --- a/fs/pnode.h +++ b/fs/pnode.h @@ -36,8 +36,8 @@ static inline void set_mnt_shared(struct mount *mnt) void change_mnt_propagation(struct mount *, int); int propagate_mnt(struct mount *, struct mountpoint *, struct mount *, - struct list_head *); -int propagate_umount(struct list_head *); + struct hlist_head *); +int propagate_umount(struct hlist_head *); int propagate_mount_busy(struct mount *, int); void mnt_release_group_id(struct mount *); int get_dominating_id(struct mount *mnt, const struct path *root); diff --git a/fs/proc/base.c b/fs/proc/base.c index 51507065263b..b9760628e1fd 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -1824,6 +1824,7 @@ static int proc_map_files_get_link(struct dentry *dentry, struct path *path) if (rc) goto out_mmput; + rc = -ENOENT; down_read(&mm->mmap_sem); vma = find_exact_vma(mm, vm_start, vm_end); if (vma && vma->vm_file) { diff --git a/fs/read_write.c b/fs/read_write.c index edc5746a902a..28cc9c810744 100644 --- a/fs/read_write.c +++ b/fs/read_write.c @@ -264,10 +264,22 @@ loff_t vfs_llseek(struct file *file, loff_t offset, int whence) } EXPORT_SYMBOL(vfs_llseek); +static inline struct fd fdget_pos(int fd) +{ + 
return __to_fd(__fdget_pos(fd)); +} + +static inline void fdput_pos(struct fd f) +{ + if (f.flags & FDPUT_POS_UNLOCK) + mutex_unlock(&f.file->f_pos_lock); + fdput(f); +} + SYSCALL_DEFINE3(lseek, unsigned int, fd, off_t, offset, unsigned int, whence) { off_t retval; - struct fd f = fdget(fd); + struct fd f = fdget_pos(fd); if (!f.file) return -EBADF; @@ -278,7 +290,7 @@ SYSCALL_DEFINE3(lseek, unsigned int, fd, off_t, offset, unsigned int, whence) if (res != (loff_t)retval) retval = -EOVERFLOW; /* LFS: should only happen on 32 bit platforms */ } - fdput(f); + fdput_pos(f); return retval; } @@ -295,7 +307,7 @@ SYSCALL_DEFINE5(llseek, unsigned int, fd, unsigned long, offset_high, unsigned int, whence) { int retval; - struct fd f = fdget(fd); + struct fd f = fdget_pos(fd); loff_t offset; if (!f.file) @@ -315,7 +327,7 @@ SYSCALL_DEFINE5(llseek, unsigned int, fd, unsigned long, offset_high, retval = 0; } out_putf: - fdput(f); + fdput_pos(f); return retval; } #endif @@ -498,7 +510,7 @@ static inline void file_pos_write(struct file *file, loff_t pos) SYSCALL_DEFINE3(read, unsigned int, fd, char __user *, buf, size_t, count) { - struct fd f = fdget(fd); + struct fd f = fdget_pos(fd); ssize_t ret = -EBADF; if (f.file) { @@ -506,7 +518,7 @@ SYSCALL_DEFINE3(read, unsigned int, fd, char __user *, buf, size_t, count) ret = vfs_read(f.file, buf, count, &pos); if (ret >= 0) file_pos_write(f.file, pos); - fdput(f); + fdput_pos(f); } return ret; } @@ -514,7 +526,7 @@ SYSCALL_DEFINE3(read, unsigned int, fd, char __user *, buf, size_t, count) SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf, size_t, count) { - struct fd f = fdget(fd); + struct fd f = fdget_pos(fd); ssize_t ret = -EBADF; if (f.file) { @@ -522,7 +534,7 @@ SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf, ret = vfs_write(f.file, buf, count, &pos); if (ret >= 0) file_pos_write(f.file, pos); - fdput(f); + fdput_pos(f); } return ret; @@ -797,7 +809,7 @@ EXPORT_SYMBOL(vfs_writev); SYSCALL_DEFINE3(readv, unsigned long, fd, const struct iovec __user *, vec, unsigned long, vlen) { - struct fd f = fdget(fd); + struct fd f = fdget_pos(fd); ssize_t ret = -EBADF; if (f.file) { @@ -805,7 +817,7 @@ SYSCALL_DEFINE3(readv, unsigned long, fd, const struct iovec __user *, vec, ret = vfs_readv(f.file, vec, vlen, &pos); if (ret >= 0) file_pos_write(f.file, pos); - fdput(f); + fdput_pos(f); } if (ret > 0) @@ -817,7 +829,7 @@ SYSCALL_DEFINE3(readv, unsigned long, fd, const struct iovec __user *, vec, SYSCALL_DEFINE3(writev, unsigned long, fd, const struct iovec __user *, vec, unsigned long, vlen) { - struct fd f = fdget(fd); + struct fd f = fdget_pos(fd); ssize_t ret = -EBADF; if (f.file) { @@ -825,7 +837,7 @@ SYSCALL_DEFINE3(writev, unsigned long, fd, const struct iovec __user *, vec, ret = vfs_writev(f.file, vec, vlen, &pos); if (ret >= 0) file_pos_write(f.file, pos); - fdput(f); + fdput_pos(f); } if (ret > 0) @@ -968,7 +980,7 @@ COMPAT_SYSCALL_DEFINE3(readv, compat_ulong_t, fd, const struct compat_iovec __user *,vec, compat_ulong_t, vlen) { - struct fd f = fdget(fd); + struct fd f = fdget_pos(fd); ssize_t ret; loff_t pos; @@ -978,7 +990,7 @@ COMPAT_SYSCALL_DEFINE3(readv, compat_ulong_t, fd, ret = compat_readv(f.file, vec, vlen, &pos); if (ret >= 0) f.file->f_pos = pos; - fdput(f); + fdput_pos(f); return ret; } @@ -1035,7 +1047,7 @@ COMPAT_SYSCALL_DEFINE3(writev, compat_ulong_t, fd, const struct compat_iovec __user *, vec, compat_ulong_t, vlen) { - struct fd f = fdget(fd); + struct fd f = fdget_pos(fd); ssize_t ret; loff_t pos; @@ 
-1045,7 +1057,7 @@ COMPAT_SYSCALL_DEFINE3(writev, compat_ulong_t, fd, ret = compat_writev(f.file, vec, vlen, &pos); if (ret >= 0) f.file->f_pos = pos; - fdput(f); + fdput_pos(f); return ret; } diff --git a/include/asm-generic/bitops/const_hweight.h b/include/asm-generic/bitops/const_hweight.h index fa2a50b7ee66..0a7e06623470 100644 --- a/include/asm-generic/bitops/const_hweight.h +++ b/include/asm-generic/bitops/const_hweight.h @@ -5,14 +5,15 @@ * Compile time versions of __arch_hweightN() */ #define __const_hweight8(w) \ - ( (!!((w) & (1ULL << 0))) + \ - (!!((w) & (1ULL << 1))) + \ - (!!((w) & (1ULL << 2))) + \ - (!!((w) & (1ULL << 3))) + \ - (!!((w) & (1ULL << 4))) + \ - (!!((w) & (1ULL << 5))) + \ - (!!((w) & (1ULL << 6))) + \ - (!!((w) & (1ULL << 7))) ) + ((unsigned int) \ + ((!!((w) & (1ULL << 0))) + \ + (!!((w) & (1ULL << 1))) + \ + (!!((w) & (1ULL << 2))) + \ + (!!((w) & (1ULL << 3))) + \ + (!!((w) & (1ULL << 4))) + \ + (!!((w) & (1ULL << 5))) + \ + (!!((w) & (1ULL << 6))) + \ + (!!((w) & (1ULL << 7))))) #define __const_hweight16(w) (__const_hweight8(w) + __const_hweight8((w) >> 8 )) #define __const_hweight32(w) (__const_hweight16(w) + __const_hweight16((w) >> 16)) diff --git a/include/asm-generic/mcs_spinlock.h b/include/asm-generic/mcs_spinlock.h new file mode 100644 index 000000000000..10cd4ffc6ba2 --- /dev/null +++ b/include/asm-generic/mcs_spinlock.h @@ -0,0 +1,13 @@ +#ifndef __ASM_MCS_SPINLOCK_H +#define __ASM_MCS_SPINLOCK_H + +/* + * Architectures can define their own: + * + * arch_mcs_spin_lock_contended(l) + * arch_mcs_spin_unlock_contended(l) + * + * See kernel/locking/mcs_spinlock.c. + */ + +#endif /* __ASM_MCS_SPINLOCK_H */ diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h index be85127bfed3..f27000f55a83 100644 --- a/include/kvm/arm_vgic.h +++ b/include/kvm/arm_vgic.h @@ -171,6 +171,11 @@ static inline int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 add return 0; } +static inline int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write) +{ + return -ENXIO; +} + static inline int kvm_vgic_init(struct kvm *kvm) { return 0; diff --git a/include/linux/audit.h b/include/linux/audit.h index aa865a9a4c4f..ec1464df4c60 100644 --- a/include/linux/audit.h +++ b/include/linux/audit.h @@ -43,6 +43,7 @@ struct mq_attr; struct mqstat; struct audit_watch; struct audit_tree; +struct sk_buff; struct audit_krule { int vers_ops; @@ -463,7 +464,7 @@ extern int audit_filter_user(int type); extern int audit_filter_type(int type); extern int audit_rule_change(int type, __u32 portid, int seq, void *data, size_t datasz); -extern int audit_list_rules_send(__u32 portid, int seq); +extern int audit_list_rules_send(struct sk_buff *request_skb, int seq); extern u32 audit_enabled; #else /* CONFIG_AUDIT */ diff --git a/include/linux/bitops.h b/include/linux/bitops.h index abc9ca778456..be5fd38bd5a0 100644 --- a/include/linux/bitops.h +++ b/include/linux/bitops.h @@ -196,6 +196,21 @@ static inline unsigned long __ffs64(u64 word) #ifdef __KERNEL__ +#ifndef set_mask_bits +#define set_mask_bits(ptr, _mask, _bits) \ +({ \ + const typeof(*ptr) mask = (_mask), bits = (_bits); \ + typeof(*ptr) old, new; \ + \ + do { \ + old = ACCESS_ONCE(*ptr); \ + new = (old & ~mask) | bits; \ + } while (cmpxchg(ptr, old, new) != old); \ + \ + new; \ +}) +#endif + #ifndef find_last_bit /** * find_last_bit - find the last set bit in a memory region diff --git a/include/linux/file.h b/include/linux/file.h index cbacf4faf447..4d69123377a2 100644 --- a/include/linux/file.h +++ 
b/include/linux/file.h @@ -28,33 +28,36 @@ static inline void fput_light(struct file *file, int fput_needed) struct fd { struct file *file; - int need_put; + unsigned int flags; }; +#define FDPUT_FPUT 1 +#define FDPUT_POS_UNLOCK 2 static inline void fdput(struct fd fd) { - if (fd.need_put) + if (fd.flags & FDPUT_FPUT) fput(fd.file); } extern struct file *fget(unsigned int fd); -extern struct file *fget_light(unsigned int fd, int *fput_needed); +extern struct file *fget_raw(unsigned int fd); +extern unsigned long __fdget(unsigned int fd); +extern unsigned long __fdget_raw(unsigned int fd); +extern unsigned long __fdget_pos(unsigned int fd); -static inline struct fd fdget(unsigned int fd) +static inline struct fd __to_fd(unsigned long v) { - int b; - struct file *f = fget_light(fd, &b); - return (struct fd){f,b}; + return (struct fd){(struct file *)(v & ~3),v & 3}; } -extern struct file *fget_raw(unsigned int fd); -extern struct file *fget_raw_light(unsigned int fd, int *fput_needed); +static inline struct fd fdget(unsigned int fd) +{ + return __to_fd(__fdget(fd)); +} static inline struct fd fdget_raw(unsigned int fd) { - int b; - struct file *f = fget_raw_light(fd, &b); - return (struct fd){f,b}; + return __to_fd(__fdget_raw(fd)); } extern int f_dupfd(unsigned int from, struct file *file, unsigned flags); diff --git a/include/linux/fs.h b/include/linux/fs.h index 60829565e552..23b2a35d712e 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -123,6 +123,9 @@ typedef void (dio_iodone_t)(struct kiocb *iocb, loff_t offset, /* File is opened with O_PATH; almost nothing can be done with it */ #define FMODE_PATH ((__force fmode_t)0x4000) +/* File needs atomic accesses to f_pos */ +#define FMODE_ATOMIC_POS ((__force fmode_t)0x8000) + /* File was opened by fanotify and shouldn't generate fanotify events */ #define FMODE_NONOTIFY ((__force fmode_t)0x1000000) @@ -780,13 +783,14 @@ struct file { const struct file_operations *f_op; /* - * Protects f_ep_links, f_flags, f_pos vs i_size in lseek SEEK_CUR. + * Protects f_ep_links, f_flags. * Must not be taken from IRQ context. 
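The struct fd rework above is a pointer-tagging trick: __fdget(), __fdget_raw() and __fdget_pos() return the struct file pointer and the FDPUT_* flags packed into a single unsigned long, and __to_fd() splits them apart with "v & ~3" / "v & 3". That only works because bits 0 and 1 of the pointer are guaranteed free, which is what the __attribute__((aligned(4))) added to struct file just below enforces. A standalone illustration of the arithmetic (runnable userspace C, hypothetical names):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define FDPUT_FPUT       1
#define FDPUT_POS_UNLOCK 2

/* Stand-in for struct file; 4-byte alignment keeps bits 0-1 clear. */
struct file { int dummy; } __attribute__((aligned(4)));

static unsigned long pack_fd(struct file *f, unsigned int flags)
{
        assert(((uintptr_t)f & 3) == 0);
        return (unsigned long)f | flags;
}

int main(void)
{
        static struct file filp;
        unsigned long v = pack_fd(&filp, FDPUT_FPUT | FDPUT_POS_UNLOCK);

        /* __to_fd() in the patch performs exactly this split: */
        struct file *f = (struct file *)(v & ~3UL);
        unsigned int flags = v & 3;

        printf("file=%p flags=%#x\n", (void *)f, flags);
        assert(f == &filp && flags == 3);
        return 0;
}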
*/ spinlock_t f_lock; atomic_long_t f_count; unsigned int f_flags; fmode_t f_mode; + struct mutex f_pos_lock; loff_t f_pos; struct fown_struct f_owner; const struct cred *f_cred; @@ -808,7 +812,7 @@ struct file { #ifdef CONFIG_DEBUG_WRITECOUNT unsigned long f_mnt_write_state; #endif -}; +} __attribute__((aligned(4))); /* lest something weird decides that 2 is OK */ struct file_handle { __u32 handle_bytes; diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h index 4e4cc28623ad..4cdb3a17bcb5 100644 --- a/include/linux/ftrace_event.h +++ b/include/linux/ftrace_event.h @@ -495,10 +495,6 @@ enum { FILTER_TRACE_FN, }; -#define EVENT_STORAGE_SIZE 128 -extern struct mutex event_storage_mutex; -extern char event_storage[EVENT_STORAGE_SIZE]; - extern int trace_event_raw_init(struct ftrace_event_call *call); extern int trace_define_field(struct ftrace_event_call *call, const char *type, const char *name, int offset, int size, diff --git a/include/linux/futex.h b/include/linux/futex.h index b0d95cac826e..6435f46d6e13 100644 --- a/include/linux/futex.h +++ b/include/linux/futex.h @@ -55,7 +55,11 @@ union futex_key { #ifdef CONFIG_FUTEX extern void exit_robust_list(struct task_struct *curr); extern void exit_pi_state_list(struct task_struct *curr); +#ifdef CONFIG_HAVE_FUTEX_CMPXCHG +#define futex_cmpxchg_enabled 1 +#else extern int futex_cmpxchg_enabled; +#endif #else static inline void exit_robust_list(struct task_struct *curr) { diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 0437439bc047..39b81dc7d01a 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -123,6 +123,10 @@ struct vm_area_struct; __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN | \ __GFP_NO_KSWAPD) +/* + * GFP_THISNODE does not perform any reclaim, you most likely want to + * use __GFP_THISNODE to allocate from a given node without fallback! + */ #ifdef CONFIG_NUMA #define GFP_THISNODE (__GFP_THISNODE | __GFP_NOWARN | __GFP_NORETRY) #else diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h index 66017028dcb3..19ae05d4b8ec 100644 --- a/include/linux/irq_work.h +++ b/include/linux/irq_work.h @@ -30,7 +30,9 @@ void init_irq_work(struct irq_work *work, void (*func)(struct irq_work *)) work->func = func; } -void irq_work_queue(struct irq_work *work); +#define DEFINE_IRQ_WORK(name, _f) struct irq_work name = { .func = (_f), } + +bool irq_work_queue(struct irq_work *work); void irq_work_run(void); void irq_work_sync(struct irq_work *work); diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h index 92b1bfc5da60..060e5137fd80 100644 --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -252,9 +252,9 @@ struct held_lock { unsigned int trylock:1; /* 16 bits */ unsigned int read:2; /* see lock_acquire() comment */ - unsigned int check:2; /* see lock_acquire() comment */ + unsigned int check:1; /* see lock_acquire() comment */ unsigned int hardirqs_off:1; - unsigned int references:11; /* 32 bits */ + unsigned int references:12; /* 32 bits */ }; /* @@ -303,7 +303,7 @@ extern void lockdep_init_map(struct lockdep_map *lock, const char *name, (lock)->dep_map.key, sub) #define lockdep_set_novalidate_class(lock) \ - lockdep_set_class(lock, &__lockdep_no_validate__) + lockdep_set_class_and_name(lock, &__lockdep_no_validate__, #lock) /* * Compare locking classes */ @@ -326,9 +326,8 @@ static inline int lockdep_match_key(struct lockdep_map *lock, * * Values for check: * - * 0: disabled - * 1: simple checks (freeing, held-at-exit-time, etc.) 
- * 2: full validation + * 0: simple checks (freeing, held-at-exit-time, etc.) + * 1: full validation */ extern void lock_acquire(struct lockdep_map *lock, unsigned int subclass, int trylock, int read, int check, @@ -479,15 +478,9 @@ static inline void print_irqtrace_events(struct task_struct *curr) * on the per lock-class debug mode: */ -#ifdef CONFIG_PROVE_LOCKING - #define lock_acquire_exclusive(l, s, t, n, i) lock_acquire(l, s, t, 0, 2, n, i) - #define lock_acquire_shared(l, s, t, n, i) lock_acquire(l, s, t, 1, 2, n, i) - #define lock_acquire_shared_recursive(l, s, t, n, i) lock_acquire(l, s, t, 2, 2, n, i) -#else - #define lock_acquire_exclusive(l, s, t, n, i) lock_acquire(l, s, t, 0, 1, n, i) - #define lock_acquire_shared(l, s, t, n, i) lock_acquire(l, s, t, 1, 1, n, i) - #define lock_acquire_shared_recursive(l, s, t, n, i) lock_acquire(l, s, t, 2, 1, n, i) -#endif +#define lock_acquire_exclusive(l, s, t, n, i) lock_acquire(l, s, t, 0, 1, n, i) +#define lock_acquire_shared(l, s, t, n, i) lock_acquire(l, s, t, 1, 1, n, i) +#define lock_acquire_shared_recursive(l, s, t, n, i) lock_acquire(l, s, t, 2, 1, n, i) #define spin_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) #define spin_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) @@ -518,13 +511,13 @@ static inline void print_irqtrace_events(struct task_struct *curr) # define might_lock(lock) \ do { \ typecheck(struct lockdep_map *, &(lock)->dep_map); \ - lock_acquire(&(lock)->dep_map, 0, 0, 0, 2, NULL, _THIS_IP_); \ + lock_acquire(&(lock)->dep_map, 0, 0, 0, 1, NULL, _THIS_IP_); \ lock_release(&(lock)->dep_map, 0, _THIS_IP_); \ } while (0) # define might_lock_read(lock) \ do { \ typecheck(struct lockdep_map *, &(lock)->dep_map); \ - lock_acquire(&(lock)->dep_map, 0, 0, 1, 2, NULL, _THIS_IP_); \ + lock_acquire(&(lock)->dep_map, 0, 0, 1, 1, NULL, _THIS_IP_); \ lock_release(&(lock)->dep_map, 0, _THIS_IP_); \ } while (0) #else diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 5f2052c83154..9b61b9bf81ac 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -590,10 +590,10 @@ static inline bool zone_is_empty(struct zone *zone) /* * The NUMA zonelists are doubled because we need zonelists that restrict the - * allocations to a single node for GFP_THISNODE. + * allocations to a single node for __GFP_THISNODE. 
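The gfp.h and mmzone.h comment fixes in this area draw a distinction that is easy to get wrong: GFP_THISNODE is a composite (it also sets __GFP_NORETRY and __GFP_NOWARN, so the allocation performs no reclaim), whereas plain __GFP_THISNODE only pins the allocation to one node, with no zonelist fallback. A hedged sketch of the usage the new comments recommend:

#include <linux/gfp.h>

/*
 * Allocate one page strictly on node `nid`. __GFP_THISNODE removes the
 * fallback zonelist but, unlike GFP_THISNODE, still allows reclaim.
 */
static struct page *alloc_page_on_node(int nid)
{
        return alloc_pages_node(nid, GFP_KERNEL | __GFP_THISNODE, 0);
}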
* * [0] : Zonelist with fallback - * [1] : No fallback (GFP_THISNODE) + * [1] : No fallback (__GFP_THISNODE) */ #define MAX_ZONELISTS 2 diff --git a/include/linux/mutex.h b/include/linux/mutex.h index d3181936c138..11692dea18aa 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -46,6 +46,7 @@ * - detects multi-task circular deadlocks and prints out all affected * locks and tasks (and only those tasks) */ +struct optimistic_spin_queue; struct mutex { /* 1: unlocked, 0: locked, negative: locked, possible waiters */ atomic_t count; @@ -55,7 +56,7 @@ struct mutex { struct task_struct *owner; #endif #ifdef CONFIG_MUTEX_SPIN_ON_OWNER - void *spin_mlock; /* Spinner MCS lock */ + struct optimistic_spin_queue *osq; /* Spinner MCS lock */ #endif #ifdef CONFIG_DEBUG_MUTEXES const char *name; @@ -179,4 +180,4 @@ extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); # define arch_mutex_cpu_relax() cpu_relax() #endif -#endif +#endif /* __LINUX_MUTEX_H */ diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h index 1005ebf17575..5a09a48f2658 100644 --- a/include/linux/netdev_features.h +++ b/include/linux/netdev_features.h @@ -163,4 +163,11 @@ enum { /* changeable features with no special hardware requirements */ #define NETIF_F_SOFT_FEATURES (NETIF_F_GSO | NETIF_F_GRO) +#define NETIF_F_VLAN_FEATURES (NETIF_F_HW_VLAN_CTAG_FILTER | \ + NETIF_F_HW_VLAN_CTAG_RX | \ + NETIF_F_HW_VLAN_CTAG_TX | \ + NETIF_F_HW_VLAN_STAG_FILTER | \ + NETIF_F_HW_VLAN_STAG_RX | \ + NETIF_F_HW_VLAN_STAG_TX) + #endif /* _LINUX_NETDEV_FEATURES_H */ diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index e8eeebd49a98..daafd9561cbc 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -3014,7 +3014,7 @@ struct sk_buff *skb_gso_segment(struct sk_buff *skb, netdev_features_t features) { return __skb_gso_segment(skb, features, true); } -__be16 skb_network_protocol(struct sk_buff *skb); +__be16 skb_network_protocol(struct sk_buff *skb, int *depth); static inline bool can_checksum_protocol(netdev_features_t features, __be16 protocol) diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index 97fbecdd7a40..7399e6a3e9a0 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -2531,6 +2531,9 @@ #define PCI_VENDOR_ID_INTEL 0x8086 #define PCI_DEVICE_ID_INTEL_EESSC 0x0008 +#define PCI_DEVICE_ID_INTEL_SNB_IMC 0x0100 +#define PCI_DEVICE_ID_INTEL_IVB_IMC 0x0154 +#define PCI_DEVICE_ID_INTEL_HSW_IMC 0x0c00 #define PCI_DEVICE_ID_INTEL_PXHD_0 0x0320 #define PCI_DEVICE_ID_INTEL_PXHD_1 0x0321 #define PCI_DEVICE_ID_INTEL_PXH_0 0x0329 diff --git a/include/linux/rculist.h b/include/linux/rculist.h index dbaf99084112..8183b46fbaa2 100644 --- a/include/linux/rculist.h +++ b/include/linux/rculist.h @@ -247,9 +247,10 @@ static inline void list_splice_init_rcu(struct list_head *list, * primitives such as list_add_rcu() as long as it's guarded by rcu_read_lock(). */ #define list_entry_rcu(ptr, type, member) \ - ({typeof (*ptr) __rcu *__ptr = (typeof (*ptr) __rcu __force *)ptr; \ - container_of((typeof(ptr))rcu_dereference_raw(__ptr), type, member); \ - }) +({ \ + typeof(*ptr) __rcu *__ptr = (typeof(*ptr) __rcu __force *)ptr; \ + container_of((typeof(ptr))rcu_dereference_raw(__ptr), type, member); \ +}) /** * Where are list_empty_rcu() and list_first_entry_rcu()? @@ -285,11 +286,11 @@ static inline void list_splice_init_rcu(struct list_head *list, * primitives such as list_add_rcu() as long as it's guarded by rcu_read_lock(). 
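The rculist.h hunks here only reformat list_entry_rcu() and list_first_or_null_rcu(); no behaviour changes. The latter macro exists because the obvious list_empty() followed by list_first_entry_rcu() sequence is racy under RCU, the list can become empty between the two loads, so (as the reindented definition just below shows) the head's ->next is sampled exactly once with ACCESS_ONCE(). A minimal reader sketch with a hypothetical item type:

#include <linux/rculist.h>
#include <linux/rcupdate.h>

struct item {
        struct list_head link;  /* added with list_add_rcu() under a lock */
        int value;
};

static LIST_HEAD(items);

static int peek_first_value(void)
{
        struct item *first;
        int ret = -1;

        rcu_read_lock();
        first = list_first_or_null_rcu(&items, struct item, link);
        if (first)
                ret = first->value;
        rcu_read_unlock();      /* `first` must not be used past this point */
        return ret;
}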
*/ #define list_first_or_null_rcu(ptr, type, member) \ - ({struct list_head *__ptr = (ptr); \ - struct list_head *__next = ACCESS_ONCE(__ptr->next); \ - likely(__ptr != __next) ? \ - list_entry_rcu(__next, type, member) : NULL; \ - }) +({ \ + struct list_head *__ptr = (ptr); \ + struct list_head *__next = ACCESS_ONCE(__ptr->next); \ + likely(__ptr != __next) ? list_entry_rcu(__next, type, member) : NULL; \ +}) /** * list_for_each_entry_rcu - iterate over rcu list of given type diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 72bf3a01a4ee..00a7fd61b3c6 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright IBM Corporation, 2001 * @@ -44,7 +44,9 @@ #include <linux/debugobjects.h> #include <linux/bug.h> #include <linux/compiler.h> +#include <asm/barrier.h> +extern int rcu_expedited; /* for sysctl */ #ifdef CONFIG_RCU_TORTURE_TEST extern int rcutorture_runnable; /* for sysctl */ #endif /* #ifdef CONFIG_RCU_TORTURE_TEST */ @@ -314,7 +316,7 @@ static inline bool rcu_lockdep_current_cpu_online(void) static inline void rcu_lock_acquire(struct lockdep_map *map) { - lock_acquire(map, 0, 0, 2, 1, NULL, _THIS_IP_); + lock_acquire(map, 0, 0, 2, 0, NULL, _THIS_IP_); } static inline void rcu_lock_release(struct lockdep_map *map) @@ -479,11 +481,9 @@ static inline void rcu_preempt_sleep_check(void) do { \ rcu_preempt_sleep_check(); \ rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map), \ - "Illegal context switch in RCU-bh" \ - " read-side critical section"); \ + "Illegal context switch in RCU-bh read-side critical section"); \ rcu_lockdep_assert(!lock_is_held(&rcu_sched_lock_map), \ - "Illegal context switch in RCU-sched"\ - " read-side critical section"); \ + "Illegal context switch in RCU-sched read-side critical section"); \ } while (0) #else /* #ifdef CONFIG_PROVE_RCU */ @@ -510,43 +510,40 @@ static inline void rcu_preempt_sleep_check(void) #endif /* #else #ifdef __CHECKER__ */ #define __rcu_access_pointer(p, space) \ - ({ \ - typeof(*p) *_________p1 = (typeof(*p)*__force )ACCESS_ONCE(p); \ - rcu_dereference_sparse(p, space); \ - ((typeof(*p) __force __kernel *)(_________p1)); \ - }) +({ \ + typeof(*p) *_________p1 = (typeof(*p) *__force)ACCESS_ONCE(p); \ + rcu_dereference_sparse(p, space); \ + ((typeof(*p) __force __kernel *)(_________p1)); \ +}) #define __rcu_dereference_check(p, c, space) \ - ({ \ - typeof(*p) *_________p1 = (typeof(*p)*__force )ACCESS_ONCE(p); \ - rcu_lockdep_assert(c, "suspicious rcu_dereference_check()" \ - " usage"); \ - rcu_dereference_sparse(p, space); \ - smp_read_barrier_depends(); \ - ((typeof(*p) __force __kernel *)(_________p1)); \ - }) +({ \ + typeof(*p) *_________p1 = (typeof(*p) *__force)ACCESS_ONCE(p); \ + rcu_lockdep_assert(c, "suspicious rcu_dereference_check() usage"); \ + rcu_dereference_sparse(p, space); \ + smp_read_barrier_depends(); /* Dependency order vs. p above. 
*/ \ + ((typeof(*p) __force __kernel *)(_________p1)); \ +}) #define __rcu_dereference_protected(p, c, space) \ - ({ \ - rcu_lockdep_assert(c, "suspicious rcu_dereference_protected()" \ - " usage"); \ - rcu_dereference_sparse(p, space); \ - ((typeof(*p) __force __kernel *)(p)); \ - }) +({ \ + rcu_lockdep_assert(c, "suspicious rcu_dereference_protected() usage"); \ + rcu_dereference_sparse(p, space); \ + ((typeof(*p) __force __kernel *)(p)); \ +}) #define __rcu_access_index(p, space) \ - ({ \ - typeof(p) _________p1 = ACCESS_ONCE(p); \ - rcu_dereference_sparse(p, space); \ - (_________p1); \ - }) +({ \ + typeof(p) _________p1 = ACCESS_ONCE(p); \ + rcu_dereference_sparse(p, space); \ + (_________p1); \ +}) #define __rcu_dereference_index_check(p, c) \ - ({ \ - typeof(p) _________p1 = ACCESS_ONCE(p); \ - rcu_lockdep_assert(c, \ - "suspicious rcu_dereference_index_check()" \ - " usage"); \ - smp_read_barrier_depends(); \ - (_________p1); \ - }) +({ \ + typeof(p) _________p1 = ACCESS_ONCE(p); \ + rcu_lockdep_assert(c, \ + "suspicious rcu_dereference_index_check() usage"); \ + smp_read_barrier_depends(); /* Dependency order vs. p above. */ \ + (_________p1); \ +}) /** * RCU_INITIALIZER() - statically initialize an RCU-protected global variable @@ -585,12 +582,7 @@ static inline void rcu_preempt_sleep_check(void) * please be careful when making changes to rcu_assign_pointer() and the * other macros that it invokes. */ -#define rcu_assign_pointer(p, v) \ - do { \ - smp_wmb(); \ - ACCESS_ONCE(p) = RCU_INITIALIZER(v); \ - } while (0) - +#define rcu_assign_pointer(p, v) smp_store_release(&p, RCU_INITIALIZER(v)) /** * rcu_access_pointer() - fetch RCU pointer with no dereferencing @@ -1015,11 +1007,21 @@ static inline notrace void rcu_read_unlock_sched_notrace(void) #define kfree_rcu(ptr, rcu_head) \ __kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head)) -#ifdef CONFIG_RCU_NOCB_CPU +#if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL) +static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies) +{ + *delta_jiffies = ULONG_MAX; + return 0; +} +#endif /* #if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL) */ + +#if defined(CONFIG_RCU_NOCB_CPU_ALL) +static inline bool rcu_is_nocb_cpu(int cpu) { return true; } +#elif defined(CONFIG_RCU_NOCB_CPU) bool rcu_is_nocb_cpu(int cpu); #else static inline bool rcu_is_nocb_cpu(int cpu) { return false; } -#endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */ +#endif /* Only for use by adaptive-ticks code. */ diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 6f01771b571c..425c659d54e5 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. 
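The one-line rcu_assign_pointer() above is behaviourally equivalent to the old version but notably simpler: the explicit smp_wmb() plus ACCESS_ONCE() store is replaced by smp_store_release(), which gives the same publish-side ordering in a single primitive. The pattern it supports, as a sketch (hypothetical struct config; freeing the old value, e.g. via kfree_rcu(), is omitted for brevity):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct config {
        int threshold;
};

static struct config __rcu *cur_config;

/* Updater: fully initialise, then publish with release semantics. */
static int config_update(int threshold)
{
        struct config *c = kmalloc(sizeof(*c), GFP_KERNEL);

        if (!c)
                return -ENOMEM;
        c->threshold = threshold;
        rcu_assign_pointer(cur_config, c);      /* now smp_store_release() */
        return 0;
}

/* Reader: rcu_dereference() pairs with the release store above. */
static int config_read_threshold(void)
{
        struct config *c;
        int ret = 0;

        rcu_read_lock();
        c = rcu_dereference(cur_config);
        if (c)
                ret = c->threshold;
        rcu_read_unlock();
        return ret;
}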
* * Copyright IBM Corporation, 2008 * @@ -27,6 +27,16 @@ #include <linux/cache.h> +static inline unsigned long get_state_synchronize_rcu(void) +{ + return 0; +} + +static inline void cond_synchronize_rcu(unsigned long oldstate) +{ + might_sleep(); +} + static inline void rcu_barrier_bh(void) { wait_rcu_gp(call_rcu_bh); @@ -68,12 +78,6 @@ static inline void kfree_call_rcu(struct rcu_head *head, call_rcu(head, func); } -static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies) -{ - *delta_jiffies = ULONG_MAX; - return 0; -} - static inline void rcu_note_context_switch(int cpu) { rcu_sched_qs(cpu); diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 72137ee8c603..a59ca05fd4e3 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright IBM Corporation, 2008 * @@ -31,7 +31,9 @@ #define __LINUX_RCUTREE_H void rcu_note_context_switch(int cpu); +#ifndef CONFIG_RCU_NOCB_CPU_ALL int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies); +#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ void rcu_cpu_stall_reset(void); /* @@ -74,6 +76,8 @@ static inline void synchronize_rcu_bh_expedited(void) void rcu_barrier(void); void rcu_barrier_bh(void); void rcu_barrier_sched(void); +unsigned long get_state_synchronize_rcu(void); +void cond_synchronize_rcu(unsigned long oldstate); extern unsigned long rcutorture_testseq; extern unsigned long rcutorture_vernum; diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 1da693d51255..b66c2110cb1f 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -250,8 +250,7 @@ struct rmap_walk_control { int (*rmap_one)(struct page *page, struct vm_area_struct *vma, unsigned long addr, void *arg); int (*done)(struct page *page); - int (*file_nonlinear)(struct page *, struct address_space *, - struct vm_area_struct *vma); + int (*file_nonlinear)(struct page *, struct address_space *, void *arg); struct anon_vma *(*anon_lock)(struct page *page); bool (*invalid_vma)(struct vm_area_struct *vma, void *arg); }; diff --git a/include/linux/sched.h b/include/linux/sched.h index a781dec1cd0b..825ed838d4b9 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -3,6 +3,8 @@ #include <uapi/linux/sched.h> +#include <linux/sched/prio.h> + struct sched_param { int sched_priority; @@ -1077,6 +1079,7 @@ struct sched_entity { #endif #ifdef CONFIG_FAIR_GROUP_SCHED + int depth; struct sched_entity *parent; /* rq on which this entity is (to be) queued: */ struct cfs_rq *cfs_rq; @@ -1460,6 +1463,9 @@ struct task_struct { struct mutex perf_event_mutex; struct list_head perf_event_list; #endif +#ifdef CONFIG_DEBUG_PREEMPT + unsigned long preempt_disable_ip; +#endif #ifdef CONFIG_NUMA struct mempolicy *mempolicy; /* Protected by alloc_lock */ short il_next; @@ -1470,9 +1476,10 @@ struct task_struct { unsigned int numa_scan_period; unsigned int numa_scan_period_max; int numa_preferred_nid; - int numa_migrate_deferred; unsigned long numa_migrate_retry; u64 node_stamp; /* migration stamp */ + u64 last_task_numa_placement; + u64 last_sum_exec_runtime; struct callback_head numa_work; struct list_head numa_entry; @@ -1483,15 +1490,22 @@ struct task_struct 
{ * Scheduling placement decisions are made based on the these counts. * The values remain static for the duration of a PTE scan */ - unsigned long *numa_faults; + unsigned long *numa_faults_memory; unsigned long total_numa_faults; /* * numa_faults_buffer records faults per node during the current - * scan window. When the scan completes, the counts in numa_faults - * decay and these values are copied. + * scan window. When the scan completes, the counts in + * numa_faults_memory decay and these values are copied. + */ + unsigned long *numa_faults_buffer_memory; + + /* + * Track the nodes the process was running on when a NUMA hinting + * fault was incurred. */ - unsigned long *numa_faults_buffer; + unsigned long *numa_faults_cpu; + unsigned long *numa_faults_buffer_cpu; /* * numa_faults_locality tracks if faults recorded during the last @@ -1596,8 +1610,8 @@ extern void task_numa_fault(int last_node, int node, int pages, int flags); extern pid_t task_numa_group_id(struct task_struct *p); extern void set_numabalancing_state(bool enabled); extern void task_numa_free(struct task_struct *p); - -extern unsigned int sysctl_numa_balancing_migrate_deferred; +extern bool should_numa_migrate_memory(struct task_struct *p, struct page *page, + int src_nid, int dst_cpu); #else static inline void task_numa_fault(int last_node, int node, int pages, int flags) @@ -1613,6 +1627,11 @@ static inline void set_numabalancing_state(bool enabled) static inline void task_numa_free(struct task_struct *p) { } +static inline bool should_numa_migrate_memory(struct task_struct *p, + struct page *page, int src_nid, int dst_cpu) +{ + return true; +} #endif static inline struct pid *task_pid(struct task_struct *task) @@ -2080,7 +2099,16 @@ static inline void sched_autogroup_exit(struct signal_struct *sig) { } extern bool yield_to(struct task_struct *p, bool preempt); extern void set_user_nice(struct task_struct *p, long nice); extern int task_prio(const struct task_struct *p); -extern int task_nice(const struct task_struct *p); +/** + * task_nice - return the nice value of a given task. + * @p: the task in question. + * + * Return: The nice value [ -20 ... 0 ... 19 ]. + */ +static inline int task_nice(const struct task_struct *p) +{ + return PRIO_TO_NICE((p)->static_prio); +} extern int can_nice(const struct task_struct *p, const int nice); extern int task_curr(const struct task_struct *p); extern int idle_cpu(int cpu); diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h new file mode 100644 index 000000000000..ac322583c820 --- /dev/null +++ b/include/linux/sched/prio.h @@ -0,0 +1,44 @@ +#ifndef _SCHED_PRIO_H +#define _SCHED_PRIO_H + +#define MAX_NICE 19 +#define MIN_NICE -20 +#define NICE_WIDTH (MAX_NICE - MIN_NICE + 1) + +/* + * Priority of a process goes from 0..MAX_PRIO-1, valid RT + * priority is 0..MAX_RT_PRIO-1, and SCHED_NORMAL/SCHED_BATCH + * tasks are in the range MAX_RT_PRIO..MAX_PRIO-1. Priority + * values are inverted: lower p->prio value means higher priority. + * + * The MAX_USER_RT_PRIO value allows the actual maximum + * RT priority to be separate from the value exported to + * user-space. This allows kernel threads to set their + * priority to a value higher than any user task. Note: + * MAX_RT_PRIO must not be smaller than MAX_USER_RT_PRIO. + */ + +#define MAX_USER_RT_PRIO 100 +#define MAX_RT_PRIO MAX_USER_RT_PRIO + +#define MAX_PRIO (MAX_RT_PRIO + NICE_WIDTH) +#define DEFAULT_PRIO (MAX_RT_PRIO + NICE_WIDTH / 2) + +/* + * Convert user-nice values [ -20 ... 0 ... 
19 ] + * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ], + * and back. + */ +#define NICE_TO_PRIO(nice) ((nice) + DEFAULT_PRIO) +#define PRIO_TO_NICE(prio) ((prio) - DEFAULT_PRIO) + +/* + * 'User priority' is the nice value converted to something we + * can work with better when scaling various scheduler parameters, + * it's a [ 0 ... 39 ] range. + */ +#define USER_PRIO(p) ((p)-MAX_RT_PRIO) +#define TASK_USER_PRIO(p) USER_PRIO((p)->static_prio) +#define MAX_USER_PRIO (USER_PRIO(MAX_PRIO)) + +#endif /* _SCHED_PRIO_H */ diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h index 34e4ebea8fce..6341f5be6e24 100644 --- a/include/linux/sched/rt.h +++ b/include/linux/sched/rt.h @@ -1,24 +1,7 @@ #ifndef _SCHED_RT_H #define _SCHED_RT_H -/* - * Priority of a process goes from 0..MAX_PRIO-1, valid RT - * priority is 0..MAX_RT_PRIO-1, and SCHED_NORMAL/SCHED_BATCH - * tasks are in the range MAX_RT_PRIO..MAX_PRIO-1. Priority - * values are inverted: lower p->prio value means higher priority. - * - * The MAX_USER_RT_PRIO value allows the actual maximum - * RT priority to be separate from the value exported to - * user-space. This allows kernel threads to set their - * priority to a value higher than any user task. Note: - * MAX_RT_PRIO must not be smaller than MAX_USER_RT_PRIO. - */ - -#define MAX_USER_RT_PRIO 100 -#define MAX_RT_PRIO MAX_USER_RT_PRIO - -#define MAX_PRIO (MAX_RT_PRIO + 40) -#define DEFAULT_PRIO (MAX_RT_PRIO + 20) +#include <linux/sched/prio.h> static inline int rt_prio(int prio) { @@ -35,6 +18,7 @@ static inline int rt_task(struct task_struct *p) #ifdef CONFIG_RT_MUTEXES extern int rt_mutex_getprio(struct task_struct *p); extern void rt_mutex_setprio(struct task_struct *p, int prio); +extern int rt_mutex_check_prio(struct task_struct *task, int newprio); extern struct task_struct *rt_mutex_get_top_task(struct task_struct *task); extern void rt_mutex_adjust_pi(struct task_struct *p); static inline bool tsk_is_pi_blocked(struct task_struct *tsk) @@ -46,6 +30,12 @@ static inline int rt_mutex_getprio(struct task_struct *p) { return p->normal_prio; } + +static inline int rt_mutex_check_prio(struct task_struct *task, int newprio) +{ + return 0; +} + static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *task) { return NULL; diff --git a/include/linux/security.h b/include/linux/security.h index 5623a7f965b7..2fc42d191f79 100644 --- a/include/linux/security.h +++ b/include/linux/security.h @@ -1040,6 +1040,7 @@ static inline void security_free_mnt_opts(struct security_mnt_opts *opts) * Allocate a security structure to the xp->security field; the security * field is initialized to NULL when the xfrm_policy is allocated. * Return 0 if operation was successful (memory to allocate, legal context) + * @gfp is to specify the context for the allocation * @xfrm_policy_clone_security: * @old_ctx contains an existing xfrm_sec_ctx. * @new_ctxp contains a new xfrm_sec_ctx being cloned from old. 
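The macros consolidated into the new <linux/sched/prio.h> above pin down the layout that the now-inline task_nice() relies on: 100 RT levels sit below a 40-level nice range, so MAX_PRIO is 140 and DEFAULT_PRIO is 120, with nice mapping linearly onto static priority. A self-contained check of the arithmetic, mirroring the kernel definitions in plain C:

#include <assert.h>
#include <stdio.h>

#define MAX_NICE         19
#define MIN_NICE        -20
#define NICE_WIDTH      (MAX_NICE - MIN_NICE + 1)       /* 40 */
#define MAX_RT_PRIO     100
#define MAX_PRIO        (MAX_RT_PRIO + NICE_WIDTH)      /* 140 */
#define DEFAULT_PRIO    (MAX_RT_PRIO + NICE_WIDTH / 2)  /* 120 */

#define NICE_TO_PRIO(nice)      ((nice) + DEFAULT_PRIO)
#define PRIO_TO_NICE(prio)      ((prio) - DEFAULT_PRIO)

int main(void)
{
        assert(NICE_TO_PRIO(0)   == 120);       /* default task */
        assert(NICE_TO_PRIO(-20) == 100);       /* highest nice priority */
        assert(NICE_TO_PRIO(19)  == 139);       /* lowest nice priority */
        assert(PRIO_TO_NICE(NICE_TO_PRIO(5)) == 5);     /* round trip */
        printf("SCHED_NORMAL prio range: %d..%d\n",
               NICE_TO_PRIO(MIN_NICE), NICE_TO_PRIO(MAX_NICE));
        return 0;
}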
@@ -1683,7 +1684,7 @@ struct security_operations { #ifdef CONFIG_SECURITY_NETWORK_XFRM int (*xfrm_policy_alloc_security) (struct xfrm_sec_ctx **ctxp, - struct xfrm_user_sec_ctx *sec_ctx); + struct xfrm_user_sec_ctx *sec_ctx, gfp_t gfp); int (*xfrm_policy_clone_security) (struct xfrm_sec_ctx *old_ctx, struct xfrm_sec_ctx **new_ctx); void (*xfrm_policy_free_security) (struct xfrm_sec_ctx *ctx); int (*xfrm_policy_delete_security) (struct xfrm_sec_ctx *ctx); @@ -2859,7 +2860,8 @@ static inline void security_skb_owned_by(struct sk_buff *skb, struct sock *sk) #ifdef CONFIG_SECURITY_NETWORK_XFRM -int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp, struct xfrm_user_sec_ctx *sec_ctx); +int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp, + struct xfrm_user_sec_ctx *sec_ctx, gfp_t gfp); int security_xfrm_policy_clone(struct xfrm_sec_ctx *old_ctx, struct xfrm_sec_ctx **new_ctxp); void security_xfrm_policy_free(struct xfrm_sec_ctx *ctx); int security_xfrm_policy_delete(struct xfrm_sec_ctx *ctx); @@ -2877,7 +2879,9 @@ void security_skb_classify_flow(struct sk_buff *skb, struct flowi *fl); #else /* CONFIG_SECURITY_NETWORK_XFRM */ -static inline int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp, struct xfrm_user_sec_ctx *sec_ctx) +static inline int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp, + struct xfrm_user_sec_ctx *sec_ctx, + gfp_t gfp) { return 0; } diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 5e1e6f2d98c2..15ede6a823a6 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -2451,8 +2451,8 @@ int skb_splice_bits(struct sk_buff *skb, unsigned int offset, unsigned int flags); void skb_copy_and_csum_dev(const struct sk_buff *skb, u8 *to); unsigned int skb_zerocopy_headlen(const struct sk_buff *from); -void skb_zerocopy(struct sk_buff *to, const struct sk_buff *from, - int len, int hlen); +int skb_zerocopy(struct sk_buff *to, struct sk_buff *from, + int len, int hlen); void skb_split(struct sk_buff *skb, struct sk_buff *skb1, const u32 len); int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen); void skb_scrub_packet(struct sk_buff *skb, bool xnet); diff --git a/include/linux/slab.h b/include/linux/slab.h index 9260abdd67df..b5b2df60299e 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -410,7 +410,7 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags) * * %GFP_NOWAIT - Allocation will not sleep. * - * %GFP_THISNODE - Allocate node-local memory only. + * %__GFP_THISNODE - Allocate node-local memory only. * * %GFP_DMA - Allocation suitable for DMA. * Should only be used for kmalloc() caches. Otherwise, use a diff --git a/include/linux/srcu.h b/include/linux/srcu.h index 9b058eecd403..a2783cb5d275 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright (C) IBM Corporation, 2006 * Copyright (C) Fujitsu, 2012 diff --git a/include/linux/torture.h b/include/linux/torture.h new file mode 100644 index 000000000000..b2e2b468e511 --- /dev/null +++ b/include/linux/torture.h @@ -0,0 +1,100 @@ +/* + * Common functions for in-kernel torture tests. 
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. + * + * Copyright IBM Corporation, 2014 + * + * Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com> + */ + +#ifndef __LINUX_TORTURE_H +#define __LINUX_TORTURE_H + +#include <linux/types.h> +#include <linux/cache.h> +#include <linux/spinlock.h> +#include <linux/threads.h> +#include <linux/cpumask.h> +#include <linux/seqlock.h> +#include <linux/lockdep.h> +#include <linux/completion.h> +#include <linux/debugobjects.h> +#include <linux/bug.h> +#include <linux/compiler.h> + +/* Definitions for a non-string torture-test module parameter. */ +#define torture_param(type, name, init, msg) \ + static type name = init; \ + module_param(name, type, 0444); \ + MODULE_PARM_DESC(name, msg); + +#define TORTURE_FLAG "-torture:" +#define TOROUT_STRING(s) \ + pr_alert("%s" TORTURE_FLAG s "\n", torture_type) +#define VERBOSE_TOROUT_STRING(s) \ + do { if (verbose) pr_alert("%s" TORTURE_FLAG " %s\n", torture_type, s); } while (0) +#define VERBOSE_TOROUT_ERRSTRING(s) \ + do { if (verbose) pr_alert("%s" TORTURE_FLAG "!!! %s\n", torture_type, s); } while (0) + +/* Definitions for a non-string torture-test module parameter. */ +#define torture_parm(type, name, init, msg) \ + static type name = init; \ + module_param(name, type, 0444); \ + MODULE_PARM_DESC(name, msg); + +/* Definitions for online/offline exerciser. */ +int torture_onoff_init(long ooholdoff, long oointerval); +char *torture_onoff_stats(char *page); +bool torture_onoff_failures(void); + +/* Low-rider random number generator. */ +struct torture_random_state { + unsigned long trs_state; + long trs_count; +}; +#define DEFINE_TORTURE_RANDOM(name) struct torture_random_state name = { 0, 0 } +unsigned long torture_random(struct torture_random_state *trsp); + +/* Task shuffler, which causes CPUs to occasionally go idle. */ +void torture_shuffle_task_register(struct task_struct *tp); +int torture_shuffle_init(long shuffint); + +/* Test auto-shutdown handling. */ +void torture_shutdown_absorb(const char *title); +int torture_shutdown_init(int ssecs, void (*cleanup)(void)); + +/* Task stuttering, which forces load/no-load transitions. */ +void stutter_wait(const char *title); +int torture_stutter_init(int s); + +/* Initialization and cleanup. 
*/ +void torture_init_begin(char *ttype, bool v, int *runnable); +void torture_init_end(void); +bool torture_cleanup(void); +bool torture_must_stop(void); +bool torture_must_stop_irq(void); +void torture_kthread_stopping(char *title); +int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m, + char *f, struct task_struct **tp); +void _torture_stop_kthread(char *m, struct task_struct **tp); + +#define torture_create_kthread(n, arg, tp) \ + _torture_create_kthread(n, (arg), #n, "Creating " #n " task", \ + "Failed to create " #n, &(tp)) +#define torture_stop_kthread(n, tp) \ + _torture_stop_kthread("Stopping " #n " task", &(tp)) + +#endif /* __LINUX_TORTURE_H */ diff --git a/include/linux/usb/cdc_ncm.h b/include/linux/usb/cdc_ncm.h index c3fa80745996..2c14d9cdd57a 100644 --- a/include/linux/usb/cdc_ncm.h +++ b/include/linux/usb/cdc_ncm.h @@ -88,6 +88,7 @@ #define cdc_ncm_data_intf_is_mbim(x) ((x)->desc.bInterfaceProtocol == USB_CDC_MBIM_PROTO_NTB) struct cdc_ncm_ctx { + struct usb_cdc_ncm_ntb_parameters ncm_parm; struct hrtimer tx_timer; struct tasklet_struct bh; diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h index e303eef94dd5..0662e98fef72 100644 --- a/include/linux/usb/usbnet.h +++ b/include/linux/usb/usbnet.h @@ -30,7 +30,7 @@ struct usbnet { struct driver_info *driver_info; const char *driver_name; void *driver_priv; - wait_queue_head_t *wait; + wait_queue_head_t wait; struct mutex phy_mutex; unsigned char suspend_count; unsigned char pkt_cnt, pkt_err; diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h index 9650a3ffd2d2..b4956a5fcc3f 100644 --- a/include/net/if_inet6.h +++ b/include/net/if_inet6.h @@ -31,8 +31,10 @@ #define IF_PREFIX_AUTOCONF 0x02 enum { + INET6_IFADDR_STATE_PREDAD, INET6_IFADDR_STATE_DAD, INET6_IFADDR_STATE_POSTDAD, + INET6_IFADDR_STATE_ERRDAD, INET6_IFADDR_STATE_UP, INET6_IFADDR_STATE_DEAD, }; @@ -58,7 +60,7 @@ struct inet6_ifaddr { unsigned long cstamp; /* created timestamp */ unsigned long tstamp; /* updated timestamp */ - struct timer_list dad_timer; + struct delayed_work dad_work; struct inet6_dev *idev; struct rt6_info *rt; diff --git a/include/net/sock.h b/include/net/sock.h index 5c3f7c3624aa..b9586a137cad 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1488,6 +1488,11 @@ static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb) */ #define sock_owned_by_user(sk) ((sk)->sk_lock.owned) +static inline void sock_release_ownership(struct sock *sk) +{ + sk->sk_lock.owned = 0; +} + /* * Macro so as to not evaluate some arguments when * lockdep is not enabled. @@ -2186,7 +2191,6 @@ static inline void sock_recv_ts_and_drops(struct msghdr *msg, struct sock *sk, { #define FLAGS_TS_OR_DROPS ((1UL << SOCK_RXQ_OVFL) | \ (1UL << SOCK_RCVTSTAMP) | \ - (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE) | \ (1UL << SOCK_TIMESTAMPING_SOFTWARE) | \ (1UL << SOCK_TIMESTAMPING_RAW_HARDWARE) | \ (1UL << SOCK_TIMESTAMPING_SYS_HARDWARE)) diff --git a/include/net/tcp.h b/include/net/tcp.h index 8c4dd63134d4..743accec6c76 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -480,20 +480,21 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb, #ifdef CONFIG_SYN_COOKIES #include <linux/ktime.h> -/* Syncookies use a monotonic timer which increments every 64 seconds. +/* Syncookies use a monotonic timer which increments every 60 seconds. * This counter is used both as a hash input and partially encoded into * the cookie value. 
A cookie is only validated further if the delta * between the current counter value and the encoded one is less than this, - * i.e. a sent cookie is valid only at most for 128 seconds (or less if + * i.e. a sent cookie is valid only at most for 2*60 seconds (or less if * the counter advances immediately after a cookie is generated). */ #define MAX_SYNCOOKIE_AGE 2 static inline u32 tcp_cookie_time(void) { - struct timespec now; - getnstimeofday(&now); - return now.tv_sec >> 6; /* 64 seconds granularity */ + u64 val = get_jiffies_64(); + + do_div(val, 60 * HZ); + return val; } u32 __cookie_v4_init_sequence(const struct iphdr *iph, const struct tcphdr *th, diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h index 1a8b28db3775..1ee19a24cc5f 100644 --- a/include/trace/ftrace.h +++ b/include/trace/ftrace.h @@ -310,15 +310,12 @@ static struct trace_event_functions ftrace_event_type_funcs_##call = { \ #undef __array #define __array(type, item, len) \ do { \ - mutex_lock(&event_storage_mutex); \ + char *type_str = #type"["__stringify(len)"]"; \ BUILD_BUG_ON(len > MAX_FILTER_STR_VAL); \ - snprintf(event_storage, sizeof(event_storage), \ - "%s[%d]", #type, len); \ - ret = trace_define_field(event_call, event_storage, #item, \ + ret = trace_define_field(event_call, type_str, #item, \ offsetof(typeof(field), item), \ sizeof(field.item), \ is_signed_type(type), FILTER_OTHER); \ - mutex_unlock(&event_storage_mutex); \ if (ret) \ return ret; \ } while (0); diff --git a/init/Kconfig b/init/Kconfig index 009a797dd242..d56cb03c1b49 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1387,6 +1387,13 @@ config FUTEX support for "fast userspace mutexes". The resulting kernel may not run glibc-based applications correctly. +config HAVE_FUTEX_CMPXCHG + bool + help + Architectures should select this if futex_atomic_cmpxchg_inatomic() + is implemented and always working. This removes a couple of runtime + checks. 
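The new HAVE_FUTEX_CMPXCHG option above pairs with the linux/futex.h change earlier in this diff: an architecture that selects it (e.g. "select HAVE_FUTEX_CMPXCHG if FUTEX" in its Kconfig) promises that futex_atomic_cmpxchg_inatomic() always works, so futex_cmpxchg_enabled becomes the compile-time constant 1 and the boot-time probe plus every runtime check compiles away. A sketch of the effect (illustration only, not new kernel code):

/* With CONFIG_HAVE_FUTEX_CMPXCHG=y, linux/futex.h now provides: */
#define futex_cmpxchg_enabled 1

/* ...so guards like this one lose their branch entirely: */
static int pi_futex_op(void)
{
        if (!futex_cmpxchg_enabled)     /* constant-folded to if (0) */
                return -ENOSYS;
        /* ... work that needs a functional atomic cmpxchg ... */
        return 0;
}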
+ config EPOLL bool "Enable eventpoll support" if EXPERT default y diff --git a/init/main.c b/init/main.c index eb03090cdced..9c7fd4c9249f 100644 --- a/init/main.c +++ b/init/main.c @@ -561,7 +561,6 @@ asmlinkage void __init start_kernel(void) init_timers(); hrtimers_init(); softirq_init(); - acpi_early_init(); timekeeping_init(); time_init(); sched_clock_postinit(); @@ -613,6 +612,7 @@ asmlinkage void __init start_kernel(void) calibrate_delay(); pidmap_init(); anon_vma_init(); + acpi_early_init(); #ifdef CONFIG_X86 if (efi_enabled(EFI_RUNTIME_SERVICES)) efi_enter_virtual_mode(); diff --git a/ipc/msg.c b/ipc/msg.c index 245db1140ad6..649853105a5d 100644 --- a/ipc/msg.c +++ b/ipc/msg.c @@ -901,6 +901,8 @@ long do_msgrcv(int msqid, void __user *buf, size_t bufsz, long msgtyp, int msgfl return -EINVAL; if (msgflg & MSG_COPY) { + if ((msgflg & MSG_EXCEPT) || !(msgflg & IPC_NOWAIT)) + return -EINVAL; copy = prepare_copy(buf, min_t(size_t, bufsz, ns->msg_ctlmax)); if (IS_ERR(copy)) return PTR_ERR(copy); diff --git a/kernel/Makefile b/kernel/Makefile index bc010ee272b6..4fd847488b76 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -22,7 +22,6 @@ obj-y += sched/ obj-y += locking/ obj-y += power/ obj-y += printk/ -obj-y += cpu/ obj-y += irq/ obj-y += rcu/ @@ -93,6 +92,7 @@ obj-$(CONFIG_PADATA) += padata.o obj-$(CONFIG_CRASH_DUMP) += crash_dump.o obj-$(CONFIG_JUMP_LABEL) += jump_label.o obj-$(CONFIG_CONTEXT_TRACKING) += context_tracking.o +obj-$(CONFIG_TORTURE_TEST) += torture.o $(obj)/configs.o: $(obj)/config_data.h diff --git a/kernel/audit.c b/kernel/audit.c index 34c5a2310fbf..95a20f3f52f1 100644 --- a/kernel/audit.c +++ b/kernel/audit.c @@ -182,7 +182,7 @@ struct audit_buffer { struct audit_reply { __u32 portid; - pid_t pid; + struct net *net; struct sk_buff *skb; }; @@ -500,7 +500,7 @@ int audit_send_list(void *_dest) { struct audit_netlink_list *dest = _dest; struct sk_buff *skb; - struct net *net = get_net_ns_by_pid(dest->pid); + struct net *net = dest->net; struct audit_net *aunet = net_generic(net, audit_net_id); /* wait for parent to finish and send an ACK */ @@ -510,6 +510,7 @@ int audit_send_list(void *_dest) while ((skb = __skb_dequeue(&dest->q)) != NULL) netlink_unicast(aunet->nlsk, skb, dest->portid, 0); + put_net(net); kfree(dest); return 0; @@ -543,7 +544,7 @@ out_kfree_skb: static int audit_send_reply_thread(void *arg) { struct audit_reply *reply = (struct audit_reply *)arg; - struct net *net = get_net_ns_by_pid(reply->pid); + struct net *net = reply->net; struct audit_net *aunet = net_generic(net, audit_net_id); mutex_lock(&audit_cmd_mutex); @@ -552,12 +553,13 @@ static int audit_send_reply_thread(void *arg) /* Ignore failure. It'll only happen if the sender goes away, because our timeout is set to infinite. */ netlink_unicast(aunet->nlsk , reply->skb, reply->portid, 0); + put_net(net); kfree(reply); return 0; } /** * audit_send_reply - send an audit reply message via netlink - * @portid: netlink port to which to send reply + * @request_skb: skb of request we are replying to (used to target the reply) * @seq: sequence number * @type: audit message type * @done: done (last) flag @@ -568,9 +570,11 @@ static int audit_send_reply_thread(void *arg) * Allocates an skb, builds the netlink message, and sends it to the port id. * No failure notifications. 
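The audit changes in this region stop guessing the reply's network namespace from a pid (get_net_ns_by_pid()) and instead take it directly from the socket the request skb arrived on, pinning it with get_net() because the reply is sent from a kthread that can outlive the requesting task. The pattern, sketched with hypothetical names:

#include <linux/netlink.h>
#include <linux/skbuff.h>
#include <net/net_namespace.h>
#include <net/sock.h>

struct reply_ctx {
        struct net *net;        /* pinned until the reply is sent */
        u32 portid;
};

static void reply_ctx_init(struct reply_ctx *r, struct sk_buff *request_skb)
{
        /* The namespace of the socket the request arrived on... */
        struct net *net = sock_net(NETLINK_CB(request_skb).sk);

        /* ...is pinned here, since the async sender may run after the
         * requesting task has exited. */
        r->net = get_net(net);
        r->portid = NETLINK_CB(request_skb).portid;
}

static void reply_ctx_done(struct reply_ctx *r)
{
        put_net(r->net);        /* balance get_net() once the skb is sent */
}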
*/ -static void audit_send_reply(__u32 portid, int seq, int type, int done, +static void audit_send_reply(struct sk_buff *request_skb, int seq, int type, int done, int multi, const void *payload, int size) { + u32 portid = NETLINK_CB(request_skb).portid; + struct net *net = sock_net(NETLINK_CB(request_skb).sk); struct sk_buff *skb; struct task_struct *tsk; struct audit_reply *reply = kmalloc(sizeof(struct audit_reply), @@ -583,8 +587,8 @@ static void audit_send_reply(__u32 portid, int seq, int type, int done, if (!skb) goto out; + reply->net = get_net(net); reply->portid = portid; - reply->pid = task_pid_vnr(current); reply->skb = skb; tsk = kthread_run(audit_send_reply_thread, reply, "audit_send_reply"); @@ -604,9 +608,19 @@ static int audit_netlink_ok(struct sk_buff *skb, u16 msg_type) int err = 0; /* Only support the initial namespaces for now. */ + /* + * We return ECONNREFUSED because it tricks userspace into thinking + * that audit was not configured into the kernel. Lots of users + * configure their PAM stack (because that's what the distro does) + * to reject login if unable to send messages to audit. If we return + * ECONNREFUSED the PAM stack thinks the kernel does not have audit + * configured in and will let login proceed. If we return EPERM + * userspace will reject all logins. This should be removed when we + * support non init namespaces!! + */ if ((current_user_ns() != &init_user_ns) || (task_active_pid_ns(current) != &init_pid_ns)) - return -EPERM; + return -ECONNREFUSED; switch (msg_type) { case AUDIT_LIST: @@ -673,8 +687,7 @@ static int audit_get_feature(struct sk_buff *skb) seq = nlmsg_hdr(skb)->nlmsg_seq; - audit_send_reply(NETLINK_CB(skb).portid, seq, AUDIT_GET, 0, 0, - &af, sizeof(af)); + audit_send_reply(skb, seq, AUDIT_GET, 0, 0, &af, sizeof(af)); return 0; } @@ -794,8 +807,7 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh) s.backlog = skb_queue_len(&audit_skb_queue); s.version = AUDIT_VERSION_LATEST; s.backlog_wait_time = audit_backlog_wait_time; - audit_send_reply(NETLINK_CB(skb).portid, seq, AUDIT_GET, 0, 0, - &s, sizeof(s)); + audit_send_reply(skb, seq, AUDIT_GET, 0, 0, &s, sizeof(s)); break; } case AUDIT_SET: { @@ -905,7 +917,7 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh) seq, data, nlmsg_len(nlh)); break; case AUDIT_LIST_RULES: - err = audit_list_rules_send(NETLINK_CB(skb).portid, seq); + err = audit_list_rules_send(skb, seq); break; case AUDIT_TRIM: audit_trim_trees(); @@ -970,8 +982,8 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh) memcpy(sig_data->ctx, ctx, len); security_release_secctx(ctx, len); } - audit_send_reply(NETLINK_CB(skb).portid, seq, AUDIT_SIGNAL_INFO, - 0, 0, sig_data, sizeof(*sig_data) + len); + audit_send_reply(skb, seq, AUDIT_SIGNAL_INFO, 0, 0, + sig_data, sizeof(*sig_data) + len); kfree(sig_data); break; case AUDIT_TTY_GET: { @@ -983,8 +995,7 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh) s.log_passwd = tsk->signal->audit_tty_log_passwd; spin_unlock(&tsk->sighand->siglock); - audit_send_reply(NETLINK_CB(skb).portid, seq, - AUDIT_TTY_GET, 0, 0, &s, sizeof(s)); + audit_send_reply(skb, seq, AUDIT_TTY_GET, 0, 0, &s, sizeof(s)); break; } case AUDIT_TTY_SET: { diff --git a/kernel/audit.h b/kernel/audit.h index 57cc64d67718..8df132214606 100644 --- a/kernel/audit.h +++ b/kernel/audit.h @@ -247,7 +247,7 @@ extern void audit_panic(const char *message); struct audit_netlink_list { __u32 portid; - pid_t pid; + struct net *net; struct 
sk_buff_head q; }; diff --git a/kernel/auditfilter.c b/kernel/auditfilter.c index 14a78cca384e..92062fd6cc8c 100644 --- a/kernel/auditfilter.c +++ b/kernel/auditfilter.c @@ -29,6 +29,8 @@ #include <linux/sched.h> #include <linux/slab.h> #include <linux/security.h> +#include <net/net_namespace.h> +#include <net/sock.h> #include "audit.h" /* @@ -1065,11 +1067,13 @@ int audit_rule_change(int type, __u32 portid, int seq, void *data, /** * audit_list_rules_send - list the audit rules - * @portid: target portid for netlink audit messages + * @request_skb: skb of request we are replying to (used to target the reply) * @seq: netlink audit message sequence (serial) number */ -int audit_list_rules_send(__u32 portid, int seq) +int audit_list_rules_send(struct sk_buff *request_skb, int seq) { + u32 portid = NETLINK_CB(request_skb).portid; + struct net *net = sock_net(NETLINK_CB(request_skb).sk); struct task_struct *tsk; struct audit_netlink_list *dest; int err = 0; @@ -1083,8 +1087,8 @@ int audit_list_rules_send(__u32 portid, int seq) dest = kmalloc(sizeof(struct audit_netlink_list), GFP_KERNEL); if (!dest) return -ENOMEM; + dest->net = get_net(net); dest->portid = portid; - dest->pid = task_pid_vnr(current); skb_queue_head_init(&dest->q); mutex_lock(&audit_filter_mutex); diff --git a/kernel/cgroup.c b/kernel/cgroup.c index 105f273b6f86..0c753ddd223b 100644 --- a/kernel/cgroup.c +++ b/kernel/cgroup.c @@ -4112,17 +4112,17 @@ static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss) err = percpu_ref_init(&css->refcnt, css_release); if (err) - goto err_free; + goto err_free_css; init_css(css, ss, cgrp); err = cgroup_populate_dir(cgrp, 1 << ss->subsys_id); if (err) - goto err_free; + goto err_free_percpu_ref; err = online_css(css); if (err) - goto err_free; + goto err_clear_dir; dget(cgrp->dentry); css_get(css->parent); @@ -4138,8 +4138,11 @@ static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss) return 0; -err_free: +err_clear_dir: + cgroup_clear_dir(css->cgroup, 1 << css->ss->subsys_id); +err_free_percpu_ref: percpu_ref_cancel_init(&css->refcnt); +err_free_css: ss->css_free(css); return err; } diff --git a/kernel/cpu/Makefile b/kernel/cpu/Makefile deleted file mode 100644 index 59ab052ef7a0..000000000000 --- a/kernel/cpu/Makefile +++ /dev/null @@ -1 +0,0 @@ -obj-y = idle.o diff --git a/kernel/events/core.c b/kernel/events/core.c index fa0b2d4ad83c..661951ab8ae7 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -231,11 +231,29 @@ int perf_cpu_time_max_percent_handler(struct ctl_table *table, int write, #define NR_ACCUMULATED_SAMPLES 128 static DEFINE_PER_CPU(u64, running_sample_length); -void perf_sample_event_took(u64 sample_len_ns) +static void perf_duration_warn(struct irq_work *w) { + u64 allowed_ns = ACCESS_ONCE(perf_sample_allowed_ns); u64 avg_local_sample_len; u64 local_samples_len; + + local_samples_len = __get_cpu_var(running_sample_length); + avg_local_sample_len = local_samples_len/NR_ACCUMULATED_SAMPLES; + + printk_ratelimited(KERN_WARNING + "perf interrupt took too long (%lld > %lld), lowering " + "kernel.perf_event_max_sample_rate to %d\n", + avg_local_sample_len, allowed_ns >> 1, + sysctl_perf_event_sample_rate); +} + +static DEFINE_IRQ_WORK(perf_duration_work, perf_duration_warn); + +void perf_sample_event_took(u64 sample_len_ns) +{ u64 allowed_ns = ACCESS_ONCE(perf_sample_allowed_ns); + u64 avg_local_sample_len; + u64 local_samples_len; if (allowed_ns == 0) return; @@ -263,13 +281,14 @@ void perf_sample_event_took(u64 sample_len_ns) 
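
[Editorial note] The create_css() change above is the stock kernel error-unwind refactoring: one catch-all err_free label becomes one label per acquired resource, so each failure point releases exactly what was set up before it, in reverse order; the bug being fixed is that a failure in online_css() previously left the just-populated directory behind. A generic sketch of the idiom (all names hypothetical):

	struct my_obj;
	extern int acquire_a(struct my_obj *), acquire_b(struct my_obj *);
	extern void release_a(struct my_obj *);

	static int setup_object(struct my_obj *obj)
	{
		int err;

		err = acquire_a(obj);
		if (err)
			goto err_out;		/* nothing to undo yet */
		err = acquire_b(obj);
		if (err)
			goto err_release_a;	/* undo only what succeeded */
		return 0;

	err_release_a:
		release_a(obj);
	err_out:
		return err;
	}
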
sysctl_perf_event_sample_rate = max_samples_per_tick * HZ; perf_sample_period_ns = NSEC_PER_SEC / sysctl_perf_event_sample_rate; - printk_ratelimited(KERN_WARNING - "perf samples too long (%lld > %lld), lowering " - "kernel.perf_event_max_sample_rate to %d\n", - avg_local_sample_len, allowed_ns, - sysctl_perf_event_sample_rate); - update_perf_cpu_limits(); + + if (!irq_work_queue(&perf_duration_work)) { + early_printk("perf interrupt took too long (%lld > %lld), lowering " + "kernel.perf_event_max_sample_rate to %d\n", + avg_local_sample_len, allowed_ns >> 1, + sysctl_perf_event_sample_rate); + } } static atomic64_t perf_event_id; @@ -1714,7 +1733,7 @@ group_sched_in(struct perf_event *group_event, struct perf_event_context *ctx) { struct perf_event *event, *partial_group = NULL; - struct pmu *pmu = group_event->pmu; + struct pmu *pmu = ctx->pmu; u64 now = ctx->time; bool simulate = false; @@ -2563,8 +2582,6 @@ static void perf_branch_stack_sched_in(struct task_struct *prev, if (cpuctx->ctx.nr_branch_stack > 0 && pmu->flush_branch_stack) { - pmu = cpuctx->ctx.pmu; - perf_ctx_lock(cpuctx, cpuctx->task_ctx); perf_pmu_disable(pmu); @@ -6294,7 +6311,7 @@ static int perf_event_idx_default(struct perf_event *event) * Ensures all contexts with the same task_ctx_nr have the same * pmu_cpu_context too. */ -static void *find_pmu_context(int ctxn) +static struct perf_cpu_context __percpu *find_pmu_context(int ctxn) { struct pmu *pmu; diff --git a/kernel/fork.c b/kernel/fork.c index a17621c6cd42..332688e5e7b4 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -237,6 +237,7 @@ void __put_task_struct(struct task_struct *tsk) WARN_ON(atomic_read(&tsk->usage)); WARN_ON(tsk == current); + task_numa_free(tsk); security_task_free(tsk); exit_creds(tsk); delayacct_tsk_free(tsk); diff --git a/kernel/futex.c b/kernel/futex.c index 44a1261cb9ff..67dacaf93e56 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -157,7 +157,9 @@ * enqueue. */ +#ifndef CONFIG_HAVE_FUTEX_CMPXCHG int __read_mostly futex_cmpxchg_enabled; +#endif /* * Futex flags used to encode options to functions and preserve them across @@ -234,6 +236,7 @@ static const struct futex_q futex_q_init = { * waiting on a futex. */ struct futex_hash_bucket { + atomic_t waiters; spinlock_t lock; struct plist_head chain; } ____cacheline_aligned_in_smp; @@ -253,22 +256,37 @@ static inline void futex_get_mm(union futex_key *key) smp_mb__after_atomic_inc(); } -static inline bool hb_waiters_pending(struct futex_hash_bucket *hb) +/* + * Reflects a new waiter being added to the waitqueue. + */ +static inline void hb_waiters_inc(struct futex_hash_bucket *hb) { #ifdef CONFIG_SMP + atomic_inc(&hb->waiters); /* - * Tasks trying to enter the critical region are most likely - * potential waiters that will be added to the plist. Ensure - * that wakers won't miss to-be-slept tasks in the window between - * the wait call and the actual plist_add. + * Full barrier (A), see the ordering comment above. */ - if (spin_is_locked(&hb->lock)) - return true; - smp_rmb(); /* Make sure we check the lock state first */ + smp_mb__after_atomic_inc(); +#endif +} + +/* + * Reflects a waiter being removed from the waitqueue by wakeup + * paths. 
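
[Editorial note] perf_sample_event_took() runs from the PMU interrupt path, where a direct printk_ratelimited() is unsafe, so the hunk above splits the warning out into an irq_work; with irq_work_queue() now returning bool (see the kernel/irq_work.c hunk further down), the hot path can fall back to early_printk() when the work is still pending from a previous overrun. A sketch of the deferral pattern, assuming the post-patch bool-returning API:

	#include <linux/irq_work.h>
	#include <linux/printk.h>

	static void my_warn_fn(struct irq_work *w)
	{
		/* Runs later, from the irq_work interrupt: a context
		 * where printk_ratelimited() is fine. */
		printk_ratelimited(KERN_WARNING "slow path detected\n");
	}

	static DEFINE_IRQ_WORK(my_warn_work, my_warn_fn);

	static void hot_path_detected_problem(void)
	{
		/* NMI-safe: only claims the work and raises an IPI.
		 * Returns false if the work was already queued. */
		if (!irq_work_queue(&my_warn_work))
			early_printk("my_warn_work already pending\n");
	}
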
+ */ +static inline void hb_waiters_dec(struct futex_hash_bucket *hb) +{ +#ifdef CONFIG_SMP + atomic_dec(&hb->waiters); +#endif +} - return !plist_head_empty(&hb->chain); +static inline int hb_waiters_pending(struct futex_hash_bucket *hb) +{ +#ifdef CONFIG_SMP + return atomic_read(&hb->waiters); #else - return true; + return 1; #endif } @@ -954,6 +972,7 @@ static void __unqueue_futex(struct futex_q *q) hb = container_of(q->lock_ptr, struct futex_hash_bucket, lock); plist_del(&q->list, &hb->chain); + hb_waiters_dec(hb); } /* @@ -1257,7 +1276,9 @@ void requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1, */ if (likely(&hb1->chain != &hb2->chain)) { plist_del(&q->list, &hb1->chain); + hb_waiters_dec(hb1); plist_add(&q->list, &hb2->chain); + hb_waiters_inc(hb2); q->lock_ptr = &hb2->lock; } get_futex_key_refs(key2); @@ -1600,6 +1621,17 @@ static inline struct futex_hash_bucket *queue_lock(struct futex_q *q) struct futex_hash_bucket *hb; hb = hash_futex(&q->key); + + /* + * Increment the counter before taking the lock so that + * a potential waker won't miss a to-be-slept task that is + * waiting for the spinlock. This is safe as all queue_lock() + * users end up calling queue_me(). Similarly, for housekeeping, + * decrement the counter at queue_unlock() when some error has + * occurred and we don't end up adding the task to the list. + */ + hb_waiters_inc(hb); + q->lock_ptr = &hb->lock; spin_lock(&hb->lock); /* implies MB (A) */ @@ -1611,6 +1643,7 @@ queue_unlock(struct futex_hash_bucket *hb) __releases(&hb->lock) { spin_unlock(&hb->lock); + hb_waiters_dec(hb); } /** @@ -2342,6 +2375,7 @@ int handle_early_requeue_pi_wakeup(struct futex_hash_bucket *hb, * Unqueue the futex_q and determine which it was. */ plist_del(&q->list, &hb->chain); + hb_waiters_dec(hb); /* Handle spurious wakeups gracefully */ ret = -EWOULDBLOCK; @@ -2843,9 +2877,28 @@ SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val, return do_futex(uaddr, op, val, tp, uaddr2, val2, val3); } -static int __init futex_init(void) +static void __init futex_detect_cmpxchg(void) { +#ifndef CONFIG_HAVE_FUTEX_CMPXCHG u32 curval; + + /* + * This will fail and we want it. Some arch implementations do + * runtime detection of the futex_atomic_cmpxchg_inatomic() + * functionality. We want to know that before we call in any + * of the complex code paths. Also we want to prevent + * registration of robust lists in that case. NULL is + * guaranteed to fault and we get -EFAULT on functional + * implementation, the non-functional ones will return + * -ENOSYS. + */ + if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT) + futex_cmpxchg_enabled = 1; +#endif +} + +static int __init futex_init(void) +{ unsigned int futex_shift; unsigned long i; @@ -2861,20 +2914,11 @@ static int __init futex_init(void) &futex_shift, NULL, futex_hashsize, futex_hashsize); futex_hashsize = 1UL << futex_shift; - /* - * This will fail and we want it. Some arch implementations do - * runtime detection of the futex_atomic_cmpxchg_inatomic() - * functionality. We want to know that before we call in any - * of the complex code paths. Also we want to prevent - * registration of robust lists in that case. NULL is - * guaranteed to fault and we get -EFAULT on functional - * implementation, the non-functional ones will return - * -ENOSYS. 
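
[Editorial note] Taken together, the futex hunks replace the old spin_is_locked()/plist heuristic with an explicit per-bucket atomic waiter count: waiters increment before taking the bucket lock, every dequeue path decrements, and wakers can test the counter without touching hb->lock at all when nobody is waiting. A reduced sketch of the protocol (struct bucket is a stand-in for futex_hash_bucket; the barrier mirrors the patch's ordering comment):

	#include <linux/atomic.h>
	#include <linux/plist.h>
	#include <linux/spinlock.h>

	struct bucket {
		atomic_t waiters;
		spinlock_t lock;
		struct plist_head chain;
	};

	static void waiter_enqueue(struct bucket *hb)
	{
		atomic_inc(&hb->waiters);	/* visible before we sleep */
		smp_mb__after_atomic_inc();	/* pairs with the waker's read */
		spin_lock(&hb->lock);
		/* ... plist_add() the waiter and eventually schedule() ... */
	}

	static bool waker_should_scan(struct bucket *hb)
	{
		/* The common no-waiter case now skips hb->lock entirely. */
		return atomic_read(&hb->waiters) != 0;
	}

Note the housekeeping rule spelled out in queue_lock(): because the increment happens before the lock, every early-exit path (queue_unlock(), requeue, early requeue-PI wakeup) must decrement to keep the count honest.
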
- */ - if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT) - futex_cmpxchg_enabled = 1; + + futex_detect_cmpxchg(); for (i = 0; i < futex_hashsize; i++) { + atomic_set(&futex_queues[i].waiters, 0); plist_head_init(&futex_queues[i].chain); spin_lock_init(&futex_queues[i].lock); } diff --git a/kernel/irq_work.c b/kernel/irq_work.c index 55fcce6065cf..a82170e2fa78 100644 --- a/kernel/irq_work.c +++ b/kernel/irq_work.c @@ -61,11 +61,11 @@ void __weak arch_irq_work_raise(void) * * Can be re-enqueued while the callback is still in progress. */ -void irq_work_queue(struct irq_work *work) +bool irq_work_queue(struct irq_work *work) { /* Only queue if not already pending */ if (!irq_work_claim(work)) - return; + return false; /* Queue the entry and raise the IPI if needed. */ preempt_disable(); @@ -83,6 +83,8 @@ void irq_work_queue(struct irq_work *work) } preempt_enable(); + + return true; } EXPORT_SYMBOL_GPL(irq_work_queue); diff --git a/kernel/ksysfs.c b/kernel/ksysfs.c index d945a949760f..e660964086e2 100644 --- a/kernel/ksysfs.c +++ b/kernel/ksysfs.c @@ -19,6 +19,8 @@ #include <linux/sched.h> #include <linux/capability.h> +#include <linux/rcupdate.h> /* rcu_expedited */ + #define KERNEL_ATTR_RO(_name) \ static struct kobj_attribute _name##_attr = __ATTR_RO(_name) diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile index baab8e5e7f66..306a76b51e0f 100644 --- a/kernel/locking/Makefile +++ b/kernel/locking/Makefile @@ -1,5 +1,5 @@ -obj-y += mutex.o semaphore.o rwsem.o lglock.o +obj-y += mutex.o semaphore.o rwsem.o lglock.o mcs_spinlock.o ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_lockdep.o = -pg @@ -23,3 +23,4 @@ obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o obj-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem-xadd.o obj-$(CONFIG_PERCPU_RWSEM) += percpu-rwsem.o +obj-$(CONFIG_LOCK_TORTURE_TEST) += locktorture.o diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index eb8a54783fa0..bf0c6b0dd9c5 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -1936,12 +1936,12 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next) for (;;) { int distance = curr->lockdep_depth - depth + 1; - hlock = curr->held_locks + depth-1; + hlock = curr->held_locks + depth - 1; /* * Only non-recursive-read entries get new dependencies * added: */ - if (hlock->read != 2) { + if (hlock->read != 2 && hlock->check) { if (!check_prev_add(curr, hlock, next, distance, trylock_loop)) return 0; @@ -2098,7 +2098,7 @@ static int validate_chain(struct task_struct *curr, struct lockdep_map *lock, * (If lookup_chain_cache() returns with 1 it acquires * graph_lock for us) */ - if (!hlock->trylock && (hlock->check == 2) && + if (!hlock->trylock && hlock->check && lookup_chain_cache(curr, hlock, chain_key)) { /* * Check whether last held lock: @@ -2517,7 +2517,7 @@ mark_held_locks(struct task_struct *curr, enum mark_type mark) BUG_ON(usage_bit >= LOCK_USAGE_STATES); - if (hlock_class(hlock)->key == __lockdep_no_validate__.subkeys) + if (!hlock->check) continue; if (!mark_lock(curr, hlock, usage_bit)) @@ -3055,9 +3055,6 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, int class_idx; u64 chain_key; - if (!prove_locking) - check = 1; - if (unlikely(!debug_locks)) return 0; @@ -3069,8 +3066,8 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, if (DEBUG_LOCKS_WARN_ON(!irqs_disabled())) return 0; - if (lock->key == &__lockdep_no_validate__) - check = 1; + if 
(!prove_locking || lock->key == &__lockdep_no_validate__) + check = 0; if (subclass < NR_LOCKDEP_CACHING_CLASSES) class = lock->class_cache[subclass]; @@ -3138,7 +3135,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, hlock->holdtime_stamp = lockstat_clock(); #endif - if (check == 2 && !mark_irqflags(curr, hlock)) + if (check && !mark_irqflags(curr, hlock)) return 0; /* mark it as used: */ diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c new file mode 100644 index 000000000000..f26b1a18e34e --- /dev/null +++ b/kernel/locking/locktorture.c @@ -0,0 +1,452 @@ +/* + * Module-based torture test facility for locking + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. + * + * Copyright (C) IBM Corporation, 2014 + * + * Author: Paul E. McKenney <paulmck@us.ibm.com> + * Based on kernel/rcu/torture.c. + */ +#include <linux/types.h> +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/module.h> +#include <linux/kthread.h> +#include <linux/err.h> +#include <linux/spinlock.h> +#include <linux/smp.h> +#include <linux/interrupt.h> +#include <linux/sched.h> +#include <linux/atomic.h> +#include <linux/bitops.h> +#include <linux/completion.h> +#include <linux/moduleparam.h> +#include <linux/percpu.h> +#include <linux/notifier.h> +#include <linux/reboot.h> +#include <linux/freezer.h> +#include <linux/cpu.h> +#include <linux/delay.h> +#include <linux/stat.h> +#include <linux/slab.h> +#include <linux/trace_clock.h> +#include <asm/byteorder.h> +#include <linux/torture.h> + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Paul E. 
McKenney <paulmck@us.ibm.com>"); + +torture_param(int, nwriters_stress, -1, + "Number of write-locking stress-test threads"); +torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)"); +torture_param(int, onoff_interval, 0, + "Time between CPU hotplugs (s), 0=disable"); +torture_param(int, shuffle_interval, 3, + "Number of jiffies between shuffles, 0=disable"); +torture_param(int, shutdown_secs, 0, "Shutdown time (j), <= zero to disable."); +torture_param(int, stat_interval, 60, + "Number of seconds between stats printk()s"); +torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable"); +torture_param(bool, verbose, true, + "Enable verbose debugging printk()s"); + +static char *torture_type = "spin_lock"; +module_param(torture_type, charp, 0444); +MODULE_PARM_DESC(torture_type, + "Type of lock to torture (spin_lock, spin_lock_irq, ...)"); + +static atomic_t n_lock_torture_errors; + +static struct task_struct *stats_task; +static struct task_struct **writer_tasks; + +static int nrealwriters_stress; +static bool lock_is_write_held; + +struct lock_writer_stress_stats { + long n_write_lock_fail; + long n_write_lock_acquired; +}; +static struct lock_writer_stress_stats *lwsa; + +#if defined(MODULE) || defined(CONFIG_LOCK_TORTURE_TEST_RUNNABLE) +#define LOCKTORTURE_RUNNABLE_INIT 1 +#else +#define LOCKTORTURE_RUNNABLE_INIT 0 +#endif +int locktorture_runnable = LOCKTORTURE_RUNNABLE_INIT; +module_param(locktorture_runnable, int, 0444); +MODULE_PARM_DESC(locktorture_runnable, "Start locktorture at boot"); + +/* Forward reference. */ +static void lock_torture_cleanup(void); + +/* + * Operations vector for selecting different types of tests. + */ +struct lock_torture_ops { + void (*init)(void); + int (*writelock)(void); + void (*write_delay)(struct torture_random_state *trsp); + void (*writeunlock)(void); + unsigned long flags; + const char *name; +}; + +static struct lock_torture_ops *cur_ops; + +/* + * Definitions for lock torture testing. + */ + +static int torture_lock_busted_write_lock(void) +{ + return 0; /* BUGGY, do not use in real life!!! */ +} + +static void torture_lock_busted_write_delay(struct torture_random_state *trsp) +{ + const unsigned long longdelay_us = 100; + + /* We want a long delay occasionally to force massive contention. */ + if (!(torture_random(trsp) % + (nrealwriters_stress * 2000 * longdelay_us))) + mdelay(longdelay_us); +#ifdef CONFIG_PREEMPT + if (!(torture_random(trsp) % (nrealwriters_stress * 20000))) + preempt_schedule(); /* Allow test to be preempted. */ +#endif +} + +static void torture_lock_busted_write_unlock(void) +{ + /* BUGGY, do not use in real life!!! */ +} + +static struct lock_torture_ops lock_busted_ops = { + .writelock = torture_lock_busted_write_lock, + .write_delay = torture_lock_busted_write_delay, + .writeunlock = torture_lock_busted_write_unlock, + .name = "lock_busted" +}; + +static DEFINE_SPINLOCK(torture_spinlock); + +static int torture_spin_lock_write_lock(void) __acquires(torture_spinlock) +{ + spin_lock(&torture_spinlock); + return 0; +} + +static void torture_spin_lock_write_delay(struct torture_random_state *trsp) +{ + const unsigned long shortdelay_us = 2; + const unsigned long longdelay_us = 100; + + /* We want a short delay mostly to emulate likely code, and + * we want a long delay occasionally to force massive contention. 
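
[Editorial note] lock_torture_ops is the extension point of the new module: supporting another lock type means filling in one ops structure and adding it to the torture_ops[] array in lock_torture_init(). A hypothetical mutex-based entry in the same style as spin_lock_ops below (not part of this patch; torture_mutex and the reuse of the spin-lock delay function are assumptions):

	#include <linux/mutex.h>

	static DEFINE_MUTEX(torture_mutex);	/* hypothetical */

	static int torture_mutex_write_lock(void) __acquires(torture_mutex)
	{
		mutex_lock(&torture_mutex);
		return 0;
	}

	static void torture_mutex_write_unlock(void) __releases(torture_mutex)
	{
		mutex_unlock(&torture_mutex);
	}

	static struct lock_torture_ops mutex_lock_ops = {
		.writelock	= torture_mutex_write_lock,
		.write_delay	= torture_spin_lock_write_delay, /* reuse */
		.writeunlock	= torture_mutex_write_unlock,
		.name		= "mutex_lock"
	};
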
+ */ + if (!(torture_random(trsp) % + (nrealwriters_stress * 2000 * longdelay_us))) + mdelay(longdelay_us); + if (!(torture_random(trsp) % + (nrealwriters_stress * 2 * shortdelay_us))) + udelay(shortdelay_us); +#ifdef CONFIG_PREEMPT + if (!(torture_random(trsp) % (nrealwriters_stress * 20000))) + preempt_schedule(); /* Allow test to be preempted. */ +#endif +} + +static void torture_spin_lock_write_unlock(void) __releases(torture_spinlock) +{ + spin_unlock(&torture_spinlock); +} + +static struct lock_torture_ops spin_lock_ops = { + .writelock = torture_spin_lock_write_lock, + .write_delay = torture_spin_lock_write_delay, + .writeunlock = torture_spin_lock_write_unlock, + .name = "spin_lock" +}; + +static int torture_spin_lock_write_lock_irq(void) +__acquires(torture_spinlock_irq) +{ + unsigned long flags; + + spin_lock_irqsave(&torture_spinlock, flags); + cur_ops->flags = flags; + return 0; +} + +static void torture_lock_spin_write_unlock_irq(void) +__releases(torture_spinlock) +{ + spin_unlock_irqrestore(&torture_spinlock, cur_ops->flags); +} + +static struct lock_torture_ops spin_lock_irq_ops = { + .writelock = torture_spin_lock_write_lock_irq, + .write_delay = torture_spin_lock_write_delay, + .writeunlock = torture_lock_spin_write_unlock_irq, + .name = "spin_lock_irq" +}; + +/* + * Lock torture writer kthread. Repeatedly acquires and releases + * the lock, checking for duplicate acquisitions. + */ +static int lock_torture_writer(void *arg) +{ + struct lock_writer_stress_stats *lwsp = arg; + static DEFINE_TORTURE_RANDOM(rand); + + VERBOSE_TOROUT_STRING("lock_torture_writer task started"); + set_user_nice(current, 19); + + do { + schedule_timeout_uninterruptible(1); + cur_ops->writelock(); + if (WARN_ON_ONCE(lock_is_write_held)) + lwsp->n_write_lock_fail++; + lock_is_write_held = 1; + lwsp->n_write_lock_acquired++; + cur_ops->write_delay(&rand); + lock_is_write_held = 0; + cur_ops->writeunlock(); + stutter_wait("lock_torture_writer"); + } while (!torture_must_stop()); + torture_kthread_stopping("lock_torture_writer"); + return 0; +} + +/* + * Create an lock-torture-statistics message in the specified buffer. + */ +static void lock_torture_printk(char *page) +{ + bool fail = 0; + int i; + long max = 0; + long min = lwsa[0].n_write_lock_acquired; + long long sum = 0; + + for (i = 0; i < nrealwriters_stress; i++) { + if (lwsa[i].n_write_lock_fail) + fail = true; + sum += lwsa[i].n_write_lock_acquired; + if (max < lwsa[i].n_write_lock_fail) + max = lwsa[i].n_write_lock_fail; + if (min > lwsa[i].n_write_lock_fail) + min = lwsa[i].n_write_lock_fail; + } + page += sprintf(page, "%s%s ", torture_type, TORTURE_FLAG); + page += sprintf(page, + "Writes: Total: %lld Max/Min: %ld/%ld %s Fail: %d %s\n", + sum, max, min, max / 2 > min ? "???" : "", + fail, fail ? "!!!" : ""); + if (fail) + atomic_inc(&n_lock_torture_errors); +} + +/* + * Print torture statistics. Caller must ensure that there is only one + * call to this function at a given time!!! This is normally accomplished + * by relying on the module system to only have one copy of the module + * loaded, and then by giving the lock_torture_stats kthread full control + * (or the init/cleanup functions when lock_torture_stats thread is not + * running). 
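
[Editorial note] Every kthread in this file follows the same skeleton, built from the helpers the new kernel/torture.c exports: do one unit of work, call stutter_wait() so the run/halt cycling works, loop until torture_must_stop(), and exit through torture_kthread_stopping(). A minimal sketch using only helpers visible in this series:

	#include <linux/kthread.h>
	#include <linux/torture.h>

	static int my_torture_kthread(void *arg)
	{
		VERBOSE_TOROUT_STRING("my_torture_kthread task started");
		do {
			schedule_timeout_uninterruptible(1);
			/* ... one iteration of the actual test ... */
			stutter_wait("my_torture_kthread");
		} while (!torture_must_stop());
		torture_kthread_stopping("my_torture_kthread");
		return 0;
	}
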
+ */ +static void lock_torture_stats_print(void) +{ + int size = nrealwriters_stress * 200 + 8192; + char *buf; + + buf = kmalloc(size, GFP_KERNEL); + if (!buf) { + pr_err("lock_torture_stats_print: Out of memory, need: %d", + size); + return; + } + lock_torture_printk(buf); + pr_alert("%s", buf); + kfree(buf); +} + +/* + * Periodically prints torture statistics, if periodic statistics printing + * was specified via the stat_interval module parameter. + * + * No need to worry about fullstop here, since this one doesn't reference + * volatile state or register callbacks. + */ +static int lock_torture_stats(void *arg) +{ + VERBOSE_TOROUT_STRING("lock_torture_stats task started"); + do { + schedule_timeout_interruptible(stat_interval * HZ); + lock_torture_stats_print(); + torture_shutdown_absorb("lock_torture_stats"); + } while (!torture_must_stop()); + torture_kthread_stopping("lock_torture_stats"); + return 0; +} + +static inline void +lock_torture_print_module_parms(struct lock_torture_ops *cur_ops, + const char *tag) +{ + pr_alert("%s" TORTURE_FLAG + "--- %s: nwriters_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n", + torture_type, tag, nrealwriters_stress, stat_interval, verbose, + shuffle_interval, stutter, shutdown_secs, + onoff_interval, onoff_holdoff); +} + +static void lock_torture_cleanup(void) +{ + int i; + + if (torture_cleanup()) + return; + + if (writer_tasks) { + for (i = 0; i < nrealwriters_stress; i++) + torture_stop_kthread(lock_torture_writer, + writer_tasks[i]); + kfree(writer_tasks); + writer_tasks = NULL; + } + + torture_stop_kthread(lock_torture_stats, stats_task); + lock_torture_stats_print(); /* -After- the stats thread is stopped! */ + + if (atomic_read(&n_lock_torture_errors)) + lock_torture_print_module_parms(cur_ops, + "End of test: FAILURE"); + else if (torture_onoff_failures()) + lock_torture_print_module_parms(cur_ops, + "End of test: LOCK_HOTPLUG"); + else + lock_torture_print_module_parms(cur_ops, + "End of test: SUCCESS"); +} + +static int __init lock_torture_init(void) +{ + int i; + int firsterr = 0; + static struct lock_torture_ops *torture_ops[] = { + &lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, + }; + + torture_init_begin(torture_type, verbose, &locktorture_runnable); + + /* Process args and tell the world that the torturer is on the job. */ + for (i = 0; i < ARRAY_SIZE(torture_ops); i++) { + cur_ops = torture_ops[i]; + if (strcmp(torture_type, cur_ops->name) == 0) + break; + } + if (i == ARRAY_SIZE(torture_ops)) { + pr_alert("lock-torture: invalid torture type: \"%s\"\n", + torture_type); + pr_alert("lock-torture types:"); + for (i = 0; i < ARRAY_SIZE(torture_ops); i++) + pr_alert(" %s", torture_ops[i]->name); + pr_alert("\n"); + torture_init_end(); + return -EINVAL; + } + if (cur_ops->init) + cur_ops->init(); /* no "goto unwind" prior to this point!!! */ + + if (nwriters_stress >= 0) + nrealwriters_stress = nwriters_stress; + else + nrealwriters_stress = 2 * num_online_cpus(); + lock_torture_print_module_parms(cur_ops, "Start of test"); + + /* Initialize the statistics so that each run gets its own numbers. */ + + lock_is_write_held = 0; + lwsa = kmalloc(sizeof(*lwsa) * nrealwriters_stress, GFP_KERNEL); + if (lwsa == NULL) { + VERBOSE_TOROUT_STRING("lwsa: Out of memory"); + firsterr = -ENOMEM; + goto unwind; + } + for (i = 0; i < nrealwriters_stress; i++) { + lwsa[i].n_write_lock_fail = 0; + lwsa[i].n_write_lock_acquired = 0; + } + + /* Start up the kthreads. 
*/ + + if (onoff_interval > 0) { + firsterr = torture_onoff_init(onoff_holdoff * HZ, + onoff_interval * HZ); + if (firsterr) + goto unwind; + } + if (shuffle_interval > 0) { + firsterr = torture_shuffle_init(shuffle_interval); + if (firsterr) + goto unwind; + } + if (shutdown_secs > 0) { + firsterr = torture_shutdown_init(shutdown_secs, + lock_torture_cleanup); + if (firsterr) + goto unwind; + } + if (stutter > 0) { + firsterr = torture_stutter_init(stutter); + if (firsterr) + goto unwind; + } + + writer_tasks = kzalloc(nrealwriters_stress * sizeof(writer_tasks[0]), + GFP_KERNEL); + if (writer_tasks == NULL) { + VERBOSE_TOROUT_ERRSTRING("writer_tasks: Out of memory"); + firsterr = -ENOMEM; + goto unwind; + } + for (i = 0; i < nrealwriters_stress; i++) { + firsterr = torture_create_kthread(lock_torture_writer, &lwsa[i], + writer_tasks[i]); + if (firsterr) + goto unwind; + } + if (stat_interval > 0) { + firsterr = torture_create_kthread(lock_torture_stats, NULL, + stats_task); + if (firsterr) + goto unwind; + } + torture_init_end(); + return 0; + +unwind: + torture_init_end(); + lock_torture_cleanup(); + return firsterr; +} + +module_init(lock_torture_init); +module_exit(lock_torture_cleanup); diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c new file mode 100644 index 000000000000..838dc9e00669 --- /dev/null +++ b/kernel/locking/mcs_spinlock.c @@ -0,0 +1,178 @@ + +#include <linux/percpu.h> +#include <linux/mutex.h> +#include <linux/sched.h> +#include "mcs_spinlock.h" + +#ifdef CONFIG_SMP + +/* + * An MCS like lock especially tailored for optimistic spinning for sleeping + * lock implementations (mutex, rwsem, etc). + * + * Using a single mcs node per CPU is safe because sleeping locks should not be + * called from interrupt context and we have preemption disabled while + * spinning. + */ +static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_queue, osq_node); + +/* + * Get a stable @node->next pointer, either for unlock() or unqueue() purposes. + * Can return NULL in case we were the last queued and we updated @lock instead. + */ +static inline struct optimistic_spin_queue * +osq_wait_next(struct optimistic_spin_queue **lock, + struct optimistic_spin_queue *node, + struct optimistic_spin_queue *prev) +{ + struct optimistic_spin_queue *next = NULL; + + for (;;) { + if (*lock == node && cmpxchg(lock, node, prev) == node) { + /* + * We were the last queued, we moved @lock back. @prev + * will now observe @lock and will complete its + * unlock()/unqueue(). + */ + break; + } + + /* + * We must xchg() the @node->next value, because if we were to + * leave it in, a concurrent unlock()/unqueue() from + * @node->next might complete Step-A and think its @prev is + * still valid. + * + * If the concurrent unlock()/unqueue() wins the race, we'll + * wait for either @lock to point to us, through its Step-B, or + * wait for a new @node->next from its Step-C. + */ + if (node->next) { + next = xchg(&node->next, NULL); + if (next) + break; + } + + arch_mutex_cpu_relax(); + } + + return next; +} + +bool osq_lock(struct optimistic_spin_queue **lock) +{ + struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node); + struct optimistic_spin_queue *prev, *next; + + node->locked = 0; + node->next = NULL; + + node->prev = prev = xchg(lock, node); + if (likely(prev == NULL)) + return true; + + ACCESS_ONCE(prev->next) = node; + + /* + * Normally @prev is untouchable after the above store; because at that + * moment unlock can proceed and wipe the node element from stack. 
+ * + * However, since our nodes are static per-cpu storage, we're + * guaranteed their existence -- this allows us to apply + * cmpxchg in an attempt to undo our queueing. + */ + + while (!smp_load_acquire(&node->locked)) { + /* + * If we need to reschedule bail... so we can block. + */ + if (need_resched()) + goto unqueue; + + arch_mutex_cpu_relax(); + } + return true; + +unqueue: + /* + * Step - A -- stabilize @prev + * + * Undo our @prev->next assignment; this will make @prev's + * unlock()/unqueue() wait for a next pointer since @lock points to us + * (or later). + */ + + for (;;) { + if (prev->next == node && + cmpxchg(&prev->next, node, NULL) == node) + break; + + /* + * We can only fail the cmpxchg() racing against an unlock(), + * in which case we should observe @node->locked becomming + * true. + */ + if (smp_load_acquire(&node->locked)) + return true; + + arch_mutex_cpu_relax(); + + /* + * Or we race against a concurrent unqueue()'s step-B, in which + * case its step-C will write us a new @node->prev pointer. + */ + prev = ACCESS_ONCE(node->prev); + } + + /* + * Step - B -- stabilize @next + * + * Similar to unlock(), wait for @node->next or move @lock from @node + * back to @prev. + */ + + next = osq_wait_next(lock, node, prev); + if (!next) + return false; + + /* + * Step - C -- unlink + * + * @prev is stable because its still waiting for a new @prev->next + * pointer, @next is stable because our @node->next pointer is NULL and + * it will wait in Step-A. + */ + + ACCESS_ONCE(next->prev) = prev; + ACCESS_ONCE(prev->next) = next; + + return false; +} + +void osq_unlock(struct optimistic_spin_queue **lock) +{ + struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node); + struct optimistic_spin_queue *next; + + /* + * Fast path for the uncontended case. + */ + if (likely(cmpxchg(lock, node, NULL) == node)) + return; + + /* + * Second most likely case. + */ + next = xchg(&node->next, NULL); + if (next) { + ACCESS_ONCE(next->locked) = 1; + return; + } + + next = osq_wait_next(lock, node, NULL); + if (next) + ACCESS_ONCE(next->locked) = 1; +} + +#endif + diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h new file mode 100644 index 000000000000..a2dbac4aca6b --- /dev/null +++ b/kernel/locking/mcs_spinlock.h @@ -0,0 +1,129 @@ +/* + * MCS lock defines + * + * This file contains the main data structure and API definitions of MCS lock. + * + * The MCS lock (proposed by Mellor-Crummey and Scott) is a simple spin-lock + * with the desirable properties of being fair, and with each cpu trying + * to acquire the lock spinning on a local variable. + * It avoids expensive cache bouncings that common test-and-set spin-lock + * implementations incur. + */ +#ifndef __LINUX_MCS_SPINLOCK_H +#define __LINUX_MCS_SPINLOCK_H + +#include <asm/mcs_spinlock.h> + +struct mcs_spinlock { + struct mcs_spinlock *next; + int locked; /* 1 if lock acquired */ +}; + +#ifndef arch_mcs_spin_lock_contended +/* + * Using smp_load_acquire() provides a memory barrier that ensures + * subsequent operations happen after the lock is acquired. + */ +#define arch_mcs_spin_lock_contended(l) \ +do { \ + while (!(smp_load_acquire(l))) \ + arch_mutex_cpu_relax(); \ +} while (0) +#endif + +#ifndef arch_mcs_spin_unlock_contended +/* + * smp_store_release() provides a memory barrier to ensure all + * operations in the critical section has been completed before + * unlocking. 
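
[Editorial note] The point of the cancellable OSQ above, compared with the plain MCS lock, is the unqueue path: a spinner that sees need_resched() can remove itself from the middle of the queue (steps A-C) and go to sleep, with osq_lock() returning false. A sketch of how a sleeping lock's slowpath consumes the API (my_lock, owner_is_running() and try_acquire() are hypothetical; the shape follows the mutex.c hunk further down):

	static bool try_optimistic_spin(struct my_lock *lock)
	{
		if (!osq_lock(&lock->osq))
			return false;	/* unqueued early: go block */

		while (owner_is_running(lock)) {
			if (need_resched())
				goto out;	/* don't spin while preemption is due */
			arch_mutex_cpu_relax();
		}

		if (try_acquire(lock)) {
			osq_unlock(&lock->osq);
			return true;	/* got the lock without sleeping */
		}
	out:
		osq_unlock(&lock->osq);
		return false;		/* fall back to the blocking slowpath */
	}
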
+ */ +#define arch_mcs_spin_unlock_contended(l) \ + smp_store_release((l), 1) +#endif + +/* + * Note: the smp_load_acquire/smp_store_release pair is not + * sufficient to form a full memory barrier across + * cpus for many architectures (except x86) for mcs_unlock and mcs_lock. + * For applications that need a full barrier across multiple cpus + * with mcs_unlock and mcs_lock pair, smp_mb__after_unlock_lock() should be + * used after mcs_lock. + */ + +/* + * In order to acquire the lock, the caller should declare a local node and + * pass a reference of the node to this function in addition to the lock. + * If the lock has already been acquired, then this will proceed to spin + * on this node->locked until the previous lock holder sets the node->locked + * in mcs_spin_unlock(). + * + * We don't inline mcs_spin_lock() so that perf can correctly account for the + * time spent in this lock function. + */ +static inline +void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node) +{ + struct mcs_spinlock *prev; + + /* Init node */ + node->locked = 0; + node->next = NULL; + + prev = xchg(lock, node); + if (likely(prev == NULL)) { + /* + * Lock acquired, don't need to set node->locked to 1. Threads + * only spin on its own node->locked value for lock acquisition. + * However, since this thread can immediately acquire the lock + * and does not proceed to spin on its own node->locked, this + * value won't be used. If a debug mode is needed to + * audit lock status, then set node->locked value here. + */ + return; + } + ACCESS_ONCE(prev->next) = node; + + /* Wait until the lock holder passes the lock down. */ + arch_mcs_spin_lock_contended(&node->locked); +} + +/* + * Releases the lock. The caller should pass in the corresponding node that + * was used to acquire the lock. + */ +static inline +void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node) +{ + struct mcs_spinlock *next = ACCESS_ONCE(node->next); + + if (likely(!next)) { + /* + * Release the lock by setting it to NULL + */ + if (likely(cmpxchg(lock, node, NULL) == node)) + return; + /* Wait until the next pointer is set */ + while (!(next = ACCESS_ONCE(node->next))) + arch_mutex_cpu_relax(); + } + + /* Pass lock to next waiter. */ + arch_mcs_spin_unlock_contended(&next->locked); +} + +/* + * Cancellable version of the MCS lock above. + * + * Intended for adaptive spinning of sleeping locks: + * mutex_lock()/rwsem_down_{read,write}() etc. + */ + +struct optimistic_spin_queue { + struct optimistic_spin_queue *next, *prev; + int locked; /* 1 if lock acquired */ +}; + +extern bool osq_lock(struct optimistic_spin_queue **lock); +extern void osq_unlock(struct optimistic_spin_queue **lock); + +#endif /* __LINUX_MCS_SPINLOCK_H */ diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c index faf6f5b53e77..e1191c996c59 100644 --- a/kernel/locking/mutex-debug.c +++ b/kernel/locking/mutex-debug.c @@ -83,6 +83,12 @@ void debug_mutex_unlock(struct mutex *lock) DEBUG_LOCKS_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next); mutex_clear_owner(lock); + + /* + * __mutex_slowpath_needs_to_unlock() is explicitly 0 for debug + * mutexes so that we can do it here after we've verified state. 
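
[Editorial note] The header's barrier note deserves a concrete reading: the smp_store_release()/smp_load_acquire() pair orders the old holder's critical section before the new holder's, per handoff, but it is not a full smp_mb() between arbitrary CPUs. A conceptual two-CPU sketch of what the pairing does guarantee (shared_data is hypothetical):

	static int shared_data;

	/* old lock holder, unlock path */
	static void handoff_release(int *locked)
	{
		shared_data = 42;		/* critical-section write */
		smp_store_release(locked, 1);	/* publish it with the lock */
	}

	/* next holder, contended lock path */
	static void handoff_acquire(int *locked)
	{
		while (!smp_load_acquire(locked))
			arch_mutex_cpu_relax();
		WARN_ON(shared_data != 42);	/* guaranteed by the pairing */
	}

Callers that need full-barrier semantics across the unlock/lock pair are pointed at smp_mb__after_unlock_lock(), as the comment says.
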
+ */ + atomic_set(&lock->count, 1); } void debug_mutex_init(struct mutex *lock, const char *name, diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index 4dd6e4c219de..14fe72cc8ce7 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -25,6 +25,7 @@ #include <linux/spinlock.h> #include <linux/interrupt.h> #include <linux/debug_locks.h> +#include "mcs_spinlock.h" /* * In the DEBUG case we are using the "NULL fastpath" for mutexes, @@ -33,6 +34,13 @@ #ifdef CONFIG_DEBUG_MUTEXES # include "mutex-debug.h" # include <asm-generic/mutex-null.h> +/* + * Must be 0 for the debug case so we do not do the unlock outside of the + * wait_lock region. debug_mutex_unlock() will do the actual unlock in this + * case. + */ +# undef __mutex_slowpath_needs_to_unlock +# define __mutex_slowpath_needs_to_unlock() 0 #else # include "mutex.h" # include <asm/mutex.h> @@ -52,7 +60,7 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key) INIT_LIST_HEAD(&lock->wait_list); mutex_clear_owner(lock); #ifdef CONFIG_MUTEX_SPIN_ON_OWNER - lock->spin_mlock = NULL; + lock->osq = NULL; #endif debug_mutex_init(lock, name, key); @@ -111,54 +119,7 @@ EXPORT_SYMBOL(mutex_lock); * more or less simultaneously, the spinners need to acquire a MCS lock * first before spinning on the owner field. * - * We don't inline mspin_lock() so that perf can correctly account for the - * time spent in this lock function. */ -struct mspin_node { - struct mspin_node *next ; - int locked; /* 1 if lock acquired */ -}; -#define MLOCK(mutex) ((struct mspin_node **)&((mutex)->spin_mlock)) - -static noinline -void mspin_lock(struct mspin_node **lock, struct mspin_node *node) -{ - struct mspin_node *prev; - - /* Init node */ - node->locked = 0; - node->next = NULL; - - prev = xchg(lock, node); - if (likely(prev == NULL)) { - /* Lock acquired */ - node->locked = 1; - return; - } - ACCESS_ONCE(prev->next) = node; - smp_wmb(); - /* Wait until the lock holder passes the lock down */ - while (!ACCESS_ONCE(node->locked)) - arch_mutex_cpu_relax(); -} - -static void mspin_unlock(struct mspin_node **lock, struct mspin_node *node) -{ - struct mspin_node *next = ACCESS_ONCE(node->next); - - if (likely(!next)) { - /* - * Release the lock by setting it to NULL - */ - if (cmpxchg(lock, node, NULL) == node) - return; - /* Wait until the next pointer is set */ - while (!(next = ACCESS_ONCE(node->next))) - arch_mutex_cpu_relax(); - } - ACCESS_ONCE(next->locked) = 1; - smp_wmb(); -} /* * Mutex spinning code migrated from kernel/sched/core.c @@ -212,6 +173,9 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock) struct task_struct *owner; int retval = 1; + if (need_resched()) + return 0; + rcu_read_lock(); owner = ACCESS_ONCE(lock->owner); if (owner) @@ -446,9 +410,11 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, if (!mutex_can_spin_on_owner(lock)) goto slowpath; + if (!osq_lock(&lock->osq)) + goto slowpath; + for (;;) { struct task_struct *owner; - struct mspin_node node; if (use_ww_ctx && ww_ctx->acquired > 0) { struct ww_mutex *ww; @@ -463,19 +429,16 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, * performed the optimistic spinning cannot be done. */ if (ACCESS_ONCE(ww->ctx)) - goto slowpath; + break; } /* * If there's an owner, wait for it to either * release the lock or go to sleep. 
*/ - mspin_lock(MLOCK(lock), &node); owner = ACCESS_ONCE(lock->owner); - if (owner && !mutex_spin_on_owner(lock, owner)) { - mspin_unlock(MLOCK(lock), &node); - goto slowpath; - } + if (owner && !mutex_spin_on_owner(lock, owner)) + break; if ((atomic_read(&lock->count) == 1) && (atomic_cmpxchg(&lock->count, 1, 0) == 1)) { @@ -488,11 +451,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, } mutex_set_owner(lock); - mspin_unlock(MLOCK(lock), &node); + osq_unlock(&lock->osq); preempt_enable(); return 0; } - mspin_unlock(MLOCK(lock), &node); /* * When there's no owner, we might have preempted between the @@ -501,7 +463,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, * the owner complete. */ if (!owner && (need_resched() || rt_task(task))) - goto slowpath; + break; /* * The cpu_relax() call is a compiler barrier which forces @@ -511,7 +473,15 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, */ arch_mutex_cpu_relax(); } + osq_unlock(&lock->osq); slowpath: + /* + * If we fell out of the spin path because of need_resched(), + * reschedule now, before we try-lock the mutex. This avoids getting + * scheduled out right after we obtained the mutex. + */ + if (need_resched()) + schedule_preempt_disabled(); #endif spin_lock_mutex(&lock->wait_lock, flags); @@ -717,10 +687,6 @@ __mutex_unlock_common_slowpath(atomic_t *lock_count, int nested) struct mutex *lock = container_of(lock_count, struct mutex, count); unsigned long flags; - spin_lock_mutex(&lock->wait_lock, flags); - mutex_release(&lock->dep_map, nested, _RET_IP_); - debug_mutex_unlock(lock); - /* * some architectures leave the lock unlocked in the fastpath failure * case, others need to leave it locked. In the later case we have to @@ -729,6 +695,10 @@ __mutex_unlock_common_slowpath(atomic_t *lock_count, int nested) if (__mutex_slowpath_needs_to_unlock()) atomic_set(&lock->count, 1); + spin_lock_mutex(&lock->wait_lock, flags); + mutex_release(&lock->dep_map, nested, _RET_IP_); + debug_mutex_unlock(lock); + if (!list_empty(&lock->wait_list)) { /* get the first entry from the wait-list: */ struct mutex_waiter *waiter = diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c index 2e960a2bab81..aa4dff04b594 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -213,6 +213,18 @@ struct task_struct *rt_mutex_get_top_task(struct task_struct *task) } /* + * Called by sched_setscheduler() to check whether the priority change + * is overruled by a possible priority boosting. + */ +int rt_mutex_check_prio(struct task_struct *task, int newprio) +{ + if (!task_has_pi_waiters(task)) + return 0; + + return task_top_pi_waiter(task)->task->prio <= newprio; +} + +/* * Adjust the priority of a task, after its pi_waiters got modified. * * This can be both boosting and unboosting. task->pi_lock must be held. 
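
[Editorial note] rt_mutex_check_prio() above gives sched_setscheduler() a way to ask whether a requested priority change would be overruled by priority inheritance: it returns nonzero when some PI waiter boosts the task to (or above) the new priority. A hedged sketch of the caller side, which lives in kernel/sched/core.c and is outside this excerpt (names and placement are assumptions):

	/* lower numeric prio value == higher priority */
	static void maybe_apply_new_prio(struct task_struct *p, int newprio)
	{
		if (rt_mutex_check_prio(p, newprio)) {
			/* A PI waiter overrules the demotion: keep the
			 * boosted priority for now; the change takes
			 * effect when the task is unboosted. */
			return;
		}
		/* ... apply newprio via the scheduler's usual helpers ... */
	}
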
diff --git a/kernel/notifier.c b/kernel/notifier.c index 2d5cc4ccff7f..db4c8b08a50c 100644 --- a/kernel/notifier.c +++ b/kernel/notifier.c @@ -309,7 +309,7 @@ int __blocking_notifier_call_chain(struct blocking_notifier_head *nh, * racy then it does not matter what the result of the test * is, we re-check the list after having taken the lock anyway: */ - if (rcu_dereference_raw(nh->head)) { + if (rcu_access_pointer(nh->head)) { down_read(&nh->rwsem); ret = notifier_call_chain(&nh->head, val, v, nr_to_call, nr_calls); diff --git a/kernel/profile.c b/kernel/profile.c index 6631e1ef55ab..ebdd9c1a86b4 100644 --- a/kernel/profile.c +++ b/kernel/profile.c @@ -549,14 +549,14 @@ static int create_hash_tables(void) struct page *page; page = alloc_pages_exact_node(node, - GFP_KERNEL | __GFP_ZERO | GFP_THISNODE, + GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE, 0); if (!page) goto out_cleanup; per_cpu(cpu_profile_hits, cpu)[1] = (struct profile_hit *)page_address(page); page = alloc_pages_exact_node(node, - GFP_KERNEL | __GFP_ZERO | GFP_THISNODE, + GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE, 0); if (!page) goto out_cleanup; diff --git a/kernel/rcu/Makefile b/kernel/rcu/Makefile index 01e9ec37a3e3..807ccfbf69b3 100644 --- a/kernel/rcu/Makefile +++ b/kernel/rcu/Makefile @@ -1,5 +1,5 @@ obj-y += update.o srcu.o -obj-$(CONFIG_RCU_TORTURE_TEST) += torture.o +obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o obj-$(CONFIG_TREE_RCU) += tree.o obj-$(CONFIG_TREE_PREEMPT_RCU) += tree.o obj-$(CONFIG_TREE_RCU_TRACE) += tree_trace.o diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h index 79c3877e9c5b..bfda2726ca45 100644 --- a/kernel/rcu/rcu.h +++ b/kernel/rcu/rcu.h @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright IBM Corporation, 2011 * @@ -23,6 +23,7 @@ #ifndef __LINUX_RCU_H #define __LINUX_RCU_H +#include <trace/events/rcu.h> #ifdef CONFIG_RCU_TRACE #define RCU_TRACE(stmt) stmt #else /* #ifdef CONFIG_RCU_TRACE */ @@ -116,8 +117,6 @@ static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head) } } -extern int rcu_expedited; - #ifdef CONFIG_RCU_STALL_COMMON extern int rcu_cpu_stall_suppress; diff --git a/kernel/rcu/torture.c b/kernel/rcu/rcutorture.c index 732f8ae3086a..bd30bc61bc05 100644 --- a/kernel/rcu/torture.c +++ b/kernel/rcu/rcutorture.c @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright (C) IBM Corporation, 2005, 2006 * @@ -48,110 +48,58 @@ #include <linux/slab.h> #include <linux/trace_clock.h> #include <asm/byteorder.h> +#include <linux/torture.h> MODULE_LICENSE("GPL"); MODULE_AUTHOR("Paul E. McKenney <paulmck@us.ibm.com> and Josh Triplett <josh@freedesktop.org>"); -MODULE_ALIAS("rcutorture"); -#ifdef MODULE_PARAM_PREFIX -#undef MODULE_PARAM_PREFIX -#endif -#define MODULE_PARAM_PREFIX "rcutorture." 
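
[Editorial note] The big rcutorture delta below is mostly mechanical: every static-variable/module_param()/MODULE_PARM_DESC() triple becomes a single torture_param() line, and the module's private plumbing (shutdown notifier, shuffler, stutter, random-number state) moves into the shared kernel/torture.c. Judging from its uses, torture_param() presumably expands to something equivalent to this (a sketch of the macro, not a quote of linux/torture.h):

	#define my_torture_param(type, name, init, msg)	\
		static type name = init;			\
		module_param(name, type, 0444);			\
		MODULE_PARM_DESC(name, msg)

	/* so that, e.g.: */
	my_torture_param(int, fqs_stutter, 3, "Wait time between fqs bursts (s)");
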
- -static int fqs_duration; -module_param(fqs_duration, int, 0444); -MODULE_PARM_DESC(fqs_duration, "Duration of fqs bursts (us), 0 to disable"); -static int fqs_holdoff; -module_param(fqs_holdoff, int, 0444); -MODULE_PARM_DESC(fqs_holdoff, "Holdoff time within fqs bursts (us)"); -static int fqs_stutter = 3; -module_param(fqs_stutter, int, 0444); -MODULE_PARM_DESC(fqs_stutter, "Wait time between fqs bursts (s)"); -static bool gp_exp; -module_param(gp_exp, bool, 0444); -MODULE_PARM_DESC(gp_exp, "Use expedited GP wait primitives"); -static bool gp_normal; -module_param(gp_normal, bool, 0444); -MODULE_PARM_DESC(gp_normal, "Use normal (non-expedited) GP wait primitives"); -static int irqreader = 1; -module_param(irqreader, int, 0444); -MODULE_PARM_DESC(irqreader, "Allow RCU readers from irq handlers"); -static int n_barrier_cbs; -module_param(n_barrier_cbs, int, 0444); -MODULE_PARM_DESC(n_barrier_cbs, "# of callbacks/kthreads for barrier testing"); -static int nfakewriters = 4; -module_param(nfakewriters, int, 0444); -MODULE_PARM_DESC(nfakewriters, "Number of RCU fake writer threads"); -static int nreaders = -1; -module_param(nreaders, int, 0444); -MODULE_PARM_DESC(nreaders, "Number of RCU reader threads"); -static int object_debug; -module_param(object_debug, int, 0444); -MODULE_PARM_DESC(object_debug, "Enable debug-object double call_rcu() testing"); -static int onoff_holdoff; -module_param(onoff_holdoff, int, 0444); -MODULE_PARM_DESC(onoff_holdoff, "Time after boot before CPU hotplugs (s)"); -static int onoff_interval; -module_param(onoff_interval, int, 0444); -MODULE_PARM_DESC(onoff_interval, "Time between CPU hotplugs (s), 0=disable"); -static int shuffle_interval = 3; -module_param(shuffle_interval, int, 0444); -MODULE_PARM_DESC(shuffle_interval, "Number of seconds between shuffles"); -static int shutdown_secs; -module_param(shutdown_secs, int, 0444); -MODULE_PARM_DESC(shutdown_secs, "Shutdown time (s), <= zero to disable."); -static int stall_cpu; -module_param(stall_cpu, int, 0444); -MODULE_PARM_DESC(stall_cpu, "Stall duration (s), zero to disable."); -static int stall_cpu_holdoff = 10; -module_param(stall_cpu_holdoff, int, 0444); -MODULE_PARM_DESC(stall_cpu_holdoff, "Time to wait before starting stall (s)."); -static int stat_interval = 60; -module_param(stat_interval, int, 0644); -MODULE_PARM_DESC(stat_interval, "Number of seconds between stats printk()s"); -static int stutter = 5; -module_param(stutter, int, 0444); -MODULE_PARM_DESC(stutter, "Number of seconds to run/halt test"); -static int test_boost = 1; -module_param(test_boost, int, 0444); -MODULE_PARM_DESC(test_boost, "Test RCU prio boost: 0=no, 1=maybe, 2=yes."); -static int test_boost_duration = 4; -module_param(test_boost_duration, int, 0444); -MODULE_PARM_DESC(test_boost_duration, "Duration of each boost test, seconds."); -static int test_boost_interval = 7; -module_param(test_boost_interval, int, 0444); -MODULE_PARM_DESC(test_boost_interval, "Interval between boost tests, seconds."); -static bool test_no_idle_hz = true; -module_param(test_no_idle_hz, bool, 0444); -MODULE_PARM_DESC(test_no_idle_hz, "Test support for tickless idle CPUs"); + +torture_param(int, fqs_duration, 0, + "Duration of fqs bursts (us), 0 to disable"); +torture_param(int, fqs_holdoff, 0, "Holdoff time within fqs bursts (us)"); +torture_param(int, fqs_stutter, 3, "Wait time between fqs bursts (s)"); +torture_param(bool, gp_exp, false, "Use expedited GP wait primitives"); +torture_param(bool, gp_normal, false, + "Use normal (non-expedited) GP wait 
primitives"); +torture_param(int, irqreader, 1, "Allow RCU readers from irq handlers"); +torture_param(int, n_barrier_cbs, 0, + "# of callbacks/kthreads for barrier testing"); +torture_param(int, nfakewriters, 4, "Number of RCU fake writer threads"); +torture_param(int, nreaders, -1, "Number of RCU reader threads"); +torture_param(int, object_debug, 0, + "Enable debug-object double call_rcu() testing"); +torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)"); +torture_param(int, onoff_interval, 0, + "Time between CPU hotplugs (s), 0=disable"); +torture_param(int, shuffle_interval, 3, "Number of seconds between shuffles"); +torture_param(int, shutdown_secs, 0, "Shutdown time (s), <= zero to disable."); +torture_param(int, stall_cpu, 0, "Stall duration (s), zero to disable."); +torture_param(int, stall_cpu_holdoff, 10, + "Time to wait before starting stall (s)."); +torture_param(int, stat_interval, 60, + "Number of seconds between stats printk()s"); +torture_param(int, stutter, 5, "Number of seconds to run/halt test"); +torture_param(int, test_boost, 1, "Test RCU prio boost: 0=no, 1=maybe, 2=yes."); +torture_param(int, test_boost_duration, 4, + "Duration of each boost test, seconds."); +torture_param(int, test_boost_interval, 7, + "Interval between boost tests, seconds."); +torture_param(bool, test_no_idle_hz, true, + "Test support for tickless idle CPUs"); +torture_param(bool, verbose, true, + "Enable verbose debugging printk()s"); + static char *torture_type = "rcu"; module_param(torture_type, charp, 0444); MODULE_PARM_DESC(torture_type, "Type of RCU to torture (rcu, rcu_bh, ...)"); -static bool verbose; -module_param(verbose, bool, 0444); -MODULE_PARM_DESC(verbose, "Enable verbose debugging printk()s"); - -#define TORTURE_FLAG "-torture:" -#define PRINTK_STRING(s) \ - do { pr_alert("%s" TORTURE_FLAG s "\n", torture_type); } while (0) -#define VERBOSE_PRINTK_STRING(s) \ - do { if (verbose) pr_alert("%s" TORTURE_FLAG s "\n", torture_type); } while (0) -#define VERBOSE_PRINTK_ERRSTRING(s) \ - do { if (verbose) pr_alert("%s" TORTURE_FLAG "!!! 
" s "\n", torture_type); } while (0) static int nrealreaders; static struct task_struct *writer_task; static struct task_struct **fakewriter_tasks; static struct task_struct **reader_tasks; static struct task_struct *stats_task; -static struct task_struct *shuffler_task; -static struct task_struct *stutter_task; static struct task_struct *fqs_task; static struct task_struct *boost_tasks[NR_CPUS]; -static struct task_struct *shutdown_task; -#ifdef CONFIG_HOTPLUG_CPU -static struct task_struct *onoff_task; -#endif /* #ifdef CONFIG_HOTPLUG_CPU */ static struct task_struct *stall_task; static struct task_struct **barrier_cbs_tasks; static struct task_struct *barrier_task; @@ -170,10 +118,10 @@ static struct rcu_torture __rcu *rcu_torture_current; static unsigned long rcu_torture_current_version; static struct rcu_torture rcu_tortures[10 * RCU_TORTURE_PIPE_LEN]; static DEFINE_SPINLOCK(rcu_torture_lock); -static DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], rcu_torture_count) = - { 0 }; -static DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], rcu_torture_batch) = - { 0 }; +static DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], + rcu_torture_count) = { 0 }; +static DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], + rcu_torture_batch) = { 0 }; static atomic_t rcu_torture_wcount[RCU_TORTURE_PIPE_LEN + 1]; static atomic_t n_rcu_torture_alloc; static atomic_t n_rcu_torture_alloc_fail; @@ -186,22 +134,9 @@ static long n_rcu_torture_boost_rterror; static long n_rcu_torture_boost_failure; static long n_rcu_torture_boosts; static long n_rcu_torture_timers; -static long n_offline_attempts; -static long n_offline_successes; -static unsigned long sum_offline; -static int min_offline = -1; -static int max_offline; -static long n_online_attempts; -static long n_online_successes; -static unsigned long sum_online; -static int min_online = -1; -static int max_online; static long n_barrier_attempts; static long n_barrier_successes; static struct list_head rcu_torture_removed; -static cpumask_var_t shuffle_tmp_mask; - -static int stutter_pause_test; #if defined(MODULE) || defined(CONFIG_RCU_TORTURE_TEST_RUNNABLE) #define RCUTORTURE_RUNNABLE_INIT 1 @@ -232,7 +167,6 @@ static u64 notrace rcu_trace_clock_local(void) } #endif /* #else #ifdef CONFIG_RCU_TRACE */ -static unsigned long shutdown_time; /* jiffies to system shutdown. */ static unsigned long boost_starttime; /* jiffies of next boost test start. */ DEFINE_MUTEX(boost_mutex); /* protect setting boost_starttime */ /* and boost task create/destroy. */ @@ -242,51 +176,6 @@ static atomic_t barrier_cbs_invoked; /* Barrier callbacks invoked. */ static wait_queue_head_t *barrier_cbs_wq; /* Coordinate barrier testing. */ static DECLARE_WAIT_QUEUE_HEAD(barrier_wq); -/* Mediate rmmod and system shutdown. Concurrent rmmod & shutdown illegal! */ - -#define FULLSTOP_DONTSTOP 0 /* Normal operation. */ -#define FULLSTOP_SHUTDOWN 1 /* System shutdown with rcutorture running. */ -#define FULLSTOP_RMMOD 2 /* Normal rmmod of rcutorture. */ -static int fullstop = FULLSTOP_RMMOD; -/* - * Protect fullstop transitions and spawning of kthreads. - */ -static DEFINE_MUTEX(fullstop_mutex); - -/* Forward reference. */ -static void rcu_torture_cleanup(void); - -/* - * Detect and respond to a system shutdown. - */ -static int -rcutorture_shutdown_notify(struct notifier_block *unused1, - unsigned long unused2, void *unused3) -{ - mutex_lock(&fullstop_mutex); - if (fullstop == FULLSTOP_DONTSTOP) - fullstop = FULLSTOP_SHUTDOWN; - else - pr_warn(/* but going down anyway, so... 
*/ - "Concurrent 'rmmod rcutorture' and shutdown illegal!\n"); - mutex_unlock(&fullstop_mutex); - return NOTIFY_DONE; -} - -/* - * Absorb kthreads into a kernel function that won't return, so that - * they won't ever access module text or data again. - */ -static void rcutorture_shutdown_absorb(const char *title) -{ - if (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) { - pr_notice( - "rcutorture thread %s parking due to system shutdown\n", - title); - schedule_timeout_uninterruptible(MAX_SCHEDULE_TIMEOUT); - } -} - /* * Allocate an element from the rcu_tortures pool. */ @@ -320,44 +209,6 @@ rcu_torture_free(struct rcu_torture *p) spin_unlock_bh(&rcu_torture_lock); } -struct rcu_random_state { - unsigned long rrs_state; - long rrs_count; -}; - -#define RCU_RANDOM_MULT 39916801 /* prime */ -#define RCU_RANDOM_ADD 479001701 /* prime */ -#define RCU_RANDOM_REFRESH 10000 - -#define DEFINE_RCU_RANDOM(name) struct rcu_random_state name = { 0, 0 } - -/* - * Crude but fast random-number generator. Uses a linear congruential - * generator, with occasional help from cpu_clock(). - */ -static unsigned long -rcu_random(struct rcu_random_state *rrsp) -{ - if (--rrsp->rrs_count < 0) { - rrsp->rrs_state += (unsigned long)local_clock(); - rrsp->rrs_count = RCU_RANDOM_REFRESH; - } - rrsp->rrs_state = rrsp->rrs_state * RCU_RANDOM_MULT + RCU_RANDOM_ADD; - return swahw32(rrsp->rrs_state); -} - -static void -rcu_stutter_wait(const char *title) -{ - while (stutter_pause_test || !rcutorture_runnable) { - if (rcutorture_runnable) - schedule_timeout_interruptible(1); - else - schedule_timeout_interruptible(round_jiffies_relative(HZ)); - rcutorture_shutdown_absorb(title); - } -} - /* * Operations vector for selecting different types of tests. */ @@ -365,7 +216,7 @@ rcu_stutter_wait(const char *title) struct rcu_torture_ops { void (*init)(void); int (*readlock)(void); - void (*read_delay)(struct rcu_random_state *rrsp); + void (*read_delay)(struct torture_random_state *rrsp); void (*readunlock)(int idx); int (*completed)(void); void (*deferred_free)(struct rcu_torture *p); @@ -392,7 +243,7 @@ static int rcu_torture_read_lock(void) __acquires(RCU) return 0; } -static void rcu_read_delay(struct rcu_random_state *rrsp) +static void rcu_read_delay(struct torture_random_state *rrsp) { const unsigned long shortdelay_us = 200; const unsigned long longdelay_ms = 50; @@ -401,12 +252,13 @@ static void rcu_read_delay(struct rcu_random_state *rrsp) * period, and we want a long delay occasionally to trigger * force_quiescent_state. */ - if (!(rcu_random(rrsp) % (nrealreaders * 2000 * longdelay_ms))) + if (!(torture_random(rrsp) % (nrealreaders * 2000 * longdelay_ms))) mdelay(longdelay_ms); - if (!(rcu_random(rrsp) % (nrealreaders * 2 * shortdelay_us))) + if (!(torture_random(rrsp) % (nrealreaders * 2 * shortdelay_us))) udelay(shortdelay_us); #ifdef CONFIG_PREEMPT - if (!preempt_count() && !(rcu_random(rrsp) % (nrealreaders * 20000))) + if (!preempt_count() && + !(torture_random(rrsp) % (nrealreaders * 20000))) preempt_schedule(); /* No QS if preempt_disable() in effect */ #endif } @@ -427,7 +279,7 @@ rcu_torture_cb(struct rcu_head *p) int i; struct rcu_torture *rp = container_of(p, struct rcu_torture, rtort_rcu); - if (fullstop != FULLSTOP_DONTSTOP) { + if (torture_must_stop_irq()) { /* Test is ending, just drop callbacks on the floor. */ /* The next initialization will pick up the pieces. 
*/ return; @@ -520,6 +372,48 @@ static struct rcu_torture_ops rcu_bh_ops = { }; /* + * Don't even think about trying any of these in real life!!! + * The names include "busted", and they really mean it! + * The only purpose of these functions is to provide a buggy RCU + * implementation to make sure that rcutorture correctly emits + * buggy-RCU error messages. + */ +static void rcu_busted_torture_deferred_free(struct rcu_torture *p) +{ + /* This is a deliberate bug for testing purposes only! */ + rcu_torture_cb(&p->rtort_rcu); +} + +static void synchronize_rcu_busted(void) +{ + /* This is a deliberate bug for testing purposes only! */ +} + +static void +call_rcu_busted(struct rcu_head *head, void (*func)(struct rcu_head *rcu)) +{ + /* This is a deliberate bug for testing purposes only! */ + func(head); +} + +static struct rcu_torture_ops rcu_busted_ops = { + .init = rcu_sync_torture_init, + .readlock = rcu_torture_read_lock, + .read_delay = rcu_read_delay, /* just reuse rcu's version. */ + .readunlock = rcu_torture_read_unlock, + .completed = rcu_no_completed, + .deferred_free = rcu_busted_torture_deferred_free, + .sync = synchronize_rcu_busted, + .exp_sync = synchronize_rcu_busted, + .call = call_rcu_busted, + .cb_barrier = NULL, + .fqs = NULL, + .stats = NULL, + .irq_capable = 1, + .name = "rcu_busted" +}; + +/* * Definitions for srcu torture testing. */ @@ -530,7 +424,7 @@ static int srcu_torture_read_lock(void) __acquires(&srcu_ctl) return srcu_read_lock(&srcu_ctl); } -static void srcu_read_delay(struct rcu_random_state *rrsp) +static void srcu_read_delay(struct torture_random_state *rrsp) { long delay; const long uspertick = 1000000 / HZ; @@ -538,7 +432,8 @@ static void srcu_read_delay(struct rcu_random_state *rrsp) /* We want there to be long-running readers, but not all the time. */ - delay = rcu_random(rrsp) % (nrealreaders * 2 * longdelay * uspertick); + delay = torture_random(rrsp) % + (nrealreaders * 2 * longdelay * uspertick); if (!delay) schedule_timeout_interruptible(longdelay); else @@ -677,12 +572,12 @@ static int rcu_torture_boost(void *arg) struct rcu_boost_inflight rbi = { .inflight = 0 }; struct sched_param sp; - VERBOSE_PRINTK_STRING("rcu_torture_boost started"); + VERBOSE_TOROUT_STRING("rcu_torture_boost started"); /* Set real-time priority.
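The rcu_busted_ops vector added above is fault injection aimed at the test harness itself: a deliberately broken RCU, so that rcutorture's failure-reporting paths can be seen to fire. A self-contained sketch of the same negative-testing idea (all names here are illustrative, not rcutorture's):

#include <stdio.h>

/* Minimal "grace period" model: sync() must not return until it is safe. */
static int safe_to_free;

static void sync_correct(void) { safe_to_free = 1; /* waited for readers */ }
static void sync_busted(void)  { /* deliberate bug: returns immediately */ }

struct sync_ops {
	const char *name;
	void (*sync)(void);
};

static const struct sync_ops ops[] = {
	{ "correct", sync_correct },
	{ "busted",  sync_busted },
};

int main(void)
{
	/* The harness must flag the busted implementation, not just pass it. */
	for (unsigned i = 0; i < sizeof(ops) / sizeof(ops[0]); i++) {
		safe_to_free = 0;
		ops[i].sync();
		printf("%s: %s\n", ops[i].name,
		       safe_to_free ? "OK" : "BUG: sync returned too soon");
	}
	return 0;
}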
*/ sp.sched_priority = 1; if (sched_setscheduler(current, SCHED_FIFO, &sp) < 0) { - VERBOSE_PRINTK_STRING("rcu_torture_boost RT prio failed!"); + VERBOSE_TOROUT_STRING("rcu_torture_boost RT prio failed!"); n_rcu_torture_boost_rterror++; } @@ -693,9 +588,8 @@ static int rcu_torture_boost(void *arg) oldstarttime = boost_starttime; while (ULONG_CMP_LT(jiffies, oldstarttime)) { schedule_timeout_interruptible(oldstarttime - jiffies); - rcu_stutter_wait("rcu_torture_boost"); - if (kthread_should_stop() || - fullstop != FULLSTOP_DONTSTOP) + stutter_wait("rcu_torture_boost"); + if (torture_must_stop()) goto checkwait; } @@ -710,15 +604,14 @@ static int rcu_torture_boost(void *arg) call_rcu(&rbi.rcu, rcu_torture_boost_cb); if (jiffies - call_rcu_time > test_boost_duration * HZ - HZ / 2) { - VERBOSE_PRINTK_STRING("rcu_torture_boost boosting failed"); + VERBOSE_TOROUT_STRING("rcu_torture_boost boosting failed"); n_rcu_torture_boost_failure++; } call_rcu_time = jiffies; } cond_resched(); - rcu_stutter_wait("rcu_torture_boost"); - if (kthread_should_stop() || - fullstop != FULLSTOP_DONTSTOP) + stutter_wait("rcu_torture_boost"); + if (torture_must_stop()) goto checkwait; } @@ -742,16 +635,17 @@ static int rcu_torture_boost(void *arg) } /* Go do the stutter. */ -checkwait: rcu_stutter_wait("rcu_torture_boost"); - } while (!kthread_should_stop() && fullstop == FULLSTOP_DONTSTOP); +checkwait: stutter_wait("rcu_torture_boost"); + } while (!torture_must_stop()); /* Clean up and exit. */ - VERBOSE_PRINTK_STRING("rcu_torture_boost task stopping"); - rcutorture_shutdown_absorb("rcu_torture_boost"); - while (!kthread_should_stop() || rbi.inflight) + while (!kthread_should_stop() || rbi.inflight) { + torture_shutdown_absorb("rcu_torture_boost"); schedule_timeout_uninterruptible(1); + } smp_mb(); /* order accesses to ->inflight before stack-frame death. 
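Most of the churn in these hunks replaces open-coded stop-and-park sequences with torture_must_stop(), stutter_wait() and torture_kthread_stopping(). A pthread sketch of that loop shape, assuming an atomic stop flag in place of the kthread machinery (the helper names deliberately mirror the kernel's, but the bodies are illustrative, not the kernel implementations):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int must_stop;	/* torture_must_stop() analogue */
static atomic_int paused;	/* set/cleared by a stutter controller */

static void stutter_wait(const char *title)
{
	(void)title;
	while (atomic_load(&paused) && !atomic_load(&must_stop))
		usleep(1000);	/* park while the test is "stuttered" */
}

static void *worker(void *arg)
{
	(void)arg;
	do {
		/* ... one unit of torture work would go here ... */
		stutter_wait("worker");
	} while (!atomic_load(&must_stop));
	fprintf(stderr, "worker stopping\n"); /* torture_kthread_stopping() analogue */
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	usleep(100000);
	atomic_store(&must_stop, 1);	/* torture_stop_kthread() analogue */
	pthread_join(t, NULL);
	return 0;
}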
*/ destroy_rcu_head_on_stack(&rbi.rcu); + torture_kthread_stopping("rcu_torture_boost"); return 0; } @@ -766,7 +660,7 @@ rcu_torture_fqs(void *arg) unsigned long fqs_resume_time; int fqs_burst_remaining; - VERBOSE_PRINTK_STRING("rcu_torture_fqs task started"); + VERBOSE_TOROUT_STRING("rcu_torture_fqs task started"); do { fqs_resume_time = jiffies + fqs_stutter * HZ; while (ULONG_CMP_LT(jiffies, fqs_resume_time) && @@ -780,12 +674,9 @@ rcu_torture_fqs(void *arg) udelay(fqs_holdoff); fqs_burst_remaining -= fqs_holdoff; } - rcu_stutter_wait("rcu_torture_fqs"); - } while (!kthread_should_stop() && fullstop == FULLSTOP_DONTSTOP); - VERBOSE_PRINTK_STRING("rcu_torture_fqs task stopping"); - rcutorture_shutdown_absorb("rcu_torture_fqs"); - while (!kthread_should_stop()) - schedule_timeout_uninterruptible(1); + stutter_wait("rcu_torture_fqs"); + } while (!torture_must_stop()); + torture_kthread_stopping("rcu_torture_fqs"); return 0; } @@ -802,10 +693,10 @@ rcu_torture_writer(void *arg) struct rcu_torture *rp; struct rcu_torture *rp1; struct rcu_torture *old_rp; - static DEFINE_RCU_RANDOM(rand); + static DEFINE_TORTURE_RANDOM(rand); - VERBOSE_PRINTK_STRING("rcu_torture_writer task started"); - set_user_nice(current, 19); + VERBOSE_TOROUT_STRING("rcu_torture_writer task started"); + set_user_nice(current, MAX_NICE); do { schedule_timeout_uninterruptible(1); @@ -813,7 +704,7 @@ rcu_torture_writer(void *arg) if (rp == NULL) continue; rp->rtort_pipe_count = 0; - udelay(rcu_random(&rand) & 0x3ff); + udelay(torture_random(&rand) & 0x3ff); old_rp = rcu_dereference_check(rcu_torture_current, current == writer_task); rp->rtort_mbtest = 1; @@ -826,7 +717,7 @@ rcu_torture_writer(void *arg) atomic_inc(&rcu_torture_wcount[i]); old_rp->rtort_pipe_count++; if (gp_normal == gp_exp) - exp = !!(rcu_random(&rand) & 0x80); + exp = !!(torture_random(&rand) & 0x80); else exp = gp_exp; if (!exp) { @@ -852,12 +743,9 @@ rcu_torture_writer(void *arg) } } rcutorture_record_progress(++rcu_torture_current_version); - rcu_stutter_wait("rcu_torture_writer"); - } while (!kthread_should_stop() && fullstop == FULLSTOP_DONTSTOP); - VERBOSE_PRINTK_STRING("rcu_torture_writer task stopping"); - rcutorture_shutdown_absorb("rcu_torture_writer"); - while (!kthread_should_stop()) - schedule_timeout_uninterruptible(1); + stutter_wait("rcu_torture_writer"); + } while (!torture_must_stop()); + torture_kthread_stopping("rcu_torture_writer"); return 0; } @@ -868,19 +756,19 @@ rcu_torture_writer(void *arg) static int rcu_torture_fakewriter(void *arg) { - DEFINE_RCU_RANDOM(rand); + DEFINE_TORTURE_RANDOM(rand); - VERBOSE_PRINTK_STRING("rcu_torture_fakewriter task started"); - set_user_nice(current, 19); + VERBOSE_TOROUT_STRING("rcu_torture_fakewriter task started"); + set_user_nice(current, MAX_NICE); do { - schedule_timeout_uninterruptible(1 + rcu_random(&rand)%10); - udelay(rcu_random(&rand) & 0x3ff); + schedule_timeout_uninterruptible(1 + torture_random(&rand)%10); + udelay(torture_random(&rand) & 0x3ff); if (cur_ops->cb_barrier != NULL && - rcu_random(&rand) % (nfakewriters * 8) == 0) { + torture_random(&rand) % (nfakewriters * 8) == 0) { cur_ops->cb_barrier(); } else if (gp_normal == gp_exp) { - if (rcu_random(&rand) & 0x80) + if (torture_random(&rand) & 0x80) cur_ops->sync(); else cur_ops->exp_sync(); @@ -889,13 +777,10 @@ rcu_torture_fakewriter(void *arg) } else { cur_ops->exp_sync(); } - rcu_stutter_wait("rcu_torture_fakewriter"); - } while (!kthread_should_stop() && fullstop == FULLSTOP_DONTSTOP); + stutter_wait("rcu_torture_fakewriter"); 
+ } while (!torture_must_stop()); - VERBOSE_PRINTK_STRING("rcu_torture_fakewriter task stopping"); - rcutorture_shutdown_absorb("rcu_torture_fakewriter"); - while (!kthread_should_stop()) - schedule_timeout_uninterruptible(1); + torture_kthread_stopping("rcu_torture_fakewriter"); return 0; } @@ -921,7 +806,7 @@ static void rcu_torture_timer(unsigned long unused) int idx; int completed; int completed_end; - static DEFINE_RCU_RANDOM(rand); + static DEFINE_TORTURE_RANDOM(rand); static DEFINE_SPINLOCK(rand_lock); struct rcu_torture *p; int pipe_count; @@ -980,14 +865,14 @@ rcu_torture_reader(void *arg) int completed; int completed_end; int idx; - DEFINE_RCU_RANDOM(rand); + DEFINE_TORTURE_RANDOM(rand); struct rcu_torture *p; int pipe_count; struct timer_list t; unsigned long long ts; - VERBOSE_PRINTK_STRING("rcu_torture_reader task started"); - set_user_nice(current, 19); + VERBOSE_TOROUT_STRING("rcu_torture_reader task started"); + set_user_nice(current, MAX_NICE); if (irqreader && cur_ops->irq_capable) setup_timer_on_stack(&t, rcu_torture_timer, 0); @@ -1034,14 +919,11 @@ rcu_torture_reader(void *arg) preempt_enable(); cur_ops->readunlock(idx); schedule(); - rcu_stutter_wait("rcu_torture_reader"); - } while (!kthread_should_stop() && fullstop == FULLSTOP_DONTSTOP); - VERBOSE_PRINTK_STRING("rcu_torture_reader task stopping"); - rcutorture_shutdown_absorb("rcu_torture_reader"); + stutter_wait("rcu_torture_reader"); + } while (!torture_must_stop()); if (irqreader && cur_ops->irq_capable) del_timer_sync(&t); - while (!kthread_should_stop()) - schedule_timeout_uninterruptible(1); + torture_kthread_stopping("rcu_torture_reader"); return 0; } @@ -1083,13 +965,7 @@ rcu_torture_printk(char *page) n_rcu_torture_boost_failure, n_rcu_torture_boosts, n_rcu_torture_timers); - page += sprintf(page, - "onoff: %ld/%ld:%ld/%ld %d,%d:%d,%d %lu:%lu (HZ=%d) ", - n_online_successes, n_online_attempts, - n_offline_successes, n_offline_attempts, - min_online, max_online, - min_offline, max_offline, - sum_online, sum_offline, HZ); + page = torture_onoff_stats(page); page += sprintf(page, "barrier: %ld/%ld:%ld", n_barrier_successes, n_barrier_attempts, @@ -1150,123 +1026,17 @@ rcu_torture_stats_print(void) /* * Periodically prints torture statistics, if periodic statistics printing * was specified via the stat_interval module parameter. - * - * No need to worry about fullstop here, since this one doesn't reference - * volatile state or register callbacks. */ static int rcu_torture_stats(void *arg) { - VERBOSE_PRINTK_STRING("rcu_torture_stats task started"); + VERBOSE_TOROUT_STRING("rcu_torture_stats task started"); do { schedule_timeout_interruptible(stat_interval * HZ); rcu_torture_stats_print(); - rcutorture_shutdown_absorb("rcu_torture_stats"); - } while (!kthread_should_stop()); - VERBOSE_PRINTK_STRING("rcu_torture_stats task stopping"); - return 0; -} - -static int rcu_idle_cpu; /* Force all torture tasks off this CPU */ - -/* Shuffle tasks such that we allow @rcu_idle_cpu to become idle. A special case - * is when @rcu_idle_cpu = -1, when we allow the tasks to run on all CPUs. 
- */ -static void rcu_torture_shuffle_tasks(void) -{ - int i; - - cpumask_setall(shuffle_tmp_mask); - get_online_cpus(); - - /* No point in shuffling if there is only one online CPU (ex: UP) */ - if (num_online_cpus() == 1) { - put_online_cpus(); - return; - } - - if (rcu_idle_cpu != -1) - cpumask_clear_cpu(rcu_idle_cpu, shuffle_tmp_mask); - - set_cpus_allowed_ptr(current, shuffle_tmp_mask); - - if (reader_tasks) { - for (i = 0; i < nrealreaders; i++) - if (reader_tasks[i]) - set_cpus_allowed_ptr(reader_tasks[i], - shuffle_tmp_mask); - } - if (fakewriter_tasks) { - for (i = 0; i < nfakewriters; i++) - if (fakewriter_tasks[i]) - set_cpus_allowed_ptr(fakewriter_tasks[i], - shuffle_tmp_mask); - } - if (writer_task) - set_cpus_allowed_ptr(writer_task, shuffle_tmp_mask); - if (stats_task) - set_cpus_allowed_ptr(stats_task, shuffle_tmp_mask); - if (stutter_task) - set_cpus_allowed_ptr(stutter_task, shuffle_tmp_mask); - if (fqs_task) - set_cpus_allowed_ptr(fqs_task, shuffle_tmp_mask); - if (shutdown_task) - set_cpus_allowed_ptr(shutdown_task, shuffle_tmp_mask); -#ifdef CONFIG_HOTPLUG_CPU - if (onoff_task) - set_cpus_allowed_ptr(onoff_task, shuffle_tmp_mask); -#endif /* #ifdef CONFIG_HOTPLUG_CPU */ - if (stall_task) - set_cpus_allowed_ptr(stall_task, shuffle_tmp_mask); - if (barrier_cbs_tasks) - for (i = 0; i < n_barrier_cbs; i++) - if (barrier_cbs_tasks[i]) - set_cpus_allowed_ptr(barrier_cbs_tasks[i], - shuffle_tmp_mask); - if (barrier_task) - set_cpus_allowed_ptr(barrier_task, shuffle_tmp_mask); - - if (rcu_idle_cpu == -1) - rcu_idle_cpu = num_online_cpus() - 1; - else - rcu_idle_cpu--; - - put_online_cpus(); -} - -/* Shuffle tasks across CPUs, with the intent of allowing each CPU in the - * system to become idle at a time and cut off its timer ticks. This is meant - * to test the support for such tickless idle CPU in RCU. - */ -static int -rcu_torture_shuffle(void *arg) -{ - VERBOSE_PRINTK_STRING("rcu_torture_shuffle task started"); - do { - schedule_timeout_interruptible(shuffle_interval * HZ); - rcu_torture_shuffle_tasks(); - rcutorture_shutdown_absorb("rcu_torture_shuffle"); - } while (!kthread_should_stop()); - VERBOSE_PRINTK_STRING("rcu_torture_shuffle task stopping"); - return 0; -} - -/* Cause the rcutorture test to "stutter", starting and stopping all - * threads periodically. 
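The deleted shuffler above herds every torture task away from one designated CPU and rotates that CPU through the system, so each CPU in turn can go idle and drop its tick; the same logic now sits behind torture_shuffle_init(). A userspace analogue of the affinity trick, assuming Linux's sched_setaffinity() applied to the calling thread:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Allow the current thread on every CPU except 'idle_cpu'. */
static int shuffle_away_from(int idle_cpu)
{
	cpu_set_t mask;
	int cpu, ncpus = sysconf(_SC_NPROCESSORS_ONLN);

	if (ncpus <= 1)
		return 0;	/* no point in shuffling on UP */
	CPU_ZERO(&mask);
	for (cpu = 0; cpu < ncpus; cpu++)
		if (cpu != idle_cpu)
			CPU_SET(cpu, &mask);
	return sched_setaffinity(0, sizeof(mask), &mask);
}

int main(void)
{
	int ncpus = sysconf(_SC_NPROCESSORS_ONLN);

	/* Rotate the excluded CPU, as the shuffler rotates rcu_idle_cpu. */
	for (int idle_cpu = ncpus - 1; idle_cpu >= 0; idle_cpu--) {
		shuffle_away_from(idle_cpu);
		sleep(1);	/* shuffle_interval analogue */
	}
	return 0;
}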
- */ -static int -rcu_torture_stutter(void *arg) -{ - VERBOSE_PRINTK_STRING("rcu_torture_stutter task started"); - do { - schedule_timeout_interruptible(stutter * HZ); - stutter_pause_test = 1; - if (!kthread_should_stop()) - schedule_timeout_interruptible(stutter * HZ); - stutter_pause_test = 0; - rcutorture_shutdown_absorb("rcu_torture_stutter"); - } while (!kthread_should_stop()); - VERBOSE_PRINTK_STRING("rcu_torture_stutter task stopping"); + torture_shutdown_absorb("rcu_torture_stats"); + } while (!torture_must_stop()); + torture_kthread_stopping("rcu_torture_stats"); return 0; } @@ -1293,10 +1063,6 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, const char *tag) onoff_interval, onoff_holdoff); } -static struct notifier_block rcutorture_shutdown_nb = { - .notifier_call = rcutorture_shutdown_notify, -}; - static void rcutorture_booster_cleanup(int cpu) { struct task_struct *t; @@ -1304,14 +1070,12 @@ static void rcutorture_booster_cleanup(int cpu) if (boost_tasks[cpu] == NULL) return; mutex_lock(&boost_mutex); - VERBOSE_PRINTK_STRING("Stopping rcu_torture_boost task"); t = boost_tasks[cpu]; boost_tasks[cpu] = NULL; mutex_unlock(&boost_mutex); /* This must be outside of the mutex, otherwise deadlock! */ - kthread_stop(t); - boost_tasks[cpu] = NULL; + torture_stop_kthread(rcu_torture_boost, t); } static int rcutorture_booster_init(int cpu) @@ -1323,13 +1087,13 @@ static int rcutorture_booster_init(int cpu) /* Don't allow time recalculation while creating a new task. */ mutex_lock(&boost_mutex); - VERBOSE_PRINTK_STRING("Creating rcu_torture_boost task"); + VERBOSE_TOROUT_STRING("Creating rcu_torture_boost task"); boost_tasks[cpu] = kthread_create_on_node(rcu_torture_boost, NULL, cpu_to_node(cpu), "rcu_torture_boost"); if (IS_ERR(boost_tasks[cpu])) { retval = PTR_ERR(boost_tasks[cpu]); - VERBOSE_PRINTK_STRING("rcu_torture_boost task create failed"); + VERBOSE_TOROUT_STRING("rcu_torture_boost task create failed"); n_rcu_torture_boost_ktrerror++; boost_tasks[cpu] = NULL; mutex_unlock(&boost_mutex); @@ -1342,175 +1106,6 @@ static int rcutorture_booster_init(int cpu) } /* - * Cause the rcutorture test to shutdown the system after the test has - * run for the time specified by the shutdown_secs module parameter. - */ -static int -rcu_torture_shutdown(void *arg) -{ - long delta; - unsigned long jiffies_snap; - - VERBOSE_PRINTK_STRING("rcu_torture_shutdown task started"); - jiffies_snap = ACCESS_ONCE(jiffies); - while (ULONG_CMP_LT(jiffies_snap, shutdown_time) && - !kthread_should_stop()) { - delta = shutdown_time - jiffies_snap; - if (verbose) - pr_alert("%s" TORTURE_FLAG - "rcu_torture_shutdown task: %lu jiffies remaining\n", - torture_type, delta); - schedule_timeout_interruptible(delta); - jiffies_snap = ACCESS_ONCE(jiffies); - } - if (kthread_should_stop()) { - VERBOSE_PRINTK_STRING("rcu_torture_shutdown task stopping"); - return 0; - } - - /* OK, shut down the system. */ - - VERBOSE_PRINTK_STRING("rcu_torture_shutdown task shutting down system"); - shutdown_task = NULL; /* Avoid self-kill deadlock. */ - rcu_torture_cleanup(); /* Get the success/failure message. */ - kernel_power_off(); /* Shut down the system. */ - return 0; -} - -#ifdef CONFIG_HOTPLUG_CPU - -/* - * Execute random CPU-hotplug operations at the interval specified - * by the onoff_interval. 
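Note the ULONG_CMP_LT(jiffies_snap, shutdown_time) in the deleted shutdown loop above: jiffies comparisons must survive counter wrap, so they are done through unsigned subtraction rather than a bare '<'. The macro bodies below match their definitions in rcupdate.h; the demonstration around them is illustrative:

#include <limits.h>
#include <stdio.h>

/* Wraparound-safe comparisons, as in include/linux/rcupdate.h. */
#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
#define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))

int main(void)
{
	unsigned long now = ULONG_MAX - 5;	/* counter about to wrap */
	unsigned long deadline = now + 10;	/* wraps around to 4 */

	/* Naive '<' misfires after the wrap: it claims the deadline passed. */
	printf("naive: %d\n", now < deadline);		    /* prints 0 */
	printf("safe:  %d\n", ULONG_CMP_LT(now, deadline)); /* prints 1 */
	return 0;
}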
- */ -static int -rcu_torture_onoff(void *arg) -{ - int cpu; - unsigned long delta; - int maxcpu = -1; - DEFINE_RCU_RANDOM(rand); - int ret; - unsigned long starttime; - - VERBOSE_PRINTK_STRING("rcu_torture_onoff task started"); - for_each_online_cpu(cpu) - maxcpu = cpu; - WARN_ON(maxcpu < 0); - if (onoff_holdoff > 0) { - VERBOSE_PRINTK_STRING("rcu_torture_onoff begin holdoff"); - schedule_timeout_interruptible(onoff_holdoff * HZ); - VERBOSE_PRINTK_STRING("rcu_torture_onoff end holdoff"); - } - while (!kthread_should_stop()) { - cpu = (rcu_random(&rand) >> 4) % (maxcpu + 1); - if (cpu_online(cpu) && cpu_is_hotpluggable(cpu)) { - if (verbose) - pr_alert("%s" TORTURE_FLAG - "rcu_torture_onoff task: offlining %d\n", - torture_type, cpu); - starttime = jiffies; - n_offline_attempts++; - ret = cpu_down(cpu); - if (ret) { - if (verbose) - pr_alert("%s" TORTURE_FLAG - "rcu_torture_onoff task: offline %d failed: errno %d\n", - torture_type, cpu, ret); - } else { - if (verbose) - pr_alert("%s" TORTURE_FLAG - "rcu_torture_onoff task: offlined %d\n", - torture_type, cpu); - n_offline_successes++; - delta = jiffies - starttime; - sum_offline += delta; - if (min_offline < 0) { - min_offline = delta; - max_offline = delta; - } - if (min_offline > delta) - min_offline = delta; - if (max_offline < delta) - max_offline = delta; - } - } else if (cpu_is_hotpluggable(cpu)) { - if (verbose) - pr_alert("%s" TORTURE_FLAG - "rcu_torture_onoff task: onlining %d\n", - torture_type, cpu); - starttime = jiffies; - n_online_attempts++; - ret = cpu_up(cpu); - if (ret) { - if (verbose) - pr_alert("%s" TORTURE_FLAG - "rcu_torture_onoff task: online %d failed: errno %d\n", - torture_type, cpu, ret); - } else { - if (verbose) - pr_alert("%s" TORTURE_FLAG - "rcu_torture_onoff task: onlined %d\n", - torture_type, cpu); - n_online_successes++; - delta = jiffies - starttime; - sum_online += delta; - if (min_online < 0) { - min_online = delta; - max_online = delta; - } - if (min_online > delta) - min_online = delta; - if (max_online < delta) - max_online = delta; - } - } - schedule_timeout_interruptible(onoff_interval * HZ); - } - VERBOSE_PRINTK_STRING("rcu_torture_onoff task stopping"); - return 0; -} - -static int -rcu_torture_onoff_init(void) -{ - int ret; - - if (onoff_interval <= 0) - return 0; - onoff_task = kthread_run(rcu_torture_onoff, NULL, "rcu_torture_onoff"); - if (IS_ERR(onoff_task)) { - ret = PTR_ERR(onoff_task); - onoff_task = NULL; - return ret; - } - return 0; -} - -static void rcu_torture_onoff_cleanup(void) -{ - if (onoff_task == NULL) - return; - VERBOSE_PRINTK_STRING("Stopping rcu_torture_onoff task"); - kthread_stop(onoff_task); - onoff_task = NULL; -} - -#else /* #ifdef CONFIG_HOTPLUG_CPU */ - -static int -rcu_torture_onoff_init(void) -{ - return 0; -} - -static void rcu_torture_onoff_cleanup(void) -{ -} - -#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */ - -/* * CPU-stall kthread. It waits as specified by stall_cpu_holdoff, then * induces a CPU stall for the time specified by stall_cpu. 
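The removed onoff kthread above drives cpu_down()/cpu_up() at random and keeps min/max/sum statistics for each transition; torture_onoff_init() now provides all of this. From userspace, one iteration of the same experiment can be approximated through the standard sysfs hotplug interface (root required, error handling kept minimal):

#include <stdio.h>

/* Write 0 or 1 to /sys/devices/system/cpu/cpuN/online. */
static int cpu_set_online(int cpu, int online)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/online", cpu);
	f = fopen(path, "w");
	if (!f)
		return -1;	/* CPU 0 is often not hotpluggable */
	fprintf(f, "%d\n", online);
	return fclose(f);
}

int main(void)
{
	/* Offline CPU 1, then bring it back, as one onoff iteration does. */
	if (cpu_set_online(1, 0) == 0) {
		puts("cpu1 offlined");
		cpu_set_online(1, 1);
		puts("cpu1 onlined");
	}
	return 0;
}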
*/ @@ -1518,11 +1113,11 @@ static int rcu_torture_stall(void *args) { unsigned long stop_at; - VERBOSE_PRINTK_STRING("rcu_torture_stall task started"); + VERBOSE_TOROUT_STRING("rcu_torture_stall task started"); if (stall_cpu_holdoff > 0) { - VERBOSE_PRINTK_STRING("rcu_torture_stall begin holdoff"); + VERBOSE_TOROUT_STRING("rcu_torture_stall begin holdoff"); schedule_timeout_interruptible(stall_cpu_holdoff * HZ); - VERBOSE_PRINTK_STRING("rcu_torture_stall end holdoff"); + VERBOSE_TOROUT_STRING("rcu_torture_stall end holdoff"); } if (!kthread_should_stop()) { stop_at = get_seconds() + stall_cpu; @@ -1536,7 +1131,7 @@ static int rcu_torture_stall(void *args) rcu_read_unlock(); pr_alert("rcu_torture_stall end.\n"); } - rcutorture_shutdown_absorb("rcu_torture_stall"); + torture_shutdown_absorb("rcu_torture_stall"); while (!kthread_should_stop()) schedule_timeout_interruptible(10 * HZ); return 0; @@ -1545,27 +1140,9 @@ static int rcu_torture_stall(void *args) /* Spawn CPU-stall kthread, if stall_cpu specified. */ static int __init rcu_torture_stall_init(void) { - int ret; - if (stall_cpu <= 0) return 0; - stall_task = kthread_run(rcu_torture_stall, NULL, "rcu_torture_stall"); - if (IS_ERR(stall_task)) { - ret = PTR_ERR(stall_task); - stall_task = NULL; - return ret; - } - return 0; -} - -/* Clean up after the CPU-stall kthread, if one was spawned. */ -static void rcu_torture_stall_cleanup(void) -{ - if (stall_task == NULL) - return; - VERBOSE_PRINTK_STRING("Stopping rcu_torture_stall_task."); - kthread_stop(stall_task); - stall_task = NULL; + return torture_create_kthread(rcu_torture_stall, NULL, stall_task); } /* Callback function for RCU barrier testing. */ @@ -1583,28 +1160,24 @@ static int rcu_torture_barrier_cbs(void *arg) struct rcu_head rcu; init_rcu_head_on_stack(&rcu); - VERBOSE_PRINTK_STRING("rcu_torture_barrier_cbs task started"); - set_user_nice(current, 19); + VERBOSE_TOROUT_STRING("rcu_torture_barrier_cbs task started"); + set_user_nice(current, MAX_NICE); do { wait_event(barrier_cbs_wq[myid], (newphase = ACCESS_ONCE(barrier_phase)) != lastphase || - kthread_should_stop() || - fullstop != FULLSTOP_DONTSTOP); + torture_must_stop()); lastphase = newphase; smp_mb(); /* ensure barrier_phase load before ->call(). 
*/ - if (kthread_should_stop() || fullstop != FULLSTOP_DONTSTOP) + if (torture_must_stop()) break; cur_ops->call(&rcu, rcu_torture_barrier_cbf); if (atomic_dec_and_test(&barrier_cbs_count)) wake_up(&barrier_wq); - } while (!kthread_should_stop() && fullstop == FULLSTOP_DONTSTOP); - VERBOSE_PRINTK_STRING("rcu_torture_barrier_cbs task stopping"); - rcutorture_shutdown_absorb("rcu_torture_barrier_cbs"); - while (!kthread_should_stop()) - schedule_timeout_interruptible(1); + } while (!torture_must_stop()); cur_ops->cb_barrier(); destroy_rcu_head_on_stack(&rcu); + torture_kthread_stopping("rcu_torture_barrier_cbs"); return 0; } @@ -1613,7 +1186,7 @@ static int rcu_torture_barrier(void *arg) { int i; - VERBOSE_PRINTK_STRING("rcu_torture_barrier task starting"); + VERBOSE_TOROUT_STRING("rcu_torture_barrier task starting"); do { atomic_set(&barrier_cbs_invoked, 0); atomic_set(&barrier_cbs_count, n_barrier_cbs); @@ -1623,9 +1196,8 @@ static int rcu_torture_barrier(void *arg) wake_up(&barrier_cbs_wq[i]); wait_event(barrier_wq, atomic_read(&barrier_cbs_count) == 0 || - kthread_should_stop() || - fullstop != FULLSTOP_DONTSTOP); - if (kthread_should_stop() || fullstop != FULLSTOP_DONTSTOP) + torture_must_stop()); + if (torture_must_stop()) break; n_barrier_attempts++; cur_ops->cb_barrier(); /* Implies smp_mb() for wait_event(). */ @@ -1635,11 +1207,8 @@ static int rcu_torture_barrier(void *arg) } n_barrier_successes++; schedule_timeout_interruptible(HZ / 10); - } while (!kthread_should_stop() && fullstop == FULLSTOP_DONTSTOP); - VERBOSE_PRINTK_STRING("rcu_torture_barrier task stopping"); - rcutorture_shutdown_absorb("rcu_torture_barrier"); - while (!kthread_should_stop()) - schedule_timeout_interruptible(1); + } while (!torture_must_stop()); + torture_kthread_stopping("rcu_torture_barrier"); return 0; } @@ -1672,24 +1241,13 @@ static int rcu_torture_barrier_init(void) return -ENOMEM; for (i = 0; i < n_barrier_cbs; i++) { init_waitqueue_head(&barrier_cbs_wq[i]); - barrier_cbs_tasks[i] = kthread_run(rcu_torture_barrier_cbs, - (void *)(long)i, - "rcu_torture_barrier_cbs"); - if (IS_ERR(barrier_cbs_tasks[i])) { - ret = PTR_ERR(barrier_cbs_tasks[i]); - VERBOSE_PRINTK_ERRSTRING("Failed to create rcu_torture_barrier_cbs"); - barrier_cbs_tasks[i] = NULL; + ret = torture_create_kthread(rcu_torture_barrier_cbs, + (void *)(long)i, + barrier_cbs_tasks[i]); + if (ret) return ret; - } } - barrier_task = kthread_run(rcu_torture_barrier, NULL, - "rcu_torture_barrier"); - if (IS_ERR(barrier_task)) { - ret = PTR_ERR(barrier_task); - VERBOSE_PRINTK_ERRSTRING("Failed to create rcu_torture_barrier"); - barrier_task = NULL; - } - return 0; + return torture_create_kthread(rcu_torture_barrier, NULL, barrier_task); } /* Clean up after RCU barrier testing. 
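rcu_torture_barrier() above arms one callback per barrier_cbs kthread, then sleeps on barrier_wq until atomic_dec_and_test() drops barrier_cbs_count to zero and the last poster issues the wake_up(). A condition-variable sketch of that counted-rendezvous pattern:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done = PTHREAD_COND_INITIALIZER;
static int cbs_count = NTHREADS;	/* barrier_cbs_count analogue */

static void *cb_thread(void *arg)
{
	(void)arg;
	/* ... post a callback, then signal completion ... */
	pthread_mutex_lock(&lock);
	if (--cbs_count == 0)		  /* atomic_dec_and_test() analogue */
		pthread_cond_signal(&done); /* wake_up(&barrier_wq) analogue */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t tids[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, cb_thread, NULL);
	pthread_mutex_lock(&lock);
	while (cbs_count != 0)		/* wait_event() analogue */
		pthread_cond_wait(&done, &lock);
	pthread_mutex_unlock(&lock);
	puts("all callbacks queued; now run the real barrier");
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}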
*/ @@ -1697,19 +1255,11 @@ static void rcu_torture_barrier_cleanup(void) { int i; - if (barrier_task != NULL) { - VERBOSE_PRINTK_STRING("Stopping rcu_torture_barrier task"); - kthread_stop(barrier_task); - barrier_task = NULL; - } + torture_stop_kthread(rcu_torture_barrier, barrier_task); if (barrier_cbs_tasks != NULL) { - for (i = 0; i < n_barrier_cbs; i++) { - if (barrier_cbs_tasks[i] != NULL) { - VERBOSE_PRINTK_STRING("Stopping rcu_torture_barrier_cbs task"); - kthread_stop(barrier_cbs_tasks[i]); - barrier_cbs_tasks[i] = NULL; - } - } + for (i = 0; i < n_barrier_cbs; i++) + torture_stop_kthread(rcu_torture_barrier_cbs, + barrier_cbs_tasks[i]); kfree(barrier_cbs_tasks); barrier_cbs_tasks = NULL; } @@ -1747,90 +1297,42 @@ rcu_torture_cleanup(void) { int i; - mutex_lock(&fullstop_mutex); rcutorture_record_test_transition(); - if (fullstop == FULLSTOP_SHUTDOWN) { - pr_warn(/* but going down anyway, so... */ - "Concurrent 'rmmod rcutorture' and shutdown illegal!\n"); - mutex_unlock(&fullstop_mutex); - schedule_timeout_uninterruptible(10); + if (torture_cleanup()) { if (cur_ops->cb_barrier != NULL) cur_ops->cb_barrier(); return; } - fullstop = FULLSTOP_RMMOD; - mutex_unlock(&fullstop_mutex); - unregister_reboot_notifier(&rcutorture_shutdown_nb); - rcu_torture_barrier_cleanup(); - rcu_torture_stall_cleanup(); - if (stutter_task) { - VERBOSE_PRINTK_STRING("Stopping rcu_torture_stutter task"); - kthread_stop(stutter_task); - } - stutter_task = NULL; - if (shuffler_task) { - VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task"); - kthread_stop(shuffler_task); - free_cpumask_var(shuffle_tmp_mask); - } - shuffler_task = NULL; - if (writer_task) { - VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task"); - kthread_stop(writer_task); - } - writer_task = NULL; + rcu_torture_barrier_cleanup(); + torture_stop_kthread(rcu_torture_stall, stall_task); + torture_stop_kthread(rcu_torture_writer, writer_task); if (reader_tasks) { - for (i = 0; i < nrealreaders; i++) { - if (reader_tasks[i]) { - VERBOSE_PRINTK_STRING( - "Stopping rcu_torture_reader task"); - kthread_stop(reader_tasks[i]); - } - reader_tasks[i] = NULL; - } + for (i = 0; i < nrealreaders; i++) + torture_stop_kthread(rcu_torture_reader, + reader_tasks[i]); kfree(reader_tasks); - reader_tasks = NULL; } rcu_torture_current = NULL; if (fakewriter_tasks) { for (i = 0; i < nfakewriters; i++) { - if (fakewriter_tasks[i]) { - VERBOSE_PRINTK_STRING( - "Stopping rcu_torture_fakewriter task"); - kthread_stop(fakewriter_tasks[i]); - } - fakewriter_tasks[i] = NULL; + torture_stop_kthread(rcu_torture_fakewriter, + fakewriter_tasks[i]); } kfree(fakewriter_tasks); fakewriter_tasks = NULL; } - if (stats_task) { - VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task"); - kthread_stop(stats_task); - } - stats_task = NULL; - - if (fqs_task) { - VERBOSE_PRINTK_STRING("Stopping rcu_torture_fqs task"); - kthread_stop(fqs_task); - } - fqs_task = NULL; + torture_stop_kthread(rcu_torture_stats, stats_task); + torture_stop_kthread(rcu_torture_fqs, fqs_task); if ((test_boost == 1 && cur_ops->can_boost) || test_boost == 2) { unregister_cpu_notifier(&rcutorture_cpu_nb); for_each_possible_cpu(i) rcutorture_booster_cleanup(i); } - if (shutdown_task != NULL) { - VERBOSE_PRINTK_STRING("Stopping rcu_torture_shutdown task"); - kthread_stop(shutdown_task); - } - shutdown_task = NULL; - rcu_torture_onoff_cleanup(); /* Wait for all RCU callbacks to fire. 
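The cleanup above collapses because torture_stop_kthread() bundles the report/kthread_stop()/NULL-the-pointer triple behind one call, just as torture_create_kthread() bundles create-and-check on the init side. A userspace sketch of why such helpers pay off, using GCC statement expressions as kernel macros do (these helper bodies are assumptions, not the kernel's):

#include <pthread.h>
#include <stdio.h>

/*
 * One-line create-and-check, mirroring torture_create_kthread():
 * report on failure and hand the error back for the caller's unwind path.
 */
#define torture_create_thread(fn, arg, tid_var) ({			\
	int __ret = pthread_create(&(tid_var), NULL, (fn), (arg));	\
	if (__ret)							\
		fprintf(stderr, "Failed to create " #fn "\n");		\
	__ret;								\
})

/* One-line stop: report and join, mirroring torture_stop_kthread(). */
#define torture_stop_thread(name, tid_var) do {			\
	fprintf(stderr, "Stopping " #name " task\n");			\
	pthread_join((tid_var), NULL);					\
} while (0)

static void *demo(void *arg) { (void)arg; return NULL; }

int main(void)
{
	pthread_t demo_tid;

	if (torture_create_thread(demo, NULL, demo_tid))
		return 1;
	torture_stop_thread(demo, demo_tid);
	return 0;
}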
*/ @@ -1841,8 +1343,7 @@ rcu_torture_cleanup(void) if (atomic_read(&n_rcu_torture_error) || n_rcu_torture_barrier_error) rcu_torture_print_module_parms(cur_ops, "End of test: FAILURE"); - else if (n_online_successes != n_online_attempts || - n_offline_successes != n_offline_attempts) + else if (torture_onoff_failures()) rcu_torture_print_module_parms(cur_ops, "End of test: RCU_HOTPLUG"); else @@ -1911,12 +1412,11 @@ rcu_torture_init(void) int i; int cpu; int firsterr = 0; - int retval; static struct rcu_torture_ops *torture_ops[] = { - &rcu_ops, &rcu_bh_ops, &srcu_ops, &sched_ops, + &rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops, }; - mutex_lock(&fullstop_mutex); + torture_init_begin(torture_type, verbose, &rcutorture_runnable); /* Process args and tell the world that the torturer is on the job. */ for (i = 0; i < ARRAY_SIZE(torture_ops); i++) { @@ -1931,7 +1431,7 @@ rcu_torture_init(void) for (i = 0; i < ARRAY_SIZE(torture_ops); i++) pr_alert(" %s", torture_ops[i]->name); pr_alert("\n"); - mutex_unlock(&fullstop_mutex); + torture_init_end(); return -EINVAL; } if (cur_ops->fqs == NULL && fqs_duration != 0) { @@ -1946,7 +1446,6 @@ rcu_torture_init(void) else nrealreaders = 2 * num_online_cpus(); rcu_torture_print_module_parms(cur_ops, "Start of test"); - fullstop = FULLSTOP_DONTSTOP; /* Set up the freelist. */ @@ -1982,108 +1481,61 @@ rcu_torture_init(void) /* Start up the kthreads. */ - VERBOSE_PRINTK_STRING("Creating rcu_torture_writer task"); - writer_task = kthread_create(rcu_torture_writer, NULL, - "rcu_torture_writer"); - if (IS_ERR(writer_task)) { - firsterr = PTR_ERR(writer_task); - VERBOSE_PRINTK_ERRSTRING("Failed to create writer"); - writer_task = NULL; + firsterr = torture_create_kthread(rcu_torture_writer, NULL, + writer_task); + if (firsterr) goto unwind; - } - wake_up_process(writer_task); fakewriter_tasks = kzalloc(nfakewriters * sizeof(fakewriter_tasks[0]), GFP_KERNEL); if (fakewriter_tasks == NULL) { - VERBOSE_PRINTK_ERRSTRING("out of memory"); + VERBOSE_TOROUT_ERRSTRING("out of memory"); firsterr = -ENOMEM; goto unwind; } for (i = 0; i < nfakewriters; i++) { - VERBOSE_PRINTK_STRING("Creating rcu_torture_fakewriter task"); - fakewriter_tasks[i] = kthread_run(rcu_torture_fakewriter, NULL, - "rcu_torture_fakewriter"); - if (IS_ERR(fakewriter_tasks[i])) { - firsterr = PTR_ERR(fakewriter_tasks[i]); - VERBOSE_PRINTK_ERRSTRING("Failed to create fakewriter"); - fakewriter_tasks[i] = NULL; + firsterr = torture_create_kthread(rcu_torture_fakewriter, + NULL, fakewriter_tasks[i]); + if (firsterr) goto unwind; - } } reader_tasks = kzalloc(nrealreaders * sizeof(reader_tasks[0]), GFP_KERNEL); if (reader_tasks == NULL) { - VERBOSE_PRINTK_ERRSTRING("out of memory"); + VERBOSE_TOROUT_ERRSTRING("out of memory"); firsterr = -ENOMEM; goto unwind; } for (i = 0; i < nrealreaders; i++) { - VERBOSE_PRINTK_STRING("Creating rcu_torture_reader task"); - reader_tasks[i] = kthread_run(rcu_torture_reader, NULL, - "rcu_torture_reader"); - if (IS_ERR(reader_tasks[i])) { - firsterr = PTR_ERR(reader_tasks[i]); - VERBOSE_PRINTK_ERRSTRING("Failed to create reader"); - reader_tasks[i] = NULL; + firsterr = torture_create_kthread(rcu_torture_reader, NULL, + reader_tasks[i]); + if (firsterr) goto unwind; - } } if (stat_interval > 0) { - VERBOSE_PRINTK_STRING("Creating rcu_torture_stats task"); - stats_task = kthread_run(rcu_torture_stats, NULL, - "rcu_torture_stats"); - if (IS_ERR(stats_task)) { - firsterr = PTR_ERR(stats_task); - VERBOSE_PRINTK_ERRSTRING("Failed to create stats"); - stats_task = 
NULL; + firsterr = torture_create_kthread(rcu_torture_stats, NULL, + stats_task); + if (firsterr) goto unwind; - } } if (test_no_idle_hz) { - rcu_idle_cpu = num_online_cpus() - 1; - - if (!alloc_cpumask_var(&shuffle_tmp_mask, GFP_KERNEL)) { - firsterr = -ENOMEM; - VERBOSE_PRINTK_ERRSTRING("Failed to alloc mask"); - goto unwind; - } - - /* Create the shuffler thread */ - shuffler_task = kthread_run(rcu_torture_shuffle, NULL, - "rcu_torture_shuffle"); - if (IS_ERR(shuffler_task)) { - free_cpumask_var(shuffle_tmp_mask); - firsterr = PTR_ERR(shuffler_task); - VERBOSE_PRINTK_ERRSTRING("Failed to create shuffler"); - shuffler_task = NULL; + firsterr = torture_shuffle_init(shuffle_interval * HZ); + if (firsterr) goto unwind; - } } if (stutter < 0) stutter = 0; if (stutter) { - /* Create the stutter thread */ - stutter_task = kthread_run(rcu_torture_stutter, NULL, - "rcu_torture_stutter"); - if (IS_ERR(stutter_task)) { - firsterr = PTR_ERR(stutter_task); - VERBOSE_PRINTK_ERRSTRING("Failed to create stutter"); - stutter_task = NULL; + firsterr = torture_stutter_init(stutter * HZ); + if (firsterr) goto unwind; - } } if (fqs_duration < 0) fqs_duration = 0; if (fqs_duration) { - /* Create the stutter thread */ - fqs_task = kthread_run(rcu_torture_fqs, NULL, - "rcu_torture_fqs"); - if (IS_ERR(fqs_task)) { - firsterr = PTR_ERR(fqs_task); - VERBOSE_PRINTK_ERRSTRING("Failed to create fqs"); - fqs_task = NULL; + /* Create the fqs thread */ + firsterr = torture_create_kthread(rcu_torture_fqs, NULL, fqs_task); + if (firsterr) goto unwind; - } } if (test_boost_interval < 1) test_boost_interval = 1; @@ -2097,49 +1549,31 @@ rcu_torture_init(void) for_each_possible_cpu(i) { if (cpu_is_offline(i)) continue; /* Heuristic: CPU can go offline. */ - retval = rcutorture_booster_init(i); - if (retval < 0) { - firsterr = retval; + firsterr = rcutorture_booster_init(i); + if (firsterr) goto unwind; - } } } - if (shutdown_secs > 0) { - shutdown_time = jiffies + shutdown_secs * HZ; - shutdown_task = kthread_create(rcu_torture_shutdown, NULL, - "rcu_torture_shutdown"); - if (IS_ERR(shutdown_task)) { - firsterr = PTR_ERR(shutdown_task); - VERBOSE_PRINTK_ERRSTRING("Failed to create shutdown"); - shutdown_task = NULL; - goto unwind; - } - wake_up_process(shutdown_task); - } - i = rcu_torture_onoff_init(); - if (i != 0) { - firsterr = i; + firsterr = torture_shutdown_init(shutdown_secs, rcu_torture_cleanup); + if (firsterr) goto unwind; - } - register_reboot_notifier(&rcutorture_shutdown_nb); - i = rcu_torture_stall_init(); - if (i != 0) { - firsterr = i; + firsterr = torture_onoff_init(onoff_holdoff * HZ, onoff_interval * HZ); + if (firsterr) goto unwind; - } - retval = rcu_torture_barrier_init(); - if (retval != 0) { - firsterr = retval; + firsterr = rcu_torture_stall_init(); + if (firsterr) + goto unwind; + firsterr = rcu_torture_barrier_init(); + if (firsterr) goto unwind; - } if (object_debug) rcu_test_debug_objects(); rcutorture_record_test_transition(); - mutex_unlock(&fullstop_mutex); + torture_init_end(); return 0; unwind: - mutex_unlock(&fullstop_mutex); + torture_init_end(); rcu_torture_cleanup(); return firsterr; } diff --git a/kernel/rcu/srcu.c b/kernel/rcu/srcu.c index 3318d8284384..c639556f3fa0 100644 --- a/kernel/rcu/srcu.c +++ b/kernel/rcu/srcu.c @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright (C) IBM Corporation, 2006 * Copyright (C) Fujitsu, 2012 @@ -36,8 +36,6 @@ #include <linux/delay.h> #include <linux/srcu.h> -#include <trace/events/rcu.h> - #include "rcu.h" /* @@ -398,7 +396,7 @@ void call_srcu(struct srcu_struct *sp, struct rcu_head *head, rcu_batch_queue(&sp->batch_queue, head); if (!sp->running) { sp->running = true; - schedule_delayed_work(&sp->work, 0); + queue_delayed_work(system_power_efficient_wq, &sp->work, 0); } spin_unlock_irqrestore(&sp->queue_lock, flags); } @@ -674,7 +672,8 @@ static void srcu_reschedule(struct srcu_struct *sp) } if (pending) - schedule_delayed_work(&sp->work, SRCU_INTERVAL); + queue_delayed_work(system_power_efficient_wq, + &sp->work, SRCU_INTERVAL); } /* diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c index 1254f312d024..d9efcc13008c 100644 --- a/kernel/rcu/tiny.c +++ b/kernel/rcu/tiny.c @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright IBM Corporation, 2008 * @@ -37,10 +37,6 @@ #include <linux/prefetch.h> #include <linux/ftrace_event.h> -#ifdef CONFIG_RCU_TRACE -#include <trace/events/rcu.h> -#endif /* #else #ifdef CONFIG_RCU_TRACE */ - #include "rcu.h" /* Forward declarations for tiny_plugin.h. */ diff --git a/kernel/rcu/tiny_plugin.h b/kernel/rcu/tiny_plugin.h index 280d06cae352..431528520562 100644 --- a/kernel/rcu/tiny_plugin.h +++ b/kernel/rcu/tiny_plugin.h @@ -14,8 +14,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright (c) 2010 Linaro * diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index b3d116cd072d..0c47e300210a 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright IBM Corporation, 2008 * @@ -58,8 +58,6 @@ #include <linux/suspend.h> #include "tree.h" -#include <trace/events/rcu.h> - #include "rcu.h" MODULE_ALIAS("rcutree"); @@ -837,7 +835,7 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp, * to the next. Only do this for the primary flavor of RCU. 
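Two complementary changes around here: ACCESS_ONCE() disappears from reads of jiffies, which is already declared volatile, and it reappears just below around rsp->n_force_qs_lh, a statistics counter that races by design but must not be torn or fused by the compiler. In post-C11 terms the closest portable idiom is a relaxed atomic; note that atomic_fetch_add is actually stronger than the kernel's volatile increment, which can still lose counts, so this is an analogy rather than an equivalent:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Lossy diagnostic counter: races are tolerated, torn accesses are not. */
static atomic_ulong n_force_qs_lh;

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++)
		/* Relaxed RMW: no ordering, but never torn or optimized away. */
		atomic_fetch_add_explicit(&n_force_qs_lh, 1,
					  memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("%lu\n", atomic_load_explicit(&n_force_qs_lh,
					     memory_order_relaxed));
	return 0;
}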
*/ if (rdp->rsp == rcu_state && - ULONG_CMP_GE(ACCESS_ONCE(jiffies), rdp->rsp->jiffies_resched)) { + ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) { rdp->rsp->jiffies_resched += 5; resched_cpu(rdp->cpu); } @@ -847,7 +845,7 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp, static void record_gp_stall_check_time(struct rcu_state *rsp) { - unsigned long j = ACCESS_ONCE(jiffies); + unsigned long j = jiffies; unsigned long j1; rsp->gp_start = j; @@ -1005,7 +1003,7 @@ static void check_cpu_stall(struct rcu_state *rsp, struct rcu_data *rdp) if (rcu_cpu_stall_suppress || !rcu_gp_in_progress(rsp)) return; - j = ACCESS_ONCE(jiffies); + j = jiffies; /* * Lots of memory barriers to reject false positives. @@ -1423,13 +1421,14 @@ static int rcu_gp_init(struct rcu_state *rsp) /* Advance to a new grace period and initialize state. */ record_gp_stall_check_time(rsp); - smp_wmb(); /* Record GP times before starting GP. */ - rsp->gpnum++; + /* Record GP times before starting GP, hence smp_store_release(). */ + smp_store_release(&rsp->gpnum, rsp->gpnum + 1); trace_rcu_grace_period(rsp->name, rsp->gpnum, TPS("start")); raw_spin_unlock_irq(&rnp->lock); /* Exclude any concurrent CPU-hotplug operations. */ mutex_lock(&rsp->onoff_mutex); + smp_mb__after_unlock_lock(); /* ->gpnum increment before GP! */ /* * Set the quiescent-state-needed bits in all the rcu_node @@ -1557,10 +1556,11 @@ static void rcu_gp_cleanup(struct rcu_state *rsp) } rnp = rcu_get_root(rsp); raw_spin_lock_irq(&rnp->lock); - smp_mb__after_unlock_lock(); + smp_mb__after_unlock_lock(); /* Order GP before ->completed update. */ rcu_nocb_gp_set(rnp, nocb); - rsp->completed = rsp->gpnum; /* Declare grace period done. */ + /* Declare grace period done. */ + ACCESS_ONCE(rsp->completed) = rsp->gpnum; trace_rcu_grace_period(rsp->name, rsp->completed, TPS("end")); rsp->fqs_state = RCU_GP_IDLE; rdp = this_cpu_ptr(rsp->rda); @@ -2304,7 +2304,7 @@ static void force_quiescent_state(struct rcu_state *rsp) if (rnp_old != NULL) raw_spin_unlock(&rnp_old->fqslock); if (ret) { - rsp->n_force_qs_lh++; + ACCESS_ONCE(rsp->n_force_qs_lh)++; return; } rnp_old = rnp; @@ -2316,7 +2316,7 @@ static void force_quiescent_state(struct rcu_state *rsp) smp_mb__after_unlock_lock(); raw_spin_unlock(&rnp_old->fqslock); if (ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_FQS) { - rsp->n_force_qs_lh++; + ACCESS_ONCE(rsp->n_force_qs_lh)++; raw_spin_unlock_irqrestore(&rnp_old->lock, flags); return; /* Someone beat us to it. */ } @@ -2639,6 +2639,58 @@ void synchronize_rcu_bh(void) } EXPORT_SYMBOL_GPL(synchronize_rcu_bh); +/** + * get_state_synchronize_rcu - Snapshot current RCU state + * + * Returns a cookie that is used by a later call to cond_synchronize_rcu() + * to determine whether or not a full grace period has elapsed in the + * meantime. + */ +unsigned long get_state_synchronize_rcu(void) +{ + /* + * Any prior manipulation of RCU-protected data must happen + * before the load from ->gpnum. + */ + smp_mb(); /* ^^^ */ + + /* + * Make sure this load happens before the purportedly + * time-consuming work between get_state_synchronize_rcu() + * and cond_synchronize_rcu(). + */ + return smp_load_acquire(&rcu_state->gpnum); +} +EXPORT_SYMBOL_GPL(get_state_synchronize_rcu); + +/** + * cond_synchronize_rcu - Conditionally wait for an RCU grace period + * + * @oldstate: return value from earlier call to get_state_synchronize_rcu() + * + * If a full RCU grace period has elapsed since the earlier call to + * get_state_synchronize_rcu(), just return. 
Otherwise, invoke + * synchronize_rcu() to wait for a full grace period. + * + * Yes, this function does not take counter wrap into account. But + * counter wrap is harmless. If the counter wraps, we have waited for + * more than 2 billion grace periods (and way more on a 64-bit system!), + * so waiting for one additional grace period should be just fine. + */ +void cond_synchronize_rcu(unsigned long oldstate) +{ + unsigned long newstate; + + /* + * Ensure that this load happens before any RCU-destructive + * actions the caller might carry out after we return. + */ + newstate = smp_load_acquire(&rcu_state->completed); + if (ULONG_CMP_GE(oldstate, newstate)) + synchronize_rcu(); +} +EXPORT_SYMBOL_GPL(cond_synchronize_rcu); + static int synchronize_sched_expedited_cpu_stop(void *data) { /* @@ -2880,7 +2932,7 @@ static int rcu_pending(int cpu) * non-NULL, store an indication of whether all callbacks are lazy. * (If there are no callbacks, all of them are deemed to be lazy.) */ -static int rcu_cpu_has_callbacks(int cpu, bool *all_lazy) +static int __maybe_unused rcu_cpu_has_callbacks(int cpu, bool *all_lazy) { bool al = true; bool hc = false; diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 8c19873f1ac9..75dc3c39a02a 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -13,8 +13,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright IBM Corporation, 2008 * diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 6e2ef4b2b920..962d1d589929 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -14,8 +14,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright Red Hat, 2009 * Copyright IBM Corporation, 2009 @@ -1586,11 +1586,13 @@ static void rcu_prepare_kthreads(int cpu) * Because we do not have RCU_FAST_NO_HZ, just check whether this CPU needs * any flavor of RCU. */ +#ifndef CONFIG_RCU_NOCB_CPU_ALL int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies) { *delta_jiffies = ULONG_MAX; return rcu_cpu_has_callbacks(cpu, NULL); } +#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ /* * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up @@ -1656,7 +1658,7 @@ extern int tick_nohz_active; * only if it has been a while since the last time we did so. Afterwards, * if there are any callbacks ready for immediate invocation, return true. */ -static bool rcu_try_advance_all_cbs(void) +static bool __maybe_unused rcu_try_advance_all_cbs(void) { bool cbs_ready = false; struct rcu_data *rdp; @@ -1696,6 +1698,7 @@ static bool rcu_try_advance_all_cbs(void) * * The caller must have disabled interrupts.
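The get_state_synchronize_rcu()/cond_synchronize_rcu() pair added above lets a caller snapshot the grace-period counter, do other work, and later pay for synchronize_rcu() only if no grace period elapsed in the meantime, with smp_load_acquire()/smp_store_release() supplying the ordering. A C11 sketch of the cookie pattern, with a bare generation counter standing in for the grace-period machinery (and without the wrap-safe ULONG_CMP_GE() the kernel uses):

#include <stdatomic.h>
#include <stdio.h>

static atomic_ulong completed;	/* generations finished so far */

/* Snapshot: return the generation whose completion we must later see. */
static unsigned long get_state(void)
{
	atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() analogue */
	return atomic_load_explicit(&completed, memory_order_acquire) + 1;
}

/* Pay for a generation only if the snapshotted one has not completed. */
static void cond_synchronize(unsigned long oldstate)
{
	unsigned long newstate =
		atomic_load_explicit(&completed, memory_order_acquire);

	if (oldstate > newstate)
		/* synchronize_rcu() analogue: force a generation. */
		atomic_fetch_add_explicit(&completed, 1,
					  memory_order_release);
}

int main(void)
{
	unsigned long cookie = get_state();

	/* ... purportedly time-consuming work here ... */
	cond_synchronize(cookie);	/* skipped if a generation elapsed */
	printf("completed=%lu\n", atomic_load(&completed));
	return 0;
}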
*/ +#ifndef CONFIG_RCU_NOCB_CPU_ALL int rcu_needs_cpu(int cpu, unsigned long *dj) { struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); @@ -1726,6 +1729,7 @@ int rcu_needs_cpu(int cpu, unsigned long *dj) } return 0; } +#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ /* * Prepare a CPU for idle from an RCU perspective. The first major task @@ -1739,6 +1743,7 @@ int rcu_needs_cpu(int cpu, unsigned long *dj) */ static void rcu_prepare_for_idle(int cpu) { +#ifndef CONFIG_RCU_NOCB_CPU_ALL struct rcu_data *rdp; struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); struct rcu_node *rnp; @@ -1790,6 +1795,7 @@ static void rcu_prepare_for_idle(int cpu) rcu_accelerate_cbs(rsp, rnp, rdp); raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */ } +#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ } /* @@ -1799,11 +1805,12 @@ static void rcu_prepare_for_idle(int cpu) */ static void rcu_cleanup_after_idle(int cpu) { - +#ifndef CONFIG_RCU_NOCB_CPU_ALL if (rcu_is_nocb_cpu(cpu)) return; if (rcu_try_advance_all_cbs()) invoke_rcu_core(); +#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ } /* @@ -2101,6 +2108,7 @@ static void rcu_init_one_nocb(struct rcu_node *rnp) init_waitqueue_head(&rnp->nocb_gp_wq[1]); } +#ifndef CONFIG_RCU_NOCB_CPU_ALL /* Is the specified CPU a no-CBs CPU? */ bool rcu_is_nocb_cpu(int cpu) { @@ -2108,6 +2116,7 @@ bool rcu_is_nocb_cpu(int cpu) return cpumask_test_cpu(cpu, rcu_nocb_mask); return false; } +#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ /* * Enqueue the specified string of rcu_head structures onto the specified @@ -2893,7 +2902,7 @@ static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp) * CPU unless the grace period has extended for too long. * * This code relies on the fact that all NO_HZ_FULL CPUs are also - * CONFIG_RCU_NOCB_CPUs. + * CONFIG_RCU_NOCB_CPU CPUs. */ static bool rcu_nohz_full_cpu(struct rcu_state *rsp) { diff --git a/kernel/rcu/tree_trace.c b/kernel/rcu/tree_trace.c index 4def475336d4..5cdc62e1beeb 100644 --- a/kernel/rcu/tree_trace.c +++ b/kernel/rcu/tree_trace.c @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. * * Copyright IBM Corporation, 2008 * @@ -273,7 +273,7 @@ static void print_one_rcu_state(struct seq_file *m, struct rcu_state *rsp) seq_printf(m, "nfqs=%lu/nfqsng=%lu(%lu) fqlh=%lu oqlen=%ld/%ld\n", rsp->n_force_qs, rsp->n_force_qs_ngp, rsp->n_force_qs - rsp->n_force_qs_ngp, - rsp->n_force_qs_lh, rsp->qlen_lazy, rsp->qlen); + ACCESS_ONCE(rsp->n_force_qs_lh), rsp->qlen_lazy, rsp->qlen); for (rnp = &rsp->node[0]; rnp - &rsp->node[0] < rcu_num_nodes; rnp++) { if (rnp->level != level) { seq_puts(m, "\n"); diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c index c54609faf233..4c0a9b0af469 100644 --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c @@ -12,8 +12,8 @@ * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html.
* * Copyright IBM Corporation, 2001 * @@ -49,7 +49,6 @@ #include <linux/module.h> #define CREATE_TRACE_POINTS -#include <trace/events/rcu.h> #include "rcu.h" diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile index 9a95c8c2af2a..ab32b7b0db5c 100644 --- a/kernel/sched/Makefile +++ b/kernel/sched/Makefile @@ -13,7 +13,7 @@ endif obj-y += core.o proc.o clock.o cputime.o obj-y += idle_task.o fair.o rt.o deadline.o stop_task.o -obj-y += wait.o completion.o +obj-y += wait.o completion.o idle.o obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o obj-$(CONFIG_SCHED_AUTOGROUP) += auto_group.o obj-$(CONFIG_SCHEDSTATS) += stats.o diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c index 4a073539c58e..e73efba98301 100644 --- a/kernel/sched/auto_group.c +++ b/kernel/sched/auto_group.c @@ -203,7 +203,7 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice) struct autogroup *ag; int err; - if (nice < -20 || nice > 19) + if (nice < MIN_NICE || nice > MAX_NICE) return -EINVAL; err = security_task_setnice(current, nice); diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c index 43c2bcc35761..b30a2924ef14 100644 --- a/kernel/sched/clock.c +++ b/kernel/sched/clock.c @@ -301,14 +301,14 @@ u64 sched_clock_cpu(int cpu) if (unlikely(!sched_clock_running)) return 0ull; - preempt_disable(); + preempt_disable_notrace(); scd = cpu_sdc(cpu); if (cpu != smp_processor_id()) clock = sched_clock_remote(scd); else clock = sched_clock_local(scd); - preempt_enable(); + preempt_enable_notrace(); return clock; } diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 6edbef296ece..ae365aaa8181 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1745,8 +1745,10 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p) p->numa_scan_seq = p->mm ? 
p->mm->numa_scan_seq : 0; p->numa_scan_period = sysctl_numa_balancing_scan_delay; p->numa_work.next = &p->numa_work; - p->numa_faults = NULL; - p->numa_faults_buffer = NULL; + p->numa_faults_memory = NULL; + p->numa_faults_buffer_memory = NULL; + p->last_task_numa_placement = 0; + p->last_sum_exec_runtime = 0; INIT_LIST_HEAD(&p->numa_entry); p->numa_group = NULL; @@ -2149,8 +2151,6 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev) if (mm) mmdrop(mm); if (unlikely(prev_state == TASK_DEAD)) { - task_numa_free(prev); - if (prev->sched_class->task_dead) prev->sched_class->task_dead(prev); @@ -2167,13 +2167,6 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev) #ifdef CONFIG_SMP -/* assumes rq->lock is held */ -static inline void pre_schedule(struct rq *rq, struct task_struct *prev) -{ - if (prev->sched_class->pre_schedule) - prev->sched_class->pre_schedule(rq, prev); -} - /* rq->lock is NOT held, but preemption is disabled */ static inline void post_schedule(struct rq *rq) { @@ -2191,10 +2184,6 @@ static inline void post_schedule(struct rq *rq) #else -static inline void pre_schedule(struct rq *rq, struct task_struct *p) -{ -} - static inline void post_schedule(struct rq *rq) { } @@ -2510,8 +2499,13 @@ void __kprobes preempt_count_add(int val) DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >= PREEMPT_MASK - 10); #endif - if (preempt_count() == val) - trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1)); + if (preempt_count() == val) { + unsigned long ip = get_parent_ip(CALLER_ADDR1); +#ifdef CONFIG_DEBUG_PREEMPT + current->preempt_disable_ip = ip; +#endif + trace_preempt_off(CALLER_ADDR0, ip); + } } EXPORT_SYMBOL(preempt_count_add); @@ -2554,6 +2548,13 @@ static noinline void __schedule_bug(struct task_struct *prev) print_modules(); if (irqs_disabled()) print_irqtrace_events(prev); +#ifdef CONFIG_DEBUG_PREEMPT + if (in_atomic_preempt_off()) { + pr_err("Preemption disabled at:"); + print_ip_sym(current->preempt_disable_ip); + pr_cont("\n"); + } +#endif dump_stack(); add_taint(TAINT_WARN, LOCKDEP_STILL_OK); } @@ -2577,36 +2578,34 @@ static inline void schedule_debug(struct task_struct *prev) schedstat_inc(this_rq(), sched_count); } -static void put_prev_task(struct rq *rq, struct task_struct *prev) -{ - if (prev->on_rq || rq->skip_clock_update < 0) - update_rq_clock(rq); - prev->sched_class->put_prev_task(rq, prev); -} - /* * Pick up the highest-prio task: */ static inline struct task_struct * -pick_next_task(struct rq *rq) +pick_next_task(struct rq *rq, struct task_struct *prev) { - const struct sched_class *class; + const struct sched_class *class = &fair_sched_class; struct task_struct *p; /* * Optimization: we know that if all tasks are in * the fair class we can call that function directly: */ - if (likely(rq->nr_running == rq->cfs.h_nr_running)) { - p = fair_sched_class.pick_next_task(rq); - if (likely(p)) + if (likely(prev->sched_class == class && + rq->nr_running == rq->cfs.h_nr_running)) { + p = fair_sched_class.pick_next_task(rq, prev); + if (likely(p && p != RETRY_TASK)) return p; } +again: for_each_class(class) { - p = class->pick_next_task(rq); - if (p) + p = class->pick_next_task(rq, prev); + if (p) { + if (unlikely(p == RETRY_TASK)) + goto again; return p; + } } BUG(); /* the idle class will always have a runnable task */ @@ -2700,13 +2699,10 @@ need_resched: switch_count = &prev->nvcsw; } - pre_schedule(rq, prev); - - if (unlikely(!rq->nr_running)) - idle_balance(cpu, rq); + if (prev->on_rq || rq->skip_clock_update < 0) 
+ update_rq_clock(rq); - put_prev_task(rq, prev); - next = pick_next_task(rq); + next = pick_next_task(rq, prev); clear_tsk_need_resched(prev); clear_preempt_need_resched(); rq->skip_clock_update = 0; @@ -2908,7 +2904,8 @@ EXPORT_SYMBOL(sleep_on_timeout); * This function changes the 'effective' priority of a task. It does * not touch ->normal_prio like __setscheduler(). * - * Used by the rt_mutex code to implement priority inheritance logic. + * Used by the rt_mutex code to implement priority inheritance + * logic. Call site only calls if the priority of the task changed. */ void rt_mutex_setprio(struct task_struct *p, int prio) { @@ -2998,7 +2995,7 @@ void set_user_nice(struct task_struct *p, long nice) unsigned long flags; struct rq *rq; - if (TASK_NICE(p) == nice || nice < -20 || nice > 19) + if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE) return; /* * We have to be careful, if called from sys_setpriority(), @@ -3076,11 +3073,11 @@ SYSCALL_DEFINE1(nice, int, increment) if (increment > 40) increment = 40; - nice = TASK_NICE(current) + increment; - if (nice < -20) - nice = -20; - if (nice > 19) - nice = 19; + nice = task_nice(current) + increment; + if (nice < MIN_NICE) + nice = MIN_NICE; + if (nice > MAX_NICE) + nice = MAX_NICE; if (increment < 0 && !can_nice(current, nice)) return -EPERM; @@ -3109,18 +3106,6 @@ int task_prio(const struct task_struct *p) } /** - * task_nice - return the nice value of a given task. - * @p: the task in question. - * - * Return: The nice value [ -20 ... 0 ... 19 ]. - */ -int task_nice(const struct task_struct *p) -{ - return TASK_NICE(p); -} -EXPORT_SYMBOL(task_nice); - -/** * idle_cpu - is a given cpu idle currently? * @cpu: the processor in question. * @@ -3189,9 +3174,8 @@ __setparam_dl(struct task_struct *p, const struct sched_attr *attr) dl_se->dl_new = 1; } -/* Actually do priority change: must hold pi & rq lock. */ -static void __setscheduler(struct rq *rq, struct task_struct *p, - const struct sched_attr *attr) +static void __setscheduler_params(struct task_struct *p, + const struct sched_attr *attr) { int policy = attr->sched_policy; @@ -3211,9 +3195,21 @@ static void __setscheduler(struct rq *rq, struct task_struct *p, * getparam()/getattr() don't report silly values for !rt tasks. */ p->rt_priority = attr->sched_priority; - p->normal_prio = normal_prio(p); - p->prio = rt_mutex_getprio(p); + set_load_weight(p); +} + +/* Actually do priority change: must hold pi & rq lock. */ +static void __setscheduler(struct rq *rq, struct task_struct *p, + const struct sched_attr *attr) +{ + __setscheduler_params(p, attr); + + /* + * If we get here, there was no pi waiters boosting the + * task. It is safe to use the normal prio. + */ + p->prio = normal_prio(p); if (dl_prio(p->prio)) p->sched_class = &dl_sched_class; @@ -3221,8 +3217,6 @@ static void __setscheduler(struct rq *rq, struct task_struct *p, p->sched_class = &rt_sched_class; else p->sched_class = &fair_sched_class; - - set_load_weight(p); } static void @@ -3275,6 +3269,8 @@ static int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user) { + int newprio = dl_policy(attr->sched_policy) ? 
MAX_DL_PRIO - 1 : + MAX_RT_PRIO - 1 - attr->sched_priority; int retval, oldprio, oldpolicy = -1, on_rq, running; int policy = attr->sched_policy; unsigned long flags; @@ -3319,7 +3315,7 @@ recheck: */ if (user && !capable(CAP_SYS_NICE)) { if (fair_policy(policy)) { - if (attr->sched_nice < TASK_NICE(p) && + if (attr->sched_nice < task_nice(p) && !can_nice(p, attr->sched_nice)) return -EPERM; } @@ -3338,12 +3334,21 @@ recheck: return -EPERM; } + /* + * Can't set/change SCHED_DEADLINE policy at all for now + * (safest behavior); in the future we would like to allow + * unprivileged DL tasks to increase their relative deadline + * or reduce their runtime (both ways reducing utilization) + */ + if (dl_policy(policy)) + return -EPERM; + /* * Treat SCHED_IDLE as nice 20. Only allow a switch to * SCHED_NORMAL if the RLIMIT_NICE would normally permit it. */ if (p->policy == SCHED_IDLE && policy != SCHED_IDLE) { - if (!can_nice(p, TASK_NICE(p))) + if (!can_nice(p, task_nice(p))) return -EPERM; } @@ -3380,16 +3385,18 @@ recheck: } /* - * If not changing anything there's no need to proceed further: + * If not changing anything there's no need to proceed further, + * but store a possible modification of reset_on_fork. */ if (unlikely(policy == p->policy)) { - if (fair_policy(policy) && attr->sched_nice != TASK_NICE(p)) + if (fair_policy(policy) && attr->sched_nice != task_nice(p)) goto change; if (rt_policy(policy) && attr->sched_priority != p->rt_priority) goto change; if (dl_policy(policy)) goto change; + p->sched_reset_on_fork = reset_on_fork; task_rq_unlock(rq, p, &flags); return 0; } @@ -3443,6 +3450,24 @@ change: return -EBUSY; } + p->sched_reset_on_fork = reset_on_fork; + oldprio = p->prio; + + /* + * Special case for priority boosted tasks. + * + * If the new priority is lower or equal (user space view) + * than the current (boosted) priority, we just store the new + * normal parameters and do not touch the scheduler class and + * the runqueue. This will be done when the task deboost + * itself. + */ + if (rt_mutex_check_prio(p, newprio)) { + __setscheduler_params(p, attr); + task_rq_unlock(rq, p, &flags); + return 0; + } + on_rq = p->on_rq; running = task_current(rq, p); if (on_rq) @@ -3450,16 +3475,18 @@ change: if (running) p->sched_class->put_prev_task(rq, p); - p->sched_reset_on_fork = reset_on_fork; - - oldprio = p->prio; prev_class = p->sched_class; __setscheduler(rq, p, attr); if (running) p->sched_class->set_curr_task(rq); - if (on_rq) - enqueue_task(rq, p, 0); + if (on_rq) { + /* + * We enqueue to tail when the priority of a task is + * increased (user space view). + */ + enqueue_task(rq, p, oldprio <= p->prio ? ENQUEUE_HEAD : 0); + } check_class_changed(rq, p, prev_class, oldprio); task_rq_unlock(rq, p, &flags); @@ -3615,7 +3642,7 @@ static int sched_copy_attr(struct sched_attr __user *uattr, * XXX: do we want to be lenient like existing syscalls; or do we want * to be strict and return an error on out-of-bounds values? 
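The rt_mutex_check_prio() special case above defers the expensive part of a setscheduler call for priority-boosted tasks: if the requested priority would not deboost the task, only the normal parameters are stored, and the class/runqueue update happens later when the task deboosts itself. A minimal userspace sketch of that decision, assuming toy stand-ins for task_struct and the PI machinery (all names here are invented; lower numeric prio means higher priority, as in the kernel):

#include <stdbool.h>
#include <stdio.h>

struct toy_task {
	int prio;		/* effective (possibly boosted) priority */
	int normal_prio;	/* priority implied by policy/params alone */
};

/* Plays the role of rt_mutex_check_prio(): true if @newprio keeps the boost. */
static bool keeps_boost(struct toy_task *p, int newprio)
{
	return newprio >= p->prio;	/* lower or equal, user space view */
}

static void toy_setscheduler(struct toy_task *p, int newprio)
{
	p->normal_prio = newprio;	/* __setscheduler_params() analogue */
	if (keeps_boost(p, newprio))
		return;			/* deboost path applies it later */
	p->prio = newprio;		/* __setscheduler() analogue */
}

int main(void)
{
	struct toy_task t = { .prio = 10, .normal_prio = 50 };	/* boosted */

	toy_setscheduler(&t, 40);	/* stored only, boost stays in effect */
	printf("effective=%d normal=%d\n", t.prio, t.normal_prio);
	toy_setscheduler(&t, 5);	/* beats the boost, applied right away */
	printf("effective=%d normal=%d\n", t.prio, t.normal_prio);
	return 0;
}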
*/ - attr->sched_nice = clamp(attr->sched_nice, -20, 19); + attr->sched_nice = clamp(attr->sched_nice, MIN_NICE, MAX_NICE); out: return ret; @@ -3836,7 +3863,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr, else if (task_has_rt_policy(p)) attr.sched_priority = p->rt_priority; else - attr.sched_nice = TASK_NICE(p); + attr.sched_nice = task_nice(p); rcu_read_unlock(); @@ -4474,6 +4501,7 @@ void init_idle(struct task_struct *idle, int cpu) rcu_read_unlock(); rq->curr = rq->idle = idle; + idle->on_rq = 1; #if defined(CONFIG_SMP) idle->on_cpu = 1; #endif @@ -4712,6 +4740,22 @@ static void calc_load_migrate(struct rq *rq) atomic_long_add(delta, &calc_load_tasks); } +static void put_prev_task_fake(struct rq *rq, struct task_struct *prev) +{ +} + +static const struct sched_class fake_sched_class = { + .put_prev_task = put_prev_task_fake, +}; + +static struct task_struct fake_task = { + /* + * Avoid pull_{rt,dl}_task() + */ + .prio = MAX_PRIO + 1, + .sched_class = &fake_sched_class, +}; + /* * Migrate all tasks from the rq, sleeping tasks will be migrated by * try_to_wake_up()->select_task_rq(). @@ -4752,7 +4796,7 @@ static void migrate_tasks(unsigned int dead_cpu) if (rq->nr_running == 1) break; - next = pick_next_task(rq); + next = pick_next_task(rq, &fake_task); BUG_ON(!next); next->sched_class->put_prev_task(rq, next); @@ -4842,7 +4886,7 @@ set_table_entry(struct ctl_table *entry, static struct ctl_table * sd_alloc_ctl_domain_table(struct sched_domain *sd) { - struct ctl_table *table = sd_alloc_ctl_entry(13); + struct ctl_table *table = sd_alloc_ctl_entry(14); if (table == NULL) return NULL; @@ -4870,9 +4914,12 @@ sd_alloc_ctl_domain_table(struct sched_domain *sd) sizeof(int), 0644, proc_dointvec_minmax, false); set_table_entry(&table[10], "flags", &sd->flags, sizeof(int), 0644, proc_dointvec_minmax, false); - set_table_entry(&table[11], "name", sd->name, + set_table_entry(&table[11], "max_newidle_lb_cost", + &sd->max_newidle_lb_cost, + sizeof(long), 0644, proc_doulongvec_minmax, false); + set_table_entry(&table[12], "name", sd->name, CORENAME_MAX_SIZE, 0444, proc_dostring, false); - /* &table[12] is terminator */ + /* &table[13] is terminator */ return table; } @@ -6849,7 +6896,6 @@ void __init sched_init(void) rq->rt.rt_runtime = def_rt_bandwidth.rt_runtime; #ifdef CONFIG_RT_GROUP_SCHED - INIT_LIST_HEAD(&rq->leaf_rt_rq_list); init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL); #endif @@ -6938,7 +6984,8 @@ void __might_sleep(const char *file, int line, int preempt_offset) static unsigned long prev_jiffy; /* ratelimiting */ rcu_sleep_check(); /* WARN_ON_ONCE() by default, no rate limit reqd. 
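The fake_task/fake_sched_class pair above is a null-object trick: migrate_tasks() can hand pick_next_task() a dummy previous task whose put_prev_task() is a no-op, so the common code path needs no NULL checks for the dead-CPU case. A self-contained sketch of the same pattern (types and names invented for illustration):

#include <stdio.h>

struct rq;
struct toy_task;

struct toy_class {
	void (*put_prev)(struct rq *rq, struct toy_task *prev);
};

struct toy_task {
	int prio;
	const struct toy_class *class;
};

static void put_prev_noop(struct rq *rq, struct toy_task *prev) { }

static const struct toy_class fake_class = { .put_prev = put_prev_noop };

/* Impossible priority keeps balancing logic from taking the dummy seriously. */
static struct toy_task fake_prev = { .prio = 9999, .class = &fake_class };

static void toy_pick_next(struct rq *rq, struct toy_task *prev)
{
	prev->class->put_prev(rq, prev);	/* safe even for the dummy */
}

int main(void)
{
	toy_pick_next(NULL, &fake_prev);
	puts("picked next task without special-casing the dead CPU");
	return 0;
}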
*/ - if ((preempt_count_equals(preempt_offset) && !irqs_disabled()) || + if ((preempt_count_equals(preempt_offset) && !irqs_disabled() && + !is_idle_task(current)) || system_state != SYSTEM_RUNNING || oops_in_progress) return; if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy) @@ -6956,6 +7003,13 @@ void __might_sleep(const char *file, int line, int preempt_offset) debug_show_held_locks(current); if (irqs_disabled()) print_irqtrace_events(current); +#ifdef CONFIG_DEBUG_PREEMPT + if (!preempt_count_equals(preempt_offset)) { + pr_err("Preemption disabled at:"); + print_ip_sym(current->preempt_disable_ip); + pr_cont("\n"); + } +#endif dump_stack(); } EXPORT_SYMBOL(__might_sleep); @@ -7009,7 +7063,7 @@ void normalize_rt_tasks(void) * Renice negative nice level userspace * tasks back to 0: */ - if (TASK_NICE(p) < 0 && p->mm) + if (task_nice(p) < 0 && p->mm) set_user_nice(p, 0); continue; } diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c index 99947919e30b..58624a65f124 100644 --- a/kernel/sched/cputime.c +++ b/kernel/sched/cputime.c @@ -142,7 +142,7 @@ void account_user_time(struct task_struct *p, cputime_t cputime, p->utimescaled += cputime_scaled; account_group_user_time(p, cputime); - index = (TASK_NICE(p) > 0) ? CPUTIME_NICE : CPUTIME_USER; + index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER; /* Add user time to cpustat. */ task_group_account_field(p, index, (__force u64) cputime); @@ -169,7 +169,7 @@ static void account_guest_time(struct task_struct *p, cputime_t cputime, p->gtime += cputime; /* Add guest time to cpustat. */ - if (TASK_NICE(p) > 0) { + if (task_nice(p) > 0) { cpustat[CPUTIME_NICE] += (__force u64) cputime; cpustat[CPUTIME_GUEST_NICE] += (__force u64) cputime; } else { diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index 6e79b3faa4cd..27ef40925525 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -210,6 +210,16 @@ static inline int has_pushable_dl_tasks(struct rq *rq) static int push_dl_task(struct rq *rq); +static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev) +{ + return dl_task(prev); +} + +static inline void set_post_schedule(struct rq *rq) +{ + rq->post_schedule = has_pushable_dl_tasks(rq); +} + #else static inline @@ -232,6 +242,19 @@ void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq) { } +static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev) +{ + return false; +} + +static inline int pull_dl_task(struct rq *rq) +{ + return 0; +} + +static inline void set_post_schedule(struct rq *rq) +{ +} #endif /* CONFIG_SMP */ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags); @@ -586,8 +609,8 @@ static void update_curr_dl(struct rq *rq) * approach need further study. 
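The need_pull_dl_task()/pull_dl_task()/set_post_schedule() helpers above follow the kernel's usual stub pattern: a real implementation under CONFIG_SMP and empty inline twins in the #else branch, so callers such as pick_next_task_dl() stay free of #ifdefs. A compile-anywhere sketch of the idiom (names invented):

#include <stdbool.h>
#include <stdio.h>

struct toy_rq { int nr_pushable; };

#ifdef CONFIG_SMP
static bool need_pull(struct toy_rq *rq) { return rq->nr_pushable > 0; }
static int do_pull(struct toy_rq *rq) { return 1; /* pretend we pulled one */ }
#else
/* Uniprocessor stubs: same signatures, compile to nothing at the call site. */
static inline bool need_pull(struct toy_rq *rq) { return false; }
static inline int do_pull(struct toy_rq *rq) { return 0; }
#endif

int main(void)
{
	struct toy_rq rq = { .nr_pushable = 1 };

	if (need_pull(&rq))			/* no #ifdef needed here */
		printf("pulled %d task(s)\n", do_pull(&rq));
	else
		puts("nothing to pull");
	return 0;
}

Build with -DCONFIG_SMP to exercise the first variant; without it the inline stubs let the compiler drop the whole branch.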
*/ delta_exec = rq_clock_task(rq) - curr->se.exec_start; - if (unlikely((s64)delta_exec < 0)) - delta_exec = 0; + if (unlikely((s64)delta_exec <= 0)) + return; schedstat_set(curr->se.statistics.exec_max, max(curr->se.statistics.exec_max, delta_exec)); @@ -942,6 +965,8 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p) resched_task(rq->curr); } +static int pull_dl_task(struct rq *this_rq); + #endif /* CONFIG_SMP */ /* @@ -988,7 +1013,7 @@ static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq, return rb_entry(left, struct sched_dl_entity, rb_node); } -struct task_struct *pick_next_task_dl(struct rq *rq) +struct task_struct *pick_next_task_dl(struct rq *rq, struct task_struct *prev) { struct sched_dl_entity *dl_se; struct task_struct *p; @@ -996,9 +1021,20 @@ struct task_struct *pick_next_task_dl(struct rq *rq) dl_rq = &rq->dl; + if (need_pull_dl_task(rq, prev)) + pull_dl_task(rq); + /* + * When prev is DL, we may throttle it in put_prev_task(). + * So, we update time before we check for dl_nr_running. + */ + if (prev->sched_class == &dl_sched_class) + update_curr_dl(rq); + if (unlikely(!dl_rq->dl_nr_running)) return NULL; + put_prev_task(rq, prev); + dl_se = pick_next_dl_entity(rq, dl_rq); BUG_ON(!dl_se); @@ -1013,9 +1049,7 @@ struct task_struct *pick_next_task_dl(struct rq *rq) start_hrtick_dl(rq, p); #endif -#ifdef CONFIG_SMP - rq->post_schedule = has_pushable_dl_tasks(rq); -#endif /* CONFIG_SMP */ + set_post_schedule(rq); return p; } @@ -1424,13 +1458,6 @@ skip: return ret; } -static void pre_schedule_dl(struct rq *rq, struct task_struct *prev) -{ - /* Try to pull other tasks here */ - if (dl_task(prev)) - pull_dl_task(rq); -} - static void post_schedule_dl(struct rq *rq) { push_dl_tasks(rq); @@ -1558,7 +1585,7 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p) if (unlikely(p->dl.dl_throttled)) return; - if (p->on_rq || rq->curr != p) { + if (p->on_rq && rq->curr != p) { #ifdef CONFIG_SMP if (rq->dl.overloaded && push_dl_task(rq) && rq != task_rq(p)) /* Only reschedule if pushing failed */ @@ -1623,7 +1650,6 @@ const struct sched_class dl_sched_class = { .set_cpus_allowed = set_cpus_allowed_dl, .rq_online = rq_online_dl, .rq_offline = rq_offline_dl, - .pre_schedule = pre_schedule_dl, .post_schedule = post_schedule_dl, .task_woken = task_woken_dl, #endif diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index dd52e7ffb10e..f3344c31632a 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -321,6 +321,7 @@ do { \ P(sched_goidle); #ifdef CONFIG_SMP P64(avg_idle); + P64(max_idle_balance_cost); #endif P(ttwu_count); @@ -533,15 +534,15 @@ static void sched_show_numa(struct task_struct *p, struct seq_file *m) unsigned long nr_faults = -1; int cpu_current, home_node; - if (p->numa_faults) - nr_faults = p->numa_faults[2*node + i]; + if (p->numa_faults_memory) + nr_faults = p->numa_faults_memory[2*node + i]; cpu_current = !i ? 
(task_node(p) == node) : (pol && node_isset(node, pol->v.nodes)); home_node = (p->numa_preferred_nid == node); - SEQ_printf(m, "numa_faults, %d, %d, %d, %d, %ld\n", + SEQ_printf(m, "numa_faults_memory, %d, %d, %d, %d, %ld\n", i, node, cpu_current, home_node, nr_faults); } } diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 9b4c4f320130..7e9bd0b1fa9e 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -322,13 +322,13 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq) list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list) /* Do the two (enqueued) entities belong to the same group ? */ -static inline int +static inline struct cfs_rq * is_same_group(struct sched_entity *se, struct sched_entity *pse) { if (se->cfs_rq == pse->cfs_rq) - return 1; + return se->cfs_rq; - return 0; + return NULL; } static inline struct sched_entity *parent_entity(struct sched_entity *se) @@ -336,17 +336,6 @@ static inline struct sched_entity *parent_entity(struct sched_entity *se) return se->parent; } -/* return depth at which a sched entity is present in the hierarchy */ -static inline int depth_se(struct sched_entity *se) -{ - int depth = 0; - - for_each_sched_entity(se) - depth++; - - return depth; -} - static void find_matching_se(struct sched_entity **se, struct sched_entity **pse) { @@ -360,8 +349,8 @@ find_matching_se(struct sched_entity **se, struct sched_entity **pse) */ /* First walk up until both entities are at same depth */ - se_depth = depth_se(*se); - pse_depth = depth_se(*pse); + se_depth = (*se)->depth; + pse_depth = (*pse)->depth; while (se_depth > pse_depth) { se_depth--; @@ -426,12 +415,6 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq) #define for_each_leaf_cfs_rq(rq, cfs_rq) \ for (cfs_rq = &rq->cfs; cfs_rq; cfs_rq = NULL) -static inline int -is_same_group(struct sched_entity *se, struct sched_entity *pse) -{ - return 1; -} - static inline struct sched_entity *parent_entity(struct sched_entity *se) { return NULL; @@ -819,14 +802,6 @@ unsigned int sysctl_numa_balancing_scan_size = 256; /* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */ unsigned int sysctl_numa_balancing_scan_delay = 1000; -/* - * After skipping a page migration on a shared page, skip N more numa page - * migrations unconditionally. This reduces the number of NUMA migrations - * in shared memory workloads, and has the effect of pulling tasks towards - * where their memory lives, over pulling the memory towards the task. - */ -unsigned int sysctl_numa_balancing_migrate_deferred = 16; - static unsigned int task_nr_scan_windows(struct task_struct *p) { unsigned long rss = 0; @@ -893,10 +868,26 @@ struct numa_group { struct list_head task_list; struct rcu_head rcu; + nodemask_t active_nodes; unsigned long total_faults; + /* + * Faults_cpu is used to decide whether memory should move + * towards the CPU. As a consequence, these stats are weighted + * more by CPU use than by memory faults. + */ + unsigned long *faults_cpu; unsigned long faults[0]; }; +/* Shared or private faults. */ +#define NR_NUMA_HINT_FAULT_TYPES 2 + +/* Memory and CPU locality */ +#define NR_NUMA_HINT_FAULT_STATS (NR_NUMA_HINT_FAULT_TYPES * 2) + +/* Averaged statistics, and temporary buffers. */ +#define NR_NUMA_HINT_FAULT_BUCKETS (NR_NUMA_HINT_FAULT_STATS * 2) + pid_t task_numa_group_id(struct task_struct *p) { return p->numa_group ? 
p->numa_group->gid : 0; @@ -904,16 +895,16 @@ pid_t task_numa_group_id(struct task_struct *p) static inline int task_faults_idx(int nid, int priv) { - return 2 * nid + priv; + return NR_NUMA_HINT_FAULT_TYPES * nid + priv; } static inline unsigned long task_faults(struct task_struct *p, int nid) { - if (!p->numa_faults) + if (!p->numa_faults_memory) return 0; - return p->numa_faults[task_faults_idx(nid, 0)] + - p->numa_faults[task_faults_idx(nid, 1)]; + return p->numa_faults_memory[task_faults_idx(nid, 0)] + + p->numa_faults_memory[task_faults_idx(nid, 1)]; } static inline unsigned long group_faults(struct task_struct *p, int nid) @@ -925,6 +916,12 @@ static inline unsigned long group_faults(struct task_struct *p, int nid) p->numa_group->faults[task_faults_idx(nid, 1)]; } +static inline unsigned long group_faults_cpu(struct numa_group *group, int nid) +{ + return group->faults_cpu[task_faults_idx(nid, 0)] + + group->faults_cpu[task_faults_idx(nid, 1)]; +} + /* * These return the fraction of accesses done by a particular task, or * task group, on a particular numa node. The group weight is given a @@ -935,7 +932,7 @@ static inline unsigned long task_weight(struct task_struct *p, int nid) { unsigned long total_faults; - if (!p->numa_faults) + if (!p->numa_faults_memory) return 0; total_faults = p->total_numa_faults; @@ -954,6 +951,69 @@ static inline unsigned long group_weight(struct task_struct *p, int nid) return 1000 * group_faults(p, nid) / p->numa_group->total_faults; } +bool should_numa_migrate_memory(struct task_struct *p, struct page * page, + int src_nid, int dst_cpu) +{ + struct numa_group *ng = p->numa_group; + int dst_nid = cpu_to_node(dst_cpu); + int last_cpupid, this_cpupid; + + this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid); + + /* + * Multi-stage node selection is used in conjunction with a periodic + * migration fault to build a temporal task<->page relation. By using + * a two-stage filter we remove short/unlikely relations. + * + * Using P(p) ~ n_p / n_t as per frequentist probability, we can equate + * a task's usage of a particular page (n_p) per total usage of this + * page (n_t) (in a given time-span) to a probability. + * + * Our periodic faults will sample this probability and getting the + * same result twice in a row, given these samples are fully + * independent, is then given by P(n)^2, provided our sample period + * is sufficiently short compared to the usage pattern. + * + * This quadratic squishes small probabilities, making it less likely we + * act on an unlikely task<->page relation. + */ + last_cpupid = page_cpupid_xchg_last(page, this_cpupid); + if (!cpupid_pid_unset(last_cpupid) && + cpupid_to_nid(last_cpupid) != dst_nid) + return false; + + /* Always allow migrate on private faults */ + if (cpupid_match_pid(p, last_cpupid)) + return true; + + /* A shared fault, but p->numa_group has not been set up yet. */ + if (!ng) + return true; + + /* + * Do not migrate if the destination is not a node that + * is actively used by this numa group. + */ + if (!node_isset(dst_nid, ng->active_nodes)) + return false; + + /* + * Source is a node that is not actively used by this + * numa group, while the destination is. Migrate. + */ + if (!node_isset(src_nid, ng->active_nodes)) + return true; + + /* + * Both source and destination are nodes in active + * use by this numa group. Maximize memory bandwidth + * by migrating from more heavily used groups, to less + * heavily used ones, spreading the load around. + * Use a 1/4 hysteresis to avoid spurious page movement.
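The two-stage filter comment above argues that demanding the same node on two consecutive fault samples turns a per-sample probability p into roughly p^2, which suppresses weak task<->page relations. A quick userspace simulation of that claim (a pure model, no kernel interfaces):

#include <stdio.h>
#include <stdlib.h>

/* One hinting-fault "sample": hits with probability p. */
static int sample(double p)
{
	return rand() < (int)(p * RAND_MAX);
}

int main(void)
{
	const double p = 0.3;		/* a weak task<->page relation */
	const int trials = 1000000;
	int passed = 0;

	srand(42);
	for (int i = 0; i < trials; i++)
		if (sample(p) && sample(p))	/* two agreeing samples in a row */
			passed++;

	printf("pass rate %.4f, p^2 = %.4f\n", (double)passed / trials, p * p);
	return 0;
}

With independent samples the printed pass rate lands near 0.09; the filter quadratically squishes a 30% relation, which is exactly the effect the comment describes.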
+ */ + return group_faults(p, dst_nid) < (group_faults(p, src_nid) * 3 / 4); +} + static unsigned long weighted_cpuload(const int cpu); static unsigned long source_load(int cpu, int type); static unsigned long target_load(int cpu, int type); @@ -1267,7 +1327,7 @@ static int task_numa_migrate(struct task_struct *p) static void numa_migrate_preferred(struct task_struct *p) { /* This task has no NUMA fault statistics yet */ - if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults)) + if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults_memory)) return; /* Periodically retry migrating the task to the preferred node */ @@ -1282,6 +1342,38 @@ static void numa_migrate_preferred(struct task_struct *p) } /* + * Find the nodes on which the workload is actively running. We do this by + * tracking the nodes from which NUMA hinting faults are triggered. This can + * be different from the set of nodes where the workload's memory is currently + * located. + * + * The bitmask is used to make smarter decisions on when to do NUMA page + * migrations. To prevent flip-flopping and excessive page migrations, nodes + * are added when they cause over 6/16 of the maximum number of faults, but + * only removed when they drop below 3/16. + */ +static void update_numa_active_node_mask(struct numa_group *numa_group) +{ + unsigned long faults, max_faults = 0; + int nid; + + for_each_online_node(nid) { + faults = group_faults_cpu(numa_group, nid); + if (faults > max_faults) + max_faults = faults; + } + + for_each_online_node(nid) { + faults = group_faults_cpu(numa_group, nid); + if (!node_isset(nid, numa_group->active_nodes)) { + if (faults > max_faults * 6 / 16) + node_set(nid, numa_group->active_nodes); + } else if (faults < max_faults * 3 / 16) + node_clear(nid, numa_group->active_nodes); + } +} + +/* * When adapting the scan rate, the period is divided into NUMA_PERIOD_SLOTS * increments. The more local the fault statistics are, the higher the scan * period will be for the next scan window. If local/remote ratio is below @@ -1355,11 +1447,41 @@ static void update_task_scan_period(struct task_struct *p, memset(p->numa_faults_locality, 0, sizeof(p->numa_faults_locality)); } +/* + * Get the fraction of time the task has been running since the last + * NUMA placement cycle. The scheduler keeps similar statistics, but + * decays those on a 32ms period, which is orders of magnitude off + * from the dozens-of-seconds NUMA balancing period. Use the scheduler + * stats only if the task is so new there are no NUMA statistics yet. + */ +static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period) +{ + u64 runtime, delta, now; + /* Use the start of this time slice to avoid calculations.
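The 6/16 add and 3/16 remove thresholds in update_numa_active_node_mask() above are deliberately asymmetric: a node whose fault share hovers near a single cutoff would otherwise flap in and out of the mask on every scan. A standalone model of that hysteresis (plain arrays instead of nodemask_t):

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

static void update_active(const unsigned long faults[NR_NODES],
			  bool active[NR_NODES])
{
	unsigned long max_faults = 0;
	int nid;

	for (nid = 0; nid < NR_NODES; nid++)
		if (faults[nid] > max_faults)
			max_faults = faults[nid];

	for (nid = 0; nid < NR_NODES; nid++) {
		if (!active[nid]) {
			if (faults[nid] > max_faults * 6 / 16)
				active[nid] = true;	/* join above 6/16 */
		} else if (faults[nid] < max_faults * 3 / 16) {
			active[nid] = false;		/* leave below 3/16 */
		}
	}
}

int main(void)
{
	unsigned long faults[NR_NODES] = { 160, 50, 20, 0 };
	bool active[NR_NODES] = { false };
	int nid;

	update_active(faults, active);	/* node 1: 50 <= 60, stays out */
	faults[1] = 70;
	update_active(faults, active);	/* node 1: 70 > 60, added */
	faults[1] = 40;
	update_active(faults, active);	/* node 1: 40 >= 30, kept: hysteresis */
	for (nid = 0; nid < NR_NODES; nid++)
		printf("node%d %s\n", nid, active[nid] ? "active" : "inactive");
	return 0;
}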
*/ now = p->se.exec_start; + runtime = p->se.sum_exec_runtime; + + if (p->last_task_numa_placement) { + delta = runtime - p->last_sum_exec_runtime; + *period = now - p->last_task_numa_placement; + } else { + delta = p->se.avg.runnable_avg_sum; + *period = p->se.avg.runnable_avg_period; + } + + p->last_sum_exec_runtime = runtime; + p->last_task_numa_placement = now; + + return delta; +} + static void task_numa_placement(struct task_struct *p) { int seq, nid, max_nid = -1, max_group_nid = -1; unsigned long max_faults = 0, max_group_faults = 0; unsigned long fault_types[2] = { 0, 0 }; + unsigned long total_faults; + u64 runtime, period; spinlock_t *group_lock = NULL; seq = ACCESS_ONCE(p->mm->numa_scan_seq); @@ -1368,6 +1490,10 @@ static void task_numa_placement(struct task_struct *p) p->numa_scan_seq = seq; p->numa_scan_period_max = task_scan_max(p); + total_faults = p->numa_faults_locality[0] + + p->numa_faults_locality[1]; + runtime = numa_get_avg_runtime(p, &period); + /* If the task is part of a group prevent parallel updates to group stats */ if (p->numa_group) { group_lock = &p->numa_group->lock; @@ -1379,24 +1505,37 @@ static void task_numa_placement(struct task_struct *p) unsigned long faults = 0, group_faults = 0; int priv, i; - for (priv = 0; priv < 2; priv++) { - long diff; + for (priv = 0; priv < NR_NUMA_HINT_FAULT_TYPES; priv++) { + long diff, f_diff, f_weight; i = task_faults_idx(nid, priv); - diff = -p->numa_faults[i]; /* Decay existing window, copy faults since last scan */ - p->numa_faults[i] >>= 1; - p->numa_faults[i] += p->numa_faults_buffer[i]; - fault_types[priv] += p->numa_faults_buffer[i]; - p->numa_faults_buffer[i] = 0; + diff = p->numa_faults_buffer_memory[i] - p->numa_faults_memory[i] / 2; + fault_types[priv] += p->numa_faults_buffer_memory[i]; + p->numa_faults_buffer_memory[i] = 0; - faults += p->numa_faults[i]; - diff += p->numa_faults[i]; + /* + * Normalize the faults_cpu, so all tasks in a group + * count according to CPU use, instead of by the raw + * number of faults. Tasks with little runtime have + * little over-all impact on throughput, and thus their + * faults are less important. + */ + f_weight = div64_u64(runtime << 16, period + 1); + f_weight = (f_weight * p->numa_faults_buffer_cpu[i]) / + (total_faults + 1); + f_diff = f_weight - p->numa_faults_cpu[i] / 2; + p->numa_faults_buffer_cpu[i] = 0; + + p->numa_faults_memory[i] += diff; + p->numa_faults_cpu[i] += f_diff; + faults += p->numa_faults_memory[i]; p->total_numa_faults += diff; if (p->numa_group) { /* safe because we can only change our own group */ p->numa_group->faults[i] += diff; + p->numa_group->faults_cpu[i] += f_diff; p->numa_group->total_faults += diff; group_faults += p->numa_group->faults[i]; } @@ -1416,6 +1555,7 @@ static void task_numa_placement(struct task_struct *p) update_task_scan_period(p, fault_types[0], fault_types[1]); if (p->numa_group) { + update_numa_active_node_mask(p->numa_group); /* * If the preferred task and group nids are different, * iterate over the nodes again to find the best place.
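numa_get_avg_runtime() above yields CPU time consumed per wall-clock placement period, and task_numa_placement() turns that into a fixed-point weight with runtime << 16 / (period + 1). A small model of that arithmetic with invented sample values:

#include <stdint.h>
#include <stdio.h>

/* Fraction of the period spent running, in 16.16 fixed point. */
static uint64_t runtime_weight(uint64_t runtime_ns, uint64_t period_ns)
{
	return (runtime_ns << 16) / (period_ns + 1);	/* +1 avoids div by 0 */
}

int main(void)
{
	/* e.g. 3 s of CPU time across a 10 s NUMA placement interval */
	uint64_t w = runtime_weight(3000000000ULL, 10000000000ULL);

	printf("weight %llu/65536 (~%.2f of the period)\n",
	       (unsigned long long)w, (double)w / (1 << 16));
	return 0;
}

Scaling the buffered CPU faults by this weight and shifting back down, as the hunk above does, makes a group's fault statistics track CPU use rather than raw fault counts.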
@@ -1465,7 +1605,7 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags, if (unlikely(!p->numa_group)) { unsigned int size = sizeof(struct numa_group) + - 2*nr_node_ids*sizeof(unsigned long); + 4*nr_node_ids*sizeof(unsigned long); grp = kzalloc(size, GFP_KERNEL | __GFP_NOWARN); if (!grp) @@ -1475,9 +1615,14 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags, spin_lock_init(&grp->lock); INIT_LIST_HEAD(&grp->task_list); grp->gid = p->pid; + /* Second half of the array tracks nids where faults happen */ + grp->faults_cpu = grp->faults + NR_NUMA_HINT_FAULT_TYPES * + nr_node_ids; + + node_set(task_node(current), grp->active_nodes); - for (i = 0; i < 2*nr_node_ids; i++) - grp->faults[i] = p->numa_faults[i]; + for (i = 0; i < NR_NUMA_HINT_FAULT_STATS * nr_node_ids; i++) + grp->faults[i] = p->numa_faults_memory[i]; grp->total_faults = p->total_numa_faults; @@ -1534,9 +1679,9 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags, double_lock(&my_grp->lock, &grp->lock); - for (i = 0; i < 2*nr_node_ids; i++) { - my_grp->faults[i] -= p->numa_faults[i]; - grp->faults[i] += p->numa_faults[i]; + for (i = 0; i < NR_NUMA_HINT_FAULT_STATS * nr_node_ids; i++) { + my_grp->faults[i] -= p->numa_faults_memory[i]; + grp->faults[i] += p->numa_faults_memory[i]; } my_grp->total_faults -= p->total_numa_faults; grp->total_faults += p->total_numa_faults; @@ -1562,12 +1707,12 @@ void task_numa_free(struct task_struct *p) { struct numa_group *grp = p->numa_group; int i; - void *numa_faults = p->numa_faults; + void *numa_faults = p->numa_faults_memory; if (grp) { spin_lock(&grp->lock); - for (i = 0; i < 2*nr_node_ids; i++) - grp->faults[i] -= p->numa_faults[i]; + for (i = 0; i < NR_NUMA_HINT_FAULT_STATS * nr_node_ids; i++) + grp->faults[i] -= p->numa_faults_memory[i]; grp->total_faults -= p->total_numa_faults; list_del(&p->numa_entry); @@ -1577,18 +1722,21 @@ void task_numa_free(struct task_struct *p) put_numa_group(grp); } - p->numa_faults = NULL; - p->numa_faults_buffer = NULL; + p->numa_faults_memory = NULL; + p->numa_faults_buffer_memory = NULL; + p->numa_faults_cpu= NULL; + p->numa_faults_buffer_cpu = NULL; kfree(numa_faults); } /* * Got a PROT_NONE fault for a page on @node. */ -void task_numa_fault(int last_cpupid, int node, int pages, int flags) +void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags) { struct task_struct *p = current; bool migrated = flags & TNF_MIGRATED; + int cpu_node = task_node(current); int priv; if (!numabalancing_enabled) @@ -1603,16 +1751,24 @@ void task_numa_fault(int last_cpupid, int node, int pages, int flags) return; /* Allocate buffer to track faults on a per-node basis */ - if (unlikely(!p->numa_faults)) { - int size = sizeof(*p->numa_faults) * 2 * nr_node_ids; + if (unlikely(!p->numa_faults_memory)) { + int size = sizeof(*p->numa_faults_memory) * + NR_NUMA_HINT_FAULT_BUCKETS * nr_node_ids; - /* numa_faults and numa_faults_buffer share the allocation */ - p->numa_faults = kzalloc(size * 2, GFP_KERNEL|__GFP_NOWARN); - if (!p->numa_faults) + p->numa_faults_memory = kzalloc(size, GFP_KERNEL|__GFP_NOWARN); + if (!p->numa_faults_memory) return; - BUG_ON(p->numa_faults_buffer); - p->numa_faults_buffer = p->numa_faults + (2 * nr_node_ids); + BUG_ON(p->numa_faults_buffer_memory); + /* + * The averaged statistics, shared & private, memory & cpu, + * occupy the first half of the array. 
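As the comment above spells out, a single allocation backs four per-node fault arrays: averaged memory faults, averaged CPU faults, and a scan buffer for each, carved out at offsets of 2 * nr_node_ids entries. A standalone sketch of that carving (calloc in place of kzalloc, sizes invented):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	int nr_node_ids = 4;			/* pretend 4 NUMA nodes */
	size_t win = 2 * nr_node_ids;		/* shared+private per node */
	unsigned long *faults_memory = calloc(4 * win, sizeof(unsigned long));

	if (!faults_memory)
		return 1;

	/* Same pointer arithmetic as the patch uses for the four views. */
	unsigned long *faults_cpu = faults_memory + 1 * win;
	unsigned long *buffer_memory = faults_memory + 2 * win;
	unsigned long *buffer_cpu = faults_memory + 3 * win;

	buffer_memory[2 * 1 + 0] = 7;	/* node 1, private slot */
	buffer_cpu[2 * 1 + 0] = 3;
	printf("node1 buffered: mem=%lu cpu=%lu\n",
	       buffer_memory[2], buffer_cpu[2]);

	free(faults_memory);		/* one free tears down all four views */
	(void)faults_cpu;
	return 0;
}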
The second half of the + * array is for current counters, which are averaged into the + * first set by task_numa_placement. + */ + p->numa_faults_cpu = p->numa_faults_memory + (2 * nr_node_ids); + p->numa_faults_buffer_memory = p->numa_faults_memory + (4 * nr_node_ids); + p->numa_faults_buffer_cpu = p->numa_faults_memory + (6 * nr_node_ids); p->total_numa_faults = 0; memset(p->numa_faults_locality, 0, sizeof(p->numa_faults_locality)); } @@ -1641,7 +1797,8 @@ void task_numa_fault(int last_cpupid, int node, int pages, int flags) if (migrated) p->numa_pages_migrated += pages; - p->numa_faults_buffer[task_faults_idx(node, priv)] += pages; + p->numa_faults_buffer_memory[task_faults_idx(mem_node, priv)] += pages; + p->numa_faults_buffer_cpu[task_faults_idx(cpu_node, priv)] += pages; p->numa_faults_locality[!!(flags & TNF_FAULT_LOCAL)] += pages; } @@ -2219,13 +2376,20 @@ static inline void __update_group_entity_contrib(struct sched_entity *se) se->avg.load_avg_contrib >>= NICE_0_SHIFT; } } -#else + +static inline void update_rq_runnable_avg(struct rq *rq, int runnable) +{ + __update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable); + __update_tg_runnable_avg(&rq->avg, &rq->cfs); +} +#else /* CONFIG_FAIR_GROUP_SCHED */ static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq, int force_update) {} static inline void __update_tg_runnable_avg(struct sched_avg *sa, struct cfs_rq *cfs_rq) {} static inline void __update_group_entity_contrib(struct sched_entity *se) {} -#endif +static inline void update_rq_runnable_avg(struct rq *rq, int runnable) {} +#endif /* CONFIG_FAIR_GROUP_SCHED */ static inline void __update_task_entity_contrib(struct sched_entity *se) { @@ -2323,12 +2487,6 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update) __update_cfs_rq_tg_load_contrib(cfs_rq, force_update); } -static inline void update_rq_runnable_avg(struct rq *rq, int runnable) -{ - __update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable); - __update_tg_runnable_avg(&rq->avg, &rq->cfs); -} - /* Add the load generated by se into cfs_rq's child load-average */ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, @@ -2416,7 +2574,10 @@ void idle_exit_fair(struct rq *this_rq) update_rq_runnable_avg(this_rq, 0); } -#else +static int idle_balance(struct rq *this_rq); + +#else /* CONFIG_SMP */ + static inline void update_entity_load_avg(struct sched_entity *se, int update_cfs_rq) {} static inline void update_rq_runnable_avg(struct rq *rq, int runnable) {} @@ -2428,7 +2589,13 @@ static inline void dequeue_entity_load_avg(struct cfs_rq *cfs_rq, int sleep) {} static inline void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update) {} -#endif + +static inline int idle_balance(struct rq *rq) +{ + return 0; +} + +#endif /* CONFIG_SMP */ static void enqueue_sleeper(struct cfs_rq *cfs_rq, struct sched_entity *se) { @@ -2578,10 +2745,10 @@ static void __clear_buddies_last(struct sched_entity *se) { for_each_sched_entity(se) { struct cfs_rq *cfs_rq = cfs_rq_of(se); - if (cfs_rq->last == se) - cfs_rq->last = NULL; - else + if (cfs_rq->last != se) break; + + cfs_rq->last = NULL; } } @@ -2589,10 +2756,10 @@ static void __clear_buddies_next(struct sched_entity *se) { for_each_sched_entity(se) { struct cfs_rq *cfs_rq = cfs_rq_of(se); - if (cfs_rq->next == se) - cfs_rq->next = NULL; - else + if (cfs_rq->next != se) break; + + cfs_rq->next = NULL; } } @@ -2600,10 +2767,10 @@ static void __clear_buddies_skip(struct sched_entity 
*se) { for_each_sched_entity(se) { struct cfs_rq *cfs_rq = cfs_rq_of(se); - if (cfs_rq->skip == se) - cfs_rq->skip = NULL; - else + if (cfs_rq->skip != se) break; + + cfs_rq->skip = NULL; } } @@ -2746,17 +2913,36 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se); * 3) pick the "last" process, for cache locality * 4) do not run the "skip" process, if something else is available */ -static struct sched_entity *pick_next_entity(struct cfs_rq *cfs_rq) +static struct sched_entity * +pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr) { - struct sched_entity *se = __pick_first_entity(cfs_rq); - struct sched_entity *left = se; + struct sched_entity *left = __pick_first_entity(cfs_rq); + struct sched_entity *se; + + /* + * If curr is set we have to see if its left of the leftmost entity + * still in the tree, provided there was anything in the tree at all. + */ + if (!left || (curr && entity_before(curr, left))) + left = curr; + + se = left; /* ideally we run the leftmost entity */ /* * Avoid running the skip buddy, if running something else can * be done without getting too unfair. */ if (cfs_rq->skip == se) { - struct sched_entity *second = __pick_next_entity(se); + struct sched_entity *second; + + if (se == curr) { + second = __pick_first_entity(cfs_rq); + } else { + second = __pick_next_entity(se); + if (!second || (curr && entity_before(curr, second))) + second = curr; + } + if (second && wakeup_preempt_entity(second, left) < 1) se = second; } @@ -2778,7 +2964,7 @@ static struct sched_entity *pick_next_entity(struct cfs_rq *cfs_rq) return se; } -static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq); +static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq); static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev) { @@ -3433,22 +3619,23 @@ static void check_enqueue_throttle(struct cfs_rq *cfs_rq) } /* conditionally throttle active cfs_rq's from put_prev_entity() */ -static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq) +static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq) { if (!cfs_bandwidth_used()) - return; + return false; if (likely(!cfs_rq->runtime_enabled || cfs_rq->runtime_remaining > 0)) - return; + return false; /* * it's possible for a throttled entity to be forced into a running * state (e.g. set_curr_task), in this case we're finished. */ if (cfs_rq_throttled(cfs_rq)) - return; + return true; throttle_cfs_rq(cfs_rq); + return true; } static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer) @@ -3558,7 +3745,7 @@ static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq) } static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) {} -static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq) {} +static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq) { return false; } static void check_enqueue_throttle(struct cfs_rq *cfs_rq) {} static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq) {} @@ -4213,13 +4400,14 @@ done: } /* - * sched_balance_self: balance the current task (running on cpu) in domains - * that have the 'flag' flag set. In practice, this is SD_BALANCE_FORK and - * SD_BALANCE_EXEC. + * select_task_rq_fair: Select target runqueue for the waking task in domains + * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE, + * SD_BALANCE_FORK, or SD_BALANCE_EXEC. * - * Balance, ie. select the least loaded group. 
+ * Balances load by selecting the idlest cpu in the idlest group, or under + * certain conditions an idle sibling cpu if the domain has SD_WAKE_AFFINE set. * - * Returns the target CPU number, or the same CPU if no balancing is needed. + * Returns the target cpu number. * * preempt must be disabled. */ @@ -4494,26 +4682,124 @@ preempt: set_last_buddy(se); } -static struct task_struct *pick_next_task_fair(struct rq *rq) +static struct task_struct * +pick_next_task_fair(struct rq *rq, struct task_struct *prev) { - struct task_struct *p; struct cfs_rq *cfs_rq = &rq->cfs; struct sched_entity *se; + struct task_struct *p; + int new_tasks; +again: +#ifdef CONFIG_FAIR_GROUP_SCHED if (!cfs_rq->nr_running) - return NULL; + goto idle; + + if (prev->sched_class != &fair_sched_class) + goto simple; + + /* + * Because of the set_next_buddy() in dequeue_task_fair() it is rather + * likely that a next task is from the same cgroup as the current. + * + * Therefore attempt to avoid putting and setting the entire cgroup + * hierarchy, only change the part that actually changes. + */ + + do { + struct sched_entity *curr = cfs_rq->curr; + + /* + * Since we got here without doing put_prev_entity() we also + * have to consider cfs_rq->curr. If it is still a runnable + * entity, update_curr() will update its vruntime, otherwise + * forget we've ever seen it. + */ + if (curr && curr->on_rq) + update_curr(cfs_rq); + else + curr = NULL; + + /* + * This call to check_cfs_rq_runtime() will do the throttle and + * dequeue its entity in the parent(s). Therefore the 'simple' + * nr_running test will indeed be correct. + */ + if (unlikely(check_cfs_rq_runtime(cfs_rq))) + goto simple; + + se = pick_next_entity(cfs_rq, curr); + cfs_rq = group_cfs_rq(se); + } while (cfs_rq); + + p = task_of(se); + + /* + * Since we haven't yet done put_prev_entity and if the selected task + * is a different task than we started out with, try and touch the + * least amount of cfs_rqs. + */ + if (prev != p) { + struct sched_entity *pse = &prev->se; + + while (!(cfs_rq = is_same_group(se, pse))) { + int se_depth = se->depth; + int pse_depth = pse->depth; + + if (se_depth <= pse_depth) { + put_prev_entity(cfs_rq_of(pse), pse); + pse = parent_entity(pse); + } + if (se_depth >= pse_depth) { + set_next_entity(cfs_rq_of(se), se); + se = parent_entity(se); + } + } + + put_prev_entity(cfs_rq, pse); + set_next_entity(cfs_rq, se); + } + + if (hrtick_enabled(rq)) + hrtick_start_fair(rq, p); + + return p; +simple: + cfs_rq = &rq->cfs; +#endif + + if (!cfs_rq->nr_running) + goto idle; + + put_prev_task(rq, prev); do { - se = pick_next_entity(cfs_rq); + se = pick_next_entity(cfs_rq, NULL); set_next_entity(cfs_rq, se); cfs_rq = group_cfs_rq(se); } while (cfs_rq); p = task_of(se); + if (hrtick_enabled(rq)) hrtick_start_fair(rq, p); return p; + +idle: + new_tasks = idle_balance(rq); + /* + * Because idle_balance() releases (and re-acquires) rq->lock, it is + * possible for any higher priority task to appear. In that case we + * must re-start the pick_next_entity() loop. 
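The comment above explains why the result of idle_balance() must be able to force a restart; the code that follows encodes three outcomes in one int: negative means a higher-priority class became runnable (propagate RETRY_TASK), positive means fair tasks were pulled (retry the fair pick), zero means the CPU really goes idle. A compact userspace model of that control flow (toy stand-ins throughout, nothing here is kernel API):

#include <stdio.h>

#define RETRY_TASK ((void *)-1UL)	/* sentinel distinct from NULL */

struct toy_task { const char *name; };

static struct toy_task fair_task = { "fair" };

/* <0: higher class appeared, >0: pulled fair tasks, 0: nothing found. */
static int toy_idle_balance(int scenario) { return scenario; }

static struct toy_task *toy_pick_next_fair(int scenario)
{
again:
	if (scenario == 2)
		return &fair_task;		/* runqueue has work now */

	switch (toy_idle_balance(scenario)) {
	case -1:
		return RETRY_TASK;		/* restart from highest class */
	case 1:
		scenario = 2;			/* pulled something, pick again */
		goto again;
	default:
		return NULL;			/* genuinely idle */
	}
}

int main(void)
{
	struct toy_task *p = toy_pick_next_fair(1);

	printf("%s\n", p == RETRY_TASK ? "RETRY_TASK" : p ? p->name : "idle");
	return 0;
}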
+ */ + if (new_tasks < 0) + return RETRY_TASK; + + if (new_tasks > 0) + goto again; + + return NULL; } /* @@ -4751,7 +5037,7 @@ static void move_task(struct task_struct *p, struct lb_env *env) * Is this task likely cache-hot: */ static int -task_hot(struct task_struct *p, u64 now, struct sched_domain *sd) +task_hot(struct task_struct *p, u64 now) { s64 delta; @@ -4785,7 +5071,7 @@ static bool migrate_improves_locality(struct task_struct *p, struct lb_env *env) { int src_nid, dst_nid; - if (!sched_feat(NUMA_FAVOUR_HIGHER) || !p->numa_faults || + if (!sched_feat(NUMA_FAVOUR_HIGHER) || !p->numa_faults_memory || !(env->sd->flags & SD_NUMA)) { return false; } @@ -4816,7 +5102,7 @@ static bool migrate_degrades_locality(struct task_struct *p, struct lb_env *env) if (!sched_feat(NUMA) || !sched_feat(NUMA_RESIST_LOWER)) return false; - if (!p->numa_faults || !(env->sd->flags & SD_NUMA)) + if (!p->numa_faults_memory || !(env->sd->flags & SD_NUMA)) return false; src_nid = cpu_to_node(env->src_cpu); @@ -4912,7 +5198,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env) * 2) task is cache cold, or * 3) too many balance attempts have failed. */ - tsk_cache_hot = task_hot(p, rq_clock_task(env->src_rq), env->sd); + tsk_cache_hot = task_hot(p, rq_clock_task(env->src_rq)); if (!tsk_cache_hot) tsk_cache_hot = migrate_degrades_locality(p, env); @@ -5775,12 +6061,10 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds) pwr_now /= SCHED_POWER_SCALE; /* Amount of load we'd subtract */ - tmp = (busiest->load_per_task * SCHED_POWER_SCALE) / - busiest->group_power; - if (busiest->avg_load > tmp) { + if (busiest->avg_load > scaled_busy_load_per_task) { pwr_move += busiest->group_power * min(busiest->load_per_task, - busiest->avg_load - tmp); + busiest->avg_load - scaled_busy_load_per_task); } /* Amount of load we'd add */ @@ -6359,17 +6643,23 @@ out: * idle_balance is called by schedule() if this_cpu is about to become * idle. Attempts to pull tasks from other CPUs. */ -void idle_balance(int this_cpu, struct rq *this_rq) +static int idle_balance(struct rq *this_rq) { struct sched_domain *sd; int pulled_task = 0; unsigned long next_balance = jiffies + HZ; u64 curr_cost = 0; + int this_cpu = this_rq->cpu; + idle_enter_fair(this_rq); + /* + * We must set idle_stamp _before_ calling idle_balance(), such that we + * measure the duration of idle_balance() as idle time. + */ this_rq->idle_stamp = rq_clock(this_rq); if (this_rq->avg_idle < sysctl_sched_migration_cost) - return; + goto out; /* * Drop the rq->lock, but keep IRQ/preempt disabled. @@ -6407,15 +6697,22 @@ void idle_balance(int this_cpu, struct rq *this_rq) interval = msecs_to_jiffies(sd->balance_interval); if (time_after(next_balance, sd->last_balance + interval)) next_balance = sd->last_balance + interval; - if (pulled_task) { - this_rq->idle_stamp = 0; + if (pulled_task) break; - } } rcu_read_unlock(); raw_spin_lock(&this_rq->lock); + /* + * While browsing the domains, we released the rq lock. + * A task could have been enqueued in the meantime. + */ + if (this_rq->cfs.h_nr_running && !pulled_task) { + pulled_task = 1; + goto out; + } + if (pulled_task || time_after(jiffies, this_rq->next_balance)) { /* * We are going idle. next_balance may be set based on @@ -6426,6 +6723,20 @@ void idle_balance(int this_cpu, struct rq *this_rq) if (curr_cost > this_rq->max_idle_balance_cost) this_rq->max_idle_balance_cost = curr_cost; + +out: + /* Is there a task of a high priority class?
*/ + if (this_rq->nr_running != this_rq->cfs.h_nr_running && + (this_rq->dl.dl_nr_running || + (this_rq->rt.rt_nr_running && !rt_rq_throttled(&this_rq->rt)))) + pulled_task = -1; + + if (pulled_task) { + idle_exit_fair(this_rq); + this_rq->idle_stamp = 0; + } + + return pulled_task; } /* @@ -6496,6 +6807,11 @@ out_unlock: return 0; } +static inline int on_null_domain(struct rq *rq) +{ + return unlikely(!rcu_dereference_sched(rq->sd)); +} + #ifdef CONFIG_NO_HZ_COMMON /* * idle load balancing details @@ -6550,8 +6866,13 @@ static void nohz_balancer_kick(void) static inline void nohz_balance_exit_idle(int cpu) { if (unlikely(test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))) { - cpumask_clear_cpu(cpu, nohz.idle_cpus_mask); - atomic_dec(&nohz.nr_cpus); + /* + * Completely isolated CPUs don't ever set, so we must test. + */ + if (likely(cpumask_test_cpu(cpu, nohz.idle_cpus_mask))) { + cpumask_clear_cpu(cpu, nohz.idle_cpus_mask); + atomic_dec(&nohz.nr_cpus); + } clear_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)); } } @@ -6605,6 +6926,12 @@ void nohz_balance_enter_idle(int cpu) if (test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu))) return; + /* + * If we're a completely isolated CPU, we don't play. + */ + if (on_null_domain(cpu_rq(cpu))) + return; + cpumask_set_cpu(cpu, nohz.idle_cpus_mask); atomic_inc(&nohz.nr_cpus); set_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)); @@ -6867,11 +7194,6 @@ static void run_rebalance_domains(struct softirq_action *h) nohz_idle_balance(this_rq, idle); } -static inline int on_null_domain(struct rq *rq) -{ - return !rcu_dereference_sched(rq->sd); -} - /* * Trigger the SCHED_SOFTIRQ if it is time to do periodic load balancing. */ @@ -7036,7 +7358,15 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p) */ static void switched_to_fair(struct rq *rq, struct task_struct *p) { - if (!p->se.on_rq) + struct sched_entity *se = &p->se; +#ifdef CONFIG_FAIR_GROUP_SCHED + /* + * Since the real-depth could have been changed (only FAIR + * class maintain depth value), reset depth properly. + */ + se->depth = se->parent ? se->parent->depth + 1 : 0; +#endif + if (!se->on_rq) return; /* @@ -7084,7 +7414,9 @@ void init_cfs_rq(struct cfs_rq *cfs_rq) #ifdef CONFIG_FAIR_GROUP_SCHED static void task_move_group_fair(struct task_struct *p, int on_rq) { + struct sched_entity *se = &p->se; struct cfs_rq *cfs_rq; + /* * If the task was not on the rq at the time of this cgroup movement * it must have been asleep, sleeping tasks keep their ->vruntime @@ -7110,23 +7442,24 @@ static void task_move_group_fair(struct task_struct *p, int on_rq) * To prevent boost or penalty in the new cfs_rq caused by delta * min_vruntime between the two cfs_rqs, we skip vruntime adjustment. */ - if (!on_rq && (!p->se.sum_exec_runtime || p->state == TASK_WAKING)) + if (!on_rq && (!se->sum_exec_runtime || p->state == TASK_WAKING)) on_rq = 1; if (!on_rq) - p->se.vruntime -= cfs_rq_of(&p->se)->min_vruntime; + se->vruntime -= cfs_rq_of(se)->min_vruntime; set_task_rq(p, task_cpu(p)); + se->depth = se->parent ? se->parent->depth + 1 : 0; if (!on_rq) { - cfs_rq = cfs_rq_of(&p->se); - p->se.vruntime += cfs_rq->min_vruntime; + cfs_rq = cfs_rq_of(se); + se->vruntime += cfs_rq->min_vruntime; #ifdef CONFIG_SMP /* * migrate_task_rq_fair() will have removed our previous * contribution, but we must synchronize for ongoing future * decay. 
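The cached se->depth maintained above replaces the old per-call depth_se() walk; with depths stored, find_matching_se() can raise the deeper entity first and then climb both sides in lockstep. A toy model of that common-ancestor walk (it stops at an identical node rather than a shared cfs_rq, which is close enough to illustrate the idea):

#include <stdio.h>

struct entity {
	const char *name;
	struct entity *parent;
	int depth;	/* invariant: parent ? parent->depth + 1 : 0 */
};

static void to_common_ancestor(struct entity **a, struct entity **b)
{
	while ((*a)->depth > (*b)->depth)	/* equalize depths first */
		*a = (*a)->parent;
	while ((*b)->depth > (*a)->depth)
		*b = (*b)->parent;
	while (*a != *b) {			/* then climb in lockstep */
		*a = (*a)->parent;
		*b = (*b)->parent;
	}
}

int main(void)
{
	struct entity root = { "root", NULL, 0 };
	struct entity g1 = { "g1", &root, 1 };
	struct entity g2 = { "g2", &g1, 2 };
	struct entity t1 = { "t1", &g2, 3 };
	struct entity t2 = { "t2", &root, 1 };
	struct entity *a = &t1, *b = &t2;

	to_common_ancestor(&a, &b);
	printf("meet at %s\n", a->name);	/* root */
	return 0;
}

Keeping the depth on the entity is also why task_move_group_fair() above and init_tg_cfs_entry() just below must both re-derive it whenever the parent changes.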
*/ - p->se.avg.decay_count = atomic64_read(&cfs_rq->decay_counter); - cfs_rq->blocked_load_avg += p->se.avg.load_avg_contrib; + se->avg.decay_count = atomic64_read(&cfs_rq->decay_counter); + cfs_rq->blocked_load_avg += se->avg.load_avg_contrib; #endif } } @@ -7222,10 +7555,13 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq, if (!se) return; - if (!parent) + if (!parent) { se->cfs_rq = &rq->cfs; - else + se->depth = 0; + } else { se->cfs_rq = parent->my_q; + se->depth = parent->depth + 1; + } se->my_q = cfs_rq; /* guarantee group entities always have weight */ diff --git a/kernel/cpu/idle.c b/kernel/sched/idle.c index 277f494c2a9a..b7976a127178 100644 --- a/kernel/cpu/idle.c +++ b/kernel/sched/idle.c @@ -3,6 +3,7 @@ */ #include <linux/sched.h> #include <linux/cpu.h> +#include <linux/cpuidle.h> #include <linux/tick.h> #include <linux/mm.h> #include <linux/stackprotector.h> @@ -95,8 +96,10 @@ static void cpu_idle_loop(void) if (!current_clr_polling_and_test()) { stop_critical_timings(); rcu_idle_enter(); - arch_cpu_idle(); - WARN_ON_ONCE(irqs_disabled()); + if (cpuidle_idle_call()) + arch_cpu_idle(); + if (WARN_ON_ONCE(irqs_disabled())) + local_irq_enable(); rcu_idle_exit(); start_critical_timings(); } else { diff --git a/kernel/sched/idle_task.c b/kernel/sched/idle_task.c index 516c3d9ceea1..879f2b75266a 100644 --- a/kernel/sched/idle_task.c +++ b/kernel/sched/idle_task.c @@ -13,18 +13,8 @@ select_task_rq_idle(struct task_struct *p, int cpu, int sd_flag, int flags) { return task_cpu(p); /* IDLE tasks as never migrated */ } - -static void pre_schedule_idle(struct rq *rq, struct task_struct *prev) -{ - idle_exit_fair(rq); - rq_last_tick_reset(rq); -} - -static void post_schedule_idle(struct rq *rq) -{ - idle_enter_fair(rq); -} #endif /* CONFIG_SMP */ + /* * Idle tasks are unconditionally rescheduled: */ @@ -33,13 +23,12 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl resched_task(rq->idle); } -static struct task_struct *pick_next_task_idle(struct rq *rq) +static struct task_struct * +pick_next_task_idle(struct rq *rq, struct task_struct *prev) { + put_prev_task(rq, prev); + schedstat_inc(rq, sched_goidle); -#ifdef CONFIG_SMP - /* Trigger the post schedule to do an idle_enter for CFS */ - rq->post_schedule = 1; -#endif return rq->idle; } @@ -58,6 +47,8 @@ dequeue_task_idle(struct rq *rq, struct task_struct *p, int flags) static void put_prev_task_idle(struct rq *rq, struct task_struct *prev) { + idle_exit_fair(rq); + rq_last_tick_reset(rq); } static void task_tick_idle(struct rq *rq, struct task_struct *curr, int queued) @@ -101,8 +92,6 @@ const struct sched_class idle_sched_class = { #ifdef CONFIG_SMP .select_task_rq = select_task_rq_idle, - .pre_schedule = pre_schedule_idle, - .post_schedule = post_schedule_idle, #endif .set_curr_task = set_curr_task_idle, diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 1999021042c7..d8cdf1618551 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -229,6 +229,14 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent) #ifdef CONFIG_SMP +static int pull_rt_task(struct rq *this_rq); + +static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev) +{ + /* Try to pull RT tasks here if we lower this rq's prio */ + return rq->rt.highest_prio.curr > prev->prio; +} + static inline int rt_overloaded(struct rq *rq) { return atomic_read(&rq->rd->rto_count); @@ -315,6 +323,15 @@ static inline int has_pushable_tasks(struct rq *rq) return 
!plist_head_empty(&rq->rt.pushable_tasks); } +static inline void set_post_schedule(struct rq *rq) +{ + /* + * We detect this state here so that we can avoid taking the RQ + * lock again later if there is no need to push + */ + rq->post_schedule = has_pushable_tasks(rq); +} + static void enqueue_pushable_task(struct rq *rq, struct task_struct *p) { plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks); @@ -359,6 +376,19 @@ void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq) { } +static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev) +{ + return false; +} + +static inline int pull_rt_task(struct rq *this_rq) +{ + return 0; +} + +static inline void set_post_schedule(struct rq *rq) +{ +} #endif /* CONFIG_SMP */ static inline int on_rt_rq(struct sched_rt_entity *rt_se) @@ -440,11 +470,6 @@ static void sched_rt_rq_dequeue(struct rt_rq *rt_rq) dequeue_rt_entity(rt_se); } -static inline int rt_rq_throttled(struct rt_rq *rt_rq) -{ - return rt_rq->rt_throttled && !rt_rq->rt_nr_boosted; -} - static int rt_se_boosted(struct sched_rt_entity *rt_se) { struct rt_rq *rt_rq = group_rt_rq(rt_se); @@ -515,11 +540,6 @@ static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq) { } -static inline int rt_rq_throttled(struct rt_rq *rt_rq) -{ - return rt_rq->rt_throttled; -} - static inline const struct cpumask *sched_rt_period_mask(void) { return cpu_online_mask; @@ -1318,15 +1338,7 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq) { struct sched_rt_entity *rt_se; struct task_struct *p; - struct rt_rq *rt_rq; - - rt_rq = &rq->rt; - - if (!rt_rq->rt_nr_running) - return NULL; - - if (rt_rq_throttled(rt_rq)) - return NULL; + struct rt_rq *rt_rq = &rq->rt; do { rt_se = pick_next_rt_entity(rq, rt_rq); @@ -1340,21 +1352,45 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq) return p; } -static struct task_struct *pick_next_task_rt(struct rq *rq) +static struct task_struct * +pick_next_task_rt(struct rq *rq, struct task_struct *prev) { - struct task_struct *p = _pick_next_task_rt(rq); + struct task_struct *p; + struct rt_rq *rt_rq = &rq->rt; + + if (need_pull_rt_task(rq, prev)) { + pull_rt_task(rq); + /* + * pull_rt_task() can drop (and re-acquire) rq->lock; this + * means a dl task can slip in, in which case we need to + * re-start task selection. + */ + if (unlikely(rq->dl.dl_nr_running)) + return RETRY_TASK; + } + + /* + * We may dequeue prev's rt_rq in put_prev_task(). + * So, we update time before rt_nr_running check. 
+ */ + if (prev->sched_class == &rt_sched_class) + update_curr_rt(rq); + + if (!rt_rq->rt_nr_running) + return NULL; + + if (rt_rq_throttled(rt_rq)) + return NULL; + + put_prev_task(rq, prev); + + p = _pick_next_task_rt(rq); /* The running task is never eligible for pushing */ if (p) dequeue_pushable_task(rq, p); -#ifdef CONFIG_SMP - /* - * We detect this state here so that we can avoid taking the RQ - * lock again later if there is no need to push - */ - rq->post_schedule = has_pushable_tasks(rq); -#endif + set_post_schedule(rq); return p; } @@ -1724,13 +1760,6 @@ skip: return ret; } -static void pre_schedule_rt(struct rq *rq, struct task_struct *prev) -{ - /* Try to pull RT tasks here if we lower this rq's prio */ - if (rq->rt.highest_prio.curr > prev->prio) - pull_rt_task(rq); -} - static void post_schedule_rt(struct rq *rq) { push_rt_tasks(rq); @@ -1833,7 +1862,7 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p) resched_task(rq->curr); } -void init_sched_rt_class(void) +void __init init_sched_rt_class(void) { unsigned int i; @@ -2007,7 +2036,6 @@ const struct sched_class rt_sched_class = { .set_cpus_allowed = set_cpus_allowed_rt, .rq_online = rq_online_rt, .rq_offline = rq_offline_rt, - .pre_schedule = pre_schedule_rt, .post_schedule = post_schedule_rt, .task_woken = task_woken_rt, .switched_from = switched_from_rt, diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index f964add50f38..f2de7a175620 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -24,24 +24,6 @@ extern long calc_load_fold_active(struct rq *this_rq); extern void update_cpu_load_active(struct rq *this_rq); /* - * Convert user-nice values [ -20 ... 0 ... 19 ] - * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ], - * and back. - */ -#define NICE_TO_PRIO(nice) (MAX_RT_PRIO + (nice) + 20) -#define PRIO_TO_NICE(prio) ((prio) - MAX_RT_PRIO - 20) -#define TASK_NICE(p) PRIO_TO_NICE((p)->static_prio) - -/* - * 'User priority' is the nice value converted to something we - * can work with better when scaling various scheduler parameters, - * it's a [ 0 ... 39 ] range. 
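The macros removed above were centralized rather than dropped; throughout this series the open-coded -20/19 bounds become MIN_NICE/MAX_NICE and TASK_NICE() becomes task_nice(). The underlying arithmetic is unchanged and easy to restate standalone (MAX_RT_PRIO is 100 in the kernel):

#include <stdio.h>

#define MAX_RT_PRIO	100
#define MIN_NICE	(-20)
#define MAX_NICE	19
#define NICE_TO_PRIO(nice)	(MAX_RT_PRIO + (nice) + 20)
#define PRIO_TO_NICE(prio)	((prio) - MAX_RT_PRIO - 20)

static long clamp_nice(long nice)
{
	if (nice < MIN_NICE)
		nice = MIN_NICE;
	if (nice > MAX_NICE)
		nice = MAX_NICE;
	return nice;
}

int main(void)
{
	/* nice -20..19 maps onto static priorities 100..139 */
	printf("prio(-20)=%d prio(0)=%d prio(19)=%d\n",
	       NICE_TO_PRIO(-20), NICE_TO_PRIO(0), NICE_TO_PRIO(19));
	printf("nice(120)=%d clamp(42)=%ld\n",
	       PRIO_TO_NICE(120), clamp_nice(42));
	return 0;
}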
- */ -#define USER_PRIO(p) ((p)-MAX_RT_PRIO) -#define TASK_USER_PRIO(p) USER_PRIO((p)->static_prio) -#define MAX_USER_PRIO (USER_PRIO(MAX_PRIO)) - -/* * Helpers for converting nanosecond timing to jiffy resolution */ #define NS_TO_JIFFIES(TIME) ((unsigned long)(TIME) / (NSEC_PER_SEC / HZ)) @@ -441,6 +423,18 @@ struct rt_rq { #endif }; +#ifdef CONFIG_RT_GROUP_SCHED +static inline int rt_rq_throttled(struct rt_rq *rt_rq) +{ + return rt_rq->rt_throttled && !rt_rq->rt_nr_boosted; +} +#else +static inline int rt_rq_throttled(struct rt_rq *rt_rq) +{ + return rt_rq->rt_throttled; +} +#endif + /* Deadline class' related fields in a runqueue */ struct dl_rq { /* runqueue is an rbtree, ordered by deadline */ @@ -558,11 +552,9 @@ struct rq { #ifdef CONFIG_FAIR_GROUP_SCHED /* list of leaf cfs_rq on this cpu: */ struct list_head leaf_cfs_rq_list; -#endif /* CONFIG_FAIR_GROUP_SCHED */ -#ifdef CONFIG_RT_GROUP_SCHED - struct list_head leaf_rt_rq_list; -#endif + struct sched_avg avg; +#endif /* CONFIG_FAIR_GROUP_SCHED */ /* * This is part of a global counter where only the total sum @@ -651,8 +643,6 @@ struct rq { #ifdef CONFIG_SMP struct llist_head wake_list; #endif - - struct sched_avg avg; }; static inline int cpu_of(struct rq *rq) @@ -1112,6 +1102,8 @@ static const u32 prio_to_wmult[40] = { #define DEQUEUE_SLEEP 1 +#define RETRY_TASK ((void *)-1UL) + struct sched_class { const struct sched_class *next; @@ -1122,14 +1114,22 @@ struct sched_class { void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags); - struct task_struct * (*pick_next_task) (struct rq *rq); + /* + * It is the responsibility of the pick_next_task() method that will + * return the next task to call put_prev_task() on the @prev task or + * something equivalent. + * + * May return RETRY_TASK when it finds a higher prio class has runnable + * tasks. 
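The kerneldoc above states the new contract: whichever class's pick_next_task() actually returns a task must itself put the previous task (or do something equivalent), while a class that returns NULL must leave prev untouched for the next class in the loop. A minimal sketch of a conforming picker, loosely modeled on pick_next_task_stop() further below (toy types, not the kernel's):

#include <stdio.h>

struct toy_rq;

struct toy_task {
	const char *name;
	int on_rq;
	void (*put_prev)(struct toy_rq *rq, struct toy_task *prev);
};

static void put_prev_generic(struct toy_rq *rq, struct toy_task *prev)
{
	printf("put_prev(%s)\n", prev->name);
}

static struct toy_task *toy_pick_next(struct toy_rq *rq,
				      struct toy_task *candidate,
				      struct toy_task *prev)
{
	if (!candidate || !candidate->on_rq)
		return NULL;		/* prev untouched, next class decides */

	prev->put_prev(rq, prev);	/* our pick stands, so we put prev */
	return candidate;
}

int main(void)
{
	struct toy_task prev = { "prev", 1, put_prev_generic };
	struct toy_task stop = { "stop", 1, put_prev_generic };
	struct toy_task *next = toy_pick_next(NULL, &stop, &prev);

	printf("next=%s\n", next ? next->name : "(none)");
	return 0;
}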
+ */ + struct task_struct * (*pick_next_task) (struct rq *rq, + struct task_struct *prev); void (*put_prev_task) (struct rq *rq, struct task_struct *p); #ifdef CONFIG_SMP int (*select_task_rq)(struct task_struct *p, int task_cpu, int sd_flag, int flags); void (*migrate_task_rq)(struct task_struct *p, int next_cpu); - void (*pre_schedule) (struct rq *this_rq, struct task_struct *task); void (*post_schedule) (struct rq *this_rq); void (*task_waking) (struct task_struct *task); void (*task_woken) (struct rq *this_rq, struct task_struct *task); @@ -1159,6 +1159,11 @@ struct sched_class { #endif }; +static inline void put_prev_task(struct rq *rq, struct task_struct *prev) +{ + prev->sched_class->put_prev_task(rq, prev); +} + #define sched_class_highest (&stop_sched_class) #define for_each_class(class) \ for (class = sched_class_highest; class; class = class->next) @@ -1175,16 +1180,14 @@ extern const struct sched_class idle_sched_class; extern void update_group_power(struct sched_domain *sd, int cpu); extern void trigger_load_balance(struct rq *rq); -extern void idle_balance(int this_cpu, struct rq *this_rq); extern void idle_enter_fair(struct rq *this_rq); extern void idle_exit_fair(struct rq *this_rq); -#else /* CONFIG_SMP */ +#else -static inline void idle_balance(int cpu, struct rq *rq) -{ -} +static inline void idle_enter_fair(struct rq *rq) { } +static inline void idle_exit_fair(struct rq *rq) { } #endif diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c index fdb6bb0b3356..d6ce65dde541 100644 --- a/kernel/sched/stop_task.c +++ b/kernel/sched/stop_task.c @@ -23,16 +23,19 @@ check_preempt_curr_stop(struct rq *rq, struct task_struct *p, int flags) /* we're never preempted */ } -static struct task_struct *pick_next_task_stop(struct rq *rq) +static struct task_struct * +pick_next_task_stop(struct rq *rq, struct task_struct *prev) { struct task_struct *stop = rq->stop; - if (stop && stop->on_rq) { - stop->se.exec_start = rq_clock_task(rq); - return stop; - } + if (!stop || !stop->on_rq) + return NULL; - return NULL; + put_prev_task(rq, prev); + + stop->se.exec_start = rq_clock_task(rq); + + return stop; } static void diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c index 84571e09c907..01fbae5b97b7 100644 --- a/kernel/stop_machine.c +++ b/kernel/stop_machine.c @@ -293,7 +293,7 @@ int stop_two_cpus(unsigned int cpu1, unsigned int cpu2, cpu_stop_fn_t fn, void * */ smp_call_function_single(min(cpu1, cpu2), &irq_cpu_stop_queue_work, - &call_args, 0); + &call_args, 1); lg_local_unlock(&stop_cpus_lock); preempt_enable(); diff --git a/kernel/sys.c b/kernel/sys.c index c0a58be780a4..adaeab6f7a87 100644 --- a/kernel/sys.c +++ b/kernel/sys.c @@ -174,10 +174,10 @@ SYSCALL_DEFINE3(setpriority, int, which, int, who, int, niceval) /* normalize: avoid signed division (rounding problems) */ error = -ESRCH; - if (niceval < -20) - niceval = -20; - if (niceval > 19) - niceval = 19; + if (niceval < MIN_NICE) + niceval = MIN_NICE; + if (niceval > MAX_NICE) + niceval = MAX_NICE; rcu_read_lock(); read_lock(&tasklist_lock); diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 49e13e1f8fe6..7754ff16f334 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -386,13 +386,6 @@ static struct ctl_table kern_table[] = { .proc_handler = proc_dointvec, }, { - .procname = "numa_balancing_migrate_deferred", - .data = &sysctl_numa_balancing_migrate_deferred, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = proc_dointvec, - }, - { .procname = "numa_balancing", .data = NULL, /* filled in 
by handler */ .maxlen = sizeof(unsigned int), diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c index 0aa4ce81bc16..5b40279ecd71 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -1435,7 +1435,8 @@ void update_wall_time(void) out: raw_spin_unlock_irqrestore(&timekeeper_lock, flags); if (clock_set) - clock_was_set(); + /* Have to call _delayed version, since in irq context*/ + clock_was_set_delayed(); } /** diff --git a/kernel/torture.c b/kernel/torture.c new file mode 100644 index 000000000000..acc9afc2f26e --- /dev/null +++ b/kernel/torture.c @@ -0,0 +1,719 @@ +/* + * Common functions for in-kernel torture tests. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, you can access it online at + * http://www.gnu.org/licenses/gpl-2.0.html. + * + * Copyright (C) IBM Corporation, 2014 + * + * Author: Paul E. McKenney <paulmck@us.ibm.com> + * Based on kernel/rcu/torture.c. + */ +#include <linux/types.h> +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/module.h> +#include <linux/kthread.h> +#include <linux/err.h> +#include <linux/spinlock.h> +#include <linux/smp.h> +#include <linux/interrupt.h> +#include <linux/sched.h> +#include <linux/atomic.h> +#include <linux/bitops.h> +#include <linux/completion.h> +#include <linux/moduleparam.h> +#include <linux/percpu.h> +#include <linux/notifier.h> +#include <linux/reboot.h> +#include <linux/freezer.h> +#include <linux/cpu.h> +#include <linux/delay.h> +#include <linux/stat.h> +#include <linux/slab.h> +#include <linux/trace_clock.h> +#include <asm/byteorder.h> +#include <linux/torture.h> + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Paul E. McKenney <paulmck@us.ibm.com>"); + +static char *torture_type; +static bool verbose; + +/* Mediate rmmod and system shutdown. Concurrent rmmod & shutdown illegal! */ +#define FULLSTOP_DONTSTOP 0 /* Normal operation. */ +#define FULLSTOP_SHUTDOWN 1 /* System shutdown with torture running. */ +#define FULLSTOP_RMMOD 2 /* Normal rmmod of torture. */ +static int fullstop = FULLSTOP_RMMOD; +static DEFINE_MUTEX(fullstop_mutex); +static int *torture_runnable; + +#ifdef CONFIG_HOTPLUG_CPU + +/* + * Variables for online-offline handling. Only present if CPU hotplug + * is enabled, otherwise does nothing. + */ + +static struct task_struct *onoff_task; +static long onoff_holdoff; +static long onoff_interval; +static long n_offline_attempts; +static long n_offline_successes; +static unsigned long sum_offline; +static int min_offline = -1; +static int max_offline; +static long n_online_attempts; +static long n_online_successes; +static unsigned long sum_online; +static int min_online = -1; +static int max_online; + +/* + * Execute random CPU-hotplug operations at the interval specified + * by the onoff_interval. 
+ */ +static int +torture_onoff(void *arg) +{ + int cpu; + unsigned long delta; + int maxcpu = -1; + DEFINE_TORTURE_RANDOM(rand); + int ret; + unsigned long starttime; + + VERBOSE_TOROUT_STRING("torture_onoff task started"); + for_each_online_cpu(cpu) + maxcpu = cpu; + WARN_ON(maxcpu < 0); + if (onoff_holdoff > 0) { + VERBOSE_TOROUT_STRING("torture_onoff begin holdoff"); + schedule_timeout_interruptible(onoff_holdoff); + VERBOSE_TOROUT_STRING("torture_onoff end holdoff"); + } + while (!torture_must_stop()) { + cpu = (torture_random(&rand) >> 4) % (maxcpu + 1); + if (cpu_online(cpu) && cpu_is_hotpluggable(cpu)) { + if (verbose) + pr_alert("%s" TORTURE_FLAG + "torture_onoff task: offlining %d\n", + torture_type, cpu); + starttime = jiffies; + n_offline_attempts++; + ret = cpu_down(cpu); + if (ret) { + if (verbose) + pr_alert("%s" TORTURE_FLAG + "torture_onoff task: offline %d failed: errno %d\n", + torture_type, cpu, ret); + } else { + if (verbose) + pr_alert("%s" TORTURE_FLAG + "torture_onoff task: offlined %d\n", + torture_type, cpu); + n_offline_successes++; + delta = jiffies - starttime; + sum_offline += delta; + if (min_offline < 0) { + min_offline = delta; + max_offline = delta; + } + if (min_offline > delta) + min_offline = delta; + if (max_offline < delta) + max_offline = delta; + } + } else if (cpu_is_hotpluggable(cpu)) { + if (verbose) + pr_alert("%s" TORTURE_FLAG + "torture_onoff task: onlining %d\n", + torture_type, cpu); + starttime = jiffies; + n_online_attempts++; + ret = cpu_up(cpu); + if (ret) { + if (verbose) + pr_alert("%s" TORTURE_FLAG + "torture_onoff task: online %d failed: errno %d\n", + torture_type, cpu, ret); + } else { + if (verbose) + pr_alert("%s" TORTURE_FLAG + "torture_onoff task: onlined %d\n", + torture_type, cpu); + n_online_successes++; + delta = jiffies - starttime; + sum_online += delta; + if (min_online < 0) { + min_online = delta; + max_online = delta; + } + if (min_online > delta) + min_online = delta; + if (max_online < delta) + max_online = delta; + } + } + schedule_timeout_interruptible(onoff_interval); + } + torture_kthread_stopping("torture_onoff"); + return 0; +} + +#endif /* #ifdef CONFIG_HOTPLUG_CPU */ + +/* + * Initiate online-offline handling. + */ +int torture_onoff_init(long ooholdoff, long oointerval) +{ + int ret = 0; + +#ifdef CONFIG_HOTPLUG_CPU + onoff_holdoff = ooholdoff; + onoff_interval = oointerval; + if (onoff_interval <= 0) + return 0; + ret = torture_create_kthread(torture_onoff, NULL, onoff_task); +#endif /* #ifdef CONFIG_HOTPLUG_CPU */ + return ret; +} +EXPORT_SYMBOL_GPL(torture_onoff_init); + +/* + * Clean up after online/offline testing. + */ +static void torture_onoff_cleanup(void) +{ +#ifdef CONFIG_HOTPLUG_CPU + if (onoff_task == NULL) + return; + VERBOSE_TOROUT_STRING("Stopping torture_onoff task"); + kthread_stop(onoff_task); + onoff_task = NULL; +#endif /* #ifdef CONFIG_HOTPLUG_CPU */ +} +EXPORT_SYMBOL_GPL(torture_onoff_cleanup); + +/* + * Print online/offline testing statistics. + */ +char *torture_onoff_stats(char *page) +{ +#ifdef CONFIG_HOTPLUG_CPU + page += sprintf(page, + "onoff: %ld/%ld:%ld/%ld %d,%d:%d,%d %lu:%lu (HZ=%d) ", + n_online_successes, n_online_attempts, + n_offline_successes, n_offline_attempts, + min_online, max_online, + min_offline, max_offline, + sum_online, sum_offline, HZ); +#endif /* #ifdef CONFIG_HOTPLUG_CPU */ + return page; +} +EXPORT_SYMBOL_GPL(torture_onoff_stats); + +/* + * Were all the online/offline operations successful? 
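A client module wires the onoff API together at init time and consults torture_onoff_failures() for the final verdict. The sketch below is illustrative only; the foo_* names and the holdoff/interval values are invented, not part of this patch:

/* Sketch of a hypothetical torture client. */
static int foo_runnable = 1;

static int __init foo_torture_init(void)
{
	int ret;

	torture_init_begin("foo", true /* verbose */, &foo_runnable);
	/* Hold off hotplug for 30s, then attempt an op every 3s. */
	ret = torture_onoff_init(30 * HZ, 3 * HZ);
	torture_init_end();
	return ret;
}

static void foo_torture_cleanup(void)
{
	if (torture_cleanup())
		return; /* Raced with system shutdown. */
	if (torture_onoff_failures())
		pr_alert("foo-torture: CPU-hotplug operations failed\n");
}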
+ */ +bool torture_onoff_failures(void) +{ +#ifdef CONFIG_HOTPLUG_CPU + return n_online_successes != n_online_attempts || + n_offline_successes != n_offline_attempts; +#else /* #ifdef CONFIG_HOTPLUG_CPU */ + return false; +#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */ +} +EXPORT_SYMBOL_GPL(torture_onoff_failures); + +#define TORTURE_RANDOM_MULT 39916801 /* prime */ +#define TORTURE_RANDOM_ADD 479001701 /* prime */ +#define TORTURE_RANDOM_REFRESH 10000 + +/* + * Crude but fast random-number generator. Uses a linear congruential + * generator, with occasional help from cpu_clock(). + */ +unsigned long +torture_random(struct torture_random_state *trsp) +{ + if (--trsp->trs_count < 0) { + trsp->trs_state += (unsigned long)local_clock(); + trsp->trs_count = TORTURE_RANDOM_REFRESH; + } + trsp->trs_state = trsp->trs_state * TORTURE_RANDOM_MULT + + TORTURE_RANDOM_ADD; + return swahw32(trsp->trs_state); +} +EXPORT_SYMBOL_GPL(torture_random); + +/* + * Variables for shuffling. The idea is to ensure that each CPU stays + * idle for an extended period to test interactions with dyntick idle, + * as well as interactions with any per-CPU variables. + */ +struct shuffle_task { + struct list_head st_l; + struct task_struct *st_t; +}; + +static long shuffle_interval; /* In jiffies. */ +static struct task_struct *shuffler_task; +static cpumask_var_t shuffle_tmp_mask; +static int shuffle_idle_cpu; /* Force all torture tasks off this CPU */ +static struct list_head shuffle_task_list = LIST_HEAD_INIT(shuffle_task_list); +static DEFINE_MUTEX(shuffle_task_mutex); + +/* + * Register a task to be shuffled. If there is no memory, just splat + * and don't bother registering. + */ +void torture_shuffle_task_register(struct task_struct *tp) +{ + struct shuffle_task *stp; + + if (WARN_ON_ONCE(tp == NULL)) + return; + stp = kmalloc(sizeof(*stp), GFP_KERNEL); + if (WARN_ON_ONCE(stp == NULL)) + return; + stp->st_t = tp; + mutex_lock(&shuffle_task_mutex); + list_add(&stp->st_l, &shuffle_task_list); + mutex_unlock(&shuffle_task_mutex); +} +EXPORT_SYMBOL_GPL(torture_shuffle_task_register); + +/* + * Unregister all tasks, for example, at the end of the torture run. + */ +static void torture_shuffle_task_unregister_all(void) +{ + struct shuffle_task *stp; + struct shuffle_task *p; + + mutex_lock(&shuffle_task_mutex); + list_for_each_entry_safe(stp, p, &shuffle_task_list, st_l) { + list_del(&stp->st_l); + kfree(stp); + } + mutex_unlock(&shuffle_task_mutex); +} + +/* Shuffle tasks such that we allow shuffle_idle_cpu to become idle. + * A special case is when shuffle_idle_cpu = -1, in which case we allow + * the tasks to run on all CPUs. + */ +static void torture_shuffle_tasks(void) +{ + struct shuffle_task *stp; + + cpumask_setall(shuffle_tmp_mask); + get_online_cpus(); + + /* No point in shuffling if there is only one online CPU (ex: UP) */ + if (num_online_cpus() == 1) { + put_online_cpus(); + return; + } + + /* Advance to the next CPU. Upon overflow, don't idle any CPUs.
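torture_random() deliberately trades statistical quality for speed: a linear congruential generator whose state is periodically perturbed by local_clock(), with swahw32() swapping 16-bit halfwords so the stronger high-order LCG bits land in the low half of the result. Callers keep one state per kthread and, as in torture_onoff() above, shift off a few low-order bits before use. A sketch (foo_stress is a made-up kthread, not part of this patch):

/* Sketch: per-kthread PRNG state driving randomized delays. */
static int foo_stress(void *arg)
{
	DEFINE_TORTURE_RANDOM(rand);

	VERBOSE_TOROUT_STRING("foo_stress task started");
	do {
		/* Sleep 0..15 jiffies between operations. */
		schedule_timeout_interruptible(
				(torture_random(&rand) >> 4) & 0xf);
	} while (!torture_must_stop());
	torture_kthread_stopping("foo_stress");
	return 0;
}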
*/ + shuffle_idle_cpu = cpumask_next(shuffle_idle_cpu, shuffle_tmp_mask); + if (shuffle_idle_cpu >= nr_cpu_ids) + shuffle_idle_cpu = -1; + if (shuffle_idle_cpu != -1) { + cpumask_clear_cpu(shuffle_idle_cpu, shuffle_tmp_mask); + if (cpumask_empty(shuffle_tmp_mask)) { + put_online_cpus(); + return; + } + } + + mutex_lock(&shuffle_task_mutex); + list_for_each_entry(stp, &shuffle_task_list, st_l) + set_cpus_allowed_ptr(stp->st_t, shuffle_tmp_mask); + mutex_unlock(&shuffle_task_mutex); + + put_online_cpus(); +} + +/* Shuffle tasks across CPUs, with the intent of allowing each CPU in the + * system to become idle at a time and cut off its timer ticks. This is meant + * to test the support for such tickless idle CPU in RCU. + */ +static int torture_shuffle(void *arg) +{ + VERBOSE_TOROUT_STRING("torture_shuffle task started"); + do { + schedule_timeout_interruptible(shuffle_interval); + torture_shuffle_tasks(); + torture_shutdown_absorb("torture_shuffle"); + } while (!torture_must_stop()); + torture_kthread_stopping("torture_shuffle"); + return 0; +} + +/* + * Start the shuffler, with shuffint in jiffies. + */ +int torture_shuffle_init(long shuffint) +{ + shuffle_interval = shuffint; + + shuffle_idle_cpu = -1; + + if (!alloc_cpumask_var(&shuffle_tmp_mask, GFP_KERNEL)) { + VERBOSE_TOROUT_ERRSTRING("Failed to alloc mask"); + return -ENOMEM; + } + + /* Create the shuffler thread */ + return torture_create_kthread(torture_shuffle, NULL, shuffler_task); +} +EXPORT_SYMBOL_GPL(torture_shuffle_init); + +/* + * Stop the shuffling. + */ +static void torture_shuffle_cleanup(void) +{ + torture_shuffle_task_unregister_all(); + if (shuffler_task) { + VERBOSE_TOROUT_STRING("Stopping torture_shuffle task"); + kthread_stop(shuffler_task); + free_cpumask_var(shuffle_tmp_mask); + } + shuffler_task = NULL; +} +EXPORT_SYMBOL_GPL(torture_shuffle_cleanup); + +/* + * Variables for auto-shutdown. This allows "lights out" torture runs + * to be fully scripted. + */ +static int shutdown_secs; /* desired test duration in seconds. */ +static struct task_struct *shutdown_task; +static unsigned long shutdown_time; /* jiffies to system shutdown. */ +static void (*torture_shutdown_hook)(void); + +/* + * Absorb kthreads into a kernel function that won't return, so that + * they won't ever access module text or data again. + */ +void torture_shutdown_absorb(const char *title) +{ + while (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) { + pr_notice("torture thread %s parking due to system shutdown\n", + title); + schedule_timeout_uninterruptible(MAX_SCHEDULE_TIMEOUT); + } +} +EXPORT_SYMBOL_GPL(torture_shutdown_absorb); + +/* + * Cause the torture test to shutdown the system after the test has + * run for the time specified by the shutdown_secs parameter. + */ +static int torture_shutdown(void *arg) +{ + long delta; + unsigned long jiffies_snap; + + VERBOSE_TOROUT_STRING("torture_shutdown task started"); + jiffies_snap = jiffies; + while (ULONG_CMP_LT(jiffies_snap, shutdown_time) && + !torture_must_stop()) { + delta = shutdown_time - jiffies_snap; + if (verbose) + pr_alert("%s" TORTURE_FLAG + "torture_shutdown task: %lu jiffies remaining\n", + torture_type, delta); + schedule_timeout_interruptible(delta); + jiffies_snap = jiffies; + } + if (torture_must_stop()) { + torture_kthread_stopping("torture_shutdown"); + return 0; + } + + /* OK, shut down the system. */ + + VERBOSE_TOROUT_STRING("torture_shutdown task shutting down system"); + shutdown_task = NULL; /* Avoid self-kill deadlock. 
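torture_shutdown() above compares times with ULONG_CMP_LT() rather than a bare "<" so the test keeps working if the jiffies counter wraps. The macro is defined elsewhere in the tree; the sketch below restates the idea and should be treated as an approximation:

/* True iff @a is "before" @b in modular (wrap-safe) arithmetic. */
#define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))

/* A deadline 5 seconds out stays in the future across a wrap of
 * jiffies, which a plain "jiffies < deadline" would get wrong. */
unsigned long deadline = jiffies + 5 * HZ;

while (ULONG_CMP_LT(jiffies, deadline))
	schedule_timeout_interruptible(HZ);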
*/ + if (torture_shutdown_hook) + torture_shutdown_hook(); + else + VERBOSE_TOROUT_STRING("No torture_shutdown_hook(), skipping."); + kernel_power_off(); /* Shut down the system. */ + return 0; +} + +/* + * Start up the shutdown task. + */ +int torture_shutdown_init(int ssecs, void (*cleanup)(void)) +{ + int ret = 0; + + shutdown_secs = ssecs; + torture_shutdown_hook = cleanup; + if (shutdown_secs > 0) { + shutdown_time = jiffies + shutdown_secs * HZ; + ret = torture_create_kthread(torture_shutdown, NULL, + shutdown_task); + } + return ret; +} +EXPORT_SYMBOL_GPL(torture_shutdown_init); + +/* + * Detect and respond to a system shutdown. + */ +static int torture_shutdown_notify(struct notifier_block *unused1, + unsigned long unused2, void *unused3) +{ + mutex_lock(&fullstop_mutex); + if (ACCESS_ONCE(fullstop) == FULLSTOP_DONTSTOP) { + VERBOSE_TOROUT_STRING("Unscheduled system shutdown detected"); + ACCESS_ONCE(fullstop) = FULLSTOP_SHUTDOWN; + } else { + pr_warn("Concurrent rmmod and shutdown illegal!\n"); + } + mutex_unlock(&fullstop_mutex); + return NOTIFY_DONE; +} + +static struct notifier_block torture_shutdown_nb = { + .notifier_call = torture_shutdown_notify, +}; + +/* + * Shut down the shutdown task. Say what??? Heh! This can happen if + * the torture module gets an rmmod before the shutdown time arrives. ;-) + */ +static void torture_shutdown_cleanup(void) +{ + unregister_reboot_notifier(&torture_shutdown_nb); + if (shutdown_task != NULL) { + VERBOSE_TOROUT_STRING("Stopping torture_shutdown task"); + kthread_stop(shutdown_task); + } + shutdown_task = NULL; +} + +/* + * Variables for stuttering, which means to periodically pause and + * restart testing in order to catch bugs that appear when load is + * suddenly applied to or removed from the system. + */ +static struct task_struct *stutter_task; +static int stutter_pause_test; +static int stutter; + +/* + * Block until the stutter interval ends. This must be called periodically + * by all running kthreads that need to be subject to stuttering. + */ +void stutter_wait(const char *title) +{ + while (ACCESS_ONCE(stutter_pause_test) || + (torture_runnable && !ACCESS_ONCE(*torture_runnable))) { + if (stutter_pause_test) + schedule_timeout_interruptible(1); + else + schedule_timeout_interruptible(round_jiffies_relative(HZ)); + torture_shutdown_absorb(title); + } +} +EXPORT_SYMBOL_GPL(stutter_wait); + +/* + * Cause the torture test to "stutter", starting and stopping all + * threads periodically. + */ +static int torture_stutter(void *arg) +{ + VERBOSE_TOROUT_STRING("torture_stutter task started"); + do { + if (!torture_must_stop()) { + schedule_timeout_interruptible(stutter); + ACCESS_ONCE(stutter_pause_test) = 1; + } + if (!torture_must_stop()) + schedule_timeout_interruptible(stutter); + ACCESS_ONCE(stutter_pause_test) = 0; + torture_shutdown_absorb("torture_stutter"); + } while (!torture_must_stop()); + torture_kthread_stopping("torture_stutter"); + return 0; +} + +/* + * Initialize and kick off the torture_stutter kthread. + */ +int torture_stutter_init(int s) +{ + int ret; + + stutter = s; + ret = torture_create_kthread(torture_stutter, NULL, stutter_task); + return ret; +} +EXPORT_SYMBOL_GPL(torture_stutter_init); + +/* + * Cleanup after the torture_stutter kthread. + */ +static void torture_stutter_cleanup(void) +{ + if (!stutter_task) + return; + VERBOSE_TOROUT_STRING("Stopping torture_stutter task"); + kthread_stop(stutter_task); + stutter_task = NULL; +} + +/* + * Initialize torture module. 
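The stutter machinery above is a two-party flag protocol: torture_stutter() toggles stutter_pause_test, and every participating kthread polls it from stutter_wait(). The ACCESS_ONCE() wrappers are load-bearing; without them the compiler is free to hoist the load out of the loop, as this deliberately broken sketch illustrates:

/* Broken: the compiler may cache stutter_pause_test in a register
 * and never observe the flag being cleared. */
while (stutter_pause_test)
	schedule_timeout_interruptible(1);

/* Correct: ACCESS_ONCE() (READ_ONCE() in later kernels) forces a
 * fresh load from memory on every iteration. */
while (ACCESS_ONCE(stutter_pause_test))
	schedule_timeout_interruptible(1);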
Please note that this is -not- invoked via + * the usual module_init() mechanism, but rather by an explicit call from + * the client torture module. This call must be paired with a later + * torture_init_end(). + * + * The runnable parameter points to a flag that controls whether or not + * the test is currently runnable. If there is no such flag, pass in NULL. + */ +void __init torture_init_begin(char *ttype, bool v, int *runnable) +{ + mutex_lock(&fullstop_mutex); + torture_type = ttype; + verbose = v; + torture_runnable = runnable; + fullstop = FULLSTOP_DONTSTOP; + +} +EXPORT_SYMBOL_GPL(torture_init_begin); + +/* + * Tell the torture module that initialization is complete. + */ +void __init torture_init_end(void) +{ + mutex_unlock(&fullstop_mutex); + register_reboot_notifier(&torture_shutdown_nb); +} +EXPORT_SYMBOL_GPL(torture_init_end); + +/* + * Clean up torture module. Please note that this is -not- invoked via + * the usual module_exit() mechanism, but rather by an explicit call from + * the client torture module. Returns true if a race with system shutdown + * is detected, otherwise, all kthreads started by functions in this file + * will be shut down. + * + * This must be called before the caller starts shutting down its own + * kthreads. + */ +bool torture_cleanup(void) +{ + mutex_lock(&fullstop_mutex); + if (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) { + pr_warn("Concurrent rmmod and shutdown illegal!\n"); + mutex_unlock(&fullstop_mutex); + schedule_timeout_uninterruptible(10); + return true; + } + ACCESS_ONCE(fullstop) = FULLSTOP_RMMOD; + mutex_unlock(&fullstop_mutex); + torture_shutdown_cleanup(); + torture_shuffle_cleanup(); + torture_stutter_cleanup(); + torture_onoff_cleanup(); + return false; +} +EXPORT_SYMBOL_GPL(torture_cleanup); + +/* + * Is it time for the current torture test to stop? + */ +bool torture_must_stop(void) +{ + return torture_must_stop_irq() || kthread_should_stop(); +} +EXPORT_SYMBOL_GPL(torture_must_stop); + +/* + * Is it time for the current torture test to stop? This is the irq-safe + * version, hence no check for kthread_should_stop(). + */ +bool torture_must_stop_irq(void) +{ + return ACCESS_ONCE(fullstop) != FULLSTOP_DONTSTOP; +} +EXPORT_SYMBOL_GPL(torture_must_stop_irq); + +/* + * Each kthread must wait for kthread_should_stop() before returning from + * its top-level function, otherwise segfaults ensue. This function + * prints a "stopping" message and waits for kthread_should_stop(), and + * should be called from all torture kthreads immediately prior to + * returning. + */ +void torture_kthread_stopping(char *title) +{ + if (verbose) + VERBOSE_TOROUT_STRING(title); + while (!kthread_should_stop()) { + torture_shutdown_absorb(title); + schedule_timeout_uninterruptible(1); + } +} +EXPORT_SYMBOL_GPL(torture_kthread_stopping); + +/* + * Create a generic torture kthread that is immediately runnable. If you + * need the kthread to be stopped so that you can do something to it before + * it starts, you will need to open-code your own. + */ +int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m, + char *f, struct task_struct **tp) +{ + int ret = 0; + + VERBOSE_TOROUT_STRING(m); + *tp = kthread_run(fn, arg, s); + if (IS_ERR(*tp)) { + ret = PTR_ERR(*tp); + VERBOSE_TOROUT_ERRSTRING(f); + *tp = NULL; + } + torture_shuffle_task_register(*tp); + return ret; +} +EXPORT_SYMBOL_GPL(_torture_create_kthread); + +/* + * Stop a generic kthread, emitting a message. 
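_torture_create_kthread() and _torture_stop_kthread() take their name and message strings from wrapper macros in include/linux/torture.h rather than from callers directly. The macros are not part of this hunk; the reconstruction below is inferred from the helpers' signatures and the calls above, and may differ in detail from the real header:

#define torture_create_kthread(n, arg, tp) \
	_torture_create_kthread(n, (arg), #n, "Creating " #n " task", \
				"Failed to create " #n, &(tp))
#define torture_stop_kthread(n, tp) \
	_torture_stop_kthread("Stopping " #n " task", &(tp))

A client then writes torture_create_kthread(foo_stress, NULL, stress_task) and later torture_stop_kthread(foo_stress, stress_task), with all diagnostics derived from the kthread function's name.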
+ */ +void _torture_stop_kthread(char *m, struct task_struct **tp) +{ + if (*tp == NULL) + return; + VERBOSE_TOROUT_STRING(m); + kthread_stop(*tp); + *tp = NULL; +} +EXPORT_SYMBOL_GPL(_torture_stop_kthread); diff --git a/kernel/trace/ring_buffer_benchmark.c b/kernel/trace/ring_buffer_benchmark.c index a5457d577b98..0434ff1b808e 100644 --- a/kernel/trace/ring_buffer_benchmark.c +++ b/kernel/trace/ring_buffer_benchmark.c @@ -40,8 +40,8 @@ static int write_iteration = 50; module_param(write_iteration, uint, 0644); MODULE_PARM_DESC(write_iteration, "# of writes between timestamp readings"); -static int producer_nice = 19; -static int consumer_nice = 19; +static int producer_nice = MAX_NICE; +static int consumer_nice = MAX_NICE; static int producer_fifo = -1; static int consumer_fifo = -1; @@ -308,7 +308,7 @@ static void ring_buffer_producer(void) /* Let the user know that the test is running at low priority */ if (producer_fifo < 0 && consumer_fifo < 0 && - producer_nice == 19 && consumer_nice == 19) + producer_nice == MAX_NICE && consumer_nice == MAX_NICE) trace_printk("WARNING!!! This test is running at lowest priority.\n"); trace_printk("Time: %lld (usecs)\n", time); diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 815c878f409b..24c1f2382557 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -1600,15 +1600,31 @@ void trace_buffer_unlock_commit(struct ring_buffer *buffer, } EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit); +static struct ring_buffer *temp_buffer; + struct ring_buffer_event * trace_event_buffer_lock_reserve(struct ring_buffer **current_rb, struct ftrace_event_file *ftrace_file, int type, unsigned long len, unsigned long flags, int pc) { + struct ring_buffer_event *entry; + *current_rb = ftrace_file->tr->trace_buffer.buffer; - return trace_buffer_lock_reserve(*current_rb, + entry = trace_buffer_lock_reserve(*current_rb, type, len, flags, pc); + /* + * If tracing is off, but we have triggers enabled + * we still need to look at the event data. Use the temp_buffer + * to store the trace event for the trigger to use. It's recursion + * safe and will not be recorded anywhere. + */ + if (!entry && ftrace_file->flags & FTRACE_EVENT_FL_TRIGGER_COND) { + *current_rb = temp_buffer; + entry = trace_buffer_lock_reserve(*current_rb, + type, len, flags, pc); + } + return entry; } EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve); @@ -6494,11 +6510,16 @@ __init static int tracer_alloc_buffers(void) raw_spin_lock_init(&global_trace.start_lock); + /* Used for event triggers */ + temp_buffer = ring_buffer_alloc(PAGE_SIZE, RB_FL_OVERWRITE); + if (!temp_buffer) + goto out_free_cpumask; + /* TODO: make the number of buffers hot pluggable with CPUS */ if (allocate_trace_buffers(&global_trace, ring_buf_size) < 0) { printk(KERN_ERR "tracer: failed to allocate ring buffer!\n"); WARN_ON(1); - goto out_free_cpumask; + goto out_free_temp_buffer; } if (global_trace.buffer_disabled) @@ -6540,6 +6561,8 @@ __init static int tracer_alloc_buffers(void) return 0; +out_free_temp_buffer: + ring_buffer_free(temp_buffer); out_free_cpumask: free_percpu(global_trace.trace_buffer.data); #ifdef CONFIG_TRACER_MAX_TRACE diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c index e854f420e033..c894614de14d 100644 --- a/kernel/trace/trace_event_perf.c +++ b/kernel/trace/trace_event_perf.c @@ -31,9 +31,25 @@ static int perf_trace_event_perm(struct ftrace_event_call *tp_event, } /* The ftrace function trace is allowed only for root.
*/ - if (ftrace_event_is_function(tp_event) && - perf_paranoid_tracepoint_raw() && !capable(CAP_SYS_ADMIN)) - return -EPERM; + if (ftrace_event_is_function(tp_event)) { + if (perf_paranoid_tracepoint_raw() && !capable(CAP_SYS_ADMIN)) + return -EPERM; + + /* + * We don't allow user-space callchains for the function trace + * event, due to issues with page faults while tracing the page + * fault handler, and due to its overall trickiness. + */ + if (!p_event->attr.exclude_callchain_user) + return -EINVAL; + + /* + * Same reason to disable user stack dump as for user space + * callchains above. + */ + if (p_event->attr.sample_type & PERF_SAMPLE_STACK_USER) + return -EINVAL; + } /* No tracing, just counting, so no obvious leak */ if (!(p_event->attr.sample_type & PERF_SAMPLE_RAW)) diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c index f3989ceb5cd5..7b16d40bd64d 100644 --- a/kernel/trace/trace_events.c +++ b/kernel/trace/trace_events.c @@ -27,12 +27,6 @@ DEFINE_MUTEX(event_mutex); -DEFINE_MUTEX(event_storage_mutex); -EXPORT_SYMBOL_GPL(event_storage_mutex); - -char event_storage[EVENT_STORAGE_SIZE]; -EXPORT_SYMBOL_GPL(event_storage); - LIST_HEAD(ftrace_events); static LIST_HEAD(ftrace_common_fields); diff --git a/kernel/trace/trace_export.c b/kernel/trace/trace_export.c index 7c3e3e72e2b6..ee0a5098ac43 100644 --- a/kernel/trace/trace_export.c +++ b/kernel/trace/trace_export.c @@ -95,15 +95,12 @@ static void __always_unused ____ftrace_check_##name(void) \ #undef __array #define __array(type, item, len) \ do { \ + char *type_str = #type"["__stringify(len)"]"; \ BUILD_BUG_ON(len > MAX_FILTER_STR_VAL); \ - mutex_lock(&event_storage_mutex); \ - snprintf(event_storage, sizeof(event_storage), \ - "%s[%d]", #type, len); \ - ret = trace_define_field(event_call, event_storage, #item, \ + ret = trace_define_field(event_call, type_str, #item, \ offsetof(typeof(field), item), \ sizeof(field.item), \ is_signed_type(type), filter_type); \ - mutex_unlock(&event_storage_mutex); \ if (ret) \ return ret; \ } while (0); diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 193e977a10ea..3fa5b8f3aae3 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -3225,7 +3225,7 @@ static ssize_t wq_nice_store(struct device *dev, struct device_attribute *attr, return -ENOMEM; if (sscanf(buf, "%d", &attrs->nice) == 1 && - attrs->nice >= -20 && attrs->nice <= 19) + attrs->nice >= MIN_NICE && attrs->nice <= MAX_NICE) ret = apply_workqueue_attrs(wq, attrs); else ret = -EINVAL; diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index a48abeac753f..dd7f8858188a 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -980,6 +980,21 @@ config DEBUG_LOCKING_API_SELFTESTS The following locking APIs are covered: spinlocks, rwlocks, mutexes and rwsems. +config LOCK_TORTURE_TEST + tristate "torture tests for locking" + depends on DEBUG_KERNEL + select TORTURE_TEST + default n + help + This option provides a kernel module that runs torture tests + on kernel locking primitives. The kernel module may be built + after the fact on the running kernel to be tested, if desired. + + Say Y here if you want kernel locking-primitive torture tests + to be built into the kernel. + Say M if you want these torture tests to build as a module. + Say N if you are unsure. + endmenu # lock debugging config TRACE_IRQFLAGS @@ -1141,9 +1156,14 @@ config SPARSE_RCU_POINTER Say N if you are unsure.
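The -20/19 to MIN_NICE/MAX_NICE conversions scattered through this series (kernel/sys.c, kernel/workqueue.c, the ring-buffer benchmark) are pure renamings: the numeric limits are unchanged. The open-coded range checks are equivalent to a clamp, as in this sketch (header locations assumed):

#include <linux/kernel.h>	/* clamp() */
#include <linux/sched.h>	/* MIN_NICE (-20), MAX_NICE (19) */

/* Equivalent to the two if-statements in setpriority() above. */
static int sanitize_nice(int niceval)
{
	return clamp(niceval, MIN_NICE, MAX_NICE);
}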
+config TORTURE_TEST + tristate + default n + config RCU_TORTURE_TEST tristate "torture tests for RCU" depends on DEBUG_KERNEL + select TORTURE_TEST default n help This option provides a kernel module that runs torture tests diff --git a/lib/fonts/Kconfig b/lib/fonts/Kconfig index 4dc1b990aa23..34fd931b54b5 100644 --- a/lib/fonts/Kconfig +++ b/lib/fonts/Kconfig @@ -9,7 +9,7 @@ if FONT_SUPPORT config FONTS bool "Select compiled-in fonts" - depends on FRAMEBUFFER_CONSOLE + depends on FRAMEBUFFER_CONSOLE || STI_CONSOLE help Say Y here if you would like to use fonts other than the default your frame buffer console usually use. @@ -22,7 +22,7 @@ config FONTS config FONT_8x8 bool "VGA 8x8 font" if FONTS - depends on FRAMEBUFFER_CONSOLE + depends on FRAMEBUFFER_CONSOLE || STI_CONSOLE default y if !SPARC && !FONTS help This is the "high resolution" font for the VGA frame buffer (the one @@ -45,7 +45,7 @@ config FONT_8x16 config FONT_6x11 bool "Mac console 6x11 font (not supported by all drivers)" if FONTS - depends on FRAMEBUFFER_CONSOLE + depends on FRAMEBUFFER_CONSOLE || STI_CONSOLE default y if !SPARC && !FONTS && MAC help Small console font with Macintosh-style high-half glyphs. Some Mac diff --git a/lib/random32.c b/lib/random32.c index 1e5b2df44291..614896778700 100644 --- a/lib/random32.c +++ b/lib/random32.c @@ -244,8 +244,19 @@ static void __prandom_reseed(bool late) static bool latch = false; static DEFINE_SPINLOCK(lock); + /* Asking for random bytes might result in bytes getting + * moved into the nonblocking pool and thus marking it + * as initialized. In this case we would double back into + * this function and attempt to do a late reseed. + * Ignore the pointless attempt to reseed again if we're + * already waiting for bytes when the nonblocking pool + * got initialized. + */ + /* only allow initial seeding (late == false) once */ - spin_lock_irqsave(&lock, flags); + if (!spin_trylock_irqsave(&lock, flags)) + return; + if (latch && !late) goto out; latch = true; diff --git a/mm/Kconfig b/mm/Kconfig index 2d9f1504d75e..2888024e0b0a 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -575,5 +575,5 @@ config PGTABLE_MAPPING then you should select this. This causes zsmalloc to use page table mapping rather than copying for object mapping. - You can check speed with zsmalloc benchmark[1]. - [1] https://github.com/spartacus06/zsmalloc + You can check speed with zsmalloc benchmark: + https://github.com/spartacus06/zsmapbench diff --git a/mm/compaction.c b/mm/compaction.c index b48c5259ea33..918577595ea8 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -251,7 +251,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, { int nr_scanned = 0, total_isolated = 0; struct page *cursor, *valid_page = NULL; - unsigned long nr_strict_required = end_pfn - blockpfn; unsigned long flags; bool locked = false; @@ -264,11 +263,12 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, nr_scanned++; if (!pfn_valid_within(blockpfn)) - continue; + goto isolate_fail; + if (!valid_page) valid_page = page; if (!PageBuddy(page)) - continue; + goto isolate_fail; /* * The zone lock must be held to isolate freepages. 
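The isolate_freepages_block() rewrite beginning above and continued in the next hunk funnels every per-page failure through a single isolate_fail label, so strict callers abort on the first failure while best-effort callers keep scanning. Abstracted, the control flow looks like the sketch below (page_is_usable() is a stand-in predicate, not a kernel function):

for (; blockpfn < end_pfn; blockpfn++, cursor++) {
	if (!page_is_usable(page)) /* stand-in for the real checks */
		goto isolate_fail;
	/* ... isolate the page and update total_isolated ... */
	continue;

isolate_fail:
	if (strict)
		break;	/* CMA needs the whole range or nothing. */
	else
		continue;
}

/* Strict mode now detects partial failure positionally, which is why
 * the nr_strict_required counter could be deleted: */
if (strict && blockpfn < end_pfn)
	total_isolated = 0;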
@@ -289,12 +289,10 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, /* Recheck this is a buddy page under lock */ if (!PageBuddy(page)) - continue; + goto isolate_fail; /* Found a free page, break it into order-0 pages */ isolated = split_free_page(page); - if (!isolated && strict) - break; total_isolated += isolated; for (i = 0; i < isolated; i++) { list_add(&page->lru, freelist); @@ -305,7 +303,15 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, if (isolated) { blockpfn += isolated - 1; cursor += isolated - 1; + continue; } + +isolate_fail: + if (strict) + break; + else + continue; + } trace_mm_compaction_isolate_freepages(nr_scanned, total_isolated); @@ -315,7 +321,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, * pages requested were isolated. If there were any failures, 0 is * returned and CMA will fail. */ - if (strict && nr_strict_required > total_isolated) + if (strict && blockpfn < end_pfn) total_isolated = 0; if (locked) diff --git a/mm/fremap.c b/mm/fremap.c index bbc4d660221a..34feba60a17e 100644 --- a/mm/fremap.c +++ b/mm/fremap.c @@ -23,28 +23,44 @@ #include "internal.h" +static int mm_counter(struct page *page) +{ + return PageAnon(page) ? MM_ANONPAGES : MM_FILEPAGES; +} + static void zap_pte(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { pte_t pte = *ptep; + struct page *page; + swp_entry_t entry; if (pte_present(pte)) { - struct page *page; - flush_cache_page(vma, addr, pte_pfn(pte)); pte = ptep_clear_flush(vma, addr, ptep); page = vm_normal_page(vma, addr, pte); if (page) { if (pte_dirty(pte)) set_page_dirty(page); + update_hiwater_rss(mm); + dec_mm_counter(mm, mm_counter(page)); page_remove_rmap(page); page_cache_release(page); + } + } else { /* zap_pte() is not called when pte_none() */ + if (!pte_file(pte)) { update_hiwater_rss(mm); - dec_mm_counter(mm, MM_FILEPAGES); + entry = pte_to_swp_entry(pte); + if (non_swap_entry(entry)) { + if (is_migration_entry(entry)) { + page = migration_entry_to_page(entry); + dec_mm_counter(mm, mm_counter(page)); + } + } else { + free_swap_and_cache(entry); + dec_mm_counter(mm, MM_SWAPENTS); + } } - } else { - if (!pte_file(pte)) - free_swap_and_cache(pte_to_swp_entry(pte)); pte_clear_not_present_full(mm, addr, ptep, 0); } } diff --git a/mm/mempolicy.c b/mm/mempolicy.c index ae3c8f3595d4..f520b9da9c1f 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2301,35 +2301,6 @@ static void sp_free(struct sp_node *n) kmem_cache_free(sn_cache, n); } -#ifdef CONFIG_NUMA_BALANCING -static bool numa_migrate_deferred(struct task_struct *p, int last_cpupid) -{ - /* Never defer a private fault */ - if (cpupid_match_pid(p, last_cpupid)) - return false; - - if (p->numa_migrate_deferred) { - p->numa_migrate_deferred--; - return true; - } - return false; -} - -static inline void defer_numa_migrate(struct task_struct *p) -{ - p->numa_migrate_deferred = sysctl_numa_balancing_migrate_deferred; -} -#else -static inline bool numa_migrate_deferred(struct task_struct *p, int last_cpupid) -{ - return false; -} - -static inline void defer_numa_migrate(struct task_struct *p) -{ -} -#endif /* CONFIG_NUMA_BALANCING */ - /** * mpol_misplaced - check whether current page node is valid in policy * @@ -2403,52 +2374,9 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long /* Migrate the page towards the node whose CPU is referencing it */ if (pol->flags & MPOL_F_MORON) { - int last_cpupid; - int this_cpupid; - polnid = 
thisnid; - this_cpupid = cpu_pid_to_cpupid(thiscpu, current->pid); - - /* - * Multi-stage node selection is used in conjunction - * with a periodic migration fault to build a temporal - * task<->page relation. By using a two-stage filter we - * remove short/unlikely relations. - * - * Using P(p) ~ n_p / n_t as per frequentist - * probability, we can equate a task's usage of a - * particular page (n_p) per total usage of this - * page (n_t) (in a given time-span) to a probability. - * - * Our periodic faults will sample this probability and - * getting the same result twice in a row, given these - * samples are fully independent, is then given by - * P(n)^2, provided our sample period is sufficiently - * short compared to the usage pattern. - * - * This quadric squishes small probabilities, making - * it less likely we act on an unlikely task<->page - * relation. - */ - last_cpupid = page_cpupid_xchg_last(page, this_cpupid); - if (!cpupid_pid_unset(last_cpupid) && cpupid_to_nid(last_cpupid) != thisnid) { - /* See sysctl_numa_balancing_migrate_deferred comment */ - if (!cpupid_match_pid(current, last_cpupid)) - defer_numa_migrate(current); - - goto out; - } - - /* - * The quadratic filter above reduces extraneous migration - * of shared pages somewhat. This code reduces it even more, - * reducing the overhead of page migrations of shared pages. - * This makes workloads with shared pages rely more on - * "move task near its memory", and less on "move memory - * towards its task", which is exactly what we want. - */ - if (numa_migrate_deferred(current, last_cpupid)) + if (!should_numa_migrate_memory(current, page, curnid, thiscpu)) goto out; } diff --git a/mm/migrate.c b/mm/migrate.c index 482a33d89134..bed48809e5d0 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -178,6 +178,37 @@ out: } /* + * Congratulations to trinity for discovering this bug. + * mm/fremap.c's remap_file_pages() accepts any range within a single vma to + * convert that vma to VM_NONLINEAR; and generic_file_remap_pages() will then + * replace the specified range by file ptes throughout (maybe populated after). + * If page migration finds a page within that range, while it's still located + * by vma_interval_tree rather than lost to i_mmap_nonlinear list, no problem: + * zap_pte() clears the temporary migration entry before mmap_sem is dropped. + * But if the migrating page is in a part of the vma outside the range to be + * remapped, then it will not be cleared, and remove_migration_ptes() needs to + * deal with it. Fortunately, this part of the vma is of course still linear, + * so we just need to use linear location on the nonlinear list. + */ +static int remove_linear_migration_ptes_from_nonlinear(struct page *page, + struct address_space *mapping, void *arg) +{ + struct vm_area_struct *vma; + /* hugetlbfs does not support remap_pages, so no huge pgoff worries */ + pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT); + unsigned long addr; + + list_for_each_entry(vma, + &mapping->i_mmap_nonlinear, shared.nonlinear) { + + addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT); + if (addr >= vma->vm_start && addr < vma->vm_end) + remove_migration_pte(page, vma, addr, arg); + } + return SWAP_AGAIN; +} + +/* * Get rid of all migration entries and replace them by * references to the indicated page. 
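The address arithmetic in remove_linear_migration_ptes_from_nonlinear() above recovers where a page would sit if the VMA were still linearly mapped. A worked instance of the formula, with assumed example values:

/* addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT)
 *
 * With 4 KiB pages (PAGE_SHIFT = 12), vm_start = 0x400000,
 * vm_pgoff = 0x10, and a page at file index pgoff = 0x18:
 *
 *	addr = 0x400000 + ((0x18 - 0x10) << 12)
 *	     = 0x400000 + 0x8000
 *	     = 0x408000
 *
 * The subsequent range check (addr >= vma->vm_start && addr <
 * vma->vm_end) discards pages whose linear position falls outside
 * this VMA, exactly the "part of the vma outside the range" case the
 * comment above describes. */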
*/ @@ -186,6 +217,7 @@ static void remove_migration_ptes(struct page *old, struct page *new) struct rmap_walk_control rwc = { .rmap_one = remove_migration_pte, .arg = old, + .file_nonlinear = remove_linear_migration_ptes_from_nonlinear, }; rmap_walk(new, &rwc); @@ -1158,7 +1190,7 @@ static struct page *new_page_node(struct page *p, unsigned long private, pm->node); else return alloc_pages_exact_node(pm->node, - GFP_HIGHUSER_MOVABLE | GFP_THISNODE, 0); + GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, 0); } /* @@ -1544,9 +1576,9 @@ static struct page *alloc_misplaced_dst_page(struct page *page, struct page *newpage; newpage = alloc_pages_exact_node(nid, - (GFP_HIGHUSER_MOVABLE | GFP_THISNODE | - __GFP_NOMEMALLOC | __GFP_NORETRY | - __GFP_NOWARN) & + (GFP_HIGHUSER_MOVABLE | + __GFP_THISNODE | __GFP_NOMEMALLOC | + __GFP_NORETRY | __GFP_NOWARN) & ~GFP_IOFS, 0); return newpage; @@ -1747,7 +1779,8 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm, goto out_dropref; new_page = alloc_pages_node(node, - (GFP_TRANSHUGE | GFP_THISNODE) & ~__GFP_WAIT, HPAGE_PMD_ORDER); + (GFP_TRANSHUGE | __GFP_THISNODE) & ~__GFP_WAIT, + HPAGE_PMD_ORDER); if (!new_page) goto out_fail; diff --git a/mm/rmap.c b/mm/rmap.c index d9d42316a99a..8fc049f9a5a6 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1360,8 +1360,9 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount, } static int try_to_unmap_nonlinear(struct page *page, - struct address_space *mapping, struct vm_area_struct *vma) + struct address_space *mapping, void *arg) { + struct vm_area_struct *vma; int ret = SWAP_AGAIN; unsigned long cursor; unsigned long max_nl_cursor = 0; @@ -1663,7 +1664,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc) if (list_empty(&mapping->i_mmap_nonlinear)) goto done; - ret = rwc->file_nonlinear(page, mapping, vma); + ret = rwc->file_nonlinear(page, mapping, rwc->arg); done: mutex_unlock(&mapping->i_mmap_mutex); diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c index ec9909935fb6..175273f38cb1 100644 --- a/net/8021q/vlan.c +++ b/net/8021q/vlan.c @@ -307,9 +307,11 @@ static void vlan_sync_address(struct net_device *dev, static void vlan_transfer_features(struct net_device *dev, struct net_device *vlandev) { + struct vlan_dev_priv *vlan = vlan_dev_priv(vlandev); + vlandev->gso_max_size = dev->gso_max_size; - if (dev->features & NETIF_F_HW_VLAN_CTAG_TX) + if (vlan_hw_offload_capable(dev->features, vlan->vlan_proto)) vlandev->hard_header_len = dev->hard_header_len; else vlandev->hard_header_len = dev->hard_header_len + VLAN_HLEN; diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index de51c48c4393..27bfe2f8e2de 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -538,6 +538,9 @@ static int vlan_passthru_hard_header(struct sk_buff *skb, struct net_device *dev struct vlan_dev_priv *vlan = vlan_dev_priv(dev); struct net_device *real_dev = vlan->real_dev; + if (saddr == NULL) + saddr = dev->dev_addr; + return dev_hard_header(skb, real_dev, type, daddr, saddr, len); } @@ -575,6 +578,9 @@ static int vlan_dev_init(struct net_device *dev) dev->features |= real_dev->vlan_features | NETIF_F_LLTX; dev->gso_max_size = real_dev->gso_max_size; + if (dev->features & NETIF_F_VLAN_FEATURES) + netdev_warn(real_dev, "VLAN features are set incorrectly. 
Q-in-Q configurations may not work correctly.\n"); + /* ipv6 shared card related stuff */ dev->dev_id = real_dev->dev_id; @@ -589,7 +595,8 @@ static int vlan_dev_init(struct net_device *dev) #endif dev->needed_headroom = real_dev->needed_headroom; - if (real_dev->features & NETIF_F_HW_VLAN_CTAG_TX) { + if (vlan_hw_offload_capable(real_dev->features, + vlan_dev_priv(dev)->vlan_proto)) { dev->header_ops = &vlan_passthru_header_ops; dev->hard_header_len = real_dev->hard_header_len; } else { diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c index 63f0455c0bc3..8fe8b71b487a 100644 --- a/net/bridge/br_device.c +++ b/net/bridge/br_device.c @@ -49,14 +49,14 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) brstats->tx_bytes += skb->len; u64_stats_update_end(&brstats->syncp); - if (!br_allowed_ingress(br, br_get_vlan_info(br), skb, &vid)) - goto out; - BR_INPUT_SKB_CB(skb)->brdev = dev; skb_reset_mac_header(skb); skb_pull(skb, ETH_HLEN); + if (!br_allowed_ingress(br, br_get_vlan_info(br), skb, &vid)) + goto out; + if (is_broadcast_ether_addr(dest)) br_flood_deliver(br, skb, false); else if (is_multicast_ether_addr(dest)) { diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index 28d544627422..d0cca3c65f01 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -29,6 +29,7 @@ static int br_pass_frame_up(struct sk_buff *skb) struct net_device *indev, *brdev = BR_INPUT_SKB_CB(skb)->brdev; struct net_bridge *br = netdev_priv(brdev); struct pcpu_sw_netstats *brstats = this_cpu_ptr(br->stats); + struct net_port_vlans *pv; u64_stats_update_begin(&brstats->syncp); brstats->rx_packets++; @@ -39,18 +40,18 @@ static int br_pass_frame_up(struct sk_buff *skb) * packet is allowed except in promisc mode when someone * may be running packet capture.
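vlan_hw_offload_capable(), relied on by the two vlan hunks above, is not itself shown in this diff. Reconstructed from its call sites, it is a per-protocol check of the TX tag-insertion features, roughly:

/* Approximation of the helper; the in-tree version may differ. */
static inline bool vlan_hw_offload_capable(netdev_features_t features,
					   __be16 proto)
{
	if (proto == htons(ETH_P_8021Q) &&
	    (features & NETIF_F_HW_VLAN_CTAG_TX))
		return true;
	if (proto == htons(ETH_P_8021AD) &&
	    (features & NETIF_F_HW_VLAN_STAG_TX))
		return true;
	return false;
}

This is what lets 802.1ad (Q-in-Q) devices fall back to software tagging instead of wrongly assuming CTAG hardware offload applies.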
*/ + pv = br_get_vlan_info(br); if (!(brdev->flags & IFF_PROMISC) && - !br_allowed_egress(br, br_get_vlan_info(br), skb)) { + !br_allowed_egress(br, pv, skb)) { kfree_skb(skb); return NET_RX_DROP; } - skb = br_handle_vlan(br, br_get_vlan_info(br), skb); - if (!skb) - return NET_RX_DROP; - indev = skb->dev; skb->dev = brdev; + skb = br_handle_vlan(br, pv, skb); + if (!skb) + return NET_RX_DROP; return NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_IN, skb, indev, NULL, netif_receive_skb); diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index ef66365b7354..93067ecdb9a2 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -1127,9 +1127,10 @@ static void br_multicast_query_received(struct net_bridge *br, struct net_bridge_port *port, struct bridge_mcast_querier *querier, int saddr, + bool is_general_query, unsigned long max_delay) { - if (saddr) + if (saddr && is_general_query) br_multicast_update_querier_timer(br, querier, max_delay); else if (timer_pending(&querier->timer)) return; @@ -1181,8 +1182,16 @@ static int br_ip4_multicast_query(struct net_bridge *br, IGMPV3_MRC(ih3->code) * (HZ / IGMP_TIMER_SCALE) : 1; } + /* RFC2236+RFC3376 (IGMPv2+IGMPv3) require the multicast link layer + * all-systems destination addresses (224.0.0.1) for general queries + */ + if (!group && iph->daddr != htonl(INADDR_ALLHOSTS_GROUP)) { + err = -EINVAL; + goto out; + } + br_multicast_query_received(br, port, &br->ip4_querier, !!iph->saddr, - max_delay); + !group, max_delay); if (!group) goto out; @@ -1228,6 +1237,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, unsigned long max_delay; unsigned long now = jiffies; const struct in6_addr *group = NULL; + bool is_general_query; int err = 0; spin_lock(&br->multicast_lock); @@ -1235,6 +1245,12 @@ static int br_ip6_multicast_query(struct net_bridge *br, (port && port->state == BR_STATE_DISABLED)) goto out; + /* RFC2710+RFC3810 (MLDv1+MLDv2) require link-local source addresses */ + if (!(ipv6_addr_type(&ip6h->saddr) & IPV6_ADDR_LINKLOCAL)) { + err = -EINVAL; + goto out; + } + if (skb->len == sizeof(*mld)) { if (!pskb_may_pull(skb, sizeof(*mld))) { err = -EINVAL; @@ -1256,8 +1272,19 @@ static int br_ip6_multicast_query(struct net_bridge *br, max_delay = max(msecs_to_jiffies(mldv2_mrc(mld2q)), 1UL); } + is_general_query = group && ipv6_addr_any(group); + + /* RFC2710+RFC3810 (MLDv1+MLDv2) require the multicast link layer + * all-nodes destination address (ff02::1) for general queries + */ + if (is_general_query && !ipv6_addr_is_ll_all_nodes(&ip6h->daddr)) { + err = -EINVAL; + goto out; + } + br_multicast_query_received(br, port, &br->ip6_querier, - !ipv6_addr_any(&ip6h->saddr), max_delay); + !ipv6_addr_any(&ip6h->saddr), + is_general_query, max_delay); if (!group) goto out; diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index 8249ca764c79..f23c74b3a953 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -119,22 +119,6 @@ static void __vlan_flush(struct net_port_vlans *v) kfree_rcu(v, rcu); } -/* Strip the tag from the packet. Will return skb with tci set 0. 
*/ -static struct sk_buff *br_vlan_untag(struct sk_buff *skb) -{ - if (skb->protocol != htons(ETH_P_8021Q)) { - skb->vlan_tci = 0; - return skb; - } - - skb->vlan_tci = 0; - skb = vlan_untag(skb); - if (skb) - skb->vlan_tci = 0; - - return skb; -} - struct sk_buff *br_handle_vlan(struct net_bridge *br, const struct net_port_vlans *pv, struct sk_buff *skb) @@ -144,13 +128,27 @@ struct sk_buff *br_handle_vlan(struct net_bridge *br, if (!br->vlan_enabled) goto out; + /* Vlan filter table must be configured at this point. The + * only exception is the bridge is set in promisc mode and the + * packet is destined for the bridge device. In this case + * pass the packet as is. + */ + if (!pv) { + if ((br->dev->flags & IFF_PROMISC) && skb->dev == br->dev) { + goto out; + } else { + kfree_skb(skb); + return NULL; + } + } + /* At this point, we know that the frame was filtered and contains * a valid vlan id. If the vlan id is set in the untagged bitmap, * send untagged; otherwise, send tagged. */ br_vlan_get_tag(skb, &vid); if (test_bit(vid, pv->untagged_bitmap)) - skb = br_vlan_untag(skb); + skb->vlan_tci = 0; out: return skb; @@ -174,6 +172,18 @@ bool br_allowed_ingress(struct net_bridge *br, struct net_port_vlans *v, if (!v) return false; + /* If vlan tx offload is disabled on bridge device and frame was + * sent from vlan device on the bridge device, it does not have + * HW accelerated vlan tag. + */ + if (unlikely(!vlan_tx_tag_present(skb) && + (skb->protocol == htons(ETH_P_8021Q) || + skb->protocol == htons(ETH_P_8021AD)))) { + skb = vlan_untag(skb); + if (unlikely(!skb)) + return false; + } + err = br_vlan_get_tag(skb, vid); if (!*vid) { u16 pvid = br_get_pvid(v); diff --git a/net/core/dev.c b/net/core/dev.c index b1b0c8d4d7df..45fa2f11f84d 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -2286,7 +2286,7 @@ out: } EXPORT_SYMBOL(skb_checksum_help); -__be16 skb_network_protocol(struct sk_buff *skb) +__be16 skb_network_protocol(struct sk_buff *skb, int *depth) { __be16 type = skb->protocol; int vlan_depth = ETH_HLEN; @@ -2313,6 +2313,8 @@ __be16 skb_network_protocol(struct sk_buff *skb) vlan_depth += VLAN_HLEN; } + *depth = vlan_depth; + return type; } @@ -2326,12 +2328,13 @@ struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb, { struct sk_buff *segs = ERR_PTR(-EPROTONOSUPPORT); struct packet_offload *ptype; - __be16 type = skb_network_protocol(skb); + int vlan_depth = skb->mac_len; + __be16 type = skb_network_protocol(skb, &vlan_depth); if (unlikely(!type)) return ERR_PTR(-EINVAL); - __skb_pull(skb, skb->mac_len); + __skb_pull(skb, vlan_depth); rcu_read_lock(); list_for_each_entry_rcu(ptype, &offload_base, list) { @@ -2498,8 +2501,10 @@ static netdev_features_t harmonize_features(struct sk_buff *skb, const struct net_device *dev, netdev_features_t features) { + int tmp; + if (skb->ip_summed != CHECKSUM_NONE && - !can_checksum_protocol(features, skb_network_protocol(skb))) { + !can_checksum_protocol(features, skb_network_protocol(skb, &tmp))) { features &= ~NETIF_F_ALL_CSUM; } else if (illegal_highdma(dev, skb)) { features &= ~NETIF_F_SG; diff --git a/net/core/netpoll.c b/net/core/netpoll.c index a664f7829a6d..df9e6b1a9759 100644 --- a/net/core/netpoll.c +++ b/net/core/netpoll.c @@ -742,7 +742,7 @@ static bool pkt_is_ns(struct sk_buff *skb) struct nd_msg *msg; struct ipv6hdr *hdr; - if (skb->protocol != htons(ETH_P_ARP)) + if (skb->protocol != htons(ETH_P_IPV6)) return false; if (!pskb_may_pull(skb, sizeof(struct ipv6hdr) + sizeof(struct nd_msg))) return false; diff --git 
a/net/core/rtnetlink.c b/net/core/rtnetlink.c index 1a0dac2ef9ad..120eecc0f5a4 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -2121,12 +2121,13 @@ EXPORT_SYMBOL(rtmsg_ifinfo); static int nlmsg_populate_fdb_fill(struct sk_buff *skb, struct net_device *dev, u8 *addr, u32 pid, u32 seq, - int type, unsigned int flags) + int type, unsigned int flags, + int nlflags) { struct nlmsghdr *nlh; struct ndmsg *ndm; - nlh = nlmsg_put(skb, pid, seq, type, sizeof(*ndm), NLM_F_MULTI); + nlh = nlmsg_put(skb, pid, seq, type, sizeof(*ndm), nlflags); if (!nlh) return -EMSGSIZE; @@ -2164,7 +2165,7 @@ static void rtnl_fdb_notify(struct net_device *dev, u8 *addr, int type) if (!skb) goto errout; - err = nlmsg_populate_fdb_fill(skb, dev, addr, 0, 0, type, NTF_SELF); + err = nlmsg_populate_fdb_fill(skb, dev, addr, 0, 0, type, NTF_SELF, 0); if (err < 0) { kfree_skb(skb); goto errout; @@ -2389,7 +2390,8 @@ static int nlmsg_populate_fdb(struct sk_buff *skb, err = nlmsg_populate_fdb_fill(skb, dev, ha->addr, portid, seq, - RTM_NEWNEIGH, NTF_SELF); + RTM_NEWNEIGH, NTF_SELF, + NLM_F_MULTI); if (err < 0) return err; skip: diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 5d6236d9fdce..90b96a11b974 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -2127,25 +2127,31 @@ EXPORT_SYMBOL_GPL(skb_zerocopy_headlen); * * The `hlen` as calculated by skb_zerocopy_headlen() specifies the * headroom in the `to` buffer. + * + * Return value: + * 0: everything is OK + * -ENOMEM: couldn't orphan frags of @from due to lack of memory + * -EFAULT: skb_copy_bits() found some problem with skb geometry */ -void -skb_zerocopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) +int +skb_zerocopy(struct sk_buff *to, struct sk_buff *from, int len, int hlen) { int i, j = 0; int plen = 0; /* length of skb->head fragment */ + int ret; struct page *page; unsigned int offset; BUG_ON(!from->head_frag && !hlen); /* dont bother with small payloads */ - if (len <= skb_tailroom(to)) { - skb_copy_bits(from, 0, skb_put(to, len), len); - return; - } + if (len <= skb_tailroom(to)) + return skb_copy_bits(from, 0, skb_put(to, len), len); if (hlen) { - skb_copy_bits(from, 0, skb_put(to, hlen), hlen); + ret = skb_copy_bits(from, 0, skb_put(to, hlen), hlen); + if (unlikely(ret)) + return ret; len -= hlen; } else { plen = min_t(int, skb_headlen(from), len); @@ -2163,6 +2169,11 @@ skb_zerocopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) to->len += len + plen; to->data_len += len + plen; + if (unlikely(skb_orphan_frags(from, GFP_ATOMIC))) { + skb_tx_error(from); + return -ENOMEM; + } + for (i = 0; i < skb_shinfo(from)->nr_frags; i++) { if (!len) break; @@ -2173,6 +2184,8 @@ skb_zerocopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) j++; } skb_shinfo(to)->nr_frags = j; + + return 0; } EXPORT_SYMBOL_GPL(skb_zerocopy); @@ -2838,81 +2851,85 @@ EXPORT_SYMBOL_GPL(skb_pull_rcsum); /** * skb_segment - Perform protocol segmentation on skb. - * @skb: buffer to segment + * @head_skb: buffer to segment * @features: features for the output path (see dev->features) * * This function performs segmentation on the given skb. It returns * a pointer to the first in a list of new skbs for the segments. * In case of error it returns ERR_PTR(err). 
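With skb_zerocopy() now returning an error code instead of void (see the net/core/skbuff.c hunk above), callers are expected to check the result and unwind. The exact recovery is caller-specific, so the following is only an illustrative call-site pattern:

int err;

err = skb_zerocopy(to, from, len, hlen);
if (unlikely(err)) {	/* -ENOMEM or -EFAULT, per the comment above */
	kfree_skb(to);
	return err;
}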
*/ -struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) +struct sk_buff *skb_segment(struct sk_buff *head_skb, + netdev_features_t features) { struct sk_buff *segs = NULL; struct sk_buff *tail = NULL; - struct sk_buff *fskb = skb_shinfo(skb)->frag_list; - skb_frag_t *skb_frag = skb_shinfo(skb)->frags; - unsigned int mss = skb_shinfo(skb)->gso_size; - unsigned int doffset = skb->data - skb_mac_header(skb); + struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list; + skb_frag_t *frag = skb_shinfo(head_skb)->frags; + unsigned int mss = skb_shinfo(head_skb)->gso_size; + unsigned int doffset = head_skb->data - skb_mac_header(head_skb); + struct sk_buff *frag_skb = head_skb; unsigned int offset = doffset; - unsigned int tnl_hlen = skb_tnl_header_len(skb); + unsigned int tnl_hlen = skb_tnl_header_len(head_skb); unsigned int headroom; unsigned int len; __be16 proto; bool csum; int sg = !!(features & NETIF_F_SG); - int nfrags = skb_shinfo(skb)->nr_frags; + int nfrags = skb_shinfo(head_skb)->nr_frags; int err = -ENOMEM; int i = 0; int pos; + int dummy; - proto = skb_network_protocol(skb); + proto = skb_network_protocol(head_skb, &dummy); if (unlikely(!proto)) return ERR_PTR(-EINVAL); csum = !!can_checksum_protocol(features, proto); - __skb_push(skb, doffset); - headroom = skb_headroom(skb); - pos = skb_headlen(skb); + __skb_push(head_skb, doffset); + headroom = skb_headroom(head_skb); + pos = skb_headlen(head_skb); do { struct sk_buff *nskb; - skb_frag_t *frag; + skb_frag_t *nskb_frag; int hsize; int size; - len = skb->len - offset; + len = head_skb->len - offset; if (len > mss) len = mss; - hsize = skb_headlen(skb) - offset; + hsize = skb_headlen(head_skb) - offset; if (hsize < 0) hsize = 0; if (hsize > len || !sg) hsize = len; - if (!hsize && i >= nfrags && skb_headlen(fskb) && - (skb_headlen(fskb) == len || sg)) { - BUG_ON(skb_headlen(fskb) > len); + if (!hsize && i >= nfrags && skb_headlen(list_skb) && + (skb_headlen(list_skb) == len || sg)) { + BUG_ON(skb_headlen(list_skb) > len); i = 0; - nfrags = skb_shinfo(fskb)->nr_frags; - skb_frag = skb_shinfo(fskb)->frags; - pos += skb_headlen(fskb); + nfrags = skb_shinfo(list_skb)->nr_frags; + frag = skb_shinfo(list_skb)->frags; + frag_skb = list_skb; + pos += skb_headlen(list_skb); while (pos < offset + len) { BUG_ON(i >= nfrags); - size = skb_frag_size(skb_frag); + size = skb_frag_size(frag); if (pos + size > offset + len) break; i++; pos += size; - skb_frag++; + frag++; } - nskb = skb_clone(fskb, GFP_ATOMIC); - fskb = fskb->next; + nskb = skb_clone(list_skb, GFP_ATOMIC); + list_skb = list_skb->next; if (unlikely(!nskb)) goto err; @@ -2933,7 +2950,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) __skb_push(nskb, doffset); } else { nskb = __alloc_skb(hsize + doffset + headroom, - GFP_ATOMIC, skb_alloc_rx_flag(skb), + GFP_ATOMIC, skb_alloc_rx_flag(head_skb), NUMA_NO_NODE); if (unlikely(!nskb)) @@ -2949,12 +2966,12 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) segs = nskb; tail = nskb; - __copy_skb_header(nskb, skb); - nskb->mac_len = skb->mac_len; + __copy_skb_header(nskb, head_skb); + nskb->mac_len = head_skb->mac_len; skb_headers_offset_update(nskb, skb_headroom(nskb) - headroom); - skb_copy_from_linear_data_offset(skb, -tnl_hlen, + skb_copy_from_linear_data_offset(head_skb, -tnl_hlen, nskb->data - tnl_hlen, doffset + tnl_hlen); @@ -2963,30 +2980,32 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) if (!sg) { nskb->ip_summed = 
CHECKSUM_NONE; - nskb->csum = skb_copy_and_csum_bits(skb, offset, + nskb->csum = skb_copy_and_csum_bits(head_skb, offset, skb_put(nskb, len), len, 0); continue; } - frag = skb_shinfo(nskb)->frags; + nskb_frag = skb_shinfo(nskb)->frags; - skb_copy_from_linear_data_offset(skb, offset, + skb_copy_from_linear_data_offset(head_skb, offset, skb_put(nskb, hsize), hsize); - skb_shinfo(nskb)->tx_flags = skb_shinfo(skb)->tx_flags & SKBTX_SHARED_FRAG; + skb_shinfo(nskb)->tx_flags = skb_shinfo(head_skb)->tx_flags & + SKBTX_SHARED_FRAG; while (pos < offset + len) { if (i >= nfrags) { - BUG_ON(skb_headlen(fskb)); + BUG_ON(skb_headlen(list_skb)); i = 0; - nfrags = skb_shinfo(fskb)->nr_frags; - skb_frag = skb_shinfo(fskb)->frags; + nfrags = skb_shinfo(list_skb)->nr_frags; + frag = skb_shinfo(list_skb)->frags; + frag_skb = list_skb; BUG_ON(!nfrags); - fskb = fskb->next; + list_skb = list_skb->next; } if (unlikely(skb_shinfo(nskb)->nr_frags >= @@ -2997,27 +3016,30 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) goto err; } - *frag = *skb_frag; - __skb_frag_ref(frag); - size = skb_frag_size(frag); + if (unlikely(skb_orphan_frags(frag_skb, GFP_ATOMIC))) + goto err; + + *nskb_frag = *frag; + __skb_frag_ref(nskb_frag); + size = skb_frag_size(nskb_frag); if (pos < offset) { - frag->page_offset += offset - pos; - skb_frag_size_sub(frag, offset - pos); + nskb_frag->page_offset += offset - pos; + skb_frag_size_sub(nskb_frag, offset - pos); } skb_shinfo(nskb)->nr_frags++; if (pos + size <= offset + len) { i++; - skb_frag++; + frag++; pos += size; } else { - skb_frag_size_sub(frag, pos + size - (offset + len)); + skb_frag_size_sub(nskb_frag, pos + size - (offset + len)); goto skip_fraglist; } - frag++; + nskb_frag++; } skip_fraglist: @@ -3031,7 +3053,7 @@ perform_csum_check: nskb->len - doffset, 0); nskb->ip_summed = CHECKSUM_NONE; } - } while ((offset += len) < skb->len); + } while ((offset += len) < head_skb->len); return segs; diff --git a/net/core/sock.c b/net/core/sock.c index 5b6a9431b017..c0fc6bdad1e3 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -2357,10 +2357,13 @@ void release_sock(struct sock *sk) if (sk->sk_backlog.tail) __release_sock(sk); + /* Warning : release_cb() might need to release sk ownership, + * ie call sock_release_ownership(sk) before us. + */ if (sk->sk_prot->release_cb) sk->sk_prot->release_cb(sk); - sk->sk_lock.owned = 0; + sock_release_ownership(sk); if (waitqueue_active(&sk->sk_lock.wq)) wake_up(&sk->sk_lock.wq); spin_unlock_bh(&sk->sk_lock.slock); diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c index 1863422fb7d5..250be7421ab3 100644 --- a/net/ipv4/gre_demux.c +++ b/net/ipv4/gre_demux.c @@ -182,6 +182,14 @@ static int gre_cisco_rcv(struct sk_buff *skb) int i; bool csum_err = false; +#ifdef CONFIG_NET_IPGRE_BROADCAST + if (ipv4_is_multicast(ip_hdr(skb)->daddr)) { + /* Looped back packet, drop it! 
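The release_sock() hunk above cooperates with the tcp_output.c change later in this patch: tcp_release_cb() drops socket ownership itself before running its BH-deferred work, so release_sock() must use a helper that tolerates ownership already having been released. The helper is not shown in this diff; reconstructed from the line it replaces (sk->sk_lock.owned = 0;), it is approximately:

static inline void sock_release_ownership(struct sock *sk)
{
	sk->sk_lock.owned = 0;
}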
*/ + if (rt_is_output_route(skb_rtable(skb))) + goto drop; + } +#endif + if (parse_gre_header(skb, &tpi, &csum_err) < 0) goto drop; diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c index bb075fc9a14f..3b01959bf4bb 100644 --- a/net/ipv4/inet_fragment.c +++ b/net/ipv4/inet_fragment.c @@ -208,7 +208,7 @@ int inet_frag_evictor(struct netns_frags *nf, struct inet_frags *f, bool force) } work = frag_mem_limit(nf) - nf->low_thresh; - while (work > 0) { + while (work > 0 || force) { spin_lock(&nf->lru_lock); if (list_empty(&nf->lru_list)) { @@ -278,9 +278,10 @@ static struct inet_frag_queue *inet_frag_intern(struct netns_frags *nf, atomic_inc(&qp->refcnt); hlist_add_head(&qp->list, &hb->chain); + inet_frag_lru_add(nf, qp); spin_unlock(&hb->chain_lock); read_unlock(&f->lock); - inet_frag_lru_add(nf, qp); + return qp; } diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c index 78a89e61925d..a82a22d8f77f 100644 --- a/net/ipv4/ip_tunnel.c +++ b/net/ipv4/ip_tunnel.c @@ -416,9 +416,6 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb, #ifdef CONFIG_NET_IPGRE_BROADCAST if (ipv4_is_multicast(iph->daddr)) { - /* Looped back packet, drop it! */ - if (rt_is_output_route(skb_rtable(skb))) - goto drop; tunnel->dev->stats.multicast++; skb->pkt_type = PACKET_BROADCAST; } diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c index 6f847dd56dbc..8d69626f2206 100644 --- a/net/ipv4/ip_tunnel_core.c +++ b/net/ipv4/ip_tunnel_core.c @@ -108,6 +108,7 @@ int iptunnel_pull_header(struct sk_buff *skb, int hdr_len, __be16 inner_proto) nf_reset(skb); secpath_reset(skb); skb_clear_hash_if_not_l4(skb); + skb_dst_drop(skb); skb->vlan_tci = 0; skb_set_queue_mapping(skb, 0); skb->pkt_type = PACKET_HOST; diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c index b9b3472975ba..28863570dd60 100644 --- a/net/ipv4/ipmr.c +++ b/net/ipv4/ipmr.c @@ -2255,13 +2255,14 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb, } static int ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb, - u32 portid, u32 seq, struct mfc_cache *c, int cmd) + u32 portid, u32 seq, struct mfc_cache *c, int cmd, + int flags) { struct nlmsghdr *nlh; struct rtmsg *rtm; int err; - nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rtm), NLM_F_MULTI); + nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rtm), flags); if (nlh == NULL) return -EMSGSIZE; @@ -2329,7 +2330,7 @@ static void mroute_netlink_event(struct mr_table *mrt, struct mfc_cache *mfc, if (skb == NULL) goto errout; - err = ipmr_fill_mroute(mrt, skb, 0, 0, mfc, cmd); + err = ipmr_fill_mroute(mrt, skb, 0, 0, mfc, cmd, 0); if (err < 0) goto errout; @@ -2368,7 +2369,8 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb) if (ipmr_fill_mroute(mrt, skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - mfc, RTM_NEWROUTE) < 0) + mfc, RTM_NEWROUTE, + NLM_F_MULTI) < 0) goto done; next_entry: e++; @@ -2382,7 +2384,8 @@ next_entry: if (ipmr_fill_mroute(mrt, skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - mfc, RTM_NEWROUTE) < 0) { + mfc, RTM_NEWROUTE, + NLM_F_MULTI) < 0) { spin_unlock_bh(&mfc_unres_lock); goto done; } diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 3cf976510497..1e4eac779f51 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -2628,7 +2628,7 @@ static void get_timewait4_sock(const struct inet_timewait_sock *tw, { __be32 dest, src; __u16 destp, srcp; - long delta = tw->tw_ttd - jiffies; + s32 delta = tw->tw_ttd - inet_tw_time_stamp(); dest = tw->tw_daddr; src = tw->tw_rcv_saddr; diff --git 
a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index f0eb4e337ec8..17a11e65e57f 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -767,6 +767,17 @@ void tcp_release_cb(struct sock *sk) if (flags & (1UL << TCP_TSQ_DEFERRED)) tcp_tsq_handler(sk); + /* Here begins the tricky part : + * We are called from release_sock() with : + * 1) BH disabled + * 2) sk_lock.slock spinlock held + * 3) socket owned by us (sk->sk_lock.owned == 1) + * + * But following code is meant to be called from BH handlers, + * so we should keep BH disabled, but early release socket ownership + */ + sock_release_ownership(sk); + if (flags & (1UL << TCP_WRITE_TIMER_DEFERRED)) { tcp_write_timer_handler(sk); __sock_put(sk); diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index fdbfeca36d63..6c7fa0853fc7 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -133,10 +133,12 @@ static int ipv6_count_addresses(struct inet6_dev *idev); static struct hlist_head inet6_addr_lst[IN6_ADDR_HSIZE]; static DEFINE_SPINLOCK(addrconf_hash_lock); -static void addrconf_verify(unsigned long); +static void addrconf_verify(void); +static void addrconf_verify_rtnl(void); +static void addrconf_verify_work(struct work_struct *); -static DEFINE_TIMER(addr_chk_timer, addrconf_verify, 0, 0); -static DEFINE_SPINLOCK(addrconf_verify_lock); +static struct workqueue_struct *addrconf_wq; +static DECLARE_DELAYED_WORK(addr_chk_work, addrconf_verify_work); static void addrconf_join_anycast(struct inet6_ifaddr *ifp); static void addrconf_leave_anycast(struct inet6_ifaddr *ifp); @@ -151,7 +153,7 @@ static struct rt6_info *addrconf_get_prefix_route(const struct in6_addr *pfx, u32 flags, u32 noflags); static void addrconf_dad_start(struct inet6_ifaddr *ifp); -static void addrconf_dad_timer(unsigned long data); +static void addrconf_dad_work(struct work_struct *w); static void addrconf_dad_completed(struct inet6_ifaddr *ifp); static void addrconf_dad_run(struct inet6_dev *idev); static void addrconf_rs_timer(unsigned long data); @@ -247,9 +249,9 @@ static void addrconf_del_rs_timer(struct inet6_dev *idev) __in6_dev_put(idev); } -static void addrconf_del_dad_timer(struct inet6_ifaddr *ifp) +static void addrconf_del_dad_work(struct inet6_ifaddr *ifp) { - if (del_timer(&ifp->dad_timer)) + if (cancel_delayed_work(&ifp->dad_work)) __in6_ifa_put(ifp); } @@ -261,12 +263,12 @@ static void addrconf_mod_rs_timer(struct inet6_dev *idev, mod_timer(&idev->rs_timer, jiffies + when); } -static void addrconf_mod_dad_timer(struct inet6_ifaddr *ifp, - unsigned long when) +static void addrconf_mod_dad_work(struct inet6_ifaddr *ifp, + unsigned long delay) { - if (!timer_pending(&ifp->dad_timer)) + if (!delayed_work_pending(&ifp->dad_work)) in6_ifa_hold(ifp); - mod_timer(&ifp->dad_timer, jiffies + when); + mod_delayed_work(addrconf_wq, &ifp->dad_work, delay); } static int snmp6_alloc_dev(struct inet6_dev *idev) @@ -751,8 +753,9 @@ void inet6_ifa_finish_destroy(struct inet6_ifaddr *ifp) in6_dev_put(ifp->idev); - if (del_timer(&ifp->dad_timer)) - pr_notice("Timer is still running, when freeing ifa=%p\n", ifp); + if (cancel_delayed_work(&ifp->dad_work)) + pr_notice("delayed DAD work was pending while freeing ifa=%p\n", + ifp); if (ifp->state != INET6_IFADDR_STATE_DEAD) { pr_warn("Freeing alive inet6 address %p\n", ifp); @@ -849,8 +852,7 @@ ipv6_add_addr(struct inet6_dev *idev, const struct in6_addr *addr, spin_lock_init(&ifa->lock); spin_lock_init(&ifa->state_lock); - setup_timer(&ifa->dad_timer, addrconf_dad_timer, - (unsigned long)ifa); + 
INIT_DELAYED_WORK(&ifa->dad_work, addrconf_dad_work); INIT_HLIST_NODE(&ifa->addr_lst); ifa->scope = scope; ifa->prefix_len = pfxlen; @@ -990,6 +992,8 @@ static void ipv6_del_addr(struct inet6_ifaddr *ifp) enum cleanup_prefix_rt_t action = CLEANUP_PREFIX_RT_NOP; unsigned long expires; + ASSERT_RTNL(); + spin_lock_bh(&ifp->state_lock); state = ifp->state; ifp->state = INET6_IFADDR_STATE_DEAD; @@ -1021,7 +1025,7 @@ static void ipv6_del_addr(struct inet6_ifaddr *ifp) write_unlock_bh(&ifp->idev->lock); - addrconf_del_dad_timer(ifp); + addrconf_del_dad_work(ifp); ipv6_ifa_notify(RTM_DELADDR, ifp); @@ -1103,8 +1107,11 @@ retry: * Lifetime is greater than REGEN_ADVANCE time units. In particular, * an implementation must not create a temporary address with a zero * Preferred Lifetime. + * Use age calculation as in addrconf_verify to avoid unnecessary + * temporary addresses being generated. */ - if (tmp_prefered_lft <= regen_advance) { + age = (now - tmp_tstamp + ADDRCONF_TIMER_FUZZ_MINUS) / HZ; + if (tmp_prefered_lft <= regen_advance + age) { in6_ifa_put(ifp); in6_dev_put(idev); ret = -1; @@ -1601,7 +1608,7 @@ static void addrconf_dad_stop(struct inet6_ifaddr *ifp, int dad_failed) { if (ifp->flags&IFA_F_PERMANENT) { spin_lock_bh(&ifp->lock); - addrconf_del_dad_timer(ifp); + addrconf_del_dad_work(ifp); ifp->flags |= IFA_F_TENTATIVE; if (dad_failed) ifp->flags |= IFA_F_DADFAILED; @@ -1622,20 +1629,21 @@ static void addrconf_dad_stop(struct inet6_ifaddr *ifp, int dad_failed) spin_unlock_bh(&ifp->lock); } ipv6_del_addr(ifp); - } else + } else { ipv6_del_addr(ifp); + } } static int addrconf_dad_end(struct inet6_ifaddr *ifp) { int err = -ENOENT; - spin_lock(&ifp->state_lock); + spin_lock_bh(&ifp->state_lock); if (ifp->state == INET6_IFADDR_STATE_DAD) { ifp->state = INET6_IFADDR_STATE_POSTDAD; err = 0; } - spin_unlock(&ifp->state_lock); + spin_unlock_bh(&ifp->state_lock); return err; } @@ -1668,7 +1676,12 @@ void addrconf_dad_failure(struct inet6_ifaddr *ifp) } } - addrconf_dad_stop(ifp, 1); + spin_lock_bh(&ifp->state_lock); + /* transition from _POSTDAD to _ERRDAD */ + ifp->state = INET6_IFADDR_STATE_ERRDAD; + spin_unlock_bh(&ifp->state_lock); + + addrconf_mod_dad_work(ifp, 0); } /* Join to solicited addr multicast group. 
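The addrconf conversion above swaps per-address timers for delayed work on a dedicated workqueue: work handlers run in process context and may therefore take the RTNL, which timer callbacks (softirq context) cannot. The hold/kick/cancel discipline mirrors the old timer one. A minimal sketch of the pattern, with hypothetical names (struct my_obj, my_wq, the refcount stands in for in6_ifa_hold()/in6_ifa_put()):

#include <linux/workqueue.h>
#include <linux/atomic.h>

struct my_obj {
	struct delayed_work work;	/* was: struct timer_list timer */
	atomic_t refcnt;
};

/* was: if (!timer_pending()) hold; mod_timer(timer, jiffies + delay) */
static void my_obj_schedule(struct workqueue_struct *my_wq,
			    struct my_obj *obj, unsigned long delay)
{
	if (!delayed_work_pending(&obj->work))
		atomic_inc(&obj->refcnt);	/* the queued work holds a ref */
	mod_delayed_work(my_wq, &obj->work, delay);
}

/* was: if (del_timer(timer)) drop the timer's reference */
static void my_obj_cancel(struct my_obj *obj)
{
	if (cancel_delayed_work(&obj->work))
		atomic_dec(&obj->refcnt);
}

Note one behavioral difference: cancel_delayed_work() does not wait for a handler that is already running, which is why, as in the patch, the handler itself still drops its reference when it finishes.
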
*/ @@ -1677,6 +1690,8 @@ void addrconf_join_solict(struct net_device *dev, const struct in6_addr *addr) { struct in6_addr maddr; + ASSERT_RTNL(); + if (dev->flags&(IFF_LOOPBACK|IFF_NOARP)) return; @@ -1688,6 +1703,8 @@ void addrconf_leave_solict(struct inet6_dev *idev, const struct in6_addr *addr) { struct in6_addr maddr; + ASSERT_RTNL(); + if (idev->dev->flags&(IFF_LOOPBACK|IFF_NOARP)) return; @@ -1698,6 +1715,9 @@ void addrconf_leave_solict(struct inet6_dev *idev, const struct in6_addr *addr) static void addrconf_join_anycast(struct inet6_ifaddr *ifp) { struct in6_addr addr; + + ASSERT_RTNL(); + if (ifp->prefix_len >= 127) /* RFC 6164 */ return; ipv6_addr_prefix(&addr, &ifp->addr, ifp->prefix_len); @@ -1709,6 +1729,9 @@ static void addrconf_join_anycast(struct inet6_ifaddr *ifp) static void addrconf_leave_anycast(struct inet6_ifaddr *ifp) { struct in6_addr addr; + + ASSERT_RTNL(); + if (ifp->prefix_len >= 127) /* RFC 6164 */ return; ipv6_addr_prefix(&addr, &ifp->addr, ifp->prefix_len); @@ -2268,11 +2291,13 @@ ok: return; } - ifp->flags |= IFA_F_MANAGETEMPADDR; update_lft = 0; create = 1; + spin_lock_bh(&ifp->lock); + ifp->flags |= IFA_F_MANAGETEMPADDR; ifp->cstamp = jiffies; ifp->tokenized = tokenized; + spin_unlock_bh(&ifp->lock); addrconf_dad_start(ifp); } @@ -2323,7 +2348,7 @@ ok: create, now); in6_ifa_put(ifp); - addrconf_verify(0); + addrconf_verify(); } } inet6_prefix_notify(RTM_NEWPREFIX, in6_dev, pinfo); @@ -2472,7 +2497,7 @@ static int inet6_addr_add(struct net *net, int ifindex, manage_tempaddrs(idev, ifp, valid_lft, prefered_lft, true, jiffies); in6_ifa_put(ifp); - addrconf_verify(0); + addrconf_verify_rtnl(); return 0; } @@ -3008,7 +3033,7 @@ static int addrconf_ifdown(struct net_device *dev, int how) hlist_for_each_entry_rcu(ifa, h, addr_lst) { if (ifa->idev == idev) { hlist_del_init_rcu(&ifa->addr_lst); - addrconf_del_dad_timer(ifa); + addrconf_del_dad_work(ifa); goto restart; } } @@ -3046,7 +3071,7 @@ static int addrconf_ifdown(struct net_device *dev, int how) while (!list_empty(&idev->addr_list)) { ifa = list_first_entry(&idev->addr_list, struct inet6_ifaddr, if_list); - addrconf_del_dad_timer(ifa); + addrconf_del_dad_work(ifa); list_del(&ifa->if_list); @@ -3145,10 +3170,10 @@ static void addrconf_dad_kick(struct inet6_ifaddr *ifp) rand_num = prandom_u32() % (idev->cnf.rtr_solicit_delay ? 
: 1); ifp->dad_probes = idev->cnf.dad_transmits; - addrconf_mod_dad_timer(ifp, rand_num); + addrconf_mod_dad_work(ifp, rand_num); } -static void addrconf_dad_start(struct inet6_ifaddr *ifp) +static void addrconf_dad_begin(struct inet6_ifaddr *ifp) { struct inet6_dev *idev = ifp->idev; struct net_device *dev = idev->dev; @@ -3200,25 +3225,68 @@ out: read_unlock_bh(&idev->lock); } -static void addrconf_dad_timer(unsigned long data) +static void addrconf_dad_start(struct inet6_ifaddr *ifp) { - struct inet6_ifaddr *ifp = (struct inet6_ifaddr *) data; + bool begin_dad = false; + + spin_lock_bh(&ifp->state_lock); + if (ifp->state != INET6_IFADDR_STATE_DEAD) { + ifp->state = INET6_IFADDR_STATE_PREDAD; + begin_dad = true; + } + spin_unlock_bh(&ifp->state_lock); + + if (begin_dad) + addrconf_mod_dad_work(ifp, 0); +} + +static void addrconf_dad_work(struct work_struct *w) +{ + struct inet6_ifaddr *ifp = container_of(to_delayed_work(w), + struct inet6_ifaddr, + dad_work); struct inet6_dev *idev = ifp->idev; struct in6_addr mcaddr; + enum { + DAD_PROCESS, + DAD_BEGIN, + DAD_ABORT, + } action = DAD_PROCESS; + + rtnl_lock(); + + spin_lock_bh(&ifp->state_lock); + if (ifp->state == INET6_IFADDR_STATE_PREDAD) { + action = DAD_BEGIN; + ifp->state = INET6_IFADDR_STATE_DAD; + } else if (ifp->state == INET6_IFADDR_STATE_ERRDAD) { + action = DAD_ABORT; + ifp->state = INET6_IFADDR_STATE_POSTDAD; + } + spin_unlock_bh(&ifp->state_lock); + + if (action == DAD_BEGIN) { + addrconf_dad_begin(ifp); + goto out; + } else if (action == DAD_ABORT) { + addrconf_dad_stop(ifp, 1); + goto out; + } + if (!ifp->dad_probes && addrconf_dad_end(ifp)) goto out; - write_lock(&idev->lock); + write_lock_bh(&idev->lock); if (idev->dead || !(idev->if_flags & IF_READY)) { - write_unlock(&idev->lock); + write_unlock_bh(&idev->lock); goto out; } spin_lock(&ifp->lock); if (ifp->state == INET6_IFADDR_STATE_DEAD) { spin_unlock(&ifp->lock); - write_unlock(&idev->lock); + write_unlock_bh(&idev->lock); goto out; } @@ -3229,7 +3297,7 @@ static void addrconf_dad_timer(unsigned long data) ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|IFA_F_DADFAILED); spin_unlock(&ifp->lock); - write_unlock(&idev->lock); + write_unlock_bh(&idev->lock); addrconf_dad_completed(ifp); @@ -3237,16 +3305,17 @@ static void addrconf_dad_timer(unsigned long data) } ifp->dad_probes--; - addrconf_mod_dad_timer(ifp, - NEIGH_VAR(ifp->idev->nd_parms, RETRANS_TIME)); + addrconf_mod_dad_work(ifp, + NEIGH_VAR(ifp->idev->nd_parms, RETRANS_TIME)); spin_unlock(&ifp->lock); - write_unlock(&idev->lock); + write_unlock_bh(&idev->lock); /* send a neighbour solicitation for our addr */ addrconf_addr_solict_mult(&ifp->addr, &mcaddr); ndisc_send_ns(ifp->idev->dev, NULL, &ifp->addr, &mcaddr, &in6addr_any); out: in6_ifa_put(ifp); + rtnl_unlock(); } /* ifp->idev must be at least read locked */ @@ -3273,7 +3342,7 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp) struct in6_addr lladdr; bool send_rs, send_mld; - addrconf_del_dad_timer(ifp); + addrconf_del_dad_work(ifp); /* * Configure the address for reception. Now it is valid. 
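addrconf_dad_work() above folds several former entry points into one handler: producers set ifp->state under ifp->state_lock (INET6_IFADDR_STATE_PREDAD to begin DAD, _ERRDAD to abort) and schedule the work; the handler takes the RTNL, reads and advances the state under the spinlock, then acts outside it. A condensed sketch of that dispatch shape; the object, state names, and handlers below are hypothetical stand-ins:

#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/rtnetlink.h>

enum my_state { MY_PRE, MY_RUNNING, MY_ERR, MY_POST };

struct my_obj {
	struct delayed_work work;
	spinlock_t state_lock;
	enum my_state state;
};

static void my_begin(struct my_obj *obj);	/* hypothetical */
static void my_abort(struct my_obj *obj);	/* hypothetical */
static void my_process(struct my_obj *obj);	/* hypothetical */
static void my_obj_put(struct my_obj *obj);	/* hypothetical */

static void my_work(struct work_struct *w)
{
	struct my_obj *obj = container_of(to_delayed_work(w),
					  struct my_obj, work);
	enum { DO_PROCESS, DO_BEGIN, DO_ABORT } action = DO_PROCESS;

	rtnl_lock();	/* sleeping is fine here; timer callbacks can't */

	/* Read and advance the state under the spinlock, act outside it. */
	spin_lock_bh(&obj->state_lock);
	if (obj->state == MY_PRE) {
		action = DO_BEGIN;
		obj->state = MY_RUNNING;
	} else if (obj->state == MY_ERR) {
		action = DO_ABORT;
		obj->state = MY_POST;
	}
	spin_unlock_bh(&obj->state_lock);

	if (action == DO_BEGIN)
		my_begin(obj);
	else if (action == DO_ABORT)
		my_abort(obj);
	else
		my_process(obj);	/* normal periodic step */

	my_obj_put(obj);	/* drop the ref taken when the work was queued */
	rtnl_unlock();
}
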
@@ -3514,23 +3583,23 @@ int ipv6_chk_home_addr(struct net *net, const struct in6_addr *addr) * Periodic address status verification */ -static void addrconf_verify(unsigned long foo) +static void addrconf_verify_rtnl(void) { unsigned long now, next, next_sec, next_sched; struct inet6_ifaddr *ifp; int i; + ASSERT_RTNL(); + rcu_read_lock_bh(); - spin_lock(&addrconf_verify_lock); now = jiffies; next = round_jiffies_up(now + ADDR_CHECK_FREQUENCY); - del_timer(&addr_chk_timer); + cancel_delayed_work(&addr_chk_work); for (i = 0; i < IN6_ADDR_HSIZE; i++) { restart: - hlist_for_each_entry_rcu_bh(ifp, - &inet6_addr_lst[i], addr_lst) { + hlist_for_each_entry_rcu_bh(ifp, &inet6_addr_lst[i], addr_lst) { unsigned long age; /* When setting preferred_lft to a value not zero or @@ -3625,13 +3694,22 @@ restart: ADBG(KERN_DEBUG "now = %lu, schedule = %lu, rounded schedule = %lu => %lu\n", now, next, next_sec, next_sched); - - addr_chk_timer.expires = next_sched; - add_timer(&addr_chk_timer); - spin_unlock(&addrconf_verify_lock); + mod_delayed_work(addrconf_wq, &addr_chk_work, next_sched - now); rcu_read_unlock_bh(); } +static void addrconf_verify_work(struct work_struct *w) +{ + rtnl_lock(); + addrconf_verify_rtnl(); + rtnl_unlock(); +} + +static void addrconf_verify(void) +{ + mod_delayed_work(addrconf_wq, &addr_chk_work, 0); +} + static struct in6_addr *extract_addr(struct nlattr *addr, struct nlattr *local, struct in6_addr **peer_pfx) { @@ -3688,6 +3766,8 @@ static int inet6_addr_modify(struct inet6_ifaddr *ifp, u32 ifa_flags, bool was_managetempaddr; bool had_prefixroute; + ASSERT_RTNL(); + if (!valid_lft || (prefered_lft > valid_lft)) return -EINVAL; @@ -3753,7 +3833,7 @@ static int inet6_addr_modify(struct inet6_ifaddr *ifp, u32 ifa_flags, !was_managetempaddr, jiffies); } - addrconf_verify(0); + addrconf_verify_rtnl(); return 0; } @@ -4383,6 +4463,8 @@ static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token) bool update_rs = false; struct in6_addr ll_addr; + ASSERT_RTNL(); + if (token == NULL) return -EINVAL; if (ipv6_addr_any(token)) @@ -4431,7 +4513,7 @@ static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token) } write_unlock_bh(&idev->lock); - addrconf_verify(0); + addrconf_verify_rtnl(); return 0; } @@ -4633,6 +4715,9 @@ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp) { struct net *net = dev_net(ifp->idev->dev); + if (event) + ASSERT_RTNL(); + inet6_ifa_notify(event ? 
: RTM_NEWADDR, ifp); switch (event) { @@ -5241,6 +5326,12 @@ int __init addrconf_init(void) if (err < 0) goto out_addrlabel; + addrconf_wq = create_workqueue("ipv6_addrconf"); + if (!addrconf_wq) { + err = -ENOMEM; + goto out_nowq; + } + /* The addrconf netdev notifier requires that loopback_dev * has it's ipv6 private information allocated and setup * before it can bring up and give link-local addresses @@ -5271,7 +5362,7 @@ int __init addrconf_init(void) register_netdevice_notifier(&ipv6_dev_notf); - addrconf_verify(0); + addrconf_verify(); rtnl_af_register(&inet6_ops); @@ -5299,6 +5390,8 @@ errout: rtnl_af_unregister(&inet6_ops); unregister_netdevice_notifier(&ipv6_dev_notf); errlo: + destroy_workqueue(addrconf_wq); +out_nowq: unregister_pernet_subsys(&addrconf_ops); out_addrlabel: ipv6_addr_label_cleanup(); @@ -5334,7 +5427,8 @@ void addrconf_cleanup(void) for (i = 0; i < IN6_ADDR_HSIZE; i++) WARN_ON(!hlist_empty(&inet6_addr_lst[i])); spin_unlock_bh(&addrconf_hash_lock); - - del_timer(&addr_chk_timer); + cancel_delayed_work(&addr_chk_work); rtnl_unlock(); + + destroy_workqueue(addrconf_wq); } diff --git a/net/ipv6/exthdrs_offload.c b/net/ipv6/exthdrs_offload.c index cf77f3abfd06..447a7fbd1bb6 100644 --- a/net/ipv6/exthdrs_offload.c +++ b/net/ipv6/exthdrs_offload.c @@ -25,11 +25,11 @@ int __init ipv6_exthdrs_offload_init(void) int ret; ret = inet6_add_offload(&rthdr_offload, IPPROTO_ROUTING); - if (!ret) + if (ret) goto out; ret = inet6_add_offload(&dstopt_offload, IPPROTO_DSTOPTS); - if (!ret) + if (ret) goto out_rt; out: diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index 16f91a2e7888..64d6073731d3 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -1101,21 +1101,19 @@ static void ip6_append_data_mtu(unsigned int *mtu, unsigned int fragheaderlen, struct sk_buff *skb, struct rt6_info *rt, - bool pmtuprobe) + unsigned int orig_mtu) { if (!(rt->dst.flags & DST_XFRM_TUNNEL)) { if (skb == NULL) { /* first fragment, reserve header_len */ - *mtu = *mtu - rt->dst.header_len; + *mtu = orig_mtu - rt->dst.header_len; } else { /* * this fragment is not first, the headers * space is regarded as data space. */ - *mtu = min(*mtu, pmtuprobe ? 
- rt->dst.dev->mtu : - dst_mtu(rt->dst.path)); + *mtu = orig_mtu; } *maxfraglen = ((*mtu - fragheaderlen) & ~7) + fragheaderlen - sizeof(struct frag_hdr); @@ -1132,7 +1130,7 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to, struct ipv6_pinfo *np = inet6_sk(sk); struct inet_cork *cork; struct sk_buff *skb, *skb_prev = NULL; - unsigned int maxfraglen, fragheaderlen, mtu; + unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu; int exthdrlen; int dst_exthdrlen; int hh_len; @@ -1214,6 +1212,7 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to, dst_exthdrlen = 0; mtu = cork->fragsize; } + orig_mtu = mtu; hh_len = LL_RESERVED_SPACE(rt->dst.dev); @@ -1311,8 +1310,7 @@ alloc_new_skb: if (skb == NULL || skb_prev == NULL) ip6_append_data_mtu(&mtu, &maxfraglen, fragheaderlen, skb, rt, - np->pmtudisc >= - IPV6_PMTUDISC_PROBE); + orig_mtu); skb_prev = skb; diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c index 0eb4038a4d63..8737400af0a0 100644 --- a/net/ipv6/ip6mr.c +++ b/net/ipv6/ip6mr.c @@ -2349,13 +2349,14 @@ int ip6mr_get_route(struct net *net, } static int ip6mr_fill_mroute(struct mr6_table *mrt, struct sk_buff *skb, - u32 portid, u32 seq, struct mfc6_cache *c, int cmd) + u32 portid, u32 seq, struct mfc6_cache *c, int cmd, + int flags) { struct nlmsghdr *nlh; struct rtmsg *rtm; int err; - nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rtm), NLM_F_MULTI); + nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rtm), flags); if (nlh == NULL) return -EMSGSIZE; @@ -2423,7 +2424,7 @@ static void mr6_netlink_event(struct mr6_table *mrt, struct mfc6_cache *mfc, if (skb == NULL) goto errout; - err = ip6mr_fill_mroute(mrt, skb, 0, 0, mfc, cmd); + err = ip6mr_fill_mroute(mrt, skb, 0, 0, mfc, cmd, 0); if (err < 0) goto errout; @@ -2462,7 +2463,8 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb) if (ip6mr_fill_mroute(mrt, skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - mfc, RTM_NEWROUTE) < 0) + mfc, RTM_NEWROUTE, + NLM_F_MULTI) < 0) goto done; next_entry: e++; @@ -2476,7 +2478,8 @@ next_entry: if (ip6mr_fill_mroute(mrt, skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - mfc, RTM_NEWROUTE) < 0) { + mfc, RTM_NEWROUTE, + NLM_F_MULTI) < 0) { spin_unlock_bh(&mfc_unres_lock); goto done; } diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 11dac21e6586..fba54a407bb2 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -1513,7 +1513,7 @@ int ip6_route_add(struct fib6_config *cfg) if (!table) goto out; - rt = ip6_dst_alloc(net, NULL, DST_NOCOUNT, table); + rt = ip6_dst_alloc(net, NULL, (cfg->fc_flags & RTF_ADDRCONF) ? 
0 : DST_NOCOUNT, table); if (!rt) { err = -ENOMEM; diff --git a/net/key/af_key.c b/net/key/af_key.c index 1a04c1329362..79326978517a 100644 --- a/net/key/af_key.c +++ b/net/key/af_key.c @@ -433,12 +433,13 @@ static inline int verify_sec_ctx_len(const void *p) return 0; } -static inline struct xfrm_user_sec_ctx *pfkey_sadb2xfrm_user_sec_ctx(const struct sadb_x_sec_ctx *sec_ctx) +static inline struct xfrm_user_sec_ctx *pfkey_sadb2xfrm_user_sec_ctx(const struct sadb_x_sec_ctx *sec_ctx, + gfp_t gfp) { struct xfrm_user_sec_ctx *uctx = NULL; int ctx_size = sec_ctx->sadb_x_ctx_len; - uctx = kmalloc((sizeof(*uctx)+ctx_size), GFP_KERNEL); + uctx = kmalloc((sizeof(*uctx)+ctx_size), gfp); if (!uctx) return NULL; @@ -1124,7 +1125,7 @@ static struct xfrm_state * pfkey_msg2xfrm_state(struct net *net, sec_ctx = ext_hdrs[SADB_X_EXT_SEC_CTX - 1]; if (sec_ctx != NULL) { - struct xfrm_user_sec_ctx *uctx = pfkey_sadb2xfrm_user_sec_ctx(sec_ctx); + struct xfrm_user_sec_ctx *uctx = pfkey_sadb2xfrm_user_sec_ctx(sec_ctx, GFP_KERNEL); if (!uctx) goto out; @@ -2231,14 +2232,14 @@ static int pfkey_spdadd(struct sock *sk, struct sk_buff *skb, const struct sadb_ sec_ctx = ext_hdrs[SADB_X_EXT_SEC_CTX - 1]; if (sec_ctx != NULL) { - struct xfrm_user_sec_ctx *uctx = pfkey_sadb2xfrm_user_sec_ctx(sec_ctx); + struct xfrm_user_sec_ctx *uctx = pfkey_sadb2xfrm_user_sec_ctx(sec_ctx, GFP_KERNEL); if (!uctx) { err = -ENOBUFS; goto out; } - err = security_xfrm_policy_alloc(&xp->security, uctx); + err = security_xfrm_policy_alloc(&xp->security, uctx, GFP_KERNEL); kfree(uctx); if (err) @@ -2335,12 +2336,12 @@ static int pfkey_spddelete(struct sock *sk, struct sk_buff *skb, const struct sa sec_ctx = ext_hdrs[SADB_X_EXT_SEC_CTX - 1]; if (sec_ctx != NULL) { - struct xfrm_user_sec_ctx *uctx = pfkey_sadb2xfrm_user_sec_ctx(sec_ctx); + struct xfrm_user_sec_ctx *uctx = pfkey_sadb2xfrm_user_sec_ctx(sec_ctx, GFP_KERNEL); if (!uctx) return -ENOMEM; - err = security_xfrm_policy_alloc(&pol_ctx, uctx); + err = security_xfrm_policy_alloc(&pol_ctx, uctx, GFP_KERNEL); kfree(uctx); if (err) return err; @@ -3239,8 +3240,8 @@ static struct xfrm_policy *pfkey_compile_policy(struct sock *sk, int opt, } if ((*dir = verify_sec_ctx_len(p))) goto out; - uctx = pfkey_sadb2xfrm_user_sec_ctx(sec_ctx); - *dir = security_xfrm_policy_alloc(&xp->security, uctx); + uctx = pfkey_sadb2xfrm_user_sec_ctx(sec_ctx, GFP_ATOMIC); + *dir = security_xfrm_policy_alloc(&xp->security, uctx, GFP_ATOMIC); kfree(uctx); if (*dir) diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c index 735d0f60c83a..85d9d94c0a3c 100644 --- a/net/l2tp/l2tp_core.c +++ b/net/l2tp/l2tp_core.c @@ -112,7 +112,6 @@ struct l2tp_net { spinlock_t l2tp_session_hlist_lock; }; -static void l2tp_session_set_header_len(struct l2tp_session *session, int version); static void l2tp_tunnel_free(struct l2tp_tunnel *tunnel); static inline struct l2tp_tunnel *l2tp_tunnel(struct sock *sk) @@ -1863,7 +1862,7 @@ EXPORT_SYMBOL_GPL(l2tp_session_delete); /* We come here whenever a session's send_seq, cookie_len or * l2specific_len parameters are set. 
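The comment being closed here is also why l2tp_session_set_header_len() stops being static in the next hunks: session->hdr_len caches the data-plane header size derived from send_seq, cookie_len, and l2specific_len, and the netlink and pppol2tp setsockopt paths can flip send_seq at runtime, so they must refresh that cache. A sketch of the call shape; my_set_send_seq() is illustrative, and the two statements inside it are the ones the patch adds:

/* Sketch: toggling sequencing must refresh the cached header size,
 * otherwise packets keep being built with a stale hdr_len.
 */
static void my_set_send_seq(struct l2tp_session *session, u8 on)
{
	session->send_seq = on;
	l2tp_session_set_header_len(session, session->tunnel->version);
}
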
*/ -static void l2tp_session_set_header_len(struct l2tp_session *session, int version) +void l2tp_session_set_header_len(struct l2tp_session *session, int version) { if (version == L2TP_HDR_VER_2) { session->hdr_len = 6; @@ -1876,6 +1875,7 @@ static void l2tp_session_set_header_len(struct l2tp_session *session, int versio } } +EXPORT_SYMBOL_GPL(l2tp_session_set_header_len); struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunnel, u32 session_id, u32 peer_session_id, struct l2tp_session_cfg *cfg) { diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h index 1f01ba3435bc..3f93ccd6ba97 100644 --- a/net/l2tp/l2tp_core.h +++ b/net/l2tp/l2tp_core.h @@ -263,6 +263,7 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb, int length, int (*payload_hook)(struct sk_buff *skb)); int l2tp_session_queue_purge(struct l2tp_session *session); int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb); +void l2tp_session_set_header_len(struct l2tp_session *session, int version); int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len); diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c index 4cfd722e9153..bd7387adea9e 100644 --- a/net/l2tp/l2tp_netlink.c +++ b/net/l2tp/l2tp_netlink.c @@ -578,8 +578,10 @@ static int l2tp_nl_cmd_session_modify(struct sk_buff *skb, struct genl_info *inf if (info->attrs[L2TP_ATTR_RECV_SEQ]) session->recv_seq = nla_get_u8(info->attrs[L2TP_ATTR_RECV_SEQ]); - if (info->attrs[L2TP_ATTR_SEND_SEQ]) + if (info->attrs[L2TP_ATTR_SEND_SEQ]) { session->send_seq = nla_get_u8(info->attrs[L2TP_ATTR_SEND_SEQ]); + l2tp_session_set_header_len(session, session->tunnel->version); + } if (info->attrs[L2TP_ATTR_LNS_MODE]) session->lns_mode = nla_get_u8(info->attrs[L2TP_ATTR_LNS_MODE]); diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c index be5fadf34739..5990919356a5 100644 --- a/net/l2tp/l2tp_ppp.c +++ b/net/l2tp/l2tp_ppp.c @@ -254,12 +254,14 @@ static void pppol2tp_recv(struct l2tp_session *session, struct sk_buff *skb, int po = pppox_sk(sk); ppp_input(&po->chan, skb); } else { - l2tp_info(session, PPPOL2TP_MSG_DATA, "%s: socket not bound\n", - session->name); + l2tp_dbg(session, PPPOL2TP_MSG_DATA, + "%s: recv %d byte data frame, passing to L2TP socket\n", + session->name, data_len); - /* Not bound. Nothing we can do, so discard. */ - atomic_long_inc(&session->stats.rx_errors); - kfree_skb(skb); + if (sock_queue_rcv_skb(sk, skb) < 0) { + atomic_long_inc(&session->stats.rx_errors); + kfree_skb(skb); + } } return; @@ -1312,6 +1314,7 @@ static int pppol2tp_session_setsockopt(struct sock *sk, po->chan.hdrlen = val ? 
PPPOL2TP_L2TP_HDR_SIZE_SEQ : PPPOL2TP_L2TP_HDR_SIZE_NOSEQ; } + l2tp_session_set_header_len(session, session->tunnel->version); l2tp_info(session, PPPOL2TP_MSG_CONTROL, "%s: set send_seq=%d\n", session->name, session->send_seq); diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c index f43613a97dd6..0c1ecfdf9a12 100644 --- a/net/mac80211/chan.c +++ b/net/mac80211/chan.c @@ -100,6 +100,12 @@ ieee80211_get_chanctx_max_required_bw(struct ieee80211_local *local, } max_bw = max(max_bw, width); } + + /* use the configured bandwidth in case of monitor interface */ + sdata = rcu_dereference(local->monitor_sdata); + if (sdata && rcu_access_pointer(sdata->vif.chanctx_conf) == conf) + max_bw = max(max_bw, conf->def.width); + rcu_read_unlock(); return max_bw; diff --git a/net/mac80211/mesh_ps.c b/net/mac80211/mesh_ps.c index 2802f9d9279d..ad8b377b4b9f 100644 --- a/net/mac80211/mesh_ps.c +++ b/net/mac80211/mesh_ps.c @@ -36,6 +36,7 @@ static struct sk_buff *mps_qos_null_get(struct sta_info *sta) sdata->vif.addr); nullfunc->frame_control = fc; nullfunc->duration_id = 0; + nullfunc->seq_ctrl = 0; /* no address resolution for this frame -> set addr 1 immediately */ memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN); memset(skb_put(skb, 2), 0, 2); /* append QoS control field */ diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c index a023b432143b..137a192e64bc 100644 --- a/net/mac80211/sta_info.c +++ b/net/mac80211/sta_info.c @@ -1206,6 +1206,7 @@ static void ieee80211_send_null_response(struct ieee80211_sub_if_data *sdata, memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN); memcpy(nullfunc->addr2, sdata->vif.addr, ETH_ALEN); memcpy(nullfunc->addr3, sdata->vif.addr, ETH_ALEN); + nullfunc->seq_ctrl = 0; skb->priority = tid; skb_set_queue_mapping(skb, ieee802_1d_to_ac[tid]); diff --git a/net/netfilter/nfnetlink_queue_core.c b/net/netfilter/nfnetlink_queue_core.c index f072fe803510..108120f216b1 100644 --- a/net/netfilter/nfnetlink_queue_core.c +++ b/net/netfilter/nfnetlink_queue_core.c @@ -354,13 +354,16 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue, skb = nfnetlink_alloc_skb(net, size, queue->peer_portid, GFP_ATOMIC); - if (!skb) + if (!skb) { + skb_tx_error(entskb); return NULL; + } nlh = nlmsg_put(skb, 0, 0, NFNL_SUBSYS_QUEUE << 8 | NFQNL_MSG_PACKET, sizeof(struct nfgenmsg), 0); if (!nlh) { + skb_tx_error(entskb); kfree_skb(skb); return NULL; } @@ -488,13 +491,15 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue, nla->nla_type = NFQA_PAYLOAD; nla->nla_len = nla_attr_size(data_len); - skb_zerocopy(skb, entskb, data_len, hlen); + if (skb_zerocopy(skb, entskb, data_len, hlen)) + goto nla_put_failure; } nlh->nlmsg_len = skb->len; return skb; nla_put_failure: + skb_tx_error(entskb); kfree_skb(skb); net_err_ratelimited("nf_queue: error creating packet message\n"); return NULL; diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c index e9a48baf8551..270b77dfac30 100644 --- a/net/openvswitch/datapath.c +++ b/net/openvswitch/datapath.c @@ -464,7 +464,9 @@ static int queue_userspace_packet(struct datapath *dp, struct sk_buff *skb, } nla->nla_len = nla_attr_size(skb->len); - skb_zerocopy(user_skb, skb, skb->len, hlen); + err = skb_zerocopy(user_skb, skb, skb->len, hlen); + if (err) + goto out; /* Pad OVS_PACKET_ATTR_PACKET if linear copy was performed */ if (!(dp->user_features & OVS_DP_F_UNALIGNED)) { @@ -478,6 +480,8 @@ static int queue_userspace_packet(struct datapath *dp, struct sk_buff *skb, err = 
genlmsg_unicast(ovs_dp_get_net(dp), user_skb, upcall_info->portid); out: + if (err) + skb_tx_error(skb); kfree_skb(nskb); return err; } @@ -1174,7 +1178,7 @@ static void ovs_dp_reset_user_features(struct sk_buff *skb, struct genl_info *in struct datapath *dp; dp = lookup_datapath(sock_net(skb->sk), info->userhdr, info->attrs); - if (!dp) + if (IS_ERR(dp)) return; WARN(dp->user_features, "Dropping previously announced user features\n"); @@ -1762,11 +1766,12 @@ static int ovs_vport_cmd_dump(struct sk_buff *skb, struct netlink_callback *cb) int bucket = cb->args[0], skip = cb->args[1]; int i, j = 0; + rcu_read_lock(); dp = get_dp(sock_net(skb->sk), ovs_header->dp_ifindex); - if (!dp) + if (!dp) { + rcu_read_unlock(); return -ENODEV; - - rcu_read_lock(); + } for (i = bucket; i < DP_VPORT_HASH_BUCKETS; i++) { struct vport *vport; diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c index 16f4b46161d4..2998989e76db 100644 --- a/net/openvswitch/flow.c +++ b/net/openvswitch/flow.c @@ -73,6 +73,7 @@ void ovs_flow_stats_update(struct sw_flow *flow, struct sk_buff *skb) if ((flow->key.eth.type == htons(ETH_P_IP) || flow->key.eth.type == htons(ETH_P_IPV6)) && + flow->key.ip.frag != OVS_FRAG_TYPE_LATER && flow->key.ip.proto == IPPROTO_TCP && likely(skb->len >= skb_transport_offset(skb) + sizeof(struct tcphdr))) { tcp_flags = TCP_FLAGS_BE16(tcp_hdr(skb)); @@ -91,7 +92,7 @@ static void stats_read(struct flow_stats *stats, unsigned long *used, __be16 *tcp_flags) { spin_lock(&stats->lock); - if (time_after(stats->used, *used)) + if (!*used || time_after(stats->used, *used)) *used = stats->used; *tcp_flags |= stats->tcp_flags; ovs_stats->n_packets += stats->packet_count; @@ -102,30 +103,24 @@ static void stats_read(struct flow_stats *stats, void ovs_flow_stats_get(struct sw_flow *flow, struct ovs_flow_stats *ovs_stats, unsigned long *used, __be16 *tcp_flags) { - int cpu, cur_cpu; + int cpu; *used = 0; *tcp_flags = 0; memset(ovs_stats, 0, sizeof(*ovs_stats)); + local_bh_disable(); if (!flow->stats.is_percpu) { stats_read(flow->stats.stat, ovs_stats, used, tcp_flags); } else { - cur_cpu = get_cpu(); for_each_possible_cpu(cpu) { struct flow_stats *stats; - if (cpu == cur_cpu) - local_bh_disable(); - stats = per_cpu_ptr(flow->stats.cpu_stats, cpu); stats_read(stats, ovs_stats, used, tcp_flags); - - if (cpu == cur_cpu) - local_bh_enable(); } - put_cpu(); } + local_bh_enable(); } static void stats_reset(struct flow_stats *stats) @@ -140,25 +135,17 @@ static void stats_reset(struct flow_stats *stats) void ovs_flow_stats_clear(struct sw_flow *flow) { - int cpu, cur_cpu; + int cpu; + local_bh_disable(); if (!flow->stats.is_percpu) { stats_reset(flow->stats.stat); } else { - cur_cpu = get_cpu(); - for_each_possible_cpu(cpu) { - - if (cpu == cur_cpu) - local_bh_disable(); - stats_reset(per_cpu_ptr(flow->stats.cpu_stats, cpu)); - - if (cpu == cur_cpu) - local_bh_enable(); } - put_cpu(); } + local_bh_enable(); } static int check_header(struct sk_buff *skb, int len) diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c index 1313145e3b86..a07d55e75698 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c @@ -273,11 +273,12 @@ static struct Qdisc *qdisc_match_from_root(struct Qdisc *root, u32 handle) void qdisc_list_add(struct Qdisc *q) { - struct Qdisc *root = qdisc_dev(q)->qdisc; + if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) { + struct Qdisc *root = qdisc_dev(q)->qdisc; - WARN_ON_ONCE(root == &noop_qdisc); - if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) + WARN_ON_ONCE(root 
== &noop_qdisc); list_add_tail(&q->list, &root->list); + } } EXPORT_SYMBOL(qdisc_list_add); diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c index 08ef7a42c0e4..21e251766eb1 100644 --- a/net/sched/sch_fq.c +++ b/net/sched/sch_fq.c @@ -601,6 +601,7 @@ static int fq_resize(struct Qdisc *sch, u32 log) { struct fq_sched_data *q = qdisc_priv(sch); struct rb_root *array; + void *old_fq_root; u32 idx; if (q->fq_root && log == q->fq_trees_log) @@ -615,13 +616,19 @@ static int fq_resize(struct Qdisc *sch, u32 log) for (idx = 0; idx < (1U << log); idx++) array[idx] = RB_ROOT; - if (q->fq_root) { - fq_rehash(q, q->fq_root, q->fq_trees_log, array, log); - fq_free(q->fq_root); - } + sch_tree_lock(sch); + + old_fq_root = q->fq_root; + if (old_fq_root) + fq_rehash(q, old_fq_root, q->fq_trees_log, array, log); + q->fq_root = array; q->fq_trees_log = log; + sch_tree_unlock(sch); + + fq_free(old_fq_root); + return 0; } @@ -697,9 +704,11 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt) q->flow_refill_delay = usecs_to_jiffies(usecs_delay); } - if (!err) + if (!err) { + sch_tree_unlock(sch); err = fq_resize(sch, fq_log); - + sch_tree_lock(sch); + } while (sch->q.qlen > sch->limit) { struct sk_buff *skb = fq_dequeue(sch); diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c index 632090b961c3..3a1767ef3201 100644 --- a/net/sctp/sm_make_chunk.c +++ b/net/sctp/sm_make_chunk.c @@ -1421,8 +1421,8 @@ static void sctp_chunk_destroy(struct sctp_chunk *chunk) BUG_ON(!list_empty(&chunk->list)); list_del_init(&chunk->transmitted_list); - /* Free the chunk skb data and the SCTP_chunk stub itself. */ - dev_kfree_skb(chunk->skb); + consume_skb(chunk->skb); + consume_skb(chunk->auth_chunk); SCTP_DBG_OBJCNT_DEC(chunk); kmem_cache_free(sctp_chunk_cachep, chunk); diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c index ae65b6b5973a..01e002430c85 100644 --- a/net/sctp/sm_statefuns.c +++ b/net/sctp/sm_statefuns.c @@ -760,7 +760,6 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(struct net *net, /* Make sure that we and the peer are AUTH capable */ if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) { - kfree_skb(chunk->auth_chunk); sctp_association_free(new_asoc); return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); } @@ -775,10 +774,6 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(struct net *net, auth.transport = chunk->transport; ret = sctp_sf_authenticate(net, ep, new_asoc, type, &auth); - - /* We can now safely free the auth_chunk clone */ - kfree_skb(chunk->auth_chunk); - if (ret != SCTP_IERROR_NO_ERROR) { sctp_association_free(new_asoc); return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); diff --git a/net/socket.c b/net/socket.c index 879933aaed4c..a19ae1968d37 100644 --- a/net/socket.c +++ b/net/socket.c @@ -450,16 +450,17 @@ EXPORT_SYMBOL(sockfd_lookup); static struct socket *sockfd_lookup_light(int fd, int *err, int *fput_needed) { - struct file *file; + struct fd f = fdget(fd); struct socket *sock; *err = -EBADF; - file = fget_light(fd, fput_needed); - if (file) { - sock = sock_from_file(file, err); - if (sock) + if (f.file) { + sock = sock_from_file(f.file, err); + if (likely(sock)) { + *fput_needed = f.flags; return sock; - fput_light(file, *fput_needed); + } + fdput(f); } return NULL; } @@ -1985,6 +1986,10 @@ static int copy_msghdr_from_user(struct msghdr *kmsg, { if (copy_from_user(kmsg, umsg, sizeof(struct msghdr))) return -EFAULT; + + if (kmsg->msg_namelen < 0) + return -EINVAL; + if (kmsg->msg_namelen > sizeof(struct sockaddr_storage)) kmsg->msg_namelen = 
sizeof(struct sockaddr_storage); return 0; diff --git a/net/tipc/config.c b/net/tipc/config.c index e74eef2e7490..e6d721692ae0 100644 --- a/net/tipc/config.c +++ b/net/tipc/config.c @@ -376,7 +376,6 @@ static void cfg_conn_msg_event(int conid, struct sockaddr_tipc *addr, struct tipc_cfg_msg_hdr *req_hdr; struct tipc_cfg_msg_hdr *rep_hdr; struct sk_buff *rep_buf; - int ret; /* Validate configuration message header (ignore invalid message) */ req_hdr = (struct tipc_cfg_msg_hdr *)buf; @@ -398,12 +397,8 @@ static void cfg_conn_msg_event(int conid, struct sockaddr_tipc *addr, memcpy(rep_hdr, req_hdr, sizeof(*rep_hdr)); rep_hdr->tcm_len = htonl(rep_buf->len); rep_hdr->tcm_flags &= htons(~TCM_F_REQUEST); - - ret = tipc_conn_sendmsg(&cfgsrv, conid, addr, rep_buf->data, - rep_buf->len); - if (ret < 0) - pr_err("Sending cfg reply message failed, no memory\n"); - + tipc_conn_sendmsg(&cfgsrv, conid, addr, rep_buf->data, + rep_buf->len); kfree_skb(rep_buf); } } diff --git a/net/tipc/handler.c b/net/tipc/handler.c index e4bc8a296744..1fabf160501f 100644 --- a/net/tipc/handler.c +++ b/net/tipc/handler.c @@ -58,7 +58,6 @@ unsigned int tipc_k_signal(Handler routine, unsigned long argument) spin_lock_bh(&qitem_lock); if (!handler_enabled) { - pr_err("Signal request ignored by handler\n"); spin_unlock_bh(&qitem_lock); return -ENOPROTOOPT; } diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c index 48302be175ce..042e8e3cabc0 100644 --- a/net/tipc/name_table.c +++ b/net/tipc/name_table.c @@ -941,17 +941,48 @@ int tipc_nametbl_init(void) return 0; } +/** + * tipc_purge_publications - remove all publications for a given type + * + * tipc_nametbl_lock must be held when calling this function + */ +static void tipc_purge_publications(struct name_seq *seq) +{ + struct publication *publ, *safe; + struct sub_seq *sseq; + struct name_info *info; + + if (!seq->sseqs) { + nameseq_delete_empty(seq); + return; + } + sseq = seq->sseqs; + info = sseq->info; + list_for_each_entry_safe(publ, safe, &info->zone_list, zone_list) { + tipc_nametbl_remove_publ(publ->type, publ->lower, publ->node, + publ->ref, publ->key); + } +} + void tipc_nametbl_stop(void) { u32 i; + struct name_seq *seq; + struct hlist_head *seq_head; + struct hlist_node *safe; - /* Verify name table is empty, then release it */ + /* Verify name table is empty and purge any lingering + * publications, then release the name table + */ write_lock_bh(&tipc_nametbl_lock); for (i = 0; i < TIPC_NAMETBL_SIZE; i++) { if (hlist_empty(&table.types[i])) continue; - pr_err("nametbl_stop(): orphaned hash chain detected\n"); - break; + seq_head = &table.types[i]; + hlist_for_each_entry_safe(seq, safe, seq_head, ns_list) { + tipc_purge_publications(seq); + } + continue; } kfree(table.types); table.types = NULL; diff --git a/net/tipc/server.c b/net/tipc/server.c index 373979789a73..646a930eefbf 100644 --- a/net/tipc/server.c +++ b/net/tipc/server.c @@ -87,7 +87,6 @@ static void tipc_clean_outqueues(struct tipc_conn *con); static void tipc_conn_kref_release(struct kref *kref) { struct tipc_conn *con = container_of(kref, struct tipc_conn, kref); - struct tipc_server *s = con->server; if (con->sock) { tipc_sock_release_local(con->sock); @@ -95,10 +94,6 @@ static void tipc_conn_kref_release(struct kref *kref) } tipc_clean_outqueues(con); - - if (con->conid) - s->tipc_conn_shutdown(con->conid, con->usr_data); - kfree(con); } @@ -181,6 +176,9 @@ static void tipc_close_conn(struct tipc_conn *con) struct tipc_server *s = con->server; if (test_and_clear_bit(CF_CONNECTED, 
&con->flags)) { + if (con->conid) + s->tipc_conn_shutdown(con->conid, con->usr_data); + spin_lock_bh(&s->idr_lock); idr_remove(&s->conn_idr, con->conid); s->idr_in_use--; @@ -429,10 +427,12 @@ int tipc_conn_sendmsg(struct tipc_server *s, int conid, list_add_tail(&e->list, &con->outqueue); spin_unlock_bh(&con->outqueue_lock); - if (test_bit(CF_CONNECTED, &con->flags)) + if (test_bit(CF_CONNECTED, &con->flags)) { if (!queue_work(s->send_wq, &con->swork)) conn_put(con); - + } else { + conn_put(con); + } return 0; } diff --git a/net/tipc/socket.c b/net/tipc/socket.c index a4cf274455aa..0ed0eaa62f29 100644 --- a/net/tipc/socket.c +++ b/net/tipc/socket.c @@ -997,7 +997,7 @@ static int tipc_wait_for_rcvmsg(struct socket *sock, long timeo) for (;;) { prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); - if (skb_queue_empty(&sk->sk_receive_queue)) { + if (timeo && skb_queue_empty(&sk->sk_receive_queue)) { if (sock->state == SS_DISCONNECTING) { err = -ENOTCONN; break; @@ -1623,7 +1623,7 @@ static int tipc_wait_for_accept(struct socket *sock, long timeo) for (;;) { prepare_to_wait_exclusive(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); - if (skb_queue_empty(&sk->sk_receive_queue)) { + if (timeo && skb_queue_empty(&sk->sk_receive_queue)) { release_sock(sk); timeo = schedule_timeout(timeo); lock_sock(sk); diff --git a/net/tipc/subscr.c b/net/tipc/subscr.c index 7cb0bd5b1176..642437231ad5 100644 --- a/net/tipc/subscr.c +++ b/net/tipc/subscr.c @@ -96,20 +96,16 @@ static void subscr_send_event(struct tipc_subscription *sub, u32 found_lower, { struct tipc_subscriber *subscriber = sub->subscriber; struct kvec msg_sect; - int ret; msg_sect.iov_base = (void *)&sub->evt; msg_sect.iov_len = sizeof(struct tipc_event); - sub->evt.event = htohl(event, sub->swap); sub->evt.found_lower = htohl(found_lower, sub->swap); sub->evt.found_upper = htohl(found_upper, sub->swap); sub->evt.port.ref = htohl(port_ref, sub->swap); sub->evt.port.node = htohl(node, sub->swap); - ret = tipc_conn_sendmsg(&topsrv, subscriber->conid, NULL, - msg_sect.iov_base, msg_sect.iov_len); - if (ret < 0) - pr_err("Sending subscription event failed, no memory\n"); + tipc_conn_sendmsg(&topsrv, subscriber->conid, NULL, msg_sect.iov_base, + msg_sect.iov_len); } /** @@ -153,14 +149,6 @@ static void subscr_timeout(struct tipc_subscription *sub) /* The spin lock per subscriber is used to protect its members */ spin_lock_bh(&subscriber->lock); - /* Validate if the connection related to the subscriber is - * closed (in case subscriber is terminating) - */ - if (subscriber->conid == 0) { - spin_unlock_bh(&subscriber->lock); - return; - } - /* Validate timeout (in case subscription is being cancelled) */ if (sub->timeout == TIPC_WAIT_FOREVER) { spin_unlock_bh(&subscriber->lock); @@ -215,9 +203,6 @@ static void subscr_release(struct tipc_subscriber *subscriber) spin_lock_bh(&subscriber->lock); - /* Invalidate subscriber reference */ - subscriber->conid = 0; - /* Destroy any existing subscriptions for subscriber */ list_for_each_entry_safe(sub, sub_temp, &subscriber->subscription_list, subscription_list) { @@ -278,9 +263,9 @@ static void subscr_cancel(struct tipc_subscr *s, * * Called with subscriber lock held. 
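A few hunks up, the tipc_wait_for_rcvmsg() and tipc_wait_for_accept() fixes gate the sleep on timeo, so a non-blocking caller (timeo == 0) falls through to the !timeo check and returns -EAGAIN instead of sleeping indefinitely on an empty queue. The general shape of such a loop, sketched below; my_data_ready() is a hypothetical predicate standing in for the queue-empty and socket-state tests:

#include <net/sock.h>
#include <linux/wait.h>
#include <linux/sched.h>

static bool my_data_ready(struct sock *sk);	/* hypothetical */

static int my_wait_for_data(struct sock *sk, long timeo)
{
	DEFINE_WAIT(wait);
	int err;

	for (;;) {
		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
		if (timeo && !my_data_ready(sk)) {
			release_sock(sk);
			timeo = schedule_timeout(timeo);
			lock_sock(sk);
		}
		err = 0;
		if (my_data_ready(sk))
			break;
		err = -EAGAIN;
		if (!timeo)
			break;		/* non-blocking caller: fail fast */
		err = sock_intr_errno(timeo);
		if (signal_pending(current))
			break;
	}
	finish_wait(sk_sleep(sk), &wait);
	return err;
}
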
*/ -static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s, - struct tipc_subscriber *subscriber) -{ +static int subscr_subscribe(struct tipc_subscr *s, + struct tipc_subscriber *subscriber, + struct tipc_subscription **sub_p) { struct tipc_subscription *sub; int swap; @@ -291,23 +276,21 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s, if (s->filter & htohl(TIPC_SUB_CANCEL, swap)) { s->filter &= ~htohl(TIPC_SUB_CANCEL, swap); subscr_cancel(s, subscriber); - return NULL; + return 0; } /* Refuse subscription if global limit exceeded */ if (atomic_read(&subscription_count) >= TIPC_MAX_SUBSCRIPTIONS) { pr_warn("Subscription rejected, limit reached (%u)\n", TIPC_MAX_SUBSCRIPTIONS); - subscr_terminate(subscriber); - return NULL; + return -EINVAL; } /* Allocate subscription object */ sub = kmalloc(sizeof(*sub), GFP_ATOMIC); if (!sub) { pr_warn("Subscription rejected, no memory\n"); - subscr_terminate(subscriber); - return NULL; + return -ENOMEM; } /* Initialize subscription object */ @@ -321,8 +304,7 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s, (sub->seq.lower > sub->seq.upper)) { pr_warn("Subscription rejected, illegal request\n"); kfree(sub); - subscr_terminate(subscriber); - return NULL; + return -EINVAL; } INIT_LIST_HEAD(&sub->nameseq_list); list_add(&sub->subscription_list, &subscriber->subscription_list); @@ -335,8 +317,8 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s, (Handler)subscr_timeout, (unsigned long)sub); k_start_timer(&sub->timer, sub->timeout); } - - return sub; + *sub_p = sub; + return 0; } /* Handle one termination request for the subscriber */ @@ -350,10 +332,14 @@ static void subscr_conn_msg_event(int conid, struct sockaddr_tipc *addr, void *usr_data, void *buf, size_t len) { struct tipc_subscriber *subscriber = usr_data; - struct tipc_subscription *sub; + struct tipc_subscription *sub = NULL; spin_lock_bh(&subscriber->lock); - sub = subscr_subscribe((struct tipc_subscr *)buf, subscriber); + if (subscr_subscribe((struct tipc_subscr *)buf, subscriber, &sub) < 0) { + spin_unlock_bh(&subscriber->lock); + subscr_terminate(subscriber); + return; + } if (sub) tipc_nametbl_subscribe(sub); spin_unlock_bh(&subscriber->lock); diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index 29fc8bee9702..94404f19f9de 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -163,9 +163,8 @@ static inline void unix_set_secdata(struct scm_cookie *scm, struct sk_buff *skb) static inline unsigned int unix_hash_fold(__wsum n) { - unsigned int hash = (__force unsigned int)n; + unsigned int hash = (__force unsigned int)csum_fold(n); - hash ^= hash>>16; hash ^= hash>>8; return hash&(UNIX_HASH_SIZE-1); } @@ -1788,8 +1787,11 @@ static int unix_dgram_recvmsg(struct kiocb *iocb, struct socket *sock, goto out; err = mutex_lock_interruptible(&u->readlock); - if (err) { - err = sock_intr_errno(sock_rcvtimeo(sk, noblock)); + if (unlikely(err)) { + /* recvmsg() in non blocking mode is supposed to return -EAGAIN + * sk_rcvtimeo is not honored by mutex_lock_interruptible() + */ + err = noblock ? 
-EAGAIN : -ERESTARTSYS; goto out; } @@ -1914,6 +1916,7 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock, struct unix_sock *u = unix_sk(sk); DECLARE_SOCKADDR(struct sockaddr_un *, sunaddr, msg->msg_name); int copied = 0; + int noblock = flags & MSG_DONTWAIT; int check_creds = 0; int target; int err = 0; @@ -1929,7 +1932,7 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock, goto out; target = sock_rcvlowat(sk, flags&MSG_WAITALL, size); - timeo = sock_rcvtimeo(sk, flags&MSG_DONTWAIT); + timeo = sock_rcvtimeo(sk, noblock); /* Lock the socket to prevent queue disordering * while sleeps in memcpy_tomsg @@ -1941,8 +1944,11 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock, } err = mutex_lock_interruptible(&u->readlock); - if (err) { - err = sock_intr_errno(timeo); + if (unlikely(err)) { + /* recvmsg() in non blocking mode is supposed to return -EAGAIN + * sk_rcvtimeo is not honored by mutex_lock_interruptible() + */ + err = noblock ? -EAGAIN : -ERESTARTSYS; goto out; } diff --git a/net/wireless/core.c b/net/wireless/core.c index 010892b81a06..a3bf18d11609 100644 --- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -788,8 +788,6 @@ void cfg80211_leave(struct cfg80211_registered_device *rdev, default: break; } - - wdev->beacon_interval = 0; } static int cfg80211_netdev_notifier_call(struct notifier_block *nb, diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index c274179d60a2..2f7ddc3a59b4 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1221,7 +1221,7 @@ static int copy_from_user_sec_ctx(struct xfrm_policy *pol, struct nlattr **attrs return 0; uctx = nla_data(rt); - return security_xfrm_policy_alloc(&pol->security, uctx); + return security_xfrm_policy_alloc(&pol->security, uctx, GFP_KERNEL); } static void copy_templates(struct xfrm_policy *xp, struct xfrm_user_tmpl *ut, @@ -1626,7 +1626,7 @@ static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh, if (rt) { struct xfrm_user_sec_ctx *uctx = nla_data(rt); - err = security_xfrm_policy_alloc(&ctx, uctx); + err = security_xfrm_policy_alloc(&ctx, uctx, GFP_KERNEL); if (err) return err; } @@ -1928,7 +1928,7 @@ static int xfrm_add_pol_expire(struct sk_buff *skb, struct nlmsghdr *nlh, if (rt) { struct xfrm_user_sec_ctx *uctx = nla_data(rt); - err = security_xfrm_policy_alloc(&ctx, uctx); + err = security_xfrm_policy_alloc(&ctx, uctx, GFP_KERNEL); if (err) return err; } diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c index 276e84b8a8e5..10085de886fe 100644 --- a/scripts/kallsyms.c +++ b/scripts/kallsyms.c @@ -330,7 +330,8 @@ static void write_src(void) printf("\tPTR\t_text + %#llx\n", table[i].addr - _text); else - printf("\tPTR\t%#llx\n", table[i].addr); + printf("\tPTR\t_text - %#llx\n", + _text - table[i].addr); } else { printf("\tPTR\t%#llx\n", table[i].addr); } diff --git a/security/capability.c b/security/capability.c index 8b4f24ae4338..21e2b9cae685 100644 --- a/security/capability.c +++ b/security/capability.c @@ -757,7 +757,8 @@ static void cap_skb_owned_by(struct sk_buff *skb, struct sock *sk) #ifdef CONFIG_SECURITY_NETWORK_XFRM static int cap_xfrm_policy_alloc_security(struct xfrm_sec_ctx **ctxp, - struct xfrm_user_sec_ctx *sec_ctx) + struct xfrm_user_sec_ctx *sec_ctx, + gfp_t gfp) { return 0; } diff --git a/security/security.c b/security/security.c index 15b6928592ef..919cad93ac82 100644 --- a/security/security.c +++ b/security/security.c @@ -1317,9 +1317,11 @@ void security_skb_owned_by(struct sk_buff *skb, struct sock *sk) #ifdef 
CONFIG_SECURITY_NETWORK_XFRM -int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp, struct xfrm_user_sec_ctx *sec_ctx) +int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp, + struct xfrm_user_sec_ctx *sec_ctx, + gfp_t gfp) { - return security_ops->xfrm_policy_alloc_security(ctxp, sec_ctx); + return security_ops->xfrm_policy_alloc_security(ctxp, sec_ctx, gfp); } EXPORT_SYMBOL(security_xfrm_policy_alloc); diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index 4b34847208cc..b332e2cc0954 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -668,7 +668,7 @@ static int selinux_set_mnt_opts(struct super_block *sb, if (flags[i] == SBLABEL_MNT) continue; rc = security_context_to_sid(mount_options[i], - strlen(mount_options[i]), &sid); + strlen(mount_options[i]), &sid, GFP_KERNEL); if (rc) { printk(KERN_WARNING "SELinux: security_context_to_sid" "(%s) failed for (dev %s, type %s) errno=%d\n", @@ -2489,7 +2489,8 @@ static int selinux_sb_remount(struct super_block *sb, void *data) if (flags[i] == SBLABEL_MNT) continue; len = strlen(mount_options[i]); - rc = security_context_to_sid(mount_options[i], len, &sid); + rc = security_context_to_sid(mount_options[i], len, &sid, + GFP_KERNEL); if (rc) { printk(KERN_WARNING "SELinux: security_context_to_sid" "(%s) failed for (dev %s, type %s) errno=%d\n", @@ -2893,7 +2894,7 @@ static int selinux_inode_setxattr(struct dentry *dentry, const char *name, if (rc) return rc; - rc = security_context_to_sid(value, size, &newsid); + rc = security_context_to_sid(value, size, &newsid, GFP_KERNEL); if (rc == -EINVAL) { if (!capable(CAP_MAC_ADMIN)) { struct audit_buffer *ab; @@ -3050,7 +3051,7 @@ static int selinux_inode_setsecurity(struct inode *inode, const char *name, if (!value || !size) return -EACCES; - rc = security_context_to_sid((void *)value, size, &newsid); + rc = security_context_to_sid((void *)value, size, &newsid, GFP_KERNEL); if (rc) return rc; @@ -5529,7 +5530,7 @@ static int selinux_setprocattr(struct task_struct *p, str[size-1] = 0; size--; } - error = security_context_to_sid(value, size, &sid); + error = security_context_to_sid(value, size, &sid, GFP_KERNEL); if (error == -EINVAL && !strcmp(name, "fscreate")) { if (!capable(CAP_MAC_ADMIN)) { struct audit_buffer *ab; @@ -5638,7 +5639,7 @@ static int selinux_secid_to_secctx(u32 secid, char **secdata, u32 *seclen) static int selinux_secctx_to_secid(const char *secdata, u32 seclen, u32 *secid) { - return security_context_to_sid(secdata, seclen, secid); + return security_context_to_sid(secdata, seclen, secid, GFP_KERNEL); } static void selinux_release_secctx(char *secdata, u32 seclen) diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h index 8ed8daf7f1ee..ce7852cf526b 100644 --- a/security/selinux/include/security.h +++ b/security/selinux/include/security.h @@ -134,7 +134,7 @@ int security_sid_to_context(u32 sid, char **scontext, int security_sid_to_context_force(u32 sid, char **scontext, u32 *scontext_len); int security_context_to_sid(const char *scontext, u32 scontext_len, - u32 *out_sid); + u32 *out_sid, gfp_t gfp); int security_context_to_sid_default(const char *scontext, u32 scontext_len, u32 *out_sid, u32 def_sid, gfp_t gfp_flags); diff --git a/security/selinux/include/xfrm.h b/security/selinux/include/xfrm.h index 48c3cc94c168..9f0584710c85 100644 --- a/security/selinux/include/xfrm.h +++ b/security/selinux/include/xfrm.h @@ -10,7 +10,8 @@ #include <net/flow.h> int selinux_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp, - 
struct xfrm_user_sec_ctx *uctx); + struct xfrm_user_sec_ctx *uctx, + gfp_t gfp); int selinux_xfrm_policy_clone(struct xfrm_sec_ctx *old_ctx, struct xfrm_sec_ctx **new_ctxp); void selinux_xfrm_policy_free(struct xfrm_sec_ctx *ctx); diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c index 5122affe06a8..d60c0ee66387 100644 --- a/security/selinux/selinuxfs.c +++ b/security/selinux/selinuxfs.c @@ -576,7 +576,7 @@ static ssize_t sel_write_context(struct file *file, char *buf, size_t size) if (length) goto out; - length = security_context_to_sid(buf, size, &sid); + length = security_context_to_sid(buf, size, &sid, GFP_KERNEL); if (length) goto out; @@ -731,11 +731,13 @@ static ssize_t sel_write_access(struct file *file, char *buf, size_t size) if (sscanf(buf, "%s %s %hu", scon, tcon, &tclass) != 3) goto out; - length = security_context_to_sid(scon, strlen(scon) + 1, &ssid); + length = security_context_to_sid(scon, strlen(scon) + 1, &ssid, + GFP_KERNEL); if (length) goto out; - length = security_context_to_sid(tcon, strlen(tcon) + 1, &tsid); + length = security_context_to_sid(tcon, strlen(tcon) + 1, &tsid, + GFP_KERNEL); if (length) goto out; @@ -817,11 +819,13 @@ static ssize_t sel_write_create(struct file *file, char *buf, size_t size) objname = namebuf; } - length = security_context_to_sid(scon, strlen(scon) + 1, &ssid); + length = security_context_to_sid(scon, strlen(scon) + 1, &ssid, + GFP_KERNEL); if (length) goto out; - length = security_context_to_sid(tcon, strlen(tcon) + 1, &tsid); + length = security_context_to_sid(tcon, strlen(tcon) + 1, &tsid, + GFP_KERNEL); if (length) goto out; @@ -878,11 +882,13 @@ static ssize_t sel_write_relabel(struct file *file, char *buf, size_t size) if (sscanf(buf, "%s %s %hu", scon, tcon, &tclass) != 3) goto out; - length = security_context_to_sid(scon, strlen(scon) + 1, &ssid); + length = security_context_to_sid(scon, strlen(scon) + 1, &ssid, + GFP_KERNEL); if (length) goto out; - length = security_context_to_sid(tcon, strlen(tcon) + 1, &tsid); + length = security_context_to_sid(tcon, strlen(tcon) + 1, &tsid, + GFP_KERNEL); if (length) goto out; @@ -934,7 +940,7 @@ static ssize_t sel_write_user(struct file *file, char *buf, size_t size) if (sscanf(buf, "%s %s", con, user) != 2) goto out; - length = security_context_to_sid(con, strlen(con) + 1, &sid); + length = security_context_to_sid(con, strlen(con) + 1, &sid, GFP_KERNEL); if (length) goto out; @@ -994,11 +1000,13 @@ static ssize_t sel_write_member(struct file *file, char *buf, size_t size) if (sscanf(buf, "%s %s %hu", scon, tcon, &tclass) != 3) goto out; - length = security_context_to_sid(scon, strlen(scon) + 1, &ssid); + length = security_context_to_sid(scon, strlen(scon) + 1, &ssid, + GFP_KERNEL); if (length) goto out; - length = security_context_to_sid(tcon, strlen(tcon) + 1, &tsid); + length = security_context_to_sid(tcon, strlen(tcon) + 1, &tsid, + GFP_KERNEL); if (length) goto out; diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c index 5d0144ee8ed6..4bca49414a40 100644 --- a/security/selinux/ss/services.c +++ b/security/selinux/ss/services.c @@ -1289,16 +1289,18 @@ out: * @scontext: security context * @scontext_len: length in bytes * @sid: security identifier, SID + * @gfp: context for the allocation * * Obtains a SID associated with the security context that * has the string representation specified by @scontext. * Returns -%EINVAL if the context is invalid, -%ENOMEM if insufficient * memory is available, or 0 on success. 
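The @gfp parameter documented above is the point of this whole series of signature changes: instead of security_context_to_sid_core() hard-coding GFP_KERNEL, each caller states its own allocation context, so PF_KEY/XFRM paths that run under locks can pass GFP_ATOMIC (as pfkey_compile_policy() does earlier). A sketch of a call site; my_label_to_sid() and can_sleep are illustrative:

/* Sketch: the caller, not the helper, knows whether it may sleep. */
static int my_label_to_sid(const char *my_label, bool can_sleep, u32 *sid)
{
	/* -EINVAL: malformed context, -ENOMEM: allocation failure */
	return security_context_to_sid(my_label, strlen(my_label) + 1, sid,
				       can_sleep ? GFP_KERNEL : GFP_ATOMIC);
}
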
*/ -int security_context_to_sid(const char *scontext, u32 scontext_len, u32 *sid) +int security_context_to_sid(const char *scontext, u32 scontext_len, u32 *sid, + gfp_t gfp) { return security_context_to_sid_core(scontext, scontext_len, - sid, SECSID_NULL, GFP_KERNEL, 0); + sid, SECSID_NULL, gfp, 0); } /** diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c index 0462cb3ff0a7..98b042630a9e 100644 --- a/security/selinux/xfrm.c +++ b/security/selinux/xfrm.c @@ -78,7 +78,8 @@ static inline int selinux_authorizable_xfrm(struct xfrm_state *x) * xfrm_user_sec_ctx context. */ static int selinux_xfrm_alloc_user(struct xfrm_sec_ctx **ctxp, - struct xfrm_user_sec_ctx *uctx) + struct xfrm_user_sec_ctx *uctx, + gfp_t gfp) { int rc; const struct task_security_struct *tsec = current_security(); @@ -94,7 +95,7 @@ static int selinux_xfrm_alloc_user(struct xfrm_sec_ctx **ctxp, if (str_len >= PAGE_SIZE) return -ENOMEM; - ctx = kmalloc(sizeof(*ctx) + str_len + 1, GFP_KERNEL); + ctx = kmalloc(sizeof(*ctx) + str_len + 1, gfp); if (!ctx) return -ENOMEM; @@ -103,7 +104,7 @@ static int selinux_xfrm_alloc_user(struct xfrm_sec_ctx **ctxp, ctx->ctx_len = str_len; memcpy(ctx->ctx_str, &uctx[1], str_len); ctx->ctx_str[str_len] = '\0'; - rc = security_context_to_sid(ctx->ctx_str, str_len, &ctx->ctx_sid); + rc = security_context_to_sid(ctx->ctx_str, str_len, &ctx->ctx_sid, gfp); if (rc) goto err; @@ -282,9 +283,10 @@ int selinux_xfrm_skb_sid(struct sk_buff *skb, u32 *sid) * LSM hook implementation that allocs and transfers uctx spec to xfrm_policy. */ int selinux_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp, - struct xfrm_user_sec_ctx *uctx) + struct xfrm_user_sec_ctx *uctx, + gfp_t gfp) { - return selinux_xfrm_alloc_user(ctxp, uctx); + return selinux_xfrm_alloc_user(ctxp, uctx, gfp); } /* @@ -332,7 +334,7 @@ int selinux_xfrm_policy_delete(struct xfrm_sec_ctx *ctx) int selinux_xfrm_state_alloc(struct xfrm_state *x, struct xfrm_user_sec_ctx *uctx) { - return selinux_xfrm_alloc_user(&x->security, uctx); + return selinux_xfrm_alloc_user(&x->security, uctx, GFP_KERNEL); } /* diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c index 7a20897d33db..7403f348ed14 100644 --- a/sound/core/compress_offload.c +++ b/sound/core/compress_offload.c @@ -133,7 +133,7 @@ static int snd_compr_open(struct inode *inode, struct file *f) kfree(data); } snd_card_unref(compr->card); - return 0; + return ret; } static int snd_compr_free(struct inode *inode, struct file *f) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 850296a1e0ff..8d0a84436674 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -3616,6 +3616,19 @@ static void alc_fixup_auto_mute_via_amp(struct hda_codec *codec, } } +static void alc_no_shutup(struct hda_codec *codec) +{ +} + +static void alc_fixup_no_shutup(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + if (action == HDA_FIXUP_ACT_PRE_PROBE) { + struct alc_spec *spec = codec->spec; + spec->shutup = alc_no_shutup; + } +} + static void alc_fixup_headset_mode_alc668(struct hda_codec *codec, const struct hda_fixup *fix, int action) { @@ -3844,6 +3857,7 @@ enum { ALC269_FIXUP_HP_GPIO_LED, ALC269_FIXUP_INV_DMIC, ALC269_FIXUP_LENOVO_DOCK, + ALC269_FIXUP_NO_SHUTUP, ALC286_FIXUP_SONY_MIC_NO_PRESENCE, ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT, ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, @@ -4020,6 +4034,10 @@ static const struct hda_fixup alc269_fixups[] = { .type = HDA_FIXUP_FUNC, .v.func = alc_fixup_inv_dmic_0x12, }, + 
[ALC269_FIXUP_NO_SHUTUP] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc_fixup_no_shutup, + }, [ALC269_FIXUP_LENOVO_DOCK] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -4405,6 +4423,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), + SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP), SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC), SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), diff --git a/sound/pci/oxygen/xonar_dg.c b/sound/pci/oxygen/xonar_dg.c index ed6f199f8a38..4cf3200e988b 100644 --- a/sound/pci/oxygen/xonar_dg.c +++ b/sound/pci/oxygen/xonar_dg.c @@ -238,11 +238,21 @@ void set_cs4245_adc_params(struct oxygen *chip, cs4245_write_spi(chip, CS4245_MCLK_FREQ); } +static inline unsigned int shift_bits(unsigned int value, + unsigned int shift_from, + unsigned int shift_to, + unsigned int mask) +{ + if (shift_from < shift_to) + return (value << (shift_to - shift_from)) & mask; + else + return (value >> (shift_from - shift_to)) & mask; +} + unsigned int adjust_dg_dac_routing(struct oxygen *chip, unsigned int play_routing) { struct dg *data = chip->model_data; - unsigned int routing = 0; switch (data->output_sel) { case PLAYBACK_DST_HP: @@ -252,15 +262,23 @@ unsigned int adjust_dg_dac_routing(struct oxygen *chip, OXYGEN_PLAY_MUTE67, OXYGEN_PLAY_MUTE_MASK); break; case PLAYBACK_DST_MULTICH: - routing = (0 << OXYGEN_PLAY_DAC0_SOURCE_SHIFT) | - (2 << OXYGEN_PLAY_DAC1_SOURCE_SHIFT) | - (1 << OXYGEN_PLAY_DAC2_SOURCE_SHIFT) | - (0 << OXYGEN_PLAY_DAC3_SOURCE_SHIFT); oxygen_write8_masked(chip, OXYGEN_PLAY_ROUTING, OXYGEN_PLAY_MUTE01, OXYGEN_PLAY_MUTE_MASK); break; } - return routing; + return (play_routing & OXYGEN_PLAY_DAC0_SOURCE_MASK) | + shift_bits(play_routing, + OXYGEN_PLAY_DAC2_SOURCE_SHIFT, + OXYGEN_PLAY_DAC1_SOURCE_SHIFT, + OXYGEN_PLAY_DAC1_SOURCE_MASK) | + shift_bits(play_routing, + OXYGEN_PLAY_DAC1_SOURCE_SHIFT, + OXYGEN_PLAY_DAC2_SOURCE_SHIFT, + OXYGEN_PLAY_DAC2_SOURCE_MASK) | + shift_bits(play_routing, + OXYGEN_PLAY_DAC0_SOURCE_SHIFT, + OXYGEN_PLAY_DAC3_SOURCE_SHIFT, + OXYGEN_PLAY_DAC3_SOURCE_MASK); } void dump_cs4245_registers(struct oxygen *chip, diff --git a/sound/soc/codecs/88pm860x-codec.c b/sound/soc/codecs/88pm860x-codec.c index 75d0ad5d2dcb..647a72cda005 100644 --- a/sound/soc/codecs/88pm860x-codec.c +++ b/sound/soc/codecs/88pm860x-codec.c @@ -1328,6 +1328,9 @@ static int pm860x_probe(struct snd_soc_codec *codec) pm860x->codec = codec; codec->control_data = pm860x->regmap; + ret = snd_soc_codec_set_cache_io(codec, 0, 0, SND_SOC_REGMAP); + if (ret) + return ret; for (i = 0; i < 4; i++) { ret = request_threaded_irq(pm860x->irq[i], NULL, diff --git a/sound/soc/codecs/si476x.c b/sound/soc/codecs/si476x.c index 52e7cb08434b..fa2b8e07f420 100644 --- a/sound/soc/codecs/si476x.c +++ b/sound/soc/codecs/si476x.c @@ -210,7 +210,7 @@ out: static int si476x_codec_probe(struct snd_soc_codec *codec) { codec->control_data = dev_get_regmap(codec->dev->parent, NULL); - return 0; + return snd_soc_codec_set_cache_io(codec, 0, 0, SND_SOC_REGMAP); } static struct snd_soc_dai_ops si476x_dai_ops = { diff --git a/sound/soc/omap/n810.c b/sound/soc/omap/n810.c index 3fde9e402710..d163e18d85d4 100644 --- 
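The xonar_dg hunk above derives the DAC routing from the requested play_routing by permuting bit-fields with the new shift_bits() helper instead of returning a fixed constant. A self-contained illustration of the same helper; the two-field register layout below is invented for the example:

#include <assert.h>
#include <stdio.h>

/* Same shape as the helper added above: move a masked field between
 * two shifted positions within a register value. */
static unsigned int shift_bits(unsigned int value, unsigned int shift_from,
			       unsigned int shift_to, unsigned int mask)
{
	if (shift_from < shift_to)
		return (value << (shift_to - shift_from)) & mask;
	else
		return (value >> (shift_from - shift_to)) & mask;
}

int main(void)
{
	/* Hypothetical layout: two 2-bit source fields at bits 0-1 and 2-3. */
	unsigned int reg = 0x2 | (0x1 << 2);	/* field A = 2, field B = 1 */

	/* Swap the fields: A moves to bits 2-3, B moves to bits 0-1. */
	unsigned int swapped = shift_bits(reg, 0, 2, 0x3 << 2) |
			       shift_bits(reg, 2, 0, 0x3);

	assert(swapped == (0x1 | (0x2 << 2)));
	printf("0x%x -> 0x%x\n", reg, swapped);
	return 0;
}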
a/sound/soc/omap/n810.c +++ b/sound/soc/omap/n810.c @@ -305,7 +305,9 @@ static int __init n810_soc_init(void) int err; struct device *dev; - if (!(machine_is_nokia_n810() || machine_is_nokia_n810_wimax())) + if (!of_have_populated_dt() || + (!of_machine_is_compatible("nokia,n810") && + !of_machine_is_compatible("nokia,n810-wimax"))) return -ENODEV; n810_snd_device = platform_device_alloc("soc-audio", -1); diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c index 47e1ce771e65..28522bd03b8e 100644 --- a/sound/soc/soc-pcm.c +++ b/sound/soc/soc-pcm.c @@ -1989,6 +1989,7 @@ int soc_dpcm_runtime_update(struct snd_soc_card *card) paths = dpcm_path_get(fe, SNDRV_PCM_STREAM_PLAYBACK, &list); if (paths < 0) { + dpcm_path_put(&list); dev_warn(fe->dev, "ASoC: %s no valid %s path\n", fe->dai_link->name, "playback"); mutex_unlock(&card->mutex); @@ -2018,6 +2019,7 @@ capture: paths = dpcm_path_get(fe, SNDRV_PCM_STREAM_CAPTURE, &list); if (paths < 0) { + dpcm_path_put(&list); dev_warn(fe->dev, "ASoC: %s no valid %s path\n", fe->dai_link->name, "capture"); mutex_unlock(&card->mutex); @@ -2082,6 +2084,7 @@ static int dpcm_fe_dai_open(struct snd_pcm_substream *fe_substream) fe->dpcm[stream].runtime = fe_substream->runtime; if (dpcm_path_get(fe, stream, &list) <= 0) { + dpcm_path_put(&list); dev_dbg(fe->dev, "ASoC: %s no valid %s route\n", fe->dai_link->name, stream ? "capture" : "playback"); } diff --git a/tools/include/linux/hash.h b/tools/include/linux/hash.h new file mode 100644 index 000000000000..d026c6573018 --- /dev/null +++ b/tools/include/linux/hash.h @@ -0,0 +1,5 @@ +#include "../../../include/linux/hash.h" + +#ifndef _TOOLS_LINUX_HASH_H +#define _TOOLS_LINUX_HASH_H +#endif diff --git a/tools/lib/api/Makefile b/tools/lib/api/Makefile index ed2f51e11b80..ce00f7ee6455 100644 --- a/tools/lib/api/Makefile +++ b/tools/lib/api/Makefile @@ -9,8 +9,10 @@ LIB_H= LIB_OBJS= LIB_H += fs/debugfs.h +LIB_H += fs/fs.h LIB_OBJS += $(OUTPUT)fs/debugfs.o +LIB_OBJS += $(OUTPUT)fs/fs.o LIBFILE = libapikfs.a diff --git a/tools/perf/util/fs.c b/tools/lib/api/fs/fs.c index f5be1f26e724..5b5eb788996e 100644 --- a/tools/perf/util/fs.c +++ b/tools/lib/api/fs/fs.c @@ -1,8 +1,13 @@ +/* TODO merge/factor in debugfs.c here */ -/* TODO merge/factor into tools/lib/lk/debugfs.c */ +#include <errno.h> +#include <stdbool.h> +#include <stdio.h> +#include <string.h> +#include <sys/vfs.h> -#include "util.h" -#include "util/fs.h" +#include "debugfs.h" +#include "fs.h" static const char * const sysfs__fs_known_mountpoints[] = { "/sys", diff --git a/tools/perf/util/include/linux/magic.h b/tools/lib/api/fs/fs.h index 07d63cf3e0f6..cb7049551f33 100644 --- a/tools/perf/util/include/linux/magic.h +++ b/tools/lib/api/fs/fs.h @@ -1,9 +1,5 @@ -#ifndef _PERF_LINUX_MAGIC_H_ -#define _PERF_LINUX_MAGIC_H_ - -#ifndef DEBUGFS_MAGIC -#define DEBUGFS_MAGIC 0x64626720 -#endif +#ifndef __API_FS__ +#define __API_FS__ #ifndef SYSFS_MAGIC #define SYSFS_MAGIC 0x62656572 @@ -13,4 +9,6 @@ #define PROC_SUPER_MAGIC 0x9fa0 #endif -#endif +const char *sysfs__mountpoint(void); +const char *procfs__mountpoint(void); +#endif /* __API_FS__ */ diff --git a/tools/net/Makefile b/tools/net/Makefile index 004cd74734b6..ee577ea03ba5 100644 --- a/tools/net/Makefile +++ b/tools/net/Makefile @@ -12,7 +12,7 @@ YACC = bison all : bpf_jit_disasm bpf_dbg bpf_asm -bpf_jit_disasm : CFLAGS = -Wall -O2 +bpf_jit_disasm : CFLAGS = -Wall -O2 -DPACKAGE='bpf_jit_disasm' bpf_jit_disasm : LDLIBS = -lopcodes -lbfd -ldl bpf_jit_disasm : bpf_jit_disasm.o diff --git 
a/tools/perf/Documentation/perf-mem.txt b/tools/perf/Documentation/perf-mem.txt index 888d51137fbe..1d78a4064da4 100644 --- a/tools/perf/Documentation/perf-mem.txt +++ b/tools/perf/Documentation/perf-mem.txt @@ -18,6 +18,10 @@ from it, into perf.data. Perf record options are accepted and are passed through "perf mem -t <TYPE> report" displays the result. It invokes perf report with the right set of options to display a memory access profile. +Note that on Intel systems the memory latency reported is the use-latency, +not the pure load (or store) latency. Use-latency includes any pipeline +queueing delays in addition to the memory subsystem latency. + OPTIONS ------- <command>...:: diff --git a/tools/perf/Documentation/perf-probe.txt b/tools/perf/Documentation/perf-probe.txt index b715cb71592b..1513935c399b 100644 --- a/tools/perf/Documentation/perf-probe.txt +++ b/tools/perf/Documentation/perf-probe.txt @@ -136,6 +136,8 @@ Each probe argument follows below syntax. 'NAME' specifies the name of this argument (optional). You can use the name of local variable, local data structure member (e.g. var->field, var.field2), local array with fixed index (e.g. array[1], var->array[0], var->pointer[2]), or kprobe-tracer argument format (e.g. $retval, %ax, etc). Note that the name of this argument will be set as the last member name if you specify a local data structure member (e.g. field2 for 'var->field1.field2'). 'TYPE' casts the type of this argument (optional). If omitted, perf probe automatically sets the type based on debuginfo. You can specify 'string' type only for the local variable or structure member which is an array of or a pointer to 'char' or 'unsigned char' type. +On x86 systems %REG is always the short form of the register: for example %AX. %RAX or %EAX is not valid. + LINE SYNTAX ----------- Line range is described by following syntax. diff --git a/tools/perf/MANIFEST b/tools/perf/MANIFEST index f41572d0dd76..c0c87c87b60f 100644 --- a/tools/perf/MANIFEST +++ b/tools/perf/MANIFEST @@ -6,6 +6,7 @@ tools/lib/symbol/kallsyms.c tools/lib/symbol/kallsyms.h tools/include/asm/bug.h tools/include/linux/compiler.h +tools/include/linux/hash.h include/linux/const.h include/linux/perf_event.h include/linux/rbtree.h diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf index 7257e7e9e38a..50d875d970c4 100644 --- a/tools/perf/Makefile.perf +++ b/tools/perf/Makefile.perf @@ -7,6 +7,8 @@ include config/utilities.mak # Define V to have a more verbose compile. # +# Define VF to have a more verbose feature check output. +# # Define O to save output files in a separate directory. # # Define ARCH as name of target architecture if you want cross-builds. @@ -55,6 +57,9 @@ include config/utilities.mak # Define NO_LIBAUDIT if you do not want libaudit support # # Define NO_LIBBIONIC if you do not want bionic support +# +# Define NO_LIBDW_DWARF_UNWIND if you do not want libdw support +# for dwarf backtrace post unwind.
ifeq ($(srctree),) srctree := $(patsubst %/,%,$(dir $(shell pwd))) @@ -208,7 +213,7 @@ LIB_H += ../../include/uapi/linux/perf_event.h LIB_H += ../../include/linux/rbtree.h LIB_H += ../../include/linux/list.h LIB_H += ../../include/uapi/linux/const.h -LIB_H += ../../include/linux/hash.h +LIB_H += ../include/linux/hash.h LIB_H += ../../include/linux/stringify.h LIB_H += util/include/linux/bitmap.h LIB_H += util/include/linux/bitops.h @@ -218,9 +223,7 @@ LIB_H += util/include/linux/ctype.h LIB_H += util/include/linux/kernel.h LIB_H += util/include/linux/list.h LIB_H += util/include/linux/export.h -LIB_H += util/include/linux/magic.h LIB_H += util/include/linux/poison.h -LIB_H += util/include/linux/prefetch.h LIB_H += util/include/linux/rbtree.h LIB_H += util/include/linux/rbtree_augmented.h LIB_H += util/include/linux/string.h @@ -244,7 +247,6 @@ LIB_H += util/cache.h LIB_H += util/callchain.h LIB_H += util/build-id.h LIB_H += util/debug.h -LIB_H += util/fs.h LIB_H += util/pmu.h LIB_H += util/event.h LIB_H += util/evsel.h @@ -306,7 +308,6 @@ LIB_OBJS += $(OUTPUT)util/annotate.o LIB_OBJS += $(OUTPUT)util/build-id.o LIB_OBJS += $(OUTPUT)util/config.o LIB_OBJS += $(OUTPUT)util/ctype.o -LIB_OBJS += $(OUTPUT)util/fs.o LIB_OBJS += $(OUTPUT)util/pmu.o LIB_OBJS += $(OUTPUT)util/environment.o LIB_OBJS += $(OUTPUT)util/event.o @@ -408,6 +409,11 @@ endif LIB_OBJS += $(OUTPUT)tests/code-reading.o LIB_OBJS += $(OUTPUT)tests/sample-parsing.o LIB_OBJS += $(OUTPUT)tests/parse-no-sample-id-all.o +ifndef NO_DWARF_UNWIND +ifeq ($(ARCH),x86) +LIB_OBJS += $(OUTPUT)tests/dwarf-unwind.o +endif +endif BUILTIN_OBJS += $(OUTPUT)builtin-annotate.o BUILTIN_OBJS += $(OUTPUT)builtin-bench.o @@ -420,6 +426,9 @@ BUILTIN_OBJS += $(OUTPUT)bench/mem-memset-x86-64-asm.o endif BUILTIN_OBJS += $(OUTPUT)bench/mem-memcpy.o BUILTIN_OBJS += $(OUTPUT)bench/mem-memset.o +BUILTIN_OBJS += $(OUTPUT)bench/futex-hash.o +BUILTIN_OBJS += $(OUTPUT)bench/futex-wake.o +BUILTIN_OBJS += $(OUTPUT)bench/futex-requeue.o BUILTIN_OBJS += $(OUTPUT)builtin-diff.o BUILTIN_OBJS += $(OUTPUT)builtin-evlist.o @@ -475,8 +484,13 @@ ifndef NO_DWARF endif # NO_DWARF endif # NO_LIBELF +ifndef NO_LIBDW_DWARF_UNWIND + LIB_OBJS += $(OUTPUT)util/unwind-libdw.o + LIB_H += util/unwind-libdw.h +endif + ifndef NO_LIBUNWIND - LIB_OBJS += $(OUTPUT)util/unwind.o + LIB_OBJS += $(OUTPUT)util/unwind-libunwind.o endif LIB_OBJS += $(OUTPUT)tests/keep-tracking.o @@ -533,6 +547,7 @@ ifeq ($(NO_PERF_REGS),0) ifeq ($(ARCH),x86) LIB_H += arch/x86/include/perf_regs.h endif + LIB_OBJS += $(OUTPUT)util/perf_regs.o endif ifndef NO_LIBNUMA @@ -655,6 +670,9 @@ $(OUTPUT)tests/python-use.o: tests/python-use.c $(OUTPUT)PERF-CFLAGS -DPYTHON='"$(PYTHON_WORD)"' \ $< +$(OUTPUT)tests/dwarf-unwind.o: tests/dwarf-unwind.c + $(QUIET_CC)$(CC) -o $@ -c $(CFLAGS) -fno-optimize-sibling-calls $< + $(OUTPUT)util/config.o: util/config.c $(OUTPUT)PERF-CFLAGS $(QUIET_CC)$(CC) -o $@ -c $(CFLAGS) -DETC_PERFCONFIG='"$(ETC_PERFCONFIG_SQ)"' $< @@ -707,9 +725,15 @@ $(patsubst perf-%,%.o,$(PROGRAMS)): $(LIB_H) $(wildcard */*.h) # we depend the various files onto their directories. 
DIRECTORY_DEPS = $(LIB_OBJS) $(BUILTIN_OBJS) $(GTK_OBJS) DIRECTORY_DEPS += $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h -$(DIRECTORY_DEPS): | $(sort $(dir $(DIRECTORY_DEPS))) +# no need to add flex objects, because they depend on bison ones +DIRECTORY_DEPS += $(OUTPUT)util/parse-events-bison.c +DIRECTORY_DEPS += $(OUTPUT)util/pmu-bison.c + +OUTPUT_DIRECTORIES := $(sort $(dir $(DIRECTORY_DEPS))) + +$(DIRECTORY_DEPS): | $(OUTPUT_DIRECTORIES) # In the second step, we make a rule to actually create these directories -$(sort $(dir $(DIRECTORY_DEPS))): +$(OUTPUT_DIRECTORIES): $(QUIET_MKDIR)$(MKDIR) -p $@ 2>/dev/null $(LIB_FILE): $(LIB_OBJS) @@ -886,7 +910,7 @@ config-clean: clean: $(LIBTRACEEVENT)-clean $(LIBAPIKFS)-clean config-clean $(call QUIET_CLEAN, core-objs) $(RM) $(LIB_OBJS) $(BUILTIN_OBJS) $(LIB_FILE) $(OUTPUT)perf-archive $(OUTPUT)perf.o $(LANG_BINDINGS) $(GTK_OBJS) $(call QUIET_CLEAN, core-progs) $(RM) $(ALL_PROGRAMS) perf - $(call QUIET_CLEAN, core-gen) $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)PERF-CFLAGS $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex* + $(call QUIET_CLEAN, core-gen) $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)PERF-CFLAGS $(OUTPUT)PERF-FEATURES $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex* $(QUIET_SUBDIR0)Documentation $(QUIET_SUBDIR1) clean $(python-clean) diff --git a/tools/perf/arch/arm/Makefile b/tools/perf/arch/arm/Makefile index fe9b61e322a5..67e9b3d38e89 100644 --- a/tools/perf/arch/arm/Makefile +++ b/tools/perf/arch/arm/Makefile @@ -3,5 +3,5 @@ PERF_HAVE_DWARF_REGS := 1 LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/dwarf-regs.o endif ifndef NO_LIBUNWIND -LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/unwind.o +LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/unwind-libunwind.o endif diff --git a/tools/perf/arch/arm/util/unwind.c b/tools/perf/arch/arm/util/unwind-libunwind.c index da3dc950550c..729ed69a6664 100644 --- a/tools/perf/arch/arm/util/unwind.c +++ b/tools/perf/arch/arm/util/unwind-libunwind.c @@ -4,7 +4,7 @@ #include "perf_regs.h" #include "../../util/unwind.h" -int unwind__arch_reg_id(int regnum) +int libunwind__arch_reg_id(int regnum) { switch (regnum) { case UNW_ARM_R0: diff --git a/tools/perf/arch/x86/Makefile b/tools/perf/arch/x86/Makefile index 8801fe02f206..1641542e3636 100644 --- a/tools/perf/arch/x86/Makefile +++ b/tools/perf/arch/x86/Makefile @@ -3,7 +3,14 @@ PERF_HAVE_DWARF_REGS := 1 LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/dwarf-regs.o endif ifndef NO_LIBUNWIND -LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/unwind.o +LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/unwind-libunwind.o +endif +ifndef NO_LIBDW_DWARF_UNWIND +LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/unwind-libdw.o +endif +ifndef NO_DWARF_UNWIND +LIB_OBJS += $(OUTPUT)arch/$(ARCH)/tests/regs_load.o +LIB_OBJS += $(OUTPUT)arch/$(ARCH)/tests/dwarf-unwind.o endif LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/header.o LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/tsc.o diff --git a/tools/perf/arch/x86/include/perf_regs.h b/tools/perf/arch/x86/include/perf_regs.h index e84ca76aae77..fc819ca34a7e 100644 --- a/tools/perf/arch/x86/include/perf_regs.h +++ b/tools/perf/arch/x86/include/perf_regs.h @@ -5,14 +5,20 @@ #include "../../util/types.h" #include <asm/perf_regs.h> +void perf_regs_load(u64 *regs); + #ifndef HAVE_ARCH_X86_64_SUPPORT #define PERF_REGS_MASK ((1ULL << PERF_REG_X86_32_MAX) - 1) +#define PERF_REGS_MAX PERF_REG_X86_32_MAX +#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32 #else 
#define REG_NOSUPPORT ((1ULL << PERF_REG_X86_DS) | \ (1ULL << PERF_REG_X86_ES) | \ (1ULL << PERF_REG_X86_FS) | \ (1ULL << PERF_REG_X86_GS)) #define PERF_REGS_MASK (((1ULL << PERF_REG_X86_64_MAX) - 1) & ~REG_NOSUPPORT) +#define PERF_REGS_MAX PERF_REG_X86_64_MAX +#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_64 #endif #define PERF_REG_IP PERF_REG_X86_IP #define PERF_REG_SP PERF_REG_X86_SP diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c new file mode 100644 index 000000000000..b602ad93ce63 --- /dev/null +++ b/tools/perf/arch/x86/tests/dwarf-unwind.c @@ -0,0 +1,59 @@ +#include <string.h> +#include "perf_regs.h" +#include "thread.h" +#include "map.h" +#include "event.h" +#include "tests/tests.h" + +#define STACK_SIZE 8192 + +static int sample_ustack(struct perf_sample *sample, + struct thread *thread, u64 *regs) +{ + struct stack_dump *stack = &sample->user_stack; + struct map *map; + unsigned long sp; + u64 stack_size, *buf; + + buf = malloc(STACK_SIZE); + if (!buf) { + pr_debug("failed to allocate sample uregs data\n"); + return -1; + } + + sp = (unsigned long) regs[PERF_REG_X86_SP]; + + map = map_groups__find(&thread->mg, MAP__FUNCTION, (u64) sp); + if (!map) { + pr_debug("failed to get stack map\n"); + return -1; + } + + stack_size = map->end - sp; + stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size; + + memcpy(buf, (void *) sp, stack_size); + stack->data = (char *) buf; + stack->size = stack_size; + return 0; +} + +int test__arch_unwind_sample(struct perf_sample *sample, + struct thread *thread) +{ + struct regs_dump *regs = &sample->user_regs; + u64 *buf; + + buf = malloc(sizeof(u64) * PERF_REGS_MAX); + if (!buf) { + pr_debug("failed to allocate sample uregs data\n"); + return -1; + } + + perf_regs_load(buf); + regs->abi = PERF_SAMPLE_REGS_ABI; + regs->regs = buf; + regs->mask = PERF_REGS_MASK; + + return sample_ustack(sample, thread, buf); +} diff --git a/tools/perf/arch/x86/tests/regs_load.S b/tools/perf/arch/x86/tests/regs_load.S new file mode 100644 index 000000000000..99167bf644ea --- /dev/null +++ b/tools/perf/arch/x86/tests/regs_load.S @@ -0,0 +1,92 @@ + +#include <linux/linkage.h> + +#define AX 0 +#define BX 1 * 8 +#define CX 2 * 8 +#define DX 3 * 8 +#define SI 4 * 8 +#define DI 5 * 8 +#define BP 6 * 8 +#define SP 7 * 8 +#define IP 8 * 8 +#define FLAGS 9 * 8 +#define CS 10 * 8 +#define SS 11 * 8 +#define DS 12 * 8 +#define ES 13 * 8 +#define FS 14 * 8 +#define GS 15 * 8 +#define R8 16 * 8 +#define R9 17 * 8 +#define R10 18 * 8 +#define R11 19 * 8 +#define R12 20 * 8 +#define R13 21 * 8 +#define R14 22 * 8 +#define R15 23 * 8 + +.text +#ifdef HAVE_ARCH_X86_64_SUPPORT +ENTRY(perf_regs_load) + movq %rax, AX(%rdi) + movq %rbx, BX(%rdi) + movq %rcx, CX(%rdi) + movq %rdx, DX(%rdi) + movq %rsi, SI(%rdi) + movq %rdi, DI(%rdi) + movq %rbp, BP(%rdi) + + leaq 8(%rsp), %rax /* exclude this call. 
*/ + movq %rax, SP(%rdi) + + movq 0(%rsp), %rax + movq %rax, IP(%rdi) + + movq $0, FLAGS(%rdi) + movq $0, CS(%rdi) + movq $0, SS(%rdi) + movq $0, DS(%rdi) + movq $0, ES(%rdi) + movq $0, FS(%rdi) + movq $0, GS(%rdi) + + movq %r8, R8(%rdi) + movq %r9, R9(%rdi) + movq %r10, R10(%rdi) + movq %r11, R11(%rdi) + movq %r12, R12(%rdi) + movq %r13, R13(%rdi) + movq %r14, R14(%rdi) + movq %r15, R15(%rdi) + ret +ENDPROC(perf_regs_load) +#else +ENTRY(perf_regs_load) + push %edi + movl 8(%esp), %edi + movl %eax, AX(%edi) + movl %ebx, BX(%edi) + movl %ecx, CX(%edi) + movl %edx, DX(%edi) + movl %esi, SI(%edi) + pop %eax + movl %eax, DI(%edi) + movl %ebp, BP(%edi) + + leal 4(%esp), %eax /* exclude this call. */ + movl %eax, SP(%edi) + + movl 0(%esp), %eax + movl %eax, IP(%edi) + + movl $0, FLAGS(%edi) + movl $0, CS(%edi) + movl $0, SS(%edi) + movl $0, DS(%edi) + movl $0, ES(%edi) + movl $0, FS(%edi) + movl $0, GS(%edi) + ret +ENDPROC(perf_regs_load) +#endif diff --git a/tools/perf/arch/x86/util/unwind-libdw.c b/tools/perf/arch/x86/util/unwind-libdw.c new file mode 100644 index 000000000000..c4b72176ca83 --- /dev/null +++ b/tools/perf/arch/x86/util/unwind-libdw.c @@ -0,0 +1,51 @@ +#include <elfutils/libdwfl.h> +#include "../../util/unwind-libdw.h" +#include "../../util/perf_regs.h" + +bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg) +{ + struct unwind_info *ui = arg; + struct regs_dump *user_regs = &ui->sample->user_regs; + Dwarf_Word dwarf_regs[17]; + unsigned nregs; + +#define REG(r) ({ \ + Dwarf_Word val = 0; \ + perf_reg_value(&val, user_regs, PERF_REG_X86_##r); \ + val; \ +}) + + if (user_regs->abi == PERF_SAMPLE_REGS_ABI_32) { + dwarf_regs[0] = REG(AX); + dwarf_regs[1] = REG(CX); + dwarf_regs[2] = REG(DX); + dwarf_regs[3] = REG(BX); + dwarf_regs[4] = REG(SP); + dwarf_regs[5] = REG(BP); + dwarf_regs[6] = REG(SI); + dwarf_regs[7] = REG(DI); + dwarf_regs[8] = REG(IP); + nregs = 9; + } else { + dwarf_regs[0] = REG(AX); + dwarf_regs[1] = REG(DX); + dwarf_regs[2] = REG(CX); + dwarf_regs[3] = REG(BX); + dwarf_regs[4] = REG(SI); + dwarf_regs[5] = REG(DI); + dwarf_regs[6] = REG(BP); + dwarf_regs[7] = REG(SP); + dwarf_regs[8] = REG(R8); + dwarf_regs[9] = REG(R9); + dwarf_regs[10] = REG(R10); + dwarf_regs[11] = REG(R11); + dwarf_regs[12] = REG(R12); + dwarf_regs[13] = REG(R13); + dwarf_regs[14] = REG(R14); + dwarf_regs[15] = REG(R15); + dwarf_regs[16] = REG(IP); + nregs = 17; + } + + return dwfl_thread_state_registers(thread, 0, nregs, dwarf_regs); +} diff --git a/tools/perf/arch/x86/util/unwind.c b/tools/perf/arch/x86/util/unwind-libunwind.c index 456a88cf5b37..3261f68c6a7c 100644 --- a/tools/perf/arch/x86/util/unwind.c +++ b/tools/perf/arch/x86/util/unwind-libunwind.c @@ -5,7 +5,7 @@ #include "../../util/unwind.h" #ifdef HAVE_ARCH_X86_64_SUPPORT -int unwind__arch_reg_id(int regnum) +int libunwind__arch_reg_id(int regnum) { int id; @@ -69,7 +69,7 @@ int unwind__arch_reg_id(int regnum) return id; } #else -int unwind__arch_reg_id(int regnum) +int libunwind__arch_reg_id(int regnum) { int id; diff --git a/tools/perf/bench/bench.h b/tools/perf/bench/bench.h index 0fdc85269c4d..eba46709b279 100644 --- a/tools/perf/bench/bench.h +++ b/tools/perf/bench/bench.h @@ -31,6 +31,9 @@ extern int bench_sched_pipe(int argc, const char **argv, const char *prefix); extern int bench_mem_memcpy(int argc, const char **argv, const char *prefix __maybe_unused); extern int bench_mem_memset(int argc, const char **argv, const char *prefix); +extern int bench_futex_hash(int argc, const char **argv, const char 
*prefix); +extern int bench_futex_wake(int argc, const char **argv, const char *prefix); +extern int bench_futex_requeue(int argc, const char **argv, const char *prefix); #define BENCH_FORMAT_DEFAULT_STR "default" #define BENCH_FORMAT_DEFAULT 0 diff --git a/tools/perf/bench/futex-hash.c b/tools/perf/bench/futex-hash.c new file mode 100644 index 000000000000..a84206e9c4aa --- /dev/null +++ b/tools/perf/bench/futex-hash.c @@ -0,0 +1,212 @@ +/* + * Copyright (C) 2013 Davidlohr Bueso <davidlohr@hp.com> + * + * futex-hash: Stress the hell out of the Linux kernel futex uaddr hashing. + * + * This program is particularly useful for measuring the kernel's futex hash + * table/function implementation. In order for it to make sense, use with as + * many threads and futexes as possible. + */ + +#include "../perf.h" +#include "../util/util.h" +#include "../util/stat.h" +#include "../util/parse-options.h" +#include "../util/header.h" +#include "bench.h" +#include "futex.h" + +#include <err.h> +#include <stdlib.h> +#include <sys/time.h> +#include <pthread.h> + +static unsigned int nthreads = 0; +static unsigned int nsecs = 10; +/* amount of futexes per thread */ +static unsigned int nfutexes = 1024; +static bool fshared = false, done = false, silent = false; + +struct timeval start, end, runtime; +static pthread_mutex_t thread_lock; +static unsigned int threads_starting; +static struct stats throughput_stats; +static pthread_cond_t thread_parent, thread_worker; + +struct worker { + int tid; + u_int32_t *futex; + pthread_t thread; + unsigned long ops; +}; + +static const struct option options[] = { + OPT_UINTEGER('t', "threads", &nthreads, "Specify amount of threads"), + OPT_UINTEGER('r', "runtime", &nsecs, "Specify runtime (in seconds)"), + OPT_UINTEGER('f', "futexes", &nfutexes, "Specify amount of futexes per thread"), + OPT_BOOLEAN( 's', "silent", &silent, "Silent mode: do not display data/details"), + OPT_BOOLEAN( 'S', "shared", &fshared, "Use shared futexes instead of private ones"), + OPT_END() +}; + +static const char * const bench_futex_hash_usage[] = { + "perf bench futex hash <options>", + NULL +}; + +static void *workerfn(void *arg) +{ + int ret; + unsigned int i; + struct worker *w = (struct worker *) arg; + + pthread_mutex_lock(&thread_lock); + threads_starting--; + if (!threads_starting) + pthread_cond_signal(&thread_parent); + pthread_cond_wait(&thread_worker, &thread_lock); + pthread_mutex_unlock(&thread_lock); + + do { + for (i = 0; i < nfutexes; i++, w->ops++) { + /* + * We want the futex calls to fail in order to stress + * the hashing of uaddr and not measure other steps, + * such as internal waitqueue handling, thus enlarging + * the critical region protected by hb->lock. + */ + ret = futex_wait(&w->futex[i], 1234, NULL, + fshared ? 0 : FUTEX_PRIVATE_FLAG); + if (!silent && + (!ret || (errno != EAGAIN && errno != EWOULDBLOCK))) + warn("Unexpected futex return call"); + } + } while (!done); + + return NULL; +} + +static void toggle_done(int sig __maybe_unused, + siginfo_t *info __maybe_unused, + void *uc __maybe_unused) +{ + /* inform all threads that we're done for the day */ + done = true; + gettimeofday(&end, NULL); + timersub(&end, &start, &runtime); +} + +static void print_summary(void) +{ + unsigned long avg = avg_stats(&throughput_stats); + double stddev = stddev_stats(&throughput_stats); + + printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n", + !silent ?
"\n" : "", avg, rel_stddev_stats(stddev, avg), + (int) runtime.tv_sec); +} + +int bench_futex_hash(int argc, const char **argv, + const char *prefix __maybe_unused) +{ + int ret = 0; + cpu_set_t cpu; + struct sigaction act; + unsigned int i, ncpus; + pthread_attr_t thread_attr; + struct worker *worker = NULL; + + argc = parse_options(argc, argv, options, bench_futex_hash_usage, 0); + if (argc) { + usage_with_options(bench_futex_hash_usage, options); + exit(EXIT_FAILURE); + } + + ncpus = sysconf(_SC_NPROCESSORS_ONLN); + + sigfillset(&act.sa_mask); + act.sa_sigaction = toggle_done; + sigaction(SIGINT, &act, NULL); + + if (!nthreads) /* default to the number of CPUs */ + nthreads = ncpus; + + worker = calloc(nthreads, sizeof(*worker)); + if (!worker) + goto errmem; + + printf("Run summary [PID %d]: %d threads, each operating on %d [%s] futexes for %d secs.\n\n", + getpid(), nthreads, nfutexes, fshared ? "shared":"private", nsecs); + + init_stats(&throughput_stats); + pthread_mutex_init(&thread_lock, NULL); + pthread_cond_init(&thread_parent, NULL); + pthread_cond_init(&thread_worker, NULL); + + threads_starting = nthreads; + pthread_attr_init(&thread_attr); + gettimeofday(&start, NULL); + for (i = 0; i < nthreads; i++) { + worker[i].tid = i; + worker[i].futex = calloc(nfutexes, sizeof(*worker[i].futex)); + if (!worker[i].futex) + goto errmem; + + CPU_ZERO(&cpu); + CPU_SET(i % ncpus, &cpu); + + ret = pthread_attr_setaffinity_np(&thread_attr, sizeof(cpu_set_t), &cpu); + if (ret) + err(EXIT_FAILURE, "pthread_attr_setaffinity_np"); + + ret = pthread_create(&worker[i].thread, &thread_attr, workerfn, + (void *)(struct worker *) &worker[i]); + if (ret) + err(EXIT_FAILURE, "pthread_create"); + + } + pthread_attr_destroy(&thread_attr); + + pthread_mutex_lock(&thread_lock); + while (threads_starting) + pthread_cond_wait(&thread_parent, &thread_lock); + pthread_cond_broadcast(&thread_worker); + pthread_mutex_unlock(&thread_lock); + + sleep(nsecs); + toggle_done(0, NULL, NULL); + + for (i = 0; i < nthreads; i++) { + ret = pthread_join(worker[i].thread, NULL); + if (ret) + err(EXIT_FAILURE, "pthread_join"); + } + + /* cleanup & report results */ + pthread_cond_destroy(&thread_parent); + pthread_cond_destroy(&thread_worker); + pthread_mutex_destroy(&thread_lock); + + for (i = 0; i < nthreads; i++) { + unsigned long t = worker[i].ops/runtime.tv_sec; + update_stats(&throughput_stats, t); + if (!silent) { + if (nfutexes == 1) + printf("[thread %2d] futex: %p [ %ld ops/sec ]\n", + worker[i].tid, &worker[i].futex[0], t); + else + printf("[thread %2d] futexes: %p ... %p [ %ld ops/sec ]\n", + worker[i].tid, &worker[i].futex[0], + &worker[i].futex[nfutexes-1], t); + } + + free(worker[i].futex); + } + + print_summary(); + + free(worker); + return ret; +errmem: + err(EXIT_FAILURE, "calloc"); +} diff --git a/tools/perf/bench/futex-requeue.c b/tools/perf/bench/futex-requeue.c new file mode 100644 index 000000000000..a16255876f1d --- /dev/null +++ b/tools/perf/bench/futex-requeue.c @@ -0,0 +1,211 @@ +/* + * Copyright (C) 2013 Davidlohr Bueso <davidlohr@hp.com> + * + * futex-requeue: Block a bunch of threads on futex1 and requeue them + * on futex2, N at a time. + * + * This program is particularly useful to measure the latency of nthread + * requeues without waking up any tasks -- thus mimicking a regular futex_wait. 
+ */ + +#include "../perf.h" +#include "../util/util.h" +#include "../util/stat.h" +#include "../util/parse-options.h" +#include "../util/header.h" +#include "bench.h" +#include "futex.h" + +#include <err.h> +#include <stdlib.h> +#include <sys/time.h> +#include <pthread.h> + +static u_int32_t futex1 = 0, futex2 = 0; + +/* + * How many tasks to requeue at a time. + * Default to 1 in order to make the kernel work more. + */ +static unsigned int nrequeue = 1; + +/* + * There can be significant variance from run to run, + * the more repeats, the more exact the overall avg and + * the better idea of the futex latency. + */ +static unsigned int repeat = 10; + +static pthread_t *worker; +static bool done = 0, silent = 0; +static pthread_mutex_t thread_lock; +static pthread_cond_t thread_parent, thread_worker; +static struct stats requeuetime_stats, requeued_stats; +static unsigned int ncpus, threads_starting, nthreads = 0; + +static const struct option options[] = { + OPT_UINTEGER('t', "threads", &nthreads, "Specify amount of threads"), + OPT_UINTEGER('q', "nrequeue", &nrequeue, "Specify amount of threads to requeue at once"), + OPT_UINTEGER('r', "repeat", &repeat, "Specify amount of times to repeat the run"), + OPT_BOOLEAN( 's', "silent", &silent, "Silent mode: do not display data/details"), + OPT_END() +}; + +static const char * const bench_futex_requeue_usage[] = { + "perf bench futex requeue <options>", + NULL +}; + +static void print_summary(void) +{ + double requeuetime_avg = avg_stats(&requeuetime_stats); + double requeuetime_stddev = stddev_stats(&requeuetime_stats); + unsigned int requeued_avg = avg_stats(&requeued_stats); + + printf("Requeued %d of %d threads in %.4f ms (+-%.2f%%)\n", + requeued_avg, + nthreads, + requeuetime_avg/1e3, + rel_stddev_stats(requeuetime_stddev, requeuetime_avg)); +} + +static void *workerfn(void *arg __maybe_unused) +{ + pthread_mutex_lock(&thread_lock); + threads_starting--; + if (!threads_starting) + pthread_cond_signal(&thread_parent); + pthread_cond_wait(&thread_worker, &thread_lock); + pthread_mutex_unlock(&thread_lock); + + futex_wait(&futex1, 0, NULL, FUTEX_PRIVATE_FLAG); + return NULL; +} + +static void block_threads(pthread_t *w, + pthread_attr_t thread_attr) +{ + cpu_set_t cpu; + unsigned int i; + + threads_starting = nthreads; + + /* create and block all threads */ + for (i = 0; i < nthreads; i++) { + CPU_ZERO(&cpu); + CPU_SET(i % ncpus, &cpu); + + if (pthread_attr_setaffinity_np(&thread_attr, sizeof(cpu_set_t), &cpu)) + err(EXIT_FAILURE, "pthread_attr_setaffinity_np"); + + if (pthread_create(&w[i], &thread_attr, workerfn, NULL)) + err(EXIT_FAILURE, "pthread_create"); + } +} + +static void toggle_done(int sig __maybe_unused, + siginfo_t *info __maybe_unused, + void *uc __maybe_unused) +{ + done = true; +} + +int bench_futex_requeue(int argc, const char **argv, + const char *prefix __maybe_unused) +{ + int ret = 0; + unsigned int i, j; + struct sigaction act; + pthread_attr_t thread_attr; + + argc = parse_options(argc, argv, options, bench_futex_requeue_usage, 0); + if (argc) + goto err; + + ncpus = sysconf(_SC_NPROCESSORS_ONLN); + + sigfillset(&act.sa_mask); + act.sa_sigaction = toggle_done; + sigaction(SIGINT, &act, NULL); + + if (!nthreads) + nthreads = ncpus; + + worker = calloc(nthreads, sizeof(*worker)); + if (!worker) + err(EXIT_FAILURE, "calloc"); + + printf("Run summary [PID %d]: Requeuing %d threads (from %p to %p), " + "%d at a time.\n\n", + getpid(), nthreads, &futex1, &futex2, nrequeue); + + init_stats(&requeued_stats); + 
init_stats(&requeuetime_stats); + pthread_attr_init(&thread_attr); + pthread_mutex_init(&thread_lock, NULL); + pthread_cond_init(&thread_parent, NULL); + pthread_cond_init(&thread_worker, NULL); + + for (j = 0; j < repeat && !done; j++) { + unsigned int nrequeued = 0; + struct timeval start, end, runtime; + + /* create, launch & block all threads */ + block_threads(worker, thread_attr); + + /* make sure all threads are already blocked */ + pthread_mutex_lock(&thread_lock); + while (threads_starting) + pthread_cond_wait(&thread_parent, &thread_lock); + pthread_cond_broadcast(&thread_worker); + pthread_mutex_unlock(&thread_lock); + + usleep(100000); + + /* Ok, all threads are patiently blocked, start requeueing */ + gettimeofday(&start, NULL); + for (nrequeued = 0; nrequeued < nthreads; nrequeued += nrequeue) + /* + * Do not wakeup any tasks blocked on futex1, allowing + * us to really measure futex_wait functionality. + */ + futex_cmp_requeue(&futex1, 0, &futex2, 0, nrequeue, + FUTEX_PRIVATE_FLAG); + gettimeofday(&end, NULL); + timersub(&end, &start, &runtime); + + update_stats(&requeued_stats, nrequeued); + update_stats(&requeuetime_stats, runtime.tv_usec); + + if (!silent) { + printf("[Run %d]: Requeued %d of %d threads in %.4f ms\n", + j + 1, nrequeued, nthreads, runtime.tv_usec/1e3); + } + + /* everybody should be blocked on futex2, wake'em up */ + nrequeued = futex_wake(&futex2, nthreads, FUTEX_PRIVATE_FLAG); + if (nthreads != nrequeued) + warnx("couldn't wakeup all tasks (%d/%d)", nrequeued, nthreads); + + for (i = 0; i < nthreads; i++) { + ret = pthread_join(worker[i], NULL); + if (ret) + err(EXIT_FAILURE, "pthread_join"); + } + + } + + /* cleanup & report results */ + pthread_cond_destroy(&thread_parent); + pthread_cond_destroy(&thread_worker); + pthread_mutex_destroy(&thread_lock); + pthread_attr_destroy(&thread_attr); + + print_summary(); + + free(worker); + return ret; +err: + usage_with_options(bench_futex_requeue_usage, options); + exit(EXIT_FAILURE); +} diff --git a/tools/perf/bench/futex-wake.c b/tools/perf/bench/futex-wake.c new file mode 100644 index 000000000000..d096169b161e --- /dev/null +++ b/tools/perf/bench/futex-wake.c @@ -0,0 +1,201 @@ +/* + * Copyright (C) 2013 Davidlohr Bueso <davidlohr@hp.com> + * + * futex-wake: Block a bunch of threads on a futex and wake'em up, N at a time. + * + * This program is particularly useful to measure the latency of nthread wakeups + * in non-error situations: all waiters are queued and all wake calls wakeup + * one or more tasks, and thus the waitqueue is never empty. + */ + +#include "../perf.h" +#include "../util/util.h" +#include "../util/stat.h" +#include "../util/parse-options.h" +#include "../util/header.h" +#include "bench.h" +#include "futex.h" + +#include <err.h> +#include <stdlib.h> +#include <sys/time.h> +#include <pthread.h> + +/* all threads will block on the same futex */ +static u_int32_t futex1 = 0; + +/* + * How many wakeups to do at a time. + * Default to 1 in order to make the kernel work more. + */ +static unsigned int nwakes = 1; + +/* + * There can be significant variance from run to run, + * the more repeats, the more exact the overall avg and + * the better idea of the futex latency. 
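These benchmarks summarize their repeated runs through the running-statistics helpers from tools/perf/util/stat.c (init_stats(), update_stats(), avg_stats(), stddev_stats()). A self-contained sketch of that Welford-style accumulation, assuming nothing from the perf tree; build with -lm:

#include <math.h>
#include <stdio.h>

struct stats {
	double n, mean, M2;
};

static void update_stats(struct stats *s, double x)
{
	double delta = x - s->mean;

	s->n += 1.0;
	s->mean += delta / s->n;		/* running average */
	s->M2 += delta * (x - s->mean);		/* running sum of squared deltas */
}

static double stddev_stats(const struct stats *s)
{
	return s->n > 1.0 ? sqrt(s->M2 / (s->n - 1.0)) : 0.0;
}

int main(void)
{
	struct stats s = { 0.0, 0.0, 0.0 };
	double runs[] = { 10.0, 12.0, 9.0, 11.0 };	/* e.g. per-run wake times */
	unsigned int i;

	for (i = 0; i < 4; i++)
		update_stats(&s, runs[i]);
	printf("avg %.2f +- %.2f\n", s.mean, stddev_stats(&s));
	return 0;
}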
+ */ +static unsigned int repeat = 10; + +pthread_t *worker; +static bool done = 0, silent = 0; +static pthread_mutex_t thread_lock; +static pthread_cond_t thread_parent, thread_worker; +static struct stats waketime_stats, wakeup_stats; +static unsigned int ncpus, threads_starting, nthreads = 0; + +static const struct option options[] = { + OPT_UINTEGER('t', "threads", &nthreads, "Specify amount of threads"), + OPT_UINTEGER('w', "nwakes", &nwakes, "Specify amount of threads to wake at once"), + OPT_UINTEGER('r', "repeat", &repeat, "Specify amount of times to repeat the run"), + OPT_BOOLEAN( 's', "silent", &silent, "Silent mode: do not display data/details"), + OPT_END() +}; + +static const char * const bench_futex_wake_usage[] = { + "perf bench futex wake <options>", + NULL +}; + +static void *workerfn(void *arg __maybe_unused) +{ + pthread_mutex_lock(&thread_lock); + threads_starting--; + if (!threads_starting) + pthread_cond_signal(&thread_parent); + pthread_cond_wait(&thread_worker, &thread_lock); + pthread_mutex_unlock(&thread_lock); + + futex_wait(&futex1, 0, NULL, FUTEX_PRIVATE_FLAG); + return NULL; +} + +static void print_summary(void) +{ + double waketime_avg = avg_stats(&waketime_stats); + double waketime_stddev = stddev_stats(&waketime_stats); + unsigned int wakeup_avg = avg_stats(&wakeup_stats); + + printf("Wokeup %d of %d threads in %.4f ms (+-%.2f%%)\n", + wakeup_avg, + nthreads, + waketime_avg/1e3, + rel_stddev_stats(waketime_stddev, waketime_avg)); +} + +static void block_threads(pthread_t *w, + pthread_attr_t thread_attr) +{ + cpu_set_t cpu; + unsigned int i; + + threads_starting = nthreads; + + /* create and block all threads */ + for (i = 0; i < nthreads; i++) { + CPU_ZERO(&cpu); + CPU_SET(i % ncpus, &cpu); + + if (pthread_attr_setaffinity_np(&thread_attr, sizeof(cpu_set_t), &cpu)) + err(EXIT_FAILURE, "pthread_attr_setaffinity_np"); + + if (pthread_create(&w[i], &thread_attr, workerfn, NULL)) + err(EXIT_FAILURE, "pthread_create"); + } +} + +static void toggle_done(int sig __maybe_unused, + siginfo_t *info __maybe_unused, + void *uc __maybe_unused) +{ + done = true; +} + +int bench_futex_wake(int argc, const char **argv, + const char *prefix __maybe_unused) +{ + int ret = 0; + unsigned int i, j; + struct sigaction act; + pthread_attr_t thread_attr; + + argc = parse_options(argc, argv, options, bench_futex_wake_usage, 0); + if (argc) { + usage_with_options(bench_futex_wake_usage, options); + exit(EXIT_FAILURE); + } + + ncpus = sysconf(_SC_NPROCESSORS_ONLN); + + sigfillset(&act.sa_mask); + act.sa_sigaction = toggle_done; + sigaction(SIGINT, &act, NULL); + + if (!nthreads) + nthreads = ncpus; + + worker = calloc(nthreads, sizeof(*worker)); + if (!worker) + err(EXIT_FAILURE, "calloc"); + + printf("Run summary [PID %d]: blocking on %d threads (at futex %p), " + "waking up %d at a time.\n\n", + getpid(), nthreads, &futex1, nwakes); + + init_stats(&wakeup_stats); + init_stats(&waketime_stats); + pthread_attr_init(&thread_attr); + pthread_mutex_init(&thread_lock, NULL); + pthread_cond_init(&thread_parent, NULL); + pthread_cond_init(&thread_worker, NULL); + + for (j = 0; j < repeat && !done; j++) { + unsigned int nwoken = 0; + struct timeval start, end, runtime; + + /* create, launch & block all threads */ + block_threads(worker, thread_attr); + + /* make sure all threads are already blocked */ + pthread_mutex_lock(&thread_lock); + while (threads_starting) + pthread_cond_wait(&thread_parent, &thread_lock); + pthread_cond_broadcast(&thread_worker); + 
pthread_mutex_unlock(&thread_lock); + + usleep(100000); + + /* Ok, all threads are patiently blocked, start waking folks up */ + gettimeofday(&start, NULL); + while (nwoken != nthreads) + nwoken += futex_wake(&futex1, nwakes, FUTEX_PRIVATE_FLAG); + gettimeofday(&end, NULL); + timersub(&end, &start, &runtime); + + update_stats(&wakeup_stats, nwoken); + update_stats(&waketime_stats, runtime.tv_usec); + + if (!silent) { + printf("[Run %d]: Wokeup %d of %d threads in %.4f ms\n", + j + 1, nwoken, nthreads, runtime.tv_usec/1e3); + } + + for (i = 0; i < nthreads; i++) { + ret = pthread_join(worker[i], NULL); + if (ret) + err(EXIT_FAILURE, "pthread_join"); + } + + } + + /* cleanup & report results */ + pthread_cond_destroy(&thread_parent); + pthread_cond_destroy(&thread_worker); + pthread_mutex_destroy(&thread_lock); + pthread_attr_destroy(&thread_attr); + + print_summary(); + + free(worker); + return ret; +} diff --git a/tools/perf/bench/futex.h b/tools/perf/bench/futex.h new file mode 100644 index 000000000000..71f2844cf97f --- /dev/null +++ b/tools/perf/bench/futex.h @@ -0,0 +1,71 @@ +/* + * Glibc independent futex library for testing kernel functionality. + * Shamelessly stolen from Darren Hart <dvhltc@us.ibm.com> + * http://git.kernel.org/cgit/linux/kernel/git/dvhart/futextest.git/ + */ + +#ifndef _FUTEX_H +#define _FUTEX_H + +#include <unistd.h> +#include <sys/syscall.h> +#include <sys/types.h> +#include <linux/futex.h> + +/** + * futex() - SYS_futex syscall wrapper + * @uaddr: address of first futex + * @op: futex op code + * @val: typically expected value of uaddr, but varies by op + * @timeout: typically an absolute struct timespec (except where noted + * otherwise). Overloaded by some ops + * @uaddr2: address of second futex for some ops + * @val3: varies by op + * @opflags: flags to be bitwise OR'd with op, such as FUTEX_PRIVATE_FLAG + * + * futex() is used by all the following futex op wrappers. It can also be + * used for misuse and abuse testing. Generally, the specific op wrappers + * should be used instead. It is a macro instead of a static inline function as + * some of the types are overloaded (timeout is used for nr_requeue for + * example). + * + * These argument descriptions are the defaults for all + * like-named arguments in the following wrappers except where noted below.
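A hedged usage sketch of the wrappers defined just below (not part of the patch): one thread blocks in futex_wait() while the futex word still holds the expected value; the main thread then changes the word and wakes it with futex_wake(). Build with -pthread next to this futex.h:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include "futex.h"

static u_int32_t fword;

static void *waiter(void *arg)
{
	(void)arg;
	/* Blocks while fword still equals the expected value 0. */
	futex_wait(&fword, 0, NULL, FUTEX_PRIVATE_FLAG);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, waiter, NULL);
	sleep(1);				/* crude, for a sketch: let the waiter block */
	__sync_fetch_and_add(&fword, 1);	/* change the word first... */
	futex_wake(&fword, 1, FUTEX_PRIVATE_FLAG);	/* ...then wake one task */
	pthread_join(t, NULL);
	puts("waiter woken");
	return 0;
}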
+ */ +#define futex(uaddr, op, val, timeout, uaddr2, val3, opflags) \ + syscall(SYS_futex, uaddr, op | opflags, val, timeout, uaddr2, val3) + +/** + * futex_wait() - block on uaddr with optional timeout + * @timeout: relative timeout + */ +static inline int +futex_wait(u_int32_t *uaddr, u_int32_t val, struct timespec *timeout, int opflags) +{ + return futex(uaddr, FUTEX_WAIT, val, timeout, NULL, 0, opflags); +} + +/** + * futex_wake() - wake one or more tasks blocked on uaddr + * @nr_wake: wake up to this many tasks + */ +static inline int +futex_wake(u_int32_t *uaddr, int nr_wake, int opflags) +{ + return futex(uaddr, FUTEX_WAKE, nr_wake, NULL, NULL, 0, opflags); +} + +/** + * futex_cmp_requeue() - requeue tasks from uaddr to uaddr2 + * @nr_wake: wake up to this many tasks + * @nr_requeue: requeue up to this many tasks + */ +static inline int +futex_cmp_requeue(u_int32_t *uaddr, u_int32_t val, u_int32_t *uaddr2, int nr_wake, + int nr_requeue, int opflags) +{ + return futex(uaddr, FUTEX_CMP_REQUEUE, nr_wake, nr_requeue, uaddr2, + val, opflags); +} + +#endif /* _FUTEX_H */ diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c index d4c83c60b9b2..97d86d828190 100644 --- a/tools/perf/bench/numa.c +++ b/tools/perf/bench/numa.c @@ -1593,6 +1593,7 @@ static void init_params(struct params *p, const char *name, int argc, const char p->data_rand_walk = true; p->nr_loops = -1; p->init_random = true; + p->run_all = argc == 1; } static int run_bench_numa(const char *name, const char **argv) diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c index e47f90cc7b98..1e6e77710545 100644 --- a/tools/perf/builtin-bench.c +++ b/tools/perf/builtin-bench.c @@ -12,6 +12,7 @@ * sched ... scheduler and IPC performance * mem ... memory access performance * numa ... NUMA scheduling and MM performance + * futex ...
Futex performance */ #include "perf.h" #include "util/util.h" @@ -54,6 +55,14 @@ static struct bench mem_benchmarks[] = { { NULL, NULL, NULL } }; +static struct bench futex_benchmarks[] = { + { "hash", "Benchmark for futex hash table", bench_futex_hash }, + { "wake", "Benchmark for futex wake calls", bench_futex_wake }, + { "requeue", "Benchmark for futex requeue calls", bench_futex_requeue }, + { "all", "Test all futex benchmarks", NULL }, + { NULL, NULL, NULL } +}; + struct collection { const char *name; const char *summary; @@ -61,11 +70,12 @@ struct collection { }; static struct collection collections[] = { - { "sched", "Scheduler and IPC benchmarks", sched_benchmarks }, + { "sched", "Scheduler and IPC benchmarks", sched_benchmarks }, { "mem", "Memory access benchmarks", mem_benchmarks }, #ifdef HAVE_LIBNUMA_SUPPORT { "numa", "NUMA scheduling and MM benchmarks", numa_benchmarks }, #endif + {"futex", "Futex stressing benchmarks", futex_benchmarks }, { "all", "All benchmarks", NULL }, { NULL, NULL, NULL } }; @@ -76,7 +86,7 @@ static struct collection collections[] = { /* Iterate over all benchmarks within a collection: */ #define for_each_bench(coll, bench) \ - for (bench = coll->benchmarks; bench->name; bench++) + for (bench = coll->benchmarks; bench && bench->name; bench++) static void dump_benchmarks(struct collection *coll) { diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c index a77e31246c00..204fffe22532 100644 --- a/tools/perf/builtin-diff.c +++ b/tools/perf/builtin-diff.c @@ -952,8 +952,8 @@ static int hpp__entry_global(struct perf_hpp_fmt *_fmt, struct perf_hpp *hpp, dfmt->header_width, buf); } -static int hpp__header(struct perf_hpp_fmt *fmt, - struct perf_hpp *hpp) +static int hpp__header(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp, + struct perf_evsel *evsel __maybe_unused) { struct diff_hpp_fmt *dfmt = container_of(fmt, struct diff_hpp_fmt, fmt); @@ -963,7 +963,8 @@ static int hpp__header(struct perf_hpp_fmt *fmt, } static int hpp__width(struct perf_hpp_fmt *fmt, - struct perf_hpp *hpp __maybe_unused) + struct perf_hpp *hpp __maybe_unused, + struct perf_evsel *evsel __maybe_unused) { struct diff_hpp_fmt *dfmt = container_of(fmt, struct diff_hpp_fmt, fmt); diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c index b3466018bbd7..3a7387551369 100644 --- a/tools/perf/builtin-inject.c +++ b/tools/perf/builtin-inject.c @@ -312,7 +312,6 @@ found: sample_sw.period = sample->period; sample_sw.time = sample->time; perf_event__synthesize_sample(event_sw, evsel->attr.sample_type, - evsel->attr.sample_regs_user, evsel->attr.read_format, &sample_sw, false); build_id__mark_dso_hit(tool, event_sw, &sample_sw, evsel, machine); diff --git a/tools/perf/builtin-kvm.c b/tools/perf/builtin-kvm.c index a7350519c63f..21c164b8f9db 100644 --- a/tools/perf/builtin-kvm.c +++ b/tools/perf/builtin-kvm.c @@ -1691,17 +1691,15 @@ int cmd_kvm(int argc, const char **argv, const char *prefix __maybe_unused) OPT_END() }; - - const char * const kvm_usage[] = { - "perf kvm [<options>] {top|record|report|diff|buildid-list|stat}", - NULL - }; + const char *const kvm_subcommands[] = { "top", "record", "report", "diff", + "buildid-list", "stat", NULL }; + const char *kvm_usage[] = { NULL, NULL }; perf_host = 0; perf_guest = 1; - argc = parse_options(argc, argv, kvm_options, kvm_usage, - PARSE_OPT_STOP_AT_NON_OPTION); + argc = parse_options_subcommand(argc, argv, kvm_options, kvm_subcommands, kvm_usage, + PARSE_OPT_STOP_AT_NON_OPTION); if (!argc) usage_with_options(kvm_usage, 
kvm_options); diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c index 78948882e3de..cdcd4eb3a57d 100644 --- a/tools/perf/builtin-probe.c +++ b/tools/perf/builtin-probe.c @@ -268,9 +268,9 @@ static int opt_set_filter(const struct option *opt __maybe_unused, return 0; } -static void init_params(void) +static int init_params(void) { - line_range__init(¶ms.line_range); + return line_range__init(¶ms.line_range); } static void cleanup_params(void) @@ -515,9 +515,11 @@ int cmd_probe(int argc, const char **argv, const char *prefix) { int ret; - init_params(); - ret = __cmd_probe(argc, argv, prefix); - cleanup_params(); + ret = init_params(); + if (!ret) { + ret = __cmd_probe(argc, argv, prefix); + cleanup_params(); + } return ret; } diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c index af47531b82ec..eb524f91bffe 100644 --- a/tools/perf/builtin-record.c +++ b/tools/perf/builtin-record.c @@ -649,7 +649,7 @@ error: return ret; } -#ifdef HAVE_LIBUNWIND_SUPPORT +#ifdef HAVE_DWARF_UNWIND_SUPPORT static int get_stack_size(char *str, unsigned long *_size) { char *endptr; @@ -675,7 +675,7 @@ static int get_stack_size(char *str, unsigned long *_size) max_size, str); return -1; } -#endif /* HAVE_LIBUNWIND_SUPPORT */ +#endif /* HAVE_DWARF_UNWIND_SUPPORT */ int record_parse_callchain(const char *arg, struct record_opts *opts) { @@ -704,7 +704,7 @@ int record_parse_callchain(const char *arg, struct record_opts *opts) "needed for -g fp\n"); break; -#ifdef HAVE_LIBUNWIND_SUPPORT +#ifdef HAVE_DWARF_UNWIND_SUPPORT /* Dwarf style */ } else if (!strncmp(name, "dwarf", sizeof("dwarf"))) { const unsigned long default_stack_dump_size = 8192; @@ -720,7 +720,7 @@ int record_parse_callchain(const char *arg, struct record_opts *opts) ret = get_stack_size(tok, &size); opts->stack_dump_size = size; } -#endif /* HAVE_LIBUNWIND_SUPPORT */ +#endif /* HAVE_DWARF_UNWIND_SUPPORT */ } else { pr_err("callchain: Unknown --call-graph option " "value: %s\n", arg); @@ -735,7 +735,9 @@ int record_parse_callchain(const char *arg, struct record_opts *opts) static void callchain_debug(struct record_opts *opts) { - pr_debug("callchain: type %d\n", opts->call_graph); + static const char *str[CALLCHAIN_MAX] = { "NONE", "FP", "DWARF" }; + + pr_debug("callchain: type %s\n", str[opts->call_graph]); if (opts->call_graph == CALLCHAIN_DWARF) pr_debug("callchain: stack dump size %d\n", @@ -749,6 +751,8 @@ int record_parse_callchain_opt(const struct option *opt, struct record_opts *opts = opt->value; int ret; + opts->call_graph_enabled = !unset; + /* --no-call-graph */ if (unset) { opts->call_graph = CALLCHAIN_NONE; @@ -769,6 +773,8 @@ int record_callchain_opt(const struct option *opt, { struct record_opts *opts = opt->value; + opts->call_graph_enabled = !unset; + if (opts->call_graph == CALLCHAIN_NONE) opts->call_graph = CALLCHAIN_FP; @@ -776,6 +782,16 @@ int record_callchain_opt(const struct option *opt, return 0; } +static int perf_record_config(const char *var, const char *value, void *cb) +{ + struct record *rec = cb; + + if (!strcmp(var, "record.call-graph")) + return record_parse_callchain(value, &rec->opts); + + return perf_default_config(var, value, cb); +} + static const char * const record_usage[] = { "perf record [<options>] [<command>]", "perf record [<options>] -- <command> [<options>]", @@ -807,7 +823,7 @@ static struct record record = { #define CALLCHAIN_HELP "setup and enables call-graph (stack chain/backtrace) recording: " -#ifdef HAVE_LIBUNWIND_SUPPORT +#ifdef HAVE_DWARF_UNWIND_SUPPORT const 
char record_callchain_help[] = CALLCHAIN_HELP "fp dwarf"; #else const char record_callchain_help[] = CALLCHAIN_HELP "fp"; @@ -907,6 +923,8 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused) if (rec->evlist == NULL) return -ENOMEM; + perf_config(perf_record_config, rec); + argc = parse_options(argc, argv, record_options, record_usage, PARSE_OPT_STOP_AT_NON_OPTION); if (!argc && target__none(&rec->opts.target)) diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c index 02f985f3a396..c8f21137dfd8 100644 --- a/tools/perf/builtin-report.c +++ b/tools/perf/builtin-report.c @@ -75,13 +75,10 @@ static int report__config(const char *var, const char *value, void *cb) return perf_default_config(var, value, cb); } -static int report__add_mem_hist_entry(struct perf_tool *tool, struct addr_location *al, - struct perf_sample *sample, struct perf_evsel *evsel, - union perf_event *event) +static int report__add_mem_hist_entry(struct report *rep, struct addr_location *al, + struct perf_sample *sample, struct perf_evsel *evsel) { - struct report *rep = container_of(tool, struct report, tool); struct symbol *parent = NULL; - u8 cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK; struct hist_entry *he; struct mem_info *mi, *mx; uint64_t cost; @@ -90,7 +87,7 @@ static int report__add_mem_hist_entry(struct perf_tool *tool, struct addr_locati if (err) return err; - mi = machine__resolve_mem(al->machine, al->thread, sample, cpumode); + mi = sample__resolve_mem(sample, al); if (!mi) return -ENOMEM; @@ -131,10 +128,9 @@ out: return err; } -static int report__add_branch_hist_entry(struct perf_tool *tool, struct addr_location *al, +static int report__add_branch_hist_entry(struct report *rep, struct addr_location *al, struct perf_sample *sample, struct perf_evsel *evsel) { - struct report *rep = container_of(tool, struct report, tool); struct symbol *parent = NULL; unsigned i; struct hist_entry *he; @@ -144,8 +140,7 @@ static int report__add_branch_hist_entry(struct perf_tool *tool, struct addr_loc if (err) return err; - bi = machine__resolve_bstack(al->machine, al->thread, - sample->branch_stack); + bi = sample__resolve_bstack(sample, al); if (!bi) return -ENOMEM; @@ -190,10 +185,9 @@ out: return err; } -static int report__add_hist_entry(struct perf_tool *tool, struct perf_evsel *evsel, +static int report__add_hist_entry(struct report *rep, struct perf_evsel *evsel, struct addr_location *al, struct perf_sample *sample) { - struct report *rep = container_of(tool, struct report, tool); struct symbol *parent = NULL; struct hist_entry *he; int err = sample__resolve_callchain(sample, &parent, evsel, al, rep->max_stack); @@ -237,25 +231,25 @@ static int process_sample_event(struct perf_tool *tool, return -1; } - if (al.filtered || (rep->hide_unresolved && al.sym == NULL)) + if (rep->hide_unresolved && al.sym == NULL) return 0; if (rep->cpu_list && !test_bit(sample->cpu, rep->cpu_bitmap)) return 0; if (sort__mode == SORT_MODE__BRANCH) { - ret = report__add_branch_hist_entry(tool, &al, sample, evsel); + ret = report__add_branch_hist_entry(rep, &al, sample, evsel); if (ret < 0) pr_debug("problem adding lbr entry, skipping event\n"); } else if (rep->mem_mode == 1) { - ret = report__add_mem_hist_entry(tool, &al, sample, evsel, event); + ret = report__add_mem_hist_entry(rep, &al, sample, evsel); if (ret < 0) pr_debug("problem adding mem entry, skipping event\n"); } else { if (al.map != NULL) al.map->dso->hit = 1; - ret = report__add_hist_entry(tool, evsel, &al, sample); + 
ret = report__add_hist_entry(rep, evsel, &al, sample); if (ret < 0) pr_debug("problem incrementing symbol period, skipping event\n"); } @@ -934,7 +928,7 @@ repeat: * so don't allocate extra space that won't be used in the stdio * implementation. */ - if (use_browser == 1 && sort__has_sym) { + if (ui__has_annotation()) { symbol_conf.priv_size = sizeof(struct annotation); machines__set_symbol_filter(&session->machines, symbol__annotate_init); diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c index 6a76a07b6789..9ac0a495c954 100644 --- a/tools/perf/builtin-sched.c +++ b/tools/perf/builtin-sched.c @@ -1124,7 +1124,7 @@ static void output_lat_thread(struct perf_sched *sched, struct work_atoms *work_ avg = work_list->total_lat / work_list->nb_atoms; - printf("|%11.3f ms |%9" PRIu64 " | avg:%9.3f ms | max:%9.3f ms | max at: %9.6f s\n", + printf("|%11.3f ms |%9" PRIu64 " | avg:%9.3f ms | max:%9.3f ms | max at: %13.6f s\n", (double)work_list->total_runtime / 1e6, work_list->nb_atoms, (double)avg / 1e6, (double)work_list->max_lat / 1e6, @@ -1527,9 +1527,9 @@ static int perf_sched__lat(struct perf_sched *sched) perf_sched__sort_lat(sched); - printf("\n ---------------------------------------------------------------------------------------------------------------\n"); - printf(" Task | Runtime ms | Switches | Average delay ms | Maximum delay ms | Maximum delay at |\n"); - printf(" ---------------------------------------------------------------------------------------------------------------\n"); + printf("\n -----------------------------------------------------------------------------------------------------------------\n"); + printf(" Task | Runtime ms | Switches | Average delay ms | Maximum delay ms | Maximum delay at |\n"); + printf(" -----------------------------------------------------------------------------------------------------------------\n"); next = rb_first(&sched->sorted_atom_root); @@ -1541,7 +1541,7 @@ static int perf_sched__lat(struct perf_sched *sched) next = rb_next(next); } - printf(" -----------------------------------------------------------------------------------------\n"); + printf(" -----------------------------------------------------------------------------------------------------------------\n"); printf(" TOTAL: |%11.3f ms |%9" PRIu64 " |\n", (double)sched->all_runtime / 1e6, sched->all_count); diff --git a/tools/perf/builtin-timechart.c b/tools/perf/builtin-timechart.c index 25526d6eae59..74db2568b867 100644 --- a/tools/perf/builtin-timechart.c +++ b/tools/perf/builtin-timechart.c @@ -494,7 +494,7 @@ static const char *cat_backtrace(union perf_event *event, continue; } - tal.filtered = false; + tal.filtered = 0; thread__find_addr_location(al.thread, machine, cpumode, MAP__FUNCTION, ip, &tal); @@ -1238,7 +1238,7 @@ static int timechart__record(struct timechart *tchart, int argc, const char **ar for (i = 0; i < old_power_args_nr; i++) *p++ = strdup(old_power_args[i]); - for (j = 1; j < (unsigned int)argc; j++) + for (j = 0; j < (unsigned int)argc; j++) *p++ = argv[j]; return cmd_record(rec_argc, rec_argv, NULL); diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c index 5f989a7d8bc2..65aaa5bbf7ec 100644 --- a/tools/perf/builtin-top.c +++ b/tools/perf/builtin-top.c @@ -993,6 +993,16 @@ parse_callchain_opt(const struct option *opt, const char *arg, int unset) return record_parse_callchain_opt(opt, arg, unset); } +static int perf_top_config(const char *var, const char *value, void *cb) +{ + struct perf_top *top = cb; + + if (!strcmp(var, 
"top.call-graph")) + return record_parse_callchain(value, &top->record_opts); + + return perf_default_config(var, value, cb); +} + static int parse_percent_limit(const struct option *opt, const char *arg, int unset __maybe_unused) @@ -1117,6 +1127,8 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused) if (top.evlist == NULL) return -ENOMEM; + perf_config(perf_top_config, &top); + argc = parse_options(argc, argv, options, top_usage, 0); if (argc) usage_with_options(top_usage, options); diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c index 6aa6fb6f7bd9..f954c26de231 100644 --- a/tools/perf/builtin-trace.c +++ b/tools/perf/builtin-trace.c @@ -825,7 +825,6 @@ static size_t syscall_arg__scnprintf_signum(char *bf, size_t size, struct syscal P_SIGNUM(PIPE); P_SIGNUM(ALRM); P_SIGNUM(TERM); - P_SIGNUM(STKFLT); P_SIGNUM(CHLD); P_SIGNUM(CONT); P_SIGNUM(STOP); @@ -841,6 +840,15 @@ static size_t syscall_arg__scnprintf_signum(char *bf, size_t size, struct syscal P_SIGNUM(IO); P_SIGNUM(PWR); P_SIGNUM(SYS); +#ifdef SIGEMT + P_SIGNUM(EMT); +#endif +#ifdef SIGSTKFLT + P_SIGNUM(STKFLT); +#endif +#ifdef SIGSWI + P_SIGNUM(SWI); +#endif default: break; } diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile index 0331ea2701a3..c23418225c2c 100644 --- a/tools/perf/config/Makefile +++ b/tools/perf/config/Makefile @@ -59,6 +59,18 @@ ifeq ($(NO_PERF_REGS),0) CFLAGS += -DHAVE_PERF_REGS_SUPPORT endif +ifndef NO_LIBELF + # for linking with debug library, run like: + # make DEBUG=1 LIBDW_DIR=/opt/libdw/ + ifdef LIBDW_DIR + LIBDW_CFLAGS := -I$(LIBDW_DIR)/include + LIBDW_LDFLAGS := -L$(LIBDW_DIR)/lib + + FEATURE_CHECK_CFLAGS-libdw-dwarf-unwind := $(LIBDW_CFLAGS) + FEATURE_CHECK_LDFLAGS-libdw-dwarf-unwind := $(LIBDW_LDFLAGS) -ldw + endif +endif + # include ARCH specific config -include $(src-perf)/arch/$(ARCH)/Makefile @@ -147,7 +159,35 @@ CORE_FEATURE_TESTS = \ libunwind \ on-exit \ stackprotector-all \ - timerfd + timerfd \ + libdw-dwarf-unwind + +LIB_FEATURE_TESTS = \ + dwarf \ + glibc \ + gtk2 \ + libaudit \ + libbfd \ + libelf \ + libnuma \ + libperl \ + libpython \ + libslang \ + libunwind \ + libdw-dwarf-unwind + +VF_FEATURE_TESTS = \ + backtrace \ + fortify-source \ + gtk2-infobar \ + libelf-getphdrnum \ + libelf-mmap \ + libpython-version \ + on-exit \ + stackprotector-all \ + timerfd \ + libunwind-debug-frame \ + bionic # Set FEATURE_CHECK_(C|LD)FLAGS-all for all CORE_FEATURE_TESTS features. 
# If in the future we need per-feature checks/flags for features not @@ -161,17 +201,6 @@ endef $(foreach feat,$(CORE_FEATURE_TESTS),$(call set_test_all_flags,$(feat))) # -# So here we detect whether test-all was rebuilt, to be able -# to skip the print-out of the long features list if the file -# existed before and after it was built: -# -ifeq ($(wildcard $(OUTPUT)config/feature-checks/test-all.bin),) - test-all-failed := 1 -else - test-all-failed := 0 -endif - -# # Special fast-path for the 'all features are available' case: # $(call feature_check,all,$(MSG)) @@ -180,15 +209,6 @@ $(call feature_check,all,$(MSG)) # Just in case the build freshly failed, make sure we print the # feature matrix: # -ifeq ($(feature-all), 0) - test-all-failed := 1 -endif - -ifeq ($(test-all-failed),1) - $(info ) - $(info Auto-detecting system features:) -endif - ifeq ($(feature-all), 1) # # test-all.c passed - just set all the core feature flags to 1: @@ -199,27 +219,6 @@ else $(foreach feat,$(CORE_FEATURE_TESTS),$(call feature_check,$(feat))) endif -# -# Print the result of the feature test: -# -feature_print = $(eval $(feature_print_code)) $(info $(MSG)) - -define feature_print_code - ifeq ($(feature-$(1)), 1) - MSG = $(shell printf '...%30s: [ \033[32mon\033[m ]' $(1)) - else - MSG = $(shell printf '...%30s: [ \033[31mOFF\033[m ]' $(1)) - endif -endef - -# -# Only print out our features if we rebuilt the testcases or if a test failed: -# -ifeq ($(test-all-failed), 1) - $(foreach feat,$(CORE_FEATURE_TESTS),$(call feature_print,$(feat))) - $(info ) -endif - ifeq ($(feature-stackprotector-all), 1) CFLAGS += -fstack-protector-all endif @@ -264,6 +263,7 @@ ifdef NO_LIBELF NO_DWARF := 1 NO_DEMANGLE := 1 NO_LIBUNWIND := 1 + NO_LIBDW_DWARF_UNWIND := 1 else ifeq ($(feature-libelf), 0) ifeq ($(feature-glibc), 1) @@ -282,13 +282,12 @@ else msg := $(error No gnu/libc-version.h found, please install glibc-dev[el]/glibc-static); endif else - # for linking with debug library, run like: - # make DEBUG=1 LIBDW_DIR=/opt/libdw/ - ifdef LIBDW_DIR - LIBDW_CFLAGS := -I$(LIBDW_DIR)/include - LIBDW_LDFLAGS := -L$(LIBDW_DIR)/lib + ifndef NO_LIBDW_DWARF_UNWIND + ifneq ($(feature-libdw-dwarf-unwind),1) + NO_LIBDW_DWARF_UNWIND := 1 + msg := $(warning No libdw DWARF unwind found, Please install elfutils-devel/libdw-dev >= 0.158 and/or set LIBDW_DIR); + endif endif - ifneq ($(feature-dwarf), 1) msg := $(warning No libdw.h found or old libdw.h found or elfutils is older than 0.138, disables dwarf support. Please install new elfutils-devel/libdw-dev); NO_DWARF := 1 @@ -324,25 +323,51 @@ endif # NO_LIBELF ifndef NO_LIBUNWIND ifneq ($(feature-libunwind), 1) - msg := $(warning No libunwind found, disabling post unwind support. Please install libunwind-dev[el] >= 1.1); + msg := $(warning No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR); NO_LIBUNWIND := 1 + endif +endif + +dwarf-post-unwind := 1 +dwarf-post-unwind-text := BUG + +# setup DWARF post unwinder +ifdef NO_LIBUNWIND + ifdef NO_LIBDW_DWARF_UNWIND + msg := $(warning Disabling post unwind, no support found.); + dwarf-post-unwind := 0 else - ifeq ($(ARCH),arm) - $(call feature_check,libunwind-debug-frame) - ifneq ($(feature-libunwind-debug-frame), 1) - msg := $(warning No debug_frame support found in libunwind); - CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME - endif - else - # non-ARM has no dwarf_find_debug_frame() function: + dwarf-post-unwind-text := libdw + endif +else + dwarf-post-unwind-text := libunwind + # Enable libunwind support by default. 
+ ifndef NO_LIBDW_DWARF_UNWIND + NO_LIBDW_DWARF_UNWIND := 1 + endif +endif + +ifeq ($(dwarf-post-unwind),1) + CFLAGS += -DHAVE_DWARF_UNWIND_SUPPORT +else + NO_DWARF_UNWIND := 1 +endif + +ifndef NO_LIBUNWIND + ifeq ($(ARCH),arm) + $(call feature_check,libunwind-debug-frame) + ifneq ($(feature-libunwind-debug-frame), 1) + msg := $(warning No debug_frame support found in libunwind); CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME endif - - CFLAGS += -DHAVE_LIBUNWIND_SUPPORT - EXTLIBS += $(LIBUNWIND_LIBS) - CFLAGS += $(LIBUNWIND_CFLAGS) - LDFLAGS += $(LIBUNWIND_LDFLAGS) - endif # ifneq ($(feature-libunwind), 1) + else + # non-ARM has no dwarf_find_debug_frame() function: + CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME + endif + CFLAGS += -DHAVE_LIBUNWIND_SUPPORT + EXTLIBS += $(LIBUNWIND_LIBS) + CFLAGS += $(LIBUNWIND_CFLAGS) + LDFLAGS += $(LIBUNWIND_LDFLAGS) endif ifndef NO_LIBAUDIT @@ -602,3 +627,84 @@ ifdef DESTDIR plugindir=$(libdir)/traceevent/plugins plugindir_SQ= $(subst ','\'',$(plugindir)) endif + +# +# Print the result of the feature test: +# +feature_print_status = $(eval $(feature_print_status_code)) $(info $(MSG)) + +define feature_print_status_code + ifeq ($(feature-$(1)), 1) + MSG = $(shell printf '...%30s: [ \033[32mon\033[m ]' $(1)) + else + MSG = $(shell printf '...%30s: [ \033[31mOFF\033[m ]' $(1)) + endif +endef + +feature_print_var = $(eval $(feature_print_var_code)) $(info $(MSG)) +define feature_print_var_code + MSG = $(shell printf '...%30s: %s' $(1) $($(1))) +endef + +feature_print_text = $(eval $(feature_print_text_code)) $(info $(MSG)) +define feature_print_text_code + MSG = $(shell printf '...%30s: %s' $(1) $(2)) +endef + +PERF_FEATURES := $(foreach feat,$(LIB_FEATURE_TESTS),feature-$(feat)($(feature-$(feat)))) +PERF_FEATURES_FILE := $(shell touch $(OUTPUT)PERF-FEATURES; cat $(OUTPUT)PERF-FEATURES) + +ifeq ($(dwarf-post-unwind),1) + PERF_FEATURES += dwarf-post-unwind($(dwarf-post-unwind-text)) +endif + +# The $(display_lib) controls the default detection message +# output. 
It's set if: +# - detected features differs from stored features from +# last build (in PERF-FEATURES file) +# - one of the $(LIB_FEATURE_TESTS) is not detected +# - VF is enabled + +ifneq ("$(PERF_FEATURES)","$(PERF_FEATURES_FILE)") + $(shell echo "$(PERF_FEATURES)" > $(OUTPUT)PERF-FEATURES) + display_lib := 1 +endif + +feature_check = $(eval $(feature_check_code)) +define feature_check_code + ifneq ($(feature-$(1)), 1) + display_lib := 1 + endif +endef + +$(foreach feat,$(LIB_FEATURE_TESTS),$(call feature_check,$(feat))) + +ifeq ($(VF),1) + display_lib := 1 + display_vf := 1 +endif + +ifeq ($(display_lib),1) + $(info ) + $(info Auto-detecting system features:) + $(foreach feat,$(LIB_FEATURE_TESTS),$(call feature_print_status,$(feat),)) + + ifeq ($(dwarf-post-unwind),1) + $(call feature_print_text,"DWARF post unwind library", $(dwarf-post-unwind-text)) + endif +endif + +ifeq ($(display_vf),1) + $(foreach feat,$(VF_FEATURE_TESTS),$(call feature_print_status,$(feat),)) + $(info ) + $(call feature_print_var,prefix) + $(call feature_print_var,bindir) + $(call feature_print_var,libdir) + $(call feature_print_var,sysconfdir) + $(call feature_print_var,LIBUNWIND_DIR) + $(call feature_print_var,LIBDW_DIR) +endif + +ifeq ($(display_lib),1) + $(info ) +endif diff --git a/tools/perf/config/feature-checks/Makefile b/tools/perf/config/feature-checks/Makefile index 523b7bc10553..2da103c53f89 100644 --- a/tools/perf/config/feature-checks/Makefile +++ b/tools/perf/config/feature-checks/Makefile @@ -26,7 +26,8 @@ FILES= \ test-libunwind-debug-frame.bin \ test-on-exit.bin \ test-stackprotector-all.bin \ - test-timerfd.bin + test-timerfd.bin \ + test-libdw-dwarf-unwind.bin CC := $(CROSS_COMPILE)gcc -MD PKG_CONFIG := $(CROSS_COMPILE)pkg-config @@ -141,6 +142,9 @@ test-backtrace.bin: test-timerfd.bin: $(BUILD) +test-libdw-dwarf-unwind.bin: + $(BUILD) + -include *.d ############################### diff --git a/tools/perf/config/feature-checks/test-all.c b/tools/perf/config/feature-checks/test-all.c index 9b8a544155bb..fc37eb3ca17b 100644 --- a/tools/perf/config/feature-checks/test-all.c +++ b/tools/perf/config/feature-checks/test-all.c @@ -89,6 +89,10 @@ # include "test-stackprotector-all.c" #undef main +#define main main_test_libdw_dwarf_unwind +# include "test-libdw-dwarf-unwind.c" +#undef main + int main(int argc, char *argv[]) { main_test_libpython(); @@ -111,6 +115,7 @@ int main(int argc, char *argv[]) main_test_libnuma(); main_test_timerfd(); main_test_stackprotector_all(); + main_test_libdw_dwarf_unwind(); return 0; } diff --git a/tools/perf/config/feature-checks/test-libdw-dwarf-unwind.c b/tools/perf/config/feature-checks/test-libdw-dwarf-unwind.c new file mode 100644 index 000000000000..f676a3ff442a --- /dev/null +++ b/tools/perf/config/feature-checks/test-libdw-dwarf-unwind.c @@ -0,0 +1,13 @@ + +#include <elfutils/libdwfl.h> + +int main(void) +{ + /* + * This function is guarded via: __nonnull_attribute__ (1, 2). + * Passing '1' as the argument values. This code is never executed, + * only compiled. + */ + dwfl_thread_getframes((void *) 1, (void *) 1, NULL); + return 0; +} diff --git a/tools/perf/design.txt b/tools/perf/design.txt index 63a0e6f04a01..a28dca2582aa 100644 --- a/tools/perf/design.txt +++ b/tools/perf/design.txt @@ -18,7 +18,7 @@ underlying hardware counters. Performance counters are accessed via special file descriptors. There's one file descriptor per virtual counter used.
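A minimal sketch of obtaining and reading such a file descriptor (this example is not part of the patch; it assumes <linux/perf_event.h> is available and wraps the raw syscall itself, since libc ships no wrapper):

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <inttypes.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
			       int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size   = sizeof(attr);
	attr.type   = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;

	/* pid == 0, cpu == -1: count this task, on any CPU */
	fd = sys_perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	/* ... the workload under measurement would run here ... */

	/* with no read_format flags set, read() returns one u64 count */
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("cycles: %" PRIu64 "\n", count);
	close(fd);
	return 0;
}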
-The special file descriptor is opened via the perf_event_open() +The special file descriptor is opened via the sys_perf_event_open() system call: int sys_perf_event_open(struct perf_event_attr *hw_event_uptr, @@ -82,7 +82,7 @@ machine-specific. If 'raw_type' is 0, then the 'type' field says what kind of counter this is, with the following encoding: -enum perf_event_types { +enum perf_type_id { PERF_TYPE_HARDWARE = 0, PERF_TYPE_SOFTWARE = 1, PERF_TYPE_TRACEPOINT = 2, @@ -95,7 +95,7 @@ specified by 'event_id': * Generalized performance counter event types, used by the hw_event.event_id * parameter of the sys_perf_event_open() syscall: */ -enum hw_event_ids { +enum perf_hw_id { /* * Common hardware events, generalized by the kernel: */ @@ -129,7 +129,7 @@ software events, selected by 'event_id': * physical and sw events of the kernel (and allow the profiling of them as * well): */ -enum sw_event_ids { +enum perf_sw_ids { PERF_COUNT_SW_CPU_CLOCK = 0, PERF_COUNT_SW_TASK_CLOCK = 1, PERF_COUNT_SW_PAGE_FAULTS = 2, @@ -230,7 +230,7 @@ these events are recorded in the ring-buffer (see below). The 'comm' bit allows tracking of process comm data on process creation. This too is recorded in the ring-buffer (see below). -The 'pid' parameter to the perf_event_open() system call allows the +The 'pid' parameter to the sys_perf_event_open() system call allows the counter to be specific to a task: pid == 0: if the pid parameter is zero, the counter is attached to the @@ -260,7 +260,7 @@ The 'flags' parameter is currently unused and must be zero. The 'group_fd' parameter allows counter "groups" to be set up. A counter group has one counter which is the group "leader". The leader -is created first, with group_fd = -1 in the perf_event_open call +is created first, with group_fd = -1 in the sys_perf_event_open call that creates it. The rest of the group members are created subsequently, with group_fd giving the fd of the group leader. 
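The group setup described above can be sketched as follows (a hypothetical helper, not from the patch, reusing the sys_perf_event_open() wrapper from the previous example): the leader is opened with group_fd == -1, and each member then passes the leader's fd:

int open_cycles_insns_group(int *leader, int *member)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size   = sizeof(attr);
	attr.type   = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;

	/* the group leader is created first, with group_fd == -1 */
	*leader = sys_perf_event_open(&attr, 0, -1, -1, 0);
	if (*leader < 0)
		return -1;

	/* members pass the leader's fd, so the pair is scheduled as a unit */
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	*member = sys_perf_event_open(&attr, 0, -1, *leader, 0);
	if (*member < 0) {
		close(*leader);
		return -1;
	}
	return 0;
}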
(A single counter on its own is created with group_fd = -1 and is diff --git a/tools/perf/perf-completion.sh b/tools/perf/perf-completion.sh index 496e2abb5482..ae3a57694b6b 100644 --- a/tools/perf/perf-completion.sh +++ b/tools/perf/perf-completion.sh @@ -123,7 +123,7 @@ __perf_main () __perfcomp_colon "$evts" "$cur" # List subcommands for 'perf kvm' elif [[ $prev == "kvm" ]]; then - subcmds="top record report diff buildid-list stat" + subcmds=$($cmd $prev --list-cmds) __perfcomp_colon "$subcmds" "$cur" # List long option names elif [[ $cur == --* ]]; then diff --git a/tools/perf/perf.h b/tools/perf/perf.h index e84fa26bc1be..e18a8b5e6953 100644 --- a/tools/perf/perf.h +++ b/tools/perf/perf.h @@ -12,6 +12,9 @@ #ifndef __NR_perf_event_open # define __NR_perf_event_open 336 #endif +#ifndef __NR_futex +# define __NR_futex 240 +#endif #endif #if defined(__x86_64__) @@ -23,6 +26,9 @@ #ifndef __NR_perf_event_open # define __NR_perf_event_open 298 #endif +#ifndef __NR_futex +# define __NR_futex 202 +#endif #endif #ifdef __powerpc__ @@ -251,12 +257,14 @@ void pthread__unblock_sigwinch(void); enum perf_call_graph_mode { CALLCHAIN_NONE, CALLCHAIN_FP, - CALLCHAIN_DWARF + CALLCHAIN_DWARF, + CALLCHAIN_MAX }; struct record_opts { struct target target; int call_graph; + bool call_graph_enabled; bool group; bool inherit_stat; bool no_buffering; diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c index 1e67437fb4ca..b11bf8a08430 100644 --- a/tools/perf/tests/builtin-test.c +++ b/tools/perf/tests/builtin-test.c @@ -115,6 +115,14 @@ static struct test { .desc = "Test parsing with no sample_id_all bit set", .func = test__parse_no_sample_id_all, }, +#if defined(__x86_64__) || defined(__i386__) +#ifdef HAVE_DWARF_UNWIND_SUPPORT + { + .desc = "Test dwarf unwind", + .func = test__dwarf_unwind, + }, +#endif +#endif { .func = NULL, }, diff --git a/tools/perf/tests/dwarf-unwind.c b/tools/perf/tests/dwarf-unwind.c new file mode 100644 index 000000000000..c059ee81c038 --- /dev/null +++ b/tools/perf/tests/dwarf-unwind.c @@ -0,0 +1,144 @@ +#include <linux/compiler.h> +#include <sys/types.h> +#include <unistd.h> +#include "tests.h" +#include "debug.h" +#include "machine.h" +#include "event.h" +#include "unwind.h" +#include "perf_regs.h" +#include "map.h" +#include "thread.h" + +static int mmap_handler(struct perf_tool *tool __maybe_unused, + union perf_event *event, + struct perf_sample *sample __maybe_unused, + struct machine *machine) +{ + return machine__process_mmap_event(machine, event, NULL); +} + +static int init_live_machine(struct machine *machine) +{ + union perf_event event; + pid_t pid = getpid(); + + return perf_event__synthesize_mmap_events(NULL, &event, pid, pid, + mmap_handler, machine, true); +} + +#define MAX_STACK 6 + +static int unwind_entry(struct unwind_entry *entry, void *arg) +{ + unsigned long *cnt = (unsigned long *) arg; + char *symbol = entry->sym ? 
entry->sym->name : NULL; + static const char *funcs[MAX_STACK] = { + "test__arch_unwind_sample", + "unwind_thread", + "krava_3", + "krava_2", + "krava_1", + "test__dwarf_unwind" + }; + + if (*cnt >= MAX_STACK) { + pr_debug("failed: crossed the max stack value %d\n", MAX_STACK); + return -1; + } + + if (!symbol) { + pr_debug("failed: got unresolved address 0x%" PRIx64 "\n", + entry->ip); + return -1; + } + + pr_debug("got: %s 0x%" PRIx64 "\n", symbol, entry->ip); + return strcmp((const char *) symbol, funcs[(*cnt)++]); +} + +__attribute__ ((noinline)) +static int unwind_thread(struct thread *thread, struct machine *machine) +{ + struct perf_sample sample; + unsigned long cnt = 0; + int err = -1; + + memset(&sample, 0, sizeof(sample)); + + if (test__arch_unwind_sample(&sample, thread)) { + pr_debug("failed to get unwind sample\n"); + goto out; + } + + err = unwind__get_entries(unwind_entry, &cnt, machine, thread, + &sample, MAX_STACK); + if (err) + pr_debug("unwind failed\n"); + else if (cnt != MAX_STACK) { + pr_debug("got wrong number of stack entries %lu != %d\n", + cnt, MAX_STACK); + err = -1; + } + + out: + free(sample.user_stack.data); + free(sample.user_regs.regs); + return err; +} + +__attribute__ ((noinline)) +static int krava_3(struct thread *thread, struct machine *machine) +{ + return unwind_thread(thread, machine); +} + +__attribute__ ((noinline)) +static int krava_2(struct thread *thread, struct machine *machine) +{ + return krava_3(thread, machine); +} + +__attribute__ ((noinline)) +static int krava_1(struct thread *thread, struct machine *machine) +{ + return krava_2(thread, machine); +} + +int test__dwarf_unwind(void) +{ + struct machines machines; + struct machine *machine; + struct thread *thread; + int err = -1; + + machines__init(&machines); + + machine = machines__find(&machines, HOST_KERNEL_ID); + if (!machine) { + pr_err("Could not get machine\n"); + return -1; + } + + if (init_live_machine(machine)) { + pr_err("Could not init machine\n"); + goto out; + } + + if (verbose > 1) + machine__fprintf(machine, stderr); + + thread = machine__find_thread(machine, getpid(), getpid()); + if (!thread) { + pr_err("Could not get thread\n"); + goto out; + } + + err = krava_1(thread, machine); + + out: + machine__delete_threads(machine); + machine__exit(machine); + machines__exit(&machines); + return err; +} diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c index 2b6519e0e36f..7ccbc7b6ae77 100644 --- a/tools/perf/tests/hists_link.c +++ b/tools/perf/tests/hists_link.c @@ -101,6 +101,7 @@ static struct machine *setup_fake_machine(struct machines *machines) .mmap = { .header = { .misc = PERF_RECORD_MISC_USER, }, .pid = fake_mmap_info[i].pid, + .tid = fake_mmap_info[i].pid, .start = fake_mmap_info[i].start, .len = 0x1000ULL, .pgoff = 0ULL, diff --git a/tools/perf/tests/make b/tools/perf/tests/make index 00544b8b644b..5daeae1cb4c0 100644 --- a/tools/perf/tests/make +++ b/tools/perf/tests/make @@ -27,6 +27,7 @@ make_no_ui := NO_NEWT=1 NO_SLANG=1 NO_GTK2=1 make_no_demangle := NO_DEMANGLE=1 make_no_libelf := NO_LIBELF=1 make_no_libunwind := NO_LIBUNWIND=1 +make_no_libdw_dwarf_unwind := NO_LIBDW_DWARF_UNWIND=1 make_no_backtrace := NO_BACKTRACE=1 make_no_libnuma := NO_LIBNUMA=1 make_no_libaudit := NO_LIBAUDIT=1 @@ -35,8 +36,9 @@ make_tags := tags make_cscope := cscope make_help := help make_doc := doc -make_perf_o := perf.o -make_util_map_o := util/map.o +make_perf_o := perf.o +make_util_map_o := util/map.o +make_util_pmu_bison_o := util/pmu-bison.o make_install := 
install make_install_bin := install-bin make_install_doc := install-doc @@ -49,6 +51,7 @@ make_install_pdf := install-pdf make_minimal := NO_LIBPERL=1 NO_LIBPYTHON=1 NO_NEWT=1 NO_GTK2=1 make_minimal += NO_DEMANGLE=1 NO_LIBELF=1 NO_LIBUNWIND=1 NO_BACKTRACE=1 make_minimal += NO_LIBNUMA=1 NO_LIBAUDIT=1 NO_LIBBIONIC=1 +make_minimal += NO_LIBDW_DWARF_UNWIND=1 # $(run) contains all available tests run := make_pure @@ -65,6 +68,7 @@ run += make_no_ui run += make_no_demangle run += make_no_libelf run += make_no_libunwind +run += make_no_libdw_dwarf_unwind run += make_no_backtrace run += make_no_libnuma run += make_no_libaudit @@ -73,6 +77,7 @@ run += make_help run += make_doc run += make_perf_o run += make_util_map_o +run += make_util_pmu_bison_o run += make_install run += make_install_bin # FIXME 'install-*' commented out till they're fixed @@ -113,8 +118,9 @@ test_make_doc_O := $(test_ok) test_make_python_perf_so := test -f $(PERF)/python/perf.so -test_make_perf_o := test -f $(PERF)/perf.o -test_make_util_map_o := test -f $(PERF)/util/map.o +test_make_perf_o := test -f $(PERF)/perf.o +test_make_util_map_o := test -f $(PERF)/util/map.o +test_make_util_pmu_bison_o := test -f $(PERF)/util/pmu-bison.o define test_dest_files for file in $(1); do \ @@ -167,13 +173,10 @@ test_make_install_info_O := $(test_ok) test_make_install_pdf := $(test_ok) test_make_install_pdf_O := $(test_ok) -# Kbuild tests only -#test_make_python_perf_so_O := test -f $$TMP/tools/perf/python/perf.so -#test_make_perf_o_O := test -f $$TMP/tools/perf/perf.o -#test_make_util_map_o_O := test -f $$TMP/tools/perf/util/map.o - -test_make_perf_o_O := true -test_make_util_map_o_O := true +test_make_python_perf_so_O := test -f $$TMP_O/python/perf.so +test_make_perf_o_O := test -f $$TMP_O/perf.o +test_make_util_map_o_O := test -f $$TMP_O/util/map.o +test_make_util_pmu_bison_o_O := test -f $$TMP_O/util/pmu-bison.o test_default = test -x $(PERF)/perf test = $(if $(test_$1),$(test_$1),$(test_default)) diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c index 4db0ae617d70..8605ff5572ae 100644 --- a/tools/perf/tests/parse-events.c +++ b/tools/perf/tests/parse-events.c @@ -2,7 +2,7 @@ #include "parse-events.h" #include "evsel.h" #include "evlist.h" -#include "fs.h" +#include <api/fs/fs.h> #include <api/fs/debugfs.h> #include "tests.h" #include <linux/hw_breakpoint.h> diff --git a/tools/perf/tests/sample-parsing.c b/tools/perf/tests/sample-parsing.c index 1b677202638d..0014d3c8c21c 100644 --- a/tools/perf/tests/sample-parsing.c +++ b/tools/perf/tests/sample-parsing.c @@ -22,8 +22,8 @@ } while (0) static bool samples_same(const struct perf_sample *s1, - const struct perf_sample *s2, u64 type, u64 regs_user, - u64 read_format) + const struct perf_sample *s2, + u64 type, u64 read_format) { size_t i; @@ -95,8 +95,9 @@ static bool samples_same(const struct perf_sample *s1, } if (type & PERF_SAMPLE_REGS_USER) { - size_t sz = hweight_long(regs_user) * sizeof(u64); + size_t sz = hweight_long(s1->user_regs.mask) * sizeof(u64); + COMP(user_regs.mask); COMP(user_regs.abi); if (s1->user_regs.abi && (!s1->user_regs.regs || !s2->user_regs.regs || @@ -174,6 +175,7 @@ static int do_test(u64 sample_type, u64 sample_regs_user, u64 read_format) .branch_stack = &branch_stack.branch_stack, .user_regs = { .abi = PERF_SAMPLE_REGS_ABI_64, + .mask = sample_regs_user, .regs = user_regs, }, .user_stack = { @@ -201,8 +203,7 @@ static int do_test(u64 sample_type, u64 sample_regs_user, u64 read_format) sample.read.one.id = 99; } - sz = 
perf_event__sample_event_size(&sample, sample_type, - sample_regs_user, read_format); + sz = perf_event__sample_event_size(&sample, sample_type, read_format); bufsz = sz + 4096; /* Add a bit for overrun checking */ event = malloc(bufsz); if (!event) { @@ -215,8 +216,7 @@ static int do_test(u64 sample_type, u64 sample_regs_user, u64 read_format) event->header.misc = 0; event->header.size = sz; - err = perf_event__synthesize_sample(event, sample_type, - sample_regs_user, read_format, + err = perf_event__synthesize_sample(event, sample_type, read_format, &sample, false); if (err) { pr_debug("%s failed for sample_type %#"PRIx64", error %d\n", @@ -244,8 +244,7 @@ static int do_test(u64 sample_type, u64 sample_regs_user, u64 read_format) goto out_free; } - if (!samples_same(&sample, &sample_out, sample_type, - sample_regs_user, read_format)) { + if (!samples_same(&sample, &sample_out, sample_type, read_format)) { pr_debug("parsing failed for sample_type %#"PRIx64"\n", sample_type); goto out_free; diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h index e0ac713857ba..a24795ca002d 100644 --- a/tools/perf/tests/tests.h +++ b/tools/perf/tests/tests.h @@ -40,5 +40,14 @@ int test__code_reading(void); int test__sample_parsing(void); int test__keep_tracking(void); int test__parse_no_sample_id_all(void); +int test__dwarf_unwind(void); +#if defined(__x86_64__) || defined(__i386__) +#ifdef HAVE_DWARF_UNWIND_SUPPORT +struct thread; +struct perf_sample; +int test__arch_unwind_sample(struct perf_sample *sample, + struct thread *thread); +#endif +#endif #endif /* TESTS_H */ diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c index b720b92eba6e..7ec871af3f6f 100644 --- a/tools/perf/ui/browsers/hists.c +++ b/tools/perf/ui/browsers/hists.c @@ -587,95 +587,52 @@ struct hpp_arg { bool current_entry; }; -static int __hpp__color_callchain(struct hpp_arg *arg) +static int __hpp__overhead_callback(struct perf_hpp *hpp, bool front) { - if (!symbol_conf.use_callchain) - return 0; - - slsmg_printf("%c ", arg->folded_sign); - return 2; -} - -static int __hpp__color_fmt(struct perf_hpp *hpp, struct hist_entry *he, - u64 (*get_field)(struct hist_entry *), - int (*callchain_cb)(struct hpp_arg *)) -{ - int ret = 0; - double percent = 0.0; - struct hists *hists = he->hists; struct hpp_arg *arg = hpp->ptr; - if (hists->stats.total_period) - percent = 100.0 * get_field(he) / hists->stats.total_period; - - ui_browser__set_percent_color(arg->b, percent, arg->current_entry); - - if (callchain_cb) - ret += callchain_cb(arg); - - ret += scnprintf(hpp->buf, hpp->size, "%6.2f%%", percent); - slsmg_printf("%s", hpp->buf); - - if (symbol_conf.event_group) { - int prev_idx, idx_delta; - struct perf_evsel *evsel = hists_to_evsel(hists); - struct hist_entry *pair; - int nr_members = evsel->nr_members; - - if (nr_members <= 1) - goto out; + if (arg->current_entry && arg->b->navkeypressed) + ui_browser__set_color(arg->b, HE_COLORSET_SELECTED); + else + ui_browser__set_color(arg->b, HE_COLORSET_NORMAL); - prev_idx = perf_evsel__group_idx(evsel); + if (front) { + if (!symbol_conf.use_callchain) + return 0; - list_for_each_entry(pair, &he->pairs.head, pairs.node) { - u64 period = get_field(pair); - u64 total = pair->hists->stats.total_period; + slsmg_printf("%c ", arg->folded_sign); + return 2; + } - if (!total) - continue; + return 0; +} - evsel = hists_to_evsel(pair->hists); - idx_delta = perf_evsel__group_idx(evsel) - prev_idx - 1; +static int __hpp__color_callback(struct perf_hpp *hpp, bool front 
__maybe_unused) +{ + struct hpp_arg *arg = hpp->ptr; - while (idx_delta--) { - /* - * zero-fill group members in the middle which - * have no sample - */ - ui_browser__set_percent_color(arg->b, 0.0, - arg->current_entry); - ret += scnprintf(hpp->buf, hpp->size, - " %6.2f%%", 0.0); - slsmg_printf("%s", hpp->buf); - } + if (!arg->current_entry || !arg->b->navkeypressed) + ui_browser__set_color(arg->b, HE_COLORSET_NORMAL); + return 0; +} - percent = 100.0 * period / total; - ui_browser__set_percent_color(arg->b, percent, - arg->current_entry); - ret += scnprintf(hpp->buf, hpp->size, - " %6.2f%%", percent); - slsmg_printf("%s", hpp->buf); +static int __hpp__slsmg_color_printf(struct perf_hpp *hpp, const char *fmt, ...) +{ + struct hpp_arg *arg = hpp->ptr; + int ret; + va_list args; + double percent; - prev_idx = perf_evsel__group_idx(evsel); - } + va_start(args, fmt); + percent = va_arg(args, double); + va_end(args); - idx_delta = nr_members - prev_idx - 1; + ui_browser__set_percent_color(arg->b, percent, arg->current_entry); - while (idx_delta--) { - /* - * zero-fill group members at last which have no sample - */ - ui_browser__set_percent_color(arg->b, 0.0, - arg->current_entry); - ret += scnprintf(hpp->buf, hpp->size, - " %6.2f%%", 0.0); - slsmg_printf("%s", hpp->buf); - } - } -out: - if (!arg->current_entry || !arg->b->navkeypressed) - ui_browser__set_color(arg->b, HE_COLORSET_NORMAL); + ret = scnprintf(hpp->buf, hpp->size, fmt, percent); + slsmg_printf("%s", hpp->buf); + advance_hpp(hpp, ret); return ret; } @@ -690,14 +647,15 @@ hist_browser__hpp_color_##_type(struct perf_hpp_fmt *fmt __maybe_unused,\ struct perf_hpp *hpp, \ struct hist_entry *he) \ { \ - return __hpp__color_fmt(hpp, he, __hpp_get_##_field, _cb); \ + return __hpp__fmt(hpp, he, __hpp_get_##_field, _cb, " %6.2f%%", \ + __hpp__slsmg_color_printf, true); \ } -__HPP_COLOR_PERCENT_FN(overhead, period, __hpp__color_callchain) -__HPP_COLOR_PERCENT_FN(overhead_sys, period_sys, NULL) -__HPP_COLOR_PERCENT_FN(overhead_us, period_us, NULL) -__HPP_COLOR_PERCENT_FN(overhead_guest_sys, period_guest_sys, NULL) -__HPP_COLOR_PERCENT_FN(overhead_guest_us, period_guest_us, NULL) +__HPP_COLOR_PERCENT_FN(overhead, period, __hpp__overhead_callback) +__HPP_COLOR_PERCENT_FN(overhead_sys, period_sys, __hpp__color_callback) +__HPP_COLOR_PERCENT_FN(overhead_us, period_us, __hpp__color_callback) +__HPP_COLOR_PERCENT_FN(overhead_guest_sys, period_guest_sys, __hpp__color_callback) +__HPP_COLOR_PERCENT_FN(overhead_guest_us, period_guest_us, __hpp__color_callback) #undef __HPP_COLOR_PERCENT_FN diff --git a/tools/perf/ui/gtk/hists.c b/tools/perf/ui/gtk/hists.c index 5b95c44f3435..e395ef9b0ae0 100644 --- a/tools/perf/ui/gtk/hists.c +++ b/tools/perf/ui/gtk/hists.c @@ -8,16 +8,24 @@ #define MAX_COLUMNS 32 -static int __percent_color_snprintf(char *buf, size_t size, double percent) +static int __percent_color_snprintf(struct perf_hpp *hpp, const char *fmt, ...) 
{ int ret = 0; + va_list args; + double percent; const char *markup; + char *buf = hpp->buf; + size_t size = hpp->size; + + va_start(args, fmt); + percent = va_arg(args, double); + va_end(args); markup = perf_gtk__get_percent_color(percent); if (markup) ret += scnprintf(buf, size, markup); - ret += scnprintf(buf + ret, size - ret, " %6.2f%%", percent); + ret += scnprintf(buf + ret, size - ret, fmt, percent); if (markup) ret += scnprintf(buf + ret, size - ret, "</span>"); @@ -25,66 +33,6 @@ static int __percent_color_snprintf(char *buf, size_t size, double percent) return ret; } - -static int __hpp__color_fmt(struct perf_hpp *hpp, struct hist_entry *he, - u64 (*get_field)(struct hist_entry *)) -{ - int ret; - double percent = 0.0; - struct hists *hists = he->hists; - struct perf_evsel *evsel = hists_to_evsel(hists); - - if (hists->stats.total_period) - percent = 100.0 * get_field(he) / hists->stats.total_period; - - ret = __percent_color_snprintf(hpp->buf, hpp->size, percent); - - if (perf_evsel__is_group_event(evsel)) { - int prev_idx, idx_delta; - struct hist_entry *pair; - int nr_members = evsel->nr_members; - - prev_idx = perf_evsel__group_idx(evsel); - - list_for_each_entry(pair, &he->pairs.head, pairs.node) { - u64 period = get_field(pair); - u64 total = pair->hists->stats.total_period; - - evsel = hists_to_evsel(pair->hists); - idx_delta = perf_evsel__group_idx(evsel) - prev_idx - 1; - - while (idx_delta--) { - /* - * zero-fill group members in the middle which - * have no sample - */ - ret += __percent_color_snprintf(hpp->buf + ret, - hpp->size - ret, - 0.0); - } - - percent = 100.0 * period / total; - ret += __percent_color_snprintf(hpp->buf + ret, - hpp->size - ret, - percent); - - prev_idx = perf_evsel__group_idx(evsel); - } - - idx_delta = nr_members - prev_idx - 1; - - while (idx_delta--) { - /* - * zero-fill group members at last which have no sample - */ - ret += __percent_color_snprintf(hpp->buf + ret, - hpp->size - ret, - 0.0); - } - } - return ret; -} - #define __HPP_COLOR_PERCENT_FN(_type, _field) \ static u64 he_get_##_field(struct hist_entry *he) \ { \ @@ -95,7 +43,8 @@ static int perf_gtk__hpp_color_##_type(struct perf_hpp_fmt *fmt __maybe_unused, struct perf_hpp *hpp, \ struct hist_entry *he) \ { \ - return __hpp__color_fmt(hpp, he, he_get_##_field); \ + return __hpp__fmt(hpp, he, he_get_##_field, NULL, " %6.2f%%", \ + __percent_color_snprintf, true); \ } __HPP_COLOR_PERCENT_FN(overhead, period) @@ -216,7 +165,6 @@ static void perf_gtk__show_hists(GtkWidget *window, struct hists *hists, struct perf_hpp hpp = { .buf = s, .size = sizeof(s), - .ptr = hists_to_evsel(hists), }; nr_cols = 0; @@ -243,7 +191,7 @@ static void perf_gtk__show_hists(GtkWidget *window, struct hists *hists, col_idx = 0; perf_hpp__for_each_format(fmt) { - fmt->header(fmt, &hpp); + fmt->header(fmt, &hpp, hists_to_evsel(hists)); gtk_tree_view_insert_column_with_attributes(GTK_TREE_VIEW(view), -1, ltrim(s), diff --git a/tools/perf/ui/hist.c b/tools/perf/ui/hist.c index 78f4c92e9b73..0f403b83e9d1 100644 --- a/tools/perf/ui/hist.c +++ b/tools/perf/ui/hist.c @@ -8,16 +8,27 @@ /* hist period print (hpp) functions */ -typedef int (*hpp_snprint_fn)(char *buf, size_t size, const char *fmt, ...); - -static int __hpp__fmt(struct perf_hpp *hpp, struct hist_entry *he, - u64 (*get_field)(struct hist_entry *), - const char *fmt, hpp_snprint_fn print_fn, - bool fmt_percent) +#define hpp__call_print_fn(hpp, fn, fmt, ...) 
\ +({ \ + int __ret = fn(hpp, fmt, ##__VA_ARGS__); \ + advance_hpp(hpp, __ret); \ + __ret; \ +}) + +int __hpp__fmt(struct perf_hpp *hpp, struct hist_entry *he, + hpp_field_fn get_field, hpp_callback_fn callback, + const char *fmt, hpp_snprint_fn print_fn, bool fmt_percent) { - int ret; + int ret = 0; struct hists *hists = he->hists; struct perf_evsel *evsel = hists_to_evsel(hists); + char *buf = hpp->buf; + size_t size = hpp->size; + + if (callback) { + ret = callback(hpp, true); + advance_hpp(hpp, ret); + } if (fmt_percent) { double percent = 0.0; @@ -26,9 +37,9 @@ static int __hpp__fmt(struct perf_hpp *hpp, struct hist_entry *he, percent = 100.0 * get_field(he) / hists->stats.total_period; - ret = print_fn(hpp->buf, hpp->size, fmt, percent); + ret += hpp__call_print_fn(hpp, print_fn, fmt, percent); } else - ret = print_fn(hpp->buf, hpp->size, fmt, get_field(he)); + ret += hpp__call_print_fn(hpp, print_fn, fmt, get_field(he)); if (perf_evsel__is_group_event(evsel)) { int prev_idx, idx_delta; @@ -52,16 +63,22 @@ static int __hpp__fmt(struct perf_hpp *hpp, struct hist_entry *he, * zero-fill group members in the middle which * have no sample */ - ret += print_fn(hpp->buf + ret, hpp->size - ret, - fmt, 0); + if (fmt_percent) { + ret += hpp__call_print_fn(hpp, print_fn, + fmt, 0.0); + } else { + ret += hpp__call_print_fn(hpp, print_fn, + fmt, 0ULL); + } } - if (fmt_percent) - ret += print_fn(hpp->buf + ret, hpp->size - ret, - fmt, 100.0 * period / total); - else - ret += print_fn(hpp->buf + ret, hpp->size - ret, - fmt, period); + if (fmt_percent) { + ret += hpp__call_print_fn(hpp, print_fn, fmt, + 100.0 * period / total); + } else { + ret += hpp__call_print_fn(hpp, print_fn, fmt, + period); + } prev_idx = perf_evsel__group_idx(evsel); } @@ -72,41 +89,87 @@ static int __hpp__fmt(struct perf_hpp *hpp, struct hist_entry *he, /* * zero-fill group members at last which have no sample */ - ret += print_fn(hpp->buf + ret, hpp->size - ret, - fmt, 0); + if (fmt_percent) { + ret += hpp__call_print_fn(hpp, print_fn, + fmt, 0.0); + } else { + ret += hpp__call_print_fn(hpp, print_fn, + fmt, 0ULL); + } } } + + if (callback) { + int __ret = callback(hpp, false); + + advance_hpp(hpp, __ret); + ret += __ret; + } + + /* + * Restore original buf and size as it's where caller expects + * the result will be saved. + */ + hpp->buf = buf; + hpp->size = size; + return ret; } #define __HPP_HEADER_FN(_type, _str, _min_width, _unit_width) \ static int hpp__header_##_type(struct perf_hpp_fmt *fmt __maybe_unused, \ - struct perf_hpp *hpp) \ + struct perf_hpp *hpp, \ + struct perf_evsel *evsel) \ { \ int len = _min_width; \ \ - if (symbol_conf.event_group) { \ - struct perf_evsel *evsel = hpp->ptr; \ - \ + if (symbol_conf.event_group) \ len = max(len, evsel->nr_members * _unit_width); \ - } \ + \ return scnprintf(hpp->buf, hpp->size, "%*s", len, _str); \ } #define __HPP_WIDTH_FN(_type, _min_width, _unit_width) \ static int hpp__width_##_type(struct perf_hpp_fmt *fmt __maybe_unused, \ - struct perf_hpp *hpp __maybe_unused) \ + struct perf_hpp *hpp __maybe_unused, \ + struct perf_evsel *evsel) \ { \ int len = _min_width; \ \ - if (symbol_conf.event_group) { \ - struct perf_evsel *evsel = hpp->ptr; \ - \ + if (symbol_conf.event_group) \ len = max(len, evsel->nr_members * _unit_width); \ - } \ + \ return len; \ } +static int hpp_color_scnprintf(struct perf_hpp *hpp, const char *fmt, ...) 
+{ + va_list args; + ssize_t ssize = hpp->size; + double percent; + int ret; + + va_start(args, fmt); + percent = va_arg(args, double); + ret = value_color_snprintf(hpp->buf, hpp->size, fmt, percent); + va_end(args); + + return (ret >= ssize) ? (ssize - 1) : ret; +} + +static int hpp_entry_scnprintf(struct perf_hpp *hpp, const char *fmt, ...) +{ + va_list args; + ssize_t ssize = hpp->size; + int ret; + + va_start(args, fmt); + ret = vsnprintf(hpp->buf, hpp->size, fmt, args); + va_end(args); + + return (ret >= ssize) ? (ssize - 1) : ret; +} + #define __HPP_COLOR_PERCENT_FN(_type, _field) \ static u64 he_get_##_field(struct hist_entry *he) \ { \ @@ -116,8 +179,8 @@ static u64 he_get_##_field(struct hist_entry *he) \ static int hpp__color_##_type(struct perf_hpp_fmt *fmt __maybe_unused, \ struct perf_hpp *hpp, struct hist_entry *he) \ { \ - return __hpp__fmt(hpp, he, he_get_##_field, " %6.2f%%", \ - percent_color_snprintf, true); \ + return __hpp__fmt(hpp, he, he_get_##_field, NULL, " %6.2f%%", \ + hpp_color_scnprintf, true); \ } #define __HPP_ENTRY_PERCENT_FN(_type, _field) \ @@ -125,8 +188,8 @@ static int hpp__entry_##_type(struct perf_hpp_fmt *_fmt __maybe_unused, \ struct perf_hpp *hpp, struct hist_entry *he) \ { \ const char *fmt = symbol_conf.field_sep ? " %.2f" : " %6.2f%%"; \ - return __hpp__fmt(hpp, he, he_get_##_field, fmt, \ - scnprintf, true); \ + return __hpp__fmt(hpp, he, he_get_##_field, NULL, fmt, \ + hpp_entry_scnprintf, true); \ } #define __HPP_ENTRY_RAW_FN(_type, _field) \ @@ -139,7 +202,8 @@ static int hpp__entry_##_type(struct perf_hpp_fmt *_fmt __maybe_unused, \ struct perf_hpp *hpp, struct hist_entry *he) \ { \ const char *fmt = symbol_conf.field_sep ? " %"PRIu64 : " %11"PRIu64; \ - return __hpp__fmt(hpp, he, he_get_raw_##_field, fmt, scnprintf, false); \ + return __hpp__fmt(hpp, he, he_get_raw_##_field, NULL, fmt, \ + hpp_entry_scnprintf, false); \ } #define HPP_PERCENT_FNS(_type, _str, _field, _min_width, _unit_width) \ @@ -263,15 +327,13 @@ unsigned int hists__sort_list_width(struct hists *hists) struct perf_hpp_fmt *fmt; struct sort_entry *se; int i = 0, ret = 0; - struct perf_hpp dummy_hpp = { - .ptr = hists_to_evsel(hists), - }; + struct perf_hpp dummy_hpp; perf_hpp__for_each_format(fmt) { if (i) ret += 2; - ret += fmt->width(fmt, &dummy_hpp); + ret += fmt->width(fmt, &dummy_hpp, hists_to_evsel(hists)); } list_for_each_entry(se, &hist_entry__sort_list, list) diff --git a/tools/perf/ui/stdio/hist.c b/tools/perf/ui/stdio/hist.c index 831fbb77d1ff..d59893edf031 100644 --- a/tools/perf/ui/stdio/hist.c +++ b/tools/perf/ui/stdio/hist.c @@ -306,12 +306,6 @@ static size_t hist_entry__callchain_fprintf(struct hist_entry *he, return hist_entry_callchain__fprintf(he, total_period, left_margin, fp); } -static inline void advance_hpp(struct perf_hpp *hpp, int inc) -{ - hpp->buf += inc; - hpp->size -= inc; -} - static int hist_entry__period_snprintf(struct perf_hpp *hpp, struct hist_entry *he) { @@ -385,7 +379,6 @@ size_t hists__fprintf(struct hists *hists, bool show_header, int max_rows, struct perf_hpp dummy_hpp = { .buf = bf, .size = sizeof(bf), - .ptr = hists_to_evsel(hists), }; bool first = true; size_t linesz; @@ -404,7 +397,7 @@ size_t hists__fprintf(struct hists *hists, bool show_header, int max_rows, else first = false; - fmt->header(fmt, &dummy_hpp); + fmt->header(fmt, &dummy_hpp, hists_to_evsel(hists)); fprintf(fp, "%s", bf); } @@ -449,7 +442,7 @@ size_t hists__fprintf(struct hists *hists, bool show_header, int max_rows, else first = false; - width = fmt->width(fmt, 
&dummy_hpp); + width = fmt->width(fmt, &dummy_hpp, hists_to_evsel(hists)); for (i = 0; i < width; i++) fprintf(fp, "."); } diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c index 3aa555ff9d89..809b4c50beae 100644 --- a/tools/perf/util/annotate.c +++ b/tools/perf/util/annotate.c @@ -1236,6 +1236,7 @@ int symbol__annotate_printf(struct symbol *sym, struct map *map, struct dso *dso = map->dso; char *filename; const char *d_filename; + const char *evsel_name = perf_evsel__name(evsel); struct annotation *notes = symbol__annotation(sym); struct disasm_line *pos, *queue = NULL; u64 start = map__rip_2objdump(map, sym->start); @@ -1243,7 +1244,7 @@ int symbol__annotate_printf(struct symbol *sym, struct map *map, int more = 0; u64 len; int width = 8; - int namelen; + int namelen, evsel_name_len, graph_dotted_len; filename = strdup(dso->long_name); if (!filename) @@ -1256,14 +1257,17 @@ int symbol__annotate_printf(struct symbol *sym, struct map *map, len = symbol__size(sym); namelen = strlen(d_filename); + evsel_name_len = strlen(evsel_name); if (perf_evsel__is_group_event(evsel)) width *= evsel->nr_members; - printf(" %-*.*s| Source code & Disassembly of %s\n", - width, width, "Percent", d_filename); - printf("-%-*.*s-------------------------------------\n", - width+namelen, width+namelen, graph_dotted_line); + printf(" %-*.*s| Source code & Disassembly of %s for %s\n", + width, width, "Percent", d_filename, evsel_name); + + graph_dotted_len = width + namelen + evsel_name_len; + printf("-%-*.*s-----------------------------------------\n", + graph_dotted_len, graph_dotted_len, graph_dotted_line); if (verbose) symbol__annotate_hits(sym, evsel); diff --git a/tools/perf/util/cpumap.c b/tools/perf/util/cpumap.c index a9b48c42e81e..7fe4994eeb63 100644 --- a/tools/perf/util/cpumap.c +++ b/tools/perf/util/cpumap.c @@ -1,5 +1,5 @@ #include "util.h" -#include "fs.h" +#include <api/fs/fs.h> #include "../perf.h" #include "cpumap.h" #include <assert.h> diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c index 4045d086d9d9..64453d63b971 100644 --- a/tools/perf/util/dso.c +++ b/tools/perf/util/dso.c @@ -45,8 +45,8 @@ int dso__read_binary_type_filename(const struct dso *dso, debuglink--; if (*debuglink == '/') debuglink++; - filename__read_debuglink(dso->long_name, debuglink, - size - (debuglink - filename)); + ret = filename__read_debuglink(dso->long_name, debuglink, - size - (debuglink - filename)); } break; case DSO_BINARY_TYPE__BUILD_ID_CACHE: diff --git a/tools/perf/util/dso.h b/tools/perf/util/dso.h index cd7d6f078cdd..ab06f1c03655 100644 --- a/tools/perf/util/dso.h +++ b/tools/perf/util/dso.h @@ -102,6 +102,16 @@ struct dso { char name[0]; }; +/* dso__for_each_symbol - iterate over the symbols of a given type + * + * @dso: the 'struct dso *' in which symbols are iterated + * @pos: the 'struct symbol *' to use as a loop cursor + * @n: the 'struct rb_node *' to use as temporary storage + * @type: the 'enum map_type' type of symbols + */ +#define dso__for_each_symbol(dso, pos, n, type) \ + symbols__for_each_entry(&(dso)->symbols[(type)], pos, n) + static inline void dso__set_loaded(struct dso *dso, enum map_type type) { dso->loaded |= (1 << type); diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c index b0f3ca850e9e..9d12aa6dd485 100644 --- a/tools/perf/util/event.c +++ b/tools/perf/util/event.c @@ -1,6 +1,7 @@ #include <linux/types.h> #include "event.h" #include "debug.h" +#include "hist.h" #include "machine.h" #include "sort.h" #include "string.h" @@ -94,14 +95,10 @@ static
pid_t perf_event__get_comm_tgid(pid_t pid, char *comm, size_t len) static pid_t perf_event__synthesize_comm(struct perf_tool *tool, union perf_event *event, pid_t pid, - int full, perf_event__handler_t process, struct machine *machine) { - char filename[PATH_MAX]; size_t size; - DIR *tasks; - struct dirent dirent, *next; pid_t tgid; memset(&event->comm, 0, sizeof(event->comm)); @@ -124,55 +121,35 @@ static pid_t perf_event__synthesize_comm(struct perf_tool *tool, event->comm.header.size = (sizeof(event->comm) - (sizeof(event->comm.comm) - size) + machine->id_hdr_size); - if (!full) { - event->comm.tid = pid; + event->comm.tid = pid; - if (process(tool, event, &synth_sample, machine) != 0) - return -1; - - goto out; - } - - if (machine__is_default_guest(machine)) - return 0; - - snprintf(filename, sizeof(filename), "%s/proc/%d/task", - machine->root_dir, pid); - - tasks = opendir(filename); - if (tasks == NULL) { - pr_debug("couldn't open %s\n", filename); - return 0; - } + if (process(tool, event, &synth_sample, machine) != 0) + return -1; - while (!readdir_r(tasks, &dirent, &next) && next) { - char *end; - pid = strtol(dirent.d_name, &end, 10); - if (*end) - continue; +out: + return tgid; +} - /* already have tgid; jut want to update the comm */ - (void) perf_event__get_comm_tgid(pid, event->comm.comm, - sizeof(event->comm.comm)); +static int perf_event__synthesize_fork(struct perf_tool *tool, + union perf_event *event, pid_t pid, + pid_t tgid, perf_event__handler_t process, + struct machine *machine) +{ + memset(&event->fork, 0, sizeof(event->fork) + machine->id_hdr_size); - size = strlen(event->comm.comm) + 1; - size = PERF_ALIGN(size, sizeof(u64)); - memset(event->comm.comm + size, 0, machine->id_hdr_size); - event->comm.header.size = (sizeof(event->comm) - - (sizeof(event->comm.comm) - size) + - machine->id_hdr_size); + /* this is really a clone event but we use fork to synthesize it */ + event->fork.ppid = tgid; + event->fork.ptid = tgid; + event->fork.pid = tgid; + event->fork.tid = pid; + event->fork.header.type = PERF_RECORD_FORK; - event->comm.tid = pid; + event->fork.header.size = (sizeof(event->fork) + machine->id_hdr_size); - if (process(tool, event, &synth_sample, machine) != 0) { - tgid = -1; - break; - } - } + if (process(tool, event, &synth_sample, machine) != 0) + return -1; - closedir(tasks); -out: - return tgid; + return 0; } int perf_event__synthesize_mmap_events(struct perf_tool *tool, @@ -324,17 +301,71 @@ int perf_event__synthesize_modules(struct perf_tool *tool, static int __event__synthesize_thread(union perf_event *comm_event, union perf_event *mmap_event, + union perf_event *fork_event, pid_t pid, int full, perf_event__handler_t process, struct perf_tool *tool, struct machine *machine, bool mmap_data) { - pid_t tgid = perf_event__synthesize_comm(tool, comm_event, pid, full, + char filename[PATH_MAX]; + DIR *tasks; + struct dirent dirent, *next; + pid_t tgid; + + /* special case: only send one comm event using passed in pid */ + if (!full) { + tgid = perf_event__synthesize_comm(tool, comm_event, pid, + process, machine); + + if (tgid == -1) + return -1; + + return perf_event__synthesize_mmap_events(tool, mmap_event, pid, tgid, + process, machine, mmap_data); + } + + if (machine__is_default_guest(machine)) + return 0; + + snprintf(filename, sizeof(filename), "%s/proc/%d/task", + machine->root_dir, pid); + + tasks = opendir(filename); + if (tasks == NULL) { + pr_debug("couldn't open %s\n", filename); + return 0; + } + + while (!readdir_r(tasks, &dirent, &next) && 
next) { + char *end; + int rc = 0; + pid_t _pid; + + _pid = strtol(dirent.d_name, &end, 10); + if (*end) + continue; + + tgid = perf_event__synthesize_comm(tool, comm_event, _pid, + process, machine); + if (tgid == -1) + return -1; + + if (_pid == pid) { + /* process the parent's maps too */ + rc = perf_event__synthesize_mmap_events(tool, mmap_event, pid, tgid, + process, machine, mmap_data); + } else { + /* only fork the tid's map, to save time */ + rc = perf_event__synthesize_fork(tool, fork_event, _pid, tgid, process, machine); - if (tgid == -1) - return -1; - return perf_event__synthesize_mmap_events(tool, mmap_event, pid, tgid, - process, machine, mmap_data); + } + + if (rc) + return rc; + } + + closedir(tasks); + return 0; } int perf_event__synthesize_thread_map(struct perf_tool *tool, @@ -343,7 +374,7 @@ int perf_event__synthesize_thread_map(struct perf_tool *tool, struct machine *machine, bool mmap_data) { - union perf_event *comm_event, *mmap_event; + union perf_event *comm_event, *mmap_event, *fork_event; int err = -1, thread, j; comm_event = malloc(sizeof(comm_event->comm) + machine->id_hdr_size); @@ -354,9 +385,14 @@ int perf_event__synthesize_thread_map(struct perf_tool *tool, if (mmap_event == NULL) goto out_free_comm; + fork_event = malloc(sizeof(fork_event->fork) + machine->id_hdr_size); + if (fork_event == NULL) + goto out_free_mmap; + err = 0; for (thread = 0; thread < threads->nr; ++thread) { if (__event__synthesize_thread(comm_event, mmap_event, + fork_event, threads->map[thread], 0, process, tool, machine, mmap_data)) { @@ -382,6 +418,7 @@ int perf_event__synthesize_thread_map(struct perf_tool *tool, /* if not, generate events for it */ if (need_leader && __event__synthesize_thread(comm_event, mmap_event, + fork_event, comm_event->comm.pid, 0, process, tool, machine, mmap_data)) { @@ -390,6 +427,8 @@ int perf_event__synthesize_thread_map(struct perf_tool *tool, } } } + free(fork_event); +out_free_mmap: free(mmap_event); out_free_comm: free(comm_event); @@ -404,9 +443,12 @@ int perf_event__synthesize_threads(struct perf_tool *tool, DIR *proc; char proc_path[PATH_MAX]; struct dirent dirent, *next; - union perf_event *comm_event, *mmap_event; + union perf_event *comm_event, *mmap_event, *fork_event; int err = -1; + if (machine__is_default_guest(machine)) + return 0; + comm_event = malloc(sizeof(comm_event->comm) + machine->id_hdr_size); if (comm_event == NULL) goto out; @@ -415,14 +457,15 @@ int perf_event__synthesize_threads(struct perf_tool *tool, if (mmap_event == NULL) goto out_free_comm; - if (machine__is_default_guest(machine)) - return 0; + fork_event = malloc(sizeof(fork_event->fork) + machine->id_hdr_size); + if (fork_event == NULL) + goto out_free_mmap; snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir); proc = opendir(proc_path); if (proc == NULL) - goto out_free_mmap; + goto out_free_fork; while (!readdir_r(proc, &dirent, &next) && next) { char *end; @@ -434,12 +477,14 @@ int perf_event__synthesize_threads(struct perf_tool *tool, * We may race with exiting thread, so don't stop just because * one thread couldn't be synthesized. 
*/ - __event__synthesize_thread(comm_event, mmap_event, pid, 1, - process, tool, machine, mmap_data); + __event__synthesize_thread(comm_event, mmap_event, fork_event, pid, + 1, process, tool, machine, mmap_data); } err = 0; closedir(proc); +out_free_fork: + free(fork_event); out_free_mmap: free(mmap_event); out_free_comm: @@ -661,7 +706,7 @@ void thread__find_addr_map(struct thread *thread, al->thread = thread; al->addr = addr; al->cpumode = cpumode; - al->filtered = false; + al->filtered = 0; if (machine == NULL) { al->map = NULL; @@ -687,11 +732,11 @@ void thread__find_addr_map(struct thread *thread, if ((cpumode == PERF_RECORD_MISC_GUEST_USER || cpumode == PERF_RECORD_MISC_GUEST_KERNEL) && !perf_guest) - al->filtered = true; + al->filtered |= (1 << HIST_FILTER__GUEST); if ((cpumode == PERF_RECORD_MISC_USER || cpumode == PERF_RECORD_MISC_KERNEL) && !perf_host) - al->filtered = true; + al->filtered |= (1 << HIST_FILTER__HOST); return; } @@ -748,9 +793,6 @@ int perf_event__preprocess_sample(const union perf_event *event, if (thread == NULL) return -1; - if (thread__is_filtered(thread)) - goto out_filtered; - dump_printf(" ... thread: %s:%d\n", thread__comm_str(thread), thread->tid); /* * Have we already created the kernel maps for this machine? @@ -768,6 +810,10 @@ int perf_event__preprocess_sample(const union perf_event *event, dump_printf(" ...... dso: %s\n", al->map ? al->map->dso->long_name : al->level == 'H' ? "[hypervisor]" : "<not found>"); + + if (thread__is_filtered(thread)) + al->filtered |= (1 << HIST_FILTER__THREAD); + al->sym = NULL; al->cpu = sample->cpu; @@ -779,8 +825,9 @@ int perf_event__preprocess_sample(const union perf_event *event, dso->short_name) || (dso->short_name != dso->long_name && strlist__has_entry(symbol_conf.dso_list, - dso->long_name))))) - goto out_filtered; + dso->long_name))))) { + al->filtered |= (1 << HIST_FILTER__DSO); + } al->sym = map__find_symbol(al->map, al->addr, machine->symbol_filter); @@ -788,12 +835,9 @@ int perf_event__preprocess_sample(const union perf_event *event, if (symbol_conf.sym_list && (!al->sym || !strlist__has_entry(symbol_conf.sym_list, - al->sym->name))) - goto out_filtered; - - return 0; + al->sym->name))) { + al->filtered |= (1 << HIST_FILTER__SYMBOL); + } -out_filtered: - al->filtered = true; return 0; } diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h index 851fa06f4a42..38457d447a13 100644 --- a/tools/perf/util/event.h +++ b/tools/perf/util/event.h @@ -85,6 +85,7 @@ struct sample_event { struct regs_dump { u64 abi; + u64 mask; u64 *regs; }; @@ -259,9 +260,9 @@ int perf_event__preprocess_sample(const union perf_event *event, const char *perf_event__name(unsigned int id); size_t perf_event__sample_event_size(const struct perf_sample *sample, u64 type, - u64 sample_regs_user, u64 read_format); + u64 read_format); int perf_event__synthesize_sample(union perf_event *event, u64 type, - u64 sample_regs_user, u64 read_format, + u64 read_format, const struct perf_sample *sample, bool swapped); diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c index 55407c594b87..5c28d82b76c4 100644 --- a/tools/perf/util/evsel.c +++ b/tools/perf/util/evsel.c @@ -500,6 +500,34 @@ int perf_evsel__group_desc(struct perf_evsel *evsel, char *buf, size_t size) return ret; } +static void +perf_evsel__config_callgraph(struct perf_evsel *evsel, + struct record_opts *opts) +{ + bool function = perf_evsel__is_function_event(evsel); + struct perf_event_attr *attr = &evsel->attr; + + perf_evsel__set_sample_bit(evsel, CALLCHAIN); + + if 
(opts->call_graph == CALLCHAIN_DWARF) { + if (!function) { + perf_evsel__set_sample_bit(evsel, REGS_USER); + perf_evsel__set_sample_bit(evsel, STACK_USER); + attr->sample_regs_user = PERF_REGS_MASK; + attr->sample_stack_user = opts->stack_dump_size; + attr->exclude_callchain_user = 1; + } else { + pr_info("Cannot use DWARF unwind for function trace event," + " falling back to framepointers.\n"); + } + } + + if (function) { + pr_info("Disabling user space callchains for function trace event.\n"); + attr->exclude_callchain_user = 1; + } +} + /* * The enable_on_exec/disabled value strategy: * @@ -595,17 +623,8 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts) attr->mmap_data = track; } - if (opts->call_graph) { - perf_evsel__set_sample_bit(evsel, CALLCHAIN); - - if (opts->call_graph == CALLCHAIN_DWARF) { - perf_evsel__set_sample_bit(evsel, REGS_USER); - perf_evsel__set_sample_bit(evsel, STACK_USER); - attr->sample_regs_user = PERF_REGS_MASK; - attr->sample_stack_user = opts->stack_dump_size; - attr->exclude_callchain_user = 1; - } - } + if (opts->call_graph_enabled) + perf_evsel__config_callgraph(evsel, opts); if (target__has_cpu(&opts->target)) perf_evsel__set_sample_bit(evsel, CPU); @@ -1004,7 +1023,7 @@ retry_sample_id: group_fd = get_group_fd(evsel, cpu, thread); retry_open: - pr_debug2("perf_event_open: pid %d cpu %d group_fd %d flags %#lx\n", + pr_debug2("sys_perf_event_open: pid %d cpu %d group_fd %d flags %#lx\n", pid, cpus->map[cpu], group_fd, flags); FD(evsel, cpu, thread) = sys_perf_event_open(&evsel->attr, @@ -1013,7 +1032,7 @@ retry_open: group_fd, flags); if (FD(evsel, cpu, thread) < 0) { err = -errno; - pr_debug2("perf_event_open failed, error %d\n", + pr_debug2("sys_perf_event_open failed, error %d\n", err); goto try_fallback; } @@ -1220,7 +1239,7 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event, memset(data, 0, sizeof(*data)); data->cpu = data->pid = data->tid = -1; data->stream_id = data->id = data->time = -1ULL; - data->period = 1; + data->period = evsel->attr.sample_period; data->weight = 0; if (event->header.type != PERF_RECORD_SAMPLE) { @@ -1396,10 +1415,11 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event, array++; if (data->user_regs.abi) { - u64 regs_user = evsel->attr.sample_regs_user; + u64 mask = evsel->attr.sample_regs_user; - sz = hweight_long(regs_user) * sizeof(u64); + sz = hweight_long(mask) * sizeof(u64); OVERFLOW_CHECK(array, sz, max_size); + data->user_regs.mask = mask; data->user_regs.regs = (u64 *)array; array = (void *)array + sz; } @@ -1451,7 +1471,7 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event, } size_t perf_event__sample_event_size(const struct perf_sample *sample, u64 type, - u64 sample_regs_user, u64 read_format) + u64 read_format) { size_t sz, result = sizeof(struct sample_event); @@ -1517,7 +1537,7 @@ size_t perf_event__sample_event_size(const struct perf_sample *sample, u64 type, if (type & PERF_SAMPLE_REGS_USER) { if (sample->user_regs.abi) { result += sizeof(u64); - sz = hweight_long(sample_regs_user) * sizeof(u64); + sz = hweight_long(sample->user_regs.mask) * sizeof(u64); result += sz; } else { result += sizeof(u64); @@ -1546,7 +1566,7 @@ size_t perf_event__sample_event_size(const struct perf_sample *sample, u64 type, } int perf_event__synthesize_sample(union perf_event *event, u64 type, - u64 sample_regs_user, u64 read_format, + u64 read_format, const struct perf_sample *sample, bool swapped) { @@ -1687,7 +1707,7 
@@ int perf_event__synthesize_sample(union perf_event *event, u64 type, if (type & PERF_SAMPLE_REGS_USER) { if (sample->user_regs.abi) { *array++ = sample->user_regs.abi; - sz = hweight_long(sample_regs_user) * sizeof(u64); + sz = hweight_long(sample->user_regs.mask) * sizeof(u64); memcpy(array, sample->user_regs.regs, sz); array = (void *)array + sz; } else { diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h index f1b325665aae..0c9926cfb292 100644 --- a/tools/perf/util/evsel.h +++ b/tools/perf/util/evsel.h @@ -315,6 +315,24 @@ static inline bool perf_evsel__is_group_event(struct perf_evsel *evsel) return perf_evsel__is_group_leader(evsel) && evsel->nr_members > 1; } +/** + * perf_evsel__is_function_event - Return whether given evsel is a function + * trace event + * + * @evsel - evsel selector to be tested + * + * Return %true if event is function trace event + */ +static inline bool perf_evsel__is_function_event(struct perf_evsel *evsel) +{ +#define FUNCTION_EVENT "ftrace:function" + + return evsel->name && + !strncmp(FUNCTION_EVENT, evsel->name, sizeof(FUNCTION_EVENT)); + +#undef FUNCTION_EVENT +} + struct perf_attr_details { bool freq; bool verbose; diff --git a/tools/perf/util/fs.h b/tools/perf/util/fs.h deleted file mode 100644 index 5e09ce1bab0e..000000000000 --- a/tools/perf/util/fs.h +++ /dev/null @@ -1,7 +0,0 @@ -#ifndef __PERF_FS -#define __PERF_FS - -const char *sysfs__mountpoint(void); -const char *procfs__mountpoint(void); - -#endif /* __PERF_FS */ diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c index e4e6249b87d4..f38590d7561b 100644 --- a/tools/perf/util/hist.c +++ b/tools/perf/util/hist.c @@ -13,13 +13,6 @@ static bool hists__filter_entry_by_thread(struct hists *hists, static bool hists__filter_entry_by_symbol(struct hists *hists, struct hist_entry *he); -enum hist_filter { - HIST_FILTER__DSO, - HIST_FILTER__THREAD, - HIST_FILTER__PARENT, - HIST_FILTER__SYMBOL, -}; - struct callchain_param callchain_param = { .mode = CHAIN_GRAPH_REL, .min_percent = 0.5, @@ -290,7 +283,7 @@ static struct hist_entry *hist_entry__new(struct hist_entry *template) if (he->branch_info) { /* * This branch info is (a part of) allocated from - * machine__resolve_bstack() and will be freed after + * sample__resolve_bstack() and will be freed after * adding new entries. So we need to save a copy. */ he->branch_info = malloc(sizeof(*he->branch_info)); @@ -369,7 +362,7 @@ static struct hist_entry *add_hist_entry(struct hists *hists, he_stat__add_period(&he->stat, period, weight); /* - * This mem info was allocated from machine__resolve_mem + * This mem info was allocated from sample__resolve_mem * and will not be used anymore. */ zfree(&entry->mem_info); @@ -429,7 +422,7 @@ struct hist_entry *__hists__add_entry(struct hists *hists, .weight = weight, }, .parent = sym_parent, - .filtered = symbol__parent_filter(sym_parent), + .filtered = symbol__parent_filter(sym_parent) | al->filtered, .hists = hists, .branch_info = bi, .mem_info = mi, diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h index a59743fa3ef7..1f1f513dfe7f 100644 --- a/tools/perf/util/hist.h +++ b/tools/perf/util/hist.h @@ -14,6 +14,15 @@ struct hist_entry; struct addr_location; struct symbol; +enum hist_filter { + HIST_FILTER__DSO, + HIST_FILTER__THREAD, + HIST_FILTER__PARENT, + HIST_FILTER__SYMBOL, + HIST_FILTER__GUEST, + HIST_FILTER__HOST, +}; + /* * The kernel collects the number of events it couldn't send in a stretch and * when possible sends this number in a PERF_RECORD_LOST event. 
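[Aside: the hunks above, together with the enum hist_filter just added to hist.h, turn al->filtered from a single boolean into a bitmask of filter reasons, so a sample records every reason it was rejected and each filter can be applied or lifted independently. A minimal standalone sketch of that pattern follows; the enum mirrors hist.h, but the rest is illustrative and not perf code.

	#include <stdio.h>

	enum hist_filter {
		HIST_FILTER__DSO,
		HIST_FILTER__THREAD,
		HIST_FILTER__PARENT,
		HIST_FILTER__SYMBOL,
		HIST_FILTER__GUEST,
		HIST_FILTER__HOST,
	};

	int main(void)
	{
		unsigned int filtered = 0;

		/* each rejected check records its own reason ... */
		filtered |= (1 << HIST_FILTER__DSO);
		filtered |= (1 << HIST_FILTER__THREAD);

		/* ... so one filter can be lifted without forgetting the rest */
		filtered &= ~(1 << HIST_FILTER__DSO);

		printf("entry %s\n", filtered ? "still filtered" : "visible");
		return 0;
	}

End of aside; the hist.h comment continues below.]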
The number of @@ -132,8 +141,10 @@ struct perf_hpp { }; struct perf_hpp_fmt { - int (*header)(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp); - int (*width)(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp); + int (*header)(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp, + struct perf_evsel *evsel); + int (*width)(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp, + struct perf_evsel *evsel); int (*color)(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp, struct hist_entry *he); int (*entry)(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp, @@ -166,6 +177,20 @@ void perf_hpp__init(void); void perf_hpp__column_register(struct perf_hpp_fmt *format); void perf_hpp__column_enable(unsigned col); +typedef u64 (*hpp_field_fn)(struct hist_entry *he); +typedef int (*hpp_callback_fn)(struct perf_hpp *hpp, bool front); +typedef int (*hpp_snprint_fn)(struct perf_hpp *hpp, const char *fmt, ...); + +int __hpp__fmt(struct perf_hpp *hpp, struct hist_entry *he, + hpp_field_fn get_field, hpp_callback_fn callback, + const char *fmt, hpp_snprint_fn print_fn, bool fmt_percent); + +static inline void advance_hpp(struct perf_hpp *hpp, int inc) +{ + hpp->buf += inc; + hpp->size -= inc; +} + static inline size_t perf_hpp__use_color(void) { return !symbol_conf.field_sep; diff --git a/tools/perf/util/include/linux/hash.h b/tools/perf/util/include/linux/hash.h deleted file mode 100644 index 201f57397997..000000000000 --- a/tools/perf/util/include/linux/hash.h +++ /dev/null @@ -1,5 +0,0 @@ -#include "../../../../include/linux/hash.h" - -#ifndef PERF_HASH_H -#define PERF_HASH_H -#endif diff --git a/tools/perf/util/include/linux/kernel.h b/tools/perf/util/include/linux/kernel.h index d8c927c868ee..9844c31b7c2b 100644 --- a/tools/perf/util/include/linux/kernel.h +++ b/tools/perf/util/include/linux/kernel.h @@ -94,12 +94,6 @@ static inline int scnprintf(char * buf, size_t size, const char * fmt, ...) return (i >= ssize) ? (ssize - 1) : i; } -static inline unsigned long -simple_strtoul(const char *nptr, char **endptr, int base) -{ - return strtoul(nptr, endptr, base); -} - int eprintf(int level, const char *fmt, ...) 
__attribute__((format(printf, 2, 3))); diff --git a/tools/perf/util/include/linux/list.h b/tools/perf/util/include/linux/list.h index 1d928a0ce997..bfe0a2afd0d2 100644 --- a/tools/perf/util/include/linux/list.h +++ b/tools/perf/util/include/linux/list.h @@ -1,5 +1,4 @@ #include <linux/kernel.h> -#include <linux/prefetch.h> #include "../../../../include/linux/list.h" diff --git a/tools/perf/util/include/linux/prefetch.h b/tools/perf/util/include/linux/prefetch.h deleted file mode 100644 index 7841e485d8c3..000000000000 --- a/tools/perf/util/include/linux/prefetch.h +++ /dev/null @@ -1,6 +0,0 @@ -#ifndef PERF_LINUX_PREFETCH_H -#define PERF_LINUX_PREFETCH_H - -static inline void prefetch(void *a __attribute__((unused))) { } - -#endif diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index c872991e0f65..a53cd0b8c151 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -327,9 +327,10 @@ struct thread *machine__findnew_thread(struct machine *machine, pid_t pid, return __machine__findnew_thread(machine, pid, tid, true); } -struct thread *machine__find_thread(struct machine *machine, pid_t tid) +struct thread *machine__find_thread(struct machine *machine, pid_t pid, + pid_t tid) { - return __machine__findnew_thread(machine, 0, tid, false); + return __machine__findnew_thread(machine, pid, tid, false); } int machine__process_comm_event(struct machine *machine, union perf_event *event, @@ -1026,7 +1027,7 @@ int machine__process_mmap2_event(struct machine *machine, } thread = machine__findnew_thread(machine, event->mmap2.pid, - event->mmap2.pid); + event->mmap2.tid); if (thread == NULL) goto out_problem; @@ -1074,7 +1075,7 @@ int machine__process_mmap_event(struct machine *machine, union perf_event *event } thread = machine__findnew_thread(machine, event->mmap.pid, - event->mmap.pid); + event->mmap.tid); if (thread == NULL) goto out_problem; @@ -1114,7 +1115,9 @@ static void machine__remove_thread(struct machine *machine, struct thread *th) int machine__process_fork_event(struct machine *machine, union perf_event *event, struct perf_sample *sample) { - struct thread *thread = machine__find_thread(machine, event->fork.tid); + struct thread *thread = machine__find_thread(machine, + event->fork.pid, + event->fork.tid); struct thread *parent = machine__findnew_thread(machine, event->fork.ppid, event->fork.ptid); @@ -1140,7 +1143,9 @@ int machine__process_fork_event(struct machine *machine, union perf_event *event int machine__process_exit_event(struct machine *machine, union perf_event *event, struct perf_sample *sample __maybe_unused) { - struct thread *thread = machine__find_thread(machine, event->fork.tid); + struct thread *thread = machine__find_thread(machine, + event->fork.pid, + event->fork.tid); if (dump_trace) perf_event__fprintf_task(event, stdout); @@ -1184,39 +1189,22 @@ static bool symbol__match_regex(struct symbol *sym, regex_t *regex) return 0; } -static const u8 cpumodes[] = { - PERF_RECORD_MISC_USER, - PERF_RECORD_MISC_KERNEL, - PERF_RECORD_MISC_GUEST_USER, - PERF_RECORD_MISC_GUEST_KERNEL -}; -#define NCPUMODES (sizeof(cpumodes)/sizeof(u8)) - static void ip__resolve_ams(struct machine *machine, struct thread *thread, struct addr_map_symbol *ams, u64 ip) { struct addr_location al; - size_t i; - u8 m; memset(&al, 0, sizeof(al)); + /* + * We cannot use the header.misc hint to determine whether a + * branch stack address is user, kernel, guest, hypervisor. + * Branches may straddle the kernel/user/hypervisor boundaries. 
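[Aside: the comment being added in ip__resolve_ams() motivates the new thread__find_cpumode_addr_location() helper: header.misc cannot classify a branch-stack address, so the resolver simply tries every cpumode in turn until one yields a symbol, exactly as the deleted cpumodes[] loop below did. A self-contained sketch of that fallback, where resolve() stands in for thread__find_addr_location() and the symbol name and address threshold are made up:

	#include <stdio.h>
	#include <stddef.h>

	enum cpumode { MODE_USER, MODE_KERNEL, MODE_GUEST_USER, MODE_GUEST_KERNEL };

	/* stub resolver: pretend only high kernel addresses resolve */
	static const char *resolve(enum cpumode m, unsigned long ip)
	{
		return (m == MODE_KERNEL && ip >= 0xc0000000UL) ? "schedule" : NULL;
	}

	static const char *find_any_cpumode(unsigned long ip)
	{
		static const enum cpumode modes[] = {
			MODE_USER, MODE_KERNEL, MODE_GUEST_USER, MODE_GUEST_KERNEL,
		};
		size_t i;

		for (i = 0; i < sizeof(modes) / sizeof(modes[0]); i++) {
			const char *sym = resolve(modes[i], ip);
			if (sym)
				return sym;	/* first cpumode that matches wins */
		}
		return NULL;		/* symbol stays unknown */
	}

	int main(void)
	{
		const char *sym = find_any_cpumode(0xc0123456UL);

		printf("%s\n", sym ? sym : "[unknown]");
		return 0;
	}

End of aside; the added comment continues below.]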
+ * Thus, we have to try consecutively until we find a match + * or else, the symbol is unknown + */ + thread__find_cpumode_addr_location(thread, machine, MAP__FUNCTION, ip, &al); - for (i = 0; i < NCPUMODES; i++) { - m = cpumodes[i]; - /* - * We cannot use the header.misc hint to determine whether a - * branch stack address is user, kernel, guest, hypervisor. - * Branches may straddle the kernel/user/hypervisor boundaries. - * Thus, we have to try consecutively until we find a match - * or else, the symbol is unknown - */ - thread__find_addr_location(thread, machine, m, MAP__FUNCTION, - ip, &al); - if (al.sym) - goto found; - } -found: ams->addr = ip; ams->al_addr = al.addr; ams->sym = al.sym; @@ -1238,37 +1226,35 @@ static void ip__resolve_data(struct machine *machine, struct thread *thread, ams->map = al.map; } -struct mem_info *machine__resolve_mem(struct machine *machine, - struct thread *thr, - struct perf_sample *sample, - u8 cpumode) +struct mem_info *sample__resolve_mem(struct perf_sample *sample, + struct addr_location *al) { struct mem_info *mi = zalloc(sizeof(*mi)); if (!mi) return NULL; - ip__resolve_ams(machine, thr, &mi->iaddr, sample->ip); - ip__resolve_data(machine, thr, cpumode, &mi->daddr, sample->addr); + ip__resolve_ams(al->machine, al->thread, &mi->iaddr, sample->ip); + ip__resolve_data(al->machine, al->thread, al->cpumode, + &mi->daddr, sample->addr); mi->data_src.val = sample->data_src; return mi; } -struct branch_info *machine__resolve_bstack(struct machine *machine, - struct thread *thr, - struct branch_stack *bs) +struct branch_info *sample__resolve_bstack(struct perf_sample *sample, + struct addr_location *al) { - struct branch_info *bi; unsigned int i; + const struct branch_stack *bs = sample->branch_stack; + struct branch_info *bi = calloc(bs->nr, sizeof(struct branch_info)); - bi = calloc(bs->nr, sizeof(struct branch_info)); if (!bi) return NULL; for (i = 0; i < bs->nr; i++) { - ip__resolve_ams(machine, thr, &bi[i].to, bs->entries[i].to); - ip__resolve_ams(machine, thr, &bi[i].from, bs->entries[i].from); + ip__resolve_ams(al->machine, al->thread, &bi[i].to, bs->entries[i].to); + ip__resolve_ams(al->machine, al->thread, &bi[i].from, bs->entries[i].from); bi[i].flags = bs->entries[i].flags; } return bi; @@ -1326,7 +1312,7 @@ static int machine__resolve_callchain_sample(struct machine *machine, continue; } - al.filtered = false; + al.filtered = 0; thread__find_addr_location(thread, machine, cpumode, MAP__FUNCTION, ip, &al); if (al.sym != NULL) { @@ -1385,8 +1371,7 @@ int machine__resolve_callchain(struct machine *machine, return 0; return unwind__get_entries(unwind_entry, &callchain_cursor, machine, - thread, evsel->attr.sample_regs_user, - sample, max_stack); + thread, sample, max_stack); } diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h index f77e91e483dc..c8c74a119398 100644 --- a/tools/perf/util/machine.h +++ b/tools/perf/util/machine.h @@ -41,7 +41,8 @@ struct map *machine__kernel_map(struct machine *machine, enum map_type type) return machine->vmlinux_maps[type]; } -struct thread *machine__find_thread(struct machine *machine, pid_t tid); +struct thread *machine__find_thread(struct machine *machine, pid_t pid, + pid_t tid); int machine__process_comm_event(struct machine *machine, union perf_event *event, struct perf_sample *sample); @@ -91,12 +92,10 @@ void machine__delete_dead_threads(struct machine *machine); void machine__delete_threads(struct machine *machine); void machine__delete(struct machine *machine); -struct branch_info 
*machine__resolve_bstack(struct machine *machine, - struct thread *thread, - struct branch_stack *bs); -struct mem_info *machine__resolve_mem(struct machine *machine, - struct thread *thread, - struct perf_sample *sample, u8 cpumode); +struct branch_info *sample__resolve_bstack(struct perf_sample *sample, + struct addr_location *al); +struct mem_info *sample__resolve_mem(struct perf_sample *sample, + struct addr_location *al); int machine__resolve_callchain(struct machine *machine, struct perf_evsel *evsel, struct thread *thread, diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h index 257e513205ce..f00f058afb3b 100644 --- a/tools/perf/util/map.h +++ b/tools/perf/util/map.h @@ -90,6 +90,16 @@ u64 map__objdump_2mem(struct map *map, u64 ip); struct symbol; +/* map__for_each_symbol - iterate over the symbols in the given map + * + * @map: the 'struct map *' in which symbols itereated + * @pos: the 'struct symbol *' to use as a loop cursor + * @n: the 'struct rb_node *' to use as a temporary storage + * Note: caller must ensure map->dso is not NULL (map is loaded). + */ +#define map__for_each_symbol(map, pos, n) \ + dso__for_each_symbol(map->dso, pos, n, map->type) + typedef int (*symbol_filter_t)(struct map *map, struct symbol *sym); void map__init(struct map *map, enum map_type type, diff --git a/tools/perf/util/parse-options.c b/tools/perf/util/parse-options.c index d22e3f8017dc..bf48092983c6 100644 --- a/tools/perf/util/parse-options.c +++ b/tools/perf/util/parse-options.c @@ -407,7 +407,9 @@ int parse_options_step(struct parse_opt_ctx_t *ctx, if (internal_help && !strcmp(arg + 2, "help")) return usage_with_options_internal(usagestr, options, 0); if (!strcmp(arg + 2, "list-opts")) - return PARSE_OPT_LIST; + return PARSE_OPT_LIST_OPTS; + if (!strcmp(arg + 2, "list-cmds")) + return PARSE_OPT_LIST_SUBCMDS; switch (parse_long_opt(ctx, arg + 2, options)) { case -1: return parse_options_usage(usagestr, options, arg + 2, 0); @@ -433,25 +435,45 @@ int parse_options_end(struct parse_opt_ctx_t *ctx) return ctx->cpidx + ctx->argc; } -int parse_options(int argc, const char **argv, const struct option *options, - const char * const usagestr[], int flags) +int parse_options_subcommand(int argc, const char **argv, const struct option *options, + const char *const subcommands[], const char *usagestr[], int flags) { struct parse_opt_ctx_t ctx; perf_header__set_cmdline(argc, argv); + /* build usage string if it's not provided */ + if (subcommands && !usagestr[0]) { + struct strbuf buf = STRBUF_INIT; + + strbuf_addf(&buf, "perf %s [<options>] {", argv[0]); + for (int i = 0; subcommands[i]; i++) { + if (i) + strbuf_addstr(&buf, "|"); + strbuf_addstr(&buf, subcommands[i]); + } + strbuf_addstr(&buf, "}"); + + usagestr[0] = strdup(buf.buf); + strbuf_release(&buf); + } + parse_options_start(&ctx, argc, argv, flags); switch (parse_options_step(&ctx, options, usagestr)) { case PARSE_OPT_HELP: exit(129); case PARSE_OPT_DONE: break; - case PARSE_OPT_LIST: + case PARSE_OPT_LIST_OPTS: while (options->type != OPTION_END) { printf("--%s ", options->long_name); options++; } exit(130); + case PARSE_OPT_LIST_SUBCMDS: + for (int i = 0; subcommands[i]; i++) + printf("%s ", subcommands[i]); + exit(130); default: /* PARSE_OPT_UNKNOWN */ if (ctx.argv[0][1] == '-') { error("unknown option `%s'", ctx.argv[0] + 2); @@ -464,6 +486,13 @@ int parse_options(int argc, const char **argv, const struct option *options, return parse_options_end(&ctx); } +int parse_options(int argc, const char **argv, const struct option *options, + 
const char * const usagestr[], int flags) +{ + return parse_options_subcommand(argc, argv, options, NULL, + (const char **) usagestr, flags); +} + #define USAGE_OPTS_WIDTH 24 #define USAGE_GAP 2 diff --git a/tools/perf/util/parse-options.h b/tools/perf/util/parse-options.h index cbf0149cf221..d8dac8ac5f37 100644 --- a/tools/perf/util/parse-options.h +++ b/tools/perf/util/parse-options.h @@ -140,6 +140,11 @@ extern int parse_options(int argc, const char **argv, const struct option *options, const char * const usagestr[], int flags); +extern int parse_options_subcommand(int argc, const char **argv, + const struct option *options, + const char *const subcommands[], + const char *usagestr[], int flags); + extern NORETURN void usage_with_options(const char * const *usagestr, const struct option *options); @@ -148,7 +153,8 @@ extern NORETURN void usage_with_options(const char * const *usagestr, enum { PARSE_OPT_HELP = -1, PARSE_OPT_DONE, - PARSE_OPT_LIST, + PARSE_OPT_LIST_OPTS, + PARSE_OPT_LIST_SUBCMDS, PARSE_OPT_UNKNOWN, }; diff --git a/tools/perf/util/perf_regs.c b/tools/perf/util/perf_regs.c new file mode 100644 index 000000000000..a3539ef30b15 --- /dev/null +++ b/tools/perf/util/perf_regs.c @@ -0,0 +1,19 @@ +#include <errno.h> +#include "perf_regs.h" + +int perf_reg_value(u64 *valp, struct regs_dump *regs, int id) +{ + int i, idx = 0; + u64 mask = regs->mask; + + if (!(mask & (1 << id))) + return -EINVAL; + + for (i = 0; i < id; i++) { + if (mask & (1 << i)) + idx++; + } + + *valp = regs->regs[idx]; + return 0; +} diff --git a/tools/perf/util/perf_regs.h b/tools/perf/util/perf_regs.h index a3d42cd74919..d6e8b6a8d7f3 100644 --- a/tools/perf/util/perf_regs.h +++ b/tools/perf/util/perf_regs.h @@ -1,8 +1,14 @@ #ifndef __PERF_REGS_H #define __PERF_REGS_H +#include "types.h" +#include "event.h" + #ifdef HAVE_PERF_REGS_SUPPORT #include <perf_regs.h> + +int perf_reg_value(u64 *valp, struct regs_dump *regs, int id); + #else #define PERF_REGS_MASK 0 @@ -10,5 +16,12 @@ static inline const char *perf_reg_name(int id __maybe_unused) { return NULL; } + +static inline int perf_reg_value(u64 *valp __maybe_unused, + struct regs_dump *regs __maybe_unused, + int id __maybe_unused) +{ + return 0; +} #endif /* HAVE_PERF_REGS_SUPPORT */ #endif /* __PERF_REGS_H */ diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c index b752ecb40d86..00a7dcb2f55c 100644 --- a/tools/perf/util/pmu.c +++ b/tools/perf/util/pmu.c @@ -3,7 +3,7 @@ #include <unistd.h> #include <stdio.h> #include <dirent.h> -#include "fs.h" +#include <api/fs/fs.h> #include <locale.h> #include "util.h" #include "pmu.h" diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c index d8b048c20cde..0d1542f33d87 100644 --- a/tools/perf/util/probe-event.c +++ b/tools/perf/util/probe-event.c @@ -70,34 +70,32 @@ static int e_snprintf(char *str, size_t size, const char *format, ...) 
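[Aside: the new perf_regs.c above recovers one register from a sparse dump. regs->regs[] stores only the registers whose bits are set in mask, so the array index of register id is the number of set mask bits below it. A standalone sketch of that lookup; it uses 1ULL so the shift also works for ids past bit 30, whereas the hunk above shifts a plain int.

	#include <errno.h>
	#include <stdio.h>
	#include <stdint.h>

	struct regs_dump { uint64_t mask; uint64_t *regs; };

	static int reg_value(uint64_t *valp, struct regs_dump *regs, int id)
	{
		int i, idx = 0;

		if (!(regs->mask & (1ULL << id)))
			return -EINVAL;		/* register was not sampled */

		for (i = 0; i < id; i++)
			if (regs->mask & (1ULL << i))
				idx++;		/* sampled registers below id */

		*valp = regs->regs[idx];
		return 0;
	}

	int main(void)
	{
		uint64_t vals[] = { 0x10, 0x20, 0x30 };	/* registers 1, 4 and 5 */
		struct regs_dump rd = {
			.mask = (1ULL << 1) | (1ULL << 4) | (1ULL << 5),
			.regs = vals,
		};
		uint64_t v;

		if (!reg_value(&v, &rd, 4))
			printf("reg 4 = %#llx\n", (unsigned long long)v);
		return 0;
	}

End of aside; the probe-event.c hunk continues below.]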
} static char *synthesize_perf_probe_point(struct perf_probe_point *pp); -static int convert_name_to_addr(struct perf_probe_event *pev, - const char *exec); static void clear_probe_trace_event(struct probe_trace_event *tev); -static struct machine machine; +static struct machine *host_machine; /* Initialize symbol maps and path of vmlinux/modules */ -static int init_vmlinux(void) +static int init_symbol_maps(bool user_only) { int ret; symbol_conf.sort_by_name = true; - if (symbol_conf.vmlinux_name == NULL) - symbol_conf.try_vmlinux_path = true; - else - pr_debug("Use vmlinux: %s\n", symbol_conf.vmlinux_name); ret = symbol__init(); if (ret < 0) { pr_debug("Failed to init symbol map.\n"); goto out; } - ret = machine__init(&machine, "", HOST_KERNEL_ID); - if (ret < 0) - goto out; + if (host_machine || user_only) /* already initialized */ + return 0; - if (machine__create_kernel_maps(&machine) < 0) { - pr_debug("machine__create_kernel_maps() failed.\n"); - goto out; + if (symbol_conf.vmlinux_name) + pr_debug("Use vmlinux: %s\n", symbol_conf.vmlinux_name); + + host_machine = machine__new_host(); + if (!host_machine) { + pr_debug("machine__new_host() failed.\n"); + symbol__exit(); + ret = -1; } out: if (ret < 0) @@ -105,21 +103,66 @@ out: return ret; } +static void exit_symbol_maps(void) +{ + if (host_machine) { + machine__delete(host_machine); + host_machine = NULL; + } + symbol__exit(); +} + static struct symbol *__find_kernel_function_by_name(const char *name, struct map **mapp) { - return machine__find_kernel_function_by_name(&machine, name, mapp, + return machine__find_kernel_function_by_name(host_machine, name, mapp, NULL); } +static struct symbol *__find_kernel_function(u64 addr, struct map **mapp) +{ + return machine__find_kernel_function(host_machine, addr, mapp, NULL); +} + +static struct ref_reloc_sym *kernel_get_ref_reloc_sym(void) +{ + /* kmap->ref_reloc_sym should be set if host_machine is initialized */ + struct kmap *kmap; + + if (map__load(host_machine->vmlinux_maps[MAP__FUNCTION], NULL) < 0) + return NULL; + + kmap = map__kmap(host_machine->vmlinux_maps[MAP__FUNCTION]); + return kmap->ref_reloc_sym; +} + +static u64 kernel_get_symbol_address_by_name(const char *name, bool reloc) +{ + struct ref_reloc_sym *reloc_sym; + struct symbol *sym; + struct map *map; + + /* ref_reloc_sym is just a label. Need a special fix*/ + reloc_sym = kernel_get_ref_reloc_sym(); + if (reloc_sym && strcmp(name, reloc_sym->name) == 0) + return (reloc) ? reloc_sym->addr : reloc_sym->unrelocated_addr; + else { + sym = __find_kernel_function_by_name(name, &map); + if (sym) + return map->unmap_ip(map, sym->start) - + (reloc) ? 
0 : map->reloc; + } + return 0; +} + static struct map *kernel_get_module_map(const char *module) { struct rb_node *nd; - struct map_groups *grp = &machine.kmaps; + struct map_groups *grp = &host_machine->kmaps; /* A file path -- this is an offline module */ if (module && strchr(module, '/')) - return machine__new_module(&machine, 0, module); + return machine__new_module(host_machine, 0, module); if (!module) module = "kernel"; @@ -141,7 +184,7 @@ static struct dso *kernel_get_module_dso(const char *module) const char *vmlinux_name; if (module) { - list_for_each_entry(dso, &machine.kernel_dsos, node) { + list_for_each_entry(dso, &host_machine->kernel_dsos, node) { if (strncmp(dso->short_name + 1, module, dso->short_name_len - 2) == 0) goto found; @@ -150,7 +193,7 @@ static struct dso *kernel_get_module_dso(const char *module) return NULL; } - map = machine.vmlinux_maps[MAP__FUNCTION]; + map = host_machine->vmlinux_maps[MAP__FUNCTION]; dso = map->dso; vmlinux_name = symbol_conf.vmlinux_name; @@ -173,20 +216,6 @@ const char *kernel_get_module_path(const char *module) return (dso) ? dso->long_name : NULL; } -static int init_user_exec(void) -{ - int ret = 0; - - symbol_conf.try_vmlinux_path = false; - symbol_conf.sort_by_name = true; - ret = symbol__init(); - - if (ret < 0) - pr_debug("Failed to init symbol map.\n"); - - return ret; -} - static int convert_exec_to_group(const char *exec, char **result) { char *ptr1, *ptr2, *exec_copy; @@ -218,32 +247,23 @@ out: return ret; } -static int convert_to_perf_probe_point(struct probe_trace_point *tp, - struct perf_probe_point *pp) +static void clear_probe_trace_events(struct probe_trace_event *tevs, int ntevs) { - pp->function = strdup(tp->symbol); - - if (pp->function == NULL) - return -ENOMEM; - - pp->offset = tp->offset; - pp->retprobe = tp->retprobe; + int i; - return 0; + for (i = 0; i < ntevs; i++) + clear_probe_trace_event(tevs + i); } #ifdef HAVE_DWARF_SUPPORT + /* Open new debuginfo of given module */ static struct debuginfo *open_debuginfo(const char *module) { - const char *path; + const char *path = module; - /* A file path -- this is an offline module */ - if (module && strchr(module, '/')) - path = module; - else { + if (!module || !strchr(module, '/')) { path = kernel_get_module_path(module); - if (!path) { pr_err("Failed to find path of %s module.\n", module ?: "kernel"); @@ -253,46 +273,6 @@ static struct debuginfo *open_debuginfo(const char *module) return debuginfo__new(path); } -/* - * Convert trace point to probe point with debuginfo - * Currently only handles kprobes. - */ -static int kprobe_convert_to_perf_probe(struct probe_trace_point *tp, - struct perf_probe_point *pp) -{ - struct symbol *sym; - struct map *map; - u64 addr; - int ret = -ENOENT; - struct debuginfo *dinfo; - - sym = __find_kernel_function_by_name(tp->symbol, &map); - if (sym) { - addr = map->unmap_ip(map, sym->start + tp->offset); - pr_debug("try to find %s+%ld@%" PRIx64 "\n", tp->symbol, - tp->offset, addr); - - dinfo = debuginfo__new_online_kernel(addr); - if (dinfo) { - ret = debuginfo__find_probe_point(dinfo, - (unsigned long)addr, pp); - debuginfo__delete(dinfo); - } else { - pr_debug("Failed to open debuginfo at 0x%" PRIx64 "\n", - addr); - ret = -ENOENT; - } - } - if (ret <= 0) { - pr_debug("Failed to find corresponding probes from " - "debuginfo. 
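[Aside: one thing worth noting in kernel_get_symbol_address_by_name() above: `?:` binds looser than `-`, so `map->unmap_ip(map, sym->start) - (reloc) ? 0 : map->reloc` evaluates the subtraction first and then uses the result as the condition. The parenthesized form below is presumably the intended arithmetic. A tiny self-contained illustration, with made-up addresses:

	#include <stdio.h>
	#include <stdbool.h>

	int main(void)
	{
		unsigned long unmapped = 0xffffffff810a1b20UL;	/* map->unmap_ip(...) */
		unsigned long map_reloc = 0x200000;		/* map->reloc */
		bool reloc = false;

		/* as written above: parsed as (unmapped - reloc) ? 0 : map_reloc */
		unsigned long as_parsed = unmapped - reloc ? 0 : map_reloc;
		/* presumably intended: drop the relocation only when !reloc */
		unsigned long intended = unmapped - (reloc ? 0 : map_reloc);

		printf("as parsed: %#lx, intended: %#lx\n", as_parsed, intended);
		return 0;
	}

End of aside; the deleted hunk continues below.]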
Use kprobe event information.\n"); - return convert_to_perf_probe_point(tp, pp); - } - pp->retprobe = tp->retprobe; - - return 0; -} - static int get_text_start_address(const char *exec, unsigned long *address) { Elf *elf; @@ -321,12 +301,62 @@ out: return ret; } +/* + * Convert trace point to probe point with debuginfo + */ +static int find_perf_probe_point_from_dwarf(struct probe_trace_point *tp, + struct perf_probe_point *pp, + bool is_kprobe) +{ + struct debuginfo *dinfo = NULL; + unsigned long stext = 0; + u64 addr = tp->address; + int ret = -ENOENT; + + /* convert the address to dwarf address */ + if (!is_kprobe) { + if (!addr) { + ret = -EINVAL; + goto error; + } + ret = get_text_start_address(tp->module, &stext); + if (ret < 0) + goto error; + addr += stext; + } else { + addr = kernel_get_symbol_address_by_name(tp->symbol, false); + if (addr == 0) + goto error; + addr += tp->offset; + } + + pr_debug("try to find information at %" PRIx64 " in %s\n", addr, + tp->module ? : "kernel"); + + dinfo = open_debuginfo(tp->module); + if (dinfo) { + ret = debuginfo__find_probe_point(dinfo, + (unsigned long)addr, pp); + debuginfo__delete(dinfo); + } else { + pr_debug("Failed to open debuginfo at 0x%" PRIx64 "\n", addr); + ret = -ENOENT; + } + + if (ret > 0) { + pp->retprobe = tp->retprobe; + return 0; + } +error: + pr_debug("Failed to find corresponding probes from debuginfo.\n"); + return ret ? : -ENOENT; +} + static int add_exec_to_probe_trace_events(struct probe_trace_event *tevs, int ntevs, const char *exec) { int i, ret = 0; - unsigned long offset, stext = 0; - char buf[32]; + unsigned long stext = 0; if (!exec) return 0; @@ -337,15 +367,9 @@ static int add_exec_to_probe_trace_events(struct probe_trace_event *tevs, for (i = 0; i < ntevs && ret >= 0; i++) { /* point.address is the addres of point.symbol + point.offset */ - offset = tevs[i].point.address - stext; - tevs[i].point.offset = 0; - zfree(&tevs[i].point.symbol); - ret = e_snprintf(buf, 32, "0x%lx", offset); - if (ret < 0) - break; + tevs[i].point.address -= stext; tevs[i].point.module = strdup(exec); - tevs[i].point.symbol = strdup(buf); - if (!tevs[i].point.symbol || !tevs[i].point.module) { + if (!tevs[i].point.module) { ret = -ENOMEM; break; } @@ -388,12 +412,40 @@ static int add_module_to_probe_trace_events(struct probe_trace_event *tevs, return ret; } -static void clear_probe_trace_events(struct probe_trace_event *tevs, int ntevs) +/* Post processing the probe events */ +static int post_process_probe_trace_events(struct probe_trace_event *tevs, + int ntevs, const char *module, + bool uprobe) { + struct ref_reloc_sym *reloc_sym; + char *tmp; int i; - for (i = 0; i < ntevs; i++) - clear_probe_trace_event(tevs + i); + if (uprobe) + return add_exec_to_probe_trace_events(tevs, ntevs, module); + + /* Note that currently ref_reloc_sym based probe is not for drivers */ + if (module) + return add_module_to_probe_trace_events(tevs, ntevs, module); + + reloc_sym = kernel_get_ref_reloc_sym(); + if (!reloc_sym) { + pr_warning("Relocated base symbol is not found!\n"); + return -EINVAL; + } + + for (i = 0; i < ntevs; i++) { + if (tevs[i].point.address) { + tmp = strdup(reloc_sym->name); + if (!tmp) + return -ENOMEM; + free(tevs[i].point.symbol); + tevs[i].point.symbol = tmp; + tevs[i].point.offset = tevs[i].point.address - + reloc_sym->unrelocated_addr; + } + } + return 0; } /* Try to find perf_probe_event with debuginfo */ @@ -416,21 +468,16 @@ static int try_to_find_probe_trace_events(struct perf_probe_event *pev, return 0; } + 
pr_debug("Try to find probe point from debuginfo.\n"); /* Searching trace events corresponding to a probe event */ ntevs = debuginfo__find_trace_events(dinfo, pev, tevs, max_tevs); debuginfo__delete(dinfo); if (ntevs > 0) { /* Succeeded to find trace events */ - pr_debug("find %d probe_trace_events.\n", ntevs); - if (target) { - if (pev->uprobes) - ret = add_exec_to_probe_trace_events(*tevs, - ntevs, target); - else - ret = add_module_to_probe_trace_events(*tevs, - ntevs, target); - } + pr_debug("Found %d probe_trace_events.\n", ntevs); + ret = post_process_probe_trace_events(*tevs, ntevs, + target, pev->uprobes); if (ret < 0) { clear_probe_trace_events(*tevs, ntevs); zfree(tevs); @@ -563,20 +610,16 @@ static int _show_one_line(FILE *fp, int l, bool skip, bool show_num) * Show line-range always requires debuginfo to find source file and * line number. */ -int show_line_range(struct line_range *lr, const char *module) +static int __show_line_range(struct line_range *lr, const char *module) { int l = 1; - struct line_node *ln; + struct int_node *ln; struct debuginfo *dinfo; FILE *fp; int ret; char *tmp; /* Search a line range */ - ret = init_vmlinux(); - if (ret < 0) - return ret; - dinfo = open_debuginfo(module); if (!dinfo) { pr_warning("Failed to open debuginfo file.\n"); @@ -623,8 +666,8 @@ int show_line_range(struct line_range *lr, const char *module) goto end; } - list_for_each_entry(ln, &lr->line_list, list) { - for (; ln->line > l; l++) { + intlist__for_each(ln, lr->line_list) { + for (; ln->i > l; l++) { ret = show_one_line(fp, l - lr->offset); if (ret < 0) goto end; @@ -646,6 +689,19 @@ end: return ret; } +int show_line_range(struct line_range *lr, const char *module) +{ + int ret; + + ret = init_symbol_maps(false); + if (ret < 0) + return ret; + ret = __show_line_range(lr, module); + exit_symbol_maps(); + + return ret; +} + static int show_available_vars_at(struct debuginfo *dinfo, struct perf_probe_event *pev, int max_vls, struct strfilter *_filter, @@ -707,14 +763,15 @@ int show_available_vars(struct perf_probe_event *pevs, int npevs, int i, ret = 0; struct debuginfo *dinfo; - ret = init_vmlinux(); + ret = init_symbol_maps(false); if (ret < 0) return ret; dinfo = open_debuginfo(module); if (!dinfo) { pr_warning("Failed to open debuginfo file.\n"); - return -ENOENT; + ret = -ENOENT; + goto out; } setup_pager(); @@ -724,23 +781,19 @@ int show_available_vars(struct perf_probe_event *pevs, int npevs, externs); debuginfo__delete(dinfo); +out: + exit_symbol_maps(); return ret; } #else /* !HAVE_DWARF_SUPPORT */ -static int kprobe_convert_to_perf_probe(struct probe_trace_point *tp, - struct perf_probe_point *pp) +static int +find_perf_probe_point_from_dwarf(struct probe_trace_point *tp __maybe_unused, + struct perf_probe_point *pp __maybe_unused, + bool is_kprobe __maybe_unused) { - struct symbol *sym; - - sym = __find_kernel_function_by_name(tp->symbol, NULL); - if (!sym) { - pr_err("Failed to find symbol %s in kernel.\n", tp->symbol); - return -ENOENT; - } - - return convert_to_perf_probe_point(tp, pp); + return -ENOSYS; } static int try_to_find_probe_trace_events(struct perf_probe_event *pev, @@ -776,24 +829,22 @@ int show_available_vars(struct perf_probe_event *pevs __maybe_unused, void line_range__clear(struct line_range *lr) { - struct line_node *ln; - free(lr->function); free(lr->file); free(lr->path); free(lr->comp_dir); - while (!list_empty(&lr->line_list)) { - ln = list_first_entry(&lr->line_list, struct line_node, list); - list_del(&ln->list); - free(ln); - } + 
intlist__delete(lr->line_list); memset(lr, 0, sizeof(*lr)); } -void line_range__init(struct line_range *lr) +int line_range__init(struct line_range *lr) { memset(lr, 0, sizeof(*lr)); - INIT_LIST_HEAD(&lr->line_list); + lr->line_list = intlist__new(NULL); + if (!lr->line_list) + return -ENOMEM; + else + return 0; } static int parse_line_num(char **ptr, int *val, const char *what) @@ -1267,16 +1318,21 @@ static int parse_probe_trace_command(const char *cmd, } else p = argv[1]; fmt1_str = strtok_r(p, "+", &fmt); - tp->symbol = strdup(fmt1_str); - if (tp->symbol == NULL) { - ret = -ENOMEM; - goto out; + if (fmt1_str[0] == '0') /* only the address started with 0x */ + tp->address = strtoul(fmt1_str, NULL, 0); + else { + /* Only the symbol-based probe has offset */ + tp->symbol = strdup(fmt1_str); + if (tp->symbol == NULL) { + ret = -ENOMEM; + goto out; + } + fmt2_str = strtok_r(NULL, "", &fmt); + if (fmt2_str == NULL) + tp->offset = 0; + else + tp->offset = strtoul(fmt2_str, NULL, 10); } - fmt2_str = strtok_r(NULL, "", &fmt); - if (fmt2_str == NULL) - tp->offset = 0; - else - tp->offset = strtoul(fmt2_str, NULL, 10); tev->nargs = argc - 2; tev->args = zalloc(sizeof(struct probe_trace_arg) * tev->nargs); @@ -1518,20 +1574,27 @@ char *synthesize_probe_trace_command(struct probe_trace_event *tev) if (buf == NULL) return NULL; + len = e_snprintf(buf, MAX_CMDLEN, "%c:%s/%s ", tp->retprobe ? 'r' : 'p', + tev->group, tev->event); + if (len <= 0) + goto error; + + /* Uprobes must have tp->address and tp->module */ + if (tev->uprobes && (!tp->address || !tp->module)) + goto error; + + /* Use the tp->address for uprobes */ if (tev->uprobes) - len = e_snprintf(buf, MAX_CMDLEN, "%c:%s/%s %s:%s", - tp->retprobe ? 'r' : 'p', - tev->group, tev->event, - tp->module, tp->symbol); + ret = e_snprintf(buf + len, MAX_CMDLEN - len, "%s:0x%lx", + tp->module, tp->address); else - len = e_snprintf(buf, MAX_CMDLEN, "%c:%s/%s %s%s%s+%lu", - tp->retprobe ? 'r' : 'p', - tev->group, tev->event, + ret = e_snprintf(buf + len, MAX_CMDLEN - len, "%s%s%s+%lu", tp->module ?: "", tp->module ? ":" : "", tp->symbol, tp->offset); - if (len <= 0) + if (ret <= 0) goto error; + len += ret; for (i = 0; i < tev->nargs; i++) { ret = synthesize_probe_trace_arg(&tev->args[i], buf + len, @@ -1547,6 +1610,79 @@ error: return NULL; } +static int find_perf_probe_point_from_map(struct probe_trace_point *tp, + struct perf_probe_point *pp, + bool is_kprobe) +{ + struct symbol *sym = NULL; + struct map *map; + u64 addr; + int ret = -ENOENT; + + if (!is_kprobe) { + map = dso__new_map(tp->module); + if (!map) + goto out; + addr = tp->address; + sym = map__find_symbol(map, addr, NULL); + } else { + addr = kernel_get_symbol_address_by_name(tp->symbol, true); + if (addr) { + addr += tp->offset; + sym = __find_kernel_function(addr, &map); + } + } + if (!sym) + goto out; + + pp->retprobe = tp->retprobe; + pp->offset = addr - map->unmap_ip(map, sym->start); + pp->function = strdup(sym->name); + ret = pp->function ? 
0 : -ENOMEM; + +out: + if (map && !is_kprobe) { + dso__delete(map->dso); + map__delete(map); + } + + return ret; +} + +static int convert_to_perf_probe_point(struct probe_trace_point *tp, + struct perf_probe_point *pp, + bool is_kprobe) +{ + char buf[128]; + int ret; + + ret = find_perf_probe_point_from_dwarf(tp, pp, is_kprobe); + if (!ret) + return 0; + ret = find_perf_probe_point_from_map(tp, pp, is_kprobe); + if (!ret) + return 0; + + pr_debug("Failed to find probe point from both of dwarf and map.\n"); + + if (tp->symbol) { + pp->function = strdup(tp->symbol); + pp->offset = tp->offset; + } else if (!tp->module && !is_kprobe) { + ret = e_snprintf(buf, 128, "0x%" PRIx64, (u64)tp->address); + if (ret < 0) + return ret; + pp->function = strdup(buf); + pp->offset = 0; + } + if (pp->function == NULL) + return -ENOMEM; + + pp->retprobe = tp->retprobe; + + return 0; +} + static int convert_to_perf_probe_event(struct probe_trace_event *tev, struct perf_probe_event *pev, bool is_kprobe) { @@ -1560,11 +1696,7 @@ static int convert_to_perf_probe_event(struct probe_trace_event *tev, return -ENOMEM; /* Convert trace_point to probe_point */ - if (is_kprobe) - ret = kprobe_convert_to_perf_probe(&tev->point, &pev->point); - else - ret = convert_to_perf_probe_point(&tev->point, &pev->point); - + ret = convert_to_perf_probe_point(&tev->point, &pev->point, is_kprobe); if (ret < 0) return ret; @@ -1731,7 +1863,8 @@ static struct strlist *get_probe_trace_command_rawlist(int fd) } /* Show an event */ -static int show_perf_probe_event(struct perf_probe_event *pev) +static int show_perf_probe_event(struct perf_probe_event *pev, + const char *module) { int i, ret; char buf[128]; @@ -1747,6 +1880,8 @@ static int show_perf_probe_event(struct perf_probe_event *pev) return ret; printf(" %-20s (on %s", buf, place); + if (module) + printf(" in %s", module); if (pev->nargs > 0) { printf(" with"); @@ -1784,7 +1919,8 @@ static int __show_perf_probe_events(int fd, bool is_kprobe) ret = convert_to_perf_probe_event(&tev, &pev, is_kprobe); if (ret >= 0) - ret = show_perf_probe_event(&pev); + ret = show_perf_probe_event(&pev, + tev.point.module); } clear_perf_probe_event(&pev); clear_probe_trace_event(&tev); @@ -1807,7 +1943,7 @@ int show_perf_probe_events(void) if (fd < 0) return fd; - ret = init_vmlinux(); + ret = init_symbol_maps(false); if (ret < 0) return ret; @@ -1820,6 +1956,7 @@ int show_perf_probe_events(void) close(fd); } + exit_symbol_maps(); return ret; } @@ -1982,7 +2119,7 @@ static int __add_probe_trace_events(struct perf_probe_event *pev, group = pev->group; pev->event = tev->event; pev->group = tev->group; - show_perf_probe_event(pev); + show_perf_probe_event(pev, tev->point.module); /* Trick here - restore current event/group */ pev->event = (char *)event; pev->group = (char *)group; @@ -2008,113 +2145,175 @@ static int __add_probe_trace_events(struct perf_probe_event *pev, return ret; } -static int convert_to_probe_trace_events(struct perf_probe_event *pev, - struct probe_trace_event **tevs, - int max_tevs, const char *target) +static char *looking_function_name; +static int num_matched_functions; + +static int probe_function_filter(struct map *map __maybe_unused, + struct symbol *sym) { + if ((sym->binding == STB_GLOBAL || sym->binding == STB_LOCAL) && + strcmp(looking_function_name, sym->name) == 0) { + num_matched_functions++; + return 0; + } + return 1; +} + +#define strdup_or_goto(str, label) \ + ({ char *__p = strdup(str); if (!__p) goto label; __p; }) + +/* + * Find probe function addresses from 
map. + * Return an error or the number of found probe_trace_event + */ +static int find_probe_trace_events_from_map(struct perf_probe_event *pev, + struct probe_trace_event **tevs, + int max_tevs, const char *target) +{ + struct map *map = NULL; + struct kmap *kmap = NULL; + struct ref_reloc_sym *reloc_sym = NULL; struct symbol *sym; - int ret, i; + struct rb_node *nd; struct probe_trace_event *tev; + struct perf_probe_point *pp = &pev->point; + struct probe_trace_point *tp; + int ret, i; - if (pev->uprobes && !pev->group) { - /* Replace group name if not given */ - ret = convert_exec_to_group(target, &pev->group); - if (ret != 0) { - pr_warning("Failed to make a group name.\n"); - return ret; - } + /* Init maps of given executable or kernel */ + if (pev->uprobes) + map = dso__new_map(target); + else + map = kernel_get_module_map(target); + if (!map) { + ret = -EINVAL; + goto out; } - /* Convert perf_probe_event with debuginfo */ - ret = try_to_find_probe_trace_events(pev, tevs, max_tevs, target); - if (ret != 0) - return ret; /* Found in debuginfo or got an error */ - - if (pev->uprobes) { - ret = convert_name_to_addr(pev, target); - if (ret < 0) - return ret; + /* + * Load matched symbols: Since the different local symbols may have + * same name but different addresses, this lists all the symbols. + */ + num_matched_functions = 0; + looking_function_name = pp->function; + ret = map__load(map, probe_function_filter); + if (ret || num_matched_functions == 0) { + pr_err("Failed to find symbol %s in %s\n", pp->function, + target ? : "kernel"); + ret = -ENOENT; + goto out; + } else if (num_matched_functions > max_tevs) { + pr_err("Too many functions matched in %s\n", + target ? : "kernel"); + ret = -E2BIG; + goto out; } - /* Allocate trace event buffer */ - tev = *tevs = zalloc(sizeof(struct probe_trace_event)); - if (tev == NULL) - return -ENOMEM; + if (!pev->uprobes) { + kmap = map__kmap(map); + reloc_sym = kmap->ref_reloc_sym; + if (!reloc_sym) { + pr_warning("Relocated base symbol is not found!\n"); + ret = -EINVAL; + goto out; + } + } - /* Copy parameters */ - tev->point.symbol = strdup(pev->point.function); - if (tev->point.symbol == NULL) { + /* Setup result trace-probe-events */ + *tevs = zalloc(sizeof(*tev) * num_matched_functions); + if (!*tevs) { ret = -ENOMEM; - goto error; + goto out; } - if (target) { - tev->point.module = strdup(target); - if (tev->point.module == NULL) { - ret = -ENOMEM; - goto error; + ret = 0; + map__for_each_symbol(map, sym, nd) { + tev = (*tevs) + ret; + tp = &tev->point; + if (ret == num_matched_functions) { + pr_warning("Too many symbols are listed. 
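[Aside: the strdup_or_goto() macro defined above leans on GNU C statement expressions: a ({ ... }) block yields its last expression, so the duplication and the error exit collapse into a single assignment. A self-contained sketch, assuming gcc or clang with GNU extensions; the probed names are made up:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define strdup_or_goto(str, label) \
		({ char *__p = strdup(str); if (!__p) goto label; __p; })

	int main(void)
	{
		char *sym = NULL, *mod = NULL;

		/* on allocation failure the goto fires before the assignment */
		sym = strdup_or_goto("vfs_read", nomem);
		mod = strdup_or_goto("kernel", nomem);
		printf("probe on %s in %s\n", sym, mod);
		free(sym);
		free(mod);
		return 0;
	nomem:
		free(sym);		/* free(NULL) is a no-op */
		free(mod);
		fprintf(stderr, "out of memory\n");
		return 1;
	}

End of aside; the find_probe_trace_events_from_map() hunk continues below.]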
Skip it.\n"); + break; } - } - - tev->point.offset = pev->point.offset; - tev->point.retprobe = pev->point.retprobe; - tev->nargs = pev->nargs; - tev->uprobes = pev->uprobes; + ret++; - if (tev->nargs) { - tev->args = zalloc(sizeof(struct probe_trace_arg) - * tev->nargs); - if (tev->args == NULL) { - ret = -ENOMEM; - goto error; + if (pp->offset > sym->end - sym->start) { + pr_warning("Offset %ld is bigger than the size of %s\n", + pp->offset, sym->name); + ret = -ENOENT; + goto err_out; + } + /* Add one probe point */ + tp->address = map->unmap_ip(map, sym->start) + pp->offset; + if (reloc_sym) { + tp->symbol = strdup_or_goto(reloc_sym->name, nomem_out); + tp->offset = tp->address - reloc_sym->addr; + } else { + tp->symbol = strdup_or_goto(sym->name, nomem_out); + tp->offset = pp->offset; + } + tp->retprobe = pp->retprobe; + if (target) + tev->point.module = strdup_or_goto(target, nomem_out); + tev->uprobes = pev->uprobes; + tev->nargs = pev->nargs; + if (tev->nargs) { + tev->args = zalloc(sizeof(struct probe_trace_arg) * + tev->nargs); + if (tev->args == NULL) + goto nomem_out; } for (i = 0; i < tev->nargs; i++) { - if (pev->args[i].name) { - tev->args[i].name = strdup(pev->args[i].name); - if (tev->args[i].name == NULL) { - ret = -ENOMEM; - goto error; - } - } - tev->args[i].value = strdup(pev->args[i].var); - if (tev->args[i].value == NULL) { - ret = -ENOMEM; - goto error; - } - if (pev->args[i].type) { - tev->args[i].type = strdup(pev->args[i].type); - if (tev->args[i].type == NULL) { - ret = -ENOMEM; - goto error; - } - } + if (pev->args[i].name) + tev->args[i].name = + strdup_or_goto(pev->args[i].name, + nomem_out); + + tev->args[i].value = strdup_or_goto(pev->args[i].var, + nomem_out); + if (pev->args[i].type) + tev->args[i].type = + strdup_or_goto(pev->args[i].type, + nomem_out); } } - if (pev->uprobes) - return 1; +out: + if (map && pev->uprobes) { + /* Only when using uprobe(exec) map needs to be released */ + dso__delete(map->dso); + map__delete(map); + } + return ret; - /* Currently just checking function name from symbol map */ - sym = __find_kernel_function_by_name(tev->point.symbol, NULL); - if (!sym) { - pr_warning("Kernel symbol \'%s\' not found.\n", - tev->point.symbol); - ret = -ENOENT; - goto error; - } else if (tev->point.offset > sym->end - sym->start) { - pr_warning("Offset specified is greater than size of %s\n", - tev->point.symbol); - ret = -ENOENT; - goto error; +nomem_out: + ret = -ENOMEM; +err_out: + clear_probe_trace_events(*tevs, num_matched_functions); + zfree(tevs); + goto out; +} + +static int convert_to_probe_trace_events(struct perf_probe_event *pev, + struct probe_trace_event **tevs, + int max_tevs, const char *target) +{ + int ret; + if (pev->uprobes && !pev->group) { + /* Replace group name if not given */ + ret = convert_exec_to_group(target, &pev->group); + if (ret != 0) { + pr_warning("Failed to make a group name.\n"); + return ret; + } } - return 1; -error: - clear_probe_trace_event(tev); - free(tev); - *tevs = NULL; - return ret; + /* Convert perf_probe_event with debuginfo */ + ret = try_to_find_probe_trace_events(pev, tevs, max_tevs, target); + if (ret != 0) + return ret; /* Found in debuginfo or got an error */ + + return find_probe_trace_events_from_map(pev, tevs, max_tevs, target); } struct __event_package { @@ -2135,12 +2334,7 @@ int add_perf_probe_events(struct perf_probe_event *pevs, int npevs, if (pkgs == NULL) return -ENOMEM; - if (!pevs->uprobes) - /* Init vmlinux path */ - ret = init_vmlinux(); - else - ret = init_user_exec(); - + 
ret = init_symbol_maps(pevs->uprobes); if (ret < 0) { free(pkgs); return ret; @@ -2174,6 +2368,7 @@ end: zfree(&pkgs[i].tevs); } free(pkgs); + exit_symbol_maps(); return ret; } @@ -2323,159 +2518,51 @@ static struct strfilter *available_func_filter; static int filter_available_functions(struct map *map __maybe_unused, struct symbol *sym) { - if (sym->binding == STB_GLOBAL && + if ((sym->binding == STB_GLOBAL || sym->binding == STB_LOCAL) && strfilter__compare(available_func_filter, sym->name)) return 0; return 1; } -static int __show_available_funcs(struct map *map) -{ - if (map__load(map, filter_available_functions)) { - pr_err("Failed to load map.\n"); - return -EINVAL; - } - if (!dso__sorted_by_name(map->dso, map->type)) - dso__sort_by_name(map->dso, map->type); - - dso__fprintf_symbols_by_name(map->dso, map->type, stdout); - return 0; -} - -static int available_kernel_funcs(const char *module) +int show_available_funcs(const char *target, struct strfilter *_filter, + bool user) { struct map *map; int ret; - ret = init_vmlinux(); + ret = init_symbol_maps(user); if (ret < 0) return ret; - map = kernel_get_module_map(module); + /* Get a symbol map */ + if (user) + map = dso__new_map(target); + else + map = kernel_get_module_map(target); if (!map) { - pr_err("Failed to find %s map.\n", (module) ? : "kernel"); + pr_err("Failed to get a map for %s\n", (target) ? : "kernel"); return -EINVAL; } - return __show_available_funcs(map); -} - -static int available_user_funcs(const char *target) -{ - struct map *map; - int ret; - - ret = init_user_exec(); - if (ret < 0) - return ret; - - map = dso__new_map(target); - ret = __show_available_funcs(map); - dso__delete(map->dso); - map__delete(map); - return ret; -} -int show_available_funcs(const char *target, struct strfilter *_filter, - bool user) -{ - setup_pager(); + /* Load symbols with given filter */ available_func_filter = _filter; - - if (!user) - return available_kernel_funcs(target); - - return available_user_funcs(target); -} - -/* - * uprobe_events only accepts address: - * Convert function and any offset to address - */ -static int convert_name_to_addr(struct perf_probe_event *pev, const char *exec) -{ - struct perf_probe_point *pp = &pev->point; - struct symbol *sym; - struct map *map = NULL; - char *function = NULL; - int ret = -EINVAL; - unsigned long long vaddr = 0; - - if (!pp->function) { - pr_warning("No function specified for uprobes"); - goto out; - } - - function = strdup(pp->function); - if (!function) { - pr_warning("Failed to allocate memory by strdup.\n"); - ret = -ENOMEM; - goto out; - } - - map = dso__new_map(exec); - if (!map) { - pr_warning("Cannot find appropriate DSO for %s.\n", exec); - goto out; - } - available_func_filter = strfilter__new(function, NULL); if (map__load(map, filter_available_functions)) { - pr_err("Failed to load map.\n"); - goto out; - } - - sym = map__find_symbol_by_name(map, function, NULL); - if (!sym) { - pr_warning("Cannot find %s in DSO %s\n", function, exec); - goto out; - } - - if (map->start > sym->start) - vaddr = map->start; - vaddr += sym->start + pp->offset + map->pgoff; - pp->offset = 0; - - if (!pev->event) { - pev->event = function; - function = NULL; - } - if (!pev->group) { - char *ptr1, *ptr2, *exec_copy; - - pev->group = zalloc(sizeof(char *) * 64); - exec_copy = strdup(exec); - if (!exec_copy) { - ret = -ENOMEM; - pr_warning("Failed to copy exec string.\n"); - goto out; - } - - ptr1 = strdup(basename(exec_copy)); - if (ptr1) { - ptr2 = strpbrk(ptr1, "-._"); - if (ptr2) - *ptr2 = 
'\0'; - e_snprintf(pev->group, 64, "%s_%s", PERFPROBE_GROUP, - ptr1); - free(ptr1); - } - free(exec_copy); - } - free(pp->function); - pp->function = zalloc(sizeof(char *) * MAX_PROBE_ARGS); - if (!pp->function) { - ret = -ENOMEM; - pr_warning("Failed to allocate memory by zalloc.\n"); - goto out; + pr_err("Failed to load symbols in %s\n", (target) ? : "kernel"); + goto end; } - e_snprintf(pp->function, MAX_PROBE_ARGS, "0x%llx", vaddr); - ret = 0; + if (!dso__sorted_by_name(map->dso, map->type)) + dso__sort_by_name(map->dso, map->type); -out: - if (map) { + /* Show all (filtered) symbols */ + setup_pager(); + dso__fprintf_symbols_by_name(map->dso, map->type, stdout); +end: + if (user) { dso__delete(map->dso); map__delete(map); } - if (function) - free(function); + exit_symbol_maps(); + return ret; } + diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h index fcaf7273e85a..776c9347a3b6 100644 --- a/tools/perf/util/probe-event.h +++ b/tools/perf/util/probe-event.h @@ -2,6 +2,7 @@ #define _PROBE_EVENT_H #include <stdbool.h> +#include "intlist.h" #include "strlist.h" #include "strfilter.h" @@ -76,13 +77,6 @@ struct perf_probe_event { struct perf_probe_arg *args; /* Arguments */ }; - -/* Line number container */ -struct line_node { - struct list_head list; - int line; -}; - /* Line range */ struct line_range { char *file; /* File name */ @@ -92,7 +86,7 @@ struct line_range { int offset; /* Start line offset */ char *path; /* Real path name */ char *comp_dir; /* Compile directory */ - struct list_head line_list; /* Visible lines */ + struct intlist *line_list; /* Visible lines */ }; /* List of variables */ @@ -124,7 +118,7 @@ extern int parse_line_range_desc(const char *cmd, struct line_range *lr); extern void line_range__clear(struct line_range *lr); /* Initialize line range */ -extern void line_range__init(struct line_range *lr); +extern int line_range__init(struct line_range *lr); /* Internal use: Return kernel/module path */ extern const char *kernel_get_module_path(const char *module); diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c index 061edb162b5b..df0238654698 100644 --- a/tools/perf/util/probe-finder.c +++ b/tools/perf/util/probe-finder.c @@ -34,7 +34,9 @@ #include <linux/bitops.h> #include "event.h" +#include "dso.h" #include "debug.h" +#include "intlist.h" #include "util.h" #include "symbol.h" #include "probe-finder.h" @@ -42,65 +44,6 @@ /* Kprobe tracer basic type is up to u64 */ #define MAX_BASIC_TYPE_BITS 64 -/* Line number list operations */ - -/* Add a line to line number list */ -static int line_list__add_line(struct list_head *head, int line) -{ - struct line_node *ln; - struct list_head *p; - - /* Reverse search, because new line will be the last one */ - list_for_each_entry_reverse(ln, head, list) { - if (ln->line < line) { - p = &ln->list; - goto found; - } else if (ln->line == line) /* Already exist */ - return 1; - } - /* List is empty, or the smallest entry */ - p = head; -found: - pr_debug("line list: add a line %u\n", line); - ln = zalloc(sizeof(struct line_node)); - if (ln == NULL) - return -ENOMEM; - ln->line = line; - INIT_LIST_HEAD(&ln->list); - list_add(&ln->list, p); - return 0; -} - -/* Check if the line in line number list */ -static int line_list__has_line(struct list_head *head, int line) -{ - struct line_node *ln; - - /* Reverse search, because new line will be the last one */ - list_for_each_entry(ln, head, list) - if (ln->line == line) - return 1; - - return 0; -} - -/* Init line number list */ -static 
void line_list__init(struct list_head *head) -{ - INIT_LIST_HEAD(head); -} - -/* Free line number list */ -static void line_list__free(struct list_head *head) -{ - struct line_node *ln; - while (!list_empty(head)) { - ln = list_first_entry(head, struct line_node, list); - list_del(&ln->list); - free(ln); - } -} - /* Dwarf FL wrappers */ static char *debuginfo_path; /* Currently dummy */ @@ -147,80 +90,7 @@ error: return -ENOENT; } -#if _ELFUTILS_PREREQ(0, 148) -/* This method is buggy if elfutils is older than 0.148 */ -static int __linux_kernel_find_elf(Dwfl_Module *mod, - void **userdata, - const char *module_name, - Dwarf_Addr base, - char **file_name, Elf **elfp) -{ - int fd; - const char *path = kernel_get_module_path(module_name); - - pr_debug2("Use file %s for %s\n", path, module_name); - if (path) { - fd = open(path, O_RDONLY); - if (fd >= 0) { - *file_name = strdup(path); - return fd; - } - } - /* If failed, try to call standard method */ - return dwfl_linux_kernel_find_elf(mod, userdata, module_name, base, - file_name, elfp); -} - -static const Dwfl_Callbacks kernel_callbacks = { - .find_debuginfo = dwfl_standard_find_debuginfo, - .debuginfo_path = &debuginfo_path, - - .find_elf = __linux_kernel_find_elf, - .section_address = dwfl_linux_kernel_module_section_address, -}; - -/* Get a Dwarf from live kernel image */ -static int debuginfo__init_online_kernel_dwarf(struct debuginfo *dbg, - Dwarf_Addr addr) -{ - dbg->dwfl = dwfl_begin(&kernel_callbacks); - if (!dbg->dwfl) - return -EINVAL; - - /* Load the kernel dwarves: Don't care the result here */ - dwfl_linux_kernel_report_kernel(dbg->dwfl); - dwfl_linux_kernel_report_modules(dbg->dwfl); - - dbg->dbg = dwfl_addrdwarf(dbg->dwfl, addr, &dbg->bias); - /* Here, check whether we could get a real dwarf */ - if (!dbg->dbg) { - pr_debug("Failed to find kernel dwarf at %lx\n", - (unsigned long)addr); - dwfl_end(dbg->dwfl); - memset(dbg, 0, sizeof(*dbg)); - return -ENOENT; - } - - return 0; -} -#else -/* With older elfutils, this just support kernel module... 
*/ -static int debuginfo__init_online_kernel_dwarf(struct debuginfo *dbg, - Dwarf_Addr addr __maybe_unused) -{ - const char *path = kernel_get_module_path("kernel"); - - if (!path) { - pr_err("Failed to find vmlinux path\n"); - return -ENOENT; - } - - pr_debug2("Use file %s for debuginfo\n", path); - return debuginfo__init_offline_dwarf(dbg, path); -} -#endif - -struct debuginfo *debuginfo__new(const char *path) +static struct debuginfo *__debuginfo__new(const char *path) { struct debuginfo *dbg = zalloc(sizeof(*dbg)); if (!dbg) @@ -228,21 +98,44 @@ struct debuginfo *debuginfo__new(const char *path) if (debuginfo__init_offline_dwarf(dbg, path) < 0) zfree(&dbg); - + if (dbg) + pr_debug("Open Debuginfo file: %s\n", path); return dbg; } -struct debuginfo *debuginfo__new_online_kernel(unsigned long addr) -{ - struct debuginfo *dbg = zalloc(sizeof(*dbg)); +enum dso_binary_type distro_dwarf_types[] = { + DSO_BINARY_TYPE__FEDORA_DEBUGINFO, + DSO_BINARY_TYPE__UBUNTU_DEBUGINFO, + DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO, + DSO_BINARY_TYPE__BUILDID_DEBUGINFO, + DSO_BINARY_TYPE__NOT_FOUND, +}; - if (!dbg) - return NULL; +struct debuginfo *debuginfo__new(const char *path) +{ + enum dso_binary_type *type; + char buf[PATH_MAX], nil = '\0'; + struct dso *dso; + struct debuginfo *dinfo = NULL; + + /* Try to open distro debuginfo files */ + dso = dso__new(path); + if (!dso) + goto out; - if (debuginfo__init_online_kernel_dwarf(dbg, (Dwarf_Addr)addr) < 0) - zfree(&dbg); + for (type = distro_dwarf_types; + !dinfo && *type != DSO_BINARY_TYPE__NOT_FOUND; + type++) { + if (dso__read_binary_type_filename(dso, *type, &nil, + buf, PATH_MAX) < 0) + continue; + dinfo = __debuginfo__new(buf); + } + dso__delete(dso); - return dbg; +out: + /* if failed to open all distro debuginfo, open given binary */ + return dinfo ? 
: __debuginfo__new(path); } void debuginfo__delete(struct debuginfo *dbg) @@ -880,7 +773,7 @@ static int find_probe_point_by_line(struct probe_finder *pf) } /* Find lines which match lazy pattern */ -static int find_lazy_match_lines(struct list_head *head, +static int find_lazy_match_lines(struct intlist *list, const char *fname, const char *pat) { FILE *fp; @@ -901,7 +794,7 @@ static int find_lazy_match_lines(struct list_head *head, line[len - 1] = '\0'; if (strlazymatch(line, pat)) { - line_list__add_line(head, linenum); + intlist__add(list, linenum); count++; } linenum++; @@ -924,7 +817,7 @@ static int probe_point_lazy_walker(const char *fname, int lineno, Dwarf_Die *sc_die, die_mem; int ret; - if (!line_list__has_line(&pf->lcache, lineno) || + if (!intlist__has_entry(pf->lcache, lineno) || strtailcmp(fname, pf->fname) != 0) return 0; @@ -952,9 +845,9 @@ static int find_probe_point_lazy(Dwarf_Die *sp_die, struct probe_finder *pf) { int ret = 0; - if (list_empty(&pf->lcache)) { + if (intlist__empty(pf->lcache)) { /* Matching lazy line pattern */ - ret = find_lazy_match_lines(&pf->lcache, pf->fname, + ret = find_lazy_match_lines(pf->lcache, pf->fname, pf->pev->point.lazy_line); if (ret <= 0) return ret; @@ -1096,7 +989,9 @@ static int debuginfo__find_probes(struct debuginfo *dbg, #endif off = 0; - line_list__init(&pf->lcache); + pf->lcache = intlist__new(NULL); + if (!pf->lcache) + return -ENOMEM; /* Fastpath: lookup by function name from .debug_pubnames section */ if (pp->function) { @@ -1149,7 +1044,8 @@ static int debuginfo__find_probes(struct debuginfo *dbg, } found: - line_list__free(&pf->lcache); + intlist__delete(pf->lcache); + pf->lcache = NULL; return ret; } @@ -1537,7 +1433,7 @@ static int line_range_add_line(const char *src, unsigned int lineno, if (lr->path == NULL) return -ENOMEM; } - return line_list__add_line(&lr->line_list, lineno); + return intlist__add(lr->line_list, lineno); } static int line_range_walk_cb(const char *fname, int lineno, @@ -1565,7 +1461,7 @@ static int find_line_range_by_line(Dwarf_Die *sp_die, struct line_finder *lf) /* Update status */ if (ret >= 0) - if (!list_empty(&lf->lr->line_list)) + if (!intlist__empty(lf->lr->line_list)) ret = lf->found = 1; else ret = 0; /* Lines are not found */ diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h index ffc33cdd25cc..92590b2c7e1c 100644 --- a/tools/perf/util/probe-finder.h +++ b/tools/perf/util/probe-finder.h @@ -3,6 +3,7 @@ #include <stdbool.h> #include "util.h" +#include "intlist.h" #include "probe-event.h" #define MAX_PROBE_BUFFER 1024 @@ -29,8 +30,8 @@ struct debuginfo { Dwarf_Addr bias; }; +/* This also tries to open distro debuginfo */ extern struct debuginfo *debuginfo__new(const char *path); -extern struct debuginfo *debuginfo__new_online_kernel(unsigned long addr); extern void debuginfo__delete(struct debuginfo *dbg); /* Find probe_trace_events specified by perf_probe_event from debuginfo */ @@ -66,7 +67,7 @@ struct probe_finder { const char *fname; /* Real file name */ Dwarf_Die cu_die; /* Current CU */ Dwarf_Die sp_die; - struct list_head lcache; /* Line cache for lazy match */ + struct intlist *lcache; /* Line cache for lazy match */ /* For variable searching */ #if _ELFUTILS_PREREQ(0, 142) diff --git a/tools/perf/util/python-ext-sources b/tools/perf/util/python-ext-sources index 595bfc73d2ed..16a475a7d492 100644 --- a/tools/perf/util/python-ext-sources +++ b/tools/perf/util/python-ext-sources @@ -17,6 +17,6 @@ util/xyarray.c util/cgroup.c util/rblist.c util/strlist.c 
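The probe-finder conversion above drops the open-coded line_node list in favor of the generic intlist, whose rb-tree backing keeps line numbers sorted and rejects duplicates. A minimal sketch of the resulting lazy-line-cache pattern, using only the intlist calls that appear in this patch (lazy_cache_demo is an invented name):

#include <stdbool.h>
#include "intlist.h"

/* Hypothetical demo of the intlist-based line cache. */
static bool lazy_cache_demo(void)
{
	struct intlist *lcache = intlist__new(NULL);	/* start empty */
	bool hit;

	if (!lcache)
		return false;			/* allocation failure */

	intlist__add(lcache, 10);	/* kept sorted; duplicates rejected */
	intlist__add(lcache, 42);

	hit = intlist__has_entry(lcache, 42);	/* rb-tree lookup: true */

	intlist__delete(lcache);	/* frees every node and the list */
	return hit;
}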
-util/fs.c +../lib/api/fs/fs.c util/trace-event.c ../../lib/rbtree.c diff --git a/tools/perf/util/record.c b/tools/perf/util/record.c index 373762501dad..049e0a09ccd3 100644 --- a/tools/perf/util/record.c +++ b/tools/perf/util/record.c @@ -2,7 +2,7 @@ #include "evsel.h" #include "cpumap.h" #include "parse-events.h" -#include "fs.h" +#include <api/fs/fs.h> #include "util.h" typedef void (*setup_probe_fn_t)(struct perf_evsel *evsel); diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index 5da6ce74c676..55960f22233c 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -702,11 +702,12 @@ static void regs_dump__printf(u64 mask, u64 *regs) } } -static void regs_user__printf(struct perf_sample *sample, u64 mask) +static void regs_user__printf(struct perf_sample *sample) { struct regs_dump *user_regs = &sample->user_regs; if (user_regs->regs) { + u64 mask = user_regs->mask; printf("... user regs: mask 0x%" PRIx64 "\n", mask); regs_dump__printf(mask, user_regs->regs); } @@ -793,7 +794,7 @@ static void dump_sample(struct perf_evsel *evsel, union perf_event *event, if (!dump_trace) return; - printf("(IP, %d): %d/%d: %#" PRIx64 " period: %" PRIu64 " addr: %#" PRIx64 "\n", + printf("(IP, 0x%x): %d/%d: %#" PRIx64 " period: %" PRIu64 " addr: %#" PRIx64 "\n", event->header.misc, sample->pid, sample->tid, sample->ip, sample->period, sample->addr); @@ -806,7 +807,7 @@ static void dump_sample(struct perf_evsel *evsel, union perf_event *event, branch_stack__printf(sample); if (sample_type & PERF_SAMPLE_REGS_USER) - regs_user__printf(sample, evsel->attr.sample_regs_user); + regs_user__printf(sample); if (sample_type & PERF_SAMPLE_STACK_USER) stack_user__printf(&sample->user_stack); diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c index 3e9f336740fa..3b7dbf51d4a9 100644 --- a/tools/perf/util/symbol-elf.c +++ b/tools/perf/util/symbol-elf.c @@ -151,15 +151,15 @@ Elf_Scn *elf_section_by_name(Elf *elf, GElf_Ehdr *ep, gelf_getshdr(sec, shp); str = elf_strptr(elf, ep->e_shstrndx, shp->sh_name); - if (!strcmp(name, str)) { + if (str && !strcmp(name, str)) { if (idx) *idx = cnt; - break; + return sec; } ++cnt; } - return sec; + return NULL; } #define elf_section__for_each_rel(reldata, pos, pos_mem, idx, nr_entries) \ @@ -506,6 +506,8 @@ int filename__read_debuglink(const char *filename, char *debuglink, /* the start of this section is a zero-terminated string */ strncpy(debuglink, data->d_buf, size); + err = 0; + out_elf_end: elf_end(elf); out_close: diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c index e89afc097d8a..95e249779931 100644 --- a/tools/perf/util/symbol.c +++ b/tools/perf/util/symbol.c @@ -410,7 +410,7 @@ struct symbol *dso__find_symbol(struct dso *dso, return symbols__find(&dso->symbols[type], addr); } -struct symbol *dso__first_symbol(struct dso *dso, enum map_type type) +static struct symbol *dso__first_symbol(struct dso *dso, enum map_type type) { return symbols__first(&dso->symbols[type]); } @@ -1251,6 +1251,46 @@ out_failure: return -1; } +static bool dso__is_compatible_symtab_type(struct dso *dso, bool kmod, + enum dso_binary_type type) +{ + switch (type) { + case DSO_BINARY_TYPE__JAVA_JIT: + case DSO_BINARY_TYPE__DEBUGLINK: + case DSO_BINARY_TYPE__SYSTEM_PATH_DSO: + case DSO_BINARY_TYPE__FEDORA_DEBUGINFO: + case DSO_BINARY_TYPE__UBUNTU_DEBUGINFO: + case DSO_BINARY_TYPE__BUILDID_DEBUGINFO: + case DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO: + return !kmod && dso->kernel == DSO_TYPE_USER; + + case DSO_BINARY_TYPE__KALLSYMS: + 
case DSO_BINARY_TYPE__VMLINUX: + case DSO_BINARY_TYPE__KCORE: + return dso->kernel == DSO_TYPE_KERNEL; + + case DSO_BINARY_TYPE__GUEST_KALLSYMS: + case DSO_BINARY_TYPE__GUEST_VMLINUX: + case DSO_BINARY_TYPE__GUEST_KCORE: + return dso->kernel == DSO_TYPE_GUEST_KERNEL; + + case DSO_BINARY_TYPE__GUEST_KMODULE: + case DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE: + /* + * kernel modules know their symtab type - it's set when + * creating a module dso in machine__new_module(). + */ + return kmod && dso->symtab_type == type; + + case DSO_BINARY_TYPE__BUILD_ID_CACHE: + return true; + + case DSO_BINARY_TYPE__NOT_FOUND: + default: + return false; + } +} + int dso__load(struct dso *dso, struct map *map, symbol_filter_t filter) { char *name; @@ -1261,6 +1301,7 @@ int dso__load(struct dso *dso, struct map *map, symbol_filter_t filter) int ss_pos = 0; struct symsrc ss_[2]; struct symsrc *syms_ss = NULL, *runtime_ss = NULL; + bool kmod; dso__set_loaded(dso, map->type); @@ -1301,7 +1342,11 @@ int dso__load(struct dso *dso, struct map *map, symbol_filter_t filter) if (!name) return -1; - /* Iterate over candidate debug images. + kmod = dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE || + dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE; + + /* + * Iterate over candidate debug images. * Keep track of "interesting" ones (those which have a symtab, dynsym, * and/or opd section) for processing. */ @@ -1311,6 +1356,9 @@ int dso__load(struct dso *dso, struct map *map, symbol_filter_t filter) enum dso_binary_type symtab_type = binary_type_symtab[i]; + if (!dso__is_compatible_symtab_type(dso, kmod, symtab_type)) + continue; + if (dso__read_binary_type_filename(dso, symtab_type, root_dir, name, PATH_MAX)) continue; @@ -1353,15 +1401,10 @@ int dso__load(struct dso *dso, struct map *map, symbol_filter_t filter) if (!runtime_ss && syms_ss) runtime_ss = syms_ss; - if (syms_ss) { - int km; - - km = dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE || - dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE; - ret = dso__load_sym(dso, map, syms_ss, runtime_ss, filter, km); - } else { + if (syms_ss) + ret = dso__load_sym(dso, map, syms_ss, runtime_ss, filter, kmod); + else ret = -1; - } if (ret > 0) { int nr_plt; diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h index fffe2888a1c7..501e4e722e8e 100644 --- a/tools/perf/util/symbol.h +++ b/tools/perf/util/symbol.h @@ -79,6 +79,17 @@ struct symbol { void symbol__delete(struct symbol *sym); void symbols__delete(struct rb_root *symbols); +/* symbols__for_each_entry - iterate over symbols (rb_root) + * + * @symbols: the rb_root of symbols + * @pos: the 'struct symbol *' to use as a loop cursor + * @nd: the 'struct rb_node *' to use as a temporary storage + */ +#define symbols__for_each_entry(symbols, pos, nd) \ + for (nd = rb_first(symbols); \ + nd && (pos = rb_entry(nd, struct symbol, rb_node)); \ + nd = rb_next(nd)) + static inline size_t symbol__size(const struct symbol *sym) { return sym->end - sym->start + 1; @@ -175,7 +186,7 @@ struct addr_location { struct symbol *sym; u64 addr; char level; - bool filtered; + u8 filtered; u8 cpumode; s32 cpu; }; @@ -223,7 +234,6 @@ struct symbol *dso__find_symbol(struct dso *dso, enum map_type type, u64 addr); struct symbol *dso__find_symbol_by_name(struct dso *dso, enum map_type type, const char *name); -struct symbol *dso__first_symbol(struct dso *dso, enum map_type type); int filename__read_build_id(const char *filename, void *bf, size_t size); int sysfs__read_build_id(const char *filename, void *bf, size_t 
size); diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c index 0358882c8910..3ce0498bdae6 100644 --- a/tools/perf/util/thread.c +++ b/tools/perf/util/thread.c @@ -142,3 +142,24 @@ int thread__fork(struct thread *thread, struct thread *parent, u64 timestamp) return 0; } + +void thread__find_cpumode_addr_location(struct thread *thread, + struct machine *machine, + enum map_type type, u64 addr, + struct addr_location *al) +{ + size_t i; + const u8 cpumodes[] = { + PERF_RECORD_MISC_USER, + PERF_RECORD_MISC_KERNEL, + PERF_RECORD_MISC_GUEST_USER, + PERF_RECORD_MISC_GUEST_KERNEL + }; + + for (i = 0; i < ARRAY_SIZE(cpumodes); i++) { + thread__find_addr_location(thread, machine, cpumodes[i], type, + addr, al); + if (al->map) + break; + } +} diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h index 5b856bf942e1..9b29f085aede 100644 --- a/tools/perf/util/thread.h +++ b/tools/perf/util/thread.h @@ -44,12 +44,6 @@ void thread__insert_map(struct thread *thread, struct map *map); int thread__fork(struct thread *thread, struct thread *parent, u64 timestamp); size_t thread__fprintf(struct thread *thread, FILE *fp); -static inline struct map *thread__find_map(struct thread *thread, - enum map_type type, u64 addr) -{ - return thread ? map_groups__find(&thread->mg, type, addr) : NULL; -} - void thread__find_addr_map(struct thread *thread, struct machine *machine, u8 cpumode, enum map_type type, u64 addr, struct addr_location *al); @@ -58,6 +52,11 @@ void thread__find_addr_location(struct thread *thread, struct machine *machine, u8 cpumode, enum map_type type, u64 addr, struct addr_location *al); +void thread__find_cpumode_addr_location(struct thread *thread, + struct machine *machine, + enum map_type type, u64 addr, + struct addr_location *al); + static inline void *thread__priv(struct thread *thread) { return thread->priv; diff --git a/tools/perf/util/trace-event-parse.c b/tools/perf/util/trace-event-parse.c index e0d6d07f6848..c36636fd825b 100644 --- a/tools/perf/util/trace-event-parse.c +++ b/tools/perf/util/trace-event-parse.c @@ -126,6 +126,7 @@ void event_format__print(struct event_format *event, trace_seq_init(&s); pevent_event_info(&s, event, &record); trace_seq_do_printf(&s); + trace_seq_destroy(&s); } void parse_proc_kallsyms(struct pevent *pevent, diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c new file mode 100644 index 000000000000..67db73ec3dab --- /dev/null +++ b/tools/perf/util/unwind-libdw.c @@ -0,0 +1,210 @@ +#include <linux/compiler.h> +#include <elfutils/libdw.h> +#include <elfutils/libdwfl.h> +#include <inttypes.h> +#include <errno.h> +#include "unwind.h" +#include "unwind-libdw.h" +#include "machine.h" +#include "thread.h" +#include "types.h" +#include "event.h" +#include "perf_regs.h" + +static char *debuginfo_path; + +static const Dwfl_Callbacks offline_callbacks = { + .find_debuginfo = dwfl_standard_find_debuginfo, + .debuginfo_path = &debuginfo_path, + .section_address = dwfl_offline_section_address, +}; + +static int __report_module(struct addr_location *al, u64 ip, + struct unwind_info *ui) +{ + Dwfl_Module *mod; + struct dso *dso = NULL; + + thread__find_addr_location(ui->thread, ui->machine, + PERF_RECORD_MISC_USER, + MAP__FUNCTION, ip, al); + + if (al->map) + dso = al->map->dso; + + if (!dso) + return 0; + + mod = dwfl_addrmodule(ui->dwfl, ip); + if (!mod) + mod = dwfl_report_elf(ui->dwfl, dso->short_name, + dso->long_name, -1, al->map->start, + false); + + return mod && dwfl_addrmodule(ui->dwfl, ip) == mod ? 
0 : -1; +} + +static int report_module(u64 ip, struct unwind_info *ui) +{ + struct addr_location al; + + return __report_module(&al, ip, ui); +} + +static int entry(u64 ip, struct unwind_info *ui) + +{ + struct unwind_entry e; + struct addr_location al; + + if (__report_module(&al, ip, ui)) + return -1; + + e.ip = ip; + e.map = al.map; + e.sym = al.sym; + + pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n", + al.sym ? al.sym->name : "''", + ip, + al.map ? al.map->map_ip(al.map, ip) : (u64) 0); + + return ui->cb(&e, ui->arg); +} + +static pid_t next_thread(Dwfl *dwfl, void *arg, void **thread_argp) +{ + /* We want only a single thread to be processed. */ + if (*thread_argp != NULL) + return 0; + + *thread_argp = arg; + return dwfl_pid(dwfl); } + +static int access_dso_mem(struct unwind_info *ui, Dwarf_Addr addr, + Dwarf_Word *data) +{ + struct addr_location al; + ssize_t size; + + thread__find_addr_map(ui->thread, ui->machine, PERF_RECORD_MISC_USER, + MAP__FUNCTION, addr, &al); + if (!al.map) { + pr_debug("unwind: no map for %lx\n", (unsigned long)addr); + return -1; + } + + if (!al.map->dso) + return -1; + + size = dso__data_read_addr(al.map->dso, al.map, ui->machine, + addr, (u8 *) data, sizeof(*data)); + + return !(size == sizeof(*data)); +} + +static bool memory_read(Dwfl *dwfl __maybe_unused, Dwarf_Addr addr, Dwarf_Word *result, + void *arg) +{ + struct unwind_info *ui = arg; + struct stack_dump *stack = &ui->sample->user_stack; + u64 start, end; + int offset; + int ret; + + ret = perf_reg_value(&start, &ui->sample->user_regs, PERF_REG_SP); + if (ret) + return false; + + end = start + stack->size; + + /* Check overflow. */ + if (addr + sizeof(Dwarf_Word) < addr) + return false; + + if (addr < start || addr + sizeof(Dwarf_Word) > end) { + ret = access_dso_mem(ui, addr, result); + if (ret) { + pr_debug("unwind: access_mem 0x%" PRIx64 " not inside range" + " 0x%" PRIx64 "-0x%" PRIx64 "\n", + addr, start, end); + return false; + } + return true; + } + + offset = addr - start; + *result = *(Dwarf_Word *)&stack->data[offset]; + pr_debug("unwind: access_mem addr 0x%" PRIx64 ", val %lx, offset %d\n", + addr, (unsigned long)*result, offset); + return true; +} + +static const Dwfl_Thread_Callbacks callbacks = { + .next_thread = next_thread, + .memory_read = memory_read, + .set_initial_registers = libdw__arch_set_initial_registers, +}; + +static int +frame_callback(Dwfl_Frame *state, void *arg) +{ + struct unwind_info *ui = arg; + Dwarf_Addr pc; + + if (!dwfl_frame_pc(state, &pc, NULL)) { + pr_err("%s", dwfl_errmsg(-1)); + return DWARF_CB_ABORT; + } + + return entry(pc, ui) || !(--ui->max_stack) ? 
+ DWARF_CB_ABORT : DWARF_CB_OK; +} + +int unwind__get_entries(unwind_entry_cb_t cb, void *arg, + struct machine *machine, struct thread *thread, + struct perf_sample *data, + int max_stack) +{ + struct unwind_info ui = { + .sample = data, + .thread = thread, + .machine = machine, + .cb = cb, + .arg = arg, + .max_stack = max_stack, + }; + Dwarf_Word ip; + int err = -EINVAL; + + if (!data->user_regs.regs) + return -EINVAL; + + ui.dwfl = dwfl_begin(&offline_callbacks); + if (!ui.dwfl) + goto out; + + err = perf_reg_value(&ip, &data->user_regs, PERF_REG_IP); + if (err) + goto out; + + err = report_module(ip, &ui); + if (err) + goto out; + + if (!dwfl_attach_state(ui.dwfl, EM_NONE, thread->tid, &callbacks, &ui)) + goto out; + + err = dwfl_getthread_frames(ui.dwfl, thread->tid, frame_callback, &ui); + + if (err && !ui.max_stack) + err = 0; + + out: + if (err) + pr_debug("unwind: failed with '%s'\n", dwfl_errmsg(-1)); + + dwfl_end(ui.dwfl); + return 0; +} diff --git a/tools/perf/util/unwind-libdw.h b/tools/perf/util/unwind-libdw.h new file mode 100644 index 000000000000..417a1426f3ad --- /dev/null +++ b/tools/perf/util/unwind-libdw.h @@ -0,0 +1,21 @@ +#ifndef __PERF_UNWIND_LIBDW_H +#define __PERF_UNWIND_LIBDW_H + +#include <elfutils/libdwfl.h> +#include "event.h" +#include "thread.h" +#include "unwind.h" + +bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg); + +struct unwind_info { + Dwfl *dwfl; + struct perf_sample *sample; + struct machine *machine; + struct thread *thread; + unwind_entry_cb_t cb; + void *arg; + int max_stack; +}; + +#endif /* __PERF_UNWIND_LIBDW_H */ diff --git a/tools/perf/util/unwind.c b/tools/perf/util/unwind-libunwind.c index 742f23bf35ff..bd5768d74f01 100644 --- a/tools/perf/util/unwind.c +++ b/tools/perf/util/unwind-libunwind.c @@ -86,7 +86,6 @@ struct unwind_info { struct perf_sample *sample; struct machine *machine; struct thread *thread; - u64 sample_uregs; }; #define dw_read(ptr, type, end) ({ \ @@ -391,30 +390,13 @@ static int access_dso_mem(struct unwind_info *ui, unw_word_t addr, return !(size == sizeof(*data)); } -static int reg_value(unw_word_t *valp, struct regs_dump *regs, int id, - u64 sample_regs) -{ - int i, idx = 0; - - if (!(sample_regs & (1 << id))) - return -EINVAL; - - for (i = 0; i < id; i++) { - if (sample_regs & (1 << i)) - idx++; - } - - *valp = regs->regs[idx]; - return 0; -} - static int access_mem(unw_addr_space_t __maybe_unused as, unw_word_t addr, unw_word_t *valp, int __write, void *arg) { struct unwind_info *ui = arg; struct stack_dump *stack = &ui->sample->user_stack; - unw_word_t start, end; + u64 start, end; int offset; int ret; @@ -424,8 +406,7 @@ static int access_mem(unw_addr_space_t __maybe_unused as, return 0; } - ret = reg_value(&start, &ui->sample->user_regs, PERF_REG_SP, - ui->sample_uregs); + ret = perf_reg_value(&start, &ui->sample->user_regs, PERF_REG_SP); if (ret) return ret; @@ -438,8 +419,9 @@ static int access_mem(unw_addr_space_t __maybe_unused as, if (addr < start || addr + sizeof(unw_word_t) >= end) { ret = access_dso_mem(ui, addr, valp); if (ret) { - pr_debug("unwind: access_mem %p not inside range %p-%p\n", - (void *)addr, (void *)start, (void *)end); + pr_debug("unwind: access_mem %p not inside range" + " 0x%" PRIx64 "-0x%" PRIx64 "\n", + (void *) addr, start, end); *valp = 0; return ret; } @@ -448,8 +430,8 @@ static int access_mem(unw_addr_space_t __maybe_unused as, offset = addr - start; *valp = *(unw_word_t *)&stack->data[offset]; - pr_debug("unwind: access_mem addr %p, val %lx, offset %d\n", - 
(void *)addr, (unsigned long)*valp, offset); + pr_debug("unwind: access_mem addr %p val %lx, offset %d\n", + (void *) addr, (unsigned long)*valp, offset); return 0; } @@ -459,6 +441,7 @@ static int access_reg(unw_addr_space_t __maybe_unused as, { struct unwind_info *ui = arg; int id, ret; + u64 val; /* Don't support write, I suspect we don't need it. */ if (__write) { @@ -471,16 +454,17 @@ static int access_reg(unw_addr_space_t __maybe_unused as, return 0; } - id = unwind__arch_reg_id(regnum); + id = libunwind__arch_reg_id(regnum); if (id < 0) return -EINVAL; - ret = reg_value(valp, &ui->sample->user_regs, id, ui->sample_uregs); + ret = perf_reg_value(&val, &ui->sample->user_regs, id); if (ret) { pr_err("unwind: can't read reg %d\n", regnum); return ret; } + *valp = (unw_word_t) val; pr_debug("unwind: reg %d, val %lx\n", regnum, (unsigned long)*valp); return 0; } @@ -563,7 +547,7 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb, unw_word_t ip; unw_get_reg(&c, UNW_REG_IP, &ip); - ret = entry(ip, ui->thread, ui->machine, cb, arg); + ret = ip ? entry(ip, ui->thread, ui->machine, cb, arg) : 0; } unw_destroy_addr_space(addr_space); @@ -572,13 +556,11 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb, int unwind__get_entries(unwind_entry_cb_t cb, void *arg, struct machine *machine, struct thread *thread, - u64 sample_uregs, struct perf_sample *data, - int max_stack) + struct perf_sample *data, int max_stack) { - unw_word_t ip; + u64 ip; struct unwind_info ui = { .sample = data, - .sample_uregs = sample_uregs, .thread = thread, .machine = machine, }; @@ -587,7 +569,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg, if (!data->user_regs.regs) return -EINVAL; - ret = reg_value(&ip, &data->user_regs, PERF_REG_IP, sample_uregs); + ret = perf_reg_value(&ip, &data->user_regs, PERF_REG_IP); if (ret) return ret; @@ -595,5 +577,5 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg, if (ret) return -ENOMEM; - return get_entries(&ui, cb, arg, max_stack); + return --max_stack > 0 ? 
get_entries(&ui, cb, arg, max_stack) : 0; } diff --git a/tools/perf/util/unwind.h b/tools/perf/util/unwind.h index d5966f49e22c..b031316f221a 100644 --- a/tools/perf/util/unwind.h +++ b/tools/perf/util/unwind.h @@ -13,24 +13,25 @@ struct unwind_entry { typedef int (*unwind_entry_cb_t)(struct unwind_entry *entry, void *arg); -#ifdef HAVE_LIBUNWIND_SUPPORT +#ifdef HAVE_DWARF_UNWIND_SUPPORT int unwind__get_entries(unwind_entry_cb_t cb, void *arg, struct machine *machine, struct thread *thread, - u64 sample_uregs, struct perf_sample *data, int max_stack); -int unwind__arch_reg_id(int regnum); +/* libunwind specific */ +#ifdef HAVE_LIBUNWIND_SUPPORT +int libunwind__arch_reg_id(int regnum); +#endif #else static inline int unwind__get_entries(unwind_entry_cb_t cb __maybe_unused, void *arg __maybe_unused, struct machine *machine __maybe_unused, struct thread *thread __maybe_unused, - u64 sample_uregs __maybe_unused, struct perf_sample *data __maybe_unused, int max_stack __maybe_unused) { return 0; } -#endif /* HAVE_LIBUNWIND_SUPPORT */ +#endif /* HAVE_DWARF_UNWIND_SUPPORT */ #endif /* __UNWIND_H */ diff --git a/tools/perf/util/util.c b/tools/perf/util/util.c index 42ad667bb317..9f66549562bd 100644 --- a/tools/perf/util/util.c +++ b/tools/perf/util/util.c @@ -1,6 +1,6 @@ #include "../perf.h" #include "util.h" -#include "fs.h" +#include <api/fs/fs.h> #include <sys/mman.h> #ifdef HAVE_BACKTRACE_SUPPORT #include <execinfo.h> diff --git a/tools/testing/selftests/ipc/msgque.c b/tools/testing/selftests/ipc/msgque.c index d66418237d21..aa290c0de6f5 100644 --- a/tools/testing/selftests/ipc/msgque.c +++ b/tools/testing/selftests/ipc/msgque.c @@ -201,6 +201,7 @@ int main(int argc, char **argv) msgque.msq_id = msgget(msgque.key, IPC_CREAT | IPC_EXCL | 0666); if (msgque.msq_id == -1) { + err = -errno; printf("Can't create queue\n"); goto err_out; } diff --git a/tools/testing/selftests/rcutorture/bin/functions.sh b/tools/testing/selftests/rcutorture/bin/functions.sh index 587561d7c035..9b17e810ddc3 100644 --- a/tools/testing/selftests/rcutorture/bin/functions.sh +++ b/tools/testing/selftests/rcutorture/bin/functions.sh @@ -96,6 +96,7 @@ identify_qemu () { echo qemu-system-ppc64 else echo Cannot figure out what qemu command to use! 1>&2 + echo file $1 output: $u # Usually this will be one of /usr/bin/qemu-system-* # Use RCU_QEMU_CMD environment variable or appropriate # argument to top-level script. diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh new file mode 100755 index 000000000000..829186e19eb1 --- /dev/null +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh @@ -0,0 +1,51 @@ +#!/bin/bash +# +# Analyze a given results directory for locktorture progress. +# +# Usage: sh kvm-recheck-lock.sh resdir +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, you can access it online at +# http://www.gnu.org/licenses/gpl-2.0.html. 
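Both backends are now reached through the single unwind__get_entries() prototype guarded by HAVE_DWARF_UNWIND_SUPPORT above: the caller supplies a callback that is invoked once per resolved frame, and a non-zero return from the callback stops the walk. A sketch of a caller under that contract, with invented function names, assuming sample/thread/machine come from ordinary event processing:

#include <inttypes.h>
#include "debug.h"
#include "unwind.h"

/* Hypothetical callback: count and print each resolved frame. */
static int print_frame(struct unwind_entry *entry, void *arg)
{
	int *count = arg;

	pr_debug("frame %d: ip 0x%" PRIx64 " (%s)\n", (*count)++, entry->ip,
		 entry->sym ? entry->sym->name : "??");
	return 0;	/* non-zero would abort the unwind */
}

static void dump_user_callchain(struct machine *machine, struct thread *thread,
				struct perf_sample *sample)
{
	int count = 0;

	/* Depth is bounded by the final max_stack argument. */
	unwind__get_entries(print_frame, &count, machine, thread, sample, 128);
}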
+# +# Copyright (C) IBM Corporation, 2014 +# +# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> + +i="$1" +if test -d $i +then + : +else + echo Unreadable results directory: $i + exit 1 +fi + +configfile=`echo $i | sed -e 's/^.*\///'` +ncs=`grep "Writes: Total:" $i/console.log 2> /dev/null | tail -1 | sed -e 's/^.* Total: //' -e 's/ .*$//'` +if test -z "$ncs" +then + echo $configfile +else + title="$configfile ------- $ncs acquisitions/releases" + dur=`sed -e 's/^.* locktorture.shutdown_secs=//' -e 's/ .*$//' < $i/qemu-cmd 2> /dev/null` + if test -z "$dur" + then + : + else + ncsps=`awk -v ncs=$ncs -v dur=$dur ' + BEGIN { print ncs / dur }' < /dev/null` + title="$title ($ncsps per second)" + fi + echo $title +fi diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh new file mode 100755 index 000000000000..d75b1dc5ae53 --- /dev/null +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh @@ -0,0 +1,51 @@ +#!/bin/bash +# +# Analyze a given results directory for rcutorture progress. +# +# Usage: sh kvm-recheck-rcu.sh resdir +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, you can access it online at +# http://www.gnu.org/licenses/gpl-2.0.html. +# +# Copyright (C) IBM Corporation, 2014 +# +# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> + +i="$1" +if test -d $i +then + : +else + echo Unreadable results directory: $i + exit 1 +fi + +configfile=`echo $i | sed -e 's/^.*\///'` +ngps=`grep ver: $i/console.log 2> /dev/null | tail -1 | sed -e 's/^.* ver: //' -e 's/ .*$//'` +if test -z "$ngps" +then + echo $configfile +else + title="$configfile ------- $ngps grace periods" + dur=`sed -e 's/^.* rcutorture.shutdown_secs=//' -e 's/ .*$//' < $i/qemu-cmd 2> /dev/null` + if test -z "$dur" + then + : + else + ngpsps=`awk -v ngps=$ngps -v dur=$dur ' + BEGIN { print ngps / dur }' < /dev/null` + title="$title ($ngpsps per second)" + fi + echo $title +fi diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh index baef09f3469b..a44daaa259a9 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh @@ -1,6 +1,6 @@ #!/bin/bash # -# Given the results directories for previous KVM runs of rcutorture, +# Given the results directories for previous KVM-based torture runs, # check the build and console output for errors. Given a directory # containing results directories, this recursively checks them all. 
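The two per-suite recheck scripts above boil down to the same arithmetic: a progress total scraped from console.log, divided by the shutdown_secs value scraped from qemu-cmd, with the per-second rate omitted whenever the duration cannot be recovered. The same guard sketched in C (print_progress is an invented name):

#include <stdio.h>

/* Print "<total> <what>" and append the rate only when the run
 * duration was recoverable, mirroring the shell scripts above. */
static void print_progress(const char *what, long total, long dur_seconds)
{
	if (dur_seconds > 0)
		printf("%ld %s (%g per second)\n", total, what,
		       (double)total / dur_seconds);
	else
		printf("%ld %s\n", total, what);
}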
# @@ -27,11 +27,18 @@ PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH for rd in "$@" do + firsttime=1 dirs=`find $rd -name Make.defconfig.out -print | sort | sed -e 's,/[^/]*$,,' | sort -u` for i in $dirs do - configfile=`echo $i | sed -e 's/^.*\///'` - echo $configfile + if test -n "$firsttime" + then + firsttime="" + resdir=`echo $i | sed -e 's,/$,,' -e 's,/[^/]*$,,'` + head -1 $resdir/log + fi + TORTURE_SUITE="`cat $i/../TORTURE_SUITE`" + kvm-recheck-${TORTURE_SUITE}.sh $i configcheck.sh $i/.config $i/ConfigFragment parse-build.sh $i/Make.out $configfile parse-rcutorture.sh $i/console.log $configfile diff --git a/tools/testing/selftests/rcutorture/bin/kvm-test-1-rcu.sh b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh index 151b23788935..94b28bb37d36 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-test-1-rcu.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh @@ -6,15 +6,15 @@ # Execute this in the source tree. Do not run it as a background task # because qemu does not seem to like that much. # -# Usage: sh kvm-test-1-rcu.sh config builddir resdir minutes qemu-args bootargs +# Usage: sh kvm-test-1-run.sh config builddir resdir minutes qemu-args boot_args # -# qemu-args defaults to "" -- you will want "-nographic" if running headless. -# bootargs defaults to "root=/dev/sda noapic selinux=0 console=ttyS0" -# "initcall_debug debug rcutorture.stat_interval=15" -# "rcutorture.shutdown_secs=$((minutes * 60))" -# "rcutorture.rcutorture_runnable=1" +# qemu-args defaults to "-nographic", along with arguments specifying the +# number of CPUs and other options generated from +# the underlying CPU architecture. +# boot_args defaults to value returned by the per_version_boot_params +# shell function. # -# Anything you specify for either qemu-args or bootargs is appended to +# Anything you specify for either qemu-args or boot_args is appended to # the default values. The "-smp" value is deduced from the contents of # the config fragment. # @@ -40,32 +40,34 @@ grace=120 -T=/tmp/kvm-test-1-rcu.sh.$$ +T=/tmp/kvm-test-1-run.sh.$$ trap 'rm -rf $T' 0 . $KVM/bin/functions.sh . $KVPATH/ver_functions.sh config_template=${1} +config_dir=`echo $config_template | sed -e 's,/[^/]*$,,'` title=`echo $config_template | sed -e 's/^.*\///'` builddir=${2} if test -z "$builddir" -o ! -d "$builddir" -o ! -w "$builddir" then - echo "kvm-test-1-rcu.sh :$builddir: Not a writable directory, cannot build into it" + echo "kvm-test-1-run.sh :$builddir: Not a writable directory, cannot build into it" exit 1 fi resdir=${3} if test -z "$resdir" -o ! -d "$resdir" -o ! -w "$resdir" then - echo "kvm-test-1-rcu.sh :$resdir: Not a writable directory, cannot build into it" + echo "kvm-test-1-run.sh :$resdir: Not a writable directory, cannot store results into it" exit 1 fi cp $config_template $resdir/ConfigFragment echo ' ---' `date`: Starting build echo ' ---' Kconfig fragment at: $config_template >> $resdir/log -cat << '___EOF___' >> $T -CONFIG_RCU_TORTURE_TEST=y -___EOF___ +if test -r "$config_dir/CFcommon" +then + cat < $config_dir/CFcommon >> $T +fi # Optimizations below this point # CONFIG_USB=n # CONFIG_SECURITY=n @@ -96,11 +98,23 @@ then cp $builddir/.config $resdir cp $builddir/arch/x86/boot/bzImage $resdir parse-build.sh $resdir/Make.out $title + if test -f $builddir.wait + then + mv $builddir.wait $builddir.ready + fi else cp $builddir/Make*.out $resdir echo Build failed, not running KVM, see $resdir. 
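The $builddir.wait/$builddir.ready files introduced here form a small file-based rendezvous with kvm.sh: the per-run script renames .wait to .ready when its (serialized) build phase ends, then stalls until the coordinator unlinks .ready so all kernels boot together. A sketch of the builder's side in C, using only POSIX calls and invented names:

#include <stdio.h>
#include <unistd.h>

/* Signal "build done" by renaming the .wait flag to .ready, then
 * poll until the coordinator deletes .ready to release the run. */
static int build_done_then_wait(const char *wait_flag, const char *ready_flag)
{
	if (rename(wait_flag, ready_flag))	/* .wait -> .ready */
		return -1;
	while (access(ready_flag, F_OK) == 0)	/* deleted by coordinator */
		sleep(1);
	return 0;				/* released: start the run */
}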
+ if test -f $builddir.wait + then + mv $builddir.wait $builddir.ready + fi exit 1 fi +while test -f $builddir.ready +do + sleep 1 +done minutes=$4 seconds=$(($minutes * 60)) qemu_args=$5 @@ -111,9 +125,10 @@ kstarttime=`awk 'BEGIN { print systime() }' < /dev/null` echo ' ---' `date`: Starting kernel # Determine the appropriate flavor of qemu command. -QEMU="`identify_qemu $builddir/vmlinux.o`" +QEMU="`identify_qemu $builddir/vmlinux`" # Generate -smp qemu argument. +qemu_args="-nographic $qemu_args" cpu_count=`configNR_CPUS.sh $config_template` vcpus=`identify_qemu_vcpus` if test $cpu_count -gt $vcpus @@ -133,12 +148,8 @@ qemu_append="`identify_qemu_append "$QEMU"`" # Pull in Kconfig-fragment boot parameters boot_args="`configfrag_boot_params "$boot_args" "$config_template"`" -# Generate CPU-hotplug boot parameters -boot_args="`rcutorture_param_onoff "$boot_args" $builddir/.config`" -# Generate rcu_barrier() boot parameter -boot_args="`rcutorture_param_n_barrier_cbs "$boot_args"`" -# Pull in standard rcutorture boot arguments -boot_args="$boot_args rcutorture.stat_interval=15 rcutorture.shutdown_secs=$seconds rcutorture.rcutorture_runnable=1" +# Generate kernel-version-specific boot parameters +boot_args="`per_version_boot_params "$boot_args" $builddir/.config $seconds`" echo $QEMU $qemu_args -m 512 -kernel $builddir/arch/x86/boot/bzImage -append \"$qemu_append $boot_args\" > $resdir/qemu-cmd if test -n "$RCU_BUILDONLY" @@ -188,5 +199,5 @@ then fi cp $builddir/console.log $resdir -parse-rcutorture.sh $resdir/console.log $title +parse-${TORTURE_SUITE}torture.sh $resdir/console.log $title parse-console.sh $resdir/console.log $title diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh index 1b7923bf6a70..5a78cbf55f06 100644 --- a/tools/testing/selftests/rcutorture/bin/kvm.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm.sh @@ -30,14 +30,21 @@ scriptname=$0 args="$*" +T=/tmp/kvm.sh.$$ +trap 'rm -rf $T' 0 +mkdir $T + dur=30 +dryrun="" KVM="`pwd`/tools/testing/selftests/rcutorture"; export KVM PATH=${KVM}/bin:$PATH; export PATH builddir="${KVM}/b1" RCU_INITRD="$KVM/initrd"; export RCU_INITRD RCU_KMAKE_ARG=""; export RCU_KMAKE_ARG +TORTURE_SUITE=rcu resdir="" configs="" +cpus=0 ds=`date +%Y.%m.%d-%H:%M:%S` kversion="" @@ -49,7 +56,9 @@ usage () { echo " --builddir absolute-pathname" echo " --buildonly" echo " --configs \"config-file list\"" + echo " --cpus N" echo " --datestamp string" + echo " --dryrun sched|script" echo " --duration minutes" echo " --interactive" echo " --kmake-arg kernel-make-arguments" @@ -58,8 +67,9 @@ usage () { echo " --no-initrd" echo " --qemu-args qemu-system-..." echo " --qemu-cmd qemu-system-..." 
- echo " --results absolute-pathname" echo " --relbuilddir relative-pathname" + echo " --results absolute-pathname" + echo " --torture rcu" exit 1 } @@ -85,11 +95,21 @@ do configs="$2" shift ;; + --cpus) + checkarg --cpus "(number)" "$#" "$2" '^[0-9]*$' '^--' + cpus=$2 + shift + ;; --datestamp) checkarg --datestamp "(relative pathname)" "$#" "$2" '^[^/]*$' '^--' ds=$2 shift ;; + --dryrun) + checkarg --dryrun "sched|script" $# "$2" 'sched\|script' '^--' + dryrun=$2 + shift + ;; --duration) checkarg --duration "(minutes)" $# "$2" '^[0-9]*$' '^error' dur=$2 @@ -138,6 +158,11 @@ do resdir=$2 shift ;; + --torture) + checkarg --torture "(suite name)" "$#" "$2" '^\(lock\|rcu\)$' '^--' + TORTURE_SUITE=$2 + shift + ;; *) echo Unknown argument $1 usage @@ -146,7 +171,7 @@ do shift done -CONFIGFRAG=${KVM}/configs; export CONFIGFRAG +CONFIGFRAG=${KVM}/configs/${TORTURE_SUITE}; export CONFIGFRAG KVPATH=${CONFIGFRAG}/$kversion; export KVPATH if test -z "$configs" @@ -157,54 +182,231 @@ fi if test -z "$resdir" then resdir=$KVM/res - if ! test -e $resdir - then - mkdir $resdir || : - fi -else +fi + +if test "$dryrun" = "" +then if ! test -e $resdir then mkdir -p "$resdir" || : fi -fi -mkdir $resdir/$ds -touch $resdir/$ds/log -echo $scriptname $args >> $resdir/$ds/log + mkdir $resdir/$ds -pwd > $resdir/$ds/testid.txt -if test -d .git -then - git status >> $resdir/$ds/testid.txt - git rev-parse HEAD >> $resdir/$ds/testid.txt -fi -builddir=$KVM/b1 -if ! test -e $builddir -then - mkdir $builddir || : + # Be noisy only if running the script. + echo Results directory: $resdir/$ds + echo $scriptname $args + + touch $resdir/$ds/log + echo $scriptname $args >> $resdir/$ds/log + echo ${TORTURE_SUITE} > $resdir/$ds/TORTURE_SUITE + + pwd > $resdir/$ds/testid.txt + if test -d .git + then + git status >> $resdir/$ds/testid.txt + git rev-parse HEAD >> $resdir/$ds/testid.txt + fi fi +# Create a file of test-name/#cpus pairs, sorted by decreasing #cpus. +touch $T/cfgcpu for CF in $configs do - # Running TREE01 multiple times creates TREE01, TREE01.2, TREE01.3, ... - rd=$resdir/$ds/$CF - if test -d "${rd}" + if test -f "$CONFIGFRAG/$kversion/$CF" then - n="`ls -d "${rd}"* | grep '\.[0-9]\+$' | - sed -e 's/^.*\.\([0-9]\+\)/\1/' | - sort -k1n | tail -1`" - if test -z "$n" - then - rd="${rd}.2" - else - n="`expr $n + 1`" - rd="${rd}.${n}" - fi + echo $CF `configNR_CPUS.sh $CONFIGFRAG/$kversion/$CF` >> $T/cfgcpu + else + echo "The --configs file $CF does not exist, terminating." + exit 1 fi - mkdir "${rd}" - echo Results directory: $rd - kvm-test-1-rcu.sh $CONFIGFRAG/$kversion/$CF $builddir $rd $dur "-nographic $RCU_QEMU_ARG" "rcutorture.test_no_idle_hz=1 rcutorture.verbose=1 $RCU_BOOTARGS" done +sort -k2nr $T/cfgcpu > $T/cfgcpu.sort + +# Use a greedy bin-packing algorithm, sorting the list accordingly. +awk < $T/cfgcpu.sort > $T/cfgcpu.pack -v ncpus=$cpus ' +BEGIN { + njobs = 0; +} + +{ + # Read file of tests and corresponding required numbers of CPUs. + cf[njobs] = $1; + cpus[njobs] = $2; + njobs++; +} + +END { + alldone = 0; + batch = 0; + nc = -1; + + # Each pass through the following loop creates on test batch + # that can be executed concurrently given ncpus. Note that a + # given test that requires more than the available CPUs will run in + # their own batch. Such tests just have to make do with what + # is available. + while (nc != ncpus) { + batch++; + nc = ncpus; + + # Each pass through the following loop considers one + # test for inclusion in the current batch. 
+ for (i = 0; i < njobs; i++) { + if (done[i]) + continue; # Already part of a batch. + if (nc >= cpus[i] || nc == ncpus) { + + # This test fits into the current batch. + done[i] = batch; + nc -= cpus[i]; + if (nc <= 0) + break; # Too-big test in its own batch. + } + } + } + + # Dump out the tests in batch order. + for (b = 1; b <= batch; b++) + for (i = 0; i < njobs; i++) + if (done[i] == b) + print cf[i], cpus[i]; +}' + +# Generate a script to execute the tests in appropriate batches. +cat << ___EOF___ > $T/script +TORTURE_SUITE="$TORTURE_SUITE"; export TORTURE_SUITE +___EOF___ +awk < $T/cfgcpu.pack \ + -v CONFIGDIR="$CONFIGFRAG/$kversion/" \ + -v KVM="$KVM" \ + -v ncpus=$cpus \ + -v rd=$resdir/$ds/ \ + -v dur=$dur \ + -v RCU_QEMU_ARG=$RCU_QEMU_ARG \ + -v RCU_BOOTARGS=$RCU_BOOTARGS \ +'BEGIN { + i = 0; +} + +{ + cf[i] = $1; + cpus[i] = $2; + i++; +} + +# Dump out the scripting required to run one test batch. +function dump(first, pastlast) +{ + print "echo ----Start batch: `date`"; + print "echo ----Start batch: `date` >> " rd "/log"; + jn=1 + for (j = first; j < pastlast; j++) { + builddir=KVM "/b" jn + cpusr[jn] = cpus[j]; + if (cfrep[cf[j]] == "") { + cfr[jn] = cf[j]; + cfrep[cf[j]] = 1; + } else { + cfrep[cf[j]]++; + cfr[jn] = cf[j] "." cfrep[cf[j]]; + } + if (cpusr[jn] > ncpus && ncpus != 0) + ovf = "(!)"; + else + ovf = ""; + print "echo ", cfr[jn], cpusr[jn] ovf ": Starting build. `date`"; + print "echo ", cfr[jn], cpusr[jn] ovf ": Starting build. `date` >> " rd "/log"; + print "rm -f " builddir ".*"; + print "touch " builddir ".wait"; + print "mkdir " builddir " > /dev/null 2>&1 || :"; + print "mkdir " rd cfr[jn] " || :"; + print "kvm-test-1-run.sh " CONFIGDIR cf[j], builddir, rd cfr[jn], dur " \"" RCU_QEMU_ARG "\" \"" RCU_BOOTARGS "\" > " rd cfr[jn] "/kvm-test-1-run.sh.out 2>&1 &" + print "echo ", cfr[jn], cpusr[jn] ovf ": Waiting for build to complete. `date`"; + print "echo ", cfr[jn], cpusr[jn] ovf ": Waiting for build to complete. `date` >> " rd "/log"; + print "while test -f " builddir ".wait" + print "do" + print "\tsleep 1" + print "done" + print "echo ", cfr[jn], cpusr[jn] ovf ": Build complete. `date`"; + print "echo ", cfr[jn], cpusr[jn] ovf ": Build complete. `date` >> " rd "/log"; + jn++; + } + for (j = 1; j < jn; j++) { + builddir=KVM "/b" j + print "rm -f " builddir ".ready" + print "echo ----", cfr[j], cpusr[j] ovf ": Starting kernel. `date`"; + print "echo ----", cfr[j], cpusr[j] ovf ": Starting kernel. `date` >> " rd "/log"; + } + print "wait" + print "echo ---- All kernel runs complete. `date`"; + print "echo ---- All kernel runs complete. `date` >> " rd "/log"; + for (j = 1; j < jn; j++) { + builddir=KVM "/b" j + print "echo ----", cfr[j], cpusr[j] ovf ": Build/run results:"; + print "echo ----", cfr[j], cpusr[j] ovf ": Build/run results: >> " rd "/log"; + print "cat " rd cfr[j] "/kvm-test-1-run.sh.out"; + print "cat " rd cfr[j] "/kvm-test-1-run.sh.out >> " rd "/log"; + } +} + +END { + njobs = i; + nc = ncpus; + first = 0; + + # Each pass through the following loop considers one test. + for (i = 0; i < njobs; i++) { + if (ncpus == 0) { + # Sequential test specified, each test its own batch. + dump(i, i + 1); + first = i; + } else if (nc < cpus[i] && i != 0) { + # Out of CPUs, dump out a batch. + dump(first, i); + first = i; + nc = ncpus; + } + # Account for the CPUs needed by the current test. + nc -= cpus[i]; + } + # Dump the last batch. 
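The first awk program above assigns batches greedily: each pass opens a new batch and sweeps the CPU-sorted list, admitting every still-unplaced test that fits, with an oversized test admitted into an otherwise empty batch so it runs alone; the second awk program then walks the packed list and emits the shell commands for each batch. The packing loop rendered in C, with invented names (batch[] is assumed zero-initialized):

/* Greedy batch packing over tests sorted by decreasing CPU count;
 * batch[i] receives the 1-based batch number of test i. */
static int pack_batches(const int cpus[], int batch[], int njobs, int ncpus)
{
	int placed = 0, b = 0, i;

	while (placed < njobs) {
		int nc = ncpus;	/* CPUs still free in this batch */

		b++;
		for (i = 0; i < njobs; i++) {
			if (batch[i])
				continue;	/* already placed */
			/* nc == ncpus means the batch is empty, so even an
			 * oversized test is admitted and runs by itself. */
			if (nc >= cpus[i] || nc == ncpus) {
				batch[i] = b;
				nc -= cpus[i];
				placed++;
				if (nc <= 0)
					break;	/* batch is full */
			}
		}
	}
	return b;	/* number of batches created */
}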
+ if (ncpus != 0) + dump(first, i); +}' >> $T/script + +if test "$dryrun" = script +then + # Dump out the script, but define the environment variables that + # it needs to run standalone. + echo CONFIGFRAG="$CONFIGFRAG; export CONFIGFRAG" + echo KVM="$KVM; export KVM" + echo KVPATH="$KVPATH; export KVPATH" + echo PATH="$PATH; export PATH" + echo RCU_BUILDONLY="$RCU_BUILDONLY; export RCU_BUILDONLY" + echo RCU_INITRD="$RCU_INITRD; export RCU_INITRD" + echo RCU_KMAKE_ARG="$RCU_KMAKE_ARG; export RCU_KMAKE_ARG" + echo RCU_QEMU_CMD="$RCU_QEMU_CMD; export RCU_QEMU_CMD" + echo RCU_QEMU_INTERACTIVE="$RCU_QEMU_INTERACTIVE; export RCU_QEMU_INTERACTIVE" + echo RCU_QEMU_MAC="$RCU_QEMU_MAC; export RCU_QEMU_MAC" + echo "mkdir -p "$resdir" || :" + echo "mkdir $resdir/$ds" + cat $T/script + exit 0 +elif test "$dryrun" = sched +then + # Extract the test run schedule from the script. + egrep 'Start batch|Starting build\.' $T/script | + sed -e 's/:.*$//' -e 's/^echo //' + exit 0 +else + # Not a dryrun, so run the script. + sh $T/script +fi + # Tracing: trace_event=rcu:rcu_grace_period,rcu:rcu_future_grace_period,rcu:rcu_grace_period_init,rcu:rcu_nocb_wake,rcu:rcu_preempt_task,rcu:rcu_unlock_preempted_task,rcu:rcu_quiescent_state_report,rcu:rcu_fqs,rcu:rcu_callback,rcu:rcu_kfree_callback,rcu:rcu_batch_start,rcu:rcu_invoke_callback,rcu:rcu_invoke_kfree_callback,rcu:rcu_batch_end,rcu:rcu_torture_read,rcu:rcu_barrier +echo +echo echo " --- `date` Test summary:" +echo Results directory: $resdir/$ds kvm-recheck.sh $resdir/$ds diff --git a/tools/testing/selftests/rcutorture/configs/lock/BUSTED b/tools/testing/selftests/rcutorture/configs/lock/BUSTED new file mode 100644 index 000000000000..1d1da1477fc3 --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/lock/BUSTED @@ -0,0 +1,6 @@ +CONFIG_SMP=y +CONFIG_NR_CPUS=4 +CONFIG_HOTPLUG_CPU=y +CONFIG_PREEMPT_NONE=n +CONFIG_PREEMPT_VOLUNTARY=n +CONFIG_PREEMPT=y diff --git a/tools/testing/selftests/rcutorture/configs/lock/BUSTED.boot b/tools/testing/selftests/rcutorture/configs/lock/BUSTED.boot new file mode 100644 index 000000000000..6386c15e9770 --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/lock/BUSTED.boot @@ -0,0 +1 @@ +locktorture.torture_type=lock_busted diff --git a/tools/testing/selftests/rcutorture/configs/lock/CFLIST b/tools/testing/selftests/rcutorture/configs/lock/CFLIST new file mode 100644 index 000000000000..a061b22d1892 --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/lock/CFLIST @@ -0,0 +1 @@ +LOCK01 diff --git a/tools/testing/selftests/rcutorture/configs/lock/CFcommon b/tools/testing/selftests/rcutorture/configs/lock/CFcommon new file mode 100644 index 000000000000..e372dc269254 --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/lock/CFcommon @@ -0,0 +1,2 @@ +CONFIG_LOCK_TORTURE_TEST=y +CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/lock/LOCK01 b/tools/testing/selftests/rcutorture/configs/lock/LOCK01 new file mode 100644 index 000000000000..a9625e3d6cd9 --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/lock/LOCK01 @@ -0,0 +1,6 @@ +CONFIG_SMP=y +CONFIG_NR_CPUS=8 +CONFIG_HOTPLUG_CPU=y +CONFIG_PREEMPT_NONE=n +CONFIG_PREEMPT_VOLUNTARY=n +CONFIG_PREEMPT=y diff --git a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh new file mode 100644 index 000000000000..9746ea1cd6c7 --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh @@ -0,0 +1,43 @@ +#!/bin/bash +# +# 
Kernel-version-dependent shell functions for the rest of the scripts. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, you can access it online at +# http://www.gnu.org/licenses/gpl-2.0.html. +# +# Copyright (C) IBM Corporation, 2014 +# +# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> + +# locktorture_param_onoff bootparam-string config-file +# +# Adds onoff locktorture module parameters to kernels having it. +locktorture_param_onoff () { + if ! bootparam_hotplug_cpu "$1" && configfrag_hotplug_cpu "$2" + then + echo CPU-hotplug kernel, adding locktorture onoff. 1>&2 + echo locktorture.onoff_interval=3 locktorture.onoff_holdoff=30 + fi +} + +# per_version_boot_params bootparam-string config-file seconds +# +# Adds per-version torture-module parameters to kernels supporting them. +per_version_boot_params () { + echo $1 `locktorture_param_onoff "$1" "$2"` \ + locktorture.stat_interval=15 \ + locktorture.shutdown_secs=$3 \ + locktorture.locktorture_runnable=1 \ + locktorture.verbose=1 +} diff --git a/tools/testing/selftests/rcutorture/configs/rcu/BUSTED b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED new file mode 100644 index 000000000000..48d8a245c7fa --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED @@ -0,0 +1,7 @@ +CONFIG_RCU_TRACE=n +CONFIG_SMP=y +CONFIG_NR_CPUS=4 +CONFIG_HOTPLUG_CPU=y +CONFIG_PREEMPT_NONE=n +CONFIG_PREEMPT_VOLUNTARY=n +CONFIG_PREEMPT=y diff --git a/tools/testing/selftests/rcutorture/configs/rcu/BUSTED.boot b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED.boot new file mode 100644 index 000000000000..6804f9dcfc1b --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED.boot @@ -0,0 +1 @@ +rcutorture.torture_type=rcu_busted diff --git a/tools/testing/selftests/rcutorture/configs/CFLIST b/tools/testing/selftests/rcutorture/configs/rcu/CFLIST index cd3d29cb0a47..cd3d29cb0a47 100644 --- a/tools/testing/selftests/rcutorture/configs/CFLIST +++ b/tools/testing/selftests/rcutorture/configs/rcu/CFLIST diff --git a/tools/testing/selftests/rcutorture/configs/rcu/CFcommon b/tools/testing/selftests/rcutorture/configs/rcu/CFcommon new file mode 100644 index 000000000000..d2d2a86139db --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/rcu/CFcommon @@ -0,0 +1,2 @@ +CONFIG_RCU_TORTURE_TEST=y +CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/SRCU-N b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-N index 10a0e27f4c75..9fbb41b9b314 100644 --- a/tools/testing/selftests/rcutorture/configs/SRCU-N +++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-N @@ -1,8 +1,7 @@ CONFIG_RCU_TRACE=n CONFIG_SMP=y -CONFIG_NR_CPUS=8 +CONFIG_NR_CPUS=4 CONFIG_HOTPLUG_CPU=y CONFIG_PREEMPT_NONE=y CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/SRCU-N.boot b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-N.boot index 238bfe3bd0cc..238bfe3bd0cc 100644 --- 
a/tools/testing/selftests/rcutorture/configs/SRCU-N.boot +++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-N.boot diff --git a/tools/testing/selftests/rcutorture/configs/SRCU-P b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P index 6650e00c6d91..4b6f272dba27 100644 --- a/tools/testing/selftests/rcutorture/configs/SRCU-P +++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P @@ -5,4 +5,3 @@ CONFIG_HOTPLUG_CPU=y CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT=y -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/SRCU-P.boot b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot index 238bfe3bd0cc..238bfe3bd0cc 100644 --- a/tools/testing/selftests/rcutorture/configs/SRCU-P.boot +++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot diff --git a/tools/testing/selftests/rcutorture/configs/TINY01 b/tools/testing/selftests/rcutorture/configs/rcu/TINY01 index 0c2823f21712..0a63e073a00c 100644 --- a/tools/testing/selftests/rcutorture/configs/TINY01 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TINY01 @@ -10,4 +10,3 @@ CONFIG_RCU_TRACE=n CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_PREEMPT_COUNT=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TINY02 b/tools/testing/selftests/rcutorture/configs/rcu/TINY02 index e5072d7528b6..f4feaee40776 100644 --- a/tools/testing/selftests/rcutorture/configs/TINY02 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TINY02 @@ -10,4 +10,3 @@ CONFIG_RCU_TRACE=y CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_PREEMPT_COUNT=y -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE01 b/tools/testing/selftests/rcutorture/configs/rcu/TREE01 index 141119a00044..9c827ec59a97 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE01 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE01 @@ -20,4 +20,3 @@ CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_VERBOSE=n CONFIG_RCU_BOOST=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE01.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE01.boot index 0fc8a3428938..0fc8a3428938 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE01.boot +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE01.boot diff --git a/tools/testing/selftests/rcutorture/configs/TREE02 b/tools/testing/selftests/rcutorture/configs/rcu/TREE02 index 2d4d09608528..1a777b5f68b5 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE02 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE02 @@ -7,7 +7,7 @@ CONFIG_PREEMPT=y CONFIG_HZ_PERIODIC=n CONFIG_NO_HZ_IDLE=y CONFIG_NO_HZ_FULL=n -CONFIG_RCU_FAST_NO_HZ=n +CONFIG_RCU_FAST_NO_HZ=n CONFIG_RCU_TRACE=n CONFIG_HOTPLUG_CPU=n CONFIG_SUSPEND=n @@ -23,4 +23,3 @@ CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_VERBOSE=y CONFIG_RCU_BOOST=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE03 b/tools/testing/selftests/rcutorture/configs/rcu/TREE03 index a47de5be8a04..c1f111c1561b 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE03 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE03 @@ -20,4 +20,3 @@ CONFIG_RCU_CPU_STALL_VERBOSE=n CONFIG_RCU_BOOST=y CONFIG_RCU_BOOST_PRIO=2 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE04 b/tools/testing/selftests/rcutorture/configs/rcu/TREE04 index 
8d839b86a1d5..7dbd27ce17a4 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE04 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE04 @@ -22,4 +22,3 @@ CONFIG_PROVE_RCU_DELAY=n CONFIG_RCU_CPU_STALL_INFO=y CONFIG_RCU_CPU_STALL_VERBOSE=y CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE04.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE04.boot index 0fc8a3428938..0fc8a3428938 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE04.boot +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE04.boot diff --git a/tools/testing/selftests/rcutorture/configs/TREE05 b/tools/testing/selftests/rcutorture/configs/rcu/TREE05 index b5ba72ea25cb..d0f32e574743 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE05 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE05 @@ -22,4 +22,3 @@ CONFIG_PROVE_RCU_DELAY=y CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_VERBOSE=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE05.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE05.boot index 3b42b8b033cd..3b42b8b033cd 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE05.boot +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE05.boot diff --git a/tools/testing/selftests/rcutorture/configs/TREE06 b/tools/testing/selftests/rcutorture/configs/rcu/TREE06 index 7c95ab48d29f..2e477dfb9c57 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE06 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE06 @@ -23,4 +23,3 @@ CONFIG_PROVE_RCU_DELAY=n CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_VERBOSE=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=y -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE07 b/tools/testing/selftests/rcutorture/configs/rcu/TREE07 index 1467404bdec1..042f86ef362a 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE07 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE07 @@ -21,4 +21,3 @@ CONFIG_PROVE_RCU_DELAY=n CONFIG_RCU_CPU_STALL_INFO=y CONFIG_RCU_CPU_STALL_VERBOSE=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE08 b/tools/testing/selftests/rcutorture/configs/rcu/TREE08 index 7d097a61ac2a..3438cee1e3c5 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE08 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE08 @@ -23,4 +23,3 @@ CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_VERBOSE=n CONFIG_RCU_BOOST=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE08-T b/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T index 442c4e450ab3..bf4523d3e44c 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE08-T +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T @@ -23,4 +23,3 @@ CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_VERBOSE=n CONFIG_RCU_BOOST=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/TREE09 b/tools/testing/selftests/rcutorture/configs/rcu/TREE09 index 0d1ec0d3dfee..81e4f7c0bf0b 100644 --- a/tools/testing/selftests/rcutorture/configs/TREE09 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE09 @@ -18,4 +18,3 @@ CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_VERBOSE=n CONFIG_RCU_BOOST=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n -CONFIG_PRINTK_TIME=y diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/CFLIST 
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/CFLIST b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/CFLIST
index 18223947bbcb..18223947bbcb 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/CFLIST
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/CFLIST
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/N1-S-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N1-S-T-NH-SD-SMP-HP
index d3ef873eb6e7..d3ef873eb6e7 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/N1-S-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N1-S-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/N2-2-t-nh-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N2-2-t-nh-sd-SMP-hp
index 02e418572b1b..02e418572b1b 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/N2-2-t-nh-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N2-2-t-nh-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/N3-3-T-nh-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N3-3-T-nh-SD-SMP-hp
index b3100f69c8cf..b3100f69c8cf 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/N3-3-T-nh-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N3-3-T-nh-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/N4-A-t-NH-sd-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N4-A-t-NH-sd-SMP-HP
index c56b44530725..c56b44530725 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/N4-A-t-NH-sd-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N4-A-t-NH-sd-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/N5-U-T-NH-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N5-U-T-NH-sd-SMP-hp
index 90d924fea9e9..90d924fea9e9 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/N5-U-T-NH-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/N5-U-T-NH-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/NT1-nh b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/NT1-nh
index 023f312a931c..023f312a931c 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/NT1-nh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/NT1-nh
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/NT3-NH b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/NT3-NH
index 6fd0235dae73..6fd0235dae73 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/NT3-NH
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/NT3-NH
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/P1-S-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P1-S-T-NH-SD-SMP-HP
index f72402d7c13d..f72402d7c13d 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/P1-S-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P1-S-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/P2-2-t-nh-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P2-2-t-nh-sd-SMP-hp
index 0f3b667d2a9f..0f3b667d2a9f 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/P2-2-t-nh-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P2-2-t-nh-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/P3-3-T-nh-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P3-3-T-nh-SD-SMP-hp
index b035e141bf2a..b035e141bf2a 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/P3-3-T-nh-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P3-3-T-nh-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/P4-A-t-NH-sd-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P4-A-t-NH-sd-SMP-HP
index 3ccf6a9447f5..3ccf6a9447f5 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/P4-A-t-NH-sd-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P4-A-t-NH-sd-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/P5-U-T-NH-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P5-U-T-NH-sd-SMP-hp
index ef624ce73d8e..ef624ce73d8e 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/P5-U-T-NH-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/P5-U-T-NH-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/PT1-nh b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/PT1-nh
index e3361c3894a1..e3361c3894a1 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/PT1-nh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/PT1-nh
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/PT2-NH b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/PT2-NH
index 64abfc3b4d94..64abfc3b4d94 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/PT2-NH
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/PT2-NH
diff --git a/tools/testing/selftests/rcutorture/configs/v0.0/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/ver_functions.sh
index e8052539af54..5ace37a89780 100644
--- a/tools/testing/selftests/rcutorture/configs/v0.0/ver_functions.sh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v0.0/ver_functions.sh
@@ -20,16 +20,14 @@
 #
 # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
 
-# rcutorture_param_n_barrier_cbs bootparam-string
-#
-# Adds n_barrier_cbs rcutorture module parameter to kernels having it.
-rcutorture_param_n_barrier_cbs () {
-	echo $1
-}
-
-# rcutorture_param_onoff bootparam-string config-file
-#
-# Adds onoff rcutorture module parameters to kernels having it.
-rcutorture_param_onoff () {
-	echo $1
+# per_version_boot_params bootparam-string config-file seconds
+#
+# Adds per-version torture-module parameters to kernels supporting them,
+# which old kernels do not.
+per_version_boot_params () {
+	echo rcutorture.stat_interval=15 \
+		rcutorture.shutdown_secs=$3 \
+		rcutorture.rcutorture_runnable=1 \
+		rcutorture.test_no_idle_hz=1 \
+		rcutorture.verbose=1
 }
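For these pre-history (v0.0) kernels, the new per_version_boot_params deliberately emits only the handful of module parameters that every rcutorture version understands; the bootparam-string and config-file arguments exist purely for interface compatibility. A usage sketch, assuming the script can be sourced standalone (the real harness sources it from its own driver scripts):

    # Sketch only: exercise the v0.0 helper outside the harness.
    . tools/testing/selftests/rcutorture/configs/rcu/v0.0/ver_functions.sh
    per_version_boot_params "console=ttyS0" /dev/null 120
    # Expected output (one line):
    #   rcutorture.stat_interval=15 rcutorture.shutdown_secs=120
    #   rcutorture.rcutorture_runnable=1 rcutorture.test_no_idle_hz=1 rcutorture.verbose=1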
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/CFLIST b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/CFLIST
index da4cbc668f2a..da4cbc668f2a 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/CFLIST
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/CFLIST
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/N1-S-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N1-S-T-NH-SD-SMP-HP
index d81e11d280aa..d81e11d280aa 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/N1-S-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N1-S-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/N2-2-t-nh-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N2-2-t-nh-sd-SMP-hp
index 02e418572b1b..02e418572b1b 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/N2-2-t-nh-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N2-2-t-nh-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/N3-3-T-nh-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N3-3-T-nh-SD-SMP-hp
index b3100f69c8cf..b3100f69c8cf 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/N3-3-T-nh-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N3-3-T-nh-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/N4-A-t-NH-sd-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N4-A-t-NH-sd-SMP-HP
index c56b44530725..c56b44530725 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/N4-A-t-NH-sd-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N4-A-t-NH-sd-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/N5-U-T-NH-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N5-U-T-NH-sd-SMP-hp
index 90d924fea9e9..90d924fea9e9 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/N5-U-T-NH-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N5-U-T-NH-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/N6---t-nh-SD-smp-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N6---t-nh-SD-smp-hp
index 0ccc36d72738..0ccc36d72738 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/N6---t-nh-SD-smp-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N6---t-nh-SD-smp-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/N7-4-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N7-4-T-NH-SD-SMP-HP
index 3f640cf84973..3f640cf84973 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/N7-4-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N7-4-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/N8-2-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N8-2-T-NH-SD-SMP-HP
index 285da2dd8ac3..285da2dd8ac3 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/N8-2-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/N8-2-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/NT1-nh b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/NT1-nh
index 023f312a931c..023f312a931c 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/NT1-nh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/NT1-nh
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/NT3-NH b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/NT3-NH
index 6fd0235dae73..6fd0235dae73 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/NT3-NH
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/NT3-NH
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P1-S-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P1-S-T-NH-SD-SMP-HP
index 9647c44cf4b7..9647c44cf4b7 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P1-S-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P1-S-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P2-2-t-nh-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P2-2-t-nh-sd-SMP-hp
index 0f3b667d2a9f..0f3b667d2a9f 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P2-2-t-nh-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P2-2-t-nh-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P3-3-T-nh-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P3-3-T-nh-SD-SMP-hp
index b035e141bf2a..b035e141bf2a 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P3-3-T-nh-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P3-3-T-nh-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P4-A-t-NH-sd-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P4-A-t-NH-sd-SMP-HP
index 3ccf6a9447f5..3ccf6a9447f5 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P4-A-t-NH-sd-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P4-A-t-NH-sd-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P5-U-T-NH-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P5-U-T-NH-sd-SMP-hp
index ef624ce73d8e..ef624ce73d8e 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P5-U-T-NH-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P5-U-T-NH-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P6---t-nh-SD-smp-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P6---t-nh-SD-smp-hp
index f4c9175828bf..f4c9175828bf 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P6---t-nh-SD-smp-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P6---t-nh-SD-smp-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P7-4-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P7-4-T-NH-SD-SMP-HP
index 77a8c5b75763..77a8c5b75763 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P7-4-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P7-4-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P7-4-T-NH-SD-SMP-HP-all b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P7-4-T-NH-SD-SMP-HP-all
index 0eecebc6e95f..0eecebc6e95f 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P7-4-T-NH-SD-SMP-HP-all
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P7-4-T-NH-SD-SMP-HP-all
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P7-4-T-NH-SD-SMP-HP-none b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P7-4-T-NH-SD-SMP-HP-none
index 0eecebc6e95f..0eecebc6e95f 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P7-4-T-NH-SD-SMP-HP-none
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P7-4-T-NH-SD-SMP-HP-none
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/P7-4-T-NH-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P7-4-T-NH-SD-SMP-hp
index 588bc70420cd..588bc70420cd 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/P7-4-T-NH-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/P7-4-T-NH-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/PT1-nh b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/PT1-nh
index e3361c3894a1..e3361c3894a1 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/PT1-nh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/PT1-nh
diff --git a/tools/testing/selftests/rcutorture/configs/v3.12/PT2-NH b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/PT2-NH
index 64abfc3b4d94..64abfc3b4d94 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.12/PT2-NH
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.12/PT2-NH
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/CFLIST b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/CFLIST
index 18223947bbcb..18223947bbcb 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/CFLIST
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/CFLIST
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/N1-S-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N1-S-T-NH-SD-SMP-HP
index d81e11d280aa..d81e11d280aa 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/N1-S-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N1-S-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/N2-2-t-nh-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N2-2-t-nh-sd-SMP-hp
index 02e418572b1b..02e418572b1b 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/N2-2-t-nh-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N2-2-t-nh-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/N3-3-T-nh-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N3-3-T-nh-SD-SMP-hp
index b3100f69c8cf..b3100f69c8cf 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/N3-3-T-nh-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N3-3-T-nh-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/N4-A-t-NH-sd-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N4-A-t-NH-sd-SMP-HP
index c56b44530725..c56b44530725 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/N4-A-t-NH-sd-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N4-A-t-NH-sd-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/N5-U-T-NH-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N5-U-T-NH-sd-SMP-hp
index 90d924fea9e9..90d924fea9e9 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/N5-U-T-NH-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/N5-U-T-NH-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/NT1-nh b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/NT1-nh
index 023f312a931c..023f312a931c 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/NT1-nh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/NT1-nh
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/NT3-NH b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/NT3-NH
index 6fd0235dae73..6fd0235dae73 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/NT3-NH
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/NT3-NH
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/P1-S-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P1-S-T-NH-SD-SMP-HP
index 9647c44cf4b7..9647c44cf4b7 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/P1-S-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P1-S-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/P2-2-t-nh-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P2-2-t-nh-sd-SMP-hp
index 0f3b667d2a9f..0f3b667d2a9f 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/P2-2-t-nh-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P2-2-t-nh-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/P3-3-T-nh-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P3-3-T-nh-SD-SMP-hp
index b035e141bf2a..b035e141bf2a 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/P3-3-T-nh-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P3-3-T-nh-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/P4-A-t-NH-sd-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P4-A-t-NH-sd-SMP-HP
index 3ccf6a9447f5..3ccf6a9447f5 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/P4-A-t-NH-sd-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P4-A-t-NH-sd-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/P5-U-T-NH-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P5-U-T-NH-sd-SMP-hp
index ef624ce73d8e..ef624ce73d8e 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/P5-U-T-NH-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/P5-U-T-NH-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/PT1-nh b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/PT1-nh
index e3361c3894a1..e3361c3894a1 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/PT1-nh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/PT1-nh
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/PT2-NH b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/PT2-NH
index 64abfc3b4d94..64abfc3b4d94 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/PT2-NH
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/PT2-NH
diff --git a/tools/testing/selftests/rcutorture/configs/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/ver_functions.sh
index 5e40eadea777..bae55692ce6e 100644
--- a/tools/testing/selftests/rcutorture/configs/ver_functions.sh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.3/ver_functions.sh
@@ -20,18 +20,6 @@
 #
 # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
 
-# rcutorture_param_n_barrier_cbs bootparam-string
-#
-# Adds n_barrier_cbs rcutorture module parameter to kernels having it.
-rcutorture_param_n_barrier_cbs () {
-	if echo $1 | grep -q "rcutorture\.n_barrier_cbs"
-	then
-		echo $1
-	else
-		echo $1 rcutorture.n_barrier_cbs=4
-	fi
-}
-
 # rcutorture_param_onoff bootparam-string config-file
 #
 # Adds onoff rcutorture module parameters to kernels having it.
@@ -39,8 +27,18 @@ rcutorture_param_onoff () {
 	if ! bootparam_hotplug_cpu "$1" && configfrag_hotplug_cpu "$2"
 	then
 		echo CPU-hotplug kernel, adding rcutorture onoff. 1>&2
-		echo $1 rcutorture.onoff_interval=3 rcutorture.onoff_holdoff=30
-	else
-		echo $1
+		echo rcutorture.onoff_interval=3 rcutorture.onoff_holdoff=30
 	fi
 }
+
+# per_version_boot_params bootparam-string config-file seconds
+#
+# Adds per-version torture-module parameters to kernels supporting them.
+per_version_boot_params () {
+	echo $1 `rcutorture_param_onoff "$1" "$2"` \
+		rcutorture.stat_interval=15 \
+		rcutorture.shutdown_secs=$3 \
+		rcutorture.rcutorture_runnable=1 \
+		rcutorture.test_no_idle_hz=1 \
+		rcutorture.verbose=1
+}
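With this refactoring, each version-specific helper prints only the parameters it wants appended, sending its diagnostics to stderr via 1>&2 so that chatter cannot leak into the boot line, and per_version_boot_params stitches the pieces together. A rough sketch of the composition follows; bootparam_hotplug_cpu and configfrag_hotplug_cpu are normally supplied by the harness's shared function library, so the stubs below only approximate them:

    # Sketch only: stub the two probes, then build a composed boot line.
    bootparam_hotplug_cpu () { echo "$1" | grep -q "rcutorture\.onoff_"; }
    configfrag_hotplug_cpu () { grep -q '^CONFIG_HOTPLUG_CPU=y' "$1"; }
    echo CONFIG_HOTPLUG_CPU=y > /tmp/config-frag
    . tools/testing/selftests/rcutorture/configs/rcu/v3.3/ver_functions.sh
    per_version_boot_params "root=/dev/sda" /tmp/config-frag 60
    # stdout: root=/dev/sda rcutorture.onoff_interval=3 rcutorture.onoff_holdoff=30
    #         rcutorture.stat_interval=15 rcutorture.shutdown_secs=60 ...
    # stderr: CPU-hotplug kernel, adding rcutorture onoff.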
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/CFLIST b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/CFLIST
index 18223947bbcb..18223947bbcb 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/CFLIST
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/CFLIST
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/N1-S-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N1-S-T-NH-SD-SMP-HP
index d81e11d280aa..d81e11d280aa 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/N1-S-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N1-S-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/N2-2-t-nh-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N2-2-t-nh-sd-SMP-hp
index 02e418572b1b..02e418572b1b 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/N2-2-t-nh-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N2-2-t-nh-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/N3-3-T-nh-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N3-3-T-nh-SD-SMP-hp
index b3100f69c8cf..b3100f69c8cf 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/N3-3-T-nh-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N3-3-T-nh-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/N4-A-t-NH-sd-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N4-A-t-NH-sd-SMP-HP
index c56b44530725..c56b44530725 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/N4-A-t-NH-sd-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N4-A-t-NH-sd-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/N5-U-T-NH-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N5-U-T-NH-sd-SMP-hp
index 90d924fea9e9..90d924fea9e9 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/N5-U-T-NH-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/N5-U-T-NH-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/NT1-nh b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/NT1-nh
index 023f312a931c..023f312a931c 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/NT1-nh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/NT1-nh
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/NT3-NH b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/NT3-NH
index 6fd0235dae73..6fd0235dae73 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/NT3-NH
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/NT3-NH
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/P1-S-T-NH-SD-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P1-S-T-NH-SD-SMP-HP
index 9647c44cf4b7..9647c44cf4b7 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/P1-S-T-NH-SD-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P1-S-T-NH-SD-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/P2-2-t-nh-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P2-2-t-nh-sd-SMP-hp
index 0f3b667d2a9f..0f3b667d2a9f 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/P2-2-t-nh-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P2-2-t-nh-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/P3-3-T-nh-SD-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P3-3-T-nh-SD-SMP-hp
index b035e141bf2a..b035e141bf2a 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/P3-3-T-nh-SD-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P3-3-T-nh-SD-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/P4-A-t-NH-sd-SMP-HP b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P4-A-t-NH-sd-SMP-HP
index 3ccf6a9447f5..3ccf6a9447f5 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/P4-A-t-NH-sd-SMP-HP
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P4-A-t-NH-sd-SMP-HP
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/P5-U-T-NH-sd-SMP-hp b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P5-U-T-NH-sd-SMP-hp
index ef624ce73d8e..ef624ce73d8e 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/P5-U-T-NH-sd-SMP-hp
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/P5-U-T-NH-sd-SMP-hp
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/PT1-nh b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/PT1-nh
index e3361c3894a1..e3361c3894a1 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/PT1-nh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/PT1-nh
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/PT2-NH b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/PT2-NH
index 64abfc3b4d94..64abfc3b4d94 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/PT2-NH
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/PT2-NH
diff --git a/tools/testing/selftests/rcutorture/configs/v3.5/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/ver_functions.sh
index 6a5f13aab44d..8977d8d31b19 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.5/ver_functions.sh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/v3.5/ver_functions.sh
@@ -26,9 +26,9 @@
 rcutorture_param_n_barrier_cbs () {
 	if echo $1 | grep -q "rcutorture\.n_barrier_cbs"
 	then
-		echo $1
+		:
 	else
-		echo $1 rcutorture.n_barrier_cbs=4
+		echo rcutorture.n_barrier_cbs=4
 	fi
 }
 
@@ -38,9 +38,20 @@ rcutorture_param_n_barrier_cbs () {
 rcutorture_param_onoff () {
 	if ! bootparam_hotplug_cpu "$1" && configfrag_hotplug_cpu "$2"
 	then
-		echo CPU-hotplug kernel, adding rcutorture onoff.
-		echo $1 rcutorture.onoff_interval=3 rcutorture.onoff_holdoff=30
-	else
-		echo $1
+		echo CPU-hotplug kernel, adding rcutorture onoff. 1>&2
+		echo rcutorture.onoff_interval=3 rcutorture.onoff_holdoff=30
 	fi
 }
+
+# per_version_boot_params bootparam-string config-file seconds
+#
+# Adds per-version torture-module parameters to kernels supporting them.
+per_version_boot_params () {
+	echo $1 `rcutorture_param_onoff "$1" "$2"` \
+		`rcutorture_param_n_barrier_cbs "$1"` \
+		rcutorture.stat_interval=15 \
+		rcutorture.shutdown_secs=$3 \
+		rcutorture.rcutorture_runnable=1 \
+		rcutorture.test_no_idle_hz=1 \
+		rcutorture.verbose=1
+}
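The v3.5 change swaps echo $1 for the shell no-op : so that a caller-supplied rcutorture.n_barrier_cbs setting is no longer echoed back and duplicated onto the boot line; the helper now emits either nothing or just the default. A sketch of the two cases, with illustrative argument strings:

    # Sketch only: behavior of the reworked helper.
    rcutorture_param_n_barrier_cbs "rcutorture.n_barrier_cbs=2"  # no output; caller's value wins
    rcutorture_param_n_barrier_cbs "console=ttyS0"               # -> rcutorture.n_barrier_cbs=4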
diff --git a/tools/testing/selftests/rcutorture/configs/v3.3/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh
index c37432f3572c..8977d8d31b19 100644
--- a/tools/testing/selftests/rcutorture/configs/v3.3/ver_functions.sh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh
@@ -24,7 +24,12 @@
 #
 # Adds n_barrier_cbs rcutorture module parameter to kernels having it.
 rcutorture_param_n_barrier_cbs () {
-	echo $1
+	if echo $1 | grep -q "rcutorture\.n_barrier_cbs"
+	then
+		:
+	else
+		echo rcutorture.n_barrier_cbs=4
+	fi
 }
 
 # rcutorture_param_onoff bootparam-string config-file
@@ -33,9 +38,20 @@ rcutorture_param_n_barrier_cbs () {
 rcutorture_param_onoff () {
 	if ! bootparam_hotplug_cpu "$1" && configfrag_hotplug_cpu "$2"
 	then
-		echo CPU-hotplug kernel, adding rcutorture onoff.
-		echo $1 rcutorture.onoff_interval=3 rcutorture.onoff_holdoff=30
-	else
-		echo $1
+		echo CPU-hotplug kernel, adding rcutorture onoff. 1>&2
+		echo rcutorture.onoff_interval=3 rcutorture.onoff_holdoff=30
 	fi
 }
+
+# per_version_boot_params bootparam-string config-file seconds
+#
+# Adds per-version torture-module parameters to kernels supporting them.
+per_version_boot_params () {
+	echo $1 `rcutorture_param_onoff "$1" "$2"` \
+		`rcutorture_param_n_barrier_cbs "$1"` \
+		rcutorture.stat_interval=15 \
+		rcutorture.shutdown_secs=$3 \
+		rcutorture.rcutorture_runnable=1 \
+		rcutorture.test_no_idle_hz=1 \
+		rcutorture.verbose=1
+}
diff --git a/tools/testing/selftests/rcutorture/doc/TREE_RCU-Kconfig.txt b/tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
index adbb76cffb49..adbb76cffb49 100644
--- a/tools/testing/selftests/rcutorture/doc/TREE_RCU-Kconfig.txt
+++ b/tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
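The unversioned rcu/ver_functions.sh above now serves current kernels, combining the onoff and n_barrier_cbs helpers into a single boot-line builder. How the result might reach a test guest is sketched below; the variable names and the qemu invocation are a paraphrase of what the rcutorture run scripts do, not code from this patch:

    # Sketch only: feed the composed parameters to a hypothetical test guest.
    kboot="`per_version_boot_params "$RCU_BOOTARGS" /tmp/config-frag 180`"
    qemu-system-x86_64 -nographic -kernel /tmp/bzImage -append "console=ttyS0 $kboot"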