path: root/kernel/trace
Age | Commit message | Author | Files | Lines
2008-07-28 | Merge branch 'linus' into cpus4096 | Ingo Molnar | 1 | -3/+3
Conflicts: kernel/stop_machine.c Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-28 | Merge branch 'linus' into cpus4096 | Ingo Molnar | 3 | -17/+22
2008-07-28 | stop_machine: fix up ftrace.c | Rusty Russell | 1 | -3/+3
Simple conversion. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org>
2008-07-27 | tracing: remove unused variable | Andrea Righi | 1 | -1/+0
Remove the following warning with CONFIG_TRACING=y: kernel/trace/trace.c: In function ‘s_next’: kernel/trace/trace.c:1186: warning: unused variable ‘last_ent’ Signed-off-by: Andrea Righi <righi.andrea@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-26 | Merge branch 'tracing-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 3 | -16/+22
* 'tracing-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  ftrace: fix modular build
  ftrace: disable tracing on acpi idle calls
  ftrace: remove latency-tracer leftover
  ftrace: only trace preempt off with preempt tracer
  ftrace: fix 4d3702b6 (post-v2.6.26): WARNING: at kernel/lockdep.c:2731 check_flags (ftrace)
2008-07-26 | cpumask: change cpumask_of_cpu_ptr to use new cpumask_of_cpu | Mike Travis | 1 | -3/+1
* Replace previous instances of the cpumask_of_cpu_ptr* macros with the new (lvalue capable) generic cpumask_of_cpu(). Signed-off-by: Mike Travis <travis@sgi.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jack Steiner <steiner@sgi.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-26 | ftrace: fix modular build | Ingo Molnar | 1 | -0/+2
fix: ERROR: "start_critical_timings" [drivers/acpi/processor.ko] undefined! ERROR: "stop_critical_timings" [drivers/acpi/processor.ko] undefined! Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-25 | markers: fix sparse integer as NULL pointer warning | Harvey Harrison | 1 | -1/+1
kernel/trace/trace_sysprof.c:164:20: warning: Using plain integer as NULL pointer Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
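A minimal sketch of the class of fix sparse asks for here (the actual statement at trace_sysprof.c:164 is not shown in this log, so the variable below is only an illustration):

	/* before: sparse warns "Using plain integer as NULL pointer" */
	static void *sample_buffer = 0;

	/* after: use NULL for pointer initializers */
	static void *sample_buffer = NULL;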
2008-07-24 | ftrace: remove latency-tracer leftover | Ingo Molnar | 1 | -3/+0
remove the :vim=ft=help tag from trace files. I used them years ago to syntax-highlight traces and forgot about this hack. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-24 | Merge branch 'linus' into tracing/urgent | Ingo Molnar | 1 | -1/+3
2008-07-18 | cpumask: Replace cpumask_of_cpu with cpumask_of_cpu_ptr | Mike Travis | 1 | -1/+3
* This patch replaces the dangerous lvalue version of cpumask_of_cpu with new cpumask_of_cpu_ptr macros. These are patterned after the node_to_cpumask_ptr macros.

In general terms, if there is a cpumask_of_cpu_map[] then a pointer to the cpumask_of_cpu_map[cpu] entry is used. The cpumask_of_cpu_map is provided when there is a large NR_CPUS count, reducing greatly the amount of code generated and stack space used for cpumask_of_cpu(). The pointer to the cpumask_t value is needed for calling set_cpus_allowed_ptr() to reduce the amount of stack space needed to pass the cpumask_t value.

If there isn't a cpumask_of_cpu_map[], then a temporary variable is declared and filled in with the value from cpumask_of_cpu(cpu) as well as a pointer variable pointing to this temporary variable. Afterwards, the pointer is used to reference the cpumask value. The compiler will optimize out the extra dereference through the pointer as well as the stack space used for the pointer, resulting in identical code.

A good example of the orthogonal usages is in net/sunrpc/svc.c:

	case SVC_POOL_PERCPU:
	{
		unsigned int cpu = m->pool_to[pidx];
		cpumask_of_cpu_ptr(cpumask, cpu);

		*oldmask = current->cpus_allowed;
		set_cpus_allowed_ptr(current, cpumask);
		return 1;
	}
	case SVC_POOL_PERNODE:
	{
		unsigned int node = m->pool_to[pidx];
		node_to_cpumask_ptr(nodecpumask, node);

		*oldmask = current->cpus_allowed;
		set_cpus_allowed_ptr(current, nodecpumask);
		return 1;
	}

Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-18 | ftrace: only trace preempt off with preempt tracer | Steven Rostedt | 1 | -2/+4
When PREEMPT_TRACER and IRQSOFF_TRACER are both configured and the irqsoff tracer is running, the preempt_off sections might also be traced. Thanks to Andrew Morton for pointing out my mistake of spin_lock disabling interrupts while he was reviewing ftrace.txt. It seems that the example I used actually hit this bug. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
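A sketch of the guard this implies (the flag names and trace_type variable are assumptions patterned on the irqsoff tracer, not taken from the patch): a preempt-off section is only recorded when the preempt tracer itself is the one that was enabled.

	#include <linux/preempt.h>

	enum { TRACER_IRQS_OFF = 1, TRACER_PREEMPT_OFF = 2 };	/* assumed flag layout */
	static int trace_type;	/* set according to which tracer the user enabled */

	/* Treat this as a traceable preempt-off section only if the preempt
	 * tracer asked for it and preemption really is disabled. */
	static inline int preempt_trace(void)
	{
		return (trace_type & TRACER_PREEMPT_OFF) && preempt_count();
	}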
2008-07-18 | ftrace: fix 4d3702b6 (post-v2.6.26): WARNING: at kernel/lockdep.c:2731 … | Steven Rostedt | 1 | -11/+16
check_flags (ftrace) On Wed, 16 Jul 2008, Vegard Nossum wrote: > When booting 4d3702b6, I got this huge thing: > > Testing tracer wakeup: <4>------------[ cut here ]------------ > WARNING: at kernel/lockdep.c:2731 check_flags+0x123/0x160() > Modules linked in: > Pid: 1, comm: swapper Not tainted 2.6.26-crashing-02127-g4d3702b6 #30 > [<c015c349>] warn_on_slowpath+0x59/0xb0 > [<c01276c6>] ? ftrace_call+0x5/0x8 > [<c012d800>] ? native_read_tsc+0x0/0x20 > [<c0158de2>] ? sub_preempt_count+0x12/0xf0 > [<c01814eb>] ? trace_hardirqs_off+0xb/0x10 > [<c0182fbc>] ? __lock_acquire+0x2cc/0x1120 > [<c01814eb>] ? trace_hardirqs_off+0xb/0x10 > [<c01276af>] ? mcount_call+0x5/0xa > [<c017ff53>] check_flags+0x123/0x160 > [<c0183e61>] lock_acquire+0x51/0xd0 > [<c01276c6>] ? ftrace_call+0x5/0x8 > [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0 > [<c01a8d45>] ? ftrace_record_ip+0xf5/0x220 > [<c02d5413>] ? debug_locks_off+0x3/0x50 > [<c01a8d45>] ftrace_record_ip+0xf5/0x220 > [<c01276af>] mcount_call+0x5/0xa > [<c02d5418>] ? debug_locks_off+0x8/0x50 > [<c017ff27>] check_flags+0xf7/0x160 > [<c0183e61>] lock_acquire+0x51/0xd0 > [<c01276c6>] ? ftrace_call+0x5/0x8 > [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0 > [<c01affcd>] ? wakeup_tracer_call+0x6d/0xf0 > [<c01625e2>] ? _local_bh_enable+0x62/0xb0 > [<c0158ddd>] ? sub_preempt_count+0xd/0xf0 > [<c01affcd>] wakeup_tracer_call+0x6d/0xf0 > [<c0162724>] ? __do_softirq+0xf4/0x110 > [<c01afff1>] ? wakeup_tracer_call+0x91/0xf0 > [<c01276c6>] ftrace_call+0x5/0x8 > [<c0162724>] ? __do_softirq+0xf4/0x110 > [<c0158de2>] ? sub_preempt_count+0x12/0xf0 > [<c01625e2>] _local_bh_enable+0x62/0xb0 > [<c0162724>] __do_softirq+0xf4/0x110 > [<c01627ed>] do_softirq+0xad/0xb0 > [<c0162a15>] irq_exit+0xa5/0xb0 > [<c013a506>] smp_apic_timer_interrupt+0x66/0xa0 > [<c02d3fac>] ? trace_hardirqs_off_thunk+0xc/0x10 > [<c0127449>] apic_timer_interrupt+0x2d/0x34 > [<c018007b>] ? find_usage_backwards+0xb/0xf0 > [<c0613a09>] ? _spin_unlock_irqrestore+0x69/0x80 > [<c014ef32>] tg_shares_up+0x132/0x1d0 > [<c014d2a2>] walk_tg_tree+0x62/0xa0 > [<c014ee00>] ? tg_shares_up+0x0/0x1d0 > [<c014a860>] ? tg_nop+0x0/0x10 > [<c015499d>] update_shares+0x5d/0x80 > [<c0154a2f>] try_to_wake_up+0x6f/0x280 > [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0 > [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0 > [<c0154c94>] wake_up_process+0x14/0x20 > [<c01725f6>] kthread_create+0x66/0xb0 > [<c0195400>] ? do_stop+0x0/0x200 > [<c0195320>] ? __stop_machine_run+0x30/0xb0 > [<c0195340>] __stop_machine_run+0x50/0xb0 > [<c0195400>] ? do_stop+0x0/0x200 > [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0 > [<c061242d>] ? mutex_unlock+0xd/0x10 > [<c01953cc>] stop_machine_run+0x2c/0x60 > [<c01a94d3>] unregister_ftrace_function+0x103/0x180 > [<c01b0517>] stop_wakeup_tracer+0x17/0x60 > [<c01b056f>] wakeup_tracer_ctrl_update+0xf/0x30 > [<c01ab8d5>] trace_selftest_startup_wakeup+0xb5/0x130 > [<c01ab950>] ? trace_wakeup_test_thread+0x0/0x70 > [<c01aadf5>] register_tracer+0x135/0x1b0 > [<c0877d02>] init_wakeup_tracer+0xd/0xf > [<c085d437>] kernel_init+0x1a9/0x2ce > [<c061397b>] ? _spin_unlock_irq+0x3b/0x60 > [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10 > [<c0877cf5>] ? init_wakeup_tracer+0x0/0xf > [<c0182646>] ? trace_hardirqs_on_caller+0x126/0x180 > [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10 > [<c01269c8>] ? restore_nocheck_notrace+0x0/0xe > [<c085d28e>] ? kernel_init+0x0/0x2ce > [<c085d28e>] ? 
kernel_init+0x0/0x2ce > [<c01275fb>] kernel_thread_helper+0x7/0x10 > ======================= > ---[ end trace a7919e7f17c0a725 ]--- > irq event stamp: 579530 > hardirqs last enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10 > hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10 > softirqs last enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110 > softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0 > irq event stamp: 579530 > hardirqs last enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10 > hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10 > softirqs last enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110 > softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0 > PASSED > > Incidentally, the kernel also hung while I was typing in this report. Things get weird between lockdep and ftrace because ftrace can be called within lockdep internal code (via the mcount pointer) and lockdep can be called with ftrace (via spin_locks). Signed-off-by: Steven Rostedt <srostedt@redhat.com> Tested-by: Vegard Nossum <vegard.nossum@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-14 | Merge branch 'tracing/ftrace' into auto-ftrace-next | Ingo Molnar | 6 | -9/+52
2008-07-11 | ftrace: build fix for ftraced_suspend | Ingo Molnar | 1 | -0/+2
fix: kernel/trace/ftrace.c:1615: error: 'ftraced_suspend' undeclared (first use in this function) kernel/trace/ftrace.c:1615: error: (Each undeclared identifier is reported only once kernel/trace/ftrace.c:1615: error: for each function it appears in.) Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-11 | ftrace: separate out the function enabled variable | Steven Rostedt | 1 | -3/+19
Currently the function tracer uses the global tracer_enabled variable to keep track of whether the tracer is enabled or not. The function tracing startup needs to be separated out, otherwise the internal happenings of the tracer startup are also recorded. This patch creates a ftrace_function_enabled variable to allow the starting of the function traces to happen after everything has been started. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-11 | ftrace: add ftrace_kill_atomic | Steven Rostedt | 1 | -0/+15
It has been suggested that I add a way to disable the function tracer on an oops. This code adds ftrace_kill_atomic(). It is not meant to be used in normal situations. It will disable the ftrace tracer, but will not perform the nice shutdown that requires scheduling. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
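A sketch of the intended use (the call site below is an assumption; only ftrace_kill_atomic() itself comes from this commit): an oops path can switch the tracer off without sleeping, unlike the normal unregister path which needs kstop_machine.

	#include <linux/ftrace.h>

	static void example_oops_path(void)	/* hypothetical caller */
	{
		ftrace_kill_atomic();	/* tracing stops; no clean shutdown, no scheduling */
		/* ... continue dumping the oops ... */
	}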
2008-07-11 | ftrace: use current CPU for function startup | Steven Rostedt | 1 | -0/+3
This is more of a clean up. Currently the function tracer initializes the tracer with whichever CPU was last used for tracing. This value isn't really useful for function tracing, but at least it should be something other than a random number. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-11 | ftrace: start wakeup tracing after setting function tracer | Steven Rostedt | 1 | -1/+2
Enabling the wakeup tracer before enabling the function tracing causes some strange results due to the dynamic enabling of the functions. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-11 | ftrace: check proper config for preempt type | Steven Rostedt | 1 | -1/+1
There is no CONFIG_PREEMPT_DESKTOP. Use the proper entry CONFIG_PREEMPT. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-11 | ftrace: define function trace nop | Steven Rostedt | 1 | -2/+8
When CONFIG_FTRACE is not enabled, tracing_start_function_trace and tracing_stop_function_trace should be nops. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
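A sketch of the usual pattern (its placement in a shared header such as trace.h is an assumption): with CONFIG_FTRACE off, the start/stop calls compile away to nothing.

	#ifdef CONFIG_FTRACE
	void tracing_start_function_trace(void);
	void tracing_stop_function_trace(void);
	#else
	static inline void tracing_start_function_trace(void) { }
	static inline void tracing_stop_function_trace(void) { }
	#endif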
2008-07-11 | ftrace: move sched_switch enable after markers | Steven Rostedt | 1 | -2/+2
We have two markers now that are enabled on sched_switch: one records the context switches and the other records task wake ups. Currently we enable the tracing first and then set the markers. This causes some confusing traces:

	# tracer: sched_switch
	#
	#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
	#              | |      |          |         |
	   trace-cmd-3973  [00]   115.834817:   3973:120:R   +     3:  0:S
	   trace-cmd-3973  [01]   115.834910:   3973:120:R   +     6:  0:S
	   trace-cmd-3973  [02]   115.834910:   3973:120:R   +     9:  0:S
	   trace-cmd-3973  [03]   115.834910:   3973:120:R   +    12:  0:S
	   trace-cmd-3973  [02]   115.834910:   3973:120:R   +     9:  0:S
	      <idle>-0     [02]   115.834910:      0:140:R ==>  3973:120:R

Here we see that trace-cmd with PID 3973 wakes up task 9, but the next line shows the idle task doing a context switch to task 3973.

Enabling the tracing only _after_ the markers are set creates a much saner output:

	# tracer: sched_switch
	#
	#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
	#              | |      |          |         |
	      <idle>-0     [02]  7922.634225:      0:140:R ==>  4790:120:R
	   trace-cmd-4789  [03]  7922.634225:      0:140:R   +  4790:120:R

Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-10 | Merge branch 'tracing/sysprof' into auto-ftrace-next | Ingo Molnar | 6 | -0/+404
2008-07-10 | Merge branch 'tracing/mmiotrace' into auto-ftrace-next | Ingo Molnar | 4 | -0/+352
2008-07-03 | ftrace: prevent ftrace modifications while being kprobe'd, v2 | Abhishek Sagar | 1 | -1/+8
add two missing chunks for ftrace+kprobe. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-23 | ftrace: avoid modifying kprobe'd records | Abhishek Sagar | 1 | -4/+31
Avoid modifying the mcount call-site if there is a kprobe installed on it. These records are not marked as failed, however; this allows the filter rules on them to remain up-to-date. Whenever the kprobe on the corresponding record is removed, the record gets updated as normal. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-23 | ftrace: freeze kprobe'd records | Abhishek Sagar | 2 | -1/+74
Let records identified as being kprobe'd be marked as "frozen". The trouble with records which have a kprobe installed on their mcount call-site is that they don't get updated. So if such a function which is currently being traced gets its tracing disabled due to a new filter rule (or because it was added to the notrace list), then it won't be updated and will continue being traced. This patch allows scanning of all frozen records during tracing to check if they should be traced. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-23 | ftrace: store mcount address in rec->ip | Abhishek Sagar | 1 | -1/+2
Record the address of the mcount call-site. Currently all archs except sparc64 record the address of the instruction following the mcount call-site. Some general cleanups are entailed. Storing mcount addresses in rec->ip enables looking them up in the kprobe hash table later on to check if they're kprobe'd. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: davem@davemloft.net Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
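A sketch of the lookup this enables (the helper name is hypothetical): with rec->ip holding the mcount call-site address itself, the kprobe hash can be consulted directly through get_kprobe().

	#include <linux/kprobes.h>
	#include <linux/ftrace.h>

	/* Returns non-zero if a kprobe sits on this mcount call-site. */
	static int mcount_site_is_kprobed(struct dyn_ftrace *rec)
	{
		return get_kprobe((void *)rec->ip) != NULL;
	}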
2008-06-16 | namespacecheck: fixes | Ingo Molnar | 2 | -3/+3
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-14 | ftrace: fix "notrace" filtering priority | Abhishek Sagar | 1 | -3/+9
This is a fix to give notrace filter rules priority over "set_ftrace_filter" rules. This fix ensures that functions which are set to be filtered and are concurrently marked as "notrace" don't get recorded. As of now, if a record is marked as FTRACE_FL_FILTER and is enabled, then the notrace flag is not checked. Tested on x86-32. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
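A sketch of the corrected ordering (the helper is hypothetical; FTRACE_FL_FILTER is named in the message and FTRACE_FL_NOTRACE is the companion flag of that era): notrace is checked before the filter, so a function listed in both files is not recorded.

	#include <linux/ftrace.h>

	static int ftrace_want_record(struct dyn_ftrace *rec, int filtering)
	{
		if (rec->flags & FTRACE_FL_NOTRACE)	/* notrace always wins */
			return 0;
		if (!filtering)				/* no filter set: record everything else */
			return 1;
		return !!(rec->flags & FTRACE_FL_FILTER);
	}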
2008-06-12 | ftrace: fix printout | Jiri Slaby | 1 | -3/+2
Do not print loglevel before "entries of %ld bytes". Move it to the previous pr_info. Signed-off-by: Jiri Slaby <jirislaby@gmail.com> Cc: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-10 | ftrace: disable tracing when current_tracer is set to "none" | Ankita Garg | 1 | -5/+17
Found that in spite of setting the current_tracer to "none", the trace from the previous trace type continued to be collected. The patch below fixes this and causes the trace to be disabled when the "none" type is selected. Compile and boot tested the patch for functionality. Signed-off-by: Ankita Garg <ankita@in.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-10 | ftrace: prevent freeing of all failed updates | Abhishek Sagar | 1 | -3/+2
Steven Rostedt wrote: > If we unload a module and reload it, will it ever get converted again? The intent was always to filter core kernel functions to prevent their freeing. Here's a fix which should allow re-recording of module call-sites. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-10 | ftrace: add debugfs entry 'failures' | Abhishek Sagar | 1 | -1/+39
Identify functions which had their mcount call-site updates failed. This can help us track functions which ftrace shouldn't fiddle with, and are thus not being traced. If there is no race with any external agent which is modifying the mcount call-site, then this file displays no entries (normal case). Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-10 | ftrace: remove ftrace_ip_converted() | Abhishek Sagar | 1 | -7/+0
Remove the unneeded function ftrace_ip_converted(). Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-10 | ftrace: prevent freeing of all failed updates | Abhishek Sagar | 1 | -30/+46
Prevent freeing of records which cause problems and correspond to functions from core kernel text. A new flag, FTRACE_FL_CONVERTED, is used to mark a record as "converted". All other records are patched lazily to NOPs. Failed records now also remain on the ftrace_hash table. Each invocation of ftrace_record_ip now checks whether the traced function has ever been recorded (including past failures) and doesn't re-record it again. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-02 | ftrace: user update and disable dynamic ftrace daemon | Steven Rostedt | 1 | -47/+110
In dynamic ftrace, the mcount function starts off pointing to a stub function that just returns. On start up, the call to the stub is modified to point to a "record_ip" function. The job of the record_ip function is to add the function to a pre-allocated hash list. If the function is already there, it simply is ignored, otherwise it is added to the list.

Later, the ftraced daemon wakes up and calls kstop_machine if any functions have been recorded, and changes the calls to the recorded functions to a simple nop. If no functions were recorded, the daemon goes back to sleep.

The daemon wakes up once a second to see if it needs to update any newly recorded functions into nops. Usually it does not, but if a lot of code has been executed for the first time in the kernel, the ftraced daemon will call kstop_machine to update those into nops.

The problem currently is that there's no way to stop the daemon from doing this, and it can cause unneeded latencies (800us, which for some is bothersome).

This patch adds a new file /debugfs/tracing/ftraced_enabled. Reading this returns "enabled\n" when the daemon is active and "disabled\n" when it is not running. To disable the daemon, the user can echo "0" or "disable" into this file, and "1" or "enable" to re-enable the daemon.

Since the daemon is used to convert the functions into nops to increase the performance of the system, I also added that anytime something is written into the ftraced_enabled file, kstop_machine will run if there are new functions that have been detected that need to be converted. This way the user can disable the daemon but still be able to control the conversion of the mcount calls to nops by simply running "echo 0 > /debugfs/tracing/ftraced_enabled" when they need to do more conversions.

To see the number of converted functions: "cat /debugfs/tracing/dyn_ftrace_total_info"

Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-02 | ftrace: distinguish kretprobe'd functions in trace logs | Abhishek Sagar | 1 | -3/+24
Tracing functions via ftrace which have a kretprobe installed on them can produce misleading output in their trace logs. E.g., consider the correct trace of the following sequence:

	do_IRQ()
	{
		~
		irq_enter();
		~
	}

Trace log (sample):

	<idle>-0 [00] 4154504455.781616: irq_enter <- do_IRQ

But if irq_enter() has a kretprobe installed on it, the return address stored on the stack at each invocation is modified to divert the return to a kprobe trampoline function called kretprobe_trampoline(). So with this the trace would (currently) look like:

	<idle>-0 [00] 4154504455.781616: irq_enter <- kretprobe_trampoline

Now this is quite misleading to the end user, as it suggests something that didn't actually happen. So just to avoid such misinterpretations, the inlined patch aims to output such a log as:

	<idle>-0 [00] 4154504455.781616: irq_enter <- [unknown/kretprobe'd]

Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Acked-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
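A sketch of the check implied above (the helper name is hypothetical; the placeholder string is the one quoted in the message): when the saved parent address is the kretprobe trampoline, the real caller cannot be recovered, so a placeholder is printed instead.

	#include <linux/kprobes.h>

	extern void kretprobe_trampoline(void);	/* arch trampoline used by kretprobes */

	static const char *trace_caller_label(unsigned long parent_ip)
	{
		if (parent_ip == (unsigned long)kretprobe_trampoline)
			return "[unknown/kretprobe'd]";
		return NULL;	/* caller falls back to normal symbol lookup */
	}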
2008-05-26 | ftrace: fix updating of ftrace_update_cnt | Abhishek Sagar | 1 | -3/+5
Hi Ingo/Steven, Ftrace currently maintains an update count which includes false updates, i.e., updates which failed. If anything, such failures should be tracked by some separate variable, but this patch provides a minimal fix. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: rostedt@goodmis.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-26 | ftrace: safe traversal of ftrace_hash hlist | Abhishek Sagar | 1 | -2/+2
Hi Steven, I noticed that concurrent instances of ftrace_record_ip() have a race between ftrace_hash list traversal during ftrace_ip_in_hash() (before acquiring ftrace_shutdown_lock) and ftrace_add_hash(). If it's so then this should fix it. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: rostedt@goodmis.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-26 | ftrace: fix up cmdline recording | Steven Rostedt | 4 | -13/+19
The new work with converting the trace hooks over to markers broke the command line recording of ftrace. This patch fixes it again. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-26 | ftrace: set_ftrace_notrace feature | Steven Rostedt | 1 | -40/+130
While debugging latencies in the RT kernel, I found that it would be nice to be able to filter functions away from the trace rather than only being able to filter on functions.

I added a new interface to the debugfs tracing directory called set_ftrace_notrace. When dynamic ftrace is enabled, this lets you filter away functions that will not be recorded in the trace. It is similar to adding 'notrace' to those functions, but without recompiling the kernel.

Here's how set_ftrace_filter and set_ftrace_notrace interact. Remember, if set_ftrace_filter is set, it removes all functions from the trace except for those listed in set_ftrace_filter. set_ftrace_notrace will prevent those functions from being traced. If you were to set the same function in both set_ftrace_filter and set_ftrace_notrace, then you would end up with an empty trace.

The set of functions to trace is: if set_ftrace_filter is empty, all functions not in set_ftrace_notrace; otherwise, the functions in set_ftrace_filter that are not in set_ftrace_notrace.

Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-26 | ftrace: remove printks from irqsoff trace | Steven Rostedt | 1 | -16/+0
Printing out new max latencies was fine for the old RT tracer. But for mainline it is a bit messy. We also need to test if the run queue is locked before we can do the print. This means that we may not be printing out latencies if the run queue is locked on another CPU. This produces inconsistencies in the output. This patch simply removes the print altogether. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: pq@iki.fi Cc: proski@gnu.org Cc: sandmann@redhat.com Cc: a.p.zijlstra@chello.nl Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-26 | ftrace: add function tracing to wake up tracing | Steven Rostedt | 1 | -1/+66
This patch adds function tracing to the functions that are called on the CPU of the task being traced. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: pq@iki.fi Cc: proski@gnu.org Cc: sandmann@redhat.com Cc: a.p.zijlstra@chello.nl Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-26 | ftrace: move ftrace_special to trace.c | Steven Rostedt | 2 | -26/+25
Move the ftrace_special out of sched_switch to trace.c. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: pq@iki.fi Cc: proski@gnu.org Cc: sandmann@redhat.com Cc: a.p.zijlstra@chello.nl Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-26 | ftrace: limit use of check pages | Steven Rostedt | 2 | -9/+24
The check_pages function is called often enough that it can cause problems with trace outputs or even bringing the system to a halt. This patch limits the check_pages to the places that are most likely to have problems. The check is made at the flip between the global array and the max save array, as well as when the size of the buffers changes and the self tests. This patch also removes the BUG_ON from check_pages and replaces it with a WARN_ON and disabling of the tracer. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: pq@iki.fi Cc: proski@gnu.org Cc: sandmann@redhat.com Cc: a.p.zijlstra@chello.nl Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-24 | sysprof: make it depend on X86 | Thomas Gleixner | 1 | -1/+1
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-24 | x86 mmiotrace: use resource_size_t for phys addresses | Pekka Paalanen | 1 | -8/+12
Signed-off-by: Pekka Paalanen <pq@iki.fi> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-05-24 | mmiotrace: do not print bogus pid for maps either | Pekka Paalanen | 1 | -2/+2
Signed-off-by: Pekka Paalanen <pq@iki.fi> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-05-24 | mmiotrace: print overrun counts | Pekka Paalanen | 1 | -0/+25
Signed-off-by: Pekka Paalanen <pq@iki.fi> Signed-off-by: Ingo Molnar <mingo@elte.hu>