wiki:linux/profiling

Version 3 (modified by Tim Harvey, 9 months ago)

Linux OS Code Profiling

There are several options for code profiling on the Linux OS. The kernel itself has a profiling API which can be enabled:

  • CONFIG_PROFILING - General profiling
  • CONFIG_OPROFILE - OProfile system profiling (capable of profiling the whole system including kernel, kernel modules, libraries, and applications)

OProfile was the profiling tool of choice for Linux developers for nearly 10 years. A few years back, various kernel developers defined and implemented a new formal kernel API (perf_events) for accessing performance monitor counters (PMCs), hardware elements found in most modern CPUs, in order to address the needs of performance tools. Prior to this new API, OProfile used a special OProfile-specific kernel module, while other tools relied on out-of-tree patches (perfctr, perfmon).

The developers of the new profiling API also wrote an example tool, called perf, that uses the new API. The perf tool has thus matured greatly in the past few years; OProfile, by contrast, is strictly a profiling tool.

There are other options that are not described here:

  • valgrind / cachegrind / dtrace
  • Google CPU profiler
  • gprof


Basic Kernel Profiling (CONFIG_PROFILING and readprofile)

There are several facilities for seeing where the kernel spends its resources. A simple one, which can be built in with CONFIG_PROFILING, stores the current EIP (instruction pointer) at each clock tick.

To use this, ensure the kernel is built with CONFIG_PROFILING and either boot the kernel with the command-line option profile=2 or enable profiling at runtime with echo 2 > /sys/kernel/profiling.

This will cause the file /proc/profile to be created. The number provided (2 in the example above) is the number of bit positions the EIP is shifted right when profiling, so a larger number gives a coarser profile. The counters are reset by writing to /proc/profile.
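To make the shift concrete, here is a small arithmetic sketch showing that with profile=2 each histogram bucket covers 2^2 = 4 bytes of kernel text (the addresses below are made up, not real kernel symbols):

```shell
# With profile=N, the kernel right-shifts the EIP by N bits, so all
# addresses within a 2^N-byte range land in the same histogram bucket.
N=2
a=$((0xc0100000))
b=$((0xc0100003))   # within 4 bytes of a
c=$((0xc0100004))   # 4 bytes past a
echo $(( (a >> N) == (b >> N) ))   # prints 1: same bucket
echo $(( (a >> N) == (c >> N) ))   # prints 0: next bucket
```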

The readprofile utility will output the statistics for you. It does not sort, so you have to invoke sort explicitly, but given a memory map it will translate addresses to kernel symbols.

Example:

  1. boot kernel compiled with CONFIG_PROFILING
  2. enable profiling (either by placing profile=2 on the kernel command line or dynamically with):
    echo 2 > /sys/kernel/profiling # enable profiling
    
  3. (optional) clear counters
    echo > /proc/profile # reset counters
    
  4. do some activity you wish to profile
  5. use readprofile to interpret the results:
    readprofile -m System.map | sort -nr | head -2
    510502 total                                      0.1534
    508548 default_idle                           10594.7500
    
    • The first column gives the number of timer ticks. The last column gives the number of ticks divided by the size of the function.
    • The command readprofile -r is equivalent to echo > /proc/profile.
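Since readprofile does not sort, the usual pattern is to pipe its output through sort(1). The snippet below demonstrates the pipeline on canned sample lines (real data requires a live /proc/profile on the target):

```shell
# Columns: timer ticks, symbol, ticks divided by function size.
# The sample lines below stand in for 'readprofile -m System.map' output.
printf '%s\n' \
  '    12 do_page_fault      0.0417' \
  '508548 default_idle   10594.7500' \
  '  1942 __copy_to_user     2.1312' \
  | sort -nr | head -2
```

sort -n ignores leading blanks, so the symbols with the most ticks come out on top.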


OProfile

OProfile provides a statistical profiler and post-processing tools for analyzing the collected profile data and event counts.

The tool used is called operf. Some processors are not supported by the underlying perf_events kernel API and are thus not supported by operf. If you see 'Your kernel's Performance Events Subsystem does not support your processor type', you need to fall back to opcontrol and the legacy mode.


OProfile Standard Mode (imx6)

Starting with v0.9.8, OProfile switched over to using the new perf_events kernel API with a new set of userspace tools (however OProfile still supports the legacy mode - see below).

Standard mode tools:

  • operf
  • ocount - collect raw event counts on a per-application, per-process, per-cpu, or system-wide basis

Using the standard mode, post-processing of collected raw events is not necessary.
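The standard-mode workflow can be sketched as follows ('myapp' is a placeholder for your own binary; assumes OProfile >= 0.9.8 on a CPU supported by perf_events):

```shell
# Profile a single application run and report its hottest symbols
operf ./myapp
opreport --symbols

# Collect raw event counts for the same application
ocount ./myapp
```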

OProfile Legacy Mode (cns3xxx)

The legacy mode is for CPUs that do not implement the new perf_events kernel profiling API. The Gateworks Laguna family, using the Cavium cns3xxx CPU, falls into this category.

The legacy mode tools consist of:

  • oprofile kernel module (requires CONFIG_PROFILING=y and CONFIG_OPROFILE=m)
  • opcontrol - used to setup profiling (need vmlinux file)
  • oprofiled - the daemon (controlled via opcontrol)
  • opreport - report on collected samples

opcontrol parameters:

  • --session-dir specifies the location to store samples. It defaults to /var/lib/oprofile; you can use this option (with both opcontrol and opreport) to work with samples from alternate locations
  • --separate specifies how to separate samples. By default they are all stored in a single file (none), but you can choose to separate them by:
    • none - no profile separation (default)
    • lib - per-application profiles for libraries
    • kernel - per-application profiles for the kernel and kernel modules
    • thread - profiles for each thread and each task
    • cpu - profiles for each CPU
    • all - all of the above
  • Using profile specification parameters you can choose how to sample and report data:
    • cpu:0 - report just cpu0 (assuming data was collected separately; see above)
  • --vmlinux=file (for both opcontrol and opreport) specifies the vmlinux kernel image required for resolving kernel symbols
  • --setup will store the following list of parameters in /root/.oprofile/daemonrc to be used as the default settings for opcontrol and opreport. Alternatively, you can specify setup options to each program as needed

Example usage:

  1. copy your current kernel's vmlinux to /tmp
  2. (optional) set up your configuration for vmlinux symbol resolution, a specific session location, and separating events by cpu:
    opcontrol --setup --vmlinux=/tmp/vmlinux --session-dir=/tmp/session1 --separate=cpu
    
  3. start capturing events:
    opcontrol --start
    
    • you can force a flush of collected events via opcontrol --dump at any time
    • you can clear out the currently collected events via opcontrol --reset at any time
  4. stop capturing events (and flush data):
    opcontrol --shutdown
    
  5. report events:
    opreport --vmlinux=/tmp/vmlinux --session-dir=/tmp/session1
    
    • if capturing events from individual CPUs separately (as shown above), you can show the info for just cpu0 via opreport cpu:0
    • Note that opreport doesn't make use of the conf file generated by opcontrol --setup

Important notes:

  • because the cns3xxx kernel and/or hardware does not support performance counters, we are forced into timer-based mode using the timer IRQ. In this mode profiling is not useful for code that disables IRQs or runs in hardirq context


Perf

In general, the perf tool is considered easier to install and run.

Example:

  1. (optional) copy your current kernel's vmlinux to /tmp
  2. capture 120 seconds worth of profiling data from a running program:
    perf record -p $(pidof program) sleep 120
    
  3. report data (using kernel symbols):
    perf report -k /tmp/vmlinux
    
    • the -k is optional and adds kernel symbol decoding
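Beyond the record/report flow shown above, perf also has counting and live modes that are often handy (assuming perf is installed on the target):

```shell
# Event-count summary (counting rather than sampling) for a command
perf stat ls > /dev/null

# Live, top-like view of the hottest functions system-wide
perf top

# System-wide sampling with call graphs for 10 seconds
perf record -a -g sleep 10 && perf report
```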


OpenWrt

OpenWrt has support for both OProfile and perf. Because perf depends on glibc (or at least is configured that way), we recommend OProfile when using OpenWrt.

To enable OProfile on OpenWrt, do a make menuconfig and select:

  • Global build Settings -> Compile the kernel with profiling enabled
  • Development -> oprofile
  • Development -> oprofile-utils
    • Note that package/devel/oprofile/Makefile may need +librt added to DEPENDS

To enable perf (glibc required):

  • Global build Settings -> Compile the kernel with profiling enabled
  • Development -> perf

You likely want to run unstripped binaries for anything you actually want to investigate. One way of doing this is to build them with CONFIG_DEBUG=y. For example, building compat-wireless:

make target/linux/mac80211/{clean,compile} V=99 CONFIG_DEBUG=y
