Performance Tuning
Various performance aspects rely heavily on the configuration of your Linux operating system. Here are some things to keep in mind:
- Kernel modules - kernel modules that are not needed can bog down certain paths, such as network routing (e.g. the iptables/ebtables packet-filtering modules) (see below)
- Userland services - services and daemons that are not needed can chew up system resources (CPU cycles, memory footprint) (see below)
See also Multicore Processing Page
Routing Performance
If you are trying to optimize network routing you can try the following:
- GigE:
- make sure you have a GigE link (where appropriate) in every network segment between your test endpoints (switch segments, the target endpoint computers, etc.)
- if using PoE, make sure you have a PoE injector capable of GigE
- General:
- eliminate unnecessary kernel modules which may be present for packet filtering (such as the ipt and ebtables related modules). For an example of removing kernel modules, see the OpenWrt/kernelconfig page. If your hardware requires certain modules, be sure to leave them in place.
- Caution: Please make informed decisions when removing kernel modules as removing hardware related modules may have unintended effects.
- minimize hardware in-between in case it is problematic (direct connection between endpoints)
- eliminate unnecessary userspace applications which may be present. To disable virtually all of them (you may need to configure the network by hand after doing this) you can use 'for i in /etc/init.d/*; do $i disable; done; /etc/init.d/boot enable; /etc/init.d/done enable' (the last two commands re-enable the essential 'boot' and 'done' init scripts)
- when using iperf as a network test tool, pay attention to the TCP window size, which can greatly affect throughput (make sure you understand what it means)
- be aware that generating traffic on an embedded node creates a performance hit on that node, versus sending traffic 'through' the node
- run 'top' while testing to see where the bottlenecks may be: if virtually 100% of the utilization occurs in sirq (soft IRQ), irq (hard IRQ), and the NIC (network driver), then you have maxed out performance due to raw interrupt and low-level packet handling
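Interrupt load can also be read directly from /proc/stat rather than eyeballed in 'top'. A minimal sketch (field positions follow the standard Linux /proc/stat layout):

```shell
# The aggregate "cpu" line of /proc/stat lists time in clock ticks;
# the 7th awk field is hard-IRQ time and the 8th is soft-IRQ time.
# Sample it twice while traffic is flowing and compare the deltas.
awk '/^cpu /{ printf "irq=%s softirq=%s ticks\n", $7, $8 }' /proc/stat
```

If nearly all of the CPU-time delta during a test shows up in these two counters, the node is interrupt-bound and routing throughput has hit its ceiling.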
iperf
Use iperf to test throughput.
Please read up on iperf online, for example in this tutorial: http://openmaniak.com/iperf.php
Basic Setup:
There is a client and a server, differentiated by the command used.
Server:
iperf -s
Client:
iperf -c 192.168.1.1
Note: For UDP, a bandwidth limit is needed. Use the -b flag followed by the desired bandwidth limit (1m, 10m, 100m, 200m, 300m, 500m, 1g, etc.).
Because iperf is processor intensive, there is no need to generate more traffic than the processor can handle. Therefore, incrementally increase the bandwidth limit until the reported throughput falls slightly below the limit, so that you are not creating large amounts of overhead.
For example, a bandwidth limit of 10m will easily be reached on a GigE link. However, at a 200m bandwidth limit, the throughput numbers may only hit 187 Mbits/sec, thus creating very little processor overhead.
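The overhead argument can be sanity-checked with a little arithmetic: at 10 Mbits/sec for 10 seconds with iperf's default 1470-byte UDP payload, the expected datagram count is:

```shell
# (rate in bits/sec * seconds) / (bits per datagram)
awk 'BEGIN { printf "%d\n", (10e6 * 10) / (1470 * 8) }'   # prints 8503
```

which agrees closely with the "Sent 8505" figure in the 10m example run below.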
VERY IMPORTANT: ORDER MATTERS. The iperf manual (please read it) states that the bandwidth flag must be placed at the end of the command to take effect.
From the client type:
iperf -u -c 192.168.4.1 -b 10m
From the server type:
iperf -s -u
Increasing Bandwidth Limit for UDP Example
root@OpenWrt:/# iperf -u -c 192.168.4.1 -b 10m
------------------------------------------------------------
Client connecting to 192.168.4.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  160 KByte (default)
------------------------------------------------------------
[  3] local 192.168.4.2 port 56615 connected with 192.168.4.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  11.9 MBytes  10.0 Mbits/sec
[  3] Sent 8505 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  11.9 MBytes  10.0 Mbits/sec   0.028 ms    1/ 8506 (0.012%)
root@OpenWrt:/# iperf -u -c 192.168.4.1 -b 100m
------------------------------------------------------------
Client connecting to 192.168.4.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  160 KByte (default)
------------------------------------------------------------
[  3] local 192.168.4.2 port 33153 connected with 192.168.4.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   120 MBytes   100 Mbits/sec
[  3] Sent 85304 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec   119 MBytes  99.8 Mbits/sec   0.077 ms  514/85305 (0.6%)
root@OpenWrt:/# iperf -u -c 192.168.4.1 -b 200m
------------------------------------------------------------
Client connecting to 192.168.4.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  160 KByte (default)
------------------------------------------------------------
[  3] local 192.168.4.2 port 56998 connected with 192.168.4.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   222 MBytes   186 Mbits/sec
[  3] Sent 158150 datagrams
[  3] Server Report:
[  3]  0.0-10.2 sec  59.3 MBytes  48.6 Mbits/sec  15.794 ms 115820/158151 (73%)
root@OpenWrt:/# iperf -u -c 192.168.4.1 -b 300m
------------------------------------------------------------
Client connecting to 192.168.4.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  160 KByte (default)
------------------------------------------------------------
[  3] local 192.168.4.2 port 52109 connected with 192.168.4.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   223 MBytes   187 Mbits/sec
[  3] Sent 158814 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  58.0 MBytes  48.7 Mbits/sec   0.240 ms 117423/158815 (74%)
Wireless Tuning
Tuning wireless performance is a finicky process that depends on many factors. Wireless N speeds of 300 Mbps are not typically achieved in real life, and an ARM processor on a Gateworks board is not quite the same as a 3.4 GHz x86 machine.
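For reference, the 300 Mbps number is simply the top 802.11n PHY rate for a 2-stream radio (MCS15 at 40 MHz with the short guard interval), and it can be reproduced from the standard OFDM parameters; this is a back-of-the-envelope sketch, not a measured result:

```shell
# 2 spatial streams x 6 bits/subcarrier (64-QAM) x 5/6 coding rate
# x 108 data subcarriers (40 MHz channel), one OFDM symbol per 3.6 us (SGI)
awk 'BEGIN { printf "%.0f Mbps\n", (2 * 6 * (5/6) * 108) / 3.6 }'   # prints 300 Mbps
```

Actual TCP/UDP throughput sits well below this PHY rate due to MAC overhead, retries, and CPU limits.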
Here are a few tips for wireless:
- Wireless encryption protocols can affect performance because of the extra computation involved. Choose wisely.
- Antenna type, orientation, and distance are VERY important. Be sure to hook up all antennas to the wireless card, and note that testing two boards 3 feet apart may not achieve the best results.
- Wireless N will sometimes run faster at 40MHz vs 20 MHz.
- Try Channel 153 for Wireless N.
- Use an uncluttered channel that other devices are not using.
- The hardware mode for openwrt for the N radios would typically be 11na.
- 5.8 GHz will typically run faster than 2.4 GHz.
- Optimize the software as noted below. The more items running on the board, the less processor time is available for handling wireless traffic.
We have at times removed some modules to reduce the software load and try to obtain higher throughput.
These are the commands that we have been using to tune performance for wireless testing on Gateworks boards. This is not fully supported and will disable certain features. Please use at your own risk.
WARNING: Run these commands on the Gateworks board once it has booted. Please only do so over the serial console, as telnet is disabled below.
/etc/init.d/batmand disable
/etc/init.d/collectd disable
/etc/init.d/cron disable
/etc/init.d/dnsmasq disable
/etc/init.d/dropbear disable
/etc/init.d/firewall disable
/etc/init.d/gpsd disable
/etc/init.d/gscd disable
/etc/init.d/led disable
/etc/init.d/luci_bwc disable
/etc/init.d/luci_dhcp_migrate disable
/etc/init.d/luci_fixtime disable
/etc/init.d/luci_statistics disable
/etc/init.d/miniupnpd disable
/etc/init.d/openvpn disable
/etc/init.d/qos disable
/etc/init.d/rcS disable
/etc/init.d/relayd disable
/etc/init.d/serialoverip disable
/etc/init.d/sysntpd disable
/etc/init.d/ntpd disable
/etc/init.d/telnet disable
/etc/init.d/tinyproxy disable
/etc/init.d/uhttpd disable
/etc/init.d/umount disable
/etc/init.d/usb disable
/etc/init.d/vnstat disable
mkdir /etc/modules.old
mv /etc/modules.d/* /etc/modules.old
mv /etc/modules.old/*crypto* /etc/modules.d
mv /etc/modules.old/*80211* /etc/modules.d
mv /etc/modules.old/*nls-base* /etc/modules.d
mv /etc/modules.old/*usb-core* /etc/modules.d
mv /etc/modules.old/*ath5k* /etc/modules.d
mv /etc/modules.old/*ath9k* /etc/modules.d
mv /etc/modules.old/26-ath /etc/modules.d
mv /etc/modules.old/50-madwifi /etc/modules.d
Wireless configuration used:
ACCESS POINT WIRELESS UCI Config:
root@OpenWrt:/# uci show wireless
wireless.radio0=wifi-device
wireless.radio0.type=mac80211
wireless.radio0.phy=phy0
wireless.radio0.ht_capab=SHORT-GI-40 TX-STBC RX-STBC1 DSSS_CCK-40
wireless.radio0.disabled=0
wireless.radio0.country=US
wireless.radio0.txpower=17
wireless.radio0.htmode=HT40-
wireless.radio0.channel=153
wireless.radio0.hwmode=11na
wireless.@wifi-iface[0]=wifi-iface
wireless.@wifi-iface[0].device=radio0
wireless.@wifi-iface[0].network=lan
wireless.@wifi-iface[0].mode=ap
wireless.@wifi-iface[0].ssid=gateworks
wireless.@wifi-iface[0].wds=1
wireless.@wifi-iface[0].encryption=psk2
wireless.@wifi-iface[0].key=abc

CLIENT WDS WIRELESS UCI Config:
root@OpenWrt:/# uci show wireless
wireless.radio0=wifi-device
wireless.radio0.type=mac80211
wireless.radio0.phy=phy0
wireless.radio0.ht_capab=SHORT-GI-40 TX-STBC RX-STBC1 DSSS_CCK-40
wireless.radio0.disabled=0
wireless.radio0.txpower=17
wireless.radio0.country=US
wireless.radio0.htmode=HT40-
wireless.radio0.channel=153
wireless.radio0.hwmode=11na
wireless.@wifi-iface[0]=wifi-iface
wireless.@wifi-iface[0].device=radio0
wireless.@wifi-iface[0].network=lan
wireless.@wifi-iface[0].ssid=gateworks
wireless.@wifi-iface[0].mode=sta
wireless.@wifi-iface[0].wds=1
wireless.@wifi-iface[0].encryption=psk2
wireless.@wifi-iface[0].key=abc
We then ran iperf from the two PCs, with one being the server and one the client.
Server: iperf -s
Client: iperf -t10 -w 512k -c 192.168.0.23

iperf TCP results:
------------------------------------------------------------
Client connecting to 192.168.0.23, TCP port 5001
TCP window size:  256 KByte (WARNING: requested 512 KByte)
------------------------------------------------------------
[  3] local 192.168.0.22 port 52644 connected with 192.168.0.23 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  99.9 MBytes  83.7 Mbits/sec
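The "TCP window size" warning in this output matters: TCP throughput is capped at roughly window size divided by round-trip time. A quick sketch with the 256 KByte window actually granted and a hypothetical 20 ms RTT (the RTT here is illustrative, not measured on this link):

```shell
# max TCP throughput = window / RTT
# 256 KBytes * 8 bits, divided by 0.020 s (hypothetical RTT), in Mbits/sec
awk 'BEGIN { printf "%.1f Mbits/sec\n", (256 * 1024 * 8) / 0.020 / 1e6 }'   # prints 104.9 Mbits/sec
```

If the computed cap is below the link's raw capacity, increasing the window (iperf's -w flag) is the first thing to try.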
Wireless Modulation Rate
The modulation rate is dynamically chosen via the mac80211 minstrel algorithm. The rate adjusts depending on the quality of the RF signal: the better the signal, the better the rate. The table below indicates what rate you are currently achieving. If you have a poor rate, you will notice much lower bandwidth and will need to adjust the antennas / distance / obstructions to get a better rate. The highest rates can be very hard to achieve.
root@OpenWrt:/# cat /sys/kernel/debug/ieee80211/phy0/netdev\:wlan0/stations/a8\:54\:b2\:00\:04\:2e/rc_stats
type          rate   throughput  ewma prob  this prob  this succ/attempt  success  attempts
HT20/LGI      MCS0          6.1       92.0      100.0        0(  0)           106       111
HT20/LGI      MCS1         12.8       96.7      100.0        0(  0)           109       116
HT20/LGI      MCS2         19.5       99.7      100.0        0(  0)           112       119
HT20/LGI      MCS3         25.8       99.9      100.0        0(  0)           108       110
HT20/LGI      MCS4         36.2       95.2      100.0        0(  0)           102       112
HT20/LGI      MCS5         44.2       89.8      100.0        0(  0)           108       111
HT20/LGI      MCS6         48.4       88.7      100.0        0(  0)          2442      2816
HT20/LGI    P MCS7         48.5       79.1       56.6       68(120)          3616      4236
HT20/LGI      MCS8         12.1       92.1      100.0        0(  0)           103       110
HT20/LGI      MCS9         24.5       95.1      100.0        0(  0)           105       117
HT20/LGI      MCS10        37.8       99.5      100.0        0(  0)           118       123
HT20/LGI      MCS11        48.3       98.1      100.0        0(  0)           121       127
HT20/LGI      MCS12        58.8       81.7       33.3       13( 39)         14294     17331
HT20/LGI    t MCS13        66.9       74.2       60.8        0(  0)         62514     85221
HT20/LGI    T MCS14        67.9       67.2       58.2        0(  0)        110248    163970
HT20/LGI      MCS15        61.4       55.9       28.0       32(114)        141375    243633
T = what we are on    t = what we are on next
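Note that the throughput column is well below the nominal PHY rate of each MCS index. For example, HT20/LGI MCS7 (1 stream, 64-QAM, 5/6 coding) has a nominal rate of 65 Mbps; a back-of-the-envelope check from the standard 802.11n OFDM parameters:

```shell
# 1 stream x 6 bits (64-QAM) x 5/6 coding x 52 data subcarriers (20 MHz),
# one OFDM symbol per 4.0 us (long guard interval)
awk 'BEGIN { printf "%.0f Mbps\n", (1 * 6 * (5/6) * 52) / 4 }'   # prints 65 Mbps
```

The ~48.5 throughput estimate shown for MCS7 reflects MAC overhead and the observed success probability, not the raw PHY rate.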
Here is the output of rc_stats from a newer kernel. Notice the "T" and "t" markers have been replaced by "A"/"B"/"C"/"D", representing the current rate "A" and the subsequent fallback rates. In this case "P" still represents the rate with the best nominal success probability.
              best   ____________rate__________   ________statistics________   _____last____   ______sum-of________
mode guard #  rate   [name   idx airtime max_tp]  [avg(tp) avg(prob) sd(prob)] [retry|suc|att] [#success | #attempts]
HT20 LGI  1    MCS0     0  1477    4.8    4.8  100.0   0.0   1  0  0       1       1
HT20 LGI  1    MCS1     1   739    9.7    9.7  100.0   0.0   0  0  0       1       1
HT20 LGI  1    MCS2     2   493   17.0   17.0  100.0   0.0   0  0  0       1       1
HT20 LGI  1    MCS3     3   369   21.9   21.9  100.0   0.0   0  0  0       1       1
HT20 LGI  1    MCS4     4   246   31.7   31.7   95.0  21.3   5  0  0      71     133
HT20 LGI  1    MCS5     5   185   43.9   43.9   95.7  19.9   0  0  0      19      23
HT20 LGI  1    MCS6     6   164   48.8   48.8  100.0   0.0   0  0  0       1       1
HT20 LGI  1    MCS7     7   148   53.7   53.7  100.0   0.0   0  0  0       1       1
HT20 LGI  2    MCS8    10   739    9.7    9.7  100.0   0.0   0  0  0       1       1
HT20 LGI  2    MCS9    11   369   21.9   21.9  100.0   0.0   0  0  0       1       1
HT20 LGI  2    MCS10   12   246   31.7   31.7   95.6  20.0   0  0  0      14      17
HT20 LGI  2    MCS11   13   185   43.9   43.9   96.1  18.7   5  0  0      19      28
HT20 LGI  2    MCS12   14   123   63.4   63.4  100.0   0.0   0  0  0       1       1
HT20 LGI  2    MCS13   15    93   80.5   80.5   95.2  14.9   6  0  0      67      88
HT20 LGI  2    MCS14   16    82   87.8   87.8   96.1  18.5   6  0  0      31      77
HT20 LGI  2    MCS15   17    74   97.6   97.6   96.1  18.9   0  0  0      43      48
HT20 LGI  3    MCS16   20   493   17.0   17.0  100.0   0.0   0  0  0       1       1
HT20 LGI  3    MCS17   21   246   31.7   31.7   95.0  17.1   5  0  0      52      82
HT20 LGI  3    MCS18   22   164   48.8   48.8   95.1  21.2   0  0  0      19      23
HT20 LGI  3    MCS19   23   123   63.4   63.4  100.0   0.0   0  0  0       1       1
HT20 LGI  3    MCS20   24    82   87.8   87.8  100.0   0.0   0  0  0       1       1
HT20 LGI  3    MCS21   25    62  112.3  112.3   96.0  18.9   6  0  0      46     112
HT20 LGI  3    MCS22   26    55  122.0  122.0   96.0  18.0   6  0  0      51     104
HT20 LGI  3    MCS23   27    49  131.8  131.8   95.6  19.5   0  0  0      29      35
HT20 SGI  1    MCS0    30  1329    4.8    4.8  100.0   0.0   0  0  0       1       1
HT20 SGI  1    MCS1    31   665   12.2   12.2   95.7  19.9   0  0  0      11      12
HT20 SGI  1    MCS2    32   443   17.0   17.0   95.2  17.5   0  0  0      36      42
HT20 SGI  1    MCS3    33   332   24.4   24.4  100.0   0.0   0  0  0       1       1
HT20 SGI  1    MCS4    34   222   36.6   36.6   95.7  19.9   0  0  0      11      12
HT20 SGI  1    MCS5    35   166   46.3   46.3   96.1  18.9   0  0  0      12      15
HT20 SGI  1    MCS6    36   148   53.7   53.7   95.0  15.5   5  0  0      54      71
HT20 SGI  1    MCS7    37   133   58.5   58.5   97.0   9.7   2  0  0     314     327
HT20 SGI  2    MCS8    40   665   12.2   12.2  100.0   0.0   0  0  0       1       1
HT20 SGI  2    MCS9    41   332   24.4   24.4  100.0   0.0   0  0  0       1       1
HT20 SGI  2    MCS10   42   222   36.6   36.6   96.0  19.1   0  0  0      20      22
HT20 SGI  2    MCS11   43   166   46.3   46.3   96.0  19.0   5  0  0      33      43
HT20 SGI  2    MCS12   44   111   68.3   68.3  100.0   0.0   0  0  0       1       1
HT20 SGI  2    MCS13   45    83   87.8   87.8   95.6  18.8   3  0  0     126     230
HT20 SGI  2    MCS14   46    74   97.6   97.6   95.3  18.4   5  0  0      17      52
HT20 SGI  2    MCS15   47    67  104.9  104.9   95.5  20.4   6  0  0      35     109
HT20 SGI  3    MCS16   50   443   17.0   17.0  100.0   0.0   0  0  0       1       1
HT20 SGI  3    MCS17   51   222   36.6   36.6   95.5  20.3   5  0  0      43      67
HT20 SGI  3    MCS18   52   148   53.7   53.7   95.9  19.1   5  0  0      26      37
HT20 SGI  3    MCS19   53   111   68.3   68.3   95.2  20.9   6  0  0      47      89
HT20 SGI  3    MCS20   54    74   97.6   97.6  100.0   0.0   0  0  0       1       1
HT20 SGI  3    MCS21   55    56  122.0  122.0   95.4  16.6   0  0  0      28      36
HT20 SGI  3    MCS22   56    49  131.8  131.8   95.6  20.2   0  0  0      17      21
HT20 SGI  3    MCS23   57    44  144.0  144.0   95.1  18.3   4  0  0     312     566
HT40 LGI  1    MCS0    60   711   12.2   12.2  100.0   0.0   0  0  0       1       1
HT40 LGI  1    MCS1    61   356   21.9   21.9  100.0   0.0   0  0  0       1       1
HT40 LGI  1    MCS2    62   237   34.1   34.1   95.0  14.4   0  0  0      45      58
HT40 LGI  1    MCS3    63   178   43.9   43.9   95.8  19.5   5  0  0      27      32
HT40 LGI  1    MCS4    64   119   63.4   63.4  100.0   0.0   3  0  0       1       1
HT40 LGI  1    MCS5    65    89   83.0   53.7   59.3  29.5   0  0  0      32      53
HT40 LGI  1    MCS6    66    79   90.3   75.6   74.3  25.9   2  0  0     325     517
HT40 LGI  1    MCS7    67    71  100.0  100.0   96.0  15.0   3  0  0     603    1295
HT40 LGI  2    MCS8    70   356   21.9   21.9  100.0   0.0   0  0  0       1       1
HT40 LGI  2    MCS9    71   178   43.9   43.9   95.6  20.1   0  0  0      15      17
HT40 LGI  2    MCS10   72   119   63.4   63.4   95.3  20.4   6  0  0      57      90
HT40 LGI  2    MCS11   73    89   83.0   83.0   93.6  13.7   6  0  0      96     155
HT40 LGI  2    MCS12   74    60  114.7  114.7   95.9  18.6   5  0  0     228     413
HT40 LGI  2    MCS13   75    45  141.6  141.6   96.0  18.9   5  0  0     177     314
HT40 LGI  2    MCS14   76    40  153.8  153.8   95.7  16.6   4  0  0    5026   10437
HT40 LGI  2    MCS15   77    36  166.0  166.0   95.7  15.5   5  0  0    6491   12710
HT40 LGI  3    MCS16   80   237   34.1   34.1  100.0   0.0   0  0  0       1       1
HT40 LGI  3    MCS17   81   119   63.4   63.4   95.9  12.9   6  0  0      80     136
HT40 LGI  3    MCS18   82    79   90.3   90.3   95.0  18.4   6  0  0      88     180
HT40 LGI  3    MCS19   83    60  114.7  114.7   95.0  21.3   0  0  0      13      16
HT40 LGI  3    MCS20   84    40  153.8  153.8   95.7  16.6   4  0  0    1023    2270
HT40 LGI  3    MCS21   85    30  187.9  187.9   95.6  12.6   5  0  0    7254   14918
HT40 LGI  3  C MCS22   86    27  200.1  200.1   96.7   7.4   5  0  0   16623   32247
HT40 LGI  3  B MCS23   87    24  214.8  214.8   98.9   6.8   5  0  0   44049   74685
HT40 SGI  1    MCS0    90   640   12.2   12.2  100.0   0.0   0  0  0       1       1
HT40 SGI  1    MCS1    91   320   24.4   24.4  100.0   0.0   0  0  0       1       1
HT40 SGI  1    MCS2    92   214   36.6   36.6  100.0   0.0   0  0  0       1       1
HT40 SGI  1    MCS3    93   160   48.8   48.8   95.3  19.8   5  0  0      12      20
HT40 SGI  1    MCS4    94   107   70.8   70.8   95.3  20.7   0  0  0      14      18
HT40 SGI  1    MCS5    95    80   90.3   61.0   59.9  32.1   3  0  0      33      57
HT40 SGI  1    MCS6    96    71  100.0   97.6   88.7  23.2   5  0  0     160     333
HT40 SGI  1    MCS7    97    64  107.4  107.4   95.2  20.7   3  0  0     548     802
HT40 SGI  2    MCS8   100   320   24.4   24.4   95.9  15.0   0  0  0      26      30
HT40 SGI  2    MCS9   101   160   48.8   48.8   95.7  20.0   0  0  0      14      16
HT40 SGI  2    MCS10  102   107   70.8   70.8   96.1  13.7   6  0  0      32      64
HT40 SGI  2    MCS11  103    80   90.3   90.3   95.8  19.2   6  0  0      77     152
HT40 SGI  2    MCS12  104    54  124.5  124.5   95.4  19.8   4  0  0     333     620
HT40 SGI  2    MCS13  105    40  153.8  153.8   95.2  19.4   4  0  0    1012    1897
HT40 SGI  2    MCS14  106    36  166.0  166.0   96.0  14.1   5  0  0    3403    8530
HT40 SGI  2  P MCS15  107    32  178.2  178.2   97.0   5.4   5  0  0    6684   13567
HT40 SGI  3    MCS16  110   214   36.6   36.6   96.0  17.2   0  0  0      38      47
HT40 SGI  3    MCS17  111   107   70.8   65.9   83.2  19.5   6  0  0      92     170
HT40 SGI  3    MCS18  112    71  100.0  100.0   95.1  17.3   0  0  0      26      31
HT40 SGI  3    MCS19  113    54  124.5  124.5   95.8  19.4   6  0  0     137     256
HT40 SGI  3    MCS20  114    36  166.0  166.0   95.9  10.8   4  0  0    3255    7198
HT40 SGI  3  D MCS21  115    27  200.1  200.1   95.7  11.7   6  0  0    4013   10042
HT40 SGI  3    MCS22  116    24  212.4  192.8   81.2  38.9   5  0  0   29149   58156
HT40 SGI  3  A MCS23  117    21  227.0  227.0   90.9   9.3   6 35 47  292708  347773
Driver specific notes:
- ath10k: The on-board firmware performs rate control for ath10k devices, so they will not have an rc_stats file, but they do report stats in /sys/kernel/debug/ieee80211/phy0/ath10k/fw_stats, including the current TX and RX rates. Note that the kernel needs CONFIG_ATH10K_DEBUGFS=y for this.
Measuring Performance
In most cases, the performance of a particular system function can be measured with a specialized test like iperf. When a more thorough measurement of the overall system is desired, a small and portable benchmarking suite like lmbench is more appropriate.
lmbench
lmbench is a micro-benchmark suite designed to focus attention on the basic building blocks of many common system applications, such as databases, simulations, software development, and networking. It provides a suite of benchmarks that attempt to measure the most commonly found performance bottlenecks in a wide range of system applications. lmbench is designed to identify, isolate, and reproduce these performance bottlenecks using a set of small microbenchmarks which measure the system latency and bandwidth of data movement among the processor and memory, network, file system, and disk.
Installation on Ubuntu systems can be done via sudo apt-get install lmbench. Alternatively, you can compile the source yourself, which can be found at the lmbench website.
Run the following commands to compile the source:
tar xvf lmbench3.tar.gz
cd lmbench3
mkdir SCCS; touch SCCS/s.ChangeSet
make -C src/
Running the benchmark suite is done via make results. The binaries for the individual benchmarks can be found in the bin/ directory.
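To get a feel for what lmbench's lat_* microbenchmarks measure, the basic technique (time many iterations of a cheap operation and divide) can be sketched in plain shell; this is only an illustration with far worse precision than the real binaries:

```shell
# Crude per-iteration latency: run a trivial operation n times and
# divide the elapsed nanoseconds by n (GNU date with %N assumed).
n=10000
start=$(date +%s%N)
i=0
while [ $i -lt $n ]; do i=$((i+1)); done
end=$(date +%s%N)
echo "$(( (end - start) / n )) ns per iteration"
```

lmbench's binaries use the same idea but carefully correct for loop and measurement overhead.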
The src/Makefile has additional make targets for your convenience:
# lmbench    [default] builds the benchmark suite for the current os/arch
# results    builds, configures run parameters, and runs the benchmark
# rerun      reruns the benchmark using the same parameters as last time
# scaling    reruns the benchmark using same parameters as last time,
#            except it asks what scaling value to use
# hardware   reruns the hardware benchmarks using the same parameters
# os         reruns the OS benchmarks using the same parameters
# clean      cleans out sources and run configuration
# clobber    clean and removes the bin directories
# shar       obsolete, use cd .. && make shar
# depend     builds make dependencies (needs gcc)
# debug      builds all the benchmarks with '-g' debugging flag
# assembler  builds the .s files for each benchmark
After the tests have completed, you can compare results from multiple runs with make -C results/ LIST=*
Read the man pages for the individual benchmarks in the doc/ directory, or the lmbench introduction, to learn more about the test suite.