Multi-Core Processing
Gateworks has single board computers with single-, dual-, and quad-core processors.
This page applies only to those boards that use dual- and quad-core processors.
We encourage customers with a Ventana board to leverage the IMX community at Freescale https://community.freescale.com/community/imx
References:
- https://github.com/torvalds/linux/blob/master/Documentation/networking/scaling.txt
- http://www.embedded.com/design/embedded/4236957/2/Multicore-networking-in-L
- See also our wiki page for Performance Tuning
Sample 'top' command showing processor usage:
root@OpenWrt:/# top
Mem: 37212K used, 728856K free, 0K shrd, 1212K buff, 8100K cached
CPU:   0% usr   0% sys   0% nic 100% idle   0% io   0% irq   0% sirq
Load average: 0.01 0.02 0.05 1/78 23125
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
23125  1174 root     R     1216   0%   1   0% top
 2575     1 root     S     8228   1%   0   0% /usr/sbin/collectd
 2617     1 root     S     2880   0%   2   0% batmand ath0
 1423     1 root     S     1224   0%   3   0% /sbin/syslogd -C16
 1174     1 root     S     1224   0%   3   0% /bin/ash --login
    1     0 root     S     1220   0%   0   0% init
 2630     1 root     S     1216   0%   0   0% /usr/sbin/ntpd -n -p 0.openwrt.po
 2491     1 root     S     1216   0%   3   0% /sbin/watchdog -t 5 /dev/watchdog
 2468     1 root     S     1208   0%   1   0% /usr/sbin/telnetd -l /bin/login.s
 1425     1 root     S     1204   0%   2   0% /sbin/klogd
 1441     1 root     S     1120   0%   2   0% /sbin/netifd
 1434     1 root     S      944   0%   3   0% /sbin/procd
 2335  1434 root     S      908   0%   1   0% /usr/sbin/dropbear -F -P /var/run
 2476     1 root     S      868   0%   2   0% /usr/sbin/uhttpd -f -h /www -r Op
 1427     1 root     S      836   0%   2   0% /sbin/hotplug2 --override --persi
 2558     1 nobody   S      768   0%   1   0% /usr/sbin/dnsmasq -C /var/etc/dns
 2640     1 root     S      748   0%   3   0% /usr/sbin/vnstatd -d
 1437  1434 root     S <    668   0%   2   0% ubusd
  528     2 root     SW       0   0%   0   0% [kworker/0:1]
  620     2 root     SW       0   0%   0   0% [kworker/u:3]
SMP Affinity (interrupt steering)
Symmetric multiprocessing (SMP) allows interrupt handlers to be executed on any available CPU core. The 'affinity' of an interrupt handler can be read or set via /proc/irq/<interrupt>/smp_affinity, which is a bitmask of the CPU cores the interrupt handler may run on. By default the affinity for each handler allows all available cores (i.e. for a dual-core system a value of 3 means bit 0 (CPU0) and bit 1 (CPU1) are both set). If you want a particular interrupt handler to always run on a specific CPU you can change that bitmask. To see what interrupt handlers are configured and what interrupt number they use, look at /proc/interrupts.
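As a minimal sketch (the IRQ number 150 here is taken from the Ventana ethernet controller in the /proc/interrupts sample below; substitute the interrupt number of your own device), you could pin an interrupt to CPU1 like this:

cat /proc/irq/150/smp_affinity        ;# show the current affinity bitmask
echo 2 > /proc/irq/150/smp_affinity   ;# restrict IRQ 150 to CPU1 only (bit 1 = mask 0x2)
cat /proc/irq/150/smp_affinity        ;# verify the new bitmask reads back as 2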
References:
- http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux
- http://elinux.org/images/4/43/Understanding_And_Using_SMP_Multicore_Processors_Anderson.pdf
Sample of command 'cat /proc/interrupts':
root@OpenWrt:/# cat /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
 29:        6498       6713       4601      11306       GIC  twd
 34:           1          0          0          0       GIC  sdma
 45:        6173          0          0          0       GIC  mxs-dma
 47:           1          0          0          0       GIC  bch
 56:        3070          0          0          0       GIC  mmc0
 59:          80          0          0          0       GIC  IMX-uart
 68:          32          0          0          0       GIC  21a0000.i2c
 69:           0          0          0          0       GIC  21a4000.i2c
 70:          91          0          0          0       GIC  21a8000.i2c
 72:          29          0          0          0       GIC  ci13xxx_imx
 78:           0          0          0          0       GIC  ssi@02028000
 87:           9          0          0          0       GIC  i.MX Timer Tick
150:        3435          0          0          0       GIC  2188000.ethernet
151:           0          0          0          0       GIC  2188000.ethernet
153:           0          0          0          0       GIC  ath9k
154:           0          0          0          0       GIC  ath9k, ath9k
155:           0          0          0          0       GIC  ath9k
352:           0          0          0          0  gpio-mxc  mmc0
407:           0          0          0          0       IPU  imx_drm
412:           0          0          0          0       IPU  imx_drm
567:           0          0          0          0       IPU  imx_drm
572:           0          0          0          0       IPU  imx_drm
IPI0:          0          0          0          0  CPU wakeup interrupts
IPI1:          0          0          0          0  Timer broadcast interrupts
IPI2:       5027       5875       5953       5026  Rescheduling interrupts
IPI3:          5          4          4          5  Function call interrupts
IPI4:          3          6          6          3  Single function call interrupts
IPI5:          0          0          0          0  CPU stop interrupts
Err:           0
PCI Interrupt steering
The PCI specification calls out 4 interrupts (INTA/INTB/INTC/INTD) that are routed to PCI slots. Each slot gets two interrupts, and they are shared with other slots depending on board layout (in a technique called 'swizzling' or 'barber-poling'). This means that if you have a board with 4 PCI slots you can have a single unique interrupt for each slot; however, if you have 5 or more slots, the extra slots will share an interrupt with another slot. If you can populate your slots such that you have unique interrupts, you can use SMP affinity (above) to configure different CPU cores for the interrupt handlers of those slots, which can greatly help performance if the bottleneck is interrupt processing (something the Linux 'top' command will usually help determine).
Note that performance gains are difficult to quantify as there are many factors at play. In general, you can 'tune' your system by using 'top', which shows CPU utilization (per core, if you press the '1' key while it is running), and moving things around to better balance the load: if one core is being underutilized, try to spread work onto it, as in the sketch below.
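For example (a sketch only; the IRQ numbers 154 and 155 are taken from the Ventana /proc/interrupts sample above, where they belong to ath9k radios, and will differ on your board), you could split two radio interrupt handlers across two cores:

echo 1 > /proc/irq/154/smp_affinity   ;# handle the radio on IRQ 154 on CPU0 (mask 0x1)
echo 2 > /proc/irq/155/smp_affinity   ;# handle the radio on IRQ 155 on CPU1 (mask 0x2)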
Ventana
The IMX6 SoC used on the Gateworks Ventana product family has 4 'legacy' interrupts to support the PCI INTA/INTB/INTC/INTD interrupts:
- 152 pin1/INTD (also used as the MSI int)
- 153 pin2/INTC
- 154 pin3/INTB
- 155 pin4/INTA
- Note that these interrupt numbers can change across kernel versions.
Which slot is routed to which interrupt depends on the baseboard and expansion mezzanine board stackup. The best way to determine the mapping for your particular board stackup is to populate a device in one slot at a time and check /proc/interrupts for the mapping.
Depending on your interrupt routing (board stackup), device slot placement (what device is in what slot), and CPU (number of cores) you can then choose to spread interrupts according to your application needs.
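One way to identify which interrupt a populated slot uses (a sketch; it assumes the device in the slot, e.g. a network card, is generating some activity, and uses the IRQ numbers 152-155 listed above, which can change across kernels) is to compare the legacy interrupt counts before and after:

grep -E '^ *15[2-5]:' /proc/interrupts > /tmp/before   ;# snapshot IRQs 152-155
sleep 10                                               ;# generate device activity meanwhile
grep -E '^ *15[2-5]:' /proc/interrupts > /tmp/after
diff /tmp/before /tmp/after                            ;# the IRQ that incremented belongs to your slot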
Specifying and determining CPU for a userspace process
By default, userspace processes can be scheduled on all CPU cores.
The 'taskset' application can be used to specify an smp_affinity for a specific task. As above, the affinity is a bitmask specifying which CPUs the task can run on (e.g. 0x3 for CPU0|CPU1, 0x1 for CPU0, 0x2 for CPU1):
- set the affinity for an existing PID (e.g. PID 1):
taskset -p 0x1 1   ;# set PID1 to only run on CPU0
taskset -p 0x2 1   ;# set PID1 to only run on CPU1
taskset -p 0x3 1   ;# set PID1 to run on either CPU0 or CPU1
- launch a process with a specific affinity:
taskset 0x1 top ;# run top on CPU0
You can obtain details about which CPUs a process is allowed to run on via /proc or tools like top:
- using /proc/<pid>/status to see which CPUs PID 1 is allowed on:
# grep Cpus /proc/1/status   ;# see current affinity
Cpus_allowed:   3
Cpus_allowed_list:      0-1
# taskset -p 0x1 1           ;# set affinity to CPU0
pid 1's current affinity mask: 3
pid 1's new affinity mask: 1
# grep Cpus /proc/1/status   ;# verify the new affinity
Cpus_allowed:   1
Cpus_allowed_list:      0
- using top (see the CPU column):
# top -n1 ;# 1 iteration of top
Mem: 19704K used, 236320K free, 0K shrd, 2892K buff, 4460K cached
CPU:   0% usr   6% sys   0% nic  11% idle   0% io   0% irq  81% sirq
Load average: 0.05 0.11 0.13 1/44 3590
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
 3588  1423 root     R     1204   0%   1  25% top -n1
 3073     2 root     SW       0   0%   1   5% [kworker/u:2]
  695     1 root     S     1212   0%   1   0% /sbin/syslogd -C16
    1     0 root     S     1208   0%   0   0% init
 1423  1345 root     S     1208   0%   1   0% /bin/ash --login
 1493  1345 root     S     1208   0%   0   0% /bin/ash --login
  627     1 root     S     1208   0%   0   0% /bin/ash --login
 1345     1 root     S     1204   0%   0   0% /usr/sbin/telnetd -l /bin/login.s
 1816   627 root     S     1200   0%   0   0% {doup} /bin/sh ./doup
  697     1 root     S     1192   0%   0   0% /sbin/klogd
 3590  1816 root     S     1192   0%   0   0% sleep 1
 1126     1 root     S     1140   0%   0   0% hostapd -P /var/run/wifi-phy0.pid
  719     1 root     S     1116   0%   1   0% /sbin/netifd
  706     1 root     S      940   0%   0   0% /sbin/procd
  699     1 root     S      700   0%   0   0% /sbin/hotplug2 --override --persi
  712   706 root     S <    664   0%   0   0% ubusd
    6     2 root     SW       0   0%   0   0% [kworker/u:0]
   16     2 root     SW       0   0%   1   0% [kworker/u:1]
    8     2 root     SW       0   0%   0   0% [migration/0]
  390     2 root     SW       0   0%   1   0% [kworker/1:1]
Network Packet Steering
Receive Packet Steering (RPS) uses a hashing algorithm that takes the IP address and port to generate a hash index, then uses a hash table that maps the hash index to a CPU number. The result is that the same CPU is always used for a given IP address/port combination. This is by design, so that the number of cache hits increases while processing packets of the same network stream. To realize the benefits of RPS you need to use multiple streams; you can use the '-P' option of iperf to run such a test, as sketched below.
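For example (a sketch; it assumes the interface is named eth0, a 4-core CPU, and a hypothetical iperf server at 192.168.1.1):

echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus   ;# let RPS steer eth0 receive work across all 4 cores (mask 0xf)
iperf -c 192.168.1.1 -P 4                           ;# 4 parallel streams so multiple hash buckets (and CPUs) are used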
Single Core Processing
There may be times when a developer needs to use only one CPU because of driver issues, etc. We have found in many instances, especially with wireless applications, that the workload is interrupt intensive: the CPU runs at very low utilization while the interrupt controller is saturated. Since the wireless drivers can only operate on a single core, any reduction in overall interrupt traffic helps performance.
To do this, we will modify the bootargs in the bootloader.
Break into the bootloader by pressing a key at bootup:
Modify the variable bootargs to include maxcpus=1 at the end of the line as shown below:
setenv bootargs ${bootargs} maxcpus=1; saveenv
Note /proc/interrupts with two cores:
root@OpenWrt:/# cat /proc/interrupts
            CPU0       CPU1
 29:        7231       7367   GIC  twd
 33:         884          0   GIC  mmc0
 39:         206          0   GIC  cns3xxx-i2c
 45:         168          0   GIC  serial
 49:           3          0   GIC  gig_stat
 51:          82        168   GIC  gig_switch
 63:           0          0   GIC  dwc_otg, dwc_otg_pcd, dwc_otg_hcd:usb1
 64:           0          0   GIC  ehci_hcd:usb2
 89:          28          0   GIC  timer
 91:           1          0   GIC  ohci_hcd:usb3
FIQ:         353        441   cns3xxx-fiq
IPI0:          0          0   CPU wakeup interrupts
IPI1:          0          1   Timer broadcast interrupts
IPI2:       2102       2420   Rescheduling interrupts
IPI3:          0          0   Function call interrupts
IPI4:       1775       1722   Single function call interrupts
IPI5:          0          0   CPU stop interrupts
Err:           0
Note /proc/interrupts with one core:
root@OpenWrt:/# cat /proc/interrupts
            CPU0
 29:       12577   GIC  twd
 33:        1469   GIC  mmc0
 39:         398   GIC  cns3xxx-i2c
 45:         216   GIC  serial
 49:           5   GIC  gig_stat
 51:         489   GIC  gig_switch
 63:           0   GIC  dwc_otg, dwc_otg_pcd, dwc_otg_hcd:usb1
 64:           0   GIC  ehci_hcd:usb2
 89:          28   GIC  timer
 91:           1   GIC  ohci_hcd:usb3
FIQ:           0          0   cns3xxx-fiq
IPI0:          0   CPU wakeup interrupts
IPI1:          0   Timer broadcast interrupts
IPI2:          0   Rescheduling interrupts
IPI3:          0   Function call interrupts
IPI4:          0   Single function call interrupts
IPI5:          0   CPU stop interrupts
Err:           0
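After rebooting with maxcpus=1 you can also confirm that only one core was brought online, for example:

cat /sys/devices/system/cpu/online   ;# should report just '0'
grep -c ^processor /proc/cpuinfo     ;# should report '1'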
Note that the top command now shows all processes on CPU0 (see the attached cpu0.png screenshot).