I've set up a Gentoo system with 15x3TB SATA drives on two Supermicro AOC-SAT2-MV8 controllers. Individually, each drive is capable of ~160MB/s reads and writes (tested with hdparm and dd on the Gentoo system). After building a simple RAID6 array across all 15 drives (39TB effective) I find the PCI-X bus saturates at 200MB/s: I see 200MB/s read and 200MB/s write. It's an older Supermicro P4SCi motherboard, so that's expected. Each drive maxes out at ~40MB/s when they are all being accessed in the array.
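For reference, the per-drive tests were along these lines (device names here are examples, not my exact invocations):

```shell
# Buffered sequential read benchmark on a single drive
hdparm -t /dev/sda

# Raw sequential read with dd, bypassing the page cache
dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct
```

The write side was tested the same way with dd writing to the drive, which is why I trust the ~160MB/s per-drive figure.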
The same system has two gigabit ethernet adapters: one is connected to the PCI bus and the other to the northbridge. I've set up the system as an iSCSI target and assigned each ethernet adapter an IP address on a separate gigabit network. Using netperf I can saturate both ports at ~110MB/s (full gigabit speed) simultaneously.
Now for my problem: I've set up a Debian initiator with multipathing across both networks. I can easily saturate either of the two paths with read tests, but when I write to the iSCSI target I can never get above 7MB/s. The write performance is horrible.
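The write test from the initiator is essentially this (the mpath device name is a placeholder; the actual name comes from `multipath -ll` on your system):

```shell
# Show the multipath topology and both active paths
multipath -ll

# Sequential write through the multipath device, bypassing the page cache
dd if=/dev/zero of=/dev/mapper/mpatha bs=1M count=1024 oflag=direct
```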
I'm confident it isn't a hardware issue, because on the iSCSI target itself I can use dd to read from the array at 157MB/s while saturating eth0 at 117MB/s and eth1 at 62MB/s, all at the same time.
From the initiator I would expect ~157MB/s for both sequential reads and sequential writes to the array.
Using atop on the target I can see CPU usage is ~25% system and 75% idle while reading at ~80MB/s, with IRQ usage around 5%. But while writing to the target I see the same ~25% CPU usage, yet IRQ usage jumps to 75%!
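To narrow down which device is generating the interrupt storm, a rough approach is to diff `/proc/interrupts` across a window while the write test runs (file paths here are arbitrary; lengthen the sleep to cover your actual test):

```shell
# Snapshot per-IRQ counts before and after a test window; the columns are
# IRQ number, one count per CPU, and the device name on the right.
cat /proc/interrupts > /tmp/irq_before
sleep 1   # run the write test during this window (extend as needed)
cat /proc/interrupts > /tmp/irq_after
diff /tmp/irq_before /tmp/irq_after | head
```

Whichever line's count grows fastest (a NIC, or the SATA controllers) points at where the IRQ time is going.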
Why does my IRQ usage jump to 75% while writing to the target from the initiator? What can I do to prevent that? Is that even the cause of the problem?