sporadic do_IRQ: stack overflow
Hello all.
I built a terabyte RAID5 server to use as a networked file server and MythTV backend, but I haven't been able to keep it up for more than a couple of days typically, a week at most, before it crashes with some interrupt-related kernel oops. A very common one is "do_IRQ: stack overflow", but I've also seen messages about not being able to handle a kernel paging request during an interrupt, or a never-ending stack backtrace that scrolls off the screen.
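Since the full backtrace scrolls off the screen before I can read it, my plan is to capture the next oops over the network with netconsole. Something like this is what I have in mind (the IPs, MAC address, and interface name are placeholders for my LAN):

  # on the RAID box: send console messages from 192.168.1.10 via eth0
  # to the logging machine at 192.168.1.20 (MAC 00:11:22:33:44:55)
  modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55

  # on the logging machine: collect the UDP console messages
  nc -u -l -p 6666 | tee oops.log

If I manage to catch a complete trace that way, I'll post it.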
The really bizarre thing is that I've swapped out every piece of hardware -- attached the drives to a different motherboard and CPU using a different SATA host controller -- and the problem stayed. I even swapped out the PSU and tried a different house circuit in case the power was dirty.
The two setups are:

Setup 1:
  Gigabyte GA-8IPE1000 Pro-G
  Pentium 4 3.0E
  2GB Kingston memory
  SYBA SD-SATA-4P PCI SATA controller card
  nVidia GeForce FX 5200

Setup 2:
  Asus P4P800-E Deluxe
  2.4GHz Pentium 4
  1GB Corsair memory
  Promise SATAII150 TX4 PCI SATA controller card
  ATI Radeon 9550
I removed all the MythTV tuner cards to see if the problem was coupled to them -- it isn't. I also tried both SATA cards in both motherboards and got the same behavior with either.
The boot drive is a 160GB Samsung, and the RAID array is made up of five 250GB Seagate Barracudas.
For software, I'm running a Fedora Core 4 install with updates from atrpms.net (for the MythTV stuff). Currently I'm running kernel 2.6.14-1.1653_FC4smp, but the problem has been with me since at least the 2.6.12 days. It also shows up when I run the non-SMP kernel. (Actually, I'm pretty sure I tested that last point; I'll test it again to make sure.)
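One thing I haven't ruled out: I've read that the Fedora i386 kernels are built with 4KB stacks (CONFIG_4KSTACKS), and that deep RAID/filesystem call chains can be sensitive to that. I can at least confirm what my kernel was built with from the installed config (the path below is just my current kernel version):

  grep -E 'CONFIG_(4KSTACKS|DEBUG_STACKOVERFLOW)' /boot/config-2.6.14-1.1653_FC4smp

I don't know yet whether that's actually related to the crashes here.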
I'm at my wits' end trying to figure out what is causing this and how to fix it. I've got four other Linux machines with pretty much the same software setup -- including the machine I was swapping parts with as described above -- and they all work absolutely fine. It's just the machine with the RAID5 array that's giving me grief.
If anyone can offer any help, suggestions, or, well, anything at all, I'd be very grateful.
Thanks,
-William