IDE RAID performance on a slower system abnormally low
Linux - Server: this forum is for the discussion of Linux software used in a server-related context.
I replaced the hardware on my server a few months back and I have been noticing some significant slow-downs since. I have a server that I primarily use as a NAS device running Samba. I have it running Gentoo and it is all stable, but since I replaced the hardware, it is running dog slow...
The hardware that I had before was an Athlon XP 2000+, 1.5GB DDR400 RAM, (4) 500GB SATA2 HDDs attached to a PCI RAID card in a RAID5 configuration, and (1) 80GB PATA HDD for root, home, and the other system partitions. It was running on an MSI motherboard that I bought back in 2003; it had been having some intermittent issues, and since it was old hardware I assumed it was going bad. With this hardware I was able to get approximately 20MB/s transfer speeds to my desktop over gigabit, but with my new motherboard I'm only able to get 8-10MB/s max.
The new motherboard is a VIA motherboard with onboard dual gigabit ethernet ports, 1GB DDR400, and the same hard drives/RAID card. Overall, I like how well the system runs, but the hard drive performance has been in the toilet since I switched the system around. I am just using genkernel on the system because I was too lazy to custom configure the system, and I didn't reinstall when I replaced the motherboard. Here is the output of lspci, hdparm, and ethtool.
lspci
00:00.0 Host bridge: VIA Technologies, Inc. CX700 Host Bridge (rev 03)
00:00.1 Host bridge: VIA Technologies, Inc. CX700 Host Bridge
00:00.2 Host bridge: VIA Technologies, Inc. CX700 Host Bridge
00:00.3 Host bridge: VIA Technologies, Inc. CX700 Host Bridge
00:00.4 Host bridge: VIA Technologies, Inc. CX700 Host Bridge
00:00.7 Host bridge: VIA Technologies, Inc. CX700 Host Bridge
00:01.0 PCI bridge: VIA Technologies, Inc. VT8237 PCI Bridge
00:0f.0 IDE interface: VIA Technologies, Inc. CX700M2 IDE
00:10.0 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 90)
00:10.1 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 90)
00:10.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 90)
00:10.4 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 90)
00:11.0 ISA bridge: VIA Technologies, Inc. CX700 PCI to ISA Bridge
00:11.7 Host bridge: VIA Technologies, Inc. CX700 Internal Module Bus
00:13.0 Host bridge: VIA Technologies, Inc. CX700 Host Bridge
00:13.1 PCI bridge: VIA Technologies, Inc. CX700 PCI to PCI Bridge
01:00.0 VGA compatible controller: VIA Technologies, Inc. CX700M2 UniChrome PRO II Graphics (rev 03)
02:01.0 Mass storage controller: Promise Technology, Inc. PDC40718 (SATA 300 TX4) (rev 02)
02:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10)
02:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10)
80:01.0 Audio device: VIA Technologies, Inc. VIA High Definition Audio Controller (rev 10)
ethtool:
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000033 (51)
Link detected: yes
hdparm -Tt /dev/md0 (my raid5 device)
/dev/md0:
Timing cached reads: 440 MB in 2.00 seconds = 219.71 MB/sec
Timing buffered disk reads: 180 MB in 3.03 seconds = 59.46 MB/sec
hdparm on my root disk (not part of the raid)
/dev/hdc:
Timing cached reads: 444 MB in 2.00 seconds = 221.53 MB/sec
Timing buffered disk reads: 130 MB in 3.02 seconds = 43.06 MB/sec
Are these results normal? I was expecting the system to be slower, but not this much slower. For a point of comparison, I've included the output of hdparm on my desktop system (Intel C2D 2.33GHz, 4GB DDR2, (3) 250GB PATA disks in a raid5 with (1) 300gb disk as root)
hdparm on my desktop:
/dev/md0:
Timing cached reads: 5474 MB in 2.00 seconds = 2740.55 MB/sec
Timing buffered disk reads: 358 MB in 3.00 seconds = 119.17 MB/sec
Any help or guidance that could be provided would be greatly appreciated! I'm at a bit of a loss as to how to remedy this situation and it is quite annoying!
Arrgg... hdparm is very unreliable for benchmarking, and you definitely shouldn't run hdparm benchmarks against the RAID device itself, as the software RAID has no clue what you're trying to do. Here's an example of me running hdparm repeatedly on my RAID5 (four 2.5" HDDs, 250GB each):
...which will write 2GB of zeros to your RAID; monitor the write rate with something else (e.g. gkrellm2). That gives me about 80MB/s on my 2.5" HDDs.
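The exact command isn't quoted above, but a minimal sketch of that kind of sequential-write test could look like this (the mount point /mnt/raid is a placeholder for wherever the md0 array is mounted; conv=fdatasync forces the data to actually reach the disks so the page cache doesn't inflate the number):

```shell
# Write 2GB of zeros to a file on the array (bs=1M count=2048 = 2GiB).
# conv=fdatasync makes dd flush to disk before reporting, so the figure
# reflects real array throughput rather than RAM speed.
dd if=/dev/zero of=/mnt/raid/ddtest.bin bs=1M count=2048 conv=fdatasync

# Remove the test file afterwards.
rm /mnt/raid/ddtest.bin
```

While it runs, a monitor such as gkrellm2 (or `iostat -x 1`) shows the actual write rate hitting the member disks.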
I'm saying this because I had serious problems (which I was never able to solve) with NFS on Gentoo. Somehow, the bigger the file, the longer it took to read/write it, with huge I-am-waiting-for-something gaps. It might have had something to do with timeouts on NFS reads/writes.
So, have a deeper look at your raid before pointing the finger at it.
(And only chickens use genkernel.)
(And only people with suicidal tendencies don't look at the kernel config after replacing the motherboard - sorry!)
Last edited by Pearlseattle; 10-16-2008 at 02:38 PM.
Ok, I recompiled my kernel using the latest sources from Portage, but I'm still getting really low performance (7-9MB/s transfer speeds). The way I'm testing this is by using scp over a gigabit connection from my desktop to my server. When I copy from my desktop to my laptop (still over gigabit), copies top out around 25-30MB/s, and with my old hardware I could get around 20MB/s using a PCI gigabit network card. I suppose the issue could be related to network drivers, but I'm not sure; I should be using the right driver, and ethtool reports that I'm connected at gigabit. Ever since I replaced the motherboard I have been getting really slow speeds, so it may also be that I'm running the array off a PCI IDE RAID card, but I don't know.
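One way to rule the NIC and its driver in or out is to push data over a raw TCP socket, so that neither the disks nor ssh's encryption are involved. A rough sketch using netcat (the hostname and port here are placeholders, and the flag syntax differs between the traditional and OpenBSD netcat variants):

```shell
# On the server: listen on an arbitrary port and discard whatever arrives.
# (Traditional netcat: `nc -l -p 5001`; OpenBSD netcat: `nc -l 5001`.)
nc -l -p 5001 > /dev/null

# On the desktop: stream 1GB of zeros straight from memory over the wire.
# Divide 1024MB by the elapsed time to get the raw TCP throughput.
time dd if=/dev/zero bs=1M count=1024 | nc server-hostname 5001
```

If this tops out well below the ~110MB/s gigabit wire speed, the bottleneck is in the network path (NIC, driver, or the CPU servicing it), not in the RAID.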
So, you replaced an Athlon XP 2000+ with a VIA Eden CPU? Well, if that's the case, the low speed is most probably due to CPU usage: ssh-encrypted connections (which includes scp) require a lot of computation.
VIA CPUs are not very powerful - I have a PC with a VIA Eden 730MHz, and the maximum transfer rate I can achieve using scp to copy a file is 4.9MB/s.
If you want to achieve higher throughput, you'll have to buy another motherboard or use a transfer protocol that doesn't encrypt the stream, like NFS or Samba/CIFS.
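A quick way to sanity-check the CPU-bound theory is to benchmark the cipher alone with OpenSSL; if the per-second figure for a typical scp cipher lands near the observed transfer rate, the CPU really is the bottleneck. A sketch (which cipher scp actually negotiates depends on your OpenSSH configuration, so aes-128-cbc is an assumption):

```shell
# Measure how many bytes per second this CPU can encrypt with AES-128-CBC,
# a cipher commonly used by ssh/scp. openssl prints throughput for several
# buffer sizes; compare the larger-buffer figures against your scp speed.
openssl speed aes-128-cbc
```

If the reported throughput is only a little above the scp rate you see, no amount of disk or network tuning will help.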
Greetings.
Thanks for the response. I was slowly coming to that realization myself actually. I copied some files using smb and was able to get up to 15 MB/s. I guess my motherboard just can't handle gigabit speeds even though it has dual gigabit network cards. Oh well. Thanks for the response though! I guess this solves the mystery.
You're welcome!
Yeah, it's a pity that the motherboard has that kind of limitation.
Well, as a last resort, you could check whether your motherboard has a built-in encryption chip and, if so, see whether it's possible to couple it with SSH (somehow get & install the drivers and build ssh to make use of it) to take advantage of the hardware. I do have such a chip on mine, but to be honest I haven't looked into it yet - I don't have the need for the time being.
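For what it's worth, VIA CPUs of this era often include the PadLock engine for hardware AES; whether the CPU on this particular CX700 board exposes it, and whether the installed OpenSSL build ships the padlock engine, are assumptions that would need verifying. A sketch of how one might probe for it:

```shell
# PadLock-capable VIA CPUs advertise flags such as "ace" (AES) and "phe"
# (SHA hashing) in /proc/cpuinfo.
grep -Eo 'ace|phe' /proc/cpuinfo | sort -u

# If the OpenSSL build includes the padlock engine, it is listed here...
openssl engine

# ...and AES throughput can be compared with and without it.
openssl speed aes-128-cbc
openssl speed -engine padlock aes-128-cbc
```

If the engine is present and fast, the remaining (nontrivial) step is getting ssh/scp to route its crypto through OpenSSL's engine support.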