Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
I have a 3Ware 6000 series RAID controller connected to four 80 GB drives [RAID 5] and two 40 GB drives [RAID 1 mirror], and I am booting FreeBSD off a small 8 GB hard drive (the 8 GB holds root, swap, var, tmp and usr).
The question/problem is: when dd-ing 'verybigfile' (a 1400 MB file) from /usr to the RAID 5 or the mirror, I am only getting 35 MB/s.
Yes, that is still a decent speed. But I am building another server with the same controller and four 160 GB drives, and I do not want the same problem there, as those drives will be hammered all the time.
What is causing this lack of performance? Shouldn't 'verybigfile' be moving at close to 80 MB/s?
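As a rough sanity check on that 80 MB/s expectation (my own back-of-the-envelope numbers, not measurements; the 30 MB/s per-disk figure is an assumption for a 5400 RPM ATA drive of that era):

```python
# Back-of-the-envelope RAID throughput estimates (illustrative numbers only).

def raid5_read_estimate(n_disks: int, per_disk_mbs: float) -> float:
    """Ideal sequential read: data is striped across n-1 disks' worth of capacity,
    so the best case is roughly (n - 1) disks streaming in parallel."""
    return (n_disks - 1) * per_disk_mbs

def raid1_read_estimate(per_disk_mbs: float) -> float:
    """A simple two-disk mirror reads no faster than one disk
    (unless the controller load-balances reads)."""
    return per_disk_mbs

per_disk = 30.0  # assumed sustained MB/s for one 5400 RPM drive
print(raid5_read_estimate(4, per_disk))  # ideal ceiling for the 4x80GB RAID 5
print(raid1_read_estimate(per_disk))     # ceiling for the 2x40GB mirror
```

So ~90 MB/s is the ideal ceiling for the stripe; anything feeding it slower than that (like the boot drive) becomes the bottleneck.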
What does hdparm, or really whatever FreeBSD uses instead, report as the speed of that 8 GB drive? Unless that's a nuclear-fast 8 GB drive, its interface is going to be ATA33 or ATA66, which makes sense for a read of 30 MB/s. Actually that's still a little high; it should start to hork out around 20 MB/s, especially with a dd! A byte-for-byte copy is a pretty mean thing to do...
I feel I need to correct some statements I made. The 8 gig is not an 8 gig but a 40 gig. And the RAID was writing data to itself at 35 MB/s, and reading only a bit faster than that.
Shouldn't the RAID be getting much faster results? Something near 80 MB/s?
Yeah, this makes more sense now; I was wondering how you were getting a sustained 35 MB/s pull off an 8 GB drive. The 40 runs at what, ATA66? ATA100? Either way, that's the bottleneck in the transfer: the RAID array is limited to the data it can read off the 40 GB drive.
When we cat straight from a file to /dev/null we get 35-36 MB/s on reads; when we write, we get just about 7 MB/s on the 3Ware 6000 series card, to a RAID 5 stripe of four 80 GB 5400 RPM drives.
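For anyone who wants to reproduce those numbers, a rough sequential write/read test can be done with dd against a scratch file. The path and size here are placeholders; on the real array you would point it at a file on the RAID mount, ideally bigger than RAM so the read isn't served from cache:

```shell
#!/bin/sh
# Rough sequential throughput check with dd.
# /tmp/ddbench.bin is a placeholder; use a file on the RAID mount for real tests.
BENCH=/tmp/ddbench.bin

# Write test: stream 64 MB of zeros through the filesystem.
# (GNU dd syntax; FreeBSD's dd wants bs=1m instead of bs=1M.)
dd if=/dev/zero of="$BENCH" bs=1M count=64

# Read test: pull the file back and discard it, like the cat-to-/dev/null test.
# Note: a file this small may come straight from the buffer cache.
dd if="$BENCH" of=/dev/null bs=1M

rm -f "$BENCH"
```

dd prints the elapsed time and throughput on stderr when it finishes each pass.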
There are many things that affect storage bandwidth:
1) The length and type of cable. For IDE the maximum cable length is 18 inches. An 80-conductor cable gives you better data integrity, and round cable has less crosstalk for higher data throughput.
2) Are the hard drives getting enough power? Check that all drives are getting 12 volts and 5 volts, using a multimeter with an input impedance of 40 megohms or more. If the voltages are low, you need a beefier power supply.
3) Are any devices giving your computer trouble? Check for driver updates.
4) What is the temperature of the hard drives, and is it within spec? A change in temperature affects throughput.
5) What manufacturer and model are the hard drives? Drives from Maxtor, Western Digital and Seagate use some CPU time to control them even when set to DMA; IBM drives use the least.
6) What filesystem are you using: ext3, JFS, XFS, etc.?
7) What block size are you using for the RAID: 4K, 8K, 16K, 32K? You have to experiment with it.
8) What bus are you using, 32-bit or 64-bit? What is the speed of the bus, and does it vary?
9) The characteristics of the drives (platters, space between platters, RPM, access time, etc.) also affect throughput.
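Point 7 is easy to experiment with. Here's a rough sketch of a dd loop that times the same 32 MB write at several block sizes (the /tmp path is a stand-in for a file on the array; the filesystem cache will flatten the differences on small files like this):

```shell
#!/bin/sh
# Sketch: time the same 32 MB write with different block sizes.
# /tmp/bstest.bin stands in for a file on the RAID array.
for bs in 4k 8k 16k 32k; do
    echo "block size $bs:"
    # Scale count so the total written is 32 MB for every block size.
    case $bs in
        4k)  count=8192 ;;
        8k)  count=4096 ;;
        16k) count=2048 ;;
        32k) count=1024 ;;
    esac
    dd if=/dev/zero of=/tmp/bstest.bin bs=$bs count=$count 2>&1 | tail -1
done
rm -f /tmp/bstest.bin
```

Whichever block size matches the array's stripe/chunk size usually comes out ahead.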
Keep your network in mind too. 100 Mbit Ethernet cannot transfer much more than 10 megabytes per second. 1000 Mbit Ethernet can transfer about 119 megabytes per second, but only if you are using 64-bit PCI or Intel's CSA bus.
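Those figures follow from simple arithmetic: 1000 Mbit/s is 125 MB/s raw, and a ballpark 5% goes to Ethernet/IP/TCP framing (the overhead fraction here is an assumed round number, not a measurement):

```python
# Why gigabit Ethernet tops out near 119 MB/s (framing overhead is an assumed 5%).

def ethernet_payload_mbs(link_mbit: float, overhead_frac: float = 0.05) -> float:
    """Usable MB/s on the wire: bits to bytes, minus protocol framing overhead."""
    raw_mbs = link_mbit / 8            # 8 bits per byte
    return raw_mbs * (1 - overhead_frac)

print(round(ethernet_payload_mbs(100)))    # ~12 MB/s raw; ~10 MB/s in practice
print(round(ethernet_payload_mbs(1000)))   # ~119 MB/s
```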
You can use mplayer for testing, but that only exercises continuous (sequential) transfer. In the server world you need to test random transfer as well; random throughput is much lower.
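A crude way to see the sequential-versus-random gap with nothing but dd (a sketch against a scratch file; on real hardware use a file much larger than RAM so the random pass actually seeks):

```shell
#!/bin/sh
# Sketch: sequential vs random reads against the same scratch file.
F=/tmp/randtest.bin
dd if=/dev/zero of="$F" bs=1M count=32 2>/dev/null   # 32 MB test file

# Sequential: one streaming pass.
dd if="$F" of=/dev/null bs=1M 2>&1 | tail -1

# Random: 256 single 4 KB reads at scattered offsets (seek via skip=).
i=0
while [ $i -lt 256 ]; do
    # Pseudo-random block number in [0, 8191] (32 MB / 4 KB blocks).
    blk=$(( (i * 37 + 11) % 8192 ))
    dd if="$F" of=/dev/null bs=4k skip=$blk count=1 2>/dev/null
    i=$((i + 1))
done
echo "random pass done"
rm -f "$F"
```

Time the two passes; on spinning disks the random pass transfers a fraction of the sequential rate.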
Falieson,
I've been doing some research into potential RAID controllers, and here's what I can tell you about the performance you're seeing. (Although please don't ask me for URLs or benchies.)
Nearly every IDE "hardware" RAID controller is based on a bit of tricky BIOS magic: the CPU still accesses the actual drives directly to do reads and writes. The 3Ware cards are apparently quite unique. They have a special ASIC which handles the disk I/O, providing real hardware RAID. Great! What's not so great is that the cards have only 1 or 2 MB of zero-latency SRAM (not SDRAM) for the ASIC to use. From what I've read this has caused problems with RAID 5, because the XOR that produces the parity is bottlenecked by this small amount of memory, reducing throughput.
Supposedly sustained/serial writes don't suffer too much, but lots of random writes are fairly poor. Perhaps your filesystem choice is affecting the way the file is written, causing the poor performance. At the end of the day it all evens out, as the 3Ware has superb read performance, and all the server-grade (Web, database, etc.) tests based on real disk usage patterns show it holding its own against comparable cards.
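For anyone unfamiliar with why the XOR matters: RAID 5 keeps one parity block per stripe, and every small write forces the controller to recompute it, which is exactly where a tiny on-card buffer hurts. A toy sketch of the parity math:

```python
# Toy RAID 5 parity: the parity block is the XOR of the data blocks in a stripe.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # three data blocks in one stripe
parity = xor_blocks(d0, d1, d2)

# Small-write penalty: updating d1 means XORing old data, new data and old
# parity (a read-modify-write cycle), not just writing the new block.
new_d1 = b"XXXX"
new_parity = xor_blocks(parity, d1, new_d1)
assert new_parity == xor_blocks(d0, new_d1, d2)

# The same XOR also rebuilds a lost block after a drive failure.
assert xor_blocks(new_parity, d0, d2) == new_d1
```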
Two things I do suggest, though.
1. Update your firmware. 3Ware can effectively upgrade the hardware because of the way it's designed. One guy saw a significant improvement going from 7.53 to 7.6 (I think).
2. Consider using RAID 10. That way the card does not need to perform the expensive XORs.
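The saving is easy to count with a simplified model (ignoring write caching and full-stripe writes, which can hide some of the cost):

```python
# Simplified per-small-write device I/O counts, plus the capacity trade-off.
# This is a textbook model, not a measurement of the 3Ware card.

def raid5_small_write_ios() -> int:
    # read old data + read old parity + write new data + write new parity
    return 4

def raid10_small_write_ios(mirrors: int = 2) -> int:
    # one write per mirror copy; no parity to recompute
    return mirrors

def usable_disks(n_disks: int, level: str) -> int:
    """Capacity in units of one disk: RAID 5 loses one disk to parity,
    RAID 10 loses half the disks to mirroring."""
    return n_disks - 1 if level == "raid5" else n_disks // 2

print(raid5_small_write_ios(), raid10_small_write_ios())  # I/Os per small write
print(usable_disks(4, "raid5"), usable_disks(4, "raid10"))  # capacity of 4 disks
```

So on four disks you trade one disk of capacity for half the write I/Os and no XOR work.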
Hope this helps. I found all this out by googling. It's tricky to find, but it's there.
Regards
Steve
P.S. I'm currently favouring this card in my own buying decision over all the others, even with the comparatively slow RAID 5!