Server drives very slow...
Hi,
I have a Dell PowerEdge 500SC with the following specs:

- CPU: Pentium III 1.4 GHz
- RAM: 2 GB ECC
- SCSI: Adaptec PCI-X controller, 2x 74 GB Seagate Cheetah 10K U320 in RAID 0
- SATA: Silicon Image-based controller in a PCI slot, 2x 1.5 TB Seagate 7200 RPM in RAID 1

I ran bonnie++ to check the drive speeds and I'm getting around 40 MB/s on both the mirrored and the striped partitions, which is pretty slow. On my 17" laptop I have 2x 320 GB Hitachi 7200 RPM drives in RAID 0 and I'm getting 140 MB/s. Any ideas what the problem could be on the server? As far as I know the SCSI shouldn't depend on the CPU, and neither should the hardware RAID on the SATA card.

Thanks in advance
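Since bonnie++ results can be skewed by its test parameters, a quick dd pass is a useful cross-check on those 40 MB/s figures. A minimal sketch, with placeholder paths (point TARGET at a directory on the array under test, i.e. the striped SCSI pair or the mirrored SATA pair):

```shell
# Quick sequential-throughput cross-check with dd. TARGET is a placeholder;
# /tmp is only a safe default -- set it to a mount on the array being tested.
TARGET=${TARGET:-/tmp}

# Write test: conv=fdatasync makes dd include the flush to disk in its
# timing, instead of reporting the speed of writing into the page cache.
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

# Read test: drop caches first (as root) or the kernel will serve the file
# from RAM:  echo 3 > /proc/sys/vm/drop_caches
dd if="$TARGET/ddtest" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TARGET/ddtest"
```

The last line of each dd run prints the achieved MB/s, which you can compare directly against the bonnie++ numbers.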
File storage should have the lowest possible latency on any setup. Bandwidth is just an add-on to performance: if your workload is mostly small files, higher sequential bandwidth will not make them arrive any faster. Check the latency of the file storage first. Also, Seagate drives are not as good as Hitachi, and the Seagate 1.5 TB drives have known firmware issues.
SATA hard drives get the latest firmware that the research and development teams have created; SCSI does not get the advancements that SATA gets. That is why your SATA drives show higher throughput than the SCSI ones.
I am guessing that you are running a PCI-X card in a regular PCI slot, or on a mixed PCI/PCI-X bus. If so, that would explain the SCSI being slow.
Here is a link to my controller: http://www.adaptec.com/en-US/support.../ASC-29320A-R/
Regardless of the speed the card is capable of, it is limited to the speed the motherboard's bus runs at. Assuming you have the stock motherboard, you have a plain PCI bus (not PCI-X) with a maximum clock of 66 MHz, not the 133 MHz your card is capable of. Basically your PCI-X card is falling back to legacy modes and speeds, and that limits the throughput the SCSI subsystem can reach. Roughly speaking, cutting the bus speed in half should cut the throughput in half (320/2 = 160). I have no idea what additional restrictions the bus mismatch adds. On top of that, bandwidth on the bus is shared, so if anything else is going on (Ethernet, etc.) you MAY see additional slowdowns if the bus limit is approached.
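To put rough numbers on the bus-speed argument, the theoretical ceiling of a parallel bus is (width in bits / 8) x clock in MHz, before protocol overhead and before sharing with other devices. A back-of-the-envelope sketch:

```shell
# Theoretical peak bus bandwidth = (bus width in bits / 8) * clock in MHz.
# Real-world figures are lower due to arbitration and protocol overhead.
echo "$(( 32 / 8 * 33 )) MB/s  - plain 32-bit/33 MHz PCI"
echo "$(( 64 / 8 * 66 )) MB/s  - 64-bit/66 MHz PCI"
echo "$(( 64 / 8 * 133 )) MB/s - 64-bit/133 MHz PCI-X (what the card is built for)"
```

Even the worst case here (132 MB/s on plain PCI) is theoretical peak for the whole bus, which is worth keeping in mind when weighing whether the bus or the drives are the limit at 40 MB/s.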
It is not your controller or your computer creating the bottleneck; it is the hard drives themselves. Think about it: even if the storage controller is on a PCI-X bus clocked down to 33 MHz, with only two hard drives the traffic is still well within the bandwidth of both the controller and the bus. With eight hard drives you could start talking about a bus or controller bottleneck, but your setup is nowhere near that yet.
For a server, or any computer, look at latency instead of bandwidth. Bandwidth on its own says little about performance; it is just an add-on. Today's hard drives have firmware tuned for higher throughput, but higher throughput does not mean lower latency. Back when the Seagate Cheetah was current, drives did not chase high throughput because the designers cared more about latency, and high bandwidth tends to penalize a drive's latency. Since throughput can always be raised with RAID striping, latency was the prime factor in the design. These days people are surrounded by bandwidth, which does little from a performance point of view. I recommend testing the latency of the file storage. On any setup, latency rules performance.
So I guess the only solution is to get a new mobo with PCI-X @ 133 MHz, which most probably means a P4 Xeon to stay at the server level :P
Right?