LinuxQuestions.org
-   Linux - Hardware (https://www.linuxquestions.org/questions/linux-hardware-18/)
-   Server drives very slow... (https://www.linuxquestions.org/questions/linux-hardware-18/server-drives-very-slow-744055/)

alderfc7 07-30-2009 11:16 PM

Server drives very slow...
 
Hi,

I have a Dell PowerEdge 500SC with the following specs:

CPU: Pentium III 1.4GHz
RAM: 2GB ECC
SCSI: Adaptec controller in PCI-X
2x 74GB Seagate Cheetah 10K U320 in RAID 0
SATA: card with Silicon Image chip in PCI
2x 1.5TB 7200RPM Seagate in RAID 1

I ran bonnie++ to check the drive speeds and I'm getting around 40MB/s for both the mirrored and the striped partitions, which is pretty slow. On my 17" laptop I have 2x 320GB Hitachi 7200RPM drives in RAID 0 and I'm getting 140MB/s.
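As a quick cross-check on a bonnie++ sequential figure, a dd write with data forced to disk gives a rough throughput estimate (a sketch assuming GNU coreutils dd; the target path is a placeholder for a file on the array under test):

```shell
# Write 256 MB sequentially; conv=fdatasync forces the data to disk
# before dd reports its transfer rate, so the page cache can't inflate it.
# TESTFILE is a placeholder -- point it at the filesystem on the array.
TESTFILE=/tmp/dd_testfile
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync
rm -f "$TESTFILE"
```

bonnie++ is still the better test, since it also measures seeks and per-character I/O; dd only approximates the sequential-write number.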

Any ideas what could be the problem on the server?
From what I know, SCSI performance shouldn't depend on the CPU, and neither should the hardware RAID on the SATA card.

Thanks in advance

Electro 07-31-2009 12:03 AM

The file storage should have the lowest latency possible on any setup. Bandwidth is just an add-on to performance: if your files are small, higher bandwidth will not increase performance. Check the latency of the file storage for the best performance. Also, Seagate drives are not as good as Hitachi, and the Seagate 1.5 TB hard drives have known issues.

SATA hard drives get the latest firmware that the research and development teams have created; SCSI does not get the advancements that SATA gets. This is why your SATA drives have higher throughput than your SCSI drives.

lazlow 07-31-2009 12:39 AM

I am guessing that you are running a PCI-X card in a regular PCI slot or on a mixed PCI/PCI-X bus. If this is true, that would explain the SCSI being slow.

Quote:

Apart from this, PCI and PCI-X cards can generally be intermixed on a PCI-X bus, but the speed will be limited to the speed of the slowest card. For example, a PCI 2.3 device running at 32 bits and 66 MHz on a PCI-X 133-MHz bus will limit the total throughput of the bus to 266 MB/s.
From: http://en.wikipedia.org/wiki/PCI-X

alderfc7 07-31-2009 08:51 AM

Quote:

Originally Posted by lazlow (Post 3626250)
I am guessing that you are running a PCI-X card in a regular PCI slot or with a mixed PCI/PCI-X bus. If this is true that would explain the SCSI being slow.




I'm running it on a PCI-X. The mobo is the original server board that comes with the Dell poweredge 500sc.

here is a link to my controller :

http://www.adaptec.com/en-US/support.../ASC-29320A-R/

lazlow 07-31-2009 03:03 PM

Quote:

The system board includes the following built-in features:

* Five PCI slots located on the system board. Two are 64-bit, 33- or 66-MHz slots; three are 32-bit, 33-MHz slots.
From: http://support.dell.com/support/edoc...10.htm#1032099

Quote:

Bus type PCI
From: http://support.dell.com/support/edoc...a0.htm#1034878

Regardless of the speed the card is capable of, it is limited to the speed the motherboard's bus runs at. Assuming you have the stock motherboard, you have a plain PCI bus (not PCI-X) whose max speed is 66 MHz, not the 133 MHz your card is capable of. Basically, your PCI-X card is falling back to legacy modes and speeds, and that limits the throughput the SCSI system can reach. Roughly speaking, cutting the bus speed in half should cut the throughput in half (320/2 = 160). I have no idea what additional restrictions the bus difference will cause. There is also the shared-bandwidth issue on the bus, so if anything else is going on (Ethernet, etc.) you MAY see additional slowdowns if the bus limit is approached.
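The bus arithmetic here can be spelled out. These are theoretical peak rates for a parallel bus (width × clock); sustained PCI throughput is considerably lower because of protocol overhead and bus sharing:

```python
def bus_peak_mb_s(width_bits: int, clock_mhz: float) -> float:
    """Theoretical peak transfer rate of a parallel bus in MB/s:
    width in bits times clock in MHz, divided by 8 bits per byte."""
    return width_bits * clock_mhz / 8

# Slot types on the PowerEdge 500SC board (per the Dell manual quote):
print(bus_peak_mb_s(32, 33))    # 32-bit / 33 MHz PCI   -> 132.0 MB/s
print(bus_peak_mb_s(64, 66))    # 64-bit / 66 MHz PCI   -> 528.0 MB/s
# What the Adaptec card could reach in a true PCI-X 133 slot:
print(bus_peak_mb_s(64, 133))   # 64-bit / 133 MHz PCI-X -> 1064.0 MB/s
```

(The Wikipedia quote's 266 MB/s figure for a 32-bit/66 MHz device comes from the exact 66.66 MHz PCI clock rather than the rounded 66.)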

Electro 07-31-2009 05:57 PM

It is not your controller or your computer creating the bottleneck; it is the hard drives. Even if the storage controller sits on a bus clocked at 33 MHz, with only two hard drives the traffic is still well within the bandwidth of the controller and the bus. With eight hard drives you could talk about a bus or controller bottleneck, but your setup is not there yet.

For a server or any computer, look at latency instead of bandwidth. Bandwidth alone does not determine performance; it is just an add-on to it. Also, today's hard drives have better firmware, which yields higher throughput, but that does not mean lower latency.

Back when the Seagate Cheetah line came out, hard drives did not have high throughput because their designers cared more about latency. Chasing high bandwidth penalizes a drive's latency, and since throughput can be increased with RAID striping, latency was the prime design factor. These days people are surrounded by bandwidth, which does little from a performance point of view.

I recommend testing the latency of the file storage. For any setup, latency rules performance.

alderfc7 08-01-2009 01:52 AM

So I guess the only solution is to get a new mobo with PCI-X @ 133 MHz, which most probably means a P4 Xeon to stay at the server level :P

Right?

