Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I have a Dell PowerEdge 500SC with the following specs:
CPU: Pentium III 1.4 GHz
RAM: 2 GB ECC
SCSI controller: Adaptec, in a PCI-X slot
2x 74 GB Seagate Cheetah 10K U320 in RAID 0
SATA controller with a Silicon chip, in a PCI slot
2x 1.5 TB 7200 RPM Seagate in RAID 1
I ran bonnie++ to check the drive speeds, and for both the mirrored and the striped arrays I'm getting around 40 MB/s, which is pretty slow. On my 17" laptop I have 2x 320 GB Hitachi 7200 RPM drives in RAID 0 and I'm getting 140 MB/s.
Any ideas what the problem could be on the server?
From what I know, SCSI throughput shouldn't depend on the CPU, and neither should the hardware RAID on the SATA card.
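A quick way to sanity-check bonnie++'s numbers is a plain dd write test against each array. This is a rough sketch; /tmp/ddtest is a placeholder path, so point it at a file on the array you actually want to measure:

```shell
# Write 64 MiB and force it to disk before timing stops (conv=fdatasync),
# so the page cache doesn't inflate the result. The path is a placeholder.
out=$(dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1)
echo "$out"    # the last line reports elapsed time and MB/s
rm -f /tmp/ddtest
```

Run it once against the Cheetah stripe and once against the 1.5 TB mirror, and compare the MB/s figures with what bonnie++ reported.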
Look at the latency of the file storage first; on any setup, latency matters most. Bandwidth is only one part of performance: if most of your files are small, higher sequential bandwidth will not speed anything up. Check the latency of the file storage for the best performance. That said, Seagate drives are not as good as Hitachi, and Seagate's 1.5 TB drives have known firmware issues.
SATA hard drives get the latest firmware that the research and development teams have created; SCSI drives do not get those advancements as quickly. That may be why your SATA array shows higher throughput than the SCSI one.
I am guessing that you are running a PCI-X card in a regular PCI slot, or on a mixed PCI/PCI-X bus. If that is true, it would explain the SCSI being slow.
Quote:
Apart from this, PCI and PCI-X cards can generally be intermixed on a PCI-X bus, but the speed will be limited to the speed of the slowest card. For example, a PCI 2.3 device running at 32 bits and 66 MHz on a PCI-X 133-MHz bus will limit the total throughput of the bus to 266 MB/s.
Regardless of the speed the card is capable of, it is limited to the speed the motherboard's bus is running at. Assuming you have the stock motherboard, you have a plain PCI bus (not PCI-X) with a maximum clock of 66 MHz, not the 133 MHz your card can run at. Your PCI-X card is basically falling back to legacy modes and speeds, and that limits the throughput the SCSI subsystem can achieve. Roughly speaking, cutting the bus speed in half should cut the throughput in half (320/2 = 160 MB/s). I have no idea what additional restrictions the bus mismatch causes. On top of that, bandwidth on the bus is shared, so if anything else is active (Ethernet, etc.) you MAY see additional slowdowns if the bus limit is approached.
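The theoretical ceilings for the relevant bus modes can be worked out directly (bus width in bits times clock in MHz, divided by 8, gives MB/s). These are nominal peaks; real sustained throughput is lower, and it is shared with every other device on the bus:

```shell
# Peak theoretical bus bandwidth in MB/s = width_bits * clock_MHz / 8
echo "PCI   32-bit @ 33 MHz:   $((32 * 33 / 8)) MB/s"    # plain PCI slot
echo "PCI   32-bit @ 66 MHz:   $((32 * 66 / 8)) MB/s"    # 66 MHz PCI
echo "PCI-X 64-bit @ 133 MHz:  $((64 * 133 / 8)) MB/s"   # full PCI-X
```

So even in the best non-PCI-X case the card tops out around 264 MB/s nominal, well under what a U320 controller could otherwise move.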
It is not your controller or your computer creating the bottleneck; it is the hard drives. Even if the storage controller sits on a PCI-X bus clocked at 33 MHz, with only two hard drives the traffic is still well within what the controller and bus can handle. With eight hard drives you could start talking about a bus or controller bottleneck, but your setup is not there yet.
For a server, or any computer, look at the latency rather than the bandwidth. Bandwidth alone does not determine performance; it is only one part of it. Today's hard drives also have better firmware, which translates into higher throughput, but that does not mean lower latency.
Back when the Seagate Cheetah line was current, hard drives did not chase high throughput because the designers cared more about latency: high bandwidth penalizes a drive's latency, and since throughput can always be increased with RAID striping, latency was the prime design factor. These days people are surrounded by bandwidth, which does little from a performance point of view.
I recommend testing the latency of the file storage. On any setup, latency rules performance.
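One crude way to probe read latency from the shell, assuming bash (for $RANDOM) and a scratch file placed on the array under test. The page cache will flatter these numbers badly, so treat the result as a lower bound; a dedicated tool such as ioping gives truer per-seek figures if you can install one:

```shell
# Build an 8 MiB scratch file, then time 100 random 4 KiB reads from it.
# /tmp/latfile is a placeholder; put it on the filesystem you want to test.
dd if=/dev/zero of=/tmp/latfile bs=1M count=8 conv=fdatasync 2>/dev/null
start=$(date +%s%N)
for i in $(seq 1 100); do
  dd if=/tmp/latfile of=/dev/null bs=4k count=1 skip=$((RANDOM % 2000)) 2>/dev/null
done
end=$(date +%s%N)
avg_us=$(( (end - start) / 100 / 1000 ))
echo "average per-read latency: ${avg_us} us (cache-inflated; treat as a lower bound)"
rm -f /tmp/latfile
```

Comparing this figure between the SCSI stripe and the SATA mirror would show whether latency, not sequential bandwidth, is where the two setups actually differ.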