Here is another performance issue thread.
I've just bought a new Dell PowerEdge 1800 with dual Xeon 2.80GHz CPUs (hyperthreading enabled), 3GB of memory, and the Dell/Adaptec 6-channel SATA RAID card with six Maxtor SATA 160GB drives.
The system is running RedHat ES 3.0, i.e. kernel 2.4.21-4.ELsmp.
The system shipped configured as one big array of all 6 disks in RAID 5, with a 64K stripe size.
I've noticed poor disk transfer rates; here's some data:
[root@groville root]# vmstat
procs                     memory     swap         io    system        cpu
 r  b   swpd   free  buff   cache  si  so   bi   bo   in   cs  us sy wa id
 0  0      0  18340  9336 2888196   0   0   17    6   22   36   2 19 17 62
[root@groville root]# uptime
21:21:28 up 1 day, 23:07, 3 users, load average: 0.04, 0.13, 0.31
[root@groville root]# hdparm -tT /dev/sda
Timing buffer-cache reads: 3796 MB in 2.00 seconds = 1898.00 MB/sec
Timing buffered disk reads: 60 MB in 3.02 seconds = 19.87 MB/sec
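For context, a back-of-envelope check on what this array should deliver. The ~55 MB/s sustained per-disk figure below is my assumption for these Maxtors; the 133 MB/s PCI ceiling is the number Dell quoted:

```shell
# Rough expectation for sequential reads (per-disk rate is an assumption):
per_disk=55       # assumed sustained MB/s per Maxtor SATA drive
data_disks=5      # 6-disk RAID 5 has 5 data spindles per stripe
pci_ceiling=133   # 32-bit/33MHz PCI bus limit quoted by Dell support

raw=$((per_disk * data_disks))                     # aggregate spindle rate
expect=$(( raw < pci_ceiling ? raw : pci_ceiling ))  # capped by the bus
echo "spindle aggregate: ${raw} MB/s; PCI-bounded expectation: ${expect} MB/s"
# → spindle aggregate: 275 MB/s; PCI-bounded expectation: 133 MB/s
```

Even bus-limited, that's roughly 133 MB/s, not the 19.87 MB/s hdparm measures, so "it's the bus" doesn't add up.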
[root@groville root]# lsmod|grep aacraid
aacraid 43380 7
scsi_mod 116904 5 [sr_mod sg aacraid mptscsih sd_mod]
[root@groville root]# dkms status|grep aacraid
aacraid, 184.108.40.2062.1, 2.4.21-4.ELsmp: installed (original_module exists)
20MB/s with 100% iowait and system hangs on hardware like this is really poor.
Of course, Dell Support is trying to tell me that with a 133MB/s PCI bus this is normal; I'm a bit surprised.
So my question is: after reading articles like "Speeding up Linux Using hdparm" by Rob Flickenger, I think it should be possible to tweak my system. Is anyone else seeing problems like this, i.e. the system hanging for 3 to 30 seconds while transferring data (400% iowait in my case) and such poor transfer rates?
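One knob from that hdparm article worth trying on an array like this is readahead sized to whole RAID-5 stripes. This is only a sketch under my assumptions (64K strip, 5 data disks, and that the array appears as /dev/sda); verify before running as root:

```shell
# Size readahead to cover a couple of full RAID-5 stripes.
# 64 KiB strip * 5 data disks = 320 KiB per stripe; read ahead 2 stripes.
strip_kib=64
data_disks=5
stripes=2
ra_sectors=$(( strip_kib * 1024 * data_disks * stripes / 512 ))
echo "suggested readahead: ${ra_sectors} sectors"
# → suggested readahead: 1280 sectors

# Then, as root (commented out here; device name is an assumption):
# blockdev --getra /dev/sda                 # current readahead, 512-byte sectors
# blockdev --setra ${ra_sectors} /dev/sda   # set stripe-aligned readahead
# hdparm -t /dev/sda                        # re-run the buffered-read test
```

If the buffered-read number moves, the default readahead was part of the problem; if not, I'd suspect the aacraid driver or the card's cache settings.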