Quote:
Originally Posted by lazlow
Remember, to actually use gigabit speeds you have to have hard drives on both ends that can handle it. 1 Gb/s is about 95 MB/s after overhead. While some newer (consumer) drives can handle this speed (and most RAID-0 setups can), a lot of older drives cannot. The 620 Mb/s would be about 77 MB/s (before overhead), which is about where a lot of drives from just a few years ago would fall.
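(Checking that arithmetic: 1 Gb/s is 125 MB/s raw, so the ~95 MB/s payload figure after Ethernet/IP/TCP overhead is on the conservative side, and 620 Mb/s / 8 = 77.5 MB/s, matching the ~77 MB/s quoted.)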
Yes, I thought about that; that's why I used ttcp, which (presumably) does a memory-to-memory test, bypassing any disk I/O. It did strike me, however, that ttcp reports approximately the same throughput as hdparm:
[root@target_gb ~]# hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 228 MB in 3.00 seconds = 76.00 MB/sec
ttcp reports: 78630.84 KB/sec
Could it be just a coincidence?..
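For reference, the ttcp test looks something like this (option syntax varies between ttcp builds; with the classic flags, -s makes the transmitter source a generated pattern from memory and the receiver discard what it gets, so no disk is involved on either end):
[root@target_gb ~]# ttcp -r -s
[root@source_gb ~]# ttcp -t -s target_gb
Converting units to compare like with like (assuming ttcp's KB means 1024 bytes): 78630.84 KB/s x 1024 x 8 is roughly 644 Mb/s on the wire, i.e. about 76.8 MB/s, essentially the same number hdparm reports above.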
Anyway, I ran another test. On my source server I have a RAID-5 volume that is much faster for reads:
[root@source_gb ~]# hdparm -t /dev/sdb1
/dev/sdb1:
Timing buffered disk reads: 950 MB in 3.00 seconds = 316.39 MB/sec
To test this, I read from the RAID, sent the data over the network, and dumped it to /dev/null on the target:
[root@source_gb mnt]# dd if=/mnt/1g ibs=1M | ( ssh target_gb dd of=/dev/null obs=1M)
1024+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 29.3306 s, 36.6 MB/s
2097152+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 29.0326 s, 37.0 MB/s
This way I only get about 40 MB/s, which is what I have also observed with scp. With ftp I could reach about 65 MB/s. That spread is why I decided to bypass disk I/O entirely and use ttcp. The target_gb server is supposed to be diskless anyway.
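As a next step, I'll probably take ssh's encryption out of the picture by pushing the same file through a raw TCP pipe with netcat. A sketch (nc flag syntax differs between netcat flavors, and port 5001 is an arbitrary choice):
[root@target_gb ~]# nc -l 5001 > /dev/null
[root@source_gb mnt]# dd if=/mnt/1g bs=1M | nc target_gb 5001
Using bs=1M on the sending dd (instead of only ibs=1M) also keeps it from writing to the pipe in 512-byte chunks, which is the obs default.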