Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
Disks are a lot slower the closer the partition is to the inside of the platter, because the linear speed of the media under the head is lower there.
Quote:
Originally Posted by khaleel5000
For windows I was copying from SDA1 to SDA7
For linux I was copying from SDA11 to SDA12
I think crashmeister has explained the whole issue.
I downloaded the (Windows-based) disk test program he suggested and found the various disks I tested had nearly a two-to-one difference in transfer rate between the low addresses (outer edge) and high addresses (inner edge).
You compared a transfer between two locations near the outer edge for Windows against a transfer between two locations near the inner edge for Linux, so you would expect a difference slightly less than the full outer to inner difference, which in turn is slightly less than two to one.
So the difference you found might be entirely explained by the difference in raw transfer rate at outer vs. inner edges of the disk.
An average random seek between SDA1 and SDA7 would span 8730 reported "cylinders", while one between SDA11 and SDA12 would span 8833. But actual cylinders are not equal to the reported "cylinders", and they vary for the same reason the transfer rate varies between outer and inner tracks. So a random seek between SDA11 and SDA12 covers roughly twice the physical distance of one between SDA1 and SDA7. That MIGHT also be a factor.
But we have no idea how many actual seeks are involved nor how the actual seeks compare to the random seek (in both cases the destination size is large enough that the uncertainty in the seek distance is most of the total seek distance).
You are comparing a simple file system like FAT against a complex Linux file system such as XFS, and then drawing throughput conclusions from two completely different file systems. Windows also handles data I/O differently than Linux does. I suggest comparing the results using both EXT2/3 and FAT; there is EXT IFS to provide EXT2 support in Windows 2000/XP.
EXT2/3, XFS, and ReiserFS have one thing in common: they work to avoid fragmentation while writing data. JFS does not, so it gets fragmented over time. To tune this behavior you can include mount options such as noalign, notail, and a few others; using noatime can also help. Before using these options, check that the file system supports them at mount time.
Disk benchmark programs in Windows do not include the file system during the test, so they measure raw performance. Raw performance and disk-plus-file-system performance are two different things; real-world testing is disk plus file system.
Again, basing write performance on transferring data from one partition to the next is just going to create more confusion. I suggest creating a garbage file from /dev/urandom that is at least twice the capacity of the installed RAM, then using time with cat, dd, or cpio to send the file to /dev/null. This will give you an estimate of read throughput. Sure, you can use bonnie or bonnie++ to benchmark a drive, but that program is old and I do not think it is accurate.
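A minimal sketch of that procedure (scaled down to a 16 MB file and a path of my choosing so it runs quickly; the suggestion above is a file at least twice your RAM so the page cache cannot hold it all):

```shell
# Create a garbage file from /dev/urandom (16 MB here for illustration;
# use a size >= 2x installed RAM for a real test so caching cannot hide
# the disk).
dd if=/dev/urandom of=/tmp/garbage.bin bs=1M count=16 2>/dev/null

# Time a sequential read of it; file size divided by elapsed seconds
# approximates read throughput once the file exceeds the page cache.
time cat /tmp/garbage.bin > /dev/null
```

Divide the byte count by the real time reported to get MB/s.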
The utility hdparm is meant to be used with IDE/ATAPI devices. I suggest sdparm for SCSI and SATA devices.
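For a quick raw-read number on an IDE-style device, hdparm's built-in timing flags can be used (guarded here since it needs the tool installed and read access to the device node; /dev/sda is an assumption, substitute your drive):

```shell
# hdparm -T times cached reads (RAM/buffer path); -t times buffered
# sequential disk reads. Only run it when the tool and device exist.
if command -v hdparm >/dev/null 2>&1 && [ -r /dev/sda ]; then
    hdparm -tT /dev/sda
else
    echo "hdparm unavailable or /dev/sda not readable"
fi
```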
All these worries about throughput matter less in Linux because programs rarely come close to the size of Windows programs.
Linux has a lot of redundancies in place to provide
Quote:
That any program uses RAM as a disk I/O buffer is new to me. That's what they put buffers on the disks for.
It is new to you? Have you been living in a cave too long? Programs have used RAM for buffering since the DOS days. DOS had simple buffers for software and data. When SMARTDrive was introduced, it sped up performance a lot: it helped high-latency drives such as hard drives and optical drives find data fast and transfer it at memory speed. The one cost is that it uses RAM. If I ran my 80386DX-40 with 8 MB of RAM without SMARTDrive, it ran slower.
The memory on drives is cache. The algorithms in today's hard drives and optical drives are very efficient, so they can find data very fast compared to ancestor models with similar hardware.
Linux, Windows, and Mac all use buffering and caching to increase the performance of these high-latency drives.
Maybe I should have asked the size of the test file.
You seem to be saying the performance is the same (with the same software) for a copy from sda5 to sda6 and for one from sda11 to sda12. That doesn't make sense if the actual transfer or seek times matter.
If the file being copied is small enough, then other factors dominate and the actual transfer and seek times wouldn't have much impact on the measured data rate.
I don't know what "small enough" is. There are too many caching and buffering layers involved to make a reasonable estimate.
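One way to take at least the kernel's page cache out of the picture between timed runs, assuming root and a 2.6.16-or-newer kernel (where drop_caches appeared), is:

```shell
# Flush dirty pages, then ask the kernel to drop the clean page cache,
# dentries, and inodes so the next timed copy starts cold.
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
    echo "caches dropped"
else
    echo "need root to drop caches (skipping)"
fi
```

This does nothing about the drive's own on-board cache, but it removes the largest buffering layer from the measurement.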
I should have mentioned the size too: the file is 1023 MB (about 1 GB; it's a .vob movie file). I copied a 644 MB file (Debian 4.0 CD ISO 1) and it took roughly 35 seconds from sda5 to sda6 (XFS partitions). [Can't say what Windows benchmarks at, since I don't have it on my PC.]
I also tested by copying a 1023 MB video file from sda11 to sda5 and got a speed of roughly 31 MB/s,
while I got roughly 35 MB/s when copying from sda5 to sda6. [All partitions mentioned above are XFS.]
It does show there is a tangible difference due to the cylinder seek issue, but it doesn't explain the massive speed difference between Windows and Linux, as I have now tested after converting those Windows partitions to Linux.
Last edited by khaleel5000; 04-18-2008 at 04:00 PM.
I would just like to ask (because I feel the PCI bus mastering... thing I mentioned in my last post on page 1 might have a hand in it):
is there a way to check whether Linux is using/taking advantage of the PCI bus mastering option that is enabled in my BIOS?
[The real reason I got into this issue: a friend (a Windows user) asked how fast my SATA drive copies in Windows. We have SATA drives of the same model, but my system only supports 1.5 Gb/s while his is newer and supports the 3 Gb/s SATA interface. He was surprised to see a copying speed of around 60 MB/s in Windows on my PC, because that is what his system was providing with the new interface, and this HD supports the new interface, i.e. SATA 2. That copying was done with bus mastering enabled on my system.]
People still do not understand the mechanics of hard drives. If a drive's interface is rated at 150 MB per second (the so-called 1.5 Gb per second spec), that does not mean it will transfer data at that rate. The same goes for 300 MB per second (the so-called 3 Gb per second spec). The bus also limits throughput: a SATA drive rated at 1.5 Gb/s (150 MB/s) will only get about 133 MB per second on a PCI bus, and likewise a SATA 3 Gb/s (300 MB/s) drive on a PCIe x1 link is limited to about 250 MB per second.
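The bus-ceiling arithmetic above can be sketched as follows (rounded figures; the SATA link rates quoted already account for the interface's line encoding):

```shell
# Classic 32-bit PCI at ~33 MHz moves 4 bytes per clock: roughly
# 132-133 MB/s shared by everything on the bus, which is below
# SATA-1's 150 MB/s link rate. First-generation PCIe x1 carries
# about 250 MB/s, below SATA-2's 300 MB/s.
pci_mbs=$((4 * 33))   # ~132 MB/s (the clock is 33.33 MHz exactly)
echo "PCI ceiling ~${pci_mbs} MB/s vs SATA-1 150 MB/s"
echo "PCIe x1 ceiling ~250 MB/s vs SATA-2 300 MB/s"
```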
Worrying about the throughput of a single hard drive matters less in Linux or any other operating system because RAID can improve it. RAID cannot improve latency, though, so find a hard drive or storage medium with the lowest access time.
One thing that can hurt xfs performance is file deletion. This can be greatly improved by increasing log buffer size. Create the xfs filesystem from the command line with
mkfs.xfs -l size=64M /dev/???
and mount with the options noatime, nodiratime, logbufs=8.
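For reference, the same mount options expressed as a hypothetical /etc/fstab line (the device name and mount point here are placeholders, not taken from the thread):

```
/dev/sda6  /data  xfs  noatime,nodiratime,logbufs=8  0  2
```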
I was aware that 1.5 Gbps won't give me 1.5 gigabytes/second (data rates are given in bits). That wasn't the issue; the issue is that I am experiencing far slower speed in Linux than in Windows.
I had some time, so I formatted both sda5 and sda6 to the following filesystems; these are the speeds I got:
both ext3 = 31 MB/s
both ext2 = 34 MB/s
both reiserfs (probably Reiser 3, as mkfs.reiserfs -V shows: mkfs.reiserfs 3.6.19 (2003 www.namesys.com)) = 30.9 MB/s
both jfs = 34 MB/s
OS= pclinuxos Minime
uname -a = Linux localhost 2.6.22.15.tex2 #1 SMP Mon Dec 17 23:18:44 CST 2007 i686 Intel(R) Pentium(R) 4 CPU 2.66GHz GNU/Linux
I looked at a number of benchmarks of WD 160 GB drives. 30-35 MB/s appears to be the normal transfer speed for data located at 110 GB and above (while data on the outer edge can reach about 55 MB/s).
Last edited by jay73; 04-23-2008 at 09:08 AM.
Reason: typo
Western Digital makes at least 3 different types of hard disk within the PATA or SATA range: with 2 MB, 8 MB, and 16 MB cache (model numbers contain AABS, AAJS, and AAKS respectively). The cache might influence the speed too. But OK,
I will keep searching for the solution.