LinuxQuestions.org
Old 10-29-2008, 07:05 AM   #16
uncertain
Member
 
Registered: Oct 2008
Location: Katowice, Poland
Distribution: Ubuntu, Backtrack, FC10
Posts: 40

Original Poster
Rep: Reputation: 15

File size has a direct correlation to the speed at which the file is written:

http://img293.imageshack.us/img293/1...erationgm2.png

At 700 MB, write speed drops to an obscenely slow <6 MB/s, but as you can see here, even at 699.1 MB the write speed is still fairly high, though a good 5 MB/s slower than it was.

Read speeds (i.e., copying from the USB disk to my internal HDD) are completely unaffected by the slowdown and still run at the previously attained ~25 MB/s.

Last edited by uncertain; 10-29-2008 at 11:02 AM.
 
Old 10-29-2008, 06:04 PM   #17
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
It seems you still do not understand how Linux works, because you keep using the GUI to gauge throughput. GUI programs in Linux are not accurate for measuring throughput because of how X11 works.

I strongly recommend using iozone. Also, compile a custom kernel to find out which options affect the throughput and responsiveness of your setup.

The following is an example of a command that should closely reflect real throughput. Play around with the record size.

iozone -ec -t 1 -r 16M -s 1G -+n -C -i 0 -i 1

To really know the throughput of the hard drive and of the connection between the hard drive and the processor, the following is the same as above with an option that forces it to bypass any file system buffers.

iozone -ec -t 1 -r 16M -s 1G -+n -C -I -i 0 -i 1

Another test to include is latency, i.e., how long it takes for each chunk of data to be read and written. The following command writes the latency to separate files in the directory where the test was run.

iozone -ec -t 1 -r 16M -s 1G -+n -C -Q -i 0 -i 1

The latency is saved in Child_X_rol.dat for reading and Child_X_wol.dat for writing.

Run the three commands in the directory where the USB storage device is mounted.


On my setups, I get 15 megabytes per second from a USB hard drive formatted with EXT2/EXT3. I have not tested external storage with XFS, which is a high-performance file system compared to EXT2/EXT3. My notebook's internal 2.5-inch SATA hard drive has a write throughput of 25 megabytes per second and a read throughput of about 45 megabytes per second using XFS, with a file size smaller than the amount of RAM. When the file size is larger than the amount of RAM, my internal hard drive's read and write throughput is about 25 megabytes per second. These tests were done with iozone; copying or moving files in the GUI does not even get close to these values. The tests were run on a Dell Inspiron 1520 with an Intel Core Duo T7300 (2 GHz x 2 with 4 MB shared L2 cache) and 2 GB of DDR2-667 (1 GB x 2).
 
Old 10-29-2008, 06:23 PM   #18
uncertain
Member
 
Registered: Oct 2008
Location: Katowice, Poland
Distribution: Ubuntu, Backtrack, FC10
Posts: 40

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by Electro View Post
It seems you still do not understand how Linux works, because you keep using the GUI to gauge throughput. GUI programs in Linux are not accurate for measuring throughput because of how X11 works.
It seems you do not understand that I do not want to hear from you any more. I don't care what you say about how to gauge throughput. I have a watch, and I know that when it takes me half an hour to move a couple of gigabytes of files to a USB drive, when two days ago it took less than three minutes to move the same batch of files, SOMETHING IS DAMN WELL WRONG.

A one gigabyte file is a one gigabyte file. Two days ago it took under a minute to transfer, and now it takes upwards of six. The screenshots of my horrendously flawed GUI are just to show the time span needed to complete the transfer. I checked this time against my trusty Timex; it's fairly accurate. Do I have to build a sundial to shut you up? I don't care if Nautilus says the transfer goes faster than the Starship Enterprise: if it takes six minutes to move a one gigabyte file, it takes six minutes to move the file. That readout wouldn't even be there if it weren't fairly, or at least semi-, accurate.
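For what it's worth, the same wall-clock measurement can be done from a shell, which sidesteps the GUI estimate entirely. This is only a sketch: the paths are placeholders, and the trailing `sync` matters because without it `cp` may return while data is still sitting in the page cache rather than on the USB device.

```shell
# Time a transfer by the wall clock instead of trusting a GUI readout.
# SRC and DEST are placeholder paths -- substitute your own file and
# USB mount point. The sync forces cached data out to the device, so
# the elapsed time reflects the real transfer, not a page-cache copy.
SRC=${SRC:-/tmp/transfer-test.bin}
DEST=${DEST:-/tmp/transfer-dest}
[ -e "$SRC" ] || dd if=/dev/zero of="$SRC" bs=1M count=32 2>/dev/null
mkdir -p "$DEST"
time sh -c "cp \"$SRC\" \"$DEST/\" && sync"
```

Dividing the file size by the reported elapsed time gives an effective MB/s figure that no progress bar can argue with.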

Stop harassing me with your 1394s and your SATAs and your benchmark programs. I don't care if it writes 5.36534 megs a second, or if it really writes 5.7423 megs a second but slows down when it gets to a particular sector, but speeds back up 13.2 nanoseconds later. That's irrelevant. It has nothing to do with anything. I don't care!

What I want to know now is why, when I run kernel 2.6.24-18, my USB transfers complete in minutes, but when I use 2.6.24-21.43 (the latest update from Ubuntu) it takes hours to do the same thing.

If you can't answer that question, which I've stated 3 or 4 times now, then I don't need to hear from you any more. If all you can get is 15 MB/s, then you have a problem you should look into, too. Since you've accepted this as "just how it works," you're exactly the person I do not want to hear from. Here I am trying to start a thread where people with a known problem can compare data, and you keep coming at me day after day wanting to argue about stupid crap.

Please - Go away.

Last edited by uncertain; 10-29-2008 at 06:48 PM.
 
Old 10-29-2008, 06:37 PM   #19
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
I already looked it up. USB is just slow, and you do not understand the hardware.

If you want to be an asshole, then go ahead and be an asshole. If you did not take my advice to talk to a kernel developer about the problem, then you are indeed an asshole.
 
Old 10-30-2008, 05:17 PM   #20
slumbergod
LQ Newbie
 
Registered: Oct 2008
Posts: 1

Rep: Reputation: 0
Ever since I switched to Linux a year ago I have been plagued with the same problems. This has followed me across two different laptops (one brand new) and two distros (Xubuntu Gutsy and Hardy).

USB flash drive data transfers start really fast and then drop to about 1.5MB/s where they stay for the remainder of the transfer. The GUI gets sluggish too. I never found a solution. It is one of the things I think sux in Linux, not that I am ever going back to Windows.

When I used a dual boot with Windows XP (the same laptop, same flash drive), the data transfer was as fast as I expected. This is some Linux issue that seems to affect only some users.
 
Old 10-30-2008, 06:19 PM   #21
jschiwal
LQ Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 682Reputation: 682Reputation: 682Reputation: 682Reputation: 682Reputation: 682
It isn't uncommon for one USB port to operate much slower than another. USB uses two-port hubs, and one interface may be split twice or more internally before you even use it.

You can't rely on speed results if there is caching involved.

As already stated, USB compresses the data on the sending side and uncompresses it on the receiving side. The speed you notice can be greatly affected by the contents of the files and by other factors such as mount options and CPU usage. USB is not a true bus, and the CPU is involved in the transfers, whereas for a SCSI drive, for example, the controller performs them.

You never said whether the USB device is a RAM/flash device or a hard disk. Flash drives can have very fast read speeds, but their write/update times are terrible. Different technologies give different results.

Also check if asynchronous or synchronous writing is used.

Look in /var/log/messages. I've seen reports on Ubuntu mailing lists about the USB speed (1.0 vs 2.0) not being detected properly. Here is a solution that one person used:
Code:
In gutsy 7.10 dist-upgraded to the max:

Ubuntu 64-bit sometimes does not mount USB 2.0 devices as 2.0.

1) plug in the USB drive
2) mount it if not automounted (sometimes it doesn't)
3) right-click, properties in nautilus, go to drive tab, see speed "12 Mbps"
4) umount it, unplug it
5) sudo rmmod ehci_hcd
6) plug in the USB drive
7) sudo modprobe ehci_hcd
8) mount it if not automounted
I don't know if this is an Ubuntu specific problem or a problem with the kernel modules.
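The negotiated speed can also be checked without Nautilus, straight from sysfs. This is a sketch under the assumption of a 2.6-era sysfs layout, where each USB device directory exposes a `speed` attribute in Mbps: a USB 2.0 device stuck at "12" (USB 1.1 full speed) instead of "480" would confirm the mis-detection described above.

```shell
# List the negotiated speed of every connected USB device, from sysfs.
# 12 = USB 1.1 full speed, 480 = USB 2.0 high speed.
for f in /sys/bus/usb/devices/*/speed; do
    [ -r "$f" ] || continue
    printf '%s: %s Mbps\n' "$(basename "$(dirname "$f")")" "$(cat "$f")"
done
echo "scan complete"
```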

Last edited by jschiwal; 10-30-2008 at 06:20 PM.
 
Old 11-02-2008, 02:07 AM   #22
pixellany
LQ Veteran
 
Registered: Nov 2005
Location: Annapolis, MD
Distribution: Mint
Posts: 17,809

Rep: Reputation: 743Reputation: 743Reputation: 743Reputation: 743Reputation: 743Reputation: 743Reputation: 743
Closing this due to the hostilities that seem to have come in.
 
Old 11-03-2008, 06:55 PM   #23
pixellany
LQ Veteran
 
Registered: Nov 2005
Location: Annapolis, MD
Distribution: Mint
Posts: 17,809

Rep: Reputation: 743Reputation: 743Reputation: 743Reputation: 743Reputation: 743Reputation: 743Reputation: 743
Reopened on the assumption that the 2 members will no longer yell at each other......
 
Old 11-04-2008, 06:12 AM   #24
jbahr
LQ Newbie
 
Registered: Oct 2008
Posts: 3

Rep: Reputation: 0
USB Speed

Electro, do you just make this stuff up?

IEEE-1394 controllers are little different than USB 2.0 controllers. I know, I write drivers for both of them. There is approximately the same ACK overhead, but Firewire has a slightly higher transport-level top end.

Of course, USB drivers employ DMA and many USB controllers have their own DMA engines.

The reason that PCI card-based USB controllers can be faster in multi-device applications is that there is no hub overhead (other than the internal hub in the controller). A directly connected mass storage device, after enumeration, IS essentially a point to point operation.

The compact flash devices we tested were VERY fast 300x CF's. We have achieved write speeds of 29 MB/sec, and read speeds of almost 40 MB/sec. This is reading and writing to a raw CF device without a filesystem. An example is: http://www.ubergizmo.com/15/archives...lash_card.html.

We have achieved about the same speeds on the fastest of hard disk products with USB 2.0 interface.
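A rough version of the raw-device read test described above can be sketched with dd. Note the assumptions: TARGET defaults to a scratch file so the sketch is safe to run as-is, and `/dev/sdb` is only a hypothetical device name; check dmesg before pointing it at real hardware, and add `iflag=direct` on devices to bypass the page cache.

```shell
# Sequential-read throughput with dd, in the spirit of the raw
# (no-filesystem) CF tests above. TARGET defaults to a scratch file;
# set TARGET=/dev/sdb (verify the name first!) to measure a device.
TARGET=${TARGET:-/tmp/dd-read-test.bin}
[ -e "$TARGET" ] || dd if=/dev/zero of="$TARGET" bs=1M count=64 2>/dev/null
dd if="$TARGET" of=/dev/null bs=1M 2>&1 | tail -n 1
```

The last line dd prints is its summary, including the average transfer rate.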

Linux has a "lot of redundancies in place" for most file I/O. The worst example is FAT update on any kind of mass storage device. This doesn't explain why *multiple* USB mass storage devices appear to write at much slower proportional speed than single ones. This is a practical problem, as I said earlier, because it costs our client time and money to write to USB mass storage products and gang programming would be useful.

Jschiwal: We looked at that two-port issue, which is why we tested with very fast PCI-based 4-port adapters. Each port is just as fast as the other and there was no indication that the actual device was the bottleneck when attaching multiple mass storage USB devices. It really appears to be in the USB stack, which is where we're looking now. As for caching, we always do a device-level sync and device unmount before terminating and stopping the duration measurement. I'm not aware of any compression in USB traffic. The URB's are just raw data with checksums coming directly from the USB device (in our case a mass storage device). This is true for block and isochronous transfers, which is most of what we measure (either mass storage or video feeds).

As for being a "true bus" and such: There are few zero copy drivers in Linux, so all data transfer (including hard disk traffic) tends to involve the CPU to some extent. There is very little difference between the fundamentals of a SATA driver or a USB driver -- they both receive interrupts, grab packets/transactions and enqueue them for delivery to processes waiting for them. It is true that there is more overhead with USB due to the stack layering, but USB mass storage devices use the SCSI protocol exactly like SCSI disks do. With a fast enough machine (and with 3 GHz multi-core boxes now, they are very fast indeed), you're not going to find that the CPU is the bottleneck *per se* in data transfer speeds.

Which brings me back to our hunt for a fast OS or stack to do multiple mass storage device writes.

Last edited by jbahr; 11-04-2008 at 06:26 AM.
 
Old 11-04-2008, 07:52 PM   #25
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
Quote:
Originally Posted by jbahr View Post
Electro, do you just make this stuff up?

IEEE-1394 controllers are little different than USB 2.0 controllers. I know, I write drivers for both of them. There is approximately the same ACK overhead, but Firewire has a slightly higher transport-level top end.

Of course, USB drivers employ DMA and many USB controllers have their own DMA engines.

The reason that PCI card-based USB controllers can be faster in multi-device applications is that there is no hub overhead (other than the internal hub in the controller). A directly connected mass storage device, after enumeration, IS essentially a point to point operation.

The compact flash devices we tested were VERY fast 300x CF's. We have achieved write speeds of 29 MB/sec, and read speeds of almost 40 MB/sec. This is reading and writing to a raw CF device without a filesystem. An example is: http://www.ubergizmo.com/15/archives...lash_card.html.

We have achieved about the same speeds on the fastest of hard disk products with USB 2.0 interface.

Linux has a "lot of redundancies in place" for most file I/O. The worst example is FAT update on any kind of mass storage device. This doesn't explain why *multiple* USB mass storage devices appear to write at much slower proportional speed than single ones. This is a practical problem, as I said earlier, because it costs our client time and money to write to USB mass storage products and gang programming would be useful.

Jschiwal: We looked at that two-port issue, which is why we tested with very fast PCI-based 4-port adapters. Each port is just as fast as the other and there was no indication that the actual device was the bottleneck when attaching multiple mass storage USB devices. It really appears to be in the USB stack, which is where we're looking now. As for caching, we always do a device-level sync and device unmount before terminating and stopping the duration measurement. I'm not aware of any compression in USB traffic. The URB's are just raw data with checksums coming directly from the USB device (in our case a mass storage device). This is true for block and isochronous transfers, which is most of what we measure (either mass storage or video feeds).

As for being a "true bus" and such: There are few zero copy drivers in Linux, so all data transfer (including hard disk traffic) tends to involve the CPU to some extent. There is very little difference between the fundamentals of a SATA driver or a USB driver -- they both receive interrupts, grab packets/transactions and enqueue them for delivery to processes waiting for them. It is true that there is more overhead with USB due to the stack layering, but USB mass storage devices use the SCSI protocol exactly like SCSI disks do. With a fast enough machine (and with 3 GHz multi-core boxes now, they are very fast indeed), you're not going to find that the CPU is the bottleneck *per se* in data transfer speeds.

Which brings me back to our hunt for a fast OS or stack to do multiple mass storage device writes.
Fine, if you want to call me wrong, then go ahead. You are also wrong in stating that USB is about as fast as Firewire, even though you yourself stated Firewire is actually faster. Compact flash cannot handle over 20 megabytes per second over long periods. There are plenty of benchmarks showing that compact flash has a throughput of about 5 megabytes per second.

Stating that USB resembles SCSI is not true. Kernel developers handle a lot of storage devices using SCSI wrappers. Even IDE is going to be treated as SCSI, which will slow down IDE's performance, since the maintainer for IDE has not made any changes in a few years. USB storage should be treated differently from SCSI.

All this arguing that USB can handle 480 megabits per second, or 60 megabytes per second, is nonsense. Everybody knows, or should know, that USB is just slow. Only about a quarter of USB's rated speed is actually achieved. There is plenty of evidence that Firewire is simply faster in the real world. People need to buy combo devices for external storage when copying or moving files has to be done quickly. If a computer does not have a Firewire connection, they can use USB as a last resort.

USB was not meant for sustained high throughput. USB was meant to replace communication ports and to have an open structure, which it does. Having an open structure sacrifices performance in order to interface with just about any device.

Finding an OS that has high performance for storage devices cannot happen, since there are too many specifications and features that people want after they have used Mac OS X or Windows. Haiku (formerly known as OpenBeOS) might be close, but this depends on the hardware. Creating your own platform and constraining the hardware so it can be used only with your software is the only way to control performance. Apple did this to make external devices easy to recognize and the GUI easy to use.

uncertain still thinks that the GUI should always report the same throughput when copying or moving files around. Ever since I first learned about computers, I have understood that the throughput it calculates and the time-to-completion it shows are not accurate. The GUI penalizes performance.
 
Old 11-05-2008, 07:54 AM   #26
jbahr
LQ Newbie
 
Registered: Oct 2008
Posts: 3

Rep: Reputation: 0
Electro: Far be it from me to let the facts interfere with your theories.

J
 
Old 11-05-2008, 01:04 PM   #27
taecha
LQ Newbie
 
Registered: Nov 2008
Posts: 1

Rep: Reputation: 0
I have been experiencing the very same problem.

Quote:
Originally Posted by uncertain View Post
Kernel update last night broke it again.

File transfers are back down to 4 or 5 MB/s and GUI is all but completely unresponsive while the transfer is in action.
Same here. Things got worse with the recent kernel update.

Quote:
Originally Posted by Electro View Post
If you use IEEE-1394 (aka Firewire or i.Link) or even better SATA, your storage device will be a lot faster.
I have both Firewire and USB drives. Firewire does provide more stable and somewhat higher transfer rates. In practice, though, it is not much faster than USB. At least not on my system (Ubuntu Hardy). Maybe 2-3 MB/s more at most. So USB vs. Firewire is obviously neither the problem nor the solution.

Quote:
Originally Posted by uncertain View Post
I have my Home directory on its own partition, and moving the files from Trash back to ~/Videos took as long as it would have taken to move them to the USB device.
Again, I can confirm the problem. Hence, it obviously has nothing to do with the USB connection! Given the worsening after the recent kernel update, as well as the problem showing up on various distros, I am inclined to believe it has to do with the kernel.

Is there any kernel developer out there who can give us some insight - or even better a solution to this tricky issue?
 
Old 11-12-2008, 02:26 PM   #28
acgarib
LQ Newbie
 
Registered: Nov 2008
Posts: 3

Rep: Reputation: 0
I can also confirm this problem (Ubuntu Intrepid 8.10 and Hardy 8.04). I seem to have it worse, though. My Seagate FreeAgent Pro reads at about 100 KB/s via USB 2.0/Firewire, according to hdparm.
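For reference, the hdparm buffered-read timing quoted above looks like the sketch below. The `/dev/sdb` device name is an assumption (check dmesg or /proc/partitions for yours), and the guard makes it a no-op on machines where hdparm or that device is absent.

```shell
# Buffered sequential read timing with hdparm (run as root).
# /dev/sdb is a hypothetical device node -- change it to yours.
DEV=${DEV:-/dev/sdb}
if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
    hdparm -t "$DEV"      # reads for ~3 seconds, prints MB/sec
else
    echo "skipping: hdparm or $DEV not available"
fi
```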

Writing is a lot faster at around 5 to 10 MB/s.

Both Firewire and USB are equally slow and equally unstable. Most file transfers end with I/O errors and a frozen hard drive that will not remount until its power cord is physically unplugged from the wall and plugged back in.

It seems like I am always reformatting it. The last reformat, however, seems to have boosted my read speed way up to 5 MB/s for both USB and Firewire. I don't know whether that is because of the reformat or because I received an update, though.

Unlike the rest of the people with this problem, my drive is slow in Windows as well. Not as slow, but still slow. (It starts out around 30 MB/s, but about 200 MB into the transfer it starts moving data in short bursts of high-speed activity instead of transferring continuously.) Windows averages around 6 MB/s for reading and 12 to 14 for writing.

Anybody know if anything is being done to fix the kernel?
 
Old 11-13-2008, 10:00 AM   #29
lucmove
Senior Member
 
Registered: Aug 2005
Location: Brazil
Distribution: Debian
Posts: 1,432

Rep: Reputation: 109Reputation: 109
Thumbs down

acgarib, sounds like there is something wrong with that drive.

Electro, you're not helping.

I also have problems with USB often. Reading this thread and other threads elsewhere, the whole problem is very clear: USB support in Linux sucks. The only way to deal with it is to face the truth, and either hopefully fix it someday or just live with it.

For the record, I have no problems on Windows 98 or XP. Everything works well and fast, and I have never had anything get corrupted. Linux is just the opposite: very slow transfer speeds, drives/devices disconnecting at random, and file systems that have become corrupted at least four times (probably because of the random disconnections).

The only "different" story I can tell here is that I don't have speed issues with external HDs on the USB port. All my speed issues involve pen drives or cell phones. These are speed issues I never have with Windows. For example, filling up my 4 GB SD card on Linux takes well over an hour. On Windows, it takes 5 or 10 minutes. I can't remember exactly; I'm just sure it is a very reasonable time.

Too bad I only use Windows 1% of the time.

And my external HD gets disconnected at random quite often.

uncertain, you say you have no speed issues with a certain kernel in a certain distro. You might be on to something, but maybe this is not the best place to discuss it. This really sounds like something that should be taken to someone involved in kernel development. Provided, that is, you're really sure that one particular kernel in one particular distro is truly rid of the problem.

Last edited by lucmove; 11-13-2008 at 10:02 AM.
 
Old 11-14-2008, 03:42 AM   #30
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
Quote:
Originally Posted by lucmove View Post

Electro, you're not helping.

I also have problems with USB often. Reading this thread and other threads elsewhere, the whole problem is very clear: USB support in Linux sucks. The only way to deal with it is to face the truth, and either hopefully fix it someday or just live with it.

For the record, I have no problems on Windows 98 or XP. Everything works well and fast, and I have never had anything get corrupted. Linux is just the opposite: very slow transfer speeds, drives/devices disconnecting at random, and file systems that have become corrupted at least four times (probably because of the random disconnections).

The only "different" story I can tell here is that I don't have speed issues with external HDs on the USB port. All my speed issues involve pen drives or cell phones. These are speed issues I never have with Windows. For example, filling up my 4 GB SD card on Linux takes well over an hour. On Windows, it takes 5 or 10 minutes. I can't remember exactly; I'm just sure it is a very reasonable time.

Too bad I only use Windows 1% of the time.

And my external HD gets disconnected at random quite often.
I am helping, but everybody thinks that I am steering posters in a different direction.

I did say that USB support is poor in Linux.

I did say that USB is software dependent and it is.

I did say that throughput depends on many factors, even on which options are selected in the kernel. The throughput depends on the file system, the controller, and the bus.

I did say not to rely on the GUI for throughput results.

What I have not said is that Linux is still being designed for servers. Servers rely heavily on being reliable and stable, and Linux suits that environment. Linux does not suit desktops; Con Kolivas has stated this as fact. None of the full-time kernel developers want to make the kernel responsive with the highest throughput for desktops too. Comparing the 2.6.16 kernel and the 2.6.26 kernel, I have noticed that performance has gotten worse.

USB is fast in Windows, but it is unreliable and unstable. In Linux, USB is slow but reliable and stable.

Posting here and complaining about USB being slow does not help either. If you do not like it, I recommend hacking the kernel so it provides better USB performance without compromising reliability and stability.
 
  

