LinuxQuestions.org
Forums > Linux - Networking
Old 11-03-2006, 01:55 PM   #1
corkypa
LQ Newbie
 
Registered: Oct 2005
Posts: 18

Fastest method for transferring 100's of GB


I have a Linux file server with uncompressed HD video files that are hundreds of GB in size. I think the largest is about 250GB. What is the fastest way to get a file loaded onto a Windows machine for editing? Some general possibilities are

1) Copy file to external HD. Carry disk to Windows machine and attach it there.

2) Use a standard network storage protocol, e.g. NFS, CIFS, or iSCSI

3) Use a specialized transfer program, e.g. rsync or firehose

The machines are on a gigabit LAN with little traffic. I could add parallel LANs or 1394b links if I wanted.
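As a rough sanity check on what gigabit can do at all: wire speed is 125 MB/s, and something like 110 MB/s of usable throughput is a reasonable guess (that rate is my assumption, not a measurement). The arithmetic for the big file:

```python
# Back-of-envelope transfer time for a large file over gigabit Ethernet.
# The ~110 MB/s figure is an assumed real-world rate (wire speed is
# 125 MB/s, minus protocol overhead) -- adjust for your own network.

def transfer_minutes(size_gb: float, rate_mb_s: float = 110.0) -> float:
    """Minutes to move size_gb gigabytes at rate_mb_s megabytes/second."""
    size_mb = size_gb * 1024          # GiB -> MiB
    return size_mb / rate_mb_s / 60.0

if __name__ == "__main__":
    print(f"250 GB at 110 MB/s: ~{transfer_minutes(250):.0f} minutes")
```

So even a fully saturated single gigabit link needs on the order of 40 minutes for the 250GB file, which is the number the other options have to beat.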

For #1, the question comes up of which file system to use: format the disk as NTFS and use something like ntfs-3g on Linux, or format it as ext2 and use the Ext2 Installable File System for Windows.

For #2, it appears iSCSI is generally at least as good as NFS, and sometimes better.

For #3, there is a program called firehose that transfers in parallel over all available connections. I would imagine that this is the fastest network transfer program because it dispenses with all niceties and just shovels bits as fast as possible.
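I haven't dug into firehose's internals, but the idea it is built on, striping one payload across several TCP connections and reassembling on the far side, can be sketched with plain sockets. This is a localhost-only illustration; the stream count, header format, and port handling are my own choices, not firehose's:

```python
# Sketch of a firehose-style parallel transfer: split one payload across
# several TCP connections, reassemble in order on the receiving side.
# Localhost demo only; NSTREAMS and the chunk header are illustrative.
import socket
import struct
import threading

NSTREAMS = 4

def send_chunk(addr, index, chunk):
    with socket.create_connection(addr) as s:
        # 4-byte chunk index + 4-byte length header, then the data
        s.sendall(struct.pack("!II", index, len(chunk)) + chunk)

def parallel_send(addr, payload):
    size = -(-len(payload) // NSTREAMS)  # ceiling division
    chunks = [payload[i:i + size] for i in range(0, len(payload), size)]
    threads = [threading.Thread(target=send_chunk, args=(addr, i, c))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(chunks)

def recv_exact(conn, n):
    buf = b""
    while len(buf) < n:
        data = conn.recv(n - len(buf))
        if not data:
            raise ConnectionError("peer closed early")
        buf += data
    return buf

def parallel_recv(server, nchunks):
    parts = {}
    for _ in range(nchunks):
        conn, _ = server.accept()
        with conn:
            index, length = struct.unpack("!II", recv_exact(conn, 8))
            parts[index] = recv_exact(conn, length)
    return b"".join(parts[i] for i in sorted(parts))

if __name__ == "__main__":
    payload = bytes(range(256)) * 4096              # ~1 MB of test data
    server = socket.create_server(("127.0.0.1", 0))  # OS picks the port
    result = {}
    rx = threading.Thread(
        target=lambda: result.setdefault(
            "data", parallel_recv(server, NSTREAMS)))
    rx.start()
    parallel_send(server.getsockname(), payload)
    rx.join()
    server.close()
    assert result["data"] == payload
```

Over a single NIC the parallelism mostly helps when one stream can't fill the pipe; where it should really pay off is the parallel-LAN case, with each stream pinned to its own link.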

Anyone out there with experience to share on this particular problem?
 
Old 11-03-2006, 02:48 PM   #2
corkypa
LQ Newbie
 
Registered: Oct 2005
Posts: 18

Original Poster
Just to answer part of my own question, I transferred a 2.7GB file with firehose, NFS and rsync between two Debian servers running sarge (3.1). Transfer rates were

88 MB/s firehose
21 MB/s rsync
14 MB/s NFS

The absolute numbers are less interesting than the relative numbers. Just for reference, md5sum goes through the file at 68 MB/s and catting it to /dev/null runs at 104 MB/s. Firehose is close to hitting file system performance limits, which would mean that copying the file to an external disk and walking it over isn't necessarily faster than transferring it with firehose.
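That local-read baseline is easy to reproduce on any box. A rough equivalent of timing md5sum over the file, using a small temp file as a stand-in for a real multi-GB video file:

```python
# Rough local read-throughput check, analogous to timing `md5sum file`
# or `cat file > /dev/null`: how fast can we pull bytes off the disk?
import hashlib
import os
import tempfile
import time

def read_throughput(path, bufsize=1 << 20):
    """Return (MB/s, md5 hex digest) for one sequential read of path."""
    h = hashlib.md5()
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    elapsed = time.perf_counter() - start
    mb = os.path.getsize(path) / (1024 * 1024)
    return mb / max(elapsed, 1e-9), h.hexdigest()

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(8 * 1024 * 1024))   # 8 MB of random data
        path = tmp.name
    try:
        rate, digest = read_throughput(path)
        print(f"read {path} at ~{rate:.0f} MB/s, md5 {digest}")
    finally:
        os.unlink(path)
```

Note that a small file like this mostly measures the page cache; for a number comparable to the 68 MB/s above you'd want a file larger than RAM, or to drop caches first.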