Linux - General: This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
07-28-2008, 03:55 PM | #1
Member
Registered: Sep 2007
Location: Kansas City
Distribution: Mixed, mostly RH / Fedora
Posts: 76
Rep:
recursive scp w/o following links?
What's the best alternative to scp if you want to recursively copy a directory from one machine to another but NOT follow links?
Using rsync or cpio over an nfs mount is possible, but extremely slow.
Fedora 8-9
thanks!
Last edited by WingnutOne; 07-28-2008 at 04:15 PM.
Reason: clarity
07-28-2008, 04:01 PM | #2
LQ Guru
Registered: Jan 2001
Posts: 24,149
If you're going over an NFS mount, why are you even using scp or rsync? You should be able to treat it like a local filesystem and just use cp or mv, etc., and simply not copy the links.
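For instance, cp's archive mode (-a) copies symbolic links as links rather than following them. A quick self-contained demo with temp directories (over the NFS mount you'd point it at the mounted tree instead; any /mnt path below is hypothetical):

```shell
# Demonstrate that cp -a preserves symlinks instead of dereferencing them.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/file"
ln -s file "$src/link"      # a relative symlink inside the source tree
cp -a "$src/." "$dst/"      # -a implies -d (--no-dereference) and -R
ls -l "$dst/link"           # still shown as a symlink, not a copy of file
rm -rf "$src" "$dst"
```

Against the mount, the same idea would just be cp -a /mnt/remote/original/directory/. /destination/directory/ (paths hypothetical).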
07-28-2008, 04:42 PM | #3
Member
Registered: Sep 2007
Location: Kansas City
Distribution: Mixed, mostly RH / Fedora
Posts: 76
Original Poster
Rep:
Quote:
Originally Posted by trickykid
If you're going over an NFS mount, why are you even using scp or rsync? You should be able to treat it like a local filesystem and just use cp or mv, etc, then eliminate the copying of links.
For this purpose, mv and cp work about the same as rsync or cpio. (They both use the NFS mount too.)
Copying lots of small files over an NFS mount slows your progress to a crawl. It's as if there's a set minimum amount of time required to copy any given file, no matter how small: a single 1MB file might take only a few seconds, but 1000 10-byte files (~10KB total) might take several minutes. I don't know why it happens, but I've seen it numerous times on several systems. (If anyone does know why, I'd like to hear about it.)
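Just to put rough numbers on that per-file floor (purely illustrative; the 50 ms per-file figure is a guess, not a measurement):

```shell
# If each file costs a fixed ~50 ms of round-trip overhead, the total
# time is driven by file count, not bytes:
files=1000
per_file_ms=50
echo "$(( files * per_file_ms / 1000 )) seconds minimum for $files files"
# prints: 50 seconds minimum for 1000 files
```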
wn
Last edited by WingnutOne; 07-28-2008 at 04:43 PM.
Reason: clearer
07-28-2008, 09:35 PM | #4
Senior Member
Registered: Jan 2004
Location: Roughly 29.467N / 81.206W
Distribution: OpenBSD, Debian, FreeBSD
Posts: 1,450
Rep:
This is the way I would do it. Start on the machine you want the files to end up on.
Code:
cd /destination/directory
ssh user@remote.host "cd /original/directory; tar cf - ./" | tar xvf -
Since tar doesn't follow symbolic links by default, this should work the way you want. You can add other flags as needed: if both machines are fast and the link is slow, compression would be a good idea (either pipe it through bzip2 or use tar's built-in flags). I would discourage -p (preserve permissions) unless the user IDs are the same on both machines; if they're not and you want to keep permissions, remember to fix the owners and groups after moving the files.
Note: you can replace tar with cpio if that is your preferred tool, using the same technique.
Edit: You can make this as complex as you want, using find/cpio/pax/bzip2 and pipes; you can manage some very complex selections over ssh like this.
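To make the compression concrete, here's the same pipeline with tar's built-in gzip (host and paths are placeholders; swap z for j if your tar supports bzip2 and you'd rather trade speed for ratio):

```shell
cd /destination/directory
# z compresses on the remote side and decompresses locally;
# symlinks are archived as links, not followed.
ssh user@remote.host "cd /original/directory && tar czf - ./" | tar xzvf -
```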
Last edited by frob23; 07-28-2008 at 09:39 PM.
07-29-2008, 12:49 AM | #5
LQ Guru
Registered: Jan 2001
Posts: 24,149
Yeah, I'd agree with frob23: create one big file to copy. Small files eat up resources when moving, copying, or deleting, because the drive has to read extra blocks to get the inode info for each file. A bunch of small files is a backup administrator's worst nightmare. Take, for example, a 2TB database dump: I could back up the whole thing in a couple of hours, while the same amount of data as miscellaneous small files could take a day or longer. Don't underestimate them because they're small; bigger files are easier to deal with when it comes to filesystems.
07-29-2008, 09:59 AM | #6
Member
Registered: Sep 2007
Location: Kansas City
Distribution: Mixed, mostly RH / Fedora
Posts: 76
Original Poster
Rep:
Thanks for the info. I'm using the tar idea (with the -z option) and it's much faster than messing with all the individual files.
It's just a matter of curiosity now, but I'd still like to learn why moving small files through an NFS mount takes so much longer than transferring the same files via ssh.
Thanks again,
wn
09-09-2008, 02:23 AM | #7
LQ Newbie
Registered: Sep 2008
Posts: 1
Rep:
Quote:
Originally Posted by frob23
Code:
cd /destination/directory
ssh user@remote.host "cd /original/directory; tar cf - ./" | tar xvf -
Genius!
Never thought of using tar and ssh in that combination.