LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   recursive scp w/o following links? (https://www.linuxquestions.org/questions/linux-general-1/recursive-scp-w-o-following-links-658857/)

WingnutOne 07-28-2008 03:55 PM

recursive scp w/o following links?
 
What's the best alternative to scp if you want to recursively copy a directory from one machine to another but NOT follow links?
Using rsync or cpio over an NFS mount is possible, but extremely slow.

Fedora 8-9

thanks!

trickykid 07-28-2008 04:01 PM

If you're going over an NFS mount, why are you even using scp or rsync? You should be able to treat it like a local filesystem and just use cp or mv, etc., and avoid copying the links.
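
For what it's worth, a minimal sketch of that approach, assuming the share is mounted at /mnt/remote (a made-up path). GNU cp's -a option copies symlinks as symlinks instead of following them:

Code:

# /mnt/remote is a hypothetical mount point for the NFS share
# -a implies -R and --no-dereference, so links are copied as links, not followed
cp -a /mnt/remote/original/directory /destination/directory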

WingnutOne 07-28-2008 04:42 PM

Quote:

Originally Posted by trickykid (Post 3229065)
If you're going over an NFS mount, why are you even using scp or rsync? You should be able to treat it like a local filesystem and just use cp or mv, etc., and avoid copying the links.

For this purpose, mv and cp work about the same as rsync or cpio. (They both use the NFS mount too.)
Copying lots of small files over an NFS mount slows your progress to a crawl. It's as if there's a fixed minimum amount of time required to copy any given file, no matter how small: a single 1MB file might take only a few seconds to copy, but 1000 10-byte files (~10KB total) might take several minutes. I don't know why it happens, but I've seen it numerous times on several systems. (If anyone does know why, I'd like to hear about it.)
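
A rough way to reproduce the effect and time it yourself; the mount point below is hypothetical and the numbers will vary from system to system:

Code:

# generate 1000 tiny files and one 1MB file as throwaway test data
mkdir smallfiles
for i in $(seq 1 1000); do echo 0123456789 > smallfiles/file$i; done
dd if=/dev/zero of=bigfile bs=1024 count=1024
# compare transfer times over the (hypothetical) NFS mount
time cp bigfile /mnt/remote/tmp/
time cp -r smallfiles /mnt/remote/tmp/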

wn

frob23 07-28-2008 09:35 PM

This is the way I would do it. Start on the machine you want the files to end up on.

Code:

cd /destination/directory
ssh user@remote.host "cd /original/directory; tar cf - ./" | tar xvf -

Since tar doesn't follow symbolic links by default, this should work the way you want. You can add other flags as needed: if both machines are fast and the link is slow, compression would be good (either pipe it through bzip2 or use the flags in tar). I would discourage -p (preserve permissions) unless the user IDs are the same on both machines; if they aren't and you want to keep permissions, remember to change the owners and groups after moving the files.
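
For instance, a compressed variant of the same pipe (-j asks GNU tar for bzip2), with the ownership fix-up afterwards; the user and group names are placeholders:

Code:

cd /destination/directory
ssh user@remote.host "cd /original/directory; tar cjf - ./" | tar xjf -
# if you skipped -p, reset ownership on the receiving end
chown -R localuser:localgroup /destination/directory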

Note: you can replace tar with cpio if that's your preferred tool, using the same technique.
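
A cpio version of the same pipe might look like this (a sketch assuming GNU cpio; neither find nor cpio follows symlinks by default):

Code:

cd /destination/directory
ssh user@remote.host "cd /original/directory; find . -depth -print0 | cpio -o --null" | cpio -idv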

Edit: You can make this as complex as you want using find/cpio/pax/bzip2 and pipes; you can manage some very complex selections over ssh like this.
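
As one hypothetical example of such a selection, this pulls only files modified in the last day, compressed over the wire (the newline-delimited file list means unusual filenames could trip it up):

Code:

cd /destination/directory
ssh user@remote.host "cd /original/directory; find . -mtime -1 | pax -w | bzip2" | bzip2 -d | pax -r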

trickykid 07-29-2008 12:49 AM

Yeah, I'd agree with frob23: create one big file to copy. Small files eat up resources when you move, copy, or delete them, because the drive has to read extra blocks to get the inode information for each file. A bunch of small files is a backup administrator's worst nightmare. Take a 2TB database dump, for example: I could back up the whole thing in a couple of hours, while the same amount of data as small miscellaneous files could take a day or longer. Don't underestimate them because they're small; bigger files are easier to deal with when it comes to filesystems.
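
In practice that just means bundling first, something like this (host and paths are placeholders):

Code:

# pack the small files into one archive, then move it in a single sequential transfer
tar czf /tmp/backup.tar.gz /data/lots-of-small-files
scp /tmp/backup.tar.gz user@backuphost:/backups/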

WingnutOne 07-29-2008 09:59 AM

Thanks for the info. I'm using the tar idea (with the -z option) and it's much faster than messing with all the individual files.

This has been reduced to a matter of curiosity now, but I'd still like to learn why moving small files through an NFS mount takes so much longer than transferring the same files via ssh.

Thanks again,

wn

julbra 09-09-2008 02:23 AM

Quote:

Originally Posted by frob23 (Post 3229274)
Code:

cd /destination/directory
ssh user@remote.host "cd /original/directory; tar cf - ./" | tar xvf -


Genius!

Never thought of using tar and ssh in that combination.

