If you're going over an NFS mount, why are you even using scp or rsync? You should be able to treat it like a local filesystem and just use cp or mv, and avoid dereferencing symlinks into duplicate copies.
For this purpose, mv and cp work about the same as rsync or cpio. (rsync and cpio go over the NFS mount too.)
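As a minimal local sketch of the cp approach (the paths here are made-up stand-ins; in practice the source directory would be the NFS mount point):

```shell
# Hypothetical paths: /tmp/cpdemo/src stands in for the NFS mount point.
rm -rf /tmp/cpdemo
mkdir -p /tmp/cpdemo/src /tmp/cpdemo/dest
echo data > /tmp/cpdemo/src/file.txt
ln -s file.txt /tmp/cpdemo/src/link.txt
# -a (archive mode) copies recursively and keeps symlinks as symlinks
# instead of dereferencing them into full copies of the target file.
cp -a /tmp/cpdemo/src/. /tmp/cpdemo/dest/
ls -l /tmp/cpdemo/dest
```

After the copy, link.txt in the destination is still a symlink pointing at file.txt, not a second copy of the data.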
Copying lots of small files over an NFS mount slows your progress to a crawl. I don't know how or why, but it's as if there's a fixed minimum amount of per-file overhead, no matter how small the file is. For instance: a single 1MB file might only take a few seconds to copy, but 1000 10-byte files (~10KB total) might take several minutes. I don't know why it happens, but I've seen it numerous times on several systems. (If anyone does know why, I'd like to hear about it.)
Last edited by WingnutOne; 07-28-2008 at 05:43 PM.
This is the way I would do it. Start on the machine you want the files to end up on.
ssh email@example.com "cd /original/directory; tar cf - ./" | tar xvf -
Since tar doesn't follow symbolic links by default, this should work the way you want. You can add other flags as needed... if both machines are fast and the link is slow, compression would be good (either pipe the stream through bzip2 or use tar's compression flags). I would discourage -p (preserve permissions) unless the user IDs are the same on both machines... if they aren't, and you want to keep permissions, remember to change the owners and groups after moving the files.
Note: you can replace tar with cpio if that is your preferred method... using the same technique.
Edit: You can make this as complex as you want, using find/cpio/pax/bzip2 and pipes... you can actually manage some very complex selections using ssh like this.
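To see that the tar pipeline preserves symlinks, here's a local sketch of the same idea with the ssh hop removed (directory names are invented for illustration; over the network you'd put `ssh user@host "..."` around the first tar, as in the command above):

```shell
# Local stand-in for the remote side; no ssh involved here.
rm -rf /tmp/tardemo
mkdir -p /tmp/tardemo/src /tmp/tardemo/dest
echo hello > /tmp/tardemo/src/file.txt
ln -s file.txt /tmp/tardemo/src/link.txt
# Same pipeline shape as over ssh: the first tar writes the archive to
# stdout, the second reads it from stdin. Adding z to both sides would
# gzip the stream in flight, which helps on a slow link.
(cd /tmp/tardemo/src && tar cf - .) | (cd /tmp/tardemo/dest && tar xf -)
ls -l /tmp/tardemo/dest
```

The extracted link.txt is recreated as a symlink rather than being followed and copied as a regular file.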
Yeah, I'd agree with frob23: create one big file to copy. Smaller files eat up resources when moving, copying, deleting, etc. You have to realize the drive has to read more blocks to get the inode info for each file. A bunch of small files is a backup administrator's worst nightmare. Take, for example, a 2TB database dump: I could back up the whole thing in a couple of hours, while the same amount of data as small miscellaneous files could take a day or longer. Don't underestimate them because they're small; bigger files are easier to deal with when it comes to filesystems.
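A quick sketch of the one-big-file approach (all paths here are hypothetical): pack the small files into a single archive, move that one file across, then unpack it at the destination.

```shell
# Hypothetical directory holding many small files.
rm -rf /tmp/bundledemo
mkdir -p /tmp/bundledemo/files
for i in 1 2 3 4 5; do echo "$i" > /tmp/bundledemo/files/part$i.txt; done
# One archive means one file to transfer, instead of paying per-file
# metadata overhead for every tiny file on the NFS mount.
tar czf /tmp/bundledemo/bundle.tar.gz -C /tmp/bundledemo/files .
# At the destination, unpack it again.
mkdir -p /tmp/bundledemo/restored
tar xzf /tmp/bundledemo/bundle.tar.gz -C /tmp/bundledemo/restored
ls /tmp/bundledemo/restored
```

The -C flag tells tar to change into the given directory first, so the archive holds relative paths and unpacks cleanly wherever you extract it.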