rsync fork failed because of a very large number of files to transfer
A quick Google search indicates that this is a limitation of the rsync algorithm that goes away in version 3. From your output, it would seem that the receiver is still using version 2.6.9 - perhaps it can be upgraded?
And you can put the list of excluded items into a single text file listing all the folder names or file types to be excluded, place that file in the home directory on the host where rsync is issued, and then point rsync at it, e.g. "--exclude-from=~/my_excluded_list.txt". (Note that plain --exclude takes a single pattern; --exclude-from is the switch that reads patterns from a file.)
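A self-contained sketch of that, using throwaway paths under /tmp in place of the real source and backup destination (all names here are hypothetical):

```shell
# Build a small source tree with things we want excluded.
mkdir -p /tmp/ex_demo/src/cache /tmp/ex_demo/dst
touch /tmp/ex_demo/src/keep.txt /tmp/ex_demo/src/skip.tmp

# One exclude pattern per line in the list file.
printf '%s\n' '*.tmp' 'cache/' > /tmp/ex_demo/excludes.txt

# --exclude-from reads the patterns from the file; plain --exclude
# would treat the file name itself as a single pattern.
rsync -a --exclude-from=/tmp/ex_demo/excludes.txt /tmp/ex_demo/src/ /tmp/ex_demo/dst/

# keep.txt should be the only entry; skip.tmp and cache/ are excluded.
ls /tmp/ex_demo/dst
```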
Last edited by malekmustaq; 11-23-2015 at 07:22 AM.
It did not make much difference. Previously, when I backed up the whole source directory, rsync built the file list for the entire directory at once ("building file list ... done").
But now that I am copying directory by directory, it builds a file list for each directory, and that is why it is taking longer than expected.
There are millions of files in the source directory; I will paste the exact number here once I get the output of
Code:
find source/dir -type f | wc -l
The "memory exhausted" error is still not solved, though. If you have any solution for this error (with a very, very large number of files), please share.
I am getting the error while building the file list - rsync is not able to build a list for such a large number of files.
With millions of files it is going to be slow, as already noted.
You don't say anything about the target disk size, but in addition to the memory problem you might run into an inode limit on the drive itself, even if the drive space is sufficient. This will vary with the filesystem, also not mentioned so far.
How about a few more details? Target drive partition size, filesystem type, typical file size, and length of the exclude list come to mind for starters.
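The inode situation on the target is easy to check from the receiver. TARGET below is a placeholder - point it at the backup drive's mount point (e.g. /mnt/backup):

```shell
# Show inode usage rather than block usage for the target filesystem.
TARGET=/
df -i "$TARGET"

# An IUse% near 100% means the filesystem has run out of inodes even
# though 'df -h' may still show plenty of free space.
```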
I think it's better to upgrade rsync to version 3 and use the command-line option --no-inc-recursive. Have you had a chance to contact Iomega about upgrade options?