Old 11-20-2015, 04:48 AM   #1
unclesamcrazy
Member
 
Registered: May 2013
Posts: 200

Rep: Reputation: 1
rsync fork failed when transferring a very large number of files


I am trying to transfer a very large number of files to a remote server, but the rsync process stopped unexpectedly with the following error.

Code:
rsync: fork failed in do_recv: Cannot allocate memory (12)
rsync error: error in IPC code (code 14) at main.c(709) [receiver=2.6.9]
rsync: connection unexpectedly closed (8 bytes received so far) [sender]
rsync error: error in IPC code (code 14) at io.c(605) [sender=3.0.9]
I am using the following command to transfer the files.
Code:
rsync -avzW /source/folder/ --exclude="first-folder" --exclude="2-folder" --exclude="3-folder" --exclude="4-folder" --exclude="5-folder" 192.168.0.200:/Destination/backup
Please help me solve this error. It was working fine a week ago, but now it fails with the same source and destination configuration.

Thank you
 
Old 11-20-2015, 04:58 AM   #2
berndbausch
LQ Addict
 
Registered: Nov 2013
Location: Tokyo
Distribution: Mostly Ubuntu and Centos
Posts: 6,316

Rep: Reputation: 2002
A superficial Google search indicates that this is a problem in the rsync algorithm that goes away with version 3. From your output it would seem that the receiver is still running version 2.6.9 - perhaps it can be upgraded?
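If you can get a shell on both ends, it might be worth confirming the versions first (a quick sketch; this assumes the NAS accepts SSH logins, which not every iomega model does):
Code:
# on the CentOS sender
rsync --version
# on the NAS, if it allows SSH logins
ssh 192.168.0.200 rsync --version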
 
Old 11-20-2015, 05:15 AM   #3
unclesamcrazy
Member
 
Registered: May 2013
Posts: 200

Original Poster
Rep: Reputation: 1
The receiver is an iomega NAS. How should I upgrade it? Please help me.
 
Old 11-20-2015, 05:19 AM   #4
descendant_command
Senior Member
 
Registered: Mar 2012
Posts: 1,876

Rep: Reputation: 643
Quote:
Originally Posted by unclesamcrazy View Post
The receiver is an iomega NAS. How should I upgrade it? Please help me.
Ask iomega?

rsync is memory-intensive with lots of files - maybe break your transfer into smaller sections.
 
1 members found this post helpful.
Old 11-23-2015, 12:14 AM   #5
unclesamcrazy
Member
 
Registered: May 2013
Posts: 200

Original Poster
Rep: Reputation: 1
Thanks for the help... I am now transferring the data in smaller sections...
Code:
find /source/directory -maxdepth 1 -mindepth 1 -type d -exec rsync -av {} --exclude="1folder" --exclude="2folder" --exclude="3folder" --exclude="4folder" --exclude="5folder" 192.168.0.200:/destination/folder/backup \;
Although it is taking about five times longer than before, at least the backup process has started.
 
Old 11-23-2015, 07:20 AM   #6
malekmustaq
Senior Member
 
Registered: Dec 2008
Location: root
Distribution: Slackware & BSD
Posts: 1,669

Rep: Reputation: 498
You can also put the excluded items into a single text file, listing all the folder names or file types to be excluded. Put that file in the home directory on the host from which rsync is issued, then point rsync at it with the --exclude-from switch, e.g. "--exclude-from=my_excluded_list.txt".
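For example (a sketch; the file name, its path and the patterns are only illustrations):
Code:
# ~/my_excluded_list.txt -- one pattern per line;
# blank lines and lines starting with '#' or ';' are ignored
first-folder
2-folder
3-folder
4-folder
5-folder
and then:
Code:
rsync -avzW --exclude-from=$HOME/my_excluded_list.txt /source/folder/ 192.168.0.200:/Destination/backup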

Last edited by malekmustaq; 11-23-2015 at 07:22 AM.
 
Old 11-23-2015, 11:15 PM   #7
unclesamcrazy
Member
 
Registered: May 2013
Posts: 200

Original Poster
Rep: Reputation: 1
Quote:
Originally Posted by malekmustaq View Post
You can also put the excluded items into a single text file, listing all the folder names or file types to be excluded. Put that file in the home directory on the host from which rsync is issued, then point rsync at it with the --exclude-from switch, e.g. "--exclude-from=my_excluded_list.txt".
It did not make much difference. Initially, when I backed up the whole source directory at once, rsync built the file list for the entire directory in one pass, showing the text "building file list ... done".
But now that I am copying the backup directory by directory, it builds a file list for each directory separately, and that is why it is taking longer than expected.

There are millions of files in the source directory; I will paste the exact number here once I get the output of
Code:
find source/dir -type f | wc -l
The memory-exhausted error itself is still not solved, though. If you have any solution for this error (a very, very large number of files), please share it.
I get the error while the file list is being built - rsync is not able to build a list for such a large number of files.

Thanks
 
Old 11-23-2015, 11:18 PM   #8
descendant_command
Senior Member
 
Registered: Mar 2012
Posts: 1,876

Rep: Reputation: 643
Add more memory (or swap, but that will be *slow*).
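If more RAM is not an option, a temporary swap file can sometimes get you past the failure (a sketch; the size and path are illustrative, and it has to be created on whichever side is actually running out of memory - your error message comes from the receiver):
Code:
# as root: create, protect, format and enable a 2 GB swap file
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s    # verify it is active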
 
Old 11-23-2015, 11:41 PM   #9
astrogeek
Moderator
 
Registered: Oct 2008
Distribution: Slackware [64]-X.{0|1|2|37|-current} ::12<=X<=15, FreeBSD_12{.0|.1}
Posts: 6,263
Blog Entries: 24

Rep: Reputation: 4194
With millions of files it is going to be slow, as already noted.

You don't say anything about the target disk size, but in addition to the memory problem you might run into an inode limit on the drive itself, even if the drive space is sufficient. This will vary with the filesystem, also not mentioned so far.

How about a few more details? Target drive partition size, filesystem type, typical file size, and the length of the exclude list come to mind for starters.
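On the CentOS side, inode usage is easy to check (a sketch; whether the NAS exposes its inode counts depends on the firmware):
Code:
df -i /home    # IUsed/IFree columns show inode consumption
df -h /home    # block usage, for comparison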

Last edited by astrogeek; 11-23-2015 at 11:45 PM.
 
Old 11-24-2015, 12:23 AM   #10
serverpoint.com
Member
 
Registered: Oct 2015
Posts: 52

Rep: Reputation: 6
I think it's better to upgrade rsync to version 3 and use the command-line option --no-inc-recursive. Have you had a chance to contact iomega about upgrade options?
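For reference, the original command with that option might look like this (a sketch; it assumes both sender and receiver have been upgraded to rsync 3.x, and reuses the hypothetical exclude file suggested earlier in the thread):
Code:
# --no-inc-recursive builds the complete file list before the transfer starts
rsync -avzW --no-inc-recursive --exclude-from=$HOME/my_excluded_list.txt /source/folder/ 192.168.0.200:/Destination/backup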
 
Old 11-24-2015, 12:30 AM   #11
unclesamcrazy
Member
 
Registered: May 2013
Posts: 200

Original Poster
Rep: Reputation: 1
No, I will not be able to contact support; I need to do this myself, and so far I have not been able to.

Total number of files in the source directory: 3845408
Size of the source directory: 250 GB
Source operating system: CentOS 7.1
Destination operating system: iomega NAS
Cron job runs on: the source (CentOS 7.1)
Source hard disk size: 1 TB
Code:
/dev/mapper/centos-root   50G   16G   35G   31% /
/dev/mapper/centos-home  879G  253G  627G   29% /home
Source RAM: 1849336 kB
Source swap: 2113532 kB
Source processor: Pentium(R) Dual-Core CPU E5700 @ 3.00GHz
siblings: 2
cpu cores: 2

Last edited by unclesamcrazy; 11-24-2015 at 01:29 AM.
 
  

