Distribution: Red Hat CentOS Ubuntu FreeBSD OpenSuSe
Posts: 252
Rep:
rsync memory usage over ssh
Hello,
The rsync FAQ says that it uses a lot of memory, and more with the -H and --delete options. The FAQ says about 100 bytes per file, which in my case should translate to about 800 MB. But my rsync procedure ate up almost 97% of my 4 GB of memory on its initial run, and it stays there. Is this because I am using the --stats --recursive --delete -auvz options over ssh? I'm running the rsync script every 20 minutes.
What is the per-file overhead when all of those options are used? Is there a way to release the memory after the initial run of rsync? Thanks.
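A rough sanity check is to count the files being synced and apply the FAQ's ~100-bytes-per-file figure. This is just a sketch; /data is a placeholder, substitute your actual source directory:

```shell
# Estimate rsync's file-list memory from the file count
# (~100 bytes per file per the rsync FAQ; -H and --delete add more).
# /data is a placeholder path - substitute your real source directory.
NFILES=$(find /data 2>/dev/null | wc -l)
echo "files: $NFILES"
echo "approx file-list memory: $((NFILES * 100 / 1024 / 1024)) MB"
```

If the count is around 8 million files, ~800 MB of file-list memory is in line with the FAQ; memory use far beyond that is coming from somewhere else (most likely the page cache, as discussed below).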
Nothing obviously wrong to me in that - need to know a "baseline" before your rsync. Presumably you couldn't see any obvious "bad" memory consumers when you were in top.
I'd be thinking inode/dentry consumption - this can be seen in /proc/slabinfo, or slabtop. Later kernels allow you to free these caches - I'd be looking to drop the caches, save the slabinfo, run the rsync, then re-save the slabinfo, and compare them.
drop_caches is likely to have an adverse effect (for a while) on a fileserver, so might not be acceptable depending on your situation. It's documented in the source tree at ../Documentation/filesystems/proc.txt, or "man proc" if you have the feature available on your kernel level.
Of course, having all the memory utilized may not be harming your system's ability to do the work, so you may be worrying needlessly too. That's for you to decide.
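That drop-caches/compare procedure can be sketched like this (run as root; the rsync invocation and paths are placeholders, and drop_caches only exists on kernels that have the feature, 2.6.16 and later):

```shell
#!/bin/sh
# Sketch of the slabinfo comparison procedure (run as root).
sync                               # flush dirty pages before dropping caches
echo 3 > /proc/sys/vm/drop_caches  # frees page cache plus dentries/inodes (2.6.16+)
cp /proc/slabinfo /tmp/slab.before
# Placeholder rsync invocation - substitute your real one.
rsync --stats --recursive --delete -auvz /data/ backuphost:/data/
cp /proc/slabinfo /tmp/slab.after
# Compare dentry/inode slab growth across the run.
diff /tmp/slab.before /tmp/slab.after | grep -E 'dentry|inode'
```

If the dentry/inode slab counts balloon across the run, the "missing" memory is kernel caching of the file tree rsync walked, not rsync itself.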
Distribution: Red Hat CentOS Ubuntu FreeBSD OpenSuSe
Posts: 252
Original Poster
Rep:
Quote:
Originally Posted by syg00
Nothing obviously wrong to me in that - need to know a "baseline" before your rsync. Presumably you couldn't see any obvious "bad" memory consumers when you were in top.
Thanks syg. There are no "bad" memory consumers; only ntp, snmp and heartbeat are running aside from the system-required applications. On my baseline, after a fresh reboot, memory usage is only about 300 to 400 MB; then when I ran the rsync, it peaked at 3.8 GB.
I'm using the 2.6.9-55.0.6.ELsmp kernel on CentOS 4.5.
With Linux, if RAM is available (i.e., unused), Linux will use as much of it as possible for caching, to speed up the current program and keep data ready for further usage.
If you ran other programs as well, you'd see Linux re-proportion that usage among all the running programs.
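A quick way to tell reclaimable cache apart from real application memory, using standard tools (on older procps versions such as CentOS 4's, free prints a "-/+ buffers/cache" line; newer versions show an "available" column instead):

```shell
# The "-/+ buffers/cache" (or "available") figure shows memory actually
# held by applications; the raw "used" figure includes reclaimable page cache.
free -m
# The same numbers straight from the kernel:
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```

If the applications' "used" figure stays near your 300-400 MB baseline while the raw "used" climbs to 3.8 GB, the difference is page cache, and the kernel will hand it back on demand.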