which is the better way to rsync files between web servers?
Hi,
I have 15 web servers (on a private network) running RHEL and Apache, and I need to sync web files between them. Each server is accessible to the others via public key (with passphrase).
Here are the two approaches I can think of. Please let me know your views and the best way to implement this.
1) The main server is web1 (where devs upload files initially). I could make all the other servers accessible from web1 without a password/passphrase and run rsync periodically to push updates to them. But security is a concern here, since compromising web1 would make every server easily accessible.
2) Run an rsync daemon on all the other servers (everything except web1) on a designated port, and run the rsync command from web1 to sync files. This would do the job, but running a daemon on every server adds overhead, and making sure the daemon stays up all the time is my concern with this implementation.
Option 1 is fine; security is not really an issue, as the account performing the rsync would have to be compromised first. Another possible option is mounting a central location via NFS ...
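If you go the NFS route, a minimal sketch (the export host `nfs1` and paths are assumptions, and this needs root on both ends):

```shell
# on the NFS server, export the docroot (/etc/exports):
#   /var/www/html  web*(rw,sync,no_root_squash)
# then reload the export table:
#   exportfs -ra

# on each web server, mount it:
#   mount -t nfs nfs1:/var/www/html /var/www/html
# or make it permanent with an /etc/fstab line:
#   nfs1:/var/www/html  /var/www/html  nfs  defaults,_netdev  0 0
```

With this setup there is nothing to sync at all: every server serves the same files from the shared mount, which is exactly why the single-point-of-failure question below matters.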
Thanks kbp for the reply, but won't an NFS mount create a single point of failure? At the moment the servers are almost independent of each other and only need a file refresh once in a while. Using NFS might make them more vulnerable to issues, I think.
You can create a simple active/passive nfs cluster using keepalived...
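A minimal keepalived sketch for floating the NFS server's virtual IP between two nodes. The interface name, VIP, and priorities are assumptions, and failing over the NFS export state itself (shared storage or replication) is a separate step:

```
# /etc/keepalived/keepalived.conf on the active node
vrrp_instance NFS_VIP {
    state MASTER           # set to BACKUP on the passive node
    interface eth0
    virtual_router_id 51
    priority 100           # lower (e.g. 90) on the passive node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.168.1.100/24   # clients mount nfs from 192.168.1.100
    }
}
```

The web servers mount the VIP rather than a physical host, so when the master fails, keepalived moves the address to the backup and clients keep working after the mount recovers.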
That seems quite interesting. Can you provide more insight, or a link? I tried googling, but the process of setting up such a cluster doesn't seem so simple, IMO.
Have a look at Unison File Synchronizer. I'm sure it'll suit your needs perfectly, and it's a lot easier to set up: it runs over an SSH tunnel (which takes care of the security part), allows changes to a file on multiple instances, and so on. I've been using it for a little over a year now on production servers and it hasn't failed me yet. No need to set up extra configuration, file systems, nothing.
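For reference, a minimal Unison invocation over SSH. Hostnames and paths are assumptions, and Unison must be installed at the same version on both ends:

```shell
# one-shot bidirectional sync of the docroot between web1 and web2
# (note the double slash for an absolute remote path):
#   unison /var/www/html ssh://deploy@web2//var/www/html \
#       -batch -auto -prefer newer

# or put a profile in ~/.unison/web.prf:
#   root = /var/www/html
#   root = ssh://deploy@web2//var/www/html
#   batch = true
#   auto = true
# and run it (e.g. from cron) with:
#   unison web
```

Unlike one-way rsync pushes, Unison reconciles changes made on either side, so devs could upload to more than one server if needed.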
Look into Red Hat Cluster Suite, or use the free version from CentOS if you don't have or don't want to pay for a Red Hat subscription.