LinuxQuestions.org (/questions/)
-   Linux - Server (http://www.linuxquestions.org/questions/linux-server-73/)
-   -   which is the better way to rsync files between web servers? (http://www.linuxquestions.org/questions/linux-server-73/which-is-the-better-way-to-rsync-files-between-web-servers-809036/)

cooljai 05-20-2010 05:12 AM

which is the better way to rsync files between web servers?
 
Hi,

I have 15 web servers (on a private network) running RHEL and Apache, and I need to sync web files between them. Each server is accessible to the others via public key (with passphrase).

Here are the two approaches I can think of; please let me know your views and the best possible way to implement this.

1) The main server is web1 (where the devs upload files initially). I can make all the other servers accessible from web1 without a password/passphrase and run rsync periodically to push updates out to them (sketched below). But security is a concern here, as every server would become easily accessible from web1.

2) Run an rsync daemon on all the other servers (except web1) on a designated port and run the rsync command from web1 to sync files (also sketched below). This would do the job, but running a daemon on every server adds overhead, and making sure the daemon is running all the time is my concern with this approach.
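
For illustration, here is roughly what I have in mind; /var/www, web2, port 8730 and the 10.0.0.1 address are just examples.

Code:
# Option 1: push over SSH from web1
rsync -az --delete -e ssh /var/www/ web2:/var/www/

# Option 2: rsync daemon on each target, e.g. /etc/rsyncd.conf on web2:
#   [web]
#       path = /var/www
#       uid = apache
#       gid = apache
#       read only = no
#       hosts allow = 10.0.0.1    # web1's private address
# start it with: rsync --daemon --port=8730
# then push from web1:
rsync -az --delete --port=8730 /var/www/ web2::web/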

Kindly suggest. Thanks in advance.

kbp 05-20-2010 05:24 AM

Option 1 is fine; security is not really an issue, as the account performing the rsync would have to be compromised first. Another possible option is mounting a central location via NFS ...
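
For the NFS idea, something like this on each web server would do; the nfs1 hostname and /export/www path are just examples, adjust to your setup.

Code:
# /etc/fstab on each web server
nfs1:/export/www   /var/www   nfs   defaults,_netdev   0 0

# or mount it by hand to test
mount -t nfs nfs1:/export/www /var/www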

cooljai 05-20-2010 06:54 AM

Thanks, kbp, for the reply, but wouldn't an NFS mount create a single point of failure? At the moment the servers are almost independent of each other and only need a file refresh once in a while. Using NFS might make them more vulnerable to issues, I think.

Please advise.

JD50 05-20-2010 10:37 AM

Quote:

Originally Posted by cooljai (Post 3975296)
Thanks, kbp, for the reply, but wouldn't an NFS mount create a single point of failure? At the moment the servers are almost independent of each other and only need a file refresh once in a while. Using NFS might make them more vulnerable to issues, I think.

Please advise.

Have you thought about clustering?

kbp 05-20-2010 10:16 PM

You can create a simple active/passive nfs cluster using keepalived...
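
Something along these lines handles the floating IP side of it; the VIP, interface and priorities are just examples.

Code:
# /etc/keepalived/keepalived.conf on the active NFS node
# (the passive node uses state BACKUP and a lower priority)
vrrp_instance NFS_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.50    # floating IP the web servers mount NFS from
    }
}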

cooljai 05-21-2010 06:14 AM

Quote:

Originally Posted by kbp (Post 3976073)
You can create a simple active/passive nfs cluster using keepalived...

That seems quite interesting... can you provide more insight, or a link? I tried googling, but it seems the process of setting up such a cluster is not so simple, IMO.

EricTRA 05-21-2010 06:21 AM

Hello,

Have a look at the Unison file synchronizer. I'm sure it'll suit your needs perfectly, and it's a lot easier to set up: it runs over an SSH tunnel (which takes care of the security part), allows changes to the same file on multiple instances, and so on. I've been using it for a little over a year now on production servers and it hasn't failed me yet. No need to set up extra configurations, file systems, anything.
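
A typical invocation from web1 would look something like this; the path and hostname are just examples, and unison has to be installed (same version) on both ends.

Code:
# two-way sync of /var/www between web1 (local) and web2, over SSH, non-interactive
unison /var/www ssh://web2//var/www -batch -auto -times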

Kind regards,

Eric

kbp 05-21-2010 10:51 PM

1 Attachment(s)
I've modified/sanitised a script I used recently; rename it to .sh.

cheers

JD50 05-22-2010 12:32 AM

Quote:

Originally Posted by cooljai (Post 3976399)
That seems quite interesting... can you provide more insight, or a link? I tried googling, but it seems the process of setting up such a cluster is not so simple, IMO.

Look into Red Hat Cluster Suite, or use the free version from CentOS if you don't have or don't want to pay for a Red Hat subscription.

