sneakyimp |
01-17-2013 07:01 PM |
How to replicate a folder and its contents instantly across N servers?
I have been asked to make sure that N web servers (Amazon EC2 instances: maybe two, maybe three, maybe four...maybe N) all maintain the exact same contents for a particular folder. Let's call the folder /home/my_folder. Any additions, subtractions, or changes to the files and directories in this folder will be performed on one master machine and must be propagated *immediately* (or ASAP) to N slaves.
I have considered using NFS to create a shared directory on some machine and just have the slaves mount it, but I worry about performance when the N web servers are responding to HTTP requests that reference these shared files. Would every HTTP request result in a file system action to check the modification date of the file?
Alternatively, I am considering having the N slaves all mount this NFS share and run lsyncd locally on each slave to watch the share for changes. When a change is detected, the slave would copy the changed files from the share into a local copy of the folder, and serve HTTP requests from that local copy. Can lsyncd watch a share mounted via NFS? Is there a possibility that lsyncd might trigger twice when a large file gets uploaded via FTP? The page I linked says:
Quote:
Lsyncd watches a local directory trees event monitor interface (inotify or fsevents). It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes.
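For reference, the per-slave lsyncd configuration I have in mind would look roughly like this (a config fragment only; the mount point and delay value are hypothetical, and I am assuming lsyncd's stock `default.rsync` layer can take a local directory as its target):

```lua
-- /etc/lsyncd.lua on each slave (hypothetical paths):
-- watch the NFS-mounted share and mirror it into a local folder
-- that the web server actually serves from.
sync {
    default.rsync,
    source = "/mnt/nfs/my_folder",   -- NFS mount of the master's share
    target = "/home/my_folder",      -- local copy used for HTTP requests
    delay  = 1,                      -- aggregate events for ~1 second
}
```

My open question is whether the inotify events lsyncd relies on are even generated for changes made on the far side of an NFS mount.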
|
I hope someone can help me understand how best to address this request.
|