Asynchronous Distributed Filesystem on RHEL
Linux Questions,
We have a requirement for a new project to synchronize 6TB of data to our DR site. The data is split across a number of volumes (30 x 200GB, I believe); one is being actively written to, but the others may have data deleted from them. We are running RHEL 4 AS and will be synchronizing over roughly 20km of relatively high-capacity dark fibre links.

Has anyone come across a good solution to this problem? I think GFS will not work across a high-latency link, and we want something that will survive the link going down (hence asynchronous). DRBD from the Linux-HA project looks like it does what we want, but I don't know whether it is supported by Red Hat, and I have never used it myself. I think rsync will not cope with such a large amount of data, at least not without some serious trickery; otherwise it will cause a large performance hit. In this instance, synchronizing through our SAN does not appear to be an option.

Thanks,
Joel
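For what it's worth, the "serious trickery" for rsync usually amounts to syncing volume by volume rather than the whole 6TB in one pass, so an interrupted run only has to redo one volume. Below is a minimal sketch of that idea; the DR hostname, volume paths, and bandwidth cap are all hypothetical, and `DRY_RUN` is set so it only prints the commands it would run:

```shell
#!/bin/sh
# Sketch only: DR_HOST, the /vol/* paths, and --bwlimit are assumptions,
# not details from the original post.
DR_HOST="dr-site"
DRY_RUN="echo"   # remove this to actually run rsync

sync_volume() {
    # --archive  preserves permissions, ownership and timestamps
    # --delete   propagates deletions made on the source volume
    # --partial  lets an interrupted file transfer resume when the
    #            link comes back, instead of starting over
    # --bwlimit  caps transfer rate (KB/s) so the sync does not
    #            starve production I/O
    $DRY_RUN rsync --archive --delete --partial --bwlimit=20000 \
        "$1/" "$DR_HOST:$1/"
}

# Only the active volume changes much; the read-mostly volumes mostly
# see deletions, so per-volume passes are cheap after the initial sync.
for vol in /vol/vol01 /vol/vol02; do   # hypothetical volume mount points
    sync_volume "$vol"
done
```

The first full pass over 6TB will still be slow, but after that rsync only moves deltas, and a per-volume loop keeps each restartable unit at 200GB instead of 6TB.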