How to refresh a shared file system on a shared logical volume created on a NAS back-end
I want to share a file system between two servers that are connected to network-attached storage (NAS). I am using RHEL 5.0 on both servers. To do this I have done the following so far:
1. Created one logical volume (LV) on the NAS back-end, made it available (lvchange -ay), and created an 'ext3' file system on it. Then mounted it on my server1 like:
mount -o rw,sync,dirsync <lv> <mount-point>
Because I want writes to go directly to disk, not to the write cache.
2. Next, on my server2, I made the same LV available (lvchange -ay) and mounted the file system like:
mount -o ro,sync <lv> <mount-point>
Because I don't want to allow anybody to write from server2.
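For reference, the two-server setup above can be sketched as follows. The VG/LV path and mount point are hypothetical placeholders, not names from the original post:

```shell
# On server1 (read-write): activate the LV, then mount with synchronous
# data and directory updates so writes bypass the write cache.
lvchange -ay /dev/vg_nas/lv_shared            # hypothetical VG/LV name
mkdir -p /mnt/shared
mount -o rw,sync,dirsync /dev/vg_nas/lv_shared /mnt/shared

# On server2 (read-only): activate the same LV and mount it read-only.
lvchange -ay /dev/vg_nas/lv_shared
mkdir -p /mnt/shared
mount -o ro,sync /dev/vg_nas/lv_shared /mnt/shared
```

Note that even a read-only ext3 mount may replay the journal at mount time, so this arrangement is not truly safe for concurrent access.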
I am able to see the shared file system on both servers, but the problem is that whatever I write to the shared file system from server1 is not visible on server2 immediately.
So I tried the following:
1. Unmount and mount again, and I am able to see the changes. But my requirement is to refresh the file system online (without unmounting).
2. Run e2fsck -F <lv> on server2. But e2fsck is not safe on a mounted file system, and internally it does an unmount/mount.
3. So I tried 'sync' and 'echo 3 > /proc/sys/vm/drop_caches' on server2. The file system is getting refreshed, but inconsistently: sometimes it takes a long time or requires firing those commands multiple times. The flushing of the read cache is not consistent.
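For completeness, the cache-dropping attempt in step 3 looks like this when run as root on server2 (the correct sysctl file is drop_caches, with a trailing 's'):

```shell
# Flush any dirty pages (a no-op for a read-only mount), then ask the
# kernel to drop the page cache plus cached dentries and inodes.
sync
echo 3 > /proc/sys/vm/drop_caches
```

Even so, this only discards server2's own caches; ext3 has no mechanism to be told which blocks server1 changed after server2 cached them, so the result is inherently racy.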
I don't want to use CLVM and the global file system (GFS); my requirement doesn't allow that many changes to my Linux build.
So I am looking for something that can stabilize my file system refresh. I am happy with a small delay.
Can anybody please help me?
ext3 is not a cluster-aware filesystem and should not be used as such.
See: Can I mount an ext3 filesystem on a SAN from multiple nodes at the same time?
You may very well be better served by leveraging NFS... at least for 'server2', since GFS is not a viable option given your noted constraint(s).
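A minimal sketch of the NFS alternative, assuming server1 keeps the block-device mount and exports it read-only to server2 (the export path, host names, and options below are assumptions, not from the original posts):

```shell
# On server1: export the mounted file system read-only to server2.
# Hypothetical /etc/exports line:
#   /mnt/shared  server2(ro,sync)
exportfs -ra          # re-read /etc/exports
service nfs start     # RHEL 5 init script

# On server2: mount the NFS export instead of the block device.
mount -t nfs -o ro server1:/mnt/shared /mnt/shared
```

With this layout only server1 ever touches the ext3 file system directly, and NFS takes care of client-side cache revalidation, so server2 sees new writes after the normal attribute-cache timeout (tunable via the actimeo mount option).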
Thanks for your suggestion. NFS is the last option I had in mind, though it may be a little awkward given my current architecture. I have the following queries:
1. If I go with GFS alone, will it work when the device is a logical volume? I don't want to go with CLVM as I believe it requires kernel compilation.
2. Will it be sufficient if I build from the GFS SRPM? Recompiling the kernel will not be approved.
3. Can the 'sync, drop_caches' technique be made more predictable?
4. I have seen a mount option, mount -o dio. What does this option actually do? Will it be helpful?
I don't understand why gfs2 and clvm would require kernel compilation on RHEL 5.x. The packages are available via the RHEL5 Clustering and ClusterStorage subscriptions (or RHN/yum channels), and they provide the necessary kernel modules...
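On a subscribed RHEL 5 box the cluster pieces are ordinary packages pulled from those channels; no kernel rebuild is involved. The exact package names below are my best recollection of the RHEL 5 Clustering/ClusterStorage channel contents, so treat them as an assumption:

```shell
# Install cluster manager, clustered LVM, and GFS2 userspace tools
# from the RHEL5 Clustering / ClusterStorage yum channels.
yum install cman lvm2-cluster gfs2-utils
```

The matching kernel modules ship with the stock RHEL 5 kernel or as kmod packages from the same channels.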