LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   NFS mount on 2 node "cluster" (https://www.linuxquestions.org/questions/linux-server-73/nfs-mount-on-2-node-cluster-844590/)

machielr 11-16-2010 06:27 AM

NFS mount on 2 node "cluster"
 
Good day all.

We have a set of two production machines running Oracle databases.

There are a couple of SAN-attached filesystems, one of which is created as ext3 on the first machine (node1) and NFS-exported to the second machine (node2).
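For illustration, the relevant configuration is roughly along these lines (the path shown is just a placeholder, not our actual mount point):

# /etc/exports on node1
/u01/shared    node2(rw,sync,no_root_squash)

# /etc/fstab on node2
node1:/u01/shared    /u01/shared    nfs    defaults    0 0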

However, during certain conditions related to RAC, the interconnect between the two nodes loses connectivity, and because of that loss of communication the servers reboot.

The problem, however, is that node2 usually reboots first, and by the time it has started up, node1 is not up and running yet, so the NFS mount is not available on node2.


I have thought through some options for getting the servers to resolve this automatically, but I am posting my question here as someone might already have a reliable way of managing this.

One idea I have is to create a script on node2 that mounts the NFS filesystem, set up an SSH key-authenticated user between the nodes, and then put another script in node1's startup that sshes to node2 and runs that mount.
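Roughly, I picture the node1 side looking something like this (the hostname, user and mount point are placeholders, and it assumes a matching fstab entry already exists on node2):

#!/bin/sh
# Sketch only: run from node1's startup (e.g. rc.local) once the NFS
# export is active.  Assumes an ssh key for root is authorised on node2
# and that node2's fstab has an entry for this mount point.
MOUNTPOINT=/u01/shared
ssh -o ConnectTimeout=10 root@node2 "mount $MOUNTPOINT" \
    || logger "could not trigger NFS mount of $MOUNTPOINT on node2"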


Does anybody have any better ideas on this?

Regards
Machiel

kerrylinux 11-16-2010 10:22 AM

Quote:

Originally Posted by machielr (Post 4160599)
Good day all.

One idea I have is to create a script on node2 that mounts the NFS filesystem, set up an SSH key-authenticated user between the nodes, and then put another script in node1's startup that sshes to node2 and runs that mount.

Does anybody have any better ideas on this?

Regards
Machiel

If I understand you correctly, you mean node1 (not node2) drives the mount: once node1 is up and exporting the filesystem, it initiates the NFS mount on node2 via an ssh command. So the automatic mount on node2 has to be disabled in order to let node1 trigger the mount at the right time.

But why don't you simply delay node2 for the time it takes node1 to boot up? You could disable the automatic NFS mount on node2 and insert a delay script in rc.local at the end of node2's boot process.
This script would execute a loop, checking with "ping node1" whether node1 is up. When node1 comes up, you would still let node2 sleep for a certain time to allow node1 to export the NFS filesystem, and then mount it from node2. The benefit is that you always ensure the correct order of events, even if node1 accidentally boots faster than node2, and you do not have to use a (passwordless) ssh connection between the two nodes, since node2 does the mount itself.
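A minimal sketch of what I mean, assuming the fstab entry on node2 is marked "noauto" and using placeholder names and timings:

#!/bin/sh
# Sketch only: called from rc.local on node2.  Mount point and sleep
# values are examples; adjust to how long node1 really takes to boot.
NFS_MOUNT=/u01/shared
# wait until node1 answers a single ping
until ping -c 1 -W 2 node1 >/dev/null 2>&1; do
    sleep 10
done
# give node1 extra time to finish booting and exporting the filesystem
sleep 120
mount "$NFS_MOUNT" || logger "NFS mount of $NFS_MOUNT failed on node2"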

But your version would work, too.

Regards

Ralph

