Old 07-12-2007, 03:13 AM   #1
Registered: Aug 2006
Location: Shen Zhen
Distribution: Ubuntu 10.04
Posts: 198

Rep: Reputation: 33
Set up failover NFS with RHCS

Hi folks,

I've just configured NFS failover successfully in the cluster. I'm writing up my experience here for future reference, because I found very few related documents online and this job took me about two days.

Before starting, I recommend reviewing the manual "The Red Hat Cluster Suite NFS Cookbook: Setting up a Load-Balanced NFS Cluster with Failover Capabilities", which can be downloaded from the internet. It is a good guide on how to build a cluster, though I do not think it describes configuring NFS failover with system-config-cluster very clearly.

My test environment:
Backend storage (iSCSI protocol):

Step 1: Install Cluster Suite/Global FileSystem
Download the following packages from "" and install them on each node (node1 and node2). In this case, I use cman + DLM locking.
    rpm -ivh iscsi-initiator-utils- 
    rpm -ivh magma-1.0.7-1.i386.rpm
    rpm -ivh magma-devel-1.0.7-1.i386.rpm
    rpm -ivh ccs-*.rpm
    rpm -ivh gulm-*.rpm
    rpm -ivh magma-plugins-1.0.12-0.i386.rpm
    rpm -ivh cman*.rpm
    rpm -ivh dlm-*.rpm
    rpm -ivh perl-Net-Telnet-3.03-3.noarch.rpm
    rpm -ivh fence-1.32.45-1.i386.rpm
    rpm -ivh GFS-*.rpm
    rpm -ivh gnbd-*.rpm
    rpm -ivh lvm2*.rpm
    rpm -ivh iddev*.rpm
    rpm -ivh rgmanager-1.9.68-1.i386.rpm
    rpm -ivh ipvsadm-1.24-6.i386.rpm
    rpm -ivh system-config-cluster-1.0.45-1.0.noarch.rpm
Step 2: Add the IP addresses to /etc/hosts (on each node):
Code:
            node1
            node2
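The IP addresses were stripped from the original post. Purely as an illustration, assuming the two nodes sit on a private 192.168.1.0/24 network (these addresses are placeholders, not from the original post), /etc/hosts on each node might look like:

```
192.168.1.11    node1
192.168.1.12    node2
```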
Step 3: Create lv and format as GFS filesystem:
   # pvcreate /dev/sda /dev/sdc /dev/sde
   # vgcreate milan /dev/sda /dev/sdc /dev/sde
   # lvcreate -L 1000M -n mirror milan
   # gfs_mkfs -p lock_dlm -t mycluster:phillip_gfs -j 2 /dev/milan/mirror
Step 4: Create the mount point (on each node) and verify that the GFS share can be mounted:
   # mkdir /nfstest
   # chmod 777 /nfstest
   # mount -t gfs -o acl /dev/milan/mirror /nfstest
   # umount /nfstest
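Since rgmanager will manage this mount as a cluster resource, the filesystem should not be auto-mounted at boot. If you want the entry on hand for manual testing, a hypothetical /etc/fstab line (my suggestion, not from the original post) with "noauto" would look like:

```
/dev/milan/mirror  /nfstest  gfs  acl,noauto  0 0
```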
Step 5: On node1 (or node2), execute "system-config-cluster". If you are logging in from a remote host, use the following commands to redirect the output to your local screen (X server):
  # ssh -Y
  # system-config-cluster
Add node1 and node2 to "Cluster Nodes". Add a Fence Device (Manual Fencing type) and name it "NPS". Click node1, edit its fence configuration, and "Add a New Fence Level"; repeat this step on node2.

Create a Failover Domain named "nfstest", keeping the default settings.

OK, please pay close attention; the following steps are the crucial part of the resource/service configuration:
Create a Resource --- GFS ----Name: phillip_gfs
Mount Point: /nfstest
Device: /dev/milan/mirror
Options: acl
---NFS Export--- Name: nfstest
---NFS Client--- Name: nfsclient
Options: no_root_squash
---IP Address--- (keep "Monitor Link" enabled)
Create a Service --- name: nfsfailover
Failover Domain: nfstest
Recovery Policy: Relocate

Add a Shared Resource to this service: phillip_gfs
Select the added "phillip_gfs" and click "Attach a Shared Resource to the selection", choosing "NFS Export: nfstest". Then select "NFS Export", click "Attach a Shared Resource to the selection" again, and choose "NFS Client: nfsclient".
Finally, "Add a Shared Resource to this service" once more and choose the "IP Address".

OK, go back to "Cluster Configuration", save the settings with "File - Save", and click the "Send to Cluster" button to copy the configuration to the other node.
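For reference, the resource tree built above ends up in /etc/cluster/cluster.conf roughly like this. This is a hand-written sketch of the rm section only; the IP address is a placeholder and the exact attributes system-config-cluster writes may differ slightly:

```xml
<rm>
  <failoverdomains>
    <failoverdomain name="nfstest"/>
  </failoverdomains>
  <resources>
    <!-- GFS filesystem resource -->
    <clusterfs name="phillip_gfs" mountpoint="/nfstest"
               device="/dev/milan/mirror" fstype="gfs" options="acl"/>
    <nfsexport name="nfstest"/>
    <nfsclient name="nfsclient" target="*" options="no_root_squash"/>
    <!-- x.x.x.x is a placeholder for the floating service IP -->
    <ip address="x.x.x.x" monitor_link="1"/>
  </resources>
  <service name="nfsfailover" domain="nfstest" recovery="relocate">
    <!-- note the nesting: export inside filesystem, client inside export -->
    <clusterfs ref="phillip_gfs">
      <nfsexport ref="nfstest">
        <nfsclient ref="nfsclient"/>
      </nfsexport>
    </clusterfs>
    <ip ref="x.x.x.x"/>
  </service>
</rm>
```

The nesting matters: the NFS export can only start after its filesystem is mounted, and the client entry can only start after the export exists.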

Restart the nodes.

Use a third-party Linux machine to test:
# ping
It should respond well.
# mount /mnt
(Now you should see the "" displayed when you execute "ip addr list" on node1:
  # ip addr list
  2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:24:0c:72 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
    inet scope global eth0
    inet6 fe80::20c:29ff:fe24:c72/64 scope link
       valid_lft forever preferred_lft forever

Power off node1 and see what happens: node2 takes over the NFS service within moments, and the failover is smooth.

Hope this is helpful.

Old 09-09-2008, 09:05 PM   #2
LQ Newbie
Registered: Aug 2008
Posts: 3

Rep: Reputation: 0


I'm using RHCS on CentOS 5.2. I was confused by the NFS Export resource...
What does the NFS Export "Name" mean? Is it just a name, without any path? I've set up the configuration about ten times, and the log always shows "No Export Path"...

Can you help me with this?



