LinuxQuestions.org

davistai 03-13-2014 03:00 AM

Cluster questions in RHEL 6.3
 
Hello,
I am running a two-node cluster (two_node mode) on RHEL 6.3, and here's the output of 'clustat':
# clustat
Cluster Status for mycl @ Thu Mar 13 15:56:05 2014
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 labhe01                                1 Online, Local, rgmanager
 labhe02                                2 Online, rgmanager
 /dev/block/8:32                        0 Offline, Quorum Disk

 Service Name             Owner (Last)             State
 ------- ----             ----- ------             -----
 service:myService1       labhe01                  started


Q1. The quorum disk status is 'Offline'. Is that normal? If not, how do I fix it?

Q2. The resources (service IP, filesystem) can be moved between the two nodes manually. However, if I run 'shutdown -h' on one node, the other node won't take over the resources. Why?

Please kindly help. Thanks in advance.

Davis

davistai 03-13-2014 03:42 AM

Hi all,

Just deselect the item "Do Not Use a Quorum Disk" in Quorum Disk Configuration.
Then, no matter which node is shut down, the other node takes over the resources without error.
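
For reference, here is roughly what the resulting two-node configuration looks like in /etc/cluster/cluster.conf when no quorum disk is defined (just a sketch; the cluster and node names are from my clustat output above, everything else is illustrative):

    <?xml version="1.0"?>
    <cluster config_version="2" name="mycl">
        <!-- two_node="1" lets either node stay quorate on its own -->
        <cman expected_votes="1" two_node="1"/>
        <clusternodes>
            <clusternode name="labhe01" nodeid="1"/>
            <clusternode name="labhe02" nodeid="2"/>
        </clusternodes>
    </cluster>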

But I was told that running without a quorum disk in a two_node environment can cause trouble in the future.
So, can anyone help with this? Any comments will be appreciated.
Thanks,

Davis

chrism01 03-13-2014 03:58 AM

Quote:

Just deselect the item "Do Not Use a Quorum Disk" in Quorum Disk Configuration.
is effectively a double negative.

Basically, "quorum" is based on the English word of the same meaning; in other words, a majority (of votes).
Normally in the setup you're describing, each host node and the quorum disk have one vote each.
If one of the hosts dies, the other knows it's OK to keep running because it can still see a total of 2 votes (1 host + 1 quorum disk), which is greater than half of the total votes theoretically available (in this case 3).
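
A sketch of how that vote arithmetic is usually expressed in /etc/cluster/cluster.conf (the label, timings, and heuristic IP here are illustrative, not from your config):

    <cman expected_votes="3"/>
    <clusternodes>
        <clusternode name="labhe01" nodeid="1" votes="1"/>
        <clusternode name="labhe02" nodeid="2" votes="1"/>
    </clusternodes>
    <!-- qdiskd locates the disk by the label written with mkqdisk -->
    <quorumd label="myqdisk" votes="1" interval="1" tko="10" min_score="1">
        <!-- a node must pass the heuristic to claim the qdisk vote -->
        <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2" tko="3"/>
    </quorumd>

With 3 expected votes, one surviving node plus the quorum disk (2 votes) stays quorate, while a lone node without the disk (1 vote) does not.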

HTH

davistai 03-13-2014 08:07 PM

Dear Chris,

Thanks for the comment.
So, "/dev/block/8:32 0 Offline, Quorum Disk" is not normal, right?
Is this what caused the failover to fail? How can it be fixed?

When I tested the failover, the surviving node logged this message:
"rgmanager[13791]: #1: Quorum Dissolved"

And when I run 'qdiskd -d -f', I get another message:
"qdisk_validate: open of /dev/block/11:0 for RDWR failed: No medium found"

Thanks again.

Davis

