LinuxQuestions.org
Linux - Virtualization and Cloud This forum is for the discussion of all topics relating to Linux Virtualization and Linux Cloud platforms. Xen, KVM, OpenVZ, VirtualBox, VMware, Linux-VServer and all other Linux Virtualization platforms are welcome. OpenStack, CloudStack, ownCloud, Cloud Foundry, Eucalyptus, Nimbus, OpenNebula and all other Linux Cloud platforms are welcome. Note that questions relating solely to non-Linux OSes should be asked in the General forum.

Old 03-04-2014, 04:32 AM   #1
sinkrideutan
LQ Newbie
 
Registered: Mar 2012
Posts: 1

Rep: Reputation: Disabled
how to fence cluster members when using virtualized HBAs


Hi everybody,

When deploying bare-metal RHEL clusters there is the problem of fencing: powering down a physical machine seems like a good idea, but when "somebody" restarts the machine your data can get corrupted on the fly, because the booting machine lights up the HBA, the LUNs get mounted and the misery starts.

To get around that, some organizations use a fencing script that shuts down the interface on the SAN switch to which the failed node's HBA is physically connected. This script runs, of course, on the node that considers itself the survivor, and is triggered by the cluster stack (the RHEL distributed lock manager, DLM, I think). From that moment onwards the failed node can boot as much as it likes; it will not have access to the LUNs.
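A minimal sketch of what such a fence helper could look like, assuming a Brocade-style switch CLI reachable over ssh. The switch name, the admin user and the `portdisable` syntax are placeholders, not anyone's actual setup; a real deployment would use a proper fence agent (e.g. fence_ifmib or a vendor-supplied one) wired into the cluster stack:

```shell
#!/bin/sh
# Hypothetical fence helper: disable the SAN switch port the failed
# node's HBA is cabled to, so the node cannot re-expose the LUNs by
# simply rebooting. Switch hostname, user and CLI are assumptions.

fence_san_port() {
    switch=$1
    port=$2
    cmd="ssh admin@$switch portdisable $port"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        # Print the command instead of touching hardware, for safe testing.
        echo "$cmd"
    else
        $cmd
    fi
}

# Dry-run demonstration: show what would be executed against port 7.
DRY_RUN=1
fence_san_port san-switch1 7
```

The survivor runs this against the peer's switch port; because the port stays administratively down, the fenced node can reboot freely without ever logging back into the fabric.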

How can something similar be achieved in this marvellous world of virtualization, where both NICs and HBAs have been virtualized?

Has anybody done this for virtualized HBAs?

Does anybody know of storm control, in the sense that all clustered VMs on a failed server will fail over?

This is for application code, not for Oracle RAC.
 
Old 03-05-2014, 04:48 PM   #2
dyasny
Member
 
Registered: Dec 2007
Location: Canada
Distribution: RHEL,Fedora
Posts: 995

Rep: Reputation: 115
Normally, those VMs don't have HBAs; all they have are hard drives, which are in fact LUNs, files or LVs on the SAN. So really, all you need to do is keep the standard auto-zoning practice with the hardware.

Having said that, there are technologies like PCI passthrough, VM-FEX, SR-IOV and so on that actually pass a physical or semi-physical HBA to a VM directly. In this case, you can really use that HBA's WWID and do the auto-zoning, just like you would on a regular host. There should be no real difference.
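For the passthrough case, the WWPN you would feed into the zoning config can be read from the guest's own sysfs, assuming the standard Linux fc_host layout; the formatting helper below is only illustrative:

```shell
#!/bin/sh
# Sketch: list the WWPN of each FC HBA (including a passed-through or
# SR-IOV vHBA) visible to this Linux guest, in the colon-separated form
# switch zoning CLIs usually expect. Uses the standard fc_host sysfs tree.

format_wwpn() {
    raw=${1#0x}                           # sysfs reports e.g. 0x10000000c9abcdef
    echo "$raw" | sed 's/../&:/g; s/:$//' # pair up hex digits with colons
}

for host in /sys/class/fc_host/host*; do
    [ -e "$host/port_name" ] || continue
    printf '%s %s\n' "${host##*/}" "$(format_wwpn "$(cat "$host/port_name")")"
done
```

Run inside the VM that owns the HBA; the loop simply prints nothing on a machine with no FC hosts.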

BTW, in normal modern systems, there should be no need for auto-zoning, because the cluster mechanism is the authority that handles mounts or LV activations on nodes, and the fact that a host booted up seeing a LUN doesn't mean it will mount and start using the LUN directly.
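That last point is what the classic HA-LVM setup enforces in lvm.conf: a node only activates volume groups that are explicitly listed or tagged for it, so merely seeing a LUN at boot changes nothing. A sketch, with placeholder VG and host names:

```
# /etc/lvm/lvm.conf (excerpt) -- HA-LVM with tags; names are examples.
# Only VGs listed here, or tagged with this node's hostname, are ever
# activated locally. A freshly booted node that merely *sees* a
# clustered LUN will not activate or mount anything on it.
activation {
    volume_list = [ "rootvg", "@node1.example.com" ]
}
```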
 
  

