LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Red Hat (https://www.linuxquestions.org/questions/red-hat-31/)
-   -   Guest fencing on a RHEL KVM host not working (https://www.linuxquestions.org/questions/red-hat-31/guest-fencing-on-a-rhel-kvm-host-not-working-4175434737/)

samengr 10-30-2012 07:11 AM

Guest fencing on a RHEL KVM host not working
 
Hi Everyone,

I have set up a three-node Red Hat cluster and I am having issues configuring guest fencing on a RHEL KVM host.

OS:
---
[root@server2 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)


Setup
-----

4 guests

server1
server2
server3
server4

server{1,2,3} are part of the cluster.
server4 is the storage target, and luci is installed on it.

Packages for guest fencing are installed on the host
-------

$ yum list fence*
Plugin "product-id" can't be imported
Plugin "subscription-manager" can't be imported
Loaded plugins: refresh-packagekit, rhnplugin
*Note* Red Hat Network repositories are not listed below. You must run this command as root to access RHN repositories.
Installed Packages
fence-virt.x86_64 0.2.3-9.el6 @/fence-virt-0.2.3-9.el6.x86_64
fence-virtd.x86_64 0.2.3-5.1.el6_2 @rhel-x86_64-client-6
fence-virtd-libvirt.x86_64 0.2.3-5.1.el6_2 @rhel-x86_64-client-6
fence-virtd-multicast.x86_64 0.2.3-5.1.el6_2 @rhel-x86_64-client-6
[root@slm Downloads]$

Key file
------

It was created with:
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4096 count=1
and copied to all guests: server{1,2,3}:/etc/cluster/
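For reference, the key creation and distribution can be sketched like this (assuming root SSH access from the host to the guests; hostnames are the ones from this setup):

```shell
# On the KVM host: generate a 4 KiB random shared key
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4096 count=1
chmod 600 /etc/cluster/fence_xvm.key

# Copy the same key to every cluster guest
for g in server1 server2 server3; do
    ssh root@$g mkdir -p /etc/cluster
    scp /etc/cluster/fence_xvm.key root@$g:/etc/cluster/
done
```

The same key file must be byte-identical on the host and every guest, since fence_xvm uses it to authenticate requests to fence_virtd.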


Fence config file on host
-------------------------

# cat /etc/fence_virt.conf
backends {
	libvirt {
		uri = "qemu:///system";
	}
}

listeners {
	multicast {
		interface = "virbr0";
		port = "1229";
		family = "ipv4";
		address = "225.0.0.12";
		key_file = "/etc/cluster/fence_xvm.key";
	}
}

fence_virtd {
	module_path = "/usr/lib64/fence-virt";
	backend = "libvirt";
	listener = "multicast";
}


The fence_virtd service is running.
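For completeness, this is roughly how I check the daemon (a sketch; the port is the one from my fence_virt.conf):

```shell
# Check the daemon is running and enabled at boot
service fence_virtd status
chkconfig fence_virtd on

# Confirm something is listening for multicast requests on UDP 1229
netstat -ulpn | grep 1229
```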

# fence_xvm -o list
Failed to add multicast membership to transmit socket 225.0.0.12: No such device
server1 62d023e4-48d2-f3ed-6e5a-142238406c0a on
server2 e83fdd63-16af-9476-fdf8-aedefb7c606d on
server3 ca25dfc5-63f9-c40a-a5ab-e86fdef22e41 on
server4 2c33cc2e-3adf-98c5-8502-a691775242cd on
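Note the "No such device" warning above. As far as I understand, it usually means the interface chosen for the multicast transmit socket has no multicast route. On the machine showing the warning, something like the following can be used to inspect and, if needed, pin a route (virbr0 is the bridge from my fence_virt.conf; adjust to your interface — this is a diagnostic sketch, not a confirmed fix):

```shell
# Show which interfaces have multicast enabled and which groups are joined
ip maddr show

# Check how the kernel would route the fence multicast address
ip route get 225.0.0.12

# If it resolves to the wrong interface, add an explicit multicast route
ip route add 224.0.0.0/4 dev virbr0
```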


On the host, if I run the following command, server3 reboots fine:
# fence_xvm -o reboot -H server3 -t 5 -ddddddd



On the client side:
--------------

I followed these steps:

Once the host configuration is complete, fencing must be set up on the guest cluster; this can be done from the luci web interface.
Select your cluster, go to the Fence Devices tab, click Add, select "Fence virt (Multicast Mode)" as the instance type, choose a name (fencekvm) and click Submit.
Then go back to the cluster main page and click on the first node, look for the Fence Devices section and click Add Fence Method. Choose the method name (fencekvm), then click Add Fence Instance and choose the fence device created previously. Repeat for the other nodes.
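The same configuration can also be done from the command line with ccs instead of the luci web interface; a rough sketch (run from one node, which will prompt for the ricci password):

```shell
# Add the fence_xvm fence device to the cluster config
ccs -h server1.private.example.com --addfencedev fencekvm agent=fence_xvm

# For each node: add a fence method and an instance pointing at its libvirt domain
for n in 1 2 3; do
    ccs -h server1.private.example.com --addmethod fencekvm server$n.private.example.com
    ccs -h server1.private.example.com --addfenceinst fencekvm server$n.private.example.com fencekvm domain=server$n
done

# Push the updated cluster.conf to all nodes and activate it
ccs -h server1.private.example.com --sync --activate
```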

cluster.conf from one of the guest
----------------------


[root@server1 rhel]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="49" name="first-cluster">
  <clusternodes>
    <clusternode name="server1.private.example.com" nodeid="1">
      <fence>
        <method name="fencekvm">
          <device domain="server1" name="fencekvm"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="server2.private.example.com" nodeid="2">
      <fence>
        <method name="fencekvm">
          <device domain="server2" name="fencekvm"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="server3.private.example.com" nodeid="3">
      <fence>
        <method name="fencekvm">
          <device domain="server3" name="fencekvm"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <failoverdomains>
      <failoverdomain name="fo_webserver" ordered="1">
        <failoverdomainnode name="server1.private.example.com" priority="1"/>
        <failoverdomainnode name="server2.private.example.com" priority="2"/>
        <failoverdomainnode name="server3.private.example.com" priority="3"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <apache config_file="conf/httpd.conf" name="Apache" server_root="/etc/httpd" shutdown_wait="0"/>
      <script file="/etc/init.d/httpd" name="httpd"/>
      <fs device="/dev/vg_storage/shared" force_unmount="on" fsid="21430" mountpoint="/var/www/html/" name="rs_fs_web_gfs2" quick_status="on" self_fence="on"/>
      <ip address="192.168.122.10" sleeptime="60"/>
    </resources>
  </rm>
  <fencedevices>
    <fencedevice agent="fence_xvm" name="fencekvm"/>
  </fencedevices>
</cluster>
[root@server1 rhel]#


I have created the resources and failover domain that you can see in the config, but so far I haven't added them to a service group.



The /etc/hosts file is the same on all servers.

[root@server2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain

10.10.72.110 server1.public.example.com server1
192.168.122.110 server1.private.example.com server1
10.10.72.120 server2.public.example.com server2
192.168.122.120 server2.private.example.com server2
10.10.72.130 server3.public.example.com server3
192.168.122.130 server3.private.example.com server3
10.10.72.140 server4.public.example.com server4
192.168.122.140 server4.private.example.com server4


My problem is that when I test fencing from one guest against another node, it doesn't work.

[root@server2 ~]# fence_node server3.private.example.com -vv
fence server3.private.example.com dev 0.0 agent fence_xvm result: error from agent
agent args: domain=server3 nodename=server3.private.example.com agent=fence_xvm
fence server3.private.example.com failed
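Since fence_xvm works on the host but fails from the guests, a useful intermediate test is to run fence_xvm directly on a guest, bypassing fenced; a sketch (assuming the key is already in /etc/cluster on the guest):

```shell
# From server2: can the guest reach fence_virtd on the host at all?
fence_xvm -o list

# If that times out, check that the guest firewall allows the reply
# and that the guest has a multicast route on its cluster interface
iptables -L -n | grep 1229
ip route get 225.0.0.12
```

If `fence_xvm -o list` times out on the guest, the problem is the multicast path between guest and host (interface, route, or iptables), not the cluster.conf fencing configuration.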

In /var/log/messages:
Oct 30 09:48:09 server2 fence_node[5509]: fence server3.private.example.com failed


Can anybody help resolve this issue? Let me know if you need more output.

Many thanks in advance.

Sam

camerabambai 01-09-2013 02:34 PM

Same thing for me.
I configured fence_virtd and iptables correctly on the host,
and fence_xvm and iptables correctly on the guests.
But fencing from guest to guest doesn't work.
No solution from googling, no how-tos...
:(

