Hello,
I am trying to set up a cluster on CentOS by following the instructions on the Pacemaker site (http://clusterlabs.org/mediawiki/ima...n_Fedora11.pdf).
The cluster is working fine at this point aside from DRBD. When Pacemaker attempts to mount the DRBD-backed filesystem, the start fails with an error saying that the device is already mounted or the mount point is busy. As far as I can tell, neither is the case, but I'm clearly doing something wrong.
Below are my configuration files and other relevant information; I have also attached the full log generated when corosync starts up. Googling this error turned up only a handful of results and no leads, so my guess is that I'm missing something fairly obvious. Any help would be greatly appreciated!
Here is the error I'm getting:
Code:
Mar 19 14:27:00 yaaserver1 lrmd: [3038]: info: rsc:DatastoreFS:13: start
Mar 19 14:27:00 yaaserver1 Filesystem[3520]: INFO: Running start for /dev/mapper/VolGroup00-drbd on /datastore
Mar 19 14:27:00 yaaserver1 lrmd: [3038]: info: RA output: (DatastoreFS:start:stderr) 2010/03/19_14:27:00 INFO: Running start for /dev/mapper/VolGroup00-drbd on /datastore
Mar 19 14:27:00 yaaserver1 lrmd: [3038]: info: RA output: (DatastoreFS:start:stderr) mount: /dev/mapper/VolGroup00-drbd already mounted or /datastore busy
Mar 19 14:27:00 yaaserver1 Filesystem[3520]: ERROR: Couldn't mount filesystem /dev/mapper/VolGroup00-drbd on /datastore
Mar 19 14:27:00 yaaserver1 lrmd: [3038]: info: RA output: (DatastoreFS:start:stderr) 2010/03/19_14:27:00 ERROR: Couldn't mount filesystem /dev/mapper/VolGroup00-drbd on /datastore
Mar 19 14:27:00 yaaserver1 lrmd: [3038]: WARN: Managed DatastoreFS:start process 3520 exited with return code 1.
Mar 19 14:27:01 yaaserver1 crmd: [3041]: info: process_lrm_event: LRM operation DatastoreFS_start_0 (call=13, rc=1, cib-update=43, confirmed=true) unknown error
Mar 19 14:27:01 yaaserver1 crmd: [3041]: WARN: status_from_rc: Action 37 (DatastoreFS_start_0) on yaaserver1 failed (target: 0 vs. rc: 1): Error
Here is the output of crm_mon:
Code:
============
Last updated: Fri Mar 19 14:46:27 2010
Stack: openais
Current DC: yaaserver1 - partition with quorum
Version: 1.0.7-d3fa20fc76c7947d6de66db7e52526dc6bd7d782
2 Nodes configured, 2 expected votes
4 Resources configured.
============
Online: [ yaaserver1 yaaserver2 ]
 ClusterIP     (ocf::heartbeat:IPaddr2):       Started yaaserver1
 Master/Slave Set: DatastoreDataClone
     Masters: [ yaaserver2 ]
     Slaves: [ yaaserver1 ]

Failed actions:
    DatastoreFS_start_0 (node=yaaserver1, call=13, rc=1, status=complete): unknown error
    DatastoreFS_start_0 (node=yaaserver2, call=14, rc=1, status=complete): unknown error
Here is my Pacemaker configuration:
Code:
[root@yaaserver1 /]# crm configure show
node yaaserver1
node yaaserver2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip="10.0.0.50" cidr_netmask="32" \
        op monitor interval="15s"
primitive DatastoreData ocf:linbit:drbd \
        params drbd_resource="datastore" \
        op monitor interval="60"
primitive DatastoreFS ocf:heartbeat:Filesystem \
        params device="/dev/mapper/VolGroup00-drbd" directory="/datastore" fstype="ext3"
primitive WebSite ocf:heartbeat:apache \
        params configfile="/etc/httpd/conf/httpd.conf" \
        op monitor interval="20s"
ms DatastoreDataClone DatastoreData \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation WebSite-with-DatastoreFS inf: WebSite DatastoreFS
colocation fs_on_drbd inf: DatastoreFS DatastoreDataClone:Master
colocation website-with-ip inf: WebSite ClusterIP
order DatastoreFS-after-DatastoreData inf: DatastoreDataClone:promote DatastoreFS:start
order WebSite-after-DatastoreFS inf: DatastoreFS WebSite
property $id="cib-bootstrap-options" \
        dc-version="1.0.7-d3fa20fc76c7947d6de66db7e52526dc6bd7d782" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
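In case it helps rule out the configuration itself, this is how I understand it can be sanity-checked (just a sketch: crm_verify ships with Pacemaker, -L checks the live cluster, and -V makes it verbose):
Code:

```shell
# Validate the running cluster configuration; crm_verify prints nothing on
# success. This only does anything useful on an actual cluster node.
if command -v crm_verify >/dev/null 2>&1; then
    crm_verify -L -V || echo "crm_verify reported issues (or there is no live cluster to check)"
else
    echo "crm_verify not installed (Pacemaker tools missing)"
fi
```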
Here is my DRBD configuration:
Code:
[root@yaaserver1 /]# cat /etc/drbd.conf
global {
    usage-count yes;
}
common {
    protocol C;
}
resource datastore {
    meta-disk internal;
    device    /dev/drbd1;
    syncer {
        verify-alg sha1;
        rate 40M;
    }
    net {
        allow-two-primaries;
    }
    on yaaserver1 {
        disk    /dev/mapper/VolGroup00-drbd;
        address 10.0.1.51:7789;
    }
    on yaaserver2 {
        disk    /dev/mapper/VolGroup00-drbd;
        address 10.0.1.52:7789;
    }
}
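As a sanity check on the file itself, my understanding is that drbdadm can re-emit what it parsed, so a syntax problem in /etc/drbd.conf would show up here (a sketch only; drbdadm is part of the DRBD userland tools, and "datastore" is the resource name from the config above):
Code:

```shell
# Ask DRBD's admin tool to parse /etc/drbd.conf and dump the resource back
# out; a parse error at this step would point at a configuration problem.
if command -v drbdadm >/dev/null 2>&1; then
    drbdadm dump datastore || echo "drbdadm could not parse or find resource datastore"
else
    echo "drbdadm not installed"
fi
```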
Here is the output of mount:
Code:
[root@yaaserver1 /]# mount
/dev/mapper/VolGroup00-os on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
The /datastore mount point exists and is empty:
Code:
[root@yaaserver1 /]# cd /datastore
[root@yaaserver1 datastore]# ls
[root@yaaserver1 datastore]#
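For reference, these are the kinds of checks I know of for confirming that nothing is mounted on, or holding open, the mount point (a sketch; the MP variable is just shorthand here, and fuser comes from the psmisc package — lsof would work equally well):
Code:

```shell
MP=/datastore    # the mount point in question
# 1. Is anything mounted there? Grep the mount table for the exact path.
if mount | grep -q " on ${MP} "; then
    echo "something is mounted on ${MP}"
else
    echo "nothing mounted on ${MP}"
fi
# 2. Is any process using it? fuser lists PIDs holding the path, or nothing if it is free.
fuser -vm "${MP}" 2>/dev/null || echo "no process is using ${MP}"
```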
DRBD is not running on startup:
Code:
[root@yaaserver1 datastore]# chkconfig --list drbd
service drbd supports chkconfig, but is not referenced in any runlevel (run 'chkconfig --add drbd')
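Finally, for DRBD's state as the kernel sees it, /proc/drbd reports connection state (cs:), roles (ro:) and disk state (ds:) per device once the module is loaded (again just a sketch — the actual output would depend on the node):
Code:

```shell
# /proc/drbd exists once the drbd kernel module is loaded.
if [ -e /proc/drbd ]; then
    cat /proc/drbd
else
    echo "drbd kernel module not loaded"
fi
# drbdadm role datastore   # would print e.g. Primary/Secondary once running
```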