LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Enterprise (http://www.linuxquestions.org/questions/linux-enterprise-47/)
-   -   Heartbeat cluster won't recognize other node, resource won't start. (http://www.linuxquestions.org/questions/linux-enterprise-47/heartbeat-cluster-wont-recognize-other-node-resource-wont-start-799140/)

slinx 03-31-2010 11:30 AM

Hello,

I followed the directions in this HowTo to a "T", except for some modifications for my environment, but it's not quite working.

DRBD itself is working, but I can't get Heartbeat to control it. I seem to have an error in my resource definitions, and I'm not finding the documentation clear on what I need to change.

I want the node to mount the DRBD device, then assign the virtual IP, then start MySQL and Apache. And I want the active node to STONITH the other node if it fails to get a heartbeat.

Here's crm_mon output:
Code:

admin-lab0 ~]$ sudo /usr/sbin/crm_mon
============
Last updated: Wed Mar 31 11:30:56 2010
Current DC: admin-lab0 (de820ffb-dab9-446c-ab5b-9291e5409a69)
2 Nodes configured.
1 Resources configured.
============

Node: admin-lab1 (c07cf70b-865c-41fb-98f7-9a25163c0825): OFFLINE
Node: admin-lab0 (de820ffb-dab9-446c-ab5b-9291e5409a69): online


Failed actions:
    drbddisk_mysql_start_0 (node=admin-lab0, call=6, rc=1): Error

It appears each node can only see itself. I can ping each node through the general interface, and through eth1, which is dedicated to DRBD:

Code:

admin-lab1 ~]$ sudo /usr/sbin/crm_mon
============
Last updated: Wed Mar 31 09:42:17 2010
Current DC: admin-lab1 (c07cf70b-865c-41fb-98f7-9a25163c0825)
2 Nodes configured.
1 Resources configured.
============

Node: admin-lab1 (c07cf70b-865c-41fb-98f7-9a25163c0825): online
Node: admin-lab0 (de820ffb-dab9-446c-ab5b-9291e5409a69): OFFLINE


Failed actions:
    drbddisk_mysql_start_0 (node=admin-lab1, call=6, rc=1): Error

I'm getting this error:

Code:

admin-lab0$ sudo /usr/sbin/crm_verify -L -VVV

crm_verify[13157]: 2010/03/31_10:58:16 info: main: =#=#=#=#= Getting XML =#=#=#=#=
crm_verify[13157]: 2010/03/31_10:58:16 info: main: Reading XML from: live cluster
crm_verify[13157]: 2010/03/31_10:58:16 notice: main: Required feature set: 2.0
crm_verify[13157]: 2010/03/31_10:58:16 info: determine_online_status: Node admin-lab0 is online
crm_verify[13157]: 2010/03/31_10:58:16 WARN: unpack_rsc_op: Processing failed op drbddisk_mysql_start_0 on admin-lab0: Error
crm_verify[13157]: 2010/03/31_10:58:16 WARN: unpack_rsc_op: Compatability handling for failed op drbddisk_mysql_start_0 on admin-lab0
crm_verify[13157]: 2010/03/31_10:58:16 notice: group_print: Resource Group: rg_mysql
crm_verify[13157]: 2010/03/31_10:58:16 notice: native_print:    drbddisk_mysql (heartbeat:drbddisk):  Stopped
crm_verify[13157]: 2010/03/31_10:58:16 notice: native_print:    fs_mysql      (heartbeat::ocf:Filesystem):    Stopped
crm_verify[13157]: 2010/03/31_10:58:16 notice: native_print:    ip_mysql      (heartbeat::ocf:IPaddr2):      Stopped
crm_verify[13157]: 2010/03/31_10:58:16 notice: native_print:    mysqld (lsb:mysqld):  Stopped
crm_verify[13157]: 2010/03/31_10:58:16 WARN: native_color: Resource drbddisk_mysql cannot run anywhere
crm_verify[13157]: 2010/03/31_10:58:16 WARN: native_color: Resource fs_mysql cannot run anywhere
crm_verify[13157]: 2010/03/31_10:58:16 WARN: native_color: Resource ip_mysql cannot run anywhere
crm_verify[13157]: 2010/03/31_10:58:16 WARN: native_color: Resource mysqld cannot run anywhere
Warnings found during check: config may not be valid

Additional debug output shows:

Code:

crm_verify[13156]: 2010/03/31_10:58:05 WARN: unpack_rsc_op: Processing failed op drbddisk_mysql_start_0 on admin-lab0: Error
crm_verify[13156]: 2010/03/31_10:58:05 WARN: unpack_rsc_op: Compatability handling for failed op drbddisk_mysql_start_0 on admin-lab0
crm_verify[13156]: 2010/03/31_10:58:05 notice: group_print: Resource Group: rg_mysql
crm_verify[13156]: 2010/03/31_10:58:05 notice: native_print:    drbddisk_mysql (heartbeat:drbddisk):  Stopped
crm_verify[13156]: 2010/03/31_10:58:05 notice: native_print:    fs_mysql      (heartbeat::ocf:Filesystem):    Stopped
crm_verify[13156]: 2010/03/31_10:58:05 notice: native_print:    ip_mysql      (heartbeat::ocf:IPaddr2):      Stopped
crm_verify[13156]: 2010/03/31_10:58:05 notice: native_print:    mysqld (lsb:mysqld):  Stopped
crm_verify[13156]: 2010/03/31_10:58:05 debug: native_print: Allocating: drbddisk_mysql  (heartbeat:drbddisk):  Stopped
crm_verify[13156]: 2010/03/31_10:58:05 debug: native_assign_node: Color drbddisk_mysql, Node[0] admin-lab1: 0
crm_verify[13156]: 2010/03/31_10:58:05 debug: native_assign_node: Color drbddisk_mysql, Node[1] admin-lab0: -1000000
crm_verify[13156]: 2010/03/31_10:58:05 debug: native_assign_node: All nodes for resource drbddisk_mysql are unavailable, unclean or shutting down
crm_verify[13156]: 2010/03/31_10:58:05 WARN: native_color: Resource drbddisk_mysql cannot run anywhere

Plus, it's not colocating the resources as I want:

Code:

crm_verify[13156]: 2010/03/31_10:58:05 debug: unpack_config: Default action timeout: 20s
crm_verify[13156]: 2010/03/31_10:58:05 debug: unpack_config: Default stickiness: 0
crm_verify[13156]: 2010/03/31_10:58:05 debug: unpack_config: Default failure stickiness: 0
crm_verify[13156]: 2010/03/31_10:58:05 debug: unpack_config: STONITH of failed nodes is disabled
crm_verify[13156]: 2010/03/31_10:58:05 debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
crm_verify[13156]: 2010/03/31_10:58:05 debug: unpack_config: On loss of CCM Quorum: Stop ALL resources

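(Side note on the colocation worry: the <constraints/> section of the CIB further down is empty. With ordered="true" and collocated="true" on the group that is normally enough, but explicit Heartbeat 2.x constraints, sketched here with made-up ids, would look like:)

```xml
<!-- Sketch only: the ids are invented; from/to reference resource ids in the CIB -->
<constraints>
  <!-- start fs_mysql only after drbddisk_mysql has started -->
  <rsc_order id="order_fs_after_drbd" from="fs_mysql" action="start"
             to="drbddisk_mysql" to_action="start" type="after"/>
  <!-- keep fs_mysql on the same node as drbddisk_mysql -->
  <rsc_colocation id="colo_fs_with_drbd" from="fs_mysql"
                  to="drbddisk_mysql" score="INFINITY"/>
</constraints>
```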
Then I have this error in /var/log/messages, so I know I have something wrong in my configuration:
Code:

Mar 30 23:55:42 admin-lab0 lrmd: [11898]: info: rsc:drbddisk_mysql: start
Mar 30 23:55:42 admin-lab0 lrmd: [11898]: info: RA output: (drbddisk_mysql:start:stderr) 'mysql' not defined in your config.
Mar 30 23:55:47 admin-lab0 crmd: [11901]: ERROR: process_lrm_event: LRM operation drbddisk_mysql_start_0 (call=6, rc=1) Error unknown error
Mar 30 23:55:47 admin-lab0 crmd: [11901]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_IPC_MESSAGE origin=route_message ]
Mar 30 23:55:47 admin-lab0 crmd: [11901]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Mar 30 23:55:47 admin-lab0 crmd: [11901]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
Mar 30 23:55:47 admin-lab0 crmd: [11901]: info: do_lrm_rsc_op: Performing op=drbddisk_mysql_stop_0 key=1:1:c79820db-a2ad-4d41-8bcc-f3621d5a3414)
Mar 30 23:55:47 admin-lab0 lrmd: [11898]: info: rsc:drbddisk_mysql: stop
Mar 30 23:55:47 admin-lab0 lrmd: [11898]: info: RA output: (drbddisk_mysql:stop:stderr) 'mysql' not defined in your config.
Mar 30 23:55:47 admin-lab0 lrmd: [11898]: info: RA output: (drbddisk_mysql:stop:stderr) /sbin/drbdadm secondary mysql: exit code 3, mapping to 0
Mar 30 23:55:47 admin-lab0 crmd: [11901]: info: process_lrm_event: LRM operation drbddisk_mysql_stop_0 (call=7, rc=0) complete
Mar 30 23:55:47 admin-lab0 tengine: [11907]: info: notify_crmd: Transition 1 status: te_complete - <null>
Mar 30 23:55:47 admin-lab0 crmd: [11901]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]

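(That "'mysql' not defined in your config" message comes from the drbddisk agent running /sbin/drbdadm against a DRBD resource named mysql. Since DRBD itself works, the resource is probably defined in /etc/drbd.conf under a different name, and either the drbddisk parameter or drbd.conf needs to change to match. For reference, a drbd.conf stanza defining a resource actually named mysql might look like this; the backing disk is an assumption, and the addresses come from the DRBD connections shown later in the thread:)

```
# /etc/drbd.conf -- sketch only; /dev/sdb1 is an assumed backing device
resource mysql {
  protocol C;
  on admin-lab0 {
    device    /dev/drbd0;          # matches the fs_mysql device in the CIB
    disk      /dev/sdb1;           # assumption
    address   192.168.0.1:7788;    # DRBD link on eth1
    meta-disk internal;
  }
  on admin-lab1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;           # assumption
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}
```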
What I am trying to figure out is where I should define resources. In /etc/ha.d/ha.cf? In /etc/ha.d/haresources? In the cib.xml file with cibadmin? I'm stumped, and the documentation isn't very clear on any of them.
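(For reference: with crm respawn set in ha.cf, resources belong in the CIB; ha.cf only handles membership and heartbeating, and haresources is ignored. One hedged way to load a resource definition with cibadmin, using a made-up filename:)

```shell
# resources.xml is a made-up filename holding a <resources> fragment
sudo /usr/sbin/cibadmin -C -o resources -x resources.xml

# read back the live resources section to verify it loaded
sudo /usr/sbin/cibadmin -Q -o resources
```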

Here are my configs:

Linux admin-lab0 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:37:14 EDT 2010 i686 i686 i386 GNU/Linux
CentOS release 5.4 (Final)

drbd-8.3.7-1
drbd-bash-completion-8.3.7-1
drbd-heartbeat-8.3.7-1
drbd-km-2.6.18_164.15.1.el5-8.3.7-12
drbd-pacemaker-8.3.7-1
drbd-udev-8.3.7-1
drbd-utils-8.3.7-1
drbd-xen-8.3.7-1
heartbeat-2.1.3-3.el5.centos
heartbeat-pils-2.1.3-3.el5.centos
heartbeat-stonith-2.1.3-3.el5.centos

/etc/ha.d/ha.cf
Code:

keepalive      2
deadtime        30
warntime        10
initdead        120
bcast eth1
ucast eth0 10.98.4.90
ucast eth0 10.98.4.91
node            admin-lab0
node            admin-lab1
keepalive      2
stonith_host external/ipmi admin-lab0 10.98.5.76 root -----
stonith_host external/ipmi admin-lab1 10.98.6.224 root -----
crm            respawn

/etc/ha.d/haresources
Code:

admin-lab3 IPaddr::10.98.4.93/16/eth0 http mysql
And here is my CIB:
Code:

admin-lab0 ~]$ sudo /usr/sbin/cibadmin -Q
 <cib generated="true" admin_epoch="0" epoch="3" num_updates="19" have_quorum="true" ignore_dtd="false" num_peers="1" cib_feature_revision="2.0" cib-last-written="Tue Mar 30 18:30:39 2010" ccm_transition="1" dc_uuid="de820ffb-dab9-446c-ab5b-9291e5409a69">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <attributes>
          <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.3-node: 552305612591183b1628baa5bc6e903e0f1e26a3"/>
        </attributes>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="c07cf70b-865c-41fb-98f7-9a25163c0825" uname="admin-lab1" type="normal"/>
      <node id="de820ffb-dab9-446c-ab5b-9291e5409a69" uname="admin-lab0" type="normal"/>
    </nodes>
    <resources>
      <group ordered="true" collocated="true" id="rg_mysql">
        <primitive class="heartbeat" type="drbddisk" provider="heartbeat" id="drbddisk_mysql">
          <meta_attributes id="7aa6d6e9-2ddc-4ea9-8298-0884e3e6f53f">
            <attributes>
              <nvpair name="target_role" value="started" id="29c914c0-42d3-47d1-be82-0349fdd8029a"/>
            </attributes>
          </meta_attributes>
          <instance_attributes id="69a45069-a2e2-4267-a8bd-a434b96c463d">
            <attributes>
              <nvpair name="1" value="mysql" id="fe2dd16a-b4bd-400f-8877-ec8002aa4333"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="ocf" type="Filesystem" provider="heartbeat" id="fs_mysql">
          <instance_attributes id="41e504aa-1452-4038-830a-edf0db211880">
            <attributes>
              <nvpair name="device" value="/dev/drbd0" id="88156574-9ab5-4806-b936-8d517abcfa8a"/>
              <nvpair name="directory" value="/var/lib/mysql" id="1941ba57-44f3-4791-9a99-474bd173ec25"/>
              <nvpair name="type" value="ext3" id="b2caf7bb-ab70-459c-9e91-7ecf2b64221c"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="ocf" type="IPaddr2" provider="heartbeat" id="ip_mysql">
          <instance_attributes id="4cffc9d1-ab51-45e4-a98f-4d3edf31dd2d">
            <attributes>
              <nvpair name="ip" value="10.98.4.93" id="b12c1d30-5164-4245-a426-a0fe3d14dd86"/>
              <nvpair name="cidr_netmask" value="16" id="83129818-9279-440b-97b8-d23bf72a8832"/>
              <nvpair name="nic" value="eth0" id="b3ca5ef5-e2a0-4f1f-b7cd-4f2c178552b3"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="lsb" type="mysqld" provider="heartbeat" id="mysqld"/>
      </group>
    </resources>
    <constraints/>
  </configuration>
  <status>
    <node_state id="de820ffb-dab9-446c-ab5b-9291e5409a69" uname="admin-lab0" crmd="online" crm-debug-origin="do_update_resource" shutdown="0" in_ccm="true" ha="active" join="member" expected="member">
      <lrm id="de820ffb-dab9-446c-ab5b-9291e5409a69">
        <lrm_resources>
          <lrm_resource id="drbddisk_mysql" type="drbddisk" class="heartbeat" provider="heartbeat">
            <lrm_rsc_op id="drbddisk_mysql_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition_key="3:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" transition_magic="0:7;3:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" call_id="2" crm_feature_set="2.0" rc_code="7" op_status="0" interval="0" op_digest="335708e636e88faff6fd969f5e0be283"/>
            <lrm_rsc_op id="drbddisk_mysql_start_0" operation="start" crm-debug-origin="do_update_resource" transition_key="8:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" transition_magic="4:1;8:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" call_id="6" crm_feature_set="2.0" rc_code="1" op_status="4" interval="0" op_digest="335708e636e88faff6fd969f5e0be283"/>
            <lrm_rsc_op id="drbddisk_mysql_stop_0" operation="stop" crm-debug-origin="do_update_resource" transition_key="1:1:c79820db-a2ad-4d41-8bcc-f3621d5a3414" transition_magic="0:0;1:1:c79820db-a2ad-4d41-8bcc-f3621d5a3414" call_id="7" crm_feature_set="2.0" rc_code="0" op_status="0" interval="0" op_digest="335708e636e88faff6fd969f5e0be283"/>
          </lrm_resource>
          <lrm_resource id="fs_mysql" type="Filesystem" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="fs_mysql_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition_key="4:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" transition_magic="0:7;4:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" call_id="3" crm_feature_set="2.0" rc_code="7" op_status="0" interval="0" op_digest="a11cf3a35e6400332669268471abdea5"/>
          </lrm_resource>
          <lrm_resource id="ip_mysql" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="ip_mysql_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition_key="5:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" transition_magic="0:7;5:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" call_id="4" crm_feature_set="2.0" rc_code="7" op_status="0" interval="0" op_digest="4cc1f203e540e0dc8fc723f94e4d4a17"/>
          </lrm_resource>
          <lrm_resource id="mysqld" type="mysqld" class="lsb" provider="heartbeat">
            <lrm_rsc_op id="mysqld_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition_key="6:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" transition_magic="0:7;6:0:c79820db-a2ad-4d41-8bcc-f3621d5a3414" call_id="5" crm_feature_set="2.0" rc_code="7" op_status="0" interval="0" op_digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
      <transient_attributes id="de820ffb-dab9-446c-ab5b-9291e5409a69">
        <instance_attributes id="status-de820ffb-dab9-446c-ab5b-9291e5409a69">
          <attributes>
            <nvpair id="status-de820ffb-dab9-446c-ab5b-9291e5409a69-probe_complete" name="probe_complete" value="true"/>
            <nvpair id="status-de820ffb-dab9-446c-ab5b-9291e5409a69-fail-count-drbddisk_mysql" name="fail-count-drbddisk_mysql" value="1"/>
          </attributes>
        </instance_attributes>
      </transient_attributes>
    </node_state>
  </status>
 </cib>

I'm sure I could figure it out if I knew where to go. Can someone please help me identify which configuration I need to adjust? It also looks like STONITH is not enabled - how do I enable that? The documentation for these tools is just terrible, full of typos and errors.
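(On enabling STONITH: with the CRM active, the stonith_host lines in ha.cf are not used; STONITH is instead switched on via a cluster property and the devices configured as stonith-class resources. A sketch of the property change, assuming Heartbeat 2.1.x crm_attribute syntax:)

```shell
# Enable STONITH cluster-wide; the external/ipmi devices still need to be
# added as stonith-class resources afterwards
sudo /usr/sbin/crm_attribute -t crm_config -n stonith-enabled -v true
```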

OK, and it looks like the node is trying to talk to the other one, but the heartbeat traffic (UDP 694) is not getting a connection:

Code:

Source                                  Destination                              Proto  State        TTL   
10.98.2.20:59155                        10.98.4.91:22                            tcp    ESTABLISHED  119:59:59
10.98.4.91:60001                        10.98.4.91:694                          udp                    0:00:29
10.98.4.91:37433                        10.98.4.90:694                          udp                    0:00:29
192.168.0.1:36230                        192.168.0.2:7788                        tcp    ESTABLISHED  119:59:52
192.168.0.2:40063                        192.168.0.1:7788                        tcp    ESTABLISHED  107:27:23
192.168.0.2:36522                        192.168.0.3:694                          udp                    0:00:29

Ahh... OK, I added a firewall rule to allow the UDP traffic; let's see if that helps.
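(For reference, the rule in question is along these lines; iptables is an assumption, and 694/udp is the heartbeat port visible in the connection table above:)

```shell
# allow heartbeat's UDP traffic on port 694
sudo iptables -I INPUT -p udp --dport 694 -j ACCEPT
sudo service iptables save   # persist across reboots on CentOS 5
```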
I also saw this when I reloaded the config, so I guess I don't need haresources:
Code:

heartbeat[9295]: 2010/03/31_11:46:30 WARN: File /etc/ha.d/haresources exists.
heartbeat[9295]: 2010/03/31_11:46:30 WARN: This file is not used because crm is enabled


OK, I got hb_gui and used that... I still can't get my resources to load. Now I'm getting this error:

Code:

Mar 31 14:54:39 admin-lab0 crmd: [16023]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE o
rigin=do_cl_join_finalize_respond ]
Mar 31 14:54:41 admin-lab0 crmd: [16023]: info: do_lrm_rsc_op: Performing op=fs_mysql_monitor_0 key=7:0:cca4c1fb-d252-4050-b27a-4de767dcbb10)
Mar 31 14:54:41 admin-lab0 lrmd: [16020]: info: rsc:fs_mysql: monitor
Mar 31 14:54:41 admin-lab0 crmd: [16023]: info: do_lrm_rsc_op: Performing op=ip_mysql_monitor_0 key=8:0:cca4c1fb-d252-4050-b27a-4de767dcbb10)
Mar 31 14:54:41 admin-lab0 lrmd: [16020]: info: rsc:ip_mysql: monitor
Mar 31 14:54:41 admin-lab0 crmd: [16023]: info: do_lrm_rsc_op: Performing op=mysqld_monitor_0 key=9:0:cca4c1fb-d252-4050-b27a-4de767dcbb10)
Mar 31 14:54:41 admin-lab0 lrmd: [16020]: info: rsc:mysqld: monitor
Mar 31 14:54:41 admin-lab0 crmd: [16023]: info: process_lrm_event: LRM operation mysqld_monitor_0 (call=4, rc=7) complete
Mar 31 14:54:41 admin-lab0 crmd: [16023]: info: process_lrm_event: LRM operation fs_mysql_monitor_0 (call=2, rc=7) complete
Mar 31 14:54:41 admin-lab0 crmd: [16023]: info: process_lrm_event: LRM operation ip_mysql_monitor_0 (call=3, rc=7) complete
Mar 31 14:54:42 admin-lab0 cibadmin: [16127]: info: Invoked: /usr/sbin/cibadmin -Q
Mar 31 14:54:42 admin-lab0 crmd: [16023]: info: do_lrm_rsc_op: Performing op=fs_mysql_start_0 key=6:2:cca4c1fb-d252-4050-b27a-4de767dcbb10)
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: rsc:fs_mysql: start
Mar 31 14:54:42 admin-lab0 Filesystem[16128]: [16158]: INFO: Running start for /dev/drbd0 on /data
Mar 31 14:54:42 admin-lab0 Filesystem[16128]: [16163]: INFO: Starting filesystem check on /dev/drbd0
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:start:stdout) fsck 1.39 (29-May-2006)
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:start:stderr) fsck.ext3
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:start:stderr) :
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:start:stderr) Read-only file system
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:start:stderr)
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:start:stderr) while trying to open /dev/drbd0
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:start:stderr)
Mar 31 14:54:42 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:start:stdout) Disk write-protected; use the -n option to do a read-only check of the device.
Mar 31 14:54:42 admin-lab0 Filesystem[16128]: [16166]: ERROR: Couldn't sucessfully fsck filesystem for /dev/drbd0
Mar 31 14:54:42 admin-lab0 crmd: [16023]: ERROR: process_lrm_event: LRM operation fs_mysql_start_0 (call=5, rc=1) Error unknown error
Mar 31 14:54:44 admin-lab0 crmd: [16023]: info: do_lrm_rsc_op: Performing op=fs_mysql_stop_0 key=1:3:cca4c1fb-d252-4050-b27a-4de767dcbb10)
Mar 31 14:54:44 admin-lab0 lrmd: [16020]: info: rsc:fs_mysql: stop
Mar 31 14:54:44 admin-lab0 Filesystem[16175]: [16205]: INFO: Running stop for /dev/drbd0 on /data
Mar 31 14:54:44 admin-lab0 lrmd: [16020]: info: RA output: (fs_mysql:stop:stderr) /dev/drbd0: Wrong medium type
Mar 31 14:54:44 admin-lab0 crmd: [16023]: info: process_lrm_event: LRM operation fs_mysql_stop_0 (call=6, rc=0) complete
Mar 31 14:54:49 admin-lab0 cibadmin: [16216]: info: Invoked: /usr/sbin/cibadmin -Q
Mar 31 14:55:23 admin-lab0 pengine: [16226]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Mar 31 14:55:24 admin-lab0 crmd: [16227]: info: main: CRM Hg Version: node: 552305612591183b1628baa5bc6e903e0f1e26a3
Mar 31 14:55:31 admin-lab0 cibadmin: [16228]: info: Invoked: /usr/sbin/cibadmin -Q
Mar 31 14:55:42 admin-lab0 crm_mon: [16229]: info: main: Starting crm_mon
Mar 31 14:55:42 admin-lab0 crm_mon: [16229]: notice: mon_timer_popped: Updating...
Mar 31 14:55:42 admin-lab0 crm_mon: [16229]: info: determine_online_status: Node admin-lab0 is online
Mar 31 14:55:42 admin-lab0 crm_mon: [16229]: WARN: unpack_rsc_op: Processing failed op fs_mysql_start_0 on admin-lab0: Error
Mar 31 14:55:42 admin-lab0 crm_mon: [16229]: WARN: unpack_rsc_op: Compatability handling for failed op fs_mysql_start_0 on admin-lab0
Mar 31 14:55:42 admin-lab0 crm_mon: [16229]: info: determine_online_status: Node admin-lab1 is online
Mar 31 14:55:42 admin-lab0 crm_mon: [16229]: WARN: unpack_rsc_op: Processing failed op fs_mysql_start_0 on admin-lab1: Error
Mar 31 14:55:42 admin-lab0 crm_mon: [16229]: WARN: unpack_rsc_op: Compatability handling for failed op fs_mysql_start_0 on admin-lab1

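(The "Read-only file system" fsck failure above is the classic symptom of touching /dev/drbd0 while the node is still DRBD Secondary; the drbddisk resource has to promote the node to Primary before Filesystem can run. The state can be checked by hand, assuming the DRBD resource is named mysql as elsewhere in the thread:)

```shell
# ro: field shows the roles, e.g. Primary/Secondary
cat /proc/drbd

# or ask for this resource's role directly
sudo /sbin/drbdadm role mysql

# promote manually only when testing outside the cluster
sudo /sbin/drbdadm primary mysql
```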

jns 04-02-2010 09:41 AM

What name are you using to ping the other node? Make sure whatever you have in your config matches exactly what is in /etc/hosts, and that you can ping using that exact name (e.g. FQDN or short hostname), not just the IP address. I'm still looking, though.

slinx 04-08-2010 11:48 PM

Jessica, thanks for replying. I talked to someone who suggested using multicast, and that worked: the nodes can now see each other, but I still can't get the resources to start.

