smbdie 07-05-2011 12:31 PM

DRBD + GFS2 on CentOS 5.4 cman problem
 
Hello, I'm using this how-to http://wiki.virtastic.com/display/ho...+on+CentOS+5.4 to set up a cluster on 2 nodes.

Everything installed successfully; here are my drbd.conf and cluster.conf:
DRBD:
Code:

resource r0 {
        # NOTE: the top of the posted config was cut off; this opening line
        # is reconstructed, and the resource name "r0" is an assumption.

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
                become-primary-on both;
        }

        disk {
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
                on-io-error detach;
                fencing dont-care;
        }

        net {
                # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
                # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }


        syncer {
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
                rate 3M;
                #group 1;
                al-extents 257;
        }
        on node1 {
                device    /dev/drbd0;
                disk      /dev/cciss/c0d1p2;
                address  10.10.77.11:7789;
                meta-disk /dev/cciss/c0d1p3[0];
        }
        on node2 {
                device    /dev/drbd0;
                disk      /dev/sdd1;
                address  10.10.77.22:7789;
                meta-disk /dev/sdd2[0];
        }
}
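
For reference, a minimal sketch of how such a resource gets to Primary/Primary with DRBD 8.3 (the version shipped for CentOS 5). The resource name r0 is an assumption, since the top of the config above is cut off:
Code:

# initialize metadata and bring the resource up (run on BOTH nodes)
drbdadm create-md r0
drbdadm up r0

# first-time sync only: force ONE node to Primary, overwriting the peer
# (DRBD 8.3 syntax)
drbdadm -- --overwrite-data-of-peer primary r0

# after the sync finishes, promote the second node as well
# (this is what allow-two-primaries permits)
drbdadm primary r0

# both nodes should now show Primary/Primary and UpToDate/UpToDate
cat /proc/drbd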

Cluster.conf:
Code:

<cluster name="cluster" config_version="1">

 <!-- post_join_delay: number of seconds the daemon will wait before
                        fencing any victims after a node joins the domain
      post_fail_delay: number of seconds the daemon will wait before
                        fencing any victims after a domain member fails
      clean_start    : prevent any startup fencing the daemon might do.
                        It indicates that the daemon should assume all nodes
                        are in a clean state to start. -->
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="node1" votes="1" nodeid="1">
      <multicast addr="224.0.0.1" interface="eth0"/>
      <fence>
        <!-- Handle fencing manually -->
        <method name="human">
          <device name="human" nodename="node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" votes="1" nodeid="2">
      <multicast addr="224.0.0.1" interface="eth0:0"/>
      <fence>
        <!-- Handle fencing manually -->
        <method name="human">
          <device name="human" nodename="node2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <!-- cman two-node specification; both settings belong in one <cman> element -->
  <cman expected_votes="1" two_node="1">
    <multicast addr="224.0.0.1"/>
  </cman>
  <fencedevices>
    <!-- Define manual fencing -->
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
</cluster>
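
To narrow down where cman gets stuck, a quick check is to start it and inspect membership plus the groups groupd manages (all commands ship with the CentOS 5 cman package):
Code:

service cman start

# quorum and membership as cman sees it
cman_tool status
cman_tool nodes

# the fence/dlm/gfs groups coordinated by groupd; a group stuck in a
# transitional state here usually points at the join/fence cycle
group_tool ls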

cman starts without a problem on node1, but on node2 cman freezes the system: the groupd process pegs the CPU at 100%, and node2's logs contain no other information (nothing at all)...
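
One thing worth checking, since node1 binds the cluster multicast to eth0 while node2 uses the alias eth0:0: whether the two nodes actually see each other's multicast traffic. A sketch, assuming openais is on its default port 5405:
Code:

# watch for cluster multicast on each node's physical interface
tcpdump -n -i eth0 host 224.0.0.1

# confirm which local address openais actually bound (UDP, default port 5405)
netstat -anu | grep 5405

Note also that 224.0.0.1 is the reserved all-hosts group; a site-local address (e.g. from the 239.x.x.x range) is generally a safer choice for cluster traffic.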

ZayDen 07-13-2011 01:06 AM

Quote:

Originally Posted by smbdie (Post 4405871)
cman starts without a problem on node1, but on node2 cman freezes the system: the groupd process pegs the CPU at 100%, and node2's logs contain no other information (nothing at all)...

zaydencorporation.blogspot.com/2011/02/drbd-ha-gfs2-tgt.html

