What I am trying to do is set up a cluster as follows:
- There are four nodes
- Nodes three and four have large amounts of storage that will be used for storing user data.
- Nodes one and two will be running a couple of network services.
I'd like nodes one and two to have a clustered filesystem just between them, to store data for the services that only they run.
One question I have: is it possible to share one filesystem between two nodes and a different filesystem between the other two nodes? The cluster seems to want to propagate storage changes across all four nodes. There is no budget to put the same storage in all four nodes.
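For reference, this is how I've been reading the "Attr" column of vgs to tell whether a volume group is clustered. As I understand it, the 6th character of the attr string is "c" for a clustered VG and "-" for a local one (the "wz--nc" string below is just an example of what vgs -o vg_name,vg_attr prints, not my real output):

```shell
# Sketch: decode the clustered bit from a vgs attr string.
# Example attr string as printed by: vgs -o vg_name,vg_attr
attr="wz--nc"

# The 6th character of vg_attr is "c" when the VG is clustered.
vg_state="local"
if [ "$(printf '%s' "$attr" | cut -c6)" = "c" ]; then
    vg_state="clustered"
fi
echo "$vg_state"
```

So on my setup I would expect any VG I create for node1/node2 to show up with that "c" bit set.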
Here is what I have done. I am using CentOS 5.5. After installing CentOS, I did this:
On all nodes:
Code:
[root@node1 ~]# yum groupinstall "Clustering"
[root@node1 ~]# yum groupinstall "Cluster Storage"
[root@node1 ~]# chkconfig ricci on
[root@node1 ~]# service ricci start
On one node:
Code:
[root@node1 ~]# luci_admin init
[root@node1 ~]# chkconfig luci on
[root@node1 ~]# service luci start
Then I created a cluster with Conga and added the four nodes to the cluster. I verified the cluster is quorate.
At this point, the hard drives look like this on node1 and node2:
Code:
[root@node1 ~]# fdisk -l
Disk /dev/sda: 73.2 GB, 73272393728 bytes
64 heads, 32 sectors/track, 69878 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         100      102384   83  Linux
/dev/sda2             101       10340    10485760   8e  Linux LVM
Code:
[root@node2 ~]# fdisk -l
Disk /dev/sda: 73.2 GB, 73272393728 bytes
64 heads, 32 sectors/track, 69878 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         100      102384   83  Linux
/dev/sda2             101       10340    10485760   8e  Linux LVM
From cylinder 10341 on, the drives are empty. Now what? I'd like to turn the empty portion of the drives into clustered storage for node1 and node2.
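To prepare for that, I created /dev/sda3 on both nodes, roughly like this (a sketch of the fdisk dialog, so destructive; the blank lines accept the defaults of 10341 for the first cylinder and the end of the disk for the last):

```shell
# Create /dev/sda3 in the free space and mark it type 8e (Linux LVM).
# DESTRUCTIVE - run only on a disk whose layout matches the fdisk output above.
fdisk /dev/sda <<'EOF'
n
p
3


t
3
8e
w
EOF

# Tell the kernel to re-read the partition table.
partprobe /dev/sda
```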
I think I am misunderstanding something here. My understanding is that I only need to set up the drive on one node, and CLVM will take care of propagating it to the rest.
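In other words, I expected something along these lines, run on node1 alone, to be enough (the names are mine; "mycluster" would have to match the cluster name in cluster.conf, and I'm guessing at the GFS options):

```shell
# Sketch of what I expected to work, on node1 only (hypothetical names).
pvcreate /dev/sda3
vgcreate -cy vg_cluster /dev/sda3        # -cy marks the VG as clustered
lvcreate -n lv_virtual_storage -L 58G vg_cluster

# Two journals for a two-node filesystem; "mycluster" is the cluster name.
gfs_mkfs -p lock_dlm -t mycluster:svcdata -j 2 /dev/vg_cluster/lv_virtual_storage
```

...with CLVM then making the PV, VG, and LV visible on the other nodes on its own.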
I have tried:
- Creating a partition /dev/sda3 on both nodes and using Conga to create a PV on node1. The PV is not created on node2.
- Creating a partition /dev/sda3 on both nodes and creating a PV and a volume group on both nodes. Then when I try to make a logical volume I get:
Code:
[root@node1 ~]# lvcreate -L 58G -n lv_virtual_storage vg_cluster
Error locking on node node2: Volume group for uuid not found:
20zlkHY2Y02ff381I185YPCSSrZ2Ti6gglNzJ10QWVQC14XOlsMPV3TuS9noMMoU
Error locking on node node3: Volume group for uuid not found:
20zlkHY2Y02ff381I185YPCSSrZ2Ti6gglNzJ10QWVQC14XOlsMPV3TuS9noMMoU
Error locking on node node4: Volume group for uuid not found:
20zlkHY2Y02ff381I185YPCSSrZ2Ti6gglNzJ10QWVQC14XOlsMPV3TuS9noMMoU
Aborting. Failed to activate new LV to wipe the start of it.
...which I think makes sense, because the cluster was supposed to propagate the PV and the volume group to the other nodes but did not. I don't think it even tries; the first time it touches another node at all is when I create a logical volume.
Any suggestions?