By jeremy at 2008-05-29 12:55
Using OCFS2, A Clustering File System
Get a grip on the next-gen Oracle Cluster File System, an alternative to GFS.
Jeremy Garcia
Linux Magazine

Traditionally in Linux, if you wanted to access a file system from more than one machine, you would mount it on one machine and then export it to the others via NFS, CIFS, or something similar. This is a stable, mature, and well-documented approach, but it does have its downsides: performance is not great, there can be locking issues, there are potential security implications, and, worst of all, you have a single point of failure.
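For comparison, the traditional approach might look something like the following sketch. The export path and host names here are placeholders for illustration, not values used elsewhere in this article.

Code:
# On the NFS server, export a directory (placeholder path and client name)
# /etc/exports
/srv/shared   client1.domain.com(rw,sync,no_subtree_check)

# Re-export on the server, then mount from a client
server# exportfs -ra
client# mount -t nfs nfsserver.domain.com:/srv/shared /mnt/shared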

If your NFS server goes down, all clients immediately lose access. Luckily, there is another way to accomplish this task. A clustered file system is a file system that is mounted simultaneously on multiple servers. Implementing a clustered file system used to be a complicated and expensive proposition, typically involving a high-priced SAN, Fibre Channel HBAs and switches, and a proprietary file system.

Now you have iSCSI and your choice of open source clustered file systems available. The most popular is probably GFS, which Red Hat open sourced after acquiring Sistina. GFS now ships with both Fedora and CentOS, in addition to being a supported option on top of Red Hat Enterprise Linux, and it has been included in the mainline kernel since 2.6.19.

Another option is OCFS2, the next generation of the Oracle Cluster File System. It is an extent-based, POSIX-compliant file system. Unlike the previous release (OCFS), OCFS2 is a general-purpose file system. As of this writing, you should be aware that OCFS2 does not support mmap. While support is planned for the near future, if you need a clustering file system that supports mmap now, you will want to look into GFS. (NOTE: since this article was published, mmap support has been added.)

The OCFS2 user space tools are available for a variety of distributions, either directly from Oracle or from the distribution vendor. OCFS2 is licensed under the GPL; see the main OCFS2 project page for more information. To use OCFS2 you'll need to install the ocfs2-tools package and the appropriate kernel module package (ocfs2-`uname -r`).
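On RHEL, installing the packages downloaded from Oracle might look something like the following; the file names are placeholders, and the exact versions depend on your kernel and distribution.

Code:
# Install the user space tools and the kernel module package matching `uname -r`
# (file names below are placeholders, not exact versions)
rpm -Uvh ocfs2-tools-<version>.rpm ocfs2-`uname -r`-<version>.rpm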

The ocfs2console package is not required but is recommended for ease of use. With these installed, we're ready to set up an OCFS2 partition. Setting up your shared storage is beyond the scope of this article and should be completed before you proceed. The examples used in this article are based on a RHEL 5.1 installation.

Some commands or file locations might differ slightly on other distributions. Note that if the current RHEL kernel does not have a corresponding OCFS2 kernel module RPM available directly from Oracle, you can easily rebuild one from the official ocfs2 tarball. As with GFS, OCFS2 is now in the mainline kernel, so if you are using a newer distribution you should check to see if you already have OCFS2 support before looking for the appropriate kernel module package.
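On such a distribution, one quick way to check is to try loading the module and then look for ocfs2 in the kernel's list of supported file systems; if the grep below prints a match, the module is already in place.

Code:
# Load the ocfs2 module and confirm the kernel now recognizes the file system
modprobe ocfs2 && grep ocfs2 /proc/filesystems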

First, you’ll need to edit /etc/sysconfig/o2cb and ensure it contains this line:

Code:
O2CB_ENABLED=true
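If you prefer, the o2cb init script can also set this up interactively; it asks whether to load the driver on boot and which cluster to start, and is simply an alternative to editing the file by hand.

Code:
# Interactive alternative to editing /etc/sysconfig/o2cb directly
/etc/init.d/o2cb configure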
Next, create /etc/ocfs2/cluster.conf on each machine. The following is for a three-node cluster. For multi-homed machines, each node's name parameter should match that machine's hostname, regardless of which IP address you use.

Code:
node:
        ip_port = 7777
        ip_address = 192.168.100.1
        number = 0
        name = host1.domain.com
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 192.168.100.2
        number = 1
        name = host2.domain.com
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 192.168.100.3
        number = 2
        name = host3.domain.com
        cluster = ocfs2
cluster:
        node_count = 3
        name = ocfs2
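The cluster.conf file must be identical on every node. After editing it on one machine, you can simply copy it to the others (the host names below match the example configuration); the ocfs2console GUI can also propagate the configuration for you.

Code:
# Push the same cluster.conf to the other nodes in the example cluster
for host in host2.domain.com host3.domain.com; do
    scp /etc/ocfs2/cluster.conf root@$host:/etc/ocfs2/cluster.conf
done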
You can now run /etc/init.d/o2cb start to start the cluster services. If everything worked, the output of /etc/init.d/o2cb status should look like the following:

Quote:
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Active
You are now ready to format your OCFS2 partition.

Code:
# mkfs.ocfs2 -b 4k -C 32K -L "label" -N 4 /dev/sdxX
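Here -b sets the block size, -C the cluster size, -L the volume label used for mounting, and -N the number of node slots, which limits how many nodes can mount the volume at once. If you later need more slots, tunefs.ocfs2 can typically add them while the volume is unmounted; a brief sketch, reusing the placeholder device name from above:

Code:
# Raise the node slot count to 8 on an unmounted OCFS2 volume (placeholder device)
tunefs.ocfs2 -N 8 /dev/sdxX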
Now mount your partition on each machine.

Code:
# mount -L "label" /dir
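To verify which nodes have the volume mounted, the mounted.ocfs2 utility shipped with ocfs2-tools can scan the device; the -f option performs a full detect and lists the nodes (the device name is the same placeholder as above).

Code:
# List the cluster nodes that currently have this OCFS2 volume mounted
mounted.ocfs2 -f /dev/sdxX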
With everything working, you should add the partition to your fstab and ensure that both the o2cb and ocfs2 services are set to start at boot.

Code:
/sbin/chkconfig o2cb on
/sbin/chkconfig ocfs2 on
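A matching fstab entry might look like the line below; the _netdev option keeps the system from trying to mount the volume before networking and the cluster stack are up. The label and mount point are the same placeholders used earlier.

Code:
# /etc/fstab entry for the OCFS2 volume (placeholder label and mount point)
LABEL=label   /dir   ocfs2   _netdev,defaults   0 0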
This article has covered the basics and should be enough to get you a working OCFS2 setup. If you plan to run OCFS2 in production, you should read the FAQ, which covers more advanced topics such as resizing, quorum and fencing, limitations, and rolling upgrades.
