LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Enterprise (http://www.linuxquestions.org/questions/linux-enterprise-47/)
-   -   GFS vs CXFS vs others ?? (http://www.linuxquestions.org/questions/linux-enterprise-47/gfs-vs-cxfs-vs-others-594125/)

.G. 10-24-2007 03:04 AM

GFS vs CXFS vs others ??
 
Hello All,

Presently looking at implementing a shared/single filesystem for a bunch of our servers.

Long term, if all works out, I plan to have approximately 60 CentOS 5 boxes connected to the filesystem.

So first things first... does anyone have any recommendations for GFS or CXFS (besides, obviously, that CXFS is expensive), or perhaps another option?

Our storage environment currently consists of eight (8) Coraid units, all presented via AoE (ATA over Ethernet).

Has anyone run either GFS or CXFS over AoE? Had success? Failures?

Hope to hear soon.


Regards

elcody02 10-25-2007 03:12 AM

Note that CXFS and GFS have architectural differences which you should also take into account.

CXFS is what is called a distributed filesystem, whereas GFS is a symmetric shared-storage filesystem.

The bottom line is that CXFS and many others need nodes with a special responsibility (controlling the metadata), as any file access has to be granted by such a metadata controller, and that control traffic is normally carried over Ethernet. On the other hand, as those filesystems are most often available on different platforms, they can share data between them. And correct me if I'm wrong, but not too long ago the metadata controller for CXFS could only be hosted on an SGI machine. Perhaps that has changed.

If you are interested in such filesystems you should also look at StorNext File System, Xsan and some more which I cannot remember ;-). But all of these cost money.

If you have a homogeneous platform, a symmetric SAN filesystem like GFS or OCFS2, especially on Linux, is to be preferred. They are quite nicely integrated into the kernel, are open source, and are available and supported on most distributions. From an architectural point of view they also have some advantages, as there is no node with a special purpose (they use distributed locking), and they integrate very well into the Linux storage and clustering stack (especially GFS in RHEL). This makes life much easier. I dare say they can also be used as a root filesystem or for database usage. That is real fun.

On AoE: GFS can basically be set up on any block device. Cache coherency has to be assured, which is normally not a big deal: switch off any write-back caching on all clients. Using (C)LVM on top of the block devices is the best-practice way to run GFS.
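For what it's worth, a rough sketch of such a setup might look like the following. The device path, cluster name and journal count here are assumptions for illustration, not taken from a real installation:

```shell
# Load the AoE driver and discover the Coraid shelves (aoe-tools package)
modprobe aoe
aoe-discover

# Suppose one exported LUN appears as /dev/etherd/e0.0 (hypothetical path).
# Put clustered LVM on top of it; clvmd must be running on all nodes.
pvcreate /dev/etherd/e0.0
vgcreate -c y vg_shared /dev/etherd/e0.0
lvcreate -l 100%FREE -n lv_gfs vg_shared

# Create the GFS filesystem: DLM locking, one journal (-j) per node
# that will mount it, and ClusterName:FSName as the lock table (-t)
gfs_mkfs -p lock_dlm -t mycluster:gfs0 -j 60 /dev/vg_shared/lv_gfs

# Then on each node:
mount -t gfs /dev/vg_shared/lv_gfs /mnt/shared
```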

Hope that helps
have fun.

.G. 10-25-2007 04:34 AM

thanks for the reply.

I have spoken with SGI, and yes, the lock manager will only run on an SGI server (even though that server runs Linux!).

I started testing today and have found GFS to be running quite nicely. However, I think I have run into a major hurdle.

The Red Hat FAQ says that with DLM it will only scale to around 32 nodes at present, and I want to run approximately 60 servers. The FAQ states that you can use GULM instead if you want that many nodes, but it also states that GULM is only for RHEL 4... and we're running 5.

Any suggestions....???

elcody02 10-26-2007 03:00 AM

Quote:

Originally Posted by .G. (Post 2936103)
Thanks for the reply.

I have spoken with SGI, and yes, the lock manager will only run on an SGI server (even though that server runs Linux!).

I started testing today and have found GFS to be running quite nicely. However, I think I have run into a major hurdle.

The Red Hat FAQ says that with DLM it will only scale to around 32 nodes at present, and I want to run approximately 60 servers. The FAQ states that you can use GULM instead if you want that many nodes, but it also states that GULM is only for RHEL 4... and we're running 5.

Any suggestions....???

Yup, I knew there was something I wanted to add ;-).
I think with RHEL5 you don't have those limits any more. The only limitation I'm aware of with RHEL5 is that rgmanager supports <= 16 nodes. I think the FAQ, or that particular topic, relates to RHEL4 (at least I would suspect so ;-) ).

But yes, be aware that there are not too many installations (I've heard of some) with more than 32 nodes, neither on GFS nor on CXFS nor on any other such filesystem. I would also advise you to think about your architecture very carefully and analyse your file I/O, to see whether a SAN/cluster FS will scale out as expected with this number of nodes.

Have fun.

ccolumbu 01-23-2010 03:53 AM

Did you figure this out?
 
Quote:

Originally Posted by .G. (Post 2936103)
Thanks for the reply.

I have spoken with SGI, and yes, the lock manager will only run on an SGI server (even though that server runs Linux!).

I started testing today and have found GFS to be running quite nicely. However, I think I have run into a major hurdle.

The Red Hat FAQ says that with DLM it will only scale to around 32 nodes at present, and I want to run approximately 60 servers. The FAQ states that you can use GULM instead if you want that many nodes, but it also states that GULM is only for RHEL 4... and we're running 5.

Any suggestions....???

G,
I need to do something very similar to your original post.
If you figured out how to do it, can you post (or e-mail me) a how-to?

Thanks,
^C

StoatWblr 12-14-2010 03:57 PM

GFS - unfit for purpose. No idea on CXFS
 
GFS/GFS2 is unfit for purpose.

Drive it hard enough(*) and nodes will CRASH.

Directory scanning happens at about 1% of the speed of ext3, which means an incremental backup that might take 2-3 minutes on ext3/4 will take several HOURS on a GFS filesystem.

(*) Create a few thousand files in one directory, rename them, move them to other filesystems.
Rinse, repeat.
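The workload described above is easy to reproduce with a short script. This sketch (scaled down, with an arbitrary file count) just exercises the create/rename/move pattern; on GFS each of those directory updates has to take a cluster-wide lock, which is what makes the pattern so painful there compared to local ext3/ext4:

```python
import os
import shutil
import tempfile

def churn(src_dir, dst_dir, n_files=1000):
    """Create n_files in src_dir, rename each, then move them all to dst_dir.

    Every step rewrites a directory entry; on a clustered filesystem each
    rewrite forces lock traffic, so this loop is a crude stress test for
    metadata performance.
    """
    for i in range(n_files):
        with open(os.path.join(src_dir, f"file{i:05d}"), "w") as f:
            f.write("x")
    for i in range(n_files):
        os.rename(os.path.join(src_dir, f"file{i:05d}"),
                  os.path.join(src_dir, f"renamed{i:05d}"))
    for i in range(n_files):
        shutil.move(os.path.join(src_dir, f"renamed{i:05d}"), dst_dir)

if __name__ == "__main__":
    src = tempfile.mkdtemp()
    dst = tempfile.mkdtemp()
    churn(src, dst, n_files=100)
    print(len(os.listdir(dst)))  # 100
```

Run it against a directory on the filesystem under test (and with a much larger n_files) and compare wall-clock time against a local disk.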

These are real-world results, not from setting out to break it, and Red Hat has been jerking us around for a couple of years without fixing things.

CXFS may be better. OCFS2 may be better. I have no idea, as I haven't tried them (yet), but what I CAN say is that if you want a clustered fileserver, DON'T use GFS.

bsdfan 05-07-2013 10:46 PM

Just to share.

We have an application server setup at our production site: six nodes running GFS2, all in active/active mode. Previously we used GFS and it ran smoothly; we upgraded to GFS2 because our backup agent from Hitachi did not support GFS. GFS2 has improved a lot, and searching data on the SAN from the application is incredibly fast.

custangro 05-14-2013 06:16 PM

Quote:

Originally Posted by .G. (Post 2934607)
Hello All,

Presently looking at implementing a shared/single filesystem for a bunch of our servers.

Long term, if all works out, I plan to have approximately 60 CentOS 5 boxes connected to the filesystem.

So first things first... does anyone have any recommendations for GFS or CXFS (besides, obviously, that CXFS is expensive), or perhaps another option?

Our storage environment currently consists of eight (8) Coraid units, all presented via AoE (ATA over Ethernet).

Has anyone run either GFS or CXFS over AoE? Had success? Failures?

Hope to hear soon.


Regards

Take a look at GlusterFS

--C

chrism01 05-14-2013 09:56 PM

The OP is from 2007 ;)
I suspect he's already made a decision, although looking at later posts, GFS2 looks promising these days :)

