Linux - Enterprise
This forum is for all items relating to using Linux in the Enterprise.
I have one question that is driving me nuts.
I've installed RHEL 3 with Cluster Services on two Dell PE 1850 servers, and both servers are connected to an IBM Shark storage array through Fibre Channel HBAs.
Our storage has 5 TB of total space, carved into 20 GB disks, because we have some Unix servers connected to it as well; with this layout it is easy to add and remove disks for each server as needed.
Now I want to add a file system resource to my Linux cluster, so it can be mounted on any node if the main node goes down, but I need a filesystem with more than 50 GB of free space. My idea was to take three 20 GB disks on one server, create a physical volume on each, build a volume group from them, and put a file system on top. Then, if I need to unmount this filesystem on one node and mount it on another, all I have to do is export the VG on one node and import it on the other. I've tested it manually, and it worked fine, just the way I wanted.
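Roughly, the build-out I'm describing would look like the following sketch (the device names /dev/sdb, /dev/sdc, /dev/sdd and the volume/mount names are placeholders for my SAN LUNs, and exact flags may differ between the LVM1 shipped with RHEL 3 and LVM2):

```shell
# On the primary node: combine three 20 GB SAN LUNs into one ~60 GB volume group
pvcreate /dev/sdb /dev/sdc /dev/sdd          # initialize each LUN as a physical volume
vgcreate datavg /dev/sdb /dev/sdc /dev/sdd   # build the volume group from them
lvcreate -L 55G -n datalv datavg             # carve out a logical volume
mke2fs -j /dev/datavg/datalv                 # ext3 filesystem (the -j adds the journal)
mount /dev/datavg/datalv /data
```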
The big problem is that the people who installed my Linux servers and cluster services told me "it's not supported, and you shouldn't do that". Please note that I'm not trying to do concurrent access on this FS; for that I know I would need GFS installed and configured.
Well, I've been doing this kind of thing with HACMP on IBM AIX for at least ten years, and now someone tells me it's not supported on Linux. Does anyone know if this is true? I'm finding it hard to believe. I've already opened a support request on RHN, but I haven't received any answer.
"not supported" doesn't mean "won't work". It is vendor speak for "Try anything you like but don't call us if it blows up in your face". Over the years I've seen many scenarios where we had to do "unsupported" configs just to get what we need.
You might want to have a look at Oracle's OCFS (Oracle Cluster File System), which does work on RHEL 3 and 4 (I think 2 as well, but I wouldn't swear to it). We've been using it here for about a year. I just attended a presentation about OCFS and ASM which, if I understood it correctly, said you can use OCFS even if you're not using the Oracle DB (we are using RAC).
Hmm, very nice indeed. This should solve some other problems I have with other installations.
I'll install a Lotus Domino server on this file system, and I don't need it to be available to all nodes at the same time. I think OCFS can help, but I'm afraid it may cause some confusion among the system administrators, who might try to bring up a Domino server that is already running on another node. But anyway, that is a procedural problem, not a technical one...
I've tried the LVM-over-shared-storage thing, and it failed miserably.
At some point during use, the file system simply became corrupt in a way that was impossible to recover.
So, if anyone has had a different experience, please tell me.
LVM on shared storage is possible, but not in a way that lets both nodes write at the same time. You can just vgimport the storage on the other node at the time of failover. I haven't done this with Linux LVM, but I have done it with HP-UX LVM; they use essentially the same commands and procedures, so they are very similar. With vgimport you do NOT vgcreate/lvcreate on the secondary node -- it was likely doing those things that corrupted it.
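A hedged sketch of the failover sequence I mean, assuming the VG/LV names from earlier in the thread (datavg/datalv); note that LVM1, which RHEL 3 shipped, also expects the physical-volume list on vgimport, while LVM2 takes just the VG name:

```shell
# Clean failover of the VG -- never vgcreate/lvcreate on the standby node.
# On the active node (or after it has been fenced):
umount /data
vgchange -a n datavg          # deactivate the volume group
vgexport datavg               # mark the VG exported so no node claims it

# On the standby node:
vgimport datavg               # import the existing VG metadata (LVM1 also wants the PV list)
vgchange -a y datavg          # activate the VG
fsck -p /dev/datavg/datalv    # sanity-check the filesystem before mounting
mount /dev/datavg/datalv /data
```

The whole point is that the VG metadata is created exactly once; the standby only ever imports and activates it.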
Also, one thing that was not "supported" on HP-UX LVM but was done often at one place I worked was to vgimport the VG on a second host in read-only mode. You couldn't write to it there, but you could see it all the time. (Handy if you want to read from it on one server without impacting anything other than I/O performance on the other.)
If the above doesn't work, you may want to use OCFS despite the "confusion" to other admins, and just tell them to keep their hands off if they're not bright enough to understand it. At some point you either have to trust your coworkers (or cow-orkers, as Dilbert says), do it all yourself, or just deal with the fact that your PHB won't get rid of dead weight, so you'll occasionally have to fix their screw-ups.
A third option would be to buy a commercial product like Veritas Volume Manager (VxVM). However, conceptually VxVM is harder to deal with than LVM or OCFS, in my not-so-humble opinion.
What I noticed is that Linux LVM allows you to vgimport and mount the filesystems even if the VG is already active on the other node. When I was testing the cluster failover process, I noticed that at one point the filesystem was mounted on both servers while the application was trying to start on the second node, and that corrupted the entire volume group.
For now I'm working with "normal" disk access: I've created a logical partition on each disk and configured the cluster service to mount the file system for each area. I think this is more secure and stable, but it's a pain to keep track of space availability.
Well, I think it will all be solved when I replace my external storage and choose a better disk partitioning scheme.
mjgsantos, you will need Red Hat's Cluster Suite and GFS or some other type of clustered file system (PolyServe is a very popular yet expensive one). You might be able to use HA-Linux (Heartbeat) in a failover manner and only mount the shared drive once the primary goes down. HA-Linux isn't too bad to set up and worked well in my testing. Red Hat's GFS works well too, but it isn't the fastest thing out there.
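For the HA-Linux route, a minimal Heartbeat v1 resource line for this mount-on-failover setup might look like the sketch below; the node name, floating IP, device, mount point, and the "domino" init-script resource are all placeholders for your environment, not a tested config:

```
# /etc/ha.d/haresources -- node1 is the preferred owner of the service group.
# On failover Heartbeat takes over the IP, mounts the filesystem, then starts Domino.
node1 192.168.1.100 Filesystem::/dev/datavg/datalv::/data::ext3 domino
```

Heartbeat runs the listed resources left to right on startup/takeover and right to left on release, which is exactly the ordering you want for an IP + filesystem + application stack.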