Linux - Enterprise: This forum is for all items relating to using Linux in the Enterprise.
Been needing to set up some clusters for HA - not compute, just HA at this stage - and have been thinking along these lines:
- HA across multiple rooms - so a single SAN store is out; use drbd to 'remote mirror' the disks
- heartbeat for the actual cluster failover, with at least 2 links (RS232 and LAN)
- It's all IBM hardware, so use IBM Director to kill (STONITH) the failed node if I can reach it at all
- Centralise authentication with NIS, one master and one slave
- use rsync to keep printer definitions matched on all nodes
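For the heartbeat part of that plan, a minimal Heartbeat (1.x-style) ha.cf covering two links might look roughly like this - node names, interfaces, and the serial device are placeholders, and the STONITH line depends entirely on which plugin your build ships for the IBM gear:

```
# /etc/ha.d/ha.cf - sketch only; hostnames and interfaces are assumptions
serial /dev/ttyS0        # RS232 heartbeat link
baud 19200
bcast eth1               # second heartbeat link over a dedicated LAN
keepalive 2              # heartbeat interval in seconds
deadtime 30              # declare a node dead after 30s of silence
auto_failback on
node nodea nodeb         # must match `uname -n` on each machine
# stonith_host * <plugin> <params>   # plugin name/args vary by STONITH device
```

The two independent links (serial plus LAN) are what stop a single NIC or cable failure from looking like a dead node.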
Alternative: give up the replicated storage, use the SAN as a single store, and run Red Hat Cluster Suite.
Most of these customers already have all their hardware: IBM servers running Red Hat (AS 2.1 through ES 3) with Informix, and all have an IBM SAN for shared storage.
My question is: is anyone doing something similar who might have some nuggets of info (like "nah, it didn't work for us" or "watch out for xxx")? Or has anyone got better ideas that might be more suitable?
We are using AIX 5.2 with HACMP on our two primary servers. The BIG things to watch out for are keeping printer definitions, user definitions, and program installs in sync (make sure to install new programs/updates on both machines with the same paths). It uses a heartbeat via a dedicated 1 Gb fibre link between the two machines (nothing else in between - don't add another point of failure).
This has now been running for 3 years, with the only switchover failures occurring because of the aforementioned syncing, and once because the heartbeat link ran through a switch that had a port failure.
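On the install-sync point, a quick drift check between nodes can be scripted. This is a local sketch - in real use the two package lists would come from something like `ssh nodeA 'rpm -qa | sort'` (or `lslpp` on AIX); the file contents here are made-up stand-ins:

```shell
# Sketch: compare sorted package lists from two nodes to spot drift.
# In real use: ssh nodeA 'rpm -qa | sort' > /tmp/nodeA.pkgs, likewise nodeB.
printf 'bash-3.0\ndrbd-0.7\n' > /tmp/nodeA.pkgs
printf 'bash-3.0\n' > /tmp/nodeB.pkgs
# comm -3 hides lines common to both files, leaving only the mismatches
# (comm requires sorted input):
comm -3 /tmp/nodeA.pkgs /tmp/nodeB.pkgs
```

An empty result means the nodes match; anything printed is a package present on one node but not the other.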
Good luck - HA takes time and patience to get right.
I've been working on some HA and clustering myself. Still haven't decided on the way I want to do it.
Regarding using the SAN, though: keep in mind that you need a cluster-aware filesystem - probably GFS, since you are using Red Hat. If you did want to use a SAN disk across multiple rooms, you could try exporting it from the machine connected to the SAN as iSCSI, if that's available on the versions of RH you are using. A word of caution: iSCSI support has made great leaps and bounds in the last few months, so you may run into incompatibilities and dependency problems when installing on older OSs.
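If you do go the iSCSI re-export route, with the iSCSI Enterprise Target (one common choice on Linux at the time) the export is only a couple of lines of configuration. The device path and IQN below are placeholders:

```
# /etc/ietd.conf - sketch; the SAN LUN path and the IQN are placeholders
Target iqn.2005-04.com.example:san.lun0
    Lun 0 Path=/dev/sdb,Type=fileio
```

The nodes in the other room then log in with an iSCSI initiator and see the LUN as a local block device - but remember it still needs a cluster-aware filesystem on top if more than one node mounts it.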
drbd works well with non-shared disks. I have this setup now and it works great.
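For reference, a minimal drbd resource looks roughly like this - hostnames, backing partitions, and addresses are placeholders, and the exact syntax should be checked against your drbd version (it changed between the 0.7 and 8 series):

```
resource r0 {
  protocol C;                # synchronous: a write completes only once on both nodes
  on nodea {
    device    /dev/drbd0;
    disk      /dev/sda7;     # local backing partition
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on nodeb {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

You then mount /dev/drbd0 on the primary only, and let heartbeat promote the peer and mount it there on failover.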
When it's all said and done you will most likely have done a lot of reading and experimenting. As w7hd said, it takes time and patience to get it right.
With Red Hat's Cluster Suite and Global File System (GFS) it is possible to set up diskless cluster nodes that boot the same shared-root filesystem directly from the SAN. Because all nodes have read/write access to all files on the cluster filesystem, there is no need to replicate the data: all changes are instantly valid on all nodes. The servers are reduced to metal boxes with no intelligence.
Usually the nodes are placed in different fire compartments within a data center. For critical applications it is advisable to connect different data centers with fibre and replicate the SAN, so you can build a second cluster at a remote location.
Although replication is good for many situations, a SAN-based HA solution is the more advanced choice: you are rewarded with great scalability and easy management.
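For a plain (non-shared-root) GFS setup on the SAN, the per-node steps are short. Cluster name, filesystem name, and device below are placeholders; this is a sketch, not a full Cluster Suite walkthrough:

```
# Create the GFS filesystem on the SAN LUN (run once, from one node).
# -t is <clustername>:<fsname> as configured in Cluster Suite;
# -j allocates one journal per node that will mount it.
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 3 /dev/vg0/shared

# Then on every node:
mount -t gfs /dev/vg0/shared /data
```

The lock_dlm protocol is what makes concurrent mounts safe; without a working cluster manager and fencing underneath it, the mount will (rightly) refuse or hang.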