/dev/sdb is the device, not the partition. Once you create a partition you may want to look at formatting it as GFS2 if you want both servers to have write access.
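A rough sketch of what that might look like, assuming the SAN LUN really is /dev/sdb and two nodes will mount it; the cluster name "mycluster", the filesystem name "san1", and the mount point are placeholders:

fdisk /dev/sdb                                            # create a partition, e.g. /dev/sdb1
mkfs.gfs2 -j 2 -p lock_dlm -t mycluster:san1 /dev/sdb1    # one journal per node that will mount it
mount -t gfs2 /dev/sdb1 /database                         # on each node, once cluster locking is running

GFS2 only behaves as a shared filesystem if the locking layer (lock_dlm) is actually up on both nodes.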
In my case I don't want to install clustering. I just want to connect those two RHEL 5 servers to the SAN. Mainly, I want to create a partition on the SAN device and have it mounted on both RHEL 5 servers. I'm using the SAN for a database, and that database needs write access from both RHEL servers.
For example: if I write a file from the first RHEL machine into the SAN's database, that file should be visible to the other RHEL machine as well.
Actually I'm a beginner, so could you please give me a step-by-step how-to guide? This implementation is a bit critical and I don't have a chance to do a trial run.
You need to make sure you have an HBA installed on the Red Hat servers. If the HBA is there you should see the device under /proc/scsi/... Qlogic HBAs are listed as /proc/scsi/ql***.
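A quick way to check, as a sketch (the driver directory name varies; qla2xxx is just an example for a QLogic card):

ls /proc/scsi/            # look for a directory named after your HBA driver, e.g. qla2xxx
cat /proc/scsi/scsi       # lists the SCSI devices the kernel sees, including SAN LUNs
ls /sys/class/fc_host/    # on 2.6 kernels, fibre channel HBAs also show up here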
Once you have that done and you have zoned the server properly on the fabric you should be able to run system-config-lvm and see the uninitialized disk. You can then initialize the disk, create a partition, and set it to mount.
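If you would rather do this from the command line than with system-config-lvm, a minimal sketch might be (assuming the LUN appears as /dev/sdb; the volume group and logical volume names are placeholders):

pvcreate /dev/sdb                      # initialize the disk for LVM
vgcreate sanvg /dev/sdb                # create a volume group on it
lvcreate -l 100%FREE -n data sanvg     # carve out a logical volume
# then make a filesystem on /dev/sanvg/data and add an /etc/fstab entry to mount it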
It is not recommended to have 2 servers mount the same volume without clustering (GFS). You could wind up corrupting the files as nothing will be controlling lock access to the files.
@luke: If this is critical (as in your job is on the line), and you're a beginner, and you have no testing environment, I recommend that you have a frank talk with your boss about hiring a contractor to perform the install / configuration.
You are going to need GFS (or some other clustering file system), or else you'll run into data integrity issues.
Okay, thanks for all the suggestions. I installed GFS and mounted a partition on the SAN as /database. This partition is visible from both RHEL servers.
But when I create a folder on one RHEL server, I am unable to see it from the other machine; it does not exist for the other server.
What would be the reason? Please help.
This is the command I issued to make the filesystem on /dev/volg/data: mkfs.gfs -j 4 -p lock_gulm -t zone:data /dev/volg/data
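For reference, those options break down roughly as follows (this is general GFS behaviour, not specific to this setup):

# -j 4            four journals, one per node that will mount the filesystem
# -p lock_gulm    lock protocol; lock_gulm needs a running GULM lock server on the cluster
# -t zone:data    clustername:fsname; "zone" must match the cluster name in your cluster configuration
# /dev/volg/data  the logical volume being formatted

If the filesystem is not actually mounted (with its lock protocol running) on both nodes, writes on one node simply land in that node's local /database directory, which would match the symptom described above.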
For instance, suppose one machine goes DOWN; then the other one needs to come into the picture and take care of serving each and every request via the SAN database.
Apache and MySQL are running on the RHEL servers, but the MySQL database is pointed at the SAN.
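For the MySQL side, that is typically just the datadir setting in /etc/my.cnf (the path below is hypothetical). Note that only one mysqld instance should ever have the datadir open at a time, even on GFS, so in a failover setup the standby node's mysqld must stay stopped until it takes over:

[mysqld]
datadir=/database/mysql    # hypothetical path on the SAN-backed GFS mount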
Current status: via GFS it is mounted on both RHEL servers, but when I create something in the database via one server, the other server cannot view it.
Each machine is only able to view what it itself is doing in /database.
I think you have got my point, and this is not a new setup that you have never heard of.
Run "mount -a" on each server (this should mount everything) and then just "mount" on each server; you should see the same mount point (/database on /somedevice) listed on each server. If you don't see the mount for "/database" listed on each server, you do not have it mounted on both nodes. That would explain why you can see the directory under /database on one server and not the other.
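For example (the device path and output are illustrative, based on the volume group and logical volume names mentioned above):

mount -a                      # mount everything listed in /etc/fstab
mount | grep database         # on both nodes this should show something like:
# /dev/mapper/volg-data on /database type gfs (rw)

and /etc/fstab on each node would carry a matching line such as:

/dev/volg/data  /database  gfs  defaults  0 0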
At this point you might as well just install the rest of the packages for RHCS and configure the cluster software. You can have each node access the disk, and file access will be controlled by the lock mechanism you choose (DLM or whatever). You can then have the mounts controlled by the cluster manager (rgmanager, cman, etc...).
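On RHEL 5 that would look roughly like this; package and service names are from the stock Red Hat Cluster Suite channels, so adjust for your subscription, and the cluster itself still has to be defined in /etc/cluster/cluster.conf (by hand, or with system-config-cluster or luci/Conga):

yum install cman rgmanager gfs-utils lvm2-cluster   # cluster manager, resource manager, GFS tools, clustered LVM
service cman start        # on each node
service clvmd start       # on each node, if the GFS volume sits on clustered LVM
service rgmanager start   # on each node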
He mentioned he did not want to install clustering, which rules out what he was trying to do with standard filesystems, short of some type of shared filesystem layer like NFS or CIFS.
Not true. You can install a clustered, non-network filesystem without formally setting up "clustering" (resources, fencing, and the like). It might not be particularly wise in most circumstances, but it can be done.