Linux - Enterprise: This forum is for all items relating to using Linux in the Enterprise.
Guys, any help here would be greatly appreciated. I have two RHEL 3 servers attached to our SAN; they each have their own separate space on the SAN, 30 GB each. These two servers are our PRODUCTION and TEST servers, and my question is this: today we tried to fail over these servers, meaning that if our prod server fails I would like the test server to use the prod server's space on the SAN. But when we swapped LUNs on each host, each server had issues mounting the other's SAN space. Any ideas? These two servers are identical, as is the space allocated on the SAN, and the LUNs are mounted identically on each server with the same mount points and device names. We got an error about the mount failing and the space possibly not being an ext2 partition, which it is. Should I have to mount this space differently? Any suggestions are welcome!
Thanks for the reply.
No, we are not doing this for all the root filesystems; this is actually just a separate partition for the software these servers run. In the event we lose our production server, by some chance (hardware), we would like to take the SAN partition for that server and attach it to the test server. So on the SAN we would, in theory, just move the LUNs to the appropriate host. As we tried today, the test server would not boot with the production server's SAN space attached. It claimed that the space was not ext2 and needed fsck run.
They are not in a clustered environment either; the test server really is there as a test environment to build new services and test them before moving to production. They are completely separate. Even as the SAN sees them, they are separate hosts with separate LUNs carved for SAN space.
Haven't done this on Linux except with OCFS from Oracle, which lets you actually share the filesystems between two servers at the same time (it has its own locking mechanism). However, I wouldn't recommend using that unless it's for a database (it was designed for Oracle RAC).
On HP-UX we have Veritas Volume Manager (VxVM) and do this kind of disk move frequently. It requires the disk group to be "deported" from the original server and "imported" on the target.
You should be able to do it with LVM. On HP-UX at another job we were able to do that with HP's LVM, which is similar to the one in Linux. We were even able to mount volumes read-only on the other server, so we could see them even while they were active read/write on the first server. I just had a look, and the vgexport/vgimport commands appear to serve the same function.
Of course you'd have to put the SAN volumes in an LVM Volume Group (VG). Have a look at:
for an overview. This will point you to the commands used to do that.
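The vgexport/vgimport move described above might look something like this. This is a sketch only; the VG name "vg_app", LV name "lv_app" and mount point /apps are hypothetical placeholders, and the LUN move on the SAN side happens between the two halves:

```shell
# On the server giving up the volume (hypothetical names -- substitute your own):
umount /apps                # stop using the filesystem
vgchange -a n vg_app        # deactivate the volume group
vgexport vg_app             # mark the VG exported so this host releases it

# ...move the LUN to the other host on the SAN side...

# On the server taking over:
pvscan                      # rescan physical volumes so the exported VG is seen
vgimport vg_app             # take ownership of the exported VG
vgchange -a y vg_app        # activate its logical volumes
mount /dev/vg_app/lv_app /apps
```

The point of the export/import pair is that only one host has the VG active at a time, which avoids two servers writing to the same non-cluster filesystem.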
I'm not understanding, though, why I couldn't unmount each SAN allocation from each server, then move the LUNs to the other server on the SAN and just mount the new space. Each server is basically the same; PowerPath names the devices the same, so my mount points are the same with the same names. I just get an error about an incorrect filesystem, or something telling me to run fsck. When I moved everything back, all was good. Is there specific data about the current server in that space that's not allowing me to mount it on another server?
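When a mount fails with a "not an ext2 filesystem" style error, it's worth checking what the kernel actually sees on the device before trusting the name. A quick sanity check might look like this; /dev/emcpowerc is a placeholder pseudodevice, so substitute your own:

```shell
# Is the partition table what you expect? (placeholder device name)
fdisk -l /dev/emcpowerc

# Does the partition carry an ext2 signature?
file -s /dev/emcpowerc1

# Mount with an explicit filesystem type rather than relying on autodetection:
mount -t ext2 /dev/emcpowerc1 /apps
```

If `file -s` reports data rather than an ext2 filesystem, the pseudodevice name is probably pointing at a different LUN than you think, which fits the device-reordering issue discussed below.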
Ah PowerPath - just went through this in migrating servers from one Clariion to another.
One thing to keep in mind - Linux may redo your device names in discovery order. That is to say if you have an sda, sdb, sdc and sdd then add another 4 devices there is no guarantee the original 4 will be the same sda, sdb, sdc and sdd - it may be they become sde, sdf, sdg and sdh and the new 4 become the sda, sdb, sdc and sdd. This is unlike HP-UX for example which simply creates new device entries for the added devices.
We went through a fairly involved process for moving volumes. As steps in doing that, we stopped PowerPath and also made it recreate its files by doing the following:
Rename PowerPath configuration files
(in /etc, emcp_devicesDB.dat, emcp_deviceDB.idx, powermt.custom)
NOTE: mv NOT cp
On restart it would recreate these files with the new device order.
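The rename-and-restart procedure above might be sketched as follows. The init script path is the usual one for PowerPath on RHEL 3, but verify it on your own system, and note the files are renamed, not copied, so PowerPath rebuilds them:

```shell
/etc/init.d/PowerPath stop        # stop PowerPath first

# Rename (mv, NOT cp) the configuration files so PowerPath
# recreates them in the new device order on restart:
cd /etc
mv emcp_devicesDB.dat emcp_devicesDB.dat.old
mv emcp_deviceDB.idx  emcp_deviceDB.idx.old
mv powermt.custom     powermt.custom.old

/etc/init.d/PowerPath start       # rebuilds the files from current discovery
```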
You can use powermt display dev=all to get a good view of which sd? devices are associated with which PowerPath pseudodevices.
Also, for each add we ran Qlogic's ql-scan-lun.sh utility to make the server see the new drives. You may wish to get that if you're using Qlogic fibre HBAs, or see if your vendor has an equivalent. There was another scsi_scan utility we had used in testing that worked fine on one server but gave very odd results on another. The Qlogic utility didn't have the same issues.
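If no vendor rescan script is available, the kernel's generic mechanism on 2.4-era kernels (such as RHEL 3's) is the /proc/scsi/scsi interface. This is a general technique, not the Qlogic utility mentioned above, and the host/channel/id/lun numbers are examples to replace with your own:

```shell
# Tell the SCSI midlayer about a new device at host 0, channel 0, id 1, lun 0
# (example coordinates -- substitute the values for your HBA and LUN):
echo "scsi add-single-device 0 0 1 0" > /proc/scsi/scsi

# Verify the kernel now lists the device:
cat /proc/scsi/scsi
```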
Since you're not doing exactly what we were doing I'd recommend you test this on non-critical systems first.
Last edited by MensaWater; 02-16-2007 at 11:14 AM.
Each server is running PowerPath, and our SAN environment is currently an EMC CX600, moving soon to the new CX3-40 SAN. In our initial test we powered down each server, moved the LUNs and powered back on. I believe you may be right that the devices each server sees have changed. I'll have to try again and see what each server sees on boot or remount.
So, for example, /dev/emcpowerd would be the EMC pseudodevice using the four real devices /dev/sdc, /dev/sdh, /dev/sdm and /dev/sdr. The relationship can be seen with "powermt display dev=all".
I agree with what you are saying, but there is one issue with PowerPath which I am facing. I have two servers with identical hardware running RHEL 2.1 AS. I have Oracle running on them and use an EMC Symmetrix for providing the LUNs.
I agree that Linux does not maintain /dev/sd[a-z] in order, and hence we use PowerPath for creating the pseudodevices /dev/emcpower[a-z].
Just two days ago, EMC provided me a 64 GB LUN which is visible from both servers. I rebooted the boxes and found that the new LUN had different pseudodevice names:
On server1 ---> /dev/emcpowers
On server2 ---> /dev/emcpowert
I had a careful look and found that the /etc/opt/emcpower/emcpower.conf file had an entry for emcpowers on the second server, but its vid was pointing to a device which is available on server 1. When running fdisk, it was not able to open the device.
So the issue here is that server 2 unexpectedly skipped /dev/emcpowers. It's an issue for me because I would like the pseudodevice names to be the same on both servers for the same LUN. Is there something I could do here?
You could try the removal of the PowerPath config files I mentioned in an earlier post. You'd of course stop PowerPath first, then restart it. It sounds like the server may once have had a pseudodevice that it no longer has.
We did the above for our Oracle RAC shared storage on the CX700 running RHEL 3. I'm not sure how RHEL 2.1 would treat it differently.
We were ultra-cautious in doing all this, because what we didn't want was for Oracle to start up with the wrong pseudodevices. As you can see, you can get the actual LUN information from your Clariion in the "powermt display dev=all" output. You need to make sure you are NOT autostarting the Oracle database until after you've ensured your pseudodevices and related mounts are exactly what you think they are.
If Oracle attempts to start and has the wrong devices it will likely corrupt things. At a minimum you'd want to do a cold backup of the database before doing this.
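One way to keep the database down until the devices are verified is to disable its init script before the reboot. As a sketch, assuming the conventional "dbora" service name from Oracle's installation docs (yours may differ):

```shell
chkconfig dbora off      # keep Oracle from autostarting on boot
# ...reboot, verify pseudodevices and mounts with powermt display dev=all...
service dbora start      # start the database manually once devices check out
chkconfig dbora on       # re-enable autostart afterwards if desired
```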