Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I have RHEL 5.1 and I have created an LVM2 volume group on a new EMC PowerPath-connected SAN volume. All seems OK: I can create an ext3 file system on the new logical volume (/dev/VolGroup01/LogVol00) and all is fine, until I reboot the server. Upon reboot, the volume group and logical volume entries are missing from /dev, and consequently the file system fails to mount. I can then quite happily run vgscan --mknodes followed by vgchange -a y and they are back again, but any ideas why I have this problem on reboot? Any help much appreciated... I'm trying to refresh my Linux knowledge after a few years' break and tearing my hair out with this one. Hopefully it's something silly that someone will spot straight away.
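For anyone hitting the same symptoms, the recovery sequence described above looks roughly like this when run as root. The volume group and logical volume names are from the post; the mount point /data is just an illustrative assumption, not from the original thread:

```shell
# Rescan for volume groups and recreate the missing device
# nodes under /dev (the nodes vanished after the reboot).
vgscan --mknodes

# Activate all logical volumes so /dev/VolGroup01/LogVol00 reappears.
vgchange -a y

# The file system can now be mounted again; /data is only an
# example mount point, not from the original post.
mount /dev/VolGroup01/LogVol00 /data
```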
Has anyone found a solution to this problem? It happened to me last night (Sunday), 10 hours before we went live with a major production system. I rebooted the database server to make sure everything was ready for the following morning, and my SAN-attached volume groups were gone. Of course my database would not come up. I used Seanikins' methodology to bring things back, but I sure would like to find out what causes this.
Yes, we do have the correct entries in /etc/fstab. The thing is, it's not just that the volumes weren't mounted; the volume group and logical volume entries disappear from /dev. After vgscan and vgchange they come back and I am able to mount the volumes. I would like to find out why they go away in the first place.
We had a NAS. We created an LVM volume on that disk and added an entry to fstab as well; after a reboot it was not mounted, but we could scan and mount it later.
The problem is that when the system does the automatic mount (from the fstab entry), the connection to the NAS is not ready yet. Only once the whole system is up is the connection ready, which is why we can scan and mount later.
The solution is to add a command to mount that disk in rc.local (double-check the connection before you mount)...
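A minimal sketch of that rc.local approach, under stated assumptions: the hostname nas-server, the mount point /data, and the retry timing are all placeholders I made up, not details from the thread:

```shell
#!/bin/sh
# /etc/rc.local - runs at the end of the normal boot sequence,
# by which time the network (and the NAS connection) should be up.

# Double-check the connection before mounting: wait until the NAS
# answers, up to about 60 seconds. "nas-server" is a placeholder
# hostname, not from the original post.
for i in 1 2 3 4 5 6 7 8 9 10 11 12; do
    ping -c 1 -W 2 nas-server >/dev/null 2>&1 && break
    sleep 5
done

# Recreate the device nodes, activate the volume group, then mount.
vgscan --mknodes
vgchange -a y
mount /dev/VolGroup01/LogVol00 /data
```

As an alternative to rc.local, network-backed file systems can be given the _netdev option in their fstab entry, which tells the boot scripts to defer that mount until networking is up and addresses the same race.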
I just had this occur myself. It turned out to be a corrupted entry in /etc/lvm/lvm.conf: prior to the reboot, I had modified the 'filter' parameter, and about a day later, upon reboot, the exact same issue began to occur.
I copied my original lvm.conf back into place, and after subsequent reboots my VGs remained persistent.
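For reference, the 'filter' setting in /etc/lvm/lvm.conf controls which block devices LVM scans at boot, so a filter that accidentally excludes the SAN path (for instance the EMC PowerPath emcpower devices) would produce exactly these symptoms. The patterns below are an illustrative sketch, not the poster's actual configuration:

```
# /etc/lvm/lvm.conf (excerpt)
devices {
    # Accept PowerPath pseudo-devices and the first local disk,
    # reject everything else. A mistake here (for example,
    # omitting the emcpower pattern) hides the SAN volume
    # group from the boot-time scan.
    filter = [ "a|/dev/emcpower.*|", "a|/dev/sda.*|", "r|.*|" ]
}
```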