Linux - Software
This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I have run aground and am wondering if my error is recoverable. This is my first post here.
I have a locally connected hardware RAID controller (Adaptec 2410SA) which until yesterday was running quite happily in RAID 5 with 3 x 500 GB drives and a total capacity of 935 GB. The array reports to the OS (FC5) as /dev/sdd.
Yesterday I extended the array by adding a fourth drive, increasing the total capacity to 1.36 TB. Now, perhaps unsurprisingly in hindsight, LVM won't mount the volume. The RAID device reports itself as being in an optimal state at 1.36 TB.
When extending the array, I thought Linux would still see a 935 GB partition, leaving ~430 GB of spare capacity on the array from which I could create a second logical device in the RAID controller and add it to the LVM volume group.
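For reference, the capacities involved are consistent with RAID 5 arithmetic, where one drive's worth of space is consumed by parity. A quick sketch, assuming "500G" means 5 x 10^11 bytes (the small difference from the 931.45 GiB fdisk reports below comes down to actual drive geometry):

```shell
# RAID 5 usable capacity is (N - 1) * drive_size.
# Drive size assumed to be 500 GB = 500,000,000,000 bytes.
drive_bytes=500000000000
for n in 3 4; do
    usable=$(( (n - 1) * drive_bytes ))
    gib=$(awk -v b="$usable" 'BEGIN { printf "%.2f", b / 1024^3 }')
    echo "$n drives: $usable bytes (~$gib GiB)"
done
```

With 3 drives this gives ~931 GiB (the old array), and with 4 drives ~1397 GiB, i.e. about 1.36 TiB, matching what the controller now reports.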
My question: can I recover this LVM volume group, or have I lost the data and need to start again (groan)? The volume group is NewVolGroup, the logical volume is /dev/NewVolGroup/NewLV, and the physical RAID device is /dev/sdd.
Some detail follows. I am out of my depth here, so any thoughts are welcome. Tom
[root@syd001 ~]# mount /dev/NewVolGroup/NewLV /newdir
mount: you must specify the filesystem type
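As I understand it, that error means mount probed the volume and could not find a recognisable filesystem signature, so it refused to guess a type. A tiny self-contained illustration of how that probing works, forging only the ext2 magic number into a scratch file rather than touching a real device (file name and offsets here are just for demonstration):

```shell
# mount/blkid read the first blocks of a device looking for a superblock
# signature. For ext2 that magic is 0xEF53, little-endian, at byte 1080.
img=$(mktemp)
printf '\x53\xef' | dd of="$img" bs=1 seek=1080 conv=notrunc status=none
file -s "$img"    # should report ext2 filesystem data from the magic alone
rm -f "$img"
```

On the real system the equivalent check would be `file -s /dev/NewVolGroup/NewLV` or `blkid /dev/NewVolGroup/NewLV`, which I'd expect to come back empty given the error above.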
[root@syd001 ~]# fdisk /dev/sdd
The number of cylinders for this disk is set to 121593.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdd: 1000.1 GB, 1000144371712 bytes
255 heads, 63 sectors/track, 121593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help):
[root@syd001 ~]# pvscan
/dev/dm-2: read failed after 0 of 4096 at 1000135917568: Input/output error
/dev/dm-2: read failed after 0 of 4096 at 0: Input/output error
PV /dev/sdd VG NewVolGroup lvm2 [931.45 GB / 0 free]
PV /dev/hda2 VG VolGroup00 lvm2 [57.16 GB / 32.00 MB free]
Total: 2 [988.61 GB] / in use: 2 [988.61 GB] / in no VG: 0 [0 ]
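For completeness, the recovery path I'm hoping is possible, assuming the LVM metadata on /dev/sdd is intact and only the underlying device size changed, would look something like the sketch below. This is an echo-only dry run of commands I have not yet dared to execute:

```shell
#!/bin/sh
# Echo-only dry run of a possible recovery sequence; assumes the LVM
# metadata on /dev/sdd survived the array expansion. To execute for real,
# change the body of run() to: run() { "$@"; }  -- and back up first.
run() { echo "+ $*"; }

run pvresize /dev/sdd                    # grow the PV into the new space
run vgdisplay NewVolGroup                # VG should now show free extents
run fsck -n /dev/NewVolGroup/NewLV      # read-only filesystem check first
run mount /dev/NewVolGroup/NewLV /newdir
```

Does this look like a sane approach, or is the pvscan I/O error above a sign that the metadata is already gone?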